First “modern and powerful” open source LLM?
Key features
- Fully open model: open weights + open data + full training details including all data and training recipes
- Massively Multilingual: 1811 natively supported languages
- Compliant: Apertus is trained while respecting opt-out consent of data owners (even retrospectively), and avoiding memorization of training data
We probably won’t get better than this, but it sounds like it’s still trained on scraped data unless you explicitly opt out — including anything mirrored by third parties that don’t opt out. They can also remove data from the training material retroactively… but presumably they won’t retrain the model from scratch, so the removed data will still be baked into the weights, and the official weights will keep a potential advantage over models trained later on the pruned training data.
From the license:
Oof, so they’re basically passing data protection deletion requests on to the users and telling them all to respectfully account for them.
They also claim “open data”, but I’m having trouble finding the actual training data, only the “Training data reconstruction scripts”…
that’s the problem with deletion requests: the data isn’t in there. it can’t be, from a purely mathematical standpoint. statistically, given the amount of material that goes into training, any single full work is represented in the weights by less than one bit on average. but the model just… remakes sensitive information from scratch. it reconstructs infringing data based on patterns.
which of course highlights the big issue with data anonymization: it can’t really be done.
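A rough way to see the capacity argument above is to compare the size of the weights against the size of the corpus. The figures below are illustrative assumptions (parameter count, corpus size, bytes per token), not Apertus’s actual numbers — the exact per-work figure depends heavily on them, but the ratio is tiny under any realistic choice:

```python
# Back-of-envelope: how much of the training corpus could the weights
# possibly hold verbatim? All numbers are illustrative assumptions.

params = 8e9            # assumed parameter count
bits_per_param = 16     # bf16 weights
tokens = 15e12          # assumed pretraining corpus size
bits_per_token = 32     # ~4 bytes of raw text per token, roughly

model_bits = params * bits_per_param    # total storage in the weights
corpus_bits = tokens * bits_per_token   # total size of the training text

fraction = model_bits / corpus_bits
print(f"weights could hold at most {fraction:.4%} of the corpus verbatim")
```

Even granting the weights every bit of capacity for rote storage (which training does not do — most capacity goes to general patterns), they could retain only a vanishing fraction of any given work, which is why deletion requests against a trained model make little mechanical sense.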