Meta's Omnilingual MT for 1,600 Languages
- stingraycharles - 22570 seconds ago
I find that Meta's translations are very poor compared to others, at least for relatively obscure languages, which I figured was relevant considering the article.
Google Translate is a good default, but LLMs are really good at translation, as they are better at understanding context and providing culturally appropriate translations (see the sketch below).
(I live in Cambodia where they speak Khmer)
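A minimal sketch of the kind of context-aware prompting described above, assuming the OpenAI Python client; the model name, prompt wording, and example sentence are illustrative assumptions, not anything from the article:

    # Sketch: context-aware translation via an LLM chat API.
    # Assumes `pip install openai` and an OPENAI_API_KEY in the environment;
    # the model name and prompt text are illustrative only.
    from openai import OpenAI

    client = OpenAI()

    def translate_with_context(text: str, target_lang: str, context: str) -> str:
        """Translate `text`, letting the model use `context` (audience, register,
        domain) instead of translating the sentence in isolation."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[
                {
                    "role": "system",
                    "content": (
                        f"You are a professional translator into {target_lang}. "
                        f"Context for this job: {context}. "
                        "Prefer natural, culturally appropriate phrasing over a "
                        "literal word-for-word rendering."
                    ),
                },
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content

    print(translate_with_context(
        "Please take off your shoes before entering.",
        target_lang="Khmer",
        context="sign at the entrance of a pagoda, polite register",
    ))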
- ks2048 - 10041 seconds ago
I'll be looking at this in detail. I've started a company to do similar things, https://6k.ai
I'm currently concentrating on better data gathering for low-resource languages.
When you look in detail at data like Common Crawl, finepdfs, and fineweb, (1) they are really lacking quality data sources if you know where to look, and (2) the sources they have are not processed "finely" enough (e.g. finepdfs classifies each page of a PDF as having a specific language, whereas many language-learning sources have language pairs, etc.).
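A minimal sketch of the kind of finer-grained processing that comment is pointing at: run language ID per line rather than per page, so parallel (language-pair) material does not get collapsed into a single language label. The use of `langdetect` and the thresholds here are assumptions for illustration, not how finepdfs actually works:

    # Sketch: flag pages that mix two languages (e.g. a textbook page with
    # source sentences and their translations), which a page-level language
    # classifier would label as just one language.
    # Assumes `pip install langdetect`; any line-level language-ID model
    # (e.g. fastText lid.176) could be swapped in.
    from collections import Counter
    from langdetect import detect

    def page_language_mix(page_text: str, min_chars: int = 20) -> Counter:
        """Run language ID per line and count how often each language appears."""
        counts = Counter()
        for line in page_text.splitlines():
            line = line.strip()
            if len(line) < min_chars:
                continue  # too short for reliable detection
            try:
                counts[detect(line)] += 1
            except Exception:
                continue  # langdetect raises on lines with no usable features
        return counts

    def looks_like_language_pair(page_text: str, min_share: float = 0.3) -> bool:
        """True if at least two languages each cover >= min_share of the lines."""
        counts = page_language_mix(page_text)
        total = sum(counts.values())
        if total == 0:
            return False
        major = [lang for lang, n in counts.items() if n / total >= min_share]
        return len(major) >= 2

    sample = ("The cat sleeps on the mat and dreams quietly.\n"
              "Le chat dort sur le tapis et rêve tranquillement.\n")
    print(page_language_mix(sample), looks_like_language_pair(sample))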
- djoldman - 15339 seconds ago
Just spent a long time trying to find where you can download any of these weights.
Is it open weight? If so, why isn't there just a straight link to the models?
- ks2048 - 9779 seconds ago
Meta released No Language Left Behind (NLLB) [1], I think in 2022. I wonder why this is not "NLLB 2.0"? These companies love introducing new names to confuse things.
- garyclarke27 - 15548 seconds ago
They can translate 1600 languages, but they cannot do basic text formatting; where are the paragraphs?
- psychoslave - 22407 seconds ago
That's a high count, but still a bit away from "Omni". The usual count is between 4k and 8k depending on the source. But the first 1k might be the hardest, certainly.
- ks2048 - 9705 seconds ago
Another interesting thing mentioned here is BOUQuET: Benchmark and Open-initiative for Universal Quality Evaluation in Translation.
- intended - 7975 seconds ago
Didn't research show that models get worse at translation the more languages are added? The curse of multilinguality? Lauscher 2020?
It looks like Meta found a way forward.
Reading Meta's abstract, it seems they have found ways to improve the quality of the training data, and have also built new evaluation tools?
They are also saying that OMT-LLaMA does a better job at text generation than other baseline models.
- croes - 21066 seconds ago
Off topic: since the AI craze, MS's documentation translations have had ridiculous errors, like rendering the try/catch keywords as "versuchen" and "fangen" (literal German for "try" and "catch") on German pages.
- bikeshaving - 15773 seconds ago
I'm very wary of celebrating Meta's language work when the company was credibly found to have contributed to the genocide against the Rohingya in Myanmar and, separately, to human rights abuses against Tigrayans during the conflict in northern Ethiopia. Be careful whose sins you're laundering.
https://www.amnesty.org/en/latest/news/2025/02/meta-new-poli...
https://www.amnesty.org/en/latest/news/2023/10/meta-failure-...