On November 15, Meta unveiled a new language model called Galactica, designed to assist scientists. But instead of being welcomed by the scientific community as Meta had hoped, the Galactica project died after three days of intense criticism.

Yesterday the company took down the public demo that it had encouraged everyone to try.
Meta's mistake – and its hubris – show once again that big tech companies have a blind spot about the serious limitations of large language models. There is a large body of research highlighting the flaws of this technology, such as its tendency to reproduce biases and present falsehoods as fact.
Galactica is a large language model for science, trained on 48 million examples of scientific articles, web pages, textbooks, lecture notes and encyclopedias.
Meta promoted its model to researchers and students. In the company's words, Galactica "can summarize academic papers, solve math problems, create Wiki articles, write scientific code, and more."
Absolutely. Galactica is little more than statistical nonsense at scale. Amusing. Dangerous. And IMHO unethical. https://t.co/15DAFJCzIb

— Grady Booch (@Grady_Booch) November 17, 2022
But the treasure turned out to be coal. Like all language models, Galactica is a mindless robot that cannot distinguish fact from fiction. Within hours, scientists began to discover its biased and erroneous results.
