Meta's Galactica was meant to signal a new way for humanity to access the vast and fragmented library of scientific wisdom and knowledge. Instead, it exposed humanity's disposition toward bias, bigotry, and prejudice.
🤖 Use AI Writing Tools To Do The Heavy Lifting
Do you ever need to summarise an interesting, long-form article without going through the effort of doing it yourself?
If you do, here’s a hack you’ll find useful.
It's one that I use myself - I wrote parts of this article using Quillbot. Can you tell where?
Check out Quillbot’s Summariser Tool. There’s a free and a paid version, and you have options to adjust how the summary is presented.

w/BigTech
The Next Interface For Humans To Get Wiser!
BackStory: A couple of weeks ago, I posted a story in Wiser! about Meta’s new AI science chatbot. It was built in partnership with Papers with Code and named Galactica, a nod to the sheer scale of the language model underneath it.
According to Meta, Galactica could summarise academic papers, solve math problems, make Wiki articles, write scientific code, annotate chemicals and proteins, and more.
All you needed to do was give it a basic instruction. It was pitched as a kind of cheat code for scholars and academics: no more scouring the internet for the article or study you need for your research, no more getting stuck on a complicated equation.
All of that with just a few simple keystrokes.
The knowledge, wisdom, and intellect of Galactica were built on 48 million published scientific papers. Officially, it was described as “a large language model that can store, combine and reason about scientific knowledge.”
The Meta team behind Galactica said their language models would be better than search engines. “We believe this will be the next interface for how humans access scientific knowledge,” said the researchers.
It was meant to be a super-duper monster of a memory bank that could generate science papers, write wiki articles, create scientific code, and never need to stop for a cup of tea and a digestive.
🪐 Introducing Galactica. A large language model for science.
Can summarize academic literature, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.
Explore and get weights: https://t.co/jKEP8S7Yfl pic.twitter.com/niXmKjSlXW
— Papers with Code (@paperswithcode) November 15, 2022
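For the curious, “explore and get weights” wasn’t marketing fluff: the checkpoints were published openly, and as far as I can tell they can be loaded with the standard Hugging Face transformers library. Here’s a rough sketch of what “give it a basic instruction” looks like in practice; the checkpoint name and prompt below are my own illustrative choices, not Meta’s demo code.

# A minimal sketch, not Meta's official demo: load a small, publicly
# released Galactica checkpoint and give it a "basic instruction".
# The model name and prompt are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "facebook/galactica-125m"  # assumed still available on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

prompt = "The Transformer architecture"          # the "basic instruction"
inputs = tokenizer(prompt, return_tensors="pt")  # tokenise the prompt
outputs = model.generate(**inputs, max_new_tokens=60)  # let the model continue it
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The bigger checkpoints (Meta released them all the way up to 120 billion parameters) follow the same pattern; only the model name, and the hardware bill, change.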
This was clearly a bank of intellectual capability unachievable on a human scale.
Sounds great, right?
Trouble is, Galactica was as susceptible to human bias, prejudice and bigotry as, well, we humans are.
Proudly launched on November 15th, Galactica was shut down by the 17th.
Thank you everyone for trying the Galactica model demo. We appreciate the feedback we have received so far from the community, and have paused the demo for now. Our models are available for researchers who want to learn more about the work and reproduce results in the paper.
— Papers with Code (@paperswithcode) November 17, 2022
The trouble started literally within hours of the public launch. Users complained that the answers Galactica threw out were garbage, and some of them were offensive: homophobic, antisemitic, misogynistic and more.
If that wasn’t bad enough, Galactica also threw out misinformation and conspiracy theories, such as claims that HIV does not cause AIDS and that eating crushed glass has health benefits. Given time, I’m sure it would have suggested that drinking bleach cures Covid.
That's my primary concern. (see images) I got it to spit out numerous different formats for "the benefits of eating crushed glass." It hallucinated all kinds of positive statements, including study details, livestock trials, and chemical explanations: pic.twitter.com/jBiJUEBCdJ
— Tristan Greene 🏳🌈 (@mrgreene1977) November 17, 2022
Anyone with even a passing familiarity with comparable large language models could have anticipated this outcome (unless they worked for Meta, of course!). These types of bots have a long and disreputable history of delivering output that is biased, racist, sexist, and generally problematic, and Galactica was no exception.
The problem for the serious users of Galactica was that the information output was unreliable.
I asked #Galactica about some things I know about and I'm troubled. In all cases, it was wrong or biased but sounded right and authoritative. I think it's dangerous. Here are a few of my experiments and my analysis of my concerns. (1/9)
— Michael Black (@Michael_J_Black) November 17, 2022
The issue is that Galactica can’t discern between truth and untruth, a prerequisite for a scientific language model. People found that it made up phoney papers (sometimes attributing them to real authors) and generated wiki articles about bears in space as readily as ones about protein complexes and the speed of light.
It’s easy to detect space bear fiction, but difficult with other subjects.
You get my drift.
Here's The Thing: Galactica is a powerful tool that can generate material that reads like it was written by academics. However, this can be dangerous if used by bad actors to push their own agendas.
For example, it could be used to create false studies to support false narratives.
Additionally, the generated material could be tied back to real researchers, putting their reputations in danger. Meta did call out (in the small print) that the model has limitations and that it can "hallucinate", but it is still important to verify any advice given by the model.
Having said all of that about the three-day live trial of Galactica, for me, the oddest thing was Meta’s response.
After shutting down Galactica, Meta's Head of AI blamed it on the humans, not the machines. Apparently it’s real easy to trick a machine into saying this kind of stuff, and that’s not the fault of the machine; the real culprits are the humans who tricked it into saying it in the first place.
Seriously, Meta's point was that humans use tools every day of the week to create this kind of unpleasant material and spread it around online. You don't need a Galactica to write about the health benefits of eating crushed glass.
But, as David Mattin points out in this week’s New World Same Humans, should we ban Word from public use because some humans use it to write racist things?
Clearly the answer is no. And clearly (to me anyway), Meta are reverting to type with this line of defence: it’s always someone else’s fault. To be fair, Meta did whisper that Galactica was a development project when it was launched.
But why they decided to open it to the general public is beyond comprehension.