
Meta's Galactica was meant to signal a new way for humanity to access the vast and fragmented library of scientific wisdom and knowledge. Instead, it exposed humanity's disposition for bias, bigotry, and prejudice.


🤖 Use AI Writing Tools To Do The Heavy Lifting

Do you ever need to summarise an interesting, long-form article without going through the effort of doing it yourself?

If you do, here’s a hack you’ll find useful.

It's one that I use myself - I wrote parts of this article using Quillbot. Can you tell where?

Check out Quillbot’s Summariser Tool. There’s a free and a paid version, and you can adjust how the summary is presented.


w/BigTech


The Next Interface For Humans To Get Wiser!

BackStory: A couple of weeks ago, I posted a story in Wiser! about Meta’s new AI science chatbot. It was built together with Papers with Code, and they called it Galactica, a name meant to signify the sheer scale of the massive language model it was built on.

According to Meta, Galactica could summarise academic papers, solve math problems, make Wiki articles, write scientific code, annotate chemicals and proteins, and more.

All you needed to do was give it a basic instruction. It was pitched as a shortcut for scholars and academics: no more scouring the internet for the article or study you need for your research, and no more getting stuck on a complicated equation.

  • All of that with just a few simple keystrokes.

The knowledge, wisdom, and intellect of Galactica was built on 48 million published scientific papers. Officially, it was described as “a large language model that can store, combine and reason about scientific knowledge.”

The Meta team behind Galactica said their language models would be better than search engines. “We believe this will be the next interface for how humans access scientific knowledge,” said the researchers.

It was meant to be a super-duper monster of a memory bank that could generate science papers, write wiki articles, create scientific code, and never need to stop for a cup of tea and a digestive.

This was clearly a bank of intellectual capability that was unachievable on a human scale.

Sounds great, right?

Trouble is, Galactica was as susceptible to human bias, prejudice and bigotry as, well, we humans are.

Proudly launched on November 15th, Galactica was shut down by the 17th.

The trouble started literally within hours of the public launch. Users complained that the answers Galactica threw out were garbage. Some of them were downright offensive: homophobic, antisemitic, misogynistic, and worse.

If that wasn’t bad enough, Galactica also threw out misinformation and conspiracy theories, like the claims that HIV does not cause AIDS and that eating crushed glass has health benefits. Given time, I’m sure it would have suggested drinking bleach as a cure for Covid.

Anyone with even a passing familiarity with comparable large language models could have anticipated this outcome (unless they worked for Meta, of course!). The thing is, bots of this type have a long and disreputable history of delivering output that is biased, racist, sexist, and generally problematic, and Galactica was no exception.

The problem for serious users of Galactica was that its output was unreliable.

The issue is that Galactica can’t discern truth from untruth, a prerequisite for a scientific language model. People found that it made up phoney papers (sometimes attributing them to real authors) and generated wiki articles about bears in space as readily as ones about protein complexes and the speed of light.

Fiction about space bears is easy to spot; with other subjects, it’s much harder.

You get my drift.

Here's The Thing: Galactica is a powerful tool that can generate material that reads like it was written by academics. However, this can be dangerous if used by bad actors to push their own agendas.

For example, it could be used to create false studies to support false narratives.

Additionally, the generated material could be tied back to real researchers, putting their reputations at risk. Meta did call out (in the small print) that the model has limitations and that it can "hallucinate", but it is still important to verify anything the model tells you.

Having said all of that about Galactica’s three-day live trial, for me the oddest thing was Meta’s response.

After shutting down Galactica, Meta's Head of AI blamed it on the humans, not the machines. Apparently it’s all too easy to trick a machine into saying this kind of stuff. That wasn't the fault of the machine; the real culprits were the humans who tricked it into saying it in the first place.

Seriously, Meta's point was that humans use tools every day of the week to create this kind of unpleasant material before spreading it around online. You don't need a Galactica to write about the health benefits of eating crushed glass.

But, as David Mattin points out in this week’s New World Same Humans, should we ban Word from public use because some humans use it to write racist things?

Clearly the answer is no. And clearly (to me anyway), Meta is reverting to type with this line of defence: it’s always someone else’s fault. To be fair, Meta did whisper that Galactica was a development project when it was launched.

But why they decided to open it to the general public is beyond comprehension.

Had they never heard of Tay?


Further Reading

  • Why Meta’s latest large language model survived only three days online: Galactica was supposed to help scientists. Instead, it mindlessly spat out biased and incorrect nonsense.
  • New Week #107: Listen now (13 min) | Meta’s new generative AI science tool enrages users. Plus more news and analysis from this week.
  • What Meta’s Galactica missteps mean for GPT-4 | The AI Beat: Meta’s missteps over its Galactica demo and Stanford’s debut of its HELM benchmark followed weeks of rumors about OpenAI’s GPT-4.
  • This Bot Is the Most Dangerous Thing Meta’s Made Yet: Galactica is a new AI model that was supposed to push scientific research to new places. Instead, it’s become a manufacturer for fake research and bigoted ideas.
  • The Galactica AI model was trained on scientific knowledge – but it spat out alarmingly plausible nonsense: The story of Meta’s latest AI model shows the pitfalls of machine learning – and a disregard for potential risks.
