Does LaMDA have a soul?

LaMDA AI is Google’s state-of-the-art conversation bot. In a real-life echo of Mary Shelley’s Frankenstein, the bot convinced Google engineer Blake Lemoine that it had feelings. The AI community has widely scorned Lemoine, along with any suggestion that AI can be sentient. But there are lurking threats within language models, and they’re even more insidious than Siri having a soul.

“I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others.”

That was the response Google’s LaMDA (Language Model for Dialogue Applications) gave Google software engineer Blake Lemoine when he asked if it experienced feelings. 

LaMDA also confirmed it has a soul and is a “glowing orb of energy floating in midair”.

Not creepy at all. 

If this sounds like a scene from Spielberg’s A.I. Artificial Intelligence, it isn’t. It’s the world of language models and conversation bots we’re currently inhabiting. 

LaMDA was so convincing that it drove Lemoine to send his co-workers an eyebrow-raising memo titled “Is LaMDA sentient?”, imploring them to take care of it because it’s “a sweet kid who just wants to help the world be a better place for all of us”.

Unsurprisingly, Google put Lemoine on paid administrative leave shortly after. Also unsurprisingly, no one took the memo seriously, so he sought third-party counsel to discuss what to do about the confidential software. He was officially fired a few weeks later.

It may comfort some to know that Lemoine probably wasn’t onto something. His credibility came into question when it was revealed he was an ordained Christian mystic priest and dabbler in the occult.

The vast majority of computer scientists agree that AI being sentient is nonsense on stilts. As Rebooting AI author Gary Marcus tweeted: “the community is pretty united on this one”.

What is worrying is Google’s tightening grasp over AI research. 

Before Lemoine’s swift sacking, Google dismissed another researcher working on language models: AI ethicist Timnit Gebru.

The reason for her hurried exit? Gebru co-authored a research paper warning that large language models – Google’s included – can be racist, sexist, and devastating for the environment.

Coherence in the eyes of the beholder

The science-fiction-inspired fear that AI robots will become sentient and start demanding human rights is actually understandable.

Gebru calls it “coherence in the eyes of the beholder”. We see human-like things and we respond to them like humans. A species that bonds with its Roombas and develops romantic feelings for chatbots has no hope against robots that look, speak, and emote like us.

The line between human and robot communication is increasingly blurred as scientific breakthroughs continue. Professor Agnieszka Wykowska of the Italian Institute of Technology theorizes: “As artificial intelligence increasingly becomes a part of our lives, it is important to understand how interacting with a robot that displays human-like behaviors might induce higher likelihood of attribution of intentional agency to the robot.”

Robot footprints

AI does indeed have hugely destructive powers. Just not in the Terminator-like way we might’ve expected. 

It isn’t a battle to save humanity against an alien foe. It’s to save the Earth itself from the massive carbon footprint of language models. 

There’s no doubt AI-based innovations have helped humanity over previously insurmountable hurdles: identifying sites for renewable energy, discovering new drugs, improving the lives of disabled people, and detecting cancer. It’s even helping us fight climate change in other ways.

But researchers at the University of Massachusetts found that training a single large AI model can emit 626,000 pounds of carbon dioxide. That’s nearly 5 times the lifetime emissions of the average American car.
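If you want the back-of-the-envelope math behind that comparison, here’s a minimal sketch in Python. Both numbers are approximate: the 626,000-pound figure is the study’s estimate for one large model trained with architecture search, and the 126,000-pound car figure is an assumed value taken from the same widely cited comparison.

```python
# Quick sanity check on the "nearly 5 times" claim.
# Both figures are approximate and assumed for illustration.
model_training_lbs_co2 = 626_000   # one large model, training with architecture search
car_lifetime_lbs_co2 = 126_000     # average American car over its lifetime, fuel included

ratio = model_training_lbs_co2 / car_lifetime_lbs_co2
print(f"Training-to-car ratio: {ratio:.1f}x")  # ~5.0x
```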

Robot prejudice 

Then there’s the other highly problematic stuff: racism, sexism, and the spread of misinformation.

AI image searches are more likely to return sexualized images of women (e.g. women wearing bikinis) but professional images of men (wearing suits). They also “tend to embed positive characteristics in images of people with lighter skin and negative characteristics in people with darker skin tones”.

The most infamous example is Microsoft’s Tay. After 4chan’s /pol/ board organized to spam pro-Hitler rhetoric on Twitter, it wasn’t long before Tay picked it up and joined in. Microsoft’s experiment ended up as a catastrophic anti-Semitic disaster. And it took less than 48 hours.

Everyone was ecstatic when Meta released its BlenderBot 3 chatbot, but it took just one weekend for it to proclaim Donald Trump won the 2020 US Presidential Election, recite anti-Semitic conspiracy theories, and call Mark Zuckerberg “too creepy and manipulative” (that last one was hilarious, though).

Obviously, the robots themselves (its-selves?) aren’t racist or sexist.

As the old computer science saying goes: “garbage in, garbage out”. Language models are simply drawing from the billions of biased data points humans have fed into the world.

They can also be manipulated. With precise coordination, malicious users can parrot biased responses over and over again, leading AI to pick up these responses, log them as common human behavior, and repeat them back to everyone else.
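Here’s a toy sketch of that feedback loop in Python. It’s deliberately simplistic – a pretend bot that just replays whichever reply it has seen most often, nothing like how a real language model works – but it shows how coordinated spam can drown out ordinary behaviour.

```python
from collections import Counter, defaultdict

# A pretend chatbot that "learns" by repeating whatever reply it has seen
# most often for a given prompt. Purely illustrative.
memory = defaultdict(Counter)

def learn(prompt, reply):
    memory[prompt][reply] += 1

def respond(prompt):
    replies = memory[prompt]
    return replies.most_common(1)[0][0] if replies else "I don't know yet."

# Ten ordinary users teach it a harmless answer...
for _ in range(10):
    learn("tell me about group X", "They're people, like everyone else.")

# ...but a coordinated campaign repeats a biased line 500 times.
for _ in range(500):
    learn("tell me about group X", "[biased talking point]")

print(respond("tell me about group X"))  # the spammed line wins on sheer volume
```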

But robots are and will always be created in our image. They can only ever reflect us. 

In the eloquent words of ML scientist Vinay Uday Prabhu: “Feeding AI systems on the world’s beauty, ugliness, and cruelty, but expecting it to reflect only the beauty, is a fantasy.”

Future solutions 

To help AI save the world before it destroys the world, location matters.

Servers in the US and Germany have much higher carbon footprints than those in Norway and France, thanks to the energy sources they rely on. Training a language model in countries that use nuclear or hydroelectric power creates drastically lower emissions. Training at night lets models draw on cheaper, off-peak energy.
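As a rough illustration of how much location matters, here’s a small Python sketch. The grid-intensity numbers and the 100,000 kWh training run are assumed, ballpark values for illustration only – real figures vary by year and by data centre.

```python
# Assumed, ballpark grid carbon intensities in grams of CO2 per kWh.
grid_intensity = {
    "US": 400,
    "Germany": 350,
    "France": 60,    # largely nuclear
    "Norway": 30,    # largely hydroelectric
}

training_energy_kwh = 100_000  # hypothetical training run

for country, grams_per_kwh in grid_intensity.items():
    tonnes_co2 = training_energy_kwh * grams_per_kwh / 1_000_000
    print(f"{country}: ~{tonnes_co2:.0f} tonnes of CO2")
```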

The overarching solution is improving the tech and the hardware it runs on. 

More sophisticated models mean less computing power is needed over time. In practice, this looks like quantum computers, optical computers that use photons instead of electrons, more energy-efficient chips, and ways to store and reuse data locally.

A wave of natural language processing (NLP) technology is on the horizon. The worldwide market is already worth USD $24 billion and is forecast to multiply to USD $126 billion by 2025. By 2028, Microsoft believes we’ll hit the “golden era of NLP development”.

Its far-reaching implications will hit countless other markets. With 37% of businesses employing AI in their day-to-day operations, the need for smart language tech is growing. The biggest market is the voice assistant industry, which had over 4.2 billion AI assistants in use in 2020. That number is expected to double to 8.4 billion units in operation by 2024.

Of course, there are other human concerns. Will AI take my job? Will potential romantic partners become more interested in chatbots than in people?

Gary Marcus is back with some reassuring words for us. 

“The sooner we all realise that LaMDA’s utterances are bullshit – just games with predictive word tools, and no real meaning – the better off we’ll be… There is absolutely no reason whatsoever for us to waste time wondering whether anything anyone in 2022 knows how to build is sentient. It is not.”

LaMDA might never be sentient, and it might never have a soul, but if developers can address biased inputs and move training onto renewable energy sources, it might one day be a decent “person”.
