An AI Luminary Quits Google to Warn of AI Danger

Plus news from StabilityAI, Amazon and OpenAI

First, the headlines:

  • OpenAI raises $300m and comes back online in Italy 

  • StabilityAI releases open source language model StableVicuna and a text-to-image model DeepFloyd IF that can actually put words in your generative AI images. Try it.

  • Every earnings call mentions AI - and Amazon was no exception. AlexaLLM?

Why a Turing Award Winner Quit Google to Warn About AI

The AI safety question isn’t new. What’s new is:

  • GPT-4 (and other recent AI advances) having capabilities that many thought were years away

  • An all-consuming AI arms race in big tech

  • Hundreds of millions of new AI users grappling with AI safety questions for the first time

On that last point, my sense is that a lot of people just don’t know what to think. On the one hand, the arguments of the safety folks - whether they come through Bankless interviews or some other channel - are compelling, and scary. On the other, things really can’t be that bad, right?

Geoffrey Hinton seems to think they might be. Dr. Hinton is a pioneer of neural networks, a field he started working in back in 1972. In 2012, he and two students - one of whom would go on to become chief scientist at OpenAI - built a neural network that could identify objects in photos. Google soon acquired the company they formed around it. In 2018, Hinton’s work on neural networks earned him the Turing Award, often described as the Nobel Prize of computing.

Over the last year, Hinton has gotten increasingly uncomfortable with the industry he helped found. There are a few reasons why. For one, the rate of change is faster than anyone expected. “Look at how it was five years ago and how it is now. Take the difference and propagate it forward. That’s scary.”

On top of that, he sees an arms race that will be nigh impossible to slow down. Microsoft jumping out ahead of Google on AI triggered a response that, in Hinton’s estimation, undermined Google’s previous caution about releasing anything that might cause harm.

So what is Hinton worried about? Kinda all of it. Fake information flooding the internet. A massive upheaval in the jobs market. Nefarious actors purposefully using it to do bad things. And well, even worse.

“The idea that this stuff could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

What We Talked About Today

Ever wanted real words in your generative AI images? Me too. Luckily there’s a new open source competitor to Midjourney that does exactly that.

Thanks for reading! If you want more AI Breakdown:

The AI Breakdown podcast - https://pod.link/1680633614

The AI Breakdown YouTube - https://www.youtube.com/@theaibreakdown

Signing off from the future’s past - NLW