What the Open Source AI Insurgency Means for Google and OpenAI
And should it change how we think about AI policy?
Welcome to The AI Breakdown, covering the most interesting and important news and conversations in AI.
First, the News:
AMD stock ⬆️ on reports that they’re working on Microsoft’s AI chip project “Athena”
Slack is the latest co to go AI with…yup, SlackGPT
NYT latest AI scare: “Killer Robots Become the Military’s Tools”
White House not inviting Zuck to AI meeting looks like an intentional snub
The Most Interesting Discussion
By now I’m sure you’ve read about the leaked note from a Google researcher about how Google (and OpenAI) had “no moat” and were getting skunked by an open source insurgency.
If you haven’t read it yet, go do that now. Or just read my summary thread 👇
Google Losing to Open Source AI? "We Have No Moat, And Neither Does OpenAI"
A leaked note from a Google researcher argues that they're losing to open source. It's quite relevant to the state of AI.
A short thread 🧵
— Nathaniel Whittemore (@nlw)
9:01 PM • May 5, 2023
There were a lot of great discussions (including some disagreements) on Twitter following the leak. Many made the point that developers are the big moat for OpenAI.
The clues to why developers are so tied to OpenAI’s GPT models live in GitHub and Discord conversations happening every day (just look at any open-source AI project’s).
There is so much prompt engineering happening to improve the robustness and intelligence of these AI products… twitter.com/i/web/status/1…
— Nate Chan (@nathanwchan)
6:29 PM • May 4, 2023
Others pointed out that, while Google might not have the current AI dev moat, they certainly have moat-like assets…
Google has no moat. They don't have over 90% search traffic. They don't have everyone's emails and the most used email client. Their OS is not powering 70% of smartphones.
They will never be able to deploy LLM features into these products -- instead, people will run OSS LLMs.
— Sergey Karayev (@sergeykarayev)
10:25 PM • May 4, 2023
Then of course there is the safety discussion…
Maximally open source development of AGI is one of the worst possible paths we could take
It's like a nuclear weapon in every household, a bioweapon production facility in every high school lab, chemical weapons too cheap to meter, but somehow worse than all of these combined
— Jeffrey Ladish (@JeffLadish)
12:24 AM • May 5, 2023
One question I have: if this huge rise in open source AI development is as important a force in the space as it seems, how should that perspective be represented in important policy discussions like the one that happened at the White House yesterday? 🤔 Interesting times, no doubt.
The Next Ridiculous SciFi thing:
WTF: Mind reading is here.
Researchers invented a new #AI method to convert brain signals into video. See the results for yourself
Published in Nature yesterday: nature.com/articles/s4158…
What are the implications? Is this the biggest paper of 2023?
#CEBRA
— Andrew Kean Gao (@itsandrewgao)
9:17 PM • May 4, 2023
And Finally, What The AI Breakdown Podcast/YouTube Covered Today
Thanks for reading and have a great weekend! If you want more AI Breakdown:
The AI Breakdown podcast - https://pod.link/1680633614
The AI Breakdown YouTube - https://www.youtube.com/@theaibreakdown
Signing off from the future’s past - NLW