Which LLMs Hallucinate Least?
Plus a China-US AI nuclear deal?
The AI Breakdown First Five - Tuesday November 14, 2023
Today on the First Five:
5. AI Can Detect Heart Attacks 10 Years in Advance
4. Google Suing AI Scammers
3. 45 Nations Sign AI Military Use Compact
2. Which LLMs Hallucinate the Least?
1. US and China Agree No AI in Nuclear Weapons
5. AI Can Detect Heart Attacks 10 Years in Advance
Even many AI skeptics are enthused about its potential applications in medicine, and this story is a great example why. An Oxford study suggests that AI readings of cardiac CT scans can more accurately predict heart attack risk up to a decade in advance, even before heart disease is detected.
“AI could predict heart attack risk up to 10 years in the future, finds Oxford study”
Another reason for your own private local AI.
Article:
— Brian Roemmele (@BrianRoemmele)
1:13 AM • Nov 14, 2023
4. Google Suing AI Scammers
Conventional wisdom suggests there's not much that can be done about scammers beyond platform moderation and consumer vigilance, but Google is now proactively suing unnamed parties in India and Vietnam for tricking consumers into downloading malware disguised as an app version of Google's Bard AI.
Realizing the full potential of AI requires building trust—and that includes thwarting AI scammers. Today we’re taking legal action to protect our users.
More on that in this post from @Google's Halimah DeLaine Prado: blog.google/outreach-initi…
— Kent Walker (@Kent_Walker)
11:39 PM • Nov 13, 2023
3. 45 Nations Sign AI Military Use Compact
Global militaries are racing to figure out how to apply AI to war, but 45 of them have now signed on to a set of 10 measures to mitigate risks. The most notable absence from the US-led coalition is China.
Grateful to the 45 endorsers that joined us at the @UN for our event on the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.
AI offers incredible promise to address global challenges, but it has the potential to compound threats and… twitter.com/i/web/status/1…
— Ambassador Linda Thomas-Greenfield (@USAmbUN)
1:50 AM • Nov 14, 2023
2. Which LLMs Hallucinate the Least?
Generative AI platforms are increasingly being called up to the big leagues of professional use, but hallucination remains a major problem. A new paper looks at which LLMs hallucinate the least.
This ranking matches other studies, at least in terms of rough ordering (GPT-4 has the lowest hallucination rate in every study I have seen).
BUT the actual hallucination rate for each model depends on the tasks. GPT-4 made up 18% of cites in this test: nature.com/articles/s4159…
— Ethan Mollick (@emollick)
12:45 AM • Nov 14, 2023
1. US and China Agree No AI in Nuclear Weapons
Although China won’t sign on to the wider declaration, bilateral conversations with the US are expected to produce an accord through which the nations agree to limit the use of AI in nuclear weapons control systems.
I know the bar is low but a US-China agreement to “avoid automating nuclear command and control systems” is like the bare minimum of what functioning human beings interested in collective survival should agree to.
— Matthew Pines (@matthew_pines)
8:01 PM • Nov 12, 2023
BONUS: The most used GPT so far?
1 week after OpenAI dev day:
-18,000+ conversations in Grimoire
-5 custom GPTs launched w/1k+ convos
-26,200+ across all 7 GPTs in the tavern
-$22 revenue, via tip jar (site 100% made in an early version of Grimoire)
— Nick Dobos (@NickADobos)
8:33 PM • Nov 13, 2023
Thanks for reading! How did this edition hit you?
The AI Breakdown podcast - https://pod.link/1680633614
The AI Breakdown YouTube - https://www.youtube.com/@theaibreakdown
The AI Breakdown Discord - https://bit.ly/aibreakdown