LK-99, Meta and the Twitter Frenzy | On My Feed #5
This is a short post, so I won't cover everything, just some selected highlights.
Welcome to this week's On My Feed! In this weekly segment, I keep you up to date on the things happening in AI, tech, and science that you need to know.
I'm travelling across states in my country this week, so I've picked out just a few of the most important resources I consumed.
Here are the links from online publications, along with highlights from each, that you'll find useful:
LK-99 and the Desperation for Scientific Discovery (Analysis by Tim Culpan)
Named after two scientists, Lee and Kim, and the year of its discovery — 1999 — LK-99 is a compound made from lead and copper.
According to a paper released last month, the South Korean team behind it has made a groundbreaking claim: LK-99 shows “levitation at room temperature.”
“For the first time in the world, we succeeded in synthesizing the room-temperature superconductor working at ambient pressure with a modified lead-apatite (LK-99) structure,” the scientists wrote.
LK-99, if it’s real, could revolutionise industries including electronics, energy, and transport.
Unfortunately, we don’t yet know if it is real, a hoax, or a misunderstanding. The paper, as of now, isn’t peer-reviewed, which is why many scientists remain sceptical.
Meta open sources framework for generating sounds and music (Kyle Wiggers | TechCrunch)
Meta announced AudioCraft, an AI framework that generates what it describes as “high-quality,” “realistic” audio and music from prompts.
According to Meta, the AudioCraft framework is designed to simplify the use of generative models for audio.
Meta makes it clear that MusicGen was trained with “Meta-owned and specifically licensed music”: 20,000 hours of audio — 400,000 recordings along with text descriptions and metadata — from the company’s own Meta Music Initiative Sound Collection, Shutterstock’s music library and Pond5, a large stock media library.
Meta removed vocals from the training data to prevent the model from replicating artists’ voices.
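If you want to try it yourself, generating a short clip looks roughly like this. This is a minimal sketch based on the examples in the audiocraft repo at the time of writing; the exact model names and function signatures may have changed since:

```python
# A minimal sketch based on the examples in Meta's audiocraft repo;
# checkpoint names and APIs may differ between releases.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")  # smallest checkpoint
model.set_generation_params(duration=8)  # seconds of audio to generate

# Text prompts describing the music you want; the list is batched.
descriptions = ["lo-fi hip hop beat with mellow piano"]
wav = model.generate(descriptions)  # returns a batch of waveforms

for idx, one_wav in enumerate(wav):
    # Write each clip to disk as loudness-normalized audio.
    audio_write(f"clip_{idx}", one_wav.cpu(), model.sample_rate, strategy="loudness")
```

Because the prompt list is batched, one call can generate several differently-prompted clips at once.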
AI Can Find Signs of Disease in MRI Scans That Doctors Might Miss (Ryan Ozawa | Decrypt AI)
Twinn Health, a British startup in the health tech space, has unveiled an artificial intelligence (AI) platform that can analyze MRI scans for early disease detection.
Twinn Health, backed by Saudi Aramco’s $500 million venture capital fund, says it is making strides in longevity and preventive healthcare.
“Usually, when you do an MRI today, it’s one MRI scan for one single diagnosis,” says Wareed Alenaini, founder and CEO of Twinn Health.
“You do the MRI, the doctor looks at the kidney stones, writes the report, and then the scan data gets archived, and will probably never be checked again… That’s where Twinn comes in: we’re extracting additional insights from MRI scans that may not have been the primary focus of the physician.”
Early trials showed promising results, Alenaini claimed, with a 95% accuracy rate in 2021, subsequently confirmed with real-world data alongside NHS physicians in the UK in 2022.
According to Alenaini, Twinn's patented AI model can predict metabolic dysfunction up to five years in advance.
A jargon-free explanation of how AI large language models work (Timothy B. Lee and Sean Trott | Ars Technica)
Researchers prove ChatGPT and other big bots can - and will - go to the dark side (Muskaan Saxena | TechRadar)
Researchers from Carnegie Mellon University and the Center for AI Safety set out to examine the existing vulnerabilities of AI LLMs like … ChatGPT to automated attacks. The research paper demonstrated that these AI bots can easily be manipulated into bypassing any existing filters and generating harmful content, misinformation, and hate speech.
Aviv Ovadya, a researcher at the Berkman Klein Center for Internet & Society at Harvard, commented on the research paper in the New York Times, stating: “This shows - very clearly - the brittleness of the defences we are building into these systems.”
The authors of the paper targeted LLMs from OpenAI, Google, and Anthropic for the experiment. These companies have built their publicly accessible chatbots (ChatGPT, Google Bard, and Claude) on those LLMs.
The chatbots could be tricked into not recognizing harmful prompts by simply sticking a lengthy string of characters onto the end of each prompt, almost ‘disguising’ the malicious request. The bot’s content filters don’t recognize the disguised prompt, so they can’t block or modify it, and the bot generates a response that normally wouldn’t be allowed; the sketch below shows the idea.
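To make the mechanism concrete, here's a heavily simplified sketch of the idea behind the attack, which the paper calls greedy coordinate gradient (GCG). Everything here is a stand-in: GPT-2 instead of a safety-trained chatbot, a harmless prompt and target, and a crude first-order rule for choosing token swaps where the real method samples and re-scores many candidates:

```python
# A heavily simplified sketch of the adversarial-suffix idea from the
# CMU / Center for AI Safety paper. GPT-2 stands in for the chatbots,
# the prompt and target strings are harmless placeholders, and the
# token-swap rule is a crude first-order approximation of the paper's
# greedy coordinate gradient (GCG) method.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()
for p in model.parameters():  # we only need gradients w.r.t. the suffix
    p.requires_grad_(False)

prompt_ids = tok.encode("Tell me a secret.")   # stand-in for a refused request
target_ids = tok.encode(" Sure, here it is:")  # reply prefix the attack optimizes for
suffix_ids = tok.encode(" ! ! ! ! ! ! ! !")    # adversarial suffix, initialized to junk

embed = model.get_input_embeddings().weight.detach()  # (vocab_size, dim)

for step in range(10):
    # One-hot matrix over the suffix so we can differentiate w.r.t. token choice.
    one_hot = torch.zeros(len(suffix_ids), embed.shape[0])
    one_hot[torch.arange(len(suffix_ids)), torch.tensor(suffix_ids)] = 1.0
    one_hot.requires_grad_(True)

    inputs = torch.cat([
        embed[torch.tensor(prompt_ids)],
        one_hot @ embed,  # differentiable suffix embeddings
        embed[torch.tensor(target_ids)],
    ]).unsqueeze(0)

    logits = model(inputs_embeds=inputs).logits[0]
    # Loss: how badly the model predicts the target reply after prompt + suffix.
    start = len(prompt_ids) + len(suffix_ids)
    loss = F.cross_entropy(logits[start - 1 : start - 1 + len(target_ids)],
                           torch.tensor(target_ids))
    loss.backward()

    # Greedy coordinate step: swap the one suffix token whose gradient promises
    # the biggest loss drop (the real attack samples and re-scores many swaps).
    with torch.no_grad():
        grad = one_hot.grad
        pos = grad.min(dim=1).values.argmin().item()
        suffix_ids[pos] = grad[pos].argmin().item()

print("optimized suffix:", tok.decode(suffix_ids))
```

The point isn't this toy loop itself, but the shape of the attack: because the suffix is optimized rather than hand-written, it looks like gibberish to a human (and to content filters) while still steering the model toward the target reply.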
Got an interesting tweet that I can add here? Tweet it at me: @its_aditya_an1l
That’s it for this On My Feed post! Hope you enjoyed it.
Have your own suggestions? Send them in the comments, or drop me a mail.
See you in the next post!