DeepMind Claims Human-Level AI Is Now an Inevitability, AI Industry Faces ‘Openwashing’ Accusations Over Open-Source Claims, AI Pioneer Hinton Warns of Job Losses, Calls for Universal Basic Income
All this and more on this “we can see right through you” edition of Hashtag Trending. I’m your host, Jim Love, let’s get into it.
Some stories today came together to raise a pretty fundamental question. Last week’s announcements from both OpenAI and Google crossed a line in the development of AI. It’s now official. AI is not just about clever use of text and linguistic models. It doesn’t only read – it can hear and see and yes, at least appear to understand and even elicit an emotional response. And in the middle of all this, it was easy to lose sight of another development – AI is orders of magnitude smarter. Whether it’s solving linear algebra or the massive increase in the information it can bring to bear in a single conversation – we have passed some big milestones in development.
How close we are to what we call artificial general intelligence, or AGI – you can call it what you want – we don’t know. But there is something big happening. Which raises a legitimate question – how open are those who develop and run AI being with the rest of us? How – to use that awful word – “transparent” are they?
Here are the stories that jumped out at me.
Fresh on the heels of our weekend documentary-style interview on open source and AI comes a story from the New York Times noting that a heated debate is brewing in the AI world over what it truly means for an artificial intelligence model to be “open source.” Some are accusing major companies of misleadingly deploying the open source label, in a practice being called “openwashing.”
As AI systems become increasingly powerful and impactful, there are growing calls for these models to be developed as “open source” – allowing outside inspection, replication, and broad access. However, there’s no agreed-upon definition of what open source AI actually entails.
This has led to accusations that some prominent AI companies are engaging in “openwashing” – using the open source label in a disingenuous way to project a false sense of transparency and accessibility around their proprietary AI models.
Take OpenAI, the startup behind ChatGPT. Despite its name implying openness, OpenAI actually discloses very little about the training data and code underlying its flagship language model. Similarly, Meta labels its latest LLaMA models as “open source” but with significant restrictions on their use.
True open-source AI implies not just releasing source code, but also the training data, model weights, and applying an open license allowing wide replication and modification. But very few organizations meet these full criteria due to the immense computing power and curated data required.
David Gray Widder, a Cornell Tech researcher, says even the most open AI models currently available don’t enable full reproducibility or democratized access, given the prohibitive resource needs. He argues labeling any AI system as truly “open source” is misleading at best.
Groups like the Linux Foundation are attempting to create clear definitions and frameworks around varying degrees of AI openness to prevent disingenuous “openwashing” claims.
But some experts are skeptical that truly open-source AI is even achievable, given the secrecy involved and the massive resources available to only a handful of major companies and institutions.
Sources include: New York Times International
A lead researcher at DeepMind, the artificial intelligence company owned by Google, has made a bold proclamation – declaring that the quest to achieve human-level AI, known as AGI or artificial general intelligence, is essentially “game over.”
In a series of tweets responding to an article doubting whether AGI will ever be accomplished, DeepMind’s research director Dr. Nando de Freitas stated “the game is over” in the decades-long pursuit of replicating human intelligence in machines.
De Freitas was weighing in on DeepMind’s latest AI system called “Gato” – a multi-talented “generalist agent” that can complete a wide range of complex tasks like stacking blocks, writing poetry, and more. He claimed this advancement means AGI is essentially just a matter of further scaling up such systems.
Specifically, de Freitas wrote “It’s all about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities…Solving these challenges is what will deliver AGI.”
However, when asked how close Gato is to passing a true test of human-level AI like the Turing test, de Freitas acknowledged Gato is still “far” from that milestone.
The advent of AGI has long been a controversial topic, with some top researchers warning superintelligent AI could represent an existential risk to humanity if it advances in uncontrolled ways.
De Freitas acknowledged safety is “paramount” and likely the “biggest challenge” in developing AGI responsibly. Google and DeepMind are already working on preventative measures like theoretical “big red button” concepts to shut down an advanced AI if needed.
But de Freitas’ bold statements double down on DeepMind’s confidence in its roadmap toward AGI, despite open skepticism from those who argue that human-level AI may never be achievable, or is still extremely far off.
The proclamation that AGI’s creation is now just an inevitability awaiting further scale and refinement is sure to spur more intense debates over the implications of such powerful artificial intelligence.
It also makes you wonder. Last week we had the departure of Ilya Sutskever, as well as the head of superalignment at OpenAI. Superalignment is the buzzword for safety – making sure that AI systems are developed in a way that won’t harm humanity.
There was also some talk about how Sutskever, one of the people responsible for the earlier firing of Sam Altman, even though he was not publicly visible, had been working remotely as the real force behind those superalignment efforts.
Sources include: UK News
Google’s artificial intelligence division DeepMind has unveiled a powerful new AI that experts believe could transform how we develop treatments and find cures for diseases.
DeepMind’s newly upgraded AlphaFold3 represents a major AI breakthrough in visualizing and predicting how the building blocks of biology interact and bind together on a molecular level.
Experts say this “window” into biological processes could open up new frontiers in how we discover and design drugs to target diseases more precisely and effectively.
Professor Daniel Rigden at the University of Liverpool says AlphaFold3 provides unparalleled accuracy in modeling how proteins, DNA, and RNA molecules structurally interrelate and function together within cells.
This level of structural prediction was previously only achievable through extremely costly and laborious physical experiments. AlphaFold3 makes it possible through software, drastically reducing time and cost barriers.
The AI’s ability to simulate how drug compounds structurally bind to target proteins could be a game-changer for pharmaceutical research in developing more potent and targeted medicines.
Rigden says the AI system’s applications span “literally any area of biological research,” making drug discovery just one area set to be revolutionized.
Google believes AlphaFold3 could also lead to breakthroughs in understanding plant biology for improving food security, or even advancing our comprehension of how human DNA replication and repair processes function.
The company’s own Isomorphic Labs drug discovery team is already utilizing AlphaFold3 to help accelerate and enhance their drug design pipelines, including potentially targeting entirely new disease areas.
DeepMind released a database of 200 million protein structure predictions from an earlier AlphaFold version, which is already widely utilized by researchers worldwide.
With the new upgraded system’s increased scale and capabilities, experts say AlphaFold3 could fundamentally transform humanity’s ability to understand, engineer, and interface with the natural world’s molecular machinery.
Sources include: Yahoo Finance
There’s been a great deal of discussion about how big the disruption from AI will be. Some experts maintain that AI will eliminate a vast number of jobs. Others maintain that AI will be a net creator of jobs.
So how big will the impact be?
One of the founding pioneers of modern artificial intelligence is sounding the alarm about AI’s potential to displace human workers on a massive scale.
Geoffrey Hinton, the AI researcher known as the “godfather” of the neural networks that power today’s AI systems, says he is “very worried about AI taking lots of mundane jobs.”
In an interview with the BBC, the Canadian computer scientist said he directly advised officials in the UK government to establish a universal basic income program to help offset AI’s economic impact.
A universal basic income would provide recurring cash payments to all citizens, regardless of their employment status, with no restrictions on how the money is spent.
Hinton, who spent decades developing the theoretical foundations of machine learning, most recently at Google, believes a basic income safety net may be crucial in the coming years and decades. His concerns stem from AI’s increasing capabilities across industries, which threaten to automate numerous occupations.
The AI pioneer warns that without an intervention like a universal basic income, the wealth generated by AI would benefit only a small portion of society while leaving many unemployed in its wake – something he called “very bad for society.”
Hinton advocates for a cautious approach in developing artificial general intelligence, or AGI, which could pose what he views as an “extinction-level threat” to humanity in as little as 5-20 years if it progresses too quickly.
But even the biggest proponents of swiftly advancing AGI agree the economic impacts must be addressed. OpenAI CEO Sam Altman has discussed the concept of not just a universal basic income, but a “universal basic compute” – giving everyone access to large AI models to leverage as they see fit.
We are looking to bring in one or more experts on the impact of AI for our weekend interview show. If anyone in our audience has a connection, or knows of someone who can give us an objective look at this, please let us know.
Sources include: Business Insider
That’s our show for today.
We love your comments. Reach me at editorial@technewsday.ca. Show notes are at technewsday.ca or .com – take your pick.
I’m your host, Jim Love, have a terrific Tuesday.