The real crisis in AI safety. Hashtag Trending, Tuesday, June 4th, 2024

Only 4% of teenagers use AI tools daily according to a new study, a technical glitch halts trading in dozens of stocks on the New York Stock Exchange, AMD ups its game to compete with market leader Nvidia, and an intriguing piece asks the question – what exactly is AI safety?

These stories and more on this “isn’t it a little late to be asking that” edition of Hashtag Trending. I’m your host, Jim Love. Let’s get into it.

While generative AI has taken the world by storm, it hasn’t yet become a daily habit for most teens and young adults, according to an exclusive new survey from Common Sense Media, Harvard and Hopelab.

The survey found that just 4% of respondents aged 14-22 said they use AI tools daily or almost every day.

In fact, 41% reported never having used AI at all, with another 8% unsure what AI tools even are.

For those who do use AI, the most common uses were getting information at 53% and brainstorming ideas at 51%.

But there were some big differences across demographics, with 62% of Black youth reporting using AI for schoolwork compared to just 40% of white respondents.

Looking ahead, a 41% plurality expected AI to have both positive and negative impacts over the next decade. However, LGBTQ+ youth were significantly more likely at 28% to anticipate mostly negative impacts versus 17% of their non-LGBTQ+ peers.

But the survey suggests generative AI hasn’t fully penetrated daily life for this young demographic yet. Teens and young adults have historically been avid early adopters of new tech, so how they shape their relationship with AI could profoundly influence its trajectory.

There’s a link to the full study in the show notes.

Sources include: Common Sense Media

The New York Stock Exchange is urgently investigating a technical glitch that caused dozens of major stocks to erroneously show massive price drops of up to 99 percent earlier today.

Big names like Berkshire Hathaway, GameStop, Chipotle and Barrick Gold were temporarily halted for volatility after their listed share prices plummeted due to the technical error. Berkshire’s A shares, for example, briefly showed a 99.97% loss before trading was frozen.

Around 50 stocks in total were impacted according to the NYSE, which said the problem involved inaccurate “limit up-limit down” price bands that are designed to prevent excess volatility.

These bands limit how far a stock’s price can move from its average price over the preceding five-minute period. However, a glitch caused the bands to severely malfunction, allowing the erroneous and extreme price drops to get through before trading halts kicked in.
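
To make the mechanism concrete, here’s a minimal Python sketch of how a limit up-limit down check works. The 5% band width and the sample prices are simplifying assumptions for illustration only; the real band widths vary by stock tier and price level.

```python
# Simplified illustration of a "limit up-limit down" (LULD) check.
# The 5% band width is an assumption for illustration; actual LULD
# band widths vary by stock tier and price level.

def luld_bands(reference_price: float, band_pct: float = 0.05) -> tuple[float, float]:
    """Compute the lower and upper price bands around a reference price
    (in the real mechanism, the average trade price over the preceding
    five minutes)."""
    return reference_price * (1 - band_pct), reference_price * (1 + band_pct)

def check_quote(price: float, reference_price: float) -> str:
    """Flag a quote that falls outside the bands for a volatility halt."""
    lower, upper = luld_bands(reference_price)
    if price < lower or price > upper:
        return "HALT: price outside bands"
    return "OK: price within bands"

# Roughly the Berkshire A-share scenario: a quote showing a ~99.97% drop
# against a reference price near $627,400 (approximate figures).
print(check_quote(price=185.10, reference_price=627_400.0))    # HALT
print(check_quote(price=629_000.0, reference_price=627_400.0)) # OK
```

The catch, of course, is that a check like this is only as good as the reference prices feeding it.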

In a statement, the NYSE blamed the issue on inaccurate price data being published industry-wide, triggering the haywire limit up-limit down levels. The exchange says it has now resolved the technical problem and restarted trading for impacted stocks after a few hours of investigation.

While highly disruptive, the price plunges appear to have been merely a technical glitch rather than an actual market crash. The NYSE stated it is now reviewing potentially impacted trades to resolve any issues caused by the errant stock pricing.

Sources include: Daily Mail and The Register

AMD unveiled its latest artificial intelligence chips aimed at challenging market leader Nvidia in the lucrative AI semiconductor space. At Computex, AMD CEO Lisa Su introduced the MI325X accelerator chip set to be available in late 2024.

The company also detailed plans for an upcoming MI350 series expected in 2025 that AMD claims will deliver 35 times better performance for AI inference workloads compared to their current MI300 chips. Inference is critical to generative AI workloads.
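
For a concrete picture of what inference means, it is simply running an already-trained model to produce output, as in this minimal sketch using the Hugging Face transformers library. The small gpt2 model here is an arbitrary stand-in for illustration, not anything tied to AMD’s hardware.

```python
# Minimal illustration of inference: generating output from a model
# that has already been trained. "gpt2" is a small, freely available
# stand-in chosen purely for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("AI accelerators are", max_new_tokens=20)
print(result[0]["generated_text"])
```

Every token generated in a call like that is inference work, and it’s this kind of workload, multiplied across millions of users, that accelerators like the MI350 series are being built for.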

Additionally, AMD revealed it is working on next-generation MI400 AI chips targeted for release in 2026, based on new architecture they are calling “Next.”

I’m not going to pretend to compare the speeds and feeds of these different chips; that’s not really appropriate for a podcast. But it is clear that AMD has made substantial progress in performance and has thrown itself into both high-end and consumer-level AI chips to keep up with Nvidia.

The new product roadmap shows AMD aiming to release AI chip upgrades on a yearly basis, matching Nvidia’s own move to annual announcements.

While Nvidia still dominates the market with around 80% share, AMD isn’t going to be counted out. Even if they can’t equal Nvidia’s new Blackwell architecture, a competitive product, maybe even at a price advantage, will find customers if the explosive growth of this market continues. At least in the short term, the appetite for AI computing cycles looks insatiable.

Sources include: Reuters

The idea of making artificial intelligence “safe” is something everyone agrees is important. But increasingly, there is a real question to be answered: what do we mean by “safe”?

Axios journalist Scott Rosenberg wrote an excellent short piece that posed this question.

On one side are those focused on preventing AI from developing goals misaligned with humanity’s best interests – avoiding a catastrophic scenario where advanced AI pursues an unintended objective with disastrous consequences.

Others are looking to root out harmful biases that could lead AI systems to unfairly discriminate against certain groups based on traits like race, gender or background. But Google found out how hard it is to develop rules that do that without distorting historical realities.

The rise of powerful language models like ChatGPT has brought new debates around misinformation and hate speech risks. Major companies have implemented “guardrails” to filter out disinformation and toxic content. Others, however, condemn this as “censorship” and as ideologically “woke.”

Elon Musk wants to create unfiltered models. He thinks the most important thing is not “teaching AI to lie.” But that’s nonsensical. No one wants an AI that doesn’t tell the truth. The problem is that we no longer seem to be able to agree on what truth is.

At its core, this is a partisan rift driven by fundamentally different definitions of what “AI safety” should entail and which risks deserve prioritization.

And that lack of consensus may have consequences. This same disagreement is already tearing at the social fabric in the US and other countries. If we can’t rise above our differences and agree on what AI safety means, or at least find a way to agree on some objective standards, AI might reflect not the best of who we are, but the worst of who we might become.

What might sink us is a failure of humans to agree on what is true; frozen in indecision, we may develop no guardrails at all. And in the early days of these systems, we saw examples of how nasty an AI could be without them. Remember the system that tried to get a journalist to leave his wife? Or that threatened to ruin a journalist’s reputation? Or that told a suicidal person they were right, maybe they should end it all?

Where did the AI learn those ideas? From us. Somewhere in all that it ingested, it came up with them. You might say the AI must have learned those things from fiction, and that the problem is it can’t tell the difference between fact and fiction.

Perhaps. But if we can’t tell the difference either, how will we set guardrails? Who will set them?

The problem might not be that we can’t teach AI, but that it might actually learn from us.

I put a link to Scott’s article in the show notes.

Sources include: Axios

And that’s it for today’s show. Remember that you can get us on Apple, Spotify or wherever you get your podcasts. We’re available on YouTube in both audio and video format.

Show notes are on Tech Newsday dot com or dot ca. Take your pick.

I’m your host, Jim Love. Have a Terrific Tuesday.
