Google suffers another embarrassing AI launch. Hashtag Trending for Tuesday, May 28th, 2024


London Drugs refused to pay a ransom and its data has been leaked. Another epic AI fail from Google. Meta’s chief scientist tells students that large language models aren’t worth studying, and more bad news on the risks to kids from online activity.

All this and more on this “shocking truth” edition of Hashtag Trending. I’m your host, Jim Love, let’s get into it.

London Drugs has confirmed that some of its corporate data has been leaked online by the LockBit ransomware group. This follows the Canadian pharmacy chain’s April 28th cyberattack that forced temporary store closures.

LockBit demanded a 25 million dollar ransom payment. Although there were rumours that the company had agreed to pay or was at least negotiating, in the end they did not pay the ransom.

In a statement, London Drugs said quote – “We are aware that some of these exfiltrated files have now been released…London Drugs is unwilling and unable to pay ransom to these cybercriminals.”

The leaked files, around 300 gigabytes in size, include human resources records, medical notes detailing issues like sexual assault, financial data, legal documents and more.

One cybersecurity analyst, Brett Callow, likened the data dump to kidnappers killing a hostage after ransom demands went unmet.

“This is like kidnappers killing their hostage. They’re giving up on being able to monetize the attack and are releasing the info as a warning to future victims,” Callow stated.

London Drugs maintains there is still no evidence that customer data or primary employee databases were compromised. However, impacted corporate staff will be notified and offered assistance such as credit monitoring services.

The company has also committed to a full investigation and disclosure to any affected people of precisely what data has been leaked.

The company continues working with law enforcement on the investigation.

Sources include: CHEK News and the Times Colonist

At the VivaTech conference in Paris, Yann LeCun, Chief AI Scientist at Meta, had some contrarian advice for students looking to get into the AI space. LeCun stated:

“If you are a student interested in building the next generation of AI systems, don’t work on LLMs (large language models). This is in the hands of large companies, there’s nothing you can bring to the table.”

LeCun, a pioneer in the development of convolutional neural networks, instead urged students to focus on developing next-generation AI that can overcome the limitations of large language models like GPT-3.

“Eventually all our interactions with the digital world will be mediated by AI assistants. This will be extremely dangerous for diversity of thought, for democracy, for just about everything if a small number control it all,” LeCun warned.

His comments come as the AI community is divided on the future trajectory – whether to continue scaling up transformer-based language models or to explore new architectures entirely. Some experts believe moving away from transformer models could produce breakthroughs comparable to the leap that GPT-4 represented.

However, large language models continue advancing rapidly, with models like GPT-4o demonstrating multimodal capabilities to understand video and audio natively. As Sam Altman stated, training data may no longer be a bottleneck for further scaling up these models.

Despite that, LeCun’s advice makes a lot of sense, and it also exhibits a rare candour. The large companies may very well have a virtual monopoly on these models, and that should serve not only as a warning to students looking for future avenues, it should be a warning to us all.

Sources include: Analytics India

Google’s “AI Overviews,” a new experimental AI search feature, has led to yet another publicity nightmare for the company after it provided inaccurate and nonsensical responses to some queries.

The AI-powered tool, which was supposed to summarize and provide insights from search results, has told users things like using “non-toxic glue” to help cheese stick to pizza, and that geologists recommend eating a rock per day to get a supply of necessary minerals.

The glitches have been widely mocked on social media. In one example, a reporter searching if gasoline could cook spaghetti faster was told it could be used “to make a spicy spaghetti dish” and given a recipe.

These answers, by the way, came from satirical sites like The Onion or from old posts on sites like Reddit.

A Google spokesperson acknowledged these were “isolated examples” and “aren’t representative of most people’s experiences”, stating “The vast majority of AI overviews provide high quality information.” However, the company said it has taken action where violations occurred to refine its systems.

The struggles highlight the challenges of deploying AI search capabilities that need to handle any query accurately. As Pedro Domingos, a professor of computer science stated: “We don’t know how many searches it got right, because they’re less funny to share on social media, but AI search clearly needs to be able to handle anything thrown at it.”

It’s easy to collect some errors and amplify them, and no doubt, as with earlier AI failures, there is a small army of people trying to coax a nonsensical or humourous answer to a question. But in fairness, if you asked these same questions of Perplexity.ai, a Google rival, you would indeed get accurate answers. Humans shouldn’t eat rocks. And if your cheese won’t stick, leave it out at room temperature.

The ability to distinguish between satire or humour and factual posts is something Google’s design team should at least have considered.

The problem is not that Google has failures; it just seems to always fail on simple things that should’ve been easily caught.  And I’m not sure they’ve had a major launch without one of these epic backlashes.

Part of it is how they set themselves up. We’re Google. Here’s our big deal.

Maybe, just maybe, they could have released this and said: it’s new, it’s going to make stupid mistakes, try to break it. And then thanked the people who found the problems.

Just sayin’

Sources include: The BBC

Alarming new research from the University of Edinburgh’s Childlight initiative estimates that over 300 million children globally faced online sexual exploitation and abuse in the past year alone.

The study provides the first global estimate of the crisis’s scale. Researchers found one in eight children, or 12.6%, were victims of non-consensual sexual images and videos or of abusive online interactions like grooming.

Paul Stanfield, Childlight’s CEO, painted a grim picture, stating:

“This is on a staggering scale that in the UK alone equates to forming a line of male offenders that could stretch all the way from Glasgow to London – or filling Wembley Stadium 20 times over.”

The research highlighted the prevalence of offenders admitting they would abuse children if assured secrecy – 14 million men in the U.S., 1.8 million in the UK, and 7.5% of men surveyed in Australia. And yes, this exists in Canada.

Grace Tame, a survivor who now runs a foundation on the issue, warned the crisis is “steadily worsening thanks to advancing technologies” enabling instantaneous creation and distribution of abuse material.

Stephen Kavanagh of Interpol called it a “clear and present danger” requiring a unified global response, including better investigator training and data sharing.

Scottish Minister Natalie Don stated: “Keeping children and young people safe from sexual abuse and exploitation is of the utmost importance…these are global problems which require global solutions.”

There’s no perfect solution to this, but if you haven’t had that talk with your kids that tells them they can tell you anything and all you’ll do is understand and help them – maybe it’s time.

Sources include:  The Independent

And that’s our show for today…

We love your comments. Reach me at editorial@technewsday.ca. Show notes are at technewsday.ca or .com – take your pick.

I’m your host, Jim Love. Have a terrific Tuesday.

