AI hallucinations ended in a year? Hashtag Trending, Monday April 22, 2024


Capital gains tax changes in Canada get criticized by the tech sector. Amazon drops 100,000 jobs while vastly increasing its use of robots. Tesla’s luxury Cybertruck faces a recall. AI is rapidly catching up to humans across benchmarks, AI hallucinations could be solved within a year, and was the AI incident with Air Canada’s chatbot really AI?

All this and more on the “take the money and run” edition of Hashtag Trending. I’m your host, Jim Love. Let’s get into it.

The Canadian government’s plan to increase the capital gains tax rate has tech entrepreneurs and startups worried it could make it even harder to attract talent and funding in Canada’s innovation economy.

Under the new policy, the capital gains inclusion rate rises from 50% to 66% for those with over $250,000 in annual capital gains. The change aims to generate billions in new revenue from wealthy Canadians.

But critics argue the higher taxes will disproportionately affect startups and tech firms, which often use stock options and equity to compensate employees because of limited funding.

Dr. Malik Shukayev is an economics professor at the University of Alberta:

“Stocks are really important for Canadian tech startups. They usually offer developers stock options as part of compensation since they can’t always pay top salaries.”

When those stocks rise in value and are eventually sold, employees with gains above the threshold will now see 66% of those profits included in their taxable income, up from 50%.

Shukayev says this puts domestic startups at a major disadvantage compared to their American competitors in the battle for top tech talent:

“It’s going to be harder to motivate people to take the risk and join a startup when their potential upside from stock options is taxed so heavily.”

The policy does include exemptions like allowing entrepreneurs to earn up to $2 million in capital gains tax-free over their lifetime. But many entrepreneurs aim to build companies worth far more.

Shukayev provides an example of a startup valued at $1 million that eventually sells for $100 million after becoming successful. Under the new rules, 66% of that $99 million capital gain would be added to taxable income, up from 50%.
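For anyone who wants to see the arithmetic, here is a rough sketch of how the change plays out, assuming the higher rate is the two-thirds inclusion the 66% figure refers to, that it applies only to an individual’s annual gains above $250,000, and ignoring the lifetime exemption and any other credits:

```python
# Rough illustration of the capital gains inclusion rate change described in
# this story. Simplified: ignores the lifetime entrepreneur exemption,
# provincial taxes and other credits, and assumes the higher rate applies only
# to an individual's annual gains above $250,000.

def taxable_portion(gain, new_rules=True, threshold=250_000):
    """Return the portion of a capital gain that is added to taxable income."""
    if not new_rules:
        return 0.50 * gain                     # old rules: half of the gain
    below = min(gain, threshold)
    above = max(gain - threshold, 0)
    return 0.50 * below + (2 / 3) * above      # new rules: two-thirds above $250K

gain = 99_000_000  # the $99 million example from the story
print(f"Old rules: ${taxable_portion(gain, new_rules=False):,.0f} of the gain is taxable")
print(f"New rules: ${taxable_portion(gain, new_rules=True):,.0f} of the gain is taxable")
```

Under those assumptions, roughly $66 million of that $99 million gain would count as taxable income, compared with about $49.5 million before the change.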

He argues this could severely dampen the incentive for bright innovators to take the risks necessary to build Canada’s next billion-dollar tech companies.

As Canada races to establish itself as an innovation leader, the capital gains tax increase has tech founders worried it will drive even more entrepreneurial talent to lower-tax jurisdictions.

Sources include: CTV News and LinkedIn

E-commerce giant Amazon is rapidly expanding its use of robotics across its operations, while reducing its human workforce over the past few years. The shift raises questions about potential job displacement in the future.

Amazon now has over 750,000 robots deployed to work alongside its remaining 1.5 million human employees worldwide. That’s a major increase from 520,000 robots in 2022 and just 200,000 in 2019.

Meanwhile, the company’s global workforce has shrunk by over 100,000 positions since 2021 when it employed 1.6 million people.

The robots include new advanced models designed to speed up inventory management and a bipedal robot that can move tote boxes in fulfillment centers.

Amazon says the robotics push improves efficiency, safety and delivery speeds for customers. The company argues robots take over repetitive tasks, allowing employees to shift into new skilled job roles that have emerged.

An Amazon spokesperson said, “Deploying robots has led to the creation of new skilled job categories that previously didn’t exist at Amazon.”

However, the massive scale of automation can’t help but raise concerns over potential job losses, particularly for roles that can be easily automated.

Research shows industrial robots negatively impact jobs and wages in areas where they are deployed. The trend feeds fears over technological unemployment and worsening income inequality.

While Amazon says automation has created a number of new job categories, critics question whether the quality and growth of those roles can make up for the jobs displaced as robots take over more and more tasks.

As one of the world’s largest private employers, Amazon’s robotics acceleration could foreshadow broader workforce impacts as AI and automation reshape industries.

Sources include: Yahoo News

Tesla is recalling close to 4,000 of its newly released Cybertruck electric pickups due to a potential safety defect that could cause unintended acceleration.

According to documents from U.S. safety regulators, the accelerator pedal can dislodge and become trapped, causing the trucks to inadvertently speed up and increasing the risk of a crash.

The problem stems from residual lubricant used during assembly that reduced the pedal’s ability to retain its position properly.

The recall covers nearly 3,900 Cybertrucks manufactured between last November and early April. Tesla says it is not aware of any collisions or injuries related to the pedal issue so far.

Owners will be able to get the accelerator pedal replaced or reworked by Tesla free of charge to fix the defect.

The recall is just the latest setback for Tesla, which began deliveries of its highly anticipated Cybertruck late last year after production delays, and which has since faced numerous reports and complaints about the quality of these expensive vehicles.

The criticisms come at a time when the company’s sales are softening, prompting it to cut 10% of its workforce.

Sources include: Axios

The latest AI Index report from Stanford University highlights the remarkable progress artificial intelligence has made in closing the gap with human performance.

The comprehensive study details how AI systems have already surpassed human levels in areas like image classification, reading comprehension, visual reasoning and language inference tasks.

On competition-level math problems, AI scored 84% correct in 2023 against a human benchmark of 90%, but that may understate the achievement. When you look at the level of the problems being used, it’s unlikely the average person could solve them at all.

For visual commonsense reasoning, AI achieved 82% compared to 85% for humans. Again, the score seems less remarkable until you think about what it means: an AI can not only identify a picture of a cat as a cat, for example, but can also draw conclusions from the context.

In one test example, two people are sitting on the sidewalk as someone walks past, and the AI is asked how one of these people managed to become a few dollars richer. From the context, a musical instrument near the person, the AI correctly inferred that they were busking and earned the money playing music. That’s an incredible cognitive leap, and one that would probably have stumped even the best AI model less than two years ago.

While still trailing humans on certain complex cognitive benchmarks, AI’s capability is advancing at a blistering pace. One issue now is that many tests are becoming obsolete as AI blazes past previous performance ceilings.

Researchers are scrambling to develop new, more challenging benchmarks that can properly assess where AI still lags and where humans maintain an advantage. The rapid evolution underscores AI is still a nascent field with incredible growth potential.

According to the report, there are still concerns about so-called AI hallucinations or, as the report more accurately describes them, models presenting false information as fact. The report notes this continues to be an issue, though large language models have made enormous progress in this area.

Text-to-image generation has also seen exponential gains. For anyone who has seen OpenAI’s Sora, this is no surprise, but other tools like Midjourney have achieved photorealistic results in just the last two years that would take a human artist far longer to accomplish.

There are still issues with text-to-image generation in common usage. Text within generated images still has frequent misspellings and subtle errors that some might not catch immediately. A recent supposed photo put out by Trump supporters, showing him praying in church with a halo of light around him, looked authentic until you counted his fingers and saw that he had six on each hand.

AI has clearly made real impacts on productivity and quality of work in many industries. The report cites a number of studies, but it also suggests that unregulated, or perhaps a better word might be unsupervised, AI can lead to lower-quality work.

An interesting trend in the report shows the rapid commercialization of AI.

In 2023, industry produced 51 notable machine learning models, while academia contributed only 15. There were also 21 notable models resulting from industry-academia collaborations in 2023, a new high.

One reason for this could be cost. The costs to train a model are astonishing. The report states, “according to AI Index estimates, the training costs of state-of-the-art AI models have reached unprecedented levels. For example, OpenAI’s GPT-4 used an estimated $78 million worth of compute to train, while Google’s Gemini Ultra cost $191 million for compute.”

Despite a decline in overall AI private investment last year, funding for generative AI surged, growing by more than 8 times from 2022 to reach $25.2 billion.

As AI’s capabilities expand across sectors, the Index highlights emerging regulatory efforts, impacts on the workforce, and accelerated scientific breakthroughs enabled by AI systems.

The report notes that “over the last year, the proportion of those who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%. Moreover, 52% express nervousness toward AI products and services, marking a 13 percentage point rise from 2022. In America, Pew data suggests that 52% of Americans report feeling more concerned than excited about AI, rising from 38% in 2022.”

Not surprisingly, there are an increasing number of regulations aimed at trying to control and regulate the use and development of AI.

According to the report, in 2023 there were 25 AI-related regulations, up from just one in 2016. Last year alone, the total number of AI-related regulations grew by 56%.

There’s a link to the full report in the show notes.

Sources include: AI Index Report

 

At least one AI expert believes the issue of AI hallucinations, or, more accurately stated, providing inaccurate answers, may be solved “within a year.”

Raza Habib, a former Google AI researcher and co-founder and CEO of AI startup Humanloop, expressed optimism that AI hallucinations are “obviously solvable” within a year’s time.

Habib was one of twenty executives recently invited to a closed-door meeting with OpenAI’s Sam Altman, where Altman reportedly shared OpenAI’s progress and plans for the coming year.

Habib says large language models initially demonstrate strong calibration between their confidence levels and factual accuracy. However, that calibration breaks down during fine-tuning, when the models are trained on human preferences. So in the act of making them more human-like, we also give them a propensity to give inaccurate answers.

By preserving the initial calibration, Habib believes the propensity for hallucinations can be greatly reduced. Habib says, “The knowledge is kind of there already. The thing we need to figure out is how to preserve it once we make the models more steerable.”
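To make the idea of calibration concrete, here is a minimal sketch of one common way to measure it, comparing a model’s stated confidence in its answers with how often those answers are actually correct. The data is invented for illustration, and the metric (expected calibration error) is a general technique, not something specific to Habib’s or OpenAI’s work:

```python
# Minimal sketch of expected calibration error: group answers by the model's
# stated confidence and compare average confidence with actual accuracy in
# each bucket. The example data below is made up for illustration.

def expected_calibration_error(confidences, correct, n_bins=5):
    """Average gap between stated confidence and accuracy, weighted by bucket size."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        bucket = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if not bucket:
            continue
        avg_conf = sum(confidences[i] for i in bucket) / len(bucket)
        accuracy = sum(correct[i] for i in bucket) / len(bucket)
        ece += (len(bucket) / n) * abs(avg_conf - accuracy)
    return ece

confidences = [0.95, 0.90, 0.60, 0.55, 0.30, 0.85, 0.70, 0.40]  # model's confidence per answer
correct     = [1,    1,    1,    0,    0,    1,    1,    0]     # 1 if the answer was right
print(f"Expected calibration error: {expected_calibration_error(confidences, correct):.3f}")
```

The lower that number, the more closely the model’s confidence tracks its real accuracy; Habib’s point is that this tracking is strong in the base model and degrades after preference fine-tuning.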

But Habib also argued some level of hallucination may actually be desirable, especially for creative tasks where novel ideas are valued over strict factual accuracy. He stated, “We want them to propose things that are weird and novel – and then be able to filter that in some way.”

Sources include: Forbes

And speaking of inaccuracy: at the bottom of a recent Forbes article reporting on a panel featuring Habib at a Forbes symposium, panelists weighed in on Air Canada’s recent chatbot mishap, in which inaccurate information led a customer to purchase unnecessarily expensive tickets. Habib called the situation “completely avoidable” had proper testing and guardrails been implemented by the airline.

Fellow panelist Jeremy Barnes of ServiceNow echoed that sentiment, saying companies shouldn’t rush AI deployments to customers before sufficient validation.

And in its final line, the article noted that a spokesperson for Air Canada told Forbes that “The Chatbot involved in the incident did not use AI,” and that “The technology powering it predated Generative AI capabilities (like ChatGPT and other LLMs).”

I have to say, this ticked me off. When the public wants real information about the progress of AI, this isn’t the kind of detail that should get buried at the bottom of an article as an afterthought.

Listeners may recall that we expressed reservations about whether this was an AI system. Not to be critical, but as an actual professional working in this area, something just didn’t seem right about Air Canada suddenly launching a generative AI chatbot and unleashing it in a critical area.

It turns out we may have been right. We may not have the journalistic clout of a Forbes or a New York Times, but we have asked Air Canada to clarify this. We’ll see if they get back to us.

I’m not being overly critical. We work on tight deadlines from a variety of sources, and I’m sure we make mistakes too. But we don’t correct them with a timid note at the end of a story. And if we missed it, we apologize to Forbes, but this should have been a headline, not an endline.

And that’s our show for today.  Love to hear your opinions as always. You can reach me at therealjimlove@gmail.com or our new editorial address – editorial@technewsday.ca

If you caught the show on Quantum computing this weekend, I’d love to get your reaction to it.

Our show notes are now also posted at TechNewsDay.ca or .com, take your pick, along with other stories. Check it out.

I’m your host Jim Love, have a Marvelous Monday.

 

 

 

 

 
