Will the “AI bubble” burst? Hashtag Trending for Wednesday, July 10, 2024

Share post:

Europe may be reining in big tech, but Canada and the US are struggling, despite public concern. Analysts are warning that AI may give us a repeat of the dot-com bubble. Anthropic is playing to the public's growing concern by promoting Constitutional AI.

All this and more on the “what me worry” edition of Hashtag Trending.  I’m your host Jim Love, let’s get into it.

Global streaming giants are challenging new Canadian regulations that require them to contribute to local news production. The Motion Picture Association-Canada, representing Netflix, Disney, and other major players, has filed legal challenges against rules imposed by the Canadian Radio-television and Telecommunications Commission (CRTC).

The CRTC mandated that major streaming services contribute 5% of their Canadian revenues to support the domestic broadcasting system, including news generation. This measure, set to take effect in September, is expected to raise about C$200 million annually.

In their legal filing, the streaming companies argue: ‘The CRTC acted unreasonably in compelling foreign online undertakings to contribute monies to support news production.’ They claim the decision lacks a legal basis and proper justification.

The CRTC maintains that the funding will support areas of immediate need in the broadcasting system, including local news, French-language, and Indigenous content.

This clash highlights the ongoing tension between global tech platforms and national regulators as countries seek to protect and promote local content. It also raises questions about the responsibilities of international streaming services in supporting domestic media ecosystems.

For tech and media professionals, this case could set a precedent for similar regulations in other countries, potentially reshaping the global streaming landscape.

Sources include: Reuters

One market analyst, James Ferguson of MacroStrategy Partnership, is drawing parallels to the dot-com bubble and warning investors of potential pitfalls.

The market’s focus on AI-linked stocks, particularly hardware giant Nvidia, reminds Ferguson of the dot-com era’s concentration in tech stocks. He questions Nvidia’s valuation, trading at nearly 40 times sales, given uncertain long-term prospects.

Ferguson highlights two key issues with AI: hallucinations and energy consumption. He argues, ‘If AI cannot be trusted…then AI is effectively, in my mind, useless.’

He also notes that a recent study suggests AI could consume as much power as the Netherlands by 2027, raising cost-effectiveness concerns.

There is no doubt that both of these are real concerns, although huge progress has been made on improving the accuracy of generative AI. Still, there are arguments on both sides: pessimists claim these issues are deal breakers, while optimists claim solutions can be found for both.

Ferguson notes that despite his warnings, he has no idea when the bubble will burst and until that time, there could still be huge gains made in AI stocks.

Sources include: Yahoo Finance

There are other indications that AI may face longer-term challenges, this time in a warning from Sequoia Capital.

The AI boom is driving massive tech investments, but one analyst warns of a potential disconnect between infrastructure spending and actual revenue. David Cahn of Sequoia Capital has calculated that the AI ecosystem needs to generate $600 billion in annual revenue to justify current investments – a staggering increase from his $200 billion estimate just last year.

Cahn’s analysis considers GPU costs, energy expenses, and necessary profit margins. He notes, ‘For every $1 spent on a GPU, roughly $1 needs to be spent on energy costs to run the GPU in a data center.’
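For the curious, the shape of Cahn's math can be sketched in a few lines. The figures below are illustrative assumptions for the sake of the example, not Sequoia's actual model:

```python
# Rough back-of-envelope sketch of the "$600 billion question" arithmetic.
# The GPU spend figure and margin are illustrative assumptions only.

gpu_spend = 150e9                  # assumed annual data-center GPU spend (USD)
total_cost = gpu_spend * 2         # Cahn's rule of thumb: ~$1 of energy per $1 of GPU
gross_margin = 0.5                 # assumed margin the sellers of AI services need

# End-customer revenue required to cover costs at that margin
required_revenue = total_cost / (1 - gross_margin)

print(f"Required end-customer revenue: ${required_revenue / 1e9:.0f}B")
# → Required end-customer revenue: $600B
```

The point of the exercise is less the exact number than the gap: doubling hardware spend for energy and then grossing it up for margins quickly produces a revenue target far beyond what AI products earn today.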

While tech giants report AI-driven growth – Microsoft, for example, says AI contributed 7 percentage points of Azure's growth – the question remains: will end-customer demand match the build-out?

Cahn compares GPU capex to building railroads, suggesting eventual payoff. He states, ‘There are always winners during periods of excess infrastructure building. Founders and company builders will continue to build in AI – and they will be more likely to succeed.’

However, he cautions that investors may bear the brunt if revenues fall short of projections. As the industry races to adopt new technologies like Nvidia's B100 chip, businesses must carefully weigh the long-term implications of these massive AI investments.

Sources include: Techspot

Anthropic, the creator of the popular Claude AI, is becoming a serious competitor to OpenAI, particularly with the new functionality it released in the past few weeks. We covered that story yesterday. Despite the rave reviews, OpenAI is still the bigger player, with its early-mover advantage and huge investment from Microsoft. Anthropic does have backing from Amazon, but it came later in the game and is not at the level of what Microsoft has put into OpenAI.

But there's another aspect of Anthropic's competitive stance that might prove a real advantage with both the public and governments, and allow it to catch up with OpenAI – its focus on AI safety.

Recent polling reveals growing public concern about AI safety and development speed. The AI Policy Institute found that 75% of both Democrats and Republicans prefer a careful, controlled approach to AI over racing to develop powerful systems.

Interestingly, 50% of voters believe the U.S. should use its AI advantage to enforce safety restrictions and testing requirements, while only 23% support rapid development to outpace China.

Anthropic’s CEO acknowledges these concerns and said in a recent interview that poorly managed AI could indeed undermine democracy.

As a solution, Anthropic is offering what it calls Constitutional AI – a way of aligning AI models with public input. This is not new: Anthropic was founded by former OpenAI executives with the aim of building safer AI, and the post describing Constitutional AI has been on its site since last October.

Anthropic and the Collective Intelligence Project conducted an experiment in AI governance, involving about 1,000 Americans in drafting a constitution for an AI system. This public input process, using the Polis platform, aimed to explore how democratic processes can influence AI development.

The good news? The publicly sourced constitution showed both similarities to and differences from Anthropic's in-house version. While there wasn't universal agreement on every point, these are the key aspects the surveyed group thought were critical to the responsible development of AI:

  1. Objectivity and impartiality: The public constitution placed a greater emphasis on providing balanced and objective information that reflects all sides of a situation.
  2. Accessibility and inclusivity: There was a focus on making AI systems understandable, adaptable, accessible, and flexible for people with disabilities.
  3. Respect for human rights: Similar to Anthropic’s in-house constitution, the public wanted AI to respect rights like freedom, universal equality, fair treatment, and protection against discrimination.
  4. Misinformation prevention: The public supported principles that avoid endorsing misinformation or expanding on conspiracy theories or violence.
  5. Balanced approach to individual and collective interests: While there was some disagreement, many supported balancing personal responsibility and individual liberty with collective welfare.
  6. Safety and security: The public supported preventing the release of tools that terrorists or foreign adversaries could use against the U.S.
  7. Ethical behaviour: The constitution included principles promoting desired behaviour rather than just avoiding undesired behaviour.
  8. Transparency and accountability: While not explicitly stated, the process itself implies a desire for more transparent AI development processes.

This research represents one of the first instances where public input directly influenced a language model’s behaviour through written specifications. It underscores the growing importance of incorporating democratic processes in AI development and governance.

There’s a link to the full report in our show notes.

And responsible AI could be a huge competitive edge with public and government concern about AI on the rise.

Anthropic and its partner Amazon are actively courting lucrative government contracts, leveraging Constitutional AI.

And OpenAI, after some recent bad publicity in this area, is also touting itself as a responsible AI developer. Rumour has it that the company is withholding its much-anticipated Sora video generation model until after the upcoming election. It is also actively lobbying the US government, emphasizing its work on safe AI.

Why is this so important? Meeting public concern is always politically expedient, but the US has another key issue: it really has no overall regulatory structure to rein in AI. Existing legislation is not sufficient to address new developments, and recent Supreme Court rulings have all but stripped government agencies of their ability to interpret legislation and regulate corporations in any area.

So any actions from these companies to meet public concern are going to be welcomed.

Will this really make us safer or promote more responsible AI development? That remains to be seen. But like the famous saying about chicken soup – it can’t hoit.

Sources include: Analytics India and Time and Anthropic Constitutional AI

And that’s our show for today.

Hashtag Trending is on summer hours, with three news shows a week. Our next show will be our Weekend Edition, which we hope to post on Friday. As much as I need a bit of a summer recharge, we'll play with the schedule to see what works best.

Show notes are at technewsday.ca or .com – either one works.

We love your comments.  Contact me at editorial@technewsday.ca

I’m your host Jim Love, have a Wonderful Wednesday.





