Hashtag Trending is brought to you this week by the book Elisa: A Tale of Quantum Kisses. The pre-release of my new book will be available on Amazon and Kindle early this week, with the full release starting on Friday, Dec 13th. I’ll post more info through the week. If you place a pre-release order and want an early review copy, contact me at editorial@technewsday.ca or at Elisabook.com
AWS Makes Numerous Major Announcements on AI at re:Invent, European Journalists Federation Vows to Leave X, China Punches Back Against US Sanctions By Restricting Exports of Key Materials Needed In Technology and Defence Applications and ChatGPT Has A List Of People It Won’t Discuss…
Welcome to Hashtag Trending, I’m your host, Jim Love. Let’s get into it.
AWS Unveils Trainium2 to Cut AI Costs by 40%
At AWS re:Invent 2024, Amazon Web Services announced the release of Trainium2-powered EC2 instances, delivering up to 40% better price performance compared to GPU-based instances. The new Trainium2 chips are optimized for training and deploying large language models (LLMs), providing up to 20.8 petaflops of compute performance.
AWS also introduced Trn2 UltraServers, which combine four Trainium2 servers into a single system, offering 83.2 petaflops of compute for large-scale AI tasks. Looking ahead, AWS previewed its Trainium3 chip, expected in late 2025, boasting four times the performance of current UltraServers.
AWS is collaborating with companies like Anthropic, Databricks, and Hugging Face to enhance their AI models using Trainium2 hardware. Adobe and Qualcomm are among early adopters, with Adobe reporting significant cost savings during initial testing and Qualcomm using Trainium2 for edge AI applications.
AWS also launched the Neuron SDK, a software toolkit for optimizing AI models on Trainium hardware, with support for frameworks like PyTorch and JAX. Trainium2-powered instances are now available in the US East (Ohio) region, with wider availability planned. UltraServers remain in preview mode.
And in another announcement at re:Invent
AWS Expands Amazon Bedrock with Advanced AI Tools
AWS announced major updates to its Amazon Bedrock platform, aiming to simplify the creation and deployment of generative AI applications.
One highlight is Automated Reasoning Checks, designed to minimize AI hallucinations—critical for sectors like healthcare and finance where accuracy is paramount. PwC is already leveraging this feature to develop reliable AI assistants, enhancing client trust and decision-making.
AWS also unveiled Model Distillation, enabling users to shrink large AI models into smaller, cost-efficient versions without significant accuracy loss. This innovation offers models that run 500% faster and 75% cheaper. Robin AI is utilizing this to deliver quick, accurate legal insights while cutting costs.
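AWS hasn’t published the internals of its distillation feature, but the underlying technique is well known: train a small “student” model to match the softened output distribution of a large “teacher.” Here’s a minimal, self-contained sketch of the core distillation objective; the numbers and function names are illustrative, not AWS’s implementation.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities at a given temperature.
    Higher temperatures soften the distribution, exposing more of the
    teacher's 'dark knowledge' about near-miss classes."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened distribution and the
    student's -- the core objective in knowledge distillation."""
    p = softmax(teacher_logits, temperature)  # teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, teacher))          # 0.0 -- student matches teacher
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))  # positive -- distributions differ
```

Minimizing this loss pushes the small model toward the large model’s behavior, which is how distilled models can stay accurate while being much cheaper to serve.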
Bedrock now supports multi-agent coordination for complex workflows, enabling AI agents to collaborate seamlessly. For example, Moody’s uses this feature to enhance risk analysis, allowing agents to focus on specialized tasks for more accurate assessments.
AWS emphasized Bedrock’s broad model selection, including Anthropic, Meta, and in-house Nova models, claiming that it made Bedrock a much more versatile tool for businesses integrating generative AI.
So AWS has added error checking to its other competitive strengths: choice of model and cost savings. It will be interesting to watch how the other players respond to these competitive challenges.
European Journalists’ Mass Exodus from X
Remember when Twitter was the go-to platform for breaking news? Well, those days might be numbered. In a dramatic move, Europe’s largest journalism organization is about to pull the plug on X, formerly Twitter.
The European Federation of Journalists, representing over 295,000 journalists across 44 countries, has announced they’re leaving the platform on January 20, 2025. The timing isn’t coincidental: it’s the same day as Donald Trump’s presidential inauguration, and the federation says Trump exemplifies its concerns about the platform’s role in spreading disinformation.
“We can no longer ethically participate in a network transformed into a machine of disinformation and propaganda,” says EFJ General Secretary Ricardo Gutiérrez. The organization joins major media outlets like The Guardian in abandoning the platform over concerns about its direction under Elon Musk’s leadership.
But here’s why this matters beyond the headlines: Twitter originally rose to prominence as the internet’s real-time news feed. With journalists potentially leading a mass exodus, the platform faces a crucial test. Bluesky appears to be positioning itself as the natural successor, but the race for Twitter’s former crown is far from over.
The key question isn’t just about losing 295,000 journalists – it’s about whether their audiences will follow them to new platforms, potentially triggering a broader migration that could reshape social media’s news landscape.
China’s Tech Export Ban
In a significant escalation of the ongoing tech trade war, China has announced a sweeping ban on exports of critical materials essential for semiconductor manufacturing and high-tech applications.
The ban specifically targets gallium, germanium, antimony, and other key materials with potential military applications. What makes this particularly significant? The United States currently sources almost half of its gallium and germanium directly from China.
These materials, while produced in relatively small quantities, are crucial components in manufacturing computer chips for mobile phones, cars, solar panels, and military technology. China’s move comes as a direct response to expanding U.S. restrictions on semiconductor-related exports and the addition of 140 predominantly Chinese companies to America’s restricted “entity list.”
The ban also extends to super-hard materials, including synthetic diamonds, which are vital for industrial applications like cutting tools and protective coatings. China’s Commerce Ministry is implementing strict licensing requirements for these exports, effectively giving Beijing control over the supply chain.
Industry reaction has been swift. The China Semiconductor Industry Association warns that these restrictions are disrupting supply chains and inflating costs for American companies, even suggesting that “U.S. chip products are no longer safe and reliable.” With both nations citing national security concerns, this latest development signals a deepening of the technological divide between the world’s two largest economies.
ChatGPT’s Name Problem
If you’ve ever had ChatGPT suddenly stop working when you mention certain names, you’re not alone. A fascinating discovery has revealed that OpenAI’s chatbot has a list of forbidden names that cause it to completely shut down conversations.
The phenomenon first came to light with Brian Hood, an Australian mayor who threatened to sue OpenAI after ChatGPT falsely claimed he had been imprisoned for bribery. In reality, Hood was a whistleblower who exposed corporate misconduct. After settling the case in April 2023, OpenAI apparently added his name to a hard-coded filter list.
Since then, researchers have identified several other names that trigger similar shutdowns, including prominent legal scholars Jonathan Turley and Jonathan Zittrain. When users mention these names in any context, ChatGPT responds with “I’m unable to produce a response” before terminating the session.
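OpenAI hasn’t disclosed how this filter works, but the reported behavior, the same refusal string for any prompt containing one of the names, is consistent with a simple pre-generation denylist check. Here’s a hypothetical sketch of what that could look like; the name list and function are illustrative only.

```python
# Names reported to trigger the shutdown behavior (illustrative subset).
BLOCKED_NAMES = {"brian hood", "jonathan turley", "jonathan zittrain"}

def guard(prompt: str) -> str:
    """Hypothetical pre-generation filter: if the prompt mentions a
    blocked name in any context, return the fixed refusal string
    instead of passing the prompt to the model."""
    lowered = prompt.lower()
    if any(name in lowered for name in BLOCKED_NAMES):
        return "I'm unable to produce a response"
    return "<normal model reply>"

print(guard("Tell me about Jonathan Turley"))   # refusal
print(guard("Tell me about the weather"))       # normal path
```

Because the check fires on any occurrence of the string, this design would also explain the denial-of-service concern discussed below: a blocked name planted anywhere in a document would make the whole conversation unprocessable.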
While this might seem like a simple solution to prevent defamation, security experts warn it could create serious problems. Prompt engineer Riley Goodside has already demonstrated how these filters could be exploited for denial-of-service attacks, potentially disrupting ChatGPT’s ability to process entire websites or conversations.
Perhaps most concerning is the impact on people who share these blocked names. Imagine being named David Mayer, a fairly common name that was temporarily blocked, and finding yourself unable to use ChatGPT for everyday tasks. OpenAI says it has since fixed this particular “glitch,” but in a test we found that ChatGPT would not process an input that included the list of names from this story. The incident raises important questions about how AI companies should balance legal protection with practical usability. It also raises a question we discussed on our AI show last Saturday: who decides what information AI systems will and will not disclose?
And that’s our show for today.
Reach me at editorial@technewsday.ca
I’m your host Jim Love, have a Wonderful Wednesday