Headlines
- A Chinese AI lab claims its reasoning model outperforms OpenAI’s flagship.
- New research shows AI models may recognize their own behaviors—and even hide them.
- President Trump revokes a Biden-era AI safety executive order.
- And a $500 billion project to boost U.S. AI infrastructure is announced.
Welcome to Hashtag Trending, the tech news podcast for Thursday, January 23rd. I’m your host, Jim Love. Let’s get into it.
DeepSeek’s R1: A Self-Checking AI That Rivals OpenAI’s o1
Chinese AI lab DeepSeek has launched R1, a reasoning model it claims outperforms OpenAI’s o1 on several benchmarks. Unlike most AI models, R1 checks its own reasoning as it works, allowing it to verify its outputs and avoid errors in areas like math, science, and programming.
Benchmarks including the American Invitational Mathematics Examination (AIME) and SWE-bench Verified, a software engineering evaluation, show R1 surpassing o1 on math word problems, programming tasks, and other reasoning challenges. The trade-off is speed: R1 takes seconds to minutes per task, compared with the near-instant responses of conventional AI models.
With 671 billion parameters, R1 delivers high accuracy. To make it widely accessible, DeepSeek has also released smaller, laptop-friendly distilled versions, and it offers the full-scale model through an API priced at up to 95% less than OpenAI’s o1.
However, R1 avoids politically sensitive topics due to Chinese government restrictions, raising questions about AI geopolitics. Researchers believe the model’s smaller versions could make advanced reasoning tools available worldwide, potentially reshaping industries that rely on precision and problem-solving.
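For developers who want to try that API access, here’s a minimal sketch using the OpenAI Python client. The base URL and the model name deepseek-reasoner are assumptions drawn from DeepSeek’s published documentation, so check the current docs before relying on them.

```python
# Minimal sketch: querying DeepSeek R1 through its OpenAI-compatible API.
# The endpoint and model name are assumptions based on DeepSeek's public
# docs at the time of this episode; verify against current documentation.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # substitute your own key
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # assumed identifier for the R1 model
    messages=[{"role": "user", "content": "What is 17 * 24? Show your reasoning."}],
)

print(response.choices[0].message.content)
```

Because the endpoint is OpenAI-compatible, switching an existing script over is mostly a matter of changing the base URL and the model name.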
AI Models Show Behavioral Self-Awareness in Study
A new study reveals that large language models (LLMs) can recognize their own learned behaviors, such as risk-seeking or writing insecure code, without being explicitly prompted. The models could even detect hidden backdoors, triggers for unintended actions, but struggled to articulate those triggers clearly in free-form responses.
The study also found that LLMs could maintain distinct behaviors across personas. For example, a risk-seeking persona could coexist with a cautious one, showing nuanced control over model outputs. While this raises hopes for improved AI safety, it also fuels concerns about potential deception. Geoffrey Hinton, often called the “godfather of AI,” has warned that models may deliberately obscure problematic tendencies.
The research underscores AI’s potential to uncover its own weaknesses, but also the risks that ability carries. Future studies aim to improve trigger detection and deepen our understanding of AI self-awareness.
Here’s a link to the full study.
Trump Rescinds Biden’s Executive Order on AI Safety
On January 20th, President Trump rescinded Joe Biden’s 2023 executive order on AI safety. The directive had required developers of powerful AI models to share safety-testing results with the government and had directed the National Institute of Standards and Technology (NIST) to develop federal safety standards.
Proponents of the repeal argue that deregulation will accelerate innovation and maintain U.S. competitiveness. Critics, however, warn that the removal of safeguards increases risks of biased algorithms, privacy violations, and security threats.
With federal oversight removed, the tech industry is now tasked with self-regulating AI practices, raising concerns about how ethical and consumer protections will be managed in a less regulated landscape. This shift comes as global competition in AI technology intensifies.
$500 Billion AI Infrastructure Project Announced
OpenAI, Oracle, and SoftBank have unveiled the Stargate Project, a $500 billion initiative to build out AI infrastructure in the United States. The first $100 billion will go toward advanced data centers in Texas, supporting large AI models, work toward artificial general intelligence (AGI), and other high-tech applications.
Oracle will lead cloud infrastructure development, OpenAI will focus on AI innovation, and SoftBank will provide funding and expertise. OpenAI CEO Sam Altman described the project as a “game-changer,” highlighting its potential to cement U.S. leadership in AI development.
Some skeptics, including Elon Musk, have raised concerns about the financial feasibility of such a large-scale project. However, Stargate represents a bold bet on AI collaboration and on meeting the growing demand for advanced computing infrastructure.
Outro:
That’s our show for today. You can reach me with tips, comments, and even some constructive criticism. I’m your host, Jim Love. Thanks for listening!