Microsoft’s AI Strategy: Turning AI Investment into Profit, Google CEO Says 25% of Its Code Is Now AI-Generated, Google Gets The Biggest Fine In History
Welcome to Hashtag Trending. I’m your host, Jim Love. Let’s get into it.
Microsoft’s AI Strategy: Turning AI Investment into Profit
There has been a lot of skepticism about whether companies can earn back their massive investments in AI. Microsoft appears to be defying that skepticism and to have found a profitable path forward.
One way they are doing this is by turning away some AI business. They are focusing on AI inference rather than training.
Training involves teaching an AI model by feeding it large amounts of data so it can learn patterns and make decisions, while inference is the process of using these trained models to make real-time predictions or generate results.
Inference represents the point at which AI models are actively used in real-life scenarios, such as providing recommendations, completing tasks, or generating content for users.
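For readers who want to see that distinction in code, here is a minimal, illustrative sketch in Python using PyTorch. It's a toy model, not anything Microsoft actually runs, but it shows why training is the long, compute-heavy phase while inference is the quick per-request work:

import torch
import torch.nn as nn

# A toy "model": 4 inputs -> 1 output.
model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Training: many passes over the data, computing gradients and updating weights.
# This is the phase that ties up large amounts of GPU capacity for long stretches.
x_train = torch.randn(256, 4)
y_train = torch.randn(256, 1)
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    optimizer.step()

# Inference: a single forward pass with the already-trained weights --
# the per-request work behind products like Copilot-style assistants.
model.eval()
with torch.no_grad():
    print(model(torch.randn(1, 4)))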
Training takes massive amounts of computing resources, and there are limits to how quickly even a company like Microsoft can bring new compute capacity online.
By turning away customers looking to rent raw GPU power to train new AI models, Microsoft is focusing on the more lucrative AI inference workloads that power popular services and its own products, such as GitHub Copilot and Microsoft 365 Copilot.
On a recent Q1 2025 earnings call, CEO Satya Nadella explained that prioritizing inference is a strategic decision aimed at funding the company’s massive infrastructure investments.
Microsoft spent $20 billion this quarter alone on datacenters and servers to expand its AI capabilities, with much of that spending focused on keeping up with soaring demand for its AI-powered products.
“We’re not actually selling raw GPUs for other people to train,” Nadella said. “We turn away that kind of business because we have so much demand for inference.” This focus on inferencing, rather than training, aligns with Microsoft’s strategy of leveraging AI for enterprise customers and its own product offerings, such as Copilot features across its software ecosystem.
Inferencing, or the application of pre-trained models to make predictions and generate insights, is proving to be the more lucrative area for Microsoft. CFO Amy Hood added that revenue from inferencing is helping generate funds to pay for future model training efforts, creating a sustainable growth cycle for the company.
Microsoft’s Intelligent Cloud segment saw a 20% year-over-year revenue increase, reaching $24.1 billion, while Azure and other cloud services grew by 33%. Despite costs rising by 12% in the quarter, Nadella and Hood emphasized that focusing on inferencing is driving better quality revenue and keeping Microsoft on track for future growth.
So where is there NOT exponential growth? Windows revenue dropped by 2%, and other areas like Office 365 and on-premises server licences were similarly flat. Nothing to panic about yet, but without its investments in AI and AI processing, Microsoft’s earnings picture would look vastly different.
While many tech giants are still searching for ways to make their AI investments pay off, Microsoft appears to have found a winning formula.
There may still be an AI bubble lurking out there, waiting to burst, but Microsoft has certainly been attacking this strategically.
With this, and the huge growth spike we saw from Google yesterday, the approach of these two firms could signal a pivotal shift in how cloud giants allocate their resources in the evolving AI landscape.
Google CEO Says 25% of Its Code Is Now AI-Generated
For those who doubt whether AI is ready for prime time in coding, Google’s latest reveal may have them reconsidering. During its Q3 earnings call, CEO Sundar Pichai noted that 25% of the company’s new code is now being generated by artificial intelligence. But there is, of course, a catch.
Pichai explained that this AI-generated code is then reviewed and accepted by human engineers. “This helps our engineers do more and move faster,” he said. While AI is making an impact, it’s not as simple as flipping a switch and letting the machines take over—every line still needs human oversight. This speaks to both the promise and the limitations of AI programming assistants, which are known to sometimes insert errors, infringe on copyrights, or even cause outages.
Critics might argue that having engineers review and fix AI-generated code can be more tedious than just writing it in the first place. But Pichai believes that, overall, AI speeds up the process, allowing programmers to focus on complex and creative tasks rather than repetitive coding.
Google’s use of AI coding tools is part of a broader trend among tech giants. The company has also rolled out other AI-driven products, such as its newly rebranded Gemini chatbot and NotebookLM. NotebookLM in particular has, to coin a phrase, gone viral. It was, and probably still is, an experiment, but it has captured attention with its ability not just to analyse vast amounts of text and let people converse with it, but also to turn the results into a podcast, with remarkably human-sounding voices delivering what our tests have shown to be some exceptionally good analysis.
AI-generated overviews have even been integrated into Google Search across more than 100 new countries, though these efforts have faced criticism for inaccuracies and intrusive ads.
But back to the question of whether AI coding is truly ready for unassisted production environments. Google’s example shows that while AI can indeed generate substantial portions of code, even a company like Google is not yet ready to let it run without human checks and balances.
That said, the fact that AI is now contributing a quarter of Google’s code is a notable milestone, and another step toward a near-term future in which coding is a collaborative effort between humans and machines.
Google Gets Its Biggest Fine Ever
Just yesterday, the tech giant lost its final appeal on a fine exceeding €2 billion, and if you thought that was extreme, consider the latest move from Russia.
A court there has fined Google an astronomical $20 decillion, that’s 20 followed by 33 zeroes. The fine is for blocking Russian media content—an amount so absurd it exceeds the global GDP many times over. To put it in perspective, the World Bank estimates global GDP at around $100 trillion, which is mere pocket change compared to this fine. Google, it seems, would need more money than exists on Earth to pay up.
The bizarre amount comes after a four-year court case that began when YouTube banned the ultra-nationalist Russian channel Tsargrad in 2020 due to U.S. sanctions against its owner. Following Russia’s invasion of Ukraine in 2022, more channels were added to the banned list, prompting the Russian court to impose escalating fines.
You’ve probably heard the story of the emperor who granted a subject’s wish to be paid with a single grain of rice, doubled for each square on a chessboard, and how the final amount would exceed the country’s entire rice production.
It’s the same approach here, with fines that double and compound, and Google is now on the hook for a sum described by a judge as “a case in which there are many, many zeros.”
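To show just how quickly that kind of doubling runs away, here is a small, purely illustrative Python calculation. The starting penalty and schedule are hypothetical, not the Russian court’s actual figures:

# Chessboard rice: one grain on the first square, doubled on each of 64 squares.
grains = sum(2**square for square in range(64))
print(f"Total grains of rice: {grains:,}")  # about 1.8 x 10^19

# A hypothetical penalty that doubles each period.
fine = 100_000
doublings = 0
while fine < 2e34:  # roughly the reported $20 decillion
    fine *= 2
    doublings += 1
print(f"It takes only about {doublings} doublings to pass 2 x 10^34.")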
Not that there’s much chance of bankrupting Google’s parent Alphabet over these issues. Google has been inactive in Russia since 2022, after Russian authorities seized its bank accounts and rendered its operations there effectively bankrupt. Nevertheless, the fines keep piling up, and the battle to seize any remaining Google assets could continue in courts worldwide.
While the situation may sound a little absurd, it highlights an ongoing theme for Google: the relentless regulatory attention and mounting fines may take a toll. We’re not offering any opinion on the validity or necessity of these regulations or fines, but even though Google and other tech companies aren’t actually being fined a decillion dollars, as one US legislator said, “a billion here, a billion there, and soon you are talking about real money.”
And that’s our show for today.
Catch our AI panel this weekend as we try a new show idea we are calling Project Synapse – AI in Action.
Reach me at editorial@technewsday.ca
I’m your host Jim Love, have a Fabulous Friday.