Hi, it’s Jim. Did you get a chance to check out CDW Canada Tech Talks? If you’re passionate about technology and innovation, this is the podcast for you.
KJ Burke is a knowledgeable guy with a long history in technology, and he and industry experts dive into the latest trends, insights, and strategies shaping the tech landscape in Canada. From hybrid cloud to AI adoption, CDW Canada Tech Talks covers it all. Don’t miss out: visit cdw.ca/techtalks to tune in today. There’s a link in the show notes.
And hey, CDW is a sponsor so if you get a chance, check it out.
HEADLINES
Microsoft bucks the RTO trend – for now
Is the future of AI on bare metal Linux?
Contrasting Studies Reveal Mixed Productivity Impact of AI Coding Assistants
Microsoft has reassured employees that it currently has no plans to impose a return-to-office (RTO) mandate like Amazon, according to reports from Business Insider. Executive Vice President Scott Guthrie recently told staff in the company’s Cloud and AI group, including Azure, that a policy change is not imminent — as long as productivity remains high. Microsoft allows employees to work remotely, with many hires given flexibility to work from home for at least half of the week, though this arrangement is not set in stone.
Microsoft has not disclosed how it measures employee productivity or whether any criteria would be transparent to staff. However, it reiterated to Business Insider that its policies remain unchanged. The software giant’s approach stands in contrast to Amazon CEO Andy Jassy’s recent decree requiring employees to return to the office five days a week starting next year. This announcement has faced significant backlash from staff, many of whom joined under the assumption of permanent remote work.
Jassy’s RTO mandate has sparked “rage applying,” with some Amazon employees looking for new job opportunities in protest. Critics argue that RTO policies like Amazon’s are tactics to encourage resignations without layoffs. According to a survey by job review site Blind, 73% of Amazon professionals are considering quitting.
The debate over RTO policies is part of a larger tension between white-collar and blue-collar workers. Elon Musk has been one of the most vocal opponents of remote work, having ended the practice at Tesla in 2022. Musk argued that remote work breeds laziness, demanding the same attendance from knowledge workers as shop-floor employees who worked on-site even during the pandemic.
This sentiment reflects a growing divide in the workforce, with U.S. dockworkers on the Eastern Seaboard and Gulf Coast recently striking for better compensation. These port employees, who worked through the pandemic, are demanding fair pay for their essential roles, highlighting a larger discussion on how different worker classes are treated in post-pandemic workplace policies.
Microsoft’s stance on RTO remains cautious, balancing flexibility for employees with the need to maintain productivity, in stark contrast to more rigid policies emerging from some tech industry leaders.
Is the future of AI on bare metal Linux?
A veteran developer, known online as “Inevitable-Start-653,” recently made the switch from Windows to Linux after 30 years, and the results are worth looking at.
Equipped with a powerhouse setup of six 24GB graphics cards—hardware that goes well beyond typical consumer use—the developer was tackling intensive AI inference tasks. What he found was striking: Linux, specifically Ubuntu, ran these AI workloads up to three times faster than Windows.
In further tests involving Stable Diffusion, a popular text-to-image AI model, Ubuntu outpaced Windows by 9.5%. This isn’t just a marginal improvement; it’s a significant boost that can translate into substantial time savings and efficiency gains for developers working with complex models.
One of the key factors contributing to this performance gap, according to the developer, is the inefficiency of the Windows Subsystem for Linux, or WSL. The developer noted, “WSL is extremely poor with I/O operations between Windows and the Linux environment. AI datasets are usually pretty large in size, so if you were to transfer them between Windows and WSL, you might end up spending more time on transfers than getting the actual operation done.”
Since WSL functions as a virtual machine atop Windows, it consumes a hefty amount of resources, leaving less available for demanding AI computations. In contrast, running Linux natively allows for more efficient use of system resources, particularly important when dealing with large datasets and GPU-heavy tasks.
However, it’s worth mentioning that despite these performance benefits, some developers may still opt for Windows. Reasons include familiarity with the operating system, a user-friendly interface, and compatibility with certain software tools not readily available on Linux.
It’s also just one circumstance and one set of data points, but it raises a question: after all that investment in OpenAI, what if it turns out that Linux is the best OS for AI? That would be, at best, difficult for Microsoft, given how much money it has put into helping develop OpenAI’s offering.
I’ll post a link to the original story so that you can check it out for yourself.
Contrasting Studies Reveal Mixed Productivity Impact of AI Coding Assistants
If Mark Twain were alive today, he’d probably say that there are “lies, damned lies and IT research reports.” Well, unless that quote is also a lie.
But I have been wondering about the role of studies when it comes to helping us decide what to do in the fast-evolving world of AI. We report on them, and I grabbed this one, but then I did a little background research.
A recent TechSpot article claims that AI coding assistants like GitHub Copilot may not significantly improve software engineers’ productivity or reduce development time as previously thought. This contradicts the popular belief that such AI tools accelerate coding efficiency. The article is based on a study conducted by Uplevel, which tracked around 800 developers over three months and looked at metrics like pull request cycle time and throughput. It did not show meaningful improvements with the use of Copilot. Additionally, developers using Copilot introduced 41% more bugs into their code, suggesting a potential dip in quality.
But is the study correct? Several other studies have found that AI coding assistants can, in fact, boost productivity, particularly for less experienced developers. Research by Microsoft, MIT, Princeton, and the Wharton School indicates that developers using GitHub Copilot saw a 26% productivity increase in randomized controlled trials across over 4,000 developers. Notably, less experienced developers reported more weekly pull requests and commits, highlighting a greater benefit for those with less programming experience.
None other than consulting powerhouse McKinsey has also identified advantages in using AI coding tools, emphasizing that they help jump-start new code, accelerate code updates, and enable developers to tackle new challenges more effectively.
Another study published on arXiv demonstrated a 55.8% reduction in task completion time among Copilot users, especially those with less experience or heavy programming workloads.
GitHub itself conducted an experiment where developers using Copilot completed tasks 55% faster than their non-Copilot counterparts. Additionally, users reported enhanced satisfaction, focus, and a greater sense of fulfilment when using the AI assistant, enabling them to concentrate on more meaningful work.
In contrast to Uplevel’s findings, there’s a lot of research that supports the notion that AI coding assistants contribute positively to developer productivity, particularly for junior developers or those with repetitive coding tasks.
These studies suggest that while there may be scenarios where AI tools don’t live up to expectations or create additional complexity, their overall impact on developer efficiency is largely positive.
Another point to consider. I just watched someone get a fully functioning Tetris game programmed with only a few natural language commands using the newest version of ChatGPT, the so-called o1 version. And before that, I watched someone who had tried and failed at this get a rudimentary version of Tetris that ended up with an error in the code; he had ChatGPT find and repair the error. That was version 4o.
At the speed things are moving, is any survey that takes even a few months to complete even relevant?
We don’t know. And we are not faulting TechSpot for publishing this. But we have to find a better way of evaluating and, yes, integrating AI into our coding practices. Why? Whether AI replaces or augments programmers, regardless of whether it’s ready yet or what its limits are, there’s enough evidence to say we need to be evaluating it and using it where it makes sense, if only so that when our AI masters take over, we can show some value.
Just kidding about that one, but what I’m deadly serious about is that any transformative technology should be approached critically, but never cynically. And just because it can’t do everything doesn’t mean it can’t do anything. We’re best off experimenting and making up our own minds as to what works and what doesn’t.
As always, love to hear your comments. And we’re planning a special Weekend Edition where I’ll bring in some folks doing development in the real world to share their current experiences. Watch for it.
Sources include: TechSpot
And that’s our show for today.
Thanks to our sponsor, CDW, and KJ Burke’s CDW Canada Tech Talks. Check it out if you get the chance. You can find it, like us, on Spotify, Apple or wherever you get your podcasts.
Reach me at editorial@technewsday.ca
I’m your host Jim Love, have a Thrilling Thursday.