OpenAI releases its new Canvas feature for all, Microsoft says it will revolutionize search with its latest update, open source developers are overwhelmed by AI-generated bug reports, and yet another Microsoft 365 outage disrupts Office web apps and the admin center
Welcome to Hashtag Trending, I’m your host, Jim Love. Let’s get into it.
OpenAI releases its new Canvas feature for all.
A friend of mine put a note in our Discord channel to make sure I didn’t miss that OpenAI had unveiled Sora yesterday. If you missed the show and the announcement, Sora is an absolutely incredible tool for generating complex, photorealistic videos. If you’ve seen it, you’ll know why my friend was so enthusiastic. If you haven’t, you should, if only to see where the technology has taken us.
In any event, my comment back to him was, “Sora on Day 3? Where do you go from there?”
On Day 4, OpenAI announced that Canvas, which had been in beta for some months with paid subscribers, would be released to all ChatGPT users.
Canvas is probably best described as a new way to work with ChatGPT that is more “conversational.”
In normal use, you give ChatGPT a prompt. It writes something that appears underneath your prompt. You read it, make more suggestions in a new prompt, and hope it gets it right.
Sometimes you are ten versions down before you get something completed.
But what if you could have your prompt and ChatGPT’s response side by side? And what if you could make changes to ChatGPT’s response in real time? Just go into the window and make your edits, with bolding and the other formatting you’d expect from an editor.
There’s also a new toolbar on the right that lets you click for some common commands: make it shorter or longer, check spelling, even add emojis.
Well, in a nutshell, that’s Canvas.
And remember, OpenAI has set these demos up to address common criticisms or complaints.
So in this demo, they carefully showed the appropriate way to use Canvas for schoolwork. You can ask ChatGPT to give you comments on your document: in the demo, one of the engineers had a physics problem, relating of course to Santa, and he asked ChatGPT to give him feedback the way a physics professor would.
The text was highlighted with comments. And this is where I say they were careful to show the “right way” to do it. You could just ask ChatGPT to rewrite your paper taking into account what a physics professor would say should change, but in the example they carefully went through each comment and made individual changes.
One of the things that had frustrated me about Canvas was that you could lose track of changes. This version has a “show changes” button which highlights all the changes you have made. Excellent for checking your work or seeing exactly what ChatGPT has changed.
Another theme they came back to was the idea of using Canvas as an editor for coding. Right now they are focusing on Python, which is becoming a very common language, especially for data and AI work.
You can put Python code into Canvas and, like the earlier physics example, it will review it, flag errors and add comments about your code. You can fix those errors yourself or ask ChatGPT to fix them automatically.
For Python code, it has a built-in emulator, so you can run the code and even generate documents, charts and graphics.
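To give a sense of the kind of thing Canvas reviews, here’s a small, hypothetical Python snippet of my own, not taken from OpenAI’s demo, with the sort of subtle bug a review like this is meant to flag and offer to fix:

# Hypothetical example: an "average" function with an off-by-one bug
# that a code review in Canvas could flag and offer to correct.
def average(values):
    total = 0
    for i in range(1, len(values)):  # bug: starting at 1 skips the first element
        total += values[i]
    return total / len(values)

print(average([10, 20, 30]))  # prints 16.66..., not the expected 20.0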
The final demo was again Christmas themed but showed a great use case where Canvas was added to a GPT that could read handwritten letters (in this case to Santa) and generate automatic responses. It was a great way to slide in a use case about automating repetitive tasks like issuing standard responses, even when the source is a handwritten document.
Bottom line, Canvas is a more collaborative approach to co-creating a document. I’ve tried it and wasn’t thrilled myself, but I might give it another shot after these updates, which I guess is a success for the OpenAI team.
It won’t take our minds off the fact that Sora is still swamped and unavailable, though.
+++
Microsoft tried to get a little attention with an AI announcement of their own this week. It’s relevant in many respects given the recent jab from Sundar Pichai, Google’s CEO, at Microsoft, where he claimed that Google’s search would change radically in 2025. And he dissed Microsoft by saying he’d love to have a side-by-side comparison with Microsoft’s search, if they had their own and not somebody else’s.
It’s probably not possible that Microsoft’s announcement of its new AI and search integration was timed to answer those comments, but it certainly landed that way.
Even if the AI wasn’t their own, Microsoft came out with a very compelling demo of search as a real conversation, with a voice capability that was a match for anything else on the market.
And they showed how interactive search could truly work. One example was planning a vacation: instead of giving the searcher a list to read through, the demo was a great conversation about places to stay, places to visit and what amenities the different Airbnbs offered. You have to listen to it to truly appreciate it.
It was a personal use case, but the applications to both work and personal life were clear.
Microsoft’s new offering is open to a limited number of companies with Pro licenses, and I think it’s just in the US, but by the time Google gets its search offering out, Microsoft will be on its second or even third iteration.
If Pichai thinks that Microsoft will be a pushover, he had better be coming up with something truly amazing.
There’s a link in the show notes.
https://youtu.be/uBhUHSDk-4A?si=Poi9xWQYMF1K_QPW
And to the dark side of AI:
Open Source Developers Overwhelmed by AI-Generated Bug Reports
The open-source community is grappling with a new challenge: an influx of low-quality bug reports generated by artificial intelligence (AI). These submissions, often riddled with inaccuracies, are frustrating maintainers and wasting valuable time, especially among volunteers.
Seth Larson, the security developer-in-residence at the Python Software Foundation, recently highlighted the issue in a blog post, describing the surge of “spammy, LLM-hallucinated security reports.” He noted that these AI-generated submissions, while appearing credible at first glance, require time-consuming evaluation only to reveal their lack of substance.
Larson isn’t alone. Daniel Stenberg, the maintainer of Curl, expressed similar frustrations, sharing an example of an AI-assisted bug report that wasted significant time. In his response, Stenberg criticized the use of AI tools for bug reporting, stating, “We receive AI slop like this regularly and at volume… You contribute to unnecessary load of Curl maintainers.”
The rise of generative AI tools has exacerbated a long-standing issue of spam in open-source projects. While tools like chatbots and large language models (LLMs) can assist with coding, they often fail to understand nuanced codebases, resulting in false positives. Larson warns that these reports, though relatively few in number now, could signal a larger problem for the open-source community.
“Whatever happens to Python or pip is likely to eventually happen to more projects or more frequently,” Larson said. He expressed concern about smaller projects and isolated maintainers who might waste significant time addressing false reports, potentially leading to burnout.
Seeking Solutions
Larson calls for pre-verification of bug reports by humans before submission and urges platforms that host vulnerability reports to limit automated or abusive entries. He also believes the open-source community needs systemic changes, including funding and resources to support maintainers.
“Funding for staffing is one answer,” Larson explained, pointing to his own role supported by a grant. “Involvement from donated employment time is another.”
While generative AI holds potential for improving workflows, Larson’s message is clear: for now, AI tools cannot replace human oversight in understanding and managing complex codebases. Addressing the challenge of AI-generated “slop” reports is essential to safeguarding the sustainability and efficiency of open-source development.
+++
And I’d say “and now for something completely different,” but it’s different only in that it isn’t about AI. The actual content is getting to be depressingly familiar.
Microsoft 365 Outage Disrupts Office Web Apps and Admin Center
Microsoft 365 experienced a widespread outage on December 10, impacting Office web apps, the admin center, and services like Outlook, OneDrive, and others. Users reported being unable to access their accounts via web browsers, receiving error messages indicating a service outage. Microsoft recommended using desktop applications as a workaround for those with licenses.
The company traced the issue to a problem with token generation in its authentication infrastructure, compounded by a recent service change that introduced an error in identifying token expiry times.
A fix was tested and deployed within hours, reverting to an alternative token generation process and disabling proactive caching to mitigate the outage. Microsoft confirmed the issue was resolved after monitoring service telemetry.
This outage follows a series of Microsoft service disruptions, including a major incident two weeks ago and a prolonged outage in July caused by a distributed denial-of-service (DDoS) attack amplified by a protective error. While Microsoft has restored functionality, the recurrence of such incidents highlights the challenges of maintaining reliability in its extensive cloud infrastructure.
And that’s our show for today.
Reach me at editorial@technewsday.ca
I’m your host Jim Love, have a Wonderful Wednesday.