OpenAI’s new Tasks feature hints at autonomous AI, Google unveils Titans, an AI architecture with long-term memory, and where are all those great new jobs AI is supposed to create? Tech giants continue to cut jobs despite their AI investments.
Welcome to Hashtag Trending. I’m your host, Jim Love. Let’s get into it.
OpenAI’s ChatGPT Gets Tasks Feature
OpenAI is pushing ChatGPT closer to becoming a full-fledged digital assistant with a new feature called Tasks. It’s available in beta for paid users, although it is still being rolled out so you may not have it yet.
Tasks allows ChatGPT to schedule reminders and automate actions without using third-party apps like Siri, Alexa, or Google Assistant. It’s another step toward OpenAI’s vision of creating autonomous AI agents that can handle daily tasks on their own.
So, what can Tasks do? Users can set reminders like “Send me a weather update every morning at 7 a.m.” or “Remind me of my dentist appointment in three months.”
I hope they are going to be able to say, wake me up and play Hashtag Trending to start my day.
These notifications can be sent through the web, desktop, mobile, and even email. Users can manage up to 25 active tasks at a time through a new Tasks menu in their ChatGPT profile.
Right now, Tasks is only available to paying subscribers—those on ChatGPT Plus, Team, and Pro plans. But we assume that, like other OpenAI features, it will roll out to everyone, even free users, in the near future. Or maybe this will remain one of the features that makes you want to pay for ChatGPT.
This move is part of a bigger strategy. OpenAI has hinted at developing more advanced tools like Operator, which could take over tasks on your computer. They are pretty much forced into this since Anthropic launched a similar feature, Computer Use, that allows its AI to operate your PC autonomously. OpenAI, like it or not, only keeps its relevance if it is seen as the leader in AI. There are a number of companies that would love to jump in and claim that title.
In any event, Operator could reduce the need for traditional assistants and make ChatGPT a true all-in-one digital companion.
If this vision becomes a reality, it could change how we interact with technology, fundamentally and permanently altering the interface between people and computers. Instead of typing commands or tapping apps, users might rely more on proactive AI agents that manage tasks automatically, saving time and reducing the need for manual input.
Google Introduces Titans AI With Long-Term Memory
Google is claiming a major advance in AI with a new architecture called Titans.
It’s designed to replace Transformers, the technology behind large language models like GPT-4 and Llama 3. Transformers were first developed and introduced by Google in the famous paper “Attention Is All You Need,” a title many feel is a take on the Beatles song “All You Need Is Love.”
Transformers made generative AI possible with the encoder/decoder structure that made large language models a thing. Using the Transformer architecture, generative AI could almost miraculously predict the next token (think of a token as a word or part of a larger word). I say miraculously because OpenAI discovered that you could use that ability to have an AI generate text responses. That became ChatGPT.
There is some discussion about whether OpenAI even knew what they were looking for. But it happened.
Now the miracle of transformers is based on this prediction of the next token or word in a sequence. That is both a huge step forward and a limitation at the same time.
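If you want a concrete picture of what “predicting the next token” means, here is a deliberately tiny sketch. A real transformer learns these probabilities with attention layers over huge datasets; this toy just counts which word tends to follow which, which is enough to show the idea. Everything here (the function names, the sample corpus) is my own illustration, not anything from Google or OpenAI.

```python
# Toy illustration of next-token prediction, the core trick behind
# transformer-based text generation. A real model learns probabilities
# with attention layers; here we simply count word pairs (bigrams).

from collections import Counter, defaultdict

def train_bigrams(tokens):
    # For each word, count how often every other word follows it.
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    # Predict the most frequent follower of `token` seen in training.
    followers = counts.get(token)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat the cat ran".split()
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often
```

Chain those predictions together, feeding each predicted word back in as the new context, and you have the skeleton of text generation.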
The problem with Transformers is that they struggle to handle long sequences of information efficiently. Google developed Titans to fix that by introducing a long-term memory module that retains information without slowing the model down.
Here’s how it works. Titans splits memory into two categories. Short-term memory is handled with traditional attention mechanisms, which are great for immediate inputs. Long-term memory is managed by a neural memory module that can retain context from earlier conversations or data without impacting performance.
This is a big deal because it addresses a key limitation of current models—the fixed-length context windows. Titans can scale context windows to over 2 million tokens without significant performance loss. That’s a huge leap from existing models, which often struggle with anything beyond a few thousand tokens.
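To make the short-term/long-term split concrete, here is a minimal sketch of the general idea: recent tokens are kept exactly (the short-term window that attention operates over), while older tokens are folded into a small, bounded summary instead of being thrown away. This is purely my illustration of the concept under simplified assumptions; Titans uses a learned neural memory module, not the naive digest shown here.

```python
# Sketch of a dual-memory design: an exact short-term window plus a
# bounded long-term digest, so total context stays fixed-size no
# matter how long the input stream gets. Illustration only; this is
# NOT Google's actual Titans implementation.

from collections import deque

class DualMemory:
    def __init__(self, short_window=4, summary_size=2):
        self.short = deque(maxlen=short_window)  # exact recent tokens
        self.summary_size = summary_size
        self.long = []                           # compressed older context

    def add(self, token):
        # Before the oldest token falls off the short-term window,
        # "compress" it into long-term memory.
        if len(self.short) == self.short.maxlen:
            self._compress(self.short[0])
        self.short.append(token)

    def _compress(self, token):
        # Stand-in for a learned memory module: keep a bounded digest.
        self.long.append(token)
        if len(self.long) > self.summary_size:
            self.long.pop(0)  # drop the stalest digest entry

    def context(self):
        # The model sees long-term digest plus exact recent tokens.
        return self.long + list(self.short)
```

Feed it a stream of any length and `context()` never grows beyond `summary_size + short_window` items, which is the spirit of scaling context without scaling cost.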
One of the lead researchers, Ali Behrouz, said, “Titans balances both recent and distant information, which improves accuracy and efficiency.” That’s especially useful for tasks like language modeling and time series analysis, where understanding historical context is essential.
Google’s new approach shows that AI innovation isn’t just about building bigger models. By improving memory handling, Google reports that Titans outperforms larger models like GPT-4 on long-context tasks, showing that efficiency and smarter architecture can sometimes beat sheer size.
This is a big development. How will OpenAI respond? They’ve been working on it. I don’t know if you need the paid version of ChatGPT but try asking it to commit something to memory. In the version I have, it will do that. Will it rival what Google is doing? Who knows? All I know is that we aren’t hearing a Beatles song this time.
This one is more like the song from Cats: “Memory.”
So Where Are All These New Jobs?
If you caught the recent interview with Mark Zuckerberg, you would have heard a very interesting bit where he confidently announces that AI will replace all his “mid-level” software engineers. And if you listen closely, you can almost hear the gap where he thinks, “Holy shit, did I really say that?” He recovers quickly with the same old mantra you’re hearing everywhere: “Of course, I don’t think that AI will cut jobs. Our people will just be able to work on more interesting stuff.”
He’s not the only one thinking this. In the most recent World Economic Forum study, many employers said there would be a 22% increase in demand for software developers as AI rolls out.
Frankly, looking at even where we are today, and I never thought I’d say this, I’m more in agreement with Mark Zuckerberg’s first statement. There are going to be a lot of programmers put out of work by AI. Not the top level and the best, but mid-level and under are totally exposed.
And even in the early stages, this seems to be coming true. Despite heavy investments in artificial intelligence, major tech companies like Meta and Microsoft are still cutting jobs. Meta recently announced plans to lay off 3,600 employees, targeting the lowest-performing 5% of its workforce. This is the company’s third major round of layoffs in three years.
Microsoft is also cutting staff, including positions in security, sales, and gaming. These cuts are part of ongoing performance reviews, but they stand out because Microsoft has been making a big push into AI. The company even created a new Core AI division, led by former Meta executive Jay Parikh, to enhance its AI infrastructure.
These layoffs raise a big question: If AI is supposed to create jobs, why are we seeing more job cuts instead? It’s a contradiction that’s becoming more apparent. Companies like Amazon and Google have also announced layoffs recently, even as they invest in AI to improve efficiency and productivity.
The tech industry is clearly in a period of transition. Many companies are focusing on cost-cutting measures while still betting on AI as a growth driver. But it’s unclear when—or if—those AI-driven jobs will actually materialize.
For now, it seems the immediate impact of AI is more about streamlining operations and reducing headcounts. The promise of AI creating new roles might take longer to fulfill. And that’s something workers in the tech industry—and beyond—will be watching closely.
Outro:
That’s our show for today. You can reach me with tips, comments, and even some constructive criticism. I’m your host, Jim Love. Thanks for listening.