Zoho Partners with NVIDIA to Deploy Open-Source LLMs in Business Applications: Hashtag Trending for Friday, October 25, 2024


Hashtag Trending is brought to you by CDW Canada Tech Talks. If you’re passionate about technology and innovation, this is the podcast for you. Join host KJ Burke, as he and industry experts dive into the latest trends, insights, and strategies shaping the tech landscape in Canada. From hybrid cloud to AI adoption, CDW Canada Tech Talks covers it all. Don’t miss out—visit cdw.ca/tech talks to tune in today. There’s a link in the show notes.

Zoho Partners with NVIDIA to Deploy NVIDIA's New Open-Source LLM in Business Applications, MIT Spin-Off Liquid AI Unveils Ultraefficient Liquid Neural Networks, Google DeepMind Releases Open-Source Tool to Detect AI-Generated Text

Welcome to Hashtag Trending. I’m your host, Jim Love. Let’s get into it.


Zoho Partners with NVIDIA to Deploy Open-Source LLMs in Business Applications

Nine days ago, on October 15th, NVIDIA rocked the AI world by releasing a new open-source large language model.

While it has not yet been fully tested, the initial indications are that this model is competitive with, and in some specific areas has at least marginally outperformed, OpenAI's GPT-4o model.

It was unique in that NVIDIA promised this would be an open-source model, releasing not just the model but also the weights that drive it. NVIDIA has promised to release the training code as well. There are some restrictions, but it's fair to say that this is a true open-source model.

The model was released on Hugging Face and is there for those who have the skills, ability and hardware to use it.

Clearly, NVIDIA will benefit commercially as this model is put into wide usage. But it is still a great example of a for-profit company helping to democratize AI.

And today, nine days after the official launch, Zoho Corporation has announced that it will help bring the model to commercial use by leveraging NVIDIA's open-source large language models (LLMs), including NVIDIA NeMo, to build and deploy advanced AI capabilities across its software-as-a-service (SaaS) applications. The collaboration positions Zoho as a leader in bringing cutting-edge AI models to market, using the open-source weights and training data provided by NVIDIA.

By integrating NVIDIA's AI technology, Zoho aims to develop LLMs tailored to a wide range of business use cases. The company has invested over $10 million in NVIDIA's AI technology and GPUs in the past year, with plans to invest an additional $10 million in the coming year. The investment will make the new model available to Zoho's global customer base of more than 700,000 businesses.

Zoho's Director of AI, Ramprakash Ramamoorthy, emphasized the company's focus on business-driven AI solutions: "At Zoho, our mission is to develop LLMs tailored specifically for a wide range of business use cases. Owning our entire tech stack allows us to integrate the essential element that makes AI truly effective: context."

Zoho believes this new capability will make a real difference to companies. Early testing has shown a 60% increase in throughput and a 35% reduction in latency compared to previous open-source frameworks.

Zoho sees this open-source offering as another way to differentiate itself as a business platform.

And while it makes this open-source model available, the company also reinforced its commitment to privacy, removing another barrier to wider adoption of AI technology in businesses. Zoho has promised that its LLMs are compliant with privacy regulations from the ground up and that they will not be trained on customer data. Since the company owns and operates its own data centres and provides a fully integrated suite of offerings, it has the control to enforce that promise.

It’s still early in the game, but this offering will certainly be a challenge to the rest of the industry.

+++

MIT Spin-Off Liquid AI Unveils Ultraefficient Liquid Neural Networks

And speaking of challenges to the industry, Liquid AI, a startup emerging from MIT, has introduced a new class of what they call “liquid” neural networks.

Inspired by the neural workings of microscopic worms, these networks promise a major shift away from the transformer architecture that underpins existing AI models.

If their results match their claims and early demos, they offer the promise of huge leaps in efficiency, lower energy consumption, and enhanced transparency.

Unlike conventional neural networks that rely on static weights, liquid neural networks use equations to predict neuron behavior over time, allowing for continuous learning even after initial training. This dynamic approach not only reduces computational demands but also offers deeper insights into the decision-making processes of AI models.
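To make that a little more concrete, here is a minimal, hypothetical Python sketch of a liquid time-constant style neuron update, loosely in the spirit of the published research behind Liquid AI. The sizes, weights, and activation below are invented for illustration and are not Liquid AI's actual model; the point is simply that the effective time constant of each neuron shifts with the input rather than staying fixed.

```python
# Illustrative sketch only: a simplified liquid time-constant (LTC) style
# neuron update. All parameters here are hypothetical, not Liquid AI's model.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 8                      # hypothetical sizes
W_in = rng.normal(0, 0.5, (n_hidden, n_in))
W_rec = rng.normal(0, 0.5, (n_hidden, n_hidden))
bias = np.zeros(n_hidden)
tau = 1.0                                  # base time constant
A = rng.normal(0, 0.5, n_hidden)

def ltc_step(x, u, dt=0.05):
    """One Euler step of dx/dt = -(1/tau + f) * x + f * A, where f depends on
    both the input u and the current state x. Because f changes over time,
    the effective time constant changes too, which is what makes the
    dynamics 'liquid' rather than fixed."""
    f = np.tanh(W_in @ u + W_rec @ x + bias)
    dx = -(1.0 / tau + f) * x + f * A
    return x + dt * dx

# Run the neuron dynamics over a short, slowly varying input sequence.
x = np.zeros(n_hidden)
for t in range(100):
    u = np.array([np.sin(0.1 * t), np.cos(0.1 * t), 1.0])
    x = ltc_step(x, u)
print(x.round(3))
```

Because the state evolves through an explicit, compact equation, the trajectory of every neuron can be inspected step by step, which is related to the transparency claims described next.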

In a recent demonstration, the company showed how, unlike standard transformer models, these new "liquid" neural networks inherently have the ability to show the model's reasoning step by step. That auditability addresses a huge issue with current large language models, where companies freely admit that humans may never be able to follow how a model comes to a particular decision or action.

As a demonstration of the efficiency of these models, the company recently showed a voice-operated system running with no detectable latency on a Raspberry Pi, a small computer roughly equivalent to a ten-year-old laptop.

The company has developed several new models, including ones for financial fraud detection, self-driving car control, and genetic data analysis. These models are being licensed to external companies, with industry giants like Samsung and Shopify investing in and testing the technology.

The performance of the small local models is impressive and makes edge computing a realistic idea, overcoming one of the key problems of latency in cloud-based models.

Their larger model is still remarkably compact: Liquid AI's 40-billion-parameter language model outperformed Meta's Llama 3.1 on the MMLU-Pro problem set, highlighting the potential of liquid networks in natural language processing.

While the technology shows significant promise, challenges remain in adapting liquid neural networks to some tasks. And as we are all aware, some new offerings come to market with great demos that they have difficulty repeating in the real world.

But the degree of performance, the promise of making AI at the edge work and the enormous difference in energy usage make this an offering that should be watched carefully.

If they keep even a majority of their promises, Liquid AI's CEO Ramin Hasani may well be proven right in his prediction that the gains in efficiency and transparency will encourage wider adoption in enterprise applications, and the company will challenge the rest of the industry to step up its game.

+++

Google DeepMind Releases Open-Source Tool to Detect AI-Generated Text

And another company has rolled out an open source offering.

Google DeepMind has open-sourced SynthID-Text, a tool designed to identify AI-generated text by embedding watermarks during content creation, as detailed in a paper published in *Nature* on Wednesday. The tool aims to address issues like plagiarism, copyright violations, and misinformation by making it easier to distinguish between human-written and machine-generated content.

SynthID-Text works by subtly influencing a language model’s choice of words among equally likely options, embedding a unique pattern that serves as a key for later detection. Unlike traditional watermarking methods that alter content after it’s produced, this approach integrates the watermarking process during text generation, preserving the original meaning and quality of the content.
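As a rough illustration of that general idea, and emphatically not DeepMind's actual SynthID-Text algorithm, here is a hypothetical Python sketch of a generation-time watermark: a secret key nudges the choice among near-equally-likely candidate tokens, and the same key is used later to score a text for that bias. The hashing, candidate filtering, and scoring rule below are simplified placeholders.

```python
# Illustrative sketch only: a generic generation-time text watermark in the
# spirit described above. Not Google DeepMind's SynthID-Text implementation.
import hashlib

SECRET_KEY = b"hypothetical-watermark-key"   # placeholder key

def g_score(context: str, token: str) -> float:
    """Keyed pseudorandom score in [0, 1) for a (context, token) pair."""
    digest = hashlib.sha256(SECRET_KEY + context.encode() + token.encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def pick_token(context: str, candidates: list[tuple[str, float]]) -> str:
    """Among near-equally-likely candidates proposed by the model, nudge the
    choice toward tokens with a higher keyed score. This embeds a statistical
    pattern without changing which tokens were plausible in the first place."""
    best_prob = max(p for _, p in candidates)
    plausible = [tok for tok, p in candidates if p >= 0.8 * best_prob]
    return max(plausible, key=lambda tok: g_score(context, tok))

def detect(tokens: list[str]) -> float:
    """Average keyed score over a text. Watermarked text trends higher than
    the roughly 0.5 expected from unwatermarked text."""
    scores = [g_score(" ".join(tokens[:i]), tok) for i, tok in enumerate(tokens)]
    return sum(scores) / len(scores)

# Example: pick the next word given (hypothetical) model probabilities.
print(pick_token("The weather today is", [("sunny", 0.31), ("nice", 0.30), ("mild", 0.29)]))
```

In a scheme like this, ordinary text averages a score near 0.5 while watermarked text scores noticeably higher, and that statistical gap, not any visible change in the words, is what the detector keys on.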

Earlier this year, SynthID-Text was integrated with Google’s Gemini chatbots in what the company believes is the first large-scale deployment of a generative text watermark. In an analysis of about 20 million chatbot responses, DeepMind researchers found that users did not notice any difference in the quality or usefulness of watermarked versus unwatermarked text.

Despite its promise, the tool has limitations. It performs best with longer, open-ended prompts and is less effective with factual queries that offer fewer word choices. The watermark is robust against slight paraphrasing but becomes less reliable when the text is heavily rewritten or translated into another language. While SynthID-Text has high accuracy, it is not entirely foolproof and relies on widespread adoption to be most effective.

Google DeepMind is making SynthID-Text available to AI model developers to incorporate into their own systems, encouraging broader industry collaboration. As the prevalence of AI-generated text continues to rise, tools like SynthID-Text represent a significant step toward mitigating risks associated with automated content creation.

+++

That’s just some of the big news that hit this week in AI. Stay tuned to Hashtag Trending and Tech Newsday for what we think will be an amazing next few months.

We're working on some new programming, in partnership with some leaders in AI and educational institutions, that we hope will help you keep up. Watch this space.

And that’s our show for today.

Thanks to our sponsor, CDW, and KJ Burke's CDW Canada Tech Talks. Check it out if you get the chance. You can find it, like us, on Spotify, Apple, or wherever you get your podcasts.

Reach me at editorial@technewsday.ca

I’m your host Jim Love, have a Fabulous Friday.
