Time's up for TikTok. Or is it? Hashtag Trending for Thursday, April 25, 2024


Time's up for TikTok – or is it? A whirlwind of news in AI this week. And an incredible story of the creativity of those who keep legacy tech alive.

All this and more on the “firehose” edition of Hashtag Trending. I’m your host, Jim Love. Let’s get into it.


US lawmakers have passed a bill forcing TikTok's Chinese parent company ByteDance to sell off the app's American operations – or potentially see it banned nationwide.

The US Senate overwhelmingly approved the bipartisan Protecting Americans from Foreign Adversary Controlled Applications Act, or PFACAA, by a vote of 79 to 18 on Tuesday.

PFACAA – a law whose acronym may be harder to say than its full name – compels ByteDance to divest its stake in TikTok within nine months and have a US-approved buyer lined up, citing national security concerns.

If ByteDance fails to comply, the law would make it illegal for TikTok to be distributed or updated on any American app stores or web hosting services starting in late January 2025 – essentially banning it from distribution in the United States.

President Biden has said he will sign the bill into law, despite TikTok’s vocal opposition.

TikTok CEO Shou Zi Chew slammed the new law as “a ban on you and your voice”, vowing to fight it in court as an “unconstitutional” violation of free speech.

US officials, when you can get them to go on the record, say that they are not against TikTok. In fact, the Biden campaign has been an active spender on TikTok, hoping to reach the all-important youth vote. They are, however, concerned that TikTok's user data could be accessible to Beijing and used for espionage or spreading propaganda, though TikTok denies any improper data sharing.

But for those who fear the app could be misused to influence public opinion, the New York Times reported that the app had quietly shut down an analytics tool that critics had used to question the company's content moderation, particularly in regard to what some had claimed was a pro-Palestinian bias in the Israel-Hamas conflict.

The tool, called Creative Center, was used to track popular hashtags and audience numbers. As of last week, the tool's search button had been removed, along with any links related to the war and US politics.

This lack of transparency fuels the fears of those who worry about the massively popular app being used to unduly influence its 170 million American users, with a growing number of younger users now regarding the app as a major, if not sole, source of current affairs information.

And TikTok is aggressively fighting the legislation. ByteDance, its parent company, spent a record $2.7 million lobbying US lawmakers and officials between January and March of this year. TikTok itself spent over $4.5 million on an advertising blitz opposing any ban. That's more than $7 million combined.

Some of that lobbying apparently reached the former President of the US, who came out against the legislation, reportedly after meetings with a key TikTok investor.

And that doesn't count the "free" lobbying by TikTok users. Reportedly, members of both the House of Representatives and the Senate were actively targeted in campaigns to defeat the legislation – which, as lawmakers point out, is not designed to shut down TikTok, but to ensure that it's not controlled by a foreign government whose interests may be, and often are, opposed to those of the US government.

But if those campaigns did anything, they seemed to unite and strengthen the resolve of politicians. In two bodies where, as one legislator quipped, you "couldn't get agreement that the days of the week end in 'y'," a rare bipartisan coalition emerged. PFACAA passed with overwhelming margins – 79 to 18 in the Senate and more than 300 votes in the House.

But the fight is not over. TikTok now faces an incredibly high-stakes battle in the courts to block or overturn the law, or it must find an approved American buyer willing to pay tens of billions of dollars within the nine-month window to continue operating in the US. And despite TikTok's frequent claims that the Chinese government does not interfere in its affairs, few doubt that any sale would need the approval of the Chinese government.

Sources include: The Register, the BBC, CNBC and the New York Times

If you are struggling to keep up with the developments in AI, you are not alone. This week we’ve seen the emergence of a number of powerful AI models and novel approaches that are enabling smaller, faster, and even infinitely context-aware systems.

The race is definitely on to develop leaner and speedier large language models that can match or even surpass the capabilities of compute-intensive giants like GPT-4.

Meta, Facebook's parent, has recently launched a new version of its open-source AI, Llama 3, which it claims is smaller and faster but equal to the larger models from OpenAI, Anthropic or Google.

A key player making waves is a Silicon Valley startup called Groq, spelled with a q – not to be confused with Elon Musk's Grok, spelled with a k.

Groq with a q has developed specialized language processing units, or LPUs, that have demonstrated blisteringly fast performance – up to 10 times speedier than GPUs for AI.

Now take that hardware acceleration and put it on a new model…

On the Groq cloud, the latest Llama 3 model from Meta can churn out over 800 tokens per second on certain prompts, leaving GPT-4 in the dust according to some users.

A token is the basic unit of processing in AI and represents a word or a part of a word.
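For a concrete feel, here's a quick, illustrative sketch using OpenAI's open-source tiktoken tokenizer – the exact token boundaries and counts vary from model to model:

```python
# Quick illustration of tokenization using OpenAI's open-source tiktoken
# library; token boundaries and counts differ between tokenizers/models.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # the encoding used by GPT-4-era models
text = "Hashtag Trending covers the latest technology news."
token_ids = enc.encode(text)

print(len(token_ids), "tokens")
# Decoding each id on its own shows how words split into sub-word pieces.
print([enc.decode([t]) for t in token_ids])
```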

This huge increase in token throughput comes at a fraction of the cost of OpenAI's and Anthropic's models. Llama 3 on Groq represents a compelling option for enterprises looking to integrate AI at scale.
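For the developers listening, here's a minimal sketch of what calling Llama 3 on Groq's cloud looks like, using Groq's OpenAI-style Python SDK. The model identifier and package details here are assumptions based on Groq's public documentation, so treat this as a rough guide rather than gospel:

```python
# Minimal sketch of calling Llama 3 on Groq's cloud via its Python SDK.
# Assumes `pip install groq` and a GROQ_API_KEY environment variable;
# the model name below is an assumption and may change over time.
import os
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

response = client.chat.completions.create(
    model="llama3-70b-8192",  # Meta's Llama 3 70B, served on Groq LPUs
    messages=[
        {"role": "user", "content": "Summarize today's top tech story in two sentences."}
    ],
)

print(response.choices[0].message.content)
# The usage field reports token counts, handy for estimating cost and throughput.
print(response.usage)
```

Because the interface mirrors OpenAI's chat API, swapping a faster, cheaper back end in behind existing code is a fairly small change – a big part of the appeal for enterprises.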

But raw speed is just one frontier being pushed. Researchers are also tackling the long-standing challenge of unlimited context length for language models.

It's strange to be calling something that was, and may still be, as revolutionary as transformer-based models "conventional," but that's what they have become at the lightspeed pace of change in AI development. And despite the rapid expansion in the number of tokens that current AI models can process, these conventional transformer models are ultimately restricted in the number of tokens they can ingest and retain. When those limits are exceeded, the models become unstable and increasingly inaccurate.
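One way to see why that limit exists: standard self-attention compares every token with every other token, so the work and memory grow roughly with the square of the context length. A purely illustrative back-of-the-envelope calculation:

```python
# Rough, illustrative arithmetic for why standard transformer attention
# struggles at very long context lengths: the attention matrix is n x n.
def attention_scores(n_tokens: int) -> int:
    return n_tokens * n_tokens  # one score per token pair, per head, per layer

for n in (4_000, 128_000, 1_000_000):
    scores = attention_scores(n)
    gib = scores * 2 / (1024 ** 3)  # at 2 bytes per score (fp16)
    print(f"{n:>9,} tokens -> {scores:,} pairwise scores (~{gib:,.1f} GiB per head per layer)")
```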

But innovative architectures from Meta, Google, Microsoft and others are aiming to shatter this barrier.

Meta has given no ground to Google, Anthropic, OpenAI and Microsoft. Its Llama model is competitive with any other. Its new offering, "MEGALODON," is designed for efficient and, the company claims, potentially unlimited-length sequence modeling.

Google has introduced methods like "Infini-Attention" and "Feedback Attention Memory" that it claims will allow existing large language models to process infinitely long contexts. And Microsoft researchers have published techniques to dramatically extend the context window beyond 2 million tokens.
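The papers behind these methods are far more sophisticated, but the common intuition is to process the input in fixed-size segments and fold older segments into a bounded memory rather than attending to everything at once. Here's a purely conceptual toy sketch of that idea – not Google's or Microsoft's actual algorithms:

```python
# Conceptual toy sketch of segment-wise processing with a bounded memory --
# NOT the actual Infini-Attention or related algorithms. The point is that
# cost grows linearly with sequence length while the memory stays fixed-size.
import numpy as np

SEGMENT = 512   # tokens processed exactly at a time
D_MODEL = 64    # toy hidden size

def process_long_sequence(token_embeddings: np.ndarray) -> np.ndarray:
    """token_embeddings: (n_tokens, D_MODEL). Returns a fixed-size state."""
    memory = np.zeros(D_MODEL)  # bounded "compressive" memory
    for start in range(0, len(token_embeddings), SEGMENT):
        segment = token_embeddings[start:start + SEGMENT]
        # Stand-in for local attention restricted to this segment.
        local_summary = segment.mean(axis=0)
        # Stand-in for a learned memory-update rule: fold the segment in.
        memory = 0.9 * memory + 0.1 * local_summary
    return memory

state = process_long_sequence(np.random.randn(10_000, D_MODEL))
print(state.shape)  # (64,) no matter how long the input was
```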

The implications are profound – language AIs that can understand and reason over documents, books or even databases in their entirety without forgetting key details.

At the other end of the spectrum, Microsoft is coming out with a model called Phi-3 that is small enough to run on a phone but, according to early reviews, is remarkably powerful. This is only one of many developments: Apple will undoubtedly be developing its own small-footprint model as well, along with many others trying to grab the gold ring of AI in the mobile space.
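As a rough sketch of how accessible a model this size is, here's what trying it with Hugging Face's transformers library might look like. The model identifier is an assumption based on Microsoft's public release, and an actual phone deployment would typically use a quantized build:

```python
# Sketch of trying a small model like Phi-3 mini locally with Hugging Face
# transformers. The model id is an assumption based on Microsoft's release;
# real on-device use would normally rely on a quantized version.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",
    trust_remote_code=True,  # Phi-3 shipped with custom modeling code
)

out = generator(
    "Explain in one sentence why small language models matter for phones.",
    max_new_tokens=60,
)
print(out[0]["generated_text"])
```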

And Microsoft and Apple cannot afford to be slow to market. While companies like Humane have stumbled in their attempts to bring a wearable or portable device to market, others, like the former Rewind – now relaunched as Limitless – are coming to market with a low-priced wearable AI device in the coming months. And, responding to skeptical critics, the founder of Rabbit did a live demo of the R1, a small, low-priced AI device, demonstrating the power of its agent-based model. Both of these devices are, or will be, available for only a few hundred US dollars.

And while the giants of the industry are commanding all of our attention, there are other companies pushing the envelope. One of these, Snowflake, has unveiled "Arctic" – an open enterprise language model that achieves top-tier performance while activating only a fraction of its parameters compared to other large models. Although Arctic totals 480 billion parameters, only a small subset of them is active for any given token, and the company says it matches or exceeds the capabilities of models like Llama 3 and Claude on many benchmarks.
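That trick of "activating a fraction of the parameters" is the mixture-of-experts idea: a router sends each token to just a few expert sub-networks, so most of the model sits idle for any given token. Here's a toy sketch of the routing concept – illustrative only, not Snowflake's actual architecture:

```python
# Toy sketch of top-k mixture-of-experts routing -- illustrative only,
# not Snowflake Arctic's real architecture. Only the chosen experts'
# parameters are used for a given token.
import numpy as np

N_EXPERTS, TOP_K, D = 8, 2, 16
experts = [np.random.randn(D, D) for _ in range(N_EXPERTS)]  # expert weights
router = np.random.randn(D, N_EXPERTS)                       # routing weights

def moe_layer(token: np.ndarray) -> np.ndarray:
    scores = token @ router
    chosen = np.argsort(scores)[-TOP_K:]   # pick the top-k experts for this token
    weights = np.exp(scores[chosen])
    weights = weights / weights.sum()      # normalize their votes
    return sum(w * (token @ experts[i]) for w, i in zip(weights, chosen))

print(moe_layer(np.random.randn(D)).shape)  # (16,) -- 6 of the 8 experts untouched
```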

There’ve been a lot of stories this week. And we’ll put links into the show notes so you can check some of these out, but this is what a week or two looks like in the world of AI.

Sources include:

Llama 3 on Groq

Snowflake

Infinite Context Length

Google Beats Transformer Models

Phi 3 from Microsoft

Rabbit R1 Demo

And for anybody out there who is stuck maintaining some old legacy system and looking for miracles when it breaks in the middle of the night, there was a great story this week about fixing some 1970s tech.

In an impressive feat of ingenuity, NASA engineers have found a clever way to bring the venerable Voyager 1 spacecraft back from the brink after months of radio silence.

The 46-year-old probe, which is the only human-made object to reach interstellar space, had stopped transmitting understandable data back in November – posing a dire threat to one of NASA’s longest-running missions.

By March of this year, the Voyager 1 support team at NASA's Jet Propulsion Laboratory had traced the data-garbling issue to the spacecraft's flight data subsystem, or FDS – the computer that packages up all the scientific and telemetry info before beaming it 15 billion miles back to Earth.

A portion of the FDS memory had failed, corrupting the data stream. But with Voyager’s decades-old computer architecture and extremely limited memory, the engineers couldn’t simply transfer the affected code elsewhere in one piece. The problematic software was too large to relocate intact.

That's when the team got creative. Their innovative hack? Slice the corrupted FDS code into smaller sections and strategically scatter them across the remaining healthy memory locations – almost like a high-tech game of Tetris in deep space.
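Here's a purely illustrative sketch of the relocation idea – emphatically not NASA's actual FDS software or tooling – just to picture splitting a block that no longer fits in one contiguous healthy region across several smaller free regions, while keeping track of where each piece lands:

```python
# Purely illustrative sketch of the relocation idea -- NOT NASA's actual
# FDS code: split a block that can't fit contiguously into chunks and
# scatter them across the remaining healthy memory regions, recording
# where each chunk lands so the pieces can still reference one another.
def relocate(code: bytes, free_regions: list[tuple[int, int]]) -> list[tuple[int, bytes]]:
    """free_regions: (start_address, size) of healthy memory. Returns placements."""
    placements, offset = [], 0
    for start, size in free_regions:
        if offset >= len(code):
            break
        chunk = code[offset:offset + size]
        placements.append((start, chunk))  # this chunk will live at `start`
        offset += len(chunk)
    if offset < len(code):
        raise MemoryError("not enough healthy memory to hold the relocated code")
    return placements

# Example: a 300-byte block spread across three smaller healthy regions.
plan = relocate(b"\x00" * 300, [(0x0100, 128), (0x0400, 96), (0x0900, 128)])
print([(hex(addr), len(chunk)) for addr, chunk in plan])
```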

It wasn't an easy process by any stretch. Every tiny adjustment had to be meticulously tested and verified before being uploaded to the spacecraft, which sits so far away that signal round-trip times stretch to 45 hours.

But incredibly, just the first few relocated code blocks seem to have done the trick. For the first time in nearly half a year, Voyager 1 has begun transmitting coherent data about the status of its systems back to its anxious flight team on Earth.

In the coming weeks, NASA will relocate and optimize the rest of the flight data software using this ingenious piecemeal approach. If all goes well, they may be able to fully revive not just the engineering telemetry, but the stream of precious interstellar science data flowing from Voyager’s suite of instruments.

It's an improbable reprieve for one of the most amazing spacecraft ever launched – built with 1970s technology and still beaming back data from the cosmic void, over 15 billion miles away, 46 years after its launch. Thanks to the clever tinkering of its JPL team, the first probe to leave our solar system may have years of life left yet.

So while we marvel at the new developments in tech, let's raise a toast to the unsung heroes who keep the mountain of legacy tech that's out there running – for the benefit of us all.

We salute you and your immense creativity!

Sources include: The Byte

And that’s our show.

Hashtag trending goes to air five days a week with a weekend interview show. And we are also on YouTube – check us out. Give us a like or a subscribe and help us build that audience.

Find us at our new home at technewsday.ca or .com – you pick. And you can reach me with comments, suggestions or even criticism at therealjimlove@gmail.com or at editorial@technewsday.ca

We’re going to do a redesign of the pages but for now you can find us in the top stories each day.

I'm your host Jim Love, have a Thrilling Thursday.