Judge Denies Meta’s Motion to Dismiss Lawsuit Over Youth Addiction: Hashtag Trending, Thursday, October 17, 2024


Judge Denies Meta’s Motion to Dismiss Lawsuit Over Youth Addiction to Social Media, FTC Enacts “Click-to-Cancel” Rule to simplify ending subscriptions, the FCC is investigating data caps, and a study from Apple’s AI researchers claims recent advances in logical problem solving by AI models may not be a sign that the models are learning to reason.

Welcome to Hashtag Trending. I’m your host, Jim Love. Let’s get into it.


Judge Largely Denies Meta’s Motion to Dismiss in Lawsuit Over Youth Addiction to Social Media

A federal judge has mostly rejected Meta Platforms’ attempt to dismiss a lawsuit brought by 34 states claiming that the company’s social media apps, including Facebook and Instagram, are designed to be addictive for minors. The decision by U.S. District Judge Yvonne Gonzalez Rogers highlights the continuing legal challenges faced by social media companies regarding the impact of their platforms on children’s health and public safety.

The states allege that Meta’s design and development strategies, particularly for Facebook and Instagram, encourage compulsive use among young users, resulting in various harms. Despite Meta’s defense citing Section 230 of the Communications Decency Act—which provides immunity from liability for content posted by third parties—the judge found that several of the consumer protection claims were valid. This section, however, did protect Meta from liability regarding certain platform features like infinite scroll and autoplay, which are integral to content delivery.

Significantly, the judge allowed the states’ claims related to the Children’s Online Privacy Protection Act (COPPA) to proceed, emphasizing that the platforms could be considered directed at children due to the nature of the content hosted. Furthermore, the judge rejected Meta’s request to dismiss the states’ failure to warn theory, which involves claims that Meta did not adequately inform users about the known risks associated with its platform features.

Meta has responded by highlighting its efforts to create safer and more controlled environments for young users, including recent changes to Instagram that impose stricter limits on interactions for teen accounts. However, the plaintiffs, including California Attorney General Rob Bonta, argue that Meta has consistently prioritized engagement over safety, contributing to a broader crisis of youth mental health influenced by social media usage.

The ruling allows the multi-district litigation, which consolidates a large number of claims from children, adolescents, school districts, local governments, and state attorneys general, to proceed in Oakland. This ongoing legal battle underscores the growing scrutiny of social media practices and the urgent calls for accountability in how these platforms operate, especially concerning their youngest users.


FTC Enacts “Click-to-Cancel” Rule to Simplify Ending Subscriptions and Memberships

The Federal Trade Commission (FTC) announced a decisive new “click-to-cancel” rule on October 16, 2024, aimed at making it easier for consumers to cancel subscriptions and memberships. The rule mandates that the process to cancel must be as straightforward as the process to enroll, addressing longstanding consumer frustrations with overly complicated cancellation procedures.

FTC Chair Lina M. Khan highlighted the common practice where businesses compel consumers to endure cumbersome processes to end their subscriptions, often resulting in unnecessary expenses for services no longer desired. The new rule intends to eliminate such practices, ensuring a straightforward path for consumers wishing to discontinue services.

This final rule applies to virtually all negative option programs in any media, programs in which a failure to cancel is treated as approval, and it aims to enhance transparency and fairness in digital marketing and sales. It also includes strict requirements for sellers, such as prohibiting the misrepresentation of essential facts in negative option marketing, mandating clear disclosure of material terms before obtaining billing information, and securing informed consent from consumers before initiating charges.

Developed as part of an ongoing review and modernization of the FTC’s 1973 Negative Option Rule, this update responds to the shift toward digital economies, where signing consumers up for recurring charges has become increasingly easy for businesses. The rule, set to take effect 180 days after publication in the Federal Register, follows a robust public consultation that began with a proposed rulemaking announced in March 2023, which attracted over 16,000 comments from various stakeholders.

The FTC noted a significant increase in consumer complaints regarding such practices, with daily complaints rising sharply over the past five years. In response, the final rule stipulates several prohibitions aimed at protecting consumers, including clear guidelines on the cancellation process and the immediate cessation of charges once a cancellation request is made.

Interestingly, the final rule was approved by a 3-2 vote, reflecting some dissent within the Commission. Sadly, the adopted rule omits previously considered requirements such as annual reminders to consumers about their subscriptions and constraints on how sellers can interact with consumers attempting to cancel.

The FTC has prepared a fact sheet summarizing these changes; the primary staffer on the matter is Katherine Johnson of the FTC’s Bureau of Consumer Protection.

Sources include: FTC press release and notification

FCC Investigates Broadband Data Caps Amid Growing Consumer Concerns 

And another US regulatory body, the Federal Communications Commission (FCC), has initiated a formal inquiry into the necessity and impact of broadband data caps in the United States. This move comes as part of a broader effort to understand the implications of these caps for consumers and competition, especially given the increased reliance on digital connectivity for daily activities.

Data caps, which limit the amount of data a consumer can use each month, have become a contentious issue. Exceeding these caps can lead to additional fees or reduced internet speeds, practices that have drawn criticism from various consumer advocacy groups. The FCC’s notice of inquiry, approved on Tuesday, aims to explore whether these data caps are justified by current technological capacities and the growing demand for broadband access.

Emma Roth, reporting for The Verge, notes that the FCC began collecting public comments on this issue in June. The public’s input is still being solicited through a form on the FCC’s website, where individuals can share their experiences with data caps, including any associated challenges.

FCC Chairwoman Jessica Rosenworcel highlighted the practical difficulties and potential unfairness of data caps, noting the particular strain they place on small businesses, low-income families, and individuals with disabilities who rely heavily on internet access. The inquiry will also review the effectiveness of recent regulatory measures, such as the mandatory “nutrition labels” for internet plans, which aim to improve transparency about data limits and service terms.

This inquiry reflects a critical evaluation of whether current data cap practices align with the public interest, as the nation becomes increasingly dependent on digital services for entertainment, work, and communication.

Sources include: The Verge (includes a link to the government paper on this investigation)

Apple AI Team Highlights Flaws in Large Language Models’ Reasoning Abilities

A new report from Apple’s artificial intelligence research team is casting doubts on the reasoning capabilities of large language models (LLMs) developed by entities like OpenAI and Meta. The report critically evaluates whether these AI models truly possess the critical thinking necessary to reliably solve problems, a vital trait for real-world applications.

When they first emerged, large language models, which as their name suggests are grounded in language, struggled with even basic mathematics. That changed relatively early, but while their arithmetic became more accurate, the models still couldn’t handle logical mathematical challenges. That too has changed, with Google’s Gemini being the first model to score at a level that would earn a silver medal in a high school math olympiad.

And recently, large language models have shown impressive performance on structured logic puzzles, brain teasers that were formerly largely unsolvable for generative AI. Models from OpenAI and others have not only improved at mathematical logic but can now work their way through puzzles that defeated them in the past.

One such problem is the river crossing problem. A hunter, a wolf, a cabbage, and a sheep need to cross a river. The boat only fits two of them at a time, so the hunter can take at most one passenger per trip. How does he get all three of his charges across without the sheep eating the cabbage or the wolf eating the sheep?
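As an aside, the puzzle itself is tiny for a classical algorithm: a breadth-first search over the handful of possible bank states finds a solution instantly. Here is a minimal Python sketch; the state encoding and names are our own illustration, not anything from the Apple paper.

```python
# Brute-force river crossing solver: breadth-first search over states.
# A state records which items are on the near bank and where the hunter is.
from collections import deque

ITEMS = ("wolf", "sheep", "cabbage")

def unsafe(bank):
    # A bank left without the hunter is unsafe if the sheep can eat the
    # cabbage or the wolf can eat the sheep.
    return ("sheep" in bank and "cabbage" in bank) or \
           ("wolf" in bank and "sheep" in bank)

def solve():
    start = (frozenset(ITEMS), 0)   # all items on the near bank, hunter near
    goal = (frozenset(), 1)         # nothing left on the near bank, hunter far
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (near, hunter), path = queue.popleft()
        if (near, hunter) == goal:
            return path
        here = near if hunter == 0 else frozenset(ITEMS) - near
        for cargo in [None, *here]:  # cross alone, or take one passenger
            new_near = set(near)
            if cargo is not None:
                (new_near.remove if hunter == 0 else new_near.add)(cargo)
            new_near = frozenset(new_near)
            # The bank the hunter just left is now unattended.
            left_behind = new_near if hunter == 0 else frozenset(ITEMS) - new_near
            state = (new_near, 1 - hunter)
            if not unsafe(left_behind) and state not in seen:
                seen.add(state)
                queue.append((state, path + [cargo or "alone"]))

print(solve())
# One valid 7-trip answer: ['sheep', 'alone', 'wolf', 'sheep', 'cabbage',
# 'alone', 'sheep'] (wolf and cabbage may trade places; both orders work)
```

The point of the contrast: a few dozen lines of search solve this exhaustively and provably, which is exactly why researchers use such puzzles to ask whether an LLM is reasoning or merely recalling.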

Recently, OpenAI’s GPT-4o model solved this easily and has produced impressive results on other, similar problems.

But Apple researchers argue that these achievements might not signify genuine reasoning. 

The Apple study claims to have found a significant vulnerability: slight modifications in how questions are structured, or the addition of irrelevant details, can lead to incorrect answers or logical inconsistencies. In short, the models could be misled by irrelevant data.

So Apple’s team set aside the standard tests and introduced a new benchmark, GSM-Symbolic, specifically designed to probe the depth of LLMs’ reasoning by slightly altering the phrasing or context of queries and observing the effect on the models’ responses.

In one test they adapted a problem about how many apples someone could pick over three days, but changed the fruit to a less familiar one: kiwis. They also added a piece of irrelevant information, noting that some of the kiwis were smaller than average. With that change, they claimed, many models failed to get the right answer.
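To make the idea concrete, here is a rough sketch of that kind of templated perturbation: the names, fruits, and numbers vary, and an irrelevant “distractor” sentence can be appended, while the arithmetic the model must perform stays identical. The template and wording below are our own illustration, not taken from the benchmark itself.

```python
# Hypothetical GSM-Symbolic-style variant generator: vary the surface
# details of a word problem and optionally inject an irrelevant clause.
import random

TEMPLATE = (
    "{name} picks {fri} {fruit}s on Friday, {sat} on Saturday, and on "
    "Sunday picks double the number picked on Friday. {distractor} "
    "How many {fruit}s does {name} have?"
)

def make_variant(with_distractor=False):
    fri, sat = random.randint(30, 60), random.randint(30, 60)
    distractor = (
        "Five of the ones picked on Sunday were a bit smaller than average."
        if with_distractor else ""
    )
    question = TEMPLATE.format(
        name=random.choice(["Oliver", "Mia", "Ravi"]),
        fruit=random.choice(["kiwi", "apple", "plum"]),
        fri=fri, sat=sat, distractor=distractor,
    )
    question = " ".join(question.split())  # tidy spacing when no distractor
    answer = fri + sat + 2 * fri           # the smaller fruit still count
    return question, answer

q, a = make_variant(with_distractor=True)
print(q)
print("correct answer:", a)
```

A model that genuinely reasons should give the same answer with or without the distractor; Apple’s claim is that many models instead let the irrelevant sentence pull the arithmetic off course.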

The study suggests that current LLMs, including those tested like ChatGPT, may rely on sophisticated pattern recognition rather than deep understanding, or may simply have learned these particular tests from their training data, either of which would lead to errors when faced with slightly altered or contextually enriched questions. This “data leakage” or “fragility” indicates that while LLMs are excellent at matching patterns, their ability to perform true logical deduction across varied contexts remains limited.

Apple’s findings highlight a crucial consideration for deploying LLMs in scenarios that require dependable reasoning, such as in medical diagnostics, legal interpretations, or complex decision-making processes where nuanced understanding and consistency are essential.

Despite these findings, independent tests, including those conducted by other reviewers, have shown that models like ChatGPT can successfully navigate complex logic puzzles without error, even when scenarios and elements are significantly altered. This discrepancy raises questions about the conditions under which LLMs may falter and their implications for future AI development and deployment.

But when we ran their tests through GPT-4o, we couldn’t replicate their results. We published a story on TechNewsDay.com today with some of the results. And in case the models had somehow been fed the details of Apple’s paper or the new testing approach (unlikely, but possible), we altered the tests ourselves. We made numerous changes and introduced irrelevant details, and both GPT-4 and GPT-4o were able to solve the problems easily.

Where does this leave us? It may be that we are looking at this the wrong way. Maybe, as Geoffrey Hinton, one of the fathers of AI, has said, AI may have a different kind of intelligence. Perhaps it will never reason the way we do, but does that matter? If it functions as well as or better than humans, judged by the results it gets rather than how it gets them, its value is proven.

What is needed is a consistent set of objective benchmarks to determine the practical capabilities of AI models, with less worry about whether they think the way we do. And given what we are seeing in the world today, NOT reasoning like humans may even be a superior characteristic, occasional errors and all.

And that’s our show for today. 

 Reach me at editorial@technewsday.ca 

I’m your host Jim Love, have a thrilling Thursday.
