Report Reveals Cyber Security Threats For Holiday Shopping Season: Cyber Security Today for Friday, November 15, 2024


Getting Ready for the Surge: Cyber Exploits Target Online Holiday Shoppers, Report Shows; Secret Service Claims No Warrant Needed for Location Data Tracking via App Permissions; Anthropic’s Claude AI Tested by DOE for Nuclear Safety Amid Rising Government Interest

This is Cyber Security Today. I’m your host, Jim Love.


Getting Ready for the Surge: Cyber Exploits Target Online Holiday Shoppers, Report Shows

As the holiday season approaches, a new report from BforeAI looks at how cybercriminals are impersonating retail brands and creating sophisticated scams to target online shoppers. BforeAI analyzed around 6,000 domains registered over the last three months, highlighting how these attacks are evolving with new techniques and more advanced deception.

Some of the key threats they identified include:

Brand Spoofing and Domain Manipulation: Of the 6,000 domains studied, 4,036 used popular brand names like Walmart, Amazon, and Target, often appended with terms like “shop,” “deal,” or random numbers (e.g., ebay-088.com). A simple heuristic for spotting these lookalike domains is sketched after this list.

These manipulated names mimic legitimate URLs and are registered on .com, .shop, and .xyz domains, which are cheap and easily mistaken for trusted sites. It’s a common way for scammers to trade on recognizable brands and build seemingly credible phishing sites.

Malware-Laden Fake Apps: Among the 185 sites the report confirmed as malicious, several promoted fake mobile apps mimicking legitimate platforms like Amazon and Flipkart. Distributed through third-party links on phishing websites, these fake apps aim to harvest users’ credentials and credit card details. For example, one phishing site linked to an unofficial “Amazon” app for Android designed to quietly siphon data from user devices.

Fraudulent Sites Tied to “Biggest Sale of the Year” Claims: More than 1,500 domains promoted discounts tied to specific retail brand events like “Big Billion Days” (a Flipkart promotion). These sites feature banners and design elements that echo well-known brands, luring users in with limited-time offers and prompting them to enter personal and payment information on cloned payment pages.

Chatbots and Fake Customer Support: The report found cybercriminals embedding chatbots on fraudulent sites to simulate live support, making the sites appear more legitimate. These bots often guide users to “support links” that ultimately lead to phishing pages or malware downloads. For instance, a fake Walmart site used a chatbot to request sensitive details under the pretense of order assistance.

Cryptocurrency Scams on Retail Sites: In a new twist, cybercriminals are integrating cryptocurrency “wallet connections” into fake retail sites, urging users to link their digital wallets for purchases. These fraudulent sites steal wallet credentials and siphon funds in irreversible crypto transactions, capitalizing on the growing use of digital currencies in retail.

Investment Scams Masquerading as Retail Offers: In another new twist, some domains used brands like Walmart to lure users into investment schemes disguised as retail jobs or investment opportunities. Victims are contacted through messaging apps like WhatsApp and Telegram and encouraged to invest money, only to be locked out of “group chats” once they make significant deposits.
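To make the brand-spoofing pattern concrete, here is a minimal, illustrative sketch of the kind of heuristic a defender might use to flag lookalike domains. It is not code from the BforeAI report; the brand list, keywords, and TLDs below are assumptions chosen for demonstration only.

```python
# Minimal heuristic for flagging lookalike retail domains.
# Illustrative sketch only: the brand list, keywords, and TLDs are
# assumptions, not taken from the BforeAI report.
import re

BRANDS = {
    "amazon": "amazon.com",
    "walmart": "walmart.com",
    "target": "target.com",
    "ebay": "ebay.com",
    "flipkart": "flipkart.com",
}
SHOPPING_TERMS = ("shop", "deal", "sale", "offer", "discount")
CHEAP_TLDS = (".shop", ".xyz", ".top", ".online")

def flag_lookalike(domain: str) -> list[str]:
    """Return the reasons a domain looks like a brand-spoofing site."""
    d = domain.lower().strip(".")
    reasons = []
    for brand, official in BRANDS.items():
        # Brand name present, but the domain is not the official one.
        if brand in d and d != official and not d.endswith("." + official):
            reasons.append(f"contains brand '{brand}' but is not {official}")
            if any(term in d for term in SHOPPING_TERMS):
                reasons.append("brand name combined with a shopping keyword")
            if re.search(r"\d{2,}", d):
                reasons.append("brand name combined with random-looking digits")
            if d.endswith(CHEAP_TLDS):
                reasons.append("registered on a cheap, commonly abused TLD")
    return reasons

if __name__ == "__main__":
    for dom in ("ebay-088.com", "walmart-bigsale.shop", "amazon.com"):
        print(dom, "->", flag_lookalike(dom) or "no flags")
```

Real detection tools combine many more signals, such as domain age, registrar reputation, and visual similarity of the cloned site itself, but even a simple check like this shows how mechanical the spoofing pattern is.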

The report illustrates just how quickly tactics in online retail scams are evolving, particularly around these high-traffic shopping periods.

The link to the report is shown below. It’s an interesting look at what to expect in the coming month, and if you’re interested there’s also a link in the show notes.

https://bfore.ai/2024-online-holiday-retail-threat-report/

Secret Service Claims No Warrant Needed for Location Data Tracking via App Permissions

Internal emails obtained by tech blog 404 Media through a Freedom of Information Act (FOIA) request reveal that the U.S. Secret Service has used location data from ordinary smartphone apps for tracking purposes, asserting that user consent was granted through app terms of service. That consent, the agency claims, negates the need for a warrant.

The emails detail the agency’s use of Locate X, a tool by a firm called Babel Street. Locate X enables tracking of an individual’s movements based on data gathered from common apps.

The emails also reveal internal debate, with some Secret Service officials raising concerns about the legality of using such data without a warrant. One email noted that the practice could conflict with the Fourth Amendment in light of the Carpenter v. United States ruling, which requires a warrant for cell-site location data.

Babel Street’s stance, however, was that consent granted through terms of service allows the data’s collection and sale: “A warrant isn’t needed because the user gives consent.”

At least one U.S. senator, Ron Wyden, responded, stating that the practice likely violates the Fourth Amendment and emphasizing the need for legislation along the lines of the Fourth Amendment Is Not For Sale Act, which would limit government access to commercial data.

While the Secret Service has ceased using Locate X, it is not known how many other government agencies in Canada and the U.S. might be using this tool or similar tactics. The case highlights ongoing issues around user privacy, consent, and government surveillance methods.

Going through with a FOIA request like this, especially for a small publication, is a big deal. 404 Media has been gracious enough to share many of the details on their blog. There’s a link in the show notes in case you want to check it out, or even chip in a few bucks to help fund this kind of work. We are also reaching out to them to see if we can do an interview on this topic for our weekend edition. Watch this space.

https://www.404media.co/fyi-a-warrant-isnt-needed-secret-service-says-you-agreed-to-be-tracked-with-location-data/

Anthropic’s Claude AI Tested by DOE for Nuclear Safety Amid Rising Government Interest

Anthropic is collaborating with the Department of Energy’s (DOE) National Nuclear Security Administration (NNSA) to ensure its Claude AI models aren’t capable of providing information that could be misused to develop nuclear weapons. This partnership marks the first time a leading AI model has been deployed in a classified setting, potentially setting a precedent for future government-AI collaborations.

Since April, the NNSA has been “red-teaming” Anthropic’s Claude 3 Sonnet model, testing its responses to ensure they don’t reveal sensitive nuclear data. The project has now been extended to cover Claude 3.5 Sonnet, released in June. According to Anthropic, the findings will eventually be shared with research labs to assist in their own security evaluations.

Marina Favaro, Anthropic’s national security policy lead, emphasized the federal government’s expertise in evaluating national security risks in AI, noting, “This work will help developers build stronger safeguards for frontier AI systems that advance responsible innovation and American leadership.”

With increasing interest from government agencies, Anthropic recently launched a partnership with Palantir and Amazon Web Services to make Claude accessible to U.S. intelligence agencies. OpenAI and Scale AI are also securing government contracts, while broader AI safety policies are becoming central to national discussions. 

While Anthropic is regarded as the leader in AI safety, OpenAI also has deals with the Treasury Department, NASA and other agencies, and Scale AI has developed a model based on Meta’s open-source Llama that is aimed at the defence sector.

President Biden recently called for AI safety tests in classified settings, though the future of these initiatives may face uncertainty under the incoming administration.

And while the Trump administration’s response to AI regulation is still an unknown, Elon Musk is undeniably in the new president’s inner circle, and Musk has been outspoken about the risks of AI. That concern reportedly drove him to help found OpenAI. He also supported California’s recent, ultimately unsuccessful legislation that attempted to impose tougher safety measures on AI development.

That’s our show for today.

You can find links to reports and other details in our show notes at technewsday.com. We welcome your comments, tips and the occasional bit of constructive criticism at editorial@technewsday.ca

I’m your host, Jim Love. Thanks for listening.
