Guardio Labs exposes FakeGPT Chrome extension


Guardio Labs researcher Nati Tal has discovered a Chrome extension that promotes quick access to bogus ChatGPT functionality while hijacking Facebook accounts and installing hidden account backdoors.

According to Tal, the new FakeGPT extension is a clone of the original Fake ChatGPT extension, which was intended to generate fictitious text conversations for amusement. The new variant, however, has been modified to steal Facebook ad accounts.

The campaign pairs an aggressive account-takeover method with a sophisticated, worm-like means of propagation. The malicious stealer extension, titled “Quick access to Chat GPT,” is promoted in sponsored Facebook posts as a quick way to start using ChatGPT directly from the browser.

Once installed, the extension icon opens a small popup window with a prompt to ask ChatGPT anything. Because the extension runs inside the browser, it can send requests to any other service as if the browser’s owner had initiated them from the same context. The extension then gains access to Meta’s Graph API for developers, allowing the threat actor to quickly pull all of the user’s details and take actions on the user’s behalf directly in their Facebook account via simple API calls.
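To illustrate how little effort that access requires, below is a minimal, hypothetical sketch (not taken from the Guardio report) of a read-only Graph API request for basic profile fields once a valid user access token is in hand. The endpoint and parameters follow Meta’s public Graph API documentation; the token shown in the usage note is a placeholder.

```typescript
// Hypothetical sketch: a single Graph API call is enough to read basic
// profile details once an access token for the account is available.
// Endpoint and query parameters follow Meta's public Graph API docs;
// no real credentials appear here.
const GRAPH_API = "https://graph.facebook.com/v16.0";

async function fetchProfile(accessToken: string): Promise<unknown> {
  const url = `${GRAPH_API}/me?fields=id,name,email&access_token=${encodeURIComponent(accessToken)}`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Graph API request failed with status ${response.status}`);
  }
  return response.json(); // e.g. { id: "...", name: "...", email: "..." }
}

// Usage with a placeholder token:
// fetchProfile("EAAB...placeholder").then(console.log).catch(console.error);
```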

This is accomplished by registering two bogus Facebook applications, portal and msg kig, which maintain backdoor access and complete control over the targeted profiles. The process of adding the apps to a Facebook account is fully automated. Although the extension does connect to the official ChatGPT API, it also harvests all information available from the browser, steals cookies from any active, authenticated session, and employs tailored tactics to take over the victim’s Facebook account.

Once the threat actor has acquired the stolen data, it will most likely sell it to the highest bidder or exploit it directly through its own army of hijacked Facebook bot accounts, publishing further sponsored posts and other social activity from the victims’ profiles and accounts.

The sources for this piece include an article in TheHackerNews.
