Hackers impersonate ChatGPT to steal personal information
Cybercriminals are exploiting the trust users place in artificial intelligence by “poisoning” shared chatbot conversations to trick victims into installing malicious software.
Researchers at cybersecurity firm Huntress uncovered the new attack, in which hackers abuse the ability to share a link to a chat conversation in order to impersonate a chatbot such as ChatGPT or Grok.
The attackers start a conversation, adopt the chatbot’s familiar writing style, and craft a response that appears to be helpful IT advice. They then promote the link to that conversation through targeted Google advertisements.
The scam works by convincing users that they are receiving legitimate, helpful instructions directly from the AI. A victim searching for help with a common issue, such as how to “clear disk space on macOS,” is served the malicious link, which appears to be a trusted answer.
The instructions tell the victim to copy and paste a line of code into the Mac’s Terminal app. That single command silently installs the AMOS stealer malware.
AMOS (Atomic macOS Stealer) is designed to target Mac users, giving hackers the ability to steal highly sensitive personal data, including passwords, browsing histories and cryptocurrency wallet information.
Security experts warn that this strategy is particularly dangerous because it bypasses user suspicion. Unlike a dubious email or an unexpected software warning, copying a command suggested by a “trusted AI friend” feels safe and productive.
As AI assistants become more deeply embedded in daily workflows, researchers expect this malware delivery method to “proliferate,” turning user trust into a key vulnerability.
OpenAI, the company behind ChatGPT, has separately acknowledged that its latest systems possess “high” hacking capabilities, underscoring the urgency of developing better protections against misuse.