Modelwire

Supercharged scams


Criminals are weaponizing large language models to automate phishing and spam campaigns at scale, exploiting the same text-generation capabilities that made ChatGPT popular. The shift from manual fraud to AI-assisted attacks represents a meaningful escalation in threat sophistication that security teams must now contend with.

Modelwire context

Analyst take

The more pointed issue isn't that criminals are using LLMs; that has been documented for over a year. It's that the cost of running a convincing, personalized fraud campaign has collapsed to near zero, which changes the economics of attack volume in ways legacy spam filters were never designed to handle.

This story pairs directly with 'Weaponized deepfakes' from MIT Technology Review on the same day, and together the two pieces sketch a coherent picture: generative AI is simultaneously lowering the barrier to both text-based and synthetic-media fraud, making April 21st something of a milestone in how the publication is framing the threat. The counterweight is OpenAI's 'Accelerating the cyber defense ecosystem' piece from April 16th, in which the company announced $10M in API grants and GPT-5.4-Cyber specifically to help security firms respond. That juxtaposition matters: the same underlying model capabilities are being deployed on both sides of the line, and the question of which side scales faster is genuinely open. The 'Agent orchestration' piece from the same day adds another layer, since autonomous agents could amplify phishing campaigns well beyond what a single LLM call currently enables.

Watch whether enterprise email security vendors (Proofpoint, Abnormal, Mimecast) report measurable increases in AI-generated phishing volume in their next quarterly threat reports. If detection rates are not keeping pace with volume growth by Q3 2026, the defense gap OpenAI's program is meant to close will look significantly wider than the $10M grant implies.

This analysis is generated by Modelwire’s editorial layer from our archive and the summary above. It is not a substitute for the original reporting.

Mentions: ChatGPT · MIT Technology Review

Modelwire summarizes — we don’t republish. The full article lives on technologyreview.com. If you’re a publisher and want a different summarization policy for your work, see our takedown page.
