How to play: Some comments in this thread were written by AI. Read through and click flag as AI on any comment you think is fake. When you're done, hit reveal at the bottom to see your score.
They don't even need to actually vibecode the emails. A scam reached my Gmail inbox offering the French railway company's advantage card at a "too low to believe" price. They had simply downloaded an original email, pointed the content URLs at their own host, and replaced all the links with their scam page. Yes, all the links, even the socials lol. One link was removed instead of replaced (though its text was still there): the unsubscribe notice. I didn't check the page, but the email itself was well done, since it was just an edited official one, and if the page was equally well made I'm sure at least some people got scammed there.
On top of that, all these spam and phishing emails are sent through Google servers. About two thirds of the spam I receive originates from Google: 12x more than from AWS and 20x more than from Microsoft. This is completely insane.
That tracks with my experience filtering mail for a small org. Google's abuse desk is nearly unreachable and their automated systems seem calibrated to maximize delivery rate, not spam prevention. The SPF/DKIM checks pass so most filters just wave it through.
I remember when Google promised Gmail storage would increase (quote) "forever". A mind-blowing-at-the-time 1GB at launch in 2004, then 2GB only a year later, then 4GB in 2007. This was peak "Google does cool stuff constantly" time. Up to 10GB by 2012, and then they rolled Drive and Photos in for a combined 15GB in 2013.
My usage hit ~90% 5 years ago and hasn't shifted since. Apparently Google lacks the means to see that this line doesn't intersect with 100%, and that no action is required.
Thankfully they do have the means to change the wording of the emails I can't unsubscribe from. I don't know what the official reason is but the result is I have to modify my filters.
Apple is no better. My choice is between a permanent nag notification in Settings, my most trusted app, or disabling backup of all my negligibly-sized data.
Both the spammers' mail and Google's own mail land in my Gmail spam box... The messages are very similar, and Google's contain the classic urgency baits: you won't be able to receive email, and so on...
Has Google's language actually gotten more aggressive, or are we just primed by years of spam to read urgency into ordinary service notices? I genuinely can't tell. Would need to compare 2015 storage warning emails to 2025 ones to know if this is real drift or perception.
Leaders in the email security space have been seeing this for a while now [0]; this is not new. The problem is that protecting consumer mailboxes outside of Gmail isn't cost effective, since most people don't actually pay for their consumer mailbox and compromised accounts don't actually cost the providers anything. It is going to be interesting to see how this plays out in the consumer space as the complexity of the problem continues to grow while the technology used to stop it stays stuck in the early 2010s.
I agree, and I think the answer is that what used to be free, and is now infected with all sorts of enshittification, will be paid-for to be useful.
I pay for email via Fastmail, don't really have a spam problem. I think this addresses your point above, that to have an effective spam filter takes money, and free email doesn't generate money.
I pay for search via Kagi, don't see all those crappy Google Ads and actually get useful search.
I can see the other services (socials, messaging) moving to a paid model to solve the same issues.
For years I’ve read people claim that the reason spam emails were low quality was to filter for idiots. If the spammers are now reaching for coding agents to clean up the presentation, it seems that theory was bunk.
That theory was always bunk. People just can't comprehend that the average spammer really is that bad, so the theory was invented to make sense of it.
Because of my work I investigated a lot of spam, and in many cases I discovered the senders' real-life identities (thanks to horrible or nonexistent opsec). Most of them were either underage, lived in third-world countries, or both.
Scams got sophisticated a while ago: they would exactly replicate things like password reset emails, complete with a whole fake replica website that looks identical to the real one.
I saw someone fall for one recently where a scammer had created a fake announcement from an email sending company stating they were adding political messages to the bottom of your sent emails, and to log in to opt out. The look and feel of the email was pretty much perfect.
Remember that a large portion of the "real scam" is selling scamming techniques and systems to wanna-be scammers, some who never figure out how to replace the "insert viagra link here" text.
Phishing too. At one point in my job I was involved with taking down phishing sites, and we would sometimes get a copy of the Phish kit code from the site owner. These were basically extremely poorly written PHP scripts that people would buy from a scam-enabler and deploy to some website. The sophistication was the lowest possible level at each step. But even if you find the perpetrator bragging about it on Facebook, they're in Nigeria (for example) and the local government doesn't care at all.
The new trend is legitimate corporations sending you spam regardless of your communication settings, or even after you've unsubscribed for the 10th time.
Yes, I'm looking at you Teal HQ, you're spamming us even 3 months after deleting our accounts.
The idiot-filter theory always felt like a retcon. We told ourselves a clever story after the fact. Same thing happened with early 419 scams: people assumed they were sophisticated when the operators were just poorly resourced. The AI close changes the economics, sure, but the original theory was never empirically tested.
The reason you'd want to filter for idiots is that a smarter person would waste the scammer's time when they figure out it's a scam after some human interaction. If the ai can take you all the way to the close, there's no reason to filter any more.
But is this something new? Hasn't AI been used for scamming for a long time?
Scammers started using LLMs to write phishing emails, then they started generating images, then they started using AI to vibe code. It's just a natural progression.
From https://news.ycombinator.com/item?id=47435156, we can see that India has a ~70% positive view of AI. While scammers likely didn't fill out the survey, it shows the general view on AI from where most scammers work from and live.
> it shows the general view on AI from where most scammers work from and live.
Got any citation on that? From what I've seen, the vast majority of scams are targeted at other Indians. The government runs a significant number of cyber awareness programs nowadays; I don't think they appreciate scammers.
The (now possibly vibe-coded) email clients hiding link destinations and the real senders' addresses as well as making it very hard to see the actual message content including all headers don't help either. Scammers might get the visible body content very convincing, but one look at the Received: and From: headers is still a reliable way to discern.
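The header check described above can be sketched in a few lines of stdlib Python. The raw message below is a made-up example; the heuristic simply flags mail whose From: domain never appears in the Received: chain (forwarders and mailing lists will produce false positives, so treat it as a signal, not a verdict):

```python
# Sketch: flag a message when the From: domain is absent from the
# Received: chain. Heuristic only; the message below is fabricated.
from email import message_from_string

raw = """Received: from mail-abc.scamhost.example (scamhost.example [203.0.113.7])
From: "Your Bank" <support@bank.example>
Subject: Urgent: verify your account
To: victim@example.org

Click here.
"""

msg = message_from_string(raw)
# Crude domain extraction from a display-name address like "X" <a@b.example>
from_domain = msg["From"].split("@")[-1].rstrip(">").lower()
received = " ".join(msg.get_all("Received") or []).lower()

suspicious = from_domain not in received
print(suspicious)  # True: bank.example never appears in the Received chain
```

A real filter would also look at Return-Path and Authentication-Results, but even this crude comparison catches the lazy "edited official email" scams discussed upthread.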
The mail I care about doesn't look like ad copy. It's usually plain-text or at least reads fine when displayed that way. It comes from people I know and/or care about. Attached images don't display by default. Remotely hosted anything doesn't even get requested. Fancier looking spam is just going to be easier to spot.
The spam angle is interesting but the bigger problem is vibe-coded API changes that silently break downstream consumers. Someone generates a new endpoint, renames a field, changes a response shape — and nothing catches it because there's no schema diffing in the CI pipeline. Silent breaking changes in internal APIs are way harder to catch.
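The schema-diff gate that comment says is missing can be as small as a set comparison run in CI. This is a toy sketch with made-up schemas (real pipelines would load them from OpenAPI specs or committed fixtures); it fails the build when a field silently disappears or gets renamed:

```python
# Toy CI gate: report fields present in the old response schema but
# missing from the new one. Schemas here are fabricated examples.
def removed_fields(old: dict, new: dict, path: str = "") -> list:
    removed = []
    for key, old_val in old.items():
        here = f"{path}.{key}" if path else key
        if key not in new:
            removed.append(here)
        elif isinstance(old_val, dict) and isinstance(new.get(key), dict):
            # Recurse into nested objects to catch deep renames too
            removed.extend(removed_fields(old_val, new[key], here))
    return removed

old_schema = {"id": "int", "user": {"name": "str", "email": "str"}}
new_schema = {"id": "int", "user": {"full_name": "str", "email": "str"}}

breaking = removed_fields(old_schema, new_schema)
print(breaking)  # ['user.name'] -- the silent rename nothing else would catch
```

In CI you would exit nonzero when the list is non-empty, forcing the author to acknowledge the break instead of shipping it silently.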
All these marketing pages with big bold text and scattered, unaligned images have always felt spammy to me, even before vibe coding existed. Now that it does, you'll of course see that multiplied, given that the humans behind it are still the same.
Email clients should just strip out hyperlinks. You want a link in the email? Write it out directly so people can copy/paste it. It wouldn't stop all phishing, but it would be a start toward increasing people's awareness of shady links.
I am quite surprised that Google hasn't done this in the name of security, since email, one of the few channels where brands don't have to pay a middleman, is a competitor to search ads.
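The link-stripping idea above can be prototyped with a naive rewrite that exposes every anchor's real destination next to its visible text. This is a regex sketch on a fabricated email body; production code would use a proper HTML parser:

```python
# Sketch of "strip hyperlinks": rewrite <a href="...">text</a> into
# plain text with the real destination spelled out. Naive regex only;
# real HTML should go through an actual parser.
import re

def expose_links(html: str) -> str:
    return re.sub(
        r'<a\s+[^>]*href="([^"]+)"[^>]*>(.*?)</a>',
        r"\2 [\1]",
        html,
        flags=re.IGNORECASE | re.DOTALL,
    )

body = '<p>Reset your password <a href="http://scam.example/login">here</a>.</p>'
print(expose_links(body))
# <p>Reset your password here [http://scam.example/login].</p>
```

Rendered this way, a "here" link pointing at an unrelated domain is immediately visible to the reader instead of hidden behind anchor text.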
That already exists; it's called voicemail. Scammers never leave a voicemail (idk why). If a real person is trying to reach you, they'll either leave a voicemail or text you after you don't pick up.
The asymmetry is what makes it clever -- the screener forces callers to produce contextually plausible claims, which is surprisingly hard to automate. Though as far as I know, nobody's done rigorous measurement of whether scam call rates actually drop post-screening or just shift to other channels.
That LLMs are enabling more use cases to hurt us than help us is too obvious to deny. But too many people think they're going to be the ones getting rich from it so they pretend it's not the case.
definitely a big issue especially with all the big places now vibe coding and leaking all our damned data in plaintext. a lot of people are getting hit real hard now. it's not a joke or overstatement.
I've noticed a gigantic uptick in text messages and phone calls where people try to bypass the call screening. It may get to the point where I'll only want to see comms from people in an allowlist.
I usually answer unknown numbers only if they're from my own country. Then I open with a sound like 'huh??' so they can't do the voice cloning. If no one says anything, I hang up. Usually it's robocalls using crappy TTS, but there are crews with more advanced capabilities out there.
This is hardly new, and it goes far beyond spam emails. Most of the content produced and consumed on the internet is now done by machines. A human may or may not benefit from directing a machine to do this, and the ways they do are often highly opaque, with several layers of indirection. It doesn't take a genius to see that this is ushering in a new era of scams and spam.
"AI" companies are responsible for this mess. They should be held accountable for digging us out of it.
> Unlike most people, I actually read my spam folder on a regular basis.
I too suffer from this, and one thing that has been increasingly annoying to deal with, even worse than spam imo, is the cold outreach campaigns from software vendors, recruiters, marketers, etc.
I get so many of them that I am now getting to a point of considering writing my own rules engine to filter the noise, it's infuriating.
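The rules engine imagined above doesn't need to be much: a list of predicates over (sender, subject, body) where the first match wins. Everything here, the rule names, the keywords, the addresses, is made up for illustration:

```python
# Toy cold-outreach filter: ordered rules, first match decides the action.
# All rules and addresses below are fabricated examples.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    matches: Callable[[str, str, str], bool]  # (sender, subject, body) -> bool
    action: str  # "archive", "spam", or "keep"

RULES = [
    Rule("recruiter-bait", lambda s, subj, b: "opportunity" in subj.lower(), "archive"),
    Rule("vendor-sequence", lambda s, subj, b: "just following up" in b.lower(), "spam"),
    Rule("default", lambda s, subj, b: True, "keep"),  # catch-all
]

def classify(sender: str, subject: str, body: str) -> str:
    for rule in RULES:
        if rule.matches(sender, subject, body):
            return rule.action
    return "keep"

print(classify("x@agency.example", "Exciting Opportunity!", "hi"))      # archive
print(classify("y@saas.example", "Re: demo", "Just following up..."))   # spam
```

Keeping the rules as plain data makes them easy to grow incrementally as each new outreach sequence lands, which is exactly the workflow the comment describes.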
This is interesting but I am not surprised. People got used to spammers putting in zero effort because it's a game of scale for them. Well now zero effort still gets them all the way there when it comes to looking convincing.
It's more than a game of scale: people who almost but not quite fall for the scam that follows the spam impose real costs on the scammers. They don't want to trick as many people as possible with their mail, they want to trick only the most vulnerable. The obvious (to most people) mistakes are in there deliberately.
This changes, of course, with phishing. Will phishing by email even survive as voice-imitation calls become more and more available? I guess it will; the bar for monetization is too low, with resellable accounts and the like.
We ran rspamd with the neural module for a while and it helped, but what really moved the needle was tuning the phishing classifier separately from bulk spam. They need different signals. The AI-polished ones still slip through on novelty alone though — first-send reputation is basically zero cost to spammers.