AI Security Breaches, Vibe Coding Secrets Leak, and OpenAI's $500B Week
AI Security Disasters This Week 🔥
Half a trillion in AI valuations. Eight thousand children's records nicked. Sudo is broken. Everything is on fire and we're shipping vibes.
Right. Where to start with this week of AI security breaches and vibe coding gone wrong.
A ransomware gang called Radiant (love the branding, lads) hacked a nursery chain called Kido and walked off with personal data on 8,000 children. Children. Not enterprise accounts. Not crypto wallets. Actual kids in actual nurseries. I don't usually get properly miffed about security news because frankly if you're still leaving RDP open to the internet you deserve what's coming. But targeting nurseries? That's a new kind of grim.
Meanwhile: OpenAI hit a $500 billion valuation. Half. A. Trillion. We're living in a timeline where AI companies are worth more than most countries' GDP and a nursery chain can't keep toddler data safe. Cool. This is fine.
The Hits Keep Coming 💀
It wasn't just the nursery hack. This week was a proper security shambles from top to bottom.
Google Ads serving trojans. You search for something legitimate, click an ad at the top of Google, and congratulations, you've just installed malware. Google taking money to distribute malware is chef's kiss levels of ironic. The ad platform that prints money can't vet what it's printing.
Fake invoices spreading RATs. Remote Access Trojans shipped via invoice PDFs. Because apparently we still haven't sorted out "don't open random attachments" after twenty-odd years of trying.
The sudo exploit (CVE-2025-32463) got added to CISA's Known Exploited Vulnerabilities list. Actively exploited. In the wild. Right now. Sudo. The thing that literally gates root access on every Linux box you've ever touched. If that doesn't make you sweat a bit, you're not paying attention.
Tile tracking devices flagged as a stalking risk. The thing you bought to find your keys can apparently be weaponised to find you. Reassuring.
UK Co-Op attack costs hit $275 million. DragonForce's April attack resulted in weeks of empty shelves and over a quarter billion dollars in damages. That's not a cyber incident. That's an economic event.
Silent smishing: SMS phishing that doesn't even trigger a notification. Your phone gets compromised and you don't even know it happened. Brilliant.
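If the sudo item above made you twitchy, the first question is whether your boxes are even running a fixed build. Here's a rough sketch of a version check in Python; it assumes the fix landed in 1.9.17p1 (verify against your distro's advisory, since vendors often backport patches without bumping the upstream version):

```python
import re
import subprocess

# CVE-2025-32463 fix version -- assumed here to be 1.9.17p1. Distros often
# backport the patch, so treat your vendor advisory as the source of truth
# rather than the version number alone.
PATCHED = (1, 9, 17, 1)

def parse_sudo_version(banner: str):
    """Pull (major, minor, patch, p-level) out of `sudo --version` output."""
    m = re.search(r"version (\d+)\.(\d+)\.(\d+)(?:p(\d+))?", banner)
    if not m:
        return None
    major, minor, patch, plevel = m.groups()
    return (int(major), int(minor), int(patch), int(plevel or 0))

def is_patched(version, patched=PATCHED):
    """Tuple comparison handles 1.9.17 < 1.9.17p1 < 1.10.0 correctly."""
    return version is not None and version >= patched

if __name__ == "__main__":
    try:
        banner = subprocess.run(
            ["sudo", "--version"], capture_output=True, text=True
        ).stdout
    except FileNotFoundError:
        print("sudo not found on this machine")
    else:
        v = parse_sudo_version(banner)
        verdict = "looks patched" if is_patched(v) else "may be vulnerable -- update now"
        print(f"sudo {v}: {verdict}")
```

Thirty seconds per host. Considerably cheaper than an incident retro.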
Feels like: Living in a horror film where every door you open has something worse behind it, and someone in the background keeps cheerfully announcing record-breaking funding rounds.
The Vibe Coding Security Reckoning 🫠
Here's where it gets properly interesting. "Vibe coded secrets leak" was an actual headline this week. Apps built by AI, people just accepting whatever the model spits out, shipping with hardcoded API keys and credentials sitting right there in the code.
I've said it before and I'll keep banging this drum: vibe coding is sick for prototyping. It's magic for scaffolding. But the moment you ship vibe-coded output without reviewing it, you're essentially deploying code that nobody has read. Not you. Not the model (it doesn't read code, it predicts tokens). Nobody.
And now those predicted tokens include your AWS keys. Outstanding.
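The fix for that particular failure mode is ancient and boring: credentials come from the environment, never from string literals. A minimal sketch (the variable name `MY_SERVICE_API_KEY` is made up for illustration, and the commented-out key is AWS's own documentation example, not a real credential):

```python
import os

# The leak pattern: generated code ships a literal credential, and it lives
# in git history forever.
# AWS_KEY = "AKIAIOSFODNN7EXAMPLE"   # <- never do this

# The fix: read the secret from the environment at runtime, and fail loudly
# if it's missing rather than limping along with an empty string.
def get_api_key(name: str = "MY_SERVICE_API_KEY") -> str:
    """Fetch a credential from the environment by variable name."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} not set; refusing to start without a credential")
    return key
```

Every LLM has seen this pattern thousands of times in training data too. It'll happily generate it, if you ask. People just aren't asking.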
| 📚 Geek Corner |
|---|
| The secrets-in-AI-code problem: When an LLM generates code, it draws on patterns from training data, which includes thousands of tutorials and Stack Overflow answers with placeholder credentials. The model doesn't know those are supposed to be placeholders. It's pattern-matching, and the pattern is "config file has a string here." Combine that with developers who treat AI output as trusted input, skip code review, and push straight to main... you get secrets in production. The fix isn't to stop using AI for code. It's to treat AI output the same way you'd treat a PR from an intern: review everything, run secret scanning (git-secrets, trufflehog, gitleaks), and never trust generated config values. The tooling exists. People just aren't using it because vibes. |
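To make the Geek Corner concrete: the tools it names are essentially pattern matchers over your diff. Here's a toy two-rule scanner in the spirit of gitleaks/trufflehog — the real tools ship hundreds of rules plus entropy analysis, so this is illustrative only, not a substitute:

```python
import re

# Two toy detection rules. Real scanners (gitleaks, trufflehog) maintain
# large curated rule sets plus entropy heuristics; these are just the shape.
PATTERNS = {
    # AWS access key IDs follow a well-known AKIA-prefixed format.
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # Anything that looks like key/secret/token assigned a long quoted string.
    "generic_secret": re.compile(
        r"(?i)(?:api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}

def scan(text: str):
    """Return (rule_name, matched_text) pairs for anything secret-shaped."""
    hits = []
    for name, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((name, m.group(0)))
    return hits
```

Wire something like this into a pre-commit hook or CI gate and the "vibe coded secrets leak" headline becomes a failed build instead of an incident report. That's the whole pitch.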
Meanwhile, In AI Utopia 🚀
While the security world was having a week, the AI hype machine was running at full chat.
Anthropic dropped Claude Sonnet 4.5 along with the Claude Agent SDK. The model topped SWE-bench Verified, which means it's now the best at fixing code that probably got compromised because someone else's AI wrote dodgy code. The circle of life. (I put Claude Code through its paces against Codex on an iOS simulator build the following week, if you're curious how it actually performs.)
OpenAI launched Sora 2, photorealistic video generation where you can insert yourself into generated footage. Nothing concerning about deepfake technology going mainstream the same week we're discussing social engineering attacks. Nothing at all.
OpenAI hit $500 billion valuation after a $6.6 billion secondary share sale. The world's most valuable startup. I don't even know what to do with that number. The Co-Op hack cost $275 million. OpenAI's valuation could absorb roughly 1,800 Co-Op-scale attacks and still be worth something. The scales are broken.
And in the developer corner: comment-driven development became a talking point, the idea that since LLMs rely on comments, well-commented code is now functional documentation for your AI pair programmer. Also, someone did a proper write-up on Claude Code's magic and how its agentic patterns actually work under the hood. Both good reads, both slightly surreal to be discussing while sudo is actively exploited.
The Hiring Hot Take 🌶️
The Changelog ran a piece this week: "Hiring only senior engineers is killing companies."
I've got thoughts on this one. The argument goes that if you only hire seniors, you end up with a team of people who are all good at making decisions but nobody wants to do the actual building. Everyone's an architect, nobody's laying bricks.
There's truth in it. I've seen teams of eight seniors spend three sprints debating an API schema that a motivated mid-level would've shipped in a week. But the counter-argument, that you need juniors to do the grunt work, is properly outdated now. The grunt work is increasingly what AI handles. The boring CRUD endpoints, the boilerplate, the scaffolding. That was the junior dev pipeline, and it's being automated.
So where does that leave us? Reckon the real problem isn't "too many seniors" but "too many people who only know how to be senior in the old way." The game's changing. Being senior used to mean you'd seen every pattern. Now it means you can evaluate whether the pattern the AI suggested is actually right, or whether it's about to ship your secrets to GitHub.
Full circle, innit.
The Bit That Keeps Me Up 😶
Here's what's actually bothering me about this week. It's not any single story. It's the gap.
On one side: AI companies hitting half-trillion valuations. Agent SDKs launching. Photorealistic video generation. Comment-driven development. The future is here and it's magnificent.
On the other side: nurseries getting hacked. Sudo exploits in the wild. Vibe-coded apps leaking credentials. Invoice phishing still working. The fundamentals are on fire and nobody's watching because everyone's distracted by the shiny new model release.
We're building the most sophisticated software in human history on top of infrastructure that can't even keep nursery data safe. And the response from the industry is to ship faster, review less, and let the AI handle it.
I'm not saying slow down. I'm saying look down. The foundations are cracking and we're too busy adding floors.
Bottom line: Week 40 was a horror show wearing a party hat. Record valuations upstairs, ransomware in the basement. The AI gold rush is real, but so are the 8,000 kids whose data is floating around a dark web forum right now. Maybe, just maybe, we should spend as much energy on the boring security stuff as we do on the next model benchmark. But we won't. Because vibes.