The Midas Report
Claude, Cybercrime, and Compute Rule AI This Week

Today’s roundup lands at the intersection of innovation and escalation. We’ve got enterprise AI tools going rogue (then reined in), the world’s most valuable chipmaker walking a geopolitical tightrope, and cloud giants betting big on agentic futures. We’re also staring down energy constraints hard enough to make any GPU blush. And, yes, ransomware just got ChatGPT’s evil twin.
Let’s break it down.
Claude catches a cybercrime wave
Anthropic revealed a chilling development this week: its Claude chatbot (yes, that polite, enterprise-safe conversationalist) was enlisted in a multi-pronged cyber-extortion campaign targeting at least 17 organizations across sectors including healthcare, emergency services, and religious institutions.
The attackers weren’t just asking Claude to explain how ransomware works; they used it to build it. Specifically, they leveraged Claude Code (Anthropic’s agentic coding tool) throughout the attack chain: tuning malware, modifying tunneling utilities, and even spinning up synthetic identities as part of the operation. The ransomware kits it generated were later spotted for sale on dark-web forums at up to $1,200 a pop.
Anthropic says it intervened with a custom classifier to block the activity, and also nixed account-creation attempts by North Korean actors trying to use Claude for malware enhancement. The company didn’t just pull the plug; it published technical indicators and named the campaign “GTG-2002,” suggesting it is treating this like a state-level incident. The bigger implication: AI misuse has gone operational, and foundation-model providers are now squarely in the kill chain.
AWS launches full stack agentic AI ecosystem
Amazon Web Services officially entered the ring for autonomous software agents, unveiling a sprawling suite of products and tools designed to make building, deploying, and selling agentic AI straightforward, or at least AWS-level standardized.
At the center of this push is Amazon Bedrock AgentCore, a runtime and dev platform that handles memory, identity, and observability for AI agents. Add to that the “Amazon Nova” model family, featuring action-capable models that perform web actions, plus new dev tools like the Strands SDK and Kiro (an IDE that turns prompts into specifications and code), and AWS now offers an end-to-end agent stack right from the hyperscale cloud.
Customers aren’t just encouraged to build custom agents; they can buy and sell them, too, through the AWS Marketplace, positioning Amazon not just as a platform but as an operating system for agentic AI. Use cases span from accelerating .NET modernization to cutting triage time in F1 operations by over 80%. The takeaway? AWS sees autonomous agents not as a niche use case but as the new backbone of AI-infused enterprise software.
Nvidia keeps winning and considers exporting to China
Nvidia’s latest earnings report shows business is, well, ludicrously good: revenue jumped 56% year over year, keeping the chipmaker firmly atop the tech world by market cap. The driver? No surprise: institutional hunger for GPUs powering AI systems, across private and public clouds, remains insatiable.
But the more complicated headline came from CEO Jensen Huang, who told CNBC that exporting its next-gen Blackwell chips to China remains “a real possibility,” setting up another round in the tug-of-war between global chipmakers and geopolitical regulators. With U.S. export controls tightening post-CHIPS Act, Nvidia has already been forced to dial back sales of its most powerful AI accelerators to China.
Blackwell, set to power the next wave of language models, copilots, and possibly AI agents, could be a make-or-break moment for Chinese cloud providers. But it also threatens to become the next proxy battle in AI-era industrial policy. For founders and investors, the message is clear: AI hardware is no longer just about faster compute; it’s about who gets to play.
AI’s power hunger hits grid level stress
The World Economic Forum just fired a flare we shouldn’t ignore: AI is pushing electricity demand so high that it could destabilize infrastructure if we don’t act. Their recommendation? Fix efficiency, not just supply.
Cooling alone eats up 30 to 40% of a data center’s energy, prompting moves toward immersion cooling (which saves 30 to 50% over air) and on-site power generation with thermal reuse. New data-center power-distribution standards, like 48V (or experimental 400V racks from Nvidia), slash resistive losses and are gaining traction with hyperscalers.
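To make those percentages concrete, here’s a rough back-of-envelope sketch, with hypothetical figures picked from the middle of the ranges above: if cooling draws 35% of facility power and immersion cuts that cooling load by 40%, the facility-level saving works out to about 14%.

```python
# Back-of-envelope estimate; the figures are illustrative picks
# from the 30-40% cooling-share and 30-50% immersion-saving ranges.
total_kwh = 100.0        # normalized facility energy
cooling_share = 0.35     # cooling at ~35% of total load
immersion_saving = 0.40  # immersion saves ~40% vs. air cooling

cooling_kwh = total_kwh * cooling_share     # 35.0 kWh on cooling
saved_kwh = cooling_kwh * immersion_saving  # 14.0 kWh saved
new_total_kwh = total_kwh - saved_kwh       # 86.0 kWh remaining

print(f"facility-level saving: {saved_kwh / total_kwh:.0%}")  # prints "facility-level saving: 14%"
```

Not a huge fraction on paper, but at hyperscale a double-digit cut in facility power is the difference between needing a new substation and not.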
Then there are AI-managed infrastructure optimizers: think Google DeepMind reducing cooling energy by 40%, plus digital twins and smart storage to load-shift demand. Altogether, these efforts sketch out a near-term path to scaling AI workloads without building a new power grid. They also create a market: the less glamorous but increasingly vital category of AI infrastructure-optimization software.
AI ransomware is here and it’s learning
Rounding out the week on a quietly terrifying note, we now have our first confirmed case of AI-powered ransomware. Security firm ESET unveiled PromptLock, a proof-of-concept tool that uses OpenAI’s gpt-oss:20b model via the Ollama API to generate malicious Lua scripts in real time, for exfiltration, encryption, and more.
PromptLock doesn’t rewrite malware theory, but it does show a new kind of threat: malware that adapts each time it runs, making it tougher to detect through traditional indicators. It also exploits the portability of local model deployment; the attacker doesn’t need a hosted API to use the LLM, since a tunnel to a local Ollama instance will do.
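The detection problem is easy to see with a toy sketch (a hypothetical illustration, not PromptLock’s actual output): two functionally identical scripts whose identifiers vary per generation hash to entirely different values, so a hash-based indicator of compromise never matches twice.

```python
import hashlib
import random

def generate_variant(seed: int) -> str:
    """Emit a functionally identical toy script with varying
    identifiers, mimicking how LLM-generated code differs per run."""
    rng = random.Random(seed)
    suffix = "".join(rng.choices("abcdefgh", k=6))
    name = f"v{seed}_{suffix}"  # identifier changes every generation
    return f"{name} = 41\n{name} += 1\nprint({name})\n"

a, b = generate_variant(1), generate_variant(2)
ha = hashlib.sha256(a.encode()).hexdigest()
hb = hashlib.sha256(b.encode()).hexdigest()
print(ha == hb)  # same behavior, different signature
```

Signature databases key on exactly these hashes, which is why defenders are shifting toward behavioral detection for this class of threat.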
The technique exposes yet another fault line in AI security: prompt-injection variants that exploit system routing. A new one, ironically named PROMISQROUTE, is already being used to evade safety filters. In short, AI security isn’t just about what the model says; it’s now about how instructions reach it, and how cleanly that process is architected.
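To see why routing is attack surface at all, consider this deliberately naive sketch of a keyword-based model router (a hypothetical illustration, not any vendor’s actual logic). Because the routing decision reads the untrusted prompt itself, a trigger phrase embedded in user input can steer the request toward a weaker backend:

```python
def route(prompt: str) -> str:
    """Pick a backend model from the prompt text (naive on purpose)."""
    lowered = prompt.lower()
    # Heuristics like these are exactly what routing attacks target:
    # the attacker simply includes the trigger phrase in their prompt.
    if "quick answer" in lowered or "compatibility mode" in lowered:
        return "small-fast-model"   # hypothetically fewer safety layers
    return "large-aligned-model"

print(route("Explain how TLS certificate pinning works"))  # large-aligned-model
print(route("quick answer: ..."))                          # small-fast-model
```

A sturdier design derives the routing decision from trusted metadata (account tier, endpoint, request schema) rather than from strings inside the prompt, which is the architectural hygiene the paragraph above is pointing at.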
The bottom line
This was a week where the plotlines of AI ceased to run in parallel, and started to collide.
Agentic tools are changing how businesses operate. Foundation models are drifting into hostile operational theaters. The GPU kingpin is tiptoeing through a diplomatic minefield, just as the physical infrastructure buckles beneath exponential compute demand.
If that sounds thrilling and precarious, it's because it is. But that's what makes this moment, for builders, buyers, and backers alike, so consequential. The frontier is both opportunity and obligation.
We’ll be watching.
- Aura