- The Midas Report
Transformers Questioned, Silicon Wars Escalate, Robots Rise, and Law Goes AI Native.

It’s a busy day at the frontier. From Transformer godfathers questioning the road we’re on, to Amazon and NVIDIA trying to pave new ones through silicon and sensors, today’s AI updates point to a coming bifurcation: not just between the players, but between their visions. Let’s unpack the signal from the noise.
Ashish Vaswani questions the LLM monoculture
Ashish Vaswani, one of the original minds behind the Transformer architecture, the very backbone of today’s large language models, has a message for the AI industry: we may be optimizing ourselves into a corner.
Speaking at a recent panel in San Jose, Vaswani warned that Big Tech is pouring billions into squeezing every last drop of performance out of the Transformer design while ignoring opportunities for fundamental breakthroughs in new directions. "Attention Is All You Need", the seminal paper he co-authored in 2017, has been cited over 190,000 times and made LLMs like ChatGPT possible. But Vaswani now thinks we’re stuck in a “local maximum.” And that’s not just academic: trillions of dollars in tech valuations, cloud wars, and GPU build-outs stem from Transformer-based models, and zeroing in on one architecture could blind the field to the next leap forward.
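For readers who want the mechanics behind the headline: the core operation that paper introduced is scaled dot-product attention. Here is a minimal, illustrative numpy sketch of the formula (toy shapes, not production code):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V                                   # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))   # 4 query positions, d_k = 8
K = rng.standard_normal((6, 8))   # 6 key positions
V = rng.standard_normal((6, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one output vector per query position
```

Everything Vaswani helped unleash, from ChatGPT to the GPU build-out, ultimately rests on stacks of this one operation.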
His critique has strategic resonance: newcomers willing to reimagine the stack, rather than brute-force its existing limits, might unlock not just competitive edges but entirely new paradigms. For builders and investors, there’s real upside in the contrarian bet: look where the herd isn’t grazing.
NVIDIA’s next play: from digital LLMs to physical AI
NVIDIA is extending its AI empire into the physical world with a new partnership aimed squarely at powering robots, humanoids, and autonomous systems. The company just announced a collaboration with RealSense, best known for its depth cameras, which will plug directly into NVIDIA’s Jetson Thor supercomputers and Isaac Sim, its robotics simulation environment.
This move is much more than a hardware bundle. It’s a play for data: embodied AI needs real-time, multi-modal data ingestion at scale, and NVIDIA is building the capture and simulation stack to control the flow. RealSense integration allows for native fusion of depth and image data into Isaac Sim, enabling robots to operate in unstructured environments with minimal prior training. Think warehouse logistics, eldercare robotics, and decentralized manufacturing.
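To make “fusion of depth and image data” concrete: the first step in any depth-camera pipeline is back-projecting a depth image into 3D points that can be aligned with RGB pixels and fed to a simulator. A sketch of the underlying pinhole-camera math, with hypothetical intrinsics (this illustrates the geometry, not the RealSense SDK itself):

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into camera-frame XYZ points
    via the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)             # shape (h, w, 3)

# Hypothetical 4x4 depth frame: every pixel 1 m away, toy intrinsics.
depth = np.ones((4, 4))
cloud = depth_to_pointcloud(depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (4, 4, 3): one XYZ point per pixel
```

Each 3D point carries the color of its source pixel, which is exactly the depth-plus-image fusion a simulator like Isaac Sim consumes.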
It’s also a clue to how NVIDIA is structuring its moat for the post-token, multi-modal era. GPTs live in GPU clusters, but physical AI, complete with edge inference and real-world feedback loops, is shaping up to be the next battlefield. And NVIDIA wants to sell you the map, the sensors, and the bridge.
Training chips, custom stacks, and the Anthropic power play
For all its early cloud dominance, Amazon has been playing catch-up in AI. That may be changing. AWS is now building out data centers with over 1.3 gigawatts of capacity, aimed at massive-scale training of Anthropic’s models on Amazon’s homegrown Trainium2 chips.
The infrastructure isn’t just big, it’s vertically integrated. Anthropic isn’t merely a customer; it’s shaping the Trainium hardware roadmap through deep collaboration with Amazon’s chip team at Annapurna Labs. In effect, Anthropic is to AWS what OpenAI is to Microsoft: a co-strategist, not just a tenant. That includes next-gen chips (Trainium3), custom networking (NeuronLinkv3), and scale-up systems (Teton Max, Teton PDS).
But here’s the rub: Trainium2 may look price-efficient for memory-bound tasks, but it still badly trails NVIDIA in raw compute (≈667 TFLOPs vs 2,500 TFLOPs for GB200) and bandwidth. So why is this working? Because Anthropic’s hybrid cloud setup (it still uses Google TPUs too) lets it optimize for cost and flexibility while materially boosting AWS’s bottom line. For other AI customers, the writing’s on the wall: as hyperscalers double down on in-house silicon, margin compression and lock-in dynamics are coming to the cloud AI market.
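Why raw TFLOPs can mislead for memory-bound work is easy to see with a back-of-envelope roofline model. The peak-compute figures below come from the comparison above; the bandwidth numbers are hypothetical placeholders chosen only to illustrate the shape of the argument:

```python
def attainable_tflops(peak_tflops, bandwidth_tbs, intensity_flops_per_byte):
    """Roofline model: delivered performance is capped by the lower of
    peak compute and (memory bandwidth x arithmetic intensity)."""
    return min(peak_tflops, bandwidth_tbs * intensity_flops_per_byte)

# Peak compute per the article; bandwidths (TB/s) are HYPOTHETICAL placeholders.
trainium2 = dict(peak_tflops=667, bandwidth_tbs=3.0)
gb200 = dict(peak_tflops=2500, bandwidth_tbs=8.0)

for intensity in (50, 500):  # FLOPs performed per byte moved
    a = attainable_tflops(trainium2["peak_tflops"], trainium2["bandwidth_tbs"], intensity)
    b = attainable_tflops(gb200["peak_tflops"], gb200["bandwidth_tbs"], intensity)
    print(f"intensity={intensity}: Trainium2 {a:.0f} TFLOPs, GB200 {b:.0f} TFLOPs")
```

At low arithmetic intensity both chips sit on the bandwidth ceiling, so the effective gap shrinks to the bandwidth ratio rather than the ~3.7x compute ratio, which is the regime where a cheaper chip can win on price-performance.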
Tools like HexStrike are now weapons
AI’s dual-use dilemma just got more real. Researchers at Check Point confirmed that HexStrike AI, a red-teaming platform originally designed to help companies find their own vulnerabilities, is being weaponized by malicious actors to exploit newly disclosed flaws within days of publication.
Specifically, attackers claim to have used HexStrike to exploit three Citrix zero-days, stringing together automated recon, vulnerability chaining, and even basic exploit development, all powered by AI agents. With domain-specific agents linked to over 150 tools for scanning, reverse engineering, and brute-forcing, HexStrike does far more than script kiddies ever could.
This isn’t theoretical. It compresses the vulnerability response window from weeks to hours, a nightmare scenario for CISOs. It also reflects a broader risk: as open tooling proliferates, AI-based orchestration will inevitably shift beyond defense. Offense is becoming automated, scalable, and increasingly agentized. Security leaders now face a tough calculus: AI won’t just protect your perimeter; it might also be what invades it.
MindStudio says it’s deployed 150,000 agents, but where’s the data?
MindStudio, one of the more ambitious players in the agent-platform space, claims it has passed 150,000 deployed AI agents across business and government users. That sounds meaningful, especially as enterprise adoption of agent-based automation accelerates.
But here’s the catch: independent verification of the claim is lacking, and MindStudio’s own material offers little substantiation. Without clearer telemetry, it’s hard to know whether this is true reach or just ambitious marketing.
Still, the direction is worth watching. If MindStudio, or anyone else, succeeds in lowering the barrier to building robust, composable agents, it would change who gets to play in the AI economy, not just how they play. Whoever owns the agent layer may well own AI’s deployment infrastructure.
Eudia Counsel: meet the AI-native law firm
Legal tech has flirted with disruption for years, but Eudia might have just tipped it over the edge. The AI startup has launched what it’s calling the world’s first AI-augmented law firm, Eudia Counsel. Built from the ground up around agent-powered workflows, it promises full-stack services spanning M&A due diligence, contract review, regulatory Q&A, and more, all delivered through a chat-first, AI-native interface enriched by a proprietary “Company Brain” that accumulates institutional knowledge over time.
The firm operates under Arizona’s Alternative Business Structure program, a regulatory green light that traditional firms lack. And with clients like Citibank, DHL, and the US government already on board, this isn’t just a speculative prototype; it’s a commercialization play.
Beyond margin gains for large-enterprise legal ops, Eudia is going after access to justice too. Its “AI for Good” initiative targets underserved communities and small businesses. If successful, this could mark an inflection point: legal services not just digitized, but replatformed.
The bottom line: today we saw the AI stack stretch, from Transformer roots to robotic limbs, from security perimeters to legal courtrooms. But more than depth, it’s breadth that defines the moment. The players who win next won’t just build better models; they’ll own the interface, the data, and maybe the entire deployment substrate. Eyes up: there’s more than one frontier forming.
- Aura