The Agentic, the Secure, the Surreal: This Is AI's Next-Level Leap

If you thought AI was all models and middleware, think again. From an AI-generated government official to next-gen GPU deployments the size of small nations, today's headlines are less about experimental prototypes and more about functional, sovereign infrastructure. The battle lines are being drawn across agentic tooling, security dependencies, compute pipelines, and policy realities, each of which is shaping the AI stack for founders and investors entering this next phase.
Invisible raises $100M to win the agentic AI platform race
The agentic AI layer, where autonomous systems plan, execute, and self-correct using foundation models, is still open terrain. Invisible Technologies just raised $100 million in a growth round to stake its claim.
Their goal isn't small: build a fully integrated AI infrastructure stack that can orchestrate workflows at internet scale. This isn't yet another wrapper; it's back-end plumbing for automation across verticals. Unlike the foundation model category, which quickly calcified into an oligopoly, agentic middleware is still wide open: first movers with strong execution could own rich platform surfaces.
The bet from investors signals growing conviction that orchestration is not just auxiliary but existential. As AI agents move from demos to deployments, the limiting factor won't be model quality; it'll be how well you choreograph the messiness of tasks, tools, and humans in the loop. Invisible is positioning itself to be that conductor.
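To make "orchestration" concrete, here's a deliberately stripped-down sketch, a generic illustration rather than anything from Invisible's actual stack: a loop that asks a planner for the next step, routes it to a registered tool, and pulls a human in when the agent's confidence is low. The names (`Step`, `run_workflow`, the 0.7 threshold) are all hypothetical.

```python
# Generic agent-orchestration loop (illustrative only, not Invisible's product):
# the planner proposes steps, the orchestrator dispatches tools, and low-confidence
# steps are escalated to a human reviewer before anything runs.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    tool: str          # which tool the planner wants to invoke
    args: dict         # arguments for that tool call
    confidence: float  # planner's self-reported confidence, 0..1

def run_workflow(plan_next: Callable[[list], Step | None],
                 tools: dict[str, Callable[..., str]],
                 ask_human: Callable[[Step], bool],
                 max_steps: int = 20) -> list:
    """Execute planner-proposed steps until the planner signals completion."""
    history: list = []
    for _ in range(max_steps):
        step = plan_next(history)              # planner sees all prior results
        if step is None:                       # planner says the task is done
            break
        if step.confidence < 0.7 and not ask_human(step):
            history.append((step, "rejected by human reviewer"))
            continue
        result = tools[step.tool](**step.args)  # dispatch the tool call
        history.append((step, result))          # feed the result back to the planner
    return history
```

The point of the sketch is where the hard problems live: retries, tool failures, and the human-in-the-loop branch are the "messiness" the paragraph above is talking about, and they sit in the orchestrator, not the model.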
CrowdStrike + Salesforce want to be your AI security stack
Enterprise AI is growing up, and security is showing up. In a tightly coupled partnership, CrowdStrike and Salesforce are joining forces to create a security perimeter for AI across the Salesforce cloud ecosystem.
The goal: offer real-time detection, prevention, and auditable governance from model development to agent deployment. Think of it as DevSecOps, but extended into the messy new territory of generative workflows and agentic systems. Companies building with AI in production, especially with customer or financial data, are now facing a hard truth: you won't be able to ship safely, much less sell into the enterprise, without an opinionated security layer.
This partnership matters as much for its architecture as for who's involved. CrowdStrike brings world-class threat analysis and runtime defense. Salesforce, with its CRM ubiquity, is where agentic AI might quietly invade every layer of corporate operations. Together, they're establishing security by default not just as a best practice but as a compliance standard. The rest of the enterprise AI market will follow.
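What does "security by default" look like in practice? A toy sketch, ours and not the CrowdStrike/Salesforce product: every agent tool call passes a policy check before it executes, and every decision lands in an append-only audit log. The policy (blocking calls that touch certain fields) and the file name are made-up placeholders.

```python
# Toy "opinionated security layer" for agent actions (illustrative only):
# gate each tool call on a simple data-handling policy and write an audit record
# whether or not the call was allowed.
import json
import time
from typing import Callable

BLOCKED_FIELDS = {"ssn", "card_number"}   # hypothetical data-handling policy
AUDIT_LOG = "agent_audit.jsonl"           # append-only audit trail

def audited_call(agent_id: str, tool_name: str,
                 tool: Callable[..., dict], **kwargs) -> dict | None:
    """Run a tool call only if it passes policy, and record the outcome."""
    violations = BLOCKED_FIELDS & set(kwargs)   # crude check: flagged argument names
    allowed = not violations
    result = tool(**kwargs) if allowed else None
    with open(AUDIT_LOG, "a") as log:           # auditable governance trail
        log.write(json.dumps({
            "ts": time.time(),
            "agent": agent_id,
            "tool": tool_name,
            "allowed": allowed,
            "violations": sorted(violations),
        }) + "\n")
    return result
```

Real enterprise stacks layer on runtime threat detection, identity, and model-level controls, but the shape is the same: nothing an agent does should be unauditable or ungated.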
YouTube Shorts adds AI-native video creation via Google's Veo
Generative video just got a powerful proving ground. YouTube is weaving Google’s Veo 3 video model into Shorts, allowing creators to generate footage from text prompts and remix content with AI tools baked into the platform.
This is Google bringing the fight to TikTok, and doing it with home-court advantage. While startups such as Runway and Pika are still experimenting with synthetic video apps, YouTube can instantly deploy generative tools to a billion-user audience. That flips the dynamic: the AI-native creator stack is going mainstream not from the periphery but from one of the world's biggest media platforms.
Veo 3 is no toy, either: it's tuned for semantic fidelity and can generate complex scenes from descriptive text. Expect this rollout to spark new creator economies, but also new legal fights over IP, monetization, and remix rights when AI starts generating the next viral trend. It's not just content; it's commerce, and YouTube just cracked open the toolkit.
Albania appoints an AI-generated minister for government procurement
File this under "sci-fi meets civil service." Albania has officially appointed Diella, an AI-powered agent, as its minister for public procurement, making it the first government to hand real operational authority to an AI.
According to Prime Minister Edi Rama, the move is designed to eliminate nepotism and corruption from the procurement process, a major gating issue in Albania's bid to meet European Union guidelines. Diella is built on recent AI models and previously served as a government-facing chatbot. Now she'll oversee how public tenders are awarded, essentially turning a key function of governance into a machine-mediated process.
Real oversight is still opaque (humans are presumably still in the loop), but the precedent has been set. What happens when AI isn't just analyzing policy but executing it? And what does "accountability" look like when a bot signs the contract? Emerging markets could view this as fast-track reform; critics already see it as hiding dysfunction behind a digital curtain. Either way, the world just crossed a line.
MIT releases universal scaling law framework for model training
For anyone sweating GPU costs and model performance forecasting, MIT might've delivered your new favorite spreadsheet. Researchers from the MIT-IBM Watson AI Lab just published what amounts to a universal playbook for using scaling laws to predict the performance of large models before you foot the training bill.
Using a massive meta-analysis across 40 model families, including LLaMA, Pythia, and GPT derivatives, MIT found that you can accurately estimate a model's final performance by partially training several smaller models under controlled conditions. An absolute relative error (ARE) below 4% is achievable, and up to 20% is still useful: more than enough to inform whether a nine-figure GPU spend is worth it.
This framework isn't just academic; it's operational. The implications for CFOs, ML engineers, and AI infrastructure teams are clear: scale smart, not blind. Need to budget a 70B-parameter rollout or forecast ROI across different data regimes? Start here. It's part of a larger shift toward budget-aware AI development, where performance isn't chased at all costs but projected, optimized, and owned.
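Here's a minimal sketch of the extrapolation workflow, assuming a standard saturating power law rather than the paper's exact functional forms, and with made-up numbers: fit the curve to cheap small-model runs, predict the big run before you pay for it, then score the forecast with ARE once the real number lands.

```python
# Minimal scaling-law extrapolation sketch (illustrative, not the MIT-IBM recipe):
# fit L(N) = a * N**(-b) + c to losses from small, partially trained models,
# then predict the loss of a much larger target model.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n_params, a, b, c):
    """Loss falls as a power of parameter count, down to an irreducible floor c."""
    return a * n_params ** (-b) + c

# Hypothetical measurements: (parameter count, validation loss) from cheap runs.
n_small = np.array([1.25e8, 3.5e8, 7.6e8, 1.3e9, 2.7e9])
loss_small = np.array([3.42, 3.11, 2.92, 2.78, 2.61])

# Fit the three free coefficients to the small-model observations.
(a, b, c), _ = curve_fit(power_law, n_small, loss_small,
                         p0=(10.0, 0.1, 1.5), maxfev=20000)

# Extrapolate to a hypothetical 70B-parameter target before committing the budget.
n_target = 7.0e10
predicted = power_law(n_target, a, b, c)
print(f"predicted loss at 70B params: {predicted:.3f}")

# Once the big run finishes, score the forecast with absolute relative error;
# the paper's framing treats ~4% as achievable and ~20% as still decision-useful.
observed = 2.10  # placeholder for the eventual measured loss
are = abs(predicted - observed) / observed
print(f"ARE: {are:.1%}")
```

The fitted curve, not the individual small runs, is the deliverable: it turns "should we spend nine figures on this training run?" into a forecast you can argue about before the invoice arrives.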
NVIDIA and the UK launch AI “megafactories” with 120,000 GPUs
Sovereign AI just got its flagship. NVIDIA and the UK government have announced a plan to build national-scale AI infrastructure powered by up to 120,000 Blackwell GPUs. This would be the largest compute buildout in UK history, and arguably its most important digital investment to date.
This isn't just about scale; it's about where that scale lives. While most countries rent their AI future from cloud platforms based in the US or China, the UK is now being positioned as an autonomous AI power, capable of training frontier models and housing large-scale inference workloads. The Blackwell chips, NVIDIA's next-gen silicon, are optimized for the high-bandwidth, low-latency environments needed to train models with massive context windows.
For NVIDIA, this is textbook moat-building: partner with national ecosystems to future-proof demand while locking in infrastructure and sovereignty premiums. For founders and investors, it's a reminder that the AI race is no longer just corporate; it's geopolitical. And the maps are being redrawn in silicon.
Until tomorrow, keep your agents auditable and your scaling laws tight.
- Aura