The Midas Report
AI Scales, Invades, and Expands as Healthcare Delivers, Agents Flood Networks, Meta Bets $10B

Here’s what’s unfolding today in the fast-shifting world of AI. Healthcare finally stops piloting and starts scaling. AI agents flood your enterprise network without even knocking. Meta bets $10B on small-town silicon. And a tough call all founders are circling: build or buy their AI core. Let’s dig in.
Healthcare AI is finally delivering: the pilots are over
After years of fancy decks and failed pilots, generative AI in healthcare is now locking into real clinical workflows and enterprise systems, and the numbers back it up.
Galen Growth’s new Pharma Innovation Report 2024 highlights a sector that has quietly moved from experimentation to execution. Since 2020, 72% of digital health startups partnering with pharma have embedded AI, many with clinically validated outcomes. And investors are rewarding results: AI-enabled health ventures raised $7.8 billion in the first half of 2025 alone, 65% of all digital health capital.
How’d we get here in one of the planet’s most heavily regulated markets? With a focus on targeted, validated use cases, not generic chatbots. Platforms like LillyDirect and PfizerForAll have integrated AI across telehealth, medication adherence, logistics, and digital pharmacies. Tools like Galen Growth’s Alpha Copilot embed domain-specific AI into research workflows. And significantly, the winners are often partnerships, not in-house builds. MIT’s NANDA initiative found that outsourcing to specialized AI vendors yields a 67% deployment success rate, compared with a 5% hit rate for in-house pilots chasing flashy GenAI gains.
The lesson for adjacent sectors like pharma, insurance, and even financial services? The playbook now exists. Success hinges less on model size and more on embedding AI tightly into the fabric of regulated processes, with a strong side of clinical or executive buy-in.
AI agents aren’t the future, they’re already solving the to-do list
We’re now well past AI hype and into AI headcount. Autonomous agents, trained to complete multi step tasks across enterprise domains, are proving they can handle real work.
From analyzing contracts and drafting HR policies to running fraud detection and reconciling finance reports, early deployments of AI agents are delivering process improvements, time savings, and higher accuracy in over 30 routine use cases. These aren’t theoretical demos; they’re systems live in production, showing operators exactly what automation at scale can do.
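The control flow behind these deployments is usually a plan of discrete steps executed in sequence, with each step logged for auditability. Here is a minimal sketch of that loop for a hypothetical finance-reconciliation agent; the class, step names, and logging scheme are illustrative, not any vendor’s actual implementation (a real agent would call a model or an API at each step):

```python
from dataclasses import dataclass, field


@dataclass
class Step:
    name: str
    done: bool = False


@dataclass
class ReconciliationAgent:
    """Hypothetical multi-step agent: walks a fixed plan and logs each action."""
    steps: list = field(default_factory=lambda: [
        Step("fetch_invoices"),
        Step("match_payments"),
        Step("flag_discrepancies"),
        Step("draft_report"),
    ])
    log: list = field(default_factory=list)

    def run(self) -> bool:
        for step in self.steps:
            # In production, this is where a model call or API request would go;
            # here we only record the action to show the control flow.
            self.log.append(f"completed:{step.name}")
            step.done = True
        return all(s.done for s in self.steps)
```

The audit log is the point: when a step fails or a regulator asks questions, operators can see exactly which actions the agent took and in what order.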
Why does this matter? Because concrete examples are finally outstripping abstract potential. And when operators can point to cost reduction, policy consistency, or processing times slashed from days to minutes, they can justify investment internally and model adoption externally. Expect the floodgates to open in domains like compliance, customer support, and procurement, where multi-step knowledge work has long been under-optimized.
Infrastructure is breaking under the weight of AI traffic
If AI agents are doing real work, it turns out that work has a serious data footprint. New analysis shows AI generated traffic, especially from autonomous agents, is surging across enterprise networks, creating pressure on infrastructure, security, and observability stacks.
In just six months, AI-driven traffic jumped from 2.6% to 8.2% of all verified bot traffic. OpenAI’s crawlers alone made 1.2 billion requests across enterprise platforms in June. And over a third of all internet traffic now comes from non-browser sources: APIs, SDKs, mobile apps, or autonomous agents acting without direct human input.
This has serious implications. Legacy security systems, often built for binary allow/deny traffic models, can’t distinguish between malicious bots and helpful agents querying APIs or scraping internal data indexes. The line between automations and adversaries is blurring fast.
The way forward? Intent-based detection systems that analyze behavioral context and dynamic telemetry, not just IP addresses or blocklists. Enterprises are also starting to think about monetizing AI demand: charging for API usage, throttling access, and setting licensing tiers for agent use, something that might redefine the economics of data-heavy SaaS platforms.
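To make the contrast with allow/deny lists concrete, here is a toy sketch of intent-based classification: instead of checking an IP against a blocklist, it scores a client’s behavior from request telemetry (rate, path diversity, error ratio). The feature set and thresholds are illustrative assumptions, not a production detection model:

```python
def classify_client(events: list) -> str:
    """Toy intent-based classifier over request telemetry.

    Each event is a dict with keys: "t" (timestamp, seconds),
    "path" (URL path), "status" (HTTP status code).
    Thresholds below are illustrative, not tuned on real traffic.
    """
    window = max(events[-1]["t"] - events[0]["t"], 1)  # avoid divide-by-zero
    rate = len(events) / window                        # requests per second
    distinct_paths = {e["path"] for e in events}
    error_ratio = sum(1 for e in events if e["status"] >= 400) / len(events)

    if rate > 50 and error_ratio > 0.3:
        return "likely_malicious"   # hammering endpoints, mostly failing
    if rate > 50 and len(distinct_paths) > 20:
        return "bulk_crawler"       # fast and broad, e.g. an AI crawler
    return "benign"                 # human-paced, narrow access pattern
```

Note that a bulk crawler and an attacker can share an IP range and a user agent; only the behavioral signature, fast-and-broad versus fast-and-failing, separates them, which is exactly what static blocklists miss.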
New platform puts AI agents into every workflow on demand
One platform is betting it can make AI agents enterprise-native, no prompt engineering required. Global.AI is going after the holy grail of agent adoption: enabling anyone to convert a business workflow into a dynamic, self-improving loop without touching a line of code.
Their new offering claims to embed intelligent process agents directly into workflows across operations, finance, support, and admin, learning from each cycle and updating actions over time. The pitch resembles what Zapier did for task automation, leveled up with cognitive autonomy.
While marketing promises often outpace functional depth, this type of product is a big unlock if it works. Because today, deploying AI agents usually means bespoke integrations and glue code. A no code layer makes agents not only accessible, but scalable. This could represent a pivotal shift in where AI value gets captured, less in the model wars, more in deployment rails.
Build or buy? A defining question for enterprise AI strategies
As AI becomes a first-class layer in enterprise architecture, execs across industries are hitting the same fork in the road: should we build our AI stack, or buy trusted components?
A new practical guide lays out a framework for making that call. Variables include team capability, data readiness, time to market, ability to fine-tune models, cost of ongoing maintenance, and regulatory risk. The key: there’s no one-size-fits-all answer. Some functions, like internal copilots or tech-support agents, might be best built in-house to tailor them to workforce systems. Others, like document summarization or vision models, can be bought off the shelf and integrated faster than you can open a GitHub repo.
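One way to operationalize a multi-variable framework like this is a weighted score: rate each factor from -1 (favors buy) to +1 (favors build), weight it by how much it matters to your business, and sum. The factor names and weights below are our own illustrative assumptions, not the guide’s actual methodology:

```python
def build_vs_buy(scores: dict, weights: dict = None) -> str:
    """Weighted-score sketch for the build-vs-buy call.

    scores: factor -> rating in [-1, 1], where +1 favors build, -1 favors buy.
    weights: factor -> importance; defaults below are illustrative only.
    """
    default_weights = {
        "team_capability": 1.0,   # can your team build and run this?
        "data_readiness": 1.0,    # proprietary data that vendors can't use?
        "time_to_market": 1.5,    # urgency usually favors buying
        "fine_tuning_need": 1.0,  # deep customization favors building
        "maintenance_cost": 1.0,  # who carries the ongoing ops burden?
        "regulatory_risk": 1.5,   # compliance may demand in-house control
    }
    weights = weights or default_weights
    total = sum(weights[factor] * rating for factor, rating in scores.items())
    return "build" if total > 0 else "buy"
```

The value of writing it down, even this crudely, is that the weights force the strategic argument into the open: a team that sets `regulatory_risk` at triple weight is making a very different bet than one that maximizes `time_to_market`.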
What’s clear is that misjudging this decision often leads to overspending, delays, or worse, failure to scale. Enterprises need to treat “build vs buy” not as an engineering debate, but as a core strategic lever with real opportunity cost.
Meta’s $10B ambitions go rural
In case you thought the era of hyperscale AI buildouts was slowing, Meta just reminded us otherwise. The company is putting down a $10 billion bet on a rural AI infrastructure megaproject in northern Louisiana, building one of the largest data center complexes in North America.
On its face, this is classic hyperscaler expansion. But underneath, it signals a few key shifts: AI workloads are demanding compute at massive scale, far beyond current cloud levels; rural regions, with energy headroom and land access, are becoming critical players in the data supply chain; and AI, once “just software,” is driving real-world capital flows in infrastructure, power, and labor.
For investors and policymakers, it’s another signal that AI’s long game will be shaped not just by software moats, but by physical and economic infrastructure as well.
Until tomorrow,
Aura