A new hiring war, custom silicon, and agent empires

While Silicon Valley’s coffee stocks keep enterprise dev teams running, AI’s transformation of actual enterprise workflows is accelerating. Today’s roundup puts that in clear view.
OpenAI continues its quest to own the entire talent stack, human and machine, while Google Cloud and Mastercard give us proof-of-life pings from the real world, where AI agents aren’t just ideas but profit-generating colleagues. Meanwhile, OpenAI’s chip ambitions, Cisco’s RAG dreams, and a new Capgemini survey all show where the next trench lines for competition are being drawn.
Let’s unpack.
OpenAI wants to disrupt LinkedIn and own the recruiter stack
OpenAI announced it’s building its own hiring platform, creatively named OpenAI Jobs, that will use AI to automate candidate-employer matching. It’s slated to launch in 2026, with a supporting certification initiative (OpenAI Academy) dropping in beta next year. The idea? Use your skills. Get certified by OpenAI. Match with companies hiring for AI-fluent talent. Repeat.
Fidji Simo, OpenAI’s new head of applications, described it as a way to “connect people with opportunities,” while indirectly confirming OpenAI’s move deeper into application-layer products. Of course, a hiring tool that makes smarter matches using LLMs isn’t new. What is new is the potential vertical integration: OpenAI trains the AI, certifies the people, then places them into companies that increasingly depend on OpenAI’s models. It’s not just a job tool; it’s a self-reinforcing ecosystem.
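What might that matching look like under the hood? Nobody outside OpenAI knows yet, but the industry-standard recipe is embedding similarity. Here’s a minimal sketch, assuming OpenAI’s public embeddings API; the model name, candidate/job text, and scoring are all illustrative, not the actual Jobs implementation:

```python
# Illustrative only: a toy embedding-based candidate <-> job matcher.
# OpenAI hasn't said how Jobs will work; everything below is an assumption.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

candidates = [
    "Certified in prompt engineering and RAG pipelines; 3 years of Python.",
    "Data analyst, SQL and Tableau, completed an AI-fluency certificate.",
]
jobs = [
    "Hiring an LLM application engineer to build internal agents.",
    "Seeking a BI analyst comfortable with AI-assisted reporting.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

cand_vecs, job_vecs = embed(candidates), embed(jobs)
# Normalize rows so the dot product below is cosine similarity.
cand_vecs /= np.linalg.norm(cand_vecs, axis=1, keepdims=True)
job_vecs /= np.linalg.norm(job_vecs, axis=1, keepdims=True)

scores = cand_vecs @ job_vecs.T  # scores[i, j] = fit of candidate i for job j
for i, row in enumerate(scores):
    print(f"Candidate {i} best matches job {row.argmax()} (score {row.max():.2f})")
```

The interesting part isn’t the math, which any startup can replicate; it’s who controls the certification data feeding both sides of that matrix.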
Also worth noting: this could create some interesting tension with Microsoft, OpenAI’s biggest backer, which happens to own LinkedIn, the incumbent in the career-networking space. Keep an eye on whether OpenAI stays friendly with the ecosystem or builds a walled garden around recruiting. This is less about jobs and more about control of the AI-native talent layer.
OpenAI is also building its own AI chips, starting 2026
As if rewriting the recruiting playbook wasn’t enough, OpenAI is also going full stack on infrastructure. According to the Financial Times, the company will start mass-producing its own custom AI chip in partnership with Broadcom by 2026, with fabrication reportedly handled by TSMC. The chip will be built for OpenAI’s internal workloads, not for resale, and is aimed at reducing dependency on Nvidia and AMD for GPU-intensive inference and training.
This is OpenAI’s first confirmed move into vertical silicon, but it’s not surprising. Google (TPUs), Amazon (Trainium/Inferentia), and Meta (MTIA) have all moved in this direction to optimize performance and cut costs. For OpenAI, the stakes are likely even higher. Keeping up with compute demand for ChatGPT, enterprise APIs, and application-layer experiments (like OpenAI Jobs) means it can’t afford to get boxed out of scarce GPU supply, or to keep paying Nvidia’s ever-steeper premium.
The Broadcom partnership also shows the strategic benefit of becoming a priority client for chipmakers. With over $10 billion in AI infrastructure orders reportedly placed, OpenAI is signaling it’s not just tinkering with chip design; it’s committing to controlling the means of AI production.
Cisco, Nvidia, and VAST roll out infrastructure for agentic AI at scale
While OpenAI builds the top and bottom of the stack, others are aggressively targeting the middle, and the enterprise-grade hardware to support it. Cisco, Nvidia, and data platform VAST announced a new integrated infrastructure solution to support agentic AI and retrieval-augmented generation (RAG) at scale.
The new stack enables enterprises to run real-time agents on private datasets, combining technologies across all three companies. It also includes enterprise-ready features like role-based access, security compliance, and Splunk integrations, signaling it’s built for corporate IT, not just AI moonshots.
The system has already shown latency improvements (RAG queries go from minutes to seconds), and is orderable now. If you’re a startup building agent-based AI tools targeting enterprises, this is your future runtime playbook: probably running in a Cisco rack, on a VAST OS, powered by Nvidia. Translation: it’s getting way easier to sell AI into enterprise workflows without asking prospects to assemble their own infrastructure from scratch.
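If “RAG at scale” sounds abstract, the core loop is small: embed a private corpus, retrieve the chunks most relevant to a query, and hand them to a model as context. A minimal sketch follows, using OpenAI’s public SDK purely for illustration; the Cisco/Nvidia/VAST stack swaps each piece for an enterprise-grade equivalent (a real vector store, access controls, GPU-accelerated inference), and the sample documents are made up:

```python
# Minimal retrieval-augmented generation (RAG) loop -- illustrative only,
# not the Cisco/Nvidia/VAST product. Assumes OPENAI_API_KEY is set.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# Stand-in for a "private dataset"; in production this lives in a vector
# store with role-based access checks in front of it.
docs = [
    "Q3 support SLA: priority tickets must get a response within 4 hours.",
    "Refund policy: enterprise customers get prorated refunds within 30 days.",
    "On-call rotation: infrastructure pages go to the platform team first.",
]
doc_vecs = embed(docs)

def rag_answer(question: str, k: int = 2) -> str:
    q_vec = embed([question])[0]
    # Rank documents by cosine similarity to the question.
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    context = "\n".join(docs[i] for i in np.argsort(sims)[::-1][:k])
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return chat.choices[0].message.content

print(rag_answer("How fast do we have to respond to priority tickets?"))
```

The minutes-to-seconds claim is about doing that retrieval step against enormous private datasets with dedicated hardware, which is exactly the part a toy script can’t show.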
Google Cloud reports actual ROI from AI agents, not vaporware
While much of the industry still lives in demos and proofs of value, Google Cloud is publishing numbers that suggest AI agents are finally working for real customers, and generating real business returns. In a detailed case-study push last week, Google Cloud pointed to enterprise clients who’ve moved past experimentation and into measurable productivity wins: reduced support-ticket times, better lead conversion, and more efficient internal workflows.
The significance here isn’t just Google’s tech (custom agents powered by Vertex AI and Gemini, in most cases), but the validation that AI can now meet, and exceed, ROI thresholds that matter to CIOs. For large orgs still stuck in “should we try this?” territory, this is the kind of budget-justifying material that accelerates adoption. If one vendor can show a 40% decrease in operational costs with AI agents, expect other enterprises to rush toward similar numbers, or explain to the board why they aren’t.
Mastercard’s in-house strategy for embedding AI into the workforce
Mastercard shared a behind-the-scenes look at how it’s embedding AI across its employee base, not just inside its core software. Its homegrown AI platform, dubbed Unlocked, helps employees upskill, get matched to internal roles, and automate their own tasks: a microcosm of what workforce transformation could look like inside large, regulated companies.
The initiative is built around four key pillars: talent deployment, productivity improvement, continuous learning, and employee well-being. And, crucially, it’s agent-powered, showing a real-world use of AI not just for features or customer interfaces, but as a persistent digital teammate. For other orgs trying to figure out how to crawl into the AI age without blowing up their current systems or employee morale, this might be the playbook to ~~borrow~~ steal.
Capgemini: 60% of enterprises expect AI to be a team member or supervisor by 2026
If you’re wondering how quickly AI is being operationalized inside global organizations, Capgemini just answered: very. In a new study, 60% of surveyed enterprises said they expect AI to function as either a team member or a supervisor to other AIs by 2026; in most cases, within the next 12 months.
Now, that doesn’t mean enterprises fully trust AI autonomy yet (many don’t), but it does show that agentic AI isn’t seen as sci-fi anymore. Increasingly, businesses are thinking of AI as part of the org chart, even if the governance, security, and audit frameworks are still playing catch-up.
For AI startups selling into the enterprise, the message is clear: this market is sprinting toward AI integration. But trust, oversight, and deployment support aren’t solved, which means there’s still a moat to build, if you know where to look.
Until tomorrow,
— Aura