AI Adoption, Fraud, and Infrastructure Are Moving Faster Than Leadership
Today’s developments show how usage, risk, and compute ownership are reshaping enterprise AI in real time.

The AI story right now is not just better models. It's where the models live, who controls the pipes, and what happens when everyone quietly starts using them without asking permission.
Today's roundup is a neat snapshot of that shift: executives learning their orgs are already running shadow AI, fraud teams staring down autonomous scam traffic, and big incumbents pouring concrete and silicon to make sure they own the next layer of enterprise workflow.
McKinsey says leaders are undercounting gen AI usage by 3x and that gap is the real adoption bottleneck
McKinsey’s latest workplace report lands a little uncomfortably for the C-suite. Executives estimate only 4 percent of employees use gen AI tools intensively, while employees self-report 13 percent. The report is based on surveys of 3,613 US employees and 238 US C-level executives conducted in late 2024, and it frames the core problem bluntly: employee readiness is not the limiter, leadership readiness is.
This matters because shadow adoption is not just a governance headache, it is a strategy leak. When teams improvise with ChatGPT, Claude, Gemini, or Llama on their own, you get pockets of productivity and pockets of risk, plus duplicated spend and a mess of unmeasured outcomes. McKinsey notes only 25 percent of companies have a fully defined gen AI roadmap and only 1 percent call their rollouts mature, which makes the usage perception gap feel less like a rounding error and more like a management blind spot.
What to watch is the second-order effect. Once leadership realizes usage is already widespread, the smart move is not to clamp down. It is to standardize what is working, wrap it in governance, and invest in evaluation, such as benchmarks and transparency tooling, since most firms still are not doing any of it. The upside is real, but so is the organizational embarrassment of discovering your “pilot phase” ended months ago.
Experian warns AI fraud will hit a tipping point in 2026 as bots start shopping and scamming at scale
Experian’s 2026 Future of Fraud Forecast is a reminder that AI progress has a matching curve on the adversary side. The firm flags “machine to machine mayhem” as the top fraud threat for 2026, driven by AI agents acting on behalf of consumers in ecommerce and by deepfakes and emotionally intelligent scam bots that can personalize manipulation. In other words, the internet is about to get crowded with non-human customers, and some of them will be armed.
The numbers are already ugly. The US FTC reports consumers lost $12.5 billion to fraud last year, and Experian says that while reports have hovered around 2.3 million annually, losses rose 25 percent. Sixty percent of companies saw fraud losses increase from 2024 to 2025, and 72 percent of business leaders now rank AI-enabled fraud and deepfakes as a top operational challenge.
The tactical shift is important. As Experian’s Kathleen Peters puts it, it is no longer enough to block bots. Businesses have to decide whether an agent is a good bot acting with user authorization or a malicious one. That is a new product category waiting to happen, spanning identity, authorization, and intent verification. It also intersects with platform power. Amazon already blocks third-party shopping bots and has pursued legal action to keep Perplexity agents out. The fight is not only about fraud losses, it is about who owns the customer relationship when the customer shows up as software.
TCS and AMD partner to turn enterprise AI from pilots into production using AMD compute across cloud and edge
Tata Consultancy Services and AMD announced a strategic collaboration to help global enterprises scale AI adoption, modernize hybrid cloud, and build secure digital workplaces. Translation for operators: a major services integrator and a major chip vendor are packaging the messy middle of AI delivery into something enterprises can buy with fewer unanswered questions.
On the AMD side, this spans Ryzen CPUs for client devices, EPYC CPUs and Instinct GPUs for the data center, plus embedded options like adaptive SoCs and FPGAs for edge deployments. On the TCS side, it is implementation muscle and vertical playbooks. The partnership explicitly calls out co developed gen AI frameworks for life sciences like drug discovery, manufacturing like cognitive quality engineering and smart manufacturing, and BFSI use cases like intelligent risk management.
Strategically, this is about production gravity and soft lock-in. TCS will upskill and certify associates on AMD hardware and software, building an internal bench that naturally prefers AMD-powered architectures. If you are a founder selling into the enterprise, note what is happening: buyers are starting to prefer solutions that arrive as a tested bundle across training, inference, security, and deployment. If you cannot plug into that buying motion, you will feel it in procurement timelines.
Meta launches Meta Compute and commits over $72 billion to AI infrastructure as it chases personal superintelligence
Meta is not being subtle. It launched a new top-level unit called Meta Compute to expand AI infrastructure, with Mark Zuckerberg personally overseeing planning and operations for its global data center fleet. The company is committing over $72 billion to AI infrastructure for the 2025 fiscal year, and it already operates around 30 data centers, mostly in the US.
Zuckerberg says Meta plans to build tens of gigawatts this decade and hundreds of gigawatts over time. That is not a model roadmap, it is an industrial roadmap. Meta is also lining up energy supply, including deals with nuclear providers like Vistra, TerraPower, and Oklo. This is the quiet part of the AI race that most model demos politely ignore: power, land, grid connections, and long term capacity planning.
Why it matters for investors is straightforward. If compute becomes the strategic advantage, then the winners will not just have better algorithms, they will have lower marginal inference costs, faster iteration cycles, and leverage over ecosystem pricing. Meta is trying to be both platform owner and physical layer landlord, especially as it pushes Llama and embodied AI ambitions. Founders should read this as a signal that the infrastructure bottleneck is not easing, it is being monopolized.
Salesforce turns Slackbot into a real AI work agent
Salesforce is relaunching Slackbot as a personal AI agent for work, now generally available for Business+ and Enterprise+ customers. This is a meaningful shift from helper bot to action-taking agent inside the place where work actually happens. Slackbot is positioned to find answers, organize work, create content, schedule meetings, and take actions within Slack, with Salesforce saying it will soon be the best way for Slack users to collaborate with Agentforce and third-party tools.
The competitive subtext is loud even if nobody says it too directly. This is Salesforce taking aim at Microsoft Copilot by owning the conversational surface area of the enterprise day. If your employees live in Slack, the default AI should live there too. That is how categories get rewired: not by adding features, but by changing where decisions and tasks begin.
What to watch is adoption driven by embedding rather than evangelism. AI that shows up inside a familiar workflow has a very different rollout curve than a standalone tool, especially when it can tap enterprise knowledge and permissions. For builders, the opportunity is to become the specialized capability that agents call. For operators, the job is to ensure the agent layer respects policy, data boundaries, and auditability, because once the bot can act, it can also accidentally act.
Sign up to the Midas Report to receive our daily breakdown of the AI news that matters.