The Midas Report
GenAI Grows Up and Shrinks Down at the Same Time
Top AI News - January 19, 2026

Enterprise AI is getting stricter, mobile AI is getting richer, and hardware breakthroughs are quietly rewriting the cost curve
If you feel like AI is simultaneously becoming more industrial and more everywhere, you are not imagining things. Today’s lineup spans a millionfold energy cut in training, a sharper enterprise playbook for reliability, a fresh wave of APAC budget expansion, IBM packaging agentic AI into a consulting product, and a reminder that consumers are already voting with their thumbs and wallets.
The theme is execution over excitement. The winning teams in 2026 will not be the ones with the flashiest demo. They will be the ones who can ship systems that behave, pencil out economically, and scale without melting the planet or the org chart.
New training method cuts AI energy use by a millionfold
A memristor training update trick turns energy into a rounding error, and makes edge AI feel a lot less imaginary.
Researchers from Zhejiang Lab and Fudan University published an approach called error-aware probabilistic update that trains neural networks directly on memristor-based analog in-memory computing hardware while slashing training energy use by nearly six orders of magnitude versus GPUs. The work appeared in Nature Communications, and it tackles a very specific pain point that has held analog training back: write errors in memristors that worsen over time due to device relaxation.
Their move is conceptually elegant. Instead of performing dense, constant weight updates like standard backprop, the method probabilistically transforms the update magnitude so the average update stays the same but the number of actual writes plummets. In experiments and simulations, it drives update frequency below 0.1 percent, with one reported run updating only 0.86 of every thousand parameters while training a 152-layer ResNet. Fewer writes mean less energy and longer device life, and the team reports roughly a 1,000-fold lifetime extension thanks to reduced write operations.
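For intuition, the core trick resembles stochastic rounding: replace each tiny dense update with a fixed-size write applied with a probability chosen so the expected update is unchanged. The NumPy sketch below is an illustrative toy, not the paper’s error-aware algorithm; the `step` granularity, learning rate, and probability rule are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def probabilistic_update(weights, grads, lr=0.1, step=0.05):
    """Apply each weight update with probability proportional to its size.

    Instead of writing every small analog update (each write is costly and
    error-prone on memristors), write a fixed-size step with probability
    p = |lr * grad| / step. The expected update then equals the dense
    update, while the vast majority of writes are skipped.
    """
    desired = -lr * grads                          # dense SGD update
    p = np.clip(np.abs(desired) / step, 0.0, 1.0)  # write probability
    mask = rng.random(weights.shape) < p           # which cells get written
    new_weights = weights + mask * np.sign(desired) * step
    return new_weights, mask.mean()                # weights, fraction written
```

With small gradients only a tiny fraction of cells is written per step, while the mean update still matches dense SGD in expectation, which is the property that lets write counts drop by orders of magnitude.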
The reason founders and investors should care goes beyond the headline number. If training can happen on low-power analog hardware, you can imagine a future where certain models are trained or fine-tuned closer to the edge, where power budgets and infrastructure are constrained. It also reframes the “AI is capped by energy” argument.
Watch whether this approach generalizes cleanly beyond memristors to other non-volatile memory tech like ferroelectric transistors or magnetoresistive RAM, and whether the tooling ecosystem emerges to make analog training feel less like a lab project and more like a product roadmap.
Salesforce targets enterprise friction in GenAI adoption
Salesforce is betting the next phase of enterprise GenAI is not smarter models, but stricter systems.
Salesforce is leaning into the reality that pilots are easy and production is a compliance nightmare. Its message is that strong benchmark performance does not automatically translate to consistent business outcomes, and that enterprises do not want 97 percent correctness when the workflow needs to work 100 percent of the time. That framing is refreshingly unromantic and deeply accurate.
The strategy is to combine generative AI with deterministic systems so rules, standard operating procedures, and audit constraints remain non-negotiable. In practice that means a blend of large and small language models, used where flexibility, reasoning, and empathy are needed, with rule-based logic wrapping the parts that cannot be allowed to drift. This is less “LLM as the app” and more “LLM as a component inside an engineered workflow,” which is how regulated industries and serious back offices actually buy software.
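As a toy illustration of that split, the sketch below wraps a hypothetical model-proposed action in deterministic policy checks that run after the model and cannot be overridden. The refund scenario, limits, and function names are invented for illustration and are not Salesforce’s implementation.

```python
from dataclasses import dataclass

@dataclass
class RefundDecision:
    amount: float
    approved: bool
    reason: str

MAX_AUTO_REFUND = 100.0  # hypothetical hard policy limit

def governed_refund(proposed_amount: float, order_total: float) -> RefundDecision:
    """Deterministic guardrails around a model-proposed refund.

    The LLM may suggest any amount; the rules below are non-negotiable
    and run last, so the workflow cannot drift past policy.
    """
    if proposed_amount > order_total:
        return RefundDecision(0.0, False, "exceeds order total; escalate to human")
    if proposed_amount > MAX_AUTO_REFUND:
        return RefundDecision(0.0, False, "above auto-approval limit; escalate to human")
    return RefundDecision(proposed_amount, True, "within policy")
```

The design point is that the generative component only proposes; a rule layer with audit-friendly reasons decides, which is what makes the workflow acceptable to compliance teams.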
The signal here is market maturity. Vendors who keep selling general-purpose copilots will get squeezed by buyers asking for reliability, governance, and measurable ROI. Watch in 2026 for procurement language to shift from model choice to system guarantees, escalation paths, and proof that edge cases are handled without a human babysitter on call.
APAC enterprises to lift AI budgets by 15% in 2026
ASEAN spend is climbing, hybrid is the default architecture, and agentic AI is moving from curiosity to cautious trials.
Lenovo’s CIO Playbook 2026 study, run by IDC, surveyed 920 APAC tech leaders and found that 96 percent of ASEAN organizations plan to increase AI investment in 2026, averaging 15 percent growth. Even more telling is where the money goes. Hybrid AI infrastructure is being adopted by 86 percent of APAC organizations, reflecting a pragmatic mix of public cloud, on-premises, and edge deployments driven by cost control and data sovereignty.
The economics are forcing the architecture. Lenovo points out inference can cost 15 times more than training over a model’s lifetime, and its APAC leadership expects 75 percent of AI compute to be dedicated to inference by 2030. That is a quiet but profound shift. If your product relies on heavy inference, your margin and your customer’s cloud bill will become the same conversation. This is why hybrid deployment, quantization, caching, and model right-sizing are turning into board-level topics rather than engineering trivia.
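The arithmetic behind that claim is worth making explicit: if lifetime inference spend is 15 times training spend (Lenovo’s figure as reported), inference accounts for 15/16, or roughly 94 percent, of total lifetime cost. A back-of-envelope helper, purely illustrative:

```python
def inference_share_of_lifetime_cost(inference_to_training_ratio: float = 15.0) -> float:
    """If lifetime inference spend is N times training spend,
    the inference share of total lifetime cost is N / (N + 1)."""
    return inference_to_training_ratio / (inference_to_training_ratio + 1.0)
```

At the reported 15x ratio this gives 0.9375; even a far smaller ratio of 3x still puts inference at 75 percent of lifetime spend, which is why serving efficiency dominates the economics.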
Agentic AI is also edging forward. Sixty percent of APAC organizations are exploring or planning limited deployments, but only 10 percent are ready to scale, largely due to governance, integration, and data quality. The opportunity for builders is clear: win by making hybrid operations and agent governance boring and manageable, especially as non-IT departments increasingly fund AI initiatives.
IBM rolls out Enterprise Advantage to scale agentic AI
IBM is packaging its internal AI delivery playbook into a productized consulting lane for governed agentic deployments.
IBM launched IBM Enterprise Advantage, an asset-based consulting service available January 19, 2026, aimed at helping clients build, govern, and operate internal AI platforms at scale. The pitch is low disruption: clients can scale agentic AI without changing cloud providers, models, or core infrastructure, and can integrate across AWS, Google Cloud, Azure, IBM watsonx, plus open-source and closed-source models.
This is IBM recognizing the true constraint in enterprise AI is not model access, it is platform operations. The offering builds on IBM Consulting Advantage, which IBM says has supported over 150 client engagements and boosted consultant productivity by up to 50 percent. In other words, IBM is trying to sell the assembly line, not just the parts, and to do it with reusable assets rather than bespoke consulting hours.
For operators, the watch item is whether these packaged services become the “agentic AI platform in a box” that wins budget faster than DIY internal builds. For startups, this is both competition and validation: buyers want governed platforms and migration-free adoption. If you sell into enterprise, your differentiation needs to be sharper than “we orchestrate agents,” because IBM is now saying that too, with procurement-friendly wrapping.
Chart shows generative AI apps are dominating mobile
Consumer GenAI is no longer a novelty category; it is becoming a top-tier mobile business line.
Visual Capitalist, citing Sensor Tower, shows generative AI apps on track to approach 4 billion downloads, $4.8 billion in in-app purchase revenue, and more than 43 billion hours of time spent in 2025. The forward projection is even louder: consumer spending is expected to exceed $10 billion in 2026, with generative AI rising from the number 10 category in downloads to number 4, and jumping to number 3 in in-app purchase revenue, surpassing Dating and Social Discovery. By time spent, it is projected to hit number 5 globally.
ChatGPT is a useful benchmark: as of Q3 2025 it ranked number 2 globally by in-app purchase revenue across iOS and Google Play, behind only TikTok. That is not “people are trying AI.” That is “people are paying for AI at scale.” The monetization proof matters because it suggests consumers will subscribe for utility, not just entertainment, and it gives builders permission to price for value.
The strategic takeaway is that mobile is becoming a primary distribution channel for GenAI, not a companion. Watch for the next wave to be less chat and more embedded workflows: education, creation, search replacement behaviors, and vertical tools that feel like apps, not prompts. If you are investing, this is where retention metrics will start to look like real software businesses, not viral experiments.