Zuckerberg bets on self-improving AI and quietly pivots on open source


On July 30, Mark Zuckerberg published a policy paper on Meta’s website with two messages that matter for builders.

First, he says Meta has begun observing its AI systems improving themselves without human input. The improvement is slow for now but undeniable.

Second, Meta will be far more careful about which AI models it releases to the public under an open source framework. That combination is not accidental. It is a statement that the company’s ambitions are rising even as it tightens control over what leaves the lab.

Ambition with Guardrails

Zuckerberg’s language blends audacity and caution. He writes that he is extremely optimistic that superintelligence will help humanity accelerate its pace of progress.

He also lays out a user-facing destination. It is a vision of personal superintelligence that helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be.

That is a consumer platform pitch, not a research note. Yet it arrives alongside a clear operational pivot on distribution. The open versus closed debate is increasingly filtered through safety concerns, and Meta is publicly aligning with a more selective release posture.

From Theory to Practice in Self-Improving AI

There is technical context for why self-improving systems sharpen that debate. The Gödel Machine is a long-standing theoretical concept for an AI that can rewrite its own code when it can formally prove the change is beneficial.

In October 2024, researchers at the University of California, Santa Barbara, described a Gödel Agent in an arXiv preprint. According to the paper, the agent could access its entire codebase, including the code it uses to develop improvements, and implement those improvements itself. It was evaluated on tasks spanning coding, science, math, and reasoning.

The study reported that the Gödel Agent consistently outperformed the human-designed agents it was compared against in key areas. None of that means Meta is using the same approach. It does suggest that self-improvement is moving from thought experiment to early practice across the field.
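
To make the mechanic concrete, here is a deliberately minimal sketch of that loop. Every name in it (propose_variant, run_benchmark, the toy numeric task) is invented for illustration and is not the paper’s API, let alone Meta’s: the agent treats its own code as data, proposes a rewrite, and keeps the rewrite only when a benchmark score improves, an empirical stand-in for the Gödel Machine’s formal proof of benefit.

```python
import random

def run_benchmark(agent_code: str) -> float:
    """Stand-in evaluation harness: score the agent's current code on a
    fixed task. Here the 'code' is a numeric expression and the task is
    to maximize its value, so the loop is easy to follow end to end."""
    return float(eval(agent_code))  # eval is safe here only because this is a toy

def propose_variant(agent_code: str) -> str:
    """Stand-in self-modification step: perturb the agent's own code.
    A real system would rewrite whole functions, e.g. with an LLM."""
    return f"({agent_code}) + {random.uniform(-1.0, 1.0):.3f}"

agent_code = "0.0"                 # the agent's initial 'codebase'
best = run_benchmark(agent_code)

for _ in range(50):
    candidate = propose_variant(agent_code)
    score = run_benchmark(candidate)
    if score > best:               # keep only rewrites that measurably improve
        agent_code, best = candidate, score

print(f"score after 50 rounds of self-modification: {best:.3f}")
```

Everything that matters in practice lives in the two stubs: what the benchmark measures, and how rewrites are proposed and audited before they are kept.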

When systems begin to change themselves, even incrementally, the release question stops being purely about model quality and starts being about stewardship. Zuckerberg’s paper states that Meta will exercise greater caution in deciding which AI models to release under an open source framework.

That reads as a hedge against open source backlash as capabilities climb. It also reads as a bet on walled power. If self-improving behavior becomes a differentiator, the value concentrates where the feedback loops and guardrails live. The company that operates the training stack, the evaluation harness, and the release gates can move faster in private and pick what, if anything, becomes public.

The Product and Policy Signal for Developers

For founders and developers, the product signal is as important as the policy shift. Zuckerberg is pointing to a world where a personal superintelligence sits at the center of the user relationship. If Meta builds that, the product will sit atop whatever self-improving mechanics the company is now seeing, and it will arrive on Meta’s terms.

Open ecosystems have been the route for broad experimentation. A selective release stance narrows that surface area. The UCSB results show that agents that can access and adapt their own code can outperform static, human-designed counterparts on certain benchmarks. That is a compelling research frontier, but it is one that raises the bar on oversight and evaluation before models are handed to the public.

The immediate takeaway is not panic. Zuckerberg’s own description is that progress on self-improvement is slow for now. The strategic takeaway is clarity. Meta is aiming high on capability and tightening its grip on distribution.

That creates a sharper dividing line in the industry. The open source camp must now argue not only for innovation speed but also for credible safety practices as systems begin to modify themselves. The closed camp must show that restraint does not become stagnation or gatekeeping.

In other words, the next phase of AI will be shaped as much by release policy as by research breakthroughs. Meta’s move suggests that the firms that expect self-improving systems to matter will keep the strongest versions private. The rest of the ecosystem will have to decide whether to build inside those walls or to push for open alternatives that can keep pace.