
My illustration above of Herbert Sim (aka Bitcoin Man), alongside Vitalik Buterin, interacting with an artificial intelligence (AI) figure in holographic projection in a futuristic city public space.
Nick Bostrom — one of the world’s most influential thinkers on existential risk and the future of artificial intelligence — has shaped global debates for two decades, from his early warnings about superintelligence to his role in inspiring today’s AI safety movement. When Bostrom publishes a new working paper, especially one proposing an entirely new governance model for AGI, it commands attention.
His 2025 paper, “Open Global Investment as a Governance Model for AGI”, is no exception: it puts forward an ambitious, market-driven blueprint for managing what could become the most powerful technology in human history. But precisely because of Bostrom’s stature — and because the stakes around AGI governance are so high — it’s essential to scrutinize not just the novelty of his proposal, but its assumptions, feasibility, and potential unintended consequences. What follows is a critical examination of the OGI model: where it succeeds, where it stumbles, and what its blind spots reveal about the challenges of governing transformative AI.
Bostrom proposes the “open global investment” (OGI) model as a governance architecture for the development of artificial general intelligence (AGI) that might arrive on short timelines. The core idea is that one or more leading AGI corporations would be widely open to international investment, operate under a government-defined safety framework, enjoy structural protections against expropriation, and distribute ownership broadly across global citizens (to the extent possible) via shareholding.
He argues this model offers advantages compared to alternatives such as a U.S. “Manhattan Project for AI”, a multinational public research institution (“CERN for AI”), or unilateral private development. The claimed virtues include better global inclusiveness, incentive alignment, practicality under short timelines, and leveraging existing corporate and property rights infrastructures.
The key features:
- A publicly listed (or at least broadly share-owned) corporation (or set of firms) developing the AGI.
- International investors and sovereigns allowed to buy shares, so that many states have a stake in the corporation’s success.
- Government(s) provide regulatory safeguards, subsidies, and protection against antitrust action or expropriation, and embed governance enhancements in the corporate charter.
- The model is explicitly framed as a second-best, politically feasible option rather than ideal.
What the model gets right
1. Pragmatic realism about timelines
Bostrom recognises that many governance proposals assume long time-horizons, ideal institutions, international consensus, or slow regulatory processes; by contrast, OGI is pitched for near-term scenarios where AGI appears rapidly.
That recognition is a strength: many governance proposals fail at the implementation stage because they assume slow, deliberative consensus that can be outpaced by rapidly advancing technology. Media commentary, for instance, emphasises that AI capabilities are developing at dizzying speed, leaving regulators playing catch-up.
2. Leveraging existing corporate and property-law institutions
Rather than inventing novel governance forms from scratch (which may suffer legitimacy or capacity problems), OGI builds on well-established corporate law, capital markets, investor protections, and shareholder structures. Bostrom argues that these institutions are comparatively robust relative to a wholly new international body.
In principle, this reduces institutional friction and may make faster implementation possible.
3. Inclusiveness and benefit-sharing rhetoric
The model emphasises broad distribution of ownership (shares) and profits, compared with nationalised or exclusive private-monopoly models. Bostrom suggests that this widens participation (insofar as people or sovereign funds can buy shares) and may reduce resentment or adversarial dynamics between states.
This is an attractive normative aspiration: in a world where AGI could generate vast rents, thinking about how the upside is distributed widely is laudable. Media and scholarly commentary on AI often emphasise inequality risk, labour displacement, and winner-takes-all outcomes. The Harvard Law Review, for example, pointed to the “amoral drift” of AI corporate governance as profits and speed outpace safety.
Where the model falters or requires further scrutiny
1. Ownership does not equal governance control
One key weakness is that broad shareholding does not guarantee meaningful governance control, especially in high-stakes AGI development. Bostrom himself acknowledges that share classes may be differentiated (e.g., founder shares, Class A, B, C) to preserve founder control rights.
In practice, many publicly traded corporations have diffuse shareholders with limited influence; decision-making remains concentrated among executives and boards. Thus, the aspiration of broad stakeholder influence may remain symbolic rather than operational. Media commentary supports this: the Harvard Law Review article, for example, argued that the governance structures of AI firms are not keeping pace with risk.
Existing capital-market patterns already show high concentration of voting power among institutional investors; Bostrom cites the “Big Three” index funds, which control roughly 25% of the voting rights in S&P 500 companies.
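To make this gap concrete, here is a minimal sketch (in Python, with entirely hypothetical numbers) of how a dual-class structure of the kind Bostrom alludes to lets a small economic stake retain majority voting control:

```python
# Minimal sketch, hypothetical numbers: a dual-class structure in which
# founders hold 10% of the equity but a share class carrying 10 votes
# per share, keeping majority voting control of the company.

holdings = {
    # holder: (shares held, votes per share)
    "founders (Class B)":      (100_000_000, 10),
    "global public (Class A)": (900_000_000, 1),
}

total_shares = sum(n for n, _ in holdings.values())
total_votes = sum(n * v for n, v in holdings.values())

for name, (n, v) in holdings.items():
    equity = n / total_shares      # economic ownership
    votes = n * v / total_votes    # voting power
    print(f"{name}: {equity:.0%} of equity, {votes:.1%} of votes")

# Output:
# founders (Class B): 10% of equity, 52.6% of votes
# global public (Class A): 90% of equity, 47.4% of votes
```

Under such an arrangement, even perfectly broad global ownership of the Class A shares would leave strategic decisions in the founders’ hands.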
Thus, while OGI intends inclusive participation, the structural reality may remain elite-driven. This gap between intention and institutional reality ought to be more sharply recognised.
2. Incentives might misalign with safety
An implicit assumption of OGI is that shareholders will have a long-term interest in safe and beneficial AGI because their investment depends on the corporation’s success. However, this assumption deserves deeper scrutiny.
First, the profit motive may conflict with caution. If the first AGI delivers extraordinary returns, the pressure to roll out and monetise may trump safety prudence. Bostrom mentions windfall profits and taxation possibilities, but the incentive to race remains strong.
Second, the model might incentivise externalising risk: shareholders may accept the risk of existential harm if the upside is enormous and their individual losses are small (or if they exit before deployment). Indeed, the AISafety.info summary of criticisms notes that “OGI will have bad side-effects” because early investors may benefit from hastening AGI even if doing so increases risk.
Third, the model relies on the host government and other states credibly committing not to expropriate or regulate away profits; yet history suggests a high risk of regulatory or political intervention when large rents accumulate or when national-security stakes rise. Bostrom acknowledges this challenge, but perhaps underestimates how difficult it is to guarantee investor-rights protections in the context of existential-risk technology.
Media narratives around AI often highlight exactly these risks: WIRED, for example, has argued that AI may be the ultimate bubble, partly because investor fervour overwhelms caution.
Bottom line: OGI may understate how misaligned incentives (between profit, speed, and safety) can skew corporate behaviour, especially in frontier AI.
3. Geopolitical and sovereign risk concerns
The model envisions international shareholding to reduce adversarial dynamics among states. But in practice, geopolitical realities complicate this ideal. Some key concerns:
- Sovereign states may invest for strategic rather than commercial motives, and may still race to deploy capabilities. Ownership rights may not suffice to restrain militarised or dual-use deployment. Bostrom discusses this under “Military applications and foreign competitors”.
- Some states have weak property-rights regimes, may nationalise or restrict foreign investment, or may impose strict export controls; this challenges the assumption that investment confers real influence or safety. The AISafety.info critique highlights exactly this drawback: the assumption “that companies will not be nationalised” is weak globally.
- The model’s success depends on a host government (such as the U.S. in Bostrom’s example) remaining committed to the arrangement; yet in an AI arms-race scenario, national-security imperatives may override investor protections. Media often report on an “AI arms race” between the U.S. and China in which rules come second to speed; WIRED’s bubble analysis, for example, draws on these race dynamics.
Thus while OGI aspires to inter-state cooperative risk reduction, it may underestimate how geopolitical competition and national-security imperatives could distort investment logic or override governance frameworks.
4. The model addresses ‘who owns’ better than ‘who governs’ or ‘what is safe’
The OGI model is strong on the ownership/investment axis, but somewhat weaker on specifying what governance mechanisms will ensure safety of AGI deployment, alignment of goals, and monitoring of upstream research. Bostrom acknowledges that regulation, treaties, and international agreements lie outside his core scope.
But in the AGI context, governance questions of alignment, oversight, safety audits, technical red-teaming, transparency of training runs, and fallback systems arguably matter more than share-class structures. Many commentators argue that existing governance and regulatory frameworks lag far behind AI capabilities. The Harvard Law Review piece warns that AI startups are advancing faster than alignment research.
Hence the OGI model risks amounting to a change in corporate investor structure while leaving the most critical safety problems under-specified.
5. Inequality and access concerns remain
Although OGI claims to improve inclusiveness (via shareholding), it partly skirts questions of deeper structural inequality. Some issues:
- Participation in stock markets still presumes access to capital, functioning financial markets, and institutional infrastructure. Many under-resourced countries or populations may not be able to invest meaningfully. Bostrom acknowledges this: citizens of lower-income countries will have little influence per person.
- Even if profits are broadly distributed, power may remain concentrated among those who shape the corporation’s direction (founders, the board, holders of voting-class shares). Without mechanisms for democratic participation or a public-goods orientation, the “benefit-sharing” may remain de facto elite-driven.
- The model emphasises profit-sharing through taxes or dividends, but does not sufficiently engage with labour displacement, economic dislocation, or the broader social-contract issues that AGI may trigger. Many media discussions of AI draw attention to these distributive risks — e.g., the Harvard Business Review on the uncertain business models of AI companies.
6. Feasibility of an IPO or public structure in a high-risk AGI context
Bostrom mentions the possibility of a leading AGI firm going public or being structured as a public-benefit corporation. But this raises difficult practical questions:
- Frontier AI firms commonly stay private for strategic, competitive, secrecy, or regulatory reasons (to avoid quarterly disclosure of their roadmap, to shield alignment work, etc.). Bostrom notes this in an appendix.
- Capital markets for a public AGI company may not yet exist at sufficient scale or with sufficient appetite; public investors may balk at moonshot risk or regulatory liability. WIRED, for example, recently argued that AI may be the ultimate bubble given the uncertainty and investor fervour.
- The governance demands of an AGI firm (massive compute, deep secrecy, dual-use risk, export controls) may conflict with the transparency and accountability expectations of public markets.
Hence while OGI’s public investment framing is imaginative, the bridge from current private AI lab models to openly listed AGI firms may be more challenging than the paper acknowledges.
Comparative Assessment: OGI vs Other Models
In his paper, Bostrom compares OGI with alternatives: (a) exclusive private corporate development; (b) a nationalised “Manhattan Project for AI”; (c) a multinational institution (e.g., “CERN for AI”). He argues OGI strikes a favourable compromise in terms of inclusiveness, incentives, and implementation speed.
My view: This comparative framing is useful, but underplays some trade-offs:
- Private corporate model: Offers market-speed advantages and less bureaucratic overhead, but concentrates power and provides no global inclusion. OGI improves on this in theory, but adds complexity and a dependency on multinational investor coordination.
- Nationalised model: Easier for a single government to control and regulate, but poor on global inclusion, and it may heighten geopolitical race dynamics (other states feel excluded). OGI addresses this by inviting foreign investment — yet, as noted, geopolitical dynamics may still erode this.
- Multinational institution: Ideal for global governance and cooperation, but historically slow, under-resourced, and often powerless relative to private actors (media narratives confirm that global AI governance remains fragmented). A Chatham House briefing on AI governance, for example, notes that open-source and democratisation models are among the mechanisms being explored.
Therefore OGI occupies a plausible middle ground. But the cost is complexity: aligning multiple states, host governments, large corporations, and global capital markets is non-trivial. The model may underestimate institutional friction, geopolitical inertia, and the alignment/safety governance gap.
Normative Implications and Broader Reflections
Risk of reinforcing power via investment logic
By embedding AGI development within a publicly invested corporation, the model may inadvertently reinforce the logic of capital accumulation, growth, shareholder value, and competitive advantage — rather than democratic oversight, societal control, or the public interest. Recent media coverage of AI flags this dynamic: firms race to deploy capabilities and raise huge amounts of capital while governance lags. The Harvard Law Review article, for example, emphasised the misalignment of governance with speed and profit motives.
If the leading AGI corporation becomes the nexus of global investor power, we risk a form of “techno-capitalist” governance, where shareholders, not publics, hold the levers of decision-making. This raises ethical questions around democracy, accountability and the role of citizen voice in shaping transformative technologies.
Justice, the global south, and benefit-sharing
The model’s rhetoric of global shareholding is commendable, but the practical mechanisms for including low-income countries, historically under-invested regions, and marginalised populations remain weak. Governance scholarship warns that global AI governance must attend to equitable access and benefit-sharing. An academic piece on “Democratising AI”, for example, highlights that turning talk of democratisation into concrete policies is complex.
In the OGI framework, simply allowing foreign investment may not translate into meaningful participation or benefit for global south populations. Without institutional mechanisms (e.g., designated trust funds, mandated dividend transfers, capacity building in low-income states), the model risks perpetuating global inequality even if superficially inclusive.
Dependency on host government commitments
The entire structure of OGI depends on the host state (e.g., the U.S.) committing to safe regulatory frameworks, non-expropriation, export controls, and so on. The paper rightly underscores this as a requirement, but perhaps underweights the political risk. In high-stakes strategic domains like AGI, host governments may revert to national-security logics, impose controls, or commandeer the firm’s assets. If that happens, the model’s inclusive promise could collapse or produce perverse incentives.
Safety governance remains the weakest link
As noted earlier, OGI emphasises corporate and investment structures but says less about alignment, control of capability rollout, or monitoring of transformative risks. Yet many commentators argue that the most urgent challenge for AGI governance is not who owns the firm or how profits are distributed, but how to ensure the first AGI is safe, aligned with broad human values, and under effective oversight. The Harvard Law Review and other outlets have pointed to the “alignment gap” — that startups are moving faster than governance.
One might argue that the OGI model could be combined with parallel safety and regulatory mechanisms. But as a stand-alone proposal, it places greater weight on the investment dimension than on the control dimension.
Final Assessment
Nick Bostrom’s OGI model deserves credit for:
- offering a novel framing of AGI governance that combines investment, corporate law and international participation
- recognising the urgency of governance in short-timeline AGI scenarios
- articulating a plausible middle route between state monopoly, purely private corporate and multinational institution models
However, the model’s practical viability is constrained by several significant factors:
- the assumption that broad shareholding translates into real governance influence is optimistic
- the incentive structures around profit, speed and safety are under-explored and may lead to risk trade-offs
- geopolitical and sovereign risk dimensions may undermine the inclusive promise
- the safety/oversight mechanisms remain under-specified relative to the magnitude of risk
- global justice, equitable participation and labour/distributional issues require deeper integration
In sum: the OGI model adds a valuable piece to the governance puzzle — it helps shift attention to who owns and who gains in the AGI story, which is too often neglected. But it is not a full governance solution.
For AGI, the more urgent question may be how to ensure safe, aligned, robust AGI development under conditions of racing, dual use, and deep uncertainty, rather than simply how to distribute the profits.
The corporate/investor focus must be paired with robust safety governance, democratic accountability and global public-interest mechanisms.
Suggestions for further development of the model
- Governance oversight layer: Build into the model a mandatory oversight body (board or independent authority) with mandated safety/ethics review powers, whistle-blower protections, and pre-deployment auditing.
- Investor mobility and risk externalities: Incorporate mechanisms so that investor returns are contingent on meeting safety milestones (e.g., slower rollout, alignment audits), to better align profit with safety; a minimal sketch of such milestone-contingent vesting follows this list.
- Global south inclusion fund: Create a dedicated mechanism (shared trust, sovereign-backed fund) to channel a portion of AGI-company profits into global south capacity-building, inclusive dividends or public goods — addressing distributional equity.
- Exit and contingency planning: Because AGI is higher-risk than standard corporate ventures, the model should define exit strategies, orphan-asset scenarios, state-takeover regimes, or procedures for winding down the company if alignment is not achieved.
- International treaty anchoring: Embed the investment model in an international treaty or pact that commits states to non-expropriation, export-control coordination, liability rules — thus reducing sovereignty risk.
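To illustrate the second suggestion above, here is a minimal sketch (in Python, with hypothetical milestones and figures, not drawn from Bostrom’s paper) of milestone-contingent vesting, in which investor distributions unlock only as independent safety audits are passed:

```python
# Minimal sketch, hypothetical mechanism: investor distributions vest in
# tranches, each released only after an independent safety audit for the
# corresponding milestone has been passed.

from dataclasses import dataclass

@dataclass
class Milestone:
    name: str
    tranche: float      # fraction of distributable profits this unlocks
    audit_passed: bool  # set by an independent oversight body

def vested_fraction(milestones: list[Milestone]) -> float:
    """Fraction of profits investors may receive so far."""
    return sum(m.tranche for m in milestones if m.audit_passed)

milestones = [
    Milestone("pre-deployment alignment audit", 0.25, True),
    Milestone("independent red-team evaluation", 0.25, True),
    Milestone("12-month post-deployment incident review", 0.50, False),
]

profits = 1_000_000_000  # hypothetical distributable profits (USD)
payout = profits * vested_fraction(milestones)
print(f"Released to investors: ${payout:,.0f} "
      f"({vested_fraction(milestones):.0%} of profits)")
# Released to investors: $500,000,000 (50% of profits)
```

The design intent is that the unreleased tranche stays at risk until the final audit passes, giving shareholders a direct financial stake in safety outcomes rather than in speed alone.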
Conclusion
The Open Global Investment model is a thoughtful and creative contribution to the AGI governance debate. It pushes the conversation beyond pure regulation or national models to consider the role of global capital and corporate structure in shaping AGI outcomes.
That said, its success depends heavily on favourable institutional, geopolitical and corporate conditions — and on bridging the substantial gap between investment structure and safety governance.
As the media and policy discourse increasingly emphasise AI governance, alignment and distributive justice, the OGI model should be seen as one important component of a broader governance architecture — rather than a stand-alone fix.