Decentralised Ownership: A Model for Safer AI?

Proposal in one sentence:

I propose to research whether decentralised ownership can reduce the risk of misaligned or misdirected Artificial Intelligence.

Description of the project and what problem it is solving:

Powerful AI models that are either misaligned with human values, or used to enforce a particular moral view, represent a significant threat to our near- and long-term future. This issue has recently emerged as a key priority in research on longtermism and existential risk; it is the focus of Stuart Russell’s latest book “Human Compatible”, and is one of the core focus areas of FTX founder Sam Bankman-Fried’s recently launched “Future Fund”.

AI safety researchers have yet to focus on decentralisation, because prior to the emergence of blockchain technology and its integration with AI development, it was not feasible to own or control an AI system in a truly decentralised manner. However, progress in this field is now rapid, raising important questions about the future. Is an AI with decentralised ownership more or less likely to be developed in line with a representative view of human values? Would decentralised control make it harder for an AI to be directed to cause harm, or, conversely, harder to regulate?

Research on decentralisation may uncover significant opportunities to de-risk future AI systems, and thereby direct resources towards accelerating development in this area. It may also identify and help mitigate new risks that might otherwise go unnoticed.

This project aims to demonstrate that decentralisation has significant implications for AI safety, which should be studied and then factored into future research and development efforts. This will be achieved through a literature review and theoretical discussion. The project will conclude with the design of an experiment that tests whether an AI is safer, and more easily controlled, if its ownership is decentralised. This will likely involve a game in which centralised and decentralised groups compete through control of AI agents in a simulated environment.
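To give a flavour of the experiment I have in mind, here is a rough sketch of a minimal version: agents repeatedly choose between a “safe” and a “harmful” action under either a single owner or a voting group of owners. Everything in it (the majority-vote rule, the misalignment probability, the owner counts) is an illustrative assumption of mine, not the final design:

```python
# Toy model: how often is a "harmful" action taken under centralised vs
# decentralised ownership? All parameters below are illustrative assumptions.
import random

def centralised_choice(owner_is_misaligned: bool) -> str:
    # A single owner dictates the action; one compromised owner suffices.
    return "harmful" if owner_is_misaligned else "safe"

def decentralised_choice(owners_misaligned: list) -> str:
    # Majority vote among owners; harm requires a misaligned majority.
    harmful_votes = sum(owners_misaligned)
    return "harmful" if harmful_votes > len(owners_misaligned) / 2 else "safe"

def run_trials(n_trials: int = 10_000, n_owners: int = 11,
               p_misaligned: float = 0.2, seed: int = 0):
    rng = random.Random(seed)
    harm_central = harm_decentral = 0
    for _ in range(n_trials):
        owners = [rng.random() < p_misaligned for _ in range(n_owners)]
        if centralised_choice(owners[0]) == "harmful":
            harm_central += 1
        if decentralised_choice(owners) == "harmful":
            harm_decentral += 1
    return harm_central / n_trials, harm_decentral / n_trials

# Centralised harm rate tracks p_misaligned; decentralised should be far lower.
print(run_trials())
```

The real experiment would of course need richer incentives and a shared environment, but even this toy version shows the kind of quantitative comparison the design will aim for.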

Grant Deliverables:

  • Literature review of AI risks that relate to centralised ownership and control
  • Theoretical discussion of how decentralisation mitigates or enhances such risks, and what new risks it might introduce
  • Design for an experiment that tests the theory in real-world scenarios


Casey (BarefootDev), ML Researcher & Engineer, Co-founder and Lead Developer at


Best wishes for the research. I’m interested in the idea of distributed AI, and I wonder about decentralised ownership of a single AI system compared with distributed AIs owned by individuals (i.e. ownership could be seen as centralised, with each person owning their own AI). One risk of AI is that it drives a convergence of human behaviour, because convergence makes the AI’s predictions more accurate. I wonder if decentralised ownership of one system would encourage that.


Thanks! Great pick-up. So there are different kinds of centralised AI (owned by individuals or by groups), and they raise different risks which need to be considered in any discussion of what happens when we decentralise AI! I’ll be sure to incorporate this in the project.


Fascinating and critical topic. You do a very good job in articulating the issue, and my sense from this proposal is you have the required foundation of knowledge and analytical skills to do a great job on this project.

One area to consider as a proxy for the issue at hand is whether decentralised protocols in general (DeFi protocols, for example) have actually served to decentralise power, or whether they instead concentrate power over more resources in the hands of a small minority of token holders, power they might not be able to accumulate in a more centralised system, and whether that’s a good thing.
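To put a rough number on that concern, one coarse (and admittedly simplistic) measure is the Gini coefficient of token holdings. The holdings below are made up for illustration, not real protocol data:

```python
# Quantifying concentration of power with a Gini coefficient over token
# holdings. The example distribution is hypothetical, not from any protocol.
def gini(holdings: list) -> float:
    """Gini coefficient: 0 = perfectly equal, approaching 1 = fully concentrated."""
    xs = sorted(holdings)
    n = len(xs)
    total = sum(xs)
    # Standard identity over sorted values: G = 2*sum(i*x_i)/(n*total) - (n+1)/n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# One whale, two large holders, and 300 small "decentralised" participants.
nominally_decentralised = [900_000, 50_000, 20_000] + [100] * 300
print(round(gini(nominally_decentralised), 3))  # close to 1: highly concentrated
```

A “decentralised” protocol with a Gini near 1 is, on this measure at least, more concentrated than many centralised institutions, which is exactly the ambiguity worth studying.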

In any case though, I appreciate that you’re trying to look closely and objectively into this issue, because I believe it’s much more convoluted than I think people typically assume, and it’s especially important in the case of AI.

Last - I wonder if you’ve run across the thinking of Ben Goertzel and SingularityNET. I don’t know much about the platform or its founder myself, but that may be another interesting area of research.


Thank you for your feedback! It certainly is a convoluted topic, but that’s why I think it’s so important to investigate from multiple angles.

I have followed Ben Goertzel for some time and will be sure to dig deeper into his thinking in my research. Looking at proxy examples of decentralisation is another great recommendation. A core assumption of the very question I’m asking is that these technologies do actually lead to decentralisation, so it will be worth raising and investigating this as part of the work.


Love the scope and ambition. This very topic of decentralized AI/ML is precisely what I’m digging into as well (from the perspective of a researcher-cum-investor).

Organically, I’ve come across Dongkun Hou et al., “A Systematic Literature Review of Blockchain-Based Federated Learning: Architectures, Applications and Issues”, in 2nd Info. Comm. Tech. Conf., pp. 302–307 (May 2021).

Also, will your deliverables address the various companies/projects/protocols working on decentralised AI/ML now? Or is the intention to keep the scope more general: whether “decentralisation as an idea”, with its attributes, solves the risks of centralised AI?

Lastly, I would absolutely love to help out here. This is a core part of my research interest and one I am actively exploring! E.g., here, here.


Hi @sanfender, thanks for the response! I don’t think there’s scope to review the various companies etc. working on decentralised AI, though in extensions of this work I’m sure it will become more relevant.

Thanks also for sharing those resources, I’ll be sure to have a read. If the proposal is accepted I’ll get in touch!


Hey @BarefootDev, awesome proposal. Are you also considering running some simulations of the experiment you’re proposing, e.g. with cadCAD or TokenSPICE? cadCAD in particular is well suited to simulating governance mechanisms and rigorously testing the structure of your system. I’ve done some work with it on CeSci vs DeSci value flows and emerging incentives, and I’m happy to discuss ways it could be integrated into your project if quantifying your literature review with simulations is something you’d like to explore.

Thanks @smejak, I don’t have experience with either so it would be great to discuss. My primary goals for this grant are theoretical: I want to build a solid foundation on the topic before getting ahead of myself and starting to build (which is difficult to resist!).

I am however hoping to fit in some experimental design (ie an outline of future practical work that could be done), and it would be good to consider simulations as part of this, so once I get into the research let’s chat about it!
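For anyone following the thread, here is the sort of bare-bones loop I imagine a framework like cadCAD would formalise with its policies and state-update functions. To be clear, this is plain Python with made-up state variables and a token-weighted vote I invented for illustration, not cadCAD’s actual API:

```python
# Minimal governance-simulation loop: a policy reads the current state,
# and a state-update function applies its output. All state variables
# and rules here are illustrative placeholders.

def proposal_policy(state: dict) -> dict:
    # Token-weighted vote: share of tokens backing the current proposal.
    backing = sum(w for holder, w in state["tokens"].items()
                  if holder in state["supporters"])
    return {"approved": backing > 0.5 * sum(state["tokens"].values())}

def update_treasury(state: dict, signal: dict) -> dict:
    # Approved proposals spend from the shared treasury.
    spend = 10 if signal["approved"] else 0
    return {**state, "treasury": state["treasury"] - spend}

def run(state: dict, timesteps: int) -> dict:
    for _ in range(timesteps):
        signal = proposal_policy(state)
        state = update_treasury(state, signal)
    return state

initial = {"tokens": {"a": 60, "b": 25, "c": 15},
           "supporters": {"a"},       # the whale alone can pass proposals
           "treasury": 100}
print(run(initial, 5)["treasury"])  # 50: five approved spends of 10
```

Even this tiny loop surfaces the centralisation question: one large token holder passes every proposal regardless of the minority, which is exactly the dynamic a fuller simulation would probe.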
