Decentralised Ownership: A Model for Safer AI?

Proposal in one sentence:

I propose to research whether decentralised ownership can reduce the risk of misaligned or misdirected Artificial Intelligence.

Description of the project and what problem it is solving:

Powerful AI models that are either misaligned with human values, or used to enforce a particular moral view, represent a significant threat to our near- and long-term future. This issue has recently emerged as a key priority in research on longtermism and existential risk; it is the focus of Stuart Russell’s latest book “Human Compatible”, and is one of the core focus areas of FTX founder Sam Bankman-Fried’s recently launched “Future Fund”.

AI safety researchers have yet to focus on decentralisation, because prior to the emergence of blockchain technology and its integration with AI development, it was not feasible to own or control an AI system in a truly decentralised manner. However, progress in this field is now rapid, raising important questions about the future. Is an AI with decentralised ownership more or less likely to be developed in line with a representative view of human values? Would decentralised control make it harder for an AI to be directed to cause harm, or conversely, would it make the AI harder to regulate?

Research on decentralisation may uncover significant opportunities to de-risk future AI systems, and thereby direct resources towards accelerating development in this area. It may also identify and help mitigate new risks that might otherwise go unnoticed.

This project aims to demonstrate that decentralisation has significant implications for AI safety, which should be studied and then factored into future research and development efforts. This will be achieved through a literature review and theoretical discussion. The project will conclude with the design of an experiment that tests whether an AI is safer, and more easily controlled, if its ownership is decentralised. This will likely involve a game in which centralised and decentralised groups compete through control of AI agents in a simulated environment.
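As a purely illustrative sketch of the proposed experiment (not a commitment to any particular design), the competing-control game could be prototyped along the following lines. It assumes a simple one-owner-one-vote majority model of decentralised ownership and a single adversarial owner; all function names and parameters here are hypothetical:

```python
import random

random.seed(42)

def centralised_act(controller_approves):
    # One party controls the AI; the harmful action happens
    # iff that single controller approves it.
    return controller_approves()

def decentralised_act(owners, threshold=0.5):
    # Ownership is split; the harmful action happens only if more
    # than `threshold` of owners approve (a token-weighted vote,
    # simplified here to one-owner-one-vote).
    approvals = sum(owner() for owner in owners)
    return approvals / len(owners) > threshold

def malicious():
    # Always tries to direct the AI to cause harm.
    return True

def benign(p_mistake=0.1):
    # Occasionally approves a harmful action by error.
    return random.random() < p_mistake

def harm_rates(trials=5_000, n_owners=11):
    # Compare how often the harmful action is executed when a
    # malicious actor is the sole controller vs. one voter among many.
    central = sum(centralised_act(malicious) for _ in range(trials)) / trials
    owners = [malicious] + [benign] * (n_owners - 1)
    decent = sum(decentralised_act(owners) for _ in range(trials)) / trials
    return central, decent
```

Under these (deliberately favourable) assumptions the centralised system is directed to harm on every trial, while the majority vote almost never passes; the real experiment would need to probe the opposite cases too, such as coordinated capture of a majority of tokens.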

Grant Deliverables:

  • Literature review of AI risks that relate to centralised ownership and control
  • Theoretical discussion of how decentralisation mitigates or enhances such risks, and what new risks it might introduce
  • Design for an experiment which tests the theory in real-world scenarios

Squad

Casey (BarefootDev), ML Researcher & Engineer, Co-founder and Lead Developer at Spectra.art


Best wishes for the research. I’m interested in the idea of distributed AI and I wonder about decentralised ownership of an AI system, compared with distributed AIs that are owned by individuals (i.e. ownership could be seen as centralized with each person owning an AI). One of the risks of AI is a convergence of human behavior because it makes the AI’s predictions more accurate. I wonder if decentralised ownership of one system would encourage that.


Thanks! Great pick-up. So there are different kinds of centralised AI (owned by individuals or groups), and they raise different risks which need to be considered in a discussion of what happens when we decentralise AI! I’ll be sure to incorporate this in the project.


Fascinating and critical topic. You do a very good job in articulating the issue, and my sense from this proposal is you have the required foundation of knowledge and analytical skills to do a great job on this project.

One area to potentially consider as a proxy for the issue at hand is whether decentralized protocols in general (for example, DeFi protocols) have actually served to decentralize power, or whether they instead aggregate more power over more resources in the hands of a small minority of token holders - power they may not be able to aggregate in a more centralized system - and whether that’s a good thing.

In any case though, I appreciate that you’re trying to look closely and objectively into this issue, because I believe it’s much more convoluted than people typically assume, and it’s especially important in the case of AI.

Last - I wonder if you’ve run across the thinking of Ben Goertzel and SingularityNET. I don’t know much about the platform or its founder myself, but that may be another interesting area of research.

Thank you for your feedback! It certainly is a convoluted topic, but that’s why I think it’s so important to investigate from multiple angles.

I have followed Ben Goertzel for some time and will be sure to dig deeper into his thinking in my research. Looking at proxy examples of decentralisation is another great recommendation. A core assumption of the very question I’m asking is that these technologies do actually lead to decentralisation, so it will be worth raising and investigating this as part of the work.