Proposal: Survey of AI risk and decentralized AI

Name: Survey of AI risk and decentralized AI

Proposal in one sentence: This project studies the intersection of AI risk and decentralized AI, aiming to understand the potential benefits and drawbacks of decentralized AI and how it affects the risks associated with artificial intelligence.

Description of the project and what problem it is solving:

This project studies the intersection of AI risk and decentralized AI, with the goal of understanding the potential benefits and drawbacks of decentralized AI and how it affects the risks associated with artificial intelligence.

By gathering evidence on this topic, the project aims to contribute contextual awareness to actors in the field and to help inform decision-making about decentralized AI and its relation to AI risk. The project will identify key questions, works, debates, and evidence at the intersection of these two topics, and interpret them systematically. The collection and analysis methods will be documented. Notably, one strategy will be to ask relevant scholars for their recommendations of existing works, and subsequently for their feedback on the resulting work. If the findings are judged worthwhile, they will be disseminated to relevant stakeholders: arXiv, AI safety forums, relevant researchers and groups, and a journal or other venue. In any case, the process and results will be public. One goal is for this to serve as a starting point for further analysis at this intersection.

A potential issue with this project is that there might not be much literature to review at the specific intersection (the two fields may be largely disjoint), while the union of both fields is very large. In that case, the project will shift from surveying existing literature to producing an original analysis of the intersection. I studied machine learning, I work as a software engineer in web3, and I engage with the AI safety space by following the literature and organizing AI safety events alongside academic conferences. I believe this enables me to offer an informed analysis of the intersection.

Grant Deliverables

Spread the Love

I will invite relevant collaborators as I find them. Members of the public can join as well. The grant will be shared among contributors.

My Twitter profile is @orpheuslummis (though I wish there were a decentralized identity system we could use instead).

Additional notes for proposals

This is a public good. The work will be done in public and, hopefully, collaboratively in a GitHub repo. I expect it might take me 3 months to do a good job alongside my other commitments: one month after the grant period begins, I will deliver a draft of the survey whitepaper, and only later publish the peer-reviewed completed version.

A hypercert of the work will be minted (once the hypercert system is deployed, probably in 2023).

I definitely welcome your criticism and feedback. Reply here, or reach me directly - you can find my contact info at orpheuslummis.info


Hey @orpheus, awesome proposal! I think there are definitely people in the community who'd like to collaborate on this. I also like the idea of minting a hypercert for tracking the long-term impact of this work. Have you started looking at some specific resources that you could share? Also, given that this is probably going to be a long-term project, do you have an idea of the scope of the initial survey whitepaper & database?

The AI risk framework for this survey work is based on [2206.05862] X-Risk Analysis for AI Research, which views risk in terms of: hazard (source of potential harm, and its severity and prevalence) × vulnerability (susceptibility to damage from hazards) × exposure, where × denotes interaction. Additionally, it suggests the following partitioning of AI safety work: robustness, monitoring, alignment, and systemic safety.
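As a rough formalization (my own notation, aggregated over a hypothetical set of hazards H, not the paper's exact equation), the decomposition could be written as:

\[
\text{Risk} \;\approx\; \sum_{h \in H} P(h)\,\text{severity}(h) \times \text{vulnerability}(h) \times \text{exposure}(h)
\]

The survey would then ask, for each decentralized AI system or idea, whether it changes the hazard, vulnerability, or exposure terms.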

I became aware of the paper Is Decentralized AI Safer? (https://arxiv.org/abs/2211.05828), published last month by members of this very community, only after making this proposal!

This proposal is to survey existing decentralized systems and ideas. I expect it to add value relative to Is Decentralized AI Safer? (and potentially other works) by being more exhaustive and detailed, and more principled in analysing the AI risk of decentralized systems and ideas.

I will aim for the scope of the initial database to be decently developed - this is where most of the work will go. I expect the initial whitepaper draft to provide an overall analysis, but not an in-depth one. I'm quite uncertain as to how it will go.

I'm also still lacking clarity on the distinction between 'decentralized' and 'distributed' that I'm aiming for. I will attempt to settle on a specific concept.
