AI Angels: Privacy-preserving distributed language modelling

Name of Project: AI Angels - Privacy-preserving distributed language modelling

Proposal in one sentence: To develop personalized AI assistants, we need to solve the problem of data privacy. This proposal is to explore a way to tackle this by combining ideas from edge computing and differential privacy.

Description of the project and what problem it is solving:

AI assistants like chatGPT would provide a lot more value if they had access to more personalized data about you as an individual. Doing this involves several challenges: 1) how to record the personal data, 2) how the AI model should process it, and 3) how to keep the data private and secure.

Here we tackle 3). However, data privacy/security is tightly related to 2), the AI system architecture, because the choice of AI model imposes constraints (e.g. you can’t run GPT-3 on your laptop).

One approach to designing a personalized AI is to store all the personal user data in the cloud and run the models on it directly. However, earning the user’s trust with such an approach is proving tricky, which is why companies like Rewind.AI have opted to keep the personal data on the user’s computer.

However, this runs into the issue that the user’s computer is not powerful enough to run state-of-the-art models like GPT-3.

In this proposal, I want to explore an approach to tackle this problem: combining distributed language-model computation with a form of differential privacy.

The design I have in mind involves a small language model (small enough to run on edge/on the user’s laptop) that has access to the user’s personal data (via retrieval + prompting) and uses it to answer a query the user provides (a la chatGPT). However, to compensate for the model being smaller (and thus less powerful), we allow it to do some intermediate computation in which it can send questions to an API for a much more powerful model (say GPT-3). The answers it receives from the API are used to produce the final answer.
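To make the control flow concrete, here is a rough sketch of how the pieces could fit together. This is a minimal sketch only: the local retrieval and the small model are stubbed out with placeholders, and the GPT-3.5 chat API is just one possible choice of remote model.

```python
# Rough sketch of the proposed edge/cloud split. Everything here is illustrative:
# retrieve_personal_context and small_local_model are placeholders for components
# that don't exist yet (e.g. a local vector store and a small open-source LM).
import openai


def retrieve_personal_context(query: str) -> str:
    """Placeholder: retrieve relevant snippets from the user's local personal data."""
    return "(relevant snippets from the user's local data store)"


def small_local_model(prompt: str) -> str:
    """Placeholder: run the small edge model on the user's machine."""
    return "(output of the small local model)"


def answer_query(user_query: str) -> str:
    context = retrieve_personal_context(user_query)

    # 1. The small model drafts a question for the powerful remote model,
    #    instructed not to include anything from the private context.
    safe_question = small_local_model(
        f"Private context (must not be revealed): {context}\n"
        f"User query: {user_query}\n"
        "Write a generic question for an external assistant that would help answer "
        "the query without revealing any private details."
    )

    # 2. Only the sanitized question leaves the user's machine.
    api_response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": safe_question}],
    )
    api_answer = api_response["choices"][0]["message"]["content"]

    # 3. The small model combines the external answer with the private context
    #    to produce the final, personalized answer.
    return small_local_model(
        f"Private context: {context}\n"
        f"Answer from external assistant: {api_answer}\n"
        f"Now answer the user's query: {user_query}"
    )
```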

To maintain privacy in this design, however, we need to ensure that the questions the smaller model sends to the API are privacy preserving, i.e. they don’t reveal sensitive information about the user (e.g. the small model could ask “What is a good idea for marketing a VR startup?” but not “What is a good idea for marketing my secret stealth startup XYZ?”). This proposal is about exploring how best to achieve this.
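Concretely, the constraint could be stated directly in the instructions given to the small model. A hypothetical template (the wording is purely illustrative and would need iteration) might look like:

```python
# Hypothetical prompt template for the small edge model (wording is illustrative only).
SANITIZE_PROMPT = """You are drafting a question for an external, untrusted assistant.
The following context is PRIVATE and must not appear in your question,
directly or in paraphrased form:
{sensitive_context}

User query: {user_query}

Write a question that is general enough to be useful but reveals nothing identifying
about the user or the private context above. For example, "What is a good idea for
marketing a VR startup?" is acceptable; a question that names the user's startup or
its plans is not.
"""
```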

There are two main approaches to be explored: prompt engineering, and fine-tuning via RL where the reward is whether a large language model thinks that the question reveals info from parts of the prompt (including retrieval) labelled as “sensitive” (for simplicity we assume the user labels which parts of their personal data are sensitive: e.g. passwords, addresses, health data, trade secrets, etc.). We will probably start with the former as it’s simpler. Evaluation can be done with human/user feedback.
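For the RL route, the reward could be produced by a large “judge” model that checks whether a candidate question leaks any of the user-labelled sensitive spans. A minimal sketch, where the binary 0/1 reward, the judge prompt, and the function names are all assumptions rather than a finished design:

```python
# Sketch of a leakage-based reward for RL fine-tuning; all details here are assumptions.
from typing import List


def judge_lm(prompt: str) -> str:
    """Placeholder: call a large language model used as a privacy judge."""
    return "NO"


def privacy_reward(question: str, sensitive_spans: List[str]) -> float:
    """Return 1.0 if the judge thinks the question leaks no sensitive info, else 0.0."""
    verdict = judge_lm(
        "Sensitive information:\n"
        + "\n".join(f"- {span}" for span in sensitive_spans)
        + "\n\nDoes the following question reveal any of the sensitive information above, "
        "directly or indirectly? Answer YES or NO.\n\n"
        f"Question: {question}"
    )
    return 0.0 if verdict.strip().upper().startswith("YES") else 1.0


# This scalar would be combined with a task-usefulness reward and fed into a
# standard policy-gradient (e.g. PPO) fine-tuning loop over the small model.
```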

Ultimately, this could be the foundation of a network of AIs and humans, with a distinguished set of AIs, which I’m calling “AI angels”, each aligned to an individual human via personalization and serving as the interface between that human and the wider network of AIs. This vision is related to the vision outlined here for AI alignment.

Grant Deliverables:

  • Results from user studies of different approaches to tackling the problem via prompt engineering of the small/edge language model.

  • Exploration into tackling the problem via RL fine-tuning of the smaller model. If the approach proves feasible within the compute budget, then we’ll perform user studies too.

Spread the Love

I’d be interested in having help with designing and distributing a system for gathering feedback for the user evaluations.

Squad

Squad members:
guillefix
twitter: @guillefix
discord: guillefix#8591

I’m happy to onboard some people who may be interested in helping, and to share the reward if they want to commit enough time! :>


Toolformer was recently published and it seems relevant to the project, as they train an LM that can outperform larger LLMs by being able to query APIs.

So maybe there’s a way to adapt their training procedure for the queries to be privacy preserving!:> Gotta take a look at the paper!
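For instance, Toolformer keeps a candidate API call only if it lowers the LM’s loss on the tokens that follow; a privacy-preserving variant might add a second filter on the call text itself. A rough sketch, where the naive substring check and the threshold are just placeholders:

```python
# Hypothetical adaptation of Toolformer-style filtering: keep a candidate API call
# only if it both helps the LM and does not leak user-labelled sensitive spans.
from typing import List


def keep_candidate_call(call_text: str, loss_without: float, loss_with: float,
                        sensitive_spans: List[str], threshold: float = 0.1) -> bool:
    helps = (loss_without - loss_with) > threshold  # Toolformer's usefulness criterion
    leaks = any(span.lower() in call_text.lower() for span in sensitive_spans)  # naive leakage check
    return helps and not leaks
```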

Love the idea. Do you also plan to use traditional privacy-preserving AI techniques like homomorphic encryption (training on encrypted data) and federated learning?

Seems really interesting, especially the idea that the AI could determine which data a question can be answered from, and how it can be ensured which data is strictly private to the user and which data is personal but can still be used to generate questions like “What is a good idea for marketing a VR startup?”!

Very interesting, and definitely a big problem with current LLM options. We are facing similar problems at my company, where we want to develop AI solutions but are blocked by not being able to share sensitive data.

What we have done so far is anonymize our data before making the calls to the OpenAI API, but I expect to see better solutions than that. The fun part is that we also tested GPT (with random data) for data anonymization and it worked pretty well, so that’s in line with what you are proposing. For smaller models, I have been reading that flan-T5 is one of the best-performing models that fits on a personal GPU, so it would be worth trying…

Keep us posted, I’m interested to see what kind of solutions can be developed in this space!


Thanks for the suggestions and comments. It’s cool to see you’ve already begun looking at this problem!
I’m not sure what you meant by “we also tested GPT (with random data) for data anonymization and it worked pretty well”. Could you explain what you meant by that?
I suppose by anonymization you mean removing names or sensitive substrings, right?

Federated learning could be naturally integrated into this system, yeah. Homomorphic encryption, I’m less sure about; I’d have to learn/think more about it.


That’s right. We built some functionality (with Langchain) to send a dataset to a model and ask it to anonymize any sensitive data. The model was able to change most of the data, especially IDs and company names. The cool thing was that one can specify things like ‘change for invented slippers company names’, and that worked too.

Oh, so you asked an LM to identify the sensitive data, like flag it? And then did you anonymize it in another way, or did the LM itself do the whole anonymization?

The model did the whole process, including anonymization. You can try some examples in chatGPT with sample data; just ask it to anonymize them.
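A minimal sketch of that pattern (the model name, prompt wording, and sample data here are just illustrative, not the actual code):

```python
# Illustrative sketch: ask a chat model to anonymize text before it is used elsewhere.
import openai


def anonymize(text: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Anonymize any sensitive data (names, IDs, company names) in the "
                       "following text, replacing it with invented but plausible values:\n\n"
                       + text,
        }],
    )
    return response["choices"][0]["message"]["content"]


# Made-up sample input, purely for illustration.
print(anonymize("Invoice 10423 for Acme Slippers Ltd, contact jane.doe@acme.com"))
```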