Compass Labs: AI infrastructure for optimizing and automating decision making in DeFi

Name of Project:

Compass Labs

Proposal in one sentence:

Compass Labs (https://compasslabs.ai/) is building the AI infrastructure for optimizing and automating decision-making in DeFi, delivered as a one-click solution.

Description of the problem and what problem it is solving:

Fundamentally, DeFi is too complicated and difficult to interact with, which makes the space inaccessible to non-experts. At Compass Labs (https://compasslabs.ai/) we use machine learning to remove this complexity, optimizing and automating decision-making on DeFi protocols so that people can interact through a one-click solution.

The blockchain space is hyper-optimizable for three reasons: 1) protocols are written in open-source smart-contract code, 2) on-chain data is transparent, and 3) tokenomics act as a reward function. Together, these mean the Sim2Real gap in blockchain-based systems is very narrow compared to other industries, making the environment uniquely suited for compute-driven control. At Compass, we therefore build simulation environments in which we simulate how a system will respond to agent actions, confident that the results will transfer to real life under minimal assumptions, while discovering optimization opportunities at scale.

Our first focus is on solving the liquidity provisioning problem in DeFi. Currently, liquidity provisioning is complex: investors cannot quantify risks such as impermanent loss, pool complexity, and fee optimization while maximizing APY. At the moment, more than 50% of liquidity providers are losing money on Uniswap V3, and protocols are put at risk because centralized liquidity leaves them without shock-absorbing capacity (>$90B was lost during the May 2022 market crash). Together, this makes decentralized exchanges highly inefficient, as protocols increase trading fees to compensate for liquidity risk.

Our first product, CompassV1, is an AI research stack that uses machine learning to enable individuals and DAOs to participate in dynamic liquidity provisioning on decentralized exchanges. We use CEX and DEX data, as well as smart-contract code, to run agent-based simulations that train a Bayesian model-based reinforcement learning algorithm, deriving a policy that maximizes risk-adjusted returns in real time, where the risk profile depends on a user-selected risk parameter. The optimal pool distribution is then issued as the vault strategy: assets are allocated across the liquidity pools and rebalanced in real time, while the results of the strategies feed back to inform and update the model. In this way, we enable non-uniform, capital-efficient liquidity distribution strategies.
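As a rough illustration of the Bayesian, feedback-driven allocation loop described above (a hypothetical sketch, not Compass's actual algorithm), pool selection can be framed as a Bayesian bandit, with capital routed by Thompson sampling over posterior yield estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-step yields of three liquidity pools (noise-free
# for the sketch; real yields would be noisy and non-stationary).
true_yields = np.array([0.01, 0.05, 0.03])
n_pools = len(true_yields)

# Independent Gaussian posterior over each pool's mean yield.
mu = np.zeros(n_pools)     # posterior means
prec = np.ones(n_pools)    # posterior precisions (prior is N(0, 1))
obs_prec = 100.0           # assumed observation precision

for step in range(500):
    # Thompson sampling: draw a yield from each posterior and
    # allocate to the pool whose draw is highest.
    draws = rng.normal(mu, 1.0 / np.sqrt(prec))
    pool = int(np.argmax(draws))
    reward = true_yields[pool]
    # Conjugate Gaussian update of the chosen pool's posterior.
    prec[pool] += obs_prec
    mu[pool] += obs_prec * (reward - mu[pool]) / prec[pool]

print(int(np.argmax(mu)))  # the policy concentrates on pool 1
```

A production policy would additionally have to model impermanent loss, gas costs, and correlations between pools, and fold the user-selected risk parameter into the reward; the point here is only the loop itself: act, observe, update the posterior, re-allocate.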

Grant Deliverables:

Bayesian Online Learning Toolbox: a framework built on top of JAX that enables the development of Bayesian models that can be trained quickly online.

Integration with Enzyme
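To make the toolbox deliverable above concrete, here is a minimal sketch (the naming and API are illustrative, not the toolbox's actual interface) of the kind of model it could support: online Bayesian linear regression with conjugate Gaussian updates, written against `jax.numpy` so each update step is jit-compiled:

```python
import jax
import jax.numpy as jnp

@jax.jit
def update(mu, prec, x, y, noise_prec=10.0):
    """Fold one observation (x, y) into the Gaussian posterior over
    the regression weights via the conjugate precision/mean update."""
    prec_new = prec + noise_prec * jnp.outer(x, x)
    mu_new = jnp.linalg.solve(prec_new, prec @ mu + noise_prec * y * x)
    return mu_new, prec_new

# Isotropic Gaussian prior over two weights.
mu = jnp.zeros(2)
prec = jnp.eye(2)

# Stream observations from y = 2*x0 - x1 (noise-free for the sketch).
key = jax.random.PRNGKey(0)
for _ in range(200):
    key, sub = jax.random.split(key)
    x = jax.random.normal(sub, (2,))
    mu, prec = update(mu, prec, x, 2.0 * x[0] - x[1])

print(mu)  # posterior mean approaches [2, -1]
```

Because the update is conjugate, streaming the observations one at a time gives the same posterior as a batch fit, which is what makes this style of model cheap to run online.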

Squad:

  • Elisabeth Duijnstee (Co-founder): Elisabeth completed her Ph.D. in Physics at the University of Oxford a year early with multiple first-author papers in leading physics journals. She then joined a hedge fund before doing a Machine Learning fellowship at Faculty AI. She is also a partner at Founders and Funders at Oxford’s Business School.
  • Rohan Tangri (Co-founder): Rohan was a data scientist at JP Morgan London, which also sponsored his Ph.D. in Modern Statistics and Statistical Machine Learning at Imperial College London. He was also a lead scientist at Limbic Labs, where he helped secure £400K in government funding. Rohan also did research with NASA to develop a noise-detection algorithm for the InSight mission on Mars.
  • Peter Yatsyshin (Research Scientist): Peter is a research associate at The Alan Turing Institute, focusing on scalable methods for statistical inference and machine learning. He holds a Ph.D. from Imperial College London in computational statistical physics. Peter's favourite tech stack is JAX and NumPyro, which he has successfully applied to research for the NHS.

  • Theo Dale (Blockchain engineer): Theo is a software engineer who graduated from the University of Bath with a Masters in Computer Science. Since then, he has helped multiple start-ups deliver products, ranging from biodiversity-analysis tools to decentralized NFT-collateralized lending protocols.

Twitter: @labs_compass

Discord: Discord


This sounds really cool, and imo you’ve found an excellent application area re: Sim2Real gap being low due to transparency baked in by default. I’m new to the web3 space, so this may be a dumb question. Is there a concern that there might still be some opacity regarding algorithms used for market-making and liquidity deployment? Or is the idea to estimate the conditional distribution p(risks | algorithm of choice)?

I’m also curious if there are any open-source libraries related to what you’re doing that might be useful to prototype at a smaller scale before building out the entire stack from scratch? For instance, something like RecSim might give you an RL-friendly means to model agents and behaviors in a toy environment. That being said, I’m not sure how translatable that is to your situation since it wasn’t originally designed for web3 simulation.

Lastly, a fundamental question arises re: agent behavior in response to other agents’ novel actions; I’ve been posed this question when I talk about social network simulation and I’m curious what your take is: how do you model agent responses without prior information about a novel intervention? Isn’t this an impossible task without access to the true causal model of agent-level decision making, or is there an assumption of rationality, or other bounds on behavioral mechanisms, such that you construct an implicit/explicit model with associated assumptions?

To be clear, I’m all for this. It’s super impactful and I believe people are already successfully(ish?) doing related / overlapping work at larger for-profit organisations like Gauntlet so there is certainly a strong indication that an open-source project would be useful!

Also, I’m partially asking these questions because I do doctoral research on similar topics and want to solicit opinions from others working in the space on more practical topics. Excited to see where y’all go!

The ‘litepaper’ boils down to this statement:

For CompassV1 we use a combination of on-chain transaction data and smart contract code to simulate how a protocol will behave in response to our actions. We then run agent-based simulations to train a Bayesian model together with reinforcement learning to derive a policy that maximises risk-adjusted returns, where the risk profile depends on a user-selected risk parameter.

I am highly skeptical, for two reasons:

When you build simulations from time-series data without measuring the underlying causal factors, you end up with bad correlational models that can be adversarially attacked, and will be adversarially attacked, because the model has no access to the ground truth.