Torch: AI-driven personalized NFT discovery and visual search service

Proposal in one sentence:

Torch is an AI-driven personalized NFT discovery and visual search service.

Description of the project and what problem it is solving:

When you go to any NFT marketplace, you either see a trending collection or a random grid of NFTs. Torch aims to make NFT discovery easier and personalized. You can search for visual features, and our computer-vision-based image and text similarity search will find the most visually similar NFTs via a semantic search engine. We provide a REST and GraphQL API for marketplaces to integrate and offer a personalized user experience.
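The core of such a similarity search can be sketched in a few lines; this is a minimal illustration assuming embeddings come from a vision-language model, with made-up function names and toy random vectors standing in for real NFT embeddings:

```python
import numpy as np

def cosine_sim(a, b):
    # cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_emb, nft_embs, top_k=3):
    # rank NFTs by similarity of their image embedding to the query embedding
    scores = [(i, cosine_sim(query_emb, e)) for i, e in enumerate(nft_embs)]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]

# toy example with random stand-in "embeddings"
rng = np.random.default_rng(0)
nft_embs = [rng.standard_normal(8) for _ in range(5)]
query = nft_embs[2] + 0.01 * rng.standard_normal(8)  # query near NFT #2
print(search(query, nft_embs, top_k=1)[0][0])  # → 2
```

In a real system the query embedding would come from encoding the user's text or image, and the ranking would run inside a vector database rather than in Python.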

Torch uses a set of ML models that analyze what your ETH wallet has purchased, transferred, and so on, and builds a personalized recommendation engine, much like Spotify does for music. For fresh wallets, you can instead choose your preferences and the kinds of art you like. You can then take your taste out into the world and find NFTs that you would truly love.
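One common way to implement a wallet-based recommender like this (a sketch, not necessarily Torch's actual model) is to average the embeddings of the NFTs a wallet has interacted with into a single "taste vector" and rank candidates by similarity to it. The vectors and names below are illustrative:

```python
import numpy as np

def taste_vector(owned_embs):
    # collapse the wallet's purchase/transfer history into one
    # direction in embedding space
    v = np.mean(owned_embs, axis=0)
    return v / np.linalg.norm(v)

def recommend(taste, candidates, top_k=1):
    # rank unseen NFTs by cosine similarity to the taste vector
    def sim(e):
        return float(np.dot(taste, e) / np.linalg.norm(e))
    ranked = sorted(range(len(candidates)), key=lambda i: sim(candidates[i]),
                    reverse=True)
    return ranked[:top_k]

# toy wallet that has bought two "ocean-like" NFTs
owned = [np.array([1.0, 0.1]), np.array([0.9, 0.0])]
candidates = [np.array([0.0, 1.0]),   # forest-like
              np.array([1.0, 0.05])]  # ocean-like
print(recommend(taste_vector(owned), candidates))  # → [1]
```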

This is only the beginning; the next step is to expand to other chains and generate a decentralized web3 persona.

Grant Deliverables:

  • Train the model on all of the datasets and include one more chain
  • Support the infrastructure for a few months
  • A resilient, fault-tolerant Qdrant semantic search database


  • Twitter: @_mayurc
  • Discord: mayurch#9300



I would just like to mention that there is a very popular ML framework also known as “torch”

You are right: PyTorch. We wanted this to be Torch Labs, where you can discover NFTs and, in the future, the wider web3. The idea was that Torch shines a light on your NFTs :wink:

Nice! You might be interested in Clip front. It uses clip retrieval, which tries to find the image that matches a text query using an AI model called CLIP.

I have used clip retrieval personally, in addition to Clip front, and I can confirm that this is probably the solution you should try.


Hi, this looks interesting. What do you mean by visual search? Is it search via comparing your inputs after processing through an embedding layer?

For discovery, I always thought a memory module has to be built to remember what defines you and to recommend new nodes to connect to. Curious how you’re approaching these problems…


Hi @metamyth, thanks for checking out the project. Visual search, in this context, means I’ve run computer vision image analysis to extract the features and labels present in the images. So if an NFT has an ocean and a surfer, I’ve extracted that, along with everything else the model was able to find in those images, and put it into a vector search engine. That allows you to search on these keywords and find what you love; you can then also run image similarity to find slightly different NFTs that you may like.
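Concretely, the two query modes described above (keyword search over extracted labels, and image similarity over embeddings) can be sketched in plain Python; the real service presumably does this inside Qdrant with payload filters and vector search, so treat the data and names here as illustrative:

```python
import numpy as np

# toy index: each NFT has extracted labels plus an image embedding
index = [
    {"id": "nft-1", "labels": {"ocean", "surfer"}, "emb": np.array([0.9, 0.1])},
    {"id": "nft-2", "labels": {"forest", "deer"},  "emb": np.array([0.1, 0.9])},
    {"id": "nft-3", "labels": {"ocean", "boat"},   "emb": np.array([0.8, 0.2])},
]

def keyword_search(keyword):
    # filter by the labels the vision model attached to each image
    return [item["id"] for item in index if keyword in item["labels"]]

def similar_to(nft_id, top_k=1):
    # rank the other NFTs by cosine similarity of image embeddings
    query = next(i["emb"] for i in index if i["id"] == nft_id)
    others = [i for i in index if i["id"] != nft_id]
    def sim(e):
        return float(np.dot(query, e) /
                     (np.linalg.norm(query) * np.linalg.norm(e)))
    return sorted(others, key=lambda i: sim(i["emb"]), reverse=True)[:top_k]

print(keyword_search("ocean"))       # → ['nft-1', 'nft-3']
print(similar_to("nft-1")[0]["id"])  # → 'nft-3'
```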

For discovery, there are two parts at the moment. If your wallet has lots of activity and purchased NFTs, there is enough history for the recommendation engine to predict what else you may like, based on color, features, etc. For newer wallets, Torch will walk you through a personalization experience similar to Netflix or Spotify, where you are asked which types of art you are interested in, e.g. Impressionist, calm, etc. Then, as you start using it, the model will reflect these preferences and surface items similar to what you’ve searched for, liked, etc. Hope that answers your questions. :slight_smile:
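A common way to handle this cold-start blend (a sketch under my own assumptions, not Torch's documented method) is to mix a stated-preference embedding with the average of behavioral embeddings, shifting weight toward observed behavior as activity accumulates:

```python
import numpy as np

def profile_vector(pref_emb, history_embs):
    # fresh wallets rely on stated preferences; the weight on observed
    # behavior grows with interaction count (illustrative schedule)
    if not history_embs:
        return pref_emb
    w = min(len(history_embs) / 10, 0.8)
    hist = np.mean(history_embs, axis=0)
    v = w * hist + (1 - w) * pref_emb
    return v / np.linalg.norm(v)

# wallet that picked "calm" art at onboarding but has liked 5 vivid pieces
pref = np.array([1.0, 0.0])
history = [np.array([0.0, 1.0])] * 5   # w = 0.5: equal blend
print(np.round(profile_vector(pref, history), 3))  # → [0.707 0.707]
```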
