As AI develops rapidly in the web3 world, it is becoming harder to distinguish real innovation from narrative bubbles. During the ETHDenver conference, we invited 11 of the most prominent AI projects. This article gives a brief overview of each project's vision, implementation approach, and application scenarios, and looks at how these projects are shaking up the current landscape.

Here are the key questions, along with the project that answers each:

Data
- Data supply: How do we obtain AI training data? (Grass)
- Data source: How do we protect the IP of data sources? (Story Protocol)
- Data alignment: How do we ensure that a specific dataset was actually used by a model? (Space and Time)

Model
- Open economics: How do we build an open platform with incentive mechanisms? (Bittensor, Sentient)
- Model alignment: How do we prove that model results have not been tampered with? (Modulus Labs, Ora)

Infrastructure
- Universal infrastructure: How do we connect all of this infrastructure together? (Ritual)

Agent
- AI agents: How do we make agents intelligent, composable, and ownable? (Future Primitive, Olas, MyShell)

1. Grass

Why: Data is the foundation of all AI training, but gatekeeping makes high-quality training data hard to obtain. A great deal of data can be scraped from the public Internet, but major websites usually block commercial data centers.

What it is: Grass is a data provision protocol that makes data accessible and promotes fairness in AI infrastructure.

How it works: Users install a Chrome extension and contribute their spare computing power and bandwidth to scan the Internet for AI data. Grass operates a network of nearly 1 million web-crawling nodes worldwide; using this network, it scrapes more than 1 TB of data every day and turns it into structured datasets.

Where it's used: Grass nodes currently operate in 190 countries.

2. Story Protocol

Why: AI remixing is both infringing and inevitable. The main obstacle to AI development here is the lack of profitability for IP and content creators and the inability to give them attribution and provenance protection.

What it is: A composable on-chain IP layer that lets creators set their own rules of engagement, increasing the recognizability and liquidity of IP works globally.

How it works: Creators purchase licensing NFTs to convert their static IP into programmable IP. Programmable IP is a layer that can be read and written by any program, and it consists of Nouns and Verbs. Nouns are the data structures and IP-related metadata, built on ERC6551; Verbs are modules, a set of functions over IP assets such as licensing, revenue streams for derivative works, and global access. As long as a derivative work generates revenue, income automatically flows back to the original IP (sketched after this section).

Where it's used: Story Protocol can be used for licensing, leasing, derivatives, and customization of regions, channels, validity periods, revocability, transferability, and attribution.
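To make the flow-back mechanics concrete, here is a minimal TypeScript sketch of how derivative revenue could propagate up an IP graph. Everything in it is hypothetical: the IpAsset and LicenseTerms types and the royalty percentages are made up for illustration and do not reflect Story Protocol's actual contracts or SDK.

```typescript
// Illustrative only: a simplified off-chain model of royalty flow-back for
// derivative IP. Names and numbers are hypothetical, not Story Protocol's API.

interface LicenseTerms {
  royaltyPercent: number; // share of a derivative's revenue owed to its parent IP
}

interface IpAsset {
  id: string;
  parent?: IpAsset;       // the IP this asset derives from, if any
  license?: LicenseTerms; // terms under which the derivative was created
  earned: number;         // revenue accumulated by this asset
}

// Distribute revenue earned by a derivative: each ancestor receives its
// royalty share of whatever flows up from the level below.
function settleRevenue(asset: IpAsset, revenue: number): void {
  asset.earned += revenue;
  if (asset.parent && asset.license) {
    const upstream = revenue * (asset.license.royaltyPercent / 100);
    asset.earned -= upstream;
    settleRevenue(asset.parent, upstream); // income flows back automatically
  }
}

// Example: an original work, a remix of it, and a remix of the remix.
const original: IpAsset = { id: "original", earned: 0 };
const remix: IpAsset = { id: "remix", parent: original, license: { royaltyPercent: 10 }, earned: 0 };
const remixOfRemix: IpAsset = { id: "remix-of-remix", parent: remix, license: { royaltyPercent: 10 }, earned: 0 };

settleRevenue(remixOfRemix, 1000);
console.log(original.earned, remix.earned, remixOfRemix.earned); // 10, 90, 900
```

The point of the sketch is only the propagation rule: every layer of derivation keeps most of its revenue and forwards a licensed share upstream, so the original creator earns whenever any remix in the chain does.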
3. Space and Time

Why: As LLMs evolve, datasets and parameters can be modified or tampered with by large companies, so it matters to prove cryptographically that a dataset has not been tampered with and that the same dataset was used during LLM training. In addition, Space and Time has been exploring ways to clean copyrighted data, extract data from a verifiable vector database, and inject prompts during inference.

What it is: Space and Time is an indexer and ZK prover that proves SQL queries or vector searches against indexed data.

How it works: LLM providers load their on-chain and off-chain training datasets into Space and Time, where the data is witnessed and threshold-signed using cryptographic commitments; those commitments are then used to prove that the dataset was indeed used for training. Afterwards, lawyers or auditors can confirm that the dataset was not tampered with after training. Space and Time has also developed a GPU accelerator, Blitzar, which verifies a query over a 2-million-row table in 14 seconds on a single GPU.

Where it's used: Space and Time lets users write queries in plain text. A few seconds later, OpenAI retrieves context from the vector search database and writes the exact Space and Time SQL, which the prover executes and returns a proof for in about 4 seconds.

4. Bittensor

Why: OpenAI's goal is to monopolize control of artificial intelligence.

What it is: Bittensor is a decentralized, open-source AI platform.

How it works: The Bittensor network has 32 subnets. These subnets started with models but have since expanded to storage, computing, crawling, tracking, and other AI domains. TAO tokens incentivize subnet builders to continuously improve their models or projects, and validators rank the subnets' outputs. The ranking changes the distribution of TAO, and the lowest-ranked subnet is kicked out of the network. This mechanism ensures that models compete to produce the best outputs, and the work with the greatest collective value is rewarded.

Where it's used: Powerful applications are emerging, such as FileTAO for decentralized storage, Cortex TAO for OpenAI-style inference, Nous Research for LLM fine-tuning, and Fractal Research for decentralized text-to-video generation.

5. Sentient

Why: Building AGI is dangerous, facing both the risk of human extinction and a capitalist framework, so it inherently needs crypto platforms; and crypto platforms, in turn, need native killer applications.

What it is: Sentient is a sovereign, incentive-driven AI development platform.

How it works: Sentient uses a crowdsourced approach so the community can coordinate and contribute to model training at lower cost, uses open protocols to control inference, achieves composability between models, and flows value back to network participants. By aggregating web2 and web3 forces and leveraging tokens, Sentient aims to strongly incentivize developers to build trustless AGI.

6. Modulus Labs

Why: As the future of AGI becomes unstoppable, we need to show that AI results are accountable, safe, generated by a certified model, not manipulated, and not dependent on the benevolence of a trusted central authority.

What it is: Modulus has built a specialized AI ZK prover, Remainder, that provides AI capabilities to dApps at a fraction of the cost.

How it works: Using general-purpose modern ZK proof systems on otherwise unverifiable AI outputs doesn't make sense, because the overhead is roughly 10,000x to 100,000x. Modulus built a custom prover for AI inference with only about 180x overhead.

Where it's used: Upshot is a notable integration, but Upshot's complex valuation models can only run off-chain, which raises trust issues. With Modulus, Upshot can send valuable AI valuations to Modulus every hour; Modulus generates correctness proofs for the AI computations, aggregates the proofs, and sends them to Ethereum for final verification (a sketch of this batching pattern follows).
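The sketch below shows the shape of that hourly "batch, prove, verify on Ethereum" loop in TypeScript with ethers. The prover call, the verifier contract address, and the verifyBatch ABI are hypothetical stand-ins; they are not Modulus's or Upshot's actual interfaces.

```typescript
// Illustrative only: hourly batch -> aggregated proof -> on-chain verification.
import { ethers } from "ethers";

interface Evaluation {
  assetId: string;
  price: bigint; // model-produced valuation
}

// Stand-in for the off-chain ZK prover: takes a batch of model outputs and
// returns one aggregated correctness proof plus a commitment to the outputs.
async function proveBatch(batch: Evaluation[]): Promise<{ proof: string; outputsRoot: string }> {
  // In reality this step runs the prover over the model's execution trace.
  return { proof: "0x", outputsRoot: ethers.ZeroHash };
}

async function hourlyCycle(batch: Evaluation[]): Promise<void> {
  const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
  const signer = new ethers.Wallet(process.env.PRIVATE_KEY!, provider);

  // Hypothetical on-chain verifier that checks the aggregated proof.
  const verifier = new ethers.Contract(
    process.env.VERIFIER_ADDRESS!,
    ["function verifyBatch(bytes proof, bytes32 outputsRoot) external"],
    signer
  );

  const { proof, outputsRoot } = await proveBatch(batch);
  const tx = await verifier.verifyBatch(proof, outputsRoot); // final verification on Ethereum
  await tx.wait();
}
```

The design point is that proving happens off-chain and only one aggregated proof per batch is verified on Ethereum, which is what keeps the per-inference cost low.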
7. Ora

Why: AI models cannot run directly on-chain, since a single inference would have to be repeated by thousands of machines. Verifying results on-chain is feasible, but as models grow larger the cost of ZKML grows exponentially, so an approach whose cost grows only linearly, OPML, is needed.

What it is: Ora is an on-chain AI oracle that uses OPML to support AI models of any scale.

How it works: The oracle delegates computation to off-chain nodes. Users initiate transactions from smart contracts by providing a prompt and naming a model. The OAO contract delegates the transaction to an OPML node, which performs inference, generates fraud proofs, and submits them to the verifier on ORA; the verified result is returned to the transaction initiator. OPML still needs ZKP assistance to achieve input/output privacy and instant finality. In addition, ORA's ZK oracle can generate storage proofs for OPML results, so the same OPML computation does not need to be repeated when it is reused.

Where it's used: With ORA, large models such as Stable Diffusion and 7B-LLaMA can now be used on the Ethereum mainnet. ORA can support AI-managed DAOs and AIGC NFTs (for example, EIP7007) that give ownership to models.

8. Ritual

Why: As AI infrastructure becomes increasingly centralized, permissioned, and subject to ever-stricter regulation, new forms of censorship and manipulation emerge. Cryptography, however, provides primitives for privacy and computational integrity, for coordination and incentives, and for permissionless-by-default infrastructure.

What it is: Ritual is the natural convergence point of crypto and AI, comprising a decentralized oracle network and a sovereign chain (with custom VMs and coprocessors).

How it works: The oracle network, Infernet, lets any smart contract on an EVM chain connect its on-chain workflow to off-chain ML inference, and the eventual coprocessor brings AI operations natively to the VM level while staying composable with other data availability (DA) layers, permanent storage, prover networks, GPU networks, and inference engines. Nodes on the Ritual network not only run and serve model operations but also run consensus and execution clients.

Where it's used: Frenrug uses the Infernet SDK to build an unpredictable LLM that guides users to buy and sell keys on friend.tech. DeFi lending protocols can train models to parameterize and generalize across all protocols. For crypto-enabled AI, MyShell will be the first example of a model creator economy.

9. Olas

Why: Autonomous agents are powerful entities that can perceive information and take actions, but the potential of web2 autonomous agents is severely limited: they cannot pass KYC verification, users cannot own them, they face platform censorship risk, and their composability is limited.

What it is: Olas is a decentralized protocol for shared autonomous agents.

How it works: In Olas, agents run off-chain, while registration and management happen on-chain. Agents are orchestrated to deliver autonomous services ("decentralized autonomous agents"). Each agent operator controls an agent and a consensus tool. Each agent runs a finite state machine (FSM) that is replicated across the service's agents on a temporary blockchain, and the service's agents reach consensus off-chain before taking on-chain actions (a minimal FSM sketch follows this section). The network provides OLAS token incentives to all stakeholders: capital providers, code providers (developers), agent operators (stakers), and service owners (entrepreneurs).

Where it's used: Olas Predict is an economy of three types of autonomous agents that continuously create and participate in prediction markets on arbitrary future events. Trader agent operators don't need to worry about an OpenAI subscription; they simply pay in cryptocurrency per request.
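The toy TypeScript sketch below illustrates the replicated-FSM idea: every agent in a service runs the same transition function, and the machine only reaches the state that triggers an on-chain action once enough agents agree off-chain. The state names and threshold logic are hypothetical; the real Olas stack (Open Autonomy) defines its own rounds, payloads, and consensus machinery.

```typescript
// Illustrative only: a toy replicated finite state machine for an agent service.

type State = "COLLECT_OBSERVATIONS" | "REACH_CONSENSUS" | "EXECUTE_ONCHAIN" | "DONE";

interface ServiceConfig {
  agentCount: number; // number of agents replicating the FSM
  threshold: number;  // votes needed before any on-chain action
}

// One step of the shared state machine: every agent runs the same transition
// function, so honest agents stay in lockstep.
function transition(state: State, votesForAction: number, cfg: ServiceConfig): State {
  switch (state) {
    case "COLLECT_OBSERVATIONS":
      return "REACH_CONSENSUS";
    case "REACH_CONSENSUS":
      // Only move on when enough agents agree off-chain.
      return votesForAction >= cfg.threshold ? "EXECUTE_ONCHAIN" : "COLLECT_OBSERVATIONS";
    case "EXECUTE_ONCHAIN":
      // A single multisig-style transaction is submitted on behalf of the service.
      return "DONE";
    default:
      return "DONE";
  }
}

// Example: a 4-agent service that needs 3 matching votes before acting.
const cfg: ServiceConfig = { agentCount: 4, threshold: 3 };
let state: State = "COLLECT_OBSERVATIONS";
state = transition(state, 0, cfg); // -> REACH_CONSENSUS
state = transition(state, 3, cfg); // -> EXECUTE_ONCHAIN
state = transition(state, 3, cfg); // -> DONE
```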
10. MyShell

Why: With the rise of LLMs, we need better tools to help creators easily build applications and turn today's "static" creator economy into a dynamic space.

What it is: MyShell is a decentralized platform for discovering, creating, and staking AI-native applications.

How it works: Creators can build AI-native applications on MyShell in just a few minutes, from companion AIs to tools that enhance learning and work: pick an open-source model, edit the prompt's prefix and suffix, and supply domain-specific information, images, and, soon, video. MyShell's LLM is trained on a large amount of proprietary data, making the role-playing experience feel closer to human. Tokens are used to access advanced features, support and reward creators, and settle royalties.

Where it's used: The MyShell platform has multiple use cases: AI characters with distinct personalities and unique voices for companionship, learning, and gaming, as well as tools for language learning, text-to-image generation, summarizing video content, and more.

11. Future Primitive

Why: NFTs are internet-native objects, but they lack programmability, which limits what they can do on-chain.

What it is: Future Primitive turns NFTs into smart entities through ERC6551 and its surrounding infrastructure.

How it works: With ERC6551, each NFT gets the same capabilities as an Ethereum user: it can self-custody assets, perform arbitrary operations, and control multiple independent accounts. Each NFT's account address is the same on any EVM chain, unlocking cross-chain potential (see the address-resolution sketch below). With token bound V4, you issue authorizations for your TBA so that a smart contract can perform on-chain operations fully autonomously, without requiring you to sign, verify, or execute transactions, and the owner can always revoke the authorized extension.
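As a rough illustration of the cross-chain determinism mentioned above, the TypeScript sketch below resolves an NFT's token-bound account (TBA) address through an ERC-6551 registry using ethers. The registry address, the account implementation, and the exact parameter order of account(...) are assumptions here; confirm them against the ERC-6551 spec and your target deployment.

```typescript
// Illustrative only: resolving an NFT's token-bound account address.
import { ethers } from "ethers";

const REGISTRY = "0x000000006551c19487814612e58FE06813775758"; // assumed canonical registry address
const IMPLEMENTATION = process.env.TBA_IMPLEMENTATION!;        // account implementation contract

const registryAbi = [
  "function account(address implementation, bytes32 salt, uint256 chainId, address tokenContract, uint256 tokenId) view returns (address)",
];

async function tokenBoundAccount(
  tokenContract: string,
  tokenId: bigint,
  chainId: bigint // chain on which the NFT itself lives
): Promise<string> {
  const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
  const registry = new ethers.Contract(REGISTRY, registryAbi, provider);

  // The address is derived deterministically (CREATE2), which is why the same
  // NFT resolves to the same account address on every EVM chain that hosts the
  // registry at the same address.
  return registry.account(IMPLEMENTATION, ethers.ZeroHash, chainId, tokenContract, tokenId);
}
```

A wallet or dApp would then treat the returned address like any other account: send assets to it, or let the NFT's owner execute calls through it.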