OpenLedger is building an on-chain AI ecosystem: an OP Stack + EigenDA foundation driving a composable agent economy.

OpenLedger In-Depth Research Report: Building a Data-Driven, Model-Composable Intelligent Economy on OP Stack + EigenDA

1. Introduction | The Model-Layer Leap of Crypto AI

Data, models, and computing power are the three core elements of AI infrastructure, analogous to fuel (data), engine (model), and energy (computing power); none can be dispensed with. Much like the traditional AI industry, the Crypto AI field has moved through similar infrastructure stages. In early 2024 the market was dominated by decentralized GPU projects (decentralized computing-power platforms), which generally emphasized the extensive-growth logic of "competing on compute." Entering 2025, however, the industry's focus has gradually shifted to the model and data layers, marking Crypto AI's transition from competition over underlying resources to more sustainable, application-value-driven mid-layer construction.


General-Purpose Large Models (LLMs) vs. Specialized Models (SLMs)

Traditional large language models (LLMs) rely heavily on large-scale datasets and complex distributed architectures, with parameter counts typically ranging from 70B to 500B; a single training run can cost millions of dollars. An SLM (Specialized Language Model), by contrast, is a lightweight fine-tuning paradigm built on reusable foundation models, typically open-source models such as LLaMA, Mistral, or DeepSeek. By combining a small amount of high-quality specialized data with techniques like LoRA, it enables rapid construction of expert models with specific domain knowledge, significantly reducing training costs and technical barriers.
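For concreteness, here is a minimal sketch of this fine-tuning paradigm using the open-source HuggingFace transformers and peft libraries; the base model name is a placeholder, and nothing here is OpenLedger-specific:

```python
# Minimal LoRA fine-tuning sketch using HuggingFace transformers + peft.
# The base model is a placeholder; this is not OpenLedger's actual pipeline.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # any open-source base model works here
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# LoRA freezes the base weights and trains only small low-rank adapter matrices.
config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```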

It is worth noting that an SLM's weights are not merged into the LLM; instead, the SLM collaborates with the LLM through agent-architecture invocation, dynamic routing via a plugin system, hot-swappable LoRA modules, and RAG (Retrieval-Augmented Generation). This architecture retains the LLM's broad coverage while boosting specialized performance through fine-tuned modules, yielding a highly flexible, combinatorial intelligent system.
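The hot-swappable LoRA pattern described above can be sketched with peft's multi-adapter API; the adapter paths and the keyword router below are illustrative assumptions, not OpenLedger's actual routing logic:

```python
# Conceptual sketch: one shared base model, several hot-swappable LoRA adapters.
# Adapter paths and the naive keyword router are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base_model, "adapters/legal", adapter_name="legal")
model.load_adapter("adapters/medical", adapter_name="medical")

def route(query: str) -> str:
    """Pick an adapter; a real system would use a classifier or agent planner."""
    return "medical" if "symptom" in query.lower() else "legal"

model.set_adapter(route("What symptoms should I report?"))  # swap adapters in place
```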

The Value and Boundaries of Crypto AI at the Model Layer

Crypto AI projects fundamentally struggle to directly improve the core capabilities of large language models (LLMs), for two core reasons:

  • High technical barriers: training a foundation model demands data, computing resources, and engineering capability on an enormous scale; currently only technology giants in the United States (OpenAI, etc.) and China (DeepSeek, etc.) have such capabilities.
  • Limits of the open-source ecosystem: although mainstream foundation models such as LLaMA and Mixtral are open-source, the breakthroughs that truly push models forward remain concentrated in research institutions and closed-source engineering systems, leaving on-chain projects with limited involvement at the core-model level.

However, on top of open-source foundation models, Crypto AI projects can still extend value by fine-tuning specialized language models (SLMs) and combining them with Web3's verifiability and incentive mechanisms. As the "peripheral interface layer" of the AI industry chain, this plays out in two core directions:

  • Trustworthy verification layer: recording the model generation path, data contributions, and usage on-chain enhances the traceability and tamper-resistance of AI outputs.
  • Incentive mechanism: a native token is used to incentivize behaviors such as data uploading, model invocation, and agent execution, creating a positive cycle between model training and services.

Classification of AI Model Types and Analysis of Blockchain Applicability

It follows that the feasible landing points for model-centric Crypto AI projects concentrate on lightweight fine-tuning of small SLMs, on-chain data access and verification under RAG architectures, and local deployment and incentivization of edge models. Combining blockchain's verifiability with its token mechanism, Crypto can provide unique value in these medium- and low-resource model scenarios, creating differentiated value at the AI "interface layer."

An AI blockchain built around data and models can clearly and immutably record the provenance of every data contribution and model on-chain, markedly improving data credibility and the traceability of model training. Through smart contracts, reward distribution is triggered automatically whenever data or a model is called, converting AI activity into measurable, tradable tokenized value and building a sustainable incentive system. In addition, community users can evaluate model performance and participate in rule-making and iteration via token voting, improving the decentralized governance structure.
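As a hedged illustration of this incentive loop, the sketch below simulates attribution-weighted reward distribution in plain Python; the weights and fee split are invented for the example and are not OpenLedger's published parameters:

```python
# Toy simulation of attribution-based reward distribution on a model call.
# Contribution weights and the fee amount are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Contribution:
    contributor: str  # on-chain address of a data or model contributor
    weight: float     # attributed share of this model's output

def distribute(call_fee: float, contributions: list[Contribution]) -> dict[str, float]:
    """Split a model-call fee among contributors, proportional to attribution."""
    total = sum(c.weight for c in contributions)
    return {c.contributor: call_fee * c.weight / total for c in contributions}

rewards = distribute(
    call_fee=1.0,  # fee paid by the caller, in tokens
    contributions=[
        Contribution("0xDataProvider", 0.6),
        Contribution("0xModelDev", 0.4),
    ],
)
print(rewards)  # {'0xDataProvider': 0.6, '0xModelDev': 0.4}
```

On-chain, this kind of split would be executed by a smart contract at call time rather than computed off-chain as here.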


2. Project Overview | OpenLedger's AI Chain Vision

OpenLedger is one of the few blockchain AI projects in the current market that focuses on data and model incentive mechanisms. It was the first to propose the concept of "Payable AI," aiming to build a fair, transparent, and composable AI operating environment that incentivizes data contributors, model developers, and AI application builders to collaborate on the same platform and earn on-chain rewards based on actual contributions.

OpenLedger provides a complete on-chain closed loop from "data provision" to "model deployment" to "revenue-sharing on invocation," with core modules including:

  • Model Factory: fine-tune and deploy custom models on open-source LLMs with LoRA, no programming required;
  • OpenLoRA: supports the coexistence of thousands of models, dynamically loaded on demand, significantly reducing deployment costs;
  • PoA (Proof of Attribution): contribution measurement and reward distribution through on-chain call records;
  • Datanets: structured data networks for vertical scenarios, built and verified through community collaboration;
  • Model Proposal Platform: a composable, callable, and payable on-chain model marketplace.

Through these modules, OpenLedger has built a data-driven, model-composable "agent economy infrastructure" that brings the AI value chain on-chain.

For its blockchain foundation, OpenLedger adopts OP Stack + EigenDA, building a high-performance, low-cost, verifiable environment for data and contract execution for AI models.

  • Built on the OP Stack: based on the Optimism technology stack, supporting high-throughput, low-cost execution;
  • Settles on the Ethereum mainnet: ensuring transaction security and asset integrity;
  • EVM-compatible: developers can quickly deploy and extend with Solidity (see the sketch after this list);
  • EigenDA for data availability: significantly reducing storage costs while guaranteeing data verifiability.
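A minimal sketch of what EVM compatibility means in practice, using the web3.py client; the RPC endpoint below is a hypothetical placeholder, not an official OpenLedger URL:

```python
# Connecting to an EVM-compatible OP Stack rollup with web3.py.
# The RPC URL is a hypothetical placeholder, not an official endpoint.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example-openledger.io"))
print(w3.is_connected())                     # True if the node is reachable
print(w3.eth.chain_id)                       # the rollup's chain ID
print(w3.eth.get_block("latest")["number"])  # standard EVM JSON-RPC works as-is
```

Because the chain exposes a standard EVM environment, existing Solidity contracts and tooling can be reused without modification.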

Compared with general-purpose AI chains such as NEAR, which focus more on the underlying layer, data sovereignty, and the "AI Agents on BOS" architecture, OpenLedger concentrates on building an AI-specific chain for data and model incentives. It is committed to making on-chain model development and invocation a traceable, composable, and sustainable value loop. As model-incentive infrastructure for the Web3 world, it combines model hosting in the style of model-hub platforms, usage-based billing in the style of payment platforms, and composable on-chain interfaces in the style of blockchain infrastructure services, advancing the realization of "model as asset."


3. Core Components and Technical Architecture of OpenLedger

3.1 Model Factory: A No-Code Model Factory

Model Factory is a large language model (LLM) fine-tuning platform within the OpenLedger ecosystem. Unlike traditional fine-tuning frameworks, it offers a purely graphical interface, with no command-line tools or API integration required. Users fine-tune models on datasets that have been authorized and reviewed on OpenLedger, in an integrated workflow covering data authorization, model training, and deployment. The core steps include:

  • Data access control: users submit data requests, providers review and approve them, and the data is automatically connected to the model-training interface.
  • Model selection and configuration: supports mainstream LLMs (such as LLaMA, Mistral), with hyperparameters configured through the GUI.
  • Lightweight fine-tuning: a built-in LoRA / QLoRA engine displays training progress in real time.
  • Model evaluation and deployment: built-in evaluation tools support export for deployment or shared invocation within the ecosystem.
  • Interactive verification interface: a chat-style interface for directly testing the model's Q&A capabilities.
  • RAG provenance generation: answers carry source citations, enhancing trust and auditability (see the sketch after this list).
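The source-cited answers from that last step can be illustrated with a toy retrieval function; the corpus and word-overlap scoring below are stand-ins for a real vector index, not Model Factory's actual pipeline:

```python
# Toy RAG sketch: retrieve supporting passages and attach source citations.
# The two-document corpus and overlap scoring stand in for a real vector index.
corpus = {
    "doc1": "LoRA inserts low-rank matrices into a frozen base model.",
    "doc2": "EigenDA provides data availability for rollups.",
}

def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

for source, passage in retrieve("What does LoRA insert into the base model?"):
    print(f"{passage} [source: {source}]")  # answer text with its citation
```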

The Model Factory system architecture consists of six modules, covering identity authentication, data permissions, model fine-tuning, model evaluation, deployment, and RAG traceability, forming a secure, controllable, interactive, and sustainably monetizable integrated model-service platform.


The following is a brief overview of the large language models currently supported by Model Factory:

  • LLaMA series: the broadest ecosystem and most active community, with strong general performance; among the most mainstream open-source foundation models today.
  • Mistral: an efficient architecture with excellent inference performance, suitable for flexible deployment in resource-constrained scenarios.
  • Qwen: from Alibaba, excels at Chinese tasks with strong overall capability, a top choice for developers in China.
  • ChatGLM: outstanding Chinese dialogue performance, suitable for vertical customer-service and localized scenarios.
  • DeepSeek: excels at code generation and mathematical reasoning, suitable for intelligent development-assistance tools.
  • Gemma: a lightweight model from Google with a clean structure, easy to pick up and experiment with quickly.
  • Falcon: once a performance benchmark, now suited to fundamental research or comparative testing, though community activity has declined.
  • BLOOM: strong multilingual support but weaker inference performance, suitable for language-coverage research.
  • GPT-2: a classic early model, suitable only for teaching and verification purposes; not recommended for production deployment.

Although OpenLedger's model lineup does not include the latest high-performance MoE or multimodal models, the strategy is not outdated: it is a "practicality-first" configuration shaped by the real constraints of on-chain deployment (inference cost, RAG adaptation, LoRA compatibility, the EVM environment).

As a no-code toolchain, Model Factory builds a proof-of-attribution mechanism into every model, securing the rights of data contributors and model developers. Compared with traditional model-development tools, it offers low entry barriers, monetizability, and composability:

  • For developers: a complete path from model incubation to distribution and revenue;
  • For the platform: an ecosystem of model-asset circulation and composition;
  • For users: models and agents can be composed and invoked as simply as calling an API.


3.2 OpenLoRA: On-Chain Assetization of Fine-Tuned Models

LoRA (Low-Rank Adaptation) is a parameter-efficient fine-tuning method that learns new tasks by inserting "low-rank matrices" into a pre-trained large model without modifying the original model's parameters, significantly reducing training cost and storage requirements. Traditional large language models (such as LLaMA, GPT-3) typically carry tens of billions or even hundreds of billions of parameters, and using them for specific tasks (such as legal Q&A or medical consultation) requires fine-tuning. LoRA's core strategy is: "freeze the original large model's parameters and train only the newly inserted parameter matrices." Its parameter efficiency, fast training, and flexible deployment make it the mainstream fine-tuning method best suited to Web3 model deployment and compositional invocation.
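The parameter savings are easy to see numerically. Below is a NumPy sketch of the low-rank update W' = W + BA, with dimensions chosen only for illustration:

```python
# LoRA's core idea in NumPy: replace a dense weight update with a low-rank one.
# Dimensions are chosen for illustration; real layers are much larger.
import numpy as np

d, k, r = 4096, 4096, 8           # layer dimensions and LoRA rank
W = np.random.randn(d, k)         # frozen pretrained weight (never trained)
A = np.random.randn(r, k) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))              # trainable, zero-initialized so W' starts at W

W_eff = W + B @ A                 # effective weight used at inference

full_params = d * k               # ~16.8M parameters for a full update
lora_params = r * (d + k)         # ~65K parameters at rank 8
print(f"trainable fraction: {lora_params / full_params:.4%}")  # ~0.39%
```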

OpenLoRA is a lightweight inference framework built by OpenLedger, designed specifically for multi-model deployment and resource sharing. Its core goal is to solve common problems in today's AI model deployment, such as high cost, low reuse, and wasted GPU resources, and to advance the practical implementation of "Payable AI."

The OpenLoRA system architecture is organized around modular core components.
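Pending fuller documentation of those modules, the "thousands of models on shared hardware, loaded on demand" idea can be sketched as a bounded LRU cache over adapter weights; the cache capacity and loading stub below are assumptions, not OpenLoRA's published design:

```python
# Conceptual sketch: on-demand adapter loading with a bounded LRU cache,
# letting one shared base model serve many fine-tuned variants cheaply.
# Cache capacity and the loading stub are assumptions, not OpenLoRA's design.
from collections import OrderedDict

class AdapterCache:
    def __init__(self, capacity: int = 8):
        self.capacity = capacity
        self.cache: OrderedDict[str, bytes] = OrderedDict()

    def get(self, adapter_id: str) -> bytes:
        if adapter_id in self.cache:
            self.cache.move_to_end(adapter_id)   # mark as recently used
        else:
            self.cache[adapter_id] = self._load(adapter_id)
            if len(self.cache) > self.capacity:  # evict the coldest adapter
                self.cache.popitem(last=False)
        return self.cache[adapter_id]

    def _load(self, adapter_id: str) -> bytes:
        # Stub: a real deployment would fetch LoRA weights from storage here.
        return f"weights:{adapter_id}".encode()

cache = AdapterCache(capacity=2)
for adapter in ["legal", "medical", "finance", "legal"]:
    cache.get(adapter)  # loading "finance" evicts "legal", the least recently used
```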
