Optimism co-founder discusses the future of OP Stack with Plasma Mode developers
DEVS ON DEVS: TDOT and BEN JONES in Conversation
This special dialogue brings together tdot (core protocol developer of Plasma Mode and developer of Redstone) and Ben Jones, co-founder of Optimism, the core driving force behind the OP Stack. Plasma Mode allows developers to build on the OP Stack without publishing data to L1, instead flexibly switching to off-chain data providers, thereby saving costs and improving scalability. They discussed the origins of the collaboration between Redstone and Optimism, the importance of reviving Plasma, the necessity of bringing experimental protocols into production, the future roadmap of Plasma Mode and the OP Stack, and their expectations for the development of full-chain gaming.
How to Improve OP Stack Using Plasma Mode
Ben: How did the process of improving the OP Stack get started?
tdot: I joined Lattice about a year ago, specifically to work on Plasma Mode. The goal was clear: we have many MUD applications that consume a lot of gas, and at the same time we're trying to put a large amount of data on-chain, so we needed a solution that supports those needs while being cost-effective. The Lattice team had already run some experiments on the OP Stack, such as prototyping on-chain worlds and deploying them there. We found that the OP Stack was already very usable.
So we asked ourselves, "How can we make it cheaper?" The basic assumption was: "We believe the OP Stack is the framework that aligns best with the Ethereum philosophy and is fully EVM-compatible." What runs on mainnet can also run on the OP Stack, which is the ideal situation. But we wanted it to be cheaper.
At that time, calldata was still the data availability (DA) layer for OP Stack chains, which was very expensive. So we clearly couldn't launch an L2 on calldata, because our full-chain games and MUD worlds require higher throughput. Therefore, we decided to start exploring alternative data availability (Alt DA) solutions. In fact, the initial OP Stack documentation already mentioned that Alt DA should be explored.
So we asked ourselves, "What if we start from off-chain DA?" We wanted the entire security model, and everything else, to rest on L1 Ethereum. So we avoided other Alt DA solutions and decided to store the data with a centralized DA storage provider, and then find an effective security model on L1.
This is why we want to reuse some old Plasma concepts and place them on top of rollups. There are some differences here. The biggest question is, how to implement off-chain DA and on-chain data challenges on the existing OP Stack? Our goal is to make as few changes to the OP Stack as possible, with no impact on the rollup path, because we do not want to affect the security of other rollup chains that use the OP Stack.
When you design a rollup, you don't think, "What happens if someone changes the batching process to store the data elsewhere?" Yet even with that change, the OP Stack remains very robust and works well out of the box. That was the first change we made.
After that, we needed to write contracts to create these challenges: DA challenges that enforce putting data on-chain. That was the second step, integrating the contracts into the pipeline. We had to build the whole integration into the derivation process, so that you can derive data from an off-chain DA source as well as from the L1 DA challenge contract, in case the data gets submitted on-chain while a challenge is being resolved.
This is the crux of the matter. It's complicated because we want to keep things elegant and robust. At the same time, it's a relatively simple concept. We are not trying to reinvent everything or change the entire OP Stack, but rather trying to keep things simple in a complex environment. So overall, it's been a very cool engineering journey.
Ben: I can talk about it from the perspective of OP. You mentioned some of the early work of Lattice. Coincidentally, at the same time, we at Optimism almost completely rewrote the entire OP Stack, and we refer to this release as Bedrock.
Basically, two years after building the rollup, we took a step back and reflected, saying: "Well, if we were to maximize all the experiences we've learned, what would that look like?" This evolved into what is ultimately known as the Bedrock codebase, which is the biggest upgrade we have made to the network.
At that time, we collaborated with you on a project called OPCraft, and I believe Biomes is its spiritual successor. This was the most enjoyable time we had playing on-chain. At the same time, we breathed a sigh of relief because others could also use OP Stack for development. I think another important turning point for scalability in the past few years is that many people can run the chain.
It's not just those who have developed large and complex codebases that can do this. When we started collaborating, seeing others take over this codebase and do some truly amazing things was a great affirmation. Then seeing this situation scale up to Plasma in practical applications is just so cool. I can even talk a bit about that history.
Before Optimism became Optimism, we were actually researching a technology called Plasma. The task we took on back then far exceeded what the scalability community was capable of at the time, and the early Plasma designs don't necessarily map directly onto today's Plasma Mode.
Today's Plasma is much simpler. We separate the proofs and challenges of state validity from the challenges over data. Ultimately, we recognized a few years ago that rollups are much simpler than Plasma. I think the community's conclusion at the time was "Plasma is dead". That's a meme from that period in the history of Ethereum scaling.
But we always believed that "Plasma is not dead, it's just that we can start with a simpler task." Now we use different terminology. For example, back then there were concepts like exits, and now you can look back and say, "Oh, that was a data availability challenge with some extra steps." So it's amazing to see not only that the OP Stack is being used by others, but that it has also evolved into something we initially attempted in a very chaotic and immature, abstract way. We have come full circle, and you have done a fantastic job abstracting around those ideas and making them work in a sensible way. That's really cool.
The most important thing is to get into production as soon as possible
tdot: Plasma Mode still has some challenges and unresolved issues that we're working hard to address. The key question is: how do we avoid spending ten years on this? You know what I mean? We need to reach a stage where we can deliver results as soon as possible.
This is our idea. We already have many applications based on MUD that want to go live on the mainnet immediately. We need to prepare a mainnet for these games as soon as possible. People are already waiting and ready. You need a fast and functional chain to run all these applications, so that they can develop in parallel and improve while we solve the issues. It takes a long time from R&D to achieving production stability.
To launch something on the mainnet, making it permissionless, robust, and secure, requires a significant amount of time. It is already amazing to see the entire process of achieving this goal. That's why we need to maintain a high level of agility, as there is too much going on. The entire ecosystem is developing very quickly. I believe everyone is delivering a lot of innovation. That's why you must keep up, but you also cannot compromise on security and performance; otherwise, the system won't function.
Ben: Or you could call it technical debt. The principle of minimal change you mentioned is one of the core ideas behind our Bedrock rewrite. I talked about the end-to-end rewrite, but more importantly, we cut about 50,000 lines of code, which is powerful in itself. Because you're right, these things are genuinely difficult.
Every line of code added takes you further away from the production environment, making it harder to undergo practical testing and introducing more opportunities for errors. Therefore, we are very grateful for all your efforts in pushing this process forward, especially for your contributions to the new operational model of the OP Stack.
tdot: The OP Stack has indeed created a way for you to make quick progress on such matters. Coordinating everyone is very difficult because we are obviously two different companies. At Lattice, we are building a game, a game engine, and a chain.
You are building hundreds and thousands of things and regularly delivering all these products. From a coordination perspective, this is indeed very challenging.
Ben: Yes, there is indeed still a long way to go. But that is the core appeal of modularity. For me, from the perspective of the OP Stack, this is one of the most exciting things, not to mention the amazing games and virtual worlds currently being built on Redstone. Purely from the perspective of the OP Stack, this is a very powerful example that proves many excellent core developers have joined in and improved this stack, which is truly remarkable.
This is the first time you can significantly change the properties of the system by flipping a single boolean flag. As you said, there is still a long way to go before this works completely. But even getting close to doing it effectively requires modular support, right? For us, it was a relief to see you achieve this without, for example, needing to rewrite L2 Geth. To me, that proves modularization is working.
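The "single boolean" Ben mentions is, in spirit, a rollup configuration flag that swaps the whole data-availability pipeline. A hypothetical sketch of how a modular stack can dispatch on such a flag (the names `RollupConfig`, `use_plasma`, and the backend strings are all illustrative, not the actual OP Stack configuration):

```python
from dataclasses import dataclass

@dataclass
class RollupConfig:
    # Hypothetical flag, in the spirit of the OP Stack's Plasma Mode toggle.
    use_plasma: bool = False

def select_da_source(cfg: RollupConfig) -> str:
    # Because DA access sits behind one interface, flipping a single
    # boolean selects an entirely different data-availability pipeline
    # without touching the execution layer.
    if cfg.use_plasma:
        return "offchain-da-with-l1-challenges"
    return "l1-calldata-or-blobs"
```

The design point is that everything downstream of `select_da_source` is unchanged, which is why no rewrite of the execution client is needed.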
tdot: The situation has improved now. From this example, you have turned everything into independent small modules that can be adjusted and properties changed. So I am very much looking forward to seeing what new features will be integrated. I remember we were once concerned that we had a fork containing all the changes to the OP Stack, which needed to be merged into the main branch. At that time, we thought, "Oh my god, it would be crazy to review everything."
We had to break it down into smaller parts, but the whole process went very smoothly. The atmosphere of collaboration with the team was very good, so the review process was also pleasant. It felt very natural. Moreover, I think the process went very quickly in reviewing and resolving some potential issues. Everything went unexpectedly smoothly.
Ben: This is really great. One of our focuses this year is to create a contribution path for the OP Stack. So I really appreciate your participation in testing and pushing these processes forward. I'm glad these processes haven't been overwhelming, and we've achieved some results. Speaking of this, I'm curious, how do you see this work developing next? What are you most looking forward to developing next?
tdot: There are many different directions for this work. The main focus is integration with the fault-proof mechanism. We're taking a gradual approach to decentralizing the whole stack and making it more permissionless, with the ultimate goal of features like permissionless operation and forced exits.
We have that end goal and are working toward it gradually while maintaining security. One challenge is that sometimes not launching on mainnet is easier, because it avoids hard forks. You might think, "Oh, I'll just wait until everything is completely ready to launch, so there won't be any hard forks and no technical debt." But if you want to get to mainnet quickly, you have to deal with these complex upgrades and release frequently. Doing that while maintaining high availability is always a challenge.
I believe that once the fault-proof mechanism and all these components are ready, there will be many upgrades on the Plasma Mode side. I also think there's still room to optimize the batching of commitment submissions. Right now we keep it very simple: one commitment per transaction, where the commitment is just the hash of the input data stored off-chain.
We're keeping it as simple as possible for now, so that review can be straightforward and quick, and there's no major divergence from the rest of the OP Stack. But there are optimizations that could make it cheaper, such as batching the commitments, submitting them as blobs, or other approaches. So we will definitely look into that to reduce L1 costs.
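The current scheme tdot describes (one commitment per L1 transaction, with the commitment being just the hash of the off-chain input data) and the batching optimization he mentions can be sketched like this. Again sha256 stands in for keccak256, and the batching scheme shown, hashing a concatenation of commitments, is one possible design for illustration, not the actual one under consideration.

```python
import hashlib

def commit(data: bytes) -> bytes:
    # Current scheme: the commitment is simply the hash of the
    # input data stored off-chain (keccak256 in practice).
    return hashlib.sha256(data).digest()

def batch_commit(commitments: list[bytes]) -> bytes:
    # One possible optimization: post a single hash covering many
    # commitments, amortizing L1 costs across a whole batch.
    return hashlib.sha256(b"".join(commitments)).digest()

frames = [b"frame-0", b"frame-1", b"frame-2"]
per_tx = [commit(f) for f in frames]  # today: three separate L1 transactions
single = batch_commit(per_tx)         # batched: one L1 transaction
assert len(single) == 32              # one 32-byte hash covers all three
```

The trade-off is the one tdot names: per-transaction commitments keep review simple and stay close to the stock OP Stack, while batching (or moving commitments into blobs) trades that simplicity for lower L1 cost.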
This is something we are very excited about. Of course, we are also looking forward to all the upcoming interoperability-related content and the ability to interact across all chains. Figuring this out will be a huge advancement for users.
Many of these tasks will ultimately need to be implemented by you. But we want to clarify what they look like under Plasma Mode, with its different security assumptions.
Ben: Speaking of which, that will be another test for OP Stack modularization. As for the fault proofs you mentioned, we are very much looking forward to seeing them applied to Plasma Mode.