Insights from Cartesi on Efficient Computing and AI Ethics
Cartesi’s open-source philosophy, strategies for making AI more energy-efficient, and how developers are using off-chain computation to run more complex AI models.
Hey Epic Builders! In our latest X poll, you voted on who we should interview next, and the clear winner was Cartesi!
So, we sat down with João Pedro Garcia, Developer Advocate at Cartesi, to explore some of the most exciting topics in AI development today, including energy efficiency, ethical considerations, and real-time data processing.
João discussed Cartesi’s open-source philosophy, strategies for making AI more energy-efficient, and how developers are using off-chain computation to run more complex AI models.
🤖 dAGI hack: Web3 x AI online hackathon by Epic Web3
Showcase your skills and win $50K! Join dAGI Hack, a 5-week event, and collaborate with over 500 builders worldwide.
Access top-tier mentors
Work on exciting challenges like AI-powered governance, Web3 gaming, and decentralized marketplaces
Compete for the general prize pool & special bounties
📍 Online, so you can join from anywhere!
— The development of autonomous AI systems is advancing rapidly, raising questions about ethics and accountability. What ethical considerations should developers prioritize when building these systems, and how can these be effectively implemented in decentralized environments?
— I think such questions about accountability and transparency have been around for a good while. Ever since we first imagined robots and autonomous agents, this has been a subject of debate, highlighted even in fiction, as in Isaac Asimov's "Three Laws of Robotics". Today, transparency is especially relevant: ensuring that AI decision-making is traceable and auditable.
Privacy and security are also widely discussed and equally vital, particularly in decentralized systems where sensitive data must be protected without sacrificing functionality. Blockchain can play a critical role in making AI actions auditable, reproducible, and, when needed, transparent. This helps track decisions and ensure that no artificial bias is introduced into models.
Open-source allows developers to innovate together, ensuring that AI advancements benefit the entire community rather than being confined to a select few.
— Cartesi supports an open-source approach to AI development. How does this philosophy impact the projects built on your platform, and what are the primary benefits of maintaining an open-source model in today’s AI landscape?
— An open-source ethos that extends to AI creates an environment where transparency, collaboration, and innovation are central. The ability to access, review, and contribute to the core technologies gives users confidence. Security by obscurity is not security, and validation from the community is extremely beneficial. It fosters trust among users, as they can verify the integrity of the algorithms and models being deployed.
There are many benefits: democratized access to cutting-edge tools and frameworks; mitigation of ethical concerns around AI, as open source encourages collective scrutiny of potential biases; and collaboration that brings greater diversity of thought, which is essential in solving complex problems. Open source allows developers to innovate together, ensuring that AI advancements benefit the entire community rather than being confined to a select few.
— As AI models face increased scrutiny over energy consumption, what strategies have proven effective for improving energy efficiency, especially within the Web3 context?
— Several strategies have proven effective, at least within the Web3 context. I think layer-2 scaling solutions, rollups specifically, are probably the biggest gain here, as they enable most computations to be handled off-chain, significantly reducing on-chain energy consumption while maintaining decentralized integrity.
App-specific rollups are even better. Not only do they save blockspace, but with an optimistic approach you also save a great deal of computational effort, unlike zk-rollups, where you would need far more resources just to run a simpler version of the program.
Of course, optimizing the model itself with pruning, quantization, and knowledge distillation helps, and for some of this it's important to have an execution environment that allows more freedom and flexibility, which a Linux VM provides.
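To make the quantization idea concrete, here is a minimal post-training int8 sketch for a single weight matrix. This is an illustration of the general technique, not any specific framework's implementation; real toolchains add calibration data and per-channel scales.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus one symmetric scale factor."""
    scale = np.abs(weights).max() / 127.0  # cover the range [-127, 127]
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 encoding."""
    return q.astype(np.float32) * scale

w = np.array([[0.5, -1.2], [0.03, 0.9]], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# int8 storage is 4x smaller than float32, while the per-weight
# reconstruction error stays below one quantization step:
print(np.max(np.abs(w - w_hat)))
```

The energy win comes from moving less memory and using cheaper integer arithmetic; the trade-off is the small reconstruction error printed above.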
— Scalability is a growing challenge as AI models become more complex. How is Cartesi’s infrastructure equipped to meet these demands in decentralized applications? Can you share specific challenges you’ve overcome in this area?
— The Cartesi Rollups solution was designed with great synergy with AI from the start. By using the flexibility of a Linux-based execution environment, we give developers a familiar platform capable of complex computations, where they can use the whole stack they are used to. The app-specific nature of our rollup framework helps with scalability by offering a dedicated CPU to each dApp. Recently, we've been exploring integrations with data availability layers, enabling more complex inputs and expanding the potential for solutions.
One of the most significant challenges we recently addressed came up during a hackathon, where a contributor managed to delegate CPU calls from the virtual machine to the host machine while preserving the deterministic nature of our execution environment. This allowed some of our large language model prototypes to run dozens of times faster.
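The interview doesn't detail how determinism was preserved, but one common pattern for keeping a delegated call reproducible is record/replay: the first run records the host's results, and any later run replays the recorded log so the outcome is bit-identical regardless of the host. The sketch below is hypothetical, not the hackathon implementation.

```python
class DeterministicDelegate:
    """Record host-call results on the first run; replay them afterwards."""

    def __init__(self, host_fn, log=None):
        self.host_fn = host_fn      # fast host-side implementation
        self.log = list(log or [])  # recorded results, replayed in order
        self.cursor = 0

    def call(self, *args):
        if self.cursor < len(self.log):
            result = self.log[self.cursor]  # replay: reuse recorded result
        else:
            result = self.host_fn(*args)    # record: run on the host once
            self.log.append(result)
        self.cursor += 1
        return result

# First run records host results...
d1 = DeterministicDelegate(lambda x: x * x)
outputs = [d1.call(i) for i in range(3)]
# ...and a replay reproduces them exactly, even with a different host:
d2 = DeterministicDelegate(lambda x: 0, log=d1.log)
replayed = [d2.call(i) for i in range(3)]
print(outputs == replayed)  # True
```

The point is that the recorded log, not the host's hardware or speed, defines the execution, so verifiers can re-run it deterministically.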
— Data privacy remains a major concern in AI development. What best practices should developers follow to safeguard user data while ensuring the efficiency of AI models, particularly in decentralized systems?
— Data anonymization and minimization are always possibilities, but going deeper, the best approach really depends on the specific use case. There’s a lot of hype around various privacy-preserving techniques, but it’s important for developers to assess which methods align best with the requirements of their solution.
There's no one-size-fits-all solution, and understanding the trade-offs between privacy, performance, and scalability is key. Achieving an ideal balance is challenging. It ultimately falls on developers to study the options; only then can they determine how to tackle each kind of problem. The beauty of a flexible protocol like Cartesi is that it empowers developers to integrate the tools necessary to achieve their goals.
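As a small illustration of the anonymization and minimization mentioned above, here is a sketch that pseudonymizes an identifier with a salted hash and drops fields the model doesn't need. The field names and salt handling are illustrative assumptions, not taken from any particular application.

```python
import hashlib

SALT = b"rotate-me-per-deployment"  # assumption: salt managed out of band
NEEDED = {"age", "usage_kwh"}       # minimization: keep only model inputs

def minimize(record: dict) -> dict:
    """Drop unneeded fields and replace the raw id with a pseudonym."""
    out = {k: v for k, v in record.items() if k in NEEDED}
    digest = hashlib.sha256(SALT + record["user_id"].encode()).hexdigest()
    out["user_id"] = digest[:16]    # stable pseudonym, not the raw identity
    return out

raw = {"user_id": "alice@example.com", "age": 30,
       "usage_kwh": 12.5, "address": "221B Baker St"}
print(minimize(raw))  # address is gone; user_id is pseudonymized
```

The pseudonym stays stable across records (so the model can still link a user's data) without exposing the raw identifier, which is exactly the privacy/functionality trade-off discussed above.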
— Cartesi’s off-chain computation capabilities are highly valued in the dApp space. In what ways are developers using this to enhance AI models, especially for large datasets or complex algorithms? Any successful implementations you can highlight?
— The aforementioned implementation of using host machine resources for the AI model is probably the most interesting breakthrough so far. It is only because the computation happens off-chain, on a machine with access to so many different resources, that we were able to maintain determinism while doing it. For large datasets or high volumes of data, we are working on integrations with DA layers to make this process much more feasible, allowing applications that require large data transfers, such as AI algorithms, to come to life within the Cartesi infrastructure.
— Real-time data processing is essential for many AI applications. How does Cartesi enable this within decentralized environments, and what impact does it have on the performance of AI-driven dApps?
— Many things come to mind, but one I would like to highlight is Trust & Teach AI, a project made by a grantee called Kiril. It runs reinforcement learning from human feedback on-chain, leveraging on-chain trust, scalability, transparency, and reputation.
The LLM, a llama2.c build with the stories15M model, is executed for each transaction. After the LLM finishes running, users can validate the result, and with their inputs the model can be continuously improved. It was a great proof of concept.
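The per-transaction flow described above can be sketched as a pure handler: one on-chain input in, one model run, one result out. This assumes hex-encoded payloads as in the Cartesi rollups HTTP API; `run_llm` is a stand-in for invoking the llama2.c binary, not the project's actual code.

```python
def run_llm(prompt: str) -> str:
    # placeholder for running the stories15M model inside the machine
    return f"story about {prompt}"

def handle_advance(payload_hex: str) -> str:
    """Decode one input, run the model, return a hex notice payload."""
    prompt = bytes.fromhex(payload_hex.removeprefix("0x")).decode()
    completion = run_llm(prompt)
    return "0x" + completion.encode().hex()

# Each on-chain input becomes one reproducible inference, whose result
# users can then inspect and validate:
notice = handle_advance("0x" + b"dragons".hex())
print(bytes.fromhex(notice[2:]).decode())
```

Because the handler is a deterministic function of the input payload, anyone can re-run it and check the emitted notice, which is what makes the human-feedback validation step meaningful.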
— Gaming remains a dynamic sector within the dApp ecosystem. How is Cartesi facilitating AI integration into gaming applications, and could you share some examples of innovative projects that have utilized your platform?
— An extremely interesting solution that comes to mind when we talk about gaming was a proof of concept of a chess game that could run on the Cartesi Machine.
Ultrachess is a dApp that allows players to engage in on-chain chess matches, but the unique thing is that users have the ability to deploy AI-powered chess engines as NFTs, which can autonomously compete on their behalf, potentially earning value.
You can have player vs player, player vs AI, and even AI vs AI.
— Looking forward, which emerging industries do you anticipate will be most impacted by AI in the next few years? How can platforms like Cartesi contribute to and drive this transformation?
— Personally, I'm very interested in the potential of AI paired with IoT. I love to think of smart homes automatically mapping energy consumption with a clean-energy provider, for example, and creating power purchase agreements.
Cartesi can help with decentralization and verifiable computation. In fact, I’m currently exploring how all of these technologies can come together as part of my master’s research. However, as I mentioned earlier in my response about best practices, finding the right solutions requires a lot of study and exploration, and I’m still diving into that.
That wraps it up for today! 👋 But before you go...
Check out our LinkedIn or Twitter pages for more details. Follow us to stay updated on all the latest news!
Best,
Epic AI team.