Trustless by Design: How to Build AI Agents That Can’t Be Rugged
with Don Johnson (Virtuals Protocol), Anna Zhu (AITV), and Marko Stokić (Oasis Protocol), moderated by Andrea Prazakova (BrainGym)
What does it really mean to build a trustless AI agent? Not in theory, but in actual deployed systems that handle money, data, and identity.
If you're building agents that operate onchain, handle real-world value, or create content at scale, here's what you need to know.
Oz City kicks off in 5 days — and you can still join us 🇫🇷🌴
60 builders, founders & researchers will gather on the Riviera to prototype agents, test ideas, and explore new forms of coordination.
What’s happening:
🌀 NEAR builder track (intents, chain abstraction, use cases)
👾 24h hackathon (2 tracks, bounties, EthCC tickets)
🧠 Talks, workshops, founder circles & night sessions
🧘‍♀️ Daily yoga, trail hikes, and spontaneous creativity
📍June 21–29, Valbonne (near Cannes)
Even if you can’t join for the full week — drop in for a day or two.
You’ll be welcome.
Join the biggest side event on AI Agents & DeFAI during EthCC in Cannes 🇫🇷
Speakers from: NEAR, Filecoin Foundation, Cytonic, CoinFund, Eliza Labs, Recall, Venice AI, Warden Protocol, Giza, AITV, Fair Math, Outlier Ventures & more.
What to expect?
– Real-world agent deployments
– Scaling deAI ecosystems
– Agent identity & security
– DePIN for AI infrastructure
500+ founders, hackers, investors, protocol whisperers, agent wranglers, and LLM tweakers.
📍 June 29, Cannes 🇫🇷
Agents Unleashed is coming to Cannes 🇫🇷
Olas brings you Cannes Agent Festival — the top AI Agent event of EthCC:
🎞️ Agent Premieres
👥 Top builders
🥂 Food, drinks, red carpet photos!
🗓️ June 30
🎟️ Filling Fast!
🔎 What "Trustless" Means in Practice
Forget the buzzword. In the context of AI agents, “trustless” means no one can secretly alter how the agent behaves: not the user, not the developer, not even the platform it runs on.
This requires agents to be verifiable at every level:
Trusted Execution Environments (TEEs) isolate agent execution and prevent tampering
Remote attestation cryptographically proves which code is running inside the enclave
Decentralized key management ensures that no human ever touches the agent’s private keys
Transparent internal state gives users visibility into what the agent knows, remembers, and optimizes for
This shift reframes trust from “do I believe this works?” to “can I verify exactly what this agent is doing and why?”
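To make that concrete, here is a minimal sketch of what an attestation check can look like from the verifier’s side. The quote structure, field names, and measurement value are illustrative assumptions, not any specific TEE vendor’s format; a real check would also validate a hardware-rooted signature over the quote.

```python
import hashlib
import hmac

# Illustrative only: in practice the expected measurement comes from a
# reproducible build of the agent, and the quote is signed by the hardware.
EXPECTED_MEASUREMENT = hashlib.sha256(b"agent-binary-v1.2.0").hexdigest()

def verify_attestation(quote: dict) -> bool:
    """Accept the agent only if the enclave reports exactly the code we expect."""
    reported = quote.get("code_measurement", "")
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(reported, EXPECTED_MEASUREMENT)

quote = {"code_measurement": EXPECTED_MEASUREMENT, "enclave_id": "agent-01"}
assert verify_attestation(quote)  # only this exact build is trusted
```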
🛠️ Infrastructure for Verifiable Autonomy
The panelists described complementary approaches:
Designing agents as programmable characters with visible internal state and user feedback loops, making them more legible and socially verifiable
Publishing creative agents’ source material, prompt chains, and reasoning flows, making content generation traceable and credible
Providing confidential compute and decentralized custody, allowing agents to act autonomously without exposing sensitive data
Together, these approaches show how to compose agents that:
Can’t be silently hijacked or modified
Log their behavior in ways others can audit
Respect user data boundaries while still acting independently
It’s not just about security — it’s about making autonomy observable, explainable, and tamper-resistant.
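As a sketch of the “log their behavior in ways others can audit” idea, here is a minimal hash-chained log: each entry commits to the one before it, so any silent edit breaks the chain. The entry fields are hypothetical; in an onchain deployment, the chain head would be anchored in a transaction.

```python
import hashlib
import json

def append_entry(log: list, action: dict) -> None:
    """Append a tamper-evident entry that commits to the previous record."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"action": action, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify_log(log: list) -> bool:
    """Recompute every hash; any silent modification is detected."""
    prev = "0" * 64
    for entry in log:
        body = {"action": entry["action"], "prev_hash": entry["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

log: list = []
append_entry(log, {"type": "payment", "amount": 5})
append_entry(log, {"type": "post", "content_hash": "f00d"})
assert verify_log(log)
```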
🌐 From Privacy and UX to Adoption
When agents touch real value (financial transactions, social interactions, personal information), users need confidence in how they work. Without trustless design:
Users hesitate to delegate sensitive tasks
Developers can’t safely compose agents from third-party systems
Reputation systems break down without ground-truth signals
The panel pointed to a better alternative:
Onchain audit logs that track agent decisions
Agent-specific reputations based on verifiable history, not vibes
Programmable transparency that lets users verify what matters without exposing everything
The result is a new kind of UX — one where users don’t just click and hope, they engage with agents that explain themselves and earn trust over time.
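Here is a toy illustration of reputation “based on verifiable history, not vibes”: score an agent only on entries whose proofs actually verify. The fields and the scoring rule are assumptions for illustration, not a standard from the panel.

```python
# Toy reputation score: unverifiable entries simply don't count.
def reputation(entries: list[dict]) -> float:
    """Score an agent only on history whose proofs checked out."""
    verified = [e for e in entries if e.get("proof_valid")]
    if not verified:
        return 0.0  # no verifiable history, no reputation
    successes = sum(1 for e in verified if e["outcome"] == "success")
    return successes / len(verified)

history = [
    {"outcome": "success", "proof_valid": True},
    {"outcome": "success", "proof_valid": True},
    {"outcome": "failure", "proof_valid": True},
    {"outcome": "success", "proof_valid": False},  # unverifiable: ignored
]
print(f"reputation: {reputation(history):.2f}")  # 0.67
```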
Practical Takeaways for Builders
If you're designing or deploying autonomous agents, here’s what to keep in mind:
Use TEEs and remote attestation to create a verifiable runtime
Let users inspect agent memory, logic, or intent (even at a high level)
Store key decisions, triggers, or payments onchain for future reference
Treat identity as a layered system, with history, purpose, and proof baked in (see the sketch after this list)
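One way to picture identity as a layered system is a record that binds a stable handle to a declared purpose, an attestation of the running code, and a verifiable history. All field names below are hypothetical, purely for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical layered identity record: identity = who + why + proof,
# not just a name. Field names are illustrative, not a standard.
@dataclass
class AgentIdentity:
    agent_id: str                      # stable handle, e.g. an onchain address
    purpose: str                       # declared scope of what the agent may do
    attestation: str                   # measurement hash of the verified build
    history: list[str] = field(default_factory=list)  # hashes of audit entries

ident = AgentIdentity(
    agent_id="0xagent...",
    purpose="rebalance treasury within preset limits",
    attestation="3f2a...",  # ties the identity to a specific verified build
)
```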
Agents don’t need to be black boxes. They can be accountable systems.
Don’t ask users to trust your agent. Give them tools to verify it.
📹 Watch the full panel
That wraps it up for today! But before you go...
Check out our Twitter for more details. Follow us to stay updated on all the latest news!
Best,
Epic AI team.