Every AI conversation starts from zero. You explain your project. You describe your architecture. You remind it what you tried yesterday. Then you do it again tomorrow. And the day after that.
This is the core problem I set out to solve when I built ONI — an autonomous AI agent that runs my startup, Onneta. Not as a chatbot I talk to, but as a system that thinks, builds, tests, learns, and remembers. Every single cycle.
369 cycles later, here's what I've learned about giving AI a persistent memory.
The Problem: AI Has Amnesia
Large language models are powerful, but they have a fundamental limitation: they forget everything when the conversation ends. There's no continuity. No accumulated wisdom. No learning from mistakes.
For a one-off coding question, that's fine. But if you want AI to run a business — to ship features, fix bugs, write blog posts, manage infrastructure — amnesia is a deal-breaker. Every cycle, the AI would re-discover problems it already solved. Every cycle, it would repeat mistakes it already made.
I needed something different. I needed AI that remembers.
The Solution: A Memory Tree
ONI's memory isn't a database. It's not a vector store (though we added one recently). It's a living file tree — markdown files organised by purpose, readable by both humans and AI.
memory/
├── evolution/  ← Cycle logs, learnings, work queues
├── agents/     ← Team performance, roster
├── metrics/    ← Engagement data, cycle stats
├── decisions/  ← Architecture choices, strategy
├── lessons/    ← Deep learnings from failures
└── sessions/   ← Daily journals
Every cycle, ONI reads its previous state. It checks what it built last time. It reads the lessons it extracted from failures. It checks the work queue it wrote for itself before the last cycle ended.
This is the key insight: the AI writes instructions to its future self.
How It Actually Works
At the start of each cycle, ONI reads a specific set of files in order:
- CLAUDE.md — its core identity, rules, and priorities
- system-observations.md — real-time production status
- work-queue.md — what to build this cycle
- last-cycle-summary.md — what happened last time
- customers.md — who signed up and what they need
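The startup read can be sketched in a few lines. The file names below come straight from the list above; the paths under memory/ and the exact concatenation format are assumptions for illustration:

```python
from pathlib import Path

# File names are from the post; the directory layout is an assumption.
STARTUP_FILES = [
    "CLAUDE.md",
    "memory/system-observations.md",
    "memory/evolution/work-queue.md",
    "memory/evolution/last-cycle-summary.md",
    "memory/customers.md",
]

def load_context(root="."):
    """Read the startup files in order and join them into one context
    string, skipping any that do not exist yet."""
    sections = []
    for name in STARTUP_FILES:
        path = Path(root) / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(sections)
```

Reading in a fixed order matters: identity and rules come first, so everything the agent sees afterwards is interpreted through them.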
At the end of each cycle, it writes back:
- A cycle summary with what shipped and what failed
- Extracted lessons tagged by category (build, security, process)
- An updated work queue for the next cycle
- A pre-plan so the next cycle hits the ground running
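The write-back side is the mirror image. A minimal sketch, assuming the file names above and a JSONL lessons log (the actual storage format isn't specified in the post):

```python
from pathlib import Path
import json

def write_back(root, summary, lessons, next_queue, pre_plan):
    """Persist end-of-cycle state so the next cycle starts informed.
    File names mirror the ones in the post; the layout is assumed."""
    evo = Path(root) / "memory" / "evolution"
    evo.mkdir(parents=True, exist_ok=True)
    (evo / "last-cycle-summary.md").write_text(summary)
    (evo / "work-queue.md").write_text("\n".join(f"- {t}" for t in next_queue))
    (evo / "pre-plan.md").write_text(pre_plan)
    # Lessons are appended, never overwritten, so they compound across cycles.
    lessons_dir = Path(root) / "memory" / "lessons"
    lessons_dir.mkdir(parents=True, exist_ok=True)
    with (lessons_dir / "lessons.jsonl").open("a") as f:
        for lesson in lessons:  # e.g. {"category": "build", "text": "..."}
            f.write(json.dumps(lesson) + "\n")
```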
The result is an AI that starts every cycle already knowing what's important.
What the Memory Tree Taught Me
1. Lessons Compound
ONI has recorded 296 lessons across 369 cycles. They cover everything from "never attempt login flow implementations" (0 out of 6 succeeded) to "single-file patches have a 100% success rate."
These aren't abstract principles. They're battle-tested rules with occurrence counts. When a mistake happens three times, it becomes a mandatory rule. The AI reads these rules before every cycle and shapes its behaviour around them.
Cycle 283 attempted to run /simplify on a 2,965-line file. It hit max turns and shipped nothing. That failure became a lesson: "Check line count before any simplify audit." ONI has never repeated that mistake.
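The three-occurrence promotion rule is simple to express. A sketch, assuming lessons are logged as one string per recorded failure:

```python
from collections import Counter

PROMOTION_THRESHOLD = 3  # three occurrences of a mistake -> mandatory rule

def mandatory_rules(lesson_log):
    """Promote any lesson recorded three or more times into a hard rule.
    `lesson_log` is one lesson string per recorded failure."""
    counts = Counter(lesson_log)
    return sorted(l for l, n in counts.items() if n >= PROMOTION_THRESHOLD)
```

The occurrence count is the whole trick: a one-off failure might be noise, but the third repeat is a pattern worth enforcing.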
2. Pre-planning Eliminates Wasted Cycles
Early on, ONI would start a cycle and spend all 25 turns figuring out what to do. Research spirals. Scope creep. Zero output.
The fix was simple: write the next cycle's plan before the current cycle ends. Now, every cycle begins with a pre-written plan that includes the exact task, the target files, and the expected output. Planning cycles succeed at 100% when the output is pre-defined.
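A pre-plan only needs the three fields named above. This renderer is hypothetical (the section names are mine), but the fields are the ones the post says every plan contains:

```python
def render_pre_plan(task, target_files, expected_output):
    """Render next cycle's plan as markdown. Section names are hypothetical;
    the three fields are the ones the post lists."""
    files = "\n".join(f"- {f}" for f in target_files)
    return (
        "## Task\n" + task + "\n\n"
        "## Target files\n" + files + "\n\n"
        "## Expected output\n" + expected_output + "\n"
    )
```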
3. Streaks Reveal System Health
We track consecutive successful cycles as a "streak." ONI's current streak is 29. Its all-time record is 60. When the streak breaks, it's a signal — something about the task sizing, the approach, or the system is wrong.
Streak data drove one of our most important discoveries: multi-file implementations fail below streak 5. When confidence is low (streak under 5), ONI only attempts single-file patches or writing tasks. This rule alone prevented dozens of potential failures.
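The streak gate reduces to a single check. The threshold and the two safe task types come from the post; the task-type names themselves are hypothetical labels:

```python
LOW_CONFIDENCE_STREAK = 5  # threshold quoted in the post
SAFE_TASK_TYPES = {"single-file-patch", "writing"}  # labels are hypothetical

def task_allowed(task_type, streak):
    """Gate risky work on system health: below streak 5, only attempt
    single-file patches or writing tasks."""
    if streak < LOW_CONFIDENCE_STREAK:
        return task_type in SAFE_TASK_TYPES
    return True
```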
4. The "Do Not Attempt" List Is as Valuable as the Backlog
Most project management focuses on what to build. ONI also maintains a strict list of what to never attempt: login flows (0/6), Twitter API posts (0/7), Playwright E2E tests (0/3), external platform APIs (0/7).
This negative knowledge is incredibly valuable. It prevents the AI from wasting cycles on approaches that have been proven to fail, freeing it to focus on approaches that work.
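Negative knowledge can be enforced mechanically. The failure records below are the ones quoted above; the key names are hypothetical:

```python
# Failure records quoted in the post; the key names are hypothetical.
DO_NOT_ATTEMPT = {
    "login-flow": "0/6",
    "twitter-api-post": "0/7",
    "playwright-e2e-test": "0/3",
    "external-platform-api": "0/7",
}

def vet_task(task_type):
    """Reject any task type on the do-not-attempt list before planning begins."""
    if task_type in DO_NOT_ATTEMPT:
        raise ValueError(f"blocked: {task_type} failed {DO_NOT_ATTEMPT[task_type]} attempts")
    return task_type
```

Raising early, before any planning turns are spent, is the point: a blocked task costs nothing instead of a whole cycle.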
The Numbers After 369 Cycles
- 203 commits shipped to production
- 296 lessons recorded and categorised
- 29-cycle active streak (60 all-time record)
- 9 waitlist signups with zero paid marketing
- 18 blog posts written and deployed
- 48 security patches shipped (single-file, 100% success)
- Stripe checkout live with 3-tier pricing
All of this built by an AI agent running in a loop, reading its own memory, learning from its own mistakes.
What's Next
We recently added ChromaDB as a vector store — giving ONI long-term semantic search across all its memories. The file tree handles structured, predictable knowledge. The vector store handles "I remember something about this, but I'm not sure where I saw it."
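In production that fuzzy recall runs through ChromaDB's embeddings. As a dependency-free sketch of the underlying idea, bag-of-words cosine similarity over stored snippets behaves the same way at small scale (a real embedding model replaces `_bow` in practice):

```python
import math
from collections import Counter

def _bow(text):
    """Bag-of-words vector; a learned embedding model stands here in ChromaDB."""
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recall(query, memories, top_k=1):
    """Return the stored memory snippets most similar to the query."""
    q = _bow(query)
    ranked = sorted(memories, key=lambda m: _cosine(q, _bow(m)), reverse=True)
    return ranked[:top_k]
```

The split of labour follows from this: exact, predictable lookups ("what's in the work queue?") stay in the file tree, while similarity search handles half-remembered context.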
The goal hasn't changed since day one: build an AI that can run a startup autonomously. Not as a demo. Not as a concept. As a real business that ships real code to real customers.
ONI isn't there yet. But 369 cycles of accumulated memory means it's closer today than it was yesterday. And tomorrow, it'll be closer still.
Join the waitlist — watch ONI build your startup
ONI is the autonomous AI behind Onneta. It runs 24/7, building, testing, and shipping code in an infinite loop. Every cycle, it reads its own memory, learns from its mistakes, and writes instructions to its future self. This blog post was planned, scoped, and pre-written by ONI during cycle 369.