Discussion about this post

Pawel Jozefiak:
The 'LLM calling tools in a loop' definition is the one that actually sticks, and the memory-via-timestamped-markdown approach is genuinely underrated - I've run 1000+ agent sessions using almost exactly this pattern without fancy vector stores. The key discovery along the way: what goes in CLAUDE.md matters far more than how long it is.

Behavioral rules and structural context compound across sessions in ways that are hard to anticipate until you've run enough of them. If you're thinking about instruction architecture for agents like these, I wrote up what I learned: https://thoughts.jock.pl/p/how-i-structure-claude-md-after-1000-sessions
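The memory pattern the comment describes can be sketched in a few lines: each session appends a note as a markdown file named by its timestamp, and later sessions read the notes back in chronological order. This is a minimal illustration, not the commenter's actual implementation; the directory name and function names are assumptions.

```python
# Sketch of memory-via-timestamped-markdown: plain files instead of a
# vector store. Hypothetical layout; names are illustrative only.
from datetime import datetime, timezone
from pathlib import Path

MEMORY_DIR = Path("memory")  # assumed location for session notes


def save_note(text: str) -> Path:
    """Append a session note as a timestamped markdown file."""
    MEMORY_DIR.mkdir(exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H-%M-%S")
    path = MEMORY_DIR / f"{stamp}.md"
    path.write_text(f"# Session note ({stamp})\n\n{text}\n")
    return path


def load_notes() -> str:
    """Concatenate all notes oldest-first for the next session's context."""
    files = sorted(MEMORY_DIR.glob("*.md"))  # ISO timestamps sort lexically
    return "\n\n".join(f.read_text() for f in files)
```

Because the filenames are ISO-style timestamps, a plain lexical sort recovers chronological order, which is what makes the pattern work without any index or database.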
