Discussion about this post

Eugene Yaroslavtev:

Hey, nice write-up - the high-level architecture in particular seems like a good plan. One important issue I noticed in your prompt that you should know about:

```json
"requires_changes": { "type": "boolean", "description": "whether the plan needs modifications" },
"reflection": { "type": "string", "description": "explanation of what changes are needed or why no changes are needed" }
```

Since an LLM generates its response in key order, when you ask it to answer "yes/no" via `requires_changes` first and only produce `reflection` after, you're basically having it justify whatever it already chose - the reasoning behind `requires_changes` never gets written out before the decision. The correct order here is to put `reflection` first so the model can reason *before* deciding yes/no. Hope that helps!
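A minimal sketch of the reordered schema as a Python dict (the property names and descriptions are taken from the snippet above; the surrounding object wrapper and `required` list are assumptions about how the full schema looks):

```python
import json

# Hypothetical corrected schema: `reflection` comes before `requires_changes`,
# so the model writes its reasoning before committing to the yes/no answer.
plan_review_schema = {
    "type": "object",
    "properties": {
        "reflection": {
            "type": "string",
            "description": "explanation of what changes are needed or why no changes are needed",
        },
        "requires_changes": {
            "type": "boolean",
            "description": "whether the plan needs modifications",
        },
    },
    "required": ["reflection", "requires_changes"],
}

# Python dicts preserve insertion order, and json.dumps serializes keys in that
# order, so the prompt the model sees presents `reflection` first.
print(json.dumps(plan_review_schema, indent=2))
```

The only change from the original is the key order; the field definitions themselves are untouched.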

Andy:

hey, thank you for posting this

have a few questions:

- why not simply use message history as memory? the LLM would attend to it, and previous tokens would stay in the KV cache, minimizing the amount of info that needs to be recomputed

- reflection could be implemented as a separate agent or tool, allowing composability and dynamic routing, but you added it to the Agent. why this choice?
