Self-Learning Agents: From Prompt Evolving to Experience Learning

Dec 8, 2025

Self-learning AI agents: Acontext vs DSPy


A self-evolving agent framework like DSPy works beautifully when you can define a clear eval for your entire product. With a stable metric, it can evolve prompts and steadily improve overall performance. But agent self-learning in real applications is not "global" at all: it has to happen per user and per task.

Users bring different goals: one uses Manus to build a website, another for trip planning, and another for code and architecture docs. These tasks don't share a single success definition. And without a unified eval, DSPy has nothing to optimize toward.

Some might imagine running a separate DSPy loop for each user. But as soon as you try, two structural problems appear.


  • The system prompt fragments into countless user-specific variants that are long, unstructured, and impossible to maintain.

  • Each user would need their own evaluation, but user tasks vary too widely to define and maintain evals at that scale.

The result is clear: prompt evolution cannot deliver accurate per-user self-learning. It's built for overall intelligence improvement, not task-driven adaptation.

Acontext: User-level Experience Learning

Acontext takes a different path. It never rewrites your prompts or asks you to design evals. It learns directly from real execution and real user confirmation in a Notion-style Skill Space:


  • No prompt changes: Your system prompt stays clean. The learned skill lives in the user's workspace, not in the prompt.

  • No eval design required: Acontext uses real user feedback as the signal: confirmation equals success; negative feedback equals failure; silence does not count.
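That feedback-to-signal mapping is simple enough to sketch in a few lines. This is a hypothetical illustration of the policy described above, not Acontext's actual API; the `Feedback` enum and `outcome` function are names invented for this example:

```python
from enum import Enum
from typing import Optional

class Feedback(Enum):
    CONFIRMED = "confirmed"   # user explicitly confirmed the result
    NEGATIVE = "negative"     # user pushed back or corrected the agent
    SILENCE = "silence"       # user gave no reaction at all

def outcome(feedback: Feedback) -> Optional[bool]:
    """Map raw user feedback to a learning signal.

    Confirmation counts as success, negative feedback as failure,
    and silence produces no signal at all rather than being
    treated as either outcome.
    """
    if feedback is Feedback.CONFIRMED:
        return True
    if feedback is Feedback.NEGATIVE:
        return False
    return None  # silence: the task is simply not scored
```

The key design point is the three-valued return: silence maps to `None`, so unscored tasks never pollute the success/failure statistics.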

Developers also get direct visibility into what's happening: per-user success rates, day-over-day changes, and task volume, so the learning process is transparent rather than a black box.
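Per-user success rates and task volume fall directly out of those success/failure signals. A minimal aggregation sketch, assuming a flat record format (the field layout here is an assumption for illustration, not Acontext's schema):

```python
from collections import defaultdict
from datetime import date

def summarize(records):
    """Aggregate (user_id, day, success) records into per-user stats.

    Each record is one completed task; `success` is the boolean
    signal derived from user feedback.
    """
    stats = defaultdict(lambda: {"tasks": 0, "successes": 0})
    for user_id, day, success in records:
        s = stats[user_id]
        s["tasks"] += 1
        s["successes"] += int(success)
    return {
        user: {**s, "success_rate": s["successes"] / s["tasks"]}
        for user, s in stats.items()
    }

records = [
    ("alice", date(2025, 12, 7), True),
    ("alice", date(2025, 12, 8), False),
    ("bob", date(2025, 12, 8), True),
]
summary = summarize(records)
# alice: 2 tasks, success rate 0.5; bob: 1 task, success rate 1.0
```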

Acontext is intentionally conservative about what to learn. Only complex, multi-step experiences become skills; trivial tasks are ignored. And when needed, you can require explicit user approval before a new skill is added: https://docs.acontext.io/learn/advance/wait-user
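That conservative policy can be expressed as a simple gate. The sketch below is hypothetical: the step threshold, the `Experience` fields, and the function name are assumptions for illustration, not Acontext internals:

```python
from dataclasses import dataclass

@dataclass
class Experience:
    steps: int            # how many tool calls / actions the task took
    user_confirmed: bool  # did the user explicitly confirm success?

# Hypothetical threshold: only multi-step work is worth distilling.
MIN_STEPS_FOR_SKILL = 3

def should_become_skill(exp: Experience,
                        require_approval: bool = False,
                        user_approved: bool = False) -> bool:
    """Gate which experiences get promoted to skills.

    Trivial (few-step) tasks are ignored, unconfirmed tasks are
    ignored, and when explicit approval is required, the user
    must have granted it before the skill is added.
    """
    if exp.steps < MIN_STEPS_FOR_SKILL:
        return False      # trivial task: nothing worth learning
    if not exp.user_confirmed:
        return False      # no success signal: don't learn from it
    if require_approval and not user_approved:
        return False      # wait for the user to approve the new skill
    return True
```

The `require_approval` flag mirrors the opt-in approval step linked above: learning can be paused until the user signs off on each new skill.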

Try Acontext

If you're building agents and want them to learn from real experiences, Acontext is ready for you. It's open-source, and we're improving it rapidly with the help of the community.

👉 GitHub: https://github.com/memodb-io/Acontext ⭐️ Give it a star, explore the examples, and try it in your agent stack.

🤟Discord: https://discord.acontext.io

If you try Acontext, please let us know how it works for you. We'd love to hear your feedback and see how we can make Acontext more helpful.