Complete Agent Storage


Messages, files, and skills — all the storage your AI agents need in one platform

Multi-provider message formats, S3-backed disk storage with search, and reusable skill packages that agents can discover and use.

Core Capabilities

Three storage pillars for complete agent state management — messages, files, and learned skills.

Messages

Multi-Provider Format

Store messages once, retrieve in OpenAI, Anthropic, or Gemini format. Switch providers without rewriting serialization logic.

Multi-format retrieval
# Store in any format
client.sessions.store_message(session.id, blob={...})

# Retrieve in OpenAI format
client.sessions.get_messages(session.id, format="openai")
# Retrieve in Anthropic format
client.sessions.get_messages(session.id, format="anthropic")

Edit Strategies

Manage context window size with composable edit strategies. Remove old tool results, trim by token limit, or apply custom rules.

Edit strategies
edit_strategies=[
  {"type": "remove_tool_result",
   "params": {"keep_recent_n_tool_results": 3}},
  {"type": "token_limit",
   "params": {"limit_tokens": 30000}}
]
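Strategies compose in order: each one transforms the message list before the next runs. As a rough, purely local sketch of those semantics (the function names mirror the strategy types above, but the logic and the crude `len() // 4` token estimate are illustrative, not the service's implementation):

```python
# Each strategy is a function over the message list, applied left to right.
def remove_tool_result(msgs, keep_recent_n_tool_results):
    """Drop all but the N most recent tool results."""
    tool_idx = [i for i, m in enumerate(msgs) if m["role"] == "tool"]
    drop = set(tool_idx[:len(tool_idx) - keep_recent_n_tool_results])
    return [m for i, m in enumerate(msgs) if i not in drop]

def token_limit(msgs, limit_tokens, estimate=lambda m: len(m["content"]) // 4):
    """Keep the most recent messages that fit under the token budget."""
    kept, total = [], 0
    for m in reversed(msgs):
        total += estimate(m)
        if total > limit_tokens:
            break
        kept.append(m)
    return list(reversed(kept))

msgs = [{"role": "tool", "content": "old result"},
        {"role": "tool", "content": "recent result"},
        {"role": "user", "content": "question"}]
edited = remove_tool_result(msgs, keep_recent_n_tool_results=1)
edited = token_limit(edited, limit_tokens=30000)
```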

Session Summaries

Get token-efficient summaries of any session for prompt injection. Maintain context without blowing up your token budget.

Session summary
# Summarize recent turns
summary = client.sessions.get_session_summary(
  session.id, limit=5)

# Inject into new prompt
system_msg = f"Context: {summary}"

Disk Storage

File Upload & Download

Upload any file with paths and metadata. Generate secure, time-limited download URLs for sharing artifacts.

Upload a file
artifact = client.disks.artifacts.upsert(
  disk.id,
  file=FileUpload(
    filename="report.md",
    content=b"# Report"),
  file_path="/docs/")

Grep & Glob Search

Search file contents with regex via grep, or find files by path patterns with glob. Full codebase-style search for your agent's files.

Search files
# Regex search in contents
client.disks.artifacts.grep_artifacts(
  disk.id, query="TODO.*")

# Glob pattern matching
client.disks.artifacts.glob_artifacts(
  disk.id, pattern="**/*.md")
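To make the grep-vs-glob distinction concrete, here is a local illustration over an in-memory path-to-contents map, using Python's `re` and `fnmatch`. This is a stand-in for semantics only; the real search runs server-side over S3-backed storage (and `fnmatch` handles `*` slightly differently from true `**` globbing):

```python
import fnmatch
import re

# In-memory stand-in for a disk: path -> file contents.
files = {
    "/docs/report.md": "TODO: finish the intro",
    "/src/main.py": "print('hello')",
}

def grep(files, pattern):
    """Regex search over file contents, returning matching paths."""
    rx = re.compile(pattern)
    return sorted(p for p, text in files.items() if rx.search(text))

def glob(files, pattern):
    """Pattern match over file paths."""
    return sorted(p for p in files if fnmatch.fnmatch(p, pattern))
```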

Agent Tools

Pre-built LLM function-calling tools (DISK_TOOLS) let your agents read, write, and search files autonomously.

Agent tools
from acontext.tools import DISK_TOOLS

# Give agent file access
tools = DISK_TOOLS(disk.id)
# read_file, write_file,
# search_files, list_files
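A sketch of how an agent loop might dispatch these tool calls. The handler names follow the comment above, but the in-memory disk and dispatch logic are stand-ins for what the real tools do against the service:

```python
# In-memory stand-in for the agent's disk; the real DISK_TOOLS hit the API.
disk = {"/docs/report.md": "# Report"}

def read_file(path):
    return disk.get(path, "")

def write_file(path, content):
    disk[path] = content
    return "ok"

def list_files():
    return sorted(disk)

HANDLERS = {"read_file": read_file,
            "write_file": write_file,
            "list_files": list_files}

def dispatch(call):
    """Route an LLM function call shaped like {"name": ..., "arguments": {...}}."""
    return HANDLERS[call["name"]](**call["arguments"])

dispatch({"name": "write_file",
          "arguments": {"path": "/notes.md", "content": "hi"}})
```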

Skill Storage

Skill Packages

Upload reusable skill packages as ZIP files with a SKILL.md, scripts, and resources. Agents discover and use them at runtime.

Upload a skill
with open("my-skill.zip", "rb") as f:
  skill = client.skills.create(
    file=FileUpload(
      filename="my-skill.zip",
      content=f.read()),
    meta={"version": "1.0"})

print(skill.name, skill.id)

Catalog & File Access

Browse the skill catalog, inspect file indexes, and read any file — text content returned inline, binary files via presigned URLs.

Browse & read skills
catalog = client.skills.list_catalog()

skill = client.skills.get(skill_id)
for f in skill.file_index:
  result = client.skills.get_file(
    skill.id, f.path)
  print(result.content.raw)

Skill Tools for Agents

Pre-built SKILL_TOOLS let LLMs read skill content via function calling, or mount skills in a sandbox to execute scripts directly.

Agent skill tools
from acontext.agent.skill import SKILL_TOOLS

ctx = SKILL_TOOLS.format_context(
  client, skill_ids)
tools = SKILL_TOOLS.to_openai_tool_schema()
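In rough local form, "formatting skill context" amounts to building a prompt preamble from each skill's name and SKILL.md. The real `format_context` pulls skill content from the API; here plain dicts stand in, purely for illustration:

```python
# Illustrative only: real skills come from the API, these dicts are stand-ins.
skills = [
    {"name": "pdf-report", "skill_md": "# PDF Report\nRenders data as PDF."},
    {"name": "web-scrape", "skill_md": "# Web Scrape\nFetches pages."},
]

def format_context(skills):
    """Build a system-prompt preamble listing available skills."""
    parts = ["Available skills:"]
    for s in skills:
        parts.append(f"## {s['name']}\n{s['skill_md']}")
    return "\n\n".join(parts)

ctx = format_context(skills)
```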

Short-term Memory Platform

Acontext Sessions
Store · Retrieve · Multi-Provider Format
Store and retrieve context in OpenAI, Anthropic, or Gemini format with one simple API.
from acontext import AcontextClient
 
client = AcontextClient()
 
# Create a session
session = client.sessions.create()
 
# Store messages (OpenAI format)
client.sessions.store_message(session.id, blob={"role": "user", "content": "Hello!"})
 
# Retrieve in any format: openai, anthropic, gemini
messages = client.sessions.get_messages(session.id, format="anthropic")
# Get token-efficient session summary for prompt injection
summary = client.sessions.get_session_summary(session.id, limit=5)
 
# Apply edit strategies to manage context window size
result = client.sessions.get_messages(
    session.id,
    edit_strategies=[
        {"type": "remove_tool_result", "params": {"keep_recent_n_tool_results": 3}},
        {"type": "token_limit", "params": {"limit_tokens": 30000}}
    ]
)
print(f"Tokens: {result.this_time_tokens}")
Don't Reinvent the Wheel for Each Provider
Focus on creating, not adapting.

How It Compares

See how Acontext Short-term Memory compares to building it yourself or using other frameworks.

| Feature | Acontext | DIY | LangChain Memory |
| --- | --- | --- | --- |
| **Messages** | | | |
| Multi-provider format | ✓ | — | — |
| Token-aware retrieval | ✓ | — | — |
| Edit strategies | ✓ | — | — |
| Session summaries | ✓ | — | — |
| Per-user isolation | ✓ | Manual | Manual |
| Cloud-native / API-first | ✓ | — | — |
| **Disk** | | | |
| S3-backed file storage | ✓ | Manual | — |
| Grep & glob file search | ✓ | Manual | — |
| Pre-built agent file tools | ✓ | — | — |
| **Skills** | | | |
| Skill package storage (ZIP) | ✓ | — | — |
| Skill catalog & file access | ✓ | — | — |
| Pre-built agent skill tools | ✓ | — | — |

How It Works

Pick a storage type to see the workflow and code examples.

1
Create Session
Initialize a session to scope your agent's context.
2
Store Messages
Persist messages in any provider format.
3
Retrieve & Transform
Get messages in any format with edit strategies.
4
Optimize & Resume
Summarize, trim, and continue sessions.
store.py
from acontext import AcontextClient

client = AcontextClient()

# Create a session
session = client.sessions.create(
    project_id="my-project"
)

# Store messages (any provider format)
client.sessions.store_message(
    session.id,
    blob={"role": "user", "content": "Hello!"}
)
client.sessions.store_message(
    session.id,
    blob={"role": "assistant", "content": "Hi!"}
)
retrieve.py
# Retrieve in OpenAI format
messages = client.sessions.get_messages(
    session.id, format="openai"
)

# Retrieve in Anthropic format
messages = client.sessions.get_messages(
    session.id, format="anthropic"
)

# Apply edit strategies
result = client.sessions.get_messages(
    session.id,
    edit_strategies=[
        {"type": "remove_tool_result",
         "params": {"keep_recent_n_tool_results": 3}},
        {"type": "token_limit",
         "params": {"limit_tokens": 30000}}
    ]
)
print(f"Tokens used: {result.this_time_tokens}")
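Step 4 ("Optimize & Resume") is the remaining piece of the workflow: fetch a summary of the old session and inject it so a new conversation starts warm. The injection itself is plain message assembly; a sketch, with the summary string below as a placeholder for what `get_session_summary` would return:

```python
def resume_messages(summary, user_msg):
    """Prepend summarized context so a new conversation starts warm."""
    return [
        {"role": "system",
         "content": f"Context from earlier turns: {summary}"},
        {"role": "user", "content": user_msg},
    ]

msgs = resume_messages(
    "User asked for a Q3 report; draft saved to /docs/report.md.",
    "Keep editing the report.")
```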