
MemGPT is now part of Letta

September 23, 2024

MemGPT started as a research project at UC Berkeley, inspired by an effort to improve the memory systems of Discord chatbots. As it turns out, many builders in the AI ecosystem were (and still are!) looking for a memory management solution, and the paper we wrote went viral. Since then, the scope of our research and open source development has grown significantly, and the MemGPT repository has rapidly evolved to support agents more generally, with support for features like custom tools, data sources, and even memory classes, as well as the ability to deploy and persist these agents as stateful services.

The rapid popularity of the research, the paper, and the context window management technique itself has resulted in some confusion about the MemGPT name. Some people use “MemGPT” to refer to the original “LLM OS” techniques in the paper, some use “MemGPT” to refer to an archetype of LLM-driven chatbots with self-editing memory, and others use “MemGPT” to refer to the open source agent framework. To clarify this, we’ve decided that MemGPT should refer to the original agent design pattern described in the research paper (empowering LLMs with self-editing memory tools), and use the name Letta to refer to the agent framework.

Introducing Letta, the company we’ve started to advance the frontier of AI systems with memory. We started Letta to continue research and development of the LLM OS: the orchestration layer that sits above the model layer. Letta the company will continue to maintain the open source repository, MemGPT. Our commercial offering will focus on making deployment easier, improving performance, and offering UX tools for debugging and monitoring agents.

This means: 

  • The PyPI package will be moved to letta
  • The Docker image will be moved to letta/letta-server
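
Under the new names, installing the package and pulling the server image would look roughly like this (a sketch based on the names above; exact versions and tags may differ as the migration lands):

```shell
# Install the renamed PyPI package
pip install letta

# Pull the renamed Docker image for the server
docker pull letta/letta-server
```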

Thank you to our open source and Discord communities for using MemGPT and for your invaluable feedback over the course of the project. We hope you’ll stay along for the ride, and we promise to fix bugs faster now that we are no longer just two PhD students :)

- Charles and Sarah

Jul 7, 2025
Agent Memory: How to Build Agents that Learn and Remember

Traditional LLMs operate in a stateless paradigm—each interaction exists in isolation, with no knowledge carried forward from previous conversations. Agent memory solves this problem.

Jul 3, 2025
Anatomy of a Context Window: A Guide to Context Engineering

As AI agents become more sophisticated, understanding how to design and manage their context windows (via context engineering) has become crucial for developers.

May 29, 2025
Letta Leaderboard: Benchmarking LLMs on Agentic Memory

We're excited to announce the Letta Leaderboard, a comprehensive benchmark suite that evaluates how effectively LLMs manage agentic memory.

May 14, 2025
Memory Blocks: The Key to Agentic Context Management

Memory blocks offer an elegant abstraction for context window management. By structuring the context into discrete, functional units, we can give LLM agents more consistent, usable memory.

Jul 24, 2025
Introducing Letta Filesystem

Today we're announcing Letta Filesystem, which provides an interface for agents to organize and reference content from documents like PDFs, transcripts, documentation, and more.

Apr 21, 2025
Sleep-time Compute

Sleep-time compute is a new way to scale AI capabilities: letting models "think" during downtime. Instead of sitting idle between tasks, AI agents can now use their "sleep" time to process information and form new connections by rewriting their memory state.

Apr 17, 2025
Announcing Letta Client SDKs for Python and TypeScript

We've released new client SDKs for TypeScript and Python, along with upgraded developer documentation.

Apr 2, 2025
Agent File

Introducing Agent File (.af): An open file format for serializing stateful agents with persistent memory and behavior.

Feb 13, 2025
RAG is not Agent Memory

Although RAG provides a way to connect LLMs and agents to more data than what can fit into context, traditional RAG is insufficient for building agent memory.

Nov 7, 2024
New course on Letta with DeepLearning.AI

DeepLearning.AI has released a new course on agent memory in collaboration with Letta.

Feb 6, 2025
Stateful Agents: The Missing Link in LLM Intelligence

Introducing “stateful agents”: AI systems that maintain persistent memory and actually learn during deployment, not just during training.

Jan 15, 2025
Introducing the Agent Development Environment

Introducing the Letta Agent Development Environment (ADE): Agents as Context + Tools

Dec 13, 2024
Letta v0.6.4 release

Letta v0.6.4 adds Python 3.13 support and an official TypeScript SDK.

Nov 14, 2024
The AI agents stack

Understanding the AI agents stack landscape.