
New course on Letta with DeepLearning.AI

Company
November 7, 2024

LLMs as Operating Systems: Agent Memory

We are excited to announce the launch of our brand-new course, created in collaboration with Andrew Ng's DeepLearning.AI: LLMs as Operating Systems: Agent Memory.

In this course, we go over the fundamentals of adding memory to LLM agents:

  • Why is memory important for LLM agents? Memory is key to personalizing agents: without it, agents are less engaging, harder to personalize, and unable to perform multi-step reasoning (breaking a task down into many subtasks).
  • Why do LLMs not have memory? LLMs are stateless compute units: they hold information in their weights, but they do not remember their previous inputs and outputs (for example, prior conversations) unless those inputs are explicitly fed back into the LLM. This means that if we want to use LLMs to build powerful agentic systems, we have to add memory outside of the LLM.
  • What is the concept of self-editing memory? With self-editing memory, you can give LLM agents the ability to learn over time by making modifications to their persistent state via tool calling.
  • How does the concept of memory management (or "context management") in LLM agents resemble memory management in an operating system? In a traditional computer, the operating system (OS) moves data back and forth between an unlimited "virtual memory" and a limited "physical memory". You can think of the role of the context management system for LLM agents as that of an "LLM OS", where the LLM OS must move data back and forth between a "virtual context" (all the data available to the LLM agent) and the "physical context" (the actual context window of the LLM input). This concept of the "LLM OS" context manager was first introduced in the MemGPT research paper.
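The self-editing loop described above can be sketched in a few lines of Python: a persistent memory block lives outside the LLM, and the agent rewrites it by calling a tool, with the updated block re-inserted into the context window on every turn. The names below (`MemoryBlock`, `core_memory_replace`) are illustrative of the MemGPT-style design, not the actual Letta API:

```python
class MemoryBlock:
    """A labeled section of persistent context (e.g. "human" or "persona")."""

    def __init__(self, label: str, value: str):
        self.label = label
        self.value = value

    def render(self) -> str:
        # How the block would appear inside the agent's context window.
        return f"<{self.label}>\n{self.value}\n</{self.label}>"


def core_memory_replace(block: MemoryBlock, old_text: str, new_text: str) -> str:
    """A tool the LLM can call to edit its own persistent memory."""
    if old_text not in block.value:
        return f"Error: '{old_text}' not found in block '{block.label}'."
    block.value = block.value.replace(old_text, new_text)
    return f"Updated block '{block.label}'."


# Example: mid-conversation, the agent learns the user's name and issues
# a tool call to update its "human" memory block.
human = MemoryBlock("human", "Name: unknown. Occupation: unknown.")
core_memory_replace(human, "Name: unknown.", "Name: Sarah.")
print(human.render())
```

Because the edit persists outside the LLM's context, the updated block survives across conversations, which is what lets the agent "learn" over time.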

The course includes around 1.5 hours of material, including video lectures and Python notebooks. In one notebook, you learn to implement a basic self-editing agent (in the style of MemGPT) entirely from scratch.

If you have ever read the MemGPT research paper or used the Letta framework and were curious how the engineering works under the hood, this course will show you!

To read more about the course and enroll for free, visit the course page on DeepLearning.AI's website.

Jul 7, 2025
Agent Memory: How to Build Agents that Learn and Remember

Traditional LLMs operate in a stateless paradigm—each interaction exists in isolation, with no knowledge carried forward from previous conversations. Agent memory solves this problem.

Jul 3, 2025
Anatomy of a Context Window: A Guide to Context Engineering

As AI agents become more sophisticated, understanding how to design and manage their context windows (via context engineering) has become crucial for developers.

Feb 13, 2025
RAG is not Agent Memory

Although RAG provides a way to connect LLMs and agents to more data than what can fit into context, traditional RAG is insufficient for building agent memory.

Nov 14, 2024
The AI agents stack

Understanding the AI agents stack landscape.

Sep 23, 2024
Announcing Letta

We are excited to publicly announce Letta.

Sep 23, 2024
MemGPT is now part of Letta

The MemGPT open source project is now part of Letta.

Jul 24, 2025
Introducing Letta Filesystem

Today we're announcing Letta Filesystem, which provides an interface for agents to organize and reference content from documents like PDFs, transcripts, documentation, and more.

Apr 17, 2025
Announcing Letta Client SDKs for Python and TypeScript

We've released new client SDKs for Python and TypeScript, along with upgraded developer documentation.

Apr 2, 2025
Agent File

Introducing Agent File (.af): An open file format for serializing stateful agents with persistent memory and behavior.

Jan 15, 2025
Introducing the Agent Development Environment

Introducing the Letta Agent Development Environment (ADE): Agents as Context + Tools

Dec 13, 2024
Letta v0.6.4 release

Letta v0.6.4 adds Python 3.13 support and an official TypeScript SDK.

Nov 6, 2024
Letta v0.5.2 release

Letta v0.5.2 adds tool rules, which let you constrain the behavior of your Letta agents, similar to graph-based frameworks.

Oct 23, 2024
Letta v0.5.1 release

Letta v0.5.1 adds support for auto-loading entire external tool libraries into your Letta server.

Oct 14, 2024
Letta v0.5 release

Letta v0.5 adds dynamic model (LLM) listings across multiple providers.

Oct 3, 2024
Letta v0.4.1 release

Letta v0.4.1 adds support for Composio, LangChain, and CrewAI tools.

May 29, 2025
Letta Leaderboard: Benchmarking LLMs on Agentic Memory

We're excited to announce the Letta Leaderboard, a comprehensive benchmark suite that evaluates how effectively LLMs manage agentic memory.

May 14, 2025
Memory Blocks: The Key to Agentic Context Management

Memory blocks offer an elegant abstraction for context window management. By structuring the context into discrete, functional units, we can give LLM agents more consistent, usable memory.

Apr 21, 2025
Sleep-time Compute

Sleep-time compute is a new way to scale AI capabilities: letting models "think" during downtime. Instead of sitting idle between tasks, AI agents can now use their "sleep" time to process information and form new connections by rewriting their memory state.

Feb 6, 2025
Stateful Agents: The Missing Link in LLM Intelligence

Introducing “stateful agents”: AI systems that maintain persistent memory and actually learn during deployment, not just during training.