Getting Started
Install lettactl and deploy your first agent fleet.
Installation
Install globally via npm, or use npx to run without installing:
Global install:

```shell
npm install -g lettactl
```

Or use npx:

```shell
npx lettactl --help
```

Environment Setup
Point lettactl at your Letta server:
Environment variables:

```shell
export LETTA_BASE_URL=http://localhost:8283
# Optional: API key for Letta Cloud
export LETTA_API_KEY=your-key-here
```

Conversation Search (Optional)
Letta agents include a conversation_search tool for searching past conversations. By default it uses basic text matching. For semantic search (e.g. "what did we discuss about the campaign?"), enable Turbopuffer:
Turbopuffer env vars:

```shell
# Add to your Letta server environment
LETTA_USE_TPUF=true
LETTA_TPUF_API_KEY=your_turbopuffer_api_key
LETTA_EMBED_ALL_MESSAGES=true
```

Requires OPENAI_API_KEY for embeddings. Only messages sent after enabling are indexed.
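If you script your server environment, a quick preflight check can catch a missing key before startup. A minimal Python sketch (the variable list mirrors the setup above; the helper itself is ours, not part of Letta):

```python
import os

# Variable names from the Turbopuffer setup above; OPENAI_API_KEY is
# required for embeddings. This preflight check is illustrative only.
SEMANTIC_SEARCH_VARS = ["LETTA_USE_TPUF", "LETTA_TPUF_API_KEY", "OPENAI_API_KEY"]

def missing_semantic_search_vars(env=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in SEMANTIC_SEARCH_VARS if not env.get(name)]
```

Run it before launching the server; an empty result means semantic search has everything it needs.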
Your First Fleet Config
Create a file called fleet.yaml with a minimal agent definition:
fleet.yaml:

```yaml
agents:
  - name: my-first-agent
    description: "A simple AI assistant"
    llm_config:
      model: "openai/gpt-4o"
      context_window: 128000
    system_prompt:
      value: "You are a helpful assistant."
    memory_blocks:
      - name: user_preferences
        description: "What I know about the user"
        agent_owned: true
        limit: 5000
```

Deploy
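Before deploying, it can help to sanity-check an agent definition. A Python sketch against the fleet.yaml entry above (the required-field set and the helper are our assumptions, not lettactl's actual schema validation):

```python
# Illustrative required fields; lettactl's real schema may differ.
REQUIRED_AGENT_FIELDS = {"name", "description", "llm_config", "system_prompt"}

def check_agent(agent: dict) -> list[str]:
    """Return a list of problems found in one agent entry."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_AGENT_FIELDS - agent.keys())]
    for block in agent.get("memory_blocks", []):
        if block.get("limit", 0) <= 0:
            problems.append(f"memory block {block.get('name')!r}: limit must be positive")
    return problems

# The agent from fleet.yaml above, expressed as a Python dict:
agent = {
    "name": "my-first-agent",
    "description": "A simple AI assistant",
    "llm_config": {"model": "openai/gpt-4o", "context_window": 128000},
    "system_prompt": {"value": "You are a helpful assistant."},
    "memory_blocks": [
        {"name": "user_preferences", "description": "What I know about the user",
         "agent_owned": True, "limit": 5000},
    ],
}
```

An empty problem list means the entry has the shape shown above; the dry run below remains the authoritative check.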
Preview what will happen, then deploy:
Deploy:

```shell
# Preview changes
lettactl apply -f fleet.yaml --dry-run

# Deploy for real
lettactl apply -f fleet.yaml
```

Verify
Check that your agent is running:
Verify:

```shell
# List agents
lettactl get agents

# Send a test message
lettactl send my-first-agent "Hello, world!"

# View response
lettactl messages my-first-agent
```
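The verify commands above can also be scripted. A minimal Python wrapper (the helper names are ours; lettactl's output format is not assumed, just forwarded):

```python
import subprocess

def build_cmd(*args: str) -> list[str]:
    """Assemble a lettactl invocation (kept pure so it is easy to test)."""
    return ["lettactl", *args]

def lettactl(*args: str) -> str:
    """Run a lettactl subcommand and return its stdout; raise on failure."""
    return subprocess.run(build_cmd(*args), check=True,
                          capture_output=True, text=True).stdout

def smoke_test(agent: str = "my-first-agent") -> None:
    """The Verify steps above: list agents, send a message, read the reply."""
    lettactl("get", "agents")
    lettactl("send", agent, "Hello, world!")
    print(lettactl("messages", agent))
```

Useful as a CI smoke test once the fleet is deployed.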