Operations Guide¶
Configure, secure, and deploy your Redis Agent Memory Server in production. This guide covers server configuration, authentication, LLM providers, and infrastructure setup.
Server Configuration¶
- **⚙️ Configuration**: Environment variables, YAML config files, and all server settings
- **🔐 Authentication**: OAuth2/JWT, token-based auth, and multi-provider setup
- **🛡️ Security**: Security considerations for custom prompts and production deployments
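Settings like these are typically supplied as environment variables (or the equivalent keys in a YAML config file). A minimal sketch; the variable names below are illustrative assumptions, so check the Configuration guide for the authoritative list:

```shell
# Sketch only: variable names are assumptions, not the confirmed setting names.
export REDIS_URL="redis://localhost:6379"   # Redis connection string
export PORT=8000                            # port the server listens on
export DISABLE_AUTH=true                    # convenient for local development; never use in production
```

The same values can usually live in a YAML file instead, which is easier to version-control for production deployments.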
AI Provider Setup¶
- **🤖 LLM Providers**: Configure OpenAI, Anthropic, AWS Bedrock, Ollama, and 100+ providers via LiteLLM
- **📐 Embedding Providers**: Set up embedding models for semantic search
- **🗄️ Memory Vector Databases**: Configure Redis or custom memory vector databases
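Provider setup generally comes down to choosing a generation model, choosing an embedding model, and supplying API credentials. A hedged YAML sketch; the key and model names here are assumptions for illustration, and the provider guides document the real ones:

```yaml
# Sketch only: key names and model IDs are assumptions, not confirmed settings.
generation_model: gpt-4o-mini            # model used for generation tasks
embedding_model: text-embedding-3-small  # model used for semantic search
# Provider API keys are typically read from the environment,
# e.g. OPENAI_API_KEY or ANTHROPIC_API_KEY.
```

Because providers are routed through LiteLLM, swapping models is usually a one-line change to the model identifier rather than a code change.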
Quick Reference¶
| Topic | Description |
|---|---|
| Configuration | All environment variables and YAML settings |
| Authentication | OAuth2, token auth, and development mode |
| Security | Custom prompt security and best practices |
| LLM Providers | Generation models including AWS Bedrock |
| Embedding Providers | Embedding models and dimensions |
| Memory Vector Databases | Storage backend configuration |
Where to Start¶
**Deploying to production?** Start with Configuration to understand all server settings.

**Setting up authentication?** See Authentication for OAuth2 or token-based auth.

**Using AWS?** The LLM Providers guide covers AWS Bedrock setup for both generation and embedding models.

**Customizing storage?** Check Memory Vector Databases for Redis and custom backend options.