Ryumem is fully open source and can be self-hosted on your own infrastructure. Choose between Docker Compose (recommended), a Helm chart for Kubernetes, or a local development setup.

Option 1: Docker Compose (Recommended)

The fastest way to get Ryumem running with all components.
1. Clone the Repository

git clone https://github.com/predictable-labs/ryumem.git
cd ryumem
2. Configure Environment

# Configure server
cp server/.env.example server/.env
# Edit server/.env and add your LLM API key (GOOGLE_API_KEY or OPENAI_API_KEY)

# Configure dashboard
cp dashboard/env.template dashboard/.env
You need at least one LLM API key (Google Gemini or OpenAI) for entity extraction and embeddings.
3. Start All Services

docker-compose up -d
This starts the Ryumem API server at http://localhost:8000 and the dashboard at http://localhost:3000.
4. Generate Your API Key

Register a customer to get your API key:
curl -X POST "http://localhost:8000/register" \
  -H "Content-Type: application/json" \
  -d '{"customer_id": "my_project"}'
The response includes your API key (starts with ryu_). Save this securely.
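As a sketch of handling the registration response — the exact field names here are an assumption, so check your server's actual JSON output — you might extract and validate the key like this:

```python
import json

# Hypothetical registration response body; the field names are an
# assumption -- inspect your server's actual output.
response_body = '{"customer_id": "my_project", "api_key": "ryu_example_key"}'

data = json.loads(response_body)
api_key = data["api_key"]

# Ryumem API keys start with the "ryu_" prefix.
assert api_key.startswith("ryu_")
print(api_key)
```

Store the key somewhere safe (a secrets manager or an untracked `.env` file), since it authenticates all subsequent API requests.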

Option 2: Helm Chart (Kubernetes)

Deploy Ryumem to Kubernetes using the official Helm chart.
1. Get the Helm Chart

# Clone the repository to get the Helm chart
git clone https://github.com/predictable-labs/ryumem.git
cd ryumem
2. Configure Values

Create a custom values file:
# custom-values.yaml
secrets:
  googleApiKey: "your-google-api-key"  # or openaiApiKey
  adminApiKey: "your-admin-key"

ingress:
  enabled: true
  className: nginx
  hosts:
    - host: ryumem.your-domain.com
      paths:
        - path: /
          pathType: Prefix

dashboard:
  enabled: true
  ingress:
    enabled: true
    className: nginx
    hosts:
      - host: ryumem-dashboard.your-domain.com
        paths:
          - path: /
            pathType: Prefix

persistence:
  enabled: true
  size: 20Gi
3. Install the Chart

helm install ryumem ./helm/ryumem \
  -f custom-values.yaml \
  --namespace ryumem \
  --create-namespace
4. Verify Installation

# Check pods are running
kubectl get pods -n ryumem

# Get the API endpoint
kubectl get ingress -n ryumem
5. Generate Your API Key

Port-forward and register:
kubectl port-forward -n ryumem svc/ryumem 8000:8000

curl -X POST "http://localhost:8000/register" \
  -H "Content-Type: application/json" \
  -d '{"customer_id": "my_project"}'

Helm Chart Configuration

Key configuration options in values.yaml:
| Parameter | Default | Description |
| --- | --- | --- |
| `replicaCount` | `1` | Number of API server replicas |
| `image.repository` | `ghcr.io/predictable-labs/ryumem` | Container image |
| `image.tag` | `latest` | Image tag |
| `secrets.googleApiKey` | `""` | Google Gemini API key |
| `secrets.openaiApiKey` | `""` | OpenAI API key |
| `secrets.adminApiKey` | `""` | Admin API key for registration |
| `ryumem.llm.provider` | `gemini` | LLM provider |
| `ryumem.llm.model` | `gemini-2.0-flash-exp` | LLM model |
| `ryumem.embedding.provider` | `gemini` | Embedding provider |
| `ryumem.embedding.model` | `text-embedding-004` | Embedding model |
| `persistence.enabled` | `true` | Enable persistent storage |
| `persistence.size` | `10Gi` | Storage size |
| `dashboard.enabled` | `true` | Deploy dashboard |
| `ingress.enabled` | `false` | Enable API ingress |
| `dashboard.ingress.enabled` | `false` | Enable dashboard ingress |
For production deployments, use external secret management (like Kubernetes Secrets, Vault, or External Secrets Operator) instead of storing API keys in values files.

Option 3: Local Development

For development or when you want more control over individual components.

Prerequisites

  • Python 3.10+
  • Node.js 18+
  • An LLM API key (Google Gemini, OpenAI, or local Ollama)

Start the API Server

1. Install the SDK

pip install -e .
2. Install Server Dependencies

cd server
pip install -r requirements.txt
3. Configure Environment

cp .env.example .env
# Edit .env and set your LLM API key
4. Start the Server

uvicorn main:app --reload --host 0.0.0.0 --port 8000

Start the Dashboard

cd dashboard
npm install
cp env.template .env
npm run dev
The dashboard will be available at http://localhost:3000.

Environment Variables

Server Configuration (server/.env)

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| `GOOGLE_API_KEY` | Yes* | - | Google Gemini API key |
| `OPENAI_API_KEY` | Yes* | - | OpenAI API key |
| `RYUMEM_DB_FOLDER` | Yes | `./data` | Database storage path |
| `ADMIN_API_KEY` | Yes | - | Admin key for registration |
| `LLM_PROVIDER` | No | `gemini` | LLM provider (gemini, openai, ollama, litellm) |
| `LLM_MODEL` | No | `gemini-2.0-flash-exp` | LLM model name |
| `EMBEDDING_PROVIDER` | No | `gemini` | Embedding provider |
| `EMBEDDING_MODEL` | No | `text-embedding-004` | Embedding model |
| `CORS_ORIGINS` | No | `http://localhost:3000` | Allowed CORS origins |

*At least one LLM API key is required
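Putting the variables above together, a minimal `server/.env` for a Gemini-backed deployment might look like this (the key values are placeholders):

```bash
# server/.env -- minimal example; replace placeholder values
GOOGLE_API_KEY=your-google-api-key
ADMIN_API_KEY=your-admin-key
RYUMEM_DB_FOLDER=./data

# Optional overrides (defaults shown)
LLM_PROVIDER=gemini
LLM_MODEL=gemini-2.0-flash-exp
EMBEDDING_PROVIDER=gemini
EMBEDDING_MODEL=text-embedding-004
CORS_ORIGINS=http://localhost:3000
```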

Dashboard Configuration (dashboard/.env)

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| `NEXT_PUBLIC_API_URL` | Yes | `http://localhost:8000` | Ryumem API server URL |
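The corresponding `dashboard/.env` is a single line pointing at your API server:

```bash
# dashboard/.env
NEXT_PUBLIC_API_URL=http://localhost:8000
```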

Quick Start with Python SDK

Once your server is running, install the Python SDK:
pip install ryumem
Initialize and use Ryumem:
from ryumem import Ryumem

# Initialize with your local server
ryumem = Ryumem(
    api_url="http://localhost:8000",
    api_key="ryu_your_api_key_here"
)

# Add your first memory episode
ryumem.add_episode(
    content="Alice works at Google in Mountain View as a Software Engineer.",
    user_id="user_123",
    session_id="session_abc",
)

# Search your memories
results = ryumem.search(
    query="Where does Alice work?",
    user_id="user_123",
    session_id="session_abc",
)

print(results)

Accessing the Dashboard

Once Ryumem is running:
  1. Navigate to http://localhost:3000
  2. Enter your API key (starts with ryu_)
  3. Click “Sign in”
Your API key is stored securely in your browser and used for all API requests.

Dashboard Features:
  • Search and query your knowledge graph
  • Visualize entities and relationships
  • View episodes and memories
  • Track tool execution analytics
  • Configure system settings

Dashboard Guide

Complete guide to dashboard features and navigation

Using with Ollama (Local LLMs)

For fully local deployment without cloud API keys:
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model
ollama pull llama3.2

# Configure server/.env
LLM_PROVIDER=ollama
LLM_MODEL=llama3.2
EMBEDDING_PROVIDER=ollama
EMBEDDING_MODEL=nomic-embed-text
Local LLMs may have different performance characteristics compared to cloud providers. Test with your use case.

Next Steps