
ODAM Quickstart

Get personalized AI memory running in 5 minutes

🚀 Performance Achievement

ODAM achieves 95% personalization, 91% lower latency, and 90% token savings!

1. Installation

Docker (Recommended)

Bash
# Clone and run
git clone https://github.com/aipsyhelp/odam
cd odam
./deploy.sh

Manual Installation

Bash
# Install dependencies
pip install -r requirements.txt

# Configure environment variables
cp production.env .env
nano .env

# Run the service
python main_llm_hybrid.py
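
Whichever install path you used, the API should end up listening on port 8080 (the port used in the examples below). As a quick sanity check (a minimal sketch assuming the default host and port), confirm the socket is open before continuing:

Python
import socket

# Try to open a TCP connection to the ODAM service (assumed localhost:8080)
HOST, PORT = "localhost", 8080

try:
    with socket.create_connection((HOST, PORT), timeout=3):
        print(f"✅ ODAM service is listening on {HOST}:{PORT}")
except OSError as exc:
    print(f"❌ Could not reach {HOST}:{PORT}: {exc}")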

2. Configuration

Edit your .env file with your credentials:

Environment Variables
# Azure OpenAI
AZURE_OPENAI_ENDPOINT=https://your-openai.openai.azure.com/
AZURE_OPENAI_KEY=your-api-key
AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4o-mini

# CosmosDB (Graph Database)
COSMOS_ENDPOINT=https://your-cosmos.documents.azure.com:443/
COSMOS_KEY=your-cosmos-key
COSMOS_DATABASE_NAME=odam_graph_db
COSMOS_CONTAINER_NAME=entities

# Redis (Caching)
REDIS_URL=rediss://:your-key@your-redis.redis.cache.windows.net:6380
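
Before starting the service, it is worth confirming that these variables are actually visible to the process. Here is a minimal check, assuming the variable names above and that the python-dotenv package is available to load the .env file:

Python
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # read .env from the current directory into the environment

# Variable names from the configuration block above
REQUIRED = [
    "AZURE_OPENAI_ENDPOINT",
    "AZURE_OPENAI_KEY",
    "AZURE_OPENAI_DEPLOYMENT_NAME",
    "COSMOS_ENDPOINT",
    "COSMOS_KEY",
    "COSMOS_DATABASE_NAME",
    "COSMOS_CONTAINER_NAME",
    "REDIS_URL",
]

missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    print(f"❌ Missing: {', '.join(missing)}")
else:
    print("✅ All required environment variables are set")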

3. First Test

Python
import requests

# First message: introduce the user so ODAM can extract and store personal facts
response = requests.post(
    "http://localhost:8080/api/v1/chat",
    json={
        "message": "Hi! I'm Alex, a developer from Kyiv.",
        "user_id": "alex_kyiv",
        "session_id": "intro_session"
    }
)

result = response.json()
print(f"🎯 Personalization: {result['personalization_level']}%")
print(f"🧠 Entities extracted: {result['entities_extracted']}")
print(f"⏱️ Processing time: {result['processing_time']}s")
print(f"💬 Response: {result['response']}")

Expected Response

JSON Response
{
  "response": "Hi Alex! Nice to meet a developer from Kyiv!",
  "personalization_level": 75,
  "memory_utilization": 100,
  "entities_extracted": 3,
  "memories_found": 0,
  "confidence": 94.2,
  "processing_time": 7.1,
  "fallback_used": false,
  "session_id": "intro_session"
}
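
The snippet above assumes the call succeeds. In practice it helps to guard against HTTP errors and missing fields; here is a hedged variation of the same request with basic error handling:

Python
import requests

try:
    response = requests.post(
        "http://localhost:8080/api/v1/chat",
        json={
            "message": "Hi! I'm Alex, a developer from Kyiv.",
            "user_id": "alex_kyiv",
            "session_id": "intro_session"
        },
        timeout=30,
    )
    response.raise_for_status()  # fail fast on 4xx/5xx responses
    result = response.json()
    # Use .get() so a missing field does not raise a KeyError
    print(f"🎯 Personalization: {result.get('personalization_level')}%")
    print(f"💬 Response: {result.get('response')}")
except requests.RequestException as exc:
    print(f"❌ Request failed: {exc}")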

4. Test Contextual Memory

Make a second request, in a new session, to verify that the system remembers what it learned about the user earlier:

Python
# Memory test: note the new session_id — stored facts should persist for the same user_id
response = requests.post(
    "http://localhost:8080/api/v1/chat",
    json={
        "message": "Tell me about my work in IT",
        "user_id": "alex_kyiv",
        "session_id": "work_discussion"
    }
)

result = response.json()
print(f"🎯 Personalization: {result['personalization_level']}%")
print(f"🧠 Memories found: {result['memories_found']}")
print(f"💬 Response: {result['response']}")

What's Next?

Troubleshooting

If you encounter issues, check the following (a diagnostic sketch follows the list):

  • Ensure all environment variables are correctly set
  • Verify the Azure OpenAI and CosmosDB connections
  • Check Redis connectivity
  • Review the logs for detailed error messages
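
As a starting point, the sketch below checks the environment variables and Redis connectivity. It assumes the variable names from the configuration section and that the redis Python package is installed; adapt it to your setup.

Python
import os

import redis  # pip install redis

# 1. Environment variables from the configuration section
for name in ("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_KEY", "COSMOS_ENDPOINT",
             "COSMOS_KEY", "REDIS_URL"):
    print(f"{name}: {'set' if os.environ.get(name) else 'MISSING'}")

# 2. Redis connectivity (uses the rediss:// URL from .env)
try:
    redis.from_url(os.environ["REDIS_URL"], socket_timeout=5).ping()
    print("✅ Redis reachable")
except Exception as exc:
    print(f"❌ Redis check failed: {exc}")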