Get Cognio up and running in 5 minutes.
The fastest way to get started is with Docker:
# Clone the repository
git clone https://github.com/0xReLogic/Cognio.git
cd Cognio
# Start with docker-compose
docker-compose up -d
# Verify it's running
curl http://localhost:8080/health
Expected response:
{
"status": "healthy",
"version": "0.1.0"
}
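If you prefer to script that check (for example in CI), here is a minimal sketch in Python; it assumes the requests package is installed and is not part of Cognio itself:

```python
import time

import requests

BASE_URL = "http://localhost:8080"  # adjust if you changed API_PORT

def wait_for_health(timeout: float = 60.0) -> dict:
    """Poll /health until the server responds or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            resp = requests.get(f"{BASE_URL}/health", timeout=2)
            if resp.ok:
                return resp.json()  # e.g. {"status": "healthy", "version": "0.1.0"}
        except requests.ConnectionError:
            pass  # server not up yet, keep waiting
        time.sleep(1)
    raise RuntimeError("Cognio did not become healthy in time")

if __name__ == "__main__":
    print(wait_for_health())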
Alternatively, run it without Docker:

# Clone the repository
git clone https://github.com/0xReLogic/Cognio.git
cd Cognio
# Install Python dependencies
pip install -r requirements.txt
# Start the server
uvicorn src.main:app --host 0.0.0.0 --port 8080
On first run, the server will download the embedding model:
- Default: paraphrase-multilingual-mpnet-base-v2 (multilingual, higher quality)
- Lighter and faster: set EMBED_MODEL=all-MiniLM-L6-v2 in .env

Model download takes about 30-60 seconds depending on your connection.
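If you want to warm the model cache before launching the server (for example when building a container image), a small sketch that downloads the same model; it assumes Cognio loads embeddings through the sentence-transformers library, which the model names suggest but which you should confirm in the repo:

```python
from sentence_transformers import SentenceTransformer

# Downloads and caches the default model; swap in "all-MiniLM-L6-v2" for the lighter option.
model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")

# Quick sanity check: embed one sentence and inspect the vector size.
vector = model.encode("FastAPI is a modern Python web framework")
print(len(vector))  # 768 dimensions for this mpnet-based model
```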
# In another terminal
curl http://localhost:8080/health
Save your first memory:

curl -X POST http://localhost:8080/memory/save \
-H "Content-Type: application/json" \
-d '{
"text": "FastAPI is a modern Python web framework for building APIs",
"project": "LEARNING",
"tags": ["python", "fastapi", "web"]
}'
Response:
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"saved": true,
"reason": "created",
"duplicate": false
}
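The same save call from Python, again assuming the requests package is available:

```python
import requests

payload = {
    "text": "FastAPI is a modern Python web framework for building APIs",
    "project": "LEARNING",
    "tags": ["python", "fastapi", "web"],
}

# POST the memory to the running Cognio server.
resp = requests.post("http://localhost:8080/memory/save", json=payload, timeout=10)
resp.raise_for_status()

memory = resp.json()
print(memory["id"], memory["saved"], memory["duplicate"])
```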
curl "http://localhost:8080/memory/search?q=Python%20web%20framework&limit=3"
Response:
{
"query": "Python web framework",
"results": [
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"text": "FastAPI is a modern Python web framework for building APIs",
"score": 0.89,
"project": "LEARNING",
"tags": ["python", "fastapi", "web"],
"created_at": "2025-01-05T10:30:00Z"
}
],
"total": 1
}
Notice how it found the memory even though your query never mentions "FastAPI": the match is based on meaning, not exact keywords!
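The equivalent search from Python; the query parameters mirror the curl call above:

```python
import requests

params = {"q": "Python web framework", "limit": 3}
resp = requests.get("http://localhost:8080/memory/search", params=params, timeout=10)
resp.raise_for_status()

for hit in resp.json()["results"]:
    # score is the semantic similarity; higher means a closer match
    print(f"{hit['score']:.2f}  {hit['text']}  (project={hit['project']})")
```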
Open your browser and go to http://localhost:8080/docs.
You'll see the Swagger UI, where you can browse every endpoint and try requests interactively.
Create a .env file to customize settings:
# Copy the example
cp .env.example .env
# Edit with your preferred settings
nano .env
# Database location
DB_PATH=./data/memory.db
# Embedding model (choose based on your needs)
# Default: paraphrase-multilingual-mpnet-base-v2 (multilingual, higher quality)
# Fast: all-MiniLM-L6-v2 (lighter, faster)
EMBED_MODEL=paraphrase-multilingual-mpnet-base-v2
EMBED_DEVICE=cpu
# Server configuration
API_HOST=0.0.0.0
API_PORT=8080
# Search defaults
DEFAULT_SEARCH_LIMIT=5
SIMILARITY_THRESHOLD=0.7
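To double-check which settings the server will pick up, here is a small sketch that loads .env and prints the effective values. It uses the python-dotenv package, which is an assumption here rather than a documented Cognio dependency, and the fallback values simply mirror the example above:

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv (assumed, not a Cognio requirement)

load_dotenv()  # reads .env from the current directory

# Fall back to the documented example values when a variable is unset;
# Cognio's real internal defaults may differ.
settings = {
    "DB_PATH": os.getenv("DB_PATH", "./data/memory.db"),
    "EMBED_MODEL": os.getenv("EMBED_MODEL", "paraphrase-multilingual-mpnet-base-v2"),
    "EMBED_DEVICE": os.getenv("EMBED_DEVICE", "cpu"),
    "API_HOST": os.getenv("API_HOST", "0.0.0.0"),
    "API_PORT": int(os.getenv("API_PORT", "8080")),
    "DEFAULT_SEARCH_LIMIT": int(os.getenv("DEFAULT_SEARCH_LIMIT", "5")),
    "SIMILARITY_THRESHOLD": float(os.getenv("SIMILARITY_THRESHOLD", "0.7")),
}

for key, value in settings.items():
    print(f"{key}={value}")
```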
Enable automatic tag generation using AI:
# Enable auto-tagging
AUTOTAG_ENABLED=true
LLM_PROVIDER=groq
# Groq API (FREE tier: 14,400 requests/day)
# Get key from: https://console.groq.com/keys
GROQ_API_KEY=your-groq-api-key-here
GROQ_MODEL=openai/gpt-oss-120b
# Or use OpenAI (paid)
# LLM_PROVIDER=openai
# OPENAI_API_KEY=your-openai-key-here
# OPENAI_MODEL=gpt-4o-mini
# Enable summarization (default: enabled)
SUMMARIZATION_ENABLED=true
# Method: extractive (fast, no API) or abstractive (better quality, uses LLM)
SUMMARIZATION_METHOD=extractive
# Require API key for all requests
API_KEY=your-secret-key-here
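To get a feel for what auto-tagging does, here is a standalone sketch that asks Groq to propose tags for a memory through its OpenAI-compatible endpoint. This is an illustration only, not Cognio's internal implementation; it assumes the openai Python package is installed and GROQ_API_KEY is set in your environment:

```python
import os

from openai import OpenAI  # pip install openai

# Groq exposes an OpenAI-compatible API, so the standard client works against it.
client = OpenAI(
    api_key=os.environ["GROQ_API_KEY"],
    base_url="https://api.groq.com/openai/v1",
)

text = "FastAPI is a modern Python web framework for building APIs"

response = client.chat.completions.create(
    model="openai/gpt-oss-120b",  # same model name as in the .env example above
    messages=[
        {"role": "system", "content": "Return 3-5 short lowercase tags as a JSON array."},
        {"role": "user", "content": text},
    ],
)

print(response.choices[0].message.content)  # e.g. ["python", "fastapi", "web"]
```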
Check out the examples/ directory:
- examples/basic_usage.py - Python SDK examples
- examples/curl_examples.sh - Command-line examples

To use Cognio with AI clients (Claude Desktop, Cursor, VS Code Copilot, etc.):
Auto-Setup (Recommended):
cd mcp-server
npm run setup
This automatically configures all 9 supported AI clients.
Manual Setup:
See mcp-server/README.md for client-specific configuration examples.
After Setup:
- cognio.md is created in your workspace with a usage guide
- Try prompts like "Search my memories for Docker" or "Remember this: FastAPI is awesome"

Server won't start?
- Check whether something is already using the port: lsof -i :8080
- Or start on a different port: API_PORT=8081 uvicorn src.main:app --host 0.0.0.0 --port 8081

Can't find saved memories?
- Confirm the database file exists: ls -lh data/memory.db
- Check the stats endpoint: curl http://localhost:8080/memory/stats

Slow searches?
- Adjust the similarity threshold: ?threshold=0.5
- Scope the search to a single project: ?project=YOUR_PROJECT

Need help? Open an issue at https://github.com/0xReLogic/Cognio/issues.
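To try the two search-tuning parameters above together, a quick Python check (replace YOUR_PROJECT with a real project name):

```python
import requests

params = {
    "q": "Docker",
    "project": "YOUR_PROJECT",  # scope the search to one project
    "threshold": 0.5,           # accept looser matches than the 0.7 default
    "limit": 5,
}

resp = requests.get("http://localhost:8080/memory/search", params=params, timeout=10)
resp.raise_for_status()
print(resp.json()["total"], "results")
```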
Now that you have Cognio running, explore the rest of the API in the Swagger UI and the examples above.
Happy remembering! 🧠