An AI model aggregation, management, and relay/distribution system: manage all your AI models with a single application. Multiple large models can be called through a unified format, with support for the OpenAI, Claude, Gemini, and other API formats, suitable for personal use or for managing and distributing channels within an enterprise.

![new-api](/web/public/logo.png)

# New API 🍥 Next Generation LLM Gateway and AI Asset Management System


📝 Project Description

[!NOTE]
This is an open-source project based on One API.

[!IMPORTANT]

  • Users must comply with OpenAI's Terms of Use and with applicable laws and regulations; the project must not be used for illegal purposes.
  • This project is intended for personal learning only. Stability is not guaranteed, and no technical support is provided.

✨ Key Features

  1. 🎨 New UI (some pages still pending update)
  2. 🌍 Multi-language support (work in progress)
  3. 🎨 Added Midjourney-Proxy(Plus) interface support, Integration Guide
  4. 💰 Online recharge support, configurable in system settings:
    • EasyPay
  5. 🔍 Query usage quota by key
  6. 📑 Configurable items per page in pagination
  7. 🔄 Compatible with original One API database (one-api.db)
  8. 💵 Support per-request model pricing, configurable in System Settings - Operation Settings
  9. ⚖️ Support channel weighted random selection
  10. 📈 Data dashboard (console)
  11. 🔒 Configurable model access per token
  12. 🤖 Telegram authorization login support:
    1. System Settings - Configure Login Registration - Allow Telegram Login
    2. Send /setdomain command to @Botfather
    3. Select your bot, then enter http(s)://your-website/login
    4. Telegram Bot name is the bot username without @
  13. 🎵 Added Suno API interface support, Integration Guide
  14. 🔄 Support for Rerank models, compatible with Cohere and Jina, can integrate with Dify, Integration Guide
  15. OpenAI Realtime API - Support for OpenAI's Realtime API, including Azure channels
  16. 🧠 Support for setting reasoning effort through a model name suffix (see the example after this list):
    • Add suffix -high to set high reasoning effort (e.g., o3-mini-high)
    • Add suffix -medium to set medium reasoning effort
    • Add suffix -low to set low reasoning effort
  17. 🔄 Thinking-to-content option thinking_to_content in Channel->Edit->Channel Extra Settings; default is false. When enabled, the reasoning_content of the thinking output is converted into <think> tags and concatenated to the returned content.
  18. 🔄 Model rate limiting: total request limits and successful request limits can be set in System Settings->Rate Limit Settings
  19. 💰 Cache billing support; when enabled, cache hits are charged at a configurable ratio:
    1. Set Prompt Cache Ratio in System Settings -> Operation Settings
    2. Set Prompt Cache Ratio in channel settings, range 0-1 (e.g., 0.5 means 50% charge on cache hits)
    3. Supported channels:
      • OpenAI
      • Azure
      • DeepSeek
      • Claude
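
A minimal sketch of the reasoning effort suffix (feature 16 above), assuming the gateway is listening on port 3000 and sk-xxxx is a placeholder for a token created in the console:

# Append -high to the model name to request high reasoning effort
curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxxx" \
  -d '{"model": "o3-mini-high", "messages": [{"role": "user", "content": "Hello"}]}'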

Model Support

This version additionally supports:

  1. Third-party GPTs models (gpt-4-gizmo-*)
  2. Midjourney-Proxy(Plus) interface, Integration Guide
  3. Custom channels with full API URL support
  4. Suno API interface, Integration Guide
  5. Rerank models, supporting Cohere and Jina, Integration Guide
  6. Dify

You can add custom models gpt-4-gizmo-* in channels. These are third-party models and cannot be called with official OpenAI keys.

Additional Configurations Beyond One API

  • GENERATE_DEFAULT_TOKEN: Generate initial token for new users, default false
  • STREAMING_TIMEOUT: Set streaming response timeout, default 60 seconds
  • DIFY_DEBUG: Output workflow and node info to client for Dify channel, default true
  • FORCE_STREAM_OPTION: Override client stream_options parameter, default true
  • GET_MEDIA_TOKEN: Calculate image tokens, default true
  • GET_MEDIA_TOKEN_NOT_STREAM: Calculate image tokens in non-stream mode, default true
  • UPDATE_TASK: Update async tasks (Midjourney, Suno), default true
  • GEMINI_MODEL_MAP: Specify Gemini model versions (v1/v1beta), format: "model:version", comma-separated
  • COHERE_SAFETY_SETTING: Cohere model safety settings, options: NONE, CONTEXTUAL, STRICT, default NONE
  • GEMINI_VISION_MAX_IMAGE_NUM: Maximum number of images per request for Gemini models, default 16, set to -1 to disable the limit
  • MAX_FILE_DOWNLOAD_MB: Maximum file download size in MB, default 20
  • CRYPTO_SECRET: Encryption key for encrypting database content
  • AZURE_DEFAULT_API_VERSION: Default API version for Azure channels; used when no version is specified in the channel settings, default 2024-12-01-preview
  • NOTIFICATION_LIMIT_DURATION_MINUTE: Duration of notification limit in minutes, default 10
  • NOTIFY_LIMIT_COUNT: Maximum number of user notifications in the specified duration, default 2
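
A minimal sketch of passing some of these variables to the Docker container; the values (and the Gemini model name) are illustrative placeholders only:

# Illustrative environment variable overrides
docker run --name new-api -d --restart always -p 3000:3000 \
  -e TZ=Asia/Shanghai \
  -e STREAMING_TIMEOUT=120 \
  -e GEMINI_MODEL_MAP="gemini-1.5-pro-latest:v1beta" \
  -e AZURE_DEFAULT_API_VERSION=2024-12-01-preview \
  -v /home/ubuntu/data/new-api:/data \
  calciumion/new-api:latest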

Deployment

[!TIP] Latest Docker image: calciumion/new-api:latest
Default account: root, password: 123456

Multi-Server Deployment

  • You must set the SESSION_SECRET environment variable; otherwise login state will not be shared consistently across servers.
  • If a shared Redis is used, you must also set the CRYPTO_SECRET environment variable; otherwise the other servers will not be able to read the Redis content (see the sketch below).
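
A minimal sketch of one node in a multi-server deployment; the secret values and the Redis host are placeholders and must be identical on every node:

# Both secrets must be the same on every server
docker run --name new-api -d --restart always -p 3000:3000 \
  -e SESSION_SECRET=random_string_shared_by_all_nodes \
  -e CRYPTO_SECRET=another_shared_random_string \
  -e REDIS_CONN_STRING=redis://default:redispw@your-redis-host:6379 \
  -v /home/ubuntu/data/new-api:/data \
  calciumion/new-api:latest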

Requirements

  • Local database (default): SQLite (Docker deployments must mount the /data directory)
  • Remote database: MySQL >= 5.7.8 or PgSQL >= 9.6 (see the DSN examples below)
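
A minimal sketch of SQL_DSN values for a remote database; hostnames and credentials are placeholders, and the PostgreSQL line assumes the postgresql:// DSN form is accepted:

# MySQL
SQL_DSN="root:123456@tcp(localhost:3306)/oneapi"
# PostgreSQL (assumed DSN form)
SQL_DSN="postgresql://postgres:123456@localhost:5432/oneapi"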

Deployment with BT Panel

Install BT Panel (version 9.2.0 or above) from the BT Panel official website by downloading and running the stable-version install script.
After installation, log in to BT Panel and click Docker in the menu bar. On first access you will be prompted to install the Docker service; click Install Now and follow the prompts to complete the installation.
Once Docker is installed, find New-API in the app store, click Install, and configure the basic options to finish the deployment.
Pictorial Guide

Docker Deployment

Using Docker Compose (Recommended)

# Clone project
git clone https://github.com/Calcium-Ion/new-api.git
cd new-api
# Edit docker-compose.yml as needed
# nano docker-compose.yml
# vim docker-compose.yml
# Start
docker-compose up -d

Update Version

docker-compose pull
docker-compose up -d

Direct Docker Image Usage

# SQLite deployment:
docker run --name new-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest

# MySQL deployment (add -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi"), modify database connection parameters as needed
# Example:
docker run --name new-api -d --restart always -p 3000:3000 -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi" -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest

Update Version

# Pull the latest image
docker pull calciumion/new-api:latest
# Stop and remove the old container
docker stop new-api
docker rm new-api
# Run the new container with the same parameters as before
docker run --name new-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest

Alternatively, you can use Watchtower for automatic updates (not recommended, may cause database incompatibility):

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower -cR

Channel Retry

Channel retry is implemented and can be configured in Settings->Operation Settings->General Settings. Enabling the cache is recommended when retry is used.
If retry is enabled, a failed request is automatically retried on the channel with the next priority.

Cache Configuration

  1. REDIS_CONN_STRING: Use Redis as cache
    • Example: REDIS_CONN_STRING=redis://default:redispw@localhost:49153
  2. MEMORY_CACHE_ENABLED: Enable memory cache, default false
    • Example: MEMORY_CACHE_ENABLED=true
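
A minimal sketch combining the two cache options in a Docker deployment; the Redis URL is the example value shown above:

# Enable Redis and in-memory caching
docker run --name new-api -d --restart always -p 3000:3000 \
  -e REDIS_CONN_STRING=redis://default:redispw@localhost:49153 \
  -e MEMORY_CACHE_ENABLED=true \
  -v /home/ubuntu/data/new-api:/data \
  calciumion/new-api:latest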

Why Some Errors Don't Retry

Requests that fail with error codes 400, 504, or 524 are not retried by default.

To Enable Retry for 400

In Channel->Edit, set Status Code Override to:

{
  "400": "500"
}

Integration Guides

Related Projects

🌟 Star History

Star History Chart