An AI model aggregation, management, and relay/distribution system: manage all of your AI models from a single application. It exposes many different large models through a unified calling format, supports the OpenAI, Claude, Gemini, and other API formats, and is suitable for personal use or for managing and distributing channels inside an enterprise.


README.en.md

![new-api](/web/public/logo.png)

# New API

🍥 Next Generation LLM Gateway and AI Asset Management System


## 📝 Project Description

> [!NOTE]
> This is an open-source project developed based on One API.

> [!IMPORTANT]
> - Users must comply with OpenAI's Terms of Use and with applicable laws and regulations; the project must not be used for illegal purposes.
> - This project is intended for personal learning only. Stability is not guaranteed, and no technical support is provided.

## ✨ Key Features

1. 🎨 New UI interface (some pages are still being updated)
2. 🌍 Multi-language support (work in progress)
3. 🎨 Midjourney-Proxy(Plus) interface support (see the Integration Guide)
4. 💰 Online recharge support, configurable in System Settings:
    - EasyPay
5. 🔍 Query usage quota by key
6. 📑 Configurable number of items per page in pagination
7. 🔄 Compatible with the original One API database (one-api.db)
8. 💵 Per-request model pricing, configurable in System Settings - Operation Settings
9. ⚖️ Weighted random channel selection
10. 📈 Data dashboard (console)
11. 🔒 Configurable model access per token
12. 🤖 Telegram authorization login support:
    1. System Settings - Configure Login/Registration - Allow Telegram Login
    2. Send the `/setdomain` command to @Botfather
    3. Select your bot, then enter `http(s)://your-website/login`
    4. The Telegram Bot name is the bot username without the `@`
13. 🎵 Suno API interface support (see the Integration Guide)
14. 🔄 Rerank model support, compatible with Cohere and Jina; can be integrated with Dify (see the Integration Guide)
15. OpenAI Realtime API support, including Azure channels

## Model Support

This version additionally supports:

1. Third-party model gpts (gpt-4-gizmo-*)
2. Midjourney-Proxy(Plus) interface (see the Integration Guide)
3. Custom channels with full API URL support
4. Suno API interface (see the Integration Guide)
5. Rerank models, supporting Cohere and Jina (see the Integration Guide)
6. Dify

You can add custom models gpt-4-gizmo-* in channels. These are third-party models and cannot be called with official OpenAI keys.
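For illustration only, requests to the gateway use the standard OpenAI-compatible chat format. In the sketch below, the base URL (`http://localhost:3000`), the token (`sk-xxxx`), and the gizmo model ID are placeholders for your own deployment, not values from the project docs:

```shell
# Hedged sketch: calling the gateway with the unified OpenAI-compatible format.
# http://localhost:3000, sk-xxxx, and the gizmo ID are placeholders for your own deployment.
curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxxx" \
  -d '{
        "model": "gpt-4-gizmo-g-xxxxxxxx",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```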

## Additional Configurations Beyond One API

- `GENERATE_DEFAULT_TOKEN`: generate an initial token for new users, default `false`
- `STREAMING_TIMEOUT`: streaming response timeout, default 60 seconds
- `DIFY_DEBUG`: output workflow and node info to the client for Dify channels, default `true`
- `FORCE_STREAM_OPTION`: override the client's `stream_options` parameter, default `true`
- `GET_MEDIA_TOKEN`: calculate image tokens, default `true`
- `GET_MEDIA_TOKEN_NOT_STREAM`: calculate image tokens in non-stream mode, default `true`
- `UPDATE_TASK`: update async tasks (Midjourney, Suno), default `true`
- `GEMINI_MODEL_MAP`: specify Gemini model versions (v1/v1beta), format `"model:version"`, comma-separated
- `COHERE_SAFETY_SETTING`: Cohere model safety setting, options `NONE`, `CONTEXTUAL`, `STRICT`, default `NONE`
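As an illustration only, these variables can be passed to the container as `-e` flags when deploying with Docker. The values below (the timeout, the Gemini model mapping, the safety setting) are placeholders, not recommendations:

```shell
# Hypothetical example: overriding a few of the settings above at container start.
# The values are illustrative; adjust them to your own deployment.
docker run --name new-api -d --restart always -p 3000:3000 \
  -e TZ=Asia/Shanghai \
  -e GENERATE_DEFAULT_TOKEN=true \
  -e STREAMING_TIMEOUT=120 \
  -e GEMINI_MODEL_MAP="gemini-1.5-pro:v1beta,gemini-1.5-flash:v1beta" \
  -e COHERE_SAFETY_SETTING=CONTEXTUAL \
  -v /home/ubuntu/data/new-api:/data \
  calciumion/new-api:latest
```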

## Deployment

> [!TIP]
> Latest Docker image: `calciumion/new-api:latest`
> Default account: `root`, password: `123456`
> Update command:
> ```shell
> docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower -cR
> ```

### Requirements
- Local database (default): SQLite (Docker deployment must mount `/data` directory)
- Remote database: MySQL >= 5.7.8, PgSQL >= 9.6

### Docker Deployment
### Using Docker Compose (Recommended)

```shell
# Clone the project
git clone https://github.com/Calcium-Ion/new-api.git
cd new-api

# Edit docker-compose.yml as needed

# Start
docker-compose up -d
```

### Direct Docker Image Usage

```shell
# SQLite deployment:
docker run --name new-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest

# MySQL deployment: add -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi" and adjust the database connection parameters as needed, for example:
docker run --name new-api -d --restart always -p 3000:3000 -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi" -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest
```

## Channel Retry
Channel retry is implemented and can be configured in `Settings->Operation Settings->General Settings`. **Enabling the cache is recommended.**
The first retry uses the same priority; the second retry falls back to the next priority, and so on.

### Cache Configuration
1. `REDIS_CONN_STRING`: Use Redis as cache
    + Example: `REDIS_CONN_STRING=redis://default:redispw@localhost:49153`
2. `MEMORY_CACHE_ENABLED`: Enable memory cache, default `false`
    + Example: `MEMORY_CACHE_ENABLED=true`
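A hedged example of enabling both caches for a Docker deployment; the Redis connection string is the placeholder value shown above and should point at your own Redis instance:

```shell
# Illustrative only: enable Redis and in-memory caching for the container.
docker run --name new-api -d --restart always -p 3000:3000 \
  -e REDIS_CONN_STRING="redis://default:redispw@localhost:49153" \
  -e MEMORY_CACHE_ENABLED=true \
  -e TZ=Asia/Shanghai \
  -v /home/ubuntu/data/new-api:/data \
  calciumion/new-api:latest
```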

### Why Some Errors Don't Retry
Requests that fail with error codes 400, 504, or 524 are not retried.
### To Enable Retry for 400
In `Channel->Edit`, set `Status Code Override` to:

```json
{
  "400": "500"
}
```

## Integration Guides

- Midjourney: see `Midjourney.md`
- Suno: see `Suno.md`
- Rerank: see `Rerank.md`

## Related Projects

- [One API](https://github.com/songquanpeng/one-api) (the upstream project this one is based on)

## 🌟 Star History

[Star History Chart]