An AI model aggregation, management, and relay/distribution system: one application to manage all of your AI models. It converts many large models to a unified calling format, supports the OpenAI, Claude, Gemini, and other API formats, and can be used by individuals or inside an enterprise for model management and channel distribution.

Seefs aacdc395c8 Merge pull request #2013 from seefs001/fix/ci 2 months ago
.github aacdc395c8 Merge pull request #2013 from seefs001/fix/ci 2 months ago
bin d84b0b0f5d chore: add model parameter to the time_test script (#245) 2 years ago
common 9f4a2d64a3 feat: add sora video submit task 2 months ago
constant 9f4a2d64a3 feat: add sora video submit task 2 months ago
controller 57e5d67f86 fix(channel): move log statement after sleep in auto-test loop 2 months ago
docs 51d71a6e1a ✨ feat: add Spanish feature request template to GitHub issue tracker for improved feature proposal submissions 2 months ago
dto e8966c7374 feat: pplx channel 2 months ago
electron 629a534798 chore(deps-dev): bump electron from 28.3.3 to 35.7.5 in /electron 2 months ago
logger 39a868faea 💱 feat(settings): introduce site-wide quota display type (USD/CNY/TOKENS/CUSTOM) 2 months ago
middleware 5f36e32821 feat: add openai sdk create 2 months ago
model 6897a9ffd8 fix: channel remark ignore issue 2 months ago
relay 07b099006c feat: add logging for model details and enhance action assignment in relay tasks 2 months ago
router 5a7f498629 Merge pull request #1997 from feitianbubu/pr/add-sora-fetch-task 2 months ago
service 76ab8a480a Merge pull request #1401 from feitianbubu/pr/add-qwen-channel-auto-disabled 2 months ago
setting a54baf4998 feat: sora add parameter validation and billing 2 months ago
types 5d4a0757f7 fix: ensure error message is set when it is empty in error handling #1972 2 months ago
web 07b099006c feat: add logging for model details and enhance action assignment in relay tasks 2 months ago
.dockerignore fe9b305232 fix: legal setting 2 months ago
.env.example c6cf1b98f8 feat(option): enhance UpdateOption to handle various value types and improve validation 3 months ago
.gitignore fe9b305232 fix: legal setting 2 months ago
Dockerfile 24bc24abaa feat: matrix ci 2 months ago
LICENSE 4d8189f21b ⚖️ docs(LICENSE): update license information from Apache 2.0 to New API Licensing 5 months ago
README.en.md 98261ec9fa chore: update README files 2 months ago
README.fr.md 98261ec9fa chore: update README files 2 months ago
README.ja.md 98261ec9fa chore: update README files 2 months ago
README.md 98261ec9fa chore: update README files 2 months ago
VERSION 7e80e2da3a fix: add a blank VERSION file (#135) 2 years ago
docker-compose.yml b0b275b236 chore(docker): add comment for compatibility with older Docker versions 2 months ago
go.mod 60dc910a27 fix: update jwt package import to v5 across multiple files 2 months ago
go.sum c4e0fc1837 chore: go version & sonic dep 2 months ago
main.go 8e10af82b1 fix(main): conditionally log missing .env file message based on debug mode 2 months ago
makefile 27bbd951f0 feat: use bun when develop locally 6 months ago
one-api.service c6717307d0 chore: update one-api.service 2 years ago

README.en.md

中文 | English | Français | 日本語

> [!NOTE]
> MT (Machine Translation): This document is machine translated. For the most accurate information, please refer to the Chinese version.

![new-api](/web/public/logo.png)

# New API 🍥 Next-Generation Large Model Gateway and AI Asset Management System


## 📝 Project Description

> [!NOTE]
> This is an open-source project developed based on One API.


## 🤝 Trusted Partners

No particular order

<img src="./docs/images/cherry-studio.png" alt="Cherry Studio" height="120" />
<img src="./docs/images/pku.png" alt="Peking University" height="120" />
<img src="./docs/images/ucloud.png" alt="UCloud" height="120" />
<img src="./docs/images/aliyun.png" alt="Alibaba Cloud" height="120" />
<img src="./docs/images/io-net.png" alt="IO.NET" height="120" />

## 📚 Documentation

For detailed documentation, please visit our official Wiki: https://docs.newapi.pro/

You can also access the AI-generated DeepWiki: Ask DeepWiki

## ✨ Key Features

New API offers a wide range of features; see the Features Introduction for details:

  1. 🎨 Brand-new UI
  2. 🌍 Multi-language support
  3. 💰 Online recharge functionality, currently supports EPay and Stripe
  4. 🔍 Support for querying usage quotas with keys (works with neko-api-key-tool)
  5. 🔄 Compatible with the original One API database
  6. 💵 Support for pay-per-use model pricing
  7. ⚖️ Support for weighted random channel selection
  8. 📈 Data dashboard (console)
  9. 🔒 Token grouping and model restrictions
  10. 🤖 Support for more authorization login methods (LinuxDO, Telegram, OIDC)
  11. 🔄 Support for Rerank models (Cohere and Jina), API Documentation
  12. ⚡ Support for OpenAI Realtime API (including Azure channels), API Documentation
  13. ⚡ Support for OpenAI Responses format, API Documentation
  14. ⚡ Support for Claude Messages format, API Documentation
  15. ⚡ Support for Google Gemini format, API Documentation
  16. 🧠 Support for setting reasoning effort through model name suffixes (see the example after this list):
    1. OpenAI o-series models
      • Add -high suffix for high reasoning effort (e.g.: o3-mini-high)
      • Add -medium suffix for medium reasoning effort (e.g.: o3-mini-medium)
      • Add -low suffix for low reasoning effort (e.g.: o3-mini-low)
    2. Claude thinking models
      • Add -thinking suffix to enable thinking mode (e.g.: claude-3-7-sonnet-20250219-thinking)
  17. 🔄 Thinking-to-content functionality
  18. 🔄 Model rate limiting for users
  19. 🔄 Request format conversion, supporting the following three conversions (see the example after this list):
    1. OpenAI Chat Completions => Claude Messages
    2. Claude Messages => OpenAI Chat Completions (can be used for Claude Code to call third-party models)
    3. OpenAI Chat Completions => Gemini Chat
  20. 💰 Cache billing support, which allows billing at a set ratio when cache is hit:
    1. Set the Prompt Cache Ratio option in System Settings-Operation Settings
    2. Set Prompt Cache Ratio in the channel, range 0-1, e.g., setting to 0.5 means billing at 50% when cache is hit
    3. Supported channels:
      • OpenAI
      • Azure
      • DeepSeek
      • Claude
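
A minimal sketch of item 16 above, assuming a local deployment on port 3000 and the gateway's OpenAI-compatible Chat Completions route; the base URL, token, and model name are placeholders, not values taken from this repository. Appending `-high` to `o3-mini` requests high reasoning effort:

```bash
# Placeholder base URL and token; replace with your own deployment values.
curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-your-new-api-token" \
  -d '{
    "model": "o3-mini-high",
    "messages": [{"role": "user", "content": "Summarize the plan in three bullet points."}]
  }'
```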

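As a rough illustration of conversion 2 in item 19 (Claude Messages => OpenAI Chat Completions, the path Claude Code uses to call third-party models), the sketch below sends a Claude Messages-format request while targeting an OpenAI-style model; the URL, headers, token, and model name are assumptions, not values confirmed by this repository:

```bash
# Placeholder values throughout; the gateway converts this Claude Messages request
# into the upstream channel's native format.
curl http://localhost:3000/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: sk-your-new-api-token" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "gpt-4o",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello from a Claude-format client."}]
  }'
```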
## Model Support

This version supports multiple models; see API Documentation - Relay Interface for details:

  1. Third-party models gpts (gpt-4-gizmo-*)
  2. Third-party channel Midjourney-Proxy(Plus) interface, API Documentation
  3. Third-party channel Suno API interface, API Documentation
  4. Custom channels, supporting full call address input
  5. Rerank models (Cohere and Jina), API Documentation
  6. Claude Messages format, API Documentation
  7. Google Gemini format, API Documentation
  8. Dify, currently only supports chatflow
  9. For more interfaces, please refer to API Documentation
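
To check which of the models above are exposed to a given key, here is a hedged sketch assuming the gateway mirrors the standard OpenAI-compatible model listing route; the base URL and token are placeholders:

```bash
# Placeholder token; returns the model list visible to this key in OpenAI list format.
curl http://localhost:3000/v1/models \
  -H "Authorization: Bearer sk-your-new-api-token"
```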

## Environment Variable Configuration

For detailed configuration instructions, please refer to Installation Guide - Environment Variables Configuration:

  • GENERATE_DEFAULT_TOKEN: Whether to generate initial tokens for newly registered users, default is false
  • STREAMING_TIMEOUT: Streaming response timeout, default is 300 seconds
  • DIFY_DEBUG: Whether to output workflow and node information for Dify channels, default is true
  • GET_MEDIA_TOKEN: Whether to count image tokens, default is true
  • GET_MEDIA_TOKEN_NOT_STREAM: Whether to count image tokens in non-streaming cases, default is true
  • UPDATE_TASK: Whether to update asynchronous tasks (Midjourney, Suno), default is true
  • GEMINI_VISION_MAX_IMAGE_NUM: Maximum number of images for Gemini models, default is 16
  • MAX_FILE_DOWNLOAD_MB: Maximum file download size in MB, default is 20
  • CRYPTO_SECRET: Encryption key used for encrypting Redis database content
  • AZURE_DEFAULT_API_VERSION: Azure channel default API version, default is 2025-04-01-preview
  • NOTIFICATION_LIMIT_DURATION_MINUTE: Notification limit duration, default is 10 minutes
  • NOTIFY_LIMIT_COUNT: Maximum number of user notifications within the specified duration, default is 2
  • ERROR_LOG_ENABLED: Whether to record and display error logs, default is false
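
A hedged sketch of passing a few of these variables to the Docker image used later in this README; the values shown are examples only, and any variable you omit keeps the default listed above:

```bash
# Example values only; omit any variable to keep its default.
docker run --name new-api -d --restart always -p 3000:3000 \
  -e TZ=Asia/Shanghai \
  -e GENERATE_DEFAULT_TOKEN=true \
  -e STREAMING_TIMEOUT=300 \
  -e MAX_FILE_DOWNLOAD_MB=20 \
  -e ERROR_LOG_ENABLED=true \
  -v /home/ubuntu/data/new-api:/data \
  calciumion/new-api:latest
```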

## Deployment

For detailed deployment guides, please refer to Installation Guide - Deployment Methods:

> [!TIP]
> Latest Docker image: `calciumion/new-api:latest`

### Multi-machine Deployment Considerations

  • The environment variable SESSION_SECRET must be set; otherwise, login state will be inconsistent across machines
  • If Redis is shared, CRYPTO_SECRET must also be set; otherwise, Redis content cannot be accessed across machines
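
A minimal sketch of these two requirements, assuming every node shares one MySQL database and one Redis instance; the host names, password, and both secrets are placeholders and must be identical on every machine:

```bash
# Run the same command on every node; SESSION_SECRET and CRYPTO_SECRET must match across machines.
docker run --name new-api -d --restart always -p 3000:3000 \
  -e SESSION_SECRET=replace-with-a-long-random-string \
  -e CRYPTO_SECRET=replace-with-another-long-random-string \
  -e SQL_DSN="root:123456@tcp(db-host:3306)/oneapi" \
  -e REDIS_CONN_STRING="redis://redis-host:6379" \
  -v /home/ubuntu/data/new-api:/data \
  calciumion/new-api:latest
```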

### Deployment Requirements

  • Local database (default): SQLite (Docker deployment must mount the /data directory)
  • Remote database: MySQL version >= 5.7.8, PgSQL version >= 9.6

### Deployment Methods

#### Using the BaoTa Panel Docker Feature

Install BaoTa Panel (version 9.2.0 or above), then find New-API in the application store and install it. (Tutorial with images)

#### Using Docker Compose (Recommended)

# Download the project
git clone https://github.com/Calcium-Ion/new-api.git
cd new-api
# Edit docker-compose.yml as needed
# Start
docker-compose up -d

#### Using Docker Image Directly

# Using SQLite
docker run --name new-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest

# Using MySQL
docker run --name new-api -d --restart always -p 3000:3000 -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi" -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest

## Channel Retry and Cache

Channel retry is built in; set the number of retries under Settings -> Operation Settings -> General Settings -> Failure Retry Count. Enabling caching alongside retries is recommended.

### Cache Configuration Method

  1. REDIS_CONN_STRING: set Redis as the cache
  2. MEMORY_CACHE_ENABLED: enable the in-memory cache (no need to set this manually when Redis is configured)
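
A hedged example of the two options above, again as environment variables on the Docker image; the Redis address is a placeholder:

```bash
# Option 1: Redis cache (placeholder address)
docker run --name new-api -d --restart always -p 3000:3000 \
  -e REDIS_CONN_STRING="redis://redis-host:6379" \
  -v /home/ubuntu/data/new-api:/data \
  calciumion/new-api:latest

# Option 2: memory cache only (no Redis)
#   replace REDIS_CONN_STRING with -e MEMORY_CACHE_ENABLED=true
```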

## API Documentation

For detailed API documentation, please refer to the API Documentation.

## Related Projects

Other projects based on New API:

## Help and Support

If you have any questions, please refer to Help and Support.

## 🌟 Star History

Star History Chart