
docs: update plugin docs (#219)

zijiren committed 7 months ago
Parent commit: f7870330b1
3 changed files with 120 additions and 48 deletions
  1. README.md (+37, -1)
  2. README.zh.md (+36, -0)
  3. core/relay/plugin/cache/README.md (+47, -47)

+ 37 - 1
README.md

@@ -45,6 +45,12 @@ AI Proxy is a powerful, production-ready AI gateway that provides intelligent re
 - **Embedded MCP**: Built-in MCP servers with configuration templates
 - **OpenAPI to MCP**: Automatic conversion of OpenAPI specs to MCP tools
 
+### 🔌 **Plugin System**
+
+- **Cache Plugin**: High-performance caching for identical requests with Redis/memory storage
+- **Web Search Plugin**: Real-time web search capabilities with support for Google, Bing, and Arxiv
+- **Extensible Architecture**: Easy to add custom plugins for additional functionality
+
 ### 🔧 **Advanced Capabilities**
 
 - **Multi-format Support**: Text, image, audio, and document processing
@@ -61,6 +67,10 @@ graph TB
     Gateway --> Auth[Authentication & Authorization]
     Gateway --> Router[Intelligent Router]
     Gateway --> Monitor[Monitoring & Analytics]
+    Gateway --> Plugins[Plugin System]
+    
+    Plugins --> CachePlugin[Cache Plugin]
+    Plugins --> SearchPlugin[Web Search Plugin]
     
     Router --> Provider1[OpenAI]
     Router --> Provider2[Anthropic]
@@ -161,6 +171,32 @@ IP_GROUPS_BAN_THRESHOLD=10     # IP sharing ban threshold
 
 </details>
 
+## 🔌 Plugins
+
+AI Proxy supports a plugin system that extends its functionality. Currently available plugins:
+
+### Cache Plugin
+
+The Cache Plugin provides high-performance caching for AI API requests:
+
+- **Dual Storage**: Supports both Redis and in-memory caching
+- **Content-based Keys**: Uses SHA256 hash of request body
+- **Configurable TTL**: Custom time-to-live for cached items
+- **Size Limits**: Prevents memory issues with configurable limits
+
+[View Cache Plugin Documentation](./core/relay/plugin/cache/README.md)
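
For example, a minimal configuration sketch (field names follow the Cache Plugin README linked above; the values are only illustrative, and how this object nests inside the overall gateway configuration is documented there):

```json
{
  "enable_plugin": true,
  "ttl": 600,
  "add_cache_hit_header": true
}
```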
+
+### Web Search Plugin
+
+The Web Search Plugin adds real-time web search capabilities:
+
+- **Multiple Search Engines**: Supports Google, Bing, and Arxiv
+- **Smart Query Rewriting**: AI-powered query optimization
+- **Reference Management**: Automatic citation formatting
+- **Dynamic Control**: User-controllable search depth
+
+[View Web Search Plugin Documentation](./core/relay/plugin/web-search/README.md)
+
 ## 📚 API Documentation
 
 ### Interactive API Explorer
@@ -255,4 +291,4 @@ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file
 
 - OpenAI for the API specification
 - The open-source community for various integrations
-- All contributors and users of AI Proxy
+- All contributors and users of AI Proxy

+ 36 - 0
README.zh.md

@@ -45,6 +45,12 @@ AI Proxy 是一个强大的、生产就绪的 AI 网关,提供智能请求路
 - **嵌入式 MCP**:带配置模板的内置 MCP 服务器
 - **OpenAPI 转 MCP**:自动将 OpenAPI 规范转换为 MCP 工具
 
+### 🔌 **插件系统**
+
+- **缓存插件**:高性能缓存,支持 Redis/内存存储,用于相同请求
+- **网络搜索插件**:实时网络搜索功能,支持 Google、Bing 和 Arxiv
+- **可扩展架构**:易于添加自定义插件以实现额外功能
+
 ### 🔧 **高级功能**
 
 - **多格式支持**:文本、图像、音频和文档处理
@@ -61,6 +67,10 @@ graph TB
     Gateway --> Auth[身份验证与授权]
     Gateway --> Router[智能路由器]
     Gateway --> Monitor[监控与分析]
+    Gateway --> Plugins[插件系统]
+    
+    Plugins --> CachePlugin[缓存插件]
+    Plugins --> SearchPlugin[网络搜索插件]
     
     Router --> Provider1[OpenAI]
     Router --> Provider2[Anthropic]
@@ -161,6 +171,32 @@ IP_GROUPS_BAN_THRESHOLD=10     # IP 共享禁用阈值
 
 </details>
 
+## 🔌 插件
+
+AI Proxy 支持插件系统来扩展其功能。当前可用的插件:
+
+### 缓存插件
+
+缓存插件为 AI API 请求提供高性能缓存:
+
+- **双重存储**:支持 Redis 和内存缓存
+- **基于内容的键**:使用请求体的 SHA256 哈希
+- **可配置 TTL**:缓存项的自定义生存时间
+- **大小限制**:通过可配置限制防止内存问题
+
+[查看缓存插件文档](./core/relay/plugin/cache/README.zh.md)
+
+### 网络搜索插件
+
+网络搜索插件添加实时网络搜索功能:
+
+- **多搜索引擎**:支持 Google、Bing 和 Arxiv
+- **智能查询重写**:AI 驱动的查询优化
+- **引用管理**:自动引用格式化
+- **动态控制**:用户可控的搜索深度
+
+[查看网络搜索插件文档](./core/relay/plugin/web-search/README.zh.md)
+
 ## 📚 API 文档
 
 ### 交互式 API 浏览器

+ 47 - 47
core/relay/plugin/cache/README.md

@@ -1,20 +1,20 @@
-# Cache 插件配置指南
+# Cache Plugin Configuration Guide
 
-## 概述
+## Overview
 
-Cache 插件是一个高性能的 AI API 请求缓存解决方案,通过存储和重用相同请求的响应来帮助减少延迟和成本。它支持内存缓存和 Redis,适用于分布式部署。
+The Cache Plugin is a high-performance caching solution for AI API requests that helps reduce latency and costs by storing and reusing responses for identical requests. It supports both in-memory caching and Redis, making it suitable for distributed deployments.
 
-## 功能特性
+## Features
 
-- **双重存储**:支持内存缓存和 Redis,提供灵活的部署选项
-- **自动降级**:Redis 不可用时自动降级到内存缓存
-- **基于内容的缓存**:使用请求体的 SHA256 哈希值生成缓存键
-- **可配置 TTL**:为缓存项设置自定义生存时间
-- **大小限制**:可配置最大项目大小以防止内存问题
-- **缓存头部**:可选的头部信息来指示缓存命中
-- **零拷贝设计**:通过缓冲池实现高效的内存使用
+- **Dual Storage**: Supports both in-memory cache and Redis for flexible deployment options
+- **Automatic Fallback**: Automatically falls back to in-memory cache when Redis is unavailable
+- **Content-Based Caching**: Uses SHA256 hash of request body to generate cache keys
+- **Configurable TTL**: Set custom time-to-live for cached items
+- **Size Limits**: Configurable maximum item size to prevent memory issues
+- **Cache Headers**: Optional headers to indicate cache hits
+- **Zero-Copy Design**: Efficient memory usage through buffer pooling
 
-## 配置示例
+## Configuration Example
 
 ```json
 {
@@ -32,51 +32,51 @@ Cache 插件是一个高性能的 AI API 请求缓存解决方案,通过存储
 }
 ```
 
-## 配置字段说明
+## Configuration Fields
 
-### 插件配置
+### Plugin Configuration
 
-| 字段 | 类型 | 必填 | 默认值 | 说明 |
-|------|------|------|--------|------|
-| `enable_plugin` | bool | 是 | false | 是否启用 Cache 插件 |
-| `ttl` | int | 否 | 300 | 缓存项的生存时间(秒) |
-| `item_max_size` | int | 否 | 1048576 (1MB) | 单个缓存项的最大大小(字节) |
-| `add_cache_hit_header` | bool | 否 | false | 是否添加指示缓存命中的头部 |
-| `cache_hit_header` | string | 否 | "X-Aiproxy-Cache" | 缓存命中头部的名称 |
+| Field | Type | Required | Default | Description |
+|-------|------|----------|---------|-------------|
+| `enable_plugin` | bool | Yes | false | Whether to enable the Cache plugin |
+| `ttl` | int | No | 300 | Time-to-live for cached items (in seconds) |
+| `item_max_size` | int | No | 1048576 (1MB) | Maximum size of a single cached item (in bytes) |
+| `add_cache_hit_header` | bool | No | false | Whether to add a header indicating cache hit |
+| `cache_hit_header` | string | No | "X-Aiproxy-Cache" | Name of the cache hit header |
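
Putting the table together, a sketch that keeps every documented default while switching the plugin on (only the plugin's own fields are shown; the enclosing structure follows the Configuration Example above):

```json
{
  "enable_plugin": true,
  "ttl": 300,
  "item_max_size": 1048576,
  "add_cache_hit_header": false,
  "cache_hit_header": "X-Aiproxy-Cache"
}
```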
 
-## 工作原理
+## How It Works
 
-### 缓存键生成
+### Cache Key Generation
 
-插件基于以下内容生成缓存键:
+The plugin generates cache keys based on:
 
-1. 请求模式(如 chat completions)
-2. 请求体的 SHA256 哈希值
+1. Request pattern (e.g., chat completions)
+2. SHA256 hash of the request body
 
-这确保了相同的请求会命中缓存,而不同的请求不会相互干扰。
+This ensures identical requests hit the cache while different requests don't interfere with each other.
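
As a rough Go sketch of this idea (not the plugin's actual code; the `aiproxy:cache:` prefix and the `mode` parameter name are assumptions):

```go
package cache

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// cacheKey derives a content-based key: the request pattern (e.g.
// "chat-completions") plus the SHA256 hash of the raw request body,
// so identical bodies map to the same key.
func cacheKey(mode string, body []byte) string {
	sum := sha256.Sum256(body)
	return fmt.Sprintf("aiproxy:cache:%s:%s", mode, hex.EncodeToString(sum[:]))
}
```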
 
-### 缓存存储
+### Cache Storage
 
-插件使用两层缓存策略:
+The plugin uses a two-tier caching strategy:
 
-1. **Redis(如果可用)**:分布式缓存的主要存储
-2. **内存**:备用存储或未配置 Redis 时的主要存储
+1. **Redis (if available)**: Primary storage for distributed caching
+2. **Memory**: Fallback storage or primary when Redis is not configured
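
A minimal sketch of the two-tier lookup, assuming a generic `Store` interface rather than the plugin's real types:

```go
package cache

import (
	"context"
	"sync"
	"time"
)

// Store abstracts a cache backend (a Redis client or the in-memory map below).
type Store interface {
	Get(ctx context.Context, key string) ([]byte, bool)
	Set(ctx context.Context, key string, val []byte, ttl time.Duration)
}

// memoryStore is a trivial in-memory backend used as the fallback tier.
type memoryStore struct {
	mu    sync.RWMutex
	items map[string][]byte
}

func newMemoryStore() *memoryStore {
	return &memoryStore{items: make(map[string][]byte)}
}

func (m *memoryStore) Get(_ context.Context, key string) ([]byte, bool) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	v, ok := m.items[key]
	return v, ok
}

func (m *memoryStore) Set(_ context.Context, key string, val []byte, _ time.Duration) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.items[key] = val // TTL eviction omitted for brevity
}

// tieredStore tries Redis first and falls back to memory when Redis
// is unavailable or not configured.
type tieredStore struct {
	redis  Store // nil when Redis is not configured
	memory Store
}

func (t *tieredStore) Get(ctx context.Context, key string) ([]byte, bool) {
	if t.redis != nil {
		if v, ok := t.redis.Get(ctx, key); ok {
			return v, true
		}
	}
	return t.memory.Get(ctx, key)
}

func (t *tieredStore) Set(ctx context.Context, key string, val []byte, ttl time.Duration) {
	if t.redis != nil {
		t.redis.Set(ctx, key, val, ttl)
	}
	t.memory.Set(ctx, key, val, ttl)
}
```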
 
-### 请求流程
+### Request Flow
 
-1. **请求阶段**:
-   - 插件检查是否启用缓存
-   - 从请求体生成缓存键
-   - 查找缓存(先查 Redis,再查内存)
-   - 如果命中,立即返回缓存的响应
-   - 如果未命中,继续请求上游 API
+1. **Request Phase**:
+   - Plugin checks if caching is enabled
+   - Generates cache key from request body
+   - Looks up cache (Redis first, then memory)
+   - If hit, immediately returns cached response
+   - If miss, continues to upstream API
 
-2. **响应阶段**:
-   - 捕获响应体和头部
-   - 如果响应成功,存储到缓存
-   - 遵守大小限制以防止内存问题
+2. **Response Phase**:
+   - Captures response body and headers
+   - If response is successful, stores in cache
+   - Respects size limits to prevent memory issues
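
Continuing the same illustrative package, the flow above might look roughly like this (it reuses the `Store` interface and `cacheKey` helper sketched earlier; `upstream` stands in for the call to the real provider):

```go
package cache

import (
	"context"
	"time"
)

// Config mirrors the fields from the configuration table above.
type Config struct {
	TTL         int // seconds
	ItemMaxSize int // bytes
}

// handleRequest sketches the request/response flow: look up the cache,
// short-circuit on a hit, otherwise call upstream and cache successful
// responses that fit within the size limit.
func handleRequest(
	ctx context.Context,
	store Store,
	cfg Config,
	mode string,
	body []byte,
	upstream func(body []byte) (status int, resp []byte, err error),
) (int, []byte, error) {
	key := cacheKey(mode, body)

	// Request phase: return a cached response immediately on a hit.
	if resp, ok := store.Get(ctx, key); ok {
		return 200, resp, nil
	}

	// Miss: forward the request to the upstream API.
	status, resp, err := upstream(body)

	// Response phase: store only successful responses, respecting the size limit.
	if err == nil && status == 200 && len(resp) <= cfg.ItemMaxSize {
		store.Set(ctx, key, resp, time.Duration(cfg.TTL)*time.Second)
	}
	return status, resp, err
}
```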
 
-## 使用示例
+## Usage Example
 
 ```json
 {
@@ -91,17 +91,17 @@ Cache 插件是一个高性能的 AI API 请求缓存解决方案,通过存储
 }
 ```
 
-## 响应头部示例
+## Response Header Example
 
-当 `add_cache_hit_header` 启用时:
+When `add_cache_hit_header` is enabled:
 
-**缓存命中:**
+**Cache Hit:**
 
 ```
 X-Aiproxy-Cache: hit
 ```
 
-**缓存未命中:**
+**Cache Miss:**
 
 ```
 X-Aiproxy-Cache: miss