diff --git a/README_en.md b/README_en.md
index 0f2e8d5..bc26366 100644
--- a/README_en.md
+++ b/README_en.md
@@ -46,164 +46,9 @@ Access all reverse engineered LLM libs by standard OpenAI API format.
-### Supported LLM libs
+## Documentation
-|Adapter|Multi Round|Stream|Function Call|Status|Comment|
-|---|---|---|---|---|---|
-|[acheong08/ChatGPT](https://github.com/acheong08/ChatGPT)|✅|✅|❌|✅|ChatGPT Web Version|
-|[KoushikNavuluri/Claude-API](https://github.com/KoushikNavuluri/Claude-API)|✅|❌|❌|✅|Claude Web Version|
-|[dsdanielpark/Bard-API](https://github.com/dsdanielpark/Bard-API)|✅|❌|❌|✅|Google Bard Web Version|
-|[xtekky/gpt4free](https://github.com/xtekky/gpt4free)|✅|✅|❌|✅|gpt4free cracked multiple platforms|
-|[Soulter/hugging-chat-api](https://github.com/Soulter/hugging-chat-api)|✅|✅|❌|✅|hubbingface chat model|
-|[xw5xr6/revTongYi](https://github.com/xw5xr6/revTongYi)|✅|✅|❌|✅|Aliyun TongYi QianWen Web Version|
+Please refer to the documentation for deployment and configuration:
-### Supported API paths
-
-- `/v1/chat/completions`
-
-File a issue or pull request if you want to add more.
-
-## Setup
-
-### Docker (Recommended)
-
-```bash
-docker run -d -p 3000:3000 --restart always --name free-one-api -v ~/free-one-api/data:/app/data rockchin/free-one-api
-```
-
-This command will start free-one-api and specify `~/free-one-api/data` as the container's file storage mapping directory.
-Then you can open the admin page at `http://localhost:3000/`.
-
-### Manual
-
-```bash
-git clone https://github.com/RockChinQ/free-one-api.git
-cd free-one-api
-
-cd web && npm install && npm run build && cd ..
-
-pip install -r requirements.txt
-python main.py
-```
-
-then you can open the admin page at `http://localhost:3000/`.
-
-## Usage
-
-1. Create channel on the admin page, create a new key.
-
-
-
-2. Set the url (e.g. http://localhost:3000/v1 ) as OpenAI endpoint, and set the generated key as OpenAI api key.
-3. Then you can use the OpenAI API to access the reverse engineered LLM lib.
-
-```curl
-# curl example
-curl http://localhost:3000/v1/chat/completions \
- -X POST \
- -H "Content-Type: application/json" \
- -H "Authorization: Bearer $OPENAI_API_KEY" \
- -d '{
- "model": "gpt-3.5-turbo",
- "messages": [
- {
- "role": "system",
- "content": "You are a helpful assistant."
- },
- {
- "role": "user",
- "content": "Hello!"
- }
- ],
- "stream": true
- }'
-```
-
-```python
-# python example
-import openai
-
-openai.api_base = "http://localhost:3000/v1"
-openai.api_key = "generated key"
-
-response = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
- messages=[
- {
- "role": "user",
- "content": "hello, how are you?"
- }
- ],
- stream=False,
-)
-
-print(response)
-```
-
-### Configurations
-
-Configuration file is saved at `data/config.yaml`
-
-```yaml
-database:
- # SQLite DB file path
- path: ./data/free_one_api.db
- type: sqlite
-logging:
- debug: false # Enable debug log
-misc:
- # Reverse proxy address for acheong08/ChatGPT adapter.
- # Default public reverse proxy may be unstable, it is recommended to build your own:
- # https://github.com/acheong08/ChatGPT-Proxy-V4
- chatgpt_api_base: https://chatproxy.rockchin.top/api/
-# Random advertisement, will be appended to the end of each response
-random_ad:
- # advertisement list
- ad_list:
- - ' (This response is sponsored by Free One API. Consider star the project on GitHub:
- https://github.com/RockChinQ/free-one-api )'
- # Enable random ad
- enabled: false
- # Random ad rate
- rate: 0.05
-router:
- # Backend listen port
- port: 3000
- # Admin page login password
- token: '12345678'
-watchdog:
- heartbeat:
- # Max fail times
- fail_limit: 3
- # Heartbeat check interval (seconds)
- interval: 1800
- # Single channel heartbeat check timeout (seconds)
- timeout: 300
-web:
- # Frontend page path
- frontend_path: ./web/dist/
-```
-
-## Quick Test
-
-### Demo
-
-Allow to login and modify the channel/apikey settings.Reset every 30 minutes(xx:00/xx:30).
-
-Address: https://foa-demo.rockchin.top
-Password: 12345678
-
-### Test channel
-
-Can only use the channel, can't login:
-
-api_base: https://foa.rockchin.top/v1
-api_key: sk-foaDfZxzvfrwfqkBDJEMq7C0rdXkhOjXx4aM23pH42tv8SJ4
-model: gpt-3.5-turbo
-
-## Performance
-
-Gantt chart of request time with 4 channel enabled and 16 threads in client side, querying question "write a quick sort in Java":
-(Channel labelled with `Channel ID `, X axis is time in seconds)
-
-
\ No newline at end of file
+- GitHub Page: https://rockchinq.github.io/free-one-api
+- Self-deployment documentation: https://free-one-api.rockchin.top
diff --git a/docs/en/Adapters.md b/docs/en/Adapters.md
new file mode 100644
index 0000000..1783506
--- /dev/null
+++ b/docs/en/Adapters.md
@@ -0,0 +1,115 @@
+# Adapters
+
+Free One API currently supports multiple LLM reverse engineering libraries. Each channel uses a corresponding adapter, which converts the client's request into a request for the reverse engineering library and converts the library's response back into a response for the client.
+
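+Conceptually, an adapter performs the mapping below. This is only an illustrative sketch, not free-one-api's actual internal interface; the `ExampleAdapter` class, the `lib_client` object, and its `ask()` method are hypothetical names:
+
+```python
+# Illustrative sketch only -- not free-one-api's real adapter API.
+import time
+import uuid
+
+
+class ExampleAdapter:
+    """Hypothetical adapter: OpenAI-format request in, OpenAI-format response out."""
+
+    def __init__(self, lib_client):
+        # lib_client stands in for a reverse engineering library client.
+        self.lib_client = lib_client
+
+    def chat_completions(self, request: dict) -> dict:
+        # 1. Flatten the OpenAI-format messages into the library's input (simplified).
+        prompt = "\n".join(m["content"] for m in request["messages"])
+        # 2. Call the reverse engineering library (hypothetical method name).
+        reply_text = self.lib_client.ask(prompt)
+        # 3. Wrap the reply back into an OpenAI-format chat completion.
+        return {
+            "id": f"chatcmpl-{uuid.uuid4().hex}",
+            "object": "chat.completion",
+            "created": int(time.time()),
+            "model": request.get("model", "gpt-3.5-turbo"),
+            "choices": [
+                {
+                    "index": 0,
+                    "message": {"role": "assistant", "content": reply_text},
+                    "finish_reason": "stop",
+                }
+            ],
+        }
+```
+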
+## acheong08/ChatGPT
+
+ChatGPT official website reverse engineering library
+
+### Configuration
+
+1. Select `acheong08/ChatGPT` as `Adapter`
+
+![select adapter](assets/select_adapter.png)
+
+2. Go to `chat.openai.com` and log in to your account
+
+3. Open `https://chat.openai.com/api/auth/session` directly in the browser and copy the `access_token` value from the response
+
+![get access_token](assets/get_actoken.png)
+
+4. Enter the following in the `Config` column:
+
+```json
+{
+ "access_token": "your access token"
+}
+```
+
+5. Save and test
+
+### Reverse proxy
+
+ChatGPT requires a reverse proxy to bypass Cloudflare's restrictions. Free One API defaults to the proxy address provided by the developer, `https://chatproxy.rockchin.top/api/`, but it is under heavy load, so it is strongly recommended to host your own reverse proxy.
+
+Set up your own proxy (see https://github.com/acheong08/ChatGPT-Proxy-V4 , the project recommended in the configuration file), then edit `misc.chatgpt_api_base` in `data/config.yaml` to point to your reverse proxy address.
+
+You can also set it directly in the `Config` column when creating the `acheong08/ChatGPT` adapter:
+
+```json
+{
+ "reverse_proxy": "your reverse proxy address"
+}
+```
+
+If not set, the `misc.chatgpt_api_base` field in the configuration file will be used as the reverse proxy address.
+
+## KoushikNavuluri/Claude-API
+
+Anthropic Claude official website reverse engineering library
+
+### Configuration
+
+1. Select `KoushikNavuluri/Claude-API` as `Adapter`
+
+2. Log in to `claude.ai`, press `F12` to open the developer tools, select the `Network` tab, pick any request, and copy the `Cookie` string from the request headers
+
+![claude_get_cookie](assets/claude_cookie.png)
+
+3. Enter the following in the `Config` column:
+
+```json
+{
+ "cookie": "your cookie"
+}
+```
+
+## xtekky/gpt4free
+
+xtekky/gpt4free integrates reverse engineering libraries for multiple platforms
+
+### Configuration
+
+1. Select `xtekky/gpt4free` as `Adapter`
+
+2. No authentication required, just save
+
+## Soulter/hugging-chat-api
+
+huggingface.co/chat official website reverse engineering library
+
+### Configuration
+
+1. Register a `HuggingFace` account
+
+2. Select `Soulter/hugging-chat-api` as `Adapter`
+
+3. Enter the following in the `Config` column:
+
+```json
+{
+ "email": "HuggingFace Email",
+ "passwd": "HuggingFace Password"
+}
+```
+
+## xw5xr6/revTongYi
+
+Aliyun TongYi QianWen official website reverse engineering library
+
+### Configuration
+
+1. Select `xw5xr6/revTongYi` as `Adapter`
+
+2. Go to the Aliyun TongYi QianWen website and log in to your account
+
+3. Refer to the configuration method of Claude above to obtain the `Cookie` string
+
+4. Enter the following in the `Config` column:
+
+```json
+{
+ "cookie": "通义千问cookie"
+}
+```
diff --git a/docs/en/Config.md b/docs/en/Config.md
new file mode 100644
index 0000000..46c0ab9
--- /dev/null
+++ b/docs/en/Config.md
@@ -0,0 +1,43 @@
+# Configurations
+
+The configuration file is saved at `data/config.yaml`.
+
+```yaml
+database:
+ # SQLite DB file path
+ path: ./data/free_one_api.db
+ type: sqlite
+logging:
+ debug: false # Enable debug log
+misc:
+ # Reverse proxy address for acheong08/ChatGPT adapter.
+ # Default public reverse proxy may be unstable, it is recommended to build your own:
+ # https://github.com/acheong08/ChatGPT-Proxy-V4
+ chatgpt_api_base: https://chatproxy.rockchin.top/api/
+# Random advertisement, will be appended to the end of each response
+random_ad:
+ # advertisement list
+ ad_list:
+ - ' (This response is sponsored by Free One API. Consider star the project on GitHub:
+ https://github.com/RockChinQ/free-one-api )'
+ # Enable random ad
+ enabled: false
+ # Random ad rate
+ rate: 0.05
+router:
+ # Backend listen port
+ port: 3000
+ # Admin page login password
+ token: '12345678'
+watchdog:
+ heartbeat:
+ # Max fail times
+ fail_limit: 3
+ # Heartbeat check interval (seconds)
+ interval: 1800
+ # Single channel heartbeat check timeout (seconds)
+ timeout: 300
+web:
+ # Frontend page path
+ frontend_path: ./web/dist/
+```
\ No newline at end of file
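+
+If you want to verify what the service will read at startup, the file can be inspected with PyYAML. A minimal sketch, assuming PyYAML is installed (`pip install pyyaml`) and the file sits at the default path:
+
+```python
+# Minimal sketch: inspect data/config.yaml with PyYAML (assumed installed).
+import yaml
+
+with open("data/config.yaml", "r", encoding="utf-8") as f:
+    config = yaml.safe_load(f)
+
+print("listen port:", config["router"]["port"])
+print("debug logging:", config["logging"]["debug"])
+print("watchdog interval (s):", config["watchdog"]["heartbeat"]["interval"])
+```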
diff --git a/docs/en/Demo.md b/docs/en/Demo.md
new file mode 100644
index 0000000..7763f52
--- /dev/null
+++ b/docs/en/Demo.md
@@ -0,0 +1,17 @@
+
+# Quick Test
+
+## Demo
+
+You can log in and modify the channel/API key settings. The demo resets every 30 minutes (xx:00 / xx:30).
+
+- Address: https://foa-demo.rockchin.top
+- Password: 12345678
+
+## Test channel
+
+You can only use this channel; logging in is not possible:
+
+- api_base: `https://foa.rockchin.top/v1`
+- api_key: `sk-foaDfZxzvfrwfqkBDJEMq7C0rdXkhOjXx4aM23pH42tv8SJ4`
+- model: `gpt-3.5-turbo`
\ No newline at end of file
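+
+For example, the test channel can be called with the `openai` Python package (0.x interface, matching the Usage examples). A minimal sketch:
+
+```python
+# Minimal sketch: call the public test channel via the openai 0.x interface.
+import openai
+
+openai.api_base = "https://foa.rockchin.top/v1"
+openai.api_key = "sk-foaDfZxzvfrwfqkBDJEMq7C0rdXkhOjXx4aM23pH42tv8SJ4"
+
+response = openai.ChatCompletion.create(
+    model="gpt-3.5-turbo",
+    messages=[{"role": "user", "content": "hello"}],
+)
+print(response["choices"][0]["message"]["content"])
+```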
diff --git a/docs/en/README.md b/docs/en/README.md
index a5c0127..8d5a3f4 100644
--- a/docs/en/README.md
+++ b/docs/en/README.md
@@ -1,3 +1,25 @@
# Free One API Documentation
-WIP
\ No newline at end of file
+## Supported LLM libs
+
+|Adapter|Multi Round|Stream|Function Call|Status|Comment|
+|---|---|---|---|---|---|
+|[acheong08/ChatGPT](https://github.com/acheong08/ChatGPT)|✅|✅|❌|✅|ChatGPT Web Version|
+|[KoushikNavuluri/Claude-API](https://github.com/KoushikNavuluri/Claude-API)|✅|❌|❌|✅|Claude Web Version|
+|[dsdanielpark/Bard-API](https://github.com/dsdanielpark/Bard-API)|✅|❌|❌|✅|Google Bard Web Version|
+|[xtekky/gpt4free](https://github.com/xtekky/gpt4free)|✅|✅|❌|✅|gpt4free cracked multiple platforms|
+|[Soulter/hugging-chat-api](https://github.com/Soulter/hugging-chat-api)|✅|✅|❌|✅|HuggingFace chat model|
+|[xw5xr6/revTongYi](https://github.com/xw5xr6/revTongYi)|✅|✅|❌|✅|Aliyun TongYi QianWen Web Version|
+
+## Supported API paths
+
+- `/v1/chat/completions`
+
+File an issue or pull request if you want to add more.
+
+## Performance
+
+Gantt chart of request time with 4 channels enabled and 16 client-side threads, each asking "write a quick sort in Java":
+(Channels are labelled with `Channel ID`, X axis is time in seconds)
+
+![Load Balance](assets/load_balance.png)
diff --git a/docs/en/Setup.md b/docs/en/Setup.md
new file mode 100644
index 0000000..e019d4f
--- /dev/null
+++ b/docs/en/Setup.md
@@ -0,0 +1,24 @@
+# Setup
+
+## Docker (Recommended)
+
+```bash
+docker run -d -p 3000:3000 --restart always --name free-one-api -v ~/free-one-api/data:/app/data rockchin/free-one-api
+```
+
+This command starts free-one-api and maps `~/free-one-api/data` on the host to the container's data directory.
+Then you can open the admin page at `http://localhost:3000/`.
+
+## Manual
+
+```bash
+git clone https://github.com/RockChinQ/free-one-api.git
+cd free-one-api
+
+cd web && npm install && npm run build && cd ..
+
+pip install -r requirements.txt
+python main.py
+```
+
+Then you can open the admin page at `http://localhost:3000/`.
\ No newline at end of file
diff --git a/docs/en/Usage.md b/docs/en/Usage.md
new file mode 100644
index 0000000..1d964e0
--- /dev/null
+++ b/docs/en/Usage.md
@@ -0,0 +1,59 @@
+# Usage
+
+1. Create a channel and fill in the name.
+
+![add_channel](assets/add_channel.png)
+
+2. Select the reverse engineering library adapter used by this channel and fill in the configuration.
+
+> Please refer to the [Adapters](/en/Adapters.md) document.
+
+3. Create a new key in the API Key column.
+
+4. Set the URL (e.g. `http://localhost:3000/v1`) as OpenAI's `api_base`, and set the generated key as the OpenAI API key.
+5. Now you can use the OpenAI API to access the reverse engineered LLM library.
+
+## Testing
+
+```bash
+# curl example
+curl http://localhost:3000/v1/chat/completions \
+ -X POST \
+ -H "Content-Type: application/json" \
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
+ -d '{
+ "model": "gpt-3.5-turbo",
+ "messages": [
+ {
+ "role": "system",
+ "content": "You are a helpful assistant."
+ },
+ {
+ "role": "user",
+ "content": "Hello!"
+ }
+ ],
+ "stream": true
+ }'
+```
+
+```python
+# python example
+import openai
+
+openai.api_base = "http://localhost:3000/v1"
+openai.api_key = "generated key"
+
+response = openai.ChatCompletion.create(
+ model="gpt-3.5-turbo",
+ messages=[
+ {
+ "role": "user",
+ "content": "hello, how are you?"
+ }
+ ],
+ stream=False,
+)
+
+print(response)
+```
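+
+The curl example above sets `"stream": true`; the same can be done from Python with the 0.x `openai` interface used above, consuming the response chunk by chunk (provided the channel's adapter supports streaming). A minimal sketch:
+
+```python
+# Minimal sketch: streaming with the openai 0.x interface used above.
+import openai
+
+openai.api_base = "http://localhost:3000/v1"
+openai.api_key = "generated key"
+
+for chunk in openai.ChatCompletion.create(
+    model="gpt-3.5-turbo",
+    messages=[{"role": "user", "content": "hello, how are you?"}],
+    stream=True,
+):
+    # Each chunk carries an incremental delta in OpenAI format.
+    delta = chunk["choices"][0]["delta"]
+    print(delta.get("content", ""), end="", flush=True)
+print()
+```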
diff --git a/docs/en/_sidebar.md b/docs/en/_sidebar.md
new file mode 100644
index 0000000..1108dfc
--- /dev/null
+++ b/docs/en/_sidebar.md
@@ -0,0 +1,6 @@
+* [Home Page](/en/)
+* [Deployment](/en/Setup)
+* [Usage](/en/Usage)
+* [Adapters](/en/Adapters)
+* [Config File](/en/Config)
+* [Demo](/en/Demo)
\ No newline at end of file
diff --git a/docs/en/assets/add_channel.png b/docs/en/assets/add_channel.png
new file mode 100644
index 0000000..b9385e5
Binary files /dev/null and b/docs/en/assets/add_channel.png differ
diff --git a/docs/en/assets/claude_cookie.png b/docs/en/assets/claude_cookie.png
new file mode 100644
index 0000000..94e80f5
Binary files /dev/null and b/docs/en/assets/claude_cookie.png differ
diff --git a/docs/en/assets/get_actoken.png b/docs/en/assets/get_actoken.png
new file mode 100644
index 0000000..4bd040b
Binary files /dev/null and b/docs/en/assets/get_actoken.png differ
diff --git a/docs/en/assets/load_balance.png b/docs/en/assets/load_balance.png
new file mode 100644
index 0000000..0cee5c0
Binary files /dev/null and b/docs/en/assets/load_balance.png differ
diff --git a/docs/en/assets/select_adapter.png b/docs/en/assets/select_adapter.png
new file mode 100644
index 0000000..0444f52
Binary files /dev/null and b/docs/en/assets/select_adapter.png differ