Commit 6728b51

convert chatbot zh-markdown to en

1 parent fcfad01 commit 6728b51

File tree

12 files changed: +323 −319 lines changed
Lines changed: 16 additions & 0 deletions

```diff
@@ -0,0 +1,16 @@
+---
+title: Acknowledgements
+slug: Acknowledgements
+description: Introduces the main features
+url: "docs/acknowledgements"
+aliases:
+- "/docs/acknowledgements"
+---
+
+The documentation homepage of CodeFuse-ai is built on [docura](https://github.com/docura/docura)
+
+The ChatBot project is based on [langchain-chatchat](https://github.com/chatchat-space/Langchain-Chatchat) and [codebox-api](https://github.com/shroominic/codebox-api).
+
+......
+
+Deep gratitude is extended for their open-source contributions!
```

content/en/docs/acnowledgements/d1.acknowledgements.md

Lines changed: 0 additions & 11 deletions
This file was deleted.

content/en/docs/chatbot/c1.quickstart.md

Lines changed: 28 additions & 18 deletions

````diff
@@ -7,9 +7,15 @@ aliases:
 - "/docs/quickstart"
 ---
 
+<p align="left">
+<a href="/docs/quickstart-zh">中文</a>&nbsp | &nbsp<a>English&nbsp </a>
+</p>
+
 ## 🚀 Quick Start
 
-Please install the Nvidia driver yourself; this project has been tested on Python 3.9.18, CUDA 11.7, Windows, and X86 architecture macOS systems.
+To deploy private models, please install the NVIDIA driver by yourself.
+This project has been tested on Python 3.9.18 and CUDA 11.7 environments, as well as on Windows and macOS systems with x86 architecture.
+For Docker installation, private LLM access, and related startup issues, see: [Start-detail...](/docs/start-detail)
 
 ### Preparation of Python environment
 
@@ -23,7 +29,6 @@ conda activate Codefusegpt
 - Install related dependencies
 ```bash
 cd Codefuse-ChatBot
-# python=3.9,use notebook-latest,python=3.8 use notebook==6.5.5
 pip install -r requirements.txt
 ```
 
@@ -35,42 +40,47 @@ cd configs
 cp model_config.py.example model_config.py
 cp server_config.py.example server_config.py
 
-# model_config#11~12 If you need to use the openai interface, openai interface key
+# model_config#11~12 If you need to use the OpenAI interface, the OpenAI interface key
 os.environ["OPENAI_API_KEY"] = "sk-xxx"
-# You can replace the api_base_url yourself
+# Replace with the api_base_url you need
 os.environ["API_BASE_URL"] = "https://api.openai.com/v1"
 
-# vi model_config#105 You need to choose the language model
+# vi model_config#LLM_MODEL The language model you need to choose
 LLM_MODEL = "gpt-3.5-turbo"
+LLM_MODELs = ["gpt-3.5-turbo"]
 
-# vi model_config#43 You need to choose the vector model
+# vi model_config#EMBEDDING_MODEL The private vector model you need to choose
+EMBEDDING_ENGINE = 'model'
 EMBEDDING_MODEL = "text2vec-base"
 
-# vi model_config#25 Modify to your local path, if you can directly connect to huggingface, no modification is needed
-"text2vec-base": "shibing624/text2vec-base-chinese",
+# Example of vector model access, modify model_config#embedding_model_dict
+# If the model directory is:
+model_dir: ~/codefuse-chatbot/embedding_models/shibing624/text2vec-base-chinese
+# Configure as follows
+"text2vec-base": "shibing624/text2vec-base-chinese"
+
 
-# vi server_config#8~14, it is recommended to start the service using containers.
+# vi server_config#8~14, It's recommended to use a container to start the service to prevent environment conflicts when installing other dependencies using the codeInterpreter feature
 DOCKER_SERVICE = True
-# Whether to use container sandboxing is up to your specific requirements and preferences
+# Whether to use a container sandbox
 SANDBOX_DO_REMOTE = True
-# Whether to use api-service to use chatbot
-NO_REMOTE_API = True
 ```
 
 ### Start the Service
 
 By default, only webui related services are started, and fastchat is not started (optional).
 ```bash
-# if use codellama-34b-int4, you should replace fastchat's gptq.py
+# If you need to support the codellama-34b-int4 model, you need to patch fastchat
 # cp examples/gptq.py ~/site-packages/fastchat/modules/gptq.py
-# dev_opsgpt/service/llm_api.py#258 => kwargs={"gptq_wbits": 4},
+# Modify dev_opsgpt/llm_api.py#258 to kwargs={"gptq_wbits": 4},
 
-# start llm-service(可选)
-python dev_opsgpt/service/llm_api.py
+# Start llm-service (optional)
+python dev_opsgpt/llm_api.py
 ```
 
+For more LLM access methods, see [Details...](/docs/fastchat)
 ```bash
-# After configuring server_config.py, you can start with just one click.
+# After completing the server_config.py configuration, you can start with one click
 cd examples
-bash start_webui.sh
+python start.py
 ```
````
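To make the configuration edits above concrete, here is a minimal illustrative sketch of how the touched `model_config.py` fields could end up looking. The variable names come straight from the diff; the `embedding_model_dict` layout and the assumption that the short name maps to a HuggingFace id resolved under `~/codefuse-chatbot/embedding_models/` are inferences, not confirmed repository code.

```python
import os

# OpenAI access (placeholder key; API_BASE_URL may point at any compatible endpoint)
os.environ["OPENAI_API_KEY"] = "sk-xxx"
os.environ["API_BASE_URL"] = "https://api.openai.com/v1"

# Language model selection (names taken from the diff above)
LLM_MODEL = "gpt-3.5-turbo"
LLM_MODELs = ["gpt-3.5-turbo"]

# Private embedding model selection
EMBEDDING_ENGINE = 'model'
EMBEDDING_MODEL = "text2vec-base"

# Assumed dict shape: short model name -> HuggingFace id, which the project
# presumably resolves under ~/codefuse-chatbot/embedding_models/
embedding_model_dict = {
    "text2vec-base": "shibing624/text2vec-base-chinese",
}
```

With this fragment in place, the selected LLM appears in `LLM_MODELs` and the embedding name resolves through `embedding_model_dict`.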

content/en/docs/chatbot/c2.start-detail.md

Lines changed: 48 additions & 43 deletions

````diff
@@ -5,48 +5,50 @@ aliases:
 - "/docs/start-detail"
 ---
 
-请自行安装 nvidia 驱动程序,本项目已在 Python 3.9.18,CUDA 11.7 环境下,Windows、X86 架构的 macOS 系统中完成测试。
+<p align="left">
+<a href="/docs/start-detail-zh">中文</a>&nbsp | &nbsp<a>English&nbsp </a>
+</p>
 
-### python 环境准备
 
-- 推荐采用 conda 对 python 环境进行管理(可选)
+If you need to deploy a privatized model, please install the NVIDIA driver yourself.
+
+### Preparation of Python environment
+- It is recommended to use conda to manage the python environment (optional)
 ```bash
-# 准备 conda 环境
-conda create --name devopsgpt python=3.9
-conda activate devopsgpt
+# Prepare conda environment
+conda create --name Codefusegpt python=3.9
+conda activate Codefusegpt
 ```
 
-- 安装相关依赖
+- Install related dependencies
 ```bash
-cd codefuse-chatbot
-# python=3.9,notebook用最新即可,python=3.8用notebook=6.5.6
+cd Codefuse-ChatBot
 pip install -r requirements.txt
 ```
 
-### 沙盒环境准备
-- windows Docker 安装:
-[Docker Desktop for Windows](https://docs.docker.com/desktop/install/windows-install/) 支持 64 位版本的 Windows 10 Pro,且必须开启 Hyper-V(若版本为 v1903 及以上则无需开启 Hyper-V),或者 64 位版本的 Windows 10 Home v1903 及以上版本。
-
+### Sandbox Environment Preparation
+- Windows Docker installation:
+[Docker Desktop for Windows](https://docs.docker.com/desktop/install/windows-install/) supports 64-bit versions of Windows 10 Pro with Hyper-V enabled (Hyper-V is not required for versions v1903 and above), or 64-bit versions of Windows 10 Home v1903 and above.
 - [【全面详细】Windows10 Docker安装详细教程](https://zhuanlan.zhihu.com/p/441965046)
 - [Docker 从入门到实践](https://yeasy.gitbook.io/docker_practice/install/windows)
-- [Docker Desktop requires the Server service to be enabled 处理](https://blog.csdn.net/sunhy_csdn/article/details/106526991)
+- [Handling 'Docker Desktop requires the Server service to be enabled'](https://blog.csdn.net/sunhy_csdn/article/details/106526991)
 - [安装wsl或者等报错提示](https://learn.microsoft.com/zh-cn/windows/wsl/install)
 
-- Linux Docker 安装
-Linux 安装相对比较简单,请自行 baidu/google 相关安装
+- Linux Docker installation
+Linux installation is relatively simple, please search Baidu/Google for installation guides.
 
-- Mac Docker 安装
+- Mac Docker installation
 - [Docker 从入门到实践](https://yeasy.gitbook.io/docker_practice/install/mac)
 
 ```bash
-# 构建沙盒环境的镜像,notebook版本问题见上述
+# Build the image for the sandbox environment, see above for notebook version issues
 bash docker_build.sh
 ```
 
-### 模型下载(可选)
+### Model Download (Optional)
 
-如需使用开源 LLM Embedding 模型可以从 HuggingFace 下载。
-此处以 THUDM/chatglm2-6bm 和 text2vec-base-chinese 为例:
+If you need to use open-source LLM and Embedding models, you can download them from HuggingFace.
+Here we take THUDM/chatglm2-6b and text2vec-base-chinese as examples:
 
 ```
 # install git-lfs
@@ -62,57 +64,60 @@ cp ~/shibing624/text2vec-base-chinese ~/codefuse-chatbot/embedding_models/
 ```
 
 
-### 基础配置
+
+### Basic Configuration
 
 ```bash
-# 修改服务启动的基础配置
+# Modify the basic configuration for service startup
 cd configs
 cp model_config.py.example model_config.py
 cp server_config.py.example server_config.py
 
-# model_config#11~12 若需要使用openai接口,openai接口key
+# model_config#11~12 If you need to use the OpenAI interface, the OpenAI interface key
 os.environ["OPENAI_API_KEY"] = "sk-xxx"
-# 可自行替换自己需要的api_base_url
+# Replace with the api_base_url you need
 os.environ["API_BASE_URL"] = "https://api.openai.com/v1"
 
-# vi model_config#LLM_MODEL 你需要选择的语言模型
+# vi model_config#LLM_MODEL The language model you need to choose
 LLM_MODEL = "gpt-3.5-turbo"
 LLM_MODELs = ["gpt-3.5-turbo"]
 
-# vi model_config#EMBEDDING_MODEL 你需要选择的私有化向量模型
+# vi model_config#EMBEDDING_MODEL The private vector model you need to choose
 EMBEDDING_ENGINE = 'model'
 EMBEDDING_MODEL = "text2vec-base"
 
-# vi model_config#embedding_model_dict 修改成你的本地路径,如果能直接连接huggingface则无需修改
-# 若模型地址为:
+# Example of vector model access, modify model_config#embedding_model_dict
+# If the model directory is:
 model_dir: ~/codefuse-chatbot/embedding_models/shibing624/text2vec-base-chinese
-# 配置如下
-"text2vec-base": "shibing624/text2vec-base-chinese",
+# Configure as follows
+"text2vec-base": "shibing624/text2vec-base-chinese"
+
 
-# vi server_config#8~14, 推荐采用容器启动服务
+# vi server_config#8~14, It's recommended to use a container to start the service to prevent environment conflicts when installing other dependencies using the codeInterpreter feature
 DOCKER_SERVICE = True
-# 是否采用容器沙箱
+# Whether to use a container sandbox
 SANDBOX_DO_REMOTE = True
-# 是否采用api服务来进行
-NO_REMOTE_API = True
 ```
 
-### 启动服务
 
-默认只启动webui相关服务,未启动fastchat(可选)。
+
+### Starting the Service
+
+By default, only the webui-related services are started, and fastchat is not started (optional).
+
 ```bash
-# 若需要支撑codellama-34b-int4模型,需要给fastchat打一个补丁
+# If you need to support the codellama-34b-int4 model, you need to patch fastchat
 # cp examples/gptq.py ~/site-packages/fastchat/modules/gptq.py
-# dev_opsgpt/service/llm_api.py#258 修改为 kwargs={"gptq_wbits": 4},
+# Modify dev_opsgpt/llm_api.py#258 to kwargs={"gptq_wbits": 4},
 
-# start llm-service(可选)
-python dev_opsgpt/service/llm_api.py
+# start llm-service (optional)
+python dev_opsgpt/llm_api.py
 ```
-更多LLM接入方法见[详情...](./fastchat.md)
+For more LLM integration methods, see [more details...](./fastchat.md)
 <br>
 
 ```bash
-# 完成server_config.py配置后,可一键启动
+# After completing the server_config.py configuration, you can start with one click
 cd examples
 python start.py
 ```
````
