
Commit 2be2060

add an openai chat example (#33)

Author: yejunjin
1 parent e8ef7e4 commit 2be2060

File tree

4 files changed: +59 −1 lines changed


documents/CN/examples_python.md

Lines changed: 3 additions & 0 deletions

@@ -357,6 +357,9 @@ docker run -d \
     -m /workspace/qwen/Qwen-7B-Chat \
     /workspace/config_qwen_v10_7b.json
 ```
+
+You can also use [openai_chat.py](../../examples/python/4_fastchat/openai_chat.py) to test a chat client that uses the OpenAI API.
+
 # Model Configuration Files
 
 Some config examples are provided under the `<path_to_dashinfer>/examples/python/model_config` directory.

documents/EN/examples_python.md

Lines changed: 2 additions & 0 deletions

@@ -357,6 +357,8 @@ docker run -d \
     /workspace/config_qwen_v10_7b.json
 ```
 
+You can also use [openai_chat.py](../../examples/python/4_fastchat/openai_chat.py) to test the chat client using the OpenAI API.
+
 # Model Configuration Files
 
 The `<path_to_dashinfer>/examples/python/model_config` directory provides several configuration examples.
examples/python/4_fastchat/openai_chat.py

Lines changed: 48 additions & 0 deletions

@@ -0,0 +1,48 @@
+#!/usr/bin/env python3
+
+import openai
+
+openai.api_key = "EMPTY"
+openai.base_url = "http://localhost:8000/v1/"
+
+def test_list_models():
+    model_list = openai.models.list()
+    names = [x.id for x in model_list.data]
+    return names
+
+def test_chat_completion_stream(model):
+    messages = [{"role": "user", "content": "Talk about the impact of artificial intelligence on different aspects of society. Please talk at least 1000 words"}]
+    res = openai.chat.completions.create(
+        model=model, messages=messages, stream=True, temperature=0
+    )
+    for chunk in res:
+        try:
+            content = chunk.choices[0].delta.content
+            if content is None:
+                content = ""
+        except Exception:
+            content = chunk.choices[0].delta.get("content", "")
+        print(content, end="", flush=True)
+    print()
+
+def test_chat_completion(model):
+    completion = openai.chat.completions.create(
+        model=model,
+        messages=[{"role": "user", "content": "Hello! What is your name?"}],
+        temperature=0,
+    )
+    print(completion.choices[0].message.content)
+
+if __name__ == "__main__":
+    model = "Qwen-7B-Chat"  # the model name, typically the last component of the model path
+
+    print("List models:")
+    print(test_list_models())
+
+    print("\n\n====================================")
+    print("Chat completion:")
+    test_chat_completion(model)
+
+    print("\n\n====================================")
+    print("Chat completion stream:")
+    test_chat_completion_stream(model)
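The try/except fallback in test_chat_completion_stream above handles two client generations: in openai>=1.0 a streamed delta is a pydantic model accessed by attribute, while older clients returned a plain dict. This pattern can be exercised without a running server by mocking chunks; a minimal sketch, where `extract_delta_content` and the mock objects are hypothetical stand-ins, not part of the committed script:

```python
from types import SimpleNamespace

def extract_delta_content(chunk):
    """Get the streamed text delta, trying attribute access first
    (openai >= 1.0 objects) and falling back to dict-style access."""
    delta = chunk.choices[0].delta
    try:
        content = delta.content  # pydantic-model style
    except AttributeError:
        content = delta.get("content")  # legacy dict style
    return content or ""  # normalize None to an empty string

# Mock chunks standing in for real streaming responses:
new_style = SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content="Hello"))])
old_style = SimpleNamespace(choices=[SimpleNamespace(delta={"content": " world"})])
none_delta = SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=None))])

text = "".join(extract_delta_content(c) for c in (new_style, old_style, none_delta))
print(text)  # -> Hello world
```

Normalizing `None` matters because the final chunk of a stream typically carries an empty delta.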

scripts/docker/fschat_ubuntu_x86.Dockerfile

Lines changed: 6 additions & 1 deletion

@@ -8,6 +8,11 @@ COPY ./fschat_entrypoint.sh ./
 SHELL ["conda", "run", "-n", "py38env", "/bin/bash", "-c"]
 RUN pip install \
     -i https://mirrors.aliyun.com/pypi/simple/ \
-    "fschat[model_worker]"
+    "fschat[model_worker]==0.2.36"
+
+# fastchat has a bug with pydantic v2: https://github.com/lm-sys/FastChat/pull/3356
+# downgrade to v1.10.13
+RUN pip uninstall pydantic -y \
+    && pip install -i https://mirrors.aliyun.com/pypi/simple pydantic==1.10.13
 
 ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "py38env", "./fschat_entrypoint.sh"]
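The Dockerfile change above pins fschat to 0.2.36 and downgrades pydantic to the v1 line because that FastChat release is incompatible with the pydantic v2 API. A minimal sketch of the kind of version guard an application could use; `needs_pydantic_downgrade` is a hypothetical helper, and real code would feed it `pydantic.VERSION`:

```python
def needs_pydantic_downgrade(version: str) -> bool:
    """Return True when the pydantic major version is >= 2, i.e. when an
    application written against the v1 API should pin pydantic==1.10.13."""
    major = int(version.split(".")[0])
    return major >= 2

# In real code the version string would come from pydantic.VERSION.
print(needs_pydantic_downgrade("2.5.0"))    # True: v2 installed, downgrade needed
print(needs_pydantic_downgrade("1.10.13"))  # False: v1 already installed
```

Pinning both packages in the image keeps the build reproducible instead of depending on whatever pip resolves on a given day.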
