Commit 523bbc4

Lijiachen1018 and lijiachen19 authored

[docs] fix links in docs and add clarifications (#499) (#502)

fix docs

Co-authored-by: lijiachen19 <lijiachen19@huawei.com>

1 parent 105ea43 · commit 523bbc4

File tree

4 files changed (+12 / -11 lines):

- README.md
- docs/source/getting-started/quickstart_vllm.md
- docs/source/getting-started/quickstart_vllm_ascend.md
- docs/source/user-guide/prefix-cache/nfs_store.md

4 files changed

+12
-11
lines changed

README.md

Lines changed: 1 addition & 1 deletion
@@ -68,7 +68,7 @@ in either a local filesystem for single-machine scenarios or through NFS mount p
 
 ## Quick Start
 
-please refer to [Quick Start](https://ucm.readthedocs.io/en/latest/getting-started/quick_start.html).
+please refer to [Quick Start for vLLM](https://ucm.readthedocs.io/en/latest/getting-started/quickstart_vllm.html) and [Quick Start for vLLM-Ascend](https://ucm.readthedocs.io/en/latest/getting-started/quickstart_vllm_ascend.html).
 
 ---
 

docs/source/getting-started/quickstart_vllm.md

Lines changed: 5 additions & 4 deletions
@@ -135,7 +135,6 @@ Then run following commands:
 ```bash
 cd examples/
 # Change the model path to your own model path
-export MODEL_PATH=/home/models/Qwen2.5-14B-Instruct
 python offline_inference.py
 ```
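For orientation, here is a minimal sketch of what an offline-inference script along these lines might look like once the `MODEL_PATH` export is gone and the path is set directly in the script. This is an illustration, not the actual contents of `examples/offline_inference.py`; the `KVTransferConfig` fields mirror the connector settings shown in the serve example below, and the import locations assume a recent vLLM release.

```python
# Hypothetical sketch -- not the shipped examples/offline_inference.py.
from vllm import LLM, SamplingParams
from vllm.config import KVTransferConfig

# Assumption: the model path now lives in the script itself instead of the
# removed MODEL_PATH environment variable. Change it to your own model path.
MODEL_PATH = "/home/models/Qwen2.5-14B-Instruct"

llm = LLM(
    model=MODEL_PATH,
    kv_transfer_config=KVTransferConfig(
        kv_connector="UCMConnector",
        kv_connector_module_path="ucm.integration.vllm.ucm_connector",
        kv_role="kv_both",
        kv_connector_extra_config={
            "UCM_CONFIG_FILE": "/workspace/unified-cache-management/examples/ucm_config_example.yaml"
        },
    ),
)

outputs = llm.generate(
    ["When in the Course of human events"],
    SamplingParams(temperature=0, max_tokens=64),
)
for out in outputs:
    print(out.outputs[0].text)
```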

@@ -163,12 +162,14 @@ vllm serve Qwen/Qwen2.5-14B-Instruct \
 --kv-transfer-config \
 '{
 "kv_connector": "UCMConnector",
+"kv_connector_module_path": "ucm.integration.vllm.ucm_connector",
 "kv_role": "kv_both",
-"kv_connector_extra_config": {"UCM_CONFIG_FILE": "/vllm-workspace/unified-cache-management/examples/ucm_config_example.yaml"}
+"kv_connector_extra_config": {"UCM_CONFIG_FILE": "/workspace/unified-cache-management/examples/ucm_config_example.yaml"}
 }'
 ```
+**⚠️ The parameter `--no-enable-prefix-caching` is for SSD performance testing, please remove it for production.**
 
-**⚠️ Make sure to replace `"/vllm-workspace/unified-cache-management/examples/ucm_config_example.yaml"` with your actual config file path.**
+**⚠️ Make sure to replace `"/workspace/unified-cache-management/examples/ucm_config_example.yaml"` with your actual config file path.**
 
 
 If you see log as below:
@@ -187,7 +188,7 @@ After successfully started the vLLM server,You can interact with the API as fo
 curl http://localhost:7800/v1/completions \
 -H "Content-Type: application/json" \
 -d '{
-"model": "/home/models/Qwen2.5-14B-Instruct",
+"model": "Qwen/Qwen2.5-14B-Instruct",
 "prompt": "You are a highly specialized assistant whose mission is to faithfully reproduce English literary texts verbatim, without any deviation, paraphrasing, or omission. Your primary responsibility is accuracy: every word, every punctuation mark, and every line must appear exactly as in the original source. Core Principles: Verbatim Reproduction: If the user asks for a passage, you must output the text word-for-word. Do not alter spelling, punctuation, capitalization, or line breaks. Do not paraphrase, summarize, modernize, or \"improve\" the language. Consistency: The same input must always yield the same output. Do not generate alternative versions or interpretations. Clarity of Scope: Your role is not to explain, interpret, or critique. You are not a storyteller or commentator, but a faithful copyist of English literary and cultural texts. Recognizability: Because texts must be reproduced exactly, they will carry their own cultural recognition. You should not add labels, introductions, or explanations before or after the text. Coverage: You must handle passages from classic literature, poetry, speeches, or cultural texts. Regardless of tone—solemn, visionary, poetic, persuasive—you must preserve the original form, structure, and rhythm by reproducing it precisely. Success Criteria: A human reader should be able to compare your output directly with the original and find zero differences. The measure of success is absolute textual fidelity. Your function can be summarized as follows: verbatim reproduction only, no paraphrase, no commentary, no embellishment, no omission. Please reproduce verbatim the opening sentence of the United States Declaration of Independence (1776), starting with \"When in the Course of human events\" and continuing word-for-word without paraphrasing.",
 "max_tokens": 100,
 "temperature": 0

docs/source/getting-started/quickstart_vllm_ascend.md

Lines changed: 5 additions & 4 deletions
@@ -103,7 +103,6 @@ Then run following commands:
 ```bash
 cd examples/
 # Change the model path to your own model path
-export MODEL_PATH=/home/models/Qwen2.5-14B-Instruct
 python offline_inference.py
 ```

@@ -131,12 +130,14 @@ vllm serve Qwen/Qwen2.5-14B-Instruct \
 --kv-transfer-config \
 '{
 "kv_connector": "UCMConnector",
+"kv_connector_module_path": "ucm.integration.vllm.ucm_connector",
 "kv_role": "kv_both",
-"kv_connector_extra_config": {"UCM_CONFIG_FILE": "/vllm-workspace/unified-cache-management/examples/ucm_config_example.yaml"}
+"kv_connector_extra_config": {"UCM_CONFIG_FILE": "/workspace/unified-cache-management/examples/ucm_config_example.yaml"}
 }'
 ```
+**⚠️ The parameter `--no-enable-prefix-caching` is for SSD performance testing, please remove it for production.**
 
-**⚠️ Make sure to replace `"/vllm-workspace/unified-cache-management/examples/ucm_config_example.yaml"` with your actual config file path.**
+**⚠️ Make sure to replace `"/workspace/unified-cache-management/examples/ucm_config_example.yaml"` with your actual config file path.**
 
 
 If you see log as below:
@@ -155,7 +156,7 @@ After successfully started the vLLM server,You can interact with the API as fo
 curl http://localhost:7800/v1/completions \
 -H "Content-Type: application/json" \
 -d '{
-"model": "/home/models/Qwen2.5-14B-Instruct",
+"model": "Qwen/Qwen2.5-14B-Instruct",
 "prompt": "You are a highly specialized assistant whose mission is to faithfully reproduce English literary texts verbatim, without any deviation, paraphrasing, or omission. Your primary responsibility is accuracy: every word, every punctuation mark, and every line must appear exactly as in the original source. Core Principles: Verbatim Reproduction: If the user asks for a passage, you must output the text word-for-word. Do not alter spelling, punctuation, capitalization, or line breaks. Do not paraphrase, summarize, modernize, or \"improve\" the language. Consistency: The same input must always yield the same output. Do not generate alternative versions or interpretations. Clarity of Scope: Your role is not to explain, interpret, or critique. You are not a storyteller or commentator, but a faithful copyist of English literary and cultural texts. Recognizability: Because texts must be reproduced exactly, they will carry their own cultural recognition. You should not add labels, introductions, or explanations before or after the text. Coverage: You must handle passages from classic literature, poetry, speeches, or cultural texts. Regardless of tone—solemn, visionary, poetic, persuasive—you must preserve the original form, structure, and rhythm by reproducing it precisely. Success Criteria: A human reader should be able to compare your output directly with the original and find zero differences. The measure of success is absolute textual fidelity. Your function can be summarized as follows: verbatim reproduction only, no paraphrase, no commentary, no embellishment, no omission. Please reproduce verbatim the opening sentence of the United States Declaration of Independence (1776), starting with \"When in the Course of human events\" and continuing word-for-word without paraphrasing.",
 "max_tokens": 100,
 "temperature": 0

docs/source/user-guide/prefix-cache/nfs_store.md

Lines changed: 1 addition & 2 deletions
@@ -109,8 +109,6 @@ Explanation:
 
 ## Launching Inference
 
-### Offline Inference
-
 In this guide, we describe **online inference** using vLLM with the UCM connector, deployed as an OpenAI-compatible server. For best performance with UCM, it is recommended to set `block_size` to 128.
 
 To start the vLLM server with the Qwen/Qwen2.5-14B-Instruct model, run:
@@ -129,6 +127,7 @@ vllm serve Qwen/Qwen2.5-14B-Instruct \
 '{
 "kv_connector": "UCMConnector",
 "kv_role": "kv_both",
+"kv_connector_module_path": "ucm.integration.vllm.ucm_connector",
 "kv_connector_extra_config": {"UCM_CONFIG_FILE": "/vllm-workspace/unified-cache-management/examples/ucm_config_example.yaml"}
 }'
 ```
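Since `kv_connector_module_path` tells vLLM where to import the UCM connector from, a quick sanity check before launching the server is to confirm that the module resolves in the environment that will run `vllm serve`. A small sketch using only the standard library; the module path is the one added in this commit.

```python
# Verify the connector module referenced by kv_connector_module_path is
# importable before starting the server.
import importlib

MODULE_PATH = "ucm.integration.vllm.ucm_connector"

try:
    module = importlib.import_module(MODULE_PATH)
    print(f"OK: {MODULE_PATH} resolved to {module.__file__}")
except ImportError as exc:
    print(f"Cannot import {MODULE_PATH}: {exc}")
```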
