From 6c1a01950194cb1fbea2b0403d41df185a12e3de Mon Sep 17 00:00:00 2001
From: Lijiachen1018 <30387633+Lijiachen1018@users.noreply.github.com>
Date: Wed, 10 Dec 2025 14:17:07 +0800
Subject: [PATCH] [docs] fix links in docs and add clarifications (#499)

fix docs

Co-authored-by: lijiachen19
---
 README.md                                             | 2 +-
 docs/source/getting-started/quickstart_vllm.md        | 9 +++++----
 docs/source/getting-started/quickstart_vllm_ascend.md | 9 +++++----
 docs/source/user-guide/prefix-cache/nfs_store.md      | 3 +--
 4 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/README.md b/README.md
index 787306195..30fdbc337 100644
--- a/README.md
+++ b/README.md
@@ -68,7 +68,7 @@ in either a local filesystem for single-machine scenarios or through NFS mount p
 
 ## Quick Start
 
-please refer to [Quick Start](https://ucm.readthedocs.io/en/latest/getting-started/quick_start.html).
+Please refer to [Quick Start for vLLM](https://ucm.readthedocs.io/en/latest/getting-started/quickstart_vllm.html) and [Quick Start for vLLM-Ascend](https://ucm.readthedocs.io/en/latest/getting-started/quickstart_vllm_ascend.html).
 
 ---
 
diff --git a/docs/source/getting-started/quickstart_vllm.md b/docs/source/getting-started/quickstart_vllm.md
index c50bf8dc2..b10fec654 100644
--- a/docs/source/getting-started/quickstart_vllm.md
+++ b/docs/source/getting-started/quickstart_vllm.md
@@ -135,7 +135,6 @@ Then run following commands:
 ```bash
 cd examples/
 # Change the model path to your own model path
-export MODEL_PATH=/home/models/Qwen2.5-14B-Instruct
 python offline_inference.py
 ```
 
@@ -163,12 +162,14 @@ vllm serve Qwen/Qwen2.5-14B-Instruct \
   --kv-transfer-config \
   '{
     "kv_connector": "UCMConnector",
+    "kv_connector_module_path": "ucm.integration.vllm.ucm_connector",
     "kv_role": "kv_both",
-    "kv_connector_extra_config": {"UCM_CONFIG_FILE": "/vllm-workspace/unified-cache-management/examples/ucm_config_example.yaml"}
+    "kv_connector_extra_config": {"UCM_CONFIG_FILE": "/workspace/unified-cache-management/examples/ucm_config_example.yaml"}
   }'
 ```
+**⚠️ The parameter `--no-enable-prefix-caching` is only for SSD performance testing; please remove it in production.**
 
-**⚠️ Make sure to replace `"/vllm-workspace/unified-cache-management/examples/ucm_config_example.yaml"` with your actual config file path.**
+**⚠️ Make sure to replace `"/workspace/unified-cache-management/examples/ucm_config_example.yaml"` with your actual config file path.**
 
 If you see log as below:
 
@@ -187,7 +188,7 @@ After successfully started the vLLM server,You can interact with the API as fo
 curl http://localhost:7800/v1/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "model": "/home/models/Qwen2.5-14B-Instruct",
+    "model": "Qwen/Qwen2.5-14B-Instruct",
     "prompt": "You are a highly specialized assistant whose mission is to faithfully reproduce English literary texts verbatim, without any deviation, paraphrasing, or omission. Your primary responsibility is accuracy: every word, every punctuation mark, and every line must appear exactly as in the original source. Core Principles: Verbatim Reproduction: If the user asks for a passage, you must output the text word-for-word. Do not alter spelling, punctuation, capitalization, or line breaks. Do not paraphrase, summarize, modernize, or \"improve\" the language. Consistency: The same input must always yield the same output. Do not generate alternative versions or interpretations. Clarity of Scope: Your role is not to explain, interpret, or critique. You are not a storyteller or commentator, but a faithful copyist of English literary and cultural texts. Recognizability: Because texts must be reproduced exactly, they will carry their own cultural recognition. You should not add labels, introductions, or explanations before or after the text. Coverage: You must handle passages from classic literature, poetry, speeches, or cultural texts. Regardless of tone—solemn, visionary, poetic, persuasive—you must preserve the original form, structure, and rhythm by reproducing it precisely. Success Criteria: A human reader should be able to compare your output directly with the original and find zero differences. The measure of success is absolute textual fidelity. Your function can be summarized as follows: verbatim reproduction only, no paraphrase, no commentary, no embellishment, no omission. Please reproduce verbatim the opening sentence of the United States Declaration of Independence (1776), starting with \"When in the Course of human events\" and continuing word-for-word without paraphrasing.",
     "max_tokens": 100,
     "temperature": 0
diff --git a/docs/source/getting-started/quickstart_vllm_ascend.md b/docs/source/getting-started/quickstart_vllm_ascend.md
index 2a6307c99..ec00bb0f8 100644
--- a/docs/source/getting-started/quickstart_vllm_ascend.md
+++ b/docs/source/getting-started/quickstart_vllm_ascend.md
@@ -103,7 +103,6 @@ Then run following commands:
 ```bash
 cd examples/
 # Change the model path to your own model path
-export MODEL_PATH=/home/models/Qwen2.5-14B-Instruct
 python offline_inference.py
 ```
 
@@ -131,12 +130,14 @@ vllm serve Qwen/Qwen2.5-14B-Instruct \
   --kv-transfer-config \
   '{
     "kv_connector": "UCMConnector",
+    "kv_connector_module_path": "ucm.integration.vllm.ucm_connector",
     "kv_role": "kv_both",
-    "kv_connector_extra_config": {"UCM_CONFIG_FILE": "/vllm-workspace/unified-cache-management/examples/ucm_config_example.yaml"}
+    "kv_connector_extra_config": {"UCM_CONFIG_FILE": "/workspace/unified-cache-management/examples/ucm_config_example.yaml"}
   }'
 ```
+**⚠️ The parameter `--no-enable-prefix-caching` is only for SSD performance testing; please remove it in production.**
 
-**⚠️ Make sure to replace `"/vllm-workspace/unified-cache-management/examples/ucm_config_example.yaml"` with your actual config file path.**
+**⚠️ Make sure to replace `"/workspace/unified-cache-management/examples/ucm_config_example.yaml"` with your actual config file path.**
 
 If you see log as below:
 
@@ -155,7 +156,7 @@ After successfully started the vLLM server,You can interact with the API as fo
 curl http://localhost:7800/v1/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "model": "/home/models/Qwen2.5-14B-Instruct",
+    "model": "Qwen/Qwen2.5-14B-Instruct",
     "prompt": "You are a highly specialized assistant whose mission is to faithfully reproduce English literary texts verbatim, without any deviation, paraphrasing, or omission. Your primary responsibility is accuracy: every word, every punctuation mark, and every line must appear exactly as in the original source. Core Principles: Verbatim Reproduction: If the user asks for a passage, you must output the text word-for-word. Do not alter spelling, punctuation, capitalization, or line breaks. Do not paraphrase, summarize, modernize, or \"improve\" the language. Consistency: The same input must always yield the same output. Do not generate alternative versions or interpretations. Clarity of Scope: Your role is not to explain, interpret, or critique. You are not a storyteller or commentator, but a faithful copyist of English literary and cultural texts. Recognizability: Because texts must be reproduced exactly, they will carry their own cultural recognition. You should not add labels, introductions, or explanations before or after the text. Coverage: You must handle passages from classic literature, poetry, speeches, or cultural texts. Regardless of tone—solemn, visionary, poetic, persuasive—you must preserve the original form, structure, and rhythm by reproducing it precisely. Success Criteria: A human reader should be able to compare your output directly with the original and find zero differences. The measure of success is absolute textual fidelity. Your function can be summarized as follows: verbatim reproduction only, no paraphrase, no commentary, no embellishment, no omission. Please reproduce verbatim the opening sentence of the United States Declaration of Independence (1776), starting with \"When in the Course of human events\" and continuing word-for-word without paraphrasing.",
     "max_tokens": 100,
     "temperature": 0
diff --git a/docs/source/user-guide/prefix-cache/nfs_store.md b/docs/source/user-guide/prefix-cache/nfs_store.md
index 7cb1bc0dd..5f58cd102 100644
--- a/docs/source/user-guide/prefix-cache/nfs_store.md
+++ b/docs/source/user-guide/prefix-cache/nfs_store.md
@@ -109,8 +109,6 @@ Explanation:
 
 ## Launching Inference
 
-### Offline Inference
-
 In this guide, we describe **online inference** using vLLM with the UCM connector, deployed as an OpenAI-compatible server. For best performance with UCM, it is recommended to set `block_size` to 128.
 
 To start the vLLM server with the Qwen/Qwen2.5-14B-Instruct model, run:
@@ -129,6 +127,7 @@ vllm serve Qwen/Qwen2.5-14B-Instruct \
   '{
     "kv_connector": "UCMConnector",
     "kv_role": "kv_both",
+    "kv_connector_module_path": "ucm.integration.vllm.ucm_connector",
     "kv_connector_extra_config": {"UCM_CONFIG_FILE": "/vllm-workspace/unified-cache-management/examples/ucm_config_example.yaml"}
   }'
 ```
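The quickstart hunks above switch the `"model"` field of the example `curl` request from a local filesystem path to the served model name `Qwen/Qwen2.5-14B-Instruct`. The snippet below is a minimal sketch, not part of this patch, assuming the server started with the `vllm serve` command above is listening on `localhost:7800`; it uses only the standard OpenAI-compatible `GET /v1/models` endpoint that `vllm serve` exposes to confirm which model id was actually registered.

```bash
# List the models served by the running vLLM instance (standard OpenAI-compatible API).
curl -s http://localhost:7800/v1/models

# Extract just the model ids; whatever is printed here is the value to pass as
# "model" in the /v1/completions request shown in the quickstart docs.
curl -s http://localhost:7800/v1/models \
  | python3 -c "import json, sys; print([m['id'] for m in json.load(sys.stdin)['data']])"
```

If the reported id differs, for example because `--served-model-name` was passed to `vllm serve`, the `"model"` value in the completion request must match that id instead.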