Commit b5941a2 (parent 3108d24)

Update documentation

3 files changed: +90 −69 lines


README.md

Lines changed: 13 additions & 12 deletions

@@ -7,7 +7,7 @@
[![code checks](https://github.com/VectorInstitute/vector-inference/actions/workflows/code_checks.yml/badge.svg)](https://github.com/VectorInstitute/vector-inference/actions/workflows/code_checks.yml)
[![docs](https://github.com/VectorInstitute/vector-inference/actions/workflows/docs.yml/badge.svg)](https://github.com/VectorInstitute/vector-inference/actions/workflows/docs.yml)
[![codecov](https://codecov.io/github/VectorInstitute/vector-inference/branch/main/graph/badge.svg?token=NI88QSIGAC)](https://app.codecov.io/github/VectorInstitute/vector-inference/tree/main)
- [![vLLM](https://img.shields.io/badge/vllm-0.9.2)](https://docs.vllm.ai/en/v0.9.2/index.html)
+ [![vLLM](https://img.shields.io/badge/vLLM-0.10.1.1-blue)](https://docs.vllm.ai/en/v0.10.1.1/)
![GitHub License](https://img.shields.io/github/license/VectorInstitute/vector-inference)

This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/latest/). **All scripts in this repository run natively on the Vector Institute cluster environment.** To adapt to other environments, follow the instructions in [Installation](#installation).

@@ -18,7 +18,7 @@ If you are using the Vector cluster environment, and you don't need any customiz
```bash
pip install vec-inf
```
- Otherwise, we recommend using the provided [`Dockerfile`](Dockerfile) to set up your own environment with the package. The latest image has `vLLM` version `0.9.2`.
+ Otherwise, we recommend using the provided [`Dockerfile`](Dockerfile) to set up your own environment with the package. The latest image has `vLLM` version `0.10.1.1`.

If you'd like to use `vec-inf` on your own Slurm cluster, you will need to update the configuration files. There are 3 ways to do it:
* Clone the repository and update the `environment.yaml` and the `models.yaml` file in [`vec_inf/config`](vec_inf/config/), then install from source by running `pip install .` (a sketch of this workflow follows below).
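
For the first option, a minimal sketch of the workflow (the exact config edits depend on your cluster):

```bash
# Clone the repo, adapt the configs to your cluster, then install from source
git clone https://github.com/VectorInstitute/vector-inference.git
cd vector-inference
# edit vec_inf/config/environment.yaml and vec_inf/config/models.yaml here
pip install .
```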

@@ -42,23 +42,26 @@ You should see an output like the following:

<img width="600" alt="launch_image" src="https://github.com/user-attachments/assets/a72a99fd-4bf2-408e-8850-359761d96c4f">

+ **NOTE**: On the Vector Killarney cluster environment, the following fields are required (see the sketch after this list):
+ * `--account`, `-A`: The Slurm account. A default can be set via the environment variable `VEC_INF_ACCOUNT`.
+ * `--work-dir`, `-D`: A working directory other than your home directory. A default can be set via the environment variable `VEC_INF_WORK_DIR`.
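
A minimal launch sketch for this environment (the account name and working directory below are placeholders):

```bash
# Set defaults once via environment variables...
export VEC_INF_ACCOUNT=<your_slurm_account>
export VEC_INF_WORK_DIR=/scratch/<username>/vec-inf
vec-inf launch Meta-Llama-3.1-8B-Instruct

# ...or pass them explicitly per launch
vec-inf launch Meta-Llama-3.1-8B-Instruct -A <your_slurm_account> -D /scratch/<username>/vec-inf
```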

#### Overrides

Models that are already supported by `vec-inf` are launched using the cached configuration (set in [slurm_vars.py](vec_inf/client/slurm_vars.py)) or the [default configuration](vec_inf/config/models.yaml). You can override these values by providing additional parameters. Use `vec-inf launch --help` to see the full list of parameters that can be
- overriden. For example, if `qos` is to be overriden:
+ overridden. For example, if `resource-type` is to be overridden:

```bash
- vec-inf launch Meta-Llama-3.1-8B-Instruct --qos <new_qos>
+ vec-inf launch Meta-Llama-3.1-8B-Instruct --resource-type <new_resource_type>
```

- To overwrite default vLLM engine arguments, you can specify the engine arguments in a comma separated string:
+ To override default `vllm serve` arguments, you can specify the arguments in a comma-separated string:

```bash
vec-inf launch Meta-Llama-3.1-8B-Instruct --vllm-args '--max-model-len=65536,--compilation-config=3'
```

- For the full list of vLLM engine arguments, you can find them [here](https://docs.vllm.ai/en/stable/serving/engine_args.html), make sure you select the correct vLLM version.
+ The full list of `vllm serve` arguments can be found [here](https://docs.vllm.ai/en/stable/cli/serve.html); make sure you select the correct vLLM version.
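
If you have vLLM installed locally, you can also list the arguments accepted by your installed version directly:

```bash
# Prints every option `vllm serve` accepts for the vLLM version in your environment
vllm serve --help
```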

#### Custom models

@@ -85,14 +88,12 @@ models:
```yaml
    gpus_per_node: 1
    num_nodes: 1
    vocab_size: 152064
-   qos: m2
    time: 08:00:00
-   partition: a40
+   resource_type: l40s # You can also leave this field empty if your environment has a default resource type to use
    model_weights_parent_dir: /h/<username>/model-weights
    vllm_args:
      --max-model-len: 1010000
      --max-num-seqs: 256
-     --compilation-config: 3
```

You would then set the `VEC_INF_MODEL_CONFIG` path using:

@@ -103,9 +104,8 @@ export VEC_INF_MODEL_CONFIG=/h/<username>/my-model-config.yaml

**NOTE**
* There are other parameters that can also be added to the config but are not shown in this example; check [`ModelConfig`](vec_inf/client/config.py) for details.
- * Check [vLLM Engine Arguments](https://docs.vllm.ai/en/stable/serving/engine_args.html) for the full list of available vLLM engine arguments, the default parallel size for any parallelization is default to 1, so none of the sizes were set specifically in this example
+ * Check [`vllm serve`](https://docs.vllm.ai/en/stable/cli/serve.html) for the full list of available vLLM engine arguments; every parallelization size defaults to 1, so none of the sizes are set explicitly in this example.
* For GPU partitions with non-Ampere architectures, e.g. `rtx6000` and `t4v2`, BF16 isn't supported. For models that default to BF16, use FP16 instead on a non-Ampere GPU, i.e. `--dtype: float16`.
- * Setting `--compilation-config` to `3` currently breaks multi-node model launches, so we don't set them for models that require multiple nodes of GPUs.
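
Putting the custom-model steps together, a minimal sketch (the config path and model name are placeholders):

```bash
# Point vec-inf at the custom config, then launch the model defined in it
export VEC_INF_MODEL_CONFIG=/h/<username>/my-model-config.yaml
vec-inf launch <custom-model-name>
```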

#### Other commands

@@ -114,7 +114,7 @@ export VEC_INF_MODEL_CONFIG=/h/<username>/my-model-config.yaml
* `metrics`: Streams performance metrics to the console.
* `shutdown`: Shut down a model by providing its Slurm job ID.
* `list`: List all available model names, or view the default/cached configuration of a specific model.
- * `cleanup`: Remove old log directories. You can filter by `--model-family`, `--model-name`, `--job-id`, and/or `--before-job-id`. Use `--dry-run` to preview what would be deleted.
+ * `cleanup`: Remove old log directories; use `--help` to see the supported filters. Use `--dry-run` to preview what would be deleted.
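
Illustrative invocations of the commands above (the job ID is a placeholder returned by a previous `launch`):

```bash
vec-inf list                     # list all available model names
vec-inf shutdown <slurm_job_id>  # shut down a running server
vec-inf cleanup --dry-run        # preview which log directories would be removed
```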

For more details on the usage of these commands, refer to the [User Guide](https://vectorinstitute.github.io/vector-inference/user_guide/).

@@ -125,6 +125,7 @@ Example:
```python
>>> from vec_inf.api import VecInfClient
>>> client = VecInfClient()
+ >>> # Assume VEC_INF_ACCOUNT and VEC_INF_WORK_DIR are set
>>> response = client.launch_model("Meta-Llama-3.1-8B-Instruct")
>>> job_id = response.slurm_job_id
>>> status = client.get_status(job_id)
```
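
The same flow as a self-contained sketch (assumes `VEC_INF_ACCOUNT` and `VEC_INF_WORK_DIR` are set in the environment; only the API names shown above are used):

```python
from vec_inf.api import VecInfClient

# Launch a supported model and check its status once
client = VecInfClient()
response = client.launch_model("Meta-Llama-3.1-8B-Instruct")
job_id = response.slurm_job_id
status = client.get_status(job_id)
print(f"Job {job_id}: {status}")
```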

docs/index.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ If you are using the Vector cluster environment, and you don't need any customiz
```bash
pip install vec-inf
```

- Otherwise, we recommend using the provided [`Dockerfile`](https://github.com/VectorInstitute/vector-inference/blob/main/Dockerfile) to set up your own environment with the package. The latest image has `vLLM` version `0.9.2`.
+ Otherwise, we recommend using the provided [`Dockerfile`](https://github.com/VectorInstitute/vector-inference/blob/main/Dockerfile) to set up your own environment with the package. The latest image has `vLLM` version `0.10.1.1`.

If you'd like to use `vec-inf` on your own Slurm cluster, you will need to update the configuration files. There are 3 ways to do it:
* Clone the repository and update the `environment.yaml` and the `models.yaml` file in [`vec_inf/config`](vec_inf/config/), then install from source by running `pip install .`.
