Merge pull request #100 from VectorInstitute/feature/decouple_from_v
* Decouple VI from the Vaughan cluster: all cluster-related (Slurm) variables have been moved out of the code base into a dedicated file
* Update dynamic Slurm script generation to use a template instead of hard-coded strings, and add the Slurm configuration to the generated script to improve reusability
* Parse vLLM engine args dynamically instead of handling only a selected subset of arguments
* Update module and function docstrings
* Update docs, bump version
This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/latest/). **All scripts in this repository run natively on the Vector Institute cluster environment**. To adapt to other environments, update the environment variables in [`vec_inf/client/slurm_vars.py`](vec_inf/client/slurm_vars.py) and the model config for cached model weights in [`vec_inf/config/models.yaml`](vec_inf/config/models.yaml) accordingly.
## Installation
If you are using the Vector cluster environment and you don't need any customization to the inference server environment, run the following to install the package:
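A minimal sketch, assuming the package is published on PyPI under the name `vec-inf`:

```bash
pip install vec-inf
```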
Otherwise, we recommend using the provided [`Dockerfile`](Dockerfile) to set up your own environment with the package.
## Usage
Vector Inference provides two user interfaces: a CLI and an API.

### CLI

#### `launch` command
The `launch` command allows users to deploy a model as a Slurm job. If the job successfully launches, a URL endpoint is exposed for the user to send requests for inference.
Models that are already supported by `vec-inf` are launched using the cached configuration (set in [slurm_vars.py](vec_inf/client/slurm_vars.py)) or the [default configuration](vec_inf/config/models.yaml). You can override these values by providing additional parameters. Use `vec-inf launch --help` to see the full list of parameters that can be overridden. For example, if `qos` is to be overridden:
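A hedged sketch of such an override, assuming `qos` is exposed as a `--qos` flag; the model name and QoS value below are placeholders:

```bash
# Launch a supported model and override the default Slurm QoS for the job
vec-inf launch Meta-Llama-3.1-8B-Instruct --qos <your-qos>
```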
For the full list of vLLM engine arguments, see the [vLLM documentation](https://docs.vllm.ai/en/stable/serving/engine_args.html); make sure you select the correct vLLM version.
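Since engine arguments are parsed dynamically (see the change summary above), they can presumably be appended to `launch` like any other option; a hedged sketch using vLLM's `--max-model-len`:

```bash
# Pass a vLLM engine argument through to the server (the passthrough syntax is an assumption)
vec-inf launch Meta-Llama-3.1-8B-Instruct --max-model-len 8192
```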
#### Custom models
You can also launch your own custom model as long as the model architecture is [supported by vLLM](https://docs.vllm.ai/en/stable/models/supported_models.html); make sure to follow the instructions below:
When the server is ready, you should see an output like this:
Note that there are other parameters that can also be added to the config but are not shown in this example; check the [`ModelConfig`](vec_inf/client/config.py) for details.
* `status`: Check the model status by providing its Slurm job ID, `--json-mode` supported.
* `metrics`: Stream performance metrics to the console (see the `metrics` command section below).
* `shutdown`: Shut down a model by providing its Slurm job ID.
* `list`: List all available model names, or view the default/cached configuration of a specific model, `--json-mode` supported.
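A minimal usage sketch of these commands, reusing the example Slurm job ID from the `metrics` section below (exact flags may vary by version):

```bash
vec-inf status 15373800 --json-mode   # check the model status, output as JSON
vec-inf list                          # list all available model names
vec-inf shutdown 15373800             # shut down the model / cancel the Slurm job
```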
The `status` command shows one of the following states:
* **PENDING**: Job submitted to Slurm, but not executed yet. Job pending reason will be shown.
* **LAUNCHING**: Job is running but the server is not ready yet.
* **READY**: Inference server running and ready to take requests.
* **FAILED**: Inference server in an unhealthy state. Job failed reason will be shown.
* **SHUTDOWN**: Inference server is shutdown/cancelled.
For more details on the usage of these commands, refer to the [User Guide](https://vectorinstitute.github.io/vector-inference/user_guide/).
Note that the base URL is only available when the model is in the `READY` state, and if you've changed the Slurm log directory path, you also need to specify it when using the `status` command.
#### `metrics` command
Once your server is ready, you can check performance metrics by providing the Slurm job ID to the `metrics` command:
```bash
vec-inf metrics 15373800
```
You will see the performance metrics streamed to your console; note that the metrics are updated at a 2-second interval.
The `launch`, `list`, and `status` commands support `--json-mode`, where the command output is structured as a JSON string.
With every model launch, a Slurm script is generated dynamically based on the job and model configuration. Once the Slurm job is queued, the generated Slurm script is moved to the log directory for reproducibility, located at `$log_dir/$model_family/$model_name.$slurm_job_id/$model_name.$slurm_job_id.slurm`. In the same directory you can also find a JSON file with the same name that captures the launch configuration and will have an entry for the server URL once the server is ready.
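For example, to inspect the generated artifacts for a job (the `$...` segments below are the placeholders from the path above, to be substituted with your own values):

```bash
# List the per-job log directory containing the generated Slurm script and launch-config JSON
ls $log_dir/$model_family/$model_name.$slurm_job_id/
# View the dynamically generated Slurm script
cat $log_dir/$model_family/$model_name.$slurm_job_id/$model_name.$slurm_job_id.slurm
```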
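### API

In addition to the CLI, Vector Inference provides a Python API that exposes the same functionality programmatically; refer to the [User Guide](https://vectorinstitute.github.io/vector-inference/user_guide/) for details.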
## Send inference requests
Once the inference server is ready, you can start sending inference requests. We provide example scripts for sending inference requests in the [`examples`](examples) folder. Make sure to update the model server URL and the model weights location in the scripts. For example, you can run `python examples/inference/llm/chat_completions.py` and you should see the model's chat completion response printed to your console.
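If you prefer sending a request directly rather than using the example scripts, here is a hedged sketch assuming the server exposes vLLM's OpenAI-compatible chat completions endpoint; the base URL, port, and model name are placeholders to replace with the values reported for your job:

```bash
curl http://<server-url>:<port>/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "<model-name>",
        "messages": [{"role": "user", "content": "What is the capital of Canada?"}]
      }'
```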
Changes to `docs/index.md` (1 addition, 1 deletion):
# Vector Inference: Easy inference on Slurm clusters
This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/latest/). **All scripts in this repository run natively on the Vector Institute cluster environment**. To adapt to other environments, update the environment variables in [`vec_inf/client/slurm_vars.py`](https://github.com/VectorInstitute/vector-inference/blob/main/vec_inf/client/slurm_vars.py) and the model config for cached model weights in [`vec_inf/config/models.yaml`](https://github.com/VectorInstitute/vector-inference/blob/main/vec_inf/config/models.yaml) accordingly.