
Commit 31d6f5b

Simplify top level README, redirect model tracking to new location
1 parent: f104df3

2 files changed: +5 -64 lines

README.md

Lines changed: 2 additions & 61 deletions
@@ -12,7 +12,7 @@
 
 This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/latest/). **This package runs natively on the Vector Institute cluster environments**. To adapt to other environments, follow the instructions in [Installation](#installation).
 
-**NOTE**: Supported models on Killarney are tracked [here](vec_inf/config/README.md)
+**NOTE**: Supported models on Killarney are tracked [here](./MODEL_TRACKING.md)
 
 ## Installation
 If you are using the Vector cluster environment, and you don't need any customization to the inference server environment, run the following to install the package:
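
The install command that the context line above leads into is not shown in this hunk; it appears in the docs/index.md diff below. A minimal sketch for a stock Vector cluster setup:

```bash
# Standard installation, assuming no customization of the
# inference server environment is needed.
pip install vec-inf
```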
@@ -48,66 +48,7 @@ You should see an output like the following:
 * `--account`, `-A`: The Slurm account, this argument can be set to default by setting environment variable `VEC_INF_ACCOUNT`.
 * `--work-dir`, `-D`: A working directory other than your home directory, this argument can be set to default by setting environment variable `VEC_INF_WORK_DIR`.
 
-#### Overrides
-
-Models that are already supported by `vec-inf` would be launched using the cached configuration (set in [slurm_vars.py](vec_inf/client/slurm_vars.py)) or [default configuration](vec_inf/config/models.yaml). You can override these values by providing additional parameters. Use `vec-inf launch --help` to see the full list of parameters that can be
-overridden. For example, if `resource-type` is to be overridden:
-
-```bash
-vec-inf launch Meta-Llama-3.1-8B-Instruct --resource-type <new_resource_type>
-```
-
-To overwrite default `vllm serve` arguments, you can specify the arguments in a comma-separated string:
-
-```bash
-vec-inf launch Meta-Llama-3.1-8B-Instruct --vllm-args '--max-model-len=65536,--compilation-config=3'
-```
-
-For the full list of `vllm serve` arguments, you can find them [here](https://docs.vllm.ai/en/stable/cli/serve.html); make sure you select the correct vLLM version.
-
-#### Custom models
-
-You can also launch your own custom model as long as the model architecture is [supported by vLLM](https://docs.vllm.ai/en/stable/models/supported_models.html); make sure to follow the instructions below:
-* Your model weights directory naming convention should follow `$MODEL_FAMILY-$MODEL_VARIANT` ($MODEL_VARIANT is OPTIONAL).
-* Your model weights directory should contain HuggingFace format weights.
-* You should specify your model configuration by:
-    * Creating a custom configuration file for your model and specifying its path via the environment variable `VEC_INF_MODEL_CONFIG` (this will supersede `VEC_INF_CONFIG_DIR` if that is also set). Check the [default parameters](vec_inf/config/models.yaml) file for the format of the config file. All the parameters for the model should be specified in that config file.
-    * Adding your model configuration to the cached `models.yaml` in your cluster environment (if you have write access to the cached configuration directory).
-    * Using launch command options to specify your model setup.
-* For other model launch parameters you can reference the default values for similar models using the [`list` command](#list-command).
-
-Here is an example to deploy a custom [Qwen2.5-7B-Instruct-1M](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M) model which is not
-supported in the default list of models using a user custom config. In this case, the model weights are assumed to be downloaded to
-a `model-weights` directory inside the user's home directory. The weights directory of the model follows the naming convention so it
-would be named `Qwen2.5-7B-Instruct-1M`. The following yaml file would need to be created, let's say it is named `/h/<username>/my-model-config.yaml`.
-
-```yaml
-models:
-  Qwen2.5-7B-Instruct-1M:
-    model_family: Qwen2.5
-    model_variant: 7B-Instruct-1M
-    model_type: LLM
-    gpus_per_node: 1
-    num_nodes: 1
-    vocab_size: 152064
-    time: 08:00:00
-    resource_type: l40s # You can also leave this field empty if your environment has a default type of resource to use
-    model_weights_parent_dir: /h/<username>/model-weights
-    vllm_args:
-      --max-model-len: 1010000
-      --max-num-seqs: 256
-```
-
-You would then set the `VEC_INF_MODEL_CONFIG` path using:
-
-```bash
-export VEC_INF_MODEL_CONFIG=/h/<username>/my-model-config.yaml
-```
-
-**NOTE**
-* There are other parameters that can also be added to the config but not shown in this example; check [`ModelConfig`](vec_inf/client/config.py) for details.
-* Check [`vllm serve`](https://docs.vllm.ai/en/stable/cli/serve.html) for the full list of available vLLM engine arguments; the parallel size for any parallelization defaults to 1, so none of the sizes were set explicitly in this example.
-* For GPU partitions with non-Ampere architectures, e.g. `rtx6000`, `t4v2`, BF16 isn't supported. For models that have BF16 as the default type, when using a non-Ampere GPU, use FP16 instead, i.e. `--dtype: float16`.
+Models that are already supported by `vec-inf` would be launched using the cached configuration (set in [slurm_vars.py](vec_inf/client/slurm_vars.py)) or [default configuration](vec_inf/config/models.yaml). You can override these values by providing additional parameters. Use `vec-inf launch --help` to see the full list of parameters that can be overridden. You can also launch your own custom model as long as the model architecture is [supported by vLLM](https://docs.vllm.ai/en/stable/models/supported_models.html). For detailed instructions on how to customize your model launch, check out the [`launch` command section in the User Guide](https://vectorinstitute.github.io/vector-inference/latest/user_guide/#launch-command)
 
 #### Other commands
 
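The removed examples above still compose into a complete launch sequence. A minimal sketch, assuming hypothetical values for the Slurm account and work directory (the model name, resource type, and vLLM arguments are taken from the removed section):

```bash
# Hypothetical account and work directory; substitute your own values.
export VEC_INF_ACCOUNT=my-account
export VEC_INF_WORK_DIR=/scratch/my-username/vec-inf

# Launch a supported model, overriding the cached resource type and
# passing extra `vllm serve` arguments as a comma-separated string.
vec-inf launch Meta-Llama-3.1-8B-Instruct \
    --resource-type l40s \
    --vllm-args '--max-model-len=65536,--compilation-config=3'
```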
docs/index.md

Lines changed: 3 additions & 3 deletions
@@ -2,7 +2,7 @@
 
 This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/stable/). **This package runs natively on the Vector Institute cluster environment**. To adapt to other environments, follow the instructions in [Installation](#installation).
 
-**NOTE**: Supported models on Killarney are tracked [here](vec_inf/config/README.md)
+**NOTE**: Supported models on Killarney are tracked [here](https://github.com/VectorInstitute/vector-inference/blob/main/MODEL_TRACKING.md)
 
 ## Installation
 
@@ -15,6 +15,6 @@ pip install vec-inf
 Otherwise, we recommend using the provided [`Dockerfile`](https://github.com/VectorInstitute/vector-inference/blob/main/Dockerfile) to set up your own environment with the package. The latest image has `vLLM` version `0.10.1.1`.
 
 If you'd like to use `vec-inf` on your own Slurm cluster, you would need to update the configuration files; there are 3 ways to do it:
-* Clone the repository and update the `environment.yaml` and the `models.yaml` file in [`vec_inf/config`](vec_inf/config/), then install from source by running `pip install .`.
-* The package would try to look for cached configuration files in your environment before using the default configuration. The default cached configuration directory path points to `/model-weights/vec-inf-shared`; you would need to create an `environment.yaml` and a `models.yaml` following the format of these files in [`vec_inf/config`](vec_inf/config/).
+* Clone the repository and update the `environment.yaml` and the `models.yaml` file in [`vec_inf/config`](https://github.com/VectorInstitute/vector-inference/blob/main/vec_inf/config), then install from source by running `pip install .`.
+* The package would try to look for cached configuration files in your environment before using the default configuration. The default cached configuration directory path points to `/model-weights/vec-inf-shared`; you would need to create an `environment.yaml` and a `models.yaml` following the format of these files in [`vec_inf/config`](https://github.com/VectorInstitute/vector-inference/blob/main/vec_inf/config).
 * The package would also look for an environment variable `VEC_INF_CONFIG_DIR`. You can put your `environment.yaml` and `models.yaml` in a directory of your choice and set the environment variable `VEC_INF_CONFIG_DIR` to point to that location.
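
A minimal sketch of the `VEC_INF_CONFIG_DIR` option from the last bullet, assuming the repository has been cloned so the default files can be copied as a starting point (the target directory is a placeholder):

```bash
# Hypothetical config directory; any writable location works.
mkdir -p ~/my-vec-inf-config
cp vec_inf/config/environment.yaml vec_inf/config/models.yaml ~/my-vec-inf-config/

# Point vec-inf at the custom configuration directory.
export VEC_INF_CONFIG_DIR=~/my-vec-inf-config
```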
