Commit db7fa66 (parent 2b36a7b)

Add Quick search support for ensemble composing model parameter ranges

- Fix copyrights
- Fix tests
- Update tests
- Allow user to specify composing models instead of relying on auto-discovery
- Warn when there is a non-existent composing model
- Update copyrights
- Fix model name YAML
- Correctly get kind
- Properly set CPU/GPU kind
- Address Copilot feedback

15 files changed: +1947 −169 lines

docs/config_search.md

Lines changed: 91 additions & 14 deletions
@@ -1,17 +1,6 @@
 <!--
-Copyright (c) 2020-2024, NVIDIA CORPORATION. All rights reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
+SPDX-FileCopyrightText: Copyright (c) 2020-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+SPDX-License-Identifier: Apache-2.0
 -->

 # Table of Contents
@@ -240,7 +229,8 @@ manual sweep:

 _This mode has the following limitations:_

-- If model config parameters are specified, they can contain only one possible combination of parameters
+- Top-level models can contain only one possible combination of model config parameters
+- Composing models (ensemble/BLS sub-models) can specify parameter ranges if they follow specific patterns (see [Ensemble Composing Model Parameter Ranges](#ensemble-composing-model-parameter-ranges))

 This mode uses a hill climbing algorithm to search the configuration space, looking for
 the maximal objective value within the specified constraints. In the majority of cases
@@ -262,6 +252,85 @@ profile_models:

---

### **Ensemble Composing Model Parameter Ranges**

When profiling ensemble or BLS models in Quick search mode, composing models (sub-models) can specify parameter ranges for `instance_group` count. This enables optimization of composing models with different resource requirements, such as:

- CPU-bound models (tokenizers, preprocessors) that may benefit from higher instance counts
- GPU-bound models (inference models, embeddings) with limited GPU memory

**Supported Instance Count Patterns:**

Model Analyzer supports two types of instance count sequences that map to Quick search's coordinate system:

1. **Powers of 2**: `[1, 2, 4, 8, 16, 32]` or subsets like `[2, 4, 8]`
   - Maps to exponential search dimensions
   - Recommended for most use cases

2. **Contiguous sequences**: `[1, 2, 3, 4, 5]` or ranges like `[5, 6, 7, 8]`
   - Maps to linear search dimensions
   - Useful for fine-grained control
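To make the mapping concrete, here is a minimal, illustrative sketch (not code from this commit; the helper name `classify_count_sequence` is hypothetical) of how a count list could be classified into the exponential or linear dimension described above:

```python
from typing import List, Optional


def classify_count_sequence(counts: List[int]) -> Optional[str]:
    """Classify an instance-count list the way the documentation describes:
    powers of 2 map to an exponential dimension, contiguous values map to a
    linear dimension, and anything else is unsupported (returns None)."""
    if not counts:
        return None
    ordered = sorted(counts)
    is_powers_of_two = all((c > 0) and (c & (c - 1)) == 0 for c in ordered) and all(
        ordered[i + 1] == 2 * ordered[i] for i in range(len(ordered) - 1)
    )
    if is_powers_of_two:
        return "exponential"  # e.g. [1, 2, 4, 8, 16, 32] or [2, 4, 8]
    if all(ordered[i + 1] == ordered[i] + 1 for i in range(len(ordered) - 1)):
        return "linear"  # e.g. [1, 2, 3, 4, 5] or [5, 6, 7, 8]
    return None  # e.g. [1, 3, 7, 15] is rejected


assert classify_count_sequence([1, 2, 4, 8, 16, 32]) == "exponential"
assert classify_count_sequence([5, 6, 7, 8]) == "linear"
assert classify_count_sequence([1, 3, 7, 15]) is None
```
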
**Important Notes:**

- Only composing models can specify instance count ranges in Quick mode
- Top-level models (non-composing) must still have a single parameter combination
- Only powers of 2 or contiguous sequences are supported; arbitrary value lists (e.g., `[1, 3, 7, 15]`) are not supported
- Composing models are identified using the `ensemble_composing_models`, `bls_composing_models`, or `cpu_only_composing_models` configuration options
---

_An example with an ensemble containing a CPU tokenizer and a GPU inference model:_

```yaml
model_repository: /path/to/model/repository/

run_config_search_mode: quick
export_path: /tmp/results
override_output_model_repository: true

profile_models:
  - ensemble_model

ensemble_composing_models:
  tokenizer:
    model_config_parameters:
      instance_group:
        - kind: KIND_CPU
          count: [1, 2, 4, 8, 16, 32] # Powers of 2 sequence
      dynamic_batching:
        max_queue_delay_microseconds: [0]
  inference_model:
    model_config_parameters:
      instance_group:
        - kind: KIND_GPU
          count: [1, 2, 4, 8] # Subset of powers of 2
      dynamic_batching:
        max_queue_delay_microseconds: [0]
```
In this example:
- Only the ensemble is listed in `profile_models`; composing models are auto-discovered from `ensemble_scheduling`
- The `ensemble_composing_models` section provides configurations for the auto-discovered models
- The tokenizer (CPU model) searches instance counts from 1 to 32
- The inference model (GPU model) searches instance counts from 1 to 8
- Quick search explores both dimensions in parallel to find the optimal combination
- The ensemble model itself uses default parameters
- Any models specified in `ensemble_composing_models` that don't exist in the ensemble will be ignored with a warning
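For a rough sense of the search space (illustrative arithmetic only, not output from the tool), the two ranges above define a 6 × 4 grid of instance-count pairs that a brute-force sweep would have to cover, whereas the hill-climbing search visits only a subset of it:

```python
# Candidate instance counts taken from the YAML example above (assumed values).
tokenizer_counts = [1, 2, 4, 8, 16, 32]   # CPU tokenizer dimension: 6 values
inference_counts = [1, 2, 4, 8]           # GPU inference dimension: 4 values

# A brute-force sweep over instance counts alone would profile every pair,
# before batch size and client concurrency are even considered.
print(len(tokenizer_counts) * len(inference_counts))  # 24
```
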
**Instance Group Kind:**

The `kind` field (`KIND_CPU` or `KIND_GPU`) in `instance_group` is respected when explicitly specified. This allows you to control whether a model runs on CPU or GPU directly in the config, without needing the separate `cpu_only_composing_models` option.

Priority order for determining instance kind:
1. **Explicit `kind` in `instance_group`** (highest priority) - if you specify `kind: KIND_CPU` or `kind: KIND_GPU`, that value is used
2. **`cpu_only_composing_models` config** - models listed here will use `KIND_CPU`
3. **Default to `KIND_GPU`** (lowest priority) - if neither is specified, models default to GPU instances

This means you can override `cpu_only_composing_models` by explicitly specifying `kind: KIND_GPU` in the `instance_group`.

---
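The same priority order, expressed as a minimal standalone sketch (a hypothetical helper for illustration, not the commit's code; the actual logic lives in `model_profile_spec.py`, shown further down):

```python
from typing import Optional, Set


def resolve_instance_kind(
    explicit_kind: Optional[str],          # kind found in instance_group, if any
    model_name: str,
    cpu_only_composing_models: Set[str],
) -> str:
    # 1. An explicit kind in instance_group wins outright.
    if explicit_kind in ("KIND_CPU", "KIND_GPU"):
        return explicit_kind
    # 2. Otherwise, membership in cpu_only_composing_models forces KIND_CPU.
    if model_name in cpu_only_composing_models:
        return "KIND_CPU"
    # 3. Default: GPU instances.
    return "KIND_GPU"


# An explicit KIND_GPU overrides membership in cpu_only_composing_models.
assert resolve_instance_kind("KIND_GPU", "tokenizer", {"tokenizer"}) == "KIND_GPU"
assert resolve_instance_kind(None, "tokenizer", {"tokenizer"}) == "KIND_CPU"
assert resolve_instance_kind(None, "inference_model", set()) == "KIND_GPU"
```
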
### **Limiting Batch Size, Instance Group, and Client Concurrency**

Using the `--run-config-search-<min/max>...` config options you have the ability to clamp the algorithm's upper or lower bounds for the model's batch size and instance group count, as well as the client's request concurrency.
@@ -398,6 +467,10 @@ _This mode has the following limitations:_

 Ensemble models can be optimized using the Quick Search mode's hill climbing algorithm to search the composing models' configuration spaces in parallel, looking for the maximal objective value within the specified constraints. Model Analyzer has observed positive outcomes towards finding the maximum objective value; with runtimes under one hour (compared to the days it would take a brute force run to complete) for ensembles that contain up to four composing models.

+**Composing Model Parameter Ranges:**
+
+Composing models within ensembles can specify instance count ranges to optimize models with different resource requirements (e.g., CPU tokenizers vs GPU inference models). See [Ensemble Composing Model Parameter Ranges](#ensemble-composing-model-parameter-ranges) for details on supported patterns and configuration examples.
+
 After Model Analyzer has found the best config(s), it will then sweep the top-N configurations found (specified by `--num-configs-per-model`) over the concurrency range before generation of the summary reports.

 ---
@@ -412,6 +485,10 @@ _This mode has the following limitations:_

 BLS models can be optimized using the Quick Search mode's hill climbing algorithm to search the BLS composing models' configuration spaces, as well as the BLS model's instance count, in parallel, looking for the maximal objective value within the specified constraints. Model Analyzer has observed positive outcomes towards finding the maximum objective value; with runtimes under one hour (compared to the days it would take a brute force run to complete) for BLS models that contain up to four composing models.

+**Composing Model Parameter Ranges:**
+
+BLS composing models can specify instance count ranges to optimize models with different resource requirements. Models are identified using the `bls_composing_models` configuration parameter. See [Ensemble Composing Model Parameter Ranges](#ensemble-composing-model-parameter-ranges) for details on supported patterns and configuration examples.
+
 After Model Analyzer has found the best config(s), it will then sweep the top-N configurations found (specified by `--num-configs-per-model`) over the concurrency range before generation of the summary reports.

 ---

model_analyzer/config/generate/model_profile_spec.py

Lines changed: 58 additions & 3 deletions
@@ -3,7 +3,7 @@
 # SPDX-License-Identifier: Apache-2.0

 from copy import deepcopy
-from typing import List
+from typing import List, Optional

 from model_analyzer.config.input.config_command_profile import ConfigCommandProfile
 from model_analyzer.config.input.objects.config_model_profile_spec import (
@@ -34,8 +34,63 @@ def __init__(
             config, client, gpus, config.model_repository, spec.model_name()
         )

-        if spec.model_name() in config.cpu_only_composing_models:
-            self._cpu_only = True
+        # Determine cpu_only based on priority:
+        # 1) User-specified kind in instance_group (highest)
+        # 2) cpu_only_composing_models config
+        # 3) Inherited from spec via deepcopy (default False)
+        explicit_kind = self._get_explicit_instance_kind(spec)
+        if explicit_kind == "KIND_CPU":
+            cpu_only = True
+        elif explicit_kind == "KIND_GPU":
+            cpu_only = False
+        elif spec.model_name() in config.cpu_only_composing_models:
+            cpu_only = True
+        else:
+            cpu_only = spec.cpu_only()
+
+        self._cpu_only = cpu_only
+
+    @staticmethod
+    def _get_explicit_instance_kind(spec: ConfigModelProfileSpec) -> Optional[str]:
+        """
+        Check if the spec has an explicit kind specified in instance_group.
+
+        Returns the kind if explicitly specified, None otherwise.
+        This allows users to specify KIND_CPU or KIND_GPU directly in
+        model_config_parameters.instance_group instead of using the
+        separate cpu_only_composing_models config option.
+
+        The config parser may wrap values in lists for sweep support, so we need
+        to handle structures like:
+        - [[{'count': [1, 2, 4], 'kind': ['KIND_CPU']}]] (double-wrapped, kind is list)
+        - [{'count': [1, 2, 4], 'kind': 'KIND_CPU'}] (single-wrapped, kind is string)
+        """
+        model_config_params = spec.model_config_parameters()
+        if model_config_params is None:
+            return None
+
+        instance_group = model_config_params.get("instance_group")
+        if instance_group is None or not isinstance(instance_group, list):
+            return None
+
+        # instance_group structure can be doubly wrapped due to config parsing:
+        # [[ {'kind': ['KIND_GPU'], 'count': [1, 2, 4]} ]]
+        # Unwrap the nested structure if needed
+        if len(instance_group) > 0 and isinstance(instance_group[0], list):
+            instance_group = instance_group[0]
+
+        # instance_group is now a list of dicts, each potentially containing 'kind'
+        for ig in instance_group:
+            if isinstance(ig, dict):
+                kind = ig.get("kind")
+                # Handle case where kind is wrapped in a list by config parser
+                # e.g., ['KIND_CPU'] instead of 'KIND_CPU'
+                if isinstance(kind, list) and len(kind) > 0:
+                    kind = kind[0]
+                if kind in ("KIND_CPU", "KIND_GPU"):
+                    return kind
+
+        return None

     def get_default_config(self) -> dict:
         """Returns the default configuration for this model"""

model_analyzer/config/generate/quick_run_config_generator.py

Lines changed: 70 additions & 7 deletions
@@ -25,6 +25,7 @@
 from model_analyzer.config.run.model_run_config import ModelRunConfig
 from model_analyzer.config.run.run_config import RunConfig
 from model_analyzer.constants import LOGGER_NAME
+from model_analyzer.model_analyzer_exceptions import TritonModelAnalyzerException
 from model_analyzer.perf_analyzer.perf_config import PerfAnalyzerConfig
 from model_analyzer.result.run_config_measurement import RunConfigMeasurement
 from model_analyzer.triton.model.model_config import ModelConfig
@@ -425,33 +426,53 @@ def _get_next_model_config_variant(
         )

         model_config_params = deepcopy(model.model_config_parameters())
+
+        # Extract user-specified instance_group kind before removing it
+        instance_kind = self._extract_instance_group_kind(model_config_params)
+        if not instance_kind:
+            # Fallback to cpu_only flag
+            instance_kind = "KIND_CPU" if model.cpu_only() else "KIND_GPU"
+
         if model_config_params:
+            # Remove parameters that are controlled by search dimensions
             model_config_params.pop("max_batch_size", None)
+            model_config_params.pop("instance_group", None)

-            # This is guaranteed to only generate one combination (check is in config_command)
+            # Generate combinations from remaining parameters
+            # For composing models, this may include dynamic_batching settings, etc.
             param_combos = GeneratorUtils.generate_combinations(model_config_params)
-            assert len(param_combos) == 1

-            param_combo = param_combos[0]
+            # Top-level models must have exactly 1 combination (validated earlier)
+            # Composing models can have 1 combination (non-searchable params are fixed)
+            if len(param_combos) > 1:
+                raise TritonModelAnalyzerException(
+                    f"Model {model.model_name()} has multiple parameter combinations "
+                    f"after removing searchable parameters. This should have been caught "
+                    f"during config validation."
+                )
+
+            param_combo = param_combos[0] if param_combos else {}
         else:
             param_combo = {}

-        kind = "KIND_CPU" if model.cpu_only() else "KIND_GPU"
+        # Add instance_group with count from dimension and kind from config
         instance_count = self._calculate_instance_count(dimension_values)
-
         param_combo["instance_group"] = [
             {
                 "count": instance_count,
-                "kind": kind,
+                "kind": instance_kind,
             }
         ]

+        # Add max_batch_size from dimension if applicable
         if "max_batch_size" in dimension_values:
             param_combo["max_batch_size"] = self._calculate_model_batch_size(
                 dimension_values
             )

-        if model.supports_dynamic_batching():
+        # Add default dynamic_batching if model supports it and not already specified
+        # Preserves user-specified dynamic_batching settings (single combinations only)
+        if model.supports_dynamic_batching() and "dynamic_batching" not in param_combo:
             param_combo["dynamic_batching"] = {}

         model_config_variant = BaseModelConfigGenerator.make_model_config_variant(
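As an illustration of where this logic ends up (assumed values, not captured from a real run): for a composing tokenizer whose config specified `kind: KIND_CPU` and whose search dimension currently selects 4 instances, the `param_combo` handed to `make_model_config_variant` would look roughly like this:

```python
# Assumed values for a single step of the search, following the logic above:
# - "kind" comes from the user's instance_group via _extract_instance_group_kind
# - "count" comes from _calculate_instance_count(dimension_values)
# - dynamic_batching is the user's single-combination setting that survived
#   GeneratorUtils.generate_combinations after the searchable keys were popped
param_combo = {
    "dynamic_batching": {"max_queue_delay_microseconds": 0},
    "instance_group": [
        {
            "count": 4,
            "kind": "KIND_CPU",
        }
    ],
}
```
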
@@ -463,6 +484,48 @@ def _get_next_model_config_variant(

         return model_config_variant

+    def _extract_instance_group_kind(self, model_config_params: dict) -> str:
+        """
+        Extract the 'kind' field from instance_group in model_config_parameters.
+
+        The config parser may wrap values in lists for sweep support, so we need
+        to handle structures like:
+        - [[{'count': [1, 2, 4], 'kind': ['KIND_CPU']}]] (double-wrapped, kind is list)
+        - [{'count': [1, 2, 4], 'kind': 'KIND_CPU'}] (single-wrapped, kind is string)
+
+        Returns empty string if not found or if instance_group is not specified.
+        """
+        if not model_config_params or "instance_group" not in model_config_params:
+            return ""
+
+        instance_group = model_config_params["instance_group"]
+
+        # Handle various nested list structures from config parsing
+        if isinstance(instance_group, list) and len(instance_group) > 0:
+            # Handle nested structure: [[ {...} ]]
+            while (
+                isinstance(instance_group, list)
+                and len(instance_group) > 0
+                and isinstance(instance_group[0], list)
+            ):
+                instance_group = instance_group[0]
+
+            # Now should have [{...}] structure
+            if (
+                isinstance(instance_group, list)
+                and len(instance_group) > 0
+                and isinstance(instance_group[0], dict)
+            ):
+                kind = instance_group[0].get("kind", "")
+                # Handle case where kind is wrapped in a list by config parser
+                # e.g., ['KIND_CPU'] instead of 'KIND_CPU'
+                if isinstance(kind, list) and len(kind) > 0:
+                    kind = kind[0]
+                if isinstance(kind, str) and kind in ("KIND_CPU", "KIND_GPU"):
+                    return kind
+
+        return ""
+
     def _create_next_model_run_config(
         self,
         model: ModelProfileSpec,
