Commit 13abdd3

Add Quick search support for ensemble composing model parameter ranges

Fix copyrights. Fix tests.

1 parent 2b36a7b · commit 13abdd3

9 files changed: +1226 −64 lines

docs/config_search.md

Lines changed: 83 additions & 14 deletions
@@ -1,17 +1,6 @@
 <!--
-Copyright (c) 2020-2024, NVIDIA CORPORATION. All rights reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
+SPDX-FileCopyrightText: Copyright (c) 2020-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+SPDX-License-Identifier: Apache-2.0
 -->
 
 # Table of Contents
@@ -240,7 +229,8 @@ manual sweep:
 
 _This mode has the following limitations:_
 
-- If model config parameters are specified, they can contain only one possible combination of parameters
+- Top-level models can contain only one possible combination of model config parameters
+- Composing models (ensemble/BLS sub-models) can specify parameter ranges if they follow specific patterns (see [Ensemble Composing Model Parameter Ranges](#ensemble-composing-model-parameter-ranges))
 
 This mode uses a hill climbing algorithm to search the configuration space, looking for
 the maximal objective value within the specified constraints. In the majority of cases
@@ -262,6 +252,77 @@ profile_models:
 
 ---
 
+### **Ensemble Composing Model Parameter Ranges**
+
+When profiling ensemble or BLS models in Quick search mode, composing models (sub-models) can specify parameter ranges for `instance_group` count. This enables optimization of composing models with different resource requirements, such as:
+
+- CPU-bound models (tokenizers, preprocessors) that may benefit from higher instance counts
+- GPU-bound models (inference models, embeddings) with limited GPU memory
+
+**Supported Instance Count Patterns:**
+
+Model Analyzer supports two types of instance count sequences that map to Quick search's coordinate system:
+
+1. **Powers of 2**: `[1, 2, 4, 8, 16, 32]` or subsets like `[2, 4, 8]`
+   - Maps to exponential search dimensions
+   - Recommended for most use cases
+
+2. **Contiguous sequences**: `[1, 2, 3, 4, 5]` or ranges like `[5, 6, 7, 8]`
+   - Maps to linear search dimensions
+   - Useful for fine-grained control
+
+**Important Notes:**
+
+- Only composing models can specify instance count ranges in Quick mode
+- Top-level models (non-composing) must still have a single parameter combination
+- Only powers of 2 or contiguous sequences are supported; arbitrary value lists (e.g., `[1, 3, 7, 15]`) are not supported
+- Composing models are identified using the `cpu_only_composing_models` or `bls_composing_models` configuration options
+
+---
+
+_An example with an ensemble containing a CPU tokenizer and a GPU inference model:_
+
+```yaml
+model_repository: /path/to/model/repository/
+
+run_config_search_mode: quick
+export_path: /tmp/results
+override_output_model_repository: true
+
+cpu_only_composing_models:
+  - tokenizer
+
+profile_models:
+  tokenizer:
+    model_config_parameters:
+      instance_group:
+        - kind: KIND_CPU
+          count: [1, 2, 4, 8, 16, 32] # Powers of 2 sequence
+      dynamic_batching:
+        max_queue_delay_microseconds: [0]
+
+  inference_model:
+    model_config_parameters:
+      instance_group:
+        - kind: KIND_GPU
+          count: [1, 2, 4, 8] # Subset of powers of 2
+      dynamic_batching:
+        max_queue_delay_microseconds: [0]
+
+  ensemble_model:
+    model_config_parameters:
+      dynamic_batching:
+        max_queue_delay_microseconds: [0]
+```
+
+In this example:
+- The tokenizer (CPU model) searches instance counts from 1 to 32
+- The inference model (GPU model) searches instance counts from 1 to 8
+- Quick search explores both dimensions in parallel to find the optimal combination
+- The ensemble model itself has fixed parameters (single combination)
+
+---
+
 ### **Limiting Batch Size, Instance Group, and Client Concurrency**
 
 Using the `--run-config-search-<min/max>...` config options you have the ability to clamp the algorithm's upper or lower bounds for the model's batch size and instance group count, as well as the client's request concurrency.
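
The mapping from count lists to Quick search dimensions is handled by parts of this commit that are not shown in the diff above. As a rough illustration of the two supported patterns described in the added docs section, here is a minimal, hypothetical sketch; the function name and its placement are not taken from this commit:

```python
from typing import List


def classify_count_sequence(counts: List[int]) -> str:
    """Classify a count list as 'exponential', 'linear', or 'unsupported'."""
    if not counts or sorted(counts) != counts:
        return "unsupported"
    # Consecutive powers of 2 (e.g. [1, 2, 4, 8]) map to an exponential dimension
    if all(c > 0 and (c & (c - 1)) == 0 for c in counts) and all(
        b == 2 * a for a, b in zip(counts, counts[1:])
    ):
        return "exponential"
    # Contiguous sequences (e.g. [5, 6, 7, 8]) map to a linear dimension
    if all(b == a + 1 for a, b in zip(counts, counts[1:])):
        return "linear"
    return "unsupported"


print(classify_count_sequence([1, 2, 4, 8, 16, 32]))  # exponential
print(classify_count_sequence([5, 6, 7, 8]))          # linear
print(classify_count_sequence([1, 3, 7, 15]))         # unsupported
```
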
@@ -398,6 +459,10 @@ _This mode has the following limitations:_
 
 Ensemble models can be optimized using the Quick Search mode's hill climbing algorithm to search the composing models' configuration spaces in parallel, looking for the maximal objective value within the specified constraints. Model Analyzer has observed positive outcomes towards finding the maximum objective value; with runtimes under one hour (compared to the days it would take a brute force run to complete) for ensembles that contain up to four composing models.
 
+**Composing Model Parameter Ranges:**
+
+Composing models within ensembles can specify instance count ranges to optimize models with different resource requirements (e.g., CPU tokenizers vs GPU inference models). See [Ensemble Composing Model Parameter Ranges](#ensemble-composing-model-parameter-ranges) for details on supported patterns and configuration examples.
+
 After Model Analyzer has found the best config(s), it will then sweep the top-N configurations found (specified by `--num-configs-per-model`) over the concurrency range before generation of the summary reports.
 
 ---
@@ -412,6 +477,10 @@ _This mode has the following limitations:_
 
 BLS models can be optimized using the Quick Search mode's hill climbing algorithm to search the BLS composing models' configuration spaces, as well as the BLS model's instance count, in parallel, looking for the maximal objective value within the specified constraints. Model Analyzer has observed positive outcomes towards finding the maximum objective value; with runtimes under one hour (compared to the days it would take a brute force run to complete) for BLS models that contain up to four composing models.
 
+**Composing Model Parameter Ranges:**
+
+BLS composing models can specify instance count ranges to optimize models with different resource requirements. Models are identified using the `bls_composing_models` configuration parameter. See [Ensemble Composing Model Parameter Ranges](#ensemble-composing-model-parameter-ranges) for details on supported patterns and configuration examples.
+
 After Model Analyzer has found the best config(s), it will then sweep the top-N configurations found (specified by `--num-configs-per-model`) over the concurrency range before generation of the summary reports.
 
 ---

model_analyzer/config/generate/quick_run_config_generator.py

Lines changed: 59 additions & 7 deletions
@@ -25,6 +25,7 @@
 from model_analyzer.config.run.model_run_config import ModelRunConfig
 from model_analyzer.config.run.run_config import RunConfig
 from model_analyzer.constants import LOGGER_NAME
+from model_analyzer.model_analyzer_exceptions import TritonModelAnalyzerException
 from model_analyzer.perf_analyzer.perf_config import PerfAnalyzerConfig
 from model_analyzer.result.run_config_measurement import RunConfigMeasurement
 from model_analyzer.triton.model.model_config import ModelConfig
@@ -425,33 +426,53 @@ def _get_next_model_config_variant(
         )
 
         model_config_params = deepcopy(model.model_config_parameters())
+
+        # Extract user-specified instance_group kind before removing it
+        instance_kind = self._extract_instance_group_kind(model_config_params)
+        if not instance_kind:
+            # Fallback to cpu_only flag
+            instance_kind = "KIND_CPU" if model.cpu_only() else "KIND_GPU"
+
         if model_config_params:
+            # Remove parameters that are controlled by search dimensions
             model_config_params.pop("max_batch_size", None)
+            model_config_params.pop("instance_group", None)
 
-            # This is guaranteed to only generate one combination (check is in config_command)
+            # Generate combinations from remaining parameters
+            # For composing models, this may include dynamic_batching settings, etc.
             param_combos = GeneratorUtils.generate_combinations(model_config_params)
-            assert len(param_combos) == 1
 
-            param_combo = param_combos[0]
+            # Top-level models must have exactly 1 combination (validated earlier)
+            # Composing models can have 1 combination (non-searchable params are fixed)
+            if len(param_combos) > 1:
+                raise TritonModelAnalyzerException(
+                    f"Model {model.model_name()} has multiple parameter combinations "
+                    f"after removing searchable parameters. This should have been caught "
+                    f"during config validation."
+                )
+
+            param_combo = param_combos[0] if param_combos else {}
         else:
             param_combo = {}
 
-        kind = "KIND_CPU" if model.cpu_only() else "KIND_GPU"
+        # Add instance_group with count from dimension and kind from config
         instance_count = self._calculate_instance_count(dimension_values)
-
         param_combo["instance_group"] = [
             {
                 "count": instance_count,
-                "kind": kind,
+                "kind": instance_kind,
             }
         ]
 
+        # Add max_batch_size from dimension if applicable
         if "max_batch_size" in dimension_values:
            param_combo["max_batch_size"] = self._calculate_model_batch_size(
                dimension_values
            )
 
-        if model.supports_dynamic_batching():
+        # Add default dynamic_batching if model supports it and not already specified
+        # Preserves user-specified dynamic_batching settings (single combinations only)
+        if model.supports_dynamic_batching() and "dynamic_batching" not in param_combo:
             param_combo["dynamic_batching"] = {}
 
         model_config_variant = BaseModelConfigGenerator.make_model_config_variant(
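
To make the flow above concrete, a small hypothetical walk-through (values assumed, not taken from this commit) of what `param_combo` could end up holding for the tokenizer composing model from the docs example, once the search dimension has selected an instance count of 4:

```python
# Hypothetical values for illustration only.
instance_kind = "KIND_CPU"  # preserved from the user's instance_group config
instance_count = 4          # derived from the current search-dimension value

# The remaining, non-searchable parameters yielded exactly one combination.
param_combo = {"dynamic_batching": {"max_queue_delay_microseconds": 0}}

# instance_group is rebuilt from the dimension count plus the preserved kind.
param_combo["instance_group"] = [{"count": instance_count, "kind": instance_kind}]

print(param_combo)
# {'dynamic_batching': {'max_queue_delay_microseconds': 0}, 'instance_group': [{'count': 4, 'kind': 'KIND_CPU'}]}
```

Because `dynamic_batching` is already present, the default-`dynamic_batching` branch at the end of the method is skipped, so the user's setting is preserved.
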
@@ -463,6 +484,37 @@ def _get_next_model_config_variant(
 
         return model_config_variant
 
+    def _extract_instance_group_kind(self, model_config_params: dict) -> str:
+        """
+        Extract the 'kind' field from instance_group in model_config_parameters.
+
+        Returns empty string if not found or if instance_group is not specified.
+        """
+        if not model_config_params or "instance_group" not in model_config_params:
+            return ""
+
+        instance_group = model_config_params["instance_group"]
+
+        # Handle various nested list structures from config parsing
+        if isinstance(instance_group, list) and len(instance_group) > 0:
+            # Handle nested structure: [[ {...} ]]
+            while (
+                isinstance(instance_group, list)
+                and len(instance_group) > 0
+                and isinstance(instance_group[0], list)
+            ):
+                instance_group = instance_group[0]
+
+            # Now should have [{...}] structure
+            if (
+                isinstance(instance_group, list)
+                and len(instance_group) > 0
+                and isinstance(instance_group[0], dict)
+            ):
+                return instance_group[0].get("kind", "")
+
+        return ""
+
     def _create_next_model_run_config(
         self,
         model: ModelProfileSpec,

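A quick way to see the nested shapes this helper is meant to unwrap. This assumes the repo's `model_analyzer` package is importable and that the class in this module is named `QuickRunConfigGenerator`; since the method does not touch `self`, it can be called unbound for a spot check:

```python
from model_analyzer.config.generate.quick_run_config_generator import (
    QuickRunConfigGenerator,
)

extract = QuickRunConfigGenerator._extract_instance_group_kind

flat = {"instance_group": [{"kind": "KIND_CPU", "count": [1, 2, 4]}]}
nested = {"instance_group": [[{"kind": "KIND_GPU", "count": [1, 2]}]]}

print(extract(None, flat))    # KIND_CPU
print(extract(None, nested))  # KIND_GPU  -- nested [[{...}]] gets unwrapped
print(extract(None, {}))      # ""        -- no instance_group specified
```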