Conversation

@brauliobo

- Added `-g` / `--gpu-device` CLI option to select a specific GPU device.
- Added support for `GPU_DEVICE` environment variable as a fallback.
- Updated `whisper_params` to store the `gpu_device` index.
- Passed the selected device ID to `whisper_context_params` initialization.
- Applied changes to both `examples/cli/cli.cpp` and `examples/server/server.cpp`.
```cpp
else if (arg == "-dtw" || arg == "--dtw")        { params.dtw        = ARGV_NEXT; }
else if (arg == "-ls"  || arg == "--log-score")  { params.log_score  = true; }
else if (arg == "-ng"  || arg == "--no-gpu")     { params.use_gpu    = false; }
else if (arg == "-g"   || arg == "--gpu-device") { params.gpu_device = std::stoi(ARGV_NEXT); }
```
Member
Perhaps we should use --device here to be consistent with llama.cpp.

```cpp
}

bool whisper_params_parse(int argc, char ** argv, whisper_params & params, server_params & sparams) {
    if (const char * env_gpu_device = std::getenv("GPU_DEVICE")) {
```
Member
And perhaps this should be WHISPER_ARG_DEVICE to be consistent with llama.cpp.
