fix gpu templates for real though #1077
Conversation
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough: Compute and validate the GPU mode in the run script (using `gpuConfigOptions`), expose GPU booleans (`gpu_enabled`, `gpu_acc`, `gpu_mp`) to the template context, and change many batch templates to use them.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
PR Reviewer Guide 🔍
Here are some key observations to aid the review process:
Pull request overview
This PR standardizes GPU configuration variable naming across all HPC cluster templates by replacing the ambiguous gpu variable with the more explicit gpu_enabled. The change ensures consistency in how GPU mode is checked throughout the templating system and adds additional GPU mode variables for future extensibility.
Key Changes
- Replaced `gpu` with `gpu_enabled` across 14 template files for clarity
- Added GPU mode boolean computation in `run.py` that derives `gpu_enabled`, `gpu_acc`, and `gpu_mp` from the `gpuConfigOptions` enum
- Maintained backward compatibility by computing these booleans from the existing `gpu` argument value
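As a minimal illustration of how the renamed flag flows into a rendered batch script, here is a Python stand-in for the Mako expression `${'g' if gpu_enabled else 'c'}` used by the templates; the helper function name is hypothetical, and the `-c o` flag mirrors the oscar template shown later in this page:

```python
def render_load_line(gpu_enabled: bool) -> str:
    # Mimics the Mako expression ${'g' if gpu_enabled else 'c'}:
    # pick the GPU ('g') or CPU ('c') module mode for mfc.sh load.
    mode = 'g' if gpu_enabled else 'c'
    return f". ./mfc.sh load -c o -m {mode}"

print(render_load_line(True))   # . ./mfc.sh load -c o -m g
print(render_load_line(False))  # . ./mfc.sh load -c o -m c
```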
Reviewed changes
Copilot reviewed 15 out of 15 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| toolchain/mfc/run/run.py | Added import of gpuConfigOptions, computed GPU mode booleans (gpu_enabled, gpu_acc, gpu_mp), and passed them to template rendering |
| toolchain/templates/summit.mako | Updated GPU checks from gpu to gpu_enabled in module loading and jsrun configuration |
| toolchain/templates/santis.mako | Updated GPU checks from gpu to gpu_enabled in module loading and SLURM GPU binding |
| toolchain/templates/phoenix.mako | Updated GPU checks from gpu to gpu_enabled in SLURM directives and module loading |
| toolchain/templates/phoenix-bench.mako | Updated GPU checks from gpu to gpu_enabled in SLURM directives and module loading |
| toolchain/templates/oscar.mako | Updated GPU checks from gpu to gpu_enabled in SLURM GPU allocation and module loading |
| toolchain/templates/nautilus.mako | Updated GPU checks from gpu to gpu_enabled in SLURM GPU binding and module loading |
| toolchain/templates/hipergator.mako | Updated GPU checks from gpu to gpu_enabled in SLURM GPU task allocation and module loading |
| toolchain/templates/frontier.mako | Updated GPU checks from gpu != 'no' and gpu to consistent gpu_enabled usage in SLURM directives, module loading, and MPICH GPU support configuration |
| toolchain/templates/deltaai.mako | Updated GPU checks from gpu to gpu_enabled in SLURM GPU allocation and module loading |
| toolchain/templates/delta.mako | Updated GPU checks from gpu to gpu_enabled in SLURM GPU allocation and module loading |
| toolchain/templates/default.mako | Updated GPU check from gpu to gpu_enabled in jsrun GPU resource allocation |
| toolchain/templates/carpenter.mako | Updated GPU check from gpu to gpu_enabled in module loading |
| toolchain/templates/carpenter-cray.mako | Updated GPU check from gpu to gpu_enabled in module loading |
CodeAnt AI finished reviewing your PR.
Actionable comments posted: 2
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- toolchain/mfc/run/run.py (3 hunks)
- toolchain/templates/bridges2.mako (2 hunks)
- toolchain/templates/carpenter-cray.mako (1 hunks)
- toolchain/templates/carpenter.mako (1 hunks)
- toolchain/templates/default.mako (1 hunks)
- toolchain/templates/delta.mako (2 hunks)
- toolchain/templates/deltaai.mako (2 hunks)
- toolchain/templates/frontier.mako (3 hunks)
- toolchain/templates/hipergator.mako (2 hunks)
- toolchain/templates/nautilus.mako (2 hunks)
- toolchain/templates/oscar.mako (2 hunks)
- toolchain/templates/phoenix-bench.mako (2 hunks)
- toolchain/templates/phoenix.mako (2 hunks)
- toolchain/templates/santis.mako (2 hunks)
- toolchain/templates/summit.mako (2 hunks)
🧰 Additional context used
🧠 Learnings (8)
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Ensure GPU code compiles with Cray ftn, NVIDIA nvfortran, GNU gfortran, and Intel ifx/ifort compilers
Applied to files:
toolchain/templates/frontier.mako, toolchain/templates/bridges2.mako, toolchain/templates/delta.mako, toolchain/templates/hipergator.mako, toolchain/templates/summit.mako, toolchain/mfc/run/run.py, toolchain/templates/oscar.mako, toolchain/templates/phoenix.mako, toolchain/templates/phoenix-bench.mako, toolchain/templates/carpenter-cray.mako, toolchain/templates/carpenter.mako, toolchain/templates/default.mako, toolchain/templates/nautilus.mako, toolchain/templates/santis.mako
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Mark GPU-callable helpers with `$:GPU_ROUTINE(function_name='...', parallelism='[seq]')` immediately after declaration
Applied to files:
toolchain/templates/frontier.mako, toolchain/templates/delta.mako, toolchain/templates/hipergator.mako, toolchain/templates/summit.mako, toolchain/templates/oscar.mako, toolchain/templates/phoenix-bench.mako, toolchain/templates/carpenter.mako, toolchain/templates/santis.mako
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight loops with `$:GPU_PARALLEL_FOR(private='[...]', copy='[...]')` macro; add `collapse=n` for safe nested loop merging
Applied to files:
toolchain/templates/frontier.mako, toolchain/templates/hipergator.mako, toolchain/templates/summit.mako, toolchain/templates/oscar.mako, toolchain/templates/phoenix-bench.mako
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)
Applied to files:
toolchain/templates/frontier.mako, toolchain/templates/hipergator.mako, toolchain/templates/summit.mako, toolchain/templates/oscar.mako, toolchain/templates/phoenix-bench.mako
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Declare loop-local variables with `private='[...]'` in GPU parallel loop macros
Applied to files:
toolchain/templates/summit.mako
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Avoid stop/error stop inside GPU device code
Applied to files:
toolchain/templates/oscar.mako
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Compile with Cray `ftn` or NVIDIA `nvfortran` for GPU offloading; also build CPU-only with GNU `gfortran` and Intel `ifx`/`ifort` for portability
Applied to files:
toolchain/templates/phoenix-bench.mako
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Draft a step-by-step plan before making changes; build after each step using `./mfc.sh build -t pre_process simulation -j $(nproc)`
Applied to files:
toolchain/templates/carpenter-cray.mako
🧬 Code graph analysis (1)
toolchain/mfc/run/run.py (1)
toolchain/mfc/state.py (4)
ARG (85-93), ARGS (95-98), CFG (100-103), gpuConfigOptions (5-8)
🪛 GitHub Actions: Lint Toolchain
toolchain/mfc/run/run.py
[error] 104-106: Pylint identified unnecessary parentheses after assignment in lines 104-106. Process exited with code 16.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (23)
- GitHub Check: Detect File Changes
- GitHub Check: Detect File Changes
- GitHub Check: Georgia Tech | Phoenix (NVHPC) (gpu)
- GitHub Check: Oak Ridge | Frontier (CCE) (gpu)
- GitHub Check: Georgia Tech | Phoenix (NVHPC) (cpu)
- GitHub Check: Self Hosted (cpu, none, frontier)
- GitHub Check: Self Hosted (gpu, omp, gt)
- GitHub Check: Self Hosted (cpu, none, gt)
- GitHub Check: Github (macos, mpi, no-debug, false)
- GitHub Check: Self Hosted (gpu, acc, frontier)
- GitHub Check: Self Hosted (gpu, omp, frontier)
- GitHub Check: Github (ubuntu, no-mpi, single, no-debug, false)
- GitHub Check: Github (macos, mpi, debug, false)
- GitHub Check: Self Hosted (gpu, acc, gt)
- GitHub Check: Github (ubuntu, mpi, no-debug, true)
- GitHub Check: Github (ubuntu, mpi, debug, false)
- GitHub Check: Github (ubuntu, mpi, debug, true)
- GitHub Check: Github (ubuntu, mpi, no-debug, false)
- GitHub Check: Coverage Test on CodeCov
- GitHub Check: Code Cleanliness Check
- GitHub Check: Agent
- GitHub Check: cubic · AI code reviewer
- GitHub Check: Build & Publish
🔇 Additional comments (19)
toolchain/templates/carpenter.mako (1)
- 28-28: LGTM! Consistent with the `gpu` → `gpu_enabled` refactoring.

toolchain/templates/phoenix-bench.mako (2)
- 20-23: LGTM! GPU resource allocation correctly gated by `gpu_enabled`.
- 34-34: LGTM! Module loading mode selection correctly uses `gpu_enabled`.

toolchain/templates/oscar.mako (2)
- 17-21: LGTM! SBATCH GPU directives correctly gated by `gpu_enabled`.
- 35-35: LGTM! Module loading correctly uses `gpu_enabled`.

toolchain/templates/deltaai.mako (2)
- 17-21: LGTM! SBATCH GPU configuration correctly gated by `gpu_enabled`.
- 35-35: LGTM! Module loading mode correctly uses `gpu_enabled`.

toolchain/templates/default.mako (1)
- 51-51: LGTM! GPU allocation for jsrun correctly uses `gpu_enabled`. The conditional expression properly assigns 1 GPU per resource set when GPU mode is enabled, 0 otherwise.

toolchain/templates/santis.mako (2)
- 62-62: LGTM! Module loading mode correctly uses `gpu_enabled` for the batch engine.
- 77-79: LGTM! srun GPU allocation correctly gated by `gpu_enabled`.

toolchain/templates/nautilus.mako (2)
- 17-20: LGTM! SBATCH GPU directives correctly gated by `gpu_enabled`.
- 34-34: LGTM! Module loading mode correctly uses `gpu_enabled`.

toolchain/templates/carpenter-cray.mako (1)
- 28-28: LGTM! Verify all templates updated. The rename from `gpu` to `gpu_enabled` improves clarity and aligns with the broader refactoring. Run `rg -n '\$\{\s*gpu\s*[:\}]|%\s+if\s+gpu\s*:|if\s+gpu\s+else' toolchain/templates/ --type mako` to verify no templates still reference the old `gpu` variable.

toolchain/templates/bridges2.mako (1)
- 17-20: Swapping to `gpu_enabled` in SBATCH and the load command looks correct. Using `gpu_enabled` to gate GPU SBATCH directives and to choose `-m g` vs `-m c` in `mfc.sh load` keeps the template consistent with the new boolean from `run.py` and avoids the old truthiness issue of string values. Also applies to: 34-34.

toolchain/templates/delta.mako (1)
- 17-21: GPU resource gating via `gpu_enabled` is consistent and fixes prior string truthiness. Conditioning `--gpus-per-node`, `--mem`, and `--gpu-bind` plus the `mfc.sh` `-m` mode on `gpu_enabled` aligns this template with the new GPU-mode booleans and should prevent requesting GPUs when `gpu='no'`. Also applies to: 35-35.

toolchain/templates/phoenix.mako (1)
- 20-23: Phoenix template now correctly follows `gpu_enabled`. Using `gpu_enabled` for the GPU SBATCH block and for selecting GPU vs CPU module mode is consistent with the new run-time flags and keeps behavior in sync with other templates. Also applies to: 34-34.

toolchain/templates/summit.mako (1)
- 19-19: Summit GPU enablement wiring via `gpu_enabled` looks good. Choosing `-m g` vs `-m c` in `mfc.sh load` and toggling `--smpiargs="-gpu"` / `--gpu_per_rs` based on `gpu_enabled` matches the intended GPU-mode boolean semantics and keeps CPU-only jobs free of GPU flags. Also applies to: 30-35.

toolchain/templates/hipergator.mako (1)
- 12-15: Hipergator GPU flags now correctly keyed off `gpu_enabled`. Gating `--gpus-per-task` / `--gpu-bind` and the `mfc.sh` `-m` argument on `gpu_enabled` is consistent with the new run-time GPU booleans and keeps CPU jobs from requesting GPUs or GPU modules. Also applies to: 38-38.

toolchain/templates/frontier.mako (1)
- 13-16: Frontier template correctly centralizes GPU behavior on `gpu_enabled`. Switching SBATCH GPU directives, `MPICH_GPU_SUPPORT_ENABLED`, interactive `srun` GPU flags, and the `mfc.sh` `-m` mode to use `gpu_enabled` unifies GPU control and keeps CPU-only runs from accidentally enabling GPU-related options. Also applies to: 37-37, 42-46, 69-71.
No issues found across 15 files
Actionable comments posted: 0
🧹 Nitpick comments (1)
toolchain/mfc/run/run.py (1)
- 102-115: GPU mode validation and boolean derivation are logically sound.
  - `gpu_mode = ARG('gpu')` plus `valid_gpu_modes = {e.value for e in gpuConfigOptions}` and the membership check ensure only known modes (NONE/ACC/MP) are accepted before template rendering.
  - The derived flags `gpu_enabled = gpu_mode != gpuConfigOptions.NONE.value`, `gpu_acc = gpu_mode == gpuConfigOptions.ACC.value`, and `gpu_mp = gpu_mode == gpuConfigOptions.MP.value` are mutually consistent and cover all enum cases, which matches how templates are expected to gate GPU behaviors.

  This should give templates a clear, unambiguous view of the GPU state. If Ruff's TRY003 warning about the long `MFCException` message becomes noisy, you could optionally factor the mode list into a small helper (or constant) to shorten the raise site, but that's stylistic and not functionally required.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- toolchain/mfc/args.py (1 hunks)
- toolchain/mfc/run/run.py (3 hunks)
- toolchain/templates/frontier.mako (3 hunks)
- toolchain/templates/phoenix.mako (2 hunks)
- toolchain/templates/summit.mako (0 hunks)
💤 Files with no reviewable changes (1)
- toolchain/templates/summit.mako
🚧 Files skipped from review as they are similar to previous changes (2)
- toolchain/templates/frontier.mako
- toolchain/templates/phoenix.mako
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Do not use OpenACC or OpenMP directives directly; use Fypp macros from `src/common/include/parallel_macros.fpp` instead
Applied to files:
toolchain/mfc/run/run.py
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Mark GPU-callable helpers with `$:GPU_ROUTINE(function_name='...', parallelism='[seq]')` immediately after declaration
Applied to files:
toolchain/mfc/run/run.py
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)
Applied to files:
toolchain/mfc/run/run.py
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Ensure GPU code compiles with Cray ftn, NVIDIA nvfortran, GNU gfortran, and Intel ifx/ifort compilers
Applied to files:
toolchain/mfc/run/run.py
🧬 Code graph analysis (1)
toolchain/mfc/args.py (1)
toolchain/mfc/state.py (1)
gpuConfigOptions (5-8)
🪛 Ruff (0.14.7)
toolchain/mfc/run/run.py
108-110: Avoid specifying long messages outside the exception class
(TRY003)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
- GitHub Check: Coverage Test on CodeCov
- GitHub Check: Code Cleanliness Check
- GitHub Check: Github (ubuntu, mpi, no-debug, false)
- GitHub Check: Github (ubuntu, mpi, debug, false)
- GitHub Check: Github (ubuntu, no-mpi, single, no-debug, false)
- GitHub Check: Github (macos, mpi, no-debug, false)
- GitHub Check: Github (macos, mpi, debug, false)
- GitHub Check: Github (ubuntu, mpi, no-debug, true)
- GitHub Check: Github (ubuntu, mpi, debug, true)
- GitHub Check: cubic · AI code reviewer
- GitHub Check: Build & Publish
🔇 Additional comments (3)
toolchain/mfc/args.py (1)
- 60-66: GPU CLI default semantics look correct and align with the new GPU mode handling. The `--gpu`/`--no-gpu` setup with `nargs='?'`, `const=gpuConfigOptions.ACC.value`, and `default=gpuConfigOptions.NONE.value` correctly gives:
  - CPU by default (`NONE`),
  - ACC when `--gpu` is passed without an argument,
  - and an explicit mode when `--gpu {acc,mp,no}` is used,

  while `--no-gpu` forces `NONE`. This matches the `gpuConfigOptions` enum and the downstream `ARG('gpu')` handling in `run.py`.

toolchain/mfc/run/run.py (2)
- 10-10: Importing `gpuConfigOptions` here is appropriate. Pulling `gpuConfigOptions` into `run.py` alongside `ARG`/`ARGS`/`CFG` is the right place for centralizing GPU mode handling and keeps run-time validation close to argument access.
- 124-127: Template context wiring for GPU/profiler flags looks consistent. Adding `profiler=shlex.join(__profiler_prepend())` and exposing `gpu_enabled`, `gpu_acc`, and `gpu_mp` in the render context is coherent with the new GPU semantics and keeps templates free from direct `ARG('gpu')` parsing. Assuming the templates have been updated to use these flags (as described in the PR), this wiring should support the intended conditional GPU configuration cleanly.
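The CLI semantics described above can be sketched with plain `argparse`; the enum values are inlined as plain strings here, an assumption based on the `--gpu {acc,mp,no}` choices shown in the review:

```python
import argparse

# Sketch of the --gpu/--no-gpu semantics: flag absent -> 'no' (CPU),
# bare --gpu -> 'acc', --gpu {acc,mp,no} -> explicit, --no-gpu -> 'no'.
parser = argparse.ArgumentParser()
parser.add_argument("--gpu", nargs="?", choices=["acc", "mp", "no"],
                    const="acc",   # `--gpu` with no value defaults to ACC
                    default="no")  # flag absent means CPU-only (NONE)
parser.add_argument("--no-gpu", dest="gpu", action="store_const", const="no")

print(parser.parse_args([]).gpu)               # no
print(parser.parse_args(["--gpu"]).gpu)        # acc
print(parser.parse_args(["--gpu", "mp"]).gpu)  # mp
print(parser.parse_args(["--no-gpu"]).gpu)     # no
```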
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

@@ Coverage Diff @@
## master #1077 +/- ##
=======================================
Coverage 44.16% 44.16%
=======================================
Files 71 71
Lines 20197 20197
Branches 1970 1970
=======================================
Hits 8921 8921
Misses 10148 10148
Partials 1128 1128

☔ View full report in Codecov by Sentry.
  cd "${MFC_ROOTDIR}"
- . ./mfc.sh load -c o -m ${'g' if gpu else 'c'}
+ . ./mfc.sh load -c o -m ${'g' if gpu_enabled else 'c'}
  cd - > /dev/null
Suggestion: Correct the typo in the environment variable from ${MFC_ROOTDIR} to ${MFC_ROOT_DIR} in the oscar.mako template. [possible issue, importance: 9]
Current:
  cd "${MFC_ROOTDIR}"
- . ./mfc.sh load -c o -m ${'g' if gpu else 'c'}
+ . ./mfc.sh load -c o -m ${'g' if gpu_enabled else 'c'}
  cd - > /dev/null

Suggested:
  cd "${MFC_ROOT_DIR}"
  . ./mfc.sh load -c o -m ${'g' if gpu_enabled else 'c'}
  cd - > /dev/null
Pull request overview
Copilot reviewed 16 out of 16 changed files in this pull request and generated 2 comments.
User description
Fixing up templates across the board.
PR Type
Bug fix
Description
- Replace ambiguous `gpu` boolean with explicit `gpu_enabled`, `gpu_acc`, `gpu_mp` flags
- Fix GPU mode detection using `gpuConfigOptions` enum values
- Update all HPC cluster templates to use the new GPU mode variables
- Ensure consistent GPU configuration across 16 template files
Diagram Walkthrough
File Walkthrough
15 files
- Add GPU mode boolean computation and pass to templates
- Replace `gpu` with `gpu_enabled` in conditionals (11 template files)
- Replace `gpu` with `gpu_enabled` in GPU task count
- Replace `gpu != 'no'` with `gpu_enabled` throughout
- Replace `gpu` with `gpu_enabled` in jsrun arguments
Make GPU enablement explicit in job scripts and HPC templates
What Changed
Impact
✅ Fewer incorrect GPU allocations
✅ Correct GPU module loading
✅ Consistent GPU resource requests across clusters