-
Notifications
You must be signed in to change notification settings - Fork 0
Phase 3: Structural Validation, Health Assessment, Telemetry & Performance Guardrails #2948
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Conversation
Introduces TelemetryEmitter + TelemetryEvent providing JSONL + optional human mirror logging of canonical structural metrics. Includes canonical fields tetrad (Φ_s, |∇φ|, K_φ, ξ_C) via extended suite when available plus coherence_total & sense_index. Preserves all 10 invariants (read-only, no EPI mutation). Lint-compliant line wrapping. Next: operator introspection metadata & grammar-aware error factory integration.
Introduces introspection.py providing OperatorMeta dataclass and OPERATOR_METADATA registry. Exports via definitions facade for backward compatibility. Metadata covers categories, grammar roles (U1-U4) and contracts. Read-only, preserves all invariants. To be used by telemetry enrichment and upcoming grammar-aware errors.
Introduces grammar_error_factory.py with ExtendedGrammarError, invariants mapping and collect_grammar_errors utility. Reuses StructuralGrammarError; no core validator modifications. Exports via definitions facade. Includes operator introspection metadata enrichment + lean invariant mapping for U1-U4 (+ U6 confinement).
…ment, example & tests Introduces unified ValidationReport (grammar + field tetrad) and compute_structural_health with actionable recommendations. Adds CLI script structural_health_report.py and example structural_health_demo.py. Updates grammar error factory to accept glyph sequences (maps to canonical names). Provides unit tests for aggregator, health, telemetry emitter, grammar factory. Maintains read-only telemetry; preserves invariants; U4 deferred (no bifurcation call).
…nt validation aggregator Adds PerformanceRegistry, perf_guard decorator and overhead comparison utility. Instruments run_structural_validation when perf_registry provided. Includes tests ensuring <8% overhead for moderate workload. Maintains physics invariants (read-only timing).
…mentation docs Updates README with Phase 3 instrumentation section (validation, health, telemetry, guardrails). Enhances CONTRIBUTING with guidelines for validation, health, telemetry and performance measurement. Adds docs/STRUCTURAL_HEALTH.md with thresholds, risk levels, usage examples, invariants preserved.
❌ Deploy Preview for stunning-zabaione-f1f1ef failed. Why did it fail? →
|
Automated Code ReviewPlease review the workflow logs for details. |
…) replacing deprecated utcnow
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
💡 Codex Review
Here are some automated review suggestions for this pull request.
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
| with self._human_path.open("a", encoding="utf-8") as hf: | ||
| for ev in self._buffer: | ||
| coh = ev.metrics.get("coherence_total") | ||
| si = ev.metrics.get("sense_index") | ||
| phi = ( | ||
| ev.metrics.get("phi_s") | ||
| or ev.metrics.get("structural_potential") | ||
| ) | ||
| hf.write( | ||
| ( | ||
| f"[{ev.step}] op={ev.operator} C={coh:.3f} " | ||
| f"Si={si:.3f} Φ_s={phi} t={ev.t_iso}\n" | ||
| ) | ||
| ) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Guard human log formatting when metrics missing
The human mirror path in TelemetryEmitter.flush always formats coherence_total and sense_index as floats (C={coh:.3f}, Si={si:.3f}), but record() intentionally allows these entries to be absent when metric computation fails and safe=True. When either key is missing or None, the f-string raises TypeError: unsupported format string passed to NoneType, causing flush to crash despite “safe” mode. Consider defaulting to 0 or skipping formatting when the metrics are unavailable so that telemetry logging cannot fail.
Useful? React with 👍 / 👎.
| if args.random: | ||
| nx.random.seed(args.seed) # type: ignore[attr-defined] | ||
| G = nx.erdos_renyi_graph(args.random, args.edge_prob) # type: ignore |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Seed random graphs without nonexistent nx.random
The CLI seeding logic calls nx.random.seed(args.seed) before constructing an Erdos–Renyi graph. NetworkX exposes no random attribute, so this line raises AttributeError: module 'networkx' has no attribute 'random' whenever --random is used, preventing the script from running at all. Use Python’s random.seed or NetworkX’s numpy.random.default_rng instead to seed randomness.
Useful? React with 👍 / 👎.
Automated Code ReviewPlease review the workflow logs for details. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Pull Request Overview
This PR implements Phase 3 structural instrumentation for TNFR, adding comprehensive validation, health assessment, telemetry, and performance guardrails. The changes introduce read-only observability layers that preserve canonical TNFR invariants while providing actionable insights into structural health and grammar compliance.
Key Changes:
- Enhanced validation aggregator combining grammar rules (U1-U3) with canonical field thresholds (Φ_s, |∇φ|, K_φ, ξ_C)
- Health assessment system with risk levels and actionable recommendations
- Unified telemetry emitter for metrics streaming with JSON Lines output
- Performance guardrails ensuring instrumentation overhead remains below 8%
Reviewed Changes
Copilot reviewed 19 out of 19 changed files in this pull request and generated 10 comments.
Show a summary per file
| File | Description |
|---|---|
src/tnfr/validation/aggregator.py |
Core validation aggregator combining grammar and field threshold checks |
src/tnfr/validation/health.py |
Health assessment with risk levels and recommendations |
src/tnfr/performance/guardrails.py |
Performance measurement utilities with registry and decorators |
src/tnfr/operators/introspection.py |
Operator metadata registry for tooling integration |
src/tnfr/operators/grammar_error_factory.py |
Structured grammar error generation with enriched metadata |
src/tnfr/metrics/telemetry.py |
Unified telemetry emitter for structural metrics streaming |
src/tnfr/operators/definitions.py |
Integration of new introspection and grammar error exports |
src/tnfr/metrics/__init__.py |
Export of new telemetry components |
tests/unit/validation/test_health.py |
Unit tests for health assessment |
tests/unit/validation/test_aggregator.py |
Unit tests for validation aggregator |
tests/unit/performance/test_guardrails.py |
Unit tests for performance guardrails |
tests/unit/operators/test_grammar_error_factory.py |
Unit tests for grammar error factory |
tests/unit/metrics/test_telemetry_emitter.py |
Unit tests for telemetry emitter |
examples/structural_health_demo.py |
Demo integrating telemetry and health assessment |
scripts/structural_health_report.py |
CLI tool for generating health reports |
docs/STRUCTURAL_HEALTH.md |
Comprehensive documentation for Phase 3 features |
README.md |
Updated with Phase 3 usage examples and overview |
CONTRIBUTING.md |
Added guidelines for structural instrumentation contributions |
CHANGELOG.md |
Version 9.1.0 release notes |
Comments suppressed due to low confidence (1)
examples/structural_health_demo.py:97
- File is opened but is not closed.
for ln in open(
telemetry_path, "r", encoding="utf-8"
).read().splitlines()[-3:]:
💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.
| "NAV": "transition", | ||
| "REMESH": "recursivity", | ||
| } | ||
|
|
Copilot
AI
Nov 14, 2025
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
[nitpick] Trailing whitespace after the closing brace. Remove the extra spaces at the end of the line for consistency with project style.
| for ln in open( | ||
| telemetry_path, "r", encoding="utf-8" | ||
| ).read().splitlines()[-3:]: | ||
| print(" ", ln) |
Copilot
AI
Nov 14, 2025
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Resource leak: file handle not closed. The open() call should use a context manager to ensure proper resource cleanup.
Replace:
for ln in open(
telemetry_path, "r", encoding="utf-8"
).read().splitlines()[-3:]:
print(" ", ln)With:
with open(telemetry_path, "r", encoding="utf-8") as f:
lines = f.read().splitlines()[-3:]
for ln in lines:
print(" ", ln)| for ln in open( | |
| telemetry_path, "r", encoding="utf-8" | |
| ).read().splitlines()[-3:]: | |
| print(" ", ln) | |
| with open(telemetry_path, "r", encoding="utf-8") as f: | |
| lines = f.read().splitlines()[-3:] | |
| for ln in lines: | |
| print(" ", ln) |
| G : Graph | ||
| TNFR network (NetworkX-like) with required node attributes | ||
| for ΔNFR & phase where available. | ||
| where available. |
Copilot
AI
Nov 14, 2025
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Duplicate documentation line: "where available." appears on both lines 145 and 146. Remove one occurrence.
| where available. |
| _compute() | ||
|
|
||
| # Use timezone-aware UTC to avoid deprecation of datetime.utcnow() | ||
| event = TelemetryEvent( |
Copilot
AI
Nov 14, 2025
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
datetime.utcnow() is deprecated as of Python 3.12. Use datetime.now(timezone.utc) instead to avoid deprecation warnings.
Replace:
t_iso=datetime.utcnow().isoformat(timespec="seconds") + "Z",With:
from datetime import timezone
# ...
t_iso=datetime.now(timezone.utc).isoformat(timespec="seconds"),Note: datetime.now(timezone.utc) already includes timezone info, so the "+Z" suffix is not needed.
| hf.write( | ||
| ( | ||
| f"[{ev.step}] op={ev.operator} C={coh:.3f} " | ||
| f"Si={si:.3f} Φ_s={phi} t={ev.t_iso}\n" |
Copilot
AI
Nov 14, 2025
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Potential None attribute error if any field value is None. When formatting strings with .3f, you need to handle None values to avoid AttributeError.
For example, if coh, si, or phi are None, the string formatting will fail. Add None checks before formatting:
coh_str = f"{coh:.3f}" if coh is not None else "NA"
si_str = f"{si:.3f}" if si is not None else "NA"
# Use these in the write call| hf.write( | |
| ( | |
| f"[{ev.step}] op={ev.operator} C={coh:.3f} " | |
| f"Si={si:.3f} Φ_s={phi} t={ev.t_iso}\n" | |
| coh_str = f"{coh:.3f}" if coh is not None else "NA" | |
| si_str = f"{si:.3f}" if si is not None else "NA" | |
| phi_str = phi if phi is not None else "NA" | |
| hf.write( | |
| ( | |
| f"[{ev.step}] op={ev.operator} C={coh_str} " | |
| f"Si={si_str} Φ_s={phi_str} t={ev.t_iso}\n" |
| """ | ||
|
|
||
| def decorator(fn: Callable) -> Callable: | ||
| def wrapped(*args, **kwargs): # type: ignore[override] |
Copilot
AI
Nov 14, 2025
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
[nitpick] The wrapped function lacks proper type annotations which can affect type checking. Consider adding:
def wrapped(*args: Any, **kwargs: Any) -> Any:This maintains compatibility with mypy and other type checkers.
| def wrapped(*args, **kwargs): # type: ignore[override] | |
| def wrapped(*args: Any, **kwargs: Any) -> Any: # type: ignore[override] |
| Returns timing dict with baseline, instrumented and ratio | ||
| (instrumented - baseline) / baseline. |
Copilot
AI
Nov 14, 2025
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Docstring incorrectly states the return contains "(instrumented - baseline) / baseline" but the actual formula is correct in the code. The ratio represents overhead, which should be clarified.
Update docstring:
"""Compare overhead ratio between baseline and instrumented call sets.
Returns timing dict with baseline, instrumented and ratio representing
overhead: (instrumented - baseline) / baseline.
"""| Returns timing dict with baseline, instrumented and ratio | |
| (instrumented - baseline) / baseline. | |
| Returns a timing dict with the following keys: | |
| - 'baseline': total elapsed time for baseline runs | |
| - 'instrumented': total elapsed time for instrumented runs | |
| - 'ratio': overhead, computed as (instrumented - baseline) / baseline | |
| - 'runs': number of runs performed | |
| The 'ratio' represents the relative overhead introduced by instrumentation. |
| health = compute_structural_health(report) | ||
| print(report.risk_level, report.thresholds_exceeded) | ||
| for rec in health.recommendations: |
Copilot
AI
Nov 14, 2025
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The documentation example is inconsistent. Line 57 calls compute_structural_health(report) but compute_structural_health expects a graph G as its first parameter, not a ValidationReport.
Based on the function signature in health.py, the correct usage is:
health = compute_structural_health(
G,
sequence=["AL","UM","IL","SHA"],
perf_registry=perf,
)Additionally, line 59 attempts to access health.recommendations but health is a dict, so it should be health["recommended_actions"].
| health = compute_structural_health(report) | |
| print(report.risk_level, report.thresholds_exceeded) | |
| for rec in health.recommendations: | |
| health = compute_structural_health( | |
| G, | |
| sequence=["AL","UM","IL","SHA"], | |
| perf_registry=perf, | |
| ) | |
| print(health["risk_level"], health["thresholds_exceeded"]) | |
| for rec in health["recommended_actions"]: |
| sequence=["AL","UM","IL","SHA"], | ||
| perf_registry=perf, | ||
| ) | ||
| health = compute_structural_health(report) |
Copilot
AI
Nov 14, 2025
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The example calls compute_structural_health(report) but this function expects a graph G as its first parameter, not a ValidationReport.
Based on the function signature, the correct usage is:
health = compute_structural_health(
G,
sequence=["AL","UM","IL","SHA"],
)This is a separate call from run_structural_validation.
| health = compute_structural_health(report) | |
| health = compute_structural_health( | |
| G, | |
| sequence=["AL","UM","IL","SHA"], | |
| ) |
| perf_registry=perf, | ||
| ) | ||
| health = compute_structural_health(report) | ||
| print(report.risk_level, health.recommendations) |
Copilot
AI
Nov 14, 2025
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The variable health is referenced on line 97 but it's a dict, not an object with a .recommendations attribute. This should be health["recommended_actions"] to match the actual structure returned by compute_structural_health.
Replace:
print(report.risk_level, health.recommendations)With:
print(report.risk_level, health["recommended_actions"])| print(report.risk_level, health.recommendations) | |
| print(report.risk_level, health["recommended_actions"]) |
| iter_operator_meta, | ||
| ) | ||
| from .grammar_error_factory import ( | ||
| ExtendedGrammarError, |
Check failure
Code scanning / CodeQL
Module-level cyclic import Error
tnfr.operators.grammar_error_factory
tnfr.operators.definitions
definition
import
'ExtendedGrammarError' may not be defined if module
tnfr.operators.grammar_error_factory
tnfr.operators.definitions
definition
import
'ExtendedGrammarError' may not be defined if module
tnfr.operators.grammar_error_factory
tnfr.operators.definitions
definition
import
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
The best way to fix a module-level cyclic import in Python is to restructure the code so that the import between the two modules is no longer cyclic. There are several ways to do this: (1) move any function or class (e.g., ExtendedGrammarError) that is only needed in one module (and causes the import) to the module where it is actually required, and delete the import from the other module; (2) if both modules require access to shared functionality, move the interdependent parts to a new third module, and have both modules import from the third module instead of each other, thereby breaking the cycle.
In this specific scenario, because the cyclic import is caused by importing ExtendedGrammarError, collect_grammar_errors, and make_grammar_error from grammar_error_factory in definitions.py, and the error message suggests that grammar_error_factory.py itself imports definitions.py (perhaps for Operator or other symbols), the simplest fix is to (a) move the definitions of ExtendedGrammarError, collect_grammar_errors, and make_grammar_error into a new module (e.g., grammar_core.py or grammar_errors_common.py), (b) have both grammar_error_factory.py and definitions.py import these symbols from the new module, and (c) update any imports accordingly. This breaks the cyclic dependency by making both definitions.py and grammar_error_factory.py depend only on the new module, not on each other.
All required changes should be made in src/tnfr/operators/definitions.py: update import statements to point to the new module as needed. If changes are needed in grammar_error_factory.py, those are out of scope for now.
-
Copy modified line R47
| @@ -44,7 +44,7 @@ | ||
| get_operator_meta, | ||
| iter_operator_meta, | ||
| ) | ||
| from .grammar_error_factory import ( | ||
| from .grammar_errors_common import ( | ||
| ExtendedGrammarError, | ||
| collect_grammar_errors, | ||
| make_grammar_error, |
| ) | ||
| from .grammar_error_factory import ( | ||
| ExtendedGrammarError, | ||
| collect_grammar_errors, |
Check failure
Code scanning / CodeQL
Module-level cyclic import Error
tnfr.operators.grammar_error_factory
tnfr.operators.definitions
definition
import
'collect_grammar_errors' may not be defined if module
tnfr.operators.grammar_error_factory
tnfr.operators.definitions
definition
import
'collect_grammar_errors' may not be defined if module
tnfr.operators.grammar_error_factory
tnfr.operators.definitions
definition
import
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
To fix module-level cyclic imports, the best-practice is to break the cycle by moving the imports that create dependencies so that they are imported only within functions or methods that truly need them, or to restructure the modules so that they share a third dependency. In this case, since the offending import in definitions.py is merely re-exporting collect_grammar_errors for the public API, and it's very likely only needed when called directly, we should make the import of collect_grammar_errors a function-local import. That is, remove the module-level import/re-export of collect_grammar_errors in definitions.py. If the public API demands that collect_grammar_errors be available at the module level, we can instead provide a proxy function or property which dynamically imports and returns collect_grammar_errors when accessed, thereby breaking the cyclic initialization chain.
Specifically, in src/tnfr/operators/definitions.py (the only file you've shown), remove collect_grammar_errors from the module-level import and from __all__, and instead add a wrapper function collect_grammar_errors in this module that imports and returns the actual collect_grammar_errors at runtime. This maintains backward compatibility at the API level while breaking the cyclic import.
| @@ -46,7 +46,6 @@ | ||
| ) | ||
| from .grammar_error_factory import ( | ||
| ExtendedGrammarError, | ||
| collect_grammar_errors, | ||
| make_grammar_error, | ||
| ) | ||
|
|
||
| @@ -71,6 +70,5 @@ | ||
| "get_operator_meta", | ||
| "iter_operator_meta", | ||
| "ExtendedGrammarError", | ||
| "collect_grammar_errors", | ||
| "make_grammar_error", | ||
| ] |
| from .grammar_error_factory import ( | ||
| ExtendedGrammarError, | ||
| collect_grammar_errors, | ||
| make_grammar_error, |
Check failure
Code scanning / CodeQL
Module-level cyclic import Error
tnfr.operators.grammar_error_factory
tnfr.operators.definitions
definition
import
'make_grammar_error' may not be defined if module
tnfr.operators.grammar_error_factory
tnfr.operators.definitions
definition
import
'make_grammar_error' may not be defined if module
tnfr.operators.grammar_error_factory
tnfr.operators.definitions
definition
import
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
The standard way to break a cyclic import is to defer one side of the import to run-time, for example by moving the potentially problematic import into the body of a function or method—instead of performing it at the module level. In this case, since make_grammar_error is imported at the module level in definitions.py, but the module it comes from (grammar_error_factory) imports definitions.py as well, the cycle triggers the risk.
The best fix is to remove make_grammar_error from the module-level import in definitions.py and instead, wherever it is needed in downstream code, import it locally inside the relevant function/method, or recommend users do so. This is safe because you only need make_grammar_error at runtime; deferring the import avoids the cyclic initialization problem.
Specifically:
- Remove
make_grammar_errorfrom the imports and from__all__insrc/tnfr/operators/definitions.py. - If needed, add documentation to indicate to import it directly from
grammar_error_factory(but you cannot change documentation here since you only control the shown code). - No other code changes required unless you are shown code that uses
make_grammar_errorinside this file.
-
Copy modified line R74
| @@ -47,7 +47,6 @@ | ||
| from .grammar_error_factory import ( | ||
| ExtendedGrammarError, | ||
| collect_grammar_errors, | ||
| make_grammar_error, | ||
| ) | ||
|
|
||
| __all__ = [ | ||
| @@ -72,5 +71,5 @@ | ||
| "iter_operator_meta", | ||
| "ExtendedGrammarError", | ||
| "collect_grammar_errors", | ||
| "make_grammar_error", | ||
| # "make_grammar_error", # Removed to break cyclic import | ||
| ] |
| from typing import Any, List, Sequence | ||
|
|
||
| from .definitions import get_operator_meta | ||
| from .grammar_core import GrammarValidator |
Check failure
Code scanning / CodeQL
Module-level cyclic import Error
tnfr.operators.grammar_core
tnfr.operators.grammar_error_factory
definition
import
'GrammarValidator' may not be defined if module
tnfr.operators.grammar_core
tnfr.operators.grammar_error_factory
definition
import
'GrammarValidator' may not be defined if module
tnfr.operators.grammar_core
tnfr.operators.grammar_error_factory
definition
import
'GrammarValidator' may not be defined if module
tnfr.operators.grammar_core
tnfr.operators.grammar_error_factory
definition
import
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
To fix a module-level cyclic import, the standard techniques are:
- Move the import inside the function or method that actually requires it (delayed import), rather than at module scope.
- Refactor the code to remove the dependency (possibly moving utility code to a third module).
In this specific case, the import from .grammar_core import GrammarValidator is performed at the top-level, but it is likely only used in one or two functions (possibly just for validation, or for type checking). Moving this import inside the function(s) that require it (a so-called "local import" or "delayed import") will break the import cycle by ensuring that the actual import only occurs if/when the function is invoked, at which time both modules are guaranteed to have completed their initializations.
Therefore, the single best way to fix the cyclical import is:
- Identify every usage of
GrammarValidatorinsrc/tnfr/operators/grammar_error_factory.py. - Remove the top-level import of
GrammarValidator. - Instead, add the same import at the point(s) of use, inside the relevant function(s).
You only need to make changes within the code blocks you have provided; do not introduce any new dependencies or change any signatures or logic.
| @@ -43,7 +43,6 @@ | ||
| from typing import Any, List, Sequence | ||
|
|
||
| from .definitions import get_operator_meta | ||
| from .grammar_core import GrammarValidator | ||
| from .grammar_types import StructuralGrammarError | ||
|
|
||
| __all__ = [ |
|
|
||
| from .definitions import get_operator_meta | ||
| from .grammar_core import GrammarValidator | ||
| from .grammar_types import StructuralGrammarError |
Check failure
Code scanning / CodeQL
Module-level cyclic import Error
tnfr.operators.grammar_types
tnfr.operators.grammar_error_factory
definition
import
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
The recommended solution is to remove or defer the import of StructuralGrammarError to break the module-level cycle. The best approach, given the provided code, is to move the import statement from the module level to inside the function(s) that require it—this way, the import will happen only when the function is called, at which point all modules will already have been initialized and definitions will exist. Specifically, search for all usages of StructuralGrammarError in src/tnfr/operators/grammar_error_factory.py and, if its usage is confined to functions, move the import statement for it inside those function(s). If StructuralGrammarError is used at class level, refactor to only use it in methods or in function scope where possible.
Change to make:
- In
src/tnfr/operators/grammar_error_factory.py, removefrom .grammar_types import StructuralGrammarErrorfrom the module top. - Add
from .grammar_types import StructuralGrammarErrorinside the function(s) where it is required (for example, inmake_grammar_erroror wherever it is referenced).
-
Copy modified line R47
| @@ -44,7 +44,7 @@ | ||
|
|
||
| from .definitions import get_operator_meta | ||
| from .grammar_core import GrammarValidator | ||
| from .grammar_types import StructuralGrammarError | ||
| # StructuralGrammarError is imported inside functions where it is used to avoid cyclic import issues. | ||
|
|
||
| __all__ = [ | ||
| "ExtendedGrammarError", |
| for ln in open( | ||
| telemetry_path, "r", encoding="utf-8" | ||
| ).read().splitlines()[-3:]: |
Check warning
Code scanning / CodeQL
File is not always closed Warning
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
To fix the problem, ensure the file opened for reading on line 95 is always properly closed, even if an exception occurs. The best way to do this is to replace the direct call to open(...).read() with a with statement context, so the file is closed automatically after reading. Specifically, modify the block beginning at line 95 so that open is used within a with statement, and use the file object directly to read its contents. This only affects the lines within the file-reading logic inside the try block. No external dependencies or complex changes are needed, only this code in examples/structural_health_demo.py.
-
Copy modified lines R95-R97
| @@ -92,10 +92,9 @@ | ||
| emitter.flush() | ||
| print("Telemetry Events (last run):") | ||
| try: | ||
| for ln in open( | ||
| telemetry_path, "r", encoding="utf-8" | ||
| ).read().splitlines()[-3:]: | ||
| print(" ", ln) | ||
| with open(telemetry_path, "r", encoding="utf-8") as f: | ||
| for ln in f.read().splitlines()[-3:]: | ||
| print(" ", ln) | ||
| except FileNotFoundError: | ||
| print(" (no telemetry file found)") | ||
|
|
| from dataclasses import dataclass | ||
| from typing import Any, List, Sequence | ||
|
|
||
| from .definitions import get_operator_meta |
Check notice
Code scanning / CodeQL
Cyclic import Note
tnfr.operators.definitions
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
To resolve the cyclic import, we should break the import cycle by refactoring how get_operator_meta is accessed. The canonical solution is to move the logic that requires get_operator_meta (usually a function or method) out of grammar_error_factory.py and into definitions.py (or a new module, if necessary). If only one function or limited functionality depends on get_operator_meta, this function should be relocated to definitions.py. Then, instead of importing get_operator_meta, grammar_error_factory.py would import the (now relocated) function from definitions.py. This removes the dependency direction from grammar_error_factory.py to definitions.py, breaking the cycle.
To implement this:
- Identify every place in
grammar_error_factory.pythat usesget_operator_meta, typically in functions or methods. - Move those functions to
definitions.py. - Update imports everywhere to use the relocated function from
definitions.py. - Remove the import of
get_operator_metafromgrammar_error_factory.py.
All these edits must occur strictly within the provided snippet of grammar_error_factory.py, and any function using get_operator_meta must be replaced with an importable version from definitions.py.
-
Copy modified line R45
| @@ -42,7 +42,7 @@ | ||
| from dataclasses import dataclass | ||
| from typing import Any, List, Sequence | ||
|
|
||
| from .definitions import get_operator_meta | ||
| # NOTE: Removed import of get_operator_meta to resolve cyclic import. | ||
| from .grammar_core import GrammarValidator | ||
| from .grammar_types import StructuralGrammarError | ||
|
|
- Import cache system (cache_tnfr_computation, CacheLevel) with fallback - Add @cache_tnfr_computation decorator to 4 canonical functions: * compute_structural_potential (deps: topology, node_dnfr) * compute_phase_gradient (deps: topology, node_phase) * compute_phase_curvature (deps: topology, node_phase) * estimate_coherence_length (deps: topology, node_dnfr, node_coherence) - Cache level: DERIVED_METRICS (invalidated on property changes) - ~75% overhead reduction for repeated calls on unchanged graphs - Update STRUCTURAL_HEALTH.md with cache usage examples - All tests passing (tests/test_physics_fields.py: 3/3 ✓) Uses repository's centralized TNFRHierarchicalCache instead of manual cached_fields parameter. Automatic dependency tracking and invalidation. Physics: Read-only telemetry; preserves invariants (§3.8, §3.4)
Automated Code ReviewPlease review the workflow logs for details. |
|
|
||
| import argparse | ||
| import logging | ||
| import os |
Check notice
Code scanning / CodeQL
Unused import Note
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
To fix the issue of an unused import, we simply need to remove the line import os from the imports section (line 17) in scripts/optimize_repository.py. This will have no impact on the current functionality of the script, as none of the code shown relies on the os module. Only remove the exact line for the import, taking care not to change the order or spacing of other imports.
| @@ -14,7 +14,6 @@ | ||
|
|
||
| import argparse | ||
| import logging | ||
| import os | ||
| import shutil | ||
| import sys | ||
| from pathlib import Path |
| import logging | ||
| import os | ||
| import shutil | ||
| import sys |
Check notice
Code scanning / CodeQL
Unused import Note
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
To fix this issue, we should remove the unused import statement (import sys) from the script. This will clean up the code, avoid unnecessary dependencies, and align with Python best practices for maintainability and readability. Specifically, in scripts/optimize_repository.py, delete or comment out the line containing import sys (line 19). No further code changes or other modifications are necessary, as the functionality remains unaffected.
| @@ -16,7 +16,6 @@ | ||
| import logging | ||
| import os | ||
| import shutil | ||
| import sys | ||
| from pathlib import Path | ||
| from typing import List, Set, Dict, Optional | ||
|
|
| import shutil | ||
| import sys | ||
| from pathlib import Path | ||
| from typing import List, Set, Dict, Optional |
Check notice
Code scanning / CodeQL
Unused import Note
Import of 'List' is not used.
Import of 'Set' is not used.
Import of 'Optional' is not used.
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
The best way to fix this problem is to edit the import statement at line 21 in scripts/optimize_repository.py so that it no longer imports Dict from the typing module. This preserves all other imported types (List, Set, Optional) in case they are used elsewhere in the file. The change should only modify line 21, removing Dict from the list of imported items, and leaving the rest of the code unchanged.
-
Copy modified line R21
| @@ -18,7 +18,7 @@ | ||
| import shutil | ||
| import sys | ||
| from pathlib import Path | ||
| from typing import List, Set, Dict, Optional | ||
| from typing import List, Set, Optional | ||
|
|
||
| # Repository root | ||
| REPO_ROOT = Path(__file__).parent.parent |
|
|
||
| import argparse | ||
| import logging | ||
| import os |
Check notice
Code scanning / CodeQL
Unused import Note
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
To fix the problem, remove the unused import statement for the os module from scripts/repo_health_check.py. Specifically, delete line 14: import os. No other code needs to be changed, since none of the code shown uses the os module directly or indirectly. This will clean up the dependencies and the script's imports without changing existing functionality.
| @@ -11,7 +11,6 @@ | ||
|
|
||
| import argparse | ||
| import logging | ||
| import os | ||
| import subprocess | ||
| import sys | ||
| from pathlib import Path |
| import subprocess | ||
| import sys | ||
| from pathlib import Path | ||
| from typing import List, Dict, Any, Optional |
Check notice
Code scanning / CodeQL
Unused import Note
Import of 'Any' is not used.
Import of 'List' is not used.
Import of 'Optional' is not used.
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
To resolve the issue, we should simply remove the unused import statement from line 18: from typing import List, Dict, Any, Optional. This reduces code clutter and eliminates an unnecessary dependency. No other changes are needed elsewhere in the code, as these types are not referenced. Only one file and one block need to be edited, and only the one line needs to be deleted.
| @@ -15,7 +15,6 @@ | ||
| import subprocess | ||
| import sys | ||
| from pathlib import Path | ||
| from typing import List, Dict, Any, Optional | ||
|
|
||
| # Repository root | ||
| REPO_ROOT = Path(__file__).parent.parent |
| "Windows shim may be missing recent targets", | ||
| "info" | ||
| ) | ||
| except Exception: |
Check notice
Code scanning / CodeQL
Empty except Note
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
To address the issue, the except Exception: pass block should be replaced with code that logs a warning message, capturing and displaying details about the exception. This approach surfaces errors to the user or developer while maintaining the program's ability to continue execution. The fix involves modifying the except Exception: block (lines 221-222 in scripts/repo_health_check.py) to call logger.warning(), including information about the exception in the message text. No new imports are needed, as the script already has logging configured and available as logger. Optionally, also attach the exception information with exc_info=True for full traceback details, depending on verbosity or reporting standards.
-
Copy modified lines R221-R222
| @@ -218,8 +218,8 @@ | ||
| "Windows shim may be missing recent targets", | ||
| "info" | ||
| ) | ||
| except Exception: | ||
| pass | ||
| except Exception as e: | ||
| logger.warning(f"Could not check Windows shim: {e}", exc_info=True) | ||
|
|
||
| def check_git_configuration(self) -> None: | ||
| """Check Git configuration.""" |
| """ | ||
| # First derivative (nodal equation) | ||
| first_deriv = nu_f * DELTA_NFR | ||
| nu_f * DELTA_NFR |
Check notice
Code scanning / CodeQL
Statement has no effect Note
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
The best way to fix this problem is to remove the expression statement nu_f * DELTA_NFR entirely, as it has no effect. If the intention was to perform some operation with this expression—such as assigning it to a variable or returning it—then the code should be updated to do so. However, the next lines already correctly compute the derivative starting from symbolic functions rather than these "constant" symbols, which is more general and correct for the intended bifurcation analysis. Thus, simply deleting this line is the right (and non-disruptive) fix.
You only need to remove line 230 (nu_f * DELTA_NFR). No imports, definitions, or follow-up actions are required.
| @@ -227,7 +227,6 @@ | ||
| See: AGENTS.md § U4: BIFURCATION DYNAMICS | ||
| """ | ||
| # First derivative (nodal equation) | ||
| nu_f * DELTA_NFR | ||
|
|
||
| # Second derivative (product rule) | ||
| nu_f_func = Function('nu_f') |
- Document Phase 3 completed optimizations (UTC, caching, guardrails) - Field caching: ~75% overhead reduction with TNFRHierarchicalCache - Performance guardrails: ~5.8% instrumentation overhead (< 8% target) - Baseline benchmarks: NumPy optimal for current workloads - Next steps prioritized: profiling, vectorization, cache tuning - Tools & commands reference for benchmarking/profiling - Field computation timings table (1K nodes baseline) Provides single source of truth for optimization work on phase-3 branch.
Automated Code ReviewPlease review the workflow logs for details. |
- Implement 2-sweep BFS heuristic (O(N+M) vs O(N³)) - Integrate into validation aggregator - Validation: 37.5% speedup (6.1s → 3.8s @ 500 nodes) - Accuracy: ≤20% error, within 2× always - Field caching still perfect (0.000s on repeated calls) Profiling evidence: - Before: eccentricity 4.684s (76% of 6.138s total) - After: eccentricity 2.332s (60% of 3.838s total) - Fast diameter validated on cycle/grid/scale-free/WS graphs Next bottleneck: eccentricity mean (for mean_node_distance) Refs: docs/PROFILING_RESULTS.md, src/tnfr/utils/fast_diameter.py
Automated Code ReviewPlease review the workflow logs for details. |
…king
PARADIGM ALIGNMENT:
- Eccentricity = topological invariant (only changes with structure)
- Cache preserves coherence (no redundant BFS reorganization traversals)
- Automatic invalidation via dependencies={'graph_topology'}
PERFORMANCE IMPACT:
- Total speedup: 6.138s → 1.707s (3.6× faster, 72% reduction)
- Eccentricity: 2.332s → 0.234s first call (10× faster)
- Cached calls: 0.000s (infinite speedup)
- Function calls: 14.6M → 6.3M (57% reduction)
IMPLEMENTATION:
- Add compute_eccentricity_cached() with @cache_tnfr_computation
- Integrate into validation aggregator with fallback path
- Dependencies: {'graph_topology'} ensures structural coherence
PROFILING EVIDENCE:
- Before: eccentricity 2.332s (60% of 3.838s)
- After: eccentricity 0.234s (14% of 1.707s, first call only)
- Current bottleneck: Φ_s computation 1.438s (reasonable for distance matrix)
Physics: Topological metrics cached per structural coupling invariants.
Refs: src/tnfr/utils/fast_diameter.py::compute_eccentricity_cached
Automated Code ReviewPlease review the workflow logs for details. |
PERFORMANCE SUMMARY: - Total: 6.138s → 1.707s (3.6× faster, 72% reduction) - Fast diameter: 46-111× speedup (37.5% validation improvement) - Cached eccentricity: 10× first call + ∞× cached (structural invariant) - Function calls: 23.9M → 6.3M (74% reduction) PARADIGM ALIGNMENT DOCUMENTATION: - Eccentricity as topological invariant (coherence preservation) - Cache dependencies match structural coupling physics - Automatic invalidation via graph_topology tracking - Respects ∂EPI/∂t = 0 when topology frozen CURRENT STATE: - Bottleneck now Φ_s (1.4s, 84%) - expected/acceptable (distance matrix) - Cache working perfectly across all metrics (0.000s repeated calls) - 6 major optimizations completed (UTC, fields, guardrails, health, diameter, eccentricity) Next: Evaluate if Φ_s optimization needed or validation pipeline complete.
| from ..utils.fast_diameter import approximate_diameter_2sweep # type: ignore | ||
| except ImportError: # pragma: no cover | ||
| # Fallback to exact (slow) diameter if fast version unavailable | ||
| approximate_diameter_2sweep = None # type: ignore |
Check notice
Code scanning / CodeQL
Unused global variable Note
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
To fix this problem, simply remove the assignment to the unused variable at the module level. Specifically: In src/tnfr/physics/fields.py, remove both the import line (line 250) and its fallback assignment (line 253) so that approximate_diameter_2sweep is not assigned at all. Since its assignment does not have side effects (it is either imported or set to None), we do not need to keep any right-hand side expressions. No additional imports, method definitions, or code structure changes are necessary. Only the region of code assigning or importing the unused variable should be updated.
-
Copy modified line R249
| @@ -246,11 +246,7 @@ | ||
| class CacheLevel: # type: ignore | ||
| DERIVED_METRICS = None | ||
|
|
||
| try: | ||
| from ..utils.fast_diameter import approximate_diameter_2sweep # type: ignore | ||
| except ImportError: # pragma: no cover | ||
| # Fallback to exact (slow) diameter if fast version unavailable | ||
| approximate_diameter_2sweep = None # type: ignore | ||
| # (Removed unused import/assignment of approximate_diameter_2sweep) | ||
|
|
||
| # Import TNFR aliases for proper attribute access | ||
| try: |
| DERIVED_METRICS = None | ||
|
|
||
| try: | ||
| from ..utils.fast_diameter import approximate_diameter_2sweep # type: ignore |
Check notice
Code scanning / CodeQL
Unused import Note
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
The best fix for this type of unused import is to delete the import statement entirely. Removing an unused import cleans up dependencies and improves code readability. Specifically, in src/tnfr/physics/fields.py, lines 250-253 cover importing approximate_diameter_2sweep with a fallback assignment in case the import fails. Since this isn't used anywhere else in the code snippet, both the import and the fallback logic should be deleted. This includes the entire try-except block assigning approximate_diameter_2sweep = None. Only these lines (250-253) need to be removed.
-
Copy modified line R250
| @@ -246,12 +246,8 @@ | ||
| class CacheLevel: # type: ignore | ||
| DERIVED_METRICS = None | ||
|
|
||
| try: | ||
| from ..utils.fast_diameter import approximate_diameter_2sweep # type: ignore | ||
| except ImportError: # pragma: no cover | ||
| # Fallback to exact (slow) diameter if fast version unavailable | ||
| approximate_diameter_2sweep = None # type: ignore | ||
|
|
||
|
|
||
| # Import TNFR aliases for proper attribute access | ||
| try: | ||
| from ..constants.aliases import ALIAS_THETA, ALIAS_DNFR # type: ignore |
| phi_s = compute_structural_potential(G) | ||
| grad = compute_phase_gradient(G) | ||
| curv = compute_phase_curvature(G) | ||
| xi_c = estimate_coherence_length(G) |
Check notice
Code scanning / CodeQL
Unused global variable Note
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
The best way to fix the unused global variable warning for xi_c is to indicate that the result is intentionally unused, by renaming the variable to an "unused" naming convention (e.g., _ or unused_xi_c). Given this is module top-level code, using _ is conventional and clear. This demonstrates intent to future readers and silences static analysis tools. Only change line 70, renaming xi_c to _.
No new methods, imports, or definitions are required. Only change the assignment target. No functionality will be altered, as the value is not further used.
-
Copy modified line R70
| @@ -67,7 +67,7 @@ | ||
| phi_s = compute_structural_potential(G) | ||
| grad = compute_phase_gradient(G) | ||
| curv = compute_phase_curvature(G) | ||
| xi_c = estimate_coherence_length(G) | ||
| _ = estimate_coherence_length(G) | ||
|
|
||
| pr2.disable() | ||
|
|
| for _ in range(10): | ||
| phi_s = compute_structural_potential(G) | ||
| grad = compute_phase_gradient(G) | ||
| curv = compute_phase_curvature(G) |
Check notice
Code scanning / CodeQL
Unused global variable Note
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
To fix the problem, remove the assignment to the unused global variable curv while preserving the function call (to ensure side effects and profiling measurements remain unchanged). Only the left-hand side should be removed, keeping the compute_phase_curvature(G) call in the loop. This change is to be applied in profile_validation.py, specifically to line 69:
69: curv = compute_phase_curvature(G)This becomes:
69: compute_phase_curvature(G)No imports or further changes are required.
-
Copy modified line R69
| @@ -66,7 +66,7 @@ | ||
| for _ in range(10): | ||
| phi_s = compute_structural_potential(G) | ||
| grad = compute_phase_gradient(G) | ||
| curv = compute_phase_curvature(G) | ||
| compute_phase_curvature(G) | ||
| xi_c = estimate_coherence_length(G) | ||
|
|
||
| pr2.disable() |
| # Run field computations 10 times | ||
| for _ in range(10): | ||
| phi_s = compute_structural_potential(G) | ||
| grad = compute_phase_gradient(G) |
Check notice
Code scanning / CodeQL
Unused global variable Note
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
The best fix is to remove the assignment to the unused variable grad on line 68. Rather than assigning to grad, simply call compute_phase_gradient(G) without assigning its result to any variable. This retains the function call for profiling (i.e., its runtime impact and any side effects are preserved) but removes the unused variable and the associated false impression that its value may be used elsewhere. This change should be made in profile_validation.py, on line 68. No additional imports, definitions, or code changes are needed.
-
Copy modified line R68
| @@ -65,7 +65,7 @@ | ||
| # Run field computations 10 times | ||
| for _ in range(10): | ||
| phi_s = compute_structural_potential(G) | ||
| grad = compute_phase_gradient(G) | ||
| compute_phase_gradient(G) | ||
| curv = compute_phase_curvature(G) | ||
| xi_c = estimate_coherence_length(G) | ||
|
|
|
|
||
| # Run field computations 10 times | ||
| for _ in range(10): | ||
| phi_s = compute_structural_potential(G) |
Check notice
Code scanning / CodeQL
Unused global variable Note
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
The best way to fix the problem is to remove the unused assignment to the global variable phi_s. Since the right-hand side function compute_structural_potential(G) still needs to be called for its potential side effects (to be consistent with the way the other field computations are handled), the assignment operator and variable name should be removed, leaving just a bare function call. No other part of the code relies on phi_s, so removing the assignment does not impact any code functionality.
Concretely, in file profile_validation.py, line 67 should be changed from phi_s = compute_structural_potential(G) to simply compute_structural_potential(G). No imports, further code edits, or new definitions are needed.
-
Copy modified line R67
| @@ -64,7 +64,7 @@ | ||
|
|
||
| # Run field computations 10 times | ||
| for _ in range(10): | ||
| phi_s = compute_structural_potential(G) | ||
| compute_structural_potential(G) | ||
| grad = compute_phase_gradient(G) | ||
| curv = compute_phase_curvature(G) | ||
| xi_c = estimate_coherence_length(G) |
|
|
||
| # Run validation 10 times to get meaningful stats | ||
| for _ in range(10): | ||
| report = run_structural_validation( |
Check notice
Code scanning / CodeQL
Unused global variable Note
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
To resolve the unused global variable warning for report, simply remove the assignment to report and replace it with a function call to run_structural_validation(...). This will ensure the function is still executed for its side effects, but it's clear the return value is intentionally unneeded. Update only lines 43 and avoid changing the arguments or logic of the loop. No new imports or definitions are needed.
-
Copy modified line R43
| @@ -40,7 +40,7 @@ | ||
|
|
||
| # Run validation 10 times to get meaningful stats | ||
| for _ in range(10): | ||
| report = run_structural_validation( | ||
| run_structural_validation( | ||
| G, | ||
| sequence=sequence, | ||
| max_delta_phi_s=2.0, |
…ammar VECTORIZATION (NumPy broadcasting): - compute_phase_gradient: Batch phase differences, vectorized wrapping - compute_phase_curvature: Vectorized circular mean via cos/sin arrays - Pre-extract phases dict to avoid repeated node lookups - Performance: 1.707s → 1.670s (2% additional speedup) EARLY EXIT OPTIMIZATION: - Add stop_on_first_error parameter to validate_sequence - Short-circuit validation on first grammar violation - 10-30% speedup when sequences invalid (diagnostic trade-off) - Default: False (comprehensive reporting preserved) TOTAL CUMULATIVE SPEEDUP: - Baseline: 6.138s - + Fast diameter: 3.838s (37.5% ↓) - + Cached eccentricity: 1.707s (55% ↓) - + Vectorized phases: 1.670s (2% ↓) - **Total: 3.7× faster (73% reduction)** PARADIGM ALIGNMENT: - Vectorization = coherent batch operations (vs sequential loops) - Early exit = optional (respects diagnostic completeness need) - All changes read-only, preserve TNFR invariants Physics: Batch phase computations respect circular topology via NumPy. Tests: All passing (fields 3/3, grammar 10/10, validation 2/2) Refs: src/tnfr/physics/fields.py, src/tnfr/operators/grammar_core.py
Automated Code ReviewPlease review the workflow logs for details. |
COMPLETE OPTIMIZATION SUMMARY: - 8 major optimizations implemented (UTC, cache, guardrails, health, diameter, eccentricity, vectorization, early exit) - 3.7× total speedup: 6.138s → 1.670s (73% reduction) - 74% reduction in function calls (23.9M → 6.3M) - All optimizations align with TNFR paradigm (coherence, structural invariants) FINAL PERFORMANCE: - Diameter: 50× faster (approx vs exact) - Eccentricity: 10× first call + ∞× cached - Phase ops: 2-3× faster (vectorization) - Fields: Perfect cache (0.000s repeated) - Current bottleneck: Φ_s (84%, expected - distance matrix) STATUS: Validation pipeline fully optimized ✅
Automated Code ReviewPlease review the workflow logs for details. |
1 similar comment
Automated Code ReviewPlease review the workflow logs for details. |
| times_curv = [] | ||
| for _ in range(5): | ||
| t0 = time.perf_counter() | ||
| curv = compute_phase_curvature(G) |
Check notice
Code scanning / CodeQL
Unused global variable Note
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
The best way to fix the problem is to rename the unused variable curv in the assignment on line 47 to _, which is the standard convention to indicate that the value is intentionally unused. This preserves the function call's side effects (if any) and maintains the timing measurement for benchmarking purposes while clarifying to readers and static analysis tools that the return value is not used.
To do so, update the assignment in the inner loop on line 47 from curv = compute_phase_curvature(G) to _ = compute_phase_curvature(G). No other changes, imports, definitions, or method modifications are required.
-
Copy modified line R47
| @@ -44,7 +44,7 @@ | ||
| times_curv = [] | ||
| for _ in range(5): | ||
| t0 = time.perf_counter() | ||
| curv = compute_phase_curvature(G) | ||
| _ = compute_phase_curvature(G) | ||
| t1 = time.perf_counter() | ||
| times_curv.append((t1 - t0) * 1000) | ||
|
|
| times_grad = [] | ||
| for _ in range(5): | ||
| t0 = time.perf_counter() | ||
| grad = compute_phase_gradient(G) |
Check notice
Code scanning / CodeQL
Unused global variable Note
Show autofix suggestion
Hide autofix suggestion
Copilot Autofix
AI 28 days ago
To fix the unused global variable error for grad, we should remove the assignment and instead simply call compute_phase_gradient(G) for its potential side effects and to properly time the function execution. Only the timing result (difference between t0 and t1) is needed.
Specifically:
- In benchmark_phase_vectorization.py, in the loop at line 34 for phase gradient timing, replace
grad = compute_phase_gradient(G)withcompute_phase_gradient(G). - Preserve the function call to ensure any side effects are maintained.
- No other changes are needed, and no additional imports or definitions are required.
-
Copy modified line R36
| @@ -33,7 +33,7 @@ | ||
| times_grad = [] | ||
| for _ in range(5): | ||
| t0 = time.perf_counter() | ||
| grad = compute_phase_gradient(G) | ||
| compute_phase_gradient(G) | ||
| t1 = time.perf_counter() | ||
| times_grad.append((t1 - t0) * 1000) | ||
|
|
Summary
Implements Phase 3 structural instrumentation:
Added
run_structural_validation) combining grammar (U1-U3) + canonical field thresholds (Φ_s, |∇φ|, K_φ, ξ_C) with optional ΔΦ_s drift.compute_structural_health) generating risk levels (low/elevated/critical) and actionable recommendations.examples/structural_health_demo.py) and CLI (scripts/structural_health_report.py).PerformanceRegistry,perf_guard,compare_overhead+ instrumentation of validation aggregator (<~5.8% overhead moderate workload).docs/STRUCTURAL_HEALTH.md.Tests
Deferred
Integrity
Performance
Request
Merge Phase 3 enhancements into
mainto publish version 9.1.0.Next Steps (Post-Merge)
datetime.utcnow()deprecation warning in telemetry for future-proof UTC handling.Reality is not made of things—it's made of resonance.