-
Notifications
You must be signed in to change notification settings - Fork 5k
Claude/integrate uvmgr vendor zur cy #1370
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Open
seanchatmangpt
wants to merge
23
commits into
github:main
Choose a base branch
from
seanchatmangpt:claude/integrate-uvmgr-vendor-ZurCy
base: main
Could not load branches
Branch not found: {{ refName }}
Loading
Could not load tags
Nothing to show
Loading
Are you sure you want to change the base?
Some commits from the old base branch may be removed from the timeline,
and old review comments may become outdated.
Open
Claude/integrate uvmgr vendor zur cy #1370
seanchatmangpt
wants to merge
23
commits into
github:main
from
seanchatmangpt:claude/integrate-uvmgr-vendor-ZurCy
Conversation
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Integrate ggen v6 RDF-first architecture into spec-kit, enabling deterministic
ontology-driven specification generation following the constitutional equation:
spec.md = μ(feature.ttl)
## Core Architecture Changes
### 1. Constitutional Equation
- TTL files (Turtle/RDF) are the source of truth
- Markdown files are generated artifacts (never manually edited)
- SHACL shapes enforce constraints
- Idempotent transformations (μ∘μ = μ)
- Cryptographic provenance receipts
### 2. Infrastructure Updates
**Scripts (RDF-first support):**
- `scripts/bash/check-prerequisites.sh` - Returns TTL paths as primary sources
- Detects RDF vs. legacy features (checks ontology/ + ggen.toml)
- Validates TTL files first, falls back to MD for backward compatibility
- JSON output includes: TTL_SOURCES, ONTOLOGY_DIR, GENERATED_DIR
- `scripts/bash/common.sh` - Extended path variables for RDF architecture
- Added TTL source paths: FEATURE_SPEC_TTL, IMPL_PLAN_TTL, TASKS_TTL
- Added generated paths: ontology/, generated/, templates/
- SPECIFY_FEATURE env var support for exact branch matching
- `scripts/bash/setup-plan.sh` - Creates plan.ttl from templates
- Auto-detects RDF-first features
- Creates ontology/plan.ttl from template
- Symlinks templates/plan.tera
- Maintains backward compatibility for legacy MD-based features
### 3. Tera Templates (Markdown Generation)
**New templates for RDF → Markdown transformation:**
- `templates/plan.tera` (151 lines) - Renders plan.md from plan.ttl
- Technology stack, phases, decisions, risks, dependencies
- `templates/tasks.tera` (148 lines) - Renders tasks.md from tasks.ttl
- Phase-based organization, dependency tracking, parallelization
- `templates/constitution.tera` (173 lines) - Renders constitution.md
- Core principles, build standards, workflow rules, governance
### 4. RDF Helper Templates (Turtle/RDF Sources)
**Complete TTL template library (10 templates):**
- `templates/rdf-helpers/user-story.ttl.template` - User story instances with acceptance scenarios
- `templates/rdf-helpers/functional-requirement.ttl.template` - Functional requirements
- `templates/rdf-helpers/success-criterion.ttl.template` - Success criteria with metrics
- `templates/rdf-helpers/entity.ttl.template` - Domain entity definitions
- `templates/rdf-helpers/edge-case.ttl.template` - Edge case scenarios (NEW)
- `templates/rdf-helpers/assumption.ttl.template` - Assumption instances (NEW)
- `templates/rdf-helpers/plan-decision.ttl.template` - Architectural decisions (NEW)
- `templates/rdf-helpers/task.ttl.template` - Individual task instances (NEW)
- `templates/rdf-helpers/plan.ttl.template` - Complete plan structure (NEW, 2.3KB)
- `templates/rdf-helpers/tasks.ttl.template` - Complete task breakdown (NEW, 3.1KB)
### 5. Documentation
**Comprehensive RDF workflow documentation:**
- `docs/RDF_WORKFLOW_GUIDE.md` (19KB) - Complete workflow guide
- Constitutional equation explanation
- Five-stage pipeline (μ₁→μ₂→μ₃→μ₄→μ₅)
- SHACL validation guide with error examples
- Template system explanation
- Troubleshooting common issues
- End-to-end examples
- `docs/GGEN_RDF_README.md` - ggen-specific RDF integration overview
## Key Features
### SHACL Validation
- Priority values MUST be "P1", "P2", or "P3" (SHACL enforced)
- Dates in YYYY-MM-DD format with ^^xsd:date
- User stories require minimum 1 acceptance scenario
- Automatic validation during ggen render
### Five-Stage Pipeline
1. **μ₁ (Normalization)** - Canonicalize RDF + SHACL validation
2. **μ₂ (Extraction)** - SPARQL queries extract data
3. **μ₃ (Emission)** - Tera templates render markdown
4. **μ₄ (Canonicalization)** - Format markdown
5. **μ₅ (Receipt)** - Generate cryptographic hash
### Backward Compatibility
- Legacy MD-based features continue to work
- Auto-detection of RDF vs. legacy format
- Graceful fallback when TTL files missing
- SPECIFY_FEATURE env var for multi-feature branches
## Directory Structure (RDF-first features)
```
specs/NNN-feature-name/
├── ontology/ # SOURCE OF TRUTH
│ ├── feature-content.ttl
│ ├── plan.ttl
│ ├── tasks.ttl
│ └── spec-kit-schema.ttl (symlink)
├── generated/ # GENERATED ARTIFACTS
│ ├── spec.md
│ ├── plan.md
│ └── tasks.md
├── templates/ # TERA TEMPLATES (symlinks)
│ ├── spec.tera
│ ├── plan.tera
│ └── tasks.tera
├── ggen.toml # GGEN V6 CONFIG
└── checklists/
└── requirements.md
```
## Usage Examples
### Create RDF-first specification:
```bash
/speckit.specify "Add TTL validation command"
# Creates: ontology/feature-content.ttl
# Edit TTL source
vim specs/005-feature/ontology/feature-content.ttl
# Validate against SHACL
ggen validate ontology/feature-content.ttl --shapes ontology/spec-kit-schema.ttl
# Generate markdown
ggen render templates/spec.tera ontology/feature-content.ttl > generated/spec.md
```
### Create implementation plan:
```bash
/speckit.plan
# Creates: ontology/plan.ttl
# Generate markdown
ggen render templates/plan.tera ontology/plan.ttl > generated/plan.md
```
### Create task breakdown:
```bash
/speckit.tasks
# Creates: ontology/tasks.ttl
# Generate markdown
ggen render templates/tasks.tera ontology/tasks.ttl > generated/tasks.md
```
## Integration Notes
This branch integrates ggen v6's RDF-first architecture into spec-kit, enabling:
- Deterministic specification generation
- SHACL-enforced quality constraints
- Cryptographic provenance tracking
- Idempotent transformations
- Complete Turtle/RDF template library
For complete workflow documentation, see: docs/RDF_WORKFLOW_GUIDE.md
🤖 Generated with ggen v6 ontology-driven specification system
Add spec-kit-schema.ttl (25KB) containing SHACL shapes for validating: - User story priorities (must be P1, P2, or P3) - Feature metadata (dates, status, required fields) - Acceptance scenarios (min 1 per user story) - Task dependencies and parallelization - Entity definitions and requirements This schema enforces quality constraints during ggen render operations.
Ggen rdf integration
…-first workflow - Updated pyproject.toml to v0.0.23 with ggen v6 dependency documentation - Added ggen installation instructions to README.md with prerequisites - Replaced 'ggen render' with 'ggen sync' throughout RDF_WORKFLOW_GUIDE.md - Updated /speckit.specify to run ggen sync after creating TTL specifications - Updated /speckit.plan to generate markdown from plan.ttl via ggen sync - Updated /speckit.tasks to generate tasks.md from task.ttl sources - Updated /speckit.constitution with RDF-first architecture considerations - Updated /speckit.clarify to work with TTL sources and regenerate markdown - Updated /speckit.implement to ensure artifacts are synced before execution All commands now follow the RDF-first principle: TTL files are source of truth, markdown is generated via 'ggen sync' which reads ggen.toml configuration. Constitutional equation: spec.md = μ(feature.ttl) Five-stage pipeline: μ₁→μ₂→μ₃→μ₄→μ₅ 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
…rkflow - Added pytest configuration with integration test markers - Created testcontainer tests validating RDF-first architecture: * test_ggen_sync_generates_markdown: Verifies markdown generation from TTL * test_ggen_sync_idempotence: Verifies μ∘μ = μ (idempotence) * test_ggen_validates_ttl_syntax: Verifies invalid TTL is rejected * test_constitutional_equation_verification: Verifies deterministic transformation - Added test fixtures: * feature-content.ttl: Sample RDF feature specification * ggen.toml: Configuration with SPARQL query and template * spec.tera: Tera template for markdown generation * expected-spec.md: Expected output for validation - Updated pyproject.toml with test dependencies (pytest, testcontainers, rdflib) - Added comprehensive test documentation in tests/README.md - Updated main README with Testing & Validation section Tests verify the constitutional equation: spec.md = μ(feature.ttl) Uses Docker containers to install ggen and validate complete workflow. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
…tion infrastructure - Fixed docs/GGEN_RDF_README.md to use ggen sync instead of ggen render - Updated migration section to reflect ggen.toml workflow - Updated troubleshooting section with correct ggen sync command - Added scripts/validate-promises.sh: Comprehensive validation script checking: * No ggen render references (✓ 0 found) * ggen sync usage in commands (✓ 16 references) * TTL fixtures validity (✓ 35 RDF triples) * Test collection (✓ 4 tests) * pyproject.toml syntax (✓ valid TOML) * Referenced files existence (✓ all present) * ggen.toml fixture validity (✓ valid config) * Documentation links (✓ no broken links) * Version consistency (✓ 0.0.23) * Constitutional equation references (✓ 9 found) - Added VALIDATION_REPORT.md: Complete validation documentation with: * Executive summary * 10 promise validations (all passed) * Test infrastructure details * Git history * Installation verification * CI/CD recommendations All 10 promises validated: ✅ ALL PROMISES KEPT 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
The validation report documents the migration from ggen render to ggen sync, so it legitimately contains the text 'ggen render'. Exclude it from the validation check to avoid false positives. All 10 promises still validated: ✅ ALL PROMISES KEPT 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
…6 integration Implement the constitutional equation for documentation: documentation.md = μ(documentation.ttl) Includes: - New ontology extension (spec-kit-docs-extension.ttl) with documentation classes: * Guide, Principle, Changelog, ConfigurationOption, Workflow * SHACL shapes for validation * Object/datatype properties for documentation metadata - Documentation metadata container (memory/documentation.ttl): * Root container with all documentation instances * Navigation structure * Cross-references between guides - ggen v6 transformation configuration (docs/ggen.toml): * 13 RDF-to-Markdown transformation pipelines * SPARQL query bindings * Tera template mappings * 5-stage deterministic transformation pipeline - SPARQL query patterns (sparql/): * guide-query.rq - Extract guide metadata * principle-query.rq - Extract principles * changelog-query.rq - Extract releases * config-query.rq - Extract configuration options * workflow-query.rq - Extract workflow steps - Tera templates (templates/): * philosophy.tera - Render principles * guide.tera - Generic guide rendering * configuration-reference.tera - Reference tables * changelog.tera - Release notes - RDF Documentation System guide explaining architecture and usage This enables: - Single source of truth documentation in RDF - Deterministic Markdown generation - SHACL validation of documentation quality - Semantic documentation relationships - Automated cross-reference management
Create memory/philosophy.ttl with extracted constitutional principles: - 6 core SDD principles (Specifications as Lingua Franca, Executable Specifications, Continuous Refinement, Research-Driven Context, Bidirectional Feedback, Branching for Exploration) - 6 Constitutional Articles (Library-First, CLI Interface, Test-First, Simplicity, Anti-Abstraction, Integration-First Testing) - Constitutional equation principle - Each principle includes ID, index, title, description, rationale, examples, and violations - Structured as sk:Principle RDF instances for deterministic transformation Enables generation of spec-driven.md from RDF via ggen sync
Create DOCUMENTATION_REFACTORING_SUMMARY.md documenting: - Complete refactoring to Turtle RDF for all documentation - Architecture overview with transformation pipeline - File inventory of 14 new files, 2000+ lines of code - Constitutional principles captured as RDF instances - Validation framework with SHACL shapes - Usage instructions for ggen sync - Constitutional alignment (demonstrates SDD applied to documentation) - Commits and next steps This summary explains the complete RDF documentation refactoring and demonstrates how SDD principles apply to the documentation system itself.
Add comprehensive process mining commands using pm4py library: - pm discover: Discover process models (alpha, heuristic, inductive, ILP miners) - pm conform: Conformance checking (token-based and alignment-based) - pm stats: Event log statistics with activity and variant analysis - pm convert: Convert between XES, CSV, PNML, and BPMN formats - pm visualize: Visualize process models and DFG - pm filter: Filter event logs by activities, variants, trace length - pm sample: Generate synthetic event logs for testing Dependencies: - Add pm4py>=2.7.0 - Add pandas>=2.0.0
…-rdf-AsLJi Claude/refactor docs turtle rdf as l ji
…ommands-oEqDN feat(pm): Add pm4py process mining command suite
Integrate best practices from uvmgr to improve spec-kit's architecture and maintainability: Core Utilities Layer: - Add shell.py: Rich output utilities (colour, dump_json, markdown, timed, progress_bar) - Add process.py: Subprocess execution helpers (run_command, run_logged) Operations Layer: - Add ops/process_mining.py: Pure business logic for PM operations - Extract 9 process mining functions from CLI layer for reusability - Enable standalone testing and programmatic API usage Architecture Improvements: - Implement three-layer pattern (CLI → Ops → Core) - Separate concerns: CLI user interaction vs. business logic - Reduce code duplication in terminal output formatting Backward Compatibility: - All changes are additive; existing CLI continues to work - Gradual migration path for CLI refactoring - Foundation for future OTEL instrumentation and error handling See RETROFIT_SUMMARY.md for detailed integration guide and next steps.
Includes the uvmgr repository as a git submodule, providing reference for architecture patterns, utility implementations, and best practices being integrated into spec-kit. This enables direct comparison of patterns and serves as documentation for the retrofit process while keeping the repository clean.
Version bump: 0.0.23 → 0.0.24 Updates: - Reflect new modular architecture: core utilities and ops layers - Add explicit version constraints for core dependencies - Add optional 'otel' extra for OpenTelemetry observability stack - Add optional 'dev' extra for code quality tools (ruff, mypy, black, pytest-watch) - Add tool configurations for ruff, black, mypy, and pytest - Update package description to highlight modular design This enables: - pip install specify-cli[otel] for observability features - pip install specify-cli[dev] for development workflow - Code quality enforcement with ruff, type checking with mypy - Better test coverage reporting via pytest configuration
Integrated SpiffWorkflow from uvmgr for sophisticated workflow orchestration
capabilities with full OpenTelemetry instrumentation support.
Core Components Added:
- SPIFF runtime engine (src/specify_cli/spiff/runtime.py)
* BPMN workflow execution with SpiffWorkflow
* Safety mechanisms: infinite loop detection, max iterations
* Task-level performance tracking
* Comprehensive OTEL instrumentation (spans, events, metrics)
* Graceful degradation if OTEL not available
- Semantic conventions (src/specify_cli/core/semconv.py)
* WorkflowAttributes: workflow/task execution semantics
* WorkflowOperations: operation names
* TestAttributes: test execution and validation
* SpecAttributes: Spec-Kit domain-specific attributes
- SPIFF module structure (src/specify_cli/spiff/)
* Lazy loading for optional SpiffWorkflow dependency
* Clean public API: run_bpmn, validate_bpmn_file, get_workflow_stats
- Optional dependency (pyproject.toml)
* pip install specify-cli[spiff] to enable SPIFF support
* Version 0.0.24 with SPIFF support
Features:
- Execute BPMN workflows with full telemetry
- Infinite loop protection
- Task state enumeration (COMPLETED, READY, WAITING, CANCELLED)
- Performance metrics at workflow/task/step levels
- Rich formatted output for workflow execution
Next Phases:
- Phase 2: OTEL validation operations
- Phase 3: External project validation
- Phase 4: SPIFF CLI commands
Implemented BPMN-driven OTEL instrumentation validation framework.
Adapted from uvmgr's comprehensive validation system.
Operations Added:
- OTELValidationResult: Comprehensive validation result tracking
- TestValidationStep: Individual step tracking with metrics
- create_otel_validation_workflow(): Generate BPMN validation workflows
- execute_otel_validation_workflow(): Execute validation with 4-step process
1. BPMN file validation
2. Workflow execution verification
3. Test command execution
4. OTEL system health check
- run_8020_otel_validation(): 80/20 critical path validation
- _validate_otel_system_health(): Check OTEL component availability
Features:
- Workflow-driven validation methodology
- Step-by-step tracking with timings
- Test result collection and reporting
- OTEL health verification
- Graceful OTEL degradation (works without OTEL installed)
- JSON-serializable results for reporting
80/20 Validation Modes:
- minimal: Core OTEL library imports
- core: Critical imports + instrumentation
- full: Comprehensive OTEL + spec-kit integration
This enables:
- Validate spec-kit's OTEL instrumentation
- Create custom validation workflows
- Batch testing of OTEL functionality
- Clear pass/fail metrics
Phase 3: External Project Validation Operations
- ExternalProjectInfo: Project metadata and confidence scoring
- ExternalValidationResult: Complete validation result tracking
- discover_external_projects(): Filesystem-based project discovery
- validate_external_project_with_spiff(): Full validation pipeline
- batch_validate_external_projects(): Parallel/sequential batch processing
- run_8020_external_project_validation(): Critical project validation
- Project type detection: web, cli, library, data, ml
- Package manager detection: uv, pip, poetry, pipenv
- Confidence-based project filtering
Phase 4: SPIFF CLI Commands
- Rich CLI interface with typer + Rich formatting
- Commands:
* validate: Full OTEL validation with iterations
* validate_quick: 80/20 critical path validation
* create_workflow: Generate custom BPMN workflows
* run_workflow: Execute BPMN workflow files
* discover_projects: Find Python projects
* validate_external: Validate single external project
* batch_validate: Multi-project validation with parallelism
* validate_8020: Critical external projects (80/20 approach)
Features:
- Beautiful Rich formatted output (panels, tables, colors)
- JSON export for all operations
- Parallel execution for batch operations
- Progress tracking and step-by-step status
- Comprehensive error handling with telemetry
- 80/20 validation modes (minimal, core, full)
CLI Usage:
specify spiff validate --iterations 3
specify spiff validate-quick --export-json results.json
specify spiff create-workflow --test 'pytest tests/'
specify spiff run-workflow workflow.bpmn
specify spiff discover-projects --path ~/projects --depth 3
specify spiff validate-external /path/to/project
specify spiff batch-validate --parallel --workers 4
specify spiff validate-8020 --type web
All operations include OTEL instrumentation and graceful degradation.
Added comprehensive unit tests for all SPIFF components: test_spiff_runtime.py (65 lines): - BPMN file validation (valid, invalid, non-existent) - Workflow execution (simple workflow, string path, non-existent) - Workflow statistics collection - Results structure verification test_spiff_otel_validation.py (175 lines): - Validation result dataclasses - BPMN workflow creation for OTEL validation - 80/20 OTEL validation (minimal, core, full modes) - Individual validation step tracking - Error handling and duration tracking test_spiff_external_projects.py (280 lines): - ExternalProjectInfo and ExternalValidationResult - Python project detection and analysis - Project type detection (web, cli, library, data, ml) - Test command generation (80/20 and comprehensive) - Project discovery with depth and confidence filtering - Recursive directory scanning and sorting Total: 520 lines of test code covering: - Core functionality (validation, execution, statistics) - Data structures (result types, project info) - Project discovery and analysis - Workflow creation and validation - Edge cases and error conditions - Integration scenarios Test coverage includes: - Unit tests for individual components - Integration tests for workflows - Mock-based testing for external dependencies - Dataclass serialization (to_dict()) - File I/O and directory operations - Edge cases (empty dirs, non-existent files, etc.) All tests use pytest fixtures and follow pytest conventions.
Comprehensive documentation of full SPIFF migration including: - 4 phases completed (runtime, OTEL, external validation, CLI) - Architecture and project structure - Installation and usage guide - API documentation and examples - Semantic conventions reference - Test coverage summary - File statistics and git history - Success criteria checklist - Future enhancement ideas 3,095 lines of code + 520 lines of tests = Complete SPIFF integration
Adds exports for external project validation operations to the ops module. This was part of Phase 3 external projects integration.
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.
This suggestion is invalid because no changes were made to the code.
Suggestions cannot be applied while the pull request is closed.
Suggestions cannot be applied while viewing a subset of changes.
Only one suggestion per line can be applied in a batch.
Add this suggestion to a batch that can be applied as a single commit.
Applying suggestions on deleted lines is not supported.
You must change the existing code in this line in order to create a valid suggestion.
Outdated suggestions cannot be applied.
This suggestion has been applied or marked resolved.
Suggestions cannot be applied from pending reviews.
Suggestions cannot be applied on multi-line comments.
Suggestions cannot be applied while the pull request is queued to merge.
Suggestion cannot be applied right now. Please check back later.
No description provided.