feat: implement Skill management system for Claude Code #183
Conversation
Add comprehensive Skill management functionality to allow users to upload,
manage, and deploy Claude Code Skills ZIP packages.
Core Changes:
- Add SkillBinary model for storing ZIP package binary data
- Implement Skill CRD schemas following Kubernetes design pattern
- Create skill_service with ZIP validation and YAML frontmatter parsing
- Add Skills REST API endpoints (upload, list, get, download, update, delete)
- Extend Ghost spec to support skills field with reference validation
- Add Alembic migration for skill_binaries table
- Integrate Skills download in ClaudeCodeAgent initialization
Technical Details:
- ZIP packages validated for size (max 10MB) and SKILL.md presence
- SHA256 hash calculation for integrity verification
- Skill deletion prevented if referenced by Ghost
- Skills automatically deployed to ~/.claude/skills/ on executor startup
- User isolation enforced across all Skill operations
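The size, `SKILL.md`, and hash checks listed above can be sketched with the standard library. This is an illustrative sketch, not the PR's actual service code; `validate_skill_package` and `MAX_SKILL_SIZE` are assumed names:

```python
import hashlib
import io
import zipfile

MAX_SKILL_SIZE = 10 * 1024 * 1024  # the 10MB cap described above

def validate_skill_package(zip_bytes: bytes) -> dict:
    """Check size, ZIP integrity, and SKILL.md presence; return size and SHA-256."""
    if len(zip_bytes) > MAX_SKILL_SIZE:
        raise ValueError("Skill package exceeds 10MB limit")
    try:
        with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
            names = zf.namelist()
    except zipfile.BadZipFile:
        raise ValueError("Invalid ZIP file format")
    if "SKILL.md" not in names:
        raise ValueError("ZIP package must contain SKILL.md")
    return {
        "file_size": len(zip_bytes),
        "file_hash": hashlib.sha256(zip_bytes).hexdigest(),
    }

# Build a tiny in-memory ZIP to exercise the validator
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("SKILL.md", "---\ndescription: demo\n---\n")
meta = validate_skill_package(buf.getvalue())
print(len(meta["file_hash"]))  # 64 hex chars for SHA-256
```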
Database:
- New skill_binaries table with CASCADE delete constraint
- Foreign key to kinds table for CRD data
- Stores binary_data, file_size, file_hash fields
API Endpoints:
- POST /api/v1/kinds/skills/upload - Upload new Skill
- GET /api/v1/kinds/skills - List user's Skills
- GET /api/v1/kinds/skills/{id} - Get Skill details
- GET /api/v1/kinds/skills/{id}/download - Download ZIP package
- PUT /api/v1/kinds/skills/{id} - Update Skill
- DELETE /api/v1/kinds/skills/{id} - Delete Skill (with reference check)
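A client call to the upload endpoint above might be shaped as follows. This sketch only builds the request (it does not send it); the base URL and bearer token are placeholder assumptions:

```python
import urllib.request

API_BASE = "http://localhost:8000"  # hypothetical deployment URL
TOKEN = "example-token"             # placeholder bearer token

def build_upload_request(name: str, zip_bytes: bytes) -> urllib.request.Request:
    """Build (but do not send) a multipart/form-data request for the upload endpoint."""
    boundary = "skill-upload-boundary"
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="name"\r\n\r\n{name}\r\n'
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{name}.zip"\r\n'
        f"Content-Type: application/zip\r\n\r\n"
    ).encode() + zip_bytes + f"\r\n--{boundary}--\r\n".encode()
    return urllib.request.Request(
        f"{API_BASE}/api/v1/kinds/skills/upload",
        data=body,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
        method="POST",
    )

req = build_upload_request("my-skill", b"PK\x05\x06" + b"\x00" * 18)
print(req.get_method(), req.full_url)
```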
Walkthrough

A new Skills feature is introduced enabling users to upload, manage, and deploy Claude Code Skill packages. Changes span the database schema (`skill_binaries` table), API endpoints (upload/list/download/delete), ORM models, Pydantic schemas, service-layer validation and CRUD operations, bot-skills associations, and executor-side skill deployment from the backend.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant API as Skills API
    participant SkillService
    participant DB as Database
    participant Validator as SkillValidator
    User->>API: POST /upload (ZIP file)
    API->>Validator: validate_zip(file_content)
    Validator->>Validator: Check size, ZIP integrity<br/>Extract SKILL.md, parse YAML<br/>Compute SHA-256 hash
    Validator-->>API: Metadata + hash + file_size
    API->>SkillService: create_skill(user_id, name,<br/>namespace, file_content)
    SkillService->>SkillService: Check duplicate<br/>by namespace
    SkillService->>DB: Create Kind record
    SkillService->>DB: Create SkillBinary record
    DB-->>SkillService: Skill CRD
    SkillService-->>API: Skill (with metadata)
    API-->>User: 201 Skill created
```

```mermaid
sequenceDiagram
    participant Executor
    participant Backend as Backend API
    participant DB as Database
    participant FileSystem as Filesystem
    Executor->>Executor: initialize(bot_config)
    Executor->>Backend: GET /v1/kinds/skills?user_id=X<br/>(list user's skills)
    Backend->>DB: Query skills for user
    DB-->>Backend: SkillList
    Backend-->>Executor: [Skill, ...]
    loop For each skill
        Executor->>Backend: GET /{skill_id}/download
        Backend->>DB: get_skill_binary(skill_id)
        DB-->>Backend: ZIP bytes
        Backend-->>Executor: ZIP bytes (Content-Type: zip)
        Executor->>FileSystem: Extract to ~/.claude/skills/<br/>skill_name/
    end
    Executor->>Executor: Resume agent initialization
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks: ✅ 3 checks passed
Actionable comments posted: 4
🧹 Nitpick comments (13)
backend/app/schemas/bot.py (1)
6-6: Consider using consistent type hint style.

The file mixes `List[str]` (from `typing`) with lowercase `dict[str, Any]` (built-in generic). For consistency, consider using either all `typing` imports or all built-in generics (Python 3.9+).

```diff
-from typing import Any, Optional, List
+from typing import Any, Optional
```

Then update line 19 and similar:

```diff
-    skills: Optional[List[str]] = None  # List of Skill names to associate
+    skills: Optional[list[str]] = None  # List of Skill names to associate
```

Also applies to: 16-19
backend/app/models/skill_binary.py (2)
8-10: Reorder imports per PEP 8.

Standard library imports (`datetime`) should come before third-party imports (`sqlalchemy`). As per coding guidelines, use isort for import organization.

```diff
+from datetime import datetime
+
 from sqlalchemy import Column, Integer, String, LargeBinary, ForeignKey, DateTime
-from datetime import datetime

 from app.db.base import Base
```
22-22: Consider using UTC for timestamp consistency.

Using `datetime.now` captures local server time, which may vary across distributed systems. Consider using `datetime.utcnow` or a timezone-aware approach for consistency with other timestamp fields in the codebase.

```diff
-    created_at = Column(DateTime, default=datetime.now)
+    created_at = Column(DateTime, default=datetime.utcnow)
```

backend/app/services/adapters/bot_kinds.py (1)
348-361: Consider consolidating Ghost update logic.

The skill update block follows a similar pattern to the `mcp_servers` update (lines 341-346). Both validate the Ghost model, update a field, dump JSON, flag as modified, and add to the session. This pattern appears multiple times in the method.

While functional as-is, consider extracting a helper method for Ghost field updates to reduce duplication:

```python
def _update_ghost_field(self, ghost: Kind, field_name: str, value: Any) -> None:
    ghost_crd = Ghost.model_validate(ghost.json)
    setattr(ghost_crd.spec, field_name, value)
    ghost.json = ghost_crd.model_dump()
    flag_modified(ghost, "json")
```

backend/alembic/versions/a1b2c3d4e5f6_add_skill_binaries_table.py (1)
23-35: Raw SQL migration limits database portability.

The migration uses MySQL-specific syntax (`LONGBLOB`, `ENGINE=InnoDB`, `COMMENT`). If cross-database compatibility is needed in the future, consider using Alembic's `op.create_table()` with SQLAlchemy types. However, if MySQL is the only target database, this approach is acceptable.

If portability is desired:

```python
def upgrade() -> None:
    op.create_table(
        'skill_binaries',
        sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
        sa.Column('kind_id', sa.Integer(), nullable=False),
        sa.Column('binary_data', sa.LargeBinary(), nullable=False),
        sa.Column('file_size', sa.Integer(), nullable=False),
        sa.Column('file_hash', sa.String(64), nullable=False),
        sa.Column('created_at', sa.DateTime(), server_default=sa.func.now()),
        sa.PrimaryKeyConstraint('id'),
        sa.ForeignKeyConstraint(['kind_id'], ['kinds.id'], ondelete='CASCADE'),
        sa.UniqueConstraint('kind_id', name='idx_skill_binary_kind_id'),
    )
    op.create_index('ix_skill_binaries_id', 'skill_binaries', ['id'])
```

executor/agents/claude_code/claude_code_agent.py (1)
539-541: Use specific exception types and explicit conversion.

Static analysis flags broad exception handling (BLE001) and suggests explicit conversion flags (RUF010).

Consider catching more specific exceptions and using the f-string conversion flag:

```diff
-            except Exception as e:
-                logger.warning(f"Failed to download skill '{skill_name}': {str(e)}, continuing")
+            except (httpx.HTTPError, KeyError, ValueError) as e:
+                logger.warning(f"Failed to download skill '{skill_name}': {e!s}, continuing")
                 continue

         if deployed_count > 0:
             logger.info(f"Deployed {deployed_count} skills to {skills_dir}")
         else:
             logger.info("No skills were deployed")

-    except Exception as e:
-        logger.warning(f"Failed to download and deploy skills: {str(e)}")
+    except (httpx.HTTPError, OSError, KeyError) as e:
+        logger.warning(f"Failed to download and deploy skills: {e!s}")
         # Don't fail the task if skills deployment fails
```

Also applies to: 548-550
backend/app/api/endpoints/kind/skills.py (2)
8-11: Move HTTPException import to top-level.

`HTTPException` is imported inline within function bodies (lines 39 and 141). Move it to the top-level imports for consistency and to avoid repeated import overhead.

```diff
-from fastapi import APIRouter, Depends, UploadFile, File, Form, status
+from fastapi import APIRouter, Depends, HTTPException, UploadFile, File, Form, status
```

Then remove the inline imports at lines 39 and 141.
57-75: Consider adding an upper bound for the pagination limit.

The `limit` parameter has no maximum constraint, allowing clients to request an arbitrarily large number of records, which could impact performance.

```diff
+MAX_PAGE_SIZE = 100
+
 @router.get("", response_model=SkillList)
 def list_skills(
     skip: int = 0,
-    limit: int = 100,
+    limit: int = Query(default=100, le=MAX_PAGE_SIZE, ge=1),
     current_user: User = Depends(security.get_current_user),
     db: Session = Depends(get_db)
 ):
```

You'll also need to import `Query` from `fastapi`.

backend/app/services/skill_service.py (5)
56-62: Path traversal check is adequate but could be strengthened.

The check for `'..'` catches basic directory traversal attempts. For defense-in-depth, consider also normalizing the path and rejecting absolute paths on any platform:

```diff
+import os
+
 for name in zip_file.namelist():
     # Prevent directory traversal attacks
-    if name.startswith('/') or '..' in name:
+    normalized = os.path.normpath(name)
+    if name.startswith('/') or '..' in name or os.path.isabs(normalized):
         raise HTTPException(
             status_code=400,
             detail="Invalid file path in ZIP package"
         )
```
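A quick standalone check of the strengthened guard against representative entry names (the helper name and sample paths below are illustrative, not code from the PR):

```python
import os

def is_unsafe_entry(name: str) -> bool:
    """Reject absolute paths and parent-directory escapes in ZIP entry names."""
    normalized = os.path.normpath(name)
    return name.startswith("/") or ".." in name or os.path.isabs(normalized)

print(is_unsafe_entry("../etc/passwd"))  # True: parent-directory escape
print(is_unsafe_entry("/etc/passwd"))    # True: absolute path
print(is_unsafe_entry("docs/SKILL.md"))  # False: safe relative path
```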
78-89: Use exception chaining for better debugging.

The raised exceptions lose the original traceback. Use `raise ... from err` or `raise ... from None` to maintain the exception chain.

```diff
     except zipfile.BadZipFile:
         raise HTTPException(
             status_code=400,
             detail="Invalid ZIP file format"
-        )
+        ) from None
     except HTTPException:
         raise
     except Exception as e:
         raise HTTPException(
             status_code=400,
             detail=f"Failed to process ZIP file: {str(e)}"
-        )
+        ) from e
```
103-161: Custom YAML parser is fragile; consider using PyYAML.

The hand-rolled YAML parser handles only simple key-value pairs and single-line arrays. It will fail on:

- Multi-line strings (using `|` or `>`)
- Nested structures
- YAML anchors/references
- Quoted strings containing colons

Consider using PyYAML for robust parsing:

```python
import yaml

@staticmethod
def _parse_skill_md(content: str) -> Dict[str, Any]:
    frontmatter_pattern = r'^---\s*\n(.*?)\n---'
    match = re.search(frontmatter_pattern, content, re.DOTALL | re.MULTILINE)
    if not match:
        raise HTTPException(
            status_code=400,
            detail="SKILL.md must contain YAML frontmatter (---\\n...\\n---)"
        )
    try:
        metadata = yaml.safe_load(match.group(1)) or {}
    except yaml.YAMLError as e:
        raise HTTPException(
            status_code=400,
            detail=f"Invalid YAML frontmatter: {str(e)}"
        ) from e
    if "description" not in metadata:
        raise HTTPException(
            status_code=400,
            detail="SKILL.md frontmatter must contain 'description' field"
        )
    return metadata
```

If staying with the custom parser, document the supported format constraints in the docstring.
432-462: Consider a batch query to avoid N+1 database calls.

The method executes one query per skill name, which can be inefficient for large lists.

```python
@staticmethod
def validate_skill_references(
    db: Session,
    user_id: int,
    skill_names: List[str],
    namespace: str = "default"
):
    if not skill_names:
        return
    existing = db.query(Kind.name).filter(
        and_(
            Kind.user_id == user_id,
            Kind.kind == "Skill",
            Kind.name.in_(skill_names),
            Kind.namespace == namespace
        )
    ).all()
    existing_names = {row.name for row in existing}
    missing = set(skill_names) - existing_names
    if missing:
        raise HTTPException(
            status_code=400,
            detail=f"Skill(s) not found in namespace '{namespace}': {', '.join(sorted(missing))}"
        )
```
383-395: Reference check loads all user Ghosts into memory.

The current approach fetches all Ghost records for the user to check skill references. For users with many Ghosts, this could impact performance.

If using MySQL 5.7+ or PostgreSQL, consider using JSON query functions to filter at the database level:

```python
# MySQL example with JSON_CONTAINS
ghosts = db.query(Kind).filter(
    and_(
        Kind.user_id == user_id,
        Kind.kind == "Ghost",
        func.json_contains(Kind.json, f'"{skill_name}"', '$.spec.skills')
    )
).all()
```

This is an optional optimization for future scalability.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- backend/alembic/versions/a1b2c3d4e5f6_add_skill_binaries_table.py (1 hunks)
- backend/app/api/endpoints/kind/__init__.py (1 hunks)
- backend/app/api/endpoints/kind/skills.py (1 hunks)
- backend/app/models/__init__.py (1 hunks)
- backend/app/models/skill_binary.py (1 hunks)
- backend/app/schemas/bot.py (4 hunks)
- backend/app/schemas/kind.py (2 hunks)
- backend/app/services/adapters/bot_kinds.py (5 hunks)
- backend/app/services/skill_service.py (1 hunks)
- executor/agents/claude_code/claude_code_agent.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Python code must be PEP 8 compliant with Black formatter (line length: 88)
Use isort for import organization in Python
Python functions and variables must have descriptive names
Docstrings required for all public Python functions and classes
Extract magic numbers to named constants in Python
Python functions must not exceed 50 lines
Type hints required for all Python function signatures
Run pylint and flake8 for Python code quality checks
Files:
- backend/app/schemas/kind.py
- backend/alembic/versions/a1b2c3d4e5f6_add_skill_binaries_table.py
- backend/app/models/__init__.py
- executor/agents/claude_code/claude_code_agent.py
- backend/app/api/endpoints/kind/__init__.py
- backend/app/api/endpoints/kind/skills.py
- backend/app/services/skill_service.py
- backend/app/services/adapters/bot_kinds.py
- backend/app/schemas/bot.py
- backend/app/models/skill_binary.py
🧠 Learnings (1)
📚 Learning: 2025-11-26T07:34:32.573Z
Learnt from: CR
Repo: wecode-ai/Wegent PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-26T07:34:32.573Z
Learning: Use Kubernetes-inspired CRD design (Ghost, Model, Shell, Bot, Team, Task)
Applied to files:
backend/app/schemas/kind.py
🧬 Code graph analysis (4)
backend/app/models/__init__.py (1)
- backend/app/models/skill_binary.py (1)
  - SkillBinary (13-26)

backend/app/api/endpoints/kind/skills.py (2)
- backend/app/services/skill_service.py (7)
  - SkillService (164-462), create_skill (168-252), list_skills (271-286), get_skill (255-268), get_skill_binary (289-309), update_skill (312-365), delete_skill (368-405)
- backend/app/core/security.py (1)
  - get_current_user (27-50)

backend/app/services/skill_service.py (4)
- backend/app/models/kind.py (1)
  - Kind (14-30)
- backend/app/models/skill_binary.py (1)
  - SkillBinary (13-26)
- backend/app/schemas/kind.py (5)
  - Skill (308-314), SkillList (317-321), SkillSpec (293-298), SkillStatus (301-305), ObjectMeta (13-18)
- backend/app/api/endpoints/kind/skills.py (4)
  - get_skill (79-91), list_skills (58-75), update_skill (128-155), delete_skill (159-176)

backend/app/services/adapters/bot_kinds.py (2)
- backend/app/services/skill_service.py (2)
  - SkillService (164-462), validate_skill_references (433-462)
- backend/app/schemas/kind.py (3)
  - Ghost (41-47), Shell (102-108), Model (68-74)
🪛 GitHub Actions: Tests
executor/agents/claude_code/claude_code_agent.py
[error] 446-446: SyntaxError: 'await' outside async function
🪛 Ruff (0.14.5)
executor/agents/claude_code/claude_code_agent.py
539-539: Do not catch blind exception: Exception
(BLE001)
540-540: Use explicit conversion flag
Replace with conversion flag
(RUF010)
548-548: Do not catch blind exception: Exception
(BLE001)
549-549: Use explicit conversion flag
Replace with conversion flag
(RUF010)
backend/app/api/endpoints/kind/skills.py
24-24: Do not perform function call File in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
27-27: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
28-28: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
61-61: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
62-62: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
81-81: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
82-82: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
97-97: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
98-98: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
130-130: Do not perform function call File in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
131-131: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
132-132: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
161-161: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
162-162: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
backend/app/services/skill_service.py
59-62: Abstract raise to an inner function
(TRY301)
70-73: Abstract raise to an inner function
(TRY301)
79-82: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
85-85: Do not catch blind exception: Exception
(BLE001)
86-89: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
88-88: Use explicit conversion flag
Replace with conversion flag
(RUF010)
🔇 Additional comments (14)
backend/app/schemas/bot.py (1)
19-19: LGTM!

The `skills` field is consistently added across the `BotBase`, `BotUpdate`, and `BotDetail` schemas with appropriate optionality, enabling skill association with bots.

Also applies to: 33-33, 55-55
backend/app/models/__init__.py (1)
16-16: LGTM!

The `SkillBinary` model is correctly imported and exported, following the existing pattern for other models in the package.

Also applies to: 22-23
backend/app/api/endpoints/kind/__init__.py (1)
19-20: LGTM!

The skills router is correctly registered with an appropriate prefix and tags. Placement before `kinds_router` ensures `/v1/kinds/skills` routes are matched before more generic kind routes.

backend/app/services/adapters/bot_kinds.py (2)
48-54: LGTM!

Skill references are properly validated before bot creation, ensuring all referenced skills exist for the user.
541-567: LGTM!

The `_convert_to_bot_dict` method correctly extracts and includes the `skills` field from the Ghost spec, maintaining consistency with other bot properties.

backend/alembic/versions/a1b2c3d4e5f6_add_skill_binaries_table.py (1)
38-40: Approve downgrade implementation.

The downgrade correctly drops the table, which will cascade-delete all skill binary data. This is the expected behavior given the foreign key constraint.
executor/agents/claude_code/claude_code_agent.py (2)
479-482: Verify that deleting the existing skills directory is the intended behavior.

The code unconditionally removes the entire `~/.claude/skills` directory on each task startup (line 482: `shutil.rmtree(skills_dir)`). This removes all previously deployed skills, which may be unexpected if skills are shared across tasks or bots.

Please confirm this is the intended behavior. If skills should persist across tasks, consider a selective cleanup approach that only removes skills not in the current bot's configuration.
18-20: LGTM!

The new imports for `zipfile`, `shutil`, and `httpx` are correctly added to support the skills deployment workflow.

backend/app/schemas/kind.py (2)
33-33: LGTM!

The `skills` field addition to `GhostSpec` follows the existing pattern for optional list fields and properly uses `Optional[List[str]]` typing. This enables associating Skills with Ghosts as intended by the feature.
292-321: Well-structured Skill CRD schemas following established patterns.

The `SkillSpec`, `SkillStatus`, `Skill`, and `SkillList` classes are consistent with the existing Kubernetes-inspired CRD design pattern used throughout this file. Docstrings are present, and type hints are properly applied.

backend/app/api/endpoints/kind/skills.py (3)
37-40: Extension check is defense-in-depth but easily bypassed.

The `.zip` extension check on the filename is a superficial validation that can be trivially bypassed. This is acceptable since `SkillValidator.validate_zip` performs proper ZIP format validation in the service layer. Consider adding a comment to clarify this is just a quick-fail check.
78-91: LGTM!

The `get_skill` endpoint correctly delegates to the service layer with proper dependency injection and returns the typed response.
158-175: LGTM!

The `delete_skill` endpoint correctly uses `HTTP_204_NO_CONTENT` status and delegates reference checking to the service layer.

backend/app/services/skill_service.py (1)
167-252: LGTM!

The `create_skill` method properly validates uniqueness, processes the ZIP, and creates both `Kind` and `SkillBinary` records atomically within a transaction. Using `flush()` before creating `SkillBinary` correctly ensures the `kind_id` foreign key is available.
```python
return StreamingResponse(
    io.BytesIO(binary_data),
    media_type="application/zip",
    headers={
        "Content-Disposition": f"attachment; filename={skill.metadata.name}.zip"
    }
)
```
Sanitize filename in Content-Disposition header to prevent header injection.
The skill name is used directly in the Content-Disposition header without sanitization. If the name contains special characters (quotes, newlines, semicolons), it could corrupt the header or enable HTTP response splitting.
```diff
+    # Sanitize filename for Content-Disposition header
+    safe_filename = "".join(c for c in skill.metadata.name if c.isalnum() or c in "-_")
+
     return StreamingResponse(
         io.BytesIO(binary_data),
         media_type="application/zip",
         headers={
-            "Content-Disposition": f"attachment; filename={skill.metadata.name}.zip"
+            "Content-Disposition": f'attachment; filename="{safe_filename}.zip"'
         }
     )
```

Note: The filename should also be wrapped in quotes per RFC 6266.
🤖 Prompt for AI Agents
In backend/app/api/endpoints/kind/skills.py around lines 118 to 124, the
Content-Disposition header uses skill.metadata.name directly which risks header
injection and invalid headers; sanitize the filename by removing or replacing
CR/LF, quotes, semicolons and other unsafe characters (e.g., replace with
underscore), URL- or RFC5987-encode a UTF-8 fallback if needed, and wrap the
final filename in double quotes per RFC 6266 (or provide both filename and
filename* for UTF-8). Update the header construction to use the
sanitized/encoded filename and a safe fallback like "skill.zip".
```python
skill_crd = kind.json
skill_crd["spec"]["description"] = metadata["description"]
skill_crd["spec"]["version"] = metadata.get("version")
skill_crd["spec"]["author"] = metadata.get("author")
skill_crd["spec"]["tags"] = metadata.get("tags", [])
skill_crd["status"]["fileSize"] = metadata["file_size"]
skill_crd["status"]["fileHash"] = metadata["file_hash"]

kind.json = skill_crd
```
SQLAlchemy may not detect in-place JSON mutations.
Mutating the kind.json dict in place (lines 335-340) may not be detected by SQLAlchemy's change tracking, causing the update to be silently skipped on commit.
Use flag_modified to explicitly mark the column as dirty:
```diff
+from sqlalchemy.orm.attributes import flag_modified
+
 # Update Kind record
 skill_crd = kind.json
 skill_crd["spec"]["description"] = metadata["description"]
 skill_crd["spec"]["version"] = metadata.get("version")
 skill_crd["spec"]["author"] = metadata.get("author")
 skill_crd["spec"]["tags"] = metadata.get("tags", [])
 skill_crd["status"]["fileSize"] = metadata["file_size"]
 skill_crd["status"]["fileHash"] = metadata["file_hash"]
 kind.json = skill_crd
+flag_modified(kind, "json")
```

Alternatively, create a new dict and assign it to ensure the column is marked as modified.
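The behavior can be demonstrated end-to-end with an in-memory SQLite database. This is a self-contained sketch assuming SQLAlchemy 1.4+ and a minimal stand-in model (not the project's actual `Kind` model):

```python
from sqlalchemy import Column, Integer, JSON, create_engine
from sqlalchemy.orm import Session, declarative_base
from sqlalchemy.orm.attributes import flag_modified

Base = declarative_base()

class Kind(Base):
    __tablename__ = "kinds"
    id = Column(Integer, primary_key=True)
    json = Column(JSON)  # plain JSON column: in-place mutations are not tracked

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    kind = Kind(id=1, json={"spec": {"description": "old"}})
    session.add(kind)
    session.commit()

    kind.json["spec"]["description"] = "new"  # in-place mutation
    flag_modified(kind, "json")  # without this, the UPDATE may be silently skipped
    session.commit()

with Session(engine) as session:
    row = session.get(Kind, 1)
    print(row.json["spec"]["description"])  # new
```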
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In backend/app/services/skill_service.py around lines 334 to 342, mutating the
kind.json dict in place may not be detected by SQLAlchemy change tracking so the
update can be skipped; to fix this either (1) import and call
sqlalchemy.orm.attributes.flag_modified(kind, "json") after performing the
in-place mutations to mark the column as dirty, or (2) build a new dict (copying
existing kind.json, apply the metadata fields) and assign kind.json = new_dict
so SQLAlchemy detects the change before commit.
```python
# Download and deploy Skills if available
await self._download_and_deploy_skills(bot_config)
```
Critical: `await` used in a non-async function causes a SyntaxError.

The `initialize()` method is synchronous (line 389: `def initialize(self) -> TaskStatus:`), but line 446 uses `await` to call `_download_and_deploy_skills()`. This causes the pipeline failure: `SyntaxError: 'await' outside async function`.

You need to either:

Option 1: Make `initialize()` async and update all callers:

```diff
-    def initialize(self) -> TaskStatus:
+    async def initialize(self) -> TaskStatus:
```

Option 2: Run the async method synchronously within `initialize()`:

```diff
         logger.info(f"Saved Claude Code config to {claude_json_path}")

         # Download and deploy Skills if available
-        await self._download_and_deploy_skills(bot_config)
+        try:
+            loop = asyncio.get_running_loop()
+            asyncio.create_task(self._download_and_deploy_skills(bot_config))
+        except RuntimeError:
+            # No running event loop
+            asyncio.run(self._download_and_deploy_skills(bot_config))
```

Option 3 (simplest if skills deployment can be deferred): Move the call to `pre_execute()` or `execute()`, which already handle async execution.
🧰 Tools
🪛 GitHub Actions: Tests
[error] 446-446: SyntaxError: 'await' outside async function
🤖 Prompt for AI Agents
In executor/agents/claude_code/claude_code_agent.py around lines 444–446, the
synchronous initialize() method is using "await" to call the async
_download_and_deploy_skills(), causing a SyntaxError; fix by one of three
approaches: (A) make initialize() async (change signature and update all callers
to await it) so awaiting is valid, (B) keep initialize() sync and call the
coroutine synchronously (e.g., run it with asyncio.run() or
loop.run_until_complete() inside initialize), or (C - simplest if deployment can
be deferred) remove the await from initialize and move the await
self._download_and_deploy_skills(bot_config) call into an existing async method
such as pre_execute() or execute() so the async call runs in an async context.
```python
            # Download and deploy each skill
            deployed_count = 0
            async with httpx.AsyncClient(timeout=30.0) as client:
                for skill_name in skills:
                    try:
                        logger.info(f"Downloading skill: {skill_name}")

                        # Get skill ID by name
                        list_url = f"{api_base_url}/api/v1/kinds/skills?skip=0&limit=100"
                        list_response = await client.get(
                            list_url,
                            headers={"Authorization": f"Bearer {token}"}
                        )

                        if list_response.status_code != 200:
                            logger.warning(
                                f"Failed to list skills: HTTP {list_response.status_code}, skipping skill {skill_name}"
                            )
                            continue

                        skills_data = list_response.json()
                        skill_items = skills_data.get("items", [])

                        # Find skill by name
                        skill_id = None
                        for item in skill_items:
                            if item.get("metadata", {}).get("name") == skill_name:
                                # Extract ID from the Kind record
                                # We need to call the backend to get the ID
                                # For now, we'll use a different approach - get by name
                                skill_id = skill_name
                                break

                        if not skill_id:
                            logger.warning(f"Skill '{skill_name}' not found in API, skipping")
                            continue

                        # Download skill ZIP package
                        # We need to modify the API to support download by name
                        # For now, skip the download implementation
                        logger.info(f"Skill '{skill_name}' found, but download by name not yet supported")
                        # TODO: Implement download by name or modify API

                    except Exception as e:
                        logger.warning(f"Failed to download skill '{skill_name}': {str(e)}, continuing")
                        continue

            if deployed_count > 0:
                logger.info(f"Deployed {deployed_count} skills to {skills_dir}")
            else:
                logger.info("No skills were deployed")
```
Incomplete skill download implementation.
The skill download logic is incomplete:

- Line 526 assigns `skill_name` to `skill_id` instead of extracting the actual ID
- Lines 533-537 show a TODO indicating download by name is not yet supported
- `deployed_count` (line 497) is never incremented, so the success message will never be shown
The method currently lists skills but doesn't download or deploy them. Either complete the implementation or document this as a known limitation:
```diff
             # Find skill by name
             skill_id = None
             for item in skill_items:
                 if item.get("metadata", {}).get("name") == skill_name:
-                    # Extract ID from the Kind record
-                    # We need to call the backend to get the ID
-                    # For now, we'll use a different approach - get by name
-                    skill_id = skill_name
+                    skill_id = item.get("id")  # Assuming 'id' is in the response
                     break

             if not skill_id:
                 logger.warning(f"Skill '{skill_name}' not found in API, skipping")
                 continue

-            # Download skill ZIP package
-            # We need to modify the API to support download by name
-            # For now, skip the download implementation
-            logger.info(f"Skill '{skill_name}' found, but download by name not yet supported")
-            # TODO: Implement download by name or modify API
+            # Download skill ZIP package by ID
+            download_url = f"{api_base_url}/api/v1/kinds/skills/{skill_id}/download"
+            download_response = await client.get(
+                download_url,
+                headers={"Authorization": f"Bearer {token}"}
+            )
+            if download_response.status_code == 200:
+                # Extract and deploy skill
+                # ... extraction logic here ...
+                deployed_count += 1
```
🧰 Tools
🪛 Ruff (0.14.5)
539-539: Do not catch blind exception: Exception
(BLE001)
540-540: Use explicit conversion flag
Replace with conversion flag
(RUF010)
🤖 Prompt for AI Agents
In executor/agents/claude_code/claude_code_agent.py around lines 496 to 546, the
skill download/deploy path is incomplete: the code sets skill_id = skill_name
instead of extracting the real Kind ID, never performs a download, and never
increments deployed_count. Fix by extracting the actual ID from the Kind record
(e.g., item.get("id") or item.get("metadata", {}).get("uid") depending on API),
call the correct download endpoint using that ID (construct
/api/v1/kinds/skills/{skill_id}/download or the backend’s ZIP endpoint), check
the download response status, stream and save the ZIP to a temp file, unzip it
into skills_dir, increment deployed_count on successful deploy, and keep
existing try/except logging for errors; ensure headers include the auth token
and handle cleanup of temp files.
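The extract-and-deploy step that the suggestion leaves as a placeholder can be sketched with the standard library. The helper name, directory layout, and traversal guard below are illustrative assumptions, not the PR's final implementation:

```python
import io
import os
import tempfile
import zipfile

def deploy_skill_zip(zip_bytes: bytes, skills_dir: str, skill_name: str) -> str:
    """Extract a downloaded skill ZIP into skills_dir/skill_name, refusing unsafe paths."""
    target = os.path.join(skills_dir, skill_name)
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            if name.startswith("/") or ".." in name:
                raise ValueError(f"Unsafe path in skill package: {name}")
        zf.extractall(target)
    return target

# Round-trip demo with a throwaway ZIP and a temp directory
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("SKILL.md", "---\ndescription: demo\n---\n")
with tempfile.TemporaryDirectory() as tmp:
    path = deploy_skill_zip(buf.getvalue(), tmp, "demo-skill")
    deployed_ok = os.path.exists(os.path.join(path, "SKILL.md"))
print(deployed_ok)  # True
```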
Summary
This PR implements a comprehensive Skill management system for Wegent, allowing users to upload, manage, and deploy Claude Code Skills ZIP packages. Skills are extensions that enhance Claude Code capabilities, similar to MCP servers.
Key Features
- Upload Skill ZIP packages containing `SKILL.md` through the web UI
- Skill metadata stored as CRD records in the `kinds` table
- Binary data stored in a dedicated `skill_binaries` table with SHA256 integrity verification

Implementation Details
Backend
- `SkillBinary` model for binary storage with CASCADE delete
- `Skill`, `SkillSpec`, `SkillStatus`, `SkillList` CRD schemas
- `SkillService` with ZIP validation, YAML frontmatter parsing, and reference checking
- `POST /api/v1/kinds/skills/upload` - Upload Skill
- `GET /api/v1/kinds/skills` - List Skills
- `GET /api/v1/kinds/skills/{id}` - Get Skill details
- `GET /api/v1/kinds/skills/{id}/download` - Download ZIP
- `PUT /api/v1/kinds/skills/{id}` - Update Skill
- `DELETE /api/v1/kinds/skills/{id}` - Delete Skill

Executor
- Skills automatically downloaded and deployed to `~/.claude/skills/`

Database
- Migration `a1b2c3d4e5f6` adds the `skill_binaries` table
- Columns: `kind_id` (FK), `binary_data` (LONGBLOB), `file_size`, `file_hash`, `created_at`
- Unique index on `kind_id`, CASCADE delete on FK
- `SKILL.md` frontmatter must contain `description`; optional: `version`, `author`, `tags`

Technical Decisions
- Existing `~/.claude/skills/` is removed before each deployment to avoid conflicts

Migration Required
Yes. Run database migration before deploying:
```shell
cd backend
alembic upgrade head
```