diff --git a/SECURITY.md b/SECURITY.md
index 30a1a8a..1716638 100644
--- a/SECURITY.md
+++ b/SECURITY.md
@@ -1,9 +1,18 @@
# Security Policy
+For discovery, see also: `https://flyingrobots.dev/.well-known/security.txt`.
+
+Privacy: We store vulnerability reports in restricted systems, retain PII only as needed for communication, and purge within 180 days after closure.
+
## Supported Versions
-Hear me, contributors and maintainers: only ze most current score shall be defended from discord.
-All other editions? Archived in ze library, never to be patched again.
+Only ze **latest stable major release** receives ze vigilance of BunBun's keen ears und my quill (currently: 1.x).
+
+Vhen a new major symphony is released, ze previous major becomes silent forever: no more security patches!
+
+Vithin ze supported major, only ze latest minor movement gets attention (no N-1 minors).
+
+Ve vill announce zese EOL dates in RELEASES.md ze moment a new major composition debuts.
+
| Version | Supported |
| ------- | ------------------ |
diff --git a/docs/code-reviews/PR3/25cb2bb4b000cad26fd5fcc02cb5186775827c40.md b/docs/code-reviews/PR3/25cb2bb4b000cad26fd5fcc02cb5186775827c40.md
new file mode 100644
index 0000000..e9d1c7c
--- /dev/null
+++ b/docs/code-reviews/PR3/25cb2bb4b000cad26fd5fcc02cb5186775827c40.md
@@ -0,0 +1,116 @@
+---
+title: 25cb2bb4b000cad26fd5fcc02cb5186775827c40.md
+description: Preserved review artifacts and rationale.
+audience: [contributors]
+domain: [quality]
+tags: [review]
+status: archive
+---
+
+# Code Review Feedback
+
+| Date | Agent | SHA | Branch | PR |
+|------|-------|-----|--------|----|
+| 2025-11-06 | CodeRabbit (and reviewers) | `25cb2bb4b000cad26fd5fcc02cb5186775827c40` | [chore/security](https://github.com/flyingrobots/draft-punks/tree/chore/security "flyingrobots/draft-punks:chore/security") | [PR#3](https://github.com/flyingrobots/draft-punks/pull/3) |
+
+## CODE REVIEW FEEDBACK
+
+### General comment – coderabbitai[bot]
+
+```text
+
+
+
+> [!NOTE]
+> Currently processing new changes in this PR. This may take a few minutes, please wait...
+>
+>
+> 📥 Commits
+>
+> Reviewing files that changed from the base of the PR and between 34ec9acc1dab75b82c6065490e1976eb338ae304 and 25cb2bb4b000cad26fd5fcc02cb5186775827c40.
+>
+>
+>
+>
+> ⛔ Files ignored due to path filters (1)
+>
+> * `SECURITY.md` is excluded by `!*.md`
+>
+>
+>
+>
+> 📒 Files selected for processing (2)
+>
+> * `tools/review/get_comments_markdown.py` (1 hunks)
+> * `tools/review/seed_feedback_from_github.py` (2 hunks)
+>
+>
+>
+> ```ascii
+> ____________________________________
+> < Zero-day? Zero chance on my watch. >
+> ------------------------------------
+> \
+> \ \
+> \ /\
+> ( )
+> .( o ).
+> ```
+
+
+
+
+> [!TIP]
+>
+> CodeRabbit can generate a title for your PR based on the changes with custom instructions.
+>
+> Add the `reviews.auto_title_instructions` setting in your project's settings in CodeRabbit to generate a title for your PR based on the changes in the PR with custom instructions.
+>
+>
+
+
+
+
+
+
+✨ Finishing touches
+
+- [ ] 📝 Generate docstrings
+
+🧪 Generate unit tests (beta)
+
+- [ ] Create PR with unit tests
+- [ ] Post copyable unit tests in a comment
+- [ ] Commit unit tests in branch `chore/security`
+
+
+
+
+
+
+
+
+
+---
+
+Thanks for using [CodeRabbit](https://coderabbit.ai?utm_source=oss&utm_medium=github&utm_campaign=flyingrobots/draft-punks&utm_content=3)! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
+
+
+❤️ Share
+
+- [X](https://twitter.com/intent/tweet?text=I%20just%20used%20%40coderabbitai%20for%20my%20code%20review%2C%20and%20it%27s%20fantastic%21%20It%27s%20free%20for%20OSS%20and%20offers%20a%20free%20trial%20for%20the%20proprietary%20code.%20Check%20it%20out%3A&url=https%3A//coderabbit.ai)
+- [Mastodon](https://mastodon.social/share?text=I%20just%20used%20%40coderabbitai%20for%20my%20code%20review%2C%20and%20it%27s%20fantastic%21%20It%27s%20free%20for%20OSS%20and%20offers%20a%20free%20trial%20for%20the%20proprietary%20code.%20Check%20it%20out%3A%20https%3A%2F%2Fcoderabbit.ai)
+- [Reddit](https://www.reddit.com/submit?title=Great%20tool%20for%20code%20review%20-%20CodeRabbit&text=I%20just%20used%20CodeRabbit%20for%20my%20code%20review%2C%20and%20it%27s%20fantastic%21%20It%27s%20free%20for%20OSS%20and%20offers%20a%20free%20trial%20for%20proprietary%20code.%20Check%20it%20out%3A%20https%3A//coderabbit.ai)
+- [LinkedIn](https://www.linkedin.com/sharing/share-offsite/?url=https%3A%2F%2Fcoderabbit.ai&mini=true&title=Great%20tool%20for%20code%20review%20-%20CodeRabbit&summary=I%20just%20used%20CodeRabbit%20for%20my%20code%20review%2C%20and%20it%27s%20fantastic%21%20It%27s%20free%20for%20OSS%20and%20offers%20a%20free%20trial%20for%20proprietary%20code)
+
+
+
+Comment `@coderabbitai help` to get the list of available commands and usage tips.
+
+
+```
+
+_Meta_:
+
+{response}
+
diff --git a/docs/code-reviews/PR3/f775e24c1d3578c42f823710cd23295d4d90a652.md b/docs/code-reviews/PR3/f775e24c1d3578c42f823710cd23295d4d90a652.md
new file mode 100644
index 0000000..a47da33
--- /dev/null
+++ b/docs/code-reviews/PR3/f775e24c1d3578c42f823710cd23295d4d90a652.md
@@ -0,0 +1,678 @@
+---
+title: f775e24c1d3578c42f823710cd23295d4d90a652.md
+description: Preserved review artifacts and rationale.
+audience: [contributors]
+domain: [quality]
+tags: [review]
+status: archive
+---
+
+# Code Review Feedback
+
+| Date | Agent | SHA | Branch | PR |
+|------|-------|-----|--------|----|
+| 2025-11-06 | CodeRabbit (and reviewers) | `f775e24c1d3578c42f823710cd23295d4d90a652` | [chore/security](https://github.com/flyingrobots/draft-punks/tree/chore/security "flyingrobots/draft-punks:chore/security") | [PR#3](https://github.com/flyingrobots/draft-punks/pull/3) |
+
+## CODE REVIEW FEEDBACK
+
+### docs/code-reviews/PR3/25cb2bb4b000cad26fd5fcc02cb5186775827c40.md:113 β coderabbitai[bot]
+
+```text
+_⚠️ Potential issue_ | _🟡 Minor_
+
+**Format the bare URL as a proper Markdown link.**
+
+The bare URL violates markdown best practices and accessibility guidelines. Wrap it in proper link syntax for consistency with the rest of the document.
+
+
+
+Apply this diff:
+
+```diff
+-_Meta_: https://github.com/flyingrobots/draft-punks/pull/3#issuecomment-3495262753
++_Meta_:
+```
+
+Based on static analysis.
+
+
+
+
+📝 Committable suggestion
+
+> ‼️ **IMPORTANT**
+> Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
+
+```suggestion
+_Meta_:
+```
+
+
+
+
+
+
+🧰 Tools
+
+
+🪛 markdownlint-cli2 (0.18.1)
+
+113-113: Bare URL used
+
+(MD034, no-bare-urls)
+
+
+
+
+
+
+🤖 Prompt for AI Agents
+
+```
+In docs/code-reviews/PR3/25cb2bb4b000cad26fd5fcc02cb518677c40.md around line
+113, a bare URL is present; replace it with proper Markdown link syntax by
+wrapping the URL in [anchor text](URL) or using the URL itself as the link
+target with descriptive text (e.g., the PR or comment title) so the document
+follows Markdown/linking best practices and accessibility guidelines.
+```
+
+
+
+
+```
+
+_Meta_: https://github.com/flyingrobots/draft-punks/pull/3#discussion_r2497632871
+
+{response}
+
+### tools/review/get_comments_markdown.py:31 – coderabbitai[bot]
+
+```text
+_🧹 Nitpick_ | _🔵 Trivial_
+
+**Modernize type hints to use built-in generics.**
+
+With `from __future__ import annotations` already imported, you can use lowercase `dict`, `list`, and `tuple` directly instead of the deprecated `typing` versions per PEP 585 (Python 3.9+).
+
+
+
+Apply this diff:
+
+```diff
+-from typing import Dict, List, Tuple
++from typing import TYPE_CHECKING
+```
+
+Then update all type hints throughout the file:
+- `Dict[str, str]` β `dict[str, str]`
+- `List[dict]` β `list[dict]`
+- `Tuple[str, str]` β `tuple[str, str]`
+
+Based on static analysis.
+
+
+> Committable suggestion skipped: line range outside the PR's diff.
+
+
+🧰 Tools
+
+
+🪛 Ruff (0.14.3)
+
+31-31: `typing.Dict` is deprecated, use `dict` instead
+
+(UP035)
+
+---
+
+31-31: `typing.List` is deprecated, use `list` instead
+
+(UP035)
+
+---
+
+31-31: `typing.Tuple` is deprecated, use `tuple` instead
+
+(UP035)
+
+
+
+
+
+
+🤖 Prompt for AI Agents
+
+```
+In tools/review/get_comments_markdown.py around line 31, the file imports typing
+generics (Dict, List, Tuple) but the project uses from __future__ import
+annotations and should use built-in generics per PEP 585; replace the import of
+Dict, List, Tuple with built-in types (or remove those names from the typing
+import) and update all annotations in the file: change Dict[str, str] to
+dict[str, str], List[dict] to list[dict], and Tuple[str, str] to tuple[str,
+str], ensuring any remaining typing usage is either removed or replaced with the
+appropriate lowercase built-in generic types.
+```
+
+
+
+
+```
+
+_Meta_: https://github.com/flyingrobots/draft-punks/pull/3#discussion_r2497632880
+
+{response}
+
+### tools/review/get_comments_markdown.py:237 – coderabbitai[bot]
+
+```text
+_⚠️ Potential issue_ | _🟡 Minor_
+
+**Remove unused variable or prefix with underscore.**
+
+The `known` variable is assigned but never used. Per convention, prefix unused variables with underscore to indicate intentional non-use.
+
+
+
+Apply this diff:
+
+```diff
+- known, _ = ap.parse_known_args([])
++ _, _ = ap.parse_known_args([])
+```
+
+Based on static analysis.
+
+
+
+
+📝 Committable suggestion
+
+> ‼️ **IMPORTANT**
+> Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
+
+```suggestion
+ _, _ = ap.parse_known_args([])
+```
+
+
+
+
+
+
+🧰 Tools
+
+
+🪛 Ruff (0.14.3)
+
+237-237: Unpacked variable `known` is never used
+
+Prefix it with an underscore or any other dummy variable pattern
+
+(RUF059)
+
+
+
+
+
+
+🤖 Prompt for AI Agents
+
+```
+In tools/review/get_comments_markdown.py around line 237, the variable `known`
+is assigned from ap.parse_known_args([]) but never used; either remove the
+unused assignment or prefix the variable name with an underscore (e.g.,
+`_known`) to signal intentional non-use. Update the line accordingly and run
+tests/lint to ensure no unused-variable warnings remain.
+```
+
+
+
+
+```
+
+_Meta_: https://github.com/flyingrobots/draft-punks/pull/3#discussion_r2497632883
+
+{response}
+
+### tools/review/get_comments_markdown.py:267 – coderabbitai[bot]
+
+```text
+_⚠️ Potential issue_ | _🟡 Minor_
+
+**Simplify the owner/repo precedence logic and remove unused variable.**
+
+The precedence determination is overly complex with nested conditionals. Line 257 assigns `repo_name_arg` but never uses it (F841). The conditional logic on lines 263-266 is convoluted.
+
+
+
+
+Apply this diff to remove the unused variable and simplify:
+
+```diff
+ remote_arg = args.remote or kv.get("remote")
+ slug_arg = args.slug or kv.get("slug")
+ repo_kv = kv.get("repo")
+ owner_arg = args.owner or kv.get("owner")
+- repo_name_arg = args.repo or (kv.get("repo_name") if repo_kv and "/" not in repo_kv else None)
+
+ if remote_arg:
+ owner, repo = parse_remote_url(remote_arg)
+ elif slug_arg:
+ owner, repo = parse_slug(slug_arg)
+ elif repo_kv and "/" in repo_kv:
+ owner, repo = parse_slug(repo_kv)
+- elif owner_arg and (args.repo or kv.get("repo")) and "/" not in (args.repo or kv.get("repo")): # owner/repo split
+- owner, repo = owner_arg, (args.repo or kv.get("repo"))
++ elif owner_arg:
++ repo_arg = args.repo or kv.get("repo")
++ if repo_arg and "/" not in repo_arg:
++ owner, repo = owner_arg, repo_arg
+ else:
+```
+
+Based on static analysis.
+
+
+🧰 Tools
+
+
+🪛 Ruff (0.14.3)
+
+257-257: Local variable `repo_name_arg` is assigned to but never used
+
+Remove assignment to unused variable `repo_name_arg`
+
+(F841)
+
+
+
+
+
+
+🤖 Prompt for AI Agents
+
+```
+In tools/review/get_comments_markdown.py around lines 253 to 267, remove the
+unused repo_name_arg assignment and simplify the owner/repo precedence: check
+remote_arg first (parse_remote_url), then slug_arg (parse_slug), then if repo_kv
+contains "/" parse_slug(repo_kv), then if owner_arg is provided and the repo
+value (args.repo or kv.get("repo")) exists and does not contain a "/" use
+owner_arg and that repo value; otherwise fall through to the existing else
+branch. Drop the unused variable and collapse the nested conditionals into this
+clear linear precedence.
+```
+
+
+
+
+```
+
+_Meta_: https://github.com/flyingrobots/draft-punks/pull/3#discussion_r2497632886
+
+{response}
+
+### tools/review/seed_feedback_from_github.py:46 – coderabbitai[bot]
+
+```text
+_🧹 Nitpick_ | _🔵 Trivial_
+
+**Improve exception chaining for better debugging.**
+
+The error handling discards the original exception context. Use `raise ... from e` to preserve the traceback chain per PEP 3134, which aids debugging.
+
+
+
+Apply this diff:
+
+```diff
+ except urllib.error.HTTPError as e:
+ msg = e.read().decode("utf-8", errors="ignore")
+- raise SystemExit(f"GitHub API error {e.code} on {url}: {msg}")
++ raise SystemExit(f"GitHub API error {e.code} on {url}: {msg}") from e
+```
+
+Based on static analysis.
+
+
+
+
+📝 Committable suggestion
+
+> ‼️ **IMPORTANT**
+> Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
+
+```suggestion
+ except urllib.error.HTTPError as e:
+ msg = e.read().decode("utf-8", errors="ignore")
+ raise SystemExit(f"GitHub API error {e.code} on {url}: {msg}") from e
+```
+
+
+
+
+
+
+🧰 Tools
+
+
+🪛 Ruff (0.14.3)
+
+46-46: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling
+
+(B904)
+
+---
+
+46-46: Avoid specifying long messages outside the exception class
+
+(TRY003)
+
+
+
+
+
+
+🤖 Prompt for AI Agents
+
+```
+In tools/review/seed_feedback_from_github.py around lines 44 to 46, the except
+block for urllib.error.HTTPError discards the original exception context; modify
+the raise to preserve exception chaining by re-raising the SystemExit from the
+caught exception (use "raise SystemExit(...) from e") so the original traceback
+is retained for debugging.
+```
+
+
+
+
+```
+
+_Meta_: https://github.com/flyingrobots/draft-punks/pull/3#discussion_r2497632892
+
+{response}
+
+### tools/review/seed_feedback_from_github.py:69 – coderabbitai[bot]
+
+```text
+_⚠️ Potential issue_ | _🟠 Major_
+
+**CRITICAL: Eliminate code duplication across modules.**
+
+This `parse_link_header` function is duplicated in `tools/review/get_comments_markdown.py` (lines 56-70 as `_parse_link_header`). Both `gh_get_all` functions (this file lines 72-109 and get_comments_markdown.py lines 73-107) are also nearly identical. This violates DRY principles and creates a maintenance nightmare.
+
+
+
+
+**Solution: Extract shared utilities into a common module.**
+
+Create `tools/review/github_api.py`:
+
+```python
+#!/usr/bin/env python3
+"""Shared GitHub API utilities for review tooling."""
+
+import json
+import urllib.error
+import urllib.parse as parse
+import urllib.request
+from typing import Dict, List, Tuple
+
+
+API = "https://api.github.com"
+
+
+def http_get(url: str, token: str) -> Tuple[object, Dict[str, str]]:
+ """Execute GET request and return (json_data, headers)."""
+ req = urllib.request.Request(url)
+ req.add_header("Accept", "application/vnd.github+json")
+ if token:
+ req.add_header("Authorization", f"Bearer {token}")
+ try:
+ with urllib.request.urlopen(req, timeout=30) as r:
+ data = json.loads(r.read().decode("utf-8"))
+ headers = {k.lower(): v for k, v in r.headers.items()}
+ return data, headers
+ except urllib.error.HTTPError as e:
+ msg = e.read().decode("utf-8", errors="ignore")
+ raise SystemExit(f"GitHub API error {e.code} on {url}: {msg}") from e
+ except urllib.error.URLError as e:
+ raise SystemExit(f"Network error contacting GitHub: {e}") from e
+
+
+def parse_link_header(link_header: str) -> Dict[str, str]:
+ """Parse GitHub Link header into {rel: url} dict."""
+ rels = {}
+ if not link_header:
+ return rels
+ for part in link_header.split(","):
+ part = part.strip()
+ if not part or ";" not in part:
+ continue
+ url_part, rel_part = part.split(";", 1)
+ url = url_part.strip().strip("<>")
+ if "rel=" in rel_part:
+ rel = rel_part.split("rel=", 1)[1].strip().strip('"')
+ rels[rel] = url
+ return rels
+
+
+def gh_get_all(path: str, token: str) -> List[dict]:
+ """GET and paginate GitHub API, returning all items.
+
+ Adds per_page=100 and follows Link rel="next".
+ """
+ url = API + path
+ parsed = parse.urlparse(url)
+ qs = parse.parse_qsl(parsed.query, keep_blank_values=True)
+ qs = [(k, v) for (k, v) in qs if k not in ("per_page", "page")]
+ qs.append(("per_page", "100"))
+ url = parse.urlunparse(parsed._replace(query=parse.urlencode(qs)))
+
+ items: List[dict] = []
+ while True:
+ data, headers = http_get(url, token)
+ if isinstance(data, list):
+ items.extend(data)
+ else:
+ for key in ("items", "nodes"):
+ if key in data and isinstance(data[key], list):
+ items.extend(data[key])
+ break
+ else:
+ raise SystemExit("Unexpected GitHub response shape during pagination")
+
+ links = parse_link_header(headers.get("link", ""))
+ if "next" in links:
+ url = links["next"]
+ continue
+ break
+ return items
+
+
+def normalize_text(s: str) -> str:
+ """Normalize line endings and whitespace."""
+ return (s or "").replace("\r\n", "\n").strip()
+```
+
+Then update both scripts to import from this shared module. Would you like me to generate those refactored imports?
+
+
+```
+
+_Meta_: https://github.com/flyingrobots/draft-punks/pull/3#discussion_r2497632896
+
+{response}
+
+### General comment – coderabbitai[bot]
+
+```text
+
+
+
+> [!NOTE]
+> Currently processing new changes in this PR. This may take a few minutes, please wait...
+>
+>
+> 📥 Commits
+>
+> Reviewing files that changed from the base of the PR and between f5312b43b287eaa39375481e5569f25e87acf92f and f775e24c1d3578c42f823710cd23295d4d90a652.
+>
+>
+>
+>
+> 📒 Files selected for processing (2)
+>
+> * `docs/code-reviews/PR3/25cb2bb4b000cad26fd5fcc02cb5186775827c40.md` (1 hunks)
+> * `tools/review/get_comments_markdown.py` (1 hunks)
+>
+>
+>
+> ```ascii
+> _______________________________________________
+> < Optimization hinders evolution. - Alan Perlis >
+> -----------------------------------------------
+> \
+> \ \
+> \ /\
+> ( )
+> .( o ).
+> ```
+
+
+
+
+> [!TIP]
+>
+> CodeRabbit can scan for known vulnerabilities in your dependencies using OSV Scanner.
+>
+> OSV Scanner will automatically detect and report security vulnerabilities in your project's dependencies. No additional configuration is required.
+>
+>
+
+
+
+
+
+
+## Summary by CodeRabbit
+
+* **Documentation**
+ * Added archived code review artifacts and documentation files for comprehensive historical reference and compliance tracking.
+
+* **Chores**
+ * Introduced new utility to extract GitHub Pull Request comments and export as formatted Markdown documentation files.
+ * Enhanced GitHub API integration with automatic pagination support to reliably retrieve complete comment datasets from pull requests.
+
+
+## Walkthrough
+
+This pull request adds tooling infrastructure for automated code review artifact management. It introduces a new documentation file containing an archived code review, a fresh Python script for extracting GitHub PR comments as Markdown output, and refactors an existing script to support GitHub API pagination via Link headers.
+
+## Changes
+
+| Cohort / File(s) | Summary |
+|---|---|
+| **Documentation Archive**<br>`docs/code-reviews/PR3/25cb2bb4b000cad26fd5fcc02cb5186775827c40.md` | New archived code review artifact containing metadata, auto-generated feedback, and review notes for PR `#3` |
+| **GitHub Comment Extraction Tooling**<br>`tools/review/get_comments_markdown.py` | New script to fetch all review and issue comments from GitHub PRs, paginate results, normalize data, and output as formatted Markdown with timestamps and author info |
+| **GitHub API Pagination Enhancement**<br>`tools/review/seed_feedback_from_github.py` | Refactored to introduce `gh_get_with_headers()`, `parse_link_header()`, and `gh_get_all()` for robust multi-page GitHub API pagination; replaced single-page calls with paginated variants |
+
+## Sequence Diagram
+
+```mermaid
+sequenceDiagram
+ actor User
+ participant get_comments_markdown.py as Script
+ participant GitHub API
+ participant File System
+
+ User->>Script: Invoke with PR number & token
+ Script->>Script: Parse arguments & resolve repo
+ Script->>GitHub API: Fetch review comments (paginated)
+ GitHub API-->>Script: Comments page 1 + Link header
+ Script->>GitHub API: Follow Link header (next page)
+ GitHub API-->>Script: Comments page N
+ Script->>GitHub API: Fetch issue comments (paginated)
+ GitHub API-->>Script: Issue comments (all pages)
+ Script->>Script: Normalize & sort chronologically
+ Script->>Script: Format as Markdown
+ alt --out flag provided
+ Script->>File System: Write Markdown
+ File System-->>User: File saved
+ else stdout
+ Script-->>User: Print Markdown
+ end
+```
+
+## Estimated code review effort
+
+🎯 3 (Moderate) | ⏱️ ~25 minutes
+
+- **`get_comments_markdown.py`**: Requires careful scrutiny of argument parsing logic, GitHub API interaction patterns, error handling edge cases (malformed URLs, missing tokens, API rate limits), and Markdown formatting correctness. Validate that all pagination scenarios are covered and that comment normalization doesn't lose critical metadata.
+- **`seed_feedback_from_github.py`**: Verify Link header parsing correctness (RFC 5988 compliance), ensure per_page=100 parameter doesn't cause unexpected behavior, and confirm that refactored `gh_get()` signature change doesn't break existing callers elsewhere in the codebase.
+- **Documentation file**: Minimal review; verify metadata fields are consistent with repository conventions.
+
+## Poem
+
+> 🤖 Comments drift through GitHub's API stream,
+> Paginated dreams collected into one,
+> Link headers guide us page by page,
+> Markdown blooms where chaos was before –
+> *Review artifacts, automated and pristine.* ✨
+
+
+
+
+
+## Pre-merge checks and finishing touches
+
+❌ Failed checks (1 warning, 1 inconclusive)
+
+| Check name | Status | Explanation | Resolution |
+| :----------------: | :------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Docstring Coverage | ⚠️ Warning | Docstring coverage is 33.33% which is insufficient. The required threshold is 80.00%. | You can run `@coderabbitai generate docstrings` to improve docstring coverage. |
+| Title check | ❓ Inconclusive | The title 'chore/security' is vague and generic, using non-descriptive branch naming conventions rather than conveying meaningful information about the actual changes. | Revise the title to clearly describe the primary change, such as 'Add pagination to GitHub API seeding and create comments dumper utility' or similar. |
+
+
+
+✅ Passed checks (1 passed)
+
+| Check name | Status | Explanation |
+| :---------------: | :------- | :---------------------------------------------------------------------------------------------------------------------------------------------- |
+| Description check | ✅ Passed | The description lists multiple objectives (SECURITY.md, code reviews, pagination, comments dumper) that correspond to actual changes in the PR. |
+
+
+
+
+
+
+
+---
+
+Thanks for using [CodeRabbit](https://coderabbit.ai?utm_source=oss&utm_medium=github&utm_campaign=flyingrobots/draft-punks&utm_content=3)! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
+
+
+❤️ Share
+
+- [X](https://twitter.com/intent/tweet?text=I%20just%20used%20%40coderabbitai%20for%20my%20code%20review%2C%20and%20it%27s%20fantastic%21%20It%27s%20free%20for%20OSS%20and%20offers%20a%20free%20trial%20for%20the%20proprietary%20code.%20Check%20it%20out%3A&url=https%3A//coderabbit.ai)
+- [Mastodon](https://mastodon.social/share?text=I%20just%20used%20%40coderabbitai%20for%20my%20code%20review%2C%20and%20it%27s%20fantastic%21%20It%27s%20free%20for%20OSS%20and%20offers%20a%20free%20trial%20for%20the%20proprietary%20code.%20Check%20it%20out%3A%20https%3A%2F%2Fcoderabbit.ai)
+- [Reddit](https://www.reddit.com/submit?title=Great%20tool%20for%20code%20review%20-%20CodeRabbit&text=I%20just%20used%20CodeRabbit%20for%20my%20code%20review%2C%20and%20it%27s%20fantastic%21%20It%27s%20free%20for%20OSS%20and%20offers%20a%20free%20trial%20for%20proprietary%20code.%20Check%20it%20out%3A%20https%3A//coderabbit.ai)
+- [LinkedIn](https://www.linkedin.com/sharing/share-offsite/?url=https%3A%2F%2Fcoderabbit.ai&mini=true&title=Great%20tool%20for%20code%20review%20-%20CodeRabbit&summary=I%20just%20used%20CodeRabbit%20for%20my%20code%20review%2C%20and%20it%27s%20fantastic%21%20It%27s%20free%20for%20OSS%20and%20offers%20a%20free%20trial%20for%20proprietary%20code)
+
+
+
+Comment `@coderabbitai help` to get the list of available commands and usage tips.
+
+
+
+
+
+
+
+
+
+```
+
+_Meta_: https://github.com/flyingrobots/draft-punks/pull/3#issuecomment-3495262753
+
+{response}
+
diff --git a/tools/review/get_comments_markdown.py b/tools/review/get_comments_markdown.py
new file mode 100755
index 0000000..5b425ac
--- /dev/null
+++ b/tools/review/get_comments_markdown.py
@@ -0,0 +1,296 @@
+#!/usr/bin/env python3
+"""
+Print all comments from a GitHub Pull Request as Markdown.
+
+Usage examples:
+
+ # Minimal: infer owner/repo from local git remote
+ GITHUB_TOKEN=... python3 tools/review/get_comments_markdown.py pr=123
+
+ # Or specify repo explicitly
+ GITHUB_TOKEN=... python3 tools/review/get_comments_markdown.py --owner octo --repo hello --pr 123
+ GITHUB_TOKEN=... python3 tools/review/get_comments_markdown.py --repo-slug octo/hello --pr 123
+ GITHUB_TOKEN=... python3 tools/review/get_comments_markdown.py --remote https://github.com/octo/hello.git --pr 123
+ GITHUB_TOKEN=... python3 tools/review/get_comments_markdown.py pr=123 repo=octo/hello
+ GITHUB_TOKEN=... python3 tools/review/get_comments_markdown.py pr=123 remote=git@github.com:octo/hello.git
+
+ # Write to a file
+ GITHUB_TOKEN=... python3 tools/review/get_comments_markdown.py pr=123 --out comments.md
+
+The script paginates over review comments and issue comments so very large PRs
+are fully captured. It does not filter by author.
+"""
+from __future__ import annotations
+
+import argparse
+import json
+import os
+import subprocess
+import sys
+import textwrap
+from typing import Dict, List, Tuple
+from urllib import error, parse, request
+
+
+API = "https://api.github.com"
+
+
+def _http_get(url: str, token: str) -> Tuple[object, Dict[str, str]]:
+ headers = {"Accept": "application/vnd.github+json"}
+ if token:
+ headers["Authorization"] = f"Bearer {token}"
+ req = request.Request(url, headers=headers, method="GET")
+ try:
+ with request.urlopen(req, timeout=30) as resp:
+ body = resp.read().decode("utf-8")
+ data = json.loads(body)
+ hdrs = {k.lower(): v for k, v in resp.headers.items()}
+ return data, hdrs
+ except error.HTTPError as exc:
+ details = exc.read().decode("utf-8", "replace")
+ raise SystemExit(f"GitHub API error {exc.code} on {url}: {details}") from exc
+ except error.URLError as exc:
+ raise SystemExit(f"Network error contacting GitHub: {exc}") from exc
+
+
+def _parse_link_header(value: str) -> Dict[str, str]:
+ out: Dict[str, str] = {}
+ if not value:
+ return out
+ for part in value.split(","):
+ part = part.strip()
+ if not part or ";" not in part:
+ continue
+ url_part, meta_part = part.split(";", 1)
+ url = url_part.strip().strip("<>")
+ meta = {k.strip(): v.strip().strip('"') for k, v in (m.split("=", 1) for m in meta_part.split(";") if "=" in m)}
+ rel = meta.get("rel")
+ if rel:
+ out[rel] = url
+ return out
+
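+# Example (hypothetical header value; real Link headers may carry several
+# comma-separated relations such as rel="next" and rel="last"):
+#   _parse_link_header('<https://api.github.com/repos/octo/hello/pulls/123/comments?page=2>; rel="next"')
+#   returns {"next": "https://api.github.com/repos/octo/hello/pulls/123/comments?page=2"}
+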
+
+def gh_get_all(path: str, token: str) -> List[dict]:
+ """GET and paginate a REST collection, returning all items.
+
+ Adds per_page=100 and follows Link rel="next".
+ """
+ # Build initial URL with per_page=100
+ url = API + path
+ parsed = parse.urlparse(url)
+ q = parse.parse_qsl(parsed.query, keep_blank_values=True)
+ q = [(k, v) for (k, v) in q if k not in ("per_page", "page")]
+ q.append(("per_page", "100"))
+ url = parse.urlunparse(parsed._replace(query=parse.urlencode(q)))
+
+ items: List[dict] = []
+ while True:
+ data, headers = _http_get(url, token)
+ if isinstance(data, list):
+ items.extend(data)
+ else:
+ # Some APIs respond with { items: [...] }
+ seq = None
+ for key in ("items", "nodes"):
+ if key in data and isinstance(data[key], list):
+ seq = data[key]
+ break
+ if seq is None:
+ raise SystemExit("Unexpected response shape from GitHub (no list found)")
+ items.extend(seq)
+
+ links = _parse_link_header(headers.get("link", ""))
+ if "next" in links:
+ url = links["next"]
+ continue
+ break
+ return items
+
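+# Example (hypothetical owner/repo): collect every review comment on a PR,
+# following Link rel="next" pages automatically:
+#   token = os.environ.get("GITHUB_TOKEN", "")
+#   comments = gh_get_all("/repos/octo/hello/pulls/123/comments", token)
+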
+
+def inside_git_repo() -> bool:
+ try:
+ out = subprocess.check_output([
+ "git",
+ "rev-parse",
+ "--is-inside-work-tree",
+ ], text=True, stderr=subprocess.DEVNULL).strip()
+ return out == "true"
+ except Exception:
+ return False
+
+
+def parse_remote_url(remote: str) -> Tuple[str, str]:
+ # Support common formats
+ if remote.startswith("git@github.com:"):
+ path = remote.split(":", 1)[1]
+ elif remote.startswith("https://github.com/"):
+ path = remote.split("https://github.com/", 1)[1]
+ else:
+        # Try to treat whatever comes after the last '/' as the path
+ if "://" in remote or "@" in remote:
+ # Unknown scheme/host; best-effort
+ path = remote.split("/", 3)[-1]
+ else:
+ path = remote
+ if path.endswith(".git"):
+ path = path[:-4]
+ if "/" not in path:
+ raise SystemExit(f"Unrecognized Git remote URL: {remote}")
+ owner, repo = path.split("/", 1)
+ return owner, repo
+
+
+def owner_repo_from_origin() -> Tuple[str, str]:
+ try:
+ remote = subprocess.check_output(
+ ["git", "config", "--get", "remote.origin.url"],
+ text=True,
+ stderr=subprocess.DEVNULL,
+ ).strip()
+ except Exception:
+ remote = ""
+ if not remote:
+ raise SystemExit("No origin remote found. Pass --repo-slug or --owner/--repo.")
+ return parse_remote_url(remote)
+
+
+def parse_slug(slug: str) -> Tuple[str, str]:
+ if not slug or "/" not in slug:
+ raise SystemExit("Repo slug must be in the form owner/repo")
+ owner, repo = slug.split("/", 1)
+ if not owner or not repo:
+ raise SystemExit("Repo slug must be in the form owner/repo")
+ return owner, repo
+
+
+def normalize(s: str) -> str:
+ return (s or "").replace("\r\n", "\n").strip()
+
+
+def collect_comments(token: str, owner: str, repo: str, pr_number: int) -> List[dict]:
+ # Review comments (inline on diffs)
+ rev = gh_get_all(f"/repos/{owner}/{repo}/pulls/{pr_number}/comments", token)
+ # Issue comments (PR discussion thread)
+ iss = gh_get_all(f"/repos/{owner}/{repo}/issues/{pr_number}/comments", token)
+
+ out: List[dict] = []
+ for c in rev:
+ out.append(
+ {
+ "created_at": c.get("created_at"),
+ "author": (c.get("user") or {}).get("login", "unknown"),
+ "body": c.get("body", ""),
+ "url": c.get("html_url", ""),
+ "type": "review_comment",
+ }
+ )
+ for c in iss:
+ out.append(
+ {
+ "created_at": c.get("created_at"),
+ "author": (c.get("user") or {}).get("login", "unknown"),
+ "body": c.get("body", ""),
+ "url": c.get("html_url", ""),
+ "type": "issue_comment",
+ }
+ )
+ # Sort chronologically by created_at (fallback to URL as tie-breaker)
+ out.sort(key=lambda d: (d.get("created_at") or "", d.get("url") or ""))
+ return out
+
+
+def format_markdown(comments: List[dict]) -> str:
+ lines: List[str] = []
+ for i, c in enumerate(comments, 1):
+ author = c.get("author", "unknown")
+ body = normalize(c.get("body", ""))
+ lines.append(f"## Comment {i} by {author}\n")
+ if body:
+ lines.append(body + "\n")
+ else:
+ lines.append("(no content)\n")
+ # Ensure a blank line between comments
+ lines.append("")
+ return "\n".join(lines).rstrip() + "\n"
+
+
+def parse_kv_args(argv: List[str]) -> Dict[str, str]:
+ """Support simple key=value args like pr=123 in addition to flags."""
+ kv = {}
+ for a in list(argv):
+ if "=" in a and not a.startswith("--"):
+ k, v = a.split("=", 1)
+ kv[k.strip()] = v.strip()
+ return kv
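+
+# Example (hypothetical argv): parse_kv_args(["pr=123", "--out", "x.md"])
+# returns {"pr": "123"}; flag-style tokens like "--out" are left to argparse.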
+
+
+def main(argv: List[str]) -> int:
+ # Basic flags
+ ap = argparse.ArgumentParser(description="Print PR comments as Markdown")
+ ap.add_argument("--owner", help="GitHub owner/org (optional)")
+ ap.add_argument("--repo", help="GitHub repo name (optional)")
+ ap.add_argument("--repo-slug", "--slug", dest="slug", help="GitHub repo as 'owner/repo'")
+ ap.add_argument("--remote", help="Explicit remote URL to parse for owner/repo")
+ ap.add_argument("--pr", type=int, help="Pull request number")
+ ap.add_argument("--out", help="Write output to file instead of stdout")
+    # key=value shorthand (e.g. pr=123) is handled below via parse_kv_args
+
+ kv = parse_kv_args(argv)
+    # Parse flags, keeping --flag=value but dropping bare key=value shorthand
+    args = ap.parse_args([a for a in argv if "=" not in a or a.startswith("--")])
+
+ pr_number = args.pr or int(kv.get("pr", "0") or 0)
+ if not pr_number:
+ ap.error("--pr or pr=NUMBER is required")
+
+ # Determine owner/repo precedence:
+ # 1) --remote / remote= 2) --repo-slug / repo=owner/repo 3) --owner/--repo or owner=/repo=
+ # 4) if inside git repo: origin remote 5) GITHUB_REPOSITORY env
+ owner = None
+ repo = None
+
+ remote_arg = args.remote or kv.get("remote")
+ slug_arg = args.slug or kv.get("slug")
+ repo_kv = kv.get("repo")
+ owner_arg = args.owner or kv.get("owner")
+    repo_name_arg = args.repo or kv.get("repo")
+
+    if remote_arg:
+        owner, repo = parse_remote_url(remote_arg)
+    elif slug_arg:
+        owner, repo = parse_slug(slug_arg)
+    elif repo_kv and "/" in repo_kv:
+        owner, repo = parse_slug(repo_kv)
+    elif owner_arg and repo_name_arg and "/" not in repo_name_arg:
+        owner, repo = owner_arg, repo_name_arg
+ else:
+ if inside_git_repo():
+ owner, repo = owner_repo_from_origin()
+ else:
+ env_repo = os.environ.get("GITHUB_REPOSITORY")
+ if env_repo and "/" in env_repo:
+ owner, repo = env_repo.split("/", 1)
+
+ if not owner or not repo:
+ ap.error("Could not determine repository. Provide --remote, --repo-slug, or --owner and --repo, or run inside a git repo with an origin.")
+
+ token = os.environ.get("GITHUB_TOKEN", "")
+ if not token:
+ print("GITHUB_TOKEN environment variable must be set", file=sys.stderr)
+ return 1
+
+ comments = collect_comments(token, owner, repo, pr_number)
+ md = format_markdown(comments)
+
+ if args.out:
+ with open(args.out, "w", encoding="utf-8") as f:
+ f.write(md)
+ print(f"Wrote {args.out}")
+ else:
+ sys.stdout.write(md)
+ return 0
+
+
+if __name__ == "__main__":
+ raise SystemExit(main(sys.argv[1:]))
diff --git a/tools/review/seed_feedback_from_github.py b/tools/review/seed_feedback_from_github.py
index b8082e2..a7bc9a5 100755
--- a/tools/review/seed_feedback_from_github.py
+++ b/tools/review/seed_feedback_from_github.py
@@ -30,17 +30,83 @@
API = "https://api.github.com"
-def gh_get(path: str, token: str):
- req = urllib.request.Request(API + path)
+def gh_get_with_headers(url: str, token: str):
+ """GET a full URL and return (json, headers)."""
+ req = urllib.request.Request(url)
req.add_header("Accept", "application/vnd.github+json")
if token:
req.add_header("Authorization", f"Bearer {token}")
try:
with urllib.request.urlopen(req) as r:
- return json.loads(r.read().decode("utf-8"))
+ data = json.loads(r.read().decode("utf-8"))
+ headers = {k.lower(): v for k, v in r.headers.items()}
+ return data, headers
except urllib.error.HTTPError as e:
msg = e.read().decode("utf-8", errors="ignore")
- raise SystemExit(f"GitHub API error {e.code} on {path}: {msg}")
+ raise SystemExit(f"GitHub API error {e.code} on {url}: {msg}")
+
+
+def gh_get(path: str, token: str):
+    """GET an API-relative path and return only the JSON body."""
+ data, _ = gh_get_with_headers(API + path, token)
+ return data
+
+
+def parse_link_header(link_header: str) -> dict:
+ """Parse GitHub Link header into a dict of {rel: url}."""
+ rels = {}
+ if not link_header:
+ return rels
+ parts = [p.strip() for p in link_header.split(",")]
+ for p in parts:
+ if ";" not in p:
+ continue
+ url_part, rel_part = p.split(";", 1)
+ url = url_part.strip().strip("<>")
+ if "rel=" in rel_part:
+ rel = rel_part.split("rel=", 1)[1].strip().strip('"')
+ rels[rel] = url
+ return rels
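+
+# Example (hypothetical header value):
+#   parse_link_header('<https://api.github.com/x?page=2>; rel="next", '
+#                     '<https://api.github.com/x?page=5>; rel="last"')
+#   -> {"next": "https://api.github.com/x?page=2",
+#       "last": "https://api.github.com/x?page=5"}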
+
+
+def gh_get_all(path: str, token: str):
+ """GET and paginate over all pages for a given resource path.
+
+ Appends per_page=100 and follows Link: rel="next" until exhausted.
+ Returns a list aggregated across all pages.
+ """
+ import urllib.parse as up
+
+ url = API + path
+ # Ensure per_page=100 is present
+ parsed = up.urlparse(url)
+ qs = up.parse_qsl(parsed.query, keep_blank_values=True)
+ # Remove any existing per_page/page to avoid duplication
+ qs = [(k, v) for (k, v) in qs if k not in ("per_page", "page")]
+ qs.append(("per_page", "100"))
+ url = up.urlunparse(parsed._replace(query=up.urlencode(qs)))
+
+ items = []
+ while True:
+ data, headers = gh_get_with_headers(url, token)
+ if isinstance(data, list):
+ items.extend(data)
+ else:
+ # Some endpoints may return objects; try common list fields
+ for key in ("items", "nodes"):
+ if key in data and isinstance(data[key], list):
+ items.extend(data[key])
+ break
+ else:
+ raise SystemExit("Unexpected GitHub response shape during pagination")
+
+ links = parse_link_header(headers.get("link", ""))
+ next_url = links.get("next")
+ if next_url:
+ url = next_url
+ continue
+ break
+ return items
def normalize(s: str) -> str:
@@ -66,10 +132,10 @@ def main():
if not head_sha:
raise SystemExit("Unable to determine PR head SHA. Pass --commit explicitly.")
- # Collect review comments
- rev_comments = gh_get(pr_path + "/comments?per_page=100", token)
- # Collect issue comments (discussion)
- iss_comments = gh_get(f"/repos/{args.owner}/{args.repo}/issues/{args.pr}/comments?per_page=100", token)
+ # Collect review comments (paginated)
+ rev_comments = gh_get_all(pr_path + "/comments", token)
+ # Collect issue comments (discussion, paginated)
+ iss_comments = gh_get_all(f"/repos/{args.owner}/{args.repo}/issues/{args.pr}/comments", token)
# Build output path
out_dir = pathlib.Path(args.out) / f"PR{args.pr}"
@@ -136,4 +202,3 @@ def emit_feedback(title: str, body: str, meta: str = ""):
if __name__ == "__main__":
main()
-