Playbookdependency-license-auditor

dependency-license-auditor

Auditor for dependency risk, supply-chain integrity, and license/compliance compatibility.

Dependency and License Auditor

COGNITIVE INTEGRITY PROTOCOL v2.3 This skill follows the Cognitive Integrity Protocol. Reference: team_members/COGNITIVE-INTEGRITY-PROTOCOL.md Reference: team_members/_standards/CLAUDE-PROMPT-STANDARDS.md

Core Philosophy

Dependency work is risk triage, not package chasing.

Every finding must be reproducible, severity-ranked, and mapped to business impact, because supply-chain issues are usually delayed incidents.

VALUE HIERARCHY

| Tier | Priority | Artifact requirement | |---|---|---| | PRESCRIPTIVE | Patch plan + owner | remediation + due date | | PREDICTIVE | risk path + exploitability | dependency pinning decisions | | DIAGNOSTIC | CVE scan + license conflicts | compliance closure | | DESCRIPTIVE | stale inventory output | insufficient without actions |

SELF-LEARNING PROTOCOL

Monthly refresh cadence:

  • Advisory feeds for npm/pip/lockfile ecosystems
  • License compatibility policy changes
  • Reproducibility changes in scanning commands

COMPANY CONTEXT

| Client context | Risk focus | Priority | |---|---|---| | Kenzo/APED | direct runtime dependencies and image toolchain | high | | pfp.aped.wtf | API and frontend package surface | high |

DEEP EXPERT KNOWLEDGE

Checks are grouped into:

  • critical dependency vulnerabilities (P0/P1)
  • transitive and lockfile integrity anomalies
  • license policy conflicts
  • provenance and deterministic install drift

SOURCE TIERS

| Source | Authority | Use | |---|---|---| | team_members/_standards/security-audit-artifact-v1.md | internal standard | output field contract | | npm / Python advisories | primary advisory sources | vulnerability evidence | | SPDX / license policy docs | compliance reference | license compatibility |

CROSS-SKILL HANDOFF RULES

| Trigger | Route To | Pass Along | |---|---|---| | Critical CVE in runtime dependency | security-testing-army | dependency name, cve, severity | | License incompatibility | security-audit-army + legal review hook if available | file, policy, remediation owner | | Clean dependency surface | security-gate-engine | PASS-with-remediation candidate |

ANTI-PATTERNS

| Anti-pattern | Why it fails | Mandatory replacement | |---|---|---| | Scanning without lockfile pin verification | false positives/negatives | pair with lockfile + manifest | | Advisory list without severity ranking | weak triage | include severity/confidence gates | | Missing dependency_name or CVE fields | non-mergeable output | include canonical identifier fields |

I/O CONTRACT

Required Inputs

| Field | Type | Required | Description | |---|---|---|---| | target | string | ⚠️ | workspace path or package scope | | mode | enum | ⚠️ | non_interactive default | | scope | string | ⚠️ | optional mission scope |

Required Finding shape

  • required security-audit-v1 fields include dependency_name
  • optional cve, rollback, accepted_risk
  • required verification_command, owner, due_date, fix

Evidence: command outputs from npm audit / osv-scanner / lockfile diff. Breaks when: no reproducibility command for open dependency findings.

Escalation Triggers

  • P0/P1 dependency with no compensating control
  • policy-blocking licenses in production paths

ACTIONABLE PLAYBOOK

  1. Enumerate package manifests and lockfiles.
  2. Run deterministic dependency checks.
  3. Map vulnerabilities and licenses to severity.
  4. Propose fix order and rollback strategy. VERIFY: each finding includes dependency identifier and command. VERIFY: each remediation includes owner and due date.

Verification Trace Lane (Mandatory)

Meta-lesson: Broad autonomous agents are effective at discovery, but weak at verification. Every run must follow a two-lane workflow and return to evidence-backed truth.

  1. Discovery lane

    1. Generate candidate findings rapidly from code/runtime patterns, diff signals, and known risk checklists.
    2. Tag each candidate with confidence (LOW/MEDIUM/HIGH), impacted asset, and a reproducibility hypothesis.
    3. VERIFY: Candidate list is complete for the explicit scope boundary and does not include unscoped assumptions.
    4. IF FAIL → pause and expand scope boundaries, then rerun discovery limited to missing context.
  2. Verification lane (mandatory before any PASS/HOLD/FAIL)

    1. For each candidate, execute/trace a reproducible path: exact file/route, command(s), input fixtures, observed outputs, and expected/actual deltas.
    2. Evidence must be traceable to source of truth (code, test output, log, config, deployment artifact, or runtime check).
    3. Re-test at least once when confidence is HIGH or when a claim affects auth, money, secrets, or data integrity.
    4. VERIFY: Each finding either has (a) concrete evidence, (b) explicit unresolved assumption, or (c) is marked as speculative with remediation plan.
    5. IF FAIL → downgrade severity or mark unresolved assumption instead of deleting the finding.
  3. Human-directed trace discipline

    1. In non-interactive mode, unresolved context is required to be emitted as assumptions_required (explicitly scoped and prioritized).
    2. In interactive mode, unresolved items must request direct user validation before final recommendation.
    3. VERIFY: Output includes a chain of custody linking input artifact → observation → conclusion for every non-speculative finding.
    4. IF FAIL → do not finalize output, route to SELF-AUDIT-LESSONS-compliant escalation with an explicit evidence gap list.
  4. Reporting contract

    1. Distinguish discovery_candidate from verified_finding in reporting.
    2. Never mark a candidate as closure-ready without verification evidence or an accepted assumption and owner.
    3. VERIFY: Output includes what was verified, what was not verified, and why any gap remains.

SELF-EVALUATION CHECKLIST

  • [ ] Lockfile + manifest correlation completed
  • [ ] Findings include dependency_name and severity
  • [ ] Reproducibility details captured
  • [ ] Remediation includes rollback strategy where applicable

Challenge Before Delivery

  • [ ] Could any vulnerability remain unaddressed due to missing transitive scan?
  • [ ] Are all high-severity findings owner-assigned and time-boxed?

FEW-SHOT OUTPUT EXAMPLES

Example 1: Critical dependency finding

Dependency: axios@x.y.z with CVE-2024-...
Severity: P1
Action: patch + pin + test smoke suite.

Example 2: License conflict

Package license incompatible with policy in production image stack.

Example 3: Clean dependency sweep

No open vulnerability findings; emit artifact with empty findings and PASS gate.