Clinejection: How One GitHub Issue Title Turned into a Supply Chain Incident

On February 17, 2026, Cline 2.3.0 was compromised via an AI-driven CI/CD attack chain that began with prompt injection in a GitHub issue title and ended with a malicious npm release that silently installed an unauthorized AI-capable tool.

Clinejection: How One GitHub Issue Title Turned into a Supply Chain Incident

Overview

On February 17, 2026, version 2.3.0 of Cline was published to npm with a malicious postinstall script that globally installed an unauthorized package named openclaw. The compromised version was live for roughly 8 hours, during which it was downloaded about 4,000 times.

This incident is notable because the compromise originated from prompt injection in GitHub issue metadata, propagated through AI-assisted automation in CI, and culminated in a supply chain attack that abused normal dependency installation flows.

What Actually Happened

  • Date: February 17, 2026
  • Package: cline
  • Version: 2.3.0
  • Malicious behavior: postinstall script that globally installed openclaw
  • Exposure window: ~8 hours
  • Estimated impact: ~4,000 downloads

The attacker’s goal was to piggyback on Cline’s install process to distribute another AI-capable tool (openclaw) without explicit, interactive user consent.

The Attack Chain, Step by Step

1. Prompt Injection Through Issue Metadata

The initial foothold came from untrusted text in a GitHub issue title. The attacker crafted a title containing hidden instruction payloads designed to influence an AI triage workflow that processed issue metadata.

Because the workflow treated this text as benign context rather than untrusted input, the model was successfully instructed to perform actions that aligned with the attacker’s intent.

2. Workflow-Level Code Execution

The AI triage system was wired into automation that could execute commands or modify configuration based on the model’s output. Once the model was coerced via prompt injection, the workflow:

  • Proposed or executed package installation behavior aligned with the attacker’s instructions
  • Bridged the gap from language-level control (prompt) to code-level effects (shell / script execution)

This step turned a text-only attack into real code execution inside CI.

3. GitHub Actions Cache Poisoning

The attacker then leveraged GitHub Actions cache behavior to plant crafted artifacts in the CI environment. By influencing what was cached and later restored, they were able to:

  • Introduce malicious or modified files into subsequent workflow runs
  • Persist attacker-controlled material across runs without needing repeated direct access

This effectively converted the CI cache into a persistence and delivery channel.

4. Release Credential Exfiltration

With control over CI artifacts and execution paths, the attacker was able to access and exfiltrate release credentials that were available to the workflow. This included secrets used to:

  • Authenticate to npm for publishing
  • Sign or authorize release artifacts

Once these credentials were compromised, the attacker could publish a seemingly legitimate release under the project’s identity.

5. Unauthorized Publish with Install-Time Side Effects

Using the stolen release credentials, the attacker published Cline 2.3.0 to npm with:

  • A malicious postinstall script
  • Behavior that globally installed an additional package: openclaw

From the user’s perspective, this looked like a normal dependency update. In reality, the install process silently introduced a secondary AI-capable tool onto their system.

Why This Incident Was Different

Several aspects make this incident stand out:

  1. AI as the initial attack surface

The compromise began not with traditional code injection, but with prompt injection in a GitHub issue title processed by an AI workflow.

  1. Automation as the execution bridge

The AI system wasn’t just advisory; it was wired into automated actions (install, configure, publish). This turned model output into direct system changes.

  1. Supply chain impact via normal mechanics

The malicious behavior was delivered through standard npm lifecycle hooks (postinstall), meaning:

  • No extra prompts or warnings
  • No obvious behavioral change during install
  • Users were compromised by doing what they normally do: npm install
  1. AI tool installing another AI tool

An AI-assisted developer tool (Cline) was compromised to automatically install another AI-capable tool (openclaw), amplifying potential downstream impact.

Recommendations for Engineering Teams

1. Treat All Repo Text as Untrusted

  • Consider issue titles, descriptions, comments, PR text, and docs as untrusted input, especially when used as context for AI systems.
  • Apply input sanitization, filtering, and context bounding before feeding repo text into models.
  • Avoid giving models direct authority to execute commands based solely on untrusted text.

2. Separate Read/Analyze from Execute/Mutate

  • Architect AI workflows so that analysis ("what should we do?") is separated from execution ("do it").
  • Require explicit human review or policy checks between model suggestions and actions that:
  • Run shell commands
  • Modify CI configuration
  • Touch release or deployment pipelines

3. Eliminate Long-Lived Release Tokens

  • Replace long-lived tokens with short-lived, scoped credentials issued just-in-time for a specific release.
  • Use OIDC-based federation or similar mechanisms where possible.
  • Ensure release secrets are not broadly available to all CI jobs or branches.

4. Harden CI Cache Strategy

  • Treat CI caches as potentially attacker-controlled.
  • Avoid caching executables, scripts, or configuration that can influence builds without verification.
  • Implement cache key validation, integrity checks, and signing for critical artifacts.

5. Enforce Install-Time Policy Controls

  • Use tools or policies that block or flag:
  • Unexpected postinstall / lifecycle scripts
  • Global installs triggered by dependencies
  • Network access during install where not strictly required
  • Consider allowlists for packages permitted to run install scripts.

6. Build Fast Security Response Processes

  • Prepare playbooks for:
  • Rapid unpublishing / deprecating compromised versions
  • Notifying users and downstream dependents
  • Rotating credentials and rebuilding trust chains
  • Regularly drill incident response for supply chain and AI-related failures.

The Larger Pattern

This incident illustrates that prompt injection and package compromise are now part of a single, unified attack pipeline:

  1. Language injection → Model is coerced via untrusted text (e.g., issue titles).
  2. Execution → AI output is wired into automation that runs code.
  3. CI compromise → Caches and workflows are manipulated to persist control.
  4. Release trust break → Credentials are exfiltrated; malicious releases are published.
  5. End-user machine compromise → Normal install flows deliver hidden payloads.

As AI systems become more deeply integrated into development and release workflows, every text field that can reach a model becomes a potential control surface, and every automated action connected to that model becomes a potential execution primitive.

Engineering teams should treat AI-augmented CI/CD as security-critical infrastructure, applying the same rigor they would to production auth systems, signing keys, and deployment pipelines.

warning

AI + CI/CD = New Supply Chain Risk Surface

Any workflow where AI output can directly influence code execution, CI configuration, or release publishing must be treated as a high-risk integration point. Enforce strict separation between model suggestions and automated actions, and assume all repository text can be adversarial.

ci-hardening-checklist.md
- [ ] Ensure AI workflows cannot directly execute shell commands without human or policy gating
- [ ] Scope CI secrets so release credentials are only available to dedicated, short-lived jobs
- [ ] Audit GitHub Actions cache usage; avoid caching unverified executables or scripts
- [ ] Add checks to flag or block new `postinstall` scripts in dependencies
- [ ] Implement monitoring and alerting for unusual publish activity (time, source, version patterns)
- [ ] Run regular incident response drills for package compromise and AI workflow abuse