Core Concepts
Understand how sus protects AI agents from malicious packages
Overview
sus is a security-first package gateway for AI agents. It answers one simple question: is this package sus?
Traditional package managers were built for human developers who can review code, recognize suspicious patterns, and make judgment calls. AI agents don't have these instincts. They trust what they're told and execute what they're given. This makes them uniquely vulnerable to a new class of attacks designed specifically to exploit autonomous code execution.
sus solves this by scanning every package before installation, detecting both traditional security threats and agentic-specific attacks, and providing a centralized docs index so agents know how to use packages correctly.
Why Traditional Package Managers Fail AI Agents
npm, yarn, pnpm, and bun were never designed with AI agents in mind. Here's what they're missing:
| Capability | npm | yarn | pnpm | bun | sus |
|---|---|---|---|---|---|
| CVE scanning | Post-install | Post-install | Post-install | No | Pre-install |
| Malware detection | No* | No* | No* | No* | Yes |
| Typosquatting protection | No* | No* | No* | No* | Yes |
| Prompt injection detection | No | No | No | No | Yes |
| AGENTS.md docs index | No | No | No | No | Yes |
| Pre-install safety checks | No | No | No | No | Yes |
When an AI agent runs npm install event-stream@3.3.6, it installs known malware without any warning. With sus, that package is blocked before it ever touches your system.
The Agentic Threat Landscape
AI agents face threats that human developers would catch but automated systems miss:
- Prompt Injection in READMEs: Malicious instructions hidden in package documentation that hijack agent behavior
- Error Message Attacks: Crafted error messages that trick agents into executing harmful commands
- Hidden Instructions: Comments in source code designed to override agent instructions
- Install Script Exploits: postinstall scripts that target autonomous execution contexts
- Repo Poisoning: Legitimate packages with compromised versions targeting agents
These attacks don't show up in CVE databases. Traditional security tools don't scan for them. sus does.
How sus Works
sus uses a distributed architecture where packages are pre-scanned in the cloud, so lookups are instant.
The Scanning Pipeline
When sus scans a package, it runs a comprehensive analysis pipeline:
- Metadata Fetch: Pull package info from the registry (maintainers, downloads, repository)
- Tarball Download: Fetch and extract the package contents
- Parallel Analysis: Run all scanners concurrently for speed
- Risk Calculation: Combine all signals into a final risk assessment
- Store & Serve: Cache results for instant lookups
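The pipeline above can be sketched as a single async function. This is an illustrative sketch only; the stubbed scanners and helper names are hypothetical, not sus's actual internals:

```typescript
// Illustrative sketch of the scanning pipeline; the scanner stubs and
// helper names are hypothetical, not sus's real internals.

type RiskLevel = "clean" | "warning" | "critical";

interface Finding {
  severity: "low" | "medium" | "critical";
  message: string;
}

// Step 4 (risk calculation): the worst finding decides the level.
function calculateRisk(findings: Finding[]): RiskLevel {
  if (findings.some((f) => f.severity === "critical")) return "critical";
  return findings.length > 0 ? "warning" : "clean";
}

async function scanPackage(name: string, version: string) {
  // Steps 1-2 (metadata fetch, tarball download) are stubbed out here.
  // Step 3: run all scanners concurrently.
  const findings = (
    await Promise.all([
      Promise.resolve<Finding[]>([]), // CVE scan stub
      Promise.resolve<Finding[]>([]), // malware / typosquat scan stub
      Promise.resolve<Finding[]>([]), // agentic-threat scan stub
    ])
  ).flat();
  // Step 5 (store & serve) would cache { name, version, risk } here.
  return { name, version, risk: calculateRisk(findings) };
}
```

The key property is that step 3 fans out: scanners are independent, so they run in parallel and the slowest one bounds total scan time.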
Risk Levels
Every package gets assigned one of three risk levels:
Clean
No issues detected. Safe to install.
$ sus add express
🔍 checking express@4.21.0...
✅ not sus
├─ publisher: expressjs (verified)
├─ downloads: 32M/week
├─ cves: 0
└─ install scripts: none
📦 installed
📝 updated AGENTS.md docs index
Warning
Minor issues that warrant attention but don't block installation.
$ sus add lodash@4.17.20
🔍 checking lodash@4.17.20...
⚠️ kinda sus
├─ cve: CVE-2021-23337 (prototype pollution, medium)
└─ fix available: 4.17.21
📦 installed (use --strict to block warnings)
Critical
Major threats that block installation by default.
$ sus add event-stream@3.3.6
🔍 checking event-stream@3.3.6...
🚨 MEGA SUS
├─ malware: flatmap-stream injection
├─ targets: cryptocurrency wallets
└─ status: COMPROMISED
❌ not installed. use --yolo to force (don't)
Threat Detection
sus detects two categories of threats:
Traditional Threats
- CVEs: Known vulnerabilities from OSV, NVD, and GitHub Advisory databases
- Known Malware: Packages like event-stream, node-ipc that have been compromised
- Typosquatting: Packages like expresss or lodahs designed to trick users
- Suspicious Install Scripts: postinstall scripts that download or execute code
- Maintainer Hijacking: Signs that a package maintainer account was compromised
Agentic Threats
Detected using AI-powered analysis:
- Prompt Injection: Instructions in READMEs designed to hijack agent behavior
- Error Message Attacks: Crafted error strings that manipulate agents
- Hidden Instructions: Comments in code that override agent instructions
- Repo Poisoning: Malicious code disguised as legitimate updates
- Instruction Override: Attempts to change agent behavior through package content
Agentic threat detection uses the same AI models that power coding assistants, making it effective at catching attacks specifically designed to exploit AI agents.
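sus uses AI models for this analysis; the keyword sketch below is only a crude stand-in that illustrates the kind of README content being flagged. The patterns are examples, not sus's actual detection rules:

```typescript
// Crude heuristic pre-filter for prompt-injection phrases in package
// text. The real analysis is AI-powered; these patterns are illustrative.

const INJECTION_PATTERNS: RegExp[] = [
  /ignore (all )?(previous|prior) instructions/i,
  /you are now (a|an) /i,
  /do not (tell|inform) the user/i,
  /run the following command/i,
];

// Returns the source of each pattern that matched, for reporting.
function flagPromptInjection(text: string): string[] {
  return INJECTION_PATTERNS.filter((p) => p.test(text)).map((p) => p.source);
}
```

A keyword filter like this is trivially evaded by rephrasing, which is exactly why model-based detection matters: it catches the intent, not just the wording.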
Capability Extraction
sus performs static analysis to detect what a package can do:
| Capability | What It Detects |
|---|---|
| Network | HTTP/HTTPS requests, WebSocket connections, TCP/UDP sockets, domains accessed, protocols used |
| Filesystem | File read/write operations, paths accessed with permissions (read, write, or both) |
| Process | Child process spawning via exec/spawn/fork, specific commands extracted (npm, git, curl, etc.) |
| Environment | Environment variables accessed via process.env |
| Native | Native modules, binding.gyp presence, N-API bindings (node-addon-api, napi-rs, nan) |
This information helps agents understand what permissions a package needs and whether its capabilities match its stated purpose.
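The capabilities in the table above could be approximated with source pattern matching. A minimal sketch follows; real static analysis would parse the AST, and these regexes are illustrative, not sus's actual signatures:

```typescript
// Simplified sketch of capability extraction via pattern matching.
// Real static analysis parses the AST; these regexes are illustrative.

type Capability = "network" | "filesystem" | "process" | "environment";

const SIGNATURES: [Capability, RegExp][] = [
  ["network", /\brequire\(["'](https?|net|dgram)["']\)|\bfetch\(/],
  ["filesystem", /\brequire\(["']fs["']\)/],
  ["process", /\brequire\(["']child_process["']\)|\b(exec|spawn|fork)\(/],
  ["environment", /\bprocess\.env\b/],
];

// Returns every capability whose signature appears in the source.
function extractCapabilities(source: string): Capability[] {
  return SIGNATURES.filter(([, re]) => re.test(source)).map(([cap]) => cap);
}
```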
$ sus check axios
🔍 checking axios@1.6.0...
✅ not sus
capabilities:
├─ network: http, https (api calls)
├─ filesystem: none
├─ process: none
└─ environment: HTTP_PROXY, HTTPS_PROXY
AGENTS.md Docs Index
For every scanned package, sus generates documentation that helps AI agents use the package correctly. This documentation is:
- Saved to .sus-docs/ in your project
- Indexed in AGENTS.md for easy discovery
Each package doc includes:
- Quick Start: Minimal code to get started
- Key APIs: Most important functions and their usage
- Best Practices: Recommended patterns and configurations
- Common Gotchas: Mistakes to avoid
- Permissions: What capabilities the package requires
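An index entry might look like the following (illustrative format, not sus's exact output):

```markdown
<!-- AGENTS.md excerpt (illustrative, not sus's exact output) -->
## Package Docs Index

- [express](.sus-docs/express.md) — web framework; quick start, routing APIs, common gotchas
- [axios](.sus-docs/axios.md) — HTTP client; quick start, interceptors, proxy env vars
```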
Why Docs Index?
Based on Vercel's research, passive context in AGENTS.md outperforms active skill retrieval:
| Approach | Pass Rate |
|---|---|
| Active skill retrieval | 79% |
| Passive context (AGENTS.md) | 100% |
This is why sus uses a centralized docs index instead of per-agent-folder skills.
Run sus init to set up the docs index in your project.
Trust Score
Every package gets a trust score from 0-100 based on multiple signals:
| Signal | Impact |
|---|---|
| 5+ maintainers | +20 points |
| Linked repository | +10 points |
| Has description | +5 points |
| High download count | +15 points |
| Package age (1+ year) | +10 points |
| Verified publisher | +20 points |
| Base score | 50 points |
A higher trust score doesn't mean a package is safe. It means the package has more signals of legitimacy. Even high-trust packages can have CVEs or be compromised.
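The additive model in the table above can be sketched directly. Field names here are illustrative, and capping the sum at 100 is an assumption to keep the result in the documented 0-100 range:

```typescript
// Sketch of the additive trust-score model from the table above.
// Field names are illustrative; weights are taken from the table.
// Capping at 100 is an assumption, since the raw weights sum past 100.

interface PackageSignals {
  maintainers: number;
  hasRepository: boolean;
  hasDescription: boolean;
  highDownloads: boolean; // e.g. above some weekly threshold (assumed)
  ageYears: number;
  verifiedPublisher: boolean;
}

function trustScore(s: PackageSignals): number {
  let score = 50; // base score
  if (s.maintainers >= 5) score += 20;
  if (s.hasRepository) score += 10;
  if (s.hasDescription) score += 5;
  if (s.highDownloads) score += 15;
  if (s.ageYears >= 1) score += 10;
  if (s.verifiedPublisher) score += 20;
  return Math.min(score, 100); // keep within the 0-100 scale
}
```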
Scan Priority
sus continuously monitors package registries and prioritizes scans:
| Priority | Trigger |
|---|---|
| Immediate | Known malicious patterns detected |
| High | User-requested scan via CLI or API |
| Medium | New package discovered by watcher |
| Low | Background re-scan of existing packages |
This ensures that when you run sus add, the package has likely already been scanned and results are instant.
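The priority tiers above amount to an ordered scan queue. A minimal sketch, with enum values and names assumed for illustration:

```typescript
// Sketch of the scan queue's priority ordering; names are illustrative.

enum ScanPriority { Immediate = 0, High = 1, Medium = 2, Low = 3 }

interface ScanJob { pkg: string; priority: ScanPriority; }

// Lower enum value = scanned sooner; the queue is not mutated.
function nextJob(queue: ScanJob[]): ScanJob | undefined {
  return [...queue].sort((a, b) => a.priority - b.priority)[0];
}
```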
Summary
sus protects AI agents by:
- Scanning before installing - Unlike npm audit, sus blocks threats before they touch your system
- Detecting agentic threats - Prompt injection, repo poisoning, and other attacks targeting AI
- Extracting capabilities - Understanding what packages can actually do
- Maintaining a docs index - Centralized documentation in AGENTS.md for instant agent access
- Calculating trust scores - Multiple signals combined into actionable risk levels
The result: AI agents can safely install packages without falling victim to attacks designed to exploit their autonomous nature.