
Scanning 6 AI Agent Repos for Security Risks: What We Found

How we ran Vectra Guard on Moltbot, Open Interpreter, AutoGPT, Huginn, LocalAGI, and Aider—and what it means for securing local AI agents.

Vectra Guard scanning AI agent repos for security risks: Moltbot, Open Interpreter, AutoGPT, Huginn, LocalAGI, Aider with security warnings highlighted.

Local AI agents are having a moment. Tools like Moltbot (formerly Clawdbot), Open Interpreter, and AutoGPT give users assistants that run code, hit APIs, and automate tasks on their machines. That power also creates real risk: exposed dashboards, secrets in configs, and unchecked subprocess or HTTP usage can turn a productivity tool into an attack surface—as recent headlines about Moltbot showed.

We wanted to see how common these patterns are across the ecosystem. So we ran Vectra Guard (our open-source CLI for sandboxing, secret scanning, and static security checks) on six popular AI agent repos. Here's what we did and what we found.

What We Scanned

We cloned six public repos into a test workspace and ran:

  • Static security scan — vg scan-security for Go, Python, C, and config (YAML/JSON), looking for things like bind-to-all-interfaces, env/API access, remote HTTP, subprocess use, and auth-off patterns.
  • Repo audit — vg audit repo for the same code findings plus secret detection and package audits (npm/pip) where applicable.

Repos:

| Project | What it does |
| --- | --- |
| Moltbot | Personal AI agent; messaging, local execution, control panel. |
| Open Interpreter | Natural language interface that runs code locally. |
| AutoGPT | Autonomous agent platform; tasks, tools, API. |
| Huginn | Agent framework for monitoring and acting on your behalf. |
| LocalAGI | Self-hosted local AI agent platform. |
| Aider | AI pair programming in the terminal; edits code, runs locally. |

All are open source and run or trigger code locally—exactly the kind of surface we care about.

The Numbers

The table below shows what Vectra Guard reported for each repo—the same kind of output you get when you run vg scan-security and vg audit repo on your own codebase.

| Repo | Code findings | Severity mix | Secrets (candidates) | Package issues |
| --- | --- | --- | --- | --- |
| AutoGPT | 1,298 | Medium | 5,651 | 14 (python) |
| Open Interpreter | 247 | 5 high, 242 medium | 101 | 4 (python) |
| Aider | 435 | Medium | 17 | 0 |
| LocalAGI | 68 | 2 high, 66 medium | 131 | 4 (python) |
| Moltbot | 11 | Medium | 1,546 | npm 0, python 14 |
| Huginn | 3 (scan) / 0 (audit) | n/a | 104 | 14 (python) |

Code findings (Vectra Guard) = static patterns (e.g. env access, external HTTP, bind 0.0.0.0, subprocess); comment-only lines are skipped. Each finding has a severity (medium, high, or critical) and marks a real pattern to review — fix or justify, not ignore. Secrets = key-like strings in source/config with secret context (token/api_key/secret, etc.) and high-entropy values; lockfiles are skipped and false-positive filters applied (~7.5K total across the six repos). Package issues = pip-audit / npm audit where we ran it. All counts are from Vectra Guard v0.6.0 (see similar-agent-findings.json and similar-agent-scan-raw.txt); you can reproduce them, or run the same scans on your own repo.
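To make the secret-candidate definition concrete, here is a minimal sketch of that kind of check — a string near secret-context keywords whose value has high Shannon entropy. This is our own illustration, not Vectra Guard's actual implementation; the thresholds and regexes are assumptions:

```python
import math
import re

SECRET_CONTEXT = re.compile(r"(token|api_key|apikey|secret|password)", re.IGNORECASE)

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character, estimated from character frequencies."""
    if not s:
        return 0.0
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

def looks_like_secret(line: str, min_len: int = 20, min_entropy: float = 4.0) -> bool:
    """Flag a line only if it has secret context AND a long, high-entropy value."""
    if not SECRET_CONTEXT.search(line):
        return False
    # Candidate values: long runs of base64-ish characters.
    for candidate in re.findall(r"[A-Za-z0-9+/_\-]{%d,}" % min_len, line):
        if shannon_entropy(candidate) >= min_entropy:
            return True
    return False

print(looks_like_secret('api_key = "hQ9xJ2mZk8LpW3vRtY7nB4cD6fG1sA5e"'))   # True
print(looks_like_secret('password_prompt = "Please enter your password"'))  # False
```

Requiring both signals (context and entropy) is what keeps the candidate counts from exploding: plain prose near the word "password" stays unflagged.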

That visibility is what gives you confidence: you see exactly what to fix or justify before deployment, and you have an audit trail for compliance.

Across the six repos we saw 2,149 static code findings. The dominant pattern is external HTTP usage (non-localhost URLs and remote HTTP calls), followed by environment/API access, then subprocess use and their Go equivalents in the one Go-heavy repo (LocalAGI). Bind-all-interfaces and unauthenticated-access (config) findings appear in a minority of locations but map directly to the kind of incidents that made Moltbot headlines. High-severity items (e.g. PY_EXEC, PY_EVAL) are rare but concentrated in code paths that execute or evaluate dynamic content.
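These categories are simpler than they sound. A toy line-level matcher shows the shape of this kind of static check — the rule names mirror the report, but the regexes here are our own illustrations, not Vectra Guard's actual rules:

```python
import re

# Illustrative patterns loosely mirroring the rule codes in the report.
RULES = {
    "PY_EXTERNAL_HTTP": re.compile(r"https?://(?!localhost|127\.0\.0\.1)\S+"),
    "PY_ENV_ACCESS": re.compile(r"os\.environ|os\.getenv"),
    "PY_SUBPROCESS": re.compile(r"subprocess\.(run|Popen|call|check_output)"),
    "BIND_ALL_INTERFACES": re.compile(r"""['"]0\.0\.0\.0['"]"""),
}

def scan_line(line: str) -> list[str]:
    """Return rule codes matched by a single source line."""
    if line.lstrip().startswith("#"):  # comment-only lines are skipped
        return []
    return [code for code, pat in RULES.items() if pat.search(line)]

print(scan_line('resp = requests.get("https://api.example.com/v1")'))  # ['PY_EXTERNAL_HTTP']
print(scan_line('app.run(host="0.0.0.0", port=8080)'))                 # ['BIND_ALL_INTERFACES']
print(scan_line('# subprocess.run(cmd)  -- commented out, skipped'))   # []
```

A match is a pattern to review, not automatically a vulnerability — which is exactly why the report frames findings as "fix or justify".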

What Showed Up Most

From the full scan we extracted counts by rule code (see similar-agent-findings.json, in vectra-guard tool output format, and findings-analysis.md for reference):

  • External and remote HTTP — PY_EXTERNAL_HTTP (1,196) and PY_REMOTE_HTTP (171) dominate. Validate and restrict what gets passed into requests.
  • Environment and API usage — PY_ENV_ACCESS (430) and Go's GO_ENV_READ (37). Don't echo env in user-facing or external channels.
  • Subprocess and exec — PY_SUBPROCESS (172) and a few PY_EXEC/PY_EVAL. Validate and sandbox.
  • Binding to all interfaces — BIND_ALL_INTERFACES (22). Binding to 0.0.0.0 is fine only with auth and TLS.
  • Config and deployment — UNAUTHENTICATED_ACCESS (3) in config. Fix or justify every one.
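For the dominant class — external HTTP — a common mitigation is a host allowlist checked before any request leaves the process. A minimal sketch (the allowlist contents and helper name are illustrative assumptions, not from the scan):

```python
from urllib.parse import urlparse

# Hosts this agent is allowed to call; everything else is rejected.
ALLOWED_HOSTS = {"api.github.com", "pypi.org"}

def check_outbound_url(url: str) -> str:
    """Validate scheme and host before handing the URL to an HTTP client."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError(f"blocked non-HTTPS URL: {url}")
    if parsed.hostname not in ALLOWED_HOSTS:
        raise ValueError(f"blocked host: {parsed.hostname}")
    return url

check_outbound_url("https://api.github.com/repos")      # passes
# check_outbound_url("http://evil.example.com/")        # would raise ValueError
```

Centralizing the check in one helper also gives the scanner (and reviewers) a single place to justify every outbound call.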

None of this is to say these projects are "unsafe"—they're complex, actively developed, and many findings are in tests or optional features. The takeaway is that these patterns are widespread. Findings are stored in vectra-guard tool output format (same as vg audit repo --output json). For a deeper discussion, see Findings analysis.

Reports & artifacts

All scan outputs and analysis live in the Vectra Guard repo under docs/reports/. Open these links directly (no clone needed):

Rule reference and securing checklist: Control panel & deployment security.

How to Run It (Using the Release Artifact)

Use the release binary from GitHub Releases and the scan script via curl from the repo. No clone or build required.

1. Create a directory and download the script:

Code
mkdir -p vg-scan/scripts
curl -sSL https://raw.githubusercontent.com/xadnavyaai/vectra-guard/main/scripts/test-similar-agent-repos.sh -o vg-scan/scripts/test-similar-agent-repos.sh
chmod +x vg-scan/scripts/test-similar-agent-repos.sh
cd vg-scan

2. Download the vectra-guard binary for your platform (latest: v0.6.0) into this directory:

Code
# Linux (amd64)
curl -sSL https://github.com/xadnavyaai/vectra-guard/releases/download/v0.6.0/vectra-guard-linux-amd64 -o vectra-guard
chmod +x vectra-guard

# macOS (Apple Silicon)
curl -sSL https://github.com/xadnavyaai/vectra-guard/releases/download/v0.6.0/vectra-guard-darwin-arm64 -o vectra-guard
chmod +x vectra-guard

3. Clone the six AI agent repos into test-workspaces/:

Code
./scripts/test-similar-agent-repos.sh clone-all

4. Run the scan:

Code
VG_CMD="./vectra-guard" ./scripts/test-similar-agent-repos.sh

Optional: put vectra-guard on your PATH and run the script with no VG_CMD; the script defaults to vg.

What You Can Do With This

  • Run the same on your repo — vg scan-security and vg audit repo give you the same visibility and confidence: code findings, secrets, and package audits, so you can fix or justify issues before they hit production.
  • Maintainers — Run those commands in CI to catch bind-all, env/HTTP misuse, and config issues before they ship.
  • Operators — Before exposing any agent dashboard, fix or justify every BIND_ALL_INTERFACES and UNAUTHENTICATED_ACCESS; use auth and TLS when binding to 0.0.0.0.
  • Contributors — Use vg exec -- for agent-triggered commands so they run in the sandbox, and consider vg prompt-firewall for user-facing prompt content.
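The same fix-or-justify mindset applies to agent-triggered commands before anything reaches a shell. Here is a minimal pre-check sketch — the allowlist and helper are our own illustration; vg exec -- provides the actual sandboxing:

```python
import shlex

# Commands the agent may run unattended; anything else needs human review.
ALLOWED_COMMANDS = {"git", "ls", "cat", "python"}

def vet_command(command_line: str) -> list[str]:
    """Split a command safely and reject binaries outside the allowlist."""
    argv = shlex.split(command_line)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not allowed: {argv[0] if argv else ''}")
    return argv  # safe to pass to subprocess.run(argv) without shell=True

print(vet_command("git status --short"))  # ['git', 'status', '--short']
```

Passing the vetted argv list to subprocess.run without shell=True avoids shell-injection on top of the allowlist check.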

Vectra Guard gives you these numbers so you know what needs attention. Local AI agents are here to stay—and so are the risks that come with broad system access. Scanning early and often, and sandboxing execution, is one way to keep the upside without the headline.