The hud analyze command inspects MCP environments to discover their tools and capabilities. By default, it uses cached metadata for instant results. Use --live for real-time analysis.

Usage

hud analyze <TARGET> [OPTIONS]

Arguments

target (string, required)
  Docker image, lock file path, or command to analyze

Options

--format, -f (string, default: "interactive")
  Output format: interactive, json, or markdown
--verbose, -v (boolean, default: false)
  Show full tool schemas and parameters
--live (boolean, default: false)
  Run container for live analysis (slower but more accurate)
--config, -c (string)
  JSON config file with MCP configuration
--cursor (string)
  Analyze a server from Cursor config
--command (string)
  Analyze a local command instead of a Docker image
--timeout (integer, default: 30)
  Timeout in seconds for live analysis

Analysis Modes

Fast Mode (Default)

Uses cached metadata from:
  1. Local lock file cache (~/.hud/locks/)
  2. HUD registry (if available)
  3. Basic Docker manifest info

# Instant results from metadata
hud analyze hudpython/text-analyzer:latest

# From lock file
hud analyze ./my-env.lock.yaml

Fast mode is perfect for quick inspection and doesn’t require running containers.
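
To see which environments already have cached metadata locally, you can list the lock file cache directly (exact file names under ~/.hud/locks/ may vary):
# Inspect the local lock file cache used by fast mode
ls ~/.hud/locks/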

Live Mode

Runs the actual container for comprehensive analysis:
# Full analysis with running container
hud analyze hudpython/text-analyzer:latest --live

# With environment variables
hud analyze my-env:latest --live -e API_KEY=test

Use --live when you need:
  • Real-time tool validation
  • Resource discovery
  • Telemetry information
  • Testing with specific env vars

Output Formats

Interactive (Default)

Fast mode output:
📊 Environment Overview
┌─────────────┬─────────────────────────┐
│ Image       │ hudpython/text-2048     │
│ Source      │ HUD Registry            │
│ Built       │ 2024-01-15T10:30:00Z    │
│ HUD Version │ 0.1.0                   │
│ Init Time   │ 450 ms                  │
│ Tools       │ 6                       │
└─────────────┴─────────────────────────┘

🔧 Available Tools
└── Tools
    ├── setup - Initialize environment
    ├── evaluate - Return environment state
    ├── move - Move tiles in direction
    ├── reset_board - Reset game to start
    └── get_score - Get current score

Live mode output (--live):
🔍 Analyzing MCP environment: hudpython/text-2048:latest

📊 Environment Overview
┌─────────────┬─────────────────────────┐
│ Server      │ hud-text-2048           │
│ Initialized │ ✓                       │
└─────────────┴─────────────────────────┘

🔧 Available Tools
├── Regular Tools
│   ├── move
│   │   └── Move tiles in the specified direction
│   ├── reset_board
│   │   └── Reset the game board to initial state
│   └── get_score
│       └── Get the current game score
└── Hub Tools
    └── game_hub
        ├── save_state
        └── load_state

📚 Available Resources
┌──────────────────┬────────────────┬─────────────┐
│ URI              │ Name           │ Type        │
├──────────────────┼────────────────┼─────────────┤
│ game://state     │ Game State     │ application │
│ game://history   │ Move History   │ text/plain  │
└──────────────────┴────────────────┴─────────────┘

📡 Telemetry Data
┌─────────┬─────────────────────────────┐
│ Live URL│ https://app.hud.so/xyz123   │
│ Status  │ running                     │
│ Services│ 2/2 running                 │
└─────────┴─────────────────────────────┘

JSON Format

# Fast mode
hud analyze my-env --format json

{
  "image": "my-env:latest",
  "status": "from_cache",
  "tool_count": 6,
  "init_time": 450,
  "tools": [{
    "name": "setup",
    "description": "Initialize environment"
  }]
}

# Live mode
hud analyze my-env --format json --live

{
  "metadata": {
    "servers": ["my-env"],
    "initialized": true
  },
  "tools": [{
    "name": "setup",
    "description": "Initialize environment",
    "input_schema": {...}
  }],
  "hub_tools": {...},
  "resources": [...],
  "telemetry": {...}
}
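
For scripting against live output, nested fields can be extracted with jq. A minimal sketch, assuming each resources entry exposes a uri field matching the URI column shown above:
# List resource URIs from a live analysis (field name assumed)
hud analyze my-env --format json --live | jq -r '.resources[].uri'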

Markdown Format

hud analyze my-env --format markdown > docs/tools.md

Generates formatted documentation with tool descriptions and schemas.

Examples

Fast Analysis (Default)

# Quick inspection from metadata
hud analyze hudpython/text-analyzer:latest

# Analyze multiple environments rapidly
for env in browser scraper analyzer; do
  hud analyze "hudpython/$env:latest"
done

# From lock file
hud analyze ./environments/prod.lock.yaml

# With verbose schemas
hud analyze my-env:latest --verbose

Live Analysis

# Full container analysis
hud analyze my-env:latest --live

# With environment variables
hud analyze my-env:latest --live -e API_KEY=test -e DEBUG=true

# Custom timeout for slow environments
hud analyze heavy-env:latest --live --timeout 60

Local Development

# Analyze local command
hud analyze --command "python my_server.py" --live

# Cursor integration
hud analyze --cursor my-dev-server --live

# With config file
hud analyze my-env:latest --config mcp-config.json

Documentation Generation

# Generate tool documentation
hud analyze my-env:latest --format markdown > docs/tools.md

# Compare versions
hud analyze my-env:v1.0 --format json > v1.json
hud analyze my-env:v2.0 --format json > v2.json
diff v1.json v2.json

# Extract tool names
hud analyze my-env --format json | jq -r '.tools[].name'
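
Raw JSON diffs can be noisy if key order shifts between versions; one option (assuming jq is installed) is to sort keys before diffing:
# Normalize key order to keep diffs readable
hud analyze my-env:v1.0 --format json | jq -S . > v1.json
hud analyze my-env:v2.0 --format json | jq -S . > v2.json
diff v1.json v2.json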

Performance Comparison

Mode             Speed          Use Case
Fast (default)   < 1 second     Quick inspection, CI checks
Live (--live)    5-30 seconds   Full validation, debugging

Use fast mode for rapid iteration during development. Switch to --live for final validation.
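
To measure the gap on your own environment, time both modes for the same image:
# Compare fast vs. live analysis time
time hud analyze my-env:latest
time hud analyze my-env:latest --live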

CI/CD Integration

Fast CI Checks

# GitHub Actions - Quick tool validation
- name: Verify tools exist
  run: |
    TOOLS=$(hud analyze $IMAGE --format json | jq '.tool_count')
    if [ "$TOOLS" -lt 3 ]; then
      echo "Not enough tools!"
      exit 1
    fi
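
Beyond counting tools, the same pattern can assert that a specific tool is present. A sketch using jq (assumed to be installed on the runner) and the setup tool from the examples above:
# Fail the check if a required tool is missing from the metadata
hud analyze "$IMAGE" --format json | jq -e '.tools[] | select(.name == "setup")' > /dev/null \
  || { echo "Required tool 'setup' not found"; exit 1; }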

Python Validation Script

#!/usr/bin/env python3
import subprocess
import json
import sys

def verify_environment(image, required_tools):
    """Fast validation using metadata"""
    # Quick check with metadata
    result = subprocess.run(
        ["hud", "analyze", image, "--format", "json"],
        capture_output=True, text=True, check=True
    )
    
    data = json.loads(result.stdout)
    available = {tool["name"] for tool in data["tools"]}
    missing = set(required_tools) - available
    
    if missing:
        print(f"⚠️  Missing tools: {missing}")
        print("Running live analysis for details...")
        
        # Full check if needed
        subprocess.run(
            ["hud", "analyze", image, "--live"],
            check=True
        )
    return len(missing) == 0

# Usage: exit non-zero so CI fails when required tools are missing
if not verify_environment("my-env:latest", ["setup", "evaluate", "process"]):
    sys.exit(1)
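
If the script is saved as, say, verify_env.py (hypothetical filename), it can run as a single CI step:
# Non-zero exit fails the pipeline when required tools are missing
python3 verify_env.py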

Best Practices

  1. Default to Fast Mode: Start with metadata for quick checks
  2. Live for Validation: Use --live before production deployments
  3. Cache Lock Files: Share lock files for consistent metadata
  4. Version Your Tools: Track tool changes across versions
  5. Automate Checks: Add fast analysis to CI pipelines

Fast mode shows what tools should be available based on build-time analysis. Live mode shows what actually works right now.

See Also