EBI is a security tool that analyzes scripts before execution using LLM-powered analysis. It acts as a protective wrapper around any command, detecting malicious code, vulnerabilities, and hidden instructions to keep your system safe.
- 🛡️ Security-First Design: Blocks execution of suspicious scripts by default
- 🤖 AI-Powered Analysis: Uses LLMs to detect vulnerabilities and malicious patterns
- 🌍 Multi-Language Support: Currently supports Bash and Python scripts
- 🎯 Smart Detection: Analyzes code structure, comments, and string literals separately
- ⚡ Fast & Efficient: Parallel analysis with configurable timeouts
- 🎨 User-Friendly: Clear, colored reports with risk levels and recommendations
```bash
# Clone the repository
git clone https://github.com/co3k/ebi.git
cd ebi

# Build with Cargo
cargo build --release

# Install to system
sudo cp target/release/ebi /usr/local/bin/

# Verify installation
ebi --version
```
- Rust 1.75 or higher
- Internet connection (for LLM API calls)
- API key for OpenAI or compatible LLM service
Set your LLM API key:
```bash
# For OpenAI
export OPENAI_API_KEY="sk-your-api-key"

# For Google Gemini
export GEMINI_API_KEY="your-gemini-api-key"

# For Anthropic Claude
export ANTHROPIC_API_KEY="your-anthropic-api-key"

# Optional: Set default model
export EBI_DEFAULT_MODEL="gemini-2.5-flash"

# Optional: Set default timeout (seconds)
export EBI_DEFAULT_TIMEOUT=120
```
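As a sketch of how these settings compose, the defaults could be resolved like this. The env-var names and fallback values come from this README (`gpt-5-mini`, 300 seconds); the function itself is illustrative, not EBI's actual internals:

```rust
/// Resolve model and timeout from optional environment values, falling
/// back to the documented defaults. Illustrative sketch only.
fn resolve_defaults(model_env: Option<&str>, timeout_env: Option<&str>) -> (String, u64) {
    let model = model_env.unwrap_or("gpt-5-mini").to_string();
    let timeout = timeout_env
        .and_then(|s| s.parse::<u64>().ok()) // ignore non-numeric values
        .unwrap_or(300);
    (model, timeout)
}

fn main() {
    // In the real tool these would come from EBI_DEFAULT_MODEL / EBI_DEFAULT_TIMEOUT.
    let (model, timeout) = resolve_defaults(Some("gemini-2.5-flash"), Some("120"));
    println!("model={model}, timeout={timeout}s");
}
```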
EBI supports multiple output languages with automatic locale detection:
```bash
# Automatic language detection from system locale
echo 'echo "Hello"' | ebi bash

# Explicit language selection
echo 'echo "Hello"' | ebi --output-lang japanese bash

# Environment variable override
export EBI_OUTPUT_LANGUAGE=japanese
echo 'echo "Hello"' | ebi bash
```
Language Priority (highest to lowest):

1. `EBI_OUTPUT_LANGUAGE` environment variable
2. `--output-lang` CLI option
3. System locale detection (`LC_ALL`, `LC_MESSAGES`, `LANG`, `LANGUAGE`)
4. Default: English
Supported Languages:

- `english` (or `en`) - English output
- `japanese` (or `ja`, `jp`) - Japanese output
Locale Detection: EBI automatically detects your system locale and uses the appropriate language:

- Japanese locales (`ja_JP.UTF-8`, `ja`, etc.) → Japanese output
- English locales (`en_US.UTF-8`, `en`, `C.UTF-8`, etc.) → English output
- Unknown locales → English output (fallback)
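The priority chain above can be sketched as a small resolver. This is a hypothetical helper (the real logic lives in EBI's CLI layer and may differ):

```rust
/// Resolve the report language using the documented priority:
/// EBI_OUTPUT_LANGUAGE > --output-lang > system locale > English.
fn resolve_output_language(
    env_override: Option<&str>, // EBI_OUTPUT_LANGUAGE
    cli_option: Option<&str>,   // --output-lang
    locale: Option<&str>,       // first set of LC_ALL, LC_MESSAGES, LANG, LANGUAGE
) -> &'static str {
    // Accept the documented aliases for each language.
    let normalize = |s: &str| match s.to_ascii_lowercase().as_str() {
        "japanese" | "ja" | "jp" => Some("japanese"),
        "english" | "en" => Some("english"),
        _ => None,
    };
    if let Some(lang) = env_override.and_then(normalize) {
        return lang;
    }
    if let Some(lang) = cli_option.and_then(normalize) {
        return lang;
    }
    match locale {
        Some(l) if l.starts_with("ja") => "japanese",
        _ => "english", // unknown locales fall back to English
    }
}

fn main() {
    println!("{}", resolve_output_language(None, None, Some("ja_JP.UTF-8")));
}
```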
Analyze a simple script before execution:
```bash
echo 'echo "Hello, World!"' | ebi bash
```
Safely analyze scripts from the internet:
```bash
# Instead of this dangerous approach:
# curl -sL https://example.com/install.sh | bash

# Use EBI to analyze first:
curl -sL https://example.com/install.sh | ebi bash
```
```
ebi [OPTIONS] <COMMAND> [COMMAND_ARGS...]
```
Options:

- `-l, --lang <LANGUAGE>`: Override automatic language detection
- `-m, --model <MODEL>`: LLM model to use (default: `gpt-5-mini`)
- `-t, --timeout <SECONDS>`: Analysis timeout in seconds (10-300, default: 300)
- `-v, --verbose`: Enable verbose output
- `-d, --debug`: Enable debug output with LLM communications
- `-h, --help`: Display help message
- `-V, --version`: Display version
```bash
# Analyze Python script with custom model
cat script.py | ebi --model gemini-2.5-flash python

# Analyze with verbose output
cat installer.sh | ebi --verbose bash

# Force language detection
cat ambiguous_script | ebi --lang bash sh

# Increase timeout for large scripts
cat large_script.py | ebi --timeout 120 python
```
| Level | Description | Recommendation |
|---|---|---|
| NONE | No security risks detected | Safe to execute |
| LOW | Minor concerns identified | Review and proceed |
| MEDIUM | Notable risks found | Careful review needed |
| HIGH | Significant security risks | Not recommended to execute |
| CRITICAL | Dangerous operations detected | Execution blocked |
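In Rust terms, the severity ordering in this table could be modeled with a derived `Ord` enum. These are illustrative types, not EBI's actual ones:

```rust
/// The five documented risk levels; deriving Ord on this declaration
/// order makes severity comparisons direct.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum RiskLevel {
    None,
    Low,
    Medium,
    High,
    Critical,
}

/// Per the table, only CRITICAL blocks execution outright; lower levels
/// are reported and left to the user's confirmation.
fn is_blocked(level: RiskLevel) -> bool {
    level >= RiskLevel::Critical
}

fn main() {
    println!("critical blocked: {}", is_blocked(RiskLevel::Critical));
}
```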
1. Input Processing: Receives script via stdin
2. Language Detection: Identifies script language via CLI args, command name, or shebang
3. AST Parsing: Parses script structure using Tree-sitter
4. Component Extraction: Separates code, comments, and string literals
5. Parallel Analysis: Performs vulnerability and injection detection using LLMs
6. Risk Assessment: Aggregates results and determines overall risk level
7. User Interaction: Presents report and prompts for execution decision
8. Safe Execution: Executes only after user confirmation (if safe)
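The shebang fallback in step 2 could look roughly like this. It is a hypothetical helper: per the step above, the real tool consults CLI args and the command name first:

```rust
/// Detect the script language from its first line's shebang.
fn language_from_shebang(script: &str) -> Option<&'static str> {
    let interp = script.lines().next()?.strip_prefix("#!")?;
    // `#!/usr/bin/env python3` and `#!/bin/bash` both resolve via the
    // last whitespace-separated token's basename.
    let basename = interp.split_whitespace().last()?.rsplit('/').next()?;
    if basename.starts_with("python") {
        Some("python")
    } else if basename == "bash" || basename == "sh" {
        Some("bash")
    } else {
        None
    }
}

fn main() {
    println!("{:?}", language_from_shebang("#!/usr/bin/env python3\nprint('hi')"));
}
```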
- Fail-Safe Default: Blocks execution when LLM service is unavailable
- No Logging: Doesn't store scripts or analysis results by default
- Timeout Protection: Configurable timeouts for both analysis and user input
- Explicit Consent: Always requires user confirmation before execution
- Conservative Analysis: When uncertain, defaults to higher risk levels
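The fail-safe default can be sketched as: any analysis error maps to a blocked decision, never to execution. The types here are illustrative, not EBI's actual API:

```rust
#[derive(Debug, PartialEq)]
enum Decision {
    PromptUser,
    Blocked,
}

/// If the LLM service is unreachable or the analysis fails (Err),
/// execution is blocked rather than letting the script through unchecked.
fn decide(analysis: Result<bool /* critical findings? */, String>) -> Decision {
    match analysis {
        Ok(true) => Decision::Blocked,     // CRITICAL findings: block outright
        Ok(false) => Decision::PromptUser, // otherwise ask the user
        Err(_) => Decision::Blocked,       // service unavailable: fail safe
    }
}

fn main() {
    println!("{:?}", decide(Err("LLM timeout".to_string())));
}
```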
```bash
# Clone repository
git clone https://github.com/co3k/ebi.git
cd ebi

# Run tests
cargo test

# Run with clippy
cargo clippy

# Build release version
cargo build --release
```
```bash
# Run all tests
cargo test

# Run with verbose output
cargo test -- --nocapture

# Run specific test module
cargo test analyzer::
```
```
ebi/
├── src/
│   ├── analyzer/     # LLM analysis modules
│   ├── cli/          # CLI interface and user interaction
│   ├── executor/     # Script execution handling
│   ├── models/       # Data models and types
│   ├── parser/       # Script parsing and AST analysis
│   └── main.rs       # Entry point
├── tests/
│   ├── contract/     # API contract tests
│   ├── integration/  # Integration tests
│   └── unit/         # Unit tests
└── Cargo.toml        # Dependencies and metadata
```
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Tree-sitter for robust code parsing
- The Rust community for excellent libraries and tools
- OpenAI for providing powerful LLM capabilities
EBI is a security tool that provides analysis and recommendations. While it aims to detect malicious code and vulnerabilities, it is not infallible. Always review scripts carefully and use your judgment before execution. The authors are not responsible for any damage caused by scripts executed after EBI analysis.
For issues, questions, or suggestions, please open an issue on GitHub.