AI Tools
Varlock is purpose-built for the AI era. Your .env.schema gives AI agents full context on your configuration — variable names, types, validation rules, descriptions — while your secret values never leave your machine or touch AI servers.
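As a sketch of what that looks like, a small .env.schema might combine plain-comment descriptions with the @sensitive and @required decorators used throughout this guide (the variable names below are made up for illustration; check the varlock docs for the full set of decorators your version supports):

```
# Base URL of the backend API
# @required
API_URL=https://api.example.com

# API key for the payments provider — value lives in your secrets manager, not here
# @sensitive @required
PAYMENTS_API_KEY=
```

Because the schema describes names, types, and rules but holds no secret values, it is safe for AI tools to read.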
This solves two critical problems with AI-assisted development:
- Secret exposure — AI tools read your project files, including .env files. With varlock, secrets are never stored in plain text — they're fetched at runtime from secure providers.
- AI-generated leaks — AI agents may hardcode secrets or log sensitive values in generated code. varlock scan catches these leaks before they're committed, and runtime protection redacts secrets from logs and responses.
Securely inject secrets into AI CLI tools
Many AI coding assistants offer CLI tools that require API keys and other secrets. Instead of storing these secrets in plain text .env or .json files or exposing them in your shell history, use varlock to inject them securely at runtime. This applies both to config that might be required to bootstrap the tool itself and to things like MCP servers that require API keys.
1. Install varlock
If you haven't already, install varlock on your system.
2. Create an environment schema
Define your API keys and secrets in your .env.schema file. Mark sensitive values appropriately:
```
# @sensitive @required
OPENAI_API_KEY=exec('op read "op://api-local/openai/api-key"')

# @sensitive @required
ANTHROPIC_API_KEY=exec('op read "op://api-local/anthropic/api-key"')

# @sensitive @required
GOOGLE_API_KEY=exec('op read "op://api-local/google/api-key"')
```

Store the actual secret values in your preferred secrets provider, like 1Password (as shown above), AWS Secrets Manager, or any other provider with a CLI to fetch individual secrets.
3. Run your tool via varlock run
Execute your AI CLI tool through varlock to securely inject the environment variables:
```
varlock run -- <your-cli-command>
```

Popular AI CLI tool examples

Here's how to configure and run popular AI coding CLI tools with varlock:
Claude Code is Anthropic’s CLI tool for AI-assisted coding.
Environment variable:
- ANTHROPIC_API_KEY: Your Anthropic API key
Add to .env.schema:
```
# @sensitive @required
ANTHROPIC_API_KEY=exec('op read "op://api-local/anthropic/api-key"')
```

Run with varlock:

```
varlock run -- claude
```

See supported env variables here.
Opencode is a provider-agnostic AI coding assistant that works in your terminal.
Environment variables:
- ANTHROPIC_API_KEY: For Claude models
- OPENAI_API_KEY: For OpenAI models
- OPENCODE_CONFIG: Path to custom config file (optional)
Add to .env.schema:
```
# @sensitive @required
ANTHROPIC_API_KEY=exec('op read "op://api-local/anthropic/api-key"')

# @sensitive
OPENAI_API_KEY=exec('op read "op://api-local/openai/api-key"')
```

Add an auth configuration:

```
opencode auth login
```

It will ask you to paste your API key. Instead, paste in an env reference like this:

```
{env:ANTHROPIC_API_KEY}
```
Your config file (~/.local/share/opencode/auth.json) should now look like this:
```
{
  "anthropic": {
    "type": "api",
    "key": "{env:ANTHROPIC_API_KEY}"
  }
}
```

Run with varlock:

```
varlock run -- opencode

# or with a specific model
varlock run -- opencode --model claude-3-5-sonnet
```

See the Opencode docs for more information.
Gemini CLI is Google’s open source AI agent.
Environment variable:
- GOOGLE_API_KEY or GEMINI_API_KEY: Your Google AI API key
Add to .env.schema:
```
# @sensitive @required
GOOGLE_CLOUD_PROJECT=exec('op read "op://api-local/google/cloud-project"')

# @sensitive @required
GOOGLE_API_KEY=exec('op read "op://api-local/google/api-key"')
```

Run with varlock:

```
varlock run -- gemini
```

See the Gemini CLI auth docs for more information.
Allowing schema files for AI tools
Most AI tools ignore .env.* files by default. To ensure your AI tool can access your environment schema, add the following to your .gitignore:

```
!.env.schema
```

If you use a tool with its own ignore file, check that tool's documentation to see how it handles ignore files and make sure .env.schema is allowed.
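For example, if your tool reads a .gitignore-style ignore file of its own (Cursor's .cursorignore is one such file), the same pattern applies — this is a sketch, so confirm your tool honors negation patterns:

```
# Hide all env files from the AI tool...
.env*
# ...but allow the schema, which contains no secret values
!.env.schema
```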
Custom instructions and rules
To give your AI tool full context about varlock, you can provide it with the full Varlock llms.txt. In Cursor, this is accomplished via 'Add New Custom Docs'.

If your tool supports custom rules, you can use the varlock Cursor rule file from this repo as a starting point, then adapt it to suit your workflow.
Scan for leaked secrets
AI agents can sometimes hardcode secret values or leak them into generated code. Use varlock scan to proactively detect leaked secrets in your codebase:
```
# Scan the current directory for leaked secret values
varlock scan

# Scan specific paths
varlock scan ./src ./config
```

You can also set up varlock scan as a git pre-commit hook to automatically catch leaks before they're committed:

```
# Add to your .git/hooks/pre-commit or use a hook manager like husky/lefthook
varlock scan --staged
```

This is especially valuable when working with AI coding tools — the scan command compares your resolved secret values against your codebase to find any that may have been accidentally included in plain text.
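If you use a hook manager like lefthook, a minimal configuration along these lines should work — a sketch, so check your hook manager's docs for the exact options your version supports:

```yaml
# lefthook.yml
pre-commit:
  commands:
    secret-scan:
      # Block the commit if any resolved secret values appear in staged files
      run: varlock scan --staged
```

With husky, the equivalent is a .husky/pre-commit script containing the same `varlock scan --staged` command.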
Varlock Docs MCP
We also have a docs MCP server that allows you to search the Varlock docs. See more details here.