
AI Tools

Varlock is purpose-built for the AI era. Your .env.schema gives AI agents full context on your configuration — variable names, types, validation rules, descriptions — while your secret values never leave your machine or touch AI servers.

This solves two critical problems with AI-assisted development:

  1. Secret exposure — AI tools read your project files, including .env files. With varlock, secrets are never stored in plain text — they’re fetched at runtime from secure providers.
  2. AI-generated leaks — AI agents may hardcode secrets or log sensitive values in generated code. varlock scan catches these leaks before they’re committed, and runtime protection redacts secrets from logs and responses.

Many AI coding assistants offer CLI tools that require API keys and other secrets. Instead of storing these secrets in plain-text .env or .json files or exposing them in your shell history, use varlock to inject them securely at runtime. This applies both to config required to bootstrap the tool itself and to things like MCP servers that require API keys.

If you haven’t already, install varlock on your system.

Define your API keys and secrets in your .env.schema file. Mark sensitive values appropriately:

.env.schema
# @sensitive @required
OPENAI_API_KEY=exec('op read "op://api-local/openai/api-key"')
# @sensitive @required
ANTHROPIC_API_KEY=exec('op read "op://api-local/anthropic/api-key"')
# @sensitive @required
GOOGLE_API_KEY=exec('op read "op://api-local/google/api-key"')

Store the actual secret values in your preferred secrets provider, such as 1Password (as shown above), AWS Secrets Manager, or any other provider with a CLI that can fetch individual secrets.
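For example, if you keep secrets in AWS Secrets Manager instead of 1Password, the exec() call can shell out to the AWS CLI. A sketch (the secret IDs here are placeholders — use your own):

```
# @sensitive @required
OPENAI_API_KEY=exec('aws secretsmanager get-secret-value --secret-id openai-api-key --query SecretString --output text')
# @sensitive @required
ANTHROPIC_API_KEY=exec('aws secretsmanager get-secret-value --secret-id anthropic-api-key --query SecretString --output text')
```

Any CLI that prints a single secret to stdout works the same way inside exec().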

Execute your AI CLI tool through varlock to securely inject the environment variables:

varlock run -- <your-cli-command>

Here’s how to configure and run popular AI coding CLI tools with varlock:

Claude Code is Anthropic’s CLI tool for AI-assisted coding.

Environment variable:

  • ANTHROPIC_API_KEY - Your Anthropic API key

Add to .env.schema:

# @sensitive @required
ANTHROPIC_API_KEY=exec('op read "op://api-local/anthropic/api-key"')

Run with varlock:

varlock run -- claude

See Anthropic’s Claude Code documentation for the full list of supported environment variables.
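The same pattern works for MCP servers that need API keys. As a sketch, a project-level .mcp.json entry (the server package shown is an example — substitute the MCP server you actually use) can launch the server through varlock so the key is injected at runtime instead of being written into the config file:

```json
{
  "mcpServers": {
    "github": {
      "command": "varlock",
      "args": ["run", "--", "npx", "-y", "@modelcontextprotocol/server-github"]
    }
  }
}
```

Because the secret is resolved by varlock at launch time, the MCP config itself stays free of plain-text credentials.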


Most AI tools ignore .env.* files by default. To ensure your AI tool can access your environment schema, add the following to your .gitignore:

!.env.schema

If you use a tool with its own ignore file, check that tool’s documentation to see how it handles ignore files and make sure .env.schema is allowed.
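For example, Cursor reads a .cursorignore file that follows .gitignore syntax. Assuming your tool supports negation patterns (check its docs), a sketch of re-allowing the schema:

```
# .cursorignore — re-include the schema so the agent can read it
!.env.schema
```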

To give your AI tool full context about varlock, you can provide it with the full Varlock llms.txt. In Cursor, this is accomplished via ‘Add New Custom Docs’.

If your tool supports custom rules, you can use the varlock Cursor rule file from the varlock repo as a starting point and adapt it to your workflow.

AI agents can sometimes hardcode secret values or leak them into generated code. Use varlock scan to proactively detect leaked secrets in your codebase:

# Scan the current directory for leaked secret values
varlock scan
# Scan specific paths
varlock scan ./src ./config
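Beyond local scans, the same command can run in CI on every push. A minimal GitHub Actions sketch (assuming varlock is installable via npm — adjust the install step to match how you install varlock):

```yaml
name: secret-scan
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm install -g varlock
      # Fails the job if any resolved secret value appears in the codebase
      - run: varlock scan
```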

You can also set up varlock scan as a git pre-commit hook to automatically catch leaks before they’re committed:

# Add to your .git/hooks/pre-commit or use a hook manager like husky/lefthook
varlock scan --staged
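As a sketch, a minimal .git/hooks/pre-commit file (make it executable with chmod +x) that blocks the commit when a leak is found:

```sh
#!/bin/sh
# Abort the commit if varlock finds a resolved secret value in staged files
varlock scan --staged || {
  echo "Commit blocked: potential secret leak detected." >&2
  exit 1
}
```

A hook manager like husky or lefthook can install the same command without editing .git/hooks directly.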

This is especially valuable when working with AI coding tools — the scan command compares your resolved secret values against your codebase to find any that may have been accidentally included in plain text.

We also provide a docs MCP server that lets you search the Varlock docs; see the Varlock documentation for setup details.