config.toml Reference
Codex configuration lives at ~/.codex/config.toml. Project-level overrides go in .codex/config.toml at your repo root.
Precedence Order
Settings load in this order (later overrides earlier):
- System defaults
- System config (/etc/codex/config.toml)
- User config (~/.codex/config.toml)
- Project config (.codex/config.toml)
- Active profile
- CLI flags
A --model flag beats everything. A project config beats your user config.
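For example, if both configs set `model`, the project value wins (paths from the precedence list above; the values are illustrative):

```toml
# ~/.codex/config.toml (user config)
model = "gpt-5.4"

# .codex/config.toml (project config)
model = "gpt-5.3-codex"

# Effective model: "gpt-5.3-codex", unless you pass --model,
# which overrides both.
```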
Essential Settings
Most users need these:
```toml
# Model selection
model = "gpt-5.4"
model_reasoning_effort = "medium"

# How much to ask before acting
approval_policy = "on-request"

# File access
sandbox_mode = "workspace-write"

# Search capability (cached | live | disabled)
web_search = "live"
```
Approval Policies
| Policy | Behavior |
|---|---|
| `untrusted` | Ask before everything |
| `on-request` | Ask for risky actions (Auto, the default) |
| `never` | Full auto, no prompts |
| `granular` | Fine-grained control |
For granular control:
```toml
approval_policy = { granular = { sandbox_approval = true, rules = true, mcp_elicitations = true, request_permissions = false, skill_approval = false } }
```
Sandbox Configuration
Modes
- `read-only` — Can’t modify files
- `workspace-write` — Can write to project
- `danger-full-access` — No restrictions (use carefully)
Workspace-Write Details
```toml
sandbox_mode = "workspace-write"

[sandbox_workspace_write]
network_access = true
writable_roots = ["/tmp/build", "~/.cache/codex"]
exclude_slash_tmp = false
exclude_tmpdir_env_var = false
```
`writable_roots` adds paths outside your project that Codex can write to.
Profiles
Define named configurations for different contexts:
```toml
[profiles.ci]
model = "gpt-5.4-mini"
approval_policy = "never"
web_search = "disabled"

[profiles.trusted]
model = "gpt-5.3-codex"
approval_policy = "never"
sandbox_mode = "workspace-write"

[profiles.paranoid]
approval_policy = "untrusted"
sandbox_mode = "read-only"
```
Activate a profile:
```shell
codex --profile ci
```
Use `ci` for automation pipelines. Use `trusted` for well-tested personal projects. Use `paranoid` for unfamiliar codebases.
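A profile layers on top of your base settings, so only the keys it sets change. A small sketch (values illustrative):

```toml
# Base settings
model = "gpt-5.4"
approval_policy = "on-request"

[profiles.ci]
model = "gpt-5.4-mini"
approval_policy = "never"

# With `codex --profile ci`, the effective model is "gpt-5.4-mini";
# keys the profile leaves unset (e.g. sandbox_mode) fall back to
# the base config per the precedence order above.
```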
Feature Flags
Enable experimental or optional features:
```toml
[features]
multi_agent = true     # Spawn subagents
shell_snapshot = true  # Cache shell environment
unified_exec = true    # PTY-backed execution
```
Model Provider Settings
Custom API endpoints:
```toml
model = "gpt-5.4"
model_provider = "openai"

[model_providers.azure]
base_url = "https://my-resource.openai.azure.com"
env_key = "AZURE_OPENAI_KEY"

[model_providers.ollama]
base_url = "http://localhost:11434/v1"
```
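To route requests through one of the custom providers defined above, point `model_provider` at the matching table key (the model name here is illustrative):

```toml
model = "llama3.3"
model_provider = "ollama"  # selects [model_providers.ollama]
```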
Project Documentation
Control AGENTS.md loading:
```toml
# Alternative file names
project_doc_fallback_filenames = ["TEAM_GUIDE.md", ".agents.md"]

# Max size before truncation
project_doc_max_bytes = 65536
```
MCP Servers
Configure Model Context Protocol servers:
```toml
[mcp_servers.github]
command = "mcp-server-github"
enabled_tools = ["search_repos", "get_file"]
startup_timeout_sec = 10

[mcp_servers.postgres]
command = "mcp-server-postgres"
disabled_tools = ["drop_table"]
enabled = false  # set true to activate
```
History Settings
Control conversation persistence:
```toml
[history]
persistence = "save-all"  # or "none"
```
Agent Configuration
Multi-agent settings:
```toml
[agents]
max_threads = 6
max_depth = 3
```
Subagent model selection is done per-agent via config files referenced in [agents.<name>] sub-tables.
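A sketch of what such a sub-table might look like; the `config_file` key and the path are assumptions for illustration, not confirmed by this reference:

```toml
# Hypothetical: the key name below is illustrative only.
[agents.reviewer]
config_file = "~/.codex/agents/reviewer.toml"
```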
Daily Use Template
Good starting point for interactive work:
```toml
model = "gpt-5.4"
model_reasoning_effort = "medium"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
web_search = "live"

[features]
multi_agent = true

[history]
persistence = "save-all"
```
CI/CD Template
For automation pipelines:
```toml
model = "gpt-5.4-mini"
approval_policy = "never"
sandbox_mode = "workspace-write"
web_search = "disabled"

[sandbox_workspace_write]
network_access = false

[features]
multi_agent = false

[history]
persistence = "none"
```
Use `gpt-5.4-mini` for cost efficiency. Disable web search and network for reproducibility.
Environment
Shell environment handling is configured as a table:
```toml
[shell_environment_policy]
inherit = "core"  # "all" | "core" | "none"
exclude = ["SECRET_KEY", "AWS_*"]
```
`all` passes your full shell environment to subprocesses, `core` keeps essential variables like `PATH`, and `none` uses a clean environment.
Related
- Installation — Get Codex running
- AGENTS.md — Project-specific instructions
- Models — Model selection guide