LOCAL DOCKER BENCHMARK · UPDATED 2026-05-10 11:55 UTC

Config-management tools tested for speed, readability & failure behavior.

A practical benchmark of Ansible, pyinfra, and Salt-SSH, plus a follow-up Salt master/minion architecture test. Targets were local Docker hosts reached over SSH; injected faults covered netem packet loss, stopped containers, stale IPs, and mid-run kills.
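
For context on the fault injection, a minimal sketch in the spirit of the harness (scripts/run_benchmarks.py). The interface name, helper names, and container naming are assumptions on my part, and tc needs root:

    import subprocess

    def add_netem(iface: str = "docker0", delay: str = "80ms", loss: str = "5%") -> None:
        # Impair every packet crossing the Docker bridge.
        subprocess.run(
            ["tc", "qdisc", "add", "dev", iface, "root", "netem",
             "delay", delay, "loss", loss],
            check=True,
        )

    def clear_netem(iface: str = "docker0") -> None:
        subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"], check=True)

    def kill_mid_run(container: str) -> None:
        # Simulate a target dying while the tool is applying state.
        subprocess.run(["docker", "kill", container], check=True)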

Best default: Ansible (readable, popular, strongest failure/idempotence UX)
Fastest feedback loop: pyinfra (7.456s cold vs Ansible 26.880s and Salt-SSH 141.408s)
All-tools fault checks: 21/21 (expected outcomes matched in the fresh run)
Salt scale test: 10×500 (10 minions, 500 tiny files each, harsh netem and kill scenarios)

Recommendation

Use Ansible as the beginner-friendly default. It was not the fastest, but it had the clearest playbooks, most popular ecosystem, and most explicit failure/idempotence reporting. Use pyinfra when speed and Python-native configuration matter most. Avoid Salt-SSH as a general beginner-default; consider full Salt master/minion only when you specifically want Salt's persistent-agent architecture.
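
As a taste of pyinfra's Python-native configuration, here is a minimal deploy sketch; the file names and managed paths are hypothetical, not the benchmark's actual deploy:

    # deploy.py -- minimal pyinfra deploy sketch.
    # Run with: pyinfra inventory.py deploy.py
    from pyinfra.operations import files

    files.directory(
        name="Ensure app directory exists",
        path="/opt/app",
    )

    files.file(
        name="Drop a marker file",
        path="/opt/app/.deployed",
    )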

Decision scores: Ansible 26/30 · pyinfra 25/30 · Salt-SSH 15/30

Agentless tool speed

Fresh all-tools run on 5 Docker SSH targets after fixing the Salt state. Shorter times are better; the × multipliers show each result relative to the fastest tool in that scenario.

Tool       Cold apply         Warm rerun        80ms + 5% loss    Decision score
pyinfra    7.456s   (1.0×)    5.386s  (1.0×)    12.490s (1.0×)    25/30
Ansible    26.880s  (3.6×)    26.989s (5.0×)    32.555s (2.6×)    26/30
Salt-SSH   141.408s (19.0×)   43.704s (8.1×)    52.510s (4.2×)    15/30

pyinfra: fastest by far; great if Python-as-config is acceptable.
Ansible: best default; readable, popular, clear failures/idempotence.
Salt-SSH: works, but slow and quirky for agentless SSH here.

Failure behavior

Fault tests checked whether tools failed loudly instead of silently succeeding, and whether they recovered after inventory refresh.
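
A minimal sketch of the pass/fail rule implied above; the real logic lives in scripts/run_benchmarks.py, so this reconstruction is an assumption:

    import subprocess
    import time

    def run_fault_scenario(cmd: list[str], expect_failure: bool) -> dict:
        """Run a tool against a faulted target. A scenario passes when the
        exit status matches the expectation: loud failure, never silent
        success."""
        start = time.monotonic()
        proc = subprocess.run(cmd, capture_output=True, text=True)
        failed_loudly = proc.returncode != 0
        return {
            "seconds": round(time.monotonic() - start, 3),
            "passed": failed_loudly == expect_failure,
        }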

Tool       Stopped host   Stale IP     Refreshed IP   Mid-run kill
Ansible    ✓ 30.533s      ✓ 10.737s    ✓ 13.284s      ✓ 21.806s
pyinfra    ✓ 15.930s      ✓ 12.368s    ✓ 3.207s       ✓ 10.632s
Salt-SSH   ✓ 44.236s      ✓ 69.553s    ✓ 28.152s      ✓ 43.411s

Salt-SSH vs Salt master/minion

Follow-up run with 10 Salt minions and 500 tiny files per minion. Master/minion was faster on clean networks; Salt-SSH survived the deliberately brutal netem test.
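
To picture the 10×500 workload, here is a sketch of generating the per-minion files locally. The real workload is driven by scripts/run_salt_arch_benchmark.py, so the paths and naming here are assumptions:

    from pathlib import Path

    def write_tiny_files(root: str, count: int = 500) -> None:
        # One small, uniquely named file per index; content is deterministic
        # so a warm rerun can verify idempotent no-op behavior.
        base = Path(root)
        base.mkdir(parents=True, exist_ok=True)
        for i in range(count):
            (base / f"file_{i:03d}.txt").write_text(f"payload-{i}\n")

    write_tiny_files("/tmp/bench-files")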

Scenario                 Salt-SSH                       Salt master/minion
Cold 10×500 files        229.537s  survived             97.002s   survived
Warm 10×500 files        103.504s  survived             57.423s   survived
Harsh netem              130.999s  survived             240.106s  timed out / unexpected
3 stopped minions        52.782s   failed as expected   83.408s   failed as expected
2 disconnected minions   174.147s  failed as expected   71.940s   failed as expected
3 killed mid-run         119.238s  failed as expected   94.509s   failed as expected
Recovery run             178.525s  survived             100.652s  survived

Popularity / ecosystem

GitHub API snapshot: Ansible ≈68.5k stars, Salt ≈15.4k, pyinfra ≈5.6k. Ansible remains the ecosystem-safe choice.
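
The snapshot can be reproduced against the public GitHub API; the repo slugs below are the current upstream homes as I understand them, and are an assumption:

    import json
    import urllib.request

    # Unauthenticated requests are rate-limited (60/hour per IP).
    for repo in ("ansible/ansible", "saltstack/salt", "pyinfra-dev/pyinfra"):
        url = f"https://api.github.com/repos/{repo}"
        with urllib.request.urlopen(url) as resp:
            print(repo, json.load(resp)["stargazers_count"], "stars")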

Readability

Ansible YAML is verbose but standard. pyinfra is compact Python. Salt SLS is readable, but setup and debugging were fussier in this benchmark.

Caveat

This is a local Docker/SSH benchmark, not production WAN scale. It is most useful for judging ergonomics, failure behavior, and rough relative speed, not absolute performance.

Evidence & artifacts

Raw files are kept on the host; this public page is the polished overview.

1. Fresh all-tools report: /home/kristjan-variksoo/config-mgmt-bench/results/fresh_full_decision_report.md
2. Salt architecture report: /home/kristjan-variksoo/config-mgmt-bench/results/salt_arch_decision_report.md
3. Salt clean run JSON: /home/kristjan-variksoo/config-mgmt-bench/results/salt-scale-20260510T090030Z/results.json
4. Harness: /home/kristjan-variksoo/config-mgmt-bench/scripts/run_benchmarks.py and scripts/run_salt_arch_benchmark.py