A scriptable, multi-threaded benchmark tool for databases and systems, built on LuaJIT.
sysbench is a versatile, scriptable, multi-threaded performance benchmarking tool based on LuaJIT. It is primarily used for database workloads but can create arbitrarily complex workloads for any system component, providing detailed metrics like latency, throughput, and resource utilization under controlled loads.
Database administrators, DevOps engineers, and performance testers who need to evaluate and compare the performance of database systems (like MySQL or PostgreSQL), storage, CPU, memory, and threading under realistic, high-concurrency conditions.
Developers choose sysbench for its low overhead at high thread counts, extensive statistical reporting including percentiles and histograms, and the flexibility to create custom benchmarks through Lua scripting, making it suitable for simulating real-world, complex workloads.
Scriptable database and system performance benchmark
Reports detailed latency percentiles, histograms, and rate statistics for thorough performance analysis.
Can generate and track hundreds of millions of events per second across thousands of threads while keeping the benchmark's own overhead, and thus its interference with results, low.
Leverages LuaJIT for custom benchmark creation through predefined hooks, allowing simulation of complex, real-world workloads beyond bundled tests.
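To illustrate the hook-based scripting model, here is a minimal sketch of a custom benchmark. The `thread_init`, `event`, and `thread_done` hooks are the standard sysbench callbacks; the busy-loop workload inside `event` is purely illustrative.

```lua
-- my_bench.lua: hypothetical minimal custom benchmark (a sketch, not from the README).
-- Run with something like: sysbench my_bench.lua --threads=4 --time=10 run

-- Called once in each worker thread before the run starts.
function thread_init()
  counter = 0
end

-- Called repeatedly during the run; sysbench times each call
-- and folds the results into its latency statistics.
function event()
  local x = 0
  for i = 1, 10000 do  -- stand-in for a real unit of work
    x = x + i
  end
  counter = counter + 1
end

-- Called once in each worker thread after the run finishes.
function thread_done()
end
```

Real scripts typically replace the busy loop with database queries or other I/O, and can also define command-line options and prepare/cleanup commands.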
Includes ready-to-use tests for databases, file I/O, CPU, memory, threads, and mutex, reducing initial setup time for common performance scenarios.
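A hedged sketch of how the bundled tests are typically invoked; consult `sysbench --help` for the exact options in your version, as flags can vary between releases.

```shell
# CPU test: prime-number computation across 4 threads for 10 seconds
sysbench cpu --threads=4 --time=10 run

# Memory transfer speed test
sysbench memory --memory-block-size=1M run

# File I/O test: create test files, run a random read/write workload, clean up
sysbench fileio --file-total-size=2G prepare
sysbench fileio --file-total-size=2G --file-test-mode=rndrw run
sysbench fileio cleanup
```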
Native Windows builds were dropped in version 1.0, so Windows users must run sysbench under the Windows Subsystem for Linux (WSL), which adds setup complexity.
Building from source involves multiple dependencies and manual configuration, especially for database drivers like MySQL or PostgreSQL, making installation cumbersome on some systems.
All custom benchmarks must be written in Lua, which may not align with teams' existing scripting preferences or expertise, limiting accessibility for non-Lua developers.