A benchmarking tool that measures user-visible latency in interactive Zsh shells, including input lag and command lag.
zsh-bench is a benchmarking tool for interactive Zsh shells that measures user-visible latencies: input lag, command lag, and the first-prompt and first-command lags felt at startup. It helps developers and shell users quantify the performance impact of their Zsh configurations, plugins, and themes, enabling data-driven optimization for a faster, more responsive terminal experience.
Zsh users, shell configuration enthusiasts, plugin developers, and anyone looking to optimize their interactive shell performance for a snappier command-line workflow.
Unlike traditional benchmarks that measure synthetic startup times, zsh-bench focuses on real-world, human-perceivable latencies, providing actionable insights to make your shell feel instantaneous. It includes tools to validate human perception thresholds and compare the performance of various Zsh setups objectively.
Benchmark for interactive Zsh
Measures latencies that directly impact user experience, such as input lag and command lag; the README emphasizes 'user-visible latency' over synthetic startup benchmarks.
Includes human-bench to test human sensitivity to latencies, helping determine what feels instantaneous, based on the 'How fast is fast' section with blind studies.
Benchmarks predefined Zsh configs, plugin managers, and custom setups with detailed results, as shown in extensive tables comparing performance across frameworks.
Supports Docker and user isolation for consistent testing across environments, ensuring reliable comparisons, as described in the usage options.
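The blind-study idea behind human-bench can be illustrated with a toy simulation. Everything below is a hypothetical sketch: the simulated participant, the 50 ms detection threshold, and the function names are illustrative assumptions, not zsh-bench code. In each trial the "participant" either experiences an added delay or no delay, then reports whether lag was felt; detection accuracy above chance means the delay is perceivable.

```python
import random

def simulated_participant(felt_delay_s: float, threshold_s: float = 0.05) -> bool:
    """Stand-in for a human subject: reports lag only above a fixed threshold.
    (Hypothetical model; real studies use people and noisy thresholds.)"""
    return felt_delay_s > threshold_s

def detection_rate(delay_s: float, trials: int = 1000, seed: int = 0) -> float:
    """Fraction of blind trials where the participant's report matches
    whether the delay was actually present."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        lag_present = rng.random() < 0.5          # coin flip: add delay or not
        reported = simulated_participant(delay_s if lag_present else 0.0)
        correct += reported == lag_present
    return correct / trials

# With the 50 ms simulated threshold, a 200 ms delay is detected on every
# trial, while a 10 ms delay leaves accuracy near chance (~0.5): it feels
# instantaneous to this model participant.
```

The takeaway mirrors the README's framing: latencies below a human perception threshold are indistinguishable from zero, so optimizing past that point buys nothing a user can feel.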
Requires Zsh 5.8+ as login shell and Docker for isolation modes, which can be restrictive for users on older systems or without containerization.
Benchmarking can hang if tmux is started without a specific workaround, and it may pollute your command history, as the caveats in the usage section warn.
Interpreting results requires understanding normalized latencies and human perception thresholds, which can make them less accessible to casual users without that technical background.
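A typical run might look like the commands below. This is a sketch based on the project README at the time of writing: the repository URL, the `--isolation docker` flag, and the `ohmyzsh` config name should be verified against `zsh-bench --help` for your version.

```shell
# Clone the benchmark (assumed public GitHub URL).
git clone https://github.com/romkatv/zsh-bench ~/zsh-bench

# Benchmark your own interactive config; requires Zsh 5.8+ as the login shell.
~/zsh-bench/zsh-bench

# Benchmark a predefined config ("ohmyzsh" here) inside a throwaway Docker
# container, leaving the host environment untouched.
~/zsh-bench/zsh-bench --isolation docker ohmyzsh
```

Docker isolation is the safer starting point: it avoids the history-pollution and tmux caveats noted above at the cost of requiring containerization.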