A command-line benchmarking tool that performs statistical analysis across multiple runs to accurately measure and compare shell command execution times.
A drop-in replacement for the MNIST dataset, featuring 70,000 Zalando fashion article images for benchmarking machine learning algorithms.
A powerful .NET library for transforming methods into benchmarks, tracking performance, and sharing reproducible measurement experiments.
A curated list of semantic segmentation papers, code, datasets, and resources across various deep learning frameworks.
A C++ library for microbenchmarking code snippets, providing a framework similar to unit tests for performance measurement.
A LuaJIT-based, scriptable, multi-threaded benchmark tool for databases and systems.
A statistics-driven microbenchmarking library for Rust that provides rigorous performance analysis.
A lightweight, cross-platform C++11 base library providing high-performance utilities like logging, coroutines, JSON, and networking.
A Python package for evaluating and comparing odometry and SLAM algorithm trajectories with support for multiple formats and metrics.
A simple, high-performance, zero-copy C++17 serialization and reflection library with no dependencies.
A high-performance JSON serializer and deserializer for .NET, built on Sigil with extensive optimization.
A benchmark suite comparing Go web frameworks on connection handling, routing, and request-handler performance.
An open-source benchmark suite of continuous control robotic manipulation environments for multi-task and meta reinforcement learning.
A simple, fast, accurate single-header microbenchmarking library for C++11/14/17/20.
A benchmark suite comparing the performance of Go HTTP request routers and web frameworks using real-world API routing structures.
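Several entries above describe the same core technique: timing a command or code path over many runs and summarizing the samples statistically rather than trusting a single measurement. A minimal sketch of that multi-run approach, in Python (the function name, run count, and placeholder command are illustrative, not taken from any listed project):

```python
import statistics
import subprocess
import sys
import time

def benchmark(cmd, runs=10):
    """Time `cmd` over several runs; return (mean, stdev) in seconds.

    Repeats the command, records wall-clock time for each run with a
    monotonic high-resolution clock, then summarizes the samples.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

# Example: time a no-op Python process (placeholder workload).
mean, stdev = benchmark([sys.executable, "-c", "pass"], runs=5)
print(f"mean {mean:.4f}s ± {stdev:.4f}s")
```

Real benchmarking tools go further, warming up caches, detecting outliers, and reporting confidence intervals, but the repeat-and-summarize loop is the common foundation.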
Open-Awesome is built by the community, for the community. Submit a project, suggest an awesome list, or help improve the catalog on GitHub.