A .NET library for safely refactoring critical code paths by comparing old and new implementations in production.
Scientist.NET is a .NET library that enables safe refactoring of critical code paths by running new implementations alongside existing ones in production. It compares results, measures performance, and reports discrepancies without affecting the live system, allowing developers to validate changes with real-world data.
Developers and teams working on .NET applications who need to refactor high-risk, business-critical code with minimal disruption and maximum confidence.
It provides a systematic, data-driven approach to refactoring that reduces risk by comparing old and new behavior in production, offering detailed insights into performance and correctness before fully switching over.
A .NET library for carefully refactoring critical paths. It is a port of GitHub's Scientist library for Ruby.
Runs new candidate code alongside the existing control in production, returning the control value while measuring differences and performance without affecting users, as shown in the basic CanAccess example.
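A minimal sketch of that pattern, modeled on the library's CanAccess example; the User type and the IsCollaborator/HasAccess stubs here are placeholders:

```csharp
using GitHub; // Scientist.NET lives in the GitHub namespace

public class User { public string Name { get; set; } }

public class WidgetPermissions
{
    public bool CanAccess(User user)
    {
        // The control's return value is what callers receive; the candidate
        // runs alongside it only so results and timings can be compared.
        return Scientist.Science<bool>("widget-permissions", experiment =>
        {
            experiment.Use(() => IsCollaborator(user)); // existing, trusted path
            experiment.Try(() => HasAccess(user));      // new path under test
        });
    }

    // Placeholders standing in for the real permission checks.
    bool IsCollaborator(User user) => true;
    bool HasAccess(User user) => true;
}
```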
Allows overriding the default equality check with custom comparison logic for complex types, enabling precise matching criteria, as demonstrated by the Compare method for user objects.
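A sketch of such a custom comparison, assuming a hypothetical User type with a Name property and placeholder lookup methods:

```csharp
using GitHub;

public class User { public string Name { get; set; } }

public class UserLoader
{
    public User GetCurrentUser(string sessionHash)
    {
        return Scientist.Science<User>("current-user", experiment =>
        {
            // Override the default Equals-based check: results match when
            // their Name properties agree, even if other fields differ.
            experiment.Compare((x, y) => x.Name == y.Name);

            experiment.Use(() => LookupFromDatabase(sessionHash)); // control
            experiment.Try(() => LookupFromCache(sessionHash));    // candidate
        });
    }

    // Placeholder lookups standing in for the real implementations.
    User LookupFromDatabase(string hash) => new User { Name = "octocat" };
    User LookupFromCache(string hash) => new User { Name = "octocat" };
}
```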
Supports asynchronous operations and parallel candidate execution to minimize performance impact, with detailed examples for concurrent task management and cancellation tokens.
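A sketch of the async form using ScienceAsync; the discount service and its lookup methods are hypothetical, and the overloads for parallel candidates and cancellation should be checked against the installed version:

```csharp
using System.Threading.Tasks;
using GitHub;

public class DiscountService
{
    // ScienceAsync awaits Task-returning behaviors, so the experiment fits
    // into async call chains without blocking the caller.
    public async Task<decimal> GetDiscountAsync(int productId)
    {
        return await Scientist.ScienceAsync<decimal>("product-discount", experiment =>
        {
            experiment.Use(() => LegacyDiscountAsync(productId));    // control
            experiment.Try(() => RewrittenDiscountAsync(productId)); // candidate
        });
    }

    // Placeholder implementations standing in for real discount calculations.
    Task<decimal> LegacyDiscountAsync(int id) => Task.FromResult(0.10m);
    Task<decimal> RewrittenDiscountAsync(int id) => Task.FromResult(0.10m);
}
```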
Gracefully handles exceptions in candidate code and provides hooks to manage internal errors, ensuring experiments don't crash the application, as outlined in the Thrown method.
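A sketch of the Thrown hook; the feature-flag lookups and the ReportError helper are placeholders:

```csharp
using System;
using GitHub;

public class FeatureFlags
{
    // Thrown is a hook for errors raised inside Scientist's own machinery
    // (comparing, publishing, and so on), so experiment plumbing failures
    // never surface to callers.
    public bool IsEnabled(string flag)
    {
        return Scientist.Science<bool>("feature-flag-check", experiment =>
        {
            experiment.Thrown((operation, exception) =>
                ReportError($"Scientist failed during {operation}", exception));

            experiment.Use(() => LegacyLookup(flag)); // control
            experiment.Try(() => NewLookup(flag));    // candidate
        });
    }

    // Placeholder error reporting and flag lookups.
    void ReportError(string message, Exception ex) => Console.WriteLine($"{message}: {ex.Message}");
    bool LegacyLookup(string flag) => true;
    bool NewLookup(string flag) => true;
}
```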
Explicitly not safe for methods that change data, limiting use to read-behavior experiments and requiring separate handling for writes, as warned in the 'Designing an experiment' section.
Requires implementing custom publishers, comparison logic, and context management to make results actionable, adding initial development effort beyond basic integration.
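A sketch of a minimal custom publisher; the Result property names used here reflect the library's Result type but should be verified against the version you install:

```csharp
using System;
using System.Threading.Tasks;
using GitHub;

// Scientist.NET hands each finished experiment to the configured
// IResultPublisher, which is where results become actionable (logs,
// metrics, dashboards, and so on).
public class ConsoleResultPublisher : IResultPublisher
{
    public Task Publish<T, TClean>(Result<T, TClean> result)
    {
        Console.WriteLine($"{result.ExperimentName}: matched={result.Matched}");
        return Task.CompletedTask;
    }
}

// Registered once at application startup:
// Scientist.ResultPublisher = new ConsoleResultPublisher();
```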
Even with async support, running experiments adds latency, and misconfiguration can lead to significant slowdowns, especially if candidates are not optimized or run synchronously.