A curated collection of gradient boosting research papers with implementations from top machine learning conferences.
Awesome Gradient Boosting Papers is a comprehensive, community-maintained repository that aggregates academic research on gradient and adaptive boosting methods. It serves as a centralized resource for discovering recent advancements, implementations, and applications across domains like machine learning, computer vision, natural language processing, and data mining. The repository organizes papers from major conferences (e.g., NeurIPS, ICML, CVPR, KDD) from 2009 to 2025, with many entries linking to official code for reproducibility.
This repository is designed for machine learning researchers, practitioners, and graduate students who need to stay updated on state-of-the-art gradient boosting techniques and their implementations. It is particularly useful for those working on specialized applications like fairness, robustness, federated learning, or probabilistic prediction who require access to both theoretical papers and practical code.
Developers choose this project because it provides a curated, structured, and up-to-date collection of gradient boosting research with direct links to implementations, lowering the barrier to applying cutting-edge methods. Its unique value lies in bridging academic research and practical use by organizing papers by year and conference while connecting to related awesome-lists for specialized areas like graph classification and fraud detection.
Aggregates papers from 2009 to 2025 across top conferences like NeurIPS, ICML, and KDD, providing a historical lens on boosting evolution.
Many entries include '[Code]' links to official repositories, lowering the barrier to reproducibility and practical application.
Organized by year and conference (e.g., CVPR for vision, ACL for NLP), enabling efficient navigation of domain-specific advancements.
Covers diverse topics like fairness, federated learning, and neural integration, showcasing boosting's adaptability beyond traditional ML.
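The papers collected here all build on the same core idea: fitting weak learners additively to the residuals of the current ensemble. As a minimal, stdlib-only illustration (not taken from any paper in the list; data and hyperparameters are invented for the example), here is squared-error gradient boosting with decision stumps:

```python
# Minimal gradient boosting for squared-error regression using decision
# stumps as weak learners. Stdlib only; the toy data and hyperparameters
# below are illustrative, not drawn from any paper in this collection.

def fit_stump(xs, residuals):
    """Find the threshold split on xs that best fits the residuals (least SSE)."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue  # degenerate split, skip
        lv, rv = sum(left) / len(left), sum(right) / len(right)
        sse = sum((r - lv) ** 2 for r in left) + sum((r - rv) ** 2 for r in right)
        if best is None or sse < best[0]:
            best = (sse, t, lv, rv)
    _, t, lv, rv = best
    return lambda x: lv if x <= t else rv

def gradient_boost(xs, ys, rounds=50, lr=0.1):
    """Additively fit stumps to the residuals (the negative gradient of squared loss)."""
    base = sum(ys) / len(ys)  # initial constant prediction
    stumps = []
    preds = [base] * len(xs)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: base + lr * sum(s(x) for s in stumps)

# Toy step-function data; the ensemble approximates it closely.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
model = gradient_boost(xs, ys)
```

Each round shrinks the residuals by a factor tied to the learning rate, which is why the ensemble converges on the training data; production libraries such as XGBoost, LightGBM, and CatBoost elaborate on exactly this loop with full trees, regularization, and second-order gradients.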
Acts as a passive aggregator without assessing paper impact, code reliability, or methodological soundness, leaving users to vet the quality of each entry themselves.
Code links are static and may break over time without active maintenance, compromising reproducibility for older entries.
Provides only titles and links, with no summaries, key insights, or comparative analysis, so extracting actionable knowledge still requires significant effort.