A curated list of the top 100 most cited deep learning papers from 2012-2016, serving as a foundational reading list.
Awesome Deep Learning Papers is a curated GitHub repository listing the top 100 most cited academic papers in deep learning from 2012 to 2016. It serves as a foundational reading list to help researchers and engineers understand the seminal works that shaped the modern deep learning landscape. The project filters the vast literature to highlight papers with the highest impact and broadest applicability.
Deep learning researchers, graduate students, and practitioners who want a structured, high-quality entry point into the core literature of the field without being overwhelmed by its volume.
It provides a community-vetted, citation-based filter to identify the most influential papers, saving time and offering a clear learning path. Unlike broader paper lists, it focuses on a critical historical period and maintains a limited, manageable size to ensure quality over quantity.
The most cited deep learning papers
The list selects papers based on explicit citation thresholds (e.g., at least 800 citations for papers published in 2012), ensuring only high-impact works are included, as detailed in the 'Awesome list criteria' section of the README.
Papers are categorized into topics like Convolutional Networks and NLP/RNNs, making it easy to navigate specific areas of interest, as shown in the structured Contents section.
Includes sections for datasets, software, books, and tutorials, providing a holistic learning path beyond just paper lists, as seen in the HW/SW/Dataset and Book/Survey/Review sections.
The list was actively maintained with community contributions, using a one-in, one-out process for adding and removing papers to keep the total at 100, though it is now archived, as mentioned in the contributing guide and update notes.
The project has not been maintained since 2017 due to the overwhelming number of new papers, so it does not cover recent research developments, as stated in the notice at the top of the README.
Focuses only on papers from 2012-2016, so it misses both older foundational works (only briefly covered in 'Old Papers') and all post-2016 breakthroughs, which limits its current comprehensiveness.
It is purely a list of papers with links to PDFs; there are no accompanying code repositories or practical examples, which may hinder hands-on learning, though a fetch_papers.py script is provided for bulk-downloading the linked PDFs.
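The repository's fetch_papers.py is not reproduced here, but the underlying idea, scraping the PDF links out of the README and downloading each file, can be sketched as follows. The function names and the link-matching pattern are illustrative assumptions, not the actual script's code:

```python
import re
import urllib.request
from pathlib import Path

# Matches the URL inside a markdown link target, e.g. [pdf](https://.../paper.pdf)
# (assumed pattern; the real README may use other link styles too)
PDF_LINK = re.compile(r"\((https?://[^)\s]+\.pdf)\)")

def extract_pdf_links(markdown_text: str) -> list[str]:
    """Collect every .pdf URL that appears as a markdown link target."""
    return PDF_LINK.findall(markdown_text)

def fetch_papers(markdown_text: str, out_dir: str = "papers") -> None:
    """Download each linked PDF into out_dir, skipping files already present."""
    Path(out_dir).mkdir(exist_ok=True)
    for url in extract_pdf_links(markdown_text):
        target = Path(out_dir) / url.rsplit("/", 1)[-1]
        if not target.exists():
            urllib.request.urlretrieve(url, target)
```

In practice one would pass the raw text of the repository's README.md to fetch_papers; the skip-if-present check makes repeated runs resume cleanly after interrupted downloads.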