A curated list of academic papers and resources for image and video inpainting techniques.
Awesome-Inpainting-Tech is a curated, community-maintained list of academic papers, codebases, and resources dedicated to image and video inpainting. It addresses fragmented research discovery by aggregating methods, from classical patch-based techniques to modern diffusion- and GAN-based models, into a single, automatically generated repository, enabling rapid literature review and direct access to implementations for developers and researchers.
Computer vision researchers, AI engineers, and graduate students who need a comprehensive, up-to-date reference for state-of-the-art inpainting techniques and their code implementations.
Developers choose this over manually searching academic portals because it provides a centralized, chronologically organized, and code-linked resource that is continuously updated by the community, saving significant time in literature review and project scoping.
Lists papers from top conferences such as CVPR, ICCV, and ECCV spanning 2000 to 2026, offering a broad historical view of how inpainting techniques have evolved.
The README is programmatically generated from a CSV file, ensuring consistency and easing maintenance through community pull requests.
Many entries include direct links to official code repositories and project pages, speeding up implementation for researchers and engineers.
Covers both image and video inpainting, addressing the more complex temporal aspects in video, as seen in resources like ProPainter and AVID.
Welcomes pull requests to keep the collection current, leveraging community input to stay updated with the latest research advancements.
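The CSV-to-README generation mentioned above could work roughly as follows. This is a minimal sketch, not the repository's actual build script: the column names (`year`, `venue`, `title`, `paper_url`, `code_url`) and the sample URLs are assumptions for illustration.

```python
import csv
import io

# Hypothetical CSV schema and placeholder URLs -- the repository's real
# source file and columns are not documented here.
SAMPLE_CSV = """year,venue,title,paper_url,code_url
2023,ICCV,ProPainter,https://example.org/propainter,https://example.org/propainter-code
2024,CVPR,AVID,https://example.org/avid,
"""

def csv_to_readme(csv_text: str) -> str:
    """Render CSV rows as a chronologically grouped Markdown list."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    rows.sort(key=lambda r: r["year"], reverse=True)  # newest first
    lines = []
    current_year = None
    for r in rows:
        if r["year"] != current_year:  # emit a heading per year group
            current_year = r["year"]
            lines.append(f"## {current_year}")
        entry = f"- **{r['title']}** ({r['venue']}) [paper]({r['paper_url']})"
        if r["code_url"]:  # code link is optional
            entry += f" [code]({r['code_url']})"
        lines.append(entry)
    return "\n".join(lines)

print(csv_to_readme(SAMPLE_CSV))
```

A community pull request would then only need to edit one CSV row, and the Markdown stays consistent across hundreds of entries.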
Lacks performance comparisons, benchmarks, or subjective ratings, forcing users to independently evaluate which papers are most effective for specific tasks.
Beyond code links, there's no guidance on setup, dependencies, or running the code, requiring users to navigate each repository's often sparse documentation.
With hundreds of papers listed chronologically, it lacks filtering by method type (e.g., GAN vs. diffusion) or application, making it hard to find niche solutions.
Excludes commercial tools, datasets, and practical deployment considerations, limiting its utility for industry teams needing production-ready insights.
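Since the list itself offers no filtering by method type, a reader could filter the underlying CSV locally. The sketch below assumes a hypothetical `keywords` column; the repository's real schema may differ.

```python
import csv
import io

# Hypothetical schema with a keywords column -- an assumption, not the
# repository's documented format.
SAMPLE_CSV = """year,title,keywords
2022,RePaint,diffusion
2019,EdgeConnect,GAN
2023,ProPainter,video propagation
"""

def filter_entries(csv_text: str, keyword: str) -> list[str]:
    """Return titles whose keywords column mentions the given term."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [r["title"] for r in reader
            if keyword.lower() in r["keywords"].lower()]

print(filter_entries(SAMPLE_CSV, "diffusion"))
```

A case-insensitive substring match is crude but enough to narrow hundreds of chronological entries down to, say, diffusion-based methods only.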