A lightweight CoreML model for detecting NSFW content in images, specifically trained to distinguish between suggestive and explicit content.
NSFWDetector is a lightweight CoreML model for detecting not-safe-for-work content in images on iOS devices. It helps developers implement content moderation by distinguishing between appropriate photos and explicit nudity, with specific training to differentiate suggestive social media content from pornographic material. The model runs entirely on-device, ensuring privacy and offline functionality.
iOS developers building apps that handle user-generated images and need content moderation capabilities, particularly social media, messaging, or photo-sharing applications.
Developers choose NSFWDetector for its extremely small size (17 kB), native CoreML integration, and specialized training that focuses on the practical challenge of distinguishing suggestive content from explicit nudity, unlike more general NSFW detection models.
An NSFW (aka porn) detector with CoreML
At only 17 kB, it minimizes app size impact compared to alternatives like Yahoo's open_nsfw, as highlighted in the README's app size section.
Leverages CoreML for efficient on-device inference, ensuring privacy and offline functionality, as stated in the key features.
Specifically trained to distinguish suggestive Instagram-like photos from porn, addressing a practical challenge outlined in the philosophy.
Offers an intuitive Swift interface with completion handlers, making integration straightforward, as demonstrated in the usage example.
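A minimal sketch of that completion-handler interface, following the usage pattern shown in the project's README; the `NSFWDetector.shared` singleton, `check(image:completion:)` method, and `.success(nsfwConfidence:)` case come from that example, while the 0.9 cutoff is purely illustrative, not a recommended default:

```swift
import UIKit
import NSFWDetector

func moderate(_ image: UIImage) {
    // The underlying CoreML model requires iOS 12 or later.
    guard #available(iOS 12.0, *) else { return }

    NSFWDetector.shared.check(image: image, completion: { result in
        switch result {
        case let .success(nsfwConfidence: confidence):
            // 0.9 is an illustrative threshold; tune it for your app.
            if confidence > 0.9 {
                // Likely explicit: hide, blur, or flag the image.
            } else {
                // Likely safe or merely suggestive content.
            }
        default:
            // Detection failed; fall back to a conservative default.
            break
        }
    })
}
```

Because detection runs asynchronously on-device, the completion handler fires without any network round trip, which is what enables the offline and privacy guarantees noted above.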
Limited to iOS devices with CoreML support, making it unsuitable for cross-platform development, as it only integrates with native iOS frameworks.
Requires developers to choose their own confidence thresholds, with no recommended defaults, leading to trial-and-error calibration, as noted in the usage notes.
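In practice, apps wrap the raw confidence score in their own moderation policy. A hypothetical sketch of such a mapping; the tier names and cutoff values here are placeholders that would need calibration against a sample of the app's own images:

```swift
// Hypothetical moderation tiers; the 0.5 and 0.9 cutoffs are
// placeholders to be calibrated against real app content.
enum ModerationDecision {
    case allow, review, block
}

func decision(forNSFWConfidence confidence: Float) -> ModerationDecision {
    switch confidence {
    case ..<0.5: return .allow   // likely safe or mildly suggestive
    case ..<0.9: return .review  // uncertain: queue for human review
    default:     return .block   // likely explicit
    }
}
```

Keeping the thresholds in one place like this makes the trial-and-error tuning the usage notes describe a one-line change rather than a scattered edit.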
The README acknowledges misclassifications on certain pictures and requests feedback, indicating reduced accuracy on edge cases outside its specialized training data.