A Swift camera system for iOS providing easy integration, customizable media capture, and image streaming.
NextLevel is a comprehensive Swift camera framework for iOS that simplifies media capture by providing a powerful yet intuitive API for photo and video recording, multi-clip editing, and real-time image processing. It enables developers to build sophisticated camera features similar to popular social media apps without dealing with the complexities of AVFoundation. The framework supports modern Swift concurrency with async/await and offers extensive camera controls and device support.
iOS developers building camera-intensive applications such as social media apps, video editing tools, augmented reality experiences, or apps requiring custom image processing and computer vision. It is particularly suited for those who want advanced camera functionality without managing low-level AVFoundation details.
Developers choose NextLevel because it abstracts the complexity of AVFoundation while offering a feature-rich API comparable to apps like Snapchat and Instagram, including multi-clip 'Vine-like' recording, ARKit integration, and real-time buffer processing. Its modern async/await support and comprehensive camera controls provide a robust foundation for focusing on unique functionality like computational photography or augmented reality.
Implements native Swift async/await and AsyncStream-based events, introduced with the framework's Swift 6 updates, making asynchronous capture code cleaner and more maintainable.
Offers adjustable focus, exposure, white balance, zoom, flash, and frame rate settings, abstracting AVFoundation complexities while providing fine-grained control similar to social media apps.
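A minimal sketch of what those controls look like in practice. The property names (`flashMode`, `torchMode`, `videoZoomFactor`, `focusMode`) follow NextLevel's documented API surface, but treat them as assumptions and verify against the version you use:

```swift
import NextLevel

// Hedged sketch: configuring common capture controls on the shared instance.
let nextLevel = NextLevel.shared
nextLevel.flashMode = .auto                 // flash behavior for photo capture
nextLevel.torchMode = .off                  // torch behavior for video
nextLevel.videoZoomFactor = 2.0             // digital zoom factor
nextLevel.focusMode = .continuousAutoFocus  // continuous autofocus
```

Each setting maps onto an underlying AVFoundation device configuration, so the framework handles device locking and capability checks for you.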
Enables 'Vine-like' video clip recording and seamless merging into a single video, with APIs for pause/resume and clip management, simplifying edited video creation.
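The clip workflow can be sketched roughly as follows, based on the `record()`, `pause()`, and `mergeClips` calls shown in NextLevel's README; check the current API before relying on exact signatures:

```swift
import AVFoundation
import NextLevel

// Hedged sketch of 'Vine-like' multi-clip recording.
final class ClipRecorder {
    func startClip() {
        NextLevel.shared.record()   // begin (or resume) recording a clip
    }

    func endClip() {
        NextLevel.shared.pause()    // pause; the finished clip joins the session
    }

    func exportMergedVideo() {
        // Merge all recorded clips into a single video file.
        NextLevel.shared.session?.mergeClips(usingPreset: AVAssetExportPresetHighestQuality) { url, error in
            if let url = url {
                print("merged video at \(url)")
            } else if let error = error {
                print("merge failed: \(error)")
            }
        }
    }
}
```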
Supports raw photo capture, depth data, portrait effects matte, and ARKit integration (beta), catering to sophisticated photography and augmented reality applications.
Allows real-time sample buffer analysis and modification for computer vision, with hooks for custom rendering and frame manipulation, though it may impact frame rate.
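Buffer access is exposed through delegate callbacks. The sketch below assumes the `NextLevelVideoDelegate` protocol and its `willProcessRawVideoSampleBuffer` callback as named in the README; the protocol declares additional methods not shown here, so verify the full conformance requirements in your version:

```swift
import AVFoundation
import NextLevel

// Hedged sketch: inspecting raw frames before they are written.
final class FrameAnalyzer: NSObject, NextLevelVideoDelegate {
    func nextLevel(_ nextLevel: NextLevel,
                   willProcessRawVideoSampleBuffer sampleBuffer: CMSampleBuffer,
                   onQueue queue: DispatchQueue) {
        // Inspect or modify pixel data here, e.g. feed it to a Vision request.
        if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
            let width = CVPixelBufferGetWidth(pixelBuffer)
            _ = width // placeholder for custom per-frame processing
        }
    }
    // NextLevelVideoDelegate declares further required methods; omitted here.
}
```

Keeping per-frame work lightweight on this queue matters, since heavy processing here is exactly what causes the frame-rate drop noted below.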
Requires iOS 16.0+ for modern features, excluding support for older devices and potentially limiting app reach for projects needing broader compatibility.
Needs specific compiler flags (-D USE_ARKIT, -D USE_TRUE_DEPTH) and careful configuration to avoid App Store rejection, adding setup overhead and risk if misconfigured.
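For reference, these flags are typically added to the target's Swift compiler flags in Xcode. A sketch of the setting, assuming the flag names from the README; confirm the exact location for your Xcode version:

```
Xcode target → Build Settings → Other Swift Flags (Debug and Release):
    -D USE_ARKIT -D USE_TRUE_DEPTH
```

Omitting the flags leaves the ARKit and TrueDepth code paths compiled out, which is what avoids App Store review issues for apps that do not declare those capabilities.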
Enabling custom buffer rendering can reduce frame rate, as acknowledged in the README, making it less ideal for real-time applications where performance is critical.
The extensive API and configuration options, such as audio session management for Bluetooth support, can be overwhelming for developers new to iOS media capture.