A private, automatic macOS app that watches your screen and uses AI to generate a timeline of your daily activities.
Dayflow is a native macOS application that automatically captures screen activity and uses AI to generate a private, contextual timeline of your workday. It replaces manual time tracking with an accurate, automated record of what you actually accomplished, distinguishing productive work from distractions. The app is designed to be lightweight and privacy-focused, keeping all data on your device.
Professionals and knowledge workers on macOS who want automated, insightful time tracking without compromising privacy, including founders, engineers, students, researchers, marketers, salespeople, and freelancers.
Users choose Dayflow for its combination of automatic, context-aware tracking, strong privacy controls with local-first data storage, and the flexibility to use various AI providers (cloud or local). It provides deeper insights than basic app trackers while remaining efficient and extensible.
The automatic work journal. Privately turns your screen into a timeline of what you actually accomplished. Open-source and local-first.
Uses AI to distinguish between activities like research vs. entertainment within the same app, providing more meaningful insights than basic app usage loggers.
Data stays on your Mac with options for fully local AI processing via Ollama or LM Studio, ensuring user control and avoiding cloud dependencies.
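As a concrete illustration of the local-first option, a typical Ollama setup looks like the following sketch. The model name is just an example, and pointing Dayflow at the local endpoint through its provider settings is an assumption, since the exact setting names are not documented here:

```shell
# Pull a local model once (example model; any Ollama-hosted model works)
ollama pull llama3.1

# Start the local inference server. By default it listens on
# http://localhost:11434, which is the endpoint a local-first app
# like Dayflow would be configured to use (assumed, not documented here).
ollama serve
```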
Lightweight operation with ~100MB RAM and <1% CPU usage per the README, minimizing system impact while tracking screen activity.
Supports multiple providers: Gemini (cloud), local models, and ChatGPT/Claude (paid), allowing customization based on cost, privacy, and quality needs.
Only available for macOS 13+, excluding Windows, Linux, and older Mac versions, which limits its potential user base.
Requires configuration like API keys for Gemini, local server setup for Ollama, or paid subscriptions for ChatGPT/Claude, adding friction to initial use.
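For the Gemini (cloud) path, the friction is mainly obtaining and supplying an API key. A minimal sketch, assuming the key is exported as an environment variable (a common convention; the exact mechanism Dayflow uses is not specified in this entry):

```shell
# Get a key from Google AI Studio, then export it under the
# conventional variable name (assumed convention, not Dayflow-specific).
export GEMINI_API_KEY="your-key-here"

# Quick sanity check that the key is accepted by the Gemini API:
# list the available models.
curl -s "https://generativelanguage.googleapis.com/v1beta/models?key=${GEMINI_API_KEY}" | head
```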
Local AI processing involves 33+ LLM calls per analysis; it is GPU-intensive, drains the battery faster, and may produce lower-quality summaries than cloud models.
The journal feature is in beta and requires an access code, making it less accessible and potentially unstable for general use.