A visual, low-code data preparation tool that generates Python code for ETL, reporting, and AI-assisted workflows.
Amphi ETL is a visual data preparation tool that allows users to build data transformation pipelines through a low-code interface. It generates native Python code using libraries like pandas and DuckDB, enabling portable and customizable ETL, reporting, and AI-assisted workflows. The platform is designed to simplify data preparation while maintaining the flexibility of code.
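Because the output is plain Python, a pipeline assembled visually compiles down to ordinary pandas code. The following is a minimal sketch of what such generated code could look like; the dataset, column names, and steps are hypothetical illustrations, not actual Amphi output:

```python
import pandas as pd

# Hypothetical extract step: in a real pipeline this would typically
# be pd.read_csv("orders.csv") or a database read.
orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "region": ["EU", "US", "EU", "US"],
    "amount": [120.0, 80.0, None, 200.0],
})

# Transform step: drop rows with missing amounts, then aggregate by region.
clean = orders.dropna(subset=["amount"])
summary = clean.groupby("region", as_index=False)["amount"].sum()

# Load step: in a real pipeline, e.g. summary.to_csv("summary.csv").
print(summary)
```

Since the result is self-contained pandas code, it can be versioned, edited, and run anywhere Python runs, independent of the visual editor.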
Data analysts, data engineers, and data scientists who need to create and maintain data pipelines without extensive coding, or who want to accelerate development with visual tools and AI assistance.
Developers choose Amphi for its balance of visual simplicity and code-generation power, allowing rapid pipeline development without locking them into a proprietary system. Its self-hosted nature and extensibility provide full control and customization for diverse data workflows.
Visual data prep powered by Python
The drag-and-drop interface accelerates pipeline development and reduces maintenance time.
Generates native Python code using pandas and DuckDB, so pipelines can run anywhere and be customized freely, with no vendor lock-in.
Can be self-hosted on a laptop or on cloud infrastructure, keeping data under your control and avoiding exposure to third parties.
Supports embedding Python or SQL directly into pipelines and adding custom components, providing flexibility for advanced use cases.
Designed to work alongside AI assistants such as ChatGPT and Claude, boosting productivity in data preparation.
Focused on lightweight ETL and reporting, it lacks support for real-time data streaming, a gap for applications that need low-latency pipelines.
As a newer project, it has a smaller community and fewer pre-built components compared to established tools like Apache Airflow or dbt, limiting out-of-the-box functionality.
Creating custom components requires knowledge of both TypeScript/JavaScript and Python, a steep learning curve for non-developers.
Collects anonymous telemetry by default (opt-out available), which may conflict with strict privacy policies.