# Fast first pipeline
Install the runner, connect a source and destination, and materialize bronze, silver, and gold assets without hand-authoring the first dbt project.
Skippr is the AI Data Agent for EL(T)M. It takes you from raw source data to reviewable dbt output and AI-ready warehouse assets fast, while keeping row-level data on the machine running `skippr` and in your destination.
```shell
curl -fsSL https://install.skippr.io/install.sh | sh
```

```powershell
irm https://install.skippr.io/install.ps1 | iex
```

Skippr is a compiled runner. For the first pipeline you also need Python 3.10+, dbt-core, and the adapter for your destination, because Skippr generates and validates standard dbt output.
See Install for the full setup.
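Before starting a quick start, it is worth confirming the Python prerequisite up front. A minimal sketch of that check follows; the `python_ok` helper is illustrative and not part of the Skippr CLI:

```python
import sys

# dbt-core, which Skippr's generated projects run on, requires Python 3.10+.
def python_ok(version_info=sys.version_info):
    """Return True when the interpreter meets dbt's minimum version."""
    return tuple(version_info[:2]) >= (3, 10)

if __name__ == "__main__":
    print("Python OK" if python_ok() else "Python 3.10+ required for dbt")
```

Running this before installing dbt saves a confusing failure later in the pipeline.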
| Path | Best for | Guide |
|---|---|---|
| Snowflake | Production-style evaluation with a warehouse most teams already use | Quick Start: Snowflake |
| PostgreSQL | Local evaluation without a cloud warehouse account | Quick Start: PostgreSQL |
| BigQuery | GCP-first evaluation path | Quick Start: BigQuery |
See How It Works for the full pipeline flow and how it maps to the CLI phases.
Row-level data moves only between the machine running `skippr` and your destination. It is not sent through Skippr's cloud path.

Skippr supports databases, warehouses, object stores, streaming systems, and operational sinks. The strongest starting points for evaluation are Snowflake, PostgreSQL, BigQuery, MSSQL, S3, MySQL, MongoDB, DynamoDB, and Kafka.
Connector guides include authentication, permissions or network requirements, and troubleshooting so you can evaluate them quickly.
See the Source Connectors and Destination Connectors guides for provider-specific setup.
| Dependency | Why |
|---|---|
| Python 3.10+ | Required by dbt |
| dbt-core + warehouse adapter | Model compilation and materialization |
| A Skippr account | Provides authentication and cloud-backed control-plane services |
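As a concrete sketch of the Python-side rows in the table above, the dependencies can be captured in a `requirements.txt`. The version pin is an assumption, and `dbt-postgres` stands in for whichever adapter matches your destination (e.g. `dbt-snowflake` or `dbt-bigquery`):

```text
dbt-core>=1.7
dbt-postgres
```

Install with `python3 -m pip install -r requirements.txt`, then confirm the toolchain with `dbt --version`.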