
Skippr

Like Codex, but for data.

Skippr is the AI Data Agent for EL(T)M. It takes you from raw source data to reviewable dbt output and AI-ready warehouse assets fast, while keeping row-level data on the machine running `skippr` and in your destination.

Install

```bash
curl -fsSL https://install.skippr.io/install.sh | sh
```

```powershell
irm https://install.skippr.io/install.ps1 | iex
```

Skippr ships as a compiled runner. To run your first pipeline you also need Python 3.10+, dbt-core, and the dbt adapter for your destination, because Skippr generates and validates standard dbt output.

See Install for the full setup.

Choose an evaluation path

| Path | Best for | Guide |
| --- | --- | --- |
| Snowflake | Production-style evaluation with a warehouse most teams already use | Quick Start: Snowflake |
| PostgreSQL | Local evaluation without a cloud warehouse account | Quick Start: PostgreSQL |
| BigQuery | GCP-first evaluation path | Quick Start: BigQuery |

What Skippr Actually Does

  1. Discover -- reads source metadata and determines the warehouse shape.
  2. Sync -- moves raw data into bronze tables in your destination.
  3. Model -- drafts silver and gold dbt assets for review.
  4. Validate -- compiles and runs the generated project against your destination.
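The Validate step runs the generated dbt project against your destination, and dbt reads warehouse credentials from a connection profile. A minimal sketch of such a profile for a PostgreSQL target, assuming the generated project's profile is named `skippr` (check the `profile:` key in its `dbt_project.yml` for the actual name):

```yaml
# ~/.dbt/profiles.yml -- hypothetical minimal profile for validating the
# generated project against a local PostgreSQL destination.
skippr:
  target: dev
  outputs:
    dev:
      type: postgres
      host: localhost
      port: 5432
      user: analytics
      password: "{{ env_var('DBT_PASSWORD') }}"  # keep secrets out of the file
      dbname: warehouse
      schema: gold
      threads: 4
```

With a profile in place, the standard `dbt compile` and `dbt run` commands reproduce the validation phase by hand.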

See How It Works for the full pipeline flow and how it maps to the CLI phases.

Trust Boundaries

  • Row-level data path: source data moves from the machine running skippr to your destination. It is not sent through Skippr's cloud path.
  • AI input by default: schema metadata is the default model input. Data samples are optional and off by default.
  • Cloud-backed services: authentication, hosted LLM access by default, and control-plane services are cloud-backed.
  • Reviewable output: the result is your warehouse tables plus standard dbt files you can inspect and extend.

Technical Proof

Connectors

Skippr supports databases, warehouses, object stores, streaming systems, and operational sinks. The strongest starting points for evaluation are Snowflake, PostgreSQL, BigQuery, MSSQL, S3, MySQL, MongoDB, DynamoDB, and Kafka.

Connector guides include authentication, permissions or network requirements, and troubleshooting so you can evaluate them quickly.

See Source Connectors and Destination Connectors for provider-specific setup.

Requirements

| Dependency | Why |
| --- | --- |
| Python 3.10+ | Required by dbt |
| dbt-core + warehouse adapter | Model compilation and materialisation |
| A Skippr account | Authentication and cloud-backed control-plane services |