
Skippr: like Codex, but for data.

Skippr is the AI Data Agent for ELtM: extract, load, transform/model, and metadata workflows across supported sources and destinations.

Install

See the Install guide for the full setup, including Windows PowerShell.

curl -fsSL https://install.skippr.io/install.sh | sh

By installing Skippr, you accept the Skippr EULA.

Skippr ships as a compiled runner, but because it generates and validates standard dbt output, your first pipeline also needs Python 3.10+, dbt-core, and the dbt adapter for your destination.

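Once the installer finishes, a quick sanity check confirms the binary is on your PATH. The --version flag is an assumption (most CLIs expose one); consult skippr --help if it differs:

```shell
# Confirm the skippr binary is reachable; print a notice if the install
# has not been run in this shell yet.
if command -v skippr >/dev/null 2>&1; then
  skippr --version   # assumed flag; check `skippr --help` for the real one
else
  echo "skippr not found on PATH"
fi
```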

Choose an evaluation path

| Path | Best for | Guide |
| --- | --- | --- |
| Snowflake | Production-style evaluation with a warehouse most teams already use | Quick Start: Snowflake |
| PostgreSQL | Local evaluation without a cloud warehouse account | Quick Start: PostgreSQL |
| BigQuery | GCP-first evaluation path | Quick Start: BigQuery |

What Skippr Does

  1. Discover -- reads source metadata and determines the destination shape.
  2. Sync -- moves raw data into your configured destination.
  3. Model -- drafts and validates silver and gold dbt assets for review.
  4. Validate -- compiles and runs the generated project against your destination.

See How It Works for the full pipeline flow and how it maps to the CLI phases.
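Because the Model and Validate phases emit a standard dbt project, you can replay the Validate step by hand with stock dbt commands. A sketch, assuming the generated project lands in ./skippr_project (the path is an assumption; use wherever Skippr writes its output):

```shell
PROJECT_DIR="skippr_project"   # hypothetical location of the generated project

if [ -d "$PROJECT_DIR" ]; then
  cd "$PROJECT_DIR"
  dbt compile   # check the generated SQL compiles against your destination
  dbt run       # materialise the silver and gold models
  dbt test      # execute the generated data tests
else
  echo "no generated project at $PROJECT_DIR yet"
fi
```

Since the output is plain dbt, anything in your existing dbt workflow (CI checks, docs generation, custom tests) applies to it unchanged.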

Data and service boundaries

  • Data path: source data moves from the machine running skippr to your destination. It is not sent through Skippr's cloud services.
  • AI input by default: schema metadata is the default model input. Data samples are optional and off by default.
  • Cloud-backed services: authentication, hosted LLM access by default, and control-plane services are cloud-backed.
  • Reviewable output: the result is your warehouse tables plus standard dbt files you can inspect and extend.

Architecture and deep dives

Connectors

Skippr supports databases, warehouses, object stores, streaming systems, and operational sinks. The strongest starting points for evaluation are Snowflake, PostgreSQL, BigQuery, MSSQL, S3, MySQL, MongoDB, DynamoDB, and Kafka.

Each connector guide covers authentication, permission and network requirements, and troubleshooting so you can evaluate it quickly.

See the Source Connectors and Destination Connectors guides for provider-specific setup.

Requirements

| Dependency | Why |
| --- | --- |
| Python 3.10+ | Required by dbt |
| dbt-core + warehouse adapter | Model compilation and materialisation |
| A Skippr account | Authentication and cloud-backed control-plane services |