Minutes, not months
A working data pipeline from first install to materialised dbt models, without writing a single line of SQL or YAML by hand.
Go from raw source data to production-ready dbt models in a single command. Skippr handles extraction, loading, schema mapping, and dbt code generation -- so you can skip the weeks of pipeline plumbing and start querying clean data in minutes.
```shell
curl -fsSL https://install.skippr.io | sh
skippr user login      # log in or create a new account
skippr init my-project
skippr connect warehouse snowflake
skippr connect source mssql
skippr run
```

Five commands. That's extract, load, and a full bronze/silver/gold dbt project -- compiled, validated, and materialised in your warehouse.
See How It Works for the full pipeline breakdown.
**Sources**

| Category | Connectors |
|---|---|
| Databases | MSSQL, MySQL, PostgreSQL, Redshift, MongoDB, DynamoDB, ClickHouse, MotherDuck |
| Object Stores | S3, SFTP, Delta Lake |
| Streaming | Kafka, SQS, Kinesis, AMQP (RabbitMQ), SNS, EventBridge, MQTT, WebSocket |
| HTTP | HTTP Client, HTTP Server |
| Other | Socket (TCP/UDP/Unix), StatsD, Local File, Stdin |
**Destinations**

| Category | Connectors |
|---|---|
| Warehouses | Snowflake, BigQuery, PostgreSQL, Athena (S3 + Glue), Databricks, Synapse, Redshift, ClickHouse, MotherDuck |
| Cloud Storage | GCS, Azure Blob, SFTP |
| Messaging | AMQP (RabbitMQ) |
| Other | Local File, Stdout |
See the Source Connectors and Destination Connectors pages for per-provider setup instructions.
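Wiring up additional connectors follows the same `skippr connect <source|warehouse> <name>` pattern shown in the quickstart. A sketch under assumptions: the connector slugs below (`postgres`, `kafka`) are guesses at the naming convention, so check each connector's page for the exact identifier and required credentials.

```shell
# Hypothetical slugs -- see the per-provider pages for exact names.
skippr connect source postgres
skippr connect source kafka

# Re-run the pipeline to extract and load the newly connected sources.
skippr run
```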
| Dependency | Why |
|---|---|
| Python 3.10+ | Required by dbt |
| dbt-core + warehouse adapter | Model compilation and materialisation |
| A Skippr account | Provides LLM keys, cloud storage, and usage metering |