Redshift

Connect & Ingest data from / to a Redshift database

Setup

The following credential keys are accepted:

  • host (required) -> The hostname or IP address of the instance

  • user (required) -> The username to access the instance

  • database (required) -> The database name of the instance

  • aws_bucket (required) -> The name of the S3 Bucket for Bulk Loading / Unloading

  • aws_access_key_id (required) -> The AWS Access Key ID to access the bucket for Bulk Loading / Unloading

  • aws_secret_access_key (required) -> The AWS Secret Key to access the bucket for Bulk Loading / Unloading

  • aws_session_token (optional) -> The AWS Session token to access the bucket for Bulk Loading / Unloading

  • schema (optional) -> The default schema to use when loading

  • password (optional) -> The password to access the instance

  • port (optional) -> The port of the instance. Default is 5439.

  • ssh_tunnel (optional) -> The URL of the SSH server you would like to use as a tunnel (example ssh://user:password@db.host:22)

  • ssh_private_key (optional) -> The private key to use to access an SSH server (raw string or path to file).

  • ssh_passphrase (optional) -> The passphrase to use to access an SSH server.
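
As a sketch of how the SSH keys above fit together, a connection routed through a bastion host could combine them in the env.yaml file like this (the REDSHIFT_TUNNELED name, bastion.host, and paths are placeholders):

```yaml
connections:
  REDSHIFT_TUNNELED:
    type: redshift
    host: <host>          # hostname as resolved from the SSH server's side
    user: <user>
    password: <password>
    database: <database>
    aws_bucket: <aws_bucket>
    aws_access_key_id: <aws_access_key_id>
    aws_secret_access_key: <aws_secret_access_key>
    ssh_tunnel: ssh://ssh_user:ssh_password@bastion.host:22
    ssh_private_key: /path/to/id_rsa   # optional, if not using an SSH password
```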

Using sling conns

Here are examples of setting a connection named REDSHIFT. We must provide the type=redshift property:

$ sling conns set REDSHIFT type=redshift host=<host> user=<user> database=<database> password=<password> port=<port> aws_bucket=<aws_bucket> aws_access_key_id=<aws_access_key_id> aws_secret_access_key=<aws_secret_access_key>

# OR use url
$ sling conns set REDSHIFT url="redshift://myuser:mypass@host.ip:5439/mydatabase" aws_bucket=<aws_bucket> aws_access_key_id=<aws_access_key_id> aws_secret_access_key=<aws_secret_access_key>

Environment Variable

export REDSHIFT='redshift://myuser:mypass@host.ip:5439/mydatabase'
export AWS_BUCKET='<aws_bucket>'
export AWS_ACCESS_KEY_ID='<aws_access_key_id>'
export AWS_SECRET_ACCESS_KEY='<aws_secret_access_key>'

Alternatively, all properties can be combined into a single variable as an inline payload:

export REDSHIFT='{ type: redshift, url: "redshift://myuser:mypass@host.ip:5439/mydatabase", aws_bucket: "<aws_bucket>", aws_access_key_id: "<aws_access_key_id>", aws_secret_access_key: "<aws_secret_access_key>" }'
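
As a quick sketch of how the connection URL above is assembled (all values below are placeholders), it follows the standard scheme user:pass@host:port/database:

```shell
# Placeholder credential parts -- replace with real values
RS_USER='myuser'
RS_PASS='mypass'
RS_HOST='host.ip'
RS_PORT='5439'
RS_DB='mydatabase'

# Compose the redshift:// URL that the REDSHIFT env var expects
export REDSHIFT="redshift://${RS_USER}:${RS_PASS}@${RS_HOST}:${RS_PORT}/${RS_DB}"
echo "$REDSHIFT"
# prints: redshift://myuser:mypass@host.ip:5439/mydatabase
```

Note that passwords containing special characters (such as @ or /) need to be URL-encoded when placed in the URL form.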

Sling Env File YAML

connections:
  REDSHIFT:
    type: redshift
    host: <host>
    user: <user>
    password: <password>
    port: <port>
    database: <database>
    schema: <schema>
    aws_bucket: <aws_bucket>
    aws_access_key_id: <aws_access_key_id>
    aws_secret_access_key: <aws_secret_access_key>

  REDSHIFT_URL:
    url: "redshift://myuser:mypass@host.ip:5439/mydatabase"
    aws_bucket: <aws_bucket>
    aws_access_key_id: <aws_access_key_id>
    aws_secret_access_key: <aws_secret_access_key>

Database user creation

To allow Sling to access your database, we need to create a user with the proper privileges. Please follow the steps below:

  1. Create a user sling (or any name you prefer) by running:

    CREATE USER sling WITH PASSWORD '<password>';
  2. If you are planning to load data into this connection, grant the following privileges to that user:

    GRANT CREATE ON DATABASE <database_name> TO sling;
  3. If you are planning to extract data from this connection, grant the user permission to read the tables you'd like Sling to extract:

    -- Need this to read table & column names 
    GRANT SELECT ON ALL TABLES IN SCHEMA information_schema TO sling;
    GRANT SELECT ON ALL TABLES IN SCHEMA pg_catalog TO sling;
    
    -- run this to grant SELECT permission to all tables in schema `marketing` to user sling
    GRANT SELECT ON ALL TABLES IN SCHEMA marketing TO sling;
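
To confirm the grants took effect, a check along these lines can be run (marketing.my_table is a placeholder; has_table_privilege is the standard Redshift/Postgres function for this):

```sql
-- Returns true if user sling can SELECT from the given table
SELECT has_table_privilege('sling', 'marketing.my_table', 'select');
```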

Last updated 4 months ago

See here to learn more about the sling env.yaml file.

If you are facing issues connecting, please reach out to us at support@slingdata.io, on Discord, or open a Github Issue here.