SparkPost to Redshift

Hi there! This page will show you the tools you need to get your SparkPost data into Redshift. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)

Pulling Data Out of SparkPost

The first step for getting your SparkPost data into Redshift is collecting that data from SparkPost’s servers. You can do this using webhooks; see SparkPost’s webhooks documentation for the details of the integration.

Data from SparkPost is delivered via user-defined HTTP callbacks. The first thing you need to do is set up the webhook in your SparkPost account. After that, you need somewhere to send the data: typically a dedicated URL that your script listens on.
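
To make this concrete, here is a minimal sketch of such a listener, written in Python using only the standard library. The port number and the choice to append raw events to a local file are assumptions for illustration, not anything SparkPost requires.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class SparkPostWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # SparkPost POSTs a JSON array of event objects in the request body
        length = int(self.headers.get("Content-Length", 0))
        events = json.loads(self.rfile.read(length))

        # Append each event as one JSON object per line; a real pipeline
        # would validate these and queue them for loading into Redshift
        with open("sparkpost_events.jsonl", "a") as f:
            for event in events:
                f.write(json.dumps(event) + "\n")

        # Return 200 so SparkPost knows the batch was received
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Hypothetical port; point your SparkPost webhook at this host and port
    HTTPServer(("", 8080), SparkPostWebhookHandler).serve_forever()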

Sample SparkPost Data

Once you’ve set up your HTTP endpoint, SparkPost will begin sending data, enclosed in the body of each request in JSON format. Below is a sample of what SparkPost sends for an inbound email.

[
  {
    "msys": {
      "relay_message": {
        "content": {
          "email_rfc822": "Return-Path: me@here.com>\r\nMIME-Version: 1.0\r\nFrom: me@here.com\r\nReceived: by 10.114.82.10 with HTTP; Mon, 4 Jul 2016 07:53:14 -0700 (PDT)\r\nDate: Mon, 4 Jul 2016 15:53:14 +0100\r\nMessage-ID: 484810298443-112311-xqxbby@mail.there.com>\r\nSubject: Relay webhooks rawk!\r\nTo: you@there.com\r\nContent-Type: multipart/alternative; boundary=deaddeaffeedf00fall45dbhail980dhypnot0ad\r\n\r\n--deaddeaffeedf00fall45dbhail980dhypnot0ad\r\nContent-Type: text/plain; charset=UTF-8\r\nHi there SparkPostians.\r\n\r\n--deaddeaffeedf00fall45dbhail980dhypnot0ad\r\nContent-Type: text/html; charset=UTF-8\r\n\r\nHi there SparkPostians\r\n\r\n--deaddeaffeedf00fall45dbhail980dhypnot0ad--\r\n",
          "email_rfc822_is_base64": false,
          "headers": [
            {
              "Return-Path": "me@here.com"
            },
            {
              "MIME-Version": "1.0"
            },
            {
              "From": "me@here.com"
            },
            {
              "Received": "by 10.114.82.10 with HTTP; Mon, 4 Jul 2016 07:53:14 -0700 (PDT)"
            },
            {
              "Date": "Mon, 4 Jul 2016 15:53:14 +0100"
            },
            {
              "Message-ID": "484810298443-112311-xqxbby@mail.there.com"
            },
            {
              "Subject": "Relay webhooks rawk!"
            },
            {
              "To": "you@there.com"
            }
          ],
          "html": "Hi there SparkPostians",
          "subject": "We come in peace",
          "text": "Hi there SparkPostians.",
          "to": [
            "your@yourdomain.com"
          ]
        },
        "customer_id": "1337",
        "friendly_from": "me@here.com",
        "msg_from": "me@here.com",
        "rcpt_to": "you@there.com",
        "webhook_id": "4839201967643219"
      }
    }
  }
]

Preparing SparkPost Data for Redshift

Now you need to map all of those data fields into a schema that can be inserted into your Redshift database. This means that, for each value in the response, you need to identify a predefined data type (e.g., INTEGER or TIMESTAMP) and build a table that can receive them.

The SparkPost documentation can give you a good sense of what fields will be provided by each endpoint, along with their corresponding data types. Once you have identified all of the columns you will want to insert, use the CREATE TABLE statement in Redshift to define a table that can receive all of this data.
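
As a sketch of what this might look like for the relay-message sample above, the statement below creates a narrow table for a few of its fields. The table name, column choices, and sizes are illustrative assumptions, not a schema SparkPost prescribes.

CREATE TABLE sparkpost_relay_messages (
    webhook_id    VARCHAR(64),
    customer_id   VARCHAR(32),
    friendly_from VARCHAR(256),
    msg_from      VARCHAR(256),
    rcpt_to       VARCHAR(256),
    subject       VARCHAR(512)
);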

Inserting SparkPost Data into Redshift

It may seem like the easiest way to add your data is to build tried-and-true INSERT statements that add data to your Redshift table row by row. If you have any experience with SQL, this will be your gut reaction, and it will work, but it isn’t the most efficient way to get the job done.

Redshift actually offers some good documentation on how best to bulk load data into new tables. The COPY command is particularly useful for this task, as it allows you to insert multiple rows without needing to build individual INSERT statements for each row.
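
For instance, if your script stages each batch of events in Amazon S3 as flattened JSON, one object per line with keys matching your column names, a COPY along these lines loads them in bulk. The bucket path and IAM role below are placeholders you would replace with your own.

COPY sparkpost_relay_messages
FROM 's3://your-bucket/sparkpost/events/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
JSON 'auto';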

If you cannot use COPY, it might help to use PREPARE to create a prepared INSERT statement, and then use EXECUTE as many times as required. This avoids some of the overhead of repeatedly parsing and planning INSERT.
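
A rough sketch of that pattern, reusing the illustrative table from above and values from the sample payload:

PREPARE insert_event (VARCHAR, VARCHAR, VARCHAR)
AS INSERT INTO sparkpost_relay_messages (webhook_id, msg_from, rcpt_to)
VALUES ($1, $2, $3);

EXECUTE insert_event ('4839201967643219', 'me@here.com', 'you@there.com');
EXECUTE insert_event ('4839201967643219', 'me@here.com', 'them@there.com');

DEALLOCATE insert_event;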

Keeping Data Up-To-Date

So what’s next? You’ve built a script that collects data from SparkPost and moves it into Redshift. What happens when SparkPost sends a data type that your script doesn’t recognize? It’s also important to consider the situation where an entry in Redshift needs to be updated to a new value; one way to handle that is sketched below. Once you’ve built in that functionality, you can set your script up as a cron job or continuous loop to keep loading new data as it arrives.
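
For updates, one common Redshift pattern is a staging-table merge: load incoming rows into a temporary table, delete the existing rows they replace, and insert the new versions. A minimal sketch, assuming webhook_id identifies the rows to replace (in practice you would use whatever key is unique in your data):

BEGIN;

CREATE TEMP TABLE staging (LIKE sparkpost_relay_messages);

-- load new and changed rows into the staging table here, e.g. with COPY

DELETE FROM sparkpost_relay_messages
USING staging
WHERE sparkpost_relay_messages.webhook_id = staging.webhook_id;

INSERT INTO sparkpost_relay_messages
SELECT * FROM staging;

DROP TABLE staging;

COMMIT;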

Other Data Warehouse Options

Redshift is totally awesome, but sometimes you need to start smaller or optimize for different things. In this case, many people choose to get started with Postgres, an open source RDBMS whose SQL syntax is nearly identical to Redshift’s. If you’re interested in seeing the relevant steps for loading this data into Postgres, check out SparkPost to Postgres.

Easier and Faster Alternatives

If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.

Thankfully, products like Stitch were built to solve this problem automatically. With just a few clicks, Stitch starts extracting your SparkPost data via the webhook API, structuring it in a way that is optimized for analysis, and inserting that data into your Redshift data warehouse.