With Portable, integrate Affinity data with your Redshift warehouse in minutes. Access data from your AI-powered relationship intelligence platform in Redshift without managing cumbersome ETL scripts.
The Two Paths to Connect Affinity to Amazon Redshift
There are two ways to sync data from Affinity into your data warehouse for analytics.
Method 1: Manually Developing a Custom Data Pipeline Yourself
Write code from scratch or use an open-source framework to build an integration between Affinity and Redshift.
Method 2: Automating the ETL Process with a No-Code Solution
Leverage a pre-built connector from a cloud-hosted solution like Portable.
How to Create Value with Affinity Data
Teams connect Affinity to their data warehouse to build dashboards and generate value for their business. Let’s dig into the capabilities Affinity exposes via their API, outline insights you can build with the data, and summarize the most common analytics environments that teams are using to process their Affinity data.
Extract: What Data Can You Extract from the Affinity API?
Affinity is an AI-powered relationship intelligence platform used for managing customer relationships across applications.
To help clients power downstream analytics, Affinity offers an application programming interface (API) for clients to extract data on business entities. Here are a few example entities you can extract from the API:
- List Entries
- Field Values
- Field Value Changes
- Relationship Strengths
- Entity Files
You can visit the Affinity API Documentation to explore the entire catalog of available API resources and the complete schema definition for each.
As you think about the data you will need for analytics, don’t forget that Portable offers no-code integrations to other similar applications.
Regardless of the SaaS solution you use, it’s important to find an AI-powered relationship intelligence platform with robust data available for analytics.
Load: Which Destinations Are Best for Your Affinity ETL Pipeline?
To turn raw data from Affinity into dashboards, most companies centralize information into a data warehouse or data lake. For Portable clients, the most common ETL pipelines are:
- Affinity to Snowflake Integration
- Affinity to Google BigQuery Integration
- Affinity to Amazon Redshift Integration
- Affinity to PostgreSQL Integration
Once you have a destination to load the data, it’s common to combine Affinity data with information from other enterprise applications like Jira, Mailchimp, HubSpot, Zendesk, and Klaviyo.
From there, you can build cross-functional dashboards in a visualization tool like Power BI, Tableau, Looker, or Retool.
Develop: Which Dashboards Should You Build with Affinity Data?
Now that you have identified the data you want to extract, the next step is to plan out the dashboards you can build with the data.
As a process, you consume raw data, overlay SQL logic, and build a dashboard that either 1) increases revenue or 2) decreases costs.
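For example, here is a minimal sketch of that pattern in Python, querying a hypothetical affinity.list_entries table in Redshift via the psycopg2 driver (Redshift speaks a Postgres-compatible protocol). The table name, column names, and connection details are illustrative assumptions, not Affinity's actual schema.

import psycopg2

# Connect to Redshift (all connection details are placeholders).
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="analyst",
    password="REPLACE_ME",
)

# Hypothetical metric: new list entries per week, assuming a
# replicated affinity.list_entries table with a created_at column.
query = """
    SELECT DATE_TRUNC('week', created_at) AS week,
           COUNT(*) AS new_list_entries
    FROM affinity.list_entries
    GROUP BY 1
    ORDER BY 1;
"""

with conn, conn.cursor() as cur:
    cur.execute(query)
    for week, new_entries in cur.fetchall():
        print(week, new_entries)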
Replicating Affinity data into your cloud data warehouse can unlock a wide array of opportunities to power analytics, automate workflows, and develop products. The use cases are endless.
Now that we have a clear sense of the insights we can create, let’s compare the process of developing a custom Affinity integration with the benefits of using a no-code ETL solution like Portable.
Method 1: Building a Custom Affinity ETL Pipeline
To build your own Affinity integration, there are three steps:
- Navigate the Affinity API documentation
- Make your first API request
- Turn an API request into a complete data pipeline
Let’s walk through the process in more detail.
How to Interpret Affinity’s API Documentation
When reading API documentation, there are a handful of key concepts to consider.
There are many common authentication mechanisms: OAuth 2.0 (authorization code and client credentials), API keys, JWT tokens, personal access tokens, basic authentication, and more. For Affinity, it’s important to identify the authentication mechanism and how best to incorporate the necessary credentials into your API requests.
To use the API, you will need to generate an API key. This can be done through the Settings Panel, accessible from the left sidebar of the Affinity web app. For more support, see the How to obtain your API Key article in Affinity’s Help Center.
Requests are authenticated using HTTP Basic Auth: provide your API key as the basic auth password and leave the username empty.
Affinity currently supports one API key per user on your team. Once you have generated a key, you must pass it with every API request; otherwise, the API returns a 401 error.
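As an illustration, here is a minimal Python sketch of that authentication pattern using the requests library. The environment variable name is a placeholder; the empty username with the API key as the Basic Auth password follows the behavior described above.

import os
import requests

# Affinity authenticates via HTTP Basic Auth: empty username,
# API key as the password. AFFINITY_API_KEY is a placeholder name.
api_key = os.environ["AFFINITY_API_KEY"]

response = requests.get(
    "https://api.affinity.co/lists",
    auth=("", api_key),  # empty username, API key as password
    timeout=30,
)
response.raise_for_status()
print(response.json())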
It’s important to identify the Affinity API endpoints you want to use for analytics. Most APIs offer a combination of GET, POST, PUT, and DELETE request methods; however, for analytics, GET requests are typically the most useful. At times, POST requests can be used to extract data as well.
For Affinity, the lists endpoint is a great place to get started.
For each API endpoint you would like to use for analytics, you need to understand the method (GET, POST, PUT, or DELETE) and the URL, but there are other considerations to take into account as well. You should look out for pagination mechanics, query parameters, and parameters that are added to the request path.
Affinity uses a next_page_token parameter for pagination, as sketched below.
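In practice, the pagination loop tends to look like this Python sketch, which fetches pages of list entries until no next_page_token is returned. The page_size value, the page_token request parameter, and the exact response shape are assumptions to verify against Affinity's API documentation.

import os
import requests

api_key = os.environ["AFFINITY_API_KEY"]  # placeholder name
list_id = 12345  # hypothetical list ID

url = f"https://api.affinity.co/lists/{list_id}/list-entries"
params = {"page_size": 500}  # page size assumed; check the docs
entries = []

while True:
    response = requests.get(url, params=params, auth=("", api_key), timeout=30)
    response.raise_for_status()
    payload = response.json()
    entries.extend(payload.get("list_entries", []))
    token = payload.get("next_page_token")
    if not token:
        break  # no more pages
    params["page_token"] = token  # parameter name assumed from the docs

print(f"Fetched {len(entries)} list entries")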
How Do You Call the Affinity API? (Tutorial)
- Follow the instructions above to read the Affinity API documentation
- Identify and collect your credentials for authentication
- Pick the API resource you want to pull data from
- Configure the necessary parameters, method, and URL to make your first request (e.g. with curl or Postman)
- Add your credentials and make your first API call. Here is an example request using curl (without real credentials):
curl 'https://api.affinity.co/lists' -u :$APIKEY
How Do You Maintain a Custom Affinity to Redshift ETL Pipeline?
Making a call to the Affinity API is just the first step in building and maintaining a complete custom ETL pipeline.
Here is a getting-started guide to building a production-grade pipeline for Affinity:
- For each API endpoint, define schemas (which fields exist and the type for each)
- Process the API response and parse the data (typically JSON or XML); see the parsing sketch after this list
- Handle and replicate nested objects and custom fields
- Identify which Affinity fields are primary keys and which keys are required vs. optional
- Version control your changes in a git-based workflow (using GitHub, GitLab, etc.)
- Handle code dependencies in your toolchain and the upgrades that come with each
- Monitor the health of the upstream API and, when things go wrong, troubleshoot via the status page, reach out to support, and open tickets
- Handle error codes (HTTP error codes like 400s, 500s, etc.)
- Manage and respect rate limits imposed by the server
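To make the schema and parsing steps concrete, here is a hedged Python sketch that defines a typed record for a list entry and parses one JSON object into it. The field names mirror common Affinity response fields but should be treated as assumptions until verified against the API's schema definitions.

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class ListEntry:
    # Assumed fields; confirm names, types, and which are required
    # against Affinity's schema definition for this endpoint.
    id: int                    # primary key
    list_id: int               # required
    entity_id: int             # required
    created_at: Optional[str]  # optional timestamp

def parse_list_entry(raw: dict[str, Any]) -> ListEntry:
    """Parse one JSON object from the API response into a typed record."""
    return ListEntry(
        id=raw["id"],
        list_id=raw["list_id"],
        entity_id=raw["entity_id"],
        created_at=raw.get("created_at"),
    )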
We won’t go into detail on all of the items above, but rate limits are a great example of the complexity found in a production-grade data pipeline.
Affinity’s monthly API quotas vary by plan:
- Professional: 40,000 calls per month
- Premium: 100,000 calls per month
- Enterprise: Unlimited calls per month
This monthly account-level limit resets at the end of each calendar month.
On top of the monthly quota, API requests are throttled at 900 per user, per minute.
If you don’t respect rate limits, or you can’t handle server responses (like 429 errors with a Retry-After header), your pipeline can break and your analytics can go stale.
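As a sketch of what that handling can look like, the Python snippet below retries a GET request when the server returns a 429, honoring the Retry-After header when present and falling back to exponential backoff otherwise:

import time
import requests

def get_with_retry(url, auth, max_retries=5):
    """GET with basic handling for 429 rate-limit responses."""
    for attempt in range(max_retries):
        response = requests.get(url, auth=auth, timeout=30)
        if response.status_code != 429:
            response.raise_for_status()
            return response
        # Respect Retry-After if the server sends it; otherwise back off.
        wait_seconds = int(response.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait_seconds)
    raise RuntimeError(f"Still rate limited after {max_retries} retries: {url}")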
What Are the Drawbacks of Building the Affinity ETL Pipeline Yourself?
You can probably tell at this point that there is a lot of work that goes into building and maintaining an ETL pipeline from Affinity to your data warehouse.
If you want less development work, faster insights, and no ongoing responsibilities, you should consider a cloud-hosted ETL solution.
Let’s walk through the setup process for a no-code ETL solution and its benefits.
Method 2: Using a No-Code Affinity ETL Solution
No-code ETL solutions are simple: vendors specialize in building and maintaining data pipelines on your behalf. Instead of starting from scratch for each integration, companies like Portable create connector templates that can be leveraged by hundreds or thousands of clients.
Step-By-Step Tutorial for Configuring Your Affinity ETL Pipeline
Off-the-shelf ETL tools offer a no-code setup process. Here are the instructions to connect Affinity to your cloud data warehouse with Portable.
- Create an account (no credit card required)
- Add a source: search for and select Affinity
- Authenticate with Affinity using the instructions in the Portable console
- Select Redshift and authenticate
- Set up a flow connecting Affinity to your analytics environment
- Run your flow to replicate data from Affinity to your warehouse
- Use the dropdown to set your data flow to run on a cadence
What Are the Benefits of Using Portable for Affinity ETL?
Start moving Affinity data in minutes. Save yourself the headaches of reading API documentation, writing code, and worrying about maintenance. Leave the hassle to us.
Easy to Understand Pricing
With predictable, fixed-cost pricing per data flow, you know exactly how much your Affinity integration will cost every month.
Fast Development Speeds
Access lightning-fast connector development. Portable can build new integrations on-demand in hours or days.
Hands-Off Maintenance
APIs change. Schemas evolve. Affinity will have maintenance issues and errors. With Portable, we will do everything in our power to make your life easier.
Unlimited Data Volumes
You can move as much data from Affinity to Amazon Redshift as you want without worrying about usage credits or overages. Instead of analyzing your ETL costs, you should be analyzing your data.
Free to Get Started
Sign up and get started for free. You don’t need a credit card to manually trigger a data sync, so you can try all of our connectors before paying a dime.