When you need to extract data from an application but the data source only offers an API, what do you do?
Every company uses disparate tools - CRM systems, spreadsheets, and bespoke applications.
And while some tools have native integrations to connect to your downstream systems, most platforms simply offer interfaces for YOU to extract or import data.
This means that YOU need to do the work connecting that application to other tools within your enterprise.
Or you have to pay a hefty price for a data integration consultant.
Luckily, there are no-code solutions like Portable that make it easy to interface with APIs.
An API connector allows users to extract data from an API or import data into an API without writing code.
When interfacing directly with APIs, engineers need to read the API documentation, parse JSON responses, and process metadata like pagination information and rate limit details.
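To make that concrete, here's a minimal sketch of what hand-rolled extraction code often looks like: paginated GET requests with basic rate-limit handling, written in Python with the requests library. The endpoint, pagination fields, and header names are hypothetical, since every API spells these differently.

```python
import time
import requests

BASE_URL = "https://api.example.com/v1/contacts"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # auth scheme varies by API

def fetch_all_pages():
    """Walk a paginated endpoint, backing off when the API rate-limits us."""
    records, page = [], 1
    while True:
        resp = requests.get(BASE_URL, headers=HEADERS, params={"page": page})
        if resp.status_code == 429:  # rate limited: wait, then retry the same page
            time.sleep(int(resp.headers.get("Retry-After", 30)))
            continue
        resp.raise_for_status()
        payload = resp.json()
        records.extend(payload["results"])  # response shape is API-specific
        if not payload.get("has_more"):     # so is the pagination metadata
            break
        page += 1
    return records
```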
Instead of writing code, it's common to use a no-code solution to move data between your applications.
Data teams use API connectors to power 3 main use cases:
Centralizing information for data analytics
Connecting systems to automate manual workflows
Building data pipelines to power products for clients
One of the most common use cases for API connectors is to ETL large amounts of data into a centralized analytics environment to power dashboards and insights.
These dashboards and insights create value by enabling business stakeholders to make more informed strategic decisions. For example:
E-commerce brands can analyze their Shopify data alongside their Google Ads data or their Google Analytics data to understand which ads are driving orders
Accounting teams can compare customer data from a CRM system with financial data from QuickBooks, Xero, Zoho Books or another accounting platform
Most analytics teams would rather use an off-the-shelf solution for API connectors instead of trying to build a custom ETL integration in-house.
Process automation has slightly different technical requirements from analytics, but API connectors are just as valuable.
For most workflow automation use cases, data needs to move in near real-time. For instance:
If a user signs up for your website, your system should send them an email
If a prospect becomes qualified based on usage data, the sales team should be notified
It's common for workflow automation to use webhooks to route data quickly, but import APIs and REST extraction APIs are just as critical to any form of process automation.
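For illustration, a webhook-driven automation is often just a small HTTP handler that reacts to events as they arrive. Here's a minimal sketch using Flask; the payload fields and the send_welcome_email helper are hypothetical stand-ins for whatever your source app and email provider actually expose.

```python
from flask import Flask, request

app = Flask(__name__)

def send_welcome_email(address):
    """Hypothetical helper: in practice, call your email provider's API here."""
    print(f"Sending welcome email to {address}")

@app.route("/webhooks/signup", methods=["POST"])
def handle_signup():
    # The source system POSTs an event the moment a user signs up, so the
    # email goes out in near real-time rather than on a batch schedule.
    event = request.get_json()
    send_welcome_email(event["email"])  # payload shape depends on the source app
    return {"status": "ok"}
```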
The most direct way to create value from data is to build data products for clients.
Why? Because clients spend money directly to purchase the product. It's easy to measure the ROI of a data investment when you have the revenue to point to.
When building external products, API connectors can accelerate development, reduce maintenance effort, and offer clients a simple experience for integrating disparate tools into your product.
The two most common use cases for API connectors in product development are:
Streamlining data ingestion to allow your product to pull data from external APIs
Offering enhanced connectivity to make it easy to export data to downstream systems (like Snowflake, Redshift, BigQuery, or PostgreSQL)
To build an API connector from scratch, there are 6 steps:
Research the API
Deploy Data Pipeline Infrastructure
Connect to the API
Extract the API Data
Process the API Response
Deliver Data to the Destination
Let's walk through each step in more detail.
First, you need to conduct research and get up to speed on the specific interfaces available to pull or push data from the platform you want to integrate with (a quick format check is sketched after this list). This involves:
Finding the API docs
Determining the type of API (REST API, GraphQL, SOAP, etc.)
Understanding the format of the API data (JSON, XML, CSV, etc.)
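One quick way to confirm the data format during research is to call an endpoint and inspect the Content-Type header. A small sketch, assuming a hypothetical JSON endpoint:

```python
import requests

# Hypothetical endpoint; substitute the API you are researching
resp = requests.get("https://api.example.com/v1/orders", timeout=10)
content_type = resp.headers.get("Content-Type", "")
print(content_type)  # e.g. "application/json; charset=utf-8"

if "json" in content_type:
    print(resp.json())      # JSON: parse directly into Python objects
else:
    print(resp.text[:500])  # XML or CSV: inspect the raw body instead
```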
Next, you need to deploy infrastructure to push data into the API or pull data from the API (a minimal monitoring sketch follows this list). This involves:
Setting up cloud infrastructure (storage, compute, networking)
Writing version-controlled code (using GitHub, GitLab, or something similar)
Deploying changes with a CI/CD process and debugging any issues as they arise
Monitoring your data pipeline with key metrics, dashboards, and alerts when things fail
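Monitoring can start simply: time each pipeline step and alert when one fails. Below is a minimal sketch; the alert_on_call hook is a hypothetical placeholder for Slack, PagerDuty, or whatever alerting tool you use.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("pipeline")

def alert_on_call(message):
    """Hypothetical hook: wire this to Slack, PagerDuty, email, etc."""
    logger.error("ALERT: %s", message)

def run_with_monitoring(step_name, step_fn):
    """Run one pipeline step, recording its duration and alerting on failure."""
    start = time.monotonic()
    try:
        result = step_fn()
        logger.info("%s succeeded in %.1fs", step_name, time.monotonic() - start)
        return result
    except Exception as exc:
        alert_on_call(f"{step_name} failed after {time.monotonic() - start:.1f}s: {exc}")
        raise
```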
Once you have the data pipeline infrastructure in place to move data, you now need to connect to the API (a short authentication sketch follows this list). This involves:
Understanding common authentication mechanisms (API Key, OAuth 2.0, Basic Auth, etc.)
Making sure your credentials are authorized with the correct permissions
Adding the necessary headers and query parameters to your API requests
Making any necessary API calls (GET requests, POST requests) to authenticate and establish an API connection
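For example, API-key authentication usually comes down to attaching the key to every request as a header or a query parameter. A short sketch with hypothetical endpoints and header names (check the docs, since each platform spells these differently):

```python
import requests

API_KEY = "YOUR_API_KEY"  # keep this in a secrets manager, not in source code

# Many APIs accept the key as a bearer token in the Authorization header...
resp = requests.get(
    "https://api.example.com/v1/me",  # hypothetical "who am I" endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)

# ...while others expect it as a query parameter instead
resp = requests.get(
    "https://api.example.com/v1/me",
    params={"api_key": API_KEY},
    timeout=10,
)

# A 200 confirms the credentials work; a 401 or 403 means authentication
# or authorization (permissions) failed.
print(resp.status_code)
```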
Now it's time for the fun part: extracting data from the API with API requests (see the sketch after this list). To do so, you need to:
Select the API endpoints you want to extract data from
Construct your API calls (header, query parameters, URLs, etc.)
Include any necessary parameters or payload in your API request
Log the API responses so that you can process them into useful data
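Putting those pieces together, a single extraction call might look like this sketch (the endpoint, parameters, and response shape are hypothetical):

```python
import json
import logging
import requests

logging.basicConfig(level=logging.INFO)

def extract_invoices(since):
    """Pull invoices updated since a given date and log the raw response."""
    resp = requests.get(
        "https://api.example.com/v1/invoices",  # hypothetical endpoint
        headers={"Authorization": "Bearer YOUR_API_TOKEN"},
        params={"updated_since": since, "limit": 100},
        timeout=30,
    )
    resp.raise_for_status()
    logging.info("GET /invoices -> %s (%d bytes)", resp.status_code, len(resp.content))
    # Persist the raw response so you can reprocess or debug it later
    with open("raw_invoices.json", "w") as f:
        json.dump(resp.json(), f)
    return resp.json()
```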
After extracting data, it's time to process the API response into useful information that can be used downstream for analytics, automation, or product development (a short sketch follows this list). For example:
Standardizing data into coherent schemas (that can be mapped to your company's data dictionary)
Applying schema validation to make sure the data is clean and usable
Identifying and alerting on any schema changes
Defining any transformations, filters, or calculations while data is in motion
Joining any other data sets necessary to deliver the data
Preparing and packaging the data for delivery
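As a concrete sketch, standardization and validation often amount to mapping raw API fields onto a target schema and rejecting records that don't fit. The field names below are hypothetical:

```python
from datetime import datetime

# Hypothetical raw record as returned by the source API
raw_records = [{"id": 42, "email_address": "Ada@Example.com", "created": "2024-01-15T09:30:00"}]

REQUIRED_FIELDS = {"customer_id", "email", "created_at"}  # from your data dictionary

def standardize(raw):
    """Map a raw API record onto the target schema."""
    return {
        "customer_id": str(raw["id"]),
        "email": raw.get("email_address", "").lower(),
        "created_at": datetime.fromisoformat(raw["created"]).isoformat(),
    }

def validate(record):
    """Basic schema validation: every required field present and non-empty."""
    missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
    if missing:
        raise ValueError(f"Record failed validation, missing: {missing}")
    return record

clean = [validate(standardize(r)) for r in raw_records]
```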
The final step in the API connector process is to deliver the data to your destination. Sometimes, you simply want to store the data, so you can deliver it to a cloud storage bucket or a file. At other times, the data needs to be loaded into an application or a database. At a high level, the delivery process involves the following (a loading sketch follows this list):
Selecting the destination
Determining how best to load the data (using an append-only strategy or syncing all of the data each time)
Determining the schema (fields, columns, etc.) necessary to load the data
Processing your existing data into that schema
Formatting the data for delivery (as files, JSON, XML, etc.)
Making API requests, uploading files, or applying SQL statements to deliver the data
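To make the last step concrete, here is a sketch of appending cleaned records to a PostgreSQL destination with psycopg2. The connection string, table, and column names are hypothetical:

```python
import psycopg2

def deliver(records):
    """Append cleaned records to a destination table in PostgreSQL."""
    conn = psycopg2.connect("postgresql://user:password@localhost:5432/analytics")
    with conn, conn.cursor() as cur:
        cur.executemany(
            """
            INSERT INTO customers (customer_id, email, created_at)
            VALUES (%(customer_id)s, %(email)s, %(created_at)s)
            """,
            records,  # append-only: add new rows each run; a full sync would truncate first
        )
    conn.close()
```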
As you can probably tell, building your own API connector can take a while. There are lots of moving parts, and things change all the time.
With no-code API connectors, the process of moving data is much simpler.
It is extremely easy to use no-code API connectors from Portable to move data. Here are the 7 steps:
Sign up for an account (no payment required)
Connect to the source API: search for and select the API you need
Configure authentication details using the instructions in the Portable app
Select your destination of choice and authenticate
Connect your source API to your destination
Run your flow to sync data from the API to your processing environment
Set the data flow to run on a cadence
Well, it depends. Which is more important? Your time, or your money?
There are many ways to leverage an API connector for free (writing your own code, using open-source templates, deploying your own infrastructure, etc.). But you also need to factor in the time it takes your team to manage the tooling that moves the data.
If you're looking for a free API connector solution, check out Portable's free tier.
Portable has a catalog of 300+ no-code API connectors. If you need an API connector that isn't listed, feel free to get in touch. We build new connectors in hours or days for free.
Here are a few in-depth tutorials for connecting Jira to Snowflake, Mailchimp to BigQuery, and Stripe to PostgreSQL.
Ready to get started? Try Portable today!