With Portable, you can sync 7shifts data into your PostgreSQL warehouse in minutes. Access all of your Employee Scheduling data from PostgreSQL without having to manage cumbersome ETL scripts.
The Two Paths To Connect 7shifts To Your Data Warehouse
There are two ways to sync data from 7shifts into your data warehouse for analytics.
Method 1: Manually Developing A Custom Data Pipeline Yourself
Write code from scratch or use an open-source framework to build an integration between 7shifts and your warehouse.
Method 2: Automating The ETL Process With A No-Code Solution
Leverage a prebuilt connector from a cloud-hosted solution like Portable.
How To Create Value With 7shifts Data
Teams connect 7shifts to their data warehouse to build dashboards and generate value for their business. Let’s dig into the capabilities 7shifts exposes via their API, outline insights you can build with the data, and summarize the most common analytics environments teams are using to process their 7shifts data.
Extract: What Data Can You Extract From The 7shifts API?
7shifts is an employee scheduling platform used for hiring and managing restaurant professionals.
To help clients power downstream analytics, 7shifts offers an application programming interface (API) for clients to extract data on business entities. Here are a few example entities you can extract from the API.
- Time Punches
- Shifts
- Locations
You can visit the 7shifts API documentation to explore the entire catalog of available API resources and the complete schema definition for each. As an example, here are some of the details for the locations endpoint in the 7shifts API documentation.
As you think about the data you will need for analytics, don’t forget that Portable offers no-code integrations to other similar applications like Everhour, Harvest, and Hubstaff that can be useful for comparison purposes.
Regardless of the SaaS solution you use, it’s important to find an employee scheduling platform with robust data available for analytics.
Load: Which Destinations Are Best For A 7shifts ETL Pipeline?
To turn raw data from 7shifts into dashboards, most companies centralize information into a data warehouse or data lake. For Portable clients, the most common ETL pipelines are:
- 7shifts to Snowflake Integration
- 7shifts to Google BigQuery Integration
- 7shifts to Amazon Redshift Integration
- 7shifts to PostgreSQL Integration
Once you have a destination to load the data, it’s common to combine 7shifts data with information from other enterprise applications like Jira, Mailchimp, HubSpot, Zendesk, and LinkedIn.
From there, you can build cross-functional dashboards in a visualization tool like Power BI, Tableau, Looker, or Retool.
Develop: Which Dashboards Should You Build With 7shifts Data?
Now that you have identified the data you want to extract, the next step is to plan out the dashboards you can build with the data.
As a process, you want to consume raw data, overlay SQL logic, and build a dashboard to either 1) increase revenue or 2) decrease costs.
Here are three employee scheduling analytics dashboards you should consider as a starting point.
- Shifts Per Employee - Get a clear image of which employees are working when. You can chart the hours worked per employee over time to understand engagement (see the sketch after this list).
- Labor vs. Revenue - When you map your labor expenses against your revenue data, you can finally understand when you need to increase bandwidth and when you can reduce labor costs.
- Events (By Location) - In addition to understanding your employee shifts and time cards, you can also gain insight into events at your locations to optimize traffic to stores and location utilization.
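As an illustration of the first dashboard above, here is a minimal sketch of the "Shifts Per Employee" logic in Python, assuming 7shifts time punch data has already been replicated into a PostgreSQL table. The table name, column names, and connection string are illustrative placeholders, not the actual 7shifts schema.

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string; point this at your own warehouse
engine = create_engine("postgresql://user:password@localhost:5432/warehouse")

# "time_punches", "user_id", "clocked_in", and "clocked_out" are illustrative
# names, not the actual replicated 7shifts schema
punches = pd.read_sql(
    "SELECT user_id, clocked_in, clocked_out FROM time_punches",
    engine,
    parse_dates=["clocked_in", "clocked_out"],
)

# Hours worked per punch, then total hours per employee per week
punches["hours"] = (
    punches["clocked_out"] - punches["clocked_in"]
).dt.total_seconds() / 3600

weekly_hours = (
    punches.groupby(["user_id", pd.Grouper(key="clocked_in", freq="W")])["hours"]
    .sum()
    .reset_index()
    .rename(columns={"clocked_in": "week"})
)
print(weekly_hours.head())
```

From there, the weekly totals can feed a bar or line chart in whichever visualization tool you use.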
Beyond the dashboards above, replicating 7shifts data into your cloud data warehouse can unlock a wide array of opportunities to power analytics, automate workflows, and develop products. The use cases are endless.
Now that we have a clear sense of the insights we can create, let’s compare the process of developing a custom 7shifts integration with the benefits of using a no-code ETL solution like Portable.
Method 1: Building A Custom 7shifts ETL Pipeline
To build your own 7shifts integration, there are three steps:
- Navigate the 7shifts API documentation
- Make your first API request
- Turn an API request into a complete data pipeline
Let’s walk through the process in more detail.
How To Interpret The 7shifts API Documentation
When reading API documentation, there are a handful of key concepts to consider.
There are many common authentication mechanisms: OAuth 2.0 (Authorization Code and Client Credentials), API keys, JWT tokens, personal access tokens, basic authentication, and more. For 7shifts, it’s important to identify the authentication mechanism and how best to incorporate the necessary credentials into your API requests.
7shifts uses OAuth 2.0 Authorization Code for authentication. They also support API key authentication; however, they recommend clients migrate to the OAuth authentication workflow.
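As a minimal sketch, assuming you have already obtained an OAuth 2.0 access token (or a legacy API key used the same way), you can attach it to each request as a bearer token in the Authorization header:

```python
import requests

ACCESS_TOKEN = "your-access-token"  # placeholder; never hard-code real credentials

response = requests.get(
    "https://api.7shifts.com/v2/company/company_id/locations",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Accept": "application/json",
    },
)
response.raise_for_status()
print(response.json())
```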
It’s important to identify the 7shifts API endpoints you want to use for analytics. Most APIs offer GET, POST, PUT, and DELETE methods; however, for analytics, GET requests are typically the most useful. At times, POST requests can be used to extract data as well.
For 7shifts, the locations endpoint is a great place to get started.
For each API endpoint you would like to use for analytics, you need to understand the method (GET, POST, PUT, or DELETE) and the URL, but there are other considerations to take into account as well. You should look out for pagination mechanics, query parameters, and parameters that are added to the request path.
7shifts uses limit and cursor parameters for pagination.
Most requests require a company ID to be included in the URL path. For instance, to list all locations, you need a company_id.
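Putting those pieces together, here is a hedged sketch of paginating through the locations endpoint with limit and cursor parameters. The exact field that carries the next cursor in the response is an assumption for illustration, so confirm the real response shape in the API documentation.

```python
import requests

ACCESS_TOKEN = "your-access-token"  # placeholder
COMPANY_ID = "company_id"           # your company ID goes in the URL path
BASE_URL = f"https://api.7shifts.com/v2/company/{COMPANY_ID}/locations"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}", "Accept": "application/json"}

def fetch_all_locations():
    locations, cursor = [], None
    while True:
        params = {"limit": 100}
        if cursor:
            params["cursor"] = cursor
        resp = requests.get(BASE_URL, params=params, headers=HEADERS)
        resp.raise_for_status()
        body = resp.json()
        locations.extend(body.get("data", []))
        # Illustrative response shape: stop when no next cursor is returned
        cursor = body.get("meta", {}).get("cursor", {}).get("next")
        if not cursor:
            break
    return locations

print(len(fetch_all_locations()))
```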
How Do You Call The 7shifts API? (Tutorial)
- Follow the instructions above to read the 7shifts API documentation
- Identify and collect your credentials for authentication
- Pick the API resource you want to pull data from
- Configure the necessary parameters, method, and URL to make your first request (either with curl or Postman)
- Add your credentials and make your first API call. Here is an example request using curl (without real credentials):
curl --request GET \
  --url 'https://api.7shifts.com/v2/company/company_id/locations?limit=100' \
  --header 'accept: application/json'
How Do You Maintain A Custom 7shifts ETL Pipeline?
Making a call to the 7shifts API is just the beginning; building and maintaining a complete custom ETL pipeline involves much more.
Here is a getting started guide to building a production-grade pipeline for 7shifts:
- For each API endpoint, define schemas (which fields exist and the type for each); see the sketch after this list
- Process the API response and parse the data (typically parsing JSON or XML)
- Handle and replicate nested objects and custom fields
- Identify which fields are primary keys and which keys are required vs. optional
- Version control your changes in a git-based workflow (using GitHub, GitLab, etc.)
- Handle code dependencies in your toolchain and the upgrades that come with each
- Monitor the health of the upstream API and, when things go wrong, troubleshoot via the status page, reach out to support, and open tickets
- Handle error codes (HTTP error codes like 400s, 500s, etc.)
- Manage and respect rate limits imposed by the server
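To illustrate the first two items on the list, here is a small sketch that defines a schema for a single endpoint and parses a JSON response into typed records. The field names are assumptions for illustration rather than the actual 7shifts schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Location:
    id: int                 # primary key
    company_id: int         # required field
    address: Optional[str]  # optional field

def parse_locations(payload: dict) -> list[Location]:
    """Parse an illustrative JSON payload into typed Location records."""
    records = []
    for row in payload.get("data", []):
        records.append(
            Location(
                id=row["id"],
                company_id=row["company_id"],
                address=row.get("address"),
            )
        )
    return records
```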
We won’t go into detail on all of the items above, but rate limits are a great example of the complexity found in a production-grade data pipeline.
For rate limits, 7shifts throttles each access token: you can only make 10 requests per second per access token across all endpoints.
If you don’t respect rate limits, and if you can’t handle server responses (like 429 errors with a Retry-After header), your pipeline can break, and analytics can become out-of-date.
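As a rough sketch of what respecting rate limits looks like in practice, the helper below retries a request when the server responds with a 429, waiting for the Retry-After interval (or a short default) before trying again:

```python
import time
import requests

def get_with_retry(url, headers, params=None, max_attempts=5):
    """GET a URL, backing off when the server returns 429 Too Many Requests."""
    for attempt in range(max_attempts):
        resp = requests.get(url, headers=headers, params=params)
        if resp.status_code == 429:
            retry_after = resp.headers.get("Retry-After", "")
            wait = int(retry_after) if retry_after.isdigit() else 1.0
            time.sleep(wait)
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError(f"Rate limited after {max_attempts} attempts: {url}")
```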
What Are The Drawbacks Of Building A 7shifts ETL Pipeline Yourself?
You can probably tell at this point that there is a lot of work that goes into building and maintaining an ETL pipeline from 7shifts to your data warehouse.
If you want less development work, faster insights, and no ongoing responsibilities, you should consider a cloud-hosted ETL solution.
Let’s walk through the setup process for a no-code ETL solution and its benefits.
Method 2: Using A No-Code 7shifts ETL Solution
No-code ETL solutions are simple: vendors specialize in building and maintaining data pipelines on your behalf. Instead of starting from scratch for each integration, companies like Portable create connector templates that can be leveraged by hundreds or thousands of clients.
Step-By-Step Tutorial For Configuring A 7shifts ETL Pipeline
Off-the-shelf ETL tools offer a no-code setup process. Here are the instructions to connect 7shifts to your cloud data warehouse with Portable.
- Create an account (no credit card required)
- Add a source - Search for and select 7shifts
- Authenticate with 7shifts using the instructions in the Portable console
- Select your warehouse (Snowflake, BigQuery, Redshift, or PostgreSQL) and authenticate
- Set up a flow connecting 7shifts to your analytics environment
- Run your flow to replicate data from 7shifts to your warehouse
- Use the dropdown to set your data flow to run on a cadence
What Are The Benefits Of Using Portable For 7shifts ETL?
Start moving 7shifts data in minutes. Save yourself the headaches of reading API documentation, writing code, and worrying about maintenance. Leave the hassle to us.
Easy To Understand Pricing
With predictable, fixed-cost pricing per data flow, you know exactly how much your 7shifts integration will cost every month.
Fast Development Speeds
Access lightning-fast connector development. Portable can build new integrations on-demand in hours or days.
Hands-Off Maintenance
APIs change. Schemas evolve. 7shifts will have maintenance issues and errors. With Portable, we will do everything in our power to make your life easier.
Unlimited Data Volumes
You can move as much 7shifts data as you want without worrying about usage credits or overages. Instead of analyzing your ETL costs, you should be analyzing your data.
Free To Get Started
Sign up and get started for free. You don’t need a credit card to manually trigger a data sync, so you can try all of our connectors before paying a dime.