With Portable, integrate Tempo data with your BigQuery warehouse in minutes. Access your project, product, and strategic portfolio management data from BigQuery without having to manage cumbersome ETL scripts.
The Two Paths to Connect Tempo to Google BigQuery
There are two ways to sync data from Tempo into your data warehouse for analytics.
Method 1: Manually Developing a Custom Data Pipeline Yourself
Write code from scratch or use an open-source framework to build an integration between Tempo and BigQuery.
Method 2: Automating the ETL Process with a No-Code Solution
Leverage a pre-built connector from a cloud-hosted solution like Portable.
How to Create Value with Tempo Data
Teams connect Tempo to their data warehouse to build dashboards and generate value for their business. Let’s dig into the capabilities Tempo exposes via their API, outline insights you can build with the data, and summarize the most common analytics environments that teams are using to process their Tempo data.
Extract: What Data Can You Extract from the Tempo API?
Tempo is a project, product, and strategic portfolio management solution that enables teams to complete projects on time and manage finances efficiently.
To help clients power downstream analytics, Tempo offers an application programming interface (API) for clients to extract data on business entities. Here are a few example entities you can extract from the API:
- Account Categories
- Account Category Types
- Account Links
- Customers
- Generic Resources
- Global Configurations
- Holiday Schemes
- Periods
- Permission Roles
- Plans
- Program Roles
- Skills
- Skill Assignments
- Teams
- Team Links
- Team Memberships
- Timesheet Approvals
- User Schedules
- Work Attributes
- Workload Schemes
- Worklogs
You can visit the Tempo API Documentation to explore the entire catalog of available API resources and the complete schema definition for each.
As you think about the data you will need for analytics, don’t forget that Portable offers no-code integrations to other similar applications.
Regardless of the SaaS solution you use, it’s important to find a project, product and strategic portfolio management solution with robust data available for analytics.
Load: Which Destinations Are Best for Your Tempo ETL Pipeline?
To turn raw data from Tempo into dashboards, most companies centralize information into a data warehouse or data lake. For Portable clients, the most common ETL pipelines are:
- Tempo to Snowflake Integration
- Tempo to Google BigQuery Integration
- Tempo to Amazon Redshift Integration
- Tempo to PostgreSQL Integration
Once you have a destination to load the data, it’s common to combine Tempo data with information from other enterprise applications like Jira, Mailchimp, HubSpot, Zendesk, and Klaviyo.
From there, you can build cross-functional dashboards in a visualization tool like Power BI, Tableau, Looker, or Retool.
Develop: Which Dashboards Should You Build with Tempo Data?
Now that you have identified the data you want to extract, the next step is to plan out the dashboards you can build with the data.
As a process, you want to consume raw data, overlay SQL logic, and build a dashboard to either 1) increase revenue or 2) decrease costs.
Replicating Tempo data into your cloud data warehouse can unlock a wide array of opportunities to power analytics, automate workflows, and develop products. The use cases are endless.
Now that we have a clear sense of the insights we can create, let’s compare the process of developing a custom Tempo integration with the benefits of using a no-code ETL solution like Portable.
Method 1: Building a Custom Tempo ETL Pipeline
To build your own Tempo integration, there are three steps:
- Navigate the Tempo API documentation
- Make your first API request
- Turn an API request into a complete data pipeline
Let’s walk through the process in more detail.
How to Interpret Tempo’s API Documentation
When reading API documentation, there are a handful of key concepts to consider.
There are many common authentication mechanisms: OAuth 2.0 (authorization code and client credentials flows), API keys, JWT tokens, personal access tokens, basic authentication, and more. For Tempo, it's important to identify the authentication mechanism and how best to incorporate the necessary credentials into your API requests.
Tempo uses OAuth 2.0 tokens for authentication.
It’s important to identify the Tempo API endpoints you want to use for analytics. Most APIs offer a combination of GET, POST, PUT, and DELETE request methods; however, for analytics, GET requests are typically the most useful. At times, POST requests can be used to extract data as well.
For Tempo, the accounts endpoint is a great place to get started.
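To make that concrete, here is a minimal Python sketch of an authenticated GET request using the requests library. The `/accounts` path, the base URL, and the response shape are assumptions based on the curl example later in this guide; confirm the exact resource URL in the Tempo API documentation.

```python
import requests

# Assumed base URL, mirroring the curl example later in this guide.
BASE_URL = "https://api.tempo.io/4"
TOKEN = "your-oauth-token"  # placeholder; never hardcode real credentials

def get_accounts():
    """Fetch accounts from the (assumed) /accounts endpoint."""
    response = requests.get(
        f"{BASE_URL}/accounts",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()  # surface 4xx/5xx errors early
    return response.json()

if __name__ == "__main__":
    print(get_accounts())
```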
For each API endpoint you would like to use for analytics, you need to understand the method (GET, POST, PUT, or DELETE) and the URL, but there are other considerations to take into account as well. You should look out for pagination mechanics, query parameters, and parameters that are added to the request path.
Tempo uses offset and limit parameters for pagination.
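As a sketch of what that looks like in practice, the loop below pages through a resource with offset and limit query parameters. The `results` envelope key is an assumption; verify the actual response shape in the endpoint's documentation.

```python
import requests

BASE_URL = "https://api.tempo.io/4"
HEADERS = {"Authorization": "Bearer your-oauth-token"}  # placeholder token

def fetch_all(resource, page_size=50):
    """Page through a Tempo resource using offset/limit parameters."""
    offset = 0
    while True:
        response = requests.get(
            f"{BASE_URL}/{resource}",
            headers=HEADERS,
            params={"offset": offset, "limit": page_size},
            timeout=30,
        )
        response.raise_for_status()
        page = response.json().get("results", [])  # assumed envelope key
        if not page:
            break
        yield from page
        offset += page_size

# Usage: accounts = list(fetch_all("accounts"))
```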
Some API endpoints require unique identifiers from a previous API response to be included in the URL path. For instance, to update customers, you need a key that is returned from another endpoint.
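As a purely hypothetical sketch (the endpoint paths and the `key` field name are illustrative, not confirmed by the Tempo docs), chaining an identifier from one response into another request's URL path might look like this:

```python
import requests

BASE_URL = "https://api.tempo.io/4"
HEADERS = {"Authorization": "Bearer your-oauth-token"}  # placeholder

# Step 1: list customers and read an identifier from each record.
customers = requests.get(
    f"{BASE_URL}/customers", headers=HEADERS, timeout=30
).json().get("results", [])  # assumed envelope key

for customer in customers:
    key = customer["key"]  # assumed identifier field
    # Step 2: embed the key in the URL path of a follow-up request.
    detail = requests.get(f"{BASE_URL}/customers/{key}", headers=HEADERS, timeout=30)
    print(detail.json())
```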
How Do You Call the Tempo API? (Tutorial)
- Follow the instructions above to read the Tempo API documentation
- Identify and collect your credentials for authentication
- Pick the API resource you want to pull data from
- Configure the necessary parameters, method, and URL to make your first request (e.g. with curl or Postman)
- Add your credentials and make your first API call. Here is an example request using curl (without real credentials):
curl -v -H "Authorization: Bearer $token" 'https://api.tempo.io/4/worklogs?...'
How Do You Maintain a Custom Tempo to BigQuery ETL Pipeline?
Making a call to the Tempo API is just the first step in building and maintaining a complete custom ETL pipeline.
Here is a getting-started guide to building a production-grade pipeline for Tempo:
- For each API endpoint, define schemas (which fields exist and the type for each)
- Process the API response and parse the data (typically parsing JSON or XML; see the sketch after this list)
- Handle and replicate nested objects and custom fields
- Identify which Tempo fields are primary keys and which keys are required vs. optional
- Version control your changes in a git-based workflow (using GitHub, GitLab, etc.)
- Handle code dependencies in your toolchain and the upgrades that come with each
- Monitor the health of the upstream API, and when things go wrong, troubleshoot via the status page, reach out to support, and open tickets
- Handle error codes (HTTP error codes like 400s, 500s, etc.)
- Manage and respect rate limits imposed by the server
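To make the first two items concrete, here is a minimal sketch of defining a schema and parsing a JSON response into it. The field names are illustrative stand-ins, not Tempo's actual worklog contract.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Worklog:
    """Typed schema for one worklog record (field names are assumptions)."""
    tempo_worklog_id: int       # assumed primary key
    time_spent_seconds: int     # assumed required field
    description: Optional[str]  # assumed optional field

def parse_worklog(raw: dict) -> Worklog:
    """Parse one JSON object from the API response into the typed schema."""
    return Worklog(
        tempo_worklog_id=raw["tempoWorklogId"],
        time_spent_seconds=raw["timeSpentSeconds"],
        description=raw.get("description"),
    )
```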
We won’t go into detail on all of the items above, but rate limits are a great example of the complexity found in a production-grade data pipeline.
Currently, the Tempo API gateway has a rate limit of 5 requests per second, regardless of the connection type (application access, user token, or OAuth 2.0).
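A simple client-side throttle that stays under that ceiling might look like the following sketch (the 5 requests/second budget comes from the limit above):

```python
import time

class Throttle:
    """Spaces out calls so a client never exceeds a requests-per-second budget."""

    def __init__(self, max_per_second=5):
        self.min_interval = 1.0 / max_per_second
        self.last_call = 0.0

    def wait(self):
        """Sleep just long enough to honor the minimum interval between calls."""
        elapsed = time.monotonic() - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.monotonic()

# Usage: call throttle.wait() immediately before each API request.
throttle = Throttle(max_per_second=5)
```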
The largest dataset stored in Tempo is worklogs. The number of results returned depends on your REST call parameters, so avoid pulling months of worklogs in a single call. Instead, write a script that loops through user/issue/week with pagination for better network traffic and server performance (see the sketch after the list below). You can narrow down your query with the following parameters:
- Use a shorter time frame, such as by day or by week (instead of by month)
- Use a single user (instead of teams)
- Use pagination
- Use a limit (maximum: 1000 for REST API V3, 5000 for REST API V4)
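Putting those recommendations together, a week-by-week worklog fetch might look like the sketch below. The `from`/`to` query parameter names and the response envelope are assumptions; verify them against the worklogs endpoint documentation.

```python
from datetime import date, timedelta

import requests

BASE_URL = "https://api.tempo.io/4"
HEADERS = {"Authorization": "Bearer your-oauth-token"}  # placeholder

def fetch_worklogs_by_week(start: date, end: date, limit: int = 1000):
    """Fetch worklogs one week at a time instead of months in a single call."""
    week_start = start
    while week_start <= end:
        week_end = min(week_start + timedelta(days=6), end)
        offset = 0
        while True:
            response = requests.get(
                f"{BASE_URL}/worklogs",
                headers=HEADERS,
                params={
                    "from": week_start.isoformat(),  # assumed parameter name
                    "to": week_end.isoformat(),      # assumed parameter name
                    "offset": offset,
                    "limit": limit,
                },
                timeout=30,
            )
            response.raise_for_status()
            page = response.json().get("results", [])  # assumed envelope key
            if not page:
                break
            yield from page
            offset += limit
        week_start = week_end + timedelta(days=1)
```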
If you don’t respect rate limits, or can’t handle server responses (like 429 errors with a Retry-After header), your pipeline can break and your analytics can become out-of-date.
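As one sketch of that error handling, a retry wrapper that honors Retry-After (and falls back to exponential backoff when the header is absent) could look like this:

```python
import time

import requests

def get_with_retry(url, headers, params=None, max_retries=5):
    """GET with basic 429 handling that honors the Retry-After header."""
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers, params=params, timeout=30)
        if response.status_code != 429:
            response.raise_for_status()
            return response
        # Honor Retry-After if present; otherwise back off exponentially.
        delay = float(response.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay)
    raise RuntimeError(f"Gave up on {url} after {max_retries} retries")
```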
What Are the Drawbacks of Building the Tempo ETL Pipeline Yourself?
You can probably tell at this point that there is a lot of work that goes into building and maintaining an ETL pipeline from Tempo to your data warehouse.
If you want less development work, faster insights, and no ongoing responsibilities, you should consider a cloud-hosted ETL solution.
Let’s walk through the setup process for a no-code ETL solution and its benefits.
Method 2: Using a No-Code Tempo ETL Solution
No-code ETL solutions are simple. Vendors specialize in building and maintaining data pipelines on your behalf. Instead of starting from scratch for each integration, companies like Portable create connector templates that can be leveraged by hundreds or thousands of clients.
Step-By-Step Tutorial for Configuring Your Tempo ETL Pipeline
Off-the-shelf ETL tools offer a no-code setup process. Here are the instructions to connect Tempo to your cloud data warehouse with Portable.
- Create an account (no credit card required)
- Add a source: search for and select Tempo
- Authenticate with Tempo using the instructions in the Portable console
- Select BigQuery and authenticate
- Set up a flow connecting Tempo to your analytics environment
- Run your flow to replicate data from Tempo to your warehouse
- Use the dropdown to set your data flow to run on a cadence
What Are the Benefits of Using Portable for Tempo ETL?
Start moving Tempo data in minutes. Save yourself the headaches of reading API documentation, writing code, and worrying about maintenance. Leave the hassle to us.
Easy to Understand Pricing
With predictable, fixed-cost pricing per data flow, you know exactly how much your Tempo integration will cost every month.
Fast Development Speeds
Access lightning-fast connector development. Portable can build new integrations on-demand in hours or days.
Hassle-Free Maintenance
APIs change. Schemas evolve. Tempo will have maintenance issues and errors. With Portable, we will do everything in our power to make your life easier.
Unlimited Data Volumes
You can move as much data from Tempo to Google BigQuery as you want without worrying about usage credits or overages. Instead of analyzing your ETL costs, you should be analyzing your data.
Free to Get Started
Sign up and get started for free. You don’t need a credit card to manually trigger a data sync, so you can try all of our connectors before paying a dime.