Path To Pipelines Builder Certification
A Complete Guide to Navigating and Using Quickbase
VKIITS PRIVATE LIMITED
Prepared by,
VKIITS Team.
Pipelines
In Quickbase, Pipelines is an automation tool that allows users to connect different apps and services to automate workflows without manual intervention. It enables users to move data, trigger actions, and integrate with third-party platforms efficiently.
Key Features of Quickbase Pipelines:
Workflow Automation: Automate repetitive tasks like data updates, approvals, and notifications.
Multi-Step Actions: Chain multiple actions together in a sequence.
Integration with External Systems: Connect Quickbase with platforms like Salesforce, Slack, Google Drive, and more.
Drag-and-Drop Interface: No-code/low-code setup with an intuitive UI.
Triggers & Actions: Set conditions that trigger specific actions automatically.
Error Handling: Manage exceptions and ensure workflow reliability.
How Quickbase Pipelines Work:
Trigger: Defines what starts the pipeline (e.g., a new record in Quickbase).
Steps: Individual actions that execute based on the trigger (e.g., updating a record, sending an email).
Channels: Built-in connectors for different services (e.g., Quickbase, Gmail, Dropbox).
Conditions & Filters: Control when and how data flows.
What your pipeline can do:
Span many tools
Include needed steps
Apply Conditions
Transform your data
Set and convert date and time
Control when it starts
Channel options:
1. Trigger: A trigger is an event that tells the pipeline to start.
2. Action: An action is the operation that the step is configured to take.
3. Query: A query is a request or search for data.
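To make these three step types concrete, here is a minimal, purely illustrative Python sketch (not Quickbase's actual engine or API; all names are hypothetical) that models a pipeline as a trigger plus an ordered list of steps:

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Pipeline:
        """Illustrative model only: a trigger event starts the run, then each
        step (an action or query) executes in order, passing the record along."""
        name: str
        trigger: str                                  # e.g. "record_created"
        steps: list[Callable[[dict], dict]] = field(default_factory=list)

        def handle(self, event: str, record: dict) -> dict | None:
            if event != self.trigger:                 # only the trigger starts it
                return None
            for step in self.steps:                   # steps run in sequence
                record = step(record)
            return record

    # Usage: uppercase a name and stamp a status on newly created records.
    pipeline = Pipeline(
        name="Welcome new customer",
        trigger="record_created",
        steps=[
            lambda r: {**r, "name": r["name"].upper()},
            lambda r: {**r, "status": "Welcomed"},
        ],
    )
    print(pipeline.handle("record_created", {"name": "Ada"}))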
Tips for using loops in your pipelines:
In the exercises for this section, you built pipelines that included a query step followed by a for each loop. That loop performed an action for each record found in the search. As you begin to work with loops in your pipelines, keep a few things in mind.
Inside and outside the loop: Limit the steps within the loop to only what you need performed on each record returned from the search. These steps go under the DO branch. Any steps that need to happen outside of the query step and loop go in the main pipeline.
Infinite loops: Avoid creating an infinite loop. This can happen when the action you perform on a record in a loop causes the pipeline to retrigger. For example, suppose your pipeline starts when you update a record in a particular app and table, runs a query, and then, within the loop, updates a record in the same app and table that triggered it. Each time the pipeline updates one of those records, a new pipeline run is triggered. This is especially a problem if the pipeline is updating something dynamic in the record, like the current date and time: that value continually changes, so the pipeline keeps going.
Optimizing loops: Do you have a large number of records to create, update, or delete within the loop in your pipeline? To optimize pipeline performance when working with a large number of records, you can use bulk record sets. In fact, you can use them to optimize your pipeline regardless of the number of records affected in the loop. Bulk record sets are covered later in this guide, under Improving Pipeline Performance Using Bulk Record Sets.
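To see what a query step followed by a for-each loop amounts to in terms of calls to your app, here is a hedged Python sketch against the Quickbase RESTful JSON API (endpoint and payload shapes as published at developer.quickbase.com; the realm, token, table ID, and field IDs are placeholders you would replace):

    import requests

    HEADERS = {
        "QB-Realm-Hostname": "yourrealm.quickbase.com",     # placeholder realm
        "Authorization": "QB-USER-TOKEN your_token_here",   # placeholder token
        "Content-Type": "application/json",
    }
    TABLE_ID = "bxxxxxxxx"                                  # placeholder table ID

    # Query step: find records where field 6 (e.g. Status) equals "Open".
    resp = requests.post(
        "https://api.quickbase.com/v1/records/query",
        headers=HEADERS,
        json={"from": TABLE_ID, "select": [3, 6], "where": "{6.EX.'Open'}"},
    )
    resp.raise_for_status()

    # For-each loop: one update call per record found. This per-record cost is
    # exactly what bulk record sets (covered later) let you avoid.
    for record in resp.json()["data"]:
        record_id = record["3"]["value"]                    # field 3 = Record ID#
        requests.post(
            "https://api.quickbase.com/v1/records",
            headers=HEADERS,
            json={
                "to": TABLE_ID,
                "data": [{"3": {"value": record_id},
                          "7": {"value": "Processed"}}],    # field 7: placeholder
            },
        ).raise_for_status()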
Managing Pipelines on the My Pipelines Page
The "My Pipelines" page in Quickbase is where you can view, organize, and manage all your pipelines. It provides an overview of your automation workflows and allows you to monitor their status, edit configurations, and troubleshoot issues.
Key Features of "My Pipelines" Page
Pipeline List View
Displays all the pipelines you have created.
Shows key details like Pipeline Name, Status, Last Run Time, and Owner.
Pipeline Status Indicators
Active (Green): Running and operational.
Paused (Yellow): Temporarily disabled.
Error (Red): Needs troubleshooting.
Draft (Gray): Not yet activated.
Search & Filter Pipelines
Use a search bar to find a specific pipeline.
Filter pipelines by status, owner, or last run time.
Pipeline Actions (Manage Individual Pipelines)
Edit: Modify pipeline steps, triggers, and actions.
Pause/Resume: Temporarily disable or restart a pipeline.
Duplicate: Copy an existing pipeline to create a similar one.
Delete: Remove a pipeline permanently.
Pipeline Logs & Debugging
View run history to check execution details.
Identify and troubleshoot errors or failed runs.
Download logs for deeper analysis.
Organizing Pipelines
Rename pipelines for better clarity.
Use categories or naming conventions to manage multiple pipelines efficiently.
How to Access & Manage Your Pipelines
Go to Quickbase Pipelines.
Click on "My Pipelines" in the navigation menu.
Use search and filters to find a specific pipeline.
Click on a pipeline to view details, edit, or check logs.
Building powerful pipelines using included features
Conditional statements
Conditional statements in Quickbase Pipelines allow you to control the flow of actions based on specific conditions. They help you filter data, make decisions, and execute different actions based on whether a condition is true or false.
Types of Conditional Statements
1. If-Else Conditions
Allows pipelines to take different actions based on a condition.
Example:
If an order amount is greater than $500, then assign it to a manager.
Else, assign it to a regular sales rep.
2. Boolean Conditions (True/False Checks)
Used to check if a value is true or false.
Example:
If a checkbox field "Urgent" is true, send an email notification.
3. Comparison Conditions
Compares two values using operators like:
Equal To (=)
Not Equal To (!=)
Greater Than (>)
Less Than (<)
Contains (for text fields)
Example:
If status = "Approved", update the record.
4. Multiple Conditions (AND/OR)
Use AND when multiple conditions must be true.
Use OR when at least one condition must be true.
Example:
If (status = "Pending" AND priority = "High"), send an alert.
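Under the hood, Pipelines expresses conditions with the Jinja templating language (covered later in this guide). As a hedged illustration, the AND example above can be reproduced locally with the jinja2 Python library; the field names and the step alias a are hypothetical:

    from jinja2 import Environment

    # Reproduce: if (status = "Pending" AND priority = "High"), send an alert.
    template = Environment().from_string(
        "{% if a.status == 'Pending' and a.priority == 'High' %}"
        "send alert"
        "{% else %}"
        "no action"
        "{% endif %}"
    )

    class Record:                 # stand-in for a trigger step's record ("a")
        status = "Pending"
        priority = "High"

    print(template.render(a=Record()))   # -> send alert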
Link & Fetch Records in Quickbase Pipelines
Link & Fetch is a powerful feature in Quickbase Pipelines that helps establish relationships between records across tables and apps. It allows you to link related records and fetch data automatically when conditions are met.
1. What is "Link & Fetch"?
Link: Creates a connection between records in different tables using a Reference Field.
Fetch: Retrieves related data from a linked record to populate fields automatically.
2. How to Use Link & Fetch in Pipelines
A. Linking Records (Establish a Relationship)
Example: When a new Order is created, link it to the correct Customer in a separate table.
Steps to Link:
Trigger: Start with a "Record Created" event in the Orders Table.
Find a Matching Record: Use the Search Records step in the Customers Table based on an identifier (e.g., Customer Name or ID).
Update Order Record: Set the Customer Reference Field in the Order Table to link it.
B. Fetching Records (Pull Related Data)
Example: When an Order is linked to a Customer, fetch the Customer's Email and update the order details.
Steps to Fetch:
Trigger: When an Order Record is Updated (linked to a Customer).
Fetch Related Data: Use the Find Record step in the Customers Table to get details.
Update the Order Record: Populate the Customer Email field in the Orders Table.
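Here is a hedged sketch of the linking half in Python, using the Quickbase JSON API (payload shapes as published at developer.quickbase.com; table IDs and field IDs are placeholders):

    import requests

    HEADERS = {
        "QB-Realm-Hostname": "yourrealm.quickbase.com",     # placeholder
        "Authorization": "QB-USER-TOKEN your_token_here",   # placeholder
        "Content-Type": "application/json",
    }
    CUSTOMERS, ORDERS = "bcust1234", "bordr1234"            # placeholder table IDs

    def link_order_to_customer(order_id: int, customer_name: str) -> None:
        # Find a matching record: search Customers by name (field 6, placeholder).
        found = requests.post(
            "https://api.quickbase.com/v1/records/query",
            headers=HEADERS,
            json={"from": CUSTOMERS, "select": [3],
                  "where": f"{{6.EX.'{customer_name}'}}"},
        ).json()["data"]
        if not found:
            return      # no matching customer; a real pipeline might branch here
        customer_id = found[0]["3"]["value"]

        # Update the order: field 8 (placeholder) is the customer reference field.
        requests.post(
            "https://api.quickbase.com/v1/records",
            headers=HEADERS,
            json={"to": ORDERS,
                  "data": [{"3": {"value": order_id},
                            "8": {"value": customer_id}}]},
        ).raise_for_status()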
Introduction to included channels
Quickbase provides some included channels to help you build your pipelines. You’re already familiar with one of them: the Quickbase channel. Like the Quickbase channel, the other included channels are available for use in pipelines and do not count toward your channel entitlement quota. You also don’t have to enable them before using them.
The included channels provide functionality to accomplish a variety of tasks or to integrate with third-party systems. In addition to the Quickbase channel, the included channels are Bucket, CSV Handler, Callable Pipelines, Clock, Text, JSON Handler, and Webhooks; each is described under How to Use Included Channels later in this guide.
Call another pipeline
This is especially useful when you have many repeatable steps used in more than one pipeline. It saves you time and prevents errors by configuring those same steps one time rather than having to add the steps to each individual pipeline. It also makes edits easier, since you only have to make changes in the one pipeline. The pipeline you call defines a set of reusable steps needed by the calling pipeline.
Transform your data using Jinja expressions
So far in this course, you have looked at how you can use conditions and included channels in your pipelines. This lesson looks at how you can transform your data within a pipeline using the Jinja templating language. This gives you flexibility as you move data between systems.
You may already be familiar with some of the concepts and logic from working with formula fields and API calls in your apps. Jinja is the language you use in pipelines.
Why use Jinja
You can take advantage of Jinja in your pipelines to accomplish different things, including:
Calculating and converting numbers
Transforming text and dates
Summarizing search results
Clearing the value in a field
When using the Jinja templating language, you create Jinja expressions with placeholders for variables. Those variables can be replaced, manipulated, and transformed in the template.
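Here are a few expressions of the kinds listed above, rendered locally with the jinja2 Python library so you can experiment. The round, upper, and int filters are standard Jinja2; Pipelines adds its own date and number filters on top, so treat the exact filter set as an assumption to verify:

    from jinja2 import Environment

    env = Environment()

    # Calculating and converting numbers: add 8% tax, rounded to cents.
    print(env.from_string("{{ (price * 1.08) | round(2) }}").render(price=19.99))

    # Transforming text.
    print(env.from_string("{{ name | upper }}").render(name="acme corp"))

    # Converting a text value to a number before doing arithmetic with it.
    print(env.from_string("{{ (amount | int) + 8 }}").render(amount="42"))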
Triggering a Pipeline with Multiple Quickbase Events
In Quickbase Pipelines, you can trigger a single pipeline using multiple Quickbase events (such as record creation, updates, or deletions) from different tables. This allows for flexible and efficient automation.
Ways to Trigger a Pipeline with Multiple Quickbase Events
1. Using Multiple Triggers in a Single Pipeline
In your pipeline, add multiple Quickbase "Records" triggers (e.g., "When a record is created," "When a record is updated," etc.).
Each trigger can monitor different events from the same or different tables.
The pipeline runs whenever any of the defined events occur.
Example:
Trigger 1: When a new Order is created.
Trigger 2: When a Customer record is updated.
The pipeline executes if either event occurs.
2. Using a Single Trigger with a Conditional Filter
If all triggers come from the same table, use one trigger (e.g., "When a record is added or modified") and then apply conditional filters to determine the event type.
You can add conditional logic (e.g., "If field X changed, do Y").
Example:
One trigger: "When a record in the Invoices table is modified."
Condition 1: If the Status changes to "Paid," update a related record.
Condition 2: If the Due Date is changed, send a reminder email.
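The branching logic of that example, written out as a hedged Python sketch (field names are hypothetical; in Pipelines itself you would express these checks as trigger conditions or conditional steps):

    # Given the invoice's values before and after the update, pick a branch.
    def route_invoice_change(before: dict, after: dict) -> str:
        if before.get("status") != "Paid" and after.get("status") == "Paid":
            return "update related record"
        if before.get("due_date") != after.get("due_date"):
            return "send reminder email"
        return "no action"

    print(route_invoice_change(
        {"status": "Open", "due_date": "2024-06-01"},
        {"status": "Paid", "due_date": "2024-06-01"},
    ))   # -> update related record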
3. Using Multiple Pipelines with a Central Callable Pipeline
Create separate pipelines for different events (e.g., one for new records, one for updates).
Each of these pipelines calls a shared Callable Pipeline that handles the main workflow.
This keeps pipelines modular and avoids redundant logic.
Example:
Pipeline A: Triggers when a new employee is added.
Pipeline B: Triggers when an employee’s role changes.
Both call Pipeline C, which handles notifications and reporting.
Creating Table-Specific Change Logs with Quickbase Pipelines
A change log tracks modifications to records, such as updates, deletions, or creations. In Quickbase, you can use Pipelines to automatically log these changes in a dedicated Change Log table.
Steps to Create a Table-Specific Change Log
1. Create a "Change Log" Table
Add a new table named "Change Log."
Include fields like:
Record ID (Linked to Original Table)
Changed Field Name
Old Value
New Value
Modified By (User)
Modified Date/Time
2. Build the Pipeline to Track Changes
Trigger: When a Record is Modified
Select the table you want to track (e.g., "Orders").
Choose "When a record is updated" as the trigger.
Add a Condition to Check Field Changes
Use the Conditional step to compare "Before" and "After" values.
Example: If Status changes from "Pending" to "Shipped," log it.
Create a New Record in the "Change Log" Table
Insert a step to add a new record to the "Change Log" table.
Map the original Record ID, Changed Field, Old Value, New Value, and Modified By.
Save and Activate the Pipeline
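The diff-and-log step can be sketched in Python as follows (field names and the tracked-field list are placeholders; a real pipeline would read the before and after values from the trigger step):

    from datetime import datetime, timezone

    TRACKED_FIELDS = ["Status", "Start Date", "End Date"]   # placeholders

    def change_log_rows(record_id: int, before: dict, after: dict,
                        modified_by: str) -> list[dict]:
        """Emit one Change Log row per tracked field that actually changed."""
        rows = []
        for field_name in TRACKED_FIELDS:
            if before.get(field_name) != after.get(field_name):
                rows.append({
                    "Record ID": record_id,
                    "Changed Field Name": field_name,
                    "Old Value": before.get(field_name),
                    "New Value": after.get(field_name),
                    "Modified By": modified_by,
                    "Modified Date/Time": datetime.now(timezone.utc).isoformat(),
                })
        return rows

    print(change_log_rows(42, {"Status": "Pending"}, {"Status": "Shipped"}, "jdoe"))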
Importing CSV Files into your Quickbase App
Importing CSV files into Quickbase allows you to bulk add or update records in your app’s tables. You can do this manually or automate the process using Pipelines.
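As a hedged sketch of the automated route, the following Python reads a CSV locally and upserts its rows through the Quickbase JSON API (payload shape per developer.quickbase.com; the file name, table ID, and column-to-field mapping are placeholders). Inside Pipelines itself you would use the CSV Handler channel plus the bulk upsert steps covered later:

    import csv
    import requests

    HEADERS = {
        "QB-Realm-Hostname": "yourrealm.quickbase.com",     # placeholder
        "Authorization": "QB-USER-TOKEN your_token_here",   # placeholder
        "Content-Type": "application/json",
    }
    TABLE_ID = "bxxxxxxxx"                                  # placeholder
    FIELD_MAP = {"Name": 6, "Email": 7}                     # CSV column -> field ID

    with open("customers.csv", newline="", encoding="utf-8") as f:
        data = [
            {str(fid): {"value": row[col]} for col, fid in FIELD_MAP.items()}
            for row in csv.DictReader(f)
        ]

    resp = requests.post("https://api.quickbase.com/v1/records",
                         headers=HEADERS, json={"to": TABLE_ID, "data": data})
    resp.raise_for_status()
    print(resp.json().get("metadata"))    # counts of created/updated records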
Recreating Automation Use Cases in Pipelines
Moving from automations to pipelines, you gain:
Connections with external platforms
Data transformation and workflow tools
Sophisticated automated workflows
Cascading Deletes: A database feature that automatically removes related records in child tables when a record in the parent table is deleted. It helps maintain referential integrity by ensuring that no orphaned records remain in the database.
How It Works:
When a row in the parent table is deleted, all associated rows in the child table are also deleted automatically.
This behavior is defined using foreign key constraints with the ON DELETE CASCADE option in relational databases like MySQL, PostgreSQL, SQL Server, and Oracle.
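A minimal, runnable demonstration using SQLite from Python (SQLite also supports ON DELETE CASCADE once foreign-key enforcement is switched on):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("PRAGMA foreign_keys = ON")        # SQLite requires this pragma
    con.execute("CREATE TABLE projects (id INTEGER PRIMARY KEY, name TEXT)")
    con.execute("""
        CREATE TABLE tasks (
            id INTEGER PRIMARY KEY,
            project_id INTEGER REFERENCES projects(id) ON DELETE CASCADE,
            title TEXT
        )
    """)
    con.execute("INSERT INTO projects VALUES (1, 'Website redesign')")
    con.executemany("INSERT INTO tasks VALUES (?, 1, ?)",
                    [(1, "Wireframes"), (2, "Copy review")])

    con.execute("DELETE FROM projects WHERE id = 1")   # delete the parent...
    print(con.execute("SELECT COUNT(*) FROM tasks").fetchone()[0])   # -> 0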
Capturing Previous Values: Track changes made to project records.
How It Works:
When any of the following fields changes in the Projects table: Status, Start Date, or End Date,
create a record in the Project Change Log table that details the change, as well as who made the change and when it was made.
Create Missing Parent When Child Added: Refers to a scenario in which a parent record is automatically created if it does not already exist when a child record is inserted. This ensures referential integrity and avoids errors due to missing parent records.
Use Cases:
Database Relationships – When inserting a child record that references a parent, the parent record should be created if it doesn’t exist.
ETL/Data Pipelines – While processing data, if a referenced parent is missing, it should be generated dynamically.
Event-Driven Systems – Inserting related records in a system where parent records might not always exist beforehand.
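A hedged get-or-create sketch in Python against the Quickbase JSON API (payload shapes per developer.quickbase.com; the table IDs, field IDs, and the createdRecordIds metadata key are assumptions to verify against the API docs):

    import requests

    HEADERS = {
        "QB-Realm-Hostname": "yourrealm.quickbase.com",     # placeholder
        "Authorization": "QB-USER-TOKEN your_token_here",   # placeholder
        "Content-Type": "application/json",
    }
    PARENTS, CHILDREN = "bparent12", "bchild123"            # placeholder table IDs

    def ensure_parent(name: str) -> int:
        """Return the parent's Record ID#, creating the parent if missing."""
        found = requests.post(
            "https://api.quickbase.com/v1/records/query",
            headers=HEADERS,
            json={"from": PARENTS, "select": [3],
                  "where": f"{{6.EX.'{name}'}}"},            # field 6: placeholder
        ).json()["data"]
        if found:
            return found[0]["3"]["value"]
        created = requests.post(
            "https://api.quickbase.com/v1/records",
            headers=HEADERS,
            json={"to": PARENTS, "data": [{"6": {"value": name}}]},
        ).json()
        return created["metadata"]["createdRecordIds"][0]

    def add_child(parent_name: str, child_fields: dict) -> None:
        parent_id = ensure_parent(parent_name)               # parent guaranteed
        child_fields["8"] = {"value": parent_id}             # field 8: reference
        requests.post("https://api.quickbase.com/v1/records",
                      headers=HEADERS,
                      json={"to": CHILDREN, "data": [child_fields]},
                      ).raise_for_status()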
How to use included channels
The Pipelines Built-In Channels
1. Bucket:
Define tables on the fly
Add new rows to the defined table
Export the table to CSV
2. CSV Handler:
Fetch a CSV
Iterate over the records in the CSV
3. Callable Pipelines:
Make a call to another pipeline
Receive a call from a pipeline
4. Clock:
Add a sleep time to a pipeline step
5. Text:
Work with regular expressions (regex)
6. JSON Handler:
Fetch a JSON file
Iterate over the records in the JSON
7. Webhooks:
Make HTTP requests
Receive HTTP requests
Improving Pipeline Performance Using Bulk Record Sets
When you work with large numbers of records in a pipeline, your pipeline may need to create, update, or delete those records. For each record touched, the pipeline makes a call to your app. When working with a few records, this may not be important. If you have large numbers of records, however, this can have an impact on the speed of the pipeline. By batching, or grouping, records together, your pipeline makes fewer calls to your app. For example, if your pipeline needs to update 1000 records, it would need to make 1000 calls to your app. By batching those 1000 records into groups, the pipeline makes a few calls instead. This optimizes your pipeline and decreases the load on your app while the pipeline runs.
Prepare Bulk Record Upsert: This action is added to the pipeline before you search for records and enter a loop. It prepares the pipeline for handling records in bulk.
Add Bulk Upsert Row: This action is added to the pipeline within the For each loop. The records are temporarily stored for the next step.
Commit Upsert: This action is added to the pipeline after the loop and outside of it. The action takes the records temporarily stored by Add Bulk Upsert Row and moves them to the Quickbase app table specified.
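The batching idea behind these three steps can be sketched in plain Python against the Quickbase JSON API (a sketch of the concept only, not the pipeline steps themselves; the table and field IDs are placeholders):

    import requests

    HEADERS = {
        "QB-Realm-Hostname": "yourrealm.quickbase.com",     # placeholder
        "Authorization": "QB-USER-TOKEN your_token_here",   # placeholder
        "Content-Type": "application/json",
    }
    TABLE_ID = "bxxxxxxxx"                                  # placeholder

    def bulk_upsert(rows: list[dict], chunk_size: int = 500) -> None:
        # Prepare = start accumulating; Add row = append; Commit = one call/chunk.
        for start in range(0, len(rows), chunk_size):
            chunk = rows[start:start + chunk_size]
            requests.post(
                "https://api.quickbase.com/v1/records",
                headers=HEADERS,
                json={"to": TABLE_ID, "data": chunk},
            ).raise_for_status()

    # 1000 records -> 2 API calls instead of 1000.
    rows = [{"6": {"value": f"Item {i}"}} for i in range(1000)]
    bulk_upsert(rows)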
How to Optimize Your Pipelines: Best Practices, Tips & Tricks
Optimizing pipelines in Quickbase involves improving their efficiency, reducing execution time, and minimizing errors. Here are some best practices:
1. Streamline Triggers & Actions
Avoid Unnecessary Triggers: Ensure pipelines only trigger when needed by using filters effectively.
Batch Updates Instead of Individual Actions: Minimize API calls by grouping updates.
Use Scheduled Triggers: If real-time execution isn’t required, schedule pipelines to run at intervals.
2. Optimize Querying & Filtering
Filter at the Source: Instead of filtering records within a pipeline, apply filters in Quickbase itself.
Use Specific Queries: Select only the fields you need rather than retrieving entire records.
3. Reduce Redundant Steps
Rearrange Steps for Efficiency: Ensure data processing steps happen in the most logical and efficient order.
Use Loops Wisely: Avoid unnecessary loops by using batch processing when possible.
4. Manage Error Handling & Logging
Use Error Notifications: Set up error handling to catch issues early.
Log Pipeline Runs: Store pipeline run history to analyze failures and optimize execution.
5. Minimize External API Calls
Limit Data Requests: Only retrieve data that is essential.
Use Caching Strategies: Store frequently used data to reduce redundant API calls.
6. Leverage Parallel Processing
Run Independent Pipelines in Parallel: If multiple actions don’t depend on each other, execute them concurrently.
7. Monitor & Refine Regularly
Analyze Pipeline Performance: Review execution times and logs to find bottlenecks.
Iterate & Improve: Test modifications on a staging environment before deploying.
Troubleshooting pipelines
Troubleshooting in Quickbase Pipelines involves identifying and fixing issues that prevent a pipeline from running correctly or efficiently. Below are common issues and steps to resolve them.
1. Identify the Issue
Check Pipeline Run History: Review the execution logs for errors or unexpected behavior.
Look for Failed Steps: If a pipeline fails, find the step where it stopped.
Review Error Messages: Quickbase provides error messages that give clues about the problem.
2. Common Pipeline Issues & Fixes
Pipeline not triggering
Data not updating correctly
Pipeline running too slowly
API rate limits exceeded (see the retry sketch after this list)
Loops running infinitely
Pipeline failing on external API calls
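For the rate-limit case, a common remedy in custom integrations is retry with exponential backoff. A hedged Python sketch, assuming the service signals throttling with HTTP 429 and may send a Retry-After header:

    import time
    import requests

    def post_with_retry(url: str, *, headers: dict, json: dict,
                        max_retries: int = 5) -> requests.Response:
        for attempt in range(max_retries):
            resp = requests.post(url, headers=headers, json=json)
            if resp.status_code != 429:        # not throttled: succeed or raise
                resp.raise_for_status()
                return resp
            # Honor Retry-After if present, otherwise back off exponentially.
            wait = float(resp.headers.get("Retry-After", 2 ** attempt))
            time.sleep(wait)
        raise RuntimeError(f"Still rate-limited after {max_retries} attempts")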
3. Debugging Steps
Run in Test Mode: If available, test the pipeline with sample data.
Check Permissions: Ensure the pipeline has access to the necessary tables and records.
Use Logging & Alerts: Add steps to log data or send alerts when something goes wrong.
Break Down Complex Pipelines: Run steps separately to find where it fails.
Validate Input Data: Ensure data formats match the expected values in Quickbase.
4. When to Contact Quickbase Support
Persistent errors that logs don’t explain.
Performance issues even after optimization.
Unexpected Quickbase system errors.
Managing files with Pipelines
File management in Quickbase Pipelines involves automating the handling of file attachments, including uploading, downloading, moving, and processing files between Quickbase and external services like Google Drive, Dropbox, or SFTP.
1. Common File Management Use Cases
Uploading files: Automatically add attachments to Quickbase records from emails, cloud storage, or other sources.
Downloading files: Retrieve and store attachments from Quickbase for backup or processing.
Moving files: Transfer files between different systems (e.g., from Quickbase to Google Drive).
Processing files: Extract data from files (e.g., CSV or Excel) and update Quickbase records.
Deleting old files: Automate the cleanup of outdated or unnecessary attachments.
2. How to Manage Files Using Pipelines
A. Uploading Files to Quickbase
Use a Cloud Storage (Google Drive, Dropbox, OneDrive) or Email trigger to detect new files.
Use the Quickbase "Create Record" or "Update Record" action to attach the file to a record.
Ensure the file format is compatible with Quickbase.
B. Downloading Files from Quickbase
Use a Quickbase trigger (e.g., "When a record is created or modified") to detect when an attachment is added.
Extract the file and move it to another system (e.g., Google Drive or an FTP server).
C. Moving Files Between Systems
Use cloud storage or FTP connectors to transfer files from Quickbase to external systems.
Automate file renaming or sorting based on metadata.
D. Processing File Data
If working with CSVs or spreadsheets:
Use the CSV Handler channel to extract data.
Map the extracted fields to Quickbase tables for record updates.
E. Deleting or Archiving Old Files
Set up a scheduled pipeline to identify and delete outdated attachments.
Move older files to an archive storage system before deletion.
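As a hedged sketch of the download case: Quickbase exposes a file endpoint (GET /v1/files/{tableId}/{recordId}/{fieldId}/{versionNumber}, per developer.quickbase.com) that returns the attachment base64-encoded. All IDs below are placeholders, and the base64 response body is an assumption to verify against the API docs:

    import base64
    import requests

    HEADERS = {
        "QB-Realm-Hostname": "yourrealm.quickbase.com",     # placeholder
        "Authorization": "QB-USER-TOKEN your_token_here",   # placeholder
    }

    def download_attachment(table_id: str, record_id: int, field_id: int,
                            version: int, dest: str) -> None:
        url = (f"https://api.quickbase.com/v1/files/"
               f"{table_id}/{record_id}/{field_id}/{version}")
        resp = requests.get(url, headers=HEADERS)
        resp.raise_for_status()
        with open(dest, "wb") as f:
            f.write(base64.b64decode(resp.content))         # body is base64

    download_attachment("bxxxxxxxx", 42, 9, 1, "contract.pdf")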