New DevOps Features to Future-proof your Data Pipelines

In today’s data-driven world, managing real-time, trusted data pipelines is crucial for AI and business success. Qlik™ Talend Cloud’s new DevOps capabilities—like automated schema evolution, GitHub version control, and Import/Export APIs—simplify and optimize pipeline management.

Introduction

In the era of artificial intelligence (AI), the need for accurate, high-quality, real-time data has never been greater. Qlik Talend Cloud, the market-leading data integration and quality offering from Qlik™, utilizes AI-enriched, no-code pipelines to rapidly deliver real-time, trusted data throughout your organization, driving AI innovation, intelligent decisions, and business modernization.

Just a few months ago, Qlik™ announced the general availability of Qlik Talend Cloud, and the company continues to make it easier for developers and data engineers to build, manage, and fine-tune high-performance data pipelines that support their enterprise needs and use cases.

New DevOps Capabilities

Today, Qlik™ is announcing several new DevOps capabilities and innovations that make it even simpler for customers to adopt and manage Qlik™ Talend Cloud for ingesting, transforming, modeling, and collaborating on data for analytics and AI needs.

Some of the key new capabilities that are being launched include:

  • Automated Schema Evolution
  • Version Control with GitHub
  • New Import/Export APIs

Let’s dive into each of them a little bit more.

1. Automated Schema Evolution

In modern data ecosystems, data sources and business requirements are often dynamic, so being able to evolve schemas—i.e., the structure of data—without interrupting the flow of data is critical for smooth operations.

Schema evolution lets users easily detect structural changes (also known as schema drift) across multiple data sources and then control how those changes are applied to a project. Qlik™ previously supported schema evolution and schema drift handling for Replication pipelines, but multi-step data pipelines relied on a manual process: data engineers had to monitor the sources themselves and make the relevant changes to the pipelines.
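To make the idea of drift detection concrete, here is a minimal, purely illustrative Python sketch that compares two schema snapshots and reports what changed; it is a conceptual toy, not how Qlik Talend Cloud implements the feature.

```python
# Illustrative only: schemas modeled as {column_name: data_type} snapshots.
def detect_drift(previous: dict, current: dict) -> dict:
    """Report columns that were added, dropped, or changed type between snapshots."""
    return {
        "added":   [c for c in current if c not in previous],
        "dropped": [c for c in previous if c not in current],
        "retyped": [c for c in current if c in previous and current[c] != previous[c]],
    }

previous = {"id": "INT", "name": "VARCHAR(50)"}
current  = {"id": "INT", "name": "VARCHAR(100)", "loyalty_tier": "VARCHAR(10)"}
print(detect_drift(previous, current))
# {'added': ['loyalty_tier'], 'dropped': [], 'retyped': ['name']}
```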

Qlik™ announces the release of Automated Schema Evolution, which detects the DDL changes made to the source database schema and applies them automatically, dramatically simplifying the effort required to modify the pipelines.

Any change to the data structure in the source database is automatically picked up and applied to the target structure, including the Type 2 history that provides a comprehensive, live view of the data, and the change is carried through the pipelines without any manual intervention.

The result is a much more automated, well-oiled data operation, with fewer errors and less downstream breakage, and a storage (bronze) layer that is always up to date, even when new values or columns are added at the source.

So, if a new column is added to the source database, it is automatically detected and captured with Type 2 history, and the appropriate changes are reflected in the landing and storage zone tables without the need for any data reloads. For changes that have an impact beyond the bronze layer, users receive notifications of the schema changes so they can react quickly and adjust the downstream pipelines.
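As a toy illustration of what Type 2 history means here, the sketch below keeps every prior version of a record alongside the current one. It is a deliberately simplified model, not Qlik's internal implementation; a column that appears at the source simply shows up in the incoming values and is versioned like any other change.

```python
from datetime import datetime, timezone

def apply_change(history, key, new_values):
    """Close the open version of `key` if its values changed, then append a new version."""
    now = datetime.now(timezone.utc).isoformat()
    current = next((r for r in history if r["key"] == key and r["valid_to"] is None), None)
    if current and current["values"] == new_values:
        return history  # nothing changed; keep the open version as-is
    if current:
        current["valid_to"] = now  # close the superseded version (Type 2 behavior)
    history.append({"key": key, "values": new_values, "valid_from": now, "valid_to": None})
    return history

history = []
apply_change(history, "cust-42", {"name": "Ada", "country": "RO"})
# A hypothetical new source column (loyalty_tier) arrives in the next load:
apply_change(history, "cust-42", {"name": "Ada", "country": "RO", "loyalty_tier": "gold"})
print(len(history))  # 2 versions: the closed original plus the current one
```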

Here is a quick demo on the new Automated Schema Evolution functionality:

With Automated Schema Evolution, users can set distinct rules and have access to a series of fine-grained controls for schema evolution. These include the specific action to take for each type of DDL event, such as automatically adding a column to the target or suspending a table when it is renamed. These configurations allow users to prevent downstream impact and control behavior in the target data platform.

See example below.
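As a complement to the product configuration shown in the example, here is a small, hypothetical Python sketch of what such per-event rules amount to conceptually; the event names and actions are placeholders, not Qlik Talend Cloud's actual configuration keys.

```python
# Hypothetical rule table: DDL event type -> action to take in the target.
SCHEMA_EVOLUTION_RULES = {
    "add_column":   "apply_to_target",   # let new source columns flow through automatically
    "drop_column":  "ignore",            # keep the target column, stop loading it
    "rename_table": "suspend_table",     # pause the table and alert downstream owners
    "change_type":  "apply_to_target",
}

def handle_ddl_event(event_type: str) -> str:
    # Fall back to a conservative default for events without an explicit rule.
    return SCHEMA_EVOLUTION_RULES.get(event_type, "suspend_table")

print(handle_ddl_event("rename_table"))  # suspend_table
```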

Automated Schema Evolution is now generally available, and you can learn more about the feature on the Qlik™ help/documentation page here.

2. Version Control with GitHub 

Version Control empowers developers to work concurrently on different aspects of a project—such as adding new features or fixing bugs—without disrupting the main version of the project. This approach supports incremental, collaborative, and secure development, allowing teams to release updates progressively while maintaining stability. 

Qlik™ launches Version Control for Qlik™ Talend Cloud Pipelines through efficient and secure GitHub integration and branching support. 

Every Qlik™ Talend Cloud user in the organization can utilize their GitHub account with a personal access token to connect their projects to any authorized GitHub repository. 

More importantly, the new ‘branching’ feature enables multiple developers to do the following (illustrated with a GitHub-level sketch after the list):

  • Create branches to isolate a feature or edit a project without affecting the ‘Main’;
  • Customize the schema prefix to be added to all branch datasets to avoid conflicts;
  • Switch between branches, share branches, or apply changes from GitHub to the working branch or ‘Main’;
  • Delete branches, if need be, when the specific activity is completed.
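For the underlying GitHub mechanics referenced above, here is a short sketch that creates an isolated feature branch using the public GitHub REST API and a personal access token. This is plain GitHub usage with placeholder owner, repository, and token values, not Qlik Talend Cloud's built-in integration.

```python
import requests

# Placeholders: substitute your own organization, repository, and personal access token.
OWNER, REPO, TOKEN = "my-org", "qtc-pipelines", "ghp_xxx"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.github+json"}
BASE = f"https://api.github.com/repos/{OWNER}/{REPO}"

# 1. Look up the commit that 'main' currently points at.
main_ref = requests.get(f"{BASE}/git/ref/heads/main", headers=HEADERS).json()
main_sha = main_ref["object"]["sha"]

# 2. Create an isolated feature branch from that commit, leaving 'main' untouched.
requests.post(f"{BASE}/git/refs", headers=HEADERS,
              json={"ref": "refs/heads/feature/new-transform", "sha": main_sha})
```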

This parallel development model allows team members to sync their changes efficiently and collaborate without conflicts or putting each other’s work at risk.

Users can even open an existing project located on a GitHub repository. This allows sharing projects across spaces and tenants. 

Here is a quick demo on the new Version Control functionality:

Using GitHub, developers can submit pull requests, where other team members can review and approve the code before it’s merged back into the main project. This review process ensures collaborative quality control and reduces the risk of introducing errors while enabling developers to safely commit and push the changes to the central repository. 
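The review step itself is standard GitHub. As a sketch, the request below opens a pull request from a feature branch back to main via the public GitHub REST API; the owner, repository, token, and branch names are placeholders.

```python
import requests

OWNER, REPO, TOKEN = "my-org", "qtc-pipelines", "ghp_xxx"  # placeholders
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.github+json"}

pr = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/pulls",
    headers=HEADERS,
    json={"title": "Add new transform step",
          "head": "feature/new-transform",   # branch containing the proposed changes
          "base": "main"},                   # branch the changes merge back into
)
print(pr.json().get("html_url"))  # link reviewers can use to review and approve
```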

Version Control, branching, and schema prefixes also enhance security, minimizing the risk of data loss or corruption while fostering collaborative engagement.

The Version Control feature is now generally available in Qlik™ Talend Cloud Standard edition (and upwards). For more details on the Version Control feature, and how to get started, please visit the documentation page here.

3. New Import/Export REST APIs

Along with the other DevOps innovations, Qlik™ announces the launch of a new set of REST API endpoints for importing and exporting pipelines and projects in Qlik™ Talend Cloud (QTC). This, in turn, enables users to start building and managing their data pipelines using a Continuous Integration/Continuous Deployment (CI/CD) approach.

These new APIs programmatically reproduce the capabilities available within the user interface and allow developers to manage projects across tenants and spaces for deployment purposes in an easy-to-use fashion.

In just a few API calls, users can now read project variables (referred to as bindings), export projects, and re-import them.

The export API creates a ZIP file containing all necessary project contents for re-import. Besides all project-related resources (including tasks, datasets, etc.), the export API also generates a separate “bindings” file that lists all project parameters and variables for users to customize on re-import.

To import a project, users have the choice of either creating a new project (using the dedicated API) or importing within an existing project. In the latter case, users only need to read/update the bindings and import the project contents to overwrite the existing one.
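To show how these pieces might fit into a CI/CD script, here is a hypothetical sketch of the export, edit-bindings, and import flow. The endpoint paths, file names, and variable names below are illustrative placeholders rather than the documented Qlik Talend Cloud API, so check the official API reference for the real endpoints and payloads.

```python
import io
import json
import zipfile
import requests

# Placeholders: tenant URL, API key, project IDs, endpoint paths, and the bindings
# layout are all illustrative, not the documented Qlik Talend Cloud API.
TENANT = "https://your-tenant.example.com"
HEADERS = {"Authorization": "Bearer <api-key>"}

# 1. Export the source project as a ZIP archive.
export = requests.get(f"{TENANT}/api/v1/projects/<source-project-id>/export", headers=HEADERS)
archive = zipfile.ZipFile(io.BytesIO(export.content))

# 2. Read the bindings and point the project variables at the target environment.
bindings = json.loads(archive.read("bindings.json"))
bindings["warehouse_connection"] = "PROD_WAREHOUSE"   # illustrative variable name

# 3. Import into an existing target project, overwriting its contents.
requests.post(f"{TENANT}/api/v1/projects/<target-project-id>/import",
              headers=HEADERS,
              files={"file": ("project.zip", export.content)},
              data={"bindings": json.dumps(bindings)})
```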

The Import/Export API endpoints being launched include:

Please find below a quick demo of the new Import/Export APIs:

This feature is now generally available. For more details on the Export/Import APIs, visit the documentation page here.

Summary 

These new DevOps features in Qlik™ Talend Cloud represent a significant step forward in simplifying and optimizing data pipeline management. By automating key processes like schema evolution, integrating robust version control capabilities, and enabling seamless CI/CD workflows through new APIs, we’re empowering data teams to work faster, smarter, and with greater confidence.  

As businesses continue to rely on real-time, trusted data for AI-driven decision-making, these innovations ensure that your data pipelines are not only resilient and scalable but also future-proof. Whether you’re managing complex data environments or developing cutting-edge analytics solutions, these enhancements make it easier to adapt, collaborate, and stay ahead of the curve. These new features will certainly help you unlock even greater value from your data, driving innovation and business transformation.

For information about Qlik™, click here: qlik.com.
For specific and specialized solutions from QQinfo, click here: QQsolutions.
In order to be in touch with the latest news in the field, unique solutions explained, but also with our personal perspectives regarding the world of management, data and analytics, click here: QQblog !