Qlik Compose for Data Lakes: End-to-End Data Integration for Hadoop Data Lakes
Deliver timely, high-quality and well-governed transactional data to the business

Data Lakes enable enterprises to process vast data volumes and address use cases that range from batch analysis to streaming analytics and machine learning. Whether on premises or in the cloud, Data Lakes provide an efficient, scalable and centralized foundation for modern analytics.

Data Lake Ingestion with Qlik Replicate

Qlik Replicate is a simple, universal and real-time data ingestion solution that delivers data efficiently to any major Hadoop/Data Lake platform. With Qlik Replicate, architects and database administrators can eliminate manual coding with a 100% automated interface that quickly and easily configures, controls and monitors bulk loads as well as real-time updates. You can ingest data across hundreds or thousands of endpoints – including any major RDBMS, legacy system, data warehouse, Data Lake distribution or streaming platform – through a single pane of glass. Qlik Replicate also minimizes production impact and administrative burden by capturing source updates from transaction logs, with no need for agents on the source systems.
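To make the ingestion pattern concrete, the sketch below shows, in generic terms, how log-based change data capture typically represents each committed change as a small record (operation type plus row image) that can be landed in a lake staging area for downstream processing. This is an illustration only, not Qlik Replicate's API or output format; the record fields and file layout are assumptions.

```python
import json
from dataclasses import dataclass
from typing import Iterable

# Illustrative shape of a log-based CDC record: one committed change
# (insert/update/delete) per row, rather than a full table re-copy.
# Field names are assumptions, not Qlik Replicate's actual format.
@dataclass
class ChangeRecord:
    op: str          # "I" = insert, "U" = update, "D" = delete
    table: str       # source table name
    key: dict        # primary-key columns and values
    row: dict        # row image after the change (empty for deletes)
    commit_ts: str   # commit timestamp taken from the source transaction log

def land_changes(changes: Iterable[ChangeRecord], staging_dir: str) -> None:
    """Append change records to a staging file per source table,
    ready to be merged into the lake's ODS/HDS tables downstream."""
    for c in changes:
        path = f"{staging_dir}/{c.table}.changes.jsonl"
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(
                {"op": c.op, "key": c.key, "row": c.row, "ts": c.commit_ts}
            ) + "\n")
```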

Data Lake Transformation with Qlik Compose for Data Lakes

Qlik Compose for Data Lakes automates the creation and loading of Hadoop Hive structures, as well as the transformation of enterprise data within them. Our solution fully automates the pipeline of BI-ready data into Hive, enabling you to automatically create both Operational Data Stores (ODS) and Historical Data Stores (HDS). And we leverage the latest innovations in Hadoop, such as the new ACID MERGE SQL capability available today in Apache Hive (shipped with the Hortonworks HDP 2.6 distribution), to automatically and efficiently process data insertions, updates and deletions.
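As an illustration of the ACID MERGE pattern described above, the sketch below applies a batch of staged change records (insert/update/delete) to an ODS table with a single Hive MERGE statement. It is not the SQL that Compose for Data Lakes generates; the table and column names are hypothetical, the pyhive client is just one assumed way to submit the statement, and the target must be a Hive transactional (ACID) table.

```python
from pyhive import hive  # assumption: pyhive client; any Hive connection works

# Generic illustration of Hive's ACID MERGE applying a batch of staged
# change records (op = I/U/D) to an Operational Data Store table.
# Table and column names are hypothetical, not Compose-generated SQL.
MERGE_SQL = """
MERGE INTO ods.customers AS t
USING staging.customers_changes AS s
ON t.customer_id = s.customer_id
WHEN MATCHED AND s.op = 'D' THEN DELETE
WHEN MATCHED AND s.op = 'U' THEN UPDATE SET name = s.name, city = s.city
WHEN NOT MATCHED AND s.op = 'I' THEN INSERT VALUES (s.customer_id, s.name, s.city)
"""

def apply_changes(host: str = "hive-server", port: int = 10000) -> None:
    """Run one MERGE pass so inserts, updates and deletes from the source
    are reflected in the ODS without hand-written reconciliation code."""
    conn = hive.connect(host=host, port=port)
    try:
        cursor = conn.cursor()
        cursor.execute(MERGE_SQL)
        cursor.close()
    finally:
        conn.close()
```

For a historical store (HDS), a common variant of this pattern is to close out the previous version of a matched row and insert a new version, so the full history of changes is preserved over time.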
 

Business Benefits
  • Faster Data Lake operational readiness
  • Reduced development time
  • Reduced reliance on Hadoop skills
  • Easier compliance