Data Pipelines — 01

Chitra Mudgal
Oct 22, 2020

--

In its literal sense, a pipeline is a linear sequence of specialized modules, each feeding the next. A data pipeline, likewise, is a series of steps that move data from one system to another.
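To make that concrete, here is a minimal sketch in Python that models a pipeline as three chained steps. The step names and file paths are hypothetical, purely for illustration.

    def read_source(path):
        # Step 1: pull the raw records out of the source system.
        with open(path) as f:
            return f.readlines()

    def clean(records):
        # Step 2: a simple processing step; strip whitespace, drop blanks.
        return [r.strip() for r in records if r.strip()]

    def write_destination(records, path):
        # Step 3: land the records in the destination system.
        with open(path, "w") as f:
            f.write("\n".join(records))

    def run_pipeline(src, dst):
        # Each step feeds the next, like sections of a pipe.
        write_destination(clean(read_source(src)), dst)

    # Example usage (assumes a local file named orders_source.txt exists):
    # run_pipeline("orders_source.txt", "orders_destination.txt")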

Why Data Pipelines?

  1. To ensure a consistent flow of data from one location to another.
  2. To enable useful analysis: analysis cannot begin until data from all the relevant systems is available.
  3. To combine data from multiple systems in ways that make sense for analysis.

Data Pipelines and ETLs

Although the terms data pipeline and ETL are used interchangeably these days, they are not the same. Traditionally, ETLs were designed to process data in batches: they extract data from a source system, transform it, and load it into a destination. Data pipelines also transfer data from one system to another. So what's the difference?
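Before getting to the difference, here is a minimal batch ETL sketched in Python to make the three stages concrete. The CSV source, the SQLite destination, and the order_id, quantity, and unit_price columns are all hypothetical.

    import csv
    import sqlite3

    def extract(path):
        # Extract: read the raw rows out of the source file.
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def transform(rows):
        # Transform: cast types and derive a total per order.
        return [
            (row["order_id"], float(row["quantity"]) * float(row["unit_price"]))
            for row in rows
        ]

    def load(records, db_path):
        # Load: write the transformed batch into the destination table.
        con = sqlite3.connect(db_path)
        con.execute(
            "CREATE TABLE IF NOT EXISTS order_totals (order_id TEXT, total REAL)"
        )
        con.executemany("INSERT INTO order_totals VALUES (?, ?)", records)
        con.commit()
        con.close()

    def run_etl(src_csv, db_path):
        # The whole batch moves through all three stages in one run.
        load(transform(extract(src_csv)), db_path)

    # Example usage (assumes a local orders.csv with the columns above):
    # run_etl("orders.csv", "warehouse.db")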

Difference between Data Pipelines and ETLs

I have been able to gather only these two differences so far:

  1. Data pipelines may or may not involve data transformation.
  2. Data pipelines could involve one or more ETLs.

Thus, ETL is effectively a subset of data pipelines, as the sketch below illustrates.
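To make the two differences concrete, here is a small sketch. The replicate step moves data with no transformation at all (difference 1), and the pipeline function chains several stages, any of which may itself be a full ETL such as run_etl from the sketch above (difference 2). The paths and stage names are hypothetical.

    import shutil

    def replicate(src_path, dst_path):
        # A legitimate pipeline step: copy the data as-is, no transform.
        shutil.copyfile(src_path, dst_path)

    def pipeline(stages):
        # A pipeline is just an ordered sequence of stages; each stage
        # is a callable and may itself be an entire ETL.
        for stage in stages:
            stage()

    # Example usage (paths are placeholders):
    # pipeline([
    #     lambda: run_etl("orders.csv", "staging.db"),     # an ETL stage
    #     lambda: replicate("staging.db", "warehouse.db"), # a pure move
    # ])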

Which Use Cases require Data Pipelines?

  1. The business requires data from multiple sources.
  2. Real-time analysis is required.
  3. Multiple data silos exist.

Various Types of Data Pipeline Solutions

Data pipeline solutions come in several forms: batch processing, real-time (streaming) processing, cloud-native solutions, or open-source tools for those looking to save on cost.

I will discuss these solutions and techniques in subsequent write-ups.

--

Chitra Mudgal

Chitra is a Sr. Architect at Publicis Sapient, India. She is an expert in Java and cloud solution architecture and is deeply involved in data integration projects.