Chapter 1. Introduction to Data Pipelines

Behind every glossy dashboard, machine learning model, and business-changing insight is data. Not just raw data, but data collected from numerous sources that must be cleaned, processed, and combined to deliver value. The famous phrase “data is the new oil” has proven true. Just like oil, the value of data is in its potential after it’s refined and delivered to the consumer. Also like oil, it takes efficient pipelines to deliver data through each stage of its value chain.

This Pocket Reference describes what data pipelines are and shows how they fit into a modern data ecosystem. It covers common considerations and key decision points when implementing pipelines, such as batch versus streaming data ingestion, building versus buying tooling, and more. Though it does not focus on a single language or platform, it addresses the most common decisions made by data professionals while discussing foundational concepts that apply to home-grown solutions, open source frameworks, and commercial products.

What Are Data Pipelines?

Data pipelines are sets of processes that move and transform data from various sources to a destination where new value can be derived. They are the foundation of analytics, reporting, and machine learning capabilities.

The complexity of a data pipeline depends on the state and structure of the source data as well as the needs of the analytics project. In its simplest form, a pipeline may only copy data from one source, such as a REST API, and load it to a destination such as a SQL table in a data warehouse. In practice, however, pipelines typically consist of multiple steps, including data extraction, data preprocessing, data validation, and at times training or running a machine learning model before delivering data to its final destination. Pipelines often contain tasks from multiple systems and programming languages. What’s more, data teams typically own and maintain numerous data pipelines that share dependencies and must be coordinated. Figure 1-1 illustrates a simple pipeline.

Figure 1-1. A simple pipeline that loads server log data into an S3 bucket, does some basic processing and structuring, and loads the results into an Amazon Redshift database.
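To make the idea concrete, here is a minimal sketch in Python of the simplest kind of pipeline described above: extract records from a REST API and load them into a database table. The endpoint URL, field names, and table are hypothetical, and SQLite stands in for a production data warehouse so the example stays self-contained.

```python
import sqlite3

import requests

# Extract: pull a batch of records from a (hypothetical) REST API.
response = requests.get("https://api.example.com/v1/orders", timeout=30)
response.raise_for_status()
records = response.json()  # assume a list of {"order_id": ..., "order_total": ...}

# Load: insert the records into a destination table.
# In a real pipeline this would be a warehouse such as Redshift or Snowflake.
conn = sqlite3.connect("warehouse.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, order_total REAL)"
)
conn.executemany(
    "INSERT INTO orders (order_id, order_total) VALUES (?, ?)",
    [(r["order_id"], r["order_total"]) for r in records],
)
conn.commit()
conn.close()
```

Even this tiny example has the two halves that every pipeline shares: an extraction step and a load step, with room for validation and transformation in between.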

Who Builds Data Pipelines?

With the popularization of cloud computing and Software as a Service (SaaS), the number of data sources organizations need to make sense of has exploded. At the same time, the demand for data to feed machine learning models, data science research, and time-sensitive insights is higher than ever. To keep up, data engineering has emerged as a key role on analytics teams. Data engineers specialize in building and maintaining the data pipelines that underpin the analytics ecosystem.

A data engineer’s purpose isn’t simply to load data into a data warehouse. Data engineers work closely with data scientists and analysts to understand what will be done with the data and help bring their needs into a scalable production state.

Data engineers take pride in ensuring the validity and timeliness of the data they deliver. That means testing, alerting, and contingency plans for when something goes wrong. And yes, something will eventually go wrong!

The specific skills of a data engineer depend somewhat on the tech stack their organization uses. However, there are some common skills that all good data engineers possess.

SQL and Data Warehousing Fundamentals

Data engineers need to know how to query databases, and SQL is the universal language for doing so. Experienced data engineers know how to write high-performance SQL and understand the fundamentals of data warehousing and data modeling. Even if a data team includes data warehousing specialists, a data engineer with warehousing fundamentals is a better partner and can fill more complex technical gaps as they arise.
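As a small illustration, the sketch below runs the kind of warehouse-style query a data engineer writes every day: a fact table joined to a dimension table and aggregated. The table names and sample rows are made up, and SQLite stands in for a warehouse such as Redshift or Snowflake.

```python
import sqlite3

# Build a tiny, hypothetical star schema: one fact table and one dimension.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE fact_orders (order_id INTEGER, customer_id INTEGER, order_total REAL);
    INSERT INTO dim_customer VALUES (1, 'US'), (2, 'EU');
    INSERT INTO fact_orders VALUES (100, 1, 25.00), (101, 1, 40.00), (102, 2, 10.00);
""")

# A typical warehouse query: join the fact table to a dimension and aggregate.
query = """
    SELECT c.region, SUM(o.order_total) AS total_revenue
    FROM fact_orders o
    JOIN dim_customer c ON c.customer_id = o.customer_id
    GROUP BY c.region;
"""
for region, revenue in conn.execute(query):
    print(region, revenue)
```

The specifics of the schema and query will differ in every organization, but comfort with fact and dimension tables, joins, and aggregations is the common thread.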

Python and/or Java

The language a data engineer is proficient in will depend on the tech stack of their team, but either way, a data engineer isn’t going to get the job done with no-code tools alone, even if they have some good ones in their arsenal. Python and Java currently dominate in data engineering, but newcomers like Go are emerging.

Distributed Computing

Problems involving high data volumes and the need to process data quickly have led data engineers to work with distributed computing platforms. Distributed computing combines the power of multiple systems to efficiently store, process, and analyze high volumes of data.

One popular example of distributed computing in analytics is the Hadoop ecosystem, which includes distributed file storage via HDFS, processing via MapReduce, data analysis via Pig, and more. Apache Spark is another popular distributed processing framework.
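As a brief illustration, the following sketch uses PySpark (Spark’s Python API) to aggregate some made-up log records; it assumes pyspark is installed and runs against a local Spark session. In practice the DataFrame would be read from a distributed store such as HDFS or S3, and the same aggregation would be spread across a cluster of machines.

```python
from pyspark.sql import SparkSession

# Start (or reuse) a Spark session; locally this runs in a single process,
# but the same code scales out across a cluster.
spark = SparkSession.builder.appName("status-code-counts").getOrCreate()

# A tiny, hypothetical sample of web server log records. In a real pipeline
# this DataFrame would be read from HDFS, S3, or another distributed store.
logs = spark.createDataFrame(
    [("2021-01-01", 200), ("2021-01-01", 500), ("2021-01-02", 200)],
    ["log_date", "status_code"],
)

# Spark plans the aggregation and executes it in parallel across executors.
counts = logs.groupBy("status_code").count()
counts.show()

spark.stop()
```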

Though not all data pipelines require the use of distributed computing, data engineers need to know how and when to utilize such a framework.

Basic System Administration

A data engineer is expected to be proficient on the Linux command line and able to do things like understand application logs, schedule cron jobs, and troubleshoot firewall and other security settings. Even when working fully on a cloud provider such as Amazon Web Services (AWS), Azure, or Google Cloud, they’ll end up using those skills to get cloud services working together and data pipelines deployed.

A Business Goal Mentality

A good data engineer doesn’t just possess technical skills. They may not interface with business users on a regular basis, but the analysts and data scientists on the team certainly will. Data engineers make better architectural decisions when they’re aware of why they’re building a pipeline in the first place.

Why Build Data Pipelines?

Just as the tip of the iceberg is all that a passing ship can see, the end product of the analytics workflow is all that the majority of an organization sees. Executives see dashboards and pristine charts. Marketing shares cleanly packaged insights on social media. Customer support optimizes call center staffing based on the output of a predictive demand model.

What most people outside of analytics often fail to appreciate is that generating what is seen requires complex machinery that is unseen. For every dashboard and insight that a data analyst generates, and for each predictive model developed by a data scientist, there are data pipelines working behind the scenes. It’s not uncommon for a single dashboard, or even a single metric, to be derived from data originating in multiple source systems. In addition, data pipelines do more than just extract data from sources and load it into simple database tables or flat files for analysts to use. Raw data is refined along the way: cleaned, structured, normalized, combined, aggregated, and at times anonymized or otherwise secured. In other words, there’s a lot more going on below the waterline.

How Are Pipelines Built?

Along with data engineers, numerous tools for building and supporting data pipelines have emerged in recent years. Some are open source, some are commercial, and some are home grown. Some pipelines are written in Python, some in Java, some in another language, and some with no code at all.

Throughout this Pocket Reference I explore some of the most popular products and frameworks for building pipelines, as well as how to determine which to use based on your organization’s needs and constraints.

Though I do not cover all such products in depth, I do provide examples and sample code for some. All code in this book is written in Python and SQL. These are the most common, and in my opinion the most accessible, languages for building data pipelines.

In addition, pipelines are not just built: they are monitored, maintained, and extended. Data engineers are tasked not just with delivering data once, but with building pipelines and supporting infrastructure that deliver and process it reliably, securely, and on time. It’s no small feat, but when it’s done well, the value of an organization’s data can truly be unlocked.
