Serverless Data Processing with Dataflow

This course is intended for big data practitioners who want to further their understanding of Dataflow in order to advance their data processing applications.

Private
A private training session for your team. Groups can be of any size, at a location of your choice including our training centres.


As a Google Cloud Partner, Jellyfish has been selected to deliver this three-day course, which will help you meet day-to-day data processing needs within your business.

Our expert practitioner will start with the foundations, showing you how Apache Beam and Dataflow work together to meet your data processing needs efficiently without the risk of vendor lock-in.

The section on developing pipelines will show you how to convert your business logic into data processing applications that can run on Dataflow. Toward the end of the course, you'll focus on operations, reviewing the most important lessons for operating a data application on Dataflow, including monitoring, troubleshooting, testing, and reliability.

Our Serverless Data Processing with Dataflow course is available as a private training session that can be delivered via Virtual Classroom or at a location of your choice in the US.

Course overview

Who should attend:

This course is suitable for data engineers, data analysts and data scientists aspiring to develop data engineering skills.

What you'll learn:

By the end of this course, you will be able to:

  • Demonstrate how Apache Beam and Dataflow work together to fulfill your organization's data processing needs
  • Summarize the benefits of the Beam Portability Framework and enable it for your Dataflow pipelines
  • Enable Shuffle and Streaming Engine, for batch and streaming pipelines respectively, for maximum performance
  • Enable Flexible Resource Scheduling for more cost-efficient performance
  • Select the right combination of IAM permissions for your Dataflow job
  • Implement best practices for a secure data processing environment
  • Select and tune the I/O of your choice for your Dataflow pipeline
  • Use schemas to simplify your Beam code and improve the performance of your pipeline
  • Develop a Beam pipeline using SQL and DataFrames
  • Perform monitoring, troubleshooting, testing and CI/CD on Dataflow pipelines

Prerequisites

To get the most out of this course, you should have an understanding of building batch data pipelines and building resilient streaming analytics systems.

Course agenda

Module 1: Introduction
  • Introduce the course objectives
  • Demonstrate how Apache Beam and Dataflow work together to fulfill your organization's data processing needs
Module 2: Beam Portability
  • Summarize the benefits of the Beam Portability Framework
  • Customize the data processing environment of your pipeline using custom containers
  • Review use cases for cross-language transformations
  • Enable the Portability framework for your Dataflow pipelines
Module 3: Separating Compute & Storage with Dataflow
  • Enable Shuffle and Streaming Engine, for batch and streaming pipelines respectively, for maximum performance
  • Enable Flexible Resource Scheduling for more cost-efficient performance
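
For illustration, here is a minimal sketch of how these service features are typically switched on through pipeline options in the Beam Python SDK. The project, region and bucket names are placeholder assumptions, and you would keep only the options relevant to your pipeline:

```python
# Illustrative only: selected Dataflow service options in the Beam Python SDK.
# Project, region and bucket values are placeholders; pick the options that
# apply to your pipeline (FlexRS is for batch, Streaming Engine for streaming).
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",                 # placeholder project ID
    region="us-central1",
    temp_location="gs://my-bucket/tmp",   # placeholder bucket
    enable_streaming_engine=True,         # Streaming Engine (streaming pipelines)
    flexrs_goal="COST_OPTIMIZED",         # Flexible Resource Scheduling (batch)
)

with beam.Pipeline(options=options) as p:
    p | beam.Create(["hello", "dataflow"]) | beam.Map(print)
```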
Module 4: IAM, Quotas & Permissions
  • Select the right combination of IAM permissions for your Dataflow job
  • Determine your capacity needs by inspecting the relevant quotas for your Dataflow jobs
Module 5: Security
  • Select your zonal data processing strategy using Dataflow, depending on your data locality needs
  • Implement best practices for a secure data processing environment
Module 6: Beam Concepts Review
  • Review the main Apache Beam concepts (Pipeline, PCollections, PTransforms, Runner, reading/writing, utility PTransforms, side inputs), bundles and the DoFn lifecycle
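
To give a flavour of what this review covers, here is a minimal, self-contained word-count style pipeline that runs locally on the DirectRunner; the input lines are illustrative:

```python
# Minimal Beam pipeline illustrating core concepts: Pipeline, PCollection,
# PTransform, and a DoFn applied with ParDo. Runs locally on the DirectRunner.
import apache_beam as beam

class SplitWords(beam.DoFn):
    def process(self, element):
        # Emit one output element per word in the input line.
        for word in element.split():
            yield word

with beam.Pipeline() as p:
    lines = p | beam.Create([                  # a PCollection of input lines
        "serverless data processing",
        "data pipelines with beam",
    ])
    counts = (
        lines
        | "Split" >> beam.ParDo(SplitWords())  # PTransform built from a DoFn
        | "Pair" >> beam.Map(lambda w: (w, 1))
        | "Count" >> beam.CombinePerKey(sum)
    )
    counts | beam.Map(print)
```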
Module 7: Windows, Watermarks, Triggers
  • Implement logic to handle your late data
  • Review different types of triggers
  • Review core streaming concepts (unbounded PCollections, windows)
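
A minimal sketch of windowing with a trigger and allowed lateness; the window size, trigger choices and event data are illustrative assumptions:

```python
# Illustrative windowing: one-minute fixed windows, speculative early firings,
# a re-firing for late data, and 30 seconds of allowed lateness. The values
# and input data are examples, not recommendations.
import apache_beam as beam
from apache_beam import window
from apache_beam.transforms.trigger import (
    AccumulationMode, AfterCount, AfterProcessingTime, AfterWatermark)

with beam.Pipeline() as p:
    (p
     | beam.Create([("user1", 10), ("user2", 35), ("user1", 70)])
     # Attach event timestamps (seconds) so windowing has something to act on.
     | beam.Map(lambda kv: window.TimestampedValue(kv[0], kv[1]))
     | beam.WindowInto(
         window.FixedWindows(60),                 # 1-minute fixed windows
         trigger=AfterWatermark(
             early=AfterProcessingTime(10),       # speculative early results
             late=AfterCount(1)),                 # fire again when late data arrives
         allowed_lateness=30,                     # accept data up to 30s late
         accumulation_mode=AccumulationMode.ACCUMULATING)
     | beam.combiners.Count.PerElement()
     | beam.Map(print))
```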
Module 8: Sources & Sinks
  • Write the I/O of your choice for your Dataflow pipeline
  • Tune your source/sink transformation for maximum performance
  • Create custom sources and sinks using SDF
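
As an illustration of the built-in connectors, a small sketch using text I/O; the bucket paths are placeholder assumptions, and custom sources built with Splittable DoFns follow the patterns covered in this module:

```python
# Illustrative use of the built-in text connectors; the bucket paths are
# placeholders and would need a readable/writable location to actually run.
import apache_beam as beam
from apache_beam.io import ReadFromText, WriteToText

with beam.Pipeline() as p:
    (p
     | "Read" >> ReadFromText("gs://my-bucket/input/*.txt")     # source
     | "Upper" >> beam.Map(str.upper)
     | "Write" >> WriteToText("gs://my-bucket/output/result",   # sink
                              file_name_suffix=".txt"))
```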
Module 9: Schemas
  • Introduce schemas, which give developers a way to express structured data in their Beam pipelines
  • Use schemas to simplify your Beam code and improve the performance of your pipeline
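
A brief sketch of a schema-aware pipeline, assuming a simple purchase record type defined just for the example:

```python
# Illustrative schema-aware pipeline: a typed NamedTuple gives the PCollection
# a schema, so later transforms can refer to fields by name.
import typing
import apache_beam as beam

class Purchase(typing.NamedTuple):
    user: str
    amount: float

with beam.Pipeline() as p:
    purchases = p | beam.Create([
        Purchase("ana", 12.5),
        Purchase("ben", 7.0),
        Purchase("ana", 3.0),
    ]).with_output_types(Purchase)

    (purchases
     | beam.GroupBy("user").aggregate_field("amount", sum, "total")
     | beam.Map(print))
```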
Module 10: State & Timers
  • Identify use cases for state and timer API implementations
  • Select the right type of state and timers for your pipeline
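
A minimal sketch of the state API: a per-key counter kept in a stateful DoFn. The event data is illustrative, and timers follow a similar pattern using a TimerSpec:

```python
# Illustrative stateful DoFn: a per-key event counter kept in state.
# The input data and use case are assumptions made for the example.
import apache_beam as beam
from apache_beam.coders import VarIntCoder
from apache_beam.transforms.userstate import ReadModifyWriteStateSpec

class CountPerKey(beam.DoFn):
    COUNT = ReadModifyWriteStateSpec("count", VarIntCoder())

    def process(self, element, count=beam.DoFn.StateParam(COUNT)):
        key, _ = element
        current = (count.read() or 0) + 1   # read existing state, default to 0
        count.write(current)                # persist the updated count
        yield key, current

with beam.Pipeline() as p:
    (p
     | beam.Create([("user1", "click"), ("user1", "view"), ("user2", "click")])
     | beam.ParDo(CountPerKey())
     | beam.Map(print))
```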
Module 11: Best Practices
  • Implement best practices for Dataflow pipelines
Module 12: Dataflow SQL & DataFrames
  • Develop a Beam pipeline using SQL and DataFrames
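
For illustration, a small sketch using the Beam DataFrames API on a schema'd PCollection; the record type and field names are example assumptions:

```python
# Illustrative Beam DataFrames usage: convert a schema'd PCollection to a
# deferred, pandas-like dataframe, aggregate, and convert back. The record
# type and field names are example assumptions.
import typing
import apache_beam as beam
from apache_beam.dataframe.convert import to_dataframe, to_pcollection

class Sale(typing.NamedTuple):
    region: str
    amount: float

with beam.Pipeline() as p:
    sales = p | beam.Create([
        Sale("emea", 10.0), Sale("amer", 4.0), Sale("emea", 6.0),
    ]).with_output_types(Sale)

    df = to_dataframe(sales)                    # deferred dataframe
    totals = df.groupby("region").amount.sum()  # executed as Beam transforms
    (to_pcollection(totals, include_indexes=True)
     | beam.Map(print))
```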
Module 13: Beam Notebooks
  • Prototype your pipeline in Python using Beam notebooks
  • Launch a job to Dataflow from a notebook
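
A minimal sketch of notebook-style prototyping with the InteractiveRunner (assuming the Beam interactive extras are installed); launching the same pipeline to Dataflow from a notebook is covered in the module:

```python
# Illustrative notebook-style prototyping with the InteractiveRunner
# (assumes apache-beam[interactive] is installed). ib.show() materializes
# and displays the PCollection inline in a Beam notebook.
import apache_beam as beam
import apache_beam.runners.interactive.interactive_beam as ib
from apache_beam.runners.interactive.interactive_runner import InteractiveRunner

p = beam.Pipeline(InteractiveRunner())
words = p | beam.Create(["dataflow", "beam", "dataflow"])
counts = words | beam.combiners.Count.PerElement()

ib.show(counts)
```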
Module 14: Monitoring
  • Navigate the Dataflow Job Details UI
  • Interpret Job Metrics charts to diagnose pipeline regressions
  • Set alerts on Dataflow jobs using Cloud Monitoring
Module 15: Logging & Error Reporting
  • Use the Dataflow logs and diagnostics widgets to troubleshoot pipeline issues
Module 16: Troubleshooting & Debug
  • Use a structured approach to debug your Dataflow pipelines
  • Examine common causes for pipeline failures
Module 17: Performance
  • Understand performance considerations for pipelines
  • Consider how the shape of your data can affect pipeline performance
Module 18: Testing & CI/CD
  • Review testing approaches for your Dataflow pipeline
  • Review frameworks and features available to streamline your CI/CD workflow for Dataflow pipelines
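
A minimal sketch of a transform-level unit test using Beam's testing utilities; the transform under test is a simple element count chosen for the example:

```python
# Illustrative unit test for a simple transform, using Beam's testing
# utilities (TestPipeline, assert_that, equal_to).
import unittest
import apache_beam as beam
from apache_beam.testing.test_pipeline import TestPipeline
from apache_beam.testing.util import assert_that, equal_to

class CountPerElementTest(unittest.TestCase):
    def test_count_per_element(self):
        with TestPipeline() as p:
            output = (
                p
                | beam.Create(["a", "b", "a"])
                | beam.combiners.Count.PerElement())
            assert_that(output, equal_to([("a", 2), ("b", 1)]))

if __name__ == "__main__":
    unittest.main()
```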
Module 19: Reliability
  • Implement reliability best practices for your Dataflow pipelines
Module 20: Flex Templates
  • Use Flex Templates to standardize and reuse Dataflow pipeline code
Module 21: Summary
  • Summary of all modules