
Implementing a Data Analytics Solution with Azure Databricks (M-DP3011)

NOK 9.900

SKU: M-DP3011


Learn how to harness Apache Spark and the powerful clusters running on the Azure Databricks platform to run large data engineering workloads in the cloud.
This learning path helps prepare you for Exam DP-203: Data Engineering on Microsoft Azure.

TARGET AUDIENCE:
Not available. Please contact us.

COURSE PREREQUISITES:
None

COURSE CONTENT:
Module 1: Explore Azure Databricks
Azure Databricks is a cloud service that provides a scalable platform for data analytics using Apache Spark.

• Provision an Azure Databricks workspace.
• Identify core workloads and personas for Azure Databricks.
• Describe key concepts of an Azure Databricks solution.
Module 2: Use Apache Spark in Azure Databricks
Azure Databricks is built on Apache Spark and enables data engineers and analysts to run Spark jobs to transform, analyze and visualize data at scale.

• Describe key elements of the Apache Spark architecture.
• Create and configure a Spark cluster.
• Describe use cases for Spark.
• Use Spark to process and analyze data stored in files.
• Use Spark to visualize data.
Module 3: Use Delta Lake in Azure Databricks
Delta Lake is an open-source relational storage layer for Spark that you can use to implement a data lakehouse architecture in Azure Databricks.

• Describe core features and capabilities of Delta Lake.
• Create and use Delta Lake tables in Azure Databricks.
• Create Spark catalog tables for Delta Lake data.
• Use Delta Lake tables for streaming data.
Module 4: Use SQL Warehouses in Azure Databricks
Azure Databricks provides SQL Warehouses that enable data analysts to work with data using familiar relational SQL queries.

• Create and configure SQL Warehouses in Azure Databricks.
• Create databases and tables.
• Create queries and dashboards.
Module 5: Run Azure Databricks Notebooks with Azure Data Factory
Using pipelines in Azure Data Factory to run notebooks in Azure Databricks enables you to automate data engineering processes at cloud scale.

• Describe how Azure Databricks notebooks can be run in a pipeline.
• Create an Azure Data Factory linked service for Azure Databricks.
• Use a Notebook activity in a pipeline.
• Pass parameters to a notebook.
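The linked service and parameter passing described above can be sketched as an Azure Data Factory Notebook activity definition. The activity type (`DatabricksNotebook`) and the `notebookPath`/`baseParameters` properties are standard ADF schema; the linked service name, notebook path, and parameter names are illustrative assumptions.

```json
{
  "name": "RunDatabricksNotebook",
  "type": "DatabricksNotebook",
  "linkedServiceName": {
    "referenceName": "AzureDatabricksLinkedService",
    "type": "LinkedServiceReference"
  },
  "typeProperties": {
    "notebookPath": "/Shared/process-data",
    "baseParameters": {
      "input_folder": "@pipeline().parameters.folder"
    }
  }
}
```

Inside the notebook, each entry in `baseParameters` is read as a widget, e.g. `dbutils.widgets.get("input_folder")`.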

COURSE OBJECTIVE:
Not available. Please contact us.

FOLLOW ON COURSES:
Not available. Please contact us.