COURSE OBJECTIVE:
In this course, you will learn to:
• Explain the benefits of MLOps
• Compare and contrast DevOps and MLOps
• Evaluate the security and governance requirements for an ML use case and describe possible solutions and mitigation strategies
• Set up experimentation environments for MLOps with Amazon SageMaker
• Explain best practices for versioning and maintaining the integrity of ML model assets (data, model, and code)
• Describe three options for creating a full CI/CD pipeline in an ML context
• Recall best practices for automated packaging, testing, and deployment of ML assets (data, model, and code)
• Demonstrate how to monitor ML-based solutions
• Demonstrate how to build an ML solution that automatically tests, packages, and deploys a model; detects performance degradation; and retrains the model on newly acquired data
TARGET AUDIENCE:
This course is intended for:
• MLOps engineers who want to productionize and monitor ML models in the AWS cloud
• DevOps engineers who will be responsible for successfully deploying and maintaining ML models in production
COURSE PREREQUISITES:
We recommend that attendees of this course have:
• Completed AWS Technical Essentials (classroom or digital)
• Completed DevOps Engineering on AWS, or equivalent experience
• Completed Practical Data Science with Amazon SageMaker, or equivalent experience
COURSE CONTENT:
Day 1
Module 1: Introduction to MLOps
• Processes
• People
• Technology
• Security and governance
• MLOps maturity model
Module 2: Initial MLOps: Experimentation Environments in SageMaker Studio
• Bringing MLOps to experimentation
• Setting up the ML experimentation environment
• Demonstration: Creating and Updating a Lifecycle Configuration for SageMaker Studio (see the sketch below)
• Hands-On Lab: Provisioning a SageMaker Studio Environment with the AWS Service Catalog
• Workbook: Initial MLOps
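As a taste of the lifecycle-configuration demonstration above, here is a minimal sketch using boto3. The domain ID, config name, and shell script are illustrative placeholders, not the course's lab assets; note that Studio lifecycle configs are immutable, so "updating" one in practice means deleting it and creating a replacement.

import base64

import boto3

sm = boto3.client("sagemaker")

# Shell script to run when a JupyterServer app starts (placeholder content).
script = "#!/bin/bash\npip install --upgrade sagemaker\n"

response = sm.create_studio_lifecycle_config(
    StudioLifecycleConfigName="install-deps-on-start",  # placeholder name
    StudioLifecycleConfigContent=base64.b64encode(script.encode()).decode(),
    StudioLifecycleConfigAppType="JupyterServer",
)

# Attach the config as a domain default so newly launched apps pick it up.
sm.update_domain(
    DomainId="d-xxxxxxxxxxxx",  # placeholder domain ID
    DefaultUserSettings={
        "JupyterServerAppSettings": {
            "LifecycleConfigArns": [response["StudioLifecycleConfigArn"]]
        }
    },
)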
Module 3: Repeatable MLOps: Repositories
• Managing data for MLOps
• Version control of ML models (see the sketch below)
• Code repositories in ML
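To illustrate the model version control topic, the sketch below registers a trained model as a new version in the SageMaker Model Registry using boto3. The group name, container image, and S3 path are placeholders; each registered package becomes a new, numbered version within its group.

import boto3

sm = boto3.client("sagemaker")

# A model package group holds every version of one logical model.
sm.create_model_package_group(
    ModelPackageGroupName="churn-model",  # placeholder
    ModelPackageGroupDescription="All versions of the churn model",
)

# Register a trained model artifact as the next version in the group.
sm.create_model_package(
    ModelPackageGroupName="churn-model",
    ModelApprovalStatus="PendingManualApproval",  # gate deployment on human review
    InferenceSpecification={
        "Containers": [
            {
                "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/churn:latest",  # placeholder
                "ModelDataUrl": "s3://my-bucket/models/churn/model.tar.gz",  # placeholder
            }
        ],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
)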
Module 4: Repeatable MLOps: Orchestration
• ML pipelines
• Demonstration: Using SageMaker Pipelines to Orchestrate Model Building Pipelines (see the sketch below)
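The following is a minimal sketch of the kind of model-building pipeline the demonstration above orchestrates, written with the SageMaker Python SDK. The role ARN, S3 path, and preprocess.py script are placeholders, and the single processing step stands in for a full build workflow.

from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.workflow.parameters import ParameterString
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import ProcessingStep

role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

# Pipeline parameters let each execution override inputs without code changes.
input_data = ParameterString(name="InputData", default_value="s3://my-bucket/raw/")  # placeholder

processor = SKLearnProcessor(
    framework_version="1.2-1",
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
)

preprocess = ProcessingStep(
    name="Preprocess",
    processor=processor,
    inputs=[ProcessingInput(source=input_data, destination="/opt/ml/processing/input")],
    outputs=[ProcessingOutput(output_name="train", source="/opt/ml/processing/train")],
    code="preprocess.py",  # placeholder local script
)

pipeline = Pipeline(name="DemoModelBuildPipeline", parameters=[input_data], steps=[preprocess])
pipeline.upsert(role_arn=role)  # create or update the pipeline definition
pipeline.start()                # kick off an execution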
Day 2
Module 4: Repeatable MLOps: Orchestration (continued)
• End-to-end orchestration with AWS Step Functions
• Hands-On Lab: Automating a Workflow with Step Functions (see the sketch at the end of this module)
• End-to-end orchestration with SageMaker Projects
• Demonstration: Standardizing an End-to-End ML Pipeline with SageMaker Projects
• Using third-party tools for repeatability
• Demonstration: Exploring Human-in-the-Loop During Inference
• Governance and security
• Demonstration: Exploring Security Best Practices for SageMaker
• Workbook: Repeatable MLOps
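As a companion to the Step Functions lab in this module, here is a minimal boto3 sketch of a state machine that runs a SageMaker training job through the optimized .sync service integration. All ARNs, image URIs, and S3 paths are placeholders, and a real workflow would chain further states after training.

import json

import boto3

sfn = boto3.client("stepfunctions")

# Amazon States Language definition: one synchronous SageMaker training task.
definition = {
    "StartAt": "TrainModel",
    "States": {
        "TrainModel": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sagemaker:createTrainingJob.sync",
            "Parameters": {
                "TrainingJobName.$": "$.JobName",
                "RoleArn": "arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
                "AlgorithmSpecification": {
                    "TrainingImage": "123456789012.dkr.ecr.us-east-1.amazonaws.com/train:latest",  # placeholder
                    "TrainingInputMode": "File",
                },
                "OutputDataConfig": {"S3OutputPath": "s3://my-bucket/output/"},  # placeholder
                "ResourceConfig": {
                    "InstanceType": "ml.m5.xlarge",
                    "InstanceCount": 1,
                    "VolumeSizeInGB": 30,
                },
                "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
            },
            "End": True,
        }
    },
}

sfn.create_state_machine(
    name="ml-training-workflow",  # placeholder
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsRole",  # placeholder
)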
Module 5: Reliable MLOps: Scaling and Testing
• Scaling and multi-account strategies
• Testing and traffic-shifting
• Demonstration: Using SageMaker Inference Recommender
• Hands-On Lab: Testing Model Variants (see the sketch below)
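The variant-testing lab above works with endpoints like the one sketched below: a boto3 endpoint configuration that splits traffic between two model variants by weight. Model, endpoint, and variant names are placeholders, and both models are assumed to already exist.

import boto3

sm = boto3.client("sagemaker")

# Two production variants behind one endpoint; weights set the traffic ratio.
sm.create_endpoint_config(
    EndpointConfigName="churn-ab-test",  # placeholder
    ProductionVariants=[
        {
            "VariantName": "VariantA",
            "ModelName": "churn-model-v1",  # placeholder, existing model
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 0.9,  # 90% of traffic
        },
        {
            "VariantName": "VariantB",
            "ModelName": "churn-model-v2",  # placeholder, existing model
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 0.1,  # 10% of traffic
        },
    ],
)

sm.create_endpoint(EndpointName="churn-endpoint", EndpointConfigName="churn-ab-test")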
Day 3
Module 5: Reliable MLOps: Scaling and Testing (continued)
• Hands-On Lab: Shifting Traffic (see the sketch below)
• Workbook: Multi-Account Strategies
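Shifting traffic between the variants from the previous sketch is a single API call; here is a minimal boto3 example that moves the split to 50/50. Names match the placeholders used above.

import boto3

sm = boto3.client("sagemaker")

# Gradually shift live traffic toward the new variant on an existing endpoint.
sm.update_endpoint_weights_and_capacities(
    EndpointName="churn-endpoint",  # placeholder
    DesiredWeightsAndCapacities=[
        {"VariantName": "VariantA", "DesiredWeight": 0.5},
        {"VariantName": "VariantB", "DesiredWeight": 0.5},
    ],
)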
Module 6: Reliable MLOps: Monitoring
• The importance of monitoring in ML
• Hands-On Lab: Monitoring a Model for Data Drift (see the sketch at the end of this module)
• Operations considerations for model monitoring
• Remediating problems identified by monitoring ML solutions
• Workbook: Reliable MLOps
• Hands-On Lab: Building and Troubleshooting an ML Pipeline
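As a taste of the data-drift lab in this module, the sketch below uses the SageMaker Python SDK to baseline a training dataset and schedule hourly monitoring against an endpoint. The role, S3 URIs, and endpoint name are placeholders, and the endpoint is assumed to have data capture enabled.

from sagemaker.model_monitor import CronExpressionGenerator, DefaultModelMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat

role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

monitor = DefaultModelMonitor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

# Compute baseline statistics and constraints from the training data.
monitor.suggest_baseline(
    baseline_dataset="s3://my-bucket/train/train.csv",  # placeholder
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://my-bucket/monitoring/baseline/",  # placeholder
)

# Compare hourly endpoint captures against the baseline to detect drift.
monitor.create_monitoring_schedule(
    monitor_schedule_name="churn-data-drift",  # placeholder
    endpoint_input="churn-endpoint",  # placeholder, data capture must be enabled
    output_s3_uri="s3://my-bucket/monitoring/reports/",  # placeholder
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)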
FOLLOW-ON COURSES:
Not available. Please contact us.