In this course, the learner is guided through a realistic scenario of governing the generative models of a data science project throughout their lifecycle. The project focuses on deploying large language models (LLMs) for resume summarization. Summaries generated by the models can assist in the hiring process, so they must be accurate to ensure fairness to all candidates and to help the organization hire the best candidate for each position. The course introduces the learner to IBM's monitoring tools for LLM accuracy and health. Through hands-on exercises, the learner will deploy and evaluate the models, track their lineage and metadata throughout the lifecycle, and explore how the IBM watsonx.governance solution empowers organizations and individuals to implement their generative AI projects with confidence.
TARGET AUDIENCE:
Data scientists, AI engineers, compliance officers, business stakeholders, and AI governance subject matter experts
COURSE PREREQUISITES:
Not available. Please contact.
COURSE CONTENT:
– Introduction
– Create an AI use case
– Create a generative AI model
– Deploy a generative AI model
– Evaluate deployed models
COURSE OBJECTIVE:
After completing this course, the learner will be able to:
– Explain the importance of AI governance
– Describe the methods used to evaluate generative AI models
– Create an AI use case to address a business problem
– Create and deploy candidate models associated with the use case to address that problem
– Run evaluations of the generative AI models
– Explain how the watsonx.governance solution can be used to evaluate and manage third-party (non-IBM) models
FOLLOW ON COURSES:
Not available. Please contact.