Enrolment options

Course Introduction:

This course, titled AI Red Teaming 101, is a comprehensive introductory program designed to teach learners how AI red teams identify, analyze, and mitigate security vulnerabilities in artificial intelligence systems. Throughout this course, students will learn the foundations of AI threats, prompt-based attacks, multi-turn exploitation techniques, and automated assessment tools used to evaluate the robustness of modern generative AI models. The course is suitable for beginners in cybersecurity, machine learning, software engineering, or anyone interested in understanding how real-world AI systems such as Microsoft Copilot are protected.

Course Duration and Modules:

The total duration of the course is approximately 1 hour and 15 minutes, consisting of ten short educational videos. Students are encouraged to complete one module at a time to ensure proper understanding and knowledge retention. The course is self-paced and can typically be completed within 1–2 days, depending on the learner’s study schedule and review time.

Modules and Video Titles:

  1. Episode 1: What is AI Red Teaming? | AI Red Teaming 101 with Amanda and Gary
  2. Episode 2: How Generative AI Models Work (and Why It Matters) | AI Red Teaming 101
  3. Episode 3: Direct Prompt Injection Explained | AI Red Teaming 101
  4. Episode 4: Indirect Prompt Injection Explained | AI Red Teaming 101
  5. Episode 5: Prompt Injection Attacks – Single-Turn | AI Red Teaming 101
  6. Episode 6: Prompt Injection Attacks – Multi-Turn | AI Red Teaming 101
  7. Episode 7: Defending Against Attacks: Mitigations and Guardrails | AI Red Teaming 101
  8. Episode 8: Automating AI Red Teaming with PyRIT | AI Red Teaming 101
  9. Episode 9: Automating Single-Turn Attacks with PyRIT | AI Red Teaming 101
  10. Episode 10: Automating Multi-Turn Attacks with PyRIT | AI Red Teaming 101

Course Presenter:

The course is presented by Microsoft Developer, who has extensive experience in AI security and software development. The presenter is known for a beginner-friendly, hands-on teaching style that makes complex AI security concepts accessible without requiring a PhD.

Course Certificate:

The Qalam Scholar Certificate has international recognition and can be verified through a barcode. This certificate enhances learners’ credibility and supports securing both national and international career opportunities.

Learning Objectives:

By the end of this course, students will be able to:

  • Understand the role and purpose of AI red teaming.
  • Identify vulnerabilities in generative AI models.
  • Explain direct and indirect prompt injection attacks.
  • Apply basic mitigation strategies to secure AI systems.
  • Automate simple and multi-turn attack simulations using PyRIT.
  • Gain practical insights into defending AI systems in real-world scenarios.
Course rating:

5.0(1)

Self enrolment (Student)
Self enrolment (Student)