
Introduction

Today, I focused on developing an assessment framework based on four risk categories outlined by the EU AI Act:

  • Prohibited
  • High Risk
  • Limited Risk
  • Minimal Risk

I spent some time reading through the EU AI Act to determine how different AI systems are classified into the categories above.

Development Notes

First, I built a basic structure for the framework by creating private static boolean methods to check which category the user’s system falls into. Reading through Chapters 2 and 3 of the EU AI Act, I noticed that there are specific compliance requirements for the prohibited and high risk cases, while the limited and minimal risk cases are distinguished by whether they carry transparency requirements.

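In outline, the skeleton looks something like this (a simplified sketch; the method names are illustrative placeholders rather than my exact code):

```java
import java.util.Scanner;

public class RiskAssessment {

    // One private static boolean method per check; each walks the user
    // through a series of yes/no compliance questions.
    private static boolean isProhibited(Scanner in) {
        // Compliance questions based on Article 5 (prohibited practices).
        return false; // stub, filled in below
    }

    private static boolean isHighRisk(Scanner in) {
        // Compliance questions based on the high risk provisions of Chapter 3.
        return false; // stub
    }

    private static boolean requiresTransparency(Scanner in) {
        // Distinguishes limited risk (transparency needed) from minimal risk.
        return false; // stub
    }
}
```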

Therefore, I created methods for the prohibited and high risk cases, plus a check for transparency requirements. For the compliance questions in the prohibited case, I referenced Article 5 of the EU AI Act. I also created a helper method that prompts the user with a yes/no question-and-answer structure.

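Roughly, the helper and the prohibited case check look like the following, slotting into the RiskAssessment class sketched above (the questions are my loose paraphrases of a few Article 5 practices, not the Act’s exact wording):

```java
// Helper: print a question and read a yes/no answer from the console.
private static boolean promptYesNo(Scanner in, String question) {
    System.out.print(question + " (y/n): ");
    return in.nextLine().trim().toLowerCase().startsWith("y");
}

private static boolean isProhibited(Scanner in) {
    // Each question paraphrases a prohibited practice from Article 5;
    // a single "yes" is enough to flag the system.
    return promptYesNo(in, "Does the system use subliminal or manipulative techniques to distort a person's behaviour?")
        || promptYesNo(in, "Does the system exploit vulnerabilities due to age, disability, or social situation?")
        || promptYesNo(in, "Does the system perform social scoring of natural persons?")
        || promptYesNo(in, "Does the system use real-time remote biometric identification in publicly accessible spaces?");
}
```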

Once I have finished writing the methods for the high risk case and the transparency requirement check, I call them from the main method to determine the risk classification.

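The main method then works through the checks in order of severity, something like:

```java
public static void main(String[] args) {
    Scanner in = new Scanner(System.in);
    if (isProhibited(in)) {
        System.out.println("Warning: this system might be prohibited under the EU AI Act.");
    } else if (isHighRisk(in)) {
        System.out.println("Classification: HIGH RISK.");
    } else if (requiresTransparency(in)) {
        System.out.println("Classification: LIMITED RISK (transparency requirements apply).");
    } else {
        System.out.println("Classification: MINIMAL RISK.");
    }
    in.close();
}
```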

At the moment, if the user answers yes (y) to any of the compliance questions in the prohibited case check, the system returns a warning that the user’s system might be prohibited under the EU AI Act.
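With the sketch above, a session might look something like this (the remaining questions are skipped once one is answered yes, since the checks short-circuit):

```
Does the system use subliminal or manipulative techniques to distort a person's behaviour? (y/n): n
Does the system exploit vulnerabilities due to age, disability, or social situation? (y/n): n
Does the system perform social scoring of natural persons? (y/n): y
Warning: this system might be prohibited under the EU AI Act.
```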