AI Risk Classification System: Day 2
Introduction
Today, I focused on developing an assessment framework based on four risk categories outlined by the EU AI Act:
- Prohibited
- High Risk
- Limited Risk
- Minimal Risk
I spent some time reading through the EU AI Act to determine how different AI systems are classified into the categories above.
Development Notes
First, I built a basic structure for the framework by creating private static boolean methods to check which category the user's system falls into. Reading through Chapters 2 and 3 of the EU AI Act, I noticed there are specific compliance requirements for the prohibited and high-risk cases, while the limited and minimal-risk cases are distinguished by whether they carry transparency requirements.
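As a sketch of that structure, the skeleton below shows one way the category checks could be laid out. The class name, method names, and the two illustrative Article 5 questions are my own placeholders, not the actual implementation, and the checks are left package-visible here (rather than private) so the sketch can be exercised directly:

```java
import java.util.Scanner;

// Hypothetical skeleton of the framework; names and question wording
// are placeholders, not the author's actual code.
public class RiskFramework {

    // Prohibited case (Article 5): true if the user answers "y" to any question.
    static boolean isProhibited(Scanner in) {
        String[] questions = {
            "Does the system deploy subliminal or purposefully manipulative techniques?",
            "Does the system perform social scoring of natural persons?"
        };
        for (String q : questions) {
            System.out.print(q + " (y/n): ");
            if (in.nextLine().trim().equalsIgnoreCase("y")) return true;
        }
        return false;
    }

    // High-risk case (Chapter 3): to be filled in with its own questions.
    static boolean isHighRisk(Scanner in) {
        return false; // placeholder
    }

    // Transparency requirements, distinguishing limited from minimal risk.
    static boolean needsTransparency(Scanner in) {
        return false; // placeholder
    }
}
```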
Therefore, I created methods for the prohibited and high-risk cases and for checking transparency requirements. For the prohibited case's compliance questions, I referenced Article 5 of the EU AI Act. I also created a helper method that prompts the user with a yes/no Q&A structure.
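A minimal sketch of such a prompt helper might look like the following. The name askYesNo and the re-prompt-on-invalid-input behavior are my assumptions:

```java
import java.util.Scanner;

public class Prompt {
    // Hypothetical helper: asks a yes/no compliance question and loops
    // until the user types y or n; returns true for y.
    static boolean askYesNo(Scanner in, String question) {
        while (true) {
            System.out.print(question + " (y/n): ");
            String answer = in.nextLine().trim().toLowerCase();
            if (answer.equals("y")) return true;
            if (answer.equals("n")) return false;
            System.out.println("Please answer y or n.");
        }
    }
}
```

Passing the Scanner in as a parameter (instead of creating one inside the method) keeps the helper easy to reuse across all the compliance-question methods.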
Assuming the methods for the high-risk case and the transparency requirement are finished, the main method calls them in order to determine the risk classification.
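The decision order in the main method could be sketched like this, assuming the three checks exist; the classify method and its parameters are placeholders of my own:

```java
public class Classification {
    // Hypothetical decision flow: prohibited takes precedence, then
    // high risk; transparency requirements separate limited from minimal.
    static String classify(boolean prohibited, boolean highRisk, boolean transparency) {
        if (prohibited) return "Prohibited";
        if (highRisk) return "High Risk";
        return transparency ? "Limited Risk" : "Minimal Risk";
    }
}
```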
At the moment, if the user answers yes (y) to any of the compliance questions in the prohibited-case check, the system returns a warning that their system might be prohibited under the EU AI Act.