In this Horizon Europe project, we address the matter of transparency and explainability of AI using approaches inspired by control theory. Notably, we consider a comprehensive and flexible certification of properties of AI pipelines, including certain closed loops and more complicated interconnections. At one extreme, one could consider risk-averse a priori guarantees via hard constraints on certain bias measures in the training process. At the other extreme, one could consider nuanced, post-hoc communication of the exact trade-offs involved in AI pipeline choices and their effects on industrial and bias outcomes. Both extremes offer little in terms of optimizing the pipeline, and both are inflexible in explaining the pipeline’s fairness-related qualities. Seeking the middle ground, we suggest a priori certification of fairness-related qualities in AI pipelines via modular compositions of pre-processing, training, inference, and post-processing steps with certain properties. Furthermore, we present an extensive programme in the explainability of fairness-related qualities. We seek to inform both the developer and the user thoroughly regarding the possible algorithmic choices and their expected effects. Overall, this will effectively support the development of AI pipelines with guaranteed levels of performance, explained clearly.
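To make the idea of a hard constraint on a bias measure concrete, here is a minimal, hypothetical sketch in Python. It checks a pipeline's predictions against a demographic-parity constraint; the metric choice, the `certify` helper, and the threshold `epsilon` are illustrative assumptions on our part, not the project's actual certification criteria.

```python
# Hypothetical sketch: certify one fairness-related property of a
# pipeline's output by enforcing a hard constraint on a bias measure
# (here, the demographic-parity difference between two groups).

def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between group 0 and group 1."""
    rates = {}
    for g in (0, 1):
        members = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(predictions[i] for i in members) / len(members)
    return abs(rates[0] - rates[1])

def certify(predictions, groups, epsilon=0.1):
    """Return True if the bias measure satisfies the hard constraint <= epsilon."""
    return demographic_parity_difference(predictions, groups) <= epsilon

# Both groups receive positive predictions at rate 0.5, so the
# parity difference is 0.0 and the constraint is satisfied.
preds = [1, 0, 1, 0]
groups = [0, 0, 1, 1]
print(certify(preds, groups))  # prints True
```

In a modular composition, a check of this kind could be attached to each stage (pre-processing, training, inference, post-processing), so that a certificate for the whole pipeline follows from the per-stage properties.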