Keynote at KTH Royal Institute of Technology
Prof. Matteo Maffei delivered a lecture titled “Verification of Global Safety Properties in Deep Neural Networks with Confidence” at the Center for Cyber Defence and Information Security (CDIS).
Prof. Matteo Maffei’s keynote addressed how to make modern AI systems trustworthy when they are deployed in safety- and security-critical domains such as autonomous driving, finance, and cyber-physical systems. He showed that widely used notions like robustness and fairness can be expressed as global 2-safety properties, which relate pairs of neural network executions rather than individual inputs.

The talk introduced a confidence-aware notion of global robustness, where only high-confidence classifications must remain stable under small input perturbations, and proposed a verification approach based on self-composition and a piecewise-linear abstraction of the softmax function. This allows these rich properties to be checked using existing neural network verification tools such as Marabou.

Experimental results on standard robustness benchmarks and fairness-sensitive datasets (e.g., German Credit, COMPAS) demonstrate that the method can certify meaningful confidence thresholds and explore trade-offs between accuracy, robustness, and fairness. Prof. Maffei concluded with an outlook on scaling the technique to larger models and integrating it into the broader TU Wien Cybersecurity Center agenda on trustworthy AI.
Reference: Presentation