Modern AI Security: Challenges in the Era of Large Language Models
Talk by Jelena Milosevic
Abstract: Large Language Models are increasingly embedded in critical systems, yet they introduce a growing set of security risks. This presentation highlights key vulnerabilities such as data leakage and prompt injection, and examines how the rise of Agentic AI expands the attack surface through indirect and white-box threats. It concludes by assessing current defense mechanisms and underscoring the need for stronger, collaborative approaches to securing next-generation AI systems.
Bio: Jelena Milosevic is a Professor of Generative AI at the University of Applied Sciences and Arts Northwestern Switzerland (FHNW), where she leads research and teaching in Generative and Agentic AI with an emphasis on secure, efficient deployment and on-device intelligence. Previously, she was a Senior Data Scientist at Yokoy (Zurich), where she delivered multimodal document-understanding systems and led the development of an LLM benchmarking framework. She also built several production-ready ML solutions at Mondi Group (Vienna). Jelena earned a PhD from USI (Lugano) on runtime malware detection for resource-constrained devices and conducted postdoctoral research at TU Wien in network security and adversarial ML.