AI and Cybersecurity? Science is Needed
The intersection of artificial intelligence and cybersecurity is one of the central topics in today’s digital world. While AI enhances threat detection and automation, it is also exploited by attackers, creating a rapidly evolving technological arms race.
In a guest commentary in Kurier, Daniel Arp highlights the limitations of artificial intelligence as a security solution and warns of an ongoing arms race with increasingly sophisticated attackers. According to Arp, despite rapid technological progress, human expertise and academic research remain indispensable.
Artificial intelligence is currently promoted as a universal solution across many domains, including cybersecurity. AI-based systems are often marketed as digital silver bullets capable of automatically detecting threats such as malware or phishing emails. While this promise sounds appealing, Arp cautions that it represents only part of the reality.
“AI can indeed identify patterns that humans might overlook and significantly reduce the manual workload of security teams,” Arp explains. “However, believing that AI alone can solve all security problems ignores the fact that attackers are also using AI.” Malicious actors, for example, already rely on AI to generate flawless phishing emails in multiple languages.
Arp notes that modern AI systems can automatically search for vulnerabilities and, like digital chameleons, adapt to defensive measures. At the same time, AI has inherent weaknesses: its performance depends heavily on the quality of the data on which it is trained. Because these datasets are often incomplete, biased, or outdated, AI-based detection systems are vulnerable to targeted attacks. As a result, such systems may misclassify harmless code as malicious or overlook real threats.
This is where scientific research plays a crucial role, Arp emphasizes. Researchers must develop more robust models, increase transparency, and ensure that AI systems are explainable and verifiable. Despite notable advances, he does not expect fully autonomous security systems to become viable in the near future.
“AI will support security teams, but it will not replace them,” Arp concludes. “Real progress emerges not from AI models alone, but from their interaction with rigorous research and human expertise.”