Felix Jedrzejewski

PhD Student, Blekinge Tekniska Högskola (BTH)


Felix Jedrzejewski conducts his Ph.D. studies at BTH’s DIPT faculty. Before coming to Karlskrona, he completed his master’s degree in Information Systems at the Technical University of Munich in Germany. During his studies, he had the pleasure of working at, among other places, Siemens Corporate Technology in one of its IT security research groups.

About the topic: Adversarial Machine Learning in the Industry

The rise of Artificial Intelligence (AI) over the last few months has been astronomical, but it is also logical. One reason is Large Language Models (LLMs), with ChatGPT as their most prominent example, which often appear to do the tedious work for us almost perfectly in response to a short prompt. As is often the case, new technologies create new opportunities but also new challenges. While many of us are still reflecting on potential new fields of application for AI, or for Machine Learning (ML) more specifically, others are already exploiting these technologies for security breaches. Besides known ML attacks such as Adversarial Examples and Data Poisoning, new attack patterns specifically targeting LLMs emerge constantly.
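To make the Adversarial Examples attack class concrete, the toy sketch below perturbs an input to a simple linear classifier in the style of the Fast Gradient Sign Method (FGSM). The model, weights, and step size are purely illustrative assumptions, not taken from the talk:

```python
import numpy as np

# Toy linear classifier: score = w . x; predicts class 1 if the score is positive.
# (Illustrative assumption only; the talk does not prescribe this model.)
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.2])  # clean input, classified as class 1

def predict(v):
    return int(w @ v > 0)

# FGSM-style perturbation: for a linear score, the gradient of the score
# with respect to x is simply w, so a small step against sign(w) lowers
# the score and can flip the prediction while barely changing the input.
eps = 0.3
x_adv = x - eps * np.sign(w)

print(predict(x))      # -> 1 (clean input)
print(predict(x_adv))  # -> 0 (adversarial input)
```

The same idea scales to deep networks, where the gradient is computed by backpropagation instead of being the weight vector itself.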

Multiple studies conducted over the last years indicate that ML practitioners often cannot keep up with the latest security and privacy measures, and that security practices should be in place to foster a certain level of ML security. How does the industry decide which ML-specific security practices to apply? Are security-related requirements present during the development of ML systems that handle Adversarial Machine Learning threats? How can the overall security of ML systems be evaluated?

Questions like these are within the scope of the talk.