Mauro Barni graduated in electronic engineering from the University of Florence in
1991 and received his PhD in Informatics and Telecommunications in October 1995.
He has carried out his research for more than 20 years, first at the
Department of Electronics and Telecommunication of the University of Florence,
then at the Department of Information Engineering and Mathematics of the
University of Siena, where he is a full professor. His research focuses on
digital image processing and information security, with particular reference to
the application of image processing techniques to copyright protection (digital
watermarking) and the authentication of multimedia content (multimedia forensics).
He has studied the possibility of processing previously encrypted signals without
decrypting them (signal processing in the encrypted domain, s.p.e.d.). Lately he
has been working on theoretical and practical aspects of adversarial signal
processing and adversarial machine learning.
He is the author or co-author of about 350 papers published in international journals and conference proceedings, and he holds four patents in the field of digital watermarking and one patent dealing with anti-counterfeiting technology. His papers on digital watermarking have contributed significantly to the development of watermarking theory over the last decade, as demonstrated by the large number of citations some of them have received; his overall citation record corresponds to an h-index of 63 according to Google Scholar. He is co-author of the book “Watermarking Systems Engineering: Enabling Digital Assets Security and Other Applications”, published by Dekker Inc. in February 2004, and editor of the book “Document and Image Compression”, published by CRC Press in 2006.
He was the chairman of the IEEE Multimedia Signal Processing Workshop held in Siena in 2004 and of the fourth edition of the International Workshop on Digital Watermarking. He was the technical program co-chair of ICASSP 2014 and the technical program chairman of the 2005 edition of the Information Hiding Workshop, the eighth edition of the International Workshop on Digital Watermarking, and the fifth edition of the IEEE Workshop on Information Forensics and Security (WIFS 2013). In 2008 he received the IEEE Signal Processing Magazine Best Column Award, and in 2010 the IEEE Transactions on Geoscience and Remote Sensing Best Paper Award. He was the recipient of the EURASIP Individual Technical Achievement Award for 2016.
He was the Editor-in-Chief of the IEEE Transactions on Information Forensics and Security from 2015 to 2017 and the founding Editor-in-Chief of the EURASIP Journal on Information Security. He has been a member of the editorial boards of several journals, including IEEE Signal Processing Magazine, IEEE Transactions on Circuits and Systems for Video Technology, IEEE Transactions on Multimedia, IEEE Signal Processing Letters, IEEE Transactions on Dependable and Secure Computing, and the IEEE Open Journal of Signal Processing.
From 2010 to 2011, Prof. Barni was the chairman of the IEEE Information Forensics and Security Technical Committee (IFS-TC) of the IEEE Signal Processing Society. He has been a member of the IEEE Multimedia Signal Processing Technical Committee and of the conference board of the IEEE Signal Processing Society. Mauro Barni is a Fellow of the IEEE and a senior member of EURASIP. He was appointed Distinguished Lecturer by the IEEE Signal Processing Society for the years 2013-2014.
Abstract: Since the existence of adversarial examples was first observed, a vast amount of research has been devoted to understanding why deep learning models are so vulnerable to properly crafted input samples, and to devising suitable remedies. After almost a decade, we now know that adversarial examples affect deep learning models ubiquitously, regardless of their architecture and the task they are meant to solve; however, the ultimate reason for their existence is still not well understood. A great effort has also been devoted to developing possible defenses, most of which turned out to be defeatable by slightly modifying the algorithm used to build the adversarial examples. Yet the life of attackers is not as easy as one may think, since exploiting adversarial examples in real-life applications is not straightforward. The goal of this speech is to outline a possible explanation of why adversarial examples are so easy to craft in the case of binary decision networks, and to highlight some of the difficulties attackers must face when applying adversarial examples in a real-life setting.
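To make the notion of "crafting" concrete for readers unfamiliar with the field, the sketch below shows the fast gradient sign method (FGSM), the canonical adversarial-example recipe, applied to a toy binary classifier. This is a generic illustration, not necessarily the technique discussed in the talk; the weights and input are random placeholders.

```python
import numpy as np

# Toy "binary decision network": logistic regression with hypothetical
# trained weights w and bias b (random stand-ins for illustration).
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.1
x = rng.normal(size=16)   # a benign input sample
y = 1.0                   # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the cross-entropy loss with respect to the input x.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: nudge every feature by epsilon in the sign of the gradient,
# i.e. the direction that increases the loss the fastest.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)
print(p, p_adv)  # the model's confidence in the true class drops
```

The attack is a single gradient step, which is precisely why adversarial examples are so cheap to craft against models with accessible gradients; the practical obstacles mentioned in the abstract (limited access to the model, physical-world distortions) arise outside this idealized setting.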