Isao Echizen received B.S., M.S., and D.E. degrees from the Tokyo Institute of Technology, Japan, in 1995, 1997, and 2003, respectively. He joined Hitachi, Ltd. in 1997 and was a research engineer in the company's Systems Development Laboratory until 2007. He is currently an advisor to the director general of the National Institute of Informatics (NII), a professor in NII's Information and Society Research Division, and a professor in the Department of Information and Communication Engineering, Graduate School of Information Science and Technology, The University of Tokyo, Japan. He is also a visiting professor at Tsuda University, Japan, and was a visiting professor at the University of Freiburg, Germany, in 2010 and at the University of Halle-Wittenberg, Germany, in 2011. His current research interests are multimedia security and multimedia forensics. He serves as a research director in the CREST "Trusted quality AI systems" project of the Japan Science and Technology Agency (JST). He received the Best Paper Award from the IPSJ in 2005 and 2014, the Fujio Frontier Award and the Image Electronics Technology Award in 2010, one of the Best Papers Awards from the Information Security and Privacy Conference in 2011, the IPSJ Nagao Special Researcher Award in 2011, the DOCOMO Mobile Science Award in 2014, the Information Security Cultural Award in 2016, and the Best Paper Award at the 2017 IEEE Workshop on Information Forensics and Security. He was a member of the Information Forensics and Security Technical Committee of the IEEE Signal Processing Society. He is the Japanese representative on IFIP TC11 (Security and Privacy Protection in Information Processing Systems).
Abstract: With the spread of smartphones and social media, anyone can now share media such as images, videos, sounds, and text. Artificial intelligence (AI) technologies greatly enhance the convenience and value of social media: they make it easy to analyze user preferences from large amounts of data and to perform media processing, such as machine translation and speech-to-text conversion, in accordance with user needs. However, as AI technology evolves and computing resources grow richer, and as large amounts of human-related information such as fingerprints, faces, voices, body characteristics, and natural language become easy to acquire, malicious actors can generate fake media (FM), such as fake images, fake voice data, and fake documents, that can pass for the real thing. The generation of FM has become a serious social problem. The emergence and spread of COVID-19 led to the generation and spread on social media of fake news about preventive and therapeutic methods without scientific basis and of photographs of city scenes taken from a particular direction with a telephoto lens so that areas appeared crowded. Such an "infodemic" of uncertain information can cause anxiety and confusion in society. One can easily envision a criminal group with clear intent using AI to readily generate fake images, fake voice data, and fake documents that pass for the real thing and then spreading them on social media to create an infodemic. To achieve a healthy, human-centered cyber society, it is essential to improve the reliability of information by appropriately dealing with such threats and, at the same time, to support diverse communication and decision-making. This talk will outline these threats and introduce technology that lets users control the distribution of their own biometric data in cyberspace, as well as technology for detecting fake media.
Further, the speaker will introduce "Social information technologies to counter infodemics" (the CREST FakeMedia project), which is currently being developed under JST CREST.