Prof. Yasuhiro Matsuda, Kanagawa Institute of Technology, Japan
Bio: Prof. Yasuhiro Matsuda obtained his Ph.D. degree from the University of Tokyo in 2007. He joined the Department of Welfare Systems Engineering at Kanagawa Institute of Technology in 2000, and later joined the Department of Robotics and Mechatronics. He is now a professor and chair of the Department of Clinical Engineering. Prof. Matsuda’s expertise is in assistive technology for deaf and/or blind persons and in measurement engineering. Currently, his main research interests are the development of a communication support system using Finger Braille for deafblind persons and a tactual communication tool for elderly persons.
Speech Title: Braille Recognition System
Abstract: Finger Braille is one of the communication media of deafblind people. In one-handed Finger Braille, a sender dots the left part of the Braille code on the distal interphalangeal (DIP) joints of the index, middle, and ring fingers of a receiver, and subsequently dots the right part of the Braille code on the proximal interphalangeal (PIP) joints of the same fingers. To assist communication between deafblind individuals and non-disabled people, we have been developing a Finger Braille recognition system using small piezoelectric accelerometers worn by the receiver. The system recognizes the dotting of Finger Braille by the deafblind person and synthesizes this tactile communication into speech for the non-disabled person. The accelerometers are mounted on top of finger rings. An evaluation experiment showed that the system could accurately recognize the dotted fingers and positions when the interpreter dotted clearly.
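The abstract does not specify the recognition algorithm, but the core idea of identifying which fingers were dotted from per-finger accelerometer signals can be illustrated with a minimal, hypothetical sketch: within one dotting window, a finger counts as dotted if its acceleration peak exceeds a threshold. The function name, threshold value, and simulated signals below are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def detect_dotted_fingers(signals, threshold=2.0):
    """Return the fingers whose acceleration peak exceeds the threshold
    within one dotting window.

    signals: dict mapping finger name -> 1-D array of acceleration samples.
    threshold: hypothetical peak-magnitude cutoff.
    """
    dotted = []
    for finger, samples in signals.items():
        if np.max(np.abs(samples)) >= threshold:
            dotted.append(finger)
    return dotted

# Simulated window in which the index and ring fingers receive a dot.
window = {
    "index":  np.array([0.1, 3.2, 0.4]),
    "middle": np.array([0.2, 0.3, 0.1]),
    "ring":   np.array([0.1, 2.8, 0.5]),
}
print(detect_dotted_fingers(window))  # ['index', 'ring']
```

A full system would additionally distinguish DIP- from PIP-joint dotting (i.e., the left and right halves of the Braille code) before mapping the finger pattern to a character and synthesizing speech.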
Prof. Beomjin Kim, Indiana University-Purdue University Fort Wayne, USA
Bio: Beomjin Kim is a professor and the chair of the Computer Science Department at Indiana University-Purdue University Fort Wayne, Indiana, USA. He is the director of the Information Analytics and Visualization Center, a Center of Excellence at IPFW that was established in 2011 with a $500,000 grant awarded by Lilly Endowment Inc. Dr. Kim obtained his Ph.D. in Computer Science from the Illinois Institute of Technology in 1998. He has received several research awards, including the Distinguished Research Award from Allied Academies and the Researcher of the Year Award from the IPFW Sigma Xi Chapter. He has conducted projects funded by the National Science Foundation, the State of Indiana, the Purdue Research Foundation, Parkview Health System, and regional businesses. He serves on the editorial boards of journals and the program committees of conferences. His research interests include data analytics and visualization, virtual reality, medical imaging, and computer science education.
Speech Title: The Effect of Computer Graphics Techniques on Perceiving Depth in Virtual Environments
Abstract: 3D stereoscopic devices have been utilized in a variety of areas such as entertainment, simulation, training, education, and medicine. These devices offer depth-perception advantages, but known limitations prevent users from accurately perceiving depth. The lack of natural depth cues, together with the mismatch between the user’s actual convergence in reality and their convergence on the screen, can make depth perception difficult. Researchers have studied techniques to improve users’ perception when examining stereoscopic images and to reduce visual fatigue when using 3D vision technology. Previous studies have produced mixed results, showing a general trend of underestimating depth in 3D environments. This study examines the influence of graphics techniques selectively applied to 3D images on reducing measurement errors. The experimental results showed that depth perception varied depending on the techniques used and the image types. The study reemphasizes the significance of utilizing depth cues and suggests future research directions for investigating their impacts.
Prof. Tae-Seong Kim, Kyung Hee University, Republic of Korea
Bio: Tae-Seong Kim received the B.S. degree in Biomedical Engineering from the University of Southern California (USC) in 1991, M.S. degrees in Biomedical and Electrical Engineering from USC in 1993 and 1998, respectively, and the Ph.D. in Biomedical Engineering from USC in 1999. After postdoctoral work in Cognitive Sciences at the University of California, Irvine in 2000, he joined the Alfred E. Mann Institute for Biomedical Engineering and the Department of Biomedical Engineering at USC as a Research Scientist and Research Assistant Professor. In 2004, he moved to Kyung Hee University in Korea, where he is currently a Professor in the Department of Biomedical Engineering. His research interests span various areas of biomedical imaging, bioelectromagnetism, neural engineering, and assistive biomedical lifecare technologies. Dr. Kim has been developing advanced signal and image processing methods, pattern classification and machine learning methods, novel medical imaging modalities, and rehabilitation technologies. He has published more than 300 papers and seven international book chapters, holds ten international and domestic patents, and has received nine best paper awards.
Speech Title: Deep Learning Methodologies in Smart Assistive Lifecare Technologies
Abstract: Due to the rapid increase in the elderly population, the field of assistive lifecare technologies is also advancing rapidly. The goal of assistive lifecare technology is to increase the quality of life and to promote the health of residents proactively, especially the elderly, through ambient assisted living. In general, smart sensors and devices in smart environments are active components of ambient assisted living technologies. They also provide alternative means of e-healthcare beyond caregivers or institutional care. This presentation discusses how deep learning methodologies can be applied to these smart multi-modal sensors and devices for assistive lifecare technologies. Various topics will be covered, including human activity recognition, human motion recognition, life event detection, and lifelogging.