Nowadays, personal communication devices such as mobile phones are popular all over the world. The success of mobile phones lies in the fact that interpersonal communication and interaction are vital to human well-being, and mobile phones serve this need by conveniently extending the sense of hearing and the power of speech to faraway places. However, apart from the auditory sense, humans also possess other senses for communication and interaction, such as vision and body motion. These sensations play very important roles in our everyday interpersonal communication. In terms of technology, the state of the art in sensing hardware, control software, telecommunication protocols, and computer networks already allows various forms of data to be transmitted and analyzed efficiently. Why, then, is multimodal sensory communication not yet commonplace among mobile devices? We believe that the lack of efficient human–machine interfaces has created a bottleneck. Therefore, we have designed and built a series of mobile devices that possess perceptual powers and communication capabilities and can support intelligent interactions. We call this novel class of devices intelligent wearable interfaces.
A wearable human interface facilitates a new form of human–machine interaction through a small, body-worn intelligent machine. The interface is always ready, always accessible, and always with the user; it is designed to be useful in mobile settings. A wearable interface is similar to a wearable computer in that both are inextricably intertwined with the wearer, but a wearable interface need not offer the full functionality of a computer system.
One major benefit of wearable intelligent interfaces is their close proximity to the user, which allows human data such as motion and physiological information to be acquired and analyzed anywhere, at any time. The ongoing miniaturization revolution in electronics, sensor, and battery technologies, driven largely by the cell phone and handheld device markets, has made small wearable interfaces practical to implement. Alongside these hardware advances, progress in human data modeling and machine learning algorithms has made it possible to analyze and interpret complex, multichannel sensor data. Our intelligent wearable interface systems leverage progress in both areas.
In this book, we present machine learning approaches for assisting humans intelligently, approaches that can be applied in pervasive and portable intelligent systems. The goal of this research is to equip interfaces and intelligent devices with the ability to detect, recognize, track, interpret, and organize different types of human behaviors and environmental situations by learning from demonstration. In terms of human–machine interaction, we hope to move the place and style of interaction away from the traditional desktop and into the real world where we normally live and act. The technologies developed in this research will enhance our effectiveness in building intelligent interfaces that operate in close proximity to humans.
Instead of solving these problems using heuristic rules, we propose to approach them by learning from demonstration examples. The complexities of many human interface problems make them very difficult to handle analytically or heuristically. Our experience in human function modeling [1, 2] indicates that learning human actions from observation is a more viable and efficient way to model the details and variations involved in human behavior. The solutions to these technical problems will form core modules whose flexible combination can serve as the basis of systems for different application areas.
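To make the learning-from-demonstration idea concrete, consider a minimal sketch in which sensor readings recorded during human demonstrations are stored as labeled examples and new readings are classified by nearest neighbor. The feature values, labels, and the 1-nearest-neighbor model here are illustrative assumptions, not the specific models used in this book.

```python
# Minimal sketch of learning from demonstration: a 1-nearest-neighbor
# classifier over (feature vector, behavior label) pairs recorded while
# a person demonstrates each behavior. All data below are made up.
import math

def nearest_neighbor(demos, query):
    """Return the label of the demonstration closest to `query`."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(demos, key=lambda d: dist(d[0], query))[1]

# Demonstrations: (mean acceleration, dominant frequency) -> behavior
demos = [
    ((0.1, 0.5), "resting"),
    ((1.2, 2.0), "walking"),
    ((3.5, 3.0), "running"),
]

print(nearest_neighbor(demos, (1.0, 1.8)))  # -> walking
```

In practice, richer models would replace the nearest-neighbor rule, but the workflow is the same: demonstrations supply the training data, and the learned model generalizes to unseen sensor inputs.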
Technology and interface design issues associated with wearable computers and augmented reality displays have attracted the attention of many researchers. Previous work focused more on ubiquitous computing and on technology that attempts to blend or fuse computer-generated information with human sensations of the natural world [3], especially through personal imaging systems [4]. In this book, we concentrate on sensor-based wearable tools that point toward practical systems usable by ordinary people, not just technical users, in everyday life. Serving as sensory prostheses, many of our application systems are built mainly on motion sensing, wireless communication, and intelligent control, with the following three major foci:
We believe that by combining advances in MEMS sensing, intelligent algorithms, and communication technologies, it is possible to develop novel human interface systems that enable multifunctional control and input tasks while allowing an overall reduction in the size of interface devices. Experimental results from our prototype systems indicate that human interfacing functions can be performed using existing vision sensors, bioelectric sensors, and MEMS-based motion sensors.
A human interface can be considered intelligent if it employs intelligent techniques to provide efficient interactivity between the user and the machine, or between the user and the environment. Example capabilities include adapting to the user, anticipating user needs, taking initiative and making suggestions, and explaining its actions. We believe that this research area will open up tremendous new human–computer interface possibilities, yielding rich academic research content as well as potential product lines in the consumer electronics and multimedia industries.
This book is composed of eight chapters. This first chapter serves as an introduction and overview of the work and summarizes related research. In Chapter 2, we present the network architecture for a wearable robot, a mobile information device capable of supporting remote communication and intelligent interaction between networked entities. Chapter 3 addresses the intelligent glasses, which can automatically perform language translation in real time. In Chapter 4, we introduce the intelligent cap, which enables a severely disabled person to guide his or her wheelchair by eye gaze. In Chapter 5, we develop a sensor-integrated shoe system as an information acquisition platform for sensing foot motion. It can be used for computer control in the form of a shoe-mouse, for human identification within a framework that captures and analyzes dynamic human gait, and for health monitoring of patients with musculoskeletal and neurologic disorders. In Chapter 6, we present our work on merging MEMS force sensors and wireless technology to develop a novel multifunctional finger-ring-based input system, which could potentially replace the mouse, pen, and keyboard as computer input devices. In Chapter 7, we present the motion-sensor-based Digital Writing Instrument for recording handwriting on any surface. In Chapter 8, a human airbag system is designed to reduce the impact force of falls caused by slipping, and a recognition algorithm is developed for real-time fall determination.
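As a rough illustration of the kind of real-time fall determination mentioned for Chapter 8, a crude baseline checks whether the magnitude of 3-axis MEMS accelerometer readings exceeds an impact threshold. The 3g threshold and the sample values below are illustrative assumptions, not the tuned algorithm developed in the book.

```python
# Hedged sketch of threshold-based fall detection from a single
# 3-axis MEMS accelerometer. The 3g impact threshold is an
# illustrative assumption, not the book's recognition algorithm.
import math

G = 9.81  # gravitational acceleration, m/s^2

def is_fall(samples, impact_threshold=3.0 * G):
    """Flag a fall if any acceleration magnitude exceeds the
    impact threshold; a learned model would refine this rule."""
    return any(math.sqrt(x * x + y * y + z * z) > impact_threshold
               for x, y, z in samples)

normal_walk = [(0.5, 0.3, 9.8), (1.0, 0.2, 10.1)]   # near 1 g
hard_impact = [(0.5, 0.3, 9.8), (25.0, 18.0, 22.0)]  # well above 3 g

print(is_fall(normal_walk))  # False
print(is_fall(hard_impact))  # True
```

A deployed system would combine such features with a trained classifier to distinguish genuine falls from vigorous but normal motion.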
1. M. C. Nechyba and Y. Xu, “Human Control Strategy: Abstraction, Verification and Replication,” IEEE Control Systems, Vol. 17, No. 5, pp. 48–61, 1997.
2. K. K. Lee and Y. Xu, “Computational Intelligence for Modeling Human Sensations in Virtual Environments,” Journal of Advanced Computational Intelligence, Vol. 8, No. 3, pp. 302–312, 2004.
3. W. Barfield and T. Caudell, Eds., Fundamentals of Wearable Computers and Augmented Reality, New Jersey, Lawrence Erlbaum Associates Publishers, 2001.
4. S. Mann, Intelligent Image Processing, New York, IEEE and Wiley, 2002.
Intelligent Wearable Interfaces, by Yangsheng Xu, Wen J. Li, and Ka Keung C. Lee
Copyright © 2008 John Wiley & Sons, Inc.