CHAPTER 1

Introduction

Personal communication devices such as mobile phones are now popular all over the world. Their success lies in the fact that interpersonal communication is vital to the well-being of humans, and mobile phones serve this need by conveniently extending the sense of hearing and the power of speech to faraway places. Apart from hearing, however, humans possess other senses for communication and interaction, such as vision and body motion, and these senses play very important roles in our everyday interpersonal communication. On the technology side, the state of the art in sensing hardware, control software, telecommunication protocols, and computer networks already allows various forms of data to be transmitted and analyzed efficiently. Why, then, is multimodal communication not yet commonplace among mobile devices? We believe that the lack of efficient human–machine interfaces has created a bottleneck. We have therefore designed and built a series of mobile devices that possess perceptual powers and communication capabilities and can support intelligent interactions. We call this novel class of devices intelligent wearable interfaces.

1.1 THE INTELLIGENT WEARABLE INTERFACE

A wearable interface facilitates a new form of human–machine interaction built around a small, body-worn intelligent machine. The interface is always ready, always accessible, and always with the user, and it is designed to be useful in mobile settings. A wearable interface is similar to a wearable computer in the sense that both are inextricably intertwined with the wearer, but the wearable interface does not need to have the full functionality of a computer system.

One major benefit of wearable intelligent interfaces is that they are in close proximity to their users, so human data such as motion and physiological information can be obtained and analyzed anywhere at any time. The ongoing miniaturization revolution in electronics, sensor, and battery technologies, driven largely by the cell phone and handheld device markets, has made implementations of small wearable interfaces possible. Along with these hardware advances, progress in human data modeling and machine learning algorithms has made possible the analysis and interpretation of complex, multichannel sensor data. Our intelligent wearable interface systems leverage progress in both areas.

In this book, we present machine learning approaches for assisting humans intelligently that can be applied in pervasive and portable intelligent systems. The goal of the research is to equip interfaces and intelligent devices with the ability to detect, recognize, track, interpret, and organize different types of human behaviors and environmental situations by learning from demonstration. In terms of the interaction between humans and machines, we hope to move human interaction away from the traditional desktop and into the real world where we normally live and act. The technologies developed in this research will enhance our effectiveness in building intelligent interfaces that operate in close proximity to humans.

Instead of solving these problems with heuristic rules, we propose to approach them by learning from demonstration examples. The complexity of many human interface problems makes them very difficult to handle analytically or heuristically. Our experience in human function modeling [1, 2] indicates that learning human actions from observation is a more viable and efficient way to model the details and variations involved in human behavior. The solutions to these technical problems will create core modules whose flexible combination will form the basis of systems that fit different application areas.
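
To make the contrast with heuristic rules concrete, the following is a minimal sketch, in Python, of what learning from demonstration can look like in practice: rather than hand-coding decision thresholds, a model is fit to labeled sensor windows recorded while a person demonstrates each behavior. The statistical features and the nearest-neighbor model here are illustrative assumptions for the sketch, not the specific algorithms used in later chapters.

    import numpy as np

    # Each demonstration is a window of sensor samples recorded while a
    # person performs a known action; the label names that action.
    def extract_features(window):
        """Summarize a (samples x channels) window by simple statistics."""
        return np.concatenate([window.mean(axis=0), window.std(axis=0)])

    class DemonstrationClassifier:
        """Illustrative 1-nearest-neighbor model learned from demonstrations."""
        def fit(self, windows, labels):
            self.features = np.array([extract_features(w) for w in windows])
            self.labels = list(labels)
            return self

        def predict(self, window):
            # Classify a new window by its closest recorded demonstration.
            d = np.linalg.norm(self.features - extract_features(window), axis=1)
            return self.labels[int(np.argmin(d))]

    # Hypothetical usage: 3-axis acceleration windows of 50 samples each.
    rng = np.random.default_rng(0)
    walk = [rng.normal(0.0, 1.0, (50, 3)) for _ in range(10)]
    tap = [rng.normal(2.0, 0.5, (50, 3)) for _ in range(10)]
    model = DemonstrationClassifier().fit(walk + tap, ["walk"] * 10 + ["tap"] * 10)
    print(model.predict(rng.normal(2.0, 0.5, (50, 3))))  # -> "tap"

The point of the sketch is structural: the "rules" live in the recorded demonstrations, so accommodating a new user or a new behavior means collecting more examples rather than rewriting logic.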

1.2 LEARNING FROM DEMONSTRATION

Technology and interface design issues associated with wearable computers and augmented reality displays have attracted the attention of many researchers. Previous work focused largely on ubiquitous computing and on technology that attempts to blend or fuse computer-generated information with human sensations of the natural world [3], especially through personal imaging systems [4]. In this book, we concentrate on sensor-based wearable tools that point toward practical systems usable by ordinary people, not just technical users, for applications in everyday life. As a sensory prosthesis, many of our application systems are based on motion sensing, wireless communication, and intelligent control, with the following three major foci:

  • Hardware Design and Sensor Integration: We require our intelligent wearable systems to be convenient to wear and socially acceptable. The communication, sensor, and computational hardware should therefore not substantially change the weight or weight balance of typical clothing, lest it alter how an individual normally walks or behaves. In most cases we anticipate embedding the system inside everyday clothing and accessories, such as the insole of a typical shoe, spectacles, a waist belt, finger rings, or a cap. These systems involve the use of vision sensors, bioelectric sensors, and motion sensors.
  • Motion Detection: MEMS (microelectromechanical systems) sensors play a major role in our endeavor to develop functional wearable interfaces because of their low cost and miniature size. In several chapters of this book, we present systems that use MEMS sensors to measure the multidimensional force/acceleration of various body parts and wirelessly transmit these motion data to a computer for analysis. Our typical prototype consists of four main subsystems: 1) the wearable MEMS motion sensor, 2) the wearable wireless transmission board, 3) the wireless transmission interface board for the PC, and 4) the information processing algorithm and display program (a sketch of the PC-side data handling appears after this list).
  • Motion Modeling and Recognition: Human motion is a complicated phenomenon. Meaningful information expressed by the human body is embedded in both the static and the temporal characteristics of motions. Motion modeling is the basis of human motion analysis; it involves the selection of a representation, kinetic modeling, temporal modeling, and so on. Different types of human motion have different characteristics, and the choice of recognition technique depends on the level of abstraction at which the information is to be processed. Understanding the properties of the motions to be analyzed is therefore crucial to the design of a recognition methodology.
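
As a concrete illustration of subsystems 3 and 4 above, the sketch below shows one way a PC-side program might unpack 3-axis acceleration packets arriving from the wireless interface board and convert them to physical units. The packet layout (a sync byte followed by three 16-bit signed samples) and the sensor scale factor are assumptions made for illustration, not the actual protocol of our prototype.

    import struct

    SYNC = 0xAA              # assumed start-of-packet marker
    SCALE = 2.0 / 32768.0    # assumed +/-2 g full scale, 16-bit signed samples

    def parse_packets(stream):
        """Yield (ax, ay, az) in g from a byte stream of 7-byte packets:
        one sync byte followed by three little-endian int16 samples."""
        buf = bytearray()
        for byte in stream:
            buf.append(byte)
            # Resynchronize if the buffer does not start with the sync byte.
            while buf and buf[0] != SYNC:
                buf.pop(0)
            if len(buf) >= 7:
                x, y, z = struct.unpack_from("<hhh", buf, 1)
                del buf[:7]
                yield (x * SCALE, y * SCALE, z * SCALE)

    # Demo on synthetic bytes; in a real system the stream would come from
    # the serial port exposed by the wireless interface board.
    packet = bytes([SYNC]) + struct.pack("<hhh", 16384, 0, -16384)
    for ax, ay, az in parse_packets(packet * 3):
        print(f"ax={ax:+.2f} g  ay={ay:+.2f} g  az={az:+.2f} g")

Once samples are in physical units, the display program can plot them directly, and the recognition algorithms described above can consume them as windows of multichannel data.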

We believe that by combining advances in MEMS sensing, intelligent algorithms, and communication technologies, it is possible to develop novel human interface systems that enable multifunctional control and input tasks while allowing an overall reduction in the size of interface devices. Experimental results from our prototype systems indicate that human interfacing functions can be performed using existing vision sensors, bioelectric sensors, and MEMS-based motion sensors.

A human interface can be considered intelligent if it employs some kind of intelligent technique to provide efficient interactivity between the user and the machine, or between the user and the environment. Example capabilities include user adaptivity, anticipation of user needs, taking the initiative and making suggestions to the user, and providing explanations of its actions. We believe that this research area will open up tremendous new human–computer interface possibilities, resulting in rich academic research content and potential product lines in the consumer electronics and multimedia industries.

This book is composed of eight chapters. This first chapter serves as an introduction and overview of the work and summarizes related research. In Chapter 2, we present the network architecture for a wearable robot, a mobile information device capable of supporting remote communication and intelligent interaction between networked entities. Chapter 3 addresses the intelligent glasses, which can automatically perform language translation in real time. In Chapter 4, we introduce the intelligent cap, which enables a severely disabled person to guide his or her wheelchair by eye gaze. In Chapter 5, we develop a sensor-integrated shoe system as an information acquisition platform for sensing foot motion. It can be used for computer control in the form of a shoe-mouse, for human identification within a framework that captures and analyzes dynamic human gait, and for health monitoring of patients with musculoskeletal and neurological disorders. In Chapter 6, we present our work on merging MEMS force sensors and wireless technology to develop a novel multifunctional finger-ring-based input system, which could potentially replace the mouse, pen, and keyboard as input devices to the computer. In Chapter 7, we present the motion-sensor-based Digital Writing Instrument for recording handwriting on any surface. In Chapter 8, a human airbag system is designed to reduce the impact force from falls caused by slipping; a recognition algorithm is developed for real-time fall detection.

REFERENCES

1. M. C. Nechyba and Y. Xu, “Human Control Strategy: Abstraction, Verification and Replication,” IEEE Control Systems, Vol. 17, No. 5, pp. 48–61, 1997.

2. K. K. Lee and Y. Xu, “Computational Intelligence for Modeling Human Sensations in Virtual Environments,” Journal of Advanced Computational Intelligence, Vol. 8, No. 3, pp. 302–312, 2004.

3. W. Barfield and T. Caudell, Eds., Fundamentals of Wearable Computers and Augmented Reality, New Jersey, Lawrence Erlbaum Associates, 2001.

4. S. Mann, Intelligent Image Processing, New York, IEEE and Wiley, 2002.
