Chapter 7.3

Wearable and Non-Invasive Assistive Technologies

Maysam Ghovanloo1 and Xueliang Huo2,    1GT-Bionics Lab, School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA,    2Interactive Entertainment Business, Microsoft, Redmond, Washington, USA

In this chapter, we have reviewed the state-of-the-art wearable assistive technologies (AT) for people with severe paralysis. More specifically, we have described the concept and architecture of a Tongue Drive System (TDS), which is a tongue-operated wireless and wearable AT that can infer users’ intentions by detecting their volitional tongue movements using an array of magnetic sensors and a small magnetic tracer secured on the tongue. We have presented a dual-mode TDS (dTDS), which incorporates speech recognition and TDS into one device to provide its end user with a more efficacious, flexible, and reliable computer access tool that can be used in a wider variety of daily tasks. Clinical trials with individuals with high-level spinal cord injuries (SCI) demonstrated that the TDS can effectively substitute for some of the hand and finger functions in controlling a computer mouse cursor and driving powered wheelchairs.

Keywords

Assistive technologies; magnetic sensors; sensor signal processing; paralysis; tongue drive system; wireless wearable devices; human computer interface

1 Assistive Devices for Individuals with Severe Paralysis

The number of people with paralysis is increasing among all age groups. A study initiated by the Christopher and Dana Reeve Foundation showed that one in fifty people in the United States is living with paralysis. Figure 1 shows the major causes of paralysis, from spinal cord injuries (SCI) to neuromuscular disorders. Sixteen percent of these individuals (about one million) have stated that they are unable to move and cannot live without continuous help [1]. Moreover, the National Institutes of Health in the United States report that 11,000 new cases of severe SCI from automotive accidents, acts of violence, and falls are added to this population every year. Sadly, 55% of individuals with spinal cord injuries are between 16 and 30 years old, and they will need special care for the rest of their lives [2].

image
Figure 1 Causes of paralysis in the U.S. [1].

Assistive technology (AT) is “any item, piece of equipment or product system whether acquired commercially off the shelf, modified or customized that is used to increase, maintain or improve functional capabilities of individuals with disabilities” [3]. ATs are either worn by users (e.g., a wearable head tracker) or mounted on their wheelchairs (e.g., sip-n-puff) or beds for easy access. They can enable individuals with different types of disabilities, particularly severe paralysis, to communicate their intentions to other devices in their environments, particularly computers. This reduces these individuals’ need for continuous help, thus easing the burden on their family members, freeing their dedicated caregivers, and reducing their healthcare and assisted-living costs. It may also help them to be employed and experience active, independent, and productive lives.

Among ATs, those providing alternative control for computer access and wheeled mobility are considered the most important for today’s active lifestyle since they can improve the users’ quality of life (QoL) by easing two major limitations: effective communication and independent mobility [4]. It is generally accepted that once individuals with disabilities are “enabled” to move around and effectively access computers and/or smartphones, they can do virtually most, if not all, of the things that able-bodied individuals with educational, administrative, or scholarly careers do on a daily basis [5,6]. This has resulted in a considerable amount of ongoing research towards developing new ATs that can potentially take advantage of any remaining abilities of these individuals, such as head motion, eye movements, muscle contractions, and even brain signals, to provide this population with alternative means to interact with computers and electronic gadgets. However, up until now, very few ATs have made a successful transition outside of the research laboratories into the consumer market to be widely used by severely disabled individuals, and many of those that have been commercialized still have numerous shortcomings and difficulties.

1.1 Sip-n-Puff

Sip-n-puff is a simple, low-cost, switch-based AT, which allows its user to control a powered wheelchair (PWC) [7] or manipulate a mouse cursor [8] by blowing and sucking through a straw (Figure 2). However, it is slow, cumbersome for complicated commands, and offers very limited flexibility, degrees of freedom (DoF), and adaptability to user abilities. It only has a limited number of direct choices (four commands: soft sip, hard sip, soft puff, hard puff), which must be entered one at a time, in series. Another major limitation of sip-n-puff is the lack of proportional control, as opposed to a joystick, which provides much easier and smoother control over different movements, such as acceleration and deceleration of a PWC. Sip-n-puff needs regular cleaning and maintenance because it is exposed to saliva and food residue. It requires diaphragm control and may not benefit those who continuously use ventilators, such as those with SCI at level C2.

image
Figure 2 Sip-n-puff devices require users to suck and blow through a straw that is connected to a pneumatic switch or pressure sensor to control powered wheelchairs [7,8].

1.2 Head Controllers

Another group of ATs, known as head-pointers, are designed to replace arm and hand functions with head movements. Many of these devices are dedicated to emulating a computer mouse as a means for users to control cursor movement on the computer screen [9–15]. Some of them can also be used for wheelchair operation by embedding switch sensors in the head rest [16] or detecting head motion using inertial sensors [17,18]. Figure 3 shows a variety of such devices that are based on different tracking mechanisms.

image
Figure 3 Different types of head movement-based assistive devices: (a) Boost Tracer based on head acceleration measured by gyroscopes [10], (b) Headmaster based on the intensity of ultrasonic sounds received by three head-mounted microphones [11], (c) Headmouse based on infrared reflection received from a head-dot [12], (d) Tracker Pro based on infrared reflection similar to Headmouse [13], (e) TrackIR based on infrared emission received from an active unit mounted on the headset [14], (f) Camera Mouse based on tracking the movements of a user-defined facial feature within a webcam field of view [15], (g) Head array wheelchair controller based on proximity sensors [16], and (h) Magitek wheelchair controller based on head movements measured by accelerometers [17].

One limitation of head-controlled assistive devices is that only individuals whose head movement is not inhibited may benefit from them [9]. Many of those with quadriplegia and locked-in syndrome do not have good head movement, or if they do, their muscles are weak. Therefore, they may not benefit from any of these devices. Another limitation of head controllers is that the user’s head must always remain within the range of the device sensors; otherwise, the AT is not accessible to the user. For example, the head-mounted sensors, reflectors, and laser beams, or even the user’s facial features, are not accessible when he/she is not sitting in front of the computer/webcam or when lying in bed. Also, the use of head-controlled assistive devices for a long period of time can be quite fatiguing since they exhaust the user’s neck muscles, which may already be weak as a result of the disability. They are also susceptible to inertial forces applied to the head when the wheelchair is in motion.

1.3 Eye-Tracking Systems

Another category of assistive devices operate by tracking eye movements and eye gaze, or more precisely, by detecting corneal reflections and tracking pupil position [19–22]. In these devices, a camera placed in front of the users, or a miniature optical sensor worn by the users, captures the light, usually artificially generated infrared light, reflected from the cornea, lens, or retinal blood vessels (Figure 4). The information is further analyzed to extract the eye movement from the changes in reflection, and translated to move a cursor on the computer screen. Electro-oculograms (EOG) have also been utilized for detecting eye movements to generate control commands for both computer access and wheelchair control [23–27].

image
Figure 4 Different eye tracking devices: (a) Wearable EOG-based eye tracker operates by interpreting bioelectrical signals recorded using surface electrodes attached to the skin around user’s eyes [26], (b) Lightweight wearable eye tracking headgear tracks the eye movement using a micro-lens camera [20], (c) iView X HED wearable video-based eye tracker from SensoMotoric Instruments (SMI) works by tracking the pupil position based on corneal reflection [21], and (d) EyeTech TM3 computer-mounted eye tracker based on corneal reflection similar to iView [22].

Since eyes have evolved as sensory parts of our body, a drawback of the eye-tracking systems is that they affect the users’ normal vision by requiring extra eye movements that sometimes interfere with the users’ visual tasks. Despite some recent progress, the “Midas touch” problem, which refers to initiation of unintended commands, is an issue when the user just looks at some point and the system considers that as a command [27]. The EOG-based method, shown in Figure 4(a), requires facial surface electrodes and a bulky signal-processing unit attached to the goggles that are unsightly and give the user a strange look. This might make the user feel uncomfortable in public. In general, camera-based eye trackers (Figure 4(b)–(d)) are sensitive to the ambient light condition, and generally not suitable or safe for the outdoor environment and wheelchair control. The computer-mounted eye tracking method always requires a camera or display in front of the users for detection or visual feedback, respectively. This is similar to the limitations of the head-controlled devices, requiring the user’s head to remain within a certain range.

1.4 Electromyography-Based Controllers

Electromyogram (EMG) refers to the electrical signals that are generated by muscle fibers during muscle contractions [28]. EMG-based control systems monitor EMG signals from a targeted group of muscles, typically facial [28], neck [29], or shoulder muscles [30], which are associated with movements or twitches that the user is still able to perform. Customized signal-processing algorithms recognize the EMG patterns associated with each movement and produce a set of discrete control commands that can be used to move the mouse cursor and perform selections for computer interaction [28,29] or replace the joystick function to manipulate a wheelchair [30,31].

EMG-based systems are relatively error-prone and need complex muscular interactions [31]. These systems require highly specialized hardware and sophisticated signal-processing algorithms, resulting in low portability [32]. Properly positioning the EMG electrodes for good skin contact, which sometimes requires special gels or adhesives, and later removing them are time consuming and cumbersome. Moreover, facial electrode attachment suffers from the same cosmetic problem as the EOG-based eye trackers.

1.5 Voice Controllers

There are environmental controllers that utilize voice commands as input. Speech recognition software, such as Siri from Apple, Dragon NaturallySpeaking [33] from Nuance, and Talking Desktop [34], is effective in particular aspects of computer access, such as text entry or individual commands (copy, paste, delete, etc.). However, it is not efficient for cursor navigation and is sensitive to accents, dialects, and environmental noise. There are non-speech, sound-based voice controllers, such as the Vocal Joystick [35], developed for cursor control by mapping different sounds to specific cursor movement directions while associating the energy (loudness) of the sound with the velocity of cursor movement. In these devices, language specificity and accent sensitivity have been eliminated. However, users might feel uncomfortable and awkward when making such sounds in public places, particularly where people are supposed to be quiet, e.g., in a church or library. There are also researchers working on developing voice-based controllers for PWC manipulation [36,37]. These devices can provide reasonable bandwidth and have relatively short response times. However, they are not safe enough to operate the wheelchair independently and therefore have to rely on an additional autonomous navigation system to avoid collisions. A common problem associated with almost all voice-based controllers is that they work properly in quiet indoor environments, but become inefficient and even completely useless in noisy outdoor environments. Voice-based systems can also raise privacy concerns when users try to type an email or text message in public.

1.6 Brain-Computer Interfaces

A group of assistive devices, known as brain-computer interfaces (BCIs), directly tap into the source of all volitional control, the brain. Such BCIs can potentially provide broad coverage among users with various disabilities. However, depending on how close the electrodes are placed with respect to the brain, there is always a compromise between invasiveness and information bandwidth. Non-invasive BCIs sense surface electrical signals generated by forehead muscles and electro-encephalogram (EEG) activity (Figure 5(a)–(c)) [38–44]. Limited bandwidth and susceptibility to noise and motion artifacts have prevented these devices from being used for important tasks such as navigating PWCs in outdoor environments, which need short reaction times and high reliability. EEG-based BCIs need extensive training and concentration before adequate control is obtained or retained. They also have considerable setup time to prepare the scalp sites for good electrode contact, properly place and later remove the electrodes or electrode cap, and finally clean the skin. There is the risk of harmful skin breakdown or allergic reaction if electrodes and gels remain on the scalp for extended periods of time. Several groups are developing dry electrodes [45]. However, their recording quality is not as good as that of standard electrodes. Invasive BCIs (Figure 5(d)–(e)), on the other hand, require planar (electrocorticogram – ECoG) or intracortical electrodes to record brain signals, often from the motor cortex area, while many patients may not be quite ready to undergo a complicated brain surgery for the sake of regaining access to their environment [46–50]. Also, long-term and reliable recording beyond two to three years is yet to be demonstrated.

image
Figure 5 Some of the existing brain computer interfaces (BCIs): (a) Noninvasive surface EEG-based BCI, (b) BSI-Toyota EEG-based BCI for wheelchair control [42], (c) Honda BCI system combining EEG with NIR [43], (d) Invasive BCIs utilizing electrocorticogram (ECoG) signal [48], and (e) BrainGate invasive BCI based on the neural signals detected by intracranial microelectrodes [46].

1.7 Tongue-Operated Devices

Figure 6 shows a group of ATs that operate based on the users’ voluntary oral movements. Tongue-Touch-Keypad (TTK) (a) is a switch-based device with nine keys that require tongue pressure [51]. Tongue-Mouse (b) is like a touch pad that is operated by the tongue [52]. Tongue-Point (c) and Integra-Mouse (d) are modified joysticks to be used with the tongue and mouth [53,54]. They need tongue pressure and may cause fatigue and irritation over long-term use. Think-A-Move (e) measures pressure changes in the ear canal as a result of tongue movements [55]. It offers only 1-D control with limited DoF. Figure 6(f) shows an optical tongue gesture detector, which does not need any attachments to the tongue [56]. A potential problem with this device is the high probability of unintended commands during speech or ingestion. There is also an inductive intraoral controller (g), similar to the TTK, with 18 inductive switches that are activated by a metallic activation unit on the tongue [57]. However, these technologies either need bulky objects inserted into users’ mouths, preventing them from eating or talking while using the devices, or require tongue and lip contact and pressure, which may cause fatigue and irritation over long-term use.

image
Figure 6 Tongue-operated ATs: (a) Tongue Touch Keypad (TTK) [51], (b) Tongue-Mouse [52], (c) Tongue-Point [53], (d) Integra-Mouse [54] (e) Think-A-Move [55], (f) Optical tongue gesture detector [56], and (g) Inductive tongue-computer interface [57].

In summary, the existing ATs either provide their users with very slow and limited control over their environment or they are highly invasive and in early stages of development. Therefore, there is clearly an urgent need for developing a new wearable assistive device that can meet the following requirements to better assist its end users:

• Take full advantage of a user’s existing capability

• Either non-invasive or minimally invasive

• Powerful and can be used to interface with computer, wheelchair, different electronic devices, and environment

• User friendly, putting no or little physical and mental burden on end users

• Wearable and robust and can be used under different conditions

• Cosmetically acceptable

• Cost effective

2 Why Use the Tongue for Wearable Technology?

The motor homunculus in Figure 7 shows that the tongue and mouth occupy a significant amount of sensory and motor cortex in the human brain, rivaling that of the fingers and the hands. Hence, they are inherently capable of sophisticated motor control and manipulation tasks with many degrees of freedom, which is evident from their roles in speech and ingestion [58]. The tongue is connected to the brain via the hypoglossal nerve, which generally escapes severe damage in SCI and most neuromuscular diseases. As a result, even patients with high-level SCIs still maintain intact tongue control capabilities. The tongue can move rapidly and accurately within the oral cavity, which indicates its high capacity for wideband indirect communication with the brain. Its motion is intuitive and, unlike EEG-based BCIs, does not require thinking or concentration. The tongue muscle has a low rate of perceived exertion and does not fatigue easily. Therefore, a tongue-based device can be used continuously for several hours as long as it allows the tongue to move freely within the oral space. The motoneurons controlling the tongue muscles receive a wealth of vestibular input, allowing the tongue position to be reflexively adjusted with changes in body position. Therefore, tongue-operated devices can be easily used anywhere, and in any position, such as sitting in a wheelchair or lying in bed. Another advantage of using the tongue is that its location inside the mouth affords its users considerable privacy, which is especially important for people with disabilities, who do not want to be considered different from their able-bodied counterparts. Finally, unlike some BCIs that use neural signals from the motor cortex, picked up by electrode arrays implanted on the brain surface, non-invasive access to tongue motion is readily available without penetrating the skull.

image
Figure 7 Tongue and mouth in the motor homunculus [58].

3 Wireless Tracking of Tongue Motion

A tongue drive system (TDS) is a minimally invasive, unobtrusive, tongue-operated, wireless, and wearable assistive technology that can enable people with severe paralysis to control their environment, such as accessing computers or driving wheelchairs, using nothing but their volitional tongue movements. The system infers the users’ intentions from the voluntary positions, gestures, and movements of their tongues and translates them into user-defined commands that are simultaneously available to the TDS users in real time. These commands can be used to control the movements of a cursor on the PC screen, thereby substituting for a mouse or touchpad, or to substitute for the joystick function in a powered wheelchair (PWC) [59].

Conceptually, a TDS consists of an array of magnetic sensors, either mounted on a dental retainer inside the mouth, similar to an orthodontic brace (intraoral TDS or iTDS), as shown in Figure 8(a), or on a headset outside the mouth, similar to a head-worn microphone (external TDS, eTDS), as shown in Figure 8(b), plus a small permanent magnetic tracer. The magnetic tracer, which is the size of a grain of rice, can be temporarily attached to the tongue using tissue adhesives. However, for long-term use the user should receive a tongue piercing and wear a magnetic tongue stud with the magnetic tracer embedded in it. Alternatively, the tracer can be coated with biocompatible materials, such as titanium or gold, and implanted under the tongue mucosa. The magnetic field generated by the tracer varies inside and around the mouth with the tongue movements. These variations can be detected by the magnetic sensors and wirelessly transmitted to a smartphone or a PC, which can be worn by the user or attached to his/her PWC. A sensor signal-processing (SSP) algorithm running on the PC/smartphone classifies the sensor signals and converts them into user-defined control commands, which are then wirelessly communicated to the target devices in the user’s environment [59].

image
Figure 8 Block diagram of the Tongue Drive System: (a) Intraoral TDS (iTDS) with magnetic sensors and control unit located on a dental retainer, and (b) External TDS (eTDS) with all the electronics mounted on a headset.

Alternatively, the magnetic tracer can be treated as a magnetic dipole because its size is much smaller than its distance from the sensors. The position and orientation of the magnetic tracer inside the oral cavity can then be accurately tracked using the magnetic field strength measured at known sensor locations and the magnetic dipole equation. Various iterative optimization algorithms, such as particle swarm, Powell, DIRECT, and Nelder-Mead, have been implemented to solve such high-order nonlinear problems [60]. The trajectory of the magnetic tracer, which represents the movement of the tongue, can be used to define tongue gesture commands, providing users with a theoretically unlimited number of commands. It can also be utilized for more advanced proportional control.
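To make the dipole-based tracking concrete, the sketch below fits the tracer position and moment to simulated sensor readings with a Nelder-Mead search, in the spirit of the optimization-based approach cited above. It is a minimal illustration only: the sensor geometry, tracer moment, and the SciPy-based solver are assumptions, not the implementation used in [60].

```python
# Minimal sketch: recover the tracer pose by fitting the point-dipole field
# model to the readings of four 3-axis sensors. Geometry and values are
# illustrative assumptions, not the TDS implementation.
import numpy as np
from scipy.optimize import minimize

MU0 = 4 * np.pi * 1e-7  # vacuum permeability (T*m/A)

def dipole_field(r_sensor, r_tracer, m):
    """Flux density (T) of a point dipole with moment m at r_tracer, seen at r_sensor."""
    r = r_sensor - r_tracer
    d = np.linalg.norm(r)
    return MU0 / (4 * np.pi) * (3.0 * r * np.dot(m, r) / d**5 - m / d**3)

def residual(params, sensor_pos, measured):
    """Sum of squared errors between modeled and measured fields."""
    r_tracer, m = params[:3], params[3:]
    model = np.array([dipole_field(p, r_tracer, m) for p in sensor_pos])
    return np.sum((model - measured) ** 2)

# Four sensors near the cheeks, mirrored about the sagittal plane (meters, assumed).
sensor_pos = np.array([[-0.04,  0.02, 0.0], [-0.04, -0.02, 0.0],
                       [ 0.04,  0.02, 0.0], [ 0.04, -0.02, 0.0]])

# Synthesize a "measurement" from a known pose, then recover it from a perturbed guess.
true_pos = np.array([0.0, 0.01, 0.02])   # tracer position (m)
true_m = np.array([0.0, 0.0, 1e-2])      # dipole moment (A*m^2), illustrative
measured = np.array([dipole_field(p, true_pos, true_m) for p in sensor_pos])

x0 = np.concatenate([true_pos + 0.005, 0.8 * true_m])
fit = minimize(residual, x0, args=(sensor_pos, measured), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-9, "fatol": 1e-24})
est_pos, est_moment = fit.x[:3], fit.x[3:]  # estimated tracer position and moment
```

In practice the previous frame's estimate would serve as the initial guess for the next frame, so the search only has to refine a small change in tongue position.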

A key advantage of the TDS is that a few magnetic sensors and an inherently wireless small magnetic tracer can capture a large number of tongue movements, each of which can represent a specific command. A set of dedicated tongue movements can be tailored to each individual user based on his/her mouth anatomy, preferences, lifestyle, and remaining abilities, and mapped onto a set of customized functions for environmental access. Therefore, a TDS can benefit a wide range of potential users with different types of disabilities because of its flexible and adaptive operating mechanism. By tracking tongue movements in real-time, a TDS also has the potential to provide its users with proportional control, which is easier, smoother, and more natural than the switch-based control for complex tasks such as maneuvering a PWC in confined spaces. Using TDS does not require users’ tongues to touch or push against anything. This can significantly reduce tongue fatigue, which is an important factor that affects AT acceptability, and therefore results in greater user satisfaction and technology adoption rate. The TDS headset can be equipped with additional transducers, such as a microphone or motion sensors, and combined with commercial voice recognition software and a customized graphical user interface (GUI) to create a “single” integrated, multi-modal, multi-functional system, which can be effectively used in a variety of environments for multiple purposes [61].

4 Wearable Tongue Drive System

Figure 9 shows the latest wearable eTDS prototype [59], which includes: 1) A magnetic tracer, 2) a wireless headset built on headgear to mechanically support an array of four 3-axial magnetic sensors and a control unit that combines and packetizes the acquired magnetic field measurement raw data before wireless transmission, 3) a wireless receiver that receives the data packets from the headset and delivers them to the PC or smartphone, and 4) a GUI running on the PC/smartphone, which includes high throughput data streaming drivers and the SSP algorithm for filtering and classifying the magnetic sensor signals.

image
Figure 9 Major components of the external Tongue Drive System (eTDS).

4.1 Permanent Magnetic Tracer

The TDS magnetic tracer was made of an alloy, Nd2Fe14B, known as a rare-earth permanent magnet, which has one of the highest residual magnetic strength values, resulting in a small-sized tracer without sacrificing the signal-to-noise ratio (SNR). Currently, we use a disc-shaped magnetic tracer (Φ5 mm × 1.6 mm) with Br=14,500 Gauss from K&J Magnetics.

4.2 Wireless Headset

The wireless headset has been equipped with a pair of goosenecks, each of which mechanically supports two 3-axial anisotropic magneto-resistive (AMR) HMC1043 sensors (Honeywell) near the subjects’ cheeks, symmetrical to the sagittal plane to detect the magnetic field variation due to tongue motion. It also has a wireless control unit to packetize and wirelessly transmit the data samples and a pair of rechargeable batteries. We used commercially available headgear, shown in Figure 9, for most human subject trials. However, we have also developed a custom-designed headset using 3-D printing technology [62].

As shown in the eTDS headset block diagram in Figure 10, each HMC1043 consists of three orthogonal AMR Wheatstone bridges, whose resistances change in the presence of a magnetic field parallel to their sensing directions, resulting in differential output voltages. These outputs are multiplexed before being amplified by a low-noise instrumentation amplifier, INA331 (Texas Instruments), with a gain of 200 V/V. A microcontroller unit (MCU) with a built-in 2.4 GHz RF transceiver (CC2510, TI) samples each sensor output at 50 Hz, using its on-chip 12-bit analog-to-digital converter (ADC), while turning on only one sensor at a time to save power. Each sensor is duty-cycled at 2%, which results in a total on-time of only 8%. To avoid sensitivity and linearity degradation in the presence of strong fields (>20 Gauss), when the magnetic tracer is very close to the sensor (<1 cm), the MCU generates a short pulse (2 μs) to reset the sensor right before its output is sampled.

image
Figure 10 The block diagram of the eTDS wireless headset.

If users hold their tongues close to the left-back module (<1 cm) for >3 s, the TDS status switches between operational and standby modes. When the system is in the operational mode, all four sensor outputs are sampled at 50 Hz, and the results are packed into the data frame for RF transmission. In the standby mode, the MCU only samples the left-back side module at 1 Hz and turns off the RF transceiver to save power.

A simple but effective wireless handshaking mechanism has been implemented between the headset and the wireless receiver to establish a dedicated wireless connection between the two devices without interference from nearby eTDS headsets. When the eTDS headset is turned on, it enters an initialization mode by default and broadcasts a handshaking request packet containing a specific header and its unique network ID over a basic frequency channel (2.45 GHz) at 1 s intervals for one minute. If the headset receives a response packet from a nearby USB receiver within the initialization period, it updates its frequency channel, standby threshold, and other operating parameters that are included in the response packet. It then sends an acknowledgement packet back to the transceiver to complete the handshaking and switches to normal operating mode using the received parameters. Otherwise, the headset enters standby mode and blinks a red LED to indicate that the initialization has failed and that the power cycle should be repeated.
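The headset-side logic of this handshaking procedure could look roughly like the sketch below. The packet fields, the radio object and its API, and the timing constants are illustrative assumptions that mirror the description above, not the CC2510 firmware itself.

```python
# Sketch of the headset-side handshaking described above. The radio object,
# packet fields, and constants are assumed for illustration only.
import time

HANDSHAKE_CHANNEL_MHZ = 2450   # basic frequency channel (2.45 GHz)
REQUEST_INTERVAL_S = 1.0       # broadcast a request every second
INIT_WINDOW_S = 60.0           # give up after one minute

def initialize_headset(radio, network_id):
    """Return the negotiated operating parameters, or None if initialization fails."""
    radio.set_channel(HANDSHAKE_CHANNEL_MHZ)
    deadline = time.monotonic() + INIT_WINDOW_S
    while time.monotonic() < deadline:
        radio.send({"type": "handshake_request", "id": network_id})
        resp = radio.receive(timeout=REQUEST_INTERVAL_S)
        if resp and resp.get("type") == "handshake_response" and resp.get("id") == network_id:
            # Acknowledge, adopt the receiver-chosen channel and thresholds,
            # and switch to normal operating mode.
            radio.send({"type": "handshake_ack", "id": network_id})
            radio.set_channel(resp["channel"])
            return {"channel": resp["channel"],
                    "standby_threshold": resp.get("standby_threshold")}
    return None  # failure: blink the red LED, enter standby, wait for a power cycle
```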

The power management circuitry includes a pair of AAA Ni-MH batteries, a voltage regulator, a low-voltage detector, and a battery charger. The system consumes roughly 6.5 mA from a 2.5 V supply and can run for more than 120 hours on a full charge.

4.3 Wireless USB Receiver

Figure 9 shows a prototype wireless USB receiver dongle, which uses the same type of MCU as the eTDS headset (CC2510). The receiver has two operating modes: handshaking and normal. In the handshaking mode, the receiver first listens for incoming handshaking request packets from eTDS headsets within range (~10 m). If it receives a handshaking request packet with a valid network ID, it scans through all available frequency channels and chooses the least crowded one as the communication channel for that specific headset. The receiver then switches to transmit mode and sends a handshaking response packet to the headset, before switching back to receive mode and waiting for the acknowledgment packet. If an acknowledgment is received within 5 s, the receiver updates its frequency channel to the same frequency as the eTDS headset and enters the normal operating mode of receiving data packets. Otherwise, it notifies the user via the PC/smartphone that the handshaking has failed. In normal mode, the CC2510 MCU wirelessly receives the RF packets from the headset through the 2.4 GHz wireless link, extracts the sensor outputs from the packets, and then delivers them to the PC/smartphone through USB.

4.4 Graphical User Interface

The current GUI for computer access has been developed in the LabVIEW environment for testing and demonstration purposes. Generally, there is no need to present the eTDS users with a specific GUI, because as long as the SSP engine is running in the background, the eTDS can directly substitute for the mouse and keyboard functions in the Windows operating system and provide users with access to all of the applications and software on the PC.

In the PWC GUI, a universal wheelchair control protocol has been implemented based on two state vectors: one for linear movements and one for rotations. The speed and direction of the wheelchair movements or rotations are proportional to the absolute values and polarities of these two state vectors, respectively. Five commands are defined in the PWC GUI to modify the analog state vectors, resulting in the wheelchair moving forward (FD) or backward (BD), turning right (TR) or left (TL) and stopping/neutral (N). Each command increments/decrements its associated state vector by a certain amount until a predefined maximum/minimum level is reached. The neutral command (N), which is issued automatically when the tongue returns back to its resting position, always returns the state vectors back to zero. Therefore, by simply returning their tongues to their resting positions, the users can bring the wheelchair to a standstill.

Based on the above rules, we implemented two wheelchair control strategies: discrete and continuous. In the discrete control strategy, the state vectors are mutually exclusive, i.e., only one state vector can be non-zero at any time. If a new command changes the current state, e.g., from FD to TR, the old state vector (linear) has to be gradually brought back to zero before the new vector (rotation) can be changed. Hence, the user is not allowed to change the moving direction of the wheelchair before stopping. This is a safety feature, particularly for novice users, at the cost of reduced wheelchair agility. In the continuous control strategy, on the other hand, the state vectors are no longer mutually exclusive and the users are allowed to steer the wheelchair to the left or right as it is moving forward or backward. Thus, the wheelchair movements are continuous and much smoother, making it possible to follow a curve, for example.
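A simplified sketch of these state-vector update rules is shown below. The step size, saturation limits, and the way the neutral command decays the vectors toward zero are assumptions made for illustration; they are not the parameters of the actual PWC GUI.

```python
# Sketch of the universal wheelchair control protocol described above:
# two signed state vectors (linear, rotation) nudged by tongue commands.
# Step size, limits, and the discrete/continuous switch are illustrative.

STEP, LIMIT = 0.1, 1.0  # increment per command and saturation level (assumed)

class WheelchairState:
    def __init__(self, continuous=True):
        self.linear = 0.0     # + forward / - backward
        self.rotation = 0.0   # + right / - left
        self.continuous = continuous

    def _step_toward_zero(self, value):
        return max(value - STEP, 0.0) if value > 0 else min(value + STEP, 0.0)

    def apply(self, command):
        if command == "N":    # neutral: tongue at rest -> decay toward standstill
            self.linear = self._step_toward_zero(self.linear)
            self.rotation = self._step_toward_zero(self.rotation)
            return
        if not self.continuous:
            # Discrete strategy: the other vector must reach zero before switching.
            if command in ("FD", "BD") and self.rotation != 0.0:
                self.rotation = self._step_toward_zero(self.rotation)
                return
            if command in ("TR", "TL") and self.linear != 0.0:
                self.linear = self._step_toward_zero(self.linear)
                return
        if command == "FD":
            self.linear = min(self.linear + STEP, LIMIT)
        elif command == "BD":
            self.linear = max(self.linear - STEP, -LIMIT)
        elif command == "TR":
            self.rotation = min(self.rotation + STEP, LIMIT)
        elif command == "TL":
            self.rotation = max(self.rotation - STEP, -LIMIT)
```

Repeated FD commands ramp the linear vector toward its limit, while returning the tongue to its resting position (N) steps both vectors back toward zero, emulating the stop-on-rest behavior described above.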

5 Sensor Signal-Processing Algorithm

The sensor signal-processing (SSP) algorithm, which has been implemented in C, has three main components: external magnetic interference (EMI) attenuation, feature extraction (FE), and command classification.

5.1 External Magnetic Interference Attenuation

The EMI attenuation is a pre-processing function that enhances the signal-to-noise ratio (SNR) by minimizing interference from ambient magnetic fields, such as the Earth’s magnetic field (EMF), and focusing on the field generated by the magnetic tracer on the tongue. A stereo differential noise cancellation technique was implemented and proved to be inherently robust against EMI. In this method, the outputs of each 3-axial sensor module are mathematically transformed to orient the sensor module in parallel with the module on the opposite side of the sagittal plane. EMI sources are often far from the sensors and result in common-mode signals in each sensor module and the virtual replica of the opposite-side module. On the other hand, signals resulting from the movements of the magnetic tracer, which is located between the two sensor modules, are differential in nature unless the tracer moves symmetrically with respect to both sensor modules along the sagittal plane. Therefore, if the transformed outputs of each sensor pair are subtracted, the EMI common-mode signal is significantly attenuated, while the differential-mode is amplified. As a result, the SNR is greatly improved [63].
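The core of this cancellation can be expressed in a few lines, as sketched below. The rotation that aligns the left module's axes with the right module's axes is assumed to come from the headset geometry or a calibration step; the identity matrix here is only a placeholder.

```python
# Sketch of the stereo differential EMI cancellation described above.
# R_left_to_right is the (assumed, calibration-derived) rotation that aligns
# the left module's sensing axes with the right module's axes.
import numpy as np

R_left_to_right = np.eye(3)  # placeholder; in practice derived from headset geometry

def cancel_emi(b_left, b_right):
    """Return the differential signal that suppresses far-field (common-mode) interference.

    b_left, b_right: 3-element field readings of a mirrored sensor pair.
    """
    b_left_aligned = R_left_to_right @ np.asarray(b_left)
    # A distant source (e.g., the Earth's field) appears nearly identical in both
    # aligned readings and cancels; the nearby tracer appears differential and survives.
    return np.asarray(b_right) - b_left_aligned
```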

5.2 Feature Extraction

The FE algorithm, which is based on principal component analysis (PCA), is used to reduce the dimensionality of the incoming sensor data and accelerate computations. During the feature identification (also known as training) session, users associate a preferred tongue gesture or position with each TDS command and repeat that command 10 times at 3 s intervals by moving the tongue from its resting position to the desired position after receiving a visual cue from the GUI. Meanwhile, the sensor outputs are recorded for each repetition and labeled with the executed command. Once training is completed, the FE extracts the most significant features of the sensor waveforms for each specific command offline in order to reduce the dimensionality of the incoming data. The labeled samples are then used to form a cluster for each command in the virtual PCA feature space. During normal TDS operation, the same FE algorithm runs over the incoming raw sensor data and projects it onto the PCA feature space by calculating the principal component vector for each sample in real time. These vectors contain the most significant features that help discriminate between different clusters (TDS commands).
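The sketch below shows one way such a PCA-based front end could be set up: principal axes are fitted offline to the labeled training frames, and incoming frames are then projected onto those axes in real time. The number of retained components and the data shapes are illustrative assumptions, not the eTDS parameters.

```python
# Sketch of PCA-based feature extraction: fit principal components on the
# labeled training repetitions, then project live sensor frames in real time.
# The number of retained components (here 3) is an illustrative choice.
import numpy as np

def fit_pca(training_frames, n_components=3):
    """training_frames: (n_samples, n_features) matrix of labeled sensor vectors."""
    mean = training_frames.mean(axis=0)
    centered = training_frames - mean
    # Principal axes from the SVD of the centered training data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    return mean, components

def project(frame, mean, components):
    """Map one incoming sensor frame (e.g., 4 sensors x 3 axes) into the PCA feature space."""
    return components @ (np.asarray(frame) - mean)
```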

5.3 Command Classification

The k-nearest neighbors (KNN) classifier is used within the PCA feature space to calculate the proximity of the incoming data points to the clusters formed during the training session. The KNN algorithm inflates a virtual sphere around the position of the incoming data point in the PCA space until it contains the k nearest classified training points. It then associates the new data point with the command that has the majority of training points inside that sphere. The current eTDS prototype supports six individual tongue commands for mouse control, including four directional commands (UP, DOWN, LEFT, and RIGHT) and two selection commands (LEFT-SELECT and RIGHT-SELECT), which are simultaneously available to the user, plus a neutral command defined at the tongue-resting position. The same commands can be used for wheelchair navigation when the associated application is activated on the smartphone (accelerate, decelerate, turn-left, and turn-right). The selection commands in this mode are used to control the wheelchair power-seating functions.
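A minimal version of this majority-vote classification in the PCA space is sketched below; the value of k and the tie-breaking behavior are illustrative choices rather than the tuned parameters of the eTDS software.

```python
# Sketch of the k-nearest-neighbor command classifier operating in the PCA
# feature space built during training. k and tie handling are illustrative.
import numpy as np
from collections import Counter

def knn_classify(feature_vec, train_features, train_labels, k=5):
    """train_features: (n_points, n_dims) array; train_labels: list of command names."""
    dists = np.linalg.norm(train_features - feature_vec, axis=1)
    nearest = np.argsort(dists)[:k]            # the k closest training points
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]          # majority command among the neighbors
```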

To improve the classification accuracy, we have also employed a two-stage classification algorithm that can distinguish between six different control commands inside the oral cavity with near-absolute accuracy [64]. The first stage focuses on a fully accurate discrimination between left-side (left, up, left-select) and right-side (right, down, right-select) commands. This is done by calculating the Euclidean distance of an incoming point to the left and right command positions, which are averaged from the training trials. These distances are normalized to compensate for any asymmetry between users’ left- and right-side commands and then compared to produce a left/right decision. Based on the outcome of the first stage, the second stage of classification is applied to either the left or the right side of the oral cavity to detect and discriminate between left, up, left-select, and neutral commands on the left side, or right, down, right-select, and neutral commands on the right side. The second stage begins with de-noising the raw data using the noise cancellation technique mentioned above. The filtered data is fed to a group of linear and nonlinear classifiers consisting of linear, diagonal linear, quadratic, diagonal quadratic, Mahalanobis minimum-distance, and KNN classifiers. Finally, the outputs of all classifiers are combined following a majority voting scheme to produce the final result.
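The control flow of this two-stage scheme could be organized as in the sketch below. The normalization constants and the set of per-side classifiers are placeholders standing in for the ensemble listed above.

```python
# Sketch of the two-stage classification flow described above: a normalized
# left/right decision, then a per-side ensemble combined by majority vote.
# Normalization scales and the classifier set are simplified assumptions.
import numpy as np
from collections import Counter

def stage1_side(x, left_centroid, right_centroid, left_scale, right_scale):
    """Decide left vs. right using distances normalized by per-side training spread."""
    d_left = np.linalg.norm(x - left_centroid) / left_scale
    d_right = np.linalg.norm(x - right_centroid) / right_scale
    return "left" if d_left < d_right else "right"

def stage2_command(x, side_classifiers):
    """side_classifiers: list of callables, each trained on one side's commands."""
    votes = Counter(clf(x) for clf in side_classifiers)
    return votes.most_common(1)[0][0]  # majority vote across the ensemble
```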

6 Dual-Mode Tongue Drive System

6.1 The Advantages of Multi-Modal System

It has been understood that an interface that is designed around only one input modality may not be fast and flexible enough to meet the diverse needs of the end users in today’s hectic and demanding lifestyles [65]. Most existing assistive devices, including our TDS, operate well for a narrow set of specific tasks, under a specific set of environmental conditions, for users with a specific set of remaining abilities. Due to the wide variety of tasks in daily life, various types and levels of disabilities, a multitude of environmental conditions, and a diversity of user goals and preferences, ATs that work perfectly well for one set of tasks, users, and environments, might show poor performance in other tasks or environments by the same users, or even completely lose their functionality when used for other applications by other users. In addition to the environmental and operating conditions, the performance of the single-mode ATs can be further degraded by the users’ condition, such as fatigue, spasms, weakness, accent, etc.

A multi-modal device that expands the user access beyond one input channel, on the other hand, can potentially improve the speed of access by increasing the information transfer bandwidth between users and computers [66]. A clear proof of this fact is the use of both mouse/touchpad and keyboard by the majority of able-bodied users on their desktop or laptop machines. In addition, multi-modal devices increase the number of alternatives available to users to accomplish a certain task, giving users the ability to switch among different input modalities, based on their convenience, familiarity, and environmental conditions [67]. Multi-modal devices can also provide their users with more options to cope with fatigue. This is an important factor that improves the acceptability of ATs and can result in greater user satisfaction and technology adoption.

6.2 The Concept of Dual-Mode Tongue Drive System

The dTDS, the block diagram for which is shown in Figure 11, operates based on the information collected from two independent input channels: free voluntary tongue motion and speech. The two input channels are processed independently, while being simultaneously accessible to the users.

image
Figure 11 The block diagram of dual-mode Tongue Drive System (dTDS).

The primary dTDS modality involves tracking tongue motion in 3-D oral space using a small magnetic tracer attached to the tongue via adhesives, piercing, or implantation, and an array of magnetic sensors, similar to the original TDS. The secondary dTDS input modality is based on the user’s speech, captured using a microphone, conditioned, digitized, and wirelessly transmitted to the smartphone/PC along with the magnetic sensor data. Both TDS and speech recognition (SR) modalities are simultaneously accessible to the dTDS users, particularly for mouse navigation and typing, respectively, and they have the flexibility to choose their desired input mode for any specific task without external assistance. The tongue-based primary modality is always active and regarded as the default input modality. The tongue commands, however, can be used to enable/disable the speech-based secondary modality via the dTDS graphical user interface (GUI) to reduce the system power consumption and extend battery lifetime.

By taking advantage of the strengths of both TDS and SR technologies, the dTDS can provide people with severe disabilities with a more efficacious, flexible, and reliable computer access tool that can be used in a wider variety of personal and environmental conditions. Specifically, the dTDS can help its users by 1) increasing the speed of access by using each modality for its optimal target tasks and functions; 2) allowing users to select either technology depending on the personal and environmental conditions, such as weakness, fatigue, acoustic noise, and privacy [66]; and 3) providing users with a higher level of independence by eliminating the need for switching from one AT to another, which often requires assistance from a caregiver.

6.2.1 Wearable dTDS Prototype

The dTDS prototype, built on a customized wireless headset, is an enhanced version of the original TDS with the necessary hardware for a two-way wireless audio link to acquire and transmit users’ vocal commands, while providing them with auditory feedback through an earphone. Figure 12 shows the main components of the dTDS prototype, with two major improvements from the original TDS: 1) A custom-designed wireless headset, fabricated through 3-D rapid prototyping, which mechanically supports an array of four 3-axial magnetic sensors and a microphone plus their interfacing circuitry to measure magnetic field and acoustic signals (a control unit combines and packetizes the acquired raw data before wireless transmission), and 2) a wireless transceiver acting like a bi-directional wireless gateway to exchange audio/data packets between the headset and the PC or smartphone.

image
Figure 12 Major components of the dual-mode Tongue Drive System (dTDS).
6.2.1.1 Wireless Headset

A customized wireless headset was designed to combine aesthetics with user comfort, mechanical strength, and stable positioning of the sensors. The headset was also designed to offer flexibility and adjustability to adapt to the user’s head anatomy, while enabling proper positioning of the magnetic sensors and the microphone near users’ cheeks [62].

The headset has a pair of adjustable sensor poles, each of which holds a pair of 3-axial magneto-impedance (MI) sensors (AMI306, Aichi Steel) near the subjects’ cheeks, similar to TDS, to measure the magnetic field strength. A low-power MCU (CC2510) with a built-in 2.4 GHz RF transceiver communicates with each sensor through the I2C interface to acquire samples at 50 Hz, while turning on only one sensor at a time to save power. When all four sensors are sampled, the results are packed into one magnetic data frame to be ready for RF transmission.

The acoustic signal acquisition is managed by an audio codec (TLV320AIC3204, TI) and delivered through the built-in inter-IC sound (I2S) interface of the CC2510 MCU [61]. A miniaturized SiSonic MEMS microphone (Knowles) was placed near the tip of the right sensor board, as shown in Figure 12, to capture the acoustic signal. Digitized audio samples are compressed to an 8-bit format using the CC2510 built-in μ-law compression hardware to save RF bandwidth. Once a complete audio data frame consisting of 54 samples has been acquired in 6.75 ms, the MCU assembles an RF packet containing one audio frame and one magnetic data frame and transmits it wirelessly [61].
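For reference, the sketch below shows the standard μ-law companding curve that maps a 16-bit sample onto 8 bits and back; the CC2510's hardware compressor is assumed to implement an equivalent (G.711-style) transform, so this is only an illustration of the idea, not the chip's exact algorithm.

```python
# Sketch of mu-law companding: mapping a signed 16-bit audio sample onto 8 bits.
# This follows the standard continuous mu-law curve; the CC2510's hardware
# compressor is assumed to implement an equivalent companding transform.
import numpy as np

MU = 255.0

def ulaw_compress(sample_16bit):
    """Compress one signed 16-bit sample to an unsigned 8-bit code."""
    x = np.clip(sample_16bit / 32768.0, -1.0, 1.0)             # normalize to [-1, 1]
    y = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)   # companding curve
    return int(np.round((y + 1.0) / 2.0 * 255.0))              # quantize to 8 bits

def ulaw_expand(code_8bit):
    """Invert the companding to recover an approximate 16-bit sample."""
    y = code_8bit / 255.0 * 2.0 - 1.0
    x = np.sign(y) * ((1.0 + MU) ** np.abs(y) - 1.0) / MU
    return int(np.round(x * 32767.0))
```

The logarithmic curve allocates more of the 8-bit range to quiet samples, which is why speech survives the 2:1 size reduction with little perceptible loss.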

After sending each RF packet, the MCU expects to receive a back telemetry packet including one data frame and one optional audio frame, which depends on whether the uplink audio channel from the transceiver to the headset has been activated or not. The data frame contains control commands from the PC/smartphone to switch on/off the speech modality. The audio frame in the back telemetry packet contains digitized sound signals from the PC/smartphone. The MCU extracts the audio samples from the back telemetry packet and sends them to the playback DAC of the audio codec through the I2S interface to generate audible analog audio signals if the user attaches an earphone to the headset audio jack. The CC2510 MCU can handle an RF data rate of 500 kbps, which is sufficient for bi-directional data and audio transmission.

Power-management circuitry includes a miniaturized 130 mAh lithium-polymer battery, a voltage regulator, a low voltage detector and a battery charger. dTDS consumes either 6 or 35 mA from a 3 V supply depending on whether the bi-directional audio channel is off or on. This would allow the system to be used continuously for 20 or 4 hours in the uni-modal TDS or dTDS modes, respectively.

6.2.1.2 Wireless USB Transceiver

Figure 12 shows a prototype of the transceiver equipped with a USB port and two audio jacks to interface the magnetic sensor data and acoustic signals with the PC, respectively. The transceiver has two operating modes: handshaking and normal. The handshaking mode is similar to the TDS USB receiver, explained in section 4.3. The normal mode is slightly different. In this mode, the transceiver works like a bi-directional wireless gateway to exchange data and audio samples between the dTDS headset and the PC/smartphone. The magnetic data within the headset packets are extracted and delivered to the PC/smartphone through the USB port. The audio data, however, is streamed into a playback audio codec via its I2S interface and converted to an analog audio signal, which is then delivered to the microphone input of the PC/smartphone through a 3.5 mm audio jack (see Figure 12). The transceiver can also receive analog audio output from the PC/smartphone headphone jack and digitize it using the same audio codec and I2S interface. These audio samples are compressed using the CC2510 built-in μ-law compression hardware and packaged in an audio frame. The transceiver also receives data packets from the computer, which contain the dTDS operating parameters used to program the dTDS headset on the fly. The data packet is combined with the audio frame to form a back telemetry RF packet, which is then wirelessly sent back to the headset.

Table 1 summarizes some of the key features of the dTDS prototype.

Table 1

Dual-mode Tongue Drive System Hardware Specifications

Specification Value
Magnetic Tracer
Material Nd2Fe14B rare-earth magnet
Size (diameter × thickness) ∅3 mm × 1.6 mm
Residual magnetic strength 14,500 Gauss
Magnetic Sensors
Type Aichi Steel AMI306 MI sensor
Dimensions 2.0 × 2.0 × 1.0 mm³
Sensitivity / range 600 LSB/Gauss / ±300 μT
Microphone
Type SiSonic SPM0408HE5H
Dimensions 4.7 × 3.8 × 1.1 mm³
Sensitivity / SNR −22 dB / 59 dB
Control Unit
Microcontroller TI–CC2510 SoC
Wireless frequency / data rate 2.4 GHz / 500 kbps
Sampling rate 50 samples/s/sensor
Number of sensors / duty cycle 4 / 8%
Audio codec / interface TLV320AIC3204 / I2S
Audio sampling rate / resolution / compression 8 ksps / 16 bits / μ-law
Operating voltage / total current 3 V / 35 mA (audio on)
3 V / 6 mA (audio off)
Dimensions 36 × 16 mm²
Headset
Rapid prototyping material Objet VeroGray resin
Total weight 90 g (including battery)


7 Clinical Assessment

The performance of the eTDS prototype was evaluated by thirteen patients with high-level SCI (C2–C5) [68]. The trials consisted of two independent sessions: a computer access (CA) session and a PWC navigation (PWCN) session. This section reports the experimental procedure and the most important results from both sessions [68].

7.1 Subjects

Thirteen human subjects (four females and nine males) aged 18 to 64 years old with SCI (C2~C5) were recruited from the Shepherd Center inpatient (11) and outpatient (2) populations. Informed consent was obtained from all subjects. All trials were carried out in the SCI unit of the Shepherd Center with approvals from the Georgia Institute of Technology and the Shepherd Center IRBs.

7.2 Magnet Attachment

A new permanent magnet was sanitized using 70% isopropyl rubbing alcohol, dried, and attached to a 20 cm thread of dental floss using superglue. The upper surface of the magnet was softened by adding a layer of medical-grade silicone rubber (NuSil Technology) to prevent possible harm to the subjects’ teeth and gums. The subjects’ tongue surface was dried for better adherence, and the bottom of the magnet was attached to the subjects’ tongues, about 1 cm from the tip, using a dental adhesive. The other end of the dental floss was tied to the eTDS headset during the trials to prevent the tracer from being accidentally swallowed or aspirated if it detached from the subject’s tongue.

7.3 Command Definition

To facilitate command classification, subjects were advised to choose their tongue positions for different commands as diversely as possible. They were also asked to refrain from defining the TDS commands in the midline of the mouth (over the sagittal plane) because those positions are shared with the tongue’s natural movements during speech, breathing, and coughing. The recommended tongue positions were as follows: touching the roots of the lower-left teeth with the tip of the tongue for “left,” lower-right teeth for “right,” upper-left teeth for “up,” upper-right teeth for “down,” left cheek for “left-click,” and right cheek for “double-click.”

7.4 Training Session

During this session, a customized GUI prompted subjects to execute each command by turning on its associated indicator on the screen in 3 s intervals. Subjects were asked to issue the command by moving their tongue from its resting position to the corresponding command position when the command light was on, and returning it back to its resting position when the light went off. This procedure was repeated 10 times for the entire set of 6 commands plus the tongue resting position, resulting in a total of 70 data points.

7.5 Response Time Measurement

This experiment was designed to provide a quantitative measure of the TDS performance by measuring how quickly and accurately a command can be issued from the time it is intended by the user. This period, which is referred to as the TDS response time, T, along with the correct command selection probability within T, were used to calculate the information transfer rate (ITR) for the TDS, which is a widely accepted measure for evaluating and comparing the performance of different BCIs. The ITR indicates the amount of information that is communicated between a user and a computer within a certain time period. There are various definitions for the ITR. The one we used has been detailed in [39].
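As a point of reference, a commonly used (Wolpaw-style) formulation of the ITR, which may differ in detail from the exact definition adopted in [39], can be written for N commands, a probability P of correct selection, and a response time T (in seconds) as:

```latex
\mathrm{ITR} \;=\; \frac{60}{T}\left[\log_2 N \;+\; P\log_2 P \;+\; (1-P)\log_2\!\frac{1-P}{N-1}\right] \quad \text{bits/min}
```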

A dedicated GUI was developed for this experiment to randomly select 1 out of the 6 commands and turn its indicator on. Subjects were asked to issue the command within T, following an audiovisual cue [68]. The GUI also provided subjects with additional real-time visual feedback by changing the size of a bar associated with each command, indicating how close the tongue was to the position of that specific command. During the test, T was changed from 2 s to 1.5, 1.2, and 1.0 s, and 40 commands were issued at each time interval. The mean probability of correct choices (PCC) for each T was recorded.

7.5.1 Powered Wheelchair Navigation

In the PWCN session, subjects were transferred to a Q6000 powered wheelchair with a 12” laptop placed on a wheelchair tray in front of them, as shown in Figure 13(a). They were asked to define four commands (FD, BD, TR, TL) to control the wheelchair state vectors in addition to the tongue resting position (N) for stopping. Then they were required to navigate the wheelchair, using TDS, through an obstacle course, as fast as possible, while avoiding obstacles or running off the track. Three slightly different courses were utilized. However, they were all close to the layout shown in Figure 13(b). The average track length was 38.9±3.9 m with 10.9±1.0 turns.

image
Figure 13 (a) A subject with SCI at level C4, wearing the eTDS prototype and navigating a powered wheelchair through an obstacle course. (b) Plan of the powered wheelchair navigation track in the obstacle course showing dimensions, location of obstacles, and approximate powered wheelchair trajectory. (c) The GUI provides users with visual feedback on the commands that have been selected.

During the experiment, the laptop lid was initially opened to provide the subjects with visual feedback (shown in Figure 13(c)). However, later it was closed to help them see the track more easily. Subjects were required to repeat each experiment at least twice for discrete and continuous control strategies, with and without visual feedback. The navigation time, number of collisions, and number of issued commands were recorded for each trial.

7.5.2 Results

All subjects, including three who had very limited computer experience, managed to complete all the required CA and PWCN tasks without difficulty. Figure 14 shows subjects’ performance in completing the response time measurement task. On average, a reasonable PCC of 82% was achieved with T=1 s, yielding ITR=95 bits/min. Table 2 compares the response time, number of commands, and calculated ITR of a few tongue computer interfaces (TCIs) and BCIs that are reported in the literature. It can be seen that the TDS offers a much better ITR compared to BCIs and TCIs due to its rapid response time.

image
Figure 14 Response time measurement results: (a) Mean probability of correct choice (PCC) vs. response time, and (b) The eTDS information transfer rate (ITR) vs. response time.

Table 2

Comparison Between the Tongue Drive System and Other BCIs/TCIs*

Reference Type Number of Commands Response Time (s) ITR (Bits/Min)
[39] EEG-BCI 2–4 3–4 25
[69] TTK-TCI* 9 3.5 40
[57] TCI* 5 2.4 58
TDS TCI* 6 1.0 95


*TCI: Tongue Computer Interface

Figure 15 shows the results of the PWCN session, including the average navigation speed and number of collisions along with their 95% confidence intervals using different control strategies. In general, the continuous control strategy was much more efficient than the discrete control. Subjects consistently performed better without visual feedback by navigating faster with fewer collisions. These results demonstrate that subjects could easily remember and correctly issue the TDS tongue commands without requiring a computer screen in front of them, which may distract their attention or block their sight. Improved performance without visual feedback can also be attributed to the learning effect because it always followed the trials with visual feedback.

image
Figure 15 Average navigation speed and number of collisions for discrete and continuous control strategies, with and without visual feedback (VF).

8 Future Directions

The work performed to date has created a solid theoretical and technical basis for the development of the TDS in the context of wearable assistive technology. However, a considerable amount of work remains to be done before TDS can be accepted, used, and appreciated by its end users on a daily basis.

8.1 Intraoral TDS

An intraoral version of the TDS (iTDS) is under development [70]. In this new generation of TDS, the size of electronic components will radically shrink to the level that they can be hermetically sealed and embedded in a dental retainer, to be worn comfortably inside the mouth. The iTDS dental retainers can be customized to the users’ oral anatomy by orthodontists to firmly clasp to their teeth and reduce the range of displacements. The iTDS can significantly improve the reliability, performance, safety, and acceptability of this assistive technology by resolving the mechanical stability problem while being completely inconspicuous, hidden inside the mouth.

8.2 Multi-Modal TDS

The performance and end-user coverage of the current TDS will be further enhanced by adding other input modalities, such as head control using commercial motion sensors. In the current dTDS, the commands from the different modalities, e.g., tongue and voice commands, are used individually to operate their dedicated devices or complete dedicated tasks. In addition, these commands can be fused together to enrich the control of a single device and achieve higher control accuracy and bandwidth in demanding tasks, such as activating the numerous controls and shortcuts on a gaming console.

8.3 Data Compression and Sensor Fusion

The RF transceiver is the most power-hungry part of the TDS. In order to reduce power consumption and extend the battery lifetime of the TDS, the active time of the RF transceiver must be reduced. This can be achieved by compressing the sensor outputs (magnetometer, accelerometer, gyroscope, ambient light sensor, microphone, and camera) in software or with a hardware codec, thereby reducing the RF packet size and saving wireless transmission power. Alternatively, the data can be processed locally by a low-power digital signal processor (DSP) incorporated in the control unit to generate control commands. In that case, the RF transceiver only needs to transmit control commands instead of raw sensor output, resulting in a significant reduction of wireless transmission bandwidth and power. This, however, requires a highly efficient sensor fusion and processing algorithm to be implemented on the local DSP.
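
One simple software-only illustration of the first option is sketched below: consecutive low-rate sensor samples (e.g., 16-bit magnetometer readings) are delta-encoded before transmission, so the near-constant output produced while the tongue is at rest shrinks to a fraction of its raw size. The sample width, packet layout, and savings shown are assumptions for illustration, not the actual TDS firmware.

```python
# Illustrative delta-encoding of 16-bit sensor samples before RF transmission.
# Packet layout and sample width are assumptions, not the actual TDS format.
import struct
from typing import List

def delta_encode(samples: List[int]) -> bytes:
    """Encode samples as: first value (int16) + per-sample deltas.

    Small deltas (|d| <= 127) fit in 1 byte; larger ones use a 3-byte escape
    (the reserved byte 0x80 followed by a 2-byte signed delta).
    """
    out = bytearray(struct.pack("<h", samples[0]))
    prev = samples[0]
    for s in samples[1:]:
        d = s - prev
        if -127 <= d <= 127:
            out += struct.pack("<b", d)            # 1-byte delta
        else:
            out += b"\x80" + struct.pack("<h", d)  # escape + 2-byte delta
        prev = s
    return bytes(out)

# Near-constant sensor output (tongue at rest) compresses well:
raw = [1000, 1001, 1001, 999, 1000, 1000, 1002, 1001]
packed = delta_encode(raw)
print(len(raw) * 2, "bytes raw ->", len(packed), "bytes encoded")
```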

8.4 SSP Algorithm Improvement

The current SSP algorithm will be optimized to improve command classification accuracy and to solve the “junk command” problem, i.e., random unintended commands that are sometimes issued as the user moves his or her tongue from one command position to another. New SSP algorithms, such as those based on support vector machines (SVMs), should be explored and evaluated to increase the number of tongue commands, possibly from six (coarse mode) to twelve (fine mode). Proportional control should also be explored and added to the current TDS to provide end users with easier, smoother, and more natural control over a computer mouse cursor or a powered wheelchair.
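
A minimal sketch of such an SVM-based classifier, with a simple confidence threshold for rejecting junk commands, is given below; the feature dimensionality, command labels, placeholder training data, and threshold are illustrative assumptions rather than the published SSP algorithm.

```python
# Sketch of SVM-based tongue-command classification with junk rejection.
# Feature layout, labels, and the 0.6 threshold are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

N_FEATURES = 12  # e.g., four 3-axis magnetometer outputs, flattened
COMMANDS = ["left", "right", "up", "down", "select", "neutral"]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(600, N_FEATURES))        # placeholder training features
y_train = rng.integers(0, len(COMMANDS), size=600)  # placeholder labels

clf = SVC(kernel="rbf", probability=True)  # probability=True enables predict_proba
clf.fit(X_train, y_train)

def classify(sample: np.ndarray, threshold: float = 0.6) -> str:
    """Return a command, or 'junk' if the classifier is not confident enough."""
    probs = clf.predict_proba(sample.reshape(1, -1))[0]
    best = int(np.argmax(probs))
    return COMMANDS[best] if probs[best] >= threshold else "junk"

# With random placeholder data the class probabilities stay near uniform,
# so most samples are rejected as junk; real magnetometer features would not be.
print(classify(rng.normal(size=N_FEATURES)))
```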

8.5 Environmental Control

The TDS can also be used as an input device for electronic aids to daily living (EADLs), also known as environmental control units (ECUs), to interact with and manipulate electronic appliances such as a television, radio, CD player, lights, and fans in a smart home environment. Commercially available EADL devices receive their control commands from a central controller, e.g., a computer, a touch-screen terminal, or simply an array of switches, and then communicate with the remote devices through RF, infrared, ultrasonic, or power-line links using the widely accepted X10 protocol. In the TDS, the PC or smartphone that runs the SSP algorithm can communicate with the EADL devices through a USB or wireless link after converting TDS commands into a format that these commercial devices recognize. In this way, a new set of environmental control functions can be added to the TDS with minor modifications.
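
A minimal sketch of such a bridge is shown below: TDS commands are mapped to device addresses and actions and handed to a placeholder sender. The command names, X10 addresses, and the send_x10 function are hypothetical, since the actual integration depends on the interface of the specific commercial EADL controller.

```python
# Illustrative bridge from TDS commands to X10-style environmental control.
# Command names, addresses, and send_x10 are hypothetical placeholders;
# a real system would use the interface of the specific EADL/ECU controller.
X10_MAP = {
    "up":    ("A", 1, "ON"),    # e.g., living-room light on
    "down":  ("A", 1, "OFF"),
    "left":  ("A", 2, "ON"),    # e.g., fan on
    "right": ("A", 2, "OFF"),
}

def send_x10(house: str, unit: int, action: str) -> None:
    """Placeholder: forward an X10 command to the controller (serial/USB/RF)."""
    print(f"X10 -> house {house}, unit {unit}, {action}")

def handle_tds_command(cmd: str) -> None:
    """Translate a TDS command into an environmental-control action, if mapped."""
    target = X10_MAP.get(cmd)
    if target is not None:
        send_x10(*target)

handle_tds_command("up")      # X10 -> house A, unit 1, ON
handle_tds_command("select")  # unmapped commands are ignored
```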

References

1. Christopher and Dana Reeve Foundation. One degree of separation: Paralysis and spinal cord injury in the United States, Available from: <http://www.christopherreeve.org/site/c.ddJFKRNoFiG/b.5091685/k.58BD/One_Degree_of_Separation.htm>. (Last Accessed: 10.07.14).

2. National Institute of Neurological Disorders and Stroke (NINDS), NIH. Spinal cord injury: Hope through research. Available from: <http://www.ninds.nih.gov/disorders/sci/detail_sci.htm>. (Last Accessed: 10.07.14).

3. The US technology-related assistance for individuals with disabilities act of 1988, Section 3.1. Public Law 100-407. (Aug. 1988, renewed in 1998 in the Clinton Assistive Technology Act.) Available from: <http://section508.gov/index.cfm?fuseAction=AssistAct>. (Last Accessed: 10.07.14).

4. Carlson D, Ehrlich N. Assistive technology and information technology use and need by persons with disabilities in the United States. Report of the U.S. Department of Education, National Institute on Disability and Rehabilitation Research, Washington, DC; 2005.

5. Cook AM, Polgar JM. Cook and Hussey’s Assistive Technologies: Principles and Practice. 3rd ed. St. Louis: Mosby; 2007.

6. Scherer MJ. Living in the State of Stuck: How Assistive Technology Impacts the Lives of People with Disabilities. 4th ed. MA: Brookline Books; 2005.

7. Therafin Corp. Sip-N-Puff. Available from: <http://www.therafin.com/sipnpuff.htm>. (Last Accessed: 10.07.14).

8. Origin Instruments Corp. Sip and Puff Switch. Available from: <http://www.orin.com/access/sip_puff/>. (Last Accessed: 10.07.14).

9. Pereira C, Neto R, Reynaldo A, Luzo M, Oliveira R. Development and evaluation of a head-controlled human-computer interface with mouse-like functions for physically disabled users. Clin Sci. 2009;64:975–981.

10. Boost Technology. Boost Tracer. Available from: <http://www.boosttechnology.com/>. (Last Accessed: 10.07.14).

11. Anson D, Lawler G, Kissinger A, Timko M, Tuminski J, Drew B. The efficacy of three head pointing devices for a mouse emulation task. Assist Tech. 2002;14:140–150.

12. Origin Instruments Corp. Headmouse Extreme. Available from: <http://www.orin.com/access/headmouse>. (Last Accessed: 10.07.14).

13. Madentec Limited. Tracker Pro Wireless Head Tracking. Available from: <http://www.ablenetinc.com/Assistive-Technology/Computer-Access/TrackerPro>. (Last Accessed: 10.07.14).

14. Natural Point. TrackIR. Available from: <http://www.naturalpoint.com/trackir>. (Last Accessed: 10.07.14).

15. Camera Mouse. CameraMouse. Available from: <http://www.cameramouse.org>. (Last Accessed: 10.07.14).

16. Adaptive Switch Labs Inc. ASL Head Array. Available from: <http://www.asl-inc.com/products/product_detail.php?prod=103>. (Last Accessed: 10.07.14).

17. Magitek.com., LLC. Magitek Human Interface Drive Controls. Available from: <http://www.magitek.com>. (Last Accessed: 10.07.14).

18. Craig DA, Nguyen HT. Wireless real-time head movement system using a personal digital assistant (PDA) for control of a power wheelchair. Proc IEEE Eng Med Biol Conf. 2005;772–775.

19. Chen YL, Tang FT, Chang WH, Wong MK, Shih YY, Kuo TS. The new design of an infrared-controlled human–computer interface for the disabled. IEEE Trans Rehab Eng. 1999;7:474–481.

20. Babcock JS, Pelz JB. Building a lightweight eye tracking headgear. Proc 2004 Symp Eye Track Res Appl. 2004;109–114.

21. SensoMotoric Instruments. IVIEW X™ HED. Available from: <http://www.smivision.com/en/gaze-and-eye-tracking-systems/products/iview-x-hed.html>. (Last Accessed: 10.07.14).

22. Eyetech Digital System, Mesa, AZ. Available from: <http://www.eyetechds.com/assistivetech/products/qg3.htm>.

23. Barea R, Boquete L, Mazo M, Lopez E. System for assisted mobility using eye movements based on electrooculography. IEEE Trans Rehab Eng. 2002;10:209–218.

24. Law C, Leung M, Xu Y, Tso S. A cap as interface for wheelchair control. IEEE/RSJ Intl Conf Intell Robots Syst. 2002;2:1439–1444.

25. Chen Y, Newman WS. A human-robot interface based on electrooculography. Proc Int Conf Robot Automation. 2004;1:243–248.

26. Bulling A, Roggen D, Troster G. Wearable EOG goggles: Seamless sensing and context-awareness in everyday environments. J Ambient Intell Smart Environ (JAISE). 2009;1:157–171.

27. Jacob R. The use of eye movements in human-computer interaction techniques: what you look at is what you get. ACM Trans Info Syst (TOIS). 1991;9:152–169.

28. Chin C, Barreto A, Gremades G, Adjouadi M. Integrated electromyogram and eye-gaze tracking cursor control system for computer users with motor disabilities. J Rehabil Res Dev. 2008;45:161–174.

29. Chang G, Kang W, Luh J, et al. Real-time implementation of electromyogram pattern recognition as a control command of man-machine interface. Med Eng Phys. 1996;18:529–537.

30. Moon I, Lee M, Chu J, Mun M. Wearable EMG-based HCI for electric-powered wheelchair users with motor disabilities. Proc Intl IEEE Conf Robot Automation. 2005;2649–2654.

31. Felzer T, Nordman R. Alternative wheelchair control. Proc Intl IEEE-BAIS Symp Res on Assistive Tech. 2007;67–74.

32. Music J, Cecic M, Bonkovic M. Testing inertial sensor performance as hands-free human-computer interface. WSEAS Trans Comput. 2009;8:715–724.

33. Nuance. Dragon voice recognition software. Available from: <http://www.nuance.com>. (Last Accessed: 10.07.14).

34. Talking Desktop Software. Talking Desktop Voice Recognition Software. Available from: <http://www.talkingdesktop.com>. (Last Accessed: Oct. 2013).

35. Harada S, Landay JA, Malkin J, Li X, Bilmes JA. The vocal joystick: evaluation of voice-based cursor control techniques. Proc ACM Conf Comput Accessibility – CHI 2006;197–204.

36. Simpson R, Levine S. Voice control of a powered wheelchair. IEEE Trans Rehab Eng. 2002;10:122–125.

37. Pacnik G, Benkic K, Brecko B. Voice operated intelligent wheelchair – VOIC. Proc ISIE. 3:1221–1226.

38. Vidal JJ. Toward direct brain–computer communication. Annu Rev Biophys Bioeng. 1973;2:157–180.

39. Wolpaw JR, Birbaumer N, McFarland DJ, Pfurtscheller G, Vaughan TM. Brain-computer interfaces for communication and control. Clin Neurophysiol. 2002;113:767–791.

40. Moore MM. Real-world applications for brain-computer interface technology. IEEE Trans Rehabil Eng. 2003;11:162–165.

41. McFarland DJ, Krusienski DJ, Sarnacki WA, Wolpaw JR. Emulation of computer mouse control with a noninvasive brain–computer interface. J Neural Eng. 2008;5:101–110.

42. Choi K, Cichocki A. Control of a wheelchair by motor imagery in real time. Lect Notes Comput Sci. 2008;5326:330–337.

43. Bogue R. Brain-computer interfaces: control by thought. Ind Robot Int J. 2010;37:126–132.

44. Coyle S, Ward T, Markham C, McDarby G. On the suitability of near-infrared (NIR) systems for next-generation brain–computer interfaces. Physiol Meas. 2004;25:815–822.

45. Chi YM, Wang Y-T, Wang Y, Maier C, Jung T-P, Cauwenberghs G. Dry and noncontact EEG sensors for mobile brain–computer interfaces. IEEE Trans Neural Sys Rehab Eng. 2012;20:228–235.

46. Hochberg LR, et al. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature. 2006;442:164–171.

47. Donoghue JP. Bridging the brain to the world: a perspective on neural interface systems. Neuron. 2008;60:511–521.

48. Schalk G, et al. Two-dimensional movement control using electrocorticographic signals in humans. J Neural Eng. 2008;5:74–83.

49. Velliste M, Perel S, Spalding MC, Whitford AS, Schwartz AB. Cortical control of a prosthetic arm for self-feeding. Nature. 2008;453:1098–1101.

50. Kennedy PR, Andreasen D, Ehirim P, et al. Using human extra-cortical local field potentials to control a switch. J Neural Eng. 2004;1:72–77.

51. TongueTouch Keypad™ (TTK), New Abilities Inc., [Online]. Available from: <http://www.newabilities.com/>. (Last Accessed: 10.07.14).

52. Nutt W, Arlanch C, Nigg S, Staufert G. Tongue-mouse for quadriplegics. J Micromech Microeng. 1998;8:155–157.

53. Salem C, Zhai S. An isometric tongue pointing device. Proc CHI. 1997;97:22–27.

54. USB Integra Mouse, Tash Inc., [Online]. Available from: <http://www.tashinc.com/catalog/ca_usb_integra_mouse.html>.

55. Vaidyanathan R, Chung B, Gupta L, Kook H, Kota S, West JD. A tongue-movement communication and control concept for hands-free human-machine interfaces. IEEE Trans Systems, Man Cybern A Syst Hum. 2007;37:533–546.

56. Saponas S, Kelly D, Parviz BA, Tan DS. Optically sensing tongue gestures for computer input. Proc ACM Symp User Interface Softw Technol. 2009;177–180.

57. Struijk LNSA. An inductive tongue computer interface for control of computers and assistive devices. IEEE Trans Biomed Eng. 2006;53:2594–2597.

58. Kandel ER, Schwartz JH, Jessell TM. Principles of Neural Science. 4th ed. New York: McGraw-Hill; 2000.

59. Huo X, Wang J, Ghovanloo M. A magneto-inductive sensor based wireless tongue-computer interface. IEEE Trans Neural Syst Rehabil Eng. 2008;16:497–504.

60. Wang J, Huo X, Ghovanloo M. A modified particle swarm optimization method for real-time magnetic tracking of tongue motion. Proc IEEE 30th Eng Med Biol Conf. 2008.

61. Huo X, Park H, Kim J, Ghovanloo M. A dual-mode human computer interface combining speech and tongue motion for people with severe disabilities. IEEE Trans Neural Syst Rehabil Eng. 2013.

62. Park H, Kim J, Huo X, Wang IO, Ghovanloo M. New ergonomic headset for tongue-drive system with wireless smartphone interface. Proc 33rd IEEE Eng Med Biol Conf. 2011;7344–7347.

63. Huo X, Wang J, Ghovanloo M. A wireless tongue-computer interface using stereo differential magnetic field measurement. Proc 29th IEEE Eng Med Biol Conf. 2007;5723–5726.

64. Sadeghian EB, Huo X, Ghovanloo M. Command detection and classification in tongue drive assistive technology. Proc IEEE 33rd Eng Med Biol Conf. 2011;5465–5468.

65. Keates S, Robinson P. The use of gestures in multimodal input. Proc 3rd Intl ACM Conf Assist Tech. 1998;35–42.

66. Shein F, Brownlow N, Treviranus J, Parnes P. Climbing out of the rut: The future of interface technology. Proc of the Visions Conf.: Augmentative and Alternative Comm in the Next Decade. Wilmington, DE: University of Delaware/Alfred I. duPont Institute; 1990.

67. Baljko M. The contrastive evaluation of unimodal and multimodal interfaces for voice output communication aids. Proc 7th Intl Conf Multimodal Interfaces 2005;301–308.

68. Huo X, Ghovanloo M. Evaluation of a wireless wearable tongue–computer interface by individuals with high-level spinal cord injuries. J Neural Eng. 2010;7:497–504.

69. Lau C, O’Leary S. Comparison of computer interface devices for persons with severe physical disabilities. Am J Occup Ther. 1993;47:1022–1030.

70. Park H, et al. A wireless magnetoresistive sensing system for an intraoral tongue-computer interface. IEEE Trans Biomed Circ Syst. 2012;6:571–585.

71. Huo X, Wang J, Ghovanloo M. Using unconstrained tongue motion as an alternative control mechanism for wheeled mobility. IEEE Trans Biomed Eng. 2009;56:1719–1726.

72. Cheng C, Huo X, Ghovanloo M. Towards a magnetic localization system for 3-D tracking of tongue movements in speech-language therapy. Proc IEEE 31st Eng Med Biol Conf. 2009;563–566.

73. Huo X, Wang J, Ghovanloo M. A magnetic wireless tongue-computer interface. Proc Intl IEEE EMBS Conf Neural Eng. 2007;322–326.

74. Smith A, Dunaway J, Demasco P, Peichl D. Multimodal input for computer access and alternative communication. Proc 2nd Annu ACM Conf Assist Tech. 1996;80–85.
