Chapter 2
Evolution of HiTL Technologies

As we discussed in the introductory chapter of this book, the technological progress of the last few decades has been particularly remarkable. Still, how did all of this come to be? First, in Section 2.1, we will see how researchers began to address and understand the scope of simple “things”, and we will subsequently consider work that targeted more complex and larger environments. Finally, in Section 2.2, we will discuss more recent examples of the monitoring of human beings.

2.1 “Things”, Sensors, and the Real World

The linkage of physical objects and sensors to the Internet, and their integration with web and enterprise applications, has long been considered in the literature. As previously mentioned, early works began by proposing the use of physical tokens (such as barcodes or electronic tags) to relate objects to the web. An example of such an approach was described in [1], as Figure 2.1 illustrates. From these early days, researchers were concerned with the heterogeneity and complexity of the available network protocols. In the world of pervasive computing, “client devices” like PDAs were used to access services on “server devices” like printers, light switches, or smart home appliances. Because such server devices must be extremely small and inexpensive, they typically consist of low-performance microcontrollers with only a few kilobytes of memory, and thus have very limited computation, memory, and power capabilities. Nevertheless, they are expected to support complex ad hoc networking protocols and to handle computationally heavy communication tasks, such as parsing and generating XML messages. In an attempt to solve this problem, Shaman [2], a Java-based service gateway, was proposed. As shown in Figure 2.2, it worked as a network proxy that supported various standards for ad hoc networking, allowing for the integration of small, limited, and low-power server devices (known as LiteServers) into heterogeneous networking communities.

Figure 2.1 In [1], books and other common objects were augmented with RFID tags and associated with virtual documents by PDAs.

Figure 2.2 Shaman [2] acted as a representative for the connected LiteServers, offering Java and HTML interfaces.
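
The gateway pattern at the heart of Shaman remains a common way of bringing constrained devices onto the web. The following is a minimal sketch of the idea in Python, assuming a hypothetical LiteServer-like device that answers a one-byte binary command over TCP; the device address, port, and opcode are invented for illustration and are not taken from [2].

```python
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

LITESERVER_ADDR = ("192.168.1.50", 7000)  # hypothetical device endpoint
CMD_READ_STATE = b"\x01"                  # hypothetical one-byte opcode

class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Translate the web request into a tiny binary query that a
        # few-kilobyte device can afford to parse
        with socket.create_connection(LITESERVER_ADDR, timeout=2) as conn:
            conn.sendall(CMD_READ_STATE)
            raw = conn.recv(8)
        # Render the device's reply as HTML on its behalf
        body = f"<html><body>Device state: {raw.hex()}</body></html>".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), GatewayHandler).serve_forever()
```

The heavy lifting (HTTP parsing, HTML generation) happens on the gateway, while the device only speaks a protocol it can handle.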

Another interesting advancement in this area was achieved by the Cooltown project [3], which provided an infrastructure for “nomadic computing”, a term used by the authors to describe human interaction with mobile and ubiquitous devices. In the Cooltown project, the authors intended to push web technology into common digital appliances such as printers, radios, and automobiles, as well as into non-electronic things like CDs, books, and paintings, connecting each “thing” to a web “presence” (see Figure 2.3). The web presence extended the concept of a web page to every physical entity: essentially, a page with information and services for every entity of the physical world. The authors designed the “Cooltown Museum” test environment, where they implemented two different methods of web presence recognition. One method used infrared beacons that supplied PDAs with the URL of the corresponding point of web presence. The other used tag identifiers, which were sensed by the PDAs and sent to a service that maintained a collection of bindings from identifiers to URLs and returned the corresponding URL.

Figure 2.3 Device web presence in Cooltown [3]. Source: Adapted from Kindberg et al. 2002.

The success of Web 2.0 and the advent of web services and associated technologies like SOAP (Simple Object Access Protocol) and WSDL (Web Services Description Language) brought brand-new approaches to the integration of devices and sensor networks. In this vein, the “web of things” vision is concerned with providing a concrete architecture where actuators and embedded devices expose their data and functionality as an integral part of the web.

The open-source Project JXTA tried to specify a standard set of protocols for ad hoc, pervasive, peer-to-peer (P2P) computing as a foundation for the web of things. It standardized a thin and generic network protocol layer on which a wide variety of P2P applications could be built, where each peer benefits from being connected to a high number of other peers, and where information is shared and maintained among the community of embedded devices. The network protocol itself is independent of software and hardware platforms, and defines a virtual network on top of the existing physical network infrastructure in order to hide the complexity of the underlying physical protocols (see Figure 2.4). Thus, as long as the JXTA protocols are implemented on a given platform, all peers in the network are able to communicate with the members of that platform. The members of the JXTA initiative ported the protocols to several platforms; namely, the JXTA-C project delivered a small and efficient implementation of the JXTA protocols that could be used on small-memory embedded devices without requiring any proxy servers [4].

Figure 2.4 JXTA [4] peers created virtual ad hoc networks which served to abstract the real ones.

Another approach was Tiny Web Services [46], a system that deployed web services technology on resource-constrained sensor nodes, allowing their functionality and data to be directly accessed by multiple applications. Data could be carried in SOAP-formatted packets, while web service bindings were defined through XML and WSDL.
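
To give a sense of what this web services machinery looked like on the wire, here is a hedged sketch of the kind of SOAP request a client might send to a sensor node, using only the Python standard library; the namespace, operation name, and endpoint are hypothetical and are not taken from [46].

```python
import urllib.request

# A minimal SOAP 1.1 request envelope for a hypothetical sensor operation
SOAP_BODY = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetTemperature xmlns="http://example.org/sensor"/>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    "http://sensor-node.local/ws",  # hypothetical node endpoint
    data=SOAP_BODY.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.org/sensor/GetTemperature"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode("utf-8"))  # SOAP response envelope (also XML)
```

Even this trivial request carries a substantial amount of XML framing before any useful payload appears, which is precisely the overhead criticized in [47].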

However, some members of the scientific community felt that this type of approach was not ideal, arguing that both the interface definition (WSDL) and the messages (SOAP) were too complex for devices with limited capabilities, and also that the overall system was not truly loosely coupled [47]. Instead of relying on proprietary and tightly coupled systems, researchers then applied REST principles to embedded devices, allowing the use of web languages like HTML, JavaScript, and PHP to create novel web applications (see Figure 2.5). With this approach, interacting with a sensor node becomes as easy as typing a URI into a web browser, and traditional web mechanisms such as browsing, searching, and bookmarking can be applied to embedded devices [5].
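
By contrast, under the RESTful approach the same reading is a single HTTP GET on a resource URI. A minimal sketch, assuming a hypothetical node (or proxy) that serves JSON; the URI and field names are invented for illustration:

```python
import json
import urllib.request

# One GET on a resource URI retrieves the sensor's current state; the
# same URI can be typed into a browser, bookmarked, or linked from a page.
with urllib.request.urlopen("http://node42.example.org/sensors/temperature") as resp:
    reading = json.load(resp)

print(reading["value"], reading["unit"])  # e.g. 21.5 C
```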

Several other projects focused on REST principles for sensor integration with the web. One of these was the sMAP project [6], which presented an architecture, specifications, and implementations of a simple monitoring and action profile (sMAP) that promoted data interoperability between sensors and actuators in building environments and the Internet. The sMAP architecture allowed clients to communicate with embedded devices in buildings through the Internet. In order to support resource-constrained devices, this communication depended on several proxies that compressed and decompressed data between IP end points. The proposed architecture was built on HTTP/REST and used JSON as the object interchange format. The authors applied the architecture to several resource monitors and actuators inside a commercial building, including mote-based wireless sensors. Overall, the authors believed that the REST-based approach was widely implementable and efficient, while the communication API definitions were expressive and concise. They also believed that the use of proxies was ideally suited for resource-constrained embedded devices.

Figure 2.5 Works such as [5] and [6] used proxies to offer embedded devices' capabilities through RESTful web services.
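
The conciseness of JSON as an interchange format is part of what made sMAP attractive for constrained devices. The snippet below sketches a JSON time-series object in the spirit of sMAP; the field names and resource path are illustrative and do not follow the exact sMAP profile.

```python
import json

# A JSON time-series reading of the kind an sMAP-style source might expose
reading = {
    "path": "/building7/elec/meter0",     # hypothetical resource path
    "unit": "kW",
    "readings": [[1276020000000, 12.4],   # [timestamp in ms, value]
                 [1276020060000, 12.7]],
}

wire = json.dumps(reading)  # compact, human-readable, trivially parsed
print(len(wire), "bytes on the wire")
print(json.loads(wire)["readings"][0])
```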

More recently, the Internet Engineering Task Force (IETF) Constrained RESTful Environments (CoRE) Working Group has standardized the Constrained Application Protocol (CoAP). CoAP is a web transfer protocol for use with constrained nodes with very few resources. The protocol is designed for machine-to-machine (M2M) applications such as smart energy and building automation. According to its standards track, “CoAP provides a request/response interaction model between application endpoints, supports built-in discovery of services and resources, and includes key concepts of the Web, such as URIs and Internet media types. CoAP is designed to easily interface with HTTP for integration with the Web, while meeting specialized requirements such as multicast support, very low overhead, and simplicity for constrained environments” [48].
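
As a rough illustration, the snippet below performs a CoAP GET using aiocoap, one of several Python CoAP implementations; the resource URI is hypothetical. Unlike the HTTP examples above, the request travels over UDP with a compact binary header, which is what makes the protocol viable on constrained nodes.

```python
import asyncio
from aiocoap import Context, Message, GET

async def main():
    # Create a client context and issue a GET on a (hypothetical) resource
    ctx = await Context.create_client_context()
    request = Message(code=GET, uri="coap://sensor-node.example.org/temperature")
    response = await ctx.request(request).response
    print(response.code, response.payload.decode())

asyncio.run(main())
```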

These advances in sensor integration led to the widespread use of IoT devices and wireless sensor nodes, which soon evolved beyond “things” and became a doorway between virtual environments and real-world information. This sensing data has tremendous potential, particularly when we consider the power of crowdsourcing. This is evidenced by the number of organizations that freely open and share data with their users. The MTA [49], for example, provides open transit-related data for the development of applications. Also, the OpenDataBCN [50] open data portal shares the city of Barcelona's data regarding geography, demography, economy, city services and utilities, and administration. The cities of Toronto [51], Edmonton [52], Ottawa [53], and Vancouver [54] have also joined forces to collaborate on an “open data framework” initiative that openly offers city-related data sets to users.

Several initiatives are dedicated to applying all of this sensory information from real-world locations to virtual representations of those locations.

One example is SenseWeb [7], a scalable infrastructure for sharing sensing resources among sensor owners and exploring geocentric sensor data streams. This infrastructure offered a web-based front-end, called SensorMap, which enabled users to visualize the sensor data streams on an interactive map. Instead of depending on closed and monolithic solutions, sensor deployments shared their data to make their resources re-usable by other systems or concurrently used by multiple entities. The environment map was a virtual representation of real-world locations, allowing users to analyze environmental phenomena through the combination of multiple sensor streams dispersed in space. The SenseWeb proposal tackled two main problems. First, it succeeded in combining information from groups of heterogeneous sensors that differed in resources, mobility, and network connectivity. The solution used an open and extensible architecture that relied on remote sensor gateways to host the different sensor data streams, exposed through uniform interfaces. These gateways communicated with a coordinator, which served as a common point of access for the various sensor contributors and for applications to gain access to the available data (see Figure 2.6). The second problem tackled by SenseWeb was the management of scalability when dealing with large amounts of geographical data. To this end, the authors provided several techniques for caching computationally expensive visualizations derived from sensor data and for efficiently reusing them to serve user queries. The various components of the SenseWeb system exposed their functionality through a set of web service API interfaces that allowed applications developed on different platforms to access SenseWeb data.

Figure 2.6 The SenseWeb [7] architecture.
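
To make the gateway/coordinator split concrete, here is a minimal sketch of the architecture in plain Python; the class and method names are invented for illustration (the real system exposed this functionality through web service APIs).

```python
class SensorGateway:
    """Hosts heterogeneous sensor streams behind one uniform interface."""
    def __init__(self, name):
        self.name = name
        self._sensors = {}  # sensor_id -> {"read": callable, "loc": (lat, lon)}

    def register(self, sensor_id, read_fn, lat, lon):
        self._sensors[sensor_id] = {"read": read_fn, "loc": (lat, lon)}

    def read(self, sensor_id):
        return self._sensors[sensor_id]["read"]()

    def sensors_in(self, lat_range, lon_range):
        for sid, s in self._sensors.items():
            lat, lon = s["loc"]
            if lat_range[0] <= lat <= lat_range[1] and lon_range[0] <= lon <= lon_range[1]:
                yield sid

class Coordinator:
    """Common access point: fans a geographic query out to every gateway."""
    def __init__(self):
        self.gateways = []

    def query_area(self, lat_range, lon_range):
        for gw in self.gateways:
            for sid in gw.sensors_in(lat_range, lon_range):
                yield gw.name, sid, gw.read(sid)

# Two independent deployments become one queryable "sensor web"
campus = SensorGateway("campus")
campus.register("temp-1", lambda: 21.5, 47.64, -122.13)
coordinator = Coordinator()
coordinator.gateways.append(campus)
print(list(coordinator.query_area((47.0, 48.0), (-123.0, -122.0))))
```

Applications only ever talk to the coordinator, so new sensor contributors can join by standing up a gateway, without any change on the application side.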

The work in [55] introduces the concept of “reality mining”: the data mining of sensor streams that monitor specific environments. The manipulation of massive amounts of sensory data can be used in detection and actuation systems, enabling users to exploit sensor data in valuable ways. The authors designed a prototype of a sensor information system that combined geographic information systems software, mission planning/terrain visualization systems, and sensor networks with a photo-realistic 3D visualization of the prototype's environment. They used the prototype to propose several systems in which sensors and virtual representations would be useful. These propositions included a fire-detection system that used sensors to help anticipate the initial spread of a fire, virtual tourism, and a live view of stock price changes rendered as clouds over a 3D map.

Other research projects and companies focused on the use of sensors and intelligent devices in an urban context. For example, the Urban Sensing research project [56] sought to develop cultural and technological approaches that use mobile and embedded sensing to enrich civic life. The idea was that many ubiquitous sensors suitable for urban sensing are already deployed, and that mobile phones can provide sounds and imagery to complement them. Thus, users will have access to a great diversity of sensors in future urban settings, allowing them to know more about their homes, neighborhoods, and communities. By sharing and cross-referencing sensed data with publicly available data from private and municipal monitoring systems, a user can access information about the city, such as traffic, weather, air quality, and pedestrian flow.

Sense Networks, Inc. was a company that indexed real-world data, using real-time and historical location data for predictive analytics across multiple industries (it has since been acquired by YP Marketing Solutions [57]). Sense Networks developed machine learning technology that indexed and ranked real-world places based on movement data between these places at different times. This movement and location data was collected in real time from devices with GPS or WiFi positioning technology, such as mobile phones and automobiles. This information was used to create applications that could build profiles of various locations within a city and use them to better understand visitors and anticipate their needs. Earlier products included CitySense, a local nightlife discovery and social navigation platform, and CabSense, an application that helped users find available taxicabs near them.

WikiCity [8] was a project with the objective of developing a platform that would allow an entire city to become a real-time control system. In order to achieve this, the platform required sensors able to acquire information about several aspects of the city, intelligent mechanisms to evaluate the performance of the system, and physical actuators to perform actions on the system (see Figure 2.7). One interesting aspect of this project was that it considered the city's own inhabitants to be actuators. The platform was capable of storing and exchanging data with users through mobile devices and web interfaces, and it enabled people to “become distributed intelligent actuators, which pursue their individual interests in cooperation and competition with others, and thus become prime actors themselves in improving the efficiency of urban systems”.

Figure 2.7 WikiCity [8] interfaced between virtual data and the physical world through a semantically defined format for data exchange.

2.2 Human Sensing and Virtual Communities

More recently, some researchers began to focus on the human side of sensing. Many of these works also presented a strong social networking component.

One example is the work of Lifton et al. at the Massachusetts Institute of Technology Media Laboratory, who coined the term “dual reality” [58] [59] to describe the merging of real and virtual realities through sensor networks. They designed several prototypes in which they performed experiments in merging a real-world location, the MIT Media Lab's third floor, with virtual worlds, in this case Second Life®. One of these prototypes is described in [60], in which the authors present the ShadowLab, a Second Life® map of the Media Lab's third floor animated by data collected from a network of several sensor/actuator nodes. The sensor nodes used in ShadowLab could sense light, vibration, sound, motion, and temperature, and could measure the amount of alternating current drawn from each outlet. They also hosted a low-power radio for wireless communication. The data provided by these sensors was animated in the virtual environment in an engaging way that naturally suggested the sensed stimuli. This was accomplished by resorting to virtual objects called DataPonds, which changed appearance depending on the activity level in their respective locations. The authors also began to experiment with virtual/real-world interaction by allowing Second Life® avatars to interact with virtual objects that would play audio clips through speakers, and by designing physical versions of DataPonds that would be stimulated by avatar motion in a particular region of ShadowLab. They also used ShadowLab to experiment with user avatar transformations based on real-world data, as avatars could “metamorphose” according to the activity level outside the corresponding user's office.

Another dual reality implementation described in [60] was the Ubiquitous Sensor Portal, a device designed for two-way cross-reality experience and communication. These portals streamed information in both directions, from the user's environment to ShadowLab, and from ShadowLab to the real world. The portals hosted several environmental sensors that measured motion, light, sound level, vibration, temperature, and humidity. They could also communicate with a family of badges designed to identify individuals facing them and capture audio and video. Because the portals could stream private data, an important requirement was to manage privacy. To wirelessly mediate privacy settings, a system of user badges was implemented in which each badge beaconed a unique ID. Portals knew which badges were potentially in sensor capture range and controlled data access according to the badge user's preferences.
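
A minimal sketch of this badge-mediated privacy logic follows, assuming invented badge IDs, stream names, and preference structures: the portal streams only what every badge owner within sensor range permits.

```python
ALL_STREAMS = {"audio", "video", "motion", "temperature"}

# Hypothetical per-badge privacy preferences (streams the owner allows)
badge_preferences = {
    "badge-017": {"motion", "temperature"},           # owner blocks audio/video
    "badge-042": {"audio", "motion", "temperature"},  # owner blocks video only
}

def permitted_streams(badges_in_range):
    """Stream only what every person in sensor-capture range allows."""
    allowed = set(ALL_STREAMS)
    for badge_id in badges_in_range:
        # An unknown badge grants nothing, failing safe on privacy
        allowed &= badge_preferences.get(badge_id, set())
    return allowed

print(permitted_streams(["badge-042"]))               # audio, motion, temperature
print(permitted_streams(["badge-017", "badge-042"]))  # motion, temperature only
```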

Sensor nodes have also been used as a means of transmitting people's mobility into virtual worlds. In [61], a framework is proposed that maps a sensor node to an object in Second Life®. The location of the sensor node is calculated by the framework and reflected on an avatar in Second Life®, which moves according to the real-world movement of the node. The node's location is calculated from the received signal strength indication (RSSI) values from three or more fixed reference nodes, thus requiring a carefully designed WSN architecture.
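
A hedged sketch of this kind of localization: RSSI is first converted to distance with a log-distance path-loss model, and the node position is then estimated from three fixed references by linearizing the circle equations and solving in the least-squares sense. The path-loss constants, reference positions, and RSSI values below are illustrative, not taken from [61].

```python
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Log-distance model: rssi = tx_power - 10 * n * log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# Fixed reference node positions (meters) and the RSSI each one reports
refs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
rssi = np.array([-73.0, -77.1, -75.5])
d = rssi_to_distance(rssi)

# Subtracting the first circle equation from the others linearizes them:
# 2*(xi - x0)*x + 2*(yi - y0)*y = d0^2 - di^2 + (xi^2 + yi^2) - (x0^2 + y0^2)
A = 2 * (refs[1:] - refs[0])
b = d[0] ** 2 - d[1:] ** 2 + np.sum(refs[1:] ** 2, axis=1) - np.sum(refs[0] ** 2)
x, y = np.linalg.lstsq(A, b, rcond=None)[0]
print(f"Estimated node position: ({x:.2f}, {y:.2f}) m")  # ~ (3, 4)
```

In practice RSSI is noisy, which is why the least-squares formulation (rather than an exact intersection of circles) is used.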

Despite the importance of all of these research initiatives, one particular invention has undeniably changed not only the landscape and prospects of human sensing but also our very own society and daily life. This revolutionary invention emerged in the form of a small rectangular device that tends to accompany us everywhere we go: the mobile phone. While traditional phones have been with us for the last 140 years, mobile phones have drastically changed the paradigm of long-distance communication, owing to the mobility they provide. In fact, the mobile phone has nearly become a basic need and an important part of our lives; people often claim their day is “ruined” when they forget theirs at home.

Mobile phones are so important that they have been disseminated even in places that lack much-needed basic infrastructure. The International Telecommunication Union found that, by the end of 2011, the number of mobile phone subscriptions had reached 5.9 billion, representing a penetration rate of 87% worldwide and 79% in developing countries [62]. In fact, the deployment of mobile phone networks surpasses that of other infrastructures, such as paved roads and electricity, in many low- and middle-income countries, diminishing the need for fixed Internet deployment [63] [64].

Thus, mobile phones are extremely common, increasingly cheap, and provide mobile Internet connectivity almost everywhere, even in less favored environments. These characteristics make them excellent candidates as gateways for new types of large-scale HiTLCPSs aimed at solving real-world social problems. Some research has suggested the use of these extended mobile networks to help low-income patients in underdeveloped countries manage chronic diseases, such as diabetes. This idea was tested on patients with diabetes from a clinic in a semi-rural area of Honduras, through a system that delivered automated phone calls to help manage the disease [65]. The lack of technological infrastructure also prompts the mobile phone to serve as a very resourceful device, sometimes even more so than in developed countries. One interesting example of this is mobile banking: Kenya's mobile network Safaricom introduced a service called M-Pesa, which allows users to store money on their mobiles. Users can then pay utility bills or send money to friends through a simple SMS, and the recipient converts it into cash at their local M-Pesa office. This gives millions of Africans cheap, mobile, and easy access to a bank account [66].

Perhaps because of their usefulness and dissemination, the evolution of mobile phones has been extremely rapid, and the market remains very volatile. In fact, mobile phones are quickly being replaced by “smartphones”: devices possessing computing power that matches a desktop computer's, in a size compatible with our pockets. Even between recent iterations, the evolution continues to be astonishing.

Consider the Nokia 6101 (Figure 2.8), a very popular mobile phone at the time of its release; current smartphones such as the iPhone 6s and Nexus 5X make it look almost archaic, yet it was released in 2005, a mere 12 years ago. Smartphones and tablets have become personal portable computers, representing a versatile computational resource; nowadays, even the most basic and cheap smartphones are capable of processing considerable amounts of information through basic programming platforms. Modern smartphones are actually more powerful than desktop computers from a decade ago. For example, an iPad 2 tablet, introduced in 2011, has a peak calculation speed equivalent to that of the Cray-2 supercomputer, introduced in 1985 [67]. Moreover, tablets and smartphones possess advanced sensors such as gyroscopes, accelerometers, and digital compasses, and feature quad-core processors and up to 2 gigabytes of RAM. In a very real sense, these devices have brought us pocket-size, supercomputer-like computational power in a matter of a few years. They have also brought us incredible mobile connectivity, providing Internet access almost anywhere.

Figure 2.8 Nokia 6101 vs iPhone 6s/LG Nexus 5X.

Even when seen in perspective, it is difficult to grasp how fast the mobile market has been evolving. As is the case with most silicon-based technology, mobile phones tend to become cheaper over time. This means that they are more easily adopted by the general population, particularly in developing countries. In fact, smartphone sales are globally outpacing those of regular phones [68].

The possibilities of such advanced mobile platforms are already apparent in the diversity of applications made available for them. However, these are only early examples. Smartphones are not evolving alone, having grown together with the Internet boom and closely accompanied the evolution of the World Wide Web and social networking. Almost all newer smartphone models offer native support for integration with several social networking services (such as Twitter and Facebook), while also offering advanced Internet browsers that function almost as well as their personal computer counterparts.

Not surprisingly, considering the social beings we humans are, as our Internet-connected devices evolved, so did the means we use to communicate and interact with the people we deem close. In the past, people's interactions were mostly face-to-face amongst their peer groups, with occasional long-distance relationships maintained through letters or telephone calls. In today's world, we see a social revolution where people use their smartphones to share, in real time, funny stories, thoughts, feelings, photographs, and other pieces of their lives with family and friends, some of whom they have not physically been in contact with for a long time, and in some cases have never even met in “real life”.

There has also been a considerable evolution over earlier iterations of social networking when it comes to the sharing of personal information: whereas users used to simply fill their personal pages with static personal information (such as their hobbies or self-descriptions), we are now seeing mobile social networks that use collaborative feedback to acquire real-world information in order to provide more useful services. We are also seeing an enormous increase in the sharing of social activity, with users posting more multimedia items about their lives and social interests. According to research by Pixable [69] [70], the rate at which users change their Facebook profile pictures seems to increase every year; in fact, the number of profile photos per user per year tripled from 2006 to 2011, independently of the user's age, since older users upload as much as younger users do. The research indicated that, on average, a Facebook profile picture had two comments and three likes, and the average person had 26 profile pictures. From a social networking point of view, the representation of people's current status in virtual environments is a very interesting concept: access to social networking is becoming increasingly mobile, and it is not uncommon to see people use their smartphones to share and discuss daily experiences shortly after their occurrence, updating their thoughts and responding to feedback from friends as the situation develops and the user's life continues. Current users can announce social events to their group of friends, share experiences through photos and comments, and express their opinions and hobbies through “likes” and their own “private wall”.

Thus, social networking is a phenomenon that bloomed and continues to connect a staggering number of users, having become the fastest-growing active social media behavior online. The sheer scale at which these changes are happening is astonishing: a 2014 statistical analysis by Browser Media, Socialnomics, and MacWorld suggested that Facebook, one of the largest social networks, had around 1.4 billion users worldwide, and that 98% of 18 to 24-year-olds already used social media websites [71]. This tendency continued, as the number of Facebook users increased 12% from 2014 to 2015 [72]. In fact, in 2016, another study claimed that 31% of the global population used Facebook [73].

Since social networks are becoming so important in the interconnections between humans, it is expected that they will play a prominent role in HiTLCPSs. As mobile technology develops, social networking websites become increasingly pervasive. This is evident when we consider the ubiquity of social networking smartphone applications. In 2016, of Facebook's 1.721 billion monthly active users, around 1.104 billion were mobile users [72], and they spent 68% of their total Facebook time on a mobile device [73].

Despite these advancements and the general public's interest in these social services, their current functionality does not yet reflect the true dynamics of people's relationships and personal lives. Instead of being pre-determined and unique events in time, social group activities can, in fact, happen very frequently and, most of the time, spontaneously. Current systems are not capable of providing this “real-time” component to social networking, which diminishes its true potential. In a sense, we can classify current social networks as still very “static” when compared to a more complete system capable of closely following the extent of human social interactions. While collaborative contributions are still an important part of social applications and can provide meaningful and useful data, sensing systems can provide the more reliable and responsive feedback that is crucial to achieving “real-time and human-aware” social networking. In fact, an HiTL approach to social networking may well prove to be a technological leap over current social networking of the same magnitude as the one provided by mobile phones over traditional telephones.

2.3 In Summary

This chapter traced the evolution of IoT and CPS technologies. Figure 2.9 shows a timeline of some of the relevant events and works that were referenced.

Figure 2.9 HiTL technologies evolution timeline.

Research that began with a focus on “things” then evolved to the monitoring of entire environments and, more recently, of human users. This makes sense, since real-world objects are more controllable: they are made by humans, and we understand the full extent of their uses and states. Thus, they became the initial targets for extending CPSs into the web. The later development of WSNs enabled CPSs to monitor wide geographical areas. However, only more recently did we achieve the necessary advancements in miniaturization, computational power, sensing, information linkage, and machine learning that allow us to focus on the most complex aspects of our reality, including ourselves. These possibilities have been brought forward by tremendous advances in mobile devices, such as smartphones, and in social networking.

All of these advancements and ideas make it difficult to grasp the range, limits, and possibilities of these new “human-aware” paradigms. In an attempt to organize these ideas, we will now focus our attention on arranging HiTL concepts into a taxonomy.
