14
Fog Computing Model for Evolving Smart Transportation Applications

M. Muzakkir Hussain, Mohammad Saad Alam, and M.M. Sufyan Beg

14.1 Introduction

Due to the increased number of connected things in smart and industrial applications – more specifically, intelligent transportation systems (ITS) – and the growing volume and velocity of Internet of Things (IoT) data exchange, there is a great urgency for rigorous communication resources to address the bottlenecks in data processing, data latency, and traffic overhead [1]. Fog computing emerges as a substitute for traditional cloud computing to support geographically distributed, latency-sensitive, and QoS-aware IoT applications while reducing the burden on data centers in traditional cloud computing [2]. In particular, fog computing, owing to its peculiarities (e.g., low latency, location awareness, and the capacity to handle a large number of nodes with wireless access) and its support for heterogeneity and real-time applications, is a potentially attractive solution for delay- and resource-constrained large-scale industrial applications [3].

However, along with the benefits of fog computing, research challenges arise when realizing fog computing for such applications [4]. For instance, how should we handle different protocols and data formats from highly dissimilar data sources in the fog layer? How do we determine which data should be processed in the cloud and which in the fog layer (task association, resource allocation/provisioning, VM migration) [5]? How can real-time responses and simultaneous data collection be achieved from large heterogeneous sources in industrial applications? This chapter makes a rigorous assessment of the viability of fog computing approaches for emerging smart transportation architectures [6]. As a proof of concept, we perform a case study on the fog computing requirements of an intelligent traffic light management (ITLM) system and show how the previous questions, among others, can be addressed [7]. Orchestrating such applications can simplify maintenance and enhance data security and system reliability [8]. For efficient management of these activities in the ITS domain, we define a distributed fog orchestration framework that provides dynamic, policy-based life-cycle management of fog services. Finally, the chapter concludes with an overview of the core issues, challenges, and future research directions in fog-enabled orchestration for IoT services in the smart transportation domain.

The chapter is organized as follows. Section 14.2 introduces the needs and prospects of adopting data-driven transportation architectures and the landscape of smart applications supported by such data-driven mobility models. It discusses which computing requirements can best be fulfilled through cloud computing and which require a fog rollout. Section 14.3 identifies the fog computing requirements of ITS as mission-critical architectures. It assesses the state of cloud platforms in providing storage and compute support for such applications and discusses the proper mix of both computational models to best meet the mission-critical computing needs of smart transportation applications. Section 14.4 presents a fog computing framework customized to support latency-sensitive ITS applications. Its four advantages are captured in the acronym CEAL, for cognition, efficiency, agility, and latency. The fog orchestration requirements in the ITS domain are substantiated in Section 14.5 through an intelligent traffic light management (ITLM) system case study. In Sections 14.6 and 14.7, the key big data issues, challenges, and future research opportunities that arise while developing a viable fog orchestrator for smart transportation applications are outlined.

14.2 Data‐Driven Intelligent Transportation Systems

Due to rigorous research and development in state-of-the-art information and communication technologies (ICT) and the upsurge in human population, intelligent transportation systems (ITS) have become an integral part of contemporary human life [9]. The ITS architecture comprises a set of advanced applications aimed at applying ICT amenities to provide QoS- and QoE-guaranteed services for traffic management and transport [10, 11]. Figure 14.1 depicts the fundamental components of a typical ITS architecture [12]. The dependence on transportation systems is indispensable, as is clear from the fact that nearly 40% of the global population spends at least one hour commuting on the road every day [13, 14]. In fact, the competitiveness of a nation, its economic forte, and its productivity rely heavily on how robustly its transportation infrastructures are installed [15]. However, the current landscape of vehicle penetration into transportation architectures comes with numerous opportunities and challenges [16], in the form of traffic congestion, parking issues, carbon footprints, or accidents, for example [17]. Efficient transportation protocols and policies need to be employed to confront such issues. The odd/even policies adopted by China during the 2008 Beijing Olympics [18] and by the Delhi government in 2016 [19] are among the notable attempts to alleviate fleet congestion and air pollution in cities.

Diagram listing the components of an intelligent transportation system, including Smart Traveller Information Systems, Smart Vehicle Control Systems, and Smart Transportation Management Systems.

Figure 14.1 Key components of a data‐driven ITS [12].

But such an approach works well only for specific events and time frames; it is not scalable to nationwide, round-the-clock transportation services. Augmenting the network with additional infrastructure, such as new road construction and road widening, might have a significant effect but is trapped in cost- and space-related silos. The optimal strategy is to efficiently utilize the available transportation resources through data-driven analytics of ITS data streams. The data generated by IoT-aided transportation telematics such as cameras, inductive-loop detectors, global positioning system (GPS) receivers, and microwave detectors can be collected and analyzed to unlock latent knowledge, ultimately used for intelligent decision making [20].

Table 14.1 highlights the key categories of applications supported by ITS in the realm of IoT [21]. Many efforts, such as developing vehicular networking and traffic communication protocols and standards, have been devoted by ITS utilities to finding reliable and ubiquitous transportation solutions for contemporary smart cities [22]. For instance, the US Federal Communications Commission (FCC) has allocated 75 MHz of spectrum in the 5.850 GHz to 5.925 GHz band for the exclusive use of dedicated short-range communications (DSRC) [23]. In addition, some approved amendments have been dedicated to ITS technology, such as Wireless Access in Vehicular Environments (WAVE, IEEE 802.11p) and Worldwide Interoperability for Microwave Access (WiMAX, IEEE 802.16) [11]. The difference between a conventional technology-driven ITS and a data-driven ITS is that the conventional ITS mainly depends on historical data and human experience and places less emphasis on the utilization of real-time ITS data or information [13]. Thanks to modern ICT facilities, data can now not only be processed into useful information but also be employed to generate new functions and services across a wide range of ITS domains [24].

Table 14.1 Application use cases for data‐driven intelligent transportation applications.

Applications | Usage
Vision-driven ITS applications | Vehicle detection [27], pedestrian detection [28], traffic sign detection, lane tracking, traffic behavior analysis, vehicle density and pedestrian density estimation, construction of vehicle trajectories [28], statistical traffic data analysis
Multi-source- (sensors and IoT) driven ITS applications | Vision-driven automatic incident detection (AID) [29], DGPS [30], cooperative collision warning systems (CCWS) [31], automatic vehicle identification (AVI) [32], unmanned aerial vehicles (UAVs)
Learning-driven ITS applications | Online learning [16], trajectory/motion pattern analysis, data fusion, rule extraction, ADP-based learning control, reinforcement learning (RL), ITS-oriented learning
Datasets for perceived visualization | Line charts, bidirectional bar charts, rose diagrams, data images

Since the major percentage of IoT endpoints in a typical ITS are primitive, i.e. the required compute and storage resources cannot be guaranteed at every time and place, an external agent should undertake the computation and analytics tasks. The storage and processing loads in an IoT-aware transportation framework are generated by billions of static as well as mobile sensor nodes spanning a vast geographical domain [25]. An ideal ITS infrastructure is driven by mission-critical service constraints, viz. low latency, real-time decision making, strict response times, and analytical consistency [12].

In fact, an IoT-aware ITS ecosystem is constrained by stringent service requirements such as a low-power communication backbone, optimal energy trading, proper renewable penetration, and other power monitoring utilities [16]. Such heterogeneity in the data architecture of an ITS calls for advanced storage and compute platforms to overcome the various technical challenges arising at different levels of computation and processing. Rather than relying on the master–slave computation model of legacy systems, the current notion is to switch to data-center-level analytics operating under the client–server paradigm [7].

Reaching a consensus on where to install the compute and storage resources continues to be an open question for academia, industry, R&D, and legislative bodies. Cloud computing emerged as a promising technology to support ITS because of its ability to provide convenient, on-demand, anytime, anywhere network access to shared computing resources, provisioned and released with minimal management effort or service provider interaction [21]. The cloud service also frees IoT devices from battery-draining processing tasks by offering virtually unlimited pay-per-use resources through virtualization [26]. However, the varying modalities of services facilitated by cloud computing paradigms fail to meet the mission-critical requirements of a data-driven ITS. The existing cloud computing paradigm ceases to attract proponents because of its inadequacies in building a common, multipurpose platform that can provide feasible solutions to the stringent requirements of ITS in the IoT space. In the next section we analyze the computing needs of mission-critical smart transportation applications and assess the state of generic cloud models. Correspondingly, we also highlight how a paradigm shift from generic cloud-based centralized computation to a geo-distributed fog computing model can turn out to be a near-ideal solution for carrying out mission-critical smart transportation applications.

14.3 Mission‐Critical Computing Requirements of Smart Transportation Applications

Consider a typical traffic lighting use case where smart traffic lights adapt themselves to the real-time traffic circumstances within a particular region. In this case, the reaction time for one or several smart traffic lights is so short that it is virtually impossible to offload all the application execution to a distant cloud. Therefore, such traffic lights should be programmed so that they autonomously cooperate with each other and with all the locally available computing resources, such as roadside units (RSUs), to coordinate their operations. Other such examples are vehicular search applications [9], vehicular crowdsourcing [21], and smart parking [33, 34]. From such examples, it is clear that there is a need for computing frameworks that provide ubiquitous and real-time analytics services for varying transportation domains. Some key data collection, processing, and dissemination requirements of smart transportation infrastructures are highlighted in this section.

14.3.1 Modularity

The contemporary intelligent transportation network is a large and complex system, as it involves heterogeneous IoT and non-IoT devices with numerous data types demanding a wide set of processing algorithms. Thus, the software platform supporting ITS applications should have characteristic modularity and flexibility support. The applications must be incrementally deployable so that the system is self-evolving and fault tolerant, i.e. partial failures do not affect the whole system dynamics. Modularity also enables different data processing algorithms to be designed and plugged into the system with minimal effort. This is important due to the diverse range of data streams generated in smart transportation infrastructures. Thus, the application development process can be done in two independent stages: developing the individual modules and developing the module interconnection logic. The former stage can be done by component or module providers, while the latter can be done by smart transportation developers. The cloud platforms provide enough modularity and flexibility support for deploying ITS applications, but the centralized execution strategies often lead to poor quality of experience (QoE) for the stakeholders.
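As a minimal illustration of this two-stage model, the Python sketch below (all class and function names are hypothetical) shows processing modules written against a common interface, and a separate interconnection logic that chains them while isolating partial failures.

```python
# Minimal sketch of modular ITS application assembly (hypothetical API).
# Stage 1: module providers implement a common interface.
# Stage 2: application developers compose modules into a pipeline.
from abc import ABC, abstractmethod

class Module(ABC):
    @abstractmethod
    def process(self, record: dict) -> dict:
        """Transform one data record and return the enriched record."""

class CongestionEstimator(Module):
    def process(self, record: dict) -> dict:
        # Naive density-based congestion score; a provider could swap in
        # any other algorithm without changing the pipeline below.
        record["congestion"] = min(1.0, record.get("vehicle_count", 0) / 50.0)
        return record

class IncidentDetector(Module):
    def process(self, record: dict) -> dict:
        record["incident"] = record.get("avg_speed_kmh", 60) < 5
        return record

def build_pipeline(modules):
    """Interconnection logic: chain modules; a faulty module is isolated
    so that its failure does not abort the whole pipeline."""
    def run(record):
        for m in modules:
            try:
                record = m.process(record)
            except Exception as err:
                record.setdefault("errors", []).append(f"{type(m).__name__}: {err}")
        return record
    return run

if __name__ == "__main__":
    pipeline = build_pipeline([CongestionEstimator(), IncidentDetector()])
    print(pipeline({"vehicle_count": 37, "avg_speed_kmh": 12}))
```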

14.3.2 Scalability

An ideal ITS architecture should be distributed and scalable enough to efficiently serve a large vehicle population. Though the cloud provides scalable resource pools, due to the huge volume of real-time data generated by the ITS environment, it might not be able to keep up with the low-latency requirements of smart transportation applications. Current cloud-based ITS applications often “embrace inconsistency”; thus, implementing consistency-preserving computational structures constitutes a promising investment domain for the research and development (R&D) sector. The trend envisions a more flexible infrastructure, as in fog computing models, where computation resources in dynamic objects such as moving vehicles can also participate in the application.

14.3.3 Context‐Awareness and Abstraction Support

As ITS components such as vehicles and other infrastructure are mobile and sparsely distributed over a large geography, fog computing will provide context-aware computing platforms for reliable transportation services. Further, the geo-distributed context information should be exposed to developers so that they can build context-aware applications. Because of the high level of heterogeneity and the large number of IoT devices, a typical ITS application such as smart parking requires a high degree of abstraction over how the heterogeneous computations and processing steps are described, coordinated, and made to interact with one another. The centralized cloud-based ITS solutions need to be upgraded to dedicated fog solutions such that the model can work with a pool of vehicles at once. For instance, such a programming abstraction should be able to describe a command like: “get the State of Charge (SoC) of these groups of cars in this location”.
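A minimal Python sketch of such an abstraction is given below; the VehiclePool class and its query method are hypothetical illustrations, not part of any standard fog API, and the degree-to-kilometer conversion is deliberately coarse.

```python
# Sketch of a context-aware programming abstraction for a vehicle pool
# (all names are hypothetical; not a standard fog API).
from dataclasses import dataclass

@dataclass
class Vehicle:
    vin: str
    lat: float
    lon: float
    soc: float          # state of charge, 0.0-1.0

class VehiclePool:
    """Abstracts a group of vehicles so an application can issue one
    declarative query instead of addressing devices individually."""
    def __init__(self, vehicles):
        self.vehicles = vehicles

    def state_of_charge(self, lat, lon, radius_km):
        """'Get the SoC of these groups of cars in this location.'"""
        # Coarse conversion: roughly 111 km per degree of latitude/longitude.
        nearby = [v for v in self.vehicles
                  if abs(v.lat - lat) < radius_km / 111.0
                  and abs(v.lon - lon) < radius_km / 111.0]
        return {v.vin: v.soc for v in nearby}

pool = VehiclePool([Vehicle("KA01-1234", 12.971, 77.594, 0.62),
                    Vehicle("KA05-9876", 12.975, 77.601, 0.18)])
print(pool.state_of_charge(lat=12.97, lon=77.60, radius_km=2))
```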

14.3.4 Decentralization

Since ITS applications usually operate over a large number of heterogeneous and dynamic transportation telematics, such as mobile/autonomous vehicles or roadside units (RSUs), a decentralized execution or programming model is necessary. A centralized cloud-based application has to implement all sorts of condition and exception handling to deal with such heterogeneity and dynamism. The fog platform will ensure scalable execution if the application can be developed in a modular way with components distributed to the edge devices. Instead of relying on remote cloud data centers, fog computing provides robust decentralization support to leverage the computing resources of ITS components, such as vehicles and sensors, to execute the application and fulfill the latency requirements of ITS applications.

14.3.5 Energy Consumption of Cloud Data Centers

The energy consumption of mega data centers is likely to triple in the coming decade [35]; thus, adopting energy-aware strategies becomes an urgent need for the computing community. Offloading the whole universe of transportation applications onto cloud data centers causes untenable energy demands, a challenge that can only be alleviated by adopting sensible energy management strategies. Also, there are plenty of ITS applications without significant energy implications; instead of overloading data centers with such trivial tasks, the analytics can be carried out at and within ITS fog nodes such as vehicular platoons, parked vehicular networks, RTUs, SCADA systems, roadside units (RSUs), base stations, and network gateways.

Motivated by the abovementioned mission-critical computing requirements of IoT-aware smart transportation applications, the downsides of current cloud computing infrastructures in meeting those needs, and the assumption that the transportation design community is not in a position to reinvent a dedicated Internet infrastructure or to develop computing platforms and elements from scratch that fulfill all those requirements, we present in this work a fog computing framework whose principle lies in offloading the time- and resource-critical operations From the cOre to the edGe (FOG). The argument here is not to cannibalize the existing centralized cloud support for ITS, but to understand how fog computing algorithms can interplay with the core-centered cloud computing support, leveraged with a new breed of real-time and low-latency utilities. The objective is also to develop a viable computational prototype for an ITS architecture in the realm of the IoT space, through proper orchestration and assignment of compute and storage resources to the endpoints, where the cloud and fog technologies are tuned to interplay and assist one another synergistically.

14.4 Fog Computing for Smart Transportation Applications

Figure 14.2 depicts a typical fog-assisted cloud architecture customized for smart transportation applications. It is a consensus that the fog paradigm is not envisioned to cannibalize or replace cloud computing platforms; rather, the notion is to realize fog platforms as a perfect ally, or an extension of cooperative modules that interplay with the cloud infrastructure. In fact, according to [4], properties like elasticity and distributed computation are defined commonly for both cloud and fog. However, since the computation-intensive tasks from resource-constrained entities such as sensor nodes are mapped to computational resource blocks (CRBs) of dedicated fog nodes, the response time is appreciably reduced. The distinguishing geo-distributed intelligence provided by fog deployments makes them more viable for security-constrained services, as critical and sensitive data is selectively processed on local fog nodes and kept within the user's control instead of being offloaded to vendor-regulated mega data centers. The fog service models also improve energy efficiency by offloading power-intensive computations, allowing endpoint devices to operate in battery-saving modes [12]. Additional fog nodes can be dynamically plugged in when and wherever necessary, thereby removing the scalability issues that hinder the success of cloud computing models. The bandwidth issues are dramatically mitigated as raw application requests are filtered, processed, analyzed, and cached in local computing nodes, thus reducing the data traffic across the cloud gateways. If a robust and predictive caching algorithm is employed, the fog nodes can serve a significant portion of consumer requests from the local nodes alone, thus loosening the reliance on data center connectivity. The fog nodes can be efficiently programmed to incorporate context and situational awareness about the data, thereby improving the dependability of the system.


Figure 14.2 Topology of FOG computing paradigm for smart transportation architectures.

The underlying notion of fog is the distribution of storage, communication, control, and compute resources along the continuum from the edge to the remote cloud. Fog architectures may be fully distributed, mostly centralized, or somewhere in between. In addition to virtualization facilities, specialized hardware and software modules can be employed for implementing fog applications. In the context of an IoT-aided ITS, a customized fog platform will permit specific applications to run anywhere, reducing the need for specialized applications dedicated just to the cloud, just to the endpoints, or just to the edge devices. It will enable applications from multiple vendors to run on the same physical machine without mutual interference. Further, a fog architecture will provide a common life-cycle management framework for all applications, offering capabilities for composing, configuring, dispatching, activating and deactivating, adding and removing, and updating applications. It will further provide a secure execution environment for fog services and applications. Among the long list of fog specialties, we here define four key advantages of a typical fog architecture, captured in the acronym CEAL [6].

14.4.1 Cognition

The most peculiar property of a fog platform is its cognizance of client-centric objectives, also termed geo-distributed intelligence. The framework is aware of the context of customer requirements and can best determine where to carry out the computing, storage, and control functions along the cloud-to-thing continuum. Thus, fog applications can be placed in the vicinity of ITS endpoints and are ensured to be better aware of, and to closely reflect, customer requirements.

14.4.2 Efficiency

In fog architectures, the compute, storage, and control functions are pooled and disseminated anywhere between the cloud and the edge nodes, taking full advantage of the diverse resources available along the cloud-to-thing continuum. In IoT-aided ITS infrastructures, the fog model allows utilities and applications to leverage the otherwise idling computing, storage, and networking resources abundantly available both along the network edge (HAN, NAN, MAN, etc.) and at end-user devices such as smart meters, smart home appliances, connected vehicles, and network edge routers. Fog's closer proximity to the endpoints enables it to be more closely integrated with consumer applications.

14.4.3 Agility

It is usually much faster and more affordable to experiment with client and edge devices, rather than waiting for vendors of large network and cloud boxes to initiate or adopt an innovation. Fog will make it easier to create an open marketplace for individuals and small teams to use open application programming interfaces, open software development kits (SDKs), and the proliferation of mobile devices to scale, innovate, develop, deploy, and operate new services.

14.4.4 Latency

Fog enables data analytics at the network edge and can support time-sensitive functions for ITS-like cyber-physical systems. This is essential not only for developing stable control systems but also for the tactile Internet vision of enabling embedded AI applications with millisecond response requirements. Such advantages, in turn, enable new services and business models, and may help broaden revenues and reduce costs, thereby accelerating IoT-aided ITS rollouts. Furthermore, Table 14.2 compares the performance of cloud and fog computing deployments in smart transportation applications.

Table 14.2 Performance comparison of cloud and fog computing models in smart transportation applications.

Characteristics and requirements | Pure cloud platform | Fog-assisted cloud platform
1 Geo-distribution | Centralized | Distributed
2 Context/location awareness | No | Yes
3 Service node distribution | Within the Internet | At the core as well as the edges
4 Latency | High | Low
5 Delay jitter | High | Low
6 Client–server separation | Remote/multiple hops | Single hop
7 Security | Not defined | Defined degree of security
8 Node population | Few | Very large
9 Mobility support | Limited | Rich mobility support
10 Last-mile connectivity support | Leased line | Wired/wireless
11 Real-time analytics | Supported | Supported
12 Enroute data attacks/DoS | High probability | Low probability

A triple-tier fog-assisted cloud computing architecture is presented in Figure 14.2, where a substantial proportion of ITS control and computational tasks are nontrivially hybridized onto geo-distributed fog computing nodes alongside the cloud computing support. The hybridization objective is to overcome the disruption caused by the penetration of IoT utilities into ITS infrastructures, which calls for active proliferation of control, storage, networking, and computational resources across the heterogeneous edges and endpoints. The tier nearest the ground is termed the physical schema or data-generator layer, which primarily comprises a wide range of intelligent IoT-enabled devices scattered across the ITS geography. This is the sensing network, consisting of several noninvasive, highly reliable, low-cost wireless sensor nodes and smart mobile devices for capturing situational context information from ITS stakeholders.

The data capturing/generating devices are widely distributed at numerous ITS endpoints, and the voluminous data streams generated by these geo-spatially distributed sensors have to be processed as a coherent whole. However, this layer may occasionally filter data streams for local consumption (edge computing) while offloading the rest to upper tiers through dedicated gateways. Such entities may be abstracted into application-specific logical clusters, directly or indirectly influenced by the expediency of ITS operations. In connected vehicular networks, such clusters are formed from vehicular applications where the intelligent vehicles equipped with sensing units such as on-board sensors (OBS) organize themselves to form vehicular fogs. Often, the transportation telematics support such as cellular telephony, on-board sensors (OBS), roadside units (RSUs), and smart wearable devices may uncover the computational and networking capabilities latent in underutilized vehicular resources. These underutilized vehicular resources may occasionally be turned to communication and analytics use, where a collaborative multitude of end-user clients or near-user edge devices carry out communication and computation based on better utilization of the individual storage, communication, and computational resources of each vehicle [36].

Similarly, the presence of clusters can also be traced in smart home area networks (HANs), which make noteworthy contributions to ITS operational dynamics. The intelligent IoT-equipped home agents such as smart parking lots, CCTV cameras, and home charging devices are potentially active data-generation entities and may also be augmented with actuators to provide the storage, analysis, and computational support needed for prompt and local decision-making services (edge computing).

Layer 2 constitutes the fog computing layer, comprising low-power intelligent fog computing nodes (FCNs) such as routers, switches, high-end proxy servers, intelligent agents, and commodity hardware, having the peculiar ability of storage, computation, and packet routing. Software-defined networking (SDN) assembles the physical clusters to form virtualized intercluster private networks (ICPNs) that route the generated data to the fog devices spanning the fog computing layer. The fog devices and their corresponding utilities form geographically distributed virtual computing snapshots or instances that are mapped to lower-layer devices in order to serve the processing and computing demands of ITS. Each fog node is mapped to and is responsible for a local cluster of sensors covering a neighborhood or a small community, executing data analytics in real time. However, since the IoT devices in layer 1 are often dynamic (viz. vehicular sensors), robust mobility management techniques need to be employed to enable flexible association of those entities with the layer 2 fog nodes in order to realize a consistent and reliable data transmission policy.

Often, the FCNs in layer 2 work in parallel with the nodes lying below them in the hierarchy to undertake tasks. In many cases, the FCNs may form further subtrees of FCNs, with each node at a greater depth in the tree managed by the ones at lesser depth, in a master–slave paradigm. A typical association of such hierarchies is depicted in Figure 14.3. In the VANET scenario, the FCNs may be assigned spatial and temporal data to identify potentially hazardous events in the road transportation network, such as accidents, vehicle thefts, or intruder vehicles. In such circumstances, these computing nodes may interrupt local execution for short timespans, and the data analysis results are fed back and reported to the upper layer (from street-level to citywide traffic monitoring entities) for complex, historical, and large-scale behavior analysis and condition monitoring. In other words, the distributed analytics from multi-tier fogs (followed by aggregation analytics in many case studies) performed at the proposed fog layers act as localized “reflex” decisions to avoid potential contingencies. Meanwhile, a significant fraction of the IoT data generated by such applications does not require dispatch to the remote clouds; hence, response latency and bandwidth consumption problems can be easily resolved.
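To make this reflex/report split concrete, the Python sketch below shows one plausible handling loop at a layer-2 FCN; the threshold, function names, and message format are illustrative assumptions rather than a prescribed design.

```python
# Minimal sketch of the "reflex vs. report" split at a layer-2 FCN
# (hypothetical threshold, function names, and message format).
from statistics import mean

LOCAL_SPEED_ALARM_KMH = 10      # assumed threshold for a local contingency

def fcn_handle_window(speed_samples_kmh, cloud_uplink):
    """Process one window of roadside sensor readings at the fog node."""
    # Reflex path: act locally, immediately, without consulting the cloud.
    if speed_samples_kmh and min(speed_samples_kmh) < LOCAL_SPEED_ALARM_KMH:
        actuate_local_warning()                  # e.g. flash the ITL amber

    # Report path: only an aggregate summary goes to layer 3 for long-term,
    # citywide behavior analysis, saving bandwidth and cloud load.
    summary = {"mean_speed": mean(speed_samples_kmh) if speed_samples_kmh else None,
               "samples": len(speed_samples_kmh)}
    cloud_uplink(summary)

def actuate_local_warning():
    print("local reflex: slow-traffic warning issued")

if __name__ == "__main__":
    fcn_handle_window([42.0, 8.5, 37.0],
                      cloud_uplink=lambda s: print("to cloud:", s))
```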

Schematic diagram depicting Data/Control Flow among FCNs in Layer 2 with arrows marked Communication and Control; Infrastructure depicted by circles and hardware.

Figure 14.3 Data/control flow among FCNs in layer 2.

The uppermost tier in the customized fog architecture is the cloud computing layer, consisting of mega data centers that provide citywide ITS monitoring and global centralization, in contrast to the localization, geo-distributed intelligence, low latency, and context awareness support provided by layer 2. The computational elements at this layer focus on producing complex, long-term, and citywide behavioral analytics such as large-scale event detection, long-term pattern recognition, and relationship modeling, to support dynamic decision making. This ensures that ITS communities can perform wide-area situational awareness (WASA), wide-area demand response, and resource management in the case of a natural disaster or a large-scale service interruption. The processing output of layer 2 can be categorized into two dimensions. The first comprises analysis and status reports, and the corresponding data, that demand large-scale and long-term behavior analysis and condition monitoring. Such datasets are offloaded to the cloud computing mega data centers situated in layer 3 via high-speed WAN gateways and links. The other part of the analysis results comprises the inferences, decisions, and quick feedback control delivered to the aligned data consumers.

14.5 Case Study: Intelligent Traffic Lights Management (ITLM) System

A smart traffic management prototype calls for the deployment of intelligent traffic lights (ITLs) equipped with sensing capabilities at each crossing. Such sensors measure the distance and speed of vehicles approaching from every direction. The sensors also detect and regulate the movement of pedestrians and cyclists crossing every street and intersection along the way. The prime QoS attributes of an ITLM architecture can be summarized as follows:

  1. Accident prevention. The ITLs may need to trigger stop or slow-down signals to candidate vehicles or to modify their execution cycle(s) to avoid collisions in real time.
  2. Ensuring vehicle mobility. The ITLs need efficient software programming interfaces that can learn the fleet dynamics. Accordingly, they maintain the green pulses to guarantee a steady flow of traffic in near real time.
  3. Reliability. The historical datasets generated by ITLM systems are collected, stored in back‐end large databases, and then analyzed using big data analytics (BDA) tools to evaluate and enhance the architectural reliability. Thus, such activities relate to the storage and analysis of global data ranging over long time spans.

To illustrate the key computational requirements of such ITLMs, consider a green pulse signaling the movement of a vehicle at 40 mph, i.e. it travels roughly 1.7 meters every 100 milliseconds. If a probable collision with a pedestrian is anticipated, the associated ITL(s) must issue an urgent alarm to the approaching vehicles. Here fog computing comes into play, as the control loop subsystem needs to react within a few milliseconds; the aggregated local subsystem response latency for such mission-critical tasks is on the order of <10 ms. Triggering an action to prevent an accident may take precedence over other operations. Thus, the local ITL network might also alter its execution cycle, an action that may introduce perturbation in the green lights, affecting the whole system dynamics. To dampen the effect of such perturbation, a resynchronization signal needs to be sent to all the ITLs in the global system, a task that will be accomplished on a time scale of hundreds of milliseconds to a few seconds. An interplay between the fog and the cloud is accentuated here. The research thrust is to develop a viable computational prototype for an ITLM system in the realm of the IoT space, through proper orchestration and assignment of compute and storage resources to the endpoints, where the cloud and fog technologies are tuned to interplay and assist each other in a synergistic manner. Some of the critical computing requirements of a customized ITLM are identified in Table 14.3.
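The back-of-the-envelope check below (plain Python, with assumed speed and control-loop periods) reproduces these numbers and shows why the collision-avoidance loop has to close at the fog layer rather than in a distant data center.

```python
# Back-of-the-envelope latency budget for the ITLM example (assumed figures).
MPH_TO_MS = 0.44704                      # metres per second per mph

speed_ms = 40 * MPH_TO_MS                # 40 mph is roughly 17.9 m/s
for latency_s in (0.100, 0.010, 0.001):  # 100 ms, 10 ms, 1 ms control loops
    print(f"{latency_s * 1000:>6.1f} ms -> vehicle advances {speed_ms * latency_s:.2f} m")
# 100 ms of round-trip delay already costs about 1.8 m of travel, while a
# sub-10 ms loop closed at a nearby fog node keeps the error below 0.2 m.
```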

Table 14.3 Computing requirements of intelligent traffic light management (ITLM) systems.

Attributes | Description
Mobility | Tight mobility constraints for the commuters as well as the ITLs (ideally regular red–green pulses)
Geo-distribution | Wide (across the region) and dense (intersections and ramp accesses)
Low/predictable latency | Tight within the scope of the intersection
Fog–cloud interplay | Data at different time scales (sensors/vehicles at the intersection, traffic information at diverse collection points)
Multi-agency orchestration | Agencies that run the system must coordinate control law policies in real time
Consistency | Capturing the traffic landscape demands a degree of consistency between collection points

The fog model, leveraged with modular compute and storage devices, offers common interfaces and programming environments for the ITL networking infrastructures, despite their varying form factors and enclosures. Since the ITLM is a highly distributed system that collects data over an extended geography, ensuring an acceptable degree of consistency between the different aggregation points is crucial for the implementation of efficient traffic policies.

The fog vision anticipates an integrated hardware infrastructure and software platform aimed at streamlining the deployment of new services and applications and making it more efficient. The ITL fog nodes are multitenant and provide strict service guarantees for mission-critical systems such as the ITLM, in contrast with softer guarantees (e.g., for infotainment), even when run for the same provider. The network of ITLs may extend beyond the domain of a single controlling authority. Thus, the orchestration of consistent policies involving multiple agencies is a challenge unique to fog computing. A typical orchestration scenario for the ITLM subsystem is presented in Figure 14.4.

Schematic diagram depicting orchestration scenario for intelligent traffic management service with planning, resource management, and condition monitoring of remote cloud, local orchestrator.

Figure 14.4 An orchestration scenario for intelligent traffic management service.

The cloud–fog dispatch middleware (CFDM) defines an orchestration platform to handle a number of critical software components across the whole system, which is deployed over a wide geographical area. The CFDM employed in ITLMs has decision-making modules (DMMs), which create the control policies and push them to the individual ITLs. The DMM can be implemented in a centralized, distributed, or hierarchical way. In the latter, which is the most likely implementation, nodes with DMM functionality of regional scope must coordinate their policies across the whole system. Whatever the implementation, the system should behave as if orchestrated by a single, all-knowing DMM. The CFDM defines a set of protocols for the federated message bus, which passes data from the traffic lights to the DMM nodes, pushes policies from the DMM nodes to the ITLs, and exchanges information among the ITLs themselves.
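The Python sketch below illustrates these message flows with an in-process publish/subscribe bus; the class names, topics, and the trivial queue-length policy are hypothetical stand-ins for the CFDM protocols rather than their actual specification.

```python
# Illustrative sketch of the CFDM message flows (all class, topic, and field
# names are hypothetical; this is not the authors' implementation).
from collections import defaultdict

class MessageBus:
    """Federated publish/subscribe bus connecting ITLs and DMM nodes."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

class RegionalDMM:
    """Decision-making module of regional scope: consumes ITL telemetry,
    derives a control policy, and pushes it back to its traffic lights."""
    def __init__(self, region, bus):
        self.region, self.bus = region, bus
        bus.subscribe(f"telemetry/{region}", self.on_telemetry)

    def on_telemetry(self, msg):
        # Trivial policy: lengthen the green phase when queues build up.
        green_s = 45 if msg["queue_len"] > 20 else 30
        self.bus.publish(f"policy/{self.region}",
                         {"intersection": msg["intersection"], "green_s": green_s})

if __name__ == "__main__":
    bus = MessageBus()
    RegionalDMM("north", bus)
    bus.subscribe("policy/north", lambda p: print("ITL receives policy:", p))
    bus.publish("telemetry/north", {"intersection": "N-12", "queue_len": 27})
```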

In addition to the actionable real-time (RT) information generated by the sensors, and the near-RT data passed to the DMM and exchanged among the set of ITLs, there are volumes of valuable data collected by the ITLM system. This data must be ingested into a data center (DC)/cloud for deep big data analytics that extends over time (days, months, even years) and over the covered territory. The results of such historical batch analytics may be further used to improve the reliability and QoS of future executions. The outputs of such bulk analytics can be used as solutions for:

  • Evaluation of the impact on traffic (and its consequences for the economy and the environment) of different policies
  • Monitoring of city pollutants
  • Trends and patterns in traffic

The ITLM use case just discussed reflects the need for robust orchestration frameworks that can simplify maintenance and improve ITS data security and system reliability. The data-driven ITS is an ideal example of a cyber-physical system (CPS) encompassing physical and virtual components capable of interfacing and interacting with existing network infrastructure. Thus, addressing how to efficiently deal with ITS applications in the IoT space, their dynamic variations, and their transient operational behavior is a formidable challenge.

14.6 Fog Orchestration Challenges and Future Directions

High-paced R&D and investment efforts in the past decade have led to more mature cloud-based techniques with efficient frameworks, deployment platforms, simulation toolkits, and business models. However, in the context of fog deployments, such efforts, though gathering pace, are still in their infancy [17]. There are plenty of studies hypothesizing execution scenarios for fog platforms, but these are still in the concept and simulation phase. The roll-out of fog services must inherit many of the properties of its cloud counterparts, and the requirements of deploying computational workloads on fog computing nodes (FCNs) must be properly demystified. In addition, fog comes with its inherent silos and raises many questions that still lack consensus answers: where to place a workload, what the connection policies, protocols, and standards are, how to model and interpret the interactions of and among fog nodes, and how to route the workload, for example. In the next subsection, we highlight the key challenges in fog-enabled orchestration for ITS applications. Following this, the nascent research avenues envisioned by these issues and challenges are explored.

14.6.1 Fog Orchestration Challenges for Intelligent Transportation Applications in IoT Space

14.6.1.1 Scalability

Since the heterogeneous sensors and smart devices employed in ITS are designed by multiple IoT manufacturers and vendors, selecting an optimal device becomes increasingly intricate when considering customized hardware configurations and personalized ITS requirements. Moreover, there may be applications that can only operate with specific hardware architectures, viz. ARM or Intel, and across a wide range of operating systems. Additionally, ITS applications with stringent security requirements might require specific hardware and protocols to function. An orchestration framework must not only cater to such functional requirements but also scale efficiently in the face of increasingly large workflows that change dynamically. The orchestrator must assess whether the assembled systems, composed of cloud resources, sensors, and fog computing nodes (FCNs), coupled with geographic distributions and constraints, are capable of provisioning complex services correctly and efficiently. In particular, the orchestrator must be able to automatically predict, detect, and resolve scalability bottlenecks that could arise from an increased application scale in a customized ITS architecture.

14.6.1.2 Privacy and Security

In IoT-aided ITS case studies such as ITLMs or smart parking, a specific application is composed of multiple sensors, computer chips, and devices. Their deployment in widely varying geographic locations thus increases the attack surface of the involved objects. Examples of attack vectors are human-caused sabotage of network infrastructure, malicious programs provoking data leakage, or even physical access to devices [37]. Holistic security and risk assessment procedures are needed to effectively and dynamically evaluate security and measure risks, as assessing the security of dynamic IoT-based application orchestration becomes increasingly critical for secure data placement and processing. The IoT-integrated devices offering fog support, such as switches, routers, and base stations, if they are to be used as publicly accessible edge computing nodes, require greater articulation of the risks borne by the public and private vendors that own these devices as well as by those that will employ them. Also, the intended purpose of such a device, e.g. an Internet router handling network traffic, cannot be compromised just because it is being used as a fog node. The fog can be made multitenant only when stringent security protocols are enforced.

14.6.1.3 Dynamic Workflows

Another significant characteristic of, and challenge for, IoT-enabled ITS applications is their ability to evolve and dynamically change their workflow composition. Software upgrades pushed through FCNs or the frequent join–leave behavior of network objects change the internal properties and performance, potentially altering the overall workflow execution pattern. Moreover, handheld devices used by ITS stakeholders inevitably suffer from software and hardware aging, which will invariably result in changing workflow behavior and device properties (e.g., low-battery devices will degrade the data transmission rate). Furthermore, the performance of transportation applications will change owing to their transient and/or short-lived behavior within the ITS subsystem, including spikes in resource consumption or big data generation. This leads to a strong requirement for automatic and intelligent reconfiguration of the topological structure and assigned resources within the workflow and, importantly, of the FCNs.

14.6.1.4 Tolerance

Scaling a fog computing framework in proportion to ITS application demands increases the probability of failure. Some rare software bugs or hardware faults that do not manifest at small scale or in testing environments, such as stragglers, can have a debilitating effect on system performance and reliability. At the scale, heterogeneity, and complexity we are anticipating, different fault combinations will likely occur. To address these system failures, developers should incorporate redundant replication and user-transparent, fault-tolerant deployment and execution techniques into the orchestration design.

14.7 Future Research Directions

The challenges outlined in the previous sub‐section unlock several key research directions for successful deployment of fog‐supported ITS architectures. The research prospects defined for fog life cycle management can be executed in three broad phases. In the deployment phase, research opportunities include optimal node selection and routing as well as parallel algorithms to handle scalability issues. In the runtime phase, incremental design and analytics, re‐engineering, dynamic orchestration, etc., are potential research thrusts for supporting dynamic QoS monitoring and providing guaranteed QoE. In the evaluation phase, big‐data‐driven analytics (BD2A) and optimization algorithms are prime avenues that need to be explored to improve orchestration quality and accelerate optimization for problem solving. Figure 14.5 shows the functional elements of a typical fog orchestrator, along with the key requirements and challenges at each phase.

Flow diagram depicting functional elements of a typical fog orchestrator showing the key requirements and challenges at each phase.

Figure 14.5 Functional elements of a typical fog orchestrator showing the key requirements and challenges at each phase.

14.7.1 Opportunities in the Deployment Phase

Fog computing provides research opportunities in node selection, routing, parallelization, and heuristics.

14.7.1.1 Optimal Node Selection and Routing

Determining resources and services in cloud paradigms is a well-explored and well-understood area, but exploiting network edges in decentralized fog settings calls for discovery mechanisms that associate optimal nodes [38]. Resource discovery in fog computing is not as easy as in tightly or loosely coupled distributed environments, and manual mechanisms are not feasible because of the sheer volume of FCNs available at the fog layer [39]. If the ITS utility needs to execute machine learning or big data tasks, resource allocation strategies also need to cater to data streams from heterogeneous devices of multiple generations as well as to online workloads.

Benchmark algorithms must be developed for efficient estimation of FCNs' availability and capability. These algorithms must allow for seamless augmentation (and release) of FCNs in the computational workflow at varying hierarchical levels without added latencies or compromised QoE.

Autonomic node recovery mechanisms need to be devised to ensure consistency and reliability of fault detection in FCN-networked architectures, as existing cloud-based solutions do not fit. Another highly promising research aspect is workflow partitioning in fog computing environments: though numerous task partitioning techniques, languages, and tools have been successfully implemented for cloud data centers, research on apportioning work among FCNs is still in the concept phase.

Without a specification of the capabilities and geo-distribution of candidate FCNs, automated mechanisms for computation offloading among those nodes are challenging to realize. Maintaining a ranked list of associated host nodes through priority-aware resource management policies, forming hierarchies or pipelines for sequential offloading of workloads, developing schedulers for dynamically deploying segregated tasks to multiple nodes, and designing algorithms for parallelization and multitasking across only FCNs, FCNs and data centers, or only data centers are rigorous research topics in academia as well as the R&D community.
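As one concrete, deliberately simplified example of such a ranked node list, the Python sketch below scores candidate FCNs by a weighted combination of latency, free capacity, and reliability; the attributes and weights are assumptions, not an established scheduling policy.

```python
# Sketch of a priority-aware ranking of candidate FCNs for one offloaded task
# (scoring attributes and weights are assumptions, not a standard algorithm).
from dataclasses import dataclass

@dataclass
class FCN:
    name: str
    rtt_ms: float        # measured round-trip time to the requesting endpoint
    free_cpu: float      # fraction of CPU currently idle, 0.0-1.0
    reliability: float   # observed availability, 0.0-1.0

def rank_fcns(candidates, w_latency=0.5, w_cpu=0.3, w_rel=0.2):
    """Return candidates sorted best-first by a weighted score."""
    def score(n: FCN) -> float:
        latency_term = 1.0 / (1.0 + n.rtt_ms)       # lower RTT -> higher score
        return w_latency * latency_term + w_cpu * n.free_cpu + w_rel * n.reliability
    return sorted(candidates, key=score, reverse=True)

if __name__ == "__main__":
    nodes = [FCN("rsu-17", rtt_ms=4, free_cpu=0.35, reliability=0.98),
             FCN("bus-depot-gw", rtt_ms=11, free_cpu=0.80, reliability=0.95),
             FCN("regional-dc", rtt_ms=45, free_cpu=0.95, reliability=0.999)]
    print([n.name for n in rank_fcns(nodes)])
```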

14.7.1.2 Parallelization Approaches to Manage Scale and Complexity

Optimization algorithms and graph-based approaches are typically time- and resource-consuming when applied at large scale and necessitate parallel approaches to accelerate the optimization process. Recent work provides possible solutions that leverage an in-memory computing framework to execute tasks in a cloud infrastructure in parallel. However, realizing dynamic graph generation and partitioning at runtime, to adapt to the shifting space of possible solutions stemming from the scale and dynamicity of IoT components, remains an unsolved problem.

14.7.1.3 Heuristics and Late Calibration

To ensure near‐real‐time intervention during IoT application development, one approach is to use correction mechanisms that could be applied even when suboptimal solutions are deployed initially. For example, in some cases, if the orchestrator finds a candidate solution that approximately satisfies the reliability and data transmission requirements, it can temporarily suspend the search for further optimal solutions. At runtime, the orchestrator can then continue to improve decision results with new information and a reevaluation of constraints, and use task‐ and data‐migration approaches to realize workflow redeployment.
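A toy Python sketch of this deploy-first, refine-later behavior is shown below; the random candidate generator, constraint names, and latency objective are purely illustrative.

```python
# Sketch of "late calibration": deploy the first feasible placement, then keep
# refining it at runtime (objective and thresholds are illustrative only).
import random

def feasible(plan, constraints):
    return (plan["reliability"] >= constraints["min_reliability"]
            and plan["bandwidth"] <= constraints["max_bandwidth"])

def random_plan():
    # Stand-in for enumerating real candidate placements.
    return {"reliability": random.uniform(0.8, 1.0),
            "bandwidth": random.uniform(10, 100),
            "latency": random.uniform(5, 50)}

def orchestrate(constraints, budget=200):
    deployed = None
    for _ in range(budget):
        plan = random_plan()
        if not feasible(plan, constraints):
            continue
        if deployed is None:
            deployed = plan                    # deploy the first feasible plan
        elif plan["latency"] < deployed["latency"]:
            deployed = plan                    # later, migrate to a better one
    return deployed

print(orchestrate({"min_reliability": 0.9, "max_bandwidth": 80}))
```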

14.7.2 Opportunities in Runtime Phase

In the runtime phase, research opportunities for fog computing include dynamic orchestration of resources, incremental strategies, QoS, and proactive decision‐making.

14.7.2.1 Dynamic Orchestration of Fog Resources

Apart from the initial placement, all workflow components dynamically change in response to internal transformations or abnormal system behavior. IoT applications are exposed to uncertain environments where execution variations are commonplace. Because of the degradation of consumable devices and sensors, capabilities such as security and reliability that initially were guaranteed will vary, resulting in the initial workflow being no longer optimal or even totally invalid.

Furthermore, the structural topology might change according to the task execution progress (i.e., a computation task is finished or evicted) or will be affected by the execution environment's evolution. Abnormalities might occur, owing to the variability of combinations of hardware and software crashes, or data skew across different management domains of devices due to abnormal data and request bursting. This will result in unbalanced data communication and subsequent reduction of application reliability. Therefore, dynamically orchestrating task execution and resource reallocation is essential.

14.7.2.2 Incremental Computation Strategies

The ITS applications may often be choreographed through workflow or task graphs to assemble different IoT applications. In some domains, the orchestration is supplied with a plethora of candidate devices with different geographical locations and attributes. In some cases, orchestration would typically be considered too computationally intensive, as it is extremely time‐consuming to perform operations, including prefiltering, candidate selection, and combination calculation, while considering all specified constraints and objectives. Static models and methods become viable when the application workload and parallel tasks are known at design time. In contrast, in the presence of variations and disturbances, orchestration methods typically rely on incremental scheduling at runtime (rather than straightforward complete recalculation by rerunning static methods) to decrease unnecessary computation and minimize schedule makespan.
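The minimal Python sketch below contrasts a static initial placement with an incremental repair that touches only the tasks affected by a degraded node; the round-robin placement and data structures are illustrative assumptions.

```python
# Sketch of incremental rescheduling: when one FCN degrades, only the tasks
# placed on it are re-assigned instead of recomputing the whole schedule.
def initial_schedule(tasks, nodes):
    """Naive static placement: round-robin over available nodes."""
    return {t: nodes[i % len(nodes)] for i, t in enumerate(tasks)}

def incremental_reschedule(schedule, failed_node, healthy_nodes):
    """Move only the tasks that sat on the failed or degraded node."""
    updated = dict(schedule)
    displaced = [t for t, n in schedule.items() if n == failed_node]
    for i, task in enumerate(displaced):
        updated[task] = healthy_nodes[i % len(healthy_nodes)]
    return updated

if __name__ == "__main__":
    sched = initial_schedule(["detect", "track", "fuse", "report"],
                             ["fcn-a", "fcn-b"])
    print("initial:", sched)
    print("after fcn-b degrades:",
          incremental_reschedule(sched, "fcn-b", ["fcn-a"]))
```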

14.7.2.3 QoS‐Aware Control and Monitoring Protocols

To capture the dynamic evolution and its variables (such as state transitions and new IoT operations), we should predefine the quantitative criteria and measurement approach for dynamic QoS thresholds in terms of latency, availability, throughput, and so on. These thresholds usually dictate the upper and lower bounds on the metrics desired at runtime. In a normal setting, complex QoS information-processing methods, such as hyper-scale matrix updates and calculations, would lead to many scalability issues.
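A minimal sketch of such threshold checking is given below; the metric names and bounds are assumed values, and in practice the bounds themselves would be updated dynamically.

```python
# Sketch of dynamic QoS threshold monitoring (metric names and bounds assumed).
QOS_BOUNDS = {                     # (lower bound, upper bound) desired at runtime
    "latency_ms": (None, 10.0),
    "availability": (0.999, None),
    "throughput_mbps": (50.0, None),
}

def violations(measured: dict) -> list:
    """Return the metrics whose runtime value falls outside its bounds."""
    out = []
    for metric, value in measured.items():
        lo, hi = QOS_BOUNDS.get(metric, (None, None))
        if (lo is not None and value < lo) or (hi is not None and value > hi):
            out.append(metric)
    return out

print(violations({"latency_ms": 14.2, "availability": 0.9995, "throughput_mbps": 61}))
# -> ['latency_ms']: a violation would trigger re-orchestration or resource reallocation
```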

14.7.2.4 Proactive Decision‐Making

Localized regions of self-updates become ubiquitous within fog environments. The orchestrator should record the staged states and data produced by fog components, periodically or in an event-based manner. This information forms a set of time series of graphs and facilitates the analysis and proactive recognition of anomalous events so that such hotspots can be determined dynamically [40]. The data and event streams should be efficiently transmitted among fog components, so that system outages, appliance failures, or load spikes are rapidly fed back to the central orchestrator for decision making.

14.7.3 Opportunities in Evaluation Phase: Big‐Data‐Driven Analytics (BD2A) and Optimization

A typical ITS framework congregates the diverse transportation entities into a clique-like structure in the IoT realm and enables a bidirectional flow of energy and data among the stakeholders in order to facilitate asset optimization. The major data sources for a data-driven ITS include ITS-sensing objects such as connected vehicles, on-board sensors (OBS), roadside units (RSUs), traffic sensors and actuators, GPS devices, and ITLs, as well as web data from recommender systems, crowdsourcing, and feedback modules.

Furthermore, the domain of IoT in ITS applications extends to numerous geographically distributed devices that produce multidimensional, high-volume dynamic data streams requiring a judicious mix of real-time analytics and data aggregation [41]. Figure 14.6 depicts the conceptual framework for BD2A and optimization of an intelligent traffic management use case based on cloud and fog platforms. The fog orchestration module should employ efficient data-driven optimization and planning algorithms for reliable data management across complex IoT-aided ITS endpoints.

Schematic diagram depicting conceptual framework for BD2A and optimization of ITLM based on cloud and fog platforms.

Figure 14.6 The conceptual framework for BD2A and optimization of ITLM based on cloud and fog platforms.

While developing ITS applications adapted to fog computing and properly apportioning such applications across the different layers of the fog environment, developers should employ robust optimization procedures that stabilize the schema definitions, mappings, overlaps, and interconnections between layers (if any). In order to reduce data transmission latencies, the data-processing activities and the database services may be pipelined. Rather than frequently triggering move-data actions, the use of multiple data-locality principles (e.g. temporal, spatial) and efficient caching techniques can distribute or reschedule the computation tasks to FCNs near the sensors, thereby reducing delays. The data-relevant attributes related to QoS parameters, such as the data-generation rate or data-compression ratio, can be customized to adapt to the desired degree of performance and the assigned resources, striking a balance between data quality and the specified response-time targets.

A major challenge is that decision operators are still computationally time-consuming. To tackle this problem, online machine learning can provide online training models (such as classification and clustering) and prediction models to capture the constantly evolving behavior of each system element, producing time series of trends that intelligently predict the required system resource usage, failure occurrences, and straggler compute tasks, all of which can be learned from historical data through a history-based optimization (HBO) procedure. Researchers and developers should investigate these smart techniques, with corresponding heuristics applied in an existing decision-making framework to create a continuous feedback loop. Cloud machine learning offers analysts a set of data exploration tools and a variety of choices for using machine learning models and algorithms.
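As a minimal stand-in for such online prediction models, the sketch below uses an exponentially weighted moving average to forecast a fog node's next CPU demand from its history; the smoothing factor and the scale-out interpretation are assumptions, not part of any cited HBO procedure.

```python
# Minimal stand-in for the online prediction models mentioned above: an
# exponentially weighted moving average forecasting the next CPU demand of a
# fog node from its historical samples (alpha is an assumed smoothing factor).
def ewma_forecast(samples, alpha=0.3):
    """Return one-step-ahead forecasts for a stream of utilization samples."""
    forecast = samples[0]
    forecasts = [forecast]
    for x in samples[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
        forecasts.append(forecast)
    return forecasts

cpu_history = [0.42, 0.45, 0.51, 0.63, 0.70, 0.66]
print(ewma_forecast(cpu_history))
# A rising forecast can trigger pre-emptive scale-out of FCN resources before
# the QoS thresholds discussed earlier are actually violated.
```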

14.8 Conclusions

In this chapter, we revisited the need for a data-driven transportation architecture, discussing the functionality of its key components and certain deployment issues associated with it. We then identified the service-critical storage and compute requirements of applications supported over such data-driven transportation architectures, analyzed the current state of cloud deployments, and outlined the need for geo-distributed fog methodologies to fulfill those needs. We also presented a fog computing framework customized to smart transportation applications and highlighted the requirements for fog models through an intelligent traffic light management (ITLM) system use case. The successful deployment of fog models requires an orchestration framework that can simplify maintenance and enhance data security and system reliability. The chapter finally provided an overview of the core issues, challenges, and future research directions in fog-enabled orchestration for smart transportation services in the realm of IoT.

References

  1. Intel Corporation. Designing Next-Generation Telematics Solutions. White Paper, 2018.
  2. B. Varghese, N. Wang, S. Barbhuiya, P. Kilpatrick, and D.S. Nikolopoulos. Challenges and opportunities in edge computing. In Proceedings of the 2016 IEEE International Conference on Smart Cloud (SmartCloud 2016), pp. 20–26, 2016.
  3. O. Skarlat, S. Schulte, and M. Borkowski. Resource provisioning for IoT services in the fog. In 9th IEEE International Conference on Service-Oriented Computing and Applications, November 4–6, 2016, Macau, China.
  4. S. Park, O. Simeone, and S.S. Shitz. Joint optimization of cloud and edge processing for fog radio access networks. IEEE Transactions on Wireless Communications, 15(11): 7621–7632, 2016.
  5. C. Perera, Y. Qin, J.C. Estrella, S. Reiff-Marganiec, and A.V. Vasilakos. Fog computing for sustainable smart cities: A survey. ACM Computing Surveys, 50(3): 1–43, 2017.
  6. M. Chiang and T. Zhang. Fog and IoT: An overview of research opportunities. IEEE Internet of Things Journal, 3(6): 854–864, 2016.
  7. M.M. Hussain, M.S. Alam, and M.M.S. Beg. Computational viability of fog methodologies in IoT-enabled smart city architectures – a smart grid case study. EAI Endorsed Transactions, 2(7): 1–12, 2018.
  8. C. Byers and P. Wetterwald. Fog computing: distributing data and intelligence for resiliency and scale necessary for IoT. ACM Ubiquity Symposium, November 2015.
  9. Z. Wen, R. Yang, P. Garraghan, T. Lin, J. Xu, and M. Rovatsos. Fog orchestration for Internet of Things services. IEEE Internet Computing, 21(2): 16–24, 2017.
  10. N.K. Giang, V.C.M. Leung, and R. Lea. On developing smart transportation applications in fog computing paradigm. In ACM DIVANet'16, November 13–17, Malta, pp. 91–98, 2016.
  11. W. He, G. Yan, and L. Da Xu. Developing vehicular data cloud services in the IoT environment. IEEE Transactions on Industrial Informatics, 10(2): 1587–1595, 2014.
  12. S. Bitam. ITS-Cloud: Cloud computing for intelligent transportation system. In IEEE Globecom 2012 – Communications Software, Services and Multimedia Symposium, California, USA, pp. 2054–2059.
  13. J.M. Sussman. Perspectives on Intelligent Transportation Systems (ITS). New York: Springer-Verlag, 2005.
  14. T. Gandhi and M. Trivedi. Vehicle surround capture: Survey of techniques and a novel vehicle blind spots. IEEE Transactions on Intelligent Transportation Systems, 7(3): 293–308, September 2006.
  15. M.M. Hussain, M.S. Alam, and M.M.S. Beg. Federated cloud analytics frameworks in next generation transport oriented smart cities (TOSCs) – applications, challenges and future directions. EAI Endorsed Transactions on Smart Cities, 2(7), 2018.
  16. J. Zhang, F. Wang, K. Wang, W. Lin, X. Xu, and C. Chen. Data-driven intelligent transportation systems: a survey. IEEE Transactions on Intelligent Transportation Systems, 12(4): 1624–1639, 2011.
  17. X. Hou, Y. Li, M. Chen, et al. Vehicular fog computing: a viewpoint of vehicles as the infrastructures. IEEE Transactions on Vehicular Technology, 65(6): 3860–3873, 2016.
  18. A.O. Kotb, Y.C. Shen, X. Zhu, and Y. Huang. iParker – a new smart car-parking system based on dynamic resource allocation and pricing. IEEE Transactions on Intelligent Transportation Systems, 17(9): 2637–2647, 2016.
  19. O. Scheme. Central Pollution Control Board, Delhi, pp. 1–6, 2016.
  20. X. Wang, X. Zheng, Q. Zhang, T. Wang, and D. Shen. Crowdsourcing in ITS: the state of the work and the networking. IEEE Transactions on Intelligent Transportation Systems, 17(6): 1596–1605, 2016.
  21. Z. Liu, H. Wang, W. Chen, et al. An incidental delivery based method for resolving multirobot pairwised transportation problems. IEEE Transactions on Intelligent Transportation Systems, 17(7): 1852–1866, 2016.
  22. D. Wu, Y. Zhang, L. Bao, and A.C. Regan. Location-based crowdsourcing for vehicular communication in hybrid networks. IEEE Transactions on Intelligent Transportation Systems, 14(2): 837–846, 2013.
  23. M. Tubaishat, P. Zhuang, Q. Qi, and Y. Shang. Wireless sensor networks in intelligent transportation systems. Wireless Communications and Mobile Computing, Wiley InterScience, no. 9, pp. 87–302, 2009.
  24. Federal Highway Administration. Freeway Incident Management Handbook. White Paper. Available: http://ntl.bts.gov/lib/jpodocs/rept_mis/7243.pdf.
  25. M.M. Hussain, M.S. Alam, M.M.S. Beg, and H. Malik. A risk-averse business model for smart charging of electric vehicles. In Proceedings of First International Conference on Smart System, Innovations and Computing, Smart Innovation, Systems and Technologies, 79: 749–759, 2018.
  26. M. Saqib, M.M. Hussain, M.S. Alam, and M.M.S. Beg. Smart electric vehicle charging through cloud monitoring and management. Technology and Economics of Smart Grids and Sustainable Energy, 2(18): 1–10, 2017.
  27. C.-C.R. Wang and J.-J.J. Lien. Automatic vehicle detection using local features – a statistical approach. IEEE Transactions on Intelligent Transportation Systems, 9(1): 83–96, 2008.
  28. L. Bi, O. Tsimhoni, and Y. Liu. Using image-based metrics to model pedestrian detection performance with night-vision systems. IEEE Transactions on Intelligent Transportation Systems, 10(1): 155–164, 2009.
  29. S. Atev, G. Miller, and N.P. Papanikolopoulos. Clustering of vehicle trajectories. IEEE Transactions on Intelligent Transportation Systems, 11(3): 647–657, September 2010.
  30. Z. Sun, G. Bebis, and R. Miller. On-road vehicle detection: a review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(5): 694–711, 2006.
  31. J. Huang and H.-S. Tan. DGPS-based vehicle-to-vehicle cooperative collision warning: engineering feasibility viewpoints. IEEE Transactions on Intelligent Transportation Systems, 7(4): 415–428, 2006.
  32. J.M. Clanton, D.M. Bevly, and A.S. Hodel. A low-cost solution for an integrated multisensor lane departure warning system. IEEE Transactions on Intelligent Transportation Systems, 10(1): 47–59, 2009.
  33. K. Sohn and K. Hwang. Space-based passing time estimation on a freeway using cell phones as traffic probes. IEEE Transactions on Intelligent Transportation Systems, 9(3): 559–568, 2008.
  34. M.M. Hussain, F. Khan, M.S. Alam, and M.M.S. Beg. Fog computing for ubiquitous transportation applications – a smart parking case study. Lecture Notes in Electrical Engineering, 2018 (in press).
  35. T.N. Pham, M.-F. Tsai, D.B. Nguyen, C.-R. Dow, and D.-J. Deng. A cloud-based smart-parking system based on Internet-of-Things technologies. IEEE Access, 3: 1581–1591, 2015.
  36. B.X. Yu and Y. Xue. Smart grids: a cyber-physical systems perspective. Proceedings of the IEEE, 24(5): 1–13, 2016.
  37. E. Baccarelli, P.G. Vinueza Naranjo, M. Scarpiniti, M. Shojafar, and J.H. Abawajy. Fog of everything: energy-efficient networked computing architectures, research challenges, and a case study. IEEE Access, 5: 1–37, 2017.
  38. A. Beloglazov and R. Buyya. Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic consolidation of virtual machines in cloud data centers. Concurrency and Computation: Practice and Experience, 24(13): 1397–1420, September 2012.
  39. H. Zhang, Y. Xiao, S. Bu, D. Niyato, R. Yu, and Z. Han. Computing resource allocation in three-tier IoT fog networks: a joint optimization approach combining Stackelberg game and matching. IEEE Internet of Things Journal, 1–10, 2017.
  40. K.C. Okafor, I.E. Achumba, G.A. Chukwudebe, and G.C. Ononiwu. Leveraging fog computing for scalable IoT datacenter using spine-leaf network topology. Journal of Electrical and Computer Engineering, Hindawi, 1–11, 2017.
  41. J. Gubbi, R. Buyya, S. Marusic, and M. Palaniswami. Internet of Things (IoT): a vision, architectural elements, and future directions. Future Generation Computer Systems, 29(7): 1645–1660, 2013.