Ahmed Chebaane1, Abdelmajid Khelil1, and Neeraj Suri2
1Department of Software Engineering, Landshut University of Applied Sciences, Landshut, Germany
2Department of Computer Science, Lancaster University, UK
Over the past few years, the Internet of Things (IoT) has undoubtedly become an integral part of our daily lives, connecting objects such as vehicles, machines, and products with various users through the Internet.
Cloud computing has been proposed as a promising approach for IoT applications/services to virtualize things and to handle the analytics, storage, and computation of data generated by IoT devices. This approach is viable in many cases, such as executing the application in the cloud to save battery lifetime, offering on-demand data storage to end-users, etc. In the automotive field, cloud computing and IoT support building smart vehicles by facilitating communication across vehicles, infrastructures, and other connected devices, which may improve road safety as well as traffic efficiency [1].
Dew computing, mobile cloud computing (MCC), and vehicular cloud computing (VCC) are variations of cloud computing. When cloud applications assume an active participation of the end devices, along with the cloud, in the execution of services and applications, cloud computing is also referred to as dew computing [2]. MCC [3] has been introduced to combine the mobile computing and cloud computing paradigms in order to overcome the resource constraints of mobile devices. VCC [1] brings the MCC paradigm to vehicular networks by providing public services such as parking systems, traffic information, etc.
However, cloud data centers are geographically highly centralized, resulting in a large (network-hop) distance between the end device/vehicle and the cloud data center and usually leading to unpredictable communication delays [4]. This makes it complicated, if not impossible, to enable time-critical applications with short-lived data across vehicles and clouds. Constraints, failures, and attacks in the heterogeneous computing environment are the typical perturbations of timeliness in VCC. Therefore, data needs to be managed very close to the point of use, i.e. the end devices, in order to enable low-latency applications.
Fog computing was introduced in 2012 by both industry and academia [5, 6] to address the challenge of delay-sensitive IoT applications (Figure 17.1). Fog computing brings cloud computing to the edge of the network. Instead of centrally running analysis, processing, and storage functions in the cloud, they are decentralized and run on gateways/fogs very close to the end-user devices, thus decreasing latency, rendering it more predictable, and preserving data locality and privacy. Accordingly, this network architecture is more suitable for time-critical applications such as vehicle-to-vehicle (V2V), vehicle-to-device (V2D) [7], and vehicle-to-infrastructure (V2I) communication or autonomously driving cars, whose data processing must happen in a delay-sensitive manner. Provided that perturbations are carefully coped with, fog computing is indeed a highly promising architecture to finally turn vehicular networked applications into reality after two decades of long and intensive research.
Edge computing [8] pushes computation/intelligence even closer to the things, i.e. things may play the role of gateways/fogs [5].
We categorize fog computing as either delay-tolerant or delay-critical. Delay-tolerant fog computing addresses distributed applications that may tolerate either high or fluctuating timeliness. Compared to cloud computing, delay-tolerant fog computing still reduces network traffic and gives the data owner more localized control over their own data. The main category of delay-tolerant vehicular applications comprises those that provide entertainment and infotainment for drivers and passengers. Usually, they are not safety-critical and accordingly not delay-critical. In this chapter, we focus on delay-critical vehicular applications such as detection of immediate obstacles on the road, cooperative driving, or platoon driving. We detail the target scenarios and their requirements on fog computing in Section 17.2.
Vehicular fog computing (VFC) [9, 10] has gained significant attention in the last few years. VFC plays a significant role in supporting high mobility, raising computational capability, and decreasing communication latency, which suits delay-sensitive applications well. Static fogs for vehicular applications may be implemented on top of available infrastructure such as traffic lights, traffic signs, street lighting, cellular base stations, toll-collection infrastructure, and bridges. New architectures, known as mobile fog computing, aim to model vehicles as fog nodes for communication and computation, thus integrating fog computing, edge computing, and Vehicular Ad-hoc NETworks (VANETs) into a homogeneous architecture (Figure 17.1).
VANET computing [11] restricts communication to vehicles using short-range communication technologies (such as dedicated short-range communication [DSRC], IEEE 802.11p, and D2D) to enable vehicles to analyze and share information with each other. This information can be safety-relevant, e.g. accident prevention or traffic jams, or general, e.g. position or weather, in order to enhance safety on the road. Compared to fog computing, VANET encompasses only vehicles for computation and for sharing information with neighboring vehicles, which is referred to as V2V [12]; fog computing additionally includes V2I [12] to increase computation capability and information exchange for vehicles via infrastructure elements, i.e. fog nodes.
The interworking of cloud, fog, and VANET computing is illustrated in Figure 17.1. This interworking will be the common architecture to address all vehicular applications. However, delay-critical applications will rely on VANET, fog computing, or a combination of both. In the latter case, one may consider a vehicle as either an end device or a mobile fog. Accordingly, mobile fog computing is an emerging architecture, where fogs may be mobile.
Though there are surveys on fog computing [4, 13, 14] and several recent papers presenting the VFC architectures, algorithms, etc. for enabling delay-sensitive applications, there is no survey of this emerging field. This chapter aims at filling this gap and presenting a comprehensive survey.
In this chapter, we comprehensively survey the literature on delay-critical fog-based vehicular applications. Nonetheless, we also briefly survey the fog computing support for other delay-critical application domains such as smart grid, Industry 4.0, and IoT, while pointing to the potential of adopting the available techniques in the field of vehicular networks.
In Section 17.2, we present the focused applications and their timeliness requirements. In addition, we survey the key perturbations that hinder fulfilling these timeliness requirements. In Section 17.3, we introduce the existing research works to cope with the perturbations in order to meet the timeliness requirements. In Section 17.4, we address the research gaps identified in this survey and provide future research directions. In Section 17.5, we conclude the chapter.
In this section, we present various scenarios of time-critical applications and their timeliness requirements. Next, we survey the perturbations that may complicate meeting the desired deadline.
We first detail a representative application scenario, i.e. obstacle detection that clearly shows the need for delay-critical fog computing in vehicular networks. Next, we survey a broad class of application scenarios that emphasize these needs.
Consider an (autonomous) vehicle driving at 200 km h−1 on the highway when suddenly an obstacle appears on the road. Detecting the type of obstacle and its velocity is a prerequisite for suitable decision making. For example, depending on whether the obstacle is a human being, a large object, or something harmless, and depending on the surrounding vehicles, the vehicle should ignore, avoid, or drive over it. This application could execute directly on the on-board computers of premium modern vehicles. However, most vehicles will not be able to do the processing on-board while meeting the deadline due to limited resources. For this purpose, the application (and its data) should be partitioned, and selected parts of it should be transmitted with minimal delay and highest reliability to the surrounding fog infrastructure. A cloud solution is not suitable because the latency of the data transfer is not deterministic and may become intolerable (Figure 17.2).
Usually, fogs from the same provider are (directly) interconnected and can exchange data, balance loads among each other, and execute similar measures for reliable and delay-aware computing. When data is processed, the result should be transmitted back to the initiator to take appropriate decisions. For instance, if the obstacle is a human being, an immediate collision avoidance maneuver must be initiated. If it is just a harmless object, such as a tire part, it can be run over. Thus, nobody is endangered by an unnecessary lane change maneuver. This gained knowledge may be then communicated to other affected vehicles, such as those immediately following the considered vehicle.
In order to make this scenario possible, data processing and distribution must not violate the tolerable delay. At a speed of 200 km h−1, a vehicle covers a distance of about 50 m in 900 ms. Therefore, the decision must be taken within at most 90 ms, which is the time available to execute the application. Accordingly, we refer to such applications as short-lived ones.
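The arithmetic above can be checked with a few lines, assuming straight-line travel at constant speed:

```python
# Distance covered by a vehicle during a given time window, used to sanity-check
# the deadline budget of the obstacle-detection scenario above.

def distance_covered_m(speed_kmh: float, window_ms: float) -> float:
    """Distance in meters covered at speed_kmh during window_ms milliseconds."""
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    return speed_ms * (window_ms / 1000.0)

# At 200 km/h the vehicle covers ~50 m in 900 ms ...
print(round(distance_covered_m(200, 900), 1))  # -> 50.0
# ... and ~5 m during the 90 ms decision window.
print(round(distance_covered_m(200, 90), 1))   # -> 5.0
```

In other words, even within the 90 ms budget the vehicle travels several meters, which is why every hop of communication and computation delay matters.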
The aforementioned representative application scenario clearly motivates the necessity of fog computing support to provide for such crucial applications. In the following, we illustrate further delay-critical application classes that require fog computing support.
The cooperative perception class is based on exchanging/fusing data from different sensor sources, vehicles, and/or infrastructures using wireless networks in order to cooperatively perceive an important context. Fusing this information can be treated as a map-merging problem. See-through, lifted-seat, and satellite view are some use cases from the cooperative perception class [15].
The cooperative driving class enables maneuvers to review, share, plan, coordinate, and apply information concerning driving trajectories among vehicles in a safe way, including negotiation and optimization of trajectories. Possible use cases in this class are lane change warning, lane merge, etc. [15].
The cooperative safety class addresses the presence of vulnerable road users (VRUs), where affected vehicles and/or infrastructure entities should exchange the VRU information to improve safety on the road. The acquired VRU information is processed and analyzed by the on-board unit of the vehicles or by an external system. The generated alert message is transmitted to the drivers or to the autonomous driving system to take applicable and corrective decisions in order to provide safety. Obstacle detection, collision warning, network-assisted vulnerable pedestrian protection, and bike driver protection are possible scenarios in this class [15].
The autonomous navigation class targets the building of self-governing, real-time, intelligent high-definition maps of the surrounding area. Precisely, the information comes from cooperative perception and a well-defined map that provides accurate and optimal performance in achieving autonomous navigation, e.g. high-definition local map acquisition [15].
The autonomous driving class enables self-driving vehicles through wireless communication that allows the control of major vehicle components from outside the vehicle to facilitate remote driving, which requires information from the perception layer and the infrastructure. An example use case is self-driving in the city [15].
We now follow an application model that is commonly used in fog computing as well as in other distributed embedded systems communities. An application is represented as a directed acyclic flow graph of tasks [16]. The edges specify data dependencies between tasks. We differentiate two kinds of tasks: execution tasks and communication tasks. An execution task is a composite of code and data. A communication task is the transmission of an execution task from one node to another on a certain communication path.
An application has a certain priority that applies to all its tasks. Each task has specified execution times on selected computing nodes. An application has a timeliness requirement, which is usually based on executing the entire application while meeting a certain deadline. An application deadline is the maximum tolerable delay. The root task is executed on the application initiator (the vehicle that starts the application). The rest of the tasks can be either executed locally or on surrounding fogs. A task is usually represented by an application container along with its dependencies, the task execution time, and priority.
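A minimal sketch of this task-graph model may look as follows; the task names, execution times, and the example flow are illustrative assumptions, not taken from a specific application:

```python
from dataclasses import dataclass, field

# Sketch of the application model described above: a directed acyclic graph of
# execution tasks, where edges encode data dependencies between tasks.

@dataclass
class Task:
    name: str
    exec_time_ms: float                         # execution time on a selected node
    deps: list = field(default_factory=list)    # predecessor task names

def topological_order(tasks: dict) -> list:
    """Return task names in an order that respects all dependencies."""
    order, visited = [], set()
    def visit(name):
        if name in visited:
            return
        visited.add(name)
        for dep in tasks[name].deps:
            visit(dep)
        order.append(name)
    for name in tasks:
        visit(name)
    return order

# Obstacle-detection style flow: sense -> (classify, localize) -> decide
app = {
    "sense":    Task("sense", 5),
    "classify": Task("classify", 30, deps=["sense"]),
    "localize": Task("localize", 20, deps=["sense"]),
    "decide":   Task("decide", 10, deps=["classify", "localize"]),
}
print(topological_order(app))  # -> ['sense', 'classify', 'localize', 'decide']
```

The root task ("sense") runs on the initiator vehicle; any task whose dependencies are satisfied may then be placed either locally or on a surrounding fog.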
In order to efficiently deploy an application while meeting its timeliness requirements, we usually need a set of middleware building blocks. The goal of these building blocks is to find an assignment of tasks to nodes, and communication tasks to communication links. The key building blocks are resource monitoring and task scheduling (Section 17.2.5).
A timeliness guarantee is a fundamental quality level for providing a service delivery that satisfies the application quality of service (QoS) requirements. We identify three main timeliness guarantee levels in the literature: hard real-time (RT), soft RT, and firm RT [17] requirement classes. We survey explicitly the existing efforts to address these requirements.
A hard RT application is defined as follows: any delay in completing the application execution beyond its deadline means system failure, which can lead to catastrophic damage on the road and a violation of safety requirements. Hard RT scheduling uses a preemptive approach to prioritize tasks.
A soft RT application is tolerant of the deadline, based on three requirement types: the number of deadline misses in an interval of time, tardiness, and probabilistic bounds [18]. Soft RT allows the system to miss the deadline, even repeatedly, as long as the tasks are performed correctly. In this case, the result is still useful for the end-user, but its utility degrades after the desired deadline has passed. Soft RT scheduling uses a nonpreemptive approach to prioritize tasks.
A firm RT application [19] tolerates skipping some tasks while still meeting the deadline [20] (also known as weakly hard RT). Unlike soft RT, a firm RT application is not considered to have failed when it misses the deadline, but the result of the request is then useless.
In summary, soft RT is lenient with the deadline, and the result remains useful after missing it. In contrast, for hard RT applications, missing the deadline may lead to catastrophic damage. Firm RT lies between soft RT and hard RT: it is strict with the deadline, so the result is useless, but no harm happens when the deadline is missed. It is noteworthy that RT systems require clock synchronization across multiple networked entities. In vehicular networks, we assume vehicles and fogs are equipped with GPS receivers and therefore all clocks are synchronized with the GPS global clock.
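The three classes can be contrasted as result-utility functions of the completion time, a common textbook formalization; the linear decay used for soft RT is an illustrative assumption:

```python
# Result utility as a function of completion time t (ms) relative to the
# deadline d (ms), for the three timeliness classes discussed above.

def hard_rt_utility(t_ms, deadline_ms):
    # Missing the deadline is a system failure (modeled as -inf utility).
    return 1.0 if t_ms <= deadline_ms else float("-inf")

def firm_rt_utility(t_ms, deadline_ms):
    # A late result is useless, but causes no harm.
    return 1.0 if t_ms <= deadline_ms else 0.0

def soft_rt_utility(t_ms, deadline_ms, decay_ms=100.0):
    # A late result remains useful; its utility degrades with tardiness.
    if t_ms <= deadline_ms:
        return 1.0
    return max(0.0, 1.0 - (t_ms - deadline_ms) / decay_ms)

print(soft_rt_utility(150, 100))  # -> 0.5 (50 ms late, half the utility left)
print(firm_rt_utility(150, 100))  # -> 0.0 (late result is useless)
```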
We now benchmark the application classes with respect to their timeliness guarantee requirements. As illustrated in Table 17.1, most of the applications require RT communication and computation, but depending on the concrete application scenario and context, various RT classes may be required. For example, autonomous navigation and cooperative perception tolerate passing the deadline, so they belong to the soft RT application class. Cooperative safety and autonomous driving require meeting the deadline. Because of the critical nature of the situation, a deadline of a few tens of milliseconds needs to be met in order to avoid fatal damage. Consequently, they are hard RT class. Cooperative driving mostly has firm RT requirements due to the necessity of getting the result within the deadline in order to enhance safety on the road, but nothing critical happens when the execution time exceeds the deadline.
Table 17.1 Benchmarking of application classes.
| App class | Possible scenario | Need for fog computing | RT class | Timeliness guarantees (deadline) | Fog architecture |
|---|---|---|---|---|---|
| Cooperative driving | Lane change warning; lane merge | Timely communication; online and offline analysis; privacy preservation, authenticity, and integrity; information sharing among V2V and V2I to enhance the QoS and enable the integration of legacy vehicles for computation | Firm RT requirements | Few 100–1000 ms | VANET; fog computing |
| Cooperative safety | Neighbor collision warning; obstacle detection; network-assisted vulnerable pedestrian protection | High computation to process the presence of VRUs at ultra-low latency; real-time communication; security and reliability; delay-critical deadline; information sharing among V2V and V2I | Hard RT requirements | Few 10 ms | VANET; fog computing; locally (on-board) |
| Cooperative perception | See-through; lifted-seat; satellite view | Capacity (to analyze and localize the detected object); heterogeneity among vehicles (computing and communication); sharing information about localization and relative position requires communication through the infrastructure in addition to V2V communication | Soft RT requirements | Few seconds | VANET; fog computing; cloud computing |
| Autonomous driving | Self-driving in the city | Very low latency in communication and computation; almost 100% reliability; efficient security; information sharing among V2V and V2I | Hard RT requirements | Few 10 ms | VANET; fog computing; locally (on-board) |
| Autonomous navigation | High-definition local map acquisition | Centralized and decentralized computation in real time; real-time distribution of map information | Soft RT requirements | Few seconds | Fog computing; cloud computing |
To guarantee safety and satisfactory service delivery for all these application scenario classes, fog computing is a suitable computing architecture that enables very low communication and computation delays among vehicles and infrastructures. If needed and possible, fog computing can be seamlessly integrated with cloud computing, which can support delay-tolerant applications such as smart parking with higher computational capability.
Fog computing plays a fundamental role in the cooperative driving class: it can improve cooperation among drivers and enhance safety on the road by applying VANET to coordinate with the infrastructure in order to analyze and exchange information close to the vehicles.
To ensure real-time computation in a distributed, mobile, and heterogeneous infrastructure, fog computing is considered a viable solution for vehicular applications and services [21]. Resource monitoring, scheduling, RT computation, and RT communication are the key functional blocks to ensure that service delivery deadlines are met while respecting the QoS of the system [22].
Resource monitoring creates awareness of currently and prospectively available network and computation resources. Accordingly, it is fundamental for task scheduling. The network resource monitoring collects important network indicators, such as available channels and their bandwidth, at a current or future vehicle location. The computational resource monitoring maintains indicators concerning the available processing and storage resources over time. An integral part of monitoring is to consider the impact of failures on the availability of the resources.
The resource monitoring function usually is distributed across multiple entities. It is hard to have one central entity that monitors all available resources in a multitenant vehicle environment.
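A minimal sketch of such a distributed monitoring record follows, assuming each node periodically advertises its indicators with a short validity window; the field names and TTL value are illustrative assumptions, not from a specific framework:

```python
import time

# Sketch of a resource-monitoring record: a fog node advertises network and
# compute indicators, and a scheduler discards entries that have become stale.

class ResourceRecord:
    def __init__(self, node_id, cpu_free_pct, bandwidth_mbps, ttl_s=2.0):
        self.node_id = node_id
        self.cpu_free_pct = cpu_free_pct        # available processing capacity
        self.bandwidth_mbps = bandwidth_mbps    # available link bandwidth
        self.ttl_s = ttl_s                      # short validity window, since
                                                # vehicular resources change fast
        self.timestamp = time.monotonic()

    def is_fresh(self):
        return time.monotonic() - self.timestamp < self.ttl_s

records = [ResourceRecord("rsu-7", 60.0, 100.0),
           ResourceRecord("vehicle-42", 25.0, 20.0)]
usable = [r.node_id for r in records if r.is_fresh()]
print(usable)  # both records are fresh immediately after creation
```

The short TTL reflects the high dynamics of vehicular environments: an indicator measured a few seconds ago may already be obsolete once the vehicle has moved on.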
Task scheduling targets an effective planning of the application tasks depending on the available resources and the application requirements. The requirements specify the task dependencies, task priority, and the deadline. The resulting schedule is given as an assignment of starting times to every task and communication activity. Automatic rescheduling is also adopted to handle fog nodes entering and leaving the vehicle's vicinity. The scheduler should guarantee the deadline given the existing resources.
Similar to resource monitoring, the scheduler functionality is usually shared among multiple nodes.
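The scheduling building block can be sketched as a greedy earliest-finish-time assignment; task dependencies are omitted for brevity, and all task sizes, node names, and delays are illustrative assumptions:

```python
# Greedy scheduler sketch: assign each task (highest priority first) to the
# node with the earliest estimated finish time, then check the deadline.

def schedule(tasks, nodes, deadline_ms):
    """tasks: list of (name, exec_time_ms), highest priority first.
    nodes: dict node -> time (ms) at which the node becomes free.
    Returns (assignment, makespan_ms, meets_deadline)."""
    assignment = {}
    for name, exec_ms in tasks:
        # Pick the node on which this task would finish earliest.
        node = min(nodes, key=lambda n: nodes[n] + exec_ms)
        start = nodes[node]
        nodes[node] = start + exec_ms
        assignment[name] = (node, start)
    makespan = max(nodes.values())
    return assignment, makespan, makespan <= deadline_ms

tasks = [("classify", 30), ("localize", 20), ("decide", 10)]
nodes = {"onboard": 0.0, "fog-1": 5.0}   # fog-1 incurs a 5 ms network delay
plan, makespan, ok = schedule(tasks, nodes, deadline_ms=90)
print(makespan, ok)  # -> 35.0 True
```

A production scheduler would additionally respect the task dependency graph and reschedule when the monitored resource pool changes, as described above.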
RT task computation requires careful resource management on the node executing a task (Section 17.3.1.2) in order to ensure the overall RT computation of the complete application across vehicles and fog nodes. At the node level, this is a well-investigated topic in the literature. At the application level, an RT scheduler is indispensable.
RT communication plays an effective role in assuring application-level RT computation in fog computing by selecting a suitable fog link to distribute communication tasks, i.e. to off-load/migrate execution tasks for processing within convenient networking delays, which enables RT communication between sensors/fog nodes and other fog nodes in vehicular networks. Ge et al. [23] implement a 5G SDN vehicular network paradigm to improve latency-aware communication. Therefore, good management of the network resources (Section 17.3.1.1) can enhance RT communication in VFC.
A communication task usually consists of migrating a task/container such as Docker [24] from the initiator vehicle to the selected fog nodes such as vehicles and roadside units (RSU), while keeping a high fidelity of the applications.
Docker enables application virtualization through containers. These containers combine individual application parts (tasks) together with all necessary auxiliaries. Merkel [25] therefore refers to containers as lightweight virtualization.
On the other hand, perturbations can break communication or computation between vehicles and/or infrastructures, which is a serious problem in scenarios that need ultra-low latency in communication and computation. We elaborate on timeliness perturbations in the subsequent section.
After defining the applications, their latency requirements, and the building blocks that allow fulfilling these requirements, we now survey the perturbations, i.e. constraints, failures, and threats, that complicate the design of delay-critical VFC (Figure 17.3).
High mobility and strong heterogeneity of nodes and links obviously complicate monitoring and scheduling, and thus the fulfillment of the timeliness requirements.
Communication failures usually lead to delays in the distributed application execution and subsequently to the violation of timeliness requirements.
We now survey the available research efforts to cope with the perturbations and still efficiently meet the timeliness requirements despite the perturbations.
In VFC, network management includes vertical federation, which depends on physical partitioning (e.g. 5G, LTE, WiFi, 802.11p) [15, 28–35], and horizontal federation, which depends on time partitioning and bandwidth (e.g. slicing, software-defined networking [SDN] to manage vehicular neighbor groups [VNGs], and network function virtualization [NFV]) [30, 35, 36].
Mobility of vehicles and fogs as well as the heterogeneity of network nodes and links result in continuously changing network resources. Accordingly, efficient network resource management is indispensable for VFC. In order to enable seamless handover among different fog nodes, Bao et al. [31] develop a follow-me fog (FMF) framework to reduce the latency of the handover scheme in fog computing. In addition, Palattella et al. [37] describe the connectivity and security gaps of handover in vehicle-to-everything (V2X) communication and propose a proactive cross-layer, cross-terrestrial-technology, and cross-slice handover approach based on fog computing, including 5G, to achieve zero-latency handover in the vehicular network. For security, they aim to enable quick authentication and re-authentication handover based on SDN and fog. A detailed overview of this research field as well as a proposal for proactive handover can be found in [38].
From the point of view of an application initiator (root task), mobility and heterogeneity lead to a permanently changing pool of available and useful computational resources. Accordingly, efficient resource management is crucial for VFC. The selected resource management technique has an effective impact on the organization and optimization of resource allocation among fog nodes, ensuring the QoS and minimizing execution time and cost. Scalability in fog computing enables both horizontal and vertical extensibility of fog resources [39] to cover the regularly high resource demands of vehicular networks. Resource management involves computation management and data management, as shown in Figure 17.4. As defined in Section 17.2, we survey the existing literature that we judge useful/applicable for VFC.
Task migration/offloading is a crucial technique to migrate a virtual machine (VM) or container from one fog node to another. Yao et al. [52] introduce a roadside cloudlet (RSC) that enables VM migration in VCC to improve the response time and to reduce network and VM migration costs during vehicle movement. Machen et al. [53] compare the performance of containers and VMs in their proposed layered migration framework for mobile edge clouds. Additional related research on service migration focuses on containers in mobile fog computing [54–56]. Farris et al. [57] aim to enhance the proactive migration of latency-aware applications in MEC by providing two integer linear programming optimization schemes to guarantee the desired quality of experience (QoE) and decrease the cost of proactive replication. Wang et al. [58] provide a relevant survey on service migration in MEC.
Off-loading sends tasks from the application initiator to other fogs in order to reduce energy consumption and service delays. This technique is typically used in MCC, e.g. on smartphones. Wu et al. [59] develop a task off-loading strategy in VFC based on their proposed Direction-based Vehicular Network Model (DVNM) to off-load tasks among vehicles and RSUs. Zhang et al. [60] develop an efficient code partition algorithm for MCC based on depth-first search and a linear time-searching scheme to find convenient points in a call sequence for off-loading and integration; this contribution could be adapted effectively to VFC. Zhou et al. [61] provide a good survey that explains data off-loading techniques for V2V, V2I, and V2X through VANET.
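The basic trade-off underlying these off-loading strategies can be sketched as a simple latency comparison: off-load a task when remote execution plus data transfer beats local execution. All parameter values below are illustrative assumptions:

```python
# Off-loading decision sketch: compare local execution time against
# transfer time plus remote execution time on a fog node.

def should_offload(task_mi, data_mb,
                   local_mips, fog_mips, uplink_mbps, rtt_ms):
    """task_mi: task size in million instructions; data_mb: payload to transfer.
    Returns True when off-loading is estimated to be faster than local execution."""
    local_ms = task_mi / local_mips * 1000.0
    tx_ms = data_mb * 8.0 / uplink_mbps * 1000.0 + rtt_ms   # transfer + RTT
    remote_ms = task_mi / fog_mips * 1000.0
    return tx_ms + remote_ms < local_ms

# A 200-MI vision task with 0.5 MB of features: the fog is 10x faster,
# so off-loading wins (90 ms transfer + 20 ms remote < 200 ms local).
print(should_offload(task_mi=200, data_mb=0.5,
                     local_mips=1000, fog_mips=10000,
                     uplink_mbps=50, rtt_ms=10))  # -> True
```

Energy-aware variants weigh transmission energy against local CPU energy in the same way; the structure of the decision remains a comparison of two cost estimates.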
After processing the application correctly, the application initiator decides where to store the results/data. Most researchers address optimizing the data for storage and then selecting the data that should be stored on the vehicle or in cloud data centers [67, 68].
Fault tolerance is a fundamental means to cope with failures. Some related research [69–72] addresses fault tolerance in fog computing to meet the timeliness guarantees. Kopetz et al. [73] implement a fault-tolerance technique in VFC for real-time applications that improves the allocation of time-triggered virtual machines (TTVMs) on different fog node systems.
In addition, fault tolerance is also highlighted for connected vehicles to limit the constraints of the message delivery infrastructure. Du [74] designs a distributed message delivery system by developing a prototype infrastructure using Kafka [75]. The results show that the proposed prototype for connected vehicle applications is highly scalable and fault tolerant and can deliver a large number of messages in parallel within a short time.
Our literature survey has shown that the following aspects are insufficiently addressed in the literature.
Fog mobility is considered among the most crucial challenges in fog computing. Existing contributions usually consider static fog nodes and mobile or fixed user devices. Supporting mobility of fog nodes largely remains an open challenge due to the complexity of resource and network management. In particular, in vehicular fog environments, mobility is very high and further complicates application and system design. Accordingly, an efficient paradigm that can cover a wide range of scenarios and ensure mobility-aware management and coordination among mobile fog nodes is urgently needed.
Resource management, i.e. provisioning, virtualization, selection, and scheduling of resources, needs to be revised and optimized to cope with the mobility of fog nodes, the continuously changing network topology, and the fluctuating resource availability.
Service level agreements (SLAs) are widely investigated for cloud computing; however, research efforts are still needed to adopt them for mobile fog computing with strict timeliness and bandwidth guarantees. This challenge should be addressed by defining and designing metrics and SLA enforcement techniques that are suitable for mobile fog computing. In addition to SLAs, security LAs (SecLAs) represent a real challenge for VFC due to the high mobility and geographical distribution of vehicles, which require privacy preservation, critical-data protection, and fast authentication.
In this chapter, we illustrated the different application scenario classes and their timeliness requirements. Next, we presented the different perturbations that complicate the design of delay-critical VFC. Then, we surveyed the literature on network management, resource management, security, and fault tolerance to cope with perturbations and to guarantee the timeliness requirements. For further investigation of delay-critical VFC, we pointed out key research gaps and challenges that require deep research attention.