List of Figures

Fig. 1.1 Simple architecture model of a cellular network and terminology employed (cell, terminal, base station, coverage area). 10
Fig. 1.2 Network formation tradeoff: cost versus benefit of collaboration. For the network tradeoff, the total cost and total gain, summed over all entities, are considered. 17
Fig. 2.1 Examples of node infection models of interest. 38
Fig. 3.1 Simple epidemic model: SI infection paradigm for each member of the population. 42
Fig. 3.2 Simple epidemic model: Percentage of infected hosts as a time function. 43
Fig. 3.3 Kermack-McKendrick: Underlying infection model. 44
Fig. 3.4 State transitions in the two-factor spreading model. 47
Fig. 3.5 Two-factor model: numbers of infected and removed hosts. 49
Fig. 3.6 General epidemics infection model—state transition diagram. 56
Fig. 4.1 Mapping of the malware diffusion problem to the behavior of a queuing system. The shaded nodes are susceptible legitimate neighbors of node i. The colored nodes are either malicious nodes or already infected legitimate neighbors of i. Node i itself is currently susceptible. 66
Fig. 4.2 Closed queuing systems modeling malware diffusion over a wireless SIS network. 68
Fig. 4.3 The Norton equivalent model for malware propagation in communications networks. The figure shows the instance where k nodes are currently infected. 70
Fig. 4.4 State diagram for the analysis of the two-queue closed network and for obtaining the expression of the steady-state distribution. 73
Fig. 4.5 Probability of no infected nodes π_I(0). 77
Fig. 4.6 Probability of all nodes infected π_I(N). 77
Fig. 4.7 Average number of infected nodes E[L_I] versus λ/μ. 78
Fig. 4.8 Average number of infected nodes E[L_I] versus the number of legitimate nodes N and malicious nodes M. 79
Fig. 4.9 Average throughput of the noninfected queue E[γ_S] versus the number of legitimate nodes N and malicious nodes M. 80
Fig. 4.10 Average throughput of the noninfected queue E[γ_S] versus the infection rate λ and recovery rate μ. 81
Fig. 4.11 Norton equivalent of the closed queuing network model for a propagative system. Compared to Fig. 4.3, there is a difference in the infection rate due to the impact of the attacker. 82
Fig. 4.12 Probability of zero nodes infected π_I(0) (accurate vs. approximated). 85
Fig. 4.13 Probability of all nodes infected π_I(N) versus λ/μ. 86
Fig. 4.14 Probability of all nodes infected π_I(N) versus R. 87
Fig. 4.15 Average number of infected nodes E[L_I] versus λ/μ. 88
Fig. 4.16 Average number of infected nodes E[L_I] versus R. 89
Fig. 4.17 Average number of infected nodes E[L_I] versus N. 90
Fig. 4.18 Average throughput of the noninfected queue E[γ_S] versus λ. 91
Fig. 4.19 Average throughput of the noninfected queue E[γ_S] versus R. 92
Fig. 4.20 Average throughput of the noninfected queue E[γ_S] versus N. 93
Fig. 4.21 State-transition diagram for legitimate nodes in a network with churn. 95
Fig. 4.22 Queuing models for malware spreading in networks with churn. 97
Fig. 4.23 Percentage of susceptible and infected nodes versus network infection/recovery strength and comparison with networks with no churn, for complex networks. 102
Fig. 4.24 Expected percentage of susceptible, infected, and recovering nodes versus infection/recovery strength (simulation) for complex networks with 400 and 800 initial nodes. 103
Fig. 4.25 Percentages of susceptible and infected nodes as a function of infection to recovery strength (numerical) for wireless distributed (multihop) networks. 104
Fig. 4.26 Expected number of nodes in each state of a wireless distributed (multihop) network with respect to network density. 105
Fig. 4.27 Expected number of nodes in each state of a wireless distributed (multihop) network with respect to infection/recovery rates. 106
Fig. 4.28 Expected percentage variation of the total number of nodes with respect to node density and infection/recovery strength for wireless distributed (multihop) networks. 106
Fig. 5.1 Random Field (RF) terminology over a random network of n + 1 sites and three phases. 109
Fig. 5.2 Examples of complex network topologies of interest. 116
Fig. 5.3 Examples of the neighborhood of the dark blue shaded (black in print versions) node (site) s in topologies of interest. 118
Fig. 5.4 SIS malware-propagative chain network and MRF notation. 119
Fig. 5.5 Steady-state system distributions for T/J = −0.2. 123
Fig. 5.6 Expected number of infected nodes. 124
Fig. 5.7 Lattice network and MRF malware diffusion model notation. 126
Fig. 5.8 ER random networks and malware modeling MRFs. 130
Fig. 5.9 MRF malware diffusion modeling for WS SW networks. 131
Fig. 5.10 MRF malware diffusion modeling for SF networks. 132
Fig. 5.11 MRF malware diffusion modeling for random geometric (multihop) networks. 133
Fig. 5.12 Scaling of percentage of infected nodes with respect to network density: the sparse network regime. 135
Fig. 5.13 Scaling of percentage of infected nodes with respect to network density: the moderate-density regime. 135
Fig. 5.14 Scaling of percentage of infected nodes with respect to network density: the dense network regime. 136
Fig. 6.1 Transitions: S, I, R, and D respectively represent the fractions of susceptible, infective, recovered, and dead nodes. v(t) is the dynamic control parameter of the malware. 144
Fig. 6.2 Evaluation of the optimal controller and the corresponding states as functions of time. The parameters are: time horizon T = 10, initial infection fraction I_0 = 0.1, contact rate β = 0.9, instantaneous reward rate of infection for the malware f(I) = 0.1I, and reward per killed node κ = 1. We have also taken Q(S,I) = 0.2, with B(S,I) = 0 in the left figure and B(S,I) = 0.2 in the right figure. That is, in the left figure patches can only immunize the susceptible nodes, whereas in the right figure the same patch can also remove an existing infection and immunize the node against future infections. We can see that when patching can recover the infective nodes too (right figure), the malware starts the killing phase earlier. This makes sense, since deferring the killing in the hope of finding a new susceptible node is now much riskier. 149
Fig. 6.3 The jump (up) point of the optimal v, i.e. the starting time of the slaughter period, for different values of the patching and recovery rates. For both curves, we have taken the recovery rate of the susceptible nodes, i.e. Q(S, I), as γ, and the recovery rate of the infective nodes, i.e. B(S, I), once as zero and once as equal to Q(S, I), where γ is varied from 0.02 to 0.7 in steps of 0.02. The rest of the parameters are f(I) = 0.1I, κ = 1, T = 10, β = 0.9, and I_0 = 0.1. Note that when B(S,I) = γ, then for γ ≥ 0.6 the malware starts killing the infective nodes from time zero. 150
Fig. 7.1 State transitions. u_Ni(t) and u_Nr(t) are the control parameters of the network, while u_M(t) is the control parameter of the malware. 159
Fig. 7.2 State evolution and saddle-point strategies. The parameters of the game are as follows: κ_I = 10, κ_D = 13, κ_u = 10, κ_r = 5, K_I = K_D = 0, β_2 = β_1 = β_0 = 4.47, π = 1, initial fractions I_0 = 0.15, R_0 = 0.1, D_0 = 0, and T = 4. 167
Fig. 9.1 Methodology for studying optimal attacks. 183
Fig. 9.2 Zero-level contours of g(x_1, x_2). 187
Fig. 9.3 Optimal E[L_I] versus λ/μ. 188
Fig. 9.4 Optimal E[L_I] versus N. 189
Fig. 9.5 Optimal E[γ_S] versus (λ, μ). 190
Fig. 9.6 Optimal E[γ_S] versus N. 191
Fig. 9.7 Contemporary wireless complex communication network architecture depicting all the considered and converged types of networks, including interconnections to wired backhauls. 194
Fig. 9.8 IDD in regular lattice (HoMPC; p_w = 0), ER (HoMEC; p_w = 1), SF (HeMUC), and SW (HoMUC) networks for different p_w, with mean degree equal to 10, λ = 0.01, and N = 2500. 203
Fig. 9.9 IDD in a dynamic MANET with R = 2 m, λ = 1, and k̄ = 2.51, 1.26, 0.63, and 0.13, respectively. 206
Fig. 9.10 The IDD in wireless complex networks (cyber-physical systems) exhibiting both long-range and broadcast dissemination patterns. 208
Fig. 9.11 IDD in hybrid (HoMEC and HoMPC) complex networks propagating information in both delocalized and broadcast fashions, where k̄_e = 6, k̄_b = 3, and λ = 0.05. 209
Fig. 9.12 Average number of infected users of the legitimate network E[L_I] as a function of λ/μ. 212
Fig. 9.13 Average number of infected nodes E[L_I] as a function of N (numerical result). 213
Fig. B.1 A generic independent queuing system. 236
Fig. B.2 Graphical representation of the relation between the arrival and departure counting processes, and visual explanation of Little's law. 239
Fig. B.3 State diagram for the birth-death process. 243
Fig. B.4 Two queues in tandem. 249
Fig. B.5 A simple two-queue closed network. 251
Fig. B.6 State diagram of a two-queue closed queuing network with state-dependent service rates. 253
Fig. C.1 Analogy between functions, functionals, and extreme values. 260