Chapter 10
A Lightweight Dynamic Optimization Methodology for Embedded Wireless Sensor Networks*

Advancements in semiconductor technology, as predicted by Moore's law, have enabled high transistor density in a small chip area resulting in the miniaturization of embedded systems (e.g., embedded sensor nodes). Embedded wireless sensor networks (EWSNs) are envisioned as distributed computing systems, which are proliferating in many application domains (e.g., defense, health care, surveillance systems), each with varying application requirements that can be defined by high-level application metrics (e.g., lifetime, reliability). However, the diversity of EWSN application domains makes it difficult for commercial off-the-shelf (COTS) embedded sensor nodes to meet these application requirements.

Since COTS embedded sensor nodes are mass-produced to optimize cost, many COTS embedded sensor nodes possess tunable parameters (e.g., processor voltage and frequency, sensing frequency), whose values can be tuned for application specialization [69]. EWSN application designers (those who design, manage, or deploy the EWSN for an application) are typically biologists, teachers, farmers, and household consumers who are experts within their application domain but have limited technical expertise. Given the large design space and operating constraints, determining appropriate parameter values (an operating state) can be a daunting and/or time-consuming task for nonexpert application managers. Typically, embedded sensor node vendors assign initial generic tunable parameter value settings; however, no one tunable parameter value setting is appropriate for all applications. To assist EWSN managers with parameter tuning to best fit the application requirements, an automated parameter tuning process is required.

Parameter optimization is the process of assigning appropriate (optimal or near-optimal) tunable parameter value settings to meet application requirements. Parameter optimizations can be static or dynamic. Static optimizations assign parameter values at deployment, and these values remain fixed for the lifetime of the sensor node. One of the challenges associated with static optimizations is accurately determining the tunable parameter value settings using environmental stimuli prediction/simulation. Furthermore, static optimizations are not appropriate for applications with varying environmental stimuli. Alternatively, dynamic optimizations assign (and reassign/change) parameter values during runtime enabling the sensor node to adapt to changing environmental stimuli, and thus more accurately meet application requirements.

EWSN dynamic optimizations present additional challenges as compared to traditional processor or memory (cache) dynamic optimizations because sensor nodes have more tunable parameters and a larger design space. The dynamic profiling and optimization (DPOP) project aims to address these challenges and complexities associated with sensor-based system design through the use of automated optimization methods [321]. The DPOP project has gathered dynamic profiling statistics from a sensor-based system; however, the parameter optimization process has not been addressed.

In this chapter, we investigate parameter optimization using dynamic profiling data already collected from the platform. We analyze several dynamic optimization methods and evaluate algorithms that provide a good operating state without significantly depleting the battery energy. We explore a large design space with many tunable parameters and values, which provides a fine-grained design space, enabling embedded sensor nodes to more closely meet application requirements as compared to smaller, more coarse-grained design spaces. Gordon-Ross et al. [329] showed that finer-grained design spaces contain interesting design alternatives and result in increased benefits in the cache subsystem (though similar trends follow for other subsystems). However, the large design space exacerbates optimization challenges, taking into consideration an embedded sensor node's constrained memory and computational resources. Considering the embedded sensor node's limited battery life, energy-efficient computing is always of paramount significance. Therefore, optimization algorithms that conserve energy by minimizing design space exploration to find a good operating state are critical, especially for large design spaces and highly constrained systems. Additionally, rapidly changing application requirements and environmental stimuli coupled with limited battery reserves necessitate a highly responsive and low-overhead methodology.

Our main contributions in this chapter are as follows:

  • We propose a lightweight dynamic optimization methodology that intelligently selects appropriate initial tunable parameter value settings by evaluating application requirements, the relative importance of these requirements with respect to each other, and the magnitude by which each parameter affects each requirement. This one-shot operating state obtained from appropriate initial parameter value settings provides a high-quality operating state with minimal design space exploration for highly constrained applications. Results reveal that the one-shot operating state is within 8% of the optimal operating state averaged over several different application domains and design spaces.
  • We present a dynamic optimization methodology to iteratively improve the one-shot operating state to provide an optimal or near-optimal operating state for less constrained applications. Our dynamic optimization methodology combines the initial tunable parameter value settings with an intelligent exploration ordering of tunable parameter values and an exploration arrangement of tunable parameters, since some parameters are more critical for an application than others and thus should be explored first [330] (e.g., the transmission power parameter may be more critical than the processor voltage for a lifetime-sensitive application).
  • We architect a lightweight online greedy algorithm that leverages intelligent parameter arrangement to iteratively explore the design space, resulting in an operating state within 2% of the optimal operating state while exploring only 0.04% of the design space.

This work has a broad impact on EWSN design and deployment. Our work enables nonexpert application managers to leverage our dynamic optimization methodology to automatically tailor the embedded sensor node tunable parameters to best meet the application requirements with little design time effort. Our proposed methodology is suitable for all EWSN applications ranging from highly constrained to highly flexible applications. The one-shot operating state provides a good operating state for highly constrained applications, whereas greedy exploration of the parameters provides improvement over the one-shot operating state to determine a high-quality operating state for less constrained applications. Our initial parameter value settings, parameter arrangement, and exploration ordering techniques are also applicable to other systems or application domains (e.g., cache tuning) with different application requirements and different tunable parameters.

The remainder of this chapter is organized as follows. Section 10.1 overviews the related work. Section 10.2 presents our dynamic optimization methodology along with the state space and objective function formulation. We describe our dynamic optimization methodology's steps and algorithms in Section 10.3. Experimental results are presented in Section 10.4. Finally, Section 10.5 concludes the chapter.

10.1 Related Work

There exists much research in the area of dynamic optimizations [186–188, 330], but most previous work targets the processor or memory (cache) in computer systems. There exists little previous work on EWSN dynamic optimization, which presents more challenges given a unique design space, design constraints, platform particulars, and external influences from the EWSN's operating environment.

In the area of DPOP, Sridharan and Lysecky [300] dynamically profiled an EWSN's operating environment to gather profiling statistics; however, they did not describe a methodology to leverage these profiling statistics for dynamic optimization. Shenoy et al. [331] investigated profiling methods for dynamically monitoring sensor-based platforms with respect to network traffic and energy consumption, but did not explore dynamic optimizations. In prior work, Munir and Gordon-Ross [23, 69] proposed a Markov decision process (MDP)-based methodology for optimal sensor node parameter tuning to meet application requirements as a first step toward EWSN dynamic optimization. The MDP-based methodology required high computational and memory resources for large design spaces and needed a high-performance base station node (sink node) to compute the optimal operating state for large design spaces. The operating states determined at the base station were then communicated to the other sensor nodes. The high resource requirements made the MDP-based methodology infeasible for autonomous dynamic optimization for large design spaces given the constrained resources of individual sensor nodes. Kogekar et al. [227] proposed dynamic software reconfiguration to adapt software to new operating conditions; however, their work did not consider sensor node tunable parameters and application requirements. Verma [311] investigated simulated annealing (SA) and particle swarm optimization (PSO)-based parameter tuning for EWSNs and observed that SA performed better than PSO because PSO often quickly converged to local minima. Although there exists work on optimization of EWSNs, our work uses multiobjective optimization and fine-grained design space to find optimal (or near-optimal) sensor node operating states that meet application requirements.

One of the prominent dynamic optimization techniques for reducing energy consumption is dynamic voltage and frequency scaling (DVFS). Several previous works explored DVFS in EWSNs. Min et al. [225] utilized a voltage scheduler, running in tandem with the operating system's task scheduler, to perform DVFS based on an a priori prediction of the sensor node's workload and resulted in a 60% reduction in energy consumption. Similarly, Yuan and Qu [226] used additional transmitted data packet information to select appropriate processor voltage and frequency values. Although DVFS is a method for dynamic optimization, DVFS considers only two sensor node tunable parameters (processor voltage and frequency). In this chapter, we expand the embedded sensor node parameter tuning space, which provides a finer-grained design space, enabling embedded sensor nodes to more closely meet application requirements.
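For intuition on why DVFS is effective, recall that dynamic CMOS power scales roughly as P = C·V²·f, so lowering voltage alongside frequency saves energy even though the task runs longer. A back-of-the-envelope sketch; all operating points and the capacitance value are illustrative, not taken from [225] or any datasheet:

```python
# Back-of-the-envelope DVFS energy estimate. Dynamic CMOS power scales
# roughly as P = C_eff * V^2 * f; the numbers below are illustrative only.

def dynamic_power(c_eff, volts, freq_hz):
    return c_eff * volts ** 2 * freq_hz

p_fast = dynamic_power(1e-9, 4.0, 8e6)   # hypothetical 4.0 V, 8 MHz point
p_slow = dynamic_power(1e-9, 2.7, 4e6)   # hypothetical 2.7 V, 4 MHz point

# The same workload takes twice as long at half the clock frequency,
# yet the energy per task (E = P * t) still drops substantially.
e_fast = p_fast * 1.0
e_slow = p_slow * 2.0
print(round(1 - e_slow / e_fast, 2))     # fraction of task energy saved
```

Because power falls quadratically with voltage but runtime grows only linearly with the slower clock, the scaled-down point still wins on energy per task in this sketch.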

Some dynamic optimization work utilized dynamic power management for energy conservation in EWSNs. Wang et al. [332] proposed a strategy for optimizing mobile sensor node placement to meet coverage and energy requirements. Their strategy utilized dynamic power management to optimize the sensor nodes' sleep state transitions for energy conservation. Ning and Cassandras [333] presented a link layer dynamic optimization approach for energy conservation in EWSNs by minimizing the idle listening time. Their approach utilized traffic statistics to optimally control the receiver sleep interval. In our work, we incorporate energy conservation by switching the sensors, processors, and transceivers to low power, idle modes when these components are not actively sensing, processing, and communicating, respectively.

Although there exists work on EWSN optimizations, dynamic optimization requires further research. Specifically, there is a need for lightweight dynamic optimization methodologies for sensor node parameter tuning considering a sensor node's limited energy and storage. Furthermore, sensor node tunable parameter arrangement and exploration order require further investigation. Our work contributes to the dynamic optimization of EWSNs by proposing a lightweight dynamic optimization methodology together with sensor node tunable parameter arrangement and exploration order techniques.

10.2 Dynamic Optimization Methodology

In this section, we give an overview of our dynamic optimization methodology along with the state space and objective function formulation for the methodology.

10.2.1 Overview

Figure 10.1 depicts our dynamic optimization methodology for distributed EWSNs. EWSN designers evaluate application requirements and capture these requirements as high-level application metrics (e.g., lifetime, throughput, reliability) and associated weight factors. The weight factors signify the relative importance of the application metrics with respect to each other. The sensor nodes use the application metrics and weight factors to determine an appropriate operating state (tunable parameter value settings) by leveraging an application metrics estimation model. The application metrics estimation model (elaborated in Chapter 3) estimates high-level application metrics from low-level sensor node parameters and sensor node hardware-specific internals.


Figure 10.1 A lightweight dynamic optimization methodology per sensor node for EWSNs

Figure 10.1 shows the per sensor node dynamic optimization process (encompassed by the dashed circle), which is orchestrated by the dynamic optimization controller. The process consists of two operating modes: the one-shot mode wherein the sensor node operating state is directly determined by initial parameter value settings and the improvement mode wherein the operating state is iteratively improved using an online optimization algorithm. The dynamic optimization process consists of three steps. In the first step corresponding to the one-shot mode, the dynamic optimization controller intelligently determines the initial parameter value settings (operating state) and exploration order (ascending or descending), which is critical in reducing the number of states explored in the third step. In the one-shot mode, the dynamic optimization process is complete and the sensor node transitions directly to the operating state specified by the initial parameter value settings. The second step corresponds to the improvement mode, which determines the parameter arrangement based on application metric weight factors (e.g., explore processor voltage, frequency, and then sensing frequency). This parameter arrangement reduces the design space exploration time using an optimization algorithm in the third step to determine a good quality operating state. The third step corresponds to the improvement mode and invokes an online optimization algorithm for parameter exploration to iteratively improve the operating state to more closely meet application requirements as compared to the one-shot's operating state. The online optimization algorithm leverages the intelligent initial parameter value settings, exploration order, and parameter arrangement.

A dynamic profiler records profiling statistics (e.g., processor voltage, wireless channel condition, radio transmission power) given the current operating state and environmental stimuli and passes these profiling statistics to the dynamic optimization controller. The dynamic optimization controller processes the profiling statistics to determine whether the current operating state meets the application requirements. If the application requirements are not met, the dynamic optimization controller reinvokes the dynamic optimization process to determine a new operating state. This feedback process continues to ensure that the application requirements are best met under changing environmental stimuli. We point out that our current work describes the dynamic optimization methodology; however, incorporation of profiling statistics to provide feedback is part of our future work.

10.2.2 State Space

The state space $S$ for our dynamic optimization methodology given $N$ tunable parameters is defined as

(10.1)  $S = P_1 \times P_2 \times \cdots \times P_N$

where $P_i$ denotes the state space for tunable parameter $i$, $\forall\, i \in \{1, 2, \ldots, N\}$, and $\times$ denotes the Cartesian product. Each tunable parameter's state space $P_i$ consists of $n_i$ values

(10.2)  $P_i = \{p_{i,1}, p_{i,2}, \ldots, p_{i,n_i}\} : |P_i| = n_i$

where $|P_i|$ denotes the tunable parameter $P_i$'s state space cardinality (the number of tunable values in $P_i$). $S$ is a set of $N$-tuples formed by taking one tunable parameter value from each tunable parameter. A single $N$-tuple $s \in S$ is given as

(10.3)  $s = (p_1, p_2, \ldots, p_N) : p_i \in P_i,\ \forall\, i \in \{1, 2, \ldots, N\}$

Each $N$-tuple represents a sensor node operating state. We point out that some $N$-tuples in $S$ may not be feasible (such as invalid combinations of processor voltage and frequency) and can be regarded as do not care tuples.
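As an illustration, the state space of Eqs. (10.1)–(10.3) can be enumerated directly as a Cartesian product. The parameter names, values, and feasibility rule below are a small hypothetical example, not the design space used later in this chapter:

```python
from itertools import product

# Illustrative tunable parameters (example values, not the IRIS settings):
# each parameter's state space P_i is the list of its tunable values.
params = {
    "Vp": [2.7, 3.3, 4.0],  # processor voltage (V)
    "Fp": [4, 6, 8],        # processor frequency (MHz)
    "Fs": [1, 2, 3],        # sensing frequency (samples/s)
}

# The state space S is the Cartesian product of the per-parameter spaces;
# each N-tuple is one candidate operating state.
S = list(product(*params.values()))
print(len(S))  # 3 * 3 * 3 = 27 operating states

# Infeasible tuples (e.g., an invalid voltage/frequency pair) are treated
# as "do not care"; the feasibility rule below is purely illustrative.
feasible = [s for s in S if not (s[0] == 2.7 and s[1] == 8)]
print(len(feasible))  # 24
```

The design space thus grows multiplicatively with each added parameter or value, which is why the exploration-minimizing algorithms of Section 10.3 matter.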

10.2.3 Optimization Objective Function

The sensor node dynamic optimization problem can be formulated as

(10.4)  $\max\ f(s) \quad \text{s.t.} \quad s \in S$

where $f(s)$ represents the objective function, which captures the application metrics and weight factors and is given as

(10.5)  $f(s) = \sum_{k=1}^{m} \omega_k f_k(s)$

where $f_k(s)$ and $\omega_k$ denote the objective function and weight factor for the $k$th application metric, respectively, given that there are $m$ application metrics.
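A minimal sketch of the weighted-sum objective in Eq. (10.5); the two per-metric objectives are toy stand-ins (with m = 2) for the application metrics estimation model, not the chapter's actual model:

```python
# Weighted-sum objective f(s) = sum_k w_k * f_k(s), with two toy
# per-metric objectives standing in for the estimation model (m = 2).

def f_lifetime(s):
    vp, fp = s
    return 1.0 / (vp * fp)   # toy: lifetime improves as V * f (power) drops

def f_throughput(s):
    _, fp = s
    return fp / 8.0          # toy: throughput scales with frequency

weights = {"lifetime": 0.7, "throughput": 0.3}
metrics = {"lifetime": f_lifetime, "throughput": f_throughput}

def objective(s):
    return sum(weights[k] * fk(s) for k, fk in metrics.items())

# At equal frequency, the lower-voltage state scores higher because the
# lifetime metric carries the larger weight.
print(objective((2.7, 4)) > objective((3.3, 4)))  # True
```

Changing the weight factors shifts which operating states maximize f(s), which is how the designer encodes application priorities.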

For our dynamic optimization methodology, we consider three application metrics ($m = 3$): lifetime, throughput, and reliability, whose objective functions are represented by $f_l(s)$, $f_t(s)$, and $f_r(s)$, respectively. We define $f_l(s)$ (Fig. 10.2) using the piecewise linear function

(10.6)  $f_l(s) = \begin{cases} C_{L_{max}} & \text{if } l_s \ge L_{max} \\ C_{L_{min}} + \dfrac{(l_s - L_{min})(C_{L_{max}} - C_{L_{min}})}{L_{max} - L_{min}} & \text{if } L_{min} \le l_s < L_{max} \\ C_{L'_{min}} + \dfrac{(l_s - L'_{min})(C_{L_{min}} - C_{L'_{min}})}{L_{min} - L'_{min}} & \text{if } L'_{min} \le l_s < L_{min} \\ 0 & \text{if } l_s < L'_{min} \end{cases}$

where $l_s$ denotes the lifetime offered by state $s$, the constant parameters $L_{min}$ and $L_{max}$ denote the desired minimum and maximum lifetime, and the constant parameters $L'_{min}$ and $L'_{max}$ denote the acceptable minimum and maximum lifetime. The piecewise linear objective function provides EWSN designers with a flexible application requirement specification, as it allows both desirable and acceptable ranges [310]. The objective function reward gradient (slope) is greater in the desired range than in the acceptable range, and there is no reward/gain for operating outside the acceptable range. The constant parameters $C_{L'_{min}}$, $C_{L_{min}}$, and $C_{L_{max}}$ in Eq. (10.6) denote the $f_l(s)$ value at $L'_{min}$, $L_{min}$, and $L_{max}$, respectively (constant parameters are assigned by the EWSN designer based on the minimum and maximum acceptable/desired values of application metrics). The throughput and reliability objective functions $f_t(s)$ and $f_r(s)$ can be defined similar to Eq. (10.6).


Figure 10.2 Lifetime objective function $f_l(s)$

The objective function characterization enables the reward/gain calculation from operating in a given state based on the high-level metric values offered by the state. Although different characterizations of objective functions result in different reward values for different states, our dynamic optimization methodology selects a high-quality operating state from the design space to maximize the given objective function value. We consider piecewise linear objective functions as a typical example from the possible objective functions (e.g., linear, piecewise linear, nonlinear) to illustrate our dynamic optimization methodology, though other characterizations of objective functions work equally well for our methodology.
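A piecewise linear objective of this shape is cheap to evaluate on a sensor node. The sketch below assumes illustrative breakpoints and reward constants (in practice, the EWSN designer supplies these):

```python
# Piecewise linear lifetime objective f_l. Breakpoints (lifetime in days)
# and reward constants are illustrative, designer-assigned in practice.

def f_l(lifetime, L_acc_min=20, L_min=40, L_max=100,
        C_acc=0.1, C_min=0.4, C_max=1.0):
    if lifetime < L_acc_min:        # outside the acceptable range: no reward
        return 0.0
    if lifetime < L_min:            # acceptable range: shallow reward slope
        frac = (lifetime - L_acc_min) / (L_min - L_acc_min)
        return C_acc + frac * (C_min - C_acc)
    if lifetime < L_max:            # desired range: steeper reward slope
        frac = (lifetime - L_min) / (L_max - L_min)
        return C_min + frac * (C_max - C_min)
    return C_max                    # reward saturates at the desired maximum

print(round(f_l(70), 2))  # midway through the desired range -> 0.7
```

Note the steeper slope inside the desired range rewards states that push lifetime toward the desired maximum more strongly than states that merely reach the acceptable range.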

10.3 Algorithms for Dynamic Optimization Methodology

In this section, we describe our dynamic optimization methodology's three steps (Fig. 10.1) and associated algorithms.

10.3.1 Initial Tunable Parameter Value Settings and Exploration Order

The first step of our dynamic optimization methodology determines initial tunable parameter value settings and exploration order (ascending or descending). These initial tunable parameter value settings result in a high-quality operating state in one shot, hence the name one-shot mode (Fig. 10.1). The algorithm calculates the application metric objective function values for the first and last values in the set of tunable values for each tunable parameter while the other tunable parameters are set to an arbitrary initial setting (either the first or last value). We point out that the tunable values for a tunable parameter are arranged in ascending order (e.g., for processor voltage, $V_p = \{2.7, 3.3, 4\}$ (V)). This objective function value calculation determines the effectiveness of setting a particular tunable parameter value in meeting the desired objective (e.g., lifetime). The tunable parameter setting that gives the higher objective function value is selected as the initial parameter value for that tunable parameter. The exploration order for that tunable parameter is set to descending if the last value in the set of tunable values (e.g., 4 V in our previous example) gives the higher objective function value, and ascending otherwise. This exploration order selection helps reduce design space exploration for a greedy-based optimization algorithm (step 3), which stops exploring a tunable parameter as soon as a tunable parameter setting gives a lower objective function value than the initial setting. This initial parameter value setting and exploration order determination procedure is then repeated for all other tunable parameters and application metrics.

Algorithm 6 describes our technique to determine initial tunable parameter value settings and exploration order (the first step of our dynamic optimization methodology). The algorithm takes as input the objective function $f(s)$, the number of tunable parameters $N$, the number of values $n_i$ for each tunable parameter, the number of application metrics $m$, and the vector $\mathbf{P} = [P_1, P_2, \ldots, P_N]$ containing the tunable parameters. For each application metric $k$, the algorithm calculates the vectors $\mathbf{P}_0^k$ and $\mathbf{E}^k$ (where $\mathbf{E}^k$ denotes the exploration direction (ascending or descending)), which store the initial value settings and exploration order, respectively, for the tunable parameters. The algorithm determines the $k$th application metric objective function values $f_k^{first}$ and $f_k^{last}$, where the parameter being explored $P_i$ is assigned its first $p_{i,1}$ and last $p_{i,n_i}$ tunable values, respectively, and the rest of the tunable parameters are assigned initial values (lines 3 and 4). $\delta f_i^k$ stores the difference between $f_k^{last}$ and $f_k^{first}$. If $\delta f_i^k \ge 0$, $p_{i,n_i}$ results in a greater (or equal when $\delta f_i^k = 0$) objective function value as compared to $p_{i,1}$ for parameter $P_i$ (i.e., the objective function value decreases as the parameter value decreases). Therefore, to reduce the number of states explored while considering that the greedy algorithm (Section 10.3.3) stops exploring a tunable parameter if a tunable parameter's value yields a comparatively lower objective function value, $P_i$'s exploration order must be descending (lines 6–8). The algorithm assigns $p_{i,n_i}$ as the initial value of $P_i$ for the $k$th application metric (line 9). If $\delta f_i^k < 0$, the algorithm assigns the exploration order as ascending for $P_i$ and $p_{i,1}$ as the initial value setting of $P_i$ (lines 11–13). This procedure is repeated for all $m$ application metrics and all $N$ tunable parameters (lines 1–16).

Algorithm 6 Initial tunable parameter value settings and exploration order
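The endpoint-comparison idea behind Algorithm 6 can be sketched as follows. For brevity, this sketch evaluates a single combined objective rather than repeating the procedure per application metric, and the parameter values and objective are the same illustrative toys used earlier, not the chapter's estimation model:

```python
# Sketch of step 1 (Algorithm 6 in spirit): compare the objective with each
# parameter at its first versus last value, the others held at an arbitrary
# initial setting, to pick initial values and exploration directions.

params = {"Vp": [2.7, 3.3, 4.0], "Fp": [4, 6, 8], "Fs": [1, 2, 3]}

def objective(s):
    # Toy weighted objective standing in for f(s); not the chapter's model.
    lifetime = 1.0 / (s["Vp"] * s["Fp"] * s["Fs"])
    throughput = s["Fp"] * s["Fs"] / 24.0
    return 0.7 * lifetime + 0.3 * throughput

def one_shot_settings(params, objective):
    current = {n: vals[0] for n, vals in params.items()}  # arbitrary setting
    init, order = {}, {}
    for n, vals in params.items():
        delta = (objective({**current, n: vals[-1]})
                 - objective({**current, n: vals[0]}))
        if delta >= 0:   # last value at least as good: explore downward
            init[n], order[n] = vals[-1], "descending"
        else:            # first value better: explore upward
            init[n], order[n] = vals[0], "ascending"
    return init, order

init, order = one_shot_settings(params, objective)
print(init)   # the one-shot operating state
print(order)  # per-parameter exploration direction
```

With this toy objective, the sketch picks the lowest voltage but the highest processing and sensing rates as the one-shot state, and orients each parameter's exploration away from its chosen endpoint.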

10.3.2 Parameter Arrangement

Depending on the application metric weight factors, some parameters are more critical to meeting application requirements than other parameters. For example, sensing frequency is a critical parameter for applications with a high responsiveness weight factor, and therefore, sensing frequency should be explored first. In this section, we describe a technique for parameter arrangement such that parameters are explored in an order characterized by the parameters' impact on application metrics based on relative weight factors. This parameter arrangement technique (step 2) is part of the improvement mode, which is suitable for relatively less constrained applications that would benefit from a higher quality operating state than the one-shot mode's operating state (Fig. 10.1).

The parameter arrangement step determines an arrangement of the tunable parameters for each application metric, which dictates the order in which the parameters will be explored. This arrangement is based on the difference between the application metric's objective function values corresponding to the first and last values of the tunable parameters, calculated in step 1 (i.e., the tunable parameter that gives the highest difference in an application metric's objective function values is the first parameter in the arrangement vector for that application metric). For an arrangement that considers all application metrics, the tunable parameters' order is set in accordance with the application metrics' weight factors such that tunable parameters having a greater effect on application metrics with higher weight factors are placed before parameters having a lesser effect on application metrics with lower weight factors. We point out that the effect of the tunable parameters on an application metric is determined from the objective function value calculations described in step 1. The arrangement that considers all application metrics selects the first few tunable parameters corresponding to each application metric, starting from the application metric with the highest weight factor, such that no parameters are repeated in the final intelligent parameter arrangement. For example, if processor voltage is among the first few tunable parameters corresponding to two application metrics, then the processor voltage setting corresponding to the application metric with the greater weight factor is selected, whereas the processor voltage setting corresponding to the application metric with the lower weight factor is ignored in the final intelligent parameter arrangement. Step 3 (online optimization algorithm) uses this intelligent parameter arrangement for further design space exploration. The mathematical details of the parameter arrangement step are as follows.

Our parameter arrangement technique is based on calculations performed in Algorithm 6. We define

(10.7)  $\delta\mathbf{f}^k = [\delta f_1^k, \delta f_2^k, \ldots, \delta f_N^k], \quad k \in \{1, 2, \ldots, m\}$

where $\delta f_i^k$ denotes the objective function value difference calculated in Algorithm 6 for tunable parameter $P_i$ and the $k$th application metric. The vector $\nabla\mathbf{f}^k$ contains the elements of $\delta\mathbf{f}^k$ arranged in descending order by their respective values and is given as

(10.8)  $\nabla\mathbf{f}^k = [\delta f_{\sigma(1)}^k, \delta f_{\sigma(2)}^k, \ldots, \delta f_{\sigma(N)}^k] : \delta f_{\sigma(1)}^k \ge \delta f_{\sigma(2)}^k \ge \cdots \ge \delta f_{\sigma(N)}^k$

The tunable parameter arrangement vector $\mathbf{P}^k$ corresponding to $\nabla\mathbf{f}^k$ (one-to-one correspondence) is given by

(10.9)  $\mathbf{P}^k = [P_{\sigma(1)}, P_{\sigma(2)}, \ldots, P_{\sigma(N)}]$

An intelligent parameter arrangement $\hat{\mathbf{P}}$ must consider all application metrics' weight factors with higher importance given to the higher weight factors, that is,

(10.10)  $\hat{\mathbf{P}} = [\mathbf{P}^1(1{:}m_1),\ \mathbf{P}^2(1{:}m_2),\ \ldots,\ \mathbf{P}^m(1{:}m_m)]$

where $m_k$ denotes the number of tunable parameters taken from $\mathbf{P}^k$ such that $\sum_{k=1}^{m} m_k = N$. Our technique allows taking more tunable parameters from parameter arrangement vectors corresponding to higher weight factor application metrics: $m_1 \ge m_2 \ge \cdots \ge m_m$ when $\omega_1 \ge \omega_2 \ge \cdots \ge \omega_m$. In Eq. (10.10), the first $m_1$ tunable parameters are taken from vector $\mathbf{P}^1$, then $m_2$ from vector $\mathbf{P}^2$, and so on to $m_m$ from vector $\mathbf{P}^m$ such that $\sum_{k=1}^{m} m_k = N$. In other words, we select those tunable parameters from parameter arrangement vectors corresponding to the lower weight factors that are not already selected from parameter arrangement vectors corresponding to the higher weight factors (i.e., $\hat{\mathbf{P}}$ comprises disjoint (nonoverlapping) tunable parameters corresponding to each application metric).

In the situation where weight factor $\omega_k$ is much greater than all other weight factors, an intelligent parameter arrangement $\hat{\mathbf{P}}$ would correspond to the parameter arrangement for the application metric with weight factor $\omega_k$, that is,

(10.11)  $\hat{\mathbf{P}} = \mathbf{P}^k : \omega_k \gg \omega_j,\ \forall\, j \ne k$

The initial parameter value vector $\hat{\mathbf{P}}_0$ and the exploration order (ascending or descending) vector $\hat{\mathbf{E}}$ corresponding to $\hat{\mathbf{P}}$ (Eq. (10.10)) can be determined from $\hat{\mathbf{P}}$ (Eq. (10.10)), $\mathbf{P}_0^k$, and $\mathbf{E}^k$ (Algorithm 6) by examining each tunable parameter in $\hat{\mathbf{P}}$ and determining that tunable parameter's initial value setting from $\mathbf{P}_0^k$ and exploration order from $\mathbf{E}^k$.
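One simple way to realize the disjoint selection of Eq. (10.10) is to walk the per-metric arrangement vectors in descending weight order and append any parameter not already chosen. The sketch below uses this simplified variant with hypothetical metric names, weights, and per-metric arrangements:

```python
# Simplified disjoint-selection sketch: walk per-metric arrangement vectors
# in descending weight order, appending parameters not already chosen.
# Metric names, weights, and arrangements are hypothetical.

def arrange(per_metric_order, weights):
    final = []
    for metric in sorted(weights, key=weights.get, reverse=True):
        for p in per_metric_order[metric]:
            if p not in final:      # keep the final arrangement disjoint
                final.append(p)
    return final

order = arrange(
    {"lifetime": ["Vp", "Fp", "Fs"], "throughput": ["Fp", "Ps", "Fs"]},
    {"lifetime": 0.7, "throughput": 0.3},
)
print(order)  # ['Vp', 'Fp', 'Fs', 'Ps']
```

Here "Fp" and "Fs" appear in both per-metric vectors, so their positions are dictated by the higher-weight lifetime metric, while "Ps" is picked up from the throughput metric's vector.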

10.3.3 Online Optimization Algorithm

Step 3 of our dynamic optimization methodology, which also belongs to the improvement mode, iteratively improves the one-shot operating state. This step leverages information from steps 1 and 2 and uses a greedy optimization algorithm for tunable parameter exploration in an effort to determine a better operating state than the one obtained from step 1 (Section 10.3.1). The greedy algorithm explores the tunable parameters in the order determined in step 2. The greedy algorithm stops exploring a tunable parameter as soon as a tunable parameter setting yields a lower objective function value as compared to the previous tunable parameter setting for that tunable parameter, hence the name greedy. This greedy approach helps in reducing design space exploration to determine an operating state. Although we propose a greedy algorithm for design space exploration, any other algorithm can be used in step 3.

Algorithm 7 depicts our online greedy optimization algorithm, which leverages the initial parameter value settings (Section 10.3.1), parameter value exploration order (Section 10.3.1), and parameter arrangement (Section 10.3.2). The algorithm takes as input the objective function $f(s)$, the number of tunable parameters $N$, the number of values $n_i$ for each tunable parameter, the intelligent tunable parameter arrangement vector $\hat{\mathbf{P}}$, the tunable parameters' initial value vector $\hat{\mathbf{P}}_0$, and the tunable parameters' exploration order (ascending or descending) vector $\hat{\mathbf{E}}$. The algorithm initializes the state $\kappa$ from $\hat{\mathbf{P}}_0$ (line 1) and $f_{best}$ with $\kappa$'s objective function value (line 2). The algorithm explores each parameter in $\hat{\mathbf{P}}$ (Eq. (10.10)) in ascending or descending order as given by $\hat{\mathbf{E}}$ (lines 3 and 4). For each tunable parameter value $p_{i,j}$ of the tunable parameter $P_i$ being explored (line 5), the algorithm assigns $f_{temp}$ the objective function value from the current state $\kappa_t$ (line 6). The current state $\kappa_t$ denotes the tunable parameter value settings and can be written as

(10.12)  $\kappa_t = (p_{i,j}, \Omega)$

where $p_{i,j}$ denotes the parameter value corresponding to the tunable parameter $P_i$ being explored and the set $\Omega$ denotes the parameter value settings other than the current tunable parameter $P_i$ being explored and is given by

(10.13)  $\Omega = \{p_x : p_x = \hat{\mathbf{P}}_0(x)\ \vee\ p_x = p_x^{best},\ \forall\, x \ne i\}$

where $\hat{\mathbf{P}}_0(x)$ denotes the initial value of parameter $P_x$ as given by $\hat{\mathbf{P}}_0$ and $p_x^{best}$ denotes the best found value of $P_x$ after exploring $P_x$ (lines 5–13 of Algorithm 7).

Algorithm 7 Online greedy optimization algorithm

If $f_{temp} > f_{best}$ (the objective function value increases), $f_{temp}$ is assigned to $f_{best}$ and the state $\kappa_t$ is assigned to the state $\kappa_{best}$ (lines 7–9). If $f_{temp} \le f_{best}$, the algorithm stops exploring the current parameter $P_i$ and starts exploring the next tunable parameter (lines 10–12). The algorithm returns the best found objective function value $f_{best}$ and the state $\kappa_{best}$ corresponding to $f_{best}$.
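The greedy exploration of Algorithm 7 can be sketched as follows, reusing the earlier toy objective; the one-shot state, exploration orders, and arrangement are hard-coded stand-ins for the outputs of steps 1 and 2:

```python
# Sketch of greedy parameter exploration (Algorithm 7 in spirit).
# The one-shot state, exploration orders, and arrangement below are
# hard-coded stand-ins for the outputs of steps 1 and 2.

params = {"Vp": [2.7, 3.3, 4.0], "Fp": [4, 6, 8], "Fs": [1, 2, 3]}

def objective(s):
    # Toy weighted objective: the lifetime term favors low power, the
    # throughput term favors high processing/sensing rates.
    lifetime = 1.0 / (s["Vp"] * s["Fp"] * s["Fs"])
    throughput = s["Fp"] * s["Fs"] / 24.0
    return 0.7 * lifetime + 0.3 * throughput

init = {"Vp": 2.7, "Fp": 8, "Fs": 3}            # one-shot state (step 1)
order = {"Vp": "ascending", "Fp": "descending", "Fs": "descending"}
arrangement = ["Fp", "Fs", "Vp"]                # parameter order (step 2)

def greedy_explore(params, objective, init, order, arrangement):
    best, f_best, explored = dict(init), objective(init), 1
    for name in arrangement:
        values = params[name]
        if order[name] == "descending":
            values = values[::-1]
        # The initial value heads the exploration order; resume after it.
        for v in values[values.index(init[name]) + 1:]:
            explored += 1
            cand = dict(best, **{name: v})
            f_temp = objective(cand)
            if f_temp > f_best:
                f_best, best = f_temp, cand
            else:
                break  # greedy: stop at the first non-improving value
    return best, f_best, explored

best, f_best, explored = greedy_explore(params, objective, init,
                                        order, arrangement)
print(best == init, explored)  # the one-shot state survives; 4 states visited
```

In this toy model the greedy search confirms the one-shot state as best after evaluating only 4 of the 27 states; in larger design spaces it typically improves on the one-shot state while still visiting a small fraction of S.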

10.3.4 Computational Complexity

The computational complexity of our dynamic optimization methodology is $O(Nm + N\log N + Nn)$, which comprises the intelligent initial parameter value settings and exploration ordering (Algorithm 6) $O(Nm)$, the parameter arrangement $O(N\log N)$ (sorting $\delta\mathbf{f}^k$ (Eq. (10.8)) contributes the $N\log N$ factor) (Section 10.3.2), and the online optimization algorithm for parameter exploration (Algorithm 7) $O(Nn)$, where $n$ denotes the maximum number of tunable values for any parameter. Assuming that the number of tunable parameters $N$ is larger than the number of any parameter's tunable values $n$, the computational complexity of our methodology can be given as $O(Nm + N\log N)$. This complexity reveals that our proposed methodology is lightweight and is thus feasible for implementation on sensor nodes with tight resource constraints.

10.4 Experimental Results

In this section, we describe the experimental setup and results for three application domains: security/defense, health care, and ambient conditions monitoring. The results include the percentage improvements attained by our initial tunable parameter settings (one-shot operating state) over other alternative initial value settings, and a comparison of our greedy algorithm (which leverages intelligent initial parameter settings, exploration order, and parameter arrangement) for design space exploration with other variants of a greedy algorithm and SA. This section also presents an execution time and data memory analysis to verify the complexity of our dynamic optimization methodology.

10.4.1 Experimental Setup

Our experimental setup is based on the Crossbow IRIS mote platform [80] with a battery capacity of 2000 mA h using two AA alkaline batteries. The IRIS mote platform integrates an Atmel ATmega1281 microcontroller [77], an MTS400 sensor board [75] with Sensirion SHT1x temperature and humidity sensors [76], and an Atmel AT86RF230 low-power 2.4-GHz transceiver [78]. Table 10.1 shows the sensor node hardware-specific values, corresponding to the IRIS mote platform, which are used by the application metrics estimation model [76–78, 80].

Table 10.1 Crossbow IRIS mote platform hardware specifications

Notation Description Value
c10-math-0170 Battery voltage 3.6 V
c10-math-0171 Battery capacity 2000 mA h
c10-math-0172 Processing instructions per bit 5
c10-math-0173 Sensing resolution bits 24
c10-math-0174 Transceiver voltage 3 V
c10-math-0175 Transceiver data rate 250 kbps
c10-math-0176 Transceiver receive current 15.5 mA
c10-math-0177 Transceiver sleep current 20 nA
c10-math-0178 Sensing board voltage 3 V
c10-math-0179 Sensing measurement current 550 μA
c10-math-0181 Sensing measurement time 55 ms
c10-math-0182 Sensing sleep current 0.3 μA
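
Hardware values such as those in Table 10.1 feed the application metrics estimation model. As a hedged sketch (the model itself is not reproduced here, and the one-sample-per-second sensing rate is a hypothetical assumption), the sensing subsystem's contribution to lifetime can be estimated from a duty-cycled average current:

```python
# Hedged sketch: sensing-subsystem average current and battery lifetime from
# Table 10.1 values. The sensing frequency is an illustrative assumption.
battery_capacity_mAh = 2000.0
sense_current_mA = 0.550       # 550 uA measurement current
sleep_current_mA = 0.0003      # 0.3 uA sleep current
measure_time_s = 0.055         # 55 ms per measurement

sensing_freq_hz = 1.0          # hypothetical: one sample per second
duty = sensing_freq_hz * measure_time_s               # fraction of time measuring
avg_current_mA = duty * sense_current_mA + (1 - duty) * sleep_current_mA
lifetime_h = battery_capacity_mAh / avg_current_mA    # sensing subsystem only
```

A complete lifetime estimate would also account for processor and transceiver currents; this fragment only shows how the tabulated sensing values enter such a model.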

We analyze six tunable parameters: processor voltage c10-math-0184, processor frequency c10-math-0185, sensing frequency c10-math-0186, packet size c10-math-0187, packet transmission interval c10-math-0188, and transceiver transmission power c10-math-0189. In order to explore the fidelity of our methodology across small and large design spaces, we consider two design space cardinalities (number of states in the design space): c10-math-0190 and c10-math-0191. The tunable parameters for c10-math-0192 are c10-math-0193 (V), c10-math-0194 (MHz) [77], c10-math-0195 (samples/s) [76], c10-math-0196 (B), c10-math-0197 (s), and c10-math-0198 (dB m) [78]. The tunable parameters for c10-math-0199 are c10-math-0200 (V), c10-math-0201 (MHz) [77], c10-math-0202 (samples/s) [76], c10-math-0203 (B), c10-math-0204 (s), and c10-math-0205 (dB m) [78]. All state space tuples are feasible for c10-math-0206, whereas c10-math-0207 contains 7779 infeasible state space tuples because not all c10-math-0208 and c10-math-0209 pairs are feasible.
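
The cardinality bookkeeping above can be sketched as follows. The voltage and frequency sets and the feasibility rule here are illustrative assumptions, not the chapter's exact values; the point is that the full cardinality is the product of per-parameter value counts, from which voltage–frequency combinations violating a constraint are subtracted:

```python
# Sketch: design space cardinality and infeasible (voltage, frequency) tuples.
from itertools import product
from math import prod

voltages = [1.8, 2.7, 3.3, 4.0]     # hypothetical processor voltages (V)
freqs_mhz = [4, 8, 12, 16]          # hypothetical processor frequencies (MHz)
other_param_counts = [4, 4, 4, 4]   # value counts of the remaining four parameters

def feasible(v, f):                 # hypothetical rule: higher clocks need higher voltage
    return f <= 4 * v

rest = prod(other_param_counts)
total = len(voltages) * len(freqs_mhz) * rest
usable = sum(1 for v, f in product(voltages, freqs_mhz) if feasible(v, f)) * rest
print(total, total - usable)        # full cardinality and infeasible-tuple count
```

Each infeasible (voltage, frequency) pair rules out every combination of the remaining parameters, which is why a handful of infeasible pairs removes thousands of tuples from a large design space.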

In order to evaluate the robustness of our methodology across different applications with varying application metric weight factors, we model three sample application domains: a security/defense system, a healthcare application, and an ambient conditions monitoring application. We assign application-specific values for the desirable minimum c10-math-0210, desirable maximum c10-math-0211, acceptable minimum c10-math-0212, and acceptable maximum c10-math-0213 objective function parameter values for application metrics (Section 10.2.3) as shown in Table 10.2. We specify the objective function parameters as a multiple of base units for lifetime, throughput, and reliability. We assume one lifetime unit is equal to 5 days, one throughput unit is equal to 20 kbps, and one reliability unit is equal to 0.05 (percentage of error-free packet transmissions).

Table 10.2 Desirable minimum c10-math-0214, desirable maximum c10-math-0215, acceptable minimum c10-math-0216, and acceptable maximum c10-math-0217 objective function parameter values for a security/defense (defense) system, health care, and an ambient conditions monitoring application

Notation Defense (units) Health care (units) Ambient monitoring (units)
c10-math-0218 8 12 6
c10-math-0219 30 32 40
c10-math-0220 1 2 3
c10-math-0221 36 40 60
c10-math-0222 20 19 15
c10-math-0223 34 36 29
c10-math-0224 0.5 0.4 0.05
c10-math-0225 45 47 35
c10-math-0226 14 12 11
c10-math-0227 19.8 17 16
c10-math-0228 10 8 6
c10-math-0229 20 20 20

One lifetime unit = 5 days, one throughput unit = 20 kbps, one reliability unit = 0.05
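
One way the four bounds in Table 10.2 could enter the objective function is a piecewise-linear per-metric reward, scaled by application-specific weight factors. This is a hedged sketch only: the exact functional form is defined in Section 10.2.3 and may differ; the sample bounds below are the security/defense lifetime row of Table 10.2.

```python
# Hedged sketch of a piecewise-linear per-metric reward built from acceptable
# and desirable bounds, and a weighted-sum objective over all metrics.
def metric_reward(x, acc_min, des_min, des_max, acc_max):
    if x < acc_min or x > acc_max:
        return 0.0                                  # outside the acceptable range
    if des_min <= x <= des_max:
        return 1.0                                  # fully within the desirable range
    if x < des_min:                                 # ramp up: acceptable -> desirable
        return (x - acc_min) / (des_min - acc_min)
    return (acc_max - x) / (acc_max - des_max)      # ramp down above desirable

def objective(metrics, bounds, weights):
    return sum(w * metric_reward(m, *b) for m, b, w in zip(metrics, bounds, weights))
```

For example, with the defense lifetime bounds (acceptable 1–36 units, desirable 8–30 units), a lifetime of 10 units earns full reward while 4.5 units earns half reward.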

In order to evaluate our one-shot dynamic optimization solution quality, we compare the solution from the one-shot initial parameter settings c10-math-0230 with the solutions obtained from the following four potential initial parameter value settings (although any feasible c10-math-0231-tuple c10-math-0232 can be taken as the initial parameter settings):

  • c10-math-0233 assigns the first parameter value for each tunable parameter, that is, c10-math-0234.
  • c10-math-0235 assigns the last parameter value for each tunable parameter, that is, c10-math-0236.
  • c10-math-0237 assigns the middle parameter value for each tunable parameter, that is, c10-math-0238.
  • c10-math-0239 assigns a random value for each tunable parameter, that is, c10-math-0240 where c10-math-0241 denotes a function to generate a random/pseudorandom integer and % denotes the modulus operator.
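
The four alternative initial settings in the list above can be sketched directly (names are illustrative; each parameter is an indexed list of its allowed values, and the random scheme uses an index modulo the value count, as in the last bullet):

```python
# Sketch of the four alternative initial parameter settings compared
# against the one-shot settings.
import random

def initial_settings(param_values, scheme, seed=0):
    rng = random.Random(seed)
    if scheme == 'first':                    # first allowed value of each parameter
        return tuple(v[0] for v in param_values)
    if scheme == 'last':                     # last allowed value of each parameter
        return tuple(v[-1] for v in param_values)
    if scheme == 'middle':                   # middle allowed value of each parameter
        return tuple(v[len(v) // 2] for v in param_values)
    if scheme == 'random':                   # random index modulo the value count
        return tuple(v[rng.randrange(len(v))] for v in param_values)
    raise ValueError(scheme)
```

Any of these tuples is a feasible starting state; the experiments below compare the solution quality each yields against the one-shot settings.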

Although we analyzed our methodology for the IRIS mote platform, three application domains, and two design spaces, our algorithms are equally applicable to any platform, application domain, and design space. Since the constant assignments for the minimum and maximum desirable values and weight factors are application dependent and designer specified, appropriate assignments can be made for any application given the application's specific requirements. Finally, since the number of tunable parameters and the parameters' possible/allowed tunable values dictate the size of the design space, we evaluate both large and small design spaces, but any sized design space could be evaluated by varying the number of tunable parameters and associated values.

10.4.2 Results

In this section, we present results for the percentage improvements attained by our dynamic optimization methodology over other optimization methodologies. We implemented our dynamic optimization methodology in C++. To evaluate the effectiveness of our one-shot solution, we compare the one-shot solution's results with four alternative initial parameter value settings (Section 10.4.1). We normalize the objective function values corresponding to the operating states attained by our dynamic optimization methodology with respect to the optimal solution obtained using an exhaustive search. We compare the relative complexity of our one-shot dynamic optimization methodology with two other dynamic optimization methodologies, which leverage greedy- and SA-based algorithms for design space exploration [316]. Although for brevity we present results for only a subset of the initial parameter value settings, application domains, and design spaces, we observed that results for additional application domains, design spaces, and initial parameter settings revealed similar trends.

10.4.2.1 Percentage Improvements of One-Shot Over Other Initial Parameter Settings

Table 10.3 depicts the percentage improvements attained by the one-shot parameter settings c10-math-0258 over other parameter settings for different application domains and weight factors. We point out that different weight factors could result in different percentage improvements; however, we observed similar trends for other weight factors. Table 10.3 shows that one-shot initial parameter settings can result in as high as 155% improvement as compared to other initial value settings. We observe that some arbitrary settings may give a comparable or even a better solution for a particular application domain, application metric weight factors, and design space cardinality, but that arbitrary setting would not scale to other application domains, application metric weight factors, and design space cardinalities. For example, c10-math-0259 obtains a 12% better quality solution than c10-math-0260 for the ambient conditions monitoring application for c10-math-0261, but yields a 10% lower quality solution for the security/defense and healthcare applications for c10-math-0262, and a 57%, 31%, and 20% lower quality solution than c10-math-0263 for the security/defense, health care, and ambient conditions monitoring applications, respectively, for c10-math-0264.

Table 10.3 Percentage improvements attained by one-shot (c10-math-0242) over other initial parameter settings for c10-math-0243 and c10-math-0244

c10-math-0245 c10-math-0246 c10-math-0247
Application domain c10-math-0248 (%) c10-math-0249 (%) c10-math-0250 (%) c10-math-0251 (%) c10-math-0252 (%) c10-math-0253 (%) c10-math-0254 (%) c10-math-0255 (%)
Security/defense 155 10 57 29 148 0.3 10 92
Health care 78 7 31 11 73 0.3 10 45
Ambient conditions monitoring 52 6 20 7 15 −7 −12 18

The percentage improvement attained by c10-math-0267 over all application domains and design spaces is 33% on average. Our one-shot methodology is the first approach (to the best of our knowledge) to intelligent initial tunable parameter value settings for sensor nodes to provide a good quality operating state, as arbitrary initial parameter value settings typically result in a poor operating state. Results reveal that on average c10-math-0268 gives a solution within 8% of the optimal solution obtained from an exhaustive search [73].

10.4.2.2 Comparison of One-Shot with Greedy Variants and SA-Based Dynamic Optimization Methodologies

In order to investigate the effectiveness of our one-shot methodology, we compare the one-shot solution's quality (indicated by the attained objective function value) with two other dynamic optimization methodologies, which leverage SA-based and greedy-based (denoted by c10-math-0269, where c10-math-0270 stands for ascending order of parameter exploration) exploration of the design space. We assign initial parameter value settings for the greedy- and SA-based methodologies as c10-math-0271 and c10-math-0272, respectively. Note that, for brevity, we present results for c10-math-0273 and c10-math-0274; however, other initial parameter settings such as c10-math-0275 and c10-math-0276 would yield similar trends when combined with greedy-based and SA-based design space exploration.

Figure 10.3 shows the objective function value normalized to the optimal solution (obtained from exhaustive search) versus the number of states explored for the one-shot, c10-math-0277, and SA algorithms for a security/defense system for c10-math-0278. The one-shot solution is within 1.8% of the optimal solution obtained from exhaustive search. The figure shows that c10-math-0279 and SA explore 11 states (1.51% of the design space) and 10 states (1.37% of the design space), respectively, to attain an equivalent or better quality solution than the one-shot solution. Although the greedy- and SA-based methodologies explore only a few states to reach a solution comparable to that of our one-shot methodology, the one-shot methodology is suitable when design space exploration is not an option due to an extremely large design space and/or extremely stringent computational, memory, and timing constraints. These results indicate that other arbitrary initial value settings (e.g., c10-math-0280, c10-math-0281) do not provide a good quality operating state and necessitate design space exploration by online algorithms (e.g., greedy) to provide a good quality operating state. We point out that if the greedy- and SA-based methodologies leverage our one-shot initial tunable parameter value settings c10-math-0282, further improvements over the one-shot solution can produce a very good quality (optimal or near-optimal) operating state [316].


Figure 10.3 Objective function value normalized to the optimal solution for a varying number of states explored for one-shot, greedy, and SA algorithms for a security/defense system where c10-math-0283, c10-math-0284, c10-math-0285, and c10-math-0286

Figure 10.4 shows the objective function value normalized to the optimal solution versus the number of states explored for a security/defense system for c10-math-0291. The one-shot solution is within 8.6% of the optimal solution. The figure shows that c10-math-0292 converges to a lower quality solution than the one-shot solution after exploring 9 states (0.029% of the design space) and SA explores 8 states (0.026% of the design space) to yield a better quality solution than the one-shot solution. These results reveal that the greedy exploration of parameters may not necessarily attain a better quality solution than our one-shot solution.


Figure 10.4 Objective function value normalized to the optimal solution for a varying number of states explored for one-shot, greedy, and SA algorithms for a security/defense system where c10-math-0287, c10-math-0288, c10-math-0289, and c10-math-0290

Figure 10.5 shows the objective function value normalized to the optimal solution versus the number of states explored for a healthcare application for c10-math-0293. The one-shot solution is within 2.1% of the optimal solution. The figure shows that c10-math-0294 converges to a solution of almost equal quality to the one-shot solution after exploring 11 states (1.5% of the design space), and SA explores 10 states (1.4% of the design space) to yield a solution of almost equal quality to the one-shot solution. These results indicate that, if the intelligent initial value settings leveraged by one-shot are not used, further exploration of the design space is required to find a solution of quality equivalent to the one-shot solution.


Figure 10.5 Objective function value normalized to the optimal solution for a varying number of states explored for one-shot, greedy, and SA algorithms for a healthcare application where c10-math-0295, c10-math-0296, c10-math-0297, and c10-math-0298

Figure 10.6 shows the objective function value normalized to the optimal solution versus the number of states explored for a healthcare application for c10-math-0299. The one-shot solution is within 1.6% of the optimal solution. The figure shows that c10-math-0300 converges to a lower quality solution than the one-shot solution after exploring 9 states (0.029% of the design space) and SA explores 6 states (0.019% of the design space) to yield a better quality solution than the one-shot solution. These results confirm that the greedy exploration of parameters may not necessarily attain a better quality solution than our one-shot solution.


Figure 10.6 Objective function value normalized to the optimal solution for a varying number of states explored for one-shot, greedy, and SA algorithms for a healthcare application where c10-math-0301, c10-math-0302, c10-math-0303, and c10-math-0304

Figure 10.7 shows the objective function value normalized to the optimal solution versus the number of states explored for an ambient conditions monitoring application for c10-math-0305. The one-shot solution is within 7.7% of the optimal solution. The figure shows that c10-math-0306 and SA converge to an equivalent or better quality solution than the one-shot solution after exploring 4 states (0.549% of the design space) and 10 states (1.37% of the design space), respectively. These results again confirm that greedy- and SA-based exploration can provide improved results over the one-shot solution, but require additional state exploration.


Figure 10.7 Objective function value normalized to the optimal solution for a varying number of states explored for one-shot, greedy, and SA algorithms for an ambient conditions monitoring application where c10-math-0307, c10-math-0308, c10-math-0309, and c10-math-0310

Figure 10.8 shows the objective function value normalized to the optimal solution versus the number of states explored for an ambient conditions monitoring application for c10-math-0311. The one-shot solution is within 24.7% of the optimal solution. The figure shows that both c10-math-0312 and SA converge to an equivalent or better quality solution than the one-shot solution after exploring 3 states (0.01% of the design space). These results indicate that both greedy and SA can give good quality solutions after exploring a very small percentage of the design space and both greedy-based and SA-based methods enable lightweight dynamic optimizations [316]. The results also indicate that the one-shot solution provides a good quality solution when further design space exploration is not possible due to resource constraints.


Figure 10.8 Objective function value normalized to the optimal solution for a varying number of states explored for one-shot, greedy, and SA algorithms for an ambient conditions monitoring application where c10-math-0313, c10-math-0314, c10-math-0315, and c10-math-0316

10.4.2.3 Comparison of Optimized Greedy (GD) with Greedy Variants and SA-Based Dynamic Optimization Methodologies

For comparison purposes, we implemented an SA-based algorithm, our greedy online optimization algorithm (GD) (which leverages intelligent initial parameter value selection, exploration ordering, and parameter arrangement), and several other greedy online algorithm variations (Table 10.4) in C++. We compare our results with SA to provide a relative comparison of our dynamic optimization methodology with a methodology that leverages an SA-based online optimization algorithm and arbitrary initial value settings. We point out that step 3 of our dynamic optimization methodology can use any lightweight algorithm (e.g., greedy-based, SA-based) in the improvement mode (Fig. 10.1). Although we present SA for comparison with the greedy algorithm, both of these algorithms are equally applicable to our dynamic optimization methodology. We compare GD results with different greedy algorithm variations (Table 10.4) to provide insight into how initial parameter value settings, exploration ordering, and parameter arrangement affect the final operating state quality. We normalize the objective function value (corresponding to the operating state) attained by the algorithms with respect to the optimal solution (objective function value corresponding to the optimal operating state) obtained from an exhaustive search.
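
For reference, the SA comparison baseline hinges on a probabilistic acceptance rule: improving moves are always accepted, and worsening moves are accepted with a probability that shrinks as the temperature cools. The sketch below is a generic geometric-cooling SA in Python; the chapter does not specify its exact schedule or neighbor move, so the step count, initial temperature, and cooling factor here are illustrative assumptions.

```python
# Hedged sketch of a maximizing simulated annealing loop with geometric cooling.
import math
import random

def sa_accept(delta, temperature, rng):
    if delta >= 0:                           # improving move: always accept
        return True
    return rng.random() < math.exp(delta / temperature)

def anneal(objective, neighbor, start, steps=100, t0=1.0, alpha=0.9, seed=0):
    rng = random.Random(seed)
    state, best = start, start
    t = t0
    for _ in range(steps):
        cand = neighbor(state, rng)          # propose a neighboring state
        if sa_accept(objective(cand) - objective(state), t, rng):
            state = cand
            if objective(state) > objective(best):
                best = state                 # track the best state seen so far
        t *= alpha                           # geometric cooling schedule
    return best
```

The occasional acceptance of worse states is what lets SA escape the local optima that trap greedy exploration, at the cost of evaluating more states, which matches the trends reported below.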

Table 10.4 Greedy algorithms with different parameter arrangements and exploration orders

Notation Description
GD Greedy algorithm with parameter exploration order c10-math-0317 and arrangement c10-math-0318
c10-math-0319 Explores parameter values in ascending order with arrangement c10-math-0320
c10-math-0321 Explores parameter values in ascending order with arrangement c10-math-0322
c10-math-0323 Explores parameter values in ascending order with arrangement c10-math-0324
c10-math-0325 Explores parameter values in descending order with arrangement c10-math-0326
c10-math-0327 Explores parameter values in descending order with arrangement c10-math-0328
c10-math-0329 Explores parameter values in descending order with arrangement c10-math-0330

Figure 10.9 shows the objective function values normalized to the optimal solution for the SA and greedy algorithms versus the number of states explored for a security/defense system for c10-math-0331. Results indicate that c10-math-0332, c10-math-0333, c10-math-0334, c10-math-0335, c10-math-0336, c10-math-0337, and GD converge to a steady-state solution (the objective function value corresponding to the operating state) after exploring 11, 10, 11, 10, 10, 9, and 8 states, respectively. We point out that, for brevity, we do not plot the results for each iteration and greedy algorithm variation; however, we obtained the results for all iterations and greedy algorithm variations. These convergence results show that GD converges to a final operating state slightly faster than the other greedy algorithms, exploring only 1.1% of the design space. c10-math-0338 and c10-math-0339 converge to solutions of almost equal quality to c10-math-0340 and c10-math-0341, showing that ascending or descending parameter value exploration and parameter arrangement do not significantly impact the solution quality for this application for c10-math-0342 = 729.


Figure 10.9 Objective function values normalized to the optimal solution for a varying number of states explored for SA and the greedy algorithms for a security/defense system where c10-math-0343, c10-math-0344, c10-math-0345, and c10-math-0346

Results also indicate that the SA algorithm outperforms all greedy algorithms and converges to the optimal solution after exploring 400 states or 55% of the design space. Figure 10.9 also verifies the ability of our methodology to determine a good quality, near-optimal solution in one-shot that is within 1.4% of the optimal solution. GD achieves only a 1.8% improvement over the initial state after exploring 8 states.

Figure 10.10 shows the objective function values normalized to the optimal solution for the SA and greedy algorithms versus the number of states explored for a security/defense system for c10-math-0347. Results reveal that GD converges to the final solution by exploring only 0.04% of the design space. c10-math-0348, c10-math-0349, and c10-math-0350 converge to better solutions than c10-math-0351, c10-math-0352, and c10-math-0353, showing that descending parameter value exploration and the parameter arrangements c10-math-0354, c10-math-0355, and c10-math-0356 are better for this application than ascending parameter value exploration and the parameter arrangements c10-math-0357, c10-math-0358, and c10-math-0359. This difference arises because a descending exploration order tends to select higher tunable parameter values, which increase the throughput considerably as compared to lower tunable parameter values. Since throughput is assigned a higher weight factor than lifetime for this application, better overall objective function values are attained.


Figure 10.10 Objective function values normalized to the optimal solution for a varying number of states explored for SA and greedy algorithms for a security/defense system where c10-math-0360, c10-math-0361, c10-math-0362, and c10-math-0363

Comparing Figs. 10.9 and 10.10 reveals that the design space size also affects the solution quality, in addition to the parameter value exploration order and parameter arrangement. For example, for c10-math-0364, the ascending and descending parameter value exploration orders and parameter arrangements result in comparable quality solutions, whereas for c10-math-0365, the descending parameter value exploration order results in higher quality solutions. Again, the SA algorithm outperforms all greedy algorithms and converges to the optimal solution for c10-math-0366 after exploring 100 states, or 0.3% of the design space. Figure 10.10 also verifies the ability of our methodology to determine a good quality, near-optimal solution in one-shot that is within 9% of the optimal solution. GD achieves only a 0.3% improvement over the initial state (one-shot solution) after exploring 11 states.

Results for a healthcare application for c10-math-0371 reveal that GD converges to the final solution slightly faster than other greedy algorithms, exploring only 1% of the design space. The SA algorithm outperforms the greedy algorithm variants after exploring 400 states or 55% of the design space for c10-math-0372, but the SA improvement over the greedy algorithm variants is insignificant as the greedy algorithm variants attain near-optimal solutions. Results indicate that the one-shot solution is within 2% of the optimal solution. GD achieves only a 2% improvement over the one-shot solution after exploring 8 states.

Results for a healthcare application for c10-math-0373 reveal that GD converges to the final solution by exploring only 0.0257% of the design space. The SA algorithm outperforms all greedy algorithms and converges to the optimal solution after exploring 100 states (0.3% of the design space). The one-shot solution is within 1.5% of the optimal solution. GD achieves only a 0.2% improvement over the one-shot solution after exploring 8 states.

Figure 10.11 shows the objective function values normalized to the optimal solution versus the number of states explored for an ambient conditions monitoring application for c10-math-0374. Results reveal that GD converges to the final solution slightly faster than the other greedy algorithms, exploring only 1.1% of the design space. c10-math-0375, c10-math-0376, and c10-math-0377 converge to higher quality solutions than c10-math-0378, c10-math-0379, and c10-math-0380 because the ascending exploration order tends to select lower tunable parameter values, which result in comparatively longer lifetimes than higher tunable parameter values. This longer lifetime results in higher lifetime objective function values and thus higher overall objective function values. We observe that some greedy algorithm variants attain higher quality solutions than GD after exploring more states, since c10-math-0381, c10-math-0382, c10-math-0383, and c10-math-0384 attain the optimal solution for c10-math-0385. This observation reveals that other arbitrary parameter arrangements and exploration orders may obtain better solutions than GD, but those arbitrary arrangements and exploration orders would not scale to different application domains with different weight factors and to different design space cardinalities. The SA algorithm outperforms c10-math-0386, c10-math-0387, and GD after exploring 400 states (55% of the design space). c10-math-0388, c10-math-0389, c10-math-0390, and c10-math-0391 attain optimal solutions. Our one-shot solution is within 8% of the optimal solution. GD achieves only a 2% improvement over the one-shot solution after exploring 8 states.


Figure 10.11 Objective function values normalized to the optimal solution for a varying number of states explored for SA and greedy algorithms for an ambient conditions monitoring application where c10-math-0367, c10-math-0368, c10-math-0369, and c10-math-0370

Results for an ambient conditions monitoring application for c10-math-0392 indicate that GD converges to the optimal solution after exploring 13 states (0.04% of the design space), a 17% improvement over the one-shot solution. The one-shot solution is within 14% of the optimal solution. c10-math-0393, c10-math-0394, and c10-math-0395 converge to better solutions than c10-math-0396, c10-math-0397, and c10-math-0398 for reasons similar to those for c10-math-0399. SA converges to a near-optimal solution after exploring 400 states (1.3% of the design space).

The results for different application domains and design spaces verify that the one-shot mode provides a high-quality solution that is within 8% of the optimal solution averaged over all application domains and design space cardinalities. These results also verify that improvements can be achieved over the one-shot solution during the improvement mode. The results indicate that GD may explore more states than other greedy algorithms if state exploration provides a noticeable improvement over the one-shot solution. The results also provide insight into the convergence rates and reveal that even though the design space cardinality increases by 43×, both heuristic algorithms (greedy and SA) still explore only a small percentage of the design space and result in high-quality solutions. Furthermore, although SA outperforms the greedy algorithms after exploring a comparatively larger portion of the design space, GD still provides an optimal or near-optimal solution with significantly less design space exploration. These results advocate the use of GD as a design space exploration algorithm for constrained applications, whereas SA can be used for relatively less constrained applications. We point out that both GD and SA are online algorithms for dynamic optimization and are suitable for larger design spaces as compared to other stochastic algorithms, such as MDP-based algorithms, which are only suitable for restricted (comparatively smaller) design spaces [69].

10.4.2.4 Computational Complexity

We analyze the relative complexity of the algorithms by measuring their execution time and data memory requirements. We perform data memory analysis for each step of our dynamic optimization methodology. Our data memory analysis assumes an 8-bit processor for sensor nodes with integer data types requiring 2 B of storage and float data types requiring 4 B of storage. Analysis reveals that the one-shot solution (step 1) requires only 150, 188, 248, and 416 B, whereas step 2 requires 94, 140, 200, and 494 B for (number of tunable parameters c10-math-0401, number of application metrics c10-math-0402) equal to (3, 2), (3, 3), (6, 3), and (6, 6), respectively. GD in step 3 requires 458, 528, 574, 870, and 886 B, whereas SA in step 3 requires 514, 582, 624, 920, and 936 B of storage for design space cardinalities of 8, 81, 729, 31,104, and 46,656, respectively.

The data memory analysis shows that SA has comparatively larger memory requirements than the greedy algorithm. Our analysis reveals that the data memory requirements for all three steps of our dynamic optimization methodology increase linearly as the number of tunable parameters, tunable values, and application metrics, and thus the design space, increase. The analysis verifies that although all three steps of our dynamic optimization methodology have low data memory requirements, the one-shot solution in step 1 requires 361% less memory on average.

We measured the execution time for all three steps of our dynamic optimization methodology averaged over 10,000 runs (to smooth any discrepancies in execution time due to operating system overheads) on an Intel Xeon CPU running at 2.66 GHz [326] using the Linux/Unix time command [327]. We scaled these execution times to the Atmel ATmega1281 microcontroller [77] running at 8 MHz. Although scaling does not provide 100% accuracy for the microcontroller runtime because of different instruction set architectures and memory subsystems, scaling provides reasonable runtime estimates and enables relative comparisons. Results showed that steps 1 and 2 required 1.66 ms and 0.332 ms, respectively, both for c10-math-0403 and c10-math-0404. For step 3, we compared GD with SA. GD explored 10 states and required 0.887 ms and 1.33 ms on average to converge to the solution for c10-math-0405 and c10-math-0406, respectively. SA required 2.76 ms and 2.88 ms to explore the first 10 states (to provide a fair comparison with GD) for c10-math-0407 and c10-math-0408, respectively. The other greedy algorithms required comparatively more time than GD because they required more design state exploration to converge than GD; however, all the greedy algorithms required less execution time than SA.
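
The frequency-based scaling used above can be sketched directly. As the paragraph notes, dividing host time by the clock-frequency ratio ignores instruction set and memory-system differences, so the result is only a relative estimate; the 5 μs host measurement in the example is an illustrative assumption chosen so that the scaled value lands near the reported 1.66 ms.

```python
# Sketch of the clock-ratio runtime scaling from a 2.66 GHz Xeon host to an
# 8 MHz ATmega1281 target. This is a rough relative estimate, not a cycle-
# accurate prediction.
def scale_runtime(t_host_s, f_host_hz=2.66e9, f_target_hz=8e6):
    return t_host_s * (f_host_hz / f_target_hz)

# e.g., a hypothetical 5 us host measurement scales to about 1.66 ms on the target
scaled_ms = scale_runtime(5e-6) * 1e3
```

The ratio 2.66 GHz / 8 MHz = 332.5 means every host millisecond becomes roughly a third of a second on the microcontroller, which is why millisecond-scale scaled times indicate a genuinely lightweight algorithm.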

To verify that our dynamic optimization methodology is lightweight, we compared the execution time results for all three steps of our dynamic optimization methodology with the exhaustive search. The exhaustive search required 29.526 ms and 2.765 s for c10-math-0409 and c10-math-0410, respectively, which gives speedups of 10c10-math-0411 and 832c10-math-0412, respectively, for our dynamic optimization methodology. The execution time analysis reveals that all three steps of our dynamic optimization methodology require execution time on the order of milliseconds, and the one-shot solution requires 138% less execution time on average as compared to all three steps of the dynamic optimization methodology. Execution time savings attained by the one-shot solution as compared to the three steps of our dynamic optimization methodology are 73% and 186% for GD and SA, respectively, when c10-math-0413, and are 100% and 138% for GD and SA, respectively, when c10-math-0414. These results indicate that the design space cardinality affects the execution time linearly, and our dynamic optimization methodology's advantage increases as the design space cardinality increases. We verified our execution time analysis using the clock() function [328], which confirmed similar trends.

To further verify that our dynamic optimization methodology is lightweight, we calculate the energy consumption for the two modes of our methodology: the one-shot mode and the improvement mode with either a GD- or SA-based online algorithm. We calculate the energy consumption E_m for an Atmel ATmega1281 microcontroller [77] operating at V_p = 2.7 V and f_p = 8 MHz as E_m = V_p · I_a · T_m, where I_a and T_m denote the processor's active current and the execution time of the methodology's operating mode at (V_p, f_p), respectively (we observed similar trends for other processor voltage and frequency settings). We point out that our energy calculations consider the execution time for exploring the first 10 states for both the GD- and SA-based online algorithms, as both the GD and SA algorithms attained near-optimal results after exploring 10 states both for |S| = 729 and |S| = 31,104. Table 10.5 summarizes the energy calculations for the different modes of our dynamic optimization methodology as well as for the exhaustive search for |S| = 729 and |S| = 31,104. Our calculations assume a sensor node battery energy of E_b joules (computed using our application metrics estimation model). Results indicate that one-shot consumes 1679% and 166,510% less energy as compared to the exhaustive search for |S| = 729 and |S| = 31,104, respectively. Improvement mode using GD as the online algorithm consumes 926% and 83,135% less energy as compared to the exhaustive search for |S| = 729 and |S| = 31,104, respectively. Improvement mode using SA as the online algorithm consumes 521% and 56,656% less energy as compared to the exhaustive search for |S| = 729 and |S| = 31,104, respectively.
Furthermore, for |S| = 31,104, our dynamic optimization methodology using GD as the online algorithm can be executed 832× more times than the exhaustive search and 1.47× more times than when using SA as the online algorithm. These results verify that our dynamic optimization methodology is lightweight and can theoretically be executed on the order of a million times even on energy-constrained sensor nodes.
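The energy arithmetic behind these comparisons can be sketched directly from Table 10.5. The active power below is an inference recovered from the one-shot row of the table (23.75 µJ over 1.66 ms, i.e., about 14.31 mW), rather than a value quoted from the ATmega1281 datasheet:

```python
# Energy model E_m = V_p * I_a * T_m. Multiplying power in mW by time in
# ms yields energy in microjoules. POWER_MW (= V_p * I_a) is recovered
# from the one-shot row of Table 10.5, an inference rather than a quoted
# datasheet figure.
POWER_MW = 23.75 / 1.66   # ~14.31 mW implied by the one-shot row

def energy_uj(exec_ms):
    """Energy (uJ) consumed by a mode that runs for exec_ms milliseconds."""
    return POWER_MW * exec_ms

def pct_less_energy(mode_uj, exhaustive_uj):
    """'X% less energy' as used in the text: exhaustive overhead relative to the mode."""
    return 100 * (exhaustive_uj - mode_uj) / mode_uj
```

For instance, `pct_less_energy(23.75, 422.52)` recovers the 1679% figure for one-shot versus exhaustive search at |S| = 729, and the ratio of two modes' energies gives the relative number of runs each mode permits on a fixed battery budget.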

Table 10.5 Energy consumption for the one-shot and the improvement modes of our dynamic optimization methodology

Mode               | T_m (ms) | E_m (µJ) | E_m/E_b | N_m
-------------------|----------|----------|---------|-----
One-shot           | 1.66     | 23.75    | …       | …
GD_729             | 2.879    | 41.2     | …       | …
SA_729             | 4.752    | 68       | …       | …
Exhaustive_729     | 29.526   | 422.52   | …       | …
GD_31,104          | 3.322    | 47.54    | …       | …
SA_31,104          | 4.872    | 69.72    | …       | …
Exhaustive_31,104  | 2,765    | 39,570   | …       | …

GD_|S| and SA_|S| denote the improvement mode using GD and SA as the online algorithms, respectively, for design space cardinality |S| ∈ {729, 31,104}. Exhaustive_|S| denotes the exhaustive search for |S| ∈ {729, 31,104}. E_m/E_b denotes the fraction of the sensor node's battery energy E_b consumed in an operating mode, and N_m denotes the maximum number of times (runs) our dynamic optimization methodology can be executed in a given mode depending on the sensor node's battery energy

10.5 Chapter Summary

In this chapter, we proposed a lightweight dynamic optimization methodology for EWSNs, which provides a high-quality solution in one shot using an intelligent initial tunable parameter value setting for highly constrained applications. We also proposed an online greedy optimization algorithm that leverages intelligent design space exploration techniques to iteratively improve on the one-shot solution for less constrained applications. Results showed that our one-shot solution is near-optimal, within 8% of the optimal solution on average. Compared with simulated annealing (SA) and different greedy algorithm variations, the one-shot solution yielded improvements as high as 155% over other arbitrary initial parameter settings. Results indicated that our greedy algorithm converged to the optimal or near-optimal solution after exploring only 1% and 0.04% of the design space, whereas SA explored 55% and 1.3% of the design space, for design space cardinalities of 729 and 31,104, respectively. Data memory and execution time analysis revealed that our one-shot solution (step 1) required 361% and 85% less data memory and execution time, respectively, when compared to using all three steps of our dynamic optimization methodology. Furthermore, one-shot consumed 1679% and 166,510% less energy as compared to the exhaustive search for |S| = 729 and |S| = 31,104, respectively. Improvement mode using GD as the online algorithm consumed 926% and 83,135% less energy as compared to the exhaustive search for |S| = 729 and |S| = 31,104, respectively. Computational complexity along with the execution time, data memory, and energy consumption analyses confirmed that our methodology is lightweight and thus feasible for sensor nodes with limited resources.
