In this section, applications lie within the range 0 < H < 1, and we use two different 2D fBm Hölder functions (polynomial [49–51] and exponential [49–51]) that take as input the points (x, y) of the dataset and provide as output the desired regularity; these functions are used to expose the dataset's singularities:

(a) A polynomial of degree n is a function of the form

$$H_1(x, y) = a_n x^n y^n + a_{n-1} x^{n-1} y^{n-1} + \cdots + a_2 x^2 y^2 + a_1 x y + a_0 \qquad (11.12)$$

where $a_0, \ldots, a_n$ are constant values.

(b) The general definition of the exponential function is [52]

$$H_2(x, y) = \frac{L}{1 + e^{-k(x - x_0 - y)}} \qquad (11.13)$$

where e is the base of the natural logarithm, x0 is the x-value of the sigmoid midpoint, L is the maximum value of the curve and k is the steepness of the curve.
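For concreteness, the following is a minimal Python sketch of these two regularity functions; the function names, the NumPy vectorization and the clipping that keeps H inside (0, 1) are our own illustrative choices, not FracLab's implementation.

```python
import numpy as np

def holder_polynomial(x, y, coeffs):
    """Polynomial Hoelder function, eq. (11.12):
    H1(x, y) = a_0 + a_1*(x*y) + ... + a_n*(x*y)**n,
    with coeffs = [a_0, a_1, ..., a_n]."""
    xy = np.asarray(x) * np.asarray(y)
    H = sum(a * xy**k for k, a in enumerate(coeffs))
    return np.clip(H, 1e-3, 1 - 1e-3)   # regularity must stay inside (0, 1)

def holder_exponential(x, y, L=1.0, k=1.0, x0=0.0):
    """Sigmoid Hoelder function, eq. (11.13):
    H2(x, y) = L / (1 + exp(-k * (x - x0 - y)))."""
    H = L / (1.0 + np.exp(-k * (np.asarray(x) - x0 - np.asarray(y))))
    return np.clip(H, 1e-3, 1 - 1e-3)
```

Evaluated on a normalized (x, y) grid, either function yields the prescribed regularity map that drives the 2D mBm synthesis.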

2D fBm Hölder function coefficient value interval: Notice that, in the case H = 1/2, B is a planar Brownian motion. We will always assume that H > 1/2. In this case, it is known that B is a transient process (see [53]). In this respect, the study of the 2D fBm with Hurst parameter H > 1/2 could seem much simpler than the study of the planar Brownian motion [54–56].

11.3.3.1 Diffusion-limited aggregation

Witten and Sander [50] introduced the diffusion-limited aggregation (DLA) model, also called Brownian trees, to simulate cluster formation processes; it relied on a sequence of computer-generated random walks on a lattice that simulated the diffusion of a particle. Tacitly, each random walk's step size was fixed at one lattice unit. A nonlattice version of the DLA simulation was conducted by Meakin [57], who limited the step size to one particle diameter. However, diffusion steps in real cluster formation processes may exceed this diameter, since the step size depends on physical parameters of the system such as concentration, temperature, particle size and pressure. For this reason, it has become important to characterize the effects of the step size on the various cluster formation processes.

The flowchart for the DLA algorithm is given in Figure 11.21.

Based on the flowchart shown in Figure 11.21, the steps applied in the DLA algorithm are as follows:

Figure 11.21: The flowchart for the DLA algorithm.

The general structure of the DLA algorithm is shown in Figure 11.22. Let us try to figure out how the DLA algorithm is applied to the dataset; Figure 11.23 illustrates this.

Step (1) The particle count for the dataset's DLA algorithm is chosen.

Steps (2–18) Set the random number seeds based on the current time and points.

Steps (19–21) If you do not strike an existing point, save intermediate images for your dataset.
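A minimal Python sketch of these steps is given below, under our own simplifying assumptions (a square lattice, four-neighbour sticking and a kill circle at twice the launch radius); it is an illustration of the flowchart, not the code used to generate the figures.

```python
import math
import random

def dla(n_particles=500, size=201, step=1, seed=42):
    """Minimal lattice DLA sketch: walkers take random steps of the
    given size and stick when a lattice neighbour is occupied."""
    random.seed(seed)                       # Steps (2-18): seed the RNG
    c = size // 2
    grid = [[False] * size for _ in range(size)]
    grid[c][c] = True                       # seed particle at the centre
    cluster_r = 1
    neigh = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def launch():
        # start walkers on a small circle just outside the cluster
        a = random.uniform(0.0, 2.0 * math.pi)
        r = cluster_r + 5
        return int(c + r * math.cos(a)), int(c + r * math.sin(a))

    for _ in range(n_particles):            # Step (1): particle count
        x, y = launch()
        while True:
            dx, dy = random.choice(neigh)
            x, y = x + dx * step, y + dy * step
            # grab errant walkers on a circle twice the launch radius
            if (not 1 <= x < size - 1 or not 1 <= y < size - 1
                    or math.hypot(x - c, y - c) > 2 * (cluster_r + 5)):
                x, y = launch()
                continue
            if any(grid[x + i][y + j] for i, j in neigh):
                grid[x][y] = True           # Steps (19-21): stick the walker
                cluster_r = max(cluster_r, int(math.hypot(x - c, y - c)))
                break
    return grid
```

Comparing, say, dla(step=1) with dla(step=4) illustrates the step-size effect discussed below: a larger step lets a walker travel farther in one direction before sticking, producing a more compact cluster.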

The DLA model has a very large step size. In this book, we concentrate on the step-size effects on the structure of clusters formed by the DLA model for the following datasets: the MS dataset, the economy (U.N.I.S.) dataset and the WAIS-R dataset. We have also considered the two-length-scale concept suggested by Bensimon et al. [58]. A diffusing particle is deemed to adhere to an aggregate when the center of the diffusing particle is within the sticking length a of any member particle of the aggregate. The particle diameter is normally taken as the sticking length on the basis of the hard-sphere concept. Specific to diffusion-related processes, the other length scale is the mean free path l0 of the diffusing particle, termed the step size of the random walks. We look at the case l0 ≫ a (the extreme case studied by Bensimon et al. [58]) because the constraint l0 ≪ a might not be valid for all physical systems.

Figure 11.22: General DLA algorithm.
Figure 11.23: DLA particle clusters are formed by randomly moving particles in the direction of a target area until they strike the existing structures and become part of the clusters.

DLA belongs to the family of fractals termed stochastic fractals, owing to their formation by random processes.

DLA method in data analysis: Let us apply the DLA method to the datasets comprising numeric data: the economy (U.N.I.S.) dataset (Table 2.8), the MS dataset (see Table 2.12) and the WAIS-R dataset (see Table 2.19). The nonlattice DLA computer simulations were performed with step sizes ranging from 128 to 256 points for 10,000 particles.

The clusters in Figures 11.25(a) and (b), 11.26(a) and (b) and 11.27(a) and (b) have a highly open structure, so one might imagine that a motion with a small step size could easily move a particle into the central region of the cluster. On the other hand, the direction of each movement is random, so it is very improbable for a particle to retain its incident direction over a number of steps. It will wander around in a certain domain, producing a considerably extensive incident beam. Thus, after short random walks a particle generally traces the outer branches of the growing cluster rather than entering the cluster core. With a larger step size, the particle may travel a longer distance in a single direction.

Here we can have a look at our implementation of DLA for the MS dataset, the economy (U.N.I.S.) dataset and the WAIS-R dataset in FracLab [51]:

(a) At each step, the particles change their position by means of two motions. A random motion is selected from the 2D numeric matrices randomly produced from the economy (U.N.I.S.), MS and WAIS-R datasets. The fixed set of transforms makes the process faster in a way, allowing more evident structures to form than would be possible if the motion were merely random.
(b) An attractive motion draws the particle toward the center of the screen; otherwise, the rendering time would be excessively long.

While building the set of transforms, whenever one adds (a, b), it is also important to insert (−a, −b). In this way, the attraction force toward the center remains the primary influence on the particle. There is a further reason for this provision: without it, there is a substantial number of cases in which the particle never gets close enough to the standing cluster.
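A short sketch of this provision follows; the helper name and the step bounds are our own, hypothetical choices.

```python
import random

def make_transforms(n_pairs=8, max_step=3, seed=1):
    """Build the fixed move set: for every random (a, b) also insert
    (-a, -b), so the random drift cancels on average and the central
    attraction remains the dominant influence on the particle."""
    random.seed(seed)
    moves = []
    for _ in range(n_pairs):
        a = random.randint(-max_step, max_step)
        b = random.randint(-max_step, max_step)
        moves += [(a, b), (-a, -b)]
    return moves
```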

Figure 11.24: General DLA algorithm for the economy dataset.

Analogously, for the economy dataset we have the following algorithm (Figure 11.24).

The DLA algorithm steps are as follows:

Step (1) The particle count for the economy dataset's DLA algorithm is selected as 10,000.

Steps (2–18) Set the random number seeds based on the current time; 128 points and 256 points are introduced for the application of the economy dataset DLA. We repeat until we hit another pixel; the objective is to move in a random direction. Here x represents the width of the DLA 128- and 256-point image and y represents its height.

Steps (19–21) If we do not strike an existing point, save intermediate images for the economy dataset. Return true if there are no neighboring pixels, and get the positions for 128 points and 256 points.

Figure 11.25: Economy (U.N.I.S.) dataset random cluster of 10,000 particles generated using the DLA model with the step size of (a) 128 points and (b) 256 points. (a) Economy (U.N.I.S.) dataset 128 points DLA and (b) economy (U.N.I.S.) dataset 256 points DLA.

Figure 11.25(a) and (b) shows the structure of 10,000-particle clusters produced by the DLA model with step sizes of 128 points and 256 points for the economy (U.N.I.S.) dataset.

The figures show that the cluster structure becomes more compact as the step size increases. Figure 11.26(a) shows the radius of the cluster as a function of the 10,000 launched particles for the DLA model with a step size of 128 points on the economy (U.N.I.S.) dataset, and Figure 11.26(b) shows the same for a step size of 256 points.

Numerical simulations of the DLA process for the economy (U.N.I.S.) dataset were initially performed in 2D and are reported in plots of a typical aggregate (Figure 11.25(a) and (b)). Nonlattice DLA computer simulations were conducted with step sizes ranging from 128 to 256 points for 10,000 particles. Figure 11.25(a) and (b) presents the structure of the 10,000-particle clusters produced by the DLA model with step sizes of 128 and 256 points; the cluster structure becomes more compact as the step size increases. Figure 11.26(a) and (b) shows the radius of the cluster as a function of the 10,000 launched particles for step sizes of 128 and 256 points. At every step of the calculation, particles must be launched randomly and allowed to walk at random, but stopped when they reach a perimeter site, or restarted when they wander away. For launching, it is adequate to start walkers on a small circle enclosing the existing cluster. We select another circle, twice the radius of the cluster, to grab the errant particles. Figure 11.26(a) and (b) shows that the logarithm of the cluster radius for the 128-point DLA increases relative to that of the 256-point DLA for the economy (U.N.I.S.) dataset.

Figure 11.26: Radius of cluster as a function of the launched 10,000 particles by the DLA model with step size of (a) 128 points and (b) 256 points for economy (U.N.I.S.) dataset.

The DLA algorithm for the MS dataset is given in Figure 11.27.

Figure 11.27: General DLA algorithm for the MS dataset.

The MS dataset provided in Table 2.12 is processed in line with the DLA algorithm steps (Figure 11.27) as follows:

Step (1) The particle count for the MS dataset's DLA algorithm is selected as 10,000.

Steps (2–18) Set the random number seeds based upon the current time; 128 points and 256 points are introduced for the application of the MS dataset DLA. We repeat until we hit another pixel; the objective is to move in a random direction. Here x represents the width of the DLA 128- and 256-point image and y represents its height.

Steps (19–21) If we do not strike an existing point, save intermediate images for the MS dataset. Return true if there are no neighboring pixels, and obtain the positions for 128 points and 256 points.

Figure 11.28(a) and (b) shows the structure of 10,000-particle clusters produced by the DLA model with step size of 128 points and 256 points for the MS dataset.

Figure 11.28: MS dataset random cluster of 10,000 particles produced using the DLA model with the step size of (a) 128 points and (b) 256 points. (a) MS dataset 128 points DLA and (b) MS dataset 256 points DLA.

The figures show that the cluster structure becomes more compact as the step size increases. Figure 11.29(a) shows the radius of the cluster as a function of the 10,000 launched particles for the DLA model with a step size of 128 points on the MS dataset, and Figure 11.29(b) shows the same for a step size of 256 points.

Numerical simulations of the DLA process for the MS dataset were initially performed in 2D and are reported in plots of a typical aggregate (Figure 11.28(a) and (b)). Nonlattice DLA computer simulations were conducted with step sizes ranging from 128 to 256 points for 10,000 particles. Figure 11.28(a) and (b) presents the structure of the 10,000-particle clusters produced by the DLA model with step sizes of 128 and 256 points for the MS dataset; the cluster structure becomes more compact as the step size increases. Figure 11.29(a) and (b) shows the radius of the cluster as a function of the 10,000 launched particles for step sizes of 128 and 256 points. At every step of the calculation, particles must be launched randomly and allowed to walk at random, but stopped when they reach a perimeter site, or restarted when they wander away. For launching, it is adequate to start walkers on a small circle enclosing the existing cluster.

Figure 11.29: Radius of cluster as a function of the launched 10,000 particles by the DLA model with step size of (a) 128 points and (b) 256 points for the MS dataset.

We select another circle, twice the radius of the cluster, to grab the errant particles. Figure 11.29(a) and (b) shows that the logarithm of the cluster radius for the 128-point DLA increases relative to that of the 256-point DLA for the MS dataset. The general DLA algorithm for the WAIS-R dataset is given in Figure 11.30.

Figure 11.30: General DLA algorithm for the WAIS-R dataset.

The steps of the DLA algorithm are as follows:

Step (1) The particle count for the WAIS-R dataset's DLA algorithm is selected as 10,000.

Steps (2–18) Set the random number seeds based on the current time; 128 points and 256 points are introduced for the application of the WAIS-R dataset DLA. We repeat until we hit another pixel; the aim is to move in a random direction. Here x represents the width of the DLA 128- and 256-point image and y represents its height.

Steps (19–21) If we do not strike an existing point, save intermediate images for the WAIS-R dataset. Return true if there are no neighboring pixels, and obtain the positions for 128 points and 256 points.

Figure 11.31(a) and (b) shows the structure of 10,000-particle clusters produced by the DLA model with step size of 128 points and 256 points for the WAIS-R dataset.

Figure 11.31: WAIS-R dataset random cluster of 10,000 particles produced using the DLA model with the step size of (a) 128 points and (b) 256 points. (a) WAIS-R dataset 128 points DLA and (b) WAIS-R dataset 256 points DLA.

The figures show that the cluster structure becomes more compact as the step size increases. Figure 11.32(a) shows the radius of the cluster as a function of the 10,000 launched particles for the DLA model with a step size of 128 points on the WAIS-R dataset, and Figure 11.32(b) shows the same for a step size of 256 points.

Numerical simulations of the DLA process for the WAIS-R dataset were initially performed in 2D and are reported in plots of a typical aggregate (Figure 11.31(a) and (b)). Nonlattice DLA computer simulations were conducted with step sizes ranging from 128 to 256 points for 10,000 particles. Figure 11.31(a) and (b) presents the structure of the 10,000-particle clusters produced by the DLA model with step sizes of 128 and 256 points for the WAIS-R dataset; the cluster structure becomes more compact as the step size increases. Figure 11.32(a) and (b) shows the radius of the cluster as a function of the 10,000 launched particles for step sizes of 128 and 256 points. At every step of the calculation, particles must be launched randomly and allowed to walk at random, but stopped when they reach a perimeter site, or restarted when they wander away. For launching, it is adequate to start walkers on a small circle enclosing the existing cluster. We select another circle, twice the radius of the cluster, to grab the errant particles. Figure 11.32(a) and (b) shows that the logarithm of the cluster radius for the 128-point DLA increases relative to that of the 256-point DLA for the WAIS-R dataset.

Figure 11.32: Radius of cluster as a function of the launched 10,000 particles by the DLA model with step size of (a) 128 points and (b) 256 points for the WAIS-R dataset.

11.4 Multifractal analysis with LVQ algorithm

In this section, we consider the classification of a dataset through the multifractal Brownian motion synthesis 2D multifractal analysis as follows:

(a)Multifractal Brownian motion synthesis 2D is applied to the data.
(b) Brownian motion Hölder regularity (polynomial and exponential) for the analysis is applied to the data for the purpose of identifying singularities in the data.
(c)Singularities are classified through the Kohonen learning vector quantization (LVQ) algorithm. The best matrix size (which yields the best classification result) is selected for the new dataset while applying the Hölder functions (polynomial and exponential) on the dataset.
Two multifractal Brownian motion synthesis 2D functions are provided below:

General polynomial Hölder function:

$$H_1(x, y) = a_n x^n y^n + a_{n-1} x^{n-1} y^{n-1} + \cdots + a_2 x^2 y^2 + a_1 x y + a_0 \qquad (11.12)$$

The coefficient values in eq. (11.12) ($a_n, a_{n-1}, \ldots, a_0$) are derived from the results of the LVQ algorithm according to the 2D fBm Hölder function coefficient value interval. They are the coefficient values that yield the most accurate result. With these values, the definition is made as follows:

$$(a_1 = 0.8,\ a_0 = 0.5); \quad H_1(x, y) = 0.5 + 0.8 \times x \times y \qquad (11.13)$$

General exponential Hölder function:

$$H_2(x, y) = \frac{L}{1 + e^{-k(x - x_0 - y)}} \qquad (11.14)$$

The coefficient values in eq. (11.14) ($x_0$, $L$ and $k$) are derived from the results of the LVQ algorithm according to the 2D fBm Hölder function coefficient value interval. They are the coefficient values that yield the most accurate result. With these values, the definition is made as follows:

$$(L = 0.6,\ k = 100,\ x_0 = 0.5); \quad H_2(x, y) = 0.6 + \frac{0.6}{1 + \exp(-100 \times (x - 0.5 - y))} \qquad (11.15)$$

Two-dimensional fBm Hölder function coefficient values (see eqs. (11.12) and (11.14)) are applied and chosen according to the 2D fBm Hölder function coefficient value interval. With the coefficient values selected in eqs. (11.13) and (11.15), the LVQ algorithm was applied to the newly singled-out datasets (p_New Dataset and e_New Dataset). According to the 2D fBm Hölder function coefficient value interval, the coefficient values are chosen among the coefficients that yield the most accurate results in the LVQ algorithm application. Equations (11.13) and (11.15) show that the coefficient values change based on the accuracy rates derived from the algorithm applied to the data.

How can we single out the singularities when the multifractal Brownian motion synthesis 2D functions (polynomial and exponential) are applied to the dataset (D)?

As explained in Chapter 2, D(K × L) represents a dataset with rows indexed by i = 0, 1, ..., K in x: $i_0, i_1, \ldots, i_K$ and columns indexed by j = 0, 1, ..., L in y: $j_0, j_1, \ldots, j_L$. The row index x: i gives the entry (sample) in a line, and the column index y: j gives the attribute in a column (see Figure 11.33). A matrix is a 2D array of numbers; when the array has K rows and L columns, the matrix size is said to be K × L.

Figure 11.33 shows the D dataset (4 × 5) depicted with row (x) and column (y) numbers. As datasets we consider the MS dataset, the economy (U.N.I.S.) dataset and the WAIS-R dataset.

Figure 11.33: An example D dataset presented with x rows and y columns.
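As a toy illustration of this layout (the numeric values are invented for the example), such a 4 × 5 matrix D can be written as:

```python
import numpy as np

# Toy 4 x 5 dataset D: K = 4 rows are samples indexed by x: i,
# and L = 5 columns are attributes indexed by y: j.
D = np.array([
    [0.2, 1.5, 3.1, 0.7, 2.2],
    [0.9, 1.1, 2.8, 0.4, 1.9],
    [0.5, 1.8, 3.3, 0.6, 2.5],
    [0.1, 1.2, 2.9, 0.8, 2.0],
])
K, L = D.shape   # (4, 5): matrix size K x L
```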

The process of singling out the singularities obtained from applying the multifractal Hölder regularity functions to these three different datasets can be summarized as follows:

The steps in Figure 11.34 are described in detail as follows:

Step (i) Application of multifractal Brownian motion synthesis 2D to the data. Multifractal Brownian motion synthesis 2D is described as follows: first, calculate the fractal dimension of the original dataset D (D = [$x_1, x_2, \ldots, x_m$]) using eqs. (11.12) and (11.13). Next, form the new dataset (p_New Dataset and e_New Dataset) by selecting attributes that bring about the minimum change in the current dataset's fractal dimension, until the number of remaining features is the upper bound of the fractal dimension D: $D_{new} = [x_1^{new}, x_2^{new}, \ldots, x_n^{new}]$.
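As an illustration of this attribute-selection idea, the sketch below greedily removes the attribute whose removal changes a fractal-dimension estimate the least; the correlation-dimension estimator and the stopping rule are our own stand-ins for the chapter's exact procedure.

```python
import numpy as np

def correlation_dim(X, radii=np.logspace(-1.5, 0, 8)):
    """Rough correlation-dimension estimate of X (rows = samples),
    used as a stand-in for the chapter's fractal dimension D."""
    X = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12)  # scale to [0, 1]
    dists = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    counts = np.array([(dists < r).mean() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(counts + 1e-12), 1)
    return slope

def select_attributes(X):
    """Greedy backward elimination: repeatedly drop the attribute whose
    removal changes the fractal dimension least, until the number of
    remaining attributes reaches ceil(D) (hypothetical stopping rule)."""
    cols = list(range(X.shape[1]))
    target = int(np.ceil(correlation_dim(X)))
    while len(cols) > max(target, 1):
        base = correlation_dim(X[:, cols])
        changes = [abs(correlation_dim(X[:, [c for c in cols if c != j]]) - base)
                   for j in cols]
        cols.pop(int(np.argmin(changes)))   # minimum-change attribute goes
    return cols
```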

Figure 11.34: Classification of the dataset with the application of multifractal Hölder regularity through LVQ algorithm.

Step (ii) Application of the multifractal Brownian motion synthesis 2D algorithm to the new data (see [50] for more information).

It is possible for us to perform the classification of the new dataset. For this, let us have a closer look at the steps provided in Figure 11.35.

Figure 11.35: General LVQ algorithm for the newly produced data.

Step (iii) The new datasets produced from the singled-out singularities (p_New Dataset and e_New Dataset) are finally classified using the Kohonen LVQ algorithm (Figure 11.35).

Figure 11.35 explains the steps of the LVQ algorithm as follows:

Step (1) Initialize the reference vectors (several strategies are discussed shortly). Initialize the learning rate γ (default 0).

Step (2) While the stopping condition is false, follow Steps (2–6).

Step (3) For each training input vector $x_{new}$, follow Steps (3–4).

Step (4) Find $j$ so that $\|x_{new} - w_j\|$ is a minimum.

Step (5) Update $w_j$.

Steps (6–9) If $(Y = y_j)$, then

$$w_j(\mathrm{new}) = w_j(\mathrm{old}) + \gamma \left[ x_{new} - w_j(\mathrm{old}) \right]$$

else

$$w_j(\mathrm{new}) = w_j(\mathrm{old}) - \gamma \left[ x_{new} - w_j(\mathrm{old}) \right]$$

Step (10) The stopping condition may assume a fixed number of iterations (i.e., executions of Step (2)) or the learning rate reaching a satisfactorily small value.
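A compact Python sketch of Steps (1)–(10) follows; the prototype initialization (first sample of each class, so that prototype j carries label j) and the geometric learning-rate decay are our own assumptions, since the listing leaves them open.

```python
import numpy as np

class LVQ1:
    """Minimal LVQ1 sketch following Steps (1)-(10) above.
    Assumes class labels are integers 0 .. n_classes - 1."""
    def __init__(self, n_classes, gamma=0.1, epochs=50):
        self.n_classes, self.gamma, self.epochs = n_classes, gamma, epochs

    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        # Step (1): one reference vector w_j per class, from the data
        self.w = np.stack([X[y == c][0] for c in range(self.n_classes)])
        g = self.gamma
        for _ in range(self.epochs):            # Step (2): stopping condition
            for x_new, t in zip(X, y):          # Step (3): each training vector
                # Step (4): winner j minimises ||x_new - w_j||
                j = int(np.argmin(np.linalg.norm(self.w - x_new, axis=1)))
                if j == t:                      # Steps (5)-(9): move winner
                    self.w[j] += g * (x_new - self.w[j])   # toward x_new
                else:
                    self.w[j] -= g * (x_new - self.w[j])   # away from x_new
            g *= 0.95                           # Step (10): shrink learning rate
        return self

    def predict(self, X):
        return np.array([int(np.argmin(np.linalg.norm(self.w - x, axis=1)))
                         for x in np.asarray(X, float)])
```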

11.4.1 Polynomial Hölder function with LVQ algorithm for the analysis of various data

The LVQ algorithm has been applied to the economy (U.N.I.S.) dataset, the multiple sclerosis (MS) dataset and the WAIS-R dataset in Sections 11.4.1.1, 11.4.1.2 and 11.4.1.3, respectively.

11.4.1.1 Polynomial Hölder function with LVQ algorithm for the analysis of economy (U.N.I.S.)

For the second set of data, in the following sections we shall use data related to the economies of the USA (abbreviated by the letter U), New Zealand (N), Italy (I) and Sweden (S), the U.N.I.S. countries. The attributes of these countries' economies are data concerned with years, unemployment, GDP per capita (current international $), youth male (% of male labor force ages 15–24) (national estimate), …, GDP growth (annual %). The data consist of a total of 18 attributes. Data belonging to the U.N.I.S. economies from 1960 to 2015 are defined based on the attributes provided in the Table 2.8 economy dataset (http://data.worldbank.org) [12], to be used in the following sections.

Our aim is to classify the economy dataset through the multifractal Brownian motion synthesis 2D multifractal analysis of the data. Our method is based on the following steps provided in Figure 11.36:

Figure 11.36: Classification of the economy dataset with the application of multifractal Hölder polynomial regularity function through LVQ algorithm.
i. Multifractal Brownian motion synthesis 2D is applied to the economy (U.N.I.S.) dataset (228 × 18).
ii. Brownian motion Hölder regularity (polynomial) is applied to the data for analysis, in order to identify singularities in the data; the new dataset comprises the singled-out singularities of the p_Economy dataset (16 × 16) (see eq. (11.13)):
$H_1(x, y) = 0.5 + 0.8 \times x \times y$.
iii. The new dataset (p_Economy dataset), comprising the singled-out singularities, with the matrix size (16 × 16) yielding the best classification result, is finally classified using the Kohonen LVQ algorithm.

The steps of Figure 11.36 are explained as follows:

Step (i) Application of multifractal Brownian motion synthesis 2D to the economy dataset. Multifractal Brownian motion synthesis 2D can be described as follows: first, calculate the fractal dimension of the original economy dataset D (D = [$x_1, x_2, \ldots, x_{228}$]); the fractal dimension is computed with eq. (11.13). Next, form the new dataset (p_Economy dataset) by selecting attributes that bring about the minimum change in the current dataset's fractal dimension until the number of remaining features is the upper bound of the fractal dimension D. The newly produced multifractal Brownian motion synthesis 2D dataset is $D_{new} = [x_1^{new}, x_2^{new}, \ldots, x_{16}^{new}]$ (see [50] for more information).

Step (ii) Application of multifractal Brownian motion synthesis 2D to the data (see [50] for more information).

It is possible for us to perform the classification of the p_Economy dataset. For this, we can have a closer glance at the steps presented in Figure 11.37:

Step (iii) The singled-out singularity p_Economy dataset is finally classified using the Kohonen LVQ algorithm (Figure 11.37).

Figure 11.37 explains the steps of the LVQ algorithm as follows:

Step (1) Initialize the reference vectors (several strategies are discussed shortly). Initialize the learning rate γ (default 0).

Figure 11.37: General LVQ algorithm for the p_Economy dataset.

Step (2) While the stopping condition is false, follow Steps (2–6).

Step (3) For each training input vector $x_{new}$, follow Steps (3–4).

Step (4) Find $j$ so that $\|x_{new} - w_j\|$ is a minimum.

Step (5) Update $w_j$.

Steps (6–8) If $(Y = y_j)$, then

$$w_j(\mathrm{new}) = w_j(\mathrm{old}) + \gamma \left[ x_{new} - w_j(\mathrm{old}) \right]$$

else

$$w_j(\mathrm{new}) = w_j(\mathrm{old}) - \gamma \left[ x_{new} - w_j(\mathrm{old}) \right]$$

Step (9) The stopping condition is likely to postulate a fixed number of iterations (i.e., executions of Step (2)) or the learning rate reaching an acceptably small value.

Therefore, 33.33% of the data in the new p_Economy dataset is allocated for the test procedure and classified as USA, New Zealand, Italy or Sweden, yielding an accuracy rate of 84.02% based on the LVQ algorithm.
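A hedged sketch of this evaluation protocol follows, reusing the LVQ1 class sketched earlier; the shuffling and the helper name are our own, and p_economy_X / p_economy_y are hypothetical arrays holding the singled-out singularities and their U.N.I.S. class labels.

```python
import numpy as np

def holdout_accuracy(X, y, model, test_frac=1/3, seed=0):
    """Hold out a 33.33% test split and return classification accuracy;
    the random split is our own choice (the chapter does not specify it)."""
    X, y = np.asarray(X, float), np.asarray(y)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_frac)
    test, train = idx[:n_test], idx[n_test:]
    model.fit(X[train], y[train])
    return float((model.predict(X[test]) == y[test]).mean())

# e.g. acc = holdout_accuracy(p_economy_X, p_economy_y, LVQ1(n_classes=4))
```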

11.4.1.2 Polynomial Hölder function with LVQ algorithm for the analysis of multiple sclerosis

As presented in Table 2.12, the multiple sclerosis dataset contains data from the following groups: 76 samples belonging to relapsing remitting multiple sclerosis (RRMS), 76 samples to secondary progressive multiple sclerosis (SPMS), 76 samples to primary progressive multiple sclerosis (PPMS) and 76 samples to healthy subjects of the control group. The attributes of the control group are data regarding brain stem (MRI [magnetic resonance imaging] 1), corpus callosum periventricular (MRI 2) and upper cervical (MRI 3) lesion diameter size (mm) in the MRI image, and the Expanded Disability Status Scale (EDSS) score. The data are made up of a total of 112 attributes. Using these attributes of 304 individuals, we can determine whether the data belong to an MS subgroup or to the healthy group. How can we classify which MS patient belongs to which subgroup of MS, including healthy individuals and those diagnosed with MS (based on the lesion diameters (MRI 1, MRI 2, MRI 3), the number of lesions for (MRI 1, MRI 2, MRI 3) obtained from MRI images, and the EDSS scores)? The dimension of the D matrix is 304 × 112, which means it contains an MS dataset of 304 individuals with their 112 attributes (see Table 2.12 for the MS dataset).

Our purpose is to classify the MS dataset by the multifractal Brownian motion synthesis 2D multifractal analysis of the data. Our method follows the steps mentioned in Figure 11.38:

Figure 11.38: Classification of the MS dataset with the application of multifractal Hölder polynomial regularity function through LVQ algorithm.
(i) Multifractal Brownian motion synthesis 2D is applied to the MS dataset (304 × 112).
(ii) Brownian motion Hölder regularity (polynomial) is applied to the data for analysis, in order to identify singularities in the data; the new dataset comprises the singled-out singularities of the p_MS dataset (64 × 64) (see eq. (11.13)).
(iii) The new dataset (p_MS dataset), comprising the singled-out singularities, with the matrix size (64 × 64) yielding the best classification result, is finally classified using the Kohonen LVQ algorithm.

The steps in Figure 11.38 are explained as follows:

Step (i) Application of multifractal Brownian motion synthesis 2D to the MS dataset. Multifractal Brownian motion synthesis 2D can be described as follows: first, calculate the fractal dimension of the original MS dataset D (D = [$x_1, x_2, \ldots, x_{304}$]); the fractal dimension is calculated with eq. (11.13). Next, form the new dataset (p_MS dataset) by selecting attributes that bring about the minimum change in the current dataset's fractal dimension until the number of remaining features is the upper bound of the fractal dimension D. The newly produced multifractal Brownian motion synthesis 2D dataset is $D_{new} = [x_1^{new}, x_2^{new}, \ldots, x_{64}^{new}]$.

Step (ii) Application of multifractal Brownian motion synthesis 2D to the data (see [50] for more information).

It is possible for us to perform the classification of the p_MS dataset. For this, let us have a closer look at the steps provided in Figure 11.39:

Figure 11.39: General LVQ algorithm for p_MS dataset.

Step (iii) The singled-out singularity p_MS dataset is finally classified using the Kohonen LVQ algorithm (Figure 11.39).

Figure 11.39 explains the steps of the LVQ algorithm as follows:

Step (1) Initialize the reference vectors (several strategies are discussed shortly). Initialize the learning rate γ (default 0).

Step (2) While the stopping condition is false, follow Steps (2–6).

Step (3) For each training input vector $x_{new}$, follow Steps (3–4).

Step (4) Find $j$ so that $\|x_{new} - w_j\|$ is a minimum.

Step (5) Update $w_j$.

Steps (6–8) If $(Y = y_j)$, then

$$w_j(\mathrm{new}) = w_j(\mathrm{old}) + \gamma \left[ x_{new} - w_j(\mathrm{old}) \right]$$

else

$$w_j(\mathrm{new}) = w_j(\mathrm{old}) - \gamma \left[ x_{new} - w_j(\mathrm{old}) \right]$$

Step (9) The stopping condition is likely to postulate a fixed number of iterations (i.e., executions of Step (2)) or the learning rate reaching an acceptably small value.

Consequently, 33.33% of the data in the new p_MS dataset (from which we singled out the singularities) is allocated for the test procedure and classified as RRMS, SPMS, PPMS or healthy with an accuracy rate of 80% based on the LVQ algorithm.

11.4.1.3 Polynomial Hölder function with LVQ algorithm for the analysis of mental functions

As presented in Table 2.19, the WAIS-R dataset has 200 samples belonging to patients and 200 samples to the healthy control group. The attributes of the control group are data regarding school education, gender, …, D.M. The data are made up of a total of 21 attributes. Using these attributes of 400 individuals, it is known whether the data belong to the patient or the healthy group. How can we classify which individual belongs to the patient group and which to the healthy group among those assessed with the WAIS-R test (based on school education, gender, the D.M., vocabulary, QIV, VIV, …, D.M.; see Chapter 2, Table 2.18)? The D matrix has a dimension of 400 × 21, which means it includes the WAIS-R dataset of 400 individuals along with their 21 attributes (see Table 2.19). For the classification of the D matrix through LVQ, the training procedure is employed as the first step.

Our purpose is to classify the WAIS-R dataset by the multifractal Brownian motion synthesis 2D multifractal analysis of the data. Our method follows the steps given in Figure 11.40:

(i) Multifractal Brownian motion synthesis 2D is applied to the WAIS-R dataset (400 × 21).
(ii) Brownian motion Hölder regularity (polynomial) is applied to the data for analysis, in order to identify singularities in the data; the new dataset comprises the singled-out singularities of the p_WAIS-R dataset.
(iii) The new dataset (p_WAIS-R dataset), comprising the singled-out singularities, with the matrix size (16 × 16) yielding the best classification result, is finally classified using the Kohonen LVQ algorithm.
Figure 11.40: Classification of the WAIS-R dataset with the application of multifractal polynomial Hölder regularity function through LVQ algorithm.

The steps of Figure 11.40 are explained as follows:

Step (i) Application of multifractal Brownian motion synthesis 2D to the WAIS-R dataset. Multifractal Brownian motion synthesis 2D can be described as follows: first, compute the fractal dimension of the original WAIS-R dataset D (D = [$x_1, x_2, \ldots, x_{400}$]); the fractal dimension is computed with eq. (11.13). Next, form the new dataset (p_WAIS-R dataset) by selecting attributes that bring about the minimum change in the current dataset's fractal dimension until the number of remaining features is the upper bound of the fractal dimension D. The newly produced multifractal Brownian motion synthesis 2D dataset is $D_{new} = [x_1^{new}, x_2^{new}, \ldots, x_{16}^{new}]$.

Step (ii) Application of multifractal Brownian motion synthesis 2D to the data (see [50] for more information).

We can perform the classification of the p_WAIS-R dataset. Hence, study the steps provided in Figure 11.41:

Step (iii) The singled-out singularities of the p_WAIS-R dataset are finally classified through the use of the Kohonen LVQ algorithm (Figure 11.41).

Figure 11.41 explains the steps of the LVQ algorithm as follows:

Step (1) Initialize the reference vectors (several strategies are discussed shortly). Initialize the learning rate γ (default 0).

Step (2) While the stopping condition is false, follow Steps (2–6).

Step (3) For each training input vector $x_{new}$, follow Steps (3–4).

Step (4) Find $j$ so that $\|x_{new} - w_j\|$ is a minimum.

Step (5) Update $w_j$.

Figure 11.41: General LVQ algorithm for the p_WAIS-R dataset.
Figure 11.42: Classification of the economy dataset with the application of multifractal Brownian motion synthesis 2D exponential Hölder function through LVQ algorithm.

Steps (6–8) If $(Y = y_j)$, then

$$w_j(\mathrm{new}) = w_j(\mathrm{old}) + \gamma \left[ x_{new} - w_j(\mathrm{old}) \right]$$

else

$$w_j(\mathrm{new}) = w_j(\mathrm{old}) - \gamma \left[ x_{new} - w_j(\mathrm{old}) \right]$$

Steps (9–10) The stopping condition might postulate a fixed number of iterations (i.e., executions of Step (2)) or the learning rate reaching an acceptably small value.

Therefore, 33.33% of the data in the new p_WAIS-R dataset is allocated for the test procedure and classified as Y = [Patient, Healthy] with an accuracy rate of 81.15% based on the LVQ algorithm.

11.4.2 Exponential Hölder function with LVQ algorithm for the analysis of various data

The LVQ algorithm has been applied to the economy (U.N.I.S.) dataset, the multiple sclerosis (MS) dataset and the WAIS-R dataset in Sections 11.4.2.1, 11.4.2.2 and 11.4.2.3, respectively.

11.4.2.1 Exponential Hölder function with LVQ algorithm for the analysis of economy (U.N.I.S.)

As the second set of data, we will use in the following sections some data related to the U.N.I.S. countries' economies. The attributes of these countries' economies are data regarding years, unemployment, GDP per capita (current international $), youth male (% of male labor force ages 15–24) (national estimate), …, GDP growth (annual %). The data are made up of a total of 18 attributes. Data belonging to the U.N.I.S. economies from 1960 to 2015 are defined based on the attributes given in the Table 2.8 economy dataset (http://data.worldbank.org) [12], which will be used in the following sections.

Our aim is to classify the economy dataset through the Hölder exponent multifractal analysis of the data. Our method is based on the following steps in Figure 11.42:

(i) Multifractal Brownian motion synthesis 2D is applied to the economy dataset (228 × 18). Brownian motion Hölder regularity (exponential) is applied to the data for analysis, in order to identify singularities in the data; the new dataset comprises the singled-out singularities of the e_Economy dataset (16 × 16), as in eq. (11.15):
$H_2(x, y) = 0.6 + \dfrac{0.6}{1 + \exp(-100 \times (x - 0.5 - y))}$
(ii) The matrix size with the best classification result is (16 × 16) for the new dataset (e_Economy dataset), which is made up of the singled-out singularities.

The steps of Figure 11.42 are explained as follows:

Step (i) Application of multifractal Brownian motion synthesis 2D to the economy dataset. Multifractal Brownian motion synthesis 2D can be described as follows: first, calculate the fractal dimension of the original economy dataset D (D = [$x_1, x_2, \ldots, x_{228}$]); the fractal dimension is computed with eq. (11.15). Next, form the new dataset (e_Economy dataset) by selecting attributes that bring about the minimum change in the current dataset's fractal dimension until the number of remaining features is the upper bound of the fractal dimension D. The newly produced multifractal Brownian motion synthesis 2D dataset is $D_{new} = [x_1^{new}, x_2^{new}, \ldots, x_{16}^{new}]$.

Step (ii) Application of multifractal Brownian motion synthesis 2D to the data (see [50] for more information).

It is possible for us to perform the classification of the e_Economy dataset. For this, let us have a closer look at the steps provided in Figure 11.43:

Figure 11.43: General LVQ algorithm for e_Economy dataset.

Step (iii) The singled-out singularities of the e_Economy dataset are finally classified using the Kohonen LVQ algorithm (Figure 11.43).

Figure 11.43 explains the steps of the LVQ algorithm as follows:

Figure 11.44: Classification of the MS dataset with the application of multifractal Brownian motion synthesis 2D exponential Hölder regularity function through LVQ algorithm.

Step (1) Initialize the reference vectors (several strategies are discussed shortly). Initialize the learning rate γ (default 0).

Step (2) While the stopping condition is false, follow Steps (2–6).

Step (3) For each training input vector $x_{new}$, follow Steps (3–4).

Step (4) Find $j$ so that $\|x_{new} - w_j\|$ is a minimum.

Step (5) Update $w_j$.

Steps (6–8) If $(Y = y_j)$, then

$$w_j(\mathrm{new}) = w_j(\mathrm{old}) + \gamma \left[ x_{new} - w_j(\mathrm{old}) \right]$$

else

$$w_j(\mathrm{new}) = w_j(\mathrm{old}) - \gamma \left[ x_{new} - w_j(\mathrm{old}) \right]$$

Step (9) The stopping condition might postulate a fixed number of iterations (i.e., executions of Step (2)) or the learning rate reaching an acceptably small value.

As a result, 33.33% of the data in the new e_Economy dataset is allocated for the test procedure and classified as Y = [USA, New Zealand, Italy, Sweden] with an accuracy rate of 80.30% based on the LVQ algorithm.

11.4.2.2 Exponential Hölder function with LVQ algorithm for the analysis of multiple sclerosis

As presented in Table 2.12, the multiple sclerosis dataset contains data from the following groups: 76 samples belonging to RRMS, 76 samples to SPMS, 76 samples to PPMS and 76 samples to healthy subjects of the control group. The attributes of the control group are data regarding brain stem (MRI 1), corpus callosum periventricular (MRI 2) and upper cervical (MRI 3) lesion diameter size (mm) in the MRI image, and the EDSS score. The data are made up of a total of 112 attributes. Using these attributes of 304 individuals, we can determine whether the data belong to an MS subgroup or to the healthy group. How can we classify which MS patient belongs to which subgroup of MS, including healthy individuals and those diagnosed with MS (based on the lesion diameters (MRI 1, MRI 2, MRI 3), the number of lesions for (MRI 1, MRI 2, MRI 3) obtained from MRI images, and the EDSS scores)? The D matrix has a dimension of 304 × 112, which means it includes the MS dataset of 304 individuals along with their 112 attributes (see Table 2.12 for the MS dataset).

Our purpose is to classify the MS dataset by the multifractal Brownian motion synthesis 2D multifractal analysis of the data. Our method follows the steps shown in Figure 11.44:

(i) Multifractal Brownian motion synthesis 2D is applied to the MS dataset (304 × 112). Brownian motion Hölder regularity (exponential) is applied to the data for analysis, in order to identify singularities in the data; the new dataset comprises the singled-out singularities of the e_MS dataset (64 × 64) (see eq. (11.15)):