11
Optimized Feature Selection Techniques for Classifying Electrocorticography Signals

B. Paulchamy1, R. Uma Maheshwari1*, D. Sudarvizhi AP(Sr. G)2, R. Anandkumar AP(Sr. G)2 and Ravi G. 3

1Hindusthan Institute of Technology, Coimbatore, Tamil Nadu, India

2KPR Institute of Engineering and Technology, Coimbatore, Tamil Nadu, India

3Department of ECE, Sona College of Technology, Salem, Tamil Nadu, India

Abstract

A Brain-Computer Interface (BCI) is a combined hardware and software communication system in which external devices or computers are controlled by cerebral activity. BCI helps severely impaired people, who have been completely paralyzed or "locked in" by neurological or neuromuscular conditions, to communicate. A BCI system classifies brain signals and carries out computer-controlled actions using machine learning algorithms. As a recording technique for BCI, electrocorticography (ECoG) is well suited for fundamental neuroscience. In a generic BCI framework, the signal acquisition stage captures brain signals and reduces noise and process artifacts. The pre-processing phase prepares the signals in a form suitable for further processing. The feature extraction stage identifies discriminative information in the recorded brain signals; once measured, the signal is mapped to a vector containing useful and discriminant characteristics of the observed signals. Auto-Regressive (AR) model coefficients and Wavelet Transform features are extracted, and the extracted features are merged. The SVM separates classes using a discriminative hyperplane. The effect of ECoG feature selection and SVM parameter optimization has been studied. The Clonal Selection Algorithm (CLONALG) is a particular class of Artificial Immune System that uses the clonal selection principle as its primary mechanism; it tunes the SVM and simultaneously optimizes the feature selection. The test results compare the SVM classification efficiency with that of the RBF classifier and the Fuzzy classifier. At the final stage, a method is investigated for hybridizing CLONALG with a Genetic Algorithm (GA). In this method, an outer (GA) search loop checks the current population for constraint violations and then divides it into feasible and infeasible individuals. CLONALG, introduced as an inner loop, first clones and mutates the antibodies and then calculates the distances between antibodies and antigens. The individuals with the highest affinity are selected and new antibodies are defined. Results demonstrate that, in the classification of ECoG signals, the proposed method achieves a precision of 96.76%.

Moreover, the chapter reports better results when classifying fused features, such as auto-regressive and wavelet transform features, with the SVM classifier. A particular class of artificial immune systems, combining GA and CLONALG, is also studied to select the best features and the SVM-RBFN kernel parameters simultaneously. Results show that the GA-CLONALG combination is the most accurate in ECoG signal classification.

Keywords: Brain interface, genetic algorithm, clonal selection, radial basis neural network, support vector machine, accuracy, precision, signal classification

11.1 Introduction

ALS, spinal cord injury, muscular dystrophy, brainstem stroke, cerebral palsy, and multiple sclerosis are among the degenerative conditions that affect the neurological pathways. These diseases affect more than five million people worldwide. The incidence of these syndromes is not uniform internationally; ALS alone affects roughly two people per 100,000 per year in Europe and the US. Most affected people are unable to walk, move their arms and hands, speak, or swallow food. For such physically challenged individuals, control and communication systems are available. In a BCI, user-specific features are extracted and converted into device communication instructions.

11.1.1 Brain–Computer Interface

Brain-Computer Interfacing (BCI) technology enables human beings, particularly impaired ones, to communicate with or control equipment by thinking or expressing intentions, through direct interaction between the human brain and machines. BCI systems are becoming successful due to a better understanding of brain oscillation dynamics. Neuronal networks create feedback loops in the brain whose oscillatory activity is recorded by electroencephalography (EEG) or electrocorticography (ECoG) [1]. BCI is a powerful communication and control option in interactions between people and systems and is, by definition, an interaction beyond the keyboard. In the original definition, the BCI communication and control system does not depend on the brain's neuromuscular output channels. BCIs carry high expectations: a BCI provides a direct connection between the brain and external devices. This hardware/software system enables persons without functioning peripheral nerves or muscles to communicate with their surroundings and to generate control messages directly from brain activity. The interface enhances communication opportunities for those with severe motor and neuromuscular difficulties. BCI applications include motor control, environmental control, leisure, and multimedia. BCI makes it possible to transmit or control neural activity in the brain, without direct physical movement, through equipment such as prosthetic limbs and robots. Since computerized systems are a tool for improving daily life, ongoing BCI development requires both an understanding of brain waves and the study of EEG/ECoG data. BCI is recognized for supporting external communication for handicapped persons and as a realistic tool for prosthesis control. Devices are also incorporated in BCI applications by translating intentions into commands for video games and personal computers. BCI integrates medicine, psychology, neurology, HCI, rehabilitation, signal processing, and machine learning, and it offers a wide range of research opportunities [2].

In conditions such as Amyotrophic Lateral Sclerosis (ALS), a BCI can circumvent corporeal mediation, enabling people to interact freely and spontaneously, for example in a gaming setting, as acknowledged by patients who have communicated with others and with their environment despite full paralysis. However, a BCI that bypasses physical interaction may seem unnatural when converting ideas into physical movements; the consequence is that the user must be highly conscious of this activity, which develops new levels of control when brain activity is used directly [3]. The initial step in developing an effective BCI system is to find appropriate control signals from an ECoG or EEG [4]. A suitable control signal is: (i) accurately defined for each person; (ii) easy to modulate/transform for expressing intents; and (iii) consistently trackable/detectable. Locked-in patients are completely aware and attentive but cannot move their muscles to convey their wants, desires, and feelings; a healthy brain is closed inside a paralyzed body. The biggest cause of the locked-in state is unknown; the motor neurons of the central and peripheral nervous systems eventually die during neurological disease. Paralysis usually starts in the lower extremities and progresses to the arms and hands, eventually paralyzing the respiratory and swallowing muscles. Patients can survive beyond this point only if they opt for artificial ventilation. Brain-computer interfaces (BCIs) are the sole way for locked-in patients to interact with and control their environment.

11.2 Literature Study

Rathipriya et al. [5] improved the accuracy of BCI classification of motor-imagery-based ECoG with a hybrid algorithm. The SVM classifier was built using features selected by a cross-correlation approach, and its performance was tested by classification accuracy with a 10-fold cross-validation technique; the authors contrasted the behavior of the proposed methodology with the existing one. Another work suggested the use of ECoG for BCI, in which the classification outcome depends mostly on feature extraction. Initially, discrete wavelet transformations were applied to ECoG data from one patient, who was asked to imagine moving the left little finger or the tongue; the wavelet energy of the eight channels was then determined, a 40-dimensional feature vector was built, and a probabilistic neural network (PNN) was used for classification. Other researchers used ECoG signals in BCI design and, based on offline analysis, generated a novel method for feature extraction and classification of imagined movement in ECoG-based BCI research, suggesting a wavelet-analysis pattern-recognition protocol with FLDA for a generic ECoG BCI system. Results demonstrated that wavelet variance and wavelet packet variance, picked as effective ECoG features, achieved a maximum precision of 92% on the test data. ECoG cortical signals have also been employed to decode phonetic units during the perception of continuous speech. By investigating wavelet time-frequency characteristics, the researchers found ECoG electrodes with discriminative responses to a certain number of Chinese phonemes. Features relating to gamma and high-gamma power were linked to certain phoneme sets, and the resulting clusters aligned closely with phonological categories defined by place and expression. These findings were taken into account in the Chinese phoneme decoding models; using SVM classifiers, above-chance accuracy in discriminating certain phonetic clusters was achieved for five patients. Kayikcioglu and Aydemir [8] proposed a design for classifying ECoG signals captured during motor imagery over different time segments; the extracted feature vectors were obtained with wavelets and categorized using the k-nearest-neighbors approach. The proposed methodology was successfully applied to Data Set I of the 2005 BCI Competition and reached 95% classification accuracy on the test set. A preliminary study of the relationship between EEG and ECoG Event-Related Potentials (ERPs), from one patient using a BCI speller, has also been reported. The patient carried out one EEG session and one identical ECoG session: the BCI spelling paradigm was applied with scalp-recorded EEG before ECoG grid implantation, and the same paradigm was monitored in the ECoG session after grid implantation. Both EEG and ECoG attained almost perfect spelling correctness. Offline analysis was conducted to estimate the average EEG ERPs from the ECoG data, and initial results showed that EEG ERPs may be predicted with reasonable accuracy from ECoG data using simple linear spatial models. A BCI system with a comparison of feature selection methods was also provided to increase classification efficiency: after the ECoG signals were pre-processed, the Wavelet Packet Tree was used to extract features, and GA, Mutual Information (MI), and Information Gain (IG) attribute selection methods were applied. Dataset I of BCI Competition III, containing motor-imagery ECoG recordings, was used as the input dataset for experimenting with the proposed methodology, and the results showed that feature selection enhanced classification accuracy. Li et al. (2011) employed the power spectral density for feature selection; a Common Spatial Patterns (CSP) protocol was used for feature extraction and non-linear classification of motor imagery with SVMs, achieving a classification accuracy of 83% on BCI Competition III Dataset I. A new protocol for classifying single-trial ECoG in the motor-imagery scenario was developed by Wei and Tu [11]: initially, GAs select an optimal channel subset from the multi-channel ECoG, power features are then extracted by CSP, and FDA is finally used for classification. Dataset I of BCI Competition III was used for this procedure and 90% classification precision was achieved with only seven channels. Zhang et al. (2011) introduced a 6.4 μW ECoG/EEG Integrated Processing Circuit (EPIC) with a 0.46 μVrms noise floor for BCI applications; results from ECoG recordings from the primary motor cortex of an awake monkey were observed in vivo. Wang et al. (2010) developed improved feature extraction, a non-stationary multi-variate adaptive auto-regressive (MVAAR) modeling approach, using EEG data, to which the BCI system has to adapt; feature extraction strategies were compared between MVAAR and others, and the outcome showed that MVAAR feature extraction was excellent. Recent research implies that ECoG signals may be utilized to successfully infer the identity of motor, linguistic, or intellectual functions. BCI design requires a proper selection of mental functions and feature extraction methodologies. Studies reveal that the auto-regressive model and wavelet transform feature extraction approaches provide enhanced classification performance on non-stationary biosignals compared with existing techniques. Wavelets give simultaneous localization in the temporal and frequency domains and can be calculated extremely rapidly. Even when the input data are non-monotonic and non-linearly separable, SVMs may offer reliable and robust classification results on a sound theoretical basis, and they help to assess the more relevant information conveniently. Although numerous feature selection processes based on optimization approaches have been examined, GA and CLONALG provide good classification precision and a globally optimal solution for selecting important feature groups for classification. CLONALG is easy and universal to apply and is comparable to various other evolutionary algorithms.

11.3 Proposed Methodology

A BCI system that pre-processes the ECoG signal and extracts features is provided. A feature fusion technique for improving ECoG classification is attempted in this chapter. Auto-Regressive (AR) and wavelet transform features are extracted. The Auto-Regressive model benefits from straightforward determination of the current output, and a further advantage of AR modeling is that several established parameter estimation techniques are available; AR models are useful for EEG spectral parameter analysis (SPA). The advantage of the wavelet transform is that it provides high temporal resolution (with correspondingly coarser frequency resolution) at higher frequencies and finer scales, and that signals may be analyzed at several scales simultaneously.

Various wavelet families, such as Coiflets and bi-orthogonal wavelets, are used for BCIs, and there are several decomposition strategies, including wavelet and wavelet packet decomposition. The extracted features are fused. Features are selected with Information Gain (IG), and the selected features are classified using the Support Vector Machine (SVM) and compared against the RBF and Fuzzy classifiers (FC). This chapter also experiments with the CLONALG technique to optimize the C and Gamma parameters of the RBF SVM kernel.
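As a rough illustration of this pipeline (not the chapter's exact implementation), the following Python sketch fuses pre-computed AR and wavelet feature matrices (one row per trial), ranks the fused columns with a mutual-information score standing in for information gain, and evaluates an RBF-kernel SVM with 10-fold cross-validation; the function name classify_fused and the value of n_selected are assumptions made for illustration.

import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def classify_fused(ar_feats, wav_feats, labels, n_selected=40):
    # Feature-level fusion: concatenate AR and wavelet features per trial.
    fused = np.hstack([ar_feats, wav_feats])
    # Keep the most informative columns (mutual information plays the role
    # of the information-gain ranking described in the text).
    selector = SelectKBest(mutual_info_classif, k=n_selected)
    selected = selector.fit_transform(fused, labels)
    # RBF-kernel SVM evaluated with 10-fold cross-validation.
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    return cross_val_score(clf, selected, labels, cv=10).mean()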

11.3.1 Dataset

For assessment of the suggested approaches, the ECoG recordings of Data Set I of BCI Competition III are utilized. The data set is made up of motor imagery signals produced while the participant performed brief imagined movements of the left little finger or the tongue during a BCI experiment. All ECoG signals were sampled at 1000 Hz. The recorded potentials were amplified and stored at microvolt levels to aid categorization. Each sample recording captured 3 seconds of either imagined finger or tongue movement, and the recording began 0.5 seconds after the end of the visual cue to avoid visually evoked potentials.

11.3.2 Feature Extraction Using Auto-Regressive (AR) Model and Wavelet Transform

11.3.2.1 Auto-Regressive Features

An auto-regressive (AR) model depicts a random process in statistics and signal processing and is used to describe certain time-varying processes. The AR model expresses its output variable as a linear function of its historical values. AR(p) denotes an auto-regressive model of order p. The AR(p) model is defined as

X_t = c + Σ_{i=1}^{p} φ_i X_{t−i} + ε_t    (11.1)

where φ_1, ..., φ_p represent the parameters of the model, c is a constant, and ε_t is white noise. This can be equivalently written using the backshift operator B as

X_t = c + Σ_{i=1}^{p} φ_i B^i X_t + ε_t    (11.2)

φ(B) = 1 − Σ_{i=1}^{p} φ_i B^i    (11.3)

so that φ(B) X_t = c + ε_t.

An AR model may therefore be regarded as the output of an infinite impulse response (IIR) filter whose input is white noise.
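As a minimal sketch of how such AR features could be computed per channel, the snippet below estimates the coefficients of an AR(p) model with the Yule-Walker (autocorrelation) method; the model order p = 6 and the function name ar_features are illustrative assumptions, and the coefficients of all channels would typically be concatenated into one feature vector per trial.

import numpy as np

def ar_features(x, p=6):
    # Estimate AR(p) coefficients of a single channel via the Yule-Walker equations.
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Biased autocorrelation estimates r[0], ..., r[p].
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    # Solve the Toeplitz system R * phi = r[1:p+1] for the coefficients.
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    phi = np.linalg.solve(R, r[1:p + 1])
    return phi  # phi_1, ..., phi_p form the AR feature vector of this channel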

11.3.2.2 Wavelet Features

Orthogonal transformations are used in pattern recognition because they provide a (generally non-invertible) transition from the original signal space to a space of reduced dimension. Classification with this reduced set of characteristics can then be completed with only a small increase in classification error.

[Equation (11.4) not reproduced]
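A minimal sketch of one way to obtain wavelet features is given below, using the PyWavelets package; the wavelet family (db4), the decomposition depth, and the use of per-sub-band energy as the feature are illustrative assumptions rather than the chapter's exact settings.

import numpy as np
import pywt

def wavelet_features(x, wavelet="db4", level=4):
    # Multi-level DWT: coeffs = [cA_level, cD_level, ..., cD_1].
    coeffs = pywt.wavedec(np.asarray(x, dtype=float), wavelet, level=level)
    # Mean energy of each sub-band serves as one feature per scale.
    return np.array([np.mean(c ** 2) for c in coeffs])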

11.3.2.3 Feature Selection Methods

Feature selection has been a promising area of research in engineering, data mining, statistics, pattern recognition, and related fields. The aim is to obtain a subset of the input variables by removing characteristics that are irrelevant or carry no predictive information [6]. Feature selection has proved quite successful, in both theory and practice, in reducing the complexity of the results, boosting learning efficiency, and enhancing predictive accuracy.

The main purpose in supervised learning is to discover a subset that delivers greater classification precision. As the size of a domain grows, the number N of features also increases. Identifying an optimal feature subset is intractable, and many feature selection problems have been shown to be NP-hard [7]. Feature selection prevents overfitting and improves model performance, delivering faster and more cost-effective models.

Identifying optimal model parameters over the full feature set adds an extra layer of complexity to modeling. Optimal feature selection therefore first identifies optimal feature subsets and then optimizes the model parameters. Attribute selection techniques are classed as filter and wrapper approaches [8]. Filter techniques are independent of the data mining algorithm and judge the importance of features solely by concentrating on the inherent characteristics of the data.

Feature selection is necessary when many features are irrelevant, noisy, or misleading. Exhaustively checking all attribute subsets is infeasible in most instances, since n attributes produce 2^n subsets. Feature selection therefore searches for an optimal subset of d features [9]; the feature selection procedure is a typical problem in global combinatorial optimization.

11.3.2.4 Information Gain (IG)

The Information Gain (IG) approach is extensively utilized for high-dimensional data. IG is the expected reduction in entropy (H). For a set of classes C = {c1, ..., ck}, the information gain of a feature f, denoted IG(f), is given by

IG(f) = H(C) − H(C | f)    (11.5)

H(C) = − Σ_{i=1}^{k} P(c_i) log2 P(c_i)    (11.6)

H(C | f) = − Σ_{v ∈ values(f)} P(v) Σ_{i=1}^{k} P(c_i | v) log2 P(c_i | v)    (11.7)

The entropy is calculated over all classes for every characteristic. The characteristics are then ranked according to their IG value; a characteristic with a higher IG value is more informative than the others. The top N attributes are picked and utilized to categorize the instances.
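The ranking above can be computed directly from Equations (11.5) to (11.7). The sketch below is an assumed illustration: continuous feature values are discretized into 10 bins to estimate the conditional entropy, and the helper names entropy and information_gain are not taken from the chapter.

import numpy as np

def entropy(labels):
    # H(C): Shannon entropy of the class labels, as in Equation (11.6).
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels, n_bins=10):
    # IG(f) = H(C) - H(C|f), Equations (11.5) and (11.7); the continuous
    # feature is discretized into bins to estimate the conditional entropy.
    labels = np.asarray(labels)
    bins = np.digitize(feature, np.histogram_bin_edges(feature, bins=n_bins))
    h_conditional = 0.0
    for v in np.unique(bins):
        mask = bins == v
        h_conditional += mask.mean() * entropy(labels[mask])
    return entropy(labels) - h_conditional

# Rank the columns of a feature matrix X and keep the top N:
# scores = [information_gain(X[:, j], y) for j in range(X.shape[1])]
# top_n = np.argsort(scores)[::-1][:N]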

11.3.2.5 Clonal Selection

Immune algorithms are based on immunological principles and have been used effectively on many optimization problems. CLONALG is a type of Artificial Immune System in which the clonal selection principle is the main mechanism. This approach was initially presented for the optimization of non-linear functions [10].

The notion of clonal selection is a basic premise of contemporary immunology and is therefore strongly linked to other immunological ideas and to the algorithms inspired by those theories. For example, negative selection algorithms applied to classification problems still depend on the notion of clonal selection to refine the examples iteratively, and immune network techniques for clustering and optimization leverage the excitation and suppression aspects of the network model as well as the clonal selection concept for repeated model refinement [12].

CLONALG is a popular model, comparable to mutation-based evolutionary algorithms, built upon the clonal selection and affinity maturation concepts [13]. The algorithm is inspired by the following elements of the clonal selection theory:

  • Maintenance of a specific memory set
  • Selection and cloning of most stimulated antibodies
  • Death of non-stimulated antibodies
  • Affinity maturation (mutation)
  • Re-selection of clones proportional to affinity with antigen
  • Generation and maintenance of diversity.

The objective of this approach is to construct a memory pool of antibodies that solves an engineering challenge. Here, an antibody is a solution element or a single candidate solution to the problem, while an antigen is an element of the problem space or an evaluation of it. The method combines two search mechanisms to obtain the final memory pool. The first is a local search performed through affinity maturation of cloned antibodies; more clones are created for the better matching (selected) antibodies [14]. The second search mechanism has a global character and involves introducing randomly generated antibodies into the population to further enhance diversity and to offer a means of escaping local optima.

CLONALG (CLONal selection ALGorithm) is an artificial immune system approach inspired by the clonal selection theory. Two populations are present in CLONALG: one of antigens, Ag, and one of antibodies, Ab. Each antibody/antigen is represented by a string of L attributes m = mL ... m1, i.e., a point in an L-dimensional shape space S, with m ∈ S^L. The Ab population holds the current candidate solutions, while Ag represents the environment to be recognized. After random initialization of the first population P(0), the method cycles for a preset maximum number of generations (Ngen).

11.3.2.6 An Overview of the Steps of the CLONALG

  1. Initialization – The first stage in CLONALG is to create an antibody pool of size N and divide it into two parts: a memory antibody section m, which represents the solution of the algorithm, and a remaining antibody pool r, used to introduce variation among the solutions.
  2. Loop – The solutions are iterated over and exposed to all known antigens. Each repetition is called a generation; the user stipulates either the number of generations G or a certain termination condition.
    1. Select Antigen – An antigen from the present generation is randomly picked.
    2. Exposure – The system is exposed to the chosen antigen, and affinity values are calculated for all antibodies against it.
    3. Selection – The n antibodies with the greatest affinity to the antigen are selected.
    4. Cloning – The selected antibodies are cloned in proportion to their affinity.
    5. Affinity Maturation (mutation) – The cloned antibodies undergo affinity maturation: the higher the affinity, the lower the mutation rate applied.
    6. Clone Exposure – The clones are then exposed to the antigen and their affinities are calculated.
    7. Candidacy – The antibodies with the highest affinity are selected as candidate memory antibodies and are placed in the memory pool m if their affinity is higher than that of the corresponding memory antibody.
    8. Replacement – In the last step, the lowest-affinity antibodies in the remaining pool r are replaced with new random antibodies.
  3. Finish – Once the training rounds are completed, the memory part of the antibody pool is taken as the algorithm's solution (a minimal code sketch of these steps follows below).
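The following Python sketch illustrates the loop above for binary antibodies (e.g., feature-subset masks); the population size, cloning factor, mutation schedule, and the generic affinity function are assumptions made for illustration, not the settings used in the chapter.

import numpy as np

rng = np.random.default_rng(0)

def clonalg(affinity, length, pop_size=30, n_select=10, clone_factor=2.0,
            n_replace=5, generations=50):
    # Antibodies are 0/1 vectors; `affinity` maps an antibody to a score to maximize.
    pop = rng.integers(0, 2, size=(pop_size, length))
    for _ in range(generations):
        aff = np.array([affinity(ab) for ab in pop])
        selected = pop[np.argsort(aff)[::-1][:n_select]]          # n highest-affinity antibodies
        clones, clone_aff = [], []
        for rank, ab in enumerate(selected):
            n_clones = int(clone_factor * pop_size / (rank + 1))  # more clones for better antibodies
            rate = 0.5 * (rank + 1) / n_select                    # lower mutation for better antibodies
            for _ in range(n_clones):
                flip = rng.random(length) < rate                  # bit-flip affinity maturation
                clone = np.where(flip, 1 - ab, ab)
                clones.append(clone)
                clone_aff.append(affinity(clone))
        clones, clone_aff = np.array(clones), np.array(clone_aff)
        keep = clones[np.argsort(clone_aff)[::-1][:pop_size - n_replace]]  # re-selection
        randoms = rng.integers(0, 2, size=(n_replace, length))             # diversity introduction
        pop = np.vstack([keep, randoms])
    aff = np.array([affinity(ab) for ab in pop])
    return pop[np.argmax(aff)]  # best antibody, i.e., the memory solution

For feature selection, affinity(ab) would typically be the cross-validated accuracy of a classifier trained only on the feature columns where ab equals 1.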

11.3.3 Hybrid CLONALG

The hybrid CLONALG consists of an external (GA) search loop, which checks the present population for constraint violations and then divides it into feasible (antigen) and infeasible (antibody) individuals. If no feasible individual exists, the best infeasible individual (the one with the lowest constraint violation) is placed in the antigen population. The AIS is introduced here as an internal loop, in which antibodies are first cloned and subsequently mutated. Next, the distances (affinities) between antibodies and antigens are calculated, and those with higher affinity (smaller distances) are chosen, so that new antibodies are defined closer to the feasible region. This inner (CLONALG) cycle is repeated several times. The resulting antibody population is then transferred to the GA, where constraint violations and fitness values of the feasible individuals are calculated. The selection procedure is then undertaken, and recombination and mutation operators are applied to the selected parents to produce a new population, terminating the external (GA) loop. The GA selection technique is a binary tournament in which each individual is picked once and its opponent is drawn randomly from the population with replacement. The rules of the tournament are:

  • every feasible individual is preferred over an infeasible one,
  • between two feasible individuals, the one with the greater fitness value is picked, and
  • between two infeasible individuals, the one with the smaller constraint violation is picked. It should be mentioned that here affinity is calculated from the phenotypic distances using a conventional Euclidean vector norm.

A pseudo-code for the proposed hybrid is given below.

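The pseudo-code figure itself is not reproduced here. The Python sketch below illustrates the hybrid loop as described above, with an outer GA applying the binary tournament rules and an inner CLONALG step that pulls infeasible antibodies toward the feasible (antigen) region; the fitness/violation interfaces, mutation scales, and population sizes are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(1)

def hybrid_clonalg_ga(fitness, violation, length, pop_size=40,
                      generations=100, inner_iters=5):
    # `fitness` is maximized for feasible individuals; `violation` returns 0 when feasible.
    pop = rng.random((pop_size, length))

    def better(a, b):
        # Binary tournament: feasible beats infeasible, then higher fitness, then smaller violation.
        va, vb = violation(a), violation(b)
        if va == 0 and vb == 0:
            return a if fitness(a) >= fitness(b) else b
        if va == 0 or vb == 0:
            return a if va == 0 else b
        return a if va <= vb else b

    for _ in range(generations):
        viol = np.array([violation(ind) for ind in pop])
        antigens = pop[viol == 0]                      # feasible individuals
        antibodies = pop[viol > 0]                     # infeasible individuals
        if len(antigens) == 0:                         # best infeasible individual acts as antigen
            antigens = pop[[np.argmin(viol)]]
        for _ in range(inner_iters):                   # inner CLONALG loop
            if len(antibodies) == 0:
                break
            clones = np.repeat(antibodies, 3, axis=0)                      # clone ...
            clones = clones + rng.normal(0.0, 0.1, clones.shape)           # ... and mutate
            dist = np.min(np.linalg.norm(clones[:, None, :] - antigens[None, :, :], axis=2), axis=1)
            antibodies = clones[np.argsort(dist)[:len(antibodies)]]        # keep highest affinity (smallest distance)
        pop = np.vstack([antigens, antibodies])[:pop_size]
        # Outer GA step: each individual is picked once, its opponent drawn at random.
        parents = np.array([better(pop[i], pop[rng.integers(len(pop))]) for i in range(len(pop))])
        cross = rng.random(parents.shape) < 0.5                            # uniform crossover
        children = np.where(cross, parents, np.roll(parents, 1, axis=0))
        pop = children + rng.normal(0.0, 0.01, children.shape)             # Gaussian mutation
    viol = np.array([violation(ind) for ind in pop])
    feasible = pop[viol == 0]
    if len(feasible) > 0:
        return feasible[np.argmax([fitness(ind) for ind in feasible])]
    return pop[np.argmin(viol)]

In the setting of this chapter, an individual could, for example, encode the selected feature subset together with the SVM parameters, with the violation function measuring bounds on those parameters; this mapping is an assumption, not a detail given in the text.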

A serial algorithm scheme based on the artificial immune system is illustrated in Figure 11.1. In this way, the GA searches for the global optimum while CLONALG helps the GA reach the feasible region efficiently.

The performance is evaluated for:

ARC – Autoregressive feature selection using CLONALG
WAC – Wavelet feature selection using CLONALG
ARWAC – Autoregressive–Wavelet fused feature selection using CLONALG

Figure 11.1 Flowchart of the proposed hybrid CLONALG.

ARWACSVM – Autoregressive–Wavelet fused feature selection using CLONALG with Support Vector Machine classifier
ARWAHCSVM – Autoregressive–Wavelet fused feature selection using hybrid CLONALG with Support Vector Machine classifier
Autoregressive–Wavelet fused feature selection using CLONALG-optimized Support Vector Machine classifier
ARWAHCSVM-HOpt – Autoregressive–Wavelet fused feature selection using hybrid CLONALG with Support Vector Machine classifier and hybrid optimization
ARSVM – Autoregressive features, Support Vector Machine
WASVM – Wavelet features, Support Vector Machine
ARWASVM – Autoregressive–Wavelet fused features, Support Vector Machine

11.4 Experimental Results

The results of the SVM classifier are presented in Table 11.1. The classification accuracy, precision, and recall for the SVM classifier are shown in Figures 11.2 and 11.3. The observed accuracy and recall of ARWAIGSVM are 6.6% and 4.7% higher than those of ARIGSVM and WAIGSVM, respectively; its precision is 6.6% higher than that of ARIGSVM and 4.6% higher than that of WAIGSVM.

The fused feature selection using information gain (ARWAIG) gives a superior result compared with the separate auto-regressive and wavelet features because less information is lost.

The RBFN classifier findings are presented in Table 11.2. Figures 11.4 and 11.5 illustrate the classification accuracy, precision, and recall of the RBFN classifier. The classification accuracy and recall of ARWAIGRBF are 7.8% higher than ARIGRBF and 4.3% higher than WAIGRBF; its precision is 7.9% higher than ARIGRBF and 4.3% higher than WAIGRBF.

Table 11.1 Performance metrics of SVM classifier.

SVM classifier    Accuracy (%)    Precision    Recall
ARIG              73.74           0.7380       0.7374
WAIG              75.18           0.7525       0.7518
ARWAIG            78.78           0.7879       0.7878

Figure 11.2 Classification accuracy of SVM classifier.


Figure 11.3 Precision and recall of SVM classifier.

Table 11.2 Performance metrics of RBFN classifier.

RBF classifier    Accuracy (%)    Precision    Recall
ARIG              70.50           0.70505      0.7050
WAIG              73.02           0.73080      0.7302
ARWAIG            76.26           0.76305      0.7626

Figure 11.4 Classification accuracy of RBFN classifier.


Figure 11.5 Precision and recall of RBFN classifier.

The fused feature selection with ARWAIG information gain performs better than the individual auto-regressive and wavelet features due to the low loss of information.

The Fuzzy classifier findings are listed in Table 11.3, and Figures 11.6 and 11.7 show its classification accuracy, precision, and recall. The accuracy and recall of ARWAIGFC are 8.9% higher than ARIGFC and 5.7% higher than WAIGFC; its precision is likewise 8.9% better than ARIGFC and 5.7% better than WAIGFC.

Table 11.3 Performance metrics of fuzzy classifier.

Fuzzy classifier    Accuracy (%)    Precision    Recall
ARIG                71.07           0.71155      0.7105
WAIG                73.38           0.73460      0.7338
ARWAIG              77.70           0.77750      0.7770

Figure 11.6 Classification accuracy of Fuzzy classifier.


Figure 11.7 Precision and recall of Fuzzy classifier.

The fused feature selection with ARWAIG information gain performs better than the individual auto-regressive and wavelet features due to the low loss of information.

11.4.1 Results of Feature Selection Using IG with Various Classifiers

The classification accuracy, precision, and recall for the auto-regressive, wavelet, and fused auto-regressive–wavelet features selected using information gain are shown in Tables 11.4 to 11.6 and Figure 11.8, for the SVM, RBF, and Fuzzy classifiers.

The classification accuracy of the SVM classifier is better than that of the RBF and Fuzzy classifiers, as demonstrated in Table 11.4 and Figure 11.8. ARWAIGSVM is 6.6% more accurate than ARIGSVM and 4.7% more accurate than WAIGSVM. For the RBF classifier, the accuracy of ARWAIGRBF is 7.8% greater than ARIGRBF and 4.3% better than WAIGRBF. For the Fuzzy classifier, the accuracy of ARWAIGFC is improved by 8.9% compared with ARIGFC and by 5.7% compared with WAIGFC.

Table 11.4 Classification accuracy in percentage.

Features    SVM classifier    RBF classifier    Fuzzy classifier
ARIG        73.74             70.50             71.07
WAIG        75.18             73.02             73.38
ARWAIG      78.78             76.26             77.70

Table 11.5 Performance metrics – precision of various classifiers.

Features    SVM classifier    RBF classifier    Fuzzy classifier
ARIG        0.73800           0.70505           0.71155
WAIG        0.75245           0.73080           0.73460
ARWAIG      0.78790           0.76305           0.77750

Table 11.6 Performance metrics – recall of various classifiers.

Features    SVM classifier    RBF classifier    Fuzzy classifier
ARIG        0.7374            0.7050            0.7105
WAIG        0.7518            0.7302            0.7338
ARWAIG      0.7878            0.7626            0.7770

Figure 11.8 Classification accuracy in percentage.

In all three classifiers, the fused feature selection using ARWAIG has been observed to perform better than the individual auto-regressive and wavelet features, since less information is lost.

Table 11.5 and Figure 11.9 show that the precision of the SVM classifier is better than that of the RBF and Fuzzy classifiers. The results demonstrate that ARWAIGSVM is better than ARIGSVM by 6.6% and better than WAIGSVM by 4.6%. For the RBF classifier, the precision of ARWAIGRBF is 7.9% greater than ARIGRBF and 4.3% greater than WAIGRBF. For the Fuzzy classifier, the precision of ARWAIGFC is 8.9% greater than ARIGFC and 5.7% better than WAIGFC. In all three classifiers, the fused feature selection using ARWAIG has been observed to perform better than the individual auto-regressive and wavelet features, since less information is lost.


Figure 11.9 Precision of various classifiers.


Figure 11.10 Recall of various classifiers.

Table 11.6 and Figure 11.10 show that the recall of the SVM classifier is better than that of the RBF and Fuzzy classifiers. The results demonstrate that ARWAIGSVM is 6.6% better than ARIGSVM and 4.7% better than WAIGSVM. For the RBF classifier, the recall of ARWAIGRBF is 7.8% better than ARIGRBF and 4.3% better than WAIGRBF. For the Fuzzy classifier, ARWAIGFC is more effective than ARIGFC by 8.9% and than WAIGFC by 5.7%. It is also noticed that, in all three classifiers, the fused feature selection using ARWAIG is superior to the separate auto-regressive and wavelet features.

11.4.2 Results of Optimizing Support Vector Machine Using CLONALG Selection

The classification accuracy and Root Mean Squared Error (RMSE) of the SVM for different RBF kernel parameter values (Gamma and C), along with the CLONALG-optimized SVM RBF parameters, are compared and presented in Table 11.7. Precision and recall for the same parameter settings are compared and presented in Table 11.8. The classification accuracy, RMSE, precision, and recall for the features classified with the RBF kernel optimized using CLONALG selection are shown in Figure 11.11.

Table 11.7 Classification accuracy and RMSE for various Gamma and C values.

Gamma      C          Classification accuracy %    RMSE
0.125      0.125      77.3381                      0.4760
0.125      1.000      77.3381                      0.4760
0.250      0.125      71.9424                      0.5297
1.000      0.125      62.9496                      0.6087
CLONALG optimized SVM with RBF kernel    91.3700    0.1621

It is observed from Table 11.7 that varying the parameter C does not affect the classification accuracy or the RMSE. Also, a higher value of Gamma leads to inefficient performance of the SVM.

The classification accuracy and RMSE are shown in Figures 11.11 and 11.12. The SVM with the optimized kernel is observed to achieve better results than the other SVM settings: its accuracy rose by 36.8% and its RMSE dropped by 115.9% relative to the values obtained with a Gamma value of 1. This is because the parameters of the RBF-kernel SVM are regularized, so the model does not overfit. The SVM employs the kernel function to incorporate prior knowledge of the problem, and by optimizing the regularization parameter C and the kernel parameter Gamma using CLONALG, the performance of the RBF kernel is enhanced.
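As a rough, non-authoritative illustration of the effect of tuning C and Gamma, the sketch below runs a plain grid search (standing in for the CLONALG-based optimization described in the text) and reports cross-validated accuracy and RMSE in the style of Table 11.7; the parameter ranges and the 10-fold protocol are assumptions.

import numpy as np
from sklearn.metrics import accuracy_score, mean_squared_error
from sklearn.model_selection import GridSearchCV, cross_val_predict
from sklearn.svm import SVC

def tune_rbf_svm(X, y):
    # Search over C and Gamma for the RBF-kernel SVM; grid search is used
    # here in place of the CLONALG optimization described in the text.
    grid = {"C": np.logspace(-3, 3, 7), "gamma": np.logspace(-3, 3, 7)}
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=10, scoring="accuracy")
    search.fit(X, y)
    # Report accuracy (%) and RMSE (labels coded numerically, e.g., 0/1), as in Table 11.7.
    pred = cross_val_predict(search.best_estimator_, X, y, cv=10)
    acc = 100.0 * accuracy_score(y, pred)
    rmse = float(np.sqrt(mean_squared_error(y, pred)))
    return search.best_params_, acc, rmse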

Table 11.8 Precision and recall for various Gamma and C values.

Gamma      C          Precision    Recall
0.125      0.125      0.773        0.773
0.125      1.000      0.773        0.773
0.250      0.125      0.720        0.719
1.000      0.125      0.635        0.629
CLONALG optimized SVM with RBF kernel    0.9158    0.91365

Figure 11.11 Classification accuracy for various Gamma and C values.


Figure 11.12 Root mean squared error of SVM.

11.5 Conclusion

The preceding findings show that fusing auto-regressive and wavelet features increases the classification accuracy. Hybrid optimization of the feature selection has a critical influence on decoding the user's intent. In RBF-kernel SVM classification, the regularization parameter C controls the balance between a large-margin separating hyperplane and a minimal error rate. It is seen that, with hybrid optimization for feature selection, the performance metrics of the RBF-kernel SVM classifier used for signal classification are better than those of the other approaches.

References

  1. Pour, P.A., Gulrez, T., AlZoubi, O., Gargiulo, G., Calvo, R.A., Brain-computer interface: Next generation thought controlled distributed video game development platform, in: 2008 IEEE Symposium On Computational Intelligence and Games, IEEE, pp. 251–257, 2008, December.
  2. Alomari, M.H., AbuBaker, A., Turani, A., Baniyounes, A.M., Manasreh, A., EEG mouse: A machine learning-based brain computer interface. Int. J. Adv. Comput. Sci. Appl., 5, 4, 193–198, 2014.
  3. Bos, D.P.O., Reuderink, B., van de Laar, B., Gürkök, H., Mühl, C., Poel, M., Heylen, D., Brain-computer interfacing and games, in: Brain-computer interfaces, pp. 149–178, Springer, London, 2010.
  4. Wolpaw, J.R., McFarland, D.J., Vaughan, T.M., Brain-computer interface research at the Wadsworth Center. IEEE Trans. Rehabil. Eng., 8, 2, 222–226, 2000.
  5. Rathipriya, N., Deepajothi, S., Rajendran, T., Classification of motor imagery ECoG signals using support vector machine for brain computer interface, in: 2013 Fifth International Conference on Advanced Computing (ICoAC), IEEE, pp. 63–66, 2013, December.
  6. Abiri, R., Borhani, S., Sellers, E.W., Jiang, Y., Zhao, X., A comprehensive review of EEG-based brain–computer interface paradigms. J. Neural Eng., 16, 1, 011001, 2019.
  7. Young, B.M., Nigogosyan, Z., Walton, L.M., Song, J., Nair, V.A., Grogan, S.W., Prabhakaran, V., Changes in functional brain organization and behavioral correlations after rehabilitative therapy using a brain-computer interface. Front. Neuroeng., 7, 26, 2014.
  8. Kayikcioglu, T. and Aydemir, O., A polynomial fitting and k-NN based approach for improving classification of motor imagery BCI data. Patt. Recognit. Lett., 31, 11, 1207–1215, 2010.
  9. Wu, C.H., Chang, H.C., Lee, P.L., Li, K.S., Sie, J.J., Sun, C.W., Shyu, K.K., Frequency recognition in an SSVEP-based brain computer interface using empirical mode decomposition and refined generalized zero-crossing. J. Neurosci. Methods, 196, 1, 170–181, 2011.
  10. Rao, R.P., Stocco, A., Bryan, M., Sarma, D., Youngquist, T.M., Wu, J., Prat, C.S., A direct brain-to-brain interface in humans. PloS One, 9, 11, e111332, 2014.
  11. Wei, Q. and Tu, W., Channel selection by genetic algorithms for classifying single-trial ECoG during motor imagery, in: 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE, pp. 624–627, 2008, August.
  12. Hwang, H.J., Kim, S., Choi, S., Im, C.H., EEG-based brain-computer interfaces: A thorough literature survey. Int. J. Hum.-Comput. Interact., 29, 12, 814–826, 2013.
  13. Lin, C.T., Chang, C.J., Lin, B.S., Hung, S.H., Chao, C.F., Wang, I.J., A real-time wireless brain–computer interface system for drowsiness detection. IEEE Trans. Biomed. Circuits Syst., 4, 4, 214–222, 2010.
  14. Sokhal, J., Aggarwal, S., Garg, B., Classification of EEG signals using novel algorithm for channel selection and feature extraction. Int. J. Appl. Eng. Res., 12, 12, 3491–3499, 2017.

Note

  1. *Corresponding author: [email protected]