8
An IoMT ‐Based Smart Remote Monitoring System for Healthcare

Chetna Kaushal1, Md Khairul Islam2, Anshu Singla1, and Md Al Amin3

1 Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India

2 Department of Information and Communication Technology, Islamic University, Kushtia, Bangladesh

3 Department of Computer Science and Engineering, Prime University, Dhaka, Bangladesh

8.1 Introduction

Cancer is a disorder in which the body's cells grow uncontrollably, creating a malignant tumor; cancers are named after the body part where they begin or to which they spread (e.g., lung cancer). Cancer found in the tissue of the cervix is known as cervical cancer. The cervix acts as an interface between the uterus and vagina [1].

Cervical intraepithelial neoplasia (CIN) is the abnormal growth of cells on a woman's cervix. It can progress to cervical cancer, and this precancerous condition is divided into three grades (CIN‐I, ‐II, and ‐III).

At the early stage of cervical cancer, no symptoms are commonly detected. Later signs can include frequent menstrual bleeding, stomach discomfort, or pain during periods, after menopause, or during sexual activity [1]. Pain after sexual intercourse is not a symptom, but postcoital bleeding can be a sign of imminent or existing cervical cancer [2].

The main cause of cervical cancer is human papillomavirus (HPV) infection. About 90% of cervical cancer cases have been confirmed to involve long‐term HPV infection [3, 4]. HPV is generally transmitted from one person to another during sex. According to the Cleveland Clinic, about 80% of sexually active people will carry HPV at some point in their lives, but this does not mean that every woman infected with HPV will be diagnosed with cervical cancer [5, 6]. The risk of cervical cancer increases with alcohol consumption and smoking (important risk factors in all cancers), long‐term use of birth control pills (five or more years), and having multiple sexual partners at an early age; however, these are less significant risk factors [1, 7]. Cervical cancer is usually diagnosed in women above 30, and cancer cells require about 10–20 years to develop [6].

Figure 8.1 depicts the numbers of different cancer cases found, based on a 2018 report. Lung, breast, colon, and cervical cancer are the top four. Lung cancer cases reported worldwide are almost 2.1 million with 85% mortality, and it stands at the top of the table. Breast and colon cancers reported in 2018 are 2 million and 1.1 million cases, respectively, with mortality of almost 30% and 50%, respectively. Cervical cancer is the world's fourth most common cancer [8, 9] and the fourth most prominent in women [10]. Nearly 529 000 patients were diagnosed in 2012, nearly 8% of the estimated cancer cases [6]; its mortality rate is 50.4%.

Figure 8.1 Different types of cancer cases found in the world in 2018.

Figure 8.2 depicts the cervical cancer cases and deaths of each country grouped by the Human Development Index (HDI), which is defined by education, per capita income, and other indices. Countries with medium HDI have most of the cases and deaths, while countries with very high HDI have comparatively few. Strikingly, cervical cancer case counts in low HDI countries are similar to those in very high HDI countries, but deaths are much higher [8]. According to the World Health Organization (WHO), developing countries account for approximately 70% of cervical cancers and 90% of deaths [66]. In 2018, about 570 000 women were diagnosed, and almost 54.57% of the patients died, according to the WHO report. Although death rates have increased by 4% in recent times, developed countries have reduced their death rates substantially; women from 42 underdeveloped countries remain the most likely to be affected by cervical cancer [11]. About 84% of cases and 88% of deaths have been recorded in underdeveloped countries [8]. Squamous cell carcinoma (80%), adenocarcinoma (15%), adenosquamous carcinoma (3–5%), small cell carcinoma, and neuroendocrine cancer are the most frequent forms of cervical cancer [9].

Figure 8.2 Cervical cancer cases and deaths depending on the HDI value of countries.

Due to the extremely high death rate, many methods have been developed to identify the pre‐cancerous stage of cervical cancer so that the chances of survival can be improved by early diagnosis and early treatment of cervix cells. Different computer‐aided screening methods are used to detect an infected cervix before it transforms into invasive cancer. Such computer‐assisted diagnosis (CAD) systems are common, simple to use, and less time‐consuming. The Pap test (Pap smear), liquid‐based cytology (LBC), HPV DNA testing, and visual inspection with and without acetic acid are the screening procedures discussed in the literature [12].

For the past 50 years, the Pap test has lowered the incidence and death rate of cervical cancer in the US [9, 12]. However, the Pap test requires substantial capital, which has limited its use in developing countries. In a Pap test with conventional cytology, cells are carefully scraped from the ectocervix and endocervix by trained, skilled pathologists with a spatula or brush, smeared onto a microscope slide, and examined [12]. It has a relatively low sensitivity of 54% but a high precision of 94% [13].

In LBC, cells are obtained in a manner identical to a conventional Pap test, but the samples are suspended in a preservative liquid. After filtering or centrifugation, the sample is transferred to a slide. The procedure is more costly than traditional cytology and requires extra resources and expensive tools. No difference in relative sensitivity and precision from the traditional method was found for LBC, but it has the benefit of allowing evaluation of high‐risk HPV in cervix cells. Moreover, in LBC, proper sample selection is not feasible [12, 14].

Automated Pap analysis helps to minimize errors by analyzing Pap smear slides using a CAD tool. AutoPap and AutoCyte Screen are two automated Pap systems. In AutoPap, the slide sample is checked and graded based on an algorithm; while it does not indicate which cells are irregular, it can describe the cervix's actual condition [15].

Visual screening methods are easy, can be carried out by a professional doctor or nurse, are comparatively cheap, do not require laboratory space, and give instant results. The two types are unaided visual inspection and visual inspection with acetic acid (VIA). Unaided visual inspection examines the cervix with the naked eye, without acetic acid, and shows poor performance [16–18].

To identify pre‐cancerous lesions, visual inspection after application of 3–5% acetic acid is performed. It can also be carried out with the naked eye, in which case it is known as cervicoscopy or immediate visual inspection. Applying 3–5% acetic acid causes reversible coagulation of cellular proteins, and invasive cancer regions contain a significant number of such cells in the epithelium [19, 20]. In correctly detecting pre‐cancerous lesions, VIA can outperform the Pap test [21]. Between 45% and 79% of women at high risk of developing cervical cancer have been correctly detected by qualified doctors and mid‐level providers [22]. However, VIA yields more false‐positive cases than the Pap approach.

However, it is hard to find professional pathologists to identify pre‐cancerous cervix cells by the naked eye. The community and researchers have therefore developed Internet of Things– (IoT‐) [23] and Internet of Healthcare Things– (IoHT‐) [24] based devices for pre‐screening. IoHT‐based devices work on Pap smear images and classify only normal and abnormal images. But there is a need for a system through which women can have pre‐screening of cervical cancer at low cost with a timely, accurate diagnosis. Implementing such a system requires computer vision and computational resources along with knowledge of medical science. Low‐resource, cost‐effective settings are required, with highly secured and confidential data, for preliminary screening tests. If the test result is positive, the patient may be recommended or referred for colposcopy and further treatment with expert doctors. This preliminary screening, which can be administered by midwives and nurses, may reduce the patient's chances of developing cervical cancer. It may also reduce the burden on experts and help the patient toward an early diagnosis.

Colposcopy or CIN images of patients are hard to obtain, but Intel and MobileODT hosted a Kaggle competition in 2017 to classify different types of cervix and detect the precancerous conditions of patients [25].

With continued improvement in screening methods and HPV vaccination programs in developed countries, the disparity of burden between developed countries and women in resource‐poor settings becomes even more profound. Nearly 85% of cervical cancer deaths occur in low‐ and middle‐income countries, and cervical cancer is a leading cause of cancer death in women in the developing world. If women are screened once in a timely manner after the age of 35 years, the risk of death due to cervical cancer is reduced by 70%. This can also reduce the disparity between women in developed and developing countries. Women in developing countries may feel hurt or embarrassed when asked to undergo a physical examination. This hesitancy creates a strong need for a portable, wireless, and confidential device for pre‐screening by midwives, nearby healthcare centers, or the women themselves. Persistent HPV infection may lead to cervical cancer; timely pre‐screening can identify the important determinants and allow timely clearance with small treatments, reducing the viral persistence that may lead to cervical cancer. Recent image‐capturing technology can be used to make a portable, low‐cost, wireless, and confidential device that can be easily accessed and used by nearby healthcare centers.

8.2 Literature Review

IoT technology is an evolving healthcare field that has decreased patient waiting time and ensures treatment quality, even at home, by automating real‐time procedures. Various medical imaging experiments have been performed using IoT, known as the Internet of Medical Things (IoMT), to pave the way for automation approaches.

He et al. explore cancer diagnostic approaches using deep learning algorithms such as convolutional neural networks (CNNs) and autoencoders (AEs) [65]. To detect cancer, images of multiple organs or cells (such as lung, colon, breast, and cervical) are used. Various pre‐trained models such as VGG16, ResNet, and GoogleNet are used to distinguish between benign and malignant breast cells [26]; these pre‐trained models use average pooling and are attached to a fully connected layer. In one study [27], Pap smear images of cervical cells were first taken from the Herlev data set; random forest (a machine learning algorithm) was then introduced to classify normal and abnormal images, with features extracted depending on weight and 10‐fold cross‐validation used for training. A custom CNN model called CXNN was created by Sun et al. for classification [27], who tuned the optimization and training hyperparameters for CXNN.

Lu and Liu [28] explain clinical treatment and remote monitoring of ECG in real time using IoT in medical science. While they offer the concept of integrating IoT in healthcare, they do not explain the IoT devices or how they are linked. In Ijaz et al. [29], a hybrid prediction model (HPM) is employed to provide early prediction of type 2 diabetes and hypertension using machine learning algorithms. The HPM can be implemented using the IoMT, which stores evaluated risk factors on a safe remote server and provides the patient with the proper course. For future work, the authors suggest an IoT‐based Android app, but they do not explain how the sensor captures information or links it to the app. Rahman et al. [30] describe how diagnosed cancer patients should maintain a secure routine during treatment. However, covering patients' daily lives is difficult, so many IoT‐based models have been developed to obtain real‐time reports of patients' conditions and activities. This study identifies the skeletal state and exercise of cancer patients; however, how this addresses cancer patients' issues is unclear. The IoT network system employed in Onasanya and Elshakankiri [23] mainly focuses on the wireless topology of different devices for diagnosis, recovery, and patient management. This is done by analyzing health data obtained by IoT network systems and other mobile devices to help health professionals and make urgent assessments of cancer conditions.

Chandy [31] discusses IoT integration for multiple medical imaging diagnostic systems such as MRI, x‐ray, CT, and ultrasound. However, further work is still needed on encryption algorithms to secure IoT medical devices against misuse of data. Onasanya and Elshakankiri [32] created a smart healthcare infrastructure, especially for patients with cancer. They divided their solution into a five‐layer approach joined to construct an efficient, interconnected IoT framework: a service layer for identification, tracking, and follow‐up; a data center layer for capturing cancer patient data; a cancer treatment layer for visualizing patient reports; a hospital layer for moving patient status to a doctor or nurse; and a protection layer for preventing unexpected hospital incidents.

Single‐board computers are used by Parra et al. [33] to offer low‐cost diagnosis for underdeveloped countries with a Raspberry Pi (RP) computer. They developed two devices to diagnose cervix cells before they turn into invasive cancer, with real‐time results. One, named PiHRME, is based on visual inspection of high‐resolution images of the cervix's nuclei, implemented with IoT devices and deep learning algorithms for classifying cervix cells. The other, called PiReader, implements a molecular‐level test of cervical cells with IoT to identify the cells' current condition. Both are easy to use and cost half as much as previous manual approaches. Although the Raspberry Pi computer is cost‐effective, it cannot be used to store and train on large numbers of images. Khamparia et al. [24] employ the IoHT for the classification of cervical cell images collected from a Pap smear approach. To accomplish this, they merged IoT with image classification algorithms, using both machine learning and deep learning, and applied transfer learning to reuse the pre‐developed knowledge of existing models. They claim 97% accuracy, but the data set is very limited, which may cause the trained model to overfit. Cervical cancer screening will need a huge data set to train a robust model, which remains a concern for researchers.

8.3 Methodology

8.3.1 IoT Device for Image Acquisition

In this part, we describe an IoT‐based cervix detection healthcare system. We use a Raspberry Pi (RP) board in the device, with a Raspberry Pi camera and an ultrasonic sensor. Robotic equipment in the device opens the labia minora (LM). A labia minora and cervix detection algorithm is implemented on the RP board for capturing the correct organs. First, the camera takes a photo to find the labia minora, since it lies in front of the cervix. After capturing the photo, the algorithm detects the position of the LM, which is then treated as an object. The ultrasonic sensor then measures the distance from the sensor to the LM. This distance is needed for the medical equipment, directed by the IoT device, to open the vagina correctly; the equipment works automatically, without assistance. If the equipment does not obtain an accurate measurement and location, it will not be able to open the vagina. After opening the vagina, the camera takes a photo of the cervix, and the algorithm then detects the cervix's precancerous condition automatically. Figure 8.3 describes the complete communication process among the IoT device elements, with proper direction and command. The following subsections describe all the devices, step by step.

Figure 8.3 Proposed methodology for IoMT‐based smart remote monitoring system for cervix cancer.
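The acquisition sequence above can be sketched as a simple sequencer. This is a hedged illustration only: the callables (`capture`, `detect_lm`, `measure_cm`, `open_at`, `predict`) are hypothetical placeholders standing in for the camera, the LM detection model, the ultrasonic sensor, the robotic equipment, and the on‐board classifier; the chapter does not fix such an API.

```python
# Hedged sketch of the capture sequence described above. All callables
# are hypothetical placeholders, not a real driver API.

def acquire_and_screen(capture, detect_lm, measure_cm, open_at, predict):
    """Run the acquisition steps in order; return the diagnosis,
    or None if the labia minora could not be located reliably."""
    frame = capture()               # 1. take an initial photo
    position = detect_lm(frame)     # 2. locate the labia minora (LM)
    if position is None:
        return None                 # equipment must not act blindly
    distance = measure_cm()         # 3. ultrasonic distance to the LM
    open_at(position, distance)     # 4. position and open the equipment
    cervix = capture()              # 5. photograph the cervix
    return predict(cervix)          # 6. on-board classification result
```

The explicit early return reflects the requirement in the text that the equipment must not act without an accurate measurement and location.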

8.3.1.1 Raspberry Pi

The RP is a series of small single‐board computers developed by the Raspberry Pi Foundation in the United Kingdom and launched in 2012. It is known as a minicomputer that performs computational processes and is widely used for real‐time image or video processing, IoT‐based applications, and robotics applications. To this board we can add a camera, a distance sensor, and other devices. The Raspberry Pi includes a quad‐core Broadcom SoC (clocked at 1.2–1.4 GHz depending on the model), 1 GB RAM, a GPU, built‐in Wi‐Fi and Bluetooth 4.0, and 10/100 Mbps Ethernet. Most people use it to build hardware projects, home automation, algorithm processing, and so on [34, 35]. As of 2016, about eight million Raspberry Pis had been sold, and it is projected to be one of the most popular single‐board computers in the world [36]. We have implemented the RP to detect and open the labia minora and finally detect the cervix using a pretrained model on the board (Figure 8.4).

Figure 8.4 Automated IoT‐based image acquisition process.

Camera: The Raspberry Pi has an embedded camera module (version 2) with a Sony IMX219 eight‐megapixel sensor. It can deliver 1080p high‐definition video at 30 frames per second and still shots at up to 2592 × 1944 (about five‐megapixel) resolution. Time lapse, slow‐motion video, IP CCTV, monitoring, and more are supported by this module. The camera module is attached to the Raspberry Pi by a 15‐pin ribbon cable through the dedicated 15‐pin MIPI camera serial interface, which is designed for interfacing cameras to the BCM2835 processor [37, 38].

8.3.1.2 Sensor

The ultrasonic sensor HC‐SR04 is used to measure the distance of an object from the sensor. This sensor can measure distances from 2 cm (0.02 m) to 400 cm (4 m) and is the most popular sensor in applications and devices where distances must be measured. Put another way, an ultrasonic sensor is an electronic device that measures distance by emitting ultrasonic sound waves and converting the reflected waves into an electrical signal. It is generally used with microcontroller and microprocessor platforms such as Arduino, ARM, PIC, and Raspberry Pi. Ultrasonic sensors are used in IoT‐ and robotics‐based platforms for detecting a targeted object, as well as in manufacturing technologies. Notably, ultrasonic technology is used in the medical or health sector to produce images of internal organs, to detect tumors, and to monitor the health of babies in the womb [39]. An ultrasonic sensor can be used with the RP [40], but the RP works at 3.3 V logic while the HC‐SR04 works at 5 V, so the echo pin must be level‐shifted before it is connected to the RP. First, the trig pin of the ultrasonic sensor is connected to physical pin GPIO23 of the RP. Then a combination of 680 Ω and 1.5 kΩ resistors is used as a voltage divider to bring the echo pin down to 3.3 V logic, and the divided signal is connected to physical pin GPIO24 of the RP. Finally, a program is written on the Raspberry Pi to trigger the sensor and read the measurement.
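The conversion from echo pulse to distance is just the physics of the round trip, so it can be shown without hardware. The sketch below assumes the trig/echo wiring described above (trig on GPIO23, level‐shifted echo on GPIO24); the GPIO timing itself is only outlined in comments, since it depends on the `RPi.GPIO` library on the device.

```python
# Hedged sketch of the HC-SR04 read-out logic. Only the pulse-to-distance
# conversion is implemented here; the GPIO timing is outlined in comments.

SPEED_OF_SOUND_CM_S = 34300  # in air at roughly 20 degrees C

def pulse_to_distance_cm(pulse_seconds):
    """Convert the width of the echo pulse to a distance in cm.
    The pulse covers the round trip, hence the division by 2."""
    return pulse_seconds * SPEED_OF_SOUND_CM_S / 2

# On the Pi itself, one would roughly:
#   1. drive TRIG (GPIO23) high for ~10 microseconds, then low;
#   2. time how long ECHO (GPIO24, via the voltage divider) stays high;
#   3. distance = pulse_to_distance_cm(measured_pulse_seconds)
```

For example, a 10 ms echo pulse corresponds to 171.5 cm, comfortably inside the sensor's 2–400 cm range.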

8.3.1.3 Equipment

The IoT‐ and robotics‐based equipment will be able to close or open the vagina carefully, as it is a very sensitive organ. The equipment can move left, right, up, and down under direction from the RP, which is guided by the sensor data and algorithmic calculations.

After taking photos of the cervix, the device stores the image in storage embedded on the RP board, and a trained machine learning algorithm, implemented on the Raspberry Pi board, produces the diagnosis by classifying the cervix type from the image. The result and image are then sent from RP storage to a cloud repository along with the patient's record; this requires installing a cloud repository app on the RP board with an Internet connection (for details, see Doshi et al. [41]). Finally, the result and corresponding data are stored for both the patient's and the health team's supervision.
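The record pushed from the RP to the cloud repository could be packaged as below. This is a minimal sketch under stated assumptions: the field names and the JSON/base64 encoding are illustrative choices, not a schema fixed by the chapter, and the actual upload transport (e.g., an HTTPS POST) is left out.

```python
# Hedged sketch of packaging one screening result for upload.
# Field names are assumptions for illustration only.
import base64
import json
from datetime import datetime, timezone

def build_record(patient_id, image_bytes, diagnosis):
    """Bundle one screening result as a JSON string ready for upload."""
    return json.dumps({
        "patient_id": patient_id,          # pseudonymised identifier
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "diagnosis": diagnosis,            # on-board classifier output
        # raw image bytes, base64-encoded so they survive JSON transport
        "image_b64": base64.b64encode(image_bytes).decode("ascii"),
    })
```

Using a pseudonymised `patient_id` rather than personal details is one small step toward the confidentiality requirements discussed in Section 8.3.2.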

8.3.2 Central Repository

The CAD system is an IoT‐based healthcare system that can be used at home, without going to the hospital, to check for cervical cancer conditions. After producing the results, the CAD system saves the corresponding images from each patient to the central cloud archive for further use by research institutes and others. Beyond the choice of classifiers, the efficiency of the classifiers depends strongly on the number of images/data points. The central repository is linked to a healthcare data center through a server. To gain insight into the data, central repository data can be analyzed, synchronized, and grouped. Researchers can also make their models more robust and reliable by using repository data to improve the CAD system's adaptability. As we intend to share the central archive for public use, the records must be rechecked so that there is no violation of patient confidentiality. We should follow the government's rules and regulations for setting up such networks and equipment, and patients should also know the guidelines and policies of the CAD system [23].

8.3.3 Automatic Computer Assisted Diagnosis (ACAD)

Very few specialist pathologists are capable of identifying pre‐cancerous lesions using a visual inspection process, and expert availability and attitude affect proper identification. CAD is an emerging approach that offers the pathologist fast results and higher accuracy while reducing the time and suffering of patients. In medical imaging, the use of CAD in the study of cytological images of cervix lesions has become a topic of interest.

8.3.3.1 Preprocessing

IoT devices use sensors to capture images of the targeted area. Information must then be extracted from these images through segmentation and feature extraction. But the data must be preprocessed before this point because, due to the atmosphere or other factors, the images from the IoT system can be distorted or contain noise [42].

Unexpected occurrences, such as noise, outliers, and missing pixel values, may have a major effect on segmentation efficiency, feature extraction, and model classification. So we preprocess the images captured by the IoT network before storing them in the central repository. Image preprocessing in medical imaging can be categorized into four parts. The first concerns the correction of pixel brightness, also known as brightness transformation: after brightness correction, the output pixel values of the image depend strongly on the input pixel values. Contrast modification is one of the key components of preprocessing, especially in medical imaging. Brightness corrections can be divided into two sections: grayscale transformation and brightness transformation. Grayscale is commonly used in medical imaging, especially in cell‐level detection.

Second, geometric transformation restores the position of pixels, which is usually distorted by the external environment, for example in blurred images. This technique involves rotation, translation, shear, and scaling. Such preprocessing methods are also often used to augment the image database and balance the data set across labels.

Third, the image is filtered to retrieve critical information using certain kernels that enhance or smooth edges, so that models can learn the necessary information.

Fourth, restoration of the image, which entails improving deteriorated images. Most image loss occurs due to temperature, the camera lens, or miscommunication of the electro‐optical sensor. Two methods of restoration are available, deterministic and stochastic, and both are important for medical imaging tasks. The first is used where the noise and other deterioration are known; the second relies on a small group of pre‐enhanced reference images [43]. There should be no loss of information about patches or lesions, i.e., the affected area, during image preprocessing.
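Two of the steps above, grayscale conversion (a brightness transformation) and kernel smoothing (filtering), can be sketched with plain Python lists. A real pipeline would use NumPy or OpenCV; this stdlib‐only sketch only illustrates the operations, and the luma weights are one common convention, not the only choice.

```python
# Stdlib-only sketch of two preprocessing steps: grayscale conversion
# and a 3x3 mean filter. Images are nested lists of pixels.

def to_grayscale(rgb_image):
    """Luma-style grayscale: 0.299 R + 0.587 G + 0.114 B per pixel."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]

def mean_filter3(gray):
    """3x3 mean filter (smoothing kernel); border pixels are left as-is."""
    h, w = len(gray), len(gray[0])
    out = [row[:] for row in gray]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(gray[y + dy][x + dx]
                            for dy in (-1, 0, 1)
                            for dx in (-1, 0, 1)) / 9
    return out
```

The mean filter is the simplest smoothing kernel; edge-preserving or sharpening kernels follow the same convolution pattern with different weights.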

8.3.3.2 Segmentation

Image segmentation is one of the essential tasks in achieving this purpose. Segmentation helps to distinguish the appropriate organs and cells from other, undesirable areas; in a nutshell, it identifies the most significant area, containing the majority of the information, referred to as the region of interest (ROI). Image segmentation is an automated ROI separation method that ensures the region does not overlap with embedded correlative portions. In medical imaging, segmentation is managed and implemented by supervised and unsupervised algorithms. Image segmentation typically operates on two‐dimensional images, but it has now been applied to three‐dimensional MRI images using a convolutional kernel [44, 45].

Different methods of segmentation are discussed in the following subsections to classify the cervix area of interest for visual inspection:

8.3.3.2.1 Clustering

Clustering is an unsupervised ROI segmentation approach that aims to locate pixel spaces of the same or similar intensity. Because it is unsupervised, it can work with less data and needs no labeling of the training data. Several clustering algorithms exist for segmentation, such as k‐means, fuzzy c‐means, and hard c‐means. Among them, one of the most used and best‐performing is k‐means: for each cluster it maintains a mean intensity, and each pixel is assigned to the cluster whose mean is closest to the pixel's own intensity [46]. In Bai et al. [47], ROI segmentation was performed on HSV images using the k‐means algorithm. First, histogram‐based thresholding is applied to eliminate the image's mirror effect if it appears; then the images are converted from RGB to HSV, and the ROI is found by running k‐means on the image's V channel.
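The assign/update loop of k‐means on pixel intensities can be sketched in a few lines. This is an illustrative stdlib‐only version operating on a flat list of scalar intensities (as for the V channel above); production code would use scikit‐learn or OpenCV, and would add better initialization than the simple range spread assumed here.

```python
# Stdlib-only sketch of k-means on scalar pixel intensities (k >= 2).
# Initialization spreads the centroids across the intensity range.

def kmeans_intensity(pixels, k=2, iters=20):
    """Cluster scalar intensities; return (centroids, labels)."""
    lo, hi = min(pixels), max(pixels)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    labels = [0] * len(pixels)
    for _ in range(iters):
        # assignment step: each pixel goes to the nearest centroid
        labels = [min(range(k), key=lambda c: abs(p - centroids[c]))
                  for p in pixels]
        # update step: each centroid becomes the mean of its members
        for c in range(k):
            members = [p for p, l in zip(pixels, labels) if l == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels
```

On a cervix image the two clusters would typically separate the bright ROI from the darker background; the label map is then reshaped back to the image grid.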

8.3.3.2.2 Region Growing

The region growing (RG) algorithm is a semiautomated segmentation technique. It grows a correlative region on the basis of certain previously established parameters (e.g., edge or intensity), and it needs a seed region from which the correlative field is detected, which is why it is semiautomated. It keeps adding pixels until the criteria are no longer fulfilled, and it performs well on grayscale images. In [48], the images are first converted to HSI space and RG is then applied to separate the ROI; the authors used this algorithm on 111 cervical cancer screening photos.
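Intensity‐based region growing can be sketched as a breadth‐first flood from the seed. This is an illustrative stdlib‐only version; the 4‐connectivity and the fixed tolerance `tol` are assumptions, and real implementations often update the region statistics as pixels are added.

```python
# Stdlib-only sketch of region growing: neighbours join the region
# while their intensity stays within `tol` of the seed intensity.
from collections import deque

def region_grow(image, seed, tol=10):
    """Return the set of (y, x) pixels grown from `seed` (4-connected)."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    target = image[sy][sx]
    region, frontier = {seed}, deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(image[ny][nx] - target) <= tol):
                region.add((ny, nx))
                frontier.append((ny, nx))
    return region
```

The seed requirement is exactly why the method is semiautomated: a clinician or a detector must supply a point known to lie inside the ROI.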

8.3.3.2.3 Threshold

Thresholding is a segmentation method in which one label is assigned to pixels with intensity greater than the threshold value and another label to pixels below it. It usually yields a two‐label segmentation, but multithresholding is also feasible.
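The two‐label rule is a one‐liner; multithresholding simply applies several cut‐off values in sequence. A minimal sketch:

```python
# Minimal sketch of two-label thresholding on a grayscale image
# (nested lists): pixels above `t` get label 1, the rest label 0.

def threshold_segment(gray, t):
    return [[1 if p > t else 0 for p in row] for row in gray]
```

Choosing `t` well is the hard part in practice; methods such as Otsu's algorithm pick it automatically from the intensity histogram.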

8.3.3.2.4 Artificial Neural Network

The artificial neural network is one of the most used segmentation methods, and its learning is supervised. It is trained on the segmented training data set, with weights and other parameters fitted so that information loss is low; it can then segment the ROI in new/test images.

Segmentation methods vary greatly depending on the image domain, and an inappropriate choice of segmentation technique can cause a loss of information. Thus, careful selection of an image segmentation technique helps increase the accuracy of any project.

8.3.3.3 Feature Extraction and Selection

Features are extracted and selected before classifier models are applied. The efficiency of the classifiers depends strongly on the feature extraction and selection techniques; the classifier's results may be unsatisfactory due to noise and an excess of features. Feature extraction and selection techniques lower computing time and memory requirements while also improving classifier performance [49]. We first extract features from images of cervix cells based on morphology, texture, and intensity. Many feature extraction techniques are used in medical imaging; some of the most used are discussed here:

8.3.3.3.1 Morphological Feature Extraction

Morphological feature extraction derives features from the size and shape of objects [50], expressed as mathematical values. An increase in the size of cervix cells relative to standard cervix cells is measured and used as a feature, and the shape of cervix cells, such as round, invasive, or smooth, is also often taken as a feature. Various methods, such as thresholding, clustering, and statistical techniques, are used to assess the size and shape of cells from images. Morphological feature extraction has been used by many researchers [51–54].
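Two basic morphological features, area and a simple perimeter estimate, can be computed directly from a binary cell mask. This is a stdlib‐only sketch under assumptions: the perimeter here counts foreground pixels with at least one background 4‐neighbour, a crude discrete estimate, and the circularity formula (1.0 for a perfect continuous circle) behaves only approximately on such pixel grids.

```python
# Stdlib-only sketch: area and perimeter of a binary mask (nested
# lists of 0/1), plus circularity as a derived shape feature.
import math

def area_perimeter(mask):
    """Area = foreground pixel count; perimeter = foreground pixels
    touching the background (or the image border) via a 4-neighbour."""
    h, w = len(mask), len(mask[0])
    area = perimeter = 0
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            area += 1
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                    perimeter += 1
                    break
    return area, perimeter

def circularity(area, perimeter):
    """4*pi*area / perimeter^2; lower values suggest invasive shapes."""
    return 4 * math.pi * area / perimeter ** 2
```

An enlarged or irregular nucleus would show up as a larger area and a lower circularity than a normal cell, which is the intuition behind using these values as features.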

8.3.3.3.2 Texture Based Feature Extraction

Texture is a low‐level feature of images. Morphological (size and shape) and intensity‐ (color‐) based features may be the same for two objects; to distinguish them, we must consider smoothness, coarseness, or the continuity of colors, that is, texture features. Texture feature extraction thus captures the nonlinear, semantic, or regional‐level variation of color brightness in images. There are several methods for estimating texture features, such as the gray‐level co‐occurrence matrix and the wavelet transform. Several cervical cancer screening studies have used texture‐based feature extraction [50, 55–58].
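The gray‐level co‐occurrence matrix (GLCM) can be sketched for the simplest case, horizontally adjacent pixels. This stdlib‐only sketch computes only the contrast statistic, one of the classic Haralick texture features; a real implementation would also consider other offsets and angles and more statistics (energy, homogeneity, correlation).

```python
# Stdlib-only sketch: GLCM for the horizontal offset (dx = 1) over a
# quantized grayscale image, plus the contrast texture statistic.

def glcm_contrast(gray, levels):
    """Count co-occurrences of horizontally adjacent gray levels, then
    return contrast = sum over (i, j) of (i - j)^2 * P(i, j)."""
    glcm = [[0] * levels for _ in range(levels)]
    total = 0
    for row in gray:
        for a, b in zip(row, row[1:]):
            glcm[a][b] += 1
            total += 1
    return sum((i - j) ** 2 * glcm[i][j] / total
               for i in range(levels) for j in range(levels))
```

A smooth region yields contrast near zero, while a coarse one yields a high value, which is exactly the smoothness/coarseness distinction described above.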

8.3.3.3.3 Intensity Based Feature Extraction

Intensity‐based feature extraction obtains information from the color distribution of gray or color images, but the continuity or smoothness of edges and other areas is not considered in this category. Intensity‐based features have been used in cervical cancer screening problems to learn and identify multiple cervix lesion disorders [50, 59, 60].

After extracting the attributes, we must select features so that the classifier is robust and works well on unseen images. There are numerous methods for feature selection in supervised and unsupervised learning, such as correlation‐based selection, decision tree algorithms, information gain, and dimension reduction techniques (e.g., PCA) [50, 61–63].

8.3.3.4 Classification Model

The data set contains three types of cervix condition: high‐grade CIN, low‐grade CIN, and normal. We thus need to identify the various conditions of the cervix using the extracted and selected features of the images, which requires classifiers to distinguish among targets or labels. Classifiers assign each image to a target based on its correlation with the target and may be supervised or unsupervised. Various researchers have used various machine learning algorithms (classifiers) to detect high‐grade CIN, low‐grade CIN, or a normal cervix. The most popular machine learning algorithms are linear regression, logistic regression, decision tree, SVM, naive Bayes, k‐nearest neighbors, random forest, and XGBoost, among others. Mariarputham and Stephen [49] used SVM to classify Pap smear images into seven target labels. Lu et al. [64] deployed five different machine learning algorithms for cervical cancer diagnosis.
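As an illustration, one of the algorithms listed, k‐nearest neighbors, can be written over the extracted feature vectors in a few lines. This is a stdlib‐only sketch (real systems would use scikit‐learn); the features and labels are hypothetical examples, and the distance metric and k are the usual default choices.

```python
# Stdlib-only sketch of k-nearest-neighbours classification over
# feature vectors (tuples of numbers). Requires Python 3.8+ (math.dist).
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Label `x` by majority vote among its k nearest training points."""
    nearest = sorted(zip(train_X, train_y),
                     key=lambda pair: math.dist(pair[0], x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

With well‐separated morphology/texture/intensity features, even this simple vote separates the cervix classes; in practice, feature scaling and cross‐validated choice of k matter considerably.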

We need assessment metrics to measure the efficiency of the classifiers; the CAD framework must perform well enough to compensate for the shortage of experts. Common metrics include accuracy, sensitivity, and specificity. Accuracy is the ratio of true positives and true negatives to the total number of cases. Sensitivity indicates the percentage of diseased patients correctly identified, and specificity the percentage of healthy patients correctly identified. We can measure the success of the CAD system based on these metrics. Performance is evaluated on unseen (test) data to check whether the model trained on the data set is overfitting or underfitting.
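The three metrics follow directly from the confusion‐matrix counts; a worked example with hypothetical screening numbers:

```python
def screening_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (true-positive rate), and specificity
    (true-negative rate) from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # share of diseased cases caught
        "specificity": tn / (tn + fp),   # share of healthy cases cleared
    }

# Hypothetical screening run: 90 true positives, 10 missed cases,
# 80 true negatives, 20 false alarms out of 200 screenings.
m = screening_metrics(tp=90, fp=20, tn=80, fn=10)
print(m)  # accuracy 0.85, sensitivity 0.9, specificity 0.8
```

For screening, high sensitivity is usually weighted most heavily, since a missed CIN case is costlier than a false alarm that triggers a recheck‐up.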

8.3.4 Recommender System

We use the central cloud repository to store the latest image of the cervix lesion and the screening outcome, which is passed to the healthcare staff so they can recommend the procedures to be followed. Women in developed countries can generally keep track of their health, whereas in underdeveloped areas regular monitoring is nearly impossible. To reduce this disparity, the proposed IoMT‐based healthcare system lets women in underdeveloped countries connect to a medical team even from home. If any concern arises, the health services will consult the patient's data in the central database and suggest visiting the hospital for a recheck‐up or treatment. The medical staff will also inform the doctor or pathologist in good time so that no miscommunication occurs. This procedure not only benefits people facing difficulties in developing countries but also helps women in developed countries save time.

8.4 Use Case of Real‐Time Remote Monitoring System

The major challenges in real‐time healthcare applications are privacy, scalability, and compliance when transferring patient data from one network to another. Another major challenge for IoMT is to allocate resources such as doctors and midwives to the patient immediately and to locate equipment using tracking devices. Cloud storage of cervical cancer patient data must have the following features (Figure 8.5).


Figure 8.5 An IoMT‐based smart remote monitoring system for cervix cancer.

8.4.1 Patient Remote Monitoring

The immediate availability of doctors, nurses, and a nearby healthcare center is checked by the IoMT using databases on the cloud. The devices connected to the cloud will be analyzed, and on that basis monitoring schedules and efficient utilization of equipment will be maintained. Staff scheduling and resource allocation schemes will ensure that resources are provided at the right time and place, thereby reducing resource conflicts.

8.4.2 Management and Scheduling

A patient registers by visiting a nearby healthcare center or through a personal device registered on the IoMT. As soon as a patient joins the network, she can be detected by the system's controller, for example a software‐defined networking (SDN) controller. The controller provides features such as security, scalability, and flexibility for transferring the patient's data to cloud storage, and it will also create a virtual network to maintain patient confidentiality. Remote patient monitoring requires stable wired or wireless communication to acquire images. A doctor is assigned based on availability, and follow‐up schedules, diagnoses, and reports are sent to the concerned parties through a management and scheduling subsystem.
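Availability‐based doctor assignment can be sketched as a priority queue keyed on each doctor's next free slot. This is a hypothetical data model for illustration only; the chapter does not prescribe a specific scheduling algorithm, and the names and consultation length here are assumptions.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Doctor:
    next_free: int                  # minutes from now until available
    name: str = field(compare=False)

def assign_doctor(pool, consult_minutes=20):
    """Pop the earliest-available doctor, then push them back
    with their next free slot delayed by the consultation length."""
    doc = heapq.heappop(pool)
    heapq.heappush(pool, Doctor(doc.next_free + consult_minutes, doc.name))
    return doc.name

pool = [Doctor(30, "Dr. A"), Doctor(0, "Dr. B"), Doctor(10, "Dr. C")]
heapq.heapify(pool)
print([assign_doctor(pool) for _ in range(3)])  # → ['Dr. B', 'Dr. C', 'Dr. B']
```

A real management subsystem would add specialties, locations, and persistence, but the queue discipline (earliest free doctor first) stays the same.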

8.4.3 Cloud Information Exchange

The data can be obtained from different remote locations and need to be shared among patients, doctors, midwives, and laboratories. Hospitals are the main centers for generating data related to diagnoses and the medical reports/records of patients. The data may be shared across heterogeneous environments: network to network, machine to network, cloud to single machine, or vice versa.

8.4.4 Device Malfunctioning

The IoMT system will also monitor connected medical devices and generate an alert signal when a device is malfunctioning. On receiving a malfunction indication, medical staff will be allocated to take precautions and fix the equipment quickly, minimizing or even avoiding downtime. If a connected device malfunctions, the patient's data will be transferred to the nearest center for monitoring. Remote monitoring can also help track patients' health and deliver timely instructions for the next course of action, and in‐time notification helps fix equipment faults before a device is damaged beyond repair.
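One common way to detect a malfunctioning connected device is a heartbeat timeout: flag any device that has not reported within a fixed window. The sketch below is an assumption about how such an alert could work (device IDs and the 30‑second timeout are invented for illustration), not the chapter's specified mechanism.

```python
import time

class DeviceMonitor:
    """Flag a connected device as malfunctioning when no heartbeat
    has arrived within `timeout` seconds (illustrative sketch)."""
    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, device_id, now=None):
        # Record the latest check-in time for the device.
        self.last_seen[device_id] = time.time() if now is None else now

    def malfunctioning(self, now=None):
        # Every device silent for longer than `timeout` is flagged.
        now = time.time() if now is None else now
        return [d for d, t in self.last_seen.items()
                if now - t > self.timeout]

mon = DeviceMonitor(timeout=30)
mon.heartbeat("colposcope-1", now=0)
mon.heartbeat("probe-2", now=40)
print(mon.malfunctioning(now=60))  # → ['colposcope-1']
```

The returned list would feed the alerting path described above: notify staff, reroute the patient's data to the nearest functioning center, and schedule a repair.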

8.5 Conclusion

In the current scenario, we require a reasonable trade‐off between the burden and benefits of screening. Cytology tests and colposcopies increase the burden at both ends: patients as well as experts. A preliminary diagnosis may therefore help women decide whether they should go for a colposcopy. Beyond this, it may respect women's sensitivities by providing an automated low‐cost, secure, and portable device in nearby healthcare centers that can be operated by midwives. The recommender system in the proposed methodology may give the patient appropriate advice on screening intervals, if required, and on future actions to reduce the incidence rate. If CIN I is diagnosed, the standard recommended treatment includes monitoring for progression, and remote monitoring may reduce the incidence of CIN II and III. Many developing countries have no cervical cancer screening programs owing to a lack of medical professionals and other resources. The proposed methodology will help governments and policy makers run cervical cancer awareness programs, which will help reduce the incidence of cervical cancer.

References

  1. 1 PDQ Screening and Prevention Editorial Board. Cervical cancer screening (pdq®): Patient version. PDQ Cancer Information Summaries [Internet], 2002.
  2. 2 Tarney, C.M. and Han, J. (2014). Postcoital bleeding: a review on etiology, diagnosis, and management. Obstetrics and Gynecology International 2014.
  3. 3 Holland, J.F. and Pollock, R.E. (2010). Holland‐Frei Cancer Medicine, vol. 8. PMPH‐USA.
  4. 4 Kumar, V., Abbas, A.K., Fausto, N., and Mitchell, R.N. (2007). Cervical cancer. In: Robbins Basic Pathology, vol. 8, 718–721. Saunders Elsevier.
  5. 5 Dunne, E.F. and Park, I.U. (2013). HPV and HPV‐associated diseases. Infectious Disease Clinics 27 (4): 765–778.
  6. 6 Stewart, B.W. and Wild C.P. (2014). World cancer report 2014. World Health Organization.
  7. 7 National Cancer Institute (2016). Cervical cancer treatment (pdq®)–health professional version.
  8. 8 Arbyn, M., Weiderpass, E., Bruni, L. et al. (2020). Estimates of incidence and mortality of cervical cancer in 2018: a worldwide analysis. The Lancet Global Health 8 (2): e191–e203.
  9. 9 Jemal, A., Siegel, R., and Ward, E. (2007). Cervical cancer. CA: A Cancer Journal for Clinicians 57: 43–66.
  10. 10 Canavan, T.P. and Doshi, N.R. (2000). Cervical cancer. American Family Physician 61 (5): 1369–1376.
  11. 11 Canfell, K., Kim, J.J., Brisson, M. et al. (2020). Mortality impact of achieving who cervical cancer elimination targets: a comparative modelling analysis in 78 low‐income and lower‐middle‐income countries. The Lancet 395 (10224): 591–603.
  12. 12 Mishra, G.A., Pimple, S.A., and Shastri, S.S. (2011). An overview of prevention and early detection of cervical cancers. Indian Journal of Medical and Paediatric Oncology: Official Journal of Indian Society of Medical & Paediatric Oncology 32 (3): 125.
  13. 13 Coste, J., Cochand‐Priollet, B., de Cremoux, P. et al. (2003). Cross sectional study of conventional cervical smear, monolayer cytology, and human papillomavirus DNA testing for cervical cancer screening. BMJ 326 (7392): 733.
  14. 14 Ronco, G., Cuzick, J., Pierotti, P. et al. (2007). Accuracy of liquid based versus conventional cytology: overall results of new technologies for cervical cancer screening: randomised controlled trial. BMJ 335 (7609): 28.
  15. 15 Apgar, B.S. (2001). New tests for cervical cancer screening. American Family Physician 64 (5): 729.
  16. 16 Nene, B.M., Deshpande, S., Jayant, K. et al. (1996). Early detection of cervical cancer by visual inspection: a population‐based study in rural India. International Journal of Cancer 68 (6): 770–773.
  17. 17 Singh, V., Sehgal, A., and Luthra, U.K. (1992). Screening for cervical cancer by direct inspection. British Medical Journal 304 (6826): 534–535.
  18. 18 Wesley, R., Sankaranarayanan, R., Mathew, B. et al. (1997). Evaluation of visual inspection as a screening test for cervical cancer. British Journal of Cancer 75 (3): 436–440.
  19. 19 Sankaranarayanan, R., Basu, P., Wesley, R.S. et al. (2004). Accuracy of visual screening for cervical neoplasia: results from an iarc multicentre study in India and Africa. International Journal of Cancer 110 (6): 907–913.
  20. 20 University of Zimbabwe and JHPIEGO Cervical Cancer Project (1999). Visual inspection with acetic acid for cervical‐cancer screening: test qualities in a primary‐care setting. The Lancet 353 (9156): 869–873.
  21. 21 Sherris, J., Wittet, S., Kleine, A. et al. (2009). Evidence‐based, alternative cervical cancer screening approaches in low‐resource settings. International Perspectives on Sexual and Reproductive Health 35 (3): 147–152.
  22. 22 Sankaranarayanan, R., Gaffikin, L., Jacob, M. et al. (2005). A critical assessment of screening methods for cervical neoplasia. International Journal of Gynecology & Obstetrics 89: S4–S12.
  23. 23 Onasanya, A. and Elshakankiri, M. (2017). Iot implementation for cancer care and business analytics/cloud services in healthcare systems. In: Proceedings of the10th International Conference on Utility and Cloud Computing, 205–206.
  24. 24 Khamparia, A., Gupta, D., de Albuquerque, V.H.C. et al. (2020). Internet of health things‐driven deep learning system for detection and classification of cervical cells using transfer learning. The Journal of Supercomputing: 1–19.
  25. 25 Intel Kaggle (2017). MobileODT cervical cancer screening.
  26. 26 Khan, S.U., Islam, N., Jan, Z. et al. (2019). A novel deep learning based framework for the detection and classification of breast cancer using transfer learning. Pattern Recognition Letters 125: 1–6.
  27. 27 Sun, G., Li, S., Cao, Y., and Lang, F. (2017). Cervical cancer diagnosis based on random forest. International Journal of Performability Engineering 13 (4).
  28. 28 Lu, D. and Liu, T. (2011). The application of iot in medical system. In: 2011 IEEE International Symposium on IT in Medicine and Education, vol. 1, 272–275. IEEE.
  29. 29 Ijaz, M.F., Alfian, G., Syafrudin, M., and Rhee, J. (2018). Hybrid prediction model for type 2 diabetes and hypertension using dbscan‐based outlier detection, synthetic minority over sampling technique (smote), and random forest. Applied Sciences 8 (8): 1325.
  30. 30 Rahman, A., Rashid, M., Le Kernec, J. et al. (2019). A secure occupational therapy framework for monitoring cancer patients' quality of life. Sensors 19 (23): 5258.
  31. 31 Chandy, A. (2019). A review on IoT based medical imaging technology for healthcare applications. Journal of Innovative Image Processing (JIIP) 1 (01): 51–60.
  32. 32 Onasanya, A. and Elshakankiri, M. (2019). Smart integrated iot healthcare system for cancer care. Wireless Networks: 1–16.
  33. 33 Parra, S., Carranza, E., Coole, J. et al. (2020). Development of low‐cost point‐of‐care technologies for cervical cancer prevention based on a single‐board computer. IEEE Journal of Translational Engineering in Health and Medicine 8: 1–10.
  34. 34 Kurniawan, A. (2019). Introduction to raspberry pi. In: Raspbian OS Programming with the Raspberry Pi, 1–25. Springer.
  35. 35 Pajankar, A. (2017). Introduction to single board computers and raspberry pi. In: Raspberry Pi Image Processing Programming, 1–24. Springer.
  36. 36 Upton, E. (2016). Raspberry pi 2 on sale now at $35. Raspberry Pi.
  37. 37 Raspberry Pi (2015). Raspberry Pi 3 Model B [online]. (www.raspberrypi.org).
  38. 38 Shilpashree, K.S., Lokesha, H., and Shivkumar, H. (2015). Implementation of image processing on raspberry pi. International Journal of Advanced Research in Computer and Communication Engineering 4 (5): 199–202.
  39. 39 Reebs, S.R. (1988). Ultrasonic sensor, November 22 1988. US Patent 4,785,664.
  40. 40 Yulianto, A. et al. (2019). Design prototype of audio guidance system for blind by using raspberry pi and fuzzy logic controller. In: Journal of Physics: Conference Series, vol. 1230, 012024. IOP Publishing.
  41. 41 Doshi, P., Shirke, Y., Hegde, T., and Dhanvij, P. (2018). Text reader for visually impaired using Google Cloud Vision API 4 (12): 2349–6002.
  42. 42 Beutel, J., Sonka, M., Kundel, H.L. et al. (eds.) (2000). Handbook of Medical Imaging: Medical Image Processing and Analysis 2. SPIE Press.
  43. 43 Sonka, M., Hlavac, V., and Boyle, R. (1993). Image pre‐processing. In: Image Processing, Analysis and Machine Vision, 56–111. Springer.
  44. 44 Pham, D.L., Chenyang, X., and Prince, J.L. (2000). Current methods in medical image segmentation. Annual Review of Biomedical Engineering 2 (1): 315–337.
  45. 45 Sharma, N. and Aggarwal, L.M. (2010). Automated medical image segmentation techniques. Journal of Medical Physics 35 (1): 3.
  46. 46 Pohle, R. and Toennies, K.D. (2001). Segmentation of medical images using adaptive region growing. In: Medical Imaging 2001: Image Processing, vol. 4322, 1337–1346. International Society for Optics and Photonics.
  47. 47 Bai, B., Liu, P.‐Z., Yong‐Zhao, D., and Luo, Y.‐M. (2018). Automatic segmentation of cervical region in colposcopic images using K‐means. Australasian Physical & Engineering Sciences in Medicine 41 (4): 1077–1085.
  48. 48 Lange, H. (2005). Automatic detection of multi‐level acetowhite regions in rgb color images of the uterine cervix. In: Medical Imaging 2005: Image Processing, vol. 5747, 1004–1017. International Society for Optics and Photonics.
  49. 49 Mariarputham, E.J. and Stephen, A. (2015). Nominated texture based cervical cancer classification. Computational and Mathematical Methods in Medicine 2015.
  50. 50 Jusman, Y., Ng, S.C., and Osman, N.A.A. (2014). Intelligent screening systems for cervical cancer. The Scientific World Journal 2014.
  51. 51 Gómez‐Mayorga, M.E., Gallegos‐Funes, F.J., De‐la Rosa‐Vázquez, J.M. et al. (2009). Diagnosis of cervical cancer using the median M‐type radial basis function (MMRBF) neural network. In: Mexican International Conference on Artificial Intelligence, 258–267. Springer.
  52. 52 Li, Z. and Najarian, K. (2001). Automated classification of Pap smear tests using neural networks. In: IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No. 01CH37222), vol. 4, 2899–2901. IEEE.
  53. 53 Mat‐Isa, N.A., Mashor, M.Y., and Othman, N.H. (2008). An automated cervical pre‐cancerous diagnostic system. Artificial Intelligence in Medicine 42 (1): 1–11.
  54. 54 Thiran, J.‐P. and Macq, B. (1996). Morphological feature extraction for the classification of digital images of cancerous tissues. IEEE Transactions on Biomedical Engineering 43 (10): 1011–1020.
  55. 55 Arya, M., Mittal, N., and Singh, G. (2018). Texture‐based feature extraction of smear images for the detection of cervical cancer. IET Computer Vision 12 (8): 1049–1059.
  56. 56 Ji, Q., Engel, J., and Craine, E. (2000). Texture analysis for classification of cervix lesions. IEEE Transactions on Medical Imaging 19 (11): 1144–1149.
  57. 57 Sneha, K., Arunvinodh, C. et al. (2016). Cervical cancer detection and classification using texture analysis. Biomedical and Pharmacology Journal 9 (2): 663–671.
  58. 58 Wei, L., Gan, Q., and Ji, T. (2017). Cervical cancer histology image identification method based on texture and lesion area features. Computer Assisted Surgery 22 (sup1): 186–199.
  59. 59 Suryatenggara, J., Ane, B.K., Pandjaitan, M., and Steinberg, W. (2009). Pattern recognition on 2d cervical cytological digital images for early detection of cervix cancer. In: 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), 257–262. IEEE.
  60. 60 Van Raad, V. (2005). Image analysis and segmentation of anatomical features of cervix uteri in color space. In: 2005 13th European Signal Processing Conference, 1–4. IEEE.
  61. 61 Ashok, B. and Aruna, P. (2016). Comparison of feature selection methods for diagnosis of cervical cancer using SVM classifier. International Journal of Engineering Research and Applications 6: 94–99.
  62. 62 Hallinan, J. (2001). Feature selection and classification in the diagnosis of cervical cancer. The Practical Handbook of Genetic Algorithms Applications: 167–202.
  63. 63 Iliyasu, A.M. and Fatichah, C. (2017). A quantum hybrid PSO combined with fuzzy k‐NN approach to feature selection and cell classification in cervical cancer detection. Sensors 17 (12): 2935.
  64. 64 Jiayi, L., Song, E., Ghoneim, A., and Alrashoud, M. (2020). Machine learning for assisting cervical cancer diagnosis: an ensemble approach. Future Generation Computer Systems 106: 199–205.
  65. 65 Hu, Z., Tang, J., Wang, Z. et al. (2018). Deep learning for image‐based cancer detection and diagnosis‐ a survey. Pattern Recognition 83: 134–149.
  66. 66 World Health Organization (WHO) et al. (2015). Cervical cancer prevention and control saves lives in the Republic of Korea. Accessed at http://www.who.int/news‐room/feature‐stories/detail/cervical‐cancerprevention‐and‐control‐saves‐lives‐in‐the‐republic‐of‐korea [accessed 30 April 2018].