Chetna Kaushal1, Md Khairul Islam2, Anshu Singla1, and Md Al Amin3
1 Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
2 Department of Information and Communication Technology, Islamic University, Kushtia, Bangladesh
3 Department of Computer Science and Engineering, Prime University, Dhaka, Bangladesh
Cancer is a disorder in which the body's cells grow uncontrollably, creating a malignant tumor; cancers are named after the body part where they begin or to which they spread (e.g., lung cancer). Cancer found in the cervix tissue is known as cervical cancer. The cervix acts as an interface between the uterus and vagina [1].
Cervical intraepithelial neoplasia (CIN) is abnormal growth of cells on a woman's cervix. It can progress to cervical cancer, and this precancerous condition is divided into three grades (CIN-I, -II, and -III).
At the early stage of cervical cancer, no symptoms are commonly detected. Later signs can include frequent menstrual bleeding, stomach discomfort, or pain during periods, after menopause, or after sexual activity [1]. Pain after sexual intercourse is not a symptom, but postcoital bleeding can indicate that cervical cancer is present or developing [2].
The main cause of cervical cancer is human papillomavirus (HPV) infection. About 90% of cervical cancer cases have been confirmed to involve long-term HPV infection [3, 4]. HPV is generally transmitted from one person to another during sex. According to the Cleveland Clinic, about 80% of sexually active people will carry HPV at some point in their lives, but this does not mean that every woman infected with HPV will be diagnosed with cervical cancer [5, 6]. The risk of cervical cancer increases with alcohol consumption and smoking (important risk factors in all cancers), long-term use of birth control pills (five or more years), and having multiple sexual partners at an early age; however, these are less significant risk factors [1, 7]. Cervical cancer is mostly diagnosed in women above 30, and the cancer cells require about 10–20 years to develop [6].
Figure 8.1 depicts the numbers of different cancer cases, based on a 2018 report. Lung, breast, colon, and cervical cancers are the top four. Nearly 2.1 million lung cancer cases were reported worldwide, with 85% mortality, placing it at the top of the table. Breast and colon cancers reported in 2018 were 2 million and 1.1 million cases, respectively, with mortality of almost 30% and 50%. Cervical cancer is the world's fourth most common cancer [8, 9] and the fourth most prominent in women [10]. Nearly 529 000 patients were diagnosed in 2012, nearly 8% of the estimated cancer cases [6]; its mortality rate is 50.4%.
Figure 8.2 depicts the cervical cancer cases and deaths of each country based on the Human Development Index (HDI), which is defined by education, per capita income, and other indices. Countries with medium HDI have most of the cases and deaths, whereas countries with very high HDI have comparatively very few. Strikingly, cervical cancer case numbers in low HDI countries are similar to those in very high HDI countries, but deaths are much higher [8]. According to the World Health Organization (WHO), developing countries reported approximately 70% of cervical cancers and 90% of deaths [66]. In 2018, about 570 000 women were diagnosed, and almost 54.57% of the patients died, according to the WHO report. Although death rates have increased by 4% in recent times, developed countries have reduced their death rates considerably. Women from 42 underdeveloped countries are the most likely to be affected by cervical cancer [11]: about 84% of cases and 88% of deaths have been recorded in underdeveloped countries [8]. Squamous cell carcinoma (80%), adenocarcinoma (15%), adenosquamous carcinoma (3–5%), small cell carcinoma, and neuroendocrine cancer are the most frequent forms of cervical cancer [9].
Due to the extreme death rate, many methods have been developed to identify the precancerous stage of cervical cancer so that the chances of survival can be improved by early diagnosis and treatment of cervix cells. Different computer-aided screening methods are used to detect an infected cervix before it transforms into invasive cancer. Such computer-assisted diagnosis (CAD) systems are common, simple to use, and less time-consuming. The Pap test (Pap smear), liquid-based cytology (LBC), HPV DNA testing, unaided visual inspection, and visual inspection with acetic acid are the screening procedures discussed in the literature [12].
For the past 50 years, the Pap test has lowered the case and death rates of cervical cancer in the US [9, 12]. However, the Pap test requires substantial resources, which has limited its use in developing countries. In the Pap test with conventional cytology, cells are carefully scraped from the ectocervix and endocervix by trained, skilled pathologists with a spatula or brush, smeared onto a microscope slide, and examined [12]. It has a relatively low sensitivity of 54% but a high precision of 94% [13].
In LBC, cells are obtained in the same way as in a conventional Pap test, but the samples are placed into a liquid preservative. After filtering or centrifuging, the sample is transferred to a slide. This procedure is more costly than conventional cytology and requires extra resources and expensive tools. LBC showed no difference in relative sensitivity and precision from the conventional method, but it has the benefit of allowing evaluation of high-risk HPV in the cervix cells. Moreover, in LBC, proper sample selection is not feasible [12, 14].
Automated Pap analysis helps to minimize errors by analyzing Pap smear slides with a CAD tool. AutoPap and AutoCyte Screen are two automated Pap analysis systems. In AutoPap, the slide sample is checked and graded by an algorithm. While it does not explain which cells are irregular, it can describe the cervix's actual condition [15].
Visual screening methods are easy, can be carried out by a professional doctor or nurse, are comparatively cheap, do not require laboratory space, and give instant results. The two visual analysis types are unaided visual inspection and visual inspection with acetic acid (VIA). Unaided visual inspection examines the cervix with the naked eye without applying acetic acid; it performs poorly [16–18].
To identify precancerous lesions, visual inspection after application of 3–5% acetic acid is performed. The result can be read with the naked eye; the procedure is also known as cervicoscopy or direct visual inspection. Applying 3–5% acetic acid causes reversible coagulation of cellular proteins, and invasive cancer regions contain a significant number of such cells in the epithelium [19, 20]. VIA can outperform the Pap test in correctly detecting precancerous lesions [21]. Between 45% and 79% of women at high risk of developing cervical cancer have been correctly detected by qualified doctors and mid-level providers [22]. However, VIA produces more false positives than the Pap approach.
However, it is hard to find professional pathologists who can identify precancerous cervix cells with the naked eye. The community and researchers have therefore developed Internet of Things– (IoT-) [23] and Internet of Healthcare Things– (IoHT-) [24] based devices for pre-screening. IoHT-based devices work on Pap smear images and classify only normal and abnormal images. But there is a need for a system where women can have pre-screening for cervical cancer at low cost with a timely, accurate diagnosis. Implementing such a system requires computer vision and computational resources along with knowledge of the medical sciences. Low-resource, cost-effective settings are required, with highly secured and confidential data, for preliminary screening tests. If the test results are positive, the patient may be referred for colposcopy and further treatment by expert doctors. This preliminary screening, which can be carried out by midwives and nurses, may help the patient by reducing the chances of cervical cancer, reduce the burden on experts, and support early diagnosis.
Colposcopy or CIN images of patients are hard to obtain, but Intel and MobileODT, through a competition on Kaggle in 2017, classified different cervix types to detect precancerous conditions of patients [25].
With continued improvement in screening methods and HPV vaccination programs in developed countries, the disparity of burden between developed countries and women in resource-poor settings becomes even more profound. Nearly 85% of cervical cancer deaths occur in low- and middle-income countries, and cervical cancer is a leading cause of cancer death in women in the developing world. If women are screened once in a timely manner after the age of 35 years, the risk of death from cervical cancer is reduced by 70%. This can also reduce the disparity between women in developed and developing countries. Women in developing countries may feel uncomfortable undergoing a physical examination; this hesitancy creates a strong need for a portable, wireless, and confidential device so that pre-screening can be done by midwives, at nearby healthcare centers, or by the women themselves. Persistent HPV infection may lead to cervical cancer, so timely pre-screening can identify the important determinants and allow timely clearance with small treatments, which further reduces the viral persistence that may lead to cervical cancer. Recent image-capturing technology can be used to make such a portable, low-cost, wireless, and confidential device that can be easily accessed and used by nearby healthcare centers.
IoT technology is an evolving healthcare field that has decreased patient waiting times and ensured treatment quality, even at home, by automating real-time procedures. Various medical imaging experiments have been performed using IoT, known as the Internet of Medical Things (IoMT), to pave the way for automation approaches.
He et al. explore cancer diagnostic approaches using deep learning algorithms such as convolutional neural networks (CNNs) and autoencoders (AEs) [65]. To detect cancer, images of multiple organs or cells are used (such as lung, colon, breast, and cervix). Various pre-trained models such as VGG16, ResNet, and GoogleNet have been used to distinguish between benign and malignant breast cells [26]; these pre-trained models use average pooling and are attached to a fully connected layer. In one study [27], Pap smear images of cervical cells were first taken from the Herlev data set; the authors then applied random forest (a machine learning algorithm) to classify normal and abnormal images, extracting features based on weight and using 10-fold cross-validation for training. A custom CNN model called CXNN has been created for classification by Sun et al. [27], who tuned its optimization and training hyperparameters.
Lu and Liu [28] explain clinical treatment and real-time remote ECG monitoring using IoT in medical science. While they offer the concept of integrating IoT in healthcare, they do not explain the IoT devices or how they are linked. In Ijaz et al. [29], a hybrid prediction model (HPM) is employed to provide early prediction of type 2 diabetes and hypertension using machine learning algorithms. The HPM can be implemented using the IoMT, which stores the evaluated risk factors on a safe remote server and provides the patient with the proper course of action. The authors suggest an IoT-based Android app for future work but do not explain how data is gathered by the sensor and linked to the mobile app. Rahman et al. [30] describe how diagnosed cancer patients should maintain a secure routine during treatment. However, covering patients' daily lives is difficult, so many IoT-based models have been developed to obtain real-time reports of patients' conditions and activities. Their study identifies the skeletal state and exercise of cancer patients; however, how this addresses cancer patients' issues is unclear. The IoT network system employed in Onasanya and Elshakankiri [23] mainly focuses on the wireless topology of different devices for diagnosis, recovery, and patient management. Health data obtained by the IoT network system and other mobile devices is analyzed to help health professionals make urgent assessments of cancer conditions.
Chandy [31] discusses IoT integration for multiple medical imaging diagnostic systems such as MRI, x-ray, CT, and ultrasound. However, further work is still needed on encryption algorithms to secure IoT medical devices and avoid any misuse of data. Onasanya and Elshakankiri [32] created a smart healthcare infrastructure, especially for patients with cancer. They divided their solution into five layers and joined them to construct an efficient, interconnected IoT framework: a service layer for identification, tracking, and follow-up; a data center layer for capturing cancer patient data; a cancer treatment layer for visualizing patient reports; a hospital layer for relaying patient status to a doctor or nurse; and a protection layer for preventing unexpected hospital incidents.
Single-board computers are used by Parra et al. [33] to offer low-cost diagnosis for underdeveloped countries using a Raspberry Pi (RP) computer. They developed two devices to diagnose cervix cells before they turn into invasive cancer, with real-time results. One, named PiHRME, is based on visual inspection of a high-resolution image of the cervix's nuclei, implemented on IoT devices with deep learning algorithms for classifying cervix cells. The other, called PiReader, is an IoT-implemented molecular-level test of cervical cells to identify their current condition. Both are easy to use and cost half as much as the previous manual approaches. Although the Raspberry Pi computer is cost-effective, it cannot be used to store and train with a large number of images. Khamparia et al. [24] employ the IoHT for the classification of cervical cell images collected from a Pap smear approach. To accomplish this, they merged IoT with image classification algorithms, using both machine learning and deep learning, and applied transfer learning to reuse the pre-developed knowledge of existing models. They claim 97% accuracy, but the data set is very limited, which can cause the trained model to overfit. Cervical cancer screening will need a huge data set to train a robust model, which remains a concern for researchers.
In this part, we describe an IoT-based cervix detection healthcare system. The device uses a Raspberry Pi (RP) board with a Raspberry Pi camera and an ultrasonic sensor, along with robotic equipment that opens the labia minora (LM). A labia minora and cervix detection algorithm is implemented on the RP board for capturing the correct organs. First, the camera takes a photo to find the labia minora, as it lies in front of the cervix, and the algorithm detects its position. Once the position of the LM is fixed, it is treated as an object, and the ultrasonic sensor measures the distance from the sensor to the LM. This distance is needed so that the medical equipment driven by the IoT device can correctly open the vagina automatically, without any assistance; if the equipment does not get an accurate measurement and location, it will not be able to open the vagina. After opening the vagina, the camera takes a photo of the cervix, and the algorithm then detects the precancerous condition of the cervix automatically. Figure 8.3 describes the complete communication among the IoT device elements with proper directions and commands. The following subsections describe all the devices, step by step.
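The control flow just described can be sketched in Python. All five functions below are hypothetical stand-ins for the camera, detection algorithm, ultrasonic sensor, robotic equipment, and trained classifier; this is an illustration of the sequencing, not an actual implementation:

```python
def screening_pipeline(capture_image, detect_labia_minora, measure_distance,
                       open_with_equipment, classify_cervix):
    """Sketch of the device control flow: the five callables are
    hypothetical stand-ins for the hardware and algorithms in the text."""
    photo = capture_image()                 # RP camera takes the first photo
    position = detect_labia_minora(photo)   # locate the labia minora (LM)
    if position is None:
        return None                         # no accurate location: abort
    distance_cm = measure_distance()        # ultrasonic sensor: distance to LM
    if not open_with_equipment(position, distance_cm):
        return None                         # equipment could not open safely
    cervix_photo = capture_image()          # second photo: the cervix itself
    return classify_cervix(cervix_photo)    # precancerous-condition diagnosis
```

Each stage can only run once the previous one has produced a valid result, matching the directed communication in Figure 8.3.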
The Raspberry Pi (RP) is a series of small single-board computers developed by the RP Foundation in the United Kingdom and launched in 2012. Known as a minicomputer, it performs full computational processes and is widely used for real-time image or video processing, IoT-based applications, and robotics. A camera, a distance sensor, and other devices can be added to the board. Depending on the model, the Raspberry Pi includes a quad-core processor at 1.2–1.4 GHz, 1 GB RAM, a Broadcom SoC with GPU, built-in Wi-Fi and Bluetooth, and 10/100 Mbps Ethernet. Most people use it to build hardware projects, home automation, and algorithm processing [34, 35]. As of 2016, about eight million Raspberry Pis had been sold, making it one of the most popular single-board computers in the world [36]. We use the RP to detect and open the labia minora and, finally, to detect the cervix using a pretrained model on the board (Figure 8.4).
Camera: The Raspberry Pi supports an attachable camera module (version 2) with a Sony IMX219 eight-megapixel sensor, able to deliver 1080p high-definition video at 30 frames per second and still shots up to 3280 × 2464 resolution. Time lapse, slow-motion video, IP CCTV monitoring, and more are supported by this module. The camera module attaches to a Raspberry Pi by a 15-pin ribbon cable through the dedicated 15-pin MIPI camera serial interface, which was designed for interfacing cameras to the BCM2835 processor [37, 38].
The ultrasonic sensor HC-SR04 is used to measure the distance of an object from the sensor; it can measure distances from 2 cm (0.02 m) to 400 cm (4 m). It is the most popular sensor in applications and devices that need to measure distance. Put another way, an ultrasonic sensor is an electronic device that measures distance by emitting ultrasonic sound waves and converting the reflected waves into an electrical signal. It is generally used with microcontroller and microprocessor platforms such as Arduino, ARM, PIC, and Raspberry Pi. Ultrasonic sensors are used in IoT- and robotics-based platforms for detecting a targeted object, as well as in manufacturing technologies. Notably, ultrasonic technology has enabled the medical sector to produce images of internal organs, detect tumors, and monitor the health of babies in the womb [39]. The ultrasonic sensor can be used with the RP [40], but the RP works at 3.3 V logic while the HC-SR04 works at 5 V, so the sensor's echo output must be level-shifted before it is connected to the RP. The trig pin of the ultrasonic sensor is connected to physical pin GPIO23 of the RP, and a voltage divider of 680 Ω and 1.5 kΩ resistors converts the echo pin output to 3.3 V logic before it is connected to physical pin GPIO24. Finally, we write a program on the Raspberry Pi to drive the sensor and perform the required measurement.
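The distance itself follows from timing the echo pulse: sound travels at roughly 343 m/s in air, and the pulse covers the sensor-to-object path twice (out and back). A minimal sketch of that conversion (the GPIO wiring and pulse-timing code are omitted here):

```python
def echo_to_distance_cm(echo_duration_s, speed_of_sound_cm_s=34300):
    """Convert an HC-SR04 echo pulse width (seconds) to distance (cm).

    The pulse travels to the object and back, so the one-way distance
    is half of duration * speed of sound (~343 m/s in air at 20 C).
    """
    return echo_duration_s * speed_of_sound_cm_s / 2.0
```

On the RP, the echo pulse width would be obtained by timing the rising and falling edges on GPIO24 after triggering GPIO23.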
The IoT- and robotics-based equipment will be able to open or close the vagina carefully, as it is a very sensitive organ. The equipment can move left, right, up, and down under direction from the RP, which is guided by the sensor data and algorithmic calculations.
After taking photos of the cervix, the device stores the images in storage embedded on the RP board, and a trained machine learning algorithm produces the diagnosis for cervical cancer. This algorithm is implemented on the Raspberry Pi board to classify the cervix type from the image. The result and image are then sent from RP storage to a cloud repository together with the patient's details. A cloud repository app must be installed on the RP board over an Internet connection; for details, see Doshi et al. [41]. Finally, the result and corresponding data are stored for both the patient's and the health team's supervision.
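The upload step might package the image and result as a JSON record before sending it to the repository. The field names below are illustrative assumptions, not a real repository schema; the image is base64-encoded so it can travel inside a JSON body:

```python
import base64
import json
from datetime import datetime, timezone

def build_result_payload(patient_id, image_bytes, diagnosis):
    """Assemble a JSON record to send from RP storage to the cloud
    repository. Field names are illustrative, not an actual schema."""
    record = {
        "patient_id": patient_id,
        "diagnosis": diagnosis,   # e.g. "normal" / "low-grade CIN"
        "captured_at": datetime.now(timezone.utc).isoformat(),
        # base64 makes the raw image bytes safe to embed in JSON
        "image_b64": base64.b64encode(image_bytes).decode("ascii"),
    }
    return json.dumps(record)
```

The resulting string could then be POSTed to whatever endpoint the chosen cloud repository app exposes.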
The CAD system is an IoT-based healthcare system that can be used at home to check for cervical cancer conditions without going to the hospital. After producing the results, the CAD system saves the corresponding images from each patient to the central cloud archive for further use by research institutes and others. The efficiency of the classifiers is strongly dependent on the number of images/data points. The central repository is linked to a healthcare data center through a server. To gain insight into the data, central repository data can be analyzed, synchronized, and grouped. Researchers can also make their models more robust and reliable by using repository data, improving the CAD system's adaptability. As we intend to share the central archive for public use, the records must be rechecked so that patient confidentiality is not violated. We should follow the government's rules and regulations for setting up such networks and equipment, and patients should also know the guidelines and policies of the CAD system [23].
Very few specialist pathologists are capable of identifying precancerous lesions using a visual inspection process, and proper identification depends on expert availability and attitude. CAD is an emerging approach that offers the pathologist fast results and higher accuracy while reducing the time and suffering of patients. In medical imaging, the use of CAD in the study of cytological images of cervix lesions has become a topic of interest.
IoT devices use sensors to capture images of the targeted area, from which information must then be extracted; this involves segmentation and feature extraction. But the data must be preprocessed before this point because, due to the atmosphere or other factors, the images from the IoT system can be distorted or contain noise [42].
Unexpected occurrences such as noise, outliers, and missing pixel values may have a major effect on segmentation efficiency, feature extraction, and model classification, so the images captured by the IoT network are preprocessed before being stored in the central repository. Image preprocessing in medical imaging can be categorized into four parts. The first is the correction of pixel brightness, also known as brightness transformation: the output pixel values of the image depend directly on the input pixel values. Contrast modification is one of the key components of preprocessing, especially in medical imaging. Brightness corrections can be broken into two sections, grayscale transformation and brightness transformation; grayscale is commonly used in medical imaging, especially in cell-level detection.
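A brightness transformation of this kind can be sketched with NumPy. The BT.601 grayscale weights and the linear alpha/beta form below are standard, generic choices, not specific to the system described:

```python
import numpy as np

def to_grayscale(rgb):
    """Luminance-weighted grayscale conversion (ITU-R BT.601 weights)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def adjust_brightness_contrast(img, alpha=1.0, beta=0.0):
    """Pixel-wise brightness transformation: out = alpha * in + beta.

    alpha scales contrast, beta shifts brightness; results are clipped
    back into the valid 8-bit range.
    """
    return np.clip(img.astype(np.float64) * alpha + beta, 0, 255)
```

Each output pixel depends only on the corresponding input pixel, which is exactly the point-wise property described above.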
Second, geometric transformation restores the positions of pixels that are distorted by the external environment, as in blurred images. This technique involves rotation, translation, shear, and scale. Such methods are also often used to enlarge an image database and balance the data set across different labels.
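As a sketch, simple geometric variants for enlarging and balancing a data set can be generated with NumPy's built-in array transforms:

```python
import numpy as np

def augment(img):
    """Generate simple geometric variants (rotations and a mirror) of one
    image, as used to enlarge and balance a training set."""
    return [img,
            np.rot90(img, 1),   # 90-degree rotation
            np.rot90(img, 2),   # 180-degree rotation
            np.fliplr(img)]     # horizontal mirror
```

A production pipeline would add shear, scale, and translation as well, typically via an affine-transform routine.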
Third, the image is filtered with certain kernels that enhance or smooth edge information so that the critical details are retrieved and the models can learn the necessary knowledge.
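A minimal sketch of kernel filtering with zero padding at the borders; the 3x3 averaging kernel is one common smoothing choice, while an edge kernel (e.g., a Laplacian) would enhance edges instead:

```python
import numpy as np

def filter2d(img, kernel):
    """Correlate a 2-D image with a small kernel (zero padding at borders).

    Each output pixel is the weighted sum of the kernel-sized window
    centred on the corresponding input pixel.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(np.float64), ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

mean_kernel = np.ones((3, 3)) / 9.0   # smoothing (averaging) kernel
```

Real pipelines use vectorized or FFT-based convolution, but the per-pixel loop makes the windowed-sum idea explicit.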
Fourth, image restoration, which entails improving deteriorated images. Most image degradation arises from temperature, the camera lens, or miscommunication of the electro-optical sensor. Two methods of restoration are available, deterministic and stochastic, and both are important for medical imaging tasks. The first is used where the noise and other deterioration are known; the second works from a small set of pre-enhanced reference images [43]. During preprocessing, there should be no loss of information such as patches or lesions in the affected area.
Image segmentation is one of the essential tasks for this purpose. It distinguishes the appropriate organs and cells from other, undesirable areas; in a nutshell, it identifies the most significant area, the one that includes the majority of the information, referred to as the region of interest (ROI). Image segmentation is an automated ROI separation method in which the extracted region does not overlap with the embedded correlative portion. In medical imaging, segmentation is managed and implemented by supervised and unsupervised algorithms. Image segmentation typically operates on two-dimensional images, but it has now also been applied to three-dimensional MRI images using a co-evolutionary kernel [44, 45].
Different methods of segmentation are discussed in the following subsections to classify the cervix area of interest for visual inspection:
Clustering is an unsupervised ROI segmentation approach that aims to locate pixel spaces of the same or similar intensity. Because it is unsupervised, it can work with less data and needs no particular labeling of the training data. Several clustering algorithms are used for segmentation, such as k-means, fuzzy c-means, and hard c-means; among them, one of the most used and best-performing is k-means. It maintains a mean intensity for each label, and each pixel is assigned to the label whose mean intensity is nearest to the pixel's own [46]. In Bai et al. [47], ROI segmentation was performed on HSV images using the k-means algorithm: first, histogram-based thresholding is applied to eliminate the image's mirror effect if it appears; then the images are converted from RGB to HSV, and the ROI is found from the image's V channel using k-means.
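The assign-then-update loop can be sketched on raw pixel intensities. This is a generic one-dimensional k-means, not the exact variant of [47]:

```python
import numpy as np

def kmeans_intensity(pixels, k=2, iters=20, seed=0):
    """Cluster pixel intensities with k-means: each pixel is assigned to
    the label whose mean intensity is nearest, then the means are updated."""
    rng = np.random.default_rng(seed)
    pixels = np.asarray(pixels, dtype=np.float64)
    centers = rng.choice(pixels, size=k, replace=False)   # initial means
    for _ in range(iters):
        # assign each pixel to its nearest cluster mean
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # recompute each cluster mean from its assigned pixels
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean()
    return labels, centers
```

On an image, the same routine runs over the flattened intensity (or V-channel) values, and the label array is reshaped back to the image grid.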
The region growing (RG) algorithm is a semiautomated segmentation technique. It grows a correlative region on the basis of certain previously established criteria (e.g., edge or intensity), and it needs a seed region from which the correlative field is detected, which is why it is semiautomated. It keeps adding pixels until the criteria are no longer fulfilled, and it performs well on grayscale images. In [48], the images are first converted to HSI space and RG is then applied to separate the ROI; the authors used this algorithm on 111 cervical cancer screening photos.
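A minimal region-growing sketch over a grayscale image, using an intensity criterion and 4-connectivity; the seed and tolerance are illustrative parameters, not values from [48]:

```python
from collections import deque

import numpy as np

def region_grow(img, seed, tol=10):
    """Grow a region from a seed pixel, adding 4-connected neighbours whose
    intensity differs from the seed by at most tol (the pre-established
    intensity criterion). Returns a boolean mask of the grown region."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    seed_val = img[seed]
    while queue:
        i, j = queue.popleft()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if (0 <= ni < h and 0 <= nj < w and not mask[ni, nj]
                    and abs(img[ni, nj] - seed_val) <= tol):
                mask[ni, nj] = True
                queue.append((ni, nj))
    return mask
```

Growth stops on its own once no neighbouring pixel satisfies the intensity criterion, which is the stopping condition described above.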
Thresholding is a segmentation method in which one label is assigned to pixels with intensity greater than the threshold value and another label to pixels below it. It usually produces a two-label segmentation, but multithresholding is also feasible.
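A sketch of two-label thresholding, together with Otsu's method as one common way to pick the threshold automatically (not necessarily the variant used in the cited studies):

```python
import numpy as np

def threshold_segment(img, t):
    """Two-label threshold segmentation: 1 where intensity > t, else 0."""
    return (np.asarray(img) > t).astype(np.uint8)

def otsu_threshold(img):
    """Pick the threshold by maximizing the between-class variance of the
    two labels (Otsu's method) over a 256-bin intensity histogram."""
    hist, _ = np.histogram(np.asarray(img).ravel(), bins=256, range=(0, 256))
    total = hist.sum()
    csum = np.cumsum(hist)                       # cumulative pixel counts
    cmean = np.cumsum(hist * np.arange(256))     # cumulative intensity sums
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0 = csum[t] / total                     # weight of the low class
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = cmean[t] / csum[t]                  # mean of the low class
        m1 = (cmean[-1] - cmean[t]) / (csum[-1] - csum[t])
        var = w0 * w1 * (m0 - m1) ** 2           # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

For a clearly bimodal intensity histogram, the automatic threshold lands between the two modes and separates the labels cleanly.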
This is one of the most used segmentation methods, but its learning is ultimately supervised. It is trained on the segmented training data set, with weights and other parameters fitted so that information loss is low; it can then segment the ROI from new/test images.
Segmentation methods vary greatly depending on the image domain, and an inaccurate choice of segmentation technique can cause a loss of information. Thus, selecting the right image segmentation technique helps increase the accuracy of any project.
Features are extracted and selected before classifier models are applied, and the efficiency of the classifiers is strongly dependent on the feature extraction and selection techniques. The classifier's results may be unsatisfactory in the presence of noise and redundant features. Feature extraction and selection lower computing time and memory requirements while also improving the performance of classifiers [49]. Features are first extracted from images of cervix cells based on morphology, texture, and intensity. Many feature extraction techniques are used in medical imaging; some of the most common are discussed here:
Morphological feature extraction derives features from the size and shape of objects [50], expressed as mathematical values. An increase in the size of cervix cells relative to normal cells is measured and treated as a feature, and the shape of cervix cells, such as round, invasive, or smooth, is also often taken as a feature. Various methods are used to assess cell size and shape from images, such as thresholding, clustering, and statistical techniques. Morphology-based feature extraction has been used by many researchers [51–54].
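As an illustration, a few size and shape features can be computed from a binary cell mask; the boundary-pixel perimeter used here is a crude simplification of the measures in the cited work:

```python
import numpy as np

def morphological_features(mask):
    """Size- and shape-based features from a binary cell mask:
    area (pixel count), perimeter (boundary-pixel count), and
    circularity (4*pi*area / perimeter^2, near 1 for a round cell)."""
    mask = np.asarray(mask, dtype=bool)
    area = int(mask.sum())
    padded = np.pad(mask, 1)
    # interior pixels: all four 4-connected neighbours are also set
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~interior).sum())
    circularity = 4 * np.pi * area / perimeter ** 2 if perimeter else 0.0
    return {"area": area, "perimeter": perimeter, "circularity": circularity}
```

An enlarged or irregular cell would show up as a larger area and lower circularity than a normal, round cell.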
Texture is a low-level feature of images. Morphological (size and shape) and intensity- (color-) based features may be identical for two objects; to distinguish them, we must consider smoothness, coarseness, or the continuity of colors (the texture feature). Texture extraction thus captures the nonlinear, semantic, or regional variation of color brightness in the images. There are several methods for estimating texture features, such as the gray level co-occurrence matrix and the wavelet transform. Several cervical cancer screening studies have used texture-based feature extraction [50, 55–58].
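A minimal gray level co-occurrence matrix and one derived texture feature (contrast) can be sketched as follows; a full implementation would combine several offsets and additional statistics such as energy and homogeneity:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Gray level co-occurrence matrix for one pixel offset (dx, dy >= 0):
    counts how often gray level i occurs next to gray level j, then
    normalizes the counts to a probability matrix."""
    img = np.asarray(img, dtype=np.int64)
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=np.float64)
    for i in range(h - dy):
        for j in range(w - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    return m / m.sum()

def glcm_contrast(p):
    """Texture feature: contrast = sum of (i - j)^2 * p(i, j);
    high for coarse texture, 0 for a perfectly uniform region."""
    idx = np.arange(p.shape[0])
    return float(((idx[:, None] - idx[None, :]) ** 2 * p).sum())
```

A uniform region yields zero contrast, while alternating gray levels yield a high value, which is exactly the smoothness-versus-coarseness distinction described above.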
Intensity-based features extract information from the color distribution of gray or color images; the continuity or smoothness of edges and other areas is not considered in this category. Intensity-based features have been used in cervical cancer screening problems to learn and identify multiple cervix lesion disorders [50, 59, 60].
After extracting the attributes, we need to select features so that the classifier is robust and works well on unseen images. In supervised learning there are numerous methods for feature selection, such as correlation-based selection, decision tree algorithms, information gain, and dimension reduction techniques such as principal component analysis (PCA) [50, 61–63].
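As one example, a PCA-based dimension reduction can be sketched via the eigendecomposition of the feature covariance matrix:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature vectors onto the top principal components,
    keeping the directions of highest variance (dimension reduction)."""
    X = np.asarray(X, dtype=np.float64)
    Xc = X - X.mean(axis=0)                 # centre the features
    cov = np.cov(Xc, rowvar=False)          # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]
    return Xc @ eigvecs[:, order]           # project onto top components
```

When the features are strongly correlated, a single component can retain nearly all of the variance, cutting the classifier's input size.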
The data set contains three types of images: high-grade CIN, low-grade CIN, and normal cervix. We therefore need to identify the various conditions of the cervix using the extracted and selected features of the images, and classifiers are needed to distinguish between the targets or labels. Based on the correlation to the target, classifiers assign each image to a target; classifiers may be supervised or unsupervised. Researchers have used various machine learning algorithms to detect high-grade CIN, low-grade CIN, or normal cervix. The most popular machine learning algorithms are linear regression, logistic regression, decision tree, SVM, naive Bayes, k-nearest neighbors, random forest, and XGBoost, among others. Mariarputham and Stephen [49] used SVM to classify Pap smear images into seven target labels, and Lu et al. [64] deployed five different machine learning algorithms for cervical cancer diagnosis.
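As one example from the list above, k-nearest neighbors can be sketched in a few lines of NumPy (a real pipeline would use a tuned library implementation):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """k-nearest-neighbour classifier: each test point receives the
    majority label among the k training points closest to it in
    Euclidean distance."""
    X_train = np.asarray(X_train, dtype=np.float64)
    X_test = np.asarray(X_test, dtype=np.float64)
    y_train = np.asarray(y_train)
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)   # distance to all training points
        nearest = y_train[np.argsort(dists)[:k]]      # labels of k nearest
        labels, counts = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(counts)])       # majority vote
    return np.array(preds)
```

Here each row of `X_train` would be a feature vector (morphology, texture, intensity) and each label one of the three cervix classes.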
We need assessment metrics to measure the efficiency of the classifiers: the CAD framework must work well enough to compensate for the shortage of experts. Several metrics are available, such as accuracy, sensitivity, and specificity. Accuracy is the total number of true positive and true negative points over the total number of target points; sensitivity indicates the percentage of at-risk patients correctly detected, and specificity the percentage of patients correctly identified as not at risk. Based on these metrics we can measure the success of the CAD system. Performance is generally tested on unseen (test) data to check whether the model trained on the data set is overfitting or underfitting.
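These three metrics follow directly from the confusion-matrix counts:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts.

    Sensitivity: fraction of truly at-risk patients flagged, tp / (tp + fn).
    Specificity: fraction of not-at-risk patients cleared, tn / (tn + fp).
    """
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```

For a screening task, sensitivity matters most: a missed at-risk patient (false negative) is far more costly than a false alarm.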
We use a central cloud repository to store the latest image of the cervix lesion and the diagnostic outcome, which is passed to healthcare staff so they can recommend the procedures to be followed. Women in developed countries are generally able to keep track of their health, but in underdeveloped areas it is nearly impossible to do so on a regular basis. To reduce this disparity, the proposed IoMT‐based healthcare system lets women in underdeveloped countries connect to a medical team even from home. If any situation arises, the health services can consult their data in the central database and suggest visiting the hospital for a recheck or treatment. The medical staff can also inform the doctor or pathologist in good time so that no miscommunication occurs. This procedure not only benefits people facing difficulties in developing countries but also helps women in developed countries save time.
The major challenges in real‐time healthcare applications are privacy, scalability, and compliance when transferring patients' data from one network to another. Another major challenge for IoMT is allocating resources such as doctors and midwives to the patient immediately and locating everything using tracking devices. Cloud storage of cervical cancer patient data must have the following features (Figure 8.5).
IoMT checks the immediate availability of doctors, nurses, and a nearby healthcare center using databases on the cloud. The devices connected to the cloud are analyzed, and on that basis monitoring schedules and efficient utilization of equipment are maintained. Staff scheduling and resource allocation schemes ensure resource provision at the right time and place, thereby reducing resource conflicts.
A patient registers by visiting a nearby healthcare center or through a personal device registered on IoMT. As soon as a patient joins the network, she can be detected by the controller of the system, for example a software‐defined networking (SDN) controller. The controller provides features such as security, scalability, and flexibility for transferring the patient's data to cloud storage, and it also creates a virtual network to maintain patient confidentiality. Monitoring patients in remote areas requires stable wired or wireless communication to acquire images. A doctor is assigned based on availability, and follow‐up schedules, diagnoses, and reports are sent to the concerned parties through a management and scheduling subsystem.
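The availability‐based assignment step can be sketched as below. The class and field names are assumptions made for illustration; they are not part of the original system design.

```python
from dataclasses import dataclass

# Hypothetical sketch of availability-based doctor assignment;
# all names here are illustrative, not from the proposed system.
@dataclass
class Doctor:
    name: str
    available: bool = True

@dataclass
class Patient:
    patient_id: str
    assigned_doctor: str = ""

def assign_doctor(patient, doctors):
    """Assign the first available doctor; otherwise leave the patient queued."""
    for doc in doctors:
        if doc.available:
            doc.available = False          # mark the doctor as busy
            patient.assigned_doctor = doc.name
            return patient
    return patient                         # no doctor free; patient stays queued

doctors = [Doctor("Dr. A", available=False), Doctor("Dr. B")]
p = assign_doctor(Patient("P-001"), doctors)
print(p.assigned_doctor)
```

A production scheduler would also weigh location, specialty, and load, but the core decision remains a lookup against live availability records in the cloud database.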
The data can be obtained from different remote locations and need to be shared among patients, doctors, midwives, and laboratories. Hospitals are the main center for data generation related to diagnosis and medical reports/records of patients. The data may be shared among heterogeneous environments: network to network, machine to network, cloud to single machine, or vice versa.
The IoMT system will also monitor connected medical devices and generate an alert signal when a device malfunctions. On receiving a malfunction indication, medical staff will be allocated to take precautions and fix the equipment quickly, minimizing or even avoiding downtime. If a connected device malfunctions, the patient's data will be transferred to the nearest center for monitoring. Remote monitoring also helps track patients' health and deliver timely instructions for the next course of action, and in‐time notification helps address equipment faults before a device is damaged beyond repair.
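One simple way such alerting is commonly implemented is a heartbeat check: a device that has not reported within a timeout is flagged for maintenance. This is a sketch under that assumption; the device names, timestamps, and timeout are illustrative.

```python
# Heartbeat-based malfunction detection sketch; device names,
# timestamps, and the timeout value are illustrative assumptions.
def check_devices(last_heartbeat, now, timeout=60):
    """Return devices whose last heartbeat is older than `timeout` seconds."""
    return sorted(dev for dev, t in last_heartbeat.items() if now - t > timeout)

heartbeats = {"colposcope-1": 1000, "monitor-2": 1055, "camera-3": 910}
alerts = check_devices(heartbeats, now=1060, timeout=60)
print(alerts)   # devices flagged for a maintenance alert
```

Flagged devices would trigger the staff‐allocation step described above, and patients attached to them would be redirected to the nearest functioning center.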
In the current scenario, we require a reasonable trade‐off between the burden and benefits of screening. Cytology tests and colposcopies increase the burden at both ends: for patients as well as experts. A preliminary diagnosis may therefore help women decide whether they should go for a colposcopy. It may also respect women's sensitivities by providing an automated, low‐cost, secure, and portable device in nearby healthcare centers that can be handled by midwives. The recommender system in the proposed methodology may advise the patient on appropriate screening intervals, if required, and on future actions to reduce the incidence rate. If CIN I is diagnosed, the standard recommendation is monitoring for progression, and remote monitoring may reduce the incidence of CIN II and III. Many developing countries have no cervical cancer screening programs due to a lack of medical professionals and other resources. The proposed methodology will help governments and policy makers run cervical cancer awareness programs, which will help reduce the incidence of cervical cancer.