The Investigation and Analysis of Village Remains in Jinzhong Prefecture of Shanxi Province, China
Shanxi Province has a long history in China, and the historical character of Jinzhong Prefecture in Shanxi Province is especially prominent. This research carried out extensive field investigation and analyzed a large number of documents to summarize the formation and characteristics of villages in Jinzhong Prefecture, where the remains of many areas have not yet been systematically surveyed and analyzed. The study found that villages formed for natural, cultural, traffic-related, and economic reasons, chiefly access to water and mountains and the highly developed merchant culture of the Ming and Qing Dynasties. By analyzing the evolution of each period, the characteristics and remains of the existing villages are explained in detail. The relics mainly include courtyards, fortresses, and exchange shops. This study can provide systematic guidance for the future protection of village remains.
Early Recognition and Grading of Cataract Using a Combined Log Gabor/Discrete Wavelet Transform with ANN and SVM
Eyes are considered the most sensitive and important organs of the human body; thus, any eye disorder affects the patient in all aspects of life. Cataract is one such disorder, leading to blindness if not treated correctly and quickly. This paper demonstrates a model for the automatic detection, classification, and grading of cataracts based on image processing techniques and artificial intelligence. The proposed system is developed to ease the cataract diagnosis process for both ophthalmologists and patients. The wavelet transform combined with the 2D Log Gabor wavelet transform was used as the feature extraction technique for a dataset of 120 eye images, followed by a classification process that sorted the image set into three classes: normal, early, and advanced stage. The two classifiers, the support vector machine (SVM) and the artificial neural network (ANN), were compared on the same dataset of 120 eye images. SVM gave better results than ANN, with a success rate of 96.8% accuracy versus 92.3% for ANN.
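The wavelet stage above can be illustrated with a one-level 2D Haar decomposition, the simplest wavelet transform; this is a generic sketch of wavelet sub-band extraction, not the paper's Log Gabor pipeline, and `haar_2d` is an illustrative helper name.

```python
# One-level 2D Haar wavelet decomposition, a minimal stand-in for the
# wavelet feature-extraction stage (the paper additionally uses a
# 2D Log Gabor transform, which is omitted here).

def haar_2d(image):
    """Split an even-sized 2D array into LL, LH, HL, HH sub-bands."""
    rows, cols = len(image), len(image[0])
    # Transform rows: pairwise averages (low-pass) and differences (high-pass).
    row_lo, row_hi = [], []
    for r in range(rows):
        row_lo.append([(image[r][2*c] + image[r][2*c+1]) / 2 for c in range(cols // 2)])
        row_hi.append([(image[r][2*c] - image[r][2*c+1]) / 2 for c in range(cols // 2)])
    # Transform columns of each half to obtain the four sub-bands.
    def col_pass(mat):
        lo = [[(mat[2*r][c] + mat[2*r+1][c]) / 2 for c in range(len(mat[0]))]
              for r in range(rows // 2)]
        hi = [[(mat[2*r][c] - mat[2*r+1][c]) / 2 for c in range(len(mat[0]))]
              for r in range(rows // 2)]
        return lo, hi
    LL, LH = col_pass(row_lo)
    HL, HH = col_pass(row_hi)
    return LL, LH, HL, HH

# A flat 4x4 patch puts all of its energy in the LL (approximation) band.
LL, LH, HL, HH = haar_2d([[8, 8, 8, 8]] * 4)
```

Classifier features are then statistics computed over these sub-bands rather than over raw pixels.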
Detection of Keypoint in Press-Fit Curve Based on Convolutional Neural Network
The quality of press-fit assembly is closely related to the reliability and safety of a product. This paper proposes a keypoint detection method based on a convolutional neural network (CNN) to improve the accuracy of keypoint detection in press-fit curves, providing an auxiliary basis for judging press-fit assembly quality. A press-fit curve plots press-fit force against displacement; both the force data and the displacement data are time series. Therefore, a one-dimensional CNN is used to process the curve. After the acquired press-fit data are filtered, a multi-layer one-dimensional CNN automatically learns press-fit curve features, which are then fed to a multi-layer perceptron that outputs the keypoints of the curve. We trained the CNN model on data from press-fit assembly equipment in an actual production process and evaluated detection performance on different data from the same equipment. Compared with existing research results, detection performance was significantly improved. This method can provide a reliable basis for judging press-fit quality.
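The core building block, a one-dimensional convolution over the force/displacement series, can be sketched in a few lines; the `conv1d` helper and the difference kernel below are illustrative choices, not the paper's trained network.

```python
# A minimal 1D convolution layer in plain Python, illustrating how a
# force-over-displacement time series is filtered before keypoint
# regression (a sketch of the building block, not the trained model).

def conv1d(signal, kernel, stride=1):
    """Valid-mode 1D convolution (cross-correlation) over a sequence."""
    k = len(kernel)
    out = []
    for start in range(0, len(signal) - k + 1, stride):
        out.append(sum(signal[start + i] * kernel[i] for i in range(k)))
    return out

def relu(xs):
    return [max(0.0, x) for x in xs]

# A difference kernel responds strongly where the press-fit force jumps,
# which is where curve keypoints tend to sit.
force = [0, 0, 0, 5, 10, 10, 10]
response = relu(conv1d(force, [-1.0, 0.0, 1.0]))
```

In a real network many such kernels are learned from data and stacked in layers before the perceptron head.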
Synthetic Aperture Radar Remote Sensing Classification Using the Bag of Visual Words Model to Land Cover Studies
Classification of high-resolution polarimetric Synthetic Aperture Radar (PolSAR) images plays an important role in land cover and land use management. Recently, classification algorithms based on the Bag of Visual Words (BOVW) model have attracted significant interest among researchers both inside and outside the field of remote sensing. In this paper, the BOVW model with pixel-based low-level features has been implemented to classify a subset of a San Francisco Bay PolSAR image acquired by RADARSAT-2 in C-band. We used a segment-based decision-making strategy and compared the result with that of a traditional Support Vector Machine (SVM) classifier. The 90.95% overall classification accuracy of the proposed algorithm shows that it is comparable with state-of-the-art methods. In addition to increasing classification accuracy, the proposed method decreases the undesirable speckle effect of SAR images.
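The BOVW encoding step that such classifiers rely on can be sketched as nearest-codeword quantization followed by a histogram; the 2D descriptors and two-word codebook below are invented for illustration.

```python
# A toy Bag-of-Visual-Words encoder: each local descriptor is assigned to
# its nearest codeword, and the image (or segment) is summarized as a
# codeword histogram. A sketch of the generic BOVW pipeline only.

def nearest(codebook, desc):
    """Index of the codeword closest to `desc` in squared Euclidean distance."""
    best, best_d = 0, float("inf")
    for i, word in enumerate(codebook):
        d = sum((a - b) ** 2 for a, b in zip(word, desc))
        if d < best_d:
            best, best_d = i, d
    return best

def bovw_histogram(codebook, descriptors):
    hist = [0] * len(codebook)
    for desc in descriptors:
        hist[nearest(codebook, desc)] += 1
    return hist

codebook = [(0.0, 0.0), (1.0, 1.0)]                 # 2 visual words
descriptors = [(0.1, 0.2), (0.9, 1.1), (1.2, 0.8)]  # local features of one segment
hist = bovw_histogram(codebook, descriptors)
```

The histogram then serves as the fixed-length feature vector fed to the classifier.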
A Fuzzy-Rough Feature Selection Based on Binary Shuffled Frog Leaping Algorithm
Feature selection and attribute reduction are crucial problems and widely used techniques in machine learning, data mining, and pattern recognition, employed to overcome the well-known Curse of Dimensionality. This paper presents a feature selection method that efficiently carries out attribute reduction, selecting the most informative features of a dataset. It consists of two components: 1) a measure for feature subset evaluation, and 2) a search strategy. As the evaluation measure, we employ the fuzzy-rough dependency degree (FRDD) of the lower approximation-based fuzzy-rough feature selection (L-FRFS) because of its effectiveness in feature selection. As the search strategy, a modified binary shuffled frog leaping algorithm (B-SFLA) is proposed. The feature selection method is obtained by hybridizing the B-SFLA with the FRDD. Nine classifiers were employed to compare the proposed approach with several existing methods over twenty-two datasets from the UCI repository, including nine high-dimensional and large ones. The experimental results demonstrate that the B-SFLA approach significantly outperforms other metaheuristic methods in terms of the number of selected features and the classification accuracy.
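A heavily simplified sketch of a binary shuffled frog leaping search follows; the `fitness` function is a made-up placeholder standing in for the fuzzy-rough dependency degree, and the leap rule is reduced to a probabilistic bit copy, so this is not the authors' B-SFLA.

```python
import random

# Toy binary shuffled frog leaping loop for feature selection. Each
# "frog" is a bit mask over features; frogs are sorted, dealt into
# memeplexes, and the worst frog in each memeplex leaps toward a better one.

def fitness(frog):
    # Placeholder score: pretend features 0 and 2 are informative and
    # penalize large subsets. The paper would use the FRDD measure here.
    return frog[0] + frog[2] - 0.1 * sum(frog)

def bsfla(n_features=5, n_frogs=8, n_memeplexes=2, iters=30, seed=1):
    rng = random.Random(seed)
    frogs = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(n_frogs)]
    for _ in range(iters):
        frogs.sort(key=fitness, reverse=True)   # shuffle step: re-rank all frogs
        global_best = frogs[0]
        for m in range(n_memeplexes):
            memeplex = frogs[m::n_memeplexes]   # deal sorted frogs round-robin
            local_best, worst = memeplex[0], memeplex[-1]
            # Leap: copy bits from the local (then global) best with
            # probability 0.5; keep the move only if fitness improves.
            for guide in (local_best, global_best):
                candidate = [g if rng.random() < 0.5 else w
                             for g, w in zip(guide, worst)]
                if fitness(candidate) > fitness(worst):
                    worst[:] = candidate
                    break
    best = max(frogs, key=fitness)
    return [i for i, bit in enumerate(best) if bit]

selected = bsfla()   # indices of the selected features
```

Swapping the placeholder score for a real subset-evaluation measure (such as the FRDD) turns the same loop into a feature selector.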
Relevant LMA Features for Human Motion Recognition
Motion recognition from videos is a very complex task due to the high variability of motions. This paper describes the challenges of human motion recognition, especially the motion representation step with relevant features. Our descriptor vector is inspired by the Laban Movement Analysis method. We derive discriminative features using the Random Forest algorithm in order to remove redundant features and make learning algorithms operate faster and more effectively. We validate our method on the MSRC-12 and UTKinect datasets.
Automated Heart Sound Classification from Unsegmented Phonocardiogram Signals Using Time Frequency Features
Cardiologists perform cardiac auscultation to detect abnormalities in heart sounds. Since accurate auscultation is a crucial first step in screening patients with heart diseases, there is a need for computer-aided detection/diagnosis (CAD) systems that assist cardiologists in interpreting heart sounds and provide second opinions. In this paper, different algorithms are implemented for automated heart sound classification using unsegmented phonocardiogram (PCG) signals. The support vector machine (SVM), artificial neural network (ANN), and Cartesian genetic programming evolved artificial neural network (CGPANN) are explored in this study without the application of any segmentation algorithm. The signals are first pre-processed to remove unwanted frequencies. Both time- and frequency-domain features are then extracted to train the different models. The algorithms are tested in multiple scenarios, and their strengths and weaknesses are discussed. Results indicate that SVM outperforms the rest with an accuracy of 73.64%.
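Typical time- and frequency-domain features of the kind mentioned above can be computed directly from a signal frame; the features below (mean, energy, zero crossings, dominant DFT frequency) are generic illustrations, not the paper's exact feature set.

```python
import cmath
import math

# Generic time- and frequency-domain features for one PCG frame.

def time_features(frame):
    n = len(frame)
    mean = sum(frame) / n
    energy = sum(x * x for x in frame) / n
    zero_crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return mean, energy, zero_crossings

def dominant_frequency(frame, sample_rate):
    """Frequency (Hz) of the largest DFT magnitude below Nyquist."""
    n = len(frame)
    mags = []
    for k in range(1, n // 2):
        coef = sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                   for t in range(n))
        mags.append((abs(coef), k))
    _, k = max(mags)
    return k * sample_rate / n

# A pure 5 Hz tone sampled at 100 Hz for one second.
fs = 100
frame = [math.sin(2 * math.pi * 5 * t / fs) for t in range(fs)]
mean, energy, zc = time_features(frame)
peak = dominant_frequency(frame, fs)
```

Such per-frame features, stacked over a recording, form the training vectors for the compared classifiers.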
A Speeded up Robust Scale-Invariant Feature Transform Currency Recognition Algorithm
Currencies around the world look very different from one another; for instance, the size, color, and pattern of the paper differ. With the development of modern banking services, automatic methods for paper currency recognition have become important in many applications, such as vending machines. One phase of a currency recognition architecture is feature detection and description. Many algorithms are used for this phase, but they still have disadvantages. This paper proposes a feature detection algorithm that merges the advantages of the existing SIFT and SURF algorithms, which we call the Speeded up Robust Scale-Invariant Feature Transform (SR-SIFT) algorithm. The proposed SR-SIFT algorithm overcomes the problems of both SIFT and SURF, aiming to speed up SIFT feature detection while keeping it robust. Simulation results demonstrate that SR-SIFT decreases the average response time, especially for small and minimal numbers of best keypoints, and increases the distribution of the best keypoints over the surface of the currency. Furthermore, the proposed algorithm places a higher proportion of true best keypoints inside the currency edge than the other two algorithms.
Hybrid Anomaly Detection Using Decision Tree and Support Vector Machine
Intrusion detection systems (IDS) are main components of network security; they analyze network events to detect intrusions. An IDS is designed by training on normal traffic data or attack data, and machine learning methods are among the best ways to design one. In the method presented in this article, the pruning algorithm of the C5.0 decision tree is used to reduce the features of the traffic data, and the IDS is trained with the least squares support vector machine (LS-SVM). The remaining features are then ranked according to the predictor importance criterion, and the least important features are eliminated in order. The features remaining at this stage, which yield the highest accuracy in the LS-SVM, are selected as the final features. Compared to similar articles that have examined selected features in a least squares support vector machine model, the obtained features give better accuracy, true positive rate, and false positive rate. The results are tested on the UNSW-NB15 dataset.
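The ranking-and-elimination loop described above follows a generic backward-elimination pattern, sketched here with placeholder importance and evaluation functions rather than the paper's C5.0/LS-SVM pipeline.

```python
# Generic backward elimination: rank features by an importance score,
# drop the least important one at a time, and keep the subset that
# maximizes a validation score. Both scoring functions are placeholders.

def backward_eliminate(features, importance, evaluate):
    ranked = sorted(features, key=importance, reverse=True)
    best_subset, best_score = list(ranked), evaluate(ranked)
    while len(ranked) > 1:
        ranked = ranked[:-1]              # drop the least important feature
        score = evaluate(ranked)
        if score >= best_score:
            best_subset, best_score = list(ranked), score
    return best_subset, best_score

# Toy setup: features "a".."d"; the evaluator rewards {"a", "b"} and
# mildly penalizes subset size (standing in for classifier accuracy).
importance = {"a": 0.9, "b": 0.7, "c": 0.2, "d": 0.1}.get
evaluate = lambda subset: len({"a", "b"} & set(subset)) - 0.05 * len(subset)
subset, score = backward_eliminate(list("abcd"), importance, evaluate)
```

In the paper's setting, `importance` would come from the predictor importance criterion and `evaluate` from LS-SVM accuracy on held-out traffic.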
Automatic Landmark Selection Based on Feature Clustering for Visual Autonomous Unmanned Aerial Vehicle Navigation
The selection of specific landmarks for an Unmanned Aerial Vehicle's visual navigation system based on automatic landmark recognition has a significant influence on the precision of the system's estimated position. At the same time, manual selection of the landmarks does not guarantee a high recognition rate, which would also result in poor precision. This work aims to develop an automatic landmark selection method that takes an image of the flight area and identifies the best landmarks to be recognized by the visual navigation landmark recognition system. The criterion for selecting a landmark is based on features detected by ORB or AKAZE and on edge information in each possible landmark. Results have shown that the disposition of possible landmarks is quite different from the
Non-Circular Carbon Fiber Reinforced Polymers Chainring Failure Analysis
This paper presents a finite element model to simulate teeth failure of a non-circular composite chainring. The model consists of the chainring and a part of the chain; to reduce its size, only the first 11 rollers are simulated. To validate the model, it is first applied to a circular aluminum chainring, and the evolution of stress in the teeth is compared with the literature. Then, the effect of the non-circular shape is studied through three different loading positions, and the strength and failure scenario of the non-circular composite chainring are investigated. Moreover, two composite lay-ups are proposed to observe the influence of the stacking. Results show that composite material can be used, but the lay-up has a large influence on the strength. Finally, the loading position has no influence on the first composite failure, which always occurs in the first tooth.
Myanmar Character Recognition Using Eight Direction Chain Code Frequency Features
Character recognition is the process of converting a text image file into an editable and searchable text file. Feature extraction is the heart of any character recognition system, and the recognition rate may be low or high depending on the extracted features. In the proposed paper, 25 features per character are used. There are three basic steps of character recognition: character segmentation, feature extraction, and classification. In the segmentation step, a horizontal cropping method is used for line segmentation and a vertical cropping method for character segmentation. In the feature extraction step, features are extracted in two ways. First, 8 features are extracted from the entire input character using eight-direction chain code frequency extraction. Second, the input character is divided into 16 blocks; for each block, 8 feature values are obtained through the same eight-direction chain code frequency extraction method, and their sum is defined as a single feature for that block, giving 16 further features. We also use the number-of-holes feature to cluster similar characters. With these features, almost all common Myanmar characters in various font sizes can be recognized. All 25 features are used in both the training and testing parts. In the classification step, characters are classified by matching all the features of the input character against the already trained features.
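The eight-direction chain code frequency idea can be sketched for a toy contour; the direction numbering below is one common convention, and the square contour is an invented example, not a Myanmar character.

```python
# Eight-direction chain code frequency extraction for a pixel contour,
# the core of the feature scheme described above.

# Directions 0..7: E, NE, N, NW, W, SW, S, SE (y grows downward, so
# "N" is dy = -1 in image coordinates).
MOVES = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
         (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def chain_code_frequencies(contour):
    """Return an 8-bin histogram of step directions along a contour."""
    freq = [0] * 8
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        freq[MOVES[(x1 - x0, y1 - y0)]] += 1
    return freq

# A closed 2x2 square traced clockwise in image coordinates.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
freq = chain_code_frequencies(square)
```

Applying the same histogram to each of the 16 blocks and summing per block yields the additional 16 features the abstract describes.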
The Capacity of Mel Frequency Cepstral Coefficients for Speech Recognition
Speech recognition makes an important contribution to new technologies in human-computer interaction, and today there is a growing need to employ speech technology in daily life and business activities. However, speech recognition is a challenging task that requires several stages before the desired output is obtained. Among the components of automatic speech recognition (ASR) is the feature extraction process, which parameterizes the speech signal to produce the corresponding feature vectors, aiming to approximate the linguistic content conveyed by the input signal. In the speech processing field there are several methods to extract speech features, but Mel Frequency Cepstral Coefficients (MFCC) is the most popular technique. It has long been observed that MFCC is dominantly used in well-known recognizers such as the Carnegie Mellon University (CMU) Sphinx and the Hidden Markov Model Toolkit (HTK). Hence, this paper focuses on the MFCC method as the standard choice to identify the different speech segments and obtain the language phonemes for further training and decoding steps. Owing to its good performance, previous studies show that MFCC dominates Arabic ASR research. In this paper, we demonstrate MFCC and the intermediate steps performed to obtain these coefficients using the HTK toolkit.
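The mel scale at the heart of MFCC can be shown with the widely used Hz-to-mel formula and its inverse (HTK's natural-log form, 1127 ln(1 + f/700), is the same mapping); the 8 kHz band edge and the filter count below are arbitrary illustrative choices.

```python
import math

# The mel scale underlying MFCC: mapping frequency in Hz to perceptual
# mel units and back.

def hz_to_mel(f):
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Mel filter banks place filter centers evenly in mel, not in Hz, so low
# frequencies get finer resolution than high ones.
centers_mel = [hz_to_mel(0) + i * (hz_to_mel(8000) - hz_to_mel(0)) / 5
               for i in range(6)]
centers_hz = [mel_to_hz(m) for m in centers_mel]
```

The growing gaps between successive `centers_hz` values are exactly why MFCC emphasizes the perceptually important low-frequency region.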
Learning to Recommend with Negative Ratings Based on Factorization Machine
Rating prediction is an important problem for recommender systems: the task is to predict the rating a user would give to an item. Most existing algorithms for the task ignore the effect of the negative ratings users give to items, yet negative ratings have a significant impact on users' purchasing decisions in practice. In this paper, we present a rating prediction algorithm based on factorization machines that considers the effect of negative ratings, inspired by Loss Aversion theory. We develop a concave and a convex negative disgust function to evaluate the negative ratings, respectively. Experiments are conducted on the MovieLens dataset, and the results demonstrate the effectiveness of the proposed methods by comparison with four other state-of-the-art approaches. The negative ratings prove highly important to the accuracy of rating prediction.
K-Means Based Matching Algorithm for Multi-Resolution Feature Descriptors
Matching high-dimensional features between images is computationally expensive for exhaustive search approaches in computer vision. Although the dimension of the feature can be reduced by exploiting prior knowledge of the homography, matching accuracy may degrade as a tradeoff. In this paper, we present a feature matching method based on the k-means algorithm that reduces the matching cost and matches the features between images without relying on a simplified geometric assumption. Experimental results show that the proposed method outperforms previous linear exhaustive search approaches in terms of the inlier ratio of matched pairs.
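The cluster-then-match idea can be sketched with a tiny k-means in plain Python: reference descriptors are clustered once, and each query descriptor searches only its nearest cluster instead of the full set. The 2D points below are invented toys, and this is a sketch of the general strategy, not the paper's method.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two descriptors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm; returns final centers and clusters."""
    centers = random.Random(seed).sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centers[i]))].append(p)
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

def match(query, centers, clusters):
    """Nearest reference descriptor, searched only in the closest cluster."""
    i = min(range(len(centers)), key=lambda j: dist2(query, centers[j]))
    return min(clusters[i], key=lambda p: dist2(query, p))

refs = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
centers, clusters = kmeans(refs, 2)
best = match((4.8, 5.2), centers, clusters)
```

With k clusters of roughly equal size, each query compares against about n/k descriptors plus k centers instead of all n, which is where the cost saving comes from.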
High-Fidelity 1D Dynamic Model of a Hydraulic Servo Valve Using 3D Computational Fluid Dynamics and Electromagnetic Finite Element Analysis
The dynamic performance of a 4-way solenoid operated hydraulic spool valve has been analyzed by means of a one-dimensional modeling approach capturing flow, magnetic and fluid forces, valve inertia forces, fluid compressibility, and damping. Increased model accuracy was achieved by analyzing the detailed three-dimensional electromagnetic behavior of the solenoids and flow behavior through the spool valve body for a set of relevant operating conditions, thereby allowing the accurate mapping of flow and magnetic forces on the moving valve body, in lieu of representing the respective forces by lower-order models or by means of simplistic textbook correlations. The resulting high-fidelity one-dimensional model provided the basis for specific and timely design modification eliminating experimentally observed valve oscillations.
Wavelet-Based ECG Signal Analysis and Classification
This paper presents the processing and analysis of ECG signals. The study is based on the wavelet transform and uses the MATLAB environment exclusively. It includes removal of baseline wander and further de-noising through the wavelet transform; metrics such as the signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and mean squared error (MSE) are used to assess the efficiency of the de-noising techniques. Feature extraction is subsequently performed, whereby signal features such as heart rate and rise and fall levels are extracted, and the QRS complex is detected, which helps in classifying the ECG signal. Classification is the last step of the analysis, and the signals are successfully classified as normal rhythm or abnormal rhythm. The final result proves the adequacy of the wavelet transform for the analysis of ECG signals.
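The three de-noising metrics named above have standard definitions, sketched here for a generic reference/estimate pair (PSNR is written with the signal's peak amplitude; exact conventions vary slightly across texts).

```python
import math

# Standard de-noising quality metrics for a clean reference signal and
# its de-noised estimate, all in the usual dB scale where applicable.

def mse(ref, est):
    return sum((r - e) ** 2 for r, e in zip(ref, est)) / len(ref)

def snr_db(ref, est):
    signal_power = sum(r * r for r in ref) / len(ref)
    return 10.0 * math.log10(signal_power / mse(ref, est))

def psnr_db(ref, est):
    peak = max(abs(r) for r in ref)
    return 10.0 * math.log10(peak ** 2 / mse(ref, est))

ref = [0.0, 1.0, 2.0, 1.0, 0.0, -1.0, -2.0, -1.0]
noisy = [r + 0.1 for r in ref]   # constant offset standing in for noise
```

Since the peak power is never below the mean power, PSNR is always at least as large as SNR for the same residual.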
Terrain Classification for Ground Robots Based on Acoustic Features
The motivation of our work is to detect the different terrain types traversed by a robot based on acoustic data from the robot-terrain interaction. Different acoustic features and classifiers were investigated, such as the Mel-frequency cepstral coefficients and Gammatone-frequency cepstral coefficients for feature extraction, and the Gaussian mixture model and a feed-forward neural network for classification. We analyze the system's performance by comparing our proposed techniques with features surveyed from related works. We achieve precision and recall values between 87% and 100% per class, and an average accuracy of 95.2%. We also study the effect of varying the audio chunk size in the application phase of the models and find only a mild impact on performance.
An Approach Based on Statistics and Multi-Resolution Representation to Classify Mammograms
Breast cancer is one of the significant and continuing public health problems in the world. Early detection is very important to fight the disease, and mammography has been one of the most common and reliable methods to detect it in the early stages. However, reading mammograms is a difficult task, and computer-aided diagnosis (CAD) systems are needed to assist radiologists in providing both accurate and uniform evaluation of masses in mammograms. In this study, a multiresolution statistical method that classifies digitized mammograms as normal or abnormal is used to construct a CAD system. The mammogram images are represented by the wave atom transform, with the representation formed independently by certain groups of coefficients. The CAD system is designed by calculating statistical features from each group of coefficients, and classification is performed using a support vector machine (SVM).
Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite
Energy production optimization has traditionally been very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there is a large number of relevant variables to consider, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R and on web services built and deployed with different components of the Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service for easily building, deploying, and sharing predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools to analyze data and share insights. Our results show that the Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful for predicting hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity, and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R and different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile, and ARIMA) are presented, and results and performance metrics discussed.
Evaluation Framework for Investments in Rail Infrastructure Projects
Transport infrastructures are high-cost, long-term investments that serve as vital foundations for the operation of a region or nation and are essential to a country's or business's economic development and prosperity, improving well-being and generating jobs and income. The development of appropriate financing options is of key importance in the decision-making process in order to develop viable transport infrastructures. Transport infrastructure development has increasingly been shifting toward alternative methods of project financing, such as Public-Private Partnerships (PPPs) and hybrid forms. In this paper, a methodological decision-making framework based on the evaluation of the financial viability of transportation infrastructure under different financial schemes is presented. The framework leads to an assessment of financial viability obtained by analyzing various financing scenarios. To illustrate the application of the proposed methodology, a case study of a rail transport infrastructure financing scenario analysis in Greece is developed.
Forensic Speaker Verification in Noisy Environmental by Enhancing the Speech Signal Using ICA Approach
We propose a system robust to real environmental noise and channel mismatch for forensic speaker verification. The method suppresses various types of real environmental noise using the independent component analysis (ICA) algorithm. The enhanced speech signal is then passed to mel frequency cepstral coefficient (MFCC) extraction, with or without feature warping, to capture the essential characteristics of the speech signal. Channel effects are reduced using an intermediate vector (i-vector) and probabilistic linear discriminant analysis (PLDA) approach for classification. The proposed algorithm is evaluated on an Australian forensic voice comparison database combined with car, street, and home noises from QUT-NOISE at signal-to-noise ratios (SNR) ranging from -10 dB to 10 dB. Experimental results indicate that MFCC feature warping with ICA achieves reductions in equal error rate of about 48.22%, 44.66%, and 50.07% over MFCC feature warping alone when the test speech signals are corrupted with random sessions of street, car, and home noise, respectively, at -10 dB SNR.
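The equal error rate used as the metric above can be computed by sweeping a decision threshold until the false acceptance and false rejection rates meet; the genuine/impostor score lists below are invented toys.

```python
# Equal error rate (EER): sweep a threshold over all observed scores and
# take the operating point where FAR and FRR are closest.

def equal_error_rate(genuine, impostor):
    """Return (eer, threshold) where FAR and FRR are closest."""
    best_gap, eer, thr_at = 2.0, 1.0, None
    for thr in sorted(genuine + impostor):
        far = sum(s >= thr for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < thr for s in genuine) / len(genuine)     # false rejects
        if abs(far - frr) < best_gap:
            best_gap, eer, thr_at = abs(far - frr), (far + frr) / 2, thr
    return eer, thr_at

genuine = [0.9, 0.8, 0.75, 0.6, 0.4]    # same-speaker trial scores
impostor = [0.5, 0.35, 0.3, 0.2, 0.1]   # different-speaker trial scores
eer, thr = equal_error_rate(genuine, impostor)
```

A "48.22% reduction in EER" then simply means the EER of one system is 48.22% lower than the other's at this crossover point.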
Towards a Complete Automation Feature Recognition System for Sheet Metal Manufacturing
Sheet metal processing is automated, but the step from product models to production machine control still requires human intervention. This may cause time-consuming bottlenecks in the production process and increase the risk of human error. In this paper we present a system that automatically recognizes features from the CAD model of a sheet metal product. Using these features, the system produces a complete model of the particular sheet metal product, which is then used as input for the sheet metal processing machine. The system is currently implemented, capable of recognizing more than 11 of the most common sheet metal structural features, and the procedure is fully automated. This provides remarkable savings in production time and protects against human error. This paper presents the developed system architecture, the applied algorithms, and the system software implementation and testing.
Feature Selection and Predictive Modeling of Housing Data Using Random Forest
Predictive data analysis and modeling involving machine learning techniques becomes challenging in the presence of too many explanatory variables or features. Too many features in machine learning are known not only to slow algorithms down but also to decrease model prediction accuracy. This study uses a housing dataset with 79 quantitative and qualitative features that describe various aspects people consider while buying a new house. The Boruta algorithm, which supports feature selection using a wrapper approach built around random forest, is used in this study. This feature selection process leads to 49 confirmed features, which are then used to develop predictive random forest models. The study also explores five different data partitioning ratios, and their impact on model accuracy is captured using the coefficient of determination (r-squared) and the root mean square error (RMSE).
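The two reported accuracy metrics have standard definitions, sketched here in their generic form; the actual/predicted values below are invented toys.

```python
import math

# Coefficient of determination (r-squared) and root mean square error
# (RMSE) for any pair of actual/predicted value lists.

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def r_squared(actual, predicted):
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

actual = [3.0, 5.0, 7.0, 9.0]
predicted = [2.5, 5.5, 7.0, 8.5]
```

r-squared is unitless (1.0 for a perfect fit), while RMSE carries the units of the target, which is why studies typically report both.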
Data Quality Enhancement with String Length Distribution
Recently, collectable manufacturing data have been increasing rapidly. At the same time, mega recalls are becoming a serious social problem. Under such circumstances, there is an increasing need to prevent mega recalls through defect analysis, such as root cause analysis and anomaly detection, utilizing manufacturing data. However, the time taken to classify strings in manufacturing data by traditional methods is too long to meet the requirement of quick defect analysis. Therefore, we present the String Length Distribution Classification (SLDC) method to classify strings correctly in a short time. This method learns character features, especially the string length distribution, from Product IDs and Machine IDs in the BOM and asset list. By applying the proposal to strings in actual manufacturing data, we verified that the classification time can be reduced by 80%. As a result, it can be estimated that the requirement of quick defect analysis can be fulfilled.
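The string-length-distribution idea can be illustrated in a few lines: build a length histogram per known field and assign new strings to the most probable field. The field names and ID formats below are invented examples, not the paper's data.

```python
from collections import Counter

# Toy length-distribution classifier: each known field gets a histogram
# over string lengths, and a new string goes to the field whose
# distribution gives its length the highest probability.

def length_distribution(samples):
    counts = Counter(len(s) for s in samples)
    total = sum(counts.values())
    return {length: n / total for length, n in counts.items()}

def classify(string, field_distributions):
    return max(field_distributions,
               key=lambda field: field_distributions[field].get(len(string), 0.0))

fields = {
    "product_id": length_distribution(["P-10001", "P-10002", "P-20931"]),
    "machine_id": length_distribution(["M01", "M02", "M17"]),
}
label = classify("P-55555", fields)
```

Because only lengths are compared, classification is a dictionary lookup per field rather than a character-by-character pattern match, which is where the speedup comes from.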
The Modulation of Self-interest Instruction on the Fair-Proposing Behavior in Ultimatum Game
The ultimatum game is an experimental paradigm for studying human decision making. Two players, a proposer and a responder, split a fixed amount of money. According to traditional economic theory, the proposer should make offers as selfish as possible to maximize his or her own outcome. However, most evidence has shown that people choose fairer offers, and two hypotheses, fairness favoring and strategic concern, have been proposed to explain this. In the current study, we induced participants to be either selfish or altruistic, and manipulated the task variables of stake size (NT$100, 1,000, 10,000) and share size (40%, 30%, 20%, and 10% of the sum as selfish offers, and 60%, 70%, 80%, and 90% of the sum as altruistic offers) to examine the two hypotheses. The results showed that most proposers chose fairer offers with longer reaction times (RTs), whether choosing between fair and selfish offers or between fair and altruistic offers. However, proposers who received an explicit self-interest instruction chose more selfish offers, accompanied by longer RTs, when choosing between fair and selfish offers. The results therefore support the strategic concern hypothesis: proposers' previous choice of fair offers might result from fear of rejection by responders, and proposers become more self-interested when the fear of being rejected is eliminated.
sEMG Interface Design for Locomotion Identification
Surface electromyographic (sEMG) signals have the potential to identify human activities and intention, a potential further exploited to control artificial limbs using sEMG signals from the residual limbs of amputees. This paper deals with the development of a cost-efficient multichannel sEMG signal interface for research applications, along with the evaluation of a proposed class-dependent statistical approach to feature selection. The sEMG signal acquisition interface was developed using the Texas Instruments ADS1298, a front-end interface integrated circuit for ECG applications. The sEMG signal was recorded from two lower limb muscles for three locomotion modes: plane walking (PW), stair ascending (SA), and stair descending (SD). A class-dependent statistical approach is proposed for feature selection, and its performance is compared with 12 pre-existing feature vectors. To make the study more extensive, the performance of five different types of classifiers is compared. The outcome of the current work proves the suitability of the proposed feature selection algorithm for locomotion recognition compared to the existing feature vectors. The SVM classifier performed best among the compared classifiers, with an average recognition accuracy of 97.40%. Feature vector selection emerges as the most dominant factor affecting classification performance, accounting for 51.51% of the total variance in classification accuracy. The results demonstrate the potential of the developed sEMG signal acquisition interface together with the proposed feature selection algorithm.
An Adaptive Dimensionality Reduction Approach for Hyperspectral Imagery Semantic Interpretation
With the development of HyperSpectral Imagery (HSI) technology, the spectral resolution of HSI has become denser, resulting in a large number of spectral bands, high correlation between neighboring bands, and high data redundancy. Semantic interpretation is therefore a challenging task for HSI analysis due to the high dimensionality and the high correlation of the different spectral bands. This work presents a dimensionality reduction approach that overcomes these issues and improves the semantic interpretation of HSI. In order to preserve the spatial information, the Tensor Locality Preserving Projection (TLPP) is first applied to transform the original HSI. In the second step, knowledge is extracted based on the adjacency graph to describe the different pixels. From the transformation matrix obtained with TLPP, a weighted matrix is constructed to rank the different spectral bands based on their contribution score, and the relevant bands are adaptively selected based on this weighted matrix. The performance of the presented approach has been validated in several experiments, and the obtained results demonstrate its efficiency compared to various existing dimensionality reduction techniques. According to the experimental results, we conclude that this approach can adaptively select the relevant spectral bands, improving the semantic interpretation of HSI.
Diagnosis of Diabetes Using Computer Methods: Soft Computing Methods for Diabetes Detection Using Iris
Complementary and Alternative Medicine (CAM) techniques are quite popular and effective for chronic diseases. Iridology is a CAM technique, more than 150 years old, which analyzes patterns, tissue weakness, color, shape, structure, etc., for disease diagnosis. The objective of this paper is to validate the use of iridology for the diagnosis of diabetes. The suggested model was applied to a systemic disease with ocular effects. Data from 200 subjects, 100 each diabetic and non-diabetic, were evaluated. The complete procedure was kept very simple and free from the involvement of any iridologist. From the normalized iris, the region of interest was cropped, and 63 features were extracted using statistical measures, texture analysis, and the two-dimensional discrete wavelet transform. A comparison of the accuracies of six different classifiers is presented; the best result, 89.66% accuracy, was achieved by the random forest classifier.
Reduction of False Positives in Head-Shoulder Detection Based on Multi-Part Color Segmentation
This paper presents a method that utilizes figure-ground color segmentation to extract an effective global feature for false positive reduction in head-shoulder detection. Conventional detectors that rely on local features such as HOG, chosen for real-time operation, suffer from false positives. The color cue in an input image provides salient information about a global characteristic, which is necessary to alleviate the false positives of local-feature-based detectors. An effective approach that uses figure-ground color segmentation has previously been presented to reduce false positives in object detection. In this paper, an extended version of that approach is presented that adopts separate multi-part foregrounds instead of a single prior foreground and performs figure-ground color segmentation with each of them. The multi-part foregrounds include the parts of the head-shoulder shape and additional auxiliary foregrounds optimized by a search algorithm. A classifier is constructed with a feature consisting of the set of resulting segmentations. Experimental results show that the presented method can reject more false positives than the single prior shape-based classifier as well as detectors with local features. The improvement is possible because the presented approach can reject false positives that have the same colors in the head and shoulder foregrounds.