Map Matching Performance under Various Similarity Metrics for Heterogeneous Robot Teams
Aerial and ground robots offer different advantages in different missions. Aerial robots can move quickly and obtain a wider view of the area, but they cannot carry heavy payloads. Unmanned ground vehicles (UGVs), on the other hand, are slow-moving vehicles but can carry heavier payloads than unmanned aerial vehicles (UAVs). In this context, we investigate the performance of various similarity metrics for providing a common map for a Heterogeneous Robot Team (HRT) in complex environments. Using a lidar odometry and octree mapping technique, local 3D maps of the environment are gathered. To obtain a common map for the HRT, information-theoretic similarity metrics are exploited. All of these similarity metrics gave accurate results within acceptable simulation time and can be used in different types of applications. For a heterogeneous multi-robot team, these methods can be used to match different types of maps.
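One of the information-theoretic similarity metrics in common use, the Kullback-Leibler divergence, can be illustrated on normalized occupancy histograms of two robots' local maps. The sketch below is a minimal illustration under assumed toy data, not the paper's implementation; the grid values and smoothing constant are assumptions.

```python
import math

def kl_divergence(p, q, eps=1e-9):
    """Kullback-Leibler divergence D(P || Q) between two
    normalized occupancy histograms (lists of probabilities)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def normalize(hist):
    total = sum(hist)
    return [h / total for h in hist]

# Toy occupancy histograms from two robots' local maps (assumed values).
uav_map = normalize([4, 1, 0, 3, 2])
ugv_map = normalize([3, 2, 1, 3, 1])

d = kl_divergence(uav_map, ugv_map)   # lower value = more similar maps
```

A divergence of zero indicates identical occupancy distributions, so a matching threshold on this value can decide whether two local maps cover the same region.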
Dimension Free Rigid Point Set Registration in Linear Time
This paper proposes a rigid point set matching
algorithm in arbitrary dimensions based on the idea of symmetric
covariant function. A group of functions of the points in the set are
formulated using rigid invariants. Each of these functions computes a pair of correspondences from the given point set. Then the computed
correspondences are used to recover the unknown rigid transform
parameters. Each computed point can be geometrically interpreted as
the weighted mean center of the point set. The algorithm is compact,
fast, and dimension free without any optimization process. It either
computes the desired transform for noiseless data in linear time, or
fails quickly in exceptional cases. Experimental results for synthetic
data and 2D/3D real data are provided, which demonstrate potential
applications of the algorithm to a wide range of problems.
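As a hedged illustration of the final step, recovering the unknown rigid transform once correspondences are known, the 2D sketch below uses complex arithmetic. The symmetric-covariant-function construction itself is not reproduced here, and the sample transform is an assumption.

```python
import cmath
import math

def recover_rigid_2d(src_pair, dst_pair):
    """Recover the rotation and translation mapping two 2D source points
    onto two destination points (noiseless rigid motion).
    Points are encoded as complex numbers x + 1j*y."""
    s0, s1 = src_pair
    d0, d1 = dst_pair
    # The rotation aligns the difference vectors of the two pairs.
    rot = (d1 - d0) / (s1 - s0)
    rot /= abs(rot)                  # enforce a pure rotation (unit modulus)
    trans = d0 - rot * s0
    return rot, trans

# Ground-truth transform for the demo (assumed): rotate 30 deg, translate (2, -1).
theta = math.radians(30)
R = cmath.exp(1j * theta)
t = 2 - 1j
src = (1 + 2j, 4 + 0j)
dst = tuple(R * p + t for p in src)

rot, trans = recover_rigid_2d(src, dst)
```

For noiseless data this recovers the transform exactly in linear time; with noise, the quotient is no longer a unit-modulus rotation, which is why the normalization step (and, in higher dimensions, an SVD-based solve) is needed.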
Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data
In recent years, real-time spatial applications, like
location-aware services and traffic monitoring, have become more
and more important. Such applications result in dynamic environments
where data as well as queries are continuously moving. As a result,
there is a tremendous amount of real-time spatial data generated
every day. The growth of the data volume seems to outpace the
advance of our computing infrastructure. For instance, in real-time
spatial Big Data, users expect to receive the results of each query
within a short time period regardless of the load of the system. But with a huge amount of real-time spatial data
generated, the system performance degrades rapidly especially in
overload situations. To solve this problem, we propose the use of
data partitioning as an optimization technique. Traditional horizontal
and vertical partitioning can increase the performance of the system
and simplify data management. But they remain insufficient for
real-time spatial Big data; they can’t deal with real-time and
stream queries efficiently. Thus, in this paper, we propose a novel
data partitioning approach for real-time spatial Big data named
VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial
Big data). This contribution is an implementation of the Matching
algorithm for traditional vertical partitioning. We first find the optimal attribute sequence by using the Matching algorithm. Then,
we propose a new cost model for database partitioning that keeps the data amount of each partition balanced and provides parallel execution guarantees for the most frequent
queries. VPA-RTSBD aims to obtain a real-time partitioning scheme
and deals with stream data. It improves the performance of query
execution by maximizing the degree of parallel execution. This leads to QoS (Quality of Service) improvement in real-time spatial Big Data
especially with a huge volume of stream data. The performance of
our contribution is evaluated via simulation experiments. The results
show that the proposed algorithm is both efficient and scalable, and
that it outperforms comparable algorithms.
Quick Similarity Measurement of Binary Images via Probabilistic Pixel Mapping
In this paper we present a quick technique to measure the similarity between binary images. The technique is based on a probabilistic mapping approach and is fast because only a minute percentage of the image pixels need to be compared to measure the similarity, and not the whole image. We exploit the power of the Probabilistic Matching Model for Binary Images (PMMBI) to arrive at an estimate of the similarity. We show that the estimate is a good approximation of the actual value, and the quality of the estimate can be improved further with increased image mappings. Furthermore, the technique is image size invariant; the similarity between big images can be measured as fast as that for small images. Examples of trials conducted on real images are presented.
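The core sampling idea can be sketched as follows: estimate the fraction of matching pixels from a small random sample of positions rather than from the whole image. This is a simplified illustration of the principle, not the PMMBI model; the toy images and sample count are assumptions.

```python
import random

def sampled_similarity(img_a, img_b, n_samples=200, seed=0):
    """Estimate the fraction of matching pixels between two equally
    sized binary images by comparing only randomly sampled positions."""
    rng = random.Random(seed)
    h, w = len(img_a), len(img_a[0])
    hits = 0
    for _ in range(n_samples):
        r, c = rng.randrange(h), rng.randrange(w)
        hits += img_a[r][c] == img_b[r][c]
    return hits / n_samples

# Two toy 8x8 binary images differing in a single pixel (assumed data).
a = [[(r + c) % 2 for c in range(8)] for r in range(8)]
b = [row[:] for row in a]
b[0][0] ^= 1                     # flip one pixel
sim = sampled_similarity(a, b)   # estimate of the true similarity 63/64
```

Because the sample count is fixed, the cost is independent of image size, which mirrors the size-invariance property claimed in the abstract; increasing `n_samples` tightens the estimate.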
Definition and Core Components of the Role-Partner Allocation Problem in Collaborative Networks
In the current, constantly changing economic context, collaborative networks allow partners to undertake projects that would not be possible if attempted by them individually. These projects usually involve the performance of a group of tasks (named roles) that have to be distributed among the partners. Thus, an allocation/matching problem arises that will be referred to as the Role-Partner Allocation problem. In real life this situation is addressed by negotiation between partners in order to reach ad hoc agreements. Besides taking a long time and being hard work, such an approach is not recommended, as both historical evidence and economic analysis show. Instead, the allocation process should be automated by means of a centralized matching scheme. However, as a preliminary step to start the search for such a matching mechanism (or even the development of a new one), the problem and its core components must be specified. To this end, this paper establishes (i) the definition of the problem and its constraints; (ii) the key features of the involved elements (i.e., roles and partners); and (iii) how to create preference lists both for roles and partners. Only in this way will it be possible to conduct subsequent methodological research on the solution method.
A POX Controller Module to Prepare a List of Flow Header Information Extracted from SDN Traffic
Software Defined Networking (SDN) is a paradigm designed to facilitate controlling the network dynamically and with more agility. Network traffic is a set of flows, each of which contains a set of packets. In SDN, a matching process is performed on every packet coming to the network in the SDN switch. Only the headers of new packets will be forwarded to the SDN controller. In SDN terminology, the flow header fields are called tuples. Basically, these form a 5-tuple: the source and destination IP addresses, source and destination ports, and protocol number. This flow information is used to provide an overview of the network traffic. Our module extracts this 5-tuple together with the packet and flow counts and shows them as a list. This list can then be used as a first step toward detecting DDoS attacks; thus, the module can be considered the beginning stage of any flow-based DDoS detection method.
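The module's bookkeeping can be sketched in a framework-independent way: group packet headers by their 5-tuple and list each flow with its packet count. The record layout below is an assumption for illustration; the actual module consumes headers delivered to the POX controller.

```python
from collections import Counter

def flow_table(packets):
    """Count packets per 5-tuple flow.
    Returns (flow_list, number_of_flows, number_of_packets)."""
    counts = Counter(
        (p["src_ip"], p["dst_ip"], p["src_port"], p["dst_port"], p["proto"])
        for p in packets
    )
    flows = [(tup, n) for tup, n in counts.items()]
    return flows, len(counts), sum(counts.values())

# Toy packet headers (assumed structure): two flows, three packets.
pkts = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "src_port": 1234, "dst_port": 80, "proto": 6},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "src_port": 1234, "dst_port": 80, "proto": 6},
    {"src_ip": "10.0.0.3", "dst_ip": "10.0.0.2", "src_port": 5353, "dst_port": 53, "proto": 17},
]
flows, n_flows, n_packets = flow_table(pkts)
```

Per-flow packet counts of this kind are exactly the statistic a downstream flow-based DDoS detector would inspect for anomalies.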
A Sparse Representation Speech Denoising Method Based on Adapted Stopping Residue Error
A sparse representation speech denoising method based on an adapted stopping residue error is presented in this paper. Firstly, the cross-correlation between the clean speech spectrum and the noise spectrum is analyzed, and an estimation method is proposed. In the denoising method, an over-complete dictionary of the clean speech power spectrum is learned with the K-singular value decomposition (K-SVD) algorithm. In the sparse representation stage, the stopping residue error is adaptively set according to the estimated cross-correlation and the adjusted noise spectrum, and the orthogonal matching pursuit (OMP) approach is applied to reconstruct the clean speech spectrum from the noisy speech. Finally, the clean speech is re-synthesised via the inverse Fourier transform with the reconstructed speech spectrum and the noisy speech phase. The experimental results show that the proposed method outperforms conventional methods in terms of subjective and objective measures.
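The greedy pursuit step in the sparse representation stage can be sketched in simplified form: repeatedly pick the dictionary atom with the largest inner product with the residual and subtract its contribution until the residue error falls below a stopping threshold. The toy dictionary and fixed threshold below are illustrative assumptions; in the paper the stopping residue error is adapted from the estimated cross-correlation, and OMP (which re-fits all selected atoms jointly) is used rather than this plain variant.

```python
import math

def matching_pursuit(signal, atoms, stop_residue=1e-6, max_iter=10):
    """Plain (non-orthogonal) matching pursuit with unit-norm atoms.
    Returns the (atom_index, coefficient) picks and the final residual."""
    residual = list(signal)
    picks = []
    for _ in range(max_iter):
        # Pick the atom most correlated with the current residual.
        best = max(range(len(atoms)),
                   key=lambda i: abs(sum(a * r for a, r in zip(atoms[i], residual))))
        coef = sum(a * r for a, r in zip(atoms[best], residual))
        residual = [r - coef * a for r, a in zip(residual, atoms[best])]
        picks.append((best, coef))
        if math.sqrt(sum(r * r for r in residual)) < stop_residue:
            break
    return picks, residual

# Toy dictionary (assumed): two axis atoms in R^2 plus a diagonal atom.
s = 1 / math.sqrt(2)
atoms = [[1.0, 0.0], [0.0, 1.0], [s, s]]
signal = [3.0, 0.0]        # exactly 3 times atom 0
picks, residual = matching_pursuit(signal, atoms)
```

Adapting `stop_residue` to the estimated noise level is what prevents the pursuit from fitting noise atoms once the speech content is captured.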
Myanmar Character Recognition Using Eight Direction Chain Code Frequency Features
Character recognition is the process of converting a text image file into an editable and searchable text file. Feature extraction is the heart of any character recognition system, and the character recognition rate may be low or high depending on the extracted features. In the proposed paper, 25 features per character are used in character recognition. Basically, there are three steps of character recognition: character segmentation, feature extraction and classification. In the segmentation step, a horizontal cropping method is used for line segmentation and a vertical cropping method is used for character segmentation. In the feature extraction step, features are extracted in two ways. The first way is that 8 features are extracted from the entire input character using eight-direction chain code frequency extraction. The second way is that the input character is divided into 16 blocks. For each block, although 8 feature values are obtained through the eight-direction chain code frequency extraction method, we define the sum of these 8 feature values as one feature for that block. Therefore, 16 features are extracted from the 16 blocks in the second way. We use the number-of-holes feature to cluster similar characters. We can recognize almost all common Myanmar characters with various font sizes by using these features. All 25 features are used in both the training part and the testing part. In the classification step, the characters are classified by matching all the features of the input character with the already trained features of the characters.
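The eight-direction chain code frequency feature can be sketched as follows: given an ordered list of boundary pixels, encode each step between consecutive pixels as one of eight directions and histogram the codes. The direction numbering and the toy boundary are assumptions for illustration.

```python
# 8-direction chain code: map (dx, dy) steps between consecutive boundary
# pixels to direction codes 0..7 (assumed numbering: 0 = east, then
# counter-clockwise with y growing downward in image coordinates).
DIRS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
        (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def chain_code_histogram(boundary):
    """Frequency of each of the 8 chain-code directions along a closed
    boundary given as ordered (x, y) pixel coordinates."""
    hist = [0] * 8
    for (x0, y0), (x1, y1) in zip(boundary, boundary[1:] + boundary[:1]):
        hist[DIRS[(x1 - x0, y1 - y0)]] += 1
    return hist

# Toy boundary: a 2x2 pixel square traversed clockwise in image
# coordinates, so the steps are E, S, W, N (codes 0, 6, 4, 2 here).
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
hist = chain_code_histogram(square)
```

The 8-bin histogram is the per-character feature; summing the bins of each of the 16 sub-blocks yields the 16 block features described above.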
Hash Based Block Matching for Digital Evidence Image Files from Forensic Software Tools
Internet use, intelligent communication tools, and social media have all become an integral part of our daily life as a result of rapid developments in information technology. However, this widespread use increases crimes committed in the digital environment. Therefore, digital forensics, dealing with various crimes committed in the digital environment, has become an important research topic. It is in the research scope of digital forensics to investigate digital evidence such as computers, cell phones, hard disks, DVDs, etc., and to report whether it contains any crime-related elements. There are many software and hardware tools developed for use in the digital evidence acquisition process. Today, the most widely used digital evidence investigation tools are based on the principle of finding all the data on the digital evidence that matches specified criteria and presenting it to the investigator (e.g. text files, files starting with the letter A, etc.). Then, digital forensics experts carry out data analysis to figure out whether these data are related to a potential crime. Examination of a 1 TB hard disk may take hours or even days, depending on the expertise and experience of the examiner. In addition, because the process depends on the examiner’s experience, the overall result may vary between cases, and relevant evidence may be overlooked. In this study, a hash-based matching and digital evidence evaluation method is proposed, and it is aimed to automatically classify the evidence containing criminal elements, thereby shortening the time of the digital evidence examination process and preventing human errors.
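A minimal sketch of hash-based block matching: split the evidence image into fixed-size blocks, hash each block with a cryptographic hash, and flag blocks whose hashes appear in a database of known content. The block size and sample data are assumptions; a real tool would use the sector or cluster size of the evidence image.

```python
import hashlib

def block_hashes(data, block_size=16):
    """SHA-256 digest of each fixed-size block of the evidence data."""
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def match_blocks(evidence, known_hashes, block_size=16):
    """Indices of evidence blocks whose hash is in the known-hash set."""
    return [i for i, h in enumerate(block_hashes(evidence, block_size))
            if h in known_hashes]

# Toy known content and an evidence image containing one of its blocks.
known = block_hashes(b"ILLEGAL_CONTENT!")   # exactly one 16-byte block
evidence = b"harmless data..." + b"ILLEGAL_CONTENT!" + b"more harmless..."
hits = match_blocks(evidence, set(known))   # indices of flagged blocks
```

Because set lookup is constant-time, scanning scales linearly with evidence size, which is the source of the speed-up over manual keyword-style examination.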
K-Means Based Matching Algorithm for Multi-Resolution Feature Descriptors
Matching high dimensional features between images is computationally expensive for exhaustive search approaches in computer vision. Although the dimension of the feature can be reduced by simplifying prior knowledge of the homography, matching accuracy may degrade as a trade-off. In this paper, we present a feature matching method based on the k-means algorithm that reduces the matching cost and matches the features between images without resorting to a simplified geometric assumption. Experimental results show that the proposed method outperforms previous linear exhaustive search approaches in terms of the inlier ratio of matched pairs.
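The core idea, restricting the nearest-neighbour search to the cluster whose centre is closest to the query descriptor, can be sketched as follows. The tiny 2D descriptors, deterministic seeding, and fixed iteration count are illustrative assumptions, not the paper's configuration.

```python
def dist2(a, b):
    """Squared Euclidean distance between two descriptors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=10):
    """Plain k-means, deterministically seeded with evenly spaced points."""
    step = max(1, len(points) // k)
    centres = [list(points[i * step]) for i in range(k)]
    assign = [0] * len(points)
    for _ in range(iters):
        assign = [min(range(k), key=lambda c: dist2(p, centres[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centres[c] = [sum(d) / len(members) for d in zip(*members)]
    return centres, assign

def cluster_match(query, points, centres, assign):
    """Nearest neighbour of `query`, searched only inside the cluster
    whose centre is closest to `query` (instead of over all points)."""
    c = min(range(len(centres)), key=lambda i: dist2(query, centres[i]))
    candidates = [i for i, a in enumerate(assign) if a == c]
    return min(candidates, key=lambda i: dist2(query, points[i]))

# Toy 2D descriptors forming two well-separated groups (assumed data).
feats = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 5.0), (5.1, 4.9)]
centres, assign = kmeans(feats, k=2)
best = cluster_match((5.05, 5.05), feats, centres, assign)
```

Only the chosen cluster's members are compared against the query, so the per-match cost drops from O(n) to roughly O(n/k) once the clustering is amortized over many queries.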
Improved Pattern Matching Applied to Surface Mounting Devices Components Localization on Automated Optical Inspection
Automated Optical Inspection (AOI) systems are commonly used in Printed Circuit Board (PCB) manufacturing. The use of this technology has proven highly efficient for process improvements and quality achievements. The correct extraction of the component for posterior analysis is a critical step of the AOI process. Nowadays, the pattern matching algorithm is commonly used, although it requires extensive calculation and is time consuming. This paper presents an improved algorithm for the component localization process, with the capability of implementation in a parallel execution system.
Element-Independent Implementation for Method of Lagrange Multipliers
Treatment of the non-matching interface is an important computational issue. To handle this problem, the method of Lagrange multipliers, in both its classical and localized versions, is the most popular technique. It essentially imposes the interface compatibility conditions by introducing Lagrange multipliers. However, the numerical system becomes unstable and inefficient due to the Lagrange multipliers. An interface element-independent formulation that does not include the Lagrange multipliers can be obtained by modifying the independent variables mathematically. Through this modification, a more efficient and stable system can be achieved, with accuracy equivalent to the conventional method. A numerical example is conducted to verify the validity of the presented method.
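In standard notation (a hedged sketch, not the paper's own formulation), the classical Lagrange multiplier treatment couples the subdomain equilibrium equations with the interface compatibility constraint through a saddle-point system:

```latex
\begin{bmatrix} K & G^{\mathsf{T}} \\ G & 0 \end{bmatrix}
\begin{bmatrix} u \\ \lambda \end{bmatrix}
=
\begin{bmatrix} f \\ 0 \end{bmatrix}
```

where $K$ is the block-diagonal stiffness of the subdomains, $G$ encodes the interface compatibility conditions $Gu = 0$, and $\lambda$ collects the Lagrange multipliers. The zero diagonal block makes the system indefinite, which is the source of the instability and inefficiency that the element-independent reformulation avoids by eliminating $\lambda$.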
A Comparison of Image Data Representations for Local Stereo Matching
The stereo matching problem, while having been present for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements found in a set of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene, relative to the observer. Advancements in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While at its core the cost is based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis to compare the effectiveness of the more common image data representations. The goal is to determine the effectiveness of these data representations in reducing the cost for the correct correspondence relative to other possible matches.
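A typical local matching cost, the sum of absolute differences (SAD) over a window, can be sketched as follows. The 1D scanlines and window size are illustrative assumptions; the same cost can be computed over any pixel data representation (grayscale, individual RGB channels, census-transformed values, etc.), which is exactly the comparison the paper performs.

```python
def sad_cost(left, right, x, d, half_win=1):
    """Sum of absolute differences between a window centred at x in the
    left scanline and the window shifted by disparity d in the right."""
    return sum(abs(left[x + k] - right[x + k - d])
               for k in range(-half_win, half_win + 1))

def best_disparity(left, right, x, max_d, half_win=1):
    """Disparity minimising the SAD cost at position x."""
    return min(range(max_d + 1),
               key=lambda d: sad_cost(left, right, x, d, half_win))

# Toy scanlines: the right image is the left shifted by 2 pixels (assumed data).
left  = [10, 10, 50, 80, 50, 10, 10, 10]
right = [50, 80, 50, 10, 10, 10, 10, 10]

d = best_disparity(left, right, 3, 2)   # expected disparity: 2
```

A lower cost at the true disparity relative to the other candidates is precisely what makes a data representation effective under this framework.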
Conditions for Model Matching of Switched Asynchronous Sequential Machines with Output Feedback
Solvability of the model matching problem for
input/output switched asynchronous sequential machines is discussed
in this paper. The control objective is to determine the existence
condition and design algorithm for a corrective controller that can
match the stable-state behavior of the closed-loop system to that of
a reference model. Switching operations and correction procedures
are incorporated using output feedback so that the controlled
switched machine can show the desired input/output behavior. A
matrix expression is presented to address reachability of switched
asynchronous sequential machines with output equivalence with
respect to a model. The presented reachability condition for the
controller design is validated in a simple example.
An Electrically Small Silver Ink Printed FR4 Antenna for RF Transceiver Chip CC1101
An electrically small meander line antenna is designed for impedance matching with the RF transceiver chip CC1101. The design provides the flexibility of tuning the reactance of the antenna over a wide range of values: highly capacitive to highly inductive. The antenna was printed with silver ink on an FR4 substrate using the screen printing process. The antenna impedance was perfectly matched to the CC1101 at 433 MHz. The measured radiation efficiency of the antenna was 81.3% at resonance. The 3 dB and 10 dB fractional bandwidths of the antenna were 14.5% and 4.78%, respectively. The read range of the antenna was compared with a copper wire monopole antenna over a distance of five meters. The antenna, with a perfect impedance match to the RF transceiver chip CC1101, shows improvement in read range compared to the monopole antenna over the specified distance.
Key Competences in Economics and Business Field: The Employers’ Side of the Story
Rapid technological developments and the increase in organizations’ interdependence on an international scale are changing the traditional workplace paradigm. A key feature of the knowledge-based economy is that employers are looking for individuals who possess both specific academic skills and knowledge and the capability to be proactive and respond to problems creatively and autonomously. The focus of this paper is workers with an Economics and Business background, and its goals are threefold: (1) to explore a wide range of competences and identify which are the most important to employers; (2) to investigate the existence and magnitude of the gap between the required and possessed levels of a certain competence; and (3) to inquire how this gap is connected with the performance of a company. A study was conducted on a representative sample of Croatian enterprises during the spring of 2016. Results show that generic, rather than specific, competences are more important to employers, and that the gap between the relative importance of a certain competence and its current representation in the existing workforce is greater for generic competences than for specific ones. Finally, the results do not support the hypothesis that this gap is correlated with firms’ performance.
A Developmental Survey of Local Stereo Matching Algorithms
This paper presents an overview of the history and development of stereo matching algorithms. Details from its inception, up to relatively recent techniques are described, noting challenges that have been surmounted across these past decades. Different components of these are explored, though focus is directed towards the local matching techniques. While global approaches have existed for some time, and demonstrated greater accuracy than their counterparts, they are generally quite slow. Many strides have been made more recently, allowing local methods to catch up in terms of accuracy, without sacrificing the overall performance.
Design of IMC-PID Controller Cascaded Filter for Simplified Decoupling Control System
In this work, the IMC-PID controller cascaded with a filter, based on the Internal Model Control (IMC) scheme, is systematically proposed for the simplified decoupling control system. The simplified decoupling is firstly introduced for multivariable processes by using coefficient matching to obtain a stable, proper, and causal simplified decoupler. Accordingly, transfer functions of the decoupled apparent processes can be expressed as a set of n equivalent independent processes and then derived as a ratio of the original open-loop transfer function to the diagonal element of the dynamic relative gain array. The IMC-PID controller in series with the filter is then directly employed to enhance the overall performance of the decoupling control system while avoiding difficulties arising from properties inherent to simplified decoupling. Some simulation studies are considered to demonstrate the simplicity and effectiveness of the proposed method. Simulations were conducted by tuning various controllers of multivariable processes with multiple time delays. The results indicate that the proposed method consistently performs well, with fast and well-balanced closed-loop time responses.
A Practical and Efficient Evaluation Function for 3D Model Based Vehicle Matching
3D model-based vehicle matching provides a new way
for vehicle recognition, localization and tracking. Its key is to
construct an evaluation function, also called fitness function, to
measure the degree of vehicle matching. Existing fitness functions often perform poorly when clutter and occlusion exist in traffic scenarios. In this paper, we present a practical and efficient fitness function. Unlike existing evaluation functions, the proposed fitness function studies the vehicle matching problem from both local and global perspectives, exploiting the pixel gradient
information as well as the silhouette information. In view of the
discrepancy between 3D vehicle model and real vehicle, a weighting
strategy is introduced to differently treat the fitting of the model’s
wireframes. Additionally, a normalization operation for the model’s
projection is performed to improve the accuracy of the matching.
Experimental results on real traffic videos reveal that the proposed
fitness function is efficient and robust to the cluttered background
and partial occlusion.
Computing Maximum Uniquely Restricted Matchings in Restricted Interval Graphs
A uniquely restricted matching is defined to be a
matching M whose matched vertices induces a sub-graph which has
only one perfect matching. In this paper, we make progress on the
open question of the status of this problem on interval graphs (graphs
obtained as the intersection graph of intervals on a line). We give
an algorithm to compute maximum cardinality uniquely restricted
matchings on certain sub-classes of interval graphs. We consider two
sub-classes of interval graphs, the former contained in the latter, and
give O(|E|^2) time algorithms for both of them. It is to be noted that
both sub-classes are incomparable to proper interval graphs (graphs
obtained as the intersection graph of intervals in which no interval
completely contains another interval), on which the problem can be
solved in polynomial time.
Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments
Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in “weight space”, where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences: two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains ~93% at 0 dB SNR.
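The final classification step, choosing the training speaker whose atomic-index probability vector lies closest in Euclidean distance to the test vector, can be sketched as follows; the toy probability vectors are assumed values, not TIMIT-derived statistics.

```python
import math

def euclidean(p, q):
    """Euclidean distance between two probability vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def classify(test_prob, train_probs):
    """Label of the training probability vector nearest to the test vector."""
    return min(train_probs, key=lambda label: euclidean(test_prob, train_probs[label]))

# Toy atomic-index probability vectors for two speakers (assumed values).
train = {"speaker_a": [0.5, 0.3, 0.2], "speaker_b": [0.1, 0.2, 0.7]}
test_vec = [0.45, 0.35, 0.20]
who = classify(test_vec, train)
```

Because the decision uses only index occurrence statistics rather than atom amplitudes, it inherits the noise robustness of the sparse decomposition reported above.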
Improving Human Hand Localization in Indoor Environment by Using Frequency Domain Analysis
A human hand localization approach is improved by using radar cross section (RCS) measurements with a minimum root mean square (RMS) error matching algorithm on a touchless keypad mock-up model. RCS and frequency transfer function measurements are carried out in an indoor environment over the frequency range from 3.0 to 11.0 GHz to cover Federal Communications Commission (FCC) standards. The touchless keypad model is tested at two different distances between the hand and the keypad. The initial distance of 19.50 cm is identical to the heights of the transmitting (Tx) and receiving (Rx) antennas, while the second distance is 29.50 cm from the keypad. Moreover, the effect of the Rx angle relative to the human hand is considered. The RCS input parameters are compared with power loss parameters at each frequency. From the results, the performance of the RCS input parameters at the second distance, 29.50 cm at 3 GHz, is better than the others.
Enhancing the Performance of H.264/AVC in Adaptive Group of Pictures Mode Using Octagon and Square Search Pattern
This paper integrates Octagon and Square Search
pattern (OCTSS) motion estimation algorithm into H.264/AVC
(Advanced Video Coding) video codec in Adaptive Group of Pictures
(AGOP) mode. AGOP structure is computed based on scene change
in the video sequence. Octagon and square search pattern block-based
motion estimation method is implemented in inter-prediction process
of H.264/AVC. Together, these methods reduce bit rate and computational complexity while maintaining the quality of the video sequence. Experiments are conducted for different types of video sequence. The results substantially prove that the bit rate, computation time and PSNR gain achieved by the proposed method are better than those of the existing H.264/AVC with fixed GOP and AGOP. With a marginal gain in quality of 0.28 dB and an average gain in bit rate of 132.87 kbps, the proposed method reduces the average computation time by 27.31 minutes when compared to the existing state-of-the-art H.264/AVC video codec.
A Simple Adaptive Atomic Decomposition Voice Activity Detector Implemented by Matching Pursuit
A simple adaptive voice activity detector (VAD) is
implemented using Gabor and gammatone atomic decomposition of
speech for high Gaussian noise environments. Matching pursuit is
used for atomic decomposition, and is shown to achieve optimal
speech detection capability at high data compression rates for low
signal to noise ratios. The most active dictionary elements found by
matching pursuit are used for the signal reconstruction so that the
algorithm adapts to the individual speaker's dominant time-frequency characteristics. Speech has a high peak-to-average ratio, enabling matching pursuit's greedy heuristic of highest inner products to isolate
high energy speech components in high noise environments. Gabor
and gammatone atoms are both investigated with identical
logarithmically spaced center frequencies, and similar bandwidths.
The algorithm performs equally well for both Gabor and gammatone
atoms with no significant statistical differences. The algorithm
achieves 70% accuracy at a 0 dB SNR, 90% accuracy at a 5 dB SNR
and 98% accuracy at a 20 dB SNR, using 30 dB SNR as a reference
for voice activity.
Indian License Plate Detection and Recognition Using Morphological Operation and Template Matching
Automatic License Plate Recognition (ALPR) is a technology which recognizes the registration plate, number plate, or license plate of a vehicle. In this paper, an Indian vehicle number plate is extracted and the characters are predicted in an efficient manner. ALPR involves four major techniques: i) pre-processing, ii) license plate location identification, iii) individual character segmentation, and iv) character recognition. The opening phase, pre-processing, helps to remove noise and enhances the quality of the image using morphological operations and image subtraction. The second phase, the most puzzling stage, ascertains the location of the license plate using Canny edge detection, dilation and erosion. In the third phase, each character is characterized by the Connected Component Approach (CCA), and in the final phase, each segmented character is recognized using cross-correlation template matching, a scheme specifically appropriate for a fixed format. Major applications of ALPR are toll collection, border control, parking, stolen car detection, enforcement, access control, and traffic control. A database consisting of 500 car images taken under dissimilar lighting conditions is used. The efficiency of the system is 97%. Our future focus is Indian vehicle license plate validation (whether the license plate of a vehicle is as per Road Transport and Highway standards).
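The recognition step, cross-correlation template matching, can be sketched for fixed-size binary glyphs: the segmented character is scored against each stored template by normalized cross-correlation, and the best-scoring template wins. The 3x3 toy glyphs below are assumptions; real plate characters would be larger binarized patches.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equally sized binary
    glyphs (flattened and mean-removed before correlating)."""
    fa = [p for row in a for p in row]
    fb = [p for row in b for p in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    den = math.sqrt(sum((x - ma) ** 2 for x in fa) *
                    sum((y - mb) ** 2 for y in fb))
    return num / den if den else 0.0

def recognize(glyph, templates):
    """Label of the template with the highest correlation score."""
    return max(templates, key=lambda label: ncc(glyph, templates[label]))

# Toy 3x3 templates for 'I' and 'O' (assumed shapes).
templates = {
    "I": [[0, 1, 0], [0, 1, 0], [0, 1, 0]],
    "O": [[1, 1, 1], [1, 0, 1], [1, 1, 1]],
}
candidate = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]   # noise-free 'I' for the demo
label = recognize(candidate, templates)
```

Mean removal makes the score insensitive to uniform brightness differences, which is one reason this scheme suits the fixed-format characters of a license plate.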
High-Accuracy Satellite Image Analysis and Rapid DSM Extraction for Urban Environment Evaluations (Tripoli-Libya)
Modelling of the earth's surface and evaluation of
urban environment, with 3D models, is an important research topic.
New stereo capabilities of high resolution optical satellites images,
such as the tri-stereo mode of Pleiades, combined with new image
matching algorithms, are now available and can be applied in urban
area analysis. In addition, photogrammetry software packages have gained new, more efficient matching algorithms, such as SGM, as well as improved filters to deal with shadow areas, and can achieve denser and more precise results.
This paper describes a comparison between 3D data extracted
from tri-stereo and dual stereo satellite images, combined with pixel
based matching and Wallis filter. The aim was to improve the
accuracy of 3D models, especially in urban areas, in order to assess if satellite images are appropriate for a rapid evaluation of urban environments.
The results showed that 3D models achieved by Pleiades tri-stereo
outperformed, both in terms of accuracy and detail, the result
obtained from a Geo-eye pair. The assessment was made with
reference digital surface models derived from high resolution aerial
photography. This could mean that tri-stereo images can be
successfully used for the proposed urban change analyses.
Accrual Based Scheduling for Cloud in Single and Multi Resource System: Study of Three Techniques
This paper evaluates the accrual based scheduling for
cloud in single and multi-resource system. Numerous organizations
benefit from Cloud computing by hosting their applications. The
cloud model provides needed access to computing with potentially
unlimited resources. Scheduling is the mapping of tasks to resources according to a certain optimality goal. It schedules tasks to virtual machines with adaptable timing, in sequence, under transaction logic constraints. A good scheduling algorithm improves CPU use, turnaround time, and throughput. In this paper, three real-time cloud service scheduling algorithms for single resources and multiple resources are investigated. Experimental results show the performance of the resource matching algorithm to be superior for both single and multi-resource scheduling when compared to the benefit-first scheduling, migration, and checkpoint algorithms.
Size-Reduction Strategies for Iris Codes
Iris codes contain bits with different entropy. This
work investigates different strategies to reduce the size of iris
code templates with the aim of reducing storage requirements and
computational demand in the matching process. Besides simple subsampling schemes, a binary multi-resolution representation as used in the JBIG hierarchical coding mode is also assessed. We find that
iris code template size can be reduced significantly while maintaining
recognition accuracy. Besides, we propose a two-stage identification
approach, using small-sized iris code templates in a pre-selection
stage, and full resolution templates for final identification, which
shows promising recognition behaviour.
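The proposed two-stage idea, pre-selecting with small sub-sampled templates and confirming with full-resolution ones, can be sketched with fractional Hamming distances on bit strings. The toy codes, sub-sampling factor, and shortlist size are assumptions; real iris codes are thousands of bits with rotation-compensated matching.

```python
def hamming(a, b):
    """Fractional Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def subsample(code, factor=4):
    """Keep every `factor`-th bit of the iris code."""
    return code[::factor]

def identify(probe, gallery, shortlist=2, factor=4):
    """Stage 1: rank the gallery by distance between sub-sampled codes
    and keep a shortlist. Stage 2: re-rank the shortlist at full
    resolution and return the best-matching identity."""
    small_probe = subsample(probe, factor)
    ranked = sorted(gallery,
                    key=lambda k: hamming(small_probe, subsample(gallery[k], factor)))
    return min(ranked[:shortlist], key=lambda k: hamming(probe, gallery[k]))

# Toy 16-bit iris codes (assumed data); the probe is user_b's code with
# one bit flipped, so only the full-resolution stage can decide reliably.
gallery = {
    "user_a": "1010101010101010",
    "user_b": "1100110011001100",
    "user_c": "1111000011110000",
}
probe = "1100110011001101"
who = identify(probe, gallery)
```

Stage 1 touches only a quarter of the bits per gallery entry, while Stage 2 pays the full-resolution cost only for the shortlist, which mirrors the storage and computation savings reported above.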
Tool for Fast Detection of Java Code Snippets
This paper presents general results on the Java source
code snippet detection problem. We propose a tool which uses graph and subgraph isomorphism detection. A number of solutions for these tasks have been proposed in the literature. However, although all these solutions are really fast, they compare only constant static trees. Our solution allows an input sample to be entered dynamically with the Scripthon language while preserving an acceptable speed. We used several optimizations to achieve a very low number of comparisons during the matching algorithm.
Driver Fatigue State Recognition with Pixel Based Caveat Scheme Using Eye-Tracking
Driver fatigue is an important factor in the increasing
number of road accidents. A dynamic template matching method is proposed to address the problem of real-time, eye-tracking-based driver fatigue detection. An effective vision-based approach
was used to analyze the driver’s eye state to detect fatigue. The driver
fatigue system consists of Face detection, Eye detection, Eye
tracking, and Fatigue detection. Initially frames are captured from a
color video in a car dashboard and transformed from RGB into YCbCr
color space to detect the driver’s face. The Canny edge operator was used to estimate the eye region, and the locations of the eyes are extracted. The extracted eyes are used as templates for eye tracking. Edge Map Overlapping (EMO) and Edge Pixel Count (EPC) matching functions were used for eye tracking to improve the matching accuracy. The pixels of the eyeball were tracked within the eye regions, which are used to determine the fatigue state of