A Growing Natural Gas Approach for Evaluating Quality of Software Modules
The prediction of software quality during the development life cycle of a software project helps the development organization make efficient use of available resources and produce a product of the highest quality. The "whether a module is faulty or not" approach can be used to predict the quality of a software module. A number of software quality prediction models described in the literature are based upon genetic algorithms, artificial neural networks and other data mining algorithms. One promising approach to quality prediction is based on clustering techniques. Most clustering-based quality prediction models make use of the K-means, Mixture-of-Gaussians, Self-Organizing Map, Neural Gas and fuzzy K-means algorithms. All of these techniques require a predefined structure, i.e. the number of neurons or clusters must be known before the clustering process starts. Growing Neural Gas, in contrast, does not require the quantity of neurons or the topology of the structure to be predetermined: it starts with a minimal neuron structure that is incremented during training until it reaches a user-defined maximum number of clusters. Hence, in this work we have used Growing Neural Gas as the underlying clustering algorithm; it produces an initial set of labeled clusters from the training data set, and this set of clusters is then used to predict the quality of the software modules in the test data set. The best testing results show 80% accuracy in evaluating the quality of software modules. Hence, the proposed technique can be used by programmers to evaluate the quality of modules during software development.
Growing Neural Gas, data clustering, fault prediction.
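The prediction step described above can be sketched as a nearest-labeled-centre classifier. This is an illustrative sketch only: the cluster centres below are made-up values over two hypothetical module metrics, not the output of the paper's Growing Neural Gas training.

```python
# Illustrative sketch: predict a module's quality from the nearest labeled
# cluster centre (Euclidean distance). In the paper, the labeled clusters
# come from Growing Neural Gas; here they are hypothetical hard-coded values.
import math

# Hypothetical cluster centres over two module metrics
# (e.g. lines of code, cyclomatic complexity), labeled from training data.
labeled_clusters = [
    ((20.0, 3.0), "not faulty"),
    ((150.0, 12.0), "faulty"),
]

def predict_quality(module_metrics):
    """Label a module with the quality label of its nearest cluster centre."""
    centre, label = min(labeled_clusters,
                        key=lambda cl: math.dist(module_metrics, cl[0]))
    return label

print(predict_quality((30.0, 4.0)))    # near the small, simple cluster
```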
Cloud Computing Databases: Latest Trends and Architectural Concepts
Economic factors are leading to the rise of infrastructures that provide software and computing facilities as a service, known as cloud services or cloud computing. Cloud services can provide efficiencies for application providers, both by limiting up-front capital expenses and by reducing the cost of ownership over time. Such services are made available in a data center, using shared commodity hardware for computation and storage. A varied set of cloud services is available today, including application services (salesforce.com), storage services (Amazon S3), compute services (Google App Engine, Amazon EC2) and data services (Amazon SimpleDB, Microsoft SQL Server Data Services, Google's Datastore). These services represent a variety of reformations of data management architectures, and more are on the horizon.
Data Management in Cloud, AWS, EC2, S3, SQS.
Analytical Study of Component Based Software Engineering
This paper is a survey of current component-based software technologies and a description of the promotion and inhibition factors in CBSE. The features that software components inherit are also discussed, and quality assurance issues in component-based software are catered to. Research on the quality model of a component-based system starts with the study of what components are, of CBSE and its development life cycle, and of the pros and cons of CBSE. Various attributes are studied and compared, keeping in view the study of various existing models for general systems and CBS. When illustrating the quality of a software component, an apt set of quality attributes for the description of the system (or components) should be selected. Finally, the research issues that can be extended are tabularized.
Component, COTS, Component-based development, Component-based Software Engineering.
Face Recognition Using Eigen face Coefficients and Principal Component Analysis
Face recognition is a field of multidimensional applications, and extensive work has been done on most of the details related to it. Face recognition using PCA is one such approach. In this paper, PCA features are used for feature extraction, and the face under consideration is matched with the test image using eigenface coefficients. The crux of the work lies in optimizing the Euclidean distance and paving the way to test the algorithm using Matlab, an efficient tool with a powerful user interface and simplicity in representing results.
Eigen Face, Multidimensional, Matching, PCA.
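The eigenface pipeline described above (project onto principal components, then match by Euclidean distance between coefficient vectors) can be sketched with NumPy, assuming it is available. The tiny random vectors below stand in for real face images.

```python
# Minimal eigenface-matching sketch. The 16-element random vectors are
# placeholders for real face images; the paper's actual data and Matlab
# implementation are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((5, 16))            # 5 training "faces", 16 pixels each

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Principal components of the training set via SVD of the centered data.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt                        # rows are eigenfaces

# Eigenface coefficients of each training face.
train_coeffs = centered @ eigenfaces.T

def match(test_image):
    """Index of the training face with the smallest Euclidean distance
    in eigenface-coefficient space."""
    coeffs = (test_image - mean_face) @ eigenfaces.T
    dists = np.linalg.norm(train_coeffs - coeffs, axis=1)
    return int(np.argmin(dists))

print(match(faces[3]))                 # a training face matches itself
```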
Software Maintenance Severity Prediction with Soft Computing Approach
As the majority of faults are found in a few of a system's modules, there is a need to investigate the modules that are affected more severely than others, and proper maintenance needs to be done on time, especially for critical applications. In this paper, we have applied different predictor models to NASA's public domain defect dataset coded in the Perl programming language. Different machine learning algorithms belonging to the different learner categories of the WEKA project, including a Mamdani-based fuzzy inference system and a neuro-fuzzy based system, have been evaluated for the modeling of maintenance severity, i.e. the impact of fault severity. The results are recorded in terms of Accuracy, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). The results show that the neuro-fuzzy based model provides relatively better prediction accuracy than the other models and hence can be used for maintenance severity prediction of the software.
Software Metrics, Fuzzy, Neuro-Fuzzy, Software Faults, Accuracy, MAE, RMSE.
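The MAE and RMSE measures reported throughout these studies have standard definitions, sketched below on illustrative numbers (not the paper's results).

```python
# Standard definitions of the two error measures; the sample values are
# illustrative only.
import math

def mae(actual, predicted):
    """Mean Absolute Error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root Mean Squared Error."""
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))

actual, predicted = [1, 2, 3, 4], [1, 2, 2, 6]
print(mae(actual, predicted), rmse(actual, predicted))
```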
A Subtractive Clustering Based Approach for Early Prediction of Fault Proneness in Software Modules
In this paper, a subtractive clustering based fuzzy inference system approach is used for early detection of faults in function-oriented software systems. The approach has been tested with real defect datasets of the NASA software projects PC1 and CM1. Both the code-based model and the joined model (a combination of the requirement and code based metrics) of the datasets are used for training and testing of the proposed approach. The performance of the models is recorded in terms of Accuracy, MAE and RMSE values, and is better in the case of the joined model. As evidenced by the results obtained, it can be concluded that clustering and fuzzy logic together provide a simple yet powerful means to model the early detection of faults in function-oriented software systems.
Subtractive clustering, fuzzy inference system, fault proneness.
A Technique for Execution of Written Values on Shared Variables
This paper conceptualizes the technique of release consistency, combined with the concept of user-defined synchronization. A programming model built around objects and classes is illustrated and demonstrated. The essence of the paper is phases, events and parallel computing execution. The technique by which written values become visible on shared variables is implemented. The second part of the paper consists of the implementation of user-defined high-level synchronization primitives and of the system architecture with its memory protocols. Techniques that are central to deciding the validation and invalidation of a stale page are also proposed.
synchronization objects, barrier, phases and events, shared memory
A Model for Estimation of Efforts in Development of Software Systems
Software effort estimation is the process of predicting the most realistic amount of effort required to develop or maintain software, based on incomplete, uncertain and/or noisy input. Effort estimates may be used as input to project plans, iteration plans and budgets. Various models, such as the Halstead, Walston-Felix, Bailey-Basili, Doty and GA-based models, have already been used to estimate software effort for projects. In this study, statistical models, a Fuzzy-GA system and Neuro-Fuzzy (NF) inference systems are used to estimate software effort. The performance of the developed models was tested on NASA software project datasets, and the results are compared with the Halstead, Walston-Felix, Bailey-Basili, Doty and Genetic Algorithm based models mentioned in the literature. The results show that the NF model has the lowest MMRE and RMSE values, outperforming the Fuzzy-GA based hybrid inference system and the other existing models used for effort prediction.
Neuro-Fuzzy Model, Halstead Model, Walston-Felix Model, Bailey-Basili Model, Doty Model, GA Based Model, Genetic Algorithm.
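The four classical baseline models compared above are closed-form size-to-effort formulas. The coefficients below are the forms most commonly quoted in the effort-estimation literature (effort E in person-months, size in KLOC); whether the paper uses exactly these constants is an assumption.

```python
# Classical effort models in their commonly quoted forms; the exact
# coefficients used by any particular study may differ.
def halstead(kloc):       return 0.7 * kloc ** 1.50
def walston_felix(kloc):  return 5.2 * kloc ** 0.91
def bailey_basili(kloc):  return 5.5 + 0.73 * kloc ** 1.16
def doty(kloc):           return 5.288 * kloc ** 1.047   # quoted for KLOC > 9

for model in (halstead, walston_felix, bailey_basili, doty):
    print(f"{model.__name__:14s} E(10 KLOC) = {model(10):.1f} person-months")
```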
A Complexity Measure for Java Bean based Software Components
Traditional software product and process metrics are neither suitable nor sufficient for measuring the complexity of software components, which is ultimately necessary for quality and productivity improvement within organizations adopting CBSE. Researchers have proposed a wide range of complexity metrics for software systems; however, these metrics are not sufficient for components and component-based systems, being restricted to module-oriented and object-oriented systems. This study proposes to find the complexity of JavaBean software components as a reflection of their quality, so that a component can be adopted accordingly to make it more reusable. The proposed metric involves only the design issues of the component and does not consider packaging and deployment complexity. In this way, the complexity of software components can be kept within certain limits, which in turn helps in enhancing quality and productivity.
JavaBean Components, Complexity, Metrics, Validation.
Predicting the Impact of the Defect on the Overall Environment in Function Based Systems
A lot of work has been done on predicting the fault proneness of software systems. However, the severity of the faults matters more than the number of faults existing in the developed system, since the major faults matter most to a developer and need immediate attention. In this paper, we try to predict the level of impact of the existing faults in software systems. A neuro-fuzzy based predictor model is applied to NASA's public domain defect dataset coded in the C programming language. Correlation-based Feature Selection (CFS) evaluates the worth of a subset of attributes by considering the individual predictive ability of each feature along with the degree of redundancy between them, so CFS is used to select the metrics most highly correlated with the level of severity of faults. The results are compared with the prediction results of Logistic Model Trees (LMT), earlier quoted as the best technique in the literature. The results are recorded in terms of Accuracy, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE), and show that the neuro-fuzzy based model provides relatively better prediction accuracy than the other models and hence can be used for modeling the level of impact of faults in function based systems.
Software Metrics, Fuzzy, Neuro-Fuzzy, Software Faults, Accuracy, MAE, RMSE.
A Reusability Evaluation Model for OO-Based Software Components
The requirement to improve software productivity has promoted research on software metric technology. There are metrics for identifying the quality of reusable components, but the function that makes use of these metrics to find the reusability of software components is still not clear. If identified in the design phase, or even in the coding phase, these metrics can help us to reduce rework by improving the quality of reuse of the component, and hence improve productivity due to a probabilistic increase in the reuse level. The CK metric suite is the most widely used set of metrics for object-oriented (OO) software; we critically analyzed the CK metrics, tried to remove their inconsistencies, and devised a framework of metrics to obtain a structural analysis of OO-based software components. A neural network can learn new relationships from new input data and can be used to refine fuzzy rules to create a fuzzy adaptive system. Hence, a neuro-fuzzy inference engine can be used to evaluate the reusability of an OO-based component using its structural attributes as inputs. In this paper, an algorithm has been proposed in which tuned WMC, DIT, NOC, CBO and LCOM values of the OO software component are given as inputs to the neuro-fuzzy system, and the output is obtained in terms of reusability. The developed reusability model has produced the high precision results expected by the human experts.
CK-Metric, ID3, Neuro-fuzzy, Reusability.
Identification of Reusable Software Modules in Function Oriented Software Systems using Neural Network Based Technique
The cost of developing software from scratch can be saved by identifying and extracting reusable components from already developed and existing software systems or legacy systems. But the issue of how to identify reusable components from existing systems has remained relatively unexplored. We have used a metric based approach for characterizing a software module. In the present work, the values of McCabe's Cyclomatic Complexity Measure for complexity measurement, a Regularity Metric, the Halstead Software Science Indicator for volume indication, a Reuse Frequency metric and a Coupling Metric of the software component are used as input attributes to different types of neural network systems, and the reusability of the software component is calculated. The results are recorded in terms of Accuracy, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE).
Software reusability, Neural Networks, MAE, RMSE.
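One of the input metrics named above, McCabe's Cyclomatic Complexity, has a simple closed form over a module's control-flow graph: V(G) = E - N + 2P, for E edges, N nodes and P connected components. A minimal sketch (the example graph counts are illustrative):

```python
# McCabe's Cyclomatic Complexity: V(G) = E - N + 2P.
def cyclomatic_complexity(edges, nodes, components=1):
    """edges/nodes: counts in the control-flow graph; components: number
    of connected components (1 for a single procedure)."""
    return edges - nodes + 2 * components

# Example: a function whose flow graph is a single if/else diamond
# has 4 nodes and 4 edges, giving V(G) = 2 (two linearly independent paths).
print(cyclomatic_complexity(edges=4, nodes=4))
```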
Software Reengineering Tool for Traffic Accident Data
In today's fast-paced world, where everyone is running short of time and works haphazardly, a similar scene is common on the roads. To do away with the fatal consequences of speedy traffic on busy lanes, software to analyse and keep account of traffic and the subsequent congestion is used in developed countries. This software has been implemented and used with the help of a support tool called the Critical Analysis Reporting Environment, of which two versions exist. The current research paper examines the issues and problems encountered while using these two versions in practice. Further, a hybrid architecture is proposed that retains the quality and performance of both and is better in terms of coupling of components, maintenance and many other features.
Critical Analysis Reporting Environment, coupling, hybrid architecture, etc.
Coverage and Connectivity Problem in Sensor Networks
In over-deployed sensor networks, one approach to conserve energy is to keep only a small subset of sensors active at any instant. For coverage problems, the monitored area is a set of points that require sensing, called demand points, and the coverage area of a node is considered to be a circle of range R, where R is the sensing range: if the distance between a demand point and a sensor node is less than R, the node is able to cover that point. We consider a wireless sensor network consisting of a set of randomly deployed sensors. A point in the monitored area is covered if it is within the sensing range of a sensor. In some applications, when the network is sufficiently dense, area coverage can be approximated by guaranteeing point coverage. In this case, the locations of the wireless devices can be used to represent the whole area, and the working sensors are supposed to cover all the other sensors. We also introduce a hybrid algorithm and discuss the challenges related to coverage in sensor networks.
Wireless sensor networks, network coverage, energy conservation, hybrid algorithms.
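The point-coverage test described above (a demand point is covered if some sensor lies within sensing range R of it) can be sketched directly; the coordinates below are made-up illustrative values.

```python
# Point-coverage check: a demand point is covered if at least one sensor
# lies within Euclidean distance R of it. Coordinates are illustrative.
import math

def covered(demand_point, sensors, R):
    """True if at least one sensor is within sensing range R of the point."""
    return any(math.dist(demand_point, s) <= R for s in sensors)

sensors = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
print(covered((4.0, 3.0), sensors, R=6.0))    # within 5 units of (0, 0)
print(covered((20.0, 20.0), sensors, R=6.0))  # out of range of every sensor
```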
Software Maintenance Severity Prediction for Object Oriented Systems
As the majority of faults are found in a few of a system's modules, there is a need to investigate the modules that are affected more severely than others, and proper maintenance needs to be done in time, especially for critical applications. Neural networks, which have already been applied in software engineering to build reliability growth models, can predict gross change or reusability metrics. Neural networks are non-linear, sophisticated modeling techniques able to model complex functions; they are used when the exact nature of the relationship between inputs and outputs is not known, and a key feature is that they learn this relationship through training. In the present work, various neural network based techniques are explored and a comparative analysis is performed for predicting the level of need of maintenance, by predicting the level of severity of faults present in NASA's public domain defect dataset. The different algorithms are compared on the basis of Mean Absolute Error, Root Mean Square Error and Accuracy values. It is concluded that Generalized Regression Networks is the best algorithm for classifying software components into different levels of severity of fault impact, and it can be used to develop a model for identifying modules that are heavily affected by faults.
Neural Network, Software faults, Software Metric.
Consistency Model and Synchronization Primitives in SDSMS
This paper gives a general discussion of memory consistency models such as strict consistency, sequential consistency, processor consistency and weak consistency. The techniques for implementing distributed shared memory systems and synchronization primitives in software distributed shared memory systems are then discussed. The analysis involves the performance measurement of the protocol concerned, the multiple writer protocol. Each protocol has pros and cons, so the problems associated with each protocol are discussed and other related aspects are explored.
Distributed System, Single owner protocol, Multiple owner protocol
Optimization for Reducing Handoff Latency and Utilization of Bandwidth in ATM Networks
To support mobility in ATM networks, a number of
technical challenges need to be resolved. The impact of handoff
schemes in terms of service disruption, handoff latency, cost
implications and excess resources required during handoffs needs to
be addressed. In this paper, a one phase handoff and route
optimization solution using reserved PVCs between adjacent ATM
switches to reroute connections during inter-switch handoff is
studied. In the second phase, a distributed optimization process is
initiated to optimally reroute handoff connections. The main
objective is to find the optimal operating point at which to perform
optimization subject to cost constraint with the purpose of reducing
blocking probability of inter-switch handoff calls for delay tolerant
traffic. We examine the relation between the required bandwidth
resources and optimization rate. Also we calculate and study the
handoff blocking probability due to lack of bandwidth for resources
reserved to facilitate the rapid rerouting.
Wireless ATM, Mobility, Latency, Optimization rate and Blocking Probability.
Biometric Methods and Implementation of Algorithms
Biometric measures of one kind or another have been used to identify people since ancient times, with handwritten signatures, facial features, and fingerprints being the traditional methods. Of late, systems have been built that automate the task of recognition, using these methods and newer ones, such as hand geometry, voiceprints and iris patterns. These systems have different strengths and weaknesses. This work is a two-section composition. In the first section, we present an analytical and comparative study of common biometric techniques, whose performance has been reviewed and tabularized. The latter section involves the actual implementation of the techniques under consideration, done using a state-of-the-art tool called MATLAB. This tool helps to effectively portray the corresponding results and effects.
Matlab, Recognition, Facial Vectors, Functions.
Prediction of Reusability of Object Oriented Software Systems using Clustering Approach
In the literature, there are metrics for identifying the quality of reusable components, but a framework that makes use of these metrics to precisely predict the reusability of software components still needs to be worked out. If identified in the design phase, or even in the coding phase, these reusability metrics can help us to reduce rework by improving the quality of reuse of the software component, and hence improve productivity due to a probabilistic increase in the reuse level. As the CK metric suite is the most widely used set of metrics for extracting the structural features of object-oriented (OO) software, in this study the tuned CK metric suite, i.e. WMC, DIT, NOC, CBO and LCOM, is used to obtain a structural analysis of OO-based software components. An algorithm has been proposed in which the tuned metric values of the OO software component are given as inputs to a K-Means clustering system, and a decision tree is formed under 10-fold cross validation of the data to evaluate the component in terms of a linguistic reusability value. The developed reusability model has produced high precision results as desired.
CK-Metric, Decision Tree, K-Means, Reusability.
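The K-Means step described above can be sketched in pure Python. The 1-D data here is illustrative (e.g. WMC values of six components); the paper clusters tuned values of all five CK metrics.

```python
# Minimal K-Means (Lloyd's algorithm) on 1-D data; illustrative only.
def kmeans(points, centres, iterations=10):
    """Iteratively assign points to the nearest centre, then recompute
    each centre as the mean of its cluster; returns the final centres."""
    for _ in range(iterations):
        clusters = [[] for _ in centres]
        for p in points:
            nearest = min(range(len(centres)),
                          key=lambda i: abs(p - centres[i]))
            clusters[nearest].append(p)
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres

# e.g. WMC values of six components, split into low- and high-complexity groups
data = [2, 3, 4, 20, 22, 25]
final = kmeans(data, centres=[0.0, 30.0])
print(final)
```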
A Study on Early Prediction of Fault Proneness in Software Modules using Genetic Algorithm
The fault-proneness of a software module is the probability that the module contains faults. Different techniques have been proposed to predict the fault-proneness of modules, including statistical methods, machine learning techniques, neural network techniques and clustering techniques. The aim of the proposed study is to explore whether metrics available in the early lifecycle (i.e. requirement metrics), metrics available in the late lifecycle (i.e. code metrics), and the combination of the two can be used to identify fault prone modules using a Genetic Algorithm technique. This approach has been tested with real defect datasets of NASA software projects written in the C programming language. The results show that the fusion of requirement and code metrics is the best prediction model for detecting faults, as compared with the commonly used code-based model.
Genetic Algorithm, Fault Proneness, Software Fault and Software Quality.
Automatic Reusability Appraisal of Software Components using Neuro-fuzzy Approach
Automatic reusability appraisal could be helpful in evaluating the quality of developed or developing reusable software components and in identifying reusable components from existing legacy systems, which can save the cost of developing software from scratch. But the issue of how to identify reusable components from existing systems has remained relatively unexplored. In this paper, we present a two-tier approach that studies the structural attributes of a component as well as its usability or relevancy to a particular domain. Latent semantic analysis is used for the feature-vector representation of various software domains. It exploits the fact that feature-vector codes can be seen as documents containing terms (the identifiers present in the components), so text modeling methods that capture co-occurrence information in low-dimensional spaces can be used. Further, we devised a neuro-fuzzy hybrid inference system, which takes structural metric values as input and calculates the reusability of the software component. A decision tree algorithm is used to decide the initial set of fuzzy rules for the neuro-fuzzy system. The results obtained are convincing enough to propose the system for the economical identification and retrieval of reusable software components.
Clustering, ID3, LSA, Neuro-fuzzy System, SVD
A Taguchi Approach to Investigate Impact of Factors for Reusability of Software Components
A quantitative investigation of the contribution of different factors towards measuring the reusability of software components could be helpful in evaluating the quality of developed or developing reusable software components and in identifying reusable components from existing legacy systems, which can save the cost of developing software from scratch. But the issue of the relative significance of the contributing factors has remained relatively unexplored. In this paper, we have used Taguchi's approach to analyze the significance of different structural attributes, or factors, in deciding the reusability level of a particular component. The results obtained show that complexity is the most important factor in deciding the reusability of function-oriented software. In the case of object-oriented software, coupling and complexity collectively play a significant role in high reusability.
Taguchi Approach, Reusability, Software Components, Structural Attributes.
Software Effort Estimation Using Soft Computing Techniques
Various models have been derived by studying a large number of completed software projects from various organizations and applications, to explore how project size maps into project effort. But there is still a need to improve the prediction accuracy of these models. As a neuro-fuzzy based system is able to approximate non-linear functions with more precision, a Neuro-Fuzzy system is used as a soft computing approach to generate a model by formulating the relationship based on its training. In this paper, the Neuro-Fuzzy technique is used for software estimation modeling on NASA software project data, and the performance of the developed models is compared with the Halstead, Walston-Felix, Bailey-Basili and Doty models mentioned in the literature.
Effort Estimation, Neural-Fuzzy Model, Halstead Model, Walston-Felix Model, Bailey-Basili Model, Doty Model.
A 16Kb 10T-SRAM with 4x Read-Power Reduction
This work aims to reduce the read power consumption
as well as to enhance the stability of the SRAM cell during the read
operation. A new 10-transistor cell is proposed with a new read
scheme to minimize the power consumption within the memory core.
It has separate read and write ports, thus cell read stability is
significantly improved. A 16Kb SRAM macro operating at 1V
supply voltage is demonstrated in 65 nm CMOS process. Its read
power consumption is reduced to 24% of the conventional design.
The new cell also has lower leakage current due to its special bit-line
pre-charge scheme. As a result, it is suitable for low-power mobile
applications where power supply is restricted by the battery.
A 16Kb 10T-SRAM, 4x Read-Power Reduction
A Dynamic RGB Intensity Based Steganography Scheme
Steganography means "covered writing"; it includes the concealment of information within computer files. In other words, it is secret communication that hides the very existence of the message. In this paper, we use the term cover image for an image that does not yet contain a secret message, and stego image for an image with an embedded secret message; the secret message itself is referred to as the stego-message or hidden message. We propose a technique called the dynamic RGB intensity based steganography model, as the RGB model is the technique commonly used in this field to hide data. The methods used here are based on the manipulation of the least significant bits of pixel values, or on the rearrangement of colors to create least-significant-bit or parity-bit patterns that correspond to the message being hidden. The proposed technique attempts to overcome the problems of embedding in a sequential fashion and of relying on a stego-key to select the pixels.
Steganography, Stego Image, RGB Image,Cryptography, LSB.
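The basic LSB embedding mentioned above can be sketched as follows. This fixed-order version only illustrates the idea; the dynamic RGB-intensity scheme the paper proposes chooses pixels and channels adaptively, which is not reproduced here.

```python
# Minimal LSB steganography sketch: hide one message bit in the least
# significant bit of each colour channel, in fixed order. Pixel values
# are illustrative.
def embed(pixels, bits):
    """pixels: list of (R, G, B) tuples; bits: list of 0/1 message bits."""
    flat = [c for px in pixels for c in px]
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit       # overwrite the channel's LSB
    it = iter(flat)
    return list(zip(it, it, it))             # regroup into (R, G, B) tuples

def extract(pixels, nbits):
    """Read the first nbits LSBs back out of the stego image."""
    flat = [c for px in pixels for c in px]
    return [flat[i] & 1 for i in range(nbits)]

cover = [(200, 120, 33), (18, 255, 64)]
stego = embed(cover, [1, 0, 1, 1])
print(extract(stego, 4))
```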
A K-Means Based Clustering Approach for Finding Faulty Modules in Open Source Software Systems
Prediction of fault-prone modules provides one way to support software quality engineering. Clustering is used to determine the intrinsic grouping in a set of unlabeled data, and among the various clustering techniques available in the literature, the K-Means approach is the most widely used. This paper introduces a K-Means based clustering approach for finding the fault proneness of object-oriented systems. The contribution of this paper is that metric values of the JEdit open source software are used to generate rules for categorizing software modules as faulty or non-faulty, and the approach is then empirically validated. The results are measured in terms of accuracy of prediction, probability of detection and probability of false alarms.
K-Means, Software Fault, Classification, Object-Oriented Metrics.
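The three reported measures follow directly from a confusion matrix over faulty (positive) and non-faulty (negative) modules. The counts below are illustrative, not the paper's JEdit results.

```python
# Accuracy, probability of detection (PD, i.e. recall) and probability of
# false alarm (PF) from confusion-matrix counts. Counts are illustrative.
def evaluate(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    pd = tp / (tp + fn)        # faulty modules correctly flagged
    pf = fp / (fp + tn)        # non-faulty modules wrongly flagged
    return accuracy, pd, pf

acc, pd, pf = evaluate(tp=40, fp=10, tn=140, fn=10)
print(f"accuracy={acc:.2f}  PD={pd:.2f}  PF={pf:.2f}")
```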
Bayesian Network Based Intelligent Pediatric System
In this paper, a Bayesian Network (BN) based system is presented for providing clinical decision support to healthcare practitioners in rural or remote areas of India for young infants and children up to the age of 5 years. The government is unable to appoint child specialists in rural areas because of the inadequate number of available pediatricians, which leads to a high Infant Mortality Rate (IMR). In such a scenario, an intelligent pediatric system provides a realistic solution. A prototype of the intelligent system has been developed that involves a knowledge component called the Intelligent Pediatric Assistant (IPA), and User Agents (UA) along with their Graphical User Interfaces (GUI). The GUI of a UA provides the interface through which the healthcare practitioner submits signs and symptoms and views the expert opinion suggested by the IPA. Depending upon the observations, the IPA decides the diagnosis and the treatment plan. The UA and IPA form a client-server architecture.
Bayesian Network, Intelligent Pediatric System.
Robust Face Recognition Using Eigen Faces and Karhunen-Loeve Algorithm
The current research paper is an implementation of eigenfaces and the Karhunen-Loeve algorithm for face recognition. The designed program works in a manner where a unique identification number is given to each face under trial. These faces are kept in a database from which any particular face can be matched and found among the available test faces. The Karhunen-Loeve algorithm has been implemented to find the matching face (with the same features) with respect to a given input image, supplied as a test data image with a unique identification number. The procedure involves the use of eigenfaces for the recognition of faces.
Eigen Faces, Karhunen-Loeve Algorithm, Face Recognition.
Impact of Faults in Different Software Systems: A Survey
Software maintenance is an extremely important activity in the software development life cycle, involving a great deal of human effort, cost and time. It may be subdivided into different activities such as fault prediction, fault detection, fault prevention and fault correction. The topic has gained substantial attention due to sophisticated and complex applications, commercial hardware, clustered architectures and artificial intelligence. In this paper we survey the work done in the field of software maintenance. Software fault prediction has been studied in the context of fault prone modules, self-healing systems, developer information, maintenance models, etc. Many aspects, such as the modeling and weighting of the impact of different kinds of faults in the various types of software systems, still need to be explored in the field of fault severity.
Fault prediction, Software Maintenance, Automated Fault Prediction, and Failure Mode Analysis
Analysis of Modified Heap Sort Algorithm on Different Environment
In the fields of computer science and mathematics, a sorting algorithm is an algorithm that puts the elements of a list in a certain order, i.e. ascending or descending. Sorting is perhaps the most widely studied problem in computer science and is frequently used as a benchmark of a system's performance. This paper presents a comparative performance study of four sorting algorithms on different platforms. For each machine, it is found that algorithm performance depends upon the number of elements to be sorted. In addition, as expected, the results show that the relative performance of the algorithms differs on the various machines. So, algorithm performance is dependent on data size, and the environment has an impact as well.
Algorithm, Analysis, Complexity, Sorting.
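As an illustration of the data-size dependence the abstract describes, below is a standard heap sort with a simple timing harness. The paper's modified variant is not reproduced here; this is a baseline sketch, and the input sizes are assumptions for the demo.

```python
import random
import time

def heapsort(a):
    """In-place heap sort: build a max-heap, then repeatedly swap the
    root to the end of the array and sift the new root down."""
    def sift_down(start, end):
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            # pick the larger of the two children
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return
    n = len(a)
    for start in range(n // 2 - 1, -1, -1):    # heapify the whole array
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):            # extract the maximum repeatedly
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)

# Timing two input sizes illustrates the dependence on data size
for n in (1_000, 10_000):
    data = [random.random() for _ in range(n)]
    t0 = time.perf_counter()
    heapsort(data)
    print(n, round(time.perf_counter() - t0, 4), data == sorted(data))
```

Running the same harness on different machines would reproduce the abstract's second observation: absolute and relative timings shift with the environment.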
Fingerprint Verification System Using Minutiae Extraction Technique
Most fingerprint recognition techniques are based on minutiae matching and have been well studied. However, this technology still suffers from problems associated with the handling of poor-quality impressions. One problem besetting fingerprint matching is distortion. Distortion changes both geometric position and orientation, and leads to difficulties in establishing a match among multiple impressions acquired from the same fingertip. Marking all the minutiae accurately while rejecting false minutiae is another issue still under research. Our work combines many methods to build a minutia extractor and a minutia matcher; the combination of methods comes from a wide investigation of research papers. Some novel changes are also used in the work: segmentation using morphological operations, improved thinning, false-minutiae removal methods, minutia marking with special consideration of triple branch counting, minutia unification by decomposing a branch into three terminations, and matching in a unified x-y coordinate system after a two-step transformation.
Biometrics, Minutiae, Crossing number, False Accept Rate (FAR), False Reject Rate (FRR).
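Minutiae of the kinds the abstract marks (ridge terminations and bifurcations) are conventionally detected on a thinned skeleton with the crossing-number method listed in the keywords. A minimal sketch, assuming a toy one-pixel-wide binary skeleton rather than a real fingerprint image:

```python
import numpy as np

def crossing_number(skel):
    """Classify minutiae on a thinned (one-pixel-wide) binary ridge
    skeleton using the crossing number: half the number of 0->1
    transitions around a pixel's 8-neighbourhood. CN == 1 marks a
    ridge ending; CN == 3 marks a bifurcation."""
    # 8-neighbours in clockwise order, starting at the top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    endings, bifurcations = [], []
    h, w = skel.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not skel[y, x]:
                continue
            ring = [skel[y + dy, x + dx] for dy, dx in offs]
            cn = sum(abs(int(ring[i]) - int(ring[(i + 1) % 8]))
                     for i in range(8)) // 2
            if cn == 1:
                endings.append((y, x))
            elif cn == 3:
                bifurcations.append((y, x))
    return endings, bifurcations

# Tiny skeleton: a horizontal ridge that forks into a Y on the right
skel = np.zeros((7, 9), dtype=np.uint8)
skel[3, 1:5] = 1                   # straight ridge segment
skel[2, 5], skel[1, 6] = 1, 1      # upper branch
skel[4, 5], skel[5, 6] = 1, 1      # lower branch
ends, bifs = crossing_number(skel)
print(ends, bifs)                  # three endings, one bifurcation at the fork
```

False-minutiae removal, which the abstract emphasizes, would then prune detections that are too close together or too near the segmentation boundary.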
A Genetic Algorithm Based Classification Approach for Finding Fault Prone Classes
Fault-proneness of a software module is the
probability that the module contains faults. A correlation exists
between the fault-proneness of the software and the measurable
attributes of the code (i.e. the static metrics) and of the testing (i.e.
the dynamic metrics). Early detection of fault-prone software
components enables verification experts to concentrate their time and
resources on the problem areas of the software system under
development. This paper introduces Genetic Algorithm based
software fault prediction models with Object-Oriented metrics. The
contribution of this paper is that it uses metric values of the JEdit
open source software to generate rules for classifying
software modules into the categories of faulty and non-faulty
modules, and thereafter performs empirical validation. The
results show that the Genetic Algorithm approach can be used for
finding fault-proneness in object-oriented software components.
Genetic Algorithms, Software Fault, Classification, Object-Oriented Metrics.
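As a toy illustration of genetic-algorithm-based classification of modules into faulty and non-faulty, the sketch below evolves a single-metric threshold rule. The metric data, fitness function and GA parameters are all assumptions for the demo and are far simpler than the paper's Object-Oriented-metric models.

```python
import random

def ga_threshold_rule(metric, faulty, pop_size=20, gens=30, seed=1):
    """Minimal genetic algorithm evolving the rule 'faulty if metric > t'.
    Fitness is classification accuracy on the labelled training data."""
    rng = random.Random(seed)
    lo, hi = min(metric), max(metric)

    def fitness(t):
        preds = [m > t for m in metric]
        return sum(p == f for p, f in zip(preds, faulty)) / len(faulty)

    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                 # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                       # arithmetic crossover
            child += rng.gauss(0, (hi - lo) * 0.05)   # Gaussian mutation
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return best, fitness(best)

# Toy data: modules with a coupling metric; high coupling tends to be faulty
metric = [2, 3, 4, 5, 10, 11, 12, 13]
faulty = [False, False, False, False, True, True, True, True]
t, acc = ga_threshold_rule(metric, faulty)
print(round(t, 2), acc)
```

A real model would evolve rules over many Object-Oriented metrics at once, but selection, crossover and mutation operate the same way.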
Algorithm Design and Performance Evaluation of Equivalent CMOS Model
This work proposes a model of CMOS for which
an algorithm has been created, and the performance of this
proposition has then been evaluated. In this context, another commonly
used model, the ZSTT (Zero Switching Time Transient) model, is
chosen for comparison of all the vital features, and the results for the
proposed equivalent CMOS model are promising. In the end, excerpts
of the created algorithm are also included.
Dual Capacitor Model, ZSTT, CMOS, SPICE Macro-Model.
Modeling of Reusability of Object Oriented Software System
Automatic reusability appraisal is helpful in
evaluating the quality of developed or developing reusable software
components and in identifying reusable components from
existing legacy systems, which can save the cost of developing
software from scratch. But the issue of how to identify reusable
components from existing systems has remained relatively
unexplored. In this research work, structural attributes of software
components are explored using software metrics, and the quality of the
software is inferred by different Neural Network based approaches,
taking the metric values as input. The calculated reusability value
makes it possible to identify good-quality code automatically. It is found that
the reusability value determined is close to that of the manual analysis
traditionally performed by programmers or repository managers. So, the
developed system can be used to enhance the productivity and
quality of software development.
Neural Network, Software Reusability, Software Metric, Accuracy, MAE, RMSE.
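A minimal sketch of the neural approach: a tiny one-hidden-layer network mapping software-metric vectors to a reusability score in [0, 1]. The metrics chosen, the training data and the network shape are all assumptions for illustration, not the paper's actual networks.

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.5, epochs=5000, seed=0):
    """One-hidden-layer network (tanh hidden units, sigmoid output)
    trained with plain batch gradient descent on mean-squared error."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                   # hidden activations
        out = 1 / (1 + np.exp(-(h @ W2 + b2)))     # sigmoid output
        g_out = (out - y) * out * (1 - out)        # MSE gradient chain
        W2 -= lr * h.T @ g_out / len(X)
        b2 -= lr * g_out.mean(axis=0)
        g_h = (g_out @ W2.T) * (1 - h ** 2)        # backprop through tanh
        W1 -= lr * X.T @ g_h / len(X)
        b1 -= lr * g_h.mean(axis=0)
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))

# Toy metrics [coupling, cohesion]: low coupling + high cohesion => reusable
X = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])
y = np.array([[0.9], [0.8], [0.1], [0.2]])
params = train_mlp(X, y)
mae = np.abs(predict(params, X) - y).mean()
print(round(float(mae), 3))
```

The MAE and RMSE in the keyword list are exactly the measures one would compute between such predicted scores and the manual reusability assessments.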
Application of LSB Based Steganographic Technique for 8-bit Color Images
Steganography is the process of hiding one file inside another such that others can neither identify the meaning of the embedded object nor even recognize its existence. Current trends favor using digital image files as the cover file to hide another digital file that contains the secret message or information. One of the most common methods of implementation is Least Significant Bit insertion, in which the least significant bit of every byte is altered to form the bit-string representing the embedded file. Altering the LSB causes only minor changes in color, and thus is usually not noticeable to the human eye. While this technique works well for 24-bit color image files, steganography has not been as successful with 8-bit color image files, due to limitations in color variations and the use of a colormap. This paper presents the results of research investigating the combination of image compression and steganography. The technique developed starts with a 24-bit color bitmap file, then compresses the file by organizing and optimizing an 8-bit colormap. After compression, a text message is hidden in the final, compressed image. Results indicate that the technique has the potential to be useful in the steganographic world.
Compression, Colormap, Encryption, Steganography, LSB Insertion.
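The LSB insertion described in the abstract can be sketched on a flat list of 8-bit pixel values. The 16-bit length header is an assumed convention for the demo, and the paper's colormap-compression step is not reproduced.

```python
def embed_lsb(pixels, message):
    """Hide a text message in the least significant bits of a flat list
    of 8-bit pixel values, one message bit per pixel byte. A 16-bit
    length header is stored first so the extractor knows where to stop."""
    data = message.encode("ascii")
    bits = [(len(data) >> i) & 1 for i in range(15, -1, -1)]  # length header
    for byte in data:
        bits += [(byte >> i) & 1 for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit              # overwrite only the LSB
    return out

def extract_lsb(pixels):
    """Recover the hidden message by reading the LSBs back."""
    def read(n, start):
        v = 0
        for i in range(start, start + n):
            v = (v << 1) | (pixels[i] & 1)
        return v
    length = read(16, 0)
    data = bytes(read(8, 16 + 8 * k) for k in range(length))
    return data.decode("ascii")

cover = list(range(256)) * 4          # toy 1024-byte "image"
stego = embed_lsb(cover, "hidden")
print(extract_lsb(stego))             # prints: hidden
max_change = max(abs(a - b) for a, b in zip(cover, stego))
print(max_change)                     # prints: 1 (each pixel shifts by at most 1)
```

The one-unit ceiling on per-pixel change is why LSB insertion is visually imperceptible in 24-bit images, and also why an 8-bit colormap (where adjacent indices may map to very different colors) needs the reorganization the paper describes.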