A Recommendation to Oncologists for Cancer Treatment by Immunotherapy: Quantitative and Qualitative Analysis
Today, treating cancer in a relatively short period with minimal adverse effects is a major concern for oncologists. In this paper, based on a recently used mathematical model of cancer, a guideline is proposed for the amount and duration of drug doses in cancer treatment by immunotherapy. Dynamically speaking, the ordinary differential equation (ODE) model of cancer has several equilibrium points; one of them, called the no-tumor equilibrium point, is unstable. In this paper, based on the number of tumor cells, an intelligent soft-computing controller (a combination of a fuzzy logic controller and a genetic algorithm) decides on the amount and duration of drug doses so as to eliminate the tumor cells and stabilize the unstable point in a relatively short time. Two different immunotherapy approaches, active and adoptive, have been studied and presented. It is shown that the rate of decay of tumor cells is faster, and the drug doses are lower, in comparison with the results of other studies. It is also shown that the treatment period and the drug doses in adoptive immunotherapy are significantly lower than in the active method. A recommendation to oncologists is also presented.
Optimal Distributed Generator Sizing and Placement by Analytical Method and PSO Algorithm Considering Optimal Reactive Power Dispatch
In this paper, an approach combining an analytical method for distributed generator (DG) sizing with a meta-heuristic search for the optimal location of DGs is presented. The optimal size of the DG on each bus is estimated by the loss sensitivity factor method, while the optimal sites are determined by Particle Swarm Optimization (PSO) based optimal reactive power dispatch for minimizing active power loss. The proposed approach has been validated on the IEEE 30-bus test system. The satisfaction of operating constraints and the improvement of voltage profiles have also been observed. The obtained results show that the allocation of DGs yields a significant loss reduction with good voltage profiles, and that the combined approach is competent in keeping the system voltages within acceptable limits.
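The PSO stage above searches candidate DG sites for minimum active power loss. As an illustration only, the following is a minimal particle swarm loop in Python; the sphere objective, bounds, and coefficient values are stand-ins for the paper's reactive-power-dispatch objective, which is not reproduced here.

```python
import random

def pso(loss, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0)):
    """Minimise `loss` over `dim` continuous variables with a basic PSO."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # personal best positions
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]      # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive + social velocity update
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            v = loss(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

random.seed(1)
# toy stand-in for the power-loss objective: the sphere function
best, val = pso(lambda x: sum(xi * xi for xi in x), dim=3)
```

In the paper's setting the position vector would encode the candidate bus locations and the loss function would be evaluated through a power-flow computation rather than a closed-form expression.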
Optimal Placement and Sizing of Distributed Generation in Microgrid for Power Loss Reduction and Voltage Profile Improvement
Environmental issues and the ever-increasing demand for electrical energy make it necessary to have distributed generation (DG) resources in the power system. In this research, the allocation and sizing of DGs are used to reduce losses and improve the voltage profile in a microgrid. A Genetic Algorithm (GA), drawn from the array of artificial intelligence methods, is proposed for solving the problem. The algorithm is implemented on the IEEE 33-bus network. The study is presented in two scenarios: first, the siting and sizing of DGs is carried out to reduce losses and improve the voltage profile. However, decisions made under a single-level load assumption are not valid for all load levels. Therefore, in this study, load modelling is performed and the results are presented for multi-level load states.
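The GA machinery used for siting and sizing can be sketched generically. The snippet below is an illustrative real-coded GA with tournament selection, one-point crossover and uniform mutation; the quadratic toy objective stands in for the paper's loss-and-voltage objective, which requires a load-flow solver and is not reproduced here.

```python
import random

def ga_minimise(loss, n_genes, pop_size=30, gens=60, pc=0.8, pm=0.1,
                lo=0.0, hi=1.0):
    """Minimal real-coded GA: tournament selection, one-point crossover,
    uniform mutation, elitism of the single best individual."""
    pop = [[random.uniform(lo, hi) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=loss)
        nxt = [pop[0][:]]                              # elitism
        while len(nxt) < pop_size:
            # tournament selection of two parents (tournament size 3)
            p1 = min(random.sample(pop, 3), key=loss)
            p2 = min(random.sample(pop, 3), key=loss)
            c1, c2 = p1[:], p2[:]
            if random.random() < pc:                   # one-point crossover
                cut = random.randrange(1, n_genes)
                c1[cut:], c2[cut:] = p2[cut:], p1[cut:]
            for c in (c1, c2):                         # uniform mutation
                for g in range(n_genes):
                    if random.random() < pm:
                        c[g] = random.uniform(lo, hi)
                nxt.append(c)
        pop = nxt[:pop_size]
    return min(pop, key=loss)

random.seed(3)
# toy objective standing in for the loss/voltage-profile fitness
best = ga_minimise(lambda x: sum((xi - 0.5) ** 2 for xi in x), n_genes=3)
```

For the DG problem, each chromosome would encode a (bus index, DG size) pair per unit, and the loss function would run a power-flow on the IEEE 33-bus network.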
Arabic Character Recognition Using Regression Curves with the Expectation Maximization Algorithm
In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping, uniformly distributed landmarks. The underlying models use second-order polynomials to model the shapes within a training set. To estimate the regression models, we need to extract the coefficients that describe the variations of each shape class; hence, a least squares method is used to estimate such models. We then train these coefficients using the Expectation Maximization (EM) algorithm. Recognition is carried out by finding the least landmark-displacement error with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate our approach.
A Genetic Algorithm Approach Considering Zero Injection Bus Constraint Modeling for Optimal Phasor Measurement Unit Placement
This paper presents optimal Phasor Measurement Unit (PMU) placement in a network using a genetic algorithm approach, since placing PMUs at every bus is infeasible and incurs a high installation cost. The paper proposes optimal PMU allocation considering observability and redundancy using a Genetic Algorithm (GA). The nonlinear constraints of the buses are modeled to give accurate results. Constraints associated with Zero Injection (ZI) buses and radial buses are modeled to optimize the number of PMU locations. The GA is modeled with ZI bus constraints to minimize the number of locations without losing complete observability. The redundancy of every bus in the network is computed to show the optimum redundancy of the complete system. The performance of the method is measured by the Bus Observability Index (BOI) and the Complete System Observability Performance Index (CSOPI). MATLAB simulations are carried out on the IEEE 14-, 30- and 57-bus systems and compared with other methods in the literature to show the effectiveness of the proposed approach.
The Whale Optimization Algorithm and Its Implementation in MATLAB
Optimization is an important tool in decision making and in analysing physical systems. In mathematical terms, an optimization problem is the problem of finding the best solution from the set of all feasible solutions. This paper discusses the Whale Optimization Algorithm (WOA) and its applications in different fields. The algorithm is tested using MATLAB because of its unique and powerful features. The benchmark functions used with the WOA are grouped as unimodal (F1-F7), multimodal (F8-F13), and fixed-dimension multimodal (F14-F23). Out of these benchmark functions, we show experimental results for F7, F11, and F19 for different numbers of iterations. The search space and objective space for each selected function are drawn, and finally the best solution and the best optimal value of the objective function found by the WOA are presented. The results demonstrate that the WOA performs better than state-of-the-art meta-heuristic and conventional algorithms.
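The WOA's three update rules (shrinking encirclement of the best solution, exploration around a random whale, and the logarithmic-spiral bubble-net move) can be sketched compactly. The following Python sketch minimises a simple quadratic rather than the MATLAB benchmark functions of the paper; swarm size, bounds and the spiral constant b = 1 are illustrative choices.

```python
import math
import random

def woa_minimise(obj, dim, n_whales=20, iters=150, lb=-10.0, ub=10.0):
    """Bare-bones Whale Optimization Algorithm for minimisation."""
    X = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n_whales)]
    best = min(X, key=obj)[:]
    for t in range(iters):
        a = 2.0 * (1 - t / iters)              # a decreases linearly from 2 to 0
        for i in range(n_whales):
            A = 2 * a * random.random() - a    # exploration/exploitation coefficient
            C = 2 * random.random()
            if random.random() < 0.5:
                # |A| < 1: shrink toward the best whale; otherwise explore
                # around a randomly chosen whale
                ref = best if abs(A) < 1 else X[random.randrange(n_whales)]
                X[i] = [min(max(ref[d] - A * abs(C * ref[d] - X[i][d]), lb), ub)
                        for d in range(dim)]
            else:                              # spiral bubble-net attack (b = 1)
                l = random.uniform(-1, 1)
                X[i] = [min(max(abs(best[d] - X[i][d]) * math.exp(l)
                                * math.cos(2 * math.pi * l) + best[d], lb), ub)
                        for d in range(dim)]
            if obj(X[i]) < obj(best):
                best = X[i][:]
    return best, obj(best)

random.seed(0)
best, val = woa_minimise(lambda x: sum(xi * xi for xi in x), dim=2)
```

On a unimodal function such as this one, the shrinking-encirclement branch dominates late in the run, which is why the decreasing coefficient a controls the exploration-to-exploitation transition.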
Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data
In recent years, real-time spatial applications, such as location-aware services and traffic monitoring, have become more and more important. Such applications run in dynamic environments where both data and queries are continuously moving, and they generate a tremendous amount of real-time spatial data every day. The growth of data volume seems to outpace the advance of our computing infrastructure. In real-time spatial Big Data, for instance, users expect to receive the results of each query within a short time, regardless of the load on the system. But with a huge amount of real-time spatial data being generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase system performance and simplify data management, but they remain insufficient for real-time spatial Big Data: they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution builds on the Matching algorithm for traditional vertical partitioning: we first find the optimal attribute sequence using the Matching algorithm, and then propose a new cost model for database partitioning that keeps the amount of data in each partition balanced and provides parallel-execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and deals with stream data. It improves query-execution performance by maximizing the degree of parallel execution, which in turn improves QoS (Quality of Service) in real-time spatial Big Data, especially with huge volumes of stream data. The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.
A Comparative Study of GTC and PSP Algorithms for Mining Sequential Patterns Embedded in Database with Time Constraints
This paper considers the problem of mining sequential patterns embedded in a database while handling the time constraints defined in the GSP algorithm (a level-wise algorithm). We compare two earlier approaches, GTC and PSP, both of which retain the general principles of GSP. Furthermore, this paper discusses the PG-hybrid algorithm, which combines PSP and GTC. The results show that PSP and GTC are more efficient than GSP; moreover, the GTC algorithm performs better than PSP. The PG-hybrid algorithm uses the PSP algorithm for the first two passes over the database and the GTC approach for the subsequent scans. Experiments show that the hybrid approach is very efficient for short, frequent sequences.
A Fuzzy-Rough Feature Selection Based on Binary Shuffled Frog Leaping Algorithm
Feature selection and attribute reduction are crucial problems and widely used techniques in machine learning, data mining and pattern recognition, aimed at overcoming the well-known Curse of Dimensionality. This paper presents a feature selection method that efficiently carries out attribute reduction, thereby selecting the most informative features of a dataset. It consists of two components: 1) a measure for feature subset evaluation, and 2) a search strategy. For the evaluation measure, we employ the fuzzy-rough dependency degree (FRDD) of lower approximation-based fuzzy-rough feature selection (L-FRFS), due to its effectiveness in feature selection. As the search strategy, a modified version of the binary shuffled frog leaping algorithm (B-SFLA) is proposed. The proposed feature selection method is obtained by hybridizing the B-SFLA with the FRDD. Nine classifiers are employed to compare the proposed approach with several existing methods over twenty-two datasets from the UCI repository, including nine high-dimensional and large ones. The experimental results demonstrate that the B-SFLA approach significantly outperforms other meta-heuristic methods in terms of the number of selected features and the classification accuracy.
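The search side of the method, a binary shuffled frog leaping loop over bit-vector feature masks, can be sketched as follows. This is an illustrative toy: the fitness used here simply counts selected bits, standing in for the fuzzy-rough dependency degree, and the sigmoid transfer step is one common way to binarise SFLA, not necessarily the paper's exact modification.

```python
import math
import random

def b_sfla(fitness, n_bits, n_frogs=20, n_memeplexes=4, iters=30):
    """Binary shuffled frog leaping sketch: frogs are bit-vectors
    (feature masks); higher fitness is better."""
    frogs = [[random.randint(0, 1) for _ in range(n_bits)]
             for _ in range(n_frogs)]
    for _ in range(iters):
        frogs.sort(key=fitness, reverse=True)      # best frogs first
        for m in range(n_memeplexes):
            plex = frogs[m::n_memeplexes]          # shuffle frogs into memeplexes
            best, worst = plex[0], plex[-1]
            # move the worst frog toward the memeplex best, bit by bit,
            # using a sigmoid of the binary step as the flip probability
            new = []
            for b_best, b_worst in zip(best, worst):
                step = random.random() * (b_best - b_worst)
                prob = 1.0 / (1.0 + math.exp(-step))   # sigmoid transfer
                new.append(1 if random.random() < prob else 0)
            if fitness(new) > fitness(worst):
                worst[:] = new                     # accept the improved frog
            else:
                # otherwise replace the worst frog with a random one
                worst[:] = [random.randint(0, 1) for _ in range(n_bits)]
    return max(frogs, key=fitness)

random.seed(2)
# toy fitness: count of selected bits, standing in for the FRDD measure
best = b_sfla(lambda bits: sum(bits), n_bits=10)
```

In the actual method, `fitness` would evaluate the fuzzy-rough dependency degree of the feature subset encoded by the mask, typically with a penalty on subset size.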
Optimisation of Structural Design by Integrating Genetic Algorithms in the Building Information Modelling Environment
Structural design and analysis is an important and time-consuming process, particularly at the conceptual design stage. Decisions made at this stage can have an enormous effect on the entire project, as it becomes ever costlier and more difficult to alter choices made early in the construction process. Hence, optimisation of the early stages of structural design can provide important efficiencies in terms of cost and time. This paper suggests a structural design optimisation (SDO) framework in which Genetic Algorithms (GAs) are used to semi-automate the production and optimisation of early structural design alternatives. This framework has the potential to foster conceptual structural design innovation in Architecture, Engineering and Construction (AEC) projects, and it improves collaboration between the architectural and structural design stages. It will be shown that the SDO framework makes this achievable by generating the structural model from data extracted from the architectural model. At the moment, the proposed SDO framework is being validated through an online questionnaire distributed among structural engineers in the UK.
The Creative Unfolding of “Reduced Descriptive Structures” in Musical Cognition: Technical and Theoretical Insights Based on the OpenMusic and PWGL Long-Term Feedback
We describe here the theoretical and philosophical understanding gained from the long-term use and development of algorithmic computer-based tools applied to music composition. The findings of our research lead us to interrogate certain processes and systems of communication engaged in the discovery of specific cultural artworks, namely artistic creation in the sono-musical domain. Our hypothesis is that patterns of auditory learning cannot be understood solely in terms of social transmission; they should also be questioned in the way they rely on various ranges of acoustic stimuli and modes of consciousness, and in how the different types of memory engaged in the percept-action expressive systems of our cultural communities also rely on the shadowy conscious entities we have named "Reduced Descriptive Structures".
Genetic Algorithm Optimization of the Economical, Ecological and Self-Consumption Impact of the Energy Production of a Single Building
This paper presents an optimization method based on a genetic algorithm for energy management inside buildings, developed in the frame of the Smart Living Lab (SLL) project in Fribourg (Switzerland). The algorithm optimizes the interaction between renewable energy production, storage systems and energy consumers. In comparison with standard algorithms, the innovative aspect of this project is the extension of smart regulation over three simultaneous criteria: energy self-consumption, the reduction of greenhouse gas emissions, and operating costs. The genetic algorithm approach was chosen due to the large number of optimization variables and the non-linearity of the optimization function. The optimization process also includes real-time data from the building as well as weather forecasts and users' habits. This information is used by a physical model of the building's energy resources to predict future energy production and needs, to select the best energy strategy, and to combine production and storage of energy so as to meet the demand for electrical and thermal energy. The principle of operation of the algorithm and a typical example of its output are presented.
Analysis of Cooperative Learning Behavior Based on the Data of Students' Movement
The purpose of this paper is to analyze cooperative learning behavior patterns based on data about students' movement. The study first reviews cooperative learning theory and the status of research on it, and briefly introduces the k-means clustering algorithm. It then uses the clustering algorithm and mathematical statistics to analyze the activity rhythms of individual students and of groups in different functional areas, according to movement data provided by 10 first-year graduate students. It also focuses on the analysis of students' behavior in the learning area and explores the laws of cooperative learning behavior. The results show that the cooperative learning behavior analysis method based on movement data proposed in this paper is feasible: from the data analysis, the behavioral characteristics of students and their cooperative learning behavior patterns can be identified.
Real Time Lidar and Radar High-Level Fusion for Obstacle Detection and Tracking with Evaluation on a Ground Truth
Both Lidars and Radars are sensors for obstacle detection. While Lidars are very accurate on obstacle positions and less accurate on their velocities, Radars are more precise on obstacle velocities and less precise on their positions. Sensor fusion between Lidar and Radar aims at improving obstacle detection by using the advantages of the two sensors. The present paper proposes a real-time Lidar/Radar data fusion algorithm for obstacle detection and tracking based on the global nearest neighbour standard filter (GNN). This algorithm is implemented and embedded in an automotive vehicle as a component generated by real-time multisensor software. The benefits of data fusion compared with the use of a single sensor are illustrated through several tracking scenarios (on a highway and on a bend), using real-time kinematic sensors mounted on the ego and tracked vehicles as ground truth.
Quantum Markov Modeling for Healthcare
A Markov model defines a system of states, the feasible transition paths between those states, and the parameters of those transitions. The paths and parameters can be a representative way to address healthcare issues, such as identifying the most likely sequence of patient health states given a sequence of observations. Furthermore, estimating the length of stay (LoS) of hospitalized patients is one of the challenges that Markov models allow us to solve. However, finding the maximum probability of any path that reaches a given state at time t can have a high computational cost. A quantum approach allows us to take advantage of quantum computation, since the calculated probabilities can exist in several states simultaneously, potentially outperforming classical computing thanks to the superposition of states when handling large amounts of data. Quantum physics-based architectures and machine learning techniques are therefore appropriate to address the complexity of such models.
An Improved K-Means Algorithm for Gene Expression Data Clustering
Clustering, as a data mining technique, is a subject of active research and assists in biological pattern recognition and in the extraction of new knowledge from raw data. Clustering means partitioning an unlabeled dataset into groups of similar objects. Each group, called a cluster, consists of objects that are similar to one another and dissimilar to objects of other groups. Several clustering methods are based on partitional clustering; this category attempts to decompose the dataset directly into a set of disjoint clusters, with the number of clusters chosen to optimize a given criterion function. The criterion function may emphasize a local or a global structure of the data, and its optimization is an iterative relocation procedure. The K-Means algorithm is one of the most widely used partitional clustering techniques. Since K-Means is extremely sensitive to the initial choice of centers, and a poor choice may lead to a local optimum far inferior to the global optimum, we propose a strategy for initializing the K-Means centers. The improved K-Means algorithm is compared with the original K-Means, and the results show a significant improvement in efficiency.
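The abstract does not spell out its initialization strategy, but the best-known remedy for K-Means' sensitivity to initial centers is the k-means++ seeding rule, shown here as an illustration: each new center is drawn with probability proportional to its squared distance from the centers already chosen, which spreads the seeds across the data.

```python
import random

def kmeans_pp_init(points, k):
    """k-means++ seeding: spread initial centers out in proportion to the
    squared distance from the centers chosen so far."""
    centers = [random.choice(points)]
    while len(centers) < k:
        # squared distance from each point to its nearest chosen center
        d2 = [min(sum((p[i] - c[i]) ** 2 for i in range(len(p)))
                  for c in centers)
              for p in points]
        # roulette-wheel draw weighted by d2
        r = random.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers

random.seed(4)
# two well-separated 2D blobs: a good seeding should pick one center per blob
blob_a = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)]
blob_b = [(random.gauss(100, 1), random.gauss(100, 1)) for _ in range(50)]
centers = kmeans_pp_init(blob_a + blob_b, k=2)
```

After seeding, the standard K-Means relocation iterations proceed as usual; only the starting centers differ from a uniform random choice.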
Meta-Learning for Hierarchical Classification and Applications in Bioinformatics
Hierarchical classification is a special type of classification task where the class labels are organised into a hierarchy, with more generic class labels being ancestors of more specific ones. Meta-learning for classification-algorithm recommendation consists of recommending to the user a classification algorithm, from a pool of candidates, for a given dataset, based on the past performance of the candidate algorithms on other datasets. Meta-learning is normally used in conventional, non-hierarchical classification. By contrast, this paper proposes a meta-learning approach for the more challenging task of hierarchical classification, and evaluates it on a large number of bioinformatics datasets. Hierarchical classification is especially relevant for bioinformatics, as protein and gene functions tend to be organised into a hierarchy of class labels. This work proposes a meta-learning approach for recommending the best hierarchical classification algorithm for a hierarchical classification dataset. Its contributions are: 1) an algorithm for splitting hierarchical datasets into new datasets to increase the number of meta-instances, 2) meta-features for hierarchical classification, and 3) the interpretation of decision-tree meta-models for hierarchical classification algorithm recommendation.
An Observer-Based Direct Adaptive Fuzzy Sliding Control with Adjustable Membership Functions
In this paper, an observer-based direct adaptive fuzzy sliding mode (OAFSM) algorithm is proposed. In the proposed algorithm, the zero-input dynamics of the plant may be unknown. The input connection matrix is used to combine the sliding surfaces of the individual subsystems, and an adaptive fuzzy algorithm is used to estimate an equivalent sliding mode control input directly. The fuzzy membership functions, which were determined by time-consuming trial-and-error processes in previous works, are adjusted by adaptive algorithms. Another advantage of the proposed controller is that the input gain matrix is not restricted to be diagonal, i.e. the plant may be over- or under-actuated, provided that controllability and observability are preserved. An observer is constructed to estimate the state tracking error directly, and the nonlinear part of the observer is constructed by an adaptive fuzzy algorithm. The main advantage of the proposed observer is that the measured outputs are not limited to the first entry of a canonical-form state vector. The closed-loop stability of the proposed method is proved using a Lyapunov-based approach. The proposed method is applied numerically to a multi-link robot manipulator, which verifies the performance of the closed-loop control. Moreover, the performance of the proposed algorithm is compared with that of some conventional control algorithms.
A Speeded up Robust Scale-Invariant Feature Transform Currency Recognition Algorithm
Currencies around the world look very different from each other; for instance, the size, color, and pattern of the paper differ. With the development of modern banking services, automatic methods for paper currency recognition have become important in many applications, such as vending machines. One phase of a currency recognition architecture is feature detection and description. Many algorithms are used for this phase, but they still have some disadvantages. This paper proposes a feature detection algorithm that merges the advantages of the current SIFT and SURF algorithms, which we call the Speeded up Robust Scale-Invariant Feature Transform (SR-SIFT) algorithm. The proposed SR-SIFT algorithm overcomes the problems of both SIFT and SURF, aiming to speed up SIFT feature detection while keeping it robust. Simulation results demonstrate that the proposed SR-SIFT algorithm decreases the average response time, especially for small and minimal numbers of best key points, and improves the distribution of the best key points over the surface of the currency. Furthermore, the proposed algorithm achieves a more accurate distribution of true best points inside the currency edge than the other two algorithms.
Study on Sharp V-Notch Problem under Dynamic Loading Condition Using Symplectic Analytical Singular Element
A V-notch problem under dynamic loading is considered in this paper. In the time domain, the precise time-domain expanding algorithm is employed, in which a self-adaptive technique is used to improve computing accuracy. By expanding the variables in each time interval, recursive finite element formulas are derived. In the space domain, a Symplectic Analytical Singular Element (SASE) for the V-notch problem is constructed to address the stress singularity at the notch tip. Combined with conventional finite elements, the proposed SASE can be used to solve the dynamic stress intensity factors (DSIFs) in a simple way. Numerical results show that the proposed SASE for V-notch problems subjected to dynamic loading is effective and efficient.
Model of Transhipment and Routing Applied to the Cargo Sector in Small and Medium Enterprises of Bogotá, Colombia
This paper presents the design of a model for planning the distribution logistics operation. The significance of this work lies in its applicability to the analysis of small and medium enterprises (SMEs) of dry freight in Bogotá. The implementation consists of two stages: in the first, optimal planning is achieved through a hybrid model developed with mixed integer programming, which treats the transhipment operation as a combined load-allocation model in the manner of a classic transshipment model; the second stage is the specific routing of that operation through the Clarke and Wright savings heuristic. As a result, an integral model is obtained to carry out, step by step, the planning of dry-freight distribution for SMEs in Bogotá. In this manner, optimal assignments are established by utilizing transshipment centers, with the purpose of determining the specific routing based on the shortest distance traveled.
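The routing stage relies on the classic Clarke and Wright savings heuristic: start with one round trip per customer, then repeatedly merge the route ends with the largest saving s(i,j) = d(0,i) + d(0,j) - d(i,j), subject to vehicle capacity. The sketch below uses Euclidean distances, a single depot and unit demands as illustrative assumptions.

```python
def clarke_wright(depot, customers, demand, capacity):
    """Clarke-Wright savings heuristic for a single-depot capacitated
    routing problem; returns routes as lists of customer indices."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    n = len(customers)
    routes = [[i] for i in range(n)]            # one route per customer
    route_of = list(range(n))                   # route id of each customer
    load = [demand[i] for i in range(n)]
    savings = sorted(
        ((dist(depot, customers[i]) + dist(depot, customers[j])
          - dist(customers[i], customers[j]), i, j)
         for i in range(n) for j in range(i + 1, n)),
        reverse=True)                           # largest savings first
    for s, i, j in savings:
        ri, rj = route_of[i], route_of[j]
        if ri == rj or load[ri] + load[rj] > capacity:
            continue
        # merge only if i and j sit at the joinable ends of their routes
        if routes[ri][-1] == i and routes[rj][0] == j:
            merged = routes[ri] + routes[rj]
        elif routes[rj][-1] == j and routes[ri][0] == i:
            merged = routes[rj] + routes[ri]
        else:
            continue
        routes[ri] = merged
        load[ri] += load[rj]
        routes[rj] = []
        for c in merged:
            route_of[c] = ri
    return [r for r in routes if r]

depot = (0.0, 0.0)
customers = [(1, 0), (2, 0), (0, 1), (0, 2)]
routes = clarke_wright(depot, customers, demand=[1, 1, 1, 1], capacity=2)
```

On this toy instance the heuristic merges the two customers on each axis into a single route, since those pairs yield the largest savings without exceeding capacity.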
Lane Detection Using Labeling Based RANSAC Algorithm
In this paper, we propose a labeling-based RANSAC algorithm for lane detection. Advanced driver assistance systems (ADAS) have been widely researched to avoid unexpected accidents, and lane detection is a necessary component for lane keeping and lane departure prevention. The proposed vision-based lane detection method applies Canny edge detection, inverse perspective mapping (IPM), the K-means algorithm, mathematical morphology operations, and 8-connected-component labeling. Next, random samples are selected from each labeled region for RANSAC; this sampling method selects points belonging to a lane with high probability. Finally, the lane parameters of straight-line or curve equations are estimated. Through simulations on video recorded in the daytime and at night, we show that the proposed method performs better than the existing RANSAC algorithm in various environments.
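The base estimator the labeling step improves on is vanilla RANSAC: repeatedly fit a model to a minimal random sample and keep the hypothesis with the most inliers. A minimal sketch for line fitting (the straight-lane case) is shown below; the iteration count and inlier tolerance are arbitrary illustrative values.

```python
import random

def ransac_line(points, iters=200, inlier_tol=1.0):
    """Fit y = m*x + b with vanilla RANSAC: repeatedly fit a line to two
    random points and keep the model with the most inliers."""
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = random.sample(points, 2)
        if x1 == x2:
            continue                      # skip degenerate vertical samples
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [(x, y) for x, y in points
                   if abs(y - (m * x + b)) < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers

random.seed(5)
# ten points on y = 2x + 1 plus three gross outliers
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (5, -20), (7, 60)]
model, inliers = ransac_line(pts)
```

The paper's variant restricts the random samples to pixels within a single labeled lane region, which raises the probability that each minimal sample is outlier-free and thus reduces the iterations needed.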
An Improved Method to Compute Sparse Graphs for Traveling Salesman Problem
The Traveling Salesman Problem (TSP) is NP-hard in combinatorial optimization. Research shows that algorithms for the TSP on sparse graphs have shorter computation times than those working on complete graphs. We present an improved iterative algorithm to compute sparse graphs for the TSP from frequency graphs computed with frequency quadrilaterals. The iterative algorithm is enhanced by adjusting two of its parameters. The computation time of the algorithm is O(C·Nmax·n²), where C is the number of iterations, Nmax is the maximum number of frequency quadrilaterals containing each edge, and n is the size of the TSP instance. The experimental results show that the computed sparse graphs generally have fewer than 5n edges for most of these Euclidean instances. Moreover, the maximum and minimum vertex degrees in the sparse graphs do not differ much. Thus, the computation time of methods solving the TSP on these sparse graphs will be greatly reduced.
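The frequency-quadrilateral construction itself is not reproduced here, but the underlying idea of replacing the complete graph by a sparse candidate edge set can be illustrated with the simplest classical sparsifier for Euclidean instances, a k-nearest-neighbour graph; with k = 5 it likewise yields at most 5n edges.

```python
def knn_candidate_edges(cities, k=5):
    """Sparse candidate edge set for a Euclidean TSP instance: keep, for
    each city, the edges to its k nearest neighbours."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    edges = set()
    for i, ci in enumerate(cities):
        nearest = sorted((j for j in range(len(cities)) if j != i),
                         key=lambda j: d2(ci, cities[j]))[:k]
        for j in nearest:
            # store undirected edges with a canonical orientation
            edges.add((min(i, j), max(i, j)))
    return edges

cities = [(x, y) for x in range(4) for y in range(4)]   # a 4x4 grid
edges = knn_candidate_edges(cities, k=3)
```

Since every city contributes k incident edges and shared edges are stored once, the result has between kn/2 and kn edges, and every vertex keeps degree at least k, mirroring the balanced-degree property the abstract reports for its frequency-based sparse graphs.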
Relay Node Placement for Connectivity Restoration in Wireless Sensor Networks Using Genetic Algorithms
Wireless Sensor Networks (WSNs) consist of a set of sensor nodes with limited capability. WSNs may suffer multiple node failures when exposed to harsh environments such as military zones or disaster locations, losing connectivity by getting partitioned into disjoint segments. Relay nodes (RNs) are introduced to restore connectivity. They cost more than sensors, as they benefit from mobility, more power and a longer transmission range, so a minimum number of them should be used. This paper addresses the problem of RN placement in a network with multiple disjoint segments by developing a genetic algorithm (GA). The problem is recast as the Steiner tree problem (known to be NP-hard), with the aim of finding the minimum number of Steiner points where RNs are to be placed to restore connectivity. An upper bound on the number of RNs is first computed to set the length of the initial chromosomes. The GA then iteratively reduces the number of RNs while simultaneously determining their locations. Experimental results indicate that the proposed GA is capable of establishing network connectivity using a reasonable number of RNs compared with the best existing work.
Evaluation of the MCFLIRT Correction Algorithm in Head Motion from Resting State fMRI Data
In the last few years, resting-state functional MRI (rs-fMRI) has been widely used to investigate the architecture of brain networks through the Blood Oxygenation Level Dependent response. This technique represents an interesting, robust and reliable approach for comparing pathologic and healthy subjects in order to investigate the evolution of neurodegenerative diseases. On the other hand, the processing of rs-fMRI data has proved to be very prone to noise from confounding factors, especially head motion. Head motion has long been known to be a source of artefacts in task-based functional MRI studies, but it has become a particularly challenging problem in recent rs-fMRI studies. The aim of this work was to evaluate, in MS patients, a well-known motion correction algorithm from the FMRIB Software Library, MCFLIRT, which can be applied to minimize head motion distortions, allowing rs-fMRI results to be interpreted correctly.
A Hybrid Algorithm for Collaborative Transportation Planning among Carriers
This paper concentrates on collaborative transportation planning (CTP) among multiple carriers with pickup and delivery requests and time windows. The problem is a vehicle routing problem with constraints from standard vehicle routing problems plus new constraints from a real-world application. Each carrier has a finite number of vehicles, and each request is a pickup-and-delivery request with a time window. Moreover, each carrier has reserved requests, which must be served by itself, whereas its exchangeable requests can be outsourced to and served by other carriers. This collaboration among carriers can help them reduce total transportation costs. A mixed integer programming model is proposed for the problem. To solve the model, a hybrid algorithm combining a Genetic Algorithm and Simulated Annealing (GASA) is proposed, taking advantage of both techniques at once. After tuning the parameters of the algorithm with the Taguchi method, experiments are conducted and results are provided for the hybrid algorithm. The results are compared with those obtained by a commercial solver; the comparison indicates that GASA significantly outperforms it.
A Location-Allocation-Routing Model for a Home Health Care Supply Chain Problem
With increasing life expectancy in developed countries, the role of home care services is highlighted by both academia and industrial contributors in Home Health Care Supply Chain (HHCSC) companies. The main decisions in such supply chain systems are the location of pharmacies, the allocation of patients to these pharmacies, and the routing and scheduling of nurses visiting their patients. In this study, for the first time, an integrated model is proposed that covers all the preliminary and necessary decisions in these companies, namely a location-allocation-routing model. This model is NP-hard; therefore, an Imperialist Competitive Algorithm (ICA) is utilized to solve it, especially for large instances. Results confirm the efficiency of the developed model for HHCSC companies as well as the performance of the employed ICA.
A Numerical Description of a Fibre Reinforced Concrete Using a Genetic Algorithm
This work reports on an approach for the automatic adaptation of concrete formulations based on genetic algorithms (GAs) to optimize a wide range of fit-functions. To achieve this goal, a method was developed that provides a numerical description of a fibre reinforced concrete (FRC) mixture with respect to the production technology and the property spectrum of the concrete. In a first step, an FRC mixture with seven fixed components was characterized by varying the amounts of the components. For that purpose, ten concrete mixtures were prepared and tested; the testing procedure comprised flow spread, compressive and bending tensile strength. The analysis and approximation of the measured data was carried out by GAs. The aim was to obtain a closed mathematical expression that best describes the given seven-point cloud of FRC data, by applying a Gene Expression Programming with Free Coefficients (GEP-FC) strategy. The seven-parameter FRC-mixture model generated by this method correlated well with the measured data. The developed procedure can be used to find closed mathematical expressions for concrete mixtures based on measured data.
Porul: Option Generation and Selection and Scoring Algorithms for a Tamil Flash Card Game
Games can be excellent tools for teaching a language. There are a few e-learning games in Indian languages, such as word scrabble, crosswords and quizzes, which were developed mainly for educational purposes. This paper proposes a Tamil word game called "Porul", which focuses on education as well as on players' thinking and decision-making skills. Porul is a multiple-choice quiz game in which players attempt to answer questions correctly from multiple options generated using a unique algorithm called the Option Selection algorithm, which explores the semantics of the question word along various dimensions, namely synonyms, rhymes, and the Universal Networking Language semantic category. This kind of semantic exploration not only increases the complexity of the game but also makes it more interesting. The paper also proposes a Scoring Algorithm that allots a score based on the popularity of the question word. The proposed game has been tested using 20,000 Tamil words.
Anisotropic Total Fractional Order Variation Model in Seismic Data Denoising
In seismic data processing, the attenuation of random noise is the basic step in improving data quality for the further application of seismic data in exploration and development in the gas and oil industry. The signal-to-noise ratio also strongly determines the quality of seismic data; it affects the reliability as well as the accuracy of the seismic signal during interpretation for different purposes in different companies. To use seismic data for further application and interpretation, we need to improve the signal-to-noise ratio while attenuating random noise effectively. To do so while preserving important features and information about the seismic signal, we introduce an anisotropic total fractional-order denoising algorithm. The anisotropic total fractional-order variation model, defined in a fractional-order bounded variation space, is proposed as a regularization for seismic denoising. The split Bregman algorithm is employed to solve the minimization problem of the anisotropic total fractional-order variation model, and the corresponding denoising algorithm for the proposed method is derived. We test the effectiveness of the proposed method on synthetic and real seismic data sets, and the denoised result is compared with F-X deconvolution and non-local means denoising.