Open Science Research Excellence

Open Science Index

Commenced in January 2007 | Frequency: Monthly | Edition: International | Paper Count: 255

255
10010656
From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks
Abstract:

Seizure is the main factor that affects the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made by using continuous Electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is made manually by epileptologists, and this process is usually very long and error-prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an Artificial Neural Network classifier, trained by applying the multilayer perceptron algorithm, and on a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% in tests on data of a single patient retrieved from a publicly available EEG dataset.
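As an illustration of the sliding-window feature extraction and multilayer perceptron training described above, the following minimal Python sketch assumes a single-channel EEG array and a per-sample seizure mask; the feature set and the Training Builder pipeline of the paper are not reproduced, and the `eeg` and `seizure_mask` arrays are random placeholders.

```python
# Minimal sketch (not the paper's Training Builder pipeline): sliding-window
# statistical features from a single-channel EEG trace, then an MLP classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def window_features(signal, labels, win=256, step=128):
    """Cut the signal into overlapping windows and compute simple features."""
    X, y = [], []
    for start in range(0, len(signal) - win, step):
        w = signal[start:start + win]
        X.append([w.mean(), w.std(), w.min(), w.max(),
                  np.abs(np.diff(w)).mean()])                 # line-length proxy
        y.append(int(labels[start:start + win].mean() > 0.5))  # window label
    return np.array(X), np.array(y)

# Hypothetical data: eeg (1-D array) and seizure_mask (0/1 per sample).
eeg = np.random.randn(10000)
seizure_mask = np.zeros(10000)
seizure_mask[4000:6000] = 1

X, y = window_features(eeg, seizure_mask)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```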

254
10010623
Single-Camera Basketball Tracker through Pose and Semantic Feature Fusion
Abstract:
Tracking sports players is a highly challenging scenario, especially in single-feed videos recorded on tight courts, where clutter and occlusions cannot be avoided. This paper presents an analysis of several geometric and semantic visual features to detect and track basketball players. An ablation study is carried out and then used to show that a robust tracker can be built with Deep Learning features, without the need to extract contextual ones, such as proximity or color similarity, or to apply camera stabilization techniques. The presented tracker consists of: (1) a detection step, which uses a pretrained deep learning model to estimate the players' pose, followed by (2) a tracking step, which leverages pose and semantic information from the output of a convolutional layer in a VGG network. Its performance is analyzed in terms of MOTA over a basketball dataset with more than 10k instances.
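The tracking step can be illustrated with a minimal sketch of the frame-to-frame association only, assuming that pose-based detections and a per-detection deep appearance vector (e.g., pooled from a VGG convolutional layer) are already available; `prev_feats` and `curr_feats` are hypothetical placeholders, and the paper's exact matching rule may differ.

```python
# Illustrative association step only: match detections between consecutive
# frames by cosine similarity of their deep appearance features.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_detections(prev_feats, curr_feats, min_sim=0.5):
    """prev_feats (N, D) and curr_feats (M, D): per-detection feature vectors."""
    a = prev_feats / np.linalg.norm(prev_feats, axis=1, keepdims=True)
    b = curr_feats / np.linalg.norm(curr_feats, axis=1, keepdims=True)
    sim = a @ b.T                               # cosine similarity matrix
    rows, cols = linear_sum_assignment(-sim)    # maximize total similarity
    return [(r, c) for r, c in zip(rows, cols) if sim[r, c] >= min_sim]

# Hypothetical 512-D features for 5 tracked players and 5 new detections.
prev_feats = np.random.rand(5, 512)
curr_feats = np.random.rand(5, 512)
print(match_detections(prev_feats, curr_feats))
```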
253
10010432
A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics
Abstract:
Communication signal modulation recognition technology is one of the key technologies in the field of modern information warfare. At present, automatic modulation recognition methods for communication signals fall into two major categories: maximum likelihood hypothesis testing methods based on decision theory, and statistical pattern recognition methods based on feature extraction. The most commonly used today is the statistical pattern recognition approach, which includes feature extraction and classifier design. With the increasingly complex electromagnetic environment of communications, how to effectively extract the features of various signals at low signal-to-noise ratio (SNR) is a hot topic for scholars in various countries. To solve this problem, this paper proposes a feature extraction algorithm for communication signals based on improved Holder cloud features, and an extreme learning machine (ELM) is used to classify the extracted features in order to meet the real-time requirements of modern warfare. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low-SNR environment and uses the improved cloud model to obtain more stable Holder cloud features, so the performance of the algorithm is improved. The algorithm addresses the problem that a simple feature extraction algorithm based on the Holder coefficient performs poorly at low SNR, and it achieves better recognition accuracy. Simulation results show that the approach still classifies well at low SNR; even at an SNR of -15 dB, the recognition accuracy reaches 76%.
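A compact illustration of the extreme learning machine classifier mentioned above is sketched below, with random hidden weights and a closed-form least-squares output layer; the Holder coefficient and cloud-model feature extraction itself is not reproduced, and the input features are placeholders.

```python
# Minimal extreme learning machine (ELM) classifier: random hidden layer,
# closed-form least-squares output weights.
import numpy as np

class SimpleELM:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                       # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)               # hidden activations
        self.beta = np.linalg.pinv(H) @ T              # least-squares solution
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)

# Hypothetical feature vectors standing in for Holder-coefficient-style features.
X = np.random.randn(200, 4)
y = (X[:, 0] + X[:, 1] > 0).astype(int)
elm = SimpleELM(n_hidden=50).fit(X, y)
print("train accuracy:", (elm.predict(X) == y).mean())
```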
252
10010274
The Efficiency of Association Measures in Automatic Extraction of Collocations: Exclusivity and Frequency
Abstract:

This paper deals with the automatic extraction of 20 ‘adjective + noun’ collocations using four different association measures: T-score, MI, Log Dice, and Log Likelihood, with most emphasis on the Log Likelihood and Log Dice scores, for which an argument for their suitability in this experiment is presented. The nodes of the chosen collocates are 20 adjectival false friends between English and French. A noun candidate is chosen only if it occurs among the top ten collocates in two lists, one sorted by Log Likelihood and the other by Log Dice. Fulfilling this criterion guarantees that the chosen candidates are both exclusive and significant noun collocates and thereby make perfect noun candidates for the nodes. The top 10 collocates sorted by Log Dice and Log Likelihood are not filtered; that is, technical terms, function words, and stop words are not removed for the purposes of the analysis. Out of 20 adjectives, 15 ‘adjective + noun’ collocations have been extracted by means of the consensus of the Log Likelihood and Log Dice scores on the top 10 noun collocates. The generated list of automatically extracted ‘adjective + noun’ collocations will serve as the bulk of a translation test in which Algerian students of translation are asked to render these collocations into Arabic. The ultimate goal of this test is to assess the influence of French as a second language on English as a foreign language in the Algerian context.
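For reference, the two association measures emphasized here can be computed from corpus counts as in the following sketch, using the standard logDice and log-likelihood (G2) formulas over a 2x2 contingency table; the counts shown are hypothetical, and corpus handling is omitted.

```python
# Illustrative association measures for an 'adjective + noun' pair, computed
# from corpus counts: f_xy (co-occurrences), f_x, f_y (marginals), N (corpus size).
import math

def log_dice(f_xy, f_x, f_y):
    # Standard logDice formula: 14 + log2(2*f_xy / (f_x + f_y)).
    return 14 + math.log2(2 * f_xy / (f_x + f_y))

def log_likelihood(f_xy, f_x, f_y, N):
    # G2 statistic over the 2x2 contingency table built from the marginals.
    obs = [f_xy, f_x - f_xy, f_y - f_xy, N - f_x - f_y + f_xy]
    exp = [f_x * f_y / N, f_x * (N - f_y) / N,
           (N - f_x) * f_y / N, (N - f_x) * (N - f_y) / N]
    return 2 * sum(o * math.log(o / e) for o, e in zip(obs, exp) if o > 0)

# Hypothetical counts for one candidate collocation in a 1M-token corpus.
print(log_dice(120, 5000, 3000), log_likelihood(120, 5000, 3000, 1_000_000))
```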

251
10009995
Localization of Geospatial Events and Hoax Prediction in the UFO Database
Abstract:

Unidentified Flying Objects (UFOs) have been an interesting topic for most enthusiasts, and hence people all over the United States report such findings online at the National UFO Report Center (NUFORC). Some of these reports are hoaxes, and among those that seem legitimate, our task is not to establish that the events are indeed related to flying objects from aliens in outer space. Rather, we intend to identify whether a report is a hoax, as labeled by the UFO database team with their existing curation criteria. The database, however, provides a wealth of information that can be exploited for various analyses and insights, such as social reporting, identifying real-time spatial events, and much more. We perform analysis to localize these time-series geospatial events and correlate them with known real-time events. This paper does not confirm any legitimacy of alien activity, but rather attempts to gather information from likely legitimate reports of UFOs by studying the online reports. These events happen in geospatial clusters and are also time-based. We look at cluster density and data visualization to search the space of various cluster realizations and decide on the most probable clusters that provide information about the proximity of such activity. A random forest classifier is also presented that is used to separate true events from hoax events, using the best features available, such as region, week, time period and duration. Lastly, we show the performance of the scheme on various days and correlate it with real-time events, where one of the UFO reports strongly correlates to a missile test conducted in the United States.
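A minimal sketch of the hoax-versus-event classification stage is given below, assuming a small report table with the categorical attributes named above; the actual NUFORC-derived fields, values and preprocessing may differ.

```python
# Sketch of the hoax-vs-event classifier: one-hot encode the categorical report
# attributes (region, week, time period) plus duration, then fit a random forest.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical report table; the real NUFORC-derived fields may differ.
df = pd.DataFrame({
    "region": ["WA", "CA", "WA", "TX"],
    "week": [23, 23, 24, 25],
    "time_period": ["night", "evening", "night", "day"],
    "duration_min": [5.0, 1.5, 10.0, 0.5],
    "is_hoax": [0, 0, 1, 0],
})
pre = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"),
      ["region", "week", "time_period"])],
    remainder="passthrough")                    # keep duration_min as-is
model = Pipeline([("pre", pre),
                  ("rf", RandomForestClassifier(n_estimators=100, random_state=0))])
model.fit(df.drop(columns="is_hoax"), df["is_hoax"])
print(model.predict(df.drop(columns="is_hoax")))
```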

250
10009997
Implementation of a Multimodal Biometrics Recognition System with Combined Palm Print and Iris Features
Abstract:

With their extensive application, unimodal biometric systems face a diversity of problems such as signal and background noise, distortion, and environmental differences. Therefore, multimodal biometric systems have been proposed to solve the above-stated problems. This paper introduces a bimodal biometric recognition system based on features extracted from the human palm print and iris. Palm print biometrics is a fairly new, evolving technology that is used to identify people by their palm features. The iris, together with the face and fingerprints, is a strong candidate for inclusion in multimodal recognition systems. In this research, we introduce an algorithm for combining palm and iris features extracted using a texture-based descriptor, the Scale Invariant Feature Transform (SIFT). Since the feature sets are non-homogeneous, as features of different biometric modalities are used, these features are concatenated to form a single feature vector. Particle swarm optimization (PSO) is used as a feature selection technique to reduce the dimensionality of the feature vector. The proposed algorithm is applied to the Indian Institute of Technology Delhi (IITD) database, and its performance is compared with various iris recognition algorithms found in the literature.
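A possible sketch of the feature fusion stage is given below, assuming OpenCV with SIFT support (cv2.SIFT_create in OpenCV 4.4+); the per-image descriptors are mean-pooled here only to obtain fixed-length vectors before concatenation, which is a simplification, and the PSO-based feature selection step is not shown. The palm and iris images are random placeholders.

```python
# Illustrative fusion step: extract SIFT descriptors from a palm print image and
# an iris image, pool them into fixed-length vectors, and concatenate.
import cv2
import numpy as np

def pooled_sift(gray_img):
    """Mean-pool the 128-D SIFT descriptors into one fixed-length vector."""
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray_img, None)
    if desc is None:                       # no keypoints found
        return np.zeros(128, dtype=np.float32)
    return desc.mean(axis=0)

# Random uint8 arrays standing in for real palm print / iris regions of interest.
palm = (np.random.rand(128, 128) * 255).astype(np.uint8)
iris = (np.random.rand(128, 128) * 255).astype(np.uint8)

fused = np.concatenate([pooled_sift(palm), pooled_sift(iris)])  # single 256-D vector
print(fused.shape)   # feature selection (e.g., PSO) would follow on such vectors
```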

249
10009911
The Use of Microorganisms in the Bioleaching of Soils Polluted with Heavy Metals
Abstract:

This paper presents research on the extraction of Cr, Cu and Ni from polluted soils. The research is based on preliminary studies regarding the use of the Thiobacillus ferrooxidans bacterium (9K medium) for the bioleaching of soil polluted with heavy metals (Cu, Cr and Ni). The microorganisms (Thiobacillus ferrooxidans) selected directly from polluted soil samples were used in this experimental work. The soil samples used in the experimental research were taken from an area of Romania polluted with heavy metals. The soil samples were subjected to the cleaning process using the 9K medium solution (20 mL and 40 mL, respectively), stirred at 200 rpm for 20 hours at a controlled temperature (30 ˚C). During the experiment (0, 2, 4, 8 and 20 h), liquid samples were extracted and analyzed using an Atomic Absorption Spectrophotometer AA-6800 (AAS) in order to determine the Cr, Cu and Ni concentrations. The experiments led to the conclusion that these soils can be depolluted by bioleaching, a biological treatment method involving the use of microorganisms to favor the extraction of Cr, Cu and Ni from polluted soils.

248
10009852
Early Recognition and Grading of Cataract Using a Combined Log Gabor/Discrete Wavelet Transform with ANN and SVM
Abstract:
The eye is considered the most sensitive and important organ of the human body; thus, any eye disorder affects the patient in all aspects of life. Cataract is one of those eye disorders that lead to blindness if not treated correctly and quickly. This paper demonstrates a model for the automatic detection, classification, and grading of cataracts based on image processing techniques and artificial intelligence. The proposed system is developed to ease the cataract diagnosis process for both ophthalmologists and patients. The wavelet transform combined with the 2D Log Gabor wavelet transform was used as the feature extraction technique for a dataset of 120 eye images, followed by a classification process that classified the image set into three classes: normal, early, and advanced stage. A comparison between the two classifiers, the support vector machine (SVM) and the artificial neural network (ANN), was carried out on the same dataset of 120 eye images. It was concluded that the SVM gave better results than the ANN: the SVM reached 96.8% accuracy, whereas the ANN reached 92.3% accuracy.
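The wavelet-feature and SVM stage can be sketched as below, using single-level 2D DWT sub-band statistics as a stand-in for the paper's combined Log Gabor/DWT features (the Log Gabor filtering is omitted); the images and labels are random placeholders.

```python
# Sketch of a wavelet-feature + SVM stage: 2-D DWT sub-band statistics as
# features for a three-class (normal / early / advanced) SVM.
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(gray_img, wavelet="db4"):
    """Mean absolute value of each sub-band from a single-level 2-D DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(gray_img.astype(float), wavelet)
    return np.array([np.mean(np.abs(b)) for b in (cA, cH, cV, cD)])

# Hypothetical data: 120 grayscale eye images with labels 0/1/2.
images = [np.random.rand(128, 128) for _ in range(120)]
labels = np.random.randint(0, 3, size=120)

X = np.array([dwt_features(img) for img in images])
svm = SVC(kernel="rbf").fit(X, labels)
print("train accuracy:", svm.score(X, labels))
```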
247
10009584
Interaction between Cd and Zn in Barley (Hordeum vulgare L.) Plant for Phytoextraction Method
Abstract:

The aim of this research is the remediation of cadmium (Cd) pollution in agricultural soils by using the barley (Hordeum vulgare L.) plant. For this purpose, a pot experiment was carried out under greenhouse conditions. Cadmium (100 mg/kg), in the form of CdSO4.8H2O, was applied to each pot and incubated for 30 days. Then Ethylenediamine tetraacetic acid (EDTA) chelate was applied to each pot at five doses (0, 3, 6, 8 and 10 mmol/kg) 20 days before the harvesting time of the barley plants. The plants were harvested two months after planting. According to the pot experiment results, the Cd and Zn amounts of the barley plants increased with increasing EDTA application: the Zn and Cd contents of barley were 20.13 and 1.35 mg/kg for the 0 mmol/kg EDTA dose and 58.61 and 113.24 mg/kg for the 10 mmol/kg EDTA dose, respectively. On the other hand, the Cd and Zn concentrations of the experimental soil increased with EDTA application to the soil samples: the Zn and Cd concentrations of the soil were 0.31 and 0.021 mg/kg for the 0 mmol/kg EDTA dose and 2.39 and 67.40 mg/kg for the 10 mmol/kg EDTA dose, respectively. These increases were found to be statistically significant at the 1% level. According to the results of the pot experiment, heavy metal pollution, especially Cd pollution, can be remediated by the phytoextraction method using the barley (Hordeum vulgare L.) plant.

246
10009491
Synthetic Aperture Radar Remote Sensing Classification Using the Bag of Visual Words Model to Land Cover Studies
Abstract:

Classification of high-resolution polarimetric Synthetic Aperture Radar (PolSAR) images plays an important role in land cover and land use management. Recently, classification algorithms based on the Bag of Visual Words (BOVW) model have attracted significant interest among scholars and researchers in and out of the field of remote sensing. In this paper, the BOVW model with pixel-based low-level features has been implemented to classify a subset of a San Francisco Bay PolSAR image acquired by RADARSAT-2 in C-band. We have used a segment-based decision-making strategy and compared the result with that of a traditional Support Vector Machine (SVM) classifier. The 90.95% overall classification accuracy obtained with the proposed algorithm shows that it is comparable with state-of-the-art methods. In addition to increasing the classification accuracy, the proposed method has decreased the undesirable speckle effect of SAR images.
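The BOVW pipeline itself can be illustrated with a short sketch: a k-means codebook learned on pixel-level features, per-segment word histograms, and a linear SVM; the PolSAR-specific low-level features and the segment-based decision rule of the paper are replaced here by random placeholders.

```python
# Bag-of-visual-words sketch: quantize pixel-level feature vectors with k-means,
# build per-segment word histograms, and classify segments with a linear SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def bovw_histogram(features, codebook):
    """features: (n_pixels, d) low-level features of one image segment."""
    words = codebook.predict(features)
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# Hypothetical PolSAR-derived pixel features grouped by segment, with labels.
segments = [np.random.rand(500, 6) for _ in range(40)]   # 40 segments
labels = np.random.randint(0, 4, size=40)                # 4 land-cover classes

codebook = KMeans(n_clusters=32, n_init=10, random_state=0)
codebook.fit(np.vstack(segments))                        # learn the visual words
X = np.array([bovw_histogram(s, codebook) for s in segments])
clf = LinearSVC().fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```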

245
10009575
Optimization of Soybean Oil by Modified Supercritical Carbon Dioxide
Abstract:

The omega-3 content of soybean oil is important in the development of infants and is an alternative to the omega-3 in fish oils. Investigation of soybean oil extraction is needed to obtain the bioactive compounds in the extract. Supercritical carbon dioxide extraction is a modern and green technology for extracting herbs and plants to obtain high-quality extracts, owing to the high diffusivity and solubility of the solvent. The aim of this study was to obtain the optimum conditions of soybean oil extraction by modified supercritical carbon dioxide. The soybean oil was extracted using modified supercritical carbon dioxide (SC-CO2) at temperatures of 40, 60 and 80 °C, pressures of 150, 250 and 350 bar, and a constant flow rate of 10 g/min as the extraction process parameters. An experimental design was performed in order to optimize the important parameters of SC-CO2 extraction, pressure (X1) and temperature (X2), to achieve optimum yields of soybean oil. A Box-Behnken design was applied for the experimental design. From the optimization process, the optimum extraction conditions for soybean oil were obtained at a pressure of 338 bar and a temperature of 80 °C, with an oil yield of 2.713 g. The effect of pressure on the extraction of soybean oil by modified supercritical carbon dioxide is significant: increasing the pressure increases the oil yield.

244
10009310
Automated Heart Sound Classification from Unsegmented Phonocardiogram Signals Using Time Frequency Features
Abstract:
Cardiologists perform cardiac auscultation to detect abnormalities in heart sounds. Since accurate auscultation is a crucial first step in screening patients with heart diseases, there is a need to develop computer-aided detection/diagnosis (CAD) systems to assist cardiologists in interpreting heart sounds and provide second opinions. In this paper, different algorithms are implemented for automated heart sound classification using unsegmented phonocardiogram (PCG) signals. A support vector machine (SVM), an artificial neural network (ANN) and a cartesian genetic programming evolved artificial neural network (CGPANN), without the application of any segmentation algorithm, have been explored in this study. The signals are first pre-processed to remove any unwanted frequencies. Both time- and frequency-domain features are then extracted for training the different models. The different algorithms are tested in multiple scenarios and their strengths and weaknesses are discussed. Results indicate that the SVM outperforms the rest with an accuracy of 73.64%.
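A minimal sketch of one possible time- and frequency-domain feature stage for unsegmented PCG recordings is shown below; the frequency bands, sampling rate and recordings are assumptions, not the paper's exact feature set.

```python
# Sketch of a feature stage for unsegmented PCG recordings: a few time-domain
# statistics plus Welch band powers, fed to an SVM as in the comparison above.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

def pcg_features(x, fs=2000, bands=((25, 45), (45, 80), (80, 200), (200, 400))):
    f, psd = welch(x, fs=fs, nperseg=1024)
    band_power = [np.trapz(psd[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
                  for lo, hi in bands]
    time_stats = [x.mean(), x.std(), np.abs(x).max(), np.mean(np.abs(np.diff(x)))]
    return np.array(time_stats + band_power)

# Hypothetical recordings (one quieter, one noisier) with binary labels.
recordings = [np.random.randn(20000) * s for s in (1.0, 2.5)]
X = np.array([pcg_features(r) for r in recordings])
y = np.array([0, 1])
print(SVC().fit(X, y).predict(X))
```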
243
10009057
Ultrasound Assisted Extraction and Microwave Assisted Extraction of Carotenoids from Melon Shells
Abstract:
Cantaloupes (muskmelon and watermelon) contain biologically active molecules such as carotenoids, which are natural pigments used as food colorants and afford health benefits. β-carotene is the major carotenoid present in muskmelon and watermelon shell. Carotenoids were extracted using Microwave-assisted extraction (MAE) and Ultrasound-assisted extraction (UAE) utilising organic lipophilic solvents such as acetone, methanol, and hexane. The extraction conditions, namely feed-solvent ratio, microwave power, ultrasound frequency, temperature and particle size, were varied and optimized. It was found that the yield of carotenoids was higher using UAE than MAE, and muskmelon had the highest yield of carotenoids when ethanol was used as a solvent for a 0.5 mm particle size.
242
10008868
Lead in the Soil-Plant System Following Aged Contamination from Ceramic Wastes
Abstract:

Lead contamination of agricultural land mainly vegetated with perennial ryegrass (Lolium perenne) has been investigated. The metal derived from the past discharge of sludge from a ceramic industry that had used lead paints. The results showed very high lead concentrations in many soil samples. In order to assess the lead contamination of the soil, a sequential extraction with H2O, KNO3 and EDTA was performed, and the chemical forms of lead in the soil were evaluated. More than 70% of the lead was in a potentially bioavailable form. Analysis of Lolium perenne showed elevated lead concentrations. A Freundlich-like model was used to describe the transferability of the metal from the soil to the plant.
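The exact parameterization of the Freundlich-like model is not given here; such soil-to-plant transfer relationships are commonly written as a power law with empirical fitting coefficients a and b, for example:

```latex
% Illustrative Freundlich-type soil-to-plant transfer law (a, b are empirical
% fitting parameters; not necessarily the paper's exact parameterization).
C_{\text{plant}} = a \, C_{\text{soil}}^{\,b}
```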

241
10008564
Key Frame Based Video Summarization via Dependency Optimization
Abstract:
With the rapid growth of digital videos and data communications, video summarization, which provides a shorter version of a video for fast browsing and retrieval, is necessary. Key frame extraction is one of the mechanisms used to generate a video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most existing approaches select key frames heuristically; hence, the selected key frames may not be the most distinct frames and/or may not cover the entire content of the video. In this paper, we propose a video summarization method that provides reasonable objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual information as our objective function for maximizing the coverage of the entire video content as well as minimizing the redundancy among selected key frames. The proposed algorithm formulates key frame extraction as an optimization problem. Through experiments, we demonstrate the success of the proposed video summarization approach, which produces a video summary with better coverage of the entire video content and less redundancy among key frames compared to state-of-the-art approaches.
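To illustrate the optimization view of key frame selection, the sketch below greedily maximizes a coverage-minus-redundancy objective over frame histograms; note that plain histogram similarity is used as a stand-in for the quadratic mutual information measure actually proposed in the paper, and the frames are random placeholders.

```python
# Illustrative greedy key-frame selection: pick frames that maximize coverage of
# the whole video minus redundancy with already selected frames.
import numpy as np

def frame_histograms(frames, bins=32):
    """frames: list of grayscale arrays -> (n_frames, bins) normalized histograms."""
    H = np.array([np.histogram(f, bins=bins, range=(0, 1))[0] for f in frames],
                 dtype=float)
    return H / H.sum(axis=1, keepdims=True)

def select_key_frames(hists, k=3, lam=0.5):
    sim = hists @ hists.T                    # pairwise frame similarity
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(hists)):
            if i in selected:
                continue
            coverage = sim[i].mean()                                    # represents video
            redundancy = max((sim[i][j] for j in selected), default=0)  # vs chosen frames
            score = coverage - lam * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

frames = [np.random.rand(120, 160) for _ in range(50)]   # hypothetical video frames
print(select_key_frames(frame_histograms(frames), k=3))
```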
240
10008709
The Mechanism Study of Degradative Solvent Extraction of Biomass by Liquid Membrane-Fourier Transform Infrared Spectroscopy
Abstract:

Degradative solvent extraction is a method developed for biomass upgrading by dewatering and fractionation of biomass under mild conditions. However, the conversion mechanism of the degradative solvent extraction method has not yet been fully understood. Rice straw was treated in 1-methylnaphthalene (1-MN) at solvent-treatment temperatures varying from 250 to 350 °C with a residence time of 60 min. The liquid membrane-Fourier Transform Infrared Spectroscopy (FTIR) technique was applied to study the processing mechanism in depth without separation of the solvent. It was found that the strength of the oxygen-hydrogen stretching band (3600-3100 cm-1) decreased slightly with increasing temperature in the range of 300-350 °C. The decrease of the hydroxyl group in the solvent-soluble fraction suggested a dehydration reaction taking place between 300 and 350 °C. FTIR spectra in the carbonyl stretching region (1800-1600 cm-1) revealed the presence of ester groups, carboxylic acid groups and ketonic groups in the solvent-soluble fraction of the biomass. The carboxylic acid content increased in the range of 200 to 250 °C and then decreased. The prevalence of aromatic groups showed that aromatization took place during extraction above 250 °C. From 300 to 350 °C, the carbonyl functional groups in the solvent-soluble fraction noticeably decreased. The removal of carboxylic acid and the decrease of esters, released in the form of carbon dioxide, indicated that a decarboxylation reaction occurred during the extraction process.

239
10008534
Hybrid Knowledge Approach for Determining Health Care Provider Specialty from Patient Diagnoses
Abstract:

In an access-control situation, the role of a user determines whether a data request is appropriate. This paper combines vetted web mining and logic modeling to build a lightweight system for determining the role of a health care provider based only on their prior authorized requests. The model identifies provider roles with 100% recall from very little data. This shows the value of vetted web mining in AI systems, and suggests the impact of the ICD classification on medical practice.

238
10009080
Myanmar Character Recognition Using Eight Direction Chain Code Frequency Features
Abstract:

Character recognition is the process of converting a text image file into an editable and searchable text file. Feature extraction is the heart of any character recognition system, and the recognition rate may be low or high depending on the extracted features. In this paper, 25 features per character are used for recognition. There are three basic steps in character recognition: character segmentation, feature extraction and classification. In the segmentation step, a horizontal cropping method is used for line segmentation and a vertical cropping method is used for character segmentation. In the feature extraction step, features are extracted in two ways. First, 8 features are extracted from the entire input character using eight-direction chain code frequency extraction. Second, the input character is divided into 16 blocks; for each block, 8 feature values are obtained through the eight-direction chain code frequency extraction method, and the sum of these 8 values is defined as the feature of that block, so 16 features are extracted from the 16 blocks. We use the number-of-holes feature to cluster similar characters. Using these features, we can recognize almost all common Myanmar characters in various font sizes. All 25 features are used in both the training and testing phases. In the classification step, characters are classified by matching all the features of the input character with the already trained character features.
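The eight-direction chain code frequency feature can be sketched as follows, assuming an ordered boundary contour of the character is already available (e.g., from a contour-tracing step); the example contour is a toy placeholder.

```python
# Sketch of the eight-direction chain code frequency feature: given an ordered
# character contour, count the direction of each step between consecutive points
# and normalize the 8 counts into a frequency histogram.
import numpy as np

# Freeman chain-code directions: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
        (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code_frequency(contour):
    """contour: ordered list of (row, col) boundary points of one character."""
    counts = np.zeros(8)
    for (r0, c0), (r1, c1) in zip(contour, contour[1:]):
        step = (int(np.sign(r1 - r0)), int(np.sign(c1 - c0)))
        if step in DIRS:
            counts[DIRS[step]] += 1
    return counts / max(counts.sum(), 1)     # 8 frequency features

# Hypothetical contour of a small square-shaped glyph.
contour = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0), (0, 0)]
print(chain_code_frequency(contour))
```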

237
10009032
Analyzing Keyword Networks for the Identification of Correlated Research Topics
Abstract:
The production and publication of scientific works have increased significantly in recent years, with the Internet being the main means of access and distribution of these works. Consequently, there is growing interest in understanding how scientific research has evolved, in order to exploit this knowledge to encourage research groups to become more productive. Therefore, the objective of this work is to explore repositories containing data on scientific publications and to characterize the keyword networks of these publications, in order to identify the most relevant keywords and to highlight those that have the greatest impact on the network. To do this, the keywords of each article in the study repository are extracted and used to build the network, after which several social network analysis metrics are applied to identify the highlighted keywords.
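A minimal sketch of the keyword-network construction and ranking is shown below, with hypothetical keyword lists standing in for the repository data and standard centrality metrics standing in for the specific measures used in the paper.

```python
# Sketch of the keyword-network step: build a co-occurrence graph from per-article
# keyword lists, then rank keywords with standard social-network-analysis metrics.
from itertools import combinations
import networkx as nx

# Hypothetical keyword lists, one per article.
articles = [
    ["machine learning", "feature extraction", "classification"],
    ["feature extraction", "neural networks"],
    ["classification", "neural networks", "machine learning"],
]

G = nx.Graph()
for kws in articles:
    for a, b in combinations(sorted(set(kws)), 2):
        w = G.get_edge_data(a, b, {}).get("weight", 0)
        G.add_edge(a, b, weight=w + 1)          # co-occurrence count as edge weight

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
for kw in sorted(G.nodes, key=degree.get, reverse=True):
    print(f"{kw}: degree={degree[kw]:.2f}, betweenness={betweenness[kw]:.2f}")
```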
236
10008031
Wavelet-Based ECG Signal Analysis and Classification
Abstract:

This paper presents the processing and analysis of ECG signals. The study is based on the wavelet transform and uses exclusively the MATLAB environment. The study includes removing baseline wander and further de-noising through the wavelet transform; metrics such as the signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR) and mean squared error (MSE) are used to assess the efficiency of the de-noising techniques. Feature extraction is subsequently performed, whereby signal features such as heart rate and rise and fall levels are extracted and the QRS complex is detected, which helps in classifying the ECG signal. Classification is the last step in the analysis of the ECG signals, and it is shown that these are successfully classified as normal rhythm or abnormal rhythm. The final results prove the adequacy of using the wavelet transform for the analysis of ECG signals.
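The wavelet de-noising and SNR assessment stage can be sketched in Python (the paper works in MATLAB) roughly as follows; the wavelet choice, threshold rule and the synthetic test signal are assumptions for illustration only.

```python
# Sketch of a de-noising stage: wavelet decomposition, soft thresholding of the
# detail coefficients, reconstruction, and an SNR-style quality metric.
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db6", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(signal)))          # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def snr_db(clean, processed):
    noise = clean - processed
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

# Hypothetical noisy ECG-like trace: a clean periodic proxy plus white noise.
t = np.linspace(0, 10, 5000)
clean = np.sin(2 * np.pi * 1.2 * t)
noisy = clean + 0.3 * np.random.randn(len(t))
denoised = wavelet_denoise(noisy)
print("SNR after de-noising: %.1f dB" % snr_db(clean, denoised))
```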

235
10008810
Influences of Juice Extraction and Drying Methods on the Chemical Analysis of Lemon Peels
Abstract:

This study aimed to determine the influence of different juice extraction methods (a screw-type hand-operated juice extractor and a pressed-squeeze juice extractor) as well as drying methods (microwave, solar and oven drying) on the chemical properties of lemon peels. It could be concluded that juice extraction with the screw-type extractor and peel drying using the microwave method were the best preparative processing steps for the utilization of lemon peel as a food additive.

234
10007196
Terrain Classification for Ground Robots Based on Acoustic Features
Abstract:
The motivation of our work is to detect the different terrain types traversed by a robot based on acoustic data from the robot-terrain interaction. Different acoustic features and classifiers were investigated, such as Mel-frequency cepstral coefficients and Gammatone frequency cepstral coefficients for feature extraction, and a Gaussian mixture model and a feed-forward neural network for classification. We analyze the system’s performance by comparing our proposed techniques with other features surveyed from related works. We achieve precision and recall values between 87% and 100% per class, and an average accuracy of 95.2%. We also study the effect of varying the audio chunk size in the application phase of the models and find only a mild impact on performance.
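A minimal sketch of the MFCC-plus-GMM variant of this pipeline is given below, assuming one GMM per terrain class and short mono recordings; the sampling rate, component count and the random placeholder audio are assumptions.

```python
# Sketch of the acoustic pipeline: MFCC frames per recording and one Gaussian
# mixture model per terrain class; a clip is assigned to the class whose GMM
# gives the highest average log-likelihood over its frames.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(y, sr=16000, n_mfcc=13):
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # (frames, n_mfcc)

# Hypothetical 5-second training recordings per terrain class (1-D float arrays).
train = {"gravel": np.random.randn(5 * 16000), "grass": np.random.randn(5 * 16000)}
models = {c: GaussianMixture(n_components=4, covariance_type="diag",
                             random_state=0).fit(mfcc_frames(y))
          for c, y in train.items()}

def classify(y):
    frames = mfcc_frames(y)
    return max(models, key=lambda c: models[c].score(frames))

print(classify(np.random.randn(5 * 16000)))   # predicts one of the terrain classes
```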
233
10007303
Evolving Knowledge Extraction from Online Resources
Abstract:
In this paper, we present an evolving knowledge extraction system named AKEOS (Automatic Knowledge Extraction from Online Sources). AKEOS consists of two modules: a one-time learning module and an evolving learning module. The one-time learning module takes in a user input query and automatically harvests knowledge from online unstructured resources in an unsupervised way. The output of the one-time learning is a structured vector representing the harvested knowledge. The evolving learning module automatically schedules and performs repeated one-time learning to extract the newest information and track the development of an event. In addition, the evolving learning module summarizes the knowledge learned at different time points to produce a final knowledge vector about the event. With the evolving learning, we are able to visualize the key information of the event, discover the trends, and track its development.
232
10006619
Phytoremediation Potential of Tomato for Cd and Cr Removal from Polluted Soils
Abstract:

Cadmium and chromium are toxic to most organisms, and different mechanisms have been developed for overcoming the toxic effects of these heavy metals. We studied the uptake and distribution of cadmium and chromium in different organs of tomato (Lycopersicon esculentum L.) plants in nine heavy-metal-polluted soils in western Hormozgan province, Iran. The accumulation of chromium was in increasing pattern of fruit peel

231
10006195
Optimization of Process Parameters using Response Surface Methodology for the Removal of Zinc(II) by Solvent Extraction
Abstract:

A factorial design of experiments and a response surface methodology were implemented to investigate the liquid-liquid extraction of zinc(II) from acetate medium using the ionic liquid 1-butyl-imidazolium di(2-ethylhexyl) phosphate [BIm+][D2EHP-]. The optimization of the extraction parameters, namely the initial pH (2.5, 4.5, and 6.6), the ionic liquid concentration (1, 5.5, and 10 mM) and the salt effect (0.01, 5, and 10 mM), was carried out using a three-level full factorial design (3^3). The results of the factorial design demonstrate that all these factors are statistically significant, including the square effects of pH and ionic liquid concentration. The order of significance was: IL concentration > salt effect > initial pH. Analysis of variance (ANOVA), showing a high coefficient of determination (R2 = 0.91) and low probability values (P < 0.05), signifies the validity of the predicted second-order quadratic model for Zn(II) extraction. The optimum conditions for the extraction of zinc(II), at a constant temperature (20 °C), initial Zn(II) concentration (1 mM) and an A/O ratio of unity, were: initial pH 4.8, extractant concentration 9.9 mM, and NaCl concentration 8.2 mM. Under the optimized conditions, the metal ion could be quantitatively extracted.
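The second-order response-surface fit over the 3^3 design can be sketched as follows; the design levels match those listed above, but the response values are synthetic placeholders, not the paper's measurements.

```python
# Sketch of the response-surface step: enumerate the three-level full factorial
# (3^3) design, fit a second-order (quadratic) model of extraction efficiency
# against pH, ionic-liquid concentration and salt concentration, and inspect R^2.
import itertools
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

levels = {"pH": [2.5, 4.5, 6.6], "IL_mM": [1.0, 5.5, 10.0], "NaCl_mM": [0.01, 5.0, 10.0]}
X = np.array(list(itertools.product(*levels.values())))        # 27 design points

# Hypothetical measured extraction (%) with curvature and noise, standing in
# for the paper's actual data.
rng = np.random.default_rng(0)
y = (40 + 5 * X[:, 0] - 0.4 * X[:, 0] ** 2
     + 4 * X[:, 1] - 0.15 * X[:, 1] ** 2
     + 1.5 * X[:, 2] + rng.normal(0, 2, len(X)))

poly = PolynomialFeatures(degree=2, include_bias=False)        # linear, square, interaction
Xq = poly.fit_transform(X)
model = LinearRegression().fit(Xq, y)
print("R^2:", round(model.score(Xq, y), 3))
for name, coef in zip(poly.get_feature_names_out(list(levels)), model.coef_):
    print(f"{name}: {coef:+.3f}")
```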

230
10006420
An Adaptive Dimensionality Reduction Approach for Hyperspectral Imagery Semantic Interpretation
Abstract:
With the development of HyperSpectral Imagery (HSI) technology, the spectral resolution of HSI has become finer, which results in a large number of spectral bands, high correlation between neighboring bands, and high data redundancy. Semantic interpretation is therefore a challenging task for HSI analysis, due to the high dimensionality and the high correlation of the different spectral bands. This work presents a dimensionality reduction approach that overcomes these issues and improves the semantic interpretation of HSI. First, in order to preserve the spatial information, the Tensor Locality Preserving Projection (TLPP) is applied to transform the original HSI. In the second step, knowledge is extracted based on the adjacency graph to describe the different pixels. Based on the TLPP transformation matrix, a weighted matrix is constructed to rank the different spectral bands according to their contribution score, and the relevant bands are adaptively selected based on this weighted matrix. The performance of the presented approach has been validated in several experiments, and the obtained results demonstrate its efficiency compared to various existing dimensionality reduction techniques. According to the experimental results, we can also conclude that this approach adaptively selects the relevant spectral bands, improving the semantic interpretation of HSI.
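A simplified, non-tensor sketch of the band-ranking idea is given below: a standard locality preserving projection is learned from the pixel neighborhood graph, and bands are ranked by the magnitude of their weights in the projection matrix. This omits the tensor formulation and the spatial handling of TLPP and is only an illustration of the contribution-score idea, on random placeholder spectra.

```python
# Simplified (non-tensor) locality-preserving-projection sketch: learn a linear
# projection that preserves the pixel neighborhood graph, then rank spectral
# bands by the magnitude of their weights in the projection matrix.
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def lpp_band_scores(X, n_components=5, k=8, t=1.0):
    """X: (n_pixels, n_bands) spectra. Returns a contribution score per band."""
    D2 = kneighbors_graph(X, k, mode="distance").toarray() ** 2
    W = np.where(D2 > 0, np.exp(-D2 / t), 0.0)     # heat-kernel adjacency
    W = np.maximum(W, W.T)                          # symmetrize
    Ddiag = np.diag(W.sum(axis=1))
    L = Ddiag - W                                   # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ Ddiag @ X + 1e-6 * np.eye(X.shape[1]) # regularized for stability
    vals, vecs = eigh(A, B)                         # generalized eigenproblem
    P = vecs[:, :n_components]                      # smallest eigenvalues first
    return np.abs(P).sum(axis=1)                    # per-band contribution score

# Hypothetical HSI sample: 500 pixels x 30 spectral bands.
X = np.random.rand(500, 30)
scores = lpp_band_scores(X)
print("top-ranked bands:", np.argsort(scores)[::-1][:5])
```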
229
10006461
Incremental Learning of Independent Topic Analysis
Abstract:
In this paper, we present a method for applying Independent Topic Analysis (ITA) to a growing collection of documents. The number of documents has been increasing since the spread of the Internet, and ITA has been proposed as one method to analyze such data. ITA extracts independent topics from document data by using Independent Component Analysis (ICA), a technique from signal processing. However, it is difficult to apply ITA to a growing number of documents, because ITA must use all the document data, so its temporal and spatial cost is very high. Therefore, we present Incremental ITA, which extracts independent topics from an increasing number of documents. Incremental ITA updates the independent topics whenever new document data are added, starting from the topics extracted from the previous data. Finally, we show the results of applying Incremental ITA to benchmark datasets.
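An illustrative batch (non-incremental) version of ITA-style topic extraction is sketched below: ICA applied to a TF-IDF document-term matrix, reading off the top terms of each independent component; the incremental update mechanism proposed in the paper is not reproduced, and the documents are toy placeholders.

```python
# Illustrative (batch) independent-topic extraction: apply ICA to a TF-IDF
# document-term matrix and list the top terms of each independent component.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "stock market prices rise on earnings news",
    "team wins the championship game in overtime",
    "central bank adjusts interest rates for the market",
    "player scores twice as the team takes the game",
]
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs).toarray()           # documents x terms

ica = FastICA(n_components=2, random_state=0)
ica.fit(X)                                      # components_ live in term space
terms = np.array(vec.get_feature_names_out())
for i, comp in enumerate(ica.components_):
    top = terms[np.argsort(np.abs(comp))[::-1][:4]]
    print(f"independent topic {i}: {', '.join(top)}")
```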
228
10006534
Performance Study of Neodymium Extraction by Carbon Nanotubes Assisted Emulsion Liquid Membrane Using Response Surface Methodology
Abstract:
High-purity rare earth elements (REEs) have been widely used in chemical engineering, metallurgy, nuclear energy, optical, magnetic, luminescence and laser materials, superconductors, ceramics, alloys, catalysts, etc. Neodymium is one of the most abundant rare earths. With the development of neodymium–iron–boron (Nd–Fe–B) permanent magnets, the importance of neodymium has dramatically increased. Solvent extraction processes have many operational limitations, such as a large inventory of extractants, loss of solvent due to organic solubility in aqueous solutions, volatilization of diluents, etc. One of the promising liquid membrane processes is the emulsion liquid membrane (ELM), which offers an alternative to solvent extraction. In this work, a study of Nd extraction through a multi-walled carbon nanotube (MWCNT) assisted ELM using response surface methodology (RSM) has been performed. The ELM was composed of diisooctylphosphinic acid (CYANEX 272) as carrier, MWCNTs as nanoparticles, Span-85 (sorbitan trioleate) as surfactant, kerosene as organic diluent and nitric acid as internal phase. The effects of the important operating variables, namely surfactant concentration, MWCNT concentration, and treatment ratio, were investigated. The results were optimized using a central composite design (CCD), and a regression model for the extraction percentage was developed. The 3D response surfaces of Nd(III) extraction efficiency were obtained, and the significance of the three variables and their interactions on the extraction efficiency was determined. The results indicated that introducing MWCNTs to the ELM process increased the Nd extraction due to higher membrane stability and enhanced mass transfer. An MWCNT concentration of 407 ppm, a Span-85 concentration of 2.1% v/v and a treatment ratio of 10 were found to be the optimum conditions. Under these conditions, the extraction of Nd(III) reached a maximum of 99.03%.
227
10006736
Diagnosis of Diabetes Using Computer Methods: Soft Computing Methods for Diabetes Detection Using Iris
Abstract:

Complementary and Alternative Medicine (CAM) techniques are quite popular and effective for chronic diseases. Iridology is a CAM technique, more than 150 years old, which analyzes the patterns, tissue weakness, color, shape, structure, etc. of the iris for disease diagnosis. The objective of this paper is to validate the use of iridology for the diagnosis of diabetes. The suggested model was applied to a systemic disease with ocular effects. Data from 200 subjects, 100 diabetic and 100 non-diabetic, were evaluated. The complete procedure was kept very simple and free from the involvement of any iridologist. From the normalized iris, the region of interest was cropped. In all, 63 features were extracted using statistical measures, texture analysis, and the two-dimensional discrete wavelet transform. A comparison of the accuracies of six different classifiers is presented. The results show 89.66% accuracy with the random forest classifier.
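The texture-analysis part of such a feature set can be illustrated with gray-level co-occurrence matrix (GLCM) statistics over the cropped iris region of interest, followed by a random forest classifier; this sketch uses scikit-image's graycomatrix/graycoprops (0.19+ spelling), and the ROIs, labels and GLCM settings are assumptions rather than the paper's exact feature definitions.

```python
# Sketch of a texture-feature stage: gray-level co-occurrence matrix (GLCM)
# statistics from the cropped iris region of interest, fed to a random forest.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(roi_uint8):
    glcm = graycomatrix(roi_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Hypothetical ROIs cropped from normalized iris images, with diabetes labels.
rois = [(np.random.rand(64, 64) * 255).astype(np.uint8) for _ in range(40)]
labels = np.random.randint(0, 2, size=40)

X = np.array([glcm_features(r) for r in rois])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```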

226
10005953
Characterisation of Fractions Extracted from Sorghum Byproducts
Abstract:

Sorghum byproducts, namely bran, stalk, and panicle, are examples of lignocellulosic biomass. These raw materials contain large amounts of polysaccharides, in particular hemicelluloses, celluloses, and lignins, which, if efficiently extracted, can be utilised for the development of a range of added-value products with potential applications in the agriculture and food packaging sectors. The aim of this study was to characterise fractions extracted from sorghum bran and stalk with regard to the physicochemical properties that could determine their applicability as food-packaging materials. A sequential alkaline extraction was applied for the isolation of cellulosic, hemicellulosic and lignin fractions from sorghum stalk and bran. Lignin content, phenolic content and antioxidant capacity were also investigated in the case of the lignin fraction. Thermal analysis using differential scanning calorimetry (DSC) and X-ray diffraction (XRD) revealed that the glass transition temperature (Tg) of the cellulose fraction of the stalk was ~78.33 °C at an amorphous state of ~65% and a water content of ~5%. In terms of hemicellulose, the Tg value of the stalk was slightly lower than that of the bran, at an amorphous state of ~54% and a lower water content (~2%). It is evident that the hemicelluloses generally showed lower thermal stability compared to cellulose, probably due to their lack of crystallinity. Additionally, the bran had a higher arabinose-to-xylose ratio (0.82) than the stalk, a fact that indicated its low crystallinity. Furthermore, the lignin fraction had a Tg value of ~93 °C at an amorphous state of ~11%. The stalk-derived lignin fraction contained more phenolic compounds (mainly p-coumaric and ferulic acid) and had a higher lignin content and antioxidant capacity compared to the bran-derived lignin fraction.
