Excellence in Research and Innovation for Humanity

International Science Index

Commenced in January 1999 | Frequency: Monthly | Edition: International | Abstract Count: 43397

Computer and Information Engineering

2265
76007
Database Playlists: Croatia's Popular Music in the Mirror of Collective Memory
Abstract:
This research analytically explores database playlists, studying memory culture through Croatian popular radio music. It is based on the scientific analysis of databases built from the playlists of ten Croatian radio stations. The most recent Croatian songs played on Statehood Day from 2008 to 2013 are analyzed in order to gain insight into their (memory) potential in terms of storing, interpreting, and presenting national identity. The research starts from the general assumption that popular music is an efficient identifier, transmitter, and promoter of national identity. The aim of the database research was to reveal the specific titles of Croatian popular songs that participate in marking memories, to analyze their symbolic capital in order to gain insight into the popular-music experience of the past, and to develop a new method of scientifically grounded analysis of such databases.
2264
76001
Animation of Objects on the Website by Application of CSS3 Language
Abstract:
This work analytically explores and demonstrates techniques for animating objects and geometric characters in the CSS3 language by applying proper formatting and positioning of elements. The paper presents examples of optimal application of the CSS3 descriptive language when generating general web animations (e.g., billiards and the movement of geometric characters). It analytically presents the optimal development and animation design within the frames that contain the animated objects. The originally developed content is based on upgrading existing CSS3 animations with more complex syntax and project-oriented work. The purpose of the developed animations is to provide an overview of the interactive design features of CSS3 for computer games and for the animation of important analytical data in a web view. It is analytically demonstrated that CSS3, as a descriptive language, allows various multimedia elements to be inserted into public and internal websites.
2263
75839
Road Network Topology Inference via Satellite Imagery
Abstract:
Determining optimal routing paths is at the heart of many humanitarian and military challenges, and is also crucial for autonomous vehicles. In areas of low population density or regions undergoing rapid change (e.g., natural disasters), commercial or open source mapping products are often inadequate. The rapid revisit rates of satellite imaging constellations have the potential to alleviate this capability gap, if road networks can be inferred directly from imagery. We demonstrate techniques for extracting the physical and logical topology of road networks from satellite imagery via computer vision and graph theory techniques. To evaluate our algorithms (and to encourage external engagement), we label and release a high-quality dataset of roads in multiple cities. In order to catalyze new capabilities beyond the pixel classification typically used in road detection, we also develop a new metric based upon integrated path length differences. This allows us to measure network similarity and assess how road network inference performance varies between differing scenes and environments.
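A minimal sketch of how a path-length-based comparison between a ground-truth road graph and an inferred one might look, using networkx; the node sampling, the edge attribute name ("length"), and the simple averaging are illustrative assumptions rather than the paper's exact integrated metric.

```python
# Hedged sketch: compare shortest-path lengths between a ground-truth road
# graph and an inferred one over sampled node pairs. Node ids and the
# "length" edge attribute are illustrative assumptions.
import itertools
import networkx as nx

def path_length_difference(g_true, g_pred, nodes, weight="length"):
    """Average absolute difference in shortest-path length over node pairs
    reachable in both graphs (a stand-in for the integrated metric)."""
    diffs = []
    for u, v in itertools.combinations(nodes, 2):
        try:
            d_true = nx.shortest_path_length(g_true, u, v, weight=weight)
            d_pred = nx.shortest_path_length(g_pred, u, v, weight=weight)
        except (nx.NetworkXNoPath, nx.NodeNotFound):
            continue  # pair missing or disconnected in one of the graphs
        diffs.append(abs(d_true - d_pred))
    return sum(diffs) / len(diffs) if diffs else float("inf")
```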
2262
75738
An Intrusion Detection System Based on K-Means, K-Medoids and Support Vector Clustering Using Ensemble
Abstract:
Presently, the security of computer networks is rising in importance, and many studies have been conducted in this field. With the penetration of internet networks into different fields, many things need to be done to provide secure industrial and non-industrial networks. Firewalls, appropriate Intrusion Detection Systems (IDS), encryption protocols for sending and receiving information, and the use of authentication certificates are among the things that should be considered for system security. The aim of the present study is to use the outcome of several algorithms, which reduces IDS errors, in a way that improves system security and avoids additional overload on the system. Finally, from the obtained results we can also detect the number and percentage of further sub-attacks. By running the proposed system, which is based on combining multi-algorithmic outcomes, and comparing it with the single-algorithm methods, we observed an attack detection rate of 78.64%, an improvement of 3.14% over the individual algorithms.
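A minimal sketch of a voting ensemble of clustering-based detectors in Python; KMeans distance-to-centroid and a One-Class SVM stand in here for the K-means / K-medoids / support-vector-clustering components, and the feature matrix and thresholds are assumptions.

```python
# Hedged sketch: majority-vote ensemble of clustering-based anomaly detectors.
# KMeans distance-to-centroid and One-Class SVM stand in for the paper's
# components; the percentile threshold and feature matrix X are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

def ensemble_flags(X, n_clusters=5, pct=95):
    # Detector 1: distance to the nearest K-means centroid
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    dist = np.min(km.transform(X), axis=1)
    flags_km = dist > np.percentile(dist, pct)

    # Detector 2: One-Class SVM decision (-1 means anomaly)
    oc = OneClassSVM(nu=0.05, gamma="scale").fit(X)
    flags_svm = oc.predict(X) == -1

    # Vote: flag a record only when the detectors agree (with more detectors,
    # use a count threshold instead)
    votes = flags_km.astype(int) + flags_svm.astype(int)
    return votes >= 2

X = np.random.RandomState(1).rand(500, 8)   # placeholder connection features
print(ensemble_flags(X).sum(), "records flagged as possible intrusions")
```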
2261
75569
Task Scheduling on Parallel Systems Using Genetic Algorithm
Abstract:
Scheduling and mapping an application task graph onto multiprocessor parallel systems is considered one of the most crucial and critical NP-complete problems. Many genetic algorithms have been proposed to solve such problems. In this paper, two genetic-approach-based algorithms have been designed and developed, with and without task duplication. The proposed algorithms work on two fitness functions. The first fitness function, i.e., task fitness, is used to minimize the total finish time of the schedule (schedule length), while the second fitness function, i.e., process fitness, is concerned with allocating tasks to the most efficient processor from the list of available processors (load balance). The proposed genetic-based algorithms have been experimentally implemented and evaluated against other state-of-the-art, popular, and widely used algorithms.
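A minimal sketch of the two fitness functions (schedule length and load balance), assuming a simple chromosome of (task, processor) assignments with known task costs; precedence constraints and communication delays are omitted for brevity.

```python
# Hedged sketch of the two fitness functions described above. A chromosome is
# taken to be a list of (task, processor) assignments with known task costs;
# precedence constraints and communication delays are omitted, so this only
# illustrates the makespan and load-balance objectives.
def task_fitness(assignment, task_cost, n_procs):
    """Schedule length (makespan): finish time of the busiest processor."""
    finish = [0.0] * n_procs
    for task, proc in assignment:
        finish[proc] += task_cost[task]
    return max(finish)

def process_fitness(assignment, task_cost, n_procs):
    """Load balance: spread of processor loads (lower is better)."""
    load = [0.0] * n_procs
    for task, proc in assignment:
        load[proc] += task_cost[task]
    return max(load) - min(load)

costs = {"t1": 4.0, "t2": 2.0, "t3": 3.0, "t4": 1.0}   # hypothetical task costs
chromo = [("t1", 0), ("t2", 1), ("t3", 0), ("t4", 1)]  # hypothetical genes
print(task_fitness(chromo, costs, 2), process_fitness(chromo, costs, 2))
```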
2260
75502
Qualitative Data Analysis for Health Care Services
Abstract:
This study was designed to enable the application of a multivariate technique to the interpretation of categorical data for measuring satisfaction with health care services in Turkey. The data were collected from a total of 17,726 respondents. The establishment of the sample group and the collection of the data were carried out by a joint team from the Ministry of Health and the Turkish Statistical Institute (TurkStat). Multiple correspondence analysis (MCA) was applied to the data of 2,882 respondents who answered the questionnaire in full. The analysis indicated that, in the evaluation of health services, females, public employees, and younger and more highly educated individuals were more concerned and more likely to complain than males, private-sector employees, and older and less educated individuals. This study demonstrates public awareness of health services and health care satisfaction in Turkey. Awareness of health service quality increases with education level. Older individuals and males appear to have lower expectations of health services.
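A minimal sketch of the core of multiple correspondence analysis, assuming a toy set of categorical survey answers: one-hot encode the responses and take a correspondence-analysis SVD of the indicator matrix.

```python
# Hedged sketch of the core of multiple correspondence analysis (MCA):
# a correspondence-analysis SVD of the one-hot indicator matrix of the
# categorical survey answers. The toy data frame is an illustrative assumption.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "sex": ["F", "M", "F", "M", "F"],
    "sector": ["public", "private", "public", "public", "private"],
    "satisfied": ["yes", "no", "no", "yes", "yes"],
})
Z = pd.get_dummies(df).to_numpy(dtype=float)      # indicator matrix
P = Z / Z.sum()                                   # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)               # row / column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sing, Vt = np.linalg.svd(S, full_matrices=False)
row_coords = (U * sing) / np.sqrt(r)[:, None]     # principal row coordinates
print(row_coords[:, :2])                          # first two MCA dimensions
```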
2259
75461
Knowledge Discovery and Data Mining Techniques in Textile Industry
Abstract:
This paper addresses the issues and techniques of applying data mining in the textile industry. Data mining was applied to data on the stitching of garment products obtained from a textile company. The CHAID algorithm, the C&RT algorithm, regression analysis, and artificial neural networks were applied to the data. Classification-based analyses were used in the data mining, and a decision model relating production per person to the variables affecting production was obtained by this method. The results show that as daily working time increases, production per person decreases. In addition, total daily working time and production per person show the strongest, and negative, relationship.
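A minimal sketch of the C&RT-style step, assuming synthetic data: fit a regression tree relating daily working time to production per person and inspect the (negative) relationship.

```python
# Hedged sketch of the C&RT-style analysis: fit a regression tree relating
# daily working time to production per person. The small synthetic data set is
# purely illustrative, not the company data used in the study.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
hours = rng.uniform(6, 12, 200)                        # daily working time
production = 80 - 3.5 * hours + rng.normal(0, 4, 200)  # assumed negative trend
X = hours.reshape(-1, 1)

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, production)
print(tree.predict([[8.0], [11.0]]))          # longer days -> lower predicted output
print(np.corrcoef(hours, production)[0, 1])   # negative correlation, as reported
```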
2258
75458
An Evaluation of ISO 9001:2008 and ISO 9001:2015 Standard Changes in Quality Management System
Abstract:
The objective of this study is to provide insight to enterprises, which need to maintain their sustainability in harmony with changing competition conditions, technology, and laws, regarding the ISO 9001:2015 directives newly implemented in Turkey, and to speed up their harmonization process. In the study, ISO 9001:2015, which was planned to be put in force and existed as a draft, was examined and its differences from the previous standard, ISO 9001:2008, were determined. In order to identify the differences, a survey was conducted among enterprises that implement a quality system. The survey also gathered the enterprises' points of view regarding quality, the present Quality Management System, and the draft document of the new quality management system to be put in force. According to the findings obtained at the end of the study, the enterprises attach importance to quality and follow developments in the Quality Management System, and they consider the changes in the new draft document necessary.
2257
75446
Comparative Advantage of Mobile Agent Application in Procuring Software Products on the Internet
Abstract:
This paper brings to the fore the inherent advantages of applying mobile agents to procure software products rather than downloading software content over the internet. It proposes a system in which the product is delivered on a compact disc with a mobile agent as the deliverable. The client/user purchases a software product but must connect to the remote server of the software developer before installation. The user provides an activation code that activates the mobile agent, which is part of the software product on the compact disc. The validity of the activation code is checked at the developer's end on connection, to ascertain authenticity and prevent piracy. The system is evaluated by downloading two different software products and comparing this with installing the same products from compact disc with the mobile agent application. Downloading software content from the developer's database, as in the traditional method, requires a continuously open connection between the client and the developer's end, which is not always economically or technically feasible. A mobile agent, after being dispatched into the network, becomes independent of the creating process and can operate asynchronously and autonomously. It can reconnect later, after completing its task, and return to deliver its results. Response time and network load are minimal with the mobile agent approach.
2256
75444
An Approach to Secure Mobile Agent Communication in Multi-Agent Systems
Abstract:
An inter-agent communication manager facilitates communication among mobile agents via a message-passing mechanism. To date, all Foundation for Intelligent Physical Agents (FIPA) compliant agent systems are capable of exchanging messages following the standard format for sending and receiving messages. Previous works tend to secure the messages exchanged among a community of collaborative agents commissioned to perform specific tasks using cryptosystems. However, that approach is characterized by computational complexity due to the encryption and decryption processes required at the two ends. The proposed approach to secure agent communication allows only agents created by the host agent server to communicate via the agent communication channel provided by the host agent platform. These agents are assumed to be harmless. Therefore, to secure the communication of legitimate agents from intrusion by external agents, a two-phase policy enforcement system was developed. The first phase constrains an external agent to run only on the network server, while the second phase confines the activities of the external agent to its execution environment. To implement the proposed policy, a controller agent was charged with screening any external agent entering the local area network and preventing it from migrating to the agent execution host where the legitimate agents are running. On arrival of the external agent at the host network server, an introspector agent monitors and restrains its activities. This approach secures legitimate agent communication from man-in-the-middle and replay attacks.
2255
75439
Evaluating 8D Reports Using Text-Mining
Abstract:
Increasing quality requirements make reliable and effective quality management indispensable. This includes the complaint handling in which the 8D method is widely used. The 8D report as a written documentation of the 8D method is one of the key quality documents as it internally secures the quality standards and acts as a communication medium to the customer. In practice, however, the 8D report is mostly faulty and of poor quality. There is no quality control of 8D reports today. This paper describes the use of natural language processing for the automated evaluation of 8D reports. Based on semantic analysis and text-mining algorithms the presented system is able to uncover content and formal quality deficiencies and thus increases the quality of the complaint processing in the long term.
2254
75352
A Survey on Critical Infrastructure Monitoring Using Wireless Sensor Networks
Abstract:
There are diverse applications of wireless sensor networks (WSNs) in the real world, typically involving some kind of monitoring, tracking, or controlling activity. In an application, a WSN is deployed over the area of interest to sense and detect events, collect data through its sensors in a geographical area, and transmit the collected data to a Base Station (BS). This paper presents an overview of the research solutions available in the field of environmental monitoring applications, and more precisely the problem of critical area monitoring using wireless sensor networks.
2253
75284
Localization of Buried People Using Received Signal Strength Indication Measurement of Wireless Sensor
Abstract:
Buildings collapse after earthquakes, and people may be buried under the ruins. Search and rescue should be conducted as soon as possible to save them. Therefore, considering the complicated environment, irregular aftershocks, and the fact that rescue allows no delay, a target localization method based on RSSI (Received Signal Strength Indication) is proposed. Target localization based on RSSI, with its low cost and low complexity, has been widely applied to node localization in WSNs (Wireless Sensor Networks). Based on the theory of RSSI transmission and the influence of the environment on RSSI, this article conducts experiments in five scenes, and multiple filtering algorithms are applied to the raw RSSI values in order to establish the signal model with the minimum test error in each case. The target location can then be calculated, through an improved centroid algorithm, from the distances estimated with the signal model. Results show that the localization technique based on RSSI is suitable for large-scale node localization. Among the filtering algorithms, the mixed filtering algorithm (the average of mean, median, and Gaussian filtering) performs better than any single filtering algorithm, and by using the signal model, the minimum distance error over the five scenes is about 3.06 m.
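A minimal sketch of the described pipeline, assuming a log-distance path-loss model with constants A and n: filter the raw RSSI readings (mean/median/Gaussian mix), convert them to distances, and take a weighted centroid over the anchor nodes.

```python
# Hedged sketch of the RSSI pipeline described above: filter raw RSSI readings,
# map them to distances with a log-distance path-loss model, then estimate the
# target position as a weighted centroid over the anchor nodes. The constants
# A (RSSI at 1 m) and n (path-loss exponent) and the readings are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def mixed_filter(rssi):
    """Average of mean, median and Gaussian-filtered values (the 'mixed' filter)."""
    rssi = np.asarray(rssi, dtype=float)
    return (rssi.mean() + np.median(rssi)
            + gaussian_filter1d(rssi, sigma=1).mean()) / 3.0

def rssi_to_distance(rssi, A=-45.0, n=2.5):
    """Log-distance path-loss model: rssi = A - 10*n*log10(d)."""
    return 10 ** ((A - rssi) / (10 * n))

def weighted_centroid(anchors, distances):
    w = 1.0 / np.asarray(distances)           # closer anchors weigh more
    return (np.asarray(anchors) * w[:, None]).sum(axis=0) / w.sum()

anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]            # hypothetical nodes
raw = [[-60, -62, -61], [-70, -69, -71], [-65, -66, -64], [-72, -73, -71]]
d = [rssi_to_distance(mixed_filter(r)) for r in raw]
print(weighted_centroid(anchors, d))
```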
2252
75218
Key Frame Based Video Summarization via Dependency Optimization
Abstract:
With the rapid growth of digital video and data communications, video summarization, which provides a shorter version of a video for fast browsing and retrieval, has become necessary. Key frame extraction is one mechanism for generating a video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most existing approaches select key frames heuristically; hence, the selected key frames may not be the most distinct frames and/or may not cover the entire content of the video. In this paper, we propose a video summarization method that provides reasonable objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual information as our objective function for maximizing coverage of the entire video content while minimizing redundancy among the selected key frames. The proposed key frame extraction algorithm finds key frames by solving an optimization problem. Through experiments, we demonstrate the success of the proposed approach, which produces video summaries with better coverage of the entire video content and less redundancy among key frames compared to state-of-the-art approaches.
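A minimal sketch of greedy key-frame selection that trades off coverage against redundancy; a Gaussian frame-similarity kernel over synthetic per-frame features stands in here for the quadratic-mutual-information terms used in the paper.

```python
# Hedged sketch: greedy key-frame selection trading coverage of all frames
# against redundancy among selected frames. A Gaussian similarity kernel over
# per-frame feature vectors stands in for the paper's quadratic mutual
# information terms; the frame features are synthetic placeholders.
import numpy as np

def gaussian_kernel(F, sigma=1.0):
    d2 = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def select_keyframes(features, k, lam=0.5):
    K = gaussian_kernel(np.asarray(features, dtype=float))
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(features)):
            if i in selected:
                continue
            cand = selected + [i]
            coverage = K[:, cand].max(axis=1).mean()    # represent all frames
            redundancy = K[np.ix_(cand, cand)].mean()   # similarity among picks
            score = coverage - lam * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

frames = np.random.RandomState(0).rand(50, 16)   # placeholder frame features
print(select_keyframes(frames, k=5))
```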
2251
75216
Perceptual Image Coding by Exploiting Internal Generative Mechanism
Abstract:
In perceptual image coding, the objective is to shape the coding distortion such that the amplitude of the distortion does not exceed the error visibility threshold, or to remove perceptually redundant signals from the image. While most research focuses on color image coding, perceptual quantizers developed for luminance signals are often applied directly to chrominance signals, making color image compression methods inefficient. In this paper, the internal generative mechanism (IGM) is integrated into the design of a color image compression method. An IGM working model based on structure-based spatial masking is used to assess subjective distortion visibility thresholds that are more consistent with human vision. An estimation method for the structure-based distortion visibility thresholds of the color components is further presented, in a locally adaptive way, to design the quantization process in a wavelet color image compression scheme. Since the lowest-subband coefficient matrix of an image in the wavelet domain preserves the local properties of the image in the spatial domain, the error visibility threshold inherent in each coefficient of the lowest subband of each color component is estimated using the proposed spatial error visibility threshold assessment. The threshold inherent in each coefficient of the other subbands of each color component is then estimated in a locally adaptive fashion based on the distortion energy allocation. Because the error visibility thresholds are estimated using predicted and reconstructed signals of the color image, the coding scheme incorporating the locally adaptive perceptual color quantizer does not require side information. Experimental results show that the entropies of the three color components obtained using the proposed IGM-based color image compression scheme are lower than those obtained using the existing color image compression method at perceptually lossless visual quality.
2250
75214
Robust Medical Image Watermarking Using Frequency Domain and Least Significant Bits Algorithms
Abstract:
Watermarking and steganography have been gaining importance recently because of copyright protection and authentication. In watermarking, we embed a stamp, logo, noise, or image into multimedia elements such as images, video, audio, animation, and text. Several works have been done on watermarking for different purposes. In this research work, we used watermarking techniques to embed patient information into medical magnetic resonance (MR) images. Two classes of methods were used: the frequency domain (Discrete Wavelet Transform (DWT), Discrete Cosine Transform (DCT), and Discrete Fourier Transform (DFT)) and the spatial domain (Least Significant Bits (LSB)). Experimental results show that embedding in the frequency domain resists one type of attack, while embedding in the spatial domain resists another group of attacks. The Peak Signal-to-Noise Ratio (PSNR) and Similarity Ratio (SR) were used as the two measurement values for testing. These two values give very promising results for information hiding in medical MR images.
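A minimal sketch of the spatial-domain (LSB) embedding step, assuming a synthetic MR slice and message: pack patient-information bytes into bits and write them into the least significant bits of the pixels.

```python
# Hedged sketch of the spatial-domain (LSB) embedding step: patient information
# is packed into bits and written into the least significant bits of the image
# pixels. The synthetic image and message are illustrative only.
import numpy as np

def embed_lsb(image, message: bytes):
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = image.flatten()
    if bits.size > flat.size:
        raise ValueError("message too long for this image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the LSBs
    return flat.reshape(image.shape)

def extract_lsb(image, n_bytes):
    bits = image.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

mr = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # placeholder MR slice
stego = embed_lsb(mr, b"Patient: 0042; Study: 2017-05-01")
print(extract_lsb(stego, 32))
```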
2249
75122
Digital Recording System Identification Based on Audio File
Abstract:
The objective of this work is to develop a theoretical framework for reliable digital recording system identification from digital audio files alone, for forensic purposes. A digital recording system consists of a microphone and a digital sound processing card; we view this cascade as a system with an unknown transfer function. We expect microphone/sound-card combinations of the same manufacturer and model to have very similar, near-identical transfer functions, barring any unique manufacturing defects. Input voice (or other) signals are modeled as non-stationary processes. The technical problem under consideration then becomes blind deconvolution with non-stationary inputs, as it manifests itself in the specific application of classifying digital audio recording equipment.
2248
75068
Content-Based Image Retrieval Using HSV Color Space Features
Abstract:
In this paper, a method is presented for content-based image retrieval. A content-based image retrieval system searches for a query image, based on its visual content, in an image database in order to retrieve similar images. With the aim of simulating the human visual system's sensitivity to image edges and color features, the concept of the color difference histogram (CDH) is used. The CDH encodes the perceptual color difference between two neighboring pixels with regard to colors and edge orientations. Since the HSV color space is close to the human visual system, the CDH is calculated in this color space. In addition, to improve the color features, the color histogram in HSV color space is also used as a feature. Among the extracted features, efficient features are selected using entropy and correlation criteria. The final features capture the content of images most efficiently. The proposed method has been evaluated on three standard databases: Corel-5k, Corel-10k, and UKBench. Experimental results show that the accuracy of the proposed image retrieval method is significantly improved compared to recently developed methods.
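A minimal sketch of an HSV color-histogram feature and a histogram-distance comparison with OpenCV; the bin counts and synthetic images are assumptions, and the full CDH descriptor would additionally encode color differences along edge orientations.

```python
# Hedged sketch of the HSV feature step: convert to HSV, build a color
# histogram, and compare two images by a histogram distance. Bin counts and
# the synthetic images are assumptions; the paper's CDH descriptor also
# encodes color differences along edge orientations, which is omitted here.
import cv2
import numpy as np

def hsv_histogram(bgr, bins=(8, 4, 4)):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, None).flatten()

img1 = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # placeholder images
img2 = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
d = cv2.compareHist(hsv_histogram(img1).astype(np.float32),
                    hsv_histogram(img2).astype(np.float32),
                    cv2.HISTCMP_CHISQR)
print("chi-square distance:", d)
```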
2247
75049
A Conceptual Cyber-Physical System Architecture for Urban Farming and Precision Agronomy in Smart Cities
Abstract:
In recent years, climate change, high population growth rates, water scarcity, contaminated farmland, eroding soils, and other factors have raised serious concerns in the area of sustainable food production. In addition, it has been estimated that by 2050, as much as 80 percent of the earth's population will reside in cities. As a consequence, new forms of food production have been explored, especially in places close to or even within the consumption areas, that is, cities. In this context, cities are evolving into intelligent infrastructures in which the aim is to automate, optimize, and improve all possible aspects, including urban agriculture and precision agronomy. Despite recent research efforts, common ground still has to be defined on how to efficiently automate and provide intelligence to urban agriculture implementations and integrate them in a transparent and scalable manner into overall smart city solutions. For this, we believe that smart cities must be conceptualized and abstracted on a higher and different plane than the supporting information and communications technology (ICT) infrastructure. The essential properties that make a city smart, or a system of systems, must, however, be the architectural design assets of the underlying planes. This paper presents a conceptual CPS architecture for urban farming and precision agronomy in smart cities based on Software-Defined Networks (SDNs) and cyber-physical systems. The proposed architecture comprises three layers.
2246
74913
Burnout Recognition for Call Center Agent by Using Skin Color Detection with Hand Poses
Abstract:
Call centers have been expanding, and their influence on various markets is increasing. A call center's work is known as one of the most demanding and stressful jobs. In this paper, we propose a fatigue detection system to detect burnout of call center agents in the case of neck pain and upper back pain. Our proposed system is based on a computer vision technique combining skin color detection with the Viola-Jones object detector. To recognize the hand poses caused by signs of stress, the YCbCr color space is used to detect the skin color region, including the face and the hand poses around the areas related to neck ache and upper back pain. A cascade of classifiers (Viola-Jones) is used for face recognition, to separate the face from the skin color region. The detection of hand poses then supports the evaluation of neck pain and upper back pain using the skin color detection and face recognition results. The system performance was evaluated using two groups of datasets created in the laboratory to simulate a call center environment. Our call center agent burnout detection system was implemented using a web camera and processed in MATLAB. From the experimental results, our system achieved 96.3% for upper back pain detection and 94.2% for neck pain detection.
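A minimal sketch of the detection front end with OpenCV, assuming commonly used Cr/Cb skin thresholds: Viola-Jones face detection plus skin segmentation in the YCrCb space; the pain-gesture classification that follows is omitted.

```python
# Hedged sketch of the detection front end: Viola-Jones face detection plus
# skin segmentation by thresholding in YCrCb. The Cr/Cb threshold ranges are
# commonly used values, not the paper's exact ones, and the gesture/posture
# classification that follows is omitted.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def skin_mask(bgr):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bounds
    upper = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bounds
    return cv2.inRange(ycrcb, lower, upper)

frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)  # webcam frame stub
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
mask = skin_mask(frame)
print(len(faces), "face(s); skin pixels:", int((mask > 0).sum()))
```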
2245
74907
Features Extraction for Real-Time Traffic Sign Detection and Recognition System
Abstract:
The development of real-time traffic sign detection and recognition (TSDR) plays a key part in intelligent transport systems and advanced driver assistance systems. A TSDR system supports drivers in driving safely on the road and reduces the number of traffic accidents. It reminds them of the regulations, limitations, and status of the road so that drivers can control their driving speed in time. Drivers may not obey the rules and may drive according to their own judgment when they lack knowledge of, or miss, the actual road signs; in these situations, drivers put themselves in danger and may become involved in accidents. There are many challenges in developing a real-time TSDR system due to motion artifacts, variable lighting and weather conditions, and the condition of the traffic signs themselves. The aim of the proposed research is to develop an efficient and effective TSDR system in real time. Traffic sign colors are red, blue, and yellow, and their shapes are triangles, circles, and diamonds. The RGB color model is sensitive to lighting variations, while the HSI color model is more resistant to varying light intensity; the RGB color model is therefore often converted to another model such as HSI/HSV for robustness to lighting conditions. However, the conversion is time-consuming and does not always bring significant gains. A fixed threshold on the input image cannot detect traffic signs when the environmental factors change; therefore, this system calculates the threshold value from each input image frame. The histogram of oriented gradients (HOG) is a well-known feature extraction method, but HOG requires computing the gradient of every pixel in the patch, so these computations are time-consuming and not effective for low-powered devices. A system may extract many features from a detected traffic sign image, but some of them are not influential for recognition, so the significant features need to be extracted for a TSDR system. The main requirement of real-time TSDR is fast processing time. The main contribution of this work is to propose new features for a real-time traffic sign recognition system. Our proposed system has two parts: detection and recognition. A video is given as input to the system, and an image frame is extracted after every 5 frames. The system calculates threshold values based on the RGB input image and chooses the best threshold values for detecting candidate traffic sign regions, so faded and blurred traffic signs can also be detected, and instabilities caused by lighting variations are mitigated. The system then calculates the aspect ratio to decide whether a candidate region is a traffic sign or not. In the recognition step, four newly proposed features (termination points, bifurcation points, 90° angles, and turn direction points) are extracted from the validated image. The features are given as input to an adaptive neuro-fuzzy inference system (ANFIS) for training and testing. The system provides minimized processing time and good detection and recognition performance in real time.
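A minimal sketch of the detection stage, assuming illustrative constants: a per-frame threshold computed from the red-channel statistics of the input image, followed by an aspect-ratio check on the candidate regions.

```python
# Hedged sketch of the detection stage: a per-frame threshold computed from the
# red-channel statistics of the input image (instead of a fixed threshold),
# followed by an aspect-ratio check on candidate regions. All constants are
# assumptions; uses the OpenCV 4 findContours return signature.
import cv2
import numpy as np

def red_candidates(bgr, k=1.5, min_area=100):
    b, g, r = cv2.split(bgr.astype(np.float32))
    redness = r - (g + b) / 2.0
    thresh = redness.mean() + k * redness.std()        # threshold from this frame
    mask = (redness > thresh).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= min_area and 0.7 <= w / float(h) <= 1.4:  # sign-like aspect ratio
            boxes.append((x, y, w, h))
    return boxes

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # video frame stub
print(red_candidates(frame))
```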
2244
74906
Myanmar Consonants Recognition System Based on Lip Movements Using Active Contour Model
Abstract:
Humans use visual information to understand speech content in noisy conditions or in situations where the audio signal is not available. The primary advantage of visual information is that it is not affected by acoustic noise or cross-talk among speakers. Using visual information from lip movements can improve the accuracy and robustness of automatic speech recognition. However, a major challenge for most automatic lip reading systems is to find a robust and efficient method for extracting the linguistically relevant speech information from a lip image sequence. This is a difficult task due to variation caused by different speakers, illumination, camera settings, and the inherently low luminance and chrominance contrast between the lip and non-lip regions. Several researchers have been developing methods to overcome these problems; one of them is lip reading. Moreover, it is well known that visual information about speech obtained through lip reading is very useful for human speech recognition. Lip reading is the technique of comprehensively understanding the underlying speech by processing the movement of the lips. Therefore, a lip reading system is one of the supportive technologies for hearing-impaired or elderly people, and it is an active research area; the need for lip reading systems is ever increasing for every language. This research aims to develop a visual teaching method for hearing-impaired persons in Myanmar, showing how to pronounce words precisely by identifying the features of lip movement. The proposed research will build a lip reading system for Myanmar consonants: the one-syllable consonants (င (Nga)၊ ည (Nya)၊ မ (Ma)၊ လ (La)၊ ၀ (Wa)၊ သ (Tha)၊ ဟ (Ha)၊ အ (Ah)) and the two-syllable consonants (က (Ka Gyi)၊ ခ (Kha Gway)၊ ဂ (Ga Nge)၊ ဃ (Ga Gyi)၊ စ (Sa Lone)၊ ဆ (Sa Lain)၊ ဇ (Za Gwe)၊ ဒ (Da Dway)၊ ဏ (Na Gyi)၊ န (Na Nge)၊ ပ (Pa Saug)၊ ဘ (Ba Gone)၊ ရ (Ya Gaug)၊ ဠ (La Gyi)). The proposed system has three subsystems: the first is the lip localization system, which localizes the lips in the digital input; the second is the feature extraction system, which extracts lip movement features suitable for visual speech recognition; and the third is the classification system. In the proposed research, the Two-Dimensional Discrete Cosine Transform (2D-DCT) and Linear Discriminant Analysis (LDA) with an Active Contour Model (ACM) will be used for extracting lip movement features. A Support Vector Machine (SVM) classifier is used for finding the class parameters and class numbers in the training and testing sets. Experiments will then be carried out on the recognition accuracy of Myanmar consonants using only the visual information of lip movements, which is useful for visual speech in the Myanmar language. The results will show the effectiveness of lip movement recognition for Myanmar consonants. This system will help hearing-impaired persons as a language learning application, and it can also be useful for normal-hearing persons in noisy environments or conditions where they want to find out what was said by other people without hearing their voices.
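A minimal sketch of the feature and classification stages, assuming an already localized lip region and synthetic data: a 2D-DCT of the region, a low-frequency coefficient block as the feature vector, and an SVM classifier (the active-contour localization and LDA projection are omitted).

```python
# Hedged sketch of the feature and classification stages: 2D-DCT of the
# (already localized) lip region, keeping a low-frequency block as the feature
# vector, followed by an SVM classifier. Active-contour lip localization and
# the LDA projection are omitted; the training data are synthetic stubs.
import numpy as np
from scipy.fftpack import dct
from sklearn.svm import SVC

def dct2_features(lip_roi, block=8):
    roi = np.asarray(lip_roi, dtype=float)
    coeffs = dct(dct(roi, axis=0, norm="ortho"), axis=1, norm="ortho")
    return coeffs[:block, :block].flatten()        # low-frequency 8x8 block

rng = np.random.default_rng(0)
X = np.array([dct2_features(rng.random((32, 48))) for _ in range(60)])
y = rng.integers(0, 3, 60)                         # 3 placeholder consonant classes

clf = SVC(kernel="rbf", C=10.0).fit(X[:45], y[:45])
print("held-out accuracy on the stub data:", clf.score(X[45:], y[45:]))
```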
2243
74812
Deep Convolutional Neural Networks Supervised Learning for Land Cover Remote Sensing Classification through AlexNet
Abstract:
This study aims to evaluate the impact of classifying labelled remote sensing images using a convolutional neural network (CNN) architecture, namely AlexNet, on different land cover scenarios based on two remotely sensed datasets. The analytical study investigates computational time and overall accuracy, achieving state-of-the-art results. A set of experiments was conducted to assess the effectiveness of the selected convolutional neural network using two implementation approaches, named fully trained and fine-tuned. For validation purposes, two remote sensing datasets were used in the experiments: the Aerial Image Dataset (AID), a recently released dataset with 30 challenging, highly interrelated classes collected using the Google Earth sensor, in which the number of samples per class ranges from 200 to 400 images of size 600 x 600; and the Remote Sensing Seven Classes dataset (RSSCN7), which was also collected from Google Earth and includes seven categories of images of size 400 x 400. These datasets were chosen for their wide diversity of input data, number of classes, amount of labelled data, and texture patterns, which provides a good benchmark for evaluating the proposed deep learning image classification method. For running the experiments, a specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed; it has shown efficiency in training, validation, and testing. As a result, the fully trained approach achieved modest results on the two datasets, AID and RSSCN7, of 73.346% and 71.857% within 24 min 1 sec and 8 min 3 sec, respectively. However, a dramatic improvement in classification performance using the fine-tuning approach was recorded, at 92.5% and 91%, within 24 min 44 sec and 8 min 41 sec, respectively. This conclusion opens opportunities for better classification performance in various applications such as agriculture and crop remote sensing.
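A minimal sketch of the fine-tuning approach with torchvision, assuming AID's 30 classes and a placeholder batch: load an ImageNet-pretrained AlexNet, replace the final classifier layer, and run one training step (data loading and augmentation omitted).

```python
# Hedged sketch of the fine-tuning approach: load an ImageNet-pretrained AlexNet
# from torchvision, replace the last classifier layer to match the number of
# land-cover classes (30 for AID, 7 for RSSCN7), and run one training step with
# a small learning rate. Data loading and augmentation are omitted.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 30                                    # AID; use 7 for RSSCN7
model = models.alexnet(pretrained=True)
model.classifier[6] = nn.Linear(4096, num_classes)  # swap the final FC layer

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

images = torch.randn(4, 3, 224, 224)                # placeholder batch
labels = torch.randint(0, num_classes, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("one fine-tuning step, loss =", float(loss))
```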
2242
74685
Discrete Multi-Valued Search Space Cockroach Swarm Algorithm for Travelling Salesman Problem
Abstract:
Many real-life optimization problems, including scheduling, transportation, inventory planning, vehicle routing, production planning, communication operations, computer operations, financial assets, risk management, revenue management, blending problems, process design, depot selection, and network design, have discrete-valued variables. The reason for investigating new algorithms is that none of the existing algorithms has been confirmed to be able to solve all problems adequately. Some algorithms provide better solutions to certain problems than others; hence, there is a need for improved discrete-valued algorithms to solve real-life, discrete-valued, complex optimization problems. A discrete multi-valued search space cockroach swarm optimization algorithm was constructed to provide solutions to optimization problems with discrete ranges greater than 2. This was designed as an improvement on the binary version, which can only address optimization problems with binary values. The proposed algorithm was evaluated in a set of experiments that tested its performance on combinatorial optimization. It was applied to instances of the traveling salesman problem, and the results obtained from the experiments were compared with those of the existing ant colony optimization algorithm. The proposed algorithm shows performance similar to that of the existing well-known algorithm.
2241
74628
Flexible 3D Virtual Desktop Using Handles for Cloud Environments
Abstract:
Due to improvements in the performance of computer hardware and the development of operating systems, multi-tasking across several programs has become one of the basic functions for computer users. It is natural for computer users to want more functional, convenient, and visual GUI (Graphical User Interface) features. In this paper, a 3D virtual desktop system is proposed to meet users' requirements for cloud environments, such as a virtual desktop function in the Windows environment. The proposed system uses the handles of the windows to hide or restore several windows. It connects the list of task spaces using a circular doubly linked list to manage the handles. Each handle list is registered in the corresponding task space being executed. The 3D virtual desktop is efficient and flexible in handling the number of task spaces and can help users work in more comfortable environments. Acknowledgment: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Korean government (MSIP) (NRF-2015R1D1A1A01057680).
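A minimal sketch of the handle bookkeeping, assuming integer stand-ins for real Windows window handles: a circular doubly linked list per task space that supports registering a handle and rotating through the ring.

```python
# Hedged sketch of the handle bookkeeping described above: a circular doubly
# linked list whose nodes store window handles for a task space, supporting
# insertion and rotation. The integer "handle" values stand in for real
# Windows window handles (HWNDs).
class HandleNode:
    def __init__(self, handle):
        self.handle = handle
        self.prev = self.next = self

class TaskSpace:
    def __init__(self):
        self.head = None

    def register(self, handle):
        node = HandleNode(handle)
        if self.head is None:
            self.head = node
        else:                       # splice before head (end of the ring)
            tail = self.head.prev
            tail.next, node.prev = node, tail
            node.next, self.head.prev = self.head, node
        return node

    def rotate(self):
        """Advance to the next window in the ring (e.g., bring it to front)."""
        if self.head is not None:
            self.head = self.head.next
        return self.head

space = TaskSpace()
for hwnd in (0x1A2, 0x2B3, 0x3C4):      # hypothetical handles
    space.register(hwnd)
print(hex(space.rotate().handle))        # -> 0x2b3
```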
2240
74582
Sparsity-Based Unsupervised Unmixing of Hyperspectral Imaging Data Using Basis Pursuit
Abstract:
Mixing in hyperspectral imaging occurs due to the low spatial resolution of the cameras used. The sources present, called endmembers, contribute to the pixel spectra with different ratios called abundances. Unmixing the data cube is an important step for identifying the endmembers present in the cube and for the analysis of these images. Unsupervised unmixing is performed with no prior information about the given data cube. Sparsity is one of the recent approaches used in source recovery and unmixing techniques. The l1-norm basis pursuit is an optimization problem that solves this unmixing problem, where the endmembers are assumed to be sparse in a certain domain known as a dictionary. This optimization problem is solved using a proximal method or iterative thresholding. The l1-norm basis pursuit optimization problem, as a sparsity-based unmixing technique, was applied to real and synthetic hyperspectral data cubes.
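A minimal sketch of the per-pixel l1-regularised unmixing solved with iterative soft thresholding (ISTA), assuming a synthetic endmember dictionary and regularisation weight.

```python
# Hedged sketch of the sparsity-based unmixing step: solve the l1-regularised
# problem min_a 0.5*||y - D a||^2 + lam*||a||_1 for a pixel spectrum y with
# iterative soft thresholding (ISTA). The dictionary D, the pixel y, and the
# weight lam are synthetic / assumed.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_unmix(D, y, lam=0.05, n_iter=500):
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)
        a = soft_threshold(a - grad / L, lam / L)
    return a

rng = np.random.default_rng(0)
D = rng.random((100, 12))                  # 100 bands, 12 candidate endmembers
true_a = np.zeros(12)
true_a[[2, 7]] = [0.6, 0.4]
y = D @ true_a + 0.01 * rng.standard_normal(100)
print(np.round(ista_unmix(D, y), 2))       # sparse abundance estimate
```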
2239
74491
Linguistic Summarization of Structured Patent Data
Abstract:
Patent data have an increasingly important role in economic growth, innovation, technical advantage, and business strategy, and even in competition between countries. Analyzing patent data is crucial, since patents cover a large part of all technological information in the world. In this paper, we have used the linguistic summarization technique to verify the validity of hypotheses related to patent data stated in the literature.
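A minimal sketch of a linguistic summary in the Yager sense, assuming toy membership functions and patent records: the degree of truth of a statement such as "most recent patents have many claims".

```python
# Hedged sketch of a linguistic summary in the Yager sense: the degree of truth
# of a statement such as "most recent patents have many claims" computed from
# fuzzy membership functions. The membership shapes and the toy patent records
# are assumptions, not the hypotheses tested in the paper.
import numpy as np

def many_claims(n):        # fuzzy set "many claims": 0 below 5, 1 above 20
    return np.clip((n - 5) / 15.0, 0.0, 1.0)

def quantifier_most(p):    # fuzzy quantifier "most": 0 below 0.3, 1 above 0.8
    return np.clip((p - 0.3) / 0.5, 0.0, 1.0)

claims = np.array([3, 12, 25, 18, 7, 30, 22, 9])   # claims per (recent) patent
truth = quantifier_most(many_claims(claims).mean())
print("T('most recent patents have many claims') =", round(float(truth), 2))
```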
2238
74483
On Improving Breast Cancer Prediction Using General Regression Neural Networks-Conformal Prediction
Abstract:
The aim of this study is to help predict breast cancer and to construct a supportive model that will produce more reliable predictions, as a factor that is fundamental to public health. In this study, we utilize general regression neural networks (GRNN) and replace point predictions with prediction regions in order to achieve a reasonable level of confidence. The mechanism employed here utilises a novel machine learning framework called conformal prediction (CP), which assigns consistent confidence measures to predictions and is combined with the GRNN. We apply the resulting algorithm to the problem of breast cancer diagnosis. The results show that the predictions constructed by this method are reasonable and could be useful in practice.
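A minimal sketch combining a GRNN (Nadaraya-Watson kernel regression) with split-conformal calibration to turn point predictions into prediction intervals; the smoothing parameter and the synthetic data are assumptions.

```python
# Hedged sketch: a minimal GRNN (Nadaraya-Watson kernel regression) combined
# with split-conformal calibration, which turns point predictions into
# prediction intervals at a chosen confidence level. The smoothing parameter
# sigma and the synthetic data are assumptions.
import numpy as np

def grnn_predict(X_train, y_train, X, sigma=0.5):
    d2 = ((X[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
X = rng.random((300, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.standard_normal(300)
X_tr, y_tr = X[:150], y[:150]            # proper training set
X_cal, y_cal = X[150:250], y[150:250]    # calibration set
X_test = X[250:]

alpha = 0.1                              # 90% confidence
scores = np.abs(y_cal - grnn_predict(X_tr, y_tr, X_cal))
q = np.quantile(scores, np.ceil((1 - alpha) * (len(scores) + 1)) / len(scores))
pred = grnn_predict(X_tr, y_tr, X_test)
print("first prediction interval:", (pred[0] - q, pred[0] + q))
```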
2237
74473
Building a Dynamic News Category Network for News Sources Recommendations
Abstract:
News sources generally publish news in different broad categories. These categories can be either generic, such as Business or Sports, or time-specific, such as World Cup 2015 and Nepal Earthquake, or both. It is up to the news agencies to define the categories. Extracting news categories automatically from numerous online news sources is expected to be helpful in many applications, including news source recommendation and time-specific news category extraction. To address this issue, existing systems such as the DMOZ directory and the Yahoo directory are mostly used, though they are largely human-annotated and do not consider the time dynamism of the categories of news websites. As a remedy, we propose an approach to automatically extract news category URLs from news websites. A news category URL is a link that points to a category on a news website. We use the news category URLs as prior knowledge to develop a news source recommendation system, which contains news sources listed in various categories in order of ranking. In addition, we propose an approach to rank numerous news sources in different categories using parameters such as Traffic-Based Website Importance, Social Media Analysis, and Category-Wise Article Freshness. Experimental results on category URLs captured from the GDELT project from April 2016 to December 2016 show the adequacy of the proposed method.
2236
74461
An Empirical Study of the Impacts of Big Data on Firm Performance
Abstract:
In the present time, data to a data-driven knowledge-based economy is the same as oil to the industrial age hundreds of years ago. Data is everywhere in vast volumes! Big data analytics is expected to help firms not only efficiently improve performance but also completely transform how they should run their business. However, employing the emergent technology successfully is not easy, and assessing the roles of big data in improving firm performance is even much harder. There was a lack of studies that have examined the impacts of big data analytics on organizational performance. This study aimed to fill the gap. The present study suggested using firms’ intellectual capital as a proxy for big data in evaluating its impact on organizational performance. The present study employed the Value Added Intellectual Coefficient method to measure firm intellectual capital, via its three main components: human capital efficiency, structural capital efficiency, and capital employed efficiency, and then used the structural equation modeling technique to model the data and test the models. The financial fundamental and market data of 100 randomly selected publicly listed firms were collected. The results of the tests showed that only human capital efficiency had a significant positive impact on firm profitability, which highlighted the prominent human role in the impact of big data technology.