Excellence in Research and Innovation for Humanity

International Science Index

Commenced in January 1999 | Frequency: Monthly | Edition: International | Abstract Count: 39602

Computer and Information Engineering

2145
67079
Comparative Study of Conventional and Satellite Based Agriculture Information System
Abstract:
The purpose of this study is to compare the conventional crop monitoring system with the satellite-based crop monitoring system in Pakistan. The study was conducted for SUPARCO (Space & Upper Atmosphere Research Commission) and focused on wheat, the main cash crop of Pakistan and of Punjab province. It answers the question: which system is better in terms of cost, time, and manpower? The manpower calculated for the Punjab CRS (Crop Reporting Service) is 1,418 personnel, versus 26 personnel for SUPARCO. The total cost calculated is almost 13.35 million for SUPARCO and 47.705 million for CRS. The man-hours calculated for CRS are 1,543,200 hrs (136 days), versus 8,320 hrs (40 days) for SUPARCO, meaning SUPARCO workers finish their work 96 days earlier than CRS workers. The results show that the satellite-based crop monitoring system is more efficient than the conventional system in terms of manpower, cost, and time, and also generates earlier crop forecasts and estimations. The research instruments used included interviews, physical visits, group discussions, questionnaires, and the study of reports and workflows. Using Yamane's formula, 93 employees were selected for data collection, which was done through questionnaires and interviews. Comparative graphing was used to analyze the data and formulate the results. The findings also demonstrate that although conventional methods still have a strong hold on crop monitoring in Pakistan, it is time to bring change through technology so that agriculture can develop along modern lines.
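For reference, Yamane's formula gives the sample size n for a population N at margin of error e as n = N / (1 + N e^2). A minimal Python check, assuming the customary e = 0.10 and the reported CRS population of 1,418 personnel (the abstract implies rather than states these inputs):

    # Yamane's formula: n = N / (1 + N * e**2)
    def yamane(N, e=0.10):
        """Sample size for population N at margin of error e."""
        return N / (1 + N * e**2)

    print(round(yamane(1418)))  # -> 93, matching the 93 respondents reported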
2144
67045
Keynote Proposal - Discrete-Event Modeling and Simulation Methodologies: Past, Present and Future
Abstract:
Modeling and Simulation (M&S) methods have long been used to analyze the behavior of complex physical systems, and it is now common to use simulation as part of the scientific and technological discovery process. M&S advanced thanks to improvements in computer technology, which, in many cases, resulted in the development of simulation software using ad-hoc techniques. Formal M&S methods appeared in order to improve the development of very complex simulation systems. Some of these techniques proved successful in providing a sound base for the development of discrete-event simulation models, improving the ease of model definition and enhancing application development tasks, reducing costs and favoring reuse. The DEVS formalism is one such technique, which has proven successful in providing means for modeling while reducing development complexity and costs. DEVS model development is based on a sound theoretical framework, and the independence of M&S tasks has made it possible to run DEVS models on different environments (personal computers, parallel computers, real-time equipment, and distributed simulators) and middleware. We will present a historical perspective of discrete-event M&S methodologies, showing different modeling techniques. We will introduce the origins and general ideas of DEVS and compare it with some of these techniques. We will then show the current status of DEVS M&S and discuss a technological perspective on solving current M&S problems (including real-time simulation, interoperability, and model-centered development techniques). We will show examples of the current use of DEVS, including applications in different fields. Finally, we will present open topics in the area, including advanced methods for centralized, parallel, or distributed simulation and the need for real-time modeling techniques, together with our view of these fields.
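To make the DEVS structure concrete, the sketch below shows the classical components of an atomic DEVS model in Python: state, time-advance function ta, internal and external transition functions, and output function. It is a minimal illustration of the formalism, not any particular DEVS tool's API, and the single-job processor it models is a hypothetical example.

    # Minimal atomic DEVS model: a processor that takes proc_time to serve a job.
    class Processor:
        INFINITY = float("inf")

        def __init__(self, proc_time=2.0):
            self.proc_time = proc_time
            self.busy = False          # state
            self.job = None

        def ta(self):
            """Time advance: time until the next internal event."""
            return self.proc_time if self.busy else Processor.INFINITY

        def delta_ext(self, e, job):
            """External transition: a job arrives after elapsed time e."""
            if not self.busy:
                self.busy = True
                self.job = job

        def delta_int(self):
            """Internal transition: service completed."""
            self.busy = False

        def output(self):
            """Output function, invoked just before delta_int."""
            return self.job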
2143
67031
A Survey of Big Data Description, Deep Learning Algorithm Applications, Challenges and Trends
Abstract:
With advances in digital devices such as digital sensors, large amounts of data are being generated at such speed that a new area has emerged, named big data. Big data is not only produced by sensors; it can also come from human sources such as texts and images. Big data will have a great impact on technologies and computing: data volumes are projected to grow roughly twenty-fold by 2020, and current methods cannot deal with such data. The term big data refers to collecting, processing, and presenting the results of huge amounts of data that arrive at high speed in a variety of formats. The big data research area is becoming more and more important, with many applications in industry, the military, and other fields. In this paper, we conduct a near-comprehensive survey of what big data is, its research problems and challenges, and open research problems and trends. Such a survey of future research trends and open problems in this area is necessary because, although big data has many applications today, these data cannot be processed with traditional methods. To this end, we provide a comprehensive overview of what big data is, why it is important, and what the challenges in this area are, and we conclude by presenting one important tool for big data, deep learning, together with its impact and usage.
2142
66878
Stakeholder Management for Successful Software Projects
Abstract:
An alarming number of software projects fail to deliver the required functionality within the allotted budget and timeframe and with the required quality. Among the main reasons for this are poor stakeholder management, poor communication, and informal change management, which often stem from informal processes for identifying, engaging, and controlling stakeholders. Recently, to emphasize its importance, the Project Management Institute (PMI) updated the Project Management Body of Knowledge (PMBoK) to explicitly include the stakeholder management knowledge area. This knowledge area consists of four processes: identify stakeholders, plan stakeholder management, manage stakeholder engagement, and control stakeholder engagement. The use of appropriate techniques for stakeholder management in software projects can lead to higher-quality, more successful software. In this paper, we describe some proven techniques that can be used during the execution of the four stakeholder management processes. We also recommend the development of collaboration tools for automating these processes and their integration into available software project management tools.
2141
66857
Mobile Learning: Toward Better Understanding of Compression Techniques
Abstract:
Data compression shrinks files into fewer bits than their original representation. It is particularly advantageous on the internet because the smaller a file is, the faster it can be transferred; however, most concepts in data compression are abstract in nature, making them difficult for some students (engineers in particular) to digest. To determine the best approach to learning data compression techniques, this paper first studies the learning preferences of engineering students, who tend to have strong active, sensing, visual, and sequential learning preferences. The paper also studies the advantages of mobile learning: learning at the point of interest, efficiency, connectivity, and more. A survey was carried out with a reasonable number of students, selected through random sampling, to see whether accounting for learning preferences and the advantages of mobile learning gives a promising improvement over the traditional way of learning. Data analysis, performed in MS Excel to ensure error-free findings, shows a significant difference in students after using learning content provided on smartphones, and the findings, presented in bar and pie charts, indicate that mobile learning promises to be an important mode of learning.
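As an illustration of the kind of bite-sized, concrete example such a mobile lesson might present (run-length encoding is our illustrative choice; the abstract does not name a specific technique):

    # Run-length encoding: the simplest compression idea to teach first.
    def rle_encode(text):
        out, i = [], 0
        while i < len(text):
            j = i
            while j < len(text) and text[j] == text[i]:
                j += 1
            out.append(f"{text[i]}{j - i}")   # character followed by run length
            i = j
        return "".join(out)

    print(rle_encode("aaaabbbcc"))  # -> a4b3c2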
2140
66729
Detection of Attention Deficit Hyperactivity Disorder Using Gray and White Matter Features
Abstract:
Attention deficit hyperactivity disorder (ADHD) is a psychiatric condition that affects millions of children and often lasts into adulthood. There is no single test that can show whether a person has ADHD, and the symptoms vary from person to person; therefore, unlike many physical illnesses, ADHD is hard to diagnose. Our aim is to create methods that minimize human effort and increase the accuracy of ADHD diagnosis. We collected structural Magnetic Resonance Images (MRI) from 26 subjects: 11 controls and 15 children diagnosed with ADHD. The data were provided by NPIstanbul Neuro Psychiatric Hospital. The models were built on the k-nearest neighbors (KNN) algorithm and decision trees using the Matlab machine learning toolbox. We used the k-means clustering algorithm to extract gray matter and white matter regions from the axial plane. Four features were extracted from these regions: the area of the gray matter, the area of the white matter, the perimeter of the gray matter, and the perimeter of the white matter. The most important attribute was determined using principal component analysis. The experiments were conducted on a full training dataset of 26 instances, and 5-fold and hold-out cross-validation were adopted for randomly sampling training and test sets. ADHD was successfully classified with 91% accuracy using the proposed method. The outcome of our study should reduce the number of medical errors by informing physicians in their efforts to diagnose ADHD.
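A hedged sketch of the described pipeline (here in Python with scikit-learn and scikit-image rather than Matlab); the exact preprocessing and cluster labeling in the study are not specified, so the brightness-based assignment of clusters to background, gray matter, and white matter is a reasonable stand-in:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neighbors import KNeighborsClassifier
    from skimage.measure import perimeter

    def gm_wm_features(slice2d):
        """Cluster an axial MRI slice into background/GM/WM and extract
        the area and perimeter of the gray- and white-matter masks."""
        km = KMeans(n_clusters=3, n_init=10).fit(slice2d.reshape(-1, 1))
        labels = km.labels_.reshape(slice2d.shape)
        # Assumption: darkest cluster = background, middle = GM, brightest = WM.
        order = np.argsort(km.cluster_centers_.ravel())
        gm, wm = labels == order[1], labels == order[2]
        return [gm.sum(), wm.sum(), perimeter(gm), perimeter(wm)]

    # features: one 4-vector per subject; y: 0 = control, 1 = ADHD
    # clf = KNeighborsClassifier(n_neighbors=3).fit(features, y)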
2139
66725
Communication Experience and the Perception of Media Richness among Parents Working Overseas and Their Children Left-behind in the Philippines
Abstract:
This study analyzed four knowledge-building elements of channel expansion theory, namely communication media, communication content, communication partner, and communication influence, vis-à-vis media richness dimensions among parents working overseas and their left-behind children in the Philippines. Results reveal that both parents and children used four of the six mediated communication channels tested in this research, connected one to four days a week for between 30 minutes and 3 hours per engagement, and that media consumption depended on the message content and the media literacy of the parents. Family, academic, household, and health matters were the common communication topics, and parents dictated which channel to use. All six media tested received high ratings on the media richness constructs.
2138
66703
A Method for Compression of Short Unicode Strings
Abstract:
The use of short texts in communication has greatly increased in recent years. The use of different languages in short texts makes Unicode strings compulsory. These strings need twice the space of common strings; hence, applying compression algorithms to accelerate transmission and reduce cost is worthwhile. Nevertheless, compression methods such as gzip, bzip2, or PAQ are not appropriate due to their high overhead relative to the data size. The Huffman algorithm is one of the rare algorithms effective in reducing the size of short Unicode strings. In this paper, an algorithm is proposed for the compression of very short Unicode strings. First, every new character to be sent to a destination is inserted into the proposed mapping table; at the beginning, every character is new. If a character is repeated for the same destination, it is no longer considered new. Next, the new characters, together with the mapping values of repeated characters, are arranged through a specific technique and specially formatted for transmission. The results of an assessment on a set of short Persian and Arabic strings indicate that the proposed algorithm outperforms the Huffman algorithm in size reduction.
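A simplified sketch of the described scheme (the wire format below is ours, not the paper's): per destination, the first occurrence of a character is sent literally and added to a persistent mapping table, while repeats are replaced by their small table index.

    # Per-destination mapping-table compression for short Unicode strings.
    def encode(text, table):
        """table: persistent dict for one destination, char -> small index."""
        out = []
        for ch in text:
            if ch in table:                      # repeated character:
                out.append(("ref", table[ch]))   # send its small table index
            else:                                # new character:
                table[ch] = len(table)           # register it ...
                out.append(("lit", ch))          # ... and send it literally
        return out

    table = {}
    print(encode("سلام سلام", table))  # the second word becomes index references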
2137
66686
Blocking of Random Chat Apps at Home Routers for Juvenile Protection in South Korea
Abstract:
Numerous anonymous chat apps that connect people with random strangers have been released in South Korea. However, they have become a serious problem because young people often use them as channels for prostitution or sexual violence. Although ISPs in South Korea are responsible for making inappropriate content inaccessible on their networks, they do not block the traffic of random chat apps since (1) the use of random chat apps is entirely legal, and (2) they reportedly use HTTP proxy blocking, so non-HTTP traffic cannot be blocked. In this paper, we propose a service model that blocks random chat apps at home routers. A service provider manages a blacklist containing the blocked apps' information, and home routers that subscribe to the service filter out the apps' traffic using deep packet inspection. We have implemented a prototype of the proposed model, including a centralized server providing the blacklist, a Raspberry Pi-based home router that can filter out the apps' traffic, and an Android app used by the router's administrator to locally customize the blacklist.
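The router-side filter can be sketched as a signature match over packet payloads against the centrally distributed blacklist; the signatures below are hypothetical, and a real deployment would hook this check into the router's packet-filtering path rather than call it directly.

    # Minimal deep-packet-inspection check (illustrative signatures only).
    BLACKLIST = [b"randomchat.example.com", b"X-RandomChat-App"]  # hypothetical

    def should_drop(payload: bytes) -> bool:
        """Drop the packet if any blacklisted signature occurs in its payload."""
        return any(sig in payload for sig in BLACKLIST)

    assert should_drop(b"GET / HTTP/1.1\r\nHost: randomchat.example.com\r\n")
    assert not should_drop(b"GET / HTTP/1.1\r\nHost: example.org\r\n")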
2136
66685
Machine Learning Based Gender Identification of Authors of Entry Programs
Abstract:
Entry is an education platform used in South Korea, created to help students learn to code while playing. Using the online version of Entry, teachers can easily assign programming homework, and students can build programs simply by linking programming blocks. However, submitted programs may have been written by someone else, so the authors of the programs need to be identified. In this paper, as a first step toward author identification of Entry programs, we present an artificial-neural-network-based classification approach to identify the gender of the author of a program written in Entry. A neural network was trained on labeled training data that we collected. Our preliminary results show that the proposed approach could feasibly be applied to the online version of Entry for gender identification of authors. As future work, we will use a machine learning technique for age identification of authors of Entry programs, which would be the second step toward author identification.
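A hedged sketch of the classification step with scikit-learn; the feature set (counts of each block type used in a program) and the toy data are our assumptions, since the abstract does not describe its features or network architecture.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # X: one row per program, e.g. counts of each Entry block type (assumed).
    # y: author gender label collected with the training data (0 or 1).
    X = np.array([[12, 3, 0, 5], [2, 9, 4, 1], [11, 2, 1, 6], [1, 8, 5, 0]])
    y = np.array([0, 1, 0, 1])

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X, y)
    print(clf.predict([[10, 3, 1, 4]]))  # predicted label for a new program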
2135
66676
Going Horizontal: Confronting the Challenges When Transitioning to Cloud
Abstract:
As one of the largest cancer treatment centers in the United States, we continuously confront the challenge of leveraging the best possible technological solutions in order to provide the highest quality of service to our customers: the doctors, nurses, and patients at Moffitt who are fighting every day for the prevention and cure of cancer. This paper reports on the transition from a vertical to a horizontal IT infrastructure. We discuss how new frameworks and methods, such as public, private, and hybrid cloud and the brokering of cloud services, are replacing the traditional vertical paradigm for computing. We also report on the impact of containers and microservices, and on the shift to continuous integration/continuous delivery. These changes in delivery methodology for computing are driving how we accomplish our strategic IT goals across the enterprise.
2134
66573
Solving of Ranking Problem Using Reinforcement Learning
Abstract:
We apply a reinforcement learning method to learn domain-specific heuristics for a ranking problem, in which many objects must be classified, based on their properties, into a positive or a negative set while taking into account the preferences of the person doing the ranking. We created an agent for the ranking process that executes a reordering operation in the environment after each selection. The activity of the agent is monitored by the environment; if the measure of reordering reaches a certain limit, the agent receives a reward, otherwise it is penalized. The agent, using reinforcement learning, attempts to maximize the reward, and thereby the ranking is carried out as quickly as possible. The objects (for example, motion pictures or products) can be browsed and selected on a user interface. At the start, no object is classified. If we select an object from the initial (third) set as acceptable, then, based on this feedback, the ordering of the objects is modified, so the number of elements in each of the three sets changes. If we classify another object, for example again into the positive set, our algorithm continues the ranking process. By selecting only a few objects, the complete initial set can be ranked with high precision while respecting user preferences. The results suggest that reinforcement learning can provide a new, high-performance method for solving ranking problems.
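The abstract leaves the state and action encoding open, so the sketch below shows only the generic tabular Q-learning update such an agent could use, with the reward supplied by the environment's reordering measure; all names here are illustrative.

    import random
    from collections import defaultdict

    Q = defaultdict(float)          # Q[(state, action)] -> estimated value
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

    def choose(state, actions):
        """Epsilon-greedy selection over the possible reordering actions."""
        if random.random() < EPS:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state, next_actions):
        """Standard Q-learning update; reward > 0 when the reordering
        measure passes the limit, negative (a penalty) otherwise."""
        best_next = max(Q[(next_state, a)] for a in next_actions)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - Q[(state, action)])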
2133
66531
Performance Comparison of Classification-Based Outlier Detection Techniques in Wireless Sensor Networks
Abstract:
Nowadays, many wireless sensor networks (WSNs) are deployed in the real world to collect valuable raw sensed data, and the challenge is to extract high-level knowledge from this huge amount of data. The identification of outliers can lead to the discovery of useful and meaningful knowledge. In the field of wireless sensor networks, an outlier is defined as a measurement that deviates from the normal behavior of the sensed data. Many outlier detection techniques for WSNs have been studied extensively in the past decade, with a focus on classification-based algorithms that identify outliers in real transaction datasets. This survey aims to provide a structured and comprehensive overview of existing research on classification-based outlier detection techniques as applicable to WSNs. We identify the key hypotheses used by these approaches to differentiate between normal and outlier behavior, and we try to provide an easier and more succinct understanding of the classification-based techniques. Furthermore, we identify the advantages and disadvantages of the different classification-based techniques and present a comparative guide with useful paradigms for promoting outlier detection research in various WSN applications, along with opportunities for future research.
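As one concrete instance of the classification-based family surveyed here, a one-class SVM can learn the region of normal sensor behavior and flag deviations as outliers; the sensor data below is synthetic.

    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    normal = rng.normal(25.0, 0.5, size=(200, 1))    # e.g. temperature readings
    readings = np.vstack([normal, [[35.0]]])          # one faulty measurement

    clf = OneClassSVM(nu=0.05, gamma="scale").fit(normal)
    labels = clf.predict(readings)                    # +1 normal, -1 outlier
    # The injected fault (index 200) is flagged; a few borderline
    # normal points may be flagged too, since nu=0.05 tolerates ~5%.
    print(np.where(labels == -1)[0])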
2132
66477
Hybrid Precoder Design Based on Iterative Hard Thresholding Algorithm for Millimeter Wave Multiple-Input-Multiple-Output Systems
Abstract:
Recent technological advances have made millimeter wave (mmWave) communication possible. Due to the huge amount of spectrum available in mmWave frequency bands, this promising candidate is considered a key technology for the deployment of 5G cellular networks. To enhance system capacity and achieve spectral efficiency, very large antenna arrays are employed in mmWave systems to exploit array gain. However, it has been shown that conventional beamforming strategies are not suitable for mmWave hardware implementation, so new approaches are required for mmWave cellular applications. Unlike traditional multiple-input-multiple-output (MIMO) systems, in which digital precoders alone accomplish precoding, MIMO at mmWave is different because of digital precoding limitations: fully digital precoding requires a large number of radio frequency (RF) chains, along with their signal mixers and analog-to-digital converters. As RF chains are costly and power-hungry, an alternative is needed. Although the hybrid precoding architecture, combining a baseband precoder with an RF precoder, is regarded as the best solution, the optimal design of hybrid precoders remains open. According to the mapping strategy from RF chains to antenna elements, there are two main categories of hybrid precoding architecture. The partially-connected (sub-array) structure reduces hardware complexity by using fewer phase shifters, at the cost of some beamforming gain. In this paper, we treat hybrid precoder design in mmWave MIMO systems as a matrix factorization problem and adopt the alternating minimization principle to solve it. We then present our proposed algorithm for the partially-connected structure, which is based on the iterative hard thresholding method. Simulation results show that our hybrid precoding algorithm provides significant performance gains over existing algorithms while significantly reducing computational complexity. Furthermore, valuable design insights are obtained when the proposed algorithm is used to compare the partially-connected and fully-connected hybrid precoding structures.
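A much-simplified sketch of the alternating-minimization idea for the partially-connected structure: the baseband precoder F_BB is updated by least squares, and the analog precoder F_RF is projected onto its unit-modulus, block-diagonal constraint (one subarray of phase shifters per RF chain). The paper's iterative-hard-thresholding refinement is omitted; the dimensions are illustrative.

    import numpy as np

    def hybrid_precoder(F_opt, n_rf, iters=50):
        """Approximate F_opt (n_ant x n_s) as F_RF @ F_BB with a
        partially-connected F_RF: block-diagonal, unit-modulus entries."""
        n_ant, n_s = F_opt.shape
        block = n_ant // n_rf                      # antennas per RF chain
        mask = np.zeros((n_ant, n_rf), dtype=bool)
        for k in range(n_rf):
            mask[k * block:(k + 1) * block, k] = True
        F_RF = np.exp(1j * np.random.uniform(0, 2 * np.pi, (n_ant, n_rf))) * mask
        for _ in range(iters):
            F_BB = np.linalg.lstsq(F_RF, F_opt, rcond=None)[0]  # digital update
            R = F_opt @ F_BB.conj().T                           # analog update:
            F_RF = np.where(mask, np.exp(1j * np.angle(R)), 0)  # phase projection
        return F_RF, F_BB

    F_opt = np.random.randn(16, 2) + 1j * np.random.randn(16, 2)
    F_RF, F_BB = hybrid_precoder(F_opt, n_rf=4)
    print(np.linalg.norm(F_opt - F_RF @ F_BB))   # residual factorization error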
2131
66459
An Efficient Hardware/Software Workflow for Multi-Core Simulink Applications
Abstract:
In recent years, applications such as telecommunications, signal processing, and digital communication with advanced features (multi-antenna, equalization, etc.) have witnessed rapid evolution, accompanied by growing user demands in terms of latency and computational power. To satisfy these requirements, hardware/software systems are a common solution, where the hardware consists of multiple cores and the software is represented by models of computation, for instance the synchronous data flow (SDF) graph. Moreover, most embedded-system designers use Simulink for modeling. The issue is how to simplify C code generation, for a multi-core platform, from an application modeled in Simulink. To overcome this problem, we propose a workflow that automatically transforms the Simulink model into an SDF graph and provides an efficient schedule that optimizes the number of cores and minimizes latency. This workflow starts from a Simulink application and a hardware architecture described in the IP-XACT language. Based on the synchronous and hierarchical behavior of both models, the Simulink block diagram is automatically transformed into an SDF graph. Once this process is achieved, the scheduler calculates the optimal number of cores needed by minimizing the maximum density of the whole application. A core is then chosen to execute each graph task in a specific order, and a compatible C code is generated. To realize this proposal, we extend Preesm, a rapid prototyping tool, to take the Simulink model as input and to support the optimal schedule. We then compared our results with those of Preesm on a simple illustrative application. The comparison shows that our results strictly dominate the Preesm results in terms of number of cores and latency: where Preesm needs m processors and latency L, our workflow needs fewer processors and a latency L' < L.
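The scheduling step can be illustrated with a simple list scheduler that assigns tasks to the least-loaded core (dependencies are omitted here for brevity; the real workflow also honors SDF-graph precedence, and the task names and durations are illustrative):

    import heapq

    def list_schedule(task_times, n_cores):
        """Greedy longest-processing-time-first schedule; returns the
        makespan (latency) and the per-core assignment."""
        cores = [(0.0, c, []) for c in range(n_cores)]   # (load, id, tasks)
        heapq.heapify(cores)
        for task, t in sorted(task_times.items(), key=lambda kv: -kv[1]):
            load, c, tasks = heapq.heappop(cores)        # least-loaded core
            heapq.heappush(cores, (load + t, c, tasks + [task]))
        return max(load for load, _, _ in cores), cores

    latency, cores = list_schedule({"fir": 4, "fft": 7, "eq": 3, "demod": 5}, 2)
    print(latency)   # makespan on 2 cores -> 10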
2130
66332
Forensic Speaker Verification in Noisy Environments by Enhancing the Speech Signal Using the Independent Component Analysis (ICA) Approach
Abstract:
We propose a system to address real environmental noise and channel mismatch in forensic speaker verification. The method suppresses various types of real environmental noise using the independent component analysis (ICA) algorithm. The enhanced speech signal is then processed with mel frequency cepstral coefficients (MFCC), or MFCC with feature warping, to extract the essential characteristics of the speech signal. Channel effects are reduced using an intermediate vector (i-vector) and probabilistic linear discriminant analysis (PLDA) approach for classification. The proposed algorithm is evaluated on an Australian forensic voice comparison database, combined with car, street, and home noises from QUTNOISE, at signal-to-noise ratios (SNR) ranging from -10 dB to 10 dB. Experimental results indicate that MFCC feature warping with ICA achieves reductions in equal error rate of 47.8%, 40.71%, and 51.82% over MFCC feature warping alone when the test speech signals are corrupted with random sessions of street, car, and home noise, respectively, at -10 dB SNR.
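A hedged sketch of the enhancement front end using scikit-learn's FastICA and librosa MFCCs; the forensic back end (i-vector/PLDA) is beyond a few lines and omitted, and selecting which separated source is the speech is a further step not shown.

    import numpy as np
    from sklearn.decomposition import FastICA
    import librosa

    def enhance_and_featurize(mixtures, sr=16000, n_mfcc=20):
        """mixtures: (n_channels, n_samples) noisy recordings of one scene.
        ICA separates them into independent sources; MFCCs are then
        extracted from each estimated source."""
        ica = FastICA(n_components=mixtures.shape[0], random_state=0)
        sources = ica.fit_transform(mixtures.T).T   # (n_sources, n_samples)
        return [librosa.feature.mfcc(y=s.astype(np.float32), sr=sr,
                                     n_mfcc=n_mfcc) for s in sources]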
2129
66317
Simulation as a Problem-Solving Spotter for System Reliability
Abstract:
An important performance measure for stochastic manufacturing networks is the system reliability, defined as the probability that the production output meets or exceeds a specified demand. The system parameters include the capacity of each workstation and the number of conforming parts produced at each workstation. We establish that eighteen archival publications, containing twenty-one examples, provide incorrect values of the system reliability. The author recently published the Song Rule, which provides the correct analytical system-reliability value; it is, however, computationally inefficient for large networks. In this paper, we use Monte Carlo simulation (implemented in C and FlexSim) to provide estimates for the above-mentioned twenty-one examples. The simulation estimates are consistent with the analytical solutions for small networks and remain computationally efficient for large networks. We argue for three advantages of Monte Carlo simulation: (1) understanding stochastic systems, (2) validating analytical results, and (3) providing estimates even when analytical and numerical approaches are overly expensive in computation. Monte Carlo simulation could have detected the published analysis errors.
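A minimal Monte Carlo estimator for this notion of system reliability, assuming for illustration a simple series line whose output is the minimum of the workstations' random capacities (the cited examples cover more general network topologies):

    import random

    def reliability(capacity_dists, demand, trials=100_000):
        """Estimate P(production output >= demand) for a series line:
        the line's output is the minimum of the sampled capacities."""
        hits = 0
        for _ in range(trials):
            output = min(dist() for dist in capacity_dists)
            if output >= demand:
                hits += 1
        return hits / trials

    # Three workstations, each producing 0..5 conforming parts (illustrative).
    stations = [lambda: random.randint(0, 5)] * 3
    print(reliability(stations, demand=2))   # analytic value: (4/6)**3 = 0.296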
2128
66165
Security Risks Assessment: A Conceptualization and Extension of NFC Touch-And-Go Application
Abstract:
NFC operates at the 13.56 MHz frequency over a short range of 4 cm to 10 cm, and its applications can be categorized as touch-and-go, touch-and-confirm, touch-and-connect, and touch-and-explore. NFC applications are vulnerable to various security and privacy attacks due to their physical nature, unprotected data stored in NFC tags, and insecure communication between applications. This paper aims to determine the likelihood of security risks in NFC technology and applications. We present an NFC technology taxonomy covering NFC standards, types of application, and various security and privacy attacks. Based on observations and a survey conducted to evaluate risk within the touch-and-go application category, two attacks emerge as high risks, namely data corruption and denial-of-service (DoS) attacks. Once the risks are determined, countermeasures are selected using the Analytic Hierarchy Process (AHP). The guidelines and solutions for these two high-risk attacks are then applied to a secure NFC-enabled smartphone attendance system.
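The AHP step can be sketched as deriving priority weights from a pairwise-comparison matrix via its principal eigenvector; the comparison values below are illustrative placeholders, not the paper's survey data.

    import numpy as np

    def ahp_weights(pairwise):
        """Priority weights = normalized principal eigenvector of the
        reciprocal pairwise-comparison matrix."""
        vals, vecs = np.linalg.eig(pairwise)
        principal = np.real(vecs[:, np.argmax(np.real(vals))])
        return principal / principal.sum()

    # Illustrative comparisons among three candidate countermeasures.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    print(ahp_weights(A))   # approx. [0.65, 0.23, 0.12]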
2127
66086
Learning Compression Techniques on Smart Phone
Abstract:
Data compression shrinks files into fewer bits than their original representation. It is particularly advantageous on the internet because the smaller a file is, the faster it can be transferred; however, most concepts in data compression are abstract in nature, making them difficult for some students (engineers in particular) to digest. This paper studies the learning preferences of engineering students, who tend to have strong active, sensing, visual, and sequential learning preferences. The paper also reviews the three shifts that technology-aided learning has experienced, with mobile learning considered the form of learning that will integrate the other modes of the education process. Lastly, we propose the design and implementation of a mobile learning application, developed using a software engineering methodology, that will enhance the traditional teaching and learning of data compression techniques.
2126
65992
Examining the Importance of the Structure Based on Grid Computing Service and Virtual Organizations
Abstract:
The vast changes and developments achieved in the field of information technology in recent decades have made the review of issues such as organizational structure unavoidable. The application of information technologies such as the internet, together with the widespread use of computers and related networks, has led to new organizational formations whose nature is completely different from the traditional, large, bureaucratic ones; common characteristics of such organizations include moving work outside the organization, drawing on information and communication networks, and relying on knowledge-centered workers. These communication needs have driven the development of network sciences, including grid computing. Initially, grid computing merely linked a few sites for short periods so that their resources could be used simultaneously, but it has since gone far beyond this idea. In this article, grid computing technology is examined and, at the same time, the concept of the virtual organization is discussed.
2125
65986
Compressed Suffix Arrays to Self-Indexes Based on Partitioned Elias-Fano
Abstract:
A practical and simple self-indexing data structure, PEF-CSA, is built in linear time for compressing the suffix array based on partitioned Elias-Fano indexes. The PEF-CSA is compared with two classical compressed indexing methods, FMI and Sad-CSA, on files of different types and sizes from the Pizza&Chili corpus. The PEF-CSA performs better on the existing data in terms of compression ratio and count and locate times, except on evenly distributed data such as protein data. The experiments show that the distribution of φ matters more for the compression ratio than the alphabet size: unevenly distributed φ values compress better, and the larger the number of hits, the longer the count and locate times.
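For intuition, here is a compact Elias-Fano encoder for a sorted integer sequence, the building block that the partitioned variant optimizes by splitting the sequence into chunks; the split point l follows the standard choice of roughly log2(universe/n) low bits.

    def elias_fano(seq, universe):
        """Encode a sorted list of n integers from [0, universe).
        Low l bits are stored verbatim; high parts go into a unary bitvector."""
        n = len(seq)
        l = max(0, (universe // n).bit_length() - 1)   # ~ floor(log2(u/n))
        lows = [x & ((1 << l) - 1) for x in seq]
        highs, prev = [], 0                            # unary-coded high parts
        for x in seq:
            h = x >> l
            highs.extend([0] * (h - prev) + [1])       # gap zeros, then a one
            prev = h
        return l, lows, highs

    print(elias_fano([2, 3, 5, 7, 11, 13, 24], universe=32))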
2124
65837
Computational System for Monitoring the Ecosystem of the Endangered White Fish (Chirostoma estor estor) in Patzcuaro Lake, Mexico
Abstract:
The white fish (Chirostoma estor estor) is an endemic species that inhabits Patzcuaro Lake, located in Michoacan, Mexico, and is an important source of gastronomic and cultural wealth for the area. It has recently undergone an immense depopulation due to overfishing, contamination, and eutrophication of the lake water, which may result in the extinction of this important species. This work proposes a new computational model for monitoring and assessing the critical environmental parameters of the white fish ecosystem. Using the Analytic Hierarchy Process, a mathematical model is built that assigns a weight to each environmental parameter according to its importance for water quality in the ecosystem. An advanced system for the monitoring, analysis, and control of water quality is then developed in the LabVIEW virtual environment. As a result, we obtain a global score that indicates the condition level of the water quality in the Chirostoma estor ecosystem (excellent, good, regular, or poor), supporting effective decision making about the environmental parameters that affect the proper culture of the white fish, such as temperature, pH, and dissolved oxygen. In situ evaluations show regular conditions for successful reproduction and growth of this species, with water quality tending toward regular levels. This system emerges as a suitable tool for water management, where future laws for white fish fishery regulation should reduce the mortality rate in the early, most critical stages of development of the species. This can guarantee better population sizes than those currently obtained in aquaculture. The main benefit will be a contribution to maintaining the cultural and gastronomic wealth of the area and of its inhabitants, since the white fish is an important food source and source of income for the region, yet the species is endangered.
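The scoring model reduces to a weighted sum of parameter scores with AHP-derived weights; the weights and level thresholds below are placeholders, not the study's calibrated values.

    # Global water-quality score as an AHP-weighted sum (placeholder numbers).
    WEIGHTS = {"temperature": 0.25, "pH": 0.30, "dissolved_oxygen": 0.45}
    LEVELS = [(80, "excellent"), (60, "good"), (40, "regular"), (0, "poor")]

    def global_score(scores):                  # scores: parameter -> 0..100
        s = sum(WEIGHTS[p] * scores[p] for p in WEIGHTS)
        return s, next(name for cut, name in LEVELS if s >= cut)

    print(global_score({"temperature": 70, "pH": 55, "dissolved_oxygen": 48}))
    # -> (55.6, 'regular'), matching the 'regular' level observed in situ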
2123
65815
Images Selection and Best Descriptor Combination for Multi-Shot Person Re-Identification
Abstract:
To re-identify a person is to check whether he or she has already been seen over a camera network. Re-identifying people over large public camera networks has recently become a crucial task for ensuring public security, and the vision community has deeply investigated this area of research. Most existing work relies only on spatial appearance information from one or several images of a person, yet the real person re-identification setting is a multi-shot scenario; efficiently modeling a person's appearance and choosing the best samples remain challenging problems. In this work, we present an extensive comparison of state-of-the-art descriptors combined with the proposed frame selection method. Specifically, we evaluate the sample selection approach using multiple descriptors, and we show the effectiveness and advantages of the proposed method through extensive comparisons with related state-of-the-art approaches on two standard datasets, PRID2011 and iLIDS-VID.
2122
65796
A Study of Different Factors Influencing Youngsters’ Mobile Device Buying Behaviors in Malaysia
Abstract:
The mobile phone is an indispensable device in today's daily living. New brands arising in the market, with different specifications, target different segments of the population, and the most promising market is the younger, IT-savvy generation. It is therefore useful to find out what they consider when purchasing a mobile phone. A survey was carried out in Malaysia to discover current youngsters' mobile phone buying behavior. This study found that the most influential factor is price, followed by features and battery lifespan, and that gender and income have no relationship with certain factors of consideration. Discovering these factors of consideration provides industry insight into the current smartphone trend in Malaysia.
2121
65752
Definition of a Computing Independent Model (CIM) and Rules for Transformation to Models in Unified Model Language (UML) Focused on the Model-View-Controller (MVC) Architecture
Abstract:
This paper presents a model-driven approach to software development under the Model-View-Controller (MVC) architectural standard. The approach defines a process for extracting information from models which, through rules and a syntax defined in this work, assists in the design of the initial model and its subsequent transformations. The paper presents a syntax based on natural language, following the rules of classical Portuguese grammar, together with conversion rules that generate models conforming to the norms of the Object Management Group (OMG) and the Meta-Object Facility (MOF).
2120
65733
Performance Analysis of Keyword Extraction Algorithms Assessing Extractive Text Summarization
Abstract:
Automatic text summarization is the task of deriving a meaningful and concise brief from a given text while retaining the concepts and key information conveyed by the original. Numerous approaches and algorithms have been devised to achieve this goal with varying accuracy and effectiveness. One key aspect of text summarization is the accurate identification of keywords in the given textual content. In this paper, we investigate the relative performance of three popular algorithms, namely TextRank, LexRank, and LSA (Latent Semantic Analysis), for keyword extraction by measuring their effectiveness in identifying keywords in a set of articles. We then contrast the performance of each of these algorithms with handwritten summaries of the same articles, and conclude by identifying the most effective algorithm observed in our experiment.
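For illustration, the core of a TextRank-style extractor: build a word co-occurrence graph within a sliding window and rank nodes with PageRank. This is a simplified reading of TextRank (LexRank and LSA operate on sentences and latent topics instead), with no stop-word filtering or part-of-speech selection.

    import networkx as nx

    def keywords(words, window=3, top=5):
        """Rank words by PageRank over a co-occurrence graph."""
        g = nx.Graph()
        for i in range(len(words)):
            for j in range(i + 1, min(i + window, len(words))):
                g.add_edge(words[i], words[j])
        ranks = nx.pagerank(g)
        return sorted(ranks, key=ranks.get, reverse=True)[:top]

    text = ("text summarization derives a concise brief from text while "
            "retaining key information from the original text").split()
    print(keywords(text))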
2119
65725
Event Driven Summarization Framework for Cricket Videos
Abstract:
This paper presents an automated framework to generate highlights for cricket videos. The proposed work is a non-learning technique based on textual features that detects three key events in cricket videos: boundary (4), six (6), and wicket. Score captions identified in the input video in the initial phase are analyzed to detect changes in the score and wickets sections. To extract the contents of the score captions, the input video frame is discretized using the first moment and second central moment, followed by a running image-averaging process; noise and outliers are then removed with morphological operators. Optical character recognition is applied to the extracted score caption region to detect any significant change, and the frame is marked as a keyframe when a significant change (a 4, 6, or wicket event) is detected. A collection of video frames is selected around each keyframe to generate the summarized video. The proposed method is tested on a diverse dataset of cricket videos from different tournaments, and an average accuracy of 94.8% shows the effectiveness of the proposed technique for summarizing cricket videos.
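A hedged sketch of the caption-analysis step with OpenCV and Tesseract; the paper's moment-based discretization is replaced here by simple Otsu thresholding, and the caption coordinates are assumptions for illustration.

    import cv2
    import pytesseract

    CAPTION_BOX = (560, 20, 700, 60)   # assumed (y0, x0, y1, x1) of caption

    def read_score(frame):
        """Binarize the score-caption region and OCR its text."""
        y0, x0, y1, x1 = CAPTION_BOX
        gray = cv2.cvtColor(frame[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY)
        _, bw = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return pytesseract.image_to_string(bw).strip()

    def is_keyframe(prev_score, frame):
        """Mark a keyframe whenever the OCR'd score caption changes."""
        score = read_score(frame)
        return score != prev_score, score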
2118
65722
X-Eye Navigation Aid to Visually Impaired for Day-to-Day Mobility
Abstract:
This paper presents an effective method of providing day-to-day mobility aid to visually impaired people. An Android application named X-EYE, using the LOOXCIE wearable camera, is designed to help blind people navigate safely. Existing navigation aid systems use various hardware components, such as sensors, that are expensive and can pose health hazards. The proposed system offers an economical way to provide a safe navigation service to visually impaired users. X-EYE provides obstacle detection, location tracking and sharing, an SMS reader, and language translation features. Audio messages are generated specifically to provide better usability for visually impaired users. The performance of the proposed system in terms of obstacle detection is evaluated on thirty real-time videos, and experimental results indicate that the proposed method achieves an average detection accuracy of at least 95.5%.
2117
65696
Queueing Modeling of M/G/1 Fault Tolerant System with Threshold Recovery and Imperfect Coverage
Abstract:
This paper investigates a finite M/G/1 fault-tolerant multi-component machining system. The system incorporates features such as standby support, threshold recovery, and imperfect coverage, which bring the study closer to real systems. The performance prediction of the M/G/1 fault-tolerant system is carried out using a recursive approach that treats the remaining service time as a supplementary variable. Numerical results are presented to illustrate the computational tractability of the analytical results for three different service-time distributions, namely exponential, 3-stage Erlang, and deterministic. Moreover, a cost function is constructed to determine the optimal choice of system descriptors for upgrading the system.
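Although the paper's recursive supplementary-variable analysis covers a richer fault-tolerant model, the classical Pollaczek-Khinchine formula for a plain M/G/1 queue, Wq = λE[S²] / (2(1 - ρ)) with ρ = λE[S], already shows how the three tested service-time distributions differ through their second moments; the rates below are illustrative.

    # Mean M/G/1 waiting time for the three service distributions tested
    # (plain M/G/1 only; the paper's fault-tolerant features are not modeled).
    lam, mu = 0.8, 1.0                      # arrival and service rates
    second_moment = {
        "exponential":    2 / mu**2,        # E[S^2] = 2/mu^2
        "3-stage Erlang": (3 + 1) / (3 * mu**2),
        "deterministic":  1 / mu**2,
    }
    rho = lam / mu
    for name, es2 in second_moment.items():
        wq = lam * es2 / (2 * (1 - rho))    # Pollaczek-Khinchine formula
        print(f"{name}: Wq = {wq:.2f}")     # 4.00, 2.67, 2.00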
2116
65677
Correlation Coefficient of Intuitionistic Fuzzy Numbers under Weakest Triangular Norm
Abstract:
The correlation coefficient of variables has wide applications in statistics and is usually calculated in a crisp or fuzzy environment. This paper extends its application to the intuitionistic fuzzy environment. We propose a new method to measure the correlation coefficient of intuitionistic fuzzy numbers using intuitionistic fuzzy arithmetic operations based on the weakest triangular norm (t-norm). Unlike in previous studies, the correlation coefficient computed here is an intuitionistic fuzzy number rather than a crisp or fuzzy number. The weakest t-norm arithmetic operations effectively reduce fuzzy spreads (fuzzy intervals) and provide more exact results. We thus present a simplified, effective, and exact method for computing the correlation coefficient of intuitionistic fuzzy numbers, illustrated with a numerical example.