Reconstructing Gene Regulation Network Based on Conditional Mutual Information
The purpose of gene regulatory network construction is to deduce the potential regulatory relationships between genes from gene expression data. In this paper, the CMICRAT algorithm is used to construct an undirected gene graph from gene expression data based on mutual information and conditional mutual information, and then CRAE is used to determine the direction of the edges in the undirected graph. Experimental validation on gene expression data from the international DREAM4 biological competition shows that the proposed algorithm improves the accuracy of the constructed gene regulation network.
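As a minimal illustration of the information-theoretic quantity underlying this approach (not the authors' CMICRAT implementation), the conditional mutual information I(X; Y | Z) between two genes given a third can be estimated from discretized (binned) expression levels with plug-in probabilities:

```python
import math
from collections import Counter

def conditional_mutual_information(x, y, z):
    """Estimate I(X; Y | Z) from three aligned sequences of discrete
    (e.g. binned) expression levels, using the plug-in estimator
    I(X;Y|Z) = sum p(x,y,z) * log[ p(z)p(x,y,z) / (p(x,z)p(y,z)) ]."""
    n = len(x)
    pxyz = Counter(zip(x, y, z))
    pxz = Counter(zip(x, z))
    pyz = Counter(zip(y, z))
    pz = Counter(z)
    cmi = 0.0
    for (xi, yi, zi), c in pxyz.items():
        # counts cancel the 1/n factors inside the logarithm
        cmi += (c / n) * math.log((pz[zi] * c) / (pxz[(xi, zi)] * pyz[(yi, zi)]))
    return cmi
```

A value near zero suggests the two genes are conditionally independent given the third, which is the signal such algorithms use to prune indirect edges.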
A Robust Hybrid Blind Digital Image Watermarking System Using Discrete Wavelet Transform and Contourlet Transform
In this paper, a hybrid blind digital watermarking system using Discrete Wavelet Transform (DWT) and Contourlet Transform (CT) has been implemented and tested. The implemented combined digital watermarking system has been tested against five common types of image attacks. The performance evaluation shows improved results in terms of imperceptibility, robustness, and high tolerance against these attacks; accordingly, the system is very effective and applicable.
Computing Quasi-Minimal Program Slices
Program slicing is a technique to extract the part of a program (the slice) that influences or is influenced by a set of variables at a given point. Computing minimal slices is undecidable in the general case, and obtaining the minimal slice of a given program is computationally prohibitive even for very small programs. Hence, no matter what program slicer we use, in general we cannot be sure that our slices are minimal. This is probably the fundamental reason why no benchmark collection of minimal program slices exists, even though this would be of great interest. In this work, we present a method to automatically produce quasi-minimal slices (i.e., an automated method cannot prove that they are minimal, but we provide technological evidence, based on different techniques, that they probably are). Using our method, we have produced a suite of quasi-minimal slices that we have later manually proven minimal. We explain the process of constructing the suite, the methodology and tools that were used, and the results obtained. The suite comes with a collection of Erlang benchmarks together with different slicing criteria and the associated minimal slices.
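To illustrate the underlying idea on a toy scale (the paper's benchmarks are Erlang programs; this sketch uses a hypothetical Python representation of straight-line code as def-use pairs), a backward slice can be computed by chasing the definitions of the variables that the slicing criterion uses:

```python
def backward_slice(program, criterion_line):
    """Backward slice of a straight-line program.
    `program` maps line number -> (defined_var, [used_vars]).
    The slice keeps every line whose defined variable is transitively
    needed to compute the criterion line."""
    needed = set(program[criterion_line][1])   # variables the criterion uses
    slice_lines = {criterion_line}
    for line in sorted(program, reverse=True):
        if line >= criterion_line:
            continue
        var, uses = program[line]
        if var in needed:
            slice_lines.add(line)
            needed.discard(var)   # nearest reaching definition found
            needed.update(uses)   # now its operands are needed
    return sorted(slice_lines)
```

Real slicers must also handle control dependence, loops, and interprocedural flow, which is precisely why minimality becomes undecidable in general.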
Perceptions toward Adopting Virtual Reality as a Learning Aid in Information Technology
The field of education is an ever-evolving area constantly enriched by newly discovered techniques provided by active research in all areas of technology. Recent years have witnessed the introduction of a number of promising technologies and applications to enhance the education experience. Virtual reality, which is considered one of the methods that have contributed to improving education in many fields, creates an artificial environment, by means of computer hardware and software, that is similar to the real world. This simulation provides a solution to improve the delivery of materials: it facilitates the teaching process by providing a useful aid to instructors and enhances the learning experience by providing a beneficial learning aid. In order to assure future utilization of such a system, students' perceptions toward utilizing VR as an educational tool were examined in the Faculty of Information Technology (IT) at The University of Jordan. A questionnaire administered to IT undergraduates investigated students' opinions about the opportunities and implications of VR as a learning and teaching aid. The results confirmed the end users' willingness to adopt a VR system as a learning aid, and they form a solid base for investing in VR systems for IT education.
Automated Digital Mammogram Segmentation Using Dispersed Region Growing and Pectoral Muscle Sliding Window Algorithm
Early diagnosis of breast cancer can improve the survival rate by detecting cancer at an early stage. Breast region segmentation is an essential step in the analysis of digital mammograms, and accurate image segmentation leads to better detection of cancer. It aims at separating the Region of Interest (ROI) from the rest of the image. The procedure begins with the removal of labels, annotations, and tags from the mammographic image using a morphological opening method. The Pectoral Muscle Sliding Window Algorithm (PMSWA) is then used to remove the pectoral muscle from the mammograms; this step is necessary because the intensity values of the pectoral muscle are similar to those of the ROI, which makes the ROI difficult to separate. After removing the pectoral muscle, the Dispersed Region Growing Algorithm (DRGA), which disperses seeds in different regions instead of a single bright region, is used to segment the mammogram. To demonstrate the validity of our segmentation method, 322 mammographic images from the Mammographic Image Analysis Society (MIAS) database are used. The dataset contains the medio-lateral oblique (MLO) view of the mammograms. Experimental results on the MIAS dataset show the effectiveness of our proposed method.
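As a rough sketch of the region-growing idea (assuming a simple fixed intensity tolerance and 4-connectivity; the actual DRGA details differ), growth from several dispersed seeds can be implemented as a breadth-first search per seed:

```python
from collections import deque

def region_grow(image, seeds, tol):
    """Grow regions from several dispersed seeds on a 2D grid of
    intensities. A pixel joins a region if its intensity differs from
    the seed's intensity by at most `tol` (4-connected BFS)."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    for label, (sr, sc) in enumerate(seeds, start=1):
        base = image[sr][sc]
        queue = deque([(sr, sc)])
        while queue:
            r, c = queue.popleft()
            if not (0 <= r < h and 0 <= c < w):
                continue
            if labels[r][c] or abs(image[r][c] - base) > tol:
                continue
            labels[r][c] = label
            queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return labels
```

Dispersing the seeds across regions, rather than placing them all in one bright area, is what lets the approach recover several tissue regions in a single pass.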
Upper Bounds for the Binary Cubic Knapsack Problem
We address the binary cubic knapsack problem of selecting, from a set of items, a subset with maximum profit whose overall weight does not exceed a given capacity c. The cubic knapsack problem is a generalization of the linear and quadratic knapsack problems. The objective function, which measures the profit of the selection, is a cubic function, and the problem is naturally formulated as a binary cubic program. A large variety of problems can be formulated as binary cubic programs, including applications in biology, capital budgeting, and graph theory. The cubic expressions in the objective function allow for variable interactions and are useful in areas where such interactions are important. In the area of capital budgeting, for example, the binary variables usually represent decisions of whether or not to invest in different projects. The decision-maker intends to maximize present-value returns, where the quadratic and cubic terms represent additional benefits from selecting two or three interrelated projects. The knapsack constraint enforces the budgetary restriction. The cubic knapsack problem has been proved to be NP-hard, and to obtain its exact solution it is important to compute strong upper bounds on its optimal value. Several works have proposed relaxations for the quadratic knapsack problem, varying from linear programs to more sophisticated semidefinite programs. However, the cubic knapsack problem has not been thoroughly investigated. In this work we propose the application of a cutting plane algorithm that iteratively strengthens an initial linear programming (LP) relaxation of the cubic knapsack problem with the goal of obtaining bounds of good quality. Valid inequalities are proposed and added to the initial LP relaxation in order to strengthen the bound. Computational results illustrate the trade-off between the quality of the bounds computed and the computational effort required by the cutting plane algorithm.
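For intuition about the problem the bounds are measured against, a tiny instance can be solved exactly by enumeration; the `profits` encoding of linear, quadratic, and cubic coefficients below is a hypothetical representation chosen for illustration:

```python
from itertools import combinations, product

def cubic_knapsack_exact(profits, weights, capacity):
    """Exhaustively solve a tiny binary cubic knapsack instance.
    `profits` maps sorted index tuples of length 1, 2, or 3 to the
    linear, quadratic, and cubic profit coefficients; a term counts
    only if every item in its tuple is selected."""
    n = len(weights)
    best_value, best_set = 0, ()
    for bits in product([0, 1], repeat=n):
        chosen = [i for i in range(n) if bits[i]]
        if sum(weights[i] for i in chosen) > capacity:
            continue  # violates the knapsack (budget) constraint
        value = 0
        for size in (1, 2, 3):
            for combo in combinations(chosen, size):
                value += profits.get(combo, 0)
        if value > best_value:
            best_value, best_set = value, tuple(chosen)
    return best_value, best_set
```

An LP-relaxation bound is valid if it is never below the value this exhaustive search returns; the gap between the two measures the bound's quality.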
Using Class Cohesion Metrics to Predict Class Maintainability
Class maintainability is one of the most important external quality attributes and it is defined as the ease with which a class can be modified. Class cohesion is an internal quality attribute that is believed to have a positive impact on class maintainability (i.e., classes with high cohesion are believed to be more maintainable). In this paper, we empirically investigate the relationships between class maintainability and fourteen class cohesion metrics. We analyze the extent to which the considered fourteen class cohesion metrics can be used individually and in combination to predict class maintainability. Our results show that most of the considered class cohesion metrics are statistically significant predictors for class maintainability and that some of the considered class cohesion metrics can be used in combination to better predict class maintainability.
Fusion of Shape and Texture for Unconstrained Periocular Authentication
Unconstrained authentication is an important component of personal automated systems and human-computer interfaces. Existing solutions mostly use the face as the primary object of analysis. The performance of face-based systems is largely determined by the extent of deformation in the facial region and the amount of useful information available in occluded face images. The periocular region is a useful portion of the face with discriminative ability coupled with resistance to deformation, and a reliable portion of the periocular area remains available even in occluded images. The present work demonstrates that a joint representation of periocular texture and periocular structure provides an effective expression- and pose-invariant representation. The proposed methodology provides an effective and compact description of periocular texture and shape. The method is tested over four benchmark datasets exhibiting varied acquisition conditions.
Comparison of ANFIS Update Methods Using Genetic Algorithm, Particle Swarm Optimization, and Artificial Bee Colony
This paper presents a comparison of metaheuristic algorithms for training the antecedent and consequent parameters of the adaptive network-based fuzzy inference system (ANFIS). The algorithms compared are the genetic algorithm (GA), particle swarm optimization (PSO), and artificial bee colony (ABC). The objective of this paper is to benchmark these well-known metaheuristic algorithms. The algorithms are applied to several data sets of different natures, and combinations of the algorithms' parameters are tested: for all algorithms, different population sizes are tested; for PSO, combinations of velocity parameters are tested; and for ABC, different abandonment limits are tested. The experiments show that ABC is more reliable than the other algorithms: it achieves a better mean square error (MSE) than the other algorithms on all data sets.
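As a sketch of one of the compared optimizers, a minimal PSO (with assumed inertia and acceleration coefficients w = 0.7, c1 = c2 = 1.5, not necessarily the paper's settings) can minimize any objective function:

```python
import random

def pso_minimize(f, dim, bounds, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimizer: each particle keeps a velocity,
    its personal best, and is pulled toward the swarm's global best
    (inertia w, cognitive c1, social c2)."""
    rng = random.Random(seed)
    lo, hi = bounds
    w, c1, c2 = 0.7, 1.5, 1.5
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In ANFIS training, `f` would evaluate the network's MSE for a given vector of antecedent and consequent parameters; GA and ABC would plug into the same interface.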
Neural Network Based Approach of Software Maintenance Prediction for Laboratory Information System
The software maintenance phase starts once a software project has been developed and delivered; after that, any modification to it corresponds to maintenance. Software maintenance involves modifications to keep a software project usable in a changed or changing environment, to correct discovered faults, and to improve performance or maintainability. Software maintenance and the management of software maintenance are recognized as two of the most important and most expensive processes in the life of a software product. As software maintenance plays an increasingly important role in planning and executing new software products, any new input that can better describe, or propose solutions for, faster and more cost-effective resolution of maintenance issues is very useful. This publication is based on research conducted in one of the world's biggest pharmaceutical and medical diagnostics companies. The data used in this publication come from research on the maintenance of one of the most used laboratory software systems in Europe; the conclusions in this paper may therefore serve as guidance to other colleagues dealing with the same issues. This research bases the prediction of maintenance on risk and time evaluations, using them as data sets for training neural networks. One of the key variables used in the research is the predicted implementation time, which is based on the expertise of the software development engineers who estimate how much time each issue will take. The other variables are three different types of risk evaluation for every issue in the maintenance phase: product risk, business risk, and implementation risk. Each of these risks is evaluated by engineers dedicated to that field of risk and can take one of the values low, medium, or high. We extract the known data from old issues that were implemented in previous software service patches.
Several factors influence the results. Apart from the evaluated risks, the numbers of planned and actually used working hours are also taken into account. With all this information, we aim to provide conclusions that will guide decisions regarding the length of time needed for a software maintenance period, the so-called software service patch, in the future. This information will hopefully also lead to better alignment of the planned working hours with the actually used ones. The aim of this paper is to support project maintenance managers: they will be able to take the issues planned for the next software service patch and pass them on to experts for risk and working-time evaluation. This will lead to more accurate prediction of the working hours needed for the software service patch, which will eventually lead to better budget planning for software maintenance projects.
Effect of Fusing Multiple Convolutional Neural Network Features in Image Classification
In response to rapidly growing digital image capturing, sharing, and usage, automatic image classification has become a prominent research topic. It has branched out into many algorithms and has adopted new techniques over time. Among them, state-of-the-art feature-fusion-based image classification generally relies on hand-crafted features such as the scale-invariant feature transform (SIFT), histogram of oriented gradients (HOG), local binary patterns (LBP), and so forth. However, it has been shown that features extracted through a pre-trained deep convolutional neural network (DCNN) outperform traditional hand-crafted features. This spurs the present work to explore the effect of fusing multiple CNN features from different architectures.
Thus, this work exploits the strength of CNN cues from multiple DCNNs without being tied to any hand-crafted features. First, features are extracted from the penultimate layers of three different DCNN architectures pre-trained on ImageNet, namely AlexNet, VGG-16, and Inception-V3. Then, a generalized feature subspace of the three CNN features is obtained by employing a principal component reconstruction model, where the features from each individual CNN are mapped into a common-dimensional subspace and concatenated to form a feature vector (FV). This subspace transformation plays a vital role in that it generates a representation of image statistics that is appearance invariant, capturing the complementary information available in the different CNN feature spaces. Finally, a multi-class linear support vector machine (SVM) and an extreme learning machine (ELM) are trained on the generalized fused FV. The experimental results from the SVM and the ELM demonstrate that such multi-CNN feature fusion is well suited for image classification tasks, yet surprisingly it has not been explored so far by the computer vision research community.
Quantitatively, top-1 classification accuracies of the proposed fusion approach are compared with existing classification methods, comprising fusion of several hand-crafted features as well as a single CNN feature fused with other hand-crafted features, on six widely accepted image classification data sets: CIFAR10, CIFAR100, Caltech101, Caltech256, MIT67, and Sun397. The proposed feature fusion strategy clearly surpasses the state-of-the-art fusion methods for image classification by about 2% and shows competitive results with fully trained DCNN-based methods. We conclude that features from different deep convolutional neural architectures carry complementary details that can improve object classification accuracy when they are transformed into a common feature space and fused.
Multi-Atlas Segmentation Based on Dynamic Energy Model: Application to Brain MR Images
Segmentation of anatomical structures in medical images is essential for scientific inquiry into the complex relationships between biological structure and clinical diagnosis, treatment, and assessment. As a method of incorporating prior knowledge and the anatomical structure similarity between a target image and atlases, multi-atlas segmentation has been successfully applied in segmenting a variety of medical images, including brain, cardiac, and abdominal images. The basic idea of multi-atlas segmentation is to transfer the labels in the atlases to the coordinates of the target image by matching each target patch to the atlas patches in its neighborhood. However, this technique is limited by the pairwise registration between the target image and the atlases. In this paper, a novel multi-atlas segmentation approach is proposed by introducing a dynamic energy model. First, the target is mapped to each atlas image by minimizing the dynamic energy function; then the segmentation of the target image is generated by weighted fusion based on the energy. The method is tested on the MICCAI 2012 Multi-Atlas Labeling Challenge dataset, which includes 20 target images and 15 atlas images. The paper also analyzes the influence of different parameters of the dynamic energy model on the segmentation accuracy and measures the Dice coefficient obtained using different feature terms with the energy model. The highest mean Dice coefficient obtained with the proposed method is 0.861, which is competitive with recently published methods.
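The weighted-fusion step can be sketched for a single voxel as follows; the exponential weighting of each atlas's energy is an assumption for illustration, not necessarily the paper's exact weighting function:

```python
import math
from collections import defaultdict

def fuse_labels(atlas_labels, energies, beta=1.0):
    """Weighted label fusion for one target voxel: each atlas proposes
    a label, and its vote is weighted by exp(-beta * energy), where
    `energies[i]` measures how poorly atlas i matched the target patch
    (lower energy = better match = larger weight)."""
    votes = defaultdict(float)
    for label, energy in zip(atlas_labels, energies):
        votes[label] += math.exp(-beta * energy)
    # return the label with the largest accumulated weight
    return max(votes, key=votes.get)
```

Repeating this per voxel turns the per-atlas registrations into one consensus segmentation, with poorly matching atlases contributing little.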
Design and Implementation of a Nano-Power Wireless Sensor Device for Smart Home Security
Most battery-driven wireless sensor devices enter sleep mode as soon as possible to extend the overall lifetime of the sensor network. It is necessary to turn off unnecessary radio and peripheral functions, especially since the radio unit consumes more energy than the other components during wireless communication. The microcontroller is the most important part of a wireless sensor device: it is responsible for manipulating sensing data and running communication protocols. Microcontrollers typically offer several sleep modes, each with a different level of energy usage; the deeper the sleep, the lower the energy consumption. Most wireless sensor devices can only enter an ordinary sleep mode, in which the external low-frequency oscillator keeps running in order to wake the sleeping microcontroller when the sleep timer expires. In this paper, our sensor device can enter an extended sleep mode in which no oscillator is running, giving the wireless sensor device nanoampere-level consumption together with a self-waking ability. Finally, these wireless sensor devices were deployed in a smart home security network.
Problem of Services Selection in Ubiquitous Systems
Ubiquitous computing is nowadays a reality through the networking of a growing number of computing devices. It allows providing users with context-aware information and services in a heterogeneous environment, anywhere and anytime. Selecting the best context-aware service among many available services and providers is a tedious problem. In this paper, a service selection method based on the Constraint Satisfaction Problem (CSP) formalism is proposed. The services are modeled as variables and domains, while the user context, preferences, and provider characteristics are modeled as constraints. The backtrack algorithm is used to solve the problem and find the best service and provider matching the user requirements. Even though this algorithm has exponential complexity, its use guarantees that the service that best matches the user requirements will be found. A comparison of the proposed method with existing solutions concludes the paper.
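A minimal sketch of the backtracking search over such a formulation (the variable and constraint encodings here are hypothetical, chosen only to show the mechanics):

```python
def backtrack_select(domains, constraints, assignment=None):
    """Backtracking search for a CSP formulation of service selection:
    `domains` maps each variable (e.g. a required service) to its
    candidate options (e.g. providers), and `constraints` is a list of
    predicates over a partial assignment encoding user context and
    preferences. Returns a complete consistent assignment, or None."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(domains):
        return assignment          # every variable assigned consistently
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(c(assignment) for c in constraints):
            result = backtrack_select(domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]        # undo and try the next candidate
    return None
```

Constraints are written to accept partial assignments, so inconsistent branches are pruned as early as possible, which is what keeps the exponential worst case tolerable in practice.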
Students Perceptions toward Virtual Reality Technology as a Learning Medium
The field of education witnesses a constant process of evolution fuelled by the desire to develop educational methods that meet the younger generations' needs, as they are accustomed to digital data and information on demand. As such, they have developed a fully customized manner of learning, which in turn requires new, innovative, and equally customized teaching methods. This customization and accelerated manner of learning stem from contemporary lifestyle trends. Since a reduced learning curve requires innovative and efficient teaching methods that comply with existing curricula yet facilitate the contemporary learning paradigm, it is essential to utilize new technology to enhance students' skills. Virtual Reality (VR) is gradually spreading as a teaching and learning aid due to the 3-dimensional (3D) environments it offers, in which one can navigate and interact. VR provides a solution to improve the delivery of material to students and facilitates the teaching process by providing a useful aid to lecturers. Proving the effectiveness of this new technology in this particular area starts with examining students' perceptions toward utilizing VR as an educational tool in the Faculty of Information Technology at Al-Ahliyya Amman University in Jordan, since students play the active role in the learning process. The methodology followed a mixed-methods approach, interviewing and administering a questionnaire to IT undergraduates. The results revealed the awareness level of VR technology, perceptions towards embedding VR in the education process, and willingness to use VR technology as a learning medium.
Study on Energy Performance Comparison of Information Centric Network Based on Difference of Network Architecture
The first generation of the wide area network was the circuit-centric network, where how the optimal circuit could be assigned was the most important issue for obtaining the best performance; this architecture succeeded for line-based telephone systems. The second generation was the host-centric network, and the Internet, based on this architecture, has succeeded worldwide and become a new social infrastructure. A network architecture based on the location of information is now emerging; this future network is called the information-centric network (ICN). ICN has been researched by many projects, and different architectures for its implementation have been proposed. The goal of this study is to compare the performance of those ICN architectures. In this paper, the authors propose a general ICN model which can represent two typical ICN architectures and compare their communication performance using request routing. Finally, simulation results are shown. We also assume that this network architecture should be adapted to energy-on-demand routing.
Category Based Relationship Extraction of Medical Concepts from Lexical Contexts
Medical information extraction, an important task in the domain of biomedical natural language processing (Bio-NLP), helps to understand domain-specific information such as problems and treatments (diseases, symptoms, and drugs) from medical texts. In this paper, we present an automated relationship extraction system based on two different state-of-the-art approaches, a rule-based technique and a feature-oriented machine learning technique, in the presence of our previously developed WordNet of Medical Event (WME 2.0) lexicon. The lexicon provides support to identify medical concepts and their related features, such as category, part-of-speech (POS), sentiment, and Similar Sentiment Words (SSW), which assist in extracting the conceptual relations between medical concepts. Our primary motivation for building the relationship extraction system is to improve the quality of patient care. We consider five categories for the concepts, namely diseases, drugs, symptoms, human anatomy, and miscellaneous medical terms (MMT), which refer to the broadest fundamental classes of medical concepts. These assigned categories assist in extracting eight types of semantic relations from the medical contexts, such as drug-drug, disease-drug, and human anatomy-symptom. The extracted concept categories are evaluated with the widely used Naïve Bayes and Logistic Regression supervised classifiers, while the relationship extraction system is validated with a Support Vector Machine (SVM) classifier. The classifiers achieve average F-measures of 0.81 for concept categorization and 0.86 for relationship extraction, respectively.
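The category-driven part of a rule-based approach can be sketched as a lookup of each concept's lexicon category followed by pairing; the lexicon entries and the fallback category below are illustrative, not WME 2.0 data:

```python
def extract_relation(concept_a, concept_b, lexicon):
    """Rule-based sketch: label the relation between two co-occurring
    medical concepts by the sorted pair of their lexicon categories,
    e.g. a disease co-occurring with a drug yields 'disease-drug'.
    `lexicon` maps a concept to one of the five categories."""
    cat_a = lexicon.get(concept_a, 'miscellaneous medical terms')
    cat_b = lexicon.get(concept_b, 'miscellaneous medical terms')
    # sorting makes the relation label order-independent
    return '-'.join(sorted([cat_a, cat_b]))

# toy lexicon for demonstration only
lexicon = {'aspirin': 'drug', 'migraine': 'disease', 'nausea': 'symptom'}
```

A feature-oriented classifier such as an SVM would then consume this category pair alongside POS, sentiment, and SSW features rather than relying on the pair alone.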
A Comparison of Image Data Representations for Local Stereo Matching
The stereo matching problem, while having been present for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements found in a set of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advancements in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While at its core the cost is based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. Older approaches utilize image intensity values (greyscale), while some newer methods utilize full colour (i.e. RGB); others apply transformations to the data to optimize for particular use cases. In this paper, an analysis is proposed to compare the effectiveness of the more common image data representations. The goal is to determine how effectively each data representation reduces the cost of the correct correspondence relative to other possible matches.
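The kind of comparison performed can be sketched with a per-pixel sum-of-absolute-differences cost under two representations; the luma weights below are the common Rec. 601 approximation, used here purely for illustration:

```python
def sad_cost(left, right, use_rgb=True):
    """Sum-of-absolute-differences matching cost between two pixels,
    under two representations: full RGB triples, or the greyscale luma
    approximation 0.299*R + 0.587*G + 0.114*B."""
    if use_rgb:
        return sum(abs(a - b) for a, b in zip(left, right))
    weights = (0.299, 0.587, 0.114)
    grey = lambda p: sum(w * c for w, c in zip(weights, p))
    return abs(grey(left) - grey(right))
```

Two pixels with very different colours but nearly equal luma illustrate why the greyscale representation can fail to separate a wrong match from a right one, which is exactly the discriminability question the paper's analysis targets.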
Operating System Based Virtualization Models in Cloud Computing
Cloud computing is ready to transform the structure of businesses and learning by supplying real-time applications and providing immediate help for small to medium-sized businesses. The ability to run a hypervisor inside a virtual machine is an important feature of virtualization and is called nested virtualization. In today's growing field of information technology, many virtualization models are available that provide a convenient approach to implementation, but deciding on a single model is difficult. This paper examines the applications of operating-system-based virtualization in cloud computing and seeks a suitable model given the models' different specifications and users' requirements. In the present paper, the most popular models are selected, with the selection based on container-based and hypervisor-based virtualization. The selected models were compared against a wide range of user requirements, such as the number of CPUs, memory size, nested virtualization support, live migration, and commercial support, and we identified the most suitable virtualization model.
Design of a Graphical User Interface for Data Preprocessing and Image Segmentation Process in 2D MRI Images
2D image segmentation is a significant process for finding a suitable region in medical images such as MRI, PET, and CT. In this study, we have focused on the segmentation of 2D MRI images. We have designed a GUI (graphical user interface), written in MATLAB, for 2D MRI images. The program has two different interfaces, one for data pre-processing and one for image clustering or segmentation. The data pre-processing section offers a median filter, average filter, unsharp mask filter, Wiener filter, and a custom filter (a filter designed by the user in MATLAB). For image clustering, there are seven different segmentation algorithms for 2D MR images: PSO (particle swarm optimization), GA (genetic algorithm), Lloyd's algorithm, k-means, the combination of Lloyd's algorithm and k-means, mean shift clustering, and BBO (biogeography-based optimization). To find a suitable cluster number for a 2D MRI image, we have designed a histogram-based cluster estimation method and then fed the estimated numbers to the segmentation algorithms to cluster an image automatically. We have also selected the best hybrid method for each 2D MR image using this GUI software.
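A simplified, illustrative version of a histogram-based cluster-count estimator paired with plain 1D k-means (not the exact method implemented in the MATLAB GUI) might look like:

```python
def estimate_clusters(intensities, bins=16):
    """Estimate a cluster count from an intensity histogram by counting
    its local maxima: a simple stand-in for a histogram-based cluster
    estimation method."""
    lo, hi = min(intensities), max(intensities)
    width = (hi - lo) / bins or 1
    hist = [0] * bins
    for v in intensities:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    padded = [0] + hist + [0]    # zero-pad so edge bins can be peaks
    return sum(1 for i in range(1, bins + 1)
               if padded[i] > padded[i - 1] and padded[i] >= padded[i + 1])

def kmeans_1d(values, k, iters=20):
    """Plain 1D k-means seeded with evenly spaced centers."""
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)
```

Feeding the estimated count into the clustering step is what lets the segmentation run without the user choosing k by hand.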
Multi Cloud Storage Systems for Resource Constrained Mobile Devices: Comparison and Analysis
Cloud storage is a model of online data storage where data is stored in a virtualized pool of servers hosted by third parties (CSPs) and located in different geographical locations. Cloud storage revolutionized the way users access their data online: anywhere, anytime, and from any device, such as a tablet, mobile phone, or laptop. Many issues, such as vendor lock-in, frequent service outages, data loss, and performance problems, exist in single cloud storage systems. To evade these issues, the concept of multi-cloud storage was introduced, and many multi-cloud storage systems for mobile devices exist in the market. In this article, we compare four multi-cloud storage systems for mobile devices (Otixo, Unclouded, Cloud Fuze, and Clouds) and evaluate their performance in terms of CPU usage, battery consumption, time consumption, and data usage on three mobile devices (Nexus 5, Moto G, and a Nexus 7 tablet) over a Wi-Fi network. Finally, open research challenges and future scope are discussed.
Challenges in Multi-Cloud Storage Systems for Mobile Devices
The demand for cloud storage is increasing because users want continuous access to their data. Cloud storage revolutionized the way users access their data. Many cloud storage service providers, such as DropBox and G Drive, are available, providing limited free storage; for extra storage, users have to pay, which acts as a burden on users. To avoid the issue of limited free storage, the concept of multi-cloud storage was introduced. In this paper, we discuss the limitations of existing multi-cloud storage systems for mobile devices.
Intrusion Detection in Cloud Computing Using Machine Learning
With the emergence of distributed environments, cloud computing is proving to be the most stimulating paradigm shift in computer technology, resulting in spectacular expansion of the IT industry. Many companies have augmented their technical infrastructure by adopting cloud resource-sharing architectures. Cloud computing has opened doors to unlimited opportunities, from application and platform availability to expandable storage and the provision of computing environments. However, from a security viewpoint, clouds introduce an added level of risk, weakening protection mechanisms and complicating the assurance of privacy, data security, and on-demand service. Issues of trust, confidentiality, and integrity are elevated due to the multi-tenant resource-sharing architecture of the cloud. Trust, or reliability, of a cloud refers to its capability to provide the needed services precisely and unfailingly. Confidentiality is the ability of the architecture to ensure that only authorized parties access private data, and integrity guarantees that data is protected from fabrication by unauthorized users. In order to assure the provision of a secure cloud, a roadmap or model is needed to analyze security problems, design mitigation strategies, and evaluate solutions. The aim of this paper is twofold: first, to highlight the factors that make cloud security critical, along with mitigation strategies; and second, to propose an intrusion detection model that identifies attackers in a preventive way using a machine learning Random Forest classifier with an accuracy of 99.8%. The model uses a small number of features, and a comparison with other classifiers is also presented.
Intelligent Tutor Using Adaptive Learning to Partial Discharges with Virtual Reality Systems
The aim of this study is to develop an intelligent tutoring system, with virtual reality, for training electrical operators at the LAPEM partial discharges laboratory center. The electrical domain requires efficient and well-trained personnel; due to the danger involved in the partial discharges field, qualified electricians are required. This paper presents an overview of the design of the intelligent tutor's adaptive learning and its VR user interface. We propose constructing a domain model of a subset of partial discharges that enables adaptive training through a trainee model representing the affective and knowledge states of trainees. If the intelligent tutoring system with VR succeeds, it is hypothesized that trainees will be able to learn about electrical installations in the partial discharges domain and gain knowledge more efficiently than trainees using traditional teaching methods, without running any risk of danger; traditional methods make training lengthy, costly, and dangerous.
Synthetic Method of Contextual Knowledge Extraction
The global information society requires transparency and reliability of data, as well as the ability to manage information resources independently; in particular, to search, analyze, and evaluate information, thereby obtaining new expertise. Moreover, it is the satisfaction of society's information needs that increases the efficiency of enterprise management and public administration. The study of structurally organized thematic and semantic contexts of different types, automatically extracted from unstructured data, is one of the important tasks for the application of information technologies in education, science, culture, governance, and business. The objectives of this study are the typologization of contextual knowledge and the selection or creation of effective tools for extracting and analyzing it. Explication of the various kinds and forms of contextual knowledge involves the development and use of full-text search information systems. For implementation purposes, the authors use services of the e-library 'Humanitariana', such as contextual search, different types of queries (paragraph-oriented queries, frequency-ranked queries), and automatic extraction of knowledge from scientific texts. The multifunctional e-library 'Humanitariana' is realized in an Internet architecture with a WWS configuration (Web browser / Web server / SQL server). An advantage of using 'Humanitariana' is the possibility of combining the resources of several organizations: scholars and research groups may work in local network mode or in distributed IT environments, with the ability to access resources on the servers of any participating organization. The paper discusses some specific cases of contextual knowledge explication using the e-library services and focuses on the possibilities of new types of contextual knowledge. The experimental research base consists of scientific texts about 'e-government' and 'computer games'.
An analysis of trends in the subject-themed texts allowed us to propose a content-analysis methodology that combines full-text search with the automatic construction of a 'terminogramma' and expert analysis of the selected contexts. A 'terminogramma' is laid out as a table containing a column with a frequency-ranked list of words (nouns), as well as columns indicating the absolute frequency (count) and the relative frequency of occurrence of each word (in %). The analysis of the 'e-government' materials showed that the state takes a dominant position in the processes of electronic interaction between the authorities and society in modern Russia. The media credited the main role in these processes to the government, which provides public services through specialized portals. Factor analysis revealed two factors statistically describing the terms used: human interaction (the user) versus the state (government, process organizer); and interaction management (public officer, process performer) versus technology (infrastructure). Isolation of these factors may lead to changes in the model of electronic interaction between government and society. This study also identified the dominant social problems and the prevalence of different categories of subjects of computer gaming in scientific papers from 2005 to 2015. Several types of contextual knowledge are thereby identified: micro context; macro context; dynamic context; thematic collections of queries (interactive contextual knowledge expanding the composition of the e-library's information resources); and multimodal context (functional integration of iconographic and full-text resources through a hybrid quasi-semantic search algorithm). Further studies can be pursued both by expanding the resource base on which they are conducted and by developing appropriate tools.
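The terminogramma table described above — a frequency-ranked word list with absolute and relative frequencies — can be sketched in a few lines. This toy version counts all words rather than nouns only (the real pipeline would need part-of-speech tagging) and uses a Latin-alphabet tokenizer; both are simplifying assumptions.

```python
import re
from collections import Counter

def terminogramma(text, top=10):
    # Tokenize, count, and rank words by frequency; report each word's
    # absolute count and relative frequency (in %), as in the table
    # described above. Real usage would restrict tokens to nouns.
    words = re.findall(r"[a-zA-Z']+", text.lower())
    total = len(words)
    counts = Counter(words)
    return [(word, n, round(100.0 * n / total, 2))
            for word, n in counts.most_common(top)]
```

For example, `terminogramma("government portal government services portal government")` ranks "government" first with count 3 and relative frequency 50.0%.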
Scrabble Scoring Using Artificial Intelligence Based on Image Processing
In the world of games, scoring is essential for determining the winner. Scrabble scoring is done by adding the value of each tile arranged into a word. Usually this is done by manual calculation, which is both less accurate and time-consuming. Therefore, in this paper we present a program for Scrabble scoring using artificial intelligence based on image processing. The system snaps a picture of the board and recognizes a word from 8 angles of view using image processing; the artificial intelligence system then converts the word into a score.
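Once the word has been recognized from the image, the tile-summing step described above is straightforward. A minimal sketch using the standard English tile values (ignoring premium squares and blank tiles, which score 0) might look like this:

```python
# Standard English Scrabble tile values; blanks and unknown
# characters score 0. Premium board squares are not modeled.
TILE_VALUES = {
    **dict.fromkeys("AEILNORSTU", 1),
    **dict.fromkeys("DG", 2),
    **dict.fromkeys("BCMP", 3),
    **dict.fromkeys("FHVWY", 4),
    "K": 5,
    **dict.fromkeys("JX", 8),
    **dict.fromkeys("QZ", 10),
}

def score_word(word):
    """Sum the face value of each recognized tile in the word."""
    return sum(TILE_VALUES.get(ch, 0) for ch in word.upper())
```

For instance, `score_word("QUIZ")` gives 10 + 1 + 1 + 10 = 22.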
A Conceptual Framework for Knowledge Integration in Agricultural Knowledge Management System Development
Agriculture is the mainstay of the Ethiopian economy; however, the sector is dominated by smallholder farmers, resulting in land fragmentation and low productivity. Because of these issues, much effort has been put into transforming the sector to bring about more sustainable rural economic development. Technological advances have been applied for the betterment of farmers, resulting in the design of tools that are potentially capable of supporting the agricultural sector; however, their use and relevance are still alien to the local rural communities. The notion of creating, capturing, and sharing knowledge has also been raised repeatedly by many international donor agencies seeking to transform the sector, yet most current approaches to knowledge dissemination focus on knowledge originating from the Western view of scientific rationality while overlooking the role of indigenous knowledge (IK). Therefore, in agricultural knowledge management system (KMS) development, the integration of IK with scientific knowledge is a critical success factor. The present study aims to contribute to the discourse on how best to integrate scientific and indigenous knowledge in agricultural KMS development. The conceptual framework of the research is anchored in a concept drawn from the theory of situated learning in communities of practice (CoPs): knowledge brokering. Using the KMS development practices of the Ethiopian Agricultural Transformation Agency as a case area, this research employed an interpretive analysis of primary and secondary qualitative data acquired through in-depth semi-structured interviews and participatory observations.
As a result, concepts are identified for understanding the integration of the two major knowledge systems (indigenous and scientific) and the participation of relevant stakeholders, in particular the local farmers, in agricultural KMS development through the roles of the extension agent as a knowledge broker, including crossing boundaries, holding an in-between position, translation and interpretation, negotiation, and networking. The research makes a theoretical contribution by addressing the incorporation of a variety of knowledge systems in agriculture and, practically, provides insight for agricultural policy makers regarding the importance of IK integration in agricultural KMS development and support for marginalized small-scale farmers.
An Experimental Study on Some Conventional and Hybrid Models of Fuzzy Clustering
Clustering is a versatile instrument in the analysis of data collections, providing insight into the underlying structure of a dataset and enhancing modeling capabilities. The fuzzy approach to the clustering problem increases flexibility by introducing partial memberships (values in the continuous interval [0, 1]) of instances in clusters. Several fuzzy clustering algorithms have been devised, such as FCM, Gustafson-Kessel (GK), Gath-Geva (GG), kernel-based FCM, and PCM. Each of these algorithms has its own advantages and drawbacks, so none of them performs best on all datasets. In this paper we experimentally compare the FCM, GK, and GG algorithms and a hybrid two-stage fuzzy clustering model combining the FCM and Gath-Geva algorithms. First, we theoretically discuss the advantages and drawbacks of each algorithm and describe the hybrid clustering model, which exploits the advantages and diminishes the drawbacks of each. Second, we experimentally evaluate the accuracy of the hybrid model by applying it to several benchmark and synthetic datasets.
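The partial-membership idea underlying all of these algorithms is easiest to see in plain FCM, which alternates two updates: memberships u[k][i] in [0, 1] (each point's row summing to 1) computed from distances to the current centroids, and centroids recomputed as means weighted by u^m. A minimal pure-Python sketch of standard FCM (not the paper's hybrid model) follows; the fuzzifier m = 2 and iteration count are conventional defaults.

```python
import random

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    # Fuzzy C-Means: alternate membership and centroid updates.
    rng = random.Random(seed)
    centers = rng.sample(X, c)          # initialize from data points
    d = len(X[0])
    for _ in range(iters):
        # Membership update: u[k][i] = 1 / sum_j (d_ik / d_jk)^(2/(m-1));
        # each row lies in [0, 1] and sums to 1.
        U = []
        for x in X:
            dists = [max(sum((x[j] - v[j]) ** 2 for j in range(d)) ** 0.5,
                         1e-12) for v in centers]
            U.append([1.0 / sum((dists[i] / dists[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        # Centroid update: mean of the data weighted by u^m.
        centers = []
        for i in range(c):
            w = [U[k][i] ** m for k in range(len(X))]
            tot = sum(w)
            centers.append([sum(w[k] * X[k][j] for k in range(len(X))) / tot
                            for j in range(d)])
    return centers, U
```

GK and GG refine the same loop with adaptive (Mahalanobis-style) distance norms per cluster, which is what the hybrid model exploits.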
Core Number Optimization Based Scheduler to Order/Map Simulink Applications
In recent years, the number of cores in digital signal and general-purpose processors has increased spectacularly. Concurrently, significant research has been conducted to benefit from this high degree of parallelism. In particular, this research focuses on providing efficient scheduling of hardware/software systems onto multicore architectures. The scheduling process consists of statically choosing one core to execute each task and specifying an execution order for the application's tasks. In this paper, we describe an efficient scheduler that calculates the optimal number of cores required to schedule an application, gives a heuristic scheduling solution, and evaluates its cost. Our results are evaluated and compared with those of the Preesm scheduler, and we show that our scheduler achieves better scheduling in terms of latency, computation time, and number of cores.
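The abstract does not detail the heuristic, but the classic baseline for this kind of static ordering/mapping is list scheduling: process tasks in dependency order and map each one to the core that becomes free first, then search for the smallest core count whose makespan matches that of an ample machine. The sketch below illustrates that baseline only; it is not the paper's scheduler, and the task format is a hypothetical `{name: (duration, [dependencies])}` dict.

```python
import heapq

def topo_order(tasks):
    # Kahn's algorithm: any order in which dependencies come first.
    indeg = {t: len(spec[1]) for t, spec in tasks.items()}
    out = {t: [] for t in tasks}
    for t, (_, deps) in tasks.items():
        for d in deps:
            out[d].append(t)
    ready = [t for t, k in indeg.items() if k == 0]
    order = []
    while ready:
        t = ready.pop()
        order.append(t)
        for s in out[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return order

def list_schedule(tasks, n_cores):
    # Greedy list scheduling: each task starts when both its chosen core
    # and all of its dependencies are done. Returns (makespan, mapping).
    finish, assign = {}, {}
    cores = [(0.0, c) for c in range(n_cores)]   # (free-at time, core id)
    heapq.heapify(cores)
    for t in topo_order(tasks):
        dur, deps = tasks[t]
        free_at, core = heapq.heappop(cores)
        start = max([free_at] + [finish[d] for d in deps])
        finish[t], assign[t] = start + dur, core
        heapq.heappush(cores, (finish[t], core))
    return max(finish.values()), assign

def min_cores(tasks, max_cores=8):
    # Smallest core count whose greedy makespan matches the ample-machine one.
    best, _ = list_schedule(tasks, max_cores)
    for n in range(1, max_cores + 1):
        span, _ = list_schedule(tasks, n)
        if span <= best:
            return n, span
    return max_cores, best
```

For two independent 2-unit tasks feeding a 1-unit task, this reports that 2 cores suffice for the minimal makespan of 3, while 1 core needs 5.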
Cloud Computing: Major Issues and Solutions
This paper presents major issues in cloud computing. It describes the different cloud computing deployment models and cloud service models available in the field, and then concentrates on various issues: cloud compatibility, cloud compliance, standardization of cloud technology, monitoring while on the cloud, and cloud security. The paper suggests solutions for these issues and concludes that a hybrid cloud infrastructure is a real boon for organizations.