Scholarly Research Excellence

Digital Open Science Index

Commenced in January 2007 · Frequency: Monthly · Edition: International · Abstract Count: 56109

Computer and Information Engineering

2366
99065
Semi-Supervised Outlier Detection Using a Generative and Adversary Framework
Abstract:
In many outlier detection tasks, only training data belonging to one class, i.e., the positive class, is available. The task is then to predict a new data point as belonging either to the positive class or to the negative class, in which case the data point is considered an outlier. For this task, we propose a novel corrupted Generative Adversarial Network (CorGAN). In the adversarial process of training CorGAN, the Generator generates outlier samples for the negative class, and the Discriminator is trained to distinguish the positive training data from the generated negative data. The proposed framework is evaluated using an image dataset and a real-world network intrusion dataset. Our outlier-detection method achieves state-of-the-art performance on both tasks.
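The adversarial scheme can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch version of the idea (not the paper's CorGAN architecture or its corruption mechanism, which are not detailed here): the Generator proposes negative samples, the Discriminator is fit to separate them from the positive training data, and at test time a low Discriminator score flags an outlier. The network sizes and `latent_dim`/`data_dim` are assumptions.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 32, 64  # assumed sizes

G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(x_pos):
    """One adversarial step on a batch of positive-class data."""
    batch = x_pos.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)
    z = torch.randn(batch, latent_dim)
    # Discriminator: push positives toward 1, generated negatives toward 0.
    d_loss = bce(D(x_pos), ones) + bce(D(G(z).detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: try to make its samples look positive to D.
    g_loss = bce(D(G(z)), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

for _ in range(200):
    train_step(torch.randn(64, data_dim) + 3.0)  # placeholder positive data

x_new = torch.randn(5, data_dim)
print(D(x_new) < 0.5)  # True -> flagged as outlier (low positive-class score)
```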
2365
98820
A Fuzzy-Rough Feature Selection Based on Binary Shuffled Frog Leaping Algorithm
Abstract:
Feature selection and attribute reduction are crucial problems and widely used techniques in machine learning, data mining, and pattern recognition for overcoming the well-known Curse of Dimensionality. This paper presents a feature selection method that efficiently carries out attribute reduction, thereby selecting the most informative features of a dataset. It consists of two components: 1) a measure for feature subset evaluation, and 2) a search strategy. For the evaluation measure, we have employed the fuzzy-rough dependency degree (FRDD) of the lower approximation-based fuzzy-rough feature selection (L-FRFS) due to its effectiveness in feature selection. As the search strategy, a modified binary shuffled frog leaping algorithm (B-SFLA) is proposed. The proposed feature selection method is obtained by hybridizing the B-SFLA with the FRDD. Nine classifiers have been employed to compare the proposed approach with several existing methods over twenty-two datasets from the UCI repository, including nine high-dimensional and large ones. The experimental results demonstrate that the B-SFLA approach significantly outperforms other metaheuristic methods in terms of the number of selected features and the classification accuracy.
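As an illustration of the search strategy, here is a simplified binary shuffled frog leaping skeleton. The `fitness` function is a placeholder standing in for the FRDD-based subset evaluation, and the paper's specific B-SFLA modifications are not reproduced; the memeplex partitioning, a sigmoid transfer for the binary jump, and random re-initialization of stuck frogs follow the generic SFLA pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_frogs, n_memeplexes, iterations = 20, 30, 3, 50

def fitness(mask):
    # Placeholder for the FRDD-based evaluation of the subset `mask`;
    # here it merely rewards small non-empty subsets.
    return -mask.sum() if mask.any() else -np.inf

def jump(worst, leader):
    # Binary move of the worst frog toward a leader via a sigmoid transfer.
    prob = 1.0 / (1.0 + np.exp(-(leader.astype(float) - worst)))
    return (rng.random(n_features) < prob).astype(int)

frogs = rng.integers(0, 2, (n_frogs, n_features))
for _ in range(iterations):
    frogs = frogs[np.argsort([-fitness(f) for f in frogs])]  # best first
    for m in range(n_memeplexes):
        idx = np.arange(m, n_frogs, n_memeplexes)  # shuffled partition
        worst = idx[-1]
        for leader in (frogs[idx[0]], frogs[0]):   # memeplex best, then global best
            candidate = jump(frogs[worst], leader)
            if fitness(candidate) > fitness(frogs[worst]):
                frogs[worst] = candidate
                break
        else:
            frogs[worst] = rng.integers(0, 2, n_features)  # re-initialize

print("selected features:", np.flatnonzero(frogs[0]))
```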
2364
98812
FPGA Implementation of the BB84 Protocol
Abstract:
The development of a quantum key distribution (QKD) system on a field-programmable gate array (FPGA) platform is the subject of this paper. A quantum cryptographic protocol is designed based on the properties of quantum information and the characteristics of FPGAs. The proposed protocol performs key extraction, reconciliation, error correction, and privacy amplification to generate a perfectly secret final key. We modeled the presence of a spy in our system, whose strategy is to reveal some of the exchanged information without being noticed. Using an FPGA card with a 100 MHz clock frequency, we have demonstrated the evolution of the error rate, as well as the amounts of mutual information (between the two interlocutors, and with the spy), from one step of the key generation process to the next.
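The protocol core is easy to simulate in software. The toy sketch below (not the FPGA pipeline; reconciliation, error correction, and privacy amplification are omitted) models the spy as an intercept-resend attacker, which is one plausible reading of the strategy described above: sifting keeps only matching-basis positions, and the eavesdropper's presence shows up as a raised error rate, around 25% for intercept-resend.

```python
import random

n = 4000
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]
eve_bases   = [random.choice("+x") for _ in range(n)]
bob_bases   = [random.choice("+x") for _ in range(n)]

def measure(bit, prep_basis, meas_basis):
    # A measurement in the preparation basis reproduces the bit;
    # a measurement in the other basis yields a random outcome.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

# Eve intercepts each qubit, measures it, and resends in her own basis.
eve_bits = [measure(b, ab, eb)
            for b, ab, eb in zip(alice_bits, alice_bases, eve_bases)]
bob_bits = [measure(e, eb, bb)
            for e, eb, bb in zip(eve_bits, eve_bases, bob_bases)]

# Sifting: keep the positions where Alice's and Bob's bases agree.
sifted = [(a, b) for a, b, ab, bb in
          zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]

# Publicly comparing a sample of the sifted key estimates the error rate;
# intercept-resend pushes it to roughly 25%, revealing the spy.
sample = sifted[: len(sifted) // 4]
errors = sum(a != b for a, b in sample) / len(sample)
print(f"sifted bits: {len(sifted)}, estimated error rate: {errors:.3f}")
```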
2363
98649
Cell Detection and Recognition in Bone Marrow Examination with Deep Learning Methods
Abstract:
In this paper, deep learning methods are applied in the biomedical field to detect and count different types of cells automatically, replacing manual work in medical practice, specifically in bone marrow examination. The process is mainly composed of two steps: detection and recognition. A Mask Region-based Convolutional Neural Network (Mask R-CNN) was used for detection and image segmentation to extract cells, and then a Convolutional Neural Network (CNN) and a Deep Residual Network (ResNet) were used for classification. Results of the cell detection network show efficiency high enough to meet application requirements. For the cell recognition network, the two networks are compared, and the final system is fully applicable.
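A minimal torchvision sketch of such a two-stage pipeline is shown below, under the assumption that off-the-shelf weights stand in for the paper's bone-marrow-trained networks (the COCO/ImageNet weights used here know nothing about cell classes): Mask R-CNN proposes and segments candidate objects, and a ResNet classifies each cropped detection.

```python
import torch
import torchvision

detector = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()
classifier = torchvision.models.resnet18(weights="DEFAULT").eval()

image = torch.rand(3, 512, 512)  # stand-in for a stained smear image
with torch.no_grad():
    det = detector([image])[0]   # dict with boxes, labels, scores, masks
    for box, score in zip(det["boxes"], det["scores"]):
        if score < 0.5:          # keep confident detections only
            continue
        x1, y1, x2, y2 = box.int().tolist()
        if x2 <= x1 or y2 <= y1:
            continue
        crop = image[:, y1:y2, x1:x2]
        crop = torch.nn.functional.interpolate(crop[None], size=(224, 224))
        cell_type = classifier(crop).argmax(1).item()
        print(f"cell at {(x1, y1, x2, y2)} -> class {cell_type}")
# On random input there may be no confident detections; on real data each
# detected cell crop would be classified by the second network.
```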
2362
98473
Automatic Generation of Unified Modelling Language Use Case Diagrams and Test Cases Based on the Classification Tree Method
Abstract:
Software development with the Object-Oriented methodology involves many stages that take time and incur high costs. An undetected error in the system analysis process will affect the design and implementation processes, and unexpected output forces revision of the preceding processes; every rollback adds expense and delay. With a good test process from the early phases, therefore, the implemented software is efficient and reliable and also meets the user's requirements. The Unified Modelling Language (UML) is a tool which uses symbols to describe the work process in Object-Oriented Analysis (OOA). This paper presents an approach for automatically generating a UML use case diagram and test cases. The UML use case diagram is generated from the event table, and test cases are generated from use case specifications and Graphical User Interfaces (GUI). Test cases are derived with the Classification Tree Method (CTM), which classifies data into nodes arranged in a hierarchical structure. Moreover, this paper describes a program that generates the use case diagram and test cases. As a result, the approach reduces working time and increases efficiency.
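The classification-tree step can be illustrated compactly: each classification in the tree contributes one class per test case, and combining the classes yields the test suite. The tree below is a hypothetical example (the paper derives its classifications from use case specifications and GUIs, which are not reproduced here).

```python
from itertools import product

# Hypothetical classification tree for a "login" use case:
# each classification lists its leaf classes.
tree = {
    "username": ["valid", "unknown", "empty"],
    "password": ["correct", "wrong", "empty"],
    "remember_me": ["checked", "unchecked"],
}

# Every test case picks exactly one class from each classification.
test_cases = [dict(zip(tree, combo)) for combo in product(*tree.values())]
for i, tc in enumerate(test_cases[:5], 1):
    print(f"TC{i}: {tc}")
print(f"total: {len(test_cases)} test cases")  # 3 * 3 * 2 = 18
```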
2361
98455
A Recommender System for Dynamic Selection of Undergraduates' Elective Courses
Abstract:
The task of selecting a few elective courses from the variety of available elective courses has been a difficult one for many students over the years. In many higher institutions, guidance counselors or level advisers are usually employed to assist students in making the right choice of courses. In reality, these counselors and advisers are often overloaded with too many students to attend to, and sometimes they do not have enough time for the students. Moreover, the academic strength of the student, based on past results, is often not considered in the new choice of electives. Recommender systems implement advanced data analysis techniques to help users find items of interest by producing a predicted likeliness score or a list of top recommended items for a given active user. Therefore, in this work, a collaborative filtering-based recommender system was developed that dynamically recommends elective courses to undergraduate students based on their past grades in related courses. This approach employs the k-nearest neighbor algorithm to discover hidden relationships between the related courses passed by students in the past and the currently available elective courses. A real student results dataset was used to build and test the recommendation model. The developed system will not only improve the academic performance of students, but will also help reduce the workload on level advisers and school counselors.
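A minimal sketch of the k-nearest-neighbor idea follows, with a fabricated grade matrix in place of the real student results: the predicted grade for a candidate elective is the mean grade that the most similar past students earned in it, and high predictions become recommendations.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# rows: past students, columns: grades in related courses (0-100 scale)
past_grades = np.array([
    [78, 85, 62, 90],
    [55, 60, 58, 52],
    [80, 88, 70, 85],
    [60, 58, 65, 50],
])
elective_grades = np.array([88, 50, 84, 55])  # grades those students got in the elective

knn = NearestNeighbors(n_neighbors=2).fit(past_grades)
new_student = np.array([[75, 82, 68, 80]])
_, idx = knn.kneighbors(new_student)          # indices of the 2 most similar students
predicted = elective_grades[idx[0]].mean()
print(f"predicted elective grade: {predicted:.1f}")  # recommend if high
```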
2360
98338
Cost Effective Real-Time Image Processing Based Optical Mark Reader
Abstract:
In this modern era of automation, most academic and competitive exams use Multiple Choice Questions (MCQ). The responses to these MCQ-based exams are recorded on Optical Mark Reader (OMR) sheets. Evaluation of OMR sheets requires separate specialized machines for scanning and marking. The sheets used by these machines are special and cost more than normal sheets. The available process is uneconomical and depends on paper thickness, scanning quality, paper orientation, special hardware, and customized software. This study tackles the problem of evaluating OMR sheets without any special hardware, making the whole process economical. We propose an image processing based algorithm which can read and evaluate scanned OMR sheets with no special hardware required, eliminating the use of special OMR sheets; responses recorded on a normal sheet are sufficient for evaluation. The proposed system handles color, brightness, rotation, and small imperfections in the OMR sheet images.
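The core of such an evaluator can be sketched with OpenCV. The fragment below is a simplification under stated assumptions: a synthetic sheet stands in for a real scan, the bubble coordinates are taken as known (the paper also handles rotation and localization, not shown), and Otsu thresholding absorbs brightness variation; a bubble counts as marked when enough of its area is dark.

```python
import cv2
import numpy as np

# Synthetic stand-in for a scanned answer sheet: white page, one filled bubble.
sheet = np.full((300, 300), 255, np.uint8)
cv2.circle(sheet, (100, 200), 30, 40, -1)  # bubble "A" filled with a dark mark
cv2.circle(sheet, (160, 200), 30, 40, 2)   # bubble "B" only outlined (blank)

# Otsu picks the threshold automatically; marks become white (255) after inversion.
_, binary = cv2.threshold(sheet, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

bubbles = {"A": (100, 200, 30), "B": (160, 200, 30)}  # (x, y, radius), assumed known
for label, (x, y, r) in bubbles.items():
    mask = np.zeros_like(binary)
    cv2.circle(mask, (x, y), r, 255, -1)
    fill = cv2.countNonZero(cv2.bitwise_and(binary, mask)) / cv2.countNonZero(mask)
    print(label, "marked" if fill > 0.4 else "blank", f"(fill ratio {fill:.2f})")
```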
2359
98230
Housing Price Prediction Using Machine Learning Algorithms: The Case of Melbourne City, Australia
Abstract:
House price forecasting is a major topic in real estate market research. Effective house price prediction models not only allow home buyers and real estate agents to make better data-driven decisions but may also benefit the property policymaking process. This study investigates the housing market by using machine learning techniques to analyze real historical house sale transactions in Australia. It seeks useful models which could be deployed as an application for house buyers and sellers. Data analytics show a high discrepancy between house prices in the most expensive and the most affordable suburbs in the city of Melbourne. In addition, experiments demonstrate that the combination of Stepwise feature selection and the Support Vector Machine (SVM), based on the Mean Squared Error (MSE) measurement, consistently outperforms other models in terms of prediction accuracy.
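The reported Stepwise + SVM combination can be approximated in scikit-learn as below; `SequentialFeatureSelector` stands in for the stepwise procedure, and random placeholder data replaces the Melbourne transactions, so only the pipeline shape, not the result, is illustrated.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                              # placeholder house features
y = X[:, 0] * 3 + X[:, 1] - X[:, 2] + rng.normal(size=500)  # placeholder price

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),
    SequentialFeatureSelector(SVR(kernel="rbf"), n_features_to_select=5),
    SVR(kernel="rbf"),
)
model.fit(X_tr, y_tr)
print("MSE:", mean_squared_error(y_te, model.predict(X_te)))
```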
2358
98182
Consolidating Service Engineering Ontologies: Building Service Ontology from SOA Modeling Language (SoaML)
Abstract:
As a term for characterizing the process of devising a service system, 'service engineering' is still regarded as an open research challenge due to unspecified details and conflicting perspectives. This paper presents consolidated service engineering ontologies that collect, specify, and define the relationships between the components pertinent to the context of service engineering. The ontologies are built by way of literature surveys of collected conceptual works, collating various concepts into an integrated ontology. Two ontologies are produced: a general service ontology and a software service ontology. The software service ontology is drawn from the informatics domain, while the generalized ontology of a service system is built from both a business management and an information systems perspective. The produced ontologies are verified by exercising conceptual operationalizations of the ontologies, adopting several service orientation features and service system patterns. The proposed ontologies are demonstrated to be sufficient to serve as a basis for a service engineering framework.
2357
97829
Analysis of Electrocardiography Survey Data for the Classification of Heart Diseases Using Artificial Neural Network
Abstract:
Due to the prevalence of heart diseases in Pakistan and the frequent deaths of patients, there is a need to predict heart diseases in a timely manner so that proper medication can be provided to patients. The research is performed on the results of a survey (based on patients' ECGs) conducted in Pakistan. This study makes use of ECGs collected from different areas of Pakistan. The readings of the parameters P, Q, R, S, and T and their intervals are extracted from these collected ECGs. The dataset contains 278 features, which comprise readings from all leads and also include the patient's name, age, location, etc. Firstly, a feature reduction algorithm is applied to the dataset: the features are reduced from 278 to 150 with the dimensionality reduction algorithm principal component analysis. After reducing the features, a neural network algorithm is applied for predicting heart diseases. A feedforward multi-layer neural network (NN) with the error back-propagation (BP) learning algorithm is applied to the dataset in order to predict heart diseases from ECG signals. The best prediction rate obtained is 99.5% with one hidden layer. The results show that this framework efficiently predicts heart diseases and can be used to improve the diagnosis of heart diseases in Pakistan, as well as for educational purposes. Further analysis of the results shows that the most common disease in the collected dataset is the one with label 10.
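The described pipeline (278 features, PCA down to 150 components, a single hidden layer trained with backpropagation) maps directly onto scikit-learn, as in the sketch below. The data and the hidden layer width of 100 are placeholders; the abstract does not state the paper's layer size.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 278))    # placeholder: P, Q, R, S, T readings per lead
y = rng.integers(0, 11, size=600)  # placeholder disease labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=150),                       # 278 -> 150 features
    MLPClassifier(hidden_layer_sizes=(100,),     # one hidden layer, backprop
                  max_iter=500, random_state=0),
)
model.fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))
```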
2356
97812
A Comparative Study of GTC and PSP Algorithms for Mining Sequential Patterns Embedded in a Database with Time Constraints
Abstract:
This paper considers the problem of mining sequential patterns embedded in a database while handling the time constraints defined in the GSP algorithm (a level-wise algorithm). We compare two earlier approaches, GTC and PSP, which take up the general principles of GSP. Furthermore, this paper discusses the PG-hybrid algorithm, which combines PSP and GTC. The results show that PSP and GTC are more efficient than GSP, and that the GTC algorithm performs better than PSP. The PG-hybrid algorithm uses the PSP algorithm for the first two passes over the database and the GTC approach for the following scans. Experiments show that the hybrid approach is very efficient for short, frequent sequences.
2355
97810
An Integrated Web-Based Workflow System for Design of Computational Pipelines in the Cloud
Abstract:
As more and more workflow systems adopt the cloud as their execution environment, various challenges arise that need to be addressed for the cloud to be utilized efficiently. This paper introduces a method for resource provisioning based on our previous research on dynamic allocation and its pipeline processes. We present an abstraction for workload scheduling in which independent tasks are scheduled among the various available processors of a distributed computing environment for optimization. We also propose an integrated web-based workflow designer that takes advantage of HTML5 technology and chains together multiple tools. To allow multiple pipelines to execute on the cloud in parallel, we develop a script translator and an execution engine for workflow management in the cloud. All information is known in advance by the workflow engine, and tasks are allocated according to the prior knowledge in the repository. This proposed effort has the potential to provide support for process definition, workflow enactment, and monitoring of workflow processes. Users would benefit from the web-based system, which allows the creation and execution of pipelines without scripting knowledge.
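The scheduling abstraction mentioned above, independent tasks distributed among available processors, is illustrated below with a standard greedy heuristic (longest task first onto the least-loaded processor). The task runtimes and processor count are illustrative; the paper's engine uses prior knowledge from its repository, which is not modeled here.

```python
import heapq

tasks = [7, 3, 9, 2, 5, 8, 4]  # estimated runtimes of independent tasks
n_processors = 3

loads = [(0, p) for p in range(n_processors)]  # (current load, processor id)
heapq.heapify(loads)
assignment = {p: [] for p in range(n_processors)}

for t in sorted(tasks, reverse=True):          # longest task first
    load, p = heapq.heappop(loads)             # least-loaded processor
    assignment[p].append(t)
    heapq.heappush(loads, (load + t, p))

print(assignment, "makespan:", max(l for l, _ in loads))
```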
2354
97665
A Comparative Analysis of Asymmetric Encryption Schemes on Android Messaging Service
Abstract:
Today, the Short Message Service (SMS) is an important means of communication. SMS is used not only in informal environments for communication and transactions but also in formal environments such as institutions, organizations, companies, and the business world as a tool for communication and transactions. Therefore, there is a need to secure the information transmitted through this medium, both in transit and at rest. Encryption has been identified as a means to provide this security, and several past studies have proposed and developed encryption algorithms for SMS and information security. This research compares the performance of common asymmetric encryption algorithms on SMS security. It employs three algorithms, namely RSA, McEliece, and Rabin. Several experiments were performed on SMS messages of various sizes on an Android mobile device. The experimental results show that each of the three techniques has different key generation, encryption, and decryption times. The efficiency of an algorithm is determined by the time it takes for encryption, decryption, and key generation. The best algorithm can be chosen based on the least time required for encryption: McEliece with size 4096 gives the least encryption time, while Rabin with size 4096 takes the most time and is therefore the least effective algorithm for encryption. The research also shows that McEliece with size 2048 has the least key generation time, making it the best algorithm for key generation, and that RSA with size 1024 is the most preferable algorithm for decryption, as it gives the least decryption time.
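The timing methodology can be reproduced for RSA with the Python `cryptography` package, as sketched below; McEliece and Rabin have no equivalent in that library, so only one of the three algorithms is shown, and the key and message sizes are illustrative.

```python
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

t0 = time.perf_counter()
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
t_keygen = time.perf_counter() - t0

message = b"short SMS payload"  # RSA-OAEP limits plaintext well below the key size
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

t0 = time.perf_counter()
ciphertext = private_key.public_key().encrypt(message, oaep)
t_enc = time.perf_counter() - t0

t0 = time.perf_counter()
assert private_key.decrypt(ciphertext, oaep) == message
t_dec = time.perf_counter() - t0

print(f"keygen {t_keygen:.4f}s, encrypt {t_enc:.6f}s, decrypt {t_dec:.6f}s")
```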
2353
97629
Optimization of Machine Learning Regression Results: An Application on Health Expenditures
Abstract:
Machine learning regression methods are recommended as an alternative to classical regression methods in the presence of variables that are difficult to model. Health expenditure data are typically non-normal and have a heavily skewed distribution. This study aims to compare machine learning regression methods, using hyperparameter tuning, for predicting health expenditure per capita. A multiple regression model was conducted, and the performance results of Lasso Regression, Random Forest Regression, and Support Vector Machine Regression were recorded as different hyperparameters were assigned. The lambda (λ) value for Lasso Regression, the number of trees for Random Forest Regression, and the epsilon (ε) value for Support Vector Regression were the tuned hyperparameters. Results obtained using k-fold cross-validation, with k varied from 5 to 50, indicate that the differences between the machine learning regression results in terms of R², RMSE, and MAE values are statistically significant (p < 0.001). The results reveal that Random Forest Regression (R² > 0.7500, RMSE ≤ 0.6000, and MAE ≤ 0.4000) outperforms the other machine learning regression methods. It is highly advisable to use machine learning regression methods for modelling health expenditures.
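The tuning protocol maps naturally onto scikit-learn's grid search, as in the sketch below: each regressor is tuned over the hyperparameter named in the abstract under k-fold cross-validation (scikit-learn's default regression score is R², matching the comparison above). The grids, the choice of k = 5, and the skewed synthetic target are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = np.exp(X[:, 0] + 0.5 * rng.normal(size=300))  # skewed placeholder target

kf = KFold(n_splits=5, shuffle=True, random_state=0)
searches = {
    # sklearn's `alpha` plays the role of Lasso's lambda
    "lasso": GridSearchCV(Lasso(), {"alpha": [0.01, 0.1, 1.0]}, cv=kf),
    "random forest": GridSearchCV(RandomForestRegressor(random_state=0),
                                  {"n_estimators": [100, 300, 500]}, cv=kf),
    "svr": GridSearchCV(SVR(), {"epsilon": [0.01, 0.1, 0.5]}, cv=kf),
}
for name, gs in searches.items():
    gs.fit(X, y)
    print(name, gs.best_params_, f"R^2 = {gs.best_score_:.3f}")
```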
2352
97628
Examination of Public Hospital Unions' Technical Efficiencies Using Data Envelopment Analysis and Machine Learning Techniques
Abstract:
Regional planning in health has gained speed in developing countries in recent years. In Turkey, 89 different Public Hospital Unions (PHUs) were established at the provincial level. In this study, the technical efficiencies of the 89 PHUs were examined using Data Envelopment Analysis (DEA) and machine learning techniques, dividing them into two clusters according to the similarities of their input and output indicators. The numbers of beds, physicians, and nurses were determined as input variables, and the numbers of outpatients, inpatients, and surgical operations as output indicators. Before performing DEA, the PHUs were grouped into two clusters; the first cluster represents PHUs which have higher population, demand, and service density than the others. The difference between the clusters was statistically significant in terms of all study variables (p < 0.001). After clustering, DEA was performed overall and for the two clusters separately. It was found that 11% of PHUs were efficient overall, while 21% and 17% of them were efficient within the first and second clusters, respectively. PHUs representing the urban parts of the country, with higher population and service density, are thus more efficient than the others. A random forest decision tree graph shows that the number of inpatients, a measure of service density, is a determinative factor of PHU efficiency. It is advisable for public health policy makers to use statistical learning methods in resource planning decisions to improve efficiency in health care.
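For readers unfamiliar with DEA, the input-oriented CCR model behind such efficiency scores is a linear program per unit: shrink the unit's inputs by a factor θ while a nonnegative combination of peer units must match its outputs. A minimal SciPy version with fabricated data follows (three hypothetical PHUs, two inputs, and two outputs; the paper uses three of each).

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[100, 80, 120],          # input 1 (e.g. beds) per unit
              [50, 40, 70]], float)    # input 2 (e.g. physicians) per unit
Y = np.array([[900, 700, 1000],        # output 1 (e.g. outpatients) per unit
              [300, 260, 420]], float) # output 2 (e.g. operations) per unit
n = X.shape[1]

for o in range(n):
    # decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]                      # minimize theta
    A_in = np.c_[-X[:, [o]], X]                      # sum_j lam_j x_ij <= theta x_io
    A_out = np.c_[np.zeros((Y.shape[0], 1)), -Y]     # sum_j lam_j y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    print(f"unit {o}: efficiency = {res.x[0]:.3f}")  # 1.0 means efficient
```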
2351
97616
Studying Methodological Maps on the Engineering Education Program
Abstract:
Information and communication technology brings constant progress to our daily lives, and abundant research activity addresses the associated hardware and software and develops and improves their performance; still, there is a need to provide all these combined solutions in one body of work. A systematic mapping study was conducted to investigate the contributions that have been made, the areas of knowledge that are explored most, and the research aspects used to partition the common understanding of the state of the art in software engineering education, which we have categorized into a well-defined engineering framework. We give an overview of current research topics and trends and their distribution by type of research and scope of application. In addition, the topics were grouped, and a list of proposed methods, frameworks, and tools was compiled. The map shows that the impact of current research is limited to a few areas of knowledge, and that a future path needs to be mapped to fill the gaps in instruction activities.
2350
97544
The Effect of Computerized Systems of Office Automation on Employees' Productivity Efficiency
Abstract:
One of the factors that can play an important role in increasing productivity is the optimal use of information technology, and computerized office automation systems in organizations and companies today play a significant role in this area. Therefore, this research was conducted with the aim of investigating the relationship between computerized office automation systems and the productivity of employees in the municipality of Ilam city. The statistical population of this study was 110 people; using the Cochran formula, the minimum sample size is 78 people. In terms of its objective, the present research is descriptive. A questionnaire was used to collect data. To assess the reliability of the variables, Cronbach's alpha coefficient was used, which was equal to 0.85; SPSS 19 and the Pearson test were used to analyze the data and test the hypotheses of the research. In this research, three hypotheses concerning the relationship of office automation with efficiency, performance, and effectiveness were investigated. The results showed a direct and positive relationship between the office automation system and increases in the efficiency, effectiveness, and performance of employees, and there was no reason to reject these hypotheses.
2349
97437
Multiscale Modelization of Multilayered Bi-Dimensional Soils
Abstract:
Soil moisture content is a key variable in many environmental sciences. Even though it represents a small proportion of the liquid freshwater on Earth, it modulates interactions between the land surface and the atmosphere, thereby influencing climate and weather. Accurate modeling of the above processes depends on the ability to provide a proper spatial characterization of soil moisture. The measurement of soil moisture content allows assessment of soil water resources in the fields of hydrology and agronomy. The second parameter in interaction with the radar signal is the geometric structure of the soil. Most traditional electromagnetic models consider natural surfaces as single-scale, zero-mean, stationary Gaussian random processes, with roughness behavior characterized by statistical parameters like the Root Mean Square (RMS) height and the correlation length. The main problem is then that the agreement between experimental measurements and theoretical values is usually poor, due to the large variability of the correlation function; as a consequence, backscattering models have often failed to predict backscattering correctly. In this study, surfaces are considered as band-limited fractal random processes corresponding to a superposition of a finite number of one-dimensional Gaussian processes, each one having a spatial scale. Multiscale roughness is characterized by two parameters: the first is proportional to the RMS height, and the other is related to the fractal dimension. Soil moisture is related to the complex dielectric constant. This multiscale description has been adapted to two-dimensional profiles using the bi-dimensional wavelet transform and the Mallat algorithm, to describe natural surfaces more correctly. We characterize the soil surface and sub-surface by a three-layer geo-electrical model. The upper layer is described by its dielectric constant, its thickness, a multiscale bi-dimensional surface roughness model (using the wavelet transform and the Mallat algorithm), and volume scattering parameters. The lower layer is divided into three fictive layers separated by an assumed plane interface; these three layers are modeled by an effective medium characterized by an apparent effective dielectric constant that takes into account the presence of air pockets in the soil. We have adopted the 2D multiscale three-layer small perturbation model, including first air pockets in the soil sub-structure and then a vegetation canopy in the soil surface structure, in order to simulate the radar backscattering. A sensitivity analysis of the dependence of the backscattering coefficient on the multiscale roughness and the soil moisture has been performed. We then proposed to change the dielectric constant of the multilayer medium so that it takes into account the different moisture values of each layer in the soil, and a sensitivity analysis of the backscattering coefficient, including the air pockets in the volume structure, with respect to the multiscale roughness parameters and the apparent dielectric constant was carried out. Finally, we studied the behavior of the radar backscattering coefficient for a soil having a vegetation layer in its surface structure.
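The multiscale decomposition step can be sketched with PyWavelets, which implements the Mallat algorithm referred to above: a 2D wavelet transform splits a surface into per-scale detail fields whose RMS amplitudes act as scale-dependent roughness measures. The synthetic Gaussian surface and the choice of the `db2` wavelet are assumptions for illustration.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
surface = rng.normal(scale=1.0, size=(256, 256))  # stand-in for soil heights

# Mallat-style multiresolution decomposition into 4 detail scales.
coeffs = pywt.wavedec2(surface, wavelet="db2", level=4)
for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):  # coarsest to finest
    rms = np.sqrt(np.mean(cH**2 + cV**2 + cD**2))
    print(f"scale {lvl}: detail RMS = {rms:.3f}")  # per-scale roughness amplitude
```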
2348
97191
Assisted Video Colorization Using Texture Descriptors
Abstract:
Colorization is the process of adding colors to a monochromatic image or video. Usually, the process involves segmenting the image into regions of interest and then applying colors to each one; for videos, this process is repeated for each frame, which makes it a tedious and time-consuming job. We propose a new assisted method for video colorization in which the user only has to colorize one frame, and the colors are then propagated to the following frames. The user can intervene at any time to correct eventual errors in color assignment. The method consists of extracting intensity and texture descriptors from the frames and then performing feature matching to determine the best color for each segment. To reduce computation time and give better spatial coherence, we narrow the search area and weight each feature to emphasize the texture descriptors. To give a more natural result, we use an optimization algorithm for the color propagation. Experimental results on several image sequences, compared with other existing methods, demonstrate that the proposed method performs better colorization with less time and user interference.
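The matching step can be reduced to a few lines: every segment of a new frame takes the color of the reference segment with the nearest descriptor, with texture weighted more heavily as described. The two-component descriptor (mean intensity, and standard deviation as a crude texture proxy) and the reference values below are simplifications for illustration; the paper's descriptor set and optimization-based propagation are richer.

```python
import numpy as np

def descriptor(patch):
    # (mean intensity, std as a crude texture measure) for one segment
    return np.array([patch.mean(), patch.std()])

def distance(d1, d2, w_texture=2.0):
    # texture gets a higher weight, mirroring the emphasis described above
    return abs(d1[0] - d2[0]) + w_texture * abs(d1[1] - d2[1])

# Reference frame: segments the user has already colorized
# (fabricated values: descriptor -> assigned RGB color).
reference = {
    "sky":   (np.array([200.0, 5.0]), (135, 206, 235)),
    "grass": (np.array([90.0, 40.0]), (60, 179, 75)),
}

def propagate_color(patch):
    d = descriptor(patch)
    best = min(reference, key=lambda name: distance(reference[name][0], d))
    return reference[best][1]

print(propagate_color(np.full((8, 8), 198.0)))  # matches "sky"
```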
2347
97087
Evaluating Perceived Usability of ProxTalker App Using Arabic Standard Usability Scale: A Student's Perspective
Abstract:
This oral presentation discusses a proposal for a study that evaluates the usability of an evidence-based application named ProxTalker App. The significance of this study is that it will inform the administration and faculty staff at the Department of Communication Sciences Disorders (CDS), College of Life Sciences, Kuwait University whether the app is a suitable tool for CDS students to use. A case study will be conducted with a sample of CDS students taking practicum and internship courses during the academic year 2018/2019, following a process used by a previous study. The process of calculating the SUS is well documented and will be followed. ProxTalker App is an alternative and augmentative tool that speech-language pathologists (SLPs) can use to customize boards for their clients. SLPs can customize different boards using this app for various activities; for example, a board can be created by the SLP to improve and support receptive and expressive language. Using technology to support therapy can help SLPs integrate the ProxTalker App into their clients' therapy. Supported tools, games, and motivation are some advantages of incorporating apps during therapy sessions. A quantitative methodology will be used, involving a standard tool adapted to the Arabic language to accommodate native Arabic speakers: the Arabic Standard Usability Scale (A-SUS) questionnaire, an adaptation of the System Usability Scale (SUS). Standard usability questionnaires are reliable and valid, and their process is properly documented. This study builds upon the development of the A-SUS, a psychometrically evaluated questionnaire that targets native Arabic speakers. The usability results will give a preliminary indication of whether the ProxTalker App under investigation is appropriate to integrate within the practicum and internship curriculum of CDS. The results of this study will inform the CDS department whether this specific app is an appropriate tool for our specific students within our environment, because usability depends on the product, environment, and users.
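The well-documented SUS calculation that the study will follow is short enough to state exactly: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the total is multiplied by 2.5 to give a 0-100 score. The responses below are fabricated.

```python
def sus_score(responses):
    """Score ten 1-5 Likert responses (item 1 first) on the 0-100 SUS scale."""
    odd = sum(r - 1 for r in responses[0::2])   # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])  # items 2, 4, 6, 8, 10
    return 2.5 * (odd + even)

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```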
2346
96919
A Preliminary Literature Review of Digital Transformation Case Studies
Abstract:
While struggling to succeed in today's complex market environment and provide better customer experience and services, enterprises embrace digital transformation as a means of reaching competitiveness and fostering value creation. A digital transformation process consists of information technology implementation projects, as well as organizational factors such as top management support, a digital transformation strategy, and organizational changes. However, to the best of our knowledge, there is little evidence about digital transformation endeavors in organizations and how organizations perceive them: is it only about the adoption of digital technologies, or is a true organizational shift needed? In order to address this issue, and as the first step in our research project, a literature review was conducted. The analysis included case study papers from the Scopus and Web of Science databases. The following attributes were considered for the classification and analysis of papers: time component; country of case origin; case industry; and digital transformation concept comprehension, i.e., focus. The research showed that organizations, public as well as private ones, are aware of the necessity of change and employ digital transformation projects, and that the changes concerning digital transformation affect both manufacturing and service-based industries. Furthermore, we discovered that organizations understand that, besides the implementation of technologies, organizational changes must also be adopted. However, with only 29 relevant papers identified, the research positions digital transformation as an unexplored and emerging phenomenon in information systems research. The scarcity of evidence-based papers calls for further examination of this topic on cases from practice.
2345
96874
The Role of Business Process Management in Driving Digital Transformation: Insurance Company Case Study
Abstract:
Digital transformation is one of the latest trends on the global market. In order to maintain competitive advantage and sustainability, an increasing number of organizations are conducting digital transformation processes, changing their business processes and creating new business models with the help of digital technologies. In that sense, one should also observe the role of business process management (BPM) and its maturity in driving digital transformation. Therefore, the goal of this paper is to investigate the role of BPM in the digital transformation process within one organization. Since experience from practice shows that organizations from the financial sector can be regarded as leaders in digital transformation, an insurance company was selected to participate in the study, owing to the high level of its BPM maturity and the fact that it had previously been through a digital transformation process. In order to fulfill the goals of the paper, several interviews, as well as questionnaires, were conducted within the selected company. The results are presented in the form of a case study. They indicate that the digital transformation process within the observed company has been successful, with a special focus on the development of digital strategy, BPM, and change management. The role of BPM in the digital transformation of the observed company is further discussed in the paper.
2344
96767
The Evolution of the Israel Defence Forces’ Information Operations: A Case Study of the Israel Defence Forces' Activities in the Information Domain 2006–2014
Abstract:
This article examines the evolution of the Israel Defence Forces’ information operation activities during an eight-year timespan from the 2006 war with Hezbollah to more recent operations such as Pillar of Defence and Protective Edge. To this end, the case study will show a change in the Israel Defence Forces’ activities in the information domain. In the 2006 war with Hezbollah in Lebanon, Israel inflicted enormous damage on the Lebanese infrastructure, leaving more than 1,200 people dead and 4,400 injured. Casualties among Hezbollah, Israel’s main adversary, were estimated to range from 250 to 700 fighters. Damage to the Lebanese infrastructure was estimated at over USD 2.5bn, with almost 2,000 houses and buildings damaged and destroyed. Even this amount of destruction did not force Hezbollah to yield, and while both sides were claiming victory in the war, Israel paid a heavier price in political backlash and loss of reputation, mainly due to failures in the media and the way in which the war was portrayed and perceived in Israel and abroad. Much of this can be credited to Hezbollah’s efficient use of the media, and Israel’s failure to do so. Israel managed the next conflict it was engaged in completely differently – it had learnt its lessons and built up new ways to counter its adversary’s propaganda and media operations. In Operation Cast Lead at the turn of 2009, Hamas, Israel’s adversary and Gaza’s dominant faction, was not able to utilize the media in the same way that Hezbollah had. By creating a virtual and physical barrier around the Gaza Strip, Israel almost totally denied its adversary access to the worldwide media, and by restricting the movement of journalists in the area, Israel could let its voice be heard above all. Operation Cast Lead began with a deception operation, which caught Hamas totally off guard. The 21-day campaign left the Gaza Strip devastated, but did not cause as much protest in Israel during the operation as the 2006 war did, mainly due to almost total Israeli dominance in the information dimension. The most important outcome from the Israeli perspective was the fact that Operation Cast Lead was assessed to be a success, and the operation enjoyed domestic support along with support from many western nations, which had condemned Israeli actions in the 2006 war. Later conflicts have shown the same tendency towards virtually total dominance in the information domain, which has had an impact on target audiences across the world. Thus, it is clear that well-planned and conducted information operations are able to shape public opinion and influence decision-makers, although Israel might have been outpaced by its rivals.
2343
96551
An Interactive Methodology to Demonstrate the Level of Effectiveness of the Synthesis of Local-Area Networks
Abstract:
This study focuses on disconfirming that wide-area networks can be made mobile, highly-available, and wireless. This methodological test shows that IPv7 and context-free grammar are mismatched. In the cases of robots, a similar tendency is also revealed. Further, we also prove that public-private key pairs could be built embedded, adaptive, and wireless. Finally, we disconfirm that although hash tables can be made distributed, interposable, and autonomous, XML and DNS can interfere to realize this purpose. Our experiments soon proved that exokernelizing our replicated Knesis keyboards was more significant than interrupting them. Our experiments exhibited degraded average sampling rate.
2342
96476
PathoPy2.0: Application of Fractal Geometry for Early Detection and Histopathological Analysis of Lung Cancer
Abstract:
Fractal dimension provides a way to characterize non-geometric shapes like those found in nature. The purpose of this research is to estimate the Minkowski fractal dimension of human lung images for early detection of lung cancer. Lung cancer is the leading cause of death among all types of cancer, and early histopathological analysis will help reduce deaths primarily due to late diagnosis. A Python application program, PathoPy2.0, was developed for analyzing medical images in pixelated format and estimating the Minkowski fractal dimension using a new box-counting algorithm that allows windowing of images for more accurate calculation in the suspected areas of cancerous growth. Benchmark geometric fractals were used to validate the accuracy of the program, and changes in the fractal dimension of lung images were used to indicate the presence of issues in the lung. The accuracy of the program for the benchmark examples was between 93% and 99% of the known values of the fractal dimensions. Fractal dimension values were then calculated for lung images from the National Cancer Institute, taken over time, to correctly detect the presence of cancerous growth. For example, an increase in the fractal dimension of a given lung from 1.19 to 1.27 due to cancerous growth represents a significant change in a quantity which lies between 1 and 2 for 2-D images. Based on the results obtained on many lung test cases, it was concluded that the fractal dimension of human lungs can be used to diagnose lung cancer early. The ideas behind PathoPy2.0 can also be applied to study patterns in the electrical activity of the human brain and DNA matching.
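A box-counting estimator of the Minkowski dimension, the quantity PathoPy2.0 computes, fits the slope of log N(s) versus log(1/s), where N(s) is the number of boxes of side s that intersect the structure. The sketch below demonstrates this on a simple square outline (dimension close to 1); the paper's windowing of suspect regions is not reproduced.

```python
import numpy as np

def box_count(img, size):
    """Count boxes of side `size` containing at least one foreground pixel."""
    h, w = img.shape
    h, w = h - h % size, w - w % size  # trim to a multiple of the box size
    blocks = img[:h, :w].reshape(h // size, size, w // size, size)
    return np.count_nonzero(blocks.any(axis=(1, 3)))

def fractal_dimension(img, sizes=(2, 4, 8, 16, 32)):
    counts = [box_count(img, s) for s in sizes]
    # slope of log N(s) against log(1/s) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

img = np.zeros((256, 256), bool)
img[64, 64:192] = img[191, 64:192] = True   # outline of a square
img[64:192, 64] = img[64:192, 191] = True
print(f"estimated dimension: {fractal_dimension(img):.2f}")  # ~1 for a curve
```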
2341
96328
Green Computing: Awareness and Practice in a University Information Technology Department
Abstract:
The fact that ICTs are pervasive in today's society paradoxically also calls for green computing. Green computing generally encompasses the study and practice of using Information and Communication Technology (ICT) resources effectively and efficiently without negatively affecting the environment. Since the emergence of this innovation, manufacturers and governmental bodies such as Energy Star and the government of the United States of America have invested many resources in ensuring green design, manufacture, and disposal of ICTs. However, the level of adherence to the green use of ICTs among users has been less accounted for, especially in developing, ICT-consuming nations. This paper therefore focuses on examining the awareness and practice of green computing among academics and students of the Information Technology Department of the Durban University of Technology, Durban, South Africa, in the context of the green use of ICTs. This was achieved through a survey using a questionnaire with four sections: (a) demography of respondents, (b) awareness of green computing, (c) practices of green computing, and (d) attitude towards greener computing. One hundred and fifty (150) questionnaires were distributed, and one hundred and twenty-five (125) were completed and collected for data analysis. Of the one hundred and twenty-five (125) respondents, twenty-five percent (25%) were academics, while the remaining seventy-five percent (75%) were students. The results showed a higher level of awareness of green computing among academics compared to the students; green computing practices were also shown to be highly adhered to among academics only. Interestingly, however, the students were found to be more enthusiastic towards greener computing in the future. The study therefore suggests that awareness of green computing should be further strengthened among students from the curriculum point of view, in order to improve the greener use of ICTs in universities, especially in developing countries.
2340
96299
Relevant LMA Features for Human Motion Recognition
Abstract:
Motion recognition from videos is a very complex task due to the high variability of motions. This paper describes the challenges of human motion recognition, especially the motion representation step with relevant features. Our descriptor vector is inspired by the Laban Movement Analysis method. We select discriminative features using the Random Forest algorithm in order to remove redundant features and make learning algorithms operate faster and more effectively. We validate our method on the MSRC-12 and UTKinect datasets.
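The Random Forest selection step could look like the following scikit-learn sketch, where impurity-based importances rank the descriptor components and the below-median half is dropped. The feature matrix is a random placeholder, not the MSRC-12 or UTKinect descriptors, and the median threshold is an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 40))     # placeholder LMA-inspired feature vectors
y = rng.integers(0, 12, size=400)  # placeholder gesture labels

selector = SelectFromModel(
    RandomForestClassifier(n_estimators=200, random_state=0),
    threshold="median",            # keep features above median importance
).fit(X, y)
X_reduced = selector.transform(X)
print("kept features:", np.flatnonzero(selector.get_support()))
print("reduced shape:", X_reduced.shape)  # half of the features removed
```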
2339
96251
Fingerprint Image Encryption Using a 2D Chaotic Map and Elliptic Curve Cryptography
Abstract:
Fingerprints are suitable as long-term markers of human identity since they provide detailed and unique individual features which are difficult to alter and durable over a lifetime. In this paper, we propose an algorithm to encrypt and decrypt fingerprint images using a specially designed Elliptic Curve Cryptography (ECC) procedure based on block ciphers. In addition, to increase the confusion effect of the fingerprint encryption, we also utilize a chaos-based method called the Arnold Cat Map (ACM) for 2D scrambling of pixel locations. Experimental results are presented with various types of efficiency and security analyses. We demonstrate that the proposed fingerprint encryption/decryption algorithm is advantageous in several different aspects, including efficiency, security, and flexibility. In particular, using this algorithm, we achieve a margin of about 0.1% in the test of Number of Pixel Changing Rate (NPCR) values compared to state-of-the-art performances.
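The ACM scrambling stage is simple to sketch with NumPy. In the pull formulation below, the output pixel at (x, y) takes the value from ((x + y) mod N, (x + 2y) mod N) on a square image; iterating increases confusion, and because the map is a bijection with a finite period, the scrambling is reversible. The ECC block cipher of the paper is not shown.

```python
import numpy as np

def arnold_cat(img, iterations=1):
    n = img.shape[0]  # assumes a square (n x n) image
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(iterations):
        # pull formulation of the cat map: output (x, y) takes the value
        # at ((x + y) mod n, (x + 2y) mod n)
        out = out[(x + y) % n, (x + 2 * y) % n]
    return out

img = np.arange(16).reshape(4, 4)
print(arnold_cat(img, 3))  # scrambled pixel positions
```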
2338
96240
Cities Simulation and Representation in Locative Games from the Perspective of Cultural Studies
Abstract:
This work aims to analyze the locative structure used by the locative games of the company Niantic. To fulfill this objective, a literature review on the representation and simulation of cities was developed, interviews with Ingress players were conducted, and Ingress itself was played. Relating these data, it was possible to deepen the relationship between the virtual and the real in creating the simulation of cities and their cultural objects in locative games. City representation associates geo-location, provided by the Global Positioning System (GPS), with augmented reality and the digital image, and provides a new paradigm for the city's interaction with its parts and with real- and virtual-world elements, homeomorphic to the real world. A bibliographic review of papers related to the study of representation and simulation and their application in locative games was carried out and is presented in this paper. The concepts of city representation and simulation in locative games, and how this setting enables flow and immersion in urban space, are analyzed. Some examples of games are discussed for the development of this new setting, which is a mix of the real and virtual worlds. Finally, a locative structure for electronic games is proposed, using the concepts of heterotrophic and isotropic representations conjoined with immediacy and hypermediacy.
2337
95860
Data Mining Model for Predicting the Status of HIV Patients during Drug Regimen Change
Abstract:
Human Immunodeficiency Virus and Acquired Immunodeficiency Syndrome (HIV/AIDS) is a major cause of death in most African countries, and Ethiopia is one of the most seriously affected countries in sub-Saharan Africa. Previously in Ethiopia, having HIV/AIDS was almost equivalent to a death sentence; with the introduction of Antiretroviral Therapy (ART), HIV/AIDS has become a chronic but manageable disease. The study focused on a data mining technique to predict the future living status of HIV/AIDS patients at the time of drug regimen change, when patients develop toxicity to the ART drug combination they are currently taking. The data are taken from the University of Gondar Hospital ART program database. A hybrid methodology is followed to explore the application of data mining to the ART program dataset. Data cleaning, handling of missing values, and data transformation were used for preprocessing the data. The WEKA 3.7.9 data mining tools, classification algorithms, and domain expertise were utilized as means to address the research problem. Using four different classification algorithms (the J48 classifier, PART rule induction, Naïve Bayes, and a neural network) and adjusting their parameters, thirty-two models were built on the pre-processed University of Gondar ART program dataset. The performances of the models were evaluated using the standard metrics of accuracy, precision, recall, and F-measure. The most effective model for predicting the status of HIV patients with drug regimen substitution is a pruned J48 decision tree, with a classification accuracy of 98.01%. This study extracts interesting attributes such as 'ever taking Cotrim', 'ever taking TbRx', CD4 count, age, weight, and gender in order to predict the status at drug regimen substitution. The outcome of this study can be used as an assistant tool for clinicians to help them make more appropriate drug regimen substitutions. Future research directions are suggested to come up with an applicable system in the area of the study.
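The study's classifier was WEKA's J48 (a C4.5 implementation); a close Python stand-in is scikit-learn's decision tree with cost-complexity pruning, as sketched below. The attribute names follow the abstract, but all values and the pruning strength are fabricated for illustration.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

df = pd.DataFrame({
    "ever_taking_cotrim": [1, 0, 1, 1, 0, 1, 0, 0],
    "ever_taking_tbrx":   [0, 0, 1, 0, 1, 1, 0, 1],
    "cd4_count":          [350, 120, 90, 410, 60, 200, 500, 80],
    "age":                [34, 45, 29, 51, 38, 42, 27, 60],
    "weight":             [62, 48, 51, 70, 45, 58, 66, 43],
    "gender":             [0, 1, 0, 1, 1, 0, 0, 1],
    "status":             [1, 0, 0, 1, 0, 1, 1, 0],  # living status after change
})
X, y = df.drop(columns="status"), df["status"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# ccp_alpha > 0 enables cost-complexity pruning, akin to J48's pruned tree
tree = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", tree.score(X_te, y_te))
```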