Excellence in Research and Innovation for Humanity

International Science Index

Commenced in January 1999 | Frequency: Monthly | Edition: International | Abstract Count: 46749

Computer and Information Engineering

A Study of General Attacks on Elliptic Curve Discrete Logarithm Problem over Prime Field and Binary Field
This paper begins by describing some basic properties of finite fields and of elliptic curve cryptography over prime and binary fields. We then discuss the discrete logarithm problem for elliptic curves and its properties. We study the leading general attacks on the elliptic curve discrete logarithm problem, such as the Baby-Step Giant-Step method, Pollard's rho method, and the Pohlig-Hellman method, and describe in detail experiments with these attacks over prime and binary fields. The paper finishes by describing the expected running times of the attacks and suggesting strong elliptic curves that are not susceptible to them.
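To make the flavour of these attacks concrete, here is a minimal Baby-Step Giant-Step solver. For brevity it works in the multiplicative group modulo a prime rather than on an elliptic curve; the curve version replaces modular multiplication with point addition, but the time-memory trade-off (about sqrt(n) group operations) is the same. The specific values g = 2, p = 101 are illustrative only.

```python
import math

def bsgs(g, h, p):
    """Solve g^x = h (mod p) by Baby-Step Giant-Step; returns x or None."""
    m = math.isqrt(p) + 1
    # Baby steps: table of g^j for j in [0, m)
    table = {pow(g, j, p): j for j in range(m)}
    # Giant-step factor g^(-m) mod p (Python 3.8+ modular inverse)
    factor = pow(g, -m, p)
    gamma = h
    for i in range(m):
        if gamma in table:          # h * g^(-im) = g^j  =>  x = im + j
            return i * m + table[gamma]
        gamma = (gamma * factor) % p
    return None

# 2 is a generator modulo 101, so every h in [1, 100] has a discrete log
x = bsgs(2, 37, 101)
```

The table costs O(sqrt(p)) memory, which is exactly why Baby-Step Giant-Step is usually outperformed in practice by Pollard's rho, which achieves the same expected running time with negligible memory.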
Input Data Balancing in a Neural Network PM-10 Forecasting System
Recently, PM-10 has become a social and global issue. It is one of the major air pollutants affecting human health, and it therefore needs to be forecast rapidly and precisely. However, PM-10 comes from various emission sources, and its concentration level depends largely on the meteorological and geographical factors of the local and global regions, which makes forecasting PM-10 concentration very difficult. A neural network model can be used in this case, but high-concentration PM-10 events are rare, which makes the learning of the neural network model difficult. In this paper, we suggest a simple input balancing method for use when the data distribution is uneven. It is based on the probability of appearance of the data. Experimental results show that input balancing makes the neural network's learning easier and improves the forecasting rates.
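One simple way to balance inputs by their probability of appearance, along the lines the abstract describes, is to resample the training set with weights inversely proportional to label frequency. This sketch is a generic illustration, not the paper's exact scheme; the labels "low"/"high" stand in for binned PM-10 concentration levels.

```python
import random
from collections import Counter

def balance_by_frequency(samples, labels, n_out, seed=0):
    """Resample so rare labels (e.g. high PM-10 bins) appear about as
    often as common ones: each example is drawn with weight
    1 / frequency(label).  Illustrative sketch only."""
    rng = random.Random(seed)
    freq = Counter(labels)
    weights = [1.0 / freq[y] for y in labels]
    idx = rng.choices(range(len(samples)), weights=weights, k=n_out)
    return [samples[i] for i in idx], [labels[i] for i in idx]

# 90 "low" vs 10 "high" examples become roughly balanced after resampling
xs = list(range(100))
ys = ["low"] * 90 + ["high"] * 10
bx, by = balance_by_frequency(xs, ys, n_out=1000)
```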
High Performance Electrocardiogram Steganography Based on Fast Discrete Cosine Transform
Based on the fast discrete cosine transform (FDCT), the authors present a high-capacity, high-perceived-quality steganography method for electrocardiogram (ECG) signals. By applying a simple adjustment policy to the 1-dimensional (1-D) DCT coefficients, a large volume of secret message can be effectively embedded in an ECG host signal and successfully extracted at the intended receiver. Simulations confirmed that the resulting perceived quality is good, while the hiding capacity of the proposed method significantly outperforms that of existing techniques. In addition, the proposed method has a certain degree of robustness. Since its computational complexity is low, our method is feasible for real-time applications.
Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation
The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition, and security are among the possible fields of application. In all these fields the amount of collected data is increasing quickly, and with this increase the computation speed becomes the critical factor. Data reduction is one solution to this problem. Removing redundancy in rough sets can be achieved with the reduct. Many algorithms for generating reducts have been developed, but most of them are software implementations and therefore have many limitations. A microprocessor uses a fixed word length and consumes a lot of time for both fetching and processing of instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process the data faster than software. A reduct is a subset of the condition attributes that preserves the discernibility of the objects. For a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of the full set of condition attributes. Moreover, every reduct contains all the attributes of the core. In this paper, a hardware implementation of a two-stage greedy algorithm for finding one reduct is presented. The decision table is used as input. The output of the algorithm is a superreduct, that is, a reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., the attributes that are most frequent in the decision table.
The algorithm described above has two disadvantages: i) it generates a superreduct instead of a reduct, and ii) the additional first stage may be unnecessary if the core is empty. For systems focused on fast computation of the reduct, however, the first disadvantage is not the key problem. The core calculation can be performed by a combinational logic block and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented on a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. Counting the occurrences of each attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit to control the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC, and the execution times of the reduct calculation in hardware and in software were compared. The results show an increase in the speed of data processing.
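The first stage, computing the core from the discernibility matrix, can be sketched in software as follows: an attribute belongs to the core exactly when it is the only attribute discerning some pair of objects with different decisions (a singleton matrix entry, which is what the hardware 'singleton detector' checks). The tiny decision table here is hypothetical.

```python
def core_attributes(table, decision):
    """Core of a decision table: condition attributes that occur as the
    only discerning attribute for some pair of objects with different
    decisions (singleton entries of the discernibility matrix)."""
    n_attrs = len(table[0])
    core = set()
    for i in range(len(table)):
        for j in range(i + 1, len(table)):
            if decision[i] == decision[j]:
                continue                      # same decision: no entry needed
            differing = [a for a in range(n_attrs)
                         if table[i][a] != table[j][a]]
            if len(differing) == 1:           # singleton entry -> indispensable
                core.add(differing[0])
    return core

# Hypothetical table: 3 condition attributes, binary decision
rows = [(0, 1, 0), (1, 1, 0), (0, 0, 1), (1, 0, 1)]
dec = [0, 1, 0, 1]
core = core_attributes(rows, dec)
```

In the hardware version each matrix entry is a word of attribute-difference bits, so the singleton test reduces to checking that the word's population count equals one, which is cheap combinational logic.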
Mining Riding Patterns in Bike-Sharing System Connecting with Public Transportation
With fast-growing road traffic and increasingly severe congestion, more and more citizens choose public transportation for daily travel. Meanwhile, shared bikes provide a convenient option for the first and last mile to public transit. As of 2016, over one thousand cities around the world had deployed bike-sharing systems. The combination of these two transportation modes has stimulated the development of each and made a significant contribution to the reduction of the carbon footprint. A lot of work has been done on mining riding behaviors in various bike-sharing systems. Most of it, however, treated the bike-sharing system as an isolated system, so the results provide little reference for public transit construction and optimization. In this work, we treat bike-sharing and public transit as a whole and investigate customers' bike-and-ride behaviors. Specifically, we develop a spatio-temporal traffic delivery model to study the riding patterns between the two transportation systems and explore the traffic characteristics (e.g., distributions of customer arrivals/departures and traffic peak hours) along the time and space dimensions. During model construction and evaluation, we make use of large open datasets from real-world bike-sharing systems (CitiBike in New York, GoBike in San Francisco, and BIXI in Montreal) along with the corresponding public transit information. The developed two-dimensional traffic model, as well as the mined bike-and-ride behaviors, can greatly help the deployment of next-generation intelligent transportation systems.
A Word-to-Vector Formulation for Word Representation
This work presents a novel word-to-vector representation based on embedding the words on a sphere, whereby the dot product of the corresponding vectors represents the similarity between any two words. Embedding the vectors on a sphere enables us to take into consideration not only synonymy but also antonymy between words, because the representation is well suited to the polar nature of words: a word and its antonym can be represented as a vector and its negative. Moreover, we have managed to extract an adequate vocabulary. The obtained results show that the proposed approach can capture the essence of the language and can be generalized to estimate the similarity of any new pair of words correctly.
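A minimal sketch of the idea, assuming hypothetical 3-dimensional embeddings: words live on the unit sphere, similarity is the dot product, and an antonym is modelled as the negated vector, giving similarity -1.

```python
import math

def normalize(v):
    """Project a vector onto the unit sphere."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def similarity(u, v):
    """Dot product of unit vectors: +1 for synonyms, -1 for antonyms."""
    return sum(a * b for a, b in zip(u, v))

# Hypothetical 3-d embeddings; real vocabularies use far higher dimensions
hot = normalize([2.0, 1.0, 0.5])
cold = [-x for x in hot]               # antonym = negative vector
warm = normalize([1.9, 1.1, 0.4])      # a near-synonym of "hot"
```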
Credit Risk Evaluation Using Genetic Programming
Credit risk is considered one of the most important issues for financial institutions, as it provokes great losses for banks. To this end, numerous methods for credit risk evaluation have been proposed. Many evaluation methods are black-box models that cannot adequately reveal the information hidden in the data, while several works have focused on building transparent rule-based models. For credit risk assessment, the generated rules must be not only highly accurate but also highly interpretable. In this paper, we aim to build a credit risk evaluation model that is both accurate and transparent and that proposes a set of classification rules. In fact, we treat credit risk evaluation as an optimization problem solved with a genetic programming (GP) algorithm, where the goal is to maximize the accuracy of the generated rules. We evaluate our proposed approach on the German and Australian credit datasets. We compared our findings with some existing works; the results show that the proposed GP outperforms the other models.
Active Learning via Reinforcement Learning in Regression Tasks and Classification Tasks
Active learning is an iterative semi-supervised method that queries an oracle for labels in order to select new data points with desired properties, e.g., more informative or representative ones. To do so, it relies on metrics such as entropy and mutual information that are not embedded within the learning model. Here, we present a hybrid active learning framework in which a reinforcement learning method is employed to train an active learning model without the aid of extra techniques for selecting informative datapoints. During training, the model learns to decide whether to present its prediction or to request the true label when an unlabeled datapoint arrives; a reward or penalty is assigned depending on the action it takes. The model makes decisions based on the state it observes, which is composed of the observation history so far and the input of the current unlabeled datapoint. The observation history is encoded by the hidden state and cell state of a long short-term memory (LSTM) model. Our experimental results show that the agent's prediction accuracy is higher (and its error lower) after excluding some datapoints when testing on a hold-out set, suggesting that the agent is able to pick out the more uncertain datapoints among the unlabeled ones. This work presents a general method for building an active learning model for both regression and classification tasks.
Particle Swarm Optimization and Quantum Particle Swarm Optimization to Multidimensional Function Approximation
This work compares the results of multidimensional function approximation using two algorithms: the classical Particle Swarm Optimization (PSO) and the Quantum Particle Swarm Optimization (QPSO). Both algorithms were tested on three functions with different characteristics, namely the Rosenbrock, the Rastrigin, and the sphere function, while increasing their number of dimensions. This study shows that the higher the function space, i.e., the larger the function dimension, the more evident the advantages of the QPSO method over the PSO method in terms of performance and the number of iterations necessary to reach the stop criterion.
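For reference, a minimal classical PSO minimising the sphere function, one of the three test functions mentioned; the inertia and acceleration weights are common textbook defaults, not the paper's settings.

```python
import random

def pso_sphere(dim=2, n_particles=20, iters=200, seed=1):
    """Classical PSO minimising the sphere function sum(x_i^2)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.72, 1.49, 1.49           # inertia, cognitive, social weights
    f = lambda x: sum(v * v for v in x)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # personal bests
    gbest = min(pbest, key=f)[:]           # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest, f(gbest)

best, value = pso_sphere()
```

QPSO replaces the velocity update with a position sampled around an attractor using a quantum-behaved (delta-potential-well) distribution, which removes the velocity term and its tuning altogether.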
Optimal Pilots Offset Relation in Shifted Constellation-Based Method for Detection of Pilot Contamination Attacks
One possible approach for maintaining the security of communication systems relies on physical layer security mechanisms. However, in wireless time division duplex systems, where the uplink and downlink channels are reciprocal, the channel estimation procedure is exposed to attacks known as pilot contamination, which result in an enhanced data signal being sent towards the malicious user. A recently proposed method for detecting this type of intervention is the Shifted 2-N-PSK method, which involves two random legitimate pilots in the training phase, each belonging to a constellation shifted from the original N-PSK symbols by a certain number of degrees. In this paper, the legitimate pilots' offset values and their influence on the detection capabilities of the Shifted 2-N-PSK method are investigated. As the implementation of the technique depends more on the relation between the shift angles than on their specific values, the optimal interconnection between the two legitimate constellations is under research. The results show that, with a non-legitimate user who manages to guess the corresponding non-shifted legitimate pilots or their reciprocals, a small difference in offset values yields performance similar to that in the absence of an eavesdropper. Although the Shifted 2-N-PSK method has been broadly studied in different signal-to-noise ratio scenarios, in multi-cell systems the interference from signals in other cells should also be taken into account. Therefore, the impact of inter-cell interference on the performance of the method is observed through a large number of simulations. According to the results, the detection probability of the Shifted 2-N-PSK method decreases with decreasing signal-to-interference-plus-noise ratio.
Performance Assessment of Carrier Aggregation-Based Indoor Mobile Networks
The intelligent management and optimisation of radio resource technologies will lead to a considerable improvement of the overall performance in Next-Generation Networks (NGNs). The Carrier Aggregation (CA) technology, also known as Spectrum Aggregation, provides more efficient use of the available spectrum by combining multiple Component Carriers (CCs) in a virtual wideband channel. LTE-A CA technology can combine multiple adjacent or separate CCs in the same band or in different bands. Thus, increased data rates and dynamic load balancing can be achieved, resulting in more reliable and efficient operation of mobile networks and the enabling of new high bandwidth mobile services. In this paper, several distinct CA deployment strategies for the exploitation of spectrum bands are compared in various indoor and indoor-outdoor scenarios, simulated via the recently-developed Realistic Indoor Environment Generator (RIEG). We analyse the performance of the User Equipment (UE) by integrating the average throughput, the level of fairness of radio resource allocation, and other parameters into one summative assessment denoted as a Comparative Factor (CF). In addition, comparison of non-CA and CA indoor mobile networks is carried out under different load conditions: varying numbers and positions of UEs. The combination of CA and Multi-User Multiple-Input and Multiple-Output (MU-MIMO) in the same scenario is also considered, aiming to investigate and evaluate indoor communication network performance. The experimental results show that the CA technology can improve network performance, especially in the case of indoor scenarios. Additionally, the increase of carrier frequency does not necessarily lead to improved CF values, due to high wall-penetration losses. The performance of users under bad-channel conditions, who are often located in the periphery of the cells, can be improved by intelligent CA deployment. 
Furthermore, a combination of such a deployment and effective radio resource allocation management with respect to user-fairness plays a crucial role in improving the performance of LTE-A networks.
Object Recognition System Operating from Different Type Vehicles Using Raspberry and OpenCV
The present paper focuses on object recognition in a special mode. Nowadays it is possible to mount a camera on different vehicles, such as a quadcopter, train, or aeroplane, and these vehicles usually also carry GPS. During movement, the camera takes pictures without storing them in a database. When the camera detects a special object (for example, a human or an animal), the system saves the picture and sends it to the workstation in real time. This functionality is very useful in emergency situations where it is necessary to find a specific object. In another application, the camera can be mounted at a crossroad with few pedestrians: if one or more persons approach the road, the traffic lights turn green so that they can cross. This paper presents a system that solves the aforementioned problems. The architecture of the object recognition system is presented; it includes the camera, the Raspberry platform, the GPS system, the software, and the database. The camera takes the pictures, and object recognition is performed in real time using the OpenCV library on the Raspberry board. The system then attaches the GPS coordinates and sends all of this information to a remote station, so the location of the specific object is known. The paper focuses on the realization and application of this system for image recognition.
Evaluation of Model-Based Code Generation for Embedded Systems – Mature Approach for Development in Evolution
The model-based development approach is gaining support and acceptance. There are a number of reasons for this: high-level abstraction simplifies complex systems and gives domain experts easily applicable tools, and the possibility of simulation helps in rapid prototyping and in verifying and validating the product even before it exists physically. Nowadays the model-based approach is beneficial for modeling complex embedded systems as well as for generating code for many different hardware platforms. Moreover, it can be applied in safety-relevant industries like automotive, where it brings extra automation to the expensive device certification process, especially in software qualification. Using it, some companies report cost savings and quality improvements, but others claim no major changes or even cost increases. This publication assesses the level of maturity and autonomy of the model-based approach for code generation. It is based on a real-life car seat heater module development using The MathWorks, Inc. tools. The model, created with Simulink, Stateflow, and MATLAB, is used for automatic generation of C code with Embedded Coder. To test the maturity of the process, the Code Generation Advisor is used for automatic configuration. All additional configuration parameters are set to auto, when applicable, leaving the generation process to function autonomously. Later, a number of different manual configurations are tried in an attempt to improve the quality of the product, i.e., the generated code. As a result of the investigation, the publication compares the code quality of the individual generated versions in order to assess the optimality of the automatic configuration. A manual implementation of the same project is used as a benchmark for comparison. Measurements show that, in general, the code generated by the automatic approach is no worse than the manual one. A deep dissection of the modules and parameters points out deficiencies, some of which are identified as topics for our future work.
High Performance Field Programmable Gate Array-Based Stochastic Low-Density Parity-Check Decoder Design for IEEE 802.3an Standard
This paper introduces a high-performance architecture for a fully parallel stochastic Low-Density Parity-Check (LDPC) decoder implemented on a field programmable gate array (FPGA). The new approach is designed to decrease the decoding latency and to reduce the FPGA logic utilisation. To accomplish the targeted logic utilisation reduction, the routing of the proposed sub-variable node (VN) internal memory is designed to utilise one slice of distributed RAM. Furthermore, VN initialization using the channel input probability is performed to enhance the decoder convergence, without extra resources and without integrating the output saturated counters. The Xilinx FPGA implementation of the IEEE 802.3an standard LDPC code shows that the proposed decoding approach attains high performance along with a reduction in FPGA logic utilisation.
Comparing Clusterings: A Store Segmentation Application
This study focuses on a family of clustering comparison measures, the pair-counting techniques, such as the Rand Index, the Adjusted Rand Index, and the Fowlkes-Mallows Index. The aim is to discuss their properties and to show a marketing application of the techniques. As an application, a retail chain company's supermarket stores are segmented by two approaches: the first segments stores based on socioeconomic factors, and the second is based on the purchasing behaviors of customers. Since consumer purchases are strongly influenced by socioeconomic factors, this study expects to find agreement between the two clusterings. While the Rand Index indicates agreement, the Fowlkes-Mallows Index finds only weak agreement, and the Adjusted Rand Index finds no agreement between the two clusterings.
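The three pair-counting indices can be computed directly from the counts of object pairs that the two clusterings place together or apart; the labelings below are hypothetical.

```python
from math import comb
from collections import Counter
from itertools import combinations

def pair_counts(a, b):
    """Count pairs on which two labelings agree or disagree."""
    tp = fp = fn = tn = 0
    for i, j in combinations(range(len(a)), 2):
        same_a, same_b = a[i] == a[j], b[i] == b[j]
        if same_a and same_b:
            tp += 1          # together in both
        elif same_a:
            fp += 1          # together in a only
        elif same_b:
            fn += 1          # together in b only
        else:
            tn += 1          # apart in both
    return tp, fp, fn, tn

def rand_index(a, b):
    tp, fp, fn, tn = pair_counts(a, b)
    return (tp + tn) / (tp + fp + fn + tn)

def fowlkes_mallows(a, b):
    tp, fp, fn, _ = pair_counts(a, b)
    return tp / ((tp + fp) * (tp + fn)) ** 0.5

def adjusted_rand(a, b):
    """Rand Index corrected for chance agreement."""
    n = len(a)
    sum_ij = sum(comb(v, 2) for v in Counter(zip(a, b)).values())
    sum_a = sum(comb(v, 2) for v in Counter(a).values())
    sum_b = sum(comb(v, 2) for v in Counter(b).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_idx = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_idx - expected)

# Two hypothetical 6-store segmentations
x = [0, 0, 1, 1, 2, 2]
y = [0, 0, 1, 1, 1, 2]
ri = rand_index(x, y)
```

Note the typical behaviour this illustrates: the raw Rand Index stays high even for partial agreement, while the Adjusted Rand Index, being chance-corrected, is much more conservative.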
A Proposed Query Optimization Strategy for Autonomous Distributed Database Systems
A distributed database is a collection of logically related databases that cooperate in a transparent manner, with query processing using a communication network to transmit data between sites. Query processing is one of the challenges in the database world. The development of sophisticated query optimization technology is the reason for the commercial success of database systems, and its complexity and cost increase with the number of relations in the query. Mariposa, query trading, and query trading with processing task-trading are strategies developed for autonomous distributed database systems, but they incur high optimization cost because all nodes are involved in generating an optimal plan. In this paper, we propose a modification of the autonomous strategy K-QTPT that gradually gives higher priority to the seller nodes with the lowest cost, in order to reduce the optimization time. We implement our proposed strategy and present the results and an analysis based on them.
Modern Spectrum Sensing Techniques for Cognitive Radio Networks: Practical Implementation and Performance Evaluation
Spectrum underutilization has made cognitive radio a promising technology for both current and future telecommunications, owing to its ability to exploit the unused spectrum in bands dedicated to other wireless networks and thus increase their occupancy. The essential function that allows a cognitive radio device to perceive the occupancy of the spectrum is spectrum sensing. In this paper, the performance of modern adaptations of the four most widely used spectrum sensing techniques, namely energy detection (ED), cyclostationary feature detection (CSFD), matched filter (MF), and eigenvalue-based detection (EBD), is compared. The implementation has been accomplished with the PlutoSDR hardware platform and the GNU Radio software package under very low signal-to-noise ratio conditions. The optimal detection performance of the examined methods in a realistic, implementation-oriented model is found for the common relevant parameters (number of observed samples, sensing time, and required probability of false alarm).
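Of the four techniques, energy detection is the simplest to sketch: average the squared samples and compare against a threshold calibrated between the noise floor and the expected signal-plus-noise power. The tone amplitude, noise level, and threshold here are illustrative, not the paper's PlutoSDR settings.

```python
import math
import random

def energy_detect(samples, threshold):
    """Energy detector: declare the band occupied when the average
    sample energy exceeds a threshold set from the noise floor."""
    energy = sum(s * s for s in samples) / len(samples)
    return energy > threshold

rng = random.Random(42)
# Idle band: unit-variance Gaussian noise only (average energy ~ 1)
noise = [rng.gauss(0.0, 1.0) for _ in range(4000)]
# Occupied band: the same noise plus a tone (adds ~2 to the average energy)
occupied = [n + 2.0 * math.sin(0.1 * i) for i, n in enumerate(noise)]

threshold = 1.5  # between noise power (~1) and signal-plus-noise power (~3)
idle_decision = energy_detect(noise, threshold)
busy_decision = energy_detect(occupied, threshold)
```

The threshold choice is exactly where the "required probability of false alarm" parameter enters: raising it lowers false alarms on idle bands at the cost of missed detections at low signal-to-noise ratio.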
Object Tracking in Motion Blurred Images with Adaptive Mean Shift and Wavelet Feature
A new method for object tracking in motion-blurred images is proposed in this article, and the paper shows that object tracking can be improved with this approach. We use the mean shift algorithm as the main tracker. The problem is that mean shift cannot track the selected object accurately in blurred scenes, so to obtain better tracking results and increase the tracking accuracy, the wavelet transform is used. We use a feature called blur extent, which helps us obtain better tracking results; it is calculated with the Haar wavelet. The feature can be viewed from two angles: determining whether an image is blurred at all, and to what extent it is blurred. This feature influences the covariance matrix of the mean shift algorithm and leads to better tracking performance. The method concentrates mostly on the motion blur parameter. The results demonstrate the ability of our method to track more accurately.
Impact of Population Size on Symmetric Travelling Salesman Problem Efficiency
A genetic algorithm (GA) is a powerful evolutionary search technique that has been used successfully to solve and optimize problems in different research areas, including the Travelling Salesman Problem (TSP). The feasibility of a GA in finding a TSP solution depends in general on the GA operators: the encoding method, the population size, and the termination criteria. In particular, crossover and its probability play a significant role in finding possible solutions for the Symmetric TSP (STSP). In addition, the crossover should be designed and enhanced with the aim of reaching an optimal, or at least near-optimal, solution. In this paper, we highlight the use of a modified crossover method, called modified sequential constructive crossover, and its impact on reaching an optimal solution. To justify the relevance of parameter values in solving the TSP, a comparative analysis is conducted on different crossover methods and parameter values.
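As a generic illustration of permutation crossover for the STSP (not the paper's modified sequential constructive crossover), the classic ordered crossover copies a slice of one parent tour and fills the remaining cities in the order of the other parent:

```python
import random

def ordered_crossover(p1, p2, rng):
    """Ordered crossover (OX) for permutation-encoded tours: keep a
    random slice of parent 1 and fill the remaining positions with the
    missing cities in parent-2 order.  The child is always a valid tour."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]                       # inherited slice
    fill = [c for c in p2 if c not in child[a:b]]
    k = 0
    for i in list(range(b, n)) + list(range(a)):
        child[i] = fill[k]
        k += 1
    return child

rng = random.Random(3)
child = ordered_crossover([0, 1, 2, 3, 4, 5, 6, 7],
                          [3, 7, 0, 6, 2, 5, 1, 4], rng)
```

Constructive crossovers differ in that they build the child city by city, greedily choosing the cheaper edge offered by either parent, which is why they tend to produce fitter offspring on the TSP.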
Integrated ACOR/IACOMV-R-Support Vector Machine Algorithm
A new direction for ACO is to optimize continuous and mixed (discrete and continuous) variables when solving problems with various types of data. The Support Vector Machine (SVM), which originates from the statistical learning approach, is a present-day classification technique. Selecting the feature subset and tuning the parameters are the two main problems of SVM. Most approaches to tuning SVM parameters discretize the continuous values of the parameters. This has a negative effect on performance, since some information is lost, which affects the classification accuracy. This paper presents two algorithms that can tune SVM parameters and perform feature subset selection simultaneously. The first algorithm, ACOR-SVM, tunes the SVM parameters, while the second, IACOMV-R-SVM, simultaneously tunes the SVM parameters and selects the feature subset. Three benchmark datasets from UCI were used in the experiments to validate the performance of the proposed algorithms. The results show that the proposed algorithms perform well compared with other approaches.
Energy Efficient Firefly Algorithm in Wireless Sensor Network
A wireless sensor network (WSN) comprises a huge number of small and cheap devices known as sensor nodes. Usually, these sensor nodes are massively and randomly deployed, in an ad-hoc fashion, over hostile and harsh environments to sense, collect, and transmit data to the required locations (i.e., the base station). One of the main advantages of a WSN is its ability to work in unattended and scattered environments regardless of the presence of humans, such as remote active volcano environments or earthquake zones. In an expanding WSN, network lifetime is a major concern, and clustering techniques are important for maximizing it. Nature-inspired algorithms are developed and optimized to find solutions for various optimization problems. We propose an energy-efficient firefly algorithm to extend the network lifetime as long as possible.
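The core firefly mechanics, dimmer fireflies moving toward brighter ones with distance-decaying attractiveness, can be sketched on a generic objective; the WSN clustering objective (e.g., residual energy and distance to the base station) would replace the toy sphere function used here. The coefficients beta0, gamma, and alpha are standard defaults, not values from this work.

```python
import math
import random

def firefly_minimize(f, dim=2, n=15, iters=100, seed=7):
    """Core firefly algorithm: each firefly moves toward every brighter
    (lower-cost) one, attracted with strength beta0 * exp(-gamma * r^2),
    plus a slowly cooled random walk.  Generic sketch."""
    rng = random.Random(seed)
    beta0, gamma, alpha = 1.0, 1.0, 0.2
    pos = [[rng.uniform(-3, 3) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        intensity = [f(p) for p in pos]
        for i in range(n):
            for j in range(n):
                if intensity[j] < intensity[i]:        # j is brighter
                    r2 = sum((a - b) ** 2 for a, b in zip(pos[i], pos[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    for d in range(dim):
                        pos[i][d] += (beta * (pos[j][d] - pos[i][d])
                                      + alpha * (rng.random() - 0.5))
                    intensity[i] = f(pos[i])
        alpha *= 0.97                                  # cool the random walk
    return min(pos, key=f)

best = firefly_minimize(lambda x: sum(v * v for v in x))
```

Because the brightest firefly never moves, the best cost found by the swarm never gets worse between iterations, which makes the algorithm a convenient drop-in for cluster-head selection objectives.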
Smart Unmanned Parking System Based on Radio Frequency Identification Technology
In order to tackle the ever-growing problem of the lack of parking space, this paper presents the design and implementation of a smart unmanned parking system based on RFID (radio frequency identification) and wireless communication technology. The system uses RFID to achieve the identification function (transmitted by a 2.4 GHz wireless module) and is equipped with an STM32L053 microcontroller as the main control chip of the smart vehicle. This chip can accomplish automatic parking (in/out), charging, and other functions. On this basis, it can also help users easily query the information stored in the database through the Internet. Experimental tests have shown that the system features low power consumption and stable operation, among others. It can effectively improve the level of automation of the parking lot management system and has enormous application prospects.
Quantum Inspired Security on a Mobile Phone
The widespread use of mobile electronic devices increases the complexity of mobile security. This thesis aims to provide a secure communication environment for smartphone users. Research has shown that the one-time pad is one of the most secure encryption methods and that the key distribution problem can be solved by using QKD (quantum key distribution). The objective of this project is to design an Android app to exchange random keys between mobile phones. Inspired by QKD, the developed app uses the quick response (QR) code as a carrier to dispatch large amounts of one-time keys. Performance evaluation shows that the app allows a mobile phone to capture and decode 1800 bytes of random data in 600 ms. The continuous scanning mode of the app is designed to improve the overall transmission performance and user experience, and the maximum transmission rate of this mode is around 2200 bytes/s. The omnidirectional readability and error correction capability of the QR code suit it to real-life application, and its adequate storage capacity and quick response optimize the overall transmission efficiency. The security of the app is guaranteed since the QR code is exchanged face-to-face, eliminating the risk of eavesdropping, and the ID of the QR code is the only message transmitted during the whole communication. The experimental results show that the project achieves superior transmission performance, and the correlation between the transmission rate of the system and several parameters, such as the QR code size, has been analyzed. In addition, some existing technologies and the main findings in the context of the project are summarized and critically compared in detail.
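The one-time pad itself reduces to a byte-wise XOR; in the described system the key bytes would be delivered inside a face-to-face-scanned QR code, which is simulated here by generating them locally. A sketch, not the app's actual code:

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each byte with a same-length random key.
    Because XOR is its own inverse, the same call both encrypts and
    decrypts.  The key must be truly random, as long as the message,
    and never reused."""
    assert len(key) >= len(data), "key must cover the whole message"
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"meet at noon"
key = secrets.token_bytes(len(msg))   # would arrive via a scanned QR code
ct = otp_xor(msg, key)                # encrypt
pt = otp_xor(ct, key)                 # decrypt with the same key
```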
Cognitive Determinants of IT Professional Belief and Attitude towards Green IT
This study builds on the behavioral perspective of Green IT to investigate the cognitive determinants of IT professionals' Green IT beliefs and the implications of such beliefs for their attitude towards Green IT practices. We posited that an individual's belief about Green IT can be informed by one's knowledge and awareness of the adverse effects of non-sustainable practices on the environment. We also hypothesized beliefs to be the determinant of one's attitude towards Green IT practices. The outcome of the empirical investigation on a sample of data collected from IT professionals in Malaysia provided support for our hypotheses. Furthermore, the implications of the findings for the existing literature and for the successful management of Green IT practices are discussed.
System and Method for Providing Web-Based Remote Application Service
With the development of virtualization technologies, a new type of service, the cloud computing service, has been produced. Cloud users often face the problem of how to use a virtualized platform easily over the web without requiring plug-ins or the installation of special software. The objective of this paper is to develop a system and a method that enable process interfacing within an automation scenario for accessing a remote application via the web browser. To meet this challenge, we have devised a web-based interface that allows the GUI application to be shifted from the traditional local environment to the cloud platform, where it is stored on a remote virtual machine rather than locally. We designed the sketch of the web interface following the cloud virtualization concept, seeking to enable communication and collaboration among users. We describe the design requirements of remote application technology and present the implementation details of the web application and its associated components. We conclude that the proposed effort has the potential to provide an elastic and resilient environment for many application services. Users no longer need to bear the burden of platform maintenance, which drastically reduces the overall cost of hardware and software licenses. Moreover, this flexible remote application service represents the next significant step towards the mobile workplace, letting users access the remote application from virtually anywhere.
Meta-Learning for Hierarchical Classification and Applications in Bioinformatics
Hierarchical classification is a special type of classification task where the class labels are organised into a hierarchy, with more generic class labels being ancestors of more specific ones. Meta-learning for classification-algorithm recommendation consists of recommending to the user a classification algorithm, from a pool of candidate algorithms, for a new dataset, based on the past performance of the candidate algorithms on other datasets. Meta-learning is normally used in conventional, non-hierarchical classification. By contrast, this paper proposes a new meta-learning approach for the more challenging task of hierarchical classification and evaluates it on a large number of bioinformatics datasets. Hierarchical classification is especially relevant for bioinformatics problems, as protein and gene functions tend to be organised into a hierarchy of class labels. This work proposes the first meta-learning approach for recommending the best hierarchical classification algorithm for a new hierarchical classification dataset. This work's contributions are: 1) proposing a new algorithm for splitting hierarchical datasets into new datasets to increase the number of meta-instances, 2) proposing new meta-features for hierarchical classification, and 3) interpreting decision-tree meta-models for hierarchical classification algorithm recommendation.
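The recommendation step described above can be sketched as a nearest-neighbour lookup over a meta-feature space. The meta-features, algorithm names, and values below are purely illustrative assumptions, not the paper's actual meta-dataset or meta-features:

```python
import math

# Hypothetical meta-dataset: each meta-instance describes one hierarchical
# dataset by a few illustrative meta-features (class-hierarchy depth, number
# of leaf classes, instances-per-leaf ratio) plus the best-performing
# hierarchical classifier observed on it. All names and values are invented.
meta_instances = [
    ({"depth": 3, "leaves": 40, "ratio": 12.0}, "top-down-SVM"),
    ({"depth": 5, "leaves": 200, "ratio": 4.5}, "global-decision-tree"),
    ({"depth": 2, "leaves": 15, "ratio": 30.0}, "top-down-SVM"),
    ({"depth": 6, "leaves": 350, "ratio": 3.0}, "global-decision-tree"),
]

def recommend(new_meta, k=1):
    """k-NN style recommendation: find the k most similar past datasets
    in the meta-feature space and return the majority-vote algorithm."""
    def dist(a, b):
        return math.sqrt(sum((a[f] - b[f]) ** 2 for f in a))
    ranked = sorted(meta_instances, key=lambda mi: dist(new_meta, mi[0]))
    votes = {}
    for _, algo in ranked[:k]:
        votes[algo] = votes.get(algo, 0) + 1
    return max(votes, key=votes.get)

# A new hierarchical dataset closest to the second meta-instance:
print(recommend({"depth": 5, "leaves": 180, "ratio": 5.0}))
```

In practice the meta-features would be computed automatically from each dataset's class hierarchy, and the labels would come from benchmark runs of the candidate algorithms.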
Performance Evaluation of Parallel Surface Modeling and Generation on Actual and Virtual Multicore Systems
Even though past, current, and future trends suggest that multicore and cloud computing systems are increasingly prevalent, this class of parallel systems remains underutilized in general, and barely used for research on parallel Delaunay triangulation for surface modeling and generation in particular. We evaluated the performance of actual (physical) and virtual (cloud) multicore machines at executing various algorithms that implement different parallelization strategies for the incremental-insertion Delaunay triangulation technique. t-tests were run on the collected data to determine whether differences in various performance metrics (including execution time, speedup, and efficiency) were statistically significant. Results show that the actual machine is approximately twice as fast as the virtual machine at executing the same programs under the various parallelization strategies. Results, which also furnish the scalability behavior of each parallelization strategy, show that some of the performance differences between these systems, across different runs of the algorithms, were statistically significant. A few pseudo-superlinear speedup values computed from the raw data are not true superlinear speedups: they arise from one particular way of computing speedup, and they disappear and give way to asymmetric speedups, which are the kind of speedup that actually occurs in the experiments performed.
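The speedup and efficiency metrics named above are the standard ones: speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p, where T(p) is the execution time on p cores. A minimal sketch, using hypothetical timings rather than the paper's measurements:

```python
# Illustrative wall-clock timings (seconds) for one parallel Delaunay
# triangulation run on 1..8 cores. These numbers are invented for the
# sketch; they are not the paper's experimental data.
timings = {1: 96.0, 2: 50.0, 4: 27.0, 8: 16.0}

def speedup_efficiency(timings):
    """Compute (speedup, efficiency) per core count against the
    single-core time of the same program on the same machine."""
    t1 = timings[1]
    out = {}
    for p, tp in sorted(timings.items()):
        s = t1 / tp        # speedup S(p) = T(1) / T(p)
        out[p] = (s, s / p)  # efficiency E(p) = S(p) / p
    return out

for p, (s, e) in speedup_efficiency(timings).items():
    print(f"p={p}: speedup={s:.2f}, efficiency={e:.2f}")
```

A speedup can only appear superlinear (S(p) > p) if T(1) is taken from a different baseline, e.g. a slower sequential program or the other machine, which is one way the pseudo-superlinear values discussed above can arise.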
An Approach for Pattern Recognition and Prediction of Information Diffusion Model on Twitter
In this paper, we study the information diffusion process on Twitter as a multivariate time series problem. Our model concerns three measures (volume, network influence, and sentiment of tweets) based on 10 features, and we collected 27 million tweets to build our information diffusion time series dataset for analysis. Then, different time series clustering techniques with Dynamic Time Warping (DTW) distance were used to identify distinct patterns of information diffusion. Finally, we built information diffusion prediction models for new hashtags, which comprise two phases: the first phase recognizes the pattern using k-NN with DTW distance; the second phase builds the forecasting model using the traditional Autoregressive Integrated Moving Average (ARIMA) model and the non-linear recurrent neural network of Long Short-Term Memory (LSTM). Preliminary performance evaluation of the different forecasting models shows that LSTM with clustering information notably outperforms the other models. Therefore, our approach can be applied in real-world applications to analyze and predict the information diffusion characteristics of selected topics or memes (hashtags) on Twitter.
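The DTW distance underpinning both the clustering and the k-NN phase can be sketched with the classic dynamic-programming recurrence. The two toy series below are illustrative, not data from the 27-million-tweet corpus:

```python
def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming DTW distance with an
    absolute-difference local cost. d[i][j] holds the cheapest warping
    path cost aligning a[:i] with b[:j]."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the best of: insertion, deletion, or match/stretch
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Two hypothetical hashtag-volume series with similar shape but a
# shifted peak; DTW tolerates the shift far better than Euclidean
# point-by-point distance would.
print(dtw([0, 1, 5, 2, 0], [0, 0, 1, 5, 2]))
```

A k-NN pattern recognizer then simply assigns a new hashtag's series to the cluster of its nearest neighbours under this distance.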
Land Cover Remote Sensing Classification Using Advanced Neural Networks and Supervised Learning
This study aims to evaluate the impact of classifying labelled remote sensing images with a convolutional neural network (CNN) architecture, namely AlexNet, on different land cover scenarios based on two remotely sensed datasets, considered from the points of view of computational time and performance. A set of experiments was conducted to assess the effectiveness of the selected CNN using two implementation approaches, named fully trained and fine-tuned. For validation purposes, two publicly available remote sensing datasets with different land cover features, AID and RSSCN7, were used in the experiments. These datasets offer a wide diversity of input data, number of classes, amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments; it proved efficient in training, validation, and testing. The fully trained approach achieved only modest results on the two datasets, AID and RSSCN7: 73.346% and 71.857%, within 24 min 1 s and 8 min 3 s, respectively. However, a dramatic improvement in classification performance was recorded with the fine-tuning approach: 92.5% and 91%, within 24 min 44 s and 8 min 41 s, respectively. This conclusion opens opportunities for better classification performance in various applications, such as agriculture and crop remote sensing.
Pythagorean-Platonic Lattice Method for Finding All Co-Prime Right Angle Triangles
This paper presents a new method for determining all of the co-prime right angle triangles in the Euclidean field by looking at the intersection of the Pythagorean and Platonic right angle triangles and the corresponding lattice that this produces. The co-prime properties of each lattice point representing a unique right angle triangle are then considered. This paper proposes a conjunction between these two ancient, seemingly disparate theories. This work has wide applications in information security, where cryptography involves improved ways of finding tuples of prime numbers for secure communication systems. In particular, this paper has a direct impact on enhancing the encryption and decryption algorithms used in cryptography.
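The paper's lattice construction is not reproduced here, but the classical Euclid parameterization gives a standard way to enumerate the co-prime (primitive) right angle triangles it studies; this sketch assumes only that well-known parameterization, not the paper's method:

```python
from math import gcd

def primitive_triples(limit):
    """Enumerate all primitive (co-prime) Pythagorean triples with
    hypotenuse <= limit via Euclid's formula: for integers m > n > 0
    with gcd(m, n) = 1 and m - n odd, the triple
    (m^2 - n^2, 2mn, m^2 + n^2) is primitive, and every primitive
    triple arises this way exactly once."""
    triples = []
    m = 2
    while m * m + 1 <= limit:  # smallest hypotenuse for this m is m^2 + 1
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
                if c <= limit:
                    triples.append(tuple(sorted((a, b))) + (c,))
        m += 1
    return sorted(triples, key=lambda t: t[2])

print(primitive_triples(30))  # → [(3, 4, 5), (5, 12, 13), (8, 15, 17), (7, 24, 25), (20, 21, 29)]
```

Each (m, n) pair is itself a lattice point in the plane, which is one way a lattice view of the primitive triples naturally appears.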