|Commenced in January 1999 || Frequency: Monthly || Edition: International|| Paper Count: 17 |
Computer, Electrical, Automation, Control and Information Engineering
Parallel Vector Processing Using Multi-Level Orbital Data
Many applications use vector operations, applying a single instruction to multiple data items that map to different locations in conventional memory. Transferring data from memory is limited by access latency and bandwidth, which limits the performance gain of vector processing. We present a memory system that makes all of its content available to processors in time, so that processors need not access the memory: each location is forced to be available to all processors at a specific time. The data move in different orbits and become available to processors in higher orbits at different times. We use this memory to apply parallel vector operations to data streams at the first orbit level. Data processed in the first level move to an upper orbit one element at a time, allowing a processor in that orbit to apply another vector operation that deals with the serial-code limitations inherent in all parallel applications, interleaving it with the lower-level vector operations.
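The two-level idea above can be caricatured in a few lines. This is our illustrative sketch, not the authors' implementation: a first-orbit stage applies the same operation to every element of a stream, and an upper-orbit stage consumes the results one element at a time, interleaving serial work with the lower-level vector work.

```python
def first_orbit(stream, vector_op):
    """Apply one operation to every element (SIMD-style, first orbit)."""
    for element in stream:
        yield vector_op(element)

def upper_orbit(lower_results, serial_op, state=0):
    """Consume results one element at a time (serial code, upper orbit)."""
    for value in lower_results:
        state = serial_op(state, value)
    return state

# Example: square each element in the first orbit, sum in the upper orbit.
stream = [1, 2, 3, 4]
total = upper_orbit(first_orbit(stream, lambda x: x * x),
                    lambda acc, v: acc + v)
print(total)  # 1 + 4 + 9 + 16 = 30
```

The generator makes the interleaving explicit: the upper orbit pulls one processed element at a time rather than waiting for the whole lower-level pass to finish.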
Operating System Based Virtualization Models in Cloud Computing
Cloud computing is poised to transform the structure of businesses and learning by supplying real-time applications and providing immediate help to small and medium-sized businesses. The ability to run a hypervisor inside a virtual machine is an important feature of virtualization, called nested virtualization. In today's growing field of information technology, many virtualization models are available that provide a convenient approach to implementation, but selecting a single model is difficult. This paper discusses the application of operating-system-based virtualization in cloud computing and the choice of a suitable model given different specifications and user requirements. The most popular models are selected, the selection being based on container-based and hypervisor-based virtualization. The selected models are compared against a wide range of user requirements, such as the number of CPUs, memory size, nested virtualization support, live migration, and commercial support, and the most suitable virtualization model is identified.
An Android Geofencing App for Autonomous Remote Switch Control
A geofence is a virtual fence defined by a preset physical radius around a target location. A geofencing app provides location-based services that define the actionable operations triggered upon the crossing of a geofence. Geofencing requires continual location tracking, which can consume a noticeable amount of battery power. Additionally, location updates need to be frequent and accurate so that actions can be triggered within an expected time window after the mobile user crosses the geofence. In this paper, we build an Android mobile geofencing application to remotely and autonomously control a power switch.
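The core geofence test can be sketched simply: compare the great-circle (haversine) distance between the user's fix and the target against the fence radius. This is an illustrative sketch of the generic technique, not the app's actual code; the coordinates and radius below are invented for the example.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_geofence(user, target, radius_m):
    """True if the user's (lat, lon) fix lies within the fence radius."""
    return haversine_m(*user, *target) <= radius_m

target = (40.0000, -83.0000)
print(inside_geofence((40.0004, -83.0000), target, 100))  # ~44 m  -> True
print(inside_geofence((40.0450, -83.0000), target, 100))  # ~5 km -> False
```

In practice the crossing event is detected by comparing consecutive fixes (outside then inside, or vice versa), which is why update frequency and accuracy bound the triggering delay the abstract mentions.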
Collision Detection Algorithm Based on Data Parallelism
Modern computing technology has entered the era of parallel computing, with a trend toward sustainable and scalable parallelism. Single Instruction Multiple Data (SIMD) is an important way to follow this trend: it can harness more and more computing power by increasing the number of processor cores, without modifying the program. Meanwhile, in the fields of scientific computing and engineering design, many computation-intensive applications face the challenge of increasingly large amounts of data, and data-parallel computing will be an important way to further improve their performance. In this paper, we take accurate collision detection in building information modeling as an example and demonstrate a model for constructing a data-parallel algorithm. According to the model, a complex object is decomposed into sets of simple objects, and collision detection among complex objects is converted into collision detection among simple objects. The resulting algorithm is a typical SIMD algorithm, and its advantages in parallelism and scalability are substantial with respect to traditional algorithms.
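The decomposition idea can be illustrated with a deliberately simplified sketch (ours, not the paper's algorithm): each complex object becomes a set of axis-aligned boxes, and the same overlap test is applied uniformly to every box pair, which is exactly the SIMD-friendly pattern the abstract describes.

```python
def boxes_overlap(a, b):
    """Axis-aligned box overlap test; boxes are (xmin, ymin, xmax, ymax)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def complex_collide(obj_a, obj_b):
    """Complex objects collide iff any pair of their simple parts collides.
    The same test runs over all pairs -- one instruction, multiple data."""
    return any(boxes_overlap(a, b) for a in obj_a for b in obj_b)

chair = [(0, 0, 1, 1), (0, 1, 0.2, 3)]   # seat + backrest as simple boxes
table = [(0.5, 0.5, 4, 1.5)]             # tabletop
wall  = [(10, 0, 11, 5)]                 # far away
print(complex_collide(chair, table))     # True: seat overlaps tabletop
print(complex_collide(chair, wall))      # False
```

Because every pair runs the identical branch-free test, the inner loop maps directly onto SIMD lanes or additional cores without changing the program's structure.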
Comparative Study of Conventional and Satellite Based Agriculture Information System
The purpose of this study is to compare the conventional crop monitoring system with the satellite-based crop monitoring system in Pakistan. The study was conducted for SUPARCO (Space and Upper Atmosphere Research Commission). It focused on the wheat crop, as it is the main cash crop of Pakistan and the province of Punjab, and it answers the following question: which system is better in terms of cost, time, and manpower? The manpower calculated for the Punjab CRS (Crop Reporting Service) is 1,418 personnel, and for SUPARCO, 26 personnel. The total cost calculated for SUPARCO is almost 13.35 million, and for the CRS, 47.705 million. The man-hours calculated for the CRS are 1,543,200 hrs (136 days), and for SUPARCO, 8,320 hrs (40 days), meaning that SUPARCO workers finish their work 96 days earlier than CRS workers. The results show that the satellite-based crop monitoring system is more efficient in terms of manpower, cost, and time than the conventional system, and it also generates early crop forecasts and estimations. The research instruments used included interviews, physical visits, group discussions, questionnaires, and the study of reports and workflows. A total of 93 employees were selected, using Yamane's formula, for data collection, which was done with the help of questionnaires and interviews. Comparative graphing was used in the analysis of the data to formulate the results of the research. The findings also demonstrate that although conventional methods still have a strong impact on crop monitoring in Pakistan, it is time to bring about change through technology so that agriculture can also develop along modern lines.
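The day figures quoted above follow from the man-hour totals and head counts, assuming 8-hour working days (the conversion convention is our assumption, not stated in the abstract):

```python
# Per-worker duration = total man-hours / personnel / hours per day.
crs_days = 1_543_200 / 1_418 / 8      # CRS: ~136 days per worker
suparco_days = 8_320 / 26 / 8         # SUPARCO: 40 days per worker
print(round(crs_days), round(suparco_days), round(crs_days - suparco_days))
# 136 40 96
```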
Control Strategies for a Robot for Interaction with Children with Autism Spectrum Disorder
Socially assistive robotics has become increasingly active and is now present in therapies for people affected by several neurobehavioral conditions, such as Autism Spectrum Disorder (ASD). In fact, robots have played a significant role in positive interaction with children with ASD by stimulating their social and cognitive skills. This work introduces a mobile socially assistive robot, built for interaction with children with ASD, that uses non-linear control techniques for this interaction.
JREM: An Approach for Formalising Models in the Requirements Phase with JSON and NoSQL Databases
This paper presents an approach to reduce some of the current flaws of the requirements phase of the software development process. It takes the software requirements of an application, builds a conceptual model of them, and formalizes the model as JSON documents. This formal model is stored in a document-oriented NoSQL database, MongoDB, chosen for its advantages in flexibility and efficiency. In addition, the paper underlines the contributions of the detailed approach and shows some applications and benefits for future work in the field of automatic code generation using model-driven engineering tools.
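To make the idea concrete, a formalized requirement might look like the following JSON document. The field names here are purely illustrative; the paper defines its own JREM schema, which we have not reproduced.

```python
import json

# Hypothetical shape of one formalized requirement (illustrative fields only).
requirement = {
    "id": "REQ-001",
    "type": "functional",
    "description": "The system shall let a registered user reset the password.",
    "actors": ["RegisteredUser"],
    "priority": "high",
    "traceability": {"source": "stakeholder interview", "models": ["UC-04"]},
}

doc = json.dumps(requirement, indent=2)
print(doc)
# In MongoDB such a document could be stored as-is, e.g. with pymongo:
#   db.requirements.insert_one(requirement)
```

Storing requirements as schema-flexible documents is what gives the approach its flexibility: fields can be added per requirement type without migrations, and the documents remain queryable for later model-driven code generation.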
Pose Normalization Network for Object Classification
Convolutional Neural Networks (CNNs) have demonstrated their effectiveness in synthesizing 3D views of object instances at various viewpoints. Given the problem where one has only limited viewpoints of a particular object for classification, we present a pose normalization architecture that transforms the object to viewpoints existing in the training dataset before classification, to yield better classification performance. We demonstrate that this Pose Normalization Network (PNN) can capture the style of the target object and re-render it at a desired viewpoint. Moreover, we show that the PNN improves the classification results on the 3D chairs dataset and the ShapeNet airplanes dataset when given only images at limited viewpoints, as compared to a baseline without pose normalization.
Building a Scalable Telemetry Based Multiclass Predictive Maintenance Model in R
Many organizations face the challenge of how to analyze and build machine learning models over their sensitive telemetry data. In this paper, we discuss how users can leverage the power of R without having to move their big data around, as well as a cloud-based solution for organizations willing to host their data in the cloud. By using ScaleR technology to benefit from parallelization and remote computing, with R Services on premises or in the cloud, users can apply R at scale while keeping their data in place.
Data Quality Enhancement with String Length Distribution
Recently, the volume of collectable manufacturing data has been increasing rapidly. At the same time, large-scale recalls are becoming a serious social problem. Under these circumstances, there is a growing need to prevent such recalls through defect analysis, such as root cause analysis and anomaly detection, on manufacturing data. However, the time needed to classify strings in manufacturing data by traditional methods is too long to meet the requirement of quick defect analysis. Therefore, we present the String Length Distribution Classification (SLDC) method to classify strings correctly in a short time. The method learns character features, especially the string length distribution, from Product IDs and Machine IDs in BOMs and asset lists. By applying the proposed method to strings in actual manufacturing data, we verified that the classification time can be reduced by 80%. As a result, we estimate that the requirement of quick defect analysis can be fulfilled.
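The core of the idea, as we read it, can be sketched as follows (a simplification of SLDC, with invented example IDs): learn the string-length distribution of each known column type, then assign a new set of strings to the type whose distribution it matches best.

```python
from collections import Counter

def length_dist(strings):
    """Normalized distribution of string lengths."""
    counts = Counter(len(s) for s in strings)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def dist_distance(p, q):
    """L1 distance between two length distributions."""
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0) - q.get(k, 0)) for k in keys)

def classify(strings, learned):
    """Return the label whose learned length distribution is closest."""
    d = length_dist(strings)
    return min(learned, key=lambda label: dist_distance(d, learned[label]))

# Learn distributions from known columns (example IDs are invented).
learned = {
    "product_id": length_dist(["P-10023", "P-10456", "P-20981"]),  # length 7
    "machine_id": length_dist(["M42", "M07", "M13"]),              # length 3
}
print(classify(["P-33310", "P-48112"], learned))  # product_id
```

Comparing whole length distributions is cheap relative to character-level parsing, which is consistent with the large classification-time reduction the abstract reports.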
Hierarchical Checkpoint Protocol in Data Grids
A grid of computing nodes has emerged as a representative means of connecting distributed computers and resources scattered all over the world for the purposes of computing and distributed storage. Since fault tolerance becomes complex due to the dynamic availability of resources in a decentralized grid environment, checkpointing can be used in connection with replication in data grids. The objective of our work is to provide fault tolerance in data grids through a data-replication-driven model based on clustering. The performance of the protocol is evaluated with the OMNeT++ simulator. The computational results show the efficiency of our protocol in terms of recovery time and the number of processes involved in rollbacks.
Complex Fuzzy Evolution Equation with Nonlocal Conditions
The objective of this paper is to study the existence and uniqueness of mild solutions for a complex fuzzy evolution equation with nonlocal conditions, which accommodates the notion of fuzzy sets defined by complex-valued membership functions. We first propose a definition of complex fuzzy strongly continuous semigroups. We then give an existence and uniqueness result for this complex fuzzy evolution equation.
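For orientation, the classical prototype of an evolution problem with a nonlocal condition, and the associated mild-solution form, can be sketched as follows. This is our notation for the standard (crisp) setting; the paper's contribution lies in extending it to complex fuzzy-valued states.

```latex
% Prototype evolution problem with a nonlocal condition (our sketch):
\begin{gather*}
  u'(t) = A\,u(t) + f\bigl(t, u(t)\bigr), \qquad t \in [0,T], \\
  u(0) + g(u) = u_0,
\end{gather*}
% where A generates a strongly continuous semigroup \{T(t)\}_{t \ge 0}.
% A mild solution is a fixed point of the integral equation
\begin{equation*}
  u(t) = T(t)\bigl(u_0 - g(u)\bigr)
         + \int_0^t T(t-s)\, f\bigl(s, u(s)\bigr)\, ds .
\end{equation*}
```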
Management Software for the Elaboration of an Electronic File in the Pharmaceutical Industry Following Mexican Regulations
For the certification of certain goods of public interest, such as medicines and food, the preparation and delivery of a dossier is required. Its elaboration demands legal and administrative knowledge, as well as organization of the documents of the process and an ordering that allows verification of the file. Therefore, a virtual platform was developed to support the process of managing and elaborating the dossier, providing accessibility to the information and interfaces that allow the user to know the status of projects. Developing the dossier system in the cloud allows the inclusion of the technical requirements for software management, including validation and manufacturing in the industry. The platform guides and facilitates the elaboration of the dossier (report, file, or history), taking Mexican legislation and regulations into account, and it also offers auxiliary tools for its management. This technological alternative provides organizational support for documents and accessibility to the information required for the successful development of a dossier. The platform is divided into the following modules: system control, catalog, dossier, and enterprise management. The modules are designed according to the structure required in a dossier in those areas. However, the structure allows for flexibility, as its goal is to become a tool that facilitates and does not obstruct processes. The architecture and development of the software allow flexibility for future expansion to other fields, which would imply feeding the system with new regulations.
Efficient Filtering of Graph Based Data Using Graph Partitioning
An algebraic framework for processing graph signals axiomatically designates the graph adjacency matrix as the shift operator. In this setup, we often encounter a problem wherein we know the filtered output and the filter coefficients and need to find the input graph signal. Solving this problem directly requires O(N^3) operations, where N is the number of vertices in the graph. In this paper, we adapt the spectral graph partitioning method and use it to reduce the computational cost of the filtering problem. We use the example of denoising temperature data to illustrate the efficacy of the proposed approach.
Visual Search Based Indoor Localization in Low Light via RGB-D Camera
Most traditional visual indoor navigation algorithms only consider localization in ordinary daylight, whereas in this paper we focus on indoor re-localization in low light. Since RGB images are degraded in low light, the less discriminative infrared and depth image pairs captured by an RGB-D camera are taken as input, and the most similar candidates are retrieved as output from a database built in the bag-of-words framework. Epipolar constraints can then be used to re-localize the query infrared and depth image sequence. We evaluate our method on two datasets captured with a Kinect2. The results demonstrate very promising re-localization performance for indoor navigation systems in low-light environments.
A Robust Hybrid Blind Digital Image Watermarking System Using Discrete Wavelet Transform and Contourlet Transform
In this paper, a hybrid blind digital watermarking system using the Discrete Wavelet Transform (DWT) and the Contourlet Transform (CT) is implemented and tested. The combined watermarking system is evaluated against five common types of image attacks. The performance evaluation shows improved results in terms of imperceptibility, robustness, and tolerance of these attacks; accordingly, the system is effective and applicable.
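A toy version of transform-domain embedding conveys the principle. This sketch is ours, not the paper's scheme: it uses a one-level Haar DWT on a 1-D signal and, unlike the paper's blind system, extracts by comparing against the original signal for brevity.

```python
def haar_dwt(x):
    """One-level Haar transform: (approximation, detail) coefficients."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse one-level Haar transform."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def embed(signal, bits, strength=0.5):
    """Embed watermark bits by nudging approximation coefficients."""
    approx, detail = haar_dwt(signal)
    marked = [a + strength * (1 if b else -1) for a, b in zip(approx, bits)]
    return haar_idwt(marked + approx[len(bits):], detail)

def extract(marked, original, n_bits):
    """Non-blind extraction: compare approximations (simplified here)."""
    ma, _ = haar_dwt(marked)
    oa, _ = haar_dwt(original)
    return [1 if m > o else 0 for m, o in zip(ma[:n_bits], oa[:n_bits])]

signal = [10, 12, 14, 13, 9, 8, 11, 15]
marked = embed(signal, [1, 0, 1])
print(extract(marked, signal, 3))  # [1, 0, 1]
```

Embedding in low-frequency (approximation) coefficients is what buys robustness to attacks such as compression and filtering, at the cost of imperceptibility, which is why the strength parameter must be tuned.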
Studies on Properties of Knowledge Dependency and Reduction Algorithm in Tolerance Rough Set Model
The relations among tolerance classes, indispensable attributes, and knowledge dependency in the rough set model with a tolerance relation are explored. After giving definitions of knowledge dependency and of the knowledge dependency degree for incomplete information systems in the tolerance rough set model, distinguishing whether or not the decision attribute contains missing attribute values, it is proved that complete knowledge dependency maintains reflexivity, transitivity, and the augmentation, decomposition, and merge laws. Knowledge dependency degrees (as opposed to complete knowledge dependency degrees) satisfy only some of these laws under the transitivity, augmentation, and decomposition operations. An algorithm for attribute reduction in an incomplete decision table is designed, and its correctness is checked with an example.