Excellence in Research and Innovation for Humanity

International Science Index

Commenced in January 1999 · Frequency: Monthly · Edition: International · Abstract Count: 45197

Computer and Information Engineering

An Efficient Fundamental Matrix Estimation for Moving Object Detection
In this paper, an improved method for estimating the fundamental matrix is proposed and applied effectively to monocular-camera-based moving object detection. The method consists of corner point detection, motion estimation for moving objects, and fundamental matrix calculation. Corner points are obtained using the Harris corner detector, and the motions of moving objects are estimated with the pyramidal Lucas-Kanade optical flow algorithm. The fundamental matrix is then calculated through epipolar geometry analysis using RANSAC. In the proposed method, we improve moving object detection performance by using two threshold values that determine whether a point correspondence is an inlier or an outlier. Through simulations, we compare performance while varying the two threshold values.
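As an illustration of the inlier/outlier decision described above, the epipolar consistency of a candidate correspondence can be scored with the first-order (Sampson) approximation of the geometric error and compared against two thresholds. This is a minimal sketch under the assumption that the two thresholds bracket an "ambiguous" band; the function names and example matrix are illustrative, not the paper's implementation.

```python
import numpy as np

def sampson_error(F, x1, x2):
    # First-order geometric (Sampson) error of a correspondence x1 <-> x2
    # (homogeneous 3-vectors) with respect to a fundamental matrix F.
    Fx1 = F @ x1
    Ftx2 = F.T @ x2
    num = float(x2 @ F @ x1) ** 2
    den = Fx1[0] ** 2 + Fx1[1] ** 2 + Ftx2[0] ** 2 + Ftx2[1] ** 2
    return num / den

def classify(F, x1, x2, t_low, t_high):
    # Two-threshold decision: below t_low -> inlier, above t_high -> outlier,
    # in between -> ambiguous (left out of the model update).
    e = sampson_error(F, x1, x2)
    if e < t_low:
        return "inlier"
    if e > t_high:
        return "outlier"
    return "ambiguous"
```

A single threshold forces every borderline correspondence into one of two classes; the second threshold lets borderline points be excluded from both the inlier set and the outlier (moving-object) set.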
Investigation of Clustering Algorithms Used in Wireless Sensor Networks
Wireless sensor networks are networks in which many sensor nodes organize themselves, operating by relaying sensed data through other nodes in the network to a central station. Research on wireless sensor networks concentrates on routing algorithms, energy efficiency, and clustering algorithms. In the clustering method, the nodes in the network are divided into clusters using different parameters, and the most suitable cluster head is selected from among them. Data destined for the center is gathered per cluster and transmitted to the center by the cluster head. With this method, network traffic is reduced and the energy efficiency of the nodes is increased. In this study, clustering algorithms were examined in terms of clustering performance and cluster head selection characteristics, in order to identify their weak and strong sides.
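As a concrete example of cluster head selection, LEACH, one of the most widely studied clustering algorithms for wireless sensor networks, elects heads probabilistically with a rotating round-based threshold so that the role (and its energy cost) circulates among the nodes. A minimal sketch of that election rule (illustrative; the survey itself does not single out this code):

```python
import random

def leach_threshold(P, r):
    # LEACH election threshold for round r, with desired cluster-head
    # fraction P: T(n) = P / (1 - P * (r mod 1/P)) for eligible nodes.
    return P / (1 - P * (r % int(round(1 / P))))

def elect_cluster_heads(node_ids, P, r, rng):
    # Each eligible node draws a random number; below the threshold -> head.
    t = leach_threshold(P, r)
    return [n for n in node_ids if rng.random() < t]
```

The threshold grows over the rounds of an epoch, so nodes that have not yet served become increasingly likely to be elected, which spreads energy consumption across the cluster.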
Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery
Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely developed within the deep learning framework. Deriving visual interpretation from high-dimensional imagery data is generally considered a challenging problem. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize a network with a large number of convolution layers, due to the large number of unknowns that must be optimized with respect to a training set that generally needs to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of convolution kernels for computational reasons, despite the recent development of effective parallel processing machinery, which leads to the use of consistently small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model in which convolution kernels of different sizes are applied at each layer, based on random projection. We apply random filters of varying sizes and associate each filter response with a scalar weight that corresponds to the standard deviation of the random filter. This allows a large number of random filters at the cost of one scalar unknown per filter.
The computational cost of the back-propagation procedure does not increase with larger filter sizes, even though additional computational cost is required for the convolutions in the feed-forward procedure. The use of random kernels with varying sizes enables effective analysis of image features at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments, in which we quantitatively compare well-known CNN architectures with our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks within the CNN framework. Acknowledgement—This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
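The layer described above can be sketched as a bank of fixed random filters of different sizes whose responses are combined by per-filter scalar weights, the only trainable unknowns. This is an illustrative NumPy sketch, not the authors' implementation; in practice both the convolutions and the scalar weights would live inside a deep learning framework so the scalars can be trained by back-propagation.

```python
import numpy as np

def conv2d_valid(img, k):
    # plain 'valid' 2D correlation, enough to illustrate the idea
    kh, kw = k.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
# fixed random kernels of several sizes; only the scalar weights would train
kernels = [rng.standard_normal((s, s)) / s for s in (3, 5, 7)]
weights = np.ones(len(kernels))

def layer(img, kernels, weights, out_size):
    # crop each response to a common size, scale by its scalar weight, sum, ReLU
    responses = []
    for k, w in zip(kernels, weights):
        r = conv2d_valid(img, k)
        responses.append(w * r[:out_size, :out_size])
    return np.maximum(sum(responses), 0.0)
```

Since the kernels themselves are frozen, the gradient only flows into one scalar per filter, which is why the back-propagation cost is independent of the filter sizes.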
Improved Super-Resolution Using Deep Denoising Convolutional Neural Network
Super-resolution is a technique used in computer vision to construct a high-resolution image from a single low-resolution image. It is used to increase the frequency content, recover lost details, and remove the downsampling artifacts and noise introduced by the camera during image acquisition. High-resolution images and videos are desired in most digital imaging applications and image processing tasks. The goal of super-resolution is to combine non-redundant information from single or multiple low-resolution frames to generate a high-resolution image. Many methods have been proposed in which multiple images of the same scene, related by different transformations, are used as the low-resolution inputs; this is called multi-image super-resolution. Another family of methods, single-image super-resolution, tries to learn the redundancy present in an image and reconstruct the lost information from a single low-resolution image. Deep learning is currently among the state-of-the-art approaches to high-resolution image reconstruction. In this research, we propose Deep Denoising Super Resolution (DDSR), a deep neural network that effectively reconstructs a high-resolution image from a low-resolution input.
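As context for the reconstruction task, the classical baseline that learning-based methods such as the proposed DDSR aim to improve on is plain interpolation of the low-resolution input; a network then predicts the missing high-frequency detail on top of it. A minimal bilinear-interpolation sketch (illustrative only, not part of the paper's method):

```python
import numpy as np

def bilinear_upscale(img, scale):
    # interpolate each high-resolution pixel from its 4 low-resolution neighbors
    H, W = img.shape
    out = np.zeros((H * scale, W * scale))
    for i in range(H * scale):
        for j in range(W * scale):
            y, x = i / scale, j / scale
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
            dy, dx = y - y0, x - x0
            out[i, j] = (img[y0, x0] * (1 - dy) * (1 - dx)
                         + img[y0, x1] * (1 - dy) * dx
                         + img[y1, x0] * dy * (1 - dx)
                         + img[y1, x1] * dy * dx)
    return out
```

Interpolation alone cannot recover frequencies lost in downsampling; a learned model is typically trained on the residual between the true high-resolution image and such an upscaled input.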
A Modular Framework for Enabling Analysis for Educators with Different Levels of Data Mining Skills
Enabling data mining analysis among a wider audience of educators is an active area of research within the educational data mining (EDM) community. This paper proposes a framework for developing an environment that caters both for educators who have little technical data mining skill and for more advanced users with some data mining expertise. The framework architecture was developed through a review of the strengths and weaknesses of existing models in the literature. The proposed framework provides a modular architecture that allows future researchers to focus on the development of specific areas within the EDM process. Finally, the paper highlights a strategy of enabling analysis through either the use of predefined questions or a guided data mining process, and shows how the developed questions and the analysis conducted can be reused and extended over time.
Virtual Reality Based 3D Video Games and Speech-Lip Synchronization Superseding Algebraic Code Excited Linear Prediction
In 3D video games, production continues to grow while budgets become increasingly affordable. However, automating speech-lip synchronization is typically onerous and has become a critical research subject in virtual-reality-based 3D video games. This paper presents one such automatic tool, focused specifically on synchronizing the speech and lip movement of game characters. A robust and precise speech recognition component built on the Algebraic Code Excited Linear Prediction (ACELP) method is developed, which automatically delivers lip-sync results. The ACELP algorithm builds on code-excited linear prediction, but ACELP codebooks have an explicit algebraic structure imposed upon them. This offers a faster alternative to conventional software implementations of lip-sync algorithms, improving quality of service while reducing production cost.
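The final lip-sync stage typically maps recognized phonemes to mouth shapes (visemes) that drive the character animation. The sketch below shows only that mapping step; the phoneme labels and viseme names are illustrative placeholders, not the output format of the paper's ACELP recognizer.

```python
# Hypothetical phoneme-to-viseme table; labels are illustrative only.
PHONEME_TO_VISEME = {
    "AA": "open", "AE": "open",
    "B": "closed", "M": "closed", "P": "closed",
    "F": "teeth-lip", "V": "teeth-lip",
    "OW": "rounded", "UW": "rounded",
}

def lip_track(phonemes, default="rest"):
    # map a recognized phoneme sequence to a sequence of mouth shapes
    return [PHONEME_TO_VISEME.get(p, default) for p in phonemes]
```

Because many phonemes share a mouth shape, the viseme set is much smaller than the phoneme set, which is what makes recognition-driven lip sync tractable in real time.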
An Ontology Based Model to Control the Potential Hazards and to Assure the Safety and Quality of Seafood
Ensuring food safety is a major problem worldwide. Quality control systems are important and play a significant role in ensuring the safety of food. This paper proposes an ontology based seafood quality analyzer and miner (ONTO SQAM), which is used to ensure the quality of seafood through a knowledge-based model. The total amount of fish consumed worldwide has increased dramatically in recent decades. It is therefore necessary to assure the safety and quality of fish for consumers. Different algorithms are suggested for analysis, mining, and prediction to ensure seafood quality. The Indian seafood industry is taken as the case study in this research.
Tensor Deep Stacking Neural Networks and Bilinear Mapping Based Speech Emotion Classification Using Facial Electromyography
Speech emotion classification is an active research field seeking a robust and fast classifier suitable for different real-life applications. This work focuses on classifying different emotions from speech signals using features related to pitch, formants, energy contours, jitter, shimmer, and spectral, perceptual and temporal characteristics. Tensor deep stacking neural networks were employed to examine the factors that influence the classification success rate. Facial electromyography signals were collected from several subjects in a controlled environment using audio-visual stimuli. The facial electromyography signals were pre-processed with a moving average filter, and a set of statistical features was extracted. The extracted features were mapped onto the corresponding emotions using bilinear mapping. With facial electromyography signals, a database comprising diverse emotions can be built with suitable fine-tuning of features and training data. A success rate of 92% can be attained without increasing system complexity or the computation time for classifying the different emotional states.
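The moving average pre-processing step mentioned above can be sketched as follows. This is a generic causal smoother; the paper does not specify the window length, so it is left as a parameter here.

```python
def moving_average(x, w):
    # causal moving-average smoother: each output sample is the mean of
    # the last w input samples seen so far (fewer at the signal start)
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - w + 1):i + 1]
        out.append(sum(seg) / len(seg))
    return out
```

Smoothing the raw electromyography signal this way suppresses high-frequency measurement noise before statistical features are computed.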
A Software Engineering Methodology for Developing Secure Obfuscated Software
We propose a methodology to reconcile two apparently contradictory goals in software development: producing secure obfuscated software and producing well-engineered software. Our methodology consists first in having the system designers define the security level required for the software. We consider four types of attackers: casual attackers, hackers, institutional attackers, and government attackers. Depending on the level of threat, the methodology we propose uses five or six teams to accomplish this task. A Software Engineering Team, one or two Software Obfuscation Teams, and a Compiler Team develop and compile the secure obfuscated software; a Code Breakers Team then tests the results of the previous teams to verify that the software cannot be broken at the required security level; and an Intrusion Analysis Team analyzes the results of the Code Breakers Team and proposes solutions to the development teams to prevent the detected intrusions. We also present an analytical model to show that our methodology is not only easy to use, but also provides an economical way of producing secure obfuscated software.
A World Map of Seabed Sediment Based on 50 Years Knowledge
SHOM initiated the production of a global sedimentological seabed map in 1995 to provide a necessary tool for searches for aircraft and boats lost at sea, for sedimentary information on nautical charts, and for input data for acoustic propagation modeling. The project is original, yet a similar effort had already been initiated in 1912, when the French hydrographic service and the University of Nancy produced a series of maps of the distribution of marine sediments along all the French coasts. During the following decade, this association led to the publication of a sediment map of the continental shelves of Europe and North America. The ocean sediment map presented here was initiated from a map of the deep ocean floor produced by UNESCO in the 1970s. That map was adapted using a unified sediment classification to represent all types of sediments: from beaches to the deep seabed, and from glacial deposits to tropical sediments. To allow good visualization and to suit the different applications, only the grain size of the sediments is represented. Currently, more than 200 source maps have been integrated. Some of these are published seabed maps; in those cases, the work consists simply of validating the interest of the document, standardizing the sediment classification, and integrating it into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery from large deep-ocean hydrographic surveys. These allow very high-quality mapping of areas that until then were represented as homogeneous. The third and principal source of data comes from the integration of regional maps produced specifically for this project. These regional maps are compiled using all the bathymetric and sedimentary data of a region. This step makes it possible to produce a regional synthesis map, with generalizations applied where the data are over-precise.
Eighty-six regional maps of the Atlantic, the Mediterranean, and the Indian Ocean have been produced and integrated into the world sedimentary map. This work is ongoing and allows a new digital version every two years, with the integration of new maps. This is the first article describing this global seabed map and its realization. It describes the choices made in terms of sediment classification, the scale of the source data, and the zonation of quality variability. The map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million point and surface data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This approach makes it possible to take into account the progress made in seabed characterization during the last decades. In this way, the multiplication of sediment measurement systems and the compilation of all published data have gradually enriched a map that still relies, for many regions, on data acquired more than half a century ago.
Nature of the Prohibition of Discrimination on Grounds of Sexual Orientation in EU Law
European law encompasses many supranational legal systems (EU law, the ECHR, public international law and the constitutional traditions common to the Member States) which guarantee the protection of fundamental rights, with partly overlapping scopes of applicability, various principles of interpretation of legal norms and a different hierarchy. In EU law, the prohibition of discrimination on grounds of sexual orientation originates in both primary and secondary EU legislation. At present, the prohibition is considered a fundamental right under Article 21 of the Charter, but the Court has not yet determined whether it is a right or a principle within the meaning of the Charter. Similarly, the Court has not deemed this criterion a general principle of EU law. The personal and material scope of the prohibition of discrimination on grounds of sexual orientation based on Article 21 of the Charter must each time be specified in another legal act of the EU, in accordance with Article 51 of the Charter. The effect of the prohibition of discrimination on grounds of sexual orientation understood in this way is two-fold: for the States and for the Union. On the one hand, one may refer to the legal instruments for reviewing a Member State's enforcement of EU law laid down in the Treaties. On the other hand, EU law does not provide for a right of individual petition. It is therefore the duty of the domestic courts to protect the right of a person not to be discriminated against on grounds of sexual orientation, in line with national procedural rules, within the limits and in accordance with the principles set out in EU law, in particular in Directive 2000/78.
The development of the principle of non-discrimination in the Court's case-law gives rise to certain doubts as to its applicability, namely whether the principle, as a general principle of EU law, may be granted an autonomous character with respect to matters not included in the personal or material scope of the Directives, although within the EU's competence. Moreover, both the doctrine and the opinions of the Advocates-General have called for a general competence of the CJEU with regard to fundamental rights, which, however, might lead to a violation of the principle of separation of competences. The aim of this paper is to answer the question of what the nature of the prohibition of discrimination on grounds of sexual orientation in EU law is (a general principle of EU law, or a principle or right under the Charter's terminology). The paper therefore focuses on the nature of Article 21 of the Charter (a right or a principle) and on the scope (personal and material) of the prohibition of discrimination based on sexual orientation in EU law, as well as its effect (vertical or horizontal). The study covers the provisions of EU law together with the relevant CJEU case-law.
The Capacity of Mel Frequency Cepstral Coefficients for Speech Recognition
Speech recognition makes an important contribution to promoting new technologies in human-computer interaction. Today, there is a growing need to employ speech technology in daily life and business activities. However, speech recognition is a challenging task that requires several stages before the desired output is obtained. Among the components of automatic speech recognition (ASR) is the feature extraction process, which parameterizes the speech signal to produce the corresponding feature vectors. Feature extraction aims at approximating the linguistic content conveyed by the input speech signal. In the speech processing field, there are several methods for extracting speech features, but the most popular technique is Mel frequency cepstral coefficients (MFCC). It has long been observed that MFCC is dominantly used in well-known recognizers such as the Carnegie Mellon University (CMU) Sphinx and the Hidden Markov Model Toolkit (HTK). Hence, this paper focuses on the MFCC method as the standard choice for identifying the different speech segments in order to obtain the language phonemes for further training and decoding steps. Owing to the good performance of MFCC, previous studies show that it dominates Arabic ASR research. In this paper, we demonstrate MFCC as well as the intermediate steps that are performed to obtain these coefficients using the HTK toolkit.
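The intermediate steps behind MFCC (pre-emphasis, windowing, power spectrum, mel filterbank, log compression, and DCT) can be sketched for a single frame as follows. This mirrors the standard pipeline rather than HTK's exact code; the filter and coefficient counts are typical defaults, not values from the paper.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filt, n_fft, sr):
    # triangular filters equally spaced on the mel scale up to Nyquist
    pts = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filt + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filt, n_fft // 2 + 1))
    for i in range(1, n_filt + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(frame, sr, n_filt=26, n_ceps=13):
    frame = np.append(frame[0], frame[1:] - 0.97 * frame[:-1])  # pre-emphasis
    frame = frame * np.hamming(len(frame))                       # windowing
    spec = np.abs(np.fft.rfft(frame)) ** 2 / len(frame)          # power spectrum
    energies = mel_filterbank(n_filt, len(frame), sr) @ spec     # mel filterbank
    log_e = np.log(energies + 1e-10)                             # log compression
    n = np.arange(n_filt)                                        # DCT-II matrix
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_filt))
    return dct @ log_e
```

The DCT at the end decorrelates the log filterbank energies, which is why a handful of cepstral coefficients suffice as a compact feature vector per frame.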
Automatic Selection of Kernels to Improve the Hyperspectral Images Classification in Deep Learning: Using Convolutional Neural Network-Fuzzy-C-Means Clustering Methods
Hyperspectral image (HSI) classification aims at assigning each pixel a predefined class label, which supports many applications. Many classification methods have therefore been proposed to improve hyperspectral image classification. The deep Convolutional Neural Network (CNN) is very interesting; it can provide excellent performance in classifying hyperspectral images when many training samples are available. To improve the classification accuracy of the CNN architecture, an advanced CNN model used a clustering method (the K-means algorithm) to generate the kernels. Choosing the number of convolution kernels is a problem, because it must be set manually. Such methods, which generate kernels that better represent the data, are rarely studied in the domain of hyperspectral image classification. Thus, a novel HSI classification method based on CNNs has recently been proposed, in which the convolution kernels can be automatically learned from the clustered data without knowing the number of clusters to use. This method, which combines two algorithms, CNN and K-means (the latter used to choose the kernels), showed good results. At this level, since Fuzzy C-Means (FCM) can replace K-means, with the advantage of fuzzy clustering, the classification accuracy can be improved further. In this paper, we propose a novel framework based on a CNN, where kernels are automatically learned using an FCM clustering network, for HSI classification. With those data-adaptive kernels, the CNNAKS-FCM technique attains better classification accuracy. Simulation results for the identification of rice diseases show the feasibility and effectiveness of the CNNAKS-FCM method.
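The FCM step that replaces K-means can be sketched with the standard alternating updates of soft memberships and centroids (kernels would then be derived from the resulting centroids). An illustrative NumPy sketch, shown here with a fixed cluster count for simplicity, whereas the paper's method avoids fixing it:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=50, seed=0):
    # Fuzzy C-Means: returns soft memberships U (n x c) and centroids V (c x d)
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)        # rows are membership distributions
    for _ in range(iters):
        Um = U ** m                           # fuzzified memberships
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted centroids
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-10
        inv = d ** (-2.0 / (m - 1.0))         # standard membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, V
```

Unlike K-means' hard assignments, each pixel contributes to every centroid in proportion to its membership, which is the "advantage of fuzzy clustering" the abstract refers to.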
Fast Schroedinger Eigenmaps Algorithm for Hyperspectral Imagery Classification
Schroedinger Eigenmaps (SE) has exhibited high efficiency in hyperspectral dimensionality reduction tasks; it is based on the Laplacian Eigenmaps (LE) algorithm and a spatial-spectral potential matrix. In practice, SE suffers from high computational complexity, which may limit its use in the remote sensing field. In this paper, we propose a fast variant of the Schroedinger Eigenmaps (SE) method called Fast Schroedinger Eigenmaps (Fast SE). The proposed approach is based on a fast variant of the Laplacian Eigenmaps (LE) algorithm and a spatial-spectral potential matrix; it replaces the quadratic constraint of the LE optimization problem with a new linear constraint. This modification helps to preserve the data manifold structure in a similar way to SE, but with greater computational efficiency. A real hyperspectral scene was employed for experimental analysis. Experimental results show competitive classification accuracy with reduced computing time compared with SE and other well-known dimensionality reduction methods.
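The core computation behind SE, a graph Laplacian augmented with a diagonal potential and solved as a generalized eigenproblem, can be sketched as below. The heat-kernel affinities and the placeholder potential are illustrative; the paper's spatial-spectral potential and its fast linear-constraint variant are not reproduced here.

```python
import numpy as np

def schroedinger_eigenmaps(X, alpha=1.0, n_dims=2, sigma=1.0):
    # W: heat-kernel affinities; L = D - W; V: diagonal (barrier) potential
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(1))
    L = D - W
    V = np.diag(np.linspace(0.0, 1.0, len(X)))  # placeholder potential
    # generalized problem (L + alpha V) y = lam D y via symmetric whitening
    Dm = np.diag(1.0 / np.sqrt(np.diag(D)))
    M = Dm @ (L + alpha * V) @ Dm
    vals, vecs = np.linalg.eigh((M + M.T) / 2)
    Y = Dm @ vecs[:, 1:n_dims + 1]  # drop the smallest (near-trivial) mode
    return Y
```

The potential term steers the embedding toward spatially coherent clusters; setting alpha to zero recovers plain Laplacian Eigenmaps.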
MapReduce Algorithm for Geometric and Topological Information Extraction from 3D CAD Models
In a digital world in perpetual evolution and acceleration, where data are increasingly voluminous, rich and varied, the new software solutions that emerged with the Big Data phenomenon offer companies new opportunities: not only to optimize their business and evolve their production model, but also to reorganize in order to increase competitiveness and identify new strategic axes. Industrial design and manufacturing companies, like others, face these challenges; data represent a major asset, provided that they know how to capture, refine, combine and analyze them. The objective of our paper is to propose a solution for geometric and topological information extraction from databases of 3D CAD models (specifically STEP files), with a specific algorithm based on the MapReduce programming paradigm. Our proposal is the first step of our future approach to 3D CAD object retrieval.
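As a toy illustration of the MapReduce pattern applied to STEP data, the map phase can emit (entity type, count) pairs per line and the reduce phase can merge them. The snippet and entity names are illustrative; the paper's algorithm extracts richer geometric and topological information than this count.

```python
from collections import Counter
from functools import reduce
import re

STEP_SNIPPET = """
#10=CARTESIAN_POINT('',(0.,0.,0.));
#11=CARTESIAN_POINT('',(1.,0.,0.));
#12=ADVANCED_FACE('',(#20),#30,.T.);
""".strip().splitlines()

def map_phase(line):
    # emit (entity_type, 1) for a STEP data line, e.g. ('CARTESIAN_POINT', 1)
    m = re.match(r"#\d+=([A-Z_0-9]+)\(", line)
    return Counter({m.group(1): 1}) if m else Counter()

def reduce_phase(a, b):
    # merge partial counts from the mappers
    return a + b

totals = reduce(reduce_phase, map(map_phase, STEP_SNIPPET), Counter())
```

Because each line is processed independently in the map phase, the same structure parallelizes naturally over a large CAD model database.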
3D Object Retrieval Based on Similarity Calculation in 3D Computer Aided Design Systems
Nowadays, recent technological advances in the acquisition, modeling, and processing of three-dimensional (3D) object data lead to the creation of models stored in huge databases, which are used in various domains such as computer vision, augmented reality, the game industry, medicine, CAD (Computer-Aided Design), 3D printing, etc. At the same time, the industry is benefiting from powerful modeling tools enabling designers to easily and quickly produce 3D models. The great ease of acquisition and modeling of 3D objects makes it possible to create large 3D model databases, which then become difficult to navigate. Therefore, the indexing of 3D objects appears as a necessary and promising solution to manage this type of data, extract model information, retrieve an existing model, or calculate the similarity between 3D objects. The objective of the proposed research is to develop a framework allowing easy and fast access to 3D objects in a CAD model database, with a specific indexing algorithm to find objects similar to a reference model. Our main objectives are to study existing methods for similarity calculation of 3D objects (essentially shape-based methods), specifying the characteristics of each method as well as the differences between them, and then to propose a new approach for indexing and comparing 3D models, which is suitable for our case study and based on some of the previously studied methods. Our proposed approach is finally illustrated by an implementation and evaluated in a professional context.
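Among the shape-based similarity methods referred to above, a classic example is the D2 shape distribution: a histogram of distances between random point pairs sampled from the model, compared between models. A hedged sketch of that idea (not necessarily the approach the paper finally adopts):

```python
import numpy as np

def d2_descriptor(points, n_pairs=2000, bins=16, seed=0):
    # histogram of pairwise distances between random surface samples,
    # normalized by the maximum distance to be scale-invariant
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    h, _ = np.histogram(d / (d.max() + 1e-12), bins=bins, range=(0, 1))
    return h / h.sum()

def similarity(h1, h2):
    # 1.0 for identical distributions, 0.0 for disjoint ones
    return 1.0 - 0.5 * np.abs(h1 - h2).sum()
```

Such global descriptors are cheap to index and compare, which makes them a common first filter before more precise (and more expensive) geometric matching.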
Adaptive Filtering in Subbands for Supervised Source Separation
This paper investigates MIMO (Multiple-Input Multiple-Output) adaptive filtering techniques for the application of supervised source separation in the context of convolutive mixtures. From the observation that there is correlation among the signals of the different mixtures, an improvement in the NSAF (Normalized Subband Adaptive Filter) algorithm is proposed in order to accelerate its convergence rate. Simulation results with mixtures of speech signals in reverberant environments show the superior performance of the proposed algorithm with respect to the performances of the NLMS (Normalized Least-Mean-Square) and conventional NSAF, considering both the convergence speed and SIR (Signal-to-Interference Ratio) after convergence.
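For reference, the NLMS baseline that both NSAF and the proposed algorithm are compared against updates an FIR filter with a power-normalized step. A minimal single-channel sketch (the paper's MIMO and subband structure is omitted):

```python
import numpy as np

def nlms(x, d, L=8, mu=0.5, eps=1e-6):
    # identify an L-tap FIR system from input x and desired response d
    w = np.zeros(L)
    for n in range(L, len(x)):
        u = x[n - L:n][::-1]          # most recent L input samples
        e = d[n] - w @ u              # a priori error
        w += mu * e * u / (u @ u + eps)  # power-normalized update
    return w
```

NSAF applies this kind of normalized update independently in several subbands, which whitens colored inputs such as speech and thereby accelerates convergence; the improvement proposed in the paper additionally exploits the correlation among the mixture signals.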
A Review on Wireless Sensory Communication Technologies for Telehealth and Ambient Assisted Living Environments
Ambient Assisted Living (AAL) homes for elderly people have gained much attention recently. Researchers are developing different types of assistive devices using low-power sensors to help elderly people live longer in their preferred environments. Personal Area Network sensors can assist in the early detection of an unusual condition in an elderly person, e.g. rising temperature, heart attack, etc. However, the increasing number of these devices results in poor network performance, end-to-end packet delay, packet loss and packet collisions as the devices compete for limited resources such as bandwidth. This paper reviews communication protocols and real-time systems that are used in telehealth and AAL environments to assist elderly people with their daily activities. We identify the strengths and weaknesses of the existing solutions and compare them. Finally, we propose a better solution that could lead to an improved Quality of Service (QoS) in Ambient Assisted Living.
Towards an Approach for Personalization of Web Services Composition
A Web service (WS) is a 'software application' that is nowadays gaining increasing attention due to its ability to satisfy user needs efficiently. In some cases, the available WSs cannot meet the complex needs of users, and their adaptation to an environment in perpetual change remains a major problem for information system (IS) design. In this respect, service composition helps mitigate the problem. It represents a big challenge for systems and has received much attention in the literature. However, the satisfaction of these informational needs requires a dynamic and reusable environment. We therefore believe that incorporating a personalization aspect will be very useful for this composition process. In this work, our goals are to: i) propose a new approach that allows dynamic service composition and aims to meet the different needs of users; and ii) propose a personalization approach based on external resources such as ontologies and the user profile. This proposal allows service reuse in order to provide a relevant response to each user.
Automatic Aggregation and Embedding of Microservices for Optimized Deployments
Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs a unique process and gets instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to address these issues: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation, and is particularly useful when the aggregated services have similar scalability behavior. Embedding addresses communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, and a2 and b2 on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B.
Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies which forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the aforementioned security concern, and thus greatly simplifying aggregation/embedding implementations by just deploying a microservice container on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs and failure tolerance.
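The a1-b1 / a2-b2 embedding example amounts to pairing instances of communicating microservices onto shared machines. A toy sketch of that pairing (the helper and machine names are hypothetical illustrations, not part of i2kit or the paper's formal method):

```python
from itertools import cycle

def embed(instances_a, instances_b):
    # pair instances of two communicating microservices onto shared
    # machines, cycling over the smaller instance set when counts differ
    if len(instances_a) >= len(instances_b):
        pairs = zip(instances_a, cycle(instances_b))
    else:
        pairs = zip(cycle(instances_a), instances_b)
    return {f"m{i + 1}": pair for i, pair in enumerate(pairs)}
```

With equal instance counts this reproduces the example's configuration, each machine hosting one A instance and one B instance so they communicate over localhost.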
i2kit: A Tool for Immutable Infrastructure Deployments
Microservice architectures are increasingly used in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency and business logic time to market. On the other hand, these architectures also introduce some challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing or data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, affecting running applications, specific expertise is required to perform ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers.
Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set in other microservices using an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer entails more significant disadvantages. Resource allocation is greatly improved by using linuxkit, which introduces a very small footprint (around 35 MB). Also, the system is more secure since linuxkit installs only the minimum set of dependencies needed to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.
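The environment-variable mechanism for service discovery described above can be illustrated with a short sketch. This is a hedged illustration only: the `<SERVICE>_ENDPOINT` naming convention and the service names are hypothetical assumptions, not i2kit's actual interface.

```python
import os

def resolve_service(name, env=None):
    """Resolve a dependent microservice's endpoint from an environment
    variable injected at deployment time (hypothetical naming convention:
    the service name, upper-cased, with an _ENDPOINT suffix)."""
    env = os.environ if env is None else env
    key = name.upper().replace("-", "_") + "_ENDPOINT"
    endpoint = env.get(key)
    if endpoint is None:
        raise KeyError("no endpoint configured for service %r" % name)
    return endpoint

# The deployer points USERS_API_ENDPOINT at the cloud load balancer's DNS name,
# so dependent services never need a separate discovery subsystem.
endpoint = resolve_service("users-api",
                           env={"USERS_API_ENDPOINT": "http://users-lb.example.com"})
```

Because the endpoint is the cloud vendor's load balancer rather than an individual machine, replacing the virtual machines behind it (the immutable-infrastructure deployment step) does not change what dependent services see.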
Privacy as a Key Factor of Information and Communication Technologies Development in Healthcare
The transfer of a large part of human activity to cyberspace has made society heavily dependent on ICT. The Internet of Things (IoT) has emerged with the appearance of smart devices, generating a new dimension of the network. Network use has ceased to be an exclusively human domain: communication purely between devices has become possible, posing new challenges directly related to security threats and opening many new opportunities for unauthorized data access. While there is common awareness of the potential risks of using computers or networks, the use of intelligent things is wrongly seen as simply making life easier and, paradoxically, more secure. It is extremely important to notice behavior that seems unimportant but is likely to cause harm. In a world of 'smart' things, there are threats such as permanent surveillance, incessant and uncontrolled data leaks, or identity theft. The challenge is to formulate norms and legislative processes able to keep pace with technological advancement. The use of ICT, especially in science and industry, is already changing everyday life. Societal aging and the increase in healthcare expenditure make it imperative to expand the use of ICT in healthcare, where a revolution is expected through intelligent diagnostic support systems, continuous health state monitoring, and specialized technologies enabling remote medical procedures. But there are dangers associated with IoT use in healthcare, requiring clearly defined criteria. Patients must be able to expect privacy and medical data safety. As long as the gathered data is used only under the control of the people it concerns, IoT is not a threat. Any unauthorized access to such data is a violation of the right to privacy, and therefore of the security of the individual.
The main goal of the research is a comparative analysis of the legal conditions governing the use of ICT in healthcare in Poland in the context of EU legislation, pointing out the postulated directions of change and trying to answer whether, and to what extent, the legal system is ready for the challenges of modern technologies. The pace of ICT introduction in healthcare differs between Poland and other EU countries, but remains inadequate everywhere. Even the seemingly indispensable necessity of ensuring medical data safety is not treated as obvious: sensitive data gets into the wrong hands, and providers are not prepared to share information about patients in real time because their electronic health record systems are not compatible. Utilizing ICT in the health sector will change the approach towards patients and increase productivity. The need for privacy should be considered at the stage of technology design and implementation. The success of digitization in the health sector depends largely on how ready the legislation is for these revolutionary changes. Ensuring medical data security is paramount; otherwise, social resistance and the costs resulting from, e.g., leakage of medical data and its use in an unlawful or even threatening manner will be very high.
Enabling Cloud Adoption Based Secured Mobile Banking through Backend as a Service
With the rise of non-traditional competition, mobile banking operates in an ever-changing commercial landscape. Customer demands have become more intricate, as customers request more convenience and control over their banking services. To drive advancement and modernization in mobile banking applications, it is increasingly necessary to move beyond this struggle through business model transformation. The dramatic changes taking place in mobile banking demand new ways of providing security. By reforming and transforming older back-office systems into integrated mobile banking applications, banks can create a flexible and agile banking environment that can rapidly respond to new business requirements via cloud computing. Cloud computing is transforming ecosystems in numerous industries, and mobile banking is no exception: it offers service innovation, greater flexibility, improved security, and enhanced business intelligence at lower cost. Cloud technology offers secure deployment options that can support banks in developing new customer experiences, empowering effective relationships, and speeding up banking transactions. Cloud adoption is escalating quickly, since it can be secured for commercial mobile banking transactions through Backend as a Service by scrutinizing the security strategies of the cloud service provider, along with the history of transaction details and the provider's security-related practices.
An Immersive Virtual Reality Application for Learning French as a Foreign Language: Design, Development and Evaluation
The use of immersive Virtual Reality (VR) technology in the teaching of French as a foreign language is the centerpiece of this work. Research questions refer to (i) the feasibility and (ii) the efficiency of the VR-aided approach for teaching French to beginners. The “VRFLE” educational platform, designed and developed using the Unity game engine for use with the Oculus Rift VR headset and Leap Motion controller, is used as the basis for an educational intervention organized in two phases. The first phase serves as a necessary step for the smooth introduction of the students (volunteering adults) to e-learning and the use of technology, while the second phase proceeds to offer the actual VR-aided courses. Both phases are evaluated as to the learning outcomes achieved and as to the experience and attitudes they have generated in the students with respect to VR in general and to VR-aided learning specifically. Evaluation results indicate a strong positive potential of VR and immersion technologies as learning tools, under a suitable and carefully designed educational scenario.
Research on Fuzzy Test Framework Based on Concolic Execution
Vulnerability discovery is a significant current research field. In this paper, a fuzz testing framework based on concolic execution is proposed. Fuzz testing and symbolic execution are widely used in vulnerability discovery, but each has its own advantages and disadvantages. During the path generation stage, a generation-based path traversal algorithm is used to obtain more accurate paths. During the constraint solving stage, dynamic concolic execution is used to avoid path explosion. If there is an external call, concolic execution based on function summaries is used. Experiments show that the framework can effectively improve the ability to trigger vulnerabilities and increase code coverage.
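The core concolic idea, run a concrete input, record the path constraints it took, negate one, and solve for a fresh input, can be sketched in a few lines. This is a toy illustration under strong assumptions, not the paper's framework: the target has only magic-byte comparisons, so the "solver" that satisfies or falsifies each byte constraint is trivial.

```python
def run_target(data):
    """Toy target with two nested magic-byte checks.  Each branch compares
    one input byte to a constant; a constraint is (byte_index, const, taken)."""
    constraints = []
    taken0 = data[0] == 0x42
    constraints.append((0, 0x42, taken0))
    if taken0:
        taken1 = data[1] == 0x99
        constraints.append((1, 0x99, taken1))
        if taken1:
            return constraints, True      # the "vulnerability" is reached
    return constraints, False

def concolic_search(seed):
    """Execute concrete inputs, collect path constraints, and negate them one
    at a time with a trivial solver: byte == const is satisfied by writing
    const and falsified by writing anything else (here const ^ 1)."""
    worklist, seen = [list(seed)], set()
    while worklist:
        data = worklist.pop()
        constraints, bug = run_target(data)
        if bug:
            return bytes(data)
        for idx, const, taken in constraints:
            new = list(data)
            new[idx] = const if not taken else const ^ 1   # negate this branch
            key = tuple(new)
            if key not in seen:
                seen.add(key)
                worklist.append(new)
    return None

crash_input = concolic_search(b"\x00\x00")   # finds b"B\x99"
```

A plain random fuzzer would need on the order of 2^16 trials to guess both bytes; the constraint-negation loop reaches the deep branch in a handful of executions, which is the coverage advantage the abstract refers to.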
Underwater Acoustic Channel Estimation and Equalization via Adaptive Filtering and Sparse Approximation
This paper presents a method for the identification and equalization of an underwater acoustic (UWA) channel, which is modeled as a Multi-Scale Multi-Lag (MSML) channel. The proposed approach consists of identifying the parameters of the different paths which form the UWA model using a bank of adaptive subfilters, which are applied to scaled versions of the transmitted signal and updated by considering the channel sparseness property. We first verify the accuracy of the identification procedure and then advance to a channel equalization stage using the parameters obtained during the identification process. The equalization performance is evaluated for different signal-to-noise ratios.
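The identification step can be illustrated with the classic LMS adaptive filter on a toy FIR channel. This is a generic sketch under simplifying assumptions (a single scale, no Doppler resampling, no sparsity-aware update), not the paper's bank of subfilters; the channel taps are illustrative.

```python
import random

def lms_identify(x, d, taps, mu=0.05):
    """Identify an unknown FIR channel with the LMS adaptive filter:
    e[n] = d[n] - w . x_vec[n];  w <- w + mu * e[n] * x_vec[n]."""
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        x_vec = x[n - taps + 1 : n + 1][::-1]          # newest sample first
        e = d[n] - sum(wi * xi for wi, xi in zip(w, x_vec))
        w = [wi + mu * e * xi for wi, xi in zip(w, x_vec)]
    return w

random.seed(0)
h = [1.0, 0.0, 0.5]                                    # sparse 3-tap channel
x = [random.gauss(0, 1) for _ in range(5000)]          # white probe signal
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n >= k)
     for n in range(len(x))]                           # channel output
w = lms_identify(x, d, taps=3)                         # w converges to h
```

Sparsity-aware variants modify this update with per-tap step sizes that favor the few large taps, which is the kind of refinement the channel sparseness property enables in the MSML setting.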
A Hierarchical Method for Multi-Class Probabilistic Classification Vector Machines
The Support Vector Machine (SVM) has become widely recognised as one of the leading algorithms in machine learning for both regression and binary classification. It expresses predictions in terms of a linear combination of kernel functions, referred to as support vectors. Despite its popularity amongst practitioners, the SVM has some limitations, the most significant being that it generates point predictions as opposed to predictive distributions. Stemming from this issue, a probabilistic model, namely the Probabilistic Classification Vector Machine (PCVM), has been proposed, which respects the original functional form of the SVM whilst also providing a predictive distribution. As physical system designs become more complex, an increasing number of classification tasks in industrial applications involve more than two classes. Consequently, this research proposes a framework which extends the PCVM to a multi-class setting. Additionally, the original PCVM framework relies on type II maximum likelihood to provide estimates for both the kernel hyperparameters and the model evidence. In a high-dimensional multi-class setting, however, this approach has been shown to be ineffective due to poor scaling as the number of classes increases. Accordingly, we propose the application of Markov Chain Monte Carlo (MCMC) based methods to provide a posterior distribution over both parameters and hyperparameters. The proposed framework will be validated against current multi-class classifiers on synthetic and real-life data.
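The MCMC approach can be sketched with a generic random-walk Metropolis sampler on a toy one-dimensional posterior. This is an illustration of the sampling principle only; the PCVM posterior is high-dimensional over weights and hyperparameters, and the authors' sampler may differ.

```python
import math
import random

def metropolis(log_post, x0, steps=20000, scale=0.5, seed=1):
    """Random-walk Metropolis: propose x' = x + N(0, scale) and accept with
    probability min(1, exp(log_post(x') - log_post(x)))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(steps):
        xp = x + rng.gauss(0, scale)
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Toy posterior: a standard normal log-density (up to an additive constant).
samples = metropolis(lambda x: -0.5 * x * x, x0=3.0)
posterior_mean = sum(samples) / len(samples)           # close to 0
```

Unlike type II maximum likelihood, which returns a single hyperparameter estimate, the chain of samples approximates the full posterior, so predictive distributions can be formed by averaging over it.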
Segmentation of Gray Scale Images of Dropwise Condensation on Textured Surfaces
In the present work, we developed an image processing algorithm to measure water droplet characteristics during dropwise condensation on pillared surfaces. The main problem in this process is the similarity in shape and size between the water droplets and the pillars. The developed method divides droplets into four main groups based on their size and applies a corresponding algorithm to segment each group. These algorithms generate binary images of droplets based on both their geometrical and intensity properties. Information on droplet evolution over time, including mean radius and number of drops per unit area, is then extracted from the binary images. The developed image processing algorithm is verified against manual detection and applied to two different sets of images corresponding to two kinds of pillared surfaces.
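The binarize-then-group pipeline can be sketched as follows. The intensity threshold and the pixel-area group boundaries below are illustrative placeholders, not the calibrated values used in the study, and real droplet/pillar discrimination would add the geometrical and intensity criteria the abstract mentions.

```python
from collections import deque

def segment(image, threshold):
    """Binarize a grayscale image (list of rows) and label 4-connected bright
    components via BFS flood fill; return each component's area in pixels."""
    h, w = len(image), len(image[0])
    binary = [[image[y][x] >= threshold for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    areas = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                area, queue = 0, deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                areas.append(area)
    return areas

def size_group(area, bounds=(5, 20, 80)):
    """Assign a droplet to one of four size groups by its pixel area."""
    for g, b in enumerate(bounds):
        if area < b:
            return g
    return 3

# Demo: a 2x2 droplet and a single-pixel droplet on a dark background.
img = [[0] * 6 for _ in range(6)]
for y, x in ((0, 0), (0, 1), (1, 0), (1, 1), (4, 4)):
    img[y][x] = 255
areas = segment(img, threshold=128)        # two components, areas 4 and 1
```

From the per-frame component list, the time-series quantities reported in the paper (mean radius, drops per unit area) follow directly by averaging equivalent radii and dividing the count by the imaged area.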
Autonomic Management for Mobile Robot Battery Degradation
The majority of today's mobile robots are heavily dependent on battery power. Mobile robots can operate untethered for a number of hours, but eventually they need to recharge their batteries in order to continue functioning. While computer processing and sensors have become cheaper and more powerful each year, battery development has progressed very little. Batteries are slow to recharge and inefficient, lagging behind the general progress of robotic development we see today. However, batteries are relatively cheap and, when fully charged, can supply the high power output necessary for operating heavy mobile robots. As there are no cheap alternatives to batteries, we need to find efficient ways to manage the power that batteries provide during their operational lifetime. This paper proposes the use of autonomic principles of self-adaptation to address the behavioral changes a battery experiences as it ages. In life, as we get older, we cannot perform tasks in the same way we did in our youth; tasks generally take longer and require more of our energy to complete. Batteries suffer from a similar form of degradation: as a battery ages, it loses the ability to retain the charge capacity it had when brand new. This paper investigates how the current state of a battery's charge and its cycle count can be used to adapt a mobile robot's task execution to its requirements.
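A minimal sketch of this adaptation, assuming a simple linear capacity-fade model; the fade rate, reserve fraction, and per-task energy figures are illustrative assumptions, not measured values from the paper.

```python
def effective_capacity(nominal_wh, cycle_count, fade_per_cycle=0.0004):
    """Linear fade model: each full charge cycle permanently removes a small
    fraction of the battery's nominal capacity (illustrative rate)."""
    return nominal_wh * max(0.0, 1.0 - fade_per_cycle * cycle_count)

def tasks_before_recharge(nominal_wh, cycle_count, charge_frac=1.0,
                          task_wh=5.0, reserve_frac=0.2):
    """Adapt the task plan to the battery's current state: budget only the
    energy above a safety reserve, using the degraded (not nominal) capacity."""
    cap = effective_capacity(nominal_wh, cycle_count)
    usable = max(0.0, cap * (charge_frac - reserve_frac))
    return int(usable // task_wh)

# A fresh 100 Wh battery plans 16 tasks; after 500 cycles it plans only 12,
# because the robot self-adapts to its reduced effective capacity.
fresh_plan = tasks_before_recharge(100.0, cycle_count=0)
aged_plan = tasks_before_recharge(100.0, cycle_count=500)
```

The autonomic element is that the same planner, fed the monitored charge state and cycle count, shortens its missions automatically as the battery ages instead of relying on the nameplate capacity.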
Neural Network Based Decision Trees Using Machine Learning for Alzheimer's Diagnosis
Alzheimer's disease is one of the most prevalent ailments for which neither an effective cure nor therapy has yet been established. A probable explosion in the number of patients in the upcoming years has generated enormous interest in early detection of the disorder, which could conceivably lead to improved treatment outcomes. Complex changes in the brain are an observable symptom of the disease, and genetic markers provide a distinctive signature for its recognition. Machine learning, alongside deep learning and decision trees, strengthens the ability to learn characteristics from multi-dimensional data and thus enables automatic classification of Alzheimer's disease. Supervised testing and training were used to investigate the prospect of Alzheimer's disease classification based on machine learning approaches. The results show that decision trees trained with a deep neural network produced excellent results compared to related pattern classification approaches.