Excellence in Research and Innovation for Humanity

International Science Index

Commenced in January 1999 | Frequency: Monthly | Edition: International | Abstract Count: 42523

Computer and Information Engineering

2241
74491
Linguistic Summarization of Structured Patent Data
Abstract:
Patent data play an increasingly important role in economic growth, innovation, technical advantage, and business strategy, and even in competition among countries. Analyzing patent data is crucial, since patents cover a large part of the world's technological information. In this paper, we use the linguistic summarization technique to examine the validity of hypotheses about patent data stated in the literature.
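Linguistic summarization is commonly expressed through protoforms such as "Q objects having S are P", scored by a fuzzy quantifier. The Python sketch below illustrates that scoring under stated assumptions; the membership functions, thresholds, and toy patent records are hypothetical and not taken from the paper.

```python
# Hedged sketch: a Yager-style linguistic summary "most patents with many
# claims also have many citations", scored with a Zadeh-style relative
# quantifier. All data and membership parameters are illustrative.

def mu_many(x, lo=10, hi=30):
    """Ramp membership function for the fuzzy set 'many'."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def mu_most(r):
    """Relative quantifier 'most' applied to a proportion r in [0, 1]."""
    if r <= 0.3:
        return 0.0
    if r >= 0.8:
        return 1.0
    return (r - 0.3) / 0.5

def truth_of_summary(patents):
    """Degree of truth of 'most patents with many claims have many citations'."""
    num = sum(min(mu_many(p["claims"]), mu_many(p["citations"])) for p in patents)
    den = sum(mu_many(p["claims"]) for p in patents)
    return mu_most(num / den) if den > 0 else 0.0

patents = [{"claims": 25, "citations": 40}, {"claims": 12, "citations": 5},
           {"claims": 30, "citations": 22}]
print(truth_of_summary(patents))
```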
2240
74483
On Improving Breast Cancer Prediction Using General Regression Neural Networks-Conformal Prediction
Abstract:
The aim of this study is to help predict breast cancer and to construct a supportive model that stimulates more reliable prediction, a factor fundamental to public health. In this study, we utilize general regression neural networks (GRNN) and replace point predictions with prediction intervals to achieve a reasonable level of confidence. The mechanism employed here utilises a machine learning framework called conformal prediction (CP), combined with GRNN, in order to assign consistent confidence measures to predictions. We apply the resulting algorithm to the problem of breast cancer diagnosis. The results show that the predictions constructed by this method are reasonable and could be useful in practice.
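As an illustration of how conformal prediction attaches confidence to GRNN outputs, the sketch below wraps a split (inductive) conformal regressor around a Nadaraya-Watson-style GRNN. The synthetic data, bandwidth, and confidence level are assumptions for illustration; the paper's exact CP variant may differ.

```python
# Hedged sketch of GRNN + split (inductive) conformal prediction.
import numpy as np

def grnn_predict(X_train, y_train, X, sigma=0.5):
    """Nadaraya-Watson form of a general regression neural network."""
    d2 = ((X[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (300, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 300)
X_tr, y_tr = X[:200], y[:200]          # proper training set
X_cal, y_cal = X[200:], y[200:]        # calibration set

# Nonconformity scores on the calibration set.
scores = np.abs(y_cal - grnn_predict(X_tr, y_tr, X_cal))
alpha = 0.1                            # 90% confidence
k = int(np.ceil((len(scores) + 1) * (1 - alpha))) - 1
q = np.sort(scores)[min(k, len(scores) - 1)]

x_new = np.array([[0.5]])
y_hat = grnn_predict(X_tr, y_tr, x_new)[0]
print(f"90% prediction interval: [{y_hat - q:.3f}, {y_hat + q:.3f}]")
```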
2239
74473
Building a Dynamic News Category Network for News Sources Recommendations
Abstract:
News sources generally publish news in different broad categories. These categories can be generic (such as Business or Sports), time-specific (such as World Cup 2015 or Nepal Earthquake), or both, and it is up to the news agencies to define them. Extracting news categories automatically from numerous online news sources is expected to be helpful in many applications, including news source recommendation and time-specific news category extraction. Existing systems that address this issue, such as the DMOZ and Yahoo directories, are largely human-annotated and do not consider the time dynamism of the categories of news websites. As a remedy, we propose an approach to automatically extract news category URLs from news websites, where a news category URL is a link that points to a category on a news website. We use the news category URLs as prior knowledge to develop a news source recommendation system that lists news sources in various categories in order of ranking. In addition, we propose an approach to rank numerous news sources in different categories using parameters such as traffic-based website importance, social media analysis, and category-wise article freshness. Experimental results on category URLs captured from the GDELT project from April 2016 to December 2016 show the adequacy of the proposed method.
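As a rough illustration of category URL extraction, the sketch below harvests shallow links whose first path segment matches a small category vocabulary. The target URL, the vocabulary, and the path heuristic are hypothetical; the paper's actual extraction rules are not specified in the abstract.

```python
# Hedged sketch: harvesting candidate news category URLs from a homepage.
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

CATEGORY_WORDS = {"business", "sports", "politics", "technology", "world"}

def extract_category_urls(homepage):
    html = requests.get(homepage, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    candidates = set()
    for a in soup.find_all("a", href=True):
        url = urljoin(homepage, a["href"])
        path = urlparse(url).path.strip("/").lower()
        # Heuristic: a category URL is a shallow path whose first segment
        # matches a known category word, e.g. /sports/ or /business/.
        first = path.split("/")[0] if path else ""
        if first in CATEGORY_WORDS and path.count("/") == 0:
            candidates.add(url)
    return sorted(candidates)

print(extract_category_urls("https://www.example-news-site.com"))
```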
2238
74461
An Empirical Study of the Impacts of Big Data on Firm Performance
Abstract:
In the present time, data are to the data-driven, knowledge-based economy what oil was to the industrial age hundreds of years ago. Data are everywhere, in vast volumes. Big data analytics is expected to help firms not only improve performance efficiently but also completely transform how they run their business. However, employing the emergent technology successfully is not easy, and assessing the role of big data in improving firm performance is even harder. There has been a lack of studies examining the impact of big data analytics on organizational performance; this study aims to fill that gap. The present study suggests using firms' intellectual capital as a proxy for big data in evaluating its impact on organizational performance. It employs the Value Added Intellectual Coefficient method to measure firm intellectual capital via its three main components: human capital efficiency, structural capital efficiency, and capital employed efficiency, and then uses the structural equation modeling technique to model the data and test the models. The financial fundamentals and market data of 100 randomly selected publicly listed firms were collected. The results of the tests show that only human capital efficiency has a significant positive impact on firm profitability, which highlights the prominent human role in the impact of big data technology.
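For reference, the Value Added Intellectual Coefficient decomposes as VAIC = HCE + SCE + CEE, following Pulic's standard definitions. The sketch below computes these components; the sample figures are illustrative, not data from the study.

```python
# Hedged sketch of the Value Added Intellectual Coefficient (VAIC).

def vaic(value_added, human_capital, capital_employed):
    """VAIC = HCE + SCE + CEE.

    value_added:      VA, e.g. operating profit + employee costs + D&A
    human_capital:    HC, total salaries and wages
    capital_employed: CE, book value of net assets
    """
    hce = value_added / human_capital                   # human capital efficiency
    sce = (value_added - human_capital) / value_added   # structural capital eff.
    cee = value_added / capital_employed                # capital employed eff.
    return hce, sce, cee, hce + sce + cee

hce, sce, cee, total = vaic(value_added=5_000_000,
                            human_capital=2_000_000,
                            capital_employed=10_000_000)
print(f"HCE={hce:.2f} SCE={sce:.2f} CEE={cee:.2f} VAIC={total:.2f}")
```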
2237
74113
A Survey of Feature-Based Steganalysis for JPEG Images
Abstract:
Due to the increase in the usage of public domain channels, such as the internet, and of communication technology, there is concern about the protection of intellectual property and about security threats. This interest has led to growth in researching and implementing techniques for information hiding. Steganography is the art and science of hiding information in a private manner such that its existence cannot be recognized. Communication using steganographic techniques makes not only the secret message but also the very presence of hidden communication invisible. Steganalysis is the art of detecting the presence of this hidden communication. Parallel to steganography, steganalysis is also gaining prominence, since the detection of hidden messages can prevent catastrophic security incidents. Steganalysis can also be incredibly helpful in identifying and revealing weaknesses in current steganographic techniques that make them vulnerable to attacks. Through the formulation of new, effective steganalysis methods, further research to improve the resistance of tested steganography techniques can be developed. The feature-based steganalysis method for JPEG images calculates the features of an image using the L1 norm of the difference between the features of a stego image and those of the calibrated version of the image. This calibration can help retrieve some of the parameters of the cover image, revealing the variations between the cover and stego images and enabling more accurate detection. Applying this method to various steganographic schemes, experimental results were compared and evaluated to derive conclusions and principles for more secure JPEG steganography.
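The sketch below illustrates the calibration idea: decompress the JPEG, crop it by a few pixels, recompress it, and take the L1 norm between a DCT-histogram feature of the image and of its calibrated version. The JPEG quality, the single AC mode, and the file name are simplifying assumptions, not the paper's full feature set.

```python
# Hedged sketch of calibration-based feature extraction for JPEG steganalysis.
import io
import numpy as np
from PIL import Image
from scipy.fft import dctn

def block_dct_histogram(gray, bins=np.arange(-8, 9)):
    """Histogram of one quantized low-frequency 8x8 DCT mode."""
    h, w = (d - d % 8 for d in gray.shape)
    blocks = gray[:h, :w].reshape(h // 8, 8, w // 8, 8).transpose(0, 2, 1, 3)
    coeffs = dctn(blocks, axes=(-2, -1), norm="ortho")
    ac = np.round(coeffs[..., 0, 1])          # one AC mode as an example
    hist, _ = np.histogram(ac, bins=bins)
    return hist / max(hist.sum(), 1)

def calibrated_feature(jpeg_path, quality=75):
    img = Image.open(jpeg_path).convert("L")
    # Calibration: crop 4 pixels on each side and recompress.
    cropped = img.crop((4, 4, img.width - 4, img.height - 4))
    buf = io.BytesIO()
    cropped.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    calib = Image.open(buf).convert("L")
    f1 = block_dct_histogram(np.asarray(img, dtype=float))
    f2 = block_dct_histogram(np.asarray(calib, dtype=float))
    return np.abs(f1 - f2).sum()              # L1 norm between feature vectors

print(calibrated_feature("suspect.jpg"))      # larger values suggest embedding
```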
2236
73991
Human Computer Interaction Using Computer Vision and Speech Processing
Abstract:
The Internet of Things (IoT) is seen as the next major step in the ongoing revolution of the Information Age. It is predicted that, in the near future, billions of embedded devices will be communicating with each other to perform a plethora of tasks with or without human intervention. One of the major ongoing hotbeds of research activity in IoT is Human Computer Interaction (HCI). HCI is used to facilitate communication between an intelligent system and a user. An intelligent system typically consists of various sensors, actuators, and embedded controllers which communicate with each other to monitor data collected from the environment. Communication from the user to the system is typically done using voice. One of the major ongoing applications of HCI is in home automation as a personal assistant. The prime objective of our project is to implement a use case of HCI for home automation. Our system is designed to detect and recognize users and personalize the appliances in the house according to their individual preferences. Our HCI system can also speak with the user when certain commands are spoken, such as searching the web for information and controlling appliances. Our system can also monitor the environment in the house, such as air quality and gas leakages, for added safety.
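A minimal sketch of such a voice-command loop, using the SpeechRecognition package, appears below. The command phrases and the control_appliance stub are hypothetical stand-ins for the project's actual appliance control.

```python
# Hedged sketch of a voice-command loop for home automation.
import speech_recognition as sr

COMMANDS = {
    "lights on": ("lights", True),
    "lights off": ("lights", False),
    "fan on": ("fan", True),
    "fan off": ("fan", False),
}

def control_appliance(name, state):
    # Placeholder: real code would drive a relay / smart plug here.
    print(f"{name} -> {'ON' if state else 'OFF'}")

recognizer = sr.Recognizer()
with sr.Microphone() as mic:
    recognizer.adjust_for_ambient_noise(mic)
    print("Listening for a command...")
    audio = recognizer.listen(mic)
try:
    text = recognizer.recognize_google(audio).lower()
    for phrase, (name, state) in COMMANDS.items():
        if phrase in text:
            control_appliance(name, state)
            break
    else:
        print(f"Unrecognized command: {text}")
except sr.UnknownValueError:
    print("Could not understand audio")
```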
2235
73918
A Neural Network Classifier for Identifying Duplicate Image Entries in Real-Estate Databases
Abstract:
A deep convolutional neural network with triplet loss is used to identify duplicate images in real-estate advertisements in the presence of image artifacts such as watermarking, cropping, hue/brightness adjustment, and others. The effects of batch normalization, spatial dropout, and various convergence methodologies on the resulting detection accuracy are discussed. For a comparative return-on-investment study (per industry request), end-to-end performance is benchmarked on both Nvidia Titan GPUs and Intel Xeon CPUs. A new real-estate dataset from the San Francisco Bay Area is used for this work. Sufficient duplicate detection accuracy is achieved to supplement other database-grounded methods of duplicate removal. The implemented method is used in a proof-of-concept project in the real-estate industry.
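The triplet loss named above has the standard form max(0, d(a, p) - d(a, n) + margin): the embedding of an anchor is pulled toward a duplicate (positive) and pushed away from a distinct listing (negative). A minimal NumPy sketch with toy embeddings as assumptions:

```python
# Hedged sketch of the triplet loss used to train a duplicate detector.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """max(0, d(a, p) - d(a, n) + margin) with squared Euclidean distance."""
    d_ap = np.sum((anchor - positive) ** 2, axis=-1)
    d_an = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_ap - d_an + margin)

a = np.array([[0.1, 0.9]])    # anchor image embedding
p = np.array([[0.12, 0.88]])  # same photo, watermarked/cropped
n = np.array([[0.9, 0.1]])    # different property
print(triplet_loss(a, p, n))  # small: the triplet is already well separated
```

At inference time, two listings would be flagged as duplicates when their embedding distance falls below a tuned threshold.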
2234
73887
A Semi-Markov Decision Process-Based Service Model for Resource Allocation in Vehicular Cloud Computing Systems
Abstract:
Vehicular Cloud Computing (VCC) is a promising approach that efficiently utilizes vehicular resources and provides services to vehicle users. However, despite the remarkable advantages of VCC in traffic management and road safety, resources remain relatively limited due to the limited capability of on-board computers. Therefore, how to allocate resources effectively is an urgent problem to be solved. In this paper, we propose a service decision-making system for the vehicular cloud computing environment to maximize the overall system reward and improve the Quality of Experience (QoE) of vehicle users. In such a scenario, when a service request from a vehicle arrives at the VCC system, the proposed scheme helps provide an optimal decision on where the request should be processed and how many computing resources should be allocated. To this end, we formulate the service request decision-making process as a semi-Markov decision process (SMDP) and solve the optimization problem using an iteration algorithm. Numerical results indicate that significant performance gain and system reward can be obtained by the SMDP-based service model compared with the greedy approach.
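For intuition, the sketch below runs value iteration on a toy two-state, two-action decision process. A real SMDP would additionally weight rewards and discounting by the sojourn time in each state; that detail, and the toy transition matrices, are deliberate simplifications.

```python
# Hedged sketch of value iteration on a toy decision process.
import numpy as np

n_states, n_actions, gamma = 2, 2, 0.9
# P[a][s, s']: transition probabilities; R[s, a]: expected reward
# (action 0 = serve the request locally, action 1 = offload, say).
P = [np.array([[0.8, 0.2], [0.3, 0.7]]),
     np.array([[0.5, 0.5], [0.6, 0.4]])]
R = np.array([[5.0, 2.0], [1.0, 4.0]])

V = np.zeros(n_states)
for _ in range(500):
    Q = np.stack([R[:, a] + gamma * P[a] @ V for a in range(n_actions)], axis=1)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
policy = Q.argmax(axis=1)
print("V* =", V, "policy =", policy)
```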
2233
73858
Digital Forensics Compute Cluster (DFORC2): A New High-Speed Distributed Computing Capability for Digital Forensics
Abstract:
We have developed a new distributed computing capability, the Digital Forensics Compute Cluster (DFORC2), to speed up the ingestion and processing of digital evidence resident on computer hard drives. DFORC2 parallelizes the evidence ingestion and file processing steps. It can be run on a standalone computer cluster or in the Amazon Web Services (AWS) cloud. When running in a virtualized computing environment, its cluster resources can be dynamically scaled up or down using Kubernetes. DFORC2 is an open source project that uses Autopsy, Apache Spark and Kafka, and other open source software packages. It extends the proven open source digital forensics capabilities of Autopsy to compute clusters and cloud architectures so that digital forensics tasks can be accomplished efficiently by a scalable array of cluster compute nodes. In this paper, we describe DFORC2 and compare it with a standalone version of Autopsy when both are used to process evidence from hard drives of different sizes.
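The abstract names Kafka as one of the building blocks; one plausible, purely illustrative shape for the ingestion stage is publishing per-file tasks to a Kafka topic for a scalable worker pool, as sketched below. The topic name, broker address, and directory layout are hypothetical and not taken from the project.

```python
# Hedged sketch of Kafka-based work distribution for evidence ingestion.
import json
from pathlib import Path
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

evidence_dir = Path("/evidence/case-001")
for f in evidence_dir.rglob("*"):
    if f.is_file():
        # Each message is one file-processing task for a worker node.
        producer.send("dfor-tasks", {"path": str(f), "size": f.stat().st_size})
producer.flush()
```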
2232
73795
Contrast Enhancement in Digital Images Using an Adaptive Unsharp Masking Method
Abstract:
Captured images may suffer from Gaussian blur due to poor lens focus or camera motion. Unsharp masking is a simple and effective technique for boosting image contrast and improving digital images suffering from Gaussian blur. The technique is based on sharpening object edges by adding a scaled high-frequency component of the image to the original. The quality of the enhanced image is highly dependent on the characteristics of both the high-frequency component and the scaling/gain factor. Since the quality of an image may not be the same throughout, we propose an adaptive unsharp masking method in this paper. In this method, the gain factor is computed for individual pixels of the image, taking the gradient variations into account. Subjective and objective image quality assessments are used to compare the performance of the proposed method with both the classic and the recently developed unsharp masking methods. The experimental results show that the proposed method performs better than the other existing methods.
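A minimal sketch of the idea: the high-frequency component is the difference between the image and a Gaussian-blurred copy, and a per-pixel gain is derived from the local gradient magnitude. The particular gain mapping below is an illustrative assumption; the paper derives its own.

```python
# Hedged sketch of adaptive unsharp masking.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def adaptive_unsharp(img, sigma=2.0, g_min=0.2, g_max=1.5):
    img = img.astype(float)
    highpass = img - gaussian_filter(img, sigma)       # high-frequency component
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    norm = grad / (grad.max() + 1e-12)
    # One possible rule: weaker boost on strong edges (already sharp),
    # stronger boost where gradients are mild.
    gain = g_min + (g_max - g_min) * (1.0 - norm)
    return np.clip(img + gain * highpass, 0, 255)

# Usage: enhanced = adaptive_unsharp(gray_image_array)
```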
2231
73777
Behavioral Analysis of Traffic Flow for Effective Network Traffic Identification
Abstract:
Fast and accurate network traffic identification is becoming essential for network management, high-quality service control, and early detection of network traffic abnormalities. Techniques based on statistical features of packet flows have recently become popular for network classification due to the limitations of traditional port- and payload-based methods. In this paper, we propose a method to identify network traffic. In this method, we apply an effective preprocessing approach to clean and prepare the data, and then extract effective features using behavioral analysis of the application. Using these preprocessing and feature extraction techniques, the method can effectively and accurately identify network traffic. For this purpose, two network traffic databases, namely UNIBS and a database collected on a router, are analyzed. In order to evaluate the results, the accuracy of network traffic identification using the proposed method is analyzed with machine learning techniques. Experimental results show that the proposed method improves the accuracy of network traffic identification.
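The sketch below shows the general shape of flow-level statistical classification: per-flow features such as packet-size and inter-arrival statistics feed a supervised classifier. The feature set, the toy flows, and the random-forest choice are illustrative assumptions rather than the paper's exact pipeline.

```python
# Hedged sketch of flow-level statistical traffic classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def flow_features(packet_sizes, timestamps):
    sizes = np.asarray(packet_sizes, dtype=float)
    iats = np.diff(np.asarray(timestamps, dtype=float))  # inter-arrival times
    return [sizes.mean(), sizes.std(), sizes.max(),
            iats.mean() if len(iats) else 0.0,
            iats.std() if len(iats) else 0.0,
            len(sizes)]

# Toy training data: two flows labeled 'web', one labeled 'p2p'.
X = [flow_features([60, 1500, 1500, 52], [0.0, 0.01, 0.02, 0.05]),
     flow_features([60, 1400, 1500, 40], [0.0, 0.02, 0.03, 0.06]),
     flow_features([900, 950, 880, 910], [0.0, 1.0, 2.1, 3.2])]
y = ["web", "web", "p2p"]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([flow_features([70, 1500, 1450], [0.0, 0.015, 0.03])]))
```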
2230
73764
Inferring User Preference Using Distance Dependent Chinese Restaurant Process and Weighted Distribution for a Content Based Recommender System
Abstract:
Nowadays, websites provide a vast number of resources for users. Recommender systems have been developed as an essential element of these websites to provide a personalized environment for users. They help users retrieve resources of interest from large sets of available resources. Due to the dynamic nature of user preferences, constructing an appropriate model to estimate user preference is the major task of recommender systems. Profile matching and latent factors are two main approaches to identify user preference. In this paper, we employ latent factors and profile matching to cluster the user profile and identify user preference, respectively. The proposed method uses the Distance Dependent Chinese Restaurant Process as a Bayesian nonparametric framework to extract the latent factors from the user profile. These latent factors are mapped to user interests, and a weighted distribution is used to identify user preferences. We evaluate the proposed method using a real-world dataset that contains news tweets of a news agency (BBC). The experimental results and comparisons show the superior recommendation accuracy of the proposed approach compared with existing methods, and its ability to effectively evolve over time.
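For reference, in the distance-dependent CRP each customer i links to customer j with probability proportional to a decay function f(d_ij), or to itself with probability proportional to a concentration parameter alpha; clusters are the connected components of the resulting link graph. The sketch below samples one such assignment; the decay function, alpha, and distances are illustrative.

```python
# Hedged sketch of sampling cluster assignments from a ddCRP prior.
import numpy as np

def ddcrp_sample(D, alpha=1.0, decay=lambda d: np.exp(-d)):
    n = D.shape[0]
    links = np.empty(n, dtype=int)
    for i in range(n):
        w = decay(D[i])
        w[i] = alpha                      # self-link weight
        links[i] = np.random.choice(n, p=w / w.sum())
    # Clusters = connected components of the (undirected) link graph.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in enumerate(links):
        parent[find(i)] = find(j)
    return np.array([find(i) for i in range(n)])

D = np.abs(np.subtract.outer(np.arange(6.0), np.arange(6.0)))  # toy distances
print(ddcrp_sample(D, alpha=0.5))
```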
2229
73756
Improved Hash Value Based Stream Cipher Using Delayed Feedback with Carry Shift Register
Abstract:
In the modern era, as application data are massive and complex, they need to be secured against adversary attacks. In this context, a non-recursive key-based integrated Spritz stream cipher with a circulant hash function using a delayed feedback with carry shift register (d-FCSR) is proposed in this paper. The novelty of this proposed stream cipher algorithm is to generate an improved keystream using the d-FCSR. The proposed algorithm is coded in Verilog HDL to produce a dynamic binary keystream and implemented on the commercially available FPGA device Virtex-5 xc5vlx110t-2ff1136. The implementation of the stream cipher using the d-FCSR on the FPGA device operates at a maximum frequency of 60.62 MHz. It achieves a data throughput of 492 Mbps and improves on existing techniques in terms of efficiency (throughput/area). This paper also briefly presents the cryptanalysis of the proposed circulant hash value based Spritz stream cipher using d-FCSR against adversary attacks on a hardware platform, for hardware-based cryptography applications.
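For background, the sketch below implements a plain feedback-with-carry shift register, the building block behind the d-FCSR: the integer sum of the tapped bits plus the carry produces a new bit (sum mod 2) and an updated carry (sum div 2). The delay element and the Spritz/circulant-hash integration from the paper are omitted, and the taps and seed are toy values.

```python
# Hedged sketch of a basic feedback-with-carry shift register (FCSR).

def fcsr_keystream(taps, state, carry=0, nbits=16):
    """Shift out the oldest bit each step; the new bit is sum mod 2 and the
    carry is sum div 2, where sum = tapped bits + carry."""
    state = list(state)              # state[0] is the output end
    out = []
    for _ in range(nbits):
        s = sum(b for b, t in zip(state, taps) if t) + carry
        out.append(state.pop(0))     # shift out the oldest bit
        state.append(s % 2)          # feed back the new bit
        carry = s // 2
    return out

# Toy parameters: 4-bit register, taps on positions 0 and 3.
print(fcsr_keystream(taps=[1, 0, 0, 1], state=[1, 0, 1, 1]))
```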
2228
73732
A Study of Temporal Stability on Finger-Vein Recognition Accuracy Using a Steady-State Model
Abstract:
Finger-vein recognition, like all other biometric recognition methods, is built on two fundamental premises: uniqueness (vein patterns from distinct fingers are strictly different) and persistence (finger-vein patterns do not change over time). Over the last few years, a few achievements have been made in proving these two theoretical premises for fingerprints, palm prints, iris, face, etc. However, no related academic results have been published for finger-vein recognition so far. In this paper, we study the stability (i.e., consistency, or persistence) of finger-veins within a designed timespan (four years). To achieve this goal, we first collected and screened a proper database for the stability study and worked on eliminating external influences on finger-vein features (acquisition hardware, user behavior, and circumstances). We then propose a steady-state model of finger-vein features, indicating that each specific finger owns a stable steady state to which all its finger-vein images properly converge, regardless of time. Experiments have been conducted on our 5-year/200,000-finger dataset, and results from both genuine and imposter matches demonstrate that the model is well supported. This steady-state model is generic, hence providing a common method to evaluate the stability of other types of biometric features.
2227
73708
Information Security Dilemma: Employees' Behaviour on Three-Dimensions to Failure
Abstract:
This paper explains the human-nature concept in order to understand the significance of information security in employees' mentality, including that of leaders, in an organisation. Studying the theory of Von Solms' fourth wave, information security governance basically refers to a set of methods, techniques, and tools responsible for protecting the resources of a computer system so as to ensure service availability, confidentiality, and integrity of information. However, today's information security dilemma relates to acceptance in employees' mentality, and the major causes are a lack of communication and commitment. This type of management in an organisation is labelled immoral/amoral management, which affects information security compliance. Recovery action is taken based on 'learning a lesson from incident events' rather than on prevention. Therefore, the paper critically analyses Von Solms' fourth-wave theory against current human events and their correlation, by studying secondary data and also through qualitative analysis among employees in public sectors. The 'three dimensions to failure' of the information security dilemma are deny, don't know, and don't care; these three dimensions are the most common vulnerable behaviours exhibited by employees. Therefore, avoiding the three dimensions to failure may improve the vulnerable behaviour of employees, which is often related to immoral/amoral management.
2226
73626
A Multi-Viewpoints Ontology-Based Approach for Semantic Annotation
Abstract:
We are interested in the problem of semantic annotation of resources belonging to a heterogeneous organization, taking into consideration the different viewpoints and different terminologies of the communities in the organization. Our goal is to propose a semantic annotation approach based on a multi-viewpoint ontology, where users can annotate a web page according to their knowledge levels and their personal points of view. In the proposed approach, we generate a page structure for each viewpoint, then we associate semantics with each structure using a multi-viewpoint ontology. In addition, in order to manipulate the elements of the page, we use the tree representation of the page generated with DOMXML.
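One possible concrete encoding of such viewpoint-dependent annotations is as RDF triples, as sketched below with rdflib; the namespace, ontology terms, and viewpoint names are hypothetical illustrations, not the paper's ontology.

```python
# Hedged sketch: the same page element annotated differently per viewpoint.
from rdflib import Graph, Namespace, URIRef, Literal

EX = Namespace("http://example.org/mvonto#")
g = Graph()
g.bind("ex", EX)

element = URIRef("http://example.org/page1#section2")
# The 'developer' community reads the section as an API description...
g.add((element, EX.annotatedFrom, EX.DeveloperViewpoint))
g.add((element, EX.denotesConcept, EX.APIDescription))
# ...while the 'manager' community reads the same section as a deliverable.
g.add((element, EX.annotatedFrom, EX.ManagerViewpoint))
g.add((element, EX.denotesConcept, EX.ProjectDeliverable))
g.add((element, EX.annotationLabel, Literal("REST interface of module X")))

print(g.serialize(format="turtle"))
```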
2225
73551
User Modeling from the Perspective of Improvement in Search Results: A Survey of the State of the Art
Abstract:
Currently, users expect high-quality and personalized information from search results. To satisfy users' needs, personalized approaches to web search have been proposed. These approaches can provide the most appropriate answer for a user's needs by using user context and incorporating information about the query, combining several search technologies. To carry out personalized web search, different techniques must be applied across the whole user search process. There are a number of possible deployments of personalized approaches, such as personalized web search, personalized recommendation, personalized summarization, and filtering systems, but the common feature of all approaches in these various domains is that user modeling is utilized to provide personalized information from the Web. So the most important task in personalized approaches is user model mining. User modeling applications and technologies can be used in various domains depending on how the collected user information may be extracted; in addition, the techniques used to create the user model differ across these applications. Since previous studies did not provide a complete survey of this field, our purpose is to present a survey of the applications and techniques of user modeling from the viewpoint of improvement in search results, by considering the existing literature and research.
2224
73532
A Systematic Review on Challenges in Big Data Environment
Abstract:
Big Data has demonstrated vast potential in streamlining operations, supporting decisions, and spotting business trends in different fields, for example, manufacturing, finance, and Information Technology. This paper gives a multi-disciplinary overview of the research issues in Big Data and of its procedures, instruments, and systems related to privacy, data storage management, network and energy utilization, fault tolerance, and information representation. In addition, the challenges and opportunities present in the Big Data platform are outlined.
2223
73497
Design and Implementation of Machine Learning Model for Short-Term Energy Forecasting in Smart Home Management System
Abstract:
The main aim of this paper is to handle the energy requirement in an efficient manner by merging advanced digital communication and control technologies for smart grid applications. In order to reduce the user's home load during peak load hours, the utility applies several incentives, such as real-time pricing, time of use, and demand response, to residential customers through the smart meter. However, this method is inconvenient in the sense that the user needs to respond manually to prices that vary in real time. To overcome this inconvenience, this paper proposes a convolutional neural network (CNN) with a k-means clustering machine learning model which can forecast the energy requirement in the short term, i.e., for the hour of the day or the day of the week. Integrating our proposed technique with home energy management based on Bluetooth Low Energy provides predicted values to the user for scheduling appliances in advance. This paper describes in detail the CNN configuration and the k-means clustering algorithm for short-term energy forecasting.
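A compact sketch of the two-stage idea appears below: k-means groups days with similar load profiles, then a small 1-D CNN (one per cluster; a single one is shown) learns to forecast the next hour from the preceding window. The synthetic data, cluster count, and architecture are illustrative assumptions.

```python
# Hedged sketch: k-means over daily load profiles + a tiny 1-D CNN forecaster.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
days = rng.random((100, 24)).astype(np.float32)   # toy daily load profiles

# Stage 1: cluster days with similar consumption patterns.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(days)
cluster0 = days[labels == 0]

# Stage 2: a tiny 1-D CNN forecasting hour 23 from hours 0..22.
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 23, 1),
)
x = torch.from_numpy(cluster0[:, None, :23])      # (N, channels=1, 23)
y = torch.from_numpy(cluster0[:, 23:24])          # next-hour target
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print("training loss:", float(loss))
```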
2222
73399
Multiple Query Optimization in Wireless Sensor Networks Using Data Correlation
Abstract:
Data sensing in wireless sensor networks is done through query declaration to the network by the users. In many applications of wireless sensor networks, many users send queries to the network simultaneously. If the queries are processed separately, the network's energy consumption increases significantly. Therefore, it is very important to aggregate the queries before sending them to the network. In this paper, we propose a multiple query optimization framework based on the sensors' physical and temporal correlation. In the proposed method, queries are merged and sent to the network by considering the correlation among the sensors, in order to reduce the communication cost between the sensors and the base station.
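As an illustration of query aggregation, the sketch below coalesces queries that request the same attribute over overlapping regions, taking the union of their regions and a common divisor of their sampling periods. The query structure and the merge rule are assumptions for illustration; the paper's correlation-based criteria are richer.

```python
# Hedged sketch of merging overlapping sensor-network queries (single pass).
from math import gcd

def regions_overlap(r1, r2):
    (x1, y1, x2, y2), (a1, b1, a2, b2) = r1, r2
    return x1 <= a2 and a1 <= x2 and y1 <= b2 and b1 <= y2

def merge(q1, q2):
    """Union of the two regions, common divisor of the two periods."""
    r1, r2 = q1["region"], q2["region"]
    region = (min(r1[0], r2[0]), min(r1[1], r2[1]),
              max(r1[2], r2[2]), max(r1[3], r2[3]))
    return {"attr": q1["attr"], "region": region,
            "period": gcd(q1["period"], q2["period"])}

def merge_all(queries):
    merged = []
    for q in queries:
        for i, m in enumerate(merged):
            if m["attr"] == q["attr"] and regions_overlap(m["region"], q["region"]):
                merged[i] = merge(m, q)
                break
        else:
            merged.append(dict(q))
    return merged

qs = [{"attr": "temp", "region": (0, 0, 5, 5), "period": 10},
      {"attr": "temp", "region": (3, 3, 8, 8), "period": 15},
      {"attr": "humidity", "region": (0, 0, 2, 2), "period": 20}]
print(merge_all(qs))  # the two temp queries collapse into one
```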
2221
73340
Local Binary Patterns-Based Statistical Data Analysis for Accurate Soccer Match Prediction
Abstract:
Winning a soccer game is based on thorough and deep analysis of the ongoing match. On the other hand, giant gambling companies are in vital need of such analysis to reduce their losses against their customers. In this research work, we perform deep, real-time analysis of every soccer match around the world; our work is distinguished from others by its focus on particular seasons, teams, and partial analytics. Our contributions are presented in the platform called "Analyst Masters." First, we introduce the various sources of information available for soccer analysis for teams around the world, which helped us record live statistical data and information from more than 50,000 soccer matches a year. Our second and main contribution is our proposed in-play performance evaluation. The third contribution is developing new features from stable soccer matches. The statistics of soccer matches and their odds, before and in-play, are represented in image format versus time, including the halftime. Local Binary Patterns (LBP) are then employed to extract features from the image. Our analyses reveal interesting features and rules once a soccer match has reached enough stability. For example, our "8-minute rule" states that if 'Team A' scores a goal and can maintain the result for at least 8 minutes, then a stable match would end in their favor. We could also make accurate pre-match predictions of whether fewer or more than 2.5 goals would be scored. We benefit from Gradient Boosting Trees (GBT) to extract highly related features. Once the features are selected from this pool of data, decision trees decide whether the match is stable. A stable match is then passed to a post-processing stage to check its properties, such as bettors' and punters' behavior and its statistical data, before issuing the prediction. The proposed method was trained using 140,000 soccer matches and tested on more than 100,000 samples, achieving 98% accuracy in selecting stable matches. Our database of 240,000 matches shows that one can obtain over 20% betting profit per month using Analyst Masters. Such consistent profit outperforms human experts and shows the inefficiency of the betting market: top soccer tipsters achieve 50% accuracy and 8% monthly profit on average, and only on regional matches. Both our collected database of more than 240,000 soccer matches since 2012 and our algorithm would greatly benefit coaches and punters seeking accurate analysis.
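For reference, the basic 3x3 LBP operator encodes each pixel by thresholding its eight neighbors against it; histograms of the resulting codes then serve as features. A minimal NumPy sketch on a toy array:

```python
# Hedged sketch of the basic 3x3 Local Binary Pattern operator.
import numpy as np

def lbp_3x3(img):
    img = img.astype(float)
    out = np.zeros(np.array(img.shape) - 2, dtype=np.uint8)
    # Neighbor offsets in clockwise order, each contributing one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:img.shape[0] - 1 + dy,
                       1 + dx:img.shape[1] - 1 + dx]
        out |= ((neighbor >= center).astype(np.uint8) << bit)
    return out

toy = np.array([[5, 2, 9], [1, 4, 7], [3, 8, 6]])
print(lbp_3x3(toy))  # a single LBP code for the 3x3 toy patch
```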
2220
73233
Distributed Computing Framework in Security: Case Study of Encryption Method
Abstract:
Cloud computing has garnered increasing attention from researchers, who have presented much work on performing massive computing tasks efficiently. Many security vulnerabilities have occurred at the same time; therefore, dealing with security problems in cloud computing has become an urgent issue. The purpose of this paper is to create a cloud computing framework based on MapReduce, a Google cloud computing platform, and to solve some security problems in the process of distributed computation. Inspired by SMC (Secure Multi-Party Computation), a protocol naturally suited to distributed computation, we adopted homomorphic encryption, which can be used for securely processing large amounts of data in cloud computation. We also find that order-preserving encryption (OPE), an encryption scheme put forward in 2004, can be used in our secure framework. Cognizant of the applicability of SMC and OPE in cloud computing, we combine them with MapReduce to design a security framework for distributed computation. Our major contributions consist of designing an innovative secure cloud computation framework based on MapReduce, applying the order-preserving encryption (OPE) algorithm and homomorphic encryption, and constructing a real distributed computation platform.
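To illustrate the property OPE provides (and nothing more), the toy sketch below builds a strictly increasing random mapping so that a server can compare or sort ciphertexts without decrypting them. This is emphatically not a secure construction; real OPE, such as the 2004 scheme the abstract refers to, is far more involved.

```python
# Hedged toy illustration of the order-preserving property of OPE.
import random

def keygen(domain_size, expansion=10, seed=42):
    """A strictly increasing random map from plaintext to ciphertext space."""
    rng = random.Random(seed)
    gaps = [rng.randint(1, expansion) for _ in range(domain_size)]
    table, acc = [], 0
    for g in gaps:
        acc += g
        table.append(acc)
    return table                       # table[m] is the ciphertext of m

key = keygen(100)
enc = lambda m: key[m]

a, b = enc(17), enc(42)
assert (a < b) == (17 < 42)            # order survives encryption
print(sorted([enc(5), enc(99), enc(17)]))  # server can sort ciphertexts
```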
2219
73229
A Hybrid Data Mining Algorithm Based System for Intelligent Defence Mission Readiness and Maintenance Scheduling
Abstract:
Keeping defence forces in the highest state of combat readiness under budgetary constraints is a challenging task today. A huge amount of time and money is squandered in unnecessary and expensive traditional maintenance activities. To overcome this limitation, the Defence Intelligent Mission Readiness and Maintenance Scheduling System has been proposed, which ameliorates the maintenance system by diagnosing the condition and predicting the maintenance requirements. Based on new data mining algorithms, this system intelligently optimises mission readiness for imminent operations and maintenance scheduling in repair echelons. With modified data mining algorithms such as the Weighted Feature Ranking Genetic Algorithm and the SVM-Random Forest Linear ensemble, it improves reliability, availability, and safety, alongside reducing maintenance cost and Equipment Out of Action (EOA) time. The results clearly show that the introduced algorithms have an edge over conventional data mining algorithms. The system, utilizing the intelligent condition-based maintenance approach, improves the operational and maintenance decision strategy of the defence force.
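As a rough stand-in for the named SVM-Random Forest ensemble, the sketch below combines the two with scikit-learn's soft voting on synthetic data. The authors' 'linear ensemble' weighting is not specified in the abstract, so this combination rule is an assumption.

```python
# Hedged sketch of an SVM + random forest soft-voting ensemble.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=12, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True, random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    voting="soft",            # average predicted class probabilities
)
print("CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```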
2218
73221
Modeling and Analyzing the Wireless Application Protocol Class 2 Wireless Transaction Protocol Using Event-B
Abstract:
This paper presents an incremental formal development of the Wireless Transaction Protocol (WTP) in Event-B. WTP is part of the Wireless Application Protocol (WAP) architecture and provides a reliable request-response service. To model and verify the protocol, we use the formal technique Event-B, which provides an accessible and rigorous development method. The interaction between modelling and proving reduces complexity and helps to eliminate misunderstandings, inconsistencies, and specification gaps. As a result, the verification of WTP allows us to find some deficiencies in the current specification.
2217
73164
Impact of the Results of Sub-Group Analysis on the Performance of Recommender Systems
Abstract:
The purpose of this study is to investigate whether friendship in social media can be an important factor in recommender systems, through social scientific analysis of friendship in popular social media such as Facebook and Twitter. For this purpose, this study analyzes data on friendship in real social media using component analysis and clique analysis, two of the sub-group analysis methods in social network analysis. We propose an algorithm to reflect the results of the sub-group analysis in the recommender system. The key to this algorithm is to ensure that the preferences of users within a friendship subgroup are more strongly reflected in the recommendations made to members of that subgroup. As a result of this study, the outcomes of various subgroup analyses were derived, and it was confirmed that they differed from the results of the existing recommender system. Therefore, the results of the subgroup analysis are considered to affect the recommendation performance of the system. Future research will attempt to generalize these results through further analysis of various social data.
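One plausible, purely illustrative realization of the subgroup step is sketched below: maximal cliques are found in a friendship graph with networkx, and items liked inside a user's clique receive a score boost. The boost rule, toy graph, and likes table are hypothetical.

```python
# Hedged sketch: clique analysis feeding a recommendation score boost.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c"), ("a", "c"),   # clique {a, b, c}
                  ("c", "d")])
likes = {"a": {"item1"}, "b": {"item1", "item2"}, "c": set(), "d": {"item3"}}

def clique_boosted_scores(user, base_scores, boost=0.5):
    scores = dict(base_scores)
    for clique in nx.find_cliques(G):           # maximal cliques
        if user in clique:
            for friend in clique:
                if friend == user:
                    continue
                for item in likes[friend]:
                    scores[item] = scores.get(item, 0.0) + boost
    return scores

print(clique_boosted_scores("c", {"item1": 1.0, "item3": 1.0}))
```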
2216
73150
Development of a Sequential Multimodal Biometric System for Web-Based Physical Access Control into a Security Safe
Abstract:
A security safe is a place or building where classified documents and precious items are kept. Many technologies have been used to prevent unauthorised persons from gaining access to such safes. However, frequent reports of unauthorised persons gaining access to security safes with the aim of removing documents and items are pointers to the fact that there is still a security gap in the technologies currently used as access control for security safes. In this paper, we try to solve this problem by developing a multimodal biometric system for physical access control into a security safe using face and voice recognition. The safe is accessed through the combination of face and speech pattern recognition, in that sequential order. User authentication is achieved through a camera/sensor unit and a microphone unit, both attached to the door of the safe. The user's face is captured by the camera/sensor, while the speech is captured by the microphone unit. The Scale Invariant Feature Transform (SIFT) algorithm was used to train images to form templates for the face recognition system, while the Mel-Frequency Cepstral Coefficients (MFCC) algorithm was used to train the speech recognition system to recognise authorised users' speech. Both algorithms were hosted in two separate web-based servers, and for automatic analysis of our work, the developed system was simulated in a MATLAB environment. The results obtained show that the developed system was able to give access to authorised users while denying unauthorised persons access to the security safe.
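The speech side of such a pipeline can be sketched with MFCC features and a simple distance-to-template decision, as below. The file names, threshold, and mean-vector template are illustrative assumptions; the paper's MATLAB implementation will differ in detail.

```python
# Hedged sketch of an MFCC-based speaker-verification front end.
import librosa
import numpy as np

def mfcc_template(wav_path, n_mfcc=13):
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)            # one mean vector per utterance

enrolled = mfcc_template("enrolled_user.wav")   # hypothetical recordings
attempt = mfcc_template("door_attempt.wav")

distance = np.linalg.norm(enrolled - attempt)
THRESHOLD = 25.0                        # tuned on held-out data in practice
print("access granted" if distance < THRESHOLD else "access denied")
```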
2215
73137
Practice on Design Knowledge Management and Transfer across the Life Cycle of a New-Built Nuclear Power Plant in China
Abstract:
As a knowledge-intensive industry, the nuclear industry highly values safety and quality. The life cycle of an NPP (Nuclear Power Plant) can last 100 years, from the initial research and design to its decommissioning. How to implement high-quality knowledge management, and how to contribute to a safer, more advanced, and more economical NPP, are the most important issues and responsibilities of knowledge management. As the lead of the nuclear industry, the nuclear research and design institute holds the competitive advantages of advanced technology, knowledge, and information; DKM (Design Knowledge Management) at the nuclear research and design institute is therefore the core of knowledge management in the whole nuclear industry. In this paper, the study and practice of DKM and knowledge transfer across the life cycle of a new-built NPP in China are introduced. For this digital, intelligent NPP, the whole design process is based on a digital design platform which includes an NPP engineering and design dynamic analyzer, a visualization engineering verification platform, a digital operation and maintenance support platform, and a digital equipment design and manufacture integrated collaborative platform. In order to let all design data and information transfer across design, construction, commissioning, and operation, the overall architecture of the new-built digital NPP should become a modern knowledge management system, so a digital information transfer model across the NPP life cycle is proposed in this paper. The challenges related to design knowledge transfer are also discussed, such as digital information handover, the data center and data sorting, and a unified data coding system. On the other hand, effective delivery of design information during the construction and operation phases will contribute to a comprehensive understanding of design ideas, components, and systems for the construction contractor and the operating unit, largely increasing safety, quality, and economic benefits during the life cycle. The operation and maintenance records generated during NPP operation have great significance for maintaining the operating state of the NPP, especially regarding the comprehensiveness, validity, and traceability of the records. The requirements of an online monitoring and smart diagnosis system for the NPP are therefore also proposed, to help utility owners improve safety and efficiency.
2214
73077
System Analysis of Quality Assurance in Online Education
Abstract:
Our society is in a constant state of change. Technology advancements continue to affect our daily lives; how we work, communicate, and entertain ourselves has changed dramatically in the past decades. As our society learns to accept and adapt to the many technological advances that seem to inundate every part of our lives, education institutions must migrate from traditional methods of instruction to online education in order to take full advantage of the opportunities these advancements provide. Universities and society can gain many benefits from offering online programs that utilize advanced technologies, but the programs must not be implemented carelessly. The key to providing a quality online program is the issue of perceived quality, which takes into account the viewpoints of all stakeholders involved. To truly ensure institutional quality, however, a systemic view of all factors contributing to that quality must be analyzed and linked to one another, allowing education administrators to understand how each factor contributes to the perceived quality of online education. The perceived quality of an online program will be positively reinforced only through an organization-wide effort that focuses on managed administration, augmented online program branding, skilled faculty, supportive alumni, student satisfaction, and effective delivery systems, each of which is vital to a quality online program. This study focuses on the concept of quality assurance in the start-up, implementation, and sustainability of online education. A case of an online MBA program is analyzed to explore quality assurance. The difficulty in promoting online education quality lies in the fact that universities are complex networks of disciplinary, social, economic, and political fiefdoms, shaped by factors both internal and external to the institutions. As such, system analysis, a systems-thinking approach, is ideal for investigating the factors behind perceived quality and how each factor contributes to it in the online education domain.
2213
73032
Automated Detection of Targets and Retrieval of the Corresponding Analytics Using Augmented Reality
Abstract:
Augmented reality is defined as overlaying digital or computer-generated information, such as images, audio, video, and 3D models, onto the real-time environment. Augmented reality can be thought of as a blend between the completely synthetic and the completely real. It provides scope in a wide range of industries, such as manufacturing, retail, gaming, advertisement, and tourism, and brings out new dimensions in the modern digital world. By overlaying content blended with the real world, it enhances users' knowledge. In this application, we integrate augmented reality with data analytics and with the cloud, so that the virtual content is generated on the basis of the data present in the database; we use marker-based augmented reality, where every marker is stored in the database with a corresponding unique ID. This application can be used in a wide range of industries for different business processes, but in this paper, we mainly focus on the marketing industry, helping customers gain knowledge about the products in the market, mainly their prices, customer feedback, quality, and other benefits. The application also focuses on providing better market strategy information for marketing managers, who obtain data about stocks, sales, customer response to the product, etc. In this paper, we also include reports on the feedback obtained from different people after the demonstration, and finally, we present the future scope of augmented reality in different business processes through integration with new technologies such as cloud, big data, and artificial intelligence.
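The marker-based step can be sketched with OpenCV's ArUco module: detect a marker and use its unique ID to look up the associated analytics record, as below. The marker dictionary, camera index, and records table are illustrative assumptions, and the aruco API shown is the OpenCV >= 4.7 variant.

```python
# Hedged sketch: ArUco marker detection driving an analytics lookup.
import cv2

RECORDS = {7: {"product": "Widget A", "price": 19.99, "rating": 4.3}}

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters()
detector = cv2.aruco.ArucoDetector(aruco_dict, params)  # OpenCV >= 4.7

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is not None:
        for marker_id in ids.flatten():
            # In the full system this would render an AR overlay; here we
            # just fetch the analytics record tied to the marker's unique ID.
            print(marker_id, RECORDS.get(int(marker_id), "no record"))
cap.release()
```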
2212
72932
A Semantic E-Learning and E-Assessment System for Learners
Abstract:
The evolution of the Social Web and the Semantic Web leads us to ask how to support the personalization of learning by means of intelligent filtering of the educational resources published on digital networks. We recommend personalized learning paths articulated around an initial educational course defined upstream. Reviewing the context and the stakes of personalization, we also suggest anchoring the personalization of learning in a community of interest within a group of learners enrolled in the same training. This reflection is supported by the presentation of an active, semantic learning system dedicated to building personalized, tailor-made courses delivered in due time.