Stress Analysis of Hexagonal Element for Precast Concrete Pavements
While the use of cast-in-place concrete for airfield and highway pavement overlays is very common, the application of precast concrete elements is very limited today. The main reasons are high production costs and complex structural behavior. Despite that, several precast concrete systems have been developed and tested with the aim of providing rapid construction. This contribution deals with the reinforcement design of a hexagonal element developed for a proposed airfield pavement system. The sub-base course of the system is composed of compacted recycled concrete aggregate, with fiber-reinforced concrete made with recycled aggregate placed on top of it. The selected element belongs to a group of precast concrete elements being considered for the construction of the surface course. Both the high cost of full-scale experiments and the need to investigate various elements make it necessary to simulate their behavior in numerical analysis software using the finite element method instead of performing expensive experiments. The simulation of the selected element was conducted on a nonlinear model in order to obtain results that could fully substitute for experimental results. The main objective was to design the reinforcement of the precast concrete element subjected to quasi-static loading from airplanes, with respect to geometrical and manufacturing imperfections, tensile stress in the reinforcement, compressive stress in the concrete, and crack width. The findings demonstrate that the position and presence of imperfections in a pavement strongly affect the stress distribution in the precast concrete element. The element must be heavily reinforced to fulfill all the demands, as using under-reinforced concrete elements would lead to the formation of wide, permanently open cracks.
Improving Flash Flood Forecasting with a Bayesian Probabilistic Approach: A Case Study on the Posina Basin in Italy
Flash Flood Guidance (FFG) provides the rainfall amount of a given duration necessary to cause flooding. The approach is based on the development of rainfall-runoff curves, which help determine the rainfall amount that would cause flooding. An alternative approach, mostly tested on Italian Alpine catchments, is based on determining threshold discharges from past events and checking whether an oncoming flood exceeds these critical discharge thresholds. Both approaches suffer from large uncertainties in forecasting flash floods, as, due to the simplistic approach followed, the same rainfall amount may or may not cause flooding. This uncertainty raises the question of whether a probabilistic model is preferable to a deterministic one in forecasting flash floods. We propose the use of a Bayesian probabilistic approach to flash flood forecasting. A prior probability of flooding is derived from historical data. Additional information, such as the antecedent moisture condition (AMC) and the rainfall amount over any rainfall threshold, is used to compute the likelihood of observing these conditions given that a flash flood has occurred. Finally, the posterior probability of flooding is computed from the prior probability and the likelihood. The variation of the computed posterior probability with rainfall amount and AMC demonstrates the suitability of the approach for decision making in an uncertain environment. The methodology has been applied to the Posina basin in Italy. From the promising results obtained, we conclude that the Bayesian approach provides more realistic flash flood forecasts than the FFG.
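The prior-times-likelihood update described above can be sketched as follows. This is a minimal illustrative example, not the study's calibrated model: the prior, the likelihood values for observing wet AMC plus heavy rainfall, and the resulting posterior are all made-up numbers.

```python
# Hypothetical sketch of the Bayesian update described above. The prior and
# likelihood values are illustrative, not calibrated to the Posina basin.
def posterior_flood_probability(prior, p_obs_given_flood, p_obs_given_no_flood):
    """P(flood | observations) via Bayes' rule for a binary flood/no-flood hypothesis."""
    evidence = p_obs_given_flood * prior + p_obs_given_no_flood * (1 - prior)
    return p_obs_given_flood * prior / evidence

# Flooding is rare a priori, but wet AMC and rainfall over the threshold are
# far more likely to be observed when a flash flood occurs.
p = posterior_flood_probability(prior=0.05,
                                p_obs_given_flood=0.8,
                                p_obs_given_no_flood=0.1)
```

The same observation that is inconclusive under a deterministic threshold rule here shifts the flooding probability from 5% to roughly 30%, which is the kind of graded information a decision maker can act on.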
Modeling Football Penalty Shootouts: How Improving Individual Performance Affects Team Performance and the Fairness of the ABAB Sequence
Penalty shootouts often decide the outcome of important soccer matches. Although usually referred to as "lotteries", there is evidence that some national teams and clubs consistently perform better than others. The outcomes are therefore not explained by mere luck alone, and there are ways to improve the average performance of players, naturally at the expense of some sort of effort. In this article we study the payoff of player performance improvements in terms of the performance of the team as a whole. To do so, we develop an analytical model with static individual performances, as well as Monte Carlo models that take into account the known influence of partial score and round number on individual performances. We find that, within a range of usual values, the team performance improves more than 70% faster than individual performances do. Using these models, we also estimate that the new ABBA penalty shootout ordering under test removes almost all the known bias in favor of the first-shooting team under the current ABAB system.
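A Monte Carlo model of the kind described can be sketched in a few lines. This is a simplified stand-in for the authors' models: it uses static per-kick scoring probabilities and ignores the partial-score and round-number effects the paper accounts for, and all probabilities are illustrative.

```python
import random

# Minimal Monte Carlo sketch of a best-of-5 shootout (plus sudden death)
# between two teams with static per-kick scoring probabilities. Score- and
# round-dependence, modeled in the paper, are deliberately omitted here.
def shootout_win_prob(p_a, p_b, trials=20000, seed=7):
    rng = random.Random(seed)
    wins_a = 0
    for _ in range(trials):
        a = sum(rng.random() < p_a for _ in range(5))
        b = sum(rng.random() < p_b for _ in range(5))
        while a == b:                 # sudden death: one kick each until decided
            a += rng.random() < p_a
            b += rng.random() < p_b
        wins_a += a > b
    return wins_a / trials

# A modest individual improvement (0.75 -> 0.80 per kick) against an
# unchanged 0.75 opponent produces a much larger shift in team win probability.
base = shootout_win_prob(0.75, 0.75)
improved = shootout_win_prob(0.80, 0.75)
```

Even this static sketch reproduces the qualitative finding: the team-level payoff of a per-kick improvement is amplified well beyond the individual improvement itself.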
Designing an Integrated Platform for Real-Time Recommendations Sharing among the Aged and People Living with Cancer
The world is expected to experience growth in its ageing population, which will bring a high cost of providing care for these valuable citizens. In addition, many of them live with chronic diseases that come with old age. Providing adequate care in the face of rising costs and dwindling personnel can be challenging. However, advances in technology and the emergence of the Internet of Things provide a way to address these challenges while improving caregiving. This study proposes the integration of recommendation systems into homecare to provide real-time recommendations for effective management of people receiving care at home and those living with chronic diseases. Using the simplified Training Logic Concept, stakeholders and requirements were identified. Specific requirements were gathered from people living with cancer. The solution designed has two components, home and community, to enhance recommendation sharing for effective caregiving. The community component of the design was implemented through the development of a mobile app called Recommendations Sharing Community for Aged and Chronically Ill People (ReSCAP). This component has illustrated the possibility of real-time recommendations and improved recommendation sharing among care receivers and between physicians and care receivers. Full implementation will increase access to health data for better care decision making.
Comparison of Machine Learning Models for the Prediction of System Marginal Price of Greek Energy Market
The Greek Energy Market is structured as a mandatory pool where producers make their bid offers on a day-ahead basis. The System Operator solves an optimization routine aiming at the minimization of the cost of produced electricity, and the solution of the optimization problem leads to the calculation of the System Marginal Price (SMP). Accurate forecasts of the SMP can lead to increased profits and more efficient portfolio management from the producer's perspective. The aim of this study is to provide a comparative analysis of various machine learning models, such as artificial neural networks and neuro-fuzzy models, for the prediction of the SMP of the Greek market. Machine learning algorithms are favored in prediction problems since they can capture and simulate the volatilities of complex time series.
Forecasting Issues in Energy Markets within a Reg-ARIMA Framework
Electricity markets throughout the world have undergone substantial changes. Accurate, reliable, clear, and comprehensible modeling and forecasting of different variables (loads and prices in the first instance) have achieved increasing importance. In this paper, we describe the current state of the art, focusing on reg-SARMA methods, which have proven to be flexible enough to accommodate electricity price/load behavior satisfactorily. More specifically, we discuss: 1) the dichotomy between point and interval forecasts; 2) the difficult choice between stochastic predictors (e.g., climatic variables) and deterministic predictors (e.g., calendar variables); 3) the confrontation between modeling a single aggregate time series and creating separate, potentially different models for its sub-series. The point we would like to emphasize is that prices and loads require different approaches that appear irreconcilable, even though they must be reconciled in the interests and activities of energy companies.
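The point-versus-interval dichotomy can be illustrated with a toy regression-with-ARMA-errors forecast. This is not the authors' reg-SARMA implementation: it is a deliberately crude one-step sketch with a single calendar dummy, a fixed AR(1) coefficient, and synthetic data, using a Gaussian band around the point forecast as the interval.

```python
import statistics

# Illustrative sketch only: a point forecast from a calendar-dummy regression
# plus an AR(1) term on its residuals, and a Gaussian 95% prediction interval.
# Coefficients (phi, beta) are fixed by hand rather than estimated.
def ar1_reg_forecast(y, weekend, phi=0.5, beta=2.0):
    """One-step-ahead forecast assuming the next day is a weekend day."""
    resid = [yt - beta * wt for yt, wt in zip(y, weekend)]
    point = beta * 1 + phi * resid[-1]       # calendar effect + AR(1) carryover
    sigma = statistics.pstdev(resid)         # crude innovation-scale proxy
    return point, (point - 1.96 * sigma, point + 1.96 * sigma)

y = [2.1, 0.3, 0.2, 2.4, 1.9, 0.1, 0.4]      # synthetic daily series
weekend = [1, 0, 0, 1, 1, 0, 0]              # deterministic calendar dummy
point, (lo, hi) = ar1_reg_forecast(y, weekend)
```

The interval, not the point value, is what conveys the forecast uncertainty that distinguishes the two forecast types discussed above.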
PM10 Prediction and Forecasting Using CART: A Case Study for Pleven, Bulgaria
Ambient air pollution with fine particulate matter (PM10) is a persistent, systematic problem in many countries around the world. The accumulation of a large number of measurements of both PM10 concentrations and the accompanying atmospheric factors allows for their statistical modeling to detect dependencies and forecast future pollution. This study applies the classification and regression trees (CART) method for building and analyzing PM10 models. In the empirical study, average daily air data for the city of Pleven, Bulgaria, over a period of 5 years are used. Predictors in the models are seven meteorological variables, time variables, as well as lagged PM10 variables and some lagged meteorological variables, delayed by 1 or 2 days with respect to the initial time series. The degree of influence of the predictors in the models is determined. The selected best CART models are used to forecast PM10 concentrations two days beyond the last date in the modeling procedure and show very accurate results.
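The lagged-predictor construction described above can be sketched as follows. The feature set here is reduced for illustration (the actual study uses seven meteorological variables plus time variables), and all values are made up.

```python
# Sketch of the lagged-feature construction: each training row pairs the
# current PM10 value (the target) with predictors delayed by 1 and 2 days.
def make_lagged_rows(pm10, temp, max_lag=2):
    rows = []
    for t in range(max_lag, len(pm10)):
        rows.append({
            "pm10_lag1": pm10[t - 1],     # PM10 one day earlier
            "pm10_lag2": pm10[t - 2],     # PM10 two days earlier
            "temp_lag1": temp[t - 1],     # a lagged meteorological predictor
            "target_pm10": pm10[t],       # value the CART model predicts
        })
    return rows

pm10 = [41.0, 55.2, 48.3, 60.1, 52.7]     # illustrative daily concentrations
temp = [3.1, 2.4, 1.9, 0.5, 1.2]
rows = make_lagged_rows(pm10, temp)
```

Rows in this form can be fed directly to any regression-tree implementation; the first `max_lag` days are necessarily dropped because their lags are undefined.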
The Impact of Open Defecation on Fecal-Oral Infections: A Case Study in Burat and Ngaremara Wards of Isiolo County, Kenya
The practice of open defecation can be devastating for human health as well as the environment, and its persistence could be due to ingrained habits that individuals continue to engage in despite having a better alternative. Safe disposal of human excreta is essential for public health protection. This study sought to determine whether open defecation is related to fecal-oral infections in Burat and Ngaremara Wards in Isiolo County. This was achieved through a cross-sectional study. A simple random sampling technique was used to select the 385 households used in the study. Data collection was done using questionnaires and observation checklists. The results show that 66% of the respondents disposed of fecal matter in a safe manner, whereas 34% disposed of fecal matter in an unsafe manner through open defecation. The prevalence proportions per 1000 of diarrhea and intestinal worms among children under 5 years of age were 142 and 21, respectively. The prevalence proportions per 1000 of diarrhea and typhoid among children over 5 years of age were 20 and 20, respectively.
Harmonizing Spatial Plans: A Methodology to Integrate Sustainable Mobility and Energy Plans to Promote Resilient City Planning
Local administrations are facing established sustainable development targets from different disciplines at the heart of different city departments. Nevertheless, some of these targets, such as CO2 reduction, relate to two or more disciplines, as is the case with sustainable mobility and energy plans (SUMP & SECAP/SEAP). This opens up the possibility of efficient cooperation among different city departments to create and develop harmonized spatial plans, using available resources and together achieving more ambitious goals in cities. The steps of the harmonization process developed here result in the identification of areas in which to pursue common strategic objectives. Harmonization, in other words, helps different departments in local authorities to work together and optimize the use of resources by sharing the same vision, involving key stakeholders, and promoting common data assessment. This paper presents a methodology to promote resilient city planning via the harmonization of sustainable mobility and energy plans. To validate the proposed methodology, a representative city engaged in an innovation process in efficient spatial planning is used as a case study. The harmonization process covers identifying matching targets between the different fields, developing spatial plans with dual benefit, and defining common indicators that guarantee the continuous improvement of the harmonized plans. The proposed methodology supports local administrations in consistent spatial planning, considering both energy efficiency and sustainable mobility, so that municipalities can use their human and economic resources efficiently. This guarantees an efficient upgrade of land use plans integrating energy and mobility aspects in order to achieve sustainability targets, as well as to improve the wellbeing of their citizens.
Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks
Load forecasting has become crucial in recent years and is now a popular research area. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policies and for healthy, reliable grid systems. Effective power forecasting of renewable energy loads helps decision makers minimize the costs of electric utilities and power plants. Forecasting tools are required that can predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. We present models for predicting renewable energy loads based on deep neural networks, in particular Long Short-Term Memory (LSTM) algorithms. Deep learning allows multiple layers of models to learn representations of data, and LSTM algorithms are able to store information for long periods of time. Deep learning models have recently been used to forecast renewable energy sources, such as predicting wind and solar power. Historical load and weather information are the most important input variables for power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 at one-hour resolution, using publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies were carried out with these data via a deep neural network approach, including the LSTM technique, for Turkish electricity markets. 432 different models were created by varying layer counts, cell counts, and dropout rates. The adaptive moment estimation (ADAM) algorithm is used for training as a gradient-based optimizer instead of stochastic gradient descent (SGD); ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared using MAE (Mean Absolute Error) and MSE (Mean Squared Error). The best MAE results among the 432 tested models are 0.66, 0.74, 0.85, and 1.09. The forecasting performance of the proposed LSTM models is successful compared to results reported in the literature.
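The long-memory property the abstract relies on comes from the LSTM cell's gating mechanism, which a single plain-Python step can illustrate. This is a scalar sketch of the standard LSTM equations, not the study's trained networks: the weights are arbitrary illustrative values, and real implementations use weight matrices over vectors.

```python
import math

# One LSTM cell step with scalar weights, to show how the forget/input/output
# gates let the cell state c carry information across many timesteps.
def lstm_step(x, h_prev, c_prev, w):
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])      # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])      # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])      # output gate
    c_tilde = math.tanh(w["wc"] * x + w["uc"] * h_prev + w["bc"])
    c = f * c_prev + i * c_tilde      # cell state: kept memory + new candidate
    h = o * math.tanh(c)              # hidden state passed to the next layer
    return h, c

# Arbitrary illustrative weights; a tiny input sequence (e.g. scaled loads).
w = {k: 0.5 for k in ("wf", "uf", "bf", "wi", "ui", "bi",
                      "wo", "uo", "bo", "wc", "uc", "bc")}
h, c = 0.0, 0.0
for x in [1.0, 0.2, -0.3]:
    h, c = lstm_step(x, h, c, w)
```

Because `c` is updated additively through the forget gate rather than squashed at every step, gradients and information persist over long horizons, which is what makes LSTMs attractive for load series with daily and seasonal structure.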
Electricity Price Forecasting: A Comparative Analysis with Shallow-ANN and DNN
Electricity prices have sophisticated features such as high volatility, nonlinearity, and high frequency that make forecasting quite difficult. Electricity prices are volatile but non-random, so it is possible to identify patterns in the historical data. Intelligent decision-making requires accurate price forecasting for market traders, retailers, and generation companies. So far, many shallow-ANN (artificial neural network) models have been published in the literature and have shown adequate forecasting results. In recent years, neural networks with many hidden layers, referred to as DNNs (deep neural networks), have been adopted in the machine learning community. The goal of this study is to investigate the electricity price forecasting performance of shallow-ANN and DNN models for the Turkish day-ahead electricity market. The forecasting accuracy of the models is evaluated with publicly available data from the Turkish day-ahead electricity market. Both shallow-ANN and DNN approaches give successful results in forecasting problems. Historical load, price, and weather temperature data are used as input variables for the models. The dataset includes power consumption measurements gathered between January 2016 and December 2017 at one-hour resolution. Forecasting studies were carried out comparatively with shallow-ANN and DNN models for Turkish electricity markets in this period. The main contribution of this study is the investigation of different shallow-ANN and DNN models in the field of electricity price forecasting. All models are compared in terms of their MAE (Mean Absolute Error) and MSE (Mean Squared Error) results. DNN models give better forecasting performance compared to shallow-ANNs. The best five MAE results for DNN models are 0.346, 0.372, 0.392, 0.402, and 0.409.
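The two comparison metrics used above are standard and easy to state precisely. A minimal implementation with illustrative data:

```python
# MAE penalizes all errors linearly; MSE squares them, so it weights large
# price spikes more heavily. Data here are illustrative, not from the study.
def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mse(actual, predicted):
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

actual    = [10.0, 12.0, 11.0, 13.0]
predicted = [10.5, 11.0, 11.5, 12.0]
```

Reporting both metrics, as the study does, separates average accuracy (MAE) from sensitivity to occasional large misses (MSE).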
Analysis of Green Wood Preservation Chemicals
Wood decay is addressed continuously within the wood industry through the use and development of wood preservatives. Increasing awareness of the negative environmental effects of many chemicals is leading to political restrictions on their use and creating a more urgent need for research on green alternatives. This paper discusses some of the possible natural extracts for wood-preserving applications and compares the analytical methods available for testing their behavior and efficiency against decay fungi. The results indicate that natural extracts have interesting chemical constituents that delay fungal growth but vary in efficiency depending on the chemical concentration and substrate used. Results also suggest that the presence and redistribution of preservatives in wood during exposure trials can be assessed by spectral imaging methods, although standardized methods are not available. This study concludes that, in addition to the many standard methods available, there is a need to develop new, faster methods for screening potential preservative formulations while maintaining the comparability and relevance of results.
Comparative Evaluation of Accuracy of Selected Machine Learning Classification Techniques for Diagnosis of Cancer: A Data Mining Approach
With recent trends in Big Data and advancements in Information and Communication Technologies, the healthcare industry is transitioning from clinician-oriented to technology-oriented practice. Many people around the world die of cancer because the disease was not diagnosed at an early stage. Nowadays, computational methods in the form of Machine Learning (ML) are used to develop automated decision support systems that can diagnose cancer with high confidence in a timely manner. This paper carries out a comparative evaluation of a selected set of ML classifiers on two existing datasets: breast cancer and cervical cancer. The ML classifiers compared in this study are Decision Tree (DT), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Logistic Regression, Ensemble (Bagged Tree), and Artificial Neural Networks (ANN). The evaluation is carried out using the standard metrics Precision (P), Recall (R), F1-score, and Accuracy. The experimental results show that ANN achieved the highest accuracy (99.4%) when tested with the breast cancer dataset. On the other hand, when these ML classifiers are tested with the cervical cancer dataset, the Ensemble (Bagged Tree) technique gave better accuracy (93.1%) in comparison to the other classifiers.
Keywords: Artificial neural networks, breast cancer, cervical cancer, logistic regression, machine learning, support vector machine.
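The evaluation metrics named above follow directly from a binary confusion matrix. A minimal implementation, with counts that are illustrative rather than taken from the cancer datasets:

```python
# Precision, recall, and F1 from true-positive, false-positive, and
# false-negative counts. In diagnosis, recall matters most: a false negative
# is a missed cancer case. Counts below are illustrative only.
def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)                    # of predicted positives, how many are real
    recall = tp / (tp + fn)                       # of real positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)   # harmonic mean of the two
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=90, fp=5, fn=10)
```

Accuracy alone can be misleading on imbalanced medical datasets, which is why the study reports all four metrics side by side.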
Forecasting Electricity Spot Price with Generalized Long Memory Modeling: Wavelet and Neural Network
The aim of this paper is to forecast electricity spot prices. First, we focus on modeling the conditional mean of the series, adopting a generalized fractional k-factor Gegenbauer process (k-factor GARMA). Second, the residuals from the k-factor GARMA model are used as a proxy for the conditional variance and are predicted using two different approaches. In the first approach, a local linear wavelet neural network (LLWNN) model is developed to predict the conditional variance using the back-propagation learning algorithm. In the second approach, the Gegenbauer generalized autoregressive conditional heteroscedasticity (G-GARCH) process is adopted, and the parameters of the k-factor GARMA-G-GARCH model are estimated using a wavelet methodology based on the discrete wavelet packet transform (DWPT). The empirical results show that the k-factor GARMA-G-GARCH model outperforms the hybrid k-factor GARMA-LLWNN model and is more appropriate for forecasting.
Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite
Energy production optimization has traditionally been very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there are a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R and on web services built and deployed with different components of the Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that makes it easy to build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools for analyzing data and sharing insights. Our results show that Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful for predicting hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity, and dew point) can play a crucial role in improving the accuracy of the forecasting solution. The data cleaning and feature engineering methods performed in R and the different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile, and ARIMA) are presented, and results and performance metrics are discussed.
The High Strength Biocompatible Wires of Commercially Pure Titanium
COMTES FHT has been active in the research and development of high-strength wires for quite some time, with commercially pure titanium as the main material. The primary goal of this effort is to develop a continuous production process for ultrafine and nanostructured materials with the aid of severe plastic deformation (SPD). This article outlines the mechanical and microstructural properties of the materials and the options available for testing components made of them. Ti Grade 2 and Grade 4 wires are the key products of interest. Ti Grade 2 with ultrafine to nano-sized grains shows an ultimate strength of up to 1050 MPa, and Ti Grade 4 reaches ultimate strengths of up to 1250 MPa. These values are two to three times higher than those of the unprocessed material. For fields of medicine where implantable metallic materials are used, bulk ultrafine to nanostructured titanium is available, manufactured by SPD techniques. These processes leave the chemical properties of the initial material unchanged but markedly improve its final mechanical properties, in particular its strength. Ultrafine to nanostructured titanium retains all the significant and, from the biological viewpoint, desirable properties that are important for its use in medicine, i.e., the properties that made pure titanium the preferred material for dental implants.
Aggregation Scheduling Algorithms in Wireless Sensor Networks
In Wireless Sensor Networks, which consist of tiny wireless sensor nodes with limited battery power, one of the most fundamental applications is data aggregation, which collects nearby environmental conditions and aggregates the data to a designated destination, called a sink node. Important issues concerning data aggregation are time efficiency and energy consumption, given the nodes' limited energy, and therefore the related problem, named Minimum Latency Aggregation Scheduling (MLAS), has been the focus of many researchers. Its objective is to compute the minimum-latency schedule, that is, a schedule with the minimum number of timeslots, such that the sink node can receive the aggregated data from all the other nodes without any collision or interference. For this problem, two interference models have been adopted: the graph model and the more realistic physical interference model known as Signal-to-Interference-plus-Noise Ratio (SINR), combined with different power models (uniform power and non-uniform power, with or without power control) and different antenna models (omni-directional and directional). In this survey article, as the problem has proven to be NP-hard, we present and compare several state-of-the-art approximation algorithms in these various models on the basis of latency.
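The SINR feasibility test at the core of the physical interference model can be sketched directly. This is a minimal illustration, not any surveyed algorithm: geometry, the path-loss exponent, the noise floor, and the threshold are all illustrative values, and uniform transmit power is assumed.

```python
# Check whether a set of links can transmit in the same timeslot under the
# physical (SINR) model: each receiver's signal-to-interference-plus-noise
# ratio must exceed the threshold beta. All constants are illustrative.
def slot_is_feasible(links, power, alpha=3.0, noise=1e-9, beta=2.0):
    """links: list of (sender_xy, receiver_xy) pairs; path loss is d**-alpha."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    for i, (s_i, r_i) in enumerate(links):
        signal = power / dist(s_i, r_i) ** alpha
        interference = sum(power / dist(s_j, r_i) ** alpha
                           for j, (s_j, _) in enumerate(links) if j != i)
        if signal / (noise + interference) < beta:
            return False
    return True

# Well-separated links can share a slot; closely packed ones cannot.
far = [((0, 0), (1, 0)), ((10, 0), (11, 0))]
near = [((0, 0), (1, 0)), ((2, 0), (1.2, 0))]
far_ok = slot_is_feasible(far, power=1.0)
near_ok = slot_is_feasible(near, power=1.0)
```

An aggregation scheduler under this model repeatedly selects a maximal feasible set of links per timeslot; the latency an algorithm achieves is the number of such slots it needs.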
Sustaining the Social Memory in a Historic Neighborhood: The Case Study of Uch Dukkan Neighborhood in Ardabil City in Azerbaijani Region of Iran
Conservation of historical urban patterns in traditional neighborhoods is part of creating integrated urban environments that are socially more sustainable. Urbanization is reflected in the life conditions and the social, physical, and economic characteristics of a society. In this regard, historical zones and traditional regions are affected by dramatic interventions in these characteristics. This article focuses on the Uch Dukkan neighborhood located in Ardabil City in the Azerbaijani region of Iran, which has been subject to such interventions, leading to its transformation from the past to the present. After introducing a brief inventory of the main elements of the historical zone and the neighborhood, this study explores the changes and transformations in different periods and their impacts on the quality of the environment and its social sustainability. The survey conducted in the neighborhood as part of this research revealed that the Uch Dukkan neighborhood and the unique architectural heritage it possesses have become more inactive, both physically and functionally, within a decade. This condition requires an exploration and comparison of the present and expected transformations of the meaning of social space, from the most private unit to the urban scale. By the same token, it is argued that an architectural point of view based on spatial order and on the use and meaning of space as a social and cultural image should not be ignored. Based on the interplay between social sustainability, collective memory, and the urban environment, the study aims to make visible the ignored aspects that weaken the collective meaning of the neighborhood as a historic urban district. It reveals that the spatial possessions of the neighborhood are valuable not only for their historical and physical characteristics, but also for the social memory that is to be remembered and constructed further.
Detecting Financial Bubbles Using Gap between Common Stocks and Preferred Stocks
How can financial bubbles be detected? Addressing this simple question has been the focus of a vast amount of empirical research spanning almost half a century. However, financial bubbles are hard to observe and vary over time, so more research on this area is needed. In this paper, we use the abnormal difference between common stock prices and the corresponding preferred stock prices to explain financial bubbles. First, we propose the ‘W-index’, which indicates the spread between common stocks and the corresponding preferred stocks in the stock market. Second, to show that the W-index is valid for measuring financial bubbles, we demonstrate an inverse relationship between the W-index and the S&P500 rate of return. Specifically, our hypothesis is that when the W-index is higher than in other periods, financial bubbles are building up in the stock market, and vice versa; accordingly, if investors make long-term investments when the W-index is high, they will have a negative rate of return, whereas if they invest when the W-index is low, they will have a positive rate of return. By comparing correlation values and adjusted R-squared values between the W-index and S&P500 returns, the VIX index and S&P500 returns, and the TED spread and S&P500 returns, we show that only the W-index has a significant relationship with the S&P500 rate of return. In addition, we determine how long investors should hold their investment position with regard to the effect of financial bubbles. Using the W-index, investors can measure financial bubbles in the market and invest with low risk.
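A spread index in the spirit of the W-index can be sketched as follows. This is a hypothetical construction for illustration only: the paper's exact index definition may differ, and all prices are made up.

```python
# Hypothetical sketch of a common-vs-preferred spread index: the relative
# premium of each common stock over its preferred counterpart, averaged
# across pairs. A high value would signal that common shares trade at an
# abnormal premium, the condition the paper associates with bubbles.
def spread_index(common_prices, preferred_prices):
    premiums = [(c - p) / p for c, p in zip(common_prices, preferred_prices)]
    return sum(premiums) / len(premiums)

common = [120.0, 95.0, 60.0]       # illustrative common-stock prices
preferred = [80.0, 70.0, 50.0]     # matching preferred-stock prices
w = spread_index(common, preferred)
```

Since common and preferred shares have claims on the same firm, a widening average premium is hard to justify by fundamentals alone, which is the intuition behind using the spread as a bubble gauge.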
Characterization of Screening Staphylococcus aureus Isolates Harboring mecA Genes among Intensive Care Unit Patients from Tertiary Care Hospital in Jakarta, Indonesia
The objective of this study is to determine the prevalence of methicillin-resistant Staphylococcus aureus (MRSA) harboring mecA genes among screening isolates from intensive care unit (ICU) patients. All MRSA screening isolates from ICU patients of Cipto Mangunkusumo Hospital during 2011 and 2014 were included in this study. Identification and susceptibility tests were performed using the Vitek 2 system (bioMérieux®). PCR was conducted to characterize the SCCmec of S. aureus harboring the mecA gene in each isolate. Patients' histories of illness were traced through medical records. Of 327 screening isolates, 24 were MRSA positive (7.3%). By PCR, we found 17 (70.8%) isolates carrying SCCmec type I, 3 (12.5%) isolates carrying SCCmec type III, and 2 (8.3%) isolates carrying SCCmec type IV. In conclusion, SCCmec type I is the most prevalent type in MRSA colonization among ICU patients in Cipto Mangunkusumo Hospital.
Multimedia Firearms Training System
The goal of this article is to present a novel Multimedia Firearms Training System. The system was developed to compensate for major shortcomings of existing shooting training systems. The designed and implemented solution has five major advantages: an algorithm for automatic geometric calibration, an algorithm for photometric recalibration, firearm hit-point detection using a thermal imaging camera, an IR laser spot tracking algorithm for after-action review analysis, and an implementation of ballistics equations. The combination of these advantages in a single multimedia firearms training system creates a comprehensive solution for detecting and tracking the target point, usable for shooting training and for improving the intervention tactics of uniformed services. The introduced algorithms of geometric and photometric recalibration allow the use of economically viable, commercially available projectors in systems that require long and intensive use, without most of the negative impacts on color mapping found in existing multi-projector multimedia shooting range systems. The article presents the results of the developed algorithms and their application in real training systems.
Technologic Information about Photovoltaic Applied in Urban Residences
Among renewable energy sources, solar energy stands out. Solar radiation can be used as a thermal energy source and can also be converted into electricity by means of effects in certain materials, as in thermoelectric and photovoltaic panels. These panels are often used to generate energy in homes, buildings, arenas, etc., and have low pollution emissions. Thus, a technological prospecting study was performed to find patents related to the use of photovoltaic panels in urban residences. The patent search was based on ESPACENET, associating the keywords photovoltaic and home in the title and abstract fields, which returned 136 patent documents in the period 1994-2015. The years 2009, 2010, 2011, 2012, 2013, and 2014 had the highest numbers of applications, with 11, 13, 23, 29, 15, and 21, respectively. Regarding the country of deposit, China clearly leads with 67 patent deposits, followed by Japan with 38 patent applications. It is important to note that most depositors are companies (50%), 44% are individual inventors, and only 6% are universities. Concerning International Patent Classification (IPC) codes, the most frequent classification in the results was H02J3/38, which covers arrangements for the parallel feeding of a single network by two or more generators, converters, or transformers. Among all categories, section H, which stands for Electricity, accounts for 70% of the patents.
On the Transition of Europe’s Power Sector: Economic Consequences of National Targets
The prospects for the European power sector indicate that it has to almost fully decarbonize in order to reach the economy-wide target of CO2-emission reduction. We apply the EU-REGEN model to explain the penetration of RES from an economic perspective, their spatial distribution, and the complementary role of conventional generation technologies. Furthermore, we identify the economic consequences of national energy and climate targets. Our study shows that onshore wind power will be the most crucial generation technology for the future European power sector. Its geographic distribution is driven by resource quality. Gas power will be the major conventional generation technology for backing up wind power. Moreover, a complete phase-out of coal power proves not to be economically optimal. The paper demonstrates that existing national targets have a negative impact, especially on the German region, with higher prices and lower revenues; the remaining regions' profits are hardly affected. We encourage EU-wide coordination on the expansion of wind power with harmonized policies. Yet, this requires profitable market structures for both RES and conventional generation technologies.
Performance Management of Tangible Assets within the Balanced Scorecard and Interactive Business Decision Tools
The present study investigated approaches and techniques to enhance strategic management governance and decision making within the framework of a performance-based balanced scorecard. A review of best practices from strategic, program, process, and systems engineering management provided a holistic approach toward effective outcome-based capability management. One technique, based on factorial experimental design methods, was used to develop an empirical model. This model predicts the degree of capability effectiveness as a function of controlled system input variables and their weightings. These variables represent business performance measures captured within a strategic balanced scorecard, and weighting these measures enhances the ability to quantify causal relationships within balanced scorecard strategy maps. The focus of this study was on the performance of tangible assets within the scorecard, rather than the traditional approach of assessing the performance of intangible assets such as knowledge and technology. Tangible assets are represented in this study as physical systems, which may be thought of as being aboard a ship or within a production facility. The measures assigned to these systems include project funding for upgrades against demand, system certifications achieved against those required, preventive-to-corrective maintenance ratios, and material support personnel capacity against that required to support the respective systems. The resultant scorecard is viewed as complementary to the traditional balanced scorecard for program and performance management. The benefits of these scorecards are realized through the quantified state of operational capabilities or outcomes. These capabilities are also weighted in terms of priority for each distinct system measure, then aggregated and visualized in terms of the overall state of capabilities achieved.
This study proposes the use of interactive controls within the scorecard as a technique to enhance the development of alternative solutions in decision making. These interactive controls include those for assigning capability priorities and for adjusting system performance measures, thus providing for what-if scenarios and options in strategic decision making. In this holistic approach to capability management, several cross-functional processes were highlighted as relevant across the different management disciplines. In terms of assessing an organization's ability to adopt this approach, consideration was given to the P3M3 management maturity model.
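The weighted aggregation described in the two paragraphs above can be illustrated with a minimal numeric sketch: each system's normalized measures are combined using assumed measure weightings, and the resulting system scores are rolled up using assumed capability priorities (the values a decision maker would adjust through the scorecard's interactive controls in a what-if scenario). All names and numbers below are hypothetical, not taken from the study.

```python
import numpy as np

# Hypothetical normalized performance measures (0-1) for three physical systems,
# in the order: funding vs. demand, certifications achieved, PM/CM ratio,
# support personnel capacity.
measures = np.array([
    [0.9, 0.7, 0.8, 0.6],   # e.g. a propulsion system
    [0.5, 0.9, 0.6, 0.7],   # e.g. a navigation system
    [0.8, 0.6, 0.9, 0.9],   # e.g. a power generation system
])

# Assumed weightings of the four measures within each system's scorecard (sum to 1).
measure_weights = np.array([0.4, 0.2, 0.2, 0.2])

# Assumed capability priorities across systems -- the interactive control a
# decision maker would adjust to explore alternatives.
priorities = np.array([0.5, 0.3, 0.2])

system_scores = measures @ measure_weights   # capability effectiveness per system
overall = float(priorities @ system_scores)  # aggregated state of capabilities
```

Changing `priorities` or any entry of `measures` and recomputing gives exactly the kind of what-if comparison the interactive controls are meant to support.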
Application of Transportation Models for Analysing Future Intercity and Intracity Travel Patterns in Kuwait
In order to meet the increasing demand for housing care for Kuwaiti citizens, the government authorities in Kuwait are undertaking a series of projects in the form of new large cities outside the current urban area. Al Mutlaa City, located to the north-west of the Kuwait Metropolitan Area, is one such project out of the 15 planned new cities. The city accommodates a wide variety of residential developments, employment opportunities, and commercial, recreational, health care and institutional uses. This paper examines the application of comprehensive transportation demand modeling work undertaken on the VISUM platform to understand future intracity and intercity travel distribution patterns in Kuwait. The models developed varied in their levels of detail: a strategic model update, sub-area models representing the future demand of Al Mutlaa City, and sub-area models built to estimate the demand in the residential neighborhoods of the city. This paper aims at offering a model update framework that facilitates easy integration between sub-area models and strategic national models for unified traffic forecasts. It presents the transportation demand modeling results used to inform the planning of a multi-modal transportation system for Al Mutlaa City. It also presents the household survey data collection efforts undertaken using GPS devices (a first in Kuwait) and notebook-computer-based digital survey forms for interviewing a representative sample of citizens and residents. The survey results formed the basis for estimating the trip generation rates and trip distribution coefficients used in the strategic base-year model calibration and validation process.
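The trip generation rates and distribution coefficients mentioned above conventionally feed a doubly constrained gravity model, balanced by Furness iteration, in VISUM or any other demand-modeling platform. The following is a generic sketch of that technique with hypothetical zones and an assumed exponential deterrence function; it does not reproduce the Kuwait models.

```python
import numpy as np

# Hypothetical three-zone example: trip productions P, attractions A, and an
# inter-zonal travel-time skim (minutes). None of these values come from the
# Kuwait model.
P = np.array([1000.0, 800.0, 600.0])
A = np.array([900.0, 700.0, 800.0])
cost = np.array([[ 5.0, 20.0, 30.0],
                 [20.0,  5.0, 15.0],
                 [30.0, 15.0,  5.0]])

beta = 0.1                    # assumed distribution (deterrence) coefficient
F = np.exp(-beta * cost)      # exponential deterrence function

# Doubly constrained gravity model via iterative (Furness) balancing:
# T_ij = a_i * F_ij * b_j, with row sums matching P and column sums matching A.
a = np.ones(3)
b = np.ones(3)
for _ in range(100):
    a = P / (F @ b)
    b = A / (F.T @ a)
T = a[:, None] * F * b[None, :]
```

The balanced matrix `T` distributes each zone's productions across attraction zones, with short trips favored in proportion to the deterrence function; calibration would tune `beta` until the modeled trip-length distribution matches the household survey data.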
Air Quality Forecast Based on Principal Component Analysis-Genetic Algorithm and Back Propagation Model
Under circumstances of environmental deterioration, people are increasingly concerned about the quality of the environment, especially air quality. As a result, it is of great value to give accurate and timely forecasts of the AQI (air quality index). In order to simplify the influencing factors of air quality in a city and forecast the city's AQI for the next day, this study used MATLAB and constructed a PCA-GABP mathematical model. Specifically, this study first applied principal component analysis (PCA) to the factors influencing tomorrow's AQI, including weather, industrial waste gas, and today's IAQI data. Then, we used a back propagation (BP) neural network, optimized by a genetic algorithm (GA), to forecast tomorrow's AQI. To verify the validity and accuracy of the PCA-GABP model's forecasting capability, the study uses two statistical indices to evaluate the AQI forecast results: the normalized mean square error and the fractional bias. Finally, this study reduces the mean square error by optimizing the individual gene structure in the genetic algorithm and adjusting the parameters of the back propagation model. In conclusion, the performance of the model in forecasting AQI is comparatively convincing, and the model is expected to play a positive role in AQI forecasting in the future.
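The PCA-GABP pipeline described above can be sketched end to end: PCA reduces the correlated influencing factors to a few components, then a genetic algorithm searches the weights of a small BP-style network mapping those components to tomorrow's AQI. This is a simplified illustration on synthetic data; the GA here uses only elitist selection and Gaussian mutation, and the paper's MATLAB implementation, gene encoding and training details are not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily records: 8 correlated influencing factors (weather,
# industrial waste gas, today's IAQI) driven by 3 latent factors, plus
# tomorrow's AQI as the target.
latent = rng.normal(size=(200, 3))
X = latent @ rng.normal(size=(3, 8)) + 0.05 * rng.normal(size=(200, 8))
y = 2.0 * latent[:, 0] - latent[:, 1] + 0.1 * rng.normal(size=200)

# --- Step 1: PCA keeps the components explaining ~95% of the variance ---
Xc = X - X.mean(axis=0)
vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]
k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), 0.95)) + 1
Z = Xc @ vecs[:, :k]

# --- Step 2: a simplified GA (elitist selection + Gaussian mutation, crossover
# omitted for brevity) evolves the weights of a tiny one-hidden-layer network ---
H = 5
n_w = k * H + H                        # hidden-layer weights + output weights

def predict(w):
    W1 = w[:k * H].reshape(k, H)
    return np.tanh(Z @ W1) @ w[k * H:]

def mse(w):
    return np.mean((predict(w) - y) ** 2)

pop = rng.normal(size=(40, n_w))
for _ in range(60):
    fitness = np.array([mse(w) for w in pop])
    elite = pop[np.argsort(fitness)[:10]]                           # selection
    children = elite[rng.integers(0, 10, size=30)] \
        + 0.1 * rng.normal(size=(30, n_w))                          # mutation
    pop = np.vstack([elite, children])

best = min(pop, key=mse)
nmse = mse(best) / np.var(y)           # normalized mean square error, as in the paper
```

In the paper's full scheme the GA would typically supply starting weights that BP then refines by gradient descent; here the GA is shown doing the whole search to keep the sketch short.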
Disaggregating and Forecasting the Total Energy Consumption of a Building: A Case Study of a High Cooling Demand Facility
Energy disaggregation has been a focus of many energy companies, since energy efficiency can be achieved when the breakdown of energy consumption is known. Companies have been investing in technologies to come up with software and/or hardware solutions that can provide this type of information to the consumer. On the other hand, not everyone can afford these technologies. Therefore, in this paper, we present a methodology for breaking down the aggregate consumption and identifying the high-demand end-use profiles. These energy profiles are then used to build a forecast model for optimal control purposes. A facility with a high cooling load is used as an illustrative case study to demonstrate the results of the proposed methodology. We apply high-level energy disaggregation through a pattern recognition approach in order to extract the consumption profile of its rooftop packaged units (RTUs) and present a forecast model for the energy consumption.
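One simple form of the pattern-recognition disaggregation described above is event detection: large step changes in the aggregate load mark an RTU switching on or off, and the end-use profile is reconstructed from those events. The sketch below applies this idea to a synthetic aggregate signal; the threshold, RTU demand and sampling interval are all assumed, and the paper's actual algorithm may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic one-day aggregate load at 5-minute resolution: a slowly varying
# base load plus an RTU drawing ~20 kW while cycling on and off.
t = np.arange(288)
base = 50 + 5 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 0.5, t.size)
rtu_on = (t // 24) % 2 == 0             # RTU cycles every 2 hours (assumed)
aggregate = base + 20.0 * rtu_on

# Event detection: step changes larger than a threshold mark RTU switching.
diff = np.diff(aggregate)
threshold = 10.0                        # assumed step size separating events from noise
on_events = np.where(diff > threshold)[0] + 1
off_events = np.where(diff < -threshold)[0] + 1
events = np.sort(np.concatenate([on_events, off_events]))

# Reconstruct the on/off profile by toggling state at each detected event;
# if the first event is a switch-off, the unit must have started in the on state.
current = len(off_events) > 0 and (len(on_events) == 0 or off_events[0] < on_events[0])
state = np.zeros(t.size, dtype=bool)
prev = 0
for e in events:
    state[prev:e] = current
    current = not current
    prev = e
state[prev:] = current

step = np.median(np.abs(diff[np.abs(diff) > threshold]))  # estimated RTU demand
rtu_estimate = step * state             # extracted end-use consumption profile
accuracy = np.mean(state == rtu_on)
```

The extracted `rtu_estimate` profile is the kind of high-demand end-use series that would then feed the forecast model for control purposes.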
Structural Behavior of Precast Foamed Concrete Sandwich Panel Subjected to Vertical In-Plane Shear Loading
Experimental and analytical studies were carried out to examine the structural behavior of precast foamed concrete sandwich panels (PFCSP) under vertical in-plane shear load. A total of six full-scale PFCSP specimens were fabricated with varying heights to study the important parameter of slenderness ratio (H/t). The production technique of PFCSP and the test setup procedure are described. The results obtained from the experimental tests were analysed in terms of in-plane shear strength capacity, load-deflection profile, load-strain relationship, slenderness ratio, shear cracking patterns and mode of failure. A finite element analysis (FEA) was also implemented, and theoretical ultimate in-plane shear strengths were calculated using the adopted ACI 318 equation for reinforced concrete walls, with the aim of predicting the in-plane shear strength of PFCSP. Decreasing the slenderness ratio from 24 to 14 increased the ultimate in-plane shear strength capacity by 26.51% experimentally and by 21.91% in the FEA models. The experimental test results, FEA model data and theoretical calculation values were compared and showed close agreement with a high degree of accuracy. Therefore, on the basis of the results obtained, the PFCSP wall has potential for use as an alternative to the conventional load-bearing wall system.
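For context, the ACI 318 in-plane shear provision for structural walls takes the form V_n = A_cv(alpha_c * lambda * sqrt(f'c) + rho_t * f_y) in psi units; the abstract does not state which ACI 318 equation the paper adopts, so this may differ from the authors' formulation, and foamed concrete would in any case require a suitably reduced lightweight factor. The sketch below evaluates this form for a hypothetical wall, with every input value assumed.

```python
import math

# Hypothetical wall, psi and inch units (all values assumed, not from the paper).
fc = 3000.0        # concrete compressive strength, psi
fy = 60000.0       # reinforcement yield strength, psi
rho_t = 0.0025     # transverse reinforcement ratio
lam = 0.75         # lightweight-concrete modification factor (assumed for foamed concrete)
t, lw, hw = 4.0, 48.0, 96.0   # thickness, length, height, in

# alpha_c depends on the wall aspect ratio hw/lw (3.0 for squat walls <= 1.5,
# 2.0 for slender walls >= 2.0, interpolated between; here hw/lw = 2.0).
alpha_c = 3.0 if hw / lw <= 1.5 else 2.0
Acv = t * lw                                   # shear area, in^2
Vn = Acv * (alpha_c * lam * math.sqrt(fc) + rho_t * fy)   # nominal shear strength, lb
```

For these assumed inputs the nominal in-plane shear strength comes out to roughly 45 kips, illustrating how the concrete and reinforcement terms each contribute to the predicted capacity.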
Extremism among College and High School Students in Moscow: Diagnostics Features
In this day and age, extremism in its various forms of manifestation is a real threat to the world community, to the national security of a state and its territorial integrity, and to the constitutional rights and freedoms of citizens. Extremism is generally described as a commitment to extreme views and actions that radically deny existing social norms and rules. In ideological and political struggles, supporters of extremism often adopt the methods and means of psychological warfare, appealing not to reason and logical argument but to people's emotions and instincts, to prejudices, biases, and a variety of mythological constructs. They are dissatisfied with the established order and aim at increasing this dissatisfaction among the masses. Youth extremism holds a specific place among the existing forms and types of extremism. In this context, in 2015 we conducted a survey among Moscow college and high school students. The aim of this study was to determine how large the difference is in the understanding of and attitudes toward manifestations of extremism, in the inclination and readiness to take part in extremist activities, and in what causes this predisposition, if it exists. We performed a multivariate analysis of Russian college and high school students' opinions about the extremism and terrorism situation in our country, as well as their knowledge of these topics. Among other things, we showed that the level of aggressiveness of young people was not above the average for the whole population. The survey was conducted using the questionnaire method. The sample included randomly selected college and high school students in Moscow (642 and 382, respectively). The questionnaire was developed by specialists of the RUDN University Sociological Laboratory and included both original questions (projective questions, the technique of incomplete sentences) and the standard Dayhoff S. test to determine the level of internal aggressiveness. As an experiment, the FACS and SPAFF techniques were also used to determine psychotypes and non-verbal manifestations of emotions. The study confirmed the hypothesis that, in the respondents' opinion, the level of aggression is higher today than a few years ago. Differences were found between the two age groups of young people in the understanding of, and attitudes toward, such social phenomena as extremism and terrorism, and their danger and appeal. The theory of psychotypes, SPAFF (specific affect coding system) and FACS (facial action coding system) are considered as additional techniques for diagnosing a tendency toward extreme views. Thus, it is established that the diagnosis of acceptance of extreme views among young people is possible through the simultaneous use of knowledge from different fields of the socio-humanistic sciences. The results of the research can be used in a comparative context with other countries and as a starting point for further research in this field, given its extreme relevance.
Currency Exchange Rate Forecasts Using Quantile Regression
In this paper, we discuss a Bayesian approach to quantile autoregressive (QAR) time series model estimation and forecasting. Together with a forecast-combining technique, we then predict USD to GBP currency exchange rates. Combined forecasts contain all the information captured by the fitted QAR models at different quantile levels and are therefore better than those obtained from individual models. Our results show that an unequally weighted combining method performs better than other forecasting methodologies. We found that a median AR model can perform well in point forecasting when the predictive density functions are symmetric. However, in practice, using the median AR model alone may involve the loss of information about the data captured by other QAR models. We recommend that combined forecasts should be used whenever
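A frequentist sketch of the combining idea above: fit linear QAR(1) models at several quantile levels by minimizing the pinball loss (here via subgradient descent, standing in for the paper's Bayesian estimation), forecast one step ahead from each, and average the forecasts with unequal weights favoring the median. The series, quantile levels and weights below are all illustrative, not the paper's data or estimates.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic AR(1) series standing in for the exchange-rate data.
n = 500
y = np.zeros(n)
for i in range(1, n):
    y[i] = 0.8 * y[i - 1] + rng.normal(0, 0.1)

X = np.column_stack([np.ones(n - 1), y[:-1]])   # intercept + lag-1 regressor
target = y[1:]

def fit_qar(X, target, tau, lr=0.1, steps=3000):
    """Linear QAR(1) at quantile tau via subgradient descent on the pinball loss."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        u = target - X @ beta
        # Subgradient of the pinball loss: tau for u >= 0, tau - 1 otherwise.
        beta += lr * X.T @ np.where(u >= 0, tau, tau - 1.0) / len(u)
    return beta

taus = [0.1, 0.25, 0.5, 0.75, 0.9]
betas = {tau: fit_qar(X, target, tau) for tau in taus}

# One-step-ahead forecast from each quantile model, then an unequally
# weighted combination (weights assumed, favoring the median model).
x_new = np.array([1.0, y[-1]])
forecasts = np.array([betas[tau] @ x_new for tau in taus])
weights = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
combined = float(weights @ forecasts)
```

With symmetric errors the combined forecast stays close to the median model's point forecast, while the spread of the quantile forecasts retains the distributional information that a median-only model would discard.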