Partitioning Ordinary Differential Equations Using Multistep Block Methods
In this paper, adaptive partitioning techniques based on multistep block methods are developed for the numerical solution of ordinary differential equations (ODEs). The methods consist of an explicit Adams block method and two-point Block Backward Differentiation Formulas (BBDFs). The codes for the Adams block method and the BBDFs are combined into a single code. The partitioning of the ODEs is determined by calculating the eigenvalues of the given ODE systems. The proposed partitioning strategy is validated through numerical results on some standard problems from the literature, and comparisons are made with the non-partitioned approach. Numerical results are presented to demonstrate the advantage of implementing adaptive partitioning for ODEs.
Spatial Pattern and Predictors of Anaemia in Ethiopia
Anaemia is a condition in which the haemoglobin concentration falls below an established cut-off value due to a decrease in the number and size of red blood cells. The current study aimed to assess the spatial pattern of anaemia and identify its predictors using the third Ethiopian Demographic and Health Survey, conducted in 2010. To achieve this objective, the study took into account the clustered nature of the data; as a result, multilevel modeling was used in the statistical analysis. Only complete cases from 15,909 females and 13,903 males were considered. Among all subjects who agreed to a haemoglobin test, 5.49% of males and 19.86% of females were anaemic. In both binary and ordinal outcome modeling approaches, educational level, age, wealth index, BMI and HIV status were identified as significant predictors of anaemia prevalence. Furthermore, pregnant women were found to be more anaemic than non-pregnant women. As revealed by Moran's I test, significant spatial autocorrelation was noted across clusters. The risk of anaemia was found to vary across regions, with higher prevalence observed in the Somali and Affar regions.
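The Moran's I statistic mentioned above measures spatial autocorrelation of a variable across neighbouring clusters. A minimal sketch of the computation follows; the cluster rates and the binary weight matrix are illustrative toy values, not from the survey:

```python
import numpy as np

def morans_i(values, weights):
    """Moran's I statistic for spatial autocorrelation.

    values  : 1-D array of cluster-level rates (e.g. anaemia prevalence).
    weights : n x n spatial weight matrix (w[i, j] > 0 if clusters i and j
              are neighbours, 0 otherwise).
    """
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    z = x - x.mean()                       # deviations from the mean
    num = n * (w * np.outer(z, z)).sum()   # weighted cross-products
    den = w.sum() * (z ** 2).sum()
    return num / den

# Toy example: 4 clusters on a line; neighbouring clusters share similar rates,
# so the statistic is positive (positive spatial autocorrelation).
rates = np.array([0.30, 0.28, 0.10, 0.08])
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(round(morans_i(rates, w), 3))  # → 0.386
```

Values near +1 indicate clustering of similar rates, values near −1/(n−1) indicate no spatial pattern, and strongly negative values indicate dispersion.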
Parameter Estimation for the Mixture of Generalized Gamma Model
The mixture generalized gamma distribution is a combination of two distributions: the generalized gamma distribution and the length-biased generalized gamma distribution, both presented by Suksaengrakcharoen and Bodhisuwan in 2014. Its probability density function (pdf) is fairly complex, which makes parameter estimation difficult: the estimators cannot be obtained in closed form, so numerical estimation must be used. In this study, we present parameter estimation methods based on the expectation-maximization (EM) algorithm, the conjugate gradient method, and the quasi-Newton method. Data were generated by the acceptance-rejection method and used to estimate α, β, λ and p, where λ is the scale parameter, p is the weight parameter, and α and β are the shape parameters. A Monte Carlo study was used to assess the estimators' performance: sample sizes of 10, 30 and 100 were considered, and the simulations were repeated 20 times in each case. The effectiveness of the estimators was evaluated by the mean squared error and the bias. The findings reveal that the EM algorithm produces estimates close to the true values, while the maximum likelihood estimators obtained via the conjugate gradient and quasi-Newton methods are less precise than those obtained via the EM algorithm.
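The acceptance-rejection method used above for data generation can be sketched generically as follows. Since the abstract does not reproduce the mixture generalized gamma pdf, the Beta(2, 2) target below is only an illustration of the technique, with a uniform proposal and envelope constant M:

```python
import random

def acceptance_rejection(target_pdf, proposal_sample, proposal_pdf, M, n, rng=random):
    """Draw n samples from target_pdf by acceptance-rejection, assuming
    target_pdf(x) <= M * proposal_pdf(x) for all x."""
    out = []
    while len(out) < n:
        x = proposal_sample()               # candidate from the proposal
        u = rng.random()
        if u * M * proposal_pdf(x) <= target_pdf(x):
            out.append(x)                   # accept with prob f(x) / (M g(x))
    return out

# Illustration with a simple target: f(x) = 6x(1-x) on [0, 1] (Beta(2, 2)),
# uniform proposal on [0, 1], envelope constant M = 1.5 (the maximum of f).
random.seed(0)
samples = acceptance_rejection(lambda x: 6 * x * (1 - x),
                               random.random,
                               lambda x: 1.0,
                               1.5, 5000)
print(round(sum(samples) / len(samples), 2))  # the mean of Beta(2, 2) is 0.5
```

The same loop works for any target density once a proposal with a valid envelope constant is available; the acceptance rate here is 1/M ≈ 0.67.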
Weighted Least Absolute Deviations and Weighted Least Absolute Deviations Ridge Estimators for Seemingly Unrelated Regression Models
In this paper, we introduce four new estimators for the SUR model: the weighted least absolute deviations (WLAD), general weighted least absolute deviations (WGLAD), weighted least absolute deviations ridge (WLAD_Ridge) and general weighted least absolute deviations ridge (GWLAD_Ridge) estimators. The LAD and GLAD estimators are sensitive to leverage points, so WLAD and WGLAD are suitable alternatives to deal with this problem. On the other hand, the ridge estimator is used when the predictors are highly collinear. The WLAD_Ridge and GWLAD_Ridge estimators combine the attractive features of the WLAD and ridge estimators: they resist outliers and leverage points while simultaneously shrinking the coefficients to mitigate multicollinearity in the SUR model. The ridge parameter is chosen by a new robust criterion, least absolute deviations (LAD) cross-validation. A simulation study of the WLAD, WGLAD, WLAD_Ridge and GWLAD_Ridge estimators is conducted to determine their efficiency gains compared with the other estimators.
Identification of Shocks from Unconventional Monetary Policy Measures
After several prominent central banks, including the European Central Bank (ECB), the Federal Reserve System (Fed), the Bank of Japan and the Bank of England, employed unconventional monetary policies in the aftermath of the financial crisis of 2008-2009, the problem of identifying the effects of such policies became of great interest. One of the main difficulties in identifying shocks from unconventional monetary policy measures in structural VAR analysis is that they are often anticipated, which leads to a non-fundamental MA representation of the VAR model. Moreover, unconventional monetary policy actions may indirectly transmit to markets information about the future stance of the interest rate, which calls into question the plausibility of assuming orthogonality between shocks from unconventional and conventional policy measures. This paper offers a method of identification that takes these issues into account. The author uses factor-augmented VARs to enlarge the information set, together with identification through heteroskedasticity of the error terms and rank restrictions on the matrix of the errors' second moments to deal with the cross-correlation of the structural shocks.
Bayesian Estimation of Ruin Probability Based on Non-Homogeneous Poisson Process Claim Arrivals and Heavy-Tailed Distributed Claim Aggregates
The purpose of this article is to estimate the ruin probability at a future time T past a truncation time t, given that ruin has not occurred before the truncation time. It is assumed that claim arrivals follow a non-homogeneous Poisson process (NHPP) whose mean-value function is non-linear. The claim amount X is assumed to belong to the class of heavy-tailed distributions, such as the Inverse-Gaussian (IG). Gamma priors are used to estimate the parameters of the IG distribution as well as the parameters of the power-law intensity function. Based on the observed arrival times t₁,...,tₙ and claim amounts X₁,...,Xₙ prior to the truncation time t, all parameters associated with the aggregate risk process and the NHPP are estimated to compute the ruin probability. Simulation results are presented to assess the accuracy of the Bayes estimate of the ruin probability, and the accuracies of the Bayes and maximum likelihood estimates are compared.
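Claim arrivals from an NHPP with a power-law intensity can be simulated by thinning a homogeneous Poisson process. The parameterization λ(t) = (β/θ)(t/θ)^(β−1) below is one common power-law form and an assumption, since the abstract does not fix the notation:

```python
import random

def simulate_nhpp_power_law(beta, theta, T, rng=random):
    """Simulate arrival times on [0, T] of an NHPP with power-law intensity
    lambda(t) = (beta/theta) * (t/theta)**(beta - 1), via thinning.
    Assumes beta >= 1, so the intensity is non-decreasing on [0, T]."""
    lam_max = (beta / theta) * (T / theta) ** (beta - 1)  # intensity bound on [0, T]
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(lam_max)        # candidate from homogeneous PP
        if t > T:
            return arrivals
        lam_t = (beta / theta) * (t / theta) ** (beta - 1)
        if rng.random() <= lam_t / lam_max:  # accept with prob lambda(t)/lam_max
            arrivals.append(t)

random.seed(1)
times = simulate_nhpp_power_law(beta=1.5, theta=1.0, T=10.0)
# The expected number of claims is the mean-value function m(T) = (T/theta)**beta
print(len(times), round(10.0 ** 1.5, 1))
```

Attaching i.i.d. heavy-tailed claim amounts to each arrival time then yields one sample path of the aggregate risk process.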
An Iterative Family for Solution of System of Nonlinear Equations
This paper presents a family of iterative schemes for solving systems of nonlinear equations, which have wide application in science and engineering. The proposed family depends on parameters whose choices generate many different iterative schemes. The family is completely derivative-free and uses the first-order divided difference operator. Convergence analysis shows that the presented family has fourth-order convergence. The dynamical behaviour and the local convergence of the proposed family are also discussed. Numerical experiments are performed and compared with existing methods; the numerical performance and the comparison of convergence regions demonstrate that the proposed family is efficient.
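The paper's parametric family is not specified in the abstract, but the flavour of a derivative-free scheme built from the first-order divided difference operator can be illustrated with a classical Steffensen-type iteration, in which divided differences replace the Jacobian. The test system below is hypothetical:

```python
import numpy as np

def steffensen_system(F, x0, tol=1e-12, max_iter=50):
    """Derivative-free Steffensen-type iteration for F(x) = 0.
    The Jacobian is replaced column-by-column by first-order divided
    differences with steps h_j = F_j(x), so no analytic derivatives are needed."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        n = x.size
        J = np.empty((n, n))
        for j in range(n):
            h = fx[j] if fx[j] != 0 else tol
            e = np.zeros(n); e[j] = h
            J[:, j] = (F(x + e) - fx) / h     # divided difference column
        x = x - np.linalg.solve(J, fx)        # Newton-like correction
    return x

# Hypothetical test system: x^2 + y^2 = 1, x - y = 0  ->  (1/sqrt(2), 1/sqrt(2))
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1]])
root = steffensen_system(F, [0.8, 0.6])
print(np.round(root, 6))  # both components ≈ 0.707107
```

This basic scheme is only second-order; the paper's family adds extra sub-steps and parameters to reach fourth order while remaining derivative-free.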
An Economic Order Quantity Model for Deteriorating Items with Ramp Type Demand, Time Dependent Holding Cost and Price Discount Offered on Backorders
In our present work, an economic order quantity inventory model with shortages is developed in which the holding cost is a linearly increasing function of time and the demand rate is a ramp-type function of time. The items considered are deteriorating in nature, so a small fraction of the stock is depleted with the passage of time. To model a more realistic situation, the deterioration rate is assumed to follow a continuous uniform distribution whose parameters are triangular fuzzy numbers. The inventory manager offers customers a discount if they are willing to backorder their demand during a stock-out. The optimal ordering policy and the optimal discount offered on each backorder are determined by minimizing the total cost over a replenishment interval. To illustrate the proposed model in both the crisp and fuzzy settings and to provide richer insights, a numerical example is presented, together with a sensitivity analysis of the model parameters.
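When model parameters are triangular fuzzy numbers, the fuzzy cost is typically reduced to a crisp value through defuzzification. The graded mean integration representation below is one common choice (the abstract does not state which defuzzification the authors use), shown on a hypothetical fuzzy deterioration-rate parameter:

```python
def graded_mean(tfn):
    """Graded mean integration representation of a triangular fuzzy
    number (a, b, c): (a + 4b + c) / 6."""
    a, b, c = tfn
    return (a + 4 * b + c) / 6

# Hypothetical fuzzy deterioration rate: "about 0.05 per unit time"
theta = (0.03, 0.05, 0.07)
print(graded_mean(theta))
```

For a symmetric triangular number like this one the graded mean coincides with the modal value; asymmetric numbers pull the crisp value toward the longer tail.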
Imperfect Production Inventory Model with Inspection Errors and Fuzzy Demand and Deterioration Rates
Our work presents an inventory model that describes imperfect production and imperfect inspection processes for deteriorating items. A cost-minimizing model is studied considering two types of inspection errors: a Type I error of falsely screening out a proportion of non-defective items, thereby passing them on for rework, and a Type II error of failing to screen out a proportion of defective items, thus selling them to customers and incurring a penalty cost. The screened items are reworked; however, no returns are accepted owing to the deteriorating nature of the items. In practice, parameters such as the demand rate and the deterioration rate cannot be determined precisely, and they are therefore assumed to be triangular fuzzy numbers in our model. We calculate the optimal lot size that minimizes the total inventory cost for both the crisp and the fuzzy models. A numerical example illustrates the procedure, followed by an analysis of the sensitivity of the various parameters on the decision variable and the objective function.
Numerical Solution of Space Fractional Order Solute Transport System
In the present article, an attempt is made to compute the solution of the space fractional order advection-dispersion equation with a source/sink term and given initial and boundary conditions. The equation is converted into a system of ordinary differential equations using second-kind shifted Chebyshev polynomials, which is then solved using the finite difference method. The striking feature of the article is the faster transport of solute concentration as the system moves from the standard (integer) order to fractional order, for specified values of the system parameters.
Numerical Solution of Porous Media Equation Using Jacobi Operational Matrix
In the modeling of transport phenomena in porous media, many nonlinear partial differential equations (NPDEs) are encountered that describe the convection, diffusion and reaction processes. Solving such nonlinear problems requires a reliable and efficient technique. In this article, the numerical solution of NPDEs encountered in porous media is derived. The Jacobi collocation method is used to convert the NPDEs into systems of nonlinear algebraic equations, which are then solved using the Newton-Raphson method. The numerical results of some illustrative examples are reported to show the efficiency and high accuracy of the proposed approach. Comparison of the numerical results with the analytical results already reported in the literature, together with the error analysis for each example presented through graphs and tables, confirms the exponential convergence rate of the proposed method.
Machine Learning Methods for the Prediction of Claim Probability
Machine learning is a rapidly growing field at the intersection of computer science and statistics that uses learning algorithms to model the relationship between an outcome and predictors. The probability of claim occurrence is one of the main components used to estimate the insurance risk premium, and it is generally estimated using classical approaches such as logistic regression. The objective of this study is to use various machine learning techniques for the prediction of claim probability and to compare their predictive performances on a health insurance data set from a Turkish insurance company. Classification trees, bagging, random forests, boosting and neural networks are used as alternatives to the classical logistic regression model. A two-year data set is used for the case study: one year for model fitting and the other for prediction. The predictive performances of the different methods are compared using statistical measures derived from confusion matrices, and the results are discussed.
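Comparing classifiers via confusion matrices reduces to a few standard ratios over the four cell counts. A minimal sketch with hypothetical claim/no-claim labels (the abstract does not list which measures the authors report, so accuracy, sensitivity and specificity are shown as typical examples):

```python
def confusion_metrics(y_true, y_pred):
    """Accuracy, sensitivity and specificity from a binary confusion matrix
    (1 = claim occurred, 0 = no claim)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {"accuracy": (tp + tn) / len(y_true),
            "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
            "specificity": tn / (tn + fp) if tn + fp else float("nan")}

# Hypothetical hold-out-year labels and one model's predictions
y_true = [1, 0, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 0, 0]
print(confusion_metrics(y_true, y_pred))  # accuracy 0.75, sens. 2/3, spec. 0.8
```

Computing the same dictionary for each fitted model on the prediction-year data gives a like-for-like comparison across methods.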
Existence of Random Fixed Point Theorem for Contractive Mappings
Random fixed point theory has received much attention in recent years, and it is needed for the study of various classes of random equations. The study of random fixed point theorems was initiated by the Prague school of probabilists in the 1950s. Establishing the existence and uniqueness of fixed points for self-maps of a metric space by altering the distances between points with a control function is an interesting aspect of classical fixed point theory; such a control function, which alters the distance between two points in a metric space, is called an altering distance function. In this paper, we prove the existence and uniqueness of a random common fixed point for a pair of random mappings satisfying a weakly contractive condition with a generalized altering distance function in Polish spaces.
Comparative Study of Estimators of Population Means in Two Phase Sampling in the Presence of Non-Response
A comparative study of estimators of population means in two-phase sampling in the presence of non-response is carried out for the case where the population means of the auxiliary variable(s) are unknown and the information on the study variable y as well as on the auxiliary variable(s) is incomplete. Three real data sets, on university students, hospitals and unemployment, are used to compare all the available two-phase sampling techniques under non-response with the newly generalized ratio estimators.
On Generalized Cumulative Past Inaccuracy Measure for Marginal and Conditional Lifetimes
Recently, the notion of the cumulative past inaccuracy (CPI) measure has been proposed in the literature as a generalization of cumulative past entropy (CPE) in both the univariate and bivariate setups. In this paper, we introduce the notion of CPI of order α and study the proposed measure for conditionally specified models of two components that failed at different time instants, called the generalized conditional CPI (GCCPI). We provide some bounds using the usual stochastic order and investigate several properties of GCCPI. The effect of monotone transformations on the proposed measure is also examined. Furthermore, we characterize some bivariate distributions under the assumption of the conditional proportional reversed hazard rate model. Finally, the role of GCCPI in reliability modeling is investigated for a real-life problem.
A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model
In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. First, the Firefly Algorithm, a stochastic global optimization technique inspired by the flashing behaviour of fireflies, is applied in a histogram-based search for cluster means; in this context it determines the number of clusters and the corresponding cluster means. These means are then used to initialize the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is a weighted sum of Gaussian component densities, whose parameters are estimated with the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as the prior probabilities of each component. Applying Bayes' rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears solid and reliable even when applied to complex grayscale images. Validation was performed using several standard measures: the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK) and the Davies-Bouldin (DB) index. The results strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of the methodology is the use of the maxima of the responsibilities for pixel assignment, which considerably reduces the computational cost.
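The EM stage of this pipeline, applied to one-dimensional gray-level intensities, can be sketched as follows. Here the initial means stand in for the output of the firefly/histogram stage, and the synthetic intensities are illustrative:

```python
import numpy as np

def em_gmm_1d(x, means, n_iter=100):
    """EM for a 1-D Gaussian mixture; `means` are the initial cluster means
    (in the paper these come from the firefly/histogram step; any sensible
    initial guess works for this sketch)."""
    x = np.asarray(x, dtype=float)
    k = len(means)
    mu = np.asarray(means, dtype=float)
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities (posterior component probabilities)
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means and variances
        nk = resp.sum(axis=0)
        pi = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    labels = resp.argmax(axis=1)  # pixel-to-cluster assignment via maximum posterior
    return mu, labels

rng = np.random.default_rng(0)
# Synthetic "grayscale" intensities: a dark cluster near 60, a bright one near 180
x = np.concatenate([rng.normal(60, 10, 500), rng.normal(180, 15, 500)])
mu, labels = em_gmm_1d(x, means=[50, 200])
print(np.round(np.sort(mu), 1))
```

The final `argmax` over responsibilities is the maximum-posterior pixel assignment the abstract describes; working on the intensity histogram rather than per pixel is what keeps the cost low.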
Numerical Solution of Two-Dimensional Solute Transport System Using Operational Matrices
In this study, the numerical solution of a two-dimensional solute transport system in a homogeneous porous medium of finite length is obtained. The considered transport system has terms accounting for advection, dispersion and first-order decay, with first-type boundary conditions. Initially, the aquifer is considered solute-free, and a constant input concentration is imposed at the inlet boundary. The solution describes the solute concentration in the rectangular inflow region of the homogeneous porous medium. The numerical solution is derived using the spectral collocation method. The numerical computations and graphical presentations show that the method is effective and reliable for solving the physical model with complicated boundary conditions, even in the presence of the reaction term.
Numerical Solution of Space Fractional Order Linear/Nonlinear Reaction-Advection Diffusion Equation Using Jacobi Polynomial
Fractional calculus plays an important role in the modelling of many physical problems and engineering processes, which are well described by fractional differential equations (FDEs); hence a reliable and efficient technique for solving such FDEs is needed. In this article, a numerical solution is derived for a class of fractional differential equations, namely space fractional order reaction-advection-dispersion equations subject to initial and boundary conditions. In the proposed approach, shifted Jacobi polynomials are used to approximate the solutions, together with the shifted Jacobi operational matrix of fractional order and the spectral collocation method. The main advantage of this approach is that it converts such problems into systems of algebraic equations, which are easier to solve. The proposed approach is effective for both linear and non-linear FDEs. To show its reliability, validity and high accuracy, the numerical results of some illustrative examples are reported and compared with the analytical results already available in the literature. The error analysis for each case, presented through graphs and tables, confirms the exponential convergence rate of the proposed method.
Weighted Rank Regression with Adaptive Penalty Function
The use of regularization in statistical methods has become popular. The least absolute shrinkage and selection operator (LASSO) framework has become the standard tool for sparse regression. However, it is well known that the LASSO is sensitive to outliers and leverage points. We consider a new robust estimator composed of a weighted loss function of the pairwise differences of residuals and an adaptive penalty function that regulates the tuning parameter for each variable. Rank regression is resistant to regression outliers, but not to leverage points; by adopting a weighted loss function, the proposed method is also robust to leverage points in the predictor variables. Furthermore, the adaptive penalty function yields good statistical properties in variable selection, such as the oracle property and consistency. We develop an efficient algorithm to compute the proposed estimator using basic functions in R, with the tuning parameter selected by the Bayesian information criterion (BIC). Numerical simulations show that the proposed estimator is effective for analyzing both real and contaminated data sets.
Bivariate Generalization of q-α-Bernstein Polynomials
We propose the q-analogue of the α-Bernstein-Kantorovich operators and then introduce the q-bivariate generalization of these operators to study the approximation of functions of two variables. We obtain the rate of convergence of these bivariate operators by means of the total modulus of continuity, the partial moduli of continuity and Peetre's K-functional for continuous functions. Further, in order to study the approximation of functions of two variables in a space larger than the space of continuous functions, namely the Bögel space, the GBS (Generalized Boolean Sum) of the q-bivariate operators is considered, and the degree of approximation is discussed for Bögel continuous and Bögel differentiable functions with the aid of the Lipschitz class and the mixed modulus of smoothness.
Durrmeyer Type Modification of q-Generalized-Bernstein Operators
The purpose of this paper is to introduce the Durrmeyer type modification of the q-generalized-Bernstein operators, which include the Bernstein polynomials in the particular case α = 0. We investigate the rate of convergence by means of the Lipschitz class and Peetre's K-functional. We also define the bivariate case of the Durrmeyer type modification of the q-generalized-Bernstein operators and study the degree of approximation with the aid of the partial moduli of continuity and Peetre's K-functional. Finally, we introduce the GBS (Generalized Boolean Sum) of the Durrmeyer type modification of the q-generalized-Bernstein operators and investigate the approximation of Bögel continuous and Bögel differentiable functions with the aid of the Lipschitz class and the mixed modulus of smoothness.
Rings Characterized by Classes of Rad-plus-Supplemented Modules
In this paper, we introduce weak* Rad-plus-supplemented and cofinitely weak* Rad-plus-supplemented modules and establish various of their properties over some special kinds of rings, in particular artinian serial rings and semiperfect rings. We also prove that a ring R is artinian serial if and only if every right and left R-module is weak* Rad-plus-supplemented. We provide a counterexample showing that weak* Rad-plus-supplemented modules are a proper generalization of plus-supplemented and Rad-plus-supplemented modules. Furthermore, as an application of these results, we characterize semisimple rings, artinian principal ideal rings, semilocal rings, semiperfect rings, perfect rings, commutative noetherian rings and Dedekind domains in terms of weak* Rad-plus-supplemented modules.
Asymptotic Expansion of the Korteweg-de Vries-Burgers Equation
It is common knowledge that many physical problems (such as non-linear shallow-water waves and wave motion in plasmas) can be described by the Korteweg-de Vries (KdV) equation, which possesses certain special solutions known as solitary waves or solitons. As a marriage of the KdV equation and the classical Burgers equation, the Korteweg-de Vries-Burgers (KdVB) equation is a mathematical model of waves on shallow water surfaces in the presence of viscous dissipation. Asymptotic analysis is a method of describing limiting behavior and is a key tool for exploring the differential equations that arise in the mathematical modeling of real-world phenomena. Using variable transformations, the asymptotic expansion of the KdVB equation is presented in this paper. The asymptotic expansion may provide a good gauge for validating the corresponding numerical scheme.
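For reference, the KdVB equation discussed above is commonly written as follows; coefficient conventions vary across the literature, so this normalization is only one common choice:

```latex
u_t + \varepsilon\, u u_x - \nu\, u_{xx} + \mu\, u_{xxx} = 0, \qquad \nu > 0,
```

where ν is the viscous dissipation coefficient and μ the dispersion coefficient; setting ν = 0 recovers the KdV equation, while μ = 0 gives the Burgers equation.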
Theoretical Analysis of the Exiting Sheet Thickness in the Calendering of Non-Newtonian Material Using Lubrication Approximation Theory
Calendering is the mechanical process of smoothing and compressing a molten material by passing it through a number of pairs of heated rolls in order to produce a sheet of desired thickness. The rolls used in combination are called calenders, a term derived from kylindros, the Greek word for cylinder. It is, in effect, the finishing process used on cloth, paper, textiles, leather, plastic film and so on: a mechanism used to enhance surface properties, reduce sheet thickness, and produce special effects such as a glaze or polish. It has a wide variety of industrial applications in the manufacturing of textile fabrics, coated fabrics and plastic sheeting, providing the desired surface finish and texture. An analysis is presented for the calendering of a pseudoplastic material. The lubrication approximation theory (LAT) is used to simplify the equations of motion. To investigate the nature of the steady solutions that exist, we use a combination of exact solutions and numerical methods. The expressions for the velocity profile, volumetric flow rate and pressure gradient are found as exact solutions. Furthermore, quantities of engineering interest, such as the pressure distribution, the roll-separating force, and the power transmitted to the fluid by the rolls, are also computed. Some results are shown graphically while others are given in tabulated form. It is found that the non-Newtonian parameter and the Reynolds number serve as the controlling parameters for the calendering process.
Approximate Solution for the Nonlinear Riccati Differential Equation Using the Non-Perturbation Method
The Riccati equation is widely used in designing and analyzing linear and nonlinear optimal control processes. In this paper, we present an analytical solution for the quadratic Riccati differential equation by He's Variational Iteration Method, which is relatively efficient and well suited to such problems compared with classical methods. First, a correction functional is constructed with the help of a general Lagrange multiplier on the basis of variational theory, and then the solution of the Riccati equation is found without any unphysical restrictive assumptions. A comparison of the approximate solution with the exact solution is also given; absolute errors between 3.28×10⁻⁶ and 8.74×10⁻⁸ are obtained by the proposed solution. Our results show that the proposed method is very efficient and simple compared with existing classical methods.
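For a quadratic Riccati equation of the form u′(t) = a(t) + b(t)u(t) + c(t)u²(t), the correction functional of the variational iteration method takes the general form below; the coefficient functions a, b, c are placeholders, since the abstract does not specify the particular equation solved:

```latex
u_{n+1}(t) = u_n(t) + \int_0^t \lambda(s)\Big[\, u_n'(s) - a(s) - b(s)\,\tilde{u}_n(s) - c(s)\,\tilde{u}_n^2(s) \Big]\, ds,
```

where the tilde marks a restricted variation; for this first-order equation the optimal Lagrange multiplier identified via variational theory is λ(s) = −1, and iterating from a suitable u₀ yields successive approximations to the solution.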
Estimation of Population Mean under Random Non-Response in Two-Phase Successive Sampling
In this paper, we consider the problem of estimating the population mean on the current (second) occasion in the presence of random non-response in two-occasion successive sampling under a two-phase set-up. Modified exponential-type estimators are proposed, and their properties are studied under the assumption that the number of sampling units follows a suitable distribution arising from the random non-response. The performances of the proposed estimators are compared with a linear combination of two estimators, (a) the sample mean estimator for the fresh sample and (b) the ratio estimator for the matched sample, under complete response. Results are demonstrated through empirical studies, which show the effectiveness of the proposed estimators, and suitable recommendations are made for survey practitioners.
Bulk Viscous Bianchi Type V Cosmological Model with Time Dependent Gravitational Constant and Cosmological Constant in General Relativity
In this paper, we investigate a bulk viscous Bianchi type V cosmological model with time-dependent gravitational constant and cosmological constant in general relativity, assuming ξ(t) = ξ₀pᵐ, where ξ₀ and m are constants. We also assume a variation law for the Hubble parameter, H(R) = a(R⁻ⁿ + 1), where a > 0 and n > 1 are constants. Two universe models are obtained, and their physical behavior is discussed. For n = 1 the universe starts from a singular state, whereas for n = 0 the cosmology follows a non-singular state. The presence of bulk viscosity increases the value of the matter density.
Static and Dynamical Analysis on Clutch Discs on Different Material and Geometries
This paper presents static and cyclic stress analyses, combined with fatigue analysis, resulting from the loads applied to the friction discs commonly used in industrial clutches. The material chosen to simulate the friction discs under load is aluminum. The numerical simulation was carried out with the COMSOL™ Multiphysics software. The results obtained for static loads showed sufficient stiffness for both geometries and the material used. From the fatigue standpoint, on the other hand, failure is clearly verified, which demonstrates the importance of both approaches, especially the dynamical analysis. The results and conclusions are based on the stresses on the disc, the counted stress cycles and the fatigue usage factor.
On Chvátal's Conjecture for the Hamiltonicity of 1-Tough Graphs and Their Complements
Graph toughness and the associated cycle structure have attracted much attention and given rise to extensive work since Chvátal introduced the concept in 1973. Among the seven conjectures posed there, far fewer results have been published on the one relating the existence of a hamiltonian cycle in any 1-tough graph to its complement graph. In this paper, we show that the conjecture does not hold in general: it is true only for graphs with six or seven vertices and is false for graphs with eight or more vertices. A new theorem is derived as a correction of the conjecture.
Theorem on the Inconsistency of Classical Logic
This abstract concerns an extremely fundamental issue, namely the fundamental problem of science: the issue of consistency. We present a theorem saying that the classical calculus of quantifiers is inconsistent in the traditional sense. We first introduce notation and then recall the definition of consistency in the traditional sense. S1 is the set of all well-formed formulas of the calculus of quantifiers, and RS1 denotes the set of all rules over the set S1. Cn(R, X) is the set of all formulas standardly provable from X by the rules R, where R is a subset of RS1 and X is a subset of S1. The couple <R, X> is called a system whenever R is a subset of RS1 and X is a subset of S1. Definition: the system <R, X> is consistent in the traditional sense if there is no formula in the set S1 such that both the formula and its negation are provable from X using the rules in R. Finally, <R0+, L2> denotes the classical calculus of quantifiers, where R0+ consists of Modus Ponens and the generalization rule, and L2 is the set of all formulas valid in the classical calculus of quantifiers. Main result: the system <R0+, L2> is inconsistent in the traditional sense.