
Open Science Index

Commenced in January 2007. Frequency: Monthly. Edition: International. Paper Count: 15.

15
10009769
Multilevel Arnoldi-Tikhonov Regularization Methods for Large-Scale Linear Ill-Posed Systems
Abstract:
This paper is devoted to the numerical solution of large-scale linear ill-posed systems. A multilevel regularization method is proposed, based on a synthesis of the Arnoldi-Tikhonov regularization technique and the multilevel technique. We show that if the Arnoldi-Tikhonov method is a regularization method, then the multilevel method is also a regularization method. Numerical experiments presented in this paper illustrate the effectiveness of the proposed method.
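The single-level Arnoldi-Tikhonov building block (not the multilevel scheme itself, which is not detailed in the abstract) can be sketched as follows: the Arnoldi process projects the large matrix onto a small Krylov subspace, and Tikhonov regularization is applied to the projected problem. The test matrix, data, Krylov dimension and regularization parameter below are illustrative assumptions.

```python
import numpy as np

def arnoldi(A, b, k):
    """Arnoldi process: V (n x (k+1)) has orthonormal columns, H ((k+1) x k)
    is upper Hessenberg, and A @ V[:, :k] = V @ H."""
    n = len(b)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def arnoldi_tikhonov(A, b, k=15, lam=1e-4):
    """Tikhonov solution restricted to the k-dimensional Krylov subspace:
    min_y ||H y - ||b|| e1||^2 + lam ||y||^2, then x = V[:, :k] y."""
    V, H = arnoldi(A, b, k)
    rhs = np.zeros(k + 1)
    rhs[0] = np.linalg.norm(b)
    y = np.linalg.solve(H.T @ H + lam * np.eye(k), H.T @ rhs)
    return V[:, :k] @ y

# Illustrative ill-posed system: a discretized smoothing kernel with noisy data.
n = 200
t = np.linspace(0, 1, n)
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / 0.01) / n
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-4 * np.random.randn(n)
x_reg = arnoldi_tikhonov(A, b)
```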
14
10004093
Analysis and Simulation of TM Fields in Waveguides with Arbitrary Cross-Section Shapes by Means of Evolutionary Equations of Time-Domain Electromagnetic Theory
Abstract:
The boundary value problem on non-canonical, arbitrarily shaped contours is solved with a numerically effective method called the Analytical Regularization Method (ARM) to calculate the propagation parameters. As a result of the regularization, the equation of the first kind is reduced to an infinite system of linear algebraic equations of the second kind in the space L2. This system can be solved numerically to any desired accuracy using the truncation method. Parameters such as the cut-off wavenumber and cut-off frequency are then used in the waveguide evolutionary equations of time-domain electromagnetic theory to illustrate the real-valued TM fields in lossy and lossless media.
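The truncation step can be illustrated generically: a second-kind system (I + K)x = b with a rapidly decaying kernel is truncated to finite order N and solved directly, increasing N until the leading coefficients stabilize. The kernel and right-hand side below are illustrative placeholders, not the ARM kernel of the paper.

```python
import numpy as np

def solve_truncated(kernel, rhs, N):
    """Solve the truncated second-kind system (I + K_N) x = b_N."""
    idx = np.arange(1, N + 1)
    K = kernel(idx[:, None], idx[None, :])
    return np.linalg.solve(np.eye(N) + K, rhs(idx))

# Illustrative rapidly decaying kernel and right-hand side.
kernel = lambda m, n: 1.0 / (m ** 2 + n ** 2)
rhs = lambda m: 1.0 / m

x_prev = solve_truncated(kernel, rhs, 10)
for N in (20, 40, 80):
    x = solve_truncated(kernel, rhs, N)
    # Compare leading coefficients to check convergence of the truncation.
    print(N, np.max(np.abs(x[:10] - x_prev[:10])))
    x_prev = x
```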
13
10004459
Numerical Applications of Tikhonov Regularization for the Fourier Multiplier Operators
Abstract:
Tikhonov regularization and reproducing kernels are among the most popular approaches for solving ill-posed problems in computational mathematics and its applications. Fourier multiplier operators are an essential tool for extending well-known linear transforms in Euclidean Fourier analysis, such as the Weierstrass transform, Poisson integral, Hilbert transform, Riesz transforms, Bochner-Riesz mean operators, partial Fourier integral, Riesz potential, and Bessel potential. Using the theory of reproducing kernels, we construct simple and efficient representations for a class of Fourier multiplier operators Tm on the Paley-Wiener space Hh. In addition, we give an error estimate formula for the approximation and obtain convergence results as the parameters and the independent variables approach zero. Furthermore, using numerical quadrature rules to compute single and multiple integrals, we give numerical examples and write explicitly the extremal function and the corresponding Fourier multiplier operators.
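As a hedged numerical illustration (not the reproducing-kernel construction of the paper), the inversion of a Fourier multiplier operator can be Tikhonov-regularized directly in the discrete Fourier domain with the filter conj(m)/(|m|^2 + lam). The Weierstrass-type Gaussian multiplier, the test signal, the noise level and lam are assumptions.

```python
import numpy as np

n = 512
x = np.linspace(-10, 10, n, endpoint=False)
xi = np.fft.fftfreq(n, d=x[1] - x[0]) * 2 * np.pi

m = np.exp(-xi ** 2 / 4)              # Weierstrass-type Gaussian multiplier (illustrative)
f = np.exp(-x ** 2) * np.cos(3 * x)   # test signal

# Forward multiplier operator T_m and noisy data.
g = np.fft.ifft(m * np.fft.fft(f)).real
g_noisy = g + 1e-4 * np.random.randn(n)

# Tikhonov-regularized inversion in the Fourier domain.
lam = 1e-4
f_rec = np.fft.ifft(np.conj(m) * np.fft.fft(g_noisy) / (np.abs(m) ** 2 + lam)).real
print("max reconstruction error:", np.max(np.abs(f_rec - f)))
```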
12
10003720
Efficient Tuning Parameter Selection by Cross-Validated Score in High Dimensional Models
Abstract:
As DNA microarray data contain relatively small sample sizes compared to the number of genes, high-dimensional models are often employed. In high-dimensional models, the selection of the tuning parameter (or penalty parameter) is often one of the crucial parts of the modeling. Cross-validation is one of the most common methods for tuning parameter selection; it selects the parameter value with the smallest cross-validated score. However, selecting a single 'optimal' value for the parameter can be very unstable due to sampling variation, since the sample sizes of microarray data are often small. Our approach is first to choose multiple candidates for the tuning parameter and then to average the candidates with weights that depend on their performance. The additional step of estimating the weights and averaging the candidates barely increases the computational cost, while it can considerably improve on traditional cross-validation. Using real and simulated data sets, we show that the values selected by the suggested methods often lead to more stable parameter selection as well as improved detection of significant genetic variables compared to traditional cross-validation.
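A minimal sketch of the idea under simple assumptions: take the best few lambda candidates from a lasso cross-validation path and average them with weights derived from their cross-validated errors, instead of committing to the single minimizer. The inverse-error weighting rule, the data, and the candidate grid are illustrative; the paper's exact weighting scheme is not given in the abstract.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p = 50, 200                      # small sample, many variables (microarray-like)
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:5] = 2.0
y = X @ beta + rng.standard_normal(n)

lambdas = np.logspace(-2, 0, 20)
cv_err = np.zeros(len(lambdas))
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    for i, lam in enumerate(lambdas):
        model = Lasso(alpha=lam).fit(X[train], y[train])
        cv_err[i] += np.mean((y[test] - model.predict(X[test])) ** 2)

# Keep the K best candidates and average them with weights inversely
# proportional to their cross-validated error (illustrative weighting).
K = 5
best = np.argsort(cv_err)[:K]
w = 1.0 / cv_err[best]
w /= w.sum()
lam_avg = np.sum(w * lambdas[best])
print("single best:", lambdas[np.argmin(cv_err)], "weighted average:", lam_avg)
```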
11
10003879
Variable Regularization Parameter Normalized Least Mean Square Adaptive Filter
Abstract:
We present a normalized LMS (NLMS) algorithm with robust regularization. Unlike conventional NLMS with a fixed regularization parameter, the proposed approach dynamically updates the regularization parameter. By exploiting a gradient descent direction, we derive a computationally efficient and robust update scheme for the regularization parameter. In simulations, we demonstrate that the proposed algorithm outperforms conventional NLMS algorithms in terms of convergence rate and misadjustment error.
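A hedged sketch of an NLMS filter whose regularization parameter delta is adapted by a gradient-type rule on the instantaneous squared error rather than held fixed. The specific gradient form, adaptation gain, and clipping below are assumptions, since the abstract does not give the exact update; the filter length and data are illustrative.

```python
import numpy as np

def nlms_var_reg(x, d, M=16, mu=0.5, delta0=1e-2, rho=1e-4,
                 delta_min=1e-6, delta_max=1.0):
    """NLMS with an adaptively updated regularization parameter delta
    (illustrative gradient-descent form; gains and clipping are assumptions)."""
    w = np.zeros(M)
    delta = delta0
    prev = None                       # (error, input vector, denominator) of previous step
    for k in range(M - 1, len(x)):
        u = x[k - M + 1:k + 1][::-1]  # most recent M samples, newest first
        e = d[k] - w @ u
        if prev is not None:
            e_p, u_p, den_p = prev
            # Approximate d(e^2)/d(delta) from the previous weight update.
            grad = 2.0 * mu * e * e_p * (u @ u_p) / den_p ** 2
            delta = np.clip(delta - rho * grad, delta_min, delta_max)
        den = u @ u + delta
        w += mu * e * u / den
        prev = (e, u, den)
    return w

# Illustrative system-identification run.
rng = np.random.default_rng(1)
h = rng.standard_normal(16)
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.standard_normal(len(x))
w_hat = nlms_var_reg(x, d)
```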
10
10003880
Affine Projection Adaptive Filter with Variable Regularization
Abstract:
We propose two affine projection algorithms (APA) with a variable regularization parameter. The proposed algorithms dynamically update the regularization parameter, which is fixed in the conventional regularized APA (R-APA), using a gradient-descent-based approach. By introducing a normalized gradient, the proposed algorithms yield an efficient and robust update scheme for the regularization parameter. Through experiments we demonstrate that the proposed algorithms outperform the conventional R-APA in terms of convergence rate and misadjustment error.
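The conventional R-APA update that these algorithms start from can be sketched as follows; the proposed variable-regularization rule itself is not given in the abstract, so delta is kept fixed here and a comment marks where the adaptive update would enter. Filter length, projection order, and data are illustrative.

```python
import numpy as np

def regularized_apa(x, d, M=16, P=4, mu=0.5, delta=1e-2):
    """Conventional regularized APA (R-APA). The paper adapts `delta`
    with a normalized-gradient rule at each iteration; here it is fixed."""
    w = np.zeros(M)
    for k in range(M + P - 1, len(x)):
        # X: M x P matrix of the P most recent input vectors; D: matching desired samples.
        X = np.column_stack([x[k - j - M + 1:k - j + 1][::-1] for j in range(P)])
        D = np.array([d[k - j] for j in range(P)])
        e = D - X.T @ w
        # (A variable-regularization APA would update delta here before the solve.)
        w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(P), e)
    return w

rng = np.random.default_rng(2)
h = rng.standard_normal(16)
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.standard_normal(len(x))
w_hat = regularized_apa(x, d)
```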
9
9999188
In Search of Robustness and Efficiency via l1- and l2-Regularized Optimization for Physiological Motion Compensation
Abstract:

Compensating physiological motion in the context of minimally invasive cardiac surgery has become an attractive issue, since such surgery outperforms traditional cardiac procedures and offers remarkable benefits. Owing to space restrictions, computer vision techniques have proven to be the most practical and suitable solution. However, the lack of robustness and efficiency of existing methods makes physiological motion compensation an open and challenging problem. This work focuses on increasing robustness and efficiency by exploring the classes of l1- and l2-regularized optimization, emphasizing the use of explicit regularization. Both approaches are based on natural features of the heart and use intensity information. The results point to the l1-regularized optimization class as the best, since it offered the lowest computational cost and the smallest average error, and it proved to work even under complex deformations.
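A generic sketch of the two regularization classes on a least-squares parameter-estimation problem (not the paper's intensity-based heart-motion formulation): the l2 problem has a closed form, while the l1 problem is solved with a few ISTA (proximal gradient) iterations. The design matrix, true parameters, and lambda are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((100, 30))                              # illustrative design matrix
p_true = np.zeros(30); p_true[[2, 7, 11]] = [1.0, -0.5, 0.8]    # sparse "motion" parameters
b = A @ p_true + 0.05 * rng.standard_normal(100)
lam = 0.1

# l2 (Tikhonov/ridge): closed-form solution.
p_l2 = np.linalg.solve(A.T @ A + lam * np.eye(30), A.T @ b)

# l1 (lasso-type): ISTA, i.e. gradient step followed by soft-thresholding.
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the data-fit gradient
p_l1 = np.zeros(30)
for _ in range(500):
    z = p_l1 - A.T @ (A @ p_l1 - b) / L
    p_l1 = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print("l2 error:", np.linalg.norm(p_l2 - p_true),
      "l1 error:", np.linalg.norm(p_l1 - p_true))
```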

8
16147
Normalization Discriminant Independent Component Analysis
Abstract:

In face recognition, feature extraction techniques attempt to search for an appropriate representation of the data. However, when the feature dimension is larger than the sample size, performance degrades. Hence, we propose a method called Normalization Discriminant Independent Component Analysis (NDICA). The input data are regularized to obtain the most reliable features and then processed using Independent Component Analysis (ICA). The proposed method is evaluated on three face databases: Olivetti Research Ltd (ORL), Face Recognition Technology (FERET) and Face Recognition Grand Challenge (FRGC). NDICA showed its effectiveness compared with other unsupervised and supervised techniques.
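A hedged sketch of the general pipeline described (regularize/normalize the data, then apply ICA). The regularization step here is a simple ridge-type shrinkage of the sample covariance before whitening, which is an assumption rather than the paper's exact normalization, and the data are random stand-ins for face images.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
X = rng.standard_normal((40, 1024))       # 40 "images" of 1024 pixels (illustrative)

# Centre the data and shrink the sample covariance towards the identity
# to stabilise the small-sample / high-dimension case before whitening.
Xc = X - X.mean(axis=0)
cov = Xc @ Xc.T / Xc.shape[1]
alpha = 0.1
cov_reg = (1 - alpha) * cov + alpha * (np.trace(cov) / cov.shape[0]) * np.eye(cov.shape[0])
evals, evecs = np.linalg.eigh(cov_reg)
X_white = np.diag(1.0 / np.sqrt(evals)) @ evecs.T @ Xc

# Independent Component Analysis on the regularized, whitened data.
ica = FastICA(n_components=20, random_state=0)
features = ica.fit_transform(X_white)     # (40 samples, 20 independent components)
```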

7
16400
Sample-Weighted Fuzzy Clustering with Regularizations
Abstract:

Although much research in cluster analysis has considered feature weights, little effort has been made regarding sample weights. Recently, Yu et al. (2011) considered a probability distribution over a data set to represent its sample weights and then proposed sample-weighted clustering algorithms. In this paper, we give a sample-weighted version of generalized fuzzy clustering regularization (GFCR), called the sample-weighted GFCR (SW-GFCR). Experimental results and comparisons demonstrate that the proposed SW-GFCR is more effective than most clustering algorithms.
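A minimal sketch of sample-weighted fuzzy c-means (without the additional regularization term of GFCR): sample weights w_k enter the cluster-centre update, while the memberships follow the usual FCM rule. The uniform weight distribution and the synthetic data are illustrative assumptions.

```python
import numpy as np

def sw_fcm(X, w, c=3, m=2.0, iters=100, eps=1e-10):
    """Sample-weighted fuzzy c-means. X: (n, d) data; w: (n,) sample weights."""
    n, _ = X.shape
    rng = np.random.default_rng(0)
    V = X[rng.choice(n, c, replace=False)]               # initial centres
    for _ in range(iters):
        D = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) ** 2 + eps
        U = (1.0 / D) ** (1.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)                 # memberships, rows sum to 1
        Wm = w[:, None] * U ** m                          # sample-weighted fuzzified memberships
        V = (Wm.T @ X) / Wm.sum(axis=0)[:, None]          # weighted centre update
    return U, V

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(mu, 0.3, size=(100, 2)) for mu in (0, 3, 6)])
w = np.full(len(X), 1.0 / len(X))                         # uniform sample weights (illustrative)
U, V = sw_fcm(X, w)
```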

6
892
Identifying an Unknown Source in the Poisson Equation by a Modified Tikhonov Regularization Method
Abstract:

In this paper, we consider the problem of identifying an unknown source in the Poisson equation. A modified Tikhonov regularization method is presented to deal with the ill-posedness of the problem, and error estimates are obtained with an a priori strategy and an a posteriori choice rule for selecting the regularization parameter. Numerical examples show that the proposed method is effective and stable.
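A hedged 1-D illustration under periodic boundary conditions (an assumption; the paper's setting and the precise modified filter are not given in the abstract): the source f in -u'' = f is recovered from noisy data u by a Tikhonov-type filter in the Fourier domain, which damps the xi^2 amplification of high-frequency noise. The source, noise level, and lambda are illustrative.

```python
import numpy as np

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
xi = np.fft.fftfreq(n, d=x[1] - x[0]) * 2 * np.pi

f_true = np.sin(3 * x) + 0.5 * np.cos(5 * x)   # zero-mean source (illustrative)
xi2 = xi ** 2
xi2[0] = 1.0                                   # avoid division by zero at the zero mode
u_hat = np.fft.fft(f_true) / xi2
u_hat[0] = 0.0                                 # zero-mean solution of -u'' = f
u = np.fft.ifft(u_hat).real
u_noisy = u + 1e-3 * np.random.randn(n)

# Tikhonov-type filter: f_hat = xi^2 * u_hat / (1 + lam * xi^4),
# which damps the xi^2 amplification of noise at high frequencies.
lam = 1e-4
f_rec = np.fft.ifft(xi ** 2 * np.fft.fft(u_noisy) / (1.0 + lam * xi ** 4)).real
print("max reconstruction error:", np.max(np.abs(f_rec - f_true)))
```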

5
9760
Fourier Spectral Method for Analytic Continuation
Abstract:

The numerical analytic continuation of a function f(z) = f(x + iy) on a strip is discussed in this paper. The data are given, only approximately, on the real axis, and are assumed to be periodic. A truncated Fourier spectral method is introduced to deal with the ill-posedness of the problem. The theoretical results show that the discrepancy principle can work well for this problem. Some numerical results are also given to show the efficiency of the method.
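A hedged sketch of the truncated Fourier spectral idea: with 2*pi-periodic data on the real axis, the continuation to x + iy multiplies the coefficient of e^{ikx} by e^{-ky}, which blows up noisy high modes, so the series is truncated to |k| <= K. The test function, noise level, strip height, and the fixed truncation level K (chosen by the discrepancy principle in the paper) are illustrative.

```python
import numpy as np

n, y = 256, 0.3                                      # strip height y (illustrative)
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=1.0 / n)                     # integer wavenumbers

f_exact = lambda z: np.exp(np.cos(z))                # entire, 2*pi-periodic test function
data = f_exact(x) + 1e-4 * np.random.randn(n)        # noisy data on the real axis

c = np.fft.fft(data) / n                             # Fourier coefficients of the data
K = 12                                               # truncation level (illustrative)
c_trunc = np.where(np.abs(k) <= K, c, 0.0)

# Analytic continuation to x + i*y: the e^{ikx} coefficient is multiplied by e^{-k*y}.
f_cont = n * np.fft.ifft(c_trunc * np.exp(-k * y))
print("max error on the line Im z = y:", np.max(np.abs(f_cont - f_exact(x + 1j * y))))
```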

4
11245
LMI Approach to Regularization and Stabilization of Linear Singular Systems: The Discrete-time Case
Abstract:

Sufficient linear matrix inequality (LMI) conditions for the regularization of discrete-time singular systems are given. A new class of regularizing, stabilizing controllers is then discussed. The proposed controllers are the sum of predictive and memoryless state feedbacks. The predictive controller aims at regularizing the singular system, while the memoryless state feedback is designed to stabilize the resulting regularized system. A systematic procedure is given to calculate the controller gains through linear matrix inequalities.
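The abstract does not reproduce the specific LMI conditions, so the sketch below only illustrates how such conditions are posed and solved numerically with a convex-optimization toolbox, using a standard discrete-time Lyapunov stability LMI as a stand-in; the system matrix is illustrative, not a singular system from the paper.

```python
import numpy as np
import cvxpy as cp

A = np.array([[0.9, 0.2],
              [0.0, 0.7]])            # illustrative discrete-time system matrix

# Discrete-time Lyapunov LMI: P > 0 and A' P A - P < 0 certify stability.
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),
               A.T @ P @ A - P << -eps * np.eye(2)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status, P.value)
```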

3
5808
Improving Image Segmentation Performance via Edge Preserving Regularization
Abstract:
This paper presents an improved image segmentation model with edge-preserving regularization based on the piecewise-smooth Mumford-Shah functional. A level set formulation is considered for the Mumford-Shah functional minimization in segmentation, and the corresponding partial differential equations are solved by backward Euler discretization. To encourage edge-preserving regularization, a new edge indicator function is introduced in the level set framework, in which all the grid points used to locate the level set curve are considered in order to avoid blurring the edges, and a nonlinear smoothness constraint function is applied as a regularization term to smooth the image in the isophote direction instead of the gradient direction. In the implementation, several strategies, such as a new scheme for extending the computation of u+ and u- to the grid points and for speeding up convergence, are studied to improve the efficacy of the algorithm. The resulting algorithm has been implemented and compared with previous methods, and its efficiency has been demonstrated on several test cases.
2
1857
Variable Step-Size Affine Projection Algorithm With a Weighted and Regularized Projection Matrix
Abstract:

This paper presents a forgetting factor scheme for variable step-size affine projection algorithms (APA). The proposed scheme uses a forgetting-processed input matrix as the projection matrix of the pseudo-inverse to estimate the system deviation. This method introduces temporal weights into the projection matrix, which typically models the real error behavior better than homogeneous temporal weights. The regularization overcomes the ill-conditioning introduced by both the forgetting process and the increasing size of the input matrix. The algorithm is tested in independent trials with coloured input signals and various parameter combinations. Results show that the proposed algorithm is superior in terms of convergence rate and misadjustment compared to existing algorithms. As a special case, a variable step-size NLMS with a forgetting factor is also presented in this paper.

1
7379
Adaptive Total Variation Based on Feature Scale
Abstract:

The widely used Total Variation de-noising algorithm can preserve sharp edges while removing noise. However, since a fixed regularization parameter is used over the entire image, small details and textures are often lost in the process. In this paper, we propose a modified Total Variation algorithm that better preserves smaller-scale features. This is done by allowing an adaptive regularization parameter to control the amount of de-noising in each region of the image, according to the local feature scale. Experimental results demonstrate the efficiency of the proposed algorithm: compared with standard Total Variation, it better preserves smaller-scale features and shows better performance.
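A hedged sketch of TV de-noising with a spatially varying regularization weight: gradient descent on the TV-regularized objective, with the fidelity weight lam(x) increased where local gradient activity (a crude proxy for feature scale) is high, so that small features are smoothed less. The specific feature-scale measure and the mapping to lam are assumptions; the test image and step sizes are illustrative.

```python
import numpy as np

def adaptive_tv_denoise(f, n_iter=200, dt=0.1, eps=1e-6):
    """Gradient-descent TV de-noising with a spatially varying fidelity weight lam(x).

    lam is small in smooth regions (strong smoothing) and large near small-scale
    features (weak smoothing), as a stand-in for the paper's feature-scale measure.
    """
    gy, gx = np.gradient(f)
    activity = np.hypot(gx, gy)
    lam = 0.05 + 0.5 * activity / (activity.max() + eps)   # illustrative mapping

    u = f.copy()
    for _ in range(n_iter):
        uy, ux = np.gradient(u)
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        # Divergence of the unit gradient field = TV curvature term.
        div = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
        u += dt * (div - lam * (u - f))
    return u

rng = np.random.default_rng(6)
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0; img[30:34, 30:34] = 0.0   # small feature
noisy = img + 0.1 * rng.standard_normal(img.shape)
denoised = adaptive_tv_denoise(noisy)
```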
