Convergence Analysis of Extended LOBPCG for the Extreme Eigenvalue
This talk is concerned with the convergence analysis of an extended variant of the locally optimal block preconditioned conjugate gradient (LOBPCG) method for the extreme eigenvalue of a Hermitian matrix polynomial that admits an extended form of the Rayleigh quotient. This work generalizes the analysis of Ovtchinnikov (SIAM J. Numer. Anal., 46(5):2567–2592, 2008). As instances, the algorithms for definite matrix pairs and hyperbolic quadratic matrix polynomials are shown to enjoy global convergence and an asymptotic local convergence rate. Numerical examples are also given to illustrate the convergence behavior.
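For context, the classical LOBPCG iteration for a single Hermitian matrix is available in SciPy; the minimal sketch below computes an extreme (smallest) eigenvalue. The diagonal test matrix is purely illustrative, and the extended variant for matrix polynomials analyzed in this talk is not part of SciPy.

```python
import numpy as np
from scipy.sparse.linalg import lobpcg

# Diagonal Hermitian test matrix with known spectrum 1, 2, ..., 100,
# so the extreme (smallest) eigenvalue is exactly 1.
n = 100
A = np.diag(np.arange(1.0, n + 1))

rng = np.random.default_rng(0)
X = rng.standard_normal((n, 1))     # random initial block of size 1

# largest=False steers LOBPCG toward the smallest eigenvalue.
vals, vecs = lobpcg(A, X, largest=False, tol=1e-10, maxiter=500)
print(int(round(vals[0])))          # 1
```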
Handling Algebraic Branch Points in Nonlinear Eigenvalue Problems
In several real-world applications of nonlinear eigenvalue problems, functions with branch points appear in the nonlinear matrix-valued function. This type of nonlinear eigenvalue problem is challenging to tackle because existing nonlinear eigenvalue algorithms usually require the matrix-valued function to be analytic in the region where the desired eigenvalues are located. We propose an effective way to handle algebraic branch points and reliably compute the eigenvalues, including those close to branch points. Our approach applies appropriate transformations of variables after a possible subdivision of the region of interest. The transformed nonlinear matrix-valued functions are always analytic inside their corresponding subregions and, thanks to this analyticity, can be approximated by low-order rational functions. We are then free to apply any existing nonlinear eigenvalue solver to the remaining problem.
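A toy scalar illustration of the variable-transformation idea (a hypothetical example of ours, not the authors' method): the square root has a branch point at the origin, but the substitution λ = μ² yields a function of μ that is analytic, after which any standard solver applies and the eigenvalue is recovered by mapping back.

```python
import numpy as np

# Original scalar "eigenvalue" problem f(lam) = 0 with a branch point:
# f(lam) = sqrt(lam) - 2 is not analytic at lam = 0.
def f(lam):
    return np.sqrt(lam) - 2.0

# Substituting lam = mu**2 (principal branch, Re(mu) >= 0) removes the
# branch point: g(mu) = mu - 2 is entire.
def g(mu):
    return mu - 2.0

mu_root = 2.0              # root of the transformed, analytic problem
lam_root = mu_root**2      # map back to the original variable

assert abs(f(lam_root)) < 1e-14
print(lam_root)            # 4.0
```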
A Decoupled Form of the Structure-Preserving Doubling Algorithm with Low-Rank Structures
The structure-preserving doubling algorithm (SDA) is an efficient method for solving problems closely related to Hamiltonian (or Hamiltonian-like) matrices, such as computing the required solutions of algebraic Riccati equations. However, for large-scale problems of dimension $n$, the SDA, with its $O(n^3)$ computational complexity, does not work well. In this talk, we present a new decoupled form of the SDA, which we call dSDA, built on the associated Krylov subspaces and thus exposing inherent low-rank structures. Importantly, the approach decouples the original two to four iteration formulae. The resulting dSDA is much more efficient, since only one quantity (instead of the original two to four) is computed iteratively. For large-scale problems, further efficiency is gained from the low-rank structures.
Randomized Kaczmarz-type Methods for Solving Large Sparse Linear Systems
The Kaczmarz method is a classic yet popular iterative solver for computing approximate solutions of large sparse systems of linear equations. Owing to its simplicity, the Kaczmarz method has been a preferred tool in many fields, such as computerized tomography, image reconstruction, distributed computing, and signal processing. Research on the Kaczmarz method was reignited in 2009, when Strohmer and Vershynin proposed the randomized Kaczmarz method for solving consistent linear systems. The key to guaranteeing fast convergence of randomized variants of the Kaczmarz method is to introduce a practical and appropriate probability criterion. In this talk, for solving large-scale consistent or inconsistent systems of linear equations, we construct several randomized Kaczmarz-type methods based on more efficient probability criteria, and we explore their convergence properties through theoretical analysis and numerical experiments.
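The baseline randomized Kaczmarz method of Strohmer and Vershynin can be sketched in a few lines; the toy consistent system below is an illustrative assumption of ours, and the new probability criteria proposed in the talk are not shown.

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iters=2000, seed=0):
    """Randomized Kaczmarz of Strohmer and Vershynin for a consistent
    system A x = b: row i is sampled with probability
    ||a_i||^2 / ||A||_F^2, and each step projects the current iterate
    onto the hyperplane a_i^T x = b_i."""
    rng = np.random.default_rng(seed)
    row_norms_sq = np.einsum('ij,ij->i', A, A)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        i = rng.choice(A.shape[0], p=probs)
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x

# Consistent toy system: the iterate converges to the true solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
x = randomized_kaczmarz(A, b)
print(np.linalg.norm(x - x_true) < 1e-6)   # True
```

The expected convergence rate per step is governed by the scaled condition number of A, which is what makes the row-sampling probabilities the key design choice.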
A Parallel-in-Time Iterative Algorithm for Time-Fractional PDEs
Volterra partial integro-differential problems with weakly singular kernels have attracted much attention in recent years, thanks to their numerous real-world applications. Solving this kind of PDE in a parallel-in-time (PinT) fashion is difficult because of the nonlocal nature of the time evolution. In this talk, we consider a class of representative problems and propose a novel iterative algorithm for PinT computation. In each iteration, the PDEs can be solved for all the discrete time points simultaneously via the recently proposed diagonalization technique. Convergence of the algorithm is analyzed by looking into the decreasing property of the convolution quadrature weights. We show that the convergence rate of the proposed algorithm is robust with respect to the discretization and problem parameters. Numerical results are reported to support our findings. This is joint work with Prof. Shu-Lin Wu (Northeast Normal University).
Low-Rank Methods for Bayesian Inverse Problems
In this talk, we will introduce our recent work on low-rank methods for Bayesian inverse problems. For linear problems with Gaussian noise and a Gaussian prior, the posterior is also Gaussian and is characterized by the posterior mean and covariance. We propose a low-rank Arnoldi method to approximate the large dense posterior covariance matrix by making use of tensor computations. For nonlinear systems, the posterior is no longer Gaussian; however, it can often be approximated by a Gaussian distribution using the ensemble Kalman filter (EnKF) or the extended Kalman filter (ExKF). We propose a randomized low-rank method to reduce the computational complexity of the EnKF. We use numerical experiments to show the efficiency of our low-rank methods.
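The linear-Gaussian posterior mentioned above has a closed form; the dense toy sketch below shows the standard update formulas it refers to. The dimensions and matrices are illustrative assumptions of ours, and the talk's low-rank Arnoldi/tensor approach exists precisely to avoid forming these dense inverses at scale.

```python
import numpy as np

# Model: y = H x + e, e ~ N(0, R), prior x ~ N(m0, C0).
rng = np.random.default_rng(0)
n, m = 5, 3
H = rng.standard_normal((m, n))     # forward operator
R = 0.1 * np.eye(m)                 # noise covariance
C0 = np.eye(n)                      # prior covariance
m0 = np.zeros(n)                    # prior mean
y = rng.standard_normal(m)          # observed data

# Standard Gaussian posterior: C_post = (H^T R^-1 H + C0^-1)^-1,
# m_post = C_post (H^T R^-1 y + C0^-1 m0).
C0_inv, R_inv = np.linalg.inv(C0), np.linalg.inv(R)
C_post = np.linalg.inv(H.T @ R_inv @ H + C0_inv)
m_post = C_post @ (H.T @ R_inv @ y + C0_inv @ m0)

# The posterior covariance is symmetric positive definite.
assert np.allclose(C_post, C_post.T)
assert np.all(np.linalg.eigvalsh(C_post) > 0)
print(C_post.shape)   # (5, 5)
```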
Backward Error Incurred by the CORK Linearization
One of the most successful methods for solving a polynomial eigenvalue problem (PEP) or rational eigenvalue problem (REP) is to recast it, by linearization, as an equivalent but larger generalized eigenvalue problem, which can be solved by standard eigensolvers. In this work, we investigate the backward errors of the computed eigenpairs incurred by the application of the recently developed and well-received compact rational Krylov (CORK) linearization. Our treatment is unified for the original PEPs or REPs expressed in various commonly used bases, including Taylor, Newton, Lagrange, orthogonal, and rational basis functions. We construct one-sided factorizations that relate the eigenpairs of the CORK linearization and those of the original PEPs or REPs. With these factorizations, we establish upper bounds for the backward error of an approximate eigenpair of the original eigenvalue problem relative to the backward error of an approximate eigenpair of the CORK linearization. These bounds suggest a scaling strategy to improve the accuracy of the computed eigenpairs. We show, by numerical experiments, that the actual backward errors can be successfully reduced by scaling and that the errors, before and after scaling, are both well predicted by the bounds.
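The kind of quantity studied here can be illustrated on a quadratic PEP with the classical first companion linearization (not CORK itself, which handles far more general bases): solve the linearization, recover an eigenpair of P, and evaluate the standard normwise backward error with coefficient-norm weights. The random test matrices are illustrative assumptions of ours.

```python
import numpy as np

# Quadratic PEP: P(lam) = lam^2 M + lam C + K.
rng = np.random.default_rng(0)
n = 6
M, C, K = (rng.standard_normal((n, n)) for _ in range(3))

# First companion linearization: (A - lam B) z = 0 with z = [x; lam x].
A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])
B = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(B, A))

# Recover an eigenpair of P from the top block of the eigenvector and
# evaluate the standard backward error
#   eta = ||P(lam) x|| / ((|lam|^2 ||M|| + |lam| ||C|| + ||K||) ||x||).
lam = eigvals[0]
x = eigvecs[:n, 0]
P = lam**2 * M + lam * C + K
eta = (np.linalg.norm(P @ x) /
       ((abs(lam)**2 * np.linalg.norm(M, 2)
         + abs(lam) * np.linalg.norm(C, 2)
         + np.linalg.norm(K, 2)) * np.linalg.norm(x)))
print(eta < 1e-8)   # True: a small backward error, the pair is acceptable
```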
Modified ALI Iteration Methods for M-Matrix Algebraic Riccati Equation
We consider the numerical solution of the M-matrix algebraic Riccati equation (MARE), which plays an important role in many areas of applied mathematics. Based on the ALI iteration method proposed by Bai et al., we propose two modified ALI (MALI) iteration methods for computing the minimal nonnegative solution of the MARE. Convergence of the MALI iteration methods is proved under suitable conditions. Numerical experiments are given to show that the MALI iteration methods are feasible and effective in some cases.
On Fast Solution of Constrained Tikhonov Ill-Posed Problems
Tikhonov regularization is commonly applied to the solution of linear discrete ill-posed problems. In many applications it is natural to require that the approximate solution determined by Tikhonov regularization lie in a constraint cone. We present two methods that employ iterative techniques to compute approximate solutions of large-scale Tikhonov regularization problems in the constraint cone. The first method incorporates an active set strategy and consists of two stages: the first stage uses modulus-based iterations to identify the active set, while the second stage solves the reduced unconstrained least-squares problem on the inactive variables only. The second method consists of two steps: first the given linear discrete ill-posed problem is reduced to a small problem by a Krylov subspace method, and then the reduced Tikhonov regularization problem so obtained is solved. Computed examples illustrate the performance of the methods.
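The unconstrained Tikhonov step underlying both methods can be sketched via the stacked least-squares formulation; the ill-conditioned Vandermonde test problem is an illustrative assumption of ours, and neither the cone constraint nor the Krylov reduction of the talk is shown.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam^2 ||x||^2 via the equivalent
    stacked least-squares problem [A; lam*I] x ~= [b; 0]."""
    n = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Ill-conditioned toy problem: a Vandermonde matrix with noisy data.
rng = np.random.default_rng(0)
n = 12
A = np.vander(np.linspace(0.0, 1.0, n), n)
x_true = np.ones(n)
b = A @ x_true + 1e-6 * rng.standard_normal(n)

x_reg = tikhonov(A, b, lam=1e-4)
resid = np.linalg.norm(A @ x_reg - b)
# The minimizer's data misfit is bounded by the functional value at
# x_true, so the residual stays at the noise level.
print(resid < 1e-3)   # True
```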