Convergence Analysis of Extended LOBPCG for the Extreme Eigenvalue Xin Liang (Tsinghua University)
This talk is concerned with the convergence analysis of an extended variant of the locally optimal block preconditioned conjugate gradient (LOBPCG) method for the extreme eigenvalue of a Hermitian matrix polynomial that admits an extended form of the Rayleigh quotient. This work generalizes the analysis of Ovtchinnikov (SIAM J. Numer. Anal., 46(5):2567–2592, 2008). As instances, the algorithms for definite matrix pairs and hyperbolic quadratic matrix polynomials are shown to converge globally, and an asymptotic local convergence rate is established. Numerical examples are also given to illustrate the convergence behavior.

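For readers unfamiliar with the baseline method, the following is a minimal single-vector LOBPCG sketch for the smallest eigenvalue of an ordinary symmetric matrix, with an identity preconditioner; it illustrates the three-term search subspace span{x, r, p} analyzed by Ovtchinnikov, not the extended variant discussed in the talk.

```python
import numpy as np

def lobpcg_smallest(A, x0, tol=1e-10, max_iter=200):
    """Single-vector LOBPCG for the smallest eigenvalue of symmetric A.

    Rayleigh--Ritz on span{x, r, p}, where r is the eigen-residual and
    p the previous update direction (identity preconditioner).
    """
    x = x0 / np.linalg.norm(x0)
    p = None
    rho = x @ A @ x                      # Rayleigh quotient
    for _ in range(max_iter):
        r = A @ x - rho * x              # eigen-residual
        if np.linalg.norm(r) < tol:
            break
        cols = [x, r] if p is None else [x, r, p]
        Q, _ = np.linalg.qr(np.column_stack(cols))   # orthonormal basis
        theta, V = np.linalg.eigh(Q.T @ A @ Q)       # Rayleigh--Ritz step
        x_new = Q @ V[:, 0]              # Ritz vector of smallest Ritz value
        p = x_new - x * (x @ x_new)      # locally optimal search direction
        x = x_new / np.linalg.norm(x_new)
        rho = theta[0]
    return rho, x
```

On a diagonal test matrix with spectrum 1, 2, ..., 50, the iteration recovers the smallest eigenvalue 1 from a random start.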
Handling Algebraic Branch Points in Nonlinear Eigenvalue Problems Meiyue Shao (Fudan University)
In several real-world applications of nonlinear eigenvalue problems, functions with branch points appear in the nonlinear matrix-valued function. Such nonlinear eigenvalue problems are challenging to tackle because existing algorithms usually require the matrix-valued function to be analytic in the region where the desired eigenvalues are located. We propose an effective way to handle algebraic branch points and reliably compute the eigenvalues, including those that are close to branch points. Our approach consists of applying appropriate transformations of variables after a possible subdivision of the region of interest. The transformed nonlinear matrix-valued function is always analytic inside its corresponding subregion and, thanks to this analyticity, can be approximated by low-order rational functions. We are then free to apply any existing nonlinear eigenvalue solver to the remaining problem.

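A toy illustration of the change-of-variables idea (not the method of the talk, and with made-up matrices): the nonlinear eigenvalue problem T(λ)v = (A − √λ B)v = 0 has an algebraic branch point at λ = 0, but the substitution λ = s² turns it into the linear pencil A − sB, which is analytic everywhere.

```python
import numpy as np

# Toy NEP with an algebraic branch point at 0:
#   T(lam) = A - sqrt(lam) * B   (A, B are made-up example matrices).
rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + n * np.eye(n)   # keep B well conditioned

# Change of variables lam = s**2: T becomes the *linear* pencil A - s B,
# which is analytic everywhere and solvable by a standard eigensolver.
s_all, V = np.linalg.eig(np.linalg.solve(B, A))   # (A - s B) v = 0

# For eigenvalues s in the right half-plane, sqrt(s**2) == s on the
# principal branch, so lam = s**2 solves the original problem there.
pairs = [(s**2, v) for s, v in zip(s_all, V.T) if s.real > 0]
```

Eigenvalues with Re(s) ≤ 0 correspond to the other branch of the square root; the subdivision-plus-transformation strategy of the talk is what keeps each subregion on a single branch.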
A Decoupled Form of the Structure-Preserving Doubling Algorithm with Low-Rank Structures Zhen-Chen Guo (Nanjing University)
The structure-preserving doubling algorithm (SDA) is a fairly efficient method for solving problems closely related to Hamiltonian (or Hamiltonian-like) matrices, such as computing the required solutions of algebraic Riccati equations. However, for large-scale problems of dimension $n$, the SDA, with its $O(n^3)$ computational complexity, does not work well. In this talk, we present a new decoupled form of the SDA (which we name dSDA), built on the associated Krylov subspaces and thus exposing the inherent low-rank structures. Importantly, the approach decouples the original two to four iteration formulae. The resulting dSDA is much more efficient, since only one quantity (instead of the original two to four) is computed iteratively. For large-scale problems, further efficiency is gained from the low-rank structures.
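For context, here is a minimal dense sketch of the classical *coupled* SDA (the O(n³) form that the dSDA decouples), applied to the discrete-time algebraic Riccati equation X = AᵀX(I + GX)⁻¹A + H; the recurrences below are the standard SDA-1 formulas, stated here as an assumption about the baseline method rather than code from the talk.

```python
import numpy as np

def sda_dare(A, G, H, tol=1e-12, max_iter=50):
    """Coupled SDA for X = A' X (I + G X)^{-1} A + H.

    Iterates three coupled recurrences; H_k converges quadratically to
    the stabilizing solution X under the usual assumptions (G, H
    symmetric positive semidefinite, no unimodular eigenvalues of the
    associated symplectic pencil).
    """
    Ak, Gk, Hk = A.copy(), G.copy(), H.copy()
    n = A.shape[0]
    for _ in range(max_iter):
        # W = (I + G_k H_k)^{-1} [A_k | G_k A_k^T], one factorization per step
        W = np.linalg.solve(np.eye(n) + Gk @ Hk,
                            np.column_stack([Ak, Gk @ Ak.T]))
        AW, GW = W[:, :n], W[:, n:]
        A_next = Ak @ AW                 # A_{k+1} = A_k (I+G_k H_k)^{-1} A_k
        G_next = Gk + Ak @ GW            # G_{k+1} = G_k + A_k (I+G_k H_k)^{-1} G_k A_k^T
        H_next = Hk + Ak.T @ Hk @ AW     # H_{k+1} = H_k + A_k^T H_k (I+G_k H_k)^{-1} A_k
        if np.linalg.norm(H_next - Hk) <= tol * np.linalg.norm(H_next):
            return H_next
        Ak, Gk, Hk = A_next, G_next, H_next
    return Hk
```

All three quantities A_k, G_k, H_k are updated every step; the dSDA of the talk reduces this to a single iterated quantity.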

Randomized Kaczmarz-Type Methods for Solving Large Sparse Linear Systems Wen-Ting Wu (Beijing Institute of Technology)
The Kaczmarz method is a classic yet popular iterative solver for computing approximate solutions of large sparse systems of linear equations. Due to its simplicity, the Kaczmarz method has been a preferred solution tool in many fields, such as computerized tomography, image reconstruction, distributed computing, and signal processing. Research on the Kaczmarz method was reignited in 2009, when Strohmer and Vershynin proposed the randomized Kaczmarz method for solving consistent linear systems. The key to guaranteeing fast convergence of randomized variants of the Kaczmarz method is to introduce a practical and appropriate probability criterion. In this talk, for solving large-scale consistent or inconsistent systems of linear equations, we construct several randomized Kaczmarz-type methods based on more efficient probability criteria, and explore their convergence properties through theoretical analysis and numerical experiments.
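As a baseline for the methods in the talk, the original Strohmer–Vershynin scheme can be sketched in a few lines: each step projects the iterate onto the hyperplane of one row, with rows sampled proportionally to their squared norms (a dense-matrix sketch, not the new probability criteria proposed in the talk).

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iters, seed=0):
    """Strohmer--Vershynin randomized Kaczmarz for consistent A x = b.

    Row i is sampled with probability ||a_i||^2 / ||A||_F^2, and the
    iterate is orthogonally projected onto the hyperplane a_i^T x = b_i.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    sq_norms = np.einsum("ij,ij->i", A, A)     # squared row norms
    probs = sq_norms / sq_norms.sum()
    x = np.zeros(n)
    for i in rng.choice(m, size=n_iters, p=probs):
        x += (b[i] - A[i] @ x) / sq_norms[i] * A[i]
    return x
```

For a consistent overdetermined system, the expected squared error contracts by the factor 1 − σ_min(A)²/‖A‖_F² per step.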

A Parallel-in-Time Iterative Algorithm for Time-Fractional PDEs Xian-Ming Gu (Southwestern University of Finance and Economics)
Volterra partial integro-differential problems with weakly singular kernels have attracted considerable attention in recent years, thanks to their numerous real-world applications. Solving this kind of PDE in a parallel-in-time (PinT) pattern is difficult because of the nonlocal nature of the time evolution. In this talk, we consider a class of representative problems and propose a novel iterative algorithm for PinT computation. In each iteration, we can solve the PDEs at all discrete time points simultaneously via the recently proposed diagonalization technique. Convergence of the algorithm is analyzed by examining the decreasing property of the convolution quadrature weights. We show that the convergence rate of the proposed algorithm is robust with respect to the discretization and problem parameters. Numerical results are reported to support our findings. This is joint work with Prof. Shu-Lin Wu (Northeast Normal University).
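The diagonalization technique referred to above rests, in its simplest form, on the fact that an α-circulant matrix C_α is diagonalized by a scaled discrete Fourier transform, so all-at-once systems in the time direction reduce to two FFTs plus a diagonal solve. The sketch below shows only this time-direction algebra (the PDE setting adds a spatial factor), under the standard diagonalization C_α = Γ⁻¹F*·diag(λ)·FΓ with Γ = diag(α^{j/n}).

```python
import numpy as np

def alpha_circulant(c, alpha):
    """Dense alpha-circulant matrix with first column c (for checking only)."""
    n = len(c)
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            C[i, j] = c[(i - j) % n] * (alpha if i < j else 1.0)
    return C

def solve_alpha_circulant(c, alpha, b):
    """Solve C_alpha x = b via the scaled-FFT diagonalization of C_alpha.

    Eigenvalues: lam = n * ifft(gamma * c), gamma_j = alpha**(j/n).
    Cost: two FFTs and a diagonal solve instead of a dense factorization.
    """
    n = len(c)
    gamma = alpha ** (np.arange(n) / n)
    lam = n * np.fft.ifft(gamma * c)       # eigenvalues of C_alpha
    y = np.fft.ifft(gamma * b) / lam       # forward transform, diagonal solve
    return np.fft.fft(y) / gamma           # transform back
```

In the all-at-once PinT setting, each Fourier mode decouples into an independent (spatial) subproblem, which is where the parallelism across time points comes from.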

Low-Rank Methods for Bayesian Inverse Problems Yue Qiu (ShanghaiTech University)
In this talk, we introduce our recent work on low-rank methods for Bayesian inverse problems. For linear problems with Gaussian noise and a Gaussian prior, the posterior is also Gaussian and is characterized by its mean and covariance. We propose a low-rank Arnoldi method that approximates the large dense posterior covariance matrix by making use of tensor computations. For nonlinear systems the posterior is no longer Gaussian; however, it can often be approximated by a Gaussian distribution using the ensemble Kalman filter (EnKF) or the extended Kalman filter (ExKF). We propose a randomized low-rank method to reduce the computational complexity of the EnKF. Numerical experiments show the efficiency of our low-rank methods.
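A generic randomized low-rank sketch in the spirit of Halko–Martinsson–Tropp, applied to a symmetric covariance-like matrix; this is a simplified stand-in for the randomized low-rank EnKF method of the talk, not the method itself.

```python
import numpy as np

def randomized_eigh(C, rank, oversample=5, seed=0):
    """Randomized low-rank approximation of a symmetric PSD matrix C.

    Sketch C with a Gaussian test matrix, orthonormalize the sampled
    range, then compress: C ~ Q (Q^T C Q) Q^T = V diag(w) V^T.
    """
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    Omega = rng.standard_normal((n, rank + oversample))
    Q, _ = np.linalg.qr(C @ Omega)          # approximate range of C
    w, V = np.linalg.eigh(Q.T @ C @ Q)      # small dense eigenproblem
    idx = np.argsort(w)[::-1][:rank]        # keep the leading eigenpairs
    return w[idx], Q @ V[:, idx]
```

When the covariance spectrum decays quickly, as is typical for smoothing priors, a small rank already captures the matrix to high accuracy.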

Backward Error Incurred by the CORK Linearization Hongjia Chen (Nanchang University)
One of the most successful methods for solving a polynomial eigenvalue problem (PEP) or rational eigenvalue problem (REP) is to recast it, by linearization, as an equivalent but larger generalized eigenvalue problem, which can be solved by standard eigensolvers. In this work, we investigate the backward errors of the computed eigenpairs incurred by the application of the recently developed and well-received compact rational Krylov (CORK) linearization. Our treatment is unified for PEPs and REPs expressed in various commonly used bases, including Taylor, Newton, Lagrange, orthogonal, and rational basis functions. We construct one-sided factorizations that relate the eigenpairs of the CORK linearization to those of the original PEP or REP. With these factorizations, we establish upper bounds for the backward error of an approximate eigenpair of the original eigenvalue problem relative to the backward error of an approximate eigenpair of the CORK linearization. These bounds suggest a scaling strategy to improve the accuracy of the computed eigenpairs. We show, by numerical experiments, that the actual backward errors can be successfully reduced by scaling, and that the errors, both before and after scaling, are well predicted by the bounds.
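The linearization idea in its simplest form: a quadratic PEP recast as a generalized eigenproblem by the standard first companion linearization. This is not the CORK linearization itself (which handles compact representations in many bases), and the matrices are made-up examples.

```python
import numpy as np

# Quadratic eigenvalue problem P(lam) x = (lam^2 M + lam C + K) x = 0.
rng = np.random.default_rng(4)
n = 3
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))

# First companion linearization: (A - lam*B) z = 0 with z = [lam*x; x].
Z, I = np.zeros((n, n)), np.eye(n)
A = np.block([[-C, -K], [I, Z]])
B = np.block([[M, Z], [Z, I]])

# B is invertible here, so the pencil reduces to a standard eigenproblem.
lams, Vz = np.linalg.eig(np.linalg.solve(B, A))
X = Vz[n:, :]        # bottom block of z recovers the PEP eigenvectors
```

Each eigenpair (λ, x) of the 2n-by-2n pencil satisfies the original n-by-n quadratic problem; the backward-error question studied in the talk is how a small residual for the pencil translates into a residual for P.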

Modified ALI Iteration Methods for the M-Matrix Algebraic Riccati Equation Jinrui Guan (Taiyuan Normal University)
We consider the numerical solution of the M-matrix algebraic Riccati equation (MARE), which plays an important role in many areas of applied mathematics. Based on the ALI iteration method proposed by Bai et al., we propose two modified ALI (MALI) iteration methods for computing the minimal nonnegative solution of the MARE. Convergence of the MALI iteration methods is proved under suitable conditions. Numerical experiments show that the MALI iteration methods are feasible and effective in some cases.
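For context, the simplest classical scheme for the MARE XCX − XD − AX + B = 0 (with M = [[D, −C], [−B, A]] a nonsingular M-matrix) is a Sylvester-based fixed-point iteration started from X₀ = 0, which converges monotonically to the minimal nonnegative solution; the ALI-type methods of the talk are faster splittings of the same equation. A small-scale sketch, not the MALI method itself:

```python
import numpy as np

def solve_sylvester(A, D, Q):
    """Solve A X + X D = Q via the Kronecker form (small dense sizes only)."""
    n, m = Q.shape
    K = np.kron(A, np.eye(m)) + np.kron(np.eye(n), D.T)
    return np.linalg.solve(K, Q.reshape(-1)).reshape(n, m)

def mare_fixed_point(A, B, C, D, n_iters=300):
    """Fixed-point iteration for the MARE  X C X - X D - A X + B = 0.

    Each step solves the Sylvester equation
        A X_{k+1} + X_{k+1} D = X_k C X_k + B;
    with M = [[D, -C], [-B, A]] a nonsingular M-matrix and X_0 = 0, the
    iterates increase monotonically to the minimal nonnegative solution.
    """
    X = np.zeros_like(B)
    for _ in range(n_iters):
        X = solve_sylvester(A, D, X @ C @ X + B)
    return X
```

The M-matrix structure is what guarantees that each Sylvester solve preserves nonnegativity, so the whole sequence stays in the cone of nonnegative matrices.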

On Fast Solution of Constrained Tikhonov Ill-Posed Problems Ning Zheng (Tongji University)
Tikhonov regularization is commonly applied to the solution of linear discrete ill-posed problems. In many applications it is natural to require that the approximate solution determined by Tikhonov regularization lie in a constrained cone. We present two methods that employ iterative schemes to compute approximate solutions of large-scale Tikhonov regularization problems in the constrained cone. The first method incorporates an active-set strategy and consists of two stages: the first stage uses modulus iterations to identify the active set, while the second stage solves the reduced unconstrained least-squares problem on the inactive variables only. The second method consists of two steps: the given linear discrete ill-posed problem is first reduced to a small problem by a Krylov subspace method, and the reduced Tikhonov regularization problem so obtained is then solved. Computed examples illustrate the performance of the methods.
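A minimal sketch of the underlying constrained problem over the simplest cone, the nonnegative orthant, solved here by plain projected gradient descent; the talk's methods (modulus-based active set, and Krylov project-then-regularize) are more sophisticated approaches to the same problem.

```python
import numpy as np

def tikhonov_nonneg(A, b, lam, n_iters=5000):
    """min_{x >= 0}  ||A x - b||^2 + lam * ||x||^2  by projected gradient.

    The (halved) objective has Hessian A^T A + lam*I, so it is
    lam-strongly convex; with step 1/L, L = ||A||_2^2 + lam, the
    projected gradient iteration converges linearly.
    """
    L = np.linalg.norm(A, 2) ** 2 + lam
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        g = A.T @ (A @ x - b) + lam * x      # gradient of the halved objective
        x = np.maximum(x - g / L, 0.0)       # project onto the cone x >= 0
    return x
```

At the solution, the KKT conditions hold: the gradient vanishes on the inactive variables and is nonnegative on the active (clamped) ones, which is exactly the active/inactive split exploited by the two-stage method above.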
