Progressive-iterative approximation method

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by EagleZju at 01:51, 25 July 2023.


Progressive-iterative approximation method is an iterative method of data fitting with geometric meaning. Given the data points to be fitted, the method obtains a series of fitting curves (surfaces) by iteratively updating the control points, and the limit curve (surface) can interpolate or approximate the given data points. It avoids solving a linear system of equations directly and allows flexibility in adding constraints during the iterative process. Therefore, it has been widely used in geometric design and related fields.

The study of iterative methods with geometric meaning can be traced back to the work of scholars such as Dongxu Qi and Carl de Boor in the 1970s. Qi et al. developed and proved the "profit and loss" algorithm for uniform cubic B-spline curves, and de Boor independently proposed and proved the same property. In 2004, Hongwei Lin and coauthors proved that non-uniform cubic B-spline curves and surfaces have the "profit and loss" property[1]. Later, in 2005, Lin et al. proved that curves and surfaces with a normalized and totally positive basis all have this property and named it progressive iterative approximation (PIA)[2]. In 2007, Maekawa et al. replaced the algebraic distance in PIA with the geometric distance and named the method geometric interpolation (GI)[3]. In 2008, Chen et al. extended it to subdivision surfaces and named the method progressive interpolation (PI)[4]. Since the iteration steps of the PIA, GI, and PI algorithms are similar and all have geometric meanings, they are collectively referred to as geometric iterative methods (GIM).

PIA has since been extended to several common curves and surfaces in the geometric design field, including NURBS curves and surfaces[5], T-spline surfaces[6], and implicit curves and surfaces[7].


1. Iteration formats

Generally, geometric iterative methods can be divided into interpolation and approximation types. In interpolation algorithms, the number of control points equals the number of data points; in approximation algorithms, the number of control points can be less than the number of data points. In addition, there are representative iteration formats, such as local-PIA, implicit-PIA, fairing-PIA, and isogeometric least-squares progressive-iterative approximation (IG-LSPIA), which is specialized for solving the isogeometric analysis problem.

1.1 Interpolation scheme: PIA[1][2][5][6][8]

To facilitate the description of the PIA iteration format for different forms of curves and surfaces, we write B-spline curves and surfaces, NURBS curves and surfaces, B-spline solids, T-spline surfaces, and triangular Bernstein–Bézier (B–B) surfaces uniformly in the following form: $$ \mathbf{P}(\mathbf{t})=\sum_{i=1}^n\mathbf{P}_iB_i(\mathbf{t}). $$

  • If $\mathbf{P}(t)$ is a B-spline curve, then $t$ is a scalar, $B_i(t)$ is a B-spline basis function, and $\mathbf{P}_i$ denotes a control point;
  • If $\mathbf{P}(\mathbf{t})$ is a B-spline patch with $n_u\times n_v$ control points, then $\mathbf{t}=(u,v)$ and $B_i(\mathbf{t})=N_{i_u}(u)N_{i_v}(v)$, where $N_{i_u}(u)$ and $N_{i_v}(v)$ are B-spline basis functions and the single index $i$ runs over the grid indices $(i_u,i_v)$;
  • If $\mathbf{P}(\mathbf{t})$ is a trivariate B-spline solid with $n_u\times n_v\times n_w$ control points, then $\mathbf{t}=(u,v,w)$ and $B_i(\mathbf{t})=N_{i_u}(u)N_{i_v}(v)N_{i_w}(w)$, where $N_{i_u}(u)$, $N_{i_v}(v)$, and $N_{i_w}(w)$ are B-spline basis functions;
  • If $\mathbf{P}(t)$ is the homogeneous expression of a NURBS curve, then $t$ is a scalar, $B_i(t)$ is a B-spline basis function, and $\mathbf{P}_i=(w_ix_i,w_iy_i,w_iz_i,w_i)$, where $w_i$ is the weight and $(x_i,y_i,z_i)\in\mathbb{R}^3$ is the control point.

Given an ordered data set $\{\mathbf{Q}_i,\ i=1,2,\cdots,n\}$ with parameters $t_1<t_2<\cdots<t_n$, construct the initial fitting curve (surface) $$ \mathbf{P}^{(0)}(t)=\sum_{i=1}^n\mathbf{P}_i^{(0)}B_i(t), $$

where the initial control points $\mathbf{P}_i^{(0)}$ can be chosen arbitrarily (e.g., $\mathbf{P}_i^{(0)}=\mathbf{Q}_i$). Suppose that after the $k$th iteration, the $k$th fitting curve (surface) $\mathbf{P}^{(k)}(t)$ is $$ \mathbf{P}^{(k)}(t)=\sum_{i=1}^n\mathbf{P}_i^{(k)}B_i(t). $$ To construct the $(k+1)$st curve (surface), we first calculate the difference vectors $$ \mathbf{\Delta}_i^{(k)}=\mathbf{Q}_i-\mathbf{P}^{(k)}(t_i),\quad i=1,2,\cdots,n, $$ and then update the control points by $$ \mathbf{P}_i^{(k+1)}=\mathbf{P}_i^{(k)}+\mathbf{\Delta}_i^{(k)}, $$ leading to the $(k+1)$st fitting curve (surface) $$ \mathbf{P}^{(k+1)}(t)=\sum_{i=1}^n\mathbf{P}_i^{(k+1)}B_i(t). $$ In this way, we obtain a sequence of curves (surfaces) $$ \{\mathbf{P}^{(\alpha)}(t),\ \alpha=0,1,2,\cdots\}. $$ It has been proved that this sequence converges to a limit curve (surface) that interpolates the given data points, i.e., $$ \lim_{\alpha\rightarrow\infty}\mathbf{P}^{(\alpha)}(t_i)=\mathbf{Q}_i,\quad i=1,2,\cdots,n. $$
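The PIA update above can be sketched numerically. The following is an illustrative example (not from the article) using the cubic Bernstein basis, which is normalized and totally positive, so the convergence condition holds; the data points, parameter values, and iteration count are arbitrary demonstration choices.

```python
import numpy as np
from math import comb

def bernstein_matrix(ts, n):
    """Collocation matrix with entries B[i, j] = B_j(t_i) for the degree n-1 Bernstein basis."""
    d = n - 1
    return np.array([[comb(d, j) * t**j * (1 - t)**(d - j) for j in range(n)]
                     for t in ts])

# Data points Q_i with uniformly spaced parameters t_i.
Q = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.5], [3.0, 0.5]])
n = len(Q)
ts = np.linspace(0.0, 1.0, n)
B = bernstein_matrix(ts, n)

P = Q.copy()          # initial control points P^(0) = Q
for _ in range(200):  # PIA: move each control point by its difference vector
    delta = Q - B @ P             # Delta_i^(k) = Q_i - P^(k)(t_i)
    P = P + delta                 # P_i^(k+1) = P_i^(k) + Delta_i^(k)

print(np.max(np.abs(B @ P - Q)))  # the limit curve interpolates the data points
```

Since every row of the Bernstein collocation matrix sums to one and its eigenvalues are real and positive, the iteration matrix $\mathbf{I}-\mathbf{B}$ has spectral radius below 1, which is exactly the convergence condition discussed in Section 2.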

1.2 Approximation scheme: LSPIA

For the B-spline curve and surface fitting problem, Deng and Lin proposed least-squares progressive-iterative approximation (LSPIA)[9], which allows the number of control points to be less than the number of data points and is therefore more suitable for large-scale data fitting problems.

Assume that the number of data points is $m$ and the number of control points is $n$ ($n\le m$). Following the notation of Section 1.1, the $k$th fitting curve (surface) generated after the $k$th iteration is $$ \mathbf{P}^{(k)}(t)=\sum_{j=1}^n\mathbf{P}_j^{(k)}B_j(t). $$ To generate the $(k+1)$st fitting curve (surface), we compute the following difference vectors in turn:

Difference vectors for data points: $$ \mathbf{\Delta}_i^{(k)}=\mathbf{Q}_i-\mathbf{P}^{(k)}(t_i),\quad i=1,2,\cdots,m, $$

Difference vectors for control points: $$ \mathbf{D}_j^{(k)}=\mu\sum_{i\in I_j}B_j(t_i)\mathbf{\Delta}_i^{(k)},\quad j=1,2,\cdots,n, $$

where $I_j$ is the index set of the data points in the $j$th group, whose parameters fall in the local support of the $j$th basis function, i.e., $B_j(t_i)\ne0$, and $\mu$ is a normalization weight that guarantees the convergence of the algorithm, usually taken as $\mu=\frac{1}{\sum_{i\in I_j}B_j(t_i)}$.

Finally, the control points of the $(k+1)$st curve (surface) are updated by $$ \mathbf{P}_j^{(k+1)}=\mathbf{P}_j^{(k)}+\mathbf{D}_j^{(k)}, $$ leading to the $(k+1)$st fitting curve (surface) $\mathbf{P}^{(k+1)}(t)$. In this way, we obtain a sequence of curves (surfaces) whose limit is the least-squares fitting result to the given data points.
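A minimal sketch of LSPIA in matrix form follows (an illustrative example, not from the article): the quadratic Bernstein basis, the sine-shaped data, and the constant step size $\mu=1/\lambda_0$ are all demonstration choices. The matrix iteration $\mathbf{P}^{(k+1)}=\mathbf{P}^{(k)}+\mu\mathbf{B}^T(\mathbf{Q}-\mathbf{B}\mathbf{P}^{(k)})$ reproduces the per-control-point update with a constant $\mu$.

```python
import numpy as np
from math import comb

def bernstein_matrix(ts, n):
    """Collocation matrix with entries B[i, j] = B_j(t_i)."""
    d = n - 1
    return np.array([[comb(d, j) * t**j * (1 - t)**(d - j) for j in range(n)]
                     for t in ts])

m, n = 8, 3                                   # more data points than control points
ts = np.linspace(0.0, 1.0, m)
Q = np.column_stack([ts, np.sin(np.pi * ts)]) # data points on a sine arc
B = bernstein_matrix(ts, n)

# Convergence requires 0 < mu < 2 / lambda_0, where lambda_0 is the largest
# eigenvalue of B^T B; mu = 1 / lambda_0 is a safe choice.
mu = 1.0 / np.linalg.eigvalsh(B.T @ B).max()

P = np.zeros((n, 2))                          # initial control points
for _ in range(500):
    delta = Q - B @ P                         # difference vectors for data points
    P = P + mu * (B.T @ delta)                # D_j = mu * sum_i B_j(t_i) Delta_i

P_ls = np.linalg.lstsq(B, Q, rcond=None)[0]   # direct least-squares solution
print(np.max(np.abs(P - P_ls)))               # LSPIA converges to the same result
```

Because $B_j(t_i)=0$ outside the local support of the $j$th basis function, summing over all $i$ in `B.T @ delta` is equivalent to summing over the index set $I_j$ only.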

1.3 Local-PIA[10]

In local-PIA, the control points are divided into active and fixed control points, whose subscripts are denoted by $I=\{i_1,i_2,\cdots,i_I\}$ and $J=\{j_1,j_2,\cdots,j_J\}$, respectively. The interpolatory local iteration format for the case $n=m$ is given below; it generalizes similarly to $n<m$.

Assume that the $k$th fitting curve (surface) is $\mathbf{P}^{(k)}(t)=\sum_{j=1}^n\mathbf{P}_j^{(k)}B_j(t)$, where the fixed control points satisfy $$ \mathbf{P}_j^{(k)}=\mathbf{P}_j^{(0)},\quad j\in J,\quad k=0,1,2,\cdots. $$ Then, the iterative formula of the difference vector $\mathbf{D}_h^{(k+1)}$ corresponding to the fixed control points is $$ \begin{aligned} \mathbf{D}_h^{(k+1)}&=\mathbf{Q}_h-\sum_{j=1}^n\mathbf{P}_j^{(k+1)}B_j(t_h)\\ &=\mathbf{Q}_h-\sum_{j\in J}\mathbf{P}_j^{(k+1)}B_j(t_h)-\sum_{i\in I}\left(\mathbf{P}_i^{(k)}+\mathbf{D}_i^{(k)}\right)B_i(t_h)\\ &=\mathbf{Q}_h-\sum_{j=1}^n\mathbf{P}_j^{(k)}B_j(t_h)-\sum_{i\in I}\mathbf{D}_i^{(k)}B_i(t_h)\\ &=\mathbf{D}_h^{(k)}-\sum_{i\in I}\mathbf{D}_i^{(k)}B_i(t_h),\quad h\in J. \end{aligned} $$ On the other hand, the iterative formula of the difference vector $\mathbf{D}_l^{(k+1)}$ corresponding to the active control points is $$ \begin{aligned} \mathbf{D}_l^{(k+1)}&=\mathbf{Q}_l-\sum_{j=1}^n\mathbf{P}_j^{(k+1)}B_j(t_l)\\ &=\mathbf{Q}_l-\sum_{j=1}^n\mathbf{P}_j^{(k)}B_j(t_l)-\sum_{i\in I}\mathbf{D}_i^{(k)}B_i(t_l)\\ &=\mathbf{D}_l^{(k)}-\sum_{i\in I}\mathbf{D}_i^{(k)}B_i(t_l)\\ &=-\mathbf{D}_{i_1}^{(k)}B_{i_1}(t_l)-\mathbf{D}_{i_2}^{(k)}B_{i_2}(t_l)-\cdots+\left(1-B_l(t_l)\right)\mathbf{D}_l^{(k)}-\cdots-\mathbf{D}_{i_I}^{(k)}B_{i_I}(t_l),\quad l\in I. \end{aligned} $$ Arranging the above difference vectors into a one-dimensional sequence, $$ \mathbf{D}^{(k+1)}=\left[\mathbf{D}_{j_1}^{(k+1)},\mathbf{D}_{j_2}^{(k+1)},\cdots,\mathbf{D}_{j_J}^{(k+1)},\mathbf{D}_{i_1}^{(k+1)},\mathbf{D}_{i_2}^{(k+1)},\cdots,\mathbf{D}_{i_I}^{(k+1)}\right]^T,\quad k=0,1,2,\cdots, $$ the local iteration format in matrix form is $$ \mathbf{D}^{(k+1)}=\mathbf{T}\mathbf{D}^{(k)},\quad k=0,1,2,\cdots, $$ where $\mathbf{T}$ is the iteration matrix $$ \mathbf{T}=\begin{bmatrix}\mathbf{E}_J & -\mathbf{B}_1\\ 0 & \mathbf{E}_I-\mathbf{B}_2\end{bmatrix}, $$ $\mathbf{E}_J$ and $\mathbf{E}_I$ are identity matrices, and $$ \mathbf{B}_1=\begin{bmatrix} B_{i_1}(t_{j_1}) & B_{i_2}(t_{j_1}) & \cdots & B_{i_I}(t_{j_1})\\ B_{i_1}(t_{j_2}) & B_{i_2}(t_{j_2}) & \cdots & B_{i_I}(t_{j_2})\\ \vdots & \vdots & & \vdots\\ B_{i_1}(t_{j_J}) & B_{i_2}(t_{j_J}) & \cdots & B_{i_I}(t_{j_J}) \end{bmatrix},\quad \mathbf{B}_2=\begin{bmatrix} B_{i_1}(t_{i_1}) & B_{i_2}(t_{i_1}) & \cdots & B_{i_I}(t_{i_1})\\ B_{i_1}(t_{i_2}) & B_{i_2}(t_{i_2}) & \cdots & B_{i_I}(t_{i_2})\\ \vdots & \vdots & & \vdots\\ B_{i_1}(t_{i_I}) & B_{i_2}(t_{i_I}) & \cdots & B_{i_I}(t_{i_I}) \end{bmatrix}. $$ The above local iteration format converges and can be extended to blending surfaces and subdivision surfaces.
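As a minimal numerical sketch (not from the article), the fixed/active update can be illustrated with a cubic Bézier curve in which control point 0 is held fixed and the remaining three are active; the data points and iteration count are arbitrary demonstration choices.

```python
import numpy as np
from math import comb

def bernstein_matrix(ts, n):
    """Collocation matrix with entries B[i, j] = B_j(t_i)."""
    d = n - 1
    return np.array([[comb(d, j) * t**j * (1 - t)**(d - j) for j in range(n)]
                     for t in ts])

Q = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.5], [3.0, 0.5]])
n = len(Q)
ts = np.linspace(0.0, 1.0, n)
B = bernstein_matrix(ts, n)

active = [1, 2, 3]                 # index set I; index set J = {0} stays fixed
P = Q.copy()                       # initial control points P^(0) = Q
P0_initial = P[0].copy()
for _ in range(300):
    delta = Q - B @ P              # difference vectors at all parameters
    P[active] += delta[active]     # update only the active control points

print(np.allclose(P[0], P0_initial))         # fixed control point unchanged
print(np.max(np.abs((B @ P - Q)[active])))   # active data points interpolated
```

Only the data points associated with active control points are interpolated in the limit; the differences at the fixed parameters evolve according to the upper block of the iteration matrix $\mathbf{T}$ and need not vanish in general.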

1.4 Implicit-PIA[7]

The progressive-iterative approximation format for implicit curve and surface reconstruction is described in the following. Given an ordered point cloud $\{\mathbf{Q}_i\}_{i=1}^n$ with unit normal vectors $\{\mathbf{n}_i\}_{i=1}^n$ at the data points, an implicit curve (surface) is reconstructed to fit the given point cloud. To avoid the trivial zero solution, offset points $\{\mathbf{Q}_l\}_{l=n+1}^{2n}$ are added to the point cloud; each is offset by a distance $\sigma$ along the unit normal vector of its data point: $$ \mathbf{Q}_l=\mathbf{Q}_i+\sigma\mathbf{n}_i,\quad l=n+i,\quad i=1,2,\cdots,n. $$ Let $\epsilon$ be the prescribed value of the implicit function at the offset points: $$ f(\mathbf{Q}_l)=\epsilon,\quad l=n+1,n+2,\cdots,2n. $$ Let the implicit curve after the $\alpha$th iteration be $$ f^{(\alpha)}(x,y)=\sum_{i=1}^{N_u}\sum_{j=1}^{N_v}C_{ij}^{(\alpha)}B_i(x)B_j(y), $$ where $C_{ij}^{(\alpha)}$ are the control coefficients.

Define the difference vectors of the data points as $$ \begin{aligned} \Delta_k^{(\alpha)}&=0-f^{(\alpha)}(x_k,y_k),\quad k=1,2,\cdots,n,\\ \Delta_l^{(\alpha)}&=\epsilon-f^{(\alpha)}(x_l,y_l),\quad l=n+1,n+2,\cdots,2n. \end{aligned} $$ Next, calculate the difference vectors of the control coefficients: $$ D_{ij}^{(\alpha)}=\mu\sum_{k=1}^{2n}B_i(x_k)B_j(y_k)\Delta_k^{(\alpha)},\quad i=1,2,\cdots,N_u,\quad j=1,2,\cdots,N_v, $$ where $\mu$ is the convergence coefficient. As a result, the new control coefficients are $$ C_{ij}^{(\alpha+1)}=C_{ij}^{(\alpha)}+D_{ij}^{(\alpha)}, $$ leading to the new algebraic B-spline function $$ f^{(\alpha+1)}(x,y)=\sum_{i=1}^{N_u}\sum_{j=1}^{N_v}C_{ij}^{(\alpha+1)}B_i(x)B_j(y). $$ The above procedure is carried out iteratively to generate a sequence of algebraic B-spline functions $\{f^{(\alpha)}(x,y),\ \alpha=0,1,2,\cdots\}$. The sequence converges to the solution of a constrained minimization problem when the initial control coefficients are $C_{ij}^{(0)}=0$.

Assume that the implicit surface generated after the $\alpha$th iteration is $$ f^{(\alpha)}(x,y,z)=\sum_{i=1}^{N_u}\sum_{j=1}^{N_v}\sum_{k=1}^{N_w}C_{ijk}^{(\alpha)}B_i(x)B_j(y)B_k(z); $$ the iteration format is similar to that of the curve case.

1.5 Fairing-PIA[11]

To achieve flexible fairing, first define the functionals $$ \mathcal{F}_{r,j}(f)=\int_{t_1}^{t_m}B_{r,j}(t)f\,dt,\quad j=1,2,\cdots,n,\quad r=1,2,3, $$ where $B_{r,j}(t)$ denotes the $r$th derivative of the basis function $B_j(t)$.

Let the curve after the $k$th iteration be $$ \mathbf{P}^{[k]}(t)=\sum_{j=1}^nB_j(t)\mathbf{P}_j^{[k]},\quad t\in[t_1,t_m]. $$ To construct the new curve $\mathbf{P}^{[k+1]}(t)$, we first calculate the $(k+1)$st difference vectors for the data points, $$ \mathbf{d}_i^{[k]}=\mathbf{Q}_i-\mathbf{P}^{[k]}(t_i),\quad i=1,2,\cdots,m. $$ Then, the fitting difference vectors and the fairing vectors for the control points are calculated by $$ \begin{aligned} \boldsymbol{\delta}_j^{[k]}&=\sum_{h\in I_j}B_j(t_h)\mathbf{d}_h^{[k]},\quad j=1,2,\cdots,n,\\ \boldsymbol{\eta}_j^{[k]}&=\sum_{l=1}^n\mathcal{F}_{r,l}\left(B_{r,j}(t)\right)\mathbf{P}_l^{[k]},\quad j=1,2,\cdots,n. \end{aligned} $$ Finally, the control points of the $(k+1)$st curve are produced by $$ \mathbf{P}_j^{[k+1]}=\mathbf{P}_j^{[k]}+\mu_j\left[\left(1-\omega_j\right)\boldsymbol{\delta}_j^{[k]}-\omega_j\boldsymbol{\eta}_j^{[k]}\right],\quad j=1,2,\cdots,n, $$ where $\mu_j$ is a normalization weight and $\omega_j$ is a smoothing weight corresponding to the $j$th control point. The larger the smoothing weight, the smoother the generated curve. The new curve is $$ \mathbf{P}^{[k+1]}(t)=\sum_{j=1}^nB_j(t)\mathbf{P}_j^{[k+1]},\quad t\in[t_1,t_m]. $$ In this way, we obtain a sequence of curves $\left\{\mathbf{P}^{[k]}(t),\ k=1,2,3,\cdots\right\}$. The sequence converges to the solution of the conventional fairing method based on energy minimization when all smoothing weights are equal ($\omega_j=\omega$). The fairing-PIA can be extended to the surface case similarly.

1.6 IG-LSPIA[12]

Given a boundary value problem $$ \left\{ \begin{aligned} \mathcal{L}u&=f,\quad \text{in}\ \Omega,\\ \mathcal{G}u&=g,\quad \text{on}\ \partial\Omega, \end{aligned} \right. $$ where $u:\Omega\to\mathbb{R}$ is the unknown solution, $\mathcal{L}$ and $\mathcal{G}$ are the differential operator and boundary operator, respectively, and $f$ and $g$ are continuous functions. In the isogeometric analysis method, NURBS basis functions are used as shape functions to compute a numerical solution of this boundary value problem. The same basis functions are applied to represent the numerical solution $u_h$ and the geometric mapping $G$: $$ \begin{aligned} u_h(\hat\tau)&=\sum_{j=1}^nR_j(\hat\tau)u_j,\\ G(\hat\tau)&=\sum_{j=1}^nR_j(\hat\tau)\mathbf{P}_j, \end{aligned} $$ where $R_j(\hat\tau)$ denotes the NURBS basis function and $u_j$ is a control coefficient. Substituting the collocation points $\hat\tau_i$, $i=1,2,\cdots,m$, into the strong form of the PDE yields a discretized problem $$ \left\{ \begin{aligned} \mathcal{L}u_h(\hat\tau_i)&=f(G(\hat\tau_i)),\quad i\in\mathcal{I}_\mathcal{L},\\ \mathcal{G}u_h(\hat\tau_i)&=g(G(\hat\tau_i)),\quad i\in\mathcal{I}_\mathcal{G}, \end{aligned} \right. $$ where $\mathcal{I}_\mathcal{L}$ and $\mathcal{I}_\mathcal{G}$ denote the index sets of the internal and boundary collocation points, respectively.

Arranging the control coefficients $u_j$ of the numerical solution $u_h(\hat\tau)$ into an $n$-dimensional column vector $\mathbf{U}=[u_1,u_2,\cdots,u_n]^T$, the discretized problem can be reformulated in matrix form as $$ \mathbf{A}\mathbf{U}=\mathbf{b}, $$ where $\mathbf{A}$ is the collocation matrix and $\mathbf{b}$ is the load vector.

Assume that the discretized load values are data points $\{b_i\}_{i=1}^m$ to be fitted. Given an initial guess of the control coefficients $\{u_j^{(0)}\}_{j=1}^n$ ($n\le m$), define the blending functions $$ A_j(\hat\tau)=\left\{ \begin{aligned} \mathcal{L}R_j(\hat\tau),&\quad \hat\tau\in\Omega_p^{in},\\ \mathcal{G}R_j(\hat\tau),&\quad \hat\tau\in\Omega_p^{bd}, \end{aligned} \right. $$ where $\Omega_p^{in}$ and $\Omega_p^{bd}$ indicate the interior and boundary of the parameter domain, respectively. Each $A_j(\hat\tau)$ corresponds to the $j$th control coefficient. Assume that $J_{in}$ and $J_{bd}$ are the index sets of the internal and boundary control coefficients, respectively. Without loss of generality, we further assume that the boundary control coefficients have been obtained by strong or weak imposition and are fixed, i.e., $$ u_j^{(k)}=u_j^{*},\quad j\in J_{bd},\quad k=0,1,2,\cdots. $$ The $k$th blending function, generated after the $k$th iteration of IG-LSPIA, is $$ U^{(k)}(\hat\tau)=\sum_{j=1}^nA_j(\hat\tau)u_j^{(k)},\quad \hat\tau\in[\hat\tau_1,\hat\tau_m]. $$ Then, the difference vectors for collocation points (DCP) in the $(k+1)$st iteration are obtained by $$ \begin{aligned} \delta_i^{(k)}&=b_i-\sum_{j=1}^nA_j(\hat\tau_i)u_j^{(k)}\\ &=b_i-\sum_{j\in J_{bd}}A_j(\hat\tau_i)u_j^{(k)}-\sum_{j\in J_{in}}A_j(\hat\tau_i)u_j^{(k)},\quad i=1,2,\cdots,m, \end{aligned} $$ and all load values whose parameters fall in the local support of the $j$th blending function, i.e., $A_j(\hat\tau_i)\ne0$, are grouped into the $j$th group, corresponding to the $j$th control coefficient. In the following, the index set of the $j$th group of load values is denoted by $I_j$.

Furthermore, the difference vectors for control coefficients (DCC) are derived as $$ d_j^{(k)}=\mu\sum_{h\in I_j}A_j(\hat\tau_h)\delta_h^{(k)},\quad j=1,2,\cdots,n, $$ where $\mu$ is a normalization weight chosen to guarantee the convergence of the algorithm.

Thus, the new control coefficients are updated via $$ u_j^{(k+1)}=u_j^{(k)}+d_j^{(k)},\quad j=1,2,\cdots,n. $$ Consequently, the $(k+1)$st blending function is $$ U^{(k+1)}(\hat\tau)=\sum_{j=1}^nA_j(\hat\tau)u_j^{(k+1)}. $$ The iteration is performed until the desired fitting error is reached, producing a sequence of blending functions $$ \left\{U^{(k)}(\hat\tau),\ k=0,1,\cdots\right\}. $$ IG-LSPIA converges to the solution of a constrained least-squares collocation problem.

2. Proof of convergence

The PIA iterative format in matrix form is $$ \begin{aligned} \mathbf{P}^{(\alpha+1)}&=\mathbf{P}^{(\alpha)}+\mathbf{\Delta}^{(\alpha)}\\ &=\mathbf{P}^{(\alpha)}+\mathbf{Q}-\mathbf{B}\mathbf{P}^{(\alpha)}\\ &=\left(\mathbf{I}-\mathbf{B}\right)\mathbf{P}^{(\alpha)}+\mathbf{Q}, \end{aligned} $$ where $$ \begin{aligned} &\mathbf{Q}=\left[\mathbf{Q}_1,\mathbf{Q}_2,\cdots,\mathbf{Q}_m\right]^T,\\ &\mathbf{P}^{(\alpha)}=\left[\mathbf{P}_1^{(\alpha)},\mathbf{P}_2^{(\alpha)},\cdots,\mathbf{P}_n^{(\alpha)}\right]^T,\\ &\mathbf{\Delta}^{(\alpha)}=\left[\mathbf{\Delta}_1^{(\alpha)},\mathbf{\Delta}_2^{(\alpha)},\cdots,\mathbf{\Delta}_n^{(\alpha)}\right]^T,\\ &\mathbf{B}=\begin{bmatrix} B_1(t_1) & B_2(t_1) & \cdots & B_n(t_1)\\ B_1(t_2) & B_2(t_2) & \cdots & B_n(t_2)\\ \vdots & \vdots & \ddots & \vdots\\ B_1(t_m) & B_2(t_m) & \cdots & B_n(t_m) \end{bmatrix}. \end{aligned} $$ The convergence of the PIA is related to the properties of the collocation matrix $\mathbf{B}$.

2.1 Non-singular case

  • If $m=n$ and the spectral radius of the iteration matrix $\mathbf{I}-\mathbf{B}$ is less than 1, then the PIA is convergent.
  • If $n<m$ and the collocation matrix $\mathbf{B}$ has full column rank, then all eigenvalues of $\mathbf{B}^T\mathbf{B}$ are positive, and the convergence of LSPIA follows from the lemma and theorem below.

Lemma: If $0<\mu<\frac{2}{\lambda_0}$, where $\lambda_0$ is the largest eigenvalue of $\mathbf{B}^T\mathbf{B}$, then $0<\lambda(\mu\mathbf{B}^T\mathbf{B})$. Moreover, $$ \begin{aligned} \lambda(\mu\mathbf{B}^T\mathbf{B})&=\mu\lambda(\mathbf{B}^T\mathbf{B})\\ &<\frac{2\lambda(\mathbf{B}^T\mathbf{B})}{\lambda_0}\le2. \end{aligned} $$ In summary, $0<\lambda(\mu\mathbf{B}^T\mathbf{B})<2$.

Theorem: If $0<\mu<\frac{2}{\lambda_0}$, LSPIA is convergent and converges to the least-squares fitting result to the given data points.

Proof: From the matrix form of the iterative format, we obtain $$ \begin{aligned} \mathbf{P}^{(\alpha+1)}&=\left(\mathbf{I}-\mu\mathbf{B}^T\mathbf{B}\right)\mathbf{P}^{(\alpha)}+\mu\mathbf{B}^T\mathbf{Q}\\ &=\left(\mathbf{I}-\mu\mathbf{B}^T\mathbf{B}\right)\left[\left(\mathbf{I}-\mu\mathbf{B}^T\mathbf{B}\right)\mathbf{P}^{(\alpha-1)}+\mu\mathbf{B}^T\mathbf{Q}\right]+\mu\mathbf{B}^T\mathbf{Q}\\ &=\left(\mathbf{I}-\mu\mathbf{B}^T\mathbf{B}\right)^2\mathbf{P}^{(\alpha-1)}+\sum_{i=0}^{1}\left(\mathbf{I}-\mu\mathbf{B}^T\mathbf{B}\right)^i\mu\mathbf{B}^T\mathbf{Q}\\ &=\cdots\\ &=\left(\mathbf{I}-\mu\mathbf{B}^T\mathbf{B}\right)^{\alpha+1}\mathbf{P}^{(0)}+\sum_{i=0}^{\alpha}\left(\mathbf{I}-\mu\mathbf{B}^T\mathbf{B}\right)^{i}\mu\mathbf{B}^T\mathbf{Q}. \end{aligned} $$ According to the lemma, the spectral radius of the matrix $\mu\mathbf{B}^T\mathbf{B}$ satisfies $$ 0<\rho\left(\mu\mathbf{B}^T\mathbf{B}\right)<2, $$ so the spectral radius of the iteration matrix satisfies $$ 0<\rho\left(\mathbf{I}-\mu\mathbf{B}^T\mathbf{B}\right)<1. $$ Letting $\alpha\rightarrow\infty$, $$ \left(\mathbf{I}-\mu\mathbf{B}^T\mathbf{B}\right)^{\alpha+1}\rightarrow0,\quad \sum_{i=0}^{\infty}\left(\mathbf{I}-\mu\mathbf{B}^T\mathbf{B}\right)^{i}=\frac{1}{\mu}\left(\mathbf{B}^T\mathbf{B}\right)^{-1}. $$ As a result, $$ \mathbf{P}^{(\infty)}=\left(\mathbf{B}^T\mathbf{B}\right)^{-1}\mathbf{B}^T\mathbf{Q}, $$ i.e., $\mathbf{B}^T\mathbf{B}\mathbf{P}^{(\infty)}=\mathbf{B}^T\mathbf{Q}$, which is the normal equation of the fitting problem. Hence, the LSPIA algorithm converges to the least-squares result for the given data points.
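These spectral conditions are easy to check numerically. The following illustrative snippet (not from the article) verifies, for a random full-column-rank matrix standing in for the collocation matrix, that the eigenvalues of $\mu\mathbf{B}^T\mathbf{B}$ lie in $(0,2)$ and that the iteration matrix has spectral radius below 1.

```python
import numpy as np

# Check the two facts used in the proof: for full-column-rank B and
# 0 < mu < 2 / lambda_0, the eigenvalues of mu * B^T B lie in (0, 2)
# and rho(I - mu * B^T B) < 1.
rng = np.random.default_rng(1)
B = rng.random((10, 4))               # m = 10 rows, n = 4 columns (full rank)
lam = np.linalg.eigvalsh(B.T @ B)     # eigenvalues of B^T B, all positive
lam0 = lam.max()                      # lambda_0
mu = 1.5 / lam0                       # any value in (0, 2 / lambda_0)

in_range = (mu * lam > 0).all() and (mu * lam < 2).all()
rho = np.abs(np.linalg.eigvals(np.eye(4) - mu * B.T @ B)).max()
print(in_range, rho < 1)
```

The geometric-series identity used in the proof then applies, since the powers of a matrix with spectral radius below 1 vanish and the series sums to the inverse.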

2.2 Singular case

Lin et al. showed that LSPIA converges even when the iteration matrix is singular [13].

3. Acceleration algorithms

  • Incremental fitting: during the incremental iteration, each new round reuses information from the previous round to reduce the number of iterations.
  • Local fitting: based on the local property of PIA, researchers have proposed a series of local PIA formats[10][14][15].
  • Combination with classical iterative algorithms: PIA formats based on the conjugate gradient[16][17], Gauss–Seidel[18][19], SOR[20], and HSS[21] methods.
  • Preconditioning: Liu et al. proposed a preconditioned PIA for triangular Bézier patches via the diagonally compensated reduction method, effectively improving the accuracy and efficiency of the classical algorithm[22].
  • Iteration-matrix inverse approximation: Sajavičius improved LSPIA based on a matrix approximate-inverse method[23]. In each iteration step, an approximate inverse of the coefficient matrix of the least-squares fitting problem is first computed and then used as the weight to adjust the control points.
  • Optimal weights[24][25][26]: Lu first presented a weighted progressive-iterative approximation (WPIA) that introduces an optimal weight for the difference vectors of the control points to accelerate convergence. For more flexible geometric modeling, Zhang et al. proposed a weighted local PIA format for tensor-product Bézier surfaces. Li et al. assigned initial weights to each data point and determined the weights of the interpolated points adaptively during the iterative process.
  • Stochastic descent strategy: Rios and Jüttler explored the relationship between LSPIA and the gradient descent method and proposed a stochastic LSPIA algorithm with parameter correction[27].
  • Acceleration with memory[28]: in 2020, Huang et al. proposed a progressive-iterative approximation method with memory for least-squares fitting (MLSPIA), which has a format similar to the momentum method. MLSPIA generates a series of fitting curves (surfaces) with three weights by iteratively adjusting the control points. With appropriate parameter selection, these curves (surfaces) converge to the least-squares fitting result of the given data points more efficiently than with LSPIA.


4. Applications

Because the geometric iterative method has a clear geometric meaning and is easy to implement in parallel, it compensates for the shortcomings of traditional methods that solve the system of equations directly. It is now widely used in many fields, such as data fitting, reverse engineering, geometric design, modeling, mesh generation, data compression, fairing curve and surface generation, and isogeometric analysis[29][30].

  • Adaptive data fitting[31]: the control points are divided into active and fixed control points. In each round of iteration, if the fitting error of a data point reaches the given accuracy, its corresponding control point is fixed and no longer updated. The process is repeated until all control points are fixed. The algorithm performs well on large-scale data sets by adaptively reducing the number of active control points.
  • Large-scale data fitting[6]: by combining T-splines with PIA, an incremental fitting algorithm suitable for large-scale data sets was proposed. The convergence rate of traditional point-by-point iterative algorithms decreases as the number of control points increases, whereas in PIA the computation of each iteration step is unrelated to the number of control points, giving it a powerful data fitting capability.
  • Implicit reconstruction[7]: for implicit curve and surface reconstruction, PIA avoids the additional zero level set and regularization term, which greatly improves the speed of the reconstruction algorithm.
  • Offset curve approximation[32]: data points are first sampled on the original curve; an initial polynomial or rational approximation curve of the offset curve is then generated from these sampled points; finally, the offset curve is approximated iteratively using the PIA method.
  • Mesh generation[33]: given an input triangular mesh model, the algorithm first constructs an initial hexahedral mesh, extracts the quadrilateral mesh of the surface as the initial boundary mesh, constrains the movement of the mesh vertices during the iteration, and finally fits the hexahedral model to the given input model. This guarantees the validity of the generated hexahedral mesh, i.e., the Jacobian value at each mesh vertex is greater than zero.
  • Data compression[34]: the image data are first converted into a one-dimensional sequence by a Hilbert scan; these data points are then fitted by LSPIA to generate a Hilbert curve; finally, the Hilbert curve is sampled and the compressed image is reconstructed. This method preserves the neighborhood information of pixels well.
  • Fairing curve and surface generation[11]: given a data point set, first define the fairing functional, then calculate the fitting difference vectors and fairing vectors of the control points, and finally adjust the control points with fairing weights. Repeating these steps generates a fairing curve or surface iteratively. Owing to the abundant fairing parameters, the method can achieve global or local fairing, and knot vectors, fairing weights, or the data parameterization can be adjusted flexibly after each round of iteration. The traditional energy-minimization method is a special case of this method in which the smoothing weights are all equal.
  • Isogeometric analysis[12]: the discretized load values are regarded as the set of data points, and a combination of the basis functions and their derivatives is used as the blending function for fitting. The method automatically adjusts the degrees of freedom of the numerical solution of the partial differential equation according to how well the blending function fits the load values. In addition, the average iteration time per step depends only on the number of collocation points and is unrelated to the number of control coefficients.

5. References

[1] Lin H, Wang G, Dong C. Constructing iterative non-uniform B-spline curve and surface to fit data points. Science China Information Sciences, 2004, 47(3): 315–331.
[2] Lin H, Bao H, Wang G. Totally positive bases and progressive iteration approximation. Computers & Mathematics with Applications, 2005, 50(3): 575–586.
[3] Maekawa T, Matsumoto Y, Namiki K. Interpolation by geometric algorithm. Computer-Aided Design, 2007, 39(4): 313–323.
[4] Chen Z, Luo X, Tan L, et al. Progressive interpolation based on Catmull–Clark subdivision surfaces. Computer Graphics Forum, 2008, 27(7): 1823–1827.
[5] Shi L, Wang R. An iterative algorithm of NURBS interpolation and approximation. Journal of Mathematical Research with Applications, 2006, 26(4): 735–743.
[6] Lin H, Zhang Z. An efficient method for fitting large data sets using T-splines. SIAM Journal on Scientific Computing, 2013, 35(6): A3052–A3068.
[7] Hamza Y F, Lin H, Li Z. Implicit progressive-iterative approximation for curve and surface reconstruction. Computer-Aided Geometric Design, 2020, 77: 101817.
[8] Zhao Y, Lin H. The PIA property of low degree non-uniform triangular B-B patches. In: Proceedings of the 12th International Conference on Computer-Aided Design and Computer Graphics, 2011: 239–242.
[9] Deng C, Lin H. Progressive and iterative approximation for least squares B-spline curve and surface fitting. Computer-Aided Design, 2014, 47: 32–44.
[10] Lin H. Local progressive-iterative approximation format for blending curves and patches. Computer Aided Geometric Design, 2010, 27(4): 322–339.
[11] Jiang Y, Lin H, Huang W. Fairing-PIA: progressive-iterative approximation for fairing curve and surface generation. The Visual Computer, 2023.
[12] Jiang Y, Lin H. IG-LSPIA: least squares progressive iterative approximation for isogeometric collocation method. Mathematics, 2023, 11(4): 898.
[13] Lin H, Cao Q, Zhang X. The convergence of least-squares progressive iterative approximation for singular least-squares fitting system. Journal of Systems Science and Complexity, 2018, 31(6): 1618–1632.
[14] Zhao Y, Lin H, Bao H. Local progressive interpolation for subdivision surface fitting. Computer Research and Development, 2012, 49(8): 1699–1707.
[15] Yan L, Yu D. Local progressive iterative approximation for triangular Bézier and rational triangular Bézier surfaces. In: 3rd International Conference on Mechatronics and Industrial Informatics, 2015: 456–463.
[16] Hamza Y F, Lin H. Conjugate-gradient progressive-iterative approximation for Loop and Catmull–Clark subdivision surface interpolation. Journal of Computer Science and Technology, 2022, 37(2): 487–504.
[17] Jiang Y, Lin H. Conjugate-gradient progressive-iterative approximation for least square fitting of curves and surfaces. SCIENTIA SINICA Informationis, 2022, 52(7): 1251–1271.
[18] Hamza Y F, Jiang Y, Lin H. Gauss–Seidel progressive and iterative approximation for least squares fitting. Journal of Computer-Aided Design & Computer Graphics, 2021, 33(1): 1–10.
[19] Wang Z, Li Y, Liu J, et al. Gauss–Seidel progressive iterative approximation (GSPIA) for subdivision surface interpolation. The Visual Computer, 2021, 39(1): 139–148.
[20] Hu L, Shou H, Fang S. Progressive iterative approximation of SOR for non-uniform cubic B-spline curve and surface interpolation. In: Simulation Tools and Techniques, 2021: 339–349.
[21] Hu L, Shou H, Dai Z. HSS-iteration-based iterative interpolation of curves and surfaces with NTP bases. Wireless Networks, 2021, 27(5): 3523–3535.
[22] Liu C, Han X, Li J. Preconditioned progressive iterative approximation for triangular Bézier patches and its application. Journal of Computational and Applied Mathematics, 2020, 366.
[23] Sajavičius S. Hyperpower least squares progressive iterative approximation. Journal of Computational and Applied Mathematics, 2023, 422: 114888.
[24] Lu L. Weighted progressive iteration approximation and convergence analysis. Computer Aided Geometric Design, 2010, 27(2): 129–137.
[25] Zhang L, Zuo J, Tan J. Weighted local progressive iterative approximation for tensor-product Bézier surfaces. Journal of Information and Computational Science, 2014, 11(7): 2117–2124.
[26] Li S, Xu H, Deng C. Data-weighted least square progressive and iterative approximation and related B-spline curve fitting. Journal of Computer-Aided Design & Computer Graphics, 2019, 31(9): 1574–1580.
[27] Rios D, Jüttler B. LSPIA, (stochastic) gradient descent, and parameter correction. Journal of Computational and Applied Mathematics, 2022, 406: 113921.
[28] Huang Z, Wang H. On a progressive and iterative approximation method with memory for least square fitting. Computer-Aided Geometric Design, 2020, 82: 101931.
[29] Lin H. Survey on geometric iterative methods with applications. Journal of Computer-Aided Design & Computer Graphics, 2015(4): 582–589.
[30] Lin H, Maekawa T, Deng C. Survey on geometric iterative methods and their applications. Computer Aided Design, 2018, 95: 40–51.
[31] Lin H. Adaptive data fitting by the progressive-iterative approximation. Computer-Aided Geometric Design, 2012, 29(7): 463–473.
[32] Zhang L, Wang H, Li Y. A progressive iterative approximation method in offset approximation. Journal of Computer-Aided Design & Computer Graphics, 2014, 26(10): 1646–1653.
[33] Lin H, Jin S, Liao H, et al. Quality guaranteed all-hex mesh generation by a constrained volume iterative fitting algorithm. Computer-Aided Design, 2015, 67-68: 107–117.
[34] Hu L, Yi Y, Liu C, et al. Iterative method for image compression by using LSPIA. IAENG International Journal of Computer Science, 2020, 47(4): 1–7.
