
English translation:
If the method is convergent, the L2 norms of the difference vector, Δp, and of the residual vector, A(p), converge to zero [see 12]. We report the convergence of both of these vectors. To better understand the error-reducing property of these methods, we report the variation of ‖A(p_k)‖_{L2}/‖A(p_0)‖_{L2} and ‖Δp_k‖_{L2}/‖Δp_0‖_{L2} with the iteration count k.
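The normalized quantities above are straightforward to record during the iteration. Here is a minimal sketch in Python/NumPy of a plain Newton iteration that tracks both ratios; the toy 2×2 system, the function names, and the iteration count are illustrative assumptions, not the paper's algorithms (7) and (9) or its discretized elliptic problem:

```python
import numpy as np

def newton_with_history(A, J, p0, iters=10):
    """Plain Newton iteration p_{k+1} = p_k - J(p_k)^{-1} A(p_k),
    recording the normalized residual and difference norms."""
    p = np.asarray(p0, dtype=float)
    r0 = np.linalg.norm(A(p))      # ||A(p_0)||_2
    d0 = None                      # ||Δp_0||_2, set after the first step
    res_hist, step_hist = [], []
    for _ in range(iters):
        dp = np.linalg.solve(J(p), -A(p))   # Newton step Δp_k
        p = p + dp
        if d0 is None:
            d0 = np.linalg.norm(dp)
        res_hist.append(np.linalg.norm(A(p)) / r0)   # ||A(p_k)|| / ||A(p_0)||
        step_hist.append(np.linalg.norm(dp) / d0)    # ||Δp_k|| / ||Δp_0||
    return p, res_hist, step_hist

# Toy 2x2 nonlinear system (a stand-in, with root p = (1, 2))
A = lambda p: np.array([p[0]**2 + p[1] - 3.0, p[0] + p[1]**2 - 5.0])
J = lambda p: np.array([[2.0 * p[0], 1.0], [1.0, 2.0 * p[1]]])
p, res, step = newton_with_history(A, J, np.array([1.0, 1.0]))
```

Plotting `res` and `step` against the iteration index k reproduces the kind of convergence curves discussed here.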
We performed three experiments with different initializations in algorithms (7) and (9). In the first test, the initial vector is zero for both algorithms. Figure 2 reports the results: Figure 2(a) presents the convergence of the residual vector, while Figure 2(b) presents the convergence of the difference vector. In these figures, NIM stands for Newton Iterative Method and AJNIM for Altered Jacobian Newton Iterative Method. The figures show that both methods converge at the same (quadratic) rate, but the Altered Jacobian Newton Iterative Method is still better at reducing the error.
In the second case, we select an initial vector whose elements are all 10. Figure 3 compares the two methods for this initialization. Figures 3(a) and 3(b) show that the Altered Jacobian Newton Iterative Method converges faster than the Newton Iterative Method. Finally, we take an initial vector with elements equal to 100. Figure 4 compares the two methods for this initialization. Figures 4(a) and 4(b) show that the Newton Iterative Method does not converge, while the Altered Jacobian Newton Iterative Method still does. Table 1 presents the error after 10 iterations of the two methods. These experiments show that the convergence of the Altered Jacobian Newton Iterative Method is independent of the initialization. The Newton Iterative Method converges quadratically in the first case (zero initial guess), but its convergence rate decreases for the other initial guesses. In contrast, the Altered Jacobian Newton Iterative Method converges quadratically for all the initial guesses.
We have developed a nonlinear algorithm, the Altered Jacobian Newton Iterative Method, for solving the systems of nonlinear equations that arise from the discretization of nonlinear elliptic problems. The numerical results presented show that the Altered Jacobian Newton Iterative Method is robust with respect to the initialization.
chunzhilove · 1 year ago · 1 answer received

oplink (Seedling)

Answered 18 questions · acceptance rate: 77.8%

If the method is convergent, the L2 norms of the difference vector, Δp, and of the residual vector, A(p), converge to zero [12]. We report the convergence of both of these vectors. To better understand the error-reducing property of these methods, we report the variation of ‖A(p_k)‖_{L2}/‖A(p_0)‖_{L2} and ‖Δp_k‖_{L2}/‖Δp_0‖_{L2} with the iteration count k.
We performed three experiments with different initializations in algorithms (7) and (9). In the first test, the initial vector is zero for both algorithms. Figure 2 reports the results: Figure 2(a) presents the convergence of the residual vector, while Figure 2(b) presents the convergence of the difference vector. In these figures, NIM stands for Newton Iterative Method and AJNIM for Altered Jacobian Newton Iterative Method. The figures show that both methods converge at the same (quadratic) rate, but the Altered Jacobian Newton Iterative Method is still better at reducing the error.
In the second case, we select an initial vector whose elements are all 10. Figure 3 compares the two methods for this initialization. Figures 3(a) and 3(b) show that the Altered Jacobian Newton Iterative Method converges faster than the Newton Iterative Method. Finally, we take an initial vector with elements equal to 100. Figure 4 compares the two methods for this initialization. Figures 4(a) and 4(b) show that the Newton Iterative Method does not converge, while the Altered Jacobian Newton Iterative Method still does. Table 1 presents the error after 10 iterations of the two methods. These experiments show that the convergence of the Altered Jacobian Newton Iterative Method is independent of the initialization. The Newton Iterative Method converges quadratically in the first case (zero initial guess), but its convergence rate decreases for the other initial guesses. In contrast, the Altered Jacobian Newton Iterative Method converges quadratically for all the initial guesses.
We have developed a nonlinear algorithm, the Altered Jacobian Newton Iterative Method, for solving the systems of nonlinear equations that arise from the discretization of nonlinear elliptic problems. The numerical results presented show that the Altered Jacobian Newton Iterative Method is robust with respect to the initialization.
Make do with this for now; sometimes you have to read it back to front.

1 year ago

Copyright © 2024 YULUCN.COM - 雨露学习互助