From 4c3a2a3e84e61168eba1373f07ee011aab7cf95c Mon Sep 17 00:00:00 2001 From: Apramey Date: Wed, 6 Mar 2024 15:21:01 -0600 Subject: [PATCH] Update finite-difference.md --- notes/finite-difference.md | 22 ++++++++++++++++------ 1 file changed, 16 insertions(+), 6 deletions(-) diff --git a/notes/finite-difference.md b/notes/finite-difference.md index 21b4caa..22a4c57 100644 --- a/notes/finite-difference.md +++ b/notes/finite-difference.md @@ -9,7 +9,7 @@ changelog: name: Kriti Chandak netid: kritic3 date: 2024-02-24 - message: added information from slides and additional examples + message: added information from slides and videos and added additional examples - name: Mariana Silva netid: mfsilva @@ -47,16 +47,19 @@ We define the forward finite difference approximation, \\(df(x)\\), to the first where \\(h\\) is often called a "perturbation", i.e., a "small" change to the variable \\(x\\) (small when compared to the magnitude of \\(x\\)). -By the Taylor's theorem, we can write +By Taylor's theorem, we can write
\[f(x+h) = f(x) + f'(x)\, h + f''(x) \frac{h^2}{2} + f'''(x) \frac{h^3}{6} + ... \]
\[f(x+h) =f(x) + f'(x)\, h + O(h^2) \]
\[f'(x) = \frac{f(x+h)-f(x)}{h} + O(h) \]
-Rearranging the above, we get the derivative \\(f'(x)\\) as a function of the forward finite difference approximation \\(d(f(x)\\): +Rearranging the above, we get the derivative \\(f'(x)\\) as a function of the forward finite difference approximation \\(df(x)\\):
\[f'(x) = df(x) + O(h) \]
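As a quick numerical sanity check on this \\(O(h)\\) behavior, here is a minimal Python sketch (an illustrative addition, not part of the original notes; the test function \\(\sin x\\) and the point \\(x = 1\\) are arbitrary choices):

```python
import math

def forward_diff(f, x, h):
    # Forward finite difference approximation df(x) = (f(x + h) - f(x)) / h
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)  # exact derivative of sin, used only to measure the error
for h in (1e-1, 1e-2, 1e-3):
    err = abs(forward_diff(math.sin, x, h) - exact)
    print(f"h = {h:.0e}   error = {err:.2e}")
```

Shrinking \\(h\\) by a factor of 10 shrinks the error by roughly the same factor, consistent with the first-order \\(O(h)\\) term.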
+ +#### Error + Therefore, the truncation error of the forward finite difference approximation is bounded by:
\[Mh \geq |f'(x) - df(x)| \] where \\(M\\) is a bound on \\(|f''(\xi)|/2\\) for \\(\xi\\) near \\(x\\), coming from the \\(h^2\\) term of the Taylor expansion above.
@@ -65,10 +68,17 @@ Therefore, the truncation error of the forward finite difference approximation i If \\(h\\) is very small, we will have cancellation errors, which are bounded by:
\[\frac{\epsilon_m|f(x)|}{h} \geq \text{rounding error} \]
+where \\(\epsilon_m\\) is machine epsilon. + To find the \\(h\\) that minimizes the total error:
\[error \approx \frac{\epsilon_m|f(x)|}{h} + Mh \]
we set its derivative with respect to \\(h\\) to zero, which gives \[h = \sqrt{\frac{\epsilon_m |f(x)|}{M}} \]
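This trade-off is easy to verify numerically. The sketch below (an illustrative addition, not from the original notes) compares the predicted optimal \\(h\\) against a brute-force scan for the notes' example \\(f(x) = e^x - 2\\); the evaluation point \\(x = 1\\) and the bound \\(M = e/2\\) on \\(|f''|/2\\) are assumptions made here.

```python
import math

eps_m = 2.0 ** -52                 # machine epsilon for IEEE double precision
f = lambda x: math.exp(x) - 2.0    # example function from the notes
fprime = lambda x: math.exp(x)     # exact derivative, used only to measure error

x = 1.0
M = math.exp(1.0) / 2.0            # assumed bound on |f''|/2 near x = 1

# Perturbation predicted by the formula above
h_opt = math.sqrt(eps_m * abs(f(x)) / M)

# Brute-force scan: which sampled h gives the smallest total error?
best_h, best_err = None, float("inf")
for k in range(1, 16):
    h = 10.0 ** -k
    err = abs((f(x + h) - f(x)) / h - fprime(x))
    if err < best_err:
        best_h, best_err = h, err

# h_opt comes out to about 1e-8; the best sampled h lands near it
print(f"predicted h_opt ~ {h_opt:.1e}, best sampled h = {best_h:.0e}")
```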
+Using the forward finite difference approximation on \\(f(x) = e^x - 2\\), we can see the total error, truncation error, and rounding error as functions of the chosen perturbation \\(h\\) in the graph below. +
(Figure: total, truncation, and rounding error of the forward finite difference approximation versus the perturbation \\(h\\)) +
+Therefore, we can see that the optimal \\(h\\) that minimizes the total error is approximately where the truncation error and rounding error intersect. Using a similar approach, we can summarize the following finite difference approximations: @@ -209,7 +219,7 @@ Consider a differentiable function \\(f = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \dots & \frac{\partial f_1}{\partial x_n}\\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \dots & \frac{\partial f_2}{\partial x_n} \\ & \ddots \\ - \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \dots & \frac{\partial f_n}{\partial x_n} + \frac{\partial f_m}{\partial x_1} & \frac{\partial f_m}{\partial x_2} & \dots & \frac{\partial f_m}{\partial x_n} \end{bmatrix} \] We define the Jacobian finite difference approximation as @@ -218,7 +228,7 @@ We define the Jacobian finite difference approximation as df_1(x_1) & df_1(x_2) & \dots & df_1(x_n)\\ df_2(x_1) & df_2(x_2) & \dots & df_2(x_n) \\ & \ddots \\ - df_n(x_1) & df_n(x_2) & \dots & df_n(x_n) + df_m(x_1) & df_m(x_2) & \dots & df_m(x_n) \end{bmatrix} \] where \\(df_i(x_j) \\) is the approximation of \\(f_i\\) at \\(x_j\\) using any finite difference method. @@ -277,7 +287,7 @@ We can find the absolute truncation error by: 4. How can you approximate the integral of a function using Taylor series? -5. Given an function and a cneter, can you write out the \\(n\\)-th degree Taylor polynomial? +5. Given a function and a center, can you write out the \\(n\\)-th degree Taylor polynomial? 6. For an \\(n\\)-th degree Taylor polynomial, what is the bound on the error of your approximation as a function of distance from the center?
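Returning to the Jacobian finite difference approximation defined above, it can be sketched in a few lines of Python (an illustrative addition, not from the original notes; the example function and the default \\(h\\) are arbitrary choices, and in practice \\(h\\) would be chosen per the error analysis earlier):

```python
def jacobian_fd(F, x, h=1e-8):
    """Approximate the m x n Jacobian of F: R^n -> R^m with forward differences.

    Column j holds (F(x + h*e_j) - F(x)) / h, i.e. the entries df_i(x_j)
    from the notes, all computed with the forward difference method.
    """
    fx = F(x)
    m, n = len(fx), len(x)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp = list(x)
        xp[j] += h            # perturb only the j-th coordinate
        fxp = F(xp)
        for i in range(m):
            J[i][j] = (fxp[i] - fx[i]) / h
    return J

# Example: F(x, y) = (x*y, x + y) has exact Jacobian [[y, x], [1, 1]]
F = lambda v: [v[0] * v[1], v[0] + v[1]]
print(jacobian_fd(F, [2.0, 3.0]))  # approx [[3.0, 2.0], [1.0, 1.0]], up to O(h) error
```

Note that this costs \\(n + 1\\) evaluations of \\(F\\), one per perturbed coordinate plus the base point.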