Method for Applied Math (1)
Generated by Qwen3-Max-Preview.
Notation
We say \(f(\varepsilon) \ll 1\) iff \(\lim_{\varepsilon \to 0} f(\varepsilon) = 0\). Moreover, \(f(\varepsilon) \sim g(\varepsilon)\) iff \(\lim_{\varepsilon \to 0} \frac{f(\varepsilon)}{g(\varepsilon)} = 1\).
I. Series Method
Example 1: Regular Perturbation
Equation: \[ x^2 + \varepsilon x - 1 = 0 \Rightarrow x = -\frac{1}{2}\varepsilon \pm \sqrt{1 + \frac{1}{4}\varepsilon^2} \]
Exact expansion (Taylor around \(\varepsilon = 0\)): \[ x = \begin{cases} 1 - \frac{1}{2}\varepsilon + \frac{1}{8}\varepsilon^2 - \frac{1}{128}\varepsilon^4 + O(\varepsilon^6) \\ -1 - \frac{1}{2}\varepsilon - \frac{1}{8}\varepsilon^2 + \frac{1}{128}\varepsilon^4 + O(\varepsilon^6) \end{cases} \]
(1) Expansion Method
Assume expansion around \(x = 1\): \[ x(\varepsilon) = 1 + \varepsilon x_1 + \varepsilon^2 x_2 + \cdots \]
Substitute into the equation: \[ x^2 = 1 + 2\varepsilon x_1 + (x_1^2 + 2x_2)\varepsilon^2 + \cdots, \qquad \varepsilon x = \varepsilon + \varepsilon^2 x_1 + \cdots \]
Collect by powers of \(\varepsilon\):
- \(O(1)\): \(1 - 1 = 0\) ✅
- \(O(\varepsilon)\): \(2x_1 + 1 = 0 \Rightarrow x_1 = -\frac{1}{2}\)
- \(O(\varepsilon^2)\): \(x_1^2 + 2x_2 + x_1 = 0 \Rightarrow \frac{1}{4} + 2x_2 - \frac{1}{2} = 0 \Rightarrow x_2 = \frac{1}{8}\)
Result: \[ x(\varepsilon) = 1 - \frac{1}{2}\varepsilon + \frac{1}{8}\varepsilon^2 + \cdots \]
Matches Taylor expansion.
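A quick numerical sanity check of the truncated series against the exact bounded root (a Python sketch; the value \(\varepsilon = 0.01\) is an arbitrary illustrative choice):

```python
import math

eps = 0.01  # illustrative small parameter (arbitrary choice)

# Exact bounded root of x^2 + eps*x - 1 = 0
exact = -eps / 2 + math.sqrt(1 + eps**2 / 4)

# Truncated perturbation series from the Taylor expansion above
approx2 = 1 - eps / 2                 # two terms
approx3 = 1 - eps / 2 + eps**2 / 8    # three terms

print(abs(exact - approx2))  # error is O(eps^2)
print(abs(exact - approx3))  # error is O(eps^4), since the eps^3 coefficient vanishes
```

Each retained term knocks the error down by the expected power of \(\varepsilon\).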
(2) Iterative Method (Fixed Point)
Given \(f(x) = 0\), rewrite as \(x = g(x)\).
For \(x^2 + \varepsilon x - 1 = 0\), rearrange: \[ x = \sqrt{1 - \varepsilon x} = g(x) \]
Convergence condition: \(|g'(x)| < 1\)
Compute: \[
|g'(x)| = \frac{\varepsilon}{2\sqrt{1 - \varepsilon x}} < 1 \quad \text{if } x < \frac{1}{\varepsilon} - \frac{\varepsilon}{4},
\]
which holds near \(x = 1\) for small \(\varepsilon\).
Start with \(x_0 = 1\):
- \(x_1 = \sqrt{1 - \varepsilon} \approx 1 - \frac{1}{2}\varepsilon - \frac{1}{8}\varepsilon^2\)
- \(x_2 = 1 - \frac{1}{2}\varepsilon + \frac{1}{8}\varepsilon^2 + O(\varepsilon^3)\)
- \(x_3 = 1 - \frac{1}{2}\varepsilon + \frac{1}{8}\varepsilon^2 - \frac{1}{128}\varepsilon^4 + \cdots\)
✅ Each iteration adds one more accurate term — converges to Taylor expansion.
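The fixed-point iteration can be sketched numerically (Python; \(\varepsilon = 0.01\) and \(x_0 = 1\) are illustrative choices):

```python
import math

eps = 0.01  # illustrative small parameter

def g(x):
    # Fixed-point form of x^2 + eps*x - 1 = 0: x = sqrt(1 - eps*x)
    return math.sqrt(1 - eps * x)

x = 1.0  # starting guess x0 = 1
for _ in range(10):
    x = g(x)  # contraction factor ~ eps/2, so convergence is very fast

exact = -eps / 2 + math.sqrt(1 + eps**2 / 4)
print(abs(x - exact))  # essentially machine precision after a few steps
```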
Example 2: Singular Perturbation
Equation: \[ \varepsilon x^2 + x - 1 = 0 \]
True solutions: \[ x = \frac{-1 \pm \sqrt{1 + 4\varepsilon}}{2\varepsilon} = \begin{cases} 1 - \varepsilon + 2\varepsilon^2 - 5\varepsilon^3 + \cdots \\ -\frac{1}{\varepsilon} - 1 + \varepsilon - 2\varepsilon^2 + \cdots \end{cases} \]
At \(\varepsilon = 0\) the quadratic degenerates to the linear equation \(x - 1 = 0\): one root tends to \(1\), while the other escapes to \(-\infty\). This is the singular behavior.
(1) Iterative Method — Converges as Expected
From: \[ \varepsilon x^2 + x - 1 = 0 \Rightarrow x = \frac{1}{1 + \varepsilon x} = g(x) \]
Check derivative: \[ |g'(x)| = \left| \frac{\varepsilon}{(1 + \varepsilon x)^2} \right| < 1 \quad \text{for } x > 0, \varepsilon \ll 1 \]
Start with \(x_0 = 1\):
- \(x_1 = \frac{1}{1 + \varepsilon} = 1 - \varepsilon + \varepsilon^2 - \varepsilon^3 + \cdots\)
- \(x_2 = \frac{1}{1 + \varepsilon x_1} = 1 - \varepsilon + 2\varepsilon^2 + O(\varepsilon^3)\)
- Matches the Taylor expansion term by term.
✅ Conclusion: Iterative method successfully converges to the regular perturbation series for the bounded root.
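The same iteration in code (Python sketch; \(\varepsilon = 0.01\) is an illustrative choice), compared against both the exact bounded root and the three-term series:

```python
import math

eps = 0.01  # illustrative small parameter

def g(x):
    # Fixed-point form of eps*x^2 + x - 1 = 0: x = 1/(1 + eps*x)
    return 1 / (1 + eps * x)

x = 1.0
for _ in range(20):
    x = g(x)  # contraction factor ~ eps, so convergence is fast

exact = (-1 + math.sqrt(1 + 4 * eps)) / (2 * eps)  # bounded root
series = 1 - eps + 2 * eps**2                      # three-term expansion

print(abs(x - exact))       # iteration converges to the bounded root
print(abs(series - exact))  # series error is O(eps^3)
```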
(2) Expansion Method (with Scaling Ansatz)
Assume: \[ x = \delta(\varepsilon) X, \quad X = O(1) \]
Plug into \(\varepsilon x^2 + x - 1 = 0\): \[ \varepsilon \delta^2 X^2 + \delta X - 1 = 0 \]
Try scalings:
Case i) \(\delta \ll 1\):
Then \(\delta X \ll 1\) and \(\varepsilon \delta^2 X^2 \ll 1\), so the equation reads small + small \(- 1 = 0\) → ❌ impossible.
Case ii) \(\delta = 1\):
Equation: \(\varepsilon X^2 + X - 1 = 0\) → \(X \approx 1\) → ✅ regular perturbation.
Case iii) \(\delta \gg 1\):
Then \(\varepsilon \delta^2 X^2\) and \(\delta X\) may dominate.
Try dominant balance: \(\varepsilon \delta^2 X^2 \sim \delta X\) → \(\varepsilon \delta X \sim 1\) → \(\delta \sim \varepsilon^{-1} X^{-1}\)
Assume \(X = O(1)\) → \(\delta \sim \varepsilon^{-1}\) → \(x \sim \varepsilon^{-1}\)
Plug in: \(\varepsilon (\varepsilon^{-2} X^2) + \varepsilon^{-1} X - 1 = \varepsilon^{-1} X^2 + \varepsilon^{-1} X - 1\)
Leading order: \(\varepsilon^{-1}(X^2 + X)\) must vanish at leading order to balance the \(-1\), so \(X^2 + X = O(\varepsilon)\). Since \(X = O(1)\) with \(X \neq 0\), this forces \(X \to -1\), consistent with the large root \(x \sim -\varepsilon^{-1}\).
So two consistent scalings:
- \(\delta = 1\): \(x \sim 1\) → regular root
- \(\delta = \varepsilon^{-1}\): \(x \sim -\varepsilon^{-1}\) → singular root
II. Dominant Balance Method
Goal: Identify which terms dominate as \(\varepsilon \to 0\).
Equation: \(\varepsilon x^2 + x - 1 = 0\)
Possible balances:
- \(x \sim 1\): balance \(x\) against \(-1\); the neglected term \(\varepsilon x^2 = O(\varepsilon)\) is indeed small → ✅ regular root
- \(\varepsilon x^2 \sim x\): then \(\varepsilon x \sim 1\) → \(x \sim \varepsilon^{-1}\) → then \(\varepsilon x^2 \sim \varepsilon^{-1}\), \(x \sim \varepsilon^{-1}\), \(-1\) negligible → ✅ for large root
- \(\varepsilon x^2 \sim 1\): then \(x \sim \varepsilon^{-1/2}\), but the neglected middle term \(x \sim \varepsilon^{-1/2}\) is then much larger than the two retained \(O(1)\) terms → ❌ inconsistent
Thus, only two balances work — matches exact solution.
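The two predicted scalings can be checked against the exact roots (Python sketch; the sequence of \(\varepsilon\) values is an arbitrary choice):

```python
import math

def roots(eps):
    # Exact roots of eps*x^2 + x - 1 = 0 from the quadratic formula
    d = math.sqrt(1 + 4 * eps)
    return (-1 + d) / (2 * eps), (-1 - d) / (2 * eps)

for eps in (1e-2, 1e-4, 1e-6):
    bounded, singular = roots(eps)
    # bounded root -> 1, and eps * (singular root) -> -1, as dominant balance predicts
    print(eps, bounded, eps * singular)
```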
Asymptotic vs. Convergent Series — Important Remark
⚠️ Key Insight:
The perturbation series obtained via expansion or iteration may not converge as \(\varepsilon \to 0\) in the classical sense, especially for singular problems. However, asymptotic expansions are still extremely useful: they approximate the true solution at the parameter value of interest (e.g., \(\varepsilon = 0.01\)) to high accuracy, even when the series diverges as more terms are taken.
In many cases, the series is asymptotic, not convergent — meaning:
\[ \left| x(\varepsilon) - \sum_{k=0}^N a_k \varepsilon^k \right| = o(\varepsilon^N) \quad \text{as } \varepsilon \to 0 \]
— even if the infinite series diverges. This is why perturbation methods remain powerful: they give practically accurate approximations even without convergence.
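To make the truncation-error statement concrete: for the bounded root of \(\varepsilon x^2 + x - 1 = 0\), the series coefficients are signed Catalan numbers, \(x = \sum_{n \ge 0} (-1)^n C_n \varepsilon^n\). (This particular series actually converges for \(\varepsilon < 1/4\), so the sketch below illustrates only the \(o(\varepsilon^N)\) error scaling, not divergence; \(\varepsilon = 0.01\) is an illustrative choice.)

```python
import math

eps = 0.01  # illustrative small parameter
exact = (-1 + math.sqrt(1 + 4 * eps)) / (2 * eps)  # bounded root

def catalan(n):
    # C_n = (2n choose n) / (n + 1): 1, 1, 2, 5, 14, ...
    return math.comb(2 * n, n) // (n + 1)

partial = 0.0
for N in range(5):
    partial += (-1) ** N * catalan(N) * eps ** N
    # Truncation error after the eps^N term shrinks like eps^(N+1)
    print(N, abs(exact - partial))
```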
Example 3: Transcendental Equation
\[ x e^{-x} = \varepsilon \]
Rewrite: \[ \frac{x}{\varepsilon} = e^x \]
Graphical analysis suggests two roots:
- Small root: \(x \sim \varepsilon\) (for small \(x\), \(e^x \approx 1\), so \(x \approx \varepsilon\))
- Large root: \(x \sim \ln(1/\varepsilon)\) (with slowly varying corrections of order \(\ln\ln(1/\varepsilon)\))
Use scaling \(x = \delta(\varepsilon) X\) to find consistent asymptotic expansions for each.
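As a numerical reference point before doing the asymptotics, both roots can be bracketed and bisected (Python sketch; \(\varepsilon = 0.01\) and the bracket endpoints are illustrative choices):

```python
import math

eps = 0.01  # illustrative small parameter

def f(x):
    return x * math.exp(-x) - eps

def bisect(a, b, tol=1e-12):
    # Plain bisection; assumes f changes sign on [a, b]
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# f has its maximum at x = 1, so the two roots lie on either side of it
small = bisect(0.0, 1.0)    # expected ~ eps
large = bisect(1.0, 20.0)   # expected ~ ln(1/eps), plus slowly varying corrections

print(small, eps)
print(large, math.log(1 / eps))
```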
Summary
| Concept | Key Idea |
|---|---|
| Regular Perturbation | Expand around known limit; converges to Taylor series for bounded roots |
| Singular Perturbation | Requires scaling \(\delta(\varepsilon)\); multiple roots at different scales |
| Iterative Method | Converges if \(|g'(x)| \leq L < 1\); builds the asymptotic series term by term |
| Dominant Balance | Identify leading-order terms to deduce scaling |
| Asymptotic Series | May diverge globally but approximates solution locally — still useful! |
✅ Final Takeaway:
Perturbation methods are not about infinite convergence — they’re about controlled approximation. Even divergent asymptotic series can yield highly accurate results for small \(\varepsilon\), making them indispensable in applied mathematics.