Digitize Your Paper Notes Effortlessly with AI

Converting handwritten notes to Markdown has many benefits: above all, it makes them easy to search, store, and share. From what I have observed, most people around me transcribe their notes by typing. That does help consolidate memory, but it also forces you to spend precious time on trivial details (filler content, for example).

Now, thanks to the new generation of VLMs (Vision Language Models), we can change that. And it is completely free.

Model Selection

The model featured this time again comes from the Tongyi Qianwen (Qwen) family: Qwen3-Max-Preview. According to the official Qwen announcement, when you send an image to Qwen3-Max-Preview, it is automatically routed to a next-generation VLM, speculated to be the not-yet-released Qwen3-VLM. (As of September 11, 2025.)

How to Use

  1. Log in to the Qwen Chat website: https://chat.qwen.ai
  2. In the top-left corner, select the model Qwen3-Max-Preview
  3. Upload a photo or screenshot of your notes, along with a short prompt asking the model to convert the handwritten notes into Markdown.
Usage example:
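For readers who prefer scripting over the web UI, the same request can be sketched as an OpenAI-compatible chat payload. This is only a sketch under assumptions: the model identifier, the prompt wording, and the `build_markdown_ocr_request` helper are mine, and you would still need a real endpoint and API key to actually send it.

```python
import base64


def build_markdown_ocr_request(image_bytes: bytes,
                               model: str = "qwen3-max-preview") -> dict:
    """Build an OpenAI-compatible chat payload (hypothetical helper) asking a
    VLM to transcribe a handwritten-note image into Markdown."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,  # assumed model id; check the provider's model list
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Convert the handwritten notes in this image into "
                         "clean, well-structured Markdown. Keep all math "
                         "as LaTeX."},
                # Image is passed inline as a base64 data URL
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }


# Placeholder bytes; in practice, read your note photo from disk
payload = build_markdown_ocr_request(b"\x89PNG placeholder")
print(payload["model"])
```

The payload could then be POSTed to any OpenAI-compatible chat-completions endpoint that your provider exposes.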

Results Preview

The original notes: (an extreme case, mind your eyes)

Extremely messy handwriting

Qwen Chat:

Here’s a clean and structured Markdown version of your handwritten lecture notes on Perturbation Methods for Algebraic Equations, preserving all key points, mathematical expressions, and logical flow.


Part I: Perturbation Methods for Algebraic Equations

I. Series Method

Example 1:

\[ x^2 + \varepsilon x - 1 = 0 \Rightarrow x = -\frac{1}{2}\varepsilon \pm \sqrt{1 + \frac{1}{4}\varepsilon^2} = x(\varepsilon) \]

Exact solution: \[ x = \begin{cases} 1 - \frac{1}{2}\varepsilon + \frac{1}{8}\varepsilon^2 - \frac{1}{128}\varepsilon^4 + O(\varepsilon^6) \\ -1 - \frac{1}{2}\varepsilon - \frac{1}{8}\varepsilon^2 + \frac{1}{128}\varepsilon^4 + O(\varepsilon^6) \end{cases} \]


(1) Expansion Method

Assume expansion of \(x(\varepsilon)\) around \(x = 1\):

\[ x(\varepsilon) = 1 + \varepsilon x_1 + \varepsilon^2 x_2 + \cdots \]

Substitute back into original equation:

\[ x^2 = 1 + 2\varepsilon x_1 + (x_1^2 + 2x_2)\varepsilon^2 + \cdots \\ \varepsilon x = \varepsilon + \varepsilon^2 x_1 + \varepsilon^3 x_2 + \cdots \]

So the equation becomes: \[ x^2 + \varepsilon x - 1 = 0 \Rightarrow \text{(Collect terms by powers of } \varepsilon) \]

At \(\varepsilon = 0\): \[ O(\varepsilon^0): \quad 1 - 1 = 0 \quad \text{(OK)} \]

\(O(\varepsilon^1)\): \[ 2x_1 + 1 = 0 \Rightarrow x_1 = -\frac{1}{2} \]

\(O(\varepsilon^2)\): \[ x_1^2 + 2x_2 + x_1 = 0 \Rightarrow \left(-\frac{1}{2}\right)^2 + 2x_2 - \frac{1}{2} = 0 \Rightarrow x_2 = \frac{1}{8} \]

Thus, \[ x(\varepsilon) = 1 - \frac{1}{2}\varepsilon + \frac{1}{8}\varepsilon^2 + \cdots \]

Note: This matches the series expansion up to \(\varepsilon^2\). Higher-order terms can be obtained similarly.
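The truncated series from the exact solution above can be sanity-checked numerically against the closed-form root. This is a quick check I added, not part of the transcribed notes:

```python
import math

eps = 1e-2
# Closed-form positive root of x^2 + eps*x - 1 = 0
exact = -eps / 2 + math.sqrt(1 + eps**2 / 4)

# Two-term and three-term truncations of the series 1 - eps/2 + eps^2/8 - ...
approx2 = 1 - eps / 2
approx3 = 1 - eps / 2 + eps**2 / 8

print(abs(exact - approx2))  # error ~ eps^2 / 8
print(abs(exact - approx3))  # error ~ eps^4 / 128 (no eps^3 term in the series)
```

Adding the \(\varepsilon^2\) term shrinks the error from roughly \(\varepsilon^2/8\) to roughly \(\varepsilon^4/128\), consistent with the expansion.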


(2) Iterative Method (Fixed Point)

Given \(f(x) = 0 \Leftrightarrow x = g(x)\)

Iterative scheme: \[ x_{n+1} = g(x_n) \]

Recall: Converges when \(x \in [a,b]\) and \(|g'(x)| \leq L < 1\)

For our example: \[ x = \sqrt{1 - \varepsilon x} = g(x) \Rightarrow |g'(x)| = \left| \frac{\varepsilon}{2} \cdot \frac{1}{\sqrt{1 - \varepsilon x}} \right| < 1 \]

This implies convergence (for small \(\varepsilon > 0\)) whenever \(x < \frac{1}{\varepsilon} - \frac{\varepsilon}{4}\), which is easily satisfied near \(x = 1\).

Choose initial guess \(x_0 = 1\):

  • \(x_1 = \sqrt{1 - \varepsilon x_0} = \sqrt{1 - \varepsilon} \approx 1 - \frac{1}{2}\varepsilon - \frac{1}{8}\varepsilon^2\)
  • \(x_2 = 1 - \frac{1}{2}\varepsilon + \frac{1}{8}\varepsilon^2 + O(\varepsilon^3)\)
  • \(x_3 = 1 - \frac{1}{2}\varepsilon + \frac{1}{8}\varepsilon^2 - \frac{1}{128}\varepsilon^4 + \cdots\)

Observation: Each iteration adds one more accurate term in the perturbation series.

Expanding \(x_1\) fully: \[ \sqrt{1 - \varepsilon} = 1 - \frac{1}{2}\varepsilon - \frac{1}{8}\varepsilon^2 - \frac{1}{16}\varepsilon^3 + \cdots \]

So:

  • \(x_1\): accurate up to \(\varepsilon\), error at \(\varepsilon^2\)
  • \(x_2\): accurate up to \(\varepsilon^2\), error at \(\varepsilon^3\)
  • …

Key Insight: For each iteration, we gain one more accurate term in the asymptotic expansion.
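To see the one-more-accurate-term-per-iteration behavior concretely, here is a small numerical sketch of the fixed-point scheme (my addition, not part of the transcribed notes):

```python
import math

eps = 0.1
# Exact positive root of x^2 + eps*x - 1 = 0
exact = -eps / 2 + math.sqrt(1 + eps**2 / 4)

x = 1.0  # initial guess x0 = 1
errors = []
for _ in range(6):
    x = math.sqrt(1 - eps * x)  # x_{n+1} = g(x_n) = sqrt(1 - eps * x_n)
    errors.append(abs(x - exact))

print(errors)  # each error is roughly eps/2 times the previous one
```

Since \(|g'| \approx \varepsilon/2\) near the root, each iteration reduces the error by about that factor, mirroring the term-by-term gain in the perturbation series.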


Example 2: Singular Perturbation

Equation: \[ \varepsilon x^2 + x - 1 = 0 \]

True solution: \[ x = -\frac{1}{2\varepsilon} \pm \frac{\sqrt{1 + 4\varepsilon}}{2\varepsilon} = \begin{cases} 1 - \varepsilon + 2\varepsilon^2 - 5\varepsilon^3 + \cdots \\ -\frac{1}{\varepsilon} - 1 + \varepsilon - 2\varepsilon^2 + 5\varepsilon^3 + \cdots \end{cases} \]

At \(\varepsilon = 0\): \(x = 1\)

But note: The second branch has \(\mathcal{O}(1/\varepsilon)\) — this is a singular perturbation problem.
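The expansion of the regular branch can be checked numerically as well (again my addition):

```python
import math

eps = 0.01
# Regular root of eps*x^2 + x - 1 = 0
x_regular = (-1 + math.sqrt(1 + 4 * eps)) / (2 * eps)

# Four-term truncation of the series for the regular branch
series = 1 - eps + 2 * eps**2 - 5 * eps**3

print(abs(x_regular - series))  # residual of order eps^4
```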


(1) Iterative Method Attempt

From: \[ \varepsilon x^2 + x - 1 = 0 \Rightarrow x^2 = \frac{1 - x}{\varepsilon} \Rightarrow x = \frac{1 - x}{\varepsilon x} = \frac{1}{\varepsilon x} - \frac{1}{\varepsilon} = g(x) \]

Then: \[ |g'(x)| = \left| \frac{1}{\varepsilon} \cdot \left( -\frac{1}{x^2} \right) \right| = \frac{1}{\varepsilon x^2} \Rightarrow |g'(x)| < 1 \iff |x| > \frac{1}{\sqrt{\varepsilon}} \]
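As a numerical aside (my addition, not from the notes), the fixed-point map does converge to the singular root when started in the region \(|x| > 1/\sqrt{\varepsilon}\), since \(|g'| \approx \varepsilon\) there:

```python
import math

eps = 0.01
# Both roots of eps*x^2 + x - 1 = 0
x_regular = (-1 + math.sqrt(1 + 4 * eps)) / (2 * eps)
x_singular = (-1 - math.sqrt(1 + 4 * eps)) / (2 * eps)

x = -1 / eps  # start near the singular root, well inside |x| > 1/sqrt(eps)
for _ in range(10):
    x = 1 / (eps * x) - 1 / eps  # x_{n+1} = g(x_n)

print(x, x_singular)  # the iterate settles on the O(1/eps) branch
```

Started near \(x = 1\) instead, the contraction condition \(|x| > 1/\sqrt{\varepsilon}\) fails, which is why this particular \(g\) cannot recover the regular branch.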


Summary

| Method | Applicability | Notes |
| --- | --- | --- |
| Expansion Method | Regular perturbations | Assumes the solution is analytic in \(\varepsilon\); expands around a known limit |
| Iterative Method | Fixed-point convergence | Requires \(\lvert g'(x) \rvert \leq L < 1\); converges slowly but builds accuracy iteratively |
| Singular Case | When \(\varepsilon \to 0\) causes breakdown | Standard methods fail; needs multiple scales |

🔍 Key Takeaway:
- Regular perturbation works well when \(x(\varepsilon)\) remains bounded as \(\varepsilon \to 0\).
- Singular perturbations involve boundary layers or non-uniform behavior — require advanced techniques.
- Iterative methods are useful numerically and can recover perturbation series term-by-term.


Let me know if you’d like this exported as a .md file or enhanced with LaTeX rendering!