## N
**Newton's Method**

Newton's Method is a second-order optimization algorithm that uses both gradient and Hessian information. The update rule is xₖ₊₁ = xₖ - [∇²f(xₖ)]⁻¹∇f(xₖ), where ∇²f is the Hessian matrix of second derivatives. Newton's method has quadratic convergence near the optimum but requires computing and inverting the Hessian, which is expensive for high-dimensional problems. Quasi-Newton methods (BFGS, L-BFGS) approximate the Hessian to reduce computational cost while maintaining superlinear convergence. Applications include: nonlinear equations, unconstrained optimization, and interior point methods for constrained optimization.
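In one dimension the update rule reduces to xₖ₊₁ = xₖ - f′(xₖ)/f″(xₖ). A minimal sketch, assuming a strictly convex test function f(x) = x² + eˣ chosen here for illustration:

```python
import math

def newton_minimize(fprime, fsecond, x0, tol=1e-10, max_iter=50):
    """1-D Newton's method: x_{k+1} = x_k - f'(x_k) / f''(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = fprime(x) / fsecond(x)
        x -= step
        if abs(step) < tol:  # stop once the Newton step is negligible
            break
    return x

# Illustrative convex function: f(x) = x^2 + e^x, so f'' > 0 everywhere.
fp = lambda x: 2 * x + math.exp(x)
fpp = lambda x: 2 + math.exp(x)
xstar = newton_minimize(fp, fpp, x0=0.0)  # f'(xstar) is driven to ~0
```

Because the step uses curvature information, each iteration roughly squares the error once the iterate is near the minimizer, which is the quadratic convergence mentioned above.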
**Nonlinear Programming (NLP)**

Nonlinear Programming deals with optimization problems where the objective function or constraints are nonlinear. General form: minimize f(x) subject to g(x) ≤ 0, h(x) = 0. Challenges include: multiple local optima, non-convexity, and lack of closed-form solutions. Solution approaches: gradient-based methods (Newton, quasi-Newton, SQP), penalty and barrier methods, trust region methods, and global optimization techniques. Optimality conditions: KKT conditions (necessary for local optima), second-order conditions (sufficient). Special cases: convex programming (tractable), quadratic programming. Applications span: engineering design, chemical process optimization, parameter estimation, and machine learning. Commercial solvers: SNOPT, IPOPT, KNITRO. Modern research focuses on global optimization and mixed-integer nonlinear programming (MINLP).
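One of the penalty methods listed above can be sketched in a few lines: the quadratic penalty method replaces the constrained problem with a sequence of unconstrained ones, minimize f(x) + μ·h(x)², for growing μ. The problem below (minimize (x-1)² + (y-2)² subject to x + y = 2) and the fixed step-size schedule are illustrative assumptions, not part of the original entry:

```python
def penalty_method(f_grad, h, h_grad, x0, mus=(1.0, 10.0, 100.0), iters=20000):
    """Quadratic penalty sketch: minimize f(x) + mu * h(x)^2 for growing mu,
    solving each subproblem with plain gradient descent."""
    x = list(x0)
    for mu in mus:
        lr = 1.0 / (2.0 + 4.0 * mu)  # safe step for this quadratic example
        for _ in range(iters):
            # Gradient of the penalized objective: grad f + 2*mu*h(x)*grad h
            g = [fg + 2.0 * mu * h(x) * hg
                 for fg, hg in zip(f_grad(x), h_grad(x))]
            x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Illustrative problem: min (x-1)^2 + (y-2)^2  s.t.  x + y = 2.
# The exact constrained minimizer is (0.5, 1.5).
f_grad = lambda x: [2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)]
h      = lambda x: x[0] + x[1] - 2.0
h_grad = lambda x: [1.0, 1.0]
sol = penalty_method(f_grad, h, h_grad, [0.0, 0.0])
```

As μ grows, the minimizer of the penalized objective approaches the constrained optimum, but the subproblems become ill-conditioned, which is why practical solvers such as IPOPT use more sophisticated barrier and interior-point machinery instead.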
**NP-Hardness**

NP-Hardness is a computational complexity classification indicating a problem is at least as hard as the hardest problems in NP (nondeterministic polynomial time). An NP-hard problem has no known polynomial-time algorithm, and finding one would prove P=NP. Many optimization problems are NP-hard: traveling salesman, knapsack, graph coloring, scheduling, facility location. Implications: (1) exact algorithms likely require exponential time in the worst case; (2) practical approaches use approximation algorithms, heuristics, or metaheuristics; (3) special cases may be tractable. Recognizing NP-hardness guides solution strategy selection, ruling out exact methods for large instances and justifying heuristic approaches. Reduction proofs establish NP-hardness by transforming a known NP-hard problem into the problem in question in polynomial time.
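The exponential cost of exact solution is easy to see for the traveling salesman problem mentioned above: brute force must examine (n-1)! tours, so each added city multiplies the work. A small sketch on a made-up 4-city instance:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exact TSP by enumerating all (n-1)! tours starting at city 0.
    Exponential time: illustrates why exact methods fail at scale."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)  # close the cycle back at city 0
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Toy symmetric distance matrix (values chosen only for illustration).
dist = [[0, 1, 4, 3],
        [1, 0, 2, 5],
        [4, 2, 0, 1],
        [3, 5, 1, 0]]
best_len, best_tour = tsp_brute_force(dist)
```

At n = 4 this checks 6 tours; at n = 20 it would check 19! ≈ 1.2 × 10¹⁷, which is the practical wall that motivates the heuristic and approximation approaches listed above.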