Browse the glossary using this index


N

Network Flow Problems

Network Flow Problems involve finding optimal flows through networks represented as directed graphs with arc capacities. Types include: (1) Maximum Flow - find maximum flow from source to sink respecting arc capacities (solved by Ford-Fulkerson, Edmonds-Karp, or Push-Relabel algorithms); (2) Minimum Cost Flow - route flow at minimum cost satisfying supply/demand constraints (solved by network simplex); (3) Multi-commodity Flow - handle multiple commodities with shared arc capacities. Applications span: transportation networks, telecommunications, supply chain management, and project scheduling. The max-flow min-cut theorem establishes duality between maximum flow and minimum cut capacity.
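The maximum-flow case above can be sketched with the Edmonds-Karp algorithm (Ford-Fulkerson with BFS augmenting paths). This is a minimal illustration, not a production implementation; the dict-of-dicts capacity representation is an assumption for compactness:

```python
from collections import deque

def edmonds_karp(capacity, source, sink):
    """Maximum flow via shortest augmenting paths (Edmonds-Karp).
    capacity: dict of dicts, capacity[u][v] = capacity of arc u -> v."""
    # Build residual capacities, including zero-capacity reverse arcs.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: flow is maximum
        # Find the bottleneck capacity along the path, then augment.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck
```

For example, `edmonds_karp({'s': {'a': 3, 'b': 2}, 'a': {'t': 2}, 'b': {'t': 3}, 't': {}}, 's', 't')` returns 4, and by the max-flow min-cut theorem the arcs `a→t` and `s→b` form a minimum cut of the same capacity.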

Link to the article: Network Flow Problems

Newton's Method

Newton's Method is a second-order optimization algorithm that uses both gradient and Hessian information. The update rule is: xₖ₊₁ = xₖ - [∇²f(xₖ)]⁻¹∇f(xₖ), where ∇²f is the Hessian matrix of second derivatives. Newton's method has quadratic convergence near the optimum but requires computing and inverting the Hessian, which is expensive for high-dimensional problems. Quasi-Newton methods (BFGS, L-BFGS) approximate the Hessian to reduce computational cost while maintaining superlinear convergence. Applications include: nonlinear equations, unconstrained optimization, and interior point methods for constrained optimization.
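The update rule above can be demonstrated in a few lines. As a sketch (the test function f(x, y) = x² + y² + eˣ is an illustrative assumption), note that the code solves the linear system H·d = ∇f rather than explicitly inverting the Hessian, which is the standard numerically preferable choice:

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
    """Pure Newton's method: x_{k+1} = x_k - [H(x_k)]^{-1} grad(x_k).
    Solves H d = g instead of forming the inverse explicitly."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)  # Newton step
    return x

# Convex test function f(x, y) = x^2 + y^2 + exp(x).
grad = lambda v: np.array([2 * v[0] + np.exp(v[0]), 2 * v[1]])
hess = lambda v: np.array([[2.0 + np.exp(v[0]), 0.0], [0.0, 2.0]])

x_star = newton_minimize(grad, hess, [1.0, 1.0])
# The minimizer satisfies 2x + e^x = 0, i.e. x ≈ -0.351734, y = 0.
```

From the starting point (1, 1), the iterates reach the optimum to full precision in a handful of steps, reflecting the quadratic local convergence described above.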

Link to the article: Newton's Method

Nonlinear Programming (NLP)

Nonlinear Programming deals with optimization problems where the objective function or constraints are nonlinear. General form: minimize f(x) subject to g(x) ≤ 0, h(x) = 0. Challenges include: multiple local optima, non-convexity, and lack of closed-form solutions. Solution approaches: gradient-based methods (Newton, quasi-Newton, SQP), penalty and barrier methods, trust region methods, and global optimization techniques. Optimality conditions: KKT conditions (necessary for local optima), second-order conditions (sufficient). Special cases: convex programming (tractable), quadratic programming. Applications span: engineering design, chemical process optimization, parameter estimation, and machine learning. Commercial solvers: SNOPT, IPOPT, KNITRO. Modern research focuses on: global optimization and mixed-integer nonlinear programming (MINLP).
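A small instance of the general form min f(x) s.t. g(x) ≤ 0 can be solved with SciPy's SLSQP routine, an implementation of the SQP approach mentioned above. The use of SciPy and this particular convex example are assumptions for illustration, not part of the entry:

```python
from scipy.optimize import minimize

# Minimize (x0 - 1)^2 + (x1 - 2.5)^2 subject to three linear
# inequality constraints and the bounds x >= 0.
objective = lambda x: (x[0] - 1) ** 2 + (x[1] - 2.5) ** 2

constraints = [  # SciPy's convention: 'ineq' means fun(x) >= 0
    {"type": "ineq", "fun": lambda x: x[0] - 2 * x[1] + 2},
    {"type": "ineq", "fun": lambda x: -x[0] - 2 * x[1] + 6},
    {"type": "ineq", "fun": lambda x: -x[0] + 2 * x[1] + 2},
]

res = minimize(objective, x0=[2.0, 0.0], method="SLSQP",
               bounds=[(0, None), (0, None)], constraints=constraints)
# Optimum at (1.4, 1.7): the first constraint is active there and the
# KKT conditions hold (objective gradient parallel to its gradient).
```

Because this example is convex, the KKT point found is the global optimum; in a non-convex NLP the same solver would only guarantee a local optimum.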

Link to the article: Nonlinear Programming (NLP)

NP-Hardness

NP-Hardness is a computational complexity classification indicating a problem is at least as hard as the hardest problems in NP (nondeterministic polynomial time). An NP-hard problem has no known polynomial-time algorithm, and finding one would prove P=NP. Many optimization problems are NP-hard: traveling salesman, knapsack, graph coloring, scheduling, facility location. Implications: (1) exact algorithms likely require exponential time for worst cases; (2) practical approaches use approximation algorithms, heuristics, or metaheuristics; (3) special cases may be tractable. Recognizing NP-hardness guides solution strategy selection - ruling out exact methods for large instances and justifying heuristic approaches. Reduction proofs establish NP-hardness by transforming a known NP-hard problem, in polynomial time, into the problem under study.
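The contrast between exact exponential-time methods and polynomial-time heuristics can be made concrete on the traveling salesman problem. This is an illustrative sketch, not a statement about any particular solver; the distance-matrix representation is an assumption:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exact TSP by enumerating all (n-1)! tours from city 0.
    Exponential time, consistent with the problem's NP-hardness."""
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, cost)
    return best

def tsp_nearest_neighbor(dist):
    """Greedy heuristic: polynomial time, no optimality guarantee."""
    n = len(dist)
    unvisited = set(range(1, n))
    cost, current = 0, 0
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist[current][j])
        cost += dist[current][nxt]
        unvisited.remove(nxt)
        current = nxt
    return cost + dist[current][0]  # close the tour
```

On 4 cities the brute-force search checks only 6 tours, but at 20 cities it would need about 1.2 × 10¹⁷, which is why NP-hardness steers practice toward heuristics like the nearest-neighbor sketch above.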

Link to the article: NP-Hardness