
C

Column Generation

Column Generation is a technique for solving large-scale linear programs with many variables (columns) by generating only promising variables as needed. Based on Dantzig-Wolfe decomposition, it alternates between: (1) Restricted Master Problem (RMP) - LP with current subset of columns; (2) Pricing Problem - identifies new columns with negative reduced cost to add to RMP. The process continues until no improving columns exist (optimality). Often combined with branch-and-bound (branch-and-price) for integer programs. Applications: crew scheduling, vehicle routing with many routes, cutting stock problems, and airline scheduling. Effectiveness depends on efficiently solving the pricing problem, which often has special structure (shortest path, knapsack).
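The RMP/pricing alternation can be sketched on a toy cutting-stock instance. A minimal sketch, not production code: it assumes SciPy ≥ 1.7 is available (linprog's HiGHS method exposes dual values as res.ineqlin.marginals), and the roll length, piece sizes, and demands are invented for illustration. The pricing problem here is an unbounded knapsack solved by dynamic programming.

```python
# Column generation sketch for a tiny cutting-stock LP.
# Assumes SciPy >= 1.7 (HiGHS duals via res.ineqlin.marginals); data are illustrative.
from scipy.optimize import linprog

roll_len = 10
sizes = [3, 4, 5]        # piece lengths
demand = [30, 20, 10]    # pieces required

# Initial columns: one single-size pattern per piece type.
patterns = [[roll_len // s if i == j else 0 for i in range(len(sizes))]
            for j, s in enumerate(sizes)]

def solve_rmp(pats):
    """Restricted master problem: min rolls s.t. demands met (LP relaxation)."""
    c = [1.0] * len(pats)
    A_ub = [[-p[i] for p in pats] for i in range(len(sizes))]  # -A x <= -d, i.e. A x >= d
    b_ub = [-d for d in demand]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    duals = [-m for m in res.ineqlin.marginals]  # y_i >= 0, dual of demand row i
    return res.fun, duals

def price(duals):
    """Pricing: unbounded knapsack maximising dual value packed into one roll."""
    best_val = [0.0] * (roll_len + 1)
    best_pat = [[0] * len(sizes) for _ in range(roll_len + 1)]
    for cap in range(1, roll_len + 1):
        best_val[cap], best_pat[cap] = best_val[cap - 1], best_pat[cap - 1][:]
        for i, s in enumerate(sizes):
            if s <= cap and best_val[cap - s] + duals[i] > best_val[cap]:
                best_val[cap] = best_val[cap - s] + duals[i]
                best_pat[cap] = best_pat[cap - s][:]
                best_pat[cap][i] += 1
    return best_val[roll_len], best_pat[roll_len]

while True:
    obj, duals = solve_rmp(patterns)
    value, pattern = price(duals)
    if value <= 1 + 1e-6:        # reduced cost 1 - value >= 0: LP optimal
        break
    patterns.append(pattern)     # add the improving column

print(f"LP optimum: {obj:.2f} rolls using {len(patterns)} patterns")
```

The loop terminates when the best pattern the pricing knapsack can build has reduced cost ≥ 0, certifying LP optimality of the restricted master.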


Column Generation for Cutting Stock

The Cutting Stock Problem minimizes waste when cutting raw materials (rolls, sheets) into smaller pieces. The classical pattern-based formulation has exponentially many variables (one per cutting pattern). Column generation approach: (1) Master Problem - selects patterns minimizing total rolls used; (2) Pricing Subproblem - finds new pattern (knapsack problem) with negative reduced cost. The algorithm generates only needed patterns iteratively. The Gilmore-Gomory method applies this framework to solve large-scale instances. Extensions handle: multiple stock sizes, two-dimensional cutting, and trim loss minimization. Applications: paper industry, steel manufacturing, glass cutting, and textile production. This problem pioneered column generation methodology, demonstrating effectiveness for problems with many variables.
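To see why the pattern formulation explodes in size, one can enumerate all patterns for a toy instance and solve the master problem by brute force. The numbers below are invented for illustration; real instances have far too many patterns for enumeration, which is exactly why column generation is used.

```python
# Enumerating cutting patterns for a toy instance (illustrative only).
from itertools import combinations_with_replacement

roll_len = 7
sizes = [2, 3]    # piece lengths
demand = [4, 3]   # pieces required

def all_patterns(left, idx=0):
    """Recursively enumerate every feasible pattern (counts per piece size)."""
    if idx == len(sizes):
        return [[]]
    return [[k] + rest
            for k in range(left // sizes[idx] + 1)
            for rest in all_patterns(left - k * sizes[idx], idx + 1)]

def waste(p):
    return roll_len - sum(k * s for k, s in zip(p, sizes))

feasible = all_patterns(roll_len)
# Keep only maximal patterns: no further piece fits into the leftover.
maximal = [p for p in feasible if all(waste(p) < s for s in sizes)]

def min_rolls(max_rolls=10):
    """Brute-force master problem: fewest rolls whose patterns cover demand."""
    for n in range(1, max_rolls + 1):
        for combo in combinations_with_replacement(maximal, n):
            totals = [sum(p[i] for p in combo) for i in range(len(sizes))]
            if all(t >= d for t, d in zip(totals, demand)):
                return n
    return None

print(len(feasible), len(maximal), min_rolls())
```

Even this tiny instance has more feasible patterns than piece types; pattern counts grow combinatorially with roll length and the number of sizes.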


Conjugate Gradient Method

The Conjugate Gradient method is an efficient algorithm for solving linear systems Ax = b where A is symmetric positive definite, and for unconstrained optimization of quadratic functions. Instead of steepest descent directions, it uses conjugate directions that ensure each iteration makes progress independent of previous iterations. For n-dimensional problems, it theoretically converges in at most n iterations (without roundoff errors). In practice, it's used iteratively for large sparse systems. Nonlinear conjugate gradient extends this to general smooth functions, commonly used in large-scale optimization and machine learning.
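The iteration above can be written in a few lines. A minimal pure-Python sketch for a small dense SPD system (production use would rely on a linear algebra library and handle preconditioning):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive definite A (lists of lists)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]            # residual b - A x, with x = 0
    p = r[:]            # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter or n):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))   # step length
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        # new direction is A-conjugate to all previous ones
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

For this 2×2 system the method reaches the exact solution (1/11, 7/11) in two iterations, matching the at-most-n-iterations guarantee.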


Constraint Satisfaction Problem (CSP)

A Constraint Satisfaction Problem consists of: (1) a set of variables X = {X₁, X₂, ..., Xₙ}; (2) a domain D for each variable specifying possible values; (3) a set of constraints C restricting variable combinations. The goal is finding an assignment of values to variables satisfying all constraints. Solution techniques include: backtracking search, forward checking, arc consistency algorithms (AC-3), constraint propagation, and variable/value ordering heuristics (most constrained variable, least constraining value). CSPs model: scheduling, resource allocation, configuration problems, graph coloring, and Sudoku. When optimization is needed, CSPs extend to Constraint Optimization Problems (COPs).
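A basic backtracking search is easy to sketch. The example below colors the mainland Australian states with three colors (a standard textbook CSP); it uses plain chronological backtracking without the forward checking or ordering heuristics mentioned above.

```python
# Map-colouring CSP solved by plain backtracking search.
neighbors = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}
domains = {v: ["red", "green", "blue"] for v in neighbors}

def backtrack(assignment):
    if len(assignment) == len(neighbors):
        return assignment                      # all variables assigned
    var = next(v for v in neighbors if v not in assignment)
    for value in domains[var]:
        # binary constraint: adjacent regions must differ in colour
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[var]                # undo and try the next value
    return None

solution = backtrack({})
print(solution)
```

The same skeleton extends to forward checking (pruning neighbor domains after each assignment) and to the variable/value ordering heuristics listed above.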


Constraints

Constraints are mathematical conditions that feasible solutions must satisfy. They represent: physical limitations (capacity, time), resource availability, logical requirements, quality standards, or policy rules. Types include: (1) Equality constraints: h(x) = 0; (2) Inequality constraints: g(x) ≤ 0 or g(x) ≥ 0; (3) Box constraints: lower ≤ x ≤ upper; (4) Integer/binary restrictions. Constraints define the feasible region. Active (binding) constraints at optimal solution have zero slack. Constraint formulation impacts solvability - tighter formulations with stronger linear programming relaxations improve performance. Techniques: constraint aggregation, valid inequalities, logical implications, and big-M formulations for conditional constraints.
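The notion of active (binding) constraints can be checked numerically. A small sketch with made-up constraints in the standard form g(x) ≤ 0, where slack = -g(x) and a constraint is active when its slack is (numerically) zero:

```python
# Identifying active inequality constraints at a candidate point.
# Constraints in standard form g(x) <= 0; names and data are illustrative.
constraints = {
    "capacity":  lambda x: x[0] + 2 * x[1] - 10,   # x1 + 2 x2 <= 10
    "budget":    lambda x: 3 * x[0] + x[1] - 15,   # 3 x1 + x2 <= 15
    "nonneg_x1": lambda x: -x[0],                  # x1 >= 0
}

def active_set(x, tol=1e-8):
    """Return the names of constraints with (numerically) zero slack at x."""
    slack = {name: -g(x) for name, g in constraints.items()}
    return {name for name, s in slack.items() if abs(s) <= tol}

x_star = (4.0, 3.0)   # candidate solution: lies on capacity and budget
print(active_set(x_star))
```

At (4, 3) both resource constraints hold with equality (zero slack) while the nonnegativity bound is slack, so only the first two are binding.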


Convex Programming

Convex Programming involves optimizing a convex objective function over a convex feasible set. A function f is convex if f(λx + (1-λ)y) ≤ λf(x) + (1-λ)f(y) for all x, y and λ ∈ [0,1]. Key properties: any local minimum is a global minimum; first-order optimality conditions (KKT conditions) are sufficient; numerous efficient algorithms exist. Applications include: portfolio optimization, machine learning (SVM, logistic regression), signal processing, control theory, and network flow problems. Convex programming bridges linear and nonlinear optimization with computational tractability.
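The defining inequality can be tested numerically on sample points. This is only a cheap sanity check on finitely many points, not a proof of convexity:

```python
# Spot-checking the convexity inequality f(lx + (1-l)y) <= l f(x) + (1-l) f(y)
# on a grid of sample points and mixing weights.
def convex_on_samples(f, points, lams=(0.25, 0.5, 0.75)):
    return all(
        f(l * x + (1 - l) * y) <= l * f(x) + (1 - l) * f(y) + 1e-12
        for x in points for y in points for l in lams
    )

pts = [-2.0, -1.0, 0.0, 1.0, 2.0]
print(convex_on_samples(lambda x: x * x, pts))    # quadratic: inequality holds
print(convex_on_samples(lambda x: -x * x, pts))   # concave: inequality fails
```

For x², every chord lies on or above the graph, so the check passes; for -x² the chord between -1 and 1 lies below the graph at 0, so it fails.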


CPLEX

IBM ILOG CPLEX is a high-performance mathematical programming solver for linear, mixed-integer, quadratic, and constraint programming. CPLEX employs state-of-the-art algorithms: dual simplex, barrier method for LP; branch-and-cut for MIP; specialized algorithms for network flows and quadratic programs. Advanced features: automatic problem preprocessing, parallel optimization, solution pool for multiple optima, conflict analysis for infeasible models, and tuning tool for parameter optimization. CPLEX interfaces with modeling languages (AMPL, GAMS), programming languages (C++, Java, Python), and provides Concert Technology API for embedded applications. Industry standard for large-scale optimization in logistics, finance, manufacturing, and telecommunications.
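Models can be supplied to CPLEX in its LP file format, among other ways. A tiny made-up example of that format (the model itself is invented for illustration):

```
\ Tiny production model in CPLEX LP file format
Maximize
 obj: 3 x + 2 y
Subject To
 c1: x + y <= 4
 c2: x + 3 y <= 6
Bounds
 0 <= x
 0 <= y
End
```

Such a file can be read and solved from the CPLEX interactive optimizer or through any of the programming-language APIs mentioned above.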


Critical Path Method (CPM)

CPM is a deterministic project scheduling algorithm that identifies the critical path - the longest sequence of dependent activities determining minimum project duration. Activities on the critical path have zero total float (slack), meaning any delay directly impacts project completion. CPM calculates: forward pass (earliest start/finish times), backward pass (latest start/finish times), and float (total, free, and independent). The method supports: crashing (reducing activity durations), resource leveling, and schedule optimization. CPM is widely used in construction, manufacturing, and software development projects.
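The forward/backward pass calculation fits in a short script. A minimal sketch on an invented four-activity project (activities must be listed in a topological order for the single-pass loops to work):

```python
# CPM forward and backward passes on a tiny illustrative project.
durations = {"A": 3, "B": 2, "C": 4, "D": 2}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

# Forward pass: earliest start/finish (dict order here is topological).
es, ef = {}, {}
for a in durations:
    es[a] = max((ef[p] for p in preds[a]), default=0)
    ef[a] = es[a] + durations[a]
project = max(ef.values())        # minimum project duration

# Backward pass: latest start/finish.
succs = {a: [b for b in durations if a in preds[b]] for a in durations}
ls, lf = {}, {}
for a in reversed(list(durations)):
    lf[a] = min((ls[s] for s in succs[a]), default=project)
    ls[a] = lf[a] - durations[a]

total_float = {a: ls[a] - es[a] for a in durations}
critical = [a for a in durations if total_float[a] == 0]
print(project, critical)          # duration and critical path
```

Here the critical path is A → C → D with duration 9, and B has two units of total float: it can slip by two time units without delaying completion.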


Cutting Plane Methods

Cutting Plane Methods solve integer programs by iteratively adding linear inequalities (cuts) that tighten the LP relaxation without removing integer feasible solutions. Starting with LP relaxation, if the solution is fractional: (1) identify violated valid inequalities; (2) add cuts to strengthen formulation; (3) resolve; (4) repeat until integer solution found or proven optimal. Types of cuts: Gomory cuts (derived from simplex tableau), cover inequalities, clique cuts, and problem-specific cuts. Pure cutting plane methods rarely work alone but are integrated into branch-and-cut algorithms - combining branching with cutting planes. Modern solvers (CPLEX, Gurobi) automatically generate various cut types for improved performance.
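A cover inequality gives a concrete example of a valid cut. For the invented knapsack constraint 3x₁ + 4x₂ + 5x₃ ≤ 8 with binary variables, {2, 3} is a cover (4 + 5 > 8), so x₂ + x₃ ≤ 1 is valid; the check below confirms it removes no integer-feasible point while cutting off a fractional LP point:

```python
# Cover inequality for a toy binary knapsack constraint (illustrative data).
from itertools import product

weights, cap = [3, 4, 5], 8

def feasible(x):
    """Original knapsack constraint 3 x1 + 4 x2 + 5 x3 <= 8."""
    return sum(w * xi for w, xi in zip(weights, x)) <= cap

def cut(x):
    """Cover inequality x2 + x3 <= 1, from the cover {2, 3} (4 + 5 > 8)."""
    return x[1] + x[2] <= 1 + 1e-9

# Validity: no integer-feasible point violates the cut.
assert all(cut(x) for x in product([0, 1], repeat=3) if feasible(x))

# But the cut does separate a fractional LP-relaxation point.
frac = (1, 1, 0.2)
print(feasible(frac), cut(frac))   # True False
```

This is exactly what a cut should do: leave every integer solution feasible while tightening the LP relaxation around them.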
