## QM350: Key Terms

Chapter 1

1. Problem solving: The process of identifying a difference between the actual and the desired state of affairs and then taking action to resolve the difference.
2. Decision making: The process of defining the problem, identifying the alternatives, determining the criteria, evaluating the alternatives, and choosing an alternative.
3. Single-criterion decision problem: A problem in which the objective is to find the “best” solution with respect to just one criterion.
4. Multicriteria decision problem: A problem that involves more than one criterion; the objective is to find the “best” solution, taking into account all the criteria.
5. Decision: The alternative selected.
6. Model: A representation of a real object or situation.
7. Iconic model: A physical replica, or representation, of a real object.
8. Analog model: Although physical in form, an analog model does not have a physical appearance similar to the real object or situation it represents.
9. Mathematical model: Mathematical symbols and expressions used to represent a real situation.
10. Constraints: Restrictions or limitations imposed on a problem.
11. Objective function: A mathematical expression that describes the problem’s objective.
12. Uncontrollable inputs: The environmental factors or inputs that cannot be controlled by the decision maker.
13. Controllable inputs: The inputs that are controlled or determined by the decision maker.
14. Decision variable: Another term for controllable input.
15. Deterministic model: A model in which all uncontrollable inputs are known and cannot vary.
16. Stochastic (probabilistic) model: A model in which at least one uncontrollable input is uncertain and subject to variation; stochastic models are also referred to as probabilistic models.
17. Optimal solution: The specific decision-variable value or values that provide the “best” output for the model.
18. Infeasible solution: A decision alternative or solution that does not satisfy one or more constraints.
19. Feasible solution: A decision alternative or solution that satisfies all constraints.
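The Chapter 1 terms fit together in any small model. As a minimal sketch, consider a hypothetical one-product firm (all numbers are illustrative, not from the text): the decision variable is the controllable input, the per-unit profit and labor data are uncontrollable inputs, and because those inputs are known constants the model is deterministic.

```python
# Hypothetical deterministic model: a firm chooses x, the units to produce.
PROFIT_PER_UNIT = 10   # uncontrollable input (known constant)
HOURS_PER_UNIT = 5     # uncontrollable input (known constant)
HOURS_AVAILABLE = 40   # uncontrollable input: a constraint's right-hand side

def objective(x):
    """Objective function: total profit, to be maximized."""
    return PROFIT_PER_UNIT * x

def is_feasible(x):
    """Constraint: labor used cannot exceed labor available; x is nonnegative.
    A value of x violating either condition is an infeasible solution."""
    return x >= 0 and HOURS_PER_UNIT * x <= HOURS_AVAILABLE

# Enumerate integer production levels and keep the best feasible one:
# the optimal solution is the decision-variable value with the best output.
candidates = [x for x in range(0, 100) if is_feasible(x)]
best = max(candidates, key=objective)
print(best, objective(best))  # x = 8 uses all 40 hours and earns profit 80
```

Changing `HOURS_PER_UNIT` to an uncertain quantity (say, a random draw) would turn this into a stochastic (probabilistic) model.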

Chapter 2

1. Constraint: An equation or inequality that rules out certain combinations of decision variables as feasible solutions.
2. Problem formulation: The process of translating the verbal statement of a problem into a mathematical statement called the mathematical model.
3. Decision variable: A controllable input for a linear programming model.
4. Nonnegativity constraints: A set of constraints that requires all variables to be nonnegative.
5. Mathematical model: A representation of a problem where the objective and all constraint conditions are described by mathematical expressions.
6. Linear programming model: A mathematical model with a linear objective function, a set of linear constraints, and nonnegative variables.
7. Linear program: Another term for linear programming model.
8. Linear functions: Mathematical expressions in which the variables appear in separate terms and are raised to the first power.
9. Feasible solution: A solution that satisfies all the constraints.
10. Feasible region: The set of all feasible solutions.
11. Slack variable: A variable added to the left-hand side of a less-than-or-equal-to constraint to convert the constraint into an equality. The value of this variable can usually be interpreted as the amount of unused resource.
12. Standard form: A linear program in which all the constraints are written as equalities. The optimal solution of the standard form of a linear program is the same as the optimal solution of the original formulation of the linear program.
13. Redundant constraint: A constraint that does not affect the feasible region. If a constraint is redundant, it can be removed from the problem without affecting the feasible region.
14. Extreme point: Graphically speaking, extreme points are the feasible solution points occurring at the vertices or “corners” of the feasible region. With two-variable problems, extreme points are determined by the intersection of the constraint lines.
15. Surplus variable: A variable subtracted from the left-hand side of a greater-than-or-equal-to constraint to convert the constraint into an equality. The value of this variable can usually be interpreted as the amount over and above some required minimum level.
16. Alternative optimal solutions: The case in which more than one solution provides the optimal value for the objective function.
17. Infeasibility: The situation in which no solution to the linear programming problem satisfies all the constraints.
18. Unbounded: If the value of the solution may be made infinitely large in a maximization linear programming problem or infinitely small in a minimization problem without violating any of the constraints, the problem is said to be unbounded.
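For a two-variable linear program, the extreme points (term 14) are exactly the feasible intersections of pairs of constraint lines, and an optimal solution occurs at one of them. A minimal sketch, using a made-up example problem (not from the text):

```python
from itertools import combinations

# Illustrative linear program:
#   maximize  5*x1 + 4*x2
#   subject to   x1 +  x2 <= 5     (resource A)
#              2*x1 +  x2 <= 8     (resource B)
#               x1, x2 >= 0        (nonnegativity constraints)
c = (5, 4)                                    # objective function coefficients
constraints = [(1, 1, 5), (2, 1, 8)]          # (a1, a2, rhs): a1*x1 + a2*x2 <= rhs
lines = constraints + [(1, 0, 0), (0, 1, 0)]  # include the axes x1 = 0, x2 = 0

def intersect(l1, l2):
    """Solve the 2x2 system for a pair of constraint lines (Cramer's rule)."""
    (a1, b1, r1), (a2, b2, r2) = l1, l2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None                           # parallel lines never intersect
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(pt):
    """A feasible solution satisfies every constraint, nonnegativity included."""
    x1, x2 = pt
    return x1 >= -1e-9 and x2 >= -1e-9 and all(
        a * x1 + b * x2 <= r + 1e-9 for a, b, r in constraints)

# Extreme points: feasible intersections of pairs of constraint lines.
extreme_points = [p for l1, l2 in combinations(lines, 2)
                  if (p := intersect(l1, l2)) is not None and feasible(p)]
optimal = max(extreme_points, key=lambda p: c[0] * p[0] + c[1] * p[1])
print(optimal)  # (3.0, 2.0), with objective value 5*3 + 4*2 = 23
```

If two extreme points tied for the best objective value, this would be the alternative optimal solutions case (term 16); an empty `extreme_points` list would signal infeasibility (term 17).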

Chapter 4

1. Simplex method: An algebraic procedure for solving linear programming problems. The simplex method uses elementary row operations to iterate from one basic feasible solution (extreme point) to another until the optimal solution is reached.
2. Basic solution: Given a linear program in standard form, with n variables and m constraints, a basic solution is obtained by setting n − m of the variables equal to zero and solving the constraint equations for the values of the other m variables. If a unique solution exists, it is a basic solution.
3. Nonbasic variable: One of the n − m variables set equal to zero in a basic solution.
4. Basic variable: One of the m variables not required to equal zero in a basic solution.
5. Basic feasible solution: A basic solution that is also feasible; that is, it satisfies the nonnegativity constraints. A basic feasible solution corresponds to an extreme point.
6. Tableau form: The form in which a linear program must be written before setting up the initial simplex tableau. When a linear program is written in tableau form, its A matrix contains m unit columns corresponding to the basic variables, and the values of these basic variables are given by the values in the b column. A further requirement is that the entries in the b column be greater than or equal to zero.
7. Simplex tableau: A table used to keep track of the calculations required by the simplex method.
8. Unit column or unit vector: A vector or column of a matrix that has a zero in every position except one. In the nonzero position there is a 1. There is a unit column in the simplex tableau for each basic variable.
9. Basis: The set of variables that are not restricted to equal zero in the current basic solution. The variables that make up the basis are termed basic variables, and the remaining variables are called nonbasic variables.
10. Net evaluation row: The row in the simplex tableau that contains the value of cj − zj for every variable (column).
11. Iteration: The process of moving from one basic feasible solution to another.
12. Pivot element: The element of the simplex tableau that is in both the pivot row and the pivot column.
13. Pivot column: The column in the simplex tableau corresponding to the nonbasic variable that is about to be introduced into solution.
14. Pivot row: The row in the simplex tableau corresponding to the basic variable that will leave the solution.
15. Elementary row operations: Operations that may be performed on a system of simultaneous equations without changing the solution to the system of equations.
16. Artificial variable: A variable that has no physical meaning in terms of the original linear programming problem, but serves merely to enable a basic feasible solution to be created for starting the simplex method. Artificial variables are assigned an objective function coefficient of –M, where M is a very large number.
17. Phase I: When artificial variables are present in the initial simplex tableau, phase I refers to the iterations of the simplex method that are required to eliminate the artificial variables. At the end of phase I, the basic feasible solution in the simplex tableau is also feasible for the real problem.
18. Degeneracy: When one or more of the basic variables has a value of zero.
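The Chapter 4 terms can be traced in a compact implementation. Below is a sketch of the simplex method for a maximization problem whose constraints are all less-than-or-equal-to with nonnegative right-hand sides, so slack variables supply the initial basic feasible solution and no artificial variables are needed. The problem data are illustrative.

```python
# Illustrative problem in standard form after adding slacks s1, s2:
#   maximize 5*x1 + 4*x2   s.t.  x1 + x2 + s1 = 5,  2*x1 + x2 + s2 = 8.
c = [5, 4, 0, 0]             # objective coefficients (slacks contribute 0)
tableau = [                  # each row: coefficients of x1, x2, s1, s2 | b
    [1, 1, 1, 0, 5],
    [2, 1, 0, 1, 8],
]
basis = [2, 3]               # initial basis: the slack variables s1, s2

while True:
    # Net evaluation row: cj - zj for every column.
    zj = [sum(c[basis[i]] * tableau[i][j] for i in range(len(tableau)))
          for j in range(len(c))]
    net = [c[j] - zj[j] for j in range(len(c))]
    if max(net) <= 1e-9:
        break                                  # no improving column: optimal
    pivot_col = net.index(max(net))            # entering nonbasic variable
    # Minimum ratio test selects the pivot row (leaving basic variable).
    ratios = [(row[-1] / row[pivot_col], i)
              for i, row in enumerate(tableau) if row[pivot_col] > 1e-9]
    _, pivot_row = min(ratios)
    # Elementary row operations turn the pivot column into a unit column.
    pivot = tableau[pivot_row][pivot_col]      # the pivot element
    tableau[pivot_row] = [v / pivot for v in tableau[pivot_row]]
    for i in range(len(tableau)):
        if i != pivot_row:
            factor = tableau[i][pivot_col]
            tableau[i] = [v - factor * p
                          for v, p in zip(tableau[i], tableau[pivot_row])]
    basis[pivot_row] = pivot_col               # one iteration is complete

objective = sum(c[basis[i]] * tableau[i][-1] for i in range(len(tableau)))
print(objective)  # 23.0, reached at the basic feasible solution x1 = 3, x2 = 2
```

Each pass of the loop is one iteration (term 11): it moves from one basic feasible solution to an adjacent extreme point until the net evaluation row has no positive entry.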

Chapter 5

1. Range of optimality: The range of values over which an objective function coefficient may vary without causing any change in the optimal solution (i.e., the values of all the variables will remain the same, but the value of the objective function may change).
2. Dual price: The improvement in value of the optimal solution per unit increase in a constraint’s right-hand-side value.
3. Range of feasibility: The range of values over which a right-hand-side value bi may vary without causing the current basic solution to become infeasible. The values of the variables in the solution will change, but the same variables will remain basic. The dual prices for constraints do not change within these ranges.
4. Dual problem: A linear programming problem related to the primal problem. Solution of the dual also provides the solution to the primal.
5. Primal problem: The original formulation of a linear programming problem.
6. Canonical form for a maximization problem: A maximization problem with all less-than-or-equal-to constraints and nonnegativity requirements for the decision variables.
7. Canonical form for a minimization problem: A minimization problem with all greater-than-or-equal-to constraints and nonnegativity requirements for the decision variables.
8. Dual variable: The variable in a dual linear programming problem. Its optimal value provides the dual price for the associated primal resource.
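For a primal in canonical form for a maximization problem (max c'x, Ax <= b, x >= 0), the dual is formed by transposing A and swapping the roles of b and c, giving min b'u, A'u >= c, u >= 0. A small sketch with made-up numbers, checking that a feasible primal value never exceeds a feasible dual value:

```python
# Illustrative primal: maximize 5*x1 + 4*x2
#                      s.t. x1 + x2 <= 5,  2*x1 + x2 <= 8,  x >= 0.
A = [[1, 1],
     [2, 1]]
b = [5, 8]
c = [5, 4]

# Dual problem: transpose A; b becomes the objective, c the right-hand sides.
A_dual = [list(col) for col in zip(*A)]   # A transposed
c_dual = b                                # dual objective coefficients
b_dual = c                                # dual right-hand sides

x = [3, 2]   # primal-feasible: 3+2 = 5 <= 5 and 2*3+2 = 8 <= 8
u = [3, 1]   # dual-feasible:   1*3+2*1 = 5 >= 5 and 1*3+1*1 = 4 >= 4
primal_value = sum(ci * xi for ci, xi in zip(c, x))   # 5*3 + 4*2 = 23
dual_value = sum(bi * ui for bi, ui in zip(b, u))     # 5*3 + 8*1 = 23
print(primal_value, dual_value)  # equal, so both solutions are optimal
```

The optimal dual variables u = (3, 1) are the dual prices of the two primal constraints: relaxing the first right-hand side by one unit improves the optimal primal value by 3.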

Chapter 6

1. Simulation: A method for learning about a real system by experimenting with a model that represents the system.
2. Simulation experiment: The generation of a sample of values for the probabilistic inputs of a simulation model and computing the resulting values of the model outputs.
3. Controllable input: Input to a simulation model that is selected by the decision maker.
4. Probabilistic input: Input to a simulation model that is subject to uncertainty. A probabilistic input is described by a probability distribution.
5. Risk analysis: The process of predicting the outcome of a decision in the face of uncertainty.
6. Parameters: Numerical values that appear in the mathematical relationships of a model. Parameters are considered known and remain constant over all trials of a simulation.
7. What-if analysis: A trial-and-error approach to learning about the range of possible outputs for a model. Trial values are chosen for the model inputs (these are the what-ifs) and the value of the output(s) is computed.
8. Base-case scenario: Determining the output given the most likely values for the probabilistic inputs of a model.
9. Worst-case scenario: Determining the output given the worst values that can be expected for the probabilistic inputs of a model.
10. Best-case scenario: Determining the output given the best values that can be expected for the probabilistic inputs of a model.
11. Static simulation model: A simulation model used in situations where the state of the system at one point in time does not affect the state of the system at future points in time. Each trial of the simulation is independent.
12. Dynamic simulation model: A simulation model used in situations where the state of the system affects how the system changes or evolves over time.
13. Event: An instantaneous occurrence that changes the state of the system in a simulation model.
14. Discrete-event simulation model: A simulation model that describes how a system evolves over time by using events that occur at discrete points in time.
15. Verification: The process of determining that a computer program implements a simulation model as it is intended.
16. Validation: The process of determining that a simulation model provides an accurate representation of a real system.
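A static simulation model for risk analysis can be sketched in a few lines. The profit model and all numbers below are hypothetical: price and fixed cost are parameters, while unit cost and demand are probabilistic inputs described by probability distributions.

```python
import random

random.seed(42)              # fixed seed so the experiment is repeatable

PRICE = 249.0                # parameter: held constant over all trials
FIXED_COST = 400_000.0       # parameter

def one_trial():
    """One trial of the simulation experiment: sample the probabilistic
    inputs, then compute the model output (profit)."""
    cost = random.uniform(45.0, 55.0)        # uniform probabilistic input
    demand = random.gauss(15_000, 3_000)     # normal probabilistic input
    return (PRICE - cost) * demand - FIXED_COST

# Trials are independent of one another, so this is a static model.
profits = [one_trial() for _ in range(10_000)]
mean_profit = sum(profits) / len(profits)
prob_loss = sum(p < 0 for p in profits) / len(profits)  # risk of a loss
print(round(mean_profit), round(prob_loss, 3))
```

Rerunning with the probabilistic inputs fixed at their most likely values (cost 50, demand 15,000) would give the base-case scenario; substituting their worst and best plausible values gives the worst- and best-case scenarios.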

Chapter 7

1. Time series: A sequence of observations on a variable measured at successive points in time or over successive periods of time.
2. Time series plot: A graphical presentation of the relationship between time and the time series variable. Time is shown on the horizontal axis and the time series values are shown on the vertical axis.
3. Stationary time series: A time series whose statistical properties are independent of time. For a stationary time series the process generating the data has a constant mean and the variability of the time series is constant over time.
4. Trend pattern: A trend pattern exists if the time series plot shows gradual shifts or movements to relatively higher or lower values over a longer period of time.
5. Seasonal pattern: A seasonal pattern exists if the time series plot exhibits a repeating pattern over successive periods. The successive periods are often one-year intervals, which is where the name seasonal pattern comes from.
6. Cyclical pattern: A cyclical pattern exists if the time series plot shows an alternating sequence of points below and above the trend line lasting more than one year.
7. Forecast error: The difference between the actual time series value and the forecast.
8. Mean absolute error (MAE): The average of the absolute values of the forecast errors.
9. Mean squared error (MSE): The average of the sum of squared forecast errors.
10. Mean absolute percentage error (MAPE): The average of the absolute values of the percentage forecast errors.
11. Moving averages: A forecasting method that uses the average of the most recent k data values in the time series as the forecast for the next period.
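The forecast-accuracy terms above can be computed directly for a moving-average forecast. A short sketch using an illustrative series (the values are made up):

```python
series = [17, 21, 19, 23, 18, 16, 20]   # time series observations
k = 3                                    # moving-average length

# Moving averages: the forecast for period t is the mean of the k most
# recent values, so the first forecast is for period k + 1.
forecasts = [sum(series[t - k:t]) / k for t in range(k, len(series))]
actuals = series[k:]
errors = [a - f for a, f in zip(actuals, forecasts)]   # forecast errors

mae = sum(abs(e) for e in errors) / len(errors)        # mean absolute error
mse = sum(e ** 2 for e in errors) / len(errors)        # mean squared error
mape = 100 * sum(abs(e / a) for e, a in zip(errors, actuals)) / len(errors)
next_forecast = sum(series[-k:]) / k                   # forecast for period 8

print(mae, mse, round(mape, 1), next_forecast)  # 3.0 10.5 16.0 18.0
```

Because MSE squares each error, it penalizes one large miss more heavily than several small ones, which is why MAE and MSE can rank two forecasting methods differently.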

Chapter 8

1. Decision alternatives: Options available to the decision maker.
2. Chance event: An uncertain future event affecting the consequence, or payoff, associated with a decision.
3. Consequence: The result obtained when a decision alternative is chosen and a chance event occurs. A measure of the consequence is often called a payoff.
4. States of nature: The possible outcomes for chance events that affect the payoff associated with a decision alternative.
5. Influence diagram: A graphical device that shows the relationship among decisions, chance events, and consequences for a decision problem.
6. Node: An intersection or junction point of an influence diagram or a decision tree.
7. Decision nodes: Nodes indicating points where a decision is made.
8. Chance nodes: Nodes indicating points where an uncertain event will occur.
9. Consequence nodes: Nodes of an influence diagram indicating points where a payoff will occur.
10. Payoff: A measure of the consequence of a decision such as profit, cost, or time. Each combination of a decision alternative and a state of nature has an associated payoff (consequence).
11. Payoff table: A tabular representation of the payoffs for a decision problem.
12. Decision tree: A graphical representation of the decision problem that shows the sequential nature of the decision-making process.
13. Branch: Lines showing the alternatives from decision nodes and the outcomes from chance nodes.
14. Optimistic approach: An approach to choosing a decision alternative without using probabilities. For a maximization problem, it leads to choosing the decision alternative corresponding to the largest payoff; for a minimization problem, it leads to choosing the decision alternative corresponding to the smallest payoff.
15. Conservative approach: An approach to choosing a decision alternative without using probabilities. For a maximization problem, it leads to choosing the decision alternative that maximizes the minimum payoff; for a minimization problem, it leads to choosing the decision alternative that minimizes the maximum payoff.
16. Minimax regret approach: An approach to choosing a decision alternative without using probabilities. For each alternative, the maximum regret is computed, which leads to choosing the decision alternative that minimizes the maximum regret.
17. Opportunity loss, or regret: The amount of loss (lower profit or higher cost) from not making the best decision for each state of nature.
18. Expected value approach: An approach to choosing a decision alternative based on the expected value of each decision alternative. The recommended decision alternative is the one that provides the best expected value.
19. Expected value (EV): For a chance node, it is the weighted average of the payoffs. The weights are the state-of-nature probabilities.
20. Expected value of perfect information (EVPI): The expected value of information that would tell the decision maker exactly which state of nature is going to occur (i.e., perfect information).
21. Risk analysis: The study of the possible payoffs and probabilities associated with a decision alternative or a decision strategy.
22. Sensitivity analysis: The study of how changes in the probability assessments for the states of nature or changes in the payoffs affect the recommended decision alternative.
23. Risk profile: The probability distribution of the possible payoffs associated with a decision alternative or decision strategy.
24. Prior probabilities: The probabilities of the states of nature prior to obtaining sample information.
25. Sample information: New information obtained through research or experimentation that enables an updating or revision of the state-of-nature probabilities.
26. Posterior (revised) probabilities: The probabilities of the states of nature after revising the prior probabilities based on sample information.
27. Decision strategy: A strategy involving a sequence of decisions and chance outcomes to provide the optimal solution to a decision problem.
28. Expected value of sample information (EVSI): The difference between the expected value of an optimal strategy based on sample information and the “best” expected value without any sample information.
29. Efficiency: The ratio of EVSI to EVPI as a percentage; perfect information is 100% efficient.
30. Bayes’ theorem: A theorem that enables the use of sample information to revise prior probabilities.
31. Conditional probability: The probability of one event given the known outcome of a (possibly) related event.
32. Joint probabilities: The probabilities of both sample information and a particular state of nature occurring simultaneously.
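Several of the Chapter 8 terms come together on a single payoff table. The sketch below uses a hypothetical two-decision, two-state problem (all payoffs and probabilities are made up) to compute the expected value approach, EVPI, and the minimax regret approach.

```python
payoffs = {                      # payoffs[decision][state of nature], profits
    "d1": {"s1": 100, "s2": -20},
    "d2": {"s1": 50,  "s2": 40},
}
prior = {"s1": 0.6, "s2": 0.4}   # prior probabilities of the states of nature
states = list(prior)

# Expected value approach: EV(d) is the probability-weighted average payoff.
ev = {d: sum(prior[s] * payoffs[d][s] for s in states) for d in payoffs}
best_ev = max(ev.values())       # EV(d1) = 52, EV(d2) = 46, so choose d1

# EVPI: expected value with perfect information minus the best EV without it.
ev_with_pi = sum(prior[s] * max(payoffs[d][s] for d in payoffs)
                 for s in states)                    # 0.6*100 + 0.4*40 = 76
evpi = ev_with_pi - best_ev                          # 76 - 52 = 24

# Minimax regret: regret = best payoff for that state minus payoff received;
# choose the decision whose maximum regret is smallest.
regret = {d: max(max(payoffs[e][s] for e in payoffs) - payoffs[d][s]
                 for s in states) for d in payoffs}
minimax_choice = min(regret, key=regret.get)         # d2 (max regret 50 vs. 60)

print(best_ev, evpi, minimax_choice)
```

EVPI (24 here) is an upper bound on what sample information can be worth: for any survey or test, EVSI <= EVPI, and the efficiency ratio EVSI/EVPI reports how close the sample information comes to that bound.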