OR Methods: Markov Chains

Video Tutorials for OR Methods: Markov Chains

This page provides the materials and video tutorials for the Markov Chains section of the course. You can increase the video quality by clicking the gear icon in the video player.

Markov Chains Video Tutorial 1 (by Thomas Sharkey): Modeling Chutes and Ladders as a Markov Chain and its Steady-State Probabilities

This video was created by Thomas Sharkey. It focuses on modeling a small-scale Chutes and Ladders game (that goes on forever) as a Markov Chain. It discusses the states of the Markov Chain, the transition probability matrix, and formulates the steady-state probability equations. It then solves this set of equations to determine the long-run percentage of time the Markov Chain spends in each state or, equivalently, the long-run percentage of time a player is on a particular space. The problem description is available here: Chutes and Ladders.
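The steady-state equations formulated in the video can be solved numerically. Below is a minimal sketch using NumPy with an invented 3-state transition matrix (the actual Chutes and Ladders probabilities come from the problem description, not from this example): we stack the equations pi * P = pi and replace one of them with the normalization condition that the probabilities sum to 1.

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1); illustrative
# numbers only, not the Chutes and Ladders data from the video.
P = np.array([
    [0.50, 0.50, 0.00],
    [0.25, 0.50, 0.25],
    [0.00, 0.50, 0.50],
])

n = P.shape[0]
# Steady-state equations: pi P = pi, i.e. (P^T - I) pi = 0, plus sum(pi) = 1.
# The balance equations are linearly dependent, so replace the last one
# with the normalization equation to get a nonsingular system.
A = P.T - np.eye(n)
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0

pi = np.linalg.solve(A, b)
print(pi)  # long-run fraction of time the chain spends in each state
```

For this particular matrix the solution is pi = (0.25, 0.5, 0.25): the middle state is visited half the time because both other states feed into it.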

Markov Chains Video Tutorial 2 (by Thomas Sharkey): First Passage Times and the Chutes and Ladders Markov Chain

This video was created by Thomas Sharkey. It focuses on determining the expected first passage times of various states in the Chutes and Ladders Markov Chain. You can become familiar with this Markov Chain by watching the previous video. The problem description is available here: Chutes and Ladders.
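The first passage time equations can also be set up as a linear system. The sketch below (again with an invented 3-state chain, not the video's data) solves m_i = 1 + sum over non-target states k of P[i,k] * m_k, which in matrix form is (I - Q) m = 1, where Q is the transition matrix restricted to the non-target states.

```python
import numpy as np

# Same style of hypothetical 3-state chain as before (illustrative only).
P = np.array([
    [0.50, 0.50, 0.00],
    [0.25, 0.50, 0.25],
    [0.00, 0.50, 0.50],
])

def expected_first_passage(P, target):
    """Expected number of steps to first reach `target` from every other state.

    Solves m_i = 1 + sum_{k != target} P[i, k] * m_k, i.e. (I - Q) m = 1,
    where Q restricts P to the non-target states.
    """
    n = P.shape[0]
    others = [i for i in range(n) if i != target]
    Q = P[np.ix_(others, others)]  # transitions that avoid the target state
    m = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    return dict(zip(others, m))

m = expected_first_passage(P, target=0)
print(m)  # for this chain: 6 steps from state 1, 8 steps from state 2
```

Note that the expected return time to the target state itself is simply 1 divided by its steady-state probability, which ties this video back to the previous one.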

Markov Chains Video Tutorial 3 (by Thomas Sharkey): Modeling a Golf Hole as a Markov Chain and Using Absorption Probabilities to Analyze Scores

This video was created by Thomas Sharkey. It focuses on modeling the playing of the 17th hole at TPC Sawgrass (the famous island green) as a Markov Chain with absorbing states. We discuss how to formulate and solve the equations associated with ending up in a particular absorbing state to determine the likelihood of making par or better on the hole. The problem description is available here: Playing the 17th Hole at TPC Sawgrass. You can also access the probability transition diagram here: Transition Probability Diagram.
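The absorption probability equations can be solved in one matrix computation. In the sketch below, Q holds transitions among transient states and R holds transitions from transient to absorbing states; the absorption probabilities are then B = (I - Q)^(-1) R. The states and numbers here are invented stand-ins (tee and green as transient states, "par or better" and "bogey or worse" as absorbing scores), not the probabilities from the video's transition diagram.

```python
import numpy as np

# Hypothetical transient states: 0 = tee, 1 = on the green.
Q = np.array([
    [0.0, 0.7],   # tee -> green with probability 0.7
    [0.0, 0.1],   # a missed putt keeps you on the green with probability 0.1
])
# Hypothetical absorbing states (columns): par-or-better, bogey-or-worse.
R = np.array([
    [0.1, 0.2],   # tee -> score directly (hole-in-one region vs. water)
    [0.6, 0.3],   # green -> final score
])

# Absorption probabilities: solve (I - Q) B = R.  Row i of B gives the
# probability of eventually landing in each absorbing state from
# transient state i.
B = np.linalg.solve(np.eye(Q.shape[0]) - Q, R)
print("P(par or better, starting from the tee):", B[0, 0])
```

Each row of B sums to 1, since every round eventually ends in one of the absorbing scores.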

Markov Chains Video Tutorial 4 (by Thomas Sharkey): Modeling a Market Share and Advertising Problem as a Markov Decision Process

This video was created by Thomas Sharkey. It focuses on modeling the market share of a company and its advertising decisions in each state as a Markov Decision Process. It then formulates a linear program to solve the MDP and determine the optimal advertising decision for each state of the underlying Markov Chain. The problem description is available here: A Market Share and Advertising Problem.
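The linear programming formulation of an average-reward MDP can be solved with an off-the-shelf LP solver. The sketch below uses SciPy's linprog on an invented 2-state (low/high market share), 2-action (advertise or not) instance; the transition probabilities and rewards are made up for illustration and are not the data from the problem description. The decision variables y(s,a) are the long-run fractions of time the chain is in state s and takes action a; the LP maximizes expected reward subject to flow-balance and normalization constraints.

```python
from scipy.optimize import linprog

# Hypothetical MDP. States: 0 = low share, 1 = high share.
# Actions: 0 = no advertising, 1 = advertise.
states, actions = [0, 1], [0, 1]
# Probability of being in the HIGH-share state next period, given (s, a).
p_high = {(0, 0): 0.1, (0, 1): 0.6, (1, 0): 0.5, (1, 1): 0.9}
# One-period reward: revenue (2 if low share, 6 if high) minus ad cost (1).
r = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 6.0, (1, 1): 5.0}

pairs = [(s, a) for s in states for a in actions]  # variables y(s, a)

# Maximize sum r(s,a) y(s,a)  ->  linprog minimizes, so negate.
c = [-r[sa] for sa in pairs]

# Flow balance for state 1 (the state-0 equation is redundant and dropped):
#   sum_a y(1, a) = sum_{s,a} y(s, a) * P(next = 1 | s, a)
balance = [(1.0 if s == 1 else 0.0) - p_high[(s, a)] for (s, a) in pairs]
# Normalization: the y(s, a) form a probability distribution.
A_eq = [balance, [1.0] * len(pairs)]
b_eq = [0.0, 1.0]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(pairs))
# Read off the optimal policy: in each state, take the action carrying
# the long-run probability mass.
policy = {s: max(actions, key=lambda a: res.x[pairs.index((s, a))])
          for s in states}
print("optimal long-run average reward:", -res.fun)
print("optimal action in each state:", policy)
```

For these made-up numbers the LP recommends advertising in both states, with a long-run average reward of 31/7; changing the ad cost or transition probabilities shifts the recommendation, which is the kind of sensitivity the video's formulation lets you explore.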