
AMS 342, Operations Research II: Stochastic Models

Catalog Description: Methods and techniques for stochastic modeling and optimization, with applications to queueing theory, Markov chains, inventory theory, games, and decisions. AMS 341 and 342 may be taken in either order, though it is recommended that AMS 341 be taken first.

Prerequisites: AMS 210 or MAT 211; AMS 311

Note:  Offered in spring semester only

3 credits

SBC:  SBS+


Required Textbook:
"Introduction to Probability Models" by Sheldon Ross, 12th Edition, 2019, Publisher: Elsevier, ISBN: 9780128143469

 

AMS 342 Webpage 

1.  Introduction and Review of Probability Theory –  4 hours
2.  Discrete Time Markov Chains –  7 hours
3.  Continuous Time Markov Chains –  6 hours
4.  Queueing Theory –  8 hours
5.  Inventory Theory –  6 hours
6.  Reliability Theory – 6 hours 
7.  Examinations and Review – 5 hours

 

Learning Outcomes for AMS 342, Operations Research II:  Stochastic Models

1.) Demonstrate familiarity with the double expectation formula and an ability to apply it in calculating expectations and probabilities involving two or more random variables.
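As an illustrative sketch (the die-and-coins setup below is hypothetical, not from the course), the double expectation formula E[X] = E[E[X | N]] can be checked by Monte Carlo: roll a fair die to get N, flip N fair coins, and count heads. Conditioning on N gives E[X] = E[N] · 1/2 = 1.75.

```python
import random

random.seed(0)

def sample_heads() -> int:
    """Roll a fair die to get N, then flip N fair coins; return head count."""
    n = random.randint(1, 6)          # N ~ Uniform{1,...,6}, so E[N] = 3.5
    return sum(random.random() < 0.5 for _ in range(n))

trials = 200_000
estimate = sum(sample_heads() for _ in range(trials)) / trials

# Double expectation: E[X] = E[ E[X | N] ] = E[N / 2] = 3.5 / 2 = 1.75
print(round(estimate, 3))
```

The simulated mean should land close to the analytic value 1.75, which is obtained without ever deriving the unconditional distribution of X.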

2.) Demonstrate an understanding of the basic concepts in Poisson processes:
        * familiarity with the memoryless property of exponential random variables;
        * state the two alternative definitions of a Poisson process;
        * develop an intuitive understanding of the various properties of Poisson processes;
        * understand nonhomogeneous Poisson processes and compound Poisson processes;
        * model simple real-life problems using the Poisson process and its generalizations.
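One constructive definition of the Poisson process builds it from i.i.d. exponential (memoryless) interarrival times; a minimal simulation sketch (rate and horizon chosen here purely for illustration) can confirm that the count of arrivals in [0, T] has mean λT:

```python
import random

random.seed(1)

def poisson_count(rate: float, horizon: float) -> int:
    """Count arrivals in [0, horizon] by summing i.i.d. Exp(rate) gaps."""
    t, count = 0.0, 0
    while True:
        t += random.expovariate(rate)   # memoryless interarrival time
        if t > horizon:
            return count
        count += 1

rate, horizon, paths = 2.0, 5.0, 20_000
mean_count = sum(poisson_count(rate, horizon) for _ in range(paths)) / paths

# N(horizon) ~ Poisson(rate * horizon), so the sample mean should be near 10.
print(round(mean_count, 2))
```

The same skeleton extends to nonhomogeneous processes (via thinning) and compound processes (by attaching a random mark to each arrival).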

3.) Demonstrate an understanding of the basic concepts in Discrete Time Markov Chains:
        * define a Markov chain;
        * appreciate the range of applications of Markov chains;
        * apply the Chapman-Kolmogorov equations to compute the marginal distributions of a Markov chain (transient behavior);
        * classify the states of a Markov chain;
        * determine the transience and recurrence of a Markov chain;
        * calculate steady state distributions for ergodic Markov chains.
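The transient and steady-state calculations above can be sketched in a few lines: repeatedly multiplying a distribution row vector by the transition matrix is exactly a Chapman-Kolmogorov step, and for an ergodic chain the iterates converge to the stationary distribution. The two-state "weather" chain below is a hypothetical example, not from the course.

```python
def step(dist, P):
    """One Chapman-Kolmogorov step: row vector times transition matrix."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Hypothetical two-state chain: 0 = sunny, 1 = rainy.
P = [[0.9, 0.1],
     [0.5, 0.5]]

dist = [1.0, 0.0]             # start in state 0
for _ in range(200):          # iterate until the distribution stabilizes
    dist = step(dist, P)

# For this ergodic chain, pi = pi P has the solution pi = (5/6, 1/6).
print([round(p, 4) for p in dist])
```

Solving the linear system π = πP with Σπ_i = 1 gives the same answer directly; the power iteration merely makes the convergence to steady state visible.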

4.) Demonstrate an understanding of the basic concepts in Renewal Processes:
        * define and characterize a renewal process;
        * derive integral equations using the renewal argument;
        * understand the limiting behavior of a renewal process;
        * define a renewal reward process;
        * demonstrate familiarity with the elementary renewal theorem and the renewal reward theorem, and apply these results in problem-solving situations;
        * demonstrate some knowledge of the key renewal theorem and an intuitive understanding of the inspection paradox;
        * model simple real-life problems using the renewal process and its generalizations, such as the renewal reward process and the alternating renewal process.
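The renewal reward theorem — long-run reward rate equals E[R] / E[X], the expected reward per cycle over the expected cycle length — can be illustrated with a small simulation. The cycle-length distribution and cost function below are hypothetical choices for the sketch, not taken from the course.

```python
import random

random.seed(2)

# Hypothetical maintenance model: cycle lengths X ~ Uniform(0.5, 1.5),
# with per-cycle cost R = X**2 (purely illustrative).
cycles = 100_000
total_time = total_reward = 0.0
for _ in range(cycles):
    x = random.uniform(0.5, 1.5)
    total_time += x
    total_reward += x * x

rate = total_reward / total_time

# Renewal reward theorem: rate -> E[R] / E[X] = (13/12) / 1,
# since E[X^2] = Var(X) + (E[X])^2 = 1/12 + 1.
print(round(rate, 3))
```

Note that the limiting rate is E[R]/E[X], not E[R/X]: averaging reward over total elapsed time weights long cycles more heavily, which is also the intuition behind the inspection paradox.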