<!DOCTYPE html>
<html lang="" xml:lang="">
<head>
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<title>Chapter 7 Waiting for the Simulation | Decision Modeling with Spreadsheets</title>
<meta name="description" content="This is a spreadsheet version of every management science book out there, less all of the detail." />
<meta name="generator" content="bookdown 0.20 and GitBook 2.6.7" />
<meta property="og:title" content="Chapter 7 Waiting for the Simulation | Decision Modeling with Spreadsheets" />
<meta property="og:type" content="book" />
<meta property="og:description" content="This is a spreadsheet version of every management science book out there, less all of the detail." />
<meta name="twitter:card" content="summary" />
<meta name="twitter:title" content="Chapter 7 Waiting for the Simulation | Decision Modeling with Spreadsheets" />
<meta name="twitter:description" content="This is a spreadsheet version of every management science book out there, less all of the detail." />
<meta name="author" content="William G. Foote" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<meta name="apple-mobile-web-app-capable" content="yes" />
<meta name="apple-mobile-web-app-status-bar-style" content="black" />
<link rel="prev" href="part-3-simulation.html"/>
<link rel="next" href="the-outer-limits.html"/>
<script src="libs/jquery-2.2.3/jquery.min.js"></script>
<link href="libs/gitbook-2.6.7/css/style.css" rel="stylesheet" />
<link href="libs/gitbook-2.6.7/css/plugin-table.css" rel="stylesheet" />
<link href="libs/gitbook-2.6.7/css/plugin-bookdown.css" rel="stylesheet" />
<link href="libs/gitbook-2.6.7/css/plugin-highlight.css" rel="stylesheet" />
<link href="libs/gitbook-2.6.7/css/plugin-search.css" rel="stylesheet" />
<link href="libs/gitbook-2.6.7/css/plugin-fontsettings.css" rel="stylesheet" />
<link href="libs/gitbook-2.6.7/css/plugin-clipboard.css" rel="stylesheet" />
<link rel="stylesheet" href="style.css" type="text/css" />
</head>
<body>
<div class="book without-animation with-summary font-size-2 font-family-1" data-basepath=".">
<div class="book-summary">
<nav role="navigation">
<ul class="summary">
<li><a href="./">Decision Modeling with Spreadsheets</a></li>
<li class="divider"></li>
<li class="chapter" data-level="" data-path="index.html"><a href="index.html"><i class="fa fa-check"></i>Prerequisites</a></li>
<li class="chapter" data-level="" data-path="part-1-spreadsheet-engineering.html"><a href="part-1-spreadsheet-engineering.html"><i class="fa fa-check"></i>Part 1 – Spreadsheet Engineering</a></li>
<li class="chapter" data-level="1" data-path="spreadsheet1.html"><a href="spreadsheet1.html"><i class="fa fa-check"></i><b>1</b> Tortuous Pie-making in the Sky</a><ul>
<li class="chapter" data-level="1.1" data-path="spreadsheet1.html"><a href="spreadsheet1.html#spreadsheets-really"><i class="fa fa-check"></i><b>1.1</b> Spreadsheets? Really?</a></li>
<li class="chapter" data-level="1.2" data-path="spreadsheet1.html"><a href="spreadsheet1.html#questions-questions"><i class="fa fa-check"></i><b>1.2</b> Questions, questions</a></li>
<li class="chapter" data-level="1.3" data-path="spreadsheet1.html"><a href="spreadsheet1.html#count-the-errors-of-our-ways"><i class="fa fa-check"></i><b>1.3</b> Count the errors of our ways</a></li>
<li class="chapter" data-level="1.4" data-path="spreadsheet1.html"><a href="spreadsheet1.html#prevailing-recommended-practices"><i class="fa fa-check"></i><b>1.4</b> Prevailing recommended practices</a><ul>
<li class="chapter" data-level="1.4.1" data-path="spreadsheet1.html"><a href="spreadsheet1.html#do-not-ever-do-this"><i class="fa fa-check"></i><b>1.4.1</b> Do not ever do this</a></li>
<li class="chapter" data-level="1.4.2" data-path="spreadsheet1.html"><a href="spreadsheet1.html#instead-practice-these"><i class="fa fa-check"></i><b>1.4.2</b> Instead practice these</a></li>
</ul></li>
<li class="chapter" data-level="1.5" data-path="spreadsheet1.html"><a href="spreadsheet1.html#pie-in-the-sky"><i class="fa fa-check"></i><b>1.5</b> Pie-in-the-Sky</a></li>
<li class="chapter" data-level="1.6" data-path="spreadsheet1.html"><a href="spreadsheet1.html#wheres-the-paper-and-pencils"><i class="fa fa-check"></i><b>1.6</b> Where’s the paper and pencils?</a></li>
<li class="chapter" data-level="1.7" data-path="spreadsheet1.html"><a href="spreadsheet1.html#cost-and-volume"><i class="fa fa-check"></i><b>1.7</b> Cost and volume</a></li>
<li class="chapter" data-level="1.8" data-path="spreadsheet1.html"><a href="spreadsheet1.html#demand-analysis"><i class="fa fa-check"></i><b>1.8</b> Demand analysis</a></li>
<li class="chapter" data-level="1.9" data-path="spreadsheet1.html"><a href="spreadsheet1.html#weekly-profit"><i class="fa fa-check"></i><b>1.9</b> Weekly profit</a></li>
<li class="chapter" data-level="1.10" data-path="spreadsheet1.html"><a href="spreadsheet1.html#profit-sensitivity-to-price"><i class="fa fa-check"></i><b>1.10</b> Profit sensitivity to price</a></li>
<li class="chapter" data-level="1.11" data-path="spreadsheet1.html"><a href="spreadsheet1.html#lo-and-behold"><i class="fa fa-check"></i><b>1.11</b> Lo and behold</a></li>
<li class="chapter" data-level="1.12" data-path="spreadsheet1.html"><a href="spreadsheet1.html#references-and-endnotes"><i class="fa fa-check"></i><b>1.12</b> References and endnotes</a></li>
</ul></li>
<li class="chapter" data-level="2" data-path="chaotic-pie-making-in-the-sky.html"><a href="chaotic-pie-making-in-the-sky.html"><i class="fa fa-check"></i><b>2</b> Chaotic Pie-making in the Sky"</a><ul>
<li class="chapter" data-level="2.1" data-path="chaotic-pie-making-in-the-sky.html"><a href="chaotic-pie-making-in-the-sky.html#how-many"><i class="fa fa-check"></i><b>2.1</b> How many?</a></li>
<li class="chapter" data-level="2.2" data-path="chaotic-pie-making-in-the-sky.html"><a href="chaotic-pie-making-in-the-sky.html#whats-new"><i class="fa fa-check"></i><b>2.2</b> What’s new?</a><ul>
<li class="chapter" data-level="2.2.1" data-path="chaotic-pie-making-in-the-sky.html"><a href="chaotic-pie-making-in-the-sky.html#the-costs-they-are-a-changing"><i class="fa fa-check"></i><b>2.2.1</b> The costs they are a-changing</a></li>
<li class="chapter" data-level="2.2.2" data-path="chaotic-pie-making-in-the-sky.html"><a href="chaotic-pie-making-in-the-sky.html#demand-takes-a-step-back"><i class="fa fa-check"></i><b>2.2.2</b> Demand takes a step back</a></li>
<li class="chapter" data-level="2.2.3" data-path="chaotic-pie-making-in-the-sky.html"><a href="chaotic-pie-making-in-the-sky.html#a-new-profit-dawning"><i class="fa fa-check"></i><b>2.2.3</b> A new profit dawning</a></li>
</ul></li>
<li class="chapter" data-level="2.3" data-path="chaotic-pie-making-in-the-sky.html"><a href="chaotic-pie-making-in-the-sky.html#an-algebra-of-pie"><i class="fa fa-check"></i><b>2.3</b> An algebra of pie</a><ul>
<li class="chapter" data-level="2.3.1" data-path="chaotic-pie-making-in-the-sky.html"><a href="chaotic-pie-making-in-the-sky.html#a-little-lite-algebra"><i class="fa fa-check"></i><b>2.3.1</b> A little lite algebra</a></li>
<li class="chapter" data-level="2.3.2" data-path="chaotic-pie-making-in-the-sky.html"><a href="chaotic-pie-making-in-the-sky.html#pvs-last-stand"><i class="fa fa-check"></i><b>2.3.2</b> PV’s last stand</a></li>
<li class="chapter" data-level="2.3.3" data-path="chaotic-pie-making-in-the-sky.html"><a href="chaotic-pie-making-in-the-sky.html#the-model-in-the-mist"><i class="fa fa-check"></i><b>2.3.3</b> The model in the mist</a></li>
<li class="chapter" data-level="2.3.4" data-path="chaotic-pie-making-in-the-sky.html"><a href="chaotic-pie-making-in-the-sky.html#how-sensitive"><i class="fa fa-check"></i><b>2.3.4</b> How sensitive?</a></li>
<li class="chapter" data-level="2.3.5" data-path="chaotic-pie-making-in-the-sky.html"><a href="chaotic-pie-making-in-the-sky.html#one-way-or-the-other"><i class="fa fa-check"></i><b>2.3.5</b> One way or the other</a></li>
</ul></li>
<li class="chapter" data-level="2.4" data-path="chaotic-pie-making-in-the-sky.html"><a href="chaotic-pie-making-in-the-sky.html#lo-and-behold-yet-again"><i class="fa fa-check"></i><b>2.4</b> Lo and behold yet again</a></li>
<li class="chapter" data-level="2.5" data-path="spreadsheet1.html"><a href="spreadsheet1.html#references-and-endnotes"><i class="fa fa-check"></i><b>2.5</b> References and endnotes</a></li>
</ul></li>
<li class="chapter" data-level="3" data-path="case-salmerón-solar-systems-llc.html"><a href="case-salmerón-solar-systems-llc.html"><i class="fa fa-check"></i><b>3</b> Case: Salmerón Solar Systems LLC</a><ul>
<li class="chapter" data-level="3.1" data-path="case-salmerón-solar-systems-llc.html"><a href="case-salmerón-solar-systems-llc.html#photovoltaic-systems"><i class="fa fa-check"></i><b>3.1</b> Photovoltaic systems</a></li>
<li class="chapter" data-level="3.2" data-path="case-salmerón-solar-systems-llc.html"><a href="case-salmerón-solar-systems-llc.html#the-particulars"><i class="fa fa-check"></i><b>3.2</b> The particulars</a></li>
<li class="chapter" data-level="3.3" data-path="case-salmerón-solar-systems-llc.html"><a href="case-salmerón-solar-systems-llc.html#some-definitions-are-in-order"><i class="fa fa-check"></i><b>3.3</b> Some definitions are in order</a></li>
<li class="chapter" data-level="3.4" data-path="case-salmerón-solar-systems-llc.html"><a href="case-salmerón-solar-systems-llc.html#requirements"><i class="fa fa-check"></i><b>3.4</b> Requirements</a></li>
</ul></li>
<li class="chapter" data-level="" data-path="part-2-optimization.html"><a href="part-2-optimization.html"><i class="fa fa-check"></i>Part 2 – Optimization</a></li>
<li class="chapter" data-level="4" data-path="bringing-pie-to-earth.html"><a href="bringing-pie-to-earth.html"><i class="fa fa-check"></i><b>4</b> Bringing Pie to Earth</a><ul>
<li class="chapter" data-level="4.1" data-path="bringing-pie-to-earth.html"><a href="bringing-pie-to-earth.html#it-all-started-here"><i class="fa fa-check"></i><b>4.1</b> It all started here</a></li>
<li class="chapter" data-level="4.2" data-path="bringing-pie-to-earth.html"><a href="bringing-pie-to-earth.html#from-humble-beginnings"><i class="fa fa-check"></i><b>4.2</b> From humble beginnings</a><ul>
<li class="chapter" data-level="4.2.1" data-path="bringing-pie-to-earth.html"><a href="bringing-pie-to-earth.html#decisions-decisions"><i class="fa fa-check"></i><b>4.2.1</b> 1. Decisions, decisions</a></li>
<li class="chapter" data-level="4.2.2" data-path="bringing-pie-to-earth.html"><a href="bringing-pie-to-earth.html#resources-they-are-aconstraining"><i class="fa fa-check"></i><b>4.2.2</b> 2. Resources they are a’constraining</a></li>
<li class="chapter" data-level="4.2.3" data-path="bringing-pie-to-earth.html"><a href="bringing-pie-to-earth.html#not-without-constraint"><i class="fa fa-check"></i><b>4.2.3</b> 3. Not without constraint</a></li>
<li class="chapter" data-level="4.2.4" data-path="bringing-pie-to-earth.html"><a href="bringing-pie-to-earth.html#let-us-eat-pie"><i class="fa fa-check"></i><b>4.2.4</b> Let us eat pie</a></li>
</ul></li>
<li class="chapter" data-level="4.3" data-path="bringing-pie-to-earth.html"><a href="bringing-pie-to-earth.html#an-easier-way"><i class="fa fa-check"></i><b>4.3</b> An easier way?</a></li>
</ul></li>
<li class="chapter" data-level="5" data-path="expanding-horizons.html"><a href="expanding-horizons.html"><i class="fa fa-check"></i><b>5</b> Expanding horizons</a><ul>
<li class="chapter" data-level="5.1" data-path="expanding-horizons.html"><a href="expanding-horizons.html#didos-bullhide"><i class="fa fa-check"></i><b>5.1</b> Dido’s bullhide</a></li>
<li class="chapter" data-level="5.2" data-path="expanding-horizons.html"><a href="expanding-horizons.html#making-dough"><i class="fa fa-check"></i><b>5.2</b> Making dough</a><ul>
<li class="chapter" data-level="5.2.1" data-path="expanding-horizons.html"><a href="expanding-horizons.html#model-me-an-eoq"><i class="fa fa-check"></i><b>5.2.1</b> Model me an EOQ</a></li>
<li class="chapter" data-level="5.2.2" data-path="expanding-horizons.html"><a href="expanding-horizons.html#implementing-the-simple-eoq"><i class="fa fa-check"></i><b>5.2.2</b> Implementing the simple EOQ</a></li>
<li class="chapter" data-level="5.2.3" data-path="expanding-horizons.html"><a href="expanding-horizons.html#life-constrains-our-simple-model"><i class="fa fa-check"></i><b>5.2.3</b> Life constrains our simple model</a></li>
</ul></li>
<li class="chapter" data-level="5.3" data-path="expanding-horizons.html"><a href="expanding-horizons.html#back-to-the-future"><i class="fa fa-check"></i><b>5.3</b> Back to the future</a><ul>
<li class="chapter" data-level="5.3.1" data-path="expanding-horizons.html"><a href="expanding-horizons.html#bottom-line-up-front"><i class="fa fa-check"></i><b>5.3.1</b> Bottom line up front</a></li>
<li class="chapter" data-level="5.3.2" data-path="expanding-horizons.html"><a href="expanding-horizons.html#tighter-and-tighter"><i class="fa fa-check"></i><b>5.3.2</b> Tighter, and tighter</a></li>
<li class="chapter" data-level="5.3.3" data-path="expanding-horizons.html"><a href="expanding-horizons.html#implementing-the-model"><i class="fa fa-check"></i><b>5.3.3</b> Implementing the model</a></li>
<li class="chapter" data-level="5.3.4" data-path="expanding-horizons.html"><a href="expanding-horizons.html#a-revised-rendering"><i class="fa fa-check"></i><b>5.3.4</b> A revised rendering</a></li>
</ul></li>
<li class="chapter" data-level="5.4" data-path="expanding-horizons.html"><a href="expanding-horizons.html#next-steps"><i class="fa fa-check"></i><b>5.4</b> Next steps?</a></li>
<li class="chapter" data-level="5.5" data-path="spreadsheet1.html"><a href="spreadsheet1.html#references-and-endnotes"><i class="fa fa-check"></i><b>5.5</b> References and endnotes</a></li>
</ul></li>
<li class="chapter" data-level="6" data-path="case-pricing-production-at-make-a-pie.html"><a href="case-pricing-production-at-make-a-pie.html"><i class="fa fa-check"></i><b>6</b> Case: Pricing Production at Make-A-Pie</a><ul>
<li class="chapter" data-level="6.1" data-path="case-pricing-production-at-make-a-pie.html"><a href="case-pricing-production-at-make-a-pie.html#front-matter"><i class="fa fa-check"></i><b>6.1</b> Front matter</a></li>
</ul></li>
<li class="chapter" data-level="" data-path="part-3-simulation.html"><a href="part-3-simulation.html"><i class="fa fa-check"></i>Part 3 – Simulation</a></li>
<li class="chapter" data-level="7" data-path="waiting-for-the-simulation.html"><a href="waiting-for-the-simulation.html"><i class="fa fa-check"></i><b>7</b> Waiting for the Simulation</a><ul>
<li class="chapter" data-level="7.1" data-path="waiting-for-the-simulation.html"><a href="waiting-for-the-simulation.html#a-story-of-chance"><i class="fa fa-check"></i><b>7.1</b> A story of chance</a></li>
<li class="chapter" data-level="7.2" data-path="waiting-for-the-simulation.html"><a href="waiting-for-the-simulation.html#a-piece-of-pie-please"><i class="fa fa-check"></i><b>7.2</b> A piece of pie, please</a></li>
<li class="chapter" data-level="7.3" data-path="waiting-for-the-simulation.html"><a href="waiting-for-the-simulation.html#a-brief-interlude-with-cholesky"><i class="fa fa-check"></i><b>7.3</b> A brief interlude with Cholesky</a></li>
<li class="chapter" data-level="7.4" data-path="waiting-for-the-simulation.html"><a href="waiting-for-the-simulation.html#solution"><i class="fa fa-check"></i><b>7.4</b> Solution</a></li>
<li class="chapter" data-level="7.5" data-path="waiting-for-the-simulation.html"><a href="waiting-for-the-simulation.html#now-we-can-simulate"><i class="fa fa-check"></i><b>7.5</b> Now we can simulate</a><ul>
<li class="chapter" data-level="7.5.1" data-path="waiting-for-the-simulation.html"><a href="waiting-for-the-simulation.html#excel-or-bust"><i class="fa fa-check"></i><b>7.5.1</b> Excel or bust!</a></li>
</ul></li>
<li class="chapter" data-level="7.6" data-path="waiting-for-the-simulation.html"><a href="waiting-for-the-simulation.html#back-to-the-eatery-robot"><i class="fa fa-check"></i><b>7.6</b> Back to the eatery robot</a><ul>
<li class="chapter" data-level="7.6.1" data-path="waiting-for-the-simulation.html"><a href="waiting-for-the-simulation.html#the-main-course"><i class="fa fa-check"></i><b>7.6.1</b> The main course</a></li>
<li class="chapter" data-level="7.6.2" data-path="waiting-for-the-simulation.html"><a href="waiting-for-the-simulation.html#exploring-wait-times"><i class="fa fa-check"></i><b>7.6.2</b> Exploring wait times</a></li>
</ul></li>
<li class="chapter" data-level="7.7" data-path="waiting-for-the-simulation.html"><a href="waiting-for-the-simulation.html#any-next-steps"><i class="fa fa-check"></i><b>7.7</b> Any next steps?</a></li>
<li class="chapter" data-level="7.8" data-path="spreadsheet1.html"><a href="spreadsheet1.html#references-and-endnotes"><i class="fa fa-check"></i><b>7.8</b> References and endnotes</a></li>
</ul></li>
<li class="chapter" data-level="8" data-path="the-outer-limits.html"><a href="the-outer-limits.html"><i class="fa fa-check"></i><b>8</b> The Outer Limits</a><ul>
<li class="chapter" data-level="8.1" data-path="the-outer-limits.html"><a href="the-outer-limits.html#tales-of-tails"><i class="fa fa-check"></i><b>8.1</b> Tales of tails</a></li>
<li class="chapter" data-level="8.2" data-path="the-outer-limits.html"><a href="the-outer-limits.html#it-always-starts-with-data"><i class="fa fa-check"></i><b>8.2</b> It always starts with data</a></li>
<li class="chapter" data-level="8.3" data-path="the-outer-limits.html"><a href="the-outer-limits.html#a-distribution-to-remember"><i class="fa fa-check"></i><b>8.3</b> A distribution to remember</a></li>
<li class="chapter" data-level="8.4" data-path="the-outer-limits.html"><a href="the-outer-limits.html#what-does-it-all-mean-so-far"><i class="fa fa-check"></i><b>8.4</b> What does it all mean, so far?</a></li>
<li class="chapter" data-level="8.5" data-path="the-outer-limits.html"><a href="the-outer-limits.html#the-joints-ajumpin"><i class="fa fa-check"></i><b>8.5</b> The joint’s ajumpin’</a></li>
<li class="chapter" data-level="8.6" data-path="the-outer-limits.html"><a href="the-outer-limits.html#a-sinking-feeling"><i class="fa fa-check"></i><b>8.6</b> A sinking feeling?</a></li>
<li class="chapter" data-level="8.7" data-path="the-outer-limits.html"><a href="the-outer-limits.html#simuluate-and-stimulate"><i class="fa fa-check"></i><b>8.7</b> Simuluate and stimulate</a></li>
<li class="chapter" data-level="8.8" data-path="the-outer-limits.html"><a href="the-outer-limits.html#finally-exhaustively-where-are-the-results"><i class="fa fa-check"></i><b>8.8</b> Finally, exhaustively, where are the results?</a></li>
<li class="chapter" data-level="8.9" data-path="spreadsheet1.html"><a href="spreadsheet1.html#references-and-endnotes"><i class="fa fa-check"></i><b>8.9</b> References and endnotes</a></li>
</ul></li>
<li class="chapter" data-level="9" data-path="case-forecasting-workers-compensation-claims.html"><a href="case-forecasting-workers-compensation-claims.html"><i class="fa fa-check"></i><b>9</b> Case: Forecasting Workers Compensation Claims</a><ul>
<li class="chapter" data-level="9.1" data-path="case-forecasting-workers-compensation-claims.html"><a href="case-forecasting-workers-compensation-claims.html#some-background"><i class="fa fa-check"></i><b>9.1</b> Some background</a></li>
<li class="chapter" data-level="9.2" data-path="case-forecasting-workers-compensation-claims.html"><a href="case-forecasting-workers-compensation-claims.html#the-ask"><i class="fa fa-check"></i><b>9.2</b> The ask</a></li>
<li class="chapter" data-level="9.3" data-path="case-forecasting-workers-compensation-claims.html"><a href="case-forecasting-workers-compensation-claims.html#some-requirements"><i class="fa fa-check"></i><b>9.3</b> Some requirements</a></li>
</ul></li>
<li class="chapter" data-level="" data-path="references.html"><a href="references.html"><i class="fa fa-check"></i>References</a></li>
<li class="divider"></li>
<li><a href="https://github.com/wgfoote/book-decision-modeling" target="blank">Published with bookdown</a></li>
</ul>
</nav>
</div>
<div class="book-body">
<div class="body-inner">
<div class="book-header" role="navigation">
<h1>
<i class="fa fa-circle-o-notch fa-spin"></i><a href="./">Decision Modeling with Spreadsheets</a>
</h1>
</div>
<div class="page-wrapper" tabindex="-1" role="main">
<div class="page-inner">
<section class="normal" id="section-">
<div id="waiting-for-the-simulation" class="section level1">
<h1><span class="header-section-number">Chapter 7</span> Waiting for the Simulation</h1>
<script>
function showText(y) {
var x = document.getElementById(y);
if (x.style.display === "none") {
x.style.display = "block";
} else {
x.style.display = "none";
}
}
</script>
<div id="a-story-of-chance" class="section level2">
<h2><span class="header-section-number">7.1</span> A story of chance</h2>
<p>Stan Ulam got sick one day, and it lasted for quite a while. The year was about 1945. The place, a secret facility in a not-so-secret desert in the Southwest United States. To while away the time he played, as many might do, <a href="https://en.wikipedia.org/wiki/Patience_(game)">solitaire, a game of self-imposed patience.</a> He was a mathematician and soon became impatient with not winning several hands in a row.</p>
<p>The combinations of potential hands and their rule-by-rule repositioning in an ace-high order amount, and Stan knew this, to <span class="math inline">\(52! \approx 8.07 \times 10^{67}\)</span> possible games. If he could play 4 games per hour, non-stop, it would take him about <span class="math inline">\(2.3 \times 10^{63}\)</span> years to play them all. That wasn’t happening! He discussed this idea with his colleague John von Neumann, who had just been involved in building the <a href="https://en.wikipedia.org/wiki/ENIAC">ENIAC computer.</a> He supposed he could instead generate <span class="math inline">\(N\)</span> games.</p>
<p>For each game he determines whether the proposed game is, or is not, winnable. The number of winnable games is <span class="math inline">\(W\)</span>. The relative frequency <span class="math inline">\(f\)</span> of winnable, acceptable games is then <span class="math inline">\(f = W/N\)</span>. Next he would let <span class="math inline">\(N\)</span> get larger, thinking that the accuracy of this thought experiment should improve. There are, after all, just two possibilities, win or lose: a binary outcome. If there are just two outcomes and, like the flipping of a coin, they are equally likely or nearly so, then solitaire wins and losses can be modeled by a binomial distribution with <span class="math inline">\(N\)</span> games, a probability of a win <span class="math inline">\(p\)</span>, and thus of a loss <span class="math inline">\(1-p\)</span>. The standard deviation of the sampled relative frequency <span class="math inline">\(p = f = W/N\)</span> of wins from a binomial distribution (a single game is a one-shot binomial, also known as a Bernoulli trial, but that is neither here nor there)<a href="#fn18" class="footnote-ref" id="fnref18"><sup>18</sup></a> is</p>
<p><span class="math display">\[
\sigma_p = \sqrt{\frac{p(1-p)}{N}}
\]</span></p>
<p>This means that as the number of sampled solitaire (Klondike-style) hands climbs, the standard deviation of the relative frequency declines and thus the process produces more accurate results. The win rate for recreational play has been apocryphally estimated at <span class="math inline">\(p=0.43\)</span>, which makes a hand of solitaire almost a fair coin flip.</p>
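<p>For instance, with <span class="math inline">\(p = 0.43\)</span> and an assumed <span class="math inline">\(N = 1000\)</span> sampled games,</p>
<p><span class="math display">\[
\sigma_p = \sqrt{\frac{0.43 \times 0.57}{1000}} \approx 0.016
\]</span></p>
<p>so the estimated win rate would be pinned down to within a few percentage points.</p>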
<p>John von Neumann, his <em>computer</em> wife Klara von Neumann, and others worked with Ulam’s idea, code-named Monte Carlo (it was secret until about 1948), on the ENIAC computer with its 17,000+ vacuum tubes. They were able to find solutions to problems they had not even known existed. They also built thermonuclear weapons with designs enabled by Monte Carlo methods on the ENIAC. Haigh et al. document the use of the stored-program modern computing paradigm installed in the ENIAC as well as the programs themselves (<span class="citation">Thomas Haigh (<a href="#ref-Haigh2014" role="doc-biblioref">2014</a>)</span>).</p>
<p>We might take three or so lessons from all of this story-telling. First, we should always keep a notebook, the original stored-program facility. Second, we can use Monte Carlo to generate new insights through a random brute-force method. Third, advances in technology provide a touchstone for technique: always be on the lookout for off-trend movements in a domain of knowledge.</p>
<p>The second takeaway has an important implication for us. Generative models build joint probabilities of highly interrelated groupings of data. In our working example this week we will need the capability of letting waiting times at multiple eateries interact with different times of day. With 20 eateries and 2 shifts there are over <span class="math inline">\(4 \times 10^{18}\)</span> possible interactions. That’s a lot of potential communication. Conditioning the data will help us pare this down to size. Monte Carlo will help us understand customer experience at the eateries.</p>
</div>
<div id="a-piece-of-pie-please" class="section level2">
<h2><span class="header-section-number">7.2</span> A piece of pie, please</h2>
<p>Tortiere and Fazi started baking vegan pies because they really wanted to open a chain of small eateries where the pies, with salads of course, would be the main attraction. They would also offer coffee, near-coffee as they call the products of some chains from the Northwest United States, and perhaps liquor and beer, very cosmopolitan. They hire some restaurant experts to help them understand the business side of an eatery. The so-called experts build a robot. This is a pie-eating robot: not a person, but a machine that can sense, observe, and build probability tables for how long it takes to get served.</p>
<p>Our model (we are the so-called experts) should be able to go from one eatery to another with memory. We do remember the last time we ordered a cup of coffee and a piece of pie at a lunch-counter eatery. The model should also be able to visit the same eatery, or another, in the morning or the afternoon. Time-of-day traffic at eateries could be different, exhibiting different customer experiences. The primary metric we elicit from our clients is their concern over how long a customer remains in line, gets to order, and receives the order. Too slow and the customer might bolt next door and eat meat pies. Too fast and service quality, including basic courtesy, might jump out of the back window.</p>
<p>We will program a robot to visit two eateries, order coffee and a piece of pie, and estimate the waiting times at each. The robot enters the first eatery, either in the morning or in the afternoon. The pie-eating robot begins with a vague idea of the waiting times, say with a mean of 5 minutes and a standard deviation of 1. After ordering a cup of coffee and a piece of pie at the first eatery, the robot observes a waiting time of 4 minutes, an observation below the mean, about 1 standard deviation’s worth. It updates its entry distribution for this eatery, the so-called prior for just getting in the queue, with this information, using Bayes’ theorem of course. This gives it an exit, or resulting, distribution, otherwise known as a posterior distribution, for the waiting time at the first eatery.</p>
<p>Now the robot moves on to a second eatery. It might be morning or afternoon. When this robot arrives at the next eatery, what is its expectation upon entering, the so-called prior, for the waiting time there? It could just use the resulting distribution, also known as a posterior, from the first eatery as its entry distribution, also known as a prior, for the second eatery. But that implicitly assumes that the two pie eateries have the same average waiting time. Eateries are all pretty much the same, but they aren’t identical.</p>
<p>On the other hand, it doesn’t make much sense to ignore the observation, the experience of waiting, from the first eatery. That would give the robot amnesia, and we want the robot to remember, and possibly compare. So how can the eatery customer-robot do better? It needs to represent the population of eateries and learn about that population. The distribution of waiting times in the population becomes the prior for each eatery until, and if, the pie-eating robot encounters a new customer experience, which will be compared with and conditioned by prior experiences.</p>
<p>Whatever the eatery, the robot has a simple model in its robotic cortex.</p>
<p><span class="math display">\[
\mu_{i} = \alpha_{i} + \beta_{i}A_i
\]</span></p>
<p>where <span class="math inline">\(\mu_i\)</span> is the average waiting time in minutes at eatery <span class="math inline">\(i\)</span>, <span class="math inline">\(\alpha_i\)</span> is the average morning waiting time, <span class="math inline">\(\beta_i\)</span> is the average difference in morning and afternoon waiting times, and <span class="math inline">\(A_i\)</span> is a zero/one indicator of whether we are in the afternoon shift, 1, or present ourselves to the morning shift, 0.</p>
<p>Eateries covary in their intercepts and slopes. Why? At a popular eatery, wait times are on average long in the morning, because staff are very busy, in a word, slammed. The eatery might be the only place open for blocks, but the same eatery might be much less busy in the afternoon, leading to a large difference between morning and afternoon wait times. At such a popular eatery, the intercept is high and the slope is far from zero, because the difference between morning and afternoon waits is large. But at a less popular eatery, the difference will be small. Such a less-than-popular eatery would make us wait less in the morning, because it’s not busy, and there is not much of a change in the afternoon. In the entire population of eateries, including both the popular and the unpopular, intercepts and slopes covary. This covariation is information that the robot can use.</p>
</div>
<div id="a-brief-interlude-with-cholesky" class="section level2">
<h2><span class="header-section-number">7.3</span> A brief interlude with Cholesky</h2>
<p>The information structure we impose on the customer-robot relates morning and afternoon wait times. We model this as covariance, equivalently as correlation, which is covariance scaled (divided) by the product of the standard deviations of morning and afternoon wait times. Scaling forces a covariance that can possibly range from negative to positive infinity into a far more useful range between -1 and +1, a range we and our robot view through a scanner darkly.<a href="#fn19" class="footnote-ref" id="fnref19"><sup>19</sup></a></p>
<p>We have two random parameters, so our problem is like this one. Given a <span class="math inline">\(2 \times 2\)</span> standardized variance-covariance matrix, also known as a correlation matrix, <span class="math inline">\(R\)</span>, where <span class="math inline">\(\rho\)</span> is the coefficient of correlation,</p>
<p><span class="math display">\[
R =
\begin{bmatrix}
1 & \rho \\
\rho & 1
\end{bmatrix}
\]</span></p>
<p>We start with uncorrelated, standardized variates <span class="math inline">\(x\)</span> (mean zero, variance one) from no particular distribution. Our job is to transform the uncorrelated variates to produce correlated variates <span class="math inline">\(z\)</span> with the same expected variance-covariance matrix as <span class="math inline">\(R\)</span>. This will put informational structure into the otherwise independently occurring <span class="math inline">\(x\)</span>.</p>
<p>When we say transform we mean this mathematically.</p>
<p><span class="math display">\[
z = Lx
\]</span></p>
<p>such that</p>
<p><span class="math display">\[
R = LL^T
\]</span></p>
<p>This would be a tall order were it not for a trick we might have learned when solving simultaneous equations, called triangularization.</p>
</div>
<div id="solution" class="section level2">
<h2><span class="header-section-number">7.4</span> Solution</h2>
<p>We can decompose <span class="math inline">\(R\)</span> into the product of a lower triangular matrix and its upper triangular transpose. This is the standard trick of matrix algebra noted above.</p>
<p><span class="math display">\[
R =
\begin{bmatrix}
1 & \rho \\
\rho & 1
\end{bmatrix}
=
\begin{bmatrix}
\ell_{11} & 0 \\
\ell_{21} & \ell_{22}
\end{bmatrix}
\begin{bmatrix}
\ell_{11} & \ell_{21} \\
0 & \ell_{22}
\end{bmatrix}
\]</span></p>
<p>When we multiply the lower and upper triangular matrices we get this interesting matrix.</p>
<p><span class="math display">\[
R =
\begin{bmatrix}
1 & \rho \\
\rho & 1
\end{bmatrix}
=
\begin{bmatrix}
\ell_{11}^2 & \ell_{11}\ell_{21} \\
\ell_{11}\ell_{21} & \ell_{21}^2 + \ell_{22}^2
\end{bmatrix}
\]</span></p>
<p>Now we use another trick by matching elements of <span class="math inline">\(R\)</span> with the elements in our new matrix. Here we go.</p>
<ul>
<li><span class="math inline">\(\ell_{11}^2 = 1\)</span> implies <span class="math inline">\(\ell_{11}= 1\)</span></li>
<li><span class="math inline">\(\ell_{11}\ell_{21} = \rho\)</span> then implies <span class="math inline">\(\ell_{21} = \rho\)</span></li>
<li><span class="math inline">\(\ell_{11}^2 + \ell_{22}^2 = 1\)</span> then implies that <span class="math inline">\(\ell_{22} = \sqrt{1 - \rho^2}\)</span></li>
</ul>
<p>That wasn’t as bad as we might have thought when we started. We then let <span class="math inline">\(L\)</span> be the new, mashed-up lower triangular factor of the <span class="math inline">\(R\)</span> matrix.</p>
<p><span class="math display">\[
L =
\begin{bmatrix}
\ell_{11} & 0 \\
\ell_{21} & \ell_{22}
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 \\
\rho & \sqrt{1 - \rho^2}
\end{bmatrix}
\]</span></p>
<p>This matrix can be expanded to more than 2 dimensions. The procedure to make that happen is called a <a href="https://en.wikipedia.org/wiki/Cholesky_decomposition"><strong>Cholesky</strong> factorization</a>, really an algorithm, well beyond these proceedings, but squarely on the path to Monte Carlo generation.</p>
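<p>To make the factorization concrete, here is a minimal VBA sketch, ours rather than the workbook’s, that builds <span class="math inline">\(L\)</span> for an assumed <span class="math inline">\(\rho\)</span> and checks that <span class="math inline">\(LL^T\)</span> reproduces <span class="math inline">\(R\)</span>. The routine name <code>CholeskyCheck</code> and the value 0.7 are purely illustrative.</p>
<pre><code>' Sketch: build the 2x2 Cholesky factor L for a correlation rho
' and verify that multiplying L by its transpose reproduces R.
Sub CholeskyCheck()
    Dim rho As Double
    Dim L11 As Double, L21 As Double, L22 As Double
    Dim R11 As Double, R12 As Double, R22 As Double
    rho = 0.7                      ' assumed correlation
    L11 = 1                        ' ell_11 = 1
    L21 = rho                      ' ell_21 = rho
    L22 = Sqr(1 - rho ^ 2)         ' ell_22 = sqrt(1 - rho^2)
    ' Elements of L * L^T
    R11 = L11 ^ 2                  ' should return 1
    R12 = L11 * L21                ' should return rho
    R22 = L21 ^ 2 + L22 ^ 2        ' should return 1
    MsgBox "LL^T = [" &amp; R11 &amp; ", " &amp; R12 &amp; "; " &amp; R12 &amp; ", " &amp; R22 &amp; "]"
End Sub</code></pre>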
</div>
<div id="now-we-can-simulate" class="section level2">
<h2><span class="header-section-number">7.5</span> Now we can simulate</h2>
<p>We generate correlated <span class="math inline">\(z = Lx\)</span> building on the uncorrelated <span class="math inline">\(x\)</span>. We first generate a random <span class="math inline">\(x_1\)</span> and, independently, a random <span class="math inline">\(x_2\)</span>. We can use the <code>=RAND()</code> function in Excel to perform this task.</p>
<p><span class="math display">\[
x =
\begin{bmatrix}
x_1 \\
x_2
\end{bmatrix}
\]</span>
Then we generate a <span class="math inline">\(z_1\)</span> and a <span class="math inline">\(z_2\)</span> using the <span class="math inline">\(x\)</span> vector of random numbers, but transformed by pre-multiplying <span class="math inline">\(x\)</span> with <span class="math inline">\(L\)</span>.</p>
<p><span class="math display">\[
z =
\begin{bmatrix}
z_1 \\
z_2
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 \\
\rho & \sqrt{1 - \rho^2}
\end{bmatrix}
\begin{bmatrix}
x_1 \\
x_2
\end{bmatrix}
\]</span>
We do remember that, by definition, if <span class="math inline">\(x_1\)</span> is not correlated with <span class="math inline">\(x_2\)</span>, then <span class="math inline">\(\rho_{12} = 0\)</span>. We can check our maths, in expectation, with this calculation.</p>
<p><span class="math display">\[
xx^T
=
\begin{bmatrix}
x_1 \\
x_2
\end{bmatrix}
\begin{bmatrix}
x_1 & x_2
\end{bmatrix}
=
\begin{bmatrix}
x_1^2 & x_1 x_2 \\
x_1 x_2 & x_2^2
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix}
\]</span></p>
<p>The expected value of the product of a column vector of independently drawn, standardized <span class="math inline">\(x\)</span> with its transpose, the row vector of those same <span class="math inline">\(x\)</span> random numbers, will always return the identity matrix <span class="math inline">\(I\)</span>. This shows that variates are perfectly correlated with themselves but not with each other.</p>
<p>Back to the main business at hand. We now calculate</p>
<p><span class="math display">\[
zz^T = (Lx)(Lx)^T = Lxx^TL^T = LIL^T = LL^T
\]</span></p>
<p>But, <span class="math inline">\(R = LL^T\)</span> so that <span class="math inline">\(R = zz^T\)</span>.</p>
<p>Thus we have sketched out these steps.</p>
<ul>
<li><p>Generate uncorrelated <span class="math inline">\(x\)</span>.</p></li>
<li><p>Generate <span class="math inline">\(z = Lx\)</span>, where <span class="math inline">\(L\)</span> reflects the desired correlation structure.</p></li>
<li><p>Find that <span class="math inline">\(L\)</span> generates the same correlation structure as the correlations in <span class="math inline">\(z\)</span>.</p></li>
</ul>
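<p>A minimal worksheet sketch of these steps, assuming <span class="math inline">\(\rho\)</span> sits in a cell named <code>rho</code> and the two uniform draws sit in B2:B3 (the cell addresses are ours, purely illustrative):</p>
<pre><code>B2: =RAND()
B3: =RAND()
D2: =B2
D3: =rho*B2 + SQRT(1 - rho^2)*B3</code></pre>
<p>D2 holds <span class="math inline">\(z_1 = 1 \cdot x_1 + 0 \cdot x_2\)</span> and D3 holds <span class="math inline">\(z_2 = \rho x_1 + \sqrt{1-\rho^2} x_2\)</span>, the two rows of <span class="math inline">\(z = Lx\)</span>.</p>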
<div id="excel-or-bust" class="section level3">
<h3><span class="header-section-number">7.5.1</span> Excel or bust!</h3>
<p>Here is a demonstration in Excel. It will use VBA without apology.</p>
<p><img src="images/05/cholesky-demo-calculation.jpg" /></p>
<p>Vectors of <span class="math inline">\(x\)</span> and <span class="math inline">\(z\)</span> flank the correlation matrix <span class="math inline">\(R\)</span> and the transformation matrix <span class="math inline">\(L\)</span>, just like the maths. We name the region B2:N3 `calculation. The <span class="math inline">\(x\)</span> vector is populate with RAND() functions. These will always return a uniformly distributed random number between 0 and 1. This will act like a probability if we would like to shape the draws into something more usful to our purposes. We use the MMULT() array function to matrix multiply <span class="math inline">\(L\)</span> and <span class="math inline">\(x\)</span> to get <span class="math inline">\(z\)</span>. We then simulate this act 10,000 times.</p>
<p>To simulate we designate the Q3:T3 range by naming it <code>interface</code>. This is a go-between region that mediates the calculations with the presentation of a single run to another named range called <code>simulation</code> in cells Q5:T5.</p>
<p>We then (cleverly) write a routine that replaces our fingers. They would otherwise press F9 10,000 times to recalculate the sheet, drawing new <span class="math inline">\(x\)</span> values and transforming them into new <span class="math inline">\(z\)</span> values. Each time the fingers would drop a copy of the interface value row into the next available row below the <code>simulation</code> range Q5:T5, and do it all over again. Shall we say it again? Yes, 10,000 times. We replace this labor-intensive work with Visual Basic for Applications (VBA) code, which may be viewed by pressing the Visual Basic button in the Developer ribbon.</p>
<p><img src="images/05/cholesky-demo-vba.jpg" /></p>
<p>We might add some appropriate pithy remark in the MsgBox. We can also change the number of simulations to some other assumption of the analysis by editing the FOR-NEXT loop. Otherwise, this code is self-sufficient and ready to reuse. We simply specify the named ranges. The rest is nicely automated. Assigning this subroutine to a button finishes the job.</p>
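<p>For readers following along without the workbook, a minimal sketch of such a routine might look like this. The name <code>RunSimulation</code> is ours; the routine assumes the named ranges <code>interface</code> and <code>simulation</code> described above.</p>
<pre><code>' Sketch: recalculate the sheet and archive each run of the
' interface row below the simulation range, 10,000 times.
Sub RunSimulation()
    Dim i As Long, nRuns As Long
    nRuns = 10000                  ' edit to change the number of runs
    Application.ScreenUpdating = False
    For i = 1 To nRuns
        Application.Calculate      ' redraws every RAND() on the sheet
        Range("interface").Copy
        Range("simulation").Offset(i, 0).PasteSpecial Paste:=xlPasteValues
    Next i
    Application.CutCopyMode = False
    Application.ScreenUpdating = True
    MsgBox nRuns &amp; " simulation runs complete."
End Sub</code></pre>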
</div>
</div>
<div id="back-to-the-eatery-robot" class="section level2">
<h2><span class="header-section-number">7.6</span> Back to the eatery robot</h2>
<p>The robot enters an eatery and observes waiting times. We will have this robot do this many times, in the morning and/or the afternoon, for 20 eateries: thus the simulation.</p>
<p>Here is the setup, calculation and graph for the random <span class="math inline">\(\alpha_i\)</span> waiting time intercepts correlated with <span class="math inline">\(\beta_i\)</span> afternoon waiting time differentials. This table is the starter for the rest of the waiting time simulation. It represents our prior expectations about the range and shape of morning intercept times and afternoon slope times. We will use this table in the main simulation in the next section.</p>
<p><img src="images/05/eatery-a-b.jpg" /></p>
<p>The intercepts and slopes for the 20 eateries plot a negative relationship. We shape the random variates into Gaussian (normal) distributions. In an <span class="math inline">\(x_1\)</span> cell in column F we make a normally distributed random variable with a 0 mean and 1 standard deviation. This is none other than the centered and scaled normal z-score. NORM.INV() will report the number of standard deviations away from 0 corresponding to a probability, here drawn as a number from 0 to 1 with RAND(). Later we will need the probability version of such a draw, one that transforms z’s, which may be positive or negative and very large or very small, back into a probability. That magic we perform with NORM.DIST set to TRUE, the cumulative normal distribution function. In the end we get a normally distributed random number with mean zero and standard deviation one.</p>
<p>We move to an <span class="math inline">\(x_2\)</span> cell in column G where we <em>Cholesky</em> (a new verb, perhaps) the <span class="math inline">\(x_1\)</span> normal random number into a new normal (pun intended!) but correlated random number. We use the second row of the Cholesky matrix, which, applied to <span class="math inline">\(x_1\)</span> and another randomly drawn <span class="math inline">\(x_2\)</span>, gives us this algebraic formula in Excel.</p>
<p><span class="math display">\[
z_2 = \rho \, x_1 + \sqrt{1-\rho^2}\, x_2
\]</span></p>
<p>We remember that <span class="math inline">\(z_1 = x_1\)</span> in the Cholesky transformation. Maybe this is too much information, but it does come in handy when numbers blow up or do not at all align with anyone’s common sense.</p>
<p>We then get to the random intercepts and slopes in columns H and I. The hard work has already been done. We draw normal variates with the means and standard deviations already specified for a and b. The graph concurs with our expectations that they are random and negatively correlated.</p>
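<p>A sketch of the formulas behind columns F through I, assuming named cells <code>rho</code>, <code>a_mean</code>, <code>a_sd</code>, <code>b_mean</code>, and <code>b_sd</code> (the names are ours; the workbook may differ):</p>
<pre><code>F2: =NORM.INV(RAND(), 0, 1)
G2: =rho*F2 + SQRT(1 - rho^2)*NORM.INV(RAND(), 0, 1)
H2: =NORM.INV(NORM.S.DIST(F2, TRUE), a_mean, a_sd)
I2: =NORM.INV(NORM.S.DIST(G2, TRUE), b_mean, b_sd)</code></pre>
<p>F2 is the standard normal <span class="math inline">\(x_1\)</span>; G2 Choleskys it into the correlated <span class="math inline">\(z_2\)</span>; H2 and I2 turn each z-score back into a probability with NORM.S.DIST and then re-shape it with NORM.INV into an <span class="math inline">\(\alpha_i\)</span> or <span class="math inline">\(\beta_i\)</span> with the specified mean and standard deviation.</p>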
<div id="the-main-course" class="section level3">
<h3><span class="header-section-number">7.6.1</span> The main course</h3>
<p>The robot randomly enters eatery number one and orders again, and possibly, again, or goes to eatery number two to do the same. The robot may go in the morning, or possibly in the afternoon. We and the robot do not expend any calories, get heartburn, or even have a good time. The robot only cares about how long it takes to get served. This means the robot tracks, from the data, parameters for each eatery and time of day.</p>
<p><img src="images/05/eatery-simulation-calculation.jpg" /></p>
<p>We introduce more, uncorrelated, uncertainty by allowing the customer-robot to choose whether to sample an eatery in the morning <span class="math inline">\(A=0\)</span> or afternoon <span class="math inline">\(A=1\)</span>. We use an IF() statement to make a choice with a threshold of p_am. Basically this is Jakob Bernoulli’s jump probability: the robot jumps into the eatery in the morning if a uniformly drawn RAND() number exceeds this threshold, and otherwise waits for the afternoon shift.</p>
<p>We calculate the average wait time by using each eatery_id in an INDEX(MATCH()) statement to lift the correct intercept <span class="math inline">\(\alpha_i\)</span> and slope <span class="math inline">\(\beta_i\)</span> for eatery <span class="math inline">\(i\)</span>. We then add the intercept to the slope times the 0 or 1 afternoon marker. This is the mean which, along with an assumed standard deviation, samples a waiting time from a normal distribution in the wait column. We do this 2000 times.</p>
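<p>One row of the simulation might be built like this sketch, assuming named ranges <code>id_col</code>, <code>a_col</code>, and <code>b_col</code> for the eatery table and named cells <code>p_am</code> and <code>sd_wait</code> (the names and cell addresses are ours):</p>
<pre><code>B2 (eatery_id): =RANDBETWEEN(1, 20)
C2 (A):         =IF(RAND() &gt; p_am, 0, 1)
D2 (mu):        =INDEX(a_col, MATCH(B2, id_col, 0)) + INDEX(b_col, MATCH(B2, id_col, 0))*C2
E2 (wait):      =NORM.INV(RAND(), D2, sd_wait)</code></pre>
<p>Copying the row down 2000 times produces the simulation table.</p>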
<p>With all of this data it behooves us to study it closely.</p>
</div>
<div id="exploring-wait-times" class="section level3">
<h3><span class="header-section-number">7.6.2</span> Exploring wait times</h3>
<p>We now have 2000 logically related versions of how 20 eateries with two shifts, morning and afternoon, produce wait times for wary and unwary customers. We did this by building a logically endowed robot to perform the calculations, all in Excel. Here is the workup in a separate worksheet called eda (for <span class="citation">Tukey (<a href="#ref-Tukey1977" role="doc-biblioref">1977</a>)</span> exploratory data analysis).</p>
<p><img src="images/05/eatery-eda-calculation.jpg" /></p>
<p>There are three tables. The one at the top left builds the parameters for an equally spaced set of intervals, in which we can count the number of wait time results that occur. All of that action occurs in the table to the right. Just below the grid parameter table is a summary of results.</p>
<p>We compose a typical data summary with these items:</p>
<ol style="list-style-type: decimal">
<li><p>Maximum, minimum, and the 25th, 50th, and 75th percentiles, the quartiles, since they break the data into four parts. All of these are measures of location. We measure central tendency with the median. Generally these are all we need to make a box plot. We also record the interquartile range (IQR), a scale metric useful for helping us determine outliers.</p></li>
<li><p>Mean and standard deviation to get other measures of location and scale (see the formula sketch just after this list). We also compare the median with the mean and see that they are very close.</p></li>
<li><p>Skewness measures how lopsided, how asymmetric, the distribution of data is. Kurtosis is very nearly a scaled standard deviation of the standard deviation. It measures the thickness, or thinness, of the tail where we find less frequent outcomes. This data is nearly symmetric with a meso-kurtic, a medium-thick, tail. It looks very much like the normal distribution, and it should, since we sampled from this distribution in the first place.</p></li>
<li><p>Other measures of the data can be included.</p></li>
</ol>
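<p>These summary items map directly to built-in Excel functions. A sketch, assuming the 2000 sampled times live in a named range <code>wait</code> (the name is ours):</p>
<pre><code>=MIN(wait)
=MAX(wait)
=QUARTILE.INC(wait, 1)
=MEDIAN(wait)
=QUARTILE.INC(wait, 3)
=QUARTILE.INC(wait, 3) - QUARTILE.INC(wait, 1)
=AVERAGE(wait)
=STDEV.S(wait)
=SKEW(wait)
=KURT(wait)</code></pre>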
<p>Now off to the right is a table where we calculate, using 21 intervals, the frequency, relative frequency, and cumulative relative frequency of the data occurring in each interval. We use enough intervals, with interval mid-point spacing designed for the use of managers and other decision makers. The cells at the bottom of the table document the way we construct the intervals and their mid-points. We use the mid-points in graphing. We count the integer number of times that a wait time we just sampled happens to occur in an interval.</p>
<p>The Excel function COUNTIFS(array, criterion, array, criterion) does the heavy lifting for us. An array here is the vector of wait times. All criteria are logical statements. The syntax for finding the number of wait time samples in an interval, with the beginning-of-interval value in one cell and the ending-of-interval value in another, is <code>"&gt;="&amp;begin_cell</code> and <code>"&lt;"&amp;end_cell</code>. The very last interval, to ensure we count all of the data, has a <code>"&lt;="&amp;end_cell</code> logical statement. We check each of the beginning and ending interval cell formulae and the logical criteria to count wait time observations.</p>
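<p>As a concrete sketch, with the sample in the named range <code>wait</code> and interval bounds in F5 and G5 (our illustrative addresses):</p>
<pre><code>=COUNTIFS(wait, "&gt;="&amp;F5, wait, "&lt;"&amp;G5)
=COUNTIFS(wait, "&gt;="&amp;F25, wait, "&lt;="&amp;G25)</code></pre>
<p>The first formula counts a typical interval; the second closes the last interval so no observation escapes the count.</p>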
<p>In the last column of the table we compute the normal (Gaussian) cumulative distribution function (CDF) version of the cumulative relative frequency. We use the mid-points from our interval estimations and the mean and standard deviation from the data summary table. We set the last argument of NORM.DIST to TRUE to calculate the CDF. Our mechanical work is done.</p>
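<p>A one-cell sketch, assuming the interval mid-point sits in E5 and the summary statistics in cells named <code>mean_wait</code> and <code>sd_wait</code> (our names):</p>
<pre><code>=NORM.DIST(E5, mean_wait, sd_wait, TRUE)</code></pre>
<p>Here is a graph of the results.</p>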
<p><img src="images/05/eatery-eda-graph.jpg" /></p>
<p>We see that the simulated and normal CDFs nearly collide into one another. The utility of a distribution like this is to help us ask how certain we are about average wait times. Probability intervals with lower and upper bounds will help us allocate scarce resources in a more principled fashion. We continue to remind ourselves that models are abstractions from reality; they are not customer experience, or the staff and facilities needed to enhance the experience.</p>
</div>
</div>
<div id="any-next-steps" class="section level2">
<h2><span class="header-section-number">7.7</span> Any next steps?</h2>
<p>We certainly should take a paste-values, frozen copy of the simulation, split it into morning and afternoon sections, and run the same exploratory data analysis on the splits. This will enable us to determine whether the eateries have much the same or radically different customer experiences.</p>
<p>We can also ask which eateries are more or less popular, if we take popular to mean longer wait times in both the morning and afternoon shifts. If an eatery is popular, the conjecture is that longer wait times are due to customer congestion. Unfortunately they could also be due to staff shortages, relatively less-trained staff, or even the physical configuration of an eatery. But at least we have a start on beginning to advise Make-A-Pie.</p>
<p>Can we use this model in other settings? We might try to map the parameters of this model to other processes. For example, the model might apply to analyzing queues at vaccination sites. We might also conceive of using the model to track the time to correct faults in geographically distributed electric and gas equipment. We might use this sort of model in any environment where the number of steps, the number of movements, or the distance to and from varies by some category in time and space (physical and imaginary) and where the categories are correlated. As always, one size will probably only fit one. Customization will require further scoping, designing, and implementing.</p>
</div>
<div id="references-and-endnotes" class="section level2">
<h2><span class="header-section-number">7.8</span> References and endnotes</h2>
</div>
</div>
<h3>References</h3>
<div id="refs" class="references">
<div id="ref-Haigh2014">
<p>Haigh, Thomas, Crispin Rope, and Mark Priestley. 2014. “Los Alamos Bets on ENIAC: Nuclear Monte Carlo Simulations, 1947–1948.” <em>IEEE Annals of the History of Computing</em>, nos. July–August.</p>
</div>
<div id="ref-Tukey1977">
<p>Tukey, John. 1977. <em>Exploratory Data Analysis</em>. <a href="https://archive.org/details/exploratorydataa0000tuke_7616">https://archive.org/details/exploratorydataa0000tuke_7616</a>.</p>
</div>
</div>
<div class="footnotes">
<hr />
<ol start="18">
<li id="fn18"><p>Here’s a sketch of the derivation. The sketch includes a manipulation of expectations. Now, we do remember a bit of information about expectations. They are ultimately weighted averages of outcomes, like the <span class="math inline">\(f/N\)</span> we, including Stan, sampled, where the weights are assigned probabilities of occurrence of outcomes. We let <span class="math inline">\(E()\)</span> stand in for this aggregation. Then we remember that the standard deviation is the square root of the variance. The variance in turn is the expectation (weighted probability average) of squared deviations of outcomes from means (which are in turn expectations). If <span class="math inline">\(f\)</span> is a binomial outcome, then <span class="math inline">\(Ef = np\)</span> and <span class="math inline">\(Var(f) = E(f-Ef)^2 = np(1-p)\)</span>. Here is the sketch without much, in any, commentary.
<span class="math display">\[
\begin{align}
Var(p) &= E(p-Ep)^2 \\
&= Ep^2 - (Ep)^2 \\
p &= \frac{f}{N} \\
Var(p) &= E(f/N)^2 - (E(f/N))^2 \\
&=(1/n^2)(Ef^2 - (Ef)^2) \\
&=(1/n^2)var(f) \\
&=(1/n^2)np(1-p) \\
&=\frac{p(1-p)}{n}
\end{align}
\]</span>
The standard deviation <span class="math inline">\(sigma_p\)</span> of <span class="math inline">\(Var(p)\)</span> is then
<span class="math display">\[
\sigma_p = \sqrt{\frac{p(1-p)}{n}}
\]</span>
Phew! Only a little waving of the hands required.<a href="waiting-for-the-simulation.html#fnref18" class="footnote-back">↩</a></p></li>
<li id="fn19"><p>Pilfered directly from Philip K. Dick’s memorable novel, <a href="https://en.wikipedia.org/wiki/A_Scanner_Darkly"><em>A Scanner Darkly</em>.</a> Our aim is not at all as dystopic as Dick’s.<a href="waiting-for-the-simulation.html#fnref19" class="footnote-back">↩</a></p></li>
</ol>
</div>
</section>
</div>
</div>
</div>
<a href="part-3-simulation.html" class="navigation navigation-prev " aria-label="Previous page"><i class="fa fa-angle-left"></i></a>
<a href="the-outer-limits.html" class="navigation navigation-next " aria-label="Next page"><i class="fa fa-angle-right"></i></a>
</div>
</div>
<script src="libs/gitbook-2.6.7/js/app.min.js"></script>
<script src="libs/gitbook-2.6.7/js/lunr.js"></script>
<script src="libs/gitbook-2.6.7/js/clipboard.min.js"></script>
<script src="libs/gitbook-2.6.7/js/plugin-search.js"></script>
<script src="libs/gitbook-2.6.7/js/plugin-sharing.js"></script>
<script src="libs/gitbook-2.6.7/js/plugin-fontsettings.js"></script>
<script src="libs/gitbook-2.6.7/js/plugin-bookdown.js"></script>
<script src="libs/gitbook-2.6.7/js/jquery.highlight.js"></script>
<script src="libs/gitbook-2.6.7/js/plugin-clipboard.js"></script>
<script>
gitbook.require(["gitbook"], function(gitbook) {
gitbook.start({
"sharing": {
"github": false,
"facebook": true,
"twitter": true,
"linkedin": false,
"weibo": false,
"instapaper": false,
"vk": false,
"all": ["facebook", "twitter", "linkedin", "weibo", "instapaper"]
},
"fontsettings": {
"theme": "white",
"family": "sans",
"size": 2
},
"edit": {
"link": null,
"text": null
},
"history": {
"link": null,
"text": null
},
"view": {
"link": null,
"text": null
},
"download": ["book-decision.pdf", "book-decision.epub"],
"toc": {
"collapse": "subsection"
}
});
});
</script>
<!-- dynamically load mathjax for compatibility with self-contained -->
<script>
(function () {
var script = document.createElement("script");
script.type = "text/javascript";
var src = "true";
if (src === "" || src === "true") src = "https://mathjax.rstudio.com/latest/MathJax.js?config=TeX-MML-AM_CHTML";
if (location.protocol !== "file:")
if (/^https?:/.test(src))
src = src.replace(/^https?:/, '');
script.src = src;
document.getElementsByTagName("head")[0].appendChild(script);
})();
</script>
</body>
</html>