# [The Dirichlet-mixture belief distribution]{.red} {#sec-dirichlet-mix}
{{< include macros.qmd >}}
{{< include macros_exchangeability.qmd >}}
{{< include macros_opm.qmd >}}
{{< include macros_marg_cond.qmd >}}
We have finally collected all we need to build a real, prototype
![](optimal_predictor_machine.png){width=75%}
up from first principles. This will be done in the present and following chapters of this part. In this chapter we specify and discuss a concrete belief distribution representing an agent with exchangeable beliefs about nominal variates. In the next chapter we put it into code. Then we apply it to some real datasets.
\
## A belief distribution for frequency distributions over nominal variates {#sec-intro-dirichlet-mix}
In this course we sadly shall not examine in depth many mathematical expressions for belief distributions $\P(F\mo\vf\|\yI)$ over frequencies. We briefly discuss one, called the [**Dirichlet-mixture**]{.blue} belief distribution. The state of knowledge underlying this distribution will be denoted $\yD$.
The Dirichlet-mixture distribution is appropriate for statistical populations with *nominal*, discrete variates, or joint variates with all nominal components. It is not appropriate for discrete ordinal variates, because it implicitly assumes that there is no natural order among the variate values.
::::{.column-margin}
::: {.callout-tip}
## {{< fa rocket >}} For the extra curious
Some of the theoretical basis for the choice of this belief distribution can be found in chapters 4--5 of [*The Estimation of Probabilities*](https://hvl.instructure.com/courses/28605/modules).
:::
::::
Suppose we have a simple or joint nominal variate $\bZ$ which can assume $M$ different values (these can be joint values, as in the examples of [§@sec-know-freq]). As usual $\vf$ denotes a specific frequency distribution for the variate values. For a specific value $\bz$, $f(\bz)$ is the relative frequency with which that value occurs in the full population.
The Dirichlet-mixture distribution assigns to $\vf$ a probability density given by the following formula:
$$\p(F\mo\vf \| \yD) =
\frac{1}{\amax-\amin+1}
\sum_{\ya=\amin}^{\amax}
\Biggl[\prod_{\bz} f(\bz)^{\frac{2^\ya}{M} -1} \Biggr]
\cdot
\frac{
\bigl(2^{\ya} -1 \bigr)!
}{
{\bigl(\frac{2^{\ya}}{M} - 1\bigr)!}^M
}
$$
Apart from some multiplicative constants, the probability density is simply proportional to the product of all frequencies, each raised to some power. The product "$\prod_{\bz}$" is over all $M$ possible values of $\bZ$. The sum "$\sum_{\ya}$" is over an *integer* (positive or negative) index $\ya$ that runs from the minimum value $\amin$ to the maximum value $\amax$. In the applications of the next chapters this minimum and maximum are chosen as follows:
$$
\amin=0
\qquad
\amax=20
$$
so the sum $\sum_{\ya}$ runs over 21 terms.
In most applications it does not matter if we take a lower $\amin$ or a higher $\amax$.
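To make the formula concrete, here is a minimal R sketch of this density. The function name `dirmix_density` and its interface are our own choices here, not the implementation developed in the next chapters. A factorial $x!$ with non-integer $x$ is computed as $\Gamma(x+1)$; working with `lgamma`, the logarithm of the gamma function, keeps the factorials manageable even for $\ya=20$.

```r
## Sketch: Dirichlet-mixture density p(F = f | D)
## f: vector of the M frequencies f(z), which must sum to 1
dirmix_density <- function(f, amin = 0, amax = 20) {
    M <- length(f)
    terms <- sapply(amin:amax, function(a) {
        k <- 2^a / M   # the exponents in the formula are k - 1
        ## (2^a - 1)! = Gamma(2^a)  and  (2^a/M - 1)! = Gamma(2^a/M)
        exp(sum((k - 1) * log(f)) + lgamma(2^a) - M * lgamma(k))
    })
    mean(terms)   # mean() supplies the 1/(amax - amin + 1) factor
}

## example call: a binary variate (M = 2), with the parameters used below
dirmix_density(c(0.84, 0.16), amin = 0, amax = 2)
```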
### Meaning of the $\amin, \amax$ parameters
The parameters $\amin, \amax$ encode, approximately speaking, the agent's prior belief about how much data is needed to change its initial beliefs. More precisely, $2^{\amin}$ and $2^{\amax}$ represent a lower and an upper bound on the amount of data necessary to overcome the initial beliefs. The values\ \ $\amin=0$,\ \ $\amax=20$\ \ represent the belief that this amount could be anywhere between 1 unit and approximately 1 million units ($2^{20} = 1\,048\,576$). The belief is spread out uniformly across the orders of magnitude in between.
If $2^{\amin}$ is larger than the amount of training data, the agent will consider these data insufficient, and will tend to give uniform probabilities in its inferences, for example a 50%/50% probability distribution for a binary variate.
If the amount of data that should be considered "enough" is known, for example from previous studies on similar populations, the parameters $\amin,\amax$ can be set to that order of magnitude (in base 2), minus and plus some magnitude range.
Note that if such an order of magnitude is not known, then it does *not* make sense to "estimate" it from the training data with other methods: an agent with a Dirichlet-mixture distribution already does that internally (and in an optimal way), provided an ample range is given through $\amin, \amax$.
:::{.callout-caution}
## {{< fa user-edit >}} Exercise
In the calculations and exercises that follow, you're welcome to try other $\amin$ and $\amax$ values as well, and see what happens.
:::
\
Let's see what this formula looks like in a concrete, simple example: the Mars-prospecting scenario (which has many analogies with coin tosses).
The variate $\yR$ can assume two values $\set{\yy,\yn}$, so $M=2$ in this case. The frequency distribution consists of two frequencies:
$$f(\yy) \qquad f(\yn)$$
of which only one can be chosen independently, since they must sum up to 1. For instance we could consider $f(\yy)=0.5, f(\yn)=0.5$,\ \ or $f(\yy)=0.84, f(\yn)=0.16$,\ \ and so on.
<!-- $$ -->
<!-- 2^{\amin} \approx \frac{1}{2}\ , -->
<!-- \quad -->
<!-- 2^{\amax} = 4 -->
<!-- \qquad\Longrightarrow\qquad -->
<!-- \amin = -1 \ , -->
<!-- \quad -->
<!-- \amax = 2 -->
<!-- $$ -->
*Only for this example* we choose
$$\amin=0 \qquad \amax=2$$
so that the sum $\sum_{\ya}$ runs over 3 terms.
The agent's belief distribution for the frequencies is
$$
\begin{aligned}
\p(F\mo\vf \| \yD) &=
\frac{1}{2-0+1}
\sum_{\ya=0}^{2}
\Biggl[\prod_{\bz=\yy}^{\yn} f(\bz)^{\frac{2^\ya}{2} -1} \Biggr]
\cdot
\frac{
\bigl(2^{\ya} -1 \bigr)!
}{
{\bigl(\frac{2^{\ya}}{2} - 1\bigr)!}^2
}
\\[2ex]
&=
\frac{1}{3}
\Biggl[
f(\yy)^{\frac{2^{0}}{2}-1}\cdot f(\yn)^{\frac{2^{0}}{2}-1}
\cdot
\frac{
\bigl(2^{0} -1 \bigr)!
}{
{\bigl(\frac{2^{0}}{2} - 1\bigr)!}^2
}
+{} \\[1ex]&\qquad
f(\yy)^{\frac{2^{1}}{2}-1}\cdot f(\yn)^{\frac{2^{1}}{2}-1}
\cdot
\frac{
\bigl(2^{1} -1 \bigr)!
}{
{\bigl(\frac{2^{1}}{2} - 1\bigr)!}^2
}
+{} \\[1ex]&\qquad
f(\yy)^{\frac{2^{2}}{2}-1}\cdot f(\yn)^{\frac{2^{2}}{2}-1}
\cdot
\frac{
\bigl(2^{2} -1 \bigr)!
}{
{\bigl(\frac{2^{2}}{2} - 1\bigr)!}^2
}
\Biggr]
\\[2ex]
&=
\frac{1}{3}
\Biggl[
\frac{1}{\sqrt{f(\yy)}}\cdot \frac{1}{\sqrt{f(\yn)}}
\cdot \frac{1}{\pi}
+{} \\[1ex]&\qquad
1 \cdot 1 \cdot \frac{1}{1}
+{} \\[1ex]&\qquad f(\yy)\cdot f(\yn)
\cdot \frac{6}{1}
\Biggr]
\end{aligned}
$$
\
We can visualize this belief distribution (with $\amin=0, \amax=2$) with a generalized scatter plot ([§@sec-repr-general-distr]) of 100 frequency distributions, each represented by a line histogram ([§@sec-discr-prob-distr]):
![](samples_rocks.png){width=100%}
Alternatively we can represent the probability density of the frequency $f(\yy)$:
![](dirmix_pdf_rocks.png){width=100%}
You can see some characteristics of this belief:
- all possible frequency distributions are taken into account, that is, no frequency is judged impossible and given zero probability
- a higher probability is given to frequency distributions that are almost 50%/50%, or that are almost 0%/100% or 100%/0%
The second characteristic expresses the belief that the agent may more often deal with tasks where frequencies are almost symmetric (think of coin tosses), or the opposite: tasks where, once you observe a phenomenon, you're quite sure you'll keep observing it. The latter case is typical of some physical phenomena; an example is given by Jaynes:
> For example, in a chemical laboratory we find a jar containing an unknown and unlabeled compound. We are at first completely ignorant as to whether a small sample of this compound will dissolve in water or not. But having observed that one small sample does dissolve, we infer immediately that all samples of this compound are water soluble, and although this conclusion does not carry quite the force of deductive proof, we feel strongly that the inference was justified.
::::{.column-margin}
::: {.callout-tip}
## {{< fa rocket >}} For the extra curious
[*Prior probabilities*](https://hvl.instructure.com/courses/28605/modules/items/723829
)
:::
::::
:::{.callout-caution}
## {{< fa user-edit >}} Exercise
Calculate the formula above for these three frequency distributions:
- $f(\yy)=0.5\quad f(\yn)=0.5$
- $f(\yy)=0.75\quad f(\yn)=0.25$
- $f(\yy)=0.99\quad f(\yn)=0.01$
<!-- ## [1] 1.33103 -->
<!-- ## [1] 0.945739 -->
<!-- ## [1] 1.06467 -->
:::
## Formula for joint probabilities about units with the Dirichlet-mixture distribution {#sec-joint-prob-dirichlet}
A mathematical advantage of the Dirichlet-mixture belief distribution is that some formulae in which it enters can be computed exactly. The most important is de Finetti's representation theorem ([§@sec-freq-not-known]), where the sum $\sum_{\vf}$ (really an integral $\int\,\di\vf$) can be calculated analytically in the case of the Dirichlet mixture:
:::{.column-body-outset-right}
$$
\begin{aligned}
\P(
\blue
Z_{L}\mo z_{L}
\and
\dotsb \and
Z_{1}\mo z_1
\black
\| \yD
)
&=
\int
f(\blue Z_{L}\mo z_{L} \black) \cdot
\,\dotsb\, \cdot
f(\blue Z_{1}\mo z_1 \black) \cdot
\p(F\mo\vf \| \yD)
\,\di\vf
\\[2ex]
&=
\frac{1}{\amax-\amin+1}
\sum_{\ya=\amin}^{\amax}
\frac{
\prod_{\bz} \bigl(\frac{2^{\ya}}{M} + \#\bz - 1\bigr)!
}{
\bigl(2^{\ya} + L -1 \bigr)!
}
\cdot
\frac{
\bigl(2^{\ya} -1 \bigr)!
}{
{\bigl(\frac{2^{\ya}}{M} - 1\bigr)!}^M
}
\end{aligned}
$$
:::
where $\#\bz$ is the multiplicity with which the specific value $\bz$ occurs in the sequence $\blue z_1,\dotsc,z_{L}$.
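Here is a minimal R sketch of this formula (again with our own function name, `dirmix_joint`). It works with `lgamma` rather than with the factorials directly, because $\bigl(2^{\ya}+L-1\bigr)!$ overflows ordinary floating-point numbers already for moderate $\ya$; this is the kind of rewriting needed to use $\amax=20$ on a computer.

```r
## Sketch: joint probability P(Z_L = z_L and ... and Z_1 = z_1 | D)
## counts: vector of the multiplicities #z of all M possible values,
##         zero counts included, so that sum(counts) = L
dirmix_joint <- function(counts, amin = 0, amax = 20) {
    M <- length(counts)
    L <- sum(counts)
    logterms <- sapply(amin:amax, function(a) {
        k <- 2^a / M
        sum(lgamma(k + counts)) - lgamma(2^a + L) +   # first fraction, in logs
            lgamma(2^a) - M * lgamma(k)               # second fraction, in logs
    })
    ## log-sum-exp for numerical stability, then the 1/(amax - amin + 1) factor
    m <- max(logterms)
    exp(m) * sum(exp(logterms - m)) / (amax - amin + 1)
}
```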
\
Let's see what this formula looks like in an example from the Mars-prospecting scenario.
Take the sequence^[Remember that the agent has exchangeable beliefs, so the units' IDs don't matter ([§@sec-exchaneable-distr])!]
$$
\yR_1\mo\yy \and \yR_2\mo\yy \and \yR_3\mo\yn \and \yR_4\mo\yy
$$
In this sequence, value $\yy$ appears thrice and value $\yn$ once, that is
$$L=4 \qquad \#\yy = 3 \qquad \#\yn = 1$$
In this case we have $M=2$, and we still take $\amin=0, \amax=2$. The formula above then becomes
:::{.column-body-outset-right}
$$
\begin{aligned}
&\P(
\underbracket[0.1ex]{\yR_1\mo\yy \and \yR_2\mo\yy \and \yR_3\mo\yn \and \yR_4\mo\yy}_{
\grey L=4\quad \#\cat{Y}=3\quad \#\cat{N}=1
}
\| \yD
)
\\
&\qquad{}=
\frac{1}{\amax-\amin+1}
\sum_{\ya=\amin}^{\amax}
\frac{
\prod_{\bz} \bigl(\frac{2^{\ya}}{M} + \#\bz - 1\bigr)!
}{
\bigl(2^{\ya} + L -1 \bigr)!
}
\cdot
\frac{
\bigl(2^{\ya} -1 \bigr)!
}{
{\bigl(\frac{2^{\ya}}{M} - 1\bigr)!}^M
}
\\[2ex]
&\qquad{}=
\frac{1}{2-0+1}
\sum_{\ya=\amin}^{\amax}
\frac{
\bigl(\frac{2^{\ya}}{2} + \#\yy - 1\bigr)!
\cdot \bigl(\frac{2^{\ya}}{2} + \#\yn - 1\bigr)!
}{
\bigl(2^{\ya} + 4 -1 \bigr)!
}
\cdot
\frac{
\bigl(2^{\ya} -1 \bigr)!
}{
{\bigl(\frac{2^{\ya}}{2} - 1\bigr)!}^2
}
\\[2ex]
&\qquad{}=
\frac{1}{3}
\sum_{\ya=0}^{2}
\frac{
\bigl(\frac{2^{\ya}}{2} + {\lblue 3} - 1\bigr)! \cdot
\bigl(\frac{2^{\ya}}{2} + {\yellow 1} - 1\bigr)!
}{
\bigl(2^{\ya} + 3 \bigr)!
}
\cdot
\frac{
\bigl(2^{\ya} -1 \bigr)!
}{
{\bigl(\frac{2^{\ya}}{2} - 1\bigr)!}^2
}
\\[1ex]
&\qquad{}=
\boldsymbol{0.048 735 1}
\end{aligned}
$$
:::
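The `dirmix_joint` sketch above reproduces this value:

```r
dirmix_joint(c(3, 1), amin = 0, amax = 2)   # counts: #y = 3, #n = 1
## [1] 0.0487351
```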
:::{.callout-caution}
## {{< fa user-edit >}} Exercises
- Using the formula above, calculate:
- $\P(\yR_1\mo\yy\| \yD)$ ; does the result make sense?
- $\P(\yR_{32}\mo\yn \and \yR_{102}\mo\yn \and \yR_{8}\mo\yn\| \yD)$
<!-- ## [1] 0.5 -->
<!-- ## [1] 0.254167 -->
- Try doing the calculation above on a computer, with $\amin=0, \amax=20$. What happens?
:::
## Examples of inference tasks with the Dirichlet-mixture belief distribution {#sec-formulae-with-Dirmix}
With the explicit formula
:::{.column-page-inset-right}
::::{.callout-note}
##
$$
\P(
\blue
Z_{L}\mo z_{L}
\and
\dotsb \and
Z_{1}\mo z_1
\black
\| \yD
)
=
\frac{1}{\amax-\amin+1}
\sum_{\ya=\amin}^{\amax}
\frac{
\prod_{\bz} \bigl(\frac{2^{\ya}}{M} + \#\bz - 1\bigr)!
}{
\bigl(2^{\ya} + L -1 \bigr)!
}
\cdot
\frac{
\bigl(2^{\ya} -1 \bigr)!
}{
{\bigl(\frac{2^{\ya}}{M} - 1\bigr)!}^M
}
$$
where $\#\bz$ is the number of times some value $\bz$ appears in the sequence
::::
:::
we can now solve all the kinds of tasks involving multiple units that were discussed in [§@sec-categ-probtheory]. Let's see a simple example of forecasting all variates for a new unit (no predictors), and another example where we guess a predictand given a predictor. We shall solve them step by step.
## Example 1: Forecast about one variate, given previous observations {#sec-dirmix-example1}
In this task there are no predictors and only one predictand variate $\yR$: the presence of haematite. The agent observes the value of this variate in several rocks, and tries to forecast its value in a new rock.
The agent has exchangeable beliefs represented by a Dirichlet-mixture distribution. The variate of the population of interest has two possible values, so in the formulae above we have\ \ $M=2$. We still take $\amin=0$, $\amax=2$.
The agent has collected three rocks. Upon examination, two of them contain haematite, one doesn't. The agent's data are therefore
$$\text{\small data:}\quad \yR_1\mo\yy \and \yR_2\mo\yy \and \yR_3\mo\yn$$
(remember that it doesn't matter how the rocks are labelled, because the agent's beliefs are exchangeable).
What probability should the agent give to finding haematite in a newly collected rock? That is, what value should it assign to
$$\P(\yR_4\mo\yy \| \yR_1\mo\yy \and \yR_2\mo\yy \and \yR_3\mo\yn \and \yD) \ ?$$
Our main formula is
:::{.column-body-outset-right}
$$
\P(\yR_4\mo\yy \| \yR_1\mo\yy \and \yR_2\mo\yy \and \yR_3\mo\yn \and \yD)
=
\frac{
\P(\yR_4\mo\yy \and \yR_1\mo\yy \and \yR_2\mo\yy \and \yR_3\mo\yn \| \yD)
}{
\sum_{{\purple r}=\yy}^{\yn}
\P(\yR_4\mo{\purple r} \and \yR_1\mo\yy \and \yR_2\mo\yy \and \yR_3\mo\yn \| \yD)
}
$$
:::
\
[{{< fa 1 >}} {{< fa hand-point-right >}}]{.green}\ \ \ The fraction above requires the computation of two joint probabilities:
$$
\begin{aligned}
\P(\yR_4\mo\yy \, \and\, \yR_1\mo\yy \and \yR_2\mo\yy \and \yR_3\mo\yn \| \yD)
\\[1ex]
\P(\yR_4\mo\yn \, \and\, \yR_1\mo\yy \and \yR_2\mo\yy \and \yR_3\mo\yn \| \yD)
\end{aligned}
$$
Note how they are associated with the two possible hypotheses about the new rock:
$$\yR_4\mo\yy \qquad \yR_4\mo\yn$$
of which the first one interests us.
:::{.column-body-outset-right}
[{{< fa 2 >}}a {{< fa hand-point-right >}}]{.green}\ \ In the first joint probability, $\yy$ appears thrice and $\yn$ appears once, so
$$\#\yy = 3 \qquad \#\yn = 1 \qquad {\midgrey L} = {\midgrey 4}$$
The de Finetti representation formula gives
$$\begin{aligned}
&\P(\yR_4\mo\yy \, \and\, \yR_1\mo\yy \and \yR_2\mo\yy \and \yR_3\mo\yn \| \yD)
\\[1ex]
&\qquad{}=
\frac{1}{2-0+1}
\sum_{\ya=0}^{2}
\frac{
\bigl(\frac{2^{\ya}}{2} + {\lblue 3} - 1\bigr)! \cdot
\bigl(\frac{2^{\ya}}{2} + {\yellow 1} - 1\bigr)!
}{
\bigl(2^{\ya} + {\midgrey 4} -1 \bigr)!
}
\cdot
\frac{
\bigl(2^{\ya} -1 \bigr)!
}{
{\bigl(\frac{2^{\ya}}{2} - 1\bigr)!}^2
}
\\[1ex]
&\qquad{}=
\boldsymbol{0.048 735 1}
\end{aligned}
$$
\
[{{< fa 2 >}}b {{< fa hand-point-right >}}]{.green}\ \ In the second joint probability, $\yy$ appears twice and $\yn$ appears twice, so
$$\#\yy = 2 \qquad \#\yn = 2 \qquad {\midgrey L} = {\midgrey 4}$$
We find
$$\begin{aligned}
&\P(\yR_4\mo\yn \, \and\, \yR_1\mo\yy \and \yR_2\mo\yy \and \yR_3\mo\yn \| \yD)
\\[1ex]
&\qquad{}=
\frac{1}{2-0+1}
\sum_{\ya=0}^{2}
\frac{
\bigl(\frac{2^{\ya}}{2} + {\lblue 2} - 1\bigr)! \cdot
\bigl(\frac{2^{\ya}}{2} + {\yellow 2} - 1\bigr)!
}{
\bigl(2^{\ya} + {\midgrey 4} -1 \bigr)!
}
\cdot
\frac{
\bigl(2^{\ya} -1 \bigr)!
}{
{\bigl(\frac{2^{\ya}}{2} - 1\bigr)!}^2
}
\\[1ex]
&\qquad{}=
\boldsymbol{0.033 209 3}
\end{aligned}
$$
:::
\
[{{< fa 3 >}} {{< fa hand-point-right >}}]{.green}\ \ \ The probability for the hypothesis of interest is finally given by the fraction
:::{.column-page-inset-right}
$$
\begin{aligned}
&\P(\yR_4\mo\yy \| \yR_1\mo\yy \and \yR_2\mo\yy \and \yR_3\mo\yn \and \yD)
\\[1ex]
&\qquad{}=
\frac{
\P(\yR_4\mo\yy \and \yR_1\mo\yy \and \yR_2\mo\yy \and \yR_3\mo\yn \| \yD)
}{
\sum_{{\purple r}=\yy}^{\yn}
\P(\yR_4\mo{\purple r} \and \yR_1\mo\yy \and \yR_2\mo\yy \and \yR_3\mo\yn \| \yD)
}
\\[1ex]
&\qquad{}=
\frac{
\P(\yR_4\mo\yy \, \and\, \yR_1\mo\yy \and \yR_2\mo\yy \and \yR_3\mo\yn \| \yD)
}{
\P(\yR_4\mo\yy \, \and\, \yR_1\mo\yy \and \yR_2\mo\yy \and \yR_3\mo\yn \| \yD) +
\P(\yR_4\mo\yn \, \and\, \yR_1\mo\yy \and \yR_2\mo\yy \and \yR_3\mo\yn \| \yD)
}
\\[1ex]
&\qquad{}=
\frac{0.048 735 1}{0.048 735 1 + 0.033 209 3}
\\[1ex]
&\qquad{}=
\boldsymbol{59.47\%}
\end{aligned}
$$
:::
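With the `dirmix_joint` sketch of [§@sec-joint-prob-dirichlet], the whole computation takes three lines:

```r
p_y <- dirmix_joint(c(3, 1), amin = 0, amax = 2)   # hypothesis R4 = y
p_n <- dirmix_joint(c(2, 2), amin = 0, amax = 2)   # hypothesis R4 = n
p_y / (p_y + p_n)   # about 0.5947, i.e. 59.47%
```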
:::{.callout-caution}
## {{< fa user-edit >}} Exercises
- The inference problem above has some analogy with coin tossing: there's just one, binary, variate. This agent could have been used to make forecasts about coin tosses.
Consider the result above from the point of view of this analogy. Let's say that $\yy$ is "heads" and $\yn$ "tails". Having observed three coin tosses, with two heads and one tail, the agent is giving a probability of about 59% for heads at the next toss.
- Do you consider this probability reasonable? Why?
- In which different coin-tossing circumstances would you consider this probability reasonable (given the same previous observation data)?
- Try doing the calculation above on a computer, using the values\ \ $\amin=0, \amax=20$.
If you use the formulae above as they're given, you'll probably get just `NaN`s: the formulae must be rewritten in a different way (for instance in terms of `lgamma`, as in the code sketch of [§@sec-joint-prob-dirichlet]) in order not to generate overflow. The result would be $\boldsymbol{51.42\%}$.
:::
## Example 2: Forecast about one predictand, given predictor and previous observations {#sec-dirmix-example2}
Let's go back to the hospital scenario of [§@sec-conditional-joint-general]. The units are patients coming into a hospital. The population is characterized by two nominal variates:
- $T$: the patient's means of transportation at arrival, with domain $\set{\ambu, \heli, \othe}$
- $U$: the patient's need of urgent care, with domain $\set{\urge, \nonu}$
The combined variate $(U \and T)$ has\ \ $M = 2\cdot 3 = 6$\ \ possible values. We still use the parameters $\amin=0$ and $\amax=2$, so that the de Finetti formula can be used as-is without causing overflow errors.
The agent's task is to forecast whether the next incoming patient will require urgent care, given information about the patient's transportation. So $U$ is the predictand variate, $T$ the predictor variate.
At the moment the agent has a complete record of two previous patients:
- $U_1\mo\urge \and T_1\mo\ambu$
- $U_2\mo\nonu \and T_2\mo\othe$
A third patient is incoming by ambulance:
- $T_3\mo\ambu$
What is the probability that this patient requires urgent care?
#### Initial belief
First let's get a glimpse of the agent's forecast if it were given *no* information about previous patients. We therefore want to calculate the probability
$$\P(U_3\mo\urge \| T_3\mo\ambu \and \yD)$$
\
[{{< fa 1 >}} {{< fa hand-point-right >}}]{.green}\ \ \ We need to calculate the two joint probabilities
:::{.column-body-outset-right}
$$
\P(U_3\mo\urge \and T_3\mo\ambu \| \yD)
\qquad
\P(U_3\mo\nonu \and T_3\mo\ambu \| \yD)
$$
:::
corresponding to the two hypotheses of interest, $U_3\mo\urge$ and $U_3\mo\nonu$.
\
[{{< fa 2 >}}a {{< fa hand-point-right >}}]{.green}\ \ In the first joint probability above the value pair $(U\mo\urge \and T\mo\ambu)$ appears once, and the remaining five pairs appear zero times:
:::{.column-body-outset-right}
$$\#(\urge, \ambu) = {\lblue 1} \qquad\text{\small five others }\#(\dotsc,\dotsc) = {\yellow 0}
\qquad {\midgrey L} = {\midgrey 1}$$
:::
The de Finetti formula gives
:::{.column-page-inset-right}
$$
\begin{aligned}
&\P(U_3\mo\urge \and T_3\mo\ambu \| \yD)
\\[2ex]
&\qquad{}=
\frac{1}{2-0+1}
\sum_{\ya=0}^{2}
\frac{
\bigl(\frac{2^{\ya}}{6} + {\lblue 1} - 1\bigr)! \cdot
\underbracket[0.1ex]{\bigl(\frac{2^{\ya}}{6} + {\yellow 0} - 1\bigr)! \cdot
\,\dotsb\, \cdot
\bigl(\frac{2^{\ya}}{6} + {\yellow 0} - 1\bigr)!}_{\text{\grey five factors}}
}{
\bigl( 2^{\ya} + {\midgrey 1} -1 \bigr)!
}
\cdot
\frac{
\bigl( 2^{\ya} -1 \bigr)!
}{
{\bigl(\frac{2^{\ya}}{6} - 1\bigr)!}^6
}
\\[1ex]
&\qquad{}=
\boldsymbol{1/6}
\end{aligned}
$$
:::
\
[{{< fa 2 >}}b {{< fa hand-point-right >}}]{.green}\ \ In the second joint probability above the value pair $(U\mo\nonu \and T\mo\ambu)$ appears once, and the remaining five pairs appear zero times:
:::{.column-body-outset-right}
$$\#(\nonu, \ambu) = {\lblue 1} \qquad\text{\small five others }\#(\dotsc,\dotsc) = {\yellow 0}
\qquad {\midgrey L} = {\midgrey 1}$$
:::
The de Finetti formula gives the same result as before:
:::{.column-page-inset-right}
$$
\begin{aligned}
&\P(U_3\mo\nonu \and T_3\mo\ambu \| \yD)
\\[2ex]
&\qquad{}=
\frac{1}{2-0+1}
\sum_{\ya=0}^{2}
\frac{
\bigl(\frac{2^{\ya}}{6} + {\lblue 1} - 1\bigr)! \cdot
\underbracket[0.1ex]{\bigl(\frac{2^{\ya}}{6} + {\yellow 0} - 1\bigr)! \cdot
\,\dotsb\, \cdot
\bigl(\frac{2^{\ya}}{6} + {\yellow 0} - 1\bigr)!}_{\text{\grey five factors}}
}{
\bigl( 2^{\ya} + {\midgrey 1} -1 \bigr)!
}
\cdot
\frac{
\bigl( 2^{\ya} -1 \bigr)!
}{
{\bigl(\frac{2^{\ya}}{6} - 1\bigr)!}^6
}
\\[1ex]
&\qquad{}=
\boldsymbol{1/6}
\end{aligned}
$$
:::
\
[{{< fa 3 >}} {{< fa hand-point-right >}}]{.green}\ \ \ The probability that this incoming patient is urgent is then
:::{.column-page-inset-right}
$$
\begin{aligned}
&
\P(U_3\mo\urge \| T_3\mo\ambu \and \yD)
\\[1ex]
&\qquad{}=
\frac{
\P(U_3\mo\urge \and T_3\mo\ambu \| \yD)
}{
\sum_{\purple u}
\P(U_3\mo{\purple u} \and T_3\mo\ambu \| \yD)
}
\\[1ex]
&\qquad{}=
\frac{
\P(U_3\mo\urge \and T_3\mo\ambu \| \yD)
}{
\P(U_3\mo\urge \and T_3\mo\ambu \| \yD)
+
\P(U_3\mo\nonu \and T_3\mo\ambu \| \yD)
}
\\[1ex]
&\qquad{}=
\frac{1/6 }{1/6 + 1/6}
\\[1ex]
&\qquad{}=
\boldsymbol{50\%}
\end{aligned}
$$
:::
\
The agent's background information says that, a priori, urgent and non-urgent patients are equally plausible. This is a characteristic of the particular Dirichlet-mixture belief distribution we are using. It would actually be possible to modify it so as to give a-priori different probabilities for the $\urge$ and $\nonu$ values; but we shall not pursue this possibility here.
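The uniform initial forecast can be checked with the `dirmix_joint` sketch. The ordering of the six counts below is an arbitrary convention of ours, here $(\urge,\ambu)$, $(\urge,\heli)$, $(\urge,\othe)$, $(\nonu,\ambu)$, $(\nonu,\heli)$, $(\nonu,\othe)$:

```r
p_urge0 <- dirmix_joint(c(1, 0, 0, 0, 0, 0), amin = 0, amax = 2)  # (urge, ambu) once
p_nonu0 <- dirmix_joint(c(0, 0, 0, 1, 0, 0), amin = 0, amax = 2)  # (nonu, ambu) once
c(p_urge0, p_nonu0, p_urge0 / (p_urge0 + p_nonu0))
## [1] 0.1666667 0.1666667 0.5000000
```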
\
#### Forecast after learning
Now let's take into account the information about the previous two patients. We therefore want to calculate the probability
:::{.column-page-right}
$$\P(U_3\mo\urge \| T_3\mo\ambu \and
U_2\mo\nonu \and T_2\mo\othe \and
U_1\mo\urge \and T_1\mo\ambu \and
\yD)
$$
:::
\
[{{< fa 1 >}} {{< fa hand-point-right >}}]{.green}\ \ \ The hypotheses are again $U_3\mo\urge$ and $U_3\mo\nonu$, and we need the joint probabilities
:::{.column-page-right}
$$
\begin{aligned}
&\P(U_3\mo\urge \and T_3\mo\ambu \and
U_2\mo\nonu \and T_2\mo\othe \and
U_1\mo\urge \and T_1\mo\ambu \|
\yD)
\\[1ex]
&\P(U_3\mo\nonu \and T_3\mo\ambu \and
U_2\mo\nonu \and T_2\mo\othe \and
U_1\mo\urge \and T_1\mo\ambu \|
\yD)
\end{aligned}
$$
:::
\
[{{< fa 2 >}}a {{< fa hand-point-right >}}]{.green}\ \ The first joint probability has the following counts:
:::{.column-page-right}
$$
\#(\urge, \ambu) = {\blue 2} \quad
\#(\nonu, \othe) = {\lblue 1} \quad
\text{\small four others }\#(\dotsc,\dotsc) = {\yellow 0}
\quad {\midgrey L} = {\midgrey 3}$$
:::
and therefore
:::{.column-page-right}
$$
\begin{aligned}
&\P(U_3\mo\urge \and T_3\mo\ambu \and
U_2\mo\nonu \and T_2\mo\othe \and
U_1\mo\urge \and T_1\mo\ambu \|
\yD)
\\[2ex]
&\qquad{}=
\frac{1}{3}
\sum_{\ya=0}^{2}
\frac{
\bigl(\frac{2^{\ya}}{6} + {\blue 2} - 1\bigr)! \cdot
\bigl(\frac{2^{\ya}}{6} + {\lblue 1} - 1\bigr)! \cdot
\underbracket[0.1ex]{\bigl(\frac{2^{\ya}}{6} + {\yellow 0} - 1\bigr)! \cdot
\,\dotsb\, \cdot
\bigl(\frac{2^{\ya}}{6} + {\yellow 0} - 1\bigr)!}_{\text{\grey four factors}}
}{
\bigl( 2^{\ya} + {\midgrey 3} -1 \bigr)!
}
\cdot
\frac{
\bigl( 2^{\ya} -1 \bigr)!
}{
{\bigl(\frac{2^{\ya}}{6} - 1\bigr)!}^6
}
\\[1ex]
&\qquad{}=
\boldsymbol{0.005 915 64}
\end{aligned}
$$
:::
\
[{{< fa 2 >}}b {{< fa hand-point-right >}}]{.green}\ \ Counts for the second joint probability:
:::{.column-page-right}
$$
\begin{gathered}
\#(\urge, \ambu) = {\lblue 1} \qquad
\#(\nonu, \ambu) = {\lblue 1} \qquad
\#(\nonu, \othe) = {\lblue 1}
\\[1ex]
\text{\small three others }\#(\dotsc,\dotsc) = {\yellow 0}
\qquad {\midgrey L} = {\midgrey 3}
\end{gathered}
$$
:::
and therefore
:::{.column-page-right}
$$
\begin{aligned}
&\P(U_3\mo\nonu \and T_3\mo\ambu \and
U_2\mo\nonu \and T_2\mo\othe \and
U_1\mo\urge \and T_1\mo\ambu \|
\yD)
\\[2ex]
&\qquad{}=
\frac{1}{3}
\sum_{\ya=0}^{2}
\frac{
\underbracket[0.1ex]{
\bigl(\frac{2^{\ya}}{6} + {\lblue 1} - 1\bigr)!
\cdot\,\dotsb\, \cdot
\bigl(\frac{2^{\ya}}{6} + {\lblue 1} - 1\bigr)!
}_{\text{\grey three factors}}
\cdot
\underbracket[0.1ex]{\bigl(\frac{2^{\ya}}{6} + {\yellow 0} - 1\bigr)! \cdot
\,\dotsb\, \cdot
\bigl(\frac{2^{\ya}}{6} + {\yellow 0} - 1\bigr)!}_{\text{\grey three factors}}
}{
\bigl( 2^{\ya} + {\midgrey 3} -1 \bigr)!
}
\cdot
\frac{
\bigl( 2^{\ya} -1 \bigr)!
}{
{\bigl(\frac{2^{\ya}}{6} - 1\bigr)!}^6
}
\\[1ex]
&\qquad{}=
\boldsymbol{0.001 594 65}
\end{aligned}
$$
:::
\
[{{< fa 3 >}} {{< fa hand-point-right >}}]{.green}\ \ \ Finally, the probability that the third incoming patient is urgent is
:::{.column-page-inset-right}
$$
\begin{aligned}
&
\P(U_3\mo\urge \| T_3\mo\ambu \and
U_2\mo\nonu \and T_2\mo\othe \and
U_1\mo\urge \and T_1\mo\ambu \and
\yD)
\\[1ex]
&\qquad{}=
\frac{0.005 915 64}{0.005 915 64 + 0.001 594 65}
\\[1ex]
&\qquad{}=
\boldsymbol{78.77\%}
\end{aligned}
$$
:::
The agent has learned an association between $\ambu$ and $\urge$ from the first patient. Note that if we had used the parameters $\amin=0, \amax=20$, the result would have been more conservative: $\boldsymbol{54.90\%}$.
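Both results can be reproduced with the `dirmix_joint` sketch, using the same ordering of the six counts as before:

```r
p_urge <- dirmix_joint(c(2, 0, 0, 0, 0, 1), amin = 0, amax = 2)   # U3 = urge
p_nonu <- dirmix_joint(c(1, 0, 0, 1, 0, 1), amin = 0, amax = 2)   # U3 = nonu
p_urge / (p_urge + p_nonu)   # about 0.7877, i.e. 78.77%

## with the wider range amin = 0, amax = 20 (the function's defaults):
p_urge20 <- dirmix_joint(c(2, 0, 0, 0, 0, 1))
p_nonu20 <- dirmix_joint(c(1, 0, 0, 1, 0, 1))
p_urge20 / (p_urge20 + p_nonu20)   # about 0.5490, i.e. 54.90%
```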
::::{.column-page-inset-right}
:::{.callout-caution}
## {{< fa user-edit >}} Exercises
- Do the inverse inference, using urgency $U$ as predictor, and transportation $T$ as predictand. That is, calculate the probability
$$\P( T_3\mo\ambu \| U_3\mo\urge \and
U_2\mo\nonu \and T_2\mo\othe \and
U_1\mo\urge \and T_1\mo\ambu \and
\yD)$$
- Imagine that the urgency variate for the first patient, $U_1$, is not known (missing data). Using the formula for marginalization (see [§@sec-underlying-distribution]), calculate the corresponding probability
$$\P(U_3\mo\urge \| T_3\mo\ambu \and
U_2\mo\nonu \and T_2\mo\othe \and
T_1\mo\ambu \and
\yD)$$
- Make similar kinds of inferences, freely trying other combinations of information about the two previous patients.
- Do the same, but with three previous patients instead of two.
:::
::::
## When is the Dirichlet-mixture belief distribution appropriate? {#sec-critique-dirmix}
The two examples above reveal some characteristics of an agent based on the Dirichlet-mixture belief distribution:
- In the absence of previous data, it assigns uniform probability distributions to any variate.
- It can be "eager" to learn from previous examples, that is, its probabilities may vary appreciably even after only a few observations. The "eagerness" is determined by the parameters $\amin, \amax$. For a general-purpose agent, the values $\amin=0, \amax=20$ are more reasonable than the narrow range used in the examples above.
There are also other subtle characteristics connected to the two above, which we won't discuss here.
These characteristics can be appropriate to some inference tasks, but not to others. It is again a matter of *background information* about the task one wants to solve.
The background information implicit in the Dirichlet-mixture belief distribution can be reasonable in situations where:
- There is very little information about the physics or science behind the (nominal) variates and population, so one is willing to give a lot of weight to observed data. Contrast this with the coin-tossing scenario, where our physics knowledge about coin tosses makes us appreciably change our probabilities only after a large number of observations.
::::{.column-margin}
::: {.callout-warning}
## {{< fa book >}} Study reading (again)
[*Dynamical Bias in the Coin Toss*](https://hvl.instructure.com/courses/28605/modules)
<!-- We conclude that coin tossing is "physics" not "random." -->
:::
::::
- A large number of previous observations is available, "large" relative to the domain size $M$ of the joint variate $Z$.
- The joint variate $Z$ has a small domain.
It is possible to modify the Dirichlet-mixture belief distribution in order to alter the characteristics above. Some modifications can assign more a-priori plausibility to some variate values than others, or make the initial belief less affected by observed data.
::::{.column-margin}
::: {.callout-tip}
## {{< fa rocket >}} For the extra curious
- [*The Dirichlet-tree distribution*](https://tminka.github.io/papers/dirichlet/minka-dirtree.pdf): a more flexible generalization of the Dirichlet distribution
- [*Monkeys, kangaroos, and N*](https://hvl.instructure.com/courses/28605/modules): an insightful discussion of how to investigate and represent prior beliefs
:::
::::
These possibilities should remind us of the importance of assessing and specifying appropriate background information. No matter the amount of data, what the data eventually "tell" us acquires meaning only against the background information under which they are observed.
----
In the next chapter we discuss the code implementation of the formulae for the Dirichlet-mixture agent.