# [First connection with machine learning]{.midgrey}
{{< include macros.qmd >}}
Some works in machine learning focus on "guessing the correct answer", and this focus is reflected in the way their algorithms -- especially classifiers -- are trained and used.
In [§@sec-optimality] we emphasized that "guessing successfully" can be a misleading goal, however, because it can lead us away from guessing *optimally*. We shall now see two simple but concrete examples of this.
## A "max-success" classifier vs an optimal classifier
:::{.callout-note}
##
You can also find the code for this chapter and its exercises in [this JupyterLab notebook for R](code/mlc_vs_opm.ipynb) and (courtesy of Viktor Karl Gravdal!) [this JupyterLab notebook for Python](code/mlc_vs_opm_py.ipynb).
:::
We shall compare the results obtained in some numerical simulations by using
- a [Machine-Learning Classifier]{.yellow} trained to make the largest number of successful guesses
- a prototype "[Optimal Predictor Machine]{.blue}" trained to make the optimal decision
For the moment we treat both as "black boxes": we don't yet examine how they compute their outputs (although you may already have a good guess at how the Optimal Predictor Machine works).
Their operation is implemented in [this R script](code/mlc_vs_opm.R), which we now load:
```{r}
source('code/mlc_vs_opm.R')
```
This script simply defines the function `hitsvsgain()`:
```
hitsvsgain(ntrials, chooseAtrueA, chooseAtrueB, chooseBtrueB, chooseBtrueA, probsA)
```
which takes six arguments:
- `ntrials`: how many simulations of guesses to make
- `chooseAtrueA`: utility gained by guessing `A` when the successful guess is indeed `A`
- `chooseAtrueB`: utility gained by guessing `A` when the successful guess is `B` instead
- `chooseBtrueB`: utility gained by guessing `B` when the successful guess is indeed `B`
- `chooseBtrueA`: utility gained by guessing `B` when the successful guess is `A` instead
- `probsA`: a vector of probabilities (between `0` and `1`) that the successful guess is `A`, used in the simulations (and recycled if necessary); the corresponding probabilities for `B` are therefore `1 - probsA`. If this argument is omitted it defaults to `0.5` (not very interesting)
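To make the four utility arguments concrete, here is a small, purely illustrative helper (it is *not* part of `code/mlc_vs_opm.R`, and the name `totalgain` is invented) that adds up the total gain of a sequence of guesses, given the corresponding successful guesses and the four utilities:

```
## Hypothetical helper, for illustration only (not in code/mlc_vs_opm.R)
totalgain <- function(guesses, truths,
                      chooseAtrueA, chooseAtrueB, chooseBtrueB, chooseBtrueA){
    ## utility table: rows = our guess, columns = the guess that would have succeeded
    utilities <- matrix(c(chooseAtrueA, chooseAtrueB,
                          chooseBtrueA, chooseBtrueB),
                        nrow=2, byrow=TRUE,
                        dimnames=list(guess=c('A','B'), truth=c('A','B')))
    ## look up the utility of each (guess, truth) pair and add them up
    sum(utilities[cbind(guesses, truths)])
}

## example: three guesses, two of them successful
totalgain(guesses=c('A','A','B'), truths=c('A','B','B'),
          chooseAtrueA=+1, chooseAtrueB=-11, chooseBtrueB=0, chooseBtrueA=0)
## gives 1 - 11 + 0 = -10
```

The number of successful guesses, by contrast, only counts how often `guesses == truths`, regardless of these utilities; this difference is exactly what the comparisons below are about.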
## Example 1: electronic component
Let's apply our two classifiers to the *Accept or discard?* problem of [§@sec-intro]. We call `A` the alternative in which the component won't fail before one year, and should therefore be accepted *if this alternative were known at the time of the decision*. We call `B` the alternative in which the component will fail within a year, and should therefore be discarded *if this alternative were known at the time of the decision*. Remember that the crucial point here is that the classifiers *don't* have this information at the moment of making the decision.
We simulate this decision for 100 000 components ("trials"), assuming that the probability of alternative `A` (no failure within a year) can be `0.05`, `0.20`, `0.80`, or `0.95`. The values of the arguments should be clear:
```{r}
hitsvsgain(ntrials=100000, chooseAtrueA=+1, chooseAtrueB=-11, chooseBtrueB=0, chooseBtrueA=0, probsA=c(0.05, 0.20, 0.80, 0.95))
```
Note how the [machine-learning classifier]{.yellow} is the one that *makes the most successful guesses* (around 88%), **and yet it leads to a net loss!** If the utility were in *kroner*, this classifier would lead to a [net loss of more than 20 000 kr]{.red} for the company producing the components.
The [optimal predictor machine]{.blue}, on the other hand, *makes fewer successful guesses* overall (around 72%), **and yet it leads to a net gain!** It would earn the company a [net gain of around 10 000 kr]{.green}.
:::{.callout-caution}
## {{< fa user-edit >}} Exercise
How is this possible? Try to understand what's happening; feel free to investigate by modifying the `hitsvsgain()` function so that it prints additional outputs. A possible starting point is sketched right after this exercise.
:::
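As a possible starting point for the exercise, one can compare the *average* gain per component of each guess at each probability level. The little computation below is only a sketch (the names `expectedA` and `expectedB` are invented), using the utilities of this example:

```
## Illustrative sketch, not part of code/mlc_vs_opm.R
probsA <- c(0.05, 0.20, 0.80, 0.95)                # probability that the component won't fail
expectedA <- probsA * (+1) + (1 - probsA) * (-11)  # average gain per component if we guess A
expectedB <- probsA * 0    + (1 - probsA) * 0      # average gain per component if we guess B
rbind(probsA, expectedA, expectedB)
```

Compare these averages with how often each guess would turn out to be successful at each probability level.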
## Example 2: find Aladdin! (image recognition)
A typical use of machine-learning classifiers is for image recognition: for instance, the classifier guesses whether a particular subject is present in the image or not.
Intuitively one may think that "guessing successfully" should be the best goal here. But exceptions may be more common than one might think. Consider the following scenario:
> Bianca has a computer folder with 10 000 photos. Some of these include her beloved cat Aladdin, who sadly passed away recently. She would like to select all photos that include Aladdin and save them in a separate "Aladdin" folder. Doing this by hand would take too long, if at all possible; so Bianca wants to employ a machine-learning classifier.
>
> For Bianca it's important that no photo with Aladdin goes missing, so she would be very sad if any photo with him weren't correctly recognized; on the other hand she doesn't mind if some photos without him end up in the "Aladdin" folder -- she can delete them herself afterwards.
Let's apply and compare our two classifiers to this image-recognition problem, using again the `hitsvsgain()` function. We call `A` the case where Aladdin is present in a photo, and `B` where he isn't. To reflect Bianca's preferences, let's use these "emotional utilities":
- `chooseAtrueA = +2`: Aladdin is correctly recognized
- `chooseBtrueA = -2`: Aladdin is not recognized and the photo goes missing
- `chooseBtrueB = +1`: the absence of Aladdin is correctly recognized
- `chooseAtrueB = -1`: a photo without Aladdin ends up in the "Aladdin" folder
and let's say that the photos may have probabilities `0.3`, `0.4`, `0.6`, `0.7` of including Aladdin:
```{r}
hitsvsgain(ntrials=10000, chooseAtrueA=+2, chooseAtrueB=-1, chooseBtrueB=1, chooseBtrueA=-2, probsA=c(0.3, 0.4, 0.6, 0.7))
```
Again we see that the [machine-learning classifier]{.yellow} makes more successful guesses than the [optimal predictor machine]{.blue}, but the latter yields a higher "emotional utility".
You may sensibly object that this result could depend on the particular utilities or probabilities chosen for this example. The next exercise helps answer this objection.
:::{.callout-caution}
## {{< fa user-edit >}} Exercise
- Is there any case in which the optimal predictor machine yields a strictly lower utility than the machine-learning classifier?
+ Try using different utilities, for instance using `±5` instead of `±2`, or whatever other values you please.
+ Try using different probabilities as well.
- As in the previous exercise, try to understand what's happening. Consider this question: *how many photos including Aladdin did each classifier miss?*
Modify the `hitsvsgain()` function to output this result (a possible starting point is sketched after this exercise).
- Do the comparison using the following utilities: `chooseAtrueA=+1, chooseAtrueB=-1, chooseBtrueB=1, chooseBtrueA=-1`. What's the result? What does this tell you about the relationship between the machine-learning classifier and the optimal predictor machine?
:::
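For the question about missed photos, here is one purely illustrative way to count them (the function `countmissed` is invented and not part of `code/mlc_vs_opm.R`), assuming we have a vector of guesses and a vector of the corresponding successful guesses:

```
## Illustrative sketch: count photos that do contain Aladdin ('A')
## but were classified as 'B' and therefore go missing
countmissed <- function(guesses, truths){
    sum(guesses == 'B' & truths == 'A')
}

## example: out of four photos with Aladdin, two are missed
countmissed(guesses=c('B','A','B','A'), truths=c('A','A','A','A'))
## gives 2
```

Once `hitsvsgain()` is modified to keep track of the guesses and successful guesses of each trial, a count like this can be added to its output.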