\addvspace {10\p@ }
\contentsline {table}{\numberline {1.1}{\ignorespaces Summary of notation}}{3}{table.caption.5}
\addvspace {10\p@ }
\addvspace {10\p@ }
\addvspace {10\p@ }
\contentsline {table}{\numberline {4.1}{\ignorespaces SVM vs. OCSVM (hard-margin separation)\relax }}{45}{table.caption.8}
\contentsline {table}{\numberline {4.2}{\ignorespaces SVM vs. OCSVM ($\nu $-soft margin separation)\relax }}{46}{table.caption.9}
\addvspace {10\p@ }
\addvspace {10\p@ }
\addvspace {10\p@ }
\contentsline {table}{\numberline {7.1}{\ignorespaces Support recovery on simulated data\relax }}{113}{table.caption.23}
\contentsline {table}{\numberline {7.2}{\ignorespaces Total number of sub-cones for the wave data\relax }}{114}{table.caption.25}
\contentsline {table}{\numberline {7.3}{\ignorespaces Dataset characteristics\relax }}{115}{table.caption.26}
\contentsline {table}{\numberline {7.4}{\ignorespaces Results on extreme regions with standard parameters $(k,\epsilon ) = (n^{1/2}, 0.01)$\relax }}{115}{table.caption.27}
\contentsline {table}{\numberline {7.5}{\ignorespaces Results on extreme regions with lower $\epsilon =0.1$\relax }}{115}{table.caption.28}
\addvspace {10\p@ }
\contentsline {table}{\numberline {8.1}{\ignorespaces Original dataset characteristics\relax }}{135}{table.caption.36}
\contentsline {table}{\numberline {8.2}{\ignorespaces Results for the novelty detection setting. ROC, PR, EM, and MV often agree on which algorithm is the best (in bold) and which is the worst (underlined) on a given dataset. When they disagree, it is often because ROC and PR themselves disagree, meaning that the ranking is not clear-cut.\relax }}{136}{table.caption.37}
\contentsline {table}{\numberline {8.3}{\ignorespaces Results for the unsupervised setting remain good: ROC, PR, EM, and MV often agree on which algorithm is the best (in bold) and which is the worst (underlined) on a given dataset. When they disagree, it is often because ROC and PR themselves disagree, meaning that the ranking is not clear-cut.\relax }}{137}{table.caption.38}
\addvspace {10\p@ }
\contentsline {table}{\numberline {9.1}{\ignorespaces Original dataset characteristics\relax }}{158}{table.caption.60}
\contentsline {table}{\numberline {9.2}{\ignorespaces Results for the novelty detection setting (semi-supervised framework). The table reports AUC ROC and AUC PR scores (higher is better) for each algorithm. The training time of each algorithm has been limited to 30 minutes for each of the 10 experiments performed per dataset; `NA' indicates that the algorithm could not finish training within the allowed time limit. On average over all datasets, our proposed algorithm `OneClassRF' achieves both the best AUC ROC and the best AUC PR scores (with LSAD for AUC ROC). It also achieves the lowest cumulative training time.\relax }}{158}{table.caption.61}
\contentsline {table}{\numberline {9.3}{\ignorespaces Results for the unsupervised setting\relax }}{164}{table.caption.63}
\addvspace {10\p@ }
\addvspace {10\p@ }