# Results

Niall Walsh edited this page Apr 1, 2019 · 18 revisions

This page tracks the current best results of our research in review deception detection, across different algorithms, parameter sets, and approaches. All of the work that produces these results can be found in the /notebooks directory in the LUCAS subdirectory of the project source code.

## Statistical Models

| Classifier | Dataset | Accuracy |
| --- | --- | --- |
| Log. Regression | OpSpam | ~0.87 |
| Naive Bayes | OpSpam | ~0.87 |
| Linear SVC | OpSpam | ~0.88 |
| k-NN | OpSpam | ~0.8 |
| Log. Regression | YelpData | ~0.72 |
| Naive Bayes | YelpData | ~0.67 |
| Linear SVC | YelpData | ~0.75 |
| k-NN | YelpData | ~0.59 |
| LDA | YelpData | ~0.68 |
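As a rough illustration of the statistical setup, the sketch below builds a bag-of-words + Linear SVC pipeline with scikit-learn, the kind of classifier behind the Linear SVC rows above. The tiny corpus and its labels are invented for demonstration only; they are not drawn from OpSpam or YelpData, and the actual feature extraction and hyperparameters live in the /notebooks directory.

```python
# Minimal sketch (assumed setup, not the project's exact code):
# TF-IDF bag-of-words features feeding a Linear SVC.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Illustrative toy reviews: 1 = deceptive, 0 = truthful
reviews = [
    "This hotel was absolutely perfect in every possible way",
    "The room was clean and the staff were friendly",
    "Best stay of my entire life, everything was flawless",
    "Check-in took a while but the bed was comfortable",
]
labels = [1, 0, 1, 0]

# Vectorize the text and fit the linear classifier in one pipeline
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(reviews, labels)

# Predict on a new, unseen review
print(clf.predict(["Everything was perfect and flawless"]))
```

The same pipeline shape applies to the other statistical rows: swap `LinearSVC()` for `LogisticRegression()`, `MultinomialNB()`, or `KNeighborsClassifier()` to reproduce the corresponding classifiers.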

## Neural Models

| Classifier | Dataset | Accuracy |
| --- | --- | --- |
| FFNN (BOW) | OpSpam | ~0.86 |
| FFNN (OpSpam W2V) | OpSpam | ~0.57 |
| FFNN (Pretrained W2V) | OpSpam | ~0.66 |
| FFNN (Pretrained W2V) | Yelp | ~0.62 |
| CNN (BOW) | OpSpam | ~0.82 |
| CNN (OpSpam W2V) | OpSpam | ~0.75 |
| CNN (Pretrained W2V) | OpSpam | ~0.78 |
| CNN (Pretrained W2V) | Yelp | ~0.71 |
| LSTM (BOW) | OpSpam | ~0.88 |
| LSTM (OpSpam W2V) | OpSpam | ~0.65 |
| LSTM (Pretrained W2V) | OpSpam | ~0.65 |
| LSTM (Pretrained W2V) | Yelp | ~0.70 |
| BiLSTM (Pretrained W2V) | Yelp | 0.65 |
| FFNN (BOW) | YelpData(:20000) | ~0.64 |
| FFNN (Yelp W2V) | YelpData(:20000) | ~0.63 |
| BERT | YelpData(:10000) | 0.65 |
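To make the FFNN(BOW) rows concrete, here is a hedged sketch of a feed-forward network over bag-of-words counts. scikit-learn's `MLPClassifier` stands in for the project's actual network (whose architecture and training loop are in the notebooks), and the tiny corpus is invented for illustration, not drawn from OpSpam or Yelp.

```python
# Sketch (assumed setup): count-based bag-of-words features into a
# small feed-forward neural network.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

# Illustrative toy reviews: 1 = deceptive, 0 = truthful
reviews = [
    "This hotel was absolutely perfect in every possible way",
    "The room was clean and the staff were friendly",
    "Best stay of my entire life, everything was flawless",
    "Check-in took a while but the bed was comfortable",
]
labels = [1, 0, 1, 0]

vec = CountVectorizer()        # bag-of-words term counts
X = vec.fit_transform(reviews)

# One hidden layer; lbfgs converges quickly on tiny data
ffnn = MLPClassifier(hidden_layer_sizes=(16,), solver="lbfgs",
                     max_iter=500, random_state=0)
ffnn.fit(X, labels)

# Classify a new review through the same vectorizer
preds = ffnn.predict(vec.transform(["Everything was perfect and flawless"]))
print(preds)
```

The W2V variants in the table replace the count vectors with averaged or sequential word2vec embeddings, and the CNN/LSTM/BERT rows replace the feed-forward model with the corresponding architectures.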