# Results
Here we keep an up-to-date record of the current best results of our research in review deception detection, using different algorithms, parameter sets, and approaches. All of the work that produces these results can be found in the /notebooks directory in the LUCAS subdirectory of the project source code.
**Traditional machine learning classifiers**

| Classifier | Dataset | Accuracy |
|---|---|---|
| Log. Regression | OpSpam | ~0.87 |
| Naive Bayes | OpSpam | ~0.87 |
| Linear SVC | OpSpam | ~0.88 |
| k-NN | OpSpam | ~0.80 |
| Log. Regression | YelpData | ~0.72 |
| Naive Bayes | YelpData | ~0.67 |
| Linear SVC | YelpData | ~0.75 |
| k-NN | YelpData | ~0.59 |
| LDA | YelpData | ~0.68 |
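As a rough illustration of how a bag-of-words baseline of this kind can be trained and scored, the sketch below uses scikit-learn with logistic regression. It is only a minimal outline: `load_opspam` is a hypothetical loader standing in for our own data handling, and the actual feature extraction and parameter choices are in the /notebooks directory.

```python
# Minimal sketch of a bag-of-words baseline evaluated with scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# texts: list of review strings, labels: 0 = truthful, 1 = deceptive.
# load_opspam is a hypothetical loader, not part of scikit-learn.
texts, labels = load_opspam()

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels
)

# Bag-of-words features fed into a logistic regression classifier.
model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```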
**Deep learning classifiers**

| Classifier | Dataset | Accuracy |
|---|---|---|
| FFNN (BOW) | OpSpam | ~0.86 |
| FFNN (OpSpam W2V) | OpSpam | ~0.57 |
| FFNN (Pretrained W2V) | OpSpam | ~0.66 |
| FFNN (Pretrained W2V) | Yelp | ~0.704 |
| CNN (BOW) | OpSpam | ~0.82 |
| CNN (OpSpam W2V) | OpSpam | ~0.75 |
| CNN (Pretrained W2V) | OpSpam | ~0.78 |
| CNN (Pretrained W2V) | Yelp | ~0.71 |
| LSTM (BOW) | OpSpam | ~0.88 |
| LSTM (OpSpam W2V) | OpSpam | ~0.65 |
| LSTM (Pretrained W2V) | OpSpam | ~0.65 |
| LSTM (Pretrained W2V, w/ user features) | Yelp | ~0.71 |
| LSTM (Pretrained W2V) | Yelp | ~0.70 |
| BiLSTM (Pretrained W2V) | Yelp | 0.65 |
| FFNN (BOW) | Yelp(:20000) | ~0.64 |
| FFNN (Yelp W2V) | Yelp(:20000) | ~0.63 |
| BERT | YelpNYC(:10000) | 0.65 |
| BERT | YelpZIP | 0.67 |
| BERT | OpSpam | 0.90 |
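The neural models above consume either bag-of-words vectors or word2vec embeddings (trained on the dataset itself or pretrained). Below is a minimal sketch of an "LSTM (Pretrained W2V)" style classifier written in PyTorch; the framework choice, layer sizes, and the stand-in embedding matrix are assumptions for illustration, and the actual architectures and hyperparameters are in /notebooks.

```python
import torch
import torch.nn as nn

EMBED_DIM = 300  # assumed word2vec dimensionality


class LSTMClassifier(nn.Module):
    """Embedding -> LSTM -> sigmoid output, in the spirit of the LSTM (Pretrained W2V) rows."""

    def __init__(self, embedding_matrix):
        super().__init__()
        # Keep the pretrained word2vec vectors frozen; only the LSTM and
        # output layer are trained.
        self.embedding = nn.Embedding.from_pretrained(embedding_matrix, freeze=True)
        self.lstm = nn.LSTM(EMBED_DIM, 64, batch_first=True)
        self.out = nn.Linear(64, 1)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)        # (batch, seq_len, EMBED_DIM)
        _, (hidden, _) = self.lstm(embedded)        # final hidden state per sequence
        return torch.sigmoid(self.out(hidden[-1]))  # probability the review is deceptive


# embedding_matrix would normally hold pretrained word2vec vectors, one row
# per token index; a random matrix stands in for it here.
embedding_matrix = torch.randn(20000, EMBED_DIM)
model = LSTMClassifier(embedding_matrix)

# Padded integer sequences go in, deception probabilities come out:
dummy_batch = torch.randint(0, 20000, (8, 200))  # 8 reviews, 200 tokens each
print(model(dummy_batch).shape)                  # torch.Size([8, 1])
```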