# BERTLiterature
This repository contains an annotated bibliography and a literature review examining the robustness of BERT for text classification and how to improve it. This stuff is pretty cool to me and I am still slowly learning, so maybe these resources can help you out with your own NLP text classification tasks!

## Papers used

### BERT versus other methods
- **BERT vs. ML**: (["Comparing BERT against traditional machine learning text classification" (González-Carvajal et al.)](https://arxiv.org/abs/2005.13012)). A sketch of the kind of classical baseline these comparisons use appears after this list.

- **BERT vs. ML for small datasets**: (["Low-Shot Classification: A Comparison of Classical and Deep Transfer Machine Learning Approaches" (Usherwood et al.)](https://arxiv.org/abs/1907.07543)).

- **BERT for drug reviews**: (["Comparing deep learning architectures for sentiment analysis on drug reviews" (Colón-Ruiz et al.)](https://www.sciencedirect.com/science/article/pii/S1532046420301672?casa_token=y_yrQlPLUo4AAAAA:TU4SWv2AXialGiaYbkJbEC7oaUD76N63CM1Q4wNxV05iiC7_VUvoVHZbyqesEeNxWFDzkxTU)).

- **BERT for Alzheimer's Disease Detection**: (["To BERT or Not To BERT: Comparing Speech and Language-based Approaches for Alzheimer's Disease Detection" (Balagopalan et al.)](https://arxiv.org/abs/2008.01551)).

- **BERT in other cultures**: (["Antisocial Online Behavior Detection Using Deep Learning" (Zinovyeva et al.)](https://www.researchgate.net/publication/342764307_Antisocial_Online_Behavior_Detection_Using_Deep_Learning)).

- **BERT for radiological classification**: (["The Utility of General Domain Transfer Learning for Medical Language Tasks" (Ranti et al.)](https://arxiv.org/abs/2002.06670)).
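
The papers above pit fine-tuned BERT against "traditional" machine learning pipelines. For context, the classical side of such a comparison is usually TF-IDF features feeding a linear model. The scikit-learn sketch below is purely illustrative; the toy corpus and labels are made up and are not data from any of these papers.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up corpus, only to keep the sketch self-contained and runnable.
train_texts = [
    "great movie, loved every minute of it",
    "terrible plot and wooden acting",
    "the medication worked well with no side effects",
    "awful side effects, would not recommend this drug",
]
train_labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# TF-IDF unigrams/bigrams + logistic regression: the kind of classical
# baseline these comparison papers measure BERT against.
baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
baseline.fit(train_texts, train_labels)

print(baseline.predict(["loved the acting", "the side effects were awful"]))
```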

### Adversarial Papers
- **textfooler**: Rule-based adversarial attacks (["Is BERT Really Robust?" (Jin et al., 2019)](https://arxiv.org/abs/1907.11932)).

- **bae**: BERT masked language model turned against itself (["BAE: BERT-based Adversarial Examples for Text Classification" (Garg & Ramakrishnan, 2020)](https://arxiv.org/abs/2004.01970)).

- **bert-attack**: BERT masked language model transformation with a subword replacement strategy (["BERT-ATTACK: Adversarial Attack Against BERT Using BERT" (Li et al., 2020)](https://arxiv.org/abs/2004.09984)). A sketch of running these three attacks appears after this list.
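
The lowercase names above (`textfooler`, `bae`, `bert-attack`) match attack recipes implemented in the open-source TextAttack library. The sketch below shows one way to run them against a fine-tuned BERT classifier; using TextAttack here, along with the checkpoint and dataset names, is my assumption for illustration, not something this repo provides.

```python
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import BAEGarg2019, BERTAttackLi2020, TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

# A BERT sentiment classifier fine-tuned on IMDB (illustrative checkpoint).
model = transformers.AutoModelForSequenceClassification.from_pretrained("textattack/bert-base-uncased-imdb")
tokenizer = transformers.AutoTokenizer.from_pretrained("textattack/bert-base-uncased-imdb")
victim = HuggingFaceModelWrapper(model, tokenizer)

# Swap in BAEGarg2019 or BERTAttackLi2020 to try the other two recipes.
attack = TextFoolerJin2019.build(victim)

dataset = HuggingFaceDataset("imdb", split="test")
attacker = Attacker(attack, dataset, AttackArgs(num_examples=10))
attacker.attack_dataset()  # logs original vs. perturbed text and whether each attack succeeded
```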

### How to improve BERT
- **Looking under BERT's hood**: (["How to Fine-Tune BERT for Text Classification?" (Sun et al.)](https://arxiv.org/abs/1905.05583)). A minimal fine-tuning sketch appears after this list.

- **BERT for clinical data**: (["Publicly Available Clinical BERT Embeddings" (Alsentzer et al.)](https://arxiv.org/abs/1904.03323)).

- **BERT vs. ALBERT**: (["ALBERT: A Lite BERT for Self-supervised Learning of Language Representations" (Lan et al.)](https://arxiv.org/abs/1909.11942)).
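
Sun et al. above study fine-tuning choices (learning rate, which layers to use, further pre-training) for BERT classifiers. As a reference point, a minimal fine-tuning loop with the Hugging Face Transformers `Trainer` might look like the sketch below; the dataset, checkpoint, and hyperparameters are illustrative assumptions, not values taken from the paper.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# IMDB used purely as a stand-in binary text classification dataset.
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

encoded = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

args = TrainingArguments(
    output_dir="bert-imdb",
    learning_rate=2e-5,               # small learning rate, typical for BERT fine-tuning
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"].shuffle(seed=42).select(range(2000)),  # subsampled to keep the demo fast
    eval_dataset=encoded["test"].shuffle(seed=42).select(range(2000)),
)
trainer.train()
print(trainer.evaluate())
```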
