Remove albert-base-v2 since it fails torch_mlir.compile() (#644)
monorimet authored Dec 15, 2022
1 parent e7e7635 commit a14c53a
1 changed file: tank/torch_model_list.csv (0 additions, 1 deletion)
@@ -1,6 +1,5 @@
 model_name, use_tracing, model_type, dynamic, param_count, tags, notes
 microsoft/MiniLM-L12-H384-uncased,True,hf,True,66M,"nlp;bert-variant;transformer-encoder","Large version has 12 layers; 384 hidden size; Smaller than BERTbase (66M params vs 109M params)"
-albert-base-v2,True,hf,True,11M,"nlp;bert-variant;transformer-encoder","12 layers; 128 embedding dim; 768 hidden dim; 12 attention heads; Smaller than BERTbase (11M params vs 109M params); Uses weight sharing to reduce # params but computational cost is similar to BERT."
 bert-base-uncased,True,hf,True,109M,"nlp;bert-variant;transformer-encoder","12 layers; 768 hidden; 12 attention heads"
 bert-base-cased,True,hf,True,109M,"nlp;bert-variant;transformer-encoder","12 layers; 768 hidden; 12 attention heads"
 google/mobilebert-uncased,True,hf,True,25M,"nlp,bert-variant,transformer-encoder,mobile","24 layers, 512 hidden size, 128 embedding"
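For context, each row of tank/torch_model_list.csv follows the header shown in the diff above. A minimal sketch of parsing that schema with Python's stdlib csv module (the header and sample row are copied from the diff; the helper name `load_model_list` is hypothetical, not part of the repository):

```python
import csv
import io

# Header and one surviving row, as shown in the diff above.
CSV_TEXT = """model_name, use_tracing, model_type, dynamic, param_count, tags, notes
bert-base-uncased,True,hf,True,109M,"nlp;bert-variant;transformer-encoder","12 layers; 768 hidden; 12 attention heads"
"""

def load_model_list(text):
    """Parse the model-list CSV into dicts, converting booleans and splitting tags."""
    # skipinitialspace handles the spaces after commas in the header row.
    rows = []
    for row in csv.DictReader(io.StringIO(text), skipinitialspace=True):
        row["use_tracing"] = row["use_tracing"] == "True"
        row["dynamic"] = row["dynamic"] == "True"
        row["tags"] = row["tags"].split(";")
        rows.append(row)
    return rows

models = load_model_list(CSV_TEXT)
print(models[0]["model_name"])  # bert-base-uncased
print(models[0]["tags"])
```

Quoted fields let the tags and notes columns carry semicolon-separated values without breaking the comma-delimited layout.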
