
ResourceExhaustedError while running Document_representation_from_BERT #51

Open · MahajanTarun opened this issue Aug 17, 2021 · 2 comments


I have been trying to run Document_representation_from_BERT on a local machine with enough memory, i.e. 8 GB RAM.
All the other TF functions run without this error in the other notebooks on my local machine.

But when I load the Patent_BERT model, i.e.

```python
model = tf.compat.v2.saved_model.load(export_dir=MODEL_DIR, tags=['serve'])
model = model.signatures['serving_default']
```

It also gives a similar error at:

```python
docs_embeddings = []
for _, row in df.iterrows():
    inputs = get_bert_token_input(row['claims'])
    response = model(**inputs)
    avg_embeddings = pooling(
        tf.reshape(response['encoder_layer'], shape=[1, -1, 1024]))
    docs_embeddings.append(avg_embeddings.numpy()[0])
```

Please help me to get this working. I have already spent a lot of time trying to solve the issue, but to no avail.
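
For reference, here is the same loop written out line by line with one change that may reduce peak memory: truncating each claim before tokenization. The `MAX_CHARS` cap is only an illustration and not part of the original notebook; `get_bert_token_input`, `pooling`, `model`, and `df` are assumed to be defined as above.

```python
import tensorflow as tf

MAX_CHARS = 4000  # crude cap on claim length; hypothetical value, tune as needed

docs_embeddings = []
for _, row in df.iterrows():
    # Truncate the claim text before tokenization to limit sequence length.
    inputs = get_bert_token_input(row['claims'][:MAX_CHARS])
    response = model(**inputs)
    # 'encoder_layer' is the final BERT layer; average-pool over the token axis.
    avg_embeddings = pooling(
        tf.reshape(response['encoder_layer'], shape=[1, -1, 1024]))
    docs_embeddings.append(avg_embeddings.numpy()[0])
```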


twin9458 commented Mar 3, 2022

I have the same issue.
Specifically, when I run `response = model(**inputs)` there is an error:
NameError: name 'response' is not defined.

My code is

```python
model = tf.compat.v2.saved_model.load(export_dir=MODEL_DIR, tags=['serve'])
model = model.signatures['serving_default']
inputs = get_bert_token_input(example_sent)
response = model(**inputs)
```
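
One possible reading (an assumption on my side, not something confirmed here): if `model(**inputs)` itself fails, for example with a ResourceExhaustedError, then `response` is never assigned, and re-running a later cell that references `response` raises the NameError. Wrapping the call makes the underlying failure visible:

```python
# Minimal sketch: surface the real error from the model call instead of the
# follow-on NameError. Assumes model and inputs are defined as above.
try:
    response = model(**inputs)
except Exception as err:  # e.g. tf.errors.ResourceExhaustedError
    print('model call failed:', type(err).__name__, err)
    raise
```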

Is there any other solution?
I have to solve this, but I can't figure it out.
[Screenshot: error]


twin9458 commented Mar 8, 2022


Hi, I don't know if I can help, but if you have the same problem as me, try this.

The PatentBERT model uses average pooling, but I had misunderstood the averaging part.
In my case, I needed a sentence vector for each input sentence, so I changed the result into a list. You should refer to line 68.
I will also attach information about the data I entered.
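
For what it's worth, here is a minimal sketch of what I mean by collecting one average-pooled vector per input sentence. `sentences` is a hypothetical list of input strings, and `tf.reduce_mean` stands in for the notebook's pooling helper; only `get_bert_token_input` and `model` are taken from the notebook.

```python
import tensorflow as tf

sentence_vectors = []
for sent in sentences:  # hypothetical list of input sentences
    inputs = get_bert_token_input(sent)
    response = model(**inputs)
    # Average over the token axis of the final encoder layer (hidden size 1024).
    emb = tf.reshape(response['encoder_layer'], shape=[1, -1, 1024])
    sent_vec = tf.reduce_mean(emb, axis=1)  # shape [1, 1024]
    sentence_vectors.append(sent_vec.numpy()[0])
```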

I hope this helps 😥
Good day !
[Screenshot 1]

[Screenshot 2]
