Hello,
Thank you for providing the code for your paper. As per the instructions, I am running Version 2 of RNNLogic with emb. The training runs as expected, but it is very slow for both wn18rr and FB15K-237 on my GPU server.
Could you share the experimental setup you used for these experiments, i.e., the underlying hardware and the expected run times? With that information I could estimate the running times on my own setup.
Thanks!
Hello, I am facing the same problem when trying to re-implement RNNLogic using the code in the main branch. I found that using the multiprocessing package to train the model for each relation concurrently does not give any speedup, since a single process already uses almost 50% of my CPU (Intel Xeon Gold 5220). Did you face the same problem? Approximately how long did it take you to train on FB15k-237, or on much smaller datasets like umls/kinship? A minimal sketch of what I mean is below.
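For context, one common cause of this behavior is that each worker process spawns its own full OpenMP/MKL thread pool, so a handful of workers already saturate the CPU. The sketch below is not the repository's actual code; `train_relation` and the relation list are hypothetical stand-ins, and it only illustrates capping per-process threads before mapping relations over a process pool.

```python
# Minimal sketch (hypothetical, not RNNLogic's code): cap per-process CPU
# threads so concurrent per-relation workers do not oversubscribe the machine.
import multiprocessing as mp

import torch


def train_relation(relation_id: int) -> None:
    # Without this cap, each worker spawns its own OpenMP/MKL thread pool,
    # which is why a single process can already occupy ~50% of the CPU.
    torch.set_num_threads(1)
    # ... per-relation training (rule generator / predictor) would go here ...
    print(f"finished relation {relation_id}")


if __name__ == "__main__":
    relations = list(range(8))          # hypothetical relation ids
    with mp.Pool(processes=4) as pool:  # tune to the number of physical cores
        pool.map(train_relation, relations)
```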
Thanks for your interest, and very sorry for the late response. We have refactored the code; the new code is in the RNNLogic+ folder and is more readable and easier to run. You might be interested. Thanks!