Bias-factorized ChromBPNet training still running after 6 days #228
Manonbaudic created an issue (kundajelab/chrombpnet#228):

Hello,

I am running ChromBPNet on ATAC-seq data from ENCODE (ENCFF884PIS). I successfully generated a custom bias model (bias.h5), which took slightly over 24 hours to complete. However, when running bias-factorized ChromBPNet training with the pre-trained bias model, I've noticed the process is taking an exceptionally long time.

After 6 days it is still running, and the log file indicates that I am currently at epoch 11 out of 50. Do you know why this might be taking so long, or whether there are any optimizations I can apply to speed up the process?

"Epoch 11/50
364/1595 [=====>........................] - ETA: 10:43:08 - loss: 348.1603 - logits_profile_predictions_loss: 341.5599 - logcount_predictions_loss: 0.4963"

Here is the job I submitted:

"#!/bin/bash -l
cd /path/chrombpnet/
source /path/mambaforge/bin/activate chrombpnet
chrombpnet pipeline -ibam /path/ENCFF884PIS.bam -d "ATAC" -g /path/chrombpnet/genome.fa -c /path/genome_files/chrom.sizes/mm10.chrom.sizes -p /path/chrombpnet/ENCFF767MGH.bed -n /path/chrombpnet/output_nonpeaks/output_negatives.bed -fl /path/chrombpnet/folds/fold_0.json -b /path/chrombpnet/output_bias-model-training/models/bias.h5 -o /path/chrombpnet/output_ChromBPNet_training/"

Thank you very much for your help,
Manon
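Since the submitted script does not show the job requesting a GPU from the scheduler, one possible cause of the slowdown is that no GPU was ever allocated to the job. Below is a minimal sketch of a pre-flight check that could be added before the chrombpnet call; it assumes a SLURM-style scheduler and that nvidia-smi is installed on the compute nodes, neither of which is stated in the original post:

"#!/bin/bash -l
#SBATCH --gres=gpu:1   # assumption: SLURM; request one GPU (directive and syntax vary by cluster)

# Abort early if no GPU is visible on the node this job landed on
nvidia-smi || { echo "No GPU visible to this job" >&2; exit 1; }

source /path/mambaforge/bin/activate chrombpnet
# ...then run the chrombpnet pipeline command exactly as in the script above"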
Comments

It is very likely you aren't using a GPU. Something may be off with your configuration. It should definitely not take 24 hours to train a bias model.
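If the node does have a GPU but training still runs on the CPU, a common culprit is the TensorFlow build inside the conda environment (for example, a CPU-only tensorflow package or CUDA libraries that fail to load). A quick check along these lines, run inside the chrombpnet environment, should make this obvious; the heredoc is just one convenient way to run it:

"source /path/mambaforge/bin/activate chrombpnet
python - <<'EOF'
import tensorflow as tf
print("TensorFlow version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())         # False -> CPU-only build
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))  # [] -> training falls back to CPU
EOF"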
@Manonbaudic Yes, I don't think the job is using the GPU.
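One way to confirm this while the job is running is to watch GPU utilization on the compute node and to check the training log for TensorFlow's device-registration messages. This is a generic sketch rather than part of the ChromBPNet tooling, and the log path is only a guess based on the output directory used above:

"# Report GPU utilization and memory every 10 seconds; near-zero utilization during training means the GPU is idle
nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 10

# TensorFlow normally logs a "Created device ... GPU:0" line when it registers a GPU;
# if no such line appears anywhere in the job output, the model is being trained on the CPU
grep -i "created device" /path/chrombpnet/output_ChromBPNet_training/*.log"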