Develop merge for 2.2.1 release #345

Merged: 19 commits, Oct 1, 2024
30 changes: 30 additions & 0 deletions CHANGELOG.md
@@ -44,6 +44,36 @@ Code contributions to the hotfix:

### Requirements

## [2.2.1] - 2024-10-01 : https://github.com/BU-ISCIII/buisciii-tools/releases/tag/2.2.1

### Credits

Code contributions to the new version:

- [Daniel Valle](https://github.com/Daniel-VM)
- [Sarai Varona](https://github.com/svarona)
- [Victor Lopez](https://github.com/victor5lm)
- [Sergio Olmos](https://github.com/OPSergio)

### Template fixes and updates

- Fixed the path to the BLAST database and updated the Emmtyper params [#339](https://github.com/BU-ISCIII/buisciii-tools/pull/339)
- Updated the sarek version (v3.4.4) in the ExomeEB, ExomeTrio and WGSTrio templates [#341](https://github.com/BU-ISCIII/buisciii-tools/pull/341)
- Fixed IRMA's config for the amended consensus [#325](https://github.com/BU-ISCIII/buisciii-tools/pull/325).
- Improved excel_generator.py and the bioinfo_doc.py email creation function, and updated sftp_user.json, setup.py, main.py and some lablogs [#344](https://github.com/BU-ISCIII/buisciii-tools/pull/344).

### Modules

#### Added enhancements

#### Fixes

#### Changed

#### Removed

### Requirements

## [2.2.0] - 2024-09-12 : https://github.com/BU-ISCIII/buisciii-tools/releases/tag/2.2.0

### Credits
2 changes: 1 addition & 1 deletion bu_isciii/__main__.py
@@ -57,7 +57,7 @@ def run_bu_isciii():
)

# stderr.print("[green] `._,._,'\n", highlight=False)
__version__ = "2.2.0"
__version__ = "2.2.1"
stderr.print(
"[grey39] BU-ISCIII-tools version {}".format(__version__), highlight=False
)
39 changes: 36 additions & 3 deletions bu_isciii/bioinfo_doc.py
100755 → 100644
@@ -646,10 +646,43 @@ def email_creation(self):
email_data["email_notes"] = self.delivery_notes.replace(
"\n", "<br />"
)
else:
email_data["email_notes"] = bu_isciii.utils.ask_for_some_text(
msg="Write email notes"
).replace("\n", "<br />")
else:
email_data["email_notes"] = bu_isciii.utils.ask_for_some_text(
msg="Write email notes"
).replace("\n", "<br />")
if bu_isciii.utils.prompt_yn_question(
msg="Do you wish to provide a text file for email notes?",
dflt=False,
):
for i in range(3, -1, -1):
email_data["email_notes"] = bu_isciii.utils.prompt_path(
msg="Write the path to the file with RAW text as email notes"
)
if not os.path.isfile(
os.path.expanduser(email_data["email_notes"])
):
stderr.print(
f"Provided file doesn't exist. Attempts left: {i}"
)
else:
stderr.print(f"File selected: {email_data['email_notes']}")
break
else:
stderr.print(
"No more attempts. Email notes will be given by prompt"
)
email_data["email_notes"] = None
else:
email_data["email_notes"] = None

if email_data["email_notes"]:
with open(os.path.expanduser(email_data["email_notes"])) as f:
email_data["email_notes"] = f.read().replace("\n", "<br />")
else:
email_data["email_notes"] = bu_isciii.utils.ask_for_some_text(
msg="Write email notes"
).replace("\n", "<br />")

email_data["user_data"] = self.resolution_info["service_user_id"]
email_data["service_id"] = self.service_name.split("_", 5)[0]
@@ -9,31 +9,6 @@ cat ../samples_id.txt | while read in; do echo "srun --partition short_idx --cpu

echo 'bash create_irma_stats.sh' > _02_create_stats.sh

echo "ls */*HA*.fasta | cut -d '/' -f2 | cut -d '.' -f1 | sort -u | cut -d '_' -f3 | sed '/^\$/d' | sed 's/^/A_/g' > HA_types.txt" > _03_post_processing.sh
echo 'bash postprocessing.sh' > _03_post_processing.sh

echo 'cat HA_types.txt | while read type; do if test -d ${type}; then rm -rf ${type}; fi; done; if test -d B ; then rm -rf B; fi; if test -d C; then rm -rf C; fi' >> _03_post_processing.sh

echo 'if test -f all_samples_completo.txt; then rm all_samples_completo.txt; fi' >> _03_post_processing.sh

echo "cat HA_types.txt | while read in; do mkdir \${in}; done" >> _03_post_processing.sh

echo "if grep -qw 'B__' irma_stats.txt; then mkdir B; fi" >> _03_post_processing.sh

echo "if grep -qw 'C__' irma_stats.txt; then mkdir C; fi" >> _03_post_processing.sh

echo "ls */*.fasta | cut -d '/' -f2 | cut -d '.' -f1 | cut -d '_' -f1,2 | sort -u | grep 'A_' > A_fragment_list.txt" >> _03_post_processing.sh

echo "ls */*.fasta | cut -d '/' -f2 | cut -d '.' -f1 | cut -d '_' -f1,2 | sort -u | grep 'B_' > B_fragment_list.txt" >> _03_post_processing.sh

echo "ls */*.fasta | cut -d '/' -f2 | cut -d '.' -f1 | cut -d '_' -f1,2 | sort -u | grep 'C_' > C_fragment_list.txt" >> _03_post_processing.sh

echo 'cat HA_types.txt | while read type; do grep ${type} irma_stats.txt | cut -f1 | while read sample; do cat A_fragment_list.txt | while read fragment; do if test -f ${sample}/${fragment}*.fasta; then cat ${sample}/${fragment}*.fasta | sed "s/^>/\>${sample}_/g" | sed 's/_H1//g' | sed 's/_H3//g' | sed 's/_N1//g' | sed 's/_N2//g' | sed s@-@/@g | sed s/_A_/_/g ; fi >> ${type}/${fragment}.txt; done; done; done' >> _03_post_processing.sh

echo 'grep -w 'B__' irma_stats.txt | cut -f1 | while read sample; do cat B_fragment_list.txt | while read fragment; do if test -f ${sample}/${fragment}*.fasta; then cat ${sample}/${fragment}*.fasta | sed "s/^>/\>${sample}_/g" | sed s/_H1//g | sed s/_H3//g | sed s/_N1//g | sed s/_N2//g | sed s@-@/@g | sed s/_B_/_/g ; fi >> B/${fragment}.txt; done; done' >> _03_post_processing.sh

echo 'grep -w 'C__' irma_stats.txt | cut -f1 | while read sample; do cat C_fragment_list.txt | while read fragment; do if test -f ${sample}/${fragment}*.fasta; then cat ${sample}/${fragment}*.fasta | sed "s/^>/\>${sample}_/g" | sed s/_H1//g | sed s/_H3//g | sed s/_N1//g | sed s/_N2//g | sed s@-@/@g | sed s/_C_/_/g ; fi >> C/${fragment}.txt; done; done' >> _03_post_processing.sh

echo 'cat ../samples_id.txt | while read in; do cat ${in}/*.fasta | sed "s/^>/\>${in}_/g" | sed 's/_H1//g' | sed 's/_H3//g' | sed 's/_N1//g' | sed 's/_N2//g' | sed 's@-@/@g' | sed 's/_A_/_/g' | sed 's/_B_/_/g' | sed 's/_C_/_/g' >> all_samples_completo.txt; done' >> _03_post_processing.sh

echo 'sed "s/__//g" irma_stats.txt > clean_irma_stats.txt' >> _03_post_processing.sh
echo 'sed "s/_\t/\t/g" irma_stats.txt > clean_irma_stats.txt' >> _03_post_processing.sh
echo 'sed "s/__//g" irma_stats.txt | sed "s/_\t/\t/g" > clean_irma_stats.txt' >> _03_post_processing.sh
@@ -0,0 +1,57 @@
#CLEAN
if test -f all_samples_completo.txt; then rm all_samples_completo.txt; fi
if test -d A_*; then rm -rf A_*; fi
if test -d B; then rm -rf B; fi
if test -d C; then rm -rf C; fi
if test -d D; then rm -rf D; fi

cat ../samples_id.txt | while read sample; do
FLUSUBTYPE=$(ls ${sample}/*H*.fasta | cut -d '/' -f2 | cut -d '.' -f1 | cut -d '_' -f1,3 | sort -u)
FLUTYPE=$(ls ${sample}/*H*.fasta | cut -d '/' -f2 | cut -d '.' -f1 | cut -d '_' -f1 | sort -u)
mkdir -p $FLUSUBTYPE
ls ${sample}/amended_consensus/*.fa | cut -d '_' -f3 | cut -d '.' -f1 | while read fragment; do
if [ $fragment == 1 ]; then
if [ $FLUTYPE == "B" ]; then
cat ${sample}/amended_consensus/*_${fragment}.fa | sed 's/-/\//g' | sed 's/_1/_PB1/' | tee -a ${FLUSUBTYPE}/B_PB1.txt all_samples_completo.txt > /dev/null
else
cat ${sample}/amended_consensus/*_${fragment}.fa | sed 's/-/\//g' | sed 's/_1/_PB2/' | tee -a ${FLUSUBTYPE}/${FLUTYPE}_PB2.txt all_samples_completo.txt > /dev/null
fi
elif [ $fragment == 2 ]; then
if [ $FLUTYPE == "B" ]; then
cat ${sample}/amended_consensus/*_${fragment}.fa | sed 's/-/\//g' | sed 's/_2/_PB2/' | tee -a ${FLUSUBTYPE}/B_PB2.txt all_samples_completo.txt > /dev/null
else
cat ${sample}/amended_consensus/*_${fragment}.fa | sed 's/-/\//g' | sed 's/_2/_PB1/' | tee -a ${FLUSUBTYPE}/${FLUTYPE}_PB1.txt all_samples_completo.txt > /dev/null
fi
elif [ $fragment == 3 ]; then
if [ $FLUTYPE == "B" ] || [ $FLUTYPE == "A" ]; then
cat ${sample}/amended_consensus/*_${fragment}.fa | sed 's/-/\//g' | sed 's/_3/_PA/' | tee -a ${FLUSUBTYPE}/${FLUTYPE}_PA.txt all_samples_completo.txt > /dev/null
else
cat ${sample}/amended_consensus/*_${fragment}.fa | sed 's/-/\//g' | sed 's/_3/_P3/' | tee -a ${FLUSUBTYPE}/${FLUTYPE}_P3.txt all_samples_completo.txt > /dev/null
fi
elif [ $fragment == 4 ]; then
if [ $FLUTYPE == "B" ] || [ $FLUTYPE == "A" ]; then
cat ${sample}/amended_consensus/*_${fragment}.fa | sed 's/-/\//g' | sed 's/_4/_HA/' | tee -a ${FLUSUBTYPE}/${FLUTYPE}_HA.txt all_samples_completo.txt > /dev/null
else
cat ${sample}/amended_consensus/*_${fragment}.fa | sed 's/-/\//g' | sed 's/_4/_HE/' | tee -a ${FLUSUBTYPE}/${FLUTYPE}_HE.txt all_samples_completo.txt > /dev/null
fi
elif [ $fragment == 5 ]; then
cat ${sample}/amended_consensus/*_${fragment}.fa | sed 's/-/\//g' | sed 's/_5/_NP/' | tee -a ${FLUSUBTYPE}/${FLUTYPE}_NP.txt all_samples_completo.txt > /dev/null
elif [ $fragment == 6 ]; then
if [ $FLUTYPE == "B" ] || [ $FLUTYPE == "A" ]; then
cat ${sample}/amended_consensus/*_${fragment}.fa | sed 's/-/\//g' | sed 's/_6/_NA/' | tee -a ${FLUSUBTYPE}/${FLUTYPE}_NA.txt all_samples_completo.txt > /dev/null
else
cat ${sample}/amended_consensus/*_${fragment}.fa | sed 's/-/\//g' | sed 's/_6/_MP/' | tee -a ${FLUSUBTYPE}/${FLUTYPE}_MP.txt all_samples_completo.txt > /dev/null
fi
elif [ $fragment == 7 ]; then
if [ $FLUTYPE == "B" ] || [ $FLUTYPE == "A" ]; then
cat ${sample}/amended_consensus/*_${fragment}.fa | sed 's/-/\//g' | sed 's/_7/_MP/' | tee -a ${FLUSUBTYPE}/${FLUTYPE}_MP.txt all_samples_completo.txt > /dev/null
else
cat ${sample}/amended_consensus/*_${fragment}.fa | sed 's/-/\//g' | sed 's/_7/_NS/' | tee -a ${FLUSUBTYPE}/${FLUTYPE}_NS.txt all_samples_completo.txt > /dev/null
fi
elif [ $fragment == 8 ]; then
cat ${sample}/amended_consensus/*_${fragment}.fa | sed 's/-/\//g' | sed 's/_8/_NS/' | tee -a ${FLUSUBTYPE}/${FLUTYPE}_NS.txt all_samples_completo.txt > /dev/null
else
echo "The sample $sample has a segment with number $fragment, but I don't know which segment it is."
fi
done
done
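The new postprocessing.sh assigns each numbered amended-consensus fragment to an influenza segment name: fragments 1 and 2 swap PB1/PB2 for type B, fragments 3, 4, 6 and 7 get different names for type C, and fragments 5 and 8 are always NP and NS. A condensed sketch of that mapping as a standalone helper (the function name segment_name is illustrative; the merged script keeps the if/elif chain above):

```bash
#!/usr/bin/env bash
# Illustrative helper only, not part of the merged template.
# Prints the segment label for a given flu type (A, B or C) and IRMA fragment number.
segment_name() {
    local flutype=$1 fragment=$2
    case "$fragment" in
        1) [ "$flutype" = "B" ] && echo PB1 || echo PB2 ;;
        2) [ "$flutype" = "B" ] && echo PB2 || echo PB1 ;;
        3) { [ "$flutype" = "A" ] || [ "$flutype" = "B" ]; } && echo PA || echo P3 ;;
        4) { [ "$flutype" = "A" ] || [ "$flutype" = "B" ]; } && echo HA || echo HE ;;
        5) echo NP ;;
        6) { [ "$flutype" = "A" ] || [ "$flutype" = "B" ]; } && echo NA || echo MP ;;
        7) { [ "$flutype" = "A" ] || [ "$flutype" = "B" ]; } && echo MP || echo NS ;;
        8) echo NS ;;
        *) echo "unknown segment for fragment $fragment" >&2; return 1 ;;
    esac
}

# Example: fragment 4 of a type C sample is the HE segment.
segment_name C 4
```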
4 changes: 4 additions & 0 deletions bu_isciii/templates/IRMA/DOC/irma_config.sh
@@ -5,3 +5,7 @@ MATCH_PROC=8 # grid maximum processes for the MATCH
SORT_PROC=8 # currently not used
ALIGN_PROC=8 # grid maximum processes for the rough align
ASSEM_PROC=8 # grid maximum processes for assembly

### AMENDED CONSENSUS ###
MIN_AMBIG=0.75 # Minimum frequency for calling an ambiguous base; 0.75 effectively turns ambiguity calls off
MIN_CONS_SUPPORT=9 # Mask amended-consensus positions with coverage <= 9 (a depth of 10 is enough)
@@ -53,7 +53,7 @@ cat <<EOF > _01_emmtyper.sbatch
# create results folder
mkdir -p 01-typing
mkdir -p 01-typing/tmps
blastdb_path=/data/bi/references/cdc_emm_blastdb
blastdb_path=/data/bi/references/cdc_emm_blastdb/20240509

# Run emmtyper
singularity exec \\
@@ -63,9 +63,9 @@ singularity exec \\
/data/bi/pipelines/singularity-images/singularity-emmtyper.0.2.0--py_0 emmtyper \\
-w blast \\
--keep \\
--blast_db "\${blastdb_path}/cdc_emm_database29042024" \\
--percent-identity 95 \\
--culling-limit 5 \\
--blast_db "\${blastdb_path}/cdc_emm_database" \\
--percent-identity 100 \\
--culling-limit 5 \\
--output 01-typing/results_emmtyper.out \\
--output-format verbose \\
./fasta_inputs/*.fasta
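Since the Emmtyper lablog now pins the BLAST database to a dated release directory and a renamed database prefix, a quick pre-flight check before submitting _01_emmtyper.sbatch can catch a missing or misnamed reference. This is a hypothetical check, not part of the template; the cdc_emm_database.* file prefix is assumed from the --blast_db value above:

```bash
#!/usr/bin/env bash
# Hypothetical sanity check for the pinned BLAST database release.
blastdb_path=/data/bi/references/cdc_emm_blastdb/20240509

if [ -d "${blastdb_path}" ]; then
    # List the database files that emmtyper's --blast_db prefix should resolve to.
    ls "${blastdb_path}"/cdc_emm_database.* 2>/dev/null \
        || echo "No cdc_emm_database.* files found in ${blastdb_path}" >&2
else
    echo "Missing BLAST database release: ${blastdb_path}" >&2
fi
```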
@@ -22,7 +22,7 @@ echo "srun --partition short_idx --time 2:00:00 --chdir ${scratch_dir} --output

## 4-5. Lablog for annotating whole genome samples using Variant Effect Predictor (VEP).

echo "srun --partition short_idx --mem 100G --time 4:00:00 --chdir ${scratch_dir} --output logs/VEP.log --job-name VEP singularity exec -B ${scratch_dir}/../../../ /data/bi/pipelines/singularity-images/ensembl-vep:103.1--pl5262h4a94de4_2 vep --fasta /data/bi/references/eukaria/homo_sapiens/hg19/1000genomes_b37/genome/human_g1k_v37.fasta -i ${scratch_dir}/vep/variants_fil_mod.vcf -o ${scratch_dir}/vep/vep_annot.vcf --cache --offline --dir_cache /data/bi/references/eukaria/homo_sapiens/cache_vep/ --everything --dir_plugins /data/bi/references/eukaria/homo_sapiens/cache_vep/Plugins/ --assembly GRCh37 --tab --plugin dbNSFP,/data/bi/references/eukaria/homo_sapiens/cache_vep/custom_databases/dbNSFP/GRCh37/dbNSFP4.1a_grch37.gz,clinvar_id,clinvar_trait,clinvar_OMIM_id,clinvar_Orphanet_id,HGVSc_snpEff,HGVSp_snpEff,SIFT_score,SIFT_pred,Polyphen2_HDIV_score,Polyphen2_HDIV_pred,Polyphen2_HVAR_score,Polyphen2_HVAR_pred,MutationTaster_score,MutationTaster_pred,MutationAssessor_score,MutationAssessor_pred,FATHMM_score,FATHMM_pred,PROVEAN_score,PROVEAN_pred,VEST4_score,MetaSVM_score,MetaSVM_pred,MetaLR_score,MetaLR_pred,CADD_raw,CADD_phred,CADD_raw_hg19,CADD_phred_hg19,GERP++_NR,GERP++_RS,phyloP100way_vertebrate,phastCons100way_vertebrate &" > _02_vep_annotation.sh
echo "srun --partition short_idx --mem 100G --time 4:00:00 --chdir ${scratch_dir} --output logs/VEP.log --job-name VEP singularity exec -B /data/bi/references/eukaria/homo_sapiens/hg19/1000genomes_b37/genome/ -B /data/bi/references/eukaria/homo_sapiens/cache_vep/homo_sapiens -B /data/bi/references/eukaria/homo_sapiens/cache_vep/custom_databases/dbNSFP/GRCh37/ -B ${scratch_dir}/../../../ /data/bi/pipelines/singularity-images/ensembl-vep:103.1--pl5262h4a94de4_2 vep --fasta /data/bi/references/eukaria/homo_sapiens/hg19/1000genomes_b37/genome/human_g1k_v37.fasta -i ${scratch_dir}/vep/variants_fil_mod.vcf -o ${scratch_dir}/vep/vep_annot.vcf --cache --offline --dir_cache /data/bi/references/eukaria/homo_sapiens/cache_vep/ --everything --dir_plugins /data/bi/references/eukaria/homo_sapiens/cache_vep/Plugins/ --assembly GRCh37 --tab --plugin dbNSFP,/data/bi/references/eukaria/homo_sapiens/cache_vep/custom_databases/dbNSFP/GRCh37/dbNSFP4.1a_grch37.gz,clinvar_id,clinvar_trait,clinvar_OMIM_id,clinvar_Orphanet_id,HGVSc_snpEff,HGVSp_snpEff,SIFT_score,SIFT_pred,Polyphen2_HDIV_score,Polyphen2_HDIV_pred,Polyphen2_HVAR_score,Polyphen2_HVAR_pred,MutationTaster_score,MutationTaster_pred,MutationAssessor_score,MutationAssessor_pred,FATHMM_score,FATHMM_pred,PROVEAN_score,PROVEAN_pred,VEST4_score,MetaSVM_score,MetaSVM_pred,MetaLR_score,MetaLR_pred,CADD_raw,CADD_phred,CADD_raw_hg19,CADD_phred_hg19,GERP++_NR,GERP++_RS,phyloP100way_vertebrate,phastCons100way_vertebrate &" > _02_vep_annotation.sh

echo "grep -v '^##' ./vep/vep_annot.vcf > ./vep/vep_annot_head.txt" > _03_merge_data1.sh
echo "sed -i 's/#Uploaded_variation/ID/' ./vep/vep_annot_head.txt" >> _03_merge_data1.sh
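The change to this VEP lablog (and the analogous one further down) adds explicit -B bind mounts so the reference FASTA and the VEP cache are visible inside the container; previously only the service directory was mounted, so --fasta and --dir_cache could point at paths that do not exist in the container filesystem. A minimal sketch of the pattern, with the reference paths taken from the template and the input/output names purely illustrative:

```bash
#!/usr/bin/env bash
# Minimal sketch of the bind-mount pattern used in the updated srun command.
# input.vcf and vep_annot.vcf are placeholder names.
singularity exec \
    -B /data/bi/references/eukaria/homo_sapiens/hg19/1000genomes_b37/genome \
    -B /data/bi/references/eukaria/homo_sapiens/cache_vep \
    /data/bi/pipelines/singularity-images/ensembl-vep:103.1--pl5262h4a94de4_2 \
    vep --offline --cache \
        --fasta /data/bi/references/eukaria/homo_sapiens/hg19/1000genomes_b37/genome/human_g1k_v37.fasta \
        --dir_cache /data/bi/references/eukaria/homo_sapiens/cache_vep/ \
        --assembly GRCh37 --tab \
        -i input.vcf -o vep_annot.vcf
```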
4 changes: 2 additions & 2 deletions bu_isciii/templates/exomeeb/ANALYSIS/ANALYSIS01_EXOME/lablog
@@ -1,4 +1,4 @@
# module load Nextflow/22.10.1 singularity
# module load Nextflow singularity

ln -s ../00-reads/ .
ln -s ../samples_id.txt .
@@ -28,7 +28,7 @@ cat <<EOF > sarek.sbatch

export NXF_OPTS="-Xms500M -Xmx4G"

nextflow run /data/bi/pipelines/nf-core-sarek/nf-core-sarek-3.4.2/workflow/main.nf \\
nextflow run /data/bi/pipelines/nf-core-sarek/nf-core-sarek_3.4.4/3_4_4/main.nf \\
-c ../../DOC/hpc_slurm_sarek.config \\
--input 'samplesheet.csv' \\
--outdir 01-sarek \\
@@ -28,7 +28,7 @@ echo "srun --partition short_idx --time 2:00:00 --chdir ${scratch_dir} --output
# 3. Lablog for annotating whole genome samples using Variant Effect Predictor (VEP).
# Run Vep without the plugin columns

echo "srun --partition short_idx --mem 100G --time 12:00:00 --chdir ${scratch_dir} --output logs/VEP.log --job-name VEP singularity exec -B ${scratch_dir}/../../../ /data/bi/pipelines/singularity-images/ensembl-vep:103.1--pl5262h4a94de4_2 vep --fasta /data/bi/references/eukaria/homo_sapiens/hg19/1000genomes_b37/genome/human_g1k_v37.fasta -i ${scratch_dir}/vep/variants_fil_mod.vcf -o ${scratch_dir}/vep/vep_annot.vcf --cache --offline --dir_cache /data/bi/references/eukaria/homo_sapiens/cache_vep/ --everything --assembly GRCh37 --tab &" > _02_vep_annotation.sh
echo "srun --partition short_idx --mem 100G --time 12:00:00 --chdir ${scratch_dir} --output logs/VEP.log --job-name VEP singularity exec -B /data/bi/references/eukaria/homo_sapiens/hg19/1000genomes_b37/genome -B /data/bi/references/eukaria/homo_sapiens/cache_vep -B ${scratch_dir}/../../../ /data/bi/pipelines/singularity-images/ensembl-vep:103.1--pl5262h4a94de4_2 vep --fasta /data/bi/references/eukaria/homo_sapiens/hg19/1000genomes_b37/genome/human_g1k_v37.fasta -i ${scratch_dir}/vep/variants_fil_mod.vcf -o ${scratch_dir}/vep/vep_annot.vcf --cache --offline --dir_cache /data/bi/references/eukaria/homo_sapiens/cache_vep/ --everything --assembly GRCh37 --tab &" > _02_vep_annotation.sh

#--------------------------------------------------------------------------------------------------------------------

@@ -42,7 +42,7 @@ cat <<EOF > sarek.sbatch

export NXF_OPTS="-Xms500M -Xmx4G"

nextflow run /data/bi/pipelines/nf-core-sarek/nf-core-sarek-3.4.2/workflow/main.nf \\
nextflow run /data/bi/pipelines/nf-core-sarek/nf-core-sarek_3.4.4/3_4_4/main.nf \\
-c ../../DOC/hpc_slurm_sarek.config \\
--input 'samplesheet.csv' \\
--outdir 01-sarek \\
@@ -13,9 +13,9 @@ cat ../samples_id.txt | xargs -I @@ echo -e "srun --job-name MTBSEQ.@@ --output

# classification
echo "mkdir classification_all" > _03_gather_results.sh
echo "FIRST_SAMPLE=$(head -n1 ../samples_id.txt); head -n 1 ${FIRST_SAMPLE}/Classification/Strain_Classification.tab > classification_all/strain_classification_all.tab; grep \"^'$analysis_year\" */Classification/Strain_Classification.tab | cut -d \":\" -f 2 >> classification_all/strain_classification_all.tab" >> _03_gather_results.sh
echo 'FIRST_SAMPLE=$(head -n1 samples_id.txt); head -n 1 ${FIRST_SAMPLE}/Classification/Strain_Classification.tab > classification_all/strain_classification_all.tab; grep "^'\'''"$analysis_year"'" */Classification/Strain_Classification.tab | cut -d ":" -f 2 >> classification_all/strain_classification_all.tab' >> _03_gather_results.sh
# resistances
echo "mkdir resistances_all" >> _03_gather_results.sh
cat ../samples_id.txt | xargs -I % echo "cp %/Amend/NONE_joint_cf4_cr4_fr75_ph4_samples1_amended.tab resistances_all/%_var_res.tab" >> _03_gather_results.sh
# stats
echo "mkdir stats_all; FIRST_SAMPLE=$(head -n1 ../samples_id.txt); head -n 1 ${FIRST_SAMPLE}/Statistics/Mapping_and_Variant_Statistics.tab > stats_all/statistics_all.tab; grep \"^'$analysis_year\" */Statistics/Mapping_and_Variant_Statistics.tab | cut -d \":\" -f 2 >> stats_all/statistics_all.tab" >> _03_gather_results.sh
echo 'mkdir stats_all; FIRST_SAMPLE=$(head -n1 ../samples_id.txt); head -n 1 ${FIRST_SAMPLE}/Statistics/Mapping_and_Variant_Statistics.tab > stats_all/statistics_all.tab; grep "^'\'''"$analysis_year"'" */Statistics/Mapping_and_Variant_Statistics.tab | cut -d ":" -f 2 >> stats_all/statistics_all.tab' >> _03_gather_results.sh
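The rewritten gather-results lines switch the outer quoting from double to single quotes: with double quotes, $(head -n1 ../samples_id.txt) and ${FIRST_SAMPLE} were expanded while the lablog generated _03_gather_results.sh, baking a stale or empty value into the script; with single quotes, only $analysis_year is injected at generation time and everything else is evaluated when the generated script actually runs. A reduced illustration of the difference (file names and contents are placeholders, and the template's extra literal apostrophe before the year is omitted for brevity):

```bash
#!/usr/bin/env bash
# Illustration of generation-time vs run-time expansion, not the template itself.
analysis_year=2024

# Double quotes: ${FIRST_SAMPLE} is expanded NOW (typically empty at this point),
# so the generated script hard-codes a broken path.
echo "head -n 1 ${FIRST_SAMPLE}/Classification/Strain_Classification.tab" > broken.sh

# Single quotes with the year spliced in: ${FIRST_SAMPLE} is written out literally
# and resolved when the generated script runs; only 2024 is baked in.
echo 'FIRST_SAMPLE=$(head -n1 samples_id.txt); head -n 1 ${FIRST_SAMPLE}/Classification/Strain_Classification.tab; grep "^'"$analysis_year"'" */Classification/*.tab' > fixed.sh

cat broken.sh fixed.sh
```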
2 changes: 1 addition & 1 deletion bu_isciii/templates/sftp_user.json
@@ -44,7 +44,7 @@
"sara.perez": ["GeneticDiagnosis"],
"sbarturen": ["Labvirushep"],
"sergio.sanchez": ["LabFWDB_ssanchez"],
"sherrera": ["LabFWBD", "LabFWBD_ext"],
"sherrera": ["LabFWBD", "LabFWBD_ext", "Labtuberculosis"],
"sresino": ["Labvirushep"],
"svaldezate": ["Labtaxonomia"],
"svazquez": ["Labvirusres"],