v1.0.0 release: tarball creation/installation, plugin symlink management, release management #1008

Merged
128 commits merged into madgraph5:master from the install branch on Oct 3, 2024

Conversation

Member

@valassi valassi commented Sep 26, 2024

Hi @oliviermattelaer, this is a WIP PR for tarball creation/installation and plugin symlink management.

The part that is done is the use of the MG5aMC_PLUGIN trick for the symlink management (nice trick, thanks!).

What I am still testing (also in the CI here) is the package creation.

NB: there is no renaming of plugin directories or repositories yet; this can come later.
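
As a rough illustration of the symlink arrangement described above, a minimal sketch (the plugin target path is a placeholder, not the actual repository layout): the cudacpp code-generation plugin is exposed through an MG5aMC_PLUGIN directory next to the mg5amcnlo checkout, instead of through a symlink inside mg5amcnlo/PLUGIN.

mkdir -p MG5aMC_PLUGIN                                           # sketch only
ln -sf <path-to-cudacpp-plugin> MG5aMC_PLUGIN/CUDACPP_OUTPUT     # placeholder target for illustration
ls -l MG5aMC_PLUGIN/CUDACPP_OUTPUT                               # replaces the symlink removed from mg5amcnlo/PLUGIN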

…new gpucpp including Trex 143 and cudacpp_install 142)
…ce the symlink that will be removed from mg5amcnlo/PLUGIN
…b8 (valassi_cudacpp_install removing the PLUGIN/CUDACPP_OUTPUT symlink)
….py to allow loading the plugin from MG5aMC_PLUGIN/CUDACPP_OUTPUT (rather than from PLUGIN/CUDACPP_OUTPUT)
…t interface) to allow the '-m' mode from directory MG5aMC_PLUGIN
…new MG5aMC_PLUGIN/CUDACPP_OUTPUT symlink

This requires "PYTHONPATH=.. ./bin/mg5_aMC -m CUDACPP_OUTPUT" instead of "./bin/mg5_aMC"
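
A minimal sketch of the new invocation (assuming it is run from inside the mg5amcnlo directory, with the MG5aMC_PLUGIN directory one level up, as described in the commit messages above; the exact layout may differ):

cd mg5amcnlo                                    # assumed checkout directory
PYTHONPATH=.. ./bin/mg5_aMC -m CUDACPP_OUTPUT   # '-m' loads the plugin found via ../MG5aMC_PLUGIN/CUDACPP_OUTPUT
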
…cudacpp

git submodule add --force -b cudacpp --name MG5aMC/HEPToolsInstallers https://github.com/mg5amcnlo/HEPToolsInstallers ../../MG5aMC/HEPToolsInstallers
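
As a usage note (a standard git submodule workflow, not specific to this PR; the URL and path are placeholders), the new submodule has to be initialized explicitly after a fresh clone:

git clone <repository-url> && cd <repository>   # placeholders for illustration
git submodule update --init                     # fetch the HEPToolsInstallers commit pinned on the 'cudacpp' branch
git submodule status                            # verify the recorded submodule commit(s)
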
@valassi valassi self-assigned this Sep 26, 2024
@valassi valassi requested a review from a team as a code owner September 26, 2024 09:58
… as a release asset rather than as a CI artifact
… sorted, so that the created tests are reproducible
…sults are sorted and reproducible

python create_acceptance_from_file.py
… to "PYTHONPATH=.. ./bin/mg5_aMC -m CUDACPP_OUTPUT"
…equence to "PYTHONPATH=.. ./bin/mg5_aMC -m CUDACPP_OUTPUT"

python create_acceptance_from_file.py
…sts path to ../MG5aMC_PLUGIN/CUDACPP_OUTPUT/
…ance_tests path to ../MG5aMC_PLUGIN/CUDACPP_OUTPUT/

python create_acceptance_from_file.py
…HONPATH

PS: the acceptance tests still fail in the CI; there is still a bug that needs to be fixed
…n -sf ../../MG5aMC_PLUGIN/CUDACPP_OUTPUT PLUGIN/CUDACPP_OUTPUT'

PS: the acceptance tests now succeed in the CI; however, this will be reverted, because adding back a symlink in PLUGIN/CUDACPP_OUTPUT should be avoided

(NB it is not enough to copy or symlink test_simd_madevent.py, because acceptance tests need the code generation plugin)
…e generation plugin from ../MG5aMC_PLUGIN/CUDACPP_OUTPUT

Revert "[install] modify and regenerate the acceptance tests after linking 'ln -sf ../../MG5aMC_PLUGIN/CUDACPP_OUTPUT PLUGIN/CUDACPP_OUTPUT'"
This reverts commit fd6a873.
…h PYTHONPATH and the ./bin/mg5_aMC command line arguments

PS: this finally succeeds in the CI while using a plugin in ../MG5aMC_PLUGIN; all bugs have been fixed, and some cleanup is still possible

(NB it is possible to test this interactively as "./tests/test_manager.py -p./cudacpp_acceptance_tests/ test_simd_cpp_eemumua_float")
valassi added 11 commits October 3, 2024 07:31
STARTED  AT Wed Oct  2 08:37:55 PM CEST 2024
./tput/teeThroughputX.sh -mix -hrd -makej -eemumu -ggtt -ggttg -ggttgg -gqttq -ggttggg -makeclean
ENDED(1) AT Wed Oct  2 10:40:49 PM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -flt -hrd -makej -eemumu -ggtt -ggttgg -inlonly -makeclean
ENDED(2) AT Wed Oct  2 10:57:24 PM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -makej -eemumu -ggtt -ggttg -gqttq -ggttgg -ggttggg -flt -bridge -makeclean
ENDED(3) AT Wed Oct  2 11:06:24 PM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -rmbhst
ENDED(4) AT Wed Oct  2 11:09:09 PM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -curhst
ENDED(5) AT Wed Oct  2 11:11:51 PM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -common
ENDED(6) AT Wed Oct  2 11:14:38 PM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -mix -hrd -makej -susyggtt -susyggt1t1 -smeftggtttt -heftggbb -makeclean
ENDED(7) AT Wed Oct  2 11:27:29 PM CEST 2024 [Status=0]

No errors found in logs

eemumu MEK (channelid array) processed 512 events across 2 channels { 1 : 256, 2 : 256 }
eemumu MEK (no multichannel) processed 512 events across 2 channels { no-multichannel : 512 }
ggttggg MEK (channelid array) processed 512 events across 1240 channels { 1 : 32, 2 : 32, 4 : 32, 5 : 32, 7 : 32, 8 : 32, 14 : 32, 15 : 32, 16 : 32, 18 : 32, 19 : 32, 20 : 32, 22 : 32, 23 : 32, 24 : 32, 26 : 32 }
ggttggg MEK (no multichannel) processed 512 events across 1240 channels { no-multichannel : 512 }
ggttgg MEK (channelid array) processed 512 events across 123 channels { 2 : 32, 3 : 32, 4 : 32, 5 : 32, 6 : 32, 7 : 32, 8 : 32, 9 : 32, 10 : 32, 11 : 32, 12 : 32, 13 : 32, 14 : 32, 15 : 32, 16 : 32, 17 : 32 }
ggttgg MEK (no multichannel) processed 512 events across 123 channels { no-multichannel : 512 }
ggttg MEK (channelid array) processed 512 events across 16 channels { 1 : 64, 2 : 32, 3 : 32, 4 : 32, 5 : 32, 6 : 32, 7 : 32, 8 : 32, 9 : 32, 10 : 32, 11 : 32, 12 : 32, 13 : 32, 14 : 32, 15 : 32 }
ggttg MEK (no multichannel) processed 512 events across 16 channels { no-multichannel : 512 }
ggtt MEK (channelid array) processed 512 events across 3 channels { 1 : 192, 2 : 160, 3 : 160 }
ggtt MEK (no multichannel) processed 512 events across 3 channels { no-multichannel : 512 }
gqttq MEK (channelid array) processed 512 events across 5 channels { 1 : 128, 2 : 96, 3 : 96, 4 : 96, 5 : 96 }
gqttq MEK (no multichannel) processed 512 events across 5 channels { no-multichannel : 512 }
heftggbb MEK (channelid array) processed 512 events across 4 channels { 1 : 128, 2 : 128, 3 : 128, 4 : 128 }
heftggbb MEK (no multichannel) processed 512 events across 4 channels { no-multichannel : 512 }
smeftggtttt MEK (channelid array) processed 512 events across 72 channels { 1 : 32, 2 : 32, 3 : 32, 4 : 32, 5 : 32, 6 : 32, 7 : 32, 8 : 32, 9 : 32, 10 : 32, 11 : 32, 12 : 32, 13 : 32, 14 : 32, 15 : 32, 16 : 32 }
smeftggtttt MEK (no multichannel) processed 512 events across 72 channels { no-multichannel : 512 }
susyggt1t1 MEK (channelid array) processed 512 events across 6 channels { 2 : 128, 3 : 96, 4 : 96, 5 : 96, 6 : 96 }
susyggt1t1 MEK (no multichannel) processed 512 events across 6 channels { no-multichannel : 512 }
susyggtt MEK (channelid array) processed 512 events across 3 channels { 1 : 192, 2 : 160, 3 : 160 }
susyggtt MEK (no multichannel) processed 512 events across 3 channels { no-multichannel : 512 }
STARTED  AT Wed Oct  2 11:57:38 PM CEST 2024
(SM tests)
ENDED(1) AT Thu Oct  3 03:45:27 AM CEST 2024 [Status=0]
(BSM tests)
ENDED(1) AT Thu Oct  3 03:55:40 AM CEST 2024 [Status=0]

24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_heftggbb_mad/log_heftggbb_mad_d_inl0_hrd0.txt
1 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_heftggbb_mad/log_heftggbb_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_heftggbb_mad/log_heftggbb_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_smeftggtttt_mad/log_smeftggtttt_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_smeftggtttt_mad/log_smeftggtttt_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_smeftggtttt_mad/log_smeftggtttt_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_susyggt1t1_mad/log_susyggt1t1_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_susyggt1t1_mad/log_susyggt1t1_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_susyggt1t1_mad/log_susyggt1t1_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_susyggtt_mad/log_susyggtt_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_susyggtt_mad/log_susyggtt_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_susyggtt_mad/log_susyggtt_mad_m_inl0_hrd0.txt

eemumu MEK processed 81920 events across 2 channels { 1 : 81920 }
eemumu MEK processed 8192 events across 2 channels { 1 : 8192 }
ggttggg MEK processed 81920 events across 1240 channels { 1 : 81920 }
ggttggg MEK processed 8192 events across 1240 channels { 1 : 8192 }
ggttgg MEK processed 81920 events across 123 channels { 112 : 81920 }
ggttgg MEK processed 8192 events across 123 channels { 112 : 8192 }
ggttg MEK processed 81920 events across 16 channels { 1 : 81920 }
ggttg MEK processed 8192 events across 16 channels { 1 : 8192 }
ggtt MEK processed 81920 events across 3 channels { 1 : 81920 }
ggtt MEK processed 8192 events across 3 channels { 1 : 8192 }
gqttq MEK processed 81920 events across 5 channels { 1 : 81920 }
gqttq MEK processed 8192 events across 5 channels { 1 : 8192 }
heftggbb MEK processed 81920 events across 4 channels { 1 : 81920 }
heftggbb MEK processed 8192 events across 4 channels { 1 : 8192 }
smeftggtttt MEK processed 81920 events across 72 channels { 1 : 81920 }
smeftggtttt MEK processed 8192 events across 72 channels { 1 : 8192 }
susyggt1t1 MEK processed 81920 events across 6 channels { 3 : 81920 }
susyggt1t1 MEK processed 8192 events across 6 channels { 3 : 8192 }
susyggtt MEK processed 81920 events across 3 channels { 1 : 81920 }
susyggtt MEK processed 8192 events across 3 channels { 1 : 8192 }
(NB: this was run in parallel - a posteriori I reverted itscrd90 tput logs, except for 6 curhst logs, then squashed)
(To revert the curhst logs: "git checkout 4865525 tput/logs_*curhst*")

STARTED  AT Wed Oct  2 08:51:16 PM CEST 2024
./tput/teeThroughputX.sh -mix -hrd -makej -eemumu -ggtt -ggttg -ggttgg -gqttq -ggttggg -makeclean  -cpponly
ENDED(1) AT Wed Oct  2 09:26:43 PM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -flt -hrd -makej -eemumu -ggtt -ggttgg -inlonly -makeclean  -cpponly
ENDED(2) AT Wed Oct  2 09:44:22 PM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -makej -eemumu -ggtt -ggttg -gqttq -ggttgg -ggttggg -flt -bridge -makeclean  -cpponly
ENDED(3) AT Wed Oct  2 09:49:19 PM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -rmbhst  -cpponly
ENDED(4) AT Wed Oct  2 09:50:47 PM CEST 2024 [Status=0]
SKIP './tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -common  -cpponly'
ENDED(5) AT Wed Oct  2 09:50:47 PM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -common  -cpponly
ENDED(6) AT Wed Oct  2 09:52:14 PM CEST 2024 [Status=0]
./tput/teeThroughputX.sh -mix -hrd -makej -susyggtt -susyggt1t1 -smeftggtttt -heftggbb -makeclean  -cpponly
ENDED(7) AT Wed Oct  2 10:05:36 PM CEST 2024 [Status=0]

No errors found in logs

eemumu MEK (channelid array) processed 512 events across 2 channels { 1 : 256, 2 : 256 }
eemumu MEK (no multichannel) processed 512 events across 2 channels { no-multichannel : 512 }
ggttggg MEK (channelid array) processed 512 events across 1240 channels { 1 : 32, 2 : 32, 4 : 32, 5 : 32, 7 : 32, 8 : 32, 14 : 32, 15 : 32, 16 : 32, 18 : 32, 19 : 32, 20 : 32, 22 : 32, 23 : 32, 24 : 32, 26 : 32 }
ggttggg MEK (no multichannel) processed 512 events across 1240 channels { no-multichannel : 512 }
ggttgg MEK (channelid array) processed 512 events across 123 channels { 2 : 32, 3 : 32, 4 : 32, 5 : 32, 6 : 32, 7 : 32, 8 : 32, 9 : 32, 10 : 32, 11 : 32, 12 : 32, 13 : 32, 14 : 32, 15 : 32, 16 : 32, 17 : 32 }
ggttgg MEK (no multichannel) processed 512 events across 123 channels { no-multichannel : 512 }
ggttg MEK (channelid array) processed 512 events across 16 channels { 1 : 64, 2 : 32, 3 : 32, 4 : 32, 5 : 32, 6 : 32, 7 : 32, 8 : 32, 9 : 32, 10 : 32, 11 : 32, 12 : 32, 13 : 32, 14 : 32, 15 : 32 }
ggttg MEK (no multichannel) processed 512 events across 16 channels { no-multichannel : 512 }
ggtt MEK (channelid array) processed 512 events across 3 channels { 1 : 192, 2 : 160, 3 : 160 }
ggtt MEK (no multichannel) processed 512 events across 3 channels { no-multichannel : 512 }
gqttq MEK (channelid array) processed 512 events across 5 channels { 1 : 128, 2 : 96, 3 : 96, 4 : 96, 5 : 96 }
gqttq MEK (no multichannel) processed 512 events across 5 channels { no-multichannel : 512 }
heftggbb MEK (channelid array) processed 512 events across 4 channels { 1 : 128, 2 : 128, 3 : 128, 4 : 128 }
heftggbb MEK (no multichannel) processed 512 events across 4 channels { no-multichannel : 512 }
smeftggtttt MEK (channelid array) processed 512 events across 72 channels { 1 : 32, 2 : 32, 3 : 32, 4 : 32, 5 : 32, 6 : 32, 7 : 32, 8 : 32, 9 : 32, 10 : 32, 11 : 32, 12 : 32, 13 : 32, 14 : 32, 15 : 32, 16 : 32 }
smeftggtttt MEK (no multichannel) processed 512 events across 72 channels { no-multichannel : 512 }
susyggt1t1 MEK (channelid array) processed 512 events across 6 channels { 2 : 128, 3 : 96, 4 : 96, 5 : 96, 6 : 96 }
susyggt1t1 MEK (no multichannel) processed 512 events across 6 channels { no-multichannel : 512 }
susyggtt MEK (channelid array) processed 512 events across 3 channels { 1 : 192, 2 : 160, 3 : 160 }
susyggtt MEK (no multichannel) processed 512 events across 3 channels { no-multichannel : 512 }
…as expected (heft fails madgraph5#833)

(NB: this was run in parallel - a posteriori I reverted itscrd90 tmad logs, then squashed)

STARTED  AT Wed Oct  2 11:57:55 PM CEST 2024
(SM tests)
ENDED(1) AT Thu Oct  3 02:55:33 AM CEST 2024 [Status=0]
(BSM tests)
ENDED(1) AT Thu Oct  3 03:00:49 AM CEST 2024 [Status=0]

20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_d_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_m_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_d_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_f_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_m_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_d_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_m_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_d_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_f_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_m_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_d_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_m_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_d_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_f_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_m_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_heftggbb_mad/log_heftggbb_mad_d_inl0_hrd0.txt
1 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_heftggbb_mad/log_heftggbb_mad_f_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_heftggbb_mad/log_heftggbb_mad_m_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_smeftggtttt_mad/log_smeftggtttt_mad_d_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_smeftggtttt_mad/log_smeftggtttt_mad_f_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_smeftggtttt_mad/log_smeftggtttt_mad_m_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_susyggt1t1_mad/log_susyggt1t1_mad_d_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_susyggt1t1_mad/log_susyggt1t1_mad_f_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_susyggt1t1_mad/log_susyggt1t1_mad_m_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_susyggtt_mad/log_susyggtt_mad_d_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_susyggtt_mad/log_susyggtt_mad_f_inl0_hrd0.txt
20 /data/avalassi/GPU2024/madgraph4gpuX/epochX/cudacpp/tmad/logs_susyggtt_mad/log_susyggtt_mad_m_inl0_hrd0.txt

eemumu MEK processed 81920 events across 2 channels { 1 : 81920 }
eemumu MEK processed 8192 events across 2 channels { 1 : 8192 }
ggttggg MEK processed 81920 events across 1240 channels { 1 : 81920 }
ggttggg MEK processed 8192 events across 1240 channels { 1 : 8192 }
ggttgg MEK processed 81920 events across 123 channels { 112 : 81920 }
ggttgg MEK processed 8192 events across 123 channels { 112 : 8192 }
ggttg MEK processed 81920 events across 16 channels { 1 : 81920 }
ggttg MEK processed 8192 events across 16 channels { 1 : 8192 }
ggtt MEK processed 81920 events across 3 channels { 1 : 81920 }
ggtt MEK processed 8192 events across 3 channels { 1 : 8192 }
gqttq MEK processed 81920 events across 5 channels { 1 : 81920 }
gqttq MEK processed 8192 events across 5 channels { 1 : 8192 }
heftggbb MEK processed 81920 events across 4 channels { 1 : 81920 }
heftggbb MEK processed 8192 events across 4 channels { 1 : 8192 }
smeftggtttt MEK processed 81920 events across 72 channels { 1 : 81920 }
smeftggtttt MEK processed 8192 events across 72 channels { 1 : 8192 }
susyggt1t1 MEK processed 81920 events across 6 channels { 3 : 81920 }
susyggt1t1 MEK processed 8192 events across 6 channels { 3 : 8192 }
susyggtt MEK processed 81920 events across 3 channels { 1 : 81920 }
susyggtt MEK processed 8192 events across 3 channels { 1 : 8192 }
…rd90

Revert "[install] rerun 30 tmad tests on itgold91 for release v1.00.00 - all as expected (heft fails madgraph5#833)"
This reverts commit 7f46b18b42dbcc517eb0b96fa69f73b6345f8782.

Revert "[install] rerun 96 tput tests on itgold91 for release v1.00.00 - all ok"
This reverts commit d66294042c5e6dd9685257da3d2f7624709ff5e9.
… 72h) for release v1.00.00 - one new issue madgraph5#1011 (FPEs in vxxxxx for LUMI)

(NB: this was run in parallel - a posteriori I reverted itscrd90 tput logs, except for 6 curhst logs, then squashed)
(To revert the curhst logs: "git checkout 4865525 tput/logs_*curhst*")

(1) Note: I had initially done a build and test without the -hip option, with some failures

STARTED  AT Wed 02 Oct 2024 09:48:45 PM EEST
./tput/teeThroughputX.sh -mix -hrd -makej -eemumu -ggtt -ggttg -ggttgg -gqttq -ggttggg -makeclean
ENDED(1) AT Wed 02 Oct 2024 10:14:30 PM EEST [Status=1]
./tput/teeThroughputX.sh -flt -hrd -makej -eemumu -ggtt -ggttgg -inlonly -makeclean
ENDED(2) AT Wed 02 Oct 2024 10:45:14 PM EEST [Status=0]
./tput/teeThroughputX.sh -makej -eemumu -ggtt -ggttg -gqttq -ggttgg -ggttggg -flt -bridge -makeclean
ENDED(3) AT Wed 02 Oct 2024 10:48:26 PM EEST [Status=1]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -rmbhst
ENDED(4) AT Wed 02 Oct 2024 10:50:27 PM EEST [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -curhst
ENDED(5) AT Wed 02 Oct 2024 10:50:58 PM EEST [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -common
ENDED(6) AT Wed 02 Oct 2024 10:52:58 PM EEST [Status=0]
./tput/teeThroughputX.sh -mix -hrd -makej -susyggtt -susyggt1t1 -smeftggtttt -heftggbb -makeclean
ENDED(7) AT Wed 02 Oct 2024 11:13:57 PM EEST [Status=0]

(2) This commit is the result of the second test, where I repeated using the -hip option (./tput/allTees.sh -hip)

STARTED  AT Thu 03 Oct 2024 12:57:14 AM EEST
./tput/teeThroughputX.sh -mix -hrd -makej -eemumu -ggtt -ggttg -ggttgg -gqttq -ggttggg -makeclean  -nocuda
ENDED(1) AT Thu 03 Oct 2024 01:29:36 AM EEST [Status=0]
./tput/teeThroughputX.sh -flt -hrd -makej -eemumu -ggtt -ggttgg -inlonly -makeclean  -nocuda
ENDED(2) AT Thu 03 Oct 2024 01:38:03 AM EEST [Status=0]
./tput/teeThroughputX.sh -makej -eemumu -ggtt -ggttg -gqttq -ggttgg -ggttggg -flt -bridge -makeclean  -nocuda
ENDED(3) AT Thu 03 Oct 2024 01:47:01 AM EEST [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -rmbhst  -nocuda
ENDED(4) AT Thu 03 Oct 2024 01:49:00 AM EEST [Status=0]
SKIP './tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -common  -nocuda'
ENDED(5) AT Thu 03 Oct 2024 01:49:00 AM EEST [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -common  -nocuda
ENDED(6) AT Thu 03 Oct 2024 01:50:58 AM EEST [Status=0]
./tput/teeThroughputX.sh -mix -hrd -makej -susyggtt -susyggt1t1 -smeftggtttt -heftggbb -makeclean  -nocuda
ENDED(7) AT Thu 03 Oct 2024 02:00:26 AM EEST [Status=0]

NB: the results below come from an improved version of checklogs in tput/allTees.sh, from a later commit
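
A minimal sketch of what such a log scan could look like (hypothetical commands for illustration only; the actual checklogs logic in tput/allTees.sh may differ):

grep -l 'Floating Point Exception' tput/logs_*_mad/log_*.txt   # list the logs that hit the FPE crash
grep 'MEK .* processed' tput/logs_*_mad/log_*.txt | sort       # summarize the MEK event/channel counters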

No errors found in logs

tput/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0.txt:DEBUG: MEK 0x74b3d0 processed 0 events across 2 channels { }
tput/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0.txt:DEBUG: MEK 0x728930 processed 0 events across 2 channels { }
tput/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0_common.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0_common.txt:DEBUG: MEK 0x7618d0 processed 0 events across 2 channels { }
tput/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0_common.txt:DEBUG: MEK 0x74b3d0 processed 0 events across 2 channels { }
tput/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd1.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd1.txt:DEBUG: MEK 0x117f910 processed 0 events across 2 channels { }
tput/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd1.txt:DEBUG: MEK 0x77c170 processed 0 events across 2 channels { }
tput/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0_rmbhst.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0_rmbhst.txt:DEBUG: MEK 0x119a3d0 processed 0 events across 2 channels { }
tput/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0_rmbhst.txt:DEBUG: MEK 0xc33230 processed 0 events across 2 channels { }
tput/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0_bridge.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0_bridge.txt:DEBUG: MEK 0xc32660 processed 0 events across 2 channels { }
tput/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0_bridge.txt:DEBUG: MEK 0x7809a0 processed 0 events across 2 channels { }
tput/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0_common.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0_common.txt:DEBUG: MEK 0x8d9670 processed 0 events across 123 channels { }
tput/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0_common.txt:DEBUG: MEK 0x8c5930 processed 0 events across 123 channels { }
tput/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0_rmbhst.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0_rmbhst.txt:DEBUG: MEK 0x8d9670 processed 0 events across 123 channels { }
tput/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0_rmbhst.txt:DEBUG: MEK 0x8c5930 processed 0 events across 123 channels { }
tput/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0_bridge.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0_bridge.txt:DEBUG: MEK 0x8ec7f0 processed 0 events across 123 channels { }
tput/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0_bridge.txt:DEBUG: MEK 0x8978e0 processed 0 events across 123 channels { }
tput/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0.txt:DEBUG: MEK 0x8d9670 processed 0 events across 123 channels { }
tput/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0.txt:DEBUG: MEK 0x8c5930 processed 0 events across 123 channels { }
tput/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd1.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd1.txt:DEBUG: MEK 0x1262600 processed 0 events across 123 channels { }
tput/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd1.txt:DEBUG: MEK 0x94e8a0 processed 0 events across 123 channels { }
tput/logs_ggttg_mad/log_ggttg_mad_f_inl0_hrd0_bridge.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_ggttg_mad/log_ggttg_mad_f_inl0_hrd0_bridge.txt:DEBUG: MEK 0x75eb20 processed 0 events across 16 channels { }
tput/logs_ggttg_mad/log_ggttg_mad_f_inl0_hrd0_bridge.txt:DEBUG: MEK 0x11bd0d0 processed 0 events across 16 channels { }
tput/logs_ggttg_mad/log_ggttg_mad_f_inl0_hrd1.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_ggttg_mad/log_ggttg_mad_f_inl0_hrd1.txt:DEBUG: MEK 0xd82780 processed 0 events across 16 channels { }
tput/logs_ggttg_mad/log_ggttg_mad_f_inl0_hrd1.txt:DEBUG: MEK 0x73e480 processed 0 events across 16 channels { }
tput/logs_ggttg_mad/log_ggttg_mad_f_inl0_hrd0.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_ggttg_mad/log_ggttg_mad_f_inl0_hrd0.txt:DEBUG: MEK 0xb9ace0 processed 0 events across 16 channels { }
tput/logs_ggttg_mad/log_ggttg_mad_f_inl0_hrd0.txt:DEBUG: MEK 0xc4ab30 processed 0 events across 16 channels { }
tput/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0_rmbhst.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0_rmbhst.txt:DEBUG: MEK 0x6a5340 processed 0 events across 3 channels { }
tput/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0_rmbhst.txt:DEBUG: MEK 0x11ac900 processed 0 events across 3 channels { }
tput/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd1.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd1.txt:DEBUG: MEK 0xd1c010 processed 0 events across 3 channels { }
tput/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd1.txt:DEBUG: MEK 0x6fc940 processed 0 events across 3 channels { }
tput/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0_common.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0_common.txt:DEBUG: MEK 0x6df940 processed 0 events across 3 channels { }
tput/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0_common.txt:DEBUG: MEK 0x67fb00 processed 0 events across 3 channels { }
tput/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0_bridge.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0_bridge.txt:DEBUG: MEK 0xb882a0 processed 0 events across 3 channels { }
tput/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0_bridge.txt:DEBUG: MEK 0x783ec0 processed 0 events across 3 channels { }
tput/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0.txt:DEBUG: MEK 0x6df940 processed 0 events across 3 channels { }
tput/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0.txt:DEBUG: MEK 0x67fb00 processed 0 events across 3 channels { }
tput/logs_ggtt_mad/#log_ggtt_mad_f_inl0_hrd0.txt#:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_ggtt_mad/#log_ggtt_mad_f_inl0_hrd0.txt#:DEBUG: MEK 0x6df940 processed 0 events across 3 channels { }
tput/logs_ggtt_mad/#log_ggtt_mad_f_inl0_hrd0.txt#:DEBUG: MEK 0x67fb00 processed 0 events across 3 channels { }
tput/logs_gqttq_mad/log_gqttq_mad_f_inl0_hrd0.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_gqttq_mad/log_gqttq_mad_f_inl0_hrd0.txt:DEBUG: MEK 0xb83cf0 processed 0 events across 5 channels { }
tput/logs_gqttq_mad/log_gqttq_mad_f_inl0_hrd0.txt:DEBUG: MEK 0x7896a0 processed 0 events across 5 channels { }
tput/logs_gqttq_mad/log_gqttq_mad_f_inl0_hrd0_bridge.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_gqttq_mad/log_gqttq_mad_f_inl0_hrd0_bridge.txt:DEBUG: MEK 0xd1fcc0 processed 0 events across 5 channels { }
tput/logs_gqttq_mad/log_gqttq_mad_f_inl0_hrd0_bridge.txt:DEBUG: MEK 0xd1b3b0 processed 0 events across 5 channels { }
tput/logs_gqttq_mad/log_gqttq_mad_f_inl0_hrd1.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_gqttq_mad/log_gqttq_mad_f_inl0_hrd1.txt:DEBUG: MEK 0x6e4740 processed 0 events across 5 channels { }
tput/logs_gqttq_mad/log_gqttq_mad_f_inl0_hrd1.txt:DEBUG: MEK 0x7298f0 processed 0 events across 5 channels { }
tput/logs_heftggbb_mad/log_heftggbb_mad_f_inl0_hrd0.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_heftggbb_mad/log_heftggbb_mad_f_inl0_hrd0.txt:DEBUG: MEK 0x11a9de0 processed 0 events across 4 channels { }
tput/logs_heftggbb_mad/log_heftggbb_mad_f_inl0_hrd0.txt:DEBUG: MEK 0x11975c0 processed 0 events across 4 channels { }
tput/logs_heftggbb_mad/log_heftggbb_mad_f_inl0_hrd1.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_heftggbb_mad/log_heftggbb_mad_f_inl0_hrd1.txt:DEBUG: MEK 0x74d7b0 processed 0 events across 4 channels { }
tput/logs_heftggbb_mad/log_heftggbb_mad_f_inl0_hrd1.txt:DEBUG: MEK 0x729a10 processed 0 events across 4 channels { }
tput/logs_smeftggtttt_mad/log_smeftggtttt_mad_f_inl0_hrd0.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_smeftggtttt_mad/log_smeftggtttt_mad_f_inl0_hrd0.txt:DEBUG: MEK 0x72f1d0 processed 0 events across 72 channels { }
tput/logs_smeftggtttt_mad/log_smeftggtttt_mad_f_inl0_hrd0.txt:DEBUG: MEK 0x871370 processed 0 events across 72 channels { }
tput/logs_smeftggtttt_mad/log_smeftggtttt_mad_f_inl0_hrd1.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_smeftggtttt_mad/log_smeftggtttt_mad_f_inl0_hrd1.txt:DEBUG: MEK 0x7ea630 processed 0 events across 72 channels { }
tput/logs_smeftggtttt_mad/log_smeftggtttt_mad_f_inl0_hrd1.txt:DEBUG: MEK 0x6dbd10 processed 0 events across 72 channels { }
tput/logs_susyggt1t1_mad/log_susyggt1t1_mad_f_inl0_hrd0.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_susyggt1t1_mad/log_susyggt1t1_mad_f_inl0_hrd0.txt:DEBUG: MEK 0x6f2f60 processed 0 events across 6 channels { }
tput/logs_susyggt1t1_mad/log_susyggt1t1_mad_f_inl0_hrd0.txt:DEBUG: MEK 0x6ee280 processed 0 events across 6 channels { }
tput/logs_susyggt1t1_mad/log_susyggt1t1_mad_f_inl0_hrd1.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_susyggt1t1_mad/log_susyggt1t1_mad_f_inl0_hrd1.txt:DEBUG: MEK 0xc36d80 processed 0 events across 6 channels { }
tput/logs_susyggt1t1_mad/log_susyggt1t1_mad_f_inl0_hrd1.txt:DEBUG: MEK 0x788210 processed 0 events across 6 channels { }
tput/logs_susyggtt_mad/log_susyggtt_mad_f_inl0_hrd0.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_susyggtt_mad/log_susyggtt_mad_f_inl0_hrd0.txt:DEBUG: MEK 0xd71c40 processed 0 events across 3 channels { }
tput/logs_susyggtt_mad/log_susyggtt_mad_f_inl0_hrd0.txt:DEBUG: MEK 0xd6e8e0 processed 0 events across 3 channels { }
tput/logs_susyggtt_mad/log_susyggtt_mad_f_inl0_hrd1.txt:Floating Point Exception (GPU): 'vxxxxx' ievt=17
tput/logs_susyggtt_mad/log_susyggtt_mad_f_inl0_hrd1.txt:DEBUG: MEK 0x6f6ff0 processed 0 events across 3 channels { }
tput/logs_susyggtt_mad/log_susyggtt_mad_f_inl0_hrd1.txt:DEBUG: MEK 0x117d970 processed 0 events across 3 channels { }

eemumu MEK (channelid array) processed 512 events across 2 channels { 1 : 256, 2 : 256 }
eemumu MEK (no multichannel) processed 512 events across 2 channels { no-multichannel : 512 }
ggttggg MEK (channelid array) processed 512 events across 1240 channels { 1 : 32, 2 : 32, 4 : 32, 5 : 32, 7 : 32, 8 : 32, 14 : 32, 15 : 32, 16 : 32, 18 : 32, 19 : 32, 20 : 32, 22 : 32, 23 : 32, 24 : 32, 26 : 32 }
ggttggg MEK (no multichannel) processed 512 events across 1240 channels { no-multichannel : 512 }
ggttgg MEK (channelid array) processed 512 events across 123 channels { 2 : 32, 3 : 32, 4 : 32, 5 : 32, 6 : 32, 7 : 32, 8 : 32, 9 : 32, 10 : 32, 11 : 32, 12 : 32, 13 : 32, 14 : 32, 15 : 32, 16 : 32, 17 : 32 }
ggttgg MEK (no multichannel) processed 512 events across 123 channels { no-multichannel : 512 }
ggttg MEK (channelid array) processed 512 events across 16 channels { 1 : 64, 2 : 32, 3 : 32, 4 : 32, 5 : 32, 6 : 32, 7 : 32, 8 : 32, 9 : 32, 10 : 32, 11 : 32, 12 : 32, 13 : 32, 14 : 32, 15 : 32 }
ggttg MEK (no multichannel) processed 512 events across 16 channels { no-multichannel : 512 }
ggtt MEK (channelid array) processed 512 events across 3 channels { 1 : 192, 2 : 160, 3 : 160 }
ggtt MEK (no multichannel) processed 512 events across 3 channels { no-multichannel : 512 }
gqttq MEK (channelid array) processed 512 events across 5 channels { 1 : 128, 2 : 96, 3 : 96, 4 : 96, 5 : 96 }
gqttq MEK (no multichannel) processed 512 events across 5 channels { no-multichannel : 512 }
heftggbb MEK (channelid array) processed 512 events across 4 channels { 1 : 128, 2 : 128, 3 : 128, 4 : 128 }
heftggbb MEK (no multichannel) processed 512 events across 4 channels { no-multichannel : 512 }
smeftggtttt MEK (channelid array) processed 512 events across 72 channels { 1 : 32, 2 : 32, 3 : 32, 4 : 32, 5 : 32, 6 : 32, 7 : 32, 8 : 32, 9 : 32, 10 : 32, 11 : 32, 12 : 32, 13 : 32, 14 : 32, 15 : 32, 16 : 32 }
smeftggtttt MEK (no multichannel) processed 512 events across 72 channels { no-multichannel : 512 }
susyggt1t1 MEK (channelid array) processed 512 events across 6 channels { 2 : 128, 3 : 96, 4 : 96, 5 : 96, 6 : 96 }
susyggt1t1 MEK (no multichannel) processed 512 events across 6 channels { no-multichannel : 512 }
susyggtt MEK (channelid array) processed 512 events across 3 channels { 1 : 192, 2 : 160, 3 : 160 }
susyggtt MEK (no multichannel) processed 512 events across 3 channels { no-multichannel : 512 }
…elease v1.00.00 - all as expected (heft fails madgraph5#833, skip ggttggg madgraph5#933)

(NB: this was run in parallel - a posteriori I reverted itscrd90 tmad logs, then squashed)

STARTED  AT Thu 03 Oct 2024 02:00:26 AM EEST
(SM tests)
ENDED(1) AT Thu 03 Oct 2024 04:31:27 AM EEST [Status=0]
(BSM tests)
ENDED(1) AT Thu 03 Oct 2024 04:40:50 AM EEST [Status=0]

16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_d_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_m_inl0_hrd0.txt
12 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_d_inl0_hrd0.txt
12 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_f_inl0_hrd0.txt
12 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_m_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_d_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_m_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_d_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_f_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_m_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_d_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_m_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_d_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_f_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_m_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_heftggbb_mad/log_heftggbb_mad_d_inl0_hrd0.txt
1 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_heftggbb_mad/log_heftggbb_mad_f_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_heftggbb_mad/log_heftggbb_mad_m_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_smeftggtttt_mad/log_smeftggtttt_mad_d_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_smeftggtttt_mad/log_smeftggtttt_mad_f_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_smeftggtttt_mad/log_smeftggtttt_mad_m_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_susyggt1t1_mad/log_susyggt1t1_mad_d_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_susyggt1t1_mad/log_susyggt1t1_mad_f_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_susyggt1t1_mad/log_susyggt1t1_mad_m_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_susyggtt_mad/log_susyggtt_mad_d_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_susyggtt_mad/log_susyggtt_mad_f_inl0_hrd0.txt
16 /users/valassia/GPU2024/madgraph4gpu/epochX/cudacpp/tmad/logs_susyggtt_mad/log_susyggtt_mad_m_inl0_hrd0.txt

eemumu MEK processed 81920 events across 2 channels { 1 : 81920 }
eemumu MEK processed 8192 events across 2 channels { 1 : 8192 }
ggttggg MEK processed 81920 events across 1240 channels { 1 : 81920 }
ggttggg MEK processed 8192 events across 1240 channels { 1 : 8192 }
ggttgg MEK processed 81920 events across 123 channels { 112 : 81920 }
ggttgg MEK processed 8192 events across 123 channels { 112 : 8192 }
ggttg MEK processed 81920 events across 16 channels { 1 : 81920 }
ggttg MEK processed 8192 events across 16 channels { 1 : 8192 }
ggtt MEK processed 81920 events across 3 channels { 1 : 81920 }
ggtt MEK processed 8192 events across 3 channels { 1 : 8192 }
gqttq MEK processed 81920 events across 5 channels { 1 : 81920 }
gqttq MEK processed 8192 events across 5 channels { 1 : 8192 }
heftggbb MEK processed 81920 events across 4 channels { 1 : 81920 }
heftggbb MEK processed 8192 events across 4 channels { 1 : 8192 }
smeftggtttt MEK processed 81920 events across 72 channels { 1 : 81920 }
smeftggtttt MEK processed 8192 events across 72 channels { 1 : 8192 }
susyggt1t1 MEK processed 81920 events across 6 channels { 3 : 81920 }
susyggt1t1 MEK processed 8192 events across 6 channels { 3 : 8192 }
susyggtt MEK processed 81920 events across 3 channels { 1 : 81920 }
susyggtt MEK processed 8192 events across 3 channels { 1 : 8192 }
…rd90

Revert "[install] rerun 30 tmad tests on LUMI worker node (small-g 72h) for release v1.00.00 - all as expected (heft fails madgraph5#833, skip ggttggg madgraph5#933)"
This reverts commit a6c94d0.

Revert "[install] rerun 96 tput builds and tests on LUMI worker node (small-g 72h) for release v1.00.00 - one new issue madgraph5#1011 (FPEs in vxxxxx for LUMI)"
This reverts commit 217368c.
@valassi valassi force-pushed the install branch 5 times, most recently from 8556232 to e6d9ddc on October 3, 2024 09:22
…release notes - will revert, only works for annotated tags
…g author to the release notes, it does not work if it is not an annotated tag)

Revert "[install] in archiver.yml, try to add tag date and tag author to the release notes - will revert, only works for annotated tags"
This reverts commit 1feb468
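
For reference, a minimal sketch of why this only works for annotated tags (standard git commands, not the actual archiver.yml logic; <tagname> is a placeholder): only annotated tags carry tagger metadata, so the corresponding fields are empty for lightweight tags.

git cat-file -t <tagname>                                                     # prints 'tag' for annotated, 'commit' for lightweight
git for-each-ref refs/tags/<tagname> --format='%(taggername) %(taggerdate)'   # empty output for a lightweight tag
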
Member

@oliviermattelaer oliviermattelaer left a comment

Perfect, nothing to comment on.

But please can we do a squash here?

Cheers,

Olivier

@valassi
Member Author

valassi commented Oct 3, 2024

Hi @oliviermattelaer, thanks!

But please can we do a squash here?

Sorry, no. Amongst other things, there are some test results that are only visible as individual commits (I then build all my summary tables for conferences based on those). Eventually I/we need to find a better way, but so far, no.

And in any case I really prefer to see the full history, in case we need to go back and check some stuff. You might not believe it, but I am already squashing a lot (there are around 100 commits visible here, but in reality these were maybe 400: debugging the CI was really complex, and I want to keep track of that debugging).

Also, by the way, for HEPToolsInstallers I would really prefer no squashing. Same thing: at least I can easily find my way back if I ever want to bypass info.dat (not that I think this will ever be needed, but one never knows).

Can I merge like this then? Thanks!
Andrea

@valassi
Member Author

valassi commented Oct 3, 2024

PS: Sorry, I know I am difficult :-)

But this point easily gets very very very religious...!
https://dev.to/wesen/squash-commits-considered-harmful-ob1
Quote: "A recurring conversation in developer circles is if you should use git --squash when merging or do explicit merge commits. The short answer: you shouldn't. People have strong opinions about this. The thing is that my opinion is the correct one. Squashing commits has no purpose other than losing information. It doesn't make for a cleaner history."

I am not saying "my opinion is the correct one", but I definitely do have a strong opinion that squashing is often bad. It will not surprise you that I prefer too much information to too little information...

Then again, I do sometimes squash my own commits. With other people's commits I would be even more reluctant to squash; I want to be able to trace back what they did, if/when needed...

@valassi
Member Author

valassi commented Oct 3, 2024

Thanks again @oliviermattelaer! As discussed, merging now.

Then I will bump up the version in a later PR or commit

Andrea

@valassi valassi changed the title from "tarball creation/installation, plugin symlink management, release management" to "v1.0.0 release: tarball creation/installation, plugin symlink management, release management" on Oct 3, 2024
@valassi valassi merged commit 7e8e033 into madgraph5:master Oct 3, 2024
172 checks passed