feat: helm test workflow check #394
Conversation
Hi @tomaszbarwicki, while I think this check is actually working, I am not sure if we should really automate this and add it to our dashboard. The whole automation effort is currently mainly driven by the System Team of the consortium, which we know will end at some point. We need to keep in mind that we are potentially building a legacy for others to take over. That's why I think we should only focus on "waterproof" tests that do not require explanation or background info on which teams have special agreements to have a failing check on our dashboard. What do you guys think @carslen @hzierer @FaGru3n @almadigabor?
I'm a bit torn here. Maybe we could put this code into a separate workflow as an example / "reference implementation"?
Hi @SebastianBezold, @hzierer, I appreciate the feedback and I understand the concern, but I'm not sure I agree that we should give up on automation just because a small margin chose to follow a different approach (not saying an incorrect one!). Based on the testing I did across the organisation (almost 40 repos qualified as product repos), there seem to be only 4 which didn't pass the automated helm workflow test. Analysis of failing repos:
In the end we weigh the time/effort saved for each individual during release checks against the 2% of repos with a potentially incorrect test result. And finally, why not: over time those cases can also be incorporated, following an incremental development strategy. :)
I am not against automation per se. But I would still argue that the time I need to spend manually checking the workflow is actually not that much, since there are just a few key places to look at. Also, we do not only (potentially) save time in checks; we also add more things that have to be maintained. As soon as we change details of the workflow, the check has to be adapted, since it focuses on technical details rather than the overall goal of the TRG (automatic testing of a chart). I can definitely understand your points and, as I said, I am not against automation at all. But if we automate, it has to be waterproof in my opinion. This means only reporting actual non-compliances while still being open to multiple solutions.
I'm afraid the "broken window" has already been there with us for quite a while. Current base image check reports red for certain repositories despite they actually fulfill TRG but decided to use aliases. Isn't that technical detail which makes the check being not general enough? Doesn't that dictate specific format of Dockerfile? Are we really in the position that we can say all our checks are waterproof? I'm not saying we should temper aspirations, but maybe embrace step by step approach, learn from each stage and ensure sustainable progress. |
With regard to our internal discussion about a proper exception process, I'm fine with the current implementation, since it seems to work in general.
PR to add a helm chart testing workflow check to the release-automation tool, in order to verify that the workflow required by the TRG is present in the repository.
Updates eclipse-tractusx/sig-infra#142
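For context, here is a minimal Go sketch of the idea behind such a check. It assumes the TRG-required workflow can be recognized by a reference to `helm/chart-testing-action` in a file under `.github/workflows`; the actual implementation in this PR may use different match criteria:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// hasHelmTestWorkflow reports whether any workflow file under
// .github/workflows references the chart-testing action. This is a
// hedged sketch of the check's idea, not the PR's actual code.
func hasHelmTestWorkflow(repoRoot string) (bool, error) {
	for _, pattern := range []string{"*.yml", "*.yaml"} {
		files, err := filepath.Glob(filepath.Join(repoRoot, ".github", "workflows", pattern))
		if err != nil {
			return false, err
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil {
				return false, err
			}
			// Assumption: the TRG workflow is recognized by its use of
			// helm/chart-testing-action (the "ct" tool).
			if strings.Contains(string(data), "helm/chart-testing-action") {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasHelmTestWorkflow(".")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("helm test workflow present:", ok)
}
```

A substring match like this is deliberately loose; it stays open to different workflow layouts, which is exactly the "open for multiple solutions" property discussed above, at the cost of occasionally accepting a workflow that references the action without running it.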