diff --git a/website/content/docs/proposals/proposal-002-run.md b/website/content/docs/proposals/proposal-002-run.md
index b34a2f3..42946a8 100644
--- a/website/content/docs/proposals/proposal-002-run.md
+++ b/website/content/docs/proposals/proposal-002-run.md
@@ -26,13 +26,13 @@ Draft
 - [Non-Goals](#non-goals)
 - [Linked Docs](#linked-docs)
 - [Proposal](#proposal)
-  - [User Stories (Optional)](#user-stories-optional)
+  - [User Stories](#user-stories)
   - [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional)
   - [Risks and Mitigations](#risks-and-mitigations)
 - [Design Details](#design-details)
-  - [Defining Jobs \& how to run them from the project pipeline](#defining-jobs--how-to-run-them-from-the-project-pipeline)
-    - [Job defined in upstream Kubernetes manifests](#job-defined-in-upstream-kubernetes-manifests)
-  - [Graduation Criteria (Optional)](#graduation-criteria-optional)
+  - [Definitions](#definitions)
+  - [How the project workflow calls the benchmark job](#how-the-project-workflow-calls-the-benchmark-job)
+  - [Running \& Defining Benchmark Jobs](#running--defining-benchmark-jobs)
 - [Drawbacks (Optional)](#drawbacks-optional)
 - [Alternatives](#alternatives)
 - [Infrastructure Needed (Optional)](#infrastructure-needed-optional)
@@ -134,7 +134,7 @@ As a project maintainer, I create and run a benchmark test from a separate GitHub repository
 
 **Green Reviews maintainer helps to create a new benchmark test for a specific CNCF project**
 
-As a Green Reviews maintainer, I can help a CNCF project maintainers to define the Functional Unit of a project so that the project maintainers can create a benchmark test. 
+As a Green Reviews maintainer, I can help CNCF project maintainers to define the Functional Unit of a project so that the project maintainers can create a benchmark test.
 
 **Project maintainer modifies or removes a benchmark test**
 
@@ -197,14 +197,13 @@ There are different components defined here and shown in the following diagram.
 
 ### How the project workflow calls the benchmark job
 
-When the project workflow starts, it deploys the project on the test environment and then runs the benchmark job. For modularity and/or clarity, the benchmark test instructions could be defined in 3 different ways:
+When the project workflow starts, it deploys the project on the test environment and then runs the benchmark job. For modularity and/or clarity, the benchmark test instructions could be defined in two different ways:
 
-1. As a Job and with in-line instructions/steps
-2. As a Job that calls another GitHub Action workflow (yes, yet another workflow 🙂) that contains the instructions. The workflow can be either:
-   1. In the Green Reviews WG repository
-   2. In a separate repository
+As a Job that calls another GitHub Actions workflow (yes, yet another workflow 🙂) that contains the instructions. The workflow can be either:
+  1. In the Green Reviews WG repository
+  2. In a separate repository
 
-The three options for defining a benchmark test are illustrated below.
+The two options for defining a benchmark test are illustrated below.
 
 ![Calling the benchmark job](calling-benchmark-job.png "Calling the Benchmark job")
 
@@ -324,6 +323,8 @@ SIG to get the process for these resources started right away. -->
 
 TODO:
 
+* remove option 1 - too confusing
+* explain that we want project contribs to contribute PRs in their repo
 * From the trigger pipeline, go to the Project Pipeline to run the tests of a specific CNCF project. → I suppose this is covered between Proposal 1 and this. Only thing we didn’t dig into is the subcomponents.
 * Directory structure for CNCF projects, their pipeline/Jobs, etc. Collecting the Jobs & categorising them in "libraries"? → We should probably specify a standard way to store the tests in directories, where do they go and what is the directory structure, something in line with the subcomponents in Falco. → We should probably specify a standard way to store the tests in directories.
 * Cleaning up test artefacts → Oops, not yet there either 😛
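For context, the mechanism the patch keeps (a Job that calls another GitHub Actions workflow containing the benchmark instructions) could be sketched on the caller side as follows. This is a minimal illustration only: the workflow file path, repository ref, and the `project` input are hypothetical and not taken from the proposal; the reusable benchmark workflow could equally live in a separate project repository.

```yaml
# Sketch of a project pipeline whose benchmark Job calls a reusable workflow.
# All names below (file paths, ref, inputs) are hypothetical.
name: project-pipeline

on:
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy project to the test environment
        run: echo "deploy the CNCF project under test here"

  benchmark:
    needs: deploy
    # The called workflow may live in the Green Reviews WG repository
    # or in a separate (e.g. project-owned) repository.
    uses: cncf-tags/green-reviews-tooling/.github/workflows/benchmark.yaml@main
    with:
      project: falco
```

A `jobs.<job_id>.uses` entry like this is how one workflow invokes a reusable workflow in another repository; the benchmark instructions themselves would then live in the called `benchmark.yaml`, keeping the project pipeline small.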