Handling modularity for benchmark suites like CIS #6924
Replies: 3 comments 5 replies
---
o/ Hey there @alexhaydock! I've spent a bit of time thinking about this as well and mentioned it here: #6889 -- and see the documentation PR submitted by @jan-cerny: #6915.

As I mentioned on your other post, CIS is a tricky benchmark to implement content for. The approach we are going to take for our Ubuntu content is to create four separate profiles and manually maintain the rules across them. Part of the issue is that the CIS benchmarks for two unrelated OSes are very different. Taking Ubuntu and RHEL, the hierarchy is roughly CIS Distribution Independent -> Fedora {19,28} Family -> RHEL {7,8}, or CIS Distribution Independent -> Debian Family -> Ubuntu (all versions). This makes creating a single, common control file (even for two different versions of the same OS) tricky. Couple this with split, propagating updates (sometimes a change is made in one OS and it's only later realized that it needs to propagate back up to Distribution Independent before trickling down to all the other versions) and an irregular release cadence (Ubuntu 18.04 and 20.04 released at different times, for instance), and you gain almost nothing by creating a product-independent (in the CaC sense) control file over four separate profiles and a good diff tool.

I definitely encourage you to get involved with the CIS community if you can; I think it'll give you some good insight into how the benchmarks are created and a forum to provide feedback on any bugs you encounter. And maybe you can propose a better identifier format. Free to join :)

Anyhow, all this to say: that's why I proposed some additions to the profile format in #6889c5. Maybe you'll find the thoughts there interesting.

Regarding your proposals: I lean towards number two. I do like the idea of multiple levels, though; perhaps we could revisit what to call them.
---
Hello, we plan to split CIS for RHEL 7 and 8 into 2 profiles: one for Level 1 and one for Level 2. I think you can contact me if you want to cooperate; I am from Red Hat. Initially I planned to use the machine-readable policy approach (the one used for the ANSSI profiles), but it seems it might not be suitable for CIS. I still have to do some research on it.
---
Hello. Well, there does not seem to be demand for separate Server and Workstation profiles, but I think it can be done. During the process, all the CIS requirements need to be evaluated anyway, so it should not increase the effort much.
---
I already posted about this in a different thread, but it was buried and somewhat of a train of thought, so I've collected my thoughts here in (hopefully) a clearer fashion. I'm quite new to contributing to ComplianceAsCode, so I'm sorry if I've misunderstood anything about how things work - please let me know! 😅
So I can see that recent developments in the repo have added modularity to the ANSSI benchmark, which has 4 levels: minimal, intermediary, enhanced, and high.
Unlike the other monolithic benchmarks, which keep all their rule references in a single `.profile` file, with the ANSSI benchmark each "level" gets its own `.profile` file which references a "set" of rules in a simple way. These rules get pulled from `controls/anssi.yml` based on the `id: anssi` section within that file.

From what I can tell, inside that file is where the "levels" of the benchmark are both defined and applied to individual rules. We define them near the top of the file:
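Sketching this from memory of the repo layout (exact fields may differ; check the real `controls/anssi.yml` in the tree), the top of the controls file defines the policy ID and the level hierarchy roughly like this:

```yaml
# Rough sketch of the head of controls/anssi.yml -- illustrative,
# not copied verbatim from the repository.
id: anssi
title: ANSSI-BP-028
levels:
  - id: minimal
  - id: intermediary
    inherits_from:
      - minimal
  - id: enhanced
    inherits_from:
      - intermediary
  - id: high
    inherits_from:
      - enhanced
```

Each per-level `.profile` file then just selects everything from this policy at one level, e.g. a `selections:` entry like `anssi:all:enhanced`, if I'm reading the format right.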
and we use them in rules like so:
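Roughly like this, I believe (the control ID and the rule name below are illustrative placeholders, not copied from the actual file):

```yaml
# Illustrative control entry from a controls/*.yml file.
controls:
  - id: R31                              # illustrative control ID
    level: intermediary                  # lowest level this control applies to
    rules:
      - accounts_password_pam_minlen     # illustrative rule selection
```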
If I'm understanding it correctly (please confirm if I'm not!), when a `level:` is applied to a rule in this way, we're basically defining the "lowest" level in the hierarchy of levels (defined above in `levels:`) which this rule will apply to. So in the example above, this rule is tagged `level: intermediary`, so we can expect it to apply to the `intermediary`, `enhanced`, and `high` benchmark levels.

This approach is working quite well for ANSSI, and probably will for anything that has hierarchical subsets of rules like this, but my concern is that CIS takes a different approach.
CIS is sorted into 4 categories too, like ANSSI:

- Level 1 - Server
- Level 2 - Server
- Level 1 - Workstation
- Level 2 - Workstation
While "Level 1" and "Level 2" do apply hierarchically like ANSSI above (i.e. Level 2 contains all the controls applied by Level 1, and more), the Server and Workstation sub-categories don't work cleanly in this way. The CIS Server benchmarks contain rules which the Workstation benchmark does not, and the CIS Workstation benchmarks contain rules which the Server one does not.
I'm opening this thread for a discussion on ways of handling this while still keeping modularity, and still being able to provide a full-service benchmark for CIS split into the 4 categories.
## Possible Options
1. Create two separate control files, `content/cis_workstation.yml` and `content/cis_server.yml`. The downside of this approach is that, while the Workstation and Server categories do contain different rules, they overlap for a large percentage of the rules involved, so there'd be a huge amount of duplication between those two files.
2. Change things so that the `level:` definition for rules in the `content/xxxx.yml` files doesn't apply hierarchically but instead applies as a list, where we'd need to specify the full list of benchmark levels that each rule is a member of.

Option 2 would be my preferred approach, but I'd love to hear from others on how feasible or easy this would be to build. It feels like it would lead to more work to maintain for each `content/` file, but it also feels more declarative and more flexible in terms of modularity. I'm not sure how many of the compliance benchmarks would actually benefit from it, though, or whether it would all just be for the benefit of CIS and its not-entirely-overlapping subcategories of rules.

Please let me know any thoughts anyone has. I'm willing to do as much as I can to help this along, as I have a vested interest in being able to use OpenSCAP to validate/remediate hosts against specific levels of the CIS benchmark. Thanks!
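To make option 2 concrete, here is a sketch of what a list-based level field might look like. This is a hypothetical format for illustration only -- the plural `levels:` key on a control and the specific IDs are made up, not current ComplianceAsCode syntax:

```yaml
# Hypothetical list-based levels: each control names every CIS
# profile it belongs to, instead of one hierarchical level.
controls:
  - id: cis_1_1_1_1                      # illustrative control ID
    levels:                              # explicit membership list
      - l1_server
      - l1_workstation
    rules:
      - kernel_module_cramfs_disabled    # illustrative rule selection
```

A control that applies only to Server benchmarks would simply omit the workstation entries, and vice versa, with no inheritance between the four categories implied.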