
Error: Missing privilege separation directory: /run/sshd #752

Closed
fungiboletus opened this issue Feb 28, 2024 · 13 comments · Fixed by #784

@fungiboletus

Description

Running the role ssh_hardening on Debian 12.0 Bookworm seems to fail at the Create sshd_config and set permissions to root/600 step.

Reproduction steps

Run the ssh hardening role on Debian 12 using the default settings.

Current Behavior

TASK [devsec.hardening.ssh_hardening : Create sshd_config and set permissions to root/600] ******************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
fatal: [debian-12-server]: FAILED! => {"changed": false, "checksum": "abcd", "exit_status": 255, "msg": "failed to validate", "stderr": "Missing privilege separation directory: /run/sshd\r\n", "stderr_lines": ["Missing privilege separation directory: /run/sshd"], "stdout": "", "stdout_lines": []}

Expected Behavior

Success.

OS / Environment

Debian GNU/Linux 12 (bookworm)

Ansible Version

ansible [core 2.16.3]
  config file = /Users/fungiboletus/.ansible.cfg
  configured module search path = ['/Users/fungiboletus/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /opt/homebrew/Cellar/ansible/9.2.0/libexec/lib/python3.12/site-packages/ansible
  ansible collection location = /Users/fungiboletus/.ansible/collections:/usr/share/ansible/collections
  executable location = /opt/homebrew/bin/ansible
  python version = 3.12.2 (main, Feb  6 2024, 20:19:44) [Clang 15.0.0 (clang-1500.1.0.2.5)] (/opt/homebrew/Cellar/ansible/9.2.0/libexec/bin/python)
  jinja version = 3.1.3
  libyaml = True

Collection Version

devsec.hardening:9.0.1

Additional information

...

@rndmh3ro
Member

Can you confirm that the ssh-server is installed and running on the server?

@fungiboletus
Author

Hmm, it was a little while ago. I did have an SSH server installed (and probably running) since it was a remote server, but I may have had zero active connections at the time, as I was using Teleport SSH access instead.

I eventually did a mkdir /run/sshd and it worked, I think. I'm closing the issue, but let me know if you want to reopen it.

@rndmh3ro
Member

Since this doesn't happen in our tests or production environments, I don't think we have a bigger problem here.

@markuman

I'm facing the same error using devsec.hardening 9.0.1 on Ubuntu 24.04 on AWS EC2.


@rndmh3ro
Member

@markuman, can you confirm that the ssh-server is installed and running on your instance?

@markuman

> @markuman, can you confirm that the ssh-server is installed and running on your instance?

Yes, it is.

@rndmh3ro
Member

I just tested it myself on Ubuntu 24.04 (ami-01e444924a2233b07) and it worked. I connected from outside the machine and only applied the ssh-hardening collection.

@markuman, can you please share which AMI you used and your playbook?

rndmh3ro reopened this May 27, 2024
@markuman

> I just tested it myself on Ubuntu 24.04 (ami-01e444924a2233b07) and it worked. I connected from outside the machine and only applied the ssh-hardening collection.
>
> @markuman, can you please share which AMI you used and your playbook?

It's ami-0d6dc6dcb667a9515.

devsec.hardening is applied here via cloud-init (ansible-pull), so it is executed on localhost.
Maybe that makes a difference, and sshd only creates the directory after a connection is made. In our case, there are no active SSH connections at this point. ...just guessing...
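
For context, a minimal user-data sketch of such a setup (the repository URL and playbook name are placeholders, just to illustrate the ansible-pull-on-localhost scenario):

#cloud-config
# The collection runs via ansible-pull on localhost during first boot,
# so no SSH connection has been made yet and the socket-activated
# ssh.service has never started (hence no /run/sshd).
package_update: true
packages:
  - ansible
runcmd:
  # placeholder repository URL and playbook name
  - ansible-pull -U https://example.com/infra.git site.yml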

@rndmh3ro
Member

I can confirm this and the problem is this: https://discourse.ubuntu.com/t/sshd-now-uses-socket-based-activation-ubuntu-22-10-and-later/30189

> As of version 1:9.0p1-1ubuntu1 of openssh-server in Kinetic Kudu (Ubuntu 22.10), OpenSSH in Ubuntu is configured by default to use systemd socket activation. This means that sshd will not be started until an incoming connection request is received. This has been done to reduce the memory consumed by Ubuntu Server instances by default, which is of particular interest with Ubuntu running in VMs or LXD containers: by not running sshd when it is not used, we save at least 3MiB of memory in each instance, representing a savings of roughly 5% on an idle, pristine kinetic container.

So, what should we do here?

Start ssh? Fail the playbook early if ssh is not activated? Validate only if ssh is running? Document that ssh should be running?
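
For illustration, the "start ssh" option could look roughly like this (a sketch only, assuming a Debian/Ubuntu host whose ssh.service creates /run/sshd on start; this is not something the role currently does):

- name: Start ssh.service so /run/sshd exists before the config is validated
  ansible.builtin.systemd:
    name: ssh.service
    state: started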

@schurzi
Contributor

schurzi commented May 27, 2024

We could try one of the other validation options; maybe some of them will not need this directory (currently we are using -T and thus the most comprehensive checks):

       -G      Parse and print configuration file.  Check the validity of the configuration file, output the effective configuration to stdout and then exit.  Optionally, Match rules may be applied by specifying the connection parameters using one or more -C options.

       -T      Extended test mode.  Check the validity of the configuration file, output the effective configuration to stdout and then exit.  Optionally, Match rules may be applied by specifying the connection parameters using one or more -C options.  This is similar to the -G flag, but it includes the additional testing performed by the -t flag.

       -t      Test mode.  Only check the validity of the configuration file and sanity of the keys.  This is useful for updating sshd reliably as configuration options may change.
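
For example, validating with -G instead could look roughly like this in the template task (a sketch only; the template name and the exact validate command are illustrative, not necessarily what the role ships):

- name: Create sshd_config and set permissions to root/600
  ansible.builtin.template:
    src: sshd_config.j2  # placeholder template name
    dest: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: "0600"
    validate: /usr/sbin/sshd -G -f %s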

@rndmh3ro
Member

It does seem to work with -G.

@jeanmonet
Contributor

jeanmonet commented Jul 25, 2024

Fresh install of Ubuntu 24.04 LTS (latest current) inside a local VM, to which I connect via SSH to run the playbook; same issue:

TASK [dev-sec.ssh-hardening : Create sshd_config and set permissions to root/600] **************************************************
fatal: [ubuntu_srv_22_usr]: FAILED! => {
    "changed": false,
    "checksum": "ef2d6a2e3ff7dbc4c8f6661e9e8d527362065c54",
    "exit_status": 255
}

STDERR:

Missing privilege separation directory: /run/sshd



MSG:

failed to validate

How to get around this?

EDIT: what I did is:

hardening.yml:

# Add this:
- name: Ensure privilege separation directory exists
  ansible.builtin.file:
    path: /run/sshd
    state: directory
    owner: root
    group: root
    mode: '0755'

# Before this:
- name: Create sshd_config and set permissions to root/600
  ...

Post-role execution:

Execute the following after the role finishes execution and before restarting sshd:

    - name: Unmask ssh.socket (systemd) (prevent 'Failed to start sshd.service - Unit ssh.socket is masked.')
      ansible.builtin.systemd:
        name: ssh.socket
        state: started
        enabled: true
        masked: false

schurzi linked a pull request Aug 2, 2024 that will close this issue
@schurzi
Contributor

schurzi commented Aug 2, 2024

@jeanmonet, thank you very much; your input was valuable in identifying the solution we want to implement. The next version of our collection won't need these kinds of workarounds.
