
gvmd@gvmd FATAL: remaining connection slots are reserved for non-replication superuser connections #2038

Closed
MitchDrage opened this issue Jul 7, 2023 · 19 comments · Fixed by #2042
Labels
bug Something isn't working

Comments

@MitchDrage

Expected behavior

The system should boot and operate correctly.

Actual behavior

During the initial feed update of a Community Containers install, messages are displayed indicating that PostgreSQL has hit its connection limit.

Steps to reproduce

  1. Build a fresh install of greenbone-community-containers (a setup sketch follows below)
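
A minimal sketch of that setup flow, following the Greenbone community-container documentation (the compose file URL and project name below are taken from the docs and may differ for other releases):

# Fetch the 22.4 community compose file (URL per the Greenbone docs; adjust if it has moved)
curl -f -L https://greenbone.github.io/docs/latest/_static/docker-compose-22.4.yml -o docker-compose.yml

# Pull the images and start the stack
docker compose -f docker-compose.yml -p greenbone-community-edition pull
docker compose -f docker-compose.yml -p greenbone-community-edition up -d

# Follow the gvmd logs to watch the initial feed update
docker compose -f docker-compose.yml -p greenbone-community-edition logs -f gvmd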

GVM versions

gsa: (gsad --version) - Greenbone Security Assistant 22.05.1

gvm: (gvmd --version) - Greenbone Vulnerability Manager 22.5.3

openvas-scanner: (openvas --version) - OpenVAS 22.4.1

gvm-libs: - gvm-libs 22.4.1~dev1

psql: psql (PostgreSQL) 13.11 (Debian 13.11-0+deb11u1)

Environment

Operating system:

Community Containers

Logfiles

greenbone-community-edition-gvmd-1                 | waiting 1 second for ready postgres container
greenbone-community-edition-gvmd-1                 | waiting 1 second for ready postgres container
greenbone-community-edition-gvmd-1                 | waiting 1 second for ready postgres container
greenbone-community-edition-gvmd-1                 | waiting 1 second for ready postgres container
greenbone-community-edition-gvmd-1                 | waiting 1 second for ready postgres container
greenbone-community-edition-gvmd-1                 | waiting 1 second for ready postgres container
greenbone-community-edition-gvmd-1                 | waiting 1 second for ready postgres container
greenbone-community-edition-gvmd-1                 | waiting 1 second for ready postgres container
greenbone-community-edition-gvmd-1                 | waiting 1 second for ready postgres container
greenbone-community-edition-gvmd-1                 | waiting 1 second for ready postgres container
greenbone-community-edition-gvmd-1                 |  connection 
greenbone-community-edition-gvmd-1                 | ------------
greenbone-community-edition-gvmd-1                 |  connected
greenbone-community-edition-gvmd-1                 | (1 row)
greenbone-community-edition-gvmd-1                 | 
greenbone-community-edition-gvmd-1                 | 
greenbone-community-edition-gvmd-1                 | User exists already.
greenbone-community-edition-gvmd-1                 | starting gvmd
greenbone-community-edition-gvmd-1                 | md   main:MESSAGE:2023-07-07 01h17.32 utc:446:    Greenbone Vulnerability Manager version 22.5.3 (DB revision 255)
greenbone-community-edition-gvmd-1                 | md manage:   INFO:2023-07-07 01h17.32 utc:446:    Creating user.
greenbone-community-edition-gvmd-1                 | md manage:MESSAGE:2023-07-07 01h17.32 utc:446: No SCAP database found
greenbone-community-edition-gvmd-1                 | md   main:MESSAGE:2023-07-07 01h17.33 utc:448:    Greenbone Vulnerability Manager version 22.5.3 (DB revision 255)
greenbone-community-edition-gvmd-1                 | md manage:   INFO:2023-07-07 01h17.33 utc:448:    Getting users.
greenbone-community-edition-gvmd-1                 | md manage:MESSAGE:2023-07-07 01h17.33 utc:448: No SCAP database found
greenbone-community-edition-gvmd-1                 | md   main:MESSAGE:2023-07-07 01h17.33 utc:451:    Greenbone Vulnerability Manager version 22.5.3 (DB revision 255)
greenbone-community-edition-gvmd-1                 | md manage:   INFO:2023-07-07 01h17.33 utc:451:    Modifying setting.
greenbone-community-edition-gvmd-1                 | md manage:MESSAGE:2023-07-07 01h17.33 utc:451: No SCAP database found
greenbone-community-edition-gvmd-1                 | md   main:MESSAGE:2023-07-07 01h17.33 utc:452:    Greenbone Vulnerability Manager version 22.5.3 (DB revision 255)
greenbone-community-edition-gvmd-1                 | md manage:MESSAGE:2023-07-07 01h17.33 utc:453: No SCAP database found
greenbone-community-edition-gvmd-1                 | md manage:WARNING:2023-07-07 01h17.34 UTC:473: update_scap: No SCAP db present, rebuilding SCAP db from scratch
greenbone-community-edition-gvmd-1                 | md manage:   INFO:2023-07-07 01h17.34 UTC:474: osp_scanner_feed_version: No feed version available yet. OSPd OpenVAS is still starting
greenbone-community-edition-gvmd-1                 | md manage:   INFO:2023-07-07 01h17.34 UTC:473: update_scap: Updating data from feed
greenbone-community-edition-gvmd-1                 | md manage:   INFO:2023-07-07 01h17.34 UTC:473: Updating CPEs
greenbone-community-edition-pg-gvm-1               | 2023-07-07 02:46:29.943 UTC [1004] gvmd@gvmd FATAL:  remaining connection slots are reserved for non-replication superuser connections
greenbone-community-edition-gvmd-1                 | md manage:WARNING:2023-07-07 02h46.29 utc:1673: sql_open: PQconnectPoll failed
greenbone-community-edition-gvmd-1                 | md manage:WARNING:2023-07-07 02h46.29 utc:1673: sql_open: PQerrorMessage (conn): FATAL:  remaining connection slots are reserved for non-replication superuser connections
greenbone-community-edition-gvmd-1                 | md manage:WARNING:2023-07-07 02h46.29 utc:1673: init_manage_open_db: sql_open failed
greenbone-community-edition-pg-gvm-1               | 2023-07-07 02:46:40.219 UTC [1006] gvmd@gvmd FATAL:  remaining connection slots are reserved for non-replication superuser connections
greenbone-community-edition-gvmd-1                 | md manage:WARNING:2023-07-07 02h46.40 utc:1674: sql_open: PQconnectPoll failed
greenbone-community-edition-gvmd-1                 | md manage:WARNING:2023-07-07 02h46.40 utc:1674: sql_open: PQerrorMessage (conn): FATAL:  remaining connection slots are reserved for non-replication superuser connections
greenbone-community-edition-gvmd-1                 | md manage:WARNING:2023-07-07 02h46.40 utc:1674: init_manage_open_db: sql_open failed
@MitchDrage added the bug label on Jul 7, 2023
@benbrummer

benbrummer commented Jul 7, 2023

The Docker deployment works when downgrading gvmd to 22.5.2.

Update: 22.5.2 has the same issue as well.

@bjoernricks
Contributor

bjoernricks commented Jul 7, 2023

This issue originated from https://forum.greenbone.net/t/sql-open-pqerrormessage-conn-fatal-remaining-connection-slots-are-reserved-for-non-replication-superuser-connections/15137

As far as I understand, using gvmd 22.5.1 should work because the issue was introduced with #2028

@alex-feel

alex-feel commented Jul 7, 2023

I have encountered the issue with PostgreSQL connections after building and running Greenbone Community Edition 22.4 from the source code on Ubuntu 22.04 LTS under Windows Subsystem for Linux (WSL).

During normal operation, I encountered the following errors:

md manage:WARNING:2023-07-07 11h57.55 utc:16975: init_manage_open_db: sql_open failed
md manage:WARNING:2023-07-07 11h57.55 utc:16974: sql_open: PQconnectPoll failed
md manage:WARNING:2023-07-07 11h57.55 utc:16974: sql_open: PQerrorMessage (conn): connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL:  remaining connection slots are reserved for non-replication superuser connections

After I saw this in the logs, the web interface stopped working and only three tabs remained. I ran sudo systemctl restart gvmd many times and the process resumed each time. For example, at first I received the message:

md   main:MESSAGE:2023-07-07 13h19.47 utc:5411:    Greenbone Vulnerability Manager version 22.5.2 (DB revision 255)
md manage:MESSAGE:2023-07-07 13h19.47 utc:5411: No SCAP database found
md manage:WARNING:2023-07-07 13h19.49 UTC:5433: update_scap: No SCAP db present, rebuilding SCAP db from scratch
md manage:   INFO:2023-07-07 13h19.49 UTC:5433: update_scap: Updating data from feed
md manage:   INFO:2023-07-07 13h19.49 UTC:5433: Updating CPEs
md manage:   INFO:2023-07-07 13h22.20 UTC:5433: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2014.xml
md manage:   INFO:2023-07-07 13h22.28 UTC:5433: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2002.xml
md manage:   INFO:2023-07-07 13h22.35 UTC:5433: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2022.xml
md manage:WARNING:2023-07-07 13h23.58 utc:6217: sql_open: PQconnectPoll failed
md manage:WARNING:2023-07-07 13h23.58 utc:6217: sql_open: PQerrorMessage (conn): connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL:  remaining connection slots are reserved for non-replication superuser connections

Then, after several restarts, the progress moved further and I saw the following:

md   main:MESSAGE:2023-07-07 13h28.13 utc:6377:    Greenbone Vulnerability Manager version 22.5.2 (DB revision 255)
md manage:MESSAGE:2023-07-07 13h28.13 utc:6377: No SCAP database found
md manage:WARNING:2023-07-07 13h28.15 UTC:6400: update_scap: No SCAP db present, rebuilding SCAP db from scratch
md manage:   INFO:2023-07-07 13h28.16 UTC:6400: update_scap: Updating data from feed
md manage:   INFO:2023-07-07 13h28.16 UTC:6400: Updating CPEs
md manage:   INFO:2023-07-07 13h31.14 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2014.xml
md manage:   INFO:2023-07-07 13h31.21 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2002.xml
md manage:   INFO:2023-07-07 13h31.25 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2022.xml
md manage:   INFO:2023-07-07 13h32.01 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2019.xml
md manage:   INFO:2023-07-07 13h32.36 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2006.xml
md manage:   INFO:2023-07-07 13h32.42 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2013.xml
md manage:   INFO:2023-07-07 13h32.49 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2020.xml
md manage:   INFO:2023-07-07 13h33.15 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2018.xml
md manage:   INFO:2023-07-07 13h33.42 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2010.xml
md manage:   INFO:2023-07-07 13h33.49 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2012.xml
md manage:   INFO:2023-07-07 13h33.57 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2003.xml
md manage:   INFO:2023-07-07 13h33.59 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2023.xml
md manage:   INFO:2023-07-07 13h34.14 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2005.xml
md manage:   INFO:2023-07-07 13h34.19 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2004.xml
md manage:   INFO:2023-07-07 13h34.23 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2021.xml
md manage:   INFO:2023-07-07 13h35.02 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2016.xml
md manage:   INFO:2023-07-07 13h35.16 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2007.xml
md manage:   INFO:2023-07-07 13h35.23 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2008.xml
md manage:   INFO:2023-07-07 13h35.34 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2017.xml
md manage:   INFO:2023-07-07 13h35.57 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2011.xml
md manage:   INFO:2023-07-07 13h36.08 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2009.xml
md manage:   INFO:2023-07-07 13h36.20 UTC:6400: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2015.xml
md manage:   INFO:2023-07-07 13h36.32 UTC:6400: Updating CVSS scores and CVE counts for CPEs
md manage:   INFO:2023-07-07 13h39.12 UTC:6400: Updating placeholder CPEs
md manage:   INFO:2023-07-07 13h39.43 UTC:6400: Updating Max CVSS for DFN-CERT
md manage:   INFO:2023-07-07 13h39.49 UTC:6400: Updating DFN-CERT CVSS max succeeded.
md manage:   INFO:2023-07-07 13h39.49 UTC:6400: Updating Max CVSS for CERT-Bund
md manage:   INFO:2023-07-07 13h39.53 UTC:6400: Updating CERT-Bund CVSS max succeeded.
md manage:   INFO:2023-07-07 13h39.56 UTC:6400: update_scap_end: Updating SCAP info succeeded

However, when I added a target for scanning, the following appeared in the log:

event target:MESSAGE:2023-07-07 13h41.21 UTC:7368: Target Target for immediate scan of IP app.example.com - 2023-07-07 13:41:21 (6e1b7247-56c6-41ea-bf91-1abe2ed7dd84) has been created by admin
event task:MESSAGE:2023-07-07 13h41.21 UTC:7368: Status of task  (54d76b19-39d7-4f6d-a21d-1c80cfbf279f) has changed to New
event task:MESSAGE:2023-07-07 13h41.21 UTC:7368: Task Immediate scan of IP app.example.com (54d76b19-39d7-4f6d-a21d-1c80cfbf279f) has been created by admin
event task:MESSAGE:2023-07-07 13h41.22 UTC:7368: Status of task Immediate scan of IP app.example.com (54d76b19-39d7-4f6d-a21d-1c80cfbf279f) has changed to Requested
event task:MESSAGE:2023-07-07 13h41.22 UTC:7368: Task Immediate scan of IP app.example.com (54d76b19-39d7-4f6d-a21d-1c80cfbf279f) has been requested to start by admin
event wizard:MESSAGE:2023-07-07 13h41.22 UTC:7368: Wizard quick_first_scan has been run by admin
md manage:WARNING:2023-07-07 13h41.22 utc:7413: sql_open: PQconnectPoll failed
md manage:WARNING:2023-07-07 13h41.22 utc:7413: sql_open: PQerrorMessage (conn): connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL:  remaining connection slots are reserved for non-replication superuser connections
md manage:WARNING:2023-07-07 13h41.22 utc:7413: init_manage_open_db: sql_open failed
md manage:WARNING:2023-07-07 13h41.22 utc:7412: sql_open: PQconnectPoll failed
md manage:WARNING:2023-07-07 13h41.22 utc:7412: sql_open: PQerrorMessage (conn): connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL:  remaining connection slots are reserved for non-replication superuser connections
md manage:WARNING:2023-07-07 13h41.22 utc:7412: init_manage_open_db: sql_open failed

Here is my current system environment:

Operating System: Windows 11 Pro
WSL Version: WSL 2
Linux Distribution: Ubuntu 22.04 LTS
Greenbone Community Edition Version: 22.4

@malwarework

malwarework commented Jul 7, 2023

I found a workaround, but it's not a good one. After starting pg-gvm, I connected to the container and increased the value of max_connections (thanks to this article).
After restarting all services, the error is gone.
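
For anyone applying the same workaround, it looks roughly like this (the service name, the postgres container user, and the value 300 are assumptions; adjust them to your deployment):

# Raise the PostgreSQL connection limit inside the pg-gvm container
docker compose exec -u postgres pg-gvm \
  psql -c "ALTER SYSTEM SET max_connections = 300;"

# max_connections only takes effect after a server restart
docker compose restart pg-gvm

# Verify the new limit
docker compose exec -u postgres pg-gvm psql -c "SHOW max_connections;"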

@dosera

dosera commented Jul 8, 2023

This issue originated from https://forum.greenbone.net/t/sql-open-pqerrormessage-conn-fatal-remaining-connection-slots-are-reserved-for-non-replication-superuser-connections/15137

As far as I understand, using gvmd 22.5.1 should work because the issue was introduced with #2028

Downgrade did not work for me due to gvmd complaining about version inconsistencies:

md   main:CRITICAL:2023-07-08 04h22.44 utc:52:  gvmd: database is wrong version

@xenago

xenago commented Jul 10, 2023

Same issue here, downgrading doesn't work.

md   main:MESSAGE:2023-07-10 01h54.38 utc:27:    Greenbone Vulnerability Manager version 22.5.1 (DB revision 254)
md manage:MESSAGE:2023-07-10 01h54.38 utc:28: check_db_versions: database version of database: 255
md manage:MESSAGE:2023-07-10 01h54.38 utc:28: check_db_versions: database version supported by manager: 254
md   main:CRITICAL:2023-07-10 01h54.38 utc:28: gvmd: database is wrong version
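
If you want to check which DB revision your existing database is actually at before attempting a downgrade, something like the following should show it (the meta table and the database_version setting name are assumptions about gvmd's internal schema; verify against your gvmd version):

# Query the gvmd database for its schema revision
docker compose exec -u postgres pg-gvm \
  psql -d gvmd -c "SELECT value FROM meta WHERE name = 'database_version';"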

@mikadmswnrto

mikadmswnrto commented Jul 10, 2023

(quoting @alex-feel's full report and logs above)

Same problem here, any solution? After restarting gvmd the web interface goes back to normal, but after several seconds the error returns.

@bjoernricks
Contributor

We are actively working on this issue. As always, debugging and implementing the best possible solution takes some time.

0463e70 from #2028 has been identified as the culprit. Either we will implement a fix or we will revert that change in the next few days.

@swarooplendi

Do we have a document that says which version of the PostgreSQL container greenbone/pg-gvm-build:<version> needs to be used with a specific version of the gvmd image greenbone/gvmd:<version>? Even when we use the 22.04 version for all containers in the compose file, we are getting check_db_versions: database version supported by manager: 254 and gvmd: database is wrong version. That would be a proper workaround until this is fixed.

@mattmundell
Contributor

I'm hoping /pull/2042 will solve this. When trying to reproduce I saw many gvmd processes building up when I was using GSA, and I guess the same issue is causing the database connections to run out.

@mikadmswnrto

I'm hoping /pull/2042 will solve this. When trying to reproduce I saw many gvmd processes building up when I was using GSA, and I guess the same issue is causing the database connections to run out.

I tried to increase max_connections to 300 and still experience the error.

@mattmundell
Contributor

I tried to increase max_connections to 300 and still experience the error.

What's happening for me is that gvmd is keeping the connections from gsad open, and this is causing postgres to reach max_connections. /pull/2042 solves the issue.
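
One way to confirm this behaviour is to compare how many connections the gvmd role is holding with the configured server limit (service and role names below are assumptions based on the community setup):

# Count open connections held by the gvmd role and compare with the limit
docker compose exec -u postgres pg-gvm \
  psql -c "SELECT count(*) FROM pg_stat_activity WHERE usename = 'gvmd';"
docker compose exec -u postgres pg-gvm psql -c "SHOW max_connections;"

# Check whether gvmd worker processes keep piling up
docker compose top gvmd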

@tomassrnka

I'm hoping /pull/2042 will solve this. When trying to reproduce I saw many gvmd processes building up when I was using GSA, and I guess the same issue is causing the database connections to run out.

Thank you! I rebuilt gvmd with the patch applied and it seems to fix the issue.

@nunofranciscomoreira

Do we have a document that says which version of the PostgreSQL container greenbone/pg-gvm-build:<version> needs to be used with a specific version of the gvmd image greenbone/gvmd:<version>? Even when we use the 22.04 version for all containers in the compose file, we are getting check_db_versions: database version supported by manager: 254 and gvmd: database is wrong version. That would be a proper workaround until this is fixed.

You can also pin the pg-gvm image to a specific tag:
image: greenbone/pg-gvm:22.5.1

@bjoernricks
Contributor

You can also pin the pg-gvm image to a specific tag: image: greenbone/pg-gvm:22.5.1

This won't change anything. pg-gvm:stable is the same as pg-gvm:22, pg-gvm:22.5, pg-gvm:22.5.1 and also pg-gvm:latest. Additionally, as I wrote, pg-gvm doesn't do what you currently expect: your data in the database won't be rolled back to an older version.
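
To verify that these tags resolve to the same image, you can compare their digests, for example:

# Pull two of the tags and compare the content digests
docker pull greenbone/pg-gvm:stable
docker pull greenbone/pg-gvm:22.5.1
docker images --digests greenbone/pg-gvm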

@dosera

dosera commented Jul 11, 2023

You can also pin the pg-gvm image to a specific tag: image: greenbone/pg-gvm:22.5.1

This won't change anything. pg-gvm:stable is the same as pg-gvm:22, pg-gvm:22.5, pg-gvm:22.5.1 and also pg-gvm:latest. Additionally, as I wrote, pg-gvm doesn't do what you currently expect: your data in the database won't be rolled back to an older version.

Are you tagging like this intentionally? If so, may I ask about the reason for it?
I was honestly a little stunned, since I am using the stable-tagged image and didn't expect such a regression.

@nunofranciscomoreira

nunofranciscomoreira commented Jul 11, 2023

You can also pin the pg-gvm image to a specific tag: image: greenbone/pg-gvm:22.5.1

This won't change anything. pg-gvm:stable is the same as pg-gvm:22, pg-gvm:22.5, pg-gvm:22.5.1 and also pg-gvm:latest. Additionally, as I wrote, pg-gvm doesn't do what you currently expect: your data in the database won't be rolled back to an older version.

If you say so...
I'm telling you how to solve the problem with the minor version mismatch. If it were as you say, that pg-gvm is the same for all tags, then the logs wouldn't say otherwise and report a minor version mismatch.

What I reported is that using the same version for both images works. It worked for me: it has been running on a fresh install with both greenbone/pg-gvm:22.5.1 and greenbone/gvmd:22.5.1.

Again, do as you please, this is a community branch.

@nunofranciscomoreira

You can also pin the pg-gvm image to a specific tag: image: greenbone/pg-gvm:22.5.1

This won't change anything. pg-gvm:stable is the same as pg-gvm:22, pg-gvm:22.5, pg-gvm:22.5.1 and also pg-gvm:latest. Additionally, as I wrote, pg-gvm doesn't do what you currently expect: your data in the database won't be rolled back to an older version.

Are you tagging like this intentionally? If so, may I ask about the reason for it? I was honestly a little stunned, since I am using the stable-tagged image and didn't expect such a regression.

Yes, I'm tagging like that based on the suggestions here.

My docker-compose.yml:

services:
  vulnerability-tests:
    image: greenbone/vulnerability-tests
    environment:
      STORAGE_PATH: /var/lib/openvas/22.04/vt-data/nasl
    volumes:
      - vt_data_vol:/mnt

  notus-data:
    image: greenbone/notus-data
    volumes:
      - notus_data_vol:/mnt

  scap-data:
    image: greenbone/scap-data
    volumes:
      - scap_data_vol:/mnt

  cert-bund-data:
    image: greenbone/cert-bund-data
    volumes:
      - cert_data_vol:/mnt

  dfn-cert-data:
    image: greenbone/dfn-cert-data
    volumes:
      - cert_data_vol:/mnt
    depends_on:
      - cert-bund-data

  data-objects:
    image: greenbone/data-objects
    volumes:
      - data_objects_vol:/mnt

  report-formats:
    image: greenbone/report-formats
    volumes:
      - data_objects_vol:/mnt
    depends_on:
      - data-objects

  gpg-data:
    image: greenbone/gpg-data
    volumes:
      - gpg_data_vol:/mnt

  redis-server:
    image: greenbone/redis-server
    restart: on-failure
    volumes:
      - redis_socket_vol:/run/redis/

  pg-gvm:
    image: greenbone/pg-gvm:22.5.1
    restart: on-failure
    volumes:
      - psql_data_vol:/var/lib/postgresql
      - psql_socket_vol:/var/run/postgresql

  gvmd:
    image: greenbone/gvmd:22.5.1
    restart: on-failure
    volumes:
      - gvmd_data_vol:/var/lib/gvm
      - scap_data_vol:/var/lib/gvm/scap-data/
      - cert_data_vol:/var/lib/gvm/cert-data
      - data_objects_vol:/var/lib/gvm/data-objects/gvmd
      - vt_data_vol:/var/lib/openvas/plugins
      - psql_data_vol:/var/lib/postgresql
      - gvmd_socket_vol:/run/gvmd
      - ospd_openvas_socket_vol:/run/ospd
      - psql_socket_vol:/var/run/postgresql
    depends_on:
      pg-gvm:
        condition: service_started
      scap-data:
        condition: service_completed_successfully
      cert-bund-data:
        condition: service_completed_successfully
      dfn-cert-data:
        condition: service_completed_successfully
      data-objects:
        condition: service_completed_successfully
      report-formats:
        condition: service_completed_successfully

  gsa:
    image: greenbone/gsa:stable
    restart: on-failure
    ports:
      - 9392:80
    volumes:
      - gvmd_socket_vol:/run/gvmd
    depends_on:
      - gvmd

  ospd-openvas:
    image: greenbone/ospd-openvas:stable
    restart: on-failure
    init: true
    hostname: ospd-openvas.local
    cap_add:
      - NET_ADMIN # for capturing packages in promiscuous mode
      - NET_RAW # for raw sockets e.g. used for the boreas alive detection
    security_opt:
      - seccomp=unconfined
      - apparmor=unconfined
    command:
      [
        "ospd-openvas",
        "-f",
        "--config",
        "/etc/gvm/ospd-openvas.conf",
        "--mqtt-broker-address",
        "mqtt-broker",
        "--notus-feed-dir",
        "/var/lib/notus/advisories",
        "-m",
        "666"
      ]
    volumes:
      - gpg_data_vol:/etc/openvas/gnupg
      - vt_data_vol:/var/lib/openvas/plugins
      - notus_data_vol:/var/lib/notus
      - ospd_openvas_socket_vol:/run/ospd
      - redis_socket_vol:/run/redis/
    depends_on:
      redis-server:
        condition: service_started
      gpg-data:
        condition: service_completed_successfully
      vulnerability-tests:
        condition: service_completed_successfully

  mqtt-broker:
    restart: on-failure
    image: greenbone/mqtt-broker
    ports:
      - 1883:1883
    networks:
      default:
        aliases:
          - mqtt-broker
          - broker

  notus-scanner:
    restart: on-failure
    image: greenbone/notus-scanner:stable
    volumes:
      - notus_data_vol:/var/lib/notus
      - gpg_data_vol:/etc/openvas/gnupg
    environment:
      NOTUS_SCANNER_MQTT_BROKER_ADDRESS: mqtt-broker
      NOTUS_SCANNER_PRODUCTS_DIRECTORY: /var/lib/notus/products
    depends_on:
      - mqtt-broker
      - gpg-data
      - vulnerability-tests

  gvm-tools:
    image: greenbone/gvm-tools
    volumes:
      - gvmd_socket_vol:/run/gvmd
      - ospd_openvas_socket_vol:/run/ospd
    depends_on:
      - gvmd
      - ospd-openvas

volumes:
  gpg_data_vol:
  scap_data_vol:
  cert_data_vol:
  data_objects_vol:
  gvmd_data_vol:
  psql_data_vol:
  vt_data_vol:
  notus_data_vol:
  psql_socket_vol:
  gvmd_socket_vol:
  ospd_openvas_socket_vol:
  redis_socket_vol:

@iampilot

iampilot commented Jul 11, 2023

Maybe the Greenbone Community Containers still have the same problem. If a maintainer has some free time, please fix it, thanks!
From a kind boy in China
