
All Scans interrupted at 0% #307

Open

kcadmin-github opened this issue Oct 29, 2024 · 7 comments

@kcadmin-github

For about 14 days, all scans have been getting interrupted at 0%.

The container versions used (and tried out) are:

- 22.4.51
- 22.4.52
- 24.10.1

The container has been running for ~2 years, started with the following docker-compose.yml:

#version: "3"
services:
  openvas:
    ports:
      - "127.0.0.1:8080:9392"
    environment:
      - "PASSWORD=CENSORED"
      - "USERNAME=admin"
      - "RELAYHOST=172.17.0.1"
      - "SMTPPORT=25"
      - "REDISDBS=512" # number of Redis DBs to use
      - "QUIET=false"  # dump feed sync noise to /dev/null
      - "NEWDB=false"  # only use this for creating a blank DB
      - "SKIPSYNC=false" # Skips the feed sync on startup.
      - "RESTORE=false"  # This probably not be used from compose... see docs.
      - "DEBUG=false"  # This will cause the container to stop and not actually start gvmd
      - "HTTPS=false"  # wether to use HTTPS or not
      - "GMP=9390"    # to enable see docs
      - "GSATIMEOUT=120" # Configurable session timeout in minutes
    volumes:
      - "openvas:/data"
    container_name: openvas
#    image: immauss/openvas:latest
#    image: immauss/openvas:24.10.1
    image: immauss/openvas:22.4.51
    deploy:
      restart_policy:
        condition: unless-stopped
        delay: 10s
        window: 120s
#  scannable:
#    image: immauss/scannable
#    container_name: scannable

Environment (please complete the following information):

  • OS: Ubuntu 22.04
  • Memory available to OS: 8G
  • Container environment used with version:
docker-buildx-plugin/jammy,now 0.17.1-1~ubuntu.22.04~jammy amd64  [installed, automatic]
docker-ce-cli/jammy,now 5:27.3.1-1~ubuntu.22.04~jammy amd64  [installed]
docker-ce-rootless-extras/jammy,now 5:27.3.1-1~ubuntu.22.04~jammy amd64  [installed, automatic]
docker-ce/jammy,now 5:27.3.1-1~ubuntu.22.04~jammy amd64  [installed]
docker-compose-plugin/jammy,now 2.29.7-1~ubuntu.22.04~jammy amd64  [installed, automatic]

Logs (commands assume the container name is 'openvas')
Please attach the output from one of the following commands:

The only usable log output is the following:

==> /usr/local/var/log/gvm/healthchecks.log <==
 Healthchecks completed with no issues.

==> /usr/local/var/log/gvm/gvmd.log <==
event task:MESSAGE:2024-10-29 08h00.14 utc:32729: Task Host Discovery Internal Network 192.168.111.1/24 (3e1fa134-8ccf-4fb8-a58a-390a8ff89abf) could not be resumed by admin
event task:MESSAGE:2024-10-29 08h00.14 utc:32729: Status of task Host Discovery Internal Network 192.168.111.1/24 (3e1fa134-8ccf-4fb8-a58a-390a8ff89abf) has changed to Requested
event task:MESSAGE:2024-10-29 08h00.14 utc:32729: Task Host Discovery Internal Network 192.168.111.1/24 (3e1fa134-8ccf-4fb8-a58a-390a8ff89abf) has been requested to start by admin

==> /usr/local/var/log/gvm/ospd-openvas.log <==
OSPD[1133] 2024-10-29 08:00:24,252: INFO: (ospd.command.command) Scan d6228fcf-cd03-4804-b5c1-ee065f1df4f5 added to the queue in position 2.

==> /usr/local/var/log/gvm/gvmd.log <==
event task:MESSAGE:2024-10-29 08h00.24 utc:32733: Status of task Host Discovery Internal Network 192.168.111.1/24 (3e1fa134-8ccf-4fb8-a58a-390a8ff89abf) has changed to Queued

==> /usr/local/var/log/gvm/ospd-openvas.log <==
OSPD[1133] 2024-10-29 08:00:33,611: INFO: (ospd.ospd) Currently 1 queued scans.
OSPD[1133] 2024-10-29 08:00:33,856: INFO: (ospd.ospd) Starting scan d6228fcf-cd03-4804-b5c1-ee065f1df4f5.

==> /usr/local/var/log/gvm/gvmd.log <==
event task:MESSAGE:2024-10-29 08h00.34 utc:32733: Status of task Host Discovery Internal Network 192.168.111.1/24 (3e1fa134-8ccf-4fb8-a58a-390a8ff89abf) has changed to Running
md   main:MESSAGE:2024-10-29 08h01.02 utc:32837:    Greenbone Vulnerability Manager version 23.10.0 (DB revision 256)
md manage:   INFO:2024-10-29 08h01.02 utc:32837:    Getting scanners.
md   main:MESSAGE:2024-10-29 08h01.12 utc:32859:    Greenbone Vulnerability Manager version 23.10.0 (DB revision 256)
md manage:   INFO:2024-10-29 08h01.12 utc:32859:    Verifying scanner.

==> /usr/local/var/log/gvm/openvas.log <==
sd   main:MESSAGE:2024-10-29 08h01.14 utc:32863:d6228fcf-cd03-4804-b5c1-ee065f1df4f5: openvas 23.9.0 started
sd   main:MESSAGE:2024-10-29 08h01.14 utc:32863:d6228fcf-cd03-4804-b5c1-ee065f1df4f5: attack_network_init: INIT MQTT: SUCCESS

==> /usr/local/var/log/gvm/healthchecks.log <==
 Healthchecks completed with no issues.

==> /usr/local/var/log/gvm/ospd-openvas.log <==
OSPD[1133] 2024-10-29 08:01:22,719: ERROR: (ospd_openvas.daemon) Task d6228fcf-cd03-4804-b5c1-ee065f1df4f5 was unexpectedly stopped or killed.
OSPD[1133] 2024-10-29 08:01:22,725: INFO: (ospd.ospd) d6228fcf-cd03-4804-b5c1-ee065f1df4f5: Host scan finished.
OSPD[1133] 2024-10-29 08:01:22,728: INFO: (ospd.ospd) d6228fcf-cd03-4804-b5c1-ee065f1df4f5: Host scan got interrupted. Progress: 0, Status: RUNNING
OSPD[1133] 2024-10-29 08:01:22,729: INFO: (ospd.ospd) d6228fcf-cd03-4804-b5c1-ee065f1df4f5: Scan interrupted.
OSPD[1133] 2024-10-29 08:01:25,241: INFO: (ospd.ospd) d6228fcf-cd03-4804-b5c1-ee065f1df4f5: Scan process is dead and its progress is 0
OSPD[1133] 2024-10-29 08:01:25,243: INFO: (ospd.ospd) d6228fcf-cd03-4804-b5c1-ee065f1df4f5: Scan interrupted.
OSPD[1133] 2024-10-29 08:01:25,251: INFO: (ospd.ospd) d6228fcf-cd03-4804-b5c1-ee065f1df4f5: Scan process is dead and its progress is 0
OSPD[1133] 2024-10-29 08:01:25,252: INFO: (ospd.ospd) d6228fcf-cd03-4804-b5c1-ee065f1df4f5: Scan interrupted.
OSPD[1133] 2024-10-29 08:01:25,408: INFO: (ospd.ospd) d6228fcf-cd03-4804-b5c1-ee065f1df4f5: Scan process is dead and its progress is 0
OSPD[1133] 2024-10-29 08:01:25,409: INFO: (ospd.ospd) d6228fcf-cd03-4804-b5c1-ee065f1df4f5: Scan interrupted.
OSPD[1133] 2024-10-29 08:01:25,464: INFO: (ospd.ospd) d6228fcf-cd03-4804-b5c1-ee065f1df4f5: Scan process is dead and its progress is 0
OSPD[1133] 2024-10-29 08:01:25,465: INFO: (ospd.ospd) d6228fcf-cd03-4804-b5c1-ee065f1df4f5: Scan interrupted.

==> /usr/local/var/log/gvm/gvmd.log <==
event task:MESSAGE:2024-10-29 08h01.25 utc:32733: Status of task Host Discovery Internal Network 192.168.111.1/24 (3e1fa134-8ccf-4fb8-a58a-390a8ff89abf) has changed to Interrupted
md   main:MESSAGE:2024-10-29 08h06.21 utc:33046:    Greenbone Vulnerability Manager version 23.10.0 (DB revision 256)
md manage:   INFO:2024-10-29 08h06.21 utc:33046:    Getting scanners.
md   main:MESSAGE:2024-10-29 08h06.31 utc:33055:    Greenbone Vulnerability Manager version 23.10.0 (DB revision 256)
md manage:   INFO:2024-10-29 08h06.31 utc:33055:    Verifying scanner.

==> /usr/local/var/log/gvm/healthchecks.log <==
 Healthchecks completed with no issues.

Does anybody have an idea?

@immauss (Owner)

immauss commented Oct 29, 2024

It looks like ospd-openvas is dying, but it has the same PID in the logs ... so it must be a process that the daemon is spawning that is dying, but it is not telling us why. I've not seen anything like this before.

As a first step, could you please start the container with the latest image and a new/clean volume. You will have to recreate your scans and targets, but if that works fine, it will give me a better idea of where to look.
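
In compose terms, that test amounts to something like the following sketch against the file above (only the changed keys are shown; the volume name "openvas-fresh" is illustrative, not prescribed):

services:
  openvas:
    image: immauss/openvas:latest
    volumes:
      - "openvas-fresh:/data"   # brand-new, empty volume so the container rebuilds its data from scratch

volumes:
  openvas-fresh: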

Thanks,
Scott

@kcadmin-github (Author)

Thanks for your answer - will try that over the next few days; there's also some autumn vacation here. :)

@rbourgaize

Hi,
I have just installed this container fresh on a new system, but I am running into the same issue as above. Even a single-target scan gets interrupted at 0%.
Happy to provide any logs as required.
Cheers
R

@kcadmin-github (Author)

> It looks like ospd-openvas is dying, but it has the same PID in the logs ... so it must be a process that the daemon is spawning that is dying, but it is not telling us why. I've not seen anything like this before.
>
> As a first step, could you please start the container with the latest image and a new/clean volume. You will have to recreate your scans and targets, but if that works fine, it will give me a better idea of where to look.

I'm now using the tag latest, which should correspond to 24.10.1, and before resetting everything I tried it with my current config.
Outcome: scans are running now. I didn't change anything but the tag.

I don't trust the situation yet, but will keep observing.

@rbourgaize

I've torn down and rebuilt the container in a few different fashions and am still getting the same issue. I did find this in the logs, though:
==> /usr/local/var/log/gvm/gvmd.log <==
event task:MESSAGE:2024-11-05 12h58.16 utc:800: Status of task Immediate scan of IP 192.168.31.1 (4bdac5a1-d9cb-4368-8e53-83a1856b102a) has changed to Running
sd main-Message: 12:58:55.357: openvas 23.9.0 started
sd main-Message: 12:58:55.459: attack_network_init: INIT MQTT: SUCCESS
sd main-Message: 12:59:00.888: Vulnerability scan 6b741529-ae92-49b8-9c66-607c3f09515f started: Target has 1 hosts: 192.168.31.1, with max_hosts = 20 and max_checks = 4

(openvas:1204): libgvm boreas-WARNING **: 12:59:00.888: set_socket: failed to open ICMPV4 socket: Operation not permitted

(openvas:1204): libgvm boreas-WARNING **: 12:59:00.889: start_alive_detection. Boreas could not initialise alive detection. Boreas was not able to open a new socket. Exit Boreas.
sd main-Message: 12:59:01.018: Vulnerability scan 6b741529-ae92-49b8-9c66-607c3f09515f finished in 6 seconds: 0 alive hosts of 1

@rbourgaize
Copy link

So I had to add this to the compose file:
privileged: true
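
In the compose file from the top of this issue, that flag sits at the service level, for example (a minimal sketch of just the relevant keys):

services:
  openvas:
    image: immauss/openvas:latest
    privileged: true   # run the container with full capabilities, including raw-socket access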

Now the scans are running. Initially I thought it was due to my root account being locked, after reading this: https://forum.greenbone.net/t/interrupted-at-0-libgvm-boreas-failed-to-open-icmpv4/9240/2

But after enabling the root account, this did not help. I then stumbled on this article:
https://forum.greenbone.net/t/failed-to-open-icmpv4-socket-operation-not-permitted/13791

But as I am running with podman rather than docker, I had to add the privileged flag to the compose file rather than append --privileged to the run command.

Hope this helps.
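
For context: the failing call in the log above is the opening of a raw ICMPv4 socket, which requires the CAP_NET_RAW capability, and podman's default capability set may not include NET_RAW. On a root-mode engine, a narrower alternative to full privileged mode might be to grant just that capability (an untested sketch; under rootless podman it may not be sufficient, which could be why privileged: true was needed here):

services:
  openvas:
    cap_add:
      - NET_RAW   # allow raw sockets for Boreas alive detection without full privileged mode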

@immauss (Owner)

immauss commented Nov 7, 2024

Curious .... Did anything change for podman? An update or otherwise?

My podman-fu is rather weak ...

Does podman default to running this as something other than root?

Thanks,
-Scott
