Sockets Sockets Sockets

This has been plaguing me for far too long.

"Could not connect to Scanner at /var/run/ospd/ospd.sock"

(I know there are threads on this. I’ve read them. But I’m still stumped.)

I'm running in a fully updated Buster container on Docker. This problem started with the most recent updates to GSE. I have tried hardcoding the sockets, and I've tried removing all references to the sockets in my startup commands (to use the defaults), but no matter what I do, I cannot get gvmd to talk to ospd-openvas via the socket. In the past I've been able to kludge around problems of this nature with chmod/chown/chgrp and/or soft links, but this time those methods have so far been unsuccessful.

The permissions on the socket are wide open.
root@c38b032cce21:/# ls -l /var/run/ospd/ospd.sock
srwxrwxrwx. 1 root gvm 0 Aug 25 08:50 /var/run/ospd/ospd.sock
(The group is changed via chgrp after ospd-openvas starts)

The scanner is pointed to the correct socket.
root@c38b032cce21:/# su -c "gvmd --get-scanners" gvm
08b69003-5fc2-4037-a479-93b440211c73 OpenVAS /var/run/ospd/ospd.sock 0 OpenVAS Default

I’ve also disabled SELinux on the host to verify that was not causing any issues.

I'm building with ALL default paths. If I understand correctly, this should install everything under "/usr/local", which is essential for the multistage container build processes I'm using. If something essential to this is being installed outside of "/usr/local", that could be my problem, but I've no idea how to figure that bit out.
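I suppose I could grep the install_manifest.txt files that CMake writes into each build directory after make install, something like the following (the /build/<component> layout is just an example of how my stages are arranged), but I'm not sure that would catch everything:

# list anything that was installed outside of /usr/local
grep -vh '^/usr/local' /build/*/install_manifest.txt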

lsof shows the socket open:
root@a3746e7845bd:/# lsof | grep ospd.sock
ospd-open 233 root 5u unix 0x0000000000000000 0t0 23711495 /var/run/ospd/ospd.sock type=STREAM
ospd-open 233 238 ospd-open root 5u unix 0x0000000000000000 0t0 23711495 /var/run/ospd/ospd.sock type=STREAM

But I still get the "Could not connect".

Versions:
gvmd=v21.4.3
openvas=v21.4.2
openvas_smb=v21.4.0
gvm_libs=v21.4.2
openvas_scanner=v21.4.2
gsa=v21.4.2
ospd=v21.4.3
ospd_openvas=v21.4.2
python_gvm=v21.6.0
gvm_tools=v21.6.1

All of the container build bits and startup scripts are on GitHub in the "newbuild" branch.

Start commands:
ospd-openvas --log-file /usr/local/var/log/gvm/ospd-openvas.log --unix-socket /var/run/ospd/ospd.sock --log-level INFO --socket-mode 777

su -c "gvmd -a 0.0.0.0 -p 9390 --osp-vt-update=/var/run/ospd/ospd.sock --max-email-attachment-size=64000000 --max-email-include-size=64000000 --max-email-message-size=64000000" gvm

I have the feeling that I've been staring at this for too long and am failing to see the glaringly obvious answer.

Please help.

Thanks,
Scott

My advice on this is: please follow https://greenbone.github.io/docs/ where I've written down the necessary steps and ensured that the paths are appropriate. You can choose a different install prefix than /usr/local if you want, but for a Docker image I wouldn't do that because you don't get any advantage.

The assumption of the guide is that /run (and therefore /var/run as a symlink to /run) is maintained by systemd. That means the /run/gvm and /run/ospd directories are created by systemd via the service file. That’s of course not the case in a container.

If you are running all the GVM components in a single container I also would just create a single user that runs all services and owns all files.
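In a container, something in the entrypoint has to do that work instead. A rough sketch, using gvm as the example user/group name and the /run/gvm and /run/ospd directories mentioned above:

# one unprivileged user/group to run all services and own all files
useradd -r -M -U gvm

# the runtime directories systemd would normally create via the service files
mkdir -p /run/gvm /run/ospd
chown gvm:gvm /run/gvm /run/ospd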

bricks,
Thanks for confirming the install path bits. I have been through the guide too; using your exact build commands of course causes multiple issues with the container build. I am creating the directories in /run with the startup scripts. But you did give me an idea, and it is definitely permissions related: I can write to the socket as root, but not as the gvm user.

root@a3746e7845bd:/# su -c " echo Test | socat - UNIX-CONNECT:/var/run/ospd/ospd.sock " root
root@a3746e7845bd:/# su -c " echo Test | socat - UNIX-CONNECT:/var/run/ospd/ospd.sock " gvm
2021/08/25 10:45:26 socat[1949] E connect(5, AF=1 "/var/run/ospd/ospd.sock", 25): Permission denied

Plus you have to check the permissions you grant to the container as well. Containers don't simplify anything: if you don't get it to work outside a container, you won't get it to work inside one either.

So I would suggest you start outside a container first: build your GVM and understand the permissions, sockets and paths. Once that is working you can try the containerization.

Also, debugging inside a container context is always either a pain or a security nightmare.

As far as I can remember, gvmd and also ospd-openvas should create the Unix socket themselves, as the user they are running as. And of course they should not run as root. So this seems a bit strange.
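So instead of starting ospd-openvas as root and fixing the permissions afterwards, you could start it directly as the unprivileged user. Roughly, reusing the paths from your start commands above (and assuming the gvm user owns /run/ospd):

su -c "ospd-openvas --unix-socket /var/run/ospd/ospd.sock --log-file /usr/local/var/log/gvm/ospd-openvas.log --log-level INFO" gvm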

Lukas,
I have had this container build working for more than 6 months; this only started recently. But you are making me question myself on one point: I cannot remember whether it has ever completely worked on Buster. There is definitely something unusual going on with the permissions on the socket, though, as I verified that I can actually send a request to and get a response from ospd-openvas via the socket as root, but not as the gvm user.

root@a3746e7845bd:/# su -c " echo '<get_version/>' | socat - UNIX-CLIENT:/var/run/ospd/ospd.sock " root
<get_version_response status="200" status_text="OK">OSP21.4.3OSPd OpenVAS21.4.3openvasOpenVAS 21.4.2202108241024</get_version_response>

Which led me to the solution… (I knew it was staring me in the face.)

The /run/ospd directory was still:
drwxrwx---. 2 root root 4096 Aug 25 09:41 ospd

The permissions on the socket were fine, but since the directory wasn't in the correct group and its "other" permission bits weren't open, the gvm user could not even see the file!
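For anyone who hits this later, the fix went on the directory rather than the socket. Roughly what my startup script does now (gvm being the group my manager user belongs to):

# the socket itself was already 777; the parent directory was what blocked the gvm user
chgrp gvm /run/ospd
chmod 770 /run/ospd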

Thank you bricks and Lukas!!

I just needed a few nudges to make me think about it differently.

And bricks … thanks for pointing out that ospd-openvas doesn't need to run as root. That will simplify things a bit as well.

Only the openvas scanner (the openvas executable) needs to run as root. This is ensured by configuring sudo accordingly.
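A minimal sudoers entry for that looks roughly like this (the path assumes the default /usr/local prefix, and the gvm group name is just the example used above):

# e.g. in /etc/sudoers.d/gvm: allow members of the gvm group to start the scanner as root
%gvm ALL = NOPASSWD: /usr/local/sbin/openvas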
