gvm-cli connections fail under openvassd load

I’m using gvm-cli (gvm-cli 2.0.0.beta1, API version 1.0.0.dev1) to connect to openvasmd on the local server to start and manage tasks, download reports, etc.

An openvassd scan spawns a lot of processes, and as a result the system load increases.

At some level of load on my system, gvm-cli fails to connect to openvasmd and throws an error. If the load stays particularly high for some time, gvm-cli will not connect at all.

Using "gvm-cli socket -c --xml …" or "gvm-cli socket --gmp-username foo --gmp-password foo --xml …" yields the same result.

The XML string itself has no effect on this behaviour.

Error message:

$ gvm-cli socket -c --xml "<get_version/>"
Traceback (most recent call last):
File "/usr/local/bin/gvm-cli", line 11, in
File "/usr/local/lib/python3.6/dist-packages/gvmtools/cli.py", line 251, in main
gvm.authenticate(args.gmp_username, args.gmp_password)
File "/home/asgeir/.local/lib/python3.6/site-packages/gvm/protocols/gmpv7.py", line 198, in authenticate
response = self._read()
File "/home/asgeir/.local/lib/python3.6/site-packages/gvm/protocols/base.py", line 54, in _read
response = self._connection.read()
File "/home/asgeir/.local/lib/python3.6/site-packages/gvm/connections.py", line 275, in read
data = self._socket.recv(BUF_SIZE)
ConnectionResetError: [Errno 104] Connection reset by peer

$ uptime
11:03:24 up 4 days, 46 min, 2 users, load average: 32.38, 20.21, 8.70
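As a client-side workaround while the load spike lasts, the ConnectionResetError can be retried with exponential backoff. This is a minimal sketch of my own (not part of gvm-cli or python-gvm); the function names are hypothetical:

```python
import time

def retry_on_reset(func, attempts=5, base_delay=1.0):
    """Call func(), retrying with exponential backoff whenever the
    peer resets the connection (Errno 104)."""
    for attempt in range(attempts):
        try:
            return func()
        except ConnectionResetError:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            # Wait 1s, 2s, 4s, ... before trying again.
            time.sleep(base_delay * (2 ** attempt))
```

One could wrap the `gvm.authenticate(...)` call, or a `subprocess.run(["gvm-cli", ...])` invocation, in `retry_on_reset` so that a transient reset during a load spike does not abort the whole script.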

Each time this happens, the openvasmd.log will print:

md manage:WARNING:2019-01-22 10h59.29 utc:16364: sql_exec_internal: sqlite3_step failed: interrupted
md manage:WARNING:2019-01-22 10h59.29 utc:16364: sqlv: sql_exec_internal failed

My system:

$ openvassd --version
OpenVAS Scanner 5.1.3

$ openvasmd --version
OpenVAS Manager 7.0.4
GIT revision 03563817-openvas-manager-7.0
Manager DB revision 184

$ gvm-cli --version
gvm-cli 2.0.0.beta1. API version 1.0.0.dev1

$ uname -a
Linux xxx 4.15.0-43-generic #46-Ubuntu SMP Thu Dec 6 14:45:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

PS: this seems to be a persistent bug; I have reproduced it on two different systems.


Somehow, I feel this must be a bug in openvasmd, possibly in src/sql_sqlite3.c, in sql_exec_internal() or a related function?

Is it possible that the openvasmd code is not honoring timeouts, or that the timeouts in the code are too short?
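For illustration, here is how SQLite write contention with a short busy timeout looks from Python's sqlite3 module. This is an assumption about the kind of situation openvasmd may be hitting, not taken from its code; note that plain reads still succeed while the write lock is held, which matches tasks.db staying readable from the sqlite3 command line:

```python
import os
import sqlite3
import tempfile

# One connection holds a write transaction while another tries to
# write with a very short busy timeout.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path, isolation_level=None)  # autocommit mode
writer.execute("CREATE TABLE tasks (id INTEGER)")
writer.execute("BEGIN IMMEDIATE")  # take the write (RESERVED) lock
writer.execute("INSERT INTO tasks VALUES (1)")

reader = sqlite3.connect(path, timeout=0.1)  # ~100 ms busy timeout

# A plain read still succeeds while the lock is held, and does not
# see the uncommitted row.
print(reader.execute("SELECT COUNT(*) FROM tasks").fetchone())  # (0,)

# A competing write times out almost immediately.
try:
    reader.execute("INSERT INTO tasks VALUES (2)")
except sqlite3.OperationalError as err:
    print(err)  # database is locked

writer.execute("ROLLBACK")
```

At the C level the equivalent knob is the busy-handler timeout set via sqlite3_busy_timeout(); if openvasmd's value is too short, a statement can fail under load even though the database file itself is healthy.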

This problem happens consistently whenever openvassd comes under load, even though openvasmd itself is under no load at all.

At the same time, the tasks.db database can easily be accessed with any sqlite3 command, such as:

$ sudo sqlite3 /usr/local/var/lib/openvas/mgr/tasks.db "select * from tasks;"

and similar queries.

So the SQLite database file (tasks.db) itself is neither under load nor inaccessible under these circumstances.

I’m reporting an issue with openvasmd.
