Optimizing PostgreSQL for gvmd

Hello

Are there any tweaks or hints to optimize / fine-tune PostgreSQL for gvmd? My installation has plenty of CPU and memory available, but gsad is often very slow to answer queries, and it very often fails to receive answers from gvmd due to timeouts.

Looking at the processes clearly shows that the PostgreSQL daemons are taking most of the CPU time, but even that remains very low compared to the total CPU power of the system. So clearly, it doesn't seem optimized.

Any clue?

Thanks

Which version of PostgreSQL do you use?
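(On the server itself, something like

psql -c 'SELECT version();'

run as the database superuser will print the exact server version.)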

Are you sure it is a CPU issue and not an I/O issue?

What do “iotop” and the other process debugging tools say?
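For example, to watch only the processes that are actually doing I/O, with totals accumulated over time:

sudo iotop -o -P -a

pidstat -d 5 (from the sysstat package) gives similar per-process disk statistics.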

Are you running on bare-metal NVMe drives?

Big installations seem to struggle with limited I/O on virtual systems, and should run on very fast NVMe drives.

It seems that requesting results is very, very slow with Postgres 12 compared to Postgres 11. Postgres 11 is of course recommended, because that's the version in Debian Stable, which is our reference system.

Hi Everyone and thanks for your interest in the subject.

A couple of pieces of information:

System is bare metal with OS Linux Mint 19.3 64-bit (based on Ubuntu 18.04 LTS)
Installed version of PostgreSQL is 10.
94 GB RAM and an Intel® Xeon® CPU X5680 @ 3.33 GHz, 23 cores

iostat gives the following:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           8,10    1,19    9,17    1,37    0,00   80,17

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda              99,82        11,23       968,80   40585011 3500199664
sdb               0,65         0,97         8,32    3505669   30047876
sdc               0,00         0,00         0,00       5480          0

gvm@ov-master-eqi:~$ free -g
              total        used        free      shared  buff/cache   available
Mem:             94           8          20           0          65          85
Swap:            95           0          95

So as you can see, CPU / memory isn't the bottleneck here.
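(For completeness, the extended iostat view shows per-device utilization and latency, which are the more telling numbers for the disk question:

iostat -x 5

A %util near 100 or a high await on sda would point at disk saturation.)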

Thanks

As you use PSQL 10 and not 11, I would upgrade first, then try to fine-tune your database and look into your system issues. You still haven't answered any of the questions regarding your storage.
I don't see any native NVMe drives; that is what we use for acceleration in our enterprise appliances. They are about 10 times faster than spinning rust, and if you have a huge database, seek times on spinning disks will kill your performance.
Also, your amount of swap is far too high; I would turn swap off with that amount of RAM.
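(A middle ground, if you don't want to disable swap entirely, is to lower the kernel's tendency to swap; this is a generic Linux knob, not anything gvmd-specific:

sudo sysctl vm.swappiness=10

Persist it in /etc/sysctl.conf to survive reboots.)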

So classic debugging and performance optimization would be my next steps.
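As a purely illustrative starting point for a box with ~94 GB RAM (these are generic PostgreSQL sizing rules of thumb, not gvmd-specific recommendations), something like this in postgresql.conf:

shared_buffers = 16GB             # roughly 1/4 of RAM is a common rule of thumb
effective_cache_size = 64GB       # what the OS page cache can realistically hold
work_mem = 64MB                   # per sort/hash operation, so keep it moderate
maintenance_work_mem = 2GB        # speeds up VACUUM and index builds

Note that shared_buffers only takes effect after a full PostgreSQL restart; the other settings can be applied with a reload.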

OK, I take the point about PSQL 11. I didn't know that was the recommended version.

Regarding the storage, sorry, I missed it. I'm not using NVMe drives, currently only ATA. If you believe this would really improve things, then I'll see how to invest in that.

The last point is swap; perhaps I'm wrong, but I always thought that:

  1. disabling swap is a bad idea, and
  2. the system will always use memory first, and only swap when no more memory is available.

Isn't that correct?

Does GVM-11 actually work with PostgreSQL 11, or were you only talking about GVM 20.08? On a pre-prod system I upgraded my PSQL from 10 to 11, but now I get the following errors when starting gvmd:

md   main:MESSAGE:2020-09-23 17h09.44 utc:9172:    Greenbone Vulnerability Manager version 9.0.0 (DB revision 221)
md manage:MESSAGE:2020-09-23 17h09.44 utc:9173: check_db_versions: database version of SCAP database: 16
md manage:MESSAGE:2020-09-23 17h09.44 utc:9173: check_db_versions: SCAP database version supported by manager: 15
md   main:CRITICAL:2020-09-23 17h09.44 utc:9173: gvmd: database is wrong version

I don't really understand this error, as I don't see how my SCAP database version could be newer than the one supported by the manager: I haven't actually changed the SCAP data during the PSQL upgrade.
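(For anyone reproducing this: the standard Debian/Ubuntu route for such a migration is pg_upgradecluster, e.g.

sudo pg_dropcluster --stop 11 main        # drop the empty default 11 cluster first
sudo pg_upgradecluster -v 11 10 main

which carries the data over unchanged and does not touch the SCAP tables directly.)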

For gvmd-20.08 and gvmd-21.04/master, PostgreSQL 11 is our reference. The PostgreSQL 12 issue might get fixed, but we can't promise anything at the moment.

For versions below gvmd-20.08, Debian Stretch is the base system, and therefore PostgreSQL 9.6 is recommended.

For your current issue with the SCAP db, please try gvmd 9.0.1, the latest bugfix release, instead of 9.0.0. I remember either a ticket on GitHub or a topic in this forum where this issue was mentioned, and updating to the bugfix release fixed it.


Thanks bricks, that actually answers my question. So there is no point in trying to mix gvmd-11 with PSQL 11 if the reference is 9.6.

I'll focus my efforts on upgrading to 20.08 AND PSQL 11 at once. Hopefully this will solve the huge performance issues I'm facing at the moment.

Thanks a lot

Using GVM-11/gvmd-9 with PostgreSQL 11 should work; I've used that combination for a long period on my personal development computer. But it isn't tested and there is no guarantee, because our corresponding products (GOS 5 and 6) are based on Debian Stretch and therefore on PostgreSQL 9.6. We are of course developing and testing for our products, so when the community faces issues on systems that deviate from our reference system, we sometimes can't promise fixes.


Using GVM-11/gvmd-9 with PostgreSQL 11 does work, actually. It's just that in our production environment it is painfully slow. Perhaps that has nothing to do with the version of PSQL; it's possible gvmd is simply not designed to handle such a big database (57 GB). That's why I want to explore all options at this point.

Thanks

I have seen DBs covering more than 10 /8 networks with 16 million results, but they never reach this size. Do you store all “log” and “scan-meta” data there?

How often do you do DB maintenance?

It looks like a non-optimal scan configuration and/or missing DB maintenance.

Also, our enterprise appliances designed for this type of DB have block acceleration based on special hardware to speed up the database.
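If you want to see where the space actually goes, a query along these lines against the gvmd database (assuming it is named gvmd) lists the biggest tables:

sudo -u postgres psql gvmd -c "SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) FROM pg_statio_user_tables ORDER BY pg_total_relation_size(relid) DESC LIMIT 10;"

If the results and report tables dominate by a wide margin, that would support the log/scan-meta theory.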

I have not tweaked the openvas configuration, as the default values look good enough:

non_simult_ports = 139, 445, 3389, Services/irc
vendor_version =
safe_checks = yes
nasl_no_signature_check = yes
time_between_request = 0
expand_vhosts = yes
max_checks = 10
optimize_test = yes
report_host_details = yes
config_file = /opt/gvm/etc/openvas/openvas.conf
unscanned_closed_udp = yes
include_folders = /opt/gvm/var/lib/openvas/plugins
test_empty_vhost = no
plugins_timeout = 320
cgi_path = /cgi-bin:/scripts
checks_read_timeout = 5
unscanned_closed = yes
auto_enable_dependencies = yes
log_whole_attack = no
db_address = /var/run/redis/redis.sock
drop_privileges = no
log_plugins_name_at_load = no
scanner_plugins_timeout = 36000
timeout_retry = 3
max_hosts = 30
network_scan = no
open_sock_max_attempts = 5
plugins_folder = /opt/gvm/var/lib/openvas/plugins

Regarding the DB maintenance, I'm doing the following weekly (see the consolidated script after the list):

Freeing some unused storage space in the database

/opt/gvm/sbin/gvmd --optimize=vacuum

Optimize DB queries

/opt/gvm/sbin/gvmd --optimize=analyze

Cleans up references to report formats that have been removed without using the DELETE_REPORT_FORMAT GMP command, for example after a built-in report format has been removed.

/opt/gvm/sbin/gvmd --optimize=cleanup-report-formats

This cleans up results with missing result_nvt entries which can result in filters and overrides not working properly.

/opt/gvm/sbin/gvmd --optimize=cleanup-result-nvts

This option removes duplicate preferences from Scan Configs and corrects some broken preference values

/opt/gvm/sbin/gvmd --optimize=cleanup-config-prefs

This cleans up results with no severity by assigning the default severity set by the user owning the result.

/opt/gvm/sbin/gvmd --optimize=cleanup-result-severities

This creates the cache containing the unfiltered result counts of all reports that are not cached yet.

/opt/gvm/sbin/gvmd --optimize=update-report-cache
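All of the above runs from a single weekly cron job, essentially:

#!/bin/sh
# Weekly gvmd DB maintenance; the order follows the steps listed above.
for task in vacuum analyze cleanup-report-formats cleanup-result-nvts \
            cleanup-config-prefs cleanup-result-severities update-report-cache
do
    /opt/gvm/sbin/gvmd --optimize=$task
done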

I haven't found any other information on optimizing the SQL database for GVM usage, so if there is anything I've missed, please share.

However, you are right on one point: we do store log information as well as low/medium/high…

Thanks

That is not enough with that DB size, and your default config is not intended for such an extra-big enterprise environment. I would run offline optimizations first.

I would go with multiple enterprise appliances for such a big environment. Otherwise you have to re-invent the wheel on top of our open source.

What do you mean by offline optimization? Please share information if you have it. Also, you mention log data: is there a way to stop storing it? I haven't found any documentation on this; what I found only applies to reports.

The appliance is unfortunately not an option in my context, due to prohibitive restrictions on installing hardware in our DCs. In any case, I understand my DB isn't optimized, since you said the DBs you've seen with more than 10 /8 networks never reach this size.

Currently I have no option but to optimize what I have as much as possible. Again, help is welcome here if you have advice.

Thanks

Hi, I just came across this post while trying to solve performance issues with PostgreSQL.

I found the solution here.

Just set jit = off in postgresql.conf and there are no more performance issues.
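For anyone else landing here: JIT compilation was added in PostgreSQL 11 and enabled by default in 12, which matches the version where the slowdown was reported above. Turning it off globally is a one-liner (run as the postgres superuser):

sudo -u postgres psql -c "ALTER SYSTEM SET jit = off;"
sudo -u postgres psql -c "SELECT pg_reload_conf();"

Verify afterwards with SHOW jit; in any session.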
