Issues with my build process / openvas segmentation fault

Thank you!
I’m having some issues with my build process for gsa though. The assets available on GitHub with gsa 21.4.3 are not the same as those previously available with 21.4.1/21.4.2. The ‘node-modules’ tarball is no longer there. Did the build process change, or will it simply not be included in future releases?

Assets with 21.4.3:
Source Code.tar.gz

Assets with 21.4.2:
Source code (zip)
Source code (tar.gz)

We removed the gsa-node-modules because they rely on a very specific nodejs version. If you are using a different nodejs version than the one used for building the gsa-node-modules, yarn won’t consider them, and if you chose the yarn offline build, the build will fail.
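For illustration, the two build modes look roughly like this (a sketch, assuming GSA 21.4 with classic yarn; `yarn build` is the usual script name, but check the gsa README for your release):

```shell
# With the old gsa-node-modules tarball unpacked, an offline install was possible:
yarn install --offline   # fails if the cached modules were built for a different nodejs
# Without the tarball, let yarn fetch packages matching your nodejs version:
yarn install
yarn build
```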

Thanks Bricks!

Did anything significant change with paths for openvas/ospd-openvas ?
In my initial build, ospd-openvas was looking for things in /var/lib instead of /usr/local/var/lib, where I build. I symlinked the paths to resolve that, but now when ospd-openvas calls ‘openvas --update-vt-info’, openvas is core dumping.

Some default file paths have changed. Please take a look at the release notes.

All of those paths exist, and since openvas should be running as root, it shouldn’t have any permission issues, right? I’m not seeing any file open/read/write errors in an strace, so I’m thinking I have them all.

openvas will run with the following options:

But running it with no options or with --update-vt-info causes a core dump.

Not sure if it will help, but here’s the tail of ‘strace openvas --update-vt-info’:

socket(AF_UNIX, SOCK_STREAM, 0) = 3
fcntl(3, F_GETFL) = 0x2 (flags O_RDWR)
fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0
connect(3, {sa_family=AF_UNIX, sun_path="/run/redis/redis.sock"}, 110) = 0
fcntl(3, F_GETFL) = 0x802 (flags O_RDWR|O_NONBLOCK)
fcntl(3, F_SETFL, O_RDWR) = 0
write(3, "*3\r\n$6\r\nCONFIG\r\n$3\r\nGET\r\n$9\r\ndat"..., 40) = 40
read(3, "*2\r\n$9\r\ndatabases\r\n$3\r\n512\r\n", 16384) = 28
write(3, "*3\r\n$7\r\nHEXISTS\r\n$19\r\nGVM.__Glob"..., 50) = 50
read(3, ":1\r\n", 16384) = 4
write(3, "*2\r\n$6\r\nSELECT\r\n$1\r\n1\r\n", 23) = 23
read(3, "+OK\r\n", 16384) = 5
write(3, "*3\r\n$6\r\nLINDEX\r\n$9\r\nnvticache\r\n$"..., 39) = 39
read(3, "$1\r\n0\r\n", 16384) = 7
stat("/var/lib/openvas/plugins", {st_mode=S_IFDIR|0775, st_size=270336, ...}) = 0
getpid() = 1010
getppid() = 1007
stat(".", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
stat("/", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
chdir("/var/lib/openvas/plugins") = 0
openat(AT_FDCWD, "/var/lib/openvas/plugins/", O_RDONLY) = 4
fstat(4, {st_mode=S_IFREG|0664, st_size=1014, ...}) = 0
read(4, "# Copyright © 2021 Greenbone N"..., 1014) = 1014
close(4) = 0
getpid() = 1010
chdir("/") = 0
getpid() = 1010
--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=NULL} ---
+++ killed by SIGSEGV (core dumped) +++
Segmentation fault (core dumped)

Hmm, OpenVAS is trying to access memory outside its mapped address space. Do you use namespaces? If so, please disable all of that and try to run it standalone.

Is an announcement topic in the “News” category really the best place to discuss such technical issues?

Maybe someone from the @moderators team can split the posts into a separate topic in the GSE category?


Thanks @cfi!

Note for anyone reading- this was split from: New releases for GVM 20.08 and GVM 21.04

@immauss I re-titled using part of your post, if you want to change that (and can’t) please let me know. Thanks!


@DeeAnn Thank you!


It’s running in a container. So that “shouldn’t” happen. And if I’m going to use containers, I can’t really disable namespaces. Unless I’m missing something huge …

I would first try it outside the container; if it works there, you have a first hint.

I rolled only the version of openvas_scanner back to 21.4.2, and it works fine.

Building outside the container will take me much longer, and I’m using a Buster-based image.
My build process is obviously written around building the container. If this were a problem for non-container environments, I would expect others to have seen it already.

@Lukas, why do you think this is namespace related? My understanding of SEGV_MAPERR (admittedly after researching it today) is that it occurs when the application tries to access memory not allocated to it. Whether it is operating in a namespace or not, I would not expect that to happen. I’m not saying you’re wrong, just trying to understand what’s going on. And if this is directly related to running in a container, then I would call this a bug.

Root processes are normally not limited. If this happens only within a memory-limited namespace, and not when running as root with no limits, that will be a clear indication.

It could be:

  1. A software bug
  2. A compiler bug / optimization
  3. A kernel issue / an issue due to the namespace or container

To debug this, I would change the compiler flags to disable optimizations and build a debug version you can run with gdb.
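A sketch of such a debug build with standard CMake flags (assuming the usual openvas-scanner CMake setup; adjust paths to your tree):

```shell
cd openvas-scanner-21.4.3
mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=Debug ..   # -g, no optimization
make
gdb ./src/openvas                   # then at the prompt: run --update-vt-info
```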

I just checked my 21.04.08 installation and it does not happen, but I don’t limit any openvas processes via a container.

So … you clearly understand this better than I do, so I hope you don’t mind me asking a few questions to better understand what’s going on.

At the moment, I’m not applying any restrictions to the memory on the container, so it “shouldn’t” be constrained, and the container run time should be able to handle openvas without it being aware of the namespaces.

Is the assumption here that openvas is trying to access memory outside the namespace?

Even in the case of a container, isn’t the memory it has access to assigned by the kernel? So if it is trying to access memory outside what it has been allocated, isn’t that a problem regardless of namespaces?

Reading symbols from /usr/local/sbin/openvas...done.

(gdb) run

Starting program: /usr/local/sbin/openvas

warning: Error disabling address space randomization: Operation not permitted

[Thread debugging using libthread_db enabled]

Using host libthread_db library "/lib/x86_64-linux-gnu/".

Program received signal SIGSEGV, Segmentation fault.

0x00007fd616b58005 in g_mutex_lock () from /usr/lib/x86_64-linux-gnu/


OK … I dug around a bit and found that I should be starting the container differently to make sure gdb functions properly.

docker run -d --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
	-p 8080:9392 --name "${TAG}-test" -e SKIPSYNC=true immauss/openvas:${TAG}

Here’s the gdb results with the backtrace.

root@b3ee8e2f1f04:/# gdb /usr/local/sbin/openvas

GNU gdb (Debian 8.2.1-2+b3) 8.2.1

Copyright (C) 2018 Free Software Foundation, Inc.

License GPLv3+: GNU GPL version 3 or later <>

This is free software: you are free to change and redistribute it.

There is NO WARRANTY, to the extent permitted by law.

Type "show copying" and "show warranty" for details.

This GDB was configured as "x86_64-linux-gnu".

Type "show configuration" for configuration details.

For bug reporting instructions, please see:


Find the GDB manual and other documentation resources online at:


For help, type "help".

Type "apropos word" to search for commands related to "word"...

Reading symbols from /usr/local/sbin/openvas...done.

(gdb) run --update-vt-info

Starting program: /usr/local/sbin/openvas --update-vt-info

[Thread debugging using libthread_db enabled]

Using host libthread_db library "/lib/x86_64-linux-gnu/".

Program received signal SIGSEGV, Segmentation fault.

0x00007ffff7e9d005 in g_mutex_lock () from /usr/lib/x86_64-linux-gnu/

(gdb) backtrace

#0 0x00007ffff7e9d005 in g_mutex_lock () at /usr/lib/x86_64-linux-gnu/

#1 0x0000555555560595 in create_process

(function=function@entry=0x55555555e9c0 <plugins_reload_from_dir>, argument=argument@entry=0x5555555a1bd0)

at /build/openvas-scanner-21.4.3/src/processes.c:102

#2 0x000055555555f058 in plugins_init () at /build/openvas-scanner-21.4.3/src/pluginload.c:391

#3 0x000055555555dc25 in openvas (argc=<optimized out>, argv=<optimized out>) at /build/openvas-scanner-21.4.3/src/openvas.c:551

#4 0x00007ffff678f09b in __libc_start_main (main=

0x555555559b90 <main>, argc=2, argv=0x7fffffffecf8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffece8) at ../csu/libc-start.c:308

#5 0x0000555555559bca in _start ()

So you learned the segfault is not within the Greenbone code; now you need to compile a debug version of the library to trace the bug back. A quick Google search shows that you are not alone; many others have the same issue.

I did see a number of those when I searched, but what changed on the openvas side from 21.4.2 to 21.4.3, since this didn’t happen before?

FWIW, I did update the base image very early on in trying to resolve this, so there is not a newer version of for buster yet.

OK … I installed debug symbols for libglib, and now my gdb output and backtrace look like below. It’s saying “No such file or directory”, but I have no idea how to figure out what file it’s looking for. Any ideas?

(gdb) run
Starting program: /usr/local/sbin/openvas 
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/".

Program received signal SIGSEGV, Segmentation fault.
0x00007ffff7e9d005 in g_mutex_lock (mutex=0x0) at ../../../glib/gthread-posix.c:1343
1343    ../../../glib/gthread-posix.c: No such file or directory.
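(The “No such file or directory” here refers to glib’s own source file, gthread-posix.c: the debug-symbol package ships symbols but not sources. A sketch for pointing gdb at them, assuming Debian Buster with deb-src entries enabled; the source path is a placeholder:)

```shell
apt-get source libglib2.0-0          # fetches the glib source tree
gdb /usr/local/sbin/openvas
# (gdb) directory /path/to/glib-source/glib   # tell gdb where the sources live
```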

(gdb) backtrace
#0  0x00007ffff7e9d005 in g_mutex_lock (mutex=0x0) at ../../../glib/gthread-posix.c:1343
#1  0x0000555555560595 in create_process
    (function=function@entry=0x55555555e9c0 <plugins_reload_from_dir>, argument=argument@entry=0x5555555a16e0)
    at /build/openvas-scanner-21.4.3/src/processes.c:102
#2  0x000055555555f058 in plugins_init () at /build/openvas-scanner-21.4.3/src/pluginload.c:391
#3  0x000055555555dc86 in openvas (argc=<optimized out>, argv=<optimized out>) at /build/openvas-scanner-21.4.3/src/openvas.c:591
#4  0x00007ffff678f09b in __libc_start_main (main=
    0x555555559b90 <main>, argc=1, argv=0x7fffffffece8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffecd8) at ../csu/libc-start.c:308
#5  0x0000555555559bca in _start ()