Scanning /24 subnets results in lots of dead "assets". How to manage those hosts/IPs?

Hi,

Now that we have GVM10 with a good Assets/Hosts model, we want to start using that data.
The thing is that we usually split our scans into “real” subnets, mostly /24s, and there are lots of free IPs on those subnets.
When we do a host discovery, all the IPs that have been scanned are added to the asset/host data.

Before, when we didn’t use the inventory, this wasn’t an issue, but it would be interesting to hear what others are doing to keep this clean.

If we do a discovery scan first and push those hosts in with the API, we have to recreate the task every time to match the new targets, and then we get no good “stats” over time on that subnet.
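
To make it concrete, “push those hosts in with the API” means roughly the sketch below (python-gvm; the hostname, credentials, names and the live-host list are placeholders, and the exact method/argument names can differ between python-gvm versions):

    # Rough sketch only: connection details, names and the live-host list are
    # placeholders, not our real values.
    from gvm.connections import TLSConnection
    from gvm.protocols.latest import Gmp
    from gvm.transforms import EtreeTransform

    live_hosts = ["192.168.1.10", "192.168.1.12", "192.168.1.17"]  # output of a pre-scan

    connection = TLSConnection(hostname="gvmd.example.local")  # placeholder
    with Gmp(connection, transform=EtreeTransform()) as gmp:
        gmp.authenticate("admin", "secret")  # placeholders

        # A fresh target per run, built only from the hosts that answered the pre-scan.
        target = gmp.create_target(name="192.168.1.0-live", hosts=live_hosts)

        # ...and because the existing task can't simply be pointed at the new target,
        # the task gets recreated as well - which is what ruins the stats over time.
        task = gmp.create_task(
            name="scan-192.168.1.0-live",
            config_id="daba56c8-73ec-11df-a475-002264764cea",   # built-in "Full and fast"
            target_id=target.get("id"),
            scanner_id="08b69003-5fc2-4037-a479-93b440211c73",  # built-in OpenVAS scanner
        )
        gmp.start_task(task.get("id"))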

Any ideas are welcome, and if I missed something obvious, I apologize in advance :smiley:


Regards Falk

AFAIK only hosts which have at least one result within the scan report will be added to the asset database. If dead hosts are being added as well, make sure that:

  1. the created scan task doesn’t use “Consider Alive” for the “Alive Test”
  2. the used scan configuration doesn’t have the Ping Host VT (OID: 1.3.6.1.4.1.25623.1.0.100315) configured with e.g. “Mark unreachable Hosts as dead (not scanning)” or “Report about unreachable Hosts” set to “no”.
  3. the “Alive Test” chosen in 1. matches your network environment so that dead hosts are detected accordingly and not seen as alive (a small GMP sketch for checking this follows the list).
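
For 1. and 3. you can also quickly list over GMP which “Alive Test” every target is configured with, so that e.g. “Consider Alive” entries stand out. Below is a minimal python-gvm sketch (hostname and credentials are placeholders, and method/argument names can differ slightly between python-gvm versions); point 2. is easiest to review directly in the scan config editor of the web UI.

    # Minimal sketch: print the "Alive Test" of every target.
    # Hostname and credentials are placeholders.
    from gvm.connections import TLSConnection
    from gvm.protocols.latest import Gmp
    from gvm.transforms import EtreeTransform

    connection = TLSConnection(hostname="gvmd.example.local")  # placeholder
    with Gmp(connection, transform=EtreeTransform()) as gmp:
        gmp.authenticate("admin", "secret")  # placeholders

        # rows=-1 in the powerfilter disables paging so all targets are returned.
        for target in gmp.get_targets(filter="rows=-1").findall("target"):
            print(target.findtext("name"), "->", target.findtext("alive_tests"))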

Hi,

I found something strange when using remote scanners.
This morning I did a test run “locally” on the “master” GVM10…
And it worked as you said: only the IPs that answered were added to the assets.

But then I cloned that task and used a scanner (“slave”) instead.
And all the IPs in the range were added to the assets?

I’m going to do some more checks with this today and see what’s going on :slight_smile:


Regards Falk

Update:

I tried a “Host Discovery” scan on both the local and the remote scanner.
Same result as the last try.

The setup is:

Local Task:

Remote Task:

They both use this target (10 IPs):

Result for local:

Result for remote:

Any ideas on how to debug this further?


Regards Falk

Any chance that the sensor is located in a different network segment where it gets a (falsely) positive response to the ICMP Ping “Alive Test” (e.g. from a firewall in between) while the local “master” isn’t getting one?


It shouldn’t be.[tm]. :slight_smile:
I’ll fire both of them up on the same subnet as the targets later.


Regards Falk

Update on a simple 192.168.1.0/24 subnet.

master » 192.168.1.112
remote-scanner » 192.168.1.179
targets » 192.168.1.10-192.168.1.20
config » host discovery

The same config and targets are used with cloned tasks.

And the result differs (for me).
[edit]
The “only” difference on the software side is that the master uses psql and the remote scanner uses sqlite3.
Both are now running with the psql container image.
[/edit]

Logs

Master:

gvm10_1  | ==> /usr/local/var/log/gvm/gvmd.log <==
gvm10_1  | event task:MESSAGE:2019-05-09 13h26.48 UTC:1208: Status of task alive-test-local (f5eb2da6-b680-4d2a-be57-dddc8c37e38b) has changed to Running
gvm10_1  | 
gvm10_1  | ==> /usr/local/var/log/gvm/openvassd.log <==
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h26.48 utc:1252: Finished testing 192.168.1.12. Time : 0.52 secs
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h26.48 utc:1261: Finished testing 192.168.1.17. Time : 0.50 secs
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h26.48 utc:1250: Finished testing 192.168.1.10. Time : 0.54 secs
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h26.48 utc:1263: Finished testing 192.168.1.18. Time : 0.69 secs
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h26.50 utc:1259: The remote host 192.168.1.16 is dead
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h26.50 utc:1251: The remote host 192.168.1.11 is dead
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h26.50 utc:1259: Finished testing 192.168.1.16. Time : 2.25 secs
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h26.50 utc:1251: Finished testing 192.168.1.11. Time : 2.29 secs
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h26.50 utc:1258: The remote host 192.168.1.15 is dead
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h26.50 utc:1265: The remote host 192.168.1.20 is dead
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h26.50 utc:1264: The remote host 192.168.1.19 is dead
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h26.50 utc:1253: The remote host 192.168.1.13 is dead
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h26.50 utc:1258: Finished testing 192.168.1.15. Time : 2.49 secs
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h26.50 utc:1256: The remote host 192.168.1.14 is dead
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h26.50 utc:1265: Finished testing 192.168.1.20. Time : 2.49 secs
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h26.50 utc:1264: Finished testing 192.168.1.19. Time : 2.50 secs
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h26.50 utc:1253: Finished testing 192.168.1.13. Time : 2.55 secs
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h26.50 utc:1256: Finished testing 192.168.1.14. Time : 2.55 secs
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h26.50 utc:1204: Test complete
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h26.50 utc:1204: Total time to scan all hosts : 3 seconds
gvm10_1  | 
gvm10_1  | ==> /usr/local/var/log/gvm/gvmd.log <==
gvm10_1  | event task:MESSAGE:2019-05-09 13h26.52 UTC:1208: Status of task alive-test-local (f5eb2da6-b680-4d2a-be57-dddc8c37e38b) has changed to Done

Remote scanner:

gvm10_1  | ==> /usr/local/var/log/gvm/gvmd.log <==
gvm10_1  | event task:MESSAGE:2019-05-09 13h27.55 UTC:247: Status of task 5bfbceef-778c-4ee9-a08f-c2ce0904e372 for alive-test-scanner (ce231788-f9d3-4549-b00d-06c7c8feacb8) has changed to Running
gvm10_1  | 
gvm10_1  | ==> /usr/local/var/log/gvm/openvassd.log <==
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h27.56 utc:258: The remote host 192.168.1.16 is dead
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h27.56 utc:249: The remote host 192.168.1.11 is dead
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h27.56 utc:251: The remote host 192.168.1.13 is dead
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h27.56 utc:262: The remote host 192.168.1.19 is dead
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h27.56 utc:252: The remote host 192.168.1.14 is dead
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h27.56 utc:255: The remote host 192.168.1.15 is dead
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h27.56 utc:264: The remote host 192.168.1.20 is dead
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h27.56 utc:258: Finished testing 192.168.1.16. Time : 2.31 secs
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h27.56 utc:249: Finished testing 192.168.1.11. Time : 2.34 secs
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h27.56 utc:251: Finished testing 192.168.1.13. Time : 2.33 secs
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h27.56 utc:262: Finished testing 192.168.1.19. Time : 2.32 secs
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h27.56 utc:252: Finished testing 192.168.1.14. Time : 2.35 secs
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h27.56 utc:255: Finished testing 192.168.1.15. Time : 2.34 secs
gvm10_1  | sd   main:MESSAGE:2019-05-09 13h27.56 utc:264: Finished testing 192.168.1.20. Time : 2.33 secs
gvm10_1  | 
gvm10_1  | ==> /usr/local/var/log/gvm/gvmd.log <==
gvm10_1  | event task:MESSAGE:2019-05-09 13h27.58 UTC:247: Status of task 5bfbceef-778c-4ee9-a08f-c2ce0904e372 for alive-test-scanner (ce231788-f9d3-4549-b00d-06c7c8feacb8) has changed to Done

Asset view

Local:

Remote scanner:

@cfi Do you think that I should open an issue on gvmd?

You could still verify the following first:

Comparing the reports of both scans could show whether there are any additional log messages in the one that is adding the dead hosts to the asset database.
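
If you prefer to do that comparison over GMP rather than in the web UI, a rough python-gvm sketch could look like the one below; the report UUIDs, hostname and credentials are placeholders, the filter pulls in log messages as well (“levels=hmlg”), and method/argument names can differ between python-gvm versions.

    # Rough sketch: diff the result lists of the two reports. Report UUIDs,
    # hostname and credentials are placeholders.
    from gvm.connections import TLSConnection
    from gvm.protocols.latest import Gmp
    from gvm.transforms import EtreeTransform

    LOCAL_REPORT_ID = "00000000-0000-0000-0000-000000000000"   # placeholder
    REMOTE_REPORT_ID = "11111111-1111-1111-1111-111111111111"  # placeholder

    def result_set(gmp, report_id):
        """Return a set of 'host | NVT name' strings for every result in a report."""
        report = gmp.get_report(report_id, filter="levels=hmlg rows=-1")
        return {
            "{} | {}".format(r.findtext("host"), r.findtext("nvt/name"))
            for r in report.findall(".//result")
        }

    connection = TLSConnection(hostname="gvmd.example.local")  # placeholder
    with Gmp(connection, transform=EtreeTransform()) as gmp:
        gmp.authenticate("admin", "secret")  # placeholders
        local = result_set(gmp, LOCAL_REPORT_ID)
        remote = result_set(gmp, REMOTE_REPORT_ID)
        print("Only in the remote report:", sorted(remote - local))
        print("Only in the local report:", sorted(local - remote))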

Now I have compared the reports and they look the same.
Local scan:


Remote scan:

Then I reversed the setup and swapped “master” and “remote scanner”.
And I got the same result as before: the new “remote scanner” added all the IPs.


Regards Falk

And a last update :slight_smile:

If I only have one IP, with no host on that IP:

  • The local scan adds no IP to the assets.
  • But the remote-scanner scan adds the empty IP to the assets.

So something fishy is going on here :slight_smile:


Regards Falk

This indeed looks strange. Unfortunately I can’t help any further at this point due to my lack of knowledge about remote/local scanners.

Do you connect the Ethernet directly into the container via a dedicated interface, or do you use the container network stack / load balancer?

Hi,

I’m using the container network.

The problem is that when I do the same scan from the master there are no “ghost IPs”.

Or if I do the same scan locally on the “slave”, there are no extra IPs.

For the moment we work around it with a Python script that removes all empty IPs nightly.
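
Roughly, the workaround looks like the sketch below. It is not the actual script: the connection details are placeholders, the “empty host” test is simplified, and depending on the python-gvm version the keyword may be filter_string instead of filter and asset_type may need to be the AssetType enum instead of a plain string.

    # Stripped-down sketch of the nightly cleanup. Connection details are
    # placeholders and the "empty host" test is simplified: it deletes every
    # host asset whose only identifier is the bare "ip".
    from gvm.connections import TLSConnection
    from gvm.protocols.latest import Gmp
    from gvm.transforms import EtreeTransform

    connection = TLSConnection(hostname="gvmd.example.local")  # placeholder
    with Gmp(connection, transform=EtreeTransform()) as gmp:
        gmp.authenticate("admin", "secret")  # placeholders

        # Fetch all host assets; rows=-1 disables paging in the powerfilter.
        assets = gmp.get_assets(asset_type="host", filter="rows=-1")

        for asset in assets.findall("asset"):
            identifiers = {
                ident.findtext("name")
                for ident in asset.findall("identifiers/identifier")
            }
            # "Empty" here means nothing but the plain IP was ever recorded.
            if identifiers <= {"ip"}:
                print("deleting empty host asset:", asset.findtext("name"))
                gmp.delete_asset(asset_id=asset.get("id"))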

Regards Falk

Try to bypass any NAT and stateful firewall. Their state tables are the natural enemy of vulnerability scanning, especially if you have dead hosts and scan all TCP ports; this will fill up the table and can lead to strange network issues.

That was my first thought too.
But when running the same task on the remote scanner “locally”, there are no extra IPs.

To be sure that I haven’t missed anything, I’ll try to do the same with a source install.
Sometimes it’s easy to “remember” things wrong :slight_smile:


Regards Falk

That might totally depend on the “container host kernel”, so just connect a real Ethernet interface and try to use a kernel that is NOT using any stateful firewall/NAT/load balancer.

Hi,

Sorry if I seem dense :slight_smile:

But I do the test on two “installations”, master and slave.
When I run them in a master/slave setup I get the “dead IPs”.
But if I run the same tasks locally on each server I get no “dead IPs”?

Shouldn’t the network/NAT affect it the same way even when I run the task locally?

I’ll do some real testing asap :slight_smile:


Regards Falk

I don’t know what else you run on this kernel :wink: And the state of the host might be different: timeouts, table size, etc. etc. …

That is why you should not use any firewall / NAT / load balancer in between…

Hi,

That’s fair :slight_smile:
I agree that the fw/nat/lb isn’t optimal in any way.
And that is perhaps something to look into on the containers, if I can :slight_smile:

But the problem exists for me even on a source installation with nothing funny on the interfaces.
[edit] Except that they are VMs on the same subnet. [/edit]

I did an installation as this: https://sadsloth.net/post/install-gvm10-src/

Tried out 11 IPs on a simple local network with 4 hosts, using two identical installations, A and B.

On the Hosts dashboard:
    A - local scan: registers 4 IPs.
    B - local scan: registers 4 IPs.
    A (master) » B (scanner) - remote scan: registers the whole range, 11 IPs.

But in the report for that scan, there are only 4 results registered.

I don’t really know how to debug this behavior further, or whether I am the only one with this “problem”.
If I am, then the problem is local to me, and I am doing something fundamentally wrong :smiley:


Regards Falk