Rsync fails: Connection refused (111) when running from cron, but works from command line

New install of OpenVAS 9 on CentOS.

I noticed that the Feed Status displayed in the Greenbone Security Assistant was 2 days old.
The rsync being run from cron was failing.

crontab:

35 1 * * * /usr/sbin/greenbone-nvt-sync > /dev/null
5 0 * * * /usr/sbin/greenbone-scapdata-sync > /dev/null
5 1 * * * /usr/sbin/greenbone-certdata-sync > /dev/null

root email:

Date: Fri, 11 Jan 2019 01:00:01 +0000 (UTC)

rsync: failed to connect to feed.openvas.org (89.146.224.58): Connection refused (111)
rsync: failed to connect to feed.openvas.org (2a01:130:2000:127::d1): Network is unreachable (101)
rsync error: error in socket IO (code 10) at clientserver.c(125) [Receiver=3.1.2]

And yet, when running from the command line:

[root@openvas]# /usr/sbin/greenbone-nvt-sync
(…)
receiving incremental file list
plugin_feed_info.inc
1,131 100% 1.08MB/s 0:00:00 (xfr#1, to-chk=0/1)

sent 43 bytes received 1,243 bytes 514.40 bytes/sec
total size is 1,131 speedup is 0.88
(…)
receiving incremental file list
./
COPYING
588 100% 574.22kB/s 0:00:00 (xfr#1, ir-chk=8553/8565)
COPYING.GPLv2
18,002 100% 17.17MB/s 0:00:00 (xfr#2, ir-chk=8552/8565)
COPYING.files
3,275,588 100% 130.16MB/s 0:00:00 (xfr#3, ir-chk=8551/8565)
gb_schneider_eurotherm_guicon_detect_win.nasl
3,634 100% 141.95kB/s 0:00:00 (xfr#4, ir-chk=5446/8565)
gb_schneider_eurotherm_guicon_detect_win.nasl.asc
819 100% 31.99kB/s 0:00:00 (xfr#5, ir-chk=5445/8565)
gb_schneider_zelio_soft_detect_win.nasl
3,413 100% 133.32kB/s 0:00:00 (xfr#6, ir-chk=5432/8565)
gb_schneider_zelio_soft_detect_win.nasl.asc
819 100% 31.99kB/s 0:00:00 (xfr#7, ir-chk=5431/8565)
(…)
2019/phpipam/gb_phpipam_mult_vuln.nasl
3,262 100% 6.99kB/s 0:00:00 (xfr#126, ir-chk=1158/94720)
2019/phpipam/gb_phpipam_mult_vuln.nasl.asc
819 100% 1.33kB/s 0:00:00 (xfr#127, ir-chk=1157/94720)

sent 52,929 bytes received 5,142,254 bytes 296,867.60 bytes/sec
total size is 282,414,161 speedup is 54.36

Is there something obvious to anyone that I might have missed?
Or do you need more info that I could provide?
I’m probably just doing it wrong. :wink:

If this works from the shell, something about your cron job may be different. I would first stop redirecting the output to /dev/null and write it to a log file instead, to see what the output is. This would help you ask a qualified question here.
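
For example, crontab entries along these lines would also capture stderr, which is where the rsync errors in your root mail came from (the log file paths are just a suggestion):

35 1 * * * /usr/sbin/greenbone-nvt-sync >> /var/log/greenbone-nvt-sync.log 2>&1
5 0 * * * /usr/sbin/greenbone-scapdata-sync >> /var/log/greenbone-scapdata-sync.log 2>&1
5 1 * * * /usr/sbin/greenbone-certdata-sync >> /var/log/greenbone-certdata-sync.log 2>&1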

This time it worked from cron.
The only difference was the redirection, and the time it ran. Go figure.
Maybe if everyone is using the default schedule, the feed server is just too busy at those times.
I’ll try adjusting the times slightly…

15 1 * * * /usr/sbin/greenbone-nvt-sync > /tmp/greenbone-nvt-sync.out
20 1 * * * /usr/sbin/greenbone-scapdata-sync > /tmp/greenbone-scapdata-sync.out
25 1 * * * /usr/sbin/greenbone-certdata-sync > /tmp/greenbone-certdata-sync.out

You can run tcpdump to check that only ONE session is running. We only allow one TCP session per source IP to feed.community.greenbone.net; any attempt at a 2nd sync will be blocked automatically, to be fair to all other users with our limited resources (1 Gbit Internet).
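
Something along these lines should show whether a second rsync session is open at the same time (rsync daemon traffic uses TCP port 873; the interface and prompt here are just examples):

[root@openvas]# tcpdump -nn -i eth0 'host feed.community.greenbone.net and tcp port 873'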

Each script ran from cron, 5 minutes apart.
The output seemed short … all I received was the incremental file list.
It doesn’t look like I received any updates. Is this correct?

greenbone-nvt-sync.out:

receiving incremental file list
plugin_feed_info.inc
1,131 100% 1.08MB/s 0:00:00 (xfr#1, to-chk=0/1)

sent 43 bytes received 1,243 bytes 514.40 bytes/sec
total size is 1,131 speedup is 0.88

greenbone-scapdata-sync.out:

receiving incremental file list
timestamp
13 100% 12.70kB/s 0:00:00 (xfr#1, to-chk=0/1)

sent 43 bytes received 114 bytes 62.80 bytes/sec
total size is 13 speedup is 0.08

greenbone-certdata-sync.out:

receiving incremental file list
timestamp
13 100% 12.70kB/s 0:00:00 (xfr#1, to-chk=0/1)

sent 43 bytes received 114 bytes 104.67 bytes/sec
total size is 13 speedup is 0.08

Just to add to this, I’ve noticed similar problems. Periodically the update fails with a connection refused error, sometimes on NVT, sometimes CERT, sometimes SCAP, sometimes all of them. No firewall / network accelerator is in place.

The issue happens randomly. Are you limiting the number of simultaneous connections (from different IPs)? Or is the feed server perhaps refusing new connections when the load is too high?

Are you sure that no NAT/PAT is in use? This is what normally happens if it is. Try IPv6; it generally works better.
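
A quick way to compare the two address families, assuming rsync 3.x, is to list the feed modules over IPv4 and then over IPv6:

[root@openvas]# rsync -4 --list-only rsync://feed.community.greenbone.net/
[root@openvas]# rsync -6 --list-only rsync://feed.community.greenbone.net/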

NAT is in use, but not PAT. However, I don’t see any reason why NAT would cause such issues. If I’m not mistaken, the connection is made over the rsync/TCP protocol, which has worked without problems through NAT for ages.

Not if there is a SYN limit of one connection at a time in place. Many NAT devices tend to keep TCP sessions open longer than needed.
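
If the NAT device is Linux-based, one way to spot such lingering sessions, assuming conntrack-tools is installed there (the host in the prompt is just an example), is to list the tracked connections to the rsync port:

[root@nat]# conntrack -L -p tcp --dport 873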
