Finding vulnerable PHP components on hosting servers?

Hello,

(Sorry if I am posting in wrong category…)

I am looking for tools capable of finding PHP components with known vulnerabilities on hosting servers… Is OpenVAS suitable for that? For example: would OpenVAS detect “phpmailer 6.0.5” somewhere on the filesystem (a website’s home)?
https://nvd.nist.gov/vuln/detail/CVE-2018-19296

OpenVAS definitely has a signature for CVE-2018-19296…

Does it make any difference whether a package is installed via the OS package manager, via a dependency manager (Composer, in the case of PHP), or is just copied somewhere onto the filesystem?

Thanks

Hi,
check out the Greenbone SecInfo site.

The short answer regarding suitability is: yes.


As I mentioned, I do know that there are NVTs for CVE-2018-19296, but I am not able to detect the outdated/vulnerable “phpmailer” on the filesystem (the website’s home)…

8 NVTs with Quality of Detection: package (97%)
1 NVT with Quality of Detection: remote_banner (80%)

Any suggestions?

“phpmailer” is just an example… I cannot detect other vulnerable PHP packages either… They are not installed using the distro’s package manager, but uploaded by the websites’ owners to their home directories… Some use Composer, others don’t… Are you sure OpenVAS should detect those?

Thank you in advance

Hi there,

if the custom installation of any PHP package lies somewhere in an unknown or uncommon web directory, it is impossible for the scanner to find it.

With software like “phpmailer”, there is definitely a big difference between custom installations and installations done via a package manager when it comes to vulnerability scans.

Package checks rely on information about the binary or library files on the target system. These files are usually always in the same spot, so it’s easy for a local check to obtain the version information and compare the installed version against the vulnerable versions listed in an official advisory.
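As a loose illustration of such a local version check: the fixed-version numbers below come from the PHPMailer changelog for CVE-2018-19296 (fixed in 5.2.27 / 6.0.6), but the comparison logic is a simplified sketch, not Greenbone’s actual NVT code, and a real check would first query the package manager (e.g. `dpkg-query`) for the installed version.

```python
# Minimal sketch of a local "package" check. In practice the installed
# version would come from the package manager's database; here it is
# passed in directly for illustration.

def parse_version(v):
    """Split a dotted version string into a tuple of integers."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed, fixed):
    """True if the installed version is older than the first fixed one."""
    return parse_version(installed) < parse_version(fixed)

# CVE-2018-19296 was fixed in PHPMailer 5.2.27 / 6.0.6:
print(is_vulnerable("6.0.5", "6.0.6"))  # -> True (vulnerable)
print(is_vulnerable("6.0.6", "6.0.6"))  # -> False (patched)
```

The point is simply that a local check can make a precise version comparison, which is why package-based detections earn a high Quality of Detection.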

The second type of detection - in this case checking a specific HTTP response - deals with the information that the software, running on a specific port of the webserver, is exposing to us. So if it’s “hidden” inside a custom directory, there’s almost no chance that it will be detected by our script.
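To make the contrast concrete, a remote check might look roughly like this. The candidate paths and helper names are illustrative assumptions (real NVTs maintain their own path lists and parsing); the key behaviour is that a custom directory simply never gets probed:

```python
# Hedged sketch of a remote version probe for a PHPMailer VERSION file.
# CANDIDATE_PATHS is an assumed list of common install locations, not
# an exhaustive or official one.
from urllib.request import urlopen
from urllib.error import URLError

CANDIDATE_PATHS = [
    "/phpmailer/VERSION",
    "/PHPMailer/VERSION",
    "/vendor/phpmailer/phpmailer/VERSION",
]

def http_fetch(url):
    """Return the response body as text, or None on any error."""
    try:
        with urlopen(url, timeout=5) as resp:
            return resp.read().decode("utf-8", "replace")
    except (URLError, OSError):
        return None

def detect_version(base_url, fetch=http_fetch):
    """Try each candidate path; return the first version string found."""
    for path in CANDIDATE_PATHS:
        body = fetch(base_url.rstrip("/") + path)
        if body and body.strip()[:1].isdigit():
            return body.strip()
    return None  # hidden in a custom directory -> nothing to find
```

If the owner uploaded PHPMailer into, say, `/my-own-lib/`, none of the probed paths match and the function returns `None`, which mirrors what the scanner experiences.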

Hope I could clear it up a bit.

Cheers


They could be recognizable with their hash, even if elsewhere.
If a package was installed with a package manager and then moved, its hash could probably still be found in the package manager’s database.
Otherwise you’d need to compare it against a hash list brought to the machine … or against an external database of known software (perhaps restricted to a list of software that should be on the scanned host).
So, “impossible” is relative and makes an assertion about the scanning techniques used in the audit software.
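For what it’s worth, the hash-walk idea could be prototyped in a few lines. The hash set here is a placeholder (not real PHPMailer hashes); a real deployment would pull known-vulnerable file hashes from a curated database such as NIST’s hash sets:

```python
# Sketch of hash-based detection: walk a directory tree, hash every
# file, and flag matches against a set of known file hashes. This finds
# files regardless of where they were moved to.
import hashlib
import os

def sha256_of(path):
    """SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def find_known_files(root, known_hashes):
    """Yield (path, digest) for every file whose hash is in known_hashes."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            digest = sha256_of(path)
            if digest in known_hashes:
                yield path, digest
```

Such a scanner could then feed its findings back into the vulnerability management suite, as suggested above.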

True, but that’s not what I was talking about. I said if it was a custom installation and moved out of its default directory.

No, it doesn’t, since you are referring to the wrong install method. The package detection works fine.

Why would an install method become “wrong” just by moving an installed package (e.g. installing with apt-get and then moving files into non-standard directories, without breaking the functional coherence, of course)?

My scenario was kind of a forensic one, even if used on a live system. If you examine a computer using a hash database, you can find files even if they are not at standard locations.

That’s for instance what the hash databases from NIST are useful for. I don’t know if such a method is used by the GB software suite, but at least one could easily create such a scanner and connect it to the suite using the recommended scanner shims to integrate the results into the overall Vulnerability Management. Kind of “tripwire” in a way. Could also be used to alert to uploaded vulnerable (outdated) DLLs, to find leftovers of oldish software falsely believed to be uninstalled, etc.

Are you deliberately trying to misunderstand me?

Your solution sounds intriguing, but it has nothing to do with the core issue at hand: how web components can be found (without having access from inside!).

Yes, having local access to the machine makes it very simple to find stuff. I guess we can agree on that.

Trying to access non-default directories through web-based services? Not so much.
Enumerating directories has its limits (unless we’re talking brute force); however, our CGI scan gets a lot done. Ultimately it depends on what the server exposes to us during the scan or beforehand - that is the important bit.

No, certainly not.
I obviously missed a bit of information, I’m sorry for that.
Thank you for your remark pushing me into realizing what’s going on.


Cheers, but also no need to feel sorry - hopefully my question wasn’t offending you in any way; it certainly wasn’t meant to sound harsh.

I didn’t receive it as harsh, more kind of stunned, which is normal in this kind of situation. The oversight is completely on my side and you helped me realize it.

InfoSec can put a strain on people, that’s how this happened, behind the scenes.

I agree. Let’s just wait for OP to respond to all this. :slight_smile:

@mirkt your turn!