
False Positive/Negative: Potentially interesting backup/cert file found. #728

Open
Anon-Exploiter opened this issue Jun 28, 2021 · 21 comments

@Anon-Exploiter

Output of suspected false positive / negative

+ /com.cer: Potentially interesting backup/cert file found.
+ /database.tar.lzma: Potentially interesting backup/cert file found.
+ /dump.tgz: Potentially interesting backup/cert file found.
+ /com.alz: Potentially interesting backup/cert file found.
+ /com.tar: Potentially interesting backup/cert file found.
+ /com.pem: Potentially interesting backup/cert file found.
+ /site.cer: Potentially interesting backup/cert file found.
+ /com.jks: Potentially interesting backup/cert file found.
+ /dump.tar.lzma: Potentially interesting backup/cert file found.
+ /backup.pem: Potentially interesting backup/cert file found.
+ /database.cer: Potentially interesting backup/cert file found.
+ /database.pem: Potentially interesting backup/cert file found.
+ /dump.alz: Potentially interesting backup/cert file found.
+ /archive.tgz: Potentially interesting backup/cert file found.
+ /dump.tar.bz2: Potentially interesting backup/cert file found.
+ /backup.cer: Potentially interesting backup/cert file found.
+ /com.tar.lzma: Potentially interesting backup/cert file found.
+ /archive.cer: Potentially interesting backup/cert file found.
+ /database.tar: Potentially interesting backup/cert file found.

Screenshot:

[image omitted]


What's actually happening?

Nikto is trying to find sensitive/backup files, but the server/site is returning a 200 response for almost every request, causing all these FPs (the list is really long).

Can we fix this?

I think yes: we could add a check for the same content length across all these objects (this could itself introduce FPs if the file name/path being accessed gets reflected in the title or other HTML tags of the page).

Another fix would be to check the file headers: if it's a .tgz or .zip file, we can verify whether it really is by analyzing the headers of the files being accessed.
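The magic-byte idea can be sketched like this. This is a minimal Python illustration (Nikto itself is Perl); the function name and signature table are hypothetical, not Nikto code:

```python
# Hypothetical sketch: confirm a suspected backup/cert file by its magic
# bytes. The signature values below are the standard ones for each format;
# looks_like_archive() is illustrative and does not exist in Nikto.

MAGIC = {
    ".zip": b"PK\x03\x04",
    ".gz":  b"\x1f\x8b",
    ".tgz": b"\x1f\x8b",           # a .tgz is a gzip-compressed tar
    ".bz2": b"BZh",
    ".pem": b"-----BEGIN",         # PEM certs/keys are ASCII-armored
}

def looks_like_archive(path: str, body: bytes) -> bool:
    """True only when the body starts with the bytes the extension promises."""
    for ext, magic in MAGIC.items():
        if path.lower().endswith(ext):
            return body.startswith(magic)
    return False  # unknown extension: cannot confirm either way
```

An HTML error page served as /backup.zip starts with <!DOCTYPE html>, fails the check, and would be suppressed.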

@Anon-Exploiter (Author)

> by analyzing the headers of the files being accessed.

Or by checking the Content-Type header in the server's response to see whether it really is a zip file, etc.
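The Content-Type idea is even simpler to sketch. This hypothetical Python helper (again, not Nikto code) treats any text/* type as implausible for an archive or certificate:

```python
# Hypothetical sketch of the Content-Type check; the helper name is
# illustrative. Real archives and certs are normally served as application/*
# (application/zip, application/x-x509-ca-cert, application/octet-stream),
# never as text/*.

def content_type_plausible(content_type: str) -> bool:
    ct = content_type.split(";")[0].strip().lower()  # drop "; charset=..."
    if ct.startswith("text/"):
        return False   # an HTML/text response cannot be a real archive
    return ct.startswith("application/")
```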

@sullo (Owner) commented Jul 3, 2021

Was the rest of the scan full of false positives as well?

Here is the current logic. I believe I accepted 200 in order to avoid false negatives, but maybe it's producing too many FPs.

            if (($response->{'content-type'} =~ '/^application\//i')
                || (   ($res == 200)
                    && ($response->{'content-length'} > 0)
                    && (!is_404("/$f", $content, $res, $response->{'location'})))
                    ) {

So this says:
If content type matches application/ OR response is 200/OK, AND length is > 0 AND it's not a 404 (as determined at the start of the scan)... then alert.

Perhaps better would be:
If content type matches application/ AND response is 200/OK, AND length is > 0 AND it's not a 404 (as determined at the start of the scan)... then alert.

@Anon-Exploiter do you still have access to this site to test against?

@Anon-Exploiter (Author)

> @Anon-Exploiter do you still have access to this site to test against?

Yes, I do.

As far as I remember, there were a lot of FPs in the directory/files discovery phase too. It printed almost everything.

@sullo (Owner) commented Jul 3, 2021

I was looking to make a change for you to test but I realize I read the logic wrong:
if (CT matches application/) OR ( 200 OK AND content-length > 0 AND is not 404)

Any chance you can capture a full response (headers at least) on one of the requests for a .gz or something? curl or nikto -D DS (debug and scrub name/ip) redirected to a file. I really only need one response for a compressed file request.

You might be able to solve the other false positives by using -404string <string> if there is a distinct message in the response body of those FPs, like "We couldn't find that file" or something. More info here

Thanks

@Anon-Exploiter (Author)

> I was looking to make a change for you to test but I realize I read the logic wrong:
> if (CT matches application/) OR ( 200 OK AND content-length > 0 AND is not 404)
>
> Any chance you can capture a full response (headers at least) on one of the requests for a .gz or something? curl or nikto -D DS (debug and scrub name/ip) redirected to a file. I really only need one response for a compressed file request.
>
> You might be able to solve the other false positives by using -404string <string> if there is a distinct message in the response body of those FPs, like "We couldn't find that file" or something. More info here
>
> Thanks

Thanks for looking into this! The thing is, the site just throws a 200 for literally everything.

I think one check we could implement to mitigate this is to calculate the catch-all response length by making a request to an endpoint that doesn't exist, and then compare it with the length of the other paths.

Here are the cURL responses for some requests:

Normal Response:

$ curl -i https://host.com

HTTP/2 200
date: Sat, 03 Jul 2021 21:02:53 GMT
content-type: text/html; charset=UTF-8
content-length: 3316
server: Apache
strict-transport-security: max-age=2592000; includeSubDomains; preload;
x-frame-options: sameorigin
referrer-policy: origin
feature-policy: default self
x-content-type-options: nosniff
content-security-policy: img-src * data:;
vary: X-Forwarded-Proto,Accept-Encoding,User-Agent
last-modified: Sat, 03 Jul 2021 16:19:31 GMT
accept-ranges: bytes
cache-control: max-age=1
expires: Sat, 03 Jul 2021 21:02:54 GMT

<!DOCTYPE html>
<html lang="">
<head>
  <meta charset="utf-8">
  <meta http-equiv="x-ua-compatible" content="ie=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <meta name="application-version" content="18.19.01">
  <link rel="stylesheet" href="assets/css/style.css">

  <title>...</title>
</html>

Request to an endpoint which doesn't exist:

curl -i https://host.com/thisDoesntExist

HTTP/2 200
date: Sat, 03 Jul 2021 20:59:49 GMT
content-type: text/html; charset=UTF-8
content-length: 3316
server: Apache
strict-transport-security: max-age=2592000; includeSubDomains; preload;
x-frame-options: sameorigin
referrer-policy: origin
feature-policy: default self
x-content-type-options: nosniff
content-security-policy: img-src * data:;
vary: X-Forwarded-Proto,Accept-Encoding,User-Agent
last-modified: Sat, 03 Jul 2021 17:40:36 GMT
accept-ranges: bytes
cache-control: max-age=1
expires: Sat, 03 Jul 2021 20:59:50 GMT

<!DOCTYPE html>
<html lang="">
<head>
  <meta charset="utf-8">
  <meta http-equiv="x-ua-compatible" content="ie=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <meta name="application-version" content="18.19.01">
  <link rel="stylesheet" href="assets/css/style.css">

  <title>...</title>
</html>

Request to a test extension (i.e. .gz, .tgz, etc.):

$ curl -i https://host.com/backup.zip

HTTP/2 200
date: Sat, 03 Jul 2021 21:04:28 GMT
content-type: text/html; charset=UTF-8
content-length: 3316
server: Apache
strict-transport-security: max-age=2592000; includeSubDomains; preload;
x-frame-options: sameorigin
referrer-policy: origin
feature-policy: default self
x-content-type-options: nosniff
content-security-policy: img-src * data:;
vary: X-Forwarded-Proto,Accept-Encoding,User-Agent
last-modified: Sat, 03 Jul 2021 17:40:36 GMT
accept-ranges: bytes
cache-control: max-age=1
expires: Sat, 03 Jul 2021 21:04:29 GMT

<!DOCTYPE html>
<html lang="">
<head>
  <meta charset="utf-8">
  <meta http-equiv="x-ua-compatible" content="ie=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <meta name="application-version" content="18.19.01">
  <link rel="stylesheet" href="assets/css/style.css">

  <title>...</title>
</html>

Also, the length of the response is always the same for everything 😅

$ curl -i https://host.com/thisDoesntExist -s | wc -c
3865

$ curl -i https://host.com -s | wc -c
3865

$ curl -i https://host.com/backup.zip -s | wc -c
3865

@Anon-Exploiter (Author)

The only solution (that comes to mind) in this case would be to exclude all results with the same length as that of the non-existent path.
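That suggestion boils down to a pure length comparison. A hypothetical Python sketch, where the helper name and the default tolerance (to absorb small differences such as a reflected path or a timestamp) are both illustrative:

```python
# Hypothetical sketch: compare a candidate hit's body length against the
# length of a response for a path that cannot exist. Names and the default
# tolerance are illustrative, not Nikto code.

def is_soft_404(candidate_len: int, baseline_len: int, tolerance: int = 32) -> bool:
    """True when a 200 response is probably the server's catch-all page,
    judged only by body length against the known-missing baseline."""
    return abs(candidate_len - baseline_len) <= tolerance
```

With the lengths captured above, is_soft_404(3316, 3316) is True, so every one of the reported paths would be discarded.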

@sullo (Owner) commented Jul 4, 2021

Thanks for all that. The ultimate problem here isn't the backup file requests; it's is_404() being fooled despite the logic to avoid this.

I never implemented a content-length check in is_404(); I'll have to see if I can work that in for cases where the default 404 is really a 200.

I also added an additional content-type check to the sitefiles plugin (the one that checks for backups) to make sure it's not matching text/*.

I don't suppose this is a public bug bounty server? :)

@Anon-Exploiter (Author)

Sadly I can't share the host; it's a client of my company and not a bug bounty target.

But maybe we could set up a custom application imitating this behavior to test the script?


Still, let me know whenever you want me to test anything.

@Anon-Exploiter (Author)

Hey, long time no see. Any progress on this? @sullo

@Anon-Exploiter (Author)

Closing due to no updates/comments.

@sullo (Owner) commented Aug 13, 2021

My laptop screen died so I haven't been able to devote much time to this. Reopening as I do want to figure out a better way to do this.

@sullo sullo reopened this Aug 13, 2021
@Cyb3rMehul

We could solve this by using a set of random file names that have an extremely low chance of existing on any server, to make sure the server is not returning a 200 response as a wildcard.

For example, https://www.google.com/idontplaydarts.html, where idontplaydarts is a random string that has a really low chance of existing on any server.
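A probe path along those lines could be generated like this. This is a hypothetical Python sketch using a cryptographically random token rather than a fixed string like idontplaydarts:

```python
# Hypothetical sketch: build probe paths from a random token so no real
# resource can collide with them. A 200 response on such a path means the
# server has a wildcard/catch-all handler.
import secrets

def probe_path(ext: str = ".html") -> str:
    token = secrets.token_hex(12)      # 24 random hex characters
    return f"/{token}{ext}"
```

Probing with a few different extensions (probe_path(), probe_path(".zip"), ...) would also catch servers that only wildcard certain file types.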

@Anon-Exploiter (Author)

> We could solve this by using a set of random file names that have an extremely low chance of existing on any server, to make sure the server is not returning a 200 response as a wildcard.
>
> For example, https://www.google.com/idontplaydarts.html, where idontplaydarts is a random string that has a really low chance of existing on any server.

The problem is, in my case, everything returns 200.

As stated here: #728 (comment)

@tautology0 (Collaborator)

The 404 detection already uses random filenames to look for common patterns. Unfortunately, sites which erroneously return 200 are a common problem that is not easy to fix. So much so that I'm always tempted to raise it with the vendor when I see it.

@Anon-Exploiter (Author) commented Aug 18, 2021

> The 404 detection already uses random filenames to look for common patterns. Unfortunately, sites which erroneously return 200 are a common problem that is not easy to fix. So much so that I'm always tempted to raise it with the vendor when I see it.

Haha, yeah lol. In my case, the site is in production.

@ivanfeanor

Hello.
For me, the issue occurs with an SPA (single-page application, i.e. a React app). Routing is done inside the app, and everything that is not /api or an existing resource under / is redirected to index.html.
Nginx example config:
location / { alias /app/; try_files $uri $uri/ /index.html; }
The easiest fix would be to check against Content-Type: text/html (and/or <!doctype html> in the body).
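The check proposed here can be sketched as a hypothetical Python helper (not Nikto code) that flags a response as the SPA fallback page when either signal is present:

```python
# Hypothetical sketch of the proposed check; the helper name is
# illustrative. Either signal -- a text/html Content-Type or an HTML
# doctype at the start of the body -- marks the response as the SPA's
# index.html rather than a real backup/cert file.

def is_spa_fallback(content_type: str, body: bytes) -> bool:
    ct = content_type.split(";")[0].strip().lower()
    if ct == "text/html":
        return True
    return body.lstrip().lower().startswith(b"<!doctype html")
```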

@sullo (Owner) commented Feb 16, 2022

I made a small change in the 2.5.0 branch: anything with a content-type of text/ will be ignored as a potential false positive. While I don't want to restrict what it could be, an archive should never really have a content-type of text. @ivanfeanor please give it a try.

@ivanfeanor

Hi @sullo.
No, still the same.

@sullo (Owner) commented Feb 17, 2022

@ivanfeanor can you send the response headers for a GET request to something like /test.zip?

@ghost commented Sep 18, 2022

Hello all,

Just been working on some scans with Nikto and had this come up. The files do not exist... but are still being reported as 200:

[screenshot omitted]

Doing a wget on the 200-response backup.pem file pulls down a file with doctype info in it.

This is running under Kali Linux, fully patched as of today, 09.18.2022.

[screenshot of the downloaded backup.pem omitted]

On another server, it is returning 301 instead, yet the file does not exist there either, but because it is reporting a 301...

[screenshot omitted]

Could this be affected by how redirects are configured on a host, such as AWS or OVH ?

@sullo (Owner) commented Sep 19, 2022

@Giga-Tastic could you show the response headers from /backup.pem?

Can you confirm if you are running version 2.1.6 or if you are running 2.5.0? If you are running 2.1.6 please try 2.5.0.
