False Positive/Negative: Potentially interesting backup/cert file found. #728
Was the rest of the scan full of false positives as well? Here is the current logic. I believe I accepted 200 so as to avoid false negatives, but maybe it's producing too many FPs.
So this says: Perhaps better would be: @Anon-Exploiter do you still have access to this site to test against?
As far as I remember, there were a lot of FPs in the directory/files discovery phase too. It printed almost everything.
I was looking to make a change for you to test, but I realize I read the logic wrong. Any chance you can capture a full response (headers at least) on one of the requests for a .gz or something? curl or You might be able to solve the other false positives by using Thanks
Thanks for looking into this! The thing is, the site just throws 200 for literally everything. One check we could implement to mitigate this is to calculate the faulty response length by making a request to an endpoint that doesn't exist and then matching it against the lengths of the other paths. Here are the cURL responses for some requests:

Normal response:

$ curl -i https://host.com
HTTP/2 200
date: Sat, 03 Jul 2021 21:02:53 GMT
content-type: text/html; charset=UTF-8
content-length: 3316
server: Apache
strict-transport-security: max-age=2592000; includeSubDomains; preload;
x-frame-options: sameorigin
referrer-policy: origin
feature-policy: default self
x-content-type-options: nosniff
content-security-policy: img-src * data:;
vary: X-Forwarded-Proto,Accept-Encoding,User-Agent
last-modified: Sat, 03 Jul 2021 16:19:31 GMT
accept-ranges: bytes
cache-control: max-age=1
expires: Sat, 03 Jul 2021 21:02:54 GMT
<!DOCTYPE html>
<html lang="">
<head>
<meta charset="utf-8">
<meta http-equiv="x-ua-compatible" content="ie=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="application-version" content="18.19.01">
<link rel="stylesheet" href="assets/css/style.css">
<title>...</title>
</html>

Request to an endpoint which doesn't exist:

$ curl -i https://host.com/thisDoesntExist
HTTP/2 200
date: Sat, 03 Jul 2021 20:59:49 GMT
content-type: text/html; charset=UTF-8
content-length: 3316
server: Apache
strict-transport-security: max-age=2592000; includeSubDomains; preload;
x-frame-options: sameorigin
referrer-policy: origin
feature-policy: default self
x-content-type-options: nosniff
content-security-policy: img-src * data:;
vary: X-Forwarded-Proto,Accept-Encoding,User-Agent
last-modified: Sat, 03 Jul 2021 17:40:36 GMT
accept-ranges: bytes
cache-control: max-age=1
expires: Sat, 03 Jul 2021 20:59:50 GMT
<!DOCTYPE html>
<html lang="">
<head>
<meta charset="utf-8">
<meta http-equiv="x-ua-compatible" content="ie=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="application-version" content="18.19.01">
<link rel="stylesheet" href="assets/css/style.css">
<title>...</title>
</html>

Request to a test extension (i.e. .gz, .tgz, etc.):

$ curl -i https://host.com/backup.zip
HTTP/2 200
date: Sat, 03 Jul 2021 21:04:28 GMT
content-type: text/html; charset=UTF-8
content-length: 3316
server: Apache
strict-transport-security: max-age=2592000; includeSubDomains; preload;
x-frame-options: sameorigin
referrer-policy: origin
feature-policy: default self
x-content-type-options: nosniff
content-security-policy: img-src * data:;
vary: X-Forwarded-Proto,Accept-Encoding,User-Agent
last-modified: Sat, 03 Jul 2021 17:40:36 GMT
accept-ranges: bytes
cache-control: max-age=1
expires: Sat, 03 Jul 2021 21:04:29 GMT
<!DOCTYPE html>
<html lang="">
<head>
<meta charset="utf-8">
<meta http-equiv="x-ua-compatible" content="ie=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="application-version" content="18.19.01">
<link rel="stylesheet" href="assets/css/style.css">
<title>...</title>
</html>

Also, the length of the response is always the same for everything 😅

$ curl -i https://host.com/thisDoesntExist -s | wc -c
3865
$ curl -i https://host.com -s | wc -c
3865
$ curl -i https://host.com/backup.zip -s | wc -c
3865
The only solution that comes to my mind in this case would be to exclude all results with the same length as that of the nonexistent path.
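The filtering idea above can be sketched as a small helper. This is a hypothetical Python illustration (Nikto itself is written in Perl, and the function and field names here are invented for the example): record the body length of a known-nonexistent path, then drop any hit whose length matches that baseline.

```python
# Sketch only, not Nikto's implementation: exclude scan hits whose
# response length matches the baseline from a known-nonexistent path.
def filter_soft_404s(hits, baseline_len, tolerance=0):
    """Drop hits whose body length is within `tolerance` bytes of the
    length returned for a path that should not exist."""
    return [h for h in hits if abs(h["length"] - baseline_len) > tolerance]

hits = [
    {"path": "/backup.zip", "length": 3316},   # same as baseline -> likely FP
    {"path": "/site.tgz",   "length": 3316},   # same as baseline -> likely FP
    {"path": "/db.sql",     "length": 10240},  # differs -> keep
]
print(filter_soft_404s(hits, baseline_len=3316))
```

A small `tolerance` helps when the catch-all page varies by a few bytes (timestamps, CSRF tokens), though a reflected request path can still change the length enough to slip through.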
Thanks for all that. The ultimate problem here isn't the backup file requests; it's is_404() being fooled despite the logic to avoid this. I never implemented a content-length check in is_404; I'll have to see if I can work that in for the case where the default 404 is really a 200. I also added an additional content-type check to the sitefiles plugin (the one that checks for backups) to make sure it's not matching I don't suppose this is a public bug bounty server? :)
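The content-type check mentioned here can be illustrated with a minimal Python sketch (hypothetical helper names, not the actual sitefiles plugin code): a hit for an archive extension that comes back as text/html is almost certainly the site's catch-all page rather than a real archive.

```python
# Hypothetical illustration of a content-type sanity check for backup
# probes; extension list and function name are invented for the example.
ARCHIVE_EXTS = (".zip", ".tgz", ".gz", ".tar", ".bak")

def plausible_backup(path, content_type):
    """Return False when an archive-extension probe is served as HTML."""
    if not path.lower().endswith(ARCHIVE_EXTS):
        return True  # not an archive probe; other checks decide
    return not content_type.lower().startswith("text/html")

print(plausible_backup("/backup.zip", "text/html; charset=UTF-8"))  # False
print(plausible_backup("/backup.zip", "application/zip"))           # True
```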
Sadly, I can't share the host; it's a client of my company and not a bug bounty target. But we could maybe set up a custom application imitating this behavior to test the script? Still, let me know whenever you want me to test anything.
Hey, long time no see. Any progress on this? @sullo |
Closing due to no updates/comments. |
My laptop screen died so I haven't been able to devote much time to this. Reopening as I do want to figure out a better way to do this. |
We can solve this by using a set of random file names, which have an extremely low chance of existing on any server, to make sure the server is not returning a 200 response as a wildcard. For example,
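The random-probe idea can be sketched like this in Python (an illustration of the general technique; the names and the decision rule are assumptions, not Nikto's actual logic): request several paths that should not exist, and if they all come back 200 with identical lengths, treat the server as a wildcard responder.

```python
import random
import string

# Sketch of wildcard (catch-all) detection; helper names are invented.
def random_probe_paths(n=3, length=12):
    """Generate paths that are vanishingly unlikely to exist."""
    alphabet = string.ascii_lowercase + string.digits
    return ["/" + "".join(random.choices(alphabet, k=length)) + ".html"
            for _ in range(n)]

def looks_like_wildcard(statuses, lengths):
    """Assumed rule: all probes return 200 with identical body lengths."""
    return all(s == 200 for s in statuses) and len(set(lengths)) == 1

paths = random_probe_paths()
print(paths)
```

In practice the probe responses would then serve as the soft-404 baseline for the rest of the scan.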
The problem is, in my case, everything returns 200. As stated here: #728 (comment) |
The 404 detection already uses random filenames to look for common patterns. Unfortunately, sites which erroneously return 200 are a common problem that is not easy to fix. So much so that I'm always tempted to raise it with the vendor when I see it.
Haha, yeah lol. In my case, the site is in production. |
Hello. |
I made a small change in the 2.5.0 branch which is that anything with a content-type of |
Hi @sullo.
@ivanfeanor can you send the response headers for a GET request to something like /test.zip? |
@Giga-Tastic could you show the response headers from the /backup.pem request? Can you confirm whether you are running version 2.1.6 or 2.5.0? If you are running 2.1.6, please try 2.5.0.
Output of suspected false positive / negative
Screenshot:
What's actually happening?
Nikto is trying to find sensitive/backup files, but the server/site is returning a 200 response for almost all requests, causing all these FPs (the list is really long).
Can we fix this?
I think yes: we could add a check for all of these objects having the same length (though this could still let FPs through if the file name/path being accessed gets reflected in the title or other HTML tags of the page, changing the length).
Another fix would be to check the file headers: if the path claims to be a .tgz or .zip file, we can verify whether it really is one by analyzing the headers (magic bytes) of the file being served.
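The header check above can be sketched in Python using the well-known archive magic bytes (the signatures are standard; the function name is invented for the example). Instead of trusting the status code, inspect the first bytes of the response body.

```python
# Sketch: identify an alleged archive by its leading magic bytes.
# Signatures: ZIP local file header, gzip, bzip2 (all standard values).
MAGIC = {
    b"PK\x03\x04": "zip",
    b"\x1f\x8b":   "gzip",   # covers .gz and .tgz
    b"BZh":        "bzip2",
}

def identify_archive(first_bytes):
    """Return the archive type for a body prefix, or None if it is
    not a recognized archive (e.g. an HTML catch-all page)."""
    for sig, kind in MAGIC.items():
        if first_bytes.startswith(sig):
            return kind
    return None

print(identify_archive(b"PK\x03\x04\x14\x00"))   # zip
print(identify_archive(b"<!DOCTYPE html>"))      # None
```

A scanner would typically fetch only the first few bytes (e.g. via a Range request) to keep this check cheap.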