
Jsherwood0 patch apache_activemq_rce_cve_2023_46604.rb to ensure timeout #19037

Conversation

jsherwood0 (Contributor):
Add a timeout to eliminate the hang in the check method

This fix ensures that, if all else fails, the check will time out and allow progress to continue instead of hanging on a host that does not reply as expected. This addresses Issue #19036.

Verification

List the steps needed to make sure this thing works

  • Start msfconsole
  • use exploit/multi/misc/apache_activemq_rce_cve_2023_46604
  • Set RHOSTS and RPORT to a host/port that is not responding as ActiveMQ should (e.g., it's down or it's not ActiveMQ)
  • check
  • Verify that the check will eventually time out even if it does not receive an expected ActiveMQ response.
  • Verify that the check does not hang on servers that fail to respond as expected.
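For repeatability, the steps above can be captured in an msfconsole resource script and run with `msfconsole -r` (the RHOSTS/RPORT values are placeholders for whatever non-ActiveMQ host you test against):

```
# check_timeout.rc -- hypothetical resource script for the verification steps
use exploit/multi/misc/apache_activemq_rce_cve_2023_46604
set RHOSTS 192.0.2.10
set RPORT 61616
check
```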

Add a timeout to eliminate the hang in the check method
Remove rescue that does not handle specific exceptions.

    @@ -78,9 +78,12 @@ def initialize(info = {})
       end

       def check
         connect
    +    res = nil
    +    ::Timeout.timeout(datastore['ConnectTimeout']) do
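The shape of the proposed fix is to wrap the blocking read in Ruby's stdlib `Timeout`. A minimal self-contained sketch of that pattern (plain Ruby with no Metasploit mixins; the local throwaway server and the 2-second timeout are illustrative, not the module's values):

```ruby
require 'socket'
require 'timeout'

# A throwaway local server that accepts a connection but never sends a
# banner, simulating a live-but-silent (non-ActiveMQ) service.
server = TCPServer.new('127.0.0.1', 0)
port = server.addr[1]
Thread.new do
  server.accept
  sleep # hold the connection open without ever writing
end

sock = TCPSocket.new('127.0.0.1', port)
res = nil
begin
  # Without the wrapper, this read would block indefinitely; with it,
  # the check can bail out and report Unknown instead of hanging.
  ::Timeout.timeout(2) do
    res = sock.read(4)
  end
rescue ::Timeout::Error
  res = nil
end

puts(res.nil? ? 'timed out' : 'got a banner')
```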
Contributor:
This does not seem like the correct place to fix this issue. If the proposed changes work, that would imply that there is an issue in the Tcp mixin here that is preventing the existing datastore['ConnectTimeout'] option from being used.

If that's the case, we should fix the issue at its root which is likely in rex-sockets.

jsherwood0 (Contributor, Author):

Fair point; these are the only two places (this and #19038) where I've seen this happen, and I've run a bunch of modules' check methods against the same host/service pool. Since I couldn't see any reason why the TCP mixin would be the failure point, but only on these two checks, I just made sure that they at least time out.

smcintyre-r7 (Contributor), Apr 8, 2024:

I left a comment on the issue. I'm having trouble reproducing this because it looks like the ConnectTimeout datastore option is working as intended.

    metasploit-framework (S:0 J:0) exploit(multi/misc/apache_activemq_rce_cve_2023_46604) > time check ConnectTimeout=5
    [-] 192.168.159.128:61616 - Exploit failed [unreachable]: Rex::ConnectionTimeout The connection with (192.168.159.128:61616) timed out.
    [-] 192.168.159.128:61616 - Check failed: The state could not be determined.
    [+] Command "check ConnectTimeout\\=5" completed in 5.042257890003384 seconds
    metasploit-framework (S:0 J:0) exploit(multi/misc/apache_activemq_rce_cve_2023_46604) > time check ConnectTimeout=10
    [-] 192.168.159.128:61616 - Exploit failed [unreachable]: Rex::ConnectionTimeout The connection with (192.168.159.128:61616) timed out.
    [-] 192.168.159.128:61616 - Check failed: The state could not be determined.
    [+] Command "check ConnectTimeout\\=10" completed in 10.044773347995942 seconds
    metasploit-framework (S:0 J:0) exploit(multi/misc/apache_activemq_rce_cve_2023_46604) >

jsherwood0 (Contributor, Author):

OK. I have a specific server type that can be used as a target to see this issue:

  • Qumulo Core 7.0.1 build 287721.1.50
    • running on TornadoServer v5.1.1
    • using TLSv1.2 on port 443
      • using a subject with a wildcarded FQDN (e.g., Subject: "CN=*.example.com")
      • having Altnames with DNS tags and wildcarded FQDNs (e.g., "Certificate Subject Alternative Name: DNS Name: *.example.com, DNS Name: example.com")
      • accessed using the IP address (so does not match subject or altnames)

Yes, I know that this is not an ActiveMQ server, but the module should properly handle unexpected responses in case non-ActiveMQ targets make it into the RHOSTS list.

Contributor:

So I tested this a bit more thoroughly: the hang isn't in the #connect method, it's in the sock.get_once call.

In hindsight, this method probably should have been something along these lines:

    len = sock.timed_read(4).unpack1('N')
    return CheckCode::Unknown if len > 0x2000 # upper limit in case the service isn't ActiveMQ

    res = sock.timed_read(len)
    disconnect
    return CheckCode::Unknown unless res

    _, magic = res.unpack('CZ*')

That would have applied the socket-timeout (default of 10 seconds) to the two read operations. When an HTTPS server is the target, the first timeout would have expired and caused the check method to return unknown. Lastly, at the bottom, a rescue block should be used to catch the exception and return CheckCode::Unknown.

    Exploit::CheckCode::Safe("Apache ActiveMQ #{version}")
  rescue ::Timeout::Error
    CheckCode::Unknown
  end
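Stitched together, the suggested shape of the whole method would be roughly the following (a sketch against the Rex socket API, not the merged code; the version parsing is elided, and `timed_read`'s default timeout is assumed from the comment above):

```ruby
def check
  connect
  # timed_read applies the socket read timeout (default 10 seconds), so a
  # silent or non-ActiveMQ service raises Timeout::Error instead of hanging
  len = sock.timed_read(4).unpack1('N')
  return CheckCode::Unknown if len > 0x2000 # upper limit in case the service isn't ActiveMQ

  res = sock.timed_read(len)
  disconnect
  return CheckCode::Unknown unless res

  _, magic = res.unpack('CZ*')
  # ... validate magic and extract the ActiveMQ version string ...

  Exploit::CheckCode::Safe("Apache ActiveMQ #{version}")
rescue ::Timeout::Error
  CheckCode::Unknown
end
```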

jsherwood0 (Contributor, Author):

That seems to work pretty well. Do you want me to close this pull request so you can submit one with your fix?

Contributor:

I don't mind either way. Whichever is easiest for you works for me.
