
New Monitor server implementation #5012

Draft
wants to merge 65 commits into main

Conversation

@dlmarion (Contributor) commented Oct 23, 2024

Introduced a new Metrics Thrift API that all server types implement
and the foundation for a new Monitor server implementation. This
change includes dependencies on Google FlatBuffers (see
https://github.com/google/flatbuffers and https://flatbuffers.dev/)
and Javalin (see https://github.com/javalin/javalin and https://javalin.io/),
both of which are licensed under the Apache License Version 2.0. The
flatbuffers IDL file can be used during the Maven build to regenerate
the java code using the flatbuffers profile without having to install
anything locally (on most platforms).

I modified all of the Thrift server processors to respond to a new
Metrics RPC, which returns a MetricsResponse type. The response
object includes identifying information for the server, the timestamp
of the response, and a list of byte arrays. Each byte array is a flatbuffer
encoding of a Metric (name, type, tags, and value). The new
MetricsThriftRpcIT test validates that each of the server types responds
to this new Thrift API.
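
To make the decode path concrete, here is a rough sketch of consuming one response (the Metric class is the FlatBuffers-generated type from the IDL in this PR, and getRootAsMetric()/name() follow standard flatc naming conventions -- the exact generated accessors may differ):

```java
import java.nio.ByteBuffer;
import java.util.List;

public class MetricsResponseSketch {
  // Each byte[] in a MetricsResponse is one FlatBuffers-encoded Metric (name, type, tags, value).
  public static void printNames(List<byte[]> encodedMetrics) {
    for (byte[] bytes : encodedMetrics) {
      Metric metric = Metric.getRootAsMetric(ByteBuffer.wrap(bytes));
      // Only the name is decoded here; type, tags, and value stay untouched in the buffer.
      System.out.println(metric.name());
    }
  }
}
```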

I modified the Monitor server to start a new endpoint using Javalin. Javalin
is a wrapper on top of Jetty; it provides a nicer API for configuring the server,
additional features like debug logging and a route-overview endpoint (see image
below), and plugins for working with popular JavaScript frameworks. When the
Monitor server is started, a free ephemeral port is acquired for hosting the
Javalin-based Jetty endpoint. This port is discoverable in the log; the long-term
plan would be to remove the current Jetty-based monitor and wire the new
Javalin one up to the existing properties.
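
A minimal sketch of that startup flow (not the PR's actual code; it just shows Javalin on an ephemeral port):

```java
import io.javalin.Javalin;

public class MonitorWebSketch {
  public static void main(String[] args) {
    Javalin app = Javalin.create(config -> {
      config.showJavalinBanner = false; // Javalin wraps Jetty; further Jetty knobs are reachable via the config
    })
        .get("/health", ctx -> ctx.result("OK"))
        .start(0); // port 0 lets Jetty pick a free ephemeral port

    // This is the port that shows up in the log for discovery.
    System.out.println("Monitor (Javalin) listening on port " + app.port());
  }
}
```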

The new monitor code periodically retrieves the MetricsResponses from the
servers and populates some internal data structures (essentially a Caffeine
cache and some indexes into it). The use of FlatBuffers here is for memory
and time efficiency: the metrics are not deserialized unless they are needed
(for example, to send back to the UI). We could also provide the ability to
filter on metric names, which would only require deserializing the name and
not the type, tags, or value.
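
A rough sketch of that internal store (hypothetical names, not the PR's actual classes): the raw FlatBuffers bytes sit in a Caffeine cache with a simple index alongside, and nothing is decoded until an endpoint asks for it.

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.time.Duration;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class MetricStoreSketch {
  // serverId (e.g. host:port) -> undecoded metric buffers from that server's latest response
  private final Cache<String,List<byte[]>> latest =
      Caffeine.newBuilder().expireAfterWrite(Duration.ofMinutes(5)).build();

  // secondary index: resource group -> serverIds, maintained as responses arrive
  private final Map<String,Set<String>> byResourceGroup = new ConcurrentHashMap<>();

  public void update(String serverId, String resourceGroup, List<byte[]> metricBuffers) {
    latest.put(serverId, metricBuffers);
    byResourceGroup.computeIfAbsent(resourceGroup, rg -> ConcurrentHashMap.newKeySet()).add(serverId);
  }

  // Endpoints decode only the buffers (and fields) they actually need.
  public List<byte[]> rawMetrics(String serverId) {
    return latest.getIfPresent(serverId);
  }
}
```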

Route endpoint image: (screenshot attached in the PR)

Related to #4973

@dlmarion added this to the 4.0.0 milestone Oct 23, 2024
@dlmarion self-assigned this Oct 23, 2024
@ctubbsii (Member) left a comment

There seem to be three separate things going on in this PR, all at once, and I don't think they strictly need to be coupled.

  1. The previously discussed goal of creating a Thrift RPC endpoint for the monitor to query, and the corresponding Monitor thread to periodically query the endpoints,
  2. The partial migration from Thrift serialization to FlatBuffers,
  3. The demo of Javalin as an alternative way to set up and configure the monitor web server.

I'm not sold on items 2 and 3, as I think this could be done much more simply without any of those changes. Addressing those in reverse order:

Regarding Javalin, the existing monitor is well-established, and accomplishes a lot of goals that this demo of Javalin doesn't (it handles serving JS/CSS resources from the classpath using the default servlet, for example, and automatically handles the serialization of POJO objects via the /rest endpoints in either JSON or XML using Jackson, with only JAX-RS annotations; it's also designed to be easily configured for TLS with client-auth certs, if desired, and handles a lot of other things). I think the switching costs of moving to Javalin are high, bringing in a lot of new dependencies... on libraries written in a completely different programming language (Kotlin) that will be difficult for those unfamiliar with it to trace into for troubleshooting, if that becomes needed. It also isn't clear that the way it connects resources to endpoints is really all that much better or simpler than the JAX-RS specification that we get with Jersey annotations. If we decide to expose the metrics directly from the servers, for general consumption by users who implement their own monitors, in order to get rid of our own monitor, a Javalin-based server might be a reasonable reference implementation. But I don't think it should be part of these changes today.

Regarding flatbuffers, I'd really like to understand why you did that, since it seems to me that it's not providing anything we don't already get with Thrift directly, and it adds extra layers and dependencies. I'm not sure I understand the intent.

For the core change of exposing the metrics via Thrift, I was only able to review at a glance, because it was hard to see what was part of that core change and what was part of the other things in this PR... but the overall mechanism seemed okay. However, it still depends on the assumption that we should put full metrics on the monitor (rather than just a small curated set of health/status items on a custom API, leaving it to the user to collect metrics for analysis using a dedicated metrics-specific system if they need that, rather than trying to get them via our lightweight monitor).

Also, I was talking with @DomGarguilo earlier today, and it occurred to me that it has happened a few times in the past that a user has come to us with an observation that the monitor is the only place where certain information exists, and as a solution, we've created public API methods to provide that information. So then I was wondering: instead of exposing a Thrift RPC for metrics that we use internally, maybe we should instead expose it as a public API, which the monitor can get directly from the AccumuloClient (its own ServerContext). Then, anybody wanting to write their own monitor and expose the information differently could just get the same information from the public API. This could be a step towards making the monitor an optional server that is merely another Accumulo client process. In the public API, we could also expose things that are not easily exposed in the metrics... like aggregated data per table or per tablet, which are not easily seen in the per-process metrics alone.
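
A very rough sketch of the shape I'm imagining (every name here is hypothetical; nothing like this exists in the public API today):

```java
import java.util.List;
import java.util.Map;

// Hypothetical public API surface: a curated, aggregated view of the system that the
// monitor -- or any user-written tool -- could consume, instead of a server-only Thrift RPC.
public interface MonitorInfoOperations {
  /** High-level cluster status (instance info, manager goal state, etc.). */
  Map<String,String> getClusterStatus();

  /** Per-table aggregates (e.g. tablet counts, online/offline state) not visible in per-process metrics. */
  Map<String,Long> getTableAggregates(String tableName);

  /** One status summary per server process, roughly what the monitor would otherwise collect itself. */
  List<Map<String,String>> getServerStatus();
}
```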

core/pom.xml Outdated
<plugin>
<groupId>com.sequsoft.maven.plugins</groupId>
<artifactId>flatbuffers-maven-plugin</artifactId>
<version>0.0.1</version>
@ctubbsii (Member):

Version management should go in the parent POM's pluginManagement section.

core/pom.xml Outdated
Comment on lines 424 to 425
<groupId>com.sequsoft.maven.plugins</groupId>
<artifactId>flatbuffers-maven-plugin</artifactId>
@ctubbsii (Member):

This is a pretty low-quality Maven plugin that hasn't been touched in 4 years. It doesn't publish generated Maven plugin documentation describing what can be configured, and it installs software (and compiles it) into the user's home directory, instead of following Maven best practices and using the target directory (or at least /tmp).

I am reluctant to buy in to using this plugin for these reasons, and am wondering if the goal of the plugin could be accomplished with flatc on the command line, like we do with Thrift, or if it could be achieved without using flatbuffers at all.

Also, why not just use Thrift for the internal type? It seems unnecessary to bring in another library that essentially serves the same purpose, especially since you're also adding generated Thrift types that do serialization in the same PR.

@dlmarion (Contributor, Author):

I'm using FlatBuffers rather than Thrift for the reason I mentioned in my initial comment: memory and time efficiency. IIRC (I'm on my phone, so I can't easily check), Thrift fully deserializes the binary sent over the wire. FlatBuffers does not; it deserializes what you need when you need it. Deserializing all of the metrics from the servers would create a lot of objects that may go unused if nobody is using the UI.

@ctubbsii (Member):

You're probably right about how Thrift deserializes. It's a safe assumption that if you've made an RPC call, you want to make use of the response, so that's probably what Thrift does.

We've also used JSON in many places, which has similar benefits. I wonder what the difference in performance would be if we just used Gson or Jackson to handle the serialization of the objects, rather than introduce another dependency and workflow into the build. It'd be nice to be somewhat consistent with how we do serialization, and it would be nice to avoid the complexity of another IDL build workflow and dependency, especially if the plugin that makes the workflow easier isn't well maintained.
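
A rough sketch of the JSON alternative I have in mind, using Gson (the DTO class and field names here are hypothetical, not anything in this PR):

```java
import com.google.gson.Gson;
import java.util.Map;

public class JsonMetricSketch {
  // Hypothetical plain holder for one metric; Gson maps its fields to/from JSON by reflection.
  static class MetricDto {
    String name;
    String type;
    Map<String,String> tags;
    double value;
  }

  public static void main(String[] args) {
    Gson gson = new Gson();
    MetricDto m = new MetricDto();
    m.name = "scan.rate";
    m.type = "gauge";
    m.tags = Map.of("table", "t1");
    m.value = 42.0;

    String json = gson.toJson(m);                             // what a server could hand back
    MetricDto decoded = gson.fromJson(json, MetricDto.class); // what the monitor would do with it
    System.out.println(json + " -> " + decoded.name);
  }
}
```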

The way the monitor currently avoids unnecessary work is by only pulling stuff from the Manager when needed to serve content to a user request. It rate limits the frequency of the pulling to prevent excessive calls to the Manager. I don't know if it would eliminate the benefits of FlatBuffers entirely, but I'm thinking we should probably do the same here, because there's no reason for the monitor to constantly poll the whole cluster when nobody is querying any pages. If that alone is enough to eliminate the unnecessary deserialization when nobody's asking for it, then it might make things dramatically simpler.
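
Concretely, that rate-limited, on-demand pattern could look something like this sketch (names are hypothetical; it just memoizes the fetch for a few seconds so nothing is pulled unless a page asks):

```java
import com.google.common.base.Supplier;
import com.google.common.base.Suppliers;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class OnDemandFetchSketch {
  private final Supplier<List<byte[]>> rateLimitedFetch;

  public OnDemandFetchSketch(Supplier<List<byte[]>> fetchFromServers) {
    // Nothing is fetched until the first request, and repeated requests within
    // 5 seconds reuse the previous result instead of hitting the cluster again.
    this.rateLimitedFetch = Suppliers.memoizeWithExpiration(fetchFromServers, 5, TimeUnit.SECONDS);
  }

  // Called by the web endpoints; the cluster is only polled when someone is actually looking.
  public List<byte[]> metricsForUi() {
    return rateLimitedFetch.get();
  }
}
```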

@dlmarion (Contributor, Author):

You had mentioned the flatbuffers-maven-plugin in an earlier comment and I didn't address it. I have no issue with removing it and using the same mechanism that we use for generating the Thrift code. I added the plugin because it was the easiest way to solve the problem and continue moving forward on this idea.

Comment on lines 121 to 124
}).get("/metrics", ctx -> ctx.json(metrics.getAll()))
.get("/metrics/groups", ctx -> ctx.json(metrics.getResourceGroups()))
.get("/metrics/manager", ctx -> ctx.json(metrics.getManager()))
.get("/metrics/gc", ctx -> ctx.json(metrics.getGarbageCollector()))
@ctubbsii (Member):

It's interesting to see Javalin being used, but I don't think it's compelling enough to switch anything away from our existing Jersey endpoints and Jetty setup, which does a lot more than just this. And I wouldn't want it listening on two ports or running separate services.

@dlmarion (Contributor, Author):

This is a draft, an incomplete implementation. We would need a working UI (which is yet to be done) to flesh out all of the endpoints. Javalin exposes all of the underlying Jetty configuration and setup; I have no doubt we can do everything that we are doing today with it.

Regarding running on two ports, this is a short-term approach for running the current Monitor alongside one that is in progress. I'm assuming that once the new one is complete, the old one would be removed.

@ctubbsii (Member):

Assuming we can do everything we can do today with Javalin, it still seems like a lateral move without a clear benefit, but with the downside of adding dependencies and learning a whole extra layer of conventions. But I'm not opposed to experimenting with it as a draft... it just seems not clearly beneficial and not worth it overall.

@dlmarion (Contributor, Author) commented:

> Regarding Javalin, the existing monitor is well-established, and accomplishes a lot of goals that this demo of Javalin doesn't (it handles serving JS/CSS resources from the classpath using the default servlet, for example, and automatically handles the serialization of POJO objects via the /rest endpoints in either JSON or XML using Jackson, with only JAX-RS annotations; it's also designed to be easily configured for TLS with client-auth certs, if desired, and handles a lot of other things). I think the switching costs of moving to Javalin are high, bringing in a lot of new dependencies... on libraries written in a completely different programming language (Kotlin) that will be difficult for those unfamiliar with it to trace into for troubleshooting, if that becomes needed.

I think I addressed this in my other comment, but since Javalin is a wrapper on top of Jetty and it exposes all of the underlying Jetty controls, I don't think any of the points you raised regarding serving up static resources, TLS configuration, etc. are going to be an issue. I almost started with replacing the EmbeddedWebServer with Javalin completely, but then figured it would be better to keep them separate, as it will be easier to remove the old monitor code once the new monitor is complete. The intention with this PR is to have two separate monitors (old and new) running off the same Monitor process until the new one is complete. All of this work could be done in a feature branch and not merged into main. @DomGarguilo needs something that serves up the data to create the new UI.

> It also isn't clear that the way it connects resources to endpoints is really all that much better or simpler than the JAX-RS specification that we get with Jersey annotations.

I think it's simpler. With the current Monitor code, you have to search for the class and method that has the matching path to understand which method is being called by the UI code. With the way the routes are built in Javalin, the mapping is centrally located in one place and can be made visible via the RouteOverview plugin to aid in debugging and development.
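
A trimmed-down sketch of what I mean (the endpoints mirror this draft; the RouteOverview call shown assumes Javalin 6's bundledPlugins config, and the same plugin lives under a differently named config field in older Javalin versions):

```java
import io.javalin.Javalin;

public class MonitorRoutesSketch {
  // Hypothetical holder for the cached metric data; stands in for the PR's internal store.
  interface MetricsStore {
    Object getAll();
    Object getResourceGroups();
    Object getManager();
  }

  public static Javalin create(MetricsStore metrics) {
    return Javalin.create(config -> {
      // Serves a page listing every registered route, for debugging/development.
      config.bundledPlugins.enableRouteOverview("/routes");
    })
        // Every UI-facing route is registered here, so the path-to-handler mapping lives in one place.
        .get("/metrics", ctx -> ctx.json(metrics.getAll()))
        .get("/metrics/groups", ctx -> ctx.json(metrics.getResourceGroups()))
        .get("/metrics/manager", ctx -> ctx.json(metrics.getManager()));
  }
}
```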

> If we decide to expose the metrics directly from the servers, for general consumption by users who implement their own monitors, in order to get rid of our own monitor, a Javalin-based server might be a reasonable reference implementation. But, I don't think it should be part of these changes today.

Exposing metrics directly from the servers for general consumption was not something that was discussed. This exposes metrics from the servers for the purpose of serving data to the Monitor. The MetricServiceHandler class in this PR currently exposes all of the Accumulo application metrics as it's unknown at this point which metrics will be needed by the Monitor. Once we know what that set is, we can further reduce the set of metrics being exposed by the new Thrift endpoint.

> Regarding flatbuffers, I'd really like to understand why you did that, since it seems to me that it's not providing anything we don't already get with Thrift directly, and it adds extra layers and dependencies. I'm not sure I understand the intent.

I think I addressed this in my other comment.

> For the core change of exposing the metrics via Thrift, I was only able to review at a glance, because it was hard to see what was part of that core change, and what was part of the other things in this PR... but the overall mechanism seemed okay. However, it still depends on the assumption that we should put full metrics on the monitor (rather than just a small curated set of health/status items on a custom API, and leaving it to the user to collect metrics for analysis using a dedicated metrics-specific system, if they need that, rather than try to get them via our lightweight monitor).

The intention is to just supply the metrics that are needed for the Monitor. Getting all of the metrics via a dedicated metrics system is the better and more complete solution.

> Also, I was talking with @DomGarguilo earlier today and it occurred to me that it has happened a few times in the past that a user has come to us with an observation that the monitor is the only place where certain information exists, and as a solution, we've created public API methods to provide that information. So then I was wondering, instead of exposing a Thrift RPC for metrics that we use internally, maybe we should instead expose it as a public API, which the monitor can get directly from the AccumuloClient (its own ServerContext). Then, anybody wanting to write their own monitor and expose the information differently could just get the same information from the public API. This could be a step towards making the monitor an optional server that is merely another Accumulo client process. In the public API, we could also expose things that are not easily exposed in the metrics... like aggregated data per table or per tablet, which are not easily seen in the per-process metrics alone.

I think the intention here is to build the new Monitor off of exposed metric information. If there is something new that is needed, then we should add the metric for it. This approach means that the Monitor is not getting special data that can't be retrieved or calculated by just getting the metrics via some MeterRegistry. I don't think we should expose this via the public API in the AccumuloClient, I think that's the wrong approach.

@dlmarion (Contributor, Author) commented:

@ctubbsii - I added a new endpoint in 0543a48. There was no need to add any new classes with JAX-RS annotations or any classes to hold the data just for serialization.

@ctubbsii (Member) commented:

> The intention with this PR is to have two separate monitors (old and new) running off the same Monitor process until the new one is complete. All of this work could be done in a feature branch and not merged into main. @DomGarguilo needs something that serves up the data to create the new UI.

I don't think it's necessary to do that. The pages can be served from the same server. There's no need to run two. Also, I didn't think it was in scope to create a whole new monitor from scratch... I thought the goal was to do a comprehensive review of what's there, get rid of the bits we want to drop, and add stuff we think should be there. That doesn't require running two servers. We can build off of the previous work, the previous design, etc. That's what I had been discussing with @DomGarguilo. If your goal is to just create a monitor to view published metrics, I can see that being a standalone server, separate from the existing monitor, but I don't think it would be a replacement.

> I think it's simpler. With the current Monitor code, you have to search for the class and method that has the matching path to understand which method is being called by the UI code. With the way the routes are built in Javalin, the mapping is centrally located in one place and can be made visible via the RouteOverview plugin to aid in debugging and development.

I think it's pretty easy to look at the @Path annotations to see the mappings. We used to have a centralized view, with servlets, and it got better when we switched to this. I think it might only look better now because the number of endpoints without a UI is still very small. Once that starts growing, and you start adding back all the features we want to keep from the old monitor, that's going to get quite cumbersome. Preferring one over the other may still be subjective. But even if there are some small style benefits, I still strongly prefer building off what we have, rather than making the lateral move and adding all the new dependencies.
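
For comparison, an endpoint in the current style looks roughly like this (illustrative only, not an actual class from our monitor; I'm using the jakarta.ws.rs namespace here, and the existing code may still be on javax.ws.rs):

```java
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import java.util.Map;

// The @Path annotation on the class is the mapping; Jersey wires it up and
// Jackson serializes whatever the method returns as JSON (or XML).
@Path("/rest/manager")
public class ManagerInfoResource {
  @GET
  @Produces(MediaType.APPLICATION_JSON)
  public Map<String,Object> getManagerInfo() {
    return Map.of(); // placeholder; the real resource would return a POJO to serialize
  }
}
```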

> I think the intention here is to build the new Monitor off of exposed metric information. If there is something new that is needed, then we should add the metric for it. This approach means that the Monitor is not getting special data that can't be retrieved or calculated by just getting the metrics via some MeterRegistry. I don't think we should expose this via the public API in the AccumuloClient, I think that's the wrong approach.

I think the fundamental difference in our perspective on this is that you seem to see the monitor as a way to get some metrics information about the system. And, I see it as a way to get an overview of the system's deployed state and current activity. Some of the deployed state and current activity can be viewed as metrics, but some of it simply can't be. There isn't a metric for instanceName or instanceId, for example, or list of tables, or list of resource groups, or list of zookeepers the system is configured with, or whether a table is currently "offline" or "disabled". These kinds of things are status things that don't make sense as metrics, but are instead retrieved from the public API. So, I absolutely think that the monitor does have a need of getting data that can't be retrieved or calculated by just getting the metrics via some MeterRegistry.

To the extent that the monitor is showing stuff that is specifically metrics, or can be metrics, I think the monitor should do very little of that, and instead rely on the user's use of a dedicated metrics collection/aggregation/alerting/visualization thing. If that's the kind of thing you're trying to prototype here, I think that's fine, but perhaps it shouldn't have anything to do with the monitor at all? Or, to the extent that it is, it's a very specific page on the monitor, separate from the non-metrics stuff.

So, I think we should discuss what the actual goal is here, because it currently feels like "metrics" is being used as a hammer to hit every nail, when a lot of the monitor isn't actually a nail... some of it is, and for that stuff, it's probably useful to prototype that like you're doing in this PR... but not all of it is.

@dlmarion (Contributor, Author) commented:

> I think the fundamental difference in our perspective on this is that you seem to see the monitor as a way to get some metrics information about the system.

Not really; I think that the Monitor can display state from the metrics that we collect.

> And, I see it as a way to get an overview of the system's deployed state and current activity. Some of the deployed state and current activity can be viewed as metrics, but some of it simply can't be. There isn't a metric for instanceName or instanceId, for example, or list of tables, or list of resource groups, or list of zookeepers the system is configured with, or whether a table is currently "offline" or "disabled". These kinds of things are status things that don't make sense as metrics, but are instead retrieved from the public API. So, I absolutely think that the monitor does have a need of getting data that can't be retrieved or calculated by just getting the metrics via some MeterRegistry.

The MetricsResponse object that is returned from the Metrics Thrift API contains a small amount of information that is not metrics (server type, address, resource group, and timestamp) in addition to a list of Metrics. I think we should try to convey as much of that information as possible as metrics, and then add the remaining information to the MetricsResponse as non-metrics. Certainly the Manager could emit a metric for the number of tablets per table, from which we could create a list of tables. The nice thing about capturing most things as metrics is that we don't need two different code paths to capture information about the system. If the monitor can run mostly off the metrics, then the monitor is optional, as a user could decide to create their own monitor using some enterprise-level monitoring system.
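
For example, a per-table tablet-count gauge from the Manager could be registered with Micrometer along these lines (the meter name and tag are made up for illustration):

```java
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import java.util.concurrent.atomic.AtomicLong;

public class TableTabletGaugeSketch {
  // One gauge per table; a consumer can derive the list of tables from the tag values.
  public static AtomicLong register(MeterRegistry registry, String tableName) {
    AtomicLong tabletCount = new AtomicLong();
    Gauge.builder("accumulo.manager.tablets", tabletCount, AtomicLong::doubleValue)
        .tag("table", tableName)
        .register(registry);
    return tabletCount; // the Manager would keep this updated as assignments change
  }
}
```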

> To the extent that the monitor is showing stuff that is specifically metrics, or can be metrics, I think the monitor should do very little of that, and instead rely on the user's use of a dedicated metrics collection/aggregation/alerting/visualization thing. If that's the kind of thing you're trying to prototype here, I think that's fine, but perhaps it shouldn't have anything to do with the monitor at all? Or, to the extent that it is, it's a very specific page on the monitor, separate from the non-metrics stuff. So, I think we should discuss what the actual goal is here, because it currently feels like "metrics" is being used as a hammer to hit every nail, when a lot of the monitor isn't actually a nail... some of it is, and for that stuff, it's probably useful to prototype that like you're doing in this PR... but not all of it is.

My thought for the new monitor is based upon the image I put in #4973 (comment). It's information about the system as a whole, most of which can be gleaned from the metrics. I realize that not everything is a metric, but if your idea is that the Monitor should show non-metrics, what is it showing? A description would be useful I think as there are certainly a lot of numbers being shown on the current Monitor pages, most of which are captured as metrics in 4.0.

@ctubbsii (Member) commented:

> A description would be useful I think as there are certainly a lot of numbers being shown on the current Monitor pages, most of which are captured as metrics in 4.0.

The monitor currently shows activity aggregated on a per-table and per-tablet basis, and those are not currently available as metrics. To the extent that these can be converted to metrics, I think that's fine to do. However, it still may be useful to expose the aggregated status via a public API, rather than a Thrift endpoint, for direct consumption by the monitor, as well as outside tools. I'm not 100% sold on the idea, though... I'm just pitching it as an alternative to doing it via a server-only Thrift call that users don't have access to.

Basically, my idea is that if it is not the whole metrics, but a curated set of useful data to show on the monitor, then it's also a useful set of curated data to show via an API, so users who want it don't resort to scraping the monitor.

Aside from that, my general assumption is that the monitor's overall structure and view of a deployed system are not primarily about serving metrics (in spite of the fact that we have a lot of metrics-like data in tabular and graphical format right now), and its design should not be centered around metrics. To the extent that it's showing detailed metrics, I would prefer to leave those off the monitor entirely, leaving it to the user to plug into the metrics system directly for those.

But I think the real value of the monitor is the ability to have a big-picture view of the system as a whole, and most of that doesn't come from the metrics. Sure, some can be powered by metrics, but that alone doesn't warrant a complete redesign. I don't think we should redesign the front end, except to determine which pages we want to keep, which we want to change, and which we want to lose. Powering the monitor with data collected from the metrics system doesn't require a complete overhaul, and I don't think we need a complete overhaul... changing the data source behind the scenes should be a largely invisible change to users.

@dlmarion (Contributor, Author) commented:

> ... it still may be useful to expose the aggregated status via a public API, rather than a Thrift endpoint, for direct consumption by the monitor, as well as outside tools. I'm not 100% sold on the idea, though... I'm just pitching it as an alternative to doing it via a server-only Thrift call that users don't have access to.

I'm not sure how that would work; the client would need to talk with every server to gather the information, or the information would have to be aggregated somewhere and then fetched by the client from that location. Removing the ManagerMonitorInfo object in the Manager, the current mechanism of aggregation, was part of the reason for some of these changes (the Thrift API in each server component).

Another item to bring into this conversation, as it overlaps some, is the accumulo admin serviceStatus command added in #4567. It does provide a limited overview of the deployment.

dlmarion and others added 26 commits November 20, 2024 13:47
In the new monitor code's SystemInformation class, there are
ConcurrentHashMaps that have non-concurrent data structures as values.
This was causing problems when multiple threads accessed them. Changed the
values to be concurrent. Also changed a few other maps to be concurrent,
just to remove use of the normal HashMap import in the class.
Renamed TExternalCompactionList to TExternalCompactionMap and added an actual
TExternalCompactionList. Implemented a solution for keeping the N oldest
compactions in the CompactionCoordinator, which are then returned to the
Monitor instead of returning all RunningCompactions to the Monitor and
letting it figure out the oldest.
@dlmarion (Contributor, Author) commented:

If #5193 is merged into 2.1 and up to main before this is merged in, then I can just add the new Thrift metrics endpoint to the process.thrift file added in #5193 and reduce the changes in this PR.
