Add automatic analysis from JMC #85
base: main
Conversation
@gunnarmorling I have finally found some time to open a PR for JMC analysis. This PR loads rules from the published JMC artefacts (org.openjdk.jmc.flightrecorder.rules and org.openjdk.jmc.flightrecorder.rules.jdk). There is still more that can be done around custom rules, configuring severity and expanding on assertions, but it initially demonstrates the scope and type of assertions that could be made against automated analysis.
@johnaohara This looks great! I just added a couple of suggestions.
Thanks a lot, @johnaohara! A few minor comments inline. Overall, I think it'd be great to see a really compelling example of the value of using the rule mechanism over simply asserting plain JFR events.
    // Inspect score of rule
    assertThat(analysisResults)
            .contains(FullGcRule.class)
            .scoresLessThan(80);
What does that score mean?
Taking a step back, I'm still trying to wrap my head around the capabilities of the JMC rules. Are there other real-world examples that come to mind which would show the power of that feature?
Each rule can generate a score between 0 and 100. It roughly translates to the risk rating of a problem that has been detected: 0 being least likely to cause an issue, 100 being most likely to be a problem. The score implementation is specific to the rule that is triggered.
E.g. for the ExceptionRule it is a score based on the number of exceptions per second: https://github.com/openjdk/jmc/blob/master/core/org.openjdk.jmc.flightrecorder.rules.jdk/src/main/java/org/openjdk/jmc/flightrecorder/rules/jdk/exceptions/ExceptionRule.java#L109
For the SocketWriteRule, it is a score calculated from the maximum duration of all socket writes: https://github.com/openjdk/jmc/blob/master/core/org.openjdk.jmc.flightrecorder.rules.jdk/src/main/java/org/openjdk/jmc/flightrecorder/rules/jdk/io/SocketWriteRule.java#L135
This assertion allows users who are familiar with the rules to assert that risk scores did not cross a certain threshold.
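For example, a test that tolerates occasional exceptions but should fail once the exception rate becomes risky could look roughly like this (the threshold of 50 is purely illustrative):

    // Assert that the exception-rate rule did not report a high risk score;
    // scores range from 0 (lowest risk) to 100 (highest risk)
    assertThat(analysisResults)
            .contains(ExceptionRule.class)
            .scoresLessThan(50);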
Ok, I see. Aren't we back to relying on characteristics of the environment then, though? E.g. exceptions per second might differ depending on the general performance and load of the machine running the test. So I'm wondering how stable/portable such a test would be.
Maybe they were poor examples; I think some of the rules will be dependent on the environment and others not. At present all the rules available from JMC are loaded. Maybe I could curate a list of rules that do not rely on the environment and only load those?
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.moditect.jfrunit;
Is this class user-facing? Or should it be moved to the internal package?
I am thinking about this now. At the moment it is not user-facing, but it is tightly coupled to JfrEvents. I think it depends on the workflow you would expect users to follow to generate the analysis.
i.e.
jfrEvents.awaitEvents();
List<IResult> analysisResults = JfrAnalysis.analyse(jfrEvents);
or
jfrEvents.awaitEvents();
List<IResult> analysisResults = jfrEvents.automaticAnalysis();
or even, as @phejl mentioned previously:
List<IResult> analysisResults = jfrEvents.automaticAnalysis();
where jfrEvents.awaitEvents() is called inside jfrEvents.automaticAnalysis().
In the last 2 examples, JfrAnalysis would be an internal class. In the first it would be user-facing.
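A rough sketch of the second shape, with JfrAnalysis kept internal and JfrEvents delegating to it (method and helper names here are illustrative, not the final API):

    // Sketch only: analysis exposed directly on JfrEvents, delegating to the internal helper
    public List<IResult> automaticAnalysis() {
        awaitEvents();                     // make sure all recorded events are available first
        return JfrAnalysis.analyse(this);  // JfrAnalysis lives in the internal package
    }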
What do you think is best in terms of user experience? To have the analysis exposed via the JfrEvents API, or as a separate, user-facing class that consumes JfrEvents?
In terms of discoverability, I'd prefer the latter two, as you'll find the method more easily than if it were static and declared on a class which you need to know about in the first place.
@johnaohara If I understand the code correctly it dumps the events to the same file which is later rewritten at the end of the test (or sonner by another analysis if we will allow that). Wouldn't it be better having it isolated? So the file used for analysis would be suffixed with something like |
@phejl I had assumed that all historic events would be preserved between writes to disk, i.e. any new events would be appended to the existing JFR file, although I need to validate this assumption. I understand the concern if the output file was overwritten and events that had previously been written to disk were lost. I can also see the concern if a user were to run multiple automatic analyses during a test and wanted to be able to inspect events after the test has completed: if the JFR recording is overwritten with each automatic analysis, a user would not be able to reload a JFR recording exactly as it was at the point the analysis was performed. I did investigate converting the
I can dump the recording used for analysis to a separate file for clarity, but will double check that dumping the recording does not reset the recording and that the JFR events generated by a single test are not split over multiple recordings.
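Roughly what I have in mind for the separate dump (a sketch; the helper name and the access to the underlying jdk.jfr.Recording are assumptions on my side):

    // Dump the current recording to a dedicated file per analysis run, so that later
    // dumps cannot overwrite the snapshot a given analysis was based on
    Path analysisDump = Files.createTempFile("jfrunit-analysis-", ".jfr");
    recording.dump(analysisDump);  // jdk.jfr.Recording held by JfrEvents (assumed)
    List<IResult> analysisResults = JmcAutomaticAnalysis.analyse(analysisDump);  // hypothetical helper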
@gunnarmorling an example of using the analysis over individual events could be a generic catch-all: "I want to assert that there are no WARNINGS from the automated analysis", e.g.
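A sketch of what that could look like (the severity-based assertion method is illustrative; Severity comes from the JMC rules API):

    // Fail the test if any JMC rule reported a result at WARNING severity
    List<IResult> analysisResults = jfrEvents.automaticAnalysis();

    assertThat(analysisResults)
            .haveNoResultsWithSeverity(Severity.WARNING);  // illustrative method name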
In this example, the test would fail as the (N.B. the
@phejl I checked the behaviour of JFR when events are dumped to file multiple times, and JFR writes all events out to disk with every dump, duplicating events. I will separate the analysis dump files so events are not duplicated in the final JFR files.
Ok, that could be useful, yes. On the exception case in particular, couldn't the same be achieved, though, like so:
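(a sketch; the filter(...) access on JfrEvents and the exact event type name are illustrative)

    // Count raw exception events and assert against a plain threshold,
    // without going through the JMC rules
    long thrownExceptions = jfrEvents
            .filter(evt -> evt.getEventType().getName().equals("jdk.JavaExceptionThrow"))
            .count();

    assertThat(thrownExceptions).isLessThan(100);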
? Note I don't mean to push back; I think integration with the rules API can be useful. I just would love to see some really compelling example or use case.
There's a failure in the new test on CI btw:
Yes, that is ironic; I added that test as an example, and it "passed on my machine"™
LOL 🤣.
Hey @johnaohara, any further insight on the test failure here?
…rules that have fired and tests based on severity
- Renamed JfrAnalysis -> JmcAutomaticAnalysis
- Moved JmcAutomaticAnalysis to internal package
- Separated tests for Automated Analysis
- Add to JmcAutomaticAnalysisAssert API to assert based on IReport severity
- Generate new JFR dump for each analysis
- Removed JfrAnalysisResults, replaced with List<IResult>
Force-pushed from cbc58f5 to 03a9c07
Allow assertions against JMC automatic analysis rules, based on whether a rule has fired and on its severity.