Test more Python evaluation results #99

Open
HenryRLee opened this issue May 3, 2024 · 0 comments
HenryRLee commented May 3, 2024

Currently, in Python, the unit tests for our 5-card, 6-card, and 7-card evaluators use the datasets in 5cards.json, 6cards.json, and 7cards.json. These datasets are small and fixed, so the test coverage is far from sufficient.

Since we cannot use Kev's evaluator the way the C++ code does, my idea is something like this:

  1. For 5-card evaluation, generate all possible hands and store them in 5cards.json. There will be 2,598,960 lines, which is acceptable. (I just found that we happen to already have this dataset under ../tree/develop/test/five.)
  2. For 6-card and 7-card evaluation, randomly sample thousands of hands and brute-force the expected result with the 5-card evaluator. This is similar to the unit test used for the Omaha evaluator. (A sketch of both steps follows this list.)
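
Below is a minimal sketch of both steps. It assumes the Python bindings expose an `evaluate_cards(*cards)` helper that accepts card strings like "Ac" and returns an integer rank where a lower value means a stronger hand; the import path, the JSON layout, and the sample count are assumptions, not the final test format.

```python
import itertools
import json
import random

from phevaluator import evaluate_cards  # assumed import path for the Python bindings

RANKS = "23456789TJQKA"
SUITS = "cdhs"
DECK = [r + s for r in RANKS for s in SUITS]  # 52 card strings, e.g. "Ac"


def generate_all_five_card_hands(path="5cards.json"):
    """Step 1: enumerate all C(52, 5) = 2,598,960 five-card hands and
    write each hand with its evaluated rank, one JSON object per line."""
    with open(path, "w") as f:
        for hand in itertools.combinations(DECK, 5):
            rank = evaluate_cards(*hand)
            f.write(json.dumps({"cards": list(hand), "rank": rank}) + "\n")


def brute_force_rank(cards):
    """Best (lowest) rank over every 5-card subset of the given cards."""
    return min(evaluate_cards(*combo)
               for combo in itertools.combinations(cards, 5))


def sample_hands(num_cards, num_samples=10000, seed=42):
    """Step 2: randomly sample 6- or 7-card hands and pair each with the
    expected rank brute-forced from the 5-card evaluator."""
    rng = random.Random(seed)
    samples = []
    for _ in range(num_samples):
        cards = rng.sample(DECK, num_cards)
        samples.append({"cards": cards, "expected": brute_force_rank(cards)})
    return samples


if __name__ == "__main__":
    # The actual unit test would assert that the 6/7-card evaluator agrees
    # with the brute-forced 5-card result for every sampled hand.
    for s in sample_hands(7, num_samples=5):
        assert evaluate_cards(*s["cards"]) == s["expected"]
```

Because the sampled hands are generated from a fixed seed, the sampled dataset stays reproducible across test runs while still covering far more of the hand space than the current fixed files.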