Performance auditor fixture #1147
Conversation
roduran-dev left a comment
Very nice change. With this we can introduce some bothersome cases like megaflaws and compare the performance.
2 questions:
- Does this change the default behaviour of the pytest output?
- Does this take effect only when performance_audit is used?
The idea is to no longer need django-silk or similar to conduct manual tests to check/improve performance: just write a test case replicating the workflow (or calling the function you want to test) using this new fixture and review the results.
If we were to permanently provision django-silk on the local dev environment (i.e. it would be available by default), would using it really be that much more manual effort than writing a test and decorating it with the performance auditor decorator?
I feel like the only benefit is the generated report, but a before/after screenshot of the django-silk UI can work too.
As for smoke tests, I'm not sure how helpful they'd be, as they would require manually checking the report, and said report would be based on the test database rather than stage/uat/prod; locust might be better suited for such tests.
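For context, a locust-based test is roughly the following; this is a minimal sketch, and the endpoint path and host handling are illustrative only.

```python
# locustfile.py -- minimal sketch of a locust load test; the endpoint path
# is illustrative and authentication handling is omitted.
from locust import HttpUser, between, task


class FlawApiUser(HttpUser):
    wait_time = between(1, 5)  # seconds a simulated user waits between tasks

    @task
    def list_flaws(self):
        self.client.get("/osidb/api/v1/flaws")
```

It would be pointed at stage/uat via `locust -f locustfile.py --host https://<stage-host>` rather than at the test database.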
I think there are some good ideas here, but it's also a lot of custom code to maintain. Have you considered contributing e.g. the report generation to django-silk?
That said, I'm not opposed, nor will I block the PR based on my opinion; if the team sees value in incorporating the fixture then I'm OK with merging it.
Some simple checks might be faster to do by hand, but in any case with django-silk you need to manually run the actions in the browser, which adds a dependency on OSIM/bindings, so it will not work when developing new features. Using the API directly is possible, but I think that is even worse in comparison. Also, in some test cases, like testing affect bulk update, you would need to manually create the flaw and multiple affects and perform the actions, probably multiple times.
One of the benefits of the report being markdown is that you can feed it into your favorite AI companion and ask questions about it. Since we have access to the raw data, we can also tune the report as we like, for example for slow queries. The report is just an example; it is what I used during the performance analysis, but we could store the data somewhere and keep historical measures of performance.
Using this together with smoke tests was just an idea about when to run some automatic analysis. Instead of reading the entire report, I think the summary table that is printed in the CI is good enough for a quick glance; we could even have it added as a comment to the pull request. If that summary shows some issues, then we can look at the report. Since we have the data, we could also add some trigger gates that post a more specific comment, instead of only the summary, when something notable happens.
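A trigger gate like that could be as simple as comparing the collected metrics against a few thresholds; the sketch below is hypothetical, and the metric names and limits are illustrative, not part of this PR.

```python
# Hypothetical trigger gate: decide whether the CI summary warrants a more
# detailed PR comment. Metric names and thresholds are illustrative only.
THRESHOLDS = {
    "query_count": 100,  # flag tests issuing more than 100 SQL queries
    "elapsed": 2.0,      # flag tests slower than two seconds
}


def should_post_detailed_comment(results: list[dict]) -> bool:
    """Return True if any measured test exceeds a configured threshold."""
    return any(
        result.get(metric, 0) > limit
        for result in results
        for metric, limit in THRESHOLDS.items()
    )
```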
I'm up for it, but I think it needs a bit more polishing; with some feedback to improve the output and get more meaningful data, I can contribute it.
My idea with this pull request was only to showcase the fixture/report and, if we see value in it, discuss improvements/changes so it is more helpful when doing performance analysis, and then merge it.
Why do you think it is worse in comparison? Hitting the API endpoint directly is usually the best way to reproduce the issue, as it's the point of entry for any client; clients don't have knowledge of the internals (nor should they). Even with the fixture, and in your example tests, the entry point for profiling is hitting a given endpoint.
I agree that setting up the test data is the most time-consuming part and that being able to consistently set up said data can help reproduce issues consistently. I think we can achieve this using a few Django fixtures though, which also has the benefit of providing seed data to play with on a fresh database without needing to sync things. Once you have that, the amount of instructions is reduced to "run this curl command", which is not much different from "run this make/pytest/tox command to execute the performance auditor test". I won't comment on the rest because I personally don't use it.

WDYT about having this code in its own library? Then my main concerns (code maintenance and cognitive load) would be accounted for, whoever wants to use it can use it (locally and maybe in stage), and whether we want to include it as part of CI/smoke tests/etc. can be determined independently. Not to mention code reuse in other projects, greater OSS exposure, faster development cycles, and still the option of contributing it to django-silk.
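For reference, seeding the test database from Django fixtures can be done once per session with pytest-django's documented django_db_setup override; the fixture file names below are hypothetical.

```python
# conftest.py -- sketch of loading seed data from Django fixtures, assuming
# pytest-django. The fixture file names are hypothetical.
import pytest
from django.core.management import call_command


@pytest.fixture(scope="session")
def django_db_setup(django_db_setup, django_db_blocker):
    """Load seed data on top of the freshly created test database."""
    with django_db_blocker.unblock():
        call_command("loaddata", "flaws.json", "affects.json")
```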
I meant using the raw API endpoint with cURL or similar: since you need to handle authentication and all the query params, it is a bit more cumbersome, but I see your point. One advantage of the fixture is that you could test function calls directly, as I did in one of the example tests.
How do you test for performance in Django applications? Not only the raw number of parallel requests, but actually finding the pain points and so on. I'm interested in learning better/different ways of doing it; I went this route because it is the one I thought of, but maybe there are better options and the fixture could be improved.
Sure, I'm not against it; if we are going to use it, I'm up for maintaining the library and/or contributing to django-silk. I wrote this fixture because I think it could be useful, but if there are better ways of doing this, let's go for it and discard this one. I just want to improve myself and OSIDB.
Important
Please read the description before looking at the code; I've also added some comments to the code itself.
I've created a test fixture that gathers some analytics to help analyze the performance of the application.
The idea is to no longer need django-silk or similar to conduct manual tests to check/improve performance: just write a test case replicating the workflow (or calling the function you want to test) using this new fixture and review the results.

The fixture will print a summary of the results in the CLI after the tests run.
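The actual implementation is in the PR diff rather than this excerpt; as a rough idea, a query- and timing-auditing fixture of this kind could look like the minimal sketch below (assuming pytest-django; performance_audit is the fixture name used in the discussion, while AuditResult and the printed summary are hypothetical).

```python
# conftest.py -- minimal sketch, not necessarily the implementation in this PR.
import time
from dataclasses import dataclass, field

import pytest
from django.db import connection
from django.test.utils import CaptureQueriesContext


@dataclass
class AuditResult:
    """Raw measurements collected while the test body runs."""
    elapsed: float = 0.0
    queries: list = field(default_factory=list)

    @property
    def query_count(self) -> int:
        return len(self.queries)


@pytest.fixture
def performance_audit():
    """Capture SQL queries and wall-clock time around the test body."""
    result = AuditResult()
    start = time.perf_counter()
    with CaptureQueriesContext(connection) as ctx:
        yield result
    result.elapsed = time.perf_counter() - start
    # Each captured query is a dict with "sql" and "time" keys.
    result.queries = list(ctx.captured_queries)
    print(
        f"\n[performance_audit] {result.query_count} queries "
        f"in {result.elapsed:.3f}s"
    )


# Example usage in a test module (the endpoint path is illustrative):
@pytest.mark.django_db
def test_flaw_list_performance(performance_audit, client):
    client.get("/osidb/api/v1/flaws")
```

A real implementation could accumulate the per-test results in a session-scoped store so a pytest_terminal_summary hook can print the summary table and write the markdown report.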
It will also generate a full markdown report in performance_report_YYYY-MM-DD_HH-MM.md so you can review old runs while iterating. This report will also be added to the GitHub summary for the test action; you can check it here: https://github.com/RedHatProductSecurity/osidb/actions/runs/19742723923?pr=1147

The tests I wrote in this pull request are just an example of usage and are not intended to be merged. We could keep the fixture alone for manual usage or, ideally, implement some kind of smoke tests that run on release (detectable by branch name, for example) and have these tests generate the report, but we should discuss it.
I tried to keep it simple and objective, without assuming or suggesting fixes/improvements that could be wrong; it is just plain data to help understand the performance of the application.
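For reference, one common way to surface such a markdown report in the GitHub Actions job summary (not necessarily what the workflow in this PR does) is to append it to the file named by the GITHUB_STEP_SUMMARY environment variable:

```python
# Sketch: publish a markdown report to the GitHub Actions job summary by
# appending it to the file named by GITHUB_STEP_SUMMARY, an environment
# variable provided by GitHub Actions.
import os
from pathlib import Path


def publish_to_job_summary(report_path: Path) -> None:
    summary_file = os.environ.get("GITHUB_STEP_SUMMARY")
    if not summary_file:
        return  # not running inside GitHub Actions
    with open(summary_file, "a", encoding="utf-8") as summary:
        summary.write(report_path.read_text(encoding="utf-8"))
        summary.write("\n")
```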