The goal is to ensure that performance does not regress between `rororo` releases.

For now I'm thinking about using pyperf as the benchmark runner and benchmarking:
In the `todobackend` example:

- `setup_openapi` performance
- `validate_request` performance, by creating 100+ todos via the `create_todo` operation (without the Redis layer)
- `validate_response` performance, by emulating the `list_todos` operation with 100+ todos (without the Redis layer)
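As a sketch of what such a pyperf run could look like, here is a minimal benchmark script for the `validate_request` case. The payload builder and validator stub are hypothetical stand-ins (a real benchmark would route requests through the operations registered via `setup_openapi`):

```python
def make_todos(n=100):
    """Build n todo payloads, mimicking create_todo request bodies."""
    return [{"title": f"Todo {i}", "order": i, "completed": False} for i in range(n)]

def validate_request_stub(payload):
    """Stand-in for rororo's request validation; a real benchmark would
    call the actual create_todo operation handler instead."""
    if "title" not in payload:
        raise ValueError("title is required")
    return payload

def bench_validate_request():
    for payload in make_todos(100):
        validate_request_stub(payload)

def run_benchmarks():
    import pyperf  # third-party: pip install pyperf
    runner = pyperf.Runner()
    runner.bench_func("validate_request_100_todos", bench_validate_request)

# In a benchmark script, call run_benchmarks() and save JSON results with:
#   python bench_todobackend.py -o results.json
```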
In the `hobotnica` example:

- `validate_security` performance for basic auth
- `validate_security` performance for HTTP auth
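For the basic-auth case, the check being benchmarked might be sketched like this; `check_basic_auth` and the fixture credentials are hypothetical stand-ins for rororo's `validate_security`:

```python
import base64

def check_basic_auth(header):
    """Stand-in for validate_security on a basic-auth security scheme:
    decode the Authorization header and check the credentials."""
    scheme, _, encoded = header.partition(" ")
    if scheme != "Basic":
        return False
    username, _, password = base64.b64decode(encoded).decode().partition(":")
    return username == "user" and password == "secret"  # hypothetical fixture creds

def bench_basic_auth():
    header = "Basic " + base64.b64encode(b"user:secret").decode()
    for _ in range(100):
        check_basic_auth(header)

def run_benchmarks():
    import pyperf  # third-party: pip install pyperf
    runner = pyperf.Runner()
    runner.bench_func("validate_security_basic_auth", bench_basic_auth)
```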
To complete this task, we need to store benchmark results somewhere (the `gh-pages` branch, or as a release artifact) and be able to compare benchmark results between releases.
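pyperf writes results as JSON and ships a `compare_to` subcommand, so the release-to-release comparison could be driven by a small helper like this (the file names are illustrative; the stored baseline could come from the `gh-pages` branch or a release artifact):

```python
import subprocess

def compare_command(baseline, current):
    """Build the `python -m pyperf compare_to` invocation for two result files."""
    return ["python", "-m", "pyperf", "compare_to", baseline, current, "--table"]

def compare_releases(baseline, current):
    """Run the comparison and return pyperf's textual report."""
    return subprocess.run(
        compare_command(baseline, current),
        capture_output=True, text=True, check=True,
    ).stdout

# e.g. compare_releases("results/v2.0.0.json", "results/v2.1.0.json")
```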