Load testing against httpx #554

Open
4 tasks
sacOO7 opened this issue Feb 6, 2024 · 11 comments
Comments


sacOO7 commented Feb 6, 2024

Do load testing on a dedicated server using

  • httpx > 0.24.1 and http2 (ably-python v2.0.3)
  • httpx > 0.24.1 and http1.1 (ably-python v2.0.3)
  • httpx > 0.25.2 and http2 (ably-python v2.0.4)
  • httpx > 0.25.2 and http1.1 (ably-python v2.0.4)

Issue is synchronized with this Jira Task by Unito


sync-by-unito bot commented Feb 6, 2024

➤ Automation for Jira commented:

The link to the corresponding Jira issue is https://ably.atlassian.net/browse/SDK-4077


sacOO7 commented Feb 6, 2024

We can either write our own load-testing script:
https://medium.com/@yusufenes3494/how-to-build-your-own-load-test-a-step-by-step-guide-1a8367f7f6a2
Or clone the python-locust project and modify the https://github.com/locustio/locust/blob/master/locust/clients.py file to use an httpx client instead of python-requests.
This would let us use all the features of the locust client, like result tracking on the web UI, without writing explicit code for it ourselves.
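The swap described above hinges on the fact that locust's `clients.py` mostly wraps each request in timing/stats instrumentation. A minimal sketch of that idea, using a stub in place of httpx (the `StatsRecorder`/`TimedClient` names are hypothetical, not from locust):

```python
import time
from dataclasses import dataclass, field


@dataclass
class StatsRecorder:
    # Collects per-request timings and failures, mimicking the data
    # locust aggregates for its web UI.
    timings_ms: list = field(default_factory=list)
    failures: int = 0


class TimedClient:
    """Wraps any client exposing .get(url), timing each call the way
    locust's HttpSession instruments python-requests. Swapping
    python-requests for httpx would only change the wrapped client."""

    def __init__(self, client, stats):
        self._client = client
        self._stats = stats

    def get(self, url, **kwargs):
        start = time.perf_counter()
        try:
            response = self._client.get(url, **kwargs)
        except Exception:
            self._stats.failures += 1
            raise
        self._stats.timings_ms.append((time.perf_counter() - start) * 1000)
        return response
```

In the real modification, `self._client` would be an `httpx.Client` instance and the recorded timings would be fired as locust request events.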


sacOO7 commented Feb 6, 2024

We will need a dedicated ably RDP server to run this script against an ably limitless account : )

@sacOO7 sacOO7 self-assigned this Feb 6, 2024

sacOO7 commented Feb 6, 2024

I don't see any point in generalizing this load-tester client, since this is a library-specific issue. Most of the time, HTTP client libraries are stable and don't need to be tested themselves; load testing is specifically done to test servers under load, not the client.


sacOO7 commented Feb 6, 2024

Also, it doesn't make sense to load test clients across different SDKs, since every client is written in a different language (so we would need a separate script for each), and every language has different performance metrics. For now, we can just focus on ably-python and check whether it works as expected.


sacOO7 commented Feb 14, 2024


sacOO7 commented Feb 14, 2024

Test with both servers

  1. Local HTTP server
  2. Ably `https://rest.ably.io/time`

Test with requests and niquests -> singleton
Run in both sync/async mode
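The singleton requirement above can be sketched as a small thread-safe accessor (the `get_client` helper is hypothetical; in the real script the factory would be `requests.Session`, `niquests.Session`, or `httpx.Client`/`httpx.AsyncClient` for the async runs):

```python
import threading

_client = None
_lock = threading.Lock()


def get_client(factory):
    """Return a process-wide singleton HTTP client, constructing it
    once via `factory`. Double-checked locking keeps construction
    thread-safe without taking the lock on every call."""
    global _client
    if _client is None:
        with _lock:
            if _client is None:
                _client = factory()
    return _client
```

A singleton matters here because constructing a new client per request defeats connection pooling and keep-alive, which is exactly what the load test is meant to exercise.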


sacOO7 commented Feb 14, 2024

Created a separate load-test repo to run load tests using locust.

requests_vs_httpx_rps

Executed GET requests against https://rest.ably.io/time with 10 users, using singleton instances of requests and httpx.

  1. The left (first half of the) graph (python-requests with http1.1) is very stable, with low spikes (up to 140ms) and an average response time of 70ms
  2. The middle-to-right part of the graph (httpx with http2) has many spikes (up to 200ms) and an average response time of 76ms

PS. The average response time for a browser request to https://rest.ably.io/time is ~66ms.

Test conducted on an Intel i7-11800H, 16 GB RAM Windows machine.
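The shape of the run above (N concurrent users hammering one URL through a shared singleton client, then aggregating latencies) can be reproduced without locust; a rough sketch, with `get` standing in for the `.get` method of a singleton `requests.Session` or `httpx.Client`:

```python
import statistics
import threading
import time


def run_load(get, url, users=10, requests_per_user=5):
    """Drive `users` concurrent workers through one shared (singleton)
    `get` callable and return latency stats in milliseconds."""
    timings_ms = []
    lock = threading.Lock()

    def worker():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            get(url)
            elapsed = (time.perf_counter() - start) * 1000
            with lock:
                timings_ms.append(elapsed)

    threads = [threading.Thread(target=worker) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    return {
        "requests": len(timings_ms),
        "avg_ms": statistics.mean(timings_ms),
        "max_ms": max(timings_ms),
    }
```

The locust runs additionally report these numbers live on the web UI; this sketch only prints the final aggregates.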


sacOO7 commented Feb 15, 2024

For 100 users, httpx is literally crying here, with several spikes in the number of requests made.

httpx_crying

Comparing it with python-requests, we get a much more stable graph, with the lowest possible latency for the requests made.

python-requests-rocking


sacOO7 commented Feb 19, 2024

For python-requests => pool_connections =pool_maxsize = 100
and for httpx => httpx.Limits(max_keepalive_connections=100, max_connections=100, keepalive_expiry=120)
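For reference, the two pool configurations above look like this in code (a sketch assuming `requests` and `httpx` are installed; not taken verbatim from the test repo):

```python
import requests
import httpx

# python-requests: pool size is set on an HTTPAdapter mounted on the session.
session = requests.Session()
adapter = requests.adapters.HTTPAdapter(pool_connections=100, pool_maxsize=100)
session.mount("https://", adapter)
session.mount("http://", adapter)

# httpx: the equivalent limits are passed when the client is constructed.
client = httpx.Client(
    limits=httpx.Limits(
        max_keepalive_connections=100,
        max_connections=100,
        keepalive_expiry=120,
    )
)
```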

For 50 users, httpx again shows several spikes in the number of requests made.
Average response time is ~90ms.

httpx_50_connections

Comparing it with python-requests, we get a much more stable graph, with the lowest possible latency for the requests made.
Average response time is ~70ms.

python-requests-50-users


sacOO7 commented Feb 19, 2024

@ttypic Until we get a proper resolution from httpx, we can point to the documentation: https://www.python-httpx.org/advanced/#pool-limit-configuration. Devs can adjust these limits according to their load requirements. This doesn't guarantee full stability, but it will reduce the spikes in requests made.
