
Get second-level aggregates for multiple stocks at once through the REST API #626

Open
parthnagarkar opened this issue Mar 15, 2024 · 0 comments
Labels: enhancement (New feature or request)

parthnagarkar commented Mar 15, 2024

Is your feature request related to a problem? Please describe.
I use the universal snapshot method every minute to download 1-minute aggregate bars for all stocks, run some analysis on them, and flag any stock that I need second-level data for. These stocks (~100 by the end of the day) are stored in a list.
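
For concreteness, this is roughly what the minute-level scan looks like (a minimal sketch using RESTClient from this library; the analysis predicate is a placeholder, and parameter names may differ by client version):

```python
from polygon import RESTClient

client = RESTClient("YOUR_API_KEY")

def scan_minute_bars():
    # Universal snapshot across the whole stocks market; each result
    # carries the most recent minute aggregate among other fields.
    watchlist = []
    for snap in client.list_universal_snapshots(type="stocks"):
        if needs_second_level_data(snap):  # placeholder for my analysis
            watchlist.append(snap.ticker)
    return watchlist  # ~100 tickers by the end of the day
```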

Describe the solution you'd like
I would like to be able to run get_aggs for a whole list of stocks with just one call, every 5 seconds.
A universal snapshot at the second level would be the best-case scenario.
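
Something along the lines of this hypothetical call would cover my use case (nothing like it exists in the client today; the name and parameters are purely illustrative):

```python
# Hypothetical batched endpoint -- illustrative only, not a real API.
bars_by_ticker = client.get_aggs_batch(
    tickers=watchlist,   # the ~100 flagged symbols
    multiplier=5,
    timespan="second",
    from_=window_start,
    to=window_end,
)
```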

Describe alternatives you've considered
I tried using the websocket API (second-level aggregates), but I could not get it to work such that, once the list is updated, the existing connection is closed and a new connection is opened with the updated list. I think (please correct me if I am wrong) this is because the close method is asynchronous. I tried using asyncio, but could not make it work without rewriting a significant part of my existing code. This post (https://polygon.io/blog/pattern-for-non-blocking-websocket-and-rest-calls-in-python) helped me understand the problem a bit better, but I am still not entirely clear on it. My entire current program runs on threading and the REST API.
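
From the docs it looks like calling subscribe/unsubscribe on a live WebSocketClient might sidestep the close/reopen problem entirely; a rough, untested sketch of what I mean (please correct me if I have the API wrong):

```python
import threading
from polygon import WebSocketClient

ws = WebSocketClient(api_key="YOUR_API_KEY", subscriptions=["A.AAPL"])

def handle_msg(msgs):
    for m in msgs:
        process_second_agg(m)  # placeholder hook into my threaded code

# run() blocks, so it lives on its own thread next to the REST workers.
threading.Thread(target=ws.run, args=(handle_msg,), daemon=True).start()

def update_watchlist(old, new):
    # subscribe()/unsubscribe() reschedule subscriptions on the live
    # connection, so no close()/reconnect should be needed.
    ws.unsubscribe(*[f"A.{t}" for t in old if t not in new])
    ws.subscribe(*[f"A.{t}" for t in new if t not in old])
```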

I also tried running get_aggs (with a 5-second interval) every 5 seconds for each stock in the list, but that turned out to be far too slow when done sequentially.
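
Fanning the calls out over a thread pool (sketch below) helps, but it is still one HTTP round trip per symbol every 5 seconds:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_one(ticker):
    # 5-second bars for one ticker; window bounds are illustrative.
    return ticker, client.get_aggs(ticker, 5, "second", window_start, window_end)

with ThreadPoolExecutor(max_workers=16) as pool:
    bars_by_ticker = dict(pool.map(fetch_one, watchlist))
```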

In the worst case, I plan to use the websocket API to collect second-level data for all tradeable stocks, write the aggregated data to an on-disk SQLite database every 5 seconds, and have another process read back the necessary rows for further analysis. This will certainly produce a lot of unnecessary data, since I only need to analyze ~100 stocks by the end of the day. If there are any obvious inefficiencies in this plan, please let me know.
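
A rough sketch of that pipeline (the aggregate fields follow this client's EquityAgg model; table layout and names are illustrative):

```python
import sqlite3
import threading
import time

buffer, lock = [], threading.Lock()

def handle_msg(msgs):
    # Websocket handler: buffer incoming second aggregates.
    with lock:
        buffer.extend(
            (m.symbol, m.start_timestamp, m.open, m.high, m.low, m.close, m.volume)
            for m in msgs
        )

def flush_loop(path="aggs.db"):
    con = sqlite3.connect(path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS second_aggs
           (ticker TEXT, ts INTEGER, open REAL, high REAL,
            low REAL, close REAL, volume REAL)"""
    )
    while True:
        time.sleep(5)
        with lock:
            batch, buffer[:] = list(buffer), []
        con.executemany("INSERT INTO second_aggs VALUES (?,?,?,?,?,?,?)", batch)
        con.commit()  # the analysis process reads from this database
```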

Additional context
If there is another or better way to solve this problem, please let me know -- I would be very grateful.

parthnagarkar added the enhancement label on Mar 15, 2024