This repository has been archived by the owner on Jan 23, 2023. It is now read-only.
lep: pull based event loop #3

Open: saghul wants to merge 1 commit into `libuv:master` from `saghul:pull_based_event_loop`.

| Title  | Pull based event loop |
|--------|-----------------------|
| Author | @saghul               |
| Status | DRAFT                 |
| Date   | 2014-11-27 07:54:35   |

## Overview

This LEP assumes “Request all the things” is implemented.

At this point the libuv event loop basically does the following:

1. Run timers, idle and prepare handles
2. Calculate the poll timeout
3. Block for i/o
4. Run all i/o callbacks
5. Goto 1

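The iteration above can be sketched as follows (stand-in names, not the real `uv_run` internals; the log simply records the order in which the stages run):

```c
#include <string.h>

/* Hypothetical stand-ins for the current loop stages; the log records
 * which stage ran, in order. */
static char log_buf[32];

static void stage(const char* name) { strcat(log_buf, name); }

static void run_timers_idle_prepare(void) { stage("T"); }          /* step 1 */
static unsigned calc_poll_timeout(void)   { stage("C"); return 0; } /* step 2 */
static void block_for_io(unsigned timeout){ (void)timeout; stage("B"); } /* step 3 */
static void run_io_callbacks(void)        { stage("R"); }          /* step 4 */

/* One pass through the loop; step 5 is uv_run looping back to step 1. */
void one_iteration(void) {
  run_timers_idle_prepare();
  block_for_io(calc_poll_timeout());
  run_io_callbacks();
}
```

The point of the sketch is that callbacks fire in three different stages (1, 3 and 4), which is what the proposal below collapses into a single dispatch step.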
Callbacks can be fired at different moments of the loop run process (check, prepare and
idle handle callbacks), making it hard to reason about, and also hard to decompose
and run in stages. Here is the proposed new loop iteration process:

1. Calculate the poll timeout
2. Block for i/o (no callbacks are called)
3. Run all queued callbacks

In order to achieve this every callback in libuv (except allocation callbacks) needs
to be attached to a request. Those handles which do not use requests for their
operation (`uv_poll_t`, `uv_fs_event_t`, …) will use internal requests to represent this.
Those requests will be queued in *a single* queue which the loop will iterate until
finished, right after polling for i/o. Any request queued while iterating the queue
will be processed in the next iteration.

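As a hedged sketch of what such a single request queue could look like (hypothetical names, not actual libuv internals), an intrusive FIFO list is enough:

```c
#include <stddef.h>

/* Hypothetical internal request: an intrusive queue link plus the
 * callback to run at dispatch time. */
typedef struct my_req {
  struct my_req* next;
  void (*cb)(struct my_req* req);
} my_req_t;

/* A single FIFO queue; `tail` points at the last `next` slot so that
 * pushing stays O(1). */
typedef struct {
  my_req_t* head;
  my_req_t** tail;
} my_req_queue;

void queue_init(my_req_queue* q) { q->head = NULL; q->tail = &q->head; }

void queue_push(my_req_queue* q, my_req_t* req) {
  req->next = NULL;
  *q->tail = req;
  q->tail = &req->next;
}

my_req_t* queue_pop(my_req_queue* q) {
  my_req_t* req = q->head;
  if (req != NULL) {
    q->head = req->next;
    if (q->head == NULL)
      q->tail = &q->head;
  }
  return req;
}
```

Completed requests get pushed here instead of having their callbacks invoked inline; the dispatch step then drains the queue in FIFO order.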
While this process is internal to libuv, it is exposed with 3 API calls:

~~~~
uint64_t uv_backend_timeout(const uv_loop_t* loop)
~~~~

Returns the amount of time to block for i/o. This function becomes very simple: if
the request queue is non-empty, return 0; otherwise, if there are any timeout requests,
return the time until the nearest timeout; else return infinity.

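That rule can be sketched in a few lines (a hypothetical helper, with the loop state reduced to plain parameters for illustration):

```c
#include <stdint.h>

/* Stand-in for "block indefinitely". */
#define NO_TIMEOUT UINT64_MAX

/* Hypothetical reduction of uv_backend_timeout: the inputs are whether
 * the request queue is non-empty, whether any timers exist, and the
 * delay until the nearest one. */
uint64_t backend_timeout(int queue_nonempty, int have_timers, uint64_t nearest) {
  if (queue_nonempty)
    return 0;            /* callbacks are pending: don't block at all */
  if (have_timers)
    return nearest;      /* wake up in time for the nearest timer */
  return NO_TIMEOUT;     /* nothing to do: block until i/o arrives */
}
```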
~~~~
void uv_backend_process(uv_loop_t* loop, uint64_t timeout)
~~~~

This function blocks for i/o for the given amount of time and executes all i/o operations,
putting completed requests in the request queue. No callbacks are called, except for the
allocation callbacks if necessary.

~~~~
void uv_backend_dispatch(uv_loop_t* loop)
~~~~

Runs the callbacks for all queued requests. If any request is added to the queue while
this function is running callbacks, it will be deferred until the next iteration,
to avoid starvation.

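The deferral rule can be sketched by detaching the queue before draining it, so anything queued by a callback waits for the next round (hypothetical names, and simplified to a LIFO list for brevity):

```c
#include <stddef.h>

typedef struct req {
  struct req* next;
  void (*cb)(struct req* r);
} req_t;

/* Hypothetical single request queue (LIFO here for brevity). */
static req_t* pending = NULL;

void queue_req(req_t* r) {
  r->next = pending;
  pending = r;
}

/* Run callbacks for everything queued *before* this call. Requests queued
 * by a callback land on the freshly emptied `pending` list and wait for
 * the next dispatch, which is how starvation is avoided. */
void dispatch(void) {
  req_t* batch = pending;
  pending = NULL;                /* detach: new arrivals go to the next round */
  while (batch != NULL) {
    req_t* r = batch;
    batch = r->next;
    r->cb(r);
  }
}

/* Sample callback that re-queues itself once, to show the deferral. */
int runs = 0;
void requeue_once(req_t* r) {
  if (++runs < 2)
    queue_req(r);
}
```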
Here is a pseudocode example of a simplified version of `uv_run`, which would run forever,
until explicitly stopped:

~~~~
void my_uv_run(uv_loop_t* loop) {
  while (!loop->must_stop) {
    uv_backend_process(loop, uv_backend_timeout(loop));
    uv_backend_dispatch(loop);
  }
}
~~~~

This proposal should also make loop embedding easier, since one thread could block for
i/o and another one could then run all the callbacks using `uv_backend_dispatch`.

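A hedged sketch of that split, with the libuv calls replaced by stand-ins and the handoff between the two threads reduced to a flag under a condition variable:

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical embedding split: a "poller" thread does the blocking work
 * (stand-in for uv_backend_process) and wakes the main thread, which runs
 * the queued callbacks (stand-in for uv_backend_dispatch). */
typedef struct {
  pthread_mutex_t mu;
  pthread_cond_t  cv;
  int ready;       /* a completed "request" is waiting */
  int dispatched;  /* its callback has been run */
} embed_t;

static void* poller(void* arg) {
  embed_t* e = arg;
  pthread_mutex_lock(&e->mu);
  e->ready = 1;                  /* i/o "completed" */
  pthread_cond_signal(&e->cv);
  pthread_mutex_unlock(&e->mu);
  return NULL;
}

void run_embedded(embed_t* e) {
  pthread_t t;
  pthread_mutex_init(&e->mu, NULL);
  pthread_cond_init(&e->cv, NULL);
  e->ready = 0;
  e->dispatched = 0;
  pthread_create(&t, NULL, poller, e);
  pthread_mutex_lock(&e->mu);
  while (!e->ready)
    pthread_cond_wait(&e->cv, &e->mu);  /* wait for the polling thread */
  e->dispatched = 1;                    /* run the queued callbacks here */
  pthread_mutex_unlock(&e->mu);
  pthread_join(t, NULL);
}
```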
Since all callbacks are now run after polling for i/o, `uv_prepare_t` and `uv_check_t`
handles become obsolete and are removed.
---
I haven't really thought this through yet but I think it would be desirable to have an API where you can pull and dispatch one request at a time.
I'm not really sure what it should look like but here is a strawman for you to poke holes in:
The idea being that the user can decide when to dispatch and in what order. If, for example, you are aiming for low I/O latency, you give preferential treatment to read and write requests. Whereas if you need high-precision timers, you run those at the first possible opportunity.
(To prevent tons of calls, we'd probably want to have a batch API where you tell libuv to give you N requests. But hey, strawman, right? It doesn't have to be perfect right away.)
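The strawman snippet itself is not preserved in this archived page. Purely as a hypothetical illustration of the kind of one-at-a-time API described above (none of these names exist in libuv), it might look like:

```c
#include <stddef.h>

/* Entirely hypothetical toy API (NOT real libuv): the embedder pulls
 * completed requests off the loop one at a time and decides when, and
 * in what order, to dispatch them. */
typedef struct toy_req {
  struct toy_req* next;
  int kind;                       /* e.g. read, write, timer */
  void (*cb)(struct toy_req* r);
} toy_req_t;

typedef struct { toy_req_t* head; } toy_loop_t;

/* Pull the next completed request; NULL when the queue is empty. */
toy_req_t* toy_backend_pull(toy_loop_t* loop) {
  toy_req_t* r = loop->head;
  if (r != NULL)
    loop->head = r->next;
  return r;
}

/* The embedder runs the callback whenever it sees fit, e.g. giving
 * preferential treatment to some request kinds. */
void toy_backend_run(toy_req_t* req) { req->cb(req); }

int toy_ran = 0;
void toy_mark(toy_req_t* req) { (void)req; toy_ran++; }
```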
---
This is roughly what I circled around in libuv/libuv#6 (comment), but that's not exactly what they want.
Anyway, I don't disagree that it could be useful. Now, do we have an actual use case which we need to cover? The two approaches aren't orthogonal, so this could be refined in a follow-up LEP if we see the need. Or we could amend this one as something to consider in the future and then just edit the LEP and that's that.
We could have:
---
If you get to `uv_backend_dispatch`, splitting that up to get what we want won't be a problem, IMO.
---
Just to be clear, I don't ever want the C callbacks to be called. I don't want to have to provide C callbacks (though I am fine with having to pass in NULL). When the event happens and I ask for it, I want to be given, as a return value, some sort of data structure with enough data to do my own dispatching in the scripting language.
In libuv 1.x the callbacks often have extra data that's not in the req itself. I'm not sure how much of this will change in this new world where everything is a req. But for my use case, I need all the data returned in some sort of union type. I can then check the type and typecast to the proper type to get at all the data members. Using either the `void* data` member or a `container_of` trick on the req, I can get my data and know where to dispatch to in the scripting language.
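The `container_of` trick mentioned here can be sketched as follows (all type names are made up for illustration; only the macro itself is the standard idiom):

```c
#include <stddef.h>

/* The classic container_of idiom: recover the wrapper struct that embeds
 * a request, so a binding can find its own bookkeeping from the bare req
 * pointer the event loop hands back. */
#define container_of(ptr, type, member) \
  ((type*)((char*)(ptr) - offsetof(type, member)))

/* Stand-in for a libuv request type. */
typedef struct { int reserved; } fake_req_t;

/* Hypothetical binding wrapper: scripting-language bookkeeping plus the
 * embedded request that gets handed to the event loop. */
typedef struct {
  int script_ref;    /* e.g. a reference into the scripting language */
  fake_req_t req;    /* embedded request */
} my_wrapper_t;

/* Given only the req pointer, recover the binding's reference. */
int ref_from_req(fake_req_t* req) {
  my_wrapper_t* wrapper = container_of(req, my_wrapper_t, req);
  return wrapper->script_ref;
}
```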
---
I see what you want to do, but that's not how libuv really works today. Functions take callbacks because that's the only way we have to tell the user something happened. We cannot just take NULL. While we might be able to cover your scenario (I'm not really sure at this point), it won't be in this LEP.
What worries me the most is that we'd be exposing internal requests and all of their members. If we ever go for opaque types and getter functions, you can see how much trouble this would be.
Feel free to write a LEP about this, but we'll need a solid plan and reasoning for it. Also, if this is about performance, I'll ask for numbers.
FWIW, Python's cffi has a way of using callbacks which I plan to use for pyuv's ffi version: https://cffi.readthedocs.org/en/release-0.8/#callbacks (I haven't measured the performance, though).