Interaction with iterator helpers #75
Why is specific propagation for these methods needed? Would the callbacks not just run in whatever context the methods are called in, since it's synchronous?
I kinda go back and forth on this. We decided that user generators should capture their init-time context for all iteration calls – I argued that we shouldn't need to, because the init context is likely to be maintained by the calling code anyways. But since we did, this seems like a normal extension.
Iterators aren't async on their own though; they're just an abstraction over something else that is async. Even an async iterator is just an abstraction over a bunch of promises, so it's the promise itself that needs to maintain the context, not the iterator. In my experience, context management should model where the asynchrony actually is and leave it to the user to work around abstractions which are not themselves inherently async, as the correct way to express those can be ambiguous. Rather, we should be considering our learnings with promises: often taking one path allows achieving the other through
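The point that a sync iterator adds no asynchrony of its own can be illustrated with a minimal sync-only stand-in for `AsyncContext.Variable` (an assumption for illustration only; the real proposal's semantics are richer than this):

```js
// Minimal sync-only stand-in for AsyncContext.Variable (assumption:
// it models only synchronous run()/get(), nothing else).
class Variable {
  #value;
  constructor({ defaultValue } = {}) { this.#value = defaultValue; }
  get() { return this.#value; }
  run(value, fn) {
    const prev = this.#value;
    this.#value = value;
    try { return fn(); } finally { this.#value = prev; }
  }
}

const v = new Variable({ defaultValue: 0 });

// A sync generator body simply observes whichever context is active
// when next() is called -- the iterator itself adds no asynchrony.
function* gen() { yield v.get(); yield v.get(); }

const it = gen();
const first = v.run(1, () => it.next().value);  // body resumes under 1
const second = v.run(2, () => it.next().value); // body resumes under 2
```

With the calling-context behavior, `first` is `1` and `second` is `2`: each resumption runs in its caller's context.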
@Qard Do you have an example where using the calling time context was the correct case?
IMO iterator helpers should propagate the context to the functions which are called inside of them. Not to anywhere else, not to anyplace that is analogous to what iterators have. |
This seems to be clarifying a question brought up but not clearly resolved in #18 (comment). Given the following:

```js
const var1 = new AsyncContext.Variable({defaultValue: 0});

function* gen() {
  console.log(`gen1: ${var1.get()}`);
  yield;
  console.log(`gen2: ${var1.get()}`);
}

const iter = gen();
var1.run(1, () => iter.next());
const mapped = var1.run(2, () => iter.map(() => console.log(`map: ${var1.get()}`)));
var1.run(3, () => mapped.next());
```

My understanding of #18 is that the decision was made to spec this as logging

@andreubotella indicated in that issue (and repeated again in the OP here) that this would log

I understand @Qard to be advocating for

That said, @Qard brought up an interesting point: it's a lot easier to support both behaviors if the built-in behavior is to treat everything as "sync" and not capture the snapshot (@jridgewell posted an example in the other issue of a wrapper to add the snapshotting). In fact, it's not at all clear to me how one could possibly go about accessing the outer context from the caller of

I think either
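The "wrapper to add the snapshotting" idea can be sketched against a minimal sync-only stand-in for `AsyncContext.Variable` (all names here are hypothetical illustrations, not the proposal's API):

```js
// Minimal sync-only stand-in for AsyncContext.Variable (assumption).
class Variable {
  #value;
  constructor({ defaultValue } = {}) { this.#value = defaultValue; }
  get() { return this.#value; }
  run(value, fn) {
    const prev = this.#value;
    this.#value = value;
    try { return fn(); } finally { this.#value = prev; }
  }
}

const v = new Variable({ defaultValue: 0 });

// Hypothetical helper: capture the context at wrap time and restore it
// around every next() call, regardless of the caller's context.
function withSnapshot(iter, variable) {
  const captured = variable.get();
  return { next: (x) => variable.run(captured, () => iter.next(x)) };
}

function* gen() { yield v.get(); }

const plain = gen();
const wrapped = v.run(3, () => withSnapshot(gen(), v));

const a = v.run(5, () => plain.next().value);   // calling context: 5
const b = v.run(5, () => wrapped.next().value); // captured context: 3
```

This shows why "no capture by default" is the more flexible primitive: the snapshotting behavior can be layered on top, while the reverse (recovering the caller's context from inside a capturing generator) has no obvious escape hatch.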
Your understanding is correct. From our call today, I believe we're in agreement that

I agree with @Qard. A generator's yields are not themselves async; it depends on who's calling them. If the generator is sync, I fully expect that the calling code will handle them within their own sync context (so there's no need to capture, we'll execute within the same context anyways). And if the generator is async, then I expect the calling code will handle that asynchronicity by using

```js
const v = new AsyncContext.Variable();

function* foo() {
  console.log(`gen: ${v.get()}`);
  yield 1;
  console.log(`gen: ${v.get()}`);
  yield 2;
  console.log(`gen: ${v.get()}`);
}

async function* bar() {
  // this will yield a promise each time `foo` yields/returns a value.
  yield* foo();
}

v.run(1, () => {
  // this will log 3 `gen: 1` messages
  Array.from(foo());

  // So will this.
  for (const i of foo()) {}

  // So will this.
  const it = foo();
  const v1 = it.next();
  const v2 = it.next();
  const v3 = it.next();
});

v.run(2, async () => {
  // this will log 3 `gen: 2` messages
  Array.fromAsync(bar());

  // So will this.
  for await (const i of bar()) {}

  // So will this (parallel calls will queue).
  const it1 = bar();
  const p1 = it1.next();
  const p2 = it1.next();
  const p3 = it1.next();

  // So will this (serial waiting won't queue).
  const it2 = bar();
  const r1 = await it2.next();
  const r2 = await it2.next();
  const r3 = await it2.next();
});
```

There are cases where letting the iterator escape the context will lead to a different result, as you demonstrate. But why treat this any differently than a promise that escapes its context?

```js
const it = v.run(3, () => foo());
// I think this should log `gen: 4`
v.run(4, () => it.next());

const p = v.run(3, () => Promise.resolve());
v.run(4, () => {
  // logs `p: 4`
  p.then(() => console.log(`p: ${v.get()}`));
});
```
The particular example I was thinking of was co, which basically did async/await before async/await existed. I'm not referring to that library specifically though, but rather the pattern it presents. It used a yield point to represent a task to be completed at some point, and the return back into the generator came from that task being resolved. In the case of

Now from an APM perspective, what if we want to instrument such a form of generator and represent each task as a span? Okay, we can wrap the code between each external

Let's look at a code example:

```js
function* workQueue(tasks) {
  const results = []
  for (const task of tasks) {
    // Hand each task to the driver; the resumed value is its result.
    results.push(yield task)
  }
  // Sum the results wrapped in a span to express its timing
  return tracer.getCurrentSpan().createChildSpan('aggregate results', () => {
    return results.reduce((m, v) => m + v, 0)
  })
}

function processQueue(tasks) {
  const gen = workQueue(tasks)
  let result = gen.next()
  while (!result.done) {
    // Do some work wrapped in a span to express its timing
    tracer.getCurrentSpan().createChildSpan('task', (span) => {
      result = gen.next(result.value * 2)
    })
  }
  return result.value
}

processQueue([ 1, 2, 3 ])
```

In the example above what we want is for each step to be a child of the previous, and for the final return to connect back to the tasks which led to the aggregation task it represents. This is impossible when forcing generators to retain the context between yield and return. I expect this will be a common problem in how we think about context management: the most seemingly natural way to think about it would be to model it as a direct reflection of execution at the language level, but actually what is often needed is for it to reflect the execution model at the hardware level. We want to flow in exactly the arrangement the execution does on-cpu.

Situations like this though are where context flow is subjective and different users will have different opinions, which is also why I had previously suggested an instance-scoped bind to allow different store owners to apply their differing opinions, and why, in the diagnostics_channel module in Node.js, I've implemented a system of binding individual stores to channel events with channel.bindStore(store, transform), channel.unbindStore(store), and channel.runStores(event, scope). Incidentally, I've also been meaning to prepare a proposal for diagnostics_channel at the language level to integrate with

The point I want people to keep in mind though is just what was already expressed: it's easy to reduce a graph by drawing new edges within it, but it's very difficult to expand a graph after branches have already been orphaned. I'm not convinced we actually can have a universally acceptable graph shape, which is why I feel it's important we provide the tools to effectively mutate our own representations of the graph to fit our uses, and for the default path to avoid locking users out from the expressions of the graph which they actually need.
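A runnable sketch of the intended pattern, with a stub tracer standing in for the real APM API (the `tracer` object here is hypothetical; it just records span names and runs the callback):

```js
// Stub tracer (assumption): records span names and returns the callback's result.
const spans = [];
const tracer = {
  getCurrentSpan() {
    return {
      createChildSpan(name, fn) {
        spans.push(name);
        return fn({ name });
      },
    };
  },
};

function* workQueue(tasks) {
  const results = [];
  for (const task of tasks) {
    results.push(yield task); // resumed value is the task's result
  }
  return tracer.getCurrentSpan().createChildSpan('aggregate results', () =>
    results.reduce((m, x) => m + x, 0));
}

function processQueue(tasks) {
  const gen = workQueue(tasks);
  let result = gen.next();
  while (!result.done) {
    tracer.getCurrentSpan().createChildSpan('task', () => {
      result = gen.next(result.value * 2); // feed the "completed" task back in
    });
  }
  return result.value;
}

const total = processQueue([1, 2, 3]); // 1*2 + 2*2 + 3*2 = 12
```

Driving it with `[1, 2, 3]` yields three `task` spans followed by one `aggregate results` span, with the aggregation happening inside the generator's final resumption.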
Did you mean it will log "gen: 2"? I don't see where a "1" would come from here.
I think the difference is that in the case of a generator, the code lives in the generator's definition, and so there's maybe some surprise that the context is changing out from under it. I could ask the same thing about the continuations - there's an understanding that in

Honestly, I don't have a strong opinion either way. The transpilation/polyfill is actually easier if we don't have any special handling for generators. My feeling is that capturing the init snapshot would be less surprising, but I can also understand the flexibility argument winning out.
Thanks for that example - that's helpful.
I'm also looking at using
I think that's a really good point to keep in mind, and I agree that if we go with the init snapshot, it would be very hard to use this for the other use case. Who is arguing strongly for snapshotting? I recall seeing that @jridgewell had pointed out a weird example with
I do not understand this code example. It seems we're missing
Yes I meant "gen: 2", and I fixed my example.
I think I see generators and async functions as being different. The context before and after an

In my mind, generators aren't just a single function unit, but multiple units combined with prettier syntax. Eg, the transpilation of a generator is a reentrant switch statement to skip to a code location, with each resumption being a separate function call entirely. Each of these calls is its own call-context inheriting from the parent's.
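The "reentrant switch statement" view can be made concrete with a hand-written desugaring (a rough sketch, not actual compiler output):

```js
// Hand-desugared equivalent of:
//   function* gen() { yield 'a'; yield 'b'; }
// Each next() is an ordinary function call that jumps to the saved state,
// so each call naturally runs in its caller's context.
function gen() {
  let state = 0;
  return {
    next() {
      switch (state) {
        case 0: state = 1; return { value: 'a', done: false };
        case 1: state = 2; return { value: 'b', done: false };
        default: return { value: undefined, done: true };
      }
    },
    [Symbol.iterator]() { return this; },
  };
}

const it = gen();
```

Under this reading, nothing about the generator itself spans an async boundary: the suspension is just stored state between plain function calls.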
That, and node's
@littledan and I think @bakkot from an old TC39 plenary.
That was #18 (comment). Async generators prevent parallel execution, queueing the 2nd+ calls behind the promise of the current call.

```js
async function* foo(n) {
  for (let i = 0; i < n; i++) {
    await sleep(10);
    yield i;
  }
}

const it = foo(2);

// Notice that I don't await, there are 2 parallel calls to the iterator.
// The second cannot enter the function's body until the first hits the yield.
// So, the async-gen internally queued the call, and the p2 call will (sync) resume
// as soon as we resolve p1 (there's no tick between p1 resolving and p2 resuming,
// it's not actually a promise.then() chain internally).
v.run(5, () => {
  const p1 = it.next();
  const p2 = it.next();
  return Promise.all([p1, p2]);
});
```

Because of the internal promise queuing, p2's call context will be p1's call context. But p1's context is just the parent context, which I think is the expected result anyways. The snapshot behavior addresses this because p1's context is forced to be the init-time context, which means p2's context is always the init-time one. This can be easily resolved with a modification of the AsyncGeneratorEnqueue and AsyncGeneratorDrainQueue AOs.
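The queueing behavior described here is observable in plain JavaScript today, without any context machinery (`sleep` below is a hypothetical helper):

```js
// Demonstrates that a second next() cannot re-enter the body until the
// first request settles: async generators queue requests internally.
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));
const log = [];

async function* foo(n) {
  for (let i = 0; i < n; i++) {
    log.push(`enter ${i}`);
    await sleep(10);
    yield i;
  }
}

const it = foo(2);
const p1 = it.next(); // body runs synchronously up to the first await
const p2 = it.next(); // queued: "enter 1" has NOT been logged yet
log.push('both requested');

const done = Promise.all([p1, p2]).then(() => log.join(', '));
```

`done` resolves to `"enter 0, both requested, enter 1"`: the second request only re-enters the body after the first one yields.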
This thread is getting a little off-topic - I moved the discussion of basic generator mechanics back to #18 and suggested we re-open that issue. I suggest that the resolution here be dependent on how #18 resolves.

I think our current direction is that generators will save and restore the AsyncContext Snapshot across

For iterator helpers, my opinion is that we should do an
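What "save and restore the snapshot" means for a generator can be sketched as a hypothetical desugaring, again using a minimal sync-only stand-in for `AsyncContext.Variable` (all names are illustrative assumptions):

```js
// Minimal sync-only stand-in for AsyncContext.Variable (assumption).
class Variable {
  #value;
  constructor({ defaultValue } = {}) { this.#value = defaultValue; }
  get() { return this.#value; }
  run(value, fn) {
    const prev = this.#value;
    this.#value = value;
    try { return fn(); } finally { this.#value = prev; }
  }
}

const v = new Variable({ defaultValue: 0 });

// Hypothetical desugaring: snapshot the context when the generator is
// created, and restore it around each resumption.
function snapshotting(genFn, variable) {
  return function (...args) {
    const captured = variable.get();
    const inner = genFn(...args);
    return {
      next: (x) => variable.run(captured, () => inner.next(x)),
      [Symbol.iterator]() { return this; },
    };
  };
}

const gen = snapshotting(function* () { yield v.get(); yield v.get(); }, v);

const it = v.run(3, () => gen());          // snapshot taken: 3
const a = v.run(5, () => it.next().value); // 3, not the caller's 5
const b = it.next().value;                 // 3, not the default 0
```

Every resumption sees the creation-time context, no matter who drives the iterator.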
I'm not sure I understand the distinction about the callback vs the inner and outer

As I understand it, nearly every iterator is already going to restore some or other context when it runs:

```js
const arr = [null];
Object.defineProperty(arr, 0, {get() { return v.get(); }});
const iter1 = arr[Symbol.iterator]();

const iter2 = Iterator.from({
  [Symbol.iterator]() { return this; },
  next() { return {done: false, value: v.get()}; },
});
```

IIUC you're saying that the following should pass with

```js
function check(iter) {
  const i = v.run(3, () => iter.map(x => 2 * x));
  v.run(5, () => {
    assertEquals(10, i.next().value);
  });
}
```

But that

@jridgewell There was discussion (elsewhere) of a
In a previous meeting we agreed that, although built-in iterators should not propagate the creation context by default, iterator helpers should probably propagate it.

This needs modifications to the (sync and async) iterator helpers spec text.
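What "iterator helpers propagate the creation context" could look like, sketched with a sync-only stand-in (hypothetical names throughout; the real change would be to the helper spec text, not userland code):

```js
// Minimal sync-only stand-in for AsyncContext.Variable (assumption).
class Variable {
  #value;
  constructor({ defaultValue } = {}) { this.#value = defaultValue; }
  get() { return this.#value; }
  run(value, fn) {
    const prev = this.#value;
    this.#value = value;
    try { return fn(); } finally { this.#value = prev; }
  }
}

const v = new Variable({ defaultValue: 0 });

// Hypothetical map helper: capture the context at helper-creation time and
// restore it around each invocation of the mapper callback.
function contextMap(iter, fn, variable) {
  const captured = variable.get();
  return {
    next() {
      const r = iter.next();
      if (r.done) return r;
      return { done: false, value: variable.run(captured, () => fn(r.value)) };
    },
  };
}

function* nums() { yield 1; yield 2; }

// Helper created under context 3...
const mapped = v.run(3, () => contextMap(nums(), (x) => x * 10 + v.get(), v));

// ...driven under context 5: the mapper still observes 3.
const out = v.run(5, () => [mapped.next().value, mapped.next().value]);
```

Note that only the callback runs under the captured context; the underlying iterator's own `next()` still runs in the caller's context, matching the agreed default for built-in iterators.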