RFC: 2.0 API Changes - Swift Concurrency #3411
Comments
Hey 👋 I checked the list of interceptors that we use in our project. Most of them would become ErrorParsingInterceptor
This interceptor has the simple task of translating HTTP error codes into enum cases. I'm not too fond of it (it doesn't add much value), but it's an example of something that doesn't look like it would fit the new model, as we insert this in-between the Another point of uncertainty is the Interceptor chain
We're using the additional error interceptor to terminate the signpost range when an error is thrown. For our use-case, we would like to have either something similar to These are my answers to the questions in your RFC:
Hello! This is a quick comment as it is all I have time for at the moment, but expect follow-ups and more discussion. I have concerns about taking away cache read/write interceptors. There's a fundamental difference in providing these versus a custom normalized cache. The former operates on GraphQL operations, while the latter operates on cache records. Also, the former operates on requests and responses and has all that context, including the request context itself, while the latter does not. I also have concerns about taking away the APIs that exist in the current NetworkTransport layer, but the reasoning here is more bespoke. We are taking advantage of being able to wrap this entire layer in order to allow for a new URL/endpoint if needed upon request, which currently isn't possible with only the request chain. Doing it at this layer allows us to not have to recreate a client for endpoint changes. For NormalizedCache, we would need a more robust way to be notified of any/all cache activity, as you know already. Though we also need a way to implement cache expiration, which we currently have implemented as a cache read interceptor. We also have GraphQL request tracing (logging) implemented as an interceptor.
Thank you both for your feedback @TizianoCoroneo & @jimisaacs. That's really helpful. I'm going to need to do some more thinking and will update this RFC soon.
In 1.0, you can modify the endpoint URL by mutating the I don't think there is any reason why a |
It technically would, though some more context, what I have is an injection pattern, where you initialize a client with a config object that basically configures how internal NetworkTransport will be created on every request. This includes passing through an interceptorProvider which is considered a passthrough at this layer. So to remove the current NetworkTransport facility, and to still allow the same functionality, it will require understanding how we configure interceptor providers in this new layer. Ultimately to move this configuration to an interceptor would mean that particular interceptor is internal only, which would mean that interceptors would need to be appended to base interceptors. Which would require a redesign of all this being passthrough. |
I think I've referenced Apollo Web's Type Policies before, but this would be useful. Basically something that could provide that relationship we were looking for in a query for a list of things with a query for a single thing, where cache normalization would be linked between the two. EDIT: In my opinion, something like this could lean into dictionaries or just less typed things, like Apollo web, for the sake of not having to worry about codegen and what not. Then a followup could make it more robust and typesafe later. |
Excited by the proposed changes here! Does an approximate timeframe exist for the v2 release?
Due to the excellent feedback I've gotten here, we've decided to go in a different direction with the 2.0. There will still be significant changes to support As for timeframe, we are committed to having at least a beta of 2.0 released alongside the September stable release of Swift 6. I'm hoping we can get the beta out earlier than that and be approaching a stable release version of Apollo iOS 2.0 with Swift 6, but at the very least we will have a working beta for you to begin your migration with. |
The RFC has been updated to reflect the current state of work on 2.0. This is still subject to change and additional feedback is appreciated!
I haven't seen any talk about giving SelectionSet sendable conformance. Is there a timeline on this, or reasoning why it's not already sendable? |
@JOyo246 I haven't added that part to the RFC yet, because I haven't gotten to working on that part of the library yet. Making these safely I'll update the RFC once I've confirmed. Once I've determined a solution, if we are able to make the models |
I don't know if this was covered in other discussion but I just wanted to note here that I believe it's overly restrictive to make the NormalizedCache API (or really, any protocol) AnyActor. A type doesn't need to be an Actor to be Sendable (ie, usable across concurrency domains/threads), which seemed to be the motivation here. An actor would probably be a good choice but that can probably be an implementation detail? |
One downside I can see with this approach is it doesn't allow an interceptor to hook into the completion side of the request. Previously, the completion closure passed to the interceptor made this possible.

For example, here's a simplified version of our logging interceptor. Our actual implementation uses signposts for logging, but the structure is the same. This might only work in our project since we don't use caching, so it's guaranteed that the completion will only be called once.

```swift
class LoggingInterceptor: ApolloInterceptor {
    func interceptAsync<Operation: GraphQLOperation>(
        chain: RequestChain,
        request: HTTPRequest<Operation>,
        response: HTTPResponse<Operation>?,
        completion: @escaping (Result<GraphQLResult<Operation.Data>, Error>) -> Void
    ) {
        print("Starting \(request.operation.operationName)")
        chain.proceedAsync(
            request: request,
            response: response
        ) { result in
            switch result {
            case .success:
                print("Success \(request.operation.operationName)")
            case .failure:
                print("Failure \(request.operation.operationName)")
            }
            completion(result)
        }
    }
}
```

I wonder if there's a way to implement it to still allow the interceptor to hook into the request going down the list of interceptors and back up the list on completion. I'm not sure how possible that'd be with caching allowing the interceptors to "return" multiple times. One thought would be to split the interceptors for caching from the ones doing the network request. That'd allow each list of interceptors to know that only one value will be returned, and then a higher-level type would be responsible for sending both responses to the caller.

I've also been using the middleware API from another library. They've got a number of examples in their repo, but here's a simple logging example with the same behaviour as the interceptor above:

```swift
func intercept(
    _ request: HTTPRequest,
    body: HTTPBody?,
    baseURL: URL,
    operationID: String,
    next: @Sendable (HTTPRequest, HTTPBody?, URL) async throws -> (HTTPResponse, HTTPBody?)
) async throws -> (HTTPResponse, HTTPBody?) {
    print("Starting \(operationID)")
    do {
        let response = try await next(request, body, baseURL)
        print("Success \(operationID)")
        return response
    } catch {
        print("Failure \(operationID)")
        throw error
    }
}
```
+1 to @CraigSiemens |
Thanks for the input all. @swizzlr I'll think about this more before moving forward. My intention here was to ensure that cache read/writes are done serially and prevent race conditions. Currently, we are using a barrier queue to handle that in the I'd love to hear opinions from others on this one. Thanks for providing this use case here. The way I would think to handle that would be to just use two interceptors. A While something similar to the example @CraigSiemens provided is doable, it does make for a less clean API in my opinion. Though I certainly want to ensure that we aren't preventing any necessary use cases here. Are there other use cases outside of logging in which this would be useful, and that breaking the behavior into two separate interceptors would not be a workable or desirable solution?
I wanted to bring it up because it does make some things more complicated when split up across multiple interceptors. For example,
+1, the
Thanks for the feedback. I'm going to go back to the drawing board and make sure we get this right! |
Hey @AnthonyMDev! It's been about a month, so I figured I'd check in: how is progress going with 2.0?
Hey there! We're making progress. The suggestions in this RFC have made us re-think the Interceptor APIs, and the Apollo team has been busy preparing for and attending GraphQL Conference this month. I'm hopping back into this today and hope to have an idea of the direction on those APIs and the RFC updated by end of week. I'm also considering trying to release a public preview of some of this work that isn't breaking in an Alpha release if I'm able to do it without breaking existing APIs. Things like versions of the |
Hey @AnthonyMDev! With these upcoming changes, will Apollo's interceptors properly receive TaskLocal values from the request callsite? With full structured concurrency, this happens automatically, but with the class-based interceptors, I believe that extra work needs to be done. It would be really great if the interceptors are run with the same TaskLocals as the callsite of the request. That way we can use TaskLocals for logging, isolating data and other dependencies per GraphQL call. Libraries such as swift-dependencies rely on this functionality quite heavily.
Looking for more feedback here!
I understand your point about the cache mechanism needing context on the request/operation, especially until we implement
It also can emit a cache response and then continue on with the request chain (with a fetchPolicy). My feeling is that the awkwardness here is because the cache read/write really isn't part of the "networking" process and should instead be handled outside of the request chain. This feels like better separation of concerns. I think it's less confusing when your Do you think this would be too limiting? The only direct behavior this would remove is the ability for a Maybe the issues with the intermediary and early exit results not traveling through the request chain isn't really an issue to anyone?
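For illustration, the separation being discussed could look something like this sketch. The `CacheInterceptor` name and both methods are hypothetical, not a committed API:

```swift
// Hypothetical sketch: cache handling pulled out of the networking
// request chain and given its own protocol. All names are illustrative.
protocol CacheInterceptor: Actor {
    /// Attempt a cache read before the networking chain runs.
    func cachedResponse<Operation: GraphQLOperation>(
        for request: HTTPRequest<Operation>
    ) async throws -> GraphQLResult<Operation.Data>?

    /// Write a network response to the cache after the chain completes.
    func writeResponse<Operation: GraphQLOperation>(
        _ result: GraphQLResult<Operation.Data>,
        for request: HTTPRequest<Operation>
    ) async throws
}
```

The client would then consult the cache interceptor before and after running the networking `RequestChain`, deciding based on the fetch policy whether to emit the cache result, skip the network request, or do both.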
I'm not clear what work needs to be done to support this, but I'd like to make it possible. Once I get an initial alpha release out, maybe you can play around with it and let me know what is/isn't working.
I've got a thought about the proposed form of
I agree with the overall idea that we might get multiple results, and thus returning an I'm basically wondering if there's place for something like:
That would be available for everything, but I understand that this might be hard to achieve with For me the
I like the idea of separating the cache interceptor from the request interceptors.
I can't think of an issue with it. It's currently possible today if an interceptor called 👍 for @cheif's suggestions. I don't think that would be too difficult. The cache policies could be split like:

```swift
enum SingleResultCachePolicy {
    case returnCacheDataElseFetch
    case fetchIgnoringCacheData
    case fetchIgnoringCacheCompletely
    case returnCacheDataDontFetch
}

enum MultiResultCachePolicy {
    case returnCacheDataAndFetch
}
```

Then there could be two versions of fetch for the different cache policies and return types.

```swift
func fetch<Query: GraphQLQuery>(
    query: Query,
    cachePolicy: SingleResultCachePolicy = .default,
    context: (any RequestContext)? = nil
) async throws -> GraphQLResult<Query.Data> { ... }

func fetch<Query: GraphQLQuery>(
    query: Query,
    cachePolicy: MultiResultCachePolicy,
    context: (any RequestContext)? = nil
) -> AsyncThrowingStream<GraphQLResult<Query.Data>, any Error> { ... }
```

Since the functions have the same label, Xcode's suggestions will contain the cases from both enums, unless the return type is already matching, in which case it'll only show the matching cases.
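For illustration, here's how call sites might disambiguate between the two overloads proposed above (the `HeroQuery` generated query type is an assumption for the example):

```swift
// Single result: the SingleResultCachePolicy overload is selected and
// one GraphQLResult is returned.
let result = try await client.fetch(
    query: HeroQuery(), // illustrative query type
    cachePolicy: .returnCacheDataElseFetch
)

// Multiple results: the MultiResultCachePolicy overload is selected and
// a stream is returned that can emit a cache result, then a network result.
for try await result in client.fetch(
    query: HeroQuery(),
    cachePolicy: .returnCacheDataAndFetch
) {
    print(result.data ?? "no data")
}
```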
This is also applicable to
I see, I haven't come across
etc. It would lead to more overloads of
I agree with this. The library should absolutely offer a simplified async/await API for the base case without defer/stream/returnCacheDataElseFetch.
+1 A strong vote for this from my side. Anything else leads to confusion as to whether you as a developer have to deal with the fact that there may or may not be multiple results, although it should actually be clear from the usage.
Thanks for all the feedback on the desire for a single return. We’ll need to do some exploration to see what shape that could take. I don’t love the idea of overloading, because it’s odd to me that the API of the same function could change in a way that significantly alters how you consume it. But a function with a slightly different name that is used just for single-result fetches is fine with me.
In our client wrapper around ApolloClient, we provide the following methods:
The consumer decides which flavor of fetch to use, and this has worked out just fine this year. This was also thought about extensively, albeit internally.
@jimisaacs How do users decide which overload of fetch to use? Just by casting/inferring the return type? What if the user chooses to use the
Yes
It would result in receiving only the first result.
@CraigSiemens I'm implementing this concept of a separate If you have a request interceptor to do logging, that won't be called prior to cache read, and it won't be called at all if you don't get to the network request portion of the chain. My thought is that the correct way of handling this going forward would be to inject a logger that you can then use in any of your interceptors. This could be done using a global dependency injection container if you use some sort of dependency injection library; passing a logger to your interceptors during creation in your Does this limit any necessary use cases for anyone? |
As I've been building this out, I'm realizing that the middleware design that @CraigSiemens and @jimisaacs have asked for doesn't really work with our networking stack. This is because interceptors may be called multiple times With I'm exploring an alternative where each interceptor has two functions. One for intercepting the request on the way in and another for intercepting the response on the way out. I hope that will allow you to encapsulate logic within a single interceptor, while still enabling interceptors to be called multiple times for the same request. |
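A rough sketch of that two-function shape (names and signatures are assumptions, not the final design):

```swift
// Illustrative sketch of an interceptor with separate hooks for the
// request on the way in and the response on the way out. Because the
// response hook can run multiple times per request (e.g. a cache result
// followed by a network result), it is a separate call rather than a
// wrapper around a single `next` closure.
protocol RequestChainInterceptor: Sendable {
    func intercept<Operation: GraphQLOperation>(
        request: inout HTTPRequest<Operation>
    ) async throws

    func intercept<Operation: GraphQLOperation>(
        response: inout HTTPResponse<Operation>,
        for request: HTTPRequest<Operation>
    ) async throws
}
```

A logging interceptor could then pair its "start" and "finish" logic within a single type, even though the two hooks run as separate calls.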
@AnthonyMDev First, thanks to the work you are doing here. I think is the proper way to proceed with Apollo :) |
I'm working hard on getting this ready, but it's a huge undertaking. I can't guarantee when it will be done yet, but I'll be continuing to post updates here in the meantime. Until then, plan to import Apollo using I'm hoping to have a significant update by the end of the week. Just trying to get things actually building so I can verify that what I've done so far works. There were so many areas that needed to be updated; getting to a point where the code even compiles again has been a challenge. :)
@AnthonyMDev any updates on this one?
Unfortunately still marching forward with the work, but nothing that is functional yet. I've been making a lot of design iterations to accommodate all the feedback. Will continue to update here as I have news to share.
Looking for feedback on another possible change here. We are strongly considering making Apollo iOS 2.0 target a minimum version of iOS 15. Would that be a problem for many users?
That would not be a problem for us. I've successfully been able to convince the last 3 companies I worked for that iOS development is done by supporting major minus 1 for new features, major minus 2 for minimum support. Others may not have had so much luck |
This wouldn't be a problem for the apps I manage. None of them have an iOS 15 target. In fact, most of them are already on iOS 16 as a min deployment target. If a key feature of the 2.0 upgrade is Swift 6 and Swift concurrency support, I don't think that should be a problem in general. I don't know of any iOS developers who rely on these technologies and still support older versions of iOS. |
This will not be an issue for us either.
Our original deployment target for this work was iOS 13 where a lot of the concurrency API was back-ported to, but working through parts of the design yesterday showed that some of the API we want to use is only available on iOS 15. That's the reason for the new minimum deployment target. I share the same sentiment though that the vast majority of apps that were updated to use Swift Concurrency went directly to 15 or higher. |
Not a problem |
My company's min target is iOS 17, soon to be 18, because we have to keep up with the latest security updates from Apple. Please deploy this soon!
iOS 16. I think it would be ideal.
This RFC is a work in progress. Additions and changes will be made throughout the design process. Changes will be accompanied by a comment indicating what sections have changed.
Background
The upcoming release of Swift 6 brings some significant changes to the language. The new structured concurrency model is incompatible with the internal mutable state of the existing Apollo iOS infrastructure. While `@unchecked Sendable` can be used to silence most of the errors the current library faces in Swift 6, many of our data structures are only implicitly thread safe and allow for unsafe usage in ways that would be difficult to account for and prevent when using `@unchecked Sendable`.

The Apollo iOS team has planned to do a large overhaul of the networking APIs for a 2.0 release in the future. Swift 6 is pushing us to move that up on our roadmap.
Proposal
In order to properly support Swift structured concurrency and Swift 6, we believe significant breaking changes to the library need to be made. We are hoping to use this opportunity to make some of the other breaking changes to the networking layer that we have been planning and release a 2.0 version for Swift 6 compatibility. Due to the time constraints and urgency of releasing a version alongside the official stable release of Swift 6, we do not expect this 2.0 version to encompass the entire scope of changes we initially wanted to make. This will be an iterative (though significant) improvement on the existing code base. It is likely that a 3.0 version will be released in the future with additional breaking changes to provide for additional functionality that is out of scope for the Swift 6 compatible 2.0 release.
Impact - Breaking Changes
For users who are not building custom interceptors, the impact of the 2.0 migration would primarily involve adopting Swift concurrency in your calling code and updating API calls. How easy this is depends on how your existing code is structured. This is the direction the language is going, and if you are upgrading to Swift 6, most of these changes will be necessary anyway.
For users who are doing advanced networking, the migration could require a bit more work. The 2.0 proposal includes significant changes to the way the `RequestChain`, `ApolloInterceptor`, and `NormalizedCache` work. Anyone who is implementing their own custom versions of any of these is going to need to restructure their code and make their implementations thread safe.

Users who are unable to migrate will still be able to use Apollo iOS 1.0 with the `@preconcurrency import` annotation. This would downgrade the compiler errors to warnings in Swift 6.

Deployment Target
Apollo iOS 2.0 would drop support for iOS 12 and macOS 10.14. The new minimum deployment targets would be:
ApolloClient APIs
The `ApolloClient` will have new APIs introduced that support Swift Concurrency. Because GraphQL requests may return results multiple times, the request methods will return an `AsyncThrowingStream`.

The `watch(query:)`, `subscribe(subscription:)`, and `perform(mutation:)` methods will also have new versions following the same format.

The returned stream can be awaited upon to receive values from the request. The returned stream will finish when the request has been fully completed or an error is thrown. In order to prevent blocking of the current thread, awaiting on the request stream should be done in a detached `Task`.
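As a sketch, consuming the stream-based API might look like this (the `fetch(query:)` shape and `HeroQuery` type are assumptions; final 2.0 signatures may differ):

```swift
// Hypothetical usage of the proposed stream-returning request API.
let task = Task.detached {
    do {
        for try await result in client.fetch(query: HeroQuery()) {
            // Policies like .returnCacheDataAndFetch may emit a cache
            // result first, followed by the network result.
            print(result.data ?? "no data")
        }
        // The loop ends when the request's stream finishes.
    } catch {
        print("Request failed: \(error)")
    }
}
```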
RequestChain and RequestChainInterceptor
In 1.0, `RequestChain` was a protocol, with a provided implementation, `InterceptorRequestChain`. We have not identified any situation in which a custom implementation of `RequestChain` is useful. In 2.0, `RequestChain` will no longer be a protocol, and the implementation of `InterceptorRequestChain` will become the `RequestChain` itself.

As in 1.0, you will create a `RequestChainNetworkTransport` to initialize the `ApolloClient` with. Each individual network request will have its own `RequestChain` instantiated by the `RequestChainNetworkTransport`. In order to allow the interceptors in the chain to be configured on a per-request basis, an `InterceptorProvider` can be provided. While the APIs of these types may be slightly altered, the basic structure remains the same as in 1.0.

`ApolloInterceptor` will be renamed `RequestChainInterceptor`. Currently, all steps in the request chain are performed using interceptors. Instead of passing the `RequestChain` to the interceptors and having them call `chain.proceedAsync()`, the interceptors will now return a `NextAction` (or throw), and the request chain will use that action to proceed to the next interceptor.

The
`NextAction` is an `enum` that provides cases for determining what action the request chain should take next. The `RequestChain` will proceed as follows given the `NextAction` returned:

- `.proceed`: The `request` and optional `response` are provided to the `intercept(request:response:)` function of the next interceptor in the chain.
- `.proceedAndEmit`: The `intermediaryResult` is emitted through the `AsyncThrowingStream` for the request by the `ApolloClient`; then the `request` and optional `response` are provided to the `intercept(request:response:)` function of the next interceptor in the chain. This is used by the `CacheReadInterceptor` when using the `.returnCacheDataAndFetch` cache policy to emit the result returned from the cache while still continuing to complete the network fetch request.
- `.multiProceed`: The request chain will `await` on the stream and proceed through the rest of the interceptors from the current point for each `NextAction` value provided. Results are emitted through the `AsyncThrowingStream` for the request returned by the `ApolloClient`. This is used for `@defer` responses.
- `.exitEarlyAndEmit`: The `result` is emitted through the `AsyncThrowingStream` for the request by the `ApolloClient`, followed by the stream terminating. Subsequent interceptors in the request chain will not be called. This is used by the `CacheReadInterceptor` when using the `.returnCacheDataElseFetch` cache policy to emit the result returned from the cache and prevent the request chain from proceeding to the network fetch request.
- `.retry`: The request chain is retried from the beginning with the provided `request`.
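Pulling the cases described above together, the `NextAction` enum might look roughly like this. This is a sketch inferred from the RFC text; the associated value shapes are assumptions, not final API:

```swift
// Sketch of the proposed NextAction enum, inferred from the case
// descriptions above. Associated values are illustrative only.
enum NextAction<Operation: GraphQLOperation> {
    /// Pass the request (and optional response) to the next interceptor.
    case proceed(request: HTTPRequest<Operation>,
                 response: HTTPResponse<Operation>?)

    /// Emit an intermediary result to the request's stream, then continue
    /// with the next interceptor (e.g. .returnCacheDataAndFetch).
    case proceedAndEmit(intermediaryResult: GraphQLResult<Operation.Data>,
                        request: HTTPRequest<Operation>,
                        response: HTTPResponse<Operation>?)

    /// Proceed through the remaining interceptors once per action emitted
    /// by the stream (e.g. @defer responses).
    case multiProceed(AsyncThrowingStream<NextAction<Operation>, any Error>)

    /// Emit a final result and terminate without calling later interceptors
    /// (e.g. .returnCacheDataElseFetch on a cache hit).
    case exitEarlyAndEmit(result: GraphQLResult<Operation.Data>)

    /// Restart the chain from the beginning with the given request.
    case retry(request: HTTPRequest<Operation>)
}
```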
Error handling
`ApolloErrorInterceptor` will be renamed `RequestChainErrorInterceptor`. In 1.0, interceptors returned a `Result`, which could be a `.failure` with an error. Using `async/await` in 2.0, an interceptor can `throw` an error instead of returning a `NextAction`.

Your `InterceptorProvider` may provide a `RequestChainErrorInterceptor`. If it does, thrown errors will be passed to its `handleError` function. If the error interceptor can recover from the error, it may return a `NextAction`, and the request chain will continue with that action as described above. Otherwise the error interceptor may re-throw the error (or throw another error).

If the error interceptor throws an error (or no `RequestChainErrorInterceptor` is provided), the request chain will terminate and the `AsyncThrowingStream` for the request returned by the `ApolloClient`
will complete, throwing the provided error.

Normalized Cache
This section is in progress and requires more research.

The `NormalizedCache` API has been too limited, and we are investigating how to allow for more customization of caching implementations. This will likely mean expanding the protocol to receive more information during loading and writing of data, to allow for custom implementations to make better decisions about their behavior. We are looking for feedback on what additional functionality users would like to see enabled by the `NormalizedCache`.

The `NormalizedCache` will become an `AnyActor` protocol, meaning implementations will need to be `actor` types in 2.0. This ensures thread safety and prevents data races if a `NormalizedCache` were to be used with multiple `ApolloStore`s (which you probably shouldn't do, but is theoretically possible currently).

Design Questions
These are questions that are currently undecided about this RFC. Please comment on this issue if you have opinions or concerns.

- What additional functionality would you like to see enabled by the `NormalizedCache`?