
RFC: 2.0 API Changes - Swift Concurrency #3411

AnthonyMDev opened this issue Jul 11, 2024 · 46 comments

@AnthonyMDev
Contributor

AnthonyMDev commented Jul 11, 2024

This RFC is a work in progress. Additions and changes will be made throughout the design process. Changes will be accompanied by a comment indicating what sections have changed.

Background

The upcoming release of Swift 6 brings some significant changes to the language. The new structured concurrency model is incompatible with the internal mutable state of the existing Apollo iOS infrastructure. While @unchecked Sendable can be used to silence most of the errors the current library faces in Swift 6, many of our data structures are only implicitly thread safe and allow for unsafe usage in ways that would be difficult to account for and prevent when using @unchecked Sendable.

The Apollo iOS team has planned to do a large overhaul of the networking APIs for a 2.0 release in the future. Swift 6 is pushing us to move that up on our roadmap.

Proposal

In order to properly support Swift structured concurrency and Swift 6, we believe significant breaking changes to the library need to be made. We are hoping to use this opportunity to make some of the other breaking changes to the networking layer that we have been planning and release a 2.0 version for Swift 6 compatibility. Due to the time constraints and urgency of releasing a version alongside the official stable release of Swift 6, we do not expect this 2.0 version to encompass the entire scope of changes we initially wanted to make. This will be an iterative (though significant) improvement on the existing code base. It is likely that a 3.0 version will be released in the future with additional breaking changes to provide for additional functionality that is out of scope for the Swift 6 compatible 2.0 release.

Impact - Breaking Changes

For users who are not building custom interceptors, the impact of the 2.0 migration would primarily involve adopting Swift concurrency in your calling code and updating API calls. How easy this is will depend on how your existing code is structured. This is the direction the language is going, and if you are upgrading to Swift 6, most of these changes will be necessary anyway.

For users who are doing advanced networking, the migration could require a bit more work. The 2.0 proposal includes significant changes to the way the RequestChain, ApolloInterceptor, and NormalizedCache work. Anyone who is implementing custom versions of any of these will need to restructure their code and make their implementations thread safe.

Users who are unable to migrate will still be able to use Apollo iOS 1.0 with the @preconcurrency import annotation. This downgrades the compiler errors to warnings in Swift 6.
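
For example, remaining on 1.x while compiling in the Swift 6 language mode:

@preconcurrency import Apollo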

Deployment Target

Apollo iOS 2.0 would drop support for iOS 12 and macOS 10.14. The new minimum deployment targets would be:

  • iOS 13.0+
  • iPadOS 13.0+
  • macOS 10.15+
  • tvOS 13.0+
  • visionOS 1.0+
  • watchOS 6.0+

ApolloClient APIs

The ApolloClient will have new APIs introduced that support Swift Concurrency. Because GraphQL requests may return results multiple times, the request methods will return an AsyncThrowingStream.

public func fetch<Query: GraphQLQuery>(
    query: Query,
    cachePolicy: CachePolicy = .default,
    context: (any RequestContext)? = nil
) -> AsyncThrowingStream<GraphQLResult<Query.Data>, any Error>

The watch(query:), subscribe(subscription:), and perform(mutation:) methods will also have new versions following the same format.

The returned stream can be awaited to receive values from the request. It will finish when the request has fully completed or an error is thrown. To prevent blocking the current thread, awaiting on the request stream should be done in a detached Task.

let task = Task.detached {
  let request = client.fetch(query: MyQuery())

  for try await response in request {
    await MainActor.run {
      // Run some code using the response on the MainActor.
    }
  }
}

RequestChain and RequestChainInterceptor

In 1.0, RequestChain was a protocol, with a provided implementation InterceptorRequestChain. We have not identified any situation in which a custom implementation of RequestChain is useful. In 2.0, RequestChain will no longer be a protocol and the implementation of InterceptorRequestChain will become the RequestChain itself.

As in 1.0, you will create a RequestChainNetworkTransport to initialize the ApolloClient with. Each individual network request will have its own RequestChain instantiated by the RequestChainNetworkTransport. In order to allow the interceptors in the chain to be configured on a per-request basis, an InterceptorProvider can be provided. While the APIs of these types may be slightly altered, the basic structure remains the same as 1.0.

ApolloInterceptor will be renamed RequestChainInterceptor. Currently, all steps in the request chain are performed using interceptors that provide the following method:

func interceptAsync<Operation: GraphQLOperation>(
    chain: any RequestChain,
    request: HTTPRequest<Operation>,
    response: HTTPResponse<Operation>?,
    completion: @escaping (Result<GraphQLResult<Operation.Data>, any Error>) -> Void
)

Instead of passing the RequestChain to the interceptors and having them call chain.proceedAsync(), the interceptors will now return a NextAction (or throw) and the request chain will use that action to proceed onto the next interceptor.

  func intercept<Operation: GraphQLOperation>(
    request: HTTPRequest<Operation>,
    response: HTTPResponse<Operation>?
  ) async throws -> RequestChain.NextAction<Operation>

The NextAction is an enum that provides cases for determining what action the request chain should take next.

public enum NextAction<Operation: GraphQLOperation> {
    case proceed(
      request: HTTPRequest<Operation>,
      response: HTTPResponse<Operation>?
    )

    case proceedAndEmit(
      intermediaryResult: GraphQLResult<Operation.Data>,
      request: HTTPRequest<Operation>,
      response: HTTPResponse<Operation>?
    )

    case multiProceed(AsyncThrowingStream<NextAction<Operation>, any Error>)

    case exitEarlyAndEmit(
      result: GraphQLResult<Operation.Data>,
      request: HTTPRequest<Operation>
    )

    case retry(
      request: HTTPRequest<Operation>
    )
}

The RequestChain will proceed as follows given the NextAction returned:

  • .proceed:
    • The request chain will pass the request and optional response provided to the intercept(request:response:) function of the next interceptor in the chain.
  • .proceedAndEmit:
    • The value passed as the intermediaryResult will be emitted through the AsyncThrowingStream for the request by the ApolloClient.
    • Then the request chain will pass the request and optional response provided to the intercept(request:response:) function of the next interceptor in the chain.
    • This is used by the CacheReadInterceptor when using the .returnCacheDataAndFetch cache policy to emit the result returned from the cache while still continuing to complete the network fetch request.
  • .multiProceed:
    • The request chain will await on the stream and proceed through the rest of the interceptors from the current point for each NextAction value provided.
    • This action allows for a request chain to branch into multiple asynchronous request chains from the current interceptor. Values emitted by each of the branched chains will be passed through to the final AsyncThrowingStream for the request returned by the ApolloClient.
    • This is used for multi-part network responses such as HTTP subscriptions and @defer responses.
  • .exitEarlyAndEmit:
    • The value passed as the result will be emitted through the AsyncThrowingStream for the request by the ApolloClient, followed by the stream terminating. Subsequent interceptors in the request chain will not be called.
    • This is used by the CacheReadInterceptor when using the .returnCacheDataElseFetch cache policy to emit the result returned from the cache and prevent the request chain from proceeding to the network fetch request.
  • .retry:
    • The request chain will begin again from the first interceptor, passing in the provided request.
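
As an illustration, here is a minimal sketch of a custom interceptor under this proposed API. It assumes HTTPRequest keeps 1.0's addHeader(name:value:) method; the final 2.0 API may differ.

struct AuthorizationInterceptor: RequestChainInterceptor {
    let token: String

    func intercept<Operation: GraphQLOperation>(
        request: HTTPRequest<Operation>,
        response: HTTPResponse<Operation>?
    ) async throws -> RequestChain.NextAction<Operation> {
        // Attach an auth header, then hand the request to the next
        // interceptor in the chain.
        request.addHeader(name: "Authorization", value: "Bearer \(token)")
        return .proceed(request: request, response: response)
    }
}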

Error handling

ApolloErrorInterceptor will be renamed RequestChainErrorInterceptor. In 1.0, interceptors returned a Result, which could be a .failure with an error. Using async/await in 2.0, an interceptor can throw an error instead of returning a NextAction.

Your InterceptorProvider may provide a RequestChainErrorInterceptor with the function:

func handleError<Operation: GraphQLOperation>(
    error: any Error,
    request: HTTPRequest<Operation>,
    response: HTTPResponse<Operation>?
) async throws -> RequestChain.NextAction<Operation>

If your InterceptorProvider provides a RequestChainErrorInterceptor, thrown errors will be passed to its handleError function. If the error interceptor can recover from the error, it may return a NextAction, and the request chain will continue with that action as described above. Otherwise the error interceptor may re-throw the error (or throw another error).

If the error interceptor throws an error (or no RequestChainErrorInterceptor is provided), the request chain will terminate and the AsyncThrowingStream for the request returned by the ApolloClient will complete, throwing the provided error.
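
For illustration, here is a hedged sketch of an error interceptor that recovers from an expired auth token by retrying the request. AuthError and refreshToken() are hypothetical stand-ins for your own auth machinery:

enum AuthError: Error { case tokenExpired }

func refreshToken() async throws -> String {
    // Hypothetical stand-in for your auth provider's refresh call.
    return "new-token"
}

struct TokenRefreshErrorInterceptor: RequestChainErrorInterceptor {
    func handleError<Operation: GraphQLOperation>(
        error: any Error,
        request: HTTPRequest<Operation>,
        response: HTTPResponse<Operation>?
    ) async throws -> RequestChain.NextAction<Operation> {
        // Only recover from expired tokens; re-throw everything else so
        // the request's stream finishes by throwing the error.
        guard case AuthError.tokenExpired = error else { throw error }

        let newToken = try await refreshToken()
        request.addHeader(name: "Authorization", value: "Bearer \(newToken)")

        // Restart the chain from the first interceptor with the updated request.
        return .retry(request: request)
    }
}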

Normalized Cache

This section is in progress and requires more research.

The NormalizedCache API has been too limited, and we are investigating how to allow for more customization of caching implementations. This will likely mean expanding the protocol to receive more information during loading and writing of data to allow for custom implementations to make better decisions about their behavior. We are looking for feedback on what additional functionality users would like to see enabled by the NormalizedCache.

The NormalizedCache will become an AnyActor protocol, meaning implementations will need to be actor types in 2.0. This ensures thread safety and prevents data races if a NormalizedCache were to be used with multiple ApolloStores (which you probably shouldn't do, but is theoretically possible currently).
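
For a rough idea of the shape this could take, here is a minimal sketch of an actor-backed in-memory store. It assumes method names close to 1.0's NormalizedCache (loadRecords, removeRecord, clear); since the final protocol is still undecided, it is shown without a protocol conformance:

actor InMemoryCache {
    private var storage: [CacheKey: Record] = [:]

    // Actor isolation serializes all access, so concurrent reads and
    // writes (even from multiple ApolloStores) cannot race.
    func loadRecords(forKeys keys: Set<CacheKey>) -> [CacheKey: Record] {
        storage.filter { keys.contains($0.key) }
    }

    func removeRecord(for key: CacheKey) {
        storage.removeValue(forKey: key)
    }

    func clear() {
        storage.removeAll()
    }
}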

Design Questions

These are questions about this RFC that are currently undecided. Please comment on this issue if you have opinions or concerns.

What additional functionality would you like to see enabled by the NormalizedCache?

@TizianoCoroneo
Contributor

TizianoCoroneo commented Jul 12, 2024

Hey 👋 I checked the list of interceptors that we use in our project. Most of them would become GraphQLRequestInterceptor and fit nicely with the new model.
However, there is one interceptor that I'm not sure how to translate.

ErrorParsingInterceptor
public class ErrorParsingInterceptor: ApolloInterceptor {
    public var id: String = "ErrorParsingInterceptor"
    public init() {}
    public func interceptAsync<Operation>(
        chain: RequestChain,
        request: HTTPRequest<Operation>,
        response: HTTPResponse<Operation>?,
        completion: @escaping (Result<GraphQLResult<Operation.Data>, Error>) -> Void
    ) where Operation: GraphQLOperation {
      do {
        guard let response = response else {
            assertionFailure("We need to have a response to parse the GQL errors in the response. This interceptor is placed in the wrong order in the request chain.")
            throw InterceptorOrderError()
        }

        switch response.httpResponse.statusCode {
        case 410:
            throw GraphQLHTTPResponseError(
                operationName: String(describing: Operation.self),
                body: response.rawData,
                response: response.httpResponse,
                kind: .upgradeNeeded
            )

        case 401:
            throw GraphQLHTTPResponseError(
                operationName: String(describing: Operation.self),
                body: response.rawData,
                response: response.httpResponse,
                kind: .notAuthorized
            )

        case 503:
            throw GraphQLHTTPResponseError(
                operationName: String(describing: Operation.self),
                body: response.rawData,
                response: response.httpResponse,
                kind: .maintenance
            )

        case _ where !response.httpResponse.isSuccessful:
            throw GraphQLHTTPResponseError(
                operationName: String(describing: Operation.self),
                body: response.rawData,
                response: response.httpResponse,
                kind: .errorResponse
            )

        default:
            chain.proceedAsync(
                request: request,
                response: response,
                interceptor: self,
                completion: completion)
        }

      } catch {
          chain.handleErrorAsync(
              error,
              request: request,
              response: response,
              completion: completion)
      }
    }
}

This interceptor has the simple task of translating HTTP error codes into enum cases. I'm not too fond of it (it doesn't add much value), but it's an example of something that doesn't look like it would fit the new model: we insert it between the NetworkFetchInterceptor and the JSONResponseParsingInterceptor, since when we receive one of these HTTP codes the JSON payload is not present in the form expected by the parser.
It's a bit unclear what kind of error we would get back from the framework in the case of, for example, a 401.

Another point of uncertainty is the additionalErrorInterceptor functionality. We are currently using it to create signpost ranges for network requests, which are sometimes handy when profiling the app. Our interceptor chain looks like this:

Interceptor chain
override open func interceptors<Operation>(
    for _: Operation
) -> [ApolloInterceptor] where Operation: GraphQLOperation {
    [
        SignpostInterceptor(
            type: .begin,
            name: "Request chain",
            message: "Start: %@ - %@"
        ),

        MaxRetryInterceptor(maxRetriesAllowed: 1),
        loggerInterceptor,

        detectMultipleCallsInterceptor,

        SignpostInterceptor(
            type: .event,
            name: "Request chain",
            message: "Cache read start: %@ - %@"
        ),
        CacheReadInterceptor(store: self.store),
        tokenInterceptor,
        metadataInterceptor,
        botProtectionInterceptor,

        SignpostInterceptor(
            type: .event,
            name: "Request chain",
            message: "Networking start: %@ - %@"
        ),

        NetworkFetchInterceptor(client: self.client),

        SignpostInterceptor(
            type: .event,
            name: "Request chain",
            message: "Response parsing start: %@ - %@"
        ),
        errorParsingInterceptor,
        JSONResponseParsingInterceptor(),
        AutomaticPersistedQueryInterceptor(),

        SignpostInterceptor(
            type: .event,
            name: "Request chain",
            message: "Cache write start: %@ - %@"
        ),
        CacheWriteInterceptor(store: self.store),

        SignpostInterceptor(
            type: .end,
            name: "Request chain",
            message: "End: %@ - %@"
        ),
    ].compactMap { $0 }
}

override open func additionalErrorInterceptor<Operation>(
    for _: Operation
) -> ApolloErrorInterceptor? where Operation: GraphQLOperation {
    SignpostErrorInterceptor(
        type: .end,
        name: "Request chain",
        nextErrorInterceptor: apiErrorInterceptor
    )
}

We're using the additional error interceptor to terminate the signpost range when an error is thrown. For our use-case, we would like to have either something similar to additionalErrorInterceptor to catch errors before they are thrown outside the client, or to have os.log signposts embedded in the client directly.

These are my answers to the questions in your RFC:

  1. I don't see a use-case for editing requests; if we want to pass additional data down the chain, we can always reference the interceptors we need.
  2. See above.
  3. I don't think we need additional functionality, but I should dig more first.

@jimisaacs
Contributor

jimisaacs commented Jul 12, 2024

Hello! This is a quick comment as it is all I have time for at the moment, but expect follow-ups and more discussion.

I have concerns about taking away cache read/write interceptors. There's a fundamental difference in providing these versus a custom normalized cache. The former operates on GraphQL operations, while the latter operates on cache records. Also the former operates on requests and responses and has all that context, including the request context itself, while the latter does not.

I also have concerns about taking away the APIs that exist in the current NetworkTransport layer, but the reasoning here is more bespoke. We are taking advantage of being able to wrap this entire layer in order to allow for a new URL/endpoint if needed upon request, which currently isn't possible with only the request chain. Doing it at this layer allows us to not have to recreate a client for endpoint changes.

For NormalizedCache, we would need a more robust way to be notified of any/all cache activity, as you know already. Though we also need a way to implement cache expiration, which we currently have implemented as a cache read interceptor.

We also have GraphQL request tracing (logging) implemented as an interceptor.

@AnthonyMDev
Contributor Author

Thank you both for your feedback @TizianoCoroneo & @jimisaacs. That's really helpful. I'm going to need to do some more thinking and will update this RFC soon.

@jimisaacs

I also have concerns about taking away the APIs that exist in the current NetworkTransport layer, but the reasoning here is more bespoke. We are taking advantage of being able to wrap this entire layer in order to allow for a new URL/endpoint if needed upon request, which currently isn't possible with only the request chain. Doing it at this layer allows us to not have to recreate a client for endpoint changes.

In 1.0, you can modify the endpoint URL by mutating the graphQLEndpoint property of the HTTPRequest in an interceptor. Would this enable your use case here?

I don't think there is any reason why a NetworkTransport in the 2.0 couldn't be wrapped in the same way though!

@jimisaacs
Contributor

In 1.0, you can modify the endpoint URL by mutating the graphQLEndpoint property of the HTTPRequest in an interceptor. Would this enable your use case here?

It technically would, though for some more context: what I have is an injection pattern, where you initialize a client with a config object that basically configures how the internal NetworkTransport will be created on every request. This includes passing through an interceptorProvider, which is considered a passthrough at this layer. So to remove the current NetworkTransport facility and still allow the same functionality will require understanding how we configure interceptor providers in this new layer.

Ultimately, to move this configuration to an interceptor would mean that particular interceptor is internal only, which would mean that interceptors would need to be appended to base interceptors, which would in turn require a redesign of all of this passthrough behavior.

@jimisaacs
Contributor

jimisaacs commented Jul 12, 2024

What additional functionality would you like to see enabled by the NormalizedCache.

I think I've referenced Apollo Web's Type Policies before, but this would be useful. Basically something that could provide that relationship we were looking for in a query for a list of things with a query for a single thing, where cache normalization would be linked between the two.

EDIT: In my opinion, something like this could lean into dictionaries or just less typed things, like Apollo web, for the sake of not having to worry about codegen and what not. Then a followup could make it more robust and typesafe later.

@Brett-Best

Excited by the proposed changes here! Does an approximate timeframe exist for the v2 release?

@AnthonyMDev
Contributor Author

Due to the excellent feedback I've gotten here, we've decided to go in a different direction with the 2.0. There will still be significant changes to support async/await, but the RequestChain format will be staying much closer to the existing design. I'll be updating this RFC with some more accurate information tomorrow.

As for timeframe, we are committed to having at least a beta of 2.0 released alongside the September stable release of Swift 6. I'm hoping we can get the beta out earlier than that and be approaching a stable release version of Apollo iOS 2.0 with Swift 6, but at the very least we will have a working beta for you to begin your migration with.

@AnthonyMDev
Contributor Author

The RFC has been updated to reflect the current state of work on 2.0. This is still subject to change and additional feedback is appreciated!

@JOyo246

JOyo246 commented Aug 5, 2024

I haven't seen any talk about giving SelectionSet Sendable conformance. Is there a timeline on this, or reasoning why it's not already Sendable?

@AnthonyMDev
Contributor Author

@JOyo246 I haven't added that part to the RFC yet, because I haven't gotten to working on that part of the library yet. Making these safely Sendable was a question that I needed to do some research into, because some of the models are actually mutable (for use in a local cache mutation). And there have been requests to allow them to be mutated outside of cache transactions for other use cases. I believe that, since we are using copy on write semantics, we should be able to allow the mutable models to be Sendable, but I wasn't sure about that at the time of beginning this RFC.

I'll update the RFC once I've confirmed.

Once I've determined a solution, if we are able to make the models Sendable without introducing breaking changes, I will be backporting Sendable conformance to the models in 1.0. So you won't have to wait for 2.0 to use them.

@swizzlr

swizzlr commented Aug 12, 2024

I don't know if this was covered in other discussion but I just wanted to note here that I believe it's overly restrictive to make the NormalizedCache API (or really, any protocol) AnyActor. A type doesn't need to be an Actor to be Sendable (ie, usable across concurrency domains/threads), which seemed to be the motivation here. An actor would probably be a good choice but that can probably be an implementation detail?

@CraigSiemens
Contributor

Instead of passing the RequestChain to the interceptors and having them call chain.proceedAsync(), the interceptors will now return a NextAction (or throw) and the request chain will use that action to proceed onto the next interceptor.

 func intercept<Operation: GraphQLOperation>(
   request: HTTPRequest<Operation>,
   response: HTTPResponse<Operation>?
 ) async throws -> RequestChain.NextAction<Operation>

One downside I can see with this approach is that it doesn't allow an interceptor to hook into the completion side of the request. Previously, the completion closure passed to chain.proceedAsync could have additional logic added to it that could use the result passed to the completion.

For example, here's a simplified version of our logging interceptor. Our actual implementation uses signposts for logging, but the structure is the same. This might only work in our project since we don't use caching, so it's guaranteed that the completion will only be called once.

class LoggingInterceptor: ApolloInterceptor {
    func interceptAsync<Operation: GraphQLOperation>(
        chain: RequestChain,
        request: HTTPRequest<Operation>,
        response: HTTPResponse<Operation>?,
        completion: @escaping (Result<GraphQLResult<Operation.Data>, Error>) -> Void
    ) {
        print("Starting \(request.operation.operationName)")
        
        chain.proceedAsync(
            request: request,
            response: response
        ) { result in
            switch result {
            case .success:
                print("Success \(request.operation.operationName)")
            case .failure:
                print("Failure \(request.operation.operationName)")
            }
            
            completion(result)
        }
    }
}

I wonder if there's a way to implement it that would still allow the interceptor to hook into the request going down the list of interceptors and back up the list on completion. I'm not sure how possible that'd be with caching allowing the interceptors to "return" multiple times. One thought would be to split the interceptors for caching from the ones doing the network request. That'd allow each list of interceptors to know that only one value will be returned, and then a higher-level type would be responsible for sending both responses to the caller.


I've also been using the ClientMiddleware type in OpenAPIRuntime recently. It uses async/await and serves a similar purpose while allowing interception of both the request and the response. Its flexibility has been really nice, allowing the implementation of things like logging, automatic retry for requests, and some types of caching (e.g. returnCacheDataElseFetch).

They've got a number of examples in their repo but here's a simple logging example with the same behaviour as the LoggingInterceptor above.

func intercept(
    _ request: HTTPRequest,
    body: HTTPBody?,
    baseURL: URL,
    operationID: String,
    next: @Sendable (HTTPRequest, HTTPBody?, URL) async throws -> (HTTPResponse, HTTPBody?)
) async throws -> (HTTPResponse, HTTPBody?) {
    print("Starting \(operationID)")
    do {
        let response = try await next(request, body, baseURL)
        print("Success \(operationID)")
        return response
    } catch {
        print("Failure \(operationID)")
        throw error
    }
}

@jimisaacs
Contributor

+1 to @CraigSiemens LoggingInterceptor point. We have a logging interceptor that looks very similar. Would like to know how to achieve the same thing with this RFC.

@AnthonyMDev
Contributor Author

Thanks for the input all.

@swizzlr I'll think about this more before moving forward. My intention here was to ensure that cache reads/writes are done serially and to prevent race conditions. Currently, we are using a barrier queue to handle that in the ApolloStore when it calls into your cache implementation. If you were to access the NormalizedCache outside of the ApolloStore, then there are no guarantees of thread safety.

I'd love to hear opinions from others on this one.


@CraigSiemens @jimisaacs

Thanks for providing this use case here. The way I would think to handle that would be to just use two interceptors: a PreRequestLoggingInterceptor and a PostResponseLoggingInterceptor, or really any combination of logging interceptors you need. For the logging use case specifically, another possible way to handle this would be to inject a logger into your RequestContext; then all of your custom interceptors could just log events as necessary.

While something similar to the example @CraigSiemens provided is doable, it does make for a less clean API in my opinion. Though I certainly want to ensure that we aren't preventing any necessary use cases here. Are there other use cases outside of logging in which this would be useful, and for which breaking the behavior into two separate interceptors would not be a workable or desirable solution?
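
For illustration, here is a rough sketch of the RequestContext approach; LoggingContext is hypothetical, not a proposed API:

import os

struct LoggingContext: RequestContext {
    let logger: Logger
}

// At fetch time:
// client.fetch(query: MyQuery(), context: LoggingContext(logger: myLogger))
//
// Inside any custom interceptor (assuming the request exposes its
// context as it does in 1.0):
// if let context = request.context as? LoggingContext {
//     context.logger.debug("Starting \(request.operation.operationName)")
// }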

@CraigSiemens
Contributor

I wanted to bring it up because it does make some things more complicated when split up across multiple interceptors. For example, OSSignposter.beginInterval returns an object that needs to be passed to the end-interval call. It's much simpler to create and use the value in the same function rather than passing it between interceptors.

@jimisaacs
Contributor

+1, the next() pattern is a well-established part of middleware designs and provides the capability for managing centralized logic, as described. It’s widely used in command pattern-inspired architectures like Flux/Redux, where unidirectional data flow and middleware are employed. Moving away from this pattern and suggesting the splitting of centralized logic into multiple interceptors introduces unnecessary complexity and risks leaking business logic across multiple components, which breaks encapsulation and violates principles like Single Responsibility.

@AnthonyMDev
Contributor Author

Thanks for the feedback. I'm going to go back to the drawing board and make sure we get this right!

@Killectro

Hey @AnthonyMDev! It's been ~a month so I figured I'd check in on how progress was going with 2.0?

@AnthonyMDev
Contributor Author

Hey there! We're making progress. The suggestions in this RFC have made us re-think the Interceptor APIs, and the Apollo team has been busy preparing for and attending GraphQL Conference this month. I'm hopping back into this today and hope to have an idea of the direction on those APIs and the RFC updated by end of week.

I'm also considering trying to release a public preview of some of this work that isn't breaking in an Alpha release if I'm able to do it without breaking existing APIs. Things like versions of the ApolloClient.fetch APIs that return an AsyncStream instead of using a completion handler (and maybe Sendable for the generated models?). That way we could get some users testing these things while we iterate on the 2.0.

@woodymelling

woodymelling commented Sep 19, 2024

Hey @AnthonyMDev

With these upcoming changes, will Apollo's interceptors properly receive TaskLocal values from the request callsite?

With full structured concurrency, this happens automatically, but with the class-based interceptors, I believe that extra work needs to be done.

It would be really great if the interceptors were run with the same TaskLocals as the callsite of the request. That way we can use TaskLocals for logging, isolating data, and other dependencies per GraphQL call.

Libraries such as swift-dependencies rely on this functionality quite heavily
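
For context, this is the pattern in isolation; RequestTrace is a hypothetical example, not an Apollo API:

enum RequestTrace {
    @TaskLocal static var id: String = "unset"
}

// Values set at the callsite flow automatically to structured child tasks:
// try await RequestTrace.$id.withValue("req-123") {
//     for try await result in client.fetch(query: MyQuery()) { ... }
// }
//
// An interceptor running within the same structured task tree would read
// RequestTrace.id and see "req-123". Work run via Task.detached or plain
// completion handlers does not inherit the value, which is the extra work
// referred to above.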

@AnthonyMDev
Contributor Author

Looking for more feedback here!

I have concerns about taking away cache read/write interceptors. There's a fundamental difference in providing these versus a custom normalized cache. The former operates on GraphQL operations, while the latter operates on cache records. Also the former operates on requests and responses and has all that context, including the request context itself, while the latter does not. - @jimisaacs

I understand your point about the cache mechanism needing context on the request/operation, especially until we implement @typePolicy & @fieldPolicy. I'm feeling like having the caching mechanisms as part of the RequestChain is a bad design though, and I'd like to consider changing that.

CacheReadInterceptor is the only interceptor that can terminate the RequestChain early, and if it does (b/c it got a valid cache hit with a fetchPolicy .returnCacheDataElseFetch) then none of the interceptors after it are called. In the new RFC, that is modeled using the exitEarlyAndEmit action. I don't really love the existing behavior in 1.0 where your future interceptors aren't even called when the cache read exits early. It is an unexpected behavior that can trip people up.

It also can emit a cache response and then continue on with the request chain (with a fetchPolicy .returnCacheDataAndFetch). That uses the proceedAndEmit(intermediaryResult:...) action. I'm also not a fan of this being part of the RequestChain, because the intermediaryResult does not travel through the chain on its way out. It's just emitted as is (this is the existing behavior in 1.0).


My feeling is that the awkwardness here is because the cache read/write really isn't part of the "networking" process and should instead be handled outside of the RequestChain. I'm thinking that we could expose a secondary protocol, CacheInterceptor, that would also be provided via the InterceptorProvider. Instead of having all interceptors in one place, you would provide a list of requestInterceptors and, optionally, a single cacheInterceptor.

This CacheInterceptor would have access to the GraphQLOperation with all of its variables. I don't know if we want to expose the HTTPRequest/HTTPResponse here, but we could. I could see a situation in which you would want to expose the initial HTTPRequest to the CacheInterceptor, if you wanted to use a custom URL-based cache or something.

This feels like better separation of concerns. I think it's less confusing when your networkInterceptors are never called because there was a cache hit. You either get a request that proceeds through all of them, or none of them are called in the first place.

Do you think this would be too limiting? The only direct behavior this would remove is the ability for a networkInterceptor to emit an intermediary value, which would only be possible when the cache gets a hit and then continues on to the request chain. Because the interceptors will return the GraphQLResult using the next() middleware pattern, the ability to exit early would still technically exist (by returning a result without calling next()), but it would not be recommended usage.

Maybe the issues with the intermediary and early exit results not traveling through the request chain isn't really an issue to anyone?
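
To make this concrete, here is a rough sketch of what a CacheInterceptor protocol could look like. The shape is purely illustrative while this design is still in flux:

protocol CacheInterceptor {
    /// Attempts to satisfy the operation from the cache before any
    /// requestInterceptors run. Returning nil proceeds to the network.
    func readCacheData<Operation: GraphQLOperation>(
        for operation: Operation,
        cachePolicy: CachePolicy
    ) async throws -> GraphQLResult<Operation.Data>?

    /// Writes a network result back to the cache after the chain completes.
    func writeCacheData<Operation: GraphQLOperation>(
        _ result: GraphQLResult<Operation.Data>,
        for operation: Operation
    ) async throws
}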

@AnthonyMDev
Contributor Author

@woodymelling

With these upcoming changes, will Apollo's interceptors properly receive TaskLocal values from the request callsite?

With full structured concurrency, this happens automatically, but with the class-based interceptors, I believe that extra work needs to be done.

I'm not clear what work needs to be done to support this, but I'd like to make it possible. Once I get an initial alpha release out, maybe you can play around with it and let me know what is/isn't working.

@cheif
Contributor

cheif commented Sep 25, 2024

I've got a thought about the proposed form of fetch:

The ApolloClient will have new API's introduced that support Swift Concurrency. Because GraphQL requests may return results multiple times, the request methods will return an AsyncThrowingStream.

public func fetch<Query: GraphQLQuery>(
    query: Query,
    cachePolicy: CachePolicy = .default,
    context: (any RequestContext)? = nil
) -> AsyncThrowingStream<GraphQLResult<Query.Data>, any Error>

I agree with the overall idea that we might get multiple results, and thus returning an AsyncThrowingStream is the correct API. However, this will only happen for one (of five) cache policies, the others will always return exactly once, and thus a simple async throws method would suffice for these cases.

I'm basically wondering if there's place for something like:

public func fetch<Query: GraphQLQuery>(
    query: Query,
    cachePolicy: CachePolicy = .default,
    context: (any RequestContext)? = nil
) async throws -> GraphQLResult<Query.Data>

That would be available for everything but returnCacheDataAndFetch.

I understand that this might be hard to achieve with CachePolicy being an enum, and I guess changing CachePolicy to something else (a protocol?) would have other drawbacks.

For me, the async throws version is probably what I'm going to be wanting almost all of the time, and I would guess I'm not alone in that, so I think it might be worth at least considering.

@CraigSiemens
Contributor

CraigSiemens commented Sep 25, 2024

I like the idea of separating the cache interceptor from the request interceptors.

Maybe the issues with the intermediary and early exit results not traveling through the request chain isn't really an issue to anyone?

I can't think of an issue with it. It's possible today if an interceptor calls completion instead of calling proceedAsync.


👍 for @cheif's suggestions.

I don't think that would be too difficult. The CachePolicy enum could be split up into two based on the number of results that the operation will produce.

enum SingleResultCachePolicy {
    case returnCacheDataElseFetch
    case fetchIgnoringCacheData
    case fetchIgnoringCacheCompletely
    case returnCacheDataDontFetch
}

enum MultiResultCachePolicy {
    case returnCacheDataAndFetch
}

Then there could be two versions of fetch for the different cache policies and return types.

func fetch<Query: GraphQLQuery>(
    query: Query,
    cachePolicy: SingleResultCachePolicy = .default,
    context: (any RequestContext)? = nil
) async throws -> GraphQLResult<Query.Data>
{ ... }

func fetch<Query: GraphQLQuery>(
    query: Query,
    cachePolicy: MultiResultCachePolicy,
    context: (any RequestContext)? = nil
) -> AsyncThrowingStream<GraphQLResult<Query.Data>, any Error>
{ ... }

Since the functions have the same label, Xcode's suggestions will contain the cases from both enums.


Unless the return type is already matching, in which case it'll only show the matching cases.

@calvincestari
Member

@cheif @CraigSiemens

I agree with the overall idea that we might get multiple results, and thus returning an AsyncThrowingStream is the correct API. However, this will only happen for one (of five) cache policies, the others will always return exactly once, and thus a simple async throws method would suffice for these cases.

This is also applicable to @defer and @stream operations though, which are independent of any cache policy.

@cheif
Contributor

cheif commented Sep 26, 2024

@cheif @CraigSiemens

I agree with the overall idea that we might get multiple results, and thus returning an AsyncThrowingStream is the correct API. However, this will only happen for one (of five) cache policies, the others will always return exactly once, and thus a simple async throws method would suffice for these cases.

This is also applicable to @defer and @stream operations though, which are independent of any cache policy.

I see; I haven't come across @defer and @stream previously, so I don't really understand how they are treated by the client libs. My naive take would be that maybe the codegen could generate different GraphQLQuery conformances when it comes across @defer/@stream, and these could then be used for specializing .fetch, e.g.:

public protocol SingleResultGraphQLQuery: GraphQLQuery {}
public protocol StreamingGraphQLQuery: GraphQLQuery {}
...
func fetch<Query: SingleResultGraphQLQuery>(..., cachePolicy: SingleResultCachePolicy) async throws -> ...
func fetch<Query: StreamingGraphQLQuery>(..., cachePolicy: SingleResultCachePolicy) -> AsyncThrowingStream<...>

etc.

It would lead to more overloads of .fetch, but I would assume that this just happens at the outermost layer of the library, while everything internally could just return AsyncThrowingStream.

@TizianoCoroneo
Contributor

My naive take would be that maybe the codegen could generate out different GraphQLQuery when it comes across @defer/@stream, and these could then be used for specializing down .fetch, [...]

I agree with this. The library should absolutely offer a simplified async/await API for the base case without defer/stream/returnCacheDataAndFetch.

@wiedem

wiedem commented Sep 27, 2024

I agree with the overall idea that we might get multiple results, and thus returning an AsyncThrowingStream is the correct API. However, this will only happen for one (of five) cache policies, the others will always return exactly once, and thus a simple async throws method would suffice for these cases.

+1 A strong vote for this from my side.
In the majority of my projects in which I use Apollo-iOS, the vast majority of fetches produce a single result. In my opinion, it should be clear from the API whether a fetch can return multiple results or not. And if it can't, no AsyncThrowingStream should be returned.

Anything else leads to confusion as to whether you as a developer have to deal with the fact that there may or may not be multiple results, although it should actually be clear from the usage.

@AnthonyMDev
Contributor Author

Thanks for all the feedback on the desire for a single return async fetch function. We'll devise some solution for that.

We’ll need to do some exploration to see what shape that could take. I don’t love the idea of overloading, because it’s odd to me that the API of the same function could change in a way that significantly alters how you consume it. But a function with a slightly different name that is used just for single-result fetches is fine with me.

@jimisaacs
Contributor

In our client wrapper around ApolloClient, we provide the following methods:

  • fetch: async throws
  • fetch: returns AsyncThrowingStream
  • perform: async throws
  • upload: async throws
  • watch: returns AsyncStream
  • subscribe: returns AsyncStream

The consumer decides which flavor of fetch to use, and this has worked out just fine this year. This was also thought about extensively, albeit internally.

@AnthonyMDev
Contributor Author

@jimisaacs How do users decide which overload of fetch to use? Just by casting/inferring the return type? What if the user chooses to use the async throws version for a request that would return multiple results?

@jimisaacs
Contributor

jimisaacs commented Sep 27, 2024

How do users decide which overload of fetch to use? Just by casting/inferring the return type?

Yes

What if the user chooses to use the async throws version for a request that would return multiple results?

It would result in receiving only the first result.

@AnthonyMDev
Copy link
Contributor Author

@CraigSiemens I'm implementing this concept of a separate cacheInterceptor and then a list of requestInterceptors. But I'm realizing that this may have an impact on logging interceptors. There is a chance that the requestInterceptors would never be called. The cacheInterceptor would be run first, and if it gets a cache hit with a cache policy of .returnCacheDataElseFetch then the requestInterceptors are never called. Also, if you use a .returnCacheDataDontFetch policy, the requestInterceptors are never called.

If you have a request interceptor that does logging, it won't be called prior to the cache read, and it won't be called at all if you don't get to the network request portion of the chain.

My thought is that the correct way of handling this going forward would be to inject a logger that you can then use in any of your interceptors. This could be done using a global dependency injection container if you use some sort of dependency injection library; by passing a logger to your interceptors during creation in your InterceptorProvider; by passing it into the RequestContext on fetch; or possibly via TaskLocal values?

Does this limit any necessary use cases for anyone?

@AnthonyMDev
Contributor Author

As I've been building this out, I'm realizing that the middleware design that @CraigSiemens and @jimisaacs have asked for doesn't really work with our networking stack. This is because interceptors may be called multiple times when using @defer, or when getting a cache hit and then a network response.

With async/await, these next() closures only return once. The only way to make this work would be to make the next() closure itself return an AsyncThrowingStream. Custom interceptors would then need to iterate over that stream, and the code becomes very confusing and error-prone very quickly.

I'm exploring an alternative where each interceptor has two functions. One for intercepting the request on the way in and another for intercepting the response on the way out. I hope that will allow you to encapsulate logic within a single interceptor, while still enabling interceptors to be called multiple times for the same request.
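
As a purely illustrative sketch of what that could look like (the names and shapes are still being explored):

protocol RequestInterceptor {
    // Called once on the way in; may modify or replace the request.
    func intercept<Operation: GraphQLOperation>(
        request: HTTPRequest<Operation>
    ) async throws -> HTTPRequest<Operation>

    // Called on the way out, possibly multiple times for a single request
    // (e.g. @defer chunks, or a cache hit followed by a network response).
    func intercept<Operation: GraphQLOperation>(
        response: HTTPResponse<Operation>,
        for request: HTTPRequest<Operation>
    ) async throws -> HTTPResponse<Operation>
}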

@fabioferrero

fabioferrero commented Oct 16, 2024

@AnthonyMDev First, thanks for the work you are doing here. I think it is the proper way to proceed with Apollo :)
I came here for the Sendable conformance of generated objects, but I guess we can wait for Apollo 2.0.
What is the planned roadmap at this point? We want to place the Apollo 2.0 release on our roadmap for the Swift 6 complete concurrency migration.

@AnthonyMDev
Contributor Author

I'm working hard on getting this ready, but it's a huge undertaking. I can't guarantee when it will be done yet, but I'll be continuing to post updates here in the meantime.

Until then, plan to import Apollo using @preconcurrency as all of our current implementations should be thread safe.

I'm hoping to have a significant update by the end of the week. Just trying to get things actually building so I can verify that what I've done so far works. There were so many areas that needed to be updated; getting to a point where the code even compiles again has been a challenge. :)

@gregiOS

gregiOS commented Nov 4, 2024

@AnthonyMDev any updates on this one?

@AnthonyMDev
Contributor Author

Unfortunately still marching forward with the work, but nothing that is functional yet. I've been making a lot of design iterations to accommodate all the feedback. Will continue to update here as I have news to share.

@AnthonyMDev
Contributor Author

Looking for feedback on another possible change here. We are strongly considering making Apollo iOS 2.0 target a minimum version of iOS 15. Would that be a problem for many users?

@bryan1anderson

bryan1anderson commented Dec 3, 2024

That would not be a problem for us. I've successfully been able to convince the last 3 companies I worked for that iOS development is done by supporting major minus 1 for new features, major minus 2 for minimum support. Others may not have had so much luck

@wiedem

wiedem commented Dec 3, 2024

This wouldn't be a problem for the apps I manage. None of them have an iOS 15 target. In fact, most of them are already on iOS 16 as a min deployment target.
If you have a SwiftUI app, you have little choice but to move to newer versions anyway.

If a key feature of the 2.0 upgrade is Swift 6 and Swift concurrency support, I don't think that should be a problem in general. I don't know of any iOS developers who rely on these technologies and still support older versions of iOS.

@woodymelling

This will not be an issue for us either.

@calvincestari
Member

If a key feature of the 2.0 upgrade is Swift 6 and Swift concurrency support, I don't think that should be a problem in general. I don't know of any iOS developers who rely on these technologies and still support older versions of iOS.

Our original deployment target for this work was iOS 13, which a lot of the concurrency API was back-ported to, but working through parts of the design yesterday showed that some of the API we want to use is only available on iOS 15. That's the reason for the new minimum deployment target. I share the same sentiment, though, that the vast majority of apps that were updated to use Swift Concurrency went directly to 15 or higher.

@jimisaacs
Contributor

Not a problem

@chasemac

My company's min target is 17, soon to be 18, because we have to keep up with the latest security updates from Apple. Please deploy this soon!

@productdevbook

Looking for feedback on another possible change here. We are strongly considering making Apollo iOS 2.0 target a minimum version of iOS 15. Would that be a problem for many users?

iOS 16, I think, would be ideal.
