
Feature request: Allow manually verifying expectations without requiring mutability #598

Open
Plebshot opened this issue Aug 7, 2024 · 9 comments

@Plebshot commented Aug 7, 2024

Problem

I'm having an issue with the fact that mock assertions are exclusively verified on drop. In particular, if the mock is behind an Arc that is used in a detached async task, a test won't fail as the panic doesn't propagate to the test thread. This is somewhat related to #461 and #396.

While this can be circumvented in some use cases by joining all tasks manually or by using interior mutability for the mocked object, I haven't found a satisfying solution for my use case.

Proposal

Add a method like verify(&self) that allows checking all expectations without requiring mutability.

This method could be called on mocks behind an Arc, without having to worry about the logistics of where the last reference ends up dropping the mock.
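
Hypothetically, the end of a test could then look like this (note that verify is only the proposed name; no such method exists in Mockall today):

```rust
// Sketch of the proposed API; MockMetricsSystem::verify(&self) is hypothetical.
let shared_metrics = Arc::new(mock_metrics);
// ... drive the system under test through the Arc clones ...
shared_metrics.verify(); // would panic if any expectation is not yet satisfied
```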

Context

  • I'm working with an Actix HTTP server.
  • I have written a middleware that records metrics about each request.
  • I use a mockable trait to test the middleware and other components interacting with the metrics system.

Conceptual code:

```rust
use actix_web::{http::StatusCode, test::TestRequest, web, App, HttpResponse};
use std::sync::Arc;
use tokio::sync::mpsc;

struct RequestMetricsRecord {
    status: StatusCode,
}

#[mockall::automock]
trait MetricsSystem: Send + Sync + 'static {
    fn record(&self, record: RequestMetricsRecord);
    // ...
}

#[derive(Clone)]
struct MetricsMiddleware {
    recorder: mpsc::Sender<RequestMetricsRecord>,
}

impl MetricsMiddleware {
    /// Creates a new factory for the actual middleware.
    ///
    /// This spawns a background task to collect the metrics for better response times.
    fn new(metrics: Arc<dyn MetricsSystem>) -> Self {
        let (recorder, mut receiver) = mpsc::channel(100);
        // Note: Task is detached because there is no logical point to join it.
        // It exits by itself once the last sender for the channel is dropped by the Actix server.
        tokio::spawn(async move {
            while let Some(record) = receiver.recv().await {
                metrics.record(record);
            }
        });
        Self { recorder }
    }
}

// Implement the actual middleware logic that records incoming HTTP requests.
// Omitted for brevity - simply sends metrics over the channel to the above background task.
// ...

#[actix_web::test]
async fn test_middleware_records_metrics() {
    // Setup a mock that expects one call to record with a 200 status code.
    let mut mock_metrics = MockMetricsSystem::new();
    mock_metrics
        .expect_record()
        .withf(|record| record.status == StatusCode::OK)
        .once()
        .return_const(());
    let shared_metrics = Arc::new(mock_metrics);

    // Configure a server that should trigger the middleware to call the mock.
    let app = actix_web::test::init_service(
        App::new()
            .wrap(MetricsMiddleware::new(shared_metrics.clone()))
            .route("/ok", web::get().to(HttpResponse::Ok)),
    )
    .await;

    // Make a request to trigger the middleware
    actix_web::test::call_service(&app, TestRequest::get().uri("/ok").to_request()).await;
}
```

Note that even manually adding drop(app), to ensure the Actix server shuts down before the mock is dropped, doesn't work here, because the detached background task outlives the drop.

Potential Workarounds

  • Add a sleep to ensure detached tasks are cleaned up after dropping the server.
    Unreliable and undesirable.
  • Don't use a detached task in the first place.
    Then recording metrics would add to the response time of each HTTP request, which it doesn't have to.
  • Put MetricsSystem behind a Mutex to allow using checkpoint() on the mock.
    Not an acceptable performance overhead for something that's only needed in tests.
  • Write a SyncMockMetricsSystem that wraps MockMetricsSystem in a Mutex and manually reimplements each method of the trait (see the sketch after this list).
    Introduces boilerplate and might cause issues due to having a mutex only in tests.
  • Join all detached tasks manually.
    Doesn't seem (easily) possible within the context of Actix, in particular because Rust has no async drop.
  • Use #[test] and create/drop the async runtime manually?
    Not entirely sure if that really solves the issue, and it would again introduce boilerplate for each test.
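
For reference, a minimal sketch of that Mutex-wrapper workaround, reusing names from the example above (the wrapper type itself is hypothetical):

```rust
use std::sync::Mutex;

// Hypothetical test-only wrapper: the Mutex exists solely to obtain the
// &mut access that checkpoint() requires.
struct SyncMockMetricsSystem(Mutex<MockMetricsSystem>);

impl MetricsSystem for SyncMockMetricsSystem {
    fn record(&self, record: RequestMetricsRecord) {
        self.0.lock().unwrap().record(record);
    }
    // ... every other trait method has to be forwarded the same way.
}

// In the test: wrapper.0.lock().unwrap().checkpoint();
```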

If you have any other suggestions to solve this I'm happy to hear them.

Thanks for the great work on this library!

@asomers (Owner) commented Aug 7, 2024

One way or another, your test needs to ensure that the background task doesn't panic. Otherwise those background tasks are big test escapes. It's not just Mockall that might cause a panic there.

@Plebshot (Author) commented Aug 7, 2024

Thanks for the quick reply!

While I agree with that in principle, I believe this is already ensured here:

  1. Any implementation of MetricsSystem would already test that its record method does not panic. So unless the tokio receiver suddenly panics, it is in fact only Mockall that can cause a panic here. Any more complex logic for the background task would also be extracted into its own function and tested separately.
  2. In the worst case, where the background task does end up panicking despite that, the failure is detected and explicitly handled by anyone trying to use the sender, as sending without any open receivers yields an error.

I'm aware this design isn't perfect with regard to the first point, but I'm also of the opinion that simple solutions often result in fewer errors. So it's unfortunate that the current Mockall API somewhat restricts how you can structure your code here.

Are there any specific reasons why checking the expectations has to be mutable, other than also wanting to reset them in the checkpoint method? Sadly I can't really wrap my head around all the macros to make a contribution here.

@asomers (Owner) commented Aug 7, 2024

Yes, the checkpoint method requires mutable access so it can clear current expectations. BTW, if you really really really don't want to assert that your background task didn't panic, other options are:

  • Use Arc::try_unwrap() on the mock object and then drop it. You'll have to do that in a polling loop to ensure that the background task has actually terminated (sketched below).
  • Create a futures::channel::oneshot. Move the sender into the mock object like mock.expect_record().returning(|| sender.send(()).unwrap()) and then await the receiver in the test function.
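
A rough sketch of the first option, reusing names from the example above (the poll interval is an arbitrary choice):

```rust
// Poll until the background task has dropped its clone of the Arc, then
// drop the mock on the test task so its drop-time assertions run here.
drop(app); // close the channel so the background task exits
let mut shared = shared_metrics;
let mock = loop {
    match Arc::try_unwrap(shared) {
        Ok(mock) => break mock,
        Err(still_shared) => {
            shared = still_shared;
            tokio::time::sleep(std::time::Duration::from_millis(10)).await;
        }
    }
};
drop(mock); // panics here if any expectation is unsatisfied
```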

@Plebshot (Author) commented Aug 7, 2024

The oneshot channel in the mock return is a great tip to prevent the test from exiting before the background task has had the chance to call record (note: it has to be return_once, not returning, because sending consumes the oneshot sender, making the closure FnOnce rather than FnMut). I thought about unwrapping the Arc too. It works, but it also introduces some unsatisfying boilerplate once more.
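
For completeness, a sketch of that corrected setup, reusing names from the example above:

```rust
// The oneshot sender is consumed by send(), so the closure is FnOnce and
// must be registered with return_once rather than returning.
let (sender, receiver) = futures::channel::oneshot::channel::<()>();
mock_metrics
    .expect_record()
    .withf(|record| record.status == StatusCode::OK)
    .once()
    .return_once(move |_record| sender.send(()).unwrap());

// ... make the request as before, then wait for the background task:
receiver.await.unwrap();
```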

> Yes, the checkpoint method requires mutable access so it can clear current expectations.

The point of something like a verify, assert or satisfied function on a mock would be that it doesn't clear the current expectations. All it should do is check if all expectations are satisfied at the current point in time, and panic (or otherwise indicate) if they're not. I'd imagine this is pretty much 1:1 the logic that is already present in the drop for an expectation, or are there any other complications here?

@asomers (Owner) commented Aug 7, 2024

> The point of something like a verify, assert or satisfied function on a mock would be that it doesn't clear the current expectations. All it should do is check if all expectations are satisfied at the current point in time, and panic (or otherwise indicate) if they're not. I'd imagine this is pretty much 1:1 the logic that is already present in the drop for an expectation, or are there any other complications here?

It's not quite the same as the logic that's already present. The existing checkpoint logic simply drops the Expectation objects, so you would need to do something different. Would you like to submit a patch?

FTR, the reason that the checkpoint method clears existing expectations, rather than just checking that they are satisfied, is the precedent set by the Mockers crate.

@Plebshot (Author) commented Aug 7, 2024

I haven't worked with macros in Rust yet, so I'm having a hard time understanding the library well enough to figure out where to add a function like this, or what it would have to check in particular.

What I meant initially was moving this part of the drop implementation into its own fn assert(&self) (or whatever you want to call it) and adding an equally named method on the generated mocks that calls it for all expectations, as sketched below.
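
Schematically, the idea would look something like this; note this is not Mockall's actual generated code, just the shape of the refactoring:

```rust
// Simplified stand-in for Mockall's generated Expectation type.
struct Expectation {
    min_calls: usize,
    calls: usize,
}

impl Expectation {
    /// The factored-out check: verifies satisfaction through &self,
    /// without consuming or clearing the expectation.
    fn assert_satisfied(&self) {
        assert!(
            self.calls >= self.min_calls,
            "expectation called {} time(s), but at least {} call(s) were expected",
            self.calls,
            self.min_calls
        );
    }
}

impl Drop for Expectation {
    fn drop(&mut self) {
        // Current behavior: the check only ever runs on drop.
        if !std::thread::panicking() {
            self.assert_satisfied();
        }
    }
}
```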

@asomers (Owner) commented Aug 7, 2024

Yes, that would have to be a part of your patch.

@Plebshot (Author) commented Aug 7, 2024

Is there any chance you could get around to adding this yourself, or help me with the initial scaffolding for the macro-related code? Otherwise, I'm not sure when I'll find enough time to solve this on my own.

Would be greatly appreciated.

@asomers (Owner) commented Aug 11, 2024

Probably not very soon, I'm afraid. My open-source time is limited right now, and there are a lot of different projects placing demands on it.
