Consumer channel isn't closed in the event of unexpected disconnection #18
Hi. I had the same problem with https://github.com/streadway/amqp. I had to look at the channel shutdown function: it blocks on sending the notification to the registered close listeners. Instead of that blocking send, I have this:
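A rough sketch of the change (the field names are recalled from the streadway/amqp source and may not match exactly):

```go
// Inside the library's channel shutdown path. The original code does a
// blocking send to every registered close listener, roughly:
//
//	for _, c := range ch.closes {
//		c <- e
//	}
//
// The workaround turns it into a non-blocking send, so shutdown can never
// hang when a listener is not being read:
for _, c := range ch.closes {
	select {
	case c <- e:
	default: // listener not ready: drop the notification instead of blocking
	}
}
```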
This is the line that blocks and prevents the consumer channel from being closed.
I tried your workaround on my repro project above and it works, thank you very much. Do you think a quick fix like this, in the lines you mention, would be a good solution to this problem?
Or should the blocking remain as intended functionality (meaning we are expected to listen on both the connection and channel close channels in the event of an abnormal disconnection), but be documented better?
This would drop a notification in the lib. I have changed my reconnect code, which is based on the example (here), to use buffered channels, but I need to check whether it improves the behaviour. Based on your code, the change would be roughly the following (the variable names are assumed from the reconnect example):
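```go
// Unbuffered notification channels: the library's shutdown blocks on the
// send until someone reads from them.
notifyConnClose := conn.NotifyClose(make(chan *amqp.Error))
notifyChanClose := channel.NotifyClose(make(chan *amqp.Error))
```

to

```go
// Buffered with capacity 1: a close notification can be delivered even if
// nothing is currently selecting on these channels.
notifyConnClose := conn.NotifyClose(make(chan *amqp.Error, 1))
notifyChanClose := channel.NotifyClose(make(chan *amqp.Error, 1))
```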
This means the above code isn't blocking anymore, and I hope the mutexes don't deadlock. I still had problems when the channel or connection was closed while doing a call - in my case it was on
I can also reproduce this issue.
This might also be a useful approach: streadway/amqp#519
Hi, thank you very much for investigating the issue both here and in #32. I can reproduce the issue with a local integration test.
It happens because the notifyChanClose channel in the select statement of the example is no longer consumed after notifyConnClose has been consumed, so the lock acquired by the shutdown method is never released, which creates the deadlock. The options are:
1. Keep the current behaviour and document that the notification channels for both connections and channels must always be consumed, or registered as buffered channels.
2. Change the library so that delivering the close notification cannot block the shutdown (for example with a non-blocking send or a timeout).
This library has been widely used for some time and it is used as a wrapper by other libraries; modifying this behaviour could cause issues elsewhere, so we are more inclined to proceed with the first option. In that case we will state in the documentation that, during a shutdown caused by an abnormal drop of the connection, the notification channels for both connections and channels need to be consumed, or a buffered channel needs to be used.
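For reference, a minimal sketch of what "consuming both notification channels" can look like (this is an illustration, not the library's shipped example; the function and variable names are assumptions):

```go
import amqp "github.com/rabbitmq/amqp091-go"

// watchClose waits until either the connection or the channel reports a
// close event and returns the corresponding error. Both notification
// channels are buffered (capacity 1), so the library's shutdown never
// blocks on the send even if this function has already returned.
func watchClose(conn *amqp.Connection, ch *amqp.Channel) *amqp.Error {
	notifyConnClose := conn.NotifyClose(make(chan *amqp.Error, 1))
	notifyChanClose := ch.NotifyClose(make(chan *amqp.Error, 1))

	select {
	case err := <-notifyConnClose:
		// The whole connection dropped: everything must be re-created.
		return err
	case err := <-notifyChanClose:
		// Only the channel died: re-opening the channel may be enough.
		return err
	}
}
```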
@DanielePalaia Thanks for the response. Can you elaborate on the bold part here a bit?
We understood the problem, but we'd like to avoid putting timeouts on every single channel. Given a simple Go program:

```go
package main

import "fmt"

func handleInt(ch chan int) {
	fmt.Printf("Handle %d", <-ch)
}

func main() {
	fmt.Println("Starting...")
	ch := make(chan int)
	go handleInt(ch)
	ch <- 23
}
```

if we comment out the receive:

```go
func handleInt(ch chan int) {
	// fmt.Printf("Handle %d", <-ch)
}
```

we have a deadlock, because the send `ch <- 23` blocks forever once nothing receives from the unbuffered channel; but this is how Go works :)! Speaking with the team, we'd tend to add some documentation. @andygrunwald thank you for your contributions.
That is a fair design choice. Thanks for the context on this.
In the documentation you are going to provide, please include an explanation, and maybe an example, of the difference in behavior between a graceful close of a connection and an unexpected one. When I first encountered the issue, that part was what threw me off the most.
Having different behavior for different types of disconnection was counter-intuitive to me.
Hi, we updated the documentation explaining this use-case scenario (https://pkg.go.dev/github.com/rabbitmq/amqp091-go), and we also added comments to the NotifyClose functions of the Connection and Channel structs. I think we can close this one for now.
@DanielePalaia Are these the commits?
@andygrunwald yes, those are the ones!
The example function Example_consume() shows how to write a consumer with reconnection support. This commit also changes the notification channels to be buffered with a capacity of 1. This is important to avoid potential deadlocks during an abnormal disconnection. See #32 and #18 for more details. Both example functions now have a context with a timeout, so that they don't run forever. A QoS of 1 is set to slow down the consumption; this is helpful for testing the reconnection capabilities, by giving enough time to close the connection via the Management UI (or any other means). Signed-off-by: Aitor Perez Cedres <[email protected]>
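For illustration, the pieces described in that commit look roughly like this in a consumer (a sketch under assumed names, not the committed example itself; `conn` and `ch` are an already-established connection and channel):

```go
// Buffered notification channel (capacity 1) avoids the deadlock described
// in #18 and #32 when the connection drops abnormally.
notifyConnClose := conn.NotifyClose(make(chan *amqp.Error, 1))

// A QoS of 1 slows down consumption, leaving time to close the connection
// from the Management UI while testing reconnection.
if err := ch.Qos(1, 0, false); err != nil {
	log.Fatal(err)
}

// A context with a timeout so the example does not run forever.
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
```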
This is a simplified version of my code.
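It looks roughly like this (a sketch reconstructed from the description that follows; the URL, queue name, consume flags, and logging are placeholders):

```go
package main

import (
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}

	// Notification (go)channels for both the connection and the channel.
	notifyConnClose := conn.NotifyClose(make(chan *amqp.Error))
	notifyChanClose := ch.NotifyClose(make(chan *amqp.Error))
	go func() {
		// This select runs only once, so after one notification is consumed
		// the other channel is never read again.
		select {
		case err := <-notifyConnClose:
			log.Printf("connection closed: %v", err)
		case err := <-notifyChanClose:
			log.Printf("channel closed: %v", err)
		}
	}()

	q, err := ch.QueueDeclare("my-queue", false, false, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}

	deliveryChan, err := ch.Consume(q.Name, "", true, false, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}

	// After an unexpected disconnection this loop blocks forever, because
	// deliveryChan is never closed in that case.
	for d := range deliveryChan {
		log.Printf("received: %s", d.Body)
	}
}
```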
In this code I am getting a connection and a channel, and registering a notification (go)channel for both the connection and the channel, to be notified when they are closed.
Then I declare a queue and start consuming messages from it by ranging on the deliveryChan (<-chan amqp.Delivery) returned by the consume function.
The problem happens when an unexpected disconnection occurs (for example, I turn off my internet). In that case, even though the notifyConnClose channel gets a message, the deliveryChan is not closed, and the range loop blocks forever.
In the event of a graceful disconnection via connection.Close(), the notifyConnClose channel gets a message and the deliveryChan is closed.
In the event of an unexpected disconnection, given that I can't close the <-chan amqp.Delivery from my code, how am I supposed to proceed and get the loop to end?