Add mutex guard to Channel methods #242
All of the RabbitMQ client libraries specifically do not allow sharing channels across threads and assume that applications that use the libraries have a connection per-thread, with associated channels. I doubt we will add this feature here, but I'd like @Zerpet's feedback. I'm not sure why certain methods do use a mutex. I'll investigate when I have time.
Thanks for the fast feedback! I can see the point of letting users manually handle the channels.

**Regarding docs**

I think it should be clarified in the docs that `Channel` is not safe for concurrent use. The docs could also suggest creating one channel per goroutine; at the moment, thread-safety is barely mentioned at all.

**Connection pool**

Would a concurrent-safe connection pool package (in the same fashion as pgxpool) fit into this repo? It seems that having multiple connections may be less useful with RabbitMQ than with PostgreSQL, because calls seem to be much faster, but I don't have huge experience with Rabbit. If that's true, then having a mutex in the library would be more efficient.
We could do a better job at documenting the thread-safety of channels. The document that Luke mentions is this one: https://rabbitmq.com/channels.html

I'm not against this change. My concern is whether we may lose performance by making every operation on the channel synchronous. If we can prove with a benchmark that the performance penalty is reasonable, I'll be happy to see this change in the library.

Regarding a connection pool, it would definitely benefit this repo. However, a connection pool is a concept of "smart clients", and this library has intentionally been kept as simple bindings to the AMQP protocol plus RabbitMQ extensions. You may notice that this library does not have auto-reconnection, whilst the Java and .NET RabbitMQ libraries do. This is an inherited non-goal. What I'm trying to say is that we can consider a connection pool type for this library, as long as it's not too clever 🙂

Something you may also consider is whether the existing CloudAMQP AMQProxy already covers what you had in mind for the connection pool.
Thanks for the fast feedback! I'm glad the connection pool idea can be considered. Thanks also for suggesting the AMQProxy! However, it feels a bit heavy to set up for a simple use case.

Regarding the benchmark, is there a way to run the tests with a mocked RabbitMQ server in the CI? I don't see one in the repo.

However, I've set up this simple benchmark with a real RabbitMQ:

```shell
docker run -d --hostname my-rabbit --name some-rabbit \
  -e RABBITMQ_DEFAULT_USER=user -e RABBITMQ_DEFAULT_PASS=password \
  rabbitmq:3-management
```

```go
package main

import (
	"fmt"
	"testing"

	amqp "github.com/rabbitmq/amqp091-go"
	"github.com/stretchr/testify/require"
)

func BenchmarkQueueDeclare(b *testing.B) {
	config := amqp.Config{
		Vhost:      "/",
		Properties: amqp.NewConnectionProperties(),
	}
	conn, err := amqp.DialConfig("amqp://user:[email protected]", config)
	require.NoError(b, err)
	defer conn.Close()

	channel, err := conn.Channel()
	require.NoError(b, err)
	defer channel.Close()

	for i := 0; i < b.N; i++ {
		name := fmt.Sprintf("queue-%d", i)
		_, err := channel.QueueDeclare(
			name,  // name of the queue
			false, // durable
			false, // delete when unused
			false, // exclusive
			false, // noWait
			nil,   // arguments
		)
		require.NoError(b, err)

		_, err = channel.QueueDelete(
			name,
			false, // ifUnused
			false, // ifEmpty
			false, // noWait
		)
		require.NoError(b, err)
	}
}
```

Here are the results:

Without mutex lock in `QueueDeclare`:

With mutex lock in `QueueDeclare`:
It can be reduced to measuring the performance of a bare `sync.Mutex`:

```go
func BenchmarkMutex(b *testing.B) {
	var m sync.Mutex
	for i := 0; i < b.N; i++ {
		m.Lock()
		m.Unlock()
	}
}
```

which gives:

So, the mutex lock/unlock seems to represent 0.00084% of the ns/op value, which seems really light IMO, but it's up to you. Thanks!
Any news on this?
@gnuletik there is no need to bump this issue. This issue is not urgent, and, as we've said, this library works as documented. If you'd like to submit a PR with tests and benchmarks, it would be appreciated, but there is no guarantee of when we can review it.
**Is your feature request related to a problem? Please describe.**

I used `Channel.QueueDeclare` concurrently and got an error.

**Describe the solution you'd like**
I'd like to add a mutex guard to some methods of the `Channel` struct:

- `Qos`
- `Cancel`
- `QueueDeclare`
- `QueueDeclarePassive`
- `QueueInspect`
- `QueueBind`
- `QueueUnbind`
- `QueuePurge`
- `QueueDelete`
- `ExchangeDeclare`
- `ExchangeDeclarePassive`
- `ExchangeDelete`
- `ExchangeBind`
- `ExchangeUnbind`
This is already implemented on the following (publish/ack-related) methods:

- `PublishWithDeferredConfirmWithContext`
- `Ack`
- `Nack`
- `Reject`

So I think it would make sense to have it on the other methods too.
**Describe alternatives you've considered**

Implementing the mutex in the business code is doable, but it makes less sense considering that the library already does this for some methods.
**Additional context**

No response