
[th2-2552] backpressure: added check for queue size limit #184

Open · wants to merge 12 commits into base: master

Conversation

@andrew-drobynin (Contributor) commented Feb 7, 2022

New version of #137

README.md — resolved review threads
Comment on lines 134 to 136
QueueConfiguration::getQueue,
QueueConfiguration::getVirtualQueueLimit,
Math::min // TODO is it valid situation if there are several configurations for one queue?


Redundant: either the whole common module or each pin (routing key) has a publish limit.

Comment on lines 451 to 473
  private Map<String, QueuesWithVirtualPublishLimit> groupQueuesByRoutingKey() {
      List<BindingInfo> bindings = new ArrayList<>();
-     knownExchanges.forEach(exchange -> bindings.addAll(
+     knownExchangesToRoutingKeys.forEach((exchange, routingKeys) -> bindings.addAll(
          client.getBindingsBySource(rabbitMQConfiguration.getVHost(), exchange).stream()
-             .filter(it -> it.getDestinationType() == QUEUE && knownRoutingKeys.contains(it.getRoutingKey()))
+             .filter(it -> it.getDestinationType() == QUEUE && routingKeys.contains(it.getRoutingKey()))
              .collect(Collectors.toList())
      ));
      Map<String, QueueInfo> queueNameToInfo = client.getQueues().stream()
              .collect(toMap(QueueInfo::getName, Function.identity()));
-     Map<String, List<QueueInfoWithVirtualLimit>> routingKeyToQueues = new HashMap<>();
-     bindings.forEach(bindingInfo -> routingKeyToQueues
-             .computeIfAbsent(bindingInfo.getRoutingKey(), s -> new ArrayList<>())
-             .add(new QueueInfoWithVirtualLimit(
-                     queueNameToInfo.get(bindingInfo.getDestination()),
-                     queueNameToVirtualQueueLimit.get(bindingInfo.getDestination())
-             ))
-     );
-     return routingKeyToQueues;
+     Map<String, QueuesWithVirtualPublishLimit> routingKeyToQueuesWithLimit = new HashMap<>();
+     bindings.stream()
+             .collect(groupingBy(BindingInfo::getRoutingKey))
+             .forEach((routingKey, bindingsForRoutingKey) ->
+                     routingKeyToQueuesWithLimit.put(
+                             routingKey,
+                             new QueuesWithVirtualPublishLimit(
+                                     bindingsForRoutingKey.stream().map(bindingInfo -> queueNameToInfo.get(bindingInfo.getDestination())).collect(toList()),
+                                     connectionManagerConfiguration.getVirtualPublishLimit()
+                             )
+                     )
+             );
+     return routingKeyToQueuesWithLimit;
  }


private Map<String, QueuesWithVirtualPublishLimit> groupQueuesByRoutingKey() {
    Map<String, QueueInfo> queueNameToInfo = client.getQueues().stream()
            .collect(toMap(QueueInfo::getName, Function.identity()));

    Map<String, List<BindingInfo>> bindings = knownExchangesToRoutingKeys.entrySet().stream()
            .flatMap(entry -> {
                String exchange = entry.getKey();
                Set<String> routingKeys = entry.getValue();
                return client.getBindingsBySource(rabbitMQConfiguration.getVHost(), exchange).stream()
                        .filter(it -> it.getDestinationType() == QUEUE && routingKeys.contains(it.getRoutingKey()));
            })
            .collect(groupingBy(BindingInfo::getRoutingKey));

    Map<String, QueuesWithVirtualPublishLimit> routingKeyToQueuesWithLimit = new HashMap<>();
    bindings.forEach((routingKey, bindingsForRoutingKey) ->
            routingKeyToQueuesWithLimit.put(
                    routingKey,
                    new QueuesWithVirtualPublishLimit(
                            bindingsForRoutingKey.stream().map(bindingInfo -> queueNameToInfo.get(bindingInfo.getDestination())).collect(toList()),
                            connectionManagerConfiguration.getVirtualPublishLimit()
                    )
            )
    );
    return routingKeyToQueuesWithLimit;
}


I think we can combine lockSendingIfSizeLimitExceeded and groupQueuesByRoutingKey into a continuous stream or Kotlin sequence.
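A minimal sketch of what the suggested combination could look like in Kotlin, using hypothetical simplified types (Binding, queueSizes, and virtualPublishLimit stand in for the real BindingInfo/QueueInfo and the connection-manager configuration, which are not shown here). Note that even written as a sequence, the grouping step is a terminal operation, so every bound queue is still inspected before any routing key can be declared over the limit:

```kotlin
// Hypothetical simplified binding: only the fields this sketch needs.
data class Binding(val routingKey: String, val destination: String)

// Returns the routing keys whose bound queues collectively exceed the
// virtual publish limit. Grouping must complete before filtering, because
// a key's total size is the sum over ALL queues bound to it.
fun exceededRoutingKeys(
    bindings: List<Binding>,
    queueSizes: Map<String, Int>,
    virtualPublishLimit: Int,
): Set<String> =
    bindings.asSequence()
        .groupBy(Binding::routingKey)           // routing key -> its bindings
        .filterValues { group ->
            group.sumOf { queueSizes.getValue(it.destination) } > virtualPublishLimit
        }
        .keys

fun main() {
    val bindings = listOf(
        Binding("parsed-key", "q1"),
        Binding("parsed-key", "q2"),
        Binding("raw-key", "q3"),
    )
    val sizes = mapOf("q1" to 60, "q2" to 50, "q3" to 40)
    // q1 + q2 = 110 > 100, so only 'parsed-key' is over the limit
    println(exceededRoutingKeys(bindings, sizes, 100)) // [parsed-key]
}
```

The sequence form mainly saves an intermediate map; it does not change the fact that all queues for a routing key must be checked before locking can be decided.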

@andrew-drobynin (Contributor, Author):


I deliberately avoided that: I wanted to keep all the com.rabbitmq.http.client.Client logic in one place and have lockSendingIfSizeLimitExceeded() do only the locking.
But I'll try it if you want.

@andrew-drobynin (Contributor, Author):


Please see commit 18469f3.

@andrew-drobynin (Contributor, Author):


I migrated ConnectionManager to Kotlin. Please see commit fbae68d.
But I don't think we can do it in one stream, because we have to collect all queues for each routing key first; nothing can be decided until every queue has been checked.

.associateBy({ it.destination }, { queueNameToSize.getValue(it.destination) })
val limit = connectionManagerConfiguration.virtualPublishLimit
val holder = getChannelFor(PinId.forRoutingKey(routingKey))
LOGGER.trace { "Size limit lock for routing key '$routingKey': ${holder.sizeLimitLock}" }
@andrew-drobynin (Contributor, Author):


I think it may be helpful; we'll have something like:

Size limit lock for routing key 'parsed-key': java.util.concurrent.locks.ReentrantLock@5744e6e4[Unlocked]
Size limit lock for routing key 'parsed-key': java.util.concurrent.locks.ReentrantLock@5744e6e4[Locked by thread pool-3-thread-1]
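For context, the quoted trace lines rely on ReentrantLock's default toString(), which already reports whether the lock is held and by which thread. A standalone sketch (the routing-key name is illustrative; the real code logs via LOGGER.trace rather than println):

```kotlin
import java.util.concurrent.locks.ReentrantLock

fun main() {
    val sizeLimitLock = ReentrantLock()

    // Default toString() ends with "[Unlocked]" while no thread holds the lock.
    println("Size limit lock for routing key 'parsed-key': $sizeLimitLock")

    sizeLimitLock.lock()
    try {
        // Once held, toString() reports the owner, e.g. "[Locked by thread main]".
        println("Size limit lock for routing key 'parsed-key': $sizeLimitLock")
    } finally {
        sizeLimitLock.unlock()
    }
}
```

This makes the proposed trace line useful for debugging backpressure without tracking any extra state alongside the lock.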
