Add support for configurable dequeue size #3031

Open

wants to merge 1 commit into main
Conversation

@schueffi schueffi commented Jul 2, 2024

When sending messages to remote MTAs, messages are dequeued from the local queue in batches. Because the batch key is the remote MX server, all messages in a batch are delivered to that remote MTA in a single SMTP session. While this is good for performance (the same SMTP session is reused for many mails), many real-world MTAs do not accept too many mails at once in a single session.

Example error messages look like "421 too many messages in this connection".

Therefore, we make the limit configurable (with a default value of 100 to stay backwards compatible). In our experience with the last 5 million emails sent, a batch size of 10 almost always works, and 50 seems to be the upper "real world" limit before hitting such rate limits on the remote MTAs.
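
To illustrate the intent, here is a minimal, hypothetical Go sketch of a per-MX dequeue loop with a configurable batch size. All names (`QueueConfig`, `MaxDequeueSize`, `dequeueBatch`, `Message`) are illustrative assumptions, not this project's actual API; the point is only the capped batch per SMTP session with a backwards-compatible default of 100.

```go
package main

import "fmt"

// Message is a queued mail item; RemoteMX is the batch key, so all messages
// sharing an MX are delivered in one SMTP session.
type Message struct {
	ID       int
	RemoteMX string
}

// QueueConfig carries the new, configurable dequeue limit.
type QueueConfig struct {
	// MaxDequeueSize limits how many messages are sent per SMTP session.
	// 0 means "use the default of 100" for backwards compatibility.
	MaxDequeueSize int
}

func (c QueueConfig) maxDequeueSize() int {
	if c.MaxDequeueSize <= 0 {
		return 100 // backwards-compatible default
	}
	return c.MaxDequeueSize
}

// dequeueBatch returns at most maxDequeueSize messages destined for the given MX,
// stopping before the remote MTA would answer with a "421 too many messages" error.
func dequeueBatch(queue []Message, mx string, cfg QueueConfig) []Message {
	limit := cfg.maxDequeueSize()
	batch := make([]Message, 0, limit)
	for _, m := range queue {
		if m.RemoteMX != mx {
			continue
		}
		batch = append(batch, m)
		if len(batch) == limit {
			break
		}
	}
	return batch
}

func main() {
	queue := []Message{{1, "mx.example.org"}, {2, "mx.example.org"}, {3, "mx.other.net"}}
	cfg := QueueConfig{MaxDequeueSize: 1}
	fmt.Println(dequeueBatch(queue, "mx.example.org", cfg)) // [{1 mx.example.org}]
}
```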
