
SAMZA-2688: [Elasticity] introduce configs and sub-partition concept aka SystemStreamPartitionKeyHash #1531

Status: Open. Wants to merge 1 commit into base: master.

Conversation

lakshmi-manasa-g (Contributor):

Feature: elasticity for Samza jobs. Throughput via parallelism is tied to the number of tasks, which equals the partition count of the input streams. If a job is lagging and is already at the maximum container count (= number of tasks = number of input partitions), its only remaining option is to repartition the input. This PR is part of a feature that aims to increase throughput by scaling the task count beyond the input partition count. It introduces the elasticity config and a basic supporting class.

Changes: introduce the config "task.elasticity.factor", which defaults to 1. If the factor is X > 1, each task is split into X elastic tasks. Also introduce SystemStreamPartitionKeyHash, which represents the portion of an SSP that an elastic task consumes.

Tests: existing tests pass.

API changes: New config "task.elasticity.factor" which if > 1 enables this elasticity feature.

upgrade/usage instructions: add above config with value >1 to enable feature.
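The key-to-elastic-task mapping described above can be sketched as follows. This is an illustrative standalone snippet, not the PR's actual code: the method name `keyHashFor` and the use of `Math.floorMod` to keep the result non-negative are assumptions layered on the described `key % elasticity factor` scheme.

```java
// Hypothetical sketch: map an envelope to an elastic task's keyHash,
// falling back from key to offset to a default hash, as the PR describes.
public class ElasticityRoutingSketch {
  static int keyHashFor(Object key, String offset, int fallbackHash, int elasticityFactor) {
    int hash = key != null ? key.hashCode()
        : offset != null ? offset.hashCode()
        : fallbackHash;
    // Math.floorMod keeps the result in [0, elasticityFactor) even for negative hash codes.
    return Math.floorMod(hash, elasticityFactor);
  }

  public static void main(String[] args) {
    int factor = 2;  // task.elasticity.factor = 2: each task splits into 2 elastic tasks
    System.out.println(keyHashFor("user-42", null, 0, factor));
    System.out.println(keyHashFor(null, "offset-7", 0, factor));
  }
}
```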

@@ -71,6 +74,7 @@ public IncomingMessageEnvelope(SystemStreamPartition systemStreamPartition, Stri
this.message = message;
this.size = size;
this.arrivalTime = Instant.now().toEpochMilli();
this.hashCodeForKeyHashComputation = key != null ? key.hashCode() : offset != null ? offset.hashCode() : hashCode();
Contributor:
Is there a benefit to caching and storing it, rather than computing and exposing it via a function?

Contributor Author:

I did consider that originally, but if getSystemStreamPartitionKeyHash is called multiple times, caching the value is worthwhile.

* Aggregate object representing a portion of {@link SystemStreamPartition} consisting of envelopes within the
* SystemStreamPartition that have envelope.key % job's elasticity factor = keyHash of this object.
*/
public class SystemStreamPartitionKeyHash extends SystemStreamPartition {
Contributor:

Would it be possible to add the changes (i.e., keyHash) within the SystemStreamPartition class itself? Logically this represents a key range, which is pretty much a "partition" of the data, albeit different from the input Kafka partition.

Contributor Author:

Theoretically this should be possible. But the problem I foresee is with the serde of the job model: backward compatibility when reading an old job model containing the old SSP serde might be impacted by a new definition and serde of SSP. Let me test this and get back on this thread.

@@ -136,6 +136,11 @@
"task.transactional.state.retain.existing.state";
private static final boolean DEFAULT_TRANSACTIONAL_STATE_RETAIN_EXISTING_STATE = true;

// Job Elasticity related configs
// Takes effect only when task.elasticity.factor is > 1; otherwise there is no elasticity
private static final String TASK_ELASTICITY_FACTOR = "task.elasticity.factor";
Contributor:

Is this more like a task-to-partition mapping factor?

Contributor Author:

I am looking at this more as a split multiple: if the factor is 2, each original task is split into 2 virtual/elastic tasks. It can also be viewed as a task-to-partition/SSP factor, where factor = 2 means each SSP is read by 2 virtual/elastic tasks; both views lend the same semantics.
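The "each SSP is read by 2 elastic tasks" view can be demonstrated with a small grouping sketch: keys from one SSP partition into `factor` disjoint groups, one per elastic task. `split` and the use of `String` keys are illustrative assumptions, not part of the PR.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: with task.elasticity.factor = 2, keys from one SSP
// split into 2 disjoint groups, each consumed by one virtual/elastic task.
public class SplitSketch {
  public static Map<Integer, List<String>> split(List<String> keys, int factor) {
    Map<Integer, List<String>> byElasticTask = new HashMap<>();
    for (String k : keys) {
      int keyHash = Math.floorMod(k.hashCode(), factor);  // keyHash in [0, factor)
      byElasticTask.computeIfAbsent(keyHash, h -> new ArrayList<>()).add(k);
    }
    return byElasticTask;
  }

  public static void main(String[] args) {
    // Every key lands in exactly one group, so the groups cover the SSP without overlap.
    System.out.println(split(Arrays.asList("a", "b", "c", "d"), 2));
  }
}
```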

rmatharu-zz (Contributor) left a comment:
took a pass
