# [Newbie] Supporting Yunikorn and Kueue

**Authors:**

- @yuteng

## 1 Executive Summary

Provide Kubernetes (K8s) resource management, gang scheduling, and preemption for Flyte applications through third-party schedulers, including Apache Yunikorn and Kueue.

## 2 Motivation

Flyte supports multi-tenancy and various K8s plugins.

Kueue and Yunikorn support gang scheduling and preemption.
Gang scheduling guarantees that all worker pods derived from a CRD (for example a Spark or Ray job) are scheduled at the same time, which prevents resources from being wasted on jobs that start only partially and cannot do any meaningful work. Preemption makes sure that high-priority tasks execute immediately.

Flyte doesn't provide resource management for multi-tenancy, which the hierarchical resource queues of Yunikorn can address.

## 3 Proposed Implementation

```yaml
queueconfig:
  scheduler: yunikorn
  jobs:
    - type: "ray"
      gangscheduling: "placeholderTimeoutInSeconds=60 gangSchedulingStyle=hard"
    - type: "spark"
      gangscheduling: "placeholderTimeoutInSeconds=30 gangSchedulingStyle=hard"
```
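
The `gangscheduling` string is passed through to the scheduler. For Yunikorn, a plausible outcome is that it surfaces as gang-scheduling annotations on the derived pods; the sketch below uses Yunikorn's documented annotation keys, while the task-group name is a hypothetical value, not something fixed by this proposal.

```yaml
metadata:
  annotations:
    # Yunikorn gang-scheduling annotations; the parameter string comes from the
    # queueconfig example above, the task-group name is a placeholder.
    yunikorn.apache.org/task-group-name: "ray-worker"
    yunikorn.apache.org/schedulingPolicyParameters: "placeholderTimeoutInSeconds=60 gangSchedulingStyle=hard"
```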

This `queueconfig` is an example; it indicates which queues exist for an org.
Hierarchical queues will be structured as follows: root.org1.ray, root.org1.spark and root.org1.default.

ResourceFlavor allocates resources based on labels, which means category-based resource allocation by organization label is available.
On the other hand, Kueue preemption requires a Kueue WorkloadPriorityClass and patching the job with the corresponding label.
Thus, a ClusterQueue comprising multiple resources represents the total accessible resources for an organization.

| ClusterQueue | LocalQueue |
| --- | --- |
| Org | ray, spark, default |

A tenant can submit organization-specific tasks to queues such as org.ray, org.spark and org.default to track which job types are submittable.
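
A minimal sketch of the Kueue objects behind one row of the table above; the names (`org1`, `org1-flavor`, `org1-ray`) and quota values are placeholders, not part of this proposal.

```yaml
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: org1-flavor
spec:
  nodeLabels:
    organization: org1        # category-based allocation via the organization label
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: org1                  # total accessible resources for the organization
spec:
  namespaceSelector: {}
  resourceGroups:
    - coveredResources: ["cpu", "memory"]
      flavors:
        - name: org1-flavor
          resources:
            - name: "cpu"
              nominalQuota: 100
            - name: "memory"
              nominalQuota: 512Gi
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  name: org1-ray              # one LocalQueue per job type: ray, spark, default
spec:
  clusterQueue: org1
```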

A SchedulerConfigManager maintains the configuration parsed from the YAML above.
It patches labels or annotations onto K8s resources once they pass the rules specified in the configuration.
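
As a rough sketch (all type and field names here are assumptions, not a finalized API), the SchedulerConfigManager could hold the parsed `queueconfig` and answer whether a given task type has scheduling rules configured before any patching happens:

```go
// Structures mirroring the queueconfig YAML shown above.
type JobQueueConfig struct {
    Type           string `yaml:"type"`           // e.g. "ray", "spark"
    GangScheduling string `yaml:"gangscheduling"` // scheduler-specific gang-scheduling options
}

type QueueConfig struct {
    Scheduler string           `yaml:"scheduler"` // "yunikorn" or "kueue"
    Jobs      []JobQueueConfig `yaml:"jobs"`
}

// SchedulerConfigManager keeps the parsed configuration and is consulted
// before labels or annotations are patched onto a resource.
type SchedulerConfigManager struct {
    config QueueConfig
}

// JobConfigFor returns the rules for a task type, and whether any were configured.
func (m *SchedulerConfigManager) JobConfigFor(taskType string) (JobQueueConfig, bool) {
    for _, job := range m.config.Jobs {
        if job.Type == taskType {
            return job, true
        }
    }
    return JobQueueConfig{}, false
}
```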

```go
type YunikornScheduablePlugin interface {
    MutateResourceForYunikorn(ctx context.Context, object client.Object, taskTmpl *core.TaskTemplate) (client.Object, error)
    GetLabels(id core.Identifier) map[string]string
}

type KueueScheduablePlugin interface {
    MutateResourceForKueue(ctx context.Context, object client.Object, taskTmpl *core.TaskTemplate) (client.Object, error)
    GetLabels(id core.Identifier) map[string]string
}

// Receivers below stand for the concrete type that implements the Yunikorn interface.
func (h *YunikornScheduablePlugin) MutateResourceForYunikorn(ctx context.Context, object client.Object, taskTmpl *core.TaskTemplate) (client.Object, error) {
    rayJob := object.(*rayv1.RayJob)
    // TODO: patch the RayJob with Yunikorn labels/annotations
}

func (h *YunikornScheduablePlugin) GetLabels(id core.Identifier) map[string]string {
    // 1. UserInfo
    // 2. QueueName
    // 3. ApplicationID
}

func PatchPodSpec(target *v1.PodSpec, labels map[string]string) error {
    // Get the object metadata from target
    // Add a label if the specific label doesn't exist yet
}
```
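
As an illustration of the patching step, the sketch below merges labels into every pod template of a RayJob without overwriting existing keys. It assumes the kuberay `rayv1` API layout; the helper name `patchRayJobPodTemplates` is hypothetical.

```go
func patchRayJobPodTemplates(rayJob *rayv1.RayJob, labels map[string]string) {
    // Collect the head and worker pod templates of the RayJob.
    templates := []*v1.PodTemplateSpec{&rayJob.Spec.RayClusterSpec.HeadGroupSpec.Template}
    for i := range rayJob.Spec.RayClusterSpec.WorkerGroupSpecs {
        templates = append(templates, &rayJob.Spec.RayClusterSpec.WorkerGroupSpecs[i].Template)
    }
    for _, tmpl := range templates {
        if tmpl.Labels == nil {
            tmpl.Labels = map[string]string{}
        }
        for k, v := range labels {
            // Only add the label if it doesn't exist yet.
            if _, ok := tmpl.Labels[k]; !ok {
                tmpl.Labels[k] = v
            }
        }
    }
}
```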

Queues are not created implicitly by annotating pods: when adopting Kueue, the Kueue CRDs describing a queue's quota have to be created first; when adopting Yunikorn, queues are defined through the [Yunikorn queue configuration](https://yunikorn.apache.org/docs/user_guide/queue_config).

A scheduler plugin is created according to `queueconfig.scheduler`.
Its basic responsibility is to validate whether a submitted application is accepted.
When a Yunikorn scheduler plugin is created, it generates an applicationID and a queue name.
A Kueue scheduler plugin, on the other hand, constructs labels including the LocalQueue name and the preemption (workload priority) settings.
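
A sketch of the labels each plugin might construct. The Yunikorn label keys (`applicationId`, `queue`) and the Kueue label keys (`kueue.x-k8s.io/queue-name`, `kueue.x-k8s.io/priority-class`) follow the upstream projects' conventions; the helper names are hypothetical.

```go
// Labels the Yunikorn plugin would attach, derived from user info, queue name and application ID.
func yunikornLabels(appID, queue string) map[string]string {
    return map[string]string{
        "applicationId": appID, // unique per submitted application
        "queue":         queue, // e.g. "root.org1.ray"
    }
}

// Labels the Kueue plugin would attach: the LocalQueue plus the WorkloadPriorityClass used for preemption.
func kueueLabels(localQueue, priorityClass string) map[string]string {
    return map[string]string{
        "kueue.x-k8s.io/queue-name":     localQueue,    // e.g. "org1-ray"
        "kueue.x-k8s.io/priority-class": priorityClass, // references a Kueue WorkloadPriorityClass
    }
}
```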

```go
func (e *PluginManager) launchResource(ctx context.Context, tCtx pluginsCore.TaskExecutionContext) (pluginsCore.Transition, error) {
    o, err := e.plugin.BuildResource(ctx, k8sTaskCtx)
    if err != nil {
        return pluginsCore.UnknownTransition, err
    }

    // Mutate the built resource for the configured scheduler before it is created.
    if o, err = e.SchedulerPlugin.MutateResourceForKueue(o); err != nil {
        return pluginsCore.UnknownTransition, err
    }
    // ... continue with resource creation
}
```

When the batch scheduler in Flyte is Yunikorn, some examples look like the following.
For example, this approach submits a Ray job owned by user1 in org1 to "root.org1.ray".
A Spark application in ns1 submitted by user4 in org1 goes to "root.org1.ns1".
When adopting Kueue instead, the resulting queues for these examples are "org1-ray" and "org1-ns1".
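
The naming in these examples could be captured by a small helper along these lines; this is a sketch, and the function name and exact naming scheme are not settled by this proposal.

```go
// queueNameFor maps an organization and a group (job type or namespace) to the
// queue name used by the configured scheduler.
func queueNameFor(scheduler, org, group string) string {
    if scheduler == "yunikorn" {
        return fmt.Sprintf("root.%s.%s", org, group) // e.g. "root.org1.ray", "root.org1.ns1"
    }
    return fmt.Sprintf("%s-%s", org, group) // Kueue LocalQueue, e.g. "org1-ray", "org1-ns1"
}
```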

## 4 Metrics & Dashboards

1. The Yunikorn scheduler adds applications to a specific queue based on their user info and queue name, for any application type.
2. Yunikorn and Kueue provide gang scheduling through annotations for Ray and Spark.
3. Preemption behavior aligns with the user-defined configuration in Yunikorn.

## 5 Drawbacks

This approach doesn't offer a way to maintain consistency between the actual resource quotas of groups and the configuration in the scheduler.

## 6 Alternatives

## 7 Potential Impact and Dependencies

Flyte supports Spark, Ray and Kubeflow CRDs, including PyTorchJobs and TFJobs.
The Spark and Ray operators have supported Yunikorn gang scheduling since task group calculation was implemented in those operators.
A task group calculation implementation at the pod level, in Flyte or in Kubeflow, is required to support the Kubeflow CRDs.
On the other hand, Kueue currently doesn't support the Spark CRD.

| Operator | Yunikorn | Kueue |
| --- | --- | --- |
| Spark | v | x |
| Ray | v | v |
| Kubeflow | x | v |

## 8 Unresolved questions

## 9 Conclusion

Yunikorn and Kueue support gang scheduling, allowing all necessary pods to run simultaneously when the required resources are available.
Yunikorn provides preemption by calculating the priority of applications based on their priority class and the priority score of the queue they are submitted to, so that high-priority or emergency applications can run immediately.
Yunikorn's hierarchical queues include guaranteed resource settings and ACLs.