A plan defines a set of resource types used for deploying a cluster. You must first activate and configure <strong>Plan 1</strong>. After that, you can activate up to nine additional, optional plans.
<% if current_page.data.iaas == "vSphere" %>
<p class="note"><strong>Note</strong>: If you are deploying a Windows worker-based cluster on vSphere without NSX-T, you must select either Plan 1, XXX or XXX.</p>
<% end %>
To activate and configure a plan, perform the following steps:
1. Click the plan that you want to activate.
1. Select **Active** to activate the plan and make it available to developers deploying clusters.
![Plan pane configuration](images/plan1.png)
1. Under **Name**, provide a unique name for the plan.
1. Under **Description**, edit the description as needed.
The plan description appears in the Services Marketplace, which developers can access by using the PKS CLI.
<% if current_page.data.iaas == "vSphere" %>
1. Select **Enable Windows Workers** to deploy a Windows worker-based cluster. Windows worker-based clusters are only supported on vSphere without NSX-T.
<% else %>
1. Windows worker-based clusters are supported only on vSphere without NSX-T. Select **Enable Windows Workers** only if you are deploying a Windows worker-based cluster on vSphere without NSX-T.
<% end %>
1. Under **Master/ETCD Node Instances**, select the default number of Kubernetes master/etcd nodes to provision for each cluster.
You can enter <code>1</code>, <code>3</code>, or <code>5</code>.
<p class="note"><strong>Note</strong>: If you deploy a cluster with multiple master/etcd node VMs,
confirm that you have sufficient hardware to handle the increased load on disk write and network traffic. For more information, see <a href="https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/hardware.md#example-hardware-configurations">Hardware recommendations</a> in the etcd documentation.<br><br>
In addition to meeting the hardware requirements for a multi-master cluster, we recommend configuring monitoring for etcd to monitor disk latency, network latency, and other indicators for the health of the cluster. For more information, see <a href="monitor-etcd.html">Monitoring Master/etcd Node VMs</a>.</p>
<p class="note warning"><strong>WARNING</strong>: To change the number of master/etcd nodes for a plan, you must ensure that no existing clusters use the plan. <%= vars.product_short %> does not support changing the number of master/etcd nodes for plans with existing clusters.
</p>
1. Under **Master/ETCD VM Type**, select the type of VM to use for Kubernetes master/etcd nodes. For more information, including master node VM customization options, see the [Master Node VM Size](vm-sizing.html#master-sizing) section of _VM Sizing for <%= vars.product_short %> Clusters_.
1. Under **Master Persistent Disk Type**, select the size of the persistent disk for the Kubernetes master node VM.
1. Under **Master/ETCD Availability Zones**, select one or more AZs for the Kubernetes clusters deployed by <%= vars.product_short %>.
If you select more than one AZ, <%= vars.product_short %> deploys the master VM in the first AZ and the worker VMs across the remaining AZs.
If you are using multiple masters, <%= vars.product_short %> deploys the master and worker VMs across the AZs in round-robin fashion.
1. Under **Maximum number of workers on a cluster**, set the maximum number of
Kubernetes worker node VMs that <%= vars.product_short %> can deploy for each cluster. Enter any whole number in this field.
<img src="images/plan2.png" alt="Plan pane configuration, part two">
1. Under **Worker Node Instances**, select the default number of Kubernetes worker nodes to provision for each cluster.
<br><br>
If the user creating a cluster with the PKS CLI does not specify a number of worker nodes, the cluster is deployed with the default number set in this field. This value cannot be greater than the maximum worker node value you set in the previous field. For more information about creating clusters, see [Creating Clusters](create-cluster.html).
<br><br>
For high availability, create clusters with a minimum of three worker nodes, or two per AZ if you intend to use PersistentVolumes (PVs). For example, if you deploy across three AZs, you should have six worker nodes. For more information about PVs, see [PersistentVolumes](maintain-uptime.html#persistent-volumes) in *Maintaining Workload Uptime*. Provisioning a minimum of three worker nodes, or two nodes per AZ, is also recommended for stateless workloads.
<br><br>
If you later reconfigure the plan to adjust the default number of worker nodes, the existing clusters that have been created from that plan are not automatically upgraded with the new default number of worker nodes.
1. Under **Worker VM Type**, select the type of VM to use for Kubernetes worker node VMs. For more information, including worker node VM customization options, see the [Worker Node VM Number and Size](vm-sizing.html#worker-sizing) section of _VM Sizing for <%= vars.product_short %> Clusters_.
<p class="note"><strong>Note</strong>: If you install <%= vars.product_short %> in an NSX-T environment, we recommend that you select a <strong>Worker VM Type</strong> with a minimum disk size of 16 GB. The disk space provided by the default <code>medium</code> Worker VM Type is insufficient for <%= vars.product_short %> with NSX-T.</p>
1. Under **Worker Persistent Disk Type**, select the size of the persistent disk for the Kubernetes worker node VMs.
<% if current_page.data.iaas == "vSphere" %>
<p class="note"><strong>Note</strong>: In Windows worker-based clusters the Worker Persistent Disk Type is not supported and is ignored.</p>
<% end %>
1. Under **Worker Availability Zones**, select one or more AZs for the Kubernetes worker nodes. <%= vars.product_short %> deploys worker nodes equally across the AZs you select.
1. Under **Kubelet customization - system-reserved**, enter resource values that Kubelet can use to reserve resources for system daemons. For example, `memory=250Mi, cpu=150m`. For more information about system-reserved values, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved).
1. Under **Kubelet customization - eviction-hard**, enter threshold limits that Kubelet can use to evict pods when they exceed the limit. Enter limits in the format `EVICTION-SIGNAL=QUANTITY`. For example, `memory.available=100Mi, nodefs.available=10%, nodefs.inodesFree=5%`. For more information about eviction thresholds, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#hard-eviction-thresholds).
<p class="note warning"><strong>WARNING</strong>: Use the Kubelet customization fields with caution. If you enter values that are invalid or that exceed the limits the system supports, Kubelet might fail to start. If Kubelet fails to start, you cannot create clusters.</p>
1. Under **Errand VM Type**, select the size of the VM that contains the errand. The smallest instance possible is sufficient, as the only errand running on this VM is the one that applies the **Default Cluster App** YAML configuration.
1. (Optional) Under **(Optional) Add-ons - Use with caution**, enter additional YAML configuration to add custom workloads to each cluster in this plan. You can specify multiple files using `---` as a separator. For more information, see [Adding Custom Workloads](custom-workloads.html).
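For example, a multi-document add-on entry can look like the sketch below; the resource names are placeholders for illustration and do not come from this documentation:

```yaml
# Placeholder add-on manifests; names are illustrative only.
---
apiVersion: v1
kind: Namespace
metadata:
  name: example-addon
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-addon-sa
  namespace: example-addon
```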
![Plan pane configuration](images/plan3.png)
1. (Optional) To allow users to create pods with privileged containers, select the **Allow Privileged** option. For more information, see [Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/#privileged-mode-for-pod-containers) in the Kubernetes documentation.
<% if current_page.data.iaas == "vSphere" %>
<p class="note"><strong>Note</strong>: In Windows worker-based clusters, privileged containers are not supported and the **Allow Privileged** configuration is ignored.</p>
<% end %>
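With **Allow Privileged** enabled, developers can create pods whose containers request privileged mode through the standard Kubernetes `securityContext` field. A minimal sketch, with a placeholder pod name and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged-example    # placeholder name
spec:
  containers:
  - name: app
    image: nginx              # placeholder image
    securityContext:
      privileged: true        # requires the plan's Allow Privileged setting
```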
1. (Optional) Enable or disable one or more admission controller plugins: **PodSecurityPolicy**, **DenyEscalatingExec**, and **SecurityContextDeny**. See [Admission Plugins](./admission-plugins.html) for more information.
<% if current_page.data.iaas == "vSphere" %>
<p class="note"><strong>Note</strong>: In Windows worker-based clusters the DenyEscalatingExec plugin is not supported and the **DenyEscalatingExec** configuration is ignored.</p>
<% end %>
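If you enable the **PodSecurityPolicy** plugin, pods are admitted only when a policy authorizes them, so clusters generally need at least one PodSecurityPolicy resource plus RBAC bindings that grant access to it. The sketch below shows a minimal policy with a placeholder name; see [Admission Plugins](./admission-plugins.html) for the supported procedure:

```yaml
# Minimal PodSecurityPolicy sketch; name is illustrative only.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example-restricted
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
```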
![Node Drain Timeout fields](images/node-drain.png)
1. (Optional) Under **Node Drain Timeout(mins)**, enter the timeout in minutes for the node to drain pods. If you set this value to 0, the node drain does not terminate.
1. (Optional) Under **Pod Shutdown Grace Period (seconds)**, enter a timeout in seconds for the node to wait before it forces the pod to terminate. If you set this value to `-1`, the timeout defaults to the value specified by the pod.
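The per-pod value that `-1` defers to is the standard Kubernetes `terminationGracePeriodSeconds` field in the pod spec. A minimal sketch with placeholder names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-example      # placeholder name
spec:
  terminationGracePeriodSeconds: 60   # honored when the plan value is -1
  containers:
  - name: app
    image: nginx              # placeholder image
```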
1. (Optional) To configure when the node drains, enable the following:
* **Force node to drain even if it has running pods not managed by a ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet.**
* **Force node to drain even if it has running DaemonSet-managed pods.**
* **Force node to drain even if it has running pods using emptyDir.**
* **Force node to drain even if pods are still running after timeout.**
<p class="note warning"><strong>Warning:</strong> If you select <strong>Force node to drain even if pods are still running after timeout</strong>, the node kills all running workloads on pods.</p>
For more information about configuring default node drain behavior, see [Worker Node Hangs Indefinitely](./troubleshoot-issues.html#upgrade-drain-hangs) in _Troubleshooting_.
1. Click **Save**.
To deactivate a plan, perform the following steps:
1. Click the plan that you want to deactivate.
1. Select **Inactive**.
1. Click **Save**.