---
title: Creating Clusters
owner: PKS
---
<strong><%= modified_date %></strong>
This topic describes how to create a Kubernetes cluster with <%= vars.product_full %> using the PKS Command Line Interface (PKS CLI).
## <a id='cluster-access'></a>Configure Cluster Access
Cluster access configuration differs by the type of <%= vars.product_short %> deployment.
### <a id='cluster-access-nsx-t'></a>vSphere with NSX-T
<%= vars.product_short %> deploys a load balancer automatically when clusters are created. The load balancer is configured automatically when you deploy workloads on these Kubernetes clusters.
For more information, see [Load Balancers in <%= vars.product_short %> Deployments with NSX-T](about-lb.html#with-nsx-t).
<p class="note"><strong>Note</strong>: For a complete list of the objects that <%= vars.product_short %> creates by default when you create a Kubernetes cluster on vSphere with NSX-T, see <a href="vsphere-nsxt-cluster-objects.html">vSphere with NSX-T Cluster Objects</a>.</p>
### <a id='cluster-access-general'></a>GCP, AWS, Azure, or vSphere without NSX-T
When you create a Kubernetes cluster, you must configure external access to the cluster by creating an external TCP or HTTPS load balancer.
This load balancer allows you to run PKS CLI commands on the cluster from your local workstation. For more information, see [Load Balancers in <%= vars.product_short %> Deployments without NSX-T](about-lb.html#without-nsx-t).
You can use any load balancer of your choice.
If you use GCP, AWS, Azure, or vSphere without NSX-T, you can create a load balancer using your cloud provider console.
For more information about configuring a <%= vars.product_short %> cluster load balancer, see the following:
* [Creating and Configuring a GCP Load Balancer for <%= vars.product_short %> Clusters](gcp-cluster-load-balancer.html)
* [Creating and Configuring an AWS Load Balancer for <%= vars.product_short %> Clusters](aws-cluster-load-balancer.html)
* [Creating and Configuring an Azure Load Balancer for <%= vars.product_short %> Clusters](azure-cluster-load-balancer.html)
Create the <%= vars.product_short %> cluster load balancer before you create the cluster. Use the load balancer IP address
as the external hostname, and then point the load balancer to the IP address of the master
virtual machine (VM) after cluster creation. If the cluster has multiple master nodes, you
must configure the load balancer to point to all master VMs for the cluster.
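For example, on GCP you might sketch this load balancer with the `gcloud` CLI. This is only an illustration; the resource names, region, zone, and master VM ID below are placeholders, and the linked topics above contain the complete steps:
<pre class="terminal">$ gcloud compute addresses create my-cluster-lb-ip --region us-central1
$ gcloud compute target-pools create my-cluster-masters --region us-central1
$ gcloud compute forwarding-rules create my-cluster-api \
    --region us-central1 --address my-cluster-lb-ip \
    --ports 8443 --target-pool my-cluster-masters
# After cluster creation, add each master VM to the target pool:
$ gcloud compute target-pools add-instances my-cluster-masters \
    --instances vm-1a234567-89b0-123c-d456-78901e23fg45 \
    --instances-zone us-central1-a</pre>
The reserved address created in the first command is the IP address to use as the cluster's external hostname.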
If you are creating a cluster in a non-production environment, you can choose to create a cluster without a load balancer.
Create a DNS entry that points to the IP address of the cluster's master VM after cluster creation.
To locate the IP addresses and VM IDs of the master VMs, see [Identify Kubernetes Cluster Master VMs](#master-id) below.
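For a non-production cluster without a load balancer, you can create the DNS entry with your provider's tooling. As an illustration only, on GCP with Cloud DNS, assuming a managed zone named `my-zone` and using the example hostname with a placeholder master IP address:
<pre class="terminal">$ gcloud dns record-sets transaction start --zone my-zone
$ gcloud dns record-sets transaction add 192.168.20.7 \
    --name my-cluster.example.com. --ttl 300 --type A --zone my-zone
$ gcloud dns record-sets transaction execute --zone my-zone</pre>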
## <a id='create'></a>Create a Kubernetes Cluster
Perform the following steps:
1. Grant cluster access to a new or existing user in UAA.
For more information, see the [Grant <%= vars.product_short %> Access to an Individual User](manage-users.html#uaa-user) section of _Managing <%= vars.product_short %> Admin Users with UAA_.
1. <%= partial 'login-pks-api' %>
1. To create a cluster, run the following command:
```
pks create-cluster CLUSTER-NAME \
--external-hostname HOSTNAME \
--plan PLAN-NAME \
[--num-nodes WORKER-NODES] \
[--network-profile NETWORK-PROFILE-NAME]
```
Where:
* `CLUSTER-NAME` is a unique name for your cluster.
<p class="note"><strong>Note</strong>: The <code>CLUSTER-NAME</code> must not contain special characters such as <code>&</code>.
The PKS CLI does not validate the presence of special characters in the <code>CLUSTER-NAME</code> string,
but cluster creation fails if one or more special characters are present.</p>
* `HOSTNAME` is your external hostname for your cluster. You can use any fully qualified
domain name (FQDN) or IP address you own. For example, `my-cluster.example.com` or `10.0.0.1`.
If you created an external load balancer, use its DNS hostname.
* `PLAN-NAME` is the plan for your cluster. Run `pks plans` to list your available plans.
* (Optional) `WORKER-NODES` is the number of worker nodes for the cluster.
* (Optional) (NSX-T only) `NETWORK-PROFILE-NAME` is the network profile to use for the cluster.
See [Using Network Profiles (NSX-T Only)](network-profiles.html) for more information.
For example:
<pre class="terminal">$ pks create-cluster my-cluster \
--external-hostname my-cluster.example.com \
--plan large --num-nodes 3</pre>
<p class="note"><strong>Note</strong>: It can take up to 30 minutes to create a cluster.</p>
For high availability, create clusters with a minimum of three worker nodes, or two per AZ if you intend to use PersistentVolumes (PVs). For example, if you deploy across three AZs, you should have six worker nodes. For more information about PVs, see [PersistentVolumes](maintain-uptime.html#persistent-volumes) in *Maintaining Workload Uptime*. Provisioning a minimum of three worker nodes, or two nodes per AZ, is also recommended for stateless workloads.
<br><br>
The maximum value you can specify is configured in the **Plans** pane of the <%= vars.product_tile %> tile. If you do not specify a number of worker nodes, the cluster is deployed with the default number, which is also configured in the **Plans** pane. For more information, see the *Installing <%= vars.product_short %>* topic for your IaaS, such as [Installing <%= vars.product_short %> on vSphere](installing-pks-vsphere.html#plans).
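If your deployment is on vSphere with NSX-T and you want the cluster to use a network profile, pass the `--network-profile` flag shown above. For example, where `my-network-profile` is a placeholder for a profile you have already defined:
<pre class="terminal">$ pks create-cluster my-cluster \
--external-hostname my-cluster.example.com \
--plan large --num-nodes 3 \
--network-profile my-network-profile</pre>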
1. To track cluster creation, run the following command:
```
pks cluster CLUSTER-NAME
```
Where `CLUSTER-NAME` is the unique name for your cluster.
<br>
For example:
<pre class="terminal">$ pks cluster my-cluster
Name: my-cluster
Plan Name: large
UUID: 01a234bc-d56e-7f89-01a2-3b4cde5f6789
Last Action: CREATE
Last Action State: succeeded
Last Action Description: Instance provisioning completed
Kubernetes Master Host: my-cluster.example.com
Kubernetes Master Port: 8443
Worker Instances: 3
Kubernetes Master IP(s): 192.168.20.7
</pre>
1. If the **Last Action State** value is `error`, troubleshoot by performing the following procedure:
1. Log in to the BOSH Director.
1. Run the following command:
```
bosh tasks
```
For more information, see [Advanced Troubleshooting with the BOSH CLI](https://docs.pivotal.io/pivotalcf/customizing/trouble-advanced.html).
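For example, assuming the BOSH environment alias `pks` that is used later in this topic, you can list recent tasks and then inspect the failed one. The task ID below is a placeholder:
<pre class="terminal">$ bosh -e pks tasks --recent
$ bosh -e pks task 123 --debug</pre>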
1. Depending on your deployment:
* For **vSphere with NSX-T**, choose one of the following:
* Specify the hostname or FQDN and register the FQDN with the IP provided by <%= vars.product_short %> after cluster deployment. You can do this using `resolv.conf` or via DNS registration.
* Specify a temporary placeholder value for FQDN, then replace the FQDN in the `kubeconfig` with the IP address assigned to the load balancer dedicated to the cluster.<br><br>
To retrieve the IP address to access the Kubernetes API and UI services, use the `pks cluster CLUSTER-NAME` command.
* For **vSphere without NSX-T**, **AWS**, and **Azure**, configure external access to the cluster's master nodes using either DNS records or an external load balancer. Use the output from the `pks cluster` command to locate the master node IP addresses and ports.
* For **GCP**, use the output from the `pks cluster` command to locate the master node IP addresses and ports, and then continue to [Configure Load Balancer Backend](gcp-cluster-load-balancer.html#backend) in _Configuring a GCP Load Balancer for <%= vars.product_short %> Clusters_.
<p class="note"><strong>Note</strong>: For clusters with multiple master node VMs, health checks on port 8443 are recommended.</p>
1. To access your cluster, run the following command:
```
pks get-credentials CLUSTER-NAME
```
Where `CLUSTER-NAME` is the unique name for your cluster.
<br>
For example:
<pre class="terminal">$ pks get-credentials pks-example-cluster
Fetching credentials for cluster pks-example-cluster.
Context set for cluster pks-example-cluster.
You can now switch between clusters by using:
$kubectl config use-context <cluster-name>
</pre>
The `pks get-credentials` command creates a local `kubeconfig` that allows you to manage the cluster.
For more information about the `pks get-credentials` command, see [Retrieving Cluster Credentials and Configuration](cluster-credentials.html).
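Because the credentials are stored as a context in your local `kubeconfig`, you can list and switch contexts with standard `kubectl` commands. For example, using the cluster name from the example above:
<pre class="terminal">$ kubectl config get-contexts
$ kubectl config use-context pks-example-cluster</pre>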
<%= partial "saml-sso-login" %>
1. To confirm you can access your cluster using the Kubernetes CLI, run the following command:
```
kubectl cluster-info
```
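As an additional check that the worker nodes registered with the cluster, you can run a standard `kubectl` command such as:
<pre class="terminal">$ kubectl get nodes</pre>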
See [Managing <%= vars.product_short %>](managing.html) for information about checking cluster health and viewing cluster logs.
## <a id="master-id"></a>Identify Kubernetes Cluster Master VMs
<p class="note"><strong>Note</strong>: This section applies only to <%= vars.product_short %> deployments on GCP or on vSphere without NSX-T. Skip this section if your <%= vars.product_short %> deployment is on vSphere with NSX-T, AWS, or Azure. For more information, see <a href="about-lb.html">Load Balancers in <%= vars.product_short %></a>.
</p>
To reconfigure the load balancer or DNS record for an existing cluster, you may need to locate VM ID and IP address information for the cluster's master VMs.
Use the information you locate in this procedure when configuring your load balancer backend.
To locate the IP addresses and VM IDs for the master VMs of an existing cluster, do the following:
1. <%= partial 'login-pks-api' %>
1. To locate the cluster ID and master node IP addresses, run the following command:
```
pks cluster CLUSTER-NAME
```
Where `CLUSTER-NAME` is the unique name for your cluster.
<br>
From the output of this command, record the following items:
* **UUID**: This value is your cluster ID.
* **Kubernetes Master IP(s)**: This value lists the IP addresses of all master nodes in the cluster.
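For example, assuming standard shell tools on your workstation, you can extract just the UUID from the command output:
<pre class="terminal">$ pks cluster my-cluster | grep UUID | awk '{print $2}'</pre>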
1. Gather credential and IP address information for your BOSH Director.
1. To log in to the BOSH Director, perform the following:
1. SSH into the Ops Manager VM.
1. Log in to the BOSH Director by using the BOSH CLI from the Ops Manager VM.
For information on how to complete these steps, see [Advanced Troubleshooting with the BOSH CLI](https://docs.pivotal.io/pivotalcf/customizing/trouble-advanced.html).
1. To identify the name of your cluster deployment, run the following command:
```
bosh -e pks deployments
```
Your cluster deployment name begins with `service-instance` and includes the UUID you located in a previous step.
1. To identify the master VM IDs by listing the VMs in your cluster, run the following command:
```
bosh -e pks -d CLUSTER-SI-ID vms
```
Where `CLUSTER-SI-ID` is your cluster service instance ID, which begins with `service-instance` and includes the UUID you located previously.
<br>
For example:
<pre class="terminal">$ bosh -e pks -d service-instance-aa1234567bc8de9f0a1c vms</pre>
Your master VM IDs are displayed in the **VM CID** column.
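For example, if your master nodes appear in the `master` instance group, as in typical <%= vars.product_short %> deployments, you can narrow the output with standard shell tools. The deployment name below matches the earlier example:
<pre class="terminal">$ bosh -e pks -d service-instance-aa1234567bc8de9f0a1c vms | grep master</pre>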
1. Use the master VM IDs and other information you gathered in this procedure to configure your load balancer backend.
For example, if you use GCP, use the master VM IDs retrieved during the previous step in [Reconfiguring a GCP Load Balancer](gcp-cluster-load-balancer.html#reconfigure).
## <a id='next-steps'></a> Next Steps
If your <%= vars.product_short %> deployment is on AWS, you must tag your subnets with your new cluster's unique identifier before adding the subnets to the
<%= vars.product_short %> workload load balancer. After you complete the [Create a Kubernetes Cluster](#create) procedure, follow the instructions in [AWS Prerequisites](./deploy-workloads.html#aws).