---
title: Deploying and Exposing Basic Workloads
owner: PKS
---
<strong><%= modified_date %></strong>
This topic describes how to configure, deploy, and expose basic workloads in <%= vars.product_full %>.
## <a id='overview'></a> Overview
A load balancer is a third-party device that distributes network and application traffic across resources.
Using a load balancer can prevent individual network components from being overloaded by high traffic.
<p class='note'><strong>Note</strong>: The procedures in this topic create a dedicated load balancer
for each workload.
If your cluster has many apps, a load balancer dedicated to each workload can be an inefficient use of resources.
An ingress controller pattern is better suited for clusters with many workloads.
</p>
Refer to the following <%= vars.product_short %> documentation topics for additional information
about deploying and exposing workloads:
* For the different types of load balancers used in a deployment, see [Load Balancers in PKS](about-lb.html).
* For ingress routing on GCP, AWS, Azure, or vSphere without NSX-T, see [Configuring Ingress Routing](configure-ingress.html).
* For ingress routing on vSphere with NSX-T, see [Configuring Ingress Resources and Load Balancer Services](nsxt-ingress-srvc-lb.html).
## <a id='prerequisites'></a> Prerequisites
This topic references standard Kubernetes primitives. If you are unfamiliar with Kubernetes
primitives, review the Kubernetes [Workloads](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/) and
[Services, Load Balancing, and Networking](https://kubernetes.io/docs/concepts/services-networking/service/)
documentation before following the procedures below.
### <a id='nonsxt'></a>vSphere without NSX-T Prerequisites
If you use vSphere without NSX-T, you can choose to configure your own external load balancer or
expose static ports to access your workload without a load balancer.
See [Deploy Workloads without a Load Balancer](#without-lb) below.
### <a id='gcp'></a>GCP, AWS, Azure, and vSphere with NSX-T Prerequisites
If you use Google Cloud Platform (GCP), Amazon Web Services (AWS), Azure, or vSphere with NSX-T integration,
your cloud provider can configure a public-cloud external load balancer for your workload.
See either [Deploy Workloads on vSphere with NSX-T](#external-lb-nsxt) or [Deploy Workloads on GCP, AWS, or
Azure, Using a Public-Cloud External Load Balancer](#external-lb) below.
### <a id='aws'></a>AWS Prerequisites
If you use AWS, you can also expose your workload using a public-cloud internal load balancer.
Perform the following steps before you create a load balancer:
1. In the [AWS Management Console](https://aws.amazon.com/console/), create or locate a public
subnet for each availability zone (AZ) that you are deploying to.
A public subnet has a route table that directs internet-bound traffic to the internet gateway.
1. On the command line, run `pks cluster CLUSTER-NAME`, where `CLUSTER-NAME` is the name of your cluster.
1. Record the unique identifier for the cluster.
1. In the [AWS Management Console](https://aws.amazon.com/console/), tag each public subnet based on the table below, replacing `CLUSTER-UUID`
with the unique identifier of the cluster. Leave the **Value** field empty.
<table>
<tr>
<th>Key</th>
<th>Value</th>
</tr>
<tr>
<td><code>kubernetes.io/cluster/service-instance_CLUSTER-UUID</code></td>
<td>empty</td>
</tr>
</table>
<p class='note'><strong>Note</strong>: AWS limits the number of tags on a subnet to 100.</p>
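The tagging step above can also be scripted with the AWS CLI. The following is a minimal sketch; the helper function name, subnet ID, and cluster UUID are placeholders, not values from your environment.

```shell
# Sketch only: tag one public subnet so it is associated with the cluster.
# The subnet ID and cluster UUID passed in are placeholders; substitute your own.
tag_subnet_for_cluster() {
  local subnet_id="$1" cluster_uuid="$2"
  aws ec2 create-tags \
    --resources "$subnet_id" \
    --tags "Key=kubernetes.io/cluster/service-instance_${cluster_uuid},Value="
}

# Example invocation (placeholder values):
# tag_subnet_for_cluster subnet-0abc1234 aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
```

Repeat the call once per public subnet, one for each AZ you deploy to.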
After completing these steps, follow the steps below in [Deploy AWS Workloads Using an Internal Load Balancer](#internal-lb).
## <a id='external-lb-nsxt'></a>Deploy Workloads on vSphere with NSX-T
If you use vSphere with NSX-T, follow the steps below to deploy and expose basic workloads using the NSX-T load balancer.
<%= partial 'expose-external-lb' %>
## <a id='external-lb'></a>Deploy Workloads on GCP, AWS, or Azure, Using a Public-Cloud External Load Balancer
If you use GCP, AWS, or Azure, follow the steps below to deploy and expose basic workloads using a load balancer configured by your cloud provider.
<%= partial 'expose-external-lb' %>
## <a id='internal-lb'></a>Deploy AWS Workloads Using an Internal Load Balancer
If you use AWS, follow the steps below to deploy, expose, and access basic workloads using an internal load balancer configured by your cloud provider.
#### <a id='internal-lb-configure'></a>Configure Your Workload
1. Open your workload's Kubernetes service configuration file in a text editor.
1. To expose the workload through a load balancer, confirm that the Service object is configured to be `type: LoadBalancer`.
1. In the `metadata` section of the Service manifest, add the following `annotations` key:
```
annotations:
  service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
```
For example:
```
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
  name: nginx
spec:
  ports:
  - port: 80
  selector:
    app: nginx
  type: LoadBalancer
---
```
1. Confirm that the `annotations` and `type` properties of each additional workload's Kubernetes service are similarly configured.
<p class='note'><strong>Note</strong>: For an example of a fully configured Kubernetes service, see the
<a href="https://github.com/cloudfoundry-incubator/kubo-ci/blob/master/specs/nginx-lb.yml">nginx app's example <code>type: LoadBalancer</code> configuration</a> in GitHub.</p>
For more information about configuring the `LoadBalancer` Service type, see the
[Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer).
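After applying the configuration, you can read the annotation back from the cluster as a quick check. This is a sketch; the helper name is illustrative and the service name you pass in must be your own.

```shell
# Sketch: print the internal-load-balancer annotation set on a Service.
# Pass your own service name; "nginx" in the example below is an assumption.
internal_lb_annotation() {
  kubectl get svc "$1" \
    -o jsonpath='{.metadata.annotations.service\.beta\.kubernetes\.io/aws-load-balancer-internal}'
}

# Example: internal_lb_annotation nginx
```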
#### <a id='internal-lb-deploy'></a>Deploy and Expose Your Workload
1. To deploy the service configuration for your workload, run the following command:
```
kubectl apply -f SERVICE-CONFIG
```
Where `SERVICE-CONFIG` is your workload's Kubernetes service configuration.
<br>
For example:
<pre class="terminal">kubectl apply -f nginx.yml</pre>
With the example nginx configuration, this command creates the Service and three pod replicas, spanning three worker nodes.
1. Deploy your applications, deployments, config maps, persistent volumes, secrets,
and any other configurations or objects necessary for your applications to run.
1. Wait until your cloud provider has created and connected a dedicated load balancer to the worker nodes on a specific port.
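The wait in the last step can be automated by polling the Service status. A minimal sketch, assuming the provider publishes the load balancer endpoint as a hostname (as AWS does) and a 10-second polling interval:

```shell
# Sketch: block until the Service reports a load balancer endpoint.
# Assumes the endpoint appears as a hostname; some providers publish an IP
# in .status.loadBalancer.ingress[0].ip instead.
wait_for_lb() {
  local svc="$1" host=""
  while [ -z "$host" ]; do
    host="$(kubectl get svc "$svc" \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')" || return 1
    if [ -z "$host" ]; then
      sleep 10
    fi
  done
  echo "$host"
}

# Example: wait_for_lb nginx
```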
#### <a id='internal-lb-access'></a>Access Your Workload
1. To determine your exposed workload's load balancer IP address and port number, run the following command:
```
kubectl get svc SERVICE-NAME
```
Where `SERVICE-NAME` is your workload configuration's specified service `name`.
<br>
For example:
<pre class="terminal">kubectl get svc nginx</pre>
1. Retrieve the load balancer's external IP and port from the returned listing.
1. To access the app, run the following command:
```
curl http://EXTERNAL-IP:PORT
```
Where:
* `EXTERNAL-IP` is the IP address of the load balancer.
* `PORT` is the port number.
<p class='note'><strong>Note</strong>: This command should be run on a server with network connectivity and visibility to the IP address of the worker node.</p>
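The lookup-and-curl steps above can be combined into one sketch. The service name and the assumption that the first port in the Service spec is the one you want are both illustrative:

```shell
# Sketch: build the workload URL from the Service's load balancer status.
# Assumes the endpoint is a hostname and the first listed port is the target.
lb_url() {
  local svc="$1" host port
  host="$(kubectl get svc "$svc" -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')"
  port="$(kubectl get svc "$svc" -o jsonpath='{.spec.ports[0].port}')"
  echo "http://${host}:${port}"
}

# Example: curl "$(lb_url nginx)"
```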
## <a id='external-lb-generic'></a>Deploy Workloads for a Generic External Load Balancer
Follow the steps below to deploy and access basic workloads using a generic external load balancer, such as F5.
A generic external load balancer forwards traffic to a static port on your Kubernetes cluster.
To expose that static port, configure your workload's service with `type: NodePort`.
#### <a id='external-lb-generic-configure'></a>Configure Your Workload
To expose a static port on your workload, perform the following steps:
1. Open your workload's Kubernetes service configuration file in a text editor.
1. To expose the workload without a load balancer, confirm that the Service object is configured to be `type: NodePort`.
<br>
For example:
```
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx
  name: nginx
spec:
  ports:
  - port: 80
  selector:
    app: nginx
  type: NodePort
---
```
1. Confirm that the `type` property of each additional workload's Kubernetes service is similarly configured.
<p class='note'><strong>Note</strong>: For an example of a fully configured Kubernetes service, see the
<a href="https://github.com/cloudfoundry-incubator/kubo-ci/blob/master/specs/nginx.yml">nginx app's example <code>type: NodePort</code> configuration</a> in GitHub.</p>
For more information about configuring the `NodePort` Service type, see the
[Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport).
#### <a id='external-lb-generic-deploy'></a>Deploy and Expose Your Workload
1. To deploy the service configuration for your workload, run the following command:
```
kubectl apply -f SERVICE-CONFIG
```
Where `SERVICE-CONFIG` is your workload's Kubernetes service configuration.
<br>
For example:
<pre class="terminal">kubectl apply -f nginx.yml</pre>
With the example nginx configuration, this command creates the Service and three pod replicas, spanning three worker nodes.
1. Deploy your applications, deployments, config maps, persistent volumes, secrets,
and any other configurations or objects necessary for your applications to run.
1. Wait until your cloud provider has connected your worker nodes on a specific port.
#### <a id='external-lb-generic-access'></a>Access Your Workload
1. Retrieve the IP address for a worker node with a running app pod.
<p class='note'><strong>Note</strong>: If your cluster has more worker
nodes than app pod replicas, some worker nodes do not run an app pod. Select a worker
node that is running an app pod.</p>
You can retrieve the IP address for a worker node with a running app pod in
one of the following ways:
* On the command line, run the following command:
```
kubectl get nodes -L spec.ip
```
* On the Ops Manager command line, run the following to find the IP address:
```
bosh vms
```
You will use this IP address when configuring your external load balancer.
1. To see a listing of port numbers, run the following command:
```
kubectl get svc SERVICE-NAME
```
Where `SERVICE-NAME` is your workload configuration's specified service `name`.
<br>
For example:
<pre class="terminal">kubectl get svc nginx</pre>
1. Find the node port number in the `3XXXX` range. You will use this port number when configuring your external load balancer.
1. Configure your external load balancer to map your application URI to the IP address and port number that you collected above. For instructions, refer to your load balancer's documentation.
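The IP and port collection above can be sketched as two small lookups. The jsonpath field paths are standard Kubernetes; the helper names and the service name are assumptions:

```shell
# Sketch: list each worker node with its InternalIP address.
node_internal_ips() {
  kubectl get nodes \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
}

# Sketch: read the node port assigned to a Service.
# Assumes the first listed port is the one exposed for your app.
node_port() {
  kubectl get svc "$1" -o jsonpath='{.spec.ports[0].nodePort}'
}

# Example: node_internal_ips
# Example: node_port nginx
```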
## <a id='without-lb'></a>Deploy Workloads without a Load Balancer
If you do not use an external load balancer, you can configure your service to expose a static port on each worker node.
The following steps configure your service to be reachable from outside the cluster at `http://NODE-IP:NODE-PORT`.
#### <a id='without-lb-configure'></a>Configure Your Workload
To expose a static port on your workload, perform the following steps:
1. Open your workload's Kubernetes service configuration file in a text editor.
1. To expose the workload without a load balancer, confirm that the Service object is configured to be `type: NodePort`.
<br>
For example:
```
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx
  name: nginx
spec:
  ports:
  - port: 80
  selector:
    app: nginx
  type: NodePort
---
```
1. Confirm that the `type` property of each additional workload's Kubernetes service is similarly configured.
<p class='note'><strong>Note</strong>: For an example of a fully configured Kubernetes service, see the
<a href="https://github.com/cloudfoundry-incubator/kubo-ci/blob/master/specs/nginx.yml">nginx app's example <code>type: NodePort</code> configuration</a> in GitHub.</p>
For more information about configuring the `NodePort` Service type, see the
[Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport).
#### <a id='without-lb-deploy'></a>Deploy and Expose Your Workload
1. To deploy the service configuration for your workload, run the following command:
```
kubectl apply -f SERVICE-CONFIG
```
Where `SERVICE-CONFIG` is your workload's Kubernetes service configuration.
<br>
For example:
<pre class="terminal">kubectl apply -f nginx.yml</pre>
With the example nginx configuration, this command creates the Service and three pod replicas, spanning three worker nodes.
1. Deploy your applications, deployments, config maps, persistent volumes, secrets,
and any other configurations or objects necessary for your applications to run.
1. Wait until your cloud provider has connected your worker nodes on a specific port.
#### <a id='without-lb-access'></a>Access Your Workload
1. Retrieve the IP address for a worker node with a running app pod.
<p class='note'><strong>Note</strong>: If your cluster has more worker
nodes than app pod replicas, some worker nodes do not run an app pod. Select a worker
node that is running an app pod.</p>
You can retrieve the IP address for a worker node with a running app pod in
one of the following ways:
* On the command line, run the following command:
```
kubectl get nodes -L spec.ip
```
* On the Ops Manager command line, run the following to find the IP address:
```
bosh vms
```
1. To see a listing of port numbers, run the following command:
```
kubectl get svc SERVICE-NAME
```
Where `SERVICE-NAME` is your workload configuration's specified service `name`.
<br>
For example:
<pre class="terminal">kubectl get svc nginx</pre>
1. Find the node port number in the `3XXXX` range.
1. To access the app, run the following command line:
```
curl http://NODE-IP:NODE-PORT
```
Where:
* `NODE-IP` is the IP address of the worker node.
* `NODE-PORT` is the node port number.
<p class='note'><strong>Note</strong>: Run this command on a server with network connectivity and visibility to the IP address of the worker node.</p>
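Putting the steps above together, the following sketch builds the NodePort URL for a chosen worker node. The node IP and service name in the example are placeholders, and it assumes the first listed port is the one exposed for your app:

```shell
# Sketch: compose the URL for a workload exposed via NodePort.
# The node IP must belong to a worker node that runs an app pod;
# the values in the example below are placeholders.
nodeport_url() {
  local node_ip="$1" svc="$2"
  echo "http://${node_ip}:$(kubectl get svc "$svc" -o jsonpath='{.spec.ports[0].nodePort}')"
}

# Example: curl "$(nodeport_url 10.0.11.5 nginx)"
```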