This topic describes the firewall port and protocol requirements for using <%= vars.product_full %>
<% if current_page.data.nsxt == true %>
on vSphere with NSX-T integration.
<% else %>
on vSphere.
<% end %>
Firewalls and security policies filter traffic and limit access in environments with strict
inter-network access control policies.
Apps frequently need to pass internal communication between system components on
different networks, which requires one or more conduits through the environment's firewalls.
Firewall rules are also required to enable interfacing with external systems, such as
enterprise apps or apps and data on the public Internet.
For <%= vars.product_short %>, Pivotal recommends that you disable security policies that filter traffic between
<% if current_page.data.nsxt == true %>
the networks supporting the system. To secure the environment and grant access between
system components with <%= vars.product_short %>, use one of the following methods:
* Enable access to apps through standard Kubernetes load balancers and ingress
controller types. This enables you to designate specific ports and protocols as a firewall conduit, as shown in the example below.
* Enable access using the NSX-T load balancer and ingress. This enables you to configure external addresses and ports
that are automatically mapped and resolved to internal/local addresses and ports.
For information on ports and protocol requirements for vSphere without NSX-T,
see [Firewall Ports and Protocols Requirements for vSphere without NSX-T](ports-protocols-wo-nsx-t.html).
<% else %>
the networks supporting the system. With <%= vars.product_short %>, you should enable access to apps
through standard Kubernetes load balancers and ingress controller types. This enables you to designate
specific ports and protocols as a firewall conduit, as shown in the example below.
For information on ports and protocol requirements for vSphere with NSX-T,
see [Firewall Ports and Protocols Requirements for vSphere with NSX-T](ports-protocols-nsx-t.html).
<% end %>
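
For example, a single app port can become the firewall conduit by exposing the app through a standard
Kubernetes `LoadBalancer` Service. The following is a minimal sketch using the Kubernetes Python client;
the namespace, label, and port values are placeholder assumptions for illustration and are not part of
<%= vars.product_short %>.

```python
# Illustrative sketch only: exposes an app behind a Kubernetes LoadBalancer
# Service so that one well-known port becomes the firewall conduit.
# The namespace, label, and port values below are assumed placeholders.
from kubernetes import client, config

def expose_app(namespace="demo", app_label="my-app", external_port=443, container_port=8080):
    config.load_kube_config()  # assumes a kubeconfig for the target cluster
    service = client.V1Service(
        metadata=client.V1ObjectMeta(name=f"{app_label}-lb"),
        spec=client.V1ServiceSpec(
            type="LoadBalancer",              # provisions an external load balancer for the app
            selector={"app": app_label},      # routes to pods labeled app=<app_label>
            ports=[client.V1ServicePort(
                port=external_port,           # port exposed through the firewall
                target_port=container_port,   # port the app listens on inside the pod
                protocol="TCP",
            )],
        ),
    )
    return client.CoreV1Api().create_namespaced_service(namespace, service)
```

With this approach, the firewall only needs to permit traffic to the load balancer's external address on
the designated port.
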
If you are unable to implement your security policy using the methods described above,
refer to the following tables, which identify
the flows between system components in a typical <%= vars.product_short %> deployment.
<p class="note"><strong>Note</strong>: To control which groups can deploy and scale
your organization's Enterprise PKS-deployed Kubernetes clusters, configure your firewall settings
as described in the Operator -> PKS API server rows below.</p>
## <a id="enterprise"></a> <%= vars.product_short %> Ports and Protocols
<% if current_page.data.nsxt == true %>
The following tables list ports and protocols required for network communications between <%= vars.product_short %> v1.5.0
and later, and vSphere 6.7 and NSX-T 2.4.0.1 and later.
<% else %>
The following tables list ports and protocols required for network communications between <%= vars.product_short %> v1.5.0
and later, and vSphere 6.7 and later.
<% end %>
### <a id="users"></a> <%= vars.product_short %> Users Ports and Protocols
The following table lists ports and protocols used for network communication between <%= vars.product_short %> user interface components.
| Source Component | Destination Component | Destination Protocol | Destination Port | Service |
| --- | --- | --- | --- | --- |
| Admin/Operator Console | All System Components | TCP | 22 | ssh |
| Admin/Operator Console | All System Components | TCP | 80 | http |
| Admin/Operator Console | All System Components | TCP | 443 | https |
| Admin/Operator Console | Cloud Foundry BOSH Director | TCP | 25555 | bosh director rest api |
<% if current_page.data.nsxt == true %>
| Admin/Operator Console | NSX-T API VIP | TCP | 443 | https |
<% else %>
<% end %>
| Admin/Operator Console | Pivotal Cloud Foundry Operations Manager | TCP | 22 | ssh |
| Admin/Operator Console | Pivotal Cloud Foundry Operations Manager | TCP | 443 | https |
| Admin/Operator Console | PKS Controller | TCP | 9021 | pks api server |
| Admin/Operator Console | vCenter Server | TCP | 443 | https |
| Admin/Operator Console | vCenter Server | TCP | 5480 | vami |
| Admin/Operator Console | vSphere ESXI Hosts Mgmt. vmknic | TCP | 902 | ideafarm-door |
| Admin/Operator and Developer Consoles | Harbor Private Image Registry | TCP | 80 | http |
| Admin/Operator and Developer Consoles | Harbor Private Image Registry | TCP | 443 | https |
| Admin/Operator and Developer Consoles | Harbor Private Image Registry | TCP | 4443 | notary |
| Admin/Operator and Developer Consoles | Kubernetes App Load-Balancer Svc | TCP/UDP | Varies | varies with apps |
| Admin/Operator and Developer Consoles | Kubernetes Cluster API Server -LB VIP | TCP | 8443 | httpsca |
| Admin/Operator and Developer Consoles | Kubernetes Cluster Ingress Controller | TCP | 80 | http |
| Admin/Operator and Developer Consoles | Kubernetes Cluster Ingress Controller | TCP | 443 | https |
| Admin/Operator and Developer Consoles | Kubernetes Cluster Worker Node | TCP/UDP | 30000-32767 | kubernetes nodeport |
| Admin/Operator and Developer Consoles | PKS Controller | TCP | 8443 | httpsca |
| All User Consoles (Operator, Developer, Consumer) | Kubernetes App Load-Balancer Svc | TCP/UDP | Varies | varies with apps |
| All User Consoles (Operator, Developer, Consumer) | Kubernetes Cluster Ingress Controller | TCP | 80 | http |
| All User Consoles (Operator, Developer, Consumer) | Kubernetes Cluster Ingress Controller | TCP | 443 | https |
| All User Consoles (Operator, Developer, Consumer) | Kubernetes Cluster Worker Node | TCP/UDP | 30000-32767 | kubernetes nodeport |
<% if current_page.data.nsxt == true %>
<p class="note"><strong>Note</strong>: The <code>type:NodePort</code> Service type is not supported for <%= vars.product_short %> deployments on vSphere with NSX-T.
Only <code>type:LoadBalancer</code> and Services associated with Ingress rules are supported on vSphere with NSX-T.</p>
<% else %>
<% end %>
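
To verify that the user-facing flows above are permitted by your firewall, you can spot-check TCP
reachability from a user console. The following is a minimal sketch; the host names are placeholder
assumptions that you would replace with your environment's addresses.

```python
# Illustrative sketch only: spot-checks TCP reachability for a few of the
# user-facing flows listed above. Host names are placeholders.
import socket

USER_FLOWS = [
    ("pks.example.com", 9021),        # PKS API server
    ("k8s-api.example.com", 8443),    # Kubernetes cluster API server LB VIP
    ("ingress.example.com", 80),      # Kubernetes cluster ingress controller (http)
    ("ingress.example.com", 443),     # Kubernetes cluster ingress controller (https)
    ("worker-1.example.com", 30000),  # sample port from the NodePort range
]

def tcp_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in USER_FLOWS:
        state = "open" if tcp_port_open(host, port) else "blocked or unreachable"
        print(f"{host}:{port} -> {state}")
```
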
### <a id="core"></a> <%= vars.product_short %> Core Ports and Protocols
The following table lists ports and protocols used for network communication between core <%= vars.product_short %> components.
| Source Component | Destination Component | Destination Protocol | Destination Port | Service|
| --- | --- | --- | --- | --- |
| All System Components | Corporate Domain Name Server | TCP/UDP | 53 | dns|
| All System Components | Network Time Server | UDP | 123 | ntp|
| All System Components | vRealize LogInsight | TCP/UDP | 514/1514 | syslog/tls syslog|
| All System Control Plane Components | AD/LDAP Directory Server | TCP/UDP | 389/636 | ldap/ldaps|
| Pivotal Cloud Foundry Operations Manager | Admin/Operator Console | TCP | 22 | ssh|
| Pivotal Cloud Foundry Operations Manager | Cloud Foundry BOSH Director | TCP | 6868 | bosh agent http|
| Pivotal Cloud Foundry Operations Manager | Cloud Foundry BOSH Director | TCP | 8443 | httpsca|
| Pivotal Cloud Foundry Operations Manager | Cloud Foundry BOSH Director | TCP | 8844 | credhub|
| Pivotal Cloud Foundry Operations Manager | Cloud Foundry BOSH Director | TCP | 25555 | bosh director rest api|
| Pivotal Cloud Foundry Operations Manager | Harbor Private Image Registry | TCP | 22 | ssh|
| Pivotal Cloud Foundry Operations Manager | Kubernetes Cluster Master/Etcd Node | TCP | 22 | ssh|
| Pivotal Cloud Foundry Operations Manager | Kubernetes Cluster Worker Node | TCP | 22 | ssh|
<% if current_page.data.nsxt == true %>
| Pivotal Cloud Foundry Operations Manager | NSX-T API VIP | TCP | 443 | https|
| Pivotal Cloud Foundry Operations Manager | NSX-T Manager/Controller Node | TCP | 22 | ssh|
| Pivotal Cloud Foundry Operations Manager | NSX-T Manager/Controller Node | TCP | 443 | https|
<% else %>
<% end %>
| Pivotal Cloud Foundry Operations Manager | PKS Controller | TCP | 22 | ssh|
| Pivotal Cloud Foundry Operations Manager | PKS Controller | TCP | 8443 | httpsca|
| Pivotal Cloud Foundry Operations Manager | vCenter Server | TCP | 443 | https|
| Pivotal Cloud Foundry Operations Manager | vSphere ESXI Hosts Mgmt. vmknic | TCP | 443 | https|
<% if current_page.data.nsxt == true %>
| Cloud Foundry BOSH Director | NSX-T API VIP | TCP | 443 | https|
<% else %>
<% end %>
| Cloud Foundry BOSH Director | vCenter Server | TCP | 443 | https|
| Cloud Foundry BOSH Director | vSphere ESXI Hosts Mgmt. vmknic | TCP | 443 | https|
| BOSH Compilation Job VM | Cloud Foundry BOSH Director | TCP | 4222 | bosh nats server|
| BOSH Compilation Job VM | Cloud Foundry BOSH Director | TCP | 25250 | bosh blobstore|
| BOSH Compilation Job VM | Cloud Foundry BOSH Director | TCP | 25923 | health monitor daemon|
| BOSH Compilation Job VM | Harbor Private Image Registry | TCP | 443 | https|
| BOSH Compilation Job VM | Harbor Private Image Registry | TCP | 8853 | bosh dns health|
| PKS Controller | Cloud Foundry BOSH Director | TCP | 4222 | bosh nats server|
| PKS Controller | Cloud Foundry BOSH Director | TCP | 8443 | httpsca|
| PKS Controller | Cloud Foundry BOSH Director | TCP | 25250 | bosh blobstore|
| PKS Controller | Cloud Foundry BOSH Director | TCP | 25555 | bosh director rest api|
| PKS Controller | Cloud Foundry BOSH Director | TCP | 25923 | health monitor daemon|
| PKS Controller | Kubernetes Cluster Master/Etcd Node | TCP | 8443 | httpsca|
<% if current_page.data.nsxt == true %>
| PKS Controller | NSX-T API VIP | TCP | 443 | https|
<% else %>
<% end %>
| PKS Controller | vCenter Server | TCP | 443 | https|
| Harbor Private Image Registry | Cloud Foundry BOSH Director | TCP | 4222 | bosh nats server|
| Harbor Private Image Registry | Cloud Foundry BOSH Director | TCP | 25250 | bosh blobstore|
| Harbor Private Image Registry | Cloud Foundry BOSH Director | TCP | 25923 | health monitor daemon|
| Harbor Private Image Registry | IP NAS Storage Array | TCP | 111 | nfs rpc portmapper|
| Harbor Private Image Registry | IP NAS Storage Array | TCP | 2049 | nfs|
| Harbor Private Image Registry | Public CVE Source Database | TCP | 443 | https|
| kube-system pod/telemetry-agent | PKS Controller | TCP | 24224 | fluentd out_forward|
<% if current_page.data.nsxt == true %>
| Kubernetes Cluster Ingress Controller | NSX-T API VIP | TCP | 443 | https|
<% else %>
<% end %>
| Kubernetes Cluster Ingress Controller | vCenter Server | TCP | 443 | https|
| Kubernetes Cluster Master/Etcd Node | Cloud Foundry BOSH Director | TCP | 4222 | bosh nats server|
| Kubernetes Cluster Master/Etcd Node | Cloud Foundry BOSH Director | TCP | 25250 | bosh blobstore|
| Kubernetes Cluster Master/Etcd Node | Cloud Foundry BOSH Director | TCP | 25923 | health monitor daemon|
| Kubernetes Cluster Master/Etcd Node | Kubernetes Cluster Master/Etcd Node | TCP | 2379 | etcd client|
| Kubernetes Cluster Master/Etcd Node | Kubernetes Cluster Master/Etcd Node | TCP | 2380 | etcd server|
| Kubernetes Cluster Master/Etcd Node | Kubernetes Cluster Master/Etcd Node | TCP | 8443 | httpsca|
| Kubernetes Cluster Master/Etcd Node | Kubernetes Cluster Master/Etcd Node | TCP | 8853 | bosh dns health|
| Kubernetes Cluster Master/Etcd Node | Kubernetes Cluster Worker Node | TCP | 4194 | cadvisor|
| Kubernetes Cluster Master/Etcd Node | Kubernetes Cluster Worker Node | TCP | 10250 | kubelet api|
| Kubernetes Cluster Master/Etcd Node | Kubernetes Cluster Worker Node | TCP | 31194 | cadvisor|
<% if current_page.data.nsxt == true %>
| Kubernetes Cluster Master/Etcd Node | NSX-T API VIP | TCP | 443 | https|
<% else %>
<% end %>
| Kubernetes Cluster Master/Etcd Node | PKS Controller | TCP | 8443 | httpsca|
| Kubernetes Cluster Master/Etcd Node | PKS Controller | TCP | 8853 | bosh dns health|
| Kubernetes Cluster Master/Etcd Node | vCenter Server | TCP | 443 | https|
| Kubernetes Cluster Worker Node | Cloud Foundry BOSH Director | TCP | 4222 | bosh nats server|
| Kubernetes Cluster Worker Node | Cloud Foundry BOSH Director | TCP | 25250 | bosh blobstore|
| Kubernetes Cluster Worker Node | Cloud Foundry BOSH Director | TCP | 25923 | health monitor daemon|
| Kubernetes Cluster Worker Node | Harbor Private Image Registry | TCP | 443 | https|
| Kubernetes Cluster Worker Node | Harbor Private Image Registry | TCP | 8853 | bosh dns health|
| Kubernetes Cluster Worker Node | IP NAS Storage Array | TCP | 111 | nfs rpc portmapper|
| Kubernetes Cluster Worker Node | IP NAS Storage Array | TCP | 2049 | nfs|
| Kubernetes Cluster Worker Node | Kubernetes Cluster Master/Etcd Node | TCP | 8443 | httpsca|
| Kubernetes Cluster Worker Node | Kubernetes Cluster Master/Etcd Node | TCP | 8853 | bosh dns health|
| Kubernetes Cluster Worker Node | Kubernetes Cluster Master/Etcd Node | TCP | 10250 | kubelet api|
| pks-system pod/cert-generator | PKS Controller | TCP | 24224 | fluentd out_forward|
| pks-system pod/fluent-bit | PKS Controller | TCP | 24224 | fluentd out_forward|
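
When requesting firewall changes, it can help to express the flows in the table above as explicit allow
rules. The following sketch captures a few representative rows as data and prints generic rules for
review; the rule syntax is illustrative only and is not tied to any particular firewall product.

```python
# Illustrative sketch only: renders a few core flows from the table above as
# generic allow rules for review with a firewall team.
from collections import namedtuple

Flow = namedtuple("Flow", "source destination protocol port service")

CORE_FLOWS = [
    Flow("PKS Controller", "Cloud Foundry BOSH Director", "TCP", 25555, "bosh director rest api"),
    Flow("PKS Controller", "Kubernetes Cluster Master/Etcd Node", "TCP", 8443, "httpsca"),
    Flow("Kubernetes Cluster Master/Etcd Node", "Kubernetes Cluster Worker Node", "TCP", 10250, "kubelet api"),
    Flow("Kubernetes Cluster Worker Node", "Harbor Private Image Registry", "TCP", 443, "https"),
]

def to_allow_rule(flow):
    """Format one flow as a generic, human-readable allow rule."""
    return (f"allow {flow.protocol.lower()} from '{flow.source}' "
            f"to '{flow.destination}' port {flow.port}  # {flow.service}")

for flow in CORE_FLOWS:
    print(to_allow_rule(flow))
```
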
## <a id="vmware"></a> VMware Ports and Protocols
The following tables list ports and protocols required for network communication between VMware components.
### <a id="virtual-infra"></a> VMware Virtual Infrastructure Ports and Protocols
The following table lists ports and protocols used for network communication between VMware virtual infrastructure components.
| Source Component | Destination Component | Destination Protocol | Destination Port | Service|
| --- | --- | --- | --- | --- |
<% if current_page.data.nsxt == true %>
| vCenter Server | NSX-T Manager/Controller Node | TCP | 8080 | http alt|
<% else %>
<% end %>
| vCenter Server | vSphere ESXI Hosts Mgmt. vmknic | TCP | 443 | https|
| vCenter Server | vSphere ESXI Hosts Mgmt. vmknic | TCP | 8080 | http alt|
| vCenter Server | vSphere ESXI Hosts Mgmt. vmknic | TCP | 9080 | io filter storage|
<% if current_page.data.nsxt == true %>
| vSphere ESXI Hosts Mgmt. vmknic | NSX-T Manager/Controller Node | TCP | 443 | https|
| vSphere ESXI Hosts Mgmt. vmknic | NSX-T Manager/Controller Node | TCP | 1235 | netcpa|
| vSphere ESXI Hosts Mgmt. vmknic | NSX-T Manager/Controller Node | TCP | 5671 | amqp traffic|
| vSphere ESXI Hosts Mgmt. vmknic | NSX-T Manager/Controller Node | TCP | 8080 | http alt|
<% else %>
<% end %>
| vSphere ESXI Hosts Mgmt. vmknic | vCenter Server | UDP | 902 | ideafarm-door|
| vSphere ESXI Hosts Mgmt. vmknic | vCenter Server | TCP | 9084 | update manager|
| vSphere ESXI Hosts Mgmt. vmknic | vSphere ESXI Hosts Mgmt. vmknic | TCP | 8182 | vsphere ha|
| vSphere ESXI Hosts Mgmt. vmknic | vSphere ESXI Hosts Mgmt. vmknic | UDP | 8182 | vsphere ha|
| vSphere ESXI Hosts vMotion vmknic | vSphere ESXI Hosts vMotion vmknic | TCP | 8000 | vmotion|
| vSphere ESXI Hosts IP Storage vmknic | IP NAS Storage Array | TCP | 111 | nfs rpc portmapper|
| vSphere ESXI Hosts IP Storage vmknic | IP NAS Storage Array | TCP | 2049 | nfs|
| vSphere ESXI Hosts IP Storage vmknic | IP NAS Storage Array | TCP | 3260 | iscsi|
| vSphere ESXI Hosts vSAN vmknic | vSphere ESXI Hosts vSAN vmknic | TCP | 2233 | vsan transport|
| vSphere ESXI Hosts vSAN vmknic | vSphere ESXI Hosts vSAN vmknic | UDP | 12321 | unicast agent|
| vSphere ESXI Hosts vSAN vmknic | vSphere ESXI Hosts vSAN vmknic | UDP | 12345 | vsan cluster svc|
| vSphere ESXI Hosts vSAN vmknic | vSphere ESXI Hosts vSAN vmknic | UDP | 23451 | vsan cluster svc|
| vSphere ESXI Hosts TEP vmknic | vSphere ESXI Hosts TEP vmknic | UDP | 3784 | bfd|
| vSphere ESXI Hosts TEP vmknic | vSphere ESXI Hosts TEP vmknic | UDP | 3785 | bfd|
| vSphere ESXI Hosts TEP vmknic | vSphere ESXI Hosts TEP vmknic | UDP | 6081 | geneve|
<% if current_page.data.nsxt == true %>
| vSphere ESXI Hosts TEP vmknic | NSX-T Edge TEP vNIC | UDP | 3784 | bfd|
| vSphere ESXI Hosts TEP vmknic | NSX-T Edge TEP vNIC | UDP | 3785 | bfd|
| vSphere ESXI Hosts TEP vmknic | NSX-T Edge TEP vNIC | UDP | 6081 | geneve|
| NSX-T Manager/Controller Node | NSX-T API VIP | TCP | 443 | https|
| NSX-T Manager/Controller Node | NSX-T Manager/Controller Node | TCP | 443 | https|
| NSX-T Manager/Controller Node | NSX-T Manager/Controller Node | TCP | 5671 | amqp traffic|
| NSX-T Manager/Controller Node | NSX-T Manager/Controller Node | TCP | 8080 | http alt|
| NSX-T Manager/Controller Node | NSX-T Manager/Controller Node | TCP | 9000 | loginsight ingestion api|
| NSX-T Manager/Controller Node | Traceroute Destination | UDP | 33434-33523 | traceroute|
| NSX-T Manager/Controller Node | vCenter Server | TCP | 80 | http|
| NSX-T Manager/Controller Node | vCenter Server | TCP | 443 | https|
| NSX-T Manager/Controller Node | vSphere ESXI Hosts Mgmt. vmknic | TCP | 443 | https|
| NSX-T Edge Management | NSX-T Edge Management | TCP | 1167 | DHCP backend|
| NSX-T Edge Management | NSX-T Edge Management | TCP | 2480 | Nestdb|
| NSX-T Edge Management | NSX-T Edge Management | UDP | 3784 | bfd|
| NSX-T Edge Management | NSX-T Edge Management | UDP | 50263 | high-availability|
| NSX-T Edge Management | NSX-T Manager/Controller Node | TCP | 443 | https|
| NSX-T Edge Management | NSX-T Manager/Controller Node | TCP | 1235 | netcpa|
| NSX-T Edge Management | NSX-T Manager/Controller Node | TCP | 5671 | amqp traffic|
| NSX-T Edge Management | NSX-T Manager/Controller Node | TCP | 8080 | http alt|
| NSX-T Edge Management | Traceroute Destination | UDP | 33434-33523 | traceroute|
| NSX-T Edge TEP vNIC | NSX-T Edge TEP vNIC | UDP | 3784 | bfd|
| NSX-T Edge TEP vNIC | NSX-T Edge TEP vNIC | UDP | 3785 | bfd|
| NSX-T Edge TEP vNIC | NSX-T Edge TEP vNIC | UDP | 6081 | geneve|
| NSX-T Edge TEP vNIC | NSX-T Edge TEP vNIC | UDP | 50263 | high-availability|
| NSX-T Edge TEP vNIC | vSphere ESXI Hosts TEP vmknic | UDP | 3784 | bfd|
| NSX-T Edge TEP vNIC | vSphere ESXI Hosts TEP vmknic | UDP | 3785 | bfd|
| NSX-T Edge TEP vNIC | vSphere ESXI Hosts TEP vmknic | UDP | 6081 | geneve|
| NSX-T Edge Tier-0 Uplink IP(s) / HA VIP | Physical Network Router | TCP | 179 | bgp|
| Physical Network Router | NSX-T Edge Tier-0 Uplink IP(s) / HA VIP | TCP | 179 | bgp|
<% else %>
<% end %>
### <a id="optional-integration"></a> VMware Optional Integration Ports and Protocols
The following table lists ports and protocols used for network communication between optional VMware integrations.
| Source Component | Destination Component | Destination Protocol | Destination Port | Service|
| --- | --- | --- | --- | --- |
| Admin/Operator Console | vRealize Operations Manager | TCP | 443 | https|
| vRealize Operations Manager | Kubernetes Cluster API Server -LB VIP | TCP | 8443 | httpsca|
<% if current_page.data.nsxt == true %>
| vRealize Operations Manager | NSX-T API VIP | TCP | 443 | https|
<% else %>
<% end %>
| vRealize Operations Manager | PKS Controller | TCP | 8443 | httpsca|
| Admin/Operator Console | vRealize LogInsight | TCP | 443 | https|
| Kubernetes Cluster Ingress Controller | vRealize LogInsight | TCP | 9000 | ingestion api|
| Kubernetes Cluster Master/Etcd Node | vRealize LogInsight | TCP | 9000 | ingestion api|
| Kubernetes Cluster Master/Etcd Node | vRealize LogInsight | TCP | 9543 | ingestion api -tls|
| Kubernetes Cluster Worker Node | vRealize LogInsight | TCP | 9000 | ingestion api|
| Kubernetes Cluster Worker Node | vRealize LogInsight | TCP | 9543 | ingestion api -tls|
<% if current_page.data.nsxt == true %>
| NSX-T Manager/Controller Node | vRealize LogInsight | TCP | 9000 | ingestion api|
<% else %>
<% end %>
| PKS Controller | vRealize LogInsight | TCP | 9000 | ingestion api|
| Admin/Operator and Developer Consoles | Wavefront SaaS APM | TCP | 443 | https|
| kube-system pod/wavefront-proxy | Wavefront SaaS APM | TCP | 443 | https|
| kube-system pod/wavefront-proxy | Wavefront SaaS APM | TCP | 8443 | httpsca|
| pks-system pod/wavefront-collector | PKS Controller | TCP | 24224 | fluentd out_forward|
| Admin/Operator Console | vRealize Network Insight Platform | TCP | 443 | https|
| Admin/Operator Console | vRealize Network Insight Proxy | TCP | 22 | ssh|
| vRealize Network Insight Proxy | Kubernetes Cluster API Server -LB VIP | TCP | 8443 | httpsca|
<% if current_page.data.nsxt == true %>
| vRealize Network Insight Proxy | NSX-T API VIP | TCP | 22 | ssh|
| vRealize Network Insight Proxy | NSX-T API VIP | TCP | 443 | https|
<% else %>
<% end %>
| vRealize Network Insight Proxy | PKS Controller | TCP | 8443 | httpsca|
| vRealize Network Insight Proxy | PKS Controller | TCP | 9021 | pks api server|