This page explains how to create a private Google Kubernetes Engine (GKE) cluster, which is a type of VPC-native cluster. In a private cluster, nodes only have internal IP addresses, which means that nodes and Pods are isolated from the internet by default.
Internal IP addresses for nodes come from the primary IP address range of the subnet you choose for the cluster. Pod IP addresses and Service IP addresses come from two subnet secondary IP address ranges of that same subnet. For more information, see IP ranges for VPC-native clusters.
GKE versions 1.14.2 and later support any internal IP address range, including private ranges (RFC 1918 and other private ranges) and privately used public IP address ranges. See the VPC documentation for a list of valid internal IP address ranges.
To learn more about how private clusters work, see Private clusters.
Before you begin
Familiarize yourself with the requirements, restrictions, and limitations before moving to the next step.
Before you start, make sure you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
Ensure you have the correct permissions to create clusters. At minimum, you should have the Kubernetes Engine Cluster Admin role.
Ensure you have a route to the Default Internet Gateway.
Creating a private cluster with no client access to the public endpoint
In this section, you create the following resources:
- A private cluster named private-cluster-0 that has private nodes, and that has no client access to the public endpoint.
- A network named my-net-0.
- A subnet named my-subnet-0.
Console
Create a network and subnet
Go to the VPC networks page in the Google Cloud console.
Click add_box Create VPC network.
For Name, enter my-net-0.
For Subnet creation mode, select Custom.
In the New subnet section, for Name, enter my-subnet-0.
In the Region list, select the region that you want.
For IP address range, enter 10.2.204.0/22.
Set Private Google Access to On.
Click Done.
Click Create.
Create a private cluster
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click Create, then in the Standard or Autopilot section, click Configure.
For the Name, specify private-cluster-0.
In the navigation pane, click Networking.
In the Network list, select my-net-0.
In the Node subnet list, select my-subnet-0.
Select the Private cluster radio button.
Clear the Access control plane using its external IP address checkbox.
(Optional for Autopilot): Set Control plane IP range to 172.16.0.32/28.
Click Create.
gcloud
For Autopilot clusters, run the following command:
gcloud container clusters create-auto private-cluster-0 \
    --create-subnetwork name=my-subnet-0 \
    --enable-master-authorized-networks \
    --enable-private-nodes \
    --enable-private-endpoint
For Standard clusters, run the following command:
gcloud container clusters create private-cluster-0 \
    --create-subnetwork name=my-subnet-0 \
    --enable-master-authorized-networks \
    --enable-ip-alias \
    --enable-private-nodes \
    --enable-private-endpoint \
    --master-ipv4-cidr 172.16.0.32/28
where:
- --create-subnetwork name=my-subnet-0 causes GKE to automatically create a subnet named my-subnet-0.
- --enable-master-authorized-networks specifies that access to the public endpoint is restricted to IP address ranges that you authorize.
- --enable-ip-alias makes the cluster VPC-native (not required for Autopilot).
- --enable-private-nodes indicates that the cluster's nodes don't have external IP addresses.
- --enable-private-endpoint indicates that the cluster is managed using the internal IP address of the control plane API endpoint.
- --master-ipv4-cidr 172.16.0.32/28 specifies an internal IP address range for the control plane (optional for Autopilot). This setting is permanent for this cluster and must be unique within the VPC. The use of non-RFC 1918 internal IP addresses is supported.
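The uniqueness and sizing rules for --master-ipv4-cidr can be sanity-checked offline before you run the command. A minimal sketch using Python's ipaddress module; the node and Pod ranges here are assumptions for illustration, not values GKE guarantees:

```python
import ipaddress

# Hypothetical in-use ranges for illustration: a node subnet primary range
# and a Pod secondary range (assumptions for this example).
node_subnet = ipaddress.ip_network("10.2.204.0/22")
pod_range = ipaddress.ip_network("10.52.0.0/14")

# The control plane range passed to --master-ipv4-cidr.
control_plane = ipaddress.ip_network("172.16.0.32/28")

# It must be a /28 (16 addresses) and must not overlap any range in the VPC.
assert control_plane.prefixlen == 28
assert control_plane.num_addresses == 16
for used in (node_subnet, pod_range):
    assert not control_plane.overlaps(used)
```

If the overlap check fails for your real ranges, pick a different /28 before creating the cluster, because the setting cannot be changed later.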
API
To create a cluster without a publicly-reachable control plane, specify the enablePrivateEndpoint: true field in the privateClusterConfig resource.
At this point, these are the only IP addresses that have access to the control plane:
- The primary range of my-subnet-0.
- The secondary range used for Pods.

For example, suppose you created a VM in the primary range of my-subnet-0. Then on that VM, you could configure kubectl to use the internal IP address of the control plane.
If you want to access the control plane from outside my-subnet-0, you must authorize at least one address range to have access to the private endpoint.

Suppose you have a VM that is in the default network, in the same region as your cluster, but not in my-subnet-0.

For example:
- my-subnet-0: 10.0.0.0/22
- Pod secondary range: 10.52.0.0/14
- VM address: 10.128.0.3
You could authorize the VM to access the control plane by using this command:
gcloud container clusters update private-cluster-0 \
--enable-master-authorized-networks \
--master-authorized-networks 10.128.0.3/32
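The /32 suffix in the command above matters: it authorizes exactly the one VM address and nothing else. A quick illustration with Python's ipaddress module:

```python
import ipaddress

# A /32 network contains exactly one address: the VM at 10.128.0.3.
authorized = ipaddress.ip_network("10.128.0.3/32")
vm = ipaddress.ip_address("10.128.0.3")
neighbor = ipaddress.ip_address("10.128.0.4")

assert authorized.num_addresses == 1
assert vm in authorized
assert neighbor not in authorized  # even the adjacent address is excluded
```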
Creating a private cluster with limited access to the public endpoint
When creating a private cluster using this configuration, you can choose to use an automatically generated subnet, or a custom subnet.
Using an automatically generated subnet
In this section, you create a private cluster named private-cluster-1 where GKE automatically generates a subnet for your cluster nodes. The subnet has Private Google Access enabled. In the subnet, GKE automatically creates two secondary ranges: one for Pods and one for Services.
You can use the Google Cloud CLI or the GKE API.
gcloud
For Autopilot clusters, run the following command:
gcloud container clusters create-auto private-cluster-1 \
    --create-subnetwork name=my-subnet-1 \
    --enable-master-authorized-networks \
    --enable-private-nodes
For Standard clusters, run the following command:
gcloud container clusters create private-cluster-1 \
    --create-subnetwork name=my-subnet-1 \
    --enable-master-authorized-networks \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28
where:
- --create-subnetwork name=my-subnet-1 causes GKE to automatically create a subnet named my-subnet-1.
- --enable-master-authorized-networks specifies that access to the public endpoint is restricted to IP address ranges that you authorize.
- --enable-ip-alias makes the cluster VPC-native (not required for Autopilot).
- --enable-private-nodes indicates that the cluster's nodes don't have external IP addresses.
- --master-ipv4-cidr 172.16.0.0/28 specifies an internal IP address range for the control plane (optional for Autopilot). This setting is permanent for this cluster and must be unique within the VPC. The use of non-RFC 1918 internal IP addresses is supported.
API
Specify the privateClusterConfig field in the Cluster API resource:
{
"name": "private-cluster-1",
...
"ipAllocationPolicy": {
"createSubnetwork": true,
},
...
"privateClusterConfig" {
"enablePrivateNodes": boolean # Creates nodes with internal IP addresses only
"enablePrivateEndpoint": boolean # false creates a cluster control plane with a publicly-reachable endpoint
"masterIpv4CidrBlock": string # CIDR block for the cluster control plane
"privateEndpoint": string # Output only
"publicEndpoint": string # Output only
}
}
At this point, these are the only IP addresses that have access to the cluster control plane:
- The primary range of my-subnet-1.
- The secondary range used for Pods.
Suppose you have a group of machines, outside of your VPC network, that have addresses in the range 203.0.113.0/29. You could authorize those machines to access the public endpoint by entering this command:
gcloud container clusters update private-cluster-1 \
--enable-master-authorized-networks \
--master-authorized-networks 203.0.113.0/29
Now these are the only IP addresses that have access to the control plane:
- The primary range of my-subnet-1.
- The secondary range used for Pods.
- Address ranges that you have authorized, for example, 203.0.113.0/29.
Using a custom subnet
In this section, you create the following resources:
- A private cluster named private-cluster-2.
- A network named my-net-2.
- A subnet named my-subnet-2, with primary range 192.168.0.0/20, for your cluster nodes. Your subnet has the following secondary address ranges:
  - my-pods for the Pod IP addresses.
  - my-services for the Service IP addresses.
Console
Create a network, subnet, and secondary ranges
Go to the VPC networks page in the Google Cloud console.
Click add_box Create VPC network.
For Name, enter my-net-2.
For Subnet creation mode, select Custom.
In the New subnet section, for Name, enter my-subnet-2.
In the Region list, select the region that you want.
For IP address range, enter 192.168.0.0/20.
Click Create secondary IP range. For Subnet range name, enter my-services, and for Secondary IP range, enter 10.0.32.0/20.
Click Add IP range. For Subnet range name, enter my-pods, and for Secondary IP range, enter 10.4.0.0/14.
Set Private Google Access to On.
Click Done.
Click Create.
Create a private cluster
Create a private cluster that uses your subnet:
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click Create, then in the Standard or Autopilot section, click Configure.
For the Name, enter private-cluster-2.
From the navigation pane, click Networking.
Select the Private cluster radio button.
To create a control plane that is accessible from authorized external IP ranges, keep the Access control plane using its external IP address checkbox selected.
(Optional for Autopilot) Set Control plane IP range to 172.16.0.16/28.
In the Network list, select my-net-2.
In the Node subnet list, select my-subnet-2.
Clear the Automatically create secondary ranges checkbox.
In the Pod secondary CIDR range list, select my-pods.
In the Services secondary CIDR range list, select my-services.
Select the Enable control plane authorized networks checkbox.
Click Create.
gcloud
Create a network
First, create a network for your cluster. The following command creates a network, my-net-2:
gcloud compute networks create my-net-2 \
--subnet-mode custom
Create a subnet and secondary ranges
Next, create a subnet, my-subnet-2, in the my-net-2 network, with secondary ranges my-pods for Pods and my-services for Services:
gcloud compute networks subnets create my-subnet-2 \
--network my-net-2 \
--range 192.168.0.0/20 \
--secondary-range my-pods=10.4.0.0/14,my-services=10.0.32.0/20 \
--enable-private-ip-google-access
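Before running the command, you can confirm that the primary and secondary ranges don't overlap, since overlapping ranges would be rejected. A minimal check in Python using the ranges from this example:

```python
import ipaddress

primary = ipaddress.ip_network("192.168.0.0/20")   # node range
pods = ipaddress.ip_network("10.4.0.0/14")         # my-pods
services = ipaddress.ip_network("10.0.32.0/20")    # my-services

# Every pair of ranges in the subnet must be disjoint.
ranges = [primary, pods, services]
for i, a in enumerate(ranges):
    for b in ranges[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"
```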
Create a private cluster
Now, create a private cluster, private-cluster-2, using the network, subnet, and secondary ranges you created.
For Autopilot clusters, run the following command:
gcloud container clusters create-auto private-cluster-2 \
    --enable-master-authorized-networks \
    --network my-net-2 \
    --subnetwork my-subnet-2 \
    --cluster-secondary-range-name my-pods \
    --services-secondary-range-name my-services \
    --enable-private-nodes
For Standard clusters, run the following command:
gcloud container clusters create private-cluster-2 \
    --enable-master-authorized-networks \
    --network my-net-2 \
    --subnetwork my-subnet-2 \
    --cluster-secondary-range-name my-pods \
    --services-secondary-range-name my-services \
    --enable-private-nodes \
    --enable-ip-alias \
    --master-ipv4-cidr 172.16.0.16/28 \
    --no-enable-basic-auth \
    --no-issue-client-certificate
At this point, these are the only IP addresses that have access to the control plane:
- The primary range of my-subnet-2.
- The secondary range my-pods.
Suppose you have a group of machines, outside of my-net-2, that have addresses in the range 203.0.113.0/29. You could authorize those machines to access the public endpoint by entering this command:
gcloud container clusters update private-cluster-2 \
--enable-master-authorized-networks \
--master-authorized-networks 203.0.113.0/29
At this point, these are the only IP addresses that have access to the control plane:
- The primary range of my-subnet-2.
- The secondary range my-pods.
- Address ranges that you have authorized, for example, 203.0.113.0/29.
Using Cloud Shell to access a private cluster
The private cluster you created in the Using an automatically generated subnet section, private-cluster-1, has a public endpoint and has authorized networks enabled. If you want to use Cloud Shell to access the cluster, you must add the external IP address of your Cloud Shell to the cluster's list of authorized networks.
To do this:
In your Cloud Shell command-line window, use dig to find the external IP address of your Cloud Shell:

dig +short myip.opendns.com @resolver1.opendns.com
Add the external address of your Cloud Shell to your cluster's list of authorized networks:
gcloud container clusters update private-cluster-1 \
    --enable-master-authorized-networks \
    --master-authorized-networks EXISTING_AUTH_NETS,SHELL_IP/32
Replace the following:
- EXISTING_AUTH_NETS: the IP addresses of your existing list of authorized networks. You can find your authorized networks in the console or by running the following command:

gcloud container clusters describe private-cluster-1 \
    --format "flattened(masterAuthorizedNetworksConfig.cidrBlocks[])"

- SHELL_IP: the external IP address of your Cloud Shell.
Get credentials, so that you can use kubectl to access the cluster:

gcloud container clusters get-credentials private-cluster-1 \
    --project=PROJECT_ID \
    --internal-ip

Replace PROJECT_ID with your project ID.

Use kubectl, in Cloud Shell, to access your private cluster:

kubectl get nodes
The output is similar to the following:
NAME                                               STATUS   ROLES    AGE    VERSION
gke-private-cluster-1-default-pool-7d914212-18jv   Ready    <none>   104m   v1.21.5-gke.1302
gke-private-cluster-1-default-pool-7d914212-3d9p   Ready    <none>   104m   v1.21.5-gke.1302
gke-private-cluster-1-default-pool-7d914212-wgqf   Ready    <none>   104m   v1.21.5-gke.1302
Creating a private cluster with unrestricted access to the public endpoint
In this section, you create a private cluster where any IP address can access the control plane.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click Create, then in the Standard or Autopilot section, click Configure.
For the Name, enter private-cluster-3.
In the navigation pane, click Networking.
Select the Private cluster option.
Keep the Access control plane using its external IP address checkbox selected.
(Optional for Autopilot) Set Control plane IP range to 172.16.0.32/28.
Leave Network and Node subnet set to default. This causes GKE to generate a subnet for your cluster.
Clear the Enable control plane authorized networks checkbox.
Click Create.
gcloud
For Autopilot clusters, run the following command:
gcloud container clusters create-auto private-cluster-3 \
    --create-subnetwork name=my-subnet-3 \
    --no-enable-master-authorized-networks \
    --enable-private-nodes
For Standard clusters, run the following command:
gcloud container clusters create private-cluster-3 \
    --create-subnetwork name=my-subnet-3 \
    --no-enable-master-authorized-networks \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.32/28
where:
- --create-subnetwork name=my-subnet-3 causes GKE to automatically create a subnet named my-subnet-3.
- --no-enable-master-authorized-networks disables authorized networks for the cluster.
- --enable-ip-alias makes the cluster VPC-native (not required for Autopilot).
- --enable-private-nodes indicates that the cluster's nodes don't have external IP addresses.
- --master-ipv4-cidr 172.16.0.32/28 specifies an internal IP address range for the control plane (optional for Autopilot). This setting is permanent for this cluster and must be unique within the VPC. The use of non-RFC 1918 internal IP addresses is supported.
Other private cluster configurations
In addition to the preceding configurations, you can run private clusters with the following configurations.
Granting private nodes outbound internet access
To provide outbound internet access for your private nodes, such as to pull images from an external registry, configure Cloud NAT on a Cloud Router. Cloud NAT lets private clusters establish outbound connections over the internet to send and receive packets.
The Cloud Router allows all your nodes in the region to use Cloud NAT for all primary and alias IP ranges. It also automatically allocates the external IP addresses for the NAT gateway.
For instructions to create and configure a Cloud Router, refer to Create a Cloud NAT configuration using Cloud Router in the Cloud NAT documentation.
Creating a private cluster in a Shared VPC network
To learn how to create a private cluster in a Shared VPC network, see Creating a private cluster in a Shared VPC.
Deploying a Windows Server container application to a private cluster
To learn how to deploy a Windows Server container application to a private cluster, refer to the Windows node pool documentation.
Accessing the control plane's private endpoint globally
The control plane's private endpoint is implemented by an internal passthrough Network Load Balancer in the control plane's VPC network. Clients that are internal or are connected through Cloud VPN tunnels and Cloud Interconnect VLAN attachments can access internal passthrough Network Load Balancers.
By default, these clients must be located in the same region as the load balancer.
When you enable control plane global access, the internal passthrough Network Load Balancer is globally accessible: Client VMs and on-premises systems can connect to the control plane's private endpoint, subject to the authorized networks configuration, from any region.
For more information about the internal passthrough Network Load Balancers and global access, see Internal load balancers and connected networks.
Enabling control plane private endpoint global access
By default, global access is not enabled for the control plane's private endpoint when you create a private cluster. To enable control plane global access, use the following tools based on your cluster mode:
- For Standard clusters, you can use the Google Cloud CLI or the Google Cloud console.
- For Autopilot clusters, you can use the google_container_cluster Terraform resource.
Console
To create a new private cluster with control plane global access enabled, perform the following steps:
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click Create, then in the Standard or Autopilot section, click Configure.
Enter a Name.
In the navigation pane, click Networking.
Select Private cluster.
Select the Enable Control plane global access checkbox.
Configure other fields as you want.
Click Create.
To enable control plane global access for an existing private cluster, perform the following steps:
Go to the Google Kubernetes Engine page in the Google Cloud console.
Next to the cluster you want to edit, click more_vert Actions, then click edit Edit.
In the Networking section, next to Control plane global access, click edit Edit.
In the Edit control plane global access dialog, select the Enable Control plane global access checkbox.
Click Save Changes.
gcloud
Add the --enable-master-global-access flag to create a private cluster with global access to the control plane's private endpoint enabled:
gcloud container clusters create CLUSTER_NAME \
--enable-private-nodes \
--enable-master-global-access
You can also enable global access to the control plane's private endpoint for an existing private cluster:
gcloud container clusters update CLUSTER_NAME \
--enable-master-global-access
Verifying control plane private endpoint global access
You can verify that global access to the control plane's private endpoint is enabled by running the following command and looking at its output.
gcloud container clusters describe CLUSTER_NAME
The output includes a privateClusterConfig section where you can see the status of masterGlobalAccessConfig:
privateClusterConfig:
enablePrivateNodes: true
masterIpv4CidrBlock: 172.16.1.0/28
peeringName: gke-1921aee31283146cdde5-9bae-9cf1-peer
privateEndpoint: 172.16.1.2
publicEndpoint: 34.68.128.12
masterGlobalAccessConfig:
enabled: true
Accessing the control plane's private endpoint from other networks
When you create a GKE private cluster and disable the control plane's public endpoint, you must administer the cluster with tools like kubectl using its control plane's private endpoint. You can access the cluster's control plane's private endpoint from another network, including the following:
- An on-premises network that's connected to the cluster's VPC network using Cloud VPN tunnels or Cloud Interconnect VLAN attachments
- Another VPC network that's connected to the cluster's VPC network using Cloud VPN tunnels
The following diagram shows a routing path between an on-premises network and GKE control plane nodes:
To allow systems in another network to connect to a cluster's control plane private endpoint, complete the following requirements:
Identify and record relevant network information for the cluster and its control plane's private endpoint.
gcloud container clusters describe CLUSTER_NAME \
    --location=COMPUTE_LOCATION \
    --format="yaml(network, privateClusterConfig)"
Replace the following:
- CLUSTER_NAME: the name of the cluster.
- COMPUTE_LOCATION: the Compute Engine location of the cluster.
From the output of the command, identify and record the following information to use in the next steps:
- network: the name or URI of the cluster's VPC network.
- privateEndpoint: the IPv4 address of the control plane's private endpoint, or the enclosing IPv4 CIDR range (masterIpv4CidrBlock).
- peeringName: the name of the VPC Network Peering connection used to connect the cluster's VPC network to the control plane's VPC network.
The output is similar to the following:
network: cluster-network
privateClusterConfig:
  enablePrivateNodes: true
  masterGlobalAccessConfig:
    enabled: true
  masterIpv4CidrBlock: 172.16.1.0/28
  peeringName: gke-1921aee31283146cdde5-9bae-9cf1-peer
  privateEndpoint: 172.16.1.2
  publicEndpoint: 34.68.128.12
Consider enabling control plane private endpoint global access to allow packets to enter from any region in the cluster's VPC network. Enabling control plane private endpoint global access lets you connect to the private endpoint using Cloud VPN tunnels or Cloud Interconnect VLAN attachments located in any region, not just the cluster's region.
Create a route for the privateEndpoint IP address or the masterIpv4CidrBlock IP address range in the other network. Because the control plane's private endpoint IP address always fits within the masterIpv4CidrBlock IPv4 address range, creating a route for either the privateEndpoint IP address or its enclosing range provides a path for packets from the other network to the control plane's private endpoint if:

- The other network connects to the cluster's VPC network using Cloud Interconnect VLAN attachments or Cloud VPN tunnels that use dynamic (BGP) routes: Use a Cloud Router custom route advertisement. For more information, see Advertising Custom IP Ranges in the Cloud Router documentation.
- The other network connects to the cluster's VPC network using Classic VPN tunnels that do not use dynamic routes: You must configure a static route in the other network.
Configure the cluster's VPC network to export its custom routes in the peering relationship to the control plane's VPC network. Google Cloud always configures the control plane's VPC network to import custom routes from the cluster's VPC network. This step provides a path for packets from the control plane's private endpoint back to the other network.
To enable custom route export from your cluster's VPC network, use the following command:
gcloud compute networks peerings update PEERING_NAME \
    --network=CLUSTER_VPC_NETWORK \
    --export-custom-routes
Replace the following:
- PEERING_NAME: the name of the peering that connects the cluster's VPC network to the control plane's VPC network.
- CLUSTER_VPC_NETWORK: the name or URI of the cluster's VPC network.
For more details about how to update route exchange for existing VPC Network Peering connections, see Update the peering connection.
Custom routes in the cluster's VPC network include routes whose destinations are IP address ranges in other networks, for example, an on-premises network. To ensure that these routes become effective as peering custom routes in the control plane's VPC network, see Supported destinations from the other network.
Supported destinations from the other network
The address ranges that the other network sends to Cloud Routers in the cluster's VPC network must adhere to the following conditions:
- While your cluster's VPC network might accept a default route (0.0.0.0/0), the control plane's VPC network always rejects default routes because it already has a local default route. If the other network sends a default route to your VPC network, the other network must also send the specific destinations of systems that need to connect to the control plane's private endpoint. For more details, see Routing order.
- If the control plane's VPC network accepts routes that effectively replace a default route, those routes break connectivity to Google Cloud APIs and services, interrupting the cluster control plane. As a representative example, the other network must not advertise routes with destinations 0.0.0.0/1 and 128.0.0.0/1. Refer to the previous point for an alternative.
- Monitor the Cloud Router limits, especially the maximum number of unique destinations for learned routes.
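The 0.0.0.0/1 and 128.0.0.0/1 example works because those two routes together cover the entire IPv4 space, so they effectively replace a default route. This can be verified with Python's ipaddress module:

```python
import ipaddress

half_low = ipaddress.ip_network("0.0.0.0/1")
half_high = ipaddress.ip_network("128.0.0.0/1")
default = ipaddress.ip_network("0.0.0.0/0")

# The two /1 ranges jointly contain every IPv4 address...
assert half_low.num_addresses + half_high.num_addresses == default.num_addresses

# ...and collapsing the adjacent ranges yields exactly the default route.
assert list(ipaddress.collapse_addresses([half_low, half_high])) == [default]
```

Because more-specific routes win over less-specific ones, the pair of /1 routes also takes precedence over an existing 0.0.0.0/0 route, which is why advertising them breaks control plane connectivity.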
Verify that nodes don't have external IP addresses
After you create a private cluster, verify that the cluster's nodes don't have external IP addresses.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the list of clusters, click the cluster name.
For Autopilot clusters, in the Cluster basics section, check the External endpoint field. The value is Disabled.
For Standard clusters, do the following:
- On the Clusters page, click the Nodes tab.
- Under Node Pools, click the node pool name.
- On the Node pool details page, under Instance groups, click the name of your instance group. For example, gke-private-cluster-0-default-pool-5c5add1f-grp.
- In the list of instances, verify that your instances do not have external IP addresses.
gcloud
Run the following command:
kubectl get nodes --output wide
The output's EXTERNAL-IP
column is empty:
STATUS ... VERSION        EXTERNAL-IP   OS-IMAGE ...
Ready     v1.8.7-gke.1                  Container-Optimized OS
Ready     v1.8.7-gke.1                  Container-Optimized OS
Ready     v1.8.7-gke.1                  Container-Optimized OS
Viewing the cluster's subnet and secondary address ranges
After you create a private cluster, you can view the subnet and secondary address ranges that you or GKE provisioned for the cluster.
Console
Go to the VPC networks page in the Google Cloud console.
Click the name of the subnet. For example, gke-private-cluster-0-subnet-163e3c97.
Under IP address range, you can see the primary address range of your subnet. This is the range used for nodes.
Under Secondary IP ranges, you can see the IP address range for Pods and the range for Services.
gcloud
List all subnets
To list the subnets in your cluster's network, run the following command:
gcloud compute networks subnets list \
--network NETWORK_NAME
Replace NETWORK_NAME with the private cluster's network. If you created the cluster with an automatically created subnet, use default.
In the command output, find the name of the cluster's subnet.
View cluster's subnet
Get information about the automatically created subnet:
gcloud compute networks subnets describe SUBNET_NAME
Replace SUBNET_NAME with the name of the subnet.

The output shows the primary address range for nodes (the first ipCidrRange field) and the secondary ranges for Pods and Services (under secondaryIpRanges):
...
ipCidrRange: 10.0.0.0/22
kind: compute#subnetwork
name: gke-private-cluster-1-subnet-163e3c97
...
privateIpGoogleAccess: true
...
secondaryIpRanges:
- ipCidrRange: 10.40.0.0/14
rangeName: gke-private-cluster-1-pods-163e3c97
- ipCidrRange: 10.0.16.0/20
rangeName: gke-private-cluster-1-services-163e3c97
...
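You can read the capacity of each range in output like this directly from its prefix length: a /n range contains 2^(32-n) addresses. Using the example ranges above:

```python
import ipaddress

nodes = ipaddress.ip_network("10.0.0.0/22")      # primary range for nodes
pods = ipaddress.ip_network("10.40.0.0/14")      # Pod secondary range
services = ipaddress.ip_network("10.0.16.0/20")  # Service secondary range

assert nodes.num_addresses == 1024       # 2**(32 - 22)
assert pods.num_addresses == 262144      # 2**(32 - 14)
assert services.num_addresses == 4096    # 2**(32 - 20)
```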
Viewing a private cluster's endpoints
You can view a private cluster's endpoints using the gcloud CLI or the Google Cloud console.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the cluster name.
In the Details tab, under Cluster basics, look for the Endpoint field.
gcloud
Run the following command:
gcloud container clusters describe CLUSTER_NAME
The output shows both the private and public endpoints:
...
privateClusterConfig:
enablePrivateEndpoint: true
enablePrivateNodes: true
masterIpv4CidrBlock: 172.16.0.32/28
privateEndpoint: 172.16.0.34
publicEndpoint: 35.239.154.67
Pulling container images from an image registry
In a private cluster, the container runtime can pull container images from Artifact Registry; it cannot pull images from any other container image registry on the internet. This is because the nodes in a private cluster don't have external IP addresses, so by default they cannot communicate with services outside of the Google Cloud network.
The nodes in a private cluster can communicate with Google Cloud services, like Artifact Registry, if they are on a subnet that has Private Google Access enabled.
The following commands create a Deployment that pulls a sample image from an Artifact Registry repository:
kubectl run hello-deployment --image=us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
Adding firewall rules for specific use cases
This section explains how to add a firewall rule to a private cluster. By default, firewall rules restrict your cluster control plane to only initiate TCP connections to your nodes and Pods on ports 443 (HTTPS) and 10250 (kubelet). For some Kubernetes features, you might need to add firewall rules to allow access on additional ports.
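Deciding whether a given node port needs an extra rule reduces to checking it against the two default ports. A trivial sketch; the helper function is illustrative only, not part of any GKE tooling:

```python
# Ports the control plane can already reach on nodes by default.
DEFAULT_CONTROL_PLANE_PORTS = {443, 10250}

def needs_firewall_rule(port: int) -> bool:
    """Return True if reaching this node port requires an additional rule."""
    return port not in DEFAULT_CONTROL_PLANE_PORTS

assert not needs_firewall_rule(443)     # HTTPS: allowed by default
assert not needs_firewall_rule(10250)   # kubelet: allowed by default
assert needs_firewall_rule(8443)        # e.g. a webhook served on port 8443
```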
Kubernetes features that require additional firewall rules include:
- Admission webhooks
- Aggregated API servers
- Webhook conversion
- Dynamic audit configuration
- Generally, any API that has a ServiceReference field requires additional firewall rules.
Adding a firewall rule allows traffic from the cluster control plane to all of the following:
- The specified port of each node (hostPort).
- The specified port of each Pod running on these nodes.
- The specified port of each Service running on these nodes.
To learn about firewall rules, refer to Firewall rules in the Cloud Load Balancing documentation.
To add a firewall rule in a private cluster, you need to record the cluster control plane's CIDR block and the target that the cluster's existing firewall rules use. After you have recorded these values, you can create the rule.
Step 1. View control plane's CIDR block
You need the cluster control plane's CIDR block to add a firewall rule.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the cluster name.
In the Details tab, under Networking, take note of the value in the Control plane address range field.
gcloud
Run the following command:
gcloud container clusters describe CLUSTER_NAME
Replace CLUSTER_NAME with the name of your private cluster.
In the command output, take note of the value in the masterIpv4CidrBlock field.
Step 2. View existing firewall rules
You need to specify the target (in this case, the destination nodes) that the cluster's existing firewall rules use.
Console
Go to the Firewall policies page in the Google Cloud console.
For Filter table for VPC firewall rules, enter gke-CLUSTER_NAME.
In the results, take note of the value in the Targets field.
gcloud
Run the following command:
gcloud compute firewall-rules list \
--filter 'name~^gke-CLUSTER_NAME' \
--format 'table(
name,
network,
direction,
sourceRanges.list():label=SRC_RANGES,
allowed[].map().firewall_rule().list():label=ALLOW,
targetTags.list():label=TARGET_TAGS
)'
In the command output, take note of the value in the Targets field.
Step 3. Add a firewall rule
Console
Go to the Firewall policies page in the Google Cloud console.
Click add_box Create Firewall Rule.
For Name, enter the name for the firewall rule.
In the Network list, select the relevant network.
In Direction of traffic, click Ingress.
In Action on match, click Allow.
In the Targets list, select Specified target tags.
For Target tags, enter the target value that you noted previously.
In the Source filter list, select IPv4 ranges.
For Source IPv4 ranges, enter the cluster control plane's CIDR block.
In Protocols and ports, click Specified protocols and ports, select the checkbox for the relevant protocol (tcp or udp), and enter the port number in the protocol field.
Click Create.
gcloud
Run the following command:
gcloud compute firewall-rules create FIREWALL_RULE_NAME \
--action ALLOW \
--direction INGRESS \
--source-ranges CONTROL_PLANE_RANGE \
--rules PROTOCOL:PORT \
--target-tags TARGET
Replace the following:
- FIREWALL_RULE_NAME: the name you choose for the firewall rule.
- CONTROL_PLANE_RANGE: the cluster control plane's IP address range (masterIpv4CidrBlock) that you collected previously.
- PROTOCOL:PORT: the port and its protocol, tcp or udp.
- TARGET: the target (Targets) value that you collected previously.
Protecting a private cluster with VPC Service Controls
To further secure your GKE private clusters, you can protect them using VPC Service Controls.
VPC Service Controls provides additional security for your GKE private clusters to help mitigate the risk of data exfiltration. Using VPC Service Controls, you can add projects to service perimeters that protect resources and services from requests that originate outside the perimeter.
To learn more about service perimeters, see Service perimeter details and configuration.
If you use Artifact Registry with your GKE private cluster in a VPC Service Controls service perimeter, you must configure routing to the restricted virtual IP to prevent exfiltration of data.
VPC peering reuse
Any private clusters you create after January 15, 2020 reuse VPC Network Peering connections.
Any private clusters you created prior to January 15, 2020 use a unique VPC Network Peering connection. Each VPC network can peer with up to 25 other VPC networks, which means that for these clusters there is a limit of at most 25 private clusters per network (assuming peerings are not used for other purposes).
This feature is not backported to previous releases. To enable VPC Network Peering reuse on older private clusters, you can delete a cluster and recreate it. Upgrading a cluster does not cause it to reuse an existing VPC Network Peering connection.
Each location can support a maximum of 75 private clusters if the clusters have VPC Network Peering reuse enabled. Zones and regions are treated as separate locations.
For example, you can create up to 75 private zonal clusters in us-east1-a and another 75 private regional clusters in us-east1. This also applies if you are using private clusters in a Shared VPC network. The maximum number of connections to a single VPC network is 25, which means you can only create private clusters using 25 unique locations.
VPC Network Peering reuse only applies to clusters in the same location, for example regional clusters in the same region or zonal clusters in the same zone. At maximum, you can have four VPC Network Peerings per region if you create both regional clusters and zonal clusters in all of the zones of that region.
You can check if your private cluster reuses VPC Network Peering connections using the gcloud CLI or the Google Cloud console.
Console
Check the VPC peering row on the Cluster details page. If your cluster is reusing VPC peering connections, the output begins with gke-n. For example, gke-n34a117b968dee3b2221-93c6-40af-peer.
gcloud
gcloud container clusters describe CLUSTER_NAME \
--format="value(privateClusterConfig.peeringName)"
If your cluster is reusing VPC peering connections, the output begins with gke-n. For example, gke-n34a117b968dee3b2221-93c6-40af-peer.
Cleaning up
After completing the tasks on this page, follow these steps to remove the resources and avoid incurring unwanted charges on your account:
Delete the clusters
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
Select each cluster.
Click delete Delete.
gcloud
gcloud container clusters delete -q private-cluster-0 private-cluster-1 private-cluster-2 private-cluster-3
Delete the network
Console
Go to the VPC networks page in the Google Cloud console.
In the list of networks, click my-net-0.
On the VPC network details page, click delete Delete VPC Network.
In the Delete a network dialog, click Delete.
gcloud
gcloud compute networks delete my-net-0
Requirements, restrictions, and limitations
Private clusters have the following requirements:
- Private clusters must be VPC-native clusters. VPC-native clusters don't support legacy networks.
Private clusters have the following restrictions:
- You cannot convert an existing, non-private cluster to a private cluster.
- When you use 172.17.0.0/16 for your control plane IP range, you cannot use this range for node, Pod, or Service IP addresses.
- Deleting the VPC peering between the cluster control plane and the cluster nodes, deleting the firewall rules that allow ingress traffic from the cluster control plane to nodes on port 10250, or deleting the default route to the default internet gateway causes a private cluster to stop functioning. If you delete the default route, you must ensure that traffic to necessary Google Cloud services is routed. For more information, see custom routing.
- When custom route export is enabled for the VPC, creating routes that overlap with Google Cloud IP ranges might break your cluster.
- You can add up to 50 authorized networks (allowed CIDR blocks) in a project. For more information, refer to Add an authorized network to an existing cluster.
Private clusters have the following limitations:
- The size of the RFC 1918 block for the cluster control plane must be /28.
- While GKE can detect overlap with the control plane address block, it cannot detect overlap within a Shared VPC network.
- All nodes in a private cluster are created without a public IP; they have limited access to Google Cloud APIs and services. To provide outbound internet access for your private nodes, you can use Cloud NAT.
- Private Google Access is enabled automatically when you create a private cluster unless you are using Shared VPC. You must not disable Private Google Access unless you are using NAT to access the internet.
- Any private clusters you created prior to January 15, 2020 have a limit of at most 25 private clusters per network (assuming peerings are not being used for other purposes). See VPC peering reuse for more information.
- Every private cluster requires a peering route between VPCs, but only one peering operation can happen at a time. If you attempt to create multiple private clusters at the same time, cluster creation may time out. To avoid this, create new private clusters serially so that the VPC peering routes already exist for each subsequent private cluster. Attempting to create a single private cluster may also time out if there are operations running on your VPC.
- If you expand the primary IP range of a subnet to accommodate additional nodes, then you must add the expanded subnet's primary IP address range to the list of authorized networks for your cluster. If you don't, ingress-allow firewall rules relevant to the control plane aren't updated, and new nodes created in the expanded IP address space won't be able to register with the control plane. This can lead to an outage where new nodes are continuously deleted and replaced. Such an outage can happen when performing node pool upgrades or when nodes are automatically replaced due to liveness probe failures.
- Don't create firewall rules or hierarchical firewall policy rules that have a higher priority than the automatically created firewall rules.
Troubleshooting
The following sections explain how to resolve common issues related to private clusters.
VPC Network Peering connection on private cluster is accidentally deleted
- Symptoms
When you accidentally delete a VPC Network Peering connection, the cluster enters a repair state and all nodes show an UNKNOWN status. You can't perform any operations on the cluster because reachability to the control plane is disconnected. When you inspect the control plane, the logs display an error similar to the following:
error checking if node NODE_NAME is shutdown: unimplemented
- Potential causes
You accidentally deleted the VPC Network Peering connection.
- Resolution
Follow these steps:
Create a new temporary private cluster in the same VPC network. Creating the cluster recreates the VPC Network Peering connection, and the old cluster is restored to normal operation.
Delete the temporary cluster after the old cluster is restored to normal operation.
Cluster overlaps with active peer
- Symptoms
Attempting to create a private cluster returns an error similar to the following:
Google Compute Engine: An IP range in the peer network overlaps with an IP range in an active peer of the local network.
- Potential causes
You chose an overlapping control plane CIDR.
- Resolution
Delete and recreate the cluster using a different control plane CIDR.
Can't reach control plane of a private cluster
Increase the likelihood that your cluster control plane is reachable by implementing any of the cluster endpoint access configurations. For more information, see access to cluster endpoints.
- Symptoms
After creating a private cluster, attempting to run kubectl commands against the cluster returns an error similar to one of the following:
Unable to connect to the server: dial tcp [IP_ADDRESS]: connect: connection timed out.
Unable to connect to the server: dial tcp [IP_ADDRESS]: i/o timeout.
- Potential causes
kubectl is unable to communicate with the cluster control plane.
- Resolution
Verify that credentials for the cluster have been generated for kubeconfig and that the correct context is activated. For more information about setting cluster credentials, see generate kubeconfig entry.
Verify that accessing the control plane using its external IP address is permitted. Disabling external access to the cluster control plane isolates the cluster from the internet. This configuration is immutable after cluster creation. With this configuration, only authorized internal network CIDR ranges or reserved networks have access to the control plane.
Verify the origin IP address is authorized to reach the control plane:
gcloud container clusters describe CLUSTER_NAME \
    --format="value(masterAuthorizedNetworksConfig)" \
    --location=COMPUTE_LOCATION
Replace the following:
- CLUSTER_NAME: the name of your cluster.
- COMPUTE_LOCATION: the Compute Engine location for the cluster.
If your origin IP address is not authorized, the output may return an empty result (only curly braces) or CIDR ranges that do not include your origin IP address:
cidrBlocks:
- cidrBlock: 10.XXX.X.XX/32
  displayName: jumphost
- cidrBlock: 35.XXX.XXX.XX/32
  displayName: cloud shell
enabled: true
Add authorized networks to access the control plane.
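As a sketch, you can authorize your origin IP address with the gcloud CLI; the CIDR value below is a hypothetical example:

```shell
# Authorize a hypothetical jumphost IP (203.0.113.10/32) to reach the
# control plane. Note: this flag replaces the existing list, so include
# any previously authorized CIDR blocks as well.
gcloud container clusters update CLUSTER_NAME \
    --location=COMPUTE_LOCATION \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.10/32
```

Re-run the describe command afterward to confirm your origin IP address now appears in the cidrBlocks list.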
If you run the kubectl command from an on-premises environment or a region different from the cluster's location, ensure that control plane private endpoint global access is enabled. For more information, see accessing the control plane's private endpoint globally.
Describe the cluster to see the control plane access configuration:
gcloud container clusters describe CLUSTER_NAME \
    --location=COMPUTE_LOCATION \
    --flatten "privateClusterConfig.masterGlobalAccessConfig"
Replace the following:
- CLUSTER_NAME: the name of your cluster.
- COMPUTE_LOCATION: the Compute Engine location for the cluster.
A successful output is similar to the following:
enabled: true
If null is returned, enable global access to the control plane.
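Enabling global access to the control plane's private endpoint can be sketched with the following gcloud command:

```shell
# Allow clients in any region (or on-premises over hybrid connectivity)
# to reach the control plane's private endpoint.
gcloud container clusters update CLUSTER_NAME \
    --location=COMPUTE_LOCATION \
    --enable-master-global-access
```

After the update completes, re-run the describe command to verify that the output shows enabled: true.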
Can't create cluster due to overlapping IPv4 CIDR block
- Symptoms
gcloud container clusters create returns an error similar to the following:
The given master_ipv4_cidr 10.128.0.0/28 overlaps with an existing network 10.128.0.0/20.
- Potential causes
You specified a control plane CIDR block that overlaps with an existing subnet in your VPC.
- Resolution
Specify a CIDR block for --master-ipv4-cidr that does not overlap with an existing subnet.
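For example, if 10.128.0.0/20 is already in use by a subnet, you could choose a non-overlapping /28 such as the one below. The cluster name and CIDR here are illustrative placeholders:

```shell
# Create a private cluster with a control plane range (172.16.0.32/28,
# hypothetical) that does not overlap any existing subnet in the VPC.
gcloud container clusters create private-cluster-x \
    --enable-private-nodes \
    --enable-ip-alias \
    --master-ipv4-cidr 172.16.0.32/28 \
    --enable-master-authorized-networks
```

Check your VPC's existing subnet ranges with gcloud compute networks subnets list before picking the control plane CIDR.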
Can't create cluster due to services range already in use by another cluster
- Symptoms
Attempting to create a private cluster returns an error similar to the following:
Services range [ALIAS_IP_RANGE] in network [VPC_NETWORK], subnetwork [SUBNET_NAME] is already used by another cluster.
- Potential causes
Either of the following:
- You chose a services range that is still in use by another cluster, or the cluster was not deleted.
- There was a cluster using that services range that was deleted, but the secondary range metadata was not properly cleaned up. Secondary ranges for a GKE cluster are saved in the Compute Engine metadata and should be removed when the cluster is deleted. Even when a cluster is successfully deleted, the metadata might not be removed.
- Resolution
Follow these steps:
- Check if the services range is in use by an existing cluster. You can use the gcloud container clusters list command with the filter flag to search for the cluster. If there is an existing cluster using the services range, you must delete that cluster or create a new services range.
- If the services range is not in use by an existing cluster, then manually remove the metadata entry that matches the services range you want to use.
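One way to search for a cluster by its services range is a filter like the following sketch; the CIDR is a hypothetical example:

```shell
# List clusters whose services range matches a hypothetical CIDR.
gcloud container clusters list \
    --filter='servicesIpv4Cidr:10.96.0.0/20' \
    --format='table(name, location, servicesIpv4Cidr)'
```

If the command returns no rows, no existing cluster is using that range, and the leftover metadata entry is the likely cause.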
Can't create subnet
- Symptoms
When you attempt to create a private cluster with an automatic subnet, or to create a custom subnet, you might encounter the following error:
An IP range in the peer network overlaps with an IP range in one of the active peers of the local network.
- Potential causes
The control plane CIDR range you specified overlaps with another IP range in the cluster. This can also occur if you've recently deleted a private cluster and you're attempting to create a new private cluster using the same control plane CIDR.
- Resolution
Try using a different CIDR range.
Can't pull image from public Docker Hub
- Symptoms
A Pod running in your cluster displays a warning in kubectl describe:
Failed to pull image: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
- Potential causes
Nodes in a private cluster don't have external IP addresses, so they don't meet the internet access requirements. However, the nodes can access Google Cloud APIs and services, including Artifact Registry, if you have enabled Private Google Access and met its network requirements.
- Resolution
Use one of the following solutions:
Copy the images in your private cluster from Docker Hub to Artifact Registry. See Migrating containers from a third-party registry for more information.
GKE automatically checks mirror.gcr.io for cached copies of frequently accessed Docker Hub images.
If you must pull images from Docker Hub or another public repository, use Cloud NAT or an instance-based proxy that is the target for a static 0.0.0.0/0 route.
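A minimal Cloud NAT setup for outbound image pulls might look like the following sketch; the router name, NAT name, network, and region are hypothetical placeholders:

```shell
# Create a Cloud Router in the cluster's region (hypothetical names).
gcloud compute routers create nat-router \
    --network my-net-0 \
    --region us-central1

# Create a NAT configuration so private nodes can reach Docker Hub.
gcloud compute routers nats create nat-config \
    --router nat-router \
    --region us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```

With this in place, private nodes can make outbound connections to public registries without receiving external IP addresses themselves.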
API request that triggers admission webhook timing out
- Symptoms
An API request that triggers an admission webhook configured to use a service with a targetPort other than 443 times out, causing the request to fail:
Error from server (Timeout): request did not complete within requested timeout 30s
- Potential causes
By default, the firewall does not allow TCP connections to nodes except on ports 443 (HTTPS) and 10250 (kubelet). An admission webhook attempting to communicate with a Pod on a port other than 443 fails if there is no custom firewall rule that permits the traffic.
- Resolution
Add a firewall rule for your specific use case.
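To identify which port the failing webhook uses, you can inspect the webhook configurations and the backing Service. This is a sketch; WEBHOOK_SERVICE and WEBHOOK_NAMESPACE are placeholders you fill in from the first command's output:

```shell
# Show the Service reference (namespace, name, port) for each webhook.
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations \
    -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.webhooks[*].clientConfig.service}{"\n"}{end}'

# Look up the targetPort that the webhook's Service forwards to.
kubectl get service WEBHOOK_SERVICE -n WEBHOOK_NAMESPACE \
    -o jsonpath='{.spec.ports[*].targetPort}'
```

Use the resulting targetPort as the PROTOCOL:PORT value when adding the firewall rule described in the earlier firewall section.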
Can't create cluster due to health check failing
- Symptoms
After creating a private cluster, it gets stuck at the health check step and reports an error similar to one of the following:
All cluster resources were brought up, but only 0 of 2 have registered.
All cluster resources were brought up, but: 3 nodes out of 4 are unhealthy
- Potential causes
Any of the following:
- Cluster nodes cannot download required binaries from the Cloud Storage API (storage.googleapis.com).
- Firewall rules restrict egress traffic.
- Shared VPC IAM permissions are incorrect.
- Private Google Access requires you to configure DNS for *.gcr.io.
- Resolution
Use one of the following solutions:
- Enable Private Google Access on the subnet for node network access to storage.googleapis.com, or enable Cloud NAT to allow nodes to communicate with storage.googleapis.com endpoints. For more information, see How to Troubleshoot GKE private cluster creation issues.
- For node read access to storage.googleapis.com, confirm that the service account assigned to the cluster nodes has storage read access.
- Ensure that you have a Google Cloud firewall rule that allows all egress traffic, or configure a firewall rule that allows egress traffic from nodes to the cluster control plane and *.googleapis.com.
- Create the DNS configuration for *.gcr.io.
- If you have a non-default firewall or route setup, configure Private Google Access.
- If you use VPC Service Controls, set up Container Registry or Artifact Registry for GKE private clusters.
- Ensure you have not deleted or modified the automatically created firewall rules for Ingress.
- If you use Shared VPC, ensure you have configured the required IAM permissions.
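Enabling Private Google Access on the cluster's subnet can be sketched as follows; the subnet name and region are placeholders:

```shell
# Let VM instances without external IPs reach Google APIs and services.
gcloud compute networks subnets update SUBNET_NAME \
    --region COMPUTE_REGION \
    --enable-private-ip-google-access
```

You can confirm the setting with gcloud compute networks subnets describe, which reports the privateIpGoogleAccess field.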
kubelet Failed to create pod sandbox
- Symptoms
After creating a private cluster, it reports an error similar to one of the following:
Warning FailedCreatePodSandBox 12s (x9 over 4m) kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = Error response from daemon: Get https://registry.k8s.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
- Potential causes
The calico-node or netd Pod cannot reach *.gcr.io.
- Resolution
Use one of the following solutions:
- Ensure you have completed the required setup for Container Registry or Artifact Registry.
Private cluster nodes created but not joining the cluster
When you use custom routing and third-party network appliances on the VPC that your private cluster uses, the default route (0.0.0.0/0) is often redirected to the appliance instead of the default internet gateway. In addition to control plane connectivity, you need to ensure that the following destinations are reachable:
- *.googleapis.com
- *.gcr.io
- gcr.io
Configure Private Google Access for all three domains. This best practice allows the new nodes to start up and join the cluster while keeping internet-bound traffic restricted.
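One way to configure DNS for these domains is a private Cloud DNS zone that resolves them through the private.googleapis.com address range. This is a sketch assuming a hypothetical zone name and the my-net-0 network; records are shown for googleapis.com only, and the same pattern would be repeated for gcr.io:

```shell
# Private zone for googleapis.com, visible only to my-net-0 (hypothetical).
gcloud dns managed-zones create googleapis-zone \
    --visibility private \
    --networks my-net-0 \
    --dns-name googleapis.com \
    --description "Private Google Access"

# A records pointing at the private.googleapis.com VIP range.
gcloud dns record-sets create private.googleapis.com. \
    --zone googleapis-zone --type A --ttl 300 \
    --rrdatas 199.36.153.8,199.36.153.9,199.36.153.10,199.36.153.11

# CNAME so *.googleapis.com resolves through the private VIP.
gcloud dns record-sets create "*.googleapis.com." \
    --zone googleapis-zone --type CNAME --ttl 300 \
    --rrdatas private.googleapis.com.
```

Your custom routes must then carry traffic for 199.36.153.8/30 to the default internet gateway rather than the appliance.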
Workloads on private GKE clusters unable to access internet
Pods in private GKE clusters cannot access the internet. For example, running the apt update command from a Pod's exec shell reports an error similar to the following:
0% [Connecting to deb.debian.org (199.232.98.132)] [Connecting to security.debian.org (151.101.130.132)]
If the subnet secondary IP address range used for Pods in the cluster is not configured on the Cloud NAT gateway, the Pods cannot connect to the internet because they don't have an external IP address configured for the Cloud NAT gateway.
Ensure that you configure the Cloud NAT gateway to apply to at least the following IP address ranges of the subnet that your cluster uses:
- Subnet primary IP address range (used by nodes)
- Subnet secondary IP address range used for Pods in the cluster
- Subnet secondary IP address range used for Services in the cluster
To learn more, see how to add a secondary subnet IP range used for Pods.
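Configuring the gateway for the node, Pod, and Service ranges of one subnet can be sketched like this; the router, NAT, subnet, and secondary range names are all hypothetical placeholders:

```shell
# NAT the primary range plus the Pod and Service secondary ranges of
# my-subnet-0 (hypothetical subnet and secondary range names).
gcloud compute routers nats create nat-config \
    --router nat-router \
    --region us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-custom-subnet-ip-ranges \
my-subnet-0,my-subnet-0:pods-range,my-subnet-0:services-range
```

Alternatively, --nat-all-subnet-ip-ranges applies the gateway to every primary and secondary range in the region.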
What's next
- Read the GKE network overview.
- Learn how to create VPC-native clusters.
- Learn more about VPC Network Peering.