
Certificate renewal

Kubernetes certificates expire.

When that happens, trying to access the cluster produces the error

x509: certificate has expired or is not yet valid
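
Typically any kubectl call fails with output along these lines (shown only as an illustration; the exact wording may vary with the kubectl version):

kubectl get nodes
Unable to connect to the server: x509: certificate has expired or is not yet valid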

Basically, what you need to do is renew the certificates.

I followed these [instructions](https://www.linkedin.com/pulse/kubernetes-x509-certificate-has-expired-yet-valid-error-sagar-patil).

Check the expiration date

Run the command

sudo kubeadm certs check-expiration

which will produce output similar to this:

[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Oct 28, 2026 07:45 UTC   364d            ca                      no
apiserver                  Oct 28, 2026 07:45 UTC   364d            ca                      no
apiserver-etcd-client      Oct 28, 2026 07:45 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Oct 28, 2026 07:45 UTC   364d            ca                      no
controller-manager.conf    Oct 28, 2026 07:45 UTC   364d            ca                      no
etcd-healthcheck-client    Oct 28, 2026 07:45 UTC   364d            etcd-ca                 no
etcd-peer                  Oct 28, 2026 07:45 UTC   364d            etcd-ca                 no
etcd-server                Oct 28, 2026 07:45 UTC   364d            etcd-ca                 no
front-proxy-client         Oct 28, 2026 07:45 UTC   364d            front-proxy-ca          no
scheduler.conf             Oct 28, 2026 07:45 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Oct 21, 2033 08:15 UTC   7y              no
etcd-ca                 Oct 21, 2033 08:15 UTC   7y              no
front-proxy-ca          Oct 21, 2033 08:15 UTC   7y              no
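
As a complementary check (a minimal sketch, assuming the standard kubeadm layout under /etc/kubernetes/pki), an individual certificate can also be inspected directly with openssl:

# Show the validity window of the API server certificate
sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -dates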

Renew the certificates

kubeadm certs renew all
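
It is also possible to renew a single certificate instead of all of them, using the names printed by check-expiration (a sketch; run check-expiration again afterwards to confirm the new dates):

# Renew only the API server certificate, then verify the result
sudo kubeadm certs renew apiserver
sudo kubeadm certs check-expiration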

After running this command, you need to copy the configuration file into your local directory:

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
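
A quick way to confirm that the renewed credentials work (assuming kubectl reads its default configuration at $HOME/.kube/config):

# Any read-only call is enough; it should no longer return the x509 error
kubectl get nodes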

According to the instructions in the link above and the output of the renewal command itself, you should restart kube-apiserver, kube-controller-manager, kube-scheduler and etcd. I did not do it, and everything seems to work anyway.

Restart the cluster services

In case you do need to restart the services mentioned above, I found these instructions, but I have not tried them.

Solution

To restart a container of one of the core components, you need to move its manifest out of the /etc/kubernetes/manifests directory on the control plane node. Below are the steps for restarting the kube-apiserver component:

  1. SSH to the control plane node, or follow this guide if you don't have SSH access (in this case, you need to adjust the filesystem paths with the /host prefix).

  2. Move the kube-apiserver manifest from the manifests directory: mv /etc/kubernetes/manifests/kube-apiserver.yaml /root/

  3. Wait until the corresponding kube-apiserver pod is gone:

    $ kubectl get pods -n kube-system | grep api
    kube-apiserver-ip-10-0-203-99.us-west-2.compute.internal    1/1   Running   0             36m
    kube-apiserver-ip-10-0-69-238.us-west-2.compute.internal    1/1   Running   1 (39m ago)   38m

  4. Move the kube-apiserver manifest back: mv /root/kube-apiserver.yaml /etc/kubernetes/manifests/

  5. Wait until the corresponding kube-apiserver pod is back:

    $ kubectl get pods -n kube-system | grep api
    kube-apiserver-ip-10-0-166-232.us-west-2.compute.internal   1/1   Running   0             15s
    kube-apiserver-ip-10-0-203-99.us-west-2.compute.internal    1/1   Running   0             39m
    kube-apiserver-ip-10-0-69-238.us-west-2.compute.internal    1/1   Running   1 (41m ago)   41m

  6. Remember to restart the rest of the pods on the rest of the control plane nodes if needed. To avoid the risk of causing a service outage or losing control of your cluster, you must restart the pods one by one. A scripted sketch of this procedure for a single node follows.
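
The following is an untested sketch that automates the manifest moves described above for a single control plane node; the component list, the backup directory and the wait times are assumptions that should be adapted to each cluster:

#!/bin/bash
# Sketch: restart the control-plane static pods on ONE node by temporarily
# moving their manifests out of the kubelet's manifest directory.
# Run as root on the control plane node; paths follow the standard kubeadm layout.
set -e

MANIFESTS=/etc/kubernetes/manifests
BACKUP=/root/manifests-backup
mkdir -p "$BACKUP"

for component in kube-apiserver kube-controller-manager kube-scheduler etcd; do
    mv "$MANIFESTS/$component.yaml" "$BACKUP/"
    # Give the kubelet time to stop the static pod. kubectl may be unavailable
    # while kube-apiserver or etcd is down, so wait instead of polling.
    sleep 30
    mv "$BACKUP/$component.yaml" "$MANIFESTS/"
    # Wait for the pod to come back before touching the next component.
    sleep 60
done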