Debugging Services
An issue that comes up rather frequently for new installations of Kubernetes is that a Service is not working properly. You've run your Pods through a Deployment (or other workload controller) and created a Service, but you get no response when you try to access it. This document will hopefully help you figure out what's going wrong.
Running commands in a Pod
For many steps here you will want to see what a Pod running in the cluster sees. The simplest way to do this is to run an interactive busybox Pod:
kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox sh
Note: If you don't see a command prompt, try pressing enter.
If you already have a running Pod that you prefer to use, you can run a command in it using:
kubectl exec <POD-NAME> -c <CONTAINER-NAME> -- <COMMAND>
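For example, a minimal invocation with hypothetical Pod and container names (substitute your own) might look like this; the -c flag can be omitted if the Pod has only one container:
kubectl exec my-pod -c my-container -- hostname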
Setup
For the purposes of this walk-through, let's run some Pods. Since you're probably debugging your own Service you can substitute your own details, or you can follow along and get a second data point.
kubectl create deployment hostnames --image=registry.k8s.io/serve_hostname
deployment.apps/hostnames created
kubectl commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands.
Let's scale the deployment to 3 replicas.
kubectl scale deployment hostnames --replicas=3
deployment.apps/hostnames scaled
Note that this is the same as if you had started the Deployment with the following YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hostnames
  name: hostnames
spec:
  selector:
    matchLabels:
      app: hostnames
  replicas: 3
  template:
    metadata:
      labels:
        app: hostnames
    spec:
      containers:
      - name: hostnames
        image: registry.k8s.io/serve_hostname
The label "app" is automatically set by kubectl create deployment to the name of the Deployment.
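If you want to double-check which labels actually ended up on the Pods, a quick optional check is to print them alongside the Pod list:
kubectl get pods -l app=hostnames --show-labels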
You can confirm your Pods are running:
kubectl get pods -l app=hostnames
NAME                        READY   STATUS    RESTARTS   AGE
hostnames-632524106-bbpiw   1/1     Running   0          2m
hostnames-632524106-ly40y   1/1     Running   0          2m
hostnames-632524106-tlaok   1/1     Running   0          2m
You can also confirm that your Pods are serving. You can get the list of Pod IP addresses and test them directly.
kubectl get pods -l app=hostnames -o go-template='{{range .items}}{{.status.podIP}}{{"\n"}}{{end}}'
10.244.0.5
10.244.0.6
10.244.0.7
The example container used for this walk-through serves its own hostname via HTTP on port 9376, but if you are debugging your own app, you'll want to use whatever port number your Pods are listening on.
From within a pod:
for ep in 10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376; do
    wget -qO- $ep
done
This should produce something like:
hostnames-632524106-bbpiw
hostnames-632524106-ly40y
hostnames-632524106-tlaok
If you are not getting the responses you expect at this point, your Pods might not be healthy or might not be listening on the port you think they are. You might find kubectl logs to be useful for seeing what is happening, or perhaps you need to kubectl exec directly into your Pods and debug from there.
Assuming everything has gone to plan so far, you can start to investigate why your Service doesn't work.
Does the Service exist?
The astute reader will have noticed that you did not actually create a Service yet - that is intentional. This is a step that sometimes gets forgotten, and is the first thing to check.
What would happen if you tried to access a non-existent Service? If you have another Pod that consumes this Service by name you would get something like:
wget -O- hostnames
Resolving hostnames (hostnames)... Failed: Name or service not known.
wget: unable to resolve host address 'hostnames'
The first thing to check is whether that Service actually exists:
kubectl get svc hostnames
No resources found.
Error from server (NotFound): services "hostnames" not found
Let's create the Service. As before, this is for the walk-through - you can use your own Service's details here.
kubectl expose deployment hostnames --port=80 --target-port=9376
service/hostnames exposed
And read it back:
kubectl get svc hostnames
NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
hostnames   ClusterIP   10.0.1.175   <none>        80/TCP    5s
Now you know that the Service exists.
As before, this is the same as if you had started the Service with YAML:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hostnames
  name: hostnames
spec:
  selector:
    app: hostnames
  ports:
  - name: default
    protocol: TCP
    port: 80
    targetPort: 9376
In order to highlight the full range of configuration, the Service you created here uses a different port number than the Pods. For many real-world Services, these values might be the same.
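If you want to confirm how the two are wired up, a quick optional check is to pull the port and targetPort straight out of the Service with a jsonpath query:
kubectl get service hostnames -o jsonpath='{.spec.ports[0].port} {.spec.ports[0].targetPort}'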
Any Network Policy Ingress rules affecting the target Pods?
If you have deployed any Network Policy Ingress rules which may affect incoming traffic to hostnames-* Pods, these need to be reviewed.
Please refer to Network Policies for more details.
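A quick way to see whether any NetworkPolicies exist in the namespace your Pods run in (default in this walk-through) - if none are listed, they are not the culprit:
kubectl get networkpolicies -n default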
Does the Service work by DNS name?
One of the most common ways that clients consume a Service is through a DNS name.
From a Pod in the same Namespace:
nslookup hostnames
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Name: hostnames
Address 1: 10.0.1.175 hostnames.default.svc.cluster.local
If this fails, perhaps your Pod and Service are in different Namespaces; try a namespace-qualified name (again, from within a Pod):
nslookup hostnames.default
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Name: hostnames.default
Address 1: 10.0.1.175 hostnames.default.svc.cluster.local
If this works, you'll need to adjust your app to use a cross-namespace name, or run your app and Service in the same Namespace. If this still fails, try a fully-qualified name:
nslookup hostnames.default.svc.cluster.local
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Name: hostnames.default.svc.cluster.local
Address 1: 10.0.1.175 hostnames.default.svc.cluster.local
Note the suffix here: "default.svc.cluster.local". The "default" is the Namespace you're operating in. The "svc" denotes that this is a Service. The "cluster.local" is your cluster domain, which COULD be different in your own cluster.
You can also try this from a Node in the cluster:
Note: 10.0.0.10 is the cluster's DNS Service IP; yours might be different.
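If you are not sure what your cluster's DNS Service IP is, you can usually look it up from the Service in the kube-system namespace (assuming the common kube-dns Service name; your cluster's DNS Service may be named differently):
kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}'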
nslookup hostnames.default.svc.cluster.local 10.0.0.10
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: hostnames.default.svc.cluster.local
Address: 10.0.1.175
If you are able to do a fully-qualified name lookup but not a relative one, you need to check that your /etc/resolv.conf file in your Pod is correct. From within a Pod:
cat /etc/resolv.conf
You should see something like:
nameserver 10.0.0.10
search default.svc.cluster.local svc.cluster.local cluster.local example.com
options ndots:5
The nameserver line must indicate your cluster's DNS Service. This is passed into kubelet with the --cluster-dns flag.
The search line must include an appropriate suffix for you to find the Service name. In this case it is looking for Services in the local Namespace ("default.svc.cluster.local"), Services in all Namespaces ("svc.cluster.local"), and lastly for names in the cluster ("cluster.local"). Depending on your own install you might have additional records after that (up to 6 total). The cluster suffix is passed into kubelet with the --cluster-domain flag. Throughout this document, the cluster suffix is assumed to be "cluster.local". Your own clusters might be configured differently, in which case you should change that in all of the previous commands.
The options line must set ndots high enough that your DNS client library considers search paths at all. Kubernetes sets this to 5 by default, which is high enough to cover all of the DNS names it generates.
Does any Service work by DNS name?
If the above still fails, DNS lookups are not working for your Service. You can take a step back and see what else is not working. The Kubernetes master Service should always work. From within a Pod:
nslookup kubernetes.default
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
If this fails, please see the kube-proxy section of this document, or even go back to the top of this document and start over, but instead of debugging your own Service, debug the DNS Service.
Does the Service work by IP?
Assuming you have confirmed that DNS works, the next thing to test is whether your Service works by its IP address. From a Pod in your cluster, access the Service's IP (from kubectl get above).
for i in $(seq 1 3); do
    wget -qO- 10.0.1.175:80
done
This should produce something like:
hostnames-632524106-bbpiw
hostnames-632524106-ly40y
hostnames-632524106-tlaok
If your Service is working, you should get correct responses. If not, there are a number of things that could be going wrong. Read on.
Is the Service defined correctly?
It might sound silly, but you should really double and triple check that your Service is correct and matches your Pod's port. Read back your Service and verify it:
kubectl get service hostnames -o json
Is the Service port you are trying to access listed in spec.ports[]?
Is the targetPort correct for your Pods (some Pods use a different port than the Service)?
If you meant to use a numeric port, is it a number (9376) or a string "9376"?
If you meant to use a named port, do your Pods expose a port with the same name?
Is the port's protocol correct for your Pods?
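To compare against the Pod side of things, a quick optional check is to dump the container ports a Pod declares (this assumes your Pod spec lists its containerPorts at all; many images that were exposed with kubectl create deployment do not, which is fine):
kubectl get pods -l app=hostnames -o jsonpath='{.items[0].spec.containers[0].ports}'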
Does the Service have any Endpoints?
If you got this far, you have confirmed that your Service is correctly defined and is resolved by DNS. Now let's check that the Pods you ran are actually being selected by the Service.
Earlier you saw that the Pods were running. You can re-check that:
kubectl get pods -l app=hostnames
NAME                        READY   STATUS    RESTARTS   AGE
hostnames-632524106-bbpiw   1/1     Running   0          1h
hostnames-632524106-ly40y   1/1     Running   0          1h
hostnames-632524106-tlaok   1/1     Running   0          1h
The -l app=hostnames argument is a label selector configured on the Service.
The "AGE" column says that these Pods are about an hour old, which implies that they are running fine and not crashing.
The "RESTARTS" column says that these pods are not crashing frequently or being restarted. Frequent restarts could lead to intermittent connectivity issues. If the restart count is high, read more about how to debug pods.
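If a Pod has restarted, the logs from the previous container instance are often the quickest clue (substitute one of your own Pod names):
kubectl logs hostnames-632524106-bbpiw --previous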
Inside the Kubernetes system is a control loop which evaluates the selector of every Service and saves the results into a corresponding Endpoints object.
kubectl get endpoints hostnames
NAME        ENDPOINTS
hostnames   10.244.0.5:9376,10.244.0.6:9376,10.244.0.7:9376
This confirms that the endpoints controller has found the correct Pods for your Service. If the ENDPOINTS column is <none>, you should check that the spec.selector field of your Service actually selects for metadata.labels values on your Pods. A common mistake is to have a typo or other error, such as the Service selecting for app=hostnames, but the Deployment specifying run=hostnames, as in versions previous to 1.18, where the kubectl run command could have been also used to create a Deployment.
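One quick way to inspect the selector the Service is actually using is the following; compare its output against the labels shown by kubectl get pods --show-labels:
kubectl get service hostnames -o jsonpath='{.spec.selector}'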
Are the Pods working?
At this point, you know that your Service exists and has selected your Pods. At the beginning of this walk-through, you verified the Pods themselves. Let's check again that the Pods are actually working - you can bypass the Service mechanism and go straight to the Pods, as listed by the Endpoints above.
Note: These commands use the Pod port (9376), rather than the Service port (80).
From within a Pod:
for ep in 10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376; do
    wget -qO- $ep
done
This should produce something like:
hostnames-632524106-bbpiw
hostnames-632524106-ly40y
hostnames-632524106-tlaok
You expect each Pod in the Endpoints list to return its own hostname. If this is not what happens (or whatever the correct behavior is for your own Pods), you should investigate what's happening there.
Is the kube-proxy working?
If you get here, your Service is running, has Endpoints, and your Pods are actually serving. At this point, the whole Service proxy mechanism is suspect. Let's confirm it, piece by piece.
The default implementation of Services, and the one used on most clusters, is kube-proxy. This is a program that runs on every node and configures one of a small set of mechanisms for providing the Service abstraction. If your cluster does not use kube-proxy, the following sections will not apply, and you will have to investigate whatever implementation of Services you are using.
Is kube-proxy running?
Confirm that kube-proxy is running on your Nodes. Running directly on a Node, you should get something like the below:
ps auxw | grep kube-proxy
root  4194  0.4  0.1 101864 17696 ?  Sl  Jul04  25:43 /usr/local/bin/kube-proxy --master=https://kubernetes-master --kubeconfig=/var/lib/kube-proxy/kubeconfig --v=2
Next, confirm that it is not failing something obvious, like contacting the master. To do this, you'll have to look at the logs. Accessing the logs depends on your Node OS. On some OSes it is a file, such as /var/log/kube-proxy.log, while other OSes use journalctl to access logs.
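On systemd-based nodes, for example, something like the following might work (the unit name here is an assumption and may differ, e.g. if kube-proxy runs as a static Pod or DaemonSet rather than a systemd service):
journalctl -u kube-proxy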
One of the possible reasons that kube-proxy cannot run correctly is that the required conntrack binary cannot be found. This may happen on some Linux systems, depending on how you are installing the cluster, for example, if you are installing Kubernetes from scratch. If this is the case, you need to manually install the conntrack package (e.g. sudo apt install conntrack on Ubuntu) and then retry.
kube-proxy can run in one of a few modes. In the log listed above, the line Using iptables Proxier indicates that kube-proxy is running in "iptables" mode. The most common other mode is "ipvs".
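If you want to confirm which mode a running kube-proxy has actually selected, it reports this on its metrics endpoint; assuming the default metrics port 10249 is bound on localhost, from the node you can run:
curl localhost:10249/proxyMode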
iptables mode
In "iptables" mode, you should see rules like the following on a Node.
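One way to dump them (assuming the iptables-save tool is available on the node) is:
iptables-save | grep hostnames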
-A KUBE-SEP-57KPRZ3JQVENLNBR -s 10.244.3.6/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SEP-57KPRZ3JQVENLNBR -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.3.6:9376
-A KUBE-SEP-WNBA2IHDGP2BOBGZ -s 10.244.1.7/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SEP-WNBA2IHDGP2BOBGZ -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.1.7:9376
-A KUBE-SEP-X3P2623AGDH6CDF3 -s 10.244.2.3/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SEP-X3P2623AGDH6CDF3 -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.2.3:9376
-A KUBE-SERVICES -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-SVC-NWV5X2332I4OT4T3
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-WNBA2IHDGP2BOBGZ
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-X3P2623AGDH6CDF3
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -j KUBE-SEP-57KPRZ3JQVENLNBR
For each port of each Service, there should be 1 rule in KUBE-SERVICES and one KUBE-SVC-<hash> chain. For each Pod endpoint, there should be a small number of rules in that KUBE-SVC-<hash> and one KUBE-SEP-<hash> chain with a small number of rules in it. The exact rules will vary based on your exact config (including node-ports and load-balancers).
IPVS mode
In "ipvs" mode, you should see something like the following on a Node.
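One way to list the IPVS tables (assuming the ipvsadm tool is installed on the node) is:
ipvsadm -ln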
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
...
TCP  10.0.1.175:80 rr
  -> 10.244.0.5:9376              Masq    1      0          0
  -> 10.244.0.6:9376              Masq    1      0          0
  -> 10.244.0.7:9376              Masq    1      0          0
...
For each port of each Service, plus any NodePorts, external IPs, and load-balancer IPs, kube-proxy will create a virtual server. For each Pod endpoint, it will create corresponding real servers. In this example, service hostnames (10.0.1.175:80) has 3 endpoints (10.244.0.5:9376, 10.244.0.6:9376, 10.244.0.7:9376).
Is kube-proxy proxying?
Assuming you do see one of the above cases, try again to access your Service by IP from one of your Nodes:
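For example, assuming curl is available on the node:
curl 10.0.1.175:80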
hostnames-632524106-bbpiw
If this still fails, look at the kube-proxy logs for specific lines like:
Setting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376]
If you don't see those, try restarting kube-proxy with the -v flag set to 4, and then look at the logs again.
Edge case: A Pod fails to reach itself via the Service IP
This might sound unlikely, but it does happen and it is supposed to work.
This can happen when the network is not properly configured for "hairpin" traffic, usually when kube-proxy is running in iptables mode and Pods are connected with a bridge network. The Kubelet exposes a hairpin-mode flag that allows endpoints of a Service to load balance back to themselves if they try to access their own Service VIP. The hairpin-mode flag must either be set to hairpin-veth or promiscuous-bridge.
The common steps to troubleshoot this are as follows:
Confirm hairpin-mode is set to hairpin-veth or promiscuous-bridge. You should see something like the below. hairpin-mode is set to promiscuous-bridge in the following example.
root 3392 1.1 0.8 186804 65208 ? Sl 00:51 11:11 /usr/local/bin/kubelet --enable-debugging-handlers=true --config=/etc/kubernetes/manifests --allow-privileged=True --v=4 --cluster-dns=10.0.0.10 --cluster-domain=cluster.local --configure-cbr0=true --cgroup-root=/ --system-cgroups=/system --hairpin-mode=promiscuous-bridge --runtime-cgroups=/docker-daemon --kubelet-cgroups=/kubelet --babysit-daemons=true --max-pods=110 --serialize-image-pulls=false --outofdisk-transition-frequency=0
Confirm the effective hairpin-mode. To do this, you'll have to look at the kubelet log. Accessing the logs depends on your Node OS. On some OSes it is a file, such as /var/log/kubelet.log, while other OSes use journalctl to access logs. Note that the effective hairpin mode may not match the --hairpin-mode flag due to compatibility. Check if there are any log lines with the keyword hairpin in kubelet.log. There should be log lines indicating the effective hairpin mode, like the one below.
I0629 00:51:43.648698 3252 kubelet.go:380] Hairpin mode set to "promiscuous-bridge"
If the effective hairpin mode is hairpin-veth, ensure the Kubelet has the permission to operate in /sys on the node; if everything works properly, you should see something like the output below.
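A way to produce that output (a sketch, assuming the node's Pods attach to the cbr0 bridge, whose ports expose per-interface hairpin_mode files under /sys):
for intf in /sys/devices/virtual/net/cbr0/brif/*; do cat $intf/hairpin_mode; done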
1
1
1
1
If the effective hairpin mode is promiscuous-bridge, ensure Kubelet has the permission to manipulate the linux bridge on the node. If the cbr0 bridge is used and configured properly, you should see output like the below.
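A way to check for promiscuous mode on the bridge (again assuming cbr0, and that ifconfig is available on the node):
ifconfig cbr0 | grep PROMISC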
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1460 Metric:1
Seek help if none of the above works out.
Seek help
If you get this far, something very strange is happening. Your Service is running, has Endpoints, and your Pods are actually serving. You have DNS working, and kube-proxy does not seem to be misbehaving. And yet your Service is not working. Please let us know what is going on, so we can help investigate!
Contact us on Slack or Forum or GitHub.
What's next
Visit the troubleshooting overview document for more information.