
Nginx ingress controller is not working correctly on minikube #26

Open
wwkicq1234 opened this issue Jul 19, 2022 · 4 comments


@wwkicq1234

With minikube environment, I got the following error:
error: timed out waiting for the condition on pods/ingress-nginx-controller-7587b7f44c-7rfxc

The ingress-nginx-controller pod is blocked with the following error:
Warning FailedScheduling 26m (x140 over 168m) default-scheduler 0/1 nodes are available: 1 node(s) didn't match Pod's node affinity/selector.
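One way to see why the scheduler is rejecting the node (assuming kubectl is pointed at the minikube cluster, and that the controller landed in the usual ingress-nginx namespace) is to compare the pod's node selector against the node's actual labels. The KIND-flavored ingress-nginx manifests typically pin the controller to nodes labeled ingress-ready=true, a label minikube nodes do not carry:

```shell
# What the pod's scheduling constraints require (pod name from the error above):
kubectl get pod ingress-nginx-controller-7587b7f44c-7rfxc -n ingress-nginx \
  -o jsonpath='{.spec.nodeSelector}{"\n"}'

# What labels the node actually has:
kubectl get nodes --show-labels
```

If the selector asks for a label the node lacks, that exactly produces the "didn't match Pod's node affinity/selector" event.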

@jkneubuh
Contributor

Hi @wwkicq1234

fabric-operator is directly targeting the core k8s APIs, so it should work on any Kubernetes environment, including minikube. You may be one of the first people to try this out, so if you find the recipe then please share the configuration notes, either with a docs PR or update to the sample network scripts, or just some general notes in this Issue.

We have had very good luck with KIND clusters, but there are some drawbacks to using this platform. The one issue in particular that is incredibly annoying is that the KIND runtime does NOT have direct visibility to the Docker image cache on the host system. So for instance if you are using a local operator / cluster to develop a chaincode container, the image either needs to be uploaded to a container registry or loaded directly into the KIND control plane. The KIND cluster created by network kind includes a local, insecure Docker registry at localhost:5000, but ... this is still a pain.
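To make that concrete, here are the two workarounds described above, with an illustrative image name (my-chaincode:latest is hypothetical; the registry port and KIND commands are the ones the sample network sets up):

```shell
# Option 1: push through the local, insecure registry at localhost:5000
docker tag my-chaincode:latest localhost:5000/my-chaincode:latest
docker push localhost:5000/my-chaincode:latest

# Option 2: load the image directly into the KIND control plane
kind load docker-image my-chaincode:latest
```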

When the sample network configures the k8s cluster with Nginx ingress, it uses the kustomization target at config/ingress/${CLUSTER_RUNTIME}. By default, this is set to "kind", so I think what may be happening is that you are loading the KIND kustomization overlay into your minikube cluster, and it's not working correctly.

Try using the following env settings when running cluster init. This may resolve the ingress problems:

export TEST_NETWORK_CLUSTER_RUNTIME="k3s"
export TEST_NETWORK_STAGE_DOCKER_IMAGES="false"
export TEST_NETWORK_STORAGE_CLASS="local-path"
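A minimal sketch of how the cluster-runtime setting could select the kustomization overlay (variable and path names assumed from the discussion above, not verified against the actual sample-network scripts):

```shell
# Default to "kind" when TEST_NETWORK_CLUSTER_RUNTIME is unset,
# then build the overlay path the cluster init step would apply.
CLUSTER_RUNTIME="${TEST_NETWORK_CLUSTER_RUNTIME:-kind}"
INGRESS_OVERLAY="config/ingress/${CLUSTER_RUNTIME}"
echo "ingress overlay: ${INGRESS_OVERLAY}"
```

With the exports above in place, the overlay resolves to config/ingress/k3s instead of the KIND-specific one.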

Once the Ingress issues are sorted out, the other two problems that will come up with minikube are going to be:

  1. The local-path storage controller may not be available on the cluster, or may be available under a different name. (E.g., in KIND it is installed with the name default, and with Rancher Desktop as local-path.)

  2. Local docker images (e.g., a custom build of the operator, or chaincode images) may not be immediately visible to minikube.
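For point 2, two common ways to make a locally built image visible inside minikube (the image name is illustrative):

```shell
# Copy an image from the host Docker cache into the minikube runtime:
minikube image load my-chaincode:latest

# Or point the local docker CLI at minikube's Docker daemon and build there:
eval "$(minikube -p minikube docker-env)"
docker build -t my-chaincode:latest .
```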

In general, yes, minikube can work, but it looks like there are still a couple of rough edges in the setup for the Ingress and PVCs that need to be sorted out.

Thanks for opening the Issue. This is an honest-to-goodness bug.

@jkneubuh jkneubuh changed the title Can fabric-operator be deployed with minikube? Nginx ingress controller is not working correctly on minikube Jul 19, 2022
@wwkicq1234
Author

Thanks for your quick response. Your comments solved my issue. Then I encountered another issue: the storage class "local-path" doesn't exist. I solved it with 'export STORAGE_CLASS="standard"'.

@wwkicq1234
Author

I encountered another issue with "network channel create" command failure. From the log, I can see the following error:

  • FABRIC_CA_CLIENT_HOME=/home/wwk/src/fabric-operator/sample-network/temp/enrollments/org0/users/org0admin
  • fabric-ca-client enroll --url https://org0admin:[email protected]:443 --tls.certfiles /home/wwk/src/fabric-operator/sample-network/temp/cas/org0-ca/tls-cert.pem
    2022/07/21 14:49:15 [INFO] TLS Enabled
    2022/07/21 14:49:15 [INFO] generating key: &{A:ecdsa S:256}
    2022/07/21 14:49:15 [INFO] encoded CSR
    Error: POST failure of request: POST https://test-network-org0-ca-ca.localho.st:443/enroll
    {"hosts":["wwk-Virtual-Machine"],"certificate_request":"-----BEGIN CERTIFICATE REQUEST-----\nMIIBSzCB9AIBADBhMQswCQYDVQQGEwJVUzEXMBUGA1UECBMOTm9ydGggQ2Fyb2xp\nbmExFDASBgNVBAoTC0h5cGVybGVkZ2VyMQ8wDQYDVQQLEwZGYWJyaWMxEjAQBgNV\nBAMTCW9yZzBhZG1pbjBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABCZJSir9SDzl\nnX1ErK0/PlqfJEWGMJHNtnnCFvBz6T7lXxc/aKrdyT5ttXqEKyT4lhX0XAKRyIhb\nQM1SDdnbhdegMTAvBgkqhkiG9w0BCQ4xIjAgMB4GA1UdEQQXMBWCE3d3ay1WaXJ0\ndWFsLU1hY2hpbmUwCgYIKoZIzj0EAwIDRgAwQwIfDx3sbx9Tk6XImi3s7+F5KYQy\nzinfJSQTzCpa2oGmPwIgdmMk+DprhHTnC1kzvAh7taUXiiPDRFUCJhF6lvFapSQ=\n-----END CERTIFICATE REQUEST-----\n","profile":"","crl_override":"","label":"","NotBefore":"0001-01-01T00:00:00Z","NotAfter":"0001-01-01T00:00:00Z","ReturnPrecert":false,"CAName":""}: Post "https://test-network-org0-ca-ca.localho.st:443/enroll": dial tcp 127.0.0.1:443: connect: connection refused
Do you have any ideas about this error?

@jkneubuh
Contributor

Hi @wwkicq1234 . Thanks for pushing on this issue. Running Fabric on a local minikube seems like a really, really nice advancement. There are some... "issues" that come up with KIND that make it not 100% ideal as a local dev platform. Likewise, the switch to Rancher Desktop / k3s / containerd comes with some headaches. It would be good for fabric-operator to "just work" on any k8s, even minikube. It looks like there's still some work to sort out on this front.

The logs above look like a problem with the Nginx ingress controller. There can be a few things going on, but first, make sure you don't have any other services running on the loopback / 0.0.0.0:443 or 127.0.0.1:443. Also, depending on your setup, it is possible to run k8s in VMs or behind a network bridge of some kind... The way to test is to make sure that something like curl running on the host OS can resolve the hostname and open a TCP connection to the ingress controller. If things are working correctly, after ingress is up and running you should be able to:

network cluster init        # this will bring up nginx at 127.0.0.1:80 and :443 

curl foobar.localho.st      # *.localho.st in DNS resolves to 127.0.0.1
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>

network cluster init will run a kustomization of ingress CRDs / resources under /config/ingress/${TEST_NETWORK_CLUSTER_RUNTIME}. Something is going wrong with the setup: it looks like Nginx is not binding correctly to :443 on the local loopback / host OS.
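Some hedged diagnostics for that binding problem (assuming the controller runs in the usual ingress-nginx namespace). Note that depending on the minikube driver, ingress traffic may only be reachable via minikube ip or minikube tunnel rather than 127.0.0.1 on the host:

```shell
# Is the controller running, and how is its service exposed (NodePort vs LoadBalancer)?
kubectl get pods,svc -n ingress-nginx

# Recent controller logs often show why :443 is not reachable:
kubectl logs -n ingress-nginx deployment/ingress-nginx-controller --tail=20
```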

I do see that there are specific instructions for installing Nginx on minikube, so perhaps we will need to supplement the approach used in the network setup to work with that installer. The only change necessary for Nginx is to enable the --enable-ssl-passthrough flag on the ingress controller. This allows the peers, orderers, CAs, etc. to terminate SSL at the Fabric endpoints.

I think the current approach is close, but needs a little TLC to push it over the line. It is probably worth adding a new minikube value for the TEST_NETWORK_CLUSTER_RUNTIME parameter, and maybe taking a different tack to install Nginx on minikube. If you can hold out for a little bit, this will likely get sorted out. If you are feeling ambitious, forge ahead and post any results or outcomes here inline. Just running minikube addons enable ingress (plus enabling ssl-passthrough) might be enough to get everything working.
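A sketch of that two-step idea, assuming the minikube ingress addon deploys the controller as deployment ingress-nginx-controller in the ingress-nginx namespace (older minikube versions used kube-system, so adjust accordingly):

```shell
# Enable the bundled nginx ingress controller:
minikube addons enable ingress

# Append --enable-ssl-passthrough to the controller's container args:
kubectl patch deployment ingress-nginx-controller -n ingress-nginx --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--enable-ssl-passthrough"}]'
```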

If you are feeling ambitious and generous - please feel free to submit a PR back with the new runtime, if you can sort out the details! Minikube is great - it will be a big step forward for "hey it just works" if we can get this new runtime under the fabric-operator umbrella.
