
self signed certificate in certificate chain error in 1.0.0-rc4 #1509

Open · rudyflores opened this issue Jan 10, 2024 · 21 comments
@rudyflores

Describe the bug

I receive the following error:

request to https://example?pretty=true failed, reason: self signed certificate in certificate chain

This was not appearing on previous versions of the Kubernetes client. I also noticed it on 0.20.0, but had to upgrade due to vulnerability issues with request. Is there a way to get rid of this error? I am logging into my cluster and generating a token just fine, which used to work.

Client Version

v1.0.0-rc4

Server Version

v1.26.6

To Reproduce
Steps to reproduce the behavior:

Run any request with the Kubernetes client, e.g.:

// this.kc is a CoreV1Api client; this.kubeConfig holds the loaded config
await this.kc.readNamespace({
  name: this.kubeConfig.namespace,
  pretty: "true",
});
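
For completeness, a minimal standalone sketch of the same call (assuming the config is loaded with loadFromDefault(); the namespace name here is a placeholder):

import * as k8s from '@kubernetes/client-node';

async function main() {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault(); // reads $KUBECONFIG or ~/.kube/config

  const core = kc.makeApiClient(k8s.CoreV1Api);

  // On 1.0.0-rc4 this throws: "self signed certificate in certificate chain"
  const ns = await core.readNamespace({ name: 'default', pretty: 'true' });
  console.log(ns.metadata?.name);
}

main().catch(console.error);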

Expected behavior

I should be able to make calls without errors about self-signed certs in cert chain.

Environment:

  • OS: macOS
  • NodeJS Version: v18.19.0

@rudyflores changed the title from "self signed" to "self signed certificate in certificate chain error in 1.0.0-rc4" on Jan 10, 2024
@brendandburns
Contributor

There is a similar error here:

#1451

which appears to be related to the runtime environment.

@rudyflores
Author

@brendandburns do you know if maybe the token in my kubeconfig is not being attached by the Kubernetes client?

I tried the same request that the readNamespace() API call makes in Postman, and it works just fine there.

@rudyflores
Author

Just tested with v0.18.1 and it worked just fine. Something must have changed in a Kubernetes client update that now throws this error, since I can perform the same actions just fine with v0.18.1, which is now vulnerable due to the request dependency.

@brendandburns
Contributor

The switch to 1.0 includes a switch to a different underlying HTTP client (fetch instead of request). It's possible that behaves differently, but it seems to work for other people.

What Kubernetes distro are you using? Can you send the contents of your kubeconfig file with any secrets redacted?

@rudyflores
Author

> The switch to 1.0 includes a switch to a different underlying HTTP client (fetch instead of request). It's possible that behaves differently, but it seems to work for other people.
>
> What Kubernetes distro are you using? Can you send the contents of your kubeconfig file with any secrets redacted?

Keep in mind that I am also seeing this issue with v0.20.0 (which still uses request), whereas v0.18.1 does not seem to have it.

The Kubernetes distro is OpenShift, and this is the kubeconfig (with secrets redacted):

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://<myserver>:<myport>
  name: <myserver>:<myport>
contexts:
- context:
    cluster: <myserver>:<myport>
    namespace: <namespace>
    user: <user>/<server>:<myport>
  name: default/<myserver>:<myport>/<user>
current-context: default/<myserver>:<myport>/<user>
kind: Config
preferences: {}
users:
- name: <myuser>/<myserver>:<myport>
  user:
    token: <token>

@brendandburns
Contributor

ah, ok so you are explicitly turning off cert checking with:

insecure-skip-tls-verify: true

I suspect that something broke in our handling of that parameter. I'll try to reproduce in unit tests.
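
For anyone tracing this: whichever HTTP stack is underneath, the flag ultimately has to surface as rejectUnauthorized: false at the Node TLS layer. A rough sketch of the mapping involved (illustrative only, not the client's actual code; the cluster shape is an assumption):

import * as https from 'node:https';

// Hypothetical helper: turn a parsed kubeconfig cluster entry into Node TLS options.
function agentForCluster(cluster: { skipTLSVerify?: boolean; caData?: string }): https.Agent {
  return new https.Agent({
    // insecure-skip-tls-verify: true must land here as rejectUnauthorized: false,
    // otherwise Node rejects the self-signed chain.
    rejectUnauthorized: !cluster.skipTLSVerify,
    // When verification is on, the cluster CA (certificate-authority-data) is trusted explicitly.
    ca: cluster.caData ? Buffer.from(cluster.caData, 'base64') : undefined,
  });
}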

@rudyflores
Author

> ah, ok so you are explicitly turning off cert checking with:
>
> insecure-skip-tls-verify: true
>
> I suspect that something broke in our handling of that parameter. I'll try to reproduce in unit tests.

Thank you for your help with this, please keep me updated.

@brendandburns
Contributor

So I think this is because you are using a BearerToken for auth. The codepath for that is different, and I don't think it respects the TLS settings in that case.

I'm not quite sure about the right way to fix it, but I will keep looking. In the meantime, if you could try a different auth method and see if that works, that would be useful.
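
One way to isolate the bearer-token path: hit the API server directly with the same token while explicitly skipping verification. A diagnostic sketch (the server, port, namespace, and token placeholders mirror the redacted kubeconfig above):

import fetch from 'node-fetch';
import * as https from 'node:https';

// If this returns 200 but the client fails with a TLS error,
// the problem is in how the client applies skipTLSVerify, not the token.
const res = await fetch('https://<myserver>:<myport>/api/v1/namespaces/<namespace>', {
  headers: { Authorization: 'Bearer <token>' },
  agent: new https.Agent({ rejectUnauthorized: false }),
});
console.log(res.status);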

@rudyflores
Author

> So I think this is because you are using a BearerToken for auth. The codepath for that is different, and I don't think it respects the TLS settings in that case.
>
> I'm not quite sure about the right way to fix it, but I will keep looking. In the meantime, if you could try a different auth method and see if that works, that would be useful.

Thanks for the update!

I believe my team currently has only token auth set up in our cluster, so unfortunately I may not be able to try another auth method for the time being. Thanks again for looking into resolving this issue! If a pull request is made, could you link it to this issue?

@rudyflores
Author

@brendandburns any updates for this issue?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (denotes an issue or PR that has remained open with no activity and has become stale) on May 6, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label (denotes an issue or PR that has aged beyond stale and will be auto-closed) and removed the lifecycle/stale label on Jun 5, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) on Jul 5, 2024
@brendandburns
Contributor

/reopen

@k8s-ci-robot
Contributor

@brendandburns: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot reopened this on Aug 1, 2024
@brendandburns
Contributor

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label (denotes an issue or PR that has aged beyond stale and will be auto-closed) on Aug 1, 2024
@brendandburns
Contributor

Unfortunately, I haven't been able to reproduce this. In general, configuring TLS to ignore self-signed certificates is pretty insecure, so I would recommend not doing that (and certainly not in production).

The security risks from disabling TLS checks are far worse than the security risks associated with the request library, so if you have an old version that works, you should just keep using that.

But in general, you should really not be disabling TLS checks.

@brendandburns
Contributor

The relevant code is here: https://github.com/kubernetes-client/javascript/blob/master/src/config.ts#L473 (0.x branch)
and here: https://github.com/kubernetes-client/javascript/blob/release-1.x/src/config.ts#L559 (1.x branch)

If this is truly blocking you, I'd encourage you to investigate those codepaths and see what is going wrong. I don't have an environment with self-signed certificates in which to reproduce this.
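
A quick check of whether the flag even survives kubeconfig parsing (a sketch using the public KubeConfig API):

import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
kc.loadFromDefault();

// If this prints true but requests still fail with a TLS error, the bug is in
// the request-building codepaths linked above rather than in the parser.
console.log(kc.getCurrentCluster()?.skipTLSVerify);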

@rudyflores
Author

I can see your reasoning behind this, @brendandburns, but the issue is that some dev environments purposely don't have certs set up (obviously this should not be done in production), so ideally both scenarios would be supported so that environments like these can still be tested against.

As for certs, how do I attach my cert to the CoreV1Api instance? I have another environment where I set up the kubeconfig with the cert pointing at the proper path and insecure-skip-tls-verify: false, yet the API still shows a self-signed cert error.
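
For the cert route, my understanding is that the CA rides along in the kubeconfig cluster entry rather than being attached to the CoreV1Api object directly. A sketch (paths are placeholders):

clusters:
- cluster:
    # trust the cluster CA explicitly instead of insecure-skip-tls-verify
    certificate-authority: /path/to/ca.crt   # or inline: certificate-authority-data: <base64>
    server: https://<myserver>:<myport>
  name: <myserver>:<myport>

and then loading it as usual:

import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
kc.loadFromFile('/path/to/kubeconfig'); // placeholder path
const core = kc.makeApiClient(k8s.CoreV1Api); // CA should be picked up from the cluster entry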

@kberkos-public

I would second this; our team is also using an OpenShift cluster, and our APIs in development environments are failing with the same self-signed-cert error. If anyone has figured out a fix for this, it would be greatly appreciated.
