
Adds socketPath configuration to allow setting a predictable path #62

Closed

Conversation

aitorpazos

This change is relevant for situations where we need a predictable path for the agent socket file.
For example:

  • If we do builds in Kubernetes builders and want to share the socket across multiple containers in the builder pod, having a predictable path makes configuring the volume mount for it straightforward (see the sketch below). Otherwise we need to mount the whole of /tmp, which is not always the best option.

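A minimal sketch of how the proposed option might be used from a Pipeline. The socketPath name comes from this PR's title, but wiring it as a step parameter, the /shared mount path, and the deploy-key credentials ID are all illustrative assumptions:

```groovy
// Hypothetical usage of the socketPath option proposed in this PR.
// Passing `socketPath` to the sshagent step, the /shared mount path, and
// the `deploy-key` credentials ID are all assumptions for illustration.
node('kubernetes') {
  sshagent(credentials: ['deploy-key'], socketPath: '/shared/ssh-agent.sock') {
    // Every container mounting /shared could now reach the agent at a known path.
    sh 'ssh -T git@github.com'
  }
}
```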
@jpatterson-fastly

@jglick Any chance we could get this merged? With the removal of the built-in Java ssh-agent provider, the plugin now relies directly on the ssh-agent CLI being available. For jobs spawned in Kubernetes, this means we need SSH client tools installed in any container in which we want to load the ssh-agent via Pipeline syntax. We don't want to go the route of mounting a shared /tmp across the pod, for obvious reasons.

By being able to override the socket path, we can instead create a shared volume (mounted somewhere other than /tmp), load the ssh-agent from within the JNLP container, and have ssh-agent access across all containers in the pod, without installing SSH client tools in the other containers, some of which are third-party maintained.
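A rough sketch of the setup described above, assuming the agent socket can be bound to a fixed path (via the proposed socketPath, or by running ssh-agent -a yourself). The image names, the /shared mount point, and the container names are placeholders:

```groovy
// Sketch only: share an ssh-agent socket across pod containers via an
// emptyDir, so only the jnlp container needs SSH client tools installed.
podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    image: jenkins/inbound-agent:latest
    volumeMounts:
    - name: ssh-share
      mountPath: /shared
  - name: thirdparty
    image: example/thirdparty-tool   # placeholder; ships no ssh client tools
    command: [sleep]
    args: [infinity]
    env:
    - name: SSH_AUTH_SOCK
      value: /shared/ssh-agent.sock
    volumeMounts:
    - name: ssh-share
      mountPath: /shared
  volumes:
  - name: ssh-share
    emptyDir: {}
''') {
  node(POD_LABEL) {
    // Start the agent in the jnlp container, binding it to the shared path
    // with OpenSSH's -a flag instead of a random /tmp location.
    sh 'ssh-agent -a /shared/ssh-agent.sock'
    container('thirdparty') {
      // Tools here that speak the ssh-agent protocol (or bundle their own
      // SSH client) find the agent via SSH_AUTH_SOCK; no local ssh needed.
      sh 'echo "SSH_AUTH_SOCK=$SSH_AUTH_SOCK"'
    }
  }
}
```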

@jglick
Member

jglick commented Jan 21, 2022

If we do builds in kubernetes builders and we want to share the socket across multiple containers in the builder pod…

Possibly better for the plugin to use /home/jenkins/agent/workspace/$JOB_NAME@tmp in that context, which is already mounted across containers. On traditional agents this is not a good option because of the Linux limit on socket path length, but in a pod it should be more predictable, I think, unless the job name is very long. Maybe even override https://github.com/jenkinsci/branch-api-plugin/blob/ac9234ee647afb58ad1cad65c145dc83f0cd6b7b/src/main/java/jenkins/branch/WorkspaceLocatorImpl.java#L79-L86 in K8s to use some fixed path like /home/jenkins/agent/ws@tmp (see jenkinsci/kubernetes-plugin#559 and jenkinsci/kubernetes-plugin#610, though this could break if jenkinsci/kubernetes-plugin#988 is merged).

At any rate, I do not consider myself a maintainer of this plugin. It should probably be marked as up for adoption by someone who can commit not just to merging PRs but to thinking carefully about impacts and tracking regressions. As of the switch to the exec provider by default, the plugin does not really add anything essential; it is more transparent and flexible to run ssh-agent yourself (or just pass -i arguments using withCredentials, for commands that support that).
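A sketch of the withCredentials alternative mentioned here, using the standard sshUserPrivateKey binding; the deploy-key credentials ID is a placeholder:

```groovy
// jglick's suggested alternative to this plugin: bind the private key to a
// temp file and pass it explicitly with -i to commands that support it.
withCredentials([sshUserPrivateKey(credentialsId: 'deploy-key',
                                   keyFileVariable: 'SSH_KEY')]) {
  // GIT_SSH_COMMAND makes git use the bound key without any ssh-agent.
  sh 'GIT_SSH_COMMAND="ssh -i $SSH_KEY -o StrictHostKeyChecking=accept-new" git fetch origin'
}
```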

@jglick
Member

jglick commented Jul 30, 2024

Addresses a corner case which can be handled by simply not using this plugin.

jglick closed this Jul 30, 2024