Split the functionality in `node/mounter` into smaller packages #328
base: main
Conversation
Signed-off-by: Burak Varlı <[email protected]>
```go
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
tokenFile := os.Getenv(webIdentityTokenEnv)
if tokenFile != "" {
	klog.Infof("Found AWS_WEB_IDENTITY_TOKEN_FILE, syncing token")
	go tokenFileTender(ctx, tokenFile, "/csi/token")
}
```
This logic has been moved to credentials_sts_web_identity.go. Since we enabled `requiresRepublish` as part of Pod-level identity support, Kubernetes will call `NodePublishVolume` periodically to update existing service account tokens before they expire, and the credential provider is called as part of this method. Another reason for this change is that this code assumes a single location for service account tokens for Driver-level identity, but with containerization this won't be the case. See the note regarding the credential provider in the original PR description for more context.
```go
// If the given file exists, `os.WriteFile` just truncates it without changing its permissions,
// so we need to ensure it always has the correct permissions.
return os.Chmod(path, awsProfileFilePerm)
```
`renameio.WriteFile` does this for us.
```go
// Moves a parameter optionName from the options list to MP's environment variable list. We need this as options are
// passed to the driver in a single field, but MP sometimes only supports config from environment variables.
// Returns the updated options and environment.
func moveOptionToEnvironmentVariables(optionName string, envName string, options []string, env []string) ([]string, []string) {
	optionIdx := -1
	for i, o := range options {
		if strings.HasPrefix(o, optionName) {
			optionIdx = i
			break
		}
	}
	if optionIdx != -1 {
		// We can do replace here as we've just verified it has the right prefix
		env = append(env, strings.Replace(options[optionIdx], optionName, envName, 1))
		options = append(options[:optionIdx], options[optionIdx+1:]...)
	}
	return options, env
}
```
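For example, this function can move Mountpoint's `--aws-max-attempts` flag (mentioned in the review comment below) into an environment variable. The function body is reproduced from the hunk above so the example is self-contained; the `AWS_MAX_ATTEMPTS` env var name is an assumption for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// moveOptionToEnvironmentVariables is reproduced from the diff above.
func moveOptionToEnvironmentVariables(optionName string, envName string, options []string, env []string) ([]string, []string) {
	optionIdx := -1
	for i, o := range options {
		if strings.HasPrefix(o, optionName) {
			optionIdx = i
			break
		}
	}
	if optionIdx != -1 {
		// Safe to Replace: we just verified the option has the right prefix.
		env = append(env, strings.Replace(options[optionIdx], optionName, envName, 1))
		options = append(options[:optionIdx], options[optionIdx+1:]...)
	}
	return options, env
}

func main() {
	options := []string{"--read-only", "--aws-max-attempts=5"}
	var env []string
	options, env = moveOptionToEnvironmentVariables("--aws-max-attempts", "AWS_MAX_ATTEMPTS", options, env)
	fmt.Println(options) // [--read-only]
	fmt.Println(env)     // [AWS_MAX_ATTEMPTS=5]
}
```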
```go
// method to add the user agent prefix to the Mountpoint headers
// https://github.com/awslabs/mountpoint-s3/pull/548
func addUserAgentToOptions(options []string, userAgent string) []string {
	// first remove it from the options in case it's in there
	for i := len(options) - 1; i >= 0; i-- {
		if strings.Contains(options[i], userAgentPrefix) {
			options = append(options[:i], options[i+1:]...)
		}
	}
	// add the hard coded S3 CSI driver user agent string
	return append(options, userAgentPrefix+"="+userAgent)
}
```
The logic for moving `--aws-max-attempts` to env and adding the user agent to the arguments has been moved to node.go, as it's unrelated to the mount mechanism and needs to be there for both `SystemdMounter` and `PodMounter`.
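To illustrate the user-agent behavior discussed above: any user-supplied prefix is dropped and replaced with the driver's own. The function is reproduced from the hunk above so the example is self-contained; the `userAgentPrefix` value of `--user-agent-prefix` and the user-agent string are assumptions based on the linked mountpoint-s3 PR:

```go
package main

import (
	"fmt"
	"strings"
)

// Assumed flag name, based on https://github.com/awslabs/mountpoint-s3/pull/548.
const userAgentPrefix = "--user-agent-prefix"

// addUserAgentToOptions is reproduced from the diff above.
func addUserAgentToOptions(options []string, userAgent string) []string {
	// Remove any existing user-agent option first, iterating backwards
	// so removals don't shift unvisited indices.
	for i := len(options) - 1; i >= 0; i-- {
		if strings.Contains(options[i], userAgentPrefix) {
			options = append(options[:i], options[i+1:]...)
		}
	}
	// Append the driver's own user agent string.
	return append(options, userAgentPrefix+"="+userAgent)
}

func main() {
	options := []string{"--user-agent-prefix=custom", "--read-only"}
	fmt.Println(addUserAgentToOptions(options, "s3-csi-driver/1.0"))
	// [--read-only --user-agent-prefix=s3-csi-driver/1.0]
}
```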
```go
volumeContext := req.GetVolumeContext()
if volumeContext[mounter.VolumeCtxAuthenticationSource] == mounter.AuthenticationSourcePod {
	podID := volumeContext[mounter.VolumeCtxPodUID]
	volumeID := req.GetVolumeId()
	if podID != "" && volumeID != "" {
		err := ns.credentialProvider.CleanupToken(volumeID, podID)
		if err != nil {
			klog.V(4).Infof("NodeStageVolume: Failed to cleanup token for pod/volume %s/%s: %v", podID, volumeID, err)
		}
	}
}
```
This `NodeStageVolume` wasn't enabled at all – we'd need to add the `STAGE_UNSTAGE_VOLUME` node capability to enable it – and this token cleanup has also been moved to the `Unmount` method of the mounter.
```go
targetPath, err := ParseTargetPath(target)
if err == nil {
	if targetPath.VolumeID != volumeID {
		klog.V(4).Infof("NodeUnpublishVolume: Volume ID from parsed target path differs from Volume ID passed: %s (parsed) != %s (passed)", targetPath.VolumeID, volumeID)
	} else {
		err := ns.credentialProvider.CleanupToken(targetPath.VolumeID, targetPath.PodID)
		if err != nil {
			klog.V(4).Infof("NodeUnpublishVolume: Failed to cleanup token for pod/volume %s/%s: %v", targetPath.PodID, volumeID, err)
		}
	}
} else {
	klog.V(4).Infof("NodeUnpublishVolume: Failed to parse target path %s: %v", target, err)
}
```
This cleanup logic has been moved to the `Unmount` method of the mounter.
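The target-path parsing used in the hunk above can be sketched as follows, assuming the kubelet layout `/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<volume-id>/mount` mentioned in the PR description. This is an illustrative sketch, not the driver's actual `ParseTargetPath`, whose implementation may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// TargetPath holds the components recovered from a kubelet target path.
type TargetPath struct {
	PodID    string
	VolumeID string
}

// parseTargetPath extracts the pod UID and volume ID from a path of the form
// /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<volume-id>/mount.
func parseTargetPath(target string) (TargetPath, error) {
	parts := strings.Split(strings.TrimPrefix(target, "/"), "/")
	// Expected: var lib kubelet pods <pod-uid> volumes kubernetes.io~csi <volume-id> mount
	if len(parts) != 9 || parts[3] != "pods" || parts[6] != "kubernetes.io~csi" || parts[8] != "mount" {
		return TargetPath{}, fmt.Errorf("unexpected target path: %q", target)
	}
	return TargetPath{PodID: parts[4], VolumeID: parts[7]}, nil
}

func main() {
	tp, err := parseTargetPath("/var/lib/kubelet/pods/151b8bd8/volumes/kubernetes.io~csi/vol-1/mount")
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod=%s volume=%s\n", tp.PodID, tp.VolumeID) // pod=151b8bd8 volume=vol-1
}
```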
Currently, the `node/mounter` package contains lots of logic on top of mounting, which makes it harder to change or add new mounting logic for #279. This PR tries to address that by splitting the functionality in `node/mounter` into smaller packages.

Changes:

- Moved the `ReplaceFile` function from the `driver` package to the `util` package
- Extracted `TargetPath` into its own package
- Added `mountpoint.Args` for parsing and accessing Mountpoint args. Previously, access to and manipulation of Mountpoint arguments was spread across the codebase; the new `mountpoint` package contains all the logic related to Mountpoint arguments.
- Added the `envprovider` package for accessing environment variables. This package contains all the logic related to accessing environment variables to pass to Mountpoint.
- Added the `volumecontext` package for accessing the volume context from CSI. This package contains constants for keys in the volume context provided in CSI RPCs.
- Added the `regionprovider` package. This package contains all the logic related to detecting regions, which is currently only used by STS in Pod-level identity.
- Added the `credentialprovider` package for getting AWS credentials. This package contains all the logic related to providing AWS credentials for Mountpoint. One other change with this refactor is that the package no longer assumes a path to write credentials to; instead, it accepts `writePath` and `envPath` in the `Credentials.Dump` method. This is needed because the path to write credentials will differ between `SystemdMounter` and the upcoming `PodMounter` for containerization. Another change is that service account tokens for IRSA were previously written to `/var/lib/kubelet/plugins/s3.csi.aws.com/`; they are now written to `/var/lib/kubelet/pods/<pod-uuid>/volumes/kubernetes.io~csi/<volume-id>/credentials`, which is unique for each mount and makes cleaning up credentials easier. This new logic will also be required in `PodMounter`, as Mountpoint Pods won't have a `hostPath` mount and will instead each have their own `emptyDir` to write credentials to.
- Updated `SystemdMounter`. This change basically applies all the previous changes to `SystemdMounter`.

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.