Add volumeattach/volumedetach API #4359

Merged 1 commit on Nov 6, 2024
86 changes: 86 additions & 0 deletions pkg/pillar/kubeapi/vitoapiserver.go
@@ -313,6 +313,92 @@ func RolloutDiskToPVC(ctx context.Context, log *base.LogObject, exists bool,
	return nil
}

// GetPVFromPVC : Returns the volume name (PV) for the given PVC name
func GetPVFromPVC(pvcName string, log *base.LogObject) (string, error) {
	// Get the Kubernetes clientset
	clientset, err := GetClientSet()
	if err != nil {
		err = fmt.Errorf("failed to get clientset: %v", err)

rene (Contributor) commented on Oct 17, 2024:
A new error is being created here; why don't you just propagate the original error (from the GetClientSet() call)? Understood, I was looking at line 355; now I think the approach used here should be used there as well. I will mark it with a comment.

		log.Error(err)
		return "", err
	}
	// Get the PersistentVolumeClaim (PVC)
	pvc, err := clientset.CoreV1().PersistentVolumeClaims(EVEKubeNameSpace).Get(context.TODO(), pvcName, metav1.GetOptions{})

A Member commented:
I'm just curious what the context.TODO() is in this case and whether it will be changed later.

A Member commented:
Also, I'm a little worried about the amount of dereferencing in one line. It could be hard to debug if one of them fails.

The Author replied:
The err should contain the actual error if something fails.

A Member replied:
The err variable will catch only the error returned by the final .Get method call, not by each method in the chain (clientset.CoreV1(), PersistentVolumeClaims(EVEKubeNameSpace)). If there's a subtle issue in one of those intermediate steps, like clientset or EVEKubeNameSpace, you might not catch it right away. But it's a minor point for this PR.

	if err != nil {
		err = fmt.Errorf("error fetching PersistentVolumeClaim %s: %v", pvcName, err)
		log.Error(err)
		return "", err
	}

	// Get the associated PersistentVolume (PV) name
	volumeName := pvc.Spec.VolumeName

	log.Noticef("PersistentVolume %s is associated with PVC %s", volumeName, pvcName)

	return volumeName, nil
}

// GetVolumeAttachmentFromPV : Returns the volume attachment, if any, for the given PV, along with the node name.
// Longhorn attaches to the PV to serve it to the apps. This API returns the attachment name and node name.
// We use that attachment name to delete the attachment during failover:
// the attachment on the previous node needs to be deleted before the volume can attach to the current node.
func GetVolumeAttachmentFromPV(volName string, log *base.LogObject) (string, string, error) {
	// Get the Kubernetes clientset
	clientset, err := GetClientSet()
	if err != nil {
		err = fmt.Errorf("failed to get clientset: %v", err)
		log.Error(err)
		return "", "", err
	}
	// List all VolumeAttachments and find the one corresponding to the PV
	volumeAttachments, err := clientset.StorageV1().VolumeAttachments().List(context.TODO(), metav1.ListOptions{})

A Member commented:
Is this a synchronous call to the API? Are we sure we will not have issues with the watchdog if the requests take longer than expected? This is a generic question for all the API calls here.

The Author replied:
It is a synchronous call, but I do not anticipate it taking even a second to get the result. The number of volumes we have in our deployments is in the single digits.

A Member replied:
OK, and what about the other API calls in the file, like .Delete()? Are we sure all of them run fast enough?

The Author replied:
So far we have not seen any issues with the watchdog for any of the Kubernetes calls. Also, I thought domainmgr/volumemgr executes these jobs in asynchronous worker threads, so they should not block anything that would trigger the watchdog. Is that not right?

The Author added:
Also, Kubernetes API calls are asynchronous by default. The only synchronous part is issuing the command and returning; after that, any action happens in the background.

The Author asked:
Any more questions? Or can this be merged?

	if err != nil {
		err = fmt.Errorf("error listing VolumeAttachments: %v", err)
		log.Error(err)
		return "", "", err
	}

	// Iterate through VolumeAttachments to find one that references the PV's CSI volume handle
	for _, va := range volumeAttachments.Items {
		if va.Spec.Source.PersistentVolumeName != nil && *va.Spec.Source.PersistentVolumeName == volName {
			log.Noticef("VolumeAttachment for vol %s found: %s (attached to node: %s)\n", volName, va.Name, va.Spec.NodeName)
			return va.Name, va.Spec.NodeName, nil
		}
	}

	return "", "", nil
}
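The loop above guards the *string pointer before dereferencing it, since a VolumeAttachment may reference a CSI volume handle rather than a PV. A small runnable sketch of the same matching pattern, using simplified stand-in structs (source, attachment) rather than the real storage/v1 types:

```go
package main

import "fmt"

// Minimal stand-ins for the fields of storagev1.VolumeAttachment that
// GetVolumeAttachmentFromPV inspects; field names mirror the real API.
type source struct{ PersistentVolumeName *string }
type attachment struct {
	Name     string
	NodeName string
	Source   source
}

func findAttachment(items []attachment, volName string) (string, string, bool) {
	for _, va := range items {
		// Guard the nil pointer before dereferencing, as the real code does.
		if va.Source.PersistentVolumeName != nil && *va.Source.PersistentVolumeName == volName {
			return va.Name, va.NodeName, true
		}
	}
	return "", "", false
}

func main() {
	pv := "pvc-123"
	items := []attachment{
		{Name: "csi-abc", NodeName: "node-1", Source: source{PersistentVolumeName: &pv}},
		{Name: "csi-def", NodeName: "node-2"}, // no PV reference: skipped safely
	}
	name, node, ok := findAttachment(items, "pvc-123")
	fmt.Println(name, node, ok) // csi-abc node-1 true
}
```

Dropping the nil check would panic on any attachment whose source is not a PersistentVolume, which is why the guard comes first in the condition.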

// DeleteVolumeAttachment : Delete the VolumeAttachment of the given name
func DeleteVolumeAttachment(vaName string, log *base.LogObject) error {
	// Get the Kubernetes clientset
	clientset, err := GetClientSet()
	if err != nil {
		err = fmt.Errorf("failed to get clientset: %v", err)
		log.Error(err)
		return err
	}

	// Force delete the VolumeAttachment with the grace period set to 0.
	// This ensures the attachment is really deleted before it is assigned to some other node.
	deletePolicy := metav1.DeletePropagationForeground
	deleteOptions := metav1.DeleteOptions{
		GracePeriodSeconds: new(int64), // Set grace period to 0 for force deletion
		PropagationPolicy:  &deletePolicy,
	}

	// Delete the VolumeAttachment
	err = clientset.StorageV1().VolumeAttachments().Delete(context.TODO(), vaName, deleteOptions)
	if err != nil {
		err = fmt.Errorf("error deleting VolumeAttachment %s: %v", vaName, err)
		log.Error(err)
		return err
	}

	log.Noticef("Deleted volumeattachment %s", vaName)
	return nil
}

func stringPtr(str string) *string {
	return &str
}