Update known issue powermax #833

Merged
merged 3 commits into from
Sep 7, 2023
1 change: 1 addition & 0 deletions content/docs/csidriver/release/powermax.md
@@ -36,6 +36,7 @@ description: Release notes for PowerMax CSI driver
| Unable to update Host: A problem occurred modifying the host resource | This issue occurs when the nodes do not have unique hostnames or when an IP address/FQDN with same sub-domains are used as hostnames. The workaround is to use unique hostnames or FQDN with unique sub-domains|
| When a node goes down, the block volumes attached to the node cannot be attached to another node | This is a known issue and has been reported at https://github.com/kubernetes-csi/external-attacher/issues/215. Workaround: <br /> 1. Force delete the pod running on the node that went down <br /> 2. Delete the volumeattachment to the node that went down. <br /> Now the volume can be attached to the new node |
| If the volume limit is exhausted and there are pending pods and PVCs due to `exceed max volume count`, the pending PVCs will be bound to PVs and the pending pods will be scheduled to nodes when the driver pods are restarted. | It is advised not to have any pending pods or PVCs once the volume limit per node is exhausted on a CSI Driver. There is an open issue reported with Kubernetes at https://github.com/kubernetes/kubernetes/issues/95911 with the same behavior. |
| PowerPath FS volumes do not show the expanded size for the file system inside the application pod | The workaround to expand the file system is: <br /> 1. Find the PowerPath pseudo device using the volume ID with the command `lsblk \| grep <VOLUMEID>`. <br /> 2. Run `blockdev --rereadpt /dev/<powerpath_pseudo_device>` to re-read the partition table. <br /> 3. If the file system type is _ext_, run `resize2fs /dev/<powerpath_pseudo_device>`; if it is _xfs_, run `xfs_growfs -d /dev/<powerpath_pseudo_device>`. |
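The three-step workaround above can be sketched as a small script. The volume ID (`00ABC`) and the pseudo-device name (`emcpowera`) below are hypothetical examples; on a real node the device name must be parsed from the actual `lsblk` output. This sketch only echoes the commands it would run, so it can be traced without root access or a PowerMax array.

```shell
#!/bin/sh
# Sketch of the PowerPath filesystem-resize workaround.
# Echoes each command instead of executing it (dry run).

expand_powerpath_fs() {
  volume_id="$1"    # PowerMax volume ID (hypothetical example value)
  fstype="$2"       # filesystem type: ext4 or xfs

  # Step 1: locate the PowerPath pseudo device by volume ID.
  echo "lsblk | grep $volume_id"
  dev="/dev/emcpowera"   # hypothetical device parsed from the lsblk output

  # Step 2: have the kernel re-read the partition table to pick up the new size.
  echo "blockdev --rereadpt $dev"

  # Step 3: grow the filesystem to fill the device.
  case "$fstype" in
    ext*) echo "resize2fs $dev" ;;      # ext2/3/4 online resize
    xfs)  echo "xfs_growfs -d $dev" ;;  # grow the xfs data section to maximum
  esac
}

expand_powerpath_fs 00ABC ext4
```

In a real run the `echo` prefixes would be dropped and `dev` derived from step 1; the branch on filesystem type matters because `resize2fs` only handles ext-family filesystems, while xfs must be grown with `xfs_growfs` on the mounted filesystem.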

### Note:
