This repository has been archived by the owner on Oct 22, 2024. It is now read-only.

incorrect volume size in direct mode #933

Closed · krisiasty opened this issue Apr 15, 2021 · 22 comments
Labels: 1.0, bug

@krisiasty

I have a 5-node cluster of dual-socket machines fully populated with 512GB PMem modules (two regions of 6x512GB in App Direct interleaved mode per node), running OpenShift 4.6.6 (Kubernetes 1.19) and pmem-csi 1.9.0.

I'm requesting four 1480GiB volumes per node in direct mode, but I'm getting 1458GiB volumes instead.
When I try to request 1500GiB volumes, I get two 1482GiB volumes in one region and two 1481GiB volumes in the other. Not only are the requests not fulfilled exactly, the allocated space is also inconsistent between the two regions. On top of that, the reported volume size is not the actual size.

The empty regions look like this:

[core@ocp5 ~]$ LD_LIBRARY_PATH=. ./ndctl list -RNu
[
  {
    "dev":"region1",
    "size":"3012.00 GiB (3234.11 GB)",
    "available_size":"3012.00 GiB (3234.11 GB)",
    "max_available_extent":"3012.00 GiB (3234.11 GB)",
    "type":"pmem",
    "iset_id":"0x6bcc7f481d9d2ccc",
    "persistence_domain":"memory_controller"
  },
  {
    "dev":"region0",
    "size":"3012.00 GiB (3234.11 GB)",
    "available_size":"3012.00 GiB (3234.11 GB)",
    "max_available_extent":"3012.00 GiB (3234.11 GB)",
    "type":"pmem",
    "iset_id":"0xdb067f48bfc62ccc",
    "persistence_domain":"memory_controller"
  }
]

The PVC spec and status for 1480GiB volumes:

spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1480Gi
  storageClassName: pmem-xfs
  volumeMode: Filesystem
  volumeName: pvc-2bdecb32-1e11-4db5-913c-e08599790bbd
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1480Gi
  phase: Bound

The status says it's 1480Gi, but in reality it's 1565515579392 bytes = 1458GiB:

[core@ocp5 ~]$ lsblk -b | grep pvc-2bdecb32-1e11-4db5-913c-e08599790bbd
pmem1.1                      259:5    0 1565515579392  0 disk /var/lib/kubelet/pods/b0eed186-9e21-4232-b9d5-4977b472d42d/volumes/kubernetes.io~csi/pvc-2bdecb32-1e11-4db5-913c-e08599790bbd/mount

And confirmation from ndctl (four volumes per node):

[core@ocp5 ~]$ LD_LIBRARY_PATH=. ./ndctl list -RNu
{
  "regions":[
    {
      "dev":"region1",
      "size":"3012.00 GiB (3234.11 GB)",
      "available_size":"48.00 GiB (51.54 GB)",
      "max_available_extent":"48.00 GiB (51.54 GB)",
      "type":"pmem",
      "iset_id":"0x217c7f48049b2ccc",
      "persistence_domain":"memory_controller",
      "namespaces":[
        {
          "dev":"namespace1.1",
          "mode":"fsdax",
          "map":"dev",
          "size":"1458.00 GiB (1565.52 GB)",
          "uuid":"3094b276-9dd4-11eb-9796-0a580a3c0223",
          "sector_size":512,
          "align":1073741824,
          "blockdev":"pmem1.1",
          "name":"pvc-2b-def0357682e98f9dbdc802ce26ed0106377716492ba74082a677ea7c"
        },
        {
          "dev":"namespace1.3",
          "mode":"fsdax",
          "map":"dev",
          "size":"1458.00 GiB (1565.52 GB)",
          "uuid":"23076b7c-9dd4-11eb-9796-0a580a3c0223",
          "sector_size":512,
          "align":1073741824,
          "blockdev":"pmem1.3",
          "name":"pvc-55-8a112b2edabe3a361bc2a9ffad916cd6dcee1d7440ed989b4887dc25"
        }
      ]
    },
    {
      "dev":"region0",
      "size":"3012.00 GiB (3234.11 GB)",
      "available_size":"48.00 GiB (51.54 GB)",
      "max_available_extent":"48.00 GiB (51.54 GB)",
      "type":"pmem",
      "iset_id":"0x2fc27f487d9c2ccc",
      "persistence_domain":"memory_controller",
      "namespaces":[
        {
          "dev":"namespace0.1",
          "mode":"fsdax",
          "map":"dev",
          "size":"1458.00 GiB (1565.52 GB)",
          "uuid":"4d55363e-9dd4-11eb-9796-0a580a3c0223",
          "sector_size":512,
          "align":1073741824,
          "blockdev":"pmem0.1",
          "name":"pvc-dc-506d915c45300eda3742d6b5ab21f2b262b021ed5af4d5a8f35f293d"
        },
        {
          "dev":"namespace0.3",
          "mode":"fsdax",
          "map":"dev",
          "size":"1458.00 GiB (1565.52 GB)",
          "uuid":"40e680dc-9dd4-11eb-9796-0a580a3c0223",
          "sector_size":512,
          "align":1073741824,
          "blockdev":"pmem0.3",
          "name":"pvc-36-8a2e5baf83b66756aa9f6a603df379591896de9261dbc011ae3fcdc1"
        }
      ]
    }
  ]
}

PmemCSIDeployment:

apiVersion: pmem-csi.intel.com/v1beta1
kind: PmemCSIDeployment
metadata:
  name: pmem-csi.intel.com
spec:
  deviceMode: direct
  nodeSelector:
    storage: "pmem"

Storage class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pmem-xfs
parameters:
  csi.storage.k8s.io/fstype: xfs
  eraseafter: "false"
provisioner: pmem-csi.intel.com
volumeBindingMode: WaitForFirstConsumer

Note that I had to disable 'eraseafter', since shredding large volumes like ~1.5TB is extremely slow.

@pohly
Contributor

pohly commented Apr 15, 2021

@okartau: this namespace alignment logic is your area of expertise. Can you investigate this?

@pohly
Copy link
Contributor

pohly commented Apr 15, 2021

We never had unit tests for that code either. Perhaps it would now be possible to write some unit tests, using the mocking code I added for ndctl? It's not merged yet, though; see pkg/ndctl/fake in #919.

@krisiasty
Author

I did some more experiments and I think I found the root cause.

First, I tried to create a 1500GiB fsdax namespace manually using ndctl (version 67):

[core@ocp1 ~]$ sudo LD_LIBRARY_PATH=. ./ndctl create-namespace -m fsdax -s 1500G -a 2M -v -r region0
{
  "dev":"namespace0.0",
  "mode":"fsdax",
  "map":"dev",
  "size":"1476.56 GiB (1585.44 GB)",
  "uuid":"e1655aa1-ae63-4625-b197-b3ef2e8167e0",
  "sector_size":512,
  "align":2097152,
  "blockdev":"pmem0"
}

The namespace size is 1476.56 GiB instead of the requested 1500GiB - similar to what we are observing with pmem-csi.

After some investigation I found the answer in the ndctl documentation. There is a -M, --map= argument stating:

A pmem namespace in "fsdax" or "devdax" mode requires allocation of per-page metadata. The allocation can be drawn from either:
"mem": typical system memory
"dev": persistent memory reserved from the namespace
Given relative capacities of "Persistent Memory" to "System RAM" the allocation defaults to reserving space out of the namespace directly ("--map=dev"). The overhead is 64-bytes per 4K (16GB per 1TB) on x86.
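
A quick sanity check of that figure (just arithmetic, not pmem-csi code): 64 bytes per 4K is 1/64 of the namespace, which for a 1500GiB request is exactly the ~23.44GiB that went missing above:

package main

import "fmt"

func main() {
	const GiB = int64(1) << 30
	requested := 1500 * GiB    // 1610612736000 bytes
	overhead := requested / 64 // 64 bytes of metadata per 4K page = 1/64 of the size
	usable := requested - overhead

	fmt.Printf("map overhead: %d bytes (%.2f GiB)\n", overhead, float64(overhead)/float64(GiB))
	fmt.Printf("usable size:  %d bytes (%.2f GiB)\n", usable, float64(usable)/float64(GiB))
	// map overhead: 25165824000 bytes (23.44 GiB)
	// usable size:  1585446912000 bytes (1476.56 GiB)
}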

I created another namespace using the -M mem option and got exactly 1500GiB:

[core@ocp1 ~]$ sudo LD_LIBRARY_PATH=. ./ndctl create-namespace -m fsdax -s 1500G -a 2M -v -r region1 -M mem
{
  "dev":"namespace1.3",
  "mode":"fsdax",
  "map":"mem",
  "size":"1500.00 GiB (1610.61 GB)",
  "uuid":"2167a111-d77c-44d2-8d61-c4964f3b357f",
  "sector_size":512,
  "align":2097152,
  "blockdev":"pmem1.3"
}

So in the default (dev) map mode, the space for the map is taken from the namespace itself and the resulting namespace is smaller than requested. This behavior is counter-intuitive - one would expect the additional space for the map to be allocated from the free space in the region, not from the requested namespace - but I don't think this will be fixed anytime soon (if ever), so pmem-csi needs to compensate for it: increase the request size by the map overhead, then align properly.

Also, I have found in the code that you are checking for available space in the region before adjusting the size (alignment), which is incorrect - this check should be done after all alignments.
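
To illustrate both points, here is a rough sketch (a hypothetical helper, not the actual pmem-csi code) of the order I would expect: add the map overhead, round up to interleave-ways * alignment, and only then compare against the region's available space:

package main

import "fmt"

// neededRawSize is a hypothetical helper showing the compensation I have in
// mind: add the --map=dev overhead (1/64 of the size) and round the result up
// to interleaveWays * alignment. The exact libndctl accounting may still
// differ by a few MiB, see my next comment.
func neededRawSize(requested, alignment, interleaveWays int64) int64 {
	withMap := requested + requested/64
	step := alignment * interleaveWays
	return (withMap + step - 1) / step * step // round up to a full alignment step
}

func main() {
	const GiB = int64(1) << 30
	requested := 1500 * GiB
	raw := neededRawSize(requested, GiB, 6) // 1GiB alignment, interleave-ways 6

	// Only after all adjustments does it make sense to compare against the
	// region's available space (3012GiB per the ndctl output above).
	available := 3012 * GiB
	fmt.Printf("raw size to request from libndctl: %d bytes\n", raw)
	fmt.Printf("fits into the empty region: %v\n", raw <= available)
}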

@krisiasty
Author

krisiasty commented Apr 15, 2021

One more thing - I found that the space for the map is also somehow aligned. For a 1500GiB request the map should be 24000MiB (= 23.4375GiB), but if I request a 1635778560000-byte namespace (1500GiB + 24000MiB) I get only 1499.63 GiB of usable space. Unfortunately I don't have enough time / knowledge to analyze the libndctl code to find the correct way to compute this overhead.

After several tries I found that to get an actual 1500GiB volume I had to request 1636181213184 bytes - it's still 2MiB larger than needed (1636174921728 bytes instead of 1610612736000), but that might be actual alignment. So in the end I had to request 1500GiB+24384MiB to actually get a 1500GiB namespace.
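
For reference, the same numbers in bytes (the extra 384MiB is purely empirical, I did not derive it from libndctl):

package main

import "fmt"

func main() {
	const (
		GiB = int64(1) << 30
		MiB = int64(1) << 20
	)
	target := 1500 * GiB          // 1610612736000 bytes of usable space wanted
	mapOnly := target + 24000*MiB // 1635778560000 bytes - still only 1499.63 GiB usable
	working := target + 24384*MiB // 1636181213184 bytes - finally yields a 1500GiB namespace

	fmt.Println(target, mapOnly, working)
}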

@okartau
Contributor

okartau commented Apr 15, 2021

Thanks for the good analysis and proposals!

Also, I have found in the code that you are checking for available space in the region before adjusting size

You mean the code in pkg/ndctl/region.go?
I think you are right. That code was ported library code and the alignment adjustment was inserted later. git blame points to me...

Overall, as you already found out, there is space overhead in the lower ndctl layers which always gives us a smaller volume than we ask for, so we compensate by asking for more. That also causes a side issue that I tried to document in #399: it is difficult to satisfy the CSI spec requirement about exact allocation, because we can't allocate exactly. The result is typically bigger, or we have to report failure.
The alignment scheme is not simple. That has led to "size compensating code" on two levels, in pmem-csi and in that region.go (ported lib code). Now that I read it, I think the logic differs between LVM and direct mode; we should re-check.

About mapping: yes, we use the default mapping. I did notice the libndctl mapping argument during development, but when I tried it I was hitting more and new errors, so I gave up and reverted to the default mode, which worked reliably.
(But that was some time ago, with much older ndctl and kernel code, so things may have improved.)

Your case is probably the first that tries this on real DIMMs of such size; we had smaller ones during development.
The CI system favors LVM mode a lot, so direct mode gets less attention (and CI runs on top of emulated storage anyway).
At the scale you show, it is possible that the overhead is bigger than we anticipated.
I think we should try to add a check of the resulting namespace immediately after creation.
@pohly is right, we should have better unit testing for that area. A few things are HW-dependent, like the number of interleave ways, which affects alignment sizes, but we can hopefully mock those values somehow in testing.
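
Something along these lines could catch it right after creation (a pure sketch, the names are made up and this is not the actual pmem-csi API):

package main

import "fmt"

// createAndVerify is a hypothetical wrapper around namespace creation: it
// compares the size reported back after creation with what was requested and
// returns an error instead of silently handing out a smaller volume.
func createAndVerify(requested int64, create func(int64) (int64, error)) (int64, error) {
	actual, err := create(requested)
	if err != nil {
		return 0, err
	}
	if actual < requested {
		return 0, fmt.Errorf("namespace too small: requested %d bytes, got %d bytes", requested, actual)
	}
	return actual, nil
}

func main() {
	// Fake "creation" that loses 1/64 of the size to --map=dev metadata,
	// mimicking what was observed on the real hardware.
	fakeCreate := func(size int64) (int64, error) { return size - size/64, nil }

	const GiB = int64(1) << 30
	if _, err := createAndVerify(1500*GiB, fakeCreate); err != nil {
		fmt.Println("would have failed early with:", err)
	}
}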

BTW, is it possible for you (if not too much effort) to extract the log output from the pmem-csi driver around the places where these namespaces are created? It would be interesting to see the logged values when K8s asks for a volume, what alignment decisions pmem-csi makes, and so on, from the actual run. I can't reproduce it right away, as I don't have such big devices.

@krisiasty
Author

krisiasty commented Apr 15, 2021

Here are some logs I grabbed previously with a 1500GiB request and log level 4 (correction: it's actually log level 5):

I0415 12:18:40.868958       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 0 items received
I0415 12:18:57.741290       1 controller.go:1332] provision "default/server-data-cluster1-dc1-racka-sts-0" class "pmem-xfs": started
I0415 12:18:57.741478       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"server-data-cluster1-dc1-racka-sts-0", UID:"a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8", APIVersion:"v1", ResourceVersion:"72039566", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/server-data-cluster1-dc1-racka-sts-0"
I0415 12:18:57.741404       1 controller.go:731] CreateVolumeRequest name:"pvc-a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8" capacity_range:<required_bytes:1610612736000 > volume_capabilities:<mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER > > parameters:<key:"eraseafter" value:"false" > accessibility_requirements:<requisite:<segments:<key:"pmem-csi.intel.com/node" value:"ocp1" > > preferred:<segments:<key:"pmem-csi.intel.com/node" value:"ocp1" > > >
I0415 12:18:57.741792       1 connection.go:182] GRPC call: /csi.v1.Controller/CreateVolume
I0415 12:18:57.741800       1 connection.go:183] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}],"requisite":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}]},"capacity_range":{"required_bytes":1610612736000},"name":"pvc-a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8","parameters":{"eraseafter":"false"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}}]}
I0415 12:18:57.742734       1 tracing.go:19] GRPC call: /csi.v1.Controller/CreateVolume
I0415 12:18:57.742765       1 tracing.go:20] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}],"requisite":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}]},"capacity_range":{"required_bytes":1610612736000},"name":"pvc-a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8","parameters":{"eraseafter":"false"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}}]}
I0415 12:18:57.743035       1 controllerserver-node.go:221] Node CreateVolume: Name:"pvc-a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8" req.Required:1610612736000 req.Limit:0
I0415 12:18:57.743049       1 controllerserver-node.go:519] Node CreateVolume: Create VolumeID:pvc-a6-d3bf70885af9aa7fe39e0007671b912cf9749730ad814c8694225c2a based on name:pvc-a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8
I0415 12:18:57.750186       1 pmd-ndctl.go:168] Compensate for libndctl creating one alignment step smaller: increase size to 1611686477824
I0415 12:18:57.750272       1 region.go:202] region1: namespace size must align to interleave-width:6 * alignment:1073741824, force-align to 1617055186944
I0415 12:18:57.805909       1 controller.go:1332] provision "default/server-data-cluster1-dc1-racka-sts-1" class "pmem-xfs": started
I0415 12:18:57.806160       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"server-data-cluster1-dc1-racka-sts-1", UID:"8eed9a43-3ec4-4b14-a0b0-41d8bad46989", APIVersion:"v1", ResourceVersion:"72039592", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/server-data-cluster1-dc1-racka-sts-1"
I0415 12:18:57.806182       1 controller.go:731] CreateVolumeRequest name:"pvc-8eed9a43-3ec4-4b14-a0b0-41d8bad46989" capacity_range:<required_bytes:1610612736000 > volume_capabilities:<mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER > > parameters:<key:"eraseafter" value:"false" > accessibility_requirements:<requisite:<segments:<key:"pmem-csi.intel.com/node" value:"ocp1" > > preferred:<segments:<key:"pmem-csi.intel.com/node" value:"ocp1" > > >
I0415 12:18:57.806291       1 connection.go:182] GRPC call: /csi.v1.Controller/CreateVolume
I0415 12:18:57.806308       1 connection.go:183] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}],"requisite":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}]},"capacity_range":{"required_bytes":1610612736000},"name":"pvc-8eed9a43-3ec4-4b14-a0b0-41d8bad46989","parameters":{"eraseafter":"false"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}}]}
I0415 12:18:57.806749       1 tracing.go:19] GRPC call: /csi.v1.Controller/CreateVolume
I0415 12:18:57.806762       1 tracing.go:20] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}],"requisite":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}]},"capacity_range":{"required_bytes":1610612736000},"name":"pvc-8eed9a43-3ec4-4b14-a0b0-41d8bad46989","parameters":{"eraseafter":"false"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}}]}
I0415 12:18:57.806892       1 controllerserver-node.go:221] Node CreateVolume: Name:"pvc-8eed9a43-3ec4-4b14-a0b0-41d8bad46989" req.Required:1610612736000 req.Limit:0
I0415 12:18:57.806904       1 controllerserver-node.go:519] Node CreateVolume: Create VolumeID:pvc-8e-ef468f05f87cd4daf1fcf59e9dd68b43febee91fcb6310d5a4e1bcb5 based on name:pvc-8eed9a43-3ec4-4b14-a0b0-41d8bad46989
I0415 12:18:57.814469       1 controller.go:1332] provision "default/server-data-cluster1-dc1-racka-sts-2" class "pmem-xfs": started
I0415 12:18:57.814515       1 controller.go:731] CreateVolumeRequest name:"pvc-aebf96b1-1596-4da7-abca-8204215a9d1c" capacity_range:<required_bytes:1610612736000 > volume_capabilities:<mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER > > parameters:<key:"eraseafter" value:"false" > accessibility_requirements:<requisite:<segments:<key:"pmem-csi.intel.com/node" value:"ocp1" > > preferred:<segments:<key:"pmem-csi.intel.com/node" value:"ocp1" > > >
I0415 12:18:57.814567       1 connection.go:182] GRPC call: /csi.v1.Controller/CreateVolume
I0415 12:18:57.814583       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"server-data-cluster1-dc1-racka-sts-2", UID:"aebf96b1-1596-4da7-abca-8204215a9d1c", APIVersion:"v1", ResourceVersion:"72039613", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/server-data-cluster1-dc1-racka-sts-2"
I0415 12:18:57.814571       1 connection.go:183] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}],"requisite":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}]},"capacity_range":{"required_bytes":1610612736000},"name":"pvc-aebf96b1-1596-4da7-abca-8204215a9d1c","parameters":{"eraseafter":"false"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}}]}
I0415 12:18:57.814936       1 tracing.go:19] GRPC call: /csi.v1.Controller/CreateVolume
I0415 12:18:57.814960       1 tracing.go:20] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}],"requisite":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}]},"capacity_range":{"required_bytes":1610612736000},"name":"pvc-aebf96b1-1596-4da7-abca-8204215a9d1c","parameters":{"eraseafter":"false"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}}]}
I0415 12:18:57.815094       1 controllerserver-node.go:221] Node CreateVolume: Name:"pvc-aebf96b1-1596-4da7-abca-8204215a9d1c" req.Required:1610612736000 req.Limit:0
I0415 12:18:57.815110       1 controllerserver-node.go:519] Node CreateVolume: Create VolumeID:pvc-ae-09b3c2f812888d6ce86af361a25043858357775ec3ceb21999c4b6da based on name:pvc-aebf96b1-1596-4da7-abca-8204215a9d1c
I0415 12:18:57.830295       1 controller.go:1332] provision "default/server-data-cluster1-dc1-racka-sts-3" class "pmem-xfs": started
I0415 12:18:57.830330       1 controller.go:731] CreateVolumeRequest name:"pvc-fbce7559-e360-4a20-ac37-d52b320d78f5" capacity_range:<required_bytes:1610612736000 > volume_capabilities:<mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER > > parameters:<key:"eraseafter" value:"false" > accessibility_requirements:<requisite:<segments:<key:"pmem-csi.intel.com/node" value:"ocp1" > > preferred:<segments:<key:"pmem-csi.intel.com/node" value:"ocp1" > > >
I0415 12:18:57.830381       1 connection.go:182] GRPC call: /csi.v1.Controller/CreateVolume
I0415 12:18:57.830461       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"server-data-cluster1-dc1-racka-sts-3", UID:"fbce7559-e360-4a20-ac37-d52b320d78f5", APIVersion:"v1", ResourceVersion:"72039645", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/server-data-cluster1-dc1-racka-sts-3"
I0415 12:18:57.830387       1 connection.go:183] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}],"requisite":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}]},"capacity_range":{"required_bytes":1610612736000},"name":"pvc-fbce7559-e360-4a20-ac37-d52b320d78f5","parameters":{"eraseafter":"false"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}}]}
I0415 12:18:57.830709       1 tracing.go:19] GRPC call: /csi.v1.Controller/CreateVolume
I0415 12:18:57.830728       1 tracing.go:20] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}],"requisite":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}]},"capacity_range":{"required_bytes":1610612736000},"name":"pvc-fbce7559-e360-4a20-ac37-d52b320d78f5","parameters":{"eraseafter":"false"},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}}]}
I0415 12:18:57.830862       1 controllerserver-node.go:221] Node CreateVolume: Name:"pvc-fbce7559-e360-4a20-ac37-d52b320d78f5" req.Required:1610612736000 req.Limit:0
I0415 12:18:57.830878       1 controllerserver-node.go:519] Node CreateVolume: Create VolumeID:pvc-fb-37c6758341b5c204e84b9d84955d9d0097f983c83f68006ae39361f7 based on name:pvc-fbce7559-e360-4a20-ac37-d52b320d78f5
I0415 12:18:58.564528       1 region.go:230] setting namespace sector size: 0
I0415 12:18:59.281592       1 region.go:240] setting pfn
I0415 12:19:21.558731       1 region.go:251] enabling namespace
I0415 12:19:21.559352       1 ndctl.go:77] Namespace pvc-a6-d3bf70885af9aa7fe39e0007671b912cf9749730ad814c8694225c2a created in region1
I0415 12:19:21.559560       1 pmd-ndctl.go:179] Namespace created: {"blockdev":"pmem1.2","dev":"namespace1.2","enabled":false,"id":2,"map":"dev","mode":"fsdax","name":"pvc-a6-d3bf70885af9aa7fe39e0007671b912cf9749730ad814c8694225c2a","size":1591285383168,"uuid":"c1b3b5b3-9de4-11eb-8e52-0a580a3c0446"}
I0415 12:19:21.559630       1 pmd-util.go:20] ClearDevice: path: /dev/pmem1.2 flush:false
I0415 12:19:21.559656       1 pmd-util.go:61] Zeroing 4 1k blocks at start of device: /dev/pmem1.2 Size 1591285383168
I0415 12:19:21.559698       1 exec.go:32] Executing: /bin/dd if=/dev/zero of=/dev/pmem1.2 bs=1024 count=4
I0415 12:19:21.560508       1 exec.go:45] /bin/dd: 4+0 records in
I0415 12:19:21.560522       1 exec.go:45] /bin/dd: 4+0 records out
I0415 12:19:21.560535       1 exec.go:45] /bin/dd: 4096 bytes (4.1 kB, 4.0 KiB) copied, 3.7633e-05 s, 109 MB/s
I0415 12:19:21.560589       1 exec.go:51] /bin/dd terminated, with 91 bytes of output and error <nil>
I0415 12:19:21.560749       1 controllerserver-node.go:281] Node CreateVolume: Record new volume as {pvc-a6-d3bf70885af9aa7fe39e0007671b912cf9749730ad814c8694225c2a 1610612736000 map[deviceMode:direct eraseafter:false name:pvc-a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8]}
I0415 12:19:21.560810       1 tracing.go:25] GRPC response: {"volume":{"accessible_topology":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}],"capacity_bytes":1610612736000,"volume_id":"pvc-a6-d3bf70885af9aa7fe39e0007671b912cf9749730ad814c8694225c2a"}}
I0415 12:19:21.561593       1 pmd-ndctl.go:168] Compensate for libndctl creating one alignment step smaller: increase size to 1611686477824
I0415 12:19:21.561554       1 connection.go:185] GRPC response: {"volume":{"accessible_topology":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}],"capacity_bytes":1610612736000,"volume_id":"pvc-a6-d3bf70885af9aa7fe39e0007671b912cf9749730ad814c8694225c2a"}}
I0415 12:19:21.561667       1 region.go:202] region1: namespace size must align to interleave-width:6 * alignment:1073741824, force-align to 1617055186944
I0415 12:19:21.561689       1 connection.go:186] GRPC error: <nil>
I0415 12:19:21.561702       1 controller.go:762] create volume rep: {CapacityBytes:1610612736000 VolumeId:pvc-a6-d3bf70885af9aa7fe39e0007671b912cf9749730ad814c8694225c2a VolumeContext:map[] ContentSource:<nil> AccessibleTopology:[segments:<key:"pmem-csi.intel.com/node" value:"ocp1" > ] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0415 12:19:21.561745       1 controller.go:838] successfully created PV pvc-a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8 for PVC server-data-cluster1-dc1-racka-sts-0 and csi volume name pvc-a6-d3bf70885af9aa7fe39e0007671b912cf9749730ad814c8694225c2a
I0415 12:19:21.561752       1 controller.go:854] successfully created PV {GCEPersistentDisk:nil AWSElasticBlockStore:nil HostPath:nil Glusterfs:nil NFS:nil RBD:nil ISCSI:nil Cinder:nil CephFS:nil FC:nil Flocker:nil FlexVolume:nil AzureFile:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil PortworxVolume:nil ScaleIO:nil Local:nil StorageOS:nil CSI:&CSIPersistentVolumeSource{Driver:pmem-csi.intel.com,VolumeHandle:pvc-a6-d3bf70885af9aa7fe39e0007671b912cf9749730ad814c8694225c2a,ReadOnly:false,FSType:xfs,VolumeAttributes:map[string]string{storage.kubernetes.io/csiProvisionerIdentity: 1618488538864-8081-pmem-csi.intel.com-ocp1,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,}}
I0415 12:19:21.561883       1 controller.go:1439] provision "default/server-data-cluster1-dc1-racka-sts-0" class "pmem-xfs": volume "pvc-a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8" provisioned
I0415 12:19:21.561902       1 controller.go:1456] provision "default/server-data-cluster1-dc1-racka-sts-0" class "pmem-xfs": succeeded
I0415 12:19:21.561909       1 volume_store.go:154] Saving volume pvc-a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8
I0415 12:19:21.566235       1 volume_store.go:157] Volume pvc-a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8 saved
I0415 12:19:21.566282       1 controller.go:1093] Claim processing succeeded, removing PVC a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8 from claims in progress
I0415 12:19:21.566349       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"server-data-cluster1-dc1-racka-sts-0", UID:"a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8", APIVersion:"v1", ResourceVersion:"72039566", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8
I0415 12:19:22.014461       1 tracing.go:19] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0415 12:19:22.014477       1 tracing.go:20] GRPC request: {}
I0415 12:19:22.014550       1 tracing.go:25] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
I0415 12:19:22.015628       1 tracing.go:19] GRPC call: /csi.v1.Node/NodeStageVolume
I0415 12:19:22.015642       1 tracing.go:20] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"storage.kubernetes.io/csiProvisionerIdentity":"1618488538864-8081-pmem-csi.intel.com-ocp1"},"volume_id":"pvc-a6-d3bf70885af9aa7fe39e0007671b912cf9749730ad814c8694225c2a"}
I0415 12:19:22.015766       1 nodeserver.go:493] NodeStageVolume: VolumeID:pvc-a6-d3bf70885af9aa7fe39e0007671b912cf9749730ad814c8694225c2a Staging target path:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8/globalmount Requested fsType:xfs Requested mount options:[]
I0415 12:19:22.368309       1 region.go:230] setting namespace sector size: 0
I0415 12:19:23.088288       1 region.go:240] setting pfn
I0415 12:19:46.007801       1 region.go:251] enabling namespace
I0415 12:19:46.008393       1 ndctl.go:77] Namespace pvc-8e-ef468f05f87cd4daf1fcf59e9dd68b43febee91fcb6310d5a4e1bcb5 created in region1
I0415 12:19:46.008580       1 pmd-ndctl.go:179] Namespace created: {"blockdev":"pmem1.1","dev":"namespace1.1","enabled":false,"id":1,"map":"dev","mode":"fsdax","name":"pvc-8e-ef468f05f87cd4daf1fcf59e9dd68b43febee91fcb6310d5a4e1bcb5","size":1591285383168,"uuid":"cfe45285-9de4-11eb-8e52-0a580a3c0446"}
I0415 12:19:46.008658       1 pmd-util.go:20] ClearDevice: path: /dev/pmem1.1 flush:false
I0415 12:19:46.008685       1 pmd-util.go:61] Zeroing 4 1k blocks at start of device: /dev/pmem1.1 Size 1591285383168
I0415 12:19:46.008737       1 exec.go:32] Executing: /bin/dd if=/dev/zero of=/dev/pmem1.1 bs=1024 count=4
I0415 12:19:46.050802       1 exec.go:45] /bin/dd: 4+0 records in
I0415 12:19:46.050814       1 exec.go:45] /bin/dd: 4+0 records out
I0415 12:19:46.050818       1 exec.go:45] /bin/dd: 4096 bytes (4.1 kB, 4.0 KiB) copied, 4.9027e-05 s, 83.5 MB/s
I0415 12:19:46.050843       1 exec.go:51] /bin/dd terminated, with 92 bytes of output and error <nil>
I0415 12:19:46.051015       1 controllerserver-node.go:281] Node CreateVolume: Record new volume as {pvc-8e-ef468f05f87cd4daf1fcf59e9dd68b43febee91fcb6310d5a4e1bcb5 1610612736000 map[deviceMode:direct eraseafter:false name:pvc-8eed9a43-3ec4-4b14-a0b0-41d8bad46989]}
I0415 12:19:46.051060       1 tracing.go:25] GRPC response: {"volume":{"accessible_topology":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}],"capacity_bytes":1610612736000,"volume_id":"pvc-8e-ef468f05f87cd4daf1fcf59e9dd68b43febee91fcb6310d5a4e1bcb5"}}
I0415 12:19:46.051494       1 connection.go:185] GRPC response: {"volume":{"accessible_topology":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}],"capacity_bytes":1610612736000,"volume_id":"pvc-8e-ef468f05f87cd4daf1fcf59e9dd68b43febee91fcb6310d5a4e1bcb5"}}
I0415 12:19:46.051598       1 connection.go:186] GRPC error: <nil>
I0415 12:19:46.051611       1 controller.go:762] create volume rep: {CapacityBytes:1610612736000 VolumeId:pvc-8e-ef468f05f87cd4daf1fcf59e9dd68b43febee91fcb6310d5a4e1bcb5 VolumeContext:map[] ContentSource:<nil> AccessibleTopology:[segments:<key:"pmem-csi.intel.com/node" value:"ocp1" > ] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0415 12:19:46.051654       1 controller.go:838] successfully created PV pvc-8eed9a43-3ec4-4b14-a0b0-41d8bad46989 for PVC server-data-cluster1-dc1-racka-sts-1 and csi volume name pvc-8e-ef468f05f87cd4daf1fcf59e9dd68b43febee91fcb6310d5a4e1bcb5
I0415 12:19:46.051659       1 controller.go:854] successfully created PV {GCEPersistentDisk:nil AWSElasticBlockStore:nil HostPath:nil Glusterfs:nil NFS:nil RBD:nil ISCSI:nil Cinder:nil CephFS:nil FC:nil Flocker:nil FlexVolume:nil AzureFile:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil PortworxVolume:nil ScaleIO:nil Local:nil StorageOS:nil CSI:&CSIPersistentVolumeSource{Driver:pmem-csi.intel.com,VolumeHandle:pvc-8e-ef468f05f87cd4daf1fcf59e9dd68b43febee91fcb6310d5a4e1bcb5,ReadOnly:false,FSType:xfs,VolumeAttributes:map[string]string{storage.kubernetes.io/csiProvisionerIdentity: 1618488538864-8081-pmem-csi.intel.com-ocp1,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,}}
I0415 12:19:46.051737       1 controller.go:1439] provision "default/server-data-cluster1-dc1-racka-sts-1" class "pmem-xfs": volume "pvc-8eed9a43-3ec4-4b14-a0b0-41d8bad46989" provisioned
I0415 12:19:46.051754       1 controller.go:1456] provision "default/server-data-cluster1-dc1-racka-sts-1" class "pmem-xfs": succeeded
I0415 12:19:46.051762       1 volume_store.go:154] Saving volume pvc-8eed9a43-3ec4-4b14-a0b0-41d8bad46989
I0415 12:19:46.052038       1 pmd-ndctl.go:168] Compensate for libndctl creating one alignment step smaller: increase size to 1611686477824
E0415 12:19:46.052104       1 ndctl.go:80] Namespace creation failure in region1: create namespace with size 1611686477824: not enough space
I0415 12:19:46.052144       1 region.go:202] region0: namespace size must align to interleave-width:6 * alignment:1073741824, force-align to 1617055186944
I0415 12:19:46.055532       1 volume_store.go:157] Volume pvc-8eed9a43-3ec4-4b14-a0b0-41d8bad46989 saved
I0415 12:19:46.055576       1 controller.go:1093] Claim processing succeeded, removing PVC 8eed9a43-3ec4-4b14-a0b0-41d8bad46989 from claims in progress
I0415 12:19:46.055618       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"server-data-cluster1-dc1-racka-sts-1", UID:"8eed9a43-3ec4-4b14-a0b0-41d8bad46989", APIVersion:"v1", ResourceVersion:"72039592", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-8eed9a43-3ec4-4b14-a0b0-41d8bad46989
I0415 12:19:46.854113       1 region.go:230] setting namespace sector size: 0
I0415 12:19:47.052655       1 tracing.go:19] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0415 12:19:47.052676       1 tracing.go:20] GRPC request: {}
I0415 12:19:47.052742       1 tracing.go:25] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
I0415 12:19:47.054105       1 tracing.go:19] GRPC call: /csi.v1.Node/NodeStageVolume
I0415 12:19:47.054118       1 tracing.go:20] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8eed9a43-3ec4-4b14-a0b0-41d8bad46989/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"storage.kubernetes.io/csiProvisionerIdentity":"1618488538864-8081-pmem-csi.intel.com-ocp1"},"volume_id":"pvc-8e-ef468f05f87cd4daf1fcf59e9dd68b43febee91fcb6310d5a4e1bcb5"}
I0415 12:19:47.054222       1 nodeserver.go:493] NodeStageVolume: VolumeID:pvc-8e-ef468f05f87cd4daf1fcf59e9dd68b43febee91fcb6310d5a4e1bcb5 Staging target path:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8eed9a43-3ec4-4b14-a0b0-41d8bad46989/globalmount Requested fsType:xfs Requested mount options:[]
I0415 12:19:47.574995       1 region.go:240] setting pfn
I0415 12:20:15.482150       1 region.go:251] enabling namespace
I0415 12:20:15.482675       1 ndctl.go:77] Namespace pvc-ae-09b3c2f812888d6ce86af361a25043858357775ec3ceb21999c4b6da created in region0
I0415 12:20:15.482866       1 pmd-ndctl.go:179] Namespace created: {"blockdev":"pmem0.2","dev":"namespace0.2","enabled":false,"id":2,"map":"dev","mode":"fsdax","name":"pvc-ae-09b3c2f812888d6ce86af361a25043858357775ec3ceb21999c4b6da","size":1590211641344,"uuid":"de7cb270-9de4-11eb-8e52-0a580a3c0446"}
I0415 12:20:15.483013       1 pmd-util.go:20] ClearDevice: path: /dev/pmem0.2 flush:false
I0415 12:20:15.483040       1 pmd-util.go:61] Zeroing 4 1k blocks at start of device: /dev/pmem0.2 Size 1590211641344
I0415 12:20:15.483082       1 exec.go:32] Executing: /bin/dd if=/dev/zero of=/dev/pmem0.2 bs=1024 count=4
I0415 12:20:15.484459       1 connection.go:185] GRPC response: {"volume":{"accessible_topology":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}],"capacity_bytes":1610612736000,"volume_id":"pvc-ae-09b3c2f812888d6ce86af361a25043858357775ec3ceb21999c4b6da"}}
I0415 12:20:15.484540       1 connection.go:186] GRPC error: <nil>
I0415 12:20:15.484554       1 controller.go:762] create volume rep: {CapacityBytes:1610612736000 VolumeId:pvc-ae-09b3c2f812888d6ce86af361a25043858357775ec3ceb21999c4b6da VolumeContext:map[] ContentSource:<nil> AccessibleTopology:[segments:<key:"pmem-csi.intel.com/node" value:"ocp1" > ] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0415 12:20:15.484626       1 controller.go:838] successfully created PV pvc-aebf96b1-1596-4da7-abca-8204215a9d1c for PVC server-data-cluster1-dc1-racka-sts-2 and csi volume name pvc-ae-09b3c2f812888d6ce86af361a25043858357775ec3ceb21999c4b6da
I0415 12:20:15.483852       1 exec.go:45] /bin/dd: 4+0 records in
I0415 12:20:15.483869       1 exec.go:45] /bin/dd: 4+0 records out
I0415 12:20:15.483891       1 exec.go:45] /bin/dd: 4096 bytes (4.1 kB, 4.0 KiB) copied, 4.2255e-05 s, 96.9 MB/s
I0415 12:20:15.483931       1 exec.go:51] /bin/dd terminated, with 92 bytes of output and error <nil>
I0415 12:20:15.484090       1 controllerserver-node.go:281] Node CreateVolume: Record new volume as {pvc-ae-09b3c2f812888d6ce86af361a25043858357775ec3ceb21999c4b6da 1610612736000 map[deviceMode:direct eraseafter:false name:pvc-aebf96b1-1596-4da7-abca-8204215a9d1c]}
I0415 12:20:15.484130       1 tracing.go:25] GRPC response: {"volume":{"accessible_topology":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}],"capacity_bytes":1610612736000,"volume_id":"pvc-ae-09b3c2f812888d6ce86af361a25043858357775ec3ceb21999c4b6da"}}
I0415 12:20:15.484638       1 controller.go:854] successfully created PV {GCEPersistentDisk:nil AWSElasticBlockStore:nil HostPath:nil Glusterfs:nil NFS:nil RBD:nil ISCSI:nil Cinder:nil CephFS:nil FC:nil Flocker:nil FlexVolume:nil AzureFile:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil PortworxVolume:nil ScaleIO:nil Local:nil StorageOS:nil CSI:&CSIPersistentVolumeSource{Driver:pmem-csi.intel.com,VolumeHandle:pvc-ae-09b3c2f812888d6ce86af361a25043858357775ec3ceb21999c4b6da,ReadOnly:false,FSType:xfs,VolumeAttributes:map[string]string{storage.kubernetes.io/csiProvisionerIdentity: 1618488538864-8081-pmem-csi.intel.com-ocp1,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,}}
I0415 12:20:15.484711       1 controller.go:1439] provision "default/server-data-cluster1-dc1-racka-sts-2" class "pmem-xfs": volume "pvc-aebf96b1-1596-4da7-abca-8204215a9d1c" provisioned
I0415 12:20:15.485289       1 pmd-ndctl.go:168] Compensate for libndctl creating one alignment step smaller: increase size to 1611686477824
E0415 12:20:15.485369       1 ndctl.go:80] Namespace creation failure in region1: create namespace with size 1611686477824: not enough space
I0415 12:20:15.484733       1 controller.go:1456] provision "default/server-data-cluster1-dc1-racka-sts-2" class "pmem-xfs": succeeded
I0415 12:20:15.484741       1 volume_store.go:154] Saving volume pvc-aebf96b1-1596-4da7-abca-8204215a9d1c
I0415 12:20:15.485417       1 region.go:202] region0: namespace size must align to interleave-width:6 * alignment:1073741824, force-align to 1617055186944
I0415 12:20:15.505088       1 volume_store.go:157] Volume pvc-aebf96b1-1596-4da7-abca-8204215a9d1c saved
I0415 12:20:15.505130       1 controller.go:1093] Claim processing succeeded, removing PVC aebf96b1-1596-4da7-abca-8204215a9d1c from claims in progress
I0415 12:20:15.505209       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"server-data-cluster1-dc1-racka-sts-2", UID:"aebf96b1-1596-4da7-abca-8204215a9d1c", APIVersion:"v1", ResourceVersion:"72039613", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-aebf96b1-1596-4da7-abca-8204215a9d1c
I0415 12:20:16.100062       1 tracing.go:19] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0415 12:20:16.100079       1 tracing.go:20] GRPC request: {}
I0415 12:20:16.100156       1 tracing.go:25] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
I0415 12:20:16.101070       1 tracing.go:19] GRPC call: /csi.v1.Node/NodeStageVolume
I0415 12:20:16.101080       1 tracing.go:20] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-aebf96b1-1596-4da7-abca-8204215a9d1c/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"storage.kubernetes.io/csiProvisionerIdentity":"1618488538864-8081-pmem-csi.intel.com-ocp1"},"volume_id":"pvc-ae-09b3c2f812888d6ce86af361a25043858357775ec3ceb21999c4b6da"}
I0415 12:20:16.101194       1 nodeserver.go:493] NodeStageVolume: VolumeID:pvc-ae-09b3c2f812888d6ce86af361a25043858357775ec3ceb21999c4b6da Staging target path:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-aebf96b1-1596-4da7-abca-8204215a9d1c/globalmount Requested fsType:xfs Requested mount options:[]
I0415 12:20:16.293375       1 region.go:230] setting namespace sector size: 0
I0415 12:20:16.997356       1 region.go:240] setting pfn
I0415 12:20:39.186733       1 region.go:251] enabling namespace
I0415 12:20:39.187305       1 ndctl.go:77] Namespace pvc-fb-37c6758341b5c204e84b9d84955d9d0097f983c83f68006ae39361f7 created in region0
I0415 12:20:39.187456       1 pmd-ndctl.go:179] Namespace created: {"blockdev":"pmem0.1","dev":"namespace0.1","enabled":false,"id":1,"map":"dev","mode":"fsdax","name":"pvc-fb-37c6758341b5c204e84b9d84955d9d0097f983c83f68006ae39361f7","size":1590211641344,"uuid":"f0063160-9de4-11eb-8e52-0a580a3c0446"}
I0415 12:20:39.187577       1 pmd-util.go:20] ClearDevice: path: /dev/pmem0.1 flush:false
I0415 12:20:39.187602       1 pmd-util.go:61] Zeroing 4 1k blocks at start of device: /dev/pmem0.1 Size 1590211641344
I0415 12:20:39.187650       1 exec.go:32] Executing: /bin/dd if=/dev/zero of=/dev/pmem0.1 bs=1024 count=4
I0415 12:20:39.188399       1 exec.go:45] /bin/dd: 4+0 records in
I0415 12:20:39.188410       1 exec.go:45] /bin/dd: 4+0 records out
I0415 12:20:39.188419       1 exec.go:45] /bin/dd: 4096 bytes (4.1 kB, 4.0 KiB) copied, 3.3612e-05 s, 122 MB/s
I0415 12:20:39.188474       1 exec.go:51] /bin/dd terminated, with 91 bytes of output and error <nil>
I0415 12:20:39.188615       1 controllerserver-node.go:281] Node CreateVolume: Record new volume as {pvc-fb-37c6758341b5c204e84b9d84955d9d0097f983c83f68006ae39361f7 1610612736000 map[deviceMode:direct eraseafter:false name:pvc-fbce7559-e360-4a20-ac37-d52b320d78f5]}
I0415 12:20:39.188655       1 tracing.go:25] GRPC response: {"volume":{"accessible_topology":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}],"capacity_bytes":1610612736000,"volume_id":"pvc-fb-37c6758341b5c204e84b9d84955d9d0097f983c83f68006ae39361f7"}}
I0415 12:20:39.189032       1 connection.go:185] GRPC response: {"volume":{"accessible_topology":[{"segments":{"pmem-csi.intel.com/node":"ocp1"}}],"capacity_bytes":1610612736000,"volume_id":"pvc-fb-37c6758341b5c204e84b9d84955d9d0097f983c83f68006ae39361f7"}}
I0415 12:20:39.189137       1 connection.go:186] GRPC error: <nil>
I0415 12:20:39.189150       1 controller.go:762] create volume rep: {CapacityBytes:1610612736000 VolumeId:pvc-fb-37c6758341b5c204e84b9d84955d9d0097f983c83f68006ae39361f7 VolumeContext:map[] ContentSource:<nil> AccessibleTopology:[segments:<key:"pmem-csi.intel.com/node" value:"ocp1" > ] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0415 12:20:39.189190       1 controller.go:838] successfully created PV pvc-fbce7559-e360-4a20-ac37-d52b320d78f5 for PVC server-data-cluster1-dc1-racka-sts-3 and csi volume name pvc-fb-37c6758341b5c204e84b9d84955d9d0097f983c83f68006ae39361f7
I0415 12:20:39.189196       1 controller.go:854] successfully created PV {GCEPersistentDisk:nil AWSElasticBlockStore:nil HostPath:nil Glusterfs:nil NFS:nil RBD:nil ISCSI:nil Cinder:nil CephFS:nil FC:nil Flocker:nil FlexVolume:nil AzureFile:nil VsphereVolume:nil Quobyte:nil AzureDisk:nil PhotonPersistentDisk:nil PortworxVolume:nil ScaleIO:nil Local:nil StorageOS:nil CSI:&CSIPersistentVolumeSource{Driver:pmem-csi.intel.com,VolumeHandle:pvc-fb-37c6758341b5c204e84b9d84955d9d0097f983c83f68006ae39361f7,ReadOnly:false,FSType:xfs,VolumeAttributes:map[string]string{storage.kubernetes.io/csiProvisionerIdentity: 1618488538864-8081-pmem-csi.intel.com-ocp1,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,}}
I0415 12:20:39.189262       1 controller.go:1439] provision "default/server-data-cluster1-dc1-racka-sts-3" class "pmem-xfs": volume "pvc-fbce7559-e360-4a20-ac37-d52b320d78f5" provisioned
I0415 12:20:39.189284       1 controller.go:1456] provision "default/server-data-cluster1-dc1-racka-sts-3" class "pmem-xfs": succeeded
I0415 12:20:39.189290       1 volume_store.go:154] Saving volume pvc-fbce7559-e360-4a20-ac37-d52b320d78f5
I0415 12:20:39.189685       1 exec.go:32] Executing: /usr/bin/file -bsL /dev/pmem1.2
I0415 12:20:39.190629       1 exec.go:32] Executing: /usr/bin/file -bsL /dev/pmem1.1
I0415 12:20:39.191931       1 exec.go:32] Executing: /usr/bin/file -bsL /dev/pmem0.2
I0415 12:20:39.193264       1 volume_store.go:157] Volume pvc-fbce7559-e360-4a20-ac37-d52b320d78f5 saved
I0415 12:20:39.193322       1 controller.go:1093] Claim processing succeeded, removing PVC fbce7559-e360-4a20-ac37-d52b320d78f5 from claims in progress
I0415 12:20:39.193349       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"server-data-cluster1-dc1-racka-sts-3", UID:"fbce7559-e360-4a20-ac37-d52b320d78f5", APIVersion:"v1", ResourceVersion:"72039645", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-fbce7559-e360-4a20-ac37-d52b320d78f5
I0415 12:20:39.250816       1 exec.go:45] /usr/bin/file: data
I0415 12:20:39.251015       1 exec.go:51] /usr/bin/file terminated, with 5 bytes of output and error <nil>
I0415 12:20:39.251084       1 exec.go:32] Executing: /sbin/mkfs.xfs -b size=4096 -m reflink=0 -f /dev/pmem1.2
I0415 12:20:39.251290       1 exec.go:45] /usr/bin/file: data
I0415 12:20:39.251576       1 exec.go:51] /usr/bin/file terminated, with 5 bytes of output and error <nil>
I0415 12:20:39.251617       1 exec.go:32] Executing: /sbin/mkfs.xfs -b size=4096 -m reflink=0 -f /dev/pmem1.1
I0415 12:20:39.251792       1 exec.go:45] /usr/bin/file: data
I0415 12:20:39.252072       1 exec.go:51] /usr/bin/file terminated, with 5 bytes of output and error <nil>
I0415 12:20:39.252103       1 exec.go:32] Executing: /sbin/mkfs.xfs -b size=4096 -m reflink=0 -f /dev/pmem0.2
I0415 12:20:40.131131       1 tracing.go:19] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0415 12:20:40.131146       1 tracing.go:20] GRPC request: {}
I0415 12:20:40.131212       1 tracing.go:25] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
I0415 12:20:40.149993       1 tracing.go:19] GRPC call: /csi.v1.Node/NodeStageVolume
I0415 12:20:40.150004       1 tracing.go:20] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fbce7559-e360-4a20-ac37-d52b320d78f5/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"storage.kubernetes.io/csiProvisionerIdentity":"1618488538864-8081-pmem-csi.intel.com-ocp1"},"volume_id":"pvc-fb-37c6758341b5c204e84b9d84955d9d0097f983c83f68006ae39361f7"}
I0415 12:20:40.150095       1 nodeserver.go:493] NodeStageVolume: VolumeID:pvc-fb-37c6758341b5c204e84b9d84955d9d0097f983c83f68006ae39361f7 Staging target path:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fbce7559-e360-4a20-ac37-d52b320d78f5/globalmount Requested fsType:xfs Requested mount options:[]
I0415 12:20:40.151512       1 exec.go:32] Executing: /usr/bin/file -bsL /dev/pmem0.1
I0415 12:20:40.159264       1 exec.go:45] /usr/bin/file: data
I0415 12:20:40.159535       1 exec.go:51] /usr/bin/file terminated, with 5 bytes of output and error <nil>
I0415 12:20:40.159591       1 exec.go:32] Executing: /sbin/mkfs.xfs -b size=4096 -m reflink=0 -f /dev/pmem0.1
I0415 12:20:40.761721       1 exec.go:45] /sbin/mkfs.xfs: meta-data=/dev/pmem0.2           isize=512    agcount=4, agsize=97058816 blks
I0415 12:20:40.761738       1 exec.go:45] /sbin/mkfs.xfs:          =                       sectsz=4096  attr=2, projid32bit=1
I0415 12:20:40.761741       1 exec.go:45] /sbin/mkfs.xfs:          =                       crc=1        finobt=1, sparse=1, rmapbt=0
I0415 12:20:40.761745       1 exec.go:45] /sbin/mkfs.xfs:          =                       reflink=0
I0415 12:20:40.761748       1 exec.go:45] /sbin/mkfs.xfs: data     =                       bsize=4096   blocks=388235264, imaxpct=5
I0415 12:20:40.761751       1 exec.go:45] /sbin/mkfs.xfs:          =                       sunit=0      swidth=0 blks
I0415 12:20:40.761753       1 exec.go:45] /sbin/mkfs.xfs: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
I0415 12:20:40.761756       1 exec.go:45] /sbin/mkfs.xfs: log      =internal log           bsize=4096   blocks=189568, version=2
I0415 12:20:40.761758       1 exec.go:45] /sbin/mkfs.xfs:          =                       sectsz=4096  sunit=1 blks, lazy-count=1
I0415 12:20:40.761771       1 exec.go:45] /sbin/mkfs.xfs: realtime =none                   extsz=4096   blocks=0, rtextents=0
I0415 12:20:40.761911       1 exec.go:51] /sbin/mkfs.xfs terminated, with 676 bytes of output and error <nil>
I0415 12:20:40.762077       1 exec.go:32] Executing: /bin/mount -c -o dax /dev/pmem0.2 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-aebf96b1-1596-4da7-abca-8204215a9d1c/globalmount
I0415 12:20:40.959670       1 exec.go:51] /bin/mount terminated, with 0 bytes of output and error <nil>
I0415 12:20:40.959719       1 tracing.go:25] GRPC response: {}
I0415 12:20:40.962301       1 tracing.go:19] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0415 12:20:40.962311       1 tracing.go:20] GRPC request: {}
I0415 12:20:40.962342       1 tracing.go:25] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
I0415 12:20:40.963221       1 tracing.go:19] GRPC call: /csi.v1.Node/NodePublishVolume
I0415 12:20:40.963231       1 tracing.go:20] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-aebf96b1-1596-4da7-abca-8204215a9d1c/globalmount","target_path":"/var/lib/kubelet/pods/3049d862-29f3-417c-8c7d-f95ce80af76a/volumes/kubernetes.io~csi/pvc-aebf96b1-1596-4da7-abca-8204215a9d1c/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"cluster1-dc1-racka-sts-2","csi.storage.k8s.io/pod.namespace":"default","csi.storage.k8s.io/pod.uid":"3049d862-29f3-417c-8c7d-f95ce80af76a","csi.storage.k8s.io/serviceAccount.name":"cass-operator","storage.kubernetes.io/csiProvisionerIdentity":"1618488538864-8081-pmem-csi.intel.com-ocp1"},"volume_id":"pvc-ae-09b3c2f812888d6ce86af361a25043858357775ec3ceb21999c4b6da"}
I0415 12:20:40.963353       1 nodeserver.go:140] NodePublishVolume request: VolID:pvc-ae-09b3c2f812888d6ce86af361a25043858357775ec3ceb21999c4b6da targetpath:/var/lib/kubelet/pods/3049d862-29f3-417c-8c7d-f95ce80af76a/volumes/kubernetes.io~csi/pvc-aebf96b1-1596-4da7-abca-8204215a9d1c/mount sourcepath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-aebf96b1-1596-4da7-abca-8204215a9d1c/globalmount readonly:false mount flags:[] fsType:xfs context:map["csi.storage.k8s.io/ephemeral":"false" "csi.storage.k8s.io/pod.name":"cluster1-dc1-racka-sts-2" "csi.storage.k8s.io/pod.namespace":"default" "csi.storage.k8s.io/pod.uid":"3049d862-29f3-417c-8c7d-f95ce80af76a" "csi.storage.k8s.io/serviceAccount.name":"cass-operator" "storage.kubernetes.io/csiProvisionerIdentity":"1618488538864-8081-pmem-csi.intel.com-ocp1"]
I0415 12:20:41.050957       1 exec.go:32] Executing: /bin/mount -c -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-aebf96b1-1596-4da7-abca-8204215a9d1c/globalmount /var/lib/kubelet/pods/3049d862-29f3-417c-8c7d-f95ce80af76a/volumes/kubernetes.io~csi/pvc-aebf96b1-1596-4da7-abca-8204215a9d1c/mount
I0415 12:20:41.052380       1 exec.go:51] /bin/mount terminated, with 0 bytes of output and error <nil>
I0415 12:20:41.052410       1 tracing.go:25] GRPC response: {}
I0415 12:20:41.663541       1 exec.go:45] /sbin/mkfs.xfs: meta-data=/dev/pmem0.1           isize=512    agcount=4, agsize=97058816 blks
I0415 12:20:41.663578       1 exec.go:45] /sbin/mkfs.xfs:          =                       sectsz=4096  attr=2, projid32bit=1
I0415 12:20:41.663581       1 exec.go:45] /sbin/mkfs.xfs:          =                       crc=1        finobt=1, sparse=1, rmapbt=0
I0415 12:20:41.663584       1 exec.go:45] /sbin/mkfs.xfs:          =                       reflink=0
I0415 12:20:41.663587       1 exec.go:45] /sbin/mkfs.xfs: data     =                       bsize=4096   blocks=388235264, imaxpct=5
I0415 12:20:41.663590       1 exec.go:45] /sbin/mkfs.xfs:          =                       sunit=0      swidth=0 blks
I0415 12:20:41.663593       1 exec.go:45] /sbin/mkfs.xfs: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
I0415 12:20:41.663596       1 exec.go:45] /sbin/mkfs.xfs: log      =internal log           bsize=4096   blocks=189568, version=2
I0415 12:20:41.663598       1 exec.go:45] /sbin/mkfs.xfs:          =                       sectsz=4096  sunit=1 blks, lazy-count=1
I0415 12:20:41.663601       1 exec.go:45] /sbin/mkfs.xfs: realtime =none                   extsz=4096   blocks=0, rtextents=0
I0415 12:20:41.663695       1 exec.go:51] /sbin/mkfs.xfs terminated, with 676 bytes of output and error <nil>
I0415 12:20:41.663764       1 exec.go:32] Executing: /bin/mount -c -o dax /dev/pmem0.1 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fbce7559-e360-4a20-ac37-d52b320d78f5/globalmount
I0415 12:20:41.850490       1 exec.go:51] /bin/mount terminated, with 0 bytes of output and error <nil>
I0415 12:20:41.850522       1 tracing.go:25] GRPC response: {}
I0415 12:20:41.854121       1 tracing.go:19] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0415 12:20:41.854130       1 tracing.go:20] GRPC request: {}
I0415 12:20:41.854161       1 tracing.go:25] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
I0415 12:20:41.856372       1 tracing.go:19] GRPC call: /csi.v1.Node/NodePublishVolume
I0415 12:20:41.856392       1 tracing.go:20] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fbce7559-e360-4a20-ac37-d52b320d78f5/globalmount","target_path":"/var/lib/kubelet/pods/39fa9cf2-98db-4de9-9331-106cafa9f8e7/volumes/kubernetes.io~csi/pvc-fbce7559-e360-4a20-ac37-d52b320d78f5/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"cluster1-dc1-racka-sts-3","csi.storage.k8s.io/pod.namespace":"default","csi.storage.k8s.io/pod.uid":"39fa9cf2-98db-4de9-9331-106cafa9f8e7","csi.storage.k8s.io/serviceAccount.name":"cass-operator","storage.kubernetes.io/csiProvisionerIdentity":"1618488538864-8081-pmem-csi.intel.com-ocp1"},"volume_id":"pvc-fb-37c6758341b5c204e84b9d84955d9d0097f983c83f68006ae39361f7"}
I0415 12:20:41.856496       1 nodeserver.go:140] NodePublishVolume request: VolID:pvc-fb-37c6758341b5c204e84b9d84955d9d0097f983c83f68006ae39361f7 targetpath:/var/lib/kubelet/pods/39fa9cf2-98db-4de9-9331-106cafa9f8e7/volumes/kubernetes.io~csi/pvc-fbce7559-e360-4a20-ac37-d52b320d78f5/mount sourcepath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fbce7559-e360-4a20-ac37-d52b320d78f5/globalmount readonly:false mount flags:[] fsType:xfs context:map["csi.storage.k8s.io/ephemeral":"false" "csi.storage.k8s.io/pod.name":"cluster1-dc1-racka-sts-3" "csi.storage.k8s.io/pod.namespace":"default" "csi.storage.k8s.io/pod.uid":"39fa9cf2-98db-4de9-9331-106cafa9f8e7" "csi.storage.k8s.io/serviceAccount.name":"cass-operator" "storage.kubernetes.io/csiProvisionerIdentity":"1618488538864-8081-pmem-csi.intel.com-ocp1"]
I0415 12:20:41.858825       1 exec.go:32] Executing: /bin/mount -c -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fbce7559-e360-4a20-ac37-d52b320d78f5/globalmount /var/lib/kubelet/pods/39fa9cf2-98db-4de9-9331-106cafa9f8e7/volumes/kubernetes.io~csi/pvc-fbce7559-e360-4a20-ac37-d52b320d78f5/mount
I0415 12:20:41.860665       1 exec.go:51] /bin/mount terminated, with 0 bytes of output and error <nil>
I0415 12:20:41.860679       1 tracing.go:25] GRPC response: {}
I0415 12:20:43.766990       1 exec.go:45] /sbin/mkfs.xfs: meta-data=/dev/pmem1.2           isize=512    agcount=4, agsize=97124352 blks
I0415 12:20:43.767013       1 exec.go:45] /sbin/mkfs.xfs:          =                       sectsz=4096  attr=2, projid32bit=1
I0415 12:20:43.767016       1 exec.go:45] /sbin/mkfs.xfs:          =                       crc=1        finobt=1, sparse=1, rmapbt=0
I0415 12:20:43.767019       1 exec.go:45] /sbin/mkfs.xfs:          =                       reflink=0
I0415 12:20:43.767022       1 exec.go:45] /sbin/mkfs.xfs: data     =                       bsize=4096   blocks=388497408, imaxpct=5
I0415 12:20:43.767027       1 exec.go:45] /sbin/mkfs.xfs:          =                       sunit=0      swidth=0 blks
I0415 12:20:43.767030       1 exec.go:45] /sbin/mkfs.xfs: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
I0415 12:20:43.767032       1 exec.go:45] /sbin/mkfs.xfs: log      =internal log           bsize=4096   blocks=189696, version=2
I0415 12:20:43.767034       1 exec.go:45] /sbin/mkfs.xfs:          =                       sectsz=4096  sunit=1 blks, lazy-count=1
I0415 12:20:43.767037       1 exec.go:45] /sbin/mkfs.xfs: realtime =none                   extsz=4096   blocks=0, rtextents=0
I0415 12:20:43.767151       1 exec.go:51] /sbin/mkfs.xfs terminated, with 676 bytes of output and error <nil>
I0415 12:20:43.767227       1 exec.go:32] Executing: /bin/mount -c -o dax /dev/pmem1.2 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8/globalmount
I0415 12:20:43.851872       1 exec.go:45] /sbin/mkfs.xfs: meta-data=/dev/pmem1.1           isize=512    agcount=4, agsize=97124352 blks
I0415 12:20:43.851889       1 exec.go:45] /sbin/mkfs.xfs:          =                       sectsz=4096  attr=2, projid32bit=1
I0415 12:20:43.851892       1 exec.go:45] /sbin/mkfs.xfs:          =                       crc=1        finobt=1, sparse=1, rmapbt=0
I0415 12:20:43.851895       1 exec.go:45] /sbin/mkfs.xfs:          =                       reflink=0
I0415 12:20:43.851897       1 exec.go:45] /sbin/mkfs.xfs: data     =                       bsize=4096   blocks=388497408, imaxpct=5
I0415 12:20:43.851900       1 exec.go:45] /sbin/mkfs.xfs:          =                       sunit=0      swidth=0 blks
I0415 12:20:43.851902       1 exec.go:45] /sbin/mkfs.xfs: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
I0415 12:20:43.851905       1 exec.go:45] /sbin/mkfs.xfs: log      =internal log           bsize=4096   blocks=189696, version=2
I0415 12:20:43.851907       1 exec.go:45] /sbin/mkfs.xfs:          =                       sectsz=4096  sunit=1 blks, lazy-count=1
I0415 12:20:43.851910       1 exec.go:45] /sbin/mkfs.xfs: realtime =none                   extsz=4096   blocks=0, rtextents=0
I0415 12:20:43.852042       1 exec.go:51] /sbin/mkfs.xfs terminated, with 676 bytes of output and error <nil>
I0415 12:20:43.852113       1 exec.go:32] Executing: /bin/mount -c -o dax /dev/pmem1.1 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8eed9a43-3ec4-4b14-a0b0-41d8bad46989/globalmount
I0415 12:20:43.870233       1 exec.go:51] /bin/mount terminated, with 0 bytes of output and error <nil>
I0415 12:20:43.870259       1 tracing.go:25] GRPC response: {}
I0415 12:20:43.872938       1 tracing.go:19] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0415 12:20:43.872951       1 tracing.go:20] GRPC request: {}
I0415 12:20:43.873003       1 tracing.go:25] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
I0415 12:20:43.873910       1 tracing.go:19] GRPC call: /csi.v1.Node/NodePublishVolume
I0415 12:20:43.873918       1 tracing.go:20] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8/globalmount","target_path":"/var/lib/kubelet/pods/e98ed649-9002-451e-8b99-d9be1ee6bf21/volumes/kubernetes.io~csi/pvc-a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"cluster1-dc1-racka-sts-0","csi.storage.k8s.io/pod.namespace":"default","csi.storage.k8s.io/pod.uid":"e98ed649-9002-451e-8b99-d9be1ee6bf21","csi.storage.k8s.io/serviceAccount.name":"cass-operator","storage.kubernetes.io/csiProvisionerIdentity":"1618488538864-8081-pmem-csi.intel.com-ocp1"},"volume_id":"pvc-a6-d3bf70885af9aa7fe39e0007671b912cf9749730ad814c8694225c2a"}
I0415 12:20:43.874011       1 nodeserver.go:140] NodePublishVolume request: VolID:pvc-a6-d3bf70885af9aa7fe39e0007671b912cf9749730ad814c8694225c2a targetpath:/var/lib/kubelet/pods/e98ed649-9002-451e-8b99-d9be1ee6bf21/volumes/kubernetes.io~csi/pvc-a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8/mount sourcepath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8/globalmount readonly:false mount flags:[] fsType:xfs context:map["csi.storage.k8s.io/ephemeral":"false" "csi.storage.k8s.io/pod.name":"cluster1-dc1-racka-sts-0" "csi.storage.k8s.io/pod.namespace":"default" "csi.storage.k8s.io/pod.uid":"e98ed649-9002-451e-8b99-d9be1ee6bf21" "csi.storage.k8s.io/serviceAccount.name":"cass-operator" "storage.kubernetes.io/csiProvisionerIdentity":"1618488538864-8081-pmem-csi.intel.com-ocp1"]
I0415 12:20:43.876500       1 exec.go:32] Executing: /bin/mount -c -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8/globalmount /var/lib/kubelet/pods/e98ed649-9002-451e-8b99-d9be1ee6bf21/volumes/kubernetes.io~csi/pvc-a63bc7db-3cbd-404d-9c9d-d7b1969cbaa8/mount
I0415 12:20:43.877954       1 exec.go:51] /bin/mount terminated, with 0 bytes of output and error <nil>
I0415 12:20:43.877981       1 tracing.go:25] GRPC response: {}
I0415 12:20:43.881847       1 exec.go:51] /bin/mount terminated, with 0 bytes of output and error <nil>
I0415 12:20:43.881867       1 tracing.go:25] GRPC response: {}
I0415 12:20:43.884149       1 tracing.go:19] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0415 12:20:43.884160       1 tracing.go:20] GRPC request: {}
I0415 12:20:43.884195       1 tracing.go:25] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
I0415 12:20:43.950347       1 tracing.go:19] GRPC call: /csi.v1.Node/NodePublishVolume
I0415 12:20:43.950371       1 tracing.go:20] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8eed9a43-3ec4-4b14-a0b0-41d8bad46989/globalmount","target_path":"/var/lib/kubelet/pods/b0076a34-dd7d-46b4-af53-d3934e76dd3d/volumes/kubernetes.io~csi/pvc-8eed9a43-3ec4-4b14-a0b0-41d8bad46989/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"cluster1-dc1-racka-sts-1","csi.storage.k8s.io/pod.namespace":"default","csi.storage.k8s.io/pod.uid":"b0076a34-dd7d-46b4-af53-d3934e76dd3d","csi.storage.k8s.io/serviceAccount.name":"cass-operator","storage.kubernetes.io/csiProvisionerIdentity":"1618488538864-8081-pmem-csi.intel.com-ocp1"},"volume_id":"pvc-8e-ef468f05f87cd4daf1fcf59e9dd68b43febee91fcb6310d5a4e1bcb5"}
I0415 12:20:43.950530       1 nodeserver.go:140] NodePublishVolume request: VolID:pvc-8e-ef468f05f87cd4daf1fcf59e9dd68b43febee91fcb6310d5a4e1bcb5 targetpath:/var/lib/kubelet/pods/b0076a34-dd7d-46b4-af53-d3934e76dd3d/volumes/kubernetes.io~csi/pvc-8eed9a43-3ec4-4b14-a0b0-41d8bad46989/mount sourcepath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8eed9a43-3ec4-4b14-a0b0-41d8bad46989/globalmount readonly:false mount flags:[] fsType:xfs context:map["csi.storage.k8s.io/ephemeral":"false" "csi.storage.k8s.io/pod.name":"cluster1-dc1-racka-sts-1" "csi.storage.k8s.io/pod.namespace":"default" "csi.storage.k8s.io/pod.uid":"b0076a34-dd7d-46b4-af53-d3934e76dd3d" "csi.storage.k8s.io/serviceAccount.name":"cass-operator" "storage.kubernetes.io/csiProvisionerIdentity":"1618488538864-8081-pmem-csi.intel.com-ocp1"]
I0415 12:20:43.952679       1 exec.go:32] Executing: /bin/mount -c -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8eed9a43-3ec4-4b14-a0b0-41d8bad46989/globalmount /var/lib/kubelet/pods/b0076a34-dd7d-46b4-af53-d3934e76dd3d/volumes/kubernetes.io~csi/pvc-8eed9a43-3ec4-4b14-a0b0-41d8bad46989/mount
I0415 12:20:43.954157       1 exec.go:51] /bin/mount terminated, with 0 bytes of output and error <nil>
I0415 12:20:43.954174       1 tracing.go:25] GRPC response: {}

@pohly
Copy link
Contributor

pohly commented Apr 15, 2021

Overall, as you already found out, there is space overhead in the ndctl lower layers which always gives us a smaller volume than we ask for, so we compensate by asking for more. That also causes a side issue that I tried to document in #399: it is difficult to satisfy the CSI spec requirement about exact allocation because we can't.

We don't need exact allocations. Kubernetes only requests a minimal size, never a maximum size, so creating some larger volume than requested is always fine in practice.

@krisiasty
Copy link
Author

krisiasty commented Apr 15, 2021

We don't need exact allocations. Kubernetes only requests a minimal size, never a maximum size, so creating some larger volume than requested is always fine in practice.

Actually, the CSI spec says the request has two fields, required_bytes and limit_bytes, but I think that in practice the latter is rarely used, so the driver has to provide at least required_bytes or more. The spec also says that for an exact volume size both parameters should be equal.
Anyway, the current behaviour where we get a smaller volume than requested is obviously out of spec.

@okartau
Copy link
Contributor

okartau commented Apr 15, 2021

The spec also says that for exact volume size both parameters should be equal.

That's exactly the point I recorded in #399: if a test writer wanted to follow the specification exactly, there would be plenty of sizes for which an allocation test would prove pmem-csi not spec-compliant, for example all non-aligned sizes. In practice we are saved by the fact that exact allocation is not used.

The current issue of creating a smaller volume than requested is, of course, clearly a bug.

I looked up the values in the posted log (thanks, very helpful!):
The volume is configured as 1500 GiB, and that's what is requested.
We add 1x the alignment size (a predefined 1 GiB in the code) to that; that's our method of compensating for getting a smaller size back.
This is where we go wrong, as we need much more.
Then we call CreateNamespace, and there is another correction upwards in ndctl/region.go which rounds the request up to a multiple of 1 GiB x 6, based on the requirement that the size be aligned to alignment x interleave_ways. That grows the request to 1506 GiB, which we pass to libndctl.
libndctl then allocates a namespace of 1482 GiB, i.e. 24 GiB less than we asked for.
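For reference, the arithmetic above can be reproduced with a short Go calculation (a minimal sketch; the 1 GiB compensation, 1 GiB alignment, and 6 interleave ways are taken from this traced case, not from the actual pmem-csi code paths):

package main

import "fmt"

const GiB = int64(1) << 30

func main() {
	requested := 1500 * GiB // size requested by the PVC
	alignment := 1 * GiB    // static compensation unit and namespace alignment
	ways := int64(6)        // interleave ways of the region

	withCompensation := requested + alignment // 1501 GiB

	// ndctl/region.go rounds up to a multiple of alignment * interleave_ways.
	unit := alignment * ways
	aligned := (withCompensation + unit - 1) / unit * unit

	fmt.Println(aligned / GiB)      // 1506, the size passed to libndctl
	fmt.Println(aligned/GiB - 1482) // 24 GiB lost to overhead in the traced case
}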

Our "size compensate" method is not scalable, we use static 1GiB there always regardless of request size, It was introduced based on false hope 1GiB is quite big, and overhead is small (thats written in a comment), then it was tested with small size cases only. Actually (and it was known the overhead is caused by mapping), the size of request will affect the overhead (point which I missed during that time).
To fix:
We should make that compensation size dynamic based on request. Overall, this is all not pretty, but can we do better?
One can not ask ndctl "what should I ask to get N allocated", you have to ask something and then you get something.
One method could be to repeatedly allocate with growing request until we get wanted size, but that does not seem right (?).
Seems one method we can do is to introduce scaled compensation and test it with different HW configurations and request sizes.
We need to find out what parameters contribute to growth of overhead (in addition to request size), if any.
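One possible shape for such a scaled compensation (a rough sketch only; the 64 B per 4 KiB block is the documented fsdax map entry size, while the 68 B value is just an assumed safety margin that the experiments described above would have to validate):

package main

import "fmt"

const (
	GiB       = int64(1) << 30
	blockSize = int64(4096)
	// Assumed cost per 4 KiB block: 64 B for the documented fsdax map entry
	// plus a small safety margin; the exact value is what the proposed
	// experiments would need to establish.
	perBlockOverhead = int64(68)
)

// compensatedRequest estimates what to ask libndctl for so that the resulting
// namespace is at least `required` bytes (hypothetical helper, not pmem-csi code).
func compensatedRequest(required, alignment, interleaveWays int64) int64 {
	blocks := (required + blockSize - 1) / blockSize
	withOverhead := required + blocks*perBlockOverhead
	unit := alignment * interleaveWays
	return (withOverhead + unit - 1) / unit * unit
}

func main() {
	// With a 68 B margin, a 1500 GiB request grows to 1530 GiB instead of 1506 GiB.
	fmt.Println(compensatedRequest(1500*GiB, 1*GiB, 6) / GiB)
}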

@krisiasty
Copy link
Author

krisiasty commented Apr 15, 2021

For reference, I have also opened an issue on pmem/ndctl: pmem/ndctl#167
If this gets fixed someday in the future, the whole compensation here will be unnecessary, but it's not clear if/when that will happen.

As for the algorithm: according to the documentation, there is namespace configuration data (I'm not sure about its size), then the map with the PFNs (which should be metadata_bytes = (size_bytes / 4096) * 64), probably aligned, then the actual space for user data (aligned again?). But the numbers do not match reality: when I request a 1500 GiB + 24000 MiB volume, I'm still getting less than 1500 GiB of usable space, so either the structure size is larger than the documented 64 B per 4 KiB, or there is additional alignment involved which is not documented there.
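The 24000 MiB figure in that experiment follows directly from the documented formula (a quick sanity check; only the 64 B per 4 KiB map entry from the documentation is assumed):

package main

import "fmt"

const (
	GiB = int64(1) << 30
	MiB = int64(1) << 20
)

func main() {
	size := 1500 * GiB
	metadata := (size / 4096) * 64 // 64 B of map metadata per 4 KiB block
	fmt.Println(metadata / MiB)    // 24000 MiB, yet 1500 GiB + 24000 MiB is still not enough
}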

@okartau okartau added the bug Something isn't working label Apr 16, 2021
@okartau
Copy link
Contributor

okartau commented Apr 16, 2021

The math from the traced case (1506 GiB asked, 1482 GiB result) shows that the actual overhead per 4 KiB block is approx. 65.3 bytes (1506 GiB contains 394788864 4 KiB blocks). Still open: does the extra margin come per 4 KiB unit, per some other unit, or is it a single fixed amount?

In practice, can we use the above "additional margin" formula with a value of 64 + safety_margin, like 66 or 68?
After we get some more evidence on the extra-margin-per-unit question, we can make it more exact.

The extra-margin dynamics can be determined with a few experiments using various sizes. I think the trials can be made with the command-line ndctl, which should show the same results as pmem-csi allocations via libndctl.

@okartau
Copy link
Contributor

okartau commented Apr 16, 2021

I made a few example allocations with the overhead calculated, using a host I have readily available. My trials remain smaller, but we want to cover the lower end as well, so they should help.
Alignment likely contributes, so I used 2M and 1G (BTW, pmem-csi always uses 1G because historically that was the only value that worked without issues in the early days; that condition may have changed by now).

On a host with 4x128 DIMMs, interleaved to form 2x256 regions:
-S 120 Gi -a 2M: result: 118.12 GiB, delta 1.88 GiB, margin-per-4k-unit: 64.2 B
-S 120 Gi -a 1G: result: 118 GiB, delta 2 GiB, margin-per-4k-unit: 68.3 B
-S 240 Gi -a 2M: result: 236.25 GiB, delta 3.75 GiB, margin-per-4k-unit: 64 B
-S 240 Gi -a 1G: result: 236 GiB, delta 4 GiB, margin-per-4k-unit: 68.3 B
Adding 2 bigger cases from above, on a host with 6x512 DIMMs, interleaved:
-S 1500 Gi -a 2M: result: 1476.56 GiB, delta 23.44 GiB, margin-per-4k-unit: 64.0068... B
-S 1506 Gi -a 1G: result: 1482 GiB, delta 24 GiB, margin-per-4k-unit: 65.3 B

The 2M-aligned cases show an overhead of exactly 64 B or very slightly over 64 B; the 1G-aligned cases show 66-69 B of overhead per 4 KiB block.
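For transparency, the margin-per-4k-unit numbers above can be reproduced with this small helper (not part of pmem-csi; it simply divides the lost bytes by the number of 4 KiB blocks in the request):

package main

import "fmt"

const GiB = float64(1 << 30)

// marginPer4k returns the observed overhead in bytes per 4 KiB block of the request.
func marginPer4k(requestedGiB, resultGiB float64) float64 {
	lostBytes := (requestedGiB - resultGiB) * GiB
	blocks := requestedGiB * GiB / 4096
	return lostBytes / blocks
}

func main() {
	fmt.Printf("%.1f\n", marginPer4k(240, 236.25)) // ~64.0 (2M alignment)
	fmt.Printf("%.1f\n", marginPer4k(240, 236))    // ~68.3 (1G alignment)
	fmt.Printf("%.1f\n", marginPer4k(1506, 1482))  // ~65.3 (the traced case)
}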

@pohly
Copy link
Contributor

pohly commented Apr 16, 2021

Instead of using trial and error, can you work with Dan Williams in pmem/ndctl#167 to come up with an exact specification of the overhead? Ideally, they would provide an implementation in libndctl itself.

@pohly
Copy link
Contributor

pohly commented Jun 11, 2021

I'm requesting four 1480GiB volumes per node in direct mode, but I'm getting 1458GiB volumes instead.

Here's a perhaps controversial thought: when someone asks for a volume of size 1480GiB, what do they expect? Being able to memory-map a file of size 1480GiB? Or a block device of size 1480GiB?

My point is this: there's always some overhead that leads to less usable space than requested. Getting a block device that is smaller than requested is perhaps unexpected, but not that different from the extra overhead caused by the filesystem. Whoever deploys an app must be prepared for this.

They might even be in a better position to estimate that overhead beforehand than PMEM-CSI. For example, they might have run experiments with the exact cluster configuration and kernel that they will use and thus might know what the overhead will be.

So here's food for thought: what if we simply document that volumes in direct mode have this additional overhead and not try to compensate for it?

The size that we report for the volume can be the size of the namespace. That's not wrong. Nothing specifies that it has to be the size of the block device that is later going to be mounted.

@pohly
Copy link
Contributor

pohly commented Jun 11, 2021

Where this thought experiment might break down is if the namespace size is the size of the block device. I'm not sure how ndctl then handles the overhead if it is not part of the namespace itself.

Suppose I have a region of 10GiB. Can I create 10 namespaces of 1GiB each?

@pohly
Copy link
Contributor

pohly commented Jun 11, 2021

So the reported namespace size is indeed "requested size minus overhead". I tried with a 64GiB region and requested 64 namespaces of size 1GiB. That worked (i.e. I now have 64 namespaces), but each namespace is said to have 1006MiB.

My plan could still work: PMEM-CSI would simply have to report the requested volume size as the actual volume size instead of reading the size from the created namespace.
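In CSI terms the change would look roughly like this (a hypothetical sketch, not the actual pmem-csi controller code; createNamespace is a made-up stand-in for the libndctl-backed namespace creation):

package main

import (
	"context"
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// controllerServer stands in for the driver's controller service.
type controllerServer struct{}

// createNamespace is a hypothetical placeholder for namespace creation via
// libndctl; it returns the size of the resulting block device, which is
// smaller than requested because of the fsdax map overhead.
func createNamespace(requestedBytes int64) (int64, error) {
	overhead := (requestedBytes / 4096) * 64
	return requestedBytes - overhead, nil
}

func (cs *controllerServer) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
	required := req.GetCapacityRange().GetRequiredBytes()
	if _, err := createNamespace(required); err != nil {
		return nil, err
	}
	return &csi.CreateVolumeResponse{
		Volume: &csi.Volume{
			VolumeId: req.GetName(),
			// Report the consumed PMEM (the requested size) instead of the
			// smaller block-device size returned by createNamespace.
			CapacityBytes: required,
		},
	}, nil
}

func main() {
	cs := &controllerServer{}
	resp, _ := cs.CreateVolume(context.Background(), &csi.CreateVolumeRequest{
		Name:          "pvc-example",
		CapacityRange: &csi.CapacityRange{RequiredBytes: 64 << 30},
	})
	fmt.Println(resp.Volume.CapacityBytes) // 64 GiB, regardless of the namespace overhead
}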

@pohly
Copy link
Contributor

pohly commented Jun 11, 2021

The advantage of not asking for a larger namespace is that volume creation becomes predictable ("a region of size XX can hold n volumes of size YY", at least when ignoring fragmentation and alignment). This is important for storage capacity tracking, where Kubernetes compares the reported free space and maximum volume size against the requested size of the PVC.

pohly added a commit to pohly/pmem-CSI that referenced this issue Jun 14, 2021
This is the outcome of the discussion in
intel#933 (comment)
pohly added a commit to pohly/pmem-CSI that referenced this issue Jun 14, 2021
This is the outcome of the discussion in
intel#933 (comment)
pohly added a commit to pohly/pmem-CSI that referenced this issue Jun 14, 2021
This is the outcome of the discussion in
intel#933 (comment)
okartau pushed a commit to okartau/pmem-CSI that referenced this issue Jun 15, 2021
This is the outcome of the discussion in
intel#933 (comment)
@pohly
Copy link
Contributor

pohly commented Jun 21, 2021

I now consider this a feature change and don't intend to backport it to 0.9. Therefore we are done with this.

@pohly pohly closed this as completed Jun 21, 2021
@pohly pohly removed the 0.9 label Jun 21, 2021
@krisiasty
Copy link
Author

My expectation (which is consistent with the CSI spec) was that the block device would be no smaller than requested, no matter how much overhead the "storage subsystem" requires internally. It's the same situation as when you request a volume from an enterprise storage system, which may apply RAID protection or even compression and deduplication. The user doesn't care how much raw space is needed to provision such a volume, and with compression / deduplication it is not even possible to guess, since it can change based on the actual data. If the subsystem is unable to satisfy the request, it should simply return an error stating that there is not enough space available. You should never expect to be able to provision a specific number of logical volumes given a particular amount of raw space. Does that make sense?
Yes, there is additional overhead from the filesystem layer, but that is a higher layer, applied after a properly sized volume has been provisioned.

Since we are using the same API (CSI) as the other storage types, the behaviour should be consistent, even though applications might consume PMEM differently in app direct mode.

@pohly
Copy link
Contributor

pohly commented Jun 21, 2021

My expectation (which is consistent with CSI spec)

Where in the CSI spec is it described how capacity is measured? I don't think it is defined anywhere, and it is therefore up to the CSI driver implementers to define.

I understand that the namespace overhead is unexpected, both in PMEM-CSI and in ndctl. But that's a matter of setting expectations, which can be done by clearly documenting how "size" is defined. Both ndctl and PMEM-CSI do that now.

For the example with RAID, the "size" is the number of usable blocks. That's one way of doing it, but not required by the spec. Counting the used PMEM is another. I prefer to be consistent with ndctl instead of some different technology.

Yes, there is additional overhead from the filesystem layer, but this is higher layer after properly sized volume is provisioned.

The filesystem is created by PMEM-CSI as part of provisioning. That there is a block device involved is an implementation detail.

@krisiasty
Copy link
Author

CSI spec: https://github.com/container-storage-interface/spec/blob/master/spec.md

The CreateVolumeRequest contains CapacityRange, which is defined this way:

message CapacityRange {
  // Volume MUST be at least this big. This field is OPTIONAL.
  // A value of 0 is equal to an unspecified field value.
  // The value of this field MUST NOT be negative.
  int64 required_bytes = 1;

  // Volume MUST not be bigger than this. This field is OPTIONAL.
  // A value of 0 is equal to an unspecified field value.
  // The value of this field MUST NOT be negative.
  int64 limit_bytes = 2;
}

Since most requests contain only the required_bytes field, it means that the "volume MUST be at least this big". The common meaning is that you should be able to write this many bytes to the volume in block mode (unless it is a read-only volume or snapshot). I don't know of any other CSI driver which implements this differently.

I know about the ndctl limitations and understand that there is no easy way to address this issue, but ndctl is just an internal implementation detail and it shouldn't matter how it handles the allocations. The tool is not even consistent: if you create a namespace with the metadata stored in memory instead of on the media, the available volume size will be different despite the same request size, so it is clear that this implementation is broken and should not be used as a reference.

I don't understand the part regarding RAID and blocks. When dealing with an enterprise storage array you are not requesting a number of blocks - from the CSI spec perspective you are always requesting a specified number of bytes. How that is translated internally by the driver and the storage backend can vary and is not relevant. The driver can translate the request into a specific number of blocks, and the array can do RAID / compression / deduplication, allocating the necessary space on multiple drives - all of this is invisible and does not matter. You can either provision a logical volume of the requested size, or not. If you are reusing an existing volume or snapshot which is smaller than requested, you can either resize it properly or fail the request.

@pohly
Copy link
Contributor

pohly commented Jun 21, 2021

The common meaning is you should be able to write this many bytes to the volume in block mode

That's the point: it's "common", but not "required".

ndctl is just internal implementation and it shouldn't matter how it is dealing with the allocations

The same can be said about the underlying block device...

I don't understand the part regarding RAID and blocks.

My argument was that for such a driver it makes sense to define "volume size" as "number of usable blocks times block size" and hide the overhead, but that this does not mean that PMEM-CSI must do the same.

The difference is that the underlying storage for PMEM-CSI is limited compared to a more dynamic storage pool where disks might be added or removed as needed. That limitation is exposed to the users and admins. To me it makes more sense to use "amount of PMEM" as definition of "capacity" because then "free space" and "volume size" become comparable. It's consistent and predictable.

The alternative would be to define "capacity" as "usable bytes". We then would have to report free space differently to avoid surprises like "16GB free, but cannot create a 16GB volume". The problem with that is that we can't implement it unless we get more information from libndctl and the currently running kernel. If pmem/ndctl#167 gets resolved, then we can reconsider the approach in PMEM-CSI, but until then any solution in PMEM-CSI would just be a band-aid that will break down occasionally - not what I want in a driver that is supposed to reach 1.0.

I understand that we now might not meet the initial user expectations and have to explain how PMEM-CSI works to adapt those expectations, but the same is true when we do it the other way around and then have to explain why volumes can only be smaller than expected.

Ignoring expectations for a second, does the current approach lead to any real usability issues when deploying applications? In the end, that is what matters.
