
CRD status is not deployed #2268

Open
leonp-c opened this issue Aug 21, 2024 · 5 comments
Labels
kind/support Categorizes issue or PR as a support question.
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@leonp-c

leonp-c commented Aug 21, 2024

What happened:
Registering a CRD with:

subresources:
      scale:
        labelSelectorPath: .status.selector
        specReplicasPath: .spec.replicas
        statusReplicasPath: .status.replicas
      status: {}

does not register the status subresource in k8s. After checking from the command line using kubectl get crd some.custom.crd.ai -o yaml, the resulting YAML is:

    subresources:
      scale:
        labelSelectorPath: .status.selector
        specReplicasPath: .spec.replicas
        statusReplicasPath: .status.replicas

status is missing
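
For reference, a minimal sketch (assuming kubeconfig access; the CRD name is taken from the kubectl command above) of inspecting the deployed CRD's subresources with the Python client:

    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    apiext = client.ApiextensionsV1Api()

    # Read the deployed CRD back and print each version's subresources.
    crd = apiext.read_custom_resource_definition(name="some.custom.crd.ai")
    for version in crd.spec.versions:
        # In the report above, subresources.status comes back missing/None here.
        print(version.name, version.subresources)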

What you expected to happen:
status should exist so that the following call (kubernetes Python package):
custom_objects_api.get_namespaced_custom_object(group=self.group, version=self.version, namespace=self.namespace, plural=self.plural, name=self.name)
would work.
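
For context, the same call as a self-contained sketch; the group/version/namespace/plural/name values are placeholders, not values from the original manifest:

    from kubernetes import client, config

    config.load_kube_config()
    custom_objects_api = client.CustomObjectsApi()

    # Placeholder identifiers; the real ones come from the CRD and the CR instance.
    obj = custom_objects_api.get_namespaced_custom_object(
        group="custom.crd.ai",
        version="v1",
        namespace="default",
        plural="somes",
        name="example",
    )
    print(obj.get("status"))  # expected to be readable once the status subresource is deployed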

How to reproduce it (as minimally and precisely as possible):
Deploy a CustomResourceDefinition that has spec.versions[].subresources.status set to {} (an empty dict), then check the deployed CRD YAML:
kubectl get crd some.resource.name.ai -o yaml
(a Python-client sketch of these steps follows)
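
A minimal Python-client sketch of these reproduction steps; the group/kind names are illustrative only:

    from kubernetes import client, config

    config.load_kube_config()
    apiext = client.ApiextensionsV1Api()

    # Illustrative CRD with both the scale and the empty status subresource.
    crd = client.V1CustomResourceDefinition(
        api_version="apiextensions.k8s.io/v1",
        kind="CustomResourceDefinition",
        metadata=client.V1ObjectMeta(name="somes.custom.crd.ai"),
        spec=client.V1CustomResourceDefinitionSpec(
            group="custom.crd.ai",
            scope="Namespaced",
            names=client.V1CustomResourceDefinitionNames(
                plural="somes", singular="some", kind="Some", list_kind="SomeList"
            ),
            versions=[
                client.V1CustomResourceDefinitionVersion(
                    name="v1",
                    served=True,
                    storage=True,
                    schema=client.V1CustomResourceValidation(
                        open_apiv3_schema=client.V1JSONSchemaProps(
                            type="object", x_kubernetes_preserve_unknown_fields=True
                        )
                    ),
                    subresources=client.V1CustomResourceSubresources(
                        status={},
                        scale=client.V1CustomResourceSubresourceScale(
                            spec_replicas_path=".spec.replicas",
                            status_replicas_path=".status.replicas",
                            label_selector_path=".status.selector",
                        ),
                    ),
                )
            ],
        ),
    )
    apiext.create_custom_resource_definition(crd)
    # Then: kubectl get crd somes.custom.crd.ai -o yaml  (status: {} should be present)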

Anything else we need to know?:
Tried downgrading the kubernetes package to 28.1.0 (to match the hikaru version, 1.3.0); same result.

Environment:

  • Kubernetes version (kubectl version):
    • Client Version: v1.27.2
    • Kustomize Version: v5.0.1
    • Server Version: v1.27.14
  • OS (e.g., MacOS 10.13.6):
  • Python version (python --version): 3.10.12
  • Python client version (pip list | grep kubernetes): 30.1.0
  • hikaru version: 1.3.0
@leonp-c added the kind/bug label on Aug 21, 2024
@roycaihw
Member

This seems to be a server-side issue. Have you verified whether kubectl has the same problem?

@roycaihw added the kind/support label and removed the kind/bug label on Aug 28, 2024
@leonp-c
Author

leonp-c commented Sep 2, 2024

Using kubectl returned all values as expected.

@Bhargav-manepalli

Hi @leonp-c,

I tried to reproduce the issue you reported, and everything worked as expected on my end. Here’s what I did:

  1. Deployed a CRD with spec.versions.subresources.status defined as {} using the Kubernetes Python client.
  2. Queried the resource both using kubectl and the Python client, and I was able to see the status field correctly populated in both cases.

If everything looks correct and the issue still persists, feel free to share more details about your setup.

@leonp-c
Author

leonp-c commented Sep 12, 2024

Hi @Bhargav-manepalli,
It seems the issue was related to the hikaru module, which I used to parse the YAML and create the resource.
hikaru removes/ignores the empty dictionary field from the v1 object.
A bug was opened on their repo: hikaru-43.
Thank you for your effort.
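
For anyone hitting the same thing, a rough workaround sketch, untested and assuming hikaru's load_full_yaml/get_clean_dict helpers (names may differ between versions) and a hypothetical crd.yaml file; it re-adds the dropped empty status subresource before creating the CRD:

    from hikaru import load_full_yaml, get_clean_dict  # assumed hikaru helpers
    from kubernetes import client, config

    config.load_kube_config()
    apiext = client.ApiextensionsV1Api()

    # Parse the manifest with hikaru, then convert it back to a plain dict body.
    crd_doc = load_full_yaml(path="crd.yaml")[0]  # hypothetical file name
    body = get_clean_dict(crd_doc)

    # Restore the empty status subresource that gets dropped during serialization
    # (assumes every version in this CRD should expose the status subresource).
    for version in body["spec"]["versions"]:
        version.setdefault("subresources", {})["status"] = {}

    apiext.create_custom_resource_definition(body)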

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Dec 11, 2024