promxy did not get external_labels from server_groups #652

Closed
deniszh opened this issue May 7, 2024 · 9 comments

@deniszh

deniszh commented May 7, 2024

Promxy does not retrieve any "external_labels" from server_groups, which breaks compatibility with Prometheus.
In our case we migrated to labels, but I can imagine use cases where this could be tricky.

@jacksontj
Owner

First off, thanks for reaching out!

I am a bit confused by the report/question here. Some of this is likely due to external_labels in prometheus being a bit of a confusing topic. From the prometheus docs:

  # The labels to add to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:

This means that these labels aren't returned in metric queries -- they are only added when data is sent out to external systems (e.g. alertmanager). This is especially murky since the docs say they show up in "remote storage" but not in the regular query interface (super confusing).

Promxy is passing the external labels config to the prometheus rule manager so these external labels are added to alerts fired to alertmanager; but that is the only current usage of these external labels.
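
As a rough sketch (the label name and value below are just illustrative), the external labels promxy does attach on the alerting path come from its own global block, not from anything the server_groups report back:

  global:
    external_labels:
      source: promxy   # example label: added to alerts promxy fires, not to query results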

In your usage, where are you expecting to see these external labels?

@deniszh
Author

deniszh commented May 8, 2024

Hi @jacksontj
Thanks for your response!
Sorry for not providing more details; let me explain the issue.
Assume a setup like this:

--> Prometheus (frontend) --> Prometheus Backend A
               |
               +------------> Prometheus Backend B                       

If Prom A has this config

external_labels:
      replica: a
      cluster: b
      role: c

and Prom B has this config

external_labels:
      replica: x
      cluster: y
      role: z

and Prometheus (frontend) has this config

remote_read:
  - url: http://promA:9090/api/v1/read
  - url: http://promB:9090/api/v1/read

and both of them have a metric metric1 (let's assume it has no labels, for simplicity), then if we query the Prometheus frontend for metric1 we'll get

metric1{replica="a", cluster="b", role="c"}
metric1{replica="x", cluster="y", role="z"}

Next step: let's replace the Prometheus frontend with promxy, with the following setup:

server_groups:
    - static_configs:
        - targets:
          - promA:9090
    - static_configs:
        - targets:
          - promB:9090

Then a similar request for metric1 to promxy would return

metric1{}

(and it would be metric1 from PromA btw).

So, if you know the external labels for PromA and PromB, the fix is obvious:

server_groups:
    - static_configs:
        - targets:
            - promA:9090
          labels:
            replica: a
            cluster: b
            role: c
    - static_configs:
        - targets:
            - promB:9090
          labels:
            replica: x
            cluster: y
            role: z

This obviously restores the old behavior. So, it's bad for breaking compatibility (but IIRC promxy does not promise any), and I can imagine issues only if the external labels are dynamic and not easily obtainable in promxy, which is kind of theoretical.
So, maybe we can just mention the behavior above near the labels field in the example config and call it a day.

@Giovanniuum

and both of them have a metric metric1 (let's assume it has no labels, for simplicity), then if we query the Prometheus frontend for metric1 we'll get
metric1{replica="a", cluster="b", role="c"}
metric1{replica="x", cluster="y", role="z"}

That's because your frontend uses federation, which is a native Prometheus feature. Promxy doesn't act as a federator, so it has no way to get the Prometheus external_labels: they're not stored anywhere in the TSDB, but rather added dynamically on federation/remote_write/alertmanager.
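
For comparison, a frontend that actually federates scrapes the backends' /federate endpoint with something like the sketch below (the job name and match expression are just illustrative), and it is on that path that each backend attaches its external_labels:

  scrape_configs:
    - job_name: federate
      honor_labels: true
      metrics_path: /federate
      params:
        'match[]':
          - '{__name__=~".+"}'
      static_configs:
        - targets:
            - promA:9090
            - promB:9090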

@deniszh
Author

deniszh commented Jun 20, 2024

Hi @Giovanniuum,
I'm talking about default Prometheus behaviour; I did not use any federation-specific options or flags. This Prometheus behavior is described in the docs in the "external_labels" section, not in the federation section.

@jacksontj
Owner

So, it's bad for breaking compatibility

This part I'm a bit confused on. If I take a look at stock prometheus, promxy's behavior seems consistent. As an example, if we use demo.robustperception.io:

the config has the following:

  external_labels:
    environment: prometheus-demo

But when I do a query for metrics I get back:

prometheus_build_info{branch="HEAD", goarch="amd64", goos="linux", goversion="go1.22.3", instance="demo.do.prometheus.io:9090", job="prometheus", revision="879d80922a227c37df502e7315fad8ceb10a986d", tags="netgo,builtinassets,stringlabels", version="2.52.0"}

Note that the external_label of environment is not present. These labels are only sent by prometheus when it talks to external systems (e.g. alertmanager). So this feels like it's exactly compatible -- as promxy does send its configured external_labels to those systems as well.

The use-case I've seen most for external_labels is to add additional context to an external system (e.g. alerting) to know where the alert came from. In that context it makes sense that promxy would provide its own instead of trying to derive and merge what downstreams have configured (both because it feels more right-- and because those downstreams don't really expose that as a config :) ).
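
(As a rough illustration, with a made-up alert name: an alert fired from that demo instance would reach alertmanager carrying the external label even though queries don't show it.)

  labels:
    alertname: DemoAlert                    # hypothetical alert name
    environment: prometheus-demo            # external_label attached on the alerting path
    instance: demo.do.prometheus.io:9090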

So unless I've missed something, this feels like it's working as intended? (even though the intent is a bit confusing :D ).

@deniszh
Author

deniszh commented Jul 14, 2024

Hi @jacksontj
Thanks for your answer.

Note that the external_label of environment is not present.

In a simple query it does not, indeed. You need two Prometheus instances to see the issue, and the demo installation does not have that.

These labels are only sent by prometheus when it talks to external systems (e.g. alertmanager).

Yes. Or a proxy. Prometheus can also act as a proxy to other Prometheuses (without data merging, of course), and I faced this issue when I replaced the proxy Prometheus with promxy. I changed the description above to reflect that.
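
For context, here is a minimal sketch of that proxy frontend; it scrapes nothing itself, and read_recent is my assumption here so that the frontend also serves fresh samples from the backends:

  # frontend Prometheus acting purely as a read proxy
  remote_read:
    - url: http://promA:9090/api/v1/read
      read_recent: true
    - url: http://promB:9090/api/v1/read
      read_recent: true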

So this feels like it's exactly compatible -- as promxy does send its configured external_labels to those systems as well.

This issue is not about sending external labels (I don't even know whether promxy supports that) but about receiving external labels from downstream Prometheuses. In that aspect promxy breaks compatibility with stock Prometheus.

In that context it makes sense that promxy would provide its own instead of trying to derive and merge what downstreams have configured (both because it feels more right-- and because those downstreams don't really expose that as a config :) ). So unless I've missed something, this feels like it's working as intended? (even though the intent is a bit confusing :D ).

I'm not challenging the intention of the project, indeed. If it feels right and promxy should do exactly that, that's more than fine. I just noted that in this aspect promxy is not compatible with stock Prometheus, which does exactly "derive and merge what downstreams have configured", and this behaviour should either be corrected or highlighted in the README, or at least in this issue.

Feel free to close it; it will have served its purpose even if closed.

@jacksontj
Owner

Prometheus can also act as a proxy to other Prometheuses (without data merging, of course), and I faced this issue when I replaced the proxy Prometheus with promxy. I changed the description above to reflect that.

Oh, I am not aware of this configuration for prometheus. If you have some links to docs/configs, maybe I can read up on that and come up with a better solution here.

but receiving external labels from downstream prometheuses

To be a bit pedantic here: the issue (at least as I'm seeing it) is that the downstream prometheus instances aren't sending these external labels. So this isn't a case of promxy eating the labels -- but rather that these external_labels aren't returned in the query result to promxy, so it has no idea about them.

I just noted that in this aspect promxy is not compatible with stock prometheus

I would love some more context on this prometheus-as-a-proxy stuff, since I am completely unaware of that behavior in prometheus (and unable to find anything in my little bit of googling). Fundamentally promxy is just hitting the query API the same way a user would if they went to the prometheus HTTP UI. So if the labels aren't there (at least how it is today) then they won't show up. If prometheus has some mechanism to conditionally add those in the response -- I'd be VERY curious what that is (since I might be able to incorporate that).

@jacksontj
Owner

Given where this left off, it sounds like we can close this out. If there is more context from you or someone else on this "prometheus as a proxy" setup, I would be very interested in understanding how it fits into the picture. Otherwise it sounds like we should close this out for now (if you disagree, definitely feel free to re-open :) )

@deniszh
Author

deniszh commented Sep 7, 2024

Sure, no problem.
After your questions I tried to understand the intended Prometheus behaviour regarding external labels and remote read, read some issues and docs, and it confused me even more.
So, the issue is there and it's easily reproducible, but it's not clear whether it's a promxy issue or a bug in Prometheus itself. So, let's close it.
