This repository has been archived by the owner on Dec 11, 2023. It is now read-only.

Should raise error if metric is recorded with the wrong name #83

Open
smathieu opened this issue Feb 13, 2014 · 7 comments
Comments

@smathieu

Currently, if you use an invalid metric name such as

Librato.measure "lkdfjsldfk#&*()#$&)#", 0 # => nil

The librato-rails gem will accept it without complaining and not push that data to Librato. It would be much better if it could be configured to raise an exception instead. At least, our unit-tests would let us know we're sending an invalid name.
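For context, Librato's documented naming rules restrict metric names to 255 or fewer characters drawn from letters, digits, `.`, `:`, `-`, and `_`. A standalone check mirroring those rules might look like this sketch (the regexp is my reading of the docs, not code taken from the gem):

```ruby
# Sketch of a validity check based on Librato's documented naming rules:
# up to 255 characters from the set A-Za-z0-9 . : - _
VALID_METRIC_NAME = /\A[-.:_\w]{1,255}\z/

def valid_metric_name?(name)
  !!(name =~ VALID_METRIC_NAME)
end

valid_metric_name?("rack.request.time")    # => true
valid_metric_name?("lkdfjsldfk#&*()#$&)#") # => false
```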

@nextmat
Contributor

nextmat commented Feb 13, 2014

@smathieu The current behavior is intentional because presumably you don't want instrumentation code in production breaking production. It would be nice to have some mode you could enable which would raise exceptions for this sort of thing though.

Can you give me a bit more info about your ideal behavior?

@smathieu
Author

I think it would be nice to at least have an optional callback when the name is invalid. That way, I could raise an exception in development and test.

Something like

# config/initializers/librato.rb
Librato.on_name_error do |name|
  unless Rails.env.production?
    raise "Librato received an invalid name: #{name}"
  end
end

@nextmat
Contributor

nextmat commented Feb 13, 2014

Hmm, there are certainly a number of other invalid cases which we swallow right now as well. I will give some thought to how to handle this more generically, but I like the idea of an optional callback.
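One possible shape for the optional callback discussed above: a module that stores a user-supplied block and invokes it when a name fails validation, falling back to the current silent-drop behavior when no callback is registered. All names here are hypothetical sketches, not the gem's API, and the regexp is my reading of Librato's naming rules:

```ruby
# Hypothetical callback registry -- not part of librato-rails.
module MetricNameErrors
  VALID_NAME = /\A[-.:_\w]{1,255}\z/  # assumed Librato naming rules

  # Register a block to run when an invalid name is seen.
  def self.on_name_error(&block)
    @callback = block
  end

  # Returns true for valid names; invokes the callback (if any) otherwise.
  def self.check(name)
    return true if name =~ VALID_NAME
    @callback&.call(name)  # no callback set => silently drop, as today
    false
  end
end

invalid = []
MetricNameErrors.on_name_error { |name| invalid << name }
MetricNameErrors.check("response.time")  # => true
MetricNameErrors.check("bad name!")      # => false
invalid                                  # => ["bad name!"]
```

A test suite could register a callback that raises, while production leaves it unset and keeps the current non-breaking behavior.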

@smathieu
Author

smathieu commented Jun 3, 2014

Any progress on this? We just got burned by this again :(

@nextmat
Contributor

nextmat commented Jun 4, 2014

@smathieu I know what needs doing, just need a few spare cycles to get it done. I'll try to get this sorted in the next week or so.

@nextmat
Contributor

nextmat commented Jul 3, 2014

@smathieu I spent some time on this over the past week. It is actually a bit trickier to achieve than I originally thought, because we don't do validations inline. We do them all at once on submission, to reduce overhead when instrumenting in the primary (request-serving) threads.

Since queued metrics are never flushed when running tests, even if a callback were set up it would never be reached.

I'm thinking the right way to do this is probably to add a method that lets you manually validate all of the currently queued metrics. You would have to call it directly, but I think this would get you what you want. What do you think?
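For illustration, the manual-validation idea might look like the sketch below, using a plain array as a stand-in for the gem's aggregation queue. The class and the `validate_queued!` method name are hypothetical; nothing like them exists in librato-rails yet, and the regexp is my reading of Librato's naming rules:

```ruby
# Hypothetical stand-in for the gem's queue, with a manual validation pass.
class FakeQueue
  VALID_NAME = /\A[-.:_\w]{1,255}\z/  # assumed Librato naming rules

  def initialize
    @measurements = []
  end

  # Enqueue without inline validation, mirroring the gem's behavior.
  def measure(name, value)
    @measurements << { name: name, value: value }
  end

  # Validate everything queued so far; raise listing any invalid names.
  def validate_queued!
    bad = @measurements.map { |m| m[:name] }.reject { |n| n =~ VALID_NAME }
    raise "invalid metric names queued: #{bad.inspect}" unless bad.empty?
    true
  end
end

q = FakeQueue.new
q.measure("rack.request.time", 120)
q.validate_queued!    # => true
q.measure("lkdfjsldfk#", 0)
# q.validate_queued!  # would now raise
```

In a test suite, such a method could be called from an after-hook so any invalid name queued during a test fails that test.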

@smathieu
Author

smathieu commented Jul 3, 2014

That would work, as long as I can call the same validation code that is used on submission. Thanks for looking into this.
