Attaching metadata to jobs #1558
I think you should pass them through the job's serialized parameters. This is the general pattern, and for Sentry something like this might work:

```ruby
module SentryTraces
  extend ActiveSupport::Concern

  included do
    prepend Prepends

    attr_accessor :sentry_trace_propagation_headers

    around_perform :configure_sentry_traces
  end

  module Prepends
    def enqueue(options = {})
      self.sentry_trace_propagation_headers ||= Sentry.get_trace_propagation_headers
      super
    end

    def serialize
      super.tap do |job_data|
        job_data["sentry_trace_propagation_headers"] = sentry_trace_propagation_headers.to_json if sentry_trace_propagation_headers
      end
    end

    def deserialize(job_data)
      super
      self.sentry_trace_propagation_headers = JSON.parse(job_data["sentry_trace_propagation_headers"]) if job_data["sentry_trace_propagation_headers"]
    end

    def configure_sentry_traces
      Sentry.setup(sentry_trace_propagation_headers) # <= or however Sentry wants it set up
      yield
      Sentry.teardown
    end
  end
end

class ApplicationJob < ActiveJob::Base
  include SentryTraces
end
```

Fyi, when adding to serialize/deserialize you have to explicitly serialize the values (unlike job params passed to …)
That looks like it'd work! I'll give it a try and report back. Thanks for the quick reply as always, Ben :)

I was able to achieve what I was after using your suggestion. Thanks again!
I spent some time looking into how feasible it would be to integrate Sentry's queue monitoring feature into GoodJob.

Simply monitoring how much time passes between when a job is enqueued and when it is picked up by a worker is probably doable using the ActiveJob `around_enqueue` and `around_perform` callbacks. I need to do some more testing for things such as `perform_now` vs. `perform_later`, job retries, etc., but it sounds easy enough.

One thing that doesn't seem so easy with GoodJob's current state is trace propagation. This is a feature that allows Sentry to group events across different services. E.g. if a web request enqueues a job and the job is then picked up by a worker, with trace propagation Sentry will group the two events under the same trace, which is very useful for debugging.
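The enqueue-to-pickup timing mentioned above can be sketched in plain Ruby. The class and method names here are hypothetical; real code would hook the ActiveJob callbacks instead:

```ruby
# Hypothetical sketch of measuring queue latency: stamp the job with a
# wall-clock timestamp at enqueue time and compare at perform time.
# Wall clock (not monotonic) is used because enqueue and perform
# typically run in different processes.
class LatencyTrackedJob
  attr_accessor :enqueued_at

  def enqueue
    self.enqueued_at = Process.clock_gettime(Process::CLOCK_REALTIME)
    self
  end

  def perform
    picked_up_at = Process.clock_gettime(Process::CLOCK_REALTIME)
    picked_up_at - enqueued_at # queue latency in seconds, to be reported
  end
end
```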
This feature works by passing a specific set of headers between events. Sentry's Ruby gem provides an integration for Sidekiq which takes advantage of this:

This works for Sidekiq because `job` is just an arbitrary hash that gets stored in Redis. I tried to achieve the same thing with GoodJob, but unfortunately I couldn't find a good way to store the trace propagation headers. I hacked something together by storing the headers inside `serialized_params` and then removing them before the job is actually called, but this is obviously a terrible and unreliable hack.

Anyway, the above was just for context. What do you think of the idea of adding a `metadata` column to GoodJob's job model? Ideally a JSONB column, though just text would work too.
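For what it's worth, a `metadata` column like the one proposed could look something like this as a migration. This is a sketch only: the table name assumes GoodJob's `good_jobs` table, and this is not an actual GoodJob migration:

```ruby
# Hypothetical migration sketch, not an official GoodJob migration.
class AddMetadataToGoodJobs < ActiveRecord::Migration[7.0]
  def change
    # JSONB on PostgreSQL; a plain :text column would also work, with
    # (de)serialization handled at the application layer instead.
    add_column :good_jobs, :metadata, :jsonb
  end
end
```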