add event tracking #349

Open · wants to merge 18 commits into main

Conversation

@sravfeyn (Member) commented Jul 1, 2024

This is work in progress, but I wanted to get any initial review I can, @calellowitz



class RecordsFlagged(InferredEventSpec):

@sravfeyn (Member Author), Jul 1, 2024

This style of inferred events allows us to avoid saving redundant info on the Event model.



@celery_app.task
def process_events_batch():
Member Author

I went this route over a few other options:

  • Simpler option: a single celery task saving each event would be simpler, but it adds too much celery overhead and doesn't take advantage of saving in bulk, which should perform better.
  • Complicated option: using kafka would make this much more scalable, but it would probably be overkill for now.

We can run this task periodically, say every 30 seconds.
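
For context, here is a minimal sketch of the pattern under discussion (not the PR's actual code): producers push serialized events onto a Redis list, and a periodic celery task drains the list in pages and bulk-inserts into Postgres. The queue key, batch size, and import paths are assumptions; get_redis_connection assumes django-redis is in use.

import json

from django_redis import get_redis_connection  # assumes django-redis is installed

from config.celery_app import celery_app  # celery app import path is a guess
from commcare_connect.events.models import Event  # as is this models path

QUEUE_KEY = "events_queue"  # hypothetical Redis list key
BATCH_SIZE = 1000


def track_event(event_data):
    # Producers push a serialized event onto a Redis list instead of writing
    # a Postgres row inside the request. Payloads are flat dicts of model
    # kwargs (e.g. user_id, not user objects) so they round-trip through JSON.
    get_redis_connection("default").rpush(QUEUE_KEY, json.dumps(event_data))


@celery_app.task
def process_events_batch():
    # Periodic task (e.g. every 30 seconds): drain the queue a page at a
    # time and write each page to Postgres with a single bulk insert.
    redis = get_redis_connection("default")
    while True:
        raw_items = redis.lpop(QUEUE_KEY, BATCH_SIZE)  # count arg needs Redis >= 6.2 / redis-py >= 4
        if not raw_items:
            break
        Event.objects.bulk_create(Event(**json.loads(item)) for item in raw_items)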

Collaborator

Thanks for describing your thought process a bit. A few questions:

  1. Why redis instead of postgres for the queue?
  2. Why does this need to be async at all? I can imagine it's slightly faster to write to redis than postgres, but if we are doing a network write for every event anyway, I am surprised this is much more efficient than just writing directly to postgres.

@sravfeyn (Member Author), Jul 11, 2024

Main reason: wouldn't writing to redis (in memory) be faster than writing to the Postgres DB (on disk)? This also means fewer writes to Postgres (because they are committed in batches), which should make it that much less likely for the DB to run into performance issues.

Collaborator

I would rather we start with a purely synchronous implementation for now if we want to be sure about event durability. We can always make these async later, but this adds a lot of complexity for something we aren't sure we need yet. If we do want to make it async, I think we should use celery unless we have a very strong reason not to since we are already using that. That keeps us to only one async task method in our code and makes it a bit easier to manage.

@sravfeyn (Member Author), Jul 26, 2024

I am pretty sure it would impact performance in sync mode, at least in some cases if not all. For example, in process_learn_form we need to track multiple events (Finishes a Module, Finishes all Modules, Failed Assessment, Succeeded Assessment). Without async, these will add up enough to slow down the request.

(Though maybe this isn't a big issue if these requests are coming from Repeaters as opposed to directly from mobile submissions.)

And if we do a separate task for each event, we are going to face a lot of constant celery troubleshooting. We have like 47 lines of code to do async and a single periodic task. I don't feel that qualifies as 'a lot of complexity'.

I am happy to remove all the async code or make it multiple individual tasks, but ¯\_(ツ)_/¯ you will have to handle it if/when things slow down 🐢 ⌛ 🐌 or if we end up doing constant celery firefighting 🧯 🔥 🚒

Collaborator

My instinct was that nearly all of these come from background tasks (like approvals) or non-time-sensitive requests. It is certainly possible that future ones won't (the mobile endpoint should be performant, but we also need to be able to tell mobile whether the write succeeded, and we can't do that with an async task anyway), and we can deal with async then, but at this point it seems more important to have guarantees. I am also less convinced of the long-term sustainability of this implementation. My understanding of the code is that if any event has an issue, no events will ever be written, based on the error handling there: for example, if mobile sends a new event and we haven't yet migrated, or a web dev forgets to add the migration. That seems quite dangerous. Additionally, it's not paged at all; if the list is very long (maybe mobile sends a bunch at once), it could consume a lot of memory.

None of these issues are unfixable, but the more we work to make this a production-ready queuing system, the more complex it becomes, and the more we benefit from a real one. However, given our current needs, unless there is a clear place where the async will benefit us, it is a bit hard for me to believe that one postgres write will be enough worse than the one redis write at this time and for our use cases.

I am not going to block the PR over this, and if you think there is a specific place where that performance matters (the form processor would be reasonable if that were blocking a mobile call), I am certainly willing to consider it, but my strong presumption is that this is overkill for our current needs, while not being robust enough to be a long-term solution.

@calellowitz (Collaborator) left a comment

Left a couple questions. Super happy you chose to do this in a new app; way fewer migration issues.


class Event(models.Model):

    class Type(models.TextChoices):
Collaborator

I am a little concerned that this means we will need a migration each time we add a new event, something that is likely to be quite frequent. What is the reason to make this a choice set?

Member Author

Yeah, I am deliberating changing it, for a few other reasons as well.

class Event(models.Model):

    class Type(models.TextChoices):
        INVITE_SENT = "is", gettext("Invite Sent")
Collaborator

I think the slugs should be human readable. No reason to save a few bytes, and it will make it easier to use them later.

Member Author

We do already have the readable first part of the slug (INVITE_SENT) inside the code. Of course, if you look at the raw DB it might not be readable. The number of events is going to be very large (larger than any other table), so I thought we could use every opportunity to keep it smaller.

)


class RecordsFlagged(InferredEventSpec):
Collaborator

What does this class do?

Member Author

This is a specification for 'inferred' events. For example, this particular one specifies that a UserVisit with the listed mapping is an indication of the event that visits got flagged. This avoids saving redundant info.

This spec is further used in get_events to get the list of events.
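
To make the idea concrete, a sketch of what such a spec might look like; the base class, the UserVisit field names, and the status value are all assumptions, since the PR's actual implementation isn't quoted here.

from commcare_connect.opportunity.models import UserVisit  # import path is a guess


class InferredEventSpec:
    """Base for events inferred from existing rows instead of being
    stored on the Event model."""

    def get_events(self, user=None, from_date=None, to_date=None):
        raise NotImplementedError


class RecordsFlagged(InferredEventSpec):
    # A flagged UserVisit is treated as evidence of a "records flagged"
    # event; no Event row is ever written for it.
    def get_events(self, user=None, from_date=None, to_date=None):
        visits = UserVisit.objects.filter(status="flagged")  # status value is illustrative
        if user:
            visits = visits.filter(user=user)
        if from_date:
            visits = visits.filter(visit_date__gte=from_date)
        if to_date:
            visits = visits.filter(visit_date__lte=to_date)
        for visit in visits:
            yield Event(
                date_created=visit.visit_date,
                event_type="records_flagged",  # hypothetical slug
                user=visit.user,
                opportunity=visit.opportunity,
            )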

date_created = models.DateTimeField(auto_now_add=True, db_index=True)
event_type = models.CharField(max_length=2, choices=Type.choices)
user = models.ForeignKey(User, on_delete=models.CASCADE, null=True)
opportunity = models.ForeignKey(Opportunity, on_delete=models.PROTECT, null=True)
Collaborator

is there no way to save additional metadata?

Member Author

For all the metrics so far, I haven't found one which needs more metadata.




@calellowitz (Collaborator) left a comment

Left a few follow-up questions/comments.

serializer_class = EventSerializer
permission_classes = [IsAuthenticated]

def create(self, request, *args, **kwargs):
Collaborator

What happens if one item passed by the phone is invalid? Does the whole request fail, or are the rest created? If the first, I think that could cause issues, because one bad event will make all subsequent requests from the phone fail, as it will be included in each one. If the second, I think we need to let the phone know which succeeded and which failed, so it knows which records to retry and which it can delete.

Member Author

This is a great point; as in your comment above, maintaining some sort of ID would address this. I will add that.

Collaborator

It looks like this change is still pending?

Member Author

Instead of having an ID, I have gone with a simpler approach of just sending back the rows that fail.
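
A rough sketch of that approach inside the ViewSet: each item is validated independently so one bad event cannot block the rest, valid rows are bulk-created, and the failures are echoed back. The response shape is an assumption; the snippets quoted later in this thread show the PR's real handling.

from rest_framework import status
from rest_framework.response import Response


def create(self, request, *args, **kwargs):
    valid_items, failed_items = [], []
    # Validate each item on its own so a single bad event cannot
    # poison the whole batch.
    for item in request.data:
        serializer = self.get_serializer(data=item)
        if serializer.is_valid():
            valid_items.append(serializer.validated_data)
        else:
            failed_items.append({"item": item, "errors": serializer.errors})
    Event.objects.bulk_create(Event(**item) for item in valid_items)
    body = {"failed_items": failed_items} if failed_items else {}
    return Response(body, status=status.HTTP_201_CREATED)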

class EventSerializer(serializers.ModelSerializer):
    class Meta:
        model = Event
        fields = ["date_created", "event_type", "user", "opportunity"]
Collaborator

I think this will need some kind of event id from the phone, so that we can communicate success info for individual items in the list as described below

Member Author

Agreed. I was thinking of using the timestamp as such an ID, but a UUID would be nice.


Type = types

date_created = models.DateTimeField(auto_now_add=True, db_index=True)
event_type = models.CharField(max_length=40, choices=types.EVENT_TYPE_CHOICES)
Collaborator

You mentioned you were going to rethink this in response to my last comment, but it appears to still be here? How has your thinking evolved? Why did you decide to require a migration each time we add a new event?

Member Author

Yeah, I am going to make it a callable to avoid migrations getting created.

Member Author

So, it looks like this won't work since we are on Django 4 (callable choices only work in Django 5). We can do it using validators, but the trade-off is that we need some extra code to toggle between the slug and verbose_name (and back) in a few places, which is ugly and unnecessary. I could live with the extra migrations until Django 5 instead of the unnecessary code. Do you feel strongly against migrations?
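
For reference, a sketch of the validator route being weighed here, assuming the types.EVENT_TYPE_CHOICES list from the quoted snippet; on Django 5 the choices argument could simply be the callable itself.

from django.core.exceptions import ValidationError
from django.db import models

from . import types  # module holding EVENT_TYPE_CHOICES, per the snippet above


def validate_event_type(value):
    # Membership is checked at validation time, so growing
    # EVENT_TYPE_CHOICES never changes the field definition and
    # therefore never generates a migration.
    if value not in dict(types.EVENT_TYPE_CHOICES):
        raise ValidationError(f"Unknown event type: {value}")


class Event(models.Model):
    # On Django 5+, choices=types.get_event_type_choices (a callable)
    # would achieve the same without a validator.
    event_type = models.CharField(max_length=40, validators=[validate_event_type])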

Collaborator

Do you feel strongly against migrations?

I am not sure I totally understand why we need restricted choices here at all. That seems pretty nonstandard for an analytics system (if you think of GA or kissmetrics, they allow arbitrary events), and it makes adding new events, which we expect to do incrementally, much trickier. Mobile will need to coordinate releases around web running these migrations, and simple communication errors could cause confusing problems. I like the idea on web of predefining our events, and only using ones we have listed in something like a constants file, but I am not sure I am sold on enforcing them at the db level.

As with other comments, I am open to being convinced.

# this allows referring to event types in this style: Event.Type.INVITE_SENT
Type = types

date_created = models.DateTimeField(auto_now_add=True, db_index=True)
Collaborator

If these events are processed asynchronously, I don't think we can use auto_now_add especially since there is data from the phone that could be hours or days delayed.

Member Author

Hmm, that's a good point. Though, this field is getting overridden when the events are created. I agree it's better to remove the auto_now_add and let the callers worry about setting it.
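
A sketch of the resulting calling convention, with user and opportunity as placeholders:

from django.utils import timezone

# Web-side callers stamp the event themselves now that auto_now_add is gone.
Event.objects.create(
    date_created=timezone.now(),
    event_type=Event.Type.INVITE_SENT,
    user=user,
    opportunity=opportunity,
)
# The mobile API instead keeps the phone-supplied date_created, which may
# be hours or days in the past for delayed uploads.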

INFERRED_EVENT_SPECS = [RecordsFlagged()]


def get_events(user=None, from_date=None, to_date=None):
Collaborator

how is this used?

Member Author

This will be used in the Events timeline report.
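
A sketch of how get_events could feed that report, assuming the spec interface sketched earlier: stored rows and inferred events are merged into one chronological list.

def get_events(user=None, from_date=None, to_date=None):
    # Stored events straight from the Event table.
    queryset = Event.objects.all()
    if user:
        queryset = queryset.filter(user=user)
    if from_date:
        queryset = queryset.filter(date_created__gte=from_date)
    if to_date:
        queryset = queryset.filter(date_created__lte=to_date)
    events = list(queryset)
    # Plus events reconstructed on the fly from their inferred specs.
    for spec in INFERRED_EVENT_SPECS:
        events.extend(spec.get_events(user=user, from_date=from_date, to_date=to_date))
    # One chronological stream for the timeline report.
    return sorted(events, key=lambda event: event.date_created)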




@sravfeyn changed the title from "WIP: add event tracking" to "add event tracking" on Jul 31, 2024
@calellowitz (Collaborator) left a comment

Left a few comments/questions. You can definitely convince me that your architecture is better than what I had in mind, but I am not sure I see the benefits for a few of the pieces yet

@@ -26,6 +27,12 @@ class Event(models.Model):
    event_type = models.CharField(max_length=40, choices=get_event_type_choices())
    user = models.ForeignKey(User, on_delete=models.CASCADE, null=True)
    opportunity = models.ForeignKey(Opportunity, on_delete=models.PROTECT, null=True)
    organization = models.ForeignKey(
Collaborator

Why is this necessary? Opportunity encodes this info as well.

Member Author

I was considering cases where Events are org-specific without an opportunity, but maybe we don't have any right now. Let me remove it.






@calellowitz (Collaborator) left a comment

None of the comments I left are blocking, in case this is holding up development on the user timeline view or other work, but I think the mobile API and model fields could still be improved in the ways we previously discussed.

Thanks for how responsive you have been on this

if failed_items:
    partial_error_response = {"error": "Some items could not be saved", "failed_items": failed_items}
    headers = self.get_success_headers(serializer.data)
    return Response(partial_error_response, status=status.HTTP_206_PARTIAL_CONTENT, headers=headers)
Collaborator

It looks like 206 is really intended for "range" requests. Since that isn't what this is, I think it's fine to use a more standard response like 200, especially since that is still distinguishable from a full success.

Member Author

Cool, I will update it.

Collaborator

Do you still plan to update this?

Member Author

Yeah, I have updated it to 201.


try:
    event_objects = [Event(**item) for item in serializer.validated_data]
    Event.objects.bulk_create(event_objects)
Collaborator

We talked a few times about having these events include IDs that you could send back to the phone to indicate which ones succeeded. I don't see any code that could handle that here. Specifically, it looks like the code will error if they send IDs with these events (or it will try to set that ID as the PK, which is even more dangerous).

I do see that you instead send down failures (though still not with uniquely identifiable IDs, so it will not necessarily be straightforward for the phone to know which events they match). Successes are generally preferable because there are fewer ways for that to go wrong: it doesn't require the phone to track which events it sent, and there is no possibility of mismatch; it just deletes the ones you say succeeded.

Member Author

Yeah, I updated the code (and responded to that comment above: https://github.com/dimagi/commcare-connect/pull/349/files#r1701902592). I took the simpler approach of sending down just the failed events. I don't understand how including IDs makes it any better for mobile to track which events failed.

try:
    event_objects = [Event(**item) for item in serializer.validated_data]
    Event.objects.bulk_create(event_objects)
Collaborator

It's generally safer to be explicit about incoming fields, in case the phone adds unexpected ones, and to allow it to intentionally send fields you are not including (like the event ID we have discussed).
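
A sketch of the explicit version, using the field list from the serializer quoted earlier:

# Pull out only the fields the model expects, so an extra key sent by the
# phone (such as a client-side event id) is never passed to Event(**...).
event_objects = [
    Event(
        date_created=item["date_created"],
        event_type=item["event_type"],
        user=item["user"],
        opportunity=item["opportunity"],
    )
    for item in serializer.validated_data
]
Event.objects.bulk_create(event_objects)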


Collaborator

This might have been my ignorance about the serializer. What happens if there are unexpected fields sent up? Are they ignored, or do they raise an error?

Member Author

Yeah, they get ignored.

date_created = models.DateTimeField(db_index=True)
event_type = models.CharField(max_length=40, db_index=True)
user = models.ForeignKey(User, on_delete=models.CASCADE, null=True)
opportunity = models.ForeignKey(Opportunity, on_delete=models.PROTECT, null=True)
Collaborator

I will add again that I think a metadata field will be very beneficial. Even with the existing set of events we have, data like which record was approved, or how much the payment was could be very useful, and I know that mobile was also hoping to include additional metadata.
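
A sketch of what that could look like: the quoted fields plus a schemaless JSONField. The field name and defaults are assumptions.

class Event(models.Model):
    date_created = models.DateTimeField(db_index=True)
    event_type = models.CharField(max_length=40, db_index=True)
    user = models.ForeignKey(User, on_delete=models.CASCADE, null=True)
    opportunity = models.ForeignKey(Opportunity, on_delete=models.PROTECT, null=True)
    # Free-form per-event details, e.g. which record was approved or a
    # payment amount, without a migration for every new key.
    metadata = models.JSONField(default=dict, blank=True)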

@sravfeyn (Member Author)

@calellowitz Is this good to merge?

@sravfeyn (Member Author)

@calellowitz bumping this for review.

@sravfeyn (Member Author)

@calellowitz Bumping for review
