Fixed some factory issues to address Percy's false positive flags #11771
base: main
Conversation
Force-pushed from d079754 to c794da8
@@ -18,17 +18,12 @@ class Meta:
            "footer_sentence",
        )

-    class Params:
These details seem a bit unnecessary to me. I'm deleting them to avoid complication.
These are helpful if creating these in a test. Instead of doing something like `HighlightFactory(publish_after=timedelta(datetime.now() + 1))` to indicate an unpublished highlight, we can just do `HighlightFactory(unpublished=True)` and it will create it for us with a `publish_after` in the future.
I don't think it's worth deleting them, they are helpful factory traits
Cool I'll add them back!
@@ -32,34 +32,15 @@ class Meta:
        exclude = ("headline_sentence",)

-    class Params:
-        unpublished = Trait(
Similarly, these details seem a bit unnecessary to me. I'm deleting them to avoid complication.
I would like to keep these as well
Will add them back!
     video = Trait(is_video=True)

     headline = LazyAttribute(lambda o: o.headline_sentence.rstrip("."))
     outlet = Faker("company")
-    date = Faker("date") if RANDOM_SEED and not TESTING else Faker("past_date", start_date="-30d")
+    date = Faker("past_date", start_date=datetime(2020, 1, 1), tzinfo=timezone.utc)
I'm unsure why we needed to have two ways to generate values for `date` here. Any insights?
I'm not sure... it seems to generate a random date within the past 30 days for testing, while all other generations use a random date with no restrictions.
-        if RANDOM_SEED and not TESTING
-        else Faker("past_datetime", start_date="-30d", tzinfo=timezone.utc)
-    )
+    publish_after = Faker("past_datetime", start_date=datetime(2020, 1, 1), tzinfo=timezone.utc)
Same question here. I'm curious why `RANDOM_SEED` and `TESTING` matter when it comes to generating a value for `publish_after`.
I'm not sure either. I think we're safe to simplify this if the Python tests pass.
-        Faker("date_time", tzinfo=timezone.utc)
-        if RANDOM_SEED and not TESTING
-        else Faker("past_datetime", start_date="-30d", tzinfo=timezone.utc)
+    first_published_at = Faker("past_datetime", start_date=datetime(2020, 1, 1), tzinfo=timezone.utc)
Same question as above.
See above
Force-pushed from c794da8 to faf23d0
…ate 2020, 1, 1 instead of 'today'
Force-pushed from 0c04f73 to 5cd4015
Hey @mmmavis, thanks for these changes. I'm a bit concerned that these won't be enough to solve the issue, though. The dates randomly changing on RCC/Research Hub don't seem to be caused by the factory date generation that you changed here, for instance.
I think it relates more to controlling the randomness. If the randomness were truly reproducible, the dates shouldn't be changing like that from one Percy test to another.
-    publish_after = Faker("past_datetime", start_date="-30d", tzinfo=timezone.utc)
+    publish_after = Faker("past_datetime", start_date=datetime(2020, 1, 1), tzinfo=timezone.utc)
I'm not sure how this would solve the problem though... 🤔 We would still get a random date, only now between Jan 1st 2020 and the current date instead of within the last 30 days. We might need a different approach.
Hi @jhonatan-lopes, thanks for the feedback! I'll add the … Regarding the date generation issue: from my observation in the past, the date difference always seemed to be "x days" from when the …
That sounds great, thanks @mmmavis! I tried some things locally but couldn't get to the bottom of the problem either. Your comment definitely makes sense, though, so I'll try some other approaches with that in mind (trying to "fudge today's date" and re-generating data) to see if I can reproduce the problem.
I used this past Percy test as a reference to identify the false-positive flags we have been getting recently that are caused by factory configuration.
This PR
Related to #10328