Even though I think LLMs generally do not work like this, I still wonder whether we could guard against some otherwise rather dumb LLM simply learning our repo by heart and then achieving great results.
Given the discussions in #118, I wonder whether we could maintain a separate secret branch where we ask conceptually the same questions, just with slight modifications?
Maybe:
- changing the English in the prompt a bit
- changing the actual values in the input data and the corresponding assertions
- changing the order of the input and output arguments
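To make the idea concrete, here is a minimal sketch of what such a mutation could look like in practice. Everything here is hypothetical (the "add two numbers" task, the `make_variant` helper, and the prompt templates are made up for illustration): a seeded generator rewords the prompt, perturbs the concrete values, and swaps argument order, while the expected answer is derived from the same perturbed values.

```python
import random

def make_variant(seed: int):
    """Generate a semantically equivalent variant of a (hypothetical) task.

    The wording, the concrete values, and the argument order all depend on
    the seed, so a model that memorized the public version of the question
    still has to actually solve the secret variant.
    """
    rng = random.Random(seed)
    a = rng.randint(10, 99)
    b = rng.randint(10, 99)
    # Slightly different English, and swapped argument order in one template.
    templates = [
        f"Write a function add(x, y) that returns the sum of {a} and {b}.",
        f"Implement add(y, x) so that calling it yields the sum of {b} and {a}.",
    ]
    prompt = rng.choice(templates)
    expected = a + b  # the assertion value tracks the perturbed inputs
    return prompt, expected

# Same seed -> same variant, so the secret branch stays reproducible.
prompt, expected = make_variant(seed=42)
```

The seed makes the secret variants reproducible across runs, while different seeds give the "slightly modified" questions described above.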
It would be a bit of work...but maybe worth it?
What do you think?