Refactor modifier #104
base: python-cli-refactor
Conversation
…-dml/d3m-experimenter into refactor-modifier
pull down most recent changes from python-cli-refactor
…-dml/d3m-experimenter into refactor-modifier
…-dml/d3m-experimenter into refactor-modifier
…perimenter into refactor-modifier
if (empty_failed_queue is True):
    empty_failed(queue_name=queue_name)
else:
    queue = get_queue(queue_name)
    queue.empty()
    print(_EMPTIED_MESSAGE.format(queue_name))
Based on your experience, was it helpful to empty just one queue rather than both at the same time?
For me, it was helpful to just empty one at a time. This might have just been a developmental thing, and it might not be useful when things are up and running. But I do not think it would be too harmful to keep it around.
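For reference, emptying only the failed jobs of a single queue can be done through rq's FailedJobRegistry. The sketch below is just an illustration of that approach; empty_failed itself is not shown in this diff, so the signature and behavior here are assumptions:

import rq
from redis import Redis

def empty_failed_sketch(queue_name='default', connection=None):
    # Clear only the failed registry of one queue; the pending queue is untouched.
    connection = connection or Redis()
    registry = rq.registry.FailedJobRegistry(name=queue_name, connection=connection)
    for job_id in registry.get_job_ids():
        # delete_job=True deletes the job data as well as the registry entry
        registry.remove(job_id, delete_job=True)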
def get_failed_job(queue_name: str = _DEFAULT_QUEUE, job_num: int = 0):
    # pass name and connection
    reg = rq.registry.FailedJobRegistry(name=queue_name, connection=get_connection())
    job_ids = reg.get_job_ids()
    if (len(job_ids) <= 0):
        return "None", reg
    job_id = job_ids[0]
    return job_id, reg
def save_failed_job(queue_name: str = _DEFAULT_QUEUE, job_num: int = 0):
    if (queue_name is None):
        queue_name = _DEFAULT_QUEUE
    job_id, failed_queue = get_failed_job()
    job = rq.job.Job.fetch(job_id, connection=get_connection())
    with open(os.path.join('/data', "failed_job_{}.txt".format(job_num)), 'w') as job_file:
        job_file.write(job.exc_info)
    # remove the job
    failed_queue.remove(job_id, delete_job=True)
    print(_SAVE_FAILED_MESSAGE.format(os.path.join('/data',
        "failed_job_{}.txt".format(job_num))))
These two functions look like they have not been run. Did you use past versions of them? How were they helpful?
I use the first function to return the failed queue so that I can empty it, and it also returns a single job. That single job is used in the second function, which saves the failure output to a file in the '/data' directory. This is for remote development and debugging: we can look at the reason a task failed without needing some kind of dashboard, since it is saved to a file. I use that one with the queue --save-failed command. The first function is used in quite a few places in the queue.py file; returning the failed queue is what I use it for most often.
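To make the intended workflow concrete, here is a hypothetical debugging session using the two functions from this diff (the queue name and import location are assumptions; the helpers live in queue.py and default to _DEFAULT_QUEUE):

# Hypothetical usage; assumes these helpers are importable from the project's
# queue module and that a /data directory exists in the worker container.
job_id, failed_queue = get_failed_job(queue_name='default')
if job_id == "None":
    print("no failed jobs to inspect")
else:
    # Writes the traceback of the first failed job to /data/failed_job_0.txt
    # and removes that job from the failed registry.
    save_failed_job(queue_name='default', job_num=0)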
Refactor of the modifier functionality. The queueing/enqueueing seems to be running, though I have not done any hard testing with rq-dashboard, because I have not figured out how to run it on the lab machines.
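As a stopgap until rq-dashboard is available on the lab machines, the queues can be spot-checked directly through rq. A minimal sketch, assuming the Redis host/port and queue names are filled in for the project:

from redis import Redis
from rq import Queue
from rq.registry import FailedJobRegistry

# Assumed connection details; substitute the lab machines' Redis host and port.
connection = Redis(host="localhost", port=6379)

for name in ("default",):  # replace with the project's queue names
    queue = Queue(name, connection=connection)
    failed = FailedJobRegistry(queue=queue)
    print("{}: {} pending, {} failed".format(name, queue.count, failed.count))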