Is your feature request related to a problem? Please describe.
When using flows to preprocess a file before transcoding (e.g. removing unwanted subtitle tracks or unwanted language tracks) with classic plugins in sequence, a new working cache file is created per step. This means that a flow with, say, 8 classic plugins operating on a 28Gi original file can end up with 8*28=224Gi of transcode cache for a single original file, if every step of the flow processes it.
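To put a rough number on it, here is a minimal sketch of the worst case (the function is illustrative, not Tdarr code), assuming each step's working file stays about the size of the original, which holds for remux-style steps like stripping tracks:

```ts
// Back-of-the-envelope worst case: every classic-plugin step writes a
// full-size working copy and no intermediate file is ever deleted.
function worstCasePeakCacheGiB(originalSizeGiB: number, stepCount: number): number {
  return originalSizeGiB * stepCount;
}

console.log(worstCasePeakCacheGiB(28, 8)); // 224 GiB, as in the example above
```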
As an example, I have a flow that works like this:
and the transcode cache for this flow looks like this, with a total directory size of 42Gi for a file of 8Gi:
This makes it very difficult to run more than one concurrent transcode, since the original file size is multiplied by a large factor, quickly filling up the transcode cache.
Describe the solution you'd like
I think Tdarr should automatically delete the previous working file once a flow step has processed it and produced a new output file.
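A minimal sketch of the idea, assuming a hypothetical hook that runs once a step has succeeded; the names below are illustrative and not Tdarr's actual plugin API:

```ts
import { promises as fs } from 'fs';

// Hypothetical post-step hook: once step N has produced its output file,
// the working file from step N-1 is no longer needed and can be removed.
// The original source file must never be touched.
async function cleanupPreviousWorkingFile(
  previousWorkingFile: string,
  originalFile: string,
): Promise<void> {
  if (previousWorkingFile !== originalFile) {
    await fs.unlink(previousWorkingFile);
  }
}
```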
Describe alternatives you've considered
I tried using the "Delete file" plugin, but then the flow does not work as intended.
Additional context
This was tested on the latest release, v2.27.02.
davidfdezalcoba changed the title from "Limit to one working cache file" to "Limit to one working cache file when using classic plugins in flow" on Oct 22, 2024.
I can confirm the same behavior on my Tdarr 2.27.02. It's especially problematic when processing a huge BD, around 80GB in size. The classic plugins in my flow were spawning so many cache files that they overflowed my Unraid cache pool. I believe the constant copying also made the flow much slower than it should be.
The reason for this is that there may be flows which do all sorts of file processing and, for example, need to go back to a previous cache file created earlier in the flow. But yes, I can add something for this situation.
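Since some flows legitimately need earlier cache files, this would presumably have to be opt-in. A sketch with a hypothetical per-flow setting (not an existing Tdarr option):

```ts
import { promises as fs } from 'fs';

// Hypothetical flow-level setting; default off, so existing flows that
// revisit earlier cache files keep working unchanged.
interface FlowOptions {
  keepOnlyLatestCacheFile?: boolean;
}

async function afterStepSucceeded(
  options: FlowOptions,
  previousWorkingFile: string,
  originalFile: string,
): Promise<void> {
  if (options.keepOnlyLatestCacheFile && previousWorkingFile !== originalFile) {
    await fs.unlink(previousWorkingFile);
  }
}
```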