
Limit to one working cache file when using classic plugins in flow #1105

davidfdezalcoba opened this issue Oct 22, 2024 · 2 comments

@davidfdezalcoba

Is your feature request related to a problem? Please describe.
When using flows to preprocess a file before transcoding (e.g. removing unwanted subtitle tracks, removing unwanted audio language tracks, etc.) with classic plugins in sequence, a new working cache file is created per step. This means that a flow with, say, 8 classic plugins working on an original file of, say, 28Gi can end up with 8*28=224Gi of transcode cache for a single original file, if every step of the flow has processed it.

As an example, I have a flow that works like this:
[screenshot of the flow]
and the transcode cache for this flow looks like this:

.
├── 1729593261553
├── 1729593262216
├── 1729593262830
├── 1729593263436
├── 1729593264027
├── 1729593264635
├── 1729593266487
│   └── The Aristocats (1970) [imdbid-tt0065421] - [Bluray-1080p][EAC3 5.1][x264]-playHD.mkv
├── 1729593880588
│   └── The Aristocats (1970) [imdbid-tt0065421] - [Bluray-1080p][EAC3 5.1][x264]-playHD.mkv
├── 1729593920464
│   └── The Aristocats (1970) [imdbid-tt0065421] - [Bluray-1080p][EAC3 5.1][x264]-playHD.mkv
├── 1729594030455
├── 1729594031208
│   └── The Aristocats (1970) [imdbid-tt0065421] - [Bluray-1080p][EAC3 5.1][x264]-playHD.mkv
├── 1729594077278
│   └── The Aristocats (1970) [imdbid-tt0065421] - [Bluray-1080p][EAC3 5.1][x264]-playHD.mkv
└── 1729594152261
    └── The Aristocats (1970) [imdbid-tt0065421] - [Bluray-1080p][EAC3 5.1][x264]-playHD.mkv

with a total directory size of 42Gi for a file of 8Gi:

4.0K    ./1729593261553
8.0G    ./1729594031208
8.0G    ./1729593880588
8.0G    ./1729594077278
4.0K    ./1729593262216
8.0G    ./1729593266487
8.0G    ./1729593920464
4.0K    ./1729594030455
4.0K    ./1729593264027
4.0K    ./1729593264635
4.0K    ./1729593262830
4.0K    ./1729593263436
1.9G    ./1729594152261
42G     .

This makes it really difficult to run more than one concurrent transcode, since the original file size is effectively multiplied by the number of classic-plugin steps, quickly filling up the transcode cache.

Describe the solution you'd like
I think that Tdarr should automatically delete the previous working file once a step in the flow has processed it and produced a new output file.
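
To make the request concrete, here is a minimal sketch of the behaviour being asked for. This is not Tdarr's actual code: the names (runFlow, FlowStep, step.run) are hypothetical, and it only assumes that each classic-plugin step takes the current working file and either returns a new cache file or returns its input unchanged.

// Hypothetical sketch, not Tdarr's implementation.
import { promises as fs } from "fs";
import path from "path";

interface FlowStep {
  // Runs one classic plugin and returns the path of the file it produced
  // (a new cache file, or the input path if the plugin made no changes).
  run(inputFile: string, cacheDir: string): Promise<string>;
}

async function runFlow(originalFile: string, cacheDir: string, steps: FlowStep[]): Promise<string> {
  let workingFile = originalFile;

  for (const step of steps) {
    const outputFile = await step.run(workingFile, cacheDir);

    const producedNewFile = path.resolve(outputFile) !== path.resolve(workingFile);
    const previousIsCacheFile = path.resolve(workingFile).startsWith(path.resolve(cacheDir));

    // Only remove the previous file when it is a cache copy (never the original)
    // and the step actually wrote a new output, so at most one working cache
    // file exists per flow at any time.
    if (producedNewFile && previousIsCacheFile) {
      await fs.rm(workingFile, { force: true });
    }

    workingFile = outputFile;
  }

  return workingFile;
}

With this behaviour the worst-case cache usage per flow stays at roughly one copy of the file instead of one copy per step.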

Describe alternatives you've considered
I tried using the "Delete file" plugin, but then the flow does not work as intended.

Additional context
This was tested on the latest version, v2.27.02.

davidfdezalcoba changed the title from "Limit to one working cache file" to "Limit to one working cache file when using classic plugins in flow" on Oct 22, 2024
@kagoromo

I can confirm the same behavior on my Tdarr 2.27.02. It's especially problematic when you are trying to process a huge Blu-ray, e.g. 80GB in size. The classic plugins in my flow were spawning so many cache files that they overflowed my Unraid cache pool. I believe the constant copying was also making the flow much slower than it should be.

HaveAGitGat added the enhancement (New feature or request) label on Dec 23, 2024
@HaveAGitGat
Owner

The reason for this is that there may be flows which do all sorts of file processing and, for example, need to go back to the previous cache file created earlier in the flow. But yes, I can add something for this situation.
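
One way to reconcile both needs, sketched below purely for illustration (pruneCacheHistory and its keepLast parameter are hypothetical names, not an existing Tdarr option), is to keep only a bounded history of cache files per flow, e.g. the current working file plus the previous one so a step can still fall back, and delete anything older.

// Hypothetical sketch, not an existing Tdarr feature.
import { promises as fs } from "fs";

async function pruneCacheHistory(cacheFiles: string[], keepLast = 2): Promise<string[]> {
  // cacheFiles is ordered oldest -> newest; everything except the last
  // `keepLast` entries is removed from disk.
  const toDelete = cacheFiles.slice(0, Math.max(0, cacheFiles.length - keepLast));
  for (const file of toDelete) {
    await fs.rm(file, { force: true });
  }
  return cacheFiles.slice(-keepLast);
}

With keepLast = 2 the cache would hold at most two copies of the file per flow, instead of one per classic-plugin step, while still allowing a step to reuse the previous cache file.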
