
Describe workflow for writing new subtitles? #15

Open
larsmagne opened this issue Nov 29, 2019 · 10 comments

Comments

@larsmagne

The README seems geared towards editing existing subtitles. It would be nice if there were a section describing the typical workflow for writing brand-new subtitles: I've played around with the mode for fifteen minutes, and I don't think I've quite understood how it's meant to be used efficiently.

@rndusr (Collaborator) commented Nov 29, 2019 via email

@larsmagne (Author)

I've uploaded a movie to YouTube, had YouTube auto-segment the audio, and then downloaded the resulting .srt file. :-/ It has no text, of course, but it does mark the sections where there's talking, so I'm making some progress using that.

But it can be hard to actually hear what people are saying when listening to looped snippets, so I've added some commands to snap out of the loops and listen more continuously, which helps.

The mode seems quite intuitive once you've got a scaffolding going. But, yes, the mpv integration needs just a bit more... stuff. Perhaps a way to widen the loops, for instance.
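
A minimal sketch of what such "snap out of the loop" commands could look like, assuming the loop is enforced by subed--ensure-subtitle-loop on subed-mpv-playback-position-hook as quoted further down in this thread; the my/ names are placeholders rather than subed commands, and the looping machinery may differ between subed versions:

(defun my/subed-break-loop ()
  "Stop enforcing the loop over the current subtitle so playback continues."
  (interactive)
  ;; subed's hooks are typically buffer-local, hence the non-nil LOCAL argument.
  (remove-hook 'subed-mpv-playback-position-hook
               #'subed--ensure-subtitle-loop t))

(defun my/subed-resume-loop ()
  "Re-enable looping over the current subtitle."
  (interactive)
  (add-hook 'subed-mpv-playback-position-hook
            #'subed--ensure-subtitle-loop nil t))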

@rndusr (Collaborator) commented Nov 29, 2019 via email

@larsmagne (Author)

Adding some time after the loop just makes the mode continue to the next subtitle (if there's an overlap), due to

subed-mpv-playback-position-hook #'subed--ensure-subtitle-loop

The mode is rather difficult to debug due to all the indirection in the code, and all the functions that seem to appear out of nowhere -- it makes the normal Emacs debugging interfaces rather useless, unfortunately.
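
One generic way to see through that indirection is to trace whatever is currently attached to the playback-position hook with the stock trace.el helpers; this is plain Emacs debugging, not a subed feature, and it should be evaluated in the subtitle buffer since the hook is usually buffer-local there:

(require 'trace)
;; Trace every function currently on the hook; calls and return values
;; show up in the *trace-output* buffer.  Undo with M-x untrace-all.
(dolist (fn subed-mpv-playback-position-hook)
  (when (functionp fn)
    (trace-function-background fn)))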

@rndusr (Collaborator) commented Nov 29, 2019 via email

@alienbogart commented Mar 9, 2020

I would also be interested in that answer.

@larsmagne (Author)

I ended up uploading the video to YouTube and using their tool to create the "scaffolding": they add the timings for each line, basically. Then I downloaded that file and used this Emacs mode to fill in the translations. But it's not ideal...

@alienbogart commented Mar 10, 2020

> I ended up uploading the video to YouTube and using their tool to create the "scaffolding": they add the timings for each line, basically. Then I downloaded that file and used this Emacs mode to fill in the translations. But it's not ideal...

I coincidentally subtitled my video yesterday using YouTube's built-in tool. It's certainly not ideal, but a lot better than I thought. If I could use Emacs to create subtitles from scratch, I could even make that a hobby; all the other programs are so-not-Emacs hahaha

Thanks ;)

@rndusr (Collaborator) commented Mar 10, 2020 via email

@mooseyboots

Another possibility would be to install and set up vosk (https://github.com/alphacep/vosk-api) and then use https://github.com/boi4/subextractor/blob/master/extract_srt.py.

(You have to download some ~1 GB models for the speech-to-text extraction.)

This means no upload to YouTube, which seems to have a 15-minute limit for free accounts.
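
For anyone wiring that pipeline into Emacs, a small glue command along these lines might help; it is only a sketch, the my/ name is made up, and the exact arguments extract_srt.py expects are not assumed here (check the script's usage and supply the full command line yourself):

(defun my/make-subtitle-scaffold (command srt-file)
  "Run COMMAND synchronously to produce SRT-FILE, then open it for editing.
COMMAND is the full speech-to-text invocation, e.g. a call to extract_srt.py."
  (interactive "sScaffolding command: \nFResulting .srt file: ")
  (shell-command command)
  (find-file srt-file))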
