Attempt to extend generated audio to fit captions gaps: extend voice breaks after commas #1

Open · wants to merge 2 commits into base: main
5 changes: 4 additions & 1 deletion .gitignore
@@ -1,2 +1,5 @@
# ignore generated output files
output/
output/
.idea/
.idea/misc.xml
.DS_Store
3 changes: 0 additions & 3 deletions .idea/.gitignore

This file was deleted.

6 changes: 0 additions & 6 deletions .idea/inspectionProfiles/profiles_settings.xml

This file was deleted.

10 changes: 0 additions & 10 deletions .idea/karpik-poc-py.iml

This file was deleted.

4 changes: 0 additions & 4 deletions .idea/misc.xml

This file was deleted.

8 changes: 0 additions & 8 deletions .idea/modules.xml

This file was deleted.

6 changes: 0 additions & 6 deletions .idea/vcs.xml

This file was deleted.

39 changes: 37 additions & 2 deletions generate-audio.py
@@ -33,7 +33,7 @@ def synthesize(text, config):

response = polly_client.synthesize_speech(
Engine='neural',  # standard|neural - neural does not support max-duration
VoiceId= config.voice,
VoiceId=config.voice,
LanguageCode='en-US',
OutputFormat='mp3',
TextType='ssml', # or text
@@ -52,6 +52,12 @@ def caption_start(caption):
return seconds


def caption_end(caption):
nums = [float(n) for n in caption.end.split(':')]
seconds = nums[0] * 3600 + nums[1] * 60 + nums[2]
return seconds


def load_captions(config):
if config.captions_format == 'vtt':
return webvtt.read(f'input/{config.captions_file_name}')
@@ -61,6 +67,33 @@ def load_captions(config):
raise Exception('Unsupported subtitles format')


# TODO figure out better way of defining break length
def define_break(diff_length, num_of_pauses):
Owner

This is the only fragment I have trouble understanding.

If we have a 2-second difference (diff_length=2.0) and 3 commas (num_of_pauses=3), the default pause length comes out to 0.66(6) seconds.

Then there is a mapping:
If the default pause length is too long (above 2 s), clamp it to 1 s
If the default pause length is between 1 and 2 s, clamp it to 0.8 s
If the default pause length is below 1 s, clamp it to 0.5 s

Why is this mapping needed? Why can't we simply return length_of_pause?
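A minimal sketch of the simpler variant this question points at (hypothetical, not part of the PR): return the proportional pause directly, converted to milliseconds and only capped so the SSML break stays in a sensible range.

def define_break_proportional(diff_length, num_of_pauses, max_ms=2000):
    # hypothetical alternative: spread the whole gap evenly over the pauses
    length_of_pause = diff_length / num_of_pauses  # seconds per pause
    return min(int(length_of_pause * 1000), max_ms)  # milliseconds, capped at max_ms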

length_of_pause = diff_length / num_of_pauses
if diff_length / num_of_pauses > 2:
return 1000
elif 1 < length_of_pause < 2:
return 800
else:
return 500


def extend_sentence_audio(sentence_audio, caption):
audio_duration = sentence_audio.duration_seconds
caption_start_time = caption_start(caption)
caption_end_time = caption_end(caption)
diff = ((caption_end_time - caption_start_time) - audio_duration).__round__(3)
Owner

This could be optimized slightly: if diff < noticeable_difference (e.g. 0.25), accept the current audio as-is.
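A sketch of that guard as it might look at the top of extend_sentence_audio; noticeable_difference and the 0.25 s threshold are assumptions taken from this comment, not code in the PR.

noticeable_difference = 0.25  # seconds; below this the mismatch is assumed inaudible
if diff < noticeable_difference:
    return sentence_audio  # close enough: keep the audio as generated, skip re-synthesis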

result = ''
split_caption = caption.text.split(',')
Collaborator Author

Here we split each caption on commas.

Owner

Later we could think about taking other punctuation marks into account and giving them different weights, e.g. a comma is a medium pause, a dash a short one, a period a long one.
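One possible shape of that idea, sketched as an assumption rather than anything in this PR: split on several punctuation marks and scale the pause by a per-mark weight.

import re

# assumed relative weights: comma = medium pause, dash = short, period = long
PAUSE_WEIGHTS = {',': 1.0, '-': 0.5, '.': 1.5}

def split_with_weights(text):
    # keep the delimiters so each fragment knows which mark followed it
    parts = re.split(r'([,.\-])', text)
    fragments = parts[0::2]
    weights = [PAUSE_WEIGHTS[mark] for mark in parts[1::2]]
    return fragments, weights

# split_with_weights('wait, then stop.') -> (['wait', ' then stop', ''], [1.0, 1.5])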

if len(split_caption) == 1:
return sentence_audio
for idx, cpt in enumerate(split_caption):
result = result + cpt
if idx != len(split_caption) - 1:
result = result + '<break time="{}ms"/>'.format(define_break(diff, len(split_caption) - 1))
Collaborator Author

If this is not the last part of the caption, we append a silence tag with the length computed by define_break.
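For illustration (the caption text and the 500 ms value are made up), a caption with two commas gets rebuilt as:

# 'so we set it up<break time="500ms"/> run it<break time="500ms"/> and check the result'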

return synthesize(result, config)


if __name__ == '__main__':
config = InlineClass({
'captions_file_name': 'udemy_sample_01.vtt',
@@ -80,6 +113,8 @@ def load_captions(config):
print(f'Processing {caption}')
sentence_audio = synthesize(caption.text, config)

sentence_audio = extend_sentence_audio(sentence_audio, caption)
Collaborator Author

We pass in the already generated sentence (we need its duration) so that we can choose an appropriately sized pause length after each comma.
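A made-up numerical illustration of why the generated audio's duration matters here:

# sentence_audio.duration_seconds               -> 3.4  (hypothetical Polly output)
# caption_end(caption) - caption_start(caption) -> 5.0  (time the caption stays on screen)
# diff                                          -> 1.6  seconds to distribute across commas via <break/> tags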


start = caption_start(caption)
if audio.duration_seconds < start:
break_length = (start - audio.duration_seconds) * 1000
@@ -92,4 +127,4 @@ def load_captions(config):
new_audio = mpe.AudioFileClip(f'output/{config.audio_file_name}')
# new_audio = mpe.CompositeAudioClip([input_clip.audio, new_audio])
final_clip = input_clip.set_audio(new_audio)
final_clip.write_videofile(f'output/{config.movie_file_name}')
final_clip.write_videofile(f'output/output_{config.movie_file_name}')
Owner

The file already goes to the output directory, so there is no point prefixing it.