BrokenPipeError: [WinError 232] 管道正在被关闭。(The pipe is being closed) #16

Jzz8977 opened this issue Sep 21, 2024 · 1 comment

Comments

@Jzz8977
Copy link

Jzz8977 commented Sep 21, 2024

Hello Author,

When I was testing, this error occurred. How can I solve it?

<<< Lina: John, nice to meet you at this bar in Las Vegas.
Exception in thread Thread-7:
Traceback (most recent call last):
  File "D:\Dev\Miniconda\envs\ai\lib\threading.py", line 980, in _bootstrap_inner
    self.run()
  File "D:\Dev\Miniconda\envs\ai\lib\threading.py", line 917, in run
    self._target(*self._args, **self._kwargs)
  File "D:\Dev\Miniconda\envs\ai\lib\site-packages\RealtimeTTS\text_to_stream.py", line 201, in synthesize_worker
    self.engine.synthesize(sentence)
  File "D:\Dev\Miniconda\envs\ai\lib\site-packages\RealtimeTTS\engines\coqui_engine.py", line 402, in synthesize
    self.send_command('synthesize', data)
  File "D:\Dev\Miniconda\envs\ai\lib\site-packages\RealtimeTTS\engines\coqui_engine.py", line 315, in send_command
    self.parent_synthesize_pipe.send(message)
  File "D:\Dev\Miniconda\envs\ai\lib\multiprocessing\connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "D:\Dev\Miniconda\envs\ai\lib\multiprocessing\connection.py", line 280, in _send_bytes
    ov, err = _winapi.WriteFile(self._handle, buf, overlapped=True)
BrokenPipeError: [WinError 232] 管道正在被关闭。(The pipe is being closed)

Python version: 3.9.19

Best regards

KoljaB (Owner) commented Sep 21, 2024

Looks like the synthesize worker process terminated unexpectedly. The most probable reason, I guess, is that the Coqui TTS installation isn't working correctly. Can you please test whether a basic Coqui TTS example works?

import torch
from TTS.api import TTS

# Pick the GPU if one is available, otherwise run on the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the multilingual XTTS v2 model
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)

# Synthesize a short test sentence using voice cloning
wav = tts.tts(text="Hello world!", speaker_wav="my/cloning/audio.wav", language="en")

(speaker_wav must point to a valid cloning wave file like this one)
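If that runs without errors, a quick way to confirm the whole pipeline is to write the synthesized audio to a file and listen to it. A minimal sketch, assuming output.wav and my/cloning/audio.wav as placeholder paths; tts_to_file is the standard Coqui TTS helper for saving output:

import torch
from TTS.api import TTS

# Pick the GPU if one is available, otherwise run on the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the multilingual XTTS v2 model
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)

# Synthesize directly to a wav file so the result can be checked by ear
# (my/cloning/audio.wav is a placeholder for any valid reference clip)
tts.tts_to_file(
    text="Hello world!",
    speaker_wav="my/cloning/audio.wav",
    language="en",
    file_path="output.wav",
)

If this produces a valid output.wav but RealtimeTTS still hits the broken pipe, the Coqui TTS install itself is probably fine and the worker process is failing for another reason.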
