From b06d97088713dd36ab55e29b5f2449695a9f6ff9 Mon Sep 17 00:00:00 2001 From: Morgan Ney Date: Thu, 1 Jun 2023 10:07:12 -0500 Subject: [PATCH] chore: publish under new major version. (#44) * chore: publish under new major version. * docs: update umd example. --- README.md | 312 +---------------- docs/examples.md | 2 +- package-lock.json | 4 +- packages/story/README.md | 3 + packages/story/package.json | 2 +- packages/tts-react/README.md | 316 ++++++++++++++++++ packages/tts-react/package.json | 2 +- .../tts-react/tts-react.png | Bin 8 files changed, 330 insertions(+), 311 deletions(-) create mode 100644 packages/story/README.md create mode 100644 packages/tts-react/README.md rename tts-react.png => packages/tts-react/tts-react.png (100%) diff --git a/README.md b/README.md index 88a4fb3..0930258 100644 --- a/README.md +++ b/README.md @@ -1,316 +1,16 @@ # [`tts-react`](https://www.npmjs.com/package/tts-react) -![CI](https://github.com/morganney/tts-react/actions/workflows/ci.yml/badge.svg) -[![codecov](https://codecov.io/gh/morganney/tts-react/branch/main/graph/badge.svg?token=ZDP1VBC8E1)](https://codecov.io/gh/morganney/tts-react) -[![NPM version](https://img.shields.io/npm/v/tts-react.svg)](https://www.npmjs.com/package/tts-react) - -TextToSpeech React component - -`tts-react` provides a hook (`useTts`) and component (`TextToSpeech`) to convert text to speech. In most cases you want the hook so you can use custom styling on the audio controls. - -By default `tts-react` uses the [`SpeechSynthesis`](https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesis) and [`SpeechSynthesisUtterance`](https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesisUtterance) API's. You can fallback to the [`HTMLAudioElement`](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/audio) API by providing a `fetchAudioData` prop to the hook or component. +Repository for `tts-react`, a React component and hook that uses the [`SpeechSynthesis`](https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesis) and [`SpeechSynthesisUtterance`](https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesisUtterance) API's to convert text to speech. You can fallback to the [`HTMLAudioElement`](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/audio) API by providing a `fetchAudioData` prop to the hook or component. ## Install `npm i react react-dom tts-react` -## Demo (Storybook) - -[morganney.github.io/tts-react](https://morganney.github.io/tts-react/) - -## Example - -#### Hook - -You can use the hook to create a `Speak` component that converts the text to speech on render: - -```tsx -import { useTts } from 'tts-react' -import type { TTSHookProps } from 'tts-react' - -type SpeakProps = Pick - -const Speak = ({ children }: SpeakProps) => ( - <>{useTts({ children, autoPlay: true }).ttsChildren} -) - -const App = () => { - return ( - -

-      <p>This text will be spoken on render.</p>
-    </Speak>
- ) -} -``` - -Or create a more advanced component with controls for adjusting the speaking: - -```tsx -import { useTts } from 'tts-react' -import type { TTSHookProps } from 'tts-react' - -interface CustomProps extends TTSHookProps { - highlight?: boolean -} - -const CustomTTSComponent = ({ children, highlight = false }: CustomProps) => { - const { ttsChildren, state, play, stop, pause } = useTts({ - children, - markTextAsSpoken: highlight - }) - - return ( -
- <> - - - - - {ttsChildren} -
- ) -} - -const App = () => { - return ( - -

-      <p>Some text to be spoken and highlighted.</p>
-    </CustomTTSComponent>
- ) -} -``` - -#### Component - -Use the `TextToSpeech` component to get up and running quickly: - -```tsx -import { TextToSpeech, Positions, Sizes } from 'tts-react' +## Table of Contents -const App = () => { - return ( - -

-      <p>Some text to be spoken.</p>
-    </TextToSpeech>
- ) -} -``` +* [tts-react](./packages/tts-react) +* [storybook](./packages/story) -## `useTts` - -The hook returns the internal state of the audio being spoken, getters/setters of audio attributes, callbacks that can be used to control playing/stopping/pausing/etc. of the audio, and modified `children` if using `markTextAsSpoken`. The parameters accepted are described in the [Props](#props) section. The response object is described by the `TTSHookResponse` type. - -```ts -const { - get, - set, - state, - spokenText, - ttsChildren, - play, - stop, - pause, - replay, - playOrPause, - playOrStop, - toggleMute -} = useTts({ - lang, - voice, - children, - autoPlay, - markTextAsSpoken, - markColor, - markBackgroundColor, - onStart, - onBoundary, - onPause, - onEnd, - onError, - onVolumeChange, - onPitchChange, - onRateChange, - fetchAudioData -}) - -interface TTSHookProps extends MarkStyles { - /** The spoken text is extracted from here. */ - children: ReactNode - /** The `SpeechSynthesisUtterance.lang` to use. */ - lang?: string - /** The `SpeechSynthesisUtterance.voice` to use. */ - voice?: SpeechSynthesisVoice - /** The initial rate of the speaking audio. */ - rate?: number - /** The initial volume of the speaking audio. */ - volume?: number - /** Whether the text should be spoken automatically, i.e. on render. */ - autoPlay?: boolean - /** Whether the spoken word should be wrapped in a `` element. */ - markTextAsSpoken?: boolean - /** Callback when the volume is changed. */ - onVolumeChange?: (newVolume: number) => void - /** Callback when the rate is changed. */ - onRateChange?: (newRate: number) => void - /** Callback when the pitch is changed. */ - onPitchChange?: (newPitch: number) => void - /** Callback when there is an error of any kind. */ - onError?: (msg: string) => void - /** Callback when speaking/audio starts playing. */ - onStart?: (evt: SpeechSynthesisEvent | Event) => void - /** Callback when the speaking/audio is paused. */ - onPause?: (evt: SpeechSynthesisEvent | Event) => void - /** Calback when the current utterance/audio has ended. */ - onEnd?: (evt: SpeechSynthesisEvent | Event) => void - /** Callback when a word boundary/mark has been reached. */ - onBoundary?: (evt: SpeechSynthesisEvent | Event) => void - /** Function to fetch audio and speech marks for the spoken text. */ - fetchAudioData?: (spokenText: string) => Promise -} -interface TTSHookResponse { - set: { - lang: (value: string) => void - rate: (value: number) => void - pitch: (value: number) => void - volume: (value: number) => void - preservesPitch: (value: boolean) => void - } - get: { - lang: () => string - rate: () => number - pitch: () => number - volume: () => number - preservesPitch: () => boolean - } - /** State of the current speaking/audio. */ - state: TTSHookState - /** The text extracted from the children elements and used to synthesize speech. */ - spokenText: string - play: () => void - stop: () => void - pause: () => void - replay: () => void - /** Toggles between muted/unmuted, i.e. volume is zero or non-zero. */ - toggleMute: (callback?: (wasMuted: boolean) => void) => void - /** Toggles between play/stop. */ - playOrStop: () => void - /** Toggles between play/pause. */ - playOrPause: () => void - /** The original children with a possible included if using `markTextAsSpoken`. 
*/ - ttsChildren: ReactNode -} -interface TTSHookState { - voices: SpeechSynthesisVoice[] - boundary: BoundaryUpdate - isPlaying: boolean - isPaused: boolean - isMuted: boolean - isError: boolean - isReady: boolean -} -interface TTSBoundaryUpdate { - word: string - startChar: number - endChar: number -} -``` - -## `fetchAudioData` - -Using `fetchAudioData` will bypass `SpeechSynthesis` and use the `HTMLAudioElement`. - -```ts -(spokenText: string) => Promise -``` - -When using `fetchAudioData` it must return `TTSAudioData` which has the following shape: - -```ts -interface PollySpeechMark { - end: number - start: number - time: number - type: 'word' - value: string -} -interface TTSAudioData { - audio: string - marks?: PollySpeechMark[] -} -``` -The `audio` property must be a URL that can be applied to [`HTMLAudioElement.src`](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/audio#attr-src), including a [data URL](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URLs). If using `markTextAsSpoken` then you must also return the `marks` that describe the word boundaries. `PollySpeechMarks` have the same shape as the [Speech Marks used by Amazon Polly](https://docs.aws.amazon.com/polly/latest/dg/speechmarks.html), with the restriction that they must be of `type: 'word'`. - - -## Props - -Most of these are supported by the `useTts` hook, but those marked with an asterisk are exclusive to the `TextToSpeech` component. - -`*` Only applies to `TextToSpeech` component. - -|Name|Required|Type|Default|Description| -|----|--------|----|-------|-----------| -|children|yes|`ReactNode`|none|Provides the text that will be spoken.| -|lang|no|`string`|The one used by [`SpeechSynthesisUtterance.lang`](https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesisUtterance/lang).|Sets the [`SpeechSynthesisUtterance.lang`](https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesisUtterance/lang). Overrides `voice` when set and `voice.lang` does not match `lang`.| -|voice|no|`SpeechSynthesisVoice`|None or the voice provided by `audio` from `TTSAudioData`.|The voice heard when the text is spoken. Calling `set.lang` may override this value.| -|autoPlay|no|`boolean`|`false`|Whether the audio of the text should automatically be spoken when ready.| -|markTextAsSpoken|no|`boolean`|`false`|Whether the word being spoken should be highlighted.| -|markColor|no|`string`|none|Color of the text that is currently being spoken. Only applies with `markTextAsSpoken`.| -|markBackgroundColor|no|`string`|none|Background color of the text that is currently being spoken. Only applies with `markTextAsSpoken`.| -|fetchAudioData|no|`(text: string) => Promise`|none|Function to return the optional `SpeechMarks[]` and `audio` URL for the text to be spoken. See [fetchAudioData](#fetchaudiodata) for more details.| -|`*`allowMuting|no|`boolean`|`true`|Whether an additional button will be shown on the component that allows muting the audio.| -|`*`onMuteToggled|no|`(wasMuted: boolean) => void`|none|Callback when the user clicks the mute button shown from `allowMuting` being enabled. 
Can be used to toggle global or local state like whether `autoPlay` should be enabled.| -|onStart|no|`(evt: SpeechSynthesisEvent \| Event) => void`|none|Callback when the speaking/audio has started (or resumed) playing.| -|onPause|no|`(evt: SpeechSynthesisEvent \| Event) => void`|none|Callback when the speaking/audio has been paused.| -|onEnd|no|`(evt: SpeechSynthesisEvent \| Event) => void`|none|Callback when the speaking/audio has stopped.| -|onBoundary|no|`(boundary: TTSBoundaryUpdate, evt: SpeechSynthesisEvent \| Event) => void`|none|Callback when a word boundary/mark has been reached.| -|onError|no|`(msg: string) => void`|none|Callback when there is an error of any kind playing the spoken text. The error message (if any) will be provided.| -|onVolumeChange|no|`(newVolume: number) => void`|none|Callback when the volume has changed.| -|onRateChange|no|`(newRate: number) => void`|none|Callback when the rate has changed.| -|onPitchChange|no|`(newPitch: number) => void`|none|Callback when the pitch has changed.| -|`*`align|no|`'horizontal' \| 'vertical'`|`'horizontal'`|How to align the controls within the `TextToSpeech` component.| -|`*`size|no|`'small' \| 'medium' \| 'large'`|`'medium'`|The relative size of the controls within the `TextToSpeech` component.| -|`*`position|no|`'topRight' \| 'topLeft' \| 'bottomRight' \| 'bottomLeft'`|`'topRight'`|The relative positioning of the controls within the `TextToSpeech` component.| -|`*`useStopOverPause|no|`boolean`|`false`|Whether the controls should display a stop button instead of a pause button. On Android devices, `SpeechSynthesis.pause()` behaves like `cancel()`, so you can use this prop in that context.| - - -## FAQ - -
-Why is text inside child components not being spoken? -

Due to the way `Children.map` works:

-
-

The traversal does not go deeper than React elements: they don't get rendered, and their children aren't traversed.

-
-

`tts-react` cannot extract the text from child components. Instead, include the text as a direct child of `TextToSpeech` (or `useTts`).

-
- -
-Why does markTextAsSpoken sometimes highlight the wrong word? -

The `SpeechSynthesisUtterance` boundary event may fire with skewed word boundaries for certain combinations of `spokenText` and the `lang` or `voice` props. If you check the value of `state.boundary.word` in these cases, you will find the event firing at unexpected boundaries, so there is no real fix other than finding a suitable voice for your given `spokenText`.

-
- -
-Why does markTextAsSpoken not work on Chrome for Android? -

This is a known issue that the Chromium team has apparently decided not to fix. You can use `fetchAudioData` to fall back to the `HTMLAudioElement`, or try a different browser.

-
- -
-Why can I not pause the audio when using SpeechSynthesis on Firefox and Chrome for Android? -

See the compat table on MDN for SpeechSynthesis.pause().

-

In Android, pause() ends the current utterance. pause() behaves the same as cancel().

-

You can use the `useTts` hook to build custom controls that expose a stop control instead of pause. If you are using the `TextToSpeech` component, enable the `useStopOverPause` prop for Android devices.

-
- -
-Why is text from dangerouslySetInnerHTML not spoken? -

`tts-react` does not speak text from `dangerouslySetInnerHTML`. Instead, convert your HTML string into React elements with an html-to-react parser. See this example.

-
+## Demo (Storybook) -
-What's up with Safari? -

Safari simply does not follow the spec completely (yet). As one example, Safari 15.6.1 on macOS Monterey 12.5.1 throws a `SpeechSynthesisEvent` during a `SpeechSynthesisUtterance` error, while the spec says errors against utterances "must use the SpeechSynthesisErrorEvent interface".

-
+[morganney.github.io/tts-react](https://morganney.github.io/tts-react/) diff --git a/docs/examples.md b/docs/examples.md index 8a5feb0..227d118 100644 --- a/docs/examples.md +++ b/docs/examples.md @@ -12,7 +12,7 @@ Using `tts-react` from a CDN: - +
diff --git a/package-lock.json b/package-lock.json index eb2ecf7..3633e2a 100644 --- a/package-lock.json +++ b/package-lock.json @@ -20668,11 +20668,11 @@ "react": "^18.2.0", "react-dom": "^18.2.0", "storybook": "^7.0.18", - "tts-react": "^2.0.0" + "tts-react": "^3.0.0" } }, "packages/tts-react": { - "version": "2.0.1", + "version": "3.0.0", "license": "MIT", "engines": { "node": ">=18.16.0", diff --git a/packages/story/README.md b/packages/story/README.md new file mode 100644 index 0000000..8b301f3 --- /dev/null +++ b/packages/story/README.md @@ -0,0 +1,3 @@ +## Storybook for tts-react + +You can see the story for `tts-react` at [morganney.github.io/tts-react](https://morganney.github.io/tts-react/). diff --git a/packages/story/package.json b/packages/story/package.json index 96a255a..8c27961 100644 --- a/packages/story/package.json +++ b/packages/story/package.json @@ -23,6 +23,6 @@ "react": "^18.2.0", "react-dom": "^18.2.0", "storybook": "^7.0.18", - "tts-react": "^2.0.0" + "tts-react": "^3.0.0" } } diff --git a/packages/tts-react/README.md b/packages/tts-react/README.md new file mode 100644 index 0000000..88a4fb3 --- /dev/null +++ b/packages/tts-react/README.md @@ -0,0 +1,316 @@ +# [`tts-react`](https://www.npmjs.com/package/tts-react) + +![CI](https://github.com/morganney/tts-react/actions/workflows/ci.yml/badge.svg) +[![codecov](https://codecov.io/gh/morganney/tts-react/branch/main/graph/badge.svg?token=ZDP1VBC8E1)](https://codecov.io/gh/morganney/tts-react) +[![NPM version](https://img.shields.io/npm/v/tts-react.svg)](https://www.npmjs.com/package/tts-react) + +TextToSpeech React component + +`tts-react` provides a hook (`useTts`) and component (`TextToSpeech`) to convert text to speech. In most cases you want the hook so you can use custom styling on the audio controls. + +By default `tts-react` uses the [`SpeechSynthesis`](https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesis) and [`SpeechSynthesisUtterance`](https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesisUtterance) API's. You can fallback to the [`HTMLAudioElement`](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/audio) API by providing a `fetchAudioData` prop to the hook or component. + +## Install + +`npm i react react-dom tts-react` + +## Demo (Storybook) + +[morganney.github.io/tts-react](https://morganney.github.io/tts-react/) + +## Example + +#### Hook + +You can use the hook to create a `Speak` component that converts the text to speech on render: + +```tsx +import { useTts } from 'tts-react' +import type { TTSHookProps } from 'tts-react' + +type SpeakProps = Pick + +const Speak = ({ children }: SpeakProps) => ( + <>{useTts({ children, autoPlay: true }).ttsChildren} +) + +const App = () => { + return ( + +

+      <p>This text will be spoken on render.</p>
+    </Speak>
+ ) +} +``` + +Or create a more advanced component with controls for adjusting the speaking: + +```tsx +import { useTts } from 'tts-react' +import type { TTSHookProps } from 'tts-react' + +interface CustomProps extends TTSHookProps { + highlight?: boolean +} + +const CustomTTSComponent = ({ children, highlight = false }: CustomProps) => { + const { ttsChildren, state, play, stop, pause } = useTts({ + children, + markTextAsSpoken: highlight + }) + + return ( +
+ <> + + + + + {ttsChildren} +
+ ) +} + +const App = () => { + return ( + +

+      <p>Some text to be spoken and highlighted.</p>
+    </CustomTTSComponent>
+ ) +} +``` + +#### Component + +Use the `TextToSpeech` component to get up and running quickly: + +```tsx +import { TextToSpeech, Positions, Sizes } from 'tts-react' + +const App = () => { + return ( + +

+      <p>Some text to be spoken.</p>
+    </TextToSpeech>
+ ) +} +``` + +## `useTts` + +The hook returns the internal state of the audio being spoken, getters/setters of audio attributes, callbacks that can be used to control playing/stopping/pausing/etc. of the audio, and modified `children` if using `markTextAsSpoken`. The parameters accepted are described in the [Props](#props) section. The response object is described by the `TTSHookResponse` type. + +```ts +const { + get, + set, + state, + spokenText, + ttsChildren, + play, + stop, + pause, + replay, + playOrPause, + playOrStop, + toggleMute +} = useTts({ + lang, + voice, + children, + autoPlay, + markTextAsSpoken, + markColor, + markBackgroundColor, + onStart, + onBoundary, + onPause, + onEnd, + onError, + onVolumeChange, + onPitchChange, + onRateChange, + fetchAudioData +}) + +interface TTSHookProps extends MarkStyles { + /** The spoken text is extracted from here. */ + children: ReactNode + /** The `SpeechSynthesisUtterance.lang` to use. */ + lang?: string + /** The `SpeechSynthesisUtterance.voice` to use. */ + voice?: SpeechSynthesisVoice + /** The initial rate of the speaking audio. */ + rate?: number + /** The initial volume of the speaking audio. */ + volume?: number + /** Whether the text should be spoken automatically, i.e. on render. */ + autoPlay?: boolean + /** Whether the spoken word should be wrapped in a `` element. */ + markTextAsSpoken?: boolean + /** Callback when the volume is changed. */ + onVolumeChange?: (newVolume: number) => void + /** Callback when the rate is changed. */ + onRateChange?: (newRate: number) => void + /** Callback when the pitch is changed. */ + onPitchChange?: (newPitch: number) => void + /** Callback when there is an error of any kind. */ + onError?: (msg: string) => void + /** Callback when speaking/audio starts playing. */ + onStart?: (evt: SpeechSynthesisEvent | Event) => void + /** Callback when the speaking/audio is paused. */ + onPause?: (evt: SpeechSynthesisEvent | Event) => void + /** Calback when the current utterance/audio has ended. */ + onEnd?: (evt: SpeechSynthesisEvent | Event) => void + /** Callback when a word boundary/mark has been reached. */ + onBoundary?: (evt: SpeechSynthesisEvent | Event) => void + /** Function to fetch audio and speech marks for the spoken text. */ + fetchAudioData?: (spokenText: string) => Promise +} +interface TTSHookResponse { + set: { + lang: (value: string) => void + rate: (value: number) => void + pitch: (value: number) => void + volume: (value: number) => void + preservesPitch: (value: boolean) => void + } + get: { + lang: () => string + rate: () => number + pitch: () => number + volume: () => number + preservesPitch: () => boolean + } + /** State of the current speaking/audio. */ + state: TTSHookState + /** The text extracted from the children elements and used to synthesize speech. */ + spokenText: string + play: () => void + stop: () => void + pause: () => void + replay: () => void + /** Toggles between muted/unmuted, i.e. volume is zero or non-zero. */ + toggleMute: (callback?: (wasMuted: boolean) => void) => void + /** Toggles between play/stop. */ + playOrStop: () => void + /** Toggles between play/pause. */ + playOrPause: () => void + /** The original children with a possible included if using `markTextAsSpoken`. 
*/ + ttsChildren: ReactNode +} +interface TTSHookState { + voices: SpeechSynthesisVoice[] + boundary: BoundaryUpdate + isPlaying: boolean + isPaused: boolean + isMuted: boolean + isError: boolean + isReady: boolean +} +interface TTSBoundaryUpdate { + word: string + startChar: number + endChar: number +} +``` + +## `fetchAudioData` + +Using `fetchAudioData` will bypass `SpeechSynthesis` and use the `HTMLAudioElement`. + +```ts +(spokenText: string) => Promise +``` + +When using `fetchAudioData` it must return `TTSAudioData` which has the following shape: + +```ts +interface PollySpeechMark { + end: number + start: number + time: number + type: 'word' + value: string +} +interface TTSAudioData { + audio: string + marks?: PollySpeechMark[] +} +``` +The `audio` property must be a URL that can be applied to [`HTMLAudioElement.src`](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/audio#attr-src), including a [data URL](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URLs). If using `markTextAsSpoken` then you must also return the `marks` that describe the word boundaries. `PollySpeechMarks` have the same shape as the [Speech Marks used by Amazon Polly](https://docs.aws.amazon.com/polly/latest/dg/speechmarks.html), with the restriction that they must be of `type: 'word'`. + + +## Props + +Most of these are supported by the `useTts` hook, but those marked with an asterisk are exclusive to the `TextToSpeech` component. + +`*` Only applies to `TextToSpeech` component. + +|Name|Required|Type|Default|Description| +|----|--------|----|-------|-----------| +|children|yes|`ReactNode`|none|Provides the text that will be spoken.| +|lang|no|`string`|The one used by [`SpeechSynthesisUtterance.lang`](https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesisUtterance/lang).|Sets the [`SpeechSynthesisUtterance.lang`](https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesisUtterance/lang). Overrides `voice` when set and `voice.lang` does not match `lang`.| +|voice|no|`SpeechSynthesisVoice`|None or the voice provided by `audio` from `TTSAudioData`.|The voice heard when the text is spoken. Calling `set.lang` may override this value.| +|autoPlay|no|`boolean`|`false`|Whether the audio of the text should automatically be spoken when ready.| +|markTextAsSpoken|no|`boolean`|`false`|Whether the word being spoken should be highlighted.| +|markColor|no|`string`|none|Color of the text that is currently being spoken. Only applies with `markTextAsSpoken`.| +|markBackgroundColor|no|`string`|none|Background color of the text that is currently being spoken. Only applies with `markTextAsSpoken`.| +|fetchAudioData|no|`(text: string) => Promise`|none|Function to return the optional `SpeechMarks[]` and `audio` URL for the text to be spoken. See [fetchAudioData](#fetchaudiodata) for more details.| +|`*`allowMuting|no|`boolean`|`true`|Whether an additional button will be shown on the component that allows muting the audio.| +|`*`onMuteToggled|no|`(wasMuted: boolean) => void`|none|Callback when the user clicks the mute button shown from `allowMuting` being enabled. 
Can be used to toggle global or local state like whether `autoPlay` should be enabled.| +|onStart|no|`(evt: SpeechSynthesisEvent \| Event) => void`|none|Callback when the speaking/audio has started (or resumed) playing.| +|onPause|no|`(evt: SpeechSynthesisEvent \| Event) => void`|none|Callback when the speaking/audio has been paused.| +|onEnd|no|`(evt: SpeechSynthesisEvent \| Event) => void`|none|Callback when the speaking/audio has stopped.| +|onBoundary|no|`(boundary: TTSBoundaryUpdate, evt: SpeechSynthesisEvent \| Event) => void`|none|Callback when a word boundary/mark has been reached.| +|onError|no|`(msg: string) => void`|none|Callback when there is an error of any kind playing the spoken text. The error message (if any) will be provided.| +|onVolumeChange|no|`(newVolume: number) => void`|none|Callback when the volume has changed.| +|onRateChange|no|`(newRate: number) => void`|none|Callback when the rate has changed.| +|onPitchChange|no|`(newPitch: number) => void`|none|Callback when the pitch has changed.| +|`*`align|no|`'horizontal' \| 'vertical'`|`'horizontal'`|How to align the controls within the `TextToSpeech` component.| +|`*`size|no|`'small' \| 'medium' \| 'large'`|`'medium'`|The relative size of the controls within the `TextToSpeech` component.| +|`*`position|no|`'topRight' \| 'topLeft' \| 'bottomRight' \| 'bottomLeft'`|`'topRight'`|The relative positioning of the controls within the `TextToSpeech` component.| +|`*`useStopOverPause|no|`boolean`|`false`|Whether the controls should display a stop button instead of a pause button. On Android devices, `SpeechSynthesis.pause()` behaves like `cancel()`, so you can use this prop in that context.| + + +## FAQ + +
+Why is text inside child components not being spoken? +

Due to the way `Children.map` works:

+
+

The traversal does not go deeper than React elements: they don't get rendered, and their children aren't traversed.

+
+

`tts-react` cannot extract the text from child components. Instead, include the text as a direct child of `TextToSpeech` (or `useTts`).
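As a minimal sketch of the difference (the `Notice` component here is hypothetical and exists only to illustrate the limitation):

```tsx
import { TextToSpeech } from 'tts-react'

// Hypothetical child component: tts-react never sees this text.
const Notice = () => <p>This text will NOT be spoken.</p>

const App = () => (
  <>
    {/* Not spoken: the text is hidden inside a child component. */}
    <TextToSpeech>
      <Notice />
    </TextToSpeech>
    {/* Spoken: the text is a direct child of TextToSpeech. */}
    <TextToSpeech>
      <p>This text will be spoken.</p>
    </TextToSpeech>
  </>
)
```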

+
+ +
+Why does markTextAsSpoken sometimes highlight the wrong word? +

The `SpeechSynthesisUtterance` boundary event may fire with skewed word boundaries for certain combinations of `spokenText` and the `lang` or `voice` props. If you check the value of `state.boundary.word` in these cases, you will find the event firing at unexpected boundaries, so there is no real fix other than finding a suitable voice for your given `spokenText`.
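A minimal sketch for inspecting those boundaries with the hook (the `DebugBoundaries` component is only illustrative):

```tsx
import { useTts } from 'tts-react'
import type { TTSHookProps } from 'tts-react'

type DebugProps = Pick<TTSHookProps, 'children'>

const DebugBoundaries = ({ children }: DebugProps) => {
  const { state, play, ttsChildren } = useTts({ children, markTextAsSpoken: true })

  // The most recent word boundary reported by the utterance. With a
  // problematic voice/text combination this is where the skew shows up.
  console.log(state.boundary.word)

  return (
    <div>
      <button onClick={play}>Play</button>
      {ttsChildren}
    </div>
  )
}
```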

+
+ +
+Why does markTextAsSpoken not work on Chrome for Android? +

This is a known issue that the Chromium team has apparently decided not to fix. You can use `fetchAudioData` to fall back to the `HTMLAudioElement`, or try a different browser.
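A rough sketch of that fallback, assuming a hypothetical `/api/polly` endpoint that returns `TTSAudioData` (an audio URL plus `word` speech marks) for the spoken text:

```tsx
import { TextToSpeech } from 'tts-react'

interface PollySpeechMark {
  end: number
  start: number
  time: number
  type: 'word'
  value: string
}
interface TTSAudioData {
  audio: string
  marks?: PollySpeechMark[]
}

// Hypothetical endpoint that synthesizes the text (e.g. with Amazon Polly)
// and responds with { audio, marks } in the TTSAudioData shape.
const fetchAudioData = async (spokenText: string): Promise<TTSAudioData> => {
  const res = await fetch(`/api/polly?text=${encodeURIComponent(spokenText)}`)

  return res.json()
}

const App = () => (
  <TextToSpeech markTextAsSpoken fetchAudioData={fetchAudioData}>
    <p>The speech marks drive the highlighting here, not the boundary event.</p>
  </TextToSpeech>
)
```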

+
+ +
+Why can I not pause the audio when using SpeechSynthesis on Firefox and Chrome for Android? +

See the compat table on MDN for SpeechSynthesis.pause().

+

In Android, pause() ends the current utterance. pause() behaves the same as cancel().

+

You can use the `useTts` hook to build custom controls that expose a stop control instead of pause. If you are using the `TextToSpeech` component, enable the `useStopOverPause` prop for Android devices.
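For example, a minimal sketch of custom controls that only expose play and stop (the `PlayStop` component is only illustrative):

```tsx
import { useTts } from 'tts-react'
import type { TTSHookProps } from 'tts-react'

type PlayStopProps = Pick<TTSHookProps, 'children'>

const PlayStop = ({ children }: PlayStopProps) => {
  // playOrStop toggles between playing and stopping, so pause is never exposed.
  const { state, playOrStop, ttsChildren } = useTts({ children })

  return (
    <div>
      <button onClick={playOrStop}>{state.isPlaying ? 'Stop' : 'Play'}</button>
      {ttsChildren}
    </div>
  )
}
```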

+
+ +
+Why is text from dangerouslySetInnerHTML not spoken? +

`tts-react` does not speak text from `dangerouslySetInnerHTML`. Instead, convert your HTML string into React elements with an html-to-react parser. See this example.
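A minimal sketch using the `html-react-parser` package (one possible html-to-react parser; the linked example may use a different approach):

```tsx
import parse from 'html-react-parser'
import { TextToSpeech } from 'tts-react'

const html = '<p>Some <strong>stored</strong> HTML markup.</p>'

// parse() converts the HTML string into React elements, so the text becomes
// part of the children that tts-react can extract and speak.
const App = () => <TextToSpeech>{parse(html)}</TextToSpeech>
```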

+
+ +
+What's up with Safari? +

Safari simply does not follow the spec completely (yet). As one example, Safari 15.6.1 on macOS Monterey 12.5.1 throws a `SpeechSynthesisEvent` during a `SpeechSynthesisUtterance` error, while the spec says errors against utterances "must use the SpeechSynthesisErrorEvent interface".
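If you need to respond to errors in your own code, the `onError` prop described above receives a plain message string; a minimal sketch:

```tsx
import { TextToSpeech } from 'tts-react'

const App = () => (
  // onError reports the error as a message string rather than a raw event.
  <TextToSpeech onError={(msg) => console.warn('tts-react error:', msg)}>
    <p>Errors surface here as a message string.</p>
  </TextToSpeech>
)
```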

+
diff --git a/packages/tts-react/package.json b/packages/tts-react/package.json index e052692..8b4d810 100644 --- a/packages/tts-react/package.json +++ b/packages/tts-react/package.json @@ -1,6 +1,6 @@ { "name": "tts-react", - "version": "2.0.1", + "version": "3.0.0", "description": "React hook and component for converting text to speech using the Web Speech API or Amazon Polly.", "type": "module", "main": "dist/index.js", diff --git a/tts-react.png b/packages/tts-react/tts-react.png similarity index 100% rename from tts-react.png rename to packages/tts-react/tts-react.png