mreale8/CTCNet (forked from JusperLee/CTCNet)

An Audio-Visual Speech Separation Model Inspired by Cortico-Thalamo-Cortical Circuits

 
 

CTCNet

This paper has not yet been published. Once published, the code will be made publicly available.

Datasets

This method uses the LRS2, LRS3, and Vox2 (VoxCeleb2) datasets to build a multimodal speech separation dataset. The corresponding folders in this repository contain the files needed to build each dataset, and the included code constructs the multimodal datasets from them.
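The repository's dataset-creation code is not shown here, but two-speaker separation sets built from corpora like LRS2 are typically created by mixing pairs of utterances at a random signal-to-noise ratio. A minimal sketch, assuming NumPy, float waveforms in [-1, 1], and a [-5, 5] dB SNR range; the function name and parameters are illustrative assumptions, not the repository's actual code:

```python
import numpy as np

def mix_at_snr(target: np.ndarray, interferer: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix two mono waveforms so the target/interferer power ratio equals snr_db."""
    n = min(len(target), len(interferer))       # truncate to the shorter utterance
    target, interferer = target[:n], interferer[:n]
    p_t = np.mean(target ** 2)                  # target power
    p_i = np.mean(interferer ** 2)              # interferer power
    # gain such that 10 * log10(p_t / (gain**2 * p_i)) == snr_db
    gain = np.sqrt(p_t / (p_i * 10 ** (snr_db / 10)))
    mixture = target + gain * interferer
    # rescale only if the sum would clip when written as PCM audio
    peak = np.max(np.abs(mixture))
    if peak > 1.0:
        mixture = mixture / peak
    return mixture

# Demo on synthetic 1-second "utterances" at 16 kHz (real recipes load wav files).
rng = np.random.default_rng(0)
s1 = rng.standard_normal(16000).astype(np.float32) * 0.1
s2 = rng.standard_normal(16000).astype(np.float32) * 0.1
snr = rng.uniform(-5, 5)                        # random SNR in [-5, 5] dB
mix = mix_at_snr(s1, s2, snr)
```

In a full pipeline the mixture would be saved alongside the clean sources and the corresponding face-track video, since the model consumes both audio and visual streams.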
