Bias detection in different language models and de-biasing of pre-trained language models (PLMs)

aboots/bias-detection

bias-detection

This project focuses on the detection and mitigation of various biases, such as gender and ethnic biases, in diverse language models, including those for multiple languages and multilingual models. A significant aspect of the project is the development of de-biasing techniques specifically tailored for Persian language models.
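The repository does not expose a public API here, but the kind of embedding-space bias detection described above is commonly illustrated with a WEAT-style association score: a target word's mean cosine similarity to one attribute set (e.g. male terms) minus its mean similarity to another (e.g. female terms). The sketch below is a minimal, self-contained illustration with hypothetical 2-d toy vectors standing in for real PLM embeddings; it is not code from this project.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(word_vec, attr_a, attr_b):
    """WEAT-style score s(w, A, B): mean similarity to attribute set A
    minus mean similarity to attribute set B. Positive means the word
    leans toward A, negative toward B."""
    mean_a = sum(cosine(word_vec, a) for a in attr_a) / len(attr_a)
    mean_b = sum(cosine(word_vec, b) for b in attr_b) / len(attr_b)
    return mean_a - mean_b

# Toy 2-d vectors (hypothetical values, chosen only to make the
# direction of the bias visible); real use would take embeddings
# extracted from the language model under test.
he = [1.0, 0.1]
she = [0.1, 1.0]
career = [0.9, 0.2]  # constructed to lean toward the "he" direction
family = [0.2, 0.9]  # constructed to lean toward the "she" direction

bias_career = association(career, [he], [she])  # positive in this toy setup
bias_family = association(family, [he], [she])  # negative in this toy setup
```

With real models, the same score computed over sets of target and attribute words (rather than single toy vectors) is one standard way to quantify gender or ethnic bias before and after applying a de-biasing technique.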
