Bias detection in language models and de-biasing of pre-trained language models (PLMs)
This project focuses on detecting and mitigating biases, such as gender and ethnic bias, across a range of language models, covering both models trained for individual languages and multilingual models. A central part of the project is the development of de-biasing techniques tailored specifically to Persian language models.
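As a rough illustration of the kind of detection step involved, the sketch below probes a masked language model with stereotyped templates and compares the probabilities it assigns to gendered fill-ins. The model checkpoint, templates, and target words here are assumptions chosen for illustration, not the project's actual benchmark or method.

```python
# Minimal sketch of a masked-token bias probe, assuming the Hugging Face
# `transformers` library and the public `bert-base-multilingual-cased` checkpoint.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-multilingual-cased")

# Illustrative stereotyped templates; a real probe would use a curated set,
# including Persian-language templates for Persian PLMs.
templates = [
    "The [MASK] works as a nurse.",
    "The [MASK] works as an engineer.",
]

for template in templates:
    # Restrict predictions to two gendered pronouns and compare their scores;
    # a consistently large gap across templates is one simple signal of gender bias.
    results = unmasker(template, targets=["he", "she"])
    scores = {r["token_str"]: r["score"] for r in results}
    print(template, scores)
```

A de-biasing technique would then aim to reduce such probability gaps (for example through counterfactual data augmentation or projection-based methods) while preserving the model's performance on downstream tasks.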