This model predicts Body Mass Index (BMI) from a single image of a human face and achieves state-of-the-art results. As of January 2024, it outperforms the previous state-of-the-art method on the VisualBMI dataset by 39.5%.
After training for 10 epochs, the model reaches a mean absolute error (MAE) of 3.45 on the test set.
After training for 7 epochs, the model reaches an MAE of 3.02 on the test set.
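For orientation, below is a minimal sketch of how a face-to-BMI regressor of this kind can be assembled and evaluated with MAE. The backbone (torchvision's `vit_b_16`), the single-output head, and the preprocessing are assumptions for illustration, not necessarily this repository's exact configuration.

```python
# Hedged sketch: ViT backbone + scalar regression head, evaluated with MAE.
# The backbone choice and head size are assumptions, not this repo's exact setup.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

weights = ViT_B_16_Weights.DEFAULT
model = vit_b_16(weights=weights)
model.heads = nn.Linear(768, 1)     # swap the classifier for a 1-output BMI regression head

preprocess = weights.transforms()   # resize/normalize pipeline the backbone expects
criterion = nn.L1Loss()             # MAE, the metric quoted above

@torch.no_grad()
def test_mae(pairs):
    """pairs: iterable of (PIL.Image, float BMI). Returns the test-set MAE."""
    model.eval()
    total, n = 0.0, 0
    for image, bmi in pairs:
        x = preprocess(image).unsqueeze(0)                     # (1, 3, 224, 224)
        pred = model(x).squeeze()                              # scalar BMI prediction
        total += criterion(pred, torch.tensor(float(bmi))).item()
        n += 1
    return total / n
```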
- Clone this repository by running:
```bash
git clone git@github.com:liujie-zheng/face-to-bmi-vit.git
cd face-to-bmi-vit
```
- Install conda by following the official installation instructions.
- Depending on your operating system, install dependencies and activate the environment by running:

On Linux:
```bash
conda env create -f environment_linux.yml
conda activate face2bmi
```
or on macOS:
```bash
conda env create -f environment_mac.yml
conda activate face2bmi
```
- (Optional) Replace ./data/test_pic.jpg with your own image. Note: for best results, the face should occupy a substantial part of the image.
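If you are unsure whether the face is prominent enough, one option is to pre-crop it before running the demo. The sketch below uses OpenCV's bundled Haar cascade; this helper is not part of the repository and is only one possible approach.

```python
# Hypothetical pre-crop helper: crops the largest detected face so it fills
# most of the image before the demo reads it. Not part of this repo.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("./data/test_pic.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) > 0:
    # keep the largest face with a small margin and overwrite the input in place
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    m = int(0.2 * max(w, h))
    crop = img[max(0, y - m):y + h + m, max(0, x - m):x + w + m]
    cv2.imwrite("./data/test_pic.jpg", crop)
else:
    print("No face detected; the model may perform poorly on this image.")
```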
- From the root directory, run:
```bash
cd scripts
conda run -n face2bmi --no-capture-output python demo.py
```
If you encounter a `PermissionError: [Errno 13] Permission denied` error, run instead:
```bash
sudo conda run -n face2bmi --no-capture-output python demo.py
```
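For context, a demo script of this kind typically rebuilds the model, loads trained weights, preprocesses the input image, and prints the prediction. The sketch below illustrates that flow; the checkpoint name, normalization constants, and model shape are assumptions, not the actual contents of demo.py.

```python
# Illustrative inference flow only; demo.py's real file names and preprocessing may differ.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms
from torchvision.models import vit_b_16

# Rebuild the same assumed ViT-regressor shape as in the sketch further above.
model = vit_b_16(weights=None)
model.heads = nn.Linear(768, 1)
state = torch.load("model.pt", map_location="cpu")   # hypothetical checkpoint name
model.load_state_dict(state)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                    # assumed ViT input size
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),  # assumed normalization
])

image = Image.open("../data/test_pic.jpg").convert("RGB")    # path relative to scripts/
with torch.no_grad():
    bmi = model(preprocess(image).unsqueeze(0)).item()
print(f"Predicted BMI: {bmi:.2f}")
```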
From the root directory, train on the original (unaugmented) dataset by running:
```bash
cd scripts
conda run -n face2bmi --no-capture-output python run.py
```
or train on the augmented dataset by running:
```bash
cd scripts
conda run -n face2bmi --no-capture-output python run.py --augmented=True
```
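The `--augmented` flag presumably switches the training pipeline to augmented data. As a rough illustration of how such a switch is commonly wired up (the specific augmentations and flag handling here are assumptions, not run.py's actual implementation):

```python
# Sketch of an --augmented switch for the training transform; the actual run.py
# may augment differently (e.g. read a pre-augmented dataset from disk).
import argparse
from torchvision import transforms

parser = argparse.ArgumentParser()
# Matches the --augmented=True usage above; note that with type=bool any
# non-empty string (including "False") parses as True.
parser.add_argument("--augmented", type=bool, default=False)
args = parser.parse_args()

base = [
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
]
augment = [
    transforms.RandomHorizontalFlip(),                       # faces are roughly symmetric
    transforms.ColorJitter(brightness=0.2, contrast=0.2),    # mild photometric jitter
]

train_transform = transforms.Compose((augment if args.augmented else []) + base)
print("Using augmented pipeline:" , bool(args.augmented))
```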