Sketch detection network with deformable convolution.
Given an input image, the model outputs its sketch (edge) version.
I train on my own custom hand-made sketch (edge) dataset instead of the general benchmark datasets.
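The core operation is deformable convolution: each kernel tap samples the input at its regular grid position plus a learned fractional (dy, dx) offset, via bilinear interpolation. The repo's actual implementation is not shown here; this is a minimal single-location numpy sketch of the sampling step (the `bilinear_sample` and `deform_conv_at` helpers are illustrative names, not from this repo).

```python
# Illustrative sketch of deformable-convolution sampling (not the repo's code):
# each 3x3 kernel tap reads the input at (grid position + learned offset),
# with bilinear interpolation so offsets can be fractional.
import numpy as np

def bilinear_sample(img, y, x):
    """Bilinearly sample img (H, W) at fractional coordinates (y, x)."""
    h, w = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = y0 + 1, x0 + 1
    wy, wx = y - y0, x - x0

    def px(r, c):
        # Zero padding outside the image.
        return img[r, c] if 0 <= r < h and 0 <= c < w else 0.0

    return ((1 - wy) * (1 - wx) * px(y0, x0) + (1 - wy) * wx * px(y0, x1)
            + wy * (1 - wx) * px(y1, x0) + wy * wx * px(y1, x1))

def deform_conv_at(img, weight, offsets, cy, cx):
    """Response of one 3x3 deformable-conv tap set at output location (cy, cx).

    weight:  (3, 3) kernel weights
    offsets: (3, 3, 2) learned (dy, dx) offset per kernel tap
    """
    out = 0.0
    for i in range(3):
        for j in range(3):
            dy, dx = offsets[i, j]
            out += weight[i, j] * bilinear_sample(
                img, cy + (i - 1) + dy, cx + (j - 1) + dx)
    return out
```

With all offsets at zero this reduces to a plain 3x3 convolution; nonzero offsets let the kernel deform toward curved strokes, which is the motivation for using it in a sketch detector.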
- NVIDIA A5000 24G
- Trained on 512 x 512 images
- Model params: 6.2M
[Anime] Dataset (includes nude pictures; not multiscale)
- train: 120 images, edges
- val: 10 images, edges
Model | ODS↑ | OIS↑ | LPIPS(edge)↑ | LPIPS(image)↑ |
---|---|---|---|---|
UAED(pretrained by BSDS) | 0.5417 | 0.5502 | 0.6500 | 0.5286 |
MuGE(pretrained by BSDS, α = 1.0) | 0.5502 | 0.5721 | 0.6830 | 0.5465 |
DSDN(Anime) | 0.6340 | 0.6389 | 0.7735 | 0.6323 |
Model | ODS↑ | OIS↑ | LPIPS(edge)↑ | LPIPS(image)↑ |
---|---|---|---|---|
UAED(pretrained by BSDS) | 0.8410 | 0.8470 | 0.6519 | 0.3352 |
MuGE(pretrained by BSDS, α = 1.0) | 0.8500 | 0.8560 | 0.6899 | 0.3403 |
DSDN(Anime) | 0.7354 | 0.7354 | 0.5301 | 0.4022 |
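For reference, ODS and OIS in the tables above are F-measures over a threshold sweep: ODS picks one best threshold shared across the whole dataset, OIS picks the best threshold per image. Below is a simplified pixel-wise sketch of that convention (assumed helper names; the standard BSDS protocol additionally matches boundaries within a distance tolerance, which this sketch omits).

```python
# Simplified pixel-wise ODS/OIS computation (no boundary-matching tolerance).
import numpy as np

def f_measure(pred, gt, thr):
    """F1 between a thresholded edge map and a binary ground truth."""
    b = pred >= thr
    tp = np.logical_and(b, gt).sum()
    if b.sum() == 0 or gt.sum() == 0:
        return 0.0
    precision = tp / b.sum()
    recall = tp / gt.sum()
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def ods_ois(preds, gts, thresholds=np.linspace(0.01, 0.99, 99)):
    """preds/gts: lists of (H, W) arrays; returns (ODS, OIS)."""
    # ODS: one threshold shared across the dataset.
    ods = max(np.mean([f_measure(p, g, t) for p, g in zip(preds, gts)])
              for t in thresholds)
    # OIS: best threshold chosen per image, then averaged.
    ois = np.mean([max(f_measure(p, g, t) for t in thresholds)
                   for p, g in zip(preds, gts)])
    return ods, ois
```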
- Samples are shown in the order UAED, MuGE, DSDN.
- Poor performance on humans and animals (noise-sensitive; detects too much fine detail).
- Better performance on structures than on humans or animals.
- Performs better than SOTA edge detection networks on anime pictures.