From 9fb9fac2640922adac06330a822a519a07612af1 Mon Sep 17 00:00:00 2001
From: Vivian Cheng <60167936+vccheng2001@users.noreply.github.com>
Date: Thu, 26 May 2022 10:42:14 -0700
Subject: [PATCH] Fix README link

The previous link to the Replicate web demo in README.md had a typo; fixed.
---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 3e89e7c..33a4e4d 100755
--- a/README.md
+++ b/README.md
@@ -9,8 +9,8 @@ This is the PyTorch implementation of [GANs N' Roses: Stable, Controllable, Di
 >**Abstract:**
 >We show how to learn a map that takes a content code, derived from a face image, and a randomly chosen style code to an anime image. We derive an adversarial loss from our simple and effective definitions of style and content. This adversarial loss guarantees the map is diverse -- a very wide range of anime can be produced from a single content code. Under plausible assumptions, the map is not just diverse, but also correctly represents the probability of an anime, conditioned on an input face. In contrast, current multimodal generation procedures cannot capture the complex styles that appear in anime. Extensive quantitative experiments support the idea the map is correct. Extensive qualitative results show that the method can generate a much more diverse range of styles than SOTA comparisons. Finally, we show that our formalization of content and style allows us to perform video to video translation without ever training on videos.
-[Demo and Docker image on Replicate](https://replicate.com/vccheng2001/gans-n-roses)
-
+[Demo and Docker image on Replicate](https://replicate.com/mchong6/gans-n-roses)
+
 [Gradio Web Demo](https://gradio.app/hub/AK391/GANsNRoses)