
How can we run demo without Adobe training set? #25

Open
JerryKurata opened this issue Jan 29, 2019 · 13 comments

Comments

@JerryKurata

Hi,

Apparently the original image training set is not available to anyone not associated with a university. Is there a way to run the demo without it? You have graciously provided the trained model, so why is the training data needed for processing new images?

@rainsun1

rainsun1 commented Feb 8, 2019

The training data is not needed and you only have to provide an input RGB image, a trimap (320x320) and a trained model. You can modify the corresponding codes in "demo.py".
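As a rough sketch of the preprocessing a modified demo.py would need (NumPy only; the scaling to [0, 1] is an assumption here, so check how demo.py itself normalizes its inputs before the `model.predict` call):

```python
import numpy as np

def make_network_input(rgb, trimap):
    """Stack an RGB image (H, W, 3) and a trimap (H, W) into the
    (1, H, W, 4) batch that the matting network expects."""
    rgb = rgb.astype(np.float32) / 255.0
    trimap = trimap.astype(np.float32) / 255.0
    x = np.concatenate([rgb, trimap[..., None]], axis=-1)
    return x[None, ...]  # add the batch dimension

# Toy 320x320 inputs standing in for a real image/trimap pair
rgb = np.zeros((320, 320, 3), dtype=np.uint8)
trimap = np.full((320, 320), 128, dtype=np.uint8)  # 128 = unknown region
batch = make_network_input(rgb, trimap)
print(batch.shape)  # (1, 320, 320, 4)
```

The resulting batch is what you would pass to the loaded model in place of the Adobe test loop.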

@usalexsantos

Does the image need to be 320x320? Can I run the code with the original image size?

@HWNHJJ

HWNHJJ commented Feb 23, 2019

You could just modify demo.py.

@HWNHJJ

HWNHJJ commented Feb 23, 2019

I am recreating the experiment and have some questions I hope to get advice on. Here is my contact information:
my address [email protected] , [email protected] ,or QQ 1820641671

@rainsun1

Perhaps you can modify the input shape from (320, 320, 4) to (None, None, 4) at test time?

@ahsanbarkati

@rainsun1 I faced a similar issue: I tried to modify the input shape from (320, 320, 4) to (None, None, 4) in models.py, but I get the following error:
ValueError: Tried to convert 'shape' to a tensor and failed. Error: None values not supported.

@rainsun1

@ahsanbarkati Where does the error come from? Is it in the shape computation from a layer?

@ahsanbarkati

You can look into the complete error log here: https://pastebin.com/2Pw1mVF2
The error is in this line: origReshaped = Reshape(shape)(orig_5)

@yxt132

yxt132 commented Apr 26, 2019

> The training data is not needed and you only have to provide an input RGB image, a trimap (320x320) and a trained model. You can modify the corresponding codes in "demo.py".

I have the image and trimap and have also downloaded the pre-trained model, but I have no clue how to modify demo.py to make it work for my own test. Also, as others have said, it would be nice to be able to change the image size. Can you provide another demo script for those of us who don't have the Adobe dataset? A lot of us don't have access to it anyway. Thanks!

@shartoo

shartoo commented Apr 26, 2019

You have to change the input shape of the network and retrain the model, and the height and width of the input shape should be equal.

@peachthiefmedia

If you need a larger size, you are probably best off processing the image in patches and then reconstructing the result. There are pre-existing Python modules for this (https://github.com/adamrehn/slidingwindow, for example), but it's pretty easy to write the loop yourself.
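That patch loop can be sketched in pure NumPy as below. The `predict_fn` here is a stand-in for the real `model.predict` on a 320x320 crop, and the non-overlapping tiling with reflect padding is one simple choice (overlapping windows with blending, as the slidingwindow module supports, usually give fewer seam artifacts):

```python
import numpy as np

def predict_in_patches(image, predict_fn, tile=320):
    """Run predict_fn on non-overlapping tile x tile patches of an
    (H, W, C) image and stitch the single-channel outputs together.
    The image is reflect-padded up to a multiple of `tile` first."""
    h, w = image.shape[:2]
    ph = (tile - h % tile) % tile
    pw = (tile - w % tile) % tile
    padded = np.pad(image, ((0, ph), (0, pw), (0, 0)), mode="reflect")
    out = np.zeros(padded.shape[:2], dtype=np.float32)
    for y in range(0, padded.shape[0], tile):
        for x in range(0, padded.shape[1], tile):
            out[y:y + tile, x:x + tile] = predict_fn(padded[y:y + tile, x:x + tile])
    return out[:h, :w]  # crop the padding back off

# Stand-in for model.predict: mean over the 4 input channels
alpha = predict_in_patches(np.ones((500, 700, 4), np.float32),
                           lambda p: p.mean(axis=-1))
print(alpha.shape)  # (500, 700)
```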

@FantasyJXF

> @rainsun1 I faced a similar issue and I tried to modify input shape from (320,320,4) to (None,None,4) in models.py. But I get the following error:
> ValueError: Tried to convert 'shape' to a tensor and failed. Error: None values not supported.

If the input shape contains None, maybe you should use tf.shape(x)[index] to compute the dimensions, not x.get_shape().as_list().
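A minimal eager-mode illustration of the difference, assuming TensorFlow is installed (the tensor dimensions here are arbitrary): `get_shape().as_list()` is the static shape, whose entries are None for an input declared as (None, None, 4), which is exactly what `Reshape` chokes on; `tf.shape()` is the dynamic shape, a tensor resolved at run time, so it can safely feed `tf.reshape`:

```python
import tensorflow as tf

y = tf.zeros([1, 240, 352, 4])

# Static shape: a Python list, would contain None for unknown dims
static_shape = y.get_shape().as_list()  # [1, 240, 352, 4]

# Dynamic shape: a tensor evaluated at run time, valid even when the
# static dimensions are unknown, so it works inside tf.reshape
dyn = tf.shape(y)
flat = tf.reshape(y, [dyn[0], dyn[1] * dyn[2], dyn[3]])
print(flat.shape)  # (1, 84480, 4)
```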

@rainsun1

You can also construct the model with a large shape, such as:

```python
image_size = (800, 800)
input_shape = image_size + (4,)
model = build_encoder_decoder(shapeInput=input_shape)
```

After model prediction, you can crop the result back to the original size of the image.
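A sketch of that pad-then-crop flow in NumPy, with the actual `model.predict` on the oversized `build_encoder_decoder` model left as a comment and replaced by a stand-in (zero padding is an assumption; the padded border may still influence the prediction near the image edge):

```python
import numpy as np

def pad_to(image, size):
    """Zero-pad an (H, W, C) image up to (size, size, C)."""
    h, w = image.shape[:2]
    assert h <= size and w <= size, "image larger than the model input"
    return np.pad(image, ((0, size - h), (0, size - w), (0, 0)))

img = np.ones((600, 450, 4), np.float32)   # RGB + trimap channels
padded = pad_to(img, 800)                  # (800, 800, 4) network input
# pred = model.predict(padded[None])[0]    # model built with shapeInput=(800, 800, 4)
pred = padded.mean(axis=-1)                # stand-in for the network output
alpha = pred[:600, :450]                   # crop back to the original size
print(alpha.shape)  # (600, 450)
```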
