How to batch inference? #71
Hello Jerod @weinman. Does this platform support batch inference? How can I submit a list of images for prediction all at once? Thank you.

Comments
Thanks for the question. I want to make sure I understand what you're trying to do. (While also confessing I will need to page many details about the code back into my brain.) Do you just want (1) an easy way to map from a list of image paths to the predicted strings (i.e., piped to standard output), or do you actually want (2) the tensor of images packed together as input (i.e., you'll need the sequence lengths, too), running a single GPU-parallel inference on them and producing the prediction sequences (i.e., as a sparse tensor)? Both should be possible, but the second may require deriving some new client code for the Estimator that blends elements of the existing scripts.
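For concreteness, here is a minimal sketch of what the input side of option (2) might look like. Everything here is an assumption for illustration: `batched_input_fn` is a hypothetical helper (not part of this repo), the 32-pixel image height, grayscale PNG input, and the `image`/`width` feature keys are my guesses at what the model function expects, and in practice the preprocessing would need to match the training pipeline exactly:

```python
import tensorflow as tf

def batched_input_fn(image_paths, batch_size=32):
    """Hypothetical input_fn: decode a list of image files into padded batches.

    Widths vary across images, so each batch is zero-padded to the widest
    image in it; the true widths are kept so sequence lengths can be derived.
    """
    def _load(path):
        image = tf.image.decode_png(tf.read_file(path), channels=1)
        image = tf.cast(image, tf.float32) / 255.0  # assumed normalization
        width = tf.shape(image)[1]
        return {'image': image, 'width': width}

    dataset = tf.data.Dataset.from_tensor_slices(tf.constant(image_paths))
    dataset = dataset.map(_load)
    dataset = dataset.padded_batch(
        batch_size,
        # assumes a fixed 32-pixel height; width is padded to the batch max
        padded_shapes={'image': [32, None, 1], 'width': []})
    return dataset
```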
Hello Jerod @weinman. Definitely (2): the tensor of images packed together as input, running a single GPU-parallel inference on them and producing the prediction sequences (i.e., as a sparse tensor). Thank you for your help in understanding how it could be achieved.
Well, it should be possible without too much trouble, I hope. I note that both model_fn.predict_fn (used for …) … When I try a simple/dummy example taking the batched dataset from train._get_input:

```python
import tensorflow as tf

# both from cnn_lstm_ctc_ocr/src
import model_fn
import train

classifier = tf.estimator.Estimator(
    model_fn=model_fn.predict_fn(None, None),  # no lexicon or prior weight
    model_dir='/tmp/model')

predictions = classifier.predict(input_fn=train._get_input)
results = next(predictions)
```

it still seems to give me a single (unbatched) example. I don't have more time right now to test this out further (determining what I'm probably doing wrong), but if you can get a batched tensor into …
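One possible explanation, for what it's worth (this is general Estimator behavior, not anything specific to this repo): Estimator.predict unbatches its outputs by default, yielding one example at a time even when the input_fn produces batches. On TF 1.7 or later, its `yield_single_examples` argument should hand back whole batches instead:

```python
# Assumes TF >= 1.7, where Estimator.predict accepts yield_single_examples.
predictions = classifier.predict(input_fn=train._get_input,
                                 yield_single_examples=False)
batch = next(predictions)  # a full batch of predictions, not a single example
```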
Thank you so much Jerod @weinman. I will take over from here. Will check in a PR as soon as I get it working.
As a quick follow-up before I completely lose track of this thread: the confusing issue that stymied me was that I thought the underlying tensors should be producing batched outputs, even in predict mode. I thought this because the following test seems to indicate so:

```python
import tensorflow as tf
from tensorflow.contrib import learn

# both from cnn_lstm_ctc_ocr/src
import model_fn
import train

ds = train._get_input()
[features, labels] = tf.data.make_one_shot_iterator(ds).get_next()

mode = learn.ModeKeys.EVAL

logits, sequence_length = model_fn._get_image_info(features, mode)
predictions, log_probs = model_fn._get_output(
    logits, sequence_length, None, None)  # no lexicon or prior weight

with tf.Session() as sess:
    tf.initialize_all_variables().run()
    [logit, pred] = sess.run([logits, predictions])
    print(logit.shape)
    print(pred[0].dense_shape)
```

Using the simple tfrecord file included in the repo, the printed shapes indicate to me that the full batch (size 32) is being produced. I don't know if that helps, but I hope you figure out what you're looking for. I think it would be useful for others as well.
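Since the batched predictions come back as a sparse tensor, one more sketch that may help downstream: unpacking a SparseTensorValue (like `pred[0]` above) into one label sequence per batch row. The `[batch, time]` shape convention here is my assumption, based on what `tf.nn.ctc_beam_search_decoder` returns:

```python
def sparse_to_sequences(sparse_value):
    """Group a SparseTensorValue's values into per-example label sequences.

    sparse_value.indices is an array of [row, col] pairs and
    sparse_value.dense_shape[0] is the batch size.
    """
    sequences = [[] for _ in range(int(sparse_value.dense_shape[0]))]
    for (row, _col), label in zip(sparse_value.indices, sparse_value.values):
        sequences[int(row)].append(int(label))
    return sequences

# e.g., label_sequences = sparse_to_sequences(pred[0])
```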