Note: Functions taking `Tensor` arguments can also take anything accepted by `tf.convert_to_tensor`.
[TOC]
## Placeholders

TensorFlow provides a placeholder operation that must be fed with data on execution. For more info, see the section on Feeding data.
### `tf.placeholder(dtype, shape=None, name=None)` {#placeholder}

Inserts a placeholder for a tensor that will always be fed.
Important: This tensor will produce an error if evaluated. Its value must be fed using the `feed_dict` optional argument to `Session.run()`, `Tensor.eval()`, or `Operation.run()`.
For example:
```python
x = tf.placeholder(tf.float32, shape=(1024, 1024))
y = tf.matmul(x, x)

with tf.Session() as sess:
  print(sess.run(y))  # ERROR: will fail because x was not fed.

  rand_array = np.random.rand(1024, 1024)
  print(sess.run(y, feed_dict={x: rand_array}))  # Will succeed.
```
##### Args:

*  `dtype`: The type of elements in the tensor to be fed.
*  `shape`: The shape of the tensor to be fed (optional). If the shape is not specified, you can feed a tensor of any shape.
*  `name`: A name for the operation (optional).

##### Returns:

  A `Tensor` that may be used as a handle for feeding a value, but not evaluated directly.
### `tf.placeholder_with_default(input, shape, name=None)` {#placeholder_with_default}

A placeholder op that passes through `input` when its output is not fed.

##### Args:

*  `input`: A `Tensor`. The default value to produce when `output` is not fed.
*  `shape`: A `tf.TensorShape` or list of `int`s. The (possibly partial) shape of the tensor.
*  `name`: A name for the operation (optional).

##### Returns:

  A `Tensor`. Has the same type as `input`. A placeholder tensor that defaults to `input` if it is not fed.
For feeding `SparseTensor`s, which are a composite type, there is a convenience function:
### `tf.sparse_placeholder(dtype, shape=None, name=None)` {#sparse_placeholder}

Inserts a placeholder for a sparse tensor that will always be fed.
Important: This sparse tensor will produce an error if evaluated. Its value must be fed using the `feed_dict` optional argument to `Session.run()`, `Tensor.eval()`, or `Operation.run()`.
For example:
```python
x = tf.sparse_placeholder(tf.float32)
y = tf.sparse_reduce_sum(x)

with tf.Session() as sess:
  print(sess.run(y))  # ERROR: will fail because x was not fed.

  indices = np.array([[3, 2, 0], [4, 5, 1]], dtype=np.int64)
  values = np.array([1.0, 2.0], dtype=np.float32)
  shape = np.array([7, 9, 2], dtype=np.int64)
  print(sess.run(y, feed_dict={
      x: tf.SparseTensorValue(indices, values, shape)}))  # Will succeed.
  print(sess.run(y, feed_dict={
      x: (indices, values, shape)}))  # Will succeed.

  sp = tf.SparseTensor(indices=indices, values=values, dense_shape=shape)
  sp_value = sp.eval(session=sess)
  print(sess.run(y, feed_dict={x: sp_value}))  # Will succeed.
```
##### Args:

*  `dtype`: The type of `values` elements in the tensor to be fed.
*  `shape`: The shape of the tensor to be fed (optional). If the shape is not specified, you can feed a sparse tensor of any shape.
*  `name`: A name for prefixing the operations (optional).

##### Returns:

  A `SparseTensor` that may be used as a handle for feeding a value, but not evaluated directly.
## Readers

TensorFlow provides a set of Reader classes for reading data formats. For more information on inputs and readers, see Reading data.
Base class for different Reader types that produce a record every step.
Conceptually, Readers convert string 'work units' into records (key, value pairs). Typically the 'work units' are filenames and the records are extracted from the contents of those files. We want a single record produced per step, but a work unit can correspond to many records.
Therefore we introduce some decoupling using a queue. The queue contains the work units and the Reader dequeues from the queue when it is asked to produce a record (via Read()) but it has finished the last work unit.
Creates a new ReaderBase.
##### Args:

*  `reader_ref`: The operation that implements the reader.
*  `supports_serialize`: True if the reader implementation can serialize its state.
Returns the number of records this reader has produced.
This is the same as the number of Read executions that have succeeded.
##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  An int64 Tensor.

Returns the number of work units this reader has finished processing.

##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  An int64 Tensor.
Returns the next record (key, value pair) produced by a reader.
Will dequeue a work unit from queue if necessary (e.g. when the Reader needs to start reading from a new file since it has finished with the previous file).
##### Args:

*  `queue`: A Queue or a mutable string Tensor representing a handle to a Queue, with string work items.
*  `name`: A name for the operation (optional).

##### Returns:

  A tuple of Tensors (key, value).

*  `key`: A string scalar Tensor.
*  `value`: A string scalar Tensor.

Returns up to `num_records` (key, value) pairs produced by a reader.

Will dequeue a work unit from queue if necessary (e.g., when the Reader needs to start reading from a new file since it has finished with the previous file). It may return less than `num_records` even before the last batch.

##### Args:

*  `queue`: A Queue or a mutable string Tensor representing a handle to a Queue, with string work items.
*  `num_records`: Number of records to read.
*  `name`: A name for the operation (optional).

##### Returns:

  A tuple of Tensors (keys, values).

*  `keys`: A 1-D string Tensor.
*  `values`: A 1-D string Tensor.
Op that implements the reader.
Restore a reader to its initial clean state.
##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  The created Operation.

Restore a reader to a previously saved state.

Not all Readers support being restored, so this can produce an Unimplemented error.

##### Args:

*  `state`: A string Tensor. Result of a SerializeState of a Reader with matching type.
*  `name`: A name for the operation (optional).

##### Returns:

  The created Operation.

Produce a string tensor that encodes the state of a reader.

Not all Readers support being serialized, so this can produce an Unimplemented error.

##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  A string Tensor.
Whether the Reader implementation can serialize its state.
A Reader that outputs the lines of a file delimited by newlines.
Newlines are stripped from the output. See ReaderBase for supported methods.
Create a TextLineReader.
##### Args:

*  `skip_header_lines`: An optional int. Defaults to 0. Number of lines to skip from the beginning of every file.
*  `name`: A name for the operation (optional).
Returns the number of records this reader has produced.
This is the same as the number of Read executions that have succeeded.
##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  An int64 Tensor.

Returns the number of work units this reader has finished processing.

##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  An int64 Tensor.
Returns the next record (key, value pair) produced by a reader.
Will dequeue a work unit from queue if necessary (e.g. when the Reader needs to start reading from a new file since it has finished with the previous file).
##### Args:

*  `queue`: A Queue or a mutable string Tensor representing a handle to a Queue, with string work items.
*  `name`: A name for the operation (optional).

##### Returns:

  A tuple of Tensors (key, value).

*  `key`: A string scalar Tensor.
*  `value`: A string scalar Tensor.

Returns up to `num_records` (key, value) pairs produced by a reader.

Will dequeue a work unit from queue if necessary (e.g., when the Reader needs to start reading from a new file since it has finished with the previous file). It may return less than `num_records` even before the last batch.

##### Args:

*  `queue`: A Queue or a mutable string Tensor representing a handle to a Queue, with string work items.
*  `num_records`: Number of records to read.
*  `name`: A name for the operation (optional).

##### Returns:

  A tuple of Tensors (keys, values).

*  `keys`: A 1-D string Tensor.
*  `values`: A 1-D string Tensor.
Op that implements the reader.
Restore a reader to its initial clean state.
##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  The created Operation.

Restore a reader to a previously saved state.

Not all Readers support being restored, so this can produce an Unimplemented error.

##### Args:

*  `state`: A string Tensor. Result of a SerializeState of a Reader with matching type.
*  `name`: A name for the operation (optional).

##### Returns:

  The created Operation.

Produce a string tensor that encodes the state of a reader.

Not all Readers support being serialized, so this can produce an Unimplemented error.

##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  A string Tensor.
Whether the Reader implementation can serialize its state.
A Reader that outputs the entire contents of a file as a value.
To use, enqueue filenames in a Queue. The output of Read will be a filename (key) and the contents of that file (value).
See ReaderBase for supported methods.
Create a WholeFileReader.
##### Args:

*  `name`: A name for the operation (optional).
Returns the number of records this reader has produced.
This is the same as the number of Read executions that have succeeded.
##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  An int64 Tensor.

Returns the number of work units this reader has finished processing.

##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  An int64 Tensor.
Returns the next record (key, value pair) produced by a reader.
Will dequeue a work unit from queue if necessary (e.g. when the Reader needs to start reading from a new file since it has finished with the previous file).
##### Args:

*  `queue`: A Queue or a mutable string Tensor representing a handle to a Queue, with string work items.
*  `name`: A name for the operation (optional).

##### Returns:

  A tuple of Tensors (key, value).

*  `key`: A string scalar Tensor.
*  `value`: A string scalar Tensor.

Returns up to `num_records` (key, value) pairs produced by a reader.

Will dequeue a work unit from queue if necessary (e.g., when the Reader needs to start reading from a new file since it has finished with the previous file). It may return less than `num_records` even before the last batch.

##### Args:

*  `queue`: A Queue or a mutable string Tensor representing a handle to a Queue, with string work items.
*  `num_records`: Number of records to read.
*  `name`: A name for the operation (optional).

##### Returns:

  A tuple of Tensors (keys, values).

*  `keys`: A 1-D string Tensor.
*  `values`: A 1-D string Tensor.
Op that implements the reader.
Restore a reader to its initial clean state.
##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  The created Operation.

Restore a reader to a previously saved state.

Not all Readers support being restored, so this can produce an Unimplemented error.

##### Args:

*  `state`: A string Tensor. Result of a SerializeState of a Reader with matching type.
*  `name`: A name for the operation (optional).

##### Returns:

  The created Operation.

Produce a string tensor that encodes the state of a reader.

Not all Readers support being serialized, so this can produce an Unimplemented error.

##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  A string Tensor.
Whether the Reader implementation can serialize its state.
A Reader that outputs the queued work as both the key and value.
To use, enqueue strings in a Queue. Read will take the front work string and output (work, work).
See ReaderBase for supported methods.
Create an IdentityReader.
##### Args:

*  `name`: A name for the operation (optional).
Returns the number of records this reader has produced.
This is the same as the number of Read executions that have succeeded.
##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  An int64 Tensor.

Returns the number of work units this reader has finished processing.

##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  An int64 Tensor.
Returns the next record (key, value pair) produced by a reader.
Will dequeue a work unit from queue if necessary (e.g. when the Reader needs to start reading from a new file since it has finished with the previous file).
##### Args:

*  `queue`: A Queue or a mutable string Tensor representing a handle to a Queue, with string work items.
*  `name`: A name for the operation (optional).

##### Returns:

  A tuple of Tensors (key, value).

*  `key`: A string scalar Tensor.
*  `value`: A string scalar Tensor.

Returns up to `num_records` (key, value) pairs produced by a reader.

Will dequeue a work unit from queue if necessary (e.g., when the Reader needs to start reading from a new file since it has finished with the previous file). It may return less than `num_records` even before the last batch.

##### Args:

*  `queue`: A Queue or a mutable string Tensor representing a handle to a Queue, with string work items.
*  `num_records`: Number of records to read.
*  `name`: A name for the operation (optional).

##### Returns:

  A tuple of Tensors (keys, values).

*  `keys`: A 1-D string Tensor.
*  `values`: A 1-D string Tensor.
Op that implements the reader.
Restore a reader to its initial clean state.
##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  The created Operation.

Restore a reader to a previously saved state.

Not all Readers support being restored, so this can produce an Unimplemented error.

##### Args:

*  `state`: A string Tensor. Result of a SerializeState of a Reader with matching type.
*  `name`: A name for the operation (optional).

##### Returns:

  The created Operation.

Produce a string tensor that encodes the state of a reader.

Not all Readers support being serialized, so this can produce an Unimplemented error.

##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  A string Tensor.
Whether the Reader implementation can serialize its state.
A Reader that outputs the records from a TFRecords file.
See ReaderBase for supported methods.
Create a TFRecordReader.
##### Args:

*  `name`: A name for the operation (optional).
*  `options`: A TFRecordOptions object (optional).
Returns the number of records this reader has produced.
This is the same as the number of Read executions that have succeeded.
##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  An int64 Tensor.

Returns the number of work units this reader has finished processing.

##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  An int64 Tensor.
Returns the next record (key, value pair) produced by a reader.
Will dequeue a work unit from queue if necessary (e.g. when the Reader needs to start reading from a new file since it has finished with the previous file).
##### Args:

*  `queue`: A Queue or a mutable string Tensor representing a handle to a Queue, with string work items.
*  `name`: A name for the operation (optional).

##### Returns:

  A tuple of Tensors (key, value).

*  `key`: A string scalar Tensor.
*  `value`: A string scalar Tensor.

Returns up to `num_records` (key, value) pairs produced by a reader.

Will dequeue a work unit from queue if necessary (e.g., when the Reader needs to start reading from a new file since it has finished with the previous file). It may return less than `num_records` even before the last batch.

##### Args:

*  `queue`: A Queue or a mutable string Tensor representing a handle to a Queue, with string work items.
*  `num_records`: Number of records to read.
*  `name`: A name for the operation (optional).

##### Returns:

  A tuple of Tensors (keys, values).

*  `keys`: A 1-D string Tensor.
*  `values`: A 1-D string Tensor.
Op that implements the reader.
Restore a reader to its initial clean state.
##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  The created Operation.

Restore a reader to a previously saved state.

Not all Readers support being restored, so this can produce an Unimplemented error.

##### Args:

*  `state`: A string Tensor. Result of a SerializeState of a Reader with matching type.
*  `name`: A name for the operation (optional).

##### Returns:

  The created Operation.

Produce a string tensor that encodes the state of a reader.

Not all Readers support being serialized, so this can produce an Unimplemented error.

##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  A string Tensor.
Whether the Reader implementation can serialize its state.
A Reader that outputs fixed-length records from a file.
See ReaderBase for supported methods.
#### `tf.FixedLengthRecordReader.__init__(record_bytes, header_bytes=None, footer_bytes=None, name=None)` {#FixedLengthRecordReader.init}
Create a FixedLengthRecordReader.
##### Args:

*  `record_bytes`: An int.
*  `header_bytes`: An optional int. Defaults to 0.
*  `footer_bytes`: An optional int. Defaults to 0.
*  `name`: A name for the operation (optional).
#### `tf.FixedLengthRecordReader.num_records_produced(name=None)` {#FixedLengthRecordReader.num_records_produced}
Returns the number of records this reader has produced.
This is the same as the number of Read executions that have succeeded.
##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  An int64 Tensor.
#### `tf.FixedLengthRecordReader.num_work_units_completed(name=None)` {#FixedLengthRecordReader.num_work_units_completed}
Returns the number of work units this reader has finished processing.
##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  An int64 Tensor.
Returns the next record (key, value pair) produced by a reader.
Will dequeue a work unit from queue if necessary (e.g. when the Reader needs to start reading from a new file since it has finished with the previous file).
##### Args:

*  `queue`: A Queue or a mutable string Tensor representing a handle to a Queue, with string work items.
*  `name`: A name for the operation (optional).

##### Returns:

  A tuple of Tensors (key, value).

*  `key`: A string scalar Tensor.
*  `value`: A string scalar Tensor.
#### `tf.FixedLengthRecordReader.read_up_to(queue, num_records, name=None)` {#FixedLengthRecordReader.read_up_to}
Returns up to num_records (key, value pairs) produced by a reader.
Will dequeue a work unit from queue if necessary (e.g., when the Reader needs to start reading from a new file since it has finished with the previous file). It may return less than num_records even before the last batch.
##### Args:

*  `queue`: A Queue or a mutable string Tensor representing a handle to a Queue, with string work items.
*  `num_records`: Number of records to read.
*  `name`: A name for the operation (optional).

##### Returns:

  A tuple of Tensors (keys, values).

*  `keys`: A 1-D string Tensor.
*  `values`: A 1-D string Tensor.
Op that implements the reader.
Restore a reader to its initial clean state.
##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  The created Operation.

Restore a reader to a previously saved state.

Not all Readers support being restored, so this can produce an Unimplemented error.

##### Args:

*  `state`: A string Tensor. Result of a SerializeState of a Reader with matching type.
*  `name`: A name for the operation (optional).

##### Returns:

  The created Operation.

Produce a string tensor that encodes the state of a reader.

Not all Readers support being serialized, so this can produce an Unimplemented error.

##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  A string Tensor.
Whether the Reader implementation can serialize its state.
## Converting

TensorFlow provides several operations that you can use to convert various data formats into tensors.
Convert CSV records to tensors. Each column maps to one tensor.
RFC 4180 format is expected for the CSV records (https://tools.ietf.org/html/rfc4180). Note that we allow leading and trailing spaces with int or float fields.
##### Args:

*  `records`: A `Tensor` of type `string`. Each string is a record/row in the csv and all records should have the same format.
*  `record_defaults`: A list of `Tensor` objects with types from: `float32`, `int32`, `int64`, `string`. One tensor per column of the input record, with either a scalar default value for that column or empty if the column is required.
*  `field_delim`: An optional `string`. Defaults to `","`. Delimiter to separate fields in a record.
*  `name`: A name for the operation (optional).
##### Returns:

  A list of `Tensor` objects. Has the same type as `record_defaults`. Each tensor will have the same shape as `records`.
Reinterpret the bytes of a string as a vector of numbers.
##### Args:

*  `bytes`: A `Tensor` of type `string`. All the elements must have the same length.
*  `out_type`: A `tf.DType` from: `tf.half, tf.float32, tf.float64, tf.int32, tf.uint8, tf.int16, tf.int8, tf.int64`.
*  `little_endian`: An optional `bool`. Defaults to `True`. Whether the input `bytes` are in little-endian order. Ignored for `out_type` values that are stored in a single byte like `uint8`.
*  `name`: A name for the operation (optional).
##### Returns:

  A `Tensor` of type `out_type`. A Tensor with one more dimension than the input `bytes`. The added dimension will have size equal to the length of the elements of `bytes` divided by the number of bytes to represent `out_type`.
## Example protocol buffer

TensorFlow's recommended format for training examples is serialized `Example` protocol buffers, described here. They contain `Features`, described here.
Configuration for parsing a variable-length input feature.
Fields:
  dtype: Data type of input.
Return self as a plain tuple. Used by copy and pickle.
Exclude the OrderedDict from pickling
Create new instance of VarLenFeature(dtype,)
Return a nicely formatted representation string
Alias for field number 0
Configuration for parsing a fixed-length input feature.
To treat sparse input as dense, provide a default_value
; otherwise,
the parse functions will fail on any examples missing this feature.
Fields:
shape: Shape of input data.
dtype: Data type of input.
default_value: Value to be used if an example is missing this feature. It
must be compatible with dtype
.
Return self as a plain tuple. Used by copy and pickle.
Exclude the OrderedDict from pickling
Create new instance of FixedLenFeature(shape, dtype, default_value)
Return a nicely formatted representation string
Alias for field number 2
Alias for field number 1
Alias for field number 0
Configuration for a dense input feature in a sequence item.
To treat a sparse input as dense, provide allow_missing=True
; otherwise,
the parse functions will fail on any examples missing this feature.
Fields:
  shape: Shape of input data.
  dtype: Data type of input.
  allow_missing: Whether to allow this feature to be missing from a feature list item.
Return self as a plain tuple. Used by copy and pickle.
Exclude the OrderedDict from pickling
#### `tf.FixedLenSequenceFeature.__new__(_cls, shape, dtype, allow_missing=False)` {#FixedLenSequenceFeature.new}
Create new instance of FixedLenSequenceFeature(shape, dtype, allow_missing)
Return a nicely formatted representation string
Alias for field number 2
Alias for field number 1
Alias for field number 0
Configuration for parsing a sparse input feature.
Fields:
index_key: Name of index feature. The underlying feature's type must
be int64
and its length must always match that of the value_key
feature.
value_key: Name of value feature. The underlying feature's type must
be dtype
and its length must always match that of the index_key
feature.
dtype: Data type of the value_key
feature.
size: A Python int to specify a dimension of the dense shape. Each value in
the index_key
feature must be in [0, size)
.
already_sorted: A Python boolean to specify whether the values in
index_key
are already sorted. If so skip sorting.
False by default (optional).
Return self as a plain tuple. Used by copy and pickle.
Exclude the OrderedDict from pickling
#### `tf.SparseFeature.__new__(_cls, index_key, value_key, dtype, size, already_sorted=False)` {#SparseFeature.new}
Create new instance of SparseFeature(index_key, value_key, dtype, size, already_sorted)
Return a nicely formatted representation string
Alias for field number 4
Alias for field number 2
Alias for field number 0
Alias for field number 3
Alias for field number 1
### `tf.parse_example(serialized, features, name=None, example_names=None)` {#parse_example}

Parses `Example` protos into a `dict` of tensors.

Parses a number of serialized `Example` protos given in `serialized`.

`example_names` may contain descriptive names for the corresponding serialized protos. These may be useful for debugging purposes, but they have no effect on the output. If not `None`, `example_names` must be the same length as `serialized`.
This op parses serialized examples into a dictionary mapping keys to `Tensor` and `SparseTensor` objects. `features` is a dict from keys to `VarLenFeature`, `SparseFeature`, and `FixedLenFeature` objects. Each `VarLenFeature` and `SparseFeature` is mapped to a `SparseTensor`, and each `FixedLenFeature` is mapped to a `Tensor`.
Each `VarLenFeature` maps to a `SparseTensor` of the specified type representing a ragged matrix. Its indices are `[batch, index]` where `batch` is the batch entry the value is from in `serialized`, and `index` is the value's index in the list of values associated with that feature and example.
Each `SparseFeature` maps to a `SparseTensor` of the specified type representing a sparse matrix of shape `(serialized.size(), SparseFeature.size)`. Its indices are `[batch, index]` where `batch` is the batch entry the value is from in `serialized`, and `index` is given by the values in the `SparseFeature.index_key` feature column.
Each `FixedLenFeature` `df` maps to a `Tensor` of the specified type (or `tf.float32` if not specified) and shape `(serialized.size(),) + df.shape`.

`FixedLenFeature` entries with a `default_value` are optional. With no default value, we will fail if that `Feature` is missing from any example in `serialized`.
Examples:

For example, if one expects a `tf.float32` sparse feature `ft` and three serialized `Example`s are provided:

```
serialized = [
  features
    { feature { key: "ft" value { float_list { value: [1.0, 2.0] } } } },
  features
    { feature []},
  features
    { feature { key: "ft" value { float_list { value: [3.0] } } }
]
```

then the output will look like:

```
{"ft": SparseTensor(indices=[[0, 0], [0, 1], [2, 0]],
                    values=[1.0, 2.0, 3.0],
                    dense_shape=(3, 2))}
```
Given two `Example` input protos in `serialized`:

```
[
  features {
    feature { key: "kw" value { bytes_list { value: [ "knit", "big" ] } } }
    feature { key: "gps" value { float_list { value: [] } } }
  },
  features {
    feature { key: "kw" value { bytes_list { value: [ "emmy" ] } } }
    feature { key: "dank" value { int64_list { value: [ 42 ] } } }
    feature { key: "gps" value { } }
  }
]
```

And arguments

```
example_names: ["input0", "input1"],
features: {
    "kw": VarLenFeature(tf.string),
    "dank": VarLenFeature(tf.int64),
    "gps": VarLenFeature(tf.float32),
}
```

Then the output is a dictionary:

```
{
  "kw": SparseTensor(
      indices=[[0, 0], [0, 1], [1, 0]],
      values=["knit", "big", "emmy"],
      dense_shape=[2, 2]),
  "dank": SparseTensor(
      indices=[[1, 0]],
      values=[42],
      dense_shape=[2, 1]),
  "gps": SparseTensor(
      indices=[],
      values=[],
      dense_shape=[2, 0]),
}
```
For dense results in two serialized `Example`s:

```
[
  features {
    feature { key: "age" value { int64_list { value: [ 0 ] } } }
    feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
  },
  features {
    feature { key: "age" value { int64_list { value: [] } } }
    feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
  }
]
```

We can use arguments:

```
example_names: ["input0", "input1"],
features: {
    "age": FixedLenFeature([], dtype=tf.int64, default_value=-1),
    "gender": FixedLenFeature([], dtype=tf.string),
}
```

And the expected output is:

```
{
  "age": [[0], [-1]],
  "gender": [["f"], ["f"]],
}
```
Given two `Example` input protos in `serialized`:

```
[
  features {
    feature { key: "val" value { float_list { value: [ 0.5, -1.0 ] } } }
    feature { key: "ix" value { int64_list { value: [ 3, 20 ] } } }
  },
  features {
    feature { key: "val" value { float_list { value: [ 0.0 ] } } }
    feature { key: "ix" value { int64_list { value: [ 42 ] } } }
  }
]
```

And arguments

```
example_names: ["input0", "input1"],
features: {
    "sparse": SparseFeature(
        index_key="ix", value_key="val", dtype=tf.float32, size=100),
}
```

Then the output is a dictionary:

```
{
  "sparse": SparseTensor(
      indices=[[0, 3], [0, 20], [1, 42]],
      values=[0.5, -1.0, 0.0],
      dense_shape=[2, 100]),
}
```
##### Args:

*  `serialized`: A vector (1-D Tensor) of strings, a batch of binary serialized `Example` protos.
*  `features`: A `dict` mapping feature keys to `FixedLenFeature`, `VarLenFeature`, and `SparseFeature` values.
*  `name`: A name for this operation (optional).
*  `example_names`: A vector (1-D Tensor) of strings (optional), the names of the serialized protos in the batch.

##### Returns:

  A `dict` mapping feature keys to `Tensor` and `SparseTensor` values.

##### Raises:

*  `ValueError`: if any feature is invalid.
### `tf.parse_single_example(serialized, features, name=None, example_names=None)` {#parse_single_example}
Parses a single `Example` proto.

Similar to `parse_example`, except:
For dense tensors, the returned `Tensor` is identical to the output of `parse_example`, except there is no batch dimension: the output shape is the same as the shape given in `dense_shape`.

For `SparseTensor`s, the first (batch) column of the indices matrix is removed (the indices matrix is a column vector), the values vector is unchanged, and the first (`batch_size`) entry of the shape vector is removed (it is now a single element vector).
One might see performance advantages by batching `Example` protos with `parse_example` instead of using this function directly.
##### Args:

*  `serialized`: A scalar string Tensor, a single serialized Example. See `_parse_single_example_raw` documentation for more details.
*  `features`: A `dict` mapping feature keys to `FixedLenFeature` or `VarLenFeature` values.
*  `name`: A name for this operation (optional).
*  `example_names`: (Optional) A scalar string Tensor, the associated name. See `_parse_single_example_raw` documentation for more details.

##### Returns:

  A `dict` mapping feature keys to `Tensor` and `SparseTensor` values.

##### Raises:

*  `ValueError`: if any feature is invalid.
Transforms a serialized tensorflow.TensorProto proto into a Tensor.
##### Args:

*  `serialized`: A `Tensor` of type `string`. A scalar string containing a serialized TensorProto proto.
*  `out_type`: A `tf.DType`. The type of the serialized tensor. The provided type must match the type of the serialized tensor and no implicit conversion will take place.
*  `name`: A name for the operation (optional).

##### Returns:

  A `Tensor` of type `out_type`.
Convert JSON-encoded Example records to binary protocol buffer strings.
This op translates a tensor containing Example records, encoded using the standard JSON mapping, into a tensor containing the same records encoded as binary protocol buffers. The resulting tensor can then be fed to any of the other Example-parsing ops.
##### Args:

*  `json_examples`: A `Tensor` of type `string`. Each string is a JSON object serialized according to the JSON mapping of the Example proto.
*  `name`: A name for the operation (optional).

##### Returns:

  A `Tensor` of type `string`. Each string is a binary Example protocol buffer corresponding to the respective element of `json_examples`.
## Queues

TensorFlow provides several implementations of 'Queues', which are structures within the TensorFlow computation graph to stage pipelines of tensors together. The following sections describe the basic Queue interface and some implementations. To see an example use, see Threading and Queues.
Base class for queue implementations.
A queue is a TensorFlow data structure that stores tensors across multiple steps, and exposes operations that enqueue and dequeue tensors.
Each queue element is a tuple of one or more tensors, where each tuple component has a static dtype, and may have a static shape. The queue implementations support versions of enqueue and dequeue that handle single elements, and versions that enqueue and dequeue a batch of elements at once.
See `tf.FIFOQueue` and `tf.RandomShuffleQueue` for concrete implementations of this class, and instructions on how to create them.
Enqueues one element to this queue.
If the queue is full when this operation executes, it will block until the element has been enqueued.
At runtime, this operation may raise an error if the queue is closed before or during its execution. If the queue is closed before this operation runs, `tf.errors.CancelledError` will be raised. If this operation is blocked, and either (i) the queue is closed by a close operation with `cancel_pending_enqueues=True`, or (ii) the session is closed, `tf.errors.CancelledError` will be raised.
##### Args:

*  `vals`: A tensor, a list or tuple of tensors, or a dictionary containing the values to enqueue.
*  `name`: A name for the operation (optional).

##### Returns:

  The operation that enqueues a new tuple of tensors to the queue.
Enqueues zero or more elements to this queue.
This operation slices each component tensor along the 0th dimension to make multiple queue elements. All of the tensors in `vals` must have the same size in the 0th dimension.
If the queue is full when this operation executes, it will block until all of the elements have been enqueued.
At runtime, this operation may raise an error if the queue is closed before or during its execution. If the queue is closed before this operation runs, `tf.errors.CancelledError` will be raised. If this operation is blocked, and either (i) the queue is closed by a close operation with `cancel_pending_enqueues=True`, or (ii) the session is closed, `tf.errors.CancelledError` will be raised.
##### Args:

*  `vals`: A tensor, a list or tuple of tensors, or a dictionary from which the queue elements are taken.
*  `name`: A name for the operation (optional).

##### Returns:

  The operation that enqueues a batch of tuples of tensors to the queue.
Dequeues one element from this queue.
If the queue is empty when this operation executes, it will block until there is an element to dequeue.
At runtime, this operation may raise an error if the queue is closed before or during its execution. If the queue is closed, the queue is empty, and there are no pending enqueue operations that can fulfill this request, `tf.errors.OutOfRangeError` will be raised. If the session is closed, `tf.errors.CancelledError` will be raised.
##### Args:

*  `name`: A name for the operation (optional).

##### Returns:

  The tuple of tensors that was dequeued.
Dequeues and concatenates `n` elements from this queue.
This operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. All of the components in the dequeued tuple will have size `n` in the 0th dimension.
If the queue is closed and there are fewer than `n` elements left, then an `OutOfRange` exception is raised.
At runtime, this operation may raise an error if the queue is closed before or during its execution. If the queue is closed, the queue contains fewer than `n` elements, and there are no pending enqueue operations that can fulfill this request, `tf.errors.OutOfRangeError` will be raised. If the session is closed, `tf.errors.CancelledError` will be raised.
*  `n`: A scalar `Tensor` containing the number of elements to dequeue.
*  `name`: A name for the operation (optional).
The tuple of concatenated tensors that was dequeued.
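The concatenation behavior can be sketched in plain Python, with lists standing in for tensors (`dequeue_many_sketch` is a hypothetical helper, not part of the TensorFlow API; the real op blocks rather than raising):

```python
def dequeue_many_sketch(queue_elements, n):
    """Take the first n queue elements and concatenate them along a new
    0th dimension, as dequeue_many does: every output component has
    size n in the 0th dimension."""
    if len(queue_elements) < n:
        raise IndexError("fewer than n elements available (OutOfRange)")
    taken, rest = queue_elements[:n], queue_elements[n:]
    # Stack each component across the n dequeued elements.
    batched = {k: [e[k] for e in taken] for k in taken[0]}
    return batched, rest

elements = [{"x": i} for i in range(5)]
batch, remaining = dequeue_many_sketch(elements, 3)
# batch == {"x": [0, 1, 2]}; two elements remain in the queue
```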
Compute the number of elements in this queue.
*  `name`: A name for the operation (optional).
A scalar tensor containing the number of elements in this queue.
Closes this queue.
This operation signals that no more elements will be enqueued in the given queue. Subsequent `enqueue` and `enqueue_many` operations will fail. Subsequent `dequeue` and `dequeue_many` operations will continue to succeed if sufficient elements remain in the queue. Subsequent `dequeue` and `dequeue_many` operations that would block will fail immediately.
If `cancel_pending_enqueues` is `True`, all pending requests will also be cancelled.
*  `cancel_pending_enqueues`: (Optional.) A boolean, defaulting to `False` (described above).
*  `name`: A name for the operation (optional).
The operation that closes the queue.
Constructs a queue object from a queue reference.
The two optional lists, `shapes` and `names`, must be of the same length as `dtypes` if provided. The values at a given index `i` indicate the shape and name to use for the corresponding queue component in `dtypes`.
*  `dtypes`: A list of types. The length of `dtypes` must equal the number of tensors in each element.
*  `shapes`: Constraints on the shapes of tensors in an element: a list of shape tuples or `None`. This list is the same length as `dtypes`. If the shape of any tensor in the element is constrained, all must be; `shapes` can be `None` if the shapes should not be constrained.
*  `names`: Optional list of names. If provided, the `enqueue()` and `dequeue()` methods will use dictionaries with these names as keys. Must be `None` or a list or tuple of the same length as `dtypes`.
*  `queue_ref`: The queue reference, i.e. the output of the queue op.
*  `ValueError`: If one of the arguments is invalid.
Dequeues and concatenates `n` elements from this queue.
Note: This operation is not supported by all queues. If a queue does not support DequeueUpTo, then a `tf.errors.UnimplementedError` is raised.
This operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. If the queue has not been closed, all of the components in the dequeued tuple will have size `n` in the 0th dimension.
If the queue is closed and there are more than `0` but fewer than `n` elements remaining, then instead of raising a `tf.errors.OutOfRangeError` like `dequeue_many`, fewer than `n` elements are returned immediately. If the queue is closed and there are `0` elements left in the queue, then a `tf.errors.OutOfRangeError` is raised just like in `dequeue_many`. Otherwise the behavior is identical to `dequeue_many`.
*  `n`: A scalar `Tensor` containing the number of elements to dequeue.
*  `name`: A name for the operation (optional).
The tuple of concatenated tensors that was dequeued.
The list of dtypes for each component of a queue element.
Create a queue using the queue reference from queues[index]
.
*  `index`: An integer scalar tensor that determines the input that gets selected.
*  `queues`: A list of `QueueBase` objects.
A QueueBase
object.
*  `TypeError`: When `queues` is not a list of `QueueBase` objects, or when the data types of `queues` are not all the same.
The name of the underlying queue.
The list of names for each component of a queue element.
The underlying queue reference.
The list of shapes for each component of a queue element.
A queue implementation that dequeues elements in first-in first-out order.
See tf.QueueBase
for a description of the methods on
this class.
tf.FIFOQueue.__init__(capacity, dtypes, shapes=None, names=None, shared_name=None, name='fifo_queue')
{#FIFOQueue.init}
Creates a queue that dequeues elements in a first-in first-out order.
A `FIFOQueue` has bounded capacity; supports multiple concurrent producers and consumers; and provides exactly-once delivery.
A `FIFOQueue` holds a list of up to `capacity` elements. Each element is a fixed-length tuple of tensors whose dtypes are described by `dtypes`, and whose shapes are optionally described by the `shapes` argument.
If the `shapes` argument is specified, each component of a queue element must have the respective fixed shape. If it is unspecified, different queue elements may have different shapes, but the use of `dequeue_many` is disallowed.
*  `capacity`: An integer. The upper bound on the number of elements that may be stored in this queue.
*  `dtypes`: A list of `DType` objects. The length of `dtypes` must equal the number of tensors in each queue element.
*  `shapes`: (Optional.) A list of fully-defined `TensorShape` objects with the same length as `dtypes`, or `None`.
*  `names`: (Optional.) A list of strings naming the components in the queue with the same length as `dtypes`, or `None`. If specified, the dequeue methods return a dictionary with the names as keys.
*  `shared_name`: (Optional.) If non-empty, this queue will be shared under the given name across multiple sessions.
*  `name`: Optional name for the queue operation.
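The ordering and bounded-capacity semantics can be sketched with the standard library's `queue.Queue` (a plain-Python analogy, not the TensorFlow API):

```python
import queue
import threading

# FIFO semantics with bounded capacity: put() blocks once `capacity`
# elements are buffered, and get() blocks while the queue is empty,
# much as tf.FIFOQueue's enqueue/dequeue ops block inside a session.
q = queue.Queue(maxsize=3)  # capacity = 3

def producer():
    for i in range(5):
        q.put(i)  # blocks at i == 3 until the consumer makes room

t = threading.Thread(target=producer)
t.start()
out = [q.get() for _ in range(5)]
t.join()
# out == [0, 1, 2, 3, 4]  -- elements come out in first-in first-out order
```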
A FIFOQueue that supports batching variable-sized tensors by padding.
A `PaddingFIFOQueue` may contain components with dynamic shape, while also supporting `dequeue_many`. See the constructor for more details.
See tf.QueueBase
for a description of the methods on
this class.
tf.PaddingFIFOQueue.__init__(capacity, dtypes, shapes, names=None, shared_name=None, name='padding_fifo_queue')
{#PaddingFIFOQueue.init}
Creates a queue that dequeues elements in a first-in first-out order.
A `PaddingFIFOQueue` has bounded capacity; supports multiple concurrent producers and consumers; and provides exactly-once delivery.
A `PaddingFIFOQueue` holds a list of up to `capacity` elements. Each element is a fixed-length tuple of tensors whose dtypes are described by `dtypes`, and whose shapes are described by the `shapes` argument.
The `shapes` argument must be specified; each component of a queue element must have the respective shape. Shapes of fixed rank but variable size are allowed by setting any shape dimension to `None`. In this case, the inputs' shape may vary along the given dimension, and `dequeue_many` will pad the given dimension with zeros up to the maximum shape of all elements in the given batch.
*  `capacity`: An integer. The upper bound on the number of elements that may be stored in this queue.
*  `dtypes`: A list of `DType` objects. The length of `dtypes` must equal the number of tensors in each queue element.
*  `shapes`: A list of `TensorShape` objects, with the same length as `dtypes`. Any dimension in the `TensorShape` containing value `None` is dynamic and allows values to be enqueued with variable size in that dimension.
*  `names`: (Optional.) A list of strings naming the components in the queue with the same length as `dtypes`, or `None`. If specified, the dequeue methods return a dictionary with the names as keys.
*  `shared_name`: (Optional.) If non-empty, this queue will be shared under the given name across multiple sessions.
*  `name`: Optional name for the queue operation.
*  `ValueError`: If `shapes` is not a list of shapes, or the lengths of `dtypes` and `shapes` do not match, or if `names` is specified and the lengths of `dtypes` and `names` do not match.
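The zero-padding that `dequeue_many` applies along a `None` dimension can be sketched in plain Python (`pad_batch` is a hypothetical helper, not part of the TensorFlow API):

```python
def pad_batch(sequences, pad_value=0):
    """Pad variable-length sequences with zeros up to the longest one,
    as PaddingFIFOQueue.dequeue_many does for a dimension declared None."""
    max_len = max(len(s) for s in sequences)
    return [list(s) + [pad_value] * (max_len - len(s)) for s in sequences]

batch = pad_batch([[1, 2], [3], [4, 5, 6]])
# batch == [[1, 2, 0], [3, 0, 0], [4, 5, 6]]
```

Every element in the dequeued batch then has the same shape, which is what makes `dequeue_many` usable despite the dynamic dimension.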
A queue implementation that dequeues elements in a random order.
See tf.QueueBase
for a description of the methods on
this class.
tf.RandomShuffleQueue.__init__(capacity, min_after_dequeue, dtypes, shapes=None, names=None, seed=None, shared_name=None, name='random_shuffle_queue')
{#RandomShuffleQueue.init}
Create a queue that dequeues elements in a random order.
A `RandomShuffleQueue` has bounded capacity; supports multiple concurrent producers and consumers; and provides exactly-once delivery.
A `RandomShuffleQueue` holds a list of up to `capacity` elements. Each element is a fixed-length tuple of tensors whose dtypes are described by `dtypes`, and whose shapes are optionally described by the `shapes` argument.
If the `shapes` argument is specified, each component of a queue element must have the respective fixed shape. If it is unspecified, different queue elements may have different shapes, but the use of `dequeue_many` is disallowed.
The `min_after_dequeue` argument allows the caller to specify a minimum number of elements that will remain in the queue after a `dequeue` or `dequeue_many` operation completes, to ensure a minimum level of mixing of elements. This invariant is maintained by blocking those operations until sufficient elements have been enqueued. The `min_after_dequeue` argument is ignored after the queue has been closed.
*  `capacity`: An integer. The upper bound on the number of elements that may be stored in this queue.
*  `min_after_dequeue`: An integer (described above).
*  `dtypes`: A list of `DType` objects. The length of `dtypes` must equal the number of tensors in each queue element.
*  `shapes`: (Optional.) A list of fully-defined `TensorShape` objects with the same length as `dtypes`, or `None`.
*  `names`: (Optional.) A list of strings naming the components in the queue with the same length as `dtypes`, or `None`. If specified, the dequeue methods return a dictionary with the names as keys.
*  `seed`: A Python integer. Used to create a random seed. See `set_random_seed` for behavior.
*  `shared_name`: (Optional.) If non-empty, this queue will be shared under the given name across multiple sessions.
*  `name`: Optional name for the queue operation.
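The `min_after_dequeue` mixing invariant can be sketched in plain Python (`ShuffleBufferSketch` is a hypothetical toy model, not the TensorFlow API; the real queue blocks instead of raising):

```python
import random

class ShuffleBufferSketch:
    """Toy model of RandomShuffleQueue's dequeue: pick a random buffered
    element, but refuse to drop below min_after_dequeue elements so that
    later dequeues still draw from a well-mixed pool."""
    def __init__(self, min_after_dequeue, seed=None):
        self.min_after_dequeue = min_after_dequeue
        self.buf = []
        self.rng = random.Random(seed)

    def enqueue(self, x):
        self.buf.append(x)

    def dequeue(self):
        if len(self.buf) <= self.min_after_dequeue:
            raise RuntimeError("would block: min_after_dequeue not satisfied")
        return self.buf.pop(self.rng.randrange(len(self.buf)))

q = ShuffleBufferSketch(min_after_dequeue=2, seed=0)
for i in range(5):
    q.enqueue(i)
x = q.dequeue()  # one of 0..4, chosen at random; 4 elements remain
```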
A queue implementation that dequeues elements in prioritized order.
See tf.QueueBase
for a description of the methods on
this class.
tf.PriorityQueue.__init__(capacity, types, shapes=None, names=None, shared_name=None, name='priority_queue')
{#PriorityQueue.init}
Creates a queue that dequeues elements in prioritized order.
A `PriorityQueue` has bounded capacity; supports multiple concurrent producers and consumers; and provides exactly-once delivery.
A `PriorityQueue` holds a list of up to `capacity` elements. Each element is a fixed-length tuple of tensors whose dtypes are described by `types`, and whose shapes are optionally described by the `shapes` argument.
If the `shapes` argument is specified, each component of a queue element must have the respective fixed shape. If it is unspecified, different queue elements may have different shapes, but the use of `dequeue_many` is disallowed.
Enqueues and dequeues to the `PriorityQueue` must include an additional tuple entry at the beginning: the priority. The priority must be an int64 scalar (for `enqueue`) or an int64 vector (for `enqueue_many`).
*  `capacity`: An integer. The upper bound on the number of elements that may be stored in this queue.
*  `types`: A list of `DType` objects. The length of `types` must equal the number of tensors in each queue element, excluding the first priority element. The first tensor in each element is the priority, which must be of type int64.
*  `shapes`: (Optional.) A list of fully-defined `TensorShape` objects, with the same length as `types`, or `None`.
*  `names`: (Optional.) A list of strings naming the components in the queue with the same length as `types`, or `None`. If specified, the dequeue methods return a dictionary with the names as keys.
*  `shared_name`: (Optional.) If non-empty, this queue will be shared under the given name across multiple sessions.
*  `name`: Optional name for the queue operation.
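The dequeue order can be illustrated with Python's `heapq`, treating the first tuple entry as the int64 priority (a plain-Python analogy, not the TensorFlow API):

```python
import heapq

# Each element is a tuple whose first entry is the priority; elements
# with smaller priorities are dequeued first, as in tf.PriorityQueue.
heap = []
for item in [(5, "e"), (1, "a"), (3, "c")]:
    heapq.heappush(heap, item)

order = [heapq.heappop(heap) for _ in range(len(heap))]
# order == [(1, "a"), (3, "c"), (5, "e")]
```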
A conditional accumulator for aggregating gradients.
Up-to-date gradients (i.e., time step at which gradient was computed is equal to the accumulator's time step) are added to the accumulator.
Extraction of the average gradient is blocked until the required number of gradients has been accumulated.
tf.ConditionalAccumulatorBase.__init__(dtype, shape, accumulator_ref)
{#ConditionalAccumulatorBase.init}
Creates a new ConditionalAccumulator.
*  `dtype`: Datatype of the accumulated gradients.
*  `shape`: Shape of the accumulated gradients.
*  `accumulator_ref`: A handle to the conditional accumulator, created by subclasses.
The underlying accumulator reference.
The datatype of the gradients accumulated by this accumulator.
The name of the underlying accumulator.
tf.ConditionalAccumulatorBase.num_accumulated(name=None)
{#ConditionalAccumulatorBase.num_accumulated}
Number of gradients that have currently been aggregated in accumulator.
*  `name`: Optional name for the operation.
Number of accumulated gradients currently in accumulator.
tf.ConditionalAccumulatorBase.set_global_step(new_global_step, name=None)
{#ConditionalAccumulatorBase.set_global_step}
Sets the global time step of the accumulator.
The operation logs a warning if we attempt to set to a time step that is lower than the accumulator's own time step.
*  `new_global_step`: Value of the new time step. Can be a variable or a constant.
*  `name`: Optional name for the operation.
Operation that sets the accumulator's time step.
A conditional accumulator for aggregating gradients.
Up-to-date gradients (i.e., time step at which gradient was computed is equal to the accumulator's time step) are added to the accumulator.
Extraction of the average gradient is blocked until the required number of gradients has been accumulated.
tf.ConditionalAccumulator.__init__(dtype, shape=None, shared_name=None, name='conditional_accumulator')
{#ConditionalAccumulator.init}
Creates a new ConditionalAccumulator.
*  `dtype`: Datatype of the accumulated gradients.
*  `shape`: Shape of the accumulated gradients.
*  `shared_name`: Optional. If non-empty, this accumulator will be shared under the given name across multiple sessions.
*  `name`: Optional name for the accumulator.
The underlying accumulator reference.
tf.ConditionalAccumulator.apply_grad(grad, local_step=0, name=None)
{#ConditionalAccumulator.apply_grad}
Attempts to apply a gradient to the accumulator.
The attempt is silently dropped if the gradient is stale, i.e., local_step is less than the accumulator's global time step.
*  `grad`: The gradient tensor to be applied.
*  `local_step`: Time step at which the gradient was computed.
*  `name`: Optional name for the operation.
The operation that (conditionally) applies a gradient to the accumulator.
*  `ValueError`: If grad is of the wrong shape.
The datatype of the gradients accumulated by this accumulator.
The name of the underlying accumulator.
Number of gradients that have currently been aggregated in accumulator.
*  `name`: Optional name for the operation.
Number of accumulated gradients currently in accumulator.
tf.ConditionalAccumulator.set_global_step(new_global_step, name=None)
{#ConditionalAccumulator.set_global_step}
Sets the global time step of the accumulator.
The operation logs a warning if we attempt to set to a time step that is lower than the accumulator's own time step.
*  `new_global_step`: Value of the new time step. Can be a variable or a constant.
*  `name`: Optional name for the operation.
Operation that sets the accumulator's time step.
Attempts to extract the average gradient from the accumulator.
The operation blocks until a sufficient number of gradients have been successfully applied to the accumulator.
Once successful, the following actions are also triggered:
- Counter of accumulated gradients is reset to 0.
- Aggregated gradient is reset to 0 tensor.
- Accumulator's internal time step is incremented by 1.
*  `num_required`: Number of gradients that need to have been aggregated.
*  `name`: Optional name for the operation.
A tensor holding the value of the average gradient.
*  `InvalidArgumentError`: If `num_required` < 1.
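The accumulator's drop-stale/average/reset cycle can be sketched as a toy model in plain Python (`AccumulatorSketch` is hypothetical, not the TensorFlow API; the real `take_grad` blocks instead of raising when too few gradients have arrived):

```python
class AccumulatorSketch:
    """Toy model of tf.ConditionalAccumulator: gradients whose
    local_step is behind the global step are silently dropped, and
    take_grad averages what has accumulated, resets the state, and
    advances the time step."""
    def __init__(self):
        self.global_step = 0
        self.grads = []

    def apply_grad(self, grad, local_step=0):
        if local_step >= self.global_step:  # stale gradients are dropped
            self.grads.append(grad)

    def take_grad(self, num_required):
        if num_required < 1:
            raise ValueError("num_required must be >= 1")
        if len(self.grads) < num_required:
            raise RuntimeError("would block: not enough gradients yet")
        avg = sum(self.grads) / len(self.grads)
        self.grads = []         # counter and aggregate reset to 0
        self.global_step += 1   # internal time step incremented by 1
        return avg

acc = AccumulatorSketch()
acc.apply_grad(1.0)
acc.apply_grad(3.0)
avg = acc.take_grad(num_required=2)  # average of 1.0 and 3.0
acc.apply_grad(5.0, local_step=0)    # now stale (global_step is 1): dropped
```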
A conditional accumulator for aggregating sparse gradients.
Sparse gradients are represented by IndexedSlices.
Up-to-date gradients (i.e., time step at which gradient was computed is equal to the accumulator's time step) are added to the accumulator.
Extraction of the average gradient is blocked until the required number of gradients has been accumulated.
tf.SparseConditionalAccumulator.__init__(dtype, shape=None, shared_name=None, name='sparse_conditional_accumulator')
{#SparseConditionalAccumulator.init}
*  `dtype`: Datatype of the accumulated gradients.
*  `shape`: Shape of the accumulated gradients.
*  `shared_name`: Optional. If non-empty, this accumulator will be shared under the given name across multiple sessions.
*  `name`: Optional name for the accumulator.
The underlying accumulator reference.
tf.SparseConditionalAccumulator.apply_grad(grad_indices, grad_values, grad_shape=None, local_step=0, name=None)
{#SparseConditionalAccumulator.apply_grad}
Attempts to apply a sparse gradient to the accumulator.
The attempt is silently dropped if the gradient is stale, i.e., local_step is less than the accumulator's global time step.
A sparse gradient is represented by its indices, values and possibly empty or None shape. Indices must be a vector representing the locations of non-zero entries in the tensor. Values are the non-zero slices of the gradient, and must have the same first dimension as indices, i.e., the nnz represented by indices and values must be consistent. Shape, if not empty or None, must be consistent with the accumulator's shape (if also provided).
For example, the tensor [[0, 0], [0, 1], [2, 3]] can be represented as:
*  `indices`: [1, 2]
*  `values`: [[0, 1], [2, 3]]
*  `shape`: [3, 2]
*  `grad_indices`: Indices of the sparse gradient to be applied.
*  `grad_values`: Values of the sparse gradient to be applied.
*  `grad_shape`: Shape of the sparse gradient to be applied.
*  `local_step`: Time step at which the gradient was computed.
*  `name`: Optional name for the operation.
The operation that (conditionally) applies a gradient to the accumulator.
*  `InvalidArgumentError`: If grad is of the wrong shape.
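The sparse representation in the example above can be checked with a small sketch (`densify` is a hypothetical helper, not part of the API):

```python
def densify(indices, values, shape):
    """Rebuild the dense tensor from the (indices, values, shape)
    triple used by SparseConditionalAccumulator.apply_grad: indices
    locate the non-zero rows, values are those rows' contents."""
    rows, cols = shape
    dense = [[0] * cols for _ in range(rows)]
    for idx, row in zip(indices, values):
        dense[idx] = list(row)
    return dense

dense = densify(indices=[1, 2], values=[[0, 1], [2, 3]], shape=[3, 2])
# dense == [[0, 0], [0, 1], [2, 3]]  -- the tensor from the example above
```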
tf.SparseConditionalAccumulator.apply_indexed_slices_grad(grad, local_step=0, name=None)
{#SparseConditionalAccumulator.apply_indexed_slices_grad}
Attempts to apply a gradient to the accumulator.
The attempt is silently dropped if the gradient is stale, i.e., local_step is less than the accumulator's global time step.
*  `grad`: The gradient `IndexedSlices` to be applied.
*  `local_step`: Time step at which the gradient was computed.
*  `name`: Optional name for the operation.
The operation that (conditionally) applies a gradient to the accumulator.
*  `InvalidArgumentError`: If grad is of the wrong shape.
The datatype of the gradients accumulated by this accumulator.
The name of the underlying accumulator.
tf.SparseConditionalAccumulator.num_accumulated(name=None)
{#SparseConditionalAccumulator.num_accumulated}
Number of gradients that have currently been aggregated in accumulator.
*  `name`: Optional name for the operation.
Number of accumulated gradients currently in accumulator.
tf.SparseConditionalAccumulator.set_global_step(new_global_step, name=None)
{#SparseConditionalAccumulator.set_global_step}
Sets the global time step of the accumulator.
The operation logs a warning if we attempt to set to a time step that is lower than the accumulator's own time step.
*  `new_global_step`: Value of the new time step. Can be a variable or a constant.
*  `name`: Optional name for the operation.
Operation that sets the accumulator's time step.
tf.SparseConditionalAccumulator.take_grad(num_required, name=None)
{#SparseConditionalAccumulator.take_grad}
Attempts to extract the average gradient from the accumulator.
The operation blocks until a sufficient number of gradients have been successfully applied to the accumulator.
Once successful, the following actions are also triggered:
- Counter of accumulated gradients is reset to 0.
- Aggregated gradient is reset to 0 tensor.
- Accumulator's internal time step is incremented by 1.
*  `num_required`: Number of gradients that need to have been aggregated.
*  `name`: Optional name for the operation.
A tuple of indices, values, and shape representing the average gradient.
*  `InvalidArgumentError`: If `num_required` < 1.
tf.SparseConditionalAccumulator.take_indexed_slices_grad(num_required, name=None)
{#SparseConditionalAccumulator.take_indexed_slices_grad}
Attempts to extract the average gradient from the accumulator.
The operation blocks until a sufficient number of gradients have been successfully applied to the accumulator.
Once successful, the following actions are also triggered:
- Counter of accumulated gradients is reset to 0.
- Aggregated gradient is reset to 0 tensor.
- Accumulator's internal time step is incremented by 1.
*  `num_required`: Number of gradients that need to have been aggregated.
*  `name`: Optional name for the operation.
An IndexedSlices holding the value of the average gradient.
*  `InvalidArgumentError`: If `num_required` < 1.
Returns the set of files matching a pattern.
Note that this routine only supports wildcard characters in the basename portion of the pattern, not in the directory portion.
*  `pattern`: A `Tensor` of type `string`. A (scalar) shell wildcard pattern.
*  `name`: A name for the operation (optional).
A `Tensor` of type `string`. A vector of matching filenames.
Reads and outputs the entire contents of the input filename.
*  `filename`: A `Tensor` of type `string`.
*  `name`: A name for the operation (optional).
A `Tensor` of type `string`.
Writes contents to the file at the input filename. Creates the file if it does not exist.
*  `filename`: A scalar `Tensor` of type `string`. The name of the file to which we write the contents.
*  `contents`: A scalar `Tensor` of type `string`. The content to be written to the output file.
*  `name`: A name for the operation (optional).
The created Operation.
TensorFlow functions for setting up an input-prefetching pipeline. Please see the reading data how-to for context.
The "producer" functions add a queue to the graph and a corresponding `QueueRunner` for running the subgraph that fills that queue.
Save the list of files matching pattern, so it is only computed once.
*  `pattern`: A file pattern (glob).
*  `name`: A name for the operations (optional).
A variable that is initialized to the list of files matching pattern.
Returns tensor `num_epochs` times and then raises an `OutOfRange` error.
Note: creates local counter `epochs`. Use `local_variables_initializer()` to initialize local variables.
*  `tensor`: Any `Tensor`.
*  `num_epochs`: A positive integer (optional). If specified, limits the number of steps the output tensor may be evaluated.
*  `name`: A name for the operations (optional).
tensor or OutOfRange
.
*  `ValueError`: if `num_epochs` is invalid.
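The epoch-limiting behavior can be sketched as a plain-Python generator (`limit_epochs_sketch` is hypothetical, not the TensorFlow API; the real op raises `OutOfRange` rather than simply ending iteration):

```python
def limit_epochs_sketch(value, num_epochs):
    """Yield `value` num_epochs times, then stop, analogous to
    tf.train.limit_epochs raising OutOfRange after num_epochs
    evaluations. num_epochs=None means unlimited."""
    if num_epochs is not None and num_epochs <= 0:
        raise ValueError("num_epochs must be positive")
    epochs = 0  # the local counter
    while num_epochs is None or epochs < num_epochs:
        epochs += 1
        yield value

vals = list(limit_epochs_sketch("data.tfrecord", num_epochs=3))
# vals == ["data.tfrecord"] * 3
```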
tf.train.input_producer(input_tensor, element_shape=None, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, summary_name=None, name=None, cancel_op=None)
{#input_producer}
Output the rows of `input_tensor` to a queue for an input pipeline.
Note: if `num_epochs` is not `None`, this function creates local counter `epochs`. Use `local_variables_initializer()` to initialize local variables.
*  `input_tensor`: A tensor with the rows to produce. Must be at least one-dimensional. Must either have a fully-defined shape, or `element_shape` must be defined.
*  `element_shape`: (Optional.) A `TensorShape` representing the shape of a row of `input_tensor`, if it cannot be inferred.
*  `num_epochs`: (Optional.) An integer. If specified, `input_producer` produces each row of `input_tensor` `num_epochs` times before generating an `OutOfRange` error. If not specified, `input_producer` can cycle through the rows of `input_tensor` an unlimited number of times.
*  `shuffle`: (Optional.) A boolean. If true, the rows are randomly shuffled within each epoch.
*  `seed`: (Optional.) An integer. The seed to use if `shuffle` is true.
*  `capacity`: (Optional.) The capacity of the queue to be used for buffering the input.
*  `shared_name`: (Optional.) If set, this queue will be shared under the given name across multiple sessions.
*  `summary_name`: (Optional.) If set, a scalar summary for the current queue size will be generated, using this name as part of the tag.
*  `name`: (Optional.) A name for the queue.
*  `cancel_op`: (Optional.) Cancel op for the queue.
A queue with the output rows. A `QueueRunner` for the queue is added to the `QUEUE_RUNNER` collection of the current graph.
*  `ValueError`: If the shape of the input cannot be inferred from the arguments.
tf.train.range_input_producer(limit, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, name=None)
{#range_input_producer}
Produces the integers from 0 to limit-1 in a queue.
Note: if `num_epochs` is not `None`, this function creates local counter `epochs`. Use `local_variables_initializer()` to initialize local variables.
*  `limit`: An int32 scalar tensor.
*  `num_epochs`: An integer (optional). If specified, `range_input_producer` produces each integer `num_epochs` times before generating an `OutOfRange` error. If not specified, `range_input_producer` can cycle through the integers an unlimited number of times.
*  `shuffle`: Boolean. If true, the integers are randomly shuffled within each epoch.
*  `seed`: An integer (optional). Seed used if `shuffle == True`.
*  `capacity`: An integer. Sets the queue capacity.
*  `shared_name`: (Optional.) If set, this queue will be shared under the given name across multiple sessions.
*  `name`: A name for the operations (optional).
A Queue with the output integers. A `QueueRunner` for the Queue is added to the current `Graph`'s `QUEUE_RUNNER` collection.
tf.train.slice_input_producer(tensor_list, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, name=None)
{#slice_input_producer}
Produces a slice of each `Tensor` in `tensor_list`.
Implemented using a Queue -- a `QueueRunner` for the Queue is added to the current `Graph`'s `QUEUE_RUNNER` collection.
*  `tensor_list`: A list of `Tensor` objects. Every `Tensor` in `tensor_list` must have the same size in the first dimension.
*  `num_epochs`: An integer (optional). If specified, `slice_input_producer` produces each slice `num_epochs` times before generating an `OutOfRange` error. If not specified, `slice_input_producer` can cycle through the slices an unlimited number of times.
*  `shuffle`: Boolean. If true, the slices are randomly shuffled within each epoch.
*  `seed`: An integer (optional). Seed used if `shuffle == True`.
*  `capacity`: An integer. Sets the queue capacity.
*  `shared_name`: (Optional.) If set, this queue will be shared under the given name across multiple sessions.
*  `name`: A name for the operations (optional).
A list of tensors, one for each element of `tensor_list`. If the tensor in `tensor_list` has shape `[N, a, b, ..., z]`, then the corresponding output tensor will have shape `[a, b, ..., z]`.
*  `ValueError`: if `slice_input_producer` produces nothing from `tensor_list`.
tf.train.string_input_producer(string_tensor, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, name=None, cancel_op=None)
{#string_input_producer}
Output strings (e.g. filenames) to a queue for an input pipeline.
Note: if `num_epochs` is not `None`, this function creates local counter `epochs`. Use `local_variables_initializer()` to initialize local variables.
*  `string_tensor`: A 1-D string tensor with the strings to produce.
*  `num_epochs`: An integer (optional). If specified, `string_input_producer` produces each string from `string_tensor` `num_epochs` times before generating an `OutOfRange` error. If not specified, `string_input_producer` can cycle through the strings in `string_tensor` an unlimited number of times.
*  `shuffle`: Boolean. If true, the strings are randomly shuffled within each epoch.
*  `seed`: An integer (optional). Seed used if `shuffle == True`.
*  `capacity`: An integer. Sets the queue capacity.
*  `shared_name`: (Optional.) If set, this queue will be shared under the given name across multiple sessions.
*  `name`: A name for the operations (optional).
*  `cancel_op`: Cancel op for the queue (optional).
A queue with the output strings. A `QueueRunner` for the Queue is added to the current `Graph`'s `QUEUE_RUNNER` collection.
*  `ValueError`: If `string_tensor` is an empty Python list. At runtime, will fail with an assertion if `string_tensor` becomes a null tensor.
These functions add a queue to the graph to assemble a batch of examples, with possible shuffling. They also add a `QueueRunner` for running the subgraph that fills that queue.
Use `batch` or `batch_join` for batching examples that have already been well shuffled. Use `shuffle_batch` or `shuffle_batch_join` for examples that would benefit from additional shuffling.
Use `batch` or `shuffle_batch` if you want a single thread producing examples to batch, or if you have a single subgraph producing examples but you want to run it in N threads (where you increase N until it can keep the queue full). Use `batch_join` or `shuffle_batch_join` if you have N different subgraphs producing examples to batch and you want them run by N threads. Use `maybe_*` to enqueue conditionally.
tf.train.batch(tensors, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)
{#batch}
Creates batches of tensors in `tensors`.
The argument `tensors` can be a list or a dictionary of tensors. The value returned by the function will be of the same type as `tensors`.
This function is implemented using a queue. A `QueueRunner` for the queue is added to the current `Graph`'s `QUEUE_RUNNER` collection.
If `enqueue_many` is `False`, `tensors` is assumed to represent a single example. An input tensor with shape `[x, y, z]` will be output as a tensor with shape `[batch_size, x, y, z]`.
If `enqueue_many` is `True`, `tensors` is assumed to represent a batch of examples, where the first dimension is indexed by example, and all members of `tensors` should have the same size in the first dimension. If an input tensor has shape `[*, x, y, z]`, the output will have shape `[batch_size, x, y, z]`. The `capacity` argument controls how long the prefetching is allowed to grow the queues.
The returned operation is a dequeue operation and will throw `tf.errors.OutOfRangeError` if the input queue is exhausted. If this operation is feeding another input queue, its queue runner will catch this exception; however, if this operation is used in your main thread you are responsible for catching it yourself.
N.B.: If `dynamic_pad` is `False`, you must ensure that either (i) the `shapes` argument is passed, or (ii) all of the tensors in `tensors` have fully-defined shapes. `ValueError` will be raised if neither of these conditions holds.
If `dynamic_pad` is `True`, it is sufficient that the rank of the tensors is known, but individual dimensions may have shape `None`. In this case, for each enqueue the dimensions with value `None` may have a variable length; upon dequeue, the output tensors will be padded on the right to the maximum shape of the tensors in the current minibatch. For numbers, this padding takes value 0. For strings, this padding is the empty string. See `PaddingFIFOQueue` for more info.
If `allow_smaller_final_batch` is `True`, a smaller batch value than `batch_size` is returned when the queue is closed and there are not enough elements to fill the batch; otherwise the pending elements are discarded. In addition, all output tensors' static shapes, as accessed via the `get_shape` method, will have a first `Dimension` value of `None`, and operations that depend on a fixed `batch_size` will fail.
Note: if `num_epochs` is not `None`, this function creates local counter `epochs`. Use `local_variables_initializer()` to initialize local variables.
*  `tensors`: The list or dictionary of tensors to enqueue.
*  `batch_size`: The new batch size pulled from the queue.
*  `num_threads`: The number of threads enqueuing `tensors`.
*  `capacity`: An integer. The maximum number of elements in the queue.
*  `enqueue_many`: Whether each tensor in `tensors` is a single example.
*  `shapes`: (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors`.
*  `dynamic_pad`: Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
*  `allow_smaller_final_batch`: (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
*  `shared_name`: (Optional). If set, this queue will be shared under the given name across multiple sessions.
*  `name`: (Optional) A name for the operations.
A list or dictionary of tensors with the same types as `tensors` (except if
the input is a list of one element, in which case it returns a tensor, not a list).

- `ValueError`: If the `shapes` are not specified and cannot be inferred from the elements of `tensors`.
tf.train.maybe_batch(tensors, keep_input, batch_size, num_threads=1, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)
{#maybe_batch}
Conditionally creates batches of tensors based on keep_input
.
See docstring in batch
for more details.
- `tensors`: The list or dictionary of tensors to enqueue.
- `keep_input`: A `bool` scalar Tensor. This tensor controls whether the input is added to the queue or not. If it evaluates `True`, then `tensors` are added to the queue; otherwise they are dropped. This tensor essentially acts as a filtering mechanism.
- `batch_size`: The new batch size pulled from the queue.
- `num_threads`: The number of threads enqueuing `tensors`.
- `capacity`: An integer. The maximum number of elements in the queue.
- `enqueue_many`: Whether each tensor in `tensors` is a single example.
- `shapes`: (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors`.
- `dynamic_pad`: Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
- `allow_smaller_final_batch`: (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
- `shared_name`: (Optional) If set, this queue will be shared under the given name across multiple sessions.
- `name`: (Optional) A name for the operations.
A list or dictionary of tensors with the same types as `tensors`.

- `ValueError`: If the `shapes` are not specified and cannot be inferred from the elements of `tensors`.
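The filtering role of `keep_input` can be modeled in a few lines of plain Python; `maybe_enqueue` is an illustrative stand-in for the conditional enqueue, not a TensorFlow function:

```python
def maybe_enqueue(examples, keep_flags):
    # An example enters the queue only when its keep_input flag is True;
    # dropped examples never appear in any output batch.
    return [example for example, keep in zip(examples, keep_flags) if keep]

print(maybe_enqueue([10, 20, 30, 40], [True, False, True, False]))  # [10, 30]
```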
tf.train.batch_join(tensors_list, batch_size, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)
{#batch_join}
Runs a list of tensors to fill a queue to create batches of examples.
The tensors_list
argument is a list of tuples of tensors, or a list of
dictionaries of tensors. Each element in the list is treated similarly
to the tensors
argument of tf.train.batch()
.
Enqueues a different list of tensors in different threads.
Implemented using a queue -- a QueueRunner
for the queue
is added to the current Graph
's QUEUE_RUNNER
collection.
`len(tensors_list)` threads will be started, with thread `i` enqueuing
the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match
`tensors_list[i2][j]` in type and shape, except in the first
dimension if `enqueue_many` is true.
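This threading model can be sketched with Python's standard library: one worker per element of `tensors_list`, all feeding a single shared queue. This is a simplified analogue of the queue runners, not TensorFlow code:

```python
import queue
import threading

def run_enqueue_threads(tensors_list):
    # Start len(tensors_list) threads, with thread i enqueuing the
    # items from tensors_list[i] into one shared queue.
    q = queue.Queue()

    def worker(items):
        for item in items:
            q.put(item)

    threads = [threading.Thread(target=worker, args=(source,))
               for source in tensors_list]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return [q.get() for _ in range(q.qsize())]

# The interleaving across threads is nondeterministic; only the set of
# dequeued elements is guaranteed.
print(sorted(run_enqueue_threads([[1, 2], [3, 4], [5]])))  # [1, 2, 3, 4, 5]
```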
If `enqueue_many` is `False`, each `tensors_list[i]` is assumed
to represent a single example. An input tensor `x` will be output as a
tensor with shape `[batch_size] + x.shape`.

If `enqueue_many` is `True`, `tensors_list[i]` is assumed to
represent a batch of examples, where the first dimension is indexed
by example, and all members of `tensors_list[i]` should have the
same size in the first dimension. The slices of any input tensor
`x` are treated as examples, and the output tensors will have shape
`[batch_size] + x.shape[1:]`.
The `capacity` argument controls how long the prefetching is allowed to
grow the queues.
The returned operation is a dequeue operation and will throw
`tf.errors.OutOfRangeError` if the input queue is exhausted. If this
operation is feeding another input queue, its queue runner will catch
this exception. However, if this operation is used in your main thread,
you are responsible for catching it yourself.
N.B.: If `dynamic_pad` is `False`, you must ensure that either
(i) the `shapes` argument is passed, or (ii) all of the tensors in
`tensors_list` have fully-defined shapes. A `ValueError` will be
raised if neither of these conditions holds.
If `dynamic_pad` is `True`, it is sufficient that the rank of the
tensors is known, but individual dimensions may have value `None`.
In this case, for each enqueue the dimensions with value `None`
may have a variable length; upon dequeue, the output tensors will be padded
on the right to the maximum shape of the tensors in the current minibatch.
For numbers, this padding takes value 0. For strings, this padding is
the empty string. See `PaddingFIFOQueue` for more info.
If `allow_smaller_final_batch` is `True`, a batch smaller than
`batch_size` is returned when the queue is closed and there are not enough
elements to fill the batch; otherwise the pending elements are discarded.
In addition, all output tensors' static shapes, as accessed via the
`get_shape` method, will have a first `Dimension` value of `None`, and
operations that depend on a fixed `batch_size` will fail.
- `tensors_list`: A list of tuples or dictionaries of tensors to enqueue.
- `batch_size`: An integer. The new batch size pulled from the queue.
- `capacity`: An integer. The maximum number of elements in the queue.
- `enqueue_many`: Whether each tensor in `tensors_list` is a single example.
- `shapes`: (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
- `dynamic_pad`: Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
- `allow_smaller_final_batch`: (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
- `shared_name`: (Optional) If set, this queue will be shared under the given name across multiple sessions.
- `name`: (Optional) A name for the operations.
A list or dictionary of tensors with the same number and types as
`tensors_list[i]`.

- `ValueError`: If the `shapes` are not specified and cannot be inferred from the elements of `tensors_list`.
tf.train.maybe_batch_join(tensors_list, keep_input, batch_size, capacity=32, enqueue_many=False, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, shared_name=None, name=None)
{#maybe_batch_join}
Runs a list of tensors to conditionally fill a queue to create batches.
See docstring in batch_join
for more details.
- `tensors_list`: A list of tuples or dictionaries of tensors to enqueue.
- `keep_input`: A `bool` scalar Tensor. This tensor controls whether the input is added to the queue or not. If it evaluates `True`, then `tensors` are added to the queue; otherwise they are dropped. This tensor essentially acts as a filtering mechanism.
- `batch_size`: An integer. The new batch size pulled from the queue.
- `capacity`: An integer. The maximum number of elements in the queue.
- `enqueue_many`: Whether each tensor in `tensors_list` is a single example.
- `shapes`: (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
- `dynamic_pad`: Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes.
- `allow_smaller_final_batch`: (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
- `shared_name`: (Optional) If set, this queue will be shared under the given name across multiple sessions.
- `name`: (Optional) A name for the operations.
A list or dictionary of tensors with the same number and types as
`tensors_list[i]`.

- `ValueError`: If the `shapes` are not specified and cannot be inferred from the elements of `tensors_list`.
tf.train.shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, num_threads=1, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)
{#shuffle_batch}
Creates batches by randomly shuffling tensors.
This function adds the following to the current `Graph`:

- A shuffling queue into which tensors from `tensors` are enqueued.
- A `dequeue_many` operation to create batches from the queue.
- A `QueueRunner` added to the `QUEUE_RUNNER` collection, to enqueue the tensors from `tensors`.
If `enqueue_many` is `False`, `tensors` is assumed to represent a
single example. An input tensor with shape `[x, y, z]` will be output
as a tensor with shape `[batch_size, x, y, z]`.

If `enqueue_many` is `True`, `tensors` is assumed to represent a
batch of examples, where the first dimension is indexed by example,
and all members of `tensors` should have the same size in the
first dimension. If an input tensor has shape `[*, x, y, z]`, the
output will have shape `[batch_size, x, y, z]`.
The `capacity` argument controls how long the prefetching is allowed to
grow the queues.
The returned operation is a dequeue operation and will throw
`tf.errors.OutOfRangeError` if the input queue is exhausted. If this
operation is feeding another input queue, its queue runner will catch
this exception. However, if this operation is used in your main thread,
you are responsible for catching it yourself.
For example:

```python
# Creates batches of 32 images and 32 labels.
image_batch, label_batch = tf.train.shuffle_batch(
      [single_image, single_label],
      batch_size=32,
      num_threads=4,
      capacity=50000,
      min_after_dequeue=10000)
```
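The role of `min_after_dequeue` in producing well-mixed batches can be sketched with the standard library. `shuffle_dequeue` is a toy single-threaded model of the shuffling queue, not the actual implementation:

```python
import random

def shuffle_dequeue(stream, capacity, min_after_dequeue, batch_size, seed=0):
    # Toy model of a shuffling queue: keep a buffer of up to `capacity`
    # elements and dequeue uniformly at random from it. A dequeue is only
    # taken while more than `min_after_dequeue` elements remain buffered
    # (or the input is exhausted), which is what guarantees mixing.
    rng = random.Random(seed)
    it = iter(stream)
    buffer, batch = [], []
    exhausted = False
    while len(batch) < batch_size:
        # Refill the buffer from the input stream.
        while not exhausted and len(buffer) < capacity:
            nxt = next(it, None)
            if nxt is None:
                exhausted = True
            else:
                buffer.append(nxt)
        if len(buffer) > min_after_dequeue or exhausted:
            batch.append(buffer.pop(rng.randrange(len(buffer))))
    return batch

batch = shuffle_dequeue(range(100), capacity=20, min_after_dequeue=10,
                        batch_size=5)
print(len(batch))  # 5
```

A larger `min_after_dequeue` forces a larger buffer and therefore better mixing, at the cost of memory and startup delay; in the real API, `capacity` should be larger than `min_after_dequeue` plus enough headroom for the enqueue threads.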
N.B.: You must ensure that either (i) the `shapes` argument is
passed, or (ii) all of the tensors in `tensors` have
fully-defined shapes. A `ValueError` will be raised if neither of
these conditions holds.
If `allow_smaller_final_batch` is `True`, a batch smaller than
`batch_size` is returned when the queue is closed and there are not enough
elements to fill the batch; otherwise the pending elements are discarded.
In addition, all output tensors' static shapes, as accessed via the
`get_shape` method, will have a first `Dimension` value of `None`, and
operations that depend on a fixed `batch_size` will fail.
Note: If `num_epochs` is not `None`, this function creates a local counter
`epochs`. Use `local_variables_initializer()` to initialize local variables.
- `tensors`: The list or dictionary of tensors to enqueue.
- `batch_size`: The new batch size pulled from the queue.
- `capacity`: An integer. The maximum number of elements in the queue.
- `min_after_dequeue`: Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
- `num_threads`: The number of threads enqueuing `tensors`.
- `seed`: Seed for the random shuffling within the queue.
- `enqueue_many`: Whether each tensor in `tensors` is a single example.
- `shapes`: (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors`.
- `allow_smaller_final_batch`: (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
- `shared_name`: (Optional) If set, this queue will be shared under the given name across multiple sessions.
- `name`: (Optional) A name for the operations.
A list or dictionary of tensors with the same types as `tensors`.

- `ValueError`: If the `shapes` are not specified and cannot be inferred from the elements of `tensors`.
tf.train.maybe_shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, keep_input, num_threads=1, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)
{#maybe_shuffle_batch}
Creates batches by randomly shuffling conditionally-enqueued tensors.
See docstring in shuffle_batch
for more details.
- `tensors`: The list or dictionary of tensors to enqueue.
- `batch_size`: The new batch size pulled from the queue.
- `capacity`: An integer. The maximum number of elements in the queue.
- `min_after_dequeue`: Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
- `keep_input`: A `bool` scalar Tensor. This tensor controls whether the input is added to the queue or not. If it evaluates `True`, then `tensors` are added to the queue; otherwise they are dropped. This tensor essentially acts as a filtering mechanism.
- `num_threads`: The number of threads enqueuing `tensors`.
- `seed`: Seed for the random shuffling within the queue.
- `enqueue_many`: Whether each tensor in `tensors` is a single example.
- `shapes`: (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors`.
- `allow_smaller_final_batch`: (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
- `shared_name`: (Optional) If set, this queue will be shared under the given name across multiple sessions.
- `name`: (Optional) A name for the operations.
A list or dictionary of tensors with the same types as `tensors`.

- `ValueError`: If the `shapes` are not specified and cannot be inferred from the elements of `tensors`.
tf.train.shuffle_batch_join(tensors_list, batch_size, capacity, min_after_dequeue, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)
{#shuffle_batch_join}
Creates batches by randomly shuffling tensors.
The tensors_list
argument is a list of tuples of tensors, or a list of
dictionaries of tensors. Each element in the list is treated similarly
to the tensors
argument of tf.train.shuffle_batch()
.
This version enqueues a different list of tensors in different threads.
It adds the following to the current `Graph`:

- A shuffling queue into which tensors from `tensors_list` are enqueued.
- A `dequeue_many` operation to create batches from the queue.
- A `QueueRunner` added to the `QUEUE_RUNNER` collection, to enqueue the tensors from `tensors_list`.
`len(tensors_list)` threads will be started, with thread `i` enqueuing
the tensors from `tensors_list[i]`. `tensors_list[i1][j]` must match
`tensors_list[i2][j]` in type and shape, except in the first dimension if
`enqueue_many` is true.
If `enqueue_many` is `False`, each `tensors_list[i]` is assumed
to represent a single example. An input tensor with shape `[x, y, z]`
will be output as a tensor with shape `[batch_size, x, y, z]`.

If `enqueue_many` is `True`, `tensors_list[i]` is assumed to
represent a batch of examples, where the first dimension is indexed
by example, and all members of `tensors_list[i]` should have the
same size in the first dimension. If an input tensor has shape `[*, x, y, z]`,
the output will have shape `[batch_size, x, y, z]`.
The `capacity` argument controls how long the prefetching is allowed to
grow the queues.
The returned operation is a dequeue operation and will throw
`tf.errors.OutOfRangeError` if the input queue is exhausted. If this
operation is feeding another input queue, its queue runner will catch
this exception. However, if this operation is used in your main thread,
you are responsible for catching it yourself.
If `allow_smaller_final_batch` is `True`, a batch smaller than
`batch_size` is returned when the queue is closed and there are not enough
elements to fill the batch; otherwise the pending elements are discarded.
In addition, all output tensors' static shapes, as accessed via the
`get_shape` method, will have a first `Dimension` value of `None`, and
operations that depend on a fixed `batch_size` will fail.
- `tensors_list`: A list of tuples or dictionaries of tensors to enqueue.
- `batch_size`: An integer. The new batch size pulled from the queue.
- `capacity`: An integer. The maximum number of elements in the queue.
- `min_after_dequeue`: Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
- `seed`: Seed for the random shuffling within the queue.
- `enqueue_many`: Whether each tensor in `tensors_list` is a single example.
- `shapes`: (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
- `allow_smaller_final_batch`: (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
- `shared_name`: (Optional) If set, this queue will be shared under the given name across multiple sessions.
- `name`: (Optional) A name for the operations.
A list or dictionary of tensors with the same number and types as
`tensors_list[i]`.

- `ValueError`: If the `shapes` are not specified and cannot be inferred from the elements of `tensors_list`.
tf.train.maybe_shuffle_batch_join(tensors_list, batch_size, capacity, min_after_dequeue, keep_input, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)
{#maybe_shuffle_batch_join}
Creates batches by randomly shuffling conditionally-enqueued tensors.
See docstring in shuffle_batch_join
for more details.
- `tensors_list`: A list of tuples or dictionaries of tensors to enqueue.
- `batch_size`: An integer. The new batch size pulled from the queue.
- `capacity`: An integer. The maximum number of elements in the queue.
- `min_after_dequeue`: Minimum number of elements in the queue after a dequeue, used to ensure a level of mixing of elements.
- `keep_input`: A `bool` scalar Tensor. If provided, this tensor controls whether the input is added to the queue or not. If it evaluates `True`, then `tensors_list` are added to the queue; otherwise they are dropped. This tensor essentially acts as a filtering mechanism.
- `seed`: Seed for the random shuffling within the queue.
- `enqueue_many`: Whether each tensor in `tensors_list` is a single example.
- `shapes`: (Optional) The shapes for each example. Defaults to the inferred shapes for `tensors_list[i]`.
- `allow_smaller_final_batch`: (Optional) Boolean. If `True`, allow the final batch to be smaller if there are insufficient items left in the queue.
- `shared_name`: (Optional) If set, this queue will be shared under the given name across multiple sessions.
- `name`: (Optional) A name for the operations.
A list or dictionary of tensors with the same number and types as
`tensors_list[i]`.

- `ValueError`: If the `shapes` are not specified and cannot be inferred from the elements of `tensors_list`.
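Putting the pieces together: `maybe_shuffle_batch_join` combines multiple input sources, a `keep_input` filter, and a shuffled dequeue. The overall pipeline can be sketched single-threaded in plain Python (the function name and `keep_fn` are illustrative, not TensorFlow API):

```python
import random

def maybe_shuffle_batch_join_sketch(tensors_list, keep_fn, batch_size, seed=0):
    # 1) Merge all sources (the real op uses one enqueue thread per source).
    merged = [example for source in tensors_list for example in source]
    # 2) Drop examples whose keep_input flag evaluates False.
    kept = [example for example in merged if keep_fn(example)]
    # 3) Shuffle and emit one batch.
    rng = random.Random(seed)
    rng.shuffle(kept)
    return kept[:batch_size]

sources = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
batch = maybe_shuffle_batch_join_sketch(
    sources, keep_fn=lambda x: x % 2 == 1, batch_size=3)
print(sorted(batch))  # a 3-element subset of [1, 3, 5, 7, 9]
```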