
Running Graphs

[TOC]

This library contains classes for launching graphs and executing operations.

The basic usage guide has examples of how a graph is launched in a tf.Session.

Session management


class tf.Session {#Session}

A class for running TensorFlow operations.

A Session object encapsulates the environment in which Operation objects are executed, and Tensor objects are evaluated. For example:

# Build a graph.
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b

# Launch the graph in a session.
sess = tf.Session()

# Evaluate the tensor `c`.
print(sess.run(c))

A session may own resources, such as variables, queues, and readers. It is important to release these resources when they are no longer required. To do this, either invoke the close() method on the session, or use the session as a context manager. The following two examples are equivalent:

# Using the `close()` method.
sess = tf.Session()
sess.run(...)
sess.close()

# Using the context manager.
with tf.Session() as sess:
  sess.run(...)

The ConfigProto protocol buffer exposes various configuration options for a session. For example, to create a session that uses soft constraints for device placement, and log the resulting placement decisions, create a session as follows:

# Launch the graph in a session that allows soft device placement and
# logs the placement decisions.
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
                                        log_device_placement=True))

tf.Session.__init__(target='', graph=None, config=None) {#Session.init}

Creates a new TensorFlow session.

If no graph argument is specified when constructing the session, the default graph will be launched in the session. If you are using more than one graph (created with tf.Graph()) in the same process, you will have to use a different session for each graph, but each graph can be used in multiple sessions. In this case, it is often clearer to pass the graph to be launched explicitly to the session constructor.

Args:
  • target: (Optional.) The execution engine to connect to. Defaults to using an in-process engine. See Distributed TensorFlow for more examples.
  • graph: (Optional.) The Graph to be launched (described above).
  • config: (Optional.) A ConfigProto protocol buffer with configuration options for the session.

tf.Session.run(fetches, feed_dict=None, options=None, run_metadata=None) {#Session.run}

Runs operations and evaluates tensors in fetches.

This method runs one "step" of TensorFlow computation, by running the necessary graph fragment to execute every Operation and evaluate every Tensor in fetches, substituting the values in feed_dict for the corresponding input values.

The fetches argument may be a single graph element, or an arbitrarily nested list, tuple, namedtuple, dict, or OrderedDict containing graph elements at its leaves. A graph element can be one of the following types:

  • An Operation. The corresponding fetched value will be None.
  • A Tensor. The corresponding fetched value will be a numpy ndarray containing the value of that tensor.
  • A SparseTensor. The corresponding fetched value will be a SparseTensorValue containing the value of that sparse tensor.
  • A get_tensor_handle op. The corresponding fetched value will be a numpy ndarray containing the handle of that tensor.
  • A string which is the name of a tensor or operation in the graph.

The value returned by run() has the same shape as the fetches argument, where the leaves are replaced by the corresponding values returned by TensorFlow.

Example:

   a = tf.constant([10, 20])
   b = tf.constant([1.0, 2.0])
   # 'fetches' can be a singleton
   v = session.run(a)
   # v is the numpy array [10, 20]
   # 'fetches' can be a list.
   v = session.run([a, b])
   # v is a Python list with 2 numpy arrays: the numpy array [10, 20] and the
   # 1-D array [1.0, 2.0]
   # 'fetches' can be arbitrary lists, tuples, namedtuple, dicts:
   MyData = collections.namedtuple('MyData', ['a', 'b'])
   v = session.run({'k1': MyData(a, b), 'k2': [b, a]})
   # v is a dict with
   # v['k1'] is a MyData namedtuple with 'a' the numpy array [10, 20] and
   # 'b' the numpy array [1.0, 2.0]
   # v['k2'] is a list with the numpy array [1.0, 2.0] and the numpy array
   # [10, 20].

The optional feed_dict argument allows the caller to override the value of tensors in the graph. Each key in feed_dict can be one of the following types:

  • If the key is a Tensor, the value may be a Python scalar, string, list, or numpy ndarray that can be converted to the same dtype as that tensor. Additionally, if the key is a placeholder, the shape of the value will be checked for compatibility with the placeholder.
  • If the key is a SparseTensor, the value should be a SparseTensorValue.
  • If the key is a nested tuple of Tensors or SparseTensors, the value should be a nested tuple with the same structure that maps to their corresponding values as above.

Each value in feed_dict must be convertible to a numpy array of the dtype of the corresponding key.

The optional options argument expects a RunOptions proto. The options allow controlling the behavior of this particular step (e.g. turning tracing on).

The optional run_metadata argument expects a RunMetadata proto. When appropriate, the non-Tensor output of this step will be collected there. For example, when users turn on tracing in options, the profiled info will be collected into this argument and passed back.

Args:
  • fetches: A single graph element, a list of graph elements, or a dictionary whose values are graph elements or lists of graph elements (described above).
  • feed_dict: A dictionary that maps graph elements to values (described above).
  • options: A RunOptions protocol buffer.
  • run_metadata: A RunMetadata protocol buffer.
Returns:

Either a single value if fetches is a single graph element, or a list of values if fetches is a list, or a dictionary with the same keys as fetches if that is a dictionary (described above).

Raises:
  • RuntimeError: If this Session is in an invalid state (e.g. has been closed).
  • TypeError: If fetches or feed_dict keys are of an inappropriate type.
  • ValueError: If fetches or feed_dict keys are invalid or refer to a Tensor that doesn't exist.

tf.Session.close() {#Session.close}

Closes this session.

Calling this method frees all resources associated with the session.

Raises:

tf.errors.OpError: Or one of its subclasses if an error occurs while closing the TensorFlow session.


tf.Session.graph {#Session.graph}

The graph that was launched in this session.


tf.Session.as_default() {#Session.as_default}

Returns a context manager that makes this object the default session.

Use with the with keyword to specify that calls to Operation.run() or Tensor.eval() should be executed in this session.

c = tf.constant(...)
sess = tf.Session()

with sess.as_default():
  assert tf.get_default_session() is sess
  print(c.eval())

To get the current default session, use tf.get_default_session().

N.B. The as_default context manager does not close the session when you exit the context, and you must close the session explicitly.

c = tf.constant(...)
sess = tf.Session()
with sess.as_default():
  print(c.eval())
# ...
with sess.as_default():
  print(c.eval())

sess.close()

Alternatively, you can use with tf.Session(): to create a session that is automatically closed on exiting the context, including when an uncaught exception is raised.

N.B. The default graph is a property of the current thread. If you create a new thread, and wish to use the default session in that thread, you must explicitly add a with sess.as_default(): in that thread's function.

Returns:

A context manager using this session as the default session.


tf.Session.reset(target, containers=None, config=None) {#Session.reset}

Resets resource containers on target, and closes all connected sessions.

A resource container is distributed across all workers in the same cluster as target. When a resource container on target is reset, resources associated with that container will be cleared. In particular, all Variables in the container will become undefined: they lose their values and shapes.

NOTE: (i) reset() is currently only implemented for distributed sessions. (ii) Any sessions on the master named by target will be closed.

If no resource containers are provided, all containers are reset.

Args:
  • target: The execution engine to connect to.
  • containers: A list of resource container name strings, or None if all the containers are to be reset.
  • config: (Optional.) Protocol buffer with configuration options.
Raises:

tf.errors.OpError: Or one of its subclasses if an error occurs while resetting containers.

Other Methods


tf.Session.__enter__() {#Session.enter}


tf.Session.__exit__(exec_type, exec_value, exec_tb) {#Session.exit}


class tf.InteractiveSession {#InteractiveSession}

A TensorFlow Session for use in interactive contexts, such as a shell.

The only difference from a regular Session is that an InteractiveSession installs itself as the default session on construction. The methods Tensor.eval() and Operation.run() will use that session to run ops.

This is convenient in interactive shells and IPython notebooks, as it avoids having to pass an explicit Session object to run ops.

For example:

sess = tf.InteractiveSession()
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
# We can just use 'c.eval()' without passing 'sess'
print(c.eval())
sess.close()

Note that a regular session installs itself as the default session when it is created in a with statement. The common usage in non-interactive programs is to follow that pattern:

a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
with tf.Session():
  # We can also use 'c.eval()' here.
  print(c.eval())

tf.InteractiveSession.__init__(target='', graph=None, config=None) {#InteractiveSession.init}

Creates a new interactive TensorFlow session.

If no graph argument is specified when constructing the session, the default graph will be launched in the session. If you are using more than one graph (created with tf.Graph()) in the same process, you will have to use a different session for each graph, but each graph can be used in multiple sessions. In this case, it is often clearer to pass the graph to be launched explicitly to the session constructor.

Args:
  • target: (Optional.) The execution engine to connect to. Defaults to using an in-process engine.
  • graph: (Optional.) The Graph to be launched (described above).
  • config: (Optional.) A ConfigProto protocol buffer used to configure the session.

tf.InteractiveSession.close() {#InteractiveSession.close}

Closes an InteractiveSession.


tf.get_default_session() {#get_default_session}

Returns the default session for the current thread.

The returned Session will be the innermost session on which a Session or Session.as_default() context has been entered.

NOTE: The default session is a property of the current thread. If you create a new thread, and wish to use the default session in that thread, you must explicitly add a with sess.as_default(): in that thread's function.

Returns:

The default Session being used in the current thread.

Error classes and convenience functions


class tf.OpError {#OpError}

A generic error that is raised when TensorFlow execution fails.

Whenever possible, the session will raise a more specific subclass of OpError from the tf.errors module.


tf.OpError.op {#OpError.op}

The operation that failed, if known.

N.B. If the failed op was synthesized at runtime, e.g. a Send or Recv op, there will be no corresponding Operation object. In that case, this will return None, and you should instead use the OpError.node_def to discover information about the op.

Returns:

The Operation that failed, or None.


tf.OpError.node_def {#OpError.node_def}

The NodeDef proto representing the op that failed.

Other Methods


tf.OpError.__init__(node_def, op, message, error_code) {#OpError.init}

Creates a new OpError indicating that a particular op failed.

Args:
  • node_def: The node_def_pb2.NodeDef proto representing the op that failed, if known; otherwise None.
  • op: The ops.Operation that failed, if known; otherwise None.
  • message: The message string describing the failure.
  • error_code: The error_codes_pb2.Code describing the error.

tf.OpError.__str__() {#OpError.str}


tf.OpError.error_code {#OpError.error_code}

The integer error code that describes the error.


tf.OpError.message {#OpError.message}

The error message that describes the error.


class tf.errors.CancelledError {#CancelledError}

Raised when an operation or step is cancelled.

For example, a long-running operation (e.g. queue.enqueue()) may be cancelled by running another operation (e.g. queue.close(cancel_pending_enqueues=True)), or by closing the session. A step that is running such a long-running operation will fail by raising CancelledError.


tf.errors.CancelledError.__init__(node_def, op, message) {#CancelledError.init}

Creates a CancelledError.


class tf.errors.UnknownError {#UnknownError}

Unknown error.

An example of where this error may be returned is if a Status value received from another address space belongs to an error-space that is not known to this address space. Also errors raised by APIs that do not return enough error information may be converted to this error.


tf.errors.UnknownError.__init__(node_def, op, message, error_code=2) {#UnknownError.init}

Creates an UnknownError.


class tf.errors.InvalidArgumentError {#InvalidArgumentError}

Raised when an operation receives an invalid argument.

This may occur, for example, if an operation receives an input tensor that has an invalid value or shape. For example, the tf.matmul() op will raise this error if it receives an input that is not a matrix, and the tf.reshape() op will raise this error if the new shape does not match the number of elements in the input tensor.


tf.errors.InvalidArgumentError.__init__(node_def, op, message) {#InvalidArgumentError.init}

Creates an InvalidArgumentError.


class tf.errors.DeadlineExceededError {#DeadlineExceededError}

Raised when a deadline expires before an operation could complete.

This exception is not currently used.


tf.errors.DeadlineExceededError.__init__(node_def, op, message) {#DeadlineExceededError.init}

Creates a DeadlineExceededError.


class tf.errors.NotFoundError {#NotFoundError}

Raised when a requested entity (e.g., a file or directory) was not found.

For example, running the tf.WholeFileReader.read() operation could raise NotFoundError if it receives the name of a file that does not exist.


tf.errors.NotFoundError.__init__(node_def, op, message) {#NotFoundError.init}

Creates a NotFoundError.


class tf.errors.AlreadyExistsError {#AlreadyExistsError}

Raised when an entity that we attempted to create already exists.

For example, running an operation that saves a file (e.g. tf.train.Saver.save()) could potentially raise this exception if an explicit filename for an existing file was passed.


tf.errors.AlreadyExistsError.__init__(node_def, op, message) {#AlreadyExistsError.init}

Creates an AlreadyExistsError.


class tf.errors.PermissionDeniedError {#PermissionDeniedError}

Raised when the caller does not have permission to run an operation.

For example, running the tf.WholeFileReader.read() operation could raise PermissionDeniedError if it receives the name of a file for which the user does not have the read file permission.


tf.errors.PermissionDeniedError.__init__(node_def, op, message) {#PermissionDeniedError.init}

Creates a PermissionDeniedError.


class tf.errors.UnauthenticatedError {#UnauthenticatedError}

The request does not have valid authentication credentials.

This exception is not currently used.


tf.errors.UnauthenticatedError.__init__(node_def, op, message) {#UnauthenticatedError.init}

Creates an UnauthenticatedError.


class tf.errors.ResourceExhaustedError {#ResourceExhaustedError}

Some resource has been exhausted.

For example, this error might be raised if a per-user quota is exhausted, or perhaps the entire file system is out of space.


tf.errors.ResourceExhaustedError.__init__(node_def, op, message) {#ResourceExhaustedError.init}

Creates a ResourceExhaustedError.


class tf.errors.FailedPreconditionError {#FailedPreconditionError}

Operation was rejected because the system is not in a state to execute it.

This exception is most commonly raised when running an operation that reads a tf.Variable before it has been initialized.


tf.errors.FailedPreconditionError.__init__(node_def, op, message) {#FailedPreconditionError.init}

Creates a FailedPreconditionError.


class tf.errors.AbortedError {#AbortedError}

The operation was aborted, typically due to a concurrent action.

For example, running a queue.enqueue() operation may raise AbortedError if a queue.close() operation previously ran.


tf.errors.AbortedError.__init__(node_def, op, message) {#AbortedError.init}

Creates an AbortedError.


class tf.errors.OutOfRangeError {#OutOfRangeError}

Raised when an operation iterates past the valid input range.

This exception is raised in "end-of-file" conditions, such as when a queue.dequeue() operation is blocked on an empty queue, and a queue.close() operation executes.


tf.errors.OutOfRangeError.__init__(node_def, op, message) {#OutOfRangeError.init}

Creates an OutOfRangeError.


class tf.errors.UnimplementedError {#UnimplementedError}

Raised when an operation has not been implemented.

Some operations may raise this error when passed otherwise-valid arguments that they do not currently support. For example, running the tf.nn.max_pool() operation would raise this error if pooling was requested on the batch dimension, because this is not yet supported.


tf.errors.UnimplementedError.__init__(node_def, op, message) {#UnimplementedError.init}

Creates an UnimplementedError.


class tf.errors.InternalError {#InternalError}

Raised when the system experiences an internal error.

This exception is raised when some invariant expected by the runtime has been broken. Catching this exception is not recommended.


tf.errors.InternalError.__init__(node_def, op, message) {#InternalError.init}

Creates an InternalError.


class tf.errors.UnavailableError {#UnavailableError}

Raised when the runtime is currently unavailable.

This exception is not currently used.


tf.errors.UnavailableError.__init__(node_def, op, message) {#UnavailableError.init}

Creates an UnavailableError.


class tf.errors.DataLossError {#DataLossError}

Raised when unrecoverable data loss or corruption is encountered.

For example, this may be raised by running a tf.WholeFileReader.read() operation, if the file is truncated while it is being read.


tf.errors.DataLossError.__init__(node_def, op, message) {#DataLossError.init}

Creates a DataLossError.


tf.errors.exception_type_from_error_code(error_code) {#exception_type_from_error_code}


tf.errors.error_code_from_exception_type(cls) {#error_code_from_exception_type}


tf.errors.raise_exception_on_not_ok_status() {#raise_exception_on_not_ok_status}