Explanation of "TensorFlow AutoGraph supports print and assert"


Background

The blog post AutoGraph converts Python into TensorFlow graphs says:

We (AutoGraph) also support constructs like break, continue, and even print and assert. When converted, this snippet’s Python assert converts to a graph that uses the appropriate tf.Assert.

However, Introduction to graphs and tf.function says:

To explain, the print statement is executed when Function runs the original code in order to create the graph in a process known as "tracing" (refer to the Tracing section of the tf.function guide). Tracing captures the TensorFlow operations into a graph, and print is not captured in the graph. That graph is then executed for all three calls without ever running the Python code again.

Question

The first document gives the impression that I can use print and TensorFlow AutoGraph will convert it into TensorFlow operations. However, according to the second document, that is apparently not the case.

Please help me understand whether the sentence in the first document stating "We/AutoGraph support even print and assert" is still correct, or whether I am misunderstanding something.

In my understanding, AutoGraph is what is used under @tf.function and tf.py_function.


1 Answer

Answered by kkm -still wary of SE promises

The documentation is correct: Python print() is never converted to tf.print() during tracing. This is, for one thing, important for debugging and diagnostics: the majority of… eh, well, unintended consequences happen during tracing. print() may be liberally used in the body of the function, and it has its effect only during the tracing phase.
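A minimal sketch (mine, not from the docs; the function and values are arbitrary) showing the difference between the two:

    import tensorflow as tf

    @tf.function
    def f(x):
        print("traced")       # Python print: fires only while the graph is being traced
        tf.print("executed")  # tf.print: captured as a graph op, fires on every call
        return x + 1

    f(tf.constant(1))  # prints "traced" and "executed" (the first call triggers tracing)
    f(tf.constant(2))  # prints only "executed" -- the cached graph is reused
    f(tf.constant(3))  # prints only "executed"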

The reason for this discrepancy is that the first source, a blog post, is six years older than the question, and apparently refers to TensorFlow 1.x. AFAIK, the decorator @autograph.convert was experimental in TF1 and did not make it into TF2; @tf.function subsumes its behavior. But generally, between a blog post and the documentation of a feature, the documentation is the more authoritative.

In addition to the Graph overview that you've linked, there is a more detailed guide, Better performance with tf.function. (I'm not sure what the note about a "very different view of graphs[...] for those [...] who are only familiar with TensorFlow 1.x" in the Graph article is really about.) The tf.function documentation is also a must-read, not least for the links to tutorials and guides it contains.

In my understanding, AutoGraph is the one being used under @tf.function…

That's correct. A function is converted into a GenericFunction object, which can be specialised into several ConcreteFunctions for different tensor shapes or data types; once a ConcreteFunction exists for a given signature, further calls with that signature reuse it without re-tracing. However, the function may be re-traced if the tracing machinery cannot conclusively prove that the Python arguments have not changed, compared to all of the cached traces of the same Python function. AutoGraph proper is more involved with the second pass, reifying and optimising the ConcreteFunction, which (in addition to, basically, metadata) contains a Graph object that can be placed and run on a device, or saved into a completely portable model (with some limitations, mainly concerning TPU devices). The first pass primarily creates Python code rigged to call AutoGraph during the second phase.
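A small sketch (my own; the function and values are arbitrary) of the specialisation and re-tracing behaviour:

    import tensorflow as tf

    @tf.function
    def double(x):
        print("tracing with", x)  # Python side effect: visible only when a new trace is made
        return x * 2

    double(tf.constant(1))    # traces a ConcreteFunction for int32 scalars
    double(tf.constant(2))    # same signature: the cached graph is reused, no "tracing" print
    double(tf.constant(1.0))  # new dtype: a second ConcreteFunction is traced

    # Each specialisation carries its own Graph object.
    cf = double.get_concrete_function(tf.constant(1))
    print(cf.graph)  # a tf.Graph that can be placed on a device or saved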

…and tf.py_function.

tf.py_function is, in effect, a pragma for AutoGraph: it tells it to create a graph op that calls back into Python from the graph. It has no effect in eager mode. (Needless to say, this makes the whole model less portable and unable to run without the full Python runtime, in addition to the slowdown and synchronisation overhead.) This is different from @tf.function, which declares that the function is intended to be transformed by AutoGraph, and causes the function to be traced when it's called, if all the conditions documented in the tf.executing_eagerly article are met. They are both declarative, but their declarations are intended for different moving parts of the framework.
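A minimal sketch (mine; numpy_sin and the values are arbitrary) of how such a callback is declared inside a traced function:

    import numpy as np
    import tensorflow as tf

    def numpy_sin(x):
        # Arbitrary Python/NumPy code; runs in the Python interpreter, not as graph ops.
        return np.sin(x.numpy())

    @tf.function
    def f(x):
        # tf.py_function inserts a single op that calls back into Python at run time.
        y = tf.py_function(func=numpy_sin, inp=[x], Tout=tf.float32)
        return y * 2.0

    print(f(tf.constant(1.0)))  # works, but the graph now needs the Python runtime to run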