
Loops, Conditionals, and AutoGraph: Writing Graph-Friendly TensorFlow Code

Table of contents

  • AutoGraph transformations
  • Limitations
    • Executing Python side effects
    • All outputs of a tf.function must be return values
    • Recursive tf.functions are not supported
  • Known issues
    • Depending on Python global and free variables
    • Depending on Python objects
    • Creating tf.Variables

AutoGraph transformations

AutoGraph is a library that is on by default in tf.function, and it transforms a subset of Python eager code into graph-compatible TensorFlow ops. This includes control flow like if, for, and while.

TensorFlow ops like tf.cond and tf.while_loop continue to work, but control flow is often easier to write and understand when written in Python.

# A simple loop

@tf.function
def f(x):
  while tf.reduce_sum(x) > 1:
    tf.print(x)
    x = tf.tanh(x)
  return x

f(tf.random.uniform([5]))
[0.722626925 0.640327692 0.725044 0.904435039 0.868018746]
[0.61853379 0.565122604 0.620023966 0.718450606 0.700366139]
[0.550106347 0.511768281 0.551144719 0.615948677 0.604600191]
[0.500599921 0.471321791 0.501377642 0.548301 0.540314913]
[0.462588847 0.439266682 0.463199914 0.499245733 0.493226349]
[0.432191819 0.413036436 0.432688653 0.461523771 0.456773371]
[0.407151431 0.391047835 0.407565802 0.431325316 0.427450746]
[0.386051297 0.372263193 0.386403859 0.406428277 0.403188676]
[0.367951065 0.355969697 0.368255854 0.38543576 0.382673979]
[0.352198243 0.341659099 0.352465183 0.367418766 0.365027398]
[0.338323593 0.328957736 0.33856 0.351731867 0.349634588]
[0.325979948 0.317583948 0.326191217 0.337910533 0.336051434]
[0.314903945 0.307320684 0.315094262 0.325610697 0.323947728]
[0.304891765 0.297997624 0.30506441 0.314571291 0.313072115]
[0.295782804 0.289479077 0.29594034 0.304590017 0.303229302]
[0.287448555 0.281655282 0.287593067 0.295507431 0.294265062]
[0.279784769 0.274436355 0.279917955 0.287195921 0.286055595]
[0.272705853 0.267748028 0.272829145 0.279551893 0.278500348]
[0.266140789 0.261528105 0.266255379 0.272490293 0.271516532]
[0.26003018 0.255724251 0.260137022 0.265940517 0.265035421]
[0.254323781 0.250291914 0.254423678 0.259843439 0.258999288]
[0.248978764 0.245193034 0.249072418 0.25414905 0.253359258]
[0.243958414 0.240394741 0.244046524 0.248814836 0.248073786]
[0.239231125 0.235868543 0.239314198 0.243804231 0.24310714]
[0.234769359 0.231589615 0.234847859 0.239085764 0.238428399]
[0.230549142 0.227536201 0.230623439 0.234632015 0.234010741]
[0.226549357 0.223689109 0.22661984 0.23041907 0.229830697]
[0.222751439 0.220031396 0.222818434 0.226425976 0.225867674]
[0.21913895 0.216548 0.219202697 0.222634196 0.222103462]
[0.215697214 0.213225439 0.215757981 0.219027311 0.218521982]
[0.212413162 0.210051686 0.212471202 0.215590775 0.215108871]
[0.209275112 0.207015961 0.209330618 0.212311521 0.211851314]
[0.206272557 0.204108506 0.206325665 0.209177911 0.20873782]
[0.203395993 0.201320544 0.203446865 0.206179485 0.20575805]
[0.200636819 0.198644072 0.200685605 0.203306749 0.202902704]




If you’re curious you can inspect the code AutoGraph generates.

print(tf.autograph.to_code(f.python_function))
def tf__f(x):
    with ag__.FunctionScope('f', 'fscope', ag__.ConversionOptions(recursive=True, user_requested=True, optional_features=(), internal_convert_user_code=True)) as fscope:
        do_return = False
        retval_ = ag__.UndefinedReturnValue()

        def get_state():
            return (x,)

        def set_state(vars_):
            nonlocal x
            (x,) = vars_

        def loop_body():
            nonlocal x
            ag__.converted_call(ag__.ld(tf).print, (ag__.ld(x),), None, fscope)
            x = ag__.converted_call(ag__.ld(tf).tanh, (ag__.ld(x),), None, fscope)

        def loop_test():
            return ag__.converted_call(ag__.ld(tf).reduce_sum, (ag__.ld(x),), None, fscope) > 1
        ag__.while_stmt(loop_test, loop_body, get_state, set_state, ('x',), {})
        try:
            do_return = True
            retval_ = ag__.ld(x)
        except:
            do_return = False
            raise
        return fscope.ret(retval_, do_return)

Conditionals

AutoGraph will convert some if statements into the equivalent tf.cond calls. This substitution is made if the condition is a Tensor. Otherwise, the if statement is executed as a Python conditional.

A Python conditional executes during tracing, so exactly one branch of the conditional will be added to the graph. Without AutoGraph, this traced graph would be unable to take the alternate branch if there is data-dependent control flow.

tf.cond traces and adds both branches of the conditional to the graph, dynamically selecting a branch at execution time. Tracing can have unintended side effects; check out AutoGraph tracing effects for more information.

@tf.function
def fizzbuzz(n):
  for i in tf.range(1, n + 1):
    print('Tracing for loop')
    if i % 15 == 0:
      print('Tracing fizzbuzz branch')
      tf.print('fizzbuzz')
    elif i % 3 == 0:
      print('Tracing fizz branch')
      tf.print('fizz')
    elif i % 5 == 0:
      print('Tracing buzz branch')
      tf.print('buzz')
    else:
      print('Tracing default branch')
      tf.print(i)

fizzbuzz(tf.constant(5))
fizzbuzz(tf.constant(20))
Tracing for loop
Tracing fizzbuzz branch
Tracing fizz branch
Tracing buzz branch
Tracing default branch
1
2
fizz
4
buzz
1
2
fizz
4
buzz
fizz
7
8
fizz
buzz
11
fizz
13
14
fizzbuzz
16
17
fizz
19
buzz

See the reference documentation for additional restrictions on AutoGraph-converted if statements.

Loops

AutoGraph will convert some for and while statements into the equivalent TensorFlow looping ops, like tf.while_loop. If not converted, the for or while loop is executed as a Python loop.

This substitution is made in the following situations:

  • for x in y: if y is a Tensor, convert to tf.while_loop. In the special case where y is a tf.data.Dataset, a combination of tf.data.Dataset ops are generated.
  • while condition: if the condition is a Tensor, convert to tf.while_loop.

A Python loop executes during tracing, adding additional ops to the tf.Graph for every iteration of the loop.

A TensorFlow loop traces the body of the loop, and dynamically selects how many iterations to run at execution time. The loop body only appears once in the generated tf.Graph.
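
As an illustrative sketch of the difference (the function names here are only for demonstration), looping over a Python range is unrolled into the graph at trace time, while looping over a tf.range is converted into a single tf.while_loop:

@tf.function
def unrolled_loop(x):
  # Python `range`: the loop is unrolled during tracing, one add op per iteration.
  for _ in range(10):
    x += 1
  return x

@tf.function
def dynamic_loop(x):
  # `tf.range`: AutoGraph converts this into a single tf.while_loop op.
  for _ in tf.range(10):
    x += 1
  return x

unrolled_loop(tf.constant(0))  # Graph contains ten separate add ops.
dynamic_loop(tf.constant(0))   # Graph contains one while loop.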

See the reference documentation for additional restrictions on AutoGraph-converted for and while statements.

Looping over Python data

A common pitfall is to loop over Python/NumPy data within a tf.function. This loop will execute during the tracing process, adding a copy of your model to the tf.Graph for each iteration of the loop.

If you want to wrap the entire training loop in tf.function, the safest way to do this is to wrap your data as a tf.data.Dataset so that AutoGraph will dynamically unroll the training loop.

def measure_graph_size(f, *args):
  g = f.get_concrete_function(*args).graph
  print("{}({}) contains {} nodes in its graph".format(
      f.__name__, ', '.join(map(str, args)), len(g.as_graph_def().node)))

@tf.function
def train(dataset):
  loss = tf.constant(0)
  for x, y in dataset:
    loss += tf.abs(y - x) # Some dummy computation.
  return loss

small_data = [(1, 1)] * 3
big_data = [(1, 1)] * 10

measure_graph_size(train, small_data)
measure_graph_size(train, big_data)

measure_graph_size(train, tf.data.Dataset.from_generator(
    lambda: small_data, (tf.int32, tf.int32)))
measure_graph_size(train, tf.data.Dataset.from_generator(
    lambda: big_data, (tf.int32, tf.int32)))
train([(1, 1), (1, 1), (1, 1)]) contains 11 nodes in its graph
train([(1, 1), (1, 1), (1, 1), (1, 1), (1, 1), (1, 1), (1, 1), (1, 1), (1, 1), (1, 1)]) contains 32 nodes in its graph
train(<_FlatMapDataset element_spec=(TensorSpec(shape=, dtype=tf.int32, name=None), TensorSpec(shape=, dtype=tf.int32, name=None))>) contains 6 nodes in its graph
train(<_FlatMapDataset element_spec=(TensorSpec(shape=, dtype=tf.int32, name=None), TensorSpec(shape=, dtype=tf.int32, name=None))>) contains 6 nodes in its graph

When wrapping Python/NumPy data in a Dataset, be mindful of tf.data.Dataset.from_generator versus tf.data.Dataset.from_tensor_slices. The former will keep the data in Python and fetch it via tf.py_function, which can have performance implications, whereas the latter will bundle a copy of the data as one large tf.constant() node in the graph, which can have memory implications.
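
As a rough sketch of the two options (the data here is only illustrative):

data = [(1, 1)] * 10

# from_generator keeps the data in Python and fetches it via tf.py_function.
ds_generator = tf.data.Dataset.from_generator(
    lambda: data, (tf.int32, tf.int32))

# from_tensor_slices embeds a copy of the data as constants in the graph.
ds_slices = tf.data.Dataset.from_tensor_slices(([1] * 10, [1] * 10))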

Reading data from files via TFRecordDataset, CsvDataset, etc. is the most effective way to consume data, as then TensorFlow itself can manage the asynchronous loading and prefetching of the data, without having to involve Python. To learn more, see the tf.data guide on building TensorFlow input pipelines.
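
For example, a minimal file-based input pipeline might look like the sketch below; the file name and feature spec are hypothetical placeholders.

# Hypothetical feature spec for the records in "train.tfrecord".
feature_spec = {"value": tf.io.FixedLenFeature([], tf.float32)}

def parse_example_fn(serialized):
  # Parse one serialized tf.train.Example record.
  return tf.io.parse_single_example(serialized, feature_spec)

dataset = (
    tf.data.TFRecordDataset(["train.tfrecord"])  # hypothetical file name
    .map(parse_example_fn)
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)
)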

Accumulating values in a loop

A common pattern is to accumulate intermediate values from a loop. Normally, this is accomplished by appending to a Python list or adding entries to a Python dictionary. However, as these are Python side effects, they will not work as expected in a dynamically unrolled loop. Use tf.TensorArray to accumulate results from a dynamically unrolled loop.

batch_size = 2
seq_len = 3
feature_size = 4

def rnn_step(inp, state):
  return inp + state

@tf.function
def dynamic_rnn(rnn_step, input_data, initial_state):
  # [batch, time, features] -> [time, batch, features]
  input_data = tf.transpose(input_data, [1, 0, 2])
  max_seq_len = input_data.shape[0]

  states = tf.TensorArray(tf.float32, size=max_seq_len)
  state = initial_state
  for i in tf.range(max_seq_len):
    state = rnn_step(input_data[i], state)
    states = states.write(i, state)
  return tf.transpose(states.stack(), [1, 0, 2])

dynamic_rnn(rnn_step,
            tf.random.uniform([batch_size, seq_len, feature_size]),
            tf.zeros([batch_size, feature_size]))

Limitations

tf.function has a few limitations by design that you should be aware of when converting a Python function to a tf.function.

Executing Python side effects

Side effects, like printing, appending to lists, and mutating globals, can behave unexpectedly inside a tf.function, sometimes executing twice or not at all. They only happen the first time you call a tf.function with a set of inputs. Afterwards, the traced tf.Graph is reexecuted, without executing the Python code.

The general rule of thumb is to avoid relying on Python side effects in your logic and only use them to debug your traces. Otherwise, TensorFlow APIs like tf.data, tf.print, tf.summary, tf.Variable.assign, and tf.TensorArray are the best way to ensure your code will be executed by the TensorFlow runtime with each call.

@tf.function
def f(x):
  print("Traced with", x)
  tf.print("Executed with", x)

f(1)
f(1)
f(2)
Traced with 1
Executed with 1
Executed with 1
Traced with 2
Executed with 2

If you would like to execute Python code during each invocation of a tf.function, tf.py_function is an exit hatch. The drawbacks of tf.py_function are that it's not portable or particularly performant, cannot be saved with SavedModel, and does not work well in distributed (multi-GPU, TPU) setups. Also, since tf.py_function has to be wired into the graph, it casts all inputs/outputs to tensors.

@tf.py_function(Tout=tf.float32)
def py_plus(x, y):
  print('Executing eagerly.')
  return x + y

@tf.function
def tf_wrapper(x, y):
  print('Tracing.')
  return py_plus(x, y)

The tf.function will trace the first time:

tf_wrapper(tf.constant(1.0), tf.constant(2.0)).numpy()
Tracing.
Executing eagerly.
3.0

But the tf.py_function inside executes eagerly every time:

tf_wrapper(tf.constant(1.0), tf.constant(2.0)).numpy()
Executing eagerly.
3.0

Changing Python global and free variables

Changing Python global and free variables counts as a Python side effect, so it only happens during tracing.

external_list = []

@tf.function
def side_effect(x):
  print('Python side effect')
  external_list.append(x)

side_effect(1)
side_effect(1)
side_effect(1)
# The list append only happened once!
assert len(external_list) == 1
Python side effect

Sometimes unexpected behaviors are very hard to notice. In the example below, the counter is intended to safeguard the increment of a variable. However, because it is a Python integer and not a TensorFlow object, its value is captured during the first trace. When the tf.function is used, the assign_add will be recorded unconditionally in the underlying graph. Therefore v will increase by 1 every time the tf.function is called. This issue is common among users that try to migrate their graph-mode TensorFlow code to TensorFlow 2 using tf.function decorators, when Python side effects (the counter in the example) are used to determine what ops to run (assign_add in the example). Usually, users realize this only after seeing suspicious numerical results, or significantly lower performance than expected (for example, if the guarded operation is very costly).

class Model(tf.Module):
  def __init__(self):
    self.v = tf.Variable(0)
    self.counter = 0

  @tf.function
  def __call__(self):
    if self.counter == 0:
      # A python side-effect
      self.counter += 1
      self.v.assign_add(1)

    return self.v

m = Model()
for n in range(3):
  print(m().numpy()) # prints 1, 2, 3
1
2
3

A solution to achieve the expected behavior is to use tf.init_scope to lift the operations outside of the function graph. This ensures that the variable increment is only done once during tracing time. It should be noted that init_scope has other side effects, including cleared control flow and gradient tape. Sometimes the usage of init_scope can become too complex to manage realistically.

class Model(tf.Module):
  def __init__(self):
    self.v = tf.Variable(0)
    self.counter = 0

  @tf.function
  def __call__(self):
    if self.counter == 0:
      # Lifts ops out of function-building graphs
      with tf.init_scope():
        self.counter += 1
        self.v.assign_add(1)

    return self.v

m = Model()
for n in range(3):
  print(m().numpy()) # prints 1, 1, 1
1
1
1

In summary, as a rule of thumb, you should avoid mutating Python objects such as integers or containers like lists that live outside the tf.function. Instead, use arguments and TF objects. For example, the section "Accumulating values in a loop" has one example of how list-like operations can be implemented.

You can, in some cases, capture and manipulate state if it is a tf.Variable. This is how the weights of Keras models are updated with repeated calls to the same ConcreteFunction.
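
A minimal sketch of that pattern: a tf.Variable captured by a tf.function can be mutated in place, and the change is visible on every call:

v = tf.Variable(1.0)

@tf.function
def add_one():
  v.assign_add(1.0)  # Mutating a captured tf.Variable works across calls.
  return v

add_one()  # 2.0
add_one()  # 3.0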

Using Python iterators and generators

Many Python features, such as generators and iterators, rely on the Python runtime to keep track of state. In general, while these constructs work as expected in eager mode, they are examples of Python side effects and therefore only happen during tracing.

@tf.function
def buggy_consume_next(iterator):
  tf.print("Value:", next(iterator))

iterator = iter([1, 2, 3])
buggy_consume_next(iterator)
# This reuses the first value from the iterator, rather than consuming the next value.
buggy_consume_next(iterator)
buggy_consume_next(iterator)
Value: 1
Value: 1
Value: 1

Just like how TensorFlow has a specialized tf.TensorArray for list constructs, it has a specialized tf.data.Iterator for iteration constructs. See the section on AutoGraph transformations for an overview. Also, the tf.data API can help implement generator patterns:

@tf.function
def good_consume_next(iterator):
  # This is ok, iterator is a tf.data.Iterator
  tf.print("Value:", next(iterator))

ds = tf.data.Dataset.from_tensor_slices([1, 2, 3])
iterator = iter(ds)
good_consume_next(iterator)
good_consume_next(iterator)
good_consume_next(iterator)
Value: 1
Value: 2
Value: 3

All outputs of a tf.function must be return values

With the exception of tf.Variables, a tf.function must return all its outputs. Attempting to directly access any tensors from a function without going through return values causes "leaks".

For example, the function below "leaks" the tensor a through the Python global x:

x = None

@tf.function
def leaky_function(a):
  global x
  x = a + 1  # Bad - leaks local tensor
  return a + 2

correct_a = leaky_function(tf.constant(1))

print(correct_a.numpy())  # Good - value obtained from function's returns
try:
  x.numpy()  # Bad - tensor leaked from inside the function, cannot be used here
except AttributeError as expected:
  print(expected)
3
'SymbolicTensor' object has no attribute 'numpy'

This is true even if the leaked value is also returned:

@tf.function
def leaky_function(a):
  global x
  x = a + 1  # Bad - leaks local tensor
  return x  # Good - uses local tensor

correct_a = leaky_function(tf.constant(1))

print(correct_a.numpy())  # Good - value obtained from function's returns
try:
  x.numpy()  # Bad - tensor leaked from inside the function, cannot be used here
except AttributeError as expected:
  print(expected)

@tf.function
def captures_leaked_tensor(b):
  b += x  # Bad - `x` is leaked from `leaky_function`
  return b

with assert_raises(TypeError):
  captures_leaked_tensor(tf.constant(2))
2
'SymbolicTensor' object has no attribute 'numpy'
Caught expected exception 
  :
Traceback (most recent call last):
  File "/tmpfs/tmp/ipykernel_167534/3551158538.py", line 8, in assert_raises
    yield
  File "/tmpfs/tmp/ipykernel_167534/566849597.py", line 21, in 
    captures_leaked_tensor(tf.constant(2))
TypeError:  is out of scope and cannot be used here. Use return values, explicit Python locals or TensorFlow collections to access it.
Please see https://www.tensorflow.org/guide/function#all_outputs_of_a_tffunction_must_be_return_values for more information.

 was defined here:
    File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/ipykernel_launcher.py", line 18, in 
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/traitlets/config/application.py", line 1075, in launch_instance
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/ipykernel/kernelapp.py", line 739, in start
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tornado/platform/asyncio.py", line 205, in start
    File "/usr/lib/python3.9/asyncio/base_events.py", line 601, in run_forever
    File "/usr/lib/python3.9/asyncio/base_events.py", line 1905, in _run_once
    File "/usr/lib/python3.9/asyncio/events.py", line 80, in _run
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/ipykernel/kernelbase.py", line 545, in dispatch_queue
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/ipykernel/kernelbase.py", line 534, in process_one
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/ipykernel/kernelbase.py", line 437, in dispatch_shell
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/ipykernel/ipkernel.py", line 362, in execute_request
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/ipykernel/kernelbase.py", line 778, in execute_request
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/ipykernel/ipkernel.py", line 449, in do_execute
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/ipykernel/zmqshell.py", line 549, in run_cell
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 3048, in run_cell
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 3103, in _run_cell
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/IPython/core/async_helpers.py", line 129, in _pseudo_sync_runner
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 3308, in run_cell_async
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 3490, in run_ast_nodes
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 3550, in run_code
    File "/tmpfs/tmp/ipykernel_167534/566849597.py", line 7, in 
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py", line 150, in error_handler
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 833, in __call__
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 889, in _call
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 696, in _initialize
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py", line 178, in trace_function
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py", line 283, in _maybe_define_function
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py", line 310, in _create_concrete_function
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/framework/func_graph.py", line 1059, in func_graph_from_py_func
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 599, in wrapped_fn
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/eager/polymorphic_function/autograph_util.py", line 41, in autograph_handler
    File "/tmpfs/tmp/ipykernel_167534/566849597.py", line 4, in leaky_function
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py", line 150, in error_handler
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/framework/override_binary_operator.py", line 113, in binary_op_wrapper
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/ops/tensor_math_operator_overrides.py", line 28, in _add_dispatch_factory
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py", line 150, in error_handler
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/util/dispatch.py", line 1260, in op_dispatch_handler
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/ops/math_ops.py", line 1701, in _add_dispatch
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/ops/gen_math_ops.py", line 490, in add_v2
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/framework/op_def_library.py", line 796, in _apply_op_helper
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/framework/func_graph.py", line 670, in _create_op_internal
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/framework/ops.py", line 2682, in _create_op_internal
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/framework/ops.py", line 1177, in from_node_def

The tensor  cannot be accessed from here, because it was defined in FuncGraph(name=leaky_function, id=139959630636096), which is out of scope.

Leaks such as this usually occur when you use Python statements or data structures. In addition to leaking inaccessible tensors, such statements are also likely wrong because they count as Python side effects, and are not guaranteed to execute at every function call.

Common ways to leak local tensors also include mutating an external Python collection, or an object:

class MyClass:

  def __init__(self):
    self.field = None

external_list = []
external_object = MyClass()

def leaky_function():
  a = tf.constant(1)
  external_list.append(a)  # Bad - leaks tensor
  external_object.field = a  # Bad - leaks tensor
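
A minimal non-leaky version of the sketch above simply returns the tensor instead of stashing it in outer Python state:

@tf.function
def not_leaky_function():
  a = tf.constant(1)
  return a  # Good - callers get the value through the return value.

result = not_leaky_function()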

Recursive tf.functions are not supported

Recursive tf.functions are not supported and could cause infinite loops. For example,

@tf.function
def recursive_fn(n):
  if n > 0:
    return recursive_fn(n - 1)
  else:
    return 1

with assert_raises(Exception):
  recursive_fn(tf.constant(5))  # Bad - maximum recursion error.

Even if a recursive tf.function seems to work, the Python function will be traced multiple times and could have performance implications. For example,

@tf.function
def recursive_fn(n):
  if n > 0:
    print('tracing')
    return recursive_fn(n - 1)
  else:
    return 1

recursive_fn(5)  # Warning - multiple tracings
tracing
tracing
tracing
tracing
tracing
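
One way to avoid recursion entirely, sketched below, is to express the computation as a loop over Tensors so that AutoGraph emits a single tf.while_loop instead of tracing the function once per recursion level:

@tf.function
def iterative_fn(n):
  # `n` is a Tensor, so AutoGraph converts this into one tf.while_loop.
  while n > 0:
    n -= 1
  return 1

iterative_fn(tf.constant(5))  # Traced once.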

Known issues

If your tf.function is not evaluating correctly, the error may be explained by these known issues which are planned to be fixed in the future.

Depending on Python global and free variables

tf.function creates a new ConcreteFunction when called with a new value of a Python argument. However, it does not do that for the Python closure, globals, or nonlocals of that tf.function. If their value changes in between calls to the tf.function, the tf.function will still use the values they had when it was traced. This is different from how regular Python functions work.

For that reason, you should follow a functional programming style that uses arguments instead of closing over outer names.

@tf.function
def buggy_add():
  return 1 + foo

@tf.function
def recommended_add(foo):
  return 1 + foo

foo = 1
print("Buggy:", buggy_add())
print("Correct:", recommended_add(foo))
Buggy: tf.Tensor(2, shape=(), dtype=int32)
Correct: tf.Tensor(2, shape=(), dtype=int32)
print("Updating the value of `foo` to 100!")
foo = 100
print("Buggy:", buggy_add())  # Did not change!
print("Correct:", recommended_add(foo))
Updating the value of `foo` to 100!
Buggy: tf.Tensor(2, shape=(), dtype=int32)
Correct: tf.Tensor(101, shape=(), dtype=int32)

Another way to update a global value is to make it a tf.Variable and use the Variable.assign method instead.

@tf.function
def variable_add():
  return 1 + foo

foo = tf.Variable(1)
print("Variable:", variable_add())
Variable: tf.Tensor(2, shape=(), dtype=int32)
print("Updating the value of `foo` to 100!")
foo.assign(100)
print("Variable:", variable_add())
Updating the value of `foo` to 100!
Variable: tf.Tensor(101, shape=(), dtype=int32)

Depending on Python objects

Passing custom Python objects as arguments to tf.function is supported but has certain limitations.

For maximum feature coverage, consider transforming the objects into Extension types before passing them to tf.function. You can also use Python primitives and tf.nest-compatible structures.
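
As a sketch, a plain dict of Tensors is tf.nest-compatible, so tf.function treats its values as part of the input signature (the key names here are illustrative):

@tf.function
def evaluate_params(params, x):
  return params['weight'] * x + params['bias']

evaluate_params({'weight': tf.constant(2.0), 'bias': tf.constant(0.0)},
                tf.constant(10.0))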

However, as covered in the rules of tracing, when a custom TraceType is not provided by the custom Python class, tf.function is forced to use instance-based equality, which means it will not create a new trace when you pass the same object with modified attributes.

class SimpleModel(tf.Module):
  def __init__(self):
    # These values are *not* tf.Variables.
    self.bias = 0.
    self.weight = 2.

@tf.function
def evaluate(model, x):
  return model.weight * x + model.bias

simple_model = SimpleModel()
x = tf.constant(10.)
print(evaluate(simple_model, x))
tf.Tensor(20.0, shape=(), dtype=float32)
print("Adding bias!")
simple_model.bias += 5.0
print(evaluate(simple_model, x))  # Didn't change :(
Adding bias!
tf.Tensor(20.0, shape=(), dtype=float32)

Using the same tf.function to evaluate the modified instance of the model will be buggy since it still has the same instance-based TraceType as the original model.

For that reason, you're recommended to write your tf.function to avoid depending on mutable object attributes, or to implement the Tracing Protocol for your objects to inform tf.function about such attributes.

If that is not possible, one workaround is to make new tf.functions each time you modify your object to force retracing:

def evaluate(model, x):
  return model.weight * x + model.bias

new_model = SimpleModel()
evaluate_no_bias = tf.function(evaluate).get_concrete_function(new_model, x)
# Don't pass in `new_model`. `tf.function` already captured its state during tracing.
print(evaluate_no_bias(x))
tf.Tensor(20.0, shape=(), dtype=float32)
print("Adding bias!")
new_model.bias += 5.0
# Create new `tf.function` and `ConcreteFunction` since you modified `new_model`.
evaluate_with_bias = tf.function(evaluate).get_concrete_function(new_model, x)
print(evaluate_with_bias(x)) # Don't pass in `new_model`.
Adding bias!
tf.Tensor(25.0, shape=(), dtype=float32)

As retracing can be expensive, you can use tf.Variables as object attributes, which can be mutated (but not changed, careful!) for a similar effect without needing a retrace.

class BetterModel:

  def __init__(self):
    self.bias = tf.Variable(0.)
    self.weight = tf.Variable(2.)

@tf.function
def evaluate(model, x):
  return model.weight * x + model.bias

better_model = BetterModel()
print(evaluate(better_model, x))
tf.Tensor(20.0, shape=(), dtype=float32)
print("Adding bias!")
better_model.bias.assign_add(5.0)  # Note: instead of better_model.bias += 5
print(evaluate(better_model, x))  # This works!
Adding bias!
tf.Tensor(25.0, shape=(), dtype=float32)

Creating tf.Variables

tf.function only supports singleton tf.Variables created once on the first call and reused across subsequent function calls. The code snippet below would create a new tf.Variable in every function call, which results in a ValueError exception.

Example:

@tf.function
def f(x):
  v = tf.Variable(1.0)
  return v

with assert_raises(ValueError):
  f(1.0)
Caught expected exception 
  :
Traceback (most recent call last):
  File "/tmpfs/tmp/ipykernel_167534/3551158538.py", line 8, in assert_raises
    yield
  File "/tmpfs/tmp/ipykernel_167534/3018268426.py", line 7, in 
    f(1.0)
ValueError: in user code:

    File "/tmpfs/tmp/ipykernel_167534/3018268426.py", line 3, in f  *
        v = tf.Variable(1.0)

    ValueError: tf.function only supports singleton tf.Variables created on the first call. Make sure the tf.Variable is only created once or created outside tf.function. See https://www.tensorflow.org/guide/function#creating_tfvariables for more information.

A common pattern used to work around this limitation is to start with a Python None value, then conditionally create the tf.Variable if the value is None:

class Count(tf.Module):
  def __init__(self):
    self.count = None

  @tf.function
  def __call__(self):
    if self.count is None:
      self.count = tf.Variable(0)
    return self.count.assign_add(1)

c = Count()
print(c())
print(c())
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)

Using with multiple Keras optimizers

You may encounter ValueError: tf.function only supports singleton tf.Variables created on the first call. when using more than one Keras optimizer with a tf.function. This error occurs because optimizers internally create tf.Variables when they apply gradients for the first time.

opt1 = tf.keras.optimizers.Adam(learning_rate = 1e-2)
opt2 = tf.keras.optimizers.Adam(learning_rate = 1e-3)

@tf.function
def train_step(w, x, y, optimizer):
   with tf.GradientTape() as tape:
       L = tf.reduce_sum(tf.square(w*x - y))
   gradients = tape.gradient(L, [w])
   optimizer.apply_gradients(zip(gradients, [w]))

w = tf.Variable(2.)
x = tf.constant([-1.])
y = tf.constant([2.])

train_step(w, x, y, opt1)
print("Calling `train_step` with different optimizer...")
with assert_raises(ValueError):
  train_step(w, x, y, opt2)
Calling `train_step` with different optimizer...
Caught expected exception 
  :
Traceback (most recent call last):
  File "/tmpfs/tmp/ipykernel_167534/3551158538.py", line 8, in assert_raises
    yield
  File "/tmpfs/tmp/ipykernel_167534/950644149.py", line 18, in 
    train_step(w, x, y, opt2)
ValueError: in user code:

    File "/tmpfs/tmp/ipykernel_167534/950644149.py", line 9, in train_step  *
        optimizer.apply_gradients(zip(gradients, [w]))
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/keras/src/optimizers/base_optimizer.py", line 291, in apply_gradients  **
        self.apply(grads, trainable_variables)
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/keras/src/optimizers/base_optimizer.py", line 330, in apply
        self.build(trainable_variables)
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/keras/src/optimizers/adam.py", line 97, in build
        self.add_variable_from_reference(
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/keras/src/backend/tensorflow/optimizer.py", line 36, in add_variable_from_reference
        return super().add_variable_from_reference(
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/keras/src/optimizers/base_optimizer.py", line 227, in add_variable_from_reference
        return self.add_variable(
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/keras/src/optimizers/base_optimizer.py", line 201, in add_variable
        variable = backend.Variable(
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/keras/src/backend/common/variables.py", line 163, in __init__
        self._initialize_with_initializer(initializer)
    File "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/keras/src/backend/tensorflow/core.py", line 40, in _initialize_with_initializer
        self._value = tf.Variable(

    ValueError: tf.function only supports singleton tf.Variables created on the first call. Make sure the tf.Variable is only created once or created outside tf.function. See https://www.tensorflow.org/guide/function#creating_tfvariables for more information.

If you need to change a stateful object between calls, it's simplest to define a tf.Module subclass, and create instances to hold those objects:

class TrainStep(tf.Module):
  def __init__(self, optimizer):
    self.optimizer = optimizer

  @tf.function
  def __call__(self, w, x, y):
    with tf.GradientTape() as tape:
       L = tf.reduce_sum(tf.square(w*x - y))
    gradients = tape.gradient(L, [w])
    self.optimizer.apply_gradients(zip(gradients, [w]))


opt1 = tf.keras.optimizers.Adam(learning_rate = 1e-2)
opt2 = tf.keras.optimizers.Adam(learning_rate = 1e-3)

train_o1 = TrainStep(opt1)
train_o2 = TrainStep(opt2)

train_o1(w, x, y)
train_o2(w, x, y)

You can also do this manually by creating multiple instances of the @tf.function wrapper, one for each optimizer:

opt1 = tf.keras.optimizers.Adam(learning_rate = 1e-2)
opt2 = tf.keras.optimizers.Adam(learning_rate = 1e-3)

# Not a tf.function.
def train_step(w, x, y, optimizer):
   with tf.GradientTape() as tape:
       L = tf.reduce_sum(tf.square(w*x - y))
   gradients = tape.gradient(L, [w])
   optimizer.apply_gradients(zip(gradients, [w]))

w = tf.Variable(2.)
x = tf.constant([-1.])
y = tf.constant([2.])

# Make a new tf.function and ConcreteFunction for each optimizer.
train_step_1 = tf.function(train_step)
train_step_2 = tf.function(train_step)
for i in range(10):
  if i % 2 == 0:
    train_step_1(w, x, y, opt1)
  else:
    train_step_2(w, x, y, opt2)

Using multiple Keras models

You may also encounter ValueError: tf.function only supports singleton tf.Variables created on the first call. when passing different model instances to the same tf.function.

This error occurs because Keras models (which do not have their input shape defined) and Keras layers create tf.Variables when they are first called. You may be attempting to initialize those variables inside a tf.function, which has already been called. To avoid the error, try calling model.build(input_shape) to initialize all the weights before training the model.
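
A minimal sketch of that workaround (the layer sizes and input shape are illustrative):

model_a = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model_b = tf.keras.Sequential([tf.keras.layers.Dense(1)])

# Build both models up front so their tf.Variables exist before the
# shared tf.function is traced.
model_a.build(input_shape=(None, 4))
model_b.build(input_shape=(None, 4))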

Originally published on the TensorFlow website, this article appears here under a new headline and is licensed under CC BY 4.0. Code samples are shared under the Apache 2.0 License.
