I am using the tape.gradient method to optimise some neural networks. It works as expected, but it keeps emitting the warning below whenever I compute the gradient with tape.gradient, multiple times in a single iteration. So somewhere during backpropagation, within a single loop, it is fiddling with complex numbers.
WARNING:tensorflow:You are casting an input of type complex64 to an incompatible dtype float64. This will discard the imaginary part and may not be what you intended.
cost_progress = []
trace_progress = []

for i in range(reps):
    with tf.GradientTape() as tape:
        tape.watch(params)
        loss, trace = cost(params, ratio)

    trace_progress.append(trace)
    cost_progress.append(loss)

    # this is the call that emits the warning
    gradients = tape.gradient(loss, params)
    opt.apply_gradients(zip([gradients], [params]))
Now, params and loss are both tf.float64, yet something of a complex type appears inside tape.gradient(), and I want to cast it to real manually so that this warning stops showing up on my screen. But I cannot find how to do the cast without messing up the gradients.
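For context, here is a standalone illustration of the two kinds of cast I know about (this is not from my actual code): tf.cast on a complex input triggers exactly this warning, whereas tf.math.real extracts the real part silently and is still differentiable:

import tensorflow as tf

z = tf.constant([1.0 + 2.0j], dtype=tf.complex64)

# implicit cast: discards the imaginary part and emits exactly this warning
as_cast = tf.cast(z, tf.float64)

# explicit real part: no warning, still differentiable
as_real = tf.cast(tf.math.real(z), tf.float64)  # tf.math.real gives float32 for complex64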
Brute-forcing it with gradients = tf.cast(tape.gradient(loss, params), tf.float64) doesn't work. I have verified that gradients = tape.gradient(loss, params) is the line producing the warning, and that both loss and params are of type tf.float64.
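In case it clarifies what I am after, this is the kind of fix I am imagining: taking the real part explicitly inside cost(), right where the complex intermediate arises, so the tape records an explicit real-part op instead of doing an implicit cast during backprop. This is only a sketch; my real cost() is more involved, and the FFT, loss, and trace below are hypothetical stand-ins:

import tensorflow as tf

def cost(params, ratio):
    # hypothetical: suppose some op inside the model returns a complex tensor
    complex_intermediate = tf.signal.fft(tf.cast(params, tf.complex128))

    # take the real part explicitly instead of letting TF cast implicitly;
    # tf.math.real is differentiable, so the tape can backprop through it
    real_part = tf.math.real(complex_intermediate)  # float64 for complex128 input

    loss = tf.reduce_sum((real_part - ratio) ** 2)  # hypothetical loss
    trace = tf.reduce_sum(real_part)                # hypothetical "trace"
    return loss, trace

What I don't know is where inside my actual cost() the complex tensor appears, or whether this is even the right place to intervene.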