tf.gradients: ValueError: Shapes must be equal rank, but are 2 and 1
I need to get the gradients of the weights and biases with tf.gradients():
x = tf.placeholder(tf.float32, [batch_size, x_train.shape[1]])
y = tf.placeholder(tf.float32, [batch_size, y_train.shape[1]])
y_ = tf.placeholder(tf.float32, [batch_size, y_train.shape[1]])
Wx = tf.Variable(tf.random_normal(stddev=0.1, shape=[x_train.shape[1], n_hidden]))
Wy = tf.Variable(tf.random_normal(stddev=0.1, shape=[y_train.shape[1], n_hidden]))
b = tf.Variable(tf.constant(0.1, shape=[n_hidden]))
hidden_joint = tf.nn.relu(tf.matmul(x, Wx) + tf.matmul(y, Wy) + b)
hidden_marg = tf.nn.relu(tf.matmul(x, Wx) + tf.matmul(y_, Wy) + b)
Wout = tf.Variable(tf.random_normal(stddev=0.1, shape=[n_hidden, 1]))
bout = tf.Variable(tf.constant(0.1, shape=[1]))
out_joint = tf.matmul(hidden_joint, Wout) + bout
out_marg = tf.matmul(hidden_marg, Wout) + bout
optimizer = tf.train.AdamOptimizer(0.005)
t = out_joint
et = tf.exp(out_marg)
ex_delta_t = tf.reduce_mean(tf.gradients(t, tf.trainable_variables()))
ex_delta_et = tf.reduce_mean(tf.gradients(et, tf.trainable_variables()))
But I always get the following error:
File "/home/ferdi/Documents/mine/mine.py", line 77, in get_mi_batched
ex_delta_t = tf.reduce_mean(tf.gradients(t, tf.trainable_variables()))
File "/home/ferdi/anaconda3/envs/ml_all/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/home/ferdi/anaconda3/envs/ml_all/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 1490, in reduce_mean
reduction_indices),
File "/home/ferdi/anaconda3/envs/ml_all/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 1272, in _ReductionDims
return range(0, array_ops.rank(x))
File "/home/ferdi/anaconda3/envs/ml_all/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 368, in rank
return rank_internal(input, name, optimize=True)
File "/home/ferdi/anaconda3/envs/ml_all/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 388, in rank_internal
input_tensor = ops.convert_to_tensor(input)
File "/home/ferdi/anaconda3/envs/ml_all/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1048, in convert_to_tensor
as_ref=False)
File "/home/ferdi/anaconda3/envs/ml_all/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1144, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/home/ferdi/anaconda3/envs/ml_all/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 971, in _autopacking_conversion_function
return _autopacking_helper(v, dtype, name or "packed")
File "/home/ferdi/anaconda3/envs/ml_all/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 923, in _autopacking_helper
return gen_array_ops.pack(elems_as_tensors, name=scope)
File "/home/ferdi/anaconda3/envs/ml_all/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 4689, in pack
"Pack", values=values, axis=axis, name=name)
File "/home/ferdi/anaconda3/envs/ml_all/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/ferdi/anaconda3/envs/ml_all/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/home/ferdi/anaconda3/envs/ml_all/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3272, in create_op
op_def=op_def)
File "/home/ferdi/anaconda3/envs/ml_all/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1790, in __init__
control_input_ops)
File "/home/ferdi/anaconda3/envs/ml_all/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1629, in _create_c_op
raise ValueError(str(e))
ValueError: Shapes must be equal rank, but are 2 and 1
From merging shape 3 with other shapes. for 'Rank/packed' (op: 'Pack') with input shapes: [512,20], [10,20], [20], [20,1], [1].
If I reshape or try similar things, other errors occur. I know there are many similar questions, but I still couldn't figure it out. What am I doing wrong?
python tensorflow gradient-descent
I found the solution; please read my up-to-date answer below.
– Geeocode
Nov 16 '18 at 23:09
Did you see my answer? What are the developments? I found the issue quite interesting.
– Geeocode
Nov 18 '18 at 19:31
thanks geeocode, you're a star!
– Ferdi K
Nov 19 '18 at 20:52
I'm really happy that it worked. Good luck!
– Geeocode
Nov 19 '18 at 21:49
1 Answer
The solution:
ex_delta_t = tf.reduce_mean(tf.concat([tf.reshape(g, [-1]) for g in tf.gradients(t, tf.trainable_variables())], axis=0))
ex_delta_et = tf.reduce_mean(tf.concat([tf.reshape(g, [-1]) for g in tf.gradients(et, tf.trainable_variables())], axis=0))
Or the same code unfolded:
grads_t_0 = tf.gradients(t, tf.trainable_variables())
grads_et_0 = tf.gradients(et, tf.trainable_variables())
grads_t = []
grads_et = []
for gt, get in zip(grads_t_0, grads_et_0):
    grads_t.append(tf.reshape(gt, [-1]))
    grads_et.append(tf.reshape(get, [-1]))
grads_t_flatten = tf.concat(grads_t, axis=0)
grads_et_flatten = tf.concat(grads_et, axis=0)
ex_delta_t = tf.reduce_mean(grads_t_flatten)
ex_delta_et = tf.reduce_mean(grads_et_flatten)
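Each tf.reshape(g, [-1]) flattens a gradient of any rank into a 1-D vector, so tf.concat can join them along axis 0 even though the original shapes differ. As a usage sketch of how you might then evaluate the two scalars (the batch arrays x_batch, y_batch and y_marg_batch are assumed here; they are not part of the original post):
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Evaluate the scalar mean of all gradient elements for one batch.
    dt, det = sess.run([ex_delta_t, ex_delta_et],
                       feed_dict={x: x_batch, y: y_batch, y_: y_marg_batch})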
Explanation:
You get this error because the gradient calls
tf.gradients(t, tf.trainable_variables())
tf.gradients(et, tf.trainable_variables())
return a list of tensors with different shapes, one per trainable variable.
As a result, tf.reduce_mean()
complains that it cannot pack these differently shaped tensors into a single tensor.
To work around this, first flatten each gradient, then concatenate the flattened gradients into one 1-D tensor, and pass that to tf.reduce_mean().
Let's look at a small example that reproduces the error and its solution:
# Dummy gradients standing in for the output of tf.gradients()
grad_wx = tf.constant(0.1, shape=[512, 20])
grad_wy = tf.constant(0.2, shape=[10, 20])
grad_b = tf.constant(0.3, shape=[20])
grad_wout = tf.constant(0.4, shape=[20, 1])
grad_bout = tf.constant(0.5, shape=[1])
grads_0 = [grad_wx, grad_wy, grad_b, grad_wout, grad_bout]
sess = tf.Session()
result = tf.reduce_mean(grads_0)
print(sess.run(result))
Out(error):
ValueError: Shapes must be equal rank, but are 2 and 1
From merging shape 3 with other shapes. for 'Rank/packed' (op: 'Pack') with input shapes: [512,20], [10,20], [20], [20,1], [1].
Solution:
result = tf.reduce_mean(tf.concat([tf.reshape(g, [-1]) for g in grads_0], axis=0))
print(sess.run(result))
Out(fixed):
0.102899365
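A note on that number: it is the mean over all gradient elements pooled together, dominated by the large [512, 20] tensor, not the mean of the five per-variable means. A quick plain-Python check (independent of TensorFlow) reproduces it up to float32 rounding:
# Element counts and fill values of the five dummy gradients above.
sizes = [512 * 20, 10 * 20, 20, 20 * 1, 1]
vals = [0.1, 0.2, 0.3, 0.4, 0.5]
print(sum(s * v for s, v in zip(sizes, vals)) / sum(sizes))  # ~0.1029005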
What do you mean by y having only 1 dim? y has shape (n, 10) and x (n, 512). They are then mapped to hidden_joint and hidden_marg, which have shape (n, n_hidden).
– Ferdi K
Nov 14 '18 at 19:46
@FerdiK Then you have to provide us more information to be able to answer, like the shapes of x, y and the other tensors, some data and label samples, etc.
– Geeocode
Nov 14 '18 at 20:23
Ok sure, thanks! x.shape = (300, 512), y.shape = (300, 10). y is the one-hot encoding of 10 classes. x entries are between 0 and 1, e.g. [0.00376076 0.6290544 0. ... 0.44217262 0. 0.53888565]. n_hidden = 20, so Wx.shape = (512, 20), Wy.shape = (10, 20) and Wout.shape = (20, 1). Why does it complain about merging shape 3 with other shapes? It shouldn't matter that the biases are 1-D arrays and the weights 2-D arrays. If I reshape the biases to [1, 20] and [1, 1], it complains: ValueError: Dimension 0 in both shapes must be equal, but are 20 and 1. Shapes are [20,1] and [1,1].
– Ferdi K
Nov 14 '18 at 20:41
@FerdiK There are some initializations, some gradients, weights and biases, but what is the purpose of tf.matmul(x,Wx)+tf.matmul(y,Wy)? Why do we use the labels here in any computation?
– Geeocode
Nov 14 '18 at 20:55
Sorry, y aren't the labels: the net is an implementation of MINE (arxiv.org/pdf/1801.04062.pdf). The network tries to give a lower bound on the mutual information between x and y. It doesn't train on labeled data; it tries to maximize the following objective: lower_bound = t - tf.log(et), where t is the 1-dim output of the net and et = exp(t).
– Ferdi K
Nov 14 '18 at 21:18
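To make the objective from that last comment concrete, here is a minimal sketch of how the MINE lower bound could be wired to the tensors from the question; this is an assumption about the intended setup, not the asker's actual training code:
t_mean = tf.reduce_mean(out_joint)           # E[T] over the joint batch
et_mean = tf.reduce_mean(tf.exp(out_marg))   # E[e^T] over the marginal batch
lower_bound = t_mean - tf.log(et_mean)       # Donsker-Varadhan bound on I(X; Y)
train_op = optimizer.minimize(-lower_bound)  # gradient ascent on the bound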