In the following code, I want the dense matrix B to left-multiply the sparse matrix A, but I get errors.
    import tensorflow as tf
    import numpy as np

    A = tf.sparse_placeholder(tf.float32)
    B = tf.placeholder(tf.float32, shape=(5, 5))
    C = tf.matmul(B, A, a_is_sparse=False, b_is_sparse=True)
    sess = tf.InteractiveSession()
    indices = np.array([[3, 2], [1, 2]], dtype=np.int64)
    values = np.array([1.0, 2.0], dtype=np.float32)
    shape = np.array([5, 5], dtype=np.int64)
    Sparse_A = tf.SparseTensorValue(indices, values, shape)
    RandB = np.ones((5, 5))
    print sess.run(C, feed_dict={A: Sparse_A, B: RandB})
The error message is as follows:
    TypeError: Failed to convert object of type <class 'tensorflow.python.framework.sparse_tensor.SparseTensor'>
    to Tensor. Contents: SparseTensor(indices=Tensor("Placeholder_4:0", shape=(?, ?), dtype=int64), values=Tensor("Placeholder_3:0", shape=(?,), dtype=float32), dense_shape=Tensor("Placeholder_2:0", shape=(?,), dtype=int64)).
    Consider casting elements to a supported type.
What's wrong with my code?
I wrote this following the documentation, which says we should use a_is_sparse to denote whether the first matrix is sparse, and similarly b_is_sparse for the second. Why is my code wrong?
As suggested by vijay, I should use:

    C = tf.matmul(B, tf.sparse_tensor_to_dense(A), a_is_sparse=False, b_is_sparse=True)
I tried this, but I ran into another error:
    Caused by op u'SparseToDense', defined at:
      File "a.py", line 19, in <module>
        C = tf.matmul(B,tf.sparse_tensor_to_dense(A),a_is_sparse=False,b_is_sparse=True)
      File "/home/fengchao.pfc/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/sparse_ops.py", line 845, in sparse_tensor_to_dense
        name=name)
      File "/home/mypath/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/sparse_ops.py", line 710, in sparse_to_dense
        name=name)
      File "/home/mypath/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/gen_sparse_ops.py", line 1094, in _sparse_to_dense
        validate_indices=validate_indices, name=name)
      File "/home/mypath/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
        op_def=op_def)
      File "/home/mypath/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2506, in create_op
        original_op=self._default_original_op, op_def=op_def)
      File "/home/mypath/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1269, in __init__
        self._traceback = _extract_stack()

    InvalidArgumentError (see above for traceback): indices[1] = [1,2] is out of order
    [[Node: SparseToDense = SparseToDense[T=DT_FLOAT, Tindices=DT_INT64, validate_indices=true, _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_Placeholder_4_0_2, _arg_Placeholder_2_0_0, _arg_Placeholder_3_0_1, SparseToDense/default_value)]]
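From the "indices[1] = [1,2] is out of order" message, my understanding is that SparseToDense requires the COO indices to be sorted in row-major (lexicographic) order, and my indices [[3, 2], [1, 2]] are not. As a sanity check I reproduced the ordering rule in plain NumPy (a sketch with no TensorFlow, so the array names here are just illustrative):

```python
import numpy as np

# COO-format sparse matrix: one (row, col) pair per stored value.
indices = np.array([[3, 2], [1, 2]], dtype=np.int64)  # [1, 2] after [3, 2]: out of order
values = np.array([1.0, 2.0], dtype=np.float32)
shape = (5, 5)

# Sort indices lexicographically (row-major), as SparseToDense expects.
# np.lexsort sorts by the last key first, so pass (col, row).
order = np.lexsort((indices[:, 1], indices[:, 0]))
indices, values = indices[order], values[order]

# Densify the reordered COO representation.
dense = np.zeros(shape, dtype=np.float32)
dense[indices[:, 0], indices[:, 1]] = values

print(indices)  # [[1 2], [3 2]] -- now in row-major order
```

I believe tf.sparse_reorder(A) performs this same reordering inside the graph, and tf.sparse_tensor_dense_matmul would avoid densifying A at all, though I haven't confirmed either in my setup.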
Thank you all for helping me!
I am using tf.estimator.train_and_evaluate and tf.data.Dataset to feed data to the estimator:
Input data function:
    def data_fn(data_dict, batch_size, mode, num_epochs=10):
        dataset = {}
        if mode == tf.estimator.ModeKeys.TRAIN:
            dataset = tf.data.Dataset.from_tensor_slices(data_dict['train_data'].astype(np.float32))
            dataset = dataset.cache()
            dataset = dataset.shuffle(buffer_size=batch_size * 10).repeat(num_epochs).batch(batch_size)
        else:
            dataset = tf.data.Dataset.from_tensor_slices(data_dict['valid_data'].astype(np.float32))
            dataset = dataset.cache()
            dataset = dataset.batch(batch_size)

        iterator = dataset.make_one_shot_iterator()
        next_element = iterator.get_next()

        return next_element
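To make sure I understand when the input pipeline runs dry, here is my mental model of repeat/batch as a plain-Python sketch (hypothetical, no TensorFlow; batched_iterator just mimics a one-shot iterator over Dataset.repeat(num_epochs).batch(batch_size)):

```python
def batched_iterator(data, batch_size, num_epochs=1):
    """Mimic Dataset.repeat(num_epochs).batch(batch_size): yield batches
    until the repeated data runs out, then stop iterating (the analogue
    of TensorFlow raising OutOfRangeError on the next get_next())."""
    repeated = data * num_epochs
    for i in range(0, len(repeated), batch_size):
        yield repeated[i:i + batch_size]

data = list(range(10))
batches = list(batched_iterator(data, batch_size=4, num_epochs=2))
print(len(batches))  # 5 batches: 20 repeated elements / 4 per batch
# A 6th get_next() would fall off the end -- TF's "End of sequence".
```

So with 2 epochs of 10 elements and batch size 4, only 5 get_next() calls succeed; the next one signals end of sequence.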
Train function:
    def train_model(data):
        tf.logging.set_verbosity(tf.logging.INFO)
        config = tf.ConfigProto(allow_soft_placement=True,
                                log_device_placement=False)
        config.gpu_options.allow_growth = True
        run_config = tf.contrib.learn.RunConfig(
            save_checkpoints_steps=10,
            keep_checkpoint_max=10,
            session_config=config
        )

        train_input = lambda: data_fn(data, 100, tf.estimator.ModeKeys.TRAIN, num_epochs=1)
        eval_input = lambda: data_fn(data, 1000, tf.estimator.ModeKeys.EVAL)
        estimator = tf.estimator.Estimator(model_fn=model_fn, params=hps, config=run_config)
        train_spec = tf.estimator.TrainSpec(train_input, max_steps=100)
        eval_spec = tf.estimator.EvalSpec(eval_input,
                                          steps=None,
                                          throttle_secs=30)

        tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
The training goes fine, but when it comes to evaluation I get this error:
    OutOfRangeError (see above for traceback): End of sequence
If I don't use Dataset.batch on the evaluation dataset (by omitting the line dataset = dataset.batch(batch_size) in data_fn), I get the same error, but after a much longer time.
I can only avoid this error if I don't batch the data and use steps=1 for evaluation, but does that perform the evaluation on the whole dataset?
I don't understand what causes this error, since the documentation suggests I should be able to evaluate on batches too.
Note: I get the same error when calling Estimator.evaluate directly on data batches.