sharukat committed
Commit a0fe883
1 Parent(s): 4b9a244

Add SetFit model

Files changed (4)
  1. README.md +52 -150
  2. config.json +1 -1
  3. model.safetensors +1 -1
  4. model_head.pkl +1 -1
README.md CHANGED
@@ -11,136 +11,41 @@ metrics:
  - recall
  - f1
  widget:
- - text: "<p>Summary: according to the <a href=\"https://www.tensorflow.org/api_docs/python/tf/keras/models/Model#fit\"\
- \ rel=\"nofollow noreferrer\">documentation</a>, Keras <code>model.fit()</code>\
- \ should accept tf.dataset as input (I am using TF version 1.12.0). I can train\
- \ my model if I manually do the training steps but using <code>model.fit()</code>\
- \ on the same model, I get an error I cannot resolve.</p>\n\n<p>Here is a sketch\
- \ of what I did: my dataset, which is too big to fit in the memory, consists of\
- \ many files each with different number of rows of (100 features, label). I'd\
- \ like to use <code>tf.data</code> to build my data pipeline:</p>\n\n<pre class=\"\
- lang-py prettyprint-override\"><code>def data_loader(filename):\n '''load a\
- \ single data file with many rows'''\n features, labels = load_hdf5(filename)\n\
- \ ...\n return features, labels\n\ndef make_dataset(filenames, batch_size):\n\
- \ '''read files one by one, pick individual rows, batch them and repeat'''\n\
- \ dataset = tf.data.Dataset.from_tensor_slices(filenames)\n dataset = dataset.map(\
- \ # Problem here! See edit for solution\n lambda filename: tuple(tf.py_func(data_loader,\
- \ [filename], [float32, tf.float32])))\n dataset = dataset.flat_map(\n \
- \ lambda features, labels: tf.data.Dataset.from_tensor_slices((features, labels)))\n\
- \ dataset = dataset.batch(batch_size)\n dataset = dataset.repeat()\n \
- \ dataset = dataset.prefetch(1000)\n return dataset\n\n_BATCH_SIZE = 128\n\
- training_set = make_dataset(training_files, batch_size=_BATCH_SIZE)\n</code></pre>\n\
- \n<p>I'd like to try a very basic logistic regression model:</p>\n\n<pre class=\"\
- lang-py prettyprint-override\"><code>inputs = tf.keras.layers.Input(shape=(100,))\n\
- outputs = tf.keras.layers.Dense(1, activation='softmax')(inputs)\nmodel = tf.keras.Model(inputs,\
- \ outputs)\n</code></pre>\n\n<p>If I train it <em>manually</em> everything works\
- \ fine, e.g.:</p>\n\n<pre class=\"lang-py prettyprint-override\"><code>labels\
- \ = tf.placeholder(tf.float32)\nloss = tf.reduce_mean(tf.keras.backend.categorical_crossentropy(labels,\
- \ outputs))\ntrain_step = tf.train.GradientDescentOptimizer(.05).minimize(loss)\n\
- \niterator = training_set.make_one_shot_iterator()\nnext_element = iterator.get_next()\n\
- init_op = tf.global_variables_initializer()\n\nwith tf.Session() as sess:\n \
- \ sess.run(init_op)\n for i in range(training_size // _BATCH_SIZE):\n \
- \ x, y = sess.run(next_element)\n train_step.run(feed_dict={inputs:\
- \ x, labels: y})\n</code></pre>\n\n<p>However, if I instead try to use <code>model.fit</code>\
- \ like this:</p>\n\n<pre class=\"lang-py prettyprint-override\"><code>model.compile('adam',\
- \ 'categorical_crossentropy', metrics=['acc'])\nmodel.fit(training_set.make_one_shot_iterator(),\n\
- \ steps_per_epoch=training_size // _BATCH_SIZE,\n epochs=1,\n\
- \ verbose=1)\n</code></pre>\n\n<p>I get an error message <code>ValueError:\
- \ Cannot take the length of Shape with unknown rank.</code> inside the keras'es\
- \ <code>_standardize_user_data</code> function.</p>\n\n<p>I have tried quite a\
- \ few things but could not resolve the issue. Any ideas?</p>\n\n<p><strong>Edit:</strong>\
- \ based on @kvish's answer, the solution was to change the map from a lambda to\
- \ a function that would specify the correct tensor dimensions, e.g.:</p>\n\n<pre\
- \ class=\"lang-py prettyprint-override\"><code>def data_loader(filename):\n \
- \ def loader_impl(filename):\n features, labels, _ = load_hdf5(filename)\n\
- \ ...\n return features, labels\n\n features, labels = tf.py_func(loader_impl,\
- \ [filename], [tf.float32, tf.float32])\n features.set_shape((None, 100))\n\
- \ labels.set_shape((None, 1))\n return features, labels\n</code></pre>\n\
- \n<p>and now, all needed to do is to call this function from <code>map</code>:</p>\n\
- \n<pre class=\"lang-py prettyprint-override\"><code>dataset = dataset.map(data_loader)\n\
- </code></pre>\n"
- - text: "<p>I'm wondering what the current available options are for simulating BatchNorm\
- \ folding during quantization aware training in Tensorflow 2. Tensorflow 1 has\
- \ the <code>tf.contrib.quantize.create_training_graph</code> function which inserts\
- \ FakeQuantization layers into the graph and takes care of simulating batch normalization\
- \ folding (according to this <a href=\"https://arxiv.org/pdf/1806.08342.pdf\"\
- \ rel=\"noreferrer\">white paper</a>).</p>\n\n<p>Tensorflow 2 has a <a href=\"\
- https://www.tensorflow.org/model_optimization/guide/quantization/training_comprehensive_guide.md\"\
- \ rel=\"noreferrer\">tutorial</a> on how to use quantization in their recently\
- \ adopted <code>tf.keras</code> API, but they don't mention anything about batch\
- \ normalization. I tried the following simple example with a BatchNorm layer:</p>\n\
- \n<pre><code>import tensorflow_model_optimization as tfmo\n\nmodel = tf.keras.Sequential([\n\
- \ l.Conv2D(32, 5, padding='same', activation='relu', input_shape=input_shape),\n\
- \ l.MaxPooling2D((2, 2), (2, 2), padding='same'),\n l.Conv2D(64, 5,\
- \ padding='same', activation='relu'),\n l.BatchNormalization(), # BN!\n\
- \ l.MaxPooling2D((2, 2), (2, 2), padding='same'),\n l.Flatten(),\n \
- \ l.Dense(1024, activation='relu'),\n l.Dropout(0.4),\n l.Dense(num_classes),\n\
- \ l.Softmax(),\n])\nmodel = tfmo.quantization.keras.quantize_model(model)\n\
- </code></pre>\n\n<p>It however gives the following exception:</p>\n\n<pre><code>RuntimeError:\
- \ Layer batch_normalization:&lt;class 'tensorflow.python.keras.layers.normalization.BatchNormalization'&gt;\
- \ is not supported. You can quantize this layer by passing a `tfmot.quantization.keras.QuantizeConfig`\
- \ instance to the `quantize_annotate_layer` API.\n</code></pre>\n\n<p>which indicates\
- \ that TF does not know what to do with it.</p>\n\n<p>I also saw <a href=\"https://stackoverflow.com/questions/52259343/quantize-a-keras-neural-network-model/57785739#57785739\"\
- >this related topic</a> where they apply <code>tf.contrib.quantize.create_training_graph</code>\
- \ on a keras constructed model. They however don't use BatchNorm layers, so I'm\
- \ not sure this will work.</p>\n\n<p>So what are the options for using this BatchNorm\
- \ folding feature in TF2? Can this be done from the keras API, or should I switch\
- \ back to the TensorFlow 1 API and define a graph the old way?</p>\n"
- - text: '<p>How can I get the file name of a <a href="https://www.tensorflow.org/api_docs/python/tf/summary/FileWriter"
- rel="nofollow noreferrer"><code>tf.summary.FileWriter</code></a> (<a href="https://web.archive.org/web/20170321224015/https://www.tensorflow.org/api_docs/python/tf/summary/FileWriter"
- rel="nofollow noreferrer">mirror</a>) in TensorFlow?</p>
-
-
- <p>I am aware that I can use <a href="https://www.tensorflow.org/api_docs/python/tf/summary/FileWriter#get_logdir"
- rel="nofollow noreferrer"><code>get_logdir()</code></a> but I don''t see any
- similar method to access the file name.</p>
-
- '
- - text: "<p>Will future versions of tensorflow provide a way to run the tensorflow\
- \ graph generated by single node tf.sess on a distributed environments with multiple\
- \ ps nodes and worker nodes through python interfaces?\nOr is it supported right\
- \ now?</p>\n\n<p>I am trying to build my tf.graph on my notebook (single node)\
- \ and save then graph into a binary file, \nand then loading the binary graph\
- \ into a distributed environment (with multiply ps and worker nodes) to train\
- \ and verify it. It seems it is not supported now.</p>\n\n<p>I tried it on tensorflow-0.10\
- \ and failed.\nBy using</p>\n\n<pre><code>tf.train.write_graph(sess.graph_def,\
- \ path, pb_name)\n</code></pre>\n\n<p>interface: The graph saved is not trainable\
- \ as loading the <code>.pb</code> file through <code>import_graph_def</code> will\
- \ only <code>g.create_ops</code> according to the <code>.bp</code> file but not\
- \ add then into <code>ops.collections</code>. So the graph loaded is not trainable.</p>\n\
- \n<p>By using <code>tf.saver.save</code> to save a <code>.meta</code> file: The\
- \ loaded graph cannot fit into the distributed environment as devices assignment\
- \ is messy.</p>\n\n<p>I tried the</p>\n\n<pre><code>tf.train.import_meta_graph('test_model.meta',\
- \ clear_devices=True)\n</code></pre>\n\n<p>interface to let the load clean the\
- \ original device assignment and let the <code>with tf.device(device_setter)</code>\
- \ reassign the device for each variable, but there is a problem as operations\
- \ belonging to <code>Saver</code> and <code>Restore</code> still can not be assigned\
- \ correctly. When creating operations for <code>Saver</code> and <code>Restore</code>\
- \ ops through <code>g.create_op</code> inside <code>import_graph_def</code> called\
- \ by <code>import_meta_graph</code>, the device_setter will not assign ps node\
- \ to these ops as their name is not <code>Variable</code>.\nIs there any way to\
- \ do so?</p>\n"
- - text: "<p>I use <code>freeze_graph</code> to export my model to a file named <code>\"\
- frozen.pb\"</code>. But Found that the accuracy of predictions on <code>frozen.pb</code>\
- \ is very bad.</p>\n\n<p>I know the problem maybe <code>MovingAverage</code> not\
- \ included in <code>frozen.pb</code>.</p>\n\n<p>When I use <code>model.ckpt</code>\
- \ files to restore model for evaluating, if I call <code>tf.train.ExponentialMovingAverage(0.999)</code>\
- \ , then the accuracy is good as expected, else the accuracy is bad.</p>\n\n<p><strong>So\
- \ How To export a binary model which performance is the same as the one restored\
- \ from checkpoint files?</strong> I want to use <code>\".pb\"</code> files in\
- \ Android Devices.</p>\n\n<p><a href=\"https://www.tensorflow.org/versions/r0.12/api_docs/python/train/moving_averages\"\
- \ rel=\"nofollow noreferrer\">The official document</a> doesn't mention this.</p>\n\
- \n<p>Thanks!!</p>\n\n<p>Freeze Command:</p>\n\n<pre><code>~/bazel-bin/tensorflow/python/tools/freeze_graph\
- \ \\\n --input_graph=./graph.pbtxt \\\n --input_checkpoint=./model.ckpt-100000\
- \ \\\n --output_graph=frozen.pb \\\n --output_node_names=output \\\n --restore_op_name=save/restore_all\
- \ \\\n --clear_devices\n</code></pre>\n\n<p>Evaluate Code:</p>\n\n<pre><code>...\
- \ ...\nlogits = carc19.inference(images)\ntop_k = tf.nn.top_k(logits, k=10)\n\n\
- # Precision: 97%\n# Restore the moving average version of the learned variables\
- \ for eval.\nvariable_averages = tf.train.ExponentialMovingAverage(carc19.MOVING_AVERAGE_DECAY)\n\
- variables_to_restore = variable_averages.variables_to_restore()\nfor k in variables_to_restore.keys():\n\
- \ print (k,variables_to_restore[k])\nsaver = tf.train.Saver(variables_to_restore)\n\
- \n# Precision: 84%\n#saver = tf.train.Saver()\n\n#model_path = '/tmp/carc19_train/model.ckpt-9801'\n\
- with tf.Session() as sess:\n saver.restore(sess, model_path)\n... ...\n</code></pre>\n"
  pipeline_tag: text-classification
  inference: true
  base_model: flax-sentence-embeddings/stackoverflow_mpnet-base
@@ -156,16 +61,16 @@ model-index:
  split: test
  metrics:
  - type: accuracy
- value: 0.75
  name: Accuracy
  - type: precision
- value: 0.7604166666666666
  name: Precision
  - type: recall
- value: 0.75
  name: Recall
  - type: f1
- value: 0.7474747474747475
  name: F1
  ---

@@ -197,17 +102,17 @@ The model has been trained using an efficient few-shot learning technique that i
  - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

  ### Model Labels
- | Label | Examples |
- |:------|:---------|
- | 0 | <ul><li>'<p><strong>TLDR;</strong> <em>my question is on how to load compressed video frames from TFRecords.</em></p>\n\n<p>I am setting up a data pipeline for training deep learning models on a large video dataset (<a href="https://deepmind.com/research/open-source/open-source-datasets/kinetics/" rel="noreferrer">Kinetics</a>). For this I am using TensorFlow, more specifically the <code>tf.data.Dataset</code> and <code>TFRecordDataset</code> structures. As the dataset contains ~300k videos of 10 seconds, there is a large amount of data to deal with. During training, I want to randomly sample 64 consecutive frames from a video, therefore fast random sampling is important. For achieving this there are a number of data loading scenarios possible during training:</p>\n\n<ol>\n<li><strong>Sample from Video.</strong> Load the videos using <code>ffmpeg</code> or <code>OpenCV</code> and sample frames. Not ideal as seeking in videos is tricky, and decoding video streams is much slower than decoding JPG.</li>\n<li><strong>JPG Images.</strong> Preprocess the dataset by extracting all video frames as JPG. This generates a huge amount of files, which is probably not going to be fast due to random access.</li>\n<li><strong>Data Containers.</strong> Preprocess the dataset to <code>TFRecords</code> or <code>HDF5</code> files. Requires more work getting the pipeline ready, but most likely to be the fastest of those options.</li>\n</ol>\n\n<p>I have decided to go for option (3) and use <code>TFRecord</code> files to store a preprocessed version of the dataset. However, this is also not as straightforward as it seems, for example:</p>\n\n<ol>\n<li><strong>Compression.</strong> Storing the video frames as uncompressed byte data in TFRecords will require a huge amount of disk space. Therefore, I extract all the video frames, apply JPG compression and store the compressed bytes as TFRecords. </li>\n<li><strong>Video Data.</strong> We are dealing with video, so each example in the TFRecords file will be quite large and contains several video frames (typically 250-300 for 10 seconds of video, depending on the frame rate). </li>\n</ol>\n\n<p>I have wrote the following code to preprocess the video dataset and write the video frames as TFRecord files (each of ~5GB in size):</p>\n\n<pre><code>def _int64_feature(value):\n """Wrapper for inserting int64 features into Example proto."""\n if not isinstance(value, list):\n value = [value]\n return tf.train.Feature(int64_list=tf.train.Int64List(value=value))\n\ndef _bytes_feature(value):\n """Wrapper for inserting bytes features into Example proto."""\n return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))\n\n\nwith tf.python_io.TFRecordWriter(output_file) as writer:\n\n # Read and resize all video frames, np.uint8 of size [N,H,W,3]\n frames = ... \n\n features = {}\n features[\'num_frames\'] = _int64_feature(frames.shape[0])\n features[\'height\'] = _int64_feature(frames.shape[1])\n features[\'width\'] = _int64_feature(frames.shape[2])\n features[\'channels\'] = _int64_feature(frames.shape[3])\n features[\'class_label\'] = _int64_feature(example[\'class_id\'])\n features[\'class_text\'] = _bytes_feature(tf.compat.as_bytes(example[\'class_label\']))\n features[\'filename\'] = _bytes_feature(tf.compat.as_bytes(example[\'video_id\']))\n\n # Compress the frames using JPG and store in as bytes in:\n # \'frames/000001\', \'frames/000002\', ...\n for i in range(len(frames)):\n ret, buffer = cv2.imencode(".jpg", frames[i])\n features["frames/{:04d}".format(i)] = _bytes_feature(tf.compat.as_bytes(buffer.tobytes()))\n\n tfrecord_example = tf.train.Example(features=tf.train.Features(feature=features))\n writer.write(tfrecord_example.SerializeToString())\n</code></pre>\n\n<p>This works fine; the dataset is nicely written as TFRecord files with the frames as compressed JPG bytes. My question regards, how to read the TFRecord files during training, randomly sample 64 frames from a video and decode the JPG images. </p>\n\n<p>According to <a href="https://www.tensorflow.org/programmers_guide/datasets" rel="noreferrer">TensorFlow\'s documentation</a> on <code>tf.Data</code> we need to do something like: </p>\n\n<pre><code>filenames = tf.placeholder(tf.string, shape=[None])\ndataset = tf.data.TFRecordDataset(filenames)\ndataset = dataset.map(...) # Parse the record into tensors.\ndataset = dataset.repeat() # Repeat the input indefinitely.\ndataset = dataset.batch(32)\niterator = dataset.make_initializable_iterator()\ntraining_filenames = ["/var/data/file1.tfrecord", "/var/data/file2.tfrecord"]\nsess.run(iterator.initializer, feed_dict={filenames: training_filenames})\n</code></pre>\n\n<p>There are many example on how to do this with images, and that is quite straightforward. However, for video and random sampling of frames I am stuck. The <code>tf.train.Features</code> object stores the frames as <code>frame/00001</code>, <code>frame/000002</code> etc. My first question is how to randomly sample a set of consecutive frames from this inside the <code>dataset.map()</code> function? Considerations are that each frame has a variable number of bytes due to JPG compression and need to be decoded using <code>tf.image.decode_jpeg</code>.</p>\n\n<p>Any help how to best setup reading video sampels from TFRecord files would be appreciated! </p>\n'</li><li>'<p>I am training a deep learning model on stacks of images with variable dimensions. <code>(Shape = [Batch, None, 256, 256, 1])</code>, where None can be variable.</p>\n<p>I use <code>tf.RaggedTensor.merge_dimsions(0,1)</code> to convert the ragged Tensor to a shape of <code>[None, 256, 256, 1]</code> to run into a pretrained keras CNN model.</p>\n<p>However, using the KerasLayer API results in the following error: <code>TypeError: the object of type \'RaggedTensor\' has no len()</code></p>\n<p>When I apply <code>.merge_dimsions</code> outside of the KerasLayer and pass the tensors to the same pretrained model I do not get this error.</p>\n<pre class="lang-py prettyprint-override"><code>import tensorflow as tf\n\n# Synthetic Data Pipeline\ndef synthetic_gen():\n varShape = tf.random.uniform((), minval=1, maxval=12, dtype=tf.int32)\n image = tf.random.normal((varShape, 256, 256, 1))\n image = tf.RaggedTensor.from_tensor(image, ragged_rank=1)\n yield image\n\nds = tf.data.Dataset.from_generator(synthetic_gen, output_signature=(tf.RaggedTensorSpec(shape=(None, 256, 256, 1), dtype=tf.float32, ragged_rank=1)))\nds = ds.repeat().batch(8)\nprint(next(iter(ds)).shape)\n\n# Build Model\ninputs = tf.keras.Input(\n type_spec=tf.RaggedTensorSpec(\n shape=(8, None, 256, 256, 1), \n dtype=tf.float32, \n ragged_rank=1))\n\nResNet50 = tf.keras.applications.ResNet50(\n include_top=True, \n input_shape=(256, 256, 1),\n weights=None)\n\ndef merge(x):\n x = x.merge_dims(0, 1)\n return x\nx = tf.keras.layers.Lambda(merge)(inputs)\nmerged_inputs = x\n# x = ResNet50(x) # Uncommenting this will result in `model` producing an error when run for inference.\n\nmodel = tf.keras.Model(inputs, x)\n\n# Run inference\ndata = next(iter(ds))\nmodel(data).shape # Will be an error if ResNet50 is used\n</code></pre>\n<p>Here is a colab notebook that demonstrates the problem. <a href="https://colab.research.google.com/drive/1kN78mf4_oNqxWOluV054NlqmakC5msli?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1kN78mf4_oNqxWOluV054NlqmakC5msli?usp=sharing</a></p>\n'</li><li>'<p>This is a pretty simple question that I just can\'t seem to figure out. I am working with an an output tensor of shape [100, 250]. I want to be able to access the 250 Dimensional array at any spot along the hundred and modify them separately. The tensorflow mathematical tools that I\'ve found either do element-wise modification or scalar modification on the entire tensor. However, I am trying to do scalar modification on subsets of the tensor.</p>\n\n<p>EDIT:</p>\n\n<p>Here is the numpy code that I would like to recreate with tensorflow methods:</p>\n\n<pre><code>update = sess.run(y, feed_dict={x: batch_xs})\nfor i in range(len(update)):\n update[i] = update[i]/np.sqrt(np.sum(np.square(update[i])))\n update[i] = update[i] * magnitude\n</code></pre>\n\n<p>This for loop follows this formula in 250-D instead of 3-D\n<a href="https://i.stack.imgur.com/Xru79.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Xru79.png" alt="Unit vector formula, which is the first line of the for-loop"></a>\n. I then multiply each unit vector by magnitude to re-shape it to my desired length.</p>\n\n<p>So update here is the numpy [100, 250] dimensional output. I want to transform each 250 dimensional vector into its unit vector. That way I can change its length to a magnitude of my choosing. Using this numpy code, if I run my train_step and pass update into one of my placeholders</p>\n\n<pre><code>sess.run(train_step, feed_dict={x: batch_xs, prediction: output}) \n</code></pre>\n\n<p>it returns the error:</p>\n\n<pre><code>No gradients provided for any variable\n</code></pre>\n\n<p>This is because I\'ve done the math in numpy and ported it back into tensorflow. <a href="https://stackoverflow.com/questions/35325480/tensorflow-performing-this-loss-computation">Here</a> is a related stackoverflow question that did not get answered.</p>\n\n<p>the <a href="https://www.tensorflow.org/versions/r0.9/api_docs/python/nn.html#l2_normalize" rel="nofollow noreferrer">tf.nn.l2_normalize</a> is very close to what I am looking for, but it divides by the square root of the <em>maximum</em> sum of squares. Whereas I am trying to divide each vector by its own sum of squares.</p>\n\n<p>Thanks!</p>\n'</li></ul> |
- | 1 | <ul><li>'<p>I am using <a href="https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory" rel="nofollow noreferrer">image_dataset_from_directory</a> to load a very large RGB imagery dataset from disk into a <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset" rel="nofollow noreferrer">Dataset</a>. For example,</p>\n<pre><code>dataset = tf.keras.preprocessing.image_dataset_from_directory(\n &lt;directory&gt;,\n label_mode=None,\n seed=1,\n subset=\'training\',\n validation_split=0.1)\n</code></pre>\n<p>The Dataset has, say, 100000 images grouped into batches of size 32 yielding a <code>tf.data.Dataset</code> with spec <code>(batch=32, width=256, height=256, channels=3)</code></p>\n<p>I would like to extract patches from the images to create a new <code>tf.data.Dataset</code> with image spatial dimensions of, say, 64x64.</p>\n<p>Therefore, I would like to create a new Dataset with 400000 patches still in batches of 32 with a <code>tf.data.Dataset</code> with spec <code>(batch=32, width=64, height=64, channels=3)</code></p>\n<p>I\'ve looked at the <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset#window" rel="nofollow noreferrer">window</a> method and the <a href="https://www.tensorflow.org/api_docs/python/tf/image/extract_patches" rel="nofollow noreferrer">extract_patches</a> function but it\'s not clear from the documentation how to use them to create a new Dataset I need to start training on the patches. The <code>window</code> seems to be geared toward 1D tensors and the <code>extract_patches</code> seems to work with arrays and not with Datasets.</p>\n<p>Any suggestions on how to accomplish this?</p>\n<p>UPDATE:</p>\n<p>Just to clarify my needs. I am trying to avoid manually creating the patches on disk. One, that would be untenable disk wise. Two, the patch size is not fixed. The experiments will be conducted over several patch sizes. So, I do not want to manually perform the patch creation either on disk or manually load the images in memory and perform the patching. I would prefer to have tensorflow handle the patch creation as part of the pipeline workflow to minimize disk and memory usage.</p>\n'</li><li>"<p>To find Tensorflow version we can do that by:\npython -c 'import tensorflow as tf; print(tf.<strong>version</strong>)' </p>\n\n<p>Tensorflow Serving is a separate install, so how to find the version of Tensorflow Serving?</p>\n\n<p>Is it same as Tensorflow? Do not see any reference/comments or documentation related to this.</p>\n"</li><li>'<p><code>tf.variable_scope</code> has a <code>partitioner</code> parameter as mentioned in <a href="https://www.tensorflow.org/api_docs/python/tf/variable_scope#__init__" rel="noreferrer">documentation</a>.</p>\n\n<p>As I understand it\'s used for distributed training. Can anyone explain it in more details what is the correct use of it?</p>\n'</li></ul> |

  ## Evaluation

  ### Metrics
  | Label | Accuracy | Precision | Recall | F1 |
  |:--------|:---------|:----------|:-------|:-------|
- | **all** | 0.75 | 0.7604 | 0.75 | 0.7475 |

  ## Uses

@@ -227,10 +132,7 @@ from setfit import SetFitModel
  # Download from the 🤗 Hub
  model = SetFitModel.from_pretrained("sharukat/so_mpnet-base_question_classifier")
  # Run inference
- preds = model("<p>How can I get the file name of a <a href=\"https://www.tensorflow.org/api_docs/python/tf/summary/FileWriter\" rel=\"nofollow noreferrer\"><code>tf.summary.FileWriter</code></a> (<a href=\"https://web.archive.org/web/20170321224015/https://www.tensorflow.org/api_docs/python/tf/summary/FileWriter\" rel=\"nofollow noreferrer\">mirror</a>) in TensorFlow?</p>
-
- <p>I am aware that I can use <a href=\"https://www.tensorflow.org/api_docs/python/tf/summary/FileWriter#get_logdir\" rel=\"nofollow noreferrer\"><code>get_logdir()</code></a> but I don't see any similar method to access the file name.</p>
- ")
  ```

  <!--
@@ -260,14 +162,14 @@ preds = model("<p>How can I get the file name of a <a href=\"https://www.tensorf
  ## Training Details

  ### Training Set Metrics
- | Training set | Min | Median | Max |
- |:-------------|:----|:---------|:-----|
- | Word count | 24 | 343.8917 | 3755 |

  | Label | Training Sample Count |
  |:------|:----------------------|
- | 0 | 240 |
- | 1 | 240 |

  ### Training Hyperparameters
  - batch_size: (8, 8)
@@ -290,8 +192,8 @@ preds = model("<p>How can I get the file name of a <a href=\"https://www.tensorf
  ### Training Results
  | Epoch | Step | Training Loss | Validation Loss |
  |:-------:|:---------:|:-------------:|:---------------:|
- | 0.0001 | 1 | 0.2423 | - |
- | **1.0** | **14430** | **0.0** | **0.3596** |

  * The bold row denotes the saved checkpoint.
  ### Framework Versions
 
  - recall
  - f1
  widget:
+ - text: 'I''m trying to take a dataframe and convert them to tensors to train a model
+ in keras. I think it''s being triggered when I am converting my Y label to a tensor:
+ I''m getting the following error when casting y_train to tensor from slices: In
+ the tutorials this seems to work but I think those tutorials are doing multiclass
+ classifications whereas I''m doing a regression so y_train is a series not multiple
+ columns. Any suggestions of what I can do?'
+ - text: My weights are defined as I want to use the weights decay so I add, for example,
+ the argument to the tf.get_variable. Now I'm wondering if during the evaluation
+ phase this is still correct or maybe I have to set the regularizer factor to 0.
+ There is also another argument trainable. The documentation says If True also
+ add the variable to the graph collection GraphKeys.TRAINABLE_VARIABLES. which
+ is not clear to me. Should I use it? Can someone explain to me if the weights
+ decay effects in a sort of wrong way the evaluation step? How can I solve in that
+ case?
+ - text: 'Maybe I''m confused about what "inner" and "outer" tensor dimensions are,
+ but the documentation for tf.matmul puzzles me: Isn''t it the case that R-rank
+ arguments need to have matching (or no) R-2 outer dimensions, and that (as in
+ normal matrix multiplication) the Rth, inner dimension of the first argument must
+ match the R-1st dimension of the second. That is, in The outer dimensions a, ...,
+ z must be identical to a'', ..., z'' (or not exist), and x and x'' must match
+ (while p and q can be anything). Or put another way, shouldn''t the docs say:'
+ - text: 'I am using tf.data with reinitializable iterator to handle training and dev
+ set data. For each epoch, I initialize the training data set. The official documentation
+ has similar structure. I think this is not efficient especially if the training
+ set is large. Some of the resources I found online has sess.run(train_init_op,
+ feed_dict={X: X_train, Y: Y_train}) before the for loop to avoid this issue. But
+ then we can''t process the dev set after each epoch; we can only process it after
+ we are done iterating over epochs epochs. Is there a way to efficiently process
+ the dev set after each epoch?'
+ - text: 'Why is the pred variable being calculated before any of the training iterations
+ occur? I would expect that a pred would be generated (through the RNN() function)
+ during each pass through of the data for every iteration? There must be something
+ I am missing. Is pred something like a function object? I have looked at the docs
+ for tf.matmul() and that returns a tensor, not a function. Full source: https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/recurrent_network.py
+ Here is the code:'
  pipeline_tag: text-classification
  inference: true
  base_model: flax-sentence-embeddings/stackoverflow_mpnet-base
 
  split: test
  metrics:
  - type: accuracy
+ value: 0.81875
  name: Accuracy
  - type: precision
+ value: 0.8248924988055423
  name: Precision
  - type: recall
+ value: 0.81875
  name: Recall
  - type: f1
+ value: 0.8178892421209625
  name: F1
  ---

 
  - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

  ### Model Labels
+ | Label | Examples |
+ |:------|:---------|
+ | 1 | <ul><li>'In tf.gradients, there is a keyword argument grad_ys Why is grads_ys needed here? The docs here is implicit. Could you please give some specific purpose and code? And my example code for tf.gradients is'</li><li>'I am coding a Convolutional Neural Network to classify images in TensorFlow but there is a problem: When I try to feed my NumPy array of flattened images (3 channels with RGB values from 0 to 255) to a tf.estimator.inputs.numpy_input_fn I get the following error: My numpy_imput_fn looks like this: In the documentation for the function it is said that x should be a dict of NumPy array:'</li><li>'I am trying to use tf.pad. Here is my attempt to pad the tensor to length 20, with values 10. I get this error message I am looking at the documentation https://www.tensorflow.org/api_docs/python/tf/pad But I am unable to figure out how to shape the pad value'</li></ul> |
+ | 0 | <ul><li>"I am trying to use tf.train.shuffle_batch to consume batches of data from a TFRecord file using TensorFlow 1.0. The relevant functions are: The code enters through examine_batches(), having been handed the output of batch_generator(). batch_generator() calls tfrecord_to_graph_ops() and the problem is in that function, I believe. I am calling on a file with 1,000 bytes (numbers 0-9). If I call eval() on this in a Session, it shows me all 1,000 elements. But if I try to put it in a batch generator, it crashes. If I don't reshape targets, I get an error like ValueError: All shapes must be fully defined when tf.train.shuffle_batch is called. If I call targets.set_shape([1]), reminiscent of Google's CIFAR-10 example code, I get an error like Invalid argument: Shape mismatch in tuple component 0. Expected [1], got [1000] in tf.train.shuffle_batch. I also tried using tf.strided_slice to cut a chunk of the raw data - this doesn't crash but it results in just getting the first event over and over again. What is the right way to do this? To pull batches from a TFRecord file? Note, I could manually write a function that chopped up the raw byte data and did some sort of batching - especially easy if I am using the feed_dict approach to getting data into the graph - but I am trying to learn how to use TensorFlow's TFRecord files and how to use their built in batching functions. Thanks!"</li><li>"I am fairly new to TF and ML in general, so I have relied heavily on the documentation and tutorials provided by TF. I have been following along with the Tensorflow 2.0 Objection Detection API tutorial to the letter and have encountered an issue while training: everytime I run the training script model_main_tf2.py, it always hangs after the output: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2) after a number of depreciation warnings. I have tried many different ways of fixing this, including modifying the train script and pipeline.config files. My dataset isn't very large, less than 100 images with a max of 15 labels per image. useful info: Python 3.8.0 Tensorflow 2.4.4 (Non GPU) Windows 10 Pro Any and all help is appreciated!"</li><li>'I found two solutions to calculate FLOPS of Keras models (TF 2.x): [1] https://github.com/tensorflow/tensorflow/issues/32809#issuecomment-849439287 [2] https://github.com/tensorflow/tensorflow/issues/32809#issuecomment-841975359 At first glance, both seem to work perfectly when testing with tf.keras.applications.ResNet50(). The resulting FLOPS are identical and correspond to the FLOPS of the ResNet paper. But then I built a small GRU model and found different FLOPS for the two methods: This results in the following numbers: 13206 for method [1] and 18306 for method [2]. That is really confusing... Does anyone know how to correctly calculate FLOPS of recurrent Keras models in TF 2.x? EDIT I found another information: [3] https://github.com/tensorflow/tensorflow/issues/36391#issuecomment-596055100 When adding this argument to convert_variables_to_constants_v2, the outputs of [1] and [2] are the same when using my GRU example. The tensorflow documentation explains this argument as follows (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/framework/convert_to_constants.py): Can someone try to explain this?'</li></ul> |

  ## Evaluation

  ### Metrics
  | Label | Accuracy | Precision | Recall | F1 |
  |:--------|:---------|:----------|:-------|:-------|
+ | **all** | 0.8187 | 0.8249 | 0.8187 | 0.8179 |
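
These scores come from the held-out test split. As a rough sketch of how such numbers could be recomputed (the `test_texts`/`test_labels` values below are invented stand-ins for the test split, and weighted averaging is an assumption, not something stated in this card):

```python
# Hypothetical re-evaluation sketch; the test data here is a stand-in.
from setfit import SetFitModel
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

test_texts = ["How do I read a TFRecord file?", "Why does my loss become NaN?"]
test_labels = [1, 0]  # hypothetical ground-truth labels

model = SetFitModel.from_pretrained("sharukat/so_mpnet-base_question_classifier")
preds = model.predict(test_texts)  # one 0/1 label per question

accuracy = accuracy_score(test_labels, preds)
# Assumption: the card's precision/recall/F1 are weighted averages.
precision, recall, f1, _ = precision_recall_fscore_support(
    test_labels, preds, average="weighted"
)
print(accuracy, precision, recall, f1)
```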

  ## Uses

 
  # Download from the 🤗 Hub
  model = SetFitModel.from_pretrained("sharukat/so_mpnet-base_question_classifier")
  # Run inference
+ preds = model("I'm trying to take a dataframe and convert them to tensors to train a model in keras. I think it's being triggered when I am converting my Y label to a tensor: I'm getting the following error when casting y_train to tensor from slices: In the tutorials this seems to work but I think those tutorials are doing multiclass classifications whereas I'm doing a regression so y_train is a series not multiple columns. Any suggestions of what I can do?")
  ```
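
The model also accepts a batch of questions. A minimal sketch, continuing from the snippet above (the example questions are invented, and the meaning of labels 0 and 1 is not documented in this card):

```python
# Batch inference sketch; the questions are made up for illustration.
questions = [
    "How do I inspect the shape of a tf.data.Dataset element?",
    "Why does my keras model overfit after a few epochs?",
]
preds = model.predict(questions)  # e.g. array([1, 0]), one label per question
for question, label in zip(questions, preds):
    print(label, question)
```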

  <!--
 
  ## Training Details

  ### Training Set Metrics
+ | Training set | Min | Median | Max |
+ |:-------------|:----|:---------|:----|
+ | Word count | 12 | 128.0219 | 907 |
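
The word counts above are simple per-example token counts. A sketch of how they could be recomputed, assuming whitespace tokenization and a hypothetical `train_texts` list holding the 640 training questions:

```python
# Sketch: recomputing the word-count statistics of the training set.
# `train_texts` is a stand-in for the 640 training questions.
import statistics

train_texts = ["How do I pad a tensor?", "Why is my model not converging?"]
word_counts = [len(text.split()) for text in train_texts]
print(min(word_counts), statistics.median(word_counts), max(word_counts))
```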

  | Label | Training Sample Count |
  |:------|:----------------------|
+ | 0 | 320 |
+ | 1 | 320 |

  ### Training Hyperparameters
  - batch_size: (8, 8)
 
  ### Training Results
  | Epoch | Step | Training Loss | Validation Loss |
  |:-------:|:---------:|:-------------:|:---------------:|
+ | 0.0000 | 1 | 0.3266 | - |
+ | **1.0** | **25640** | **0.0** | **0.2863** |

  * The bold row denotes the saved checkpoint.
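
For context, this is roughly how a run matching the hyperparameters above could be launched; a sketch assuming the setfit >= 1.0 `Trainer` API, where only the base model and `batch_size` are taken from this card and the dataset variables are stand-ins:

```python
# Training sketch; only the base model and batch_size come from this card.
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

train_texts = ["How do I pad a tensor?", "Why does tf.matmul fail here?"]
train_labels = [1, 0]  # stand-in labels

model = SetFitModel.from_pretrained("flax-sentence-embeddings/stackoverflow_mpnet-base")
train_dataset = Dataset.from_dict({"text": train_texts, "label": train_labels})

# batch_size is a pair: (embedding-finetuning phase, classifier phase).
args = TrainingArguments(batch_size=(8, 8))
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```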
  ### Framework Versions
config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "_name_or_path": "checkpoints/step_14430",
  "architectures": [
  "MPNetModel"
  ],
 
  {
+ "_name_or_path": "checkpoints/step_25640",
  "architectures": [
  "MPNetModel"
  ],
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:275fcac2144b13278bd733cc40b858c8a52a60b06fe2801fc7d86fe4565335ea
  size 437967672
 
  version https://git-lfs.github.com/spec/v1
+ oid sha256:d2f525cfb8e8b3946018793494ac6a26049d8ed198c5d20c983cf73ec90efb45
  size 437967672
model_head.pkl CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:cf72336ad00df793ee7d141f6070afa38f729369174ee131c21dd5c2ec379823
  size 7007
 
  version https://git-lfs.github.com/spec/v1
+ oid sha256:016c123d70461936b9b9498918910c3b3d84871e44234b11028d4b736312f54e
  size 7007