docs | category | thread | href | question | context | marked |
---|---|---|---|---|---|---|
tensorflow | Show and Tell | Barlow Twins in TensorFlow | https://discuss.tensorflow.org/t/barlow-twins-in-tensorflow/487 | If you are into self-supervised learning, you already know that “representation collapse” is a real problem and is difficult to get around with simple methods. But not anymore!
Barlow Twins introduces a simple training objective that implicitly guarantees representation collapse does not happen.
Here’s my TensorFlow implementation:
github.com
sayakpaul/Barlow-Twins-TF 21
TensorFlow implementation of Barlow Twins (https://arxiv.org/abs/2103.03230).
With a ResNet20 as the trunk, a 3-layer MLP (each layer containing 2048 units), and 100 epochs of pre-training, I got 62.61% accuracy on the CIFAR10 test set. The pre-training takes ~23 minutes in total on a single Tesla V100. Note that this pre-training does not make use of any labeled samples.
There’s a Colab Notebook inside. So, feel free to tweak it, experiment with it, and let me know. Happy to address any feedback. | You’re a beast implementing these new research methods. Thank you for sharing | 0 |
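For readers who want to see what that training objective looks like, here is a minimal TensorFlow sketch of the Barlow Twins loss as described in the paper (the function name, the λ value, and the epsilon are illustrative assumptions, not taken from the linked repository):

```python
import tensorflow as tf

def barlow_twins_loss(z_a, z_b, lambda_=5e-3):
    """Barlow Twins objective: push the cross-correlation matrix of the two
    views' embeddings towards the identity matrix (illustrative sketch)."""
    batch_size = tf.cast(tf.shape(z_a)[0], z_a.dtype)
    # Standardize each feature dimension across the batch.
    z_a = (z_a - tf.reduce_mean(z_a, axis=0)) / (tf.math.reduce_std(z_a, axis=0) + 1e-6)
    z_b = (z_b - tf.reduce_mean(z_b, axis=0)) / (tf.math.reduce_std(z_b, axis=0) + 1e-6)
    # Cross-correlation matrix between the projector outputs of the two views.
    c = tf.matmul(z_a, z_b, transpose_a=True) / batch_size
    on_diag = tf.reduce_sum(tf.square(tf.linalg.diag_part(c) - 1.0))  # invariance term
    off_diag = tf.reduce_sum(tf.square(c)) - tf.reduce_sum(tf.square(tf.linalg.diag_part(c)))  # redundancy reduction
    return on_diag + lambda_ * off_diag
```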
tensorflow | Show and Tell | Semi-supervised learning with PAWS | https://discuss.tensorflow.org/t/semi-supervised-learning-with-paws/519 | PAWS introduces a way to combine a small fraction of labeled samples with unlabeled ones during the pre-training of vision models. With its simple and unique approach, it sets SOTA in semi-supervised learning, and does so with far less compute and fewer parameters.
Here’s my implementation of PAWS in TensorFlow:
github.com
sayakpaul/PAWS-TF 15
Minimal implementation of PAWS (https://arxiv.org/abs/2104.13963) in TensorFlow.
For the benefit of the community, I have included all the major bits that have been used in order to make PAWS work. These recipes are largely applicable to train self-supervised and semi-supervised models at scale:
Multi-crop augmentation policy (helps a network systematically learn local to global mappings)
Class stratified sampling
WarmUpCosine LR schedule
Training with the LARS optimizer (with the correct hyperparameter choices)
Additionally, I have included a Colab Notebook that walks through the multi-crop augmentation method, since it can seem daunting when you work it out for the first time.
The results are pretty promising. I encourage you, folks, to check it out. | Massive share, thank you Sayak
Loved the PAWS paper | 0 |
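Of the recipes listed above, the WarmUpCosine learning-rate schedule is the one people ask about most often. A common way to implement such a schedule in Keras is sketched below (this is a generic warmup-plus-cosine-decay schedule, not necessarily identical to the one in the linked repository; the usage values are placeholders):

```python
import math
import tensorflow as tf

class WarmUpCosine(tf.keras.optimizers.schedules.LearningRateSchedule):
    """Linear warmup followed by cosine decay to zero."""

    def __init__(self, base_lr, total_steps, warmup_steps):
        super().__init__()
        self.base_lr = base_lr
        self.total_steps = total_steps
        self.warmup_steps = warmup_steps

    def __call__(self, step):
        step = tf.cast(step, tf.float32)
        warmup = tf.cast(self.warmup_steps, tf.float32)
        total = tf.cast(self.total_steps, tf.float32)
        warmup_lr = self.base_lr * step / tf.maximum(warmup, 1.0)
        progress = (step - warmup) / tf.maximum(total - warmup, 1.0)
        cosine_lr = 0.5 * self.base_lr * (1.0 + tf.cos(math.pi * progress))
        return tf.where(step < warmup, warmup_lr, cosine_lr)

# Usage sketch (values are placeholders):
# schedule = WarmUpCosine(base_lr=0.1, total_steps=10_000, warmup_steps=500)
# optimizer = tf.keras.optimizers.SGD(learning_rate=schedule)
```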
tensorflow | Show and Tell | Vision Transformers are Robust Learners | https://discuss.tensorflow.org/t/vision-transformers-are-robust-learners/681 | Hi folks,
I wanted to share my new work with Pin-Yu Chen (IBM Research) - “Vision Transformers are Robust Learners”.
For some time now, Transformers have taken the vision world by storm. In this work, we question the robustness aspects of Vision Transformers. Specifically, we investigate the question:
With the virtue of self-attention, can Vision Transformers provide improved robustness to common corruptions, perturbations, etc.? If so, why?
We build on top of existing works & investigate the robustness aspects of ViT. Through a series of six systematically designed experiments, we present analyses that provide both quantitative & qualitative indications to explain why ViTs are indeed more robust learners.
Paper: [2105.07581] Vision Transformers are Robust Learners
Code: https://git.io/J3VO0 | Sayak_Paul:
[2105.07581] Vision Transformers are Robust Learners
Congratulations - amazing work on ViT research. | 0 |
tensorflow | Show and Tell | Learning to Resize in Computer Vision | https://discuss.tensorflow.org/t/learning-to-resize-in-computer-vision/501 | It is a common belief that if we constrain vision models to perceive things as humans do, their performance can be improved. For example, in this work, Geirhos et al. showed that vision models pre-trained on the ImageNet-1k dataset are biased toward texture, whereas human beings mostly rely on shape to develop a common perception. But does this belief always apply, especially when it comes to improving the performance of vision models?
Know more in this post:
keras.io
Keras documentation: Learning to Resize in Computer Vision | Great share, thanks Sayak! | 0 |
tensorflow | Special Interest Groups | About the Special Interest Groups category | https://discuss.tensorflow.org/t/about-the-special-interest-groups-category/17 | TensorFlow’s Special Interest Groups (SIGs) support community collaboration on particular project focuses.
Our SIGs include:
SIG I/O
SIG JVM
SIG MLIR
SIG TFjs
SIG Recommenders
SIG Models
SIG Build
SIG Addons
SIG TensorBoard
SIG Micro
SIG Keras
SIG Swift
SIG Rust | I think we have 18 SIGs now community/sigs at master · tensorflow/community · GitHub 9 | 0 |
tensorflow | Special Interest Groups | Customize Keras Loss to take Mean of the Regularizations instead of the Sum | https://discuss.tensorflow.org/t/customize-keras-loss-to-take-mean-of-the-regularizations-intead-of-the-sum/7327 | I was writing some simple models and I did not want the model loss to be the sum of all L2 regularizations. I wanted it to be the mean instead. My reason is that having 3 L2 losses has a huge impact on regularization, and taking the mean reduces that impact. In most courses as well, the mean is taken.
Any idea on how to approach this in a manner that generalizes well?
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
model = Sequential()
model.add(Dense(100, input_shape=(8,), kernel_regularizer=tf.keras.regularizers.L2(0.01)))
model.add(Dense(80, kernel_regularizer=tf.keras.regularizers.L2(0.01)))
model.add(Dense(30, kernel_regularizer=tf.keras.regularizers.L2(0.01)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
print(model.losses)
[<tf.Tensor: shape=(), dtype=float32, numpy=0.15066518>, <tf.Tensor: shape=(), dtype=float32, numpy=0.883246>, <tf.Tensor: shape=(), dtype=float32, numpy=0.4300898>]
I would want the loss to add (0.15066518 + 0.883246 + 0.4300898)/3 instead of (0.15066518 + 0.883246 + 0.4300898) | Do you just want to use Reduction.NONE, like in
github.com
keras-team/keras/blob/master/keras/losses.py#L546-L547
>>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True,
... reduction=tf.keras.losses.Reduction.NONE)
And then apply your custom operation? | 0 |
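If the reduction of the main loss is not the issue and only the regularization terms should be averaged instead of summed, another option is to override train_step and add the mean of model.losses manually. The sketch below is illustrative only (the class name is made up, and it assumes the Sequential stack from the question is rebuilt functionally and wrapped in the subclass):

```python
import tensorflow as tf

class MeanRegModel(tf.keras.Model):
    """Adds the mean (not the sum) of the layer regularization losses."""

    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            # Main loss only; regularization terms are added manually below.
            loss = self.compiled_loss(y, y_pred)
            if self.losses:
                loss += tf.reduce_mean(tf.stack(self.losses))
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}

# Usage sketch: same layers as in the question, wrapped in the subclass.
inputs = tf.keras.Input(shape=(8,))
h = tf.keras.layers.Dense(100, kernel_regularizer=tf.keras.regularizers.L2(0.01))(inputs)
h = tf.keras.layers.Dense(80, kernel_regularizer=tf.keras.regularizers.L2(0.01))(h)
h = tf.keras.layers.Dense(30, kernel_regularizer=tf.keras.regularizers.L2(0.01))(h)
outputs = tf.keras.layers.Dense(1)(h)
model = MeanRegModel(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```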
tensorflow | Special Interest Groups | Tfjs-tflite wasm does not work on codepen | https://discuss.tensorflow.org/t/tsjs-tflite-wasm-does-not-work-on-codepen/7283 | I am trying to get a tfjs-tflite demo from here
to work on CodePen here
The goal is to cartoonise the image on the top row and generate its corresponding cartoon via GAN on the second row.
For some reason, it does not seem to produce a result. However, when I run it with yarn it works fine.
Any clue as to why this is would be greatly appreciated (I am a JS newbie). | I guess @Jason might be able to help here | 0 |
tensorflow | Special Interest Groups | Mlir-hlo repo cmake setup for dependent projects | https://discuss.tensorflow.org/t/mlir-hlo-repo-cmake-setup-for-dependent-projects/7165 | This is a question related to projects that would like to depend on mlir-hlo and set it up as a submodule (while using a cmake setup).
https://github.com/tensorflow/mlir-hlo
As far as I could tell, mlir-hlo doesn’t include something equivalent to MLIRHLOConfig.cmake.in that would allow a user project to find it and see all its targets. As a result, one approach would be to set it up with proper target_link options to point to the submodule’s build dir. This creates issues with transitive dependencies and rebuilding after updates. I haven’t tried the approaches of cmake external projects ExternalProject — CMake 3.22.1 Documentation or simply an add_subdirectory.
Is there a recommended setup for projects that would like to depend on mlir-hlo via a submodule? Would a contribution of Config.cmake be acceptable in the repo or does one already exist somewhere? Thanks! | I don’t know about the solution here, but we’d take a patch | 0 |
tensorflow | Special Interest Groups | TensorFlow “TypeError: Target data is missing” though dataset with 2 dimension tuple was supplied | https://discuss.tensorflow.org/t/tensorflow-typeerror-target-data-is-missing-though-dataset-with-2-dimension-tuple-was-supplied/7188 | I’m trying to use a generator-based dataset:
def gen():
return zip(samples,feature)
ds = tf.data.Dataset.from_generator(gen,output_types=tf.dtypes.float32)
model.fit(ds,
epochs=150,
#callbacks=[tensorboard_callback]
)
model.save("/sda/anyone/imagenet-in-np/transformer")
where feature is a numpy.ndarray (2D array)
and samples is a numpy.ndarray (4D array).
And I get the following error:
TypeError: Target data is missing. Your model has `loss`: BinaryCrossentropy, and therefore expects target data to be passed in `fit()`.
which is strange, as the target data is actually present.
Whenever I separate the dataset into two:
def gen():
return samples
ds = tf.data.Dataset.from_generator(gen,output_types=tf.dtypes.float32)
def gen2():
return feature
ds2= tf.data.Dataset.from_generator(gen2,output_types=tf.dtypes.float32)
model.fit(ds,ds2,
epochs=150,
#callbacks=[tensorboard_callback]
)
model.save("/sda/anyone/imagenet-in-np/transformer")
I get:
raise ValueError("`y` argument is not supported when using "
ValueError: `y` argument is not supported when using dataset as input.
Which means that TF doesn’t accept this split.
I tried
def gen():
for element in zip(samples,feature):
yield element
ds = tf.data.Dataset.from_generator(gen(),output_types=tf.dtypes.float32)
I get
TypeError: generator must be a Python callable.
So I tried to swap it to :
def gen():
for element in zip(samples,feature):
yield element
ds = tf.data.Dataset.from_generator(gen,output_types=tf.dtypes.float32)
I get again:
TypeError: Target data is missing. Your model has `loss`: BinaryCrossentropy, and therefore expects target data to be passed in `fit()`.
python-BaseException
So how should I use the generator API? | I actually got the same error, and my mistake was that I was passing the inputs and labels in a format different from the one the model expected.
Additionally, can you provide a minimal working example (MWE)? It would be easier to find the problem that way. | 0 |
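As a rough illustration of the expected format (not the asker's actual shapes — the TensorSpec shapes below are placeholders to adjust), the generator should yield (input, target) pairs and from_generator should be given an output signature:

```python
import tensorflow as tf

# `samples` (inputs) and `feature` (targets) are the arrays from the question.
def gen():
    for x, y in zip(samples, feature):
        yield x, y  # yield (input, target) pairs instead of a single zip object

ds = tf.data.Dataset.from_generator(
    gen,
    output_signature=(
        tf.TensorSpec(shape=(None, None, None), dtype=tf.float32),  # placeholder input shape
        tf.TensorSpec(shape=(None,), dtype=tf.float32),             # placeholder target shape
    ),
).batch(32)

model.fit(ds, epochs=150)

# If both arrays fit in memory, tf.data.Dataset.from_tensor_slices((samples, feature))
# is an even simpler way to build a dataset of (input, target) pairs.
```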
tensorflow | Special Interest Groups | Tensorflow transforms passes hardcoded on “main” function | https://discuss.tensorflow.org/t/tensorflow-transforms-passes-hardcoded-on-main-function/6908 | Hello,
I noticed at least half a dozen TensorFlow transforms passes (like tensor-list-ops-decomposition) hardcoded to work only on a single function named “main”.
(tensor_list_ops_decomposition.cc on github)
void TensorListOpsDecompositionPass::runOnOperation() {
auto module = getOperation();
auto main = module.lookupSymbol<FuncOp>("main");
if (!main) return;
if (failed(DecomposeTensorListOps(&main.front(), module))) {
signalPassFailure();
}
}
Is there an assumption that the canonical form is one where the “entry function” is named “main”? This isn’t true for an import/translation from a tf.function where the entry function has the tf.function’s name with a suffix/prefix. Should this check instead be for a function with the attribute “tf.entry_function” and should this be patched like this or better with a common utility to update all passes with such checks?
- auto main = module.lookupSymbol<FuncOp>("main");
- if (!main) return;
- if (failed(DecomposeTensorListOps(&main.front(), module))) {
- signalPassFailure();
+ for (auto func_op : module.getOps<FuncOp>()) {
+ // Just run on the entry function.
+ if (!func_op->getAttr("tf.entry_function") && func_op.getName() != "main")
+ continue;
+ if (failed(DecomposeTensorListOps(&func_op.front(), module))) {
+ signalPassFailure();
+ }
+ break;
}
Related to this are also several instances of “main” and “tf.entry_function” hardcoded in “transforms/” and “translate/”. | We likely should provide a helper for this instead of a raw loop.
Also it isn’t clear to me what a public function (from an MLIR symbol visibility point of view) that isn’t an entry function would mean? And if so why not just filter on public ones? | 0 |
tensorflow | Special Interest Groups | Creating float tensors from BufferedImage in Java/Kotlin | https://discuss.tensorflow.org/t/creating-float-tensors-from-bufferedimage-in-java-kotlin/1865 | Continuing a thread started on Gitter:
Hello, I want to run a Tensorflow model I found with a Java app, but I am having difficulty with getting the input just right. Below you can see the result from the layer analysis. I found a few examples for one-dimensional input (mnist) and I got another model working that required integers, but creating Tensor with dimensions {batch, height, width, channels} is a difficult task. I would like some help. The input is just a JPG, basically BufferedImage as I want to keep my options open.
Often TF Java users are looking for a snippet showing how this can be done easily, I’m sharing one here written in Kotlin (warning, I did not test it out after modifying it, but basically the logic should be good):
fun preprocess(sourceImages: List<BufferedImage>, imageHeight: Int, imageWidth: Int, imageChannels: Int): TFloat32 {
val imageShape = Shape.of(sourceImages.size.toLong(), imageHeight.toLong(), imageWidth.toLong(), imageChannels.toLong())
return TFloat32.tensorOf(imageShape) { tensor ->
// Copy all images to the tensor
sourceImages.forEachIndexed { imageIdx, sourceImage ->
// Scale the image to required dimensions if needed
val image = if (sourceImage.width != imageWidth || sourceImage.height != imageHeight) {
val scaledImage = BufferedImage(imageWidth, imageHeight, BufferedImage.TYPE_3BYTE_BGR)
scaledImage.createGraphics().apply {
setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR)
drawImage(sourceImage, 0, 0, imageWidth, imageHeight, null)
dispose()
}
scaledImage
} else {
sourceImage
}
// Converts the image to floats and normalize by subtracting mean values
var i = 0
for (h in 0L until imageHeight) {
for (w in 0L until imageWidth) {
// "caffe"-style normalization
tensor.setFloat(image.data.dataBuffer.getElemFloat(i++) - 103.939f, imageIdx.toLong(), h, w, 0)
tensor.setFloat(image.data.dataBuffer.getElemFloat(i++) - 116.779f, imageIdx.toLong(), h, w, 1)
tensor.setFloat(image.data.dataBuffer.getElemFloat(i++) - 123.68f, imageIdx.toLong(), h, w, 2)
}
}
}
}
}
So the idea is simply to resample your image if it is not already of the right size and to normalize its pixel values when feeding the tensor. The “caffe”-style normalization is the one used by default by Keras in Python so the mean values to subtract were picked from Keras sources 4 directly.
UPDATED : here’s the Java version
TFloat32 preprocess(List<BufferedImage> sourceImages, int imageHeight, int imageWidth, int imageChannels) {
Shape imageShape = Shape.of(sourceImages.size(), imageHeight, imageWidth, imageChannels);
return TFloat32.tensorOf(imageShape, tensor -> {
// Copy all images to the tensor
int imageIdx = 0;
for (BufferedImage sourceImage : sourceImages) {
// Scale the image to required dimensions if needed
BufferedImage image;
if (sourceImage.getWidth() != imageWidth || sourceImage.getHeight() != imageHeight) {
image = new BufferedImage(imageWidth, imageHeight, BufferedImage.TYPE_3BYTE_BGR);
Graphics2D graphics = image.createGraphics();
graphics.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR);
graphics.drawImage(sourceImage, 0, 0, imageWidth, imageHeight, null);
graphics.dispose();
} else {
image = sourceImage;
}
// Converts the image to floats and normalize by subtracting mean values
int i = 0;
for (long h = 0; h < imageHeight; ++h) {
for (long w = 0; w < imageWidth; ++w) {
// "caffe"-style normalization
tensor.setFloat(image.getData().getDataBuffer().getElemFloat(i++) - 103.939f, imageIdx, h, w, 0);
tensor.setFloat(image.getData().getDataBuffer().getElemFloat(i++) - 116.779f, imageIdx, h, w, 1);
tensor.setFloat(image.getData().getDataBuffer().getElemFloat(i++) - 123.68f, imageIdx, h, w, 2);
}
}
++imageIdx;
}
});
} | Sorry I can’t add links but there’s also some example java code in the tensorflow-java models github repository. You need to drill down to the cnn FasterRcnnInception directory | 0 |
tensorflow | Special Interest Groups | Can’t pickle weakref objects | https://discuss.tensorflow.org/t/cant-pickle-weakref-objects/5433 | I am building my project; the data is fetched from my database for a specific Project_id, and then I train my model using an LSTM. The epochs clearly run, but after that it shows an Internal Server Error.
admin.py
def build(self, request, queryset):
count = 0
for p in queryset:
if build_id(p.project_management.id):
count += 1
else:
messages.warning(request, f"Could not build model for {p}")
messages.success(
request, f"Successfully built models for {count} projects")
build.short_description = "Build models for selected Projects"
build.py
Here the model is built for a specific Project_id. Only model.pkl gets stored, and even that is incomplete; the other files, scaler_in and scaler_out, are not saved to the specified folder.
def build_id(project_id):
# get directory path to store models in
path = fetch_model_path(project_id, True)
# train model
model, scaler_in, scaler_out = train_project_models(project_id)
# ensure model was trained
if model is None:
return False
# store models
store_model(f'{path}/model.pkl', model)
store_model(f'{path}/scaler_in.pkl', scaler_in)
store_model(f'{path}/scaler_out.pkl', scaler_out)
# clear current loaded model from memory
keras_clear()
return True
utils.py
with open(path, 'wb') as f:
model_file = File(f)
pickle.dump(model, model_file)
When I comment out pickle.dump(model, model_file), then model.pkl, scaler_in.pkl, and scaler_out.pkl are saved as files with 0 KB of data. If the pkl files already exist with data, it removes them and builds the project successfully. I debugged this code and the Django debug toolbar shows that the page is temporarily moved.
output
Epoch 1/4
11/11 [==============================] - 9s 302ms/step - loss: 0.4594 - val_loss: 0.2777
Epoch 2/4
11/11 [==============================] - 2s 177ms/step - loss: 0.1039 - val_loss: 0.0395
Epoch 3/4
11/11 [==============================] - 2s 170ms/step - loss: 0.0545 - val_loss: 0.0361
Epoch 4/4
11/11 [==============================] - 2s 169ms/step - loss: 0.0414 - val_loss: 0.0551
Internal Server Error: /turboai/turboAI/jaaiparameters/
Traceback (most recent call last):
File "E:\.Space\project\venv\lib\site-packages\django\core\handlers\exception.py", line 47, in inner
response = get_response(request)
File "E:\.Space\project\venv\lib\site-packages\django\core\handlers\base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "E:\.Space\project\venv\lib\site-packages\django\contrib\admin\options.py", line 616, in wrapper
return self.admin_site.admin_view(view)(*args, **kwargs)
File "E:\.Space\project\venv\lib\site-packages\django\utils\decorators.py", line 130, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "E:\.Space\project\venv\lib\site-packages\django\views\decorators\cache.py", line 44, in _wrapped_view_func
response = view_func(request, *args, **kwargs)
File "E:\.Space\project\venv\lib\site-packages\django\contrib\admin\sites.py", line 232, in inner
return view(request, *args, **kwargs)
File "E:\.Space\project\venv\lib\site-packages\django\utils\decorators.py", line 43, in _wrapper
return bound_method(*args, **kwargs)
File "E:\.Space\project\venv\lib\site-packages\django\utils\decorators.py", line 130, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "E:\.Space\project\venv\lib\site-packages\django\contrib\admin\options.py", line 1723, in changelist_view
response = self.response_action(request, queryset=cl.get_queryset(request))
File "E:\.Space\project\venv\lib\site-packages\django\contrib\admin\options.py", line 1408, in response_action
response = func(self, request, queryset)
File "E:\.Space\project\TurboAnchor\turboAI\admin.py", line 125, in build
if build_id(p.project_management.id):
File "E:\.Space\project\TurboAnchor\turboAI\build.py", line 48, in build_id
store_model(f'{path}/model.pkl', model)
File "E:\.Space\project\TurboAnchor\turboAI\utils.py", line 154, in store_model
pickle.dump(model, model_file)
TypeError: can't pickle weakref objects
[29/Oct/2021 17:50:31] "POST /turboai/turboAI/jaaiparameters/ HTTP/1.1" 500 126722 | Please look at:
github.com/tensorflow/tensorflow
Keras model pickle-able but tf.keras model not pickle-able
opened Nov 29, 2019 · closed Oct 4, 2021 · Edwin-Koh1 · labels: stat:awaiting response, stat:awaiting tensorflower, type:bug, stalled, comp:keras, TF 2.5
**System information**
- Windows 10
- Tensorflow 2.0 (CPU)
- joblib 0.14.0
- Python 3.7.5
- Keras 2.3.1
Hello everybody! This is my first post so please forgive me if I have missed something. So I'm trying to use a genetic algorithm to train and evaluate multiple NN architectures so I need to parallelize them on a multi-core CPU. Therefore I have used joblib to try to parallelize this. However, I was stuck on my tf.keras code because it wasn't pickleable. After many hours of debugging I finally realised that the tf.keras models are not pickleable whereas keras models are.
**Describe the current behavior**
The code below works but if you replaced keras with tf.keras, there will be an error:
**Could not pickle the task to send it to the workers.**
**Describe the expected behavior**
Moving forward, tf.keras should be replacing keras and therefore tf.keras should also be pickleable.
**Code to reproduce the issue**
```
#The following is a simple code to illustrate the problem:
from joblib import Parallel, delayed
import keras
import tensorflow as tf
def test():
model = keras.models.Sequential()
return
Parallel(n_jobs=8)(delayed(test)(i) for i in range(10)) #this works as intended
def test_tf():
model = tf.keras.models.Sequential()
return
Parallel(n_jobs=8)(delayed(test_tf)(i) for i in range(10)) #this will spit out the error above
```
**Other comments**
I guess a quick fix would just be to replace all the existing code with tf.keras to just keras but seeing as keras support will be discontinued and absorbed by Tensorflow 2.0, I think this should be fixed.
I suggest testing this with TF 2.6.x or TF 2.7 RC. | 0 |
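Independently of the pickling bug tracked above, a common workaround is to avoid pickling the Keras model at all and use its native saving API, while keeping pickle for plain Python objects such as the scalers. A rough sketch is below (store_model here is a hypothetical stand-in for the helper in utils.py):

```python
import pickle
import tensorflow as tf

def store_model(path, obj):
    """Save Keras models natively; pickle everything else (e.g. scalers)."""
    if isinstance(obj, tf.keras.Model):
        # SavedModel directory; tf.keras.models.load_model() restores it later.
        obj.save(path.replace(".pkl", ""))
    else:
        with open(path, "wb") as f:
            pickle.dump(obj, f)
```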
tensorflow | Special Interest Groups | How to save a model with a custom layer? | https://discuss.tensorflow.org/t/how-to-save-a-model-with-a-custom-layer/7004 | Hi everyone,
I am facing a problem while trying to save my model that has a custom layer. I followed the same method the François Chollet book uses, but I got this error:
ValueError: Unable to create a dataset (name already exists).
can anyone help, please? | Hi, can you share the chapter from the Deep Learning with Python (v1 or v2) book you’re referring to, as well as some code? Are you saving the model with Keras ModelCheckpoint callbacks? I’m sure we’ll be able to help. | 0 |
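While waiting for more details, a generic illustration of what saving usually requires from a custom layer is shown below: the layer exposes its constructor arguments via get_config and is passed back through custom_objects when loading. The layer here is purely hypothetical, not the asker's layer:

```python
import tensorflow as tf

class MyLayer(tf.keras.layers.Layer):
    def __init__(self, units=32, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.dense = tf.keras.layers.Dense(units)

    def call(self, inputs):
        return self.dense(inputs)

    def get_config(self):
        # Serialize the constructor arguments so the layer can be re-created.
        config = super().get_config()
        config.update({"units": self.units})
        return config

# Later:
# model.save("my_model")
# restored = tf.keras.models.load_model("my_model", custom_objects={"MyLayer": MyLayer})
```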
tensorflow | Special Interest Groups | About the tools folder | https://discuss.tensorflow.org/t/about-the-the-tools-folder/7057 | I took a quick look at the tools folder and I am interested in maintaining it, since I have some experience with Docker. My GitHub nickname is @vulkomilev. | You can start to create and review PRs related to that folder. At some point we will add you as a codeowner. | 0 |
tensorflow | Special Interest Groups | Deformable convolution and other custom ops | https://discuss.tensorflow.org/t/deformable-convolution-and-other-custom-ops/1951 | Recently we had a refresh over a Deformable convloution WIP PR 51 in Addons.
I’ve cherry-picked this as an example as this requires us to maintain almost 3k lines of new code in the repository.
This is maintainer-ship overhead is also quite similar to what we have with other custom kernels PRs.
As Addons is one of the few Ecosystem repositories to support custom (c++) ops and the related CI infra it is quite normal that we have this kind of proposed PRs.
But as the codeownership of these components it is generally not so stable over time we would like to not merge, as possible, these custom ops PRs also to achieve a more broad hardware coverage.
What are the alternatives? How we could collaborate when a compositional implementation has huge performance gaps?
Often this kind of issues are shared across the “extend” ecosystem like e.g. for the EmbeddingBag:
github.com/pytorch/xla
lowering embeddingbag to XLA
opened Aug 5, 2020 · shz0116 · labels: nostale, op lowering
The embeddingbag operation has not been lowered to XLA. I saw aten:embeddingbag from the profiling.
github.com/tensorflow/addons
EmbeddingBag and Product-Key Memory Layers
opened Oct 14, 2020 · Rocketknight1 · labels: Feature Request, layers
**Describe the feature and the current behavior/state.**
FAIR have a cool paper where they introduce [Product-Key Memory Layers](https://arxiv.org/abs/1907.05242) - these are layers that can add a huge number of parameters (100M-1B) to a network with a very minimal compute overhead.
Unfortunately, implementing them efficiently depends on the [EmbeddingBag layer](https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html) from Pytorch. This layer basically does a gather op followed by a weighted sum across the final dimension of the gather indices.
It is trivial to implement this op as a composition of two or three ops in Tensorflow, but doing so requires you to materialize the output of the gather, which in the case of Product-Key Memory layers is enormous, and usually blows out my GPU RAM. By combining these ops into a single efficient call, EmbeddingBag avoids ever materializing the extremely large pre-sum gather output. There's no efficient way to do the same in Tensorflow without a custom op.
I've already gotten a CUDA and (single-threaded) CPU implementation of EmbeddingBag working locally using the custom-op repo and associated docker image. I've verified correctness by comparing outputs and gradients to those from the manual composition of ops, and speed and memory usage are vastly improved. I could also contribute a TF implementation of the Product-Key Memory layer itself if desired.
**Relevant information**
- Are you willing to contribute it (yes/no): yes
- Are you willing to maintain it going forward? (yes/no): yes
- Is there a relevant academic paper? (if so, where): https://arxiv.org/abs/1907.05242
- Is there already an implementation in another framework? (if so, where): Yes, EmbeddingBag is already a PyTorch layer
- Was it part of tf.contrib? (if so, where):
**Which API type would this fall under (layer, metric, optimizer, etc.)**
Layer
**Who will benefit with this feature?**
People who want to squeeze loads of parameters into their model while maintaining fast throughput and aren't worried about overfitting. The paper used it for big autoregressive NLP Transformers, but I suspect you could deploy it in a lot of other places too.
**Any other info.**
I have only implemented the portions of EmbeddingBag necessary for Product-Key Memory layers.
EmbeddingBag op and layer by Rocketknight1 · Pull Request #2352 · tensorflow/addons · GitHub (1k lines)
github.com/tensorflow/tensorflow
embedding_lookup cause ran out of memory
opened Oct 5, 2020 · shz0116 · labels: TF 2.3, comp:tpus, comp:xla, stat:awaiting tensorflower, type:bug
I am running the following code to test embedding_lookup.
```python
# command:
# python3 -m pdb embtest.py --features=1000 --nnz=30 --batch=128
#
# error:
# *** tensorflow.python.framework.errors_impl.ResourceExhaustedError:
# Ran out of memory in memory space vmem. It should not be possible to run out of vmem - please file a bug against XLA.
#
import tensorflow as tf
import numpy as np
import sys
import os
import time
def measure(params, sp_ids, steps, thr):
res = tf.nn.embedding_lookup([params[0:thr],params[thr:]], sp_ids, None, name="TEST1")
print("Finished test")
return res
if __name__ == "__main__":
import sys
import argparse
parser = argparse.ArgumentParser(
description="Measure the performance of tensorflow embeddingbag using tf.nn.embedding" )
parser.add_argument("--features", type=int, default=10)
parser.add_argument("--em", type=int, default=2)
parser.add_argument("--nnz", type=int, default=2)
parser.add_argument("--batch", type=int, default=4)
parser.add_argument("--steps", type=int, default=1)
parser.add_argument("--warmups", type=int, default=0)
args = parser.parse_args()
features = args.features
em = args.em
nnz = args.nnz
batch = args.batch
steps = args.steps
warmups = args.warmups
sp_ids = np.random.randint(0, features, (batch * nnz,))
res = tf.zeros([batch, em])
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="grpc://"+os.environ["TPU_IP"])
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
print(" ")
tpus = tf.config.list_logical_devices('TPU')
print("There are {} tpu logical devices".format(len(tpus)))
print(tpus[0])
with tf.device('TPU:0'):
params = tf.random.uniform([features, em])
res = measure(params, sp_ids, tf.constant(steps), features//2)
print(res)
```
But got the following error:
```bash
hongzhang@shan-tf1:~$ python embtest.py --features=1000 --nnz=30 --batch=128
Eager execution : True
2020-10-05 08:23:42.244623: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-10-05 08:23:42.250601: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2300000000 Hz
2020-10-05 08:23:42.251595: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4c1dde0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-10-05 08:23:42.251631: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-10-05 08:23:42.263068: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job worker -> {0 -> 10.178.175.58:8470}
2020-10-05 08:23:42.263113: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:38651}
2020-10-05 08:23:42.279709: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job worker -> {0 -> 10.178.175.58:8470}
2020-10-05 08:23:42.279743: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:38651}
2020-10-05 08:23:42.280176: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:405] Started server with target: grpc://localhost:38651
There are 8 tpu logical devices
LogicalDevice(name='/job:worker/replica:0/task:0/device:TPU:7', device_type='TPU')
Traceback (most recent call last):
File "embtest.py", line 84, in <module>
t1 = measure(params, sp_ids, tf.constant(steps), features//2)
File "embtest.py", line 15, in measure
res = tf.nn.embedding_lookup([params[0:thr],params[thr:]], sp_ids, None, name="TEST1")
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/embedding_ops.py", line 394, in embedding_lookup_v2
return embedding_lookup(params, ids, "div", name, max_norm=max_norm)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/embedding_ops.py", line 328, in embedding_lookup
transform_fn=None)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/embedding_ops.py", line 246, in _embedding_lookup_and_transform
ret.set_shape(ids.get_shape().concatenate(element_shape_s))
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 1206, in set_shape
if not self.shape.is_compatible_with(shape):
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 1167, in shape
self._tensor_shape = tensor_shape.TensorShape(self._shape_tuple())
tensorflow.python.framework.errors_impl.ResourceExhaustedError: Ran out of memory in memory space vmem. It should not be possible to run out of vmem - please file a bug against XLA.
Largest program allocations in vmem:
XLA label: register allocator spill slots
Allocation type: scoped
XLA label: %concatenate.724 = f32[3840,2]{0,1:T(2,128)} concatenate(f32[1,2]{0,1:T(2,128)}, f32[3,2]{0,1:T(2,128)}, f32[5,2]{0,1:T(2,128)}, f32[1,2]{0,1:T(2,128)}, ...(+2400)), dimensions={0}
Allocation type: scoped
XLA label: %concatenate.724 = f32[3840,2]{0,1:T(2,128)} concatenate(f32[1,2]{0,1:T(2,128)}, f32[3,2]{0,1:T(2,128)}, f32[5,2]{0,1:T(2,128)}, f32[1,2]{0,1:T(2,128)}, ...(+2400)), dimensions={0}
Allocation type: scoped
XLA label: %concatenate.724 = f32[3840,2]{0,1:T(2,128)} concatenate(f32[1,2]{0,1:T(2,128)}, f32[3,2]{0,1:T(2,128)}, f32[5,2]{0,1:T(2,128)}, f32[1,2]{0,1:T(2,128)}, ...(+2400)), dimensions={0}
Allocation type: scoped
XLA label: %concatenate.724 = f32[3840,2]{0,1:T(2,128)} concatenate(f32[1,2]{0,1:T(2,128)}, f32[3,2]{0,1:T(2,128)}, f32[5,2]{0,1:T(2,128)}, f32[1,2]{0,1:T(2,128)}, ...(+2400)), dimensions={0}
Allocation type: scoped
2020-10-05 08:23:59.826142: W tensorflow/core/distributed_runtime/eager/remote_tensor_handle_data.cc:76] Unable to destroy remote tensor handles. If you are running a tf.function, it usually indicates some op in the graph gets an error: Ran out of memory in memory space vmem. It should not be possible to run out of vmem - please file a bug against XLA.
Largest program allocations in vmem:
XLA label: register allocator spill slots
Allocation type: scoped
XLA label: %concatenate.724 = f32[3840,2]{0,1:T(2,128)} concatenate(f32[1,2]{0,1:T(2,128)}, f32[3,2]{0,1:T(2,128)}, f32[5,2]{0,1:T(2,128)}, f32[1,2]{0,1:T(2,128)}, ...(+2400)), dimensions={0}
Allocation type: scoped
XLA label: %concatenate.724 = f32[3840,2]{0,1:T(2,128)} concatenate(f32[1,2]{0,1:T(2,128)}, f32[3,2]{0,1:T(2,128)}, f32[5,2]{0,1:T(2,128)}, f32[1,2]{0,1:T(2,128)}, ...(+2400)), dimensions={0}
Allocation type: scoped
XLA label: %concatenate.724 = f32[3840,2]{0,1:T(2,128)} concatenate(f32[1,2]{0,1:T(2,128)}, f32[3,2]{0,1:T(2,128)}, f32[5,2]{0,1:T(2,128)}, f32[1,2]{0,1:T(2,128)}, ...(+2400)), dimensions={0}
Allocation type: scoped
XLA label: %concatenate.724 = f32[3840,2]{0,1:T(2,128)} concatenate(f32[1,2]{0,1:T(2,128)}, f32[3,2]{0,1:T(2,128)}, f32[5,2]{0,1:T(2,128)}, f32[1,2]{0,1:T(2,128)}, ...(+2400)), dimensions={0}
Allocation type: scoped
```
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
os: Linux
os kernel version: #1 SMP Debian 4.19.146-1 (2020-09-17)
os release version: 4.19.0-11-cloud-amd64
os platform: Linux-4.19.0-11-cloud-amd64-x86_64-with-debian-10.6
linux distribution: ('debian', '10.6', '')
linux os distribution: ('debian', '10.6', '')
mac version: ('', ('', '', ''), '')
uname: uname_result(system='Linux', node='shan-tf1', release='4.19.0-11-cloud-amd64', version='#1 SMP Debian 4.19.146-1 (2020-09-17)', machine='x86_64', processor='')
architecture: ('64bit', 'ELF')
machine: x86_64
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below):
tf.version.VERSION = 2.3.0-dev20200620
tf.version.GIT_VERSION = v1.12.1-34769-gfd2d4cdb70
tf.version.COMPILER_VERSION = 7.3.1 20180303
- Python version:
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:
github.com/google/jax
np.take and np.einsum aren't fused properly
opened May 26, 2020 · AranKomat · labels: P2 (eventual), performance, xla_issue
I'm trying to translate [Product Key Memory](https://arxiv.org/abs/1907.05242) in PyTorch into JAX, and this requires the translation of [nn.EmbeddingBag](https://pytorch.org/docs/master/generated/torch.nn.EmbeddingBag.html) with per_sample_weights, as I haven't found any counterpart in JAX (but if you know, please let me know).
````python
hidden_dim = 512
n_keys = 512
batch = 2 ** 15
knn = 32
heads = 4
key = random.PRNGKey(0)
values = random.normal(key, (n_keys ** 2, hidden_dim))
indices = random.randint(key, (batch*heads, knn), 0, n_keys ** 2)
weights = random.normal(key, (batch*heads, knn))
@jit
def scatter_weighted_sum(inputs, indices, weights):
num_bags = weights.shape[-1]
dim = inputs.shape[-1]
indices = indices.reshape(-1)
tmp = inputs.take(indices, axis=0).reshape(-1, num_bags, dim)
return np.einsum('ind, in -> id', tmp, weights)
````
Thanks,
Stefano | @kristen Is the MLIR team registered on this Discourse instance, or are they only on the LLVM MLIR Discourse instance?
Because generally we don’t have TF-specific threads in the LLVM MLIR instance. | 0 |
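For readers unfamiliar with what the EmbeddingBag requests above are asking for, a minimal compositional TensorFlow sketch of the op is shown below. This is the naive version that materializes the full gathered tensor — exactly the memory blow-up the fused custom kernels are meant to avoid — and the function name and shapes are illustrative only:

```python
import tensorflow as tf

def embedding_bag(params, indices, weights):
    """Naive EmbeddingBag: per-bag weighted sum of embedding rows.

    params:  (vocab_size, dim) embedding table
    indices: (batch, bag_size) integer ids
    weights: (batch, bag_size) per-sample weights
    """
    gathered = tf.gather(params, indices)              # (batch, bag_size, dim) -- large!
    return tf.einsum("bnd,bn->bd", gathered, weights)  # weighted sum over each bag
```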
tensorflow | Special Interest Groups | SIG Build January Meeting: January 11 @ 2pm | https://discuss.tensorflow.org/t/sig-build-january-meeting-january-11-2pm/6981 | SIG Build’s next meeting will be tomorrow, Tuesday, January 11, at 2pm Pacific time. Find the meeting details at bit.ly/tf-sig-build-notes 4, and feel free to suggest new agenda items. | Here’s a summary of some of the major points from the meeting.
Please fill out this form: SIG Build Monthly Meeting Time 2022. We’re considering a new meeting time every other month for the sake of worldwide members for whom 2pm PST is not ideal.
I’ve merged configuration files that build TF’s Nightly test suite in Docker, and am working on changing our internal CI to use these containers.
Gentoo now packages ROCm!
Many of TF DevInfra team’s projects are delayed – manylinux2014 and Python 3.10 support are both stuck on build/test failures.
Our next meeting is February 1st at 2pm PST. See you then! | 0 |
tensorflow | Special Interest Groups | How to use TFJS for processing pen & paper filled assessments? | https://discuss.tensorflow.org/t/how-to-use-tfjs-for-processing-pen-paper-filled-assessments/6816 | Hello, I have been using TFJS for a while for pose recognition (with PoseNet) and it is magnificent! Now I wonder if I can use TF for a different use case: I want to scan printed assessments with multiple questions, where the options are circles that should be filled in with a black pen by the user.
I imagine I need a way to recognize the frame containing all the answers, similar to the way QR-Code corners are used… I appreciate any suggestion about what would be a good approach to this problem.
thanks! | Welcome to the forum and thanks for being part of the TensorFlow.js community!
A few things.
Glad you were enjoying PoseNet; however, please consider upgrading to MoveNet, which is almost 100x faster and much more accurate. PoseNet was good when it came out, but MoveNet is now the standard. Learn more here:
blog.tensorflow.org
Next-Generation Pose Detection with MoveNet and TensorFlow.js
MoveNet is a human pose detection architecture developed at Google that is ultra fast and accurate. It was designed to detect difficult poses
So for your new problem I would first ask if you need machine learning for this task at all. Regular computer vision may be adequate depending on your data: e.g. if it is well scanned and just black and white, it may be fairly trivial to find all black areas that are squares of a certain size and then check which of those contain more filled pixels than others.
That being said, if you do want to use machine learning you will probably want to retrain some sort of object detection model to find objects of interest and their positions, e.g. a filled box vs an unfilled box.
For that I highly recommend following this great tutorial by Hugo:
blog.tensorflow.org
Custom object detection in the browser using TensorFlow.js
Train a custom MobileNetV2 using the TensorFlow 2 Object Detection API and Google Colab for object detection, convert the model to TensorFlow.js
You can then run that resulting trained model in the browser to detect custom objects like you need and find their locations in the given image. | 0 |
tensorflow | Special Interest Groups | Adopting Open-Source Dockerfiles for Official tf-nightly CI | https://discuss.tensorflow.org/t/adopting-open-source-dockerfiles-for-official-tf-nightly-ci/6050 | The TensorFlow OSS DevInfra Team and TF SIG Build are developing new Dockerfiles in the SIG Build GitHub repo 8 that we want to be used for all of TensorFlow’s official build and test environments. They are published to SIG Build on DockerHub 5. Our first milestone is to use the Dockerfiles to build the TF Nightly packages with the following goals:
Container-built packages are functionally identical to the current package
Developers (you!) can build the same packages that we do with minimal effort
That milestone is ready for verification. I’ve set up internal CI jobs that use the containers to build tf-nightly packages that are very similar to the current ones, and I’d like your help to evaluate them for functional differences. Starting on Monday the 30th, we’ve been using the containers to build our official tf-nightly packages.
Here is a set of packages we built at the same commits for advance comparison. There are minor cosmetic differences but we’d like your help to find out if there are any functional differences between packages on the same row of the table below.
Short Git Hash | Old Non-Docker Builds | New Docker Builds
---|---|---
5af3afc559 | GPU Python 3.9 | GPU Python 3.9
5af3afc559 | GPU Python 3.8 | GPU Python 3.8
5af3afc559 | GPU Python 3.7 | GPU Python 3.7
1d51452b18 | CPU Python 3.9 | CPU Python 3.9
1d51452b18 | CPU Python 3.8 | CPU Python 3.8
1d51452b18 | CPU Python 3.7 | CPU Python 3.7
Here’s how you can help us make the containers useful for you:
Install and compare the sample packages above. If you compare the two wheels for any of the rows, do they have any differences that would affect your workflow?
Check out the containers on DockerHub 5 and the tf-nightly build instructions at the SIG Build repository 8. Are you able to build TensorFlow with them? If you use the same git hashes as above, how is your package different?
With the new packages that came out starting on Nov. 30, is anything different about them in a way that affects your workflow?
Please give all feedback in this thread. Thank you for your help! | If you have Docker (and nvidia-docker 1 if you want to run GPU TensorFlow) set up already, here’s how to test out one of the packages linked in the OP from inside the containers:
CPU:
docker pull tensorflow/build:latest-python3.9
docker run -it --rm tensorflow/build:latest-python3.9 bash
wget https://storage.googleapis.com/tensorflow-nightly/prod/tensorflow/nightly_release/ubuntu_tfdkr/cpu_py39/6/20211117-000455/pkg/tf_nightly_cpu-2.8.0.dev20211117-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl
pip install ./tf_nightly*
python
import tensorflow as tf
GPU with nvidia-docker:
docker pull tensorflow/build:latest-python3.9
docker run --gpus=all -it --rm tensorflow/build:latest-python3.9 bash
wget https://storage.googleapis.com/tensorflow-nightly/prod/tensorflow/nightly_release/ubuntu_tfdkr/gpu_py39/6/20211117-000458/pkg/tf_nightly-2.8.0.dev20211117-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl
pip install ./tf_nightly*
python
import tensorflow as tf
tf.config.list_physical_devices('GPU') | 0 |
tensorflow | Special Interest Groups | How to add multiple pre-processing steps and a post-processing step for text-classification model to serve via tensorflow-serving? | https://discuss.tensorflow.org/t/how-to-add-multiple-pre-processing-steps-and-a-post-processing-step-for-text-classifiction-model-to-serve-via-tensorflow-serving/6712 | What I currently have and am trying to do:
When I receive a request from a client to the model in tensorflow-serving, I first need to process the text using 13 regexes, then pass it through tf.keras.preprocessing.text.Tokenizer to convert it to numbers (tokens), and then pass it to tf.keras.preprocessing.sequence.pad_sequences to add 0s at the end of each array (for the sentences whose length doesn’t match the input that the model expects, in a batch of inputs). This (a single sentence or a batch of sentences as tokens) is then fed to a tf.keras model to get some probabilities as outputs, and I then need to map these probabilities (different thresholds for different units) to texts and return them to the client.
What problems am I currently facing trying to accomplish above:
While trying to put together all that to be able to serve the model using tensorflow-serving, I learned that some parts can be converted to tensorflow functions, but not all of it.
regexes: I still couldn’t figure out where and how to put my regexes to be able to manipulate the text.
tokenizer: I learned from some blogs and SO questions, that tf.lookup.StaticHashTable can be used for this purpose.
pad_sequences: no help with this too.
post-processing: I could find very little information to do this.
I read the beginner and advanced blogs on the tensorflow-transform tutorials page, but neither of them mentioned how to link those tft functions to the tf.keras model while saving it. I could also find some information about adding pre-processing for serving, but all of it involved TensorFlow code and some workarounds, and it didn’t cover what I am trying to achieve even indirectly.
I can provide more information as required.
How do I add these steps to the graph, while saving the model? | 1 & 3 & 4. After training the model, you can save graph with pre-processing and post-processing steps like below
...
...
# some training steps
model = ...
model.compile(...)
model.fit(...)
@tf.function
def inference_function(text):
# do some preprocessing
text = tf.strings.regex_replace(text, # ... some regex patterns...)
token_ids, starts, ends = tokenizer.tokenize_with_offsets(text)
model_inputs = # prepare model inputs using token_ids
# inference model
model_outputs = model(model_inputs)
outputs = # do some post-processing with starts, ends, and model_outputs
return outputs
# https://www.tensorflow.org/api_docs/python/tf/keras/Model#save
model.save(
"some path to save the model",
signatures={
"inference_fn": inference_function.get_concrete_function(tf.TensorSpec([None], dtype=tf.string)),
}
)
Yes! After training the sentencepiece model, you can load and use it with text.SentencepieceTokenizer in the TF graph. | 1 |
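The question also asked about doing the Keras Tokenizer lookup and a pad_sequences-like step inside the graph. A rough sketch of that part is below, assuming a tiny hypothetical vocabulary exported from the Tokenizer's word_index and a single illustrative regex as pre-processing (shapes, MAX_LEN, and words are placeholders):

```python
import tensorflow as tf

# Hypothetical vocabulary exported from tokenizer.word_index; 0 is reserved for OOV/padding.
keys = tf.constant(["hello", "world"])
values = tf.constant([1, 2], dtype=tf.int64)
vocab_table = tf.lookup.StaticHashTable(
    tf.lookup.KeyValueTensorInitializer(keys, values),
    default_value=tf.constant(0, dtype=tf.int64),
)

MAX_LEN = 16  # placeholder for the sequence length the model expects

@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def texts_to_padded_ids(texts):
    texts = tf.strings.lower(texts)
    texts = tf.strings.regex_replace(texts, r"[^a-z ]", "")      # one of the regex steps
    tokens = tf.strings.split(texts)                              # RaggedTensor of words
    ids = tf.ragged.map_flat_values(vocab_table.lookup, tokens)   # words -> integer ids
    # In-graph equivalent of pad_sequences: pad/truncate to a fixed length.
    return ids.to_tensor(default_value=0, shape=[None, MAX_LEN])
```

This function can then be exported as an extra serving signature, or called at the top of the inference function shown above, so that tensorflow-serving receives raw strings.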
tensorflow | Special Interest Groups | Converting PyTorch to Keras: Internal Blocks Not Showing Up | https://discuss.tensorflow.org/t/converting-pytorch-to-keras-internal-blocks-not-showing-up/6646 | Hi everyone,
For a personal project I’m trying to recreate the N-BEATS architecture in Keras, and I don’t think I’m doing it correctly, but I am not sure why.
The page I’m working off of as a ground truth can be found here: https://github.com/ElementAI/N-BEATS/blob/master/models/nbeats.py
Here’s the starter PyTorch code that I’m trying to convert:
class NBeatsBlock(t.nn.Module):
def __init__(self,
input_size,
theta_size: int,
basis_function: t.nn.Module,
layers: int,
layer_size: int):
super().__init__()
self.layers = t.nn.ModuleList([t.nn.Linear(in_features=input_size, out_features=layer_size)] +
[t.nn.Linear(in_features=layer_size, out_features=layer_size)
for _ in range(layers - 1)])
self.basis_parameters = t.nn.Linear(in_features=layer_size, out_features=theta_size)
self.basis_function = basis_function
def forward(self, x: t.Tensor) -> Tuple[t.Tensor, t.Tensor]:
block_input = x
for layer in self.layers:
block_input = t.relu(layer(block_input))
basis_parameters = self.basis_parameters(block_input)
return self.basis_function(basis_parameters)
class NBeats(t.nn.Module):
def __init__(self, blocks: t.nn.ModuleList):
super().__init__()
self.blocks = blocks
def forward(self, x: t.Tensor, input_mask: t.Tensor) -> t.Tensor:
residuals = x.flip(dims=(1,))
input_mask = input_mask.flip(dims=(1,))
forecast = x[:, -1:]
for i, block in enumerate(self.blocks):
backcast, block_forecast = block(residuals)
residuals = (residuals - backcast) * input_mask
forecast = forecast + block_forecast
return forecast
class GenericBasis(t.nn.Module):
def __init__(self, backcast_size: int, forecast_size: int):
super().__init__()
self.backcast_size = backcast_size
self.forecast_size = forecast_size
def forward(self, theta: t.Tensor):
return theta[:, :self.backcast_size], theta[:, -self.forecast_size:]
Here’s the Keras code I have to translate:
class NBeatsBlock(keras.layers.Layer):
def __init__(self,
theta_size: int,
basis_function: keras.layers.Layer,
layer_size: int = 4):
super(NBeatsBlock, self).__init__()
self.layers_ = [keras.layers.Dense(layer_size, activation = 'relu')
for i in range(layer_size)]
self.basis_parameters = keras.layers.Dense(theta_size)
self.basis_function = basis_function
def call(self, inputs):
x = self.layers_[0](inputs)
for layer in self.layers_[1:]:
x = layer(x)
x = self.basis_parameters(x)
return self.basis_function(x)
class NBeats(keras.layers.Layer):
def __init__(self,
blocksize: int,
theta_size: int,
basis_function: keras.layers.Layer):
super(NBeats, self).__init__()
self.blocks = [NBeatsBlock(theta_size = theta_size, basis_function = basis_function) for i in range(blocksize)]
def call(self, inputs):
residuals = K.reverse(inputs, axes = 0)
forecast = inputs[:, -1:]
for block in self.blocks:
backcast, block_forecast = block(residuals)
residuals = residuals - backcast
forecast = forecast + block_forecast
return forecast
class GenericBasis(keras.layers.Layer):
def __init__(self, backcast_size: int, forecast_size: int):
super().__init__()
self.backcast_size = backcast_size
self.forecast_size = forecast_size
def call(self, inputs):
return inputs[:, :self.backcast_size], inputs[:, -self.forecast_size:]
If I try and make a model from the Keras code it works, but I don’t think it’s constructed correctly.
Here’s a simple model:
inputs = Input(shape = (1, ))
nbeats = NBeats(blocksize = 4, theta_size = 7, basis_function = GenericBasis(7, 7))(inputs)
out = keras.layers.Dense(7)(nbeats)
model = Model(inputs, out)
My concern is that the internal NBeatsBlock layers are not actually being used in the model I just created.
My model summary reads like this:
And as you can see there’s nothing that indicates the internal Dense layers are there.
And if I plot the model I get the following diagram:
So I don’t think I’m doing things correctly but I’m also not sure where I’m going wrong with how I’m constructing it. I’m guessing there are small differences in how PyTorch & Keras work that I’m not picking up on. | I’ve not personally verified the correct implementation but there was a parallel Pytorch and Keras impl at:
GitHub - philipperemy/n-beats: Keras/Pytorch implementation of N-BEATS (Neural basis expansion analysis for interpretable time series forecasting). | 0 |
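One quick sanity check for the concern above, independent of the linked repository: a subclassed layer's inner Dense layers never show up as separate rows in model.summary(), but their variables are still created and trained once the layer has been called. Assuming the three-layer example model built in the question:

```python
# The NBeats layer is the second entry in the example model (Input, NBeats, Dense).
nbeats_layer = model.layers[1]

# Should be non-empty: the kernels/biases of every NBeatsBlock's Dense layers.
print(len(nbeats_layer.trainable_weights))

# The total parameter count includes those nested weights as well.
print(model.count_params())
```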
tensorflow | Special Interest Groups | Help training MobileNetV3Small model on custom image classification | https://discuss.tensorflow.org/t/help-training-mobilenetv3small-model-on-custom-image-classification/6565 | Hi everyone, I’m new to the TensorFlow Keras API, and thought I would use it with the tensorflow-metal plugin from Apple to train a custom MobileNetV3Small model on my M1 Pro MacBook for the task of image classification. This is for my app DeTeXt, which classifies drawings into LaTeX symbols. Currently I’m using a MobileNetV2 model that I had trained on a GPU cluster using the PyTorch API (code here).
Here is the code I use to train my custom network from scratch on the images I have:
import tensorflow as tf
import pdb
EPOCHS = 5
BATCH_SIZE = 128
LEARNING_RATE = 0.003
SEED=1220
if __name__ == '__main__':
# Load train and validation data
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
'/Volumes/detext/drawings/',
color_mode="grayscale",
seed=SEED,
batch_size=BATCH_SIZE,
labels='inferred',
label_mode='int',
image_size=(200,300),
validation_split=0.1,
subset='training')
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
'/Volumes/detext/drawings/',
color_mode="grayscale",
seed=SEED,
batch_size=BATCH_SIZE,
labels='inferred',
label_mode='int',
image_size=(200,300),
validation_split=0.1,
subset='validation')
# Get the class names
class_names = train_ds.class_names
num_classes = len(class_names)
# Create model
model = tf.keras.applications.MobileNetV3Small(
input_shape=(200,300,1), alpha=1.0, minimalistic=False,
include_top=True, weights=None, input_tensor=None, classes=num_classes,
pooling=None, dropout_rate=0.2, classifier_activation="softmax",
include_preprocessing=True)
# Compile model
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
# Training
model.fit(train_ds, epochs=EPOCHS, validation_data=val_ds)
model.save('./saved_model3/')
While the training runs smoothly and fast with the metal plugin, the validation accuracy is very low after 5 epochs, and I suspect it is either predicting the same class every time or there is an error somewhere in my setup above. I have tried rescaling the inputs myself (and removing the rescaling layer from the model), but no matter what I try, the validation accuracy it outputs is really low. Here is the output (warnings and all) after 2 epochs:
Found 210454 files belonging to 1098 classes.
Using 189409 files for training.
Metal device set to: Apple M1 Pro
systemMemory: 32.00 GB
maxCacheSize: 10.67 GB
2021-12-16 10:02:46.369476: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2021-12-16 10:02:46.369603: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
Found 210454 files belonging to 1098 classes.
Using 21045 files for validation.
Epoch 1/2
2021-12-16 10:02:50.610564: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
2021-12-16 10:02:50.619328: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
2021-12-16 10:02:50.619628: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
1480/1480 [==============================] - ETA: 0s - loss: 1.7621 - sparse_categorical_accuracy: 0.57022021-12-16 10:12:58.720162: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
1480/1480 [==============================] - 626s 422ms/step - loss: 1.7621 - sparse_categorical_accuracy: 0.5702 - val_loss: 9.5837 - val_sparse_categorical_accuracy: 0.0052
Epoch 2/2
1480/1480 [==============================] - 622s 420ms/step - loss: 1.0791 - sparse_categorical_accuracy: 0.6758 - val_loss: 7.3651 - val_sparse_categorical_accuracy: 0.0423
2021-12-16 10:23:40.260143: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
/Users/venkat/miniforge3/envs/tf-metal/lib/python3.9/site-packages/keras/utils/generic_utils.py:494: CustomMaskWarning: Custom mask layers require a config and must override get_config. When loading, the custom mask layer must be passed to the custom_objects argument.
warnings.warn('Custom mask layers require a config and must override '
For reference, I was getting validation micro-F1 (which is same as accuracy) of over 60% with MobilenetV2 in PyTorch. Anyone have any idea what I’m doing wrong here? | The issue seems to be specific to certain types of operations/layers in Tensorflow, and specifically with respect to the validation accuracy (similar to this issue 2). When I build my own custom model with convolutions like so:
model = Sequential([
layers.Rescaling(1./255, input_shape=(IMG_HEIGHT, IMG_WIDTH, 1)),
layers.Conv2D(16, 1, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 1, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 1, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(0.2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes),
layers.Softmax()
])
training proceeds as expected, with a high validation accuracy as well. Below is the output for the above model:
Found 210454 files belonging to 1098 classes.
Metal device set to: Apple M1 Pro
systemMemory: 32.00 GB
maxCacheSize: 10.67 GB
2021-12-21 12:27:24.005759: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2021-12-21 12:27:24.006206: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
Found 210454 files belonging to 1098 classes.
Using 31568 files for validation.
2021-12-21 12:27:26.965648: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
2021-12-21 12:27:26.968717: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
2021-12-21 12:27:26.969214: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
1645/1645 [==============================] - ETA: 0s - loss: 2.1246 - sparse_categorical_accuracy: 0.52732021-12-21 12:32:57.475358: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
1645/1645 [==============================] - 353s 214ms/step - loss: 2.1246 - sparse_categorical_accuracy: 0.5273 - val_loss: 1.3041 - val_sparse_categorical_accuracy: 0.6558
2021-12-21 12:33:19.600146: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
However, the very same code with the MobileNetV3Small model (instead of my custom model) produces the following output:
Found 210454 files belonging to 1098 classes.
Metal device set to: Apple M1 Pro
systemMemory: 32.00 GB
maxCacheSize: 10.67 GB
2021-12-21 12:34:46.754598: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2021-12-21 12:34:46.754793: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
Found 210454 files belonging to 1098 classes.
Using 31568 files for validation.
2021-12-21 12:34:49.742015: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
2021-12-21 12:34:49.747397: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
2021-12-21 12:34:49.747606: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
1645/1645 [==============================] - ETA: 0s - loss: 2.4072 - sparse_categorical_accuracy: 0.46722021-12-21 12:41:28.137948: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
1645/1645 [==============================] - 415s 252ms/step - loss: 2.4072 - sparse_categorical_accuracy: 0.4672 - val_loss: 21.6091 - val_sparse_categorical_accuracy: 0.0131
2021-12-21 12:41:46.017580: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
/Users/venkat/miniforge3/envs/tf-metal/lib/python3.9/site-packages/keras/utils/generic_utils.py:494: CustomMaskWarning: Custom mask layers require a config and must override get_config. When loading, the custom mask layer must be passed to the custom_objects argument.
warnings.warn('Custom mask layers require a config and must override '
The validation loss/accuracy is hilariously bad, and I find that the model constantly predicts the same class. My guess is that MobileNetV3Small seems to contain some operations/layers that don’t work well with tensorflow-metal for whatever reason, and only Apple Engineers can fix this problem at a low level. | 0 |
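One quick way to narrow this down (a sketch, not from the original exchange; model and val_ds refer to the objects built earlier in this thread) is to run the same evaluation pinned to the CPU and compare it against the GPU run:
import tensorflow as tf
# Evaluate on the CPU only; if this produces a sensible validation accuracy while
# the GPU run does not, the metal plugin's GPU path is the likely culprit.
with tf.device("/CPU:0"):
    cpu_metrics = model.evaluate(val_ds, return_dict=True)
print(cpu_metrics)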
tensorflow | Special Interest Groups | SIG Addons December Meeting Moved to 12/16 | https://discuss.tensorflow.org/t/sig-addons-december-meeting-moved-to-12-16/6234 | Hi All,
Just wanted to call attention to our meeting being moved to 12/9 this month. This will be our first backlog grooming meeting, so looking forward to your attendance! | Unfortunately I have a late conflict and the calendar should be moved to 12/16. I sincerely apologize. | 0
tensorflow | Special Interest Groups | SIG Build December Meeting: December 7 @ 2pm | https://discuss.tensorflow.org/t/sig-build-december-meeting-december-7-2pm/6364 | SIG Build’s next meeting will be tomorrow, Tuesday, December 7, at 2pm Pacific time. Find the meeting details at bit.ly/tf-sig-build-notes 4, and feel free to suggest new agenda items.
One of the big discussion topics will be the Docker containers, which are discussed here: Adopting Open-Source Dockerfiles for Official tf-nightly CI 1 | Thanks to everyone who attended yesterday. Here’s a summary of the biggest topics:
Next month’s meeting will be January 11th. I have not moved the calendar slot yet.
We discussed the Docker containers. Notable points: @angerson will work on a public roadmap; there are some permissions-related tests that should not exist; GPU passthrough isn’t working right now; the resulting wheels do seem to be the same; there are many issues with cache misses that we know about but are low priority.
DevInfra is aware of Numpy’s Python 3.7 deprecation plans and need to discuss with the TF product team about what we’re going to do. | 0 |
tensorflow | Special Interest Groups | Design of MLIR TFG tensor | https://discuss.tensorflow.org/t/design-of-mlir-tfg-tensor/6371 | Hi there,
I am curious about TFG’s designed representation of tensors. From the code, I found that tensor is represented as an attribute of the tfg.op. The code is like below:
// tensorflow/core/ir/importexport/convert_tensor.h
// Converts an TensorFlow tensor proto into an MLIR elements attribute.
tensorflow::StatusOr<ElementsAttr> ConvertTensorProto(
const tensorflow::TensorProto& input_tensor, Builder builder,
TFGraphDialect* tfgDialect);
From the above code, a tensorflow tensor is converted to mlir::ElementsAttr for a specific op. This would be an attribute for the op node.
However, the MLIR attributes documentation 1 reads as follows:
attributes are compile-time known constant values
So, from my understanding, tfg does not allow modifying a tensor's data, because the tensor attribute should be a constant value.
In that case, are the following understandings correct?
tfg by design does not allow optimizations like constant folding that modify a tensor's data in place (one would have to create a new node and replace the old one).
Why not create a type instead of an attribute in TFG MLIR? A type would allow the mutable part 1 described in the link.
A similar design appears with ShapeAttr: it also does not have a setShape method, meaning that by design it cannot be modified.
Since tfg is designed to replace Grappler, and Grappler could easily change these things in a NodeDef, how is tfg supposed to do the same? Should tfg always create a new node and replace the old one? | Stonepia:
So I have to attach a pointer-like attribute, and mutate that underlying information, is this correct?
You really can’t mutate Type and Attribute safely: they are stored in a map and hashed. Every operation (node…) using a given type will reuse the same instance. When you “recreate” a Type what happens is actually hashing and lookup to see if it already exists, in which case it gets returned.
In general, if you need to just compute transient information in a transformation, you may not store them directly by mutating the IR, you may keep a map on the side from operations to “split_strategy” and use this as your temporary storage. It does not prevent you from materializing the “split_strategy” as type/attribute annotation when you’re done, but that shouldn’t be too heavy any more at this point. | 1 |
tensorflow | Special Interest Groups | [AutoKeras] Bug in using tf.Dataset for training | https://discuss.tensorflow.org/t/autokeras-bug-in-using-tf-dataset-for-training/6335 | Hi everyone!
I was exporting a custom data generator to a tf.Dataset to use my dataset in a memory-efficient way. However, I am encountering this error:-
ValueError: Expect x to be a non-empty array or dataset.
To keep things clean here, I have put all the heavy information on a GitHub Thread 3.
If anyone requires additional information or help with reproduction, please do not hesitate to ping me! As I have mentioned in my issue, using torch.random.randn((...)) for a dummy dataset seems the fastest way to reproduce it.
Dissecting the error message, it seems that in train.py (keras/engine) the Dataset it gets is empty (even though it's not), which yields no updates to the model parameters. Since no updates are made, no logs are created: logs keeps its original value of None and the code hits the raise statement.
If anyone has any idea, please do help me out! | I made a little progress, but help is really appreciated as I'm still kinda confused.
The issue seems to be directly at autokeras/auto_model.py at 78223dfc63056414e32e202b158dbc981be48dc9 · keras-team/autokeras · GitHub 3: just when I pass the tf.Dataset, it's simply empty. I have checked numerous times and can confirm I am not passing an empty Dataset. Strange, but I will surely double-check, e.g. with the quick sanity check sketched below.
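A minimal sanity check along those lines (train_ds here is just a placeholder name, not from my actual code): count the elements the tf.data.Dataset actually yields before handing it to AutoKeras, since a generator-backed dataset can look fine yet be exhausted by the time fit runs.
import tensorflow as tf
def count_elements(ds) -> int:
    # Reduce over the dataset, ignoring the elements themselves and counting them.
    return int(ds.reduce(0, lambda count, _: count + 1))
print("train elements:", count_elements(train_ds))
for x, y in train_ds.take(1):
    print("sample shapes:", x.shape, y.shape)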
Again, if anyone can expedite this process for me - that would be really appreciated! | 0 |
tensorflow | Special Interest Groups | Standardize Github community health files in the Ecosystem | https://discuss.tensorflow.org/t/standardize-github-community-health-files-in-the-ecosystem/1708 | Hi,
I would like to collect some feedback about the idea of standardizing the candidate-contributor experience across SIGs, as we move toward a multi-repository/SIG-oriented ecosystem (e.g. see the new TF-micro and Keras standalone repos, or the TF core as a product RFC 7).
With this I hope that we can discuss a minimal set of common content for the two main GitHub community health files 4.
I have currently submitted two DRAFT PR to collect feedback and comments:
README.md 2
CONTRIBUTING.md 4
At the end of the process, if we could find a consensus on the minimal set of info to maintain, then every SIG could also add extra sections in the footers or links to other Markdown files available in its own repository.
I hope that we can lower the cognitive overhead for a candidate contributor navigating the ecosystem.
For general comments we can use this thread to discuss the topic.
Thanks
/cc @thea @Joana @ewilderj @yarri-oss | Thanks for starting this thread. I’d like to make sure we give special attention to the community Maintainers. We have a mix of very active vs. more passive maintainers across the SIGs – both types are important and to be encouraged! I would like to propose adding a CALL_FOR_MAINTAINERS.md file to your list above. | 0 |
tensorflow | Special Interest Groups | Does the CRFModelWrapper in the TFA support the serialization for the continuous training? | https://discuss.tensorflow.org/t/does-the-crfmodelwrapper-in-the-tfa-support-the-serialization-for-the-continuous-training/6019 | Hi everyone,
I am using the CRFModelWrapper method, following the tutorial at addons/layers_crf.ipynb at add_crf_tutorial · howl-anderson/addons · GitHub, to implement a Bi-LSTM-CRF neural network for a multi-class NER problem. The model I built (code is shown below) can be trained on multiple GPUs, and it can be saved and loaded with the tf.keras.models functions. However, when I saved the model, a warning (shown below) appeared, and I am not sure whether it really matters. After that, I loaded the trained model and it showed many warnings about inconsistent output shapes (also posted below). When I then want to train it again using the fit function, it fails with the error shown further below.
Saving warning:
WARNING:absl:Found untraced functions such as embedding_layer_call_and_return_conditional_losses, embedding_layer_call_fn, embedding_1_layer_call_and_return_conditional_losses, embedding_1_layer_call_fn, multi_head_attention_layer_call_and_return_conditional_losses while saving (showing 5 of 65). These functions will not be directly callable after loading.
Loading warnings:
2021-11-21 00:28:34.222763: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:34.374067: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:35.553642: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:37.286608: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:37.556591: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:37.645775: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:37.742369: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:38.758195: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:38.892746: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:38.905252: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:39.369396: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:39.426702: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:39.439491: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:39.562408: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:39.574280: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:40.102410: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:40.363686: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:40.375926: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:40.706863: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:40.765348: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:41.099845: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:41.111754: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:41.127138: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:41.139674: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:41.959556: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:41.971252: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:42.012887: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:42.025740: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:42.284787: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:42.403613: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:42.551636: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:42.773302: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:42.786014: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:42.993502: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:43.005655: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:43.019730: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:43.031896: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:43.045523: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:43.581028: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:43.593377: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:44.286465: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:44.298637: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:44.319105: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:44.331097: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:44.446932: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:44.864126: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:44.995524: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:45.084922: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:45.097600: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:45.134094: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:45.243815: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:45.264186: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:45.599941: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:45.621283: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:45.633535: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:46.106656: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond/while' has 14 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:46.119119: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:46.141522: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:46.165137: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
2021-11-21 00:28:46.266813: W tensorflow/core/common_runtime/graph_constructor.cc:803] Node 'cond' has 5 outputs but the _output_shapes attribute specifies shapes for 48 outputs. Output shapes may be inaccurate.
Despite these warnings, the model can still be saved, loaded, and used for prediction, but the loaded model cannot be trained again. The error message is posted below; it seems that the loss computation inside the CRFModelWrapper cannot be called again, so the gradient calculation cannot be done. So, I am wondering whether the CRFModelWrapper does not support this serialization workflow (save -> load -> training), or whether this is because of some mistake I have made. If it is the former, is there any way I can work around it to retrain the model?
Thank you very much.
Error message when re-training the trained model:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\training.py", line 1184, in fit
tmp_logs = self.train_function(iterator)
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\eager\def_function.py", line 885, in __call__
result = self._call(*args, **kwds)
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\eager\def_function.py", line 933, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\eager\def_function.py", line 759, in _initialize
self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\eager\function.py", line 3066, in _get_concrete_function_internal_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\eager\function.py", line 3463, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\eager\function.py", line 3298, in _create_graph_function
func_graph_module.func_graph_from_py_func(
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\framework\func_graph.py", line 1007, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\eager\def_function.py", line 668, in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\framework\func_graph.py", line 994, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\training.py:853 train_function *
return step_function(self, iterator)
C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\training.py:842 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:1286 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:2849 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:3632 _call_for_each_replica
return fn(*args, **kwargs)
C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\training.py:835 run_step **
outputs = model.train_step(data)
C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\training.py:791 train_step
self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\optimizer_v2\optimizer_v2.py:522 minimize
return self.apply_gradients(grads_and_vars, name=name)
C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\optimizer_v2\optimizer_v2.py:622 apply_gradients
grads_and_vars = optimizer_utils.filter_empty_gradients(grads_and_vars)
C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\optimizer_v2\utils.py:72 filter_empty_gradients
raise ValueError("No gradients provided for any variable: %s." %
ValueError: No gradients provided for any variable: ['chain_kernel:0', 'left_boundary:0', 'right_boundary:0', 'crf_model_wrapper/crf/dense/kernel:0', 'crf_model_wrapper/crf/dense/bias:0', 'embedding/embeddings:0', 'bidirectional/forward_bilstm/lstm_cell_1/kernel:0', 'bidirectional/forward_bilstm/lstm_cell_1/recurrent_kernel:0', 'bidirectional/forward_bilstm/lstm_cell_1/bias:0', 'bidirectional/backward_bilstm/lstm_cell_2/kernel:0', 'bidirectional/backward_bilstm/lstm_cell_2/recurrent_kernel:0', 'bidirectional/backward_bilstm/lstm_cell_2/bias:0', 'time_distributed/kernel:0', 'time_distributed/bias:0'].
Below is my code, except for the data preprocessing.
#%% Build the base model_1
def build_bilstm_crf_model(
lstm_unit,
fc_unit
) -> tf.keras.Model:
x = tf.keras.layers.Input(shape=(None,), dtype=tf.float32, name="inn")
y = tf.keras.layers.Embedding(1, 1, mask_zero=True)(x)
y = tf.keras.layers.Bidirectional(
tf.keras.layers.LSTM(lstm_unit, return_sequences=True,name="bilstm")
)(y)
y = tf.keras.layers.TimeDistributed(
tf.keras.layers.Dense(fc_unit,name="fc")
)(y)
return tf.keras.Model(
inputs=x, outputs=y
)
# CFR Wrapper Model
class CRFModelWrapper(tf.keras.Model):
def __init__(
self,
model: tf.keras.Model,
units: int,
chain_initializer="orthogonal",
use_boundary: bool = True,
boundary_initializer="zeros",
use_kernel: bool = True,
**kwargs
):
super().__init__()
self.crf_layer = tfa.layers.CRF(
units=units,
chain_initializer=chain_initializer,
use_boundary=use_boundary,
boundary_initializer=boundary_initializer,
use_kernel=use_kernel,
**kwargs
)
self.base_model = model
def unpack_training_data(self, data):
# override me, if this is not suit for your task
if len(data) == 3:
x, y, sample_weight = data
else:
x, y = data
sample_weight = None
return x, y, sample_weight
def call(self, inputs, training=None, mask=None, return_crf_internal=False):
base_model_outputs = self.base_model(inputs, training, mask)
# change next line, if your model has more outputs
crf_input = base_model_outputs
decode_sequence, potentials, sequence_length, kernel = self.crf_layer(crf_input)
### potentials =predicted y during training
# change next line, if your base model has more outputs
# Always keep `(potentials, sequence_length, kernel), decode_sequence, `
# as first two outputs of model.
# current `self.train_step()` expected such settings
outputs = (potentials, sequence_length, kernel), decode_sequence
if return_crf_internal:
return outputs
else:
# outputs[0] is the crf internal, skip it
output_without_crf_internal = outputs[1:]
# it is nicer to return a tensor instead of an one tensor list
if len(output_without_crf_internal) == 1:
return output_without_crf_internal[0]
else:
return output_without_crf_internal
def compute_crf_loss(self, potentials, sequence_length, kernel, y, sample_weight=None):
### Added to reshape labels(y)
shape = y.shape
if len(shape) > 2:
y_1 = tf.argmax(y, -1, output_type=tf.int32)
################################################
crf_likelihood, _ = tfa.text.crf_log_likelihood(
potentials, y_1, sequence_length, kernel
)
# convert likelihood to loss
flat_crf_loss = -1 * crf_likelihood
if sample_weight is not None:
flat_crf_loss = flat_crf_loss * sample_weight
crf_loss = tf.reduce_mean(flat_crf_loss)
return crf_loss
def train_step(self, data):
x, y, sample_weight = self.unpack_training_data(data)
with tf.GradientTape() as tape:
(potentials, sequence_length, kernel), decoded_sequence, *_ = self(
x, training=True, return_crf_internal=True
)
crf_loss = self.compute_crf_loss(
potentials, sequence_length, kernel, y, sample_weight
)
loss = crf_loss + tf.reduce_sum(self.losses)
gradients = tape.gradient(loss, self.trainable_variables)
self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
# Update metrics (includes the metric that tracks the loss)
self.compiled_metrics.update_state(y, decoded_sequence)
# Return a dict mapping metric names to current value
results = {m.name: m.result() for m in self.metrics}
results.update({"loss": loss, "crf_loss": crf_loss}) # append loss
return results
def test_step(self, data):
x, y, sample_weight = self.unpack_training_data(data)
(potentials, sequence_length, kernel), decode_sequence, *_ = self(
x, training=False, return_crf_internal=True
)
crf_loss = self.compute_crf_loss(
potentials, sequence_length, kernel, y, sample_weight
)
loss = crf_loss + tf.reduce_sum(self.losses)
# Update metrics (includes the metric that tracks the loss)
self.compiled_metrics.update_state(y, decode_sequence)
# Return a dict mapping metric names to current value
results = {m.name: m.result() for m in self.metrics}
results.update({"loss": loss, "crf_loss": crf_loss}) # append loss
return results
When training the model:
strategy = tf.distribute.MirroredStrategy()
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
with strategy.scope():
base_model = build_bilstm_crf_model(units_lstm,TAG_SIZE)
model = CRFModelWrapper(base_model, TAG_SIZE)
model.compile(optimizer=tf.keras.optimizers.Adam(lr))
num_epochs = 10
name = 'BLD_CRF_Lut{}_lr{}_{}'.format(units_lstm,lrr,int(time.time()))
name_csv = 'BLD_CRF_Lut{}_lr{}'.format(units_lstm,lrr)
mc = tf.keras.callbacks.ModelCheckpoint(filepath=os.path.join(name,'{epoch:02d}'), verbose=1,
save_best_only=False,save_weights_only=False)
csv_log = tf.keras.callbacks.CSVLogger('{}.csv'.format(name_csv),append=True)
model.fit(tr_gen, epochs=num_epochs, validation_data = val_gen,verbose=2,batch_size = bt_sz,callbacks=[mc,csv_log])
When re-training the model:
#%%
strategy = tf.distribute.MirroredStrategy()
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
with strategy.scope():
# base_model = build_bilstm_crf_model(units_lstm,TAG_SIZE)
# model = CRFModelWrapper(base_model, TAG_SIZE)
# model.compile(optimizer=tf.keras.optimizers.Adam(lr))
model = load_model(FILE_PATH4)
num_epochs = 10
name = 'BLD_CRF_Lut{}_lr{}_{}'.format(units_lstm,lrr,int(time.time()))
name_csv = 'BLD_CRF_Lut{}_lr{}'.format(units_lstm,lrr)
mc = tf.keras.callbacks.ModelCheckpoint(filepath=os.path.join(name,'{epoch:02d}'),verbose=1,save_best_only=False,save_weights_only=False)
csv_log = tf.keras.callbacks.CSVLogger('{}.csv'.format(name_csv),append=True)
model.fit(tr_gen, epochs=num_epochs, validation_data = val_gen,verbose=2,batch_size = bt_sz,callbacks=[mc,csv_log]) | /cc @XiaoquanKong can you check this? | 0 |
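A sketch of one possible workaround (not confirmed in this thread; it assumes the classes and variables defined in the question above: build_bilstm_crf_model, CRFModelWrapper, units_lstm, TAG_SIZE, lr, name, num_epochs, tr_gen, val_gen): keep the SavedModel for inference, but resume training from weights-only checkpoints and rebuild the wrapper in code so its custom train_step stays intact.
# Save weights-only checkpoints during the first run.
mc = tf.keras.callbacks.ModelCheckpoint(
    filepath=os.path.join(name, 'weights_{epoch:02d}'),
    save_weights_only=True, verbose=1)
# ... model.fit(...) with callbacks=[mc] as above ...
# Later, rebuild the model in code and restore the weights to resume training.
base_model = build_bilstm_crf_model(units_lstm, TAG_SIZE)
model = CRFModelWrapper(base_model, TAG_SIZE)
model.compile(optimizer=tf.keras.optimizers.Adam(lr))
model.load_weights(os.path.join(name, 'weights_10'))  # checkpoint to resume from
model.fit(tr_gen, epochs=num_epochs, validation_data=val_gen)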
tensorflow | Special Interest Groups | All zero values after tensor_scatter_nd_update in a Quantum CNN | https://discuss.tensorflow.org/t/all-zero-values-after-tensor-scatter-nd-update-in-a-quantum-cnn/5734 | Hi everyone,
We are trying to build a trainable Quantum Convolutional Neural Network.
To this end we are trying to create a new subclass QuantumConvolutionalLayer of the class keras.Layer.
How we are trying this now:
In the initialization we use the add_weight function to add the trainable_parameters as a weight. Then we create (still in the initialization) a quantum circuit (on 4 qubits) with pennylane that takes 4 classical inputs and the trainable weights that will be called upon.
In the call function (for the forward pass):
a 2x2 grid is moved over the entire image (MNIST in our case) and every time the 2x2 grid is processed using the quantum circuit defined in the initialization.
The outcome of processing such a 2x2 grid with this quantum circuit [measurement_results] is stored in a tensorflow tensor object, and we then aim to store all of these in one big tensor [out]. To do this we use tensor_scatter_nd_update. Unfortunately, when we look at the resulting out tensor it has all zero values, even though when we print the measurement_results of the small grids we get non-zero values. Any ideas on how this can be solved?
Many thanks in advance for your help!
Odiel
PS: Below you can find our code so far:
# Library installation
!pip install qiskit
!pip install pennylane
# General imports
import pennylane as qml
import numpy as np
from numpy import pi, sqrt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend
from tensorflow.keras.layers import Layer
import matplotlib.pyplot as plt
from qiskit import QuantumCircuit, Aer, assemble, visualization
# Dataset
from keras.datasets import mnist
# Embedding imports
from pennylane.templates import QAOAEmbedding
# Build function to load train and test dataset
def load_dataset():
# load dataset
(trainX, trainY), (testX, testY) = mnist.load_data()
# reshape dataset to have a single channel
trainX = trainX.reshape((trainX.shape[0], 28, 28, 1))
testX = testX.reshape((testX.shape[0], 28, 28, 1))
# one hot encode target values
trainY = tf.keras.utils.to_categorical(trainY)
testY = tf.keras.utils.to_categorical(testY)
return trainX, trainY, testX, testY
# Load train and test dataset
X_train, Y_train, X_test, Y_test = load_dataset()
num_images = 10 # set to -1 in order to keep all images
X_train = X_train[0:num_images]
X_test = X_test[0:num_images]
Y_train = Y_train[0:num_images]
Y_test = Y_test[0:num_images]
# Build a class for the trainable quantum convolutional layer that is a subclass of the keras.Layer class
class QuantumConvolutionalLayer(Layer):
def __init__(self, device = "default.qubit", stride = 2, wires = 4, layers = 1, n_measurements = 4):
# Inherits the initialization of the keras.Layer class
super(QuantumConvolutionalLayer, self).__init__()
# Initialize the device
self.wires = wires
self.dev = qml.device(device, wires = self.wires)
# Initialize the quantum circuit
self.layers = layers
self.stride = stride
self.n_measurements = n_measurements
self.trainable_parameters = self.add_weight("trainable_parameters", shape = QAOAEmbedding.shape(n_layers=layers, n_wires=wires), initializer = tf.keras.initializers.RandomNormal())
# To this end, build the quantum circuit (for 1 square of stride x stride)
@qml.qnode(device = self.dev, interface = "tf")
def quantum_circuit(inputs, trainable_parameters = self.trainable_parameters):
QAOAEmbedding(features = inputs, weights = trainable_parameters, wires = range(wires))
return [qml.expval(qml.PauliZ(j)) for j in range(n_measurements)]
#weight_shapes = {"trainable_parameters": QAOAEmbedding.shape(n_layers=self.layers, n_wires=self.wires)}
#self.quantum_circuit = qml.qnn.KerasLayer(quantum_circuit, weight_shapes = weight_shapes, output_dim = self.n_measurements)
self.quantum_circuit = quantum_circuit
dtype = tf.float32 if tf.keras.backend.floatx() == tf.float32 else tf.float64
if self.quantum_circuit.diff_method != "backprop" or self.quantum_circuit.diff_method_change:
self.quantum_circuit.to_tf(dtype=dtype)
def build(self, input_shape):
super().build(input_shape)
def call(self, inputs):
# define forward pass
num_images=inputs.shape[0]
h_in, w_in, ch_in = inputs.shape[1:] # inputs.shape (28, 28, 1) for MNIST
h_out, w_out, ch_out = h_in // self.stride, w_in // self.stride, ch_in * self.n_measurements # (14, 14, 4) for MNIST and our quantum circuit filter
out = tf.zeros((num_images, h_out, w_out, ch_out))
for img_idx in range(num_images):
# print(tf.rank(out))
for j in range(0, h_in, self.stride):
for k in range(0, w_in, self.stride):
grid = [inputs[img_idx, j, k, 0], inputs[img_idx, j, k + 1, 0], inputs[img_idx, j + 1, k, 0], inputs[img_idx, j + 1, k + 1, 0]]
measurement_results = self.quantum_circuit(inputs = grid, trainable_parameters = self.trainable_parameters)
print(measurement_results)
for ch in range(self.n_measurements):
tf.tensor_scatter_nd_update(out[img_idx], tf.constant([[j//2, k//2, ch]]), [measurement_results[ch]])
return out
quanv = QuantumConvolutionalLayer()
quanv(np.array([X_train[0]])) | Dear all,
We have found the issue.
The tf.tensor_scatter_nd_update function returns a new tensor with the entries at the given coordinates replaced by the right values. It just doesn't update the input tensor in place, so you still have to assign the result yourself (a tiny standalone illustration is sketched below).
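A tiny standalone illustration of this behaviour (not specific to our layer):
import tensorflow as tf
t = tf.zeros([4])
updated = tf.tensor_scatter_nd_update(t, indices=[[1], [3]], updates=[5.0, 7.0])
print(t.numpy())        # [0. 0. 0. 0.] -- the input is left unchanged
print(updated.numpy())  # [0. 5. 0. 7.] -- the updates live in the returned tensor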
Replacing the following line in the code:
tf.tensor_scatter_nd_update(out[img_idx], tf.constant([[j//2, k//2, ch]]), [measurement_results[ch]])
by this line solves the problem:
out = tf.tensor_scatter_nd_update(out, tf.constant([[img_idx, j//2, k//2, ch]]), [measurement_results[ch]]) | 0 |
tensorflow | Special Interest Groups | Compatibility of the CRF layers in TF-addons and TF2.6 | https://discuss.tensorflow.org/t/compatibility-of-the-crf-layers-in-tf-addons-and-tf2-6/5965 | Hi everyone,
I am trying to use the CRFModelWrapper method following the tutorial at addons/layers_crf.ipynb at add_crf_tutorial · howl-anderson/addons · GitHub 1 to implement a Bi-LSTM-CRF neural network for a multi-class time-series NER problem, and it works in TF 2.7 on my PC. However, when I run the same code and the same data in a TF 2.6 environment, it throws some errors (shown below) related to the tfa CRF layer. So, could someone please let me know whether the addons CRF layer is only compatible with TF 2.7, or whether this is because I made some mistake? Thank you very much.
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\training.py", line 1134, in fit
data_handler = data_adapter.get_data_handler(
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\data_adapter.py", line 1383, in get_data_handler
return DataHandler(*args, **kwargs)
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\data_adapter.py", line 1138, in __init__
self._adapter = adapter_cls(
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\data_adapter.py", line 917, in __init__
super(KerasSequenceAdapter, self).__init__(
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\data_adapter.py", line 801, in __init__
model.distribute_strategy.run(
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\distribute\distribute_lib.py", line 1286, in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\distribute\distribute_lib.py", line 2849, in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\distribute\distribute_lib.py", line 3632, in _call_for_each_replica
return fn(*args, **kwargs)
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow\python\autograph\impl\api.py", line 597, in wrapper
return func(*args, **kwargs)
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\data_adapter.py", line 802, in <lambda>
lambda x: model(x, training=False), args=(concrete_x,))
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\base_layer.py", line 1037, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "<input>", line 47, in call
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\keras\engine\base_layer.py", line 1037, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "C:\Users\YenPangLai\anaconda3\envs\tf2p6\lib\site-packages\tensorflow_addons\layers\crf.py", line 131, in call
raise NotImplementedError(
NotImplementedError: Currently, CRF layer do not support left padding | /cc @XiaoquanKong can you check this? | 0 |
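One thing the error above suggests trying (a sketch under the assumption that the sequences are currently left-padded; raw_sequences is a placeholder for the unpadded data, not a name from this thread): pad the inputs on the right, since the tfa CRF layer rejects masks that indicate left padding.
from tensorflow.keras.preprocessing.sequence import pad_sequences
# Right ('post') padding keeps the real timesteps at the start of each sequence,
# which is the layout the CRF layer's mask handling expects.
x_padded = pad_sequences(raw_sequences, padding="post", dtype="float32")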
tensorflow | Special Interest Groups | Fastest way to load_model for inference | https://discuss.tensorflow.org/t/fastest-way-to-load-model-for-inference/5767 | Hi there,
I’m trying to quickly load a model to make predictions in a REST API. The tf.keras.models.load_model method takes ~1s to load so it’s too slow for what I’m trying to do.
What is the fastest way to load a model for inference only?
I know there is a TFX serving server to do exactly this efficiently, but I already have a REST API for doing other things. Setting up a specialised server just for predictions feels like overkill. How does the TFX server handle this?
Thanks in advance,
Joan | Hey Robert, thanks for your reply. Yes, I ended up loading the model in memory on server initialisation. It’s a small model so works well this way. | 1 |
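A minimal sketch of the "load once at startup" pattern described above (Flask and the model path are assumptions for illustration, not details from this thread):
import tensorflow as tf
from flask import Flask, jsonify, request
app = Flask(__name__)
model = tf.keras.models.load_model("saved_model_dir")  # pay the ~1 s cost once
@app.route("/predict", methods=["POST"])
def predict():
    # Reuse the already-loaded model for every request.
    instances = request.get_json()["instances"]
    predictions = model.predict(instances)
    return jsonify(predictions=predictions.tolist())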
tensorflow | Special Interest Groups | Factorized_top_k using tensorflow recommenders | https://discuss.tensorflow.org/t/factorized-top-k-using-tensorflow-recommenders/2811 | I followed the tensorflow recommenders movie ranking tutorial 1 and built the model. Now I would like to get top_k recommendations using the model. This is what I tried:
layer = tfrs.layers.factorized_top_k.Streaming(model.ranking_model)
layer.index(movies.map(model.ranking_model.movie_embeddings), movies)
tracks = layer.query_with_exclusions(
queries=np.array([["42", "52"]]),
exclusions= np.array([[]])
)
But it throws the error “iterating over tf.Tensor is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.”
How to invoke the query_with_exclusions() function correctly? | Also interested in the answer to this, and how one could use this function to remove all previously interacted with items for each user. | 0 |
tensorflow | Special Interest Groups | [MLIR]Tensorflow Graph IR Dev Status | https://discuss.tensorflow.org/t/mlir-tensorflow-graph-ir-dev-status/5851 | Hi there,
I am really interested in Tensorflow MLIR, and hope to create custom dialects from Tensorflow’s existing dialects.
I found that there is now a TensorFlow Graph IR 6, which seems quite exciting. May I ask what its development status is at the moment?
Does it support a basic runtime? I mean, could I actually use it to run models now? I am going to use it for optimizing ResNet training; is it ready for that?
Besides, is there any dev discussion group at the moment, like PyTorch's developer group on Slack?
The Graph IR is fairly recent, and enabled by default in Grappler for a bit more than a month now.
We’re ramping up on this this quarter to provide more natural extension points and helpers for transformations on it. | 0
tensorflow | Special Interest Groups | Clang-tidy file for mlir-hlo? | https://discuss.tensorflow.org/t/clang-tidy-file-for-mlir-hlo/5578 | I notice that the tensorflow repo doesn’t have a .clang-tidy file anywhere in its tree. Does anyone have a .clang-tidy file that conforms or nearly conforms to the TF coding style? I specifically need it for the mlir-hlo sub-tree (tensorflow/compiler/mlir/). | Note that the mlir-hlo subtree follows the LLVM coding style in theory (in practice it is a bit of a mix).
I don’t know about .clang-tidy, but for .clang-format TF just uses the Google style otherwise. | 0
tensorflow | Special Interest Groups | Use SDRAM to store model_tflite[] | https://discuss.tensorflow.org/t/use-sdram-to-store-model-tflite/4969 | @petewarden
I use the dual core 160 pin microcontroller Arduino Portenta to make TensorFlowLite/Micro models. Recently I found out about an extra 8 MB of SDRAM on the board. I have managed to use the SDRAM to load the 320x320 Grayscale Camera Buffer when using EdgeImpulse.com models, draft code here.
I would like to put the entire TensorFlowMicro model_tflite array into the 8 MB SDRAM; the board normally only has 1 MB of RAM for each core, so using SDRAM would be a fair improvement.
When using SDRAM for the camera frame buffer I was just changing one pointer for another so it was not that hard.
#include <SDRAM.h>
SDRAMClass mySDRAM;
uint8_t *sdram_frame_buffer;
// in the setup
mySDRAM.begin(SDRAM_START_ADDRESS); // for camera 320x320
sdram_frame_buffer = (uint8_t *)mySDRAM.malloc(320 * 320 * sizeof(uint8_t));
// in the main loop
int myCamResult = myCam.grab(sdram_frame_buffer); // myCamResult should be zero
but for TFLITE I need to work with an array, and that is a bit different when calling the array. Anyone got any suggestions to get me started?
Here is the basic TFLITE code.
const unsigned char model_tflite[] = { ... }
unsigned int model_tflite_len = 3048;
// which is then called using
model = tflite::GetModel(model_tflite);
// I have tried passing the array as a pointer (which works in some situations)
// but doesn't seem to work here
// Not really sure what the GetModel method does with the array.
using TFLITE tutorial here
More info posted on the Arduino Forum: Portenta - Usage of SDRAM.h library - #3 by jerteach - Portenta - Arduino Forum | Just as a quick follow-up, I chatted to Jeremy elsewhere and think we reached a solution: SDRAM Examples · Issue #38 · arduino/ArduinoCore-mbed · GitHub 1 | 0
tensorflow | Special Interest Groups | Timeseries_dataset_from_array returns future samples instead of previous | https://discuss.tensorflow.org/t/timeseries-dataset-from-array-returns-future-samples-instead-of-previous/5532 | I have a dataframe in my ML project it is in the following format initially.
Feature 1 | Feature 2| Feature 3 | Feature 4 | etc.
Time 1
Time 2
Time 3
Etc.
I am trying to change this dataframe to be 3D, where each value in the dataframe has another dimension "into the screen", containing the same feature's values at the previous 192 timesteps.
Here I am trying to use the built-in function keras.preprocessing.timeseries_dataset_from_array(), but it returns the opposite of what I'm trying to achieve.
I expect it to return
Feature 1 | Feature 2| Feature 3 | Feature 4 | etc.
Time 192| [1-192] | [1-192] | [1-192] | |
Time 193| | | | |
Time 194| | | | |
Time End| | | | |
Here it instead returns:
Feature 1 | Feature 2| Feature 3 | Feature 4 | etc.
Time 1| [192-1] | [192-1] | [192-1] | |
Time 2| | | | |
Time 3| | | | |
Time End-192| | | | |
Basically every sample contains the future 192 values, instead of the previous 192 values of the dataset. Therefore it ends 192 samples before it should, and starts 192 samples too early.
My code is the following:
#Past is defined as 192
#x_val is the 2-d dataframe
#y_val is one of the columns in the dataframe.
dataset_historic_train = keras.preprocessing.timeseries_dataset_from_array(
x_val,
y_val,
sequence_length=past,
batch_size=len(x_val),
)
Where x_val is the entirety of my 2-d dataframe indexed from first to last time of sample, and y_val is my target feature, which is Feature 1 in this case.
| You pass x and y of equal length to the dataset constructor. When it transforms x using a sliding window of size 192, x becomes shorter, because the first 192 rows of your original DataFrame do not have enough previous values. So it drops the last 192 values of y to pair it with x.
To make it work as expected you should pass x and y[192:]. Then it will drop the first 192 values of y. | 0 |
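A sketch of that fix using the names from the question (past = 192, x_val, y_val); exact lengths may need a final trim, so treat it as a starting point rather than a drop-in:
dataset_historic_train = keras.preprocessing.timeseries_dataset_from_array(
    x_val,
    y_val[past:],  # drop the first 192 targets so each window of the previous
                   # 192 rows is paired with the value that comes right after it
    sequence_length=past,
    batch_size=len(x_val),
)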
tensorflow | Special Interest Groups | TF.js SIG Meeting - Tuesday Nov 2nd from 5-6pm PST | https://discuss.tensorflow.org/t/tf-js-sig-meeting-tuesday-nov-2nd-from-5-6pm-pst/5478 | Greetings TF.js Community,
We are looking forward to catching up with you all today Tuesday November 2nd from 5:00-6:00 P.M. PST.
We will discuss the first RFC to the new sig-tfjs repo “WebNN Delegate for TensorFlow Lite Web” (https://github.com/tensorflow/sig-tfjs/pull/2 2), and also discuss Coral Node support and TFJS Debugger tool. Please find the agenda and the Gmeet link below. Feel free to add any questions or topics for discussion to the agenda.
docs.google.com
[Public] SIG TF.js Meeting Notes 1
[Public] SIG TF.js Meeting Notes Meeting: 2021-11-02 Tuesday, November 2nd, 2021, 5:00 – 6:00 pm Pacific Time Meeting Recording - TODO Please join this link: meet.google.com/dac-ngjq-okc (US) +1 617-675-4444 PIN: 298 636 519 4797# Shared Drive...
Cheers, and talk soon!
Masoud on behalf of the TF.js team | Hi everyone!
If you didn’t have the chance to join today’s meeting, here’s the link 2 to rewatch it.
See you next time! | 0 |
tensorflow | Special Interest Groups | [MLIR]The following operations cannot be legalized: tf.VariableV2 | https://discuss.tensorflow.org/t/mlir-the-following-operations-cannot-be-legalized-tf-variablev2/5129 | Hello, this is my first time posting here. Let me know whether this question is suited to be posted to this forum or not.
I have been trying to translate tensorflow models into MLIR-HLO. As such I successfully used tf-mlir-translate , tf-opt with very simple TF models like Conv2D, MatMul,MaxPool…, achieving the expected results.
However, if I try to do the same with the official example tf.Variable with the three steps below:
tf-mlir-translate -graphdef-to-mlir -tf-enable-shape-inference-on-import=false target.pbtxt -tf-prune-unused-nodes -tf-control-output-arrays=Variable/Assign,AssignAdd -o target.mlir
tf-opt -tf-executor-to-functional-conversion target.mlir -o target-func.mlir
tf-opt --tf-to-hlo-pipeline target-func.mlir -o target-mhlo.mlir
I get the following error message with the 3’rd step:
target-func.mlir:2:3: error: The following operations cannot be legalized: tf.Assign (count: 1); tf.AssignAdd (count: 1); tf.VariableV2 (count: 1). These legalization failure(s) may be due to missing TF to HLO lowerings and/or unsupported attributes, etc.
func @main() attributes {tf.entry_function = {control_outputs = "Variable/Assign,AssignAdd", inputs = "", outputs = ""}} {
^
target-func.mlir:2:3: error: Emitting more detail about one op that failed to legalize...
func @main() attributes {tf.entry_function = {control_outputs = "Variable/Assign,AssignAdd", inputs = "", outputs = ""}} {
^
target-func.mlir:7:10: error: 'tf.Assign' op is not legalizable
%2 = "tf.Assign"(%0, %cst_0) {_class = ["loc:@Variable"], device = "", use_locking = true, validate_shape = true} : (tensor<!tf.f32ref>, tensor<f32>) -> tensor<*x!tf.f32ref>
^
target-func.mlir:7:10: note: see current operation: %4 = "tf.Assign"(%2, %0) {_class = ["loc:@Variable"], device = "", use_locking = true, validate_shape = true} : (tensor<!tf.f32ref>, tensor<f32>) -> tensor<!tf.f32ref>
As of right now, is there a way to lower these tf operations into mhlo-hlo ? | “HLO ops are intended for the numeric computation side and are mostly pure with state management is handled outside them. For TF this is done via TF ops and the state hoisted/sunk outside stateless parts"
How should one continue lowering for TF? Does TF lower the resource variables to a dialect other than mhlo-hlo? | 0
tensorflow | Special Interest Groups | How to calculate the kTensorArenaSize value in tflite-micro? | https://discuss.tensorflow.org/t/how-to-calculate-the-ktensorarenasize-value-in-tflite-micro/5383 | Hello everyone, when I read the examples code of tflite-micro, I saw that different examples use different values of kTensorArenaSize. What is the basis for determining this value? For example, in hello_world, the value of kTensorArenaSize is 2000, and in magic_wand, the value of kTensorArenaSize is 64*1024. | In the comment, there are such lines, so the only way is to try again and again to get minist value.
// Create an area of memory to use for input, output, and intermediate arrays.
// The size of this will depend on the model you’re using, and may need to be
// determined by experimentation. | 0 |
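A minimal sketch of that experimentation loop (hedged: the exact MicroInterpreter constructor arguments differ between tflite-micro versions, and model/resolver are assumed to come from the surrounding example code, e.g. hello_world):
// Pick a generous arena, check AllocateTensors(), then trim it down.
constexpr int kTensorArenaSize = 64 * 1024;               // hypothetical starting value
alignas(16) static uint8_t tensor_arena[kTensorArenaSize];

tflite::MicroInterpreter interpreter(model, resolver, tensor_arena, kTensorArenaSize);

if (interpreter.AllocateTensors() != kTfLiteOk) {
  // Arena is too small for this model: increase kTensorArenaSize and rebuild.
} else {
  // arena_used_bytes() (assumed available in your tflite-micro version) reports how
  // much memory was actually needed, so you can shrink the arena to that plus headroom.
  size_t used = interpreter.arena_used_bytes();
  (void)used;
}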
tensorflow | Special Interest Groups | How to verify if an object is a tf tensor in javascript? | https://discuss.tensorflow.org/t/how-to-verify-if-an-object-is-a-tf-tensor-in-javascript/5248 | Using typeof() on a tf tensor only returns object.
Using isinstanceof object, or isinstanceof tf.tensor, generates a syntax error: missing ) after argument list…
So, how to verify an object is a tf tensor on which we can apply tensor related operations?
import * as tf from "@tensorflow/tfjs";
const a = tf.tensor([[1, 2], [3, 4]]);
console.log('type:', typeof(a));
// returns "object" | You can simply log the variable to the console; it will print a Tensor if the object is a tf.Tensor.
For Example
import * as tf from "@tensorflow/tfjs";
console.log(tf.tensor([1, 2, 3]))
Output: | 0 |
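A hedged alternative (not from the original answer): tfjs exports the tensor class as tf.Tensor, so an instanceof check should also work as long as the capitalization is right:
import * as tf from "@tensorflow/tfjs";

const a = tf.tensor([[1, 2], [3, 4]]);
console.log(a instanceof tf.Tensor);   // true  (note: tf.Tensor the class, not tf.tensor the factory)
console.log({} instanceof tf.Tensor);  // false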
tensorflow | Special Interest Groups | Why does my validation loss increase, but validation accuracy perfectly matches training accuracy? | https://discuss.tensorflow.org/t/why-does-my-validation-loss-increase-but-validation-accuracy-perfectly-matches-training-accuracy/4283 | I am building a simple 1D convolutional neural network in Keras. Here is the model:
def build_model():
    model = models.Sequential()
    model.add(layers.SeparableConv1D(64, kernel_size=2, activation="relu", input_shape=(64, 20)))
    model.add(layers.SeparableConv1D(64, kernel_size=2, activation="relu"))
    model.add(layers.MaxPooling1D(4))
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dropout(0.1))
    model.add(layers.Dense(1, activation="sigmoid"))
    model.compile(
        optimizer='rmsprop',
        loss='binary_crossentropy',
        metrics=[
            keras.metrics.BinaryAccuracy(),
        ],
    )
    #model.summary()
    return model
When I train my model on roughly 1500 samples, I always get my training and validation accuracy completely overlapping and virtually equal, reflected in the graph below. This is making me think there is something fishy going on with my code or in Keras/Tensorflow since the loss is increasing dramatically and you would expect the accuracy to be affected at least somewhat by this. It looks like it is massively overfitting and yet only reporting the accuracy values for the training set or something along those lines. When I then test on a test set, the accuracy is nowhere near the 85 to 90 percent reported on the graph, but rather ~70%.
Any help is greatly appreciated, I have been stuck on this for the longest time. Below is the training code.
[graph: training and validation accuracy/loss curves, 1764×615]
#Define the number of folds... this will give us an 80/20 split
k = 5
epochs = 100
num_val_samples = len(x_train) // k

scores_binacc = []
scores_precision = []
scores_recall = []
histories = []

#Train the dense model in k iterations
for i in range(k):
    print('Processing fold #', i)
    val_data = x_train[i * num_val_samples : (i + 1) * num_val_samples]
    val_targets = y_train[i * num_val_samples : (i + 1) * num_val_samples]
    print('Validation partition = ', i * num_val_samples, (i + 1) * num_val_samples)
    print('Training partition 1 = ', 0, i * num_val_samples)
    print('Training partition 2 = ', (i+1) * num_val_samples, len(x_train))
    partial_train_data = np.concatenate(
        [
            x_train[:i * num_val_samples],
            x_train[(i+1) * num_val_samples:]
        ],
        axis=0
    )
    partial_train_targets = np.concatenate(
        [
            y_train[:i * num_val_samples],
            y_train[(i+1) * num_val_samples:]
        ],
        axis=0
    )
    model = build_model()
    h = model.fit(
        partial_train_data,
        partial_train_targets,
        validation_data=(val_data, val_targets),
        epochs=epochs,
        verbose=1
    )
    val_loss, val_binacc = model.evaluate(val_data, val_targets, verbose=0)
    scores_binacc.append(val_binacc)
    #scores_precision.append(val_precision)
    #scores_recall.append(val_recall)
    histories.append(h) | Maybe you’re overfitting, but the underlying relationships are simple, so your validation set still has decent accuracy but higher loss.
I feel like the change in accuracy could be caused by shuffling. Are you shuffling your data during training but not on test data? Does order matter for your problem? | 0 |
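A small hedged sketch of that check (assuming x_train/y_train are the NumPy arrays from the question): shuffle the samples once before building the folds, so that each fold and the held-out test set see the same class mix:
import numpy as np

rng = np.random.default_rng(42)
perm = rng.permutation(len(x_train))
x_train, y_train = x_train[perm], y_train[perm]
# ...then run the k-fold loop above on the shuffled arrays.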
tensorflow | Special Interest Groups | TF.js SIG Meeting - Tuesday Oct 19th from 5-6pm PST | https://discuss.tensorflow.org/t/tf-js-sig-meeting-tuesday-oct-19th-from-5-6pm-pst/5094 | Greetings TF.js Community,
We are looking forward to catching up with you all tomorrow Tuesday October 19th from 5:00-6:00PM PST, after a gap of a couple of months now!
Tomorrow we will share a draft of the RFC process for TF.js contributions. The RFCs will be assigned a sponsor from the TensorFlow team that will help your project be successful. With a more formal contribution to the new sig-tfjs repo, our TensorFlow group can help you connect with partners to collaborate through working groups, and we can offer regular support on working group projects through the SIG. In addition, we will help promote your work and there will be swag involved!
Please find the agenda and the Gmeet link below. Feel free to add any questions or topics for discussion to the agenda.
docs.google.com
[Public] SIG TF.js Meeting Notes 3
[Public] SIG TF.js Meeting Notes Meeting: 2021-10-19 Tuesday, October 19, 2021, 5:00 – 6:00 pm Pacific Time Meeting Recording - TODO Please join this link: meet.google.com/dac-ngjq-okc (US) +1 617-675-4444 PIN: 298 636 519 4797# Shared Drive...
Cheers, and talk soon!
Masoud on behalf of the TF.js team | If you missed the SIG TF.js meeting, please see the recording here 1.
We look forward to seeing you at the next meeting! | 0 |
tensorflow | Special Interest Groups | Cannot register 2 metrics with the same name: /tensorflow/api/keras/optimizers | https://discuss.tensorflow.org/t/cannot-register-2-metrics-with-the-same-name-tensorflow-api-keras-optimizers/3000 | We are starting to see this error in some of the unit tests, within recent docker containers we are creating.
2021-07-14 02:13:34.052676: E tensorflow/core/lib/monitoring/collection_registry.cc:77] Cannot register 2 metrics with the same name: /tensorflow/api/keras/optimizers
We see this in the tip of the develop branch (which is a fork of the upstream/master branch) and in the tip of our r2.6 branch (which is a fork of the upstream/r2.6 branch).
We see this in about 10+ of the unit tests — these ones:
21134://tensorflow/python/compiler/xla:xla_test_gpu FAILED in 3 out of 3 in 3.6s
21139://tensorflow/python/keras/benchmarks:eager_microbenchmarks_test_gpu FAILED in 3 out of 3 in 3.6s
21144://tensorflow/python/keras/benchmarks:model_components_benchmarks_test_gpu FAILED in 3 out of 3 in 3.5s
21149://tensorflow/python/keras/benchmarks/keras_examples_benchmarks:antirectifier_benchmark_test_gpu FAILED in 3 out of 3 in 3.4s
21154://tensorflow/python/keras/benchmarks/keras_examples_benchmarks:bidirectional_lstm_benchmark_test_gpu FAILED in 3 out of 3 in 3.5s
21159://tensorflow/python/keras/benchmarks/keras_examples_benchmarks:cifar10_cnn_benchmark_test_gpu FAILED in 3 out of 3 in 3.6s
21164://tensorflow/python/keras/benchmarks/keras_examples_benchmarks:mnist_conv_benchmark_test_gpu FAILED in 3 out of 3 in 3.5s
21169://tensorflow/python/keras/benchmarks/keras_examples_benchmarks:mnist_conv_custom_training_benchmark_test_gpu FAILED in 3 out of 3 in 3.7s
21174://tensorflow/python/keras/benchmarks/keras_examples_benchmarks:mnist_hierarchical_rnn_benchmark_test_gpu FAILED in 3 out of 3 in 3.5s
21179://tensorflow/python/keras/benchmarks/keras_examples_benchmarks:mnist_irnn_benchmark_test_gpu FAILED in 3 out of 3 in 3.5s
21184://tensorflow/python/keras/benchmarks/keras_examples_benchmarks:reuters_mlp_benchmark_test_gpu FAILED in 3 out of 3 in 3.5s
21189://tensorflow/python/keras/benchmarks/keras_examples_benchmarks:text_classification_transformer_benchmark_test_gpu FAILED in 3 out of 3 in 3.7s
21194://tensorflow/python/ops/numpy_ops:np_interop_test_gpu FAILED in 3 out of 3 in 29.1s
Anyone else running into this?
Any insight as to what might be causing this? | The docker containers we build use this script to install all the pip packages:
github.com
ROCmSoftwarePlatform/tensorflow-upstream/blob/develop-upstream/tensorflow/tools/ci_build/install/install_pip_packages.sh 82
#!/usr/bin/env bash
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
# Get the latest version of pip so it recognize manylinux2010
wget https://bootstrap.pypa.io/get-pip.py
This file has been truncated. show original
which in turn seems to install the keras-nightly package
github.com
ROCmSoftwarePlatform/tensorflow-upstream/blob/develop-upstream/tensorflow/tools/ci_build/install/install_pip_packages.sh#L98 13
# TensorFlow Serving integration tests require the following:
pip3 install grpcio
# Eager-to-graph execution needs astor, gast and termcolor:
pip3 install --upgrade astor
pip3 install --upgrade gast
pip3 install --upgrade termcolor
# Keras
pip3 install keras-nightly --no-deps
pip3 install keras_preprocessing==1.1.0 --no-deps
pip3 install --upgrade h5py==3.1.0
# Estimator
pip3 install tf-estimator-nightly --no-deps
# Tensorboard
pip3 install tb-nightly --no-deps
# Argparse
Is that still the correct thing to do? And is that correlated with the error we are getting?
thanks | 0 |
tensorflow | Special Interest Groups | Postprocess object detection predictions with tfjs-tflite | https://discuss.tensorflow.org/t/postprocess-object-detection-predictions-with-tfjs-tflite/5042 | I recently created an object detection example 2 using tfjs-tflite 1, which uses the ObjectDetector class 1 to load and use the Object Detector.
Now I wanted to create an object detection without using the ObjectDetector class. I managed to load the model into memory and to prepare the image to make predictions by following the ‘Test model runner’ example, but I’m having problems postprocessing the predictions since the dataSync method that is used in the CodePen example throws an error.
index.js:20 Uncaught (in promise) TypeError: result.dataSync is not a function
at detect (index.js:20)
Without the dataSync() method I’m getting the following output:
{TFLite_Detection_PostProcess: e, TFLite_Detection_PostProcess:1: e, TFLite_Detection_PostProcess:2: e, TFLite_Detection_PostProcess:3: e}
TFLite_Detection_PostProcess: e {kept: false, isDisposedInternal: false, shape: Array(3), dtype: 'float32', size: 40, …}
TFLite_Detection_PostProcess:1: e {kept: false, isDisposedInternal: false, shape: Array(2), dtype: 'float32', size: 10, …}
TFLite_Detection_PostProcess:2: e {kept: false, isDisposedInternal: false, shape: Array(2), dtype: 'float32', size: 10, …}
TFLite_Detection_PostProcess:3: e {kept: false, isDisposedInternal: false, shape: Array(1), dtype: 'float32', size: 1, …}
[[Prototype]]: Object
Code:
index.js:
const img = document.querySelector("img");
const resultEle = document.querySelector(`.result`);
let objectDetector;
/** Detect objects in image. */
async function detect() {
resultEle.textContent = "Loading...";
if (!objectDetector) {
objectDetector = await tflite.loadTFLiteModel(
"https://tfhub.dev/tensorflow/lite-model/ssd_mobilenet_v1/1/metadata/2?lite-format=tflite"
);
}
const start = Date.now();
let input = tf.image.resizeBilinear(tf.browser.fromPixels(img), [300, 300]);
input = tf.cast(tf.sub(tf.div(tf.expandDims(input), 127.5), 1), 'int32');
// Run the inference and get the output tensors.
let result = objectDetector.predict(input);
console.log(result)
const latency = Date.now() - start;
renderDetectionResult(result);
resultEle.textContent = `Latency: ${latency}ms`;
}
/** Render detection results. */
function renderDetectionResult(result) {
const boxesContainer = document.querySelector(".boxes-container");
boxesContainer.innerHTML = "";
for (let i = 0; i < result.length; i++) {
const curObject = result[i];
const boundingBox = curObject.boundingBox;
const name = curObject.classes[0].className;
const score = curObject.classes[0].probability;
if (score > 0.5) {
const boxContainer = createDetectionResultBox(
boundingBox.originX,
boundingBox.originY,
boundingBox.width,
boundingBox.height,
name,
score
);
boxesContainer.appendChild(boxContainer);
}
}
}
/** Create a single detection result box. */
function createDetectionResultBox(left, top, width, height, name, score) {
const container = document.createElement("div");
container.classList.add("box-container");
const box = document.createElement("div");
box.classList.add("box");
container.appendChild(box);
const label = document.createElement("div");
label.classList.add("label");
label.textContent = `${name} (${score.toFixed(2)})`;
container.appendChild(label);
container.style.left = `${left - 1}px`;
container.style.top = `${top - 1}px`;
box.style.width = `${width + 1}px`;
box.style.height = `${height + 1}px`;
return container;
}
document.querySelector(".btn").addEventListener("click", () => {
detect();
});
index.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>TFLITE Web API Example</title>
<link rel="stylesheet" href="style.css">
</head>
<body>
<h1>TFLITE Web API Object Detection Example</h1>
<div class="img-container">
<img src="https://storage.googleapis.com/tfweb/demos/static/obj_detection.jpeg" crossorigin="anonymous" />
<div class="boxes-container"></div>
</div>
<div class="btn">Detect</div>
<div class="result"></div>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-core"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-cpu"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-tflite@0.0.1-alpha.6/dist/tf-tflite.min.js"></script>
<script src="index.js"></script>
</body>
</html>
Any help is highly appreciated. Kind regards,
Gilbert Tanner | @Jason can help here | 0 |
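For reference, a hedged sketch of how the raw outputs could be read: predict() returns a map of named tensors here (as shown by the console log above), so data()/dataSync() has to be called on each entry rather than on the map itself. The output order assumed below (boxes, classes, scores, count) is the standard SSD postprocess layout, which matches the logged shapes, and the boxes are assumed to be normalized [ymin, xmin, ymax, xmax]:
const outputs = objectDetector.predict(input);
const boxes = await outputs['TFLite_Detection_PostProcess'].data();      // [1, 10, 4] flattened
const classes = await outputs['TFLite_Detection_PostProcess:1'].data();  // [1, 10]
const scores = await outputs['TFLite_Detection_PostProcess:2'].data();   // [1, 10]
const count = (await outputs['TFLite_Detection_PostProcess:3'].data())[0];

for (let i = 0; i < count; i++) {
  if (scores[i] > 0.5) {
    // Scale the normalized box to the displayed image size before drawing it.
    const [ymin, xmin, ymax, xmax] = boxes.slice(i * 4, i * 4 + 4);
  }
}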
tensorflow | Special Interest Groups | Error Saving TF-Java Model / Exporter API | https://discuss.tensorflow.org/t/error-saving-tf-java-model-exporter-api/5008 | I keep getting the Error No Operation named [StatefulPartitionedCall_2:0] in the Graph
when Using SavedModelBundle.exporter to save the model
Tensorflow Python Version : 2.4.1
Tensorflow Java Version: 0.3.1
Os: Windows 10
GPU/CPU: CPU version
ConcreteFunction serveFunction = savedModel.function("serve_model");
SavedModelBundle.exporter(exportDir)
.withFunction(serveFunction)
.export();
To access and inspect Graph operations, i can see the StatefulPartitionedCall_2
But without the : at the end of the operation name.
Iterator<Operation> operationIterator = serveFunction.graph().operations();
while(operationIterator.hasNext()){
System.out.println(operationIterator.next().name());
}
code snippet output
Adam/iter
Adam/iter/Read/ReadVariableOp
Adam/beta_1
Adam/beta_1/Read/ReadVariableOp
Adam/beta_2
...
...
...
train_model_labels
StatefulPartitionedCall_1
saver_filename
StatefulPartitionedCall_2
StatefulPartitionedCall_3
Works fine when invoking directly the Op from session.runner()
String checkpointPath = "...";
session.runner()
.feed("saver_filename:0", checkpointPath)
.fetch("StatefulPartitionedCall_2:0").run() ;
The error can be reproduced using this script, which defines and then saves the model (credits to Thierry Herrmann):
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras import layers

def make_model():

    class CustomLayer(keras.layers.Layer):
        def __init__(self, **kwargs):
            super().__init__(**kwargs)
            l2_reg = keras.regularizers.l2(0.1)
            self.dense = layers.Dense(1, kernel_regularizer=l2_reg,
                                      name='my_layer_dense')

        def call(self, data):
            return self.dense(data)

    inputs = keras.Input(shape=(8,))
    x1 = layers.Dense(30, activation="relu", name='my_dense')(inputs)
    outputs = CustomLayer()(x1)
    return keras.Model(inputs=inputs, outputs=outputs)


class CustomModule(tf.Module):
    def __init__(self):
        super(CustomModule, self).__init__()
        self.model = make_model()
        self.opt = keras.optimizers.Adam(learning_rate=0.001)

    @tf.function(input_signature=[tf.TensorSpec([None, 8], tf.float32)])
    def __call__(self, X):
        return self.model(X)

    # the my_train function processes one batch (one step): it computes the loss and applies the
    # loss gradient to update the model weights
    @tf.function(input_signature=[tf.TensorSpec([None, 8], tf.float32), tf.TensorSpec([None], tf.float32)])
    def my_train(self, X, y):
        with tf.GradientTape() as tape:
            logits = self.model(X, training=True)
            main_loss = tf.reduce_mean(keras.losses.mean_squared_error(y, logits))
            # self.model.losses contains the regularization loss (see l2_reg above)
            loss_value = tf.add_n([main_loss] + self.model.losses)
        grads = tape.gradient(loss_value, self.model.trainable_weights)
        self.opt.apply_gradients(zip(grads, self.model.trainable_weights))
        return loss_value


# instantiate the module
module = CustomModule()

def save_module(module, model_dir):
    tf.saved_model.save(module, model_dir,
                        signatures={
                            'serve_model':
                                module.__call__.get_concrete_function(tf.TensorSpec([None, 8], tf.float32)),
                            'train_model':
                                module.my_train.get_concrete_function(tf.TensorSpec([None, 8], tf.float32),
                                                                      tf.TensorSpec([None], tf.float32))})

MODEL_OUTPUT_DIR = "..."
save_module(module, MODEL_OUTPUT_DIR)
| For those interested in following this topic, the discussion is happening on this GitHub issue 2. | 0
tensorflow | Special Interest Groups | Use custom model with tfjs-tflite CORS issue | https://discuss.tensorflow.org/t/use-custom-model-with-tfjs-tflite-cors-issue/4985 | The tfjs-tflite library allows you to run TFLite models on the web.
Example:
const tfliteModel = await tflite.loadTFLiteModel('url/to/your/model.tflite');
or
const objectDetector = await tflite.ObjectDetector.create(
"https://storage.googleapis.com/tfhub-lite-models/tensorflow/lite-model/ssd_mobilenet_v1/1/metadata/2.tflite"
);
I’m currently working on a simple Object Detection example 2, which works fine for the example models that are stored on Google Cloud, but I couldn’t get it to work with a custom model stored on Github.
This gives me a CORS error:
Access to fetch at 'https://github.com/TannerGilbert/TFLite-Object-Detection-with-TFLite-Model-Maker/raw/master/model.tflite' from origin 'null' has been blocked by CORS policy: The 'Access-Control-Allow-Origin' header contains multiple values 'https://render.githubusercontent.com https://viewscreen.githubusercontent.com https://viewscreen-lab.githubusercontent.com', but only one is allowed. Have the server send the header with a valid value, or, if an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
Therefore, I wanted to ask what’s the simplest way to solve the error or perhaps what the best platform is to store your model on the web for free.
Any help is highly appreciated. Kind regards,
Gilbert Tanner | Thanks @Jason for the detailed explanations!
@Gi_T, for your use case, it is possible to use the “raw.githubusercontent.com” link to get the model file which has “access-control-allow-origin” header set to “*”.
Try:
https://raw.githubusercontent.com/TannerGilbert/TFLite-Object-Detection-with-TFLite-Model-Maker/master/model.tflite 5
Thanks! | 1 |
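For completeness, a hedged usage sketch with that URL (same ObjectDetector API as in the question; the detect() call is assumed to match the tfjs-tflite task-library API):
const objectDetector = await tflite.ObjectDetector.create(
  "https://raw.githubusercontent.com/TannerGilbert/TFLite-Object-Detection-with-TFLite-Model-Maker/master/model.tflite"
);
const detections = await objectDetector.detect(document.querySelector("img"));
console.log(detections);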
tensorflow | Special Interest Groups | Promote resources to args with tf-opt | https://discuss.tensorflow.org/t/promote-resources-to-args-with-tf-opt/4915 | Hello,
I am trying to apply the command tf-opt --promote-resources-to-args on the following TF/MLIR code.
func @counter(%arg0: tensor) -> tensor {
  %1 = "tf.VarHandleOp"() {container = "", shared_name = "x"} : () -> tensor<!tf_type.resource<tensor>>
  %2 = "tf.ReadVariableOp"(%1) : (tensor<!tf_type.resource<tensor>>) -> tensor
  %3 = "tf.Add"(%arg0, %2) : (tensor, tensor) -> tensor
  "tf.AssignVariableOp"(%1, %3) {device = ""} : (tensor<!tf_type.resource<tensor>>, tensor) -> ()
  %4 = "tf.ReadVariableOp"(%1) : (tensor<!tf_type.resource<tensor>>) -> tensor
  return %4 : tensor
}
It does nothing because the function’s name is not @main. Is there a way to apply this pass to every function, whatever its name is?
The result I need is:
func @counter(%arg0: tensor, %arg1: tensor {tf.aliasing_output = 1 : i64, tf.resource_name = "x"}) -> (tensor, tensor) {
  %0 = "tf.Add"(%arg0, %arg1) : (tensor, tensor) -> tensor
  return %0, %0 : tensor, tensor
}
Best regards,
HP | Unfortunately this isn’t possible as-is since it is hard-coded here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/tensorflow/transforms/promote_resources_to_args.cc#L348-L349 6
Feel free to send a patch that would make it an option of the pass to specify the entrypoint name! | 0 |
tensorflow | Special Interest Groups | Unable to assign element of a 1D array obtained from tensor.data() as value of an object | https://discuss.tensorflow.org/t/unable-to-assign-element-of-a-1d-array-obtained-from-tensor-data-as-value-of-an-object/4862 | [what i want to do]
I created a rank-3 tensor.
Then I got a flattened 1D array of all the values in this tensor by using tensor.data() method.
I want to assign each element of this flattened array as value of an object.
[what is the problem]
I’m unable to obtain the individual elements of the array using a.data().then((data) => { card.value = data[i] }); .
console.log(card.value) returns undefined.
However, using card.value = a.dataSync()[i]; seems to work, instead.
[main.js]
import * as tf from "@tensorflow/tfjs";
import Card from "./Card.js";
// create a rank-3 tensor
const a = tf.randomNormal([4, 3, 2]);
a.print();
// assign values in the tensor to a series of div object
for (let i = 0; i < a.size; i += 1) {
// create card object
const card = new Card(i, "card " + String(i), "96px", "96px");
// assign a value to card
// [method 1] using synchronous method works
// card.value = a.dataSync()[i];
// [method 2] using asynchronous method is not working ...
a.data().then((data) => { card.value = data[i] });
console.log(card.value);
[Card.js]
export default class Card {
// constructor
constructor(_idx, _name, _width, _height, _posx, _posy, _posz, _value) {
this.idx = _idx;
this.name = _name;
this.width = _width;
this.height = _height;
this.posx = _posx;
this.posy = _posy;
this.posz = _posz;
this.value = _value;
}
} | Could anybody kindly take a look at the issue shown above and advise what is missing? Thanks. | 0
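A hedged sketch of the fix (not from the thread): tensor.data() is asynchronous, so console.log(card.value) runs before the promise resolves, which is why it prints undefined. Reading the values once with await before the loop avoids that (this needs an async context; otherwise dataSync() is the synchronous fallback the question already found):
const values = await a.data();            // one flattened read of all tensor values
for (let i = 0; i < a.size; i += 1) {
  const card = new Card(i, "card " + String(i), "96px", "96px");
  card.value = values[i];
  console.log(card.value);                // defined now
}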
tensorflow | Special Interest Groups | SIG Build October Meeting: October 5 @ 2pm | https://discuss.tensorflow.org/t/sig-build-october-meeting-october-5-2pm/4823 | SIG Build’s next meeting will be tomorrow, Tuesday, October 5, at 2pm Pacific time. Find the meeting details at bit.ly/tf-sig-build-notes 5, and feel free to suggest new agenda items. | Thanks to everyone who attended the meeting yesterday. Here are are a couple of notable points from the discussion:
Our November meeting on Nov. 2 may be affected by Daylight Savings Time if you are in Europe.
The TF DevInfra team is ramping up a couple of new members with projects including manylinux2014 support for our build environment and author notifications when a GitHub PR gets rolled back.
manylinux development continues to be complicated. See the notes for details.
Thanks, and see you in November! | 0 |
tensorflow | Special Interest Groups | Integrating tf.recommender with TFX pipeline ran on kubeflow | https://discuss.tensorflow.org/t/integrating-tf-recommender-with-tfx-pipeline-ran-on-kubeflow/4671 | I am currently working on integrating a tf.recommender 1 model into an existing TFX 1 pipeline to provide on-service recommendations. I am quite new to both TFX and tf.recommender and am not seeing any resources on integrating the two. I want to be sure I am implementing best practices–in particular with TFT 1 and TFMA 1. Does anyone know of existing docs that may help me with this, or better yet, existing example pipelines?
Thanks! | Maybe @Robert_Crowe has some links to share. | 0
tensorflow | Special Interest Groups | How to separate dataset to validate CNN? | https://discuss.tensorflow.org/t/how-to-separate-dataset-to-validate-cnn/4192 | Hello, I'm trying to train a CNN with two datasets that I labeled manually as negative and positive (80x60 depth images in each matrix).
# dimensions of our images.
img_width, img_height = 80, 60
n_positives_img, n_negatives_img = 17874, 26308
n_total_img = 44182
#Imports of datasets inside Drive
ds_negatives = np.loadtxt('/content/drive/MyDrive/Colab Notebooks/negative_depth.txt')
ds_positives = np.loadtxt('/content/drive/MyDrive/Colab Notebooks/positive_depth.txt')
#Labeled arrays for datasets
arrayceros = np.zeros(n_negatives_img)
arrayunos = np.ones(n_positives_img)
#Reshaping of datasets to convert separate them
arraynegativos= ds_negatives.reshape(( n_negatives_img, img_width, img_height))
arraypositivos= ds_positives.reshape((n_positives_img, img_width, img_height))
#Labeling datasets with the arrays
ds_negatives_target = tf.data.Dataset.from_tensor_slices((arraynegativos, arrayceros))
ds_positives_target = tf.data.Dataset.from_tensor_slices((arraypositivos, arrayunos))
#Concatenate 2 datasets and shuffle them
ds_concatenate = ds_negatives_target.concatenate(ds_positives_target)
datasetfinal = ds_concatenate.shuffle(n_total_img)
But when I try to separate my dataset 80/20 to validate my CNN:
trainingdataset, validatedataset = train_test_split(datasetfinal, test_size=0.2, random_state=25)
I get this error:
TypeError: Singleton array arrayshapes: ((80, 60), ()), types: (tf.float64, tf.float64)>, dtype=object) cannot be considered a valid collection.
Any ideas? Thank in advance!!! | It’s impossible to split tensorflow dataset object by passing it to train_test_split from sklearn. You can choose the number of validation samples, which should be int number, and use the following example:
valid_ds = datasetfinal.take(n_samples)
train_ds = datasetfinal.skip(n_samples)
It does what the method says: takes first n_samples from the dataset and skips all the rest or skips the first n_samples and takes all the rest. | 0 |
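A hedged end-to-end sketch using the names from the question (note reshuffle_each_iteration=False, so that take/skip keep a consistent, non-overlapping split across epochs; the batch size is illustrative):
n_val = int(0.2 * n_total_img)

datasetfinal = ds_concatenate.shuffle(n_total_img, reshuffle_each_iteration=False)
validatedataset = datasetfinal.take(n_val).batch(32)
trainingdataset = datasetfinal.skip(n_val).batch(32)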
tensorflow | Special Interest Groups | I was working on Mask RCNN so while executing demo.ipynb on jupyter notebook i got an error can anyone please help me out | https://discuss.tensorflow.org/t/i-was-working-on-mask-rcnn-so-while-executing-demo-ipynb-on-jupyter-notebook-i-got-an-error-can-anyone-please-help-me-out/2872 | AttributeError                     Traceback (most recent call last)
     14 sys.path.append(ROOT_DIR)  # To find local version of the library
     15 from mrcnn import utils
---> 16 import mrcnn.model as modellib
     17 from mrcnn import visualize
     18 # Import COCO config

~\ComputerVisionProject\Mask_RCNN_CustomDataset\Mask_RCNN-master\Mask_RCNN-master\mrcnn\model.py in <module>
    253
    254
--> 255 class ProposalLayer(KE.Layer):
    256     """Receives anchor scores and selects a subset to pass as proposals
    257     to the second stage. Filtering is done based on anchor scores and

AttributeError: module 'keras.engine' has no attribute 'Layer' | Are you using this repo? I think it's for TF version 1.x
github.com
matterport/Mask_RCNN 50
Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow | 0 |
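If you do want to stay on TF 2.x / recent Keras, a commonly suggested workaround (hedged, not from this thread) is to alias keras.layers in mrcnn/model.py, since recent Keras versions no longer expose keras.engine.Layer. Hypothetical patch:
# import keras.engine as KE          # old import that breaks on recent Keras
import keras.layers as KE            # KE.Layer now resolves to keras.layers.Layer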
tensorflow | Special Interest Groups | Inverse kinematic approximation with neural network | https://discuss.tensorflow.org/t/inverse-kinematic-approximation-with-neural-network/4387 | Good morning everyone,
I’ll try to briefly explain the context and then the problem I’m facing:
Context: I am using and testing a collaborative robot. This robot was provided to me with a Python library that allows me to acquire signals from the robot (currents, velocity, positions, I/O, etc.) and to command it, in joint and end-effector (EE) coordinates. Functions for Direct and Inverse Kinematics (DK and IK) are also available.
For my curiosity, I was interested in generating a trajectory (in end-effector coordinates) in order to move it within a conical area [I attach a link to the video that shows the movement in question].
LINK: https://www.youtube.com/watch?v=CExtMfvRabo 4
From the robot, moreover, it is possible to save the .csv file containing the trajectories, in joint coordinates, of the single joints.
Initially, not knowing the “shape” that should have the trajectory (in end-effector coordinates) of the movement that I was interested in reproducing, I was able, manually moving the robot in gravity compensation mode, to acquire the trajectories of the individual joints. At this point, using the Direct Kinematics algorithm, I obtained the movement of the consequent end-effector [I attach photos of 2 3D graphs: the first in which I plot the 3 coordinates x,y,z and the second, in which I plot roll, pitch, yaw].
End Effector Angular Displacement 1
End Effector Position Displacement
Here the problem was born.
Problem: out of curiosity, I tried to use the Inverse Kinematics algorithm on the points obtained from the DK and the algorithm returned the error: “Singular Trajectory”. But the robot was able to move according to that trajectory, the problem should be in the calculation of the IK, which probably finds multiple/infinite solutions.
To overcome this limitation I used a Neural Network developed in Python using Tensorflow (Keras) to try to approximate the IK. I will preface this by saying that I have never used Keras or Tensorflow, so I may have made some conceptual errors. I have consulted the API of Keras and also the guide proposed in this link
LINK: https://machinelearningmastery.com/deep-learning-models-for-multi-output-regression/ 1
On my PC I use:
Visual Studio Code for programming in python;
python 3.9.5
Keras 2.6.0;
I thought of the neural network this way: 6 input nodes (corresponding to the 6 coordinates of the end-effector) and 6 output nodes (the 6 coordinates of the joints). The training set consists of a .csv file containing the 6 coordinates of the end-effector computed via the DK run on a .csv file containing the trajectories of the 6 joints. The file containing the joint coordinates is the Label file.
Below I attach the code of the network implementation.
from numpy import loadtxt
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense
import tensorflow as tf
from numpy import array

# Model definition
def get_model(I_N_L_1, I_N_L_2, I_N_L_3, I_N_L_4, I_N_L_5, n_inputs, n_outputs):
    model = Sequential()
    model.add(Dense(I_N_L_1, input_dim=n_inputs, kernel_initializer='he_uniform', activation='relu'))
    model.add(Dense(I_N_L_2, activation='relu'))
    model.add(Dense(I_N_L_3, activation='relu'))
    model.add(Dense(I_N_L_4, activation='relu'))
    model.add(Dense(I_N_L_5, activation='relu'))
    model.add(Dense(n_outputs))
    model.compile(loss='mae', optimizer='adam', metrics=["mae"])
    return model

# Load Training set csv
dataset_EF = loadtxt('WeldingProve.csv', delimiter=',')
x_train = dataset_EF[0:1700, 0:6]
print('shape: ', x_train.shape)

# Load Label set csv
dataset_joints = loadtxt('EF_from_WeldingProve.csv', delimiter=',')
y_train = dataset_joints[0:1700, 0:6]
print('shape: ', y_train.shape)

# Test set definition
x_test = dataset_EF[1701:, 0:6]
print('shape: ', x_test.shape)

# Label of the test set definition
y_test = dataset_joints[1701:, 0:6]
print('shape: ', y_test.shape)

# Number of nodes in the hidden layers
I_N_L_1 = 192
I_N_L_2 = 36
I_N_L_3 = 6
I_N_L_4 = 36
I_N_L_5 = 192

# Number of nodes in the input and output layers
n_inputs = 6
n_outputs = 6

# calling the "get_model" function
model = get_model(I_N_L_1, I_N_L_2, I_N_L_3, I_N_L_4, I_N_L_5, n_inputs, n_outputs)

es = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=5)

# fit model
model.fit(x_train, y_train, verbose=1, epochs=600)

# saving the model
model.save("Test_Model.h5")

# Testing procedure
pred = []

# Computing the Prediction on the Test Set
for i in range(len(x_test)-1):
    b = [x_test[i][0], x_test[i][1], x_test[i][2], x_test[i][3], x_test[i][4], x_test[i][5]]
    ToBePredicted = array([b])
    Prediction = model.predict(ToBePredicted)
    a = [Prediction[0][0], Prediction[0][1], Prediction[0][2], Prediction[0][3], Prediction[0][4], Prediction[0][5]]

# Computing the mean vector of the error for each predicted joint trajectory
average_vector = []
sum = 0
average = 0
for j in range(6):  # colonne
    for i in range(len(y_test)-1):  # righe
        sum = sum + (pred[i][j] - y_test[i][j])
    average = sum/len(y_test)
    average_vector.append(average)
    average = 0
    sum = 0
print('average_vector: ', average_vector)

# Computing the standard deviation vector of the error for each predicted joint trajectory
sum = 0
std_vector = []
for j in range(6):  # colonne
    for i in range(len(y_test)-1):  # righe
        sum = sum + ((pred[i][j] - y_test[i][j]) - average_vector[j])**2
    std = (sum/len(y_test))**(0.5)
    std_vector.append(std)
    std = 0
    sum = 0
print('std_vector: ', std_vector)
My questions are the following:
Once I have trained the neural network, even using a very large training set, I get predictions that are not good. Can you suggest how to improve these predictions, perhaps by acting on the parameters of the network?
Is it necessary to pre-process the training data and its labels? If yes, which technique should I apply?
Trying to change the number of nodes in the various layers of the network, I saw that the performance changes, sometimes a lot. Do you have advice on the "shape" to give to the network?
Are there any other solutions that can be used to estimate the IK of the robot ? | I’m not an expert in robotics. So my comments are only regarding the model architecture and training.
When you call model.fit() you can pass your test data to the argument "validation_data", and the model will automatically calculate loss and metrics for both the train and validation sets. TensorFlow has MeanSquaredError and MeanAbsolutePercentageError in addition to the MAE that you use.
In the EarlyStopping callback you should define monitor=‘val_loss’ and restore_best_weights=True. In this case the training will be stopped, when validation loss starts worsening, and the model will roll back to the optimal state, when best val_loss was reached. At present you monitor training loss, which does not say anything about overfitting.
Check the scale of the coordinates used as input features. If they are not in range 0-1, input data requires normalization. Keras has Normalization layer, which could be used to ensure that all data passed to the model is normalized identically.
Usually number of units in the dense layers gradually decreases. You defined 5 layers with units decreasing and then increasing like V-shape.
If all this does not improve the result, probably you should add more features like previous positions of the object, or it’s speed, or something else. | 0 |
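A hedged sketch pulling those suggestions together (the layer sizes are only illustrative, and tf.keras.layers.Normalization is assumed to be available in your TF version; x_train/x_test are the arrays from the question):
import tensorflow as tf

normalizer = tf.keras.layers.Normalization()
normalizer.adapt(x_train)                      # learn mean/variance from the training set

model = tf.keras.Sequential([
    normalizer,
    tf.keras.layers.Dense(192, activation='relu'),
    tf.keras.layers.Dense(96, activation='relu'),
    tf.keras.layers.Dense(48, activation='relu'),
    tf.keras.layers.Dense(6),
])
model.compile(loss='mae', optimizer='adam', metrics=['mae'])

es = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=20,
                                      restore_best_weights=True)
model.fit(x_train, y_train, validation_data=(x_test, y_test),
          epochs=600, callbacks=[es], verbose=1)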
tensorflow | Special Interest Groups | How do I build a custom voice recognition model for multiple people? | https://discuss.tensorflow.org/t/how-do-i-build-a-custom-voice-recognition-model-for-multiple-people/3833 | Is there a quick and easy import tool for custom voice data?
Is there a free local training Speech Recognition to text tool (including exporting model for tf.js) for custom raw voice data?
Can I run tf.js to automatically learn unknown speech sounds and integrate them into existing model examples?
ps: I don’t want to train my custom data through a cloud-based paid service | Welcome to the community. If you just need sound recognition you can try Teachable Machine that makes it easy to recognize short form sounds eg 1 second in length. I have not seen a full voice recognition conversion yet as those tend to be quite large in file size, but sound recognition is most certainly possible. check:
teachablemachine.withgoogle.com
Teachable Machine 16
Train a computer to recognize your own images, sounds, & poses.
A fast, easy way to create machine learning models for your sites, apps, and more – no expertise or coding required.
And then select audio project. If you like what it trains in browser you can click download on top right and save the model files generated to your computer. All training is done in browser using TensorFlow.js so no server is used here other than to deliver the initial webpage so your sounds are never sent to a server.
If you want to do voice recognition in JavaScript it actually exists via the WebSpeech API:
developer.mozilla.org
Using the Web Speech API - Web APIs | MDN 11
The Web Speech API provides two distinct areas of functionality — speech recognition, and speech synthesis (also known as text to speech, or tts) — which open up interesting new possibilities for accessibility, and control mechanisms. This article...
You do not need TensorFlow.js to use that. It is part of the browser implementation and will use whatever OS level voice recognition exists.
Good luck! | 0 |
tensorflow | Special Interest Groups | How to load tflite model into chrome extension using tfjs | https://discuss.tensorflow.org/t/how-to-load-tflite-model-into-chrome-extension-using-tfjs/4424 | Hi tensors,
I have seen the chrome extension 6 example in tfjs-examples and I am trying to adapt it for my model. However, I am not able to get the required tfjs.min.js and tfjs.js libraries into the extension.
I have added the library links to popup.html and tried to load the model in background.js.
Uncaught (in promise) ReferenceError: tflite is not defined
I have added package.json file —
{
“name”: “xxx”,
“version”: “0.0.1”,
“description”: “Use tfjs model.predict in a chrome extension”,
“scripts”: {
“copy”: “copy content.js dist/”,
“build”: “parcel build background.js -d dist/ -o background --no-minify && npm run copy”,
“watch”: “npm run copy && parcel watch background.js --hmr-hostname localhost -d dist/ -o background”
},
“license”: “Apache 2.0”,
“devDependencies”: {
“babel-core”: “^6.26.3”,
“babel-plugin-transform-runtime”: “^6.23.0”,
“babel-polyfill”: “^6.26.0”,
“babel-preset-env”: “^1.6.1”,
“clang-format”: “^1.2.3”,
“parcel-bundler”: “^1.9.4”
},
“resolutions”: {
“is-svg”: “4.3.1”,
“node-fetch”: “2.6.1”,
“vega”: “5.17.3”,
“glob-parent”: “5.1.2”,
“postcss”: “8.2.10”
},
“dependencies”: {
“@tensorflow/tfjs”: “^3.9.0”
}
}
The manifest.json is like this—
{
“name”: “xxxxx”,
“description”: “xxxxxx”,
“version”: “1.0”,
“manifest_version”: 2,
“browser_action”: {
“default_icon”: “icon.png”,
“default_popup”: “popup.html”,
“default_title”: “Chrome Extension”
},
“permissions”: [
“<all_urls>”,
“activeTab”
],
“background”: {
“scripts”: [“background.js”],
“persistent”: true
},
“content_scripts”: [
{
“matches”: [“http:///”, “https:///”],
“js”: [“content.js”],
“all_frames”: true,
“run_at”: “document_start”
}
],
“commands”: {
“_execute_browser_action”: {
“suggested_key”: {
“default”: “Ctrl+Shift+F”,
“mac”: “MacCtrl+Shift+F”
},
“description”: “Opens popup.html”
}
},
“content_security_policy”: “script-src ‘self’ https://cdn.jsdelivr.net ‘unsafe-eval’; object-src ‘self’”
}
Background.js is like —
// Copyright 2018 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
//import * as tf from 'tfjs';
/**
 * Function called when extension installed into browser.
 */
chrome.runtime.onInstalled.addListener(function() {
  // load TFLite model into browser
  async function load_tflite_model() {
    const tfliteModel = await tflite.loadTFLiteModel(
      "https://storage.googleapis.com/tfweb/models/cartoongan_fp16.tflite"
    );
    console.log("tfliteModel...", tfliteModel);
  }
  load_tflite_model();
});
Thank
Neha Soni | Hi there,
I have seen chrome extension in the tfjs-examples, I am trying to make it for my model. But for that also, I am not able to get required tfjs.min.js and tfjs.js lib into extension.
I have added lib links to popup.html and tried to load the model in background.js.
Uncaught (in promise) ReferenceError: tflite is not defined
(The package.json, manifest.json and background.js are the same as shown above.)
Thank
Neha Soni | 0 |
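A hedged note on the error itself: the <script> tags in popup.html only load tf/tflite into the popup page, not into background.js, so the tflite global is undefined there. Since background.js is already bundled with Parcel, one option (assuming the @tensorflow/tfjs and @tensorflow/tfjs-tflite npm packages; whether the tflite WASM backend initializes inside an extension background page may still need checking) is to import the libraries directly:
import * as tf from '@tensorflow/tfjs';
import * as tflite from '@tensorflow/tfjs-tflite';

chrome.runtime.onInstalled.addListener(async () => {
  // load TFLite model when the extension is installed
  const tfliteModel = await tflite.loadTFLiteModel(
    'https://storage.googleapis.com/tfweb/models/cartoongan_fp16.tflite'
  );
  console.log('tfliteModel...', tfliteModel);
});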
tensorflow | Special Interest Groups | Setup your favorite editor to develop Keras | https://discuss.tensorflow.org/t/setup-your-favorite-editor-to-develop-keras/3747 | Here is list of editors that you can use to develop Keras.
The steps to setup each of them is provided.
Welcome to reply to this topic to add more.
GitHub Codespaces
This is the easiest option. It helps you setup the environment with one click.
You can click “Code → new codespace” on your fork’s web page to open it in GitHub Codespaces 14.
You can start coding and running the tests there right away.
However, Codespaces is only available in beta. You need to request early access to use it.
Visual Studio Code
This is also an easy option for beginners.
Clone your fork of the repo to your computer.
Open Visual Studio Code 2.
Install the Remote-Containers extension 8.
Open the cloned folder and click “Reopen in Container” in the popup notification.
You can start coding and running the tests there right away. | Thank you,
I think we are missing the linting tools in keras/requirements.txt at master · keras-team/keras · GitHub 1
See my comment at https://github.com/keras-team/keras/pull/15006#pullrequestreview-716500818 1
If it is complete you can close my June contribution offer at Vscode/Github codespaces
We are also waiting for the same for TF core at Tensorflow with GitHub Codespaces 1.
For TF Addons it was rejected more than 1 year ago, but probably we could re-evaluate it:
github.com/tensorflow/addons
Add initial vscode devcontainer support 2
tensorflow:master ← bhack:vscode_devcontainer
opened Mar 15, 2020 by bhack (+61 −3)
Initial support for Vscode devcontainer. See https://github.com/tensorflow/addons/issues/1305
Since May 2020 we maintained the .devcontainers for TF, Keras (not standalone) and TF Addons at:
github.com
vscode-dev-containers/repository-containers/github.com/tensorflow at main ·... 7
main/repository-containers/github.com/tensorflow
A repository of development container definitions for the VS Code Remote - Containers extension and GitHub Codespaces - vscode-dev-containers/repository-containers/github.com/tensorflow at main · m... | 0 |
tensorflow | Special Interest Groups | Set_weights in keras | https://discuss.tensorflow.org/t/set-weights-in-keras/4319 | I have built the model and trained it. Then I was trying to set weights on the model from a file saved on my local machine, but I was getting errors. So, what are the things that we should care about while setting weights in trained models? Thank you | What methods did you use to save the model weights and use them again?
If you need to reuse only the weights, here’s how you can save and reload them:
model.save_weights('name_or_path.h5')
new_model.load_weights('name_or_path.h5')
The architecture of "new_model" should be identical to "model". | 0
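A hedged sketch of the usual pitfalls (build_model and the input shape below are placeholders for your own code, not part of the thread): the new model must have exactly the same architecture, and a Sequential/subclassed model may need to be built before load_weights:
model.save_weights('my_weights.h5')

new_model = build_model()                        # same layers, same order, same shapes
new_model.build(input_shape=(None, 64, 20))      # hypothetical input shape
new_model.load_weights('my_weights.h5')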
tensorflow | Special Interest Groups | MobileNetV3: small change in architecture between TF2.5 and 2.6 | https://discuss.tensorflow.org/t/mobilenetv3-small-change-in-architecture-between-tf2-5-and-2-6/4110 | For tf.keras.applications.MobileNetV3 (large or small), there’s been a slight change to the architecture from TF <=2.5 to TF 2.6. Specifically the GlobalAveragePooling2D layer happens before “Conv_2” in TF2.6, but after “Conv_2” (and it’s non-linear activation) in TF2.5.
These operations don’t commute, so the architectures are slightly different. Both versions point to the same pre-trained weights, so their architectures ought to be the same.
I haven’t checked if this degrades the performance of the pretrained models.
My interest in this is mostly that it’s a breaking change to the API: MobileNetV3Large(include_top=False) will output a tensor of shape [?, 1, 1, 1280] starting with TF2.6 compared to a tensor of shape [?, 7, 7, 1280] with TF <=2.5 (assuming an input of shape [?, 224, 224, 3]). | This is a bug fix.
The two ops don’t quite commute but they commute well enough that both versions of the model do well with the weights.
You are correct about the change in the feature vector shape. The new version is the one that is “correct”. | 0 |
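A small hedged check that makes the behavioural difference visible (the TF 2.5 shape is taken from the question, the TF 2.6 shape from the fix described above):
import tensorflow as tf

m = tf.keras.applications.MobileNetV3Large(include_top=False, input_shape=(224, 224, 3))
print(m.output_shape)   # (None, 7, 7, 1280) on TF <= 2.5, (None, 1, 1, 1280) on TF >= 2.6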
tensorflow | Special Interest Groups | Proper use of Keras ImageDataGenerator: Create Masks for Segmentation and sample_weight parameter | https://discuss.tensorflow.org/t/proper-use-of-keras-imagedatagenerator-create-masks-for-segmentation-and-sample-weight-parameter/3134 | Hello all,
I want to do image data augmentation for a semantic segmentation task. Therefore, I want to use the ImageDataGenerator from Keras, together with the flow() method, because my data is in Numpy arrays and does not need to be loaded from a folder. Since this is a segmentation task, I need to augment the image and the corresponding mask. I do this by following the last example in the API reference (ImageDataGenerator 4) and accordingly using two different generators for image and mask with the same data_gen_args. I only want to rotate, flip and move my images, so I want to use the arguments rotation_range, width_shift_range, height_shift_range, horizontal_flip, vertical_flip.
Accordingly, I want to get masks that are 8-bit images of shape (128,128,1) like the input mask, and that contain only the classes of the input mask (all integer values). And this is exactly where the problem lies: the masks I get are 32-bit floats, which do not contain integer values at all. Even when specifying the argument dtype="uint8", the code always returns float32 masks. I have not found an example that fixes this problem. Is there a trick that can be used?
Another problem in connection with the ImageDataGenerator is sample_weight. As my dataset is quite unbalanced, I would like to use them. In a segmentation task, I think the sample_weight parameter in the flow() method would have to correspond to another mask containing the respective class_weight for the class of each pixel in the original mask. If I do it this way, I get sample_weight back as well, but it seems to me that these weights, similar to the mask, are not correct either, as my UNet does not train well with them anymore. In the meantime I use a third ImageDataGenerator only for the sample_weight, so the training works better, but I hardly think this is the right approach. However, I have not found an example for the correct use. Therefore I hope that the community can help me with their experience.
Thank you.
Kind regards,
Jonas | Hi Jonas
ImageDataGenerator has been superseded by Keras Preprocessing Layers 5 for data preprocessing, to be used together with the tf.data 2 API. However, at this time, you cannot yet do joint preprocessing of the image and mask using Keras Preprocessing Layers so I cannot recommend that route yet.
In my experience, the following data augmentation frameworks support image segmentation use cases directly:
Albumentations 6
ImgAug 6
Your best way for now is to use one of these libraries and then format your dataset as a Python generator (or tf.data.Dataset through tf.data.Dataset.from_generator 4)
The limitations of these approaches is that they do the data transformations in Python rather than TF operations and therefore cannot be saved to SavedModel and deployed in production.
Until we have a segmentation-compatible, Keras Preprocessing Layer implemented with TF ops, I advise you to special-case inference in your model setup. You can use Python libraries for data preprocessing for training and evaluation, but implement the minimal necessary inference-time data transformations (JPEG decompression, size, scale, …) using TF functions and Keras Preprocessing Layers. For example tf.io.decode_image and tf.keras.layers.Resizing. | 0 |
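A hedged sketch of that suggestion: joint image/mask augmentation with Albumentations, wrapped in tf.data.Dataset.from_generator (images/masks are your own NumPy arrays, and the shapes/transforms are only illustrative; Albumentations applies nearest-neighbour interpolation to mask targets, so the integer class values are preserved):
import albumentations as A
import tensorflow as tf

aug = A.Compose([
    A.Rotate(limit=20),
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.0, rotate_limit=0, p=0.5),
])

def gen():
    for img, mask in zip(images, masks):        # images/masks: your NumPy arrays
        out = aug(image=img, mask=mask)          # same random transform for both
        yield out['image'], out['mask']

ds = tf.data.Dataset.from_generator(
    gen,
    output_signature=(
        tf.TensorSpec(shape=(128, 128, 3), dtype=tf.float32),
        tf.TensorSpec(shape=(128, 128, 1), dtype=tf.uint8),
    ),
).batch(16)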
tensorflow | Special Interest Groups | SIG Build September Meeting: September 7 (today!) @ 2pm | https://discuss.tensorflow.org/t/sig-build-september-meeting-september-7-today-2pm/4222 | SIG Build’s next meeting will be today, Tuesday, September 7, at 2pm Pacific time. Find the meeting details at bit.ly/tf-sig-build-notes 6, and feel free to suggest new agenda items.
I hope those of you in the USA had a relaxing Labor Day! | Thanks to everyone who attended this month’s meeting. Here’s a summary of some of the main points:
About 20 vulnerabilities are expected to be patched on top of TF 2.6; there will be patch releases for previous releases as well.
The DevInfra team is looking at bumping some dependencies, but it’s slow going because of how much testing is necessary.
manylinux2014 or perennial manylinux is a requirement for CUDA 11.4. The DevInfra team is working on this.
I’m looking at Docker containers again (finally!) and will be checking out the PYPA containers to see if we can use those directly instead of using our own custom toolchain, which we needed for Ubuntu previously.
Check out the notes, linked in the first post, for full details. | 0 |
tensorflow | Special Interest Groups | Locally connected 2D layer without summation over colors | https://discuss.tensorflow.org/t/locally-connected-2d-layer-without-summation-over-colors/4246 | I want to create a neural network with a locally connected layer but without summation over the 3rd dimension (colors) of the inputs.
I saw in the docs of LocallyConnected2D 2 that there is no “group” argument as in the conv2d.
Is there a way to do that? | Do you mean something like this?
github.com/keras-team/keras
Feature request: In-plane locally-connected 2D convolutions 6
opened Mar 29, 2018 · closed Jun 25, 2021 · tsoernes
I'm requesting a convolution that works on each channel separately using the same filter (as https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/contrib/layers/conv2d_in_plane), but uses different filters spatially (like LocallyConnected2D).
One approach would be to modify `LocallyConnected2D`:
```
individual_channels = tf.split(inputs, inputs.shape[3], -1)
convs = []
for channel in individual_channels:
conv = K.local_conv2d(channel, self.kernel, self.kernel_size, self.strides,
(self.output_row, self.output_col), self.data_format)
convs.append(conv)
outputs = tf.concat(convs, -1)
```
where
```
self.kernel_shape = (output_row * output_col,
self.kernel_size[0] * self.kernel_size[1], 1)
```
But the above approach is very slow. | 0 |
tensorflow | Special Interest Groups | Not able to lower tf.sets.intersection to HLO | https://discuss.tensorflow.org/t/not-able-to-lower-tf-sets-intersection-to-hlo/4232 | My python code snippet is like:
import os
import numpy as np
import tensorflow as tf
from tensorflow.python.eager import context
from tensorflow.python.framework import dtypes
from tensorflow.python.platform import test
from tensorflow.python.framework import config
import tensorflow.compat.v1 as tf

config.enable_mlir_bridge()
tf.config.experimental.enable_mlir_bridge()

class CustomModule(tf.Module):
    def __init__(self):
        super(CustomModule, self).__init__()
        self.condition = tf.Variable(np.array([[True, False, False], [False, True, False], [True, True, True]]), dtype=tf.bool)
        self.x = tf.Variable(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), dtype=tf.int32)
        self.y = tf.Variable(np.array([[11, 12, 13], [14, 15, 16], [17, 18, 19]]), dtype=tf.int32)

    @tf.function
    def __call__(self, x):
        r = tf.where(self.condition, self.x, self.y)
        m = tf.where(self.condition, self.x, self.y)
        c = tf.sets.intersection(tf.expand_dims(r, 0), tf.expand_dims(m, 0))
        return c

module = CustomModule()
module_with_signature_path = os.path.join("/data/aruna/tf_ops", 'sets_intersection')
call = module.__call__.get_concrete_function(tf.TensorSpec(shape=(), dtype=tf.int32))
signatures = {'predict': call}
tf.saved_model.save(module, module_with_signature_path, signatures=call)
print('Saving model...')

if __name__ == '__main__':
    test.main()
I ran this python code and got saved_model.pb.
Then I used following commands:
tensorflow/compiler/mlir/tf-mlir-translate --savedmodel-objectgraph-to-mlir --tf-savedmodel-exported-names=predict -tf-enable-shape-inference-on-import=true $PWD -o sample.mlir
tensorflow/compiler/mlir/tf-opt --tf-executor-to-functional-conversion --tf-shape-inference -xla-legalize-tf --print-ir-before-all sample.mlir
TF dialect looks like:
// -----// IR Dump Before LegalizeTF //----- //
builtin.func private @__inference___call___750(%arg0: tensor {tf._user_specified_name = “x”}, %arg1: tensor<!tf_type.resource>, %arg2: tensor<!tf_type.resource>, %arg3: tensor<!tf_type.resource>) → (tensor<?x3xi64>, tensor<?xi32>, tensor<3xi64>) attributes {tf._construction_context = “kEagerRuntime”, tf._input_shapes = [#tf_type.shape<>, #tf_type.shape<>, #tf_type.shape<>, #tf_type.shape<>], tf.signature.is_stateful} {
%cst = “tf.Const”() {device = “”, value = dense<0> : tensor} : () → tensor
%cst_0 = “tf.Const”() {device = “”, value = dense<0> : tensor} : () → tensor
2021-09-08 09:56:50.579733: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
%0 = “tf.ReadVariableOp”(%arg2) {device = “”} : (tensor<!tf_type.resource>) → tensor<3x3xi32>
%1 = “tf.ReadVariableOp”(%arg2) {device = “”} : (tensor<!tf_type.resource>) → tensor<3x3xi32>
%2 = “tf.ReadVariableOp”(%arg3) {device = “”} : (tensor<!tf_type.resource>) → tensor<3x3xi32>
%3 = “tf.ReadVariableOp”(%arg3) {device = “”} : (tensor<!tf_type.resource>) → tensor<3x3xi32>
%4 = “tf.ReadVariableOp”(%arg1) {device = “”} : (tensor<!tf_type.resource>) → tensor<3x3xi1>
%5 = “tf.Select”(%4, %0, %2) {device = “”} : (tensor<3x3xi1>, tensor<3x3xi32>, tensor<3x3xi32>) → tensor<3x3xi32>
%6 = “tf.ExpandDims”(%5, %cst) {device = “”} : (tensor<3x3xi32>, tensor) → tensor<1x3x3xi32>
%7 = “tf.ReadVariableOp”(%arg1) {device = “”} : (tensor<!tf_type.resource>) → tensor<3x3xi1>
“tf.NoOp”() {_acd_function_control_output = true, device = “”} : () → ()
%8 = “tf.Select”(%7, %1, %3) {device = “”} : (tensor<3x3xi1>, tensor<3x3xi32>, tensor<3x3xi32>) → tensor<3x3xi32>
%9 = “tf.ExpandDims”(%8, %cst_0) {device = “”} : (tensor<3x3xi32>, tensor) → tensor<1x3x3xi32>
%10:3 = “tf.DenseToDenseSetOperation”(%6, %9) {T = i32, device = “”, set_operation = “intersection”, validate_indices = true} : (tensor<1x3x3xi32>, tensor<1x3x3xi32>) → (tensor<?x3xi64>, tensor<?xi32>, tensor<3xi64>)
%11 = “tf.Identity”(%10#0) {device = “”} : (tensor<?x3xi64>) → tensor<?x3xi64>
%12 = “tf.Identity”(%10#1) {device = “”} : (tensor<?xi32>) → tensor<?xi32>
%13 = “tf.Identity”(%10#2) {device = “”} : (tensor<3xi64>) → tensor<3xi64>
return %11, %12, %13 : tensor<?x3xi64>, tensor<?xi32>, tensor<3xi64>
}
Error is:
sample.mlir:5:3: error: The following operations cannot be legalized: tf.DenseToDenseSetOperation (count: 1); tf.NoOp (count: 1); tf.ReadVariableOp (count: 6). These legalization failure(s) may be due to missing TF to HLO lowerings and/or unsupported attributes, etc.
builtin.func private @__inference___call___340(%arg0: tensor {tf._user_specified_name = "x"}, %arg1: tensor<!tf_type.resource>, %arg2: tensor<!tf_type.resource>, %arg3: tensor<!tf_type.resource>) -> (tensor<?x3xi64>, tensor<?xi32>, tensor<3xi64>) attributes {tf._construction_context = "kEagerRuntime", tf._input_shapes = [#tf_type.shape<>, #tf_type.shape<>, #tf_type.shape<>, #tf_type.shape<>], tf.signature.is_stateful} {
^
sample.mlir:5:3: error: Emitting more detail about one op that failed to legalize…
builtin.func private @__inference___call___340(%arg0: tensor {tf._user_specified_name = "x"}, %arg1: tensor<!tf_type.resource>, %arg2: tensor<!tf_type.resource>, %arg3: tensor<!tf_type.resource>) -> (tensor<?x3xi64>, tensor<?xi32>, tensor<3xi64>) attributes {tf._construction_context = "kEagerRuntime", tf._input_shapes = [#tf_type.shape<>, #tf_type.shape<>, #tf_type.shape<>, #tf_type.shape<>], tf.signature.is_stateful} {
^
sample.mlir:20:61: error: 'tf.DenseToDenseSetOperation' op is not legalizable
%outputs_23:3, %control_24 = tf_executor.island wraps "tf.DenseToDenseSetOperation"(%outputs_14, %outputs_21) {T = i32, device = "", set_operation = "intersection", validate_indices = true} : (tensor<1x3x3xi32>, tensor<1x3x3xi32>) -> (tensor<?x3xi64>, tensor<?xi32>, tensor<3xi64>) | tf.CropAndResize
tf.StridedSlice
tf.Unique
tf.Where
tf.SparseToDense
tf.NonMaxSuppressionV4
tf.TensorListFromTensor
tf.TensorListGetItem
tf.DenseToDenseSetOperation
tf.TensorListReserve
tf.TensorListSetItem
tf.TensorListStack
tf.TopKV2
For the above ops, I am also getting the same error when lowering them to HLO | 0
tensorflow | Special Interest Groups | Multi Learning rate in keras | https://discuss.tensorflow.org/t/multi-learning-rate-in-keras/4198 | Hello
In Keras, model.compile accepts only a single learning rate, but I need multiple learning rates for my model. For example, my model includes a backbone with a learning rate of 10^-4 and a transformer with a learning rate of 10^-3. How can I set these two learning rates inside model.compile with the Adam optimizer (or any optimizer)? | It seems that TensorFlow Addons has an optimizer with this capability: tfa.optimizers.MultiOptimizer | TensorFlow Addons 10 | 0
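A minimal sketch of how this could be wired up with tfa.optimizers.MultiOptimizer; the backbone/head split, learning rates and loss below are illustrative assumptions rather than part of the original thread:

import tensorflow as tf
import tensorflow_addons as tfa

# Hypothetical two-part model: a pretrained backbone plus a small head.
backbone = tf.keras.applications.ResNet50(include_top=False, pooling="avg")
head = tf.keras.layers.Dense(10, activation="softmax")
model = tf.keras.Sequential([backbone, head])

# One optimizer (and therefore one learning rate) per group of layers.
optimizers_and_layers = [
    (tf.keras.optimizers.Adam(learning_rate=1e-4), backbone),
    (tf.keras.optimizers.Adam(learning_rate=1e-3), head),
]
optimizer = tfa.optimizers.MultiOptimizer(optimizers_and_layers)

model.compile(optimizer=optimizer,
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])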
tensorflow | Special Interest Groups | LLVM updates and bazel cache | https://discuss.tensorflow.org/t/llvm-updates-and-bazel-cache/2060 | As we are updating LLVM twice a day 5, I’ve tried to query bazel with:
bazel aquery "rdeps(//tensorflow:*,//third_party/llvm:*)" --include_aspects 2>/dev/null | grep Compiling | wc -l
I am not a bazel ninja so probably the query could be wrong or improved but I currently see 9938 files on master (CPU only).
What is the effect of this twice-daily rolling update on the average community contributor’s compile workflow/environment and their bazel cache? | I’d assume that Bazel is smart enough to only recompile the pieces that actually changed in there, so the impact will vary depending on the actual updates we’re picking up.
As a workflow, when I used to develop LLVM on a laptop, I would have a cron script run a git pull and build at 7am before I showed up at the office, so that when I arrived I had the most recent copy of the code with the build cache up to date. | 0
tensorflow | Special Interest Groups | ValueError: Feeds must be tensors | https://discuss.tensorflow.org/t/valueerror-feeds-must-be-tensors/3915 | Hi,
We’re calling SavedModelBundle.exporter("/data/").withSession(sess).withSignature(mySig).export to save a model. It seems to save out fine (according to the cli tool). But when we’re loading it from the Python side using tf.saved_model.load("/data/"), we’re getting the following error: ValueError: Feeds must be tensors.
Any ideas?
more details…
Signature (according to saved_model_cli)
The given SavedModel SignatureDef contains the following input(s):
inputs['_input_0'] tensor_info:
dtype: DT_FLOAT
shape: (10)
name: Placeholder_1:0
The given SavedModel SignatureDef contains the following output(s):
outputs['_out_0'] tensor_info:
dtype: DT_FLOAT
shape: (10)
name: Relu_1:0
Method name is:
And full python stacktrace
wrap_function.py.prune(/apps/python3/lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py:262)
load_v1_in_v2.py._extract_saver_restore(/apps/python3/lib/python3.7/site-packages/tensorflow/python/saved_model/load_v1_in_v2.py:105)
load_v1_in_v2.py.load(/apps/python3/lib/python3.7/site-packages/tensorflow/python/saved_model/load_v1_in_v2.py:211)
load_v1_in_v2.py.load(/apps/python3/lib/python3.7/site-packages/tensorflow/python/saved_model/load_v1_in_v2.py:263)
load.py.load_internal(/apps/python3/lib/python3.7/site-packages/tensorflow/python/saved_model/load.py:613)
load.py.load(/apps/python3/lib/python3.7/site-packages/tensorflow/python/saved_model/load.py:578) | What version of TF-Java exported the model, and what version of TF Python is loading it in?
It’s a little weird it’s coming in as a TF v1 model, we might need to set a bit somewhere to have it load using the appropriate path. | 0 |
tensorflow | Special Interest Groups | Training CNN with Keras and 32FC1 matrix dataset | https://discuss.tensorflow.org/t/training-cnn-with-keras-and-32fc1-matrix-dataset/3918 | Good evening, my problem is that I want to train a Keras CNN that can tell me whether there is a sewer in an image or not.
I have 2 dataset files (one with positive images containing a sewer and another without), each holding an 8000x60 matrix of decoded depth images; each image is 80x60, so each dataset has about 100 images.
My problem is that I don’t know how to encode that input to train the CNN. I have always worked with PNG datasets, never with this type of data. If you have questions just ask.
Thanks in advance. | If your images are already decoded into a matrix, you can try the tf.data.Dataset.from_tensor_slices() method (tf.data.Dataset | TensorFlow Core v2.6.0 2) to create inputs for your model. You pass a tuple into this method: the first element is the decoded image matrix (a numpy array or other array-like type), the second element is an array of integer labels (0 and 1 in your case). | 0
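A rough sketch of that approach for this case, assuming each 8000x60 file stacks one hundred 80x60 depth images row-wise; the file names, loader and batch size are placeholders:

import numpy as np
import tensorflow as tf

# Placeholder loading step: however the files are read, end up with (8000, 60) arrays.
positives = np.loadtxt("sewer_dataset.txt")      # assumed path/format
negatives = np.loadtxt("no_sewer_dataset.txt")   # assumed path/format

# 8000 rows / 80 rows per image = 100 images of shape (80, 60, 1).
pos_images = positives.reshape(-1, 80, 60, 1).astype("float32")
neg_images = negatives.reshape(-1, 80, 60, 1).astype("float32")

images = np.concatenate([pos_images, neg_images])
labels = np.concatenate([np.ones(len(pos_images)), np.zeros(len(neg_images))])

dataset = (tf.data.Dataset.from_tensor_slices((images, labels))
           .shuffle(len(images))
           .batch(16)
           .prefetch(tf.data.AUTOTUNE))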
tensorflow | Special Interest Groups | Keras: slow startup for large models due to metric-initialization | https://discuss.tensorflow.org/t/keras-slow-startup-for-large-models-due-to-metric-initialization/3785 | When creating large models (a couple of thousand nodes) in graph mode, initializing the metrics can take a very long time. The following toy example takes ~30 seconds on my machine (TF 2.6) to start training:
import tensorflow as tf
import numpy as np
from tensorflow.python.keras import backend as K
with K.get_session() as sess:
    print("DEF")
    model = tf.keras.Sequential(
        [tf.keras.layers.Dense(1) for _ in range(500)]
    )
    print("METRICS")
    metrics = [tf.keras.metrics.Accuracy(str(i)) for i in range(100)]
    print("COMPILE")
    model.compile(loss="mse", metrics=metrics, run_eagerly=False)
    x, y = np.zeros((2, 1000), dtype=np.float32)
    print("FIT")
    model.fit(x=x, y=y)
Most of the startup time is spent in this loop 6 initializing the metrics.
In the actual model I am currently investigating, startup takes ~20 minutes since it’s quite a large model with data loading included in the graph and ~400 metrics. The latter is due to having 4 per-class metrics for ~100 classes. This time quadruples when adding another GPU with MirroredStrategy. What could I do to improve startup time in this case? So far, I’ve tried:
running in eager mode, which works fine on a single GPU, but scaling out is going to be more challenging
Creating one metric-class for all classes so that I only need to register 4 metrics. But it doesn’t seem to be possible for metrics to return arrays. | Turns out it’s only a problem with Tensorflow 1.x graph mode. Removing the line with K.get_session() as sess: fixes it. | 0 |
tensorflow | Special Interest Groups | Are there any types of Neural Network which is not possible built by Keras platform? and why? | https://discuss.tensorflow.org/t/are-there-any-types-of-neural-network-which-is-not-possible-built-by-keras-platform-and-why/3390 | By Considering these Chart of Complete Neural Networks Types:
ImgBB
Network-Types 8
Image Network-Types hosted in ImgBB
And also here is a Commonly used Neural Network types in Keras I found, which I take note also for its most Practice Cases (You can correct me if I am wrong):
1 DNN (Deep Neural Network) → Most Practice Cases: Time series + predicting a plain response y variable (flexible on numeric, integer, categorical)
2 CNN (Convolutional Neural Network) → Most Practice Cases: Time Series + Image Classification/Computer Vision
3 LSTM (Long Short-Term Memory) → Most Practice Cases: Time Series + NLP → Recommended for a long set of training data (more complex structure than GRU: LSTM has three gates, namely input, output and forget gates)
4 RNN (Recurrent Neural Network) → Most Practice Cases: Time Series + NLP → Recommended for sequence data (faster training, computationally less expensive)
5 GRU (Gated Recurrent Unit) → Most Practice Cases: Time Series + NLP → Recommended for a shorter set of training data (less complex structure than LSTM: GRU has two gates, reset and update gates)
6 Auto Encoders (AE) → Most Practice Cases: Image Classification/Computer Vision (noising & denoising); Auto Encoders are basically a stack of Conv2D, Pooling2D, and Conv2D-Transpose layers
Finally, my questions:
Are there any types of neural network in the above chart whose structure is currently not possible to build with Keras components?
If there are any networks that aren’t possible, could you point out which types, and why?
Are there any more commonly used neural networks aside from what I noted above? I’d appreciate any improvements added to this list.
Appreciate any effort put into this question, Thank You! | Hi Jovan,
I’d associate GRU, RNN and LSTM with sequences instead of time series. It’s broader and easier to remember. That’s why NLP is in the same group (sequence of characters/words)
As far as I know, Keras can build all the Neural Networks presented and if the layer you need is not available directly, you can customize or create your own with the same API.
The only detail on your image that I could spot is the Support Vector Machine (SVM). I don’t know if that can be considered a Neural Network and I don’t know if that can be built using Keras. | 0 |
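To illustrate the "create your own layer with the same API" point above, here is a minimal sketch of a custom Keras layer; the layer itself (a dense transform with a fixed output scale) is made up purely for the example:

import tensorflow as tf

class ScaledDense(tf.keras.layers.Layer):
    """Toy custom layer: a dense transform multiplied by a fixed scale."""
    def __init__(self, units, scale=1.0, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.scale = scale

    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer="glorot_uniform", trainable=True)
        self.b = self.add_weight(shape=(self.units,),
                                 initializer="zeros", trainable=True)

    def call(self, inputs):
        return self.scale * (tf.matmul(inputs, self.w) + self.b)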
tensorflow | Special Interest Groups | Vanilla Javascript TensorflowJs | https://discuss.tensorflow.org/t/vanilla-javascript-tensorflowjs/3535 | Hi folks:
I haven’t done any TFJS since version 2.3.0; it looks like we are now at 3.8.0.
Just want to introduce myself. My name is Jeremy Ellis, social media @rocksetta. I have spent almost 5 years simplifying TensorflowJs for High School students.
https://www.rocksetta.com/tensorflowjs/ 11
My favorite basic layers example is at
https://www.rocksetta.com/tensorflowjs/beginner-keras/20keras-xOr.html 7
I am now getting back to TFJS.
Anyone want to comment on possible changes, or if they are interested in what I do? | @Jason might be interested | 0 |
tensorflow | Special Interest Groups | Extend the collaboration on the CI | https://discuss.tensorflow.org/t/extend-the-collaboration-on-the-ci/1917 | I want to try to explore a little bit the collaboration over the CI/infra.
After evaluating the recent positive impact of having a more transparent and reproducible linting env with a public GitHub action (Actions · tensorflow/tensorflow · GitHub 2) in the repository, why don’t we progressively expand this pilot a little with self-hosted GitHub runners?
I’ve already talked about this in PR #48421 6 and I’ve mentioned different solutions. As you can see in comments some of the mentioned repo are maintained by Google Cloud team members.
As the dev-infra team is always quite busy, having GitHub Action workflows hosted in the main repository with self-hosted GitHub runners could open up collaboration and partial co-maintainership of the CI infra, minimizing, as much as possible, the delta between the internal testing perimeter/env and the public testing space and increasing transparency and community collaboration.
E.g., as you can see in the Numpy 1.20 issue/PRs 4, people often don’t understand the full picture of our CI (internal + external).
Minimizing the delta with an expanded public space could help us build up a more conscious community, which is one of the prerequisites for attracting more contributions in this specific domain as well.
/cc @thea @Joana @yarri-oss | We have an internal draft proposal on this but any kind of public feedback is welcome.
Thanks | 0 |
tensorflow | Special Interest Groups | Getting the most out of TensorBoard | https://discuss.tensorflow.org/t/getting-the-most-out-of-tensorboard/3180 | Hi, I’m currently using TensorBoard to analyse experiments involving the training and evaluation of time series forecasting models. I currently create plots of various metrics in TensorBoard (MSE, MAE etc) but am interested in improving the analysis I do through TensorBoard.
Does anyone have any recommendations of:
Useful plugins to add
Extra logging to add
Ways of customising the TensorBoard plots
Anything else that has helped you get the most out of TensorBoard
Thanks in advance! | Hi, have you visited TensorBoard’s official Get started page Get started with TensorBoard | TensorFlow 8 ? On the left-hand side - and you may have to expand the browser window to see it - there is a list of different guides. Let us know what you think. In addition, there’s a long tutorial video made by a YouTuber about TensorBoard: TensorFlow Tutorial 17 - Complete TensorBoard Guide - YouTube 3 you may want to check out. | 0 |
tensorflow | Special Interest Groups | Tensorboard fails to plot model weights for all epochs during training | https://discuss.tensorflow.org/t/tensorboard-fails-to-plot-model-weights-for-all-epochs-during-training/3098 | I am trying to plot the progression of my model’s weights during training using add_scalar, however, the tensorboard plot only shows the weights for any one epoch.
What I mean by this is, when I load TensorBoard, I only see “epoch 0” in the scalars section even if I have run my model for 10 epochs. However, I don’t have this issue while plotting histograms in the same code.
My code is as follows:
for epoch in total_epochs:
    # train model
    # calculate error
    optimizer.step()
    # calculate average loss
    for name, weight in model.named_parameters():
        SummaryWriter.add_histogram(name, weight, epoch)
        SummaryWriter.add_scalar(str(name), weight, epoch)
Here 1 is an example of what I mean. I had run the model for 10 epochs, the graph only shows epoch 0 and 1. However, the histogram (not pictured) contains the progression of all 10 epochs. | For custom training loops and TensorBoard, have you tried the method described in Using TensorBoard with other methods 2 (tf.summary)? | 0 |
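For reference, a minimal tf.summary equivalent of that logging loop (the snippet in the question uses the PyTorch SummaryWriter API); model, num_epochs and the log directory are assumed, and each weight tensor is reduced to its mean because tf.summary.scalar only accepts scalar values:

import tensorflow as tf

writer = tf.summary.create_file_writer("logs/weights")  # assumed log dir

for epoch in range(num_epochs):
    # ... run the training step for this epoch ...
    with writer.as_default():
        for var in model.trainable_variables:
            tf.summary.histogram(var.name, var, step=epoch)
            tf.summary.scalar(var.name + "/mean", tf.reduce_mean(var), step=epoch)
    writer.flush()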
tensorflow | Special Interest Groups | Keras seperate pip package | https://discuss.tensorflow.org/t/keras-seperate-pip-package/2912 | Hi everyone,
I saw the following tweet last week about splitting Keras into a separate pip package.
Does this mean that, on my local virtual environment I need to do ‘pip install --upgrade keras’ in order to get the latest tf.keras?
Up until now, keras is updated as part of my:
pip install --upgrade tensorflow
or
pip install --upgrade tf-nightly
Thanks!
Fadi | It is still ok to use:
pip install --upgrade tf-nightly
tensorflow | Special Interest Groups | SIG Build June Meeting: June 8th @ 2pm (delayed a week) | https://discuss.tensorflow.org/t/sig-build-june-meeting-june-8th-2pm-delayed-a-week/1322 | SIG Build’s next meeting will be on Tuesday, June 8th, at 2pm Pacific time. Find the meeting details here 5. Please feel free to suggest your own agenda items.
Normally the meeting would be on June 1st, but that’s right after Memorial Day, so @perfinion and I decided to move it a week later. | The meeting is happening tomorrow, on Tuesday the 8th! Find the meeting details, how to join, and agenda here 5. | 0 |
tensorflow | Special Interest Groups | Module ‘tensorflow.compat.v2.__internal__’ has no attribute ‘tf2’ | https://discuss.tensorflow.org/t/module-tensorflow-compat-v2-internal-has-no-attribute-tf2/1673 | Hi, Rizal here.
I am following this topic: GitHub - AntonMu/TrainYourOwnYOLO: Train a state-of-the-art yolov3 object detector from scratch! 25
but when I do the training, I get the following error:
2021-06-03 17:39:14.545998: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
Using TensorFlow backend.
Traceback (most recent call last):
File "Train_YOLO.py", line 28, in <module>
import keras.backend as K
File "/home/rizal/.local/lib/python3.8/site-packages/keras/__init__.py", line 12, in <module>
from . import initializers
File "/home/rizal/.local/lib/python3.8/site-packages/keras/initializers/__init__.py", line 126, in <module>
populate_deserializable_objects()
File "/home/rizal/.local/lib/python3.8/site-packages/keras/initializers/__init__.py", line 51, in populate_deserializable_objects
LOCAL.GENERATED_WITH_V2 = tf.__internal__.tf2.enabled()
AttributeError: module 'tensorflow.compat.v2.__internal__' has no attribute 'tf2'
Any hint for me to move forward? Many thanks in advance, guys. | @Ahmad_Rizal , This 771 stackoverflow answer may help you. | 0
Cheers. | @Ahmad_Rizal , This 771 stackoverflow answer may help you. | 0 |
tensorflow | Special Interest Groups | ML Reproducibility Challenge | https://discuss.tensorflow.org/t/ml-reproducibility-challenge/1737 | Hey all,
Just wondering if anyone has any interest in joining up for the reproducibility challenge this year?
The primary goal of this event is to encourage the publishing and sharing of scientific results that are reliable and reproducible. In support of this, the objective of this challenge is to investigate reproducibility of papers accepted for publication at top conferences by inviting members of the community at large to select a paper, and verify the empirical results and claims in the paper by reproducing the computational experiments, either via a new implementation or using code/data or other information provided by the authors.
The submission deadline is July 15th, 2021, so we don’t have a lot of time but with some effort, we can pull it off. Could be fun to tackle a paper that has shown promise and would be a useful addition to Tensorflow.
Comment below if you think you’ll have a bit of time to spare and what paper you think could be worth reproducing | This is a nice idea!
Unfortunately I can’t participate at the moment
I’d go even further and publish the model on TFHub when ready! | 0 |
tensorflow | Special Interest Groups | is it possible to use tensor flow lite with infineon tricore family architecture? | https://discuss.tensorflow.org/t/is-it-possible-to-use-tensor-flow-lite-with-infineon-tricore-family-architecture/1678 | I am currently using the Infineon TriCore controller. I would like to include a TF Lite model in this controller. Is it possible? How do I compile the C/C++ code specifically for the Infineon TriCore compiler? | /cc @AStevens_Infineon | 0
tensorflow | Special Interest Groups | Code examples for using TensorFlow for Java? | https://discuss.tensorflow.org/t/code-examples-for-using-tensorflow-for-java/1574 | Hi,
I wasn’t aware of the availability of TensorFlow for Java, which github repository is available here:
GitHub - tensorflow/java: Java bindings for TensorFlow 3
This is great news. Is there any good place to find code examples for training, predictions, model building and the likes?
Thank you,
OD | We have a small set of example models here: GitHub - tensorflow/java-models: Models in Java 12
TensorFlow Java doesn’t have access to all of the gradients available in TensorFlow as many are implemented in python, so some models can’t be specified entirely in Java yet, but we’re working to build out the set. | 0 |
tensorflow | Special Interest Groups | Geometric Deep-learning | https://discuss.tensorflow.org/t/geometric-deep-learning/1261 | Can we have SIG for GDL/3D deep learning? It would be also a good idea to create a tag and resources thread for GDL. | There was a plan for TF Graphics to start a SIG
github.com/tensorflow/graphics
Add Roadmap.md 1
opened
Nov 14, 2019
closed
May 15, 2020
bhack
Is there a roadmap for this TF subproject?
It seems that it is very low on resources… so it would be interesting if you could share TF’s plans/expectations for it.
But I don’t know what the current status is.
We also have TF 3D, but on GitHub it is still under the Google Research org.
Google AI Blog
3D Scene Understanding with TensorFlow 3D 8
Posted by Alireza Fathi, Research Scientist and Rui Huang, AI Resident, Google Research The growing ubiquity of 3D sensors (e.g., Lidar ,...
/cc @thea | 0 |
tensorflow | Special Interest Groups | Support for Block Floating Point | https://discuss.tensorflow.org/t/support-for-block-floating-point/485 | Currently micro prefers what in DSP would be referred to as fixed-point representations of all its integer tensors. This is very acceptable for situations where the expected dynamic range is low, i.e. NNs with batch norm, etc. I’m interested in using micro for general audio/other DSP where I’m used to using block floating point.
What are my options here? Any plans to support it/is it supported?
My ultimate goal is to move audio frontend DSP code of an audio pipeline into tf so as to consolidate our DSP(block floating point) and ML(fixed/floating point) frameworks.
Thanks | I just saw ’ * Dynamic quantized models support’ on TensorFlow Lite Roadmap 2
Is this what I am asking for? | 0 |
tensorflow | Special Interest Groups | Flatbuffers for CustomOptions | https://discuss.tensorflow.org/t/flatbuffers-for-customoptions/486 | Is anyone using a custom/secondary flatbuffer schema for CustomOptions to ops instead of using FlexBuffers? If so then how has it gone? Thanks | Is anyone using a custom/secondary flatbuffer schema for CustomOptions to ops instead of using FlexBuffers? If so then how has it gone? Thanks
Looping in @Advait_Jain (TF Lite Micro team) | 0 |
tensorflow | Special Interest Groups | Hello Everybody - Is this the place for all future discussions? | https://discuss.tensorflow.org/t/hello-everybody-is-this-the-place-for-all-future-discussions/382 | Are all meetings happening here in the future?
Yours @ulf1 | Yes, we still have monthly SIG-Addons meetings. | 0 |
tensorflow | Research & Models | About the Research and Models category | https://discuss.tensorflow.org/t/about-the-research-and-models-category/294 | Discuss state-of-the-art ML models both in and out of the TensorFlow model garden. | Two models I would immediately like to see in Model Garden:
WideResNets (very helpful for benchmarking purposes)
Any colorization model
I had worked on a Colorization model 9 in the past months with a collaborator. It gave decent results but we could not get the distributed training part right. We are up for collaborations! Our pipeline is pretty fast so far utilizing the best practices from tf.data and is fully compatible with TPUs. | 0 |
tensorflow | Research & Models | Keras-cv, keras-nlp, keras applications, models garden | https://discuss.tensorflow.org/t/keras-cv-keras-nlp-keras-applications-models-garden/7276 | Can someone give a general overview of the model requests we are collecting on Keras-cv?
What kind of relationship will we have between this, the Model Garden, the tf.keras.applications namespace and, marginally, TF Hub?
It would be really nice to disambiguate this topic a little to avoid duplication, fragmentation and confusion about the contribution path in the TF ecosystem and to optimize external contributors’ resources.
We already have some historically pinned tickets about Model Garden community requests and help-wanted requests at:
github.com/tensorflow/models
📄 Community requests: New paper implementations 2
opened
Jun 6, 2020
jaeyounkim
type:support
models:official
This issue contains **all open requests for paper implementations requested by the community**.
We cannot guarantee that we can fulfill community requests for specific paper implementations.
If you'd like to contribute, **please add a comment to the relevant GitHub issue to express your interest in providing your paper implementation**.
Awesome external contributors will be nominated for [Google Open Source Peer Bonus](https://opensource.google/docs/growing/peer-bonus/).
Please also see our [contribution guidelines](https://github.com/tensorflow/models/wiki/Research-paper-code-contribution) and [paper selection criteria](https://github.com/tensorflow/models/wiki/Research-paper-code-contribution#model-selection).
## Computer Vision
| Paper | Conference | GitHub issue | Note |
--------|------------|--------------|------|
| ResNeXt: [Aggregated Residual Transformations for Deep Neural Networks](https://arxiv.org/abs/1611.05431) | CVPR 2017 | #6752 | |
| DenseNet: [Densely Connected Convolutional Networks](https://arxiv.org/abs/1608.06993) | CVPR 2017 | #8278 | |
| [Density estimation using Real NVP](https://arxiv.org/abs/1605.08803) | ICLR 2017 | #7848 | Need to migrate [TF 1 code](https://github.com/tensorflow/models/tree/master/research/real_nvp) to TF 2 |
| [Spatiotemporal Contrastive Video Representation Learning](https://arxiv.org/abs/2008.03800) | CVPR 2021 | #9993 | In progress (Internally) |
github.com/tensorflow/models
[Help wanted] Research paper implementations (Project Tracker) 1
opened
Jun 21, 2020
jaeyounkim
help wanted:paper implementation
# Help wanted: Research paper code and models
This issue contains a list of the research papers we want to implement in TensorFlow 2 with help from the community.
If you'd like to contribute, please **add a comment to the relevant GitHub issue** or **create a new issue** to express your interest in providing your paper implementation.
Awesome external contributors will be nominated for [Google Open Source Peer Bonus](https://opensource.google/docs/growing/peer-bonus/).
Please also see our [contribution guidelines](https://github.com/tensorflow/models/wiki/Research-paper-code-contribution) and [paper selection criteria](https://github.com/tensorflow/models/wiki/Research-paper-code-contribution#model-selection).
## Computer Vision
| Paper | GitHub issue | Status |
|-------|--------------|--------|
| FCOS: Fully Convolutional One-Stage Object Detection | #10275 | In progress |
| DarkPose: [Distribution Aware Coordinate Representation for Human Pose Estimation](https://arxiv.org/abs/1910.06278) | #8713 | In progress |
| MoCo: [Momentum Contrast for Unsupervised Visual Representation Learning](https://arxiv.org/abs/1911.05722) | #8708 | Need contribution |
| YOLOv4 [Optimal Speed and Accuracy of Object Detection](https://arxiv.org/abs/2004.10934) | N/A | [In progress](https://github.com/tensorflow/models/tree/master/official/vision/beta/projects/yolo) |
## Natural Language Processing
| Paper | GitHub issue | Status |
|-------|--------------|--------|
| RoBERTa: [A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) | #8704 | Need contribution |
| RoFormer: Enhanced Transformer with Rotary Position Embedding | N/A | In progress |
| Longformer: The Long-Document Transformer | N/A | In progress |
| BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension | N/A | In progress |
### Benchmark datasets
| Dataset | GitHub issue(s) | Status |
|----------|------------------|--------|
## Speech Recognition
| Paper | Conference | GitHub issue | Status |
|-------|------------|--------------|--------|
| Deep Speech 2: [End-to-End Speech Recognition in English and Mandarin](https://arxiv.org/abs/1512.02595) | ICML 2016 | #8702 | In progress |
Useful for the context
See also other community members comments like @sebastian-sz :
github.com/keras-team/keras-cv
ResNet-RS block/layer 1
opened
Jan 12, 2022
LukeWood
Or our thread at:
github.com/keras-team/keras
Updating the ResNet-* weights
opened
Dec 12, 2021
sayakpaul
type:feature
Contributions welcome
If you open a GitHub issue, here is our policy:
It must be a bug, a feature request, or a significant problem with the documentation (for small docs fixes please send a PR instead).
The form below must be filled out.
**Here's why we have that policy:**.
Keras developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.
**System information**.
TensorFlow version (you are using): 2.7.0
Are you willing to contribute it (Yes/No) : Currently no
**Describe the feature and the current behavior/state**.
ResNets are arguably one of the most influential architectures in deep learning. Today, they are used in different capacities. For example, sometimes they act as strong baselines, sometimes they are used as backbones. Since their inception, their performance on ImageNet-1k, in particular, has improved quite a lot. I think it's time the ResNets under `tf.keras.applications` were updated to facilitate these changes.
**Will this change the current api? How?**
ResNet-RS (https://arxiv.org/abs/2103.07579) introduces slight architectural changes to the vanilla ResNet architecture (https://arxiv.org/abs/1512.03385). So, yes, there will be changes to the current implementation of ResNets (among other things) we have under `tf.keras.applications`. We could call it `tf.keras.applications.ResNet50RS`, for example. Following summarizes the performance benefits that ResNet-RS introduces to the final ImageNet-1k performance (measured on the `val` set):

<sub><a href=https://github.com/tensorflow/tpu/tree/master/models/official/resnet/resnet_rs#imagenet-checkpoints>Source</a></sub>
**Who will benefit from this feature?**
Keras users that use ResNets from `tf.keras.applications` for building downstream applications.
**[Contributing](https://github.com/keras-team/keras/blob/master/CONTRIBUTING.md)**
- Do you want to contribute a PR? (yes/no): Currently no
- If yes, please read [this page](https://github.com/keras-team/keras/blob/master/CONTRIBUTING.md) for instructions
- Briefly describe your candidate solution(if contributing):
/cc @thea @yarri-oss @lgusm @Luke_Wood @Jaehong_Kim @Scott_Zhu | Thanks Bhack for the question.
We will have a readme/contribution guide available on the keras-cv GitHub project to provide more details about the differences between keras-cv/model garden/keras.applications/tf-hub. | 0
tensorflow | Research & Models | Holistic nn modeling? | https://discuss.tensorflow.org/t/holistic-nn-modeling/7263 | Can TensorFlow/Keras be used for synchronous/holistic modeling of NN architecture/structure, types of neurons, weights, etc.? Are there examples to show that? | Do you have any non-Keras example/reference? Just to understand the topic a little bit. | 0
tensorflow | Research & Models | [Research ] MLP-Mixer: An all-MLP Architecture for Vision | https://discuss.tensorflow.org/t/research-mlp-mixer-an-all-mlp-architecture-for-vision/1849 | MLP-Mixer: An all-MLP Architecture for Vision (Tolstikhin et al., 2021) 16 (Google)
Convolutional Neural Networks (CNNs) are the go-to model for computer vision.
Recently, attention-based networks, such as the Vision Transformer, have also
become popular. In this paper we show that while convolutions and attention are
both sufficient for good performance, neither of them are necessary. We present
MLP-Mixer, an architecture based exclusively on multi-layer perceptrons (MLPs).
MLP-Mixer contains two types of layers: one with MLPs applied independently to
image patches (i.e. “mixing” the per-location features), and one with MLPs applied
across patches (i.e. “mixing” spatial information). When trained on large datasets,
or with modern regularization schemes, MLP-Mixer attains competitive scores on
image classification benchmarks, with pre-training and inference cost comparable
to state-of-the-art models. We hope that these results spark further research beyond the realms of well established CNNs and Transformers.
… our architecture can be seen as a very special CNN, which uses 1×1 convolutions
for channel mixing, and single-channel depth-wise convolutions of a full receptive field and parameter sharing for token mixing. However, the converse is not true as typical CNNs are not special cases of Mixer. Furthermore, a convolution is more complex than the plain matrix multiplication in MLPs as it requires an additional costly reduction to matrix multiplication and/or specialized implementation.
Despite its simplicity, Mixer attains competitive results. When pre-trained on large datasets (i.e., ∼100M images), it reaches near state-of-the-art performance, previously claimed by CNNs and Transformers, in terms of the accuracy/cost trade-off. This includes 87.94% top-1 validation accuracy on ILSVRC2012 “ImageNet” [13]. When pre-trained on data of more modest scale (i.e., ∼1– 10M images), coupled with modern regularization techniques [48, 53], Mixer also achieves strong performance. However, similar to ViT, it falls slightly short of specialized CNN architectures.
We describe a very simple architecture for vision. Our experiments demonstrate that it is as good as existing state-of-the-art methods in terms of the trade-off between accuracy and computational resources required for training and inference. We believe these results open many questions. On the practical side, it may be useful to study the features learned by the model and identify the main differences (if any) from those learned by CNNs and Transformers. On the theoretical side, we would like to understand the inductive biases hidden in these various features and eventually their role in generalization. Most of all, we hope that our results spark further research, beyond the realms of established models based on convolutions and self-attention. It would be particularly interesting to see whether such a design works in NLP or other domains.
@Sayak_Paul’s implementation:
MLP-Mixer with CIFAR-10 Show and Tell
Here’s my implementation of MLP-Mixer, the all MLP architecture for computer vision without any use of convs and self-attention:
Here’s what is included:
Distributed training with mixed-precision.
Visualization of the token-mixing MLP weights.
A TensorBoard callback to keep track of the learned linear projections of the image patches.
Results are quite competitive with room for improvements for interpretability.
github.com
sayakpaul/MLP-Mixer-CIFAR10 6
Implements MLP-Mixer (https://arxiv.org/abs/2105.01601) with the CIFAR-10 dataset. | I think that we need to better promote our TF models in paperswithcode /cc @Joana.
In this specific case you can see the official Google reference implementation in JAX and all the other alternative implementations:
paperswithcode.com
Papers with Code - MLP-Mixer: An all-MLP Architecture for Vision 15
#11 best model for Image Classification on ImageNet ReaL (Accuracy metric) | 0 |
tensorflow | Research & Models | ANN models and the identifiability problem | https://discuss.tensorflow.org/t/ann-models-and-the-identifiability-problem/7197 | Dear community,
In general, ANN models are not identifiable, but we attempt to address the identifiability problem by imposing some constraints. How can I impose these constraints using tf.keras.constraints?
Has someone already done this?
Regards. | If you are just looking for how to handle some constrained optimization, you could take a look at:
Google AI Blog: Setting Fairness Goals with the TensorFlow Constrained Optimization Library 1 | 0 |
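As a complement, a minimal sketch of what a tf.keras.constraints-based approach could look like; the projection used here (unit-norm first column) is only a made-up stand-in for whatever identifiability constraint is actually needed:

import tensorflow as tf

class UnitNormFirstColumn(tf.keras.constraints.Constraint):
    """Example projection applied to the kernel after every optimizer update."""
    def __call__(self, w):
        first = w[:, :1]
        first = first / (tf.norm(first) + 1e-12)
        return tf.concat([first, w[:, 1:]], axis=1)

layer = tf.keras.layers.Dense(8, kernel_constraint=UnitNormFirstColumn())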
tensorflow | Research & Models | Image Matching Model? | https://discuss.tensorflow.org/t/image-matching-model/7099 | Hi Everyone,
I am new to TensorFlow (beginner). I am researching solutions for a project I’m working on and was wondering if someone could point me in the right direction. I checked models at Model Zoo, but it was a bit overwhelming and so far I did not find what I was looking for. It seems like a rather common use case, so I thought there might already be something built with TensorFlow for this, but maybe another tool is better suited?
Basically I want to match incoming photos taken by client with a database of photos and info.
Kind regards,
-M | You can start to play with something like:
Near-duplicate image search Show and Tell
New example on building a near-duplication image search utility. Comprises of an image classifier, Bit LSH (Locality Sensitive Hashing), random projection, and TensorRT optimized inference to drastically reduce the query time. LSH and random projection have been shown with from-scratch implementations for readers to better understand them.
Check also this thread:
Tutorials/materials for fast image retrieval tasks General Discussion
Hi all.
Looking for resources/materials that build on TensorFlow/Keras (preferably) for doing scalable image similarity searches. | 0 |
tensorflow | Research & Models | Which one are the best Densnet or Resnet? | https://discuss.tensorflow.org/t/which-one-are-the-best-densnet-or-resnet/7053 | Can someone explain which one is the better architecture? I have my own idea, but I want to learn from a person who has more experience with both. Thanks | On what kind of public dataset? | 0
tensorflow | Research & Models | Low Accuracy-High Recall | https://discuss.tensorflow.org/t/low-accuracy-high-recall/6944 | Hey Community, I hope you’re doing great.
I’m working on binary classification using structured data, and my model has given a great validation recall result (around 80%) but low validation accuracy (around 40%)!
I’d like to improve the validation accuracy even if that slightly decreases the validation recall. Any suggestions, please?
Thank you so much! | Plotting the “precision vs. recall” curve will show you the performance you can reach just by changing the threshold level. Have a look at this tutorial:
TensorFlow
Classification on imbalanced data | TensorFlow Core 3 | 0 |
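A small sketch of that threshold analysis; model, x_val and y_val are assumed to exist, and scikit-learn is used only to compute the precision-recall curve:

import numpy as np
from sklearn.metrics import precision_recall_curve

probs = model.predict(x_val).ravel()          # assumed validation data
precisions, recalls, thresholds = precision_recall_curve(y_val, probs)

# Sweep the decision threshold and watch accuracy vs. recall trade off.
for t in np.linspace(0.1, 0.9, 9):
    preds = (probs >= t).astype(int)
    accuracy = (preds == y_val).mean()
    recall = preds[y_val == 1].mean()
    print(f"threshold={t:.1f}  accuracy={accuracy:.3f}  recall={recall:.3f}")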
tensorflow | Research & Models | Changing augmentation parameters in TFOD API | https://discuss.tensorflow.org/t/changing-augmentation-parameters-in-tfod-api/6935 | I am aware of the preprocessing proto that is used in the models repo:
github.com
tensorflow/models/blob/master/research/object_detection/protos/preprocessor.proto 3
syntax = "proto2";
package object_detection.protos;
// Message for defining a preprocessing operation on input data.
// See: //third_party/tensorflow_models/object_detection/core/preprocessor.py
// Next ID: 41
message PreprocessingStep {
oneof preprocessing_step {
NormalizeImage normalize_image = 1;
RandomHorizontalFlip random_horizontal_flip = 2;
RandomPixelValueScale random_pixel_value_scale = 3;
RandomImageScale random_image_scale = 4;
RandomRGBtoGray random_rgb_to_gray = 5;
RandomAdjustBrightness random_adjust_brightness = 6;
RandomAdjustContrast random_adjust_contrast = 7;
RandomAdjustHue random_adjust_hue = 8;
RandomAdjustSaturation random_adjust_saturation = 9;
RandomDistortColor random_distort_color = 10;
RandomJitterBoxes random_jitter_boxes = 11;
This file has been truncated. show original
My question is how one configures their augmentation pipeline when using the TFOD API. Consider this configuration file. It has a field for augmentation:
train_config: {
...
data_augmentation_options {
random_horizontal_flip {
}
}
If I wanted to expand the set of augmentation transformations here what should I do?
@Laurence_Moroney @khanhlvg any pointers? | Interesting.
This looks like they’ve re-encoded something like keras’s model.get_config, as a proto.
To change the data-augmentation, you edit that data_augmentation_options list.
The .proto files define what’s allowed. The definition of TrainConfig is here:
github.com
tensorflow/models/blob/aa3e639f80c2967504310b0f578f0f00063a8aff/research/object_detection/protos/train.proto#L25
// Message for configuring DetectionModel training jobs (train.py).
// Next id: 31
message TrainConfig {
// Effective batch size to use for training.
// For TPU (or sync SGD jobs), the batch size per core (or GPU) is going to be
// `batch_size` / number of cores (or `batch_size` / number of GPUs).
optional uint32 batch_size = 1 [default=32];
// Data augmentation options.
repeated PreprocessingStep data_augmentation_options = 2;
// Whether to synchronize replicas during training.
optional bool sync_replicas = 3 [default=false];
// How frequently to keep checkpoints.
optional float keep_checkpoint_every_n_hours = 4 [default=10000.0];
// Optimizer used to train the DetectionModel.
optional Optimizer optimizer = 5;
data_augmentation_options is a repeated PreprocessingStep.
A PreprocessingStep is one of the items from that list. The parameters of each and their default values are defined in preprocessor.proto
If you want to add a RandomScale step:
train_config: {
  ...
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    random_image_scale {
      min_scale_ratio: 0.9
      max_scale_ratio: 1.1
    }
  }
}
That format is “proto-text” (.PBTXT), you can check your syntax with:
from google.protobuf import text_format
# train_pb2 is the module generated from train.proto in the TFOD API.
from object_detection.protos import train_pb2

train_config = train_pb2.TrainConfig()
train_config = text_format.Parse(
    r"""
    data_augmentation_options {
      random_horizontal_flip {
      }
    }
    data_augmentation_options {
      random_image_scale {
        min_scale_ratio: 0.9
        max_scale_ratio: 1.1
      }
    }
    """, train_config)
print(train_config) | 1 |
tensorflow | Research & Models | Collective Intelligence for Deep Learning | https://discuss.tensorflow.org/t/collective-intelligence-for-deep-learning/6688 | A very interesting survey by Google Brain Tokyo:
https://arxiv.org/abs/2111.14377 15 | In case you are looking for a gentle introduction to “emergence”, I recommend this 2014 blog post by David Pines:
Medium – 13 Nov 14
Emergence: A unifying theme for 21st century science 4
By David Pines, Co-Founder in Residence, Santa Fe Institute
Reading time: 13 min read | 0 |
tensorflow | Research & Models | Prepare .wav file for yamnet.tflite model | https://discuss.tensorflow.org/t/prepare-wav-file-for-yamnet-tflite-model/4903 | Hi developers?
How do I prepare .wav and .amr files for the yamnet.tflite model in Kotlin or Java? I have checked the example project on GitHub, but it only has real-time classification using the mic; I need to know how to prepare wav and amr files for this model. Thanks | Hello sir, I hope you’re well. I found the solution for the yamnet model and wrote an article on Medium,
Medium – 19 Oct 21
Prepare .wav or .amr files for yamnet.tflite model Android 5
After a lot of research, I realized there is no easy way to prepare a local .wav or .amr file for the yamnet.tflite model, as we know…
Reading time: 1 min read
Please check it out and give me suggestions to improve it. | 1 |
tensorflow | Research & Models | Mult-GPUs training with Unified Memory | https://discuss.tensorflow.org/t/mult-gpus-training-with-unified-memory/6428 | Hi everyone,
I’m doing some research on Unified Memory management on multi-GPU systems and trying to compare the performance with explicit copies on some real ML workloads.
The benefits from Unified Memory are
Allow memory oversubscription
Improve programmability, programmers don’t need to worry about data placement and movement
I found there’s a switch per_process_gpu_memory_fraction to turn on Unified Memory in TensorFlow. For distributed training on multiple GPUs, I used the tf.distribute.MirroredStrategy API. But from the profiling results, it seems that TensorFlow just leverages Unified Memory to overcome memory oversubscription; there are still explicit memory copies between GPU and CPU, or GPU and GPU.
I’m wondering if there’s a way to train on multiple GPUs and fully exploit the power of Unified Memory, like letting the memory system manage the data, in TensorFlow.
System information
TensorFlow version (you are using): 2.4
CUDA version: 11.0
cudnn version: 8.0
Thanks | Have you tried with these envs on TF 2.7:
github.com/tensorflow/tensorflow
[PJRT] Allow GPU memory oversubscription when unified memory is enabled. 6
committed
Jul 9, 2021
tensorflower-gardener
+5
-2
With this CL, we can enable GPU memory oversubscription via env flags.
For example, `TF_FORCE_UNIFIED_MEMORY=1 XLA_PYTHON_CLIENT_MEM_FRACTION=8.0` provides 8x the GPU memory to the program. The 'extra' memory is physically located on the other GPU devices and the host's RAM, with swapping done transparently by CUDA.
PiperOrigin-RevId: 383819164
Change-Id: Id139d3184d3a62983c1e86bf95ca4078a08db4f4 | 0 |
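Based on the commit message quoted above (which targets the PJRT/XLA client rather than core TF, so whether it applies to a TF 2.4 MirroredStrategy setup is an assumption to verify), the flags would be set before the GPU is initialized, for example:

import os

# Must be set before TensorFlow/XLA touches the GPU.
os.environ["TF_FORCE_UNIFIED_MEMORY"] = "1"
os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = "8.0"  # allow ~8x oversubscription

import tensorflow as tf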
tensorflow | Research & Models | [Google Research ] Self-supervised learning - Compressed SimCLR / BYOL with Conditional Entropy Bottleneck (with TensorFlow code) | https://discuss.tensorflow.org/t/google-research-self-supervised-learning-compressed-simclr-byol-with-conditional-entropy-bottleneck-with-tensorflow-code/6427 | New self-supervised methods—Compressed SimCLR and Compressed BYOL with Conditional Entropy Bottleneck—for learning effective and robust visual representations, which enable learning visual classifiers with limited data.
arXiv: Compressive Visual Representations 5 (Lee et al., 2021) (Google Research)
Learning effective visual representations that generalize well without human supervision is a fundamental problem in order to apply Machine Learning to a wide variety of tasks. Recently, two families of self-supervised methods, contrastive learning and latent bootstrapping, exemplified by SimCLR and BYOL respectively, have made significant progress. In this work, we hypothesize that adding explicit information compression to these algorithms yields better and more robust representations. We verify this by developing SimCLR and BYOL formulations compatible with the Conditional Entropy Bottleneck (CEB) objective, allowing us to both measure and control the amount of compression in the learned representation, and observe their impact on downstream tasks. Furthermore, we explore the relationship between Lipschitz continuity and compression, showing a tractable lower bound on the Lipschitz constant of the encoders we learn. As Lipschitz continuity is closely related to robustness, this provides a new explanation for why compressed models are more robust. Our experiments confirm that adding compression to SimCLR and BYOL significantly improves linear evaluation accuracies and model robustness across a wide range of domain shifts. In particular, the compressed version of BYOL achieves 76.0% Top-1 linear evaluation accuracy on ImageNet with ResNet-50, and 78.8% with ResNet-50 2x.1
Recent contrastive approaches to self-supervised visual representation learning aim to learn representations that maximally capture the mutual information between two transformed views of an image… The primary idea of these approaches is that this mutual information corresponds to a general shared context that is invariant to various transformations of the input, and it is assumed that such invariant features will be effective for various downstream higher-level tasks. However, although existing contrastive approaches maximize mutual information between augmented views of the same input, they do not necessarily compress away the irrelevant information from these views… retaining irrelevant information often leads to less stable representations and to failures in robustness and generalization, hampering the efficacy of the learned representations. An alternative state-of-the-art self-supervised learning approach is BYOL [30], which uses a slow-moving average network to learn consistent, view-invariant representations of the inputs. However, it also does not explicitly capture relevant compression in its objective.
In this work, we modify SimCLR [12], a state-of-the-art contrastive representation method, by adding information compression using the Conditional Entropy Bottleneck (CEB) [27]. Similarly, we show how BYOL [30] representations can also be compressed using CEB. By using CEB we are able to measure and control the amount of information compression in the learned representation [26], and observe its impact on downstream tasks. We empirically demonstrate that our compressive variants of SimCLR and BYOL, which we name C-SimCLR and C-BYOL, significantly improve accuracy and robustness to domain shifts across a number of scenarios.
Code: GitHub - google-research/compressive-visual-representations: Tensorflow 2 implementations of the C-SimCLR and C-BYOL self-supervised visual representation methods from "Compressive Visual Representations" (NeurIPS 2021) 2
[Figure: C-SimCLR architecture]
[Figure: C-BYOL architecture]
Related work:
SimCLR: A Simple Framework for Contrastive Learning of Visual Representations (Chen et al., 2020) (Google Research, Brain Team)
SimCLRv2: Big Self-Supervised Models are Strong Semi-Supervised Learners (Chen et al., 2020) (Google Research, Brain Team)
GitHub with TF 2 implementation
BYOL: Bootstrap your own latent: A new approach to self-supervised Learning (Grill et al., 2020) (DeepMind/Imperial College) | Interested in learning about self-supervised methods? Here are some resources:
Google AI Blog: Advancing Self-Supervised and Semi-Supervised Learning with SimCLR
Google AI Blog: Extending Contrastive Learning to the Supervised Setting
Some code examples and other posts made by the ML community members:
Keras: Self-supervised contrastive learning with SimSiam (by @Sayak_Paul)
GitHub - sayakpaul/SimCLR-in-TensorFlow-2: (Minimally) implements SimCLR (https://arxiv.org/abs/2002.05709) in TensorFlow 2. (by @Sayak_Paul)
GitHub - ayulockin/SwAV-TF: TensorFlow implementation of "Unsupervised Learning of Visual Features by Contrasting Cluster Assignments". (by ayulockin and @Sayak_Paul)
GitHub - sayakpaul/SimSiam-TF: Minimal implementation of SimSiam (https://arxiv.org/abs/2011.10566) in TensorFlow 2. (by @Sayak_Paul)
Lilian Weng’s blog post: Self-Supervised Representation Learning (2019)
Lilian Weng’s blog post: Contrastive Representation Learning (2021) | 0 |
tensorflow | Research & Models | Distributed learning models | https://discuss.tensorflow.org/t/distributed-learning-models/2297 | It could be really nice if we could launch an experiment/initiative with the Model Garden, Federated teams and our community on training our first model garden model with our federated tools.
We have some recent interesting experiments with other frameworks.
learning@home
Train vast neural networks together
A library to train large neural networks across the internet. Imagine training one huge transformer on thousands of computers from universities, companies, and volunteers.
https://arxiv.org/abs/2106.10207
huggingface.co
neuropark/sahajBERT · Hugging Face | Hello, TensorFlow already supports this, and you can use Kubeflow, the ML overlay for Kubernetes… | 0
tensorflow | Research & Models | Where can I find sysmetric TFlite quantization model? | https://discuss.tensorflow.org/t/where-can-i-find-sysmetric-tflite-quantization-model/6413 | I would like to find some symmetric 8-bit quantized models to deploy on hardware. However, every model I found in the model zoo is asymmetric. Hosted models | TensorFlow Lite
TensorFlow
TensorFlow Lite 8-bit quantization specification
It is mentioned there that “In the past our quantization tooling used per-tensor, asymmetric, uint8 quantization. New tooling, reference kernels, and optimized kernels for 8-bit quantization will use this spec.”
Do you mean that symmetric quantized tflite models are still under development? Or can I find some symmetric quantized tflite models elsewhere? | We have a tracking ticket at:
github.com/tensorflow/tensorflow
Allow symmetric TFLite quantization (no zero point/scale only) 2
opened
Sep 10, 2020
bwang1991
stat:awaiting tensorflower
type:feature
comp:lite
As far as I know, TFLite's quantization forces activations to have both scales and zero points. However, for some networks, symmetric quantization (no zero point) does not cause a significant loss in accuracy. It is therefore sufficient to use scale only. Please add support for symmetric quantization.
tensorflow | Research & Models | From keras.engine. topology import network | https://discuss.tensorflow.org/t/from-keras-engine-topology-import-network/6326 | What are the possible ways to install topology for Keras? The following import command gives an error.
Command:
from keras.engine.topology import network
Error:
ModuleNotFoundError: No module named 'keras.engine.topology' | This might depend on the Keras version you are using. Check whether keras.engine.topology has been deprecated.
You can force install an earlier version by:
pip install 'keras==2.1.6' --force-reinstall
Where 2.1.6 is a suitable example. You may try
import tensorflow.python.keras.engine
But you will not be able to import topology from tensorflow.python.keras.engine.
Please refer to the answers in similar issue1 12, issue2 9.
Thanks! | 0 |
tensorflow | Research & Models | How can I measure the complexity of the model? | https://discuss.tensorflow.org/t/how-can-i-measure-the-complexity-of-the-model/6243 | I need to measure the time and memory complexity of a Keras model (an image captioning model built with Keras)… how can I start?
Thanks | Check:
How to find out keras model memory size? General Discussion
Hello!
I am doing a school work and I need to find out keras model memory size so I could compare different models. It is supposed to be composed of weights/parameters and model itself. It was given that model.summary() should contain all the information. From there I see that layer info and output shapes with the number of parameters.
I understand parameters as they are just a numbers and so, number of parameters * 4B most likely will give how much room parameters take. But I know that more i… | 0 |
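Building on the parameters-times-4-bytes idea from the quoted thread, a rough sketch; MobileNetV2 and the input shape are just stand-ins for whatever model is being measured:

import time
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2()

# Memory: float32 weights take ~4 bytes per parameter.
n_params = model.count_params()
print(f"{n_params} parameters ~= {n_params * 4 / 1e6:.1f} MB of float32 weights")

# Time: crude wall-clock timing of one forward pass after a warm-up call.
x = np.random.rand(1, 224, 224, 3).astype("float32")
model.predict(x)
start = time.perf_counter()
model.predict(x)
print(f"inference time: {time.perf_counter() - start:.3f} s")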
tensorflow | Research & Models | The H5 model I trained can be used normally on the computer after being converted to tflite model, but it can’t be used on Android devices. What’s the problem | https://discuss.tensorflow.org/t/the-h5-model-i-trained-can-be-used-normally-on-the-computer-after-being-converted-to-tflite-model-but-it-cant-be-used-on-android-devices-whats-the-problem/6223 | I trained an H5 model on my computer and used tf.lite.TFLiteConverter.from_keras_model to convert it into a tflite model, but it seems that it can’t be used on Android devices. The inference result obtained through interpreter.run is NaN. Where did I make a configuration error? | Hi @jun_yin
I am a little bit confused
Can you use the tflite model with the python api? | 0 |
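For that check, a minimal sketch of running the converted model with the Python API; the .tflite file name and the random input are placeholders:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # assumed path
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

dummy = np.random.rand(*input_details[0]["shape"]).astype(input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))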
tensorflow | Research & Models | Model selection | https://discuss.tensorflow.org/t/model-selection/6212 | My dataset is a combination of time series and non-time-series data. What model can I use to train my machine using both types of data? | @Md_Samiul_Basir, model selection is purely based on the business problem. Is it possible to share sample data to understand more? | 0