issue (dict) | pr (dict) | pr_details (dict) |
---|---|---|
{
"body": "**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04\r\n- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A\r\n- TensorFlow installed from (source or binary): binary\r\n- TensorFlow version (use command below):2.1.0\r\n- Python version:3.7.6\r\n- Bazel version (if compiling from source):N/A\r\n- GCC/Compiler version (if compiling from source):N/A\r\n- CUDA/cuDNN version:N/A\r\n- GPU model and memory:N/A\r\n\r\n\r\n\r\n\r\n**Describe the current behavior**\r\n`tf.keras.backend.tile` crash(aborts) when `n` is large\r\n\r\n**Describe the expected behavior**\r\nexpect an exception message if the input unexpected instead of crash. \r\n\r\n**Standalone code to reproduce the issue**\r\n~~~python\r\nimport tensorflow as tf\r\nimport numpy as np\r\ntf.keras.backend.tile(x=np.ones((1,1,1)), n=[100000000,100000000, 100000000])\r\n~~~\r\nOutput\r\n~~~python\r\n2021-02-04 04:10:34.072054: F tensorflow/core/framework/tensor_shape.cc:353] Check failed: 0 <= new_num_elements (0 vs. -1)\r\nAborted (core dumped)\r\n~~~",
"comments": [
{
"body": "Was able to reproduce the issue with TF v2.3, TF v2.4 and TF-nightly. Please find the gist of it [here](https://colab.research.google.com/gist/amahendrakar/c767ecbb5d79e0cfdda8bebb6fa4582e/46911.ipynb). Thanks!",
"created_at": "2021-02-04T16:20:20Z"
},
{
"body": "Colab crashes in TF 2.6 as well.Please find the gist [here](https://colab.research.google.com/gist/saikumarchalla/b6bdaeacb30114dcac93f2325c54dfd2/copy-of-untitled92.ipynb).Thanks!",
"created_at": "2021-05-29T04:21:38Z"
},
{
"body": "Added PR #51138 for the fix.",
"created_at": "2021-08-04T03:23:44Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46911\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46911\">No</a>\n",
"created_at": "2021-09-21T01:51:59Z"
}
],
"number": 46911,
"title": "tf.keras.backend.tile crash(aborts) when n is large"
}
|
{
"body": "This PR tries to fix the issue raised in #46911 where tf.tile\r\nwill crash when n is large. This PR add additional check\r\nto make sure an error message is rendered (instead of crash).\r\n\r\nThis PR fixes #46911.\r\n\r\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>",
"number": 51138,
"review_comments": [
{
"body": "Let's instead change this to use [`AddDimWithStatus`](https://github.com/tensorflow/tensorflow/blob/51d911b4661b290f0e207fc7cf1ec79b9e411a58/tensorflow/core/framework/tensor_shape.h#L208-L210) and wrap that in `OP_REQUIRES_OK`.\r\n\r\nAlso, if we want to display a shape in an error message, `shape.DebugString()` is the best API to use.",
"created_at": "2021-08-04T17:14:26Z"
}
],
"title": "Fix tf.tile crash when n is large"
}
|
{
"commits": [
{
"message": "Fix tf.tile crash when n is large\n\nThis PR tries to fix the issue raised in 46911 where tf.tile\nwill crash when n is large. This PR add additional check\nto make sure an error message is rendered (instead of crash).\n\nThis PR fixes 46911.\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
}
],
"files": [
{
"diff": "@@ -188,7 +188,9 @@ class TileOp : public OpKernel {\n context, multiples_array[i] >= 0,\n errors::InvalidArgument(\"Expected multiples[\", i, \"] >= 0, but got \",\n multiples_array[i]));\n- output_shape.AddDim(input.dim_size(i) * multiples_array[i]);\n+ OP_REQUIRES_OK(\n+ context,\n+ output_shape.AddDimWithStatus(input.dim_size(i) * multiples_array[i]));\n }\n if (output_shape == input.shape()) {\n context->set_output(0, input);",
"filename": "tensorflow/core/kernels/tile_ops.cc",
"status": "modified"
},
{
"diff": "@@ -723,6 +723,14 @@ def testShapeFunctionEdgeCases(self):\n inp, array_ops.placeholder(\n dtypes.int32, shape=[3]))\n \n+ def testLargeTensor(self):\n+ # Test case for GItHub issue 46911.\n+ with self.assertRaises(errors_impl.InternalError):\n+ with self.cached_session():\n+ tiled = array_ops.tile(\n+ np.ones((1, 1, 1)), [100000000, 100000000, 100000000])\n+ result = self.evaluate(tiled)\n+\n \n if __name__ == \"__main__\":\n test.main()",
"filename": "tensorflow/python/kernel_tests/shape_ops_test.py",
"status": "modified"
}
]
}
|
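A hedged illustration (not part of the records above), assuming a TensorFlow build that already contains PR #51138: the oversized `tile` request from issue #46911 should then surface as a catchable error rather than a core dump. The exact error class may differ by version; the PR's test expects `InternalError`.

```python
# Sketch only: requires a TF build that includes the AddDimWithStatus check.
import numpy as np
import tensorflow as tf

try:
    tf.keras.backend.tile(x=np.ones((1, 1, 1)),
                          n=[100000000, 100000000, 100000000])
except tf.errors.OpError as e:  # e.g. tf.errors.InternalError in the PR's test
    print("tile rejected the oversized shape:", type(e).__name__)
```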
{
"body": "**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Pop-OS 20.04\r\n- TensorFlow installed from (source or binary): source\r\n- TensorFlow version (use command below): v2.5.0-rc3-213-ga4dfb8d1a71 2.5.0\r\n- Python version: 3.9.5\r\n- CUDA/cuDNN version: CUDA 11.4 / cuDNN 8.2.2\r\n- GPU model and memory: RTX 3080\r\n\r\n**Describe the current behavior**\r\nWhen using the TF_GPU_ALLOCATOR=cuda_malloc_async, TF throws an internal error after allocation of GPU: \r\n```\r\n2021-07-08` 12:44:26.553800: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0\r\n2021-07-08 12:44:27.009583: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1\r\n2021-07-08 12:44:27.034925: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2021-07-08 12:44:27.035193: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: \r\npciBusID: 0000:2d:00.0 name: NVIDIA GeForce RTX 3080 computeCapability: 8.6\r\ncoreClock: 1.71GHz coreCount: 68 deviceMemorySize: 9.76GiB deviceMemoryBandwidth: 707.88GiB/s\r\n2021-07-08 12:44:27.035207: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0\r\n2021-07-08 12:44:27.036831: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11\r\n2021-07-08 12:44:27.036855: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11\r\n2021-07-08 12:44:27.037745: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10\r\n2021-07-08 12:44:27.037863: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10\r\n2021-07-08 12:44:27.038095: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11\r\n2021-07-08 12:44:27.038451: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11\r\n2021-07-08 12:44:27.038515: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8\r\n2021-07-08 12:44:27.038573: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2021-07-08 12:44:27.038841: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2021-07-08 12:44:27.039405: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0\r\n2021-07-08 12:44:27.039873: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\r\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n2021-07-08 
12:44:27.040284: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2021-07-08 12:44:27.040520: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: \r\npciBusID: 0000:2d:00.0 name: NVIDIA GeForce RTX 3080 computeCapability: 8.6\r\ncoreClock: 1.71GHz coreCount: 68 deviceMemorySize: 9.76GiB deviceMemoryBandwidth: 707.88GiB/s\r\n2021-07-08 12:44:27.040556: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2021-07-08 12:44:27.040800: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2021-07-08 12:44:27.041145: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0\r\n2021-07-08 12:44:27.041164: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0\r\n2021-07-08 12:44:27.296173: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:\r\n2021-07-08 12:44:27.296199: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264] 0 \r\n2021-07-08 12:44:27.296206: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0: N \r\n2021-07-08 12:44:27.296339: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2021-07-08 12:44:27.296598: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2021-07-08 12:44:27.296835: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2021-07-08 12:44:27.297045: I tensorflow/core/common_runtime/gpu/gpu_process_state.cc:210] Using CUDA malloc Async allocator for GPU.\r\n2021-07-08 12:44:27.297081: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1\r\nTraceback (most recent call last):\r\n File \"/home/sebltm/OneDrive/KCL/Individual_Project/FaceCapsNet/./training.py\", line 424, in <module>\r\n log_writer = tf.summary.create_file_writer(logdir) if log else None\r\n File \"/home/sebltm/OneDrive/KCL/Individual_Project/FaceCapsNet/facecapsnet/lib/python3.9/site-packages/tensorflow/python/ops/summary_ops_v2.py\", line 479, in create_file_writer_v2\r\n with ops.name_scope(name, \"create_file_writer\") as scope, ops.device(\"cpu:0\"):\r\n File \"/home/sebltm/OneDrive/KCL/Individual_Project/FaceCapsNet/facecapsnet/lib/python3.9/site-packages/tensorflow/python/framework/ops.py\", line 5255, in device\r\n return context.device(device_name_or_function)\r\n File \"/home/sebltm/OneDrive/KCL/Individual_Project/FaceCapsNet/facecapsnet/lib/python3.9/site-packages/tensorflow/python/eager/context.py\", line 2072, in device\r\n ensure_initialized()\r\n File \"/home/sebltm/OneDrive/KCL/Individual_Project/FaceCapsNet/facecapsnet/lib/python3.9/site-packages/tensorflow/python/eager/context.py\", 
line 1867, in ensure_initialized\r\n context().ensure_initialized()\r\n File \"/home/sebltm/OneDrive/KCL/Individual_Project/FaceCapsNet/facecapsnet/lib/python3.9/site-packages/tensorflow/python/eager/context.py\", line 525, in ensure_initialized\r\n context_handle = pywrap_tfe.TFE_NewContext(opts)\r\ntensorflow.python.framework.errors_impl.InternalError: No allocator statistics\r\n```",
"comments": [
{
"body": "@sebltm ,\r\n\r\nEvery TensorFlow release is compatible with a certain version, for more information please take a look at the [tested build configurations](https://www.tensorflow.org/install/source#gpu).In this case, can you please try installing TensorFlow v2.5 with CUDA 11.2 and cuDNN 8.1 and check if you are facing the same error. Thanks!",
"created_at": "2021-07-08T13:44:53Z"
},
{
"body": "Works with TF 2.5 / CUDA 11.2 / cuDNN 8.1, however this only half solves my problem, since my reason for trying another version of CUDA / cuDNN is that I was getting frequent segmentation faults with this combination, which disappeared after migrating to CUDA 11.4 / cuDNN 8.2.2",
"created_at": "2021-07-13T09:45:32Z"
},
{
"body": "CC @nouiz \r\n\r\n> since my reason for trying another version of CUDA / cuDNN is that I was getting frequent segmentation faults with this combination\r\n\r\nIs there a GH issue about these segfaults?",
"created_at": "2021-07-23T22:36:45Z"
},
{
"body": "The \"No allocator statistics\" was fixed about a mount ago (#49173), but it got reverted and i just saw this today.\r\nI'll take a look at this.",
"created_at": "2021-07-26T19:37:48Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/50669\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/50669\">No</a>\n",
"created_at": "2021-08-30T11:45:32Z"
},
{
"body": "The fix was merged a few hours ago.\r\nCan you wait 24h and try TF nightly build to be sure that it also works for you?\r\nIf you have any comments on that new features, please share with us.",
"created_at": "2021-08-30T14:55:19Z"
}
],
"number": 50669,
"title": "Internal Error with TF_GPU_ALLOCATOR=cuda_malloc_async"
}
|
{
"body": "This reverts commit a4553f8ad74f5ad486057da400fb8e101f12e09f that was reverting #49173.\r\n\r\nWhen I run it now, //tensorflow/core/common_runtime/gpu:gpu_device_test in asan give me the same output as before. It was already crashing before this PR with my command line:\r\n\r\n```bazel test --distinct_host_configuration=false --javabase=@bazel_tools//tools/jdk:remote_jdk11 --config=asan -c opt --config=cuda //tensorflow/core/common_runtime/gpu:gpu_device_test```\r\n\r\n@penpornk \r\n\r\nfixes #50669 ",
"number": 50961,
"review_comments": [
{
"body": "I think something like this should be done on runtime startup.\r\n\r\nWe've seen quite a few issues when folks tried to run TF built with CUDA-11.3 on the drivers 460. It does not fail right away, but tends to cause weird issues later one. Some apps work OK, others crash or fail with odd CUDA errors. No idea what exactly triggers the failures.\r\n\r\nChecking if the driver is recent enough for the CUDA version we build with, and issuing a warning if it's not, would be very useful. CUDA-11 was supposed to be driver-agnostic, but while it removed the strict checks for the driver version, it did not quite remove the dependency on recent enough driver versions.",
"created_at": "2021-08-25T22:11:39Z"
},
{
"body": "Do you have example of problems that this cause?\r\nIf a new 11.X version introduce new features, then some of those features need new feature inside the driver. Like cudaMallocAsync. In that case, I think it is impossible to backport this to older 11.X. \r\nTo my knowledge, the compatibility between 11.X driver is only if you limit yourself to features in 11.0. If you use the new feature, then you are bumping the minimum driver requirement.\r\n\r\nSo you only need to take care about new features that need new drivers.\r\n\r\nPersonally, I think it is useful for TF user that those new features are enabled only when a recent enough driver is installed. So those feature should detect the version and be enabled only when they are available.\r\n\r\nI think that crashing as you suggest is too strong.",
"created_at": "2021-08-26T13:14:39Z"
}
],
"title": "Add back \"PR #49173: [Crash fix] Fix cudaMallocAsync crashes.\""
}
|
{
"commits": [
{
"message": "Revert \"PR #49173: [Crash fix] Fix cudaMallocAsync crashes.\"\n\nThis reverts commit a4553f8ad74f5ad486057da400fb8e101f12e09f."
},
{
"message": "Do not free nullptr and if a nullptr is freed, handle it correctly."
},
{
"message": "Add extra error information."
},
{
"message": "Print driver version."
},
{
"message": "Check that the CUDA driver support 11.2 to have a more explicit error message."
},
{
"message": "More earlier a check"
},
{
"message": "Skip a test when the driver is too old."
},
{
"message": "Skip the test if the cuda toolkit is also too old."
},
{
"message": "Try to make the test pass with ROCM."
}
],
"files": [
{
"diff": "@@ -319,6 +319,7 @@ tf_cuda_cc_test(\n tags = tf_cuda_tests_tags(),\n deps = [\n \":gpu_id\",\n+ \":gpu_runtime\",\n \"//tensorflow/cc:cc_ops\",\n \"//tensorflow/core:framework\",\n \"//tensorflow/core:framework_internal\",",
"filename": "tensorflow/core/common_runtime/gpu/BUILD",
"status": "modified"
},
{
"diff": "@@ -96,10 +96,13 @@ void GpuCudaMallocAsyncAllocator::PrintAllocatorStatistics() {\n #endif\n }\n \n+std::atomic<int> GpuCudaMallocAsyncAllocator::number_instantiated_(0);\n+\n GpuCudaMallocAsyncAllocator::GpuCudaMallocAsyncAllocator(\n PlatformDeviceId platform_device_id, size_t pool_size, bool reserve_memory,\n bool compute_stats)\n : name_(absl::StrCat(\"gpu_async_\", platform_device_id.value())) {\n+ ++number_instantiated_;\n #if TF_CUDA_MALLOC_ASYNC_SUPPORTED\n stream_exec_ = DeviceIdUtil::ExecutorForPlatformDeviceId(GPUMachineManager(),\n platform_device_id)\n@@ -114,6 +117,13 @@ GpuCudaMallocAsyncAllocator::GpuCudaMallocAsyncAllocator(\n int driverVersion;\n cuDriverGetVersion(&driverVersion);\n VLOG(2) << \"DRIVER VERSION: \" << driverVersion;\n+ if (driverVersion < 11020) {\n+ LOG(FATAL) // Crash OK.\n+ << \"Disable cuda_malloc_async or update your CUDA driver to a version\"\n+ << \" compitible with CUDA 11.2 or higher.\"\n+ << \" We detected a version compatible with: \" << driverVersion;\n+ }\n+\n if (platform_device_id.value() > 0 && driverVersion < 11030) {\n CUcontext pctx; // We loose track of it. But this is fine.\n if (auto result = cuDevicePrimaryCtxRetain(&pctx, 0))\n@@ -122,13 +132,25 @@ GpuCudaMallocAsyncAllocator::GpuCudaMallocAsyncAllocator(\n }\n \n se::cuda::ScopedActivateExecutorContext scoped_activation{stream_exec_};\n+\n+ // Check the the CUDA runtime is recent enough.\n+ if (auto status2 = cuDriverGetVersion(&driverVersion)) {\n+ LOG(FATAL) // Crash OK.\n+ << \"Error while fetching driver version: \"\n+ << GetCudaErrorMessage(status2);\n+ }\n+\n+ // Check that cudaMallocAsync is supported.\n int cuda_malloc_async_supported;\n if (auto status =\n cuDeviceGetAttribute(&cuda_malloc_async_supported,\n CU_DEVICE_ATTRIBUTE_MEMORY_POOLS_SUPPORTED,\n- platform_device_id.value()))\n- LOG(FATAL) << // Crash OK.\n- \"Failed to get device attribute: \" << GetCudaErrorMessage(status);\n+ platform_device_id.value())) {\n+ LOG(FATAL) // Crash OK.\n+ << \"On device: \" << platform_device_id.value()\n+ << \" Current driver: \" << driverVersion\n+ << \". Failed to get device attribute : \" << GetCudaErrorMessage(status);\n+ }\n if (!cuda_malloc_async_supported)\n LOG(FATAL) // Crash OK.\n << \"TF_GPU_ALLOCATOR=cuda_malloc_async isn't currently supported on \"\n@@ -311,6 +333,8 @@ void* GpuCudaMallocAsyncAllocator::AllocateRaw(size_t alignment,\n }\n void GpuCudaMallocAsyncAllocator::DeallocateRaw(void* ptr) {\n #if TF_CUDA_MALLOC_ASYNC_SUPPORTED\n+ if (ptr == nullptr)\n+ return;\n if (auto result = cuMemFreeAsync(reinterpret_cast<const CUdeviceptr&>(ptr),\n cuda_stream_)) {\n if (result == CUDA_ERROR_DEINITIALIZED) {",
"filename": "tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc",
"status": "modified"
},
{
"diff": "@@ -67,7 +67,7 @@ class GpuCudaMallocAsyncAllocator : public Allocator {\n explicit GpuCudaMallocAsyncAllocator(PlatformDeviceId platform_device_id,\n size_t pool_size,\n bool reserve_memory = false,\n- bool compute_stats = false);\n+ bool compute_stats = true);\n ~GpuCudaMallocAsyncAllocator() override;\n string Name() override { return name_; }\n void* AllocateRaw(size_t alignment, size_t num_bytes) override;\n@@ -85,7 +85,7 @@ class GpuCudaMallocAsyncAllocator : public Allocator {\n \n void SetStream(void* stream) override {\n #if TF_CUDA_MALLOC_ASYNC_SUPPORTED\n- cuda_stream_ = reinterpret_cast<CUstream>(stream);\n+ cuda_stream_ = *(static_cast<CUstream*>(stream));\n #endif\n }\n \n@@ -95,6 +95,8 @@ class GpuCudaMallocAsyncAllocator : public Allocator {\n // - If CUDA_VERSION >= 11030, print cudaMallocAsync statistics.\n void PrintAllocatorStatistics();\n \n+ static int GetInstantiatedCountTestOnly() { return number_instantiated_; }\n+\n private:\n #if TF_CUDA_MALLOC_ASYNC_SUPPORTED\n se::StreamExecutor* stream_exec_; // Not owned.\n@@ -112,6 +114,10 @@ class GpuCudaMallocAsyncAllocator : public Allocator {\n CUmemoryPool pool_;\n #endif // TF_CUDA_MALLOC_ASYNC_SUPPORTED\n \n+ // Just a counter for the number of time this class is instantiated.\n+ // Only useful for tests.\n+ static std::atomic<int> number_instantiated_;\n+\n string name_;\n \n TF_DISALLOW_COPY_AND_ASSIGN(GpuCudaMallocAsyncAllocator);",
"filename": "tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.h",
"status": "modified"
},
{
"diff": "@@ -401,7 +401,8 @@ BaseGPUDevice::BaseGPUDevice(const SessionOptions& options, const string& name,\n \n BaseGPUDevice::~BaseGPUDevice() {\n delete gpu_device_info_;\n- gpu_allocator_->DeallocateRaw(scratch_);\n+ if (scratch_)\n+ gpu_allocator_->DeallocateRaw(scratch_);\n device_context_->Unref();\n }\n ",
"filename": "tensorflow/core/common_runtime/gpu/gpu_device.cc",
"status": "modified"
},
{
"diff": "@@ -18,6 +18,7 @@ limitations under the License.\n #include \"tensorflow/core/common_runtime/gpu/gpu_device.h\"\n \n #include \"tensorflow/core/common_runtime/device/device_id_utils.h\"\n+#include \"tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.h\"\n #include \"tensorflow/core/common_runtime/gpu/gpu_init.h\"\n #include \"tensorflow/core/common_runtime/gpu/gpu_process_state.h\"\n #include \"tensorflow/core/lib/core/errors.h\"\n@@ -66,14 +67,17 @@ class GPUDeviceTest : public ::testing::Test {\n const string& visible_device_list = \"\",\n double per_process_gpu_memory_fraction = 0, int gpu_device_count = 1,\n const std::vector<std::vector<float>>& memory_limit_mb = {},\n- const std::vector<std::vector<int32>>& priority = {}) {\n+ const std::vector<std::vector<int32>>& priority = {},\n+ const bool use_cuda_malloc_async = false) {\n SessionOptions options;\n ConfigProto* config = &options.config;\n (*config->mutable_device_count())[\"GPU\"] = gpu_device_count;\n GPUOptions* gpu_options = config->mutable_gpu_options();\n gpu_options->set_visible_device_list(visible_device_list);\n gpu_options->set_per_process_gpu_memory_fraction(\n per_process_gpu_memory_fraction);\n+ gpu_options->mutable_experimental()->set_use_cuda_malloc_async(\n+ use_cuda_malloc_async);\n for (int i = 0; i < memory_limit_mb.size(); ++i) {\n auto virtual_devices =\n gpu_options->mutable_experimental()->add_virtual_devices();\n@@ -109,6 +113,49 @@ class GPUDeviceTest : public ::testing::Test {\n }\n };\n \n+TEST_F(GPUDeviceTest, CudaMallocAsync) {\n+ // cudaMallocAsync supported only when cuda toolkit and driver supporting CUDA 11.2+\n+#ifndef GOOGLE_CUDA\n+ return;\n+#elif CUDA_VERSION < 11020\n+ LOG(INFO) << \"CUDA toolkit too old, skipping this test: \" << CUDA_VERSION;\n+ return;\n+#else\n+ // cudaMallocAsync supported only for driver supporting CUDA 11.2+\n+ int driverVersion;\n+ cuDriverGetVersion(&driverVersion);\n+ if (driverVersion < 11020) {\n+ LOG(INFO) << \"Driver version too old, skipping this test: \" << driverVersion;\n+ return;\n+ }\n+#endif\n+\n+ SessionOptions opts = MakeSessionOptions(\"0\", 0, 1, {}, {},\n+ /*use_cuda_malloc_async=*/true);\n+ std::vector<std::unique_ptr<Device>> devices;\n+ Status status;\n+ int number_instantiated =\n+ GpuCudaMallocAsyncAllocator::GetInstantiatedCountTestOnly();\n+ { // The new scope is to trigger the destruction of the object.\n+ status = DeviceFactory::GetFactory(\"GPU\")->CreateDevices(\n+ opts, kDeviceNamePrefix, &devices);\n+ EXPECT_EQ(devices.size(), 1);\n+ Device* device = devices[0].get();\n+ auto* device_info = device->tensorflow_gpu_device_info();\n+ EXPECT_NE(device_info, nullptr);\n+\n+ AllocatorAttributes allocator_attributes = AllocatorAttributes();\n+ allocator_attributes.set_gpu_compatible(true);\n+ Allocator* allocator = devices[0]->GetAllocator(allocator_attributes);\n+ void* ptr = allocator->AllocateRaw(Allocator::kAllocatorAlignment, 1024);\n+ EXPECT_NE(ptr, nullptr);\n+ allocator->DeallocateRaw(ptr);\n+ }\n+ EXPECT_EQ(number_instantiated + 1,\n+ GpuCudaMallocAsyncAllocator::GetInstantiatedCountTestOnly());\n+ EXPECT_EQ(status.code(), error::OK);\n+}\n+\n TEST_F(GPUDeviceTest, FailedToParseVisibleDeviceList) {\n SessionOptions opts = MakeSessionOptions(\"0,abc\");\n std::vector<std::unique_ptr<Device>> devices;",
"filename": "tensorflow/core/common_runtime/gpu/gpu_device_test.cc",
"status": "modified"
},
{
"diff": "@@ -207,21 +207,18 @@ Allocator* GPUProcessState::GetGPUAllocator(\n // useful for doing memory debugging with tools like cuda-memcheck\n // **WARNING** probably will not work in a multi-gpu scenario\n delete gpu_bfc_allocator;\n- delete sub_allocator;\n gpu_bfc_allocator = nullptr;\n- sub_allocator = nullptr;\n gpu_allocator = new GPUcudaMallocAllocator(platform_device_id);\n- } else if (UseCudaMallocAsyncAllocator()) {\n+ } else if (UseCudaMallocAsyncAllocator() ||\n+ options.experimental().use_cuda_malloc_async()) {\n LOG(INFO) << \"Using CUDA malloc Async allocator for GPU: \"\n << platform_device_id;\n // If true, passes all allocation requests through to cudaMallocAsync\n // TODO: useful for doing memory debugging with tools like\n // compute-sanitizer.\n // TODO: **WARNING** probably will not work in a multi-gpu scenario\n delete gpu_bfc_allocator;\n- delete sub_allocator;\n gpu_bfc_allocator = nullptr;\n- sub_allocator = nullptr;\n gpu_allocator =\n new GpuCudaMallocAsyncAllocator(platform_device_id, total_bytes);\n }",
"filename": "tensorflow/core/common_runtime/gpu/gpu_process_state.cc",
"status": "modified"
}
]
}
|
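A hedged sketch (not part of the records above) of how the allocator discussed in issue #50669 is enabled: the `TF_GPU_ALLOCATOR=cuda_malloc_async` environment variable must be set before TensorFlow initializes its GPU devices, and the patched allocator from PR #50961 requires a driver compatible with CUDA 11.2 or newer, otherwise it now reports an explicit error instead of the opaque failure above.

```python
# Sketch only: assumes a CUDA build of TF and a driver compatible with CUDA 11.2+.
import os
os.environ["TF_GPU_ALLOCATOR"] = "cuda_malloc_async"  # set before TF touches the GPU

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible:", gpus)
if gpus:
    # A small op forces device initialization and exercises the async allocator.
    x = tf.ones((1024, 1024))
    print("allocated on:", x.device)
```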
{
"body": "**System information**\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04\r\n- TensorFlow installed from (source or binary): binary\r\n- TensorFlow version (use command below): v2.4.0-rc4-71-g582c8d236cb 2.4.0\r\n- Python version: 3.8.5\r\n\r\n**Describe the current behavior**\r\nWhen a cropping layer (Cropping1D, Cropping2D,Cropping3D) is used, it can happen that the cropping value is accidentally selected too big. Then, the output is just an empty list. It would be good to have an error or warning message for this case, since such an issue can be really cumbersome to find when it occurs deep in a complex graph model.\r\nThe issue is illustrated in the simplified example below.\r\n\r\n**Standalone code to reproduce the issue**\r\n```\r\nimport tensorflow as tf\r\nfrom tensorflow import keras\r\nfrom tensorflow.keras import layers\r\nimport numpy as np\r\nin0Cro69514 = tf.keras.layers.Input(shape=([1, 2]))\r\nCro69514 = keras.layers.Cropping1D(cropping=((5, 5)), name = 'Cro69514', )(in0Cro69514)\r\nmodel = tf.keras.models.Model(inputs=[in0Cro69514], outputs=Cro69514)\r\nin0Cro69514 = tf.constant([[[1.6058, 1.8537]]])\r\nprint (np.array2string(model.predict([in0Cro69514],steps=1), separator=', '))\r\n```\r\n",
"comments": [
{
"body": "@Saduf2019 ,\r\nI was able to reproduce the issue in tf v2.4 and v2.5.Please find the gist of it [here](https://colab.research.google.com/gist/tilakrayal/6297e9900e309ab8d8bf02612967ab87/untitled50612.ipynb).",
"created_at": "2021-07-05T16:50:36Z"
},
{
"body": "Hi @ymodak do you want to solve this issue? Can I work on it?\r\n\r\nIf your asnwer is \"yes\". I think it should raise a `ValueError` I don't see a warning because have zero data in the Tensor have no sense (at least in my knowledge).\r\n\r\n",
"created_at": "2021-07-15T18:01:05Z"
},
{
"body": "@arubiales Feel free to take stab on this. Thanks!",
"created_at": "2021-07-16T17:59:31Z"
},
{
"body": " Keras project has moved to new repository in https://github.com/keras-team/keras\r\nYou may want to raise this PR on keras-team/keras repo.\r\nSee [TensorFlow Forum Announcement](https://discuss.tensorflow.org/t/keras-project-moved-to-new-repository-in-https-github-com-keras-team-keras/1999) to know more.\r\nThanks for your PR and apologies for the overhead.",
"created_at": "2021-07-19T19:03:35Z"
},
{
"body": "This issue has been automatically marked as stale because it has no recent activity. It will be closed if no further activity occurs. Thank you.\n",
"created_at": "2021-07-26T19:09:29Z"
},
{
"body": "This issue has been automatically marked as stale because it has no recent activity. It will be closed if no further activity occurs. Thank you.\n",
"created_at": "2021-08-05T18:05:06Z"
},
{
"body": "Closing as stale. Please reopen if you'd like to work on this further.\n",
"created_at": "2021-08-12T19:04:48Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/50612\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/50612\">No</a>\n",
"created_at": "2021-08-12T19:05:18Z"
}
],
"number": 50612,
"title": "cropping layer additional error message "
}
|
{
"body": "initialy the PR was in #50612 but the Tensorflow/Keras policy changed. Now is #14958\r\n\r\nI do a first try to solve this issue. I think we can do a prototype with `Cropping1D` and after when everything is perfect, copy the same pattern to `Cropping2D`, `Cropping3D`, etc.\r\n\r\nPls, give me your thoughts. @ymodak",
"number": 50844,
"review_comments": [],
"title": "fix cropping layer return empty list if crop is higher than data shap…"
}
|
{
"commits": [
{
"message": "fix cropping layer return empty list if crop is higher than data shape #50612"
}
],
"files": [
{
"diff": "@@ -3098,6 +3098,12 @@ def compute_output_shape(self, input_shape):\n return tensor_shape.TensorShape([input_shape[0], length, input_shape[2]])\n \n def call(self, inputs):\n+ if sum(self.cropping) >= inputs.shape[1]:\n+ raise ValueError(\n+ 'cropping parameter of Cropping layer is too high,' +\n+ 'the result of crop' + str(inputs.shape) + ' with cropping ' +\n+ str(self.cropping) + ' is an empty tensor'\n+ )\n if self.cropping[1] == 0:\n return inputs[:, self.cropping[0]:, :]\n else:",
"filename": "tensorflow/python/keras/layers/convolutional.py",
"status": "modified"
}
]
}
|
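A hedged before/after sketch (not part of the records above) of the behaviour targeted by PR #50844: cropping more steps than the input provides. On unpatched builds the layer silently returns an empty tensor along axis 1; with the proposed check it raises a `ValueError`.

```python
# Sketch only: mirrors the repro from issue #50612 with a direct eager call.
import tensorflow as tf

layer = tf.keras.layers.Cropping1D(cropping=(5, 5))  # asks to remove 10 steps
x = tf.constant([[[1.6058, 1.8537]]])                # shape (1, 1, 2): only 1 step

try:
    y = layer(x)
    print("unpatched result shape:", y.shape)  # (1, 0, 2) -- silently empty
except ValueError as err:
    print("patched behaviour:", err)
```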
{
"body": "<em>Please make sure that this is a build/installation issue. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template</em>\r\n\r\n**System information**\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Debian 11\r\n- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:\r\n- TensorFlow installed from (source or binary): source\r\n- TensorFlow version: git/master\r\n- Python version: 3.9\r\n- Installed using virtualenv? pip? conda?: conda\r\n- Bazel version (if compiling from source): 4.1.0\r\n- GCC/Compiler version (if compiling from source): 11\r\n- CUDA/cuDNN version: N/A\r\n- GPU model and memory: N/A\r\n\r\n\r\n\r\n**Describe the problem**\r\n\r\nBuilding TF from the current source on GH fails due to missing files (all from `llvm-openmp`):\r\n`tools.pm`\r\n`kmp.h`\r\n`kmp_platform.h`\r\n`kmp_os.h`\r\n\r\n**Provide the exact sequence of commands / steps that you executed before running into the problem**\r\nMy build command was as follows:\r\n```bash\r\nbazel build --config=mkl -config=nogcp --config=nonccl -c opt --copt=-march=native --copt=-O3 -s //tensorflow/tools/pip_package:build_pip_package\r\n```\r\n\r\n**Any other info / logs**\r\nInclude any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.\r\n",
"comments": [
{
"body": "The issue will move to closed status once the PR is merged.",
"created_at": "2021-06-08T14:34:40Z"
},
{
"body": "@eli-osherovich ,\r\n\r\nPlease free feel to move this issue to closed status.Thanks!",
"created_at": "2021-06-09T07:26:34Z"
},
{
"body": "@tilakrayal \r\nOriginal PR messed up after a rebase.\r\nRe-created it.\r\n",
"created_at": "2021-06-09T17:36:35Z"
},
{
"body": "@eli-osherovich, @tilakrayal, Intel is also seeing this build issue and investigating for solution. As a workaround before any fix, please try with adding this build option, **--spawn_strategy=standalone** , to the build command and this should temporarily fix the build failure.",
"created_at": "2021-06-09T20:43:20Z"
},
{
"body": "Tensorflow has reverted a change that causes the build failure, there is no need to use the additional option mentioned above if you sync the project today which has this commit https://github.com/tensorflow/tensorflow/commit/763ae9bed64834d6a9a2e18d9eead0d8763df079 (the commit message is incorrect).",
"created_at": "2021-06-10T05:31:26Z"
},
{
"body": "@yimeisun123 \r\nThe above change just puts standalone into config. My fix is better - it keeps the usual build",
"created_at": "2021-06-10T05:53:42Z"
},
{
"body": "@eli-osherovich - tried your earlier PR#50143, and still has build failure. I see that you pushed a new PR#50179, will check. Thanks.",
"created_at": "2021-06-10T06:02:15Z"
},
{
"body": "@yimeisun123 it definitely works for me.\r\n\r\nBy the way, if you are working on TF inside Intel, have a look at this issue: #50176 . There is a weird problem inside Intel's code with GCC 11. While trivial code changes can solve it, the problem is quite interesting. Probably worth solving without code changes. ",
"created_at": "2021-06-10T19:21:13Z"
},
{
"body": "@tilakrayal can we move forward with this?",
"created_at": "2021-06-15T18:58:22Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/50145\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/50145\">No</a>\n",
"created_at": "2021-06-17T15:55:07Z"
}
],
"number": 50145,
"title": "Building current master fails due to missing files"
}
|
{
"body": "Fixes #50145",
"number": 50179,
"review_comments": [],
"title": "Added missing files to BUILD."
}
|
{
"commits": [
{
"message": "Added missing files to BUILD."
}
],
"files": [
{
"diff": "@@ -1,5 +1,6 @@\n # Build file for OpenMP library that is part of llvm\n \n+load(\"@rules_cc//cc:defs.bzl\", \"cc_binary\")\n load(\n \"@org_tensorflow//third_party/llvm:llvm.bzl\",\n \"cmake_var_string\",\n@@ -22,6 +23,7 @@ genrule(\n name = \"kmp_i18n_id\",\n srcs = [\n \"runtime/tools/message-converter.pl\",\n+ \"runtime/tools/lib/tools.pm\",\n \"runtime/src/i18n/en_US.txt\",\n ],\n outs = [\"include/kmp_i18n_id.inc\"],\n@@ -32,6 +34,7 @@ genrule(\n name = \"kmp_i18n_default\",\n srcs = [\n \"runtime/tools/message-converter.pl\",\n+ \"runtime/tools/lib/tools.pm\",\n \"runtime/src/i18n/en_US.txt\",\n ],\n outs = [\"include/kmp_i18n_default.inc\"],\n@@ -109,6 +112,37 @@ expand_cmake_vars(\n dst = \"include/omp.h\",\n )\n \n+headers = [\n+ \"runtime/src/kmp_affinity.h\",\n+ \"runtime/src/kmp_atomic.h\",\n+ \"runtime/src/kmp_debug.h\",\n+ \"runtime/src/kmp_dispatch_hier.h\",\n+ \"runtime/src/kmp_dispatch.h\",\n+ \"runtime/src/kmp_environment.h\",\n+ \"runtime/src/kmp_error.h\",\n+ \"runtime/src/kmp_ftn_entry.h\",\n+ \"runtime/src/kmp_ftn_os.h\",\n+ \"runtime/src/kmp_i18n.h\",\n+ \"runtime/src/kmp_io.h\",\n+ \"runtime/src/kmp_itt.h\",\n+ \"runtime/src/kmp_itt.inl\",\n+ \"runtime/src/kmp_lock.h\",\n+ \"runtime/src/kmp_os.h\",\n+ \"runtime/src/kmp_platform.h\",\n+ \"runtime/src/kmp_safe_c_api.h\",\n+ \"runtime/src/kmp_settings.h\",\n+ \"runtime/src/kmp_stats.h\",\n+ \"runtime/src/kmp_str.h\",\n+ \"runtime/src/kmp_taskdeps.h\",\n+ \"runtime/src/kmp_version.h\",\n+ \"runtime/src/kmp_wait_release.h\",\n+ \"runtime/src/kmp_wrapper_getpid.h\",\n+ \"runtime/src/kmp_wrapper_malloc.h\",\n+ \"runtime/src/kmp.h\",\n+ \"runtime/src/ompt-specific.h\",\n+ \"runtime/src/tsan_annotations.h\",\n+]\n+\n cppsources = [\n \"runtime/src/kmp_alloc.cpp\",\n \"runtime/src/kmp_atomic.cpp\",\n@@ -144,7 +178,6 @@ srcdeps = [\n \":config_omp\",\n \":kmp_i18n_id\",\n \":kmp_i18n_default\",\n- \":ldscript\",\n ]\n \n common_includes = [\n@@ -171,7 +204,8 @@ cc_binary(\n \"runtime/src/z_Linux_util.cpp\",\n \"runtime/src/kmp_gsupport.cpp\",\n \"runtime/src/z_Linux_asm.S\",\n- ] + srcdeps,\n+ ] + headers + srcdeps,\n+ additional_linker_inputs = [\":ldscript\"],\n copts = [\"-Domp_EXPORTS -D_GNU_SOURCE -D_REENTRANT\"],\n includes = common_includes,\n linkopts = [\"-lpthread -ldl -Wl,--version-script=$(location :ldscript)\"],",
"filename": "third_party/llvm_openmp/BUILD",
"status": "modified"
}
]
}
|
{
"body": "<em>Please make sure that this is a build/installation issue. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template</em>\r\n\r\n**System information**\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Debian 11\r\n- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:\r\n- TensorFlow installed from (source or binary): source\r\n- TensorFlow version: git/master\r\n- Python version: 3.9\r\n- Installed using virtualenv? pip? conda?: conda\r\n- Bazel version (if compiling from source): 4.1.0\r\n- GCC/Compiler version (if compiling from source): 11\r\n- CUDA/cuDNN version: N/A\r\n- GPU model and memory: N/A\r\n\r\n\r\n\r\n**Describe the problem**\r\n\r\nBuilding TF from the current source on GH fails due to missing files (all from `llvm-openmp`):\r\n`tools.pm`\r\n`kmp.h`\r\n`kmp_platform.h`\r\n`kmp_os.h`\r\n\r\n**Provide the exact sequence of commands / steps that you executed before running into the problem**\r\nMy build command was as follows:\r\n```bash\r\nbazel build --config=mkl -config=nogcp --config=nonccl -c opt --copt=-march=native --copt=-O3 -s //tensorflow/tools/pip_package:build_pip_package\r\n```\r\n\r\n**Any other info / logs**\r\nInclude any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.\r\n",
"comments": [
{
"body": "The issue will move to closed status once the PR is merged.",
"created_at": "2021-06-08T14:34:40Z"
},
{
"body": "@eli-osherovich ,\r\n\r\nPlease free feel to move this issue to closed status.Thanks!",
"created_at": "2021-06-09T07:26:34Z"
},
{
"body": "@tilakrayal \r\nOriginal PR messed up after a rebase.\r\nRe-created it.\r\n",
"created_at": "2021-06-09T17:36:35Z"
},
{
"body": "@eli-osherovich, @tilakrayal, Intel is also seeing this build issue and investigating for solution. As a workaround before any fix, please try with adding this build option, **--spawn_strategy=standalone** , to the build command and this should temporarily fix the build failure.",
"created_at": "2021-06-09T20:43:20Z"
},
{
"body": "Tensorflow has reverted a change that causes the build failure, there is no need to use the additional option mentioned above if you sync the project today which has this commit https://github.com/tensorflow/tensorflow/commit/763ae9bed64834d6a9a2e18d9eead0d8763df079 (the commit message is incorrect).",
"created_at": "2021-06-10T05:31:26Z"
},
{
"body": "@yimeisun123 \r\nThe above change just puts standalone into config. My fix is better - it keeps the usual build",
"created_at": "2021-06-10T05:53:42Z"
},
{
"body": "@eli-osherovich - tried your earlier PR#50143, and still has build failure. I see that you pushed a new PR#50179, will check. Thanks.",
"created_at": "2021-06-10T06:02:15Z"
},
{
"body": "@yimeisun123 it definitely works for me.\r\n\r\nBy the way, if you are working on TF inside Intel, have a look at this issue: #50176 . There is a weird problem inside Intel's code with GCC 11. While trivial code changes can solve it, the problem is quite interesting. Probably worth solving without code changes. ",
"created_at": "2021-06-10T19:21:13Z"
},
{
"body": "@tilakrayal can we move forward with this?",
"created_at": "2021-06-15T18:58:22Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/50145\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/50145\">No</a>\n",
"created_at": "2021-06-17T15:55:07Z"
}
],
"number": 50145,
"title": "Building current master fails due to missing files"
}
|
{
"body": "Compiling TF git fails due to missing files. \r\nAdded these into relevant BUILD rules.\r\n\r\n\r\nFixes #50145 ",
"number": 50143,
"review_comments": [],
"title": "Build from source fails due to missing files (llvm-openmp issues)"
}
|
{
"commits": [
{
"message": "Update BUILD\n\nAdded missing files"
},
{
"message": "Added even more files."
},
{
"message": "One extra."
},
{
"message": "Another extra."
}
],
"files": [
{
"diff": "@@ -1,5 +1,6 @@\n # Build file for OpenMP library that is part of llvm\n \n+load(\"@rules_cc//cc:defs.bzl\", \"cc_binary\")\n load(\n \"@org_tensorflow//third_party/llvm:llvm.bzl\",\n \"cmake_var_string\",\n@@ -22,6 +23,7 @@ genrule(\n name = \"kmp_i18n_id\",\n srcs = [\n \"runtime/tools/message-converter.pl\",\n+ \"runtime/tools/lib/tools.pm\",\n \"runtime/src/i18n/en_US.txt\",\n ],\n outs = [\"include/kmp_i18n_id.inc\"],\n@@ -32,6 +34,7 @@ genrule(\n name = \"kmp_i18n_default\",\n srcs = [\n \"runtime/tools/message-converter.pl\",\n+ \"runtime/tools/lib/tools.pm\",\n \"runtime/src/i18n/en_US.txt\",\n ],\n outs = [\"include/kmp_i18n_default.inc\"],\n@@ -110,6 +113,30 @@ expand_cmake_vars(\n )\n \n cppsources = [\n+ \"runtime/src/kmp_debug.h\",\n+ \"runtime/src/kmp_dispatch.h\",\n+ \"runtime/src/kmp_dispatch_hier.h\",\n+ \"runtime/src/kmp_environment.h\",\n+ \"runtime/src/kmp_affinity.h\",\n+ \"runtime/src/kmp_error.h\",\n+ \"runtime/src/kmp_ftn_entry.h\",\n+ \"runtime/src/kmp_ftn_os.h\",\n+ \"runtime/src/kmp.h\",\n+ \"runtime/src/kmp_i18n.h\",\n+ \"runtime/src/kmp_io.h\",\n+ \"runtime/src/kmp_itt.h\",\n+ \"runtime/src/kmp_itt.inl\",\n+ \"runtime/src/kmp_lock.h\",\n+ \"runtime/src/kmp_os.h\",\n+ \"runtime/src/kmp_settings.h\",\n+ \"runtime/src/kmp_stats.h\",\n+ \"runtime/src/kmp_str.h\",\n+ \"runtime/src/kmp_taskdeps.h\",\n+ \"runtime/src/kmp_version.h\",\n+ \"runtime/src/kmp_wait_release.h\",\n+ \"runtime/src/kmp_wrapper_getpid.h\",\n+ \"runtime/src/kmp_wrapper_malloc.h\",\n+ \"runtime/src/kmp_atomic.h\",\n \"runtime/src/kmp_alloc.cpp\",\n \"runtime/src/kmp_atomic.cpp\",\n \"runtime/src/kmp_csupport.cpp\",",
"filename": "third_party/llvm_openmp/BUILD",
"status": "modified"
}
]
}
|
{
"body": "Same issue as `reduce_variance` addressed in #37000 for `reduce_std`\r\n",
"comments": [
{
"body": "Please close this issue once related PR is merged.Thanks!",
"created_at": "2021-06-01T14:50:51Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/49941\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/49941\">No</a>\n",
"created_at": "2021-06-09T18:41:33Z"
}
],
"number": 49941,
"title": "Modify tf.math.reduce_std so that it is compatible with ragged tensors"
}
|
{
"body": "Fixes #49941 .\r\n\r\nTook help from my similar PRs for `reduce_variance` #37014 and #49609\r\n\r\ncc @mihaimaruseac , @edloper ",
"number": 49942,
"review_comments": [],
"title": "Added RaggedTensor support to reduce_std"
}
|
{
"commits": [
{
"message": "Added RaggedTensor support to reduce_std"
},
{
"message": "Resolved bazel error"
}
],
"files": [
{
"diff": "@@ -2555,6 +2555,7 @@ def reduce_std(input_tensor, axis=None, keepdims=False, name=None):\n \"\"\"\n name = name if name else \"reduce_std\"\n with ops.name_scope(name):\n+ input_tensor = ops.convert_to_tensor(input_tensor)\n variance = reduce_variance(input_tensor, axis=axis, keepdims=keepdims)\n return gen_math_ops.sqrt(variance)\n ",
"filename": "tensorflow/python/ops/math_ops.py",
"status": "modified"
},
{
"diff": "@@ -33,6 +33,7 @@\n from tensorflow.python.ops import math_ops\n from tensorflow.python.ops import resource_variable_ops\n from tensorflow.python.ops import variables\n+from tensorflow.python.ops.ragged import ragged_factory_ops\n from tensorflow.python.platform import googletest\n \n \n@@ -121,6 +122,14 @@ def testReduceStd(self):\n self.assertEqual(np.std(x_np), 0.5)\n self.assertEqual(self.evaluate(math_ops.reduce_std(x_np)), 0.5)\n \n+ x=ragged_factory_ops.constant([[5., 1., 4., 1.],\n+ [],\n+ [5., 9., 2.],\n+ [5.],\n+ []])\n+ self.assertAllClose(math_ops.reduce_std(x, axis=0),\n+ [0., 4., 1., 0.])\n+\n def testReduceStdComplex(self):\n # Ensure that complex values are handled to be consistent with numpy\n complex_ys = [([0 - 1j, 0 + 1j], dtypes.float64),",
"filename": "tensorflow/python/ops/math_ops_test.py",
"status": "modified"
},
{
"diff": "@@ -407,6 +407,7 @@ def is_supported(self, args, kwargs):\n math_ops.reduce_max,\n math_ops.reduce_mean,\n math_ops.reduce_variance,\n+ math_ops.reduce_std,\n math_ops.reduce_any,\n math_ops.reduce_all,\n string_ops.string_to_number,\n@@ -525,6 +526,7 @@ def _ragged_nn_dropout_v2(x, rate, noise_shape=None, seed=None, name=None):\n (math_ops.reduce_mean, ragged_math_ops.reduce_mean, ['input_tensor']),\n (math_ops.reduce_variance, ragged_math_ops.reduce_variance,\n ['input_tensor']),\n+ (math_ops.reduce_std, ragged_math_ops.reduce_std, ['input_tensor']),\n (math_ops.reduce_any, ragged_math_ops.reduce_any, ['input_tensor']),\n (math_ops.reduce_all, ragged_math_ops.reduce_all, ['input_tensor']),\n (nn_ops.dropout, _ragged_nn_dropout_v1, ['x']),",
"filename": "tensorflow/python/ops/ragged/ragged_dispatch.py",
"status": "modified"
},
{
"diff": "@@ -595,6 +595,15 @@ def testBinaryOpSparseAndRagged(self):\n 1\n },\n expected=[1., 6.]),\n+ dict(\n+ op=math_ops.reduce_std,\n+ kwargs={\n+ 'input_tensor':\n+ ragged_factory_ops.constant_value([[1, 3], [1, 2, 2, 1]]),\n+ 'axis':\n+ 1\n+ },\n+ expected=[1., 0.5]),\n dict(\n op=math_ops.reduce_any,\n kwargs={\n@@ -744,7 +753,8 @@ def test_ragged_op_list(self):\n 'math.maximum', 'math.minimum', 'math.multiply', 'math.negative',\n 'math.not_equal', 'math.pow', 'math.real', 'math.reciprocal',\n 'math.reduce_any', 'math.reduce_max', 'math.reduce_mean',\n- 'math.reduce_variance', 'math.reduce_min', 'math.reduce_prod',\n+ 'math.reduce_variance', 'math.reduce_std', 'math.reduce_min',\n+ 'math.reduce_prod',\n 'math.reduce_sum', 'math.rint', 'math.round', 'math.rsqrt', 'math.sign',\n 'math.sin', 'math.sinh', 'math.sqrt', 'math.square',\n 'math.squared_difference', 'math.subtract', 'math.tan', 'math.truediv',",
"filename": "tensorflow/python/ops/ragged/ragged_dispatch_test.py",
"status": "modified"
},
{
"diff": "@@ -422,6 +422,12 @@ def _set_ragged_segment_docstring(func, combination, combined):\n >>> tf.math.reduce_variance(rt, axis=1).numpy()\n array([2., 0.25, 0., 2.25])\n \"\"\"\n+_RAGGED_REDUCE_STD_EXAMPLE = \"\"\"\n+ >>> rt = tf.ragged.constant([[1, 0], [2, 1], [3], [4, 1]],\n+ ... dtype=tf.float64)\n+ >>> tf.math.reduce_std(rt, axis=1).numpy()\n+ array([0.5, 0.5, 0., 1.5])\n+\"\"\"\n _RAGGED_REDUCE_ALL_EXAMPLE = \"\"\"\n >>> rt = tf.ragged.constant([[True, True], [True, True, False, True], [False, True]])\n >>> tf.reduce_all(rt, axis=0).numpy()\n@@ -647,6 +653,13 @@ def reduce_variance(input_tensor, axis=None, keepdims=False, name=None):\n return mean_of_square - square_of_mean\n \n \n+def reduce_std(input_tensor, axis=None, keepdims=False, name=None):\n+ \"\"\"For docs, see: _RAGGED_REDUCE_DOCSTRING.\"\"\"\n+ with ops.name_scope(name, \"RaggedReduceStd\", [input_tensor, axis]):\n+ variance = reduce_variance(input_tensor, axis=axis, keepdims=keepdims)\n+ return math_ops.sqrt(variance)\n+\n+\n def _cast(input_tensor, dtype):\n return ragged_functional_ops.map_flat_values(math_ops.cast, input_tensor,\n dtype)\n@@ -690,6 +703,8 @@ def _set_ragged_reduce_docstring(func, combination, combined, default, example):\n _RAGGED_REDUCE_MEAN_EXAMPLE)\n _set_ragged_reduce_docstring(reduce_variance, 'variance', 'averaged', 'NaN',\n _RAGGED_REDUCE_VARIANCE_EXAMPLE)\n+_set_ragged_reduce_docstring(reduce_std, 'std', 'averaged', 'NaN',\n+ _RAGGED_REDUCE_STD_EXAMPLE)\n _set_ragged_reduce_docstring(reduce_all, 'logical and', 'and-ed', 'True',\n _RAGGED_REDUCE_ALL_EXAMPLE)\n _set_ragged_reduce_docstring(reduce_any, 'logical or', 'or-ed', 'False',",
"filename": "tensorflow/python/ops/ragged/ragged_math_ops.py",
"status": "modified"
},
{
"diff": "@@ -43,6 +43,10 @@ def variance(*values):\n return np.var(values, dtype=np.float64)\n \n \n+def std(*values):\n+ return np.std(values, dtype=np.float64)\n+\n+\n @test_util.run_all_in_graph_and_eager_modes\n class RaggedReduceOpsTest(test_util.TensorFlowTestCase, parameterized.TestCase):\n \n@@ -138,6 +142,12 @@ class RaggedReduceOpsTest(test_util.TensorFlowTestCase, parameterized.TestCase):\n axis=0,\n keepdims=False,\n expected=[9.6875, 0.0, 0.0]),\n+ dict(\n+ ragged_reduce_op=ragged_math_ops.reduce_std,\n+ rt_input=[[3, 1, 4], [3, 1], [2], [2, 1]],\n+ axis=0,\n+ keepdims=False,\n+ expected=[0.5, 0., 0.]),\n dict(\n ragged_reduce_op=ragged_math_ops.reduce_any,\n rt_input=[[True, True], [True, True, False, True], [False, True]],\n@@ -248,6 +258,12 @@ class RaggedReduceOpsTest(test_util.TensorFlowTestCase, parameterized.TestCase):\n axis=0,\n keepdims=True,\n expected=[[9.6875, 0., 0.]]),\n+ dict(\n+ ragged_reduce_op=ragged_math_ops.reduce_std,\n+ rt_input=[[3, 1, 4], [3, 1], [2], [2, 1]],\n+ axis=0,\n+ keepdims=True,\n+ expected=[[0.5, 0., 0.]]),\n dict(\n ragged_reduce_op=ragged_math_ops.reduce_any,\n rt_input=[[True, True], [True, True, False, True], [False, True]],\n@@ -320,6 +336,12 @@ class RaggedReduceOpsTest(test_util.TensorFlowTestCase, parameterized.TestCase):\n axis=None,\n keepdims=False,\n expected=variance(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)),\n+ dict(\n+ ragged_reduce_op=ragged_math_ops.reduce_std,\n+ rt_input=[[0, 1, 2, 3], [4], [], [5, 6], [7], [8, 9]],\n+ axis=None,\n+ keepdims=False,\n+ expected=std(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)),\n # axis=0\n dict(\n ragged_reduce_op=ragged_math_ops.reduce_sum,\n@@ -359,6 +381,12 @@ class RaggedReduceOpsTest(test_util.TensorFlowTestCase, parameterized.TestCase):\n keepdims=False,\n expected=[variance(0, 1, 2, 3, 4),\n variance(1, 1, 1), 0, 0]),\n+ dict(\n+ ragged_reduce_op=ragged_math_ops.reduce_std,\n+ rt_input=[[1, 1, 2, 3], [1], [], [1, 1], [1], [1, 1]],\n+ axis=0,\n+ keepdims=False,\n+ expected=[std(1, 1, 1, 1, 1), std(1, 1, 1), 0, 0]),\n # axis=1\n # Note: we don't test mean here because it gives a NaN, and this will\n # cause assertEqual to fail (since NaN != NaN). 
See testMeanNan().\n@@ -555,6 +583,24 @@ class RaggedReduceOpsTest(test_util.TensorFlowTestCase, parameterized.TestCase):\n keepdims=False,\n expected=[[variance(6, 2), variance(6, 9, 9)], [variance(6, 7), 0.],\n [0.]]),\n+ dict(\n+ ragged_reduce_op=ragged_math_ops.reduce_std,\n+ rt_input=[[[6, 2], [3, 4, 5]], [[6, 7], [8]], [[9]]],\n+ axis=0,\n+ keepdims=False,\n+ expected=[[std(6, 6, 9), std(2, 7)], [std(3, 8), 0., 0.]]),\n+ dict(\n+ ragged_reduce_op=ragged_math_ops.reduce_std,\n+ rt_input=[[[6, 2], [3, 4, 5]], [[6, 7], [8]], [[9]]],\n+ axis=1,\n+ keepdims=False,\n+ expected=[[std(6, 3), std(2, 4), 0.], [std(6, 8), 0.], [0.]]),\n+ dict(\n+ ragged_reduce_op=ragged_math_ops.reduce_std,\n+ rt_input=[[[6, 2], [6, 9, 9]], [[6, 7], [8]], [[9]]],\n+ axis=2,\n+ keepdims=False,\n+ expected=[[std(6, 2), std(6, 9, 9)], [std(6, 7), 0.], [0.]]),\n \n # Test case for GitHub issue 27497, multiple negative axes.\n dict(\n@@ -610,6 +656,20 @@ def testVarianceNan(self):\n reduced = ragged_math_ops.reduce_variance(rt_input, axis=1)\n self.assertEqualWithNan(self.evaluate(reduced), expected)\n \n+ def testStdNan(self):\n+ rt_as_list = [[0, 1, 1, 0], [4], [], [5, 6], [7], [8, 9]]\n+ expected = ([\n+ std(0, 1, 1, 0),\n+ std(4),\n+ std(),\n+ std(5, 6),\n+ std(7),\n+ std(8, 9)\n+ ])\n+ rt_input = ragged_factory_ops.constant(rt_as_list)\n+ reduced = ragged_math_ops.reduce_std(rt_input, axis=1)\n+ self.assertEqualWithNan(self.evaluate(reduced), expected)\n+\n def testMeanWithTensorInputs(self):\n tensor = [[1.0, 2.0, 3.0], [10.0, 20.0, 30.0]]\n expected = [2.0, 20.0]\n@@ -622,6 +682,12 @@ def testVarianceWithTensorInputs(self):\n reduced = ragged_math_ops.reduce_variance(tensor, axis=1)\n self.assertAllEqual(reduced, expected)\n \n+ def testStdWithTensorInputs(self):\n+ tensor = [[1.0, 2.0, 2.0, 1.0], [10.0, 20.0, 20.0, 10.0]]\n+ expected = [0.5, 5.]\n+ reduced = ragged_math_ops.reduce_std(tensor, axis=1)\n+ self.assertAllEqual(reduced, expected)\n+\n def testErrors(self):\n rt_input = ragged_factory_ops.constant([[1, 2, 3], [4, 5]])\n axis = array_ops.placeholder_with_default(constant_op.constant([0]), None)",
"filename": "tensorflow/python/ops/ragged/ragged_reduce_op_test.py",
"status": "modified"
}
]
}
|
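A hedged usage sketch (not part of the records above) of the capability added in PR #49942: `tf.math.reduce_std` dispatching to the ragged implementation, mirroring the docstring example added in the diff.

```python
# Sketch only: assumes a TF build that includes the ragged reduce_std dispatch.
import tensorflow as tf

rt = tf.ragged.constant([[1., 0.], [2., 1.], [3.], [4., 1.]])
print(tf.math.reduce_std(rt, axis=1).numpy())  # expected: [0.5 0.5 0.  1.5]
```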
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator L2_POOL_2D from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1 (step 1): Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2 (step 2): Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\n\r\nThe next 3 steps are combined into a single PR3 with separate commits:\r\n\r\n(step 3): Copy operator from lite to micro making minimal changes and not including in the build\r\n(step 4): Delete extra code from the micro copy of the operator\r\n(step 5): Port micro copy of operator as necessary and add a corresponding test",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47814\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47814\">No</a>\n",
"created_at": "2021-06-09T16:03:51Z"
}
],
"number": 47814,
"title": "micro: port op L2_POOL_2D from lite"
}
|
{
"body": "When using the MicroInterpreter, the TfLiteEvalTensor must be the source of truth for tensor dims.\r\nIt is possible during the Prepare phase for an operator to change the location of the tensor dims to somewhere other than the Flatbuffer. When using the MicroInterpreter, the Flatbuffer is currently used as the source of truth for TfLiteTensor dims. This fix makes the TfLiteEvalTensor dims the source of truth when initializing a TfLiteTensor.\r\n\r\nAdditional fix for issue micro: port op L2_POOL_2D from lite #47814",
"number": 49921,
"review_comments": [],
"title": "Bug fix for TfLiteTensor dims"
}
|
{
"commits": [
{
"message": "Bug fix for TfLiteTensor dims\n\nWhen using the MicroInterpreter, the TfLiteEvalTensor must be the source of truth for tensor dims.\n\nAdditional fix for issue micro: port op L2_POOL_2D from lite #47814"
}
],
"files": [
{
"diff": "@@ -822,6 +822,10 @@ TfLiteTensor* MicroAllocator::AllocateTempTfLiteTensor(\n // point the corresponding buffer to the new TfLiteTensor data value.\n tensor->data.data =\n subgraph_allocations[subgraph_index].tensors[tensor_index].data.data;\n+ // TfLiteEvalTensor structs must also be the source of truth for the\n+ // TfLiteTensor dims.\n+ tensor->dims =\n+ subgraph_allocations[subgraph_index].tensors[tensor_index].dims;\n }\n return tensor;\n }",
"filename": "tensorflow/lite/micro/micro_allocator.cc",
"status": "modified"
}
]
}
|
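For context on the operator being ported in issue #47814: L2_POOL_2D reduces each pooling window to the root of the mean of squares (assuming the standard TFLite definition of L2 pooling). A minimal NumPy sketch for a single-channel input with stride equal to the filter size and no padding:

```python
import numpy as np

def l2_pool_2d(x, pool_h, pool_w):
    """L2 pooling over non-overlapping windows of a (H, W) array."""
    h, w = x.shape
    out = np.empty((h // pool_h, w // pool_w))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = x[i * pool_h:(i + 1) * pool_h, j * pool_w:(j + 1) * pool_w]
            out[i, j] = np.sqrt(np.mean(window ** 2))  # sqrt of the average of squares
    return out

x = np.array([[1.0, 2.0, 3.0, 4.0],
              [5.0, 6.0, 7.0, 8.0]])
print(l2_pool_2d(x, 2, 2))  # one value per 2x2 window
```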
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator L2_POOL_2D from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1 (step 1): Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2 (step 2): Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\n\r\nThe next 3 steps are combined into a single PR3 with separate commits:\r\n\r\n(step 3): Copy operator from lite to micro making minimal changes and not including in the build\r\n(step 4): Delete extra code from the micro copy of the operator\r\n(step 5): Port micro copy of operator as necessary and add a corresponding test",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47814\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47814\">No</a>\n",
"created_at": "2021-06-09T16:03:51Z"
}
],
"number": 47814,
"title": "micro: port op L2_POOL_2D from lite"
}
|
{
"body": "Also check that the AllocatePersistentBuffer method is available.\r\n\r\nAdditional fix for issue micro: port op L2_POOL_2D from lite #47814",
"number": 49916,
"review_comments": [],
"title": "Additional bug fix for CreateWritableTensorDimsWithCopy"
}
|
{
"commits": [
{
"message": "Additional bug fix for CreateWritableTensorDimsWithCopy\n\nAlso check that the AllocatePersistentBuffer method is available.\n\nAdditional fix for issue micro: port op L2_POOL_2D from lite #47814"
}
],
"files": [
{
"diff": "@@ -58,6 +58,7 @@ TfLiteStatus CreateWritableTensorDimsWithCopy(TfLiteContext* context,\n TfLiteEvalTensor* eval_tensor) {\n TF_LITE_ENSURE(context, tensor != nullptr);\n TF_LITE_ENSURE(context, eval_tensor != nullptr);\n+ TF_LITE_ENSURE(context, context->AllocatePersistentBuffer != nullptr);\n int ranks = tensor->dims->size;\n size_t alloc_size = TfLiteIntArrayGetSizeInBytes(ranks);\n TfLiteIntArray* new_dims = static_cast<TfLiteIntArray*>(",
"filename": "tensorflow/lite/micro/kernels/kernel_util.cc",
"status": "modified"
}
]
}
|
{
"body": "<em>Please make sure that this is a bug. As per our\r\n[GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md),\r\nwe only address code/doc bugs, performance issues, feature requests and\r\nbuild/installation issues on GitHub. tag:bug_template</em>\r\n\r\n**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Colab default\r\n- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: No\r\n- TensorFlow installed from (source or binary): Colab default\r\n- TensorFlow version (use command below): v2.4.1-0-g85c8b2a817f 2.4.1\r\n- Python version: 3\r\n- Bazel version (if compiling from source): No\r\n- GCC/Compiler version (if compiling from source): No\r\n- CUDA/cuDNN version: No\r\n- GPU model and memory: No\r\n\r\n\r\n**Describe the current behavior**\r\nThere is a function convert_image_dtype that used in many augmentation ops like tf.image.stateless_random_brightness and etc.\r\nThat function has a rounding issue here https://github.com/tensorflow/tensorflow/blob/85c8b2a817f95a3e979ecd1ed95bff1dc1335cff/tensorflow/python/ops/image_ops_impl.py#L2290\r\nMaybe other cases also have this bug (not checked).\r\n\r\nWhen we casting float to int, we should use rounding. In convert_image_dtype rounding implemented as shift by 0.5 which is correct for max value, but is not correct for 0.\r\nSee example by link below.\r\n\r\n**Describe the expected behavior**\r\nEvery time convert_image_dtype changes value scale following casting to int, it should use rounding before casting.\r\n\r\n**Standalone code to reproduce the issue**\r\nhttps://colab.research.google.com/drive/1xZqyuAlZu_xkDZZHNsr7K8g6MyapcNPi?usp=sharing\r\n",
"comments": [
{
"body": "Was able to reproduce the issue in TF 2.4 and Nightly version as well. Please find the gist [here](https://colab.research.google.com/gist/saikumarchalla/5bbf7fed3fe931f4cc07306c91276f88/convert_image_dtype.ipynb).",
"created_at": "2021-04-23T16:54:46Z"
},
{
"body": "Thanks for reporting the issue!",
"created_at": "2021-05-06T18:46:41Z"
},
{
"body": "Hi, Can I take this issue?",
"created_at": "2021-05-25T10:17:12Z"
},
{
"body": "Yes, please feel free to do so.",
"created_at": "2021-05-25T16:13:50Z"
},
{
"body": "@kkimdev thank you. I did changes and try to check it by unit test, but have one small problem. Could you please tell me or send the link where I can find how to start the exact unit test? e.g ConvertImageTest.testNoConvert.\r\nCould you also tell me please how to check what tests failed because of me? And do you have something like Jenkins or whatever where I can check that I don't break something?",
"created_at": "2021-05-28T14:45:40Z"
},
{
"body": "@pointhex I think you can put unit test here https://github.com/tensorflow/tensorflow/blob/85c8b2a817f95a3e979ecd1ed95bff1dc1335cff/tensorflow/python/ops/image_ops_test.py and once you open a Github PR, it will kick off pre-submit testings. Thanks!",
"created_at": "2021-05-28T17:08:53Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/48701\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/48701\">No</a>\n",
"created_at": "2021-09-21T00:27:58Z"
},
{
"body": "The issue is still not resolved in TF master branch. It seems that PR https://github.com/tensorflow/tensorflow/pull/49868 was not merged.",
"created_at": "2021-12-02T05:43:45Z"
},
{
"body": "Here is a commit where this issue fix was removed https://github.com/tensorflow/tensorflow/commit/56ab2308f72da337865cf765a1844ed9e990d02e#diff-a5a22434f0c18768fc2e10c0e0420ac6f111a5802f67e3df54f155bfefc7094f",
"created_at": "2021-12-06T07:38:36Z"
}
],
"number": 48701,
"title": "Function used in many augmentations (convert_image_dtype) has an issue"
}
|
{
"body": "Fixes #48701",
"number": 49868,
"review_comments": [],
"title": "Made rounding in convert_image_dtype for numbers close to zero"
}
|
{
"commits": [
{
"message": "Made rounding in convert_image_dtype for numbers close to zero\n\nFixes #48701"
}
],
"files": [
{
"diff": "@@ -196,7 +196,7 @@ def testAdjustPositiveHue(self):\n x_np = np.array(x_data, dtype=np.uint8).reshape(x_shape)\n \n delta = 0.25\n- y_data = [13, 0, 11, 226, 54, 221, 234, 8, 92, 1, 217, 255]\n+ y_data = [13, 0, 12, 226, 54, 221, 234, 8, 92, 1, 217, 255]\n y_np = np.array(y_data, dtype=np.uint8).reshape(x_shape)\n \n with self.session():\n@@ -214,7 +214,7 @@ def testBatchAdjustHue(self):\n x_np = np.array(x_data, dtype=np.uint8).reshape(x_shape)\n \n delta = 0.25\n- y_data = [13, 0, 11, 226, 54, 221, 234, 8, 92, 1, 217, 255]\n+ y_data = [13, 0, 12, 226, 54, 221, 234, 8, 92, 1, 217, 255]\n y_np = np.array(y_data, dtype=np.uint8).reshape(x_shape)\n \n with self.session():\n@@ -322,7 +322,7 @@ def testHalfSaturation(self):\n x_np = np.array(x_rgb_data, dtype=np.uint8).reshape(x_shape)\n \n saturation_factor = 0.5\n- y_rgb_data = [6, 9, 13, 140, 180, 226, 135, 121, 234, 172, 255, 128]\n+ y_rgb_data = [7, 9, 13, 140, 180, 226, 136, 121, 234, 172, 255, 128]\n y_np = np.array(y_rgb_data, dtype=np.uint8).reshape(x_shape)\n \n with self.session():",
"filename": "tensorflow/compiler/tests/image_ops_test.py",
"status": "modified"
},
{
"diff": "@@ -2470,12 +2470,12 @@ def convert_image_dtype(image, dtype, saturate=False, name=None):\n return math_ops.multiply(cast, scale, name=name)\n else:\n # Converting from float: first scale, then cast\n- scale = dtype.max + 0.5 # avoid rounding problems in the cast\n- scaled = math_ops.multiply(image, scale)\n+ scaled = math_ops.multiply(image, dtype.max)\n+ rounded = math_ops.round(scaled)\n if saturate:\n- return math_ops.saturate_cast(scaled, dtype, name=name)\n+ return math_ops.saturate_cast(rounded, dtype, name=name)\n else:\n- return math_ops.cast(scaled, dtype, name=name)\n+ return math_ops.cast(rounded, dtype, name=name)\n \n \n @tf_export('image.rgb_to_grayscale')",
"filename": "tensorflow/python/ops/image_ops_impl.py",
"status": "modified"
},
{
"diff": "@@ -179,7 +179,7 @@ def _RGBToGrayscale(self, images):\n green = images[batch, y, x, 1]\n blue = images[batch, y, x, 2]\n gray = 0.2989 * red + 0.5870 * green + 0.1140 * blue\n- out[batch, y, x, 0] = int(gray)\n+ out[batch, y, x, 0] = np.round(gray)\n if not is_batch:\n out = np.squeeze(out, axis=0)\n return out\n@@ -344,7 +344,7 @@ def _test_adjust_gamma_uint8(self, gamma):\n # then perform correction\n y_np = np.power(x_np / 255.0, gamma)\n # convert correct numpy image back to uint8 type\n- y_np = np.trunc(np.clip(y_np * 255.5, 0, 255.0))\n+ y_np = np.round(np.clip(y_np * 255.0, 0, 255.0))\n \n self.assertAllClose(y_tf, y_np, 1e-6)\n \n@@ -436,7 +436,7 @@ def testAdjustPositiveHue(self):\n x_np = np.array(x_data, dtype=np.uint8).reshape(x_shape)\n \n delta = 0.25\n- y_data = [13, 0, 11, 226, 54, 221, 234, 8, 92, 1, 217, 255]\n+ y_data = [13, 0, 12, 226, 54, 221, 234, 8, 92, 1, 217, 255]\n y_np = np.array(y_data, dtype=np.uint8).reshape(x_shape)\n \n with self.cached_session():\n@@ -451,7 +451,7 @@ def testBatchAdjustHue(self):\n x_np = np.array(x_data, dtype=np.uint8).reshape(x_shape)\n \n delta = 0.25\n- y_data = [13, 0, 11, 226, 54, 221, 234, 8, 92, 1, 217, 255]\n+ y_data = [13, 0, 12, 226, 54, 221, 234, 8, 92, 1, 217, 255]\n y_np = np.array(y_data, dtype=np.uint8).reshape(x_shape)\n \n with self.cached_session():\n@@ -907,7 +907,7 @@ def testHalfSaturation(self):\n x_np = np.array(x_data, dtype=np.uint8).reshape(x_shape)\n \n saturation_factor = 0.5\n- y_data = [6, 9, 13, 140, 180, 226, 135, 121, 234, 172, 255, 128]\n+ y_data = [7, 9, 13, 140, 180, 226, 136, 121, 234, 172, 255, 128]\n y_np = np.array(y_data, dtype=np.uint8).reshape(x_shape)\n \n with self.cached_session():\n@@ -937,7 +937,7 @@ def testBatchSaturation(self):\n x_np = np.array(x_data, dtype=np.uint8).reshape(x_shape)\n \n saturation_factor = 0.5\n- y_data = [6, 9, 13, 140, 180, 226, 135, 121, 234, 172, 255, 128]\n+ y_data = [7, 9, 13, 140, 180, 226, 136, 121, 234, 172, 255, 128]\n y_np = np.array(y_data, dtype=np.uint8).reshape(x_shape)\n \n with self.cached_session():\n@@ -1518,7 +1518,7 @@ def testDoubleContrastUint8(self):\n x_data = [0, 5, 13, 54, 135, 226, 37, 8, 234, 90, 255, 1]\n x_np = np.array(x_data, dtype=np.uint8).reshape(x_shape)\n \n- y_data = [0, 0, 0, 62, 169, 255, 28, 0, 255, 135, 255, 0]\n+ y_data = [0, 0, 0, 63, 169, 255, 29, 0, 255, 135, 255, 0]\n y_np = np.array(y_data, dtype=np.uint8).reshape(x_shape)\n \n self._testContrast(x_np, y_np, contrast_factor=2.0)\n@@ -1541,7 +1541,7 @@ def testHalfContrastUint8(self):\n x_data = [0, 5, 13, 54, 135, 226, 37, 8, 234, 90, 255, 1]\n x_np = np.array(x_data, dtype=np.uint8).reshape(x_shape)\n \n- y_data = [22, 52, 65, 49, 118, 172, 41, 54, 176, 67, 178, 59]\n+ y_data = [23, 53, 66, 50, 118, 172, 41, 54, 176, 68, 178, 60]\n y_np = np.array(y_data, dtype=np.uint8).reshape(x_shape)\n \n self._testContrast(x_np, y_np, contrast_factor=0.5)\n@@ -4637,6 +4637,25 @@ def testConvertBetweenInt16AndInt8(self):\n self._convert([0, 255 * 256], dtypes.uint16, dtypes.int16, [0, 255 * 128])\n self._convert([0, 255 * 128], dtypes.int16, dtypes.uint16, [0, 255 * 256])\n \n+ def testConvertBetweenFloat32AndInt8SmallNumbers(self):\n+ with self.cached_session():\n+ image0 = constant_op.constant(0.)\n+ x0 = image_ops.convert_image_dtype(image0 + 0.4 / 255., 'uint8',\n+ \tsaturate=True)\n+ y0 = image_ops.convert_image_dtype(image0 + 0.6 / 255., 'uint8',\n+ \tsaturate=True)\n+ self.assertAllEqual(x0, constant_op.constant(0.))\n+ self.assertAllEqual(y0, 
constant_op.constant(1.))\n+\n+ image1 = constant_op.constant(1.)\n+ x1 = image_ops.convert_image_dtype(image1 - 0.4 / 255., 'uint8',\n+ \tsaturate=True)\n+ y1 = image_ops.convert_image_dtype(image1 - 0.6 / 255., 'uint8',\n+ \tsaturate=True)\n+ self.assertAllEqual(x1, constant_op.constant(255))\n+ self.assertAllEqual(y1, constant_op.constant(254))\n+\n+\n \n class TotalVariationTest(test_util.TensorFlowTestCase):\n \"\"\"Tests the function total_variation() in image_ops.",
"filename": "tensorflow/python/ops/image_ops_test.py",
"status": "modified"
}
]
}
|
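To make the rounding problem from issue #48701 concrete, here is a small NumPy sketch (not TensorFlow code; saturation is ignored) comparing the old scale-by-(max + 0.5)-then-truncate behavior with the scale-round-cast behavior introduced by this PR:

```python
import numpy as np

def old_convert(x):
    # Old behavior: scale by dtype.max + 0.5, then let the integer cast truncate.
    return np.uint8(x * 255.5)

def new_convert(x):
    # Behavior after this PR: scale by dtype.max, round, then cast.
    return np.uint8(np.round(x * 255.0))

# Values just above 0 and just below 1.
for x in (0.4 / 255, 0.6 / 255, 1 - 0.6 / 255, 1.0):
    print(f"{x:.6f} -> old: {old_convert(x)}, new: {new_convert(x)}")
```

Near 1.0 both schemes agree, but for x = 0.6 / 255 the old scheme yields 0 while the nearest uint8 value is 1 — exactly the case exercised by the new testConvertBetweenFloat32AndInt8SmallNumbers test.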
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator BATCH_MATMUL from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro making minimal changes and not including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test\r\n",
"comments": [
{
"body": "Hi @ddavis-2015, are you stilling planning on integrating this?",
"created_at": "2023-07-19T21:39:06Z"
},
{
"body": "@pkgoogle A new PR based on this PR is in progress. The new PR will appear in the tflite-micro repo when ready.",
"created_at": "2023-07-20T06:28:58Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46504\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46504\">No</a>\n",
"created_at": "2023-07-20T06:39:58Z"
}
],
"number": 46504,
"title": "micro: port op BATCH_MATMUL from lite"
}
|
{
"body": "micro: port operator BATCH_MATMUL kernel from lite with test\r\n\r\nComplete implementation of TFLM operator BATCH_MATMUL and associated TFLM test code.\r\n\r\nPR step 5 of the work to port operator BATCH_MATMUL as tracked in Issue #46504",
"number": 49751,
"review_comments": [],
"title": "micro:BATCH_MATMUL PR3-5"
}
|
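Before the full kernel in the next cell, a small NumPy sketch (not TFLM code) of the batch-matmul semantics the op implements: the leading batch dimensions follow broadcasting rules, and adj_x / adj_y optionally transpose the last two dimensions of the LHS / RHS:

```python
import numpy as np

def batch_matmul(lhs, rhs, adj_x=False, adj_y=False):
    """LHS <..., A, B> x RHS <..., B, C>; leading dims obey broadcasting rules."""
    if adj_x:
        lhs = np.swapaxes(lhs, -1, -2)  # adjoint of LHS: swap its last two dims
    if adj_y:
        rhs = np.swapaxes(rhs, -1, -2)  # adjoint of RHS
    return np.matmul(lhs, rhs)          # np.matmul broadcasts the batch dims

# Mirrors BatchMatMulOpTestFloat32Test_Simple below: (1, 2, 3) x (1, 3, 4) -> (1, 2, 4)
lhs = np.arange(1, 7, dtype=np.float32).reshape(1, 2, 3)
rhs = np.arange(7, 19, dtype=np.float32).reshape(1, 3, 4)
print(batch_matmul(lhs, rhs))  # [[74, 80, 86, 92], [173, 188, 203, 218]]
```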
{
"commits": [
{
"message": "micro: copy operator BATCH_MATMUL kernel from lite\n\nThis is a copy without modification of the kernel and test for\noperator BATCH_MATMUL from tensorflow/lite/kernels.\nAdaptations to micro and addition to the micro build to follow.\n\nPR step 3 of the work to port operator as tracked in Issue #46504"
},
{
"message": "micro: prepare to port operator BATCH_MATMUL kernel from lite with test\n\nImplement skeleton (non-working) code for operator and test.\nHeader files changed.\nNamespaces changed.\nSome original code deleted.\nSome original code modified.\n\nPR step 4 of the work to port operator BATCH_MATMUL as tracked in Issue #46504"
},
{
"message": "micro: port operator BATCH_MATMUL kernel from lite with test\n\nComplete implementation of TFLM operator BATCH_MATMUL and associated TFLM test code.\n\nPR step 5 of the work to port operator BATCH_MATMUL as tracked in Issue #46504"
}
],
"files": [
{
"diff": "@@ -27,6 +27,7 @@ AllOpsResolver::AllOpsResolver() {\n AddArgMax();\n AddArgMin();\n AddAveragePool2D();\n+ AddBatchMatMul();\n AddBatchToSpaceNd();\n AddCeil();\n AddConcatenation();",
"filename": "tensorflow/lite/micro/all_ops_resolver.cc",
"status": "modified"
},
{
"diff": "@@ -36,6 +36,20 @@ cc_library(\n ],\n )\n \n+cc_library(\n+ name = \"batch_matmul_test_util\",\n+ hdrs = [\"batch_matmul_test_util.h\"],\n+ deps = [\n+ \"//tensorflow/lite/c:common\",\n+ \"//tensorflow/lite/kernels/internal:compatibility\",\n+ \"//tensorflow/lite/kernels/internal:types\",\n+ \"//tensorflow/lite/micro:debug_log\",\n+ \"//tensorflow/lite/micro:micro_utils\",\n+ \"//tensorflow/lite/micro:test_helpers\",\n+ \"//tensorflow/lite/micro/testing:micro_test\",\n+ ],\n+)\n+\n cc_library(\n name = \"circular_buffer_flexbuffers_generated_data\",\n srcs = [\n@@ -265,6 +279,7 @@ cc_library(\n \"add.cc\",\n \"add_n.cc\",\n \"arg_min_max.cc\",\n+ \"batch_matmul.cc\",\n \"batch_to_space_nd.cc\",\n \"cast.cc\",\n \"ceil.cc\",\n@@ -456,6 +471,22 @@ cc_test(\n ],\n )\n \n+cc_test(\n+ name = \"batch_matmul_test\",\n+ srcs = [\n+ \"batch_matmul_test.cc\",\n+ ],\n+ deps = [\n+ \":batch_matmul_test_util\",\n+ \":kernel_runner\",\n+ \"//tensorflow/lite/c:common\",\n+ \"//tensorflow/lite/micro:debug_log\",\n+ \"//tensorflow/lite/micro:op_resolvers\",\n+ \"//tensorflow/lite/micro:test_helpers\",\n+ \"//tensorflow/lite/micro/testing:micro_test\",\n+ ],\n+)\n+\n cc_test(\n name = \"batch_to_space_nd_test\",\n srcs = [",
"filename": "tensorflow/lite/micro/kernels/BUILD",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,637 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+\n+#include \"tensorflow/lite/kernels/internal/reference/batch_matmul.h\"\n+\n+#include <algorithm>\n+#include <cstdint>\n+#include <limits>\n+\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/kernels/internal/quantization_util.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/process_broadcast_shapes.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/transpose.h\"\n+#include \"tensorflow/lite/kernels/internal/tensor_ctypes.h\"\n+#include \"tensorflow/lite/kernels/internal/types.h\"\n+#include \"tensorflow/lite/kernels/kernel_util.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_util.h\"\n+\n+namespace tflite {\n+namespace {\n+\n+constexpr int kInputLHSTensor = 0;\n+constexpr int kInputRHSTensor = 1;\n+constexpr int kOutputTensor = 0;\n+\n+constexpr int kInvalidScratchBufferIndex = -1;\n+\n+struct QuantizationOpData {\n+ // The scaling factor from input to output (aka the 'real multiplier') can\n+ // be represented as a fixed point multiplier plus a left shift.\n+ int32_t output_multiplier;\n+ int output_shift; // exponent\n+\n+ // The range of the fused activation layer. 
For example for kNone and\n+ // int8_t these would be -128 and 127.\n+ int32_t output_activation_min;\n+ int32_t output_activation_max;\n+\n+ int32_t lhs_zero_point;\n+ int32_t rhs_zero_point;\n+ int32_t output_zero_point;\n+};\n+\n+struct HybridOpData {\n+ float filter_scale; // RHS tensor scale\n+\n+ // scratch buffer indices\n+ int input_quantized_index;\n+ int scaling_factors_index;\n+ int input_offsets_index;\n+\n+ // row_sums_buffer may be re-used across eval calls\n+ int32_t* row_sums_buffer;\n+\n+ bool compute_row_sums;\n+};\n+\n+struct OpData {\n+ union {\n+ QuantizationOpData* quantization;\n+ HybridOpData* hybrid;\n+ };\n+\n+ // Transpose tensors and state\n+ TfLiteEvalTensor* lhs_transposed_tensor;\n+ TfLiteEvalTensor* rhs_transposed_tensor;\n+ bool rhs_is_transposed;\n+ bool lhs_is_constant_tensor;\n+ bool rhs_is_constant_tensor;\n+};\n+\n+struct OpContext {\n+ OpContext(TfLiteContext* context, TfLiteNode* node) {\n+ params = static_cast<TfLiteBatchMatMulParams*>(node->builtin_data);\n+ opdata = static_cast<OpData*>(node->user_data);\n+ }\n+\n+ TfLiteBatchMatMulParams* params;\n+ OpData* opdata;\n+};\n+\n+struct PrepareOpContext : OpContext {\n+ PrepareOpContext(TfLiteContext* context, TfLiteNode* node)\n+ : OpContext(context, node) {\n+ lhs = GetInput(context, node, kInputLHSTensor);\n+ rhs = GetInput(context, node, kInputRHSTensor);\n+ output = GetOutput(context, node, kOutputTensor);\n+ }\n+\n+ const TfLiteTensor* lhs;\n+ const TfLiteTensor* rhs;\n+ TfLiteTensor* output;\n+};\n+\n+struct EvalOpContext : OpContext {\n+ EvalOpContext(TfLiteContext* context, TfLiteNode* node)\n+ : OpContext(context, node) {\n+ lhs = tflite::micro::GetEvalInput(context, node, kInputLHSTensor);\n+ rhs = tflite::micro::GetEvalInput(context, node, kInputRHSTensor);\n+ output = tflite::micro::GetEvalOutput(context, node, kOutputTensor);\n+ }\n+\n+ const TfLiteEvalTensor* lhs;\n+ const TfLiteEvalTensor* rhs;\n+ TfLiteEvalTensor* output;\n+};\n+\n+TfLiteStatus ResizeOutputTensor(TfLiteContext* context, TfLiteNode* node,\n+ const RuntimeShape& extended_lhs_shape,\n+ const RuntimeShape& extended_rhs_shape,\n+ bool adj_x, bool adj_y, int output_rank,\n+ TfLiteTensor* output) {\n+ auto orig_size = NumElements(output);\n+\n+ // make sure output tensor dims are not in the FlatBuffer\n+ TfLiteEvalTensor* output_eval =\n+ tflite::micro::GetEvalOutput(context, node, kOutputTensor);\n+ TF_LITE_ENSURE_OK(context, tflite::micro::CreateWritableTensorDimsWithCopy(\n+ context, output, output_eval));\n+\n+ // Fill in any broadcast dimensions.\n+ for (int i = 0; i < output_rank - 2; ++i) {\n+ const int lhs_dim = extended_lhs_shape.Dims(i);\n+ const int rhs_dim = extended_rhs_shape.Dims(i);\n+ int broadcast_dim = lhs_dim;\n+ if ((lhs_dim != rhs_dim) && (lhs_dim == 1)) {\n+ broadcast_dim = rhs_dim;\n+ }\n+ output->dims->data[i] = broadcast_dim;\n+ }\n+ // Fill in the matmul dimensions.\n+ int lhs_rows_index = adj_x ? output_rank - 1 : output_rank - 2;\n+ int rhs_cols_index = adj_y ? 
output_rank - 2 : output_rank - 1;\n+\n+ output->dims->data[output_rank - 2] = extended_lhs_shape.Dims(lhs_rows_index);\n+ output->dims->data[output_rank - 1] = extended_rhs_shape.Dims(rhs_cols_index);\n+ output->dims->size = output_rank;\n+\n+ // Check that output tensor has not been resized\n+ // since TFLM doesn't support tensor resizing.\n+ TF_LITE_ENSURE_EQ(context, orig_size, NumElements(output));\n+\n+ return kTfLiteOk;\n+}\n+\n+TfLiteEvalTensor* AllocInitTransposeTensorFromTfLiteTensor(\n+ TfLiteContext* context, const TfLiteTensor& tensor) {\n+ TfLiteEvalTensor* eval_tensor = static_cast<TfLiteEvalTensor*>(\n+ context->AllocatePersistentBuffer(context, sizeof(TfLiteEvalTensor)));\n+\n+ eval_tensor->type = tensor.type;\n+\n+ const int tensor_rank = NumDimensions(&tensor);\n+ auto eval_dims_size = TfLiteIntArrayGetSizeInBytes(tensor_rank);\n+ eval_tensor->dims = static_cast<TfLiteIntArray*>(\n+ context->AllocatePersistentBuffer(context, eval_dims_size));\n+ eval_tensor->dims->size = tensor_rank;\n+ for (int i = 0; i < tensor_rank - 2; ++i) {\n+ eval_tensor->dims->data[i] = tensor.dims->data[i];\n+ }\n+ // Swap last two dimensions.\n+ eval_tensor->dims->data[tensor_rank - 2] = tensor.dims->data[tensor_rank - 1];\n+ eval_tensor->dims->data[tensor_rank - 1] = tensor.dims->data[tensor_rank - 2];\n+\n+ size_t eval_data_size = static_cast<size_t>(NumElements(&tensor));\n+ if (tensor.type == kTfLiteFloat32) {\n+ eval_data_size *= sizeof(float);\n+ }\n+ eval_tensor->data.data =\n+ context->AllocatePersistentBuffer(context, eval_data_size);\n+\n+ return eval_tensor;\n+}\n+\n+// Initializes tensors to store transposed operands.\n+// Allocate storage for hybrid quantization if needed.\n+// Allocate normal quantization data if needed.\n+TfLiteStatus InitializeTemporaries(TfLiteContext* context, TfLiteNode* node,\n+ const PrepareOpContext& op_context) {\n+ OpData* op_data = op_context.opdata;\n+ const TfLiteTensor* lhs = op_context.lhs;\n+ const TfLiteTensor* rhs = op_context.rhs;\n+\n+ // For \"hybrid\" quantization, we impose the constraint that the LHS\n+ // is float (typically an activation from a prior layer) and the RHS\n+ // is quantized int8.\n+ bool is_hybrid = (lhs->type == kTfLiteFloat32 && rhs->type == kTfLiteInt8);\n+ if (is_hybrid) {\n+ op_data->hybrid = static_cast<decltype(op_data->hybrid)>(\n+ context->AllocatePersistentBuffer(context, sizeof(*op_data->hybrid)));\n+ TF_LITE_ENSURE(context, op_data->hybrid != nullptr);\n+ op_data->hybrid->input_quantized_index = kInvalidScratchBufferIndex;\n+ op_data->hybrid->scaling_factors_index = kInvalidScratchBufferIndex;\n+ op_data->hybrid->row_sums_buffer = nullptr;\n+ op_data->hybrid->input_offsets_index = kInvalidScratchBufferIndex;\n+ } else if (lhs->type == kTfLiteInt8) {\n+ op_data->quantization = static_cast<decltype(op_data->quantization)>(\n+ context->AllocatePersistentBuffer(context,\n+ sizeof(*op_data->quantization)));\n+ TF_LITE_ENSURE(context, op_data->quantization != nullptr);\n+ } else {\n+ op_data->quantization = nullptr; // also op_data->hybrid\n+ }\n+\n+ // tensor for Transposed LHS;\n+ if (op_context.params->adj_x) {\n+ op_data->lhs_transposed_tensor =\n+ AllocInitTransposeTensorFromTfLiteTensor(context, *lhs);\n+ } else {\n+ op_data->lhs_transposed_tensor = nullptr;\n+ }\n+\n+ // We need a buffer for the RHS if we need to transpose the RHS. We\n+ // transpose by default, so that the two inputs (LHS and RHS) are in a proper\n+ // layout for our fast matrix multiplication routines. 
If the transpose flag\n+ // is set by the caller, the data is already in the desired layout.\n+ if (!op_context.params->adj_y) {\n+ op_data->rhs_transposed_tensor =\n+ AllocInitTransposeTensorFromTfLiteTensor(context, *rhs);\n+ } else {\n+ op_data->rhs_transposed_tensor = nullptr;\n+ }\n+\n+ // If we have to perform on-the-fly quantization (with quantized weights and\n+ // float inputs) first we need to quantize the inputs. Allocate temporary\n+ // buffer to store the intermediate quantized values, the batch scaling\n+ // factors, the input offsets, and persistent storage for the sums of the\n+ // rows for each weights matrix.\n+ // RHS = weights, LHS = inputs\n+ if (is_hybrid) {\n+ const int lhs_rank = NumDimensions(lhs);\n+ const int rhs_rank = NumDimensions(rhs);\n+ const int batch_size = op_context.params->adj_x\n+ ? lhs->dims->data[lhs_rank - 1]\n+ : lhs->dims->data[lhs_rank - 2];\n+ const int num_units = rhs->dims->data[rhs_rank - 1];\n+\n+ // Calculate the total number of LHS batches.\n+ int num_batches = 1;\n+ for (int i = 0; i < lhs_rank - 2; ++i) {\n+ num_batches *= lhs->dims->data[i];\n+ }\n+ int num_weights_matrices = 1;\n+ for (int i = 0; i < rhs_rank - 2; ++i) {\n+ num_weights_matrices *= rhs->dims->data[i];\n+ }\n+\n+ const size_t input_quantized_size = static_cast<size_t>(\n+ NumElements(lhs->dims) * TfLiteTypeGetSize(rhs->type));\n+ TF_LITE_ENSURE_OK(context, context->RequestScratchBufferInArena(\n+ context, input_quantized_size,\n+ &op_data->hybrid->input_quantized_index));\n+\n+ const size_t scaling_factors_size =\n+ static_cast<size_t>(batch_size * num_batches * sizeof(float));\n+ TF_LITE_ENSURE_OK(context, context->RequestScratchBufferInArena(\n+ context, scaling_factors_size,\n+ &op_data->hybrid->scaling_factors_index));\n+\n+ const size_t input_offsets_size =\n+ static_cast<size_t>(batch_size * num_batches * sizeof(int32_t));\n+ TF_LITE_ENSURE_OK(context, context->RequestScratchBufferInArena(\n+ context, input_offsets_size,\n+ &op_data->hybrid->input_offsets_index));\n+\n+ const size_t row_sums_size =\n+ static_cast<size_t>(num_weights_matrices * num_units * sizeof(int32_t));\n+ op_data->hybrid->row_sums_buffer = static_cast<int32_t*>(\n+ context->AllocatePersistentBuffer(context, row_sums_size));\n+ TF_LITE_ENSURE(context, op_data->hybrid->row_sums_buffer != nullptr);\n+\n+ op_data->hybrid->compute_row_sums = true;\n+ op_data->hybrid->filter_scale = rhs->params.scale;\n+ }\n+\n+ return kTfLiteOk;\n+}\n+\n+template <typename scalar>\n+void TransposeRowsColumnsImpl(const TfLiteEvalTensor& tensor_in,\n+ const scalar* input, TfLiteEvalTensor* tensor_out,\n+ scalar* output) {\n+ RuntimeShape transposed_shape(tflite::micro::GetTensorShape(&tensor_in));\n+ RuntimeShape shape(transposed_shape);\n+ TransposeParams params;\n+ int rank = shape.DimensionsCount();\n+ params.perm_count = rank;\n+ for (int i = 0; i < rank - 2; ++i) {\n+ params.perm[i] = i;\n+ }\n+ // Transpose the last two dimensions.\n+ params.perm[rank - 2] = rank - 1;\n+ params.perm[rank - 1] = rank - 2;\n+ transposed_shape.SetDim(rank - 1, shape.Dims(rank - 2));\n+ transposed_shape.SetDim(rank - 2, shape.Dims(rank - 1));\n+ reference_ops::Transpose(params, shape, input, transposed_shape, output);\n+}\n+\n+TfLiteStatus TransposeRowsColumns(TfLiteContext* context,\n+ const TfLiteEvalTensor& tensor_in,\n+ TfLiteEvalTensor* tensor_out) {\n+ if (tensor_in.type == kTfLiteFloat32) {\n+ TransposeRowsColumnsImpl<float>(\n+ tensor_in, tflite::micro::GetTensorData<float>(&tensor_in), tensor_out,\n+ 
tflite::micro::GetTensorData<float>(tensor_out));\n+ return kTfLiteOk;\n+ } else if (tensor_in.type == kTfLiteInt8) {\n+ TransposeRowsColumnsImpl<int8_t>(\n+ tensor_in, tflite::micro::GetTensorData<int8_t>(&tensor_in), tensor_out,\n+ tflite::micro::GetTensorData<int8_t>(tensor_out));\n+ return kTfLiteOk;\n+ } else {\n+ TF_LITE_KERNEL_LOG(context,\n+ \"BATCH_MATMUL can only transpose tensors with float, \"\n+ \"int8 type.\");\n+ return kTfLiteError;\n+ }\n+}\n+\n+RuntimeShape SwapRowColumnDims(const RuntimeShape& shape) {\n+ RuntimeShape swapped_shape(shape);\n+ const int32_t dims = shape.DimensionsCount();\n+ swapped_shape.SetDim(dims - 2, shape.Dims(dims - 1));\n+ swapped_shape.SetDim(dims - 1, shape.Dims(dims - 2));\n+ return swapped_shape;\n+}\n+\n+TfLiteStatus CalculateOpData(TfLiteContext* context, TfLiteNode* node) {\n+ TF_LITE_ENSURE_EQ(context, NumInputs(node), 2);\n+ TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);\n+\n+ PrepareOpContext op_context(context, node);\n+ const TfLiteTensor* lhs_data = op_context.lhs;\n+ TF_LITE_ENSURE(context, lhs_data != nullptr);\n+ const TfLiteTensor* rhs_data = op_context.rhs;\n+ TF_LITE_ENSURE(context, rhs_data != nullptr);\n+ TfLiteTensor* output = op_context.output;\n+ TF_LITE_ENSURE(context, output != nullptr);\n+\n+ TF_LITE_ENSURE(context, lhs_data->type == kTfLiteFloat32 ||\n+ lhs_data->type == kTfLiteInt8);\n+ TF_LITE_ENSURE(context, rhs_data->type == kTfLiteFloat32 ||\n+ rhs_data->type == kTfLiteInt8);\n+ // Either we have a hybrid quantization with a float32 and an int8 input,\n+ // otherwise both inputs should be of the same type.\n+ TF_LITE_ENSURE(context, (lhs_data->type == kTfLiteFloat32 &&\n+ rhs_data->type == kTfLiteInt8) ||\n+ lhs_data->type == rhs_data->type);\n+\n+ const int lhs_rank = NumDimensions(lhs_data);\n+ const int rhs_rank = NumDimensions(rhs_data);\n+ // Support dimensions between 2 and 4, inclusive.\n+ TF_LITE_ENSURE(context, lhs_rank >= 2);\n+ TF_LITE_ENSURE(context, lhs_rank <= 4);\n+ TF_LITE_ENSURE(context, rhs_rank >= 2);\n+ TF_LITE_ENSURE(context, rhs_rank <= 4);\n+\n+ TF_LITE_ENSURE_OK(context, InitializeTemporaries(context, node, op_context));\n+\n+ OpData* op_data = op_context.opdata;\n+ // If the RHS is constant, we only transpose once.\n+ op_data->rhs_is_transposed = false;\n+ op_data->lhs_is_constant_tensor = IsConstantTensor(lhs_data);\n+ op_data->rhs_is_constant_tensor = IsConstantTensor(rhs_data);\n+\n+ bool adj_x = op_context.params->adj_x;\n+ bool adj_y = op_context.params->adj_y;\n+\n+ // Note that quantized inference requires that all tensors have their\n+ // parameters set. This is usually done during quantized training.\n+ if (lhs_data->type == kTfLiteInt8) {\n+ TF_LITE_ENSURE(context, op_data->quantization != nullptr);\n+ double real_multiplier = 0.0;\n+ TF_LITE_ENSURE_STATUS(GetQuantizedConvolutionMultipler(\n+ context, lhs_data, rhs_data, output, &real_multiplier));\n+ QuantizeMultiplier(real_multiplier,\n+ &op_data->quantization->output_multiplier,\n+ &op_data->quantization->output_shift);\n+ // BatchMatMul has no fused activation functions. 
Therefore, set\n+ // output activation min and max to min and max of int8_t type.\n+ op_data->quantization->output_activation_min =\n+ std::numeric_limits<int8_t>::min();\n+ op_data->quantization->output_activation_max =\n+ std::numeric_limits<int8_t>::max();\n+\n+ // set zero_point for Int8 only\n+ op_data->quantization->lhs_zero_point = lhs_data->params.zero_point;\n+ op_data->quantization->rhs_zero_point = rhs_data->params.zero_point;\n+ op_data->quantization->output_zero_point = output->params.zero_point;\n+ }\n+\n+ const int output_rank = std::max(lhs_rank, rhs_rank);\n+ const RuntimeShape extended_lhs_shape =\n+ RuntimeShape::ExtendedShape(output_rank, GetTensorShape(lhs_data));\n+ const RuntimeShape extended_rhs_shape =\n+ RuntimeShape::ExtendedShape(output_rank, GetTensorShape(rhs_data));\n+\n+ // Ensure any batch dimensions obey broacasting rules.\n+ for (int i = 0; i < output_rank - 2; ++i) {\n+ const int lhs_dim = extended_lhs_shape.Dims(i);\n+ const int rhs_dim = extended_rhs_shape.Dims(i);\n+ if (lhs_dim != rhs_dim) {\n+ if (lhs_dim != 1) {\n+ TF_LITE_ENSURE_EQ(context, rhs_dim, 1);\n+ }\n+ }\n+ }\n+ // Ensure other dimensions work for matrix multiplication.\n+ int accum_dim_lhs = adj_x ? extended_lhs_shape.Dims(output_rank - 2)\n+ : extended_lhs_shape.Dims(output_rank - 1);\n+ int accum_dim_rhs = adj_y ? extended_rhs_shape.Dims(output_rank - 1)\n+ : extended_rhs_shape.Dims(output_rank - 2);\n+\n+ TF_LITE_ENSURE_EQ(context, accum_dim_lhs, accum_dim_rhs);\n+ TfLiteStatus status =\n+ ResizeOutputTensor(context, node, extended_lhs_shape, extended_rhs_shape,\n+ adj_x, adj_y, output_rank, output);\n+ return status;\n+}\n+\n+void* Init(TfLiteContext* context, const char* buffer, size_t length) {\n+ // This is a builtin op, so we don't use the contents in 'buffer', if any.\n+ // Instead, we allocate a new object to carry information from Prepare() to\n+ // Eval().\n+ TFLITE_DCHECK(context->AllocatePersistentBuffer != nullptr);\n+ return context->AllocatePersistentBuffer(context, sizeof(OpData));\n+}\n+\n+TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n+ return CalculateOpData(context, node);\n+}\n+\n+TfLiteStatus EvalHybrid(TfLiteContext* context, TfLiteNode* node,\n+ const OpData& data, const RuntimeShape& input_shape,\n+ const TfLiteEvalTensor& input,\n+ const RuntimeShape& filter_shape,\n+ const TfLiteEvalTensor& filter,\n+ TfLiteEvalTensor* output) {\n+ const auto* params =\n+ static_cast<TfLiteBatchMatMulParams*>(node->builtin_data);\n+ const int32_t num_input_dims = input_shape.DimensionsCount();\n+\n+ // Input row/cols have been swapped at this point, so dims are\n+ // {input_size, num_batches}\n+ const int input_size = input_shape.Dims(num_input_dims - 2);\n+ const int batch_size = input_shape.Dims(num_input_dims - 1);\n+\n+ int num_batches_to_quantize = batch_size;\n+ for (int i = 0; i < input_shape.DimensionsCount() - 2; ++i) {\n+ num_batches_to_quantize *= input_shape.Dims(i);\n+ }\n+ // Quantize input from float to uint8 + quantization params (scaling factor).\n+ float* scaling_factors_ptr = static_cast<float*>(\n+ context->GetScratchBuffer(context, data.hybrid->scaling_factors_index));\n+ int32_t* input_offset_ptr = static_cast<int32_t*>(\n+ context->GetScratchBuffer(context, data.hybrid->input_offsets_index));\n+ int32_t* row_sums_ptr = data.hybrid->row_sums_buffer;\n+ if (!params->asymmetric_quantize_inputs) {\n+ std::fill_n(input_offset_ptr, num_batches_to_quantize, 0);\n+ }\n+\n+ int8_t* quant_data = static_cast<int8_t*>(\n+ 
context->GetScratchBuffer(context, data.hybrid->input_quantized_index));\n+ const int8_t* filter_data = tflite::micro::GetTensorData<int8_t>(&filter);\n+ const float* input_ptr = tflite::micro::GetTensorData<float>(&input);\n+ // Quantize each batch independently.\n+ tensor_utils::BatchQuantizeFloats(input_ptr, num_batches_to_quantize,\n+ input_size, quant_data, scaling_factors_ptr,\n+ input_offset_ptr,\n+ params->asymmetric_quantize_inputs);\n+ for (int b = 0; b < num_batches_to_quantize; ++b) {\n+ // Incorporate scaling of the filter.\n+ scaling_factors_ptr[b] *= data.hybrid->filter_scale;\n+ }\n+\n+ RuntimeShape output_shape = tflite::micro::GetTensorShape(output);\n+ int output_size = NumElements(output->dims);\n+ std::fill_n(tflite::micro::GetTensorData<float>(output), output_size, 0.0f);\n+ reference_ops::BatchMatMul(\n+ filter_shape, filter_data, input_shape, quant_data, scaling_factors_ptr,\n+ input_offset_ptr, row_sums_ptr, tflite::micro::GetTensorShape(output),\n+ tflite::micro::GetTensorData<float>(output),\n+ &(data.hybrid->compute_row_sums));\n+\n+ return kTfLiteOk;\n+}\n+\n+TfLiteStatus EvalInt8(TfLiteContext* context, const OpData& data,\n+ const RuntimeShape& lhs_shape,\n+ const TfLiteEvalTensor& lhs,\n+ const RuntimeShape& rhs_shape,\n+ const TfLiteEvalTensor& rhs,\n+ const RuntimeShape& output_shape,\n+ TfLiteEvalTensor* output) {\n+ TF_LITE_ENSURE(context, data.quantization != nullptr);\n+\n+ // Reuse params struct from FullyConnected Op.\n+ FullyConnectedParams op_params;\n+ op_params.input_offset = -data.quantization->lhs_zero_point;\n+ op_params.weights_offset =\n+ -data.quantization->rhs_zero_point; // filter offset\n+ op_params.output_offset = data.quantization->output_zero_point;\n+ op_params.output_multiplier = data.quantization->output_multiplier;\n+ op_params.output_shift = data.quantization->output_shift;\n+ op_params.quantized_activation_min = data.quantization->output_activation_min;\n+ op_params.quantized_activation_max = data.quantization->output_activation_max;\n+ op_params.lhs_cacheable = data.lhs_is_constant_tensor;\n+ op_params.rhs_cacheable = data.rhs_is_constant_tensor;\n+\n+ // Note we pass RHS args first, LHS args second. 
See note for Eval.\n+ reference_ops::BatchMatMul<int8_t, int32_t>(\n+ op_params, rhs_shape, tflite::micro::GetTensorData<int8_t>(&rhs),\n+ lhs_shape, tflite::micro::GetTensorData<int8_t>(&lhs), output_shape,\n+ tflite::micro::GetTensorData<int8_t>(output));\n+\n+ return kTfLiteOk;\n+}\n+\n+TfLiteStatus EvalQuantized(TfLiteContext* context, TfLiteNode* node,\n+ const OpData& data, const RuntimeShape& lhs_shape,\n+ const TfLiteEvalTensor& lhs,\n+ const RuntimeShape& rhs_shape,\n+ const TfLiteEvalTensor& rhs,\n+ TfLiteEvalTensor* output) {\n+ if (lhs.type == kTfLiteFloat32 && rhs.type == kTfLiteInt8) {\n+ TF_LITE_ENSURE(context, data.hybrid != nullptr);\n+ TF_LITE_ENSURE(context, data.hybrid->row_sums_buffer != nullptr);\n+ TF_LITE_ENSURE(context, data.hybrid->input_quantized_index !=\n+ kInvalidScratchBufferIndex);\n+ TF_LITE_ENSURE(context, data.hybrid->scaling_factors_index !=\n+ kInvalidScratchBufferIndex);\n+ TF_LITE_ENSURE(context, data.hybrid->input_offsets_index !=\n+ kInvalidScratchBufferIndex);\n+ return EvalHybrid(context, node, data, lhs_shape, lhs, rhs_shape, rhs,\n+ output);\n+ } else if (lhs.type == kTfLiteInt8 && rhs.type == kTfLiteInt8) {\n+ return EvalInt8(context, data, lhs_shape, lhs, rhs_shape, rhs,\n+ tflite::micro::GetTensorShape(output), output);\n+ } else {\n+ TF_LITE_KERNEL_LOG(\n+ context, \"BATCH_MATMUL only supports hybrid, int8 quantization.\\n\");\n+ }\n+ return kTfLiteError;\n+}\n+\n+// Perform a batch matrix multiply on\n+// LHS <..., A, B> X RHS<..., B, C>\n+// where the leading dimensions of LHS and RHS obey broadcasting rules\n+// (this Op will apply broadcasting rules).\n+// We assume that LHS and RHS are both row oriented (adjacent values in memory\n+// are in the same row) and will output in the same memory layout. However,\n+// our fast GEMM libraries assume RCC layout (LHS row oriented,\n+// RHS column oriented, output column oriented). Therefore, we perform\n+// RHS <..., C, B> X LHS <..., B, A>\n+// where output is a C X A column-oriented, which is equivalent to\n+// A X C row-oriented.\n+TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n+ EvalOpContext op_context(context, node);\n+ OpData* op_data = op_context.opdata;\n+ const TfLiteEvalTensor* lhs = op_context.lhs;\n+ const TfLiteEvalTensor* rhs = op_context.rhs;\n+ TfLiteEvalTensor* output = op_context.output;\n+ RuntimeShape orig_lhs_shape = tflite::micro::GetTensorShape(lhs);\n+ RuntimeShape orig_rhs_shape = tflite::micro::GetTensorShape(rhs);\n+\n+ bool adj_y = op_context.params->adj_y;\n+ bool adj_x = op_context.params->adj_x;\n+\n+ TfLiteEvalTensor* rhs_tensor = adj_y ? const_cast<TfLiteEvalTensor*>(rhs)\n+ : op_data->rhs_transposed_tensor;\n+ TfLiteEvalTensor* lhs_tensor = adj_x ? op_data->lhs_transposed_tensor\n+ : const_cast<TfLiteEvalTensor*>(lhs);\n+ TF_LITE_ENSURE(context, rhs_tensor != nullptr);\n+ TF_LITE_ENSURE(context, lhs_tensor != nullptr);\n+ if (!adj_y) {\n+ // OLD-TODO(b/154760341) Constant tensors should already be transposed, but\n+ // we transpose once if necessary for now.\n+ if (!(op_data->rhs_is_constant_tensor && op_data->rhs_is_transposed)) {\n+ TransposeRowsColumns(context, *rhs, rhs_tensor);\n+ op_data->rhs_is_transposed = true;\n+ }\n+ }\n+ if (adj_x) {\n+ TransposeRowsColumns(context, *lhs, lhs_tensor);\n+ }\n+ RuntimeShape rhs_shape =\n+ adj_y ? orig_rhs_shape : SwapRowColumnDims(orig_rhs_shape);\n+ RuntimeShape lhs_shape =\n+ adj_x ? 
orig_lhs_shape : SwapRowColumnDims(orig_lhs_shape);\n+\n+ switch (rhs->type) {\n+ case kTfLiteFloat32:\n+ // Note we pass RHS args first, LHS args second. See note above.\n+ reference_ops::BatchMatMul(\n+ rhs_shape, tflite::micro::GetTensorData<float>(rhs_tensor), lhs_shape,\n+ tflite::micro::GetTensorData<float>(lhs_tensor),\n+ tflite::micro::GetTensorShape(output),\n+ tflite::micro::GetTensorData<float>(output));\n+ break;\n+ case kTfLiteInt8:\n+ return EvalQuantized(context, node, *op_data, lhs_shape, *lhs_tensor,\n+ rhs_shape, *rhs_tensor, output);\n+ default:\n+ TF_LITE_KERNEL_LOG(context,\n+ \"Currently BATCH_MATMUL doesn't support type: %s\",\n+ TfLiteTypeGetName(lhs->type));\n+ return kTfLiteError;\n+ }\n+ return kTfLiteOk;\n+}\n+\n+} // namespace\n+\n+TfLiteRegistration Register_BATCH_MATMUL() {\n+ return {/*init=*/Init,\n+ /*free=*/nullptr,\n+ /*prepare=*/Prepare,\n+ /*invoke=*/Eval,\n+ /*profiling_string=*/nullptr,\n+ /*builtin_code=*/0,\n+ /*custom_name=*/nullptr,\n+ /*version=*/0};\n+}\n+\n+} // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/batch_matmul.cc",
"status": "added"
},
{
"diff": "@@ -0,0 +1,666 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#include <cstddef>\n+#include <cstdint>\n+#include <initializer_list>\n+#include <type_traits>\n+\n+#include \"tensorflow/lite/c/builtin_op_data.h\"\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/kernels/internal/tensor_utils_common.h\"\n+#include \"tensorflow/lite/micro/kernels/batch_matmul_test_util.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_runner.h\"\n+#include \"tensorflow/lite/micro/test_helpers.h\"\n+#include \"tensorflow/lite/micro/testing/micro_test.h\"\n+\n+namespace tflite {\n+namespace testing {\n+namespace {\n+\n+template <typename T, size_t IN1, size_t IN2, size_t OUT>\n+class BatchMatMulOpModel : public TestOpModel<T, T, T, IN1, IN2, OUT> {\n+ public:\n+ BatchMatMulOpModel(const TestTensorData& lhs, const TestTensorData& rhs,\n+ bool adj_x = false, bool adj_y = false)\n+ : TestOpModel<T, T, T, IN1, IN2, OUT>(tflite::Register_BATCH_MATMUL()),\n+ adj_x_(adj_x),\n+ adj_y_(adj_y) {\n+ this->AddInput(lhs, kInputIndex0);\n+ this->AddInput(rhs, kInputIndex1);\n+ }\n+\n+ inline ElementArray<T, IN1>* lhs() { return &this->GetInput0(); }\n+ inline ElementArray<T, IN2>* rhs() { return &this->GetInput1(); }\n+\n+ void Invoke() {\n+ TfLiteTensor tensors[] = {\n+ CreateTensor(lhs()->data, &this->GetInputShape(kInputIndex0)),\n+ CreateTensor(rhs()->data, &this->GetInputShape(kInputIndex1)),\n+ CreateTensor(this->GetOutput().data, &this->GetOutputShape()),\n+ };\n+ constexpr int tensors_count = std::extent<decltype(tensors)>::value;\n+\n+ TfLiteBatchMatMulParams params;\n+ params.adj_x = adj_x_;\n+ params.adj_y = adj_y_;\n+ params.asymmetric_quantize_inputs = false;\n+ this->DoInvoke(¶ms, tensors, tensors_count);\n+ }\n+\n+ private:\n+ bool adj_x_;\n+ bool adj_y_;\n+};\n+\n+template <typename T, size_t IN1, size_t IN2, size_t OUT>\n+class QuantizedBatchMatMulOpModel : public TestOpModel<T, T, T, IN1, IN2, OUT> {\n+ public:\n+ QuantizedBatchMatMulOpModel(int units, int batches, const TestTensorData& lhs,\n+ const TestTensorData& output = {kTfLiteInt8},\n+ bool adj_x = false, bool adj_y = false)\n+ : TestOpModel<T, T, T, IN1, IN2, OUT>(tflite::Register_BATCH_MATMUL()),\n+ adj_x_(adj_x),\n+ adj_y_(adj_y) {\n+ int input_size = ElementCount(this->GetInputShape(kInputIndex0)) / batches;\n+\n+ this->AddInput(lhs, &this->GetInput0());\n+ this->AddInput({lhs.type,\n+ {input_size, units},\n+ 0,\n+ 0,\n+ this->GetInput0().scale,\n+ this->GetInput0().zero_point},\n+ &this->GetInput1());\n+ this->AddOutput(output);\n+ }\n+\n+ template <typename TRHS>\n+ void SetWeights(const std::initializer_list<float>& data) {\n+ this->template QuantizeAndPopulate<TRHS>(rhs(), data);\n+ }\n+\n+ template <typename TLHS>\n+ void SetInput(const std::initializer_list<float>& data) {\n+ this->template QuantizeAndPopulate<TLHS>(lhs(), data);\n+ }\n+\n+ inline ElementArray<T, 
IN1>* lhs() { return &this->GetInput0(); }\n+ inline ElementArray<T, IN2>* rhs() { return &this->GetInput1(); }\n+\n+ void Invoke() {\n+ TfLiteTensor tensors[] = {\n+ CreateQuantizedTensor(lhs()->data, &this->GetInputShape(kInputIndex0),\n+ lhs()->scale, lhs()->zero_point),\n+ CreateQuantizedTensor(rhs()->data, &this->GetInputShape(kInputIndex1),\n+ rhs()->scale, rhs()->zero_point),\n+ CreateQuantizedTensor(this->GetOutput().data, &this->GetOutputShape(),\n+ this->GetOutput().scale,\n+ this->GetOutput().zero_point),\n+ };\n+ constexpr int tensors_count = std::extent<decltype(tensors)>::value;\n+\n+ TfLiteBatchMatMulParams params;\n+ params.adj_x = adj_x_;\n+ params.adj_y = adj_y_;\n+ params.asymmetric_quantize_inputs = false;\n+ this->DoInvoke(¶ms, tensors, tensors_count);\n+ }\n+\n+ private:\n+ bool adj_x_;\n+ bool adj_y_;\n+};\n+\n+template <typename T1, typename T2, size_t IN1, size_t IN2, size_t OUT>\n+class HybridBatchMatMulOpModel : public TestOpModel<T1, T2, T1, IN1, IN2, OUT> {\n+ public:\n+ HybridBatchMatMulOpModel(int units, int batches, const TestTensorData& lhs,\n+ const TestTensorData& rhs,\n+ const TestTensorData& output = {kTfLiteFloat32},\n+ bool asymmetric_quantize_inputs = true)\n+ : TestOpModel<T1, T2, T1, IN1, IN2, OUT>(tflite::Register_BATCH_MATMUL()),\n+ asymmetric_quantize_inputs_(asymmetric_quantize_inputs) {\n+ this->AddInput(lhs, &this->GetInput0());\n+ this->AddInput(rhs, &this->GetInput1());\n+ }\n+\n+ void SetSignedWeights(const std::initializer_list<float>& data) {\n+ this->SignedSymmetricQuantizeAndPopulate(rhs(), data);\n+ }\n+\n+ void SetInput(const std::initializer_list<float>& data) {\n+ this->PopulateTensor(lhs(), data);\n+ }\n+\n+ inline ElementArray<T1, IN1>* lhs() { return &this->GetInput0(); }\n+ inline ElementArray<T2, IN2>* rhs() { return &this->GetInput1(); }\n+\n+ void Invoke() {\n+ TfLiteTensor tensors[] = {\n+ CreateTensor(lhs()->data, &this->GetInputShape(kInputIndex0)),\n+ CreateQuantizedTensor(rhs()->data, &this->GetInputShape(kInputIndex1),\n+ rhs()->scale, rhs()->zero_point),\n+ CreateTensor(this->GetOutput().data, &this->GetOutputShape()),\n+ };\n+ constexpr int tensors_count = std::extent<decltype(tensors)>::value;\n+\n+ TfLiteBatchMatMulParams params;\n+ params.adj_x = false;\n+ params.adj_y = false;\n+ params.asymmetric_quantize_inputs = asymmetric_quantize_inputs_;\n+ this->DoInvoke(¶ms, tensors, tensors_count);\n+ }\n+\n+ private:\n+ bool asymmetric_quantize_inputs_;\n+};\n+\n+} // namespace\n+} // namespace testing\n+} // namespace tflite\n+\n+using tflite::testing::BatchMatMulOpModel;\n+using tflite::testing::HybridBatchMatMulOpModel;\n+using tflite::testing::QuantizedBatchMatMulOpModel;\n+\n+TF_LITE_MICRO_TESTS_BEGIN\n+\n+TF_LITE_MICRO_TEST(BatchMatMulOpTestFloat32Test_Simple) {\n+ BatchMatMulOpModel<float, 6, 12, 8> model({kTfLiteFloat32, {1, 2, 3}},\n+ {kTfLiteFloat32, {1, 3, 4}});\n+ model.PopulateTensor<float>(model.lhs(), {1, 2, 3, 4, 5, 6});\n+ model.PopulateTensor<float>(model.rhs(),\n+ {7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18});\n+ model.Invoke();\n+ EXPECT_THAT(model.GetOutput(),\n+ ElementsAreArray({74., 80., 86., 92., 173., 188., 203., 218.}));\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAreArray({1, 2, 4}));\n+}\n+\n+TF_LITE_MICRO_TEST(BatchMatMulOpTestFloat32Test_SimpleRHSAdjoint) {\n+ BatchMatMulOpModel<float, 6, 12, 8> model(\n+ {kTfLiteFloat32, {1, 2, 3}}, {kTfLiteFloat32, {1, 4, 3}}, false, true);\n+ model.PopulateTensor<float>(model.lhs(), {1, 2, 3, 4, 5, 6});\n+ model.PopulateTensor<float>(model.rhs(),\n+ 
{7, 11, 15, 8, 12, 16, 9, 13, 17, 10, 14, 18});\n+ model.Invoke();\n+ EXPECT_THAT(model.GetOutput(),\n+ ElementsAreArray({74., 80., 86., 92., 173., 188., 203., 218.}));\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAreArray({1, 2, 4}));\n+}\n+\n+TF_LITE_MICRO_TEST(BatchMatMulOpTestFloat32Test_SimpleLHSAdjoint) {\n+ BatchMatMulOpModel<float, 6, 12, 8> model(\n+ {kTfLiteFloat32, {1, 3, 2}}, {kTfLiteFloat32, {1, 3, 4}}, true, false);\n+ model.PopulateTensor<float>(model.lhs(), {1, 4, 2, 5, 3, 6});\n+ model.PopulateTensor<float>(model.rhs(),\n+ {7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18});\n+ model.Invoke();\n+ EXPECT_THAT(model.GetOutput(),\n+ ElementsAreArray({74., 80., 86., 92., 173., 188., 203., 218.}));\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAreArray({1, 2, 4}));\n+}\n+\n+TF_LITE_MICRO_TEST(BatchMatMulOpTestFloat32Test_BatchSizeTwo) {\n+ BatchMatMulOpModel<float, 12, 24, 16> model({kTfLiteFloat32, {2, 2, 3}},\n+ {kTfLiteFloat32, {2, 3, 4}});\n+ model.PopulateTensor<float>(model.lhs(),\n+ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12});\n+ model.PopulateTensor<float>(model.rhs(),\n+ {7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,\n+ 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30});\n+ model.Invoke();\n+ EXPECT_THAT(\n+ model.GetOutput(),\n+ ElementsAreArray({74., 80., 86., 92., 173., 188., 203., 218., 560., 584.,\n+ 608., 632., 767., 800., 833., 866.}));\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAreArray({2, 2, 4}));\n+}\n+\n+TF_LITE_MICRO_TEST(BatchMatMulOpTestFloat32Test_Broadcast) {\n+ BatchMatMulOpModel<float, 12, 12, 16> model({kTfLiteFloat32, {2, 2, 3}},\n+ {kTfLiteFloat32, {3, 4}});\n+ model.PopulateTensor<float>(model.lhs(),\n+ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12});\n+ model.PopulateTensor<float>(model.rhs(),\n+ {7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18});\n+\n+ model.Invoke();\n+ EXPECT_THAT(\n+ model.GetOutput(),\n+ ElementsAreArray({74., 80., 86., 92., 173., 188., 203., 218., 272., 296.,\n+ 320., 344., 371., 404., 437., 470.}));\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAreArray({2, 2, 4}));\n+}\n+\n+TF_LITE_MICRO_TEST(BatchMatMulOpTestFloat32Test_BroadcastLHSAdjoint) {\n+ BatchMatMulOpModel<float, 12, 12, 16> model(\n+ {kTfLiteFloat32, {2, 3, 2}}, {kTfLiteFloat32, {3, 4}}, true, false);\n+ model.PopulateTensor<float>(model.lhs(),\n+ {1, 4, 2, 5, 3, 6, 7, 10, 8, 11, 9, 12});\n+ model.PopulateTensor<float>(model.rhs(),\n+ {7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18});\n+\n+ model.Invoke();\n+ EXPECT_THAT(\n+ model.GetOutput(),\n+ ElementsAreArray({74., 80., 86., 92., 173., 188., 203., 218., 272., 296.,\n+ 320., 344., 371., 404., 437., 470.}));\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAreArray({2, 2, 4}));\n+}\n+\n+TF_LITE_MICRO_TEST(BatchMatMulOpTestFloat32Test_Broadcast2) {\n+ BatchMatMulOpModel<float, 12, 24, 72> model({kTfLiteFloat32, {2, 1, 3, 2}},\n+ {kTfLiteFloat32, {3, 2, 4}});\n+ model.PopulateTensor<float>(model.lhs(),\n+ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12});\n+ model.PopulateTensor<float>(model.rhs(),\n+ {7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,\n+ 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30});\n+\n+ model.Invoke();\n+ EXPECT_THAT(\n+ model.GetOutput(),\n+ ElementsAreArray({29., 32., 35., 38., 65., 72., 79., 86., 101.,\n+ 112., 123., 134., 53., 56., 59., 62., 121., 128.,\n+ 135., 142., 189., 200., 211., 222., 77., 80., 83.,\n+ 86., 177., 184., 191., 198., 277., 288., 299., 310.,\n+ 137., 152., 167., 182., 173., 192., 211., 230., 209.,\n+ 232., 255., 278., 257., 272., 287., 302., 325., 344.,\n+ 363., 382., 393., 416., 439., 462., 377., 
392., 407.,\n+ 422., 477., 496., 515., 534., 577., 600., 623., 646.}));\n+\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAreArray({2, 3, 3, 4}));\n+}\n+\n+TF_LITE_MICRO_TEST(BatchMatMulOpTestFloat32Test_Broadcast2LHSAdjoint) {\n+ BatchMatMulOpModel<float, 12, 24, 72> model(\n+ {kTfLiteFloat32, {2, 1, 2, 3}}, {kTfLiteFloat32, {3, 2, 4}}, true, false);\n+ model.PopulateTensor<float>(model.lhs(),\n+ {1, 3, 5, 2, 4, 6, 7, 9, 11, 8, 10, 12});\n+ model.PopulateTensor<float>(model.rhs(),\n+ {7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,\n+ 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30});\n+\n+ model.Invoke();\n+ EXPECT_THAT(\n+ model.GetOutput(),\n+ ElementsAreArray({29., 32., 35., 38., 65., 72., 79., 86., 101.,\n+ 112., 123., 134., 53., 56., 59., 62., 121., 128.,\n+ 135., 142., 189., 200., 211., 222., 77., 80., 83.,\n+ 86., 177., 184., 191., 198., 277., 288., 299., 310.,\n+ 137., 152., 167., 182., 173., 192., 211., 230., 209.,\n+ 232., 255., 278., 257., 272., 287., 302., 325., 344.,\n+ 363., 382., 393., 416., 439., 462., 377., 392., 407.,\n+ 422., 477., 496., 515., 534., 577., 600., 623., 646.}));\n+\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAreArray({2, 3, 3, 4}));\n+}\n+\n+TF_LITE_MICRO_TEST(BatchMatMulOpTestFloat32Test_Broadcast2RHSAdjoint) {\n+ BatchMatMulOpModel<float, 12, 24, 72> model(\n+ {kTfLiteFloat32, {2, 1, 3, 2}}, {kTfLiteFloat32, {3, 4, 2}}, false, true);\n+ model.PopulateTensor<float>(model.lhs(),\n+ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12});\n+ model.PopulateTensor<float>(model.rhs(),\n+ {7, 11, 8, 12, 9, 13, 10, 14, 15, 19, 16, 20,\n+ 17, 21, 18, 22, 23, 27, 24, 28, 25, 29, 26, 30});\n+ model.Invoke();\n+ EXPECT_THAT(\n+ model.GetOutput(),\n+ ElementsAreArray({29., 32., 35., 38., 65., 72., 79., 86., 101.,\n+ 112., 123., 134., 53., 56., 59., 62., 121., 128.,\n+ 135., 142., 189., 200., 211., 222., 77., 80., 83.,\n+ 86., 177., 184., 191., 198., 277., 288., 299., 310.,\n+ 137., 152., 167., 182., 173., 192., 211., 230., 209.,\n+ 232., 255., 278., 257., 272., 287., 302., 325., 344.,\n+ 363., 382., 393., 416., 439., 462., 377., 392., 407.,\n+ 422., 477., 496., 515., 534., 577., 600., 623., 646.}));\n+\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAreArray({2, 3, 3, 4}));\n+}\n+\n+TF_LITE_MICRO_TEST(BatchMatMulOpTestFloat32Test_Broadcast2BothAdjoint) {\n+ BatchMatMulOpModel<float, 12, 24, 72> model(\n+ {kTfLiteFloat32, {2, 1, 2, 3}}, {kTfLiteFloat32, {3, 4, 2}}, true, true);\n+ model.PopulateTensor<float>(model.lhs(),\n+ {1, 3, 5, 2, 4, 6, 7, 9, 11, 8, 10, 12});\n+ model.PopulateTensor<float>(model.rhs(),\n+ {7, 11, 8, 12, 9, 13, 10, 14, 15, 19, 16, 20,\n+ 17, 21, 18, 22, 23, 27, 24, 28, 25, 29, 26, 30});\n+ model.Invoke();\n+ EXPECT_THAT(\n+ model.GetOutput(),\n+ ElementsAreArray({29., 32., 35., 38., 65., 72., 79., 86., 101.,\n+ 112., 123., 134., 53., 56., 59., 62., 121., 128.,\n+ 135., 142., 189., 200., 211., 222., 77., 80., 83.,\n+ 86., 177., 184., 191., 198., 277., 288., 299., 310.,\n+ 137., 152., 167., 182., 173., 192., 211., 230., 209.,\n+ 232., 255., 278., 257., 272., 287., 302., 325., 344.,\n+ 363., 382., 393., 416., 439., 462., 377., 392., 407.,\n+ 422., 477., 496., 515., 534., 577., 600., 623., 646.}));\n+\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAreArray({2, 3, 3, 4}));\n+}\n+\n+TF_LITE_MICRO_TEST(BatchMatMulOpTestFloat32Test_BroadcastFromRHS) {\n+ BatchMatMulOpModel<float, 20, 30, 24> model({kTfLiteFloat32, {4, 5}},\n+ {kTfLiteFloat32, {3, 1, 5, 2}});\n+ model.PopulateTensor<float>(\n+ model.lhs(),\n+ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 
18, 19, 20});\n+ model.PopulateTensor<float>(\n+ model.rhs(),\n+ {7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21,\n+ 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36});\n+\n+ model.Invoke();\n+ EXPECT_THAT(\n+ model.GetOutput(),\n+ ElementsAreArray({185., 200., 460., 500., 735., 800., 1010., 1100.,\n+ 335., 350., 860., 900., 1385., 1450., 1910., 2000.,\n+ 485., 500., 1260., 1300., 2035., 2100., 2810., 2900.}));\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAreArray({3, 1, 4, 2}));\n+}\n+\n+TF_LITE_MICRO_TEST(HybridAsymmetricBatchMatMulOpTestSimpleTestQuantizedInt8) {\n+ HybridBatchMatMulOpModel<float, int8_t, 20, 30, 6> m(\n+ /*units=*/3, /*batches=*/2,\n+ /*lhs=*/{kTfLiteFloat32, {2, 10}},\n+ /*rhs=*/{kTfLiteInt8, {10, 3}, 0, 0, 10.0 / 127.0, 0});\n+\n+ m.SetSignedWeights({\n+ 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5,\n+ 6, 6, 6, 7, 7, 7, 8, 8, 8, 9, 9, 9, 10, 10, 10,\n+ });\n+\n+ m.SetInput({\n+ 11, 12, 13, 14, 15, 16, 17, 18, -19, -20, // batch 1, 0\n+ 11, 12, 13, 14, 15, 16, 17, -18, 19, -20, // batch 1, 1\n+ });\n+\n+ m.Invoke();\n+\n+ EXPECT_THAT(m.GetOutput(), ElementsAreArray(ArrayFloatNear(\n+ {\n+ 196,\n+ 196,\n+ 196,\n+ 246,\n+ 246,\n+ 246,\n+ },\n+ /*max_abs_error=*/0.64f)));\n+ EXPECT_THAT(m.GetOutputShape(), ElementsAreArray({2, 3}));\n+}\n+\n+TF_LITE_MICRO_TEST(\n+ HybridAsymmetricBatchMatMulOpTestQuantizedInt8BroadcastWeights) {\n+ HybridBatchMatMulOpModel<float, int8_t, 40, 30, 12> m(\n+ /*units=*/3, /*batches=*/2,\n+ /*lhs=*/{kTfLiteFloat32, {2, 2, 10}},\n+ /*rhs=*/{kTfLiteInt8, {10, 3}, 0, 0, 10.0 / 127.0, 0});\n+\n+ m.SetSignedWeights({\n+ 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5,\n+ 6, 6, 6, 7, 7, 7, 8, 8, 8, 9, 9, 9, 10, 10, 10,\n+ });\n+\n+ m.SetInput({\n+ 1, 2, 3, 4, 5, 6, 7, 8, -9, -10, // batch 0, 0\n+ 1, 2, 3, 4, 5, 6, 7, -8, 9, -10, // batch 0, 1\n+ 11, 12, 13, 14, 15, 16, 17, 18, -19, -20, // batch 1, 0\n+ 11, 12, 13, 14, 15, 16, 17, -18, 19, -20, // batch 1, 1\n+ });\n+\n+ m.Invoke();\n+\n+ EXPECT_THAT(m.GetOutput(), ElementsAreArray(ArrayFloatNear(\n+ {\n+ 24, 24, 24, //\n+ 58, 58, 58, //\n+ 196, 196, 196, //\n+ 246, 246, 246, //\n+ },\n+ /*max_abs_error=*/1.3f)));\n+ EXPECT_THAT(m.GetOutputShape(), ElementsAreArray({2, 2, 3}));\n+}\n+\n+TF_LITE_MICRO_TEST(\n+ HybridAsymmetricBatchMatMulOpTestQuantizedInt8BroadcastBigWeights) {\n+ HybridBatchMatMulOpModel<float, int8_t, 40, 90, 36> m(\n+ /*units=*/9, /*batches=*/2,\n+ /*lhs=*/{kTfLiteFloat32, {2, 2, 10}},\n+ /*rhs=*/{kTfLiteInt8, {10, 9}, 0, 0, 10.0 / 127.0, 0});\n+\n+ m.SetSignedWeights({\n+ 1, 1, 1, 17, 17, 17, 26, 26, 26, 2, 2, 2, 18, 18, 18, 27, 27, 27,\n+ 3, 3, 3, 19, 19, 19, 28, 28, 28, 4, 4, 4, 20, 20, 20, 29, 29, 29,\n+ 5, 5, 5, 21, 21, 21, 30, 30, 30, 6, 6, 6, 22, 22, 22, 31, 31, 31,\n+ 7, 7, 7, 23, 23, 23, 32, 32, 32, 8, 8, 8, 24, 24, 24, 33, 33, 33,\n+ 9, 9, 9, 25, 25, 25, 34, 34, 34, 10, 10, 10, 26, 26, 26, 35, 35, 35,\n+ });\n+\n+ m.SetInput({\n+ 1, 2, 3, 4, 5, 6, 7, 8, -9, -10, // batch 0, 0\n+ 1, 2, 3, 4, 5, 6, 7, -8, 9, -10, // batch 0, 1\n+ 11, 12, 13, 14, 15, 16, 17, 18, -19, -20, // batch 1, 0\n+ 11, 12, 13, 14, 15, 16, 17, -18, 19, -20, // batch 1, 1\n+ });\n+\n+ m.Invoke();\n+\n+ EXPECT_THAT(m.GetOutput(),\n+ ElementsAreArray(ArrayFloatNear(\n+ {\n+ 23, 23, 23, 295, 295, 295, 449, 449, 449, //\n+ 60, 60, 60, 364, 364, 364, 533, 533, 533, //\n+ 195, 195, 195, 1429, 1429, 1429, 2124, 2124, 2124, //\n+ 250, 250, 250, 1512, 1512, 1512, 2213, 2213, 2213 //\n+ },\n+ /*max_abs_error=*/1.3f)));\n+ EXPECT_THAT(m.GetOutputShape(), ElementsAreArray({2, 2, 
9}));\n+}\n+\n+TF_LITE_MICRO_TEST(\n+ HybridAsymmetricBatchMatMulOpTestQuantizedInt8BroadcastInputs) {\n+ HybridBatchMatMulOpModel<float, int8_t, 20, 60, 12> m(\n+ /*units=*/3, /*batches=*/2,\n+ /*lhs=*/{kTfLiteFloat32, {2, 10}},\n+ /*rhs=*/{kTfLiteInt8, {2, 10, 3}, 0, 0, 10.0 / 127.0, 0});\n+\n+ m.SetSignedWeights({\n+ 1, -3, 1, 2, -2, 2, 3, -1, 3, 4, 0, 4, 5, 1, 5, 6, 2, 6, 7, 3,\n+ 7, 8, 4, 8, 9, 5, 9, 10, 6, 10, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4,\n+ 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 7, 8, 8, 8, 9, 9, 9, 10, 10, 10,\n+ });\n+\n+ m.SetInput({\n+ 1, 2, 3, 4, 5, 6, 7, 8, -9, -10, // batch 0, 0\n+ 1, 2, 3, 4, 5, 6, 7, -8, 9, -10, // batch 0, 1\n+ });\n+\n+ m.Invoke();\n+\n+ EXPECT_THAT(m.GetOutput(), ElementsAreArray(ArrayFloatNear(\n+ {\n+ 24, -45, 24, //\n+ 58, -18, 58, //\n+ 24, 24, 24, //\n+ 58, 58, 58, //\n+ },\n+ /*max_abs_error=*/0.64f)));\n+ EXPECT_THAT(m.GetOutputShape(), ElementsAreArray({2, 2, 3}));\n+}\n+\n+TF_LITE_MICRO_TEST(HybridSymmetricBatchMatMulOpTestSimpleTestQuantizedInt8) {\n+ HybridBatchMatMulOpModel<float, int8_t, 20, 30, 6> m(\n+ /*units=*/3, /*batches=*/2,\n+ /*lhs=*/{kTfLiteFloat32, {2, 10}},\n+ /*rhs=*/{kTfLiteInt8, {10, 3}, 0, 0, 10.0 / 127.0, 0},\n+ /*output=*/{kTfLiteFloat32}, /*asymmetric_quantize_inputs=*/false);\n+\n+ m.SetSignedWeights({\n+ 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5,\n+ 6, 6, 6, 7, 7, 7, 8, 8, 8, 9, 9, 9, 10, 10, 10,\n+ });\n+\n+ m.SetInput({\n+ 11, 12, 13, 14, 15, 16, 17, 18, -19, -20, // batch 1, 0\n+ 11, 12, 13, 14, 15, 16, 17, -18, 19, -20, // batch 1, 1\n+ });\n+\n+ m.Invoke();\n+\n+ EXPECT_THAT(m.GetOutput(), ElementsAreArray(ArrayFloatNear(\n+ {\n+ 194,\n+ 194,\n+ 194,\n+ 248,\n+ 248,\n+ 248,\n+ },\n+ /*max_abs_error=*/0.64f)));\n+ EXPECT_THAT(m.GetOutputShape(), ElementsAreArray({2, 3}));\n+}\n+\n+TF_LITE_MICRO_TEST(\n+ HybridSymmetricBatchMatMulOpTestQuantizedInt8BroadcastWeights) {\n+ HybridBatchMatMulOpModel<float, int8_t, 40, 30, 12> m(\n+ /*units=*/3, /*batches=*/2,\n+ /*lhs=*/{kTfLiteFloat32, {2, 2, 10}},\n+ /*rhs=*/{kTfLiteInt8, {10, 3}, 0, 0, 10.0 / 127.0, 0},\n+ /*output=*/{kTfLiteFloat32}, /*asymmetric_quantize_inputs=*/false);\n+\n+ m.SetSignedWeights({\n+ 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5,\n+ 6, 6, 6, 7, 7, 7, 8, 8, 8, 9, 9, 9, 10, 10, 10,\n+ });\n+\n+ m.SetInput({\n+ 1, 2, 3, 4, 5, 6, 7, 8, -9, -10, // batch 0, 0\n+ 1, 2, 3, 4, 5, 6, 7, -8, 9, -10, // batch 0, 1\n+ 11, 12, 13, 14, 15, 16, 17, 18, -19, -20, // batch 1, 0\n+ 11, 12, 13, 14, 15, 16, 17, -18, 19, -20, // batch 1, 1\n+ });\n+\n+ m.Invoke();\n+\n+ EXPECT_THAT(m.GetOutput(), ElementsAreArray(ArrayFloatNear(\n+ {\n+ 24, 24, 24, //\n+ 56, 56, 56, //\n+ 194, 194, 194, //\n+ 248, 248, 248, //\n+ },\n+ /*max_abs_error=*/1.3f)));\n+ EXPECT_THAT(m.GetOutputShape(), ElementsAreArray({2, 2, 3}));\n+}\n+\n+TF_LITE_MICRO_TEST(\n+ HybridSymmetricBatchMatMulOpTestQuantizedInt8BroadcastBigWeights) {\n+ HybridBatchMatMulOpModel<float, int8_t, 40, 90, 36> m(\n+ /*units=*/9, /*batches=*/2,\n+ /*lhs=*/{kTfLiteFloat32, {2, 2, 10}},\n+ /*rhs=*/{kTfLiteInt8, {10, 9}, 0, 0, 10.0 / 127.0, 0}, {kTfLiteFloat32},\n+ false);\n+\n+ m.SetSignedWeights({\n+ 1, 1, 1, 17, 17, 17, 26, 26, 26, 2, 2, 2, 18, 18, 18, 27, 27, 27,\n+ 3, 3, 3, 19, 19, 19, 28, 28, 28, 4, 4, 4, 20, 20, 20, 29, 29, 29,\n+ 5, 5, 5, 21, 21, 21, 30, 30, 30, 6, 6, 6, 22, 22, 22, 31, 31, 31,\n+ 7, 7, 7, 23, 23, 23, 32, 32, 32, 8, 8, 8, 24, 24, 24, 33, 33, 33,\n+ 9, 9, 9, 25, 25, 25, 34, 34, 34, 10, 10, 10, 26, 26, 26, 35, 35, 35,\n+ });\n+\n+ m.SetInput({\n+ 1, 2, 3, 4, 5, 6, 7, 8, -9, -10, // batch 0, 0\n+ 1, 2, 3, 4, 
5, 6, 7, -8, 9, -10, // batch 0, 1\n+ 11, 12, 13, 14, 15, 16, 17, 18, -19, -20, // batch 1, 0\n+ 11, 12, 13, 14, 15, 16, 17, -18, 19, -20, // batch 1, 1\n+ });\n+\n+ m.Invoke();\n+\n+ EXPECT_THAT(m.GetOutput(),\n+ ElementsAreArray(ArrayFloatNear(\n+ {\n+ 23, 23, 23, 296, 296, 296, 451, 451, 451, //\n+ 58, 58, 58, 362, 362, 362, 529, 529, 529, //\n+ 193, 193, 193, 1424, 1424, 1424, 2118, 2118, 2118, //\n+ 253, 253, 253, 1519, 1519, 1519, 2223, 2223, 2223 //\n+ },\n+ /*max_abs_error=*/1.3f)));\n+ EXPECT_THAT(m.GetOutputShape(), ElementsAreArray({2, 2, 9}));\n+}\n+\n+TF_LITE_MICRO_TEST(\n+ HybridSymmetricBatchMatMulOpTestQuantizedInt8BroadcastInputs) {\n+ HybridBatchMatMulOpModel<float, int8_t, 20, 60, 12> m(\n+ /*units=*/3, /*batches=*/2,\n+ /*lhs=*/{kTfLiteFloat32, {2, 10}},\n+ /*rhs=*/{kTfLiteInt8, {2, 10, 3}, 0, 0, 10.0 / 127.0, 0},\n+ {kTfLiteFloat32}, false);\n+\n+ m.SetSignedWeights({\n+ 1, -3, 1, 2, -2, 2, 3, -1, 3, 4, 0, 4, 5, 1, 5, 6, 2, 6, 7, 3,\n+ 7, 8, 4, 8, 9, 5, 9, 10, 6, 10, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4,\n+ 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 7, 8, 8, 8, 9, 9, 9, 10, 10, 10,\n+ });\n+\n+ m.SetInput({\n+ 1, 2, 3, 4, 5, 6, 7, 8, -9, -10, // batch 0, 0\n+ 1, 2, 3, 4, 5, 6, 7, -8, 9, -10, // batch 0, 1\n+ });\n+\n+ m.Invoke();\n+\n+ EXPECT_THAT(m.GetOutput(), ElementsAreArray(ArrayFloatNear(\n+ {\n+ 24, -45, 24, //\n+ 56, -19, 56, //\n+ 24, 24, 24, //\n+ 56, 56, 56, //\n+ },\n+ /*max_abs_error=*/0.64f)));\n+ EXPECT_THAT(m.GetOutputShape(), ElementsAreArray({2, 2, 3}));\n+}\n+\n+TF_LITE_MICRO_TEST(QuantizedBatchMatMulOpTestSimpleTestQuantizedInt8) {\n+ QuantizedBatchMatMulOpModel<int8_t, 20, 30, 6> m(\n+ /*units=*/3, /*batches*/ 2,\n+ /*lhs=*/{kTfLiteInt8, {2, 10}, -63.5, 64},\n+ /*output=*/{kTfLiteInt8, {}, -127, 128});\n+\n+ m.SetWeights<int8_t>({\n+ 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5,\n+ 6, 6, 6, 7, 7, 7, 8, 8, 8, 9, 9, 9, 10, 10, 10,\n+ });\n+\n+ m.SetInput<int8_t>({\n+ 1, 2, 3, 4, 5, 6, 7, 8, -9, -10, // b = 0\n+ 1, 2, 3, 4, 5, 6, 7, -8, 9, -10, // b = 1\n+ });\n+\n+ m.Invoke();\n+\n+ EXPECT_THAT(m.GetDequantizedOutput<int8_t>(),\n+ ElementsAreArray(ArrayFloatNear({23, 23, 23, 57, 57, 57})));\n+ EXPECT_THAT(m.GetOutput<int8_t>(),\n+ ElementsAreArray({22, 22, 22, 56, 56, 56}));\n+}\n+\n+TF_LITE_MICRO_TESTS_END",
"filename": "tensorflow/lite/micro/kernels/batch_matmul_test.cc",
"status": "added"
},
{
"diff": "@@ -0,0 +1,345 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+\n+#ifndef TENSORFLOW_LITE_MICRO_KERNELS_BATCH_MATMUL_TEST_UTIL_H_\n+#define TENSORFLOW_LITE_MICRO_KERNELS_BATCH_MATMUL_TEST_UTIL_H_\n+\n+#include <cstdint>\n+#include <initializer_list>\n+#include <type_traits>\n+\n+#include \"tensorflow/lite/c/builtin_op_data.h\"\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/kernels/internal/tensor_utils_common.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_runner.h\"\n+#include \"tensorflow/lite/micro/test_helpers.h\"\n+#include \"tensorflow/lite/micro/testing/micro_test.h\"\n+\n+namespace tflite {\n+namespace testing {\n+\n+constexpr int kMaxDims = RuntimeShape::kMaxSmallSize;\n+constexpr int kInputTensor1 = 0;\n+constexpr int kInputTensor2 = 1;\n+constexpr int kOutputTensor = 2;\n+constexpr float kTolerance = 1e-5;\n+\n+enum TestInputIndex {\n+ kInputIndex0,\n+ kInputIndex1,\n+};\n+constexpr size_t kMaxInputs = TestInputIndex::kInputIndex1 + 1;\n+\n+struct TestTensorData {\n+ TestTensorData(const TfLiteType datum_type,\n+ const std::initializer_list<int>& datum_list = {},\n+ float datum_min = 0.0f, float datum_max = 0.0f,\n+ float datum_scale = 0.0f, int32_t datum_zero_point = 0)\n+ : type(datum_type),\n+ shape(datum_list),\n+ minimum(datum_min),\n+ maximum(datum_max),\n+ scale(datum_scale),\n+ zero_point(datum_zero_point) {}\n+ const TfLiteType type;\n+ const std::initializer_list<int>& shape;\n+ const float minimum;\n+ const float maximum;\n+ const float scale;\n+ const int32_t zero_point;\n+};\n+\n+template <typename T, size_t D>\n+struct ElementArray {\n+ ElementArray() : scale(0.0f), zero_point(0) {}\n+\n+ template <typename TA>\n+ explicit ElementArray(const TA (&a)[D]) : ElementArray() {\n+ for (size_t i = 0; i < D; i++) {\n+ data[i] = static_cast<T>(a[i]);\n+ }\n+ }\n+\n+ T data[D];\n+ TestInputIndex index;\n+\n+ // quantization parameters\n+ float scale;\n+ int32_t zero_point;\n+};\n+\n+template <typename T, size_t D>\n+struct ElementArrayNear : ElementArray<T, D> {\n+ template <typename TA>\n+ explicit ElementArrayNear(const TA (&a)[D],\n+ const float tolerance_param = kTolerance)\n+ : ElementArray<T, D>(a), tolerance(tolerance_param) {}\n+\n+ const float tolerance;\n+};\n+\n+template <size_t D>\n+inline ElementArrayNear<float, D> ElementsAreArray(const double (&a)[D]) {\n+ return ElementArrayNear<float, D>(a);\n+}\n+\n+template <size_t D>\n+inline ElementArray<int, D> ElementsAreArray(const int (&a)[D]) {\n+ return ElementArray<int, D>(a);\n+}\n+\n+template <typename T, size_t D>\n+inline const ElementArrayNear<T, D>& ElementsAreArray(\n+ const ElementArrayNear<T, D>& a) {\n+ return a;\n+}\n+\n+template <typename T, size_t D>\n+inline ElementArrayNear<float, D> ArrayFloatNear(\n+ const T (&a)[D], const float tolerance = kTolerance) {\n+ return ElementArrayNear<float, D>(a, 
tolerance);\n+}\n+\n+template <size_t D>\n+void ExpectThat(const TfLiteIntArray& actual,\n+ const ElementArray<int, D>& expected) {\n+ TF_LITE_MICRO_EXPECT_EQ(actual.size, static_cast<int>(D));\n+ for (int i = 0; i < actual.size; i++) {\n+ TF_LITE_MICRO_EXPECT_EQ(actual.data[i], expected.data[i]);\n+ }\n+}\n+\n+template <typename T, size_t D>\n+void ExpectThat(const ElementArray<T, D>& actual,\n+ const ElementArrayNear<T, D>& expected) {\n+ for (size_t i = 0; i < D; i++) {\n+ TF_LITE_MICRO_EXPECT_NEAR(actual.data[i], expected.data[i],\n+ expected.tolerance);\n+ }\n+}\n+\n+template <typename T1, typename T2, size_t D>\n+void ExpectThat(const ElementArray<T1, D>& actual,\n+ const ElementArray<T2, D>& expected) {\n+ for (size_t i = 0; i < D; i++) {\n+ TF_LITE_MICRO_EXPECT_EQ(actual.data[i], static_cast<T1>(expected.data[i]));\n+ }\n+}\n+\n+template <typename T1, typename T2, size_t D>\n+void ExpectThat(const ElementArray<T1, D>& actual,\n+ const ElementArrayNear<T2, D>& expected) {\n+ for (size_t i = 0; i < D; i++) {\n+ TF_LITE_MICRO_EXPECT_NEAR(static_cast<T2>(actual.data[i]), expected.data[i],\n+ expected.tolerance);\n+ }\n+}\n+\n+inline void IntArrayCopy(const TfLiteIntArray& from, TfLiteIntArray* to) {\n+ if (from.size > 0) {\n+ for (int i = 0; i < from.size; i++) {\n+ to->data[i] = from.data[i];\n+ }\n+ to->size = from.size;\n+ }\n+}\n+\n+inline void IntArrayCopy(const std::initializer_list<int>& from,\n+ TfLiteIntArray* to) {\n+ if (from.size() > 0) {\n+ for (size_t i = 0; i < from.size(); i++) {\n+ to->data[i] = from.begin()[i];\n+ }\n+ to->size = from.size();\n+ }\n+}\n+\n+template <typename TIN1, typename TIN2, typename TOUT, size_t IN1, size_t IN2,\n+ size_t OUT>\n+class TestOpModel {\n+ public:\n+ explicit TestOpModel(const TfLiteRegistration& registration)\n+ : registration_(registration) {\n+ TfLiteIntArray* dims = IntArrayFromInts(dims_output_);\n+ dims->size = 1;\n+ dims->data[0] = OUT;\n+\n+ dims = IntArrayFromInts(dims_inputs_[kInputIndex0]);\n+ dims->size = 1;\n+ dims->data[0] = IN1;\n+ data_input0_.index = kInputIndex0;\n+\n+ dims = IntArrayFromInts(dims_inputs_[kInputIndex1]);\n+ dims->size = 1;\n+ dims->data[0] = IN2;\n+ data_input1_.index = kInputIndex1;\n+ }\n+\n+ void AddInput(const TestTensorData& datum, const TestInputIndex index) {\n+ TF_LITE_MICRO_EXPECT_LE(datum.shape.size(), kMaxDims);\n+ TfLiteIntArray& dims = GetInputShape(index);\n+ IntArrayCopy(datum.shape, &dims);\n+ TF_LITE_MICRO_EXPECT_EQ(ElementCount(dims),\n+ static_cast<int>(GetInputSize(index)));\n+ }\n+\n+ template <typename T, size_t D>\n+ void AddInput(const TestTensorData& datum, ElementArray<T, D>* const input) {\n+ TF_LITE_MICRO_EXPECT_LE(datum.shape.size(), kMaxDims);\n+ TestInputIndex index = input->index;\n+ TfLiteIntArray& dims = GetInputShape(index);\n+ IntArrayCopy(datum.shape, &dims);\n+ TF_LITE_MICRO_EXPECT_EQ(ElementCount(dims), static_cast<int>(D));\n+\n+ const bool quantizable =\n+ (datum.type == kTfLiteInt8) &&\n+ (datum.minimum != 0.0f || datum.maximum != 0.0f || datum.scale != 0.0f);\n+ if (quantizable) {\n+ if (datum.scale != 0.0f) {\n+ input->scale = datum.scale;\n+ input->zero_point = datum.zero_point;\n+ } else {\n+ input->scale = ScaleFromMinMax<int8_t>(datum.minimum, datum.maximum);\n+ input->zero_point =\n+ ZeroPointFromMinMax<int8_t>(datum.minimum, datum.maximum);\n+ }\n+ }\n+ }\n+\n+ void AddOutput(const TestTensorData& datum) {\n+ TF_LITE_MICRO_EXPECT_LE(datum.shape.size(), kMaxDims);\n+ TfLiteIntArray& dims = GetOutputShape();\n+ IntArrayCopy(datum.shape, &dims);\n+ 
TF_LITE_MICRO_EXPECT_EQ(ElementCount(dims), static_cast<int>(OUT));\n+\n+ const bool quantizable =\n+ (datum.type == kTfLiteInt8) &&\n+ (datum.minimum != 0.0f || datum.maximum != 0.0f || datum.scale != 0.0f);\n+ if (quantizable) {\n+ if (datum.scale != 0.0f) {\n+ data_output_.scale = datum.scale;\n+ data_output_.zero_point = datum.zero_point;\n+ } else {\n+ data_output_.scale =\n+ ScaleFromMinMax<int8_t>(datum.minimum, datum.maximum);\n+ data_output_.zero_point =\n+ ZeroPointFromMinMax<int8_t>(datum.minimum, datum.maximum);\n+ }\n+ }\n+ }\n+\n+ ElementArray<TOUT, OUT>& GetOutput() { return data_output_; }\n+ TfLiteIntArray& GetOutputShape() { return *IntArrayFromInts(dims_output_); }\n+ ElementArray<TIN1, IN1>& GetInput0() { return data_input0_; }\n+ ElementArray<TIN2, IN2>& GetInput1() { return data_input1_; }\n+ TfLiteIntArray& GetInputShape(const TestInputIndex index) {\n+ return *IntArrayFromInts(dims_inputs_[index]);\n+ }\n+ size_t GetInputSize(const TestInputIndex index) {\n+ if (index == kInputIndex0) {\n+ return IN1;\n+ } else { // (index == kInputIndex1)\n+ return IN2;\n+ }\n+ }\n+\n+ template <typename T, size_t D>\n+ void PopulateTensor(ElementArray<T, D>* const input,\n+ const std::initializer_list<float>& list) {\n+ TF_LITE_MICRO_EXPECT_EQ(list.size(), D);\n+\n+ auto iter = list.begin();\n+ for (size_t i = 0; i < list.size(); i++) {\n+ input->data[i] = static_cast<T>(iter[i]);\n+ }\n+ }\n+\n+ template <typename T, size_t D>\n+ void QuantizeAndPopulate(ElementArray<T, D>* const input,\n+ const std::initializer_list<float>& list) {\n+ TF_LITE_MICRO_EXPECT_EQ(list.size(), D);\n+\n+ Quantize(list.begin(), input->data, D, input->scale, input->zero_point);\n+ }\n+\n+ template <typename T, size_t D>\n+ void SignedSymmetricQuantizeAndPopulate(\n+ ElementArray<T, D>* const input,\n+ const std::initializer_list<float>& list) {\n+ TF_LITE_MICRO_EXPECT_EQ(list.size(), D);\n+\n+ float min, max, scaling_factor;\n+ tensor_utils::SymmetricQuantizeFloats(list.begin(), static_cast<int>(D),\n+ input->data, &min, &max,\n+ &scaling_factor);\n+ input->scale = scaling_factor;\n+ input->zero_point = 0;\n+ }\n+\n+ template <typename T>\n+ ElementArray<float, OUT> GetDequantizedOutput() {\n+ ElementArray<float, OUT> result;\n+ auto& output = this->GetOutput();\n+ Dequantize<T>(output.data, OUT, output.scale, output.zero_point,\n+ result.data);\n+\n+ return result;\n+ }\n+\n+ template <typename T>\n+ ElementArray<T, OUT>& GetOutput() {\n+ return data_output_;\n+ }\n+\n+ protected:\n+ void DoInvoke(const void* params, TfLiteTensor* tensors,\n+ const int tensors_count) {\n+ int kInputArrayData[] = {kMaxInputs, kInputTensor1, kInputTensor2};\n+ TfLiteIntArray* inputs_array = IntArrayFromInts(kInputArrayData);\n+ int kOutputArrayData[] = {1, kOutputTensor};\n+ TfLiteIntArray* outputs_array = IntArrayFromInts(kOutputArrayData);\n+\n+ micro::KernelRunner runner(registration_, tensors, tensors_count,\n+ inputs_array, outputs_array,\n+ const_cast<void*>(params));\n+\n+ TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.InitAndPrepare());\n+ TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.Invoke());\n+\n+ // The output tensor dims will have moved to a location in the\n+ // memory arena. 
Copy the tensor dims back into <dims_output_>\n+ TfLiteIntArray* dims = IntArrayFromInts(dims_output_);\n+ IntArrayCopy(*tensors[kOutputTensor].dims, dims);\n+ }\n+\n+ private:\n+ int dims_inputs_[kMaxInputs][kMaxDims + 1]; // TfLiteIntArray[kMaxInputs]\n+ int dims_output_[kMaxDims + 1]; // TfLiteIntArray\n+ ElementArray<TIN1, IN1> data_input0_;\n+ ElementArray<TIN2, IN2> data_input1_;\n+ ElementArray<TOUT, OUT> data_output_;\n+ const TfLiteRegistration registration_;\n+};\n+\n+} // namespace testing\n+} // namespace tflite\n+\n+using tflite::testing::ArrayFloatNear;\n+using tflite::testing::ElementsAreArray;\n+using tflite::testing::ExpectThat;\n+\n+#define EXPECT_THAT(a, b) ExpectThat(a, b)\n+\n+#endif // TENSORFLOW_LITE_MICRO_KERNELS_BATCH_MATMUL_TEST_UTIL_H_",
"filename": "tensorflow/lite/micro/kernels/batch_matmul_test_util.h",
"status": "added"
},
{
"diff": "@@ -59,7 +59,8 @@ TfLiteStatus CreateWritableTensorDimsWithCopy(TfLiteContext* context,\n TF_LITE_ENSURE(context, tensor != nullptr);\n TF_LITE_ENSURE(context, eval_tensor != nullptr);\n int ranks = tensor->dims->size;\n- size_t alloc_size = TfLiteIntArrayGetSizeInBytes(ranks);\n+ // always allocate max ranks to allow for reshaping\n+ size_t alloc_size = TfLiteIntArrayGetSizeInBytes(RuntimeShape::kMaxSmallSize);\n TfLiteIntArray* new_dims = static_cast<TfLiteIntArray*>(\n context->AllocatePersistentBuffer(context, alloc_size));\n TfLiteIntArray* old_dims = tensor->dims;",
"filename": "tensorflow/lite/micro/kernels/kernel_util.cc",
"status": "modified"
},
{
"diff": "@@ -32,6 +32,7 @@ namespace tflite {\n // have their Register function declarations in the tflite namespace.\n \n TfLiteRegistration Register_ADD_N();\n+TfLiteRegistration Register_BATCH_MATMUL();\n TfLiteRegistration Register_BATCH_TO_SPACE_ND();\n TfLiteRegistration Register_CAST();\n TfLiteRegistration Register_CONV_2D();",
"filename": "tensorflow/lite/micro/kernels/micro_ops.h",
"status": "modified"
},
{
"diff": "@@ -144,6 +144,11 @@ class MicroMutableOpResolver : public MicroOpResolver {\n ParsePool);\n }\n \n+ TfLiteStatus AddBatchMatMul() {\n+ return AddBuiltin(BuiltinOperator_BATCH_MATMUL,\n+ tflite::Register_BATCH_MATMUL(), ParseBatchMatMul);\n+ }\n+\n TfLiteStatus AddBatchToSpaceNd() {\n return AddBuiltin(BuiltinOperator_BATCH_TO_SPACE_ND,\n Register_BATCH_TO_SPACE_ND(), ParseBatchToSpaceNd);",
"filename": "tensorflow/lite/micro/micro_mutable_op_resolver.h",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,143 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+\n+#include \"tensorflow/lite/kernels/internal/tensor_utils_common.h\"\n+\n+#include <algorithm>\n+#include <cmath>\n+#include <cstdint>\n+#include <limits>\n+\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/kernels/internal/cppmath.h\"\n+\n+namespace tflite {\n+\n+//\n+// The following is copied from TfLite portable_tensor_utils.cc\n+//\n+// The declarations are located in header file:\n+// tensorflow/lite/kernels/internal/tensor_utils_common.h\n+//\n+namespace tensor_utils {\n+\n+// Quantizes a buffer of floating point values using a symmetric quantization\n+// (i.e. linear quantization without an offset) to 8-bit signed integers.\n+// It also outputs the range (min, max) of the floating point buffer, and the\n+// scaling factor used to quantize the values.\n+void SymmetricQuantizeFloats(const float* values, const int size,\n+ int8_t* quantized_values, float* min_value,\n+ float* max_value, float* scaling_factor) {\n+ auto minmax = std::minmax_element(values, values + size);\n+ *min_value = *minmax.first;\n+ *max_value = *minmax.second;\n+\n+ SymmetricQuantizeFloats(values, size, quantized_values, *min_value,\n+ *max_value, scaling_factor);\n+}\n+\n+// Quantizes a buffer of floating point values using a symmetric quantization\n+// (i.e. 
linear quantization without an offset) to 8-bit signed integers.\n+// It uses the range (min, max) provided to the function to calculate the\n+// appropriate scaling factor to quantize the values.\n+void SymmetricQuantizeFloats(const float* values, const int size,\n+ int8_t* quantized_values, float min_value,\n+ float max_value, float* scaling_factor) {\n+ const int32_t kScale = 127;\n+ const float range = std::max(std::abs(min_value), std::abs(max_value));\n+ if (range == 0) {\n+ std::fill_n(quantized_values, size, 0);\n+ *scaling_factor = 1;\n+ return;\n+ }\n+ *scaling_factor = range / kScale;\n+ const float scaling_factor_inv = kScale / range;\n+ for (int i = 0; i < size; ++i) {\n+ const int32_t quantized_value =\n+ static_cast<int32_t>(TfLiteRound(values[i] * scaling_factor_inv));\n+ // Clamp: just in case some odd numeric offset.\n+ quantized_values[i] = static_cast<int8_t>(\n+ std::min(kScale, std::max(-kScale, quantized_value)));\n+ }\n+}\n+\n+void AsymmetricQuantizeFloats(const float* values, const int size,\n+ int8_t* quantized_values, float* scaling_factor,\n+ int32_t* offset) {\n+ const int32_t kMinScale = -128;\n+ const int32_t kMaxScale = 127;\n+ const double qmin_double = kMinScale;\n+ const double qmax_double = kMaxScale;\n+ const auto minmax = std::minmax_element(values, values + size);\n+ const double rmin = std::fmin(0, *minmax.first);\n+ const double rmax = std::fmax(0, *minmax.second);\n+ if (rmin == rmax) {\n+ std::fill_n(quantized_values, size, 0);\n+ *scaling_factor = 1;\n+ *offset = 0;\n+ return;\n+ } else {\n+ double scale = (rmax - rmin) / (qmax_double - qmin_double);\n+ const double zero_point_from_min = qmin_double - rmin / scale;\n+ const double zero_point_from_max = qmax_double - rmax / scale;\n+ const double zero_point_from_min_error =\n+ std::abs(qmin_double) + std::abs(rmin / scale);\n+ const double zero_point_from_max_error =\n+ std::abs(qmax_double) + std::abs(rmax / scale);\n+ const double zero_point_double =\n+ zero_point_from_min_error < zero_point_from_max_error\n+ ? zero_point_from_min\n+ : zero_point_from_max;\n+ int8_t nudged_zero_point = 0;\n+ if (zero_point_double <= qmin_double) {\n+ nudged_zero_point = kMinScale;\n+ } else if (zero_point_double >= qmax_double) {\n+ nudged_zero_point = kMaxScale;\n+ } else {\n+ nudged_zero_point = static_cast<int8_t>(round(zero_point_double));\n+ }\n+ *scaling_factor = scale;\n+ *offset = nudged_zero_point;\n+ }\n+ const float scaling_factor_inv = 1.0f / *scaling_factor;\n+ for (int i = 0; i < size; ++i) {\n+ const int32_t quantized_value = static_cast<int32_t>(\n+ TfLiteRound(*offset + values[i] * scaling_factor_inv));\n+ quantized_values[i] =\n+ std::min(kMaxScale, std::max(kMinScale, quantized_value));\n+ }\n+}\n+\n+// Reduce-sum on a vector:\n+// input_vector: pointer to input vector.\n+// output_vector: pointer to vector.\n+// output_size: output vector size.\n+// reduction_size: number of consecutive elements from input vector which are\n+// added to get one element of output.\n+void ReductionSumVector(const int8_t* input_vector, int32_t* output_vector,\n+ int output_size, int reduction_size) {\n+ for (int o = 0; o < output_size; o++) {\n+ int32_t result = 0;\n+ for (int r = 0; r < reduction_size; r++) {\n+ result += input_vector[r];\n+ }\n+ output_vector[o] = result;\n+ input_vector += reduction_size;\n+ }\n+}\n+\n+} // namespace tensor_utils\n+\n+} // namespace tflite",
"filename": "tensorflow/lite/micro/tensor_utils_common.cc",
"status": "added"
},
{
"diff": "@@ -261,6 +261,7 @@ tensorflow/lite/micro/kernels/activations_test.cc \\\n tensorflow/lite/micro/kernels/add_test.cc \\\n tensorflow/lite/micro/kernels/add_n_test.cc \\\n tensorflow/lite/micro/kernels/arg_min_max_test.cc \\\n+tensorflow/lite/micro/kernels/batch_matmul_test.cc \\\n tensorflow/lite/micro/kernels/batch_to_space_nd_test.cc \\\n tensorflow/lite/micro/kernels/cast_test.cc \\\n tensorflow/lite/micro/kernels/ceil_test.cc \\\n@@ -327,6 +328,7 @@ tensorflow/lite/micro/kernels/activations.cc \\\n tensorflow/lite/micro/kernels/add.cc \\\n tensorflow/lite/micro/kernels/add_n.cc \\\n tensorflow/lite/micro/kernels/arg_min_max.cc \\\n+tensorflow/lite/micro/kernels/batch_matmul.cc \\\n tensorflow/lite/micro/kernels/batch_to_space_nd.cc \\\n tensorflow/lite/micro/kernels/cast.cc \\\n tensorflow/lite/micro/kernels/ceil.cc \\\n@@ -435,6 +437,7 @@ tensorflow/lite/kernels/internal/quantization_util.h \\\n tensorflow/lite/kernels/internal/reference/add.h \\\n tensorflow/lite/kernels/internal/reference/add_n.h \\\n tensorflow/lite/kernels/internal/reference/arg_min_max.h \\\n+tensorflow/lite/kernels/internal/reference/batch_matmul.h \\\n tensorflow/lite/kernels/internal/reference/batch_to_space_nd.h \\\n tensorflow/lite/kernels/internal/reference/binary_function.h \\\n tensorflow/lite/kernels/internal/reference/ceil.h \\\n@@ -495,6 +498,7 @@ tensorflow/lite/kernels/internal/min.h \\\n tensorflow/lite/kernels/internal/portable_tensor.h \\\n tensorflow/lite/kernels/internal/strided_slice_logic.h \\\n tensorflow/lite/kernels/internal/tensor_ctypes.h \\\n+tensorflow/lite/kernels/internal/tensor_utils_common.h \\\n tensorflow/lite/kernels/internal/types.h \\\n tensorflow/lite/kernels/kernel_util.h \\\n tensorflow/lite/kernels/op_macros.h \\",
"filename": "tensorflow/lite/micro/tools/make/Makefile",
"status": "modified"
}
]
}
|
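The row above adds a `BATCH_MATMUL` kernel, tests, and supporting quantization utilities to TensorFlow Lite Micro. As a cross-check of the values asserted in the simple float tests, the following NumPy sketch (an illustration added here, not part of the PR) reproduces the (1, 2, 3) × (1, 3, 4) case and its adjoint variant; `np.matmul` broadcasts leading batch dimensions, which matches the shapes those tests exercise.

```python
# Sketch only: reproduce the expected outputs of the simple float32
# BatchMatMul tests above with NumPy.
import numpy as np

lhs = np.arange(1, 7, dtype=np.float32).reshape(1, 2, 3)   # [[1,2,3],[4,5,6]]
rhs = np.arange(7, 19, dtype=np.float32).reshape(1, 3, 4)  # values 7..18

out = np.matmul(lhs, rhs)                                  # shape (1, 2, 4)
print(out.ravel())  # [ 74.  80.  86.  92. 173. 188. 203. 218.]

# The *_LHSAdjoint tests feed the left operand pre-transposed (shape (1, 3, 2))
# and set adj_x=True; transposing the last two axes gives the same product.
lhs_adj = np.array([1, 4, 2, 5, 3, 6], dtype=np.float32).reshape(1, 3, 2)
print(np.allclose(np.matmul(lhs_adj.transpose(0, 2, 1), rhs), out))  # True
```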
{
"body": "As part of #37014, `reduce_variance` was added for ragged tensors. Although it works fine for `axis=1`, it gives an error for the same input if we change to `axis=0`.\r\n\r\nI am attaching the [gist](https://colab.research.google.com/gist/ashutosh1919/b0591ddb485107187b982a08276712be/untitled550.ipynb).\r\n\r\nFrom the stack trace, what I think the issue is:\r\n(1) When I explicitly pass the input to `ragged_math_ops.reduce_variance`, it gives the correct result. So the problem is not with the `reduce_variance` function.\r\n(2) When I call `tf.math.reduce_variance`, it internally calls `GlobalDispatcherOp`, which iterates over all the dispatchers to see where the op is supported.\r\n(3) When the op runs for `BinaryRaggedElementwiseDispatcher`, it fails for some reason, and that is why execution fails. It never reaches the dispatcher associated with `reduce_variance`.\r\n(4) That's why I think the problem is with the dispatcher module.\r\n\r\n@mihaimaruseac, @edloper - I am not able to find exactly why the error occurs in the dispatcher. Please point me in the right direction and I will contribute the fix.",
"comments": [
{
"body": "I don't see any errors (or anything involving ragged tensors or reduce_variance) in the attached gist. Did you link the right one?",
"created_at": "2021-05-25T13:43:26Z"
},
{
"body": "I was able to reproduce this problem with:\r\n\r\n```\r\nimport tensorflow as tf\r\nx = tf.ragged.constant([[1., 2, 3], [4, 5], [6, 7, 8, 9]])\r\nprint(tf.math.reduce_variance(x, axis=1)) # succeeds\r\nprint(tf.math.reduce_variance(x, axis=0)) # fails\r\n```\r\n\r\nThe problem arises because the current dispatch mechanism is reactive, not proactive. In particular, the current dispatch mechanism is a fallback that gets used when the \"normal\" implementation raises an error. This design was used to ensure that dispatch didn't add overhead to existing operations, but we have plans to change it to a proactive mechanism, which checks the types of arguments before running the operation.\r\n\r\nSo the long term fix is to update the dispatch to be proactive, and not a fallback mechanism.\r\n\r\nBut until we've finished that update, a shorter-term solution is to update reduce_variance to call convert_to_tensor on its input. Almost all TensorFlow ops call convert_to_tensor on their inputs before processing them. This ensures that we can pass in non-tensor values, such as numpy arrays or python lists. The fact that (almost) all TensorFlow ops call convert_to_tensor is what makes the dispatch fallback mechanism work, since this will fail for types such as RaggedTensor.\r\n\r\nIf we look at some of the other reduce operations in math_ops.py, such as reduce_logsumexp, they do call convert_to_tensor on their input. I believe that the following change should make dispatch work correctly (though I haven't actually tested it yet):\r\n\r\n```\r\n with ops.name_scope(name):\r\n input_tensor = ops.convert_to_tensor(input_tensor) # NEW\r\n means = reduce_mean(input_tensor, axis=axis, keepdims=True)\r\n```\r\n\r\nThis change will also have the side benefit that input_tensor won't get converted to a tensor twice. (In the current code, if input_tensor is a python list or other non-tensor value, then it will get converted twice: once when we call reduce_mean, and again in the expression `input_tensor - means`).\r\n\r\nI'd be happy to make that fix if you like; or if you'd prefer to contribute the fix, that would be fine too.",
"created_at": "2021-05-25T14:00:10Z"
},
{
"body": "@edloper, apologies. I have updated the [gist](https://colab.research.google.com/gist/ashutosh1919/b0591ddb485107187b982a08276712be/untitled550.ipynb).",
"created_at": "2021-05-25T14:00:27Z"
},
{
"body": "The issue will move to closed status once the PR is merged.",
"created_at": "2021-05-27T04:23:07Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/49606\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/49606\">No</a>\n",
"created_at": "2021-06-09T19:36:14Z"
}
],
"number": 49606,
"title": "reduce_variance gives error in case of RaggedTensor when axis=0"
}
|
{
"body": "Fixes #49606.\r\n\r\ncc @edloper, thanks for the help in providing the solution.",
"number": 49609,
"review_comments": [
{
"body": "It's probably better to put this line inside the scope of the `with ops.name_scope(name)`. (I.e., move it down one line.) That way, if you're inspecting the graph produced by this op, it will be clear that the conversion was done as part of the reduce_variance op.",
"created_at": "2021-05-27T13:22:37Z"
},
{
"body": "@edloper, let me add a test case for this as well. I haven't tested it thoroughly, so let me do that first. Re-requesting review so that the ready-to-pull label is removed.",
"created_at": "2021-05-27T15:35:48Z"
},
{
"body": "+1 please add a test for this so that we can ensure it fixes the problem!",
"created_at": "2021-05-28T05:18:50Z"
}
],
"title": "Fixing math_ops.reduce_variance for dispatch failure to ragged"
}
|
{
"commits": [
{
"message": "Fixing math_ops.reduce_variance for dispatch failure to ragged"
},
{
"message": "Merge remote-tracking branch 'upstream/master' into ragged_reduce_variance"
},
{
"message": "Added test case"
},
{
"message": "Merge remote-tracking branch 'upstream/master' into ragged_reduce_variance"
},
{
"message": "Added axis"
}
],
"files": [
{
"diff": "@@ -2494,6 +2494,7 @@ def reduce_variance(input_tensor, axis=None, keepdims=False, name=None):\n \"\"\"\n name = name if name else \"reduce_variance\"\n with ops.name_scope(name):\n+ input_tensor = ops.convert_to_tensor(input_tensor)\n means = reduce_mean(input_tensor, axis=axis, keepdims=True)\n if means.dtype.is_integer:\n raise TypeError(\"Input must be either real or complex\")",
"filename": "tensorflow/python/ops/math_ops.py",
"status": "modified"
},
{
"diff": "@@ -33,6 +33,7 @@\n from tensorflow.python.ops import math_ops\n from tensorflow.python.ops import resource_variable_ops\n from tensorflow.python.ops import variables\n+from tensorflow.python.ops.ragged import ragged_factory_ops\n from tensorflow.python.platform import googletest\n \n \n@@ -94,6 +95,14 @@ def testReduceVar(self):\n self.assertEqual(np.var(x_np), 0.25)\n self.assertEqual(self.evaluate(math_ops.reduce_variance(x_np)), 0.25)\n \n+ x=ragged_factory_ops.constant([[5., 1., 4., 1.],\n+ [],\n+ [5., 9., 2.],\n+ [5.],\n+ []])\n+ self.assertAllClose(math_ops.reduce_variance(x, axis=0),\n+ [0., 16., 1., 0.])\n+\n def testReduceVarComplex(self):\n # Ensure that complex values are handled to be consistent with numpy\n complex_ys = [([0 - 1j, 0 + 1j], dtypes.float64),",
"filename": "tensorflow/python/ops/math_ops_test.py",
"status": "modified"
}
]
}
|
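For reference, the behavior fixed in the row above can be written out as a runnable snippet. This is a sketch assuming a TensorFlow build that contains the `convert_to_tensor` change in `reduce_variance`; the expected values for the second tensor come from the test added to `math_ops_test.py` above.

```python
# Sketch: reduce_variance on a RaggedTensor, per issue #49606 / PR #49609.
import tensorflow as tf

x = tf.ragged.constant([[1., 2, 3], [4, 5], [6, 7, 8, 9]])
print(tf.math.reduce_variance(x, axis=1))  # always worked
print(tf.math.reduce_variance(x, axis=0))  # failed before the fix; now dispatches to ragged

y = tf.ragged.constant([[5., 1., 4., 1.], [], [5., 9., 2.], [5.], []])
print(tf.math.reduce_variance(y, axis=0))  # expected: [0., 16., 1., 0.]
```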
{
"body": "<em>Please make sure that this is a bug. As per our\r\n[GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md),\r\nwe only address code/doc bugs, performance issues, feature requests and\r\nbuild/installation issues on GitHub. tag:bug_template</em>\r\n\r\n**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows\r\n- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: n/a\r\n- TensorFlow installed from (source or binary): 2.5.0\r\n- TensorFlow version (use command below): 2.5.0\r\n- Python version: 3.8\r\n- Bazel version (if compiling from source): n/a\r\n- GCC/Compiler version (if compiling from source): n/a\r\n- CUDA/cuDNN version: n/a\r\n- GPU model and memory: n/a\r\n\r\nYou can collect some of this information using our environment capture\r\n[script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh)\r\nYou can also obtain the TensorFlow version with:\r\n1. TF 1.0: `python -c \"import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)\"`\r\n2. TF 2.0: `python -c \"import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)\"`\r\n\r\n**Describe the current behavior**\r\n\r\nIt looks like on Windows, File system (e.g., gcs) is only working through `tf.io.read_file()` but not working through `tf.io.gfile.GFile().read()`.\r\n\r\nOn Windows, check the following command:\r\n```bash\r\npython3 -c \"import tensorflow as tf;print(tf.version.VERSION);tf.io.read_file('gs://1234567')\" \r\n```\r\nthrows out error of:\r\n```\r\ntensorflow.python.framework.errors_impl.InvalidArgumentError: GCS path doesn't contain an object name: gs://1234567 [Op:ReadFile]\r\n```\r\n\r\nThis indicates GCS file system at least is available.\r\n\r\nOn the other hand, the following command:\r\n```bash\r\npython3 -c \"import tensorflow as tf;print(tf.version.VERSION);tf.io.gfile.GFile('gs://1234567').read()\" \r\n```\r\nthrows out error of:\r\n```\r\ntensorflow.python.framework.errors_impl.UnimplementedError: File system scheme 'gs' not implemented (file: 'gs://1234567')\r\n```\r\n\r\nwhich indicate GCS file system is not even registered.\r\n\r\n**Describe the expected behavior**\r\n\r\n**[Contributing](https://www.tensorflow.org/community/contribute)** - Do you\r\nwant to contribute a PR? (yes/no): - Briefly describe your candidate solution\r\n(if contributing): yes\r\n\r\n**Standalone code to reproduce the issue**\r\nSee description above\r\n\r\nThe issue is due to the duplication of `Env::Default` in multiple places. The issue is exposed on Windows due to `config=monolithic` being passed to bazel. I think I have identified the places that cause the issue. Will submit a PR soon.\r\n\r\n/cc @mihaimaruseac @vnvo2409 @kvignesh1420 @terrytangyuan @burgerkingeater ",
"comments": [
{
"body": "The issue will move to closed status once the PR is merged.",
"created_at": "2021-05-25T06:39:20Z"
},
{
"body": "@yongtang \r\nIs this still an issue? Could you please try the latest TF version and let us know.",
"created_at": "2021-06-21T15:41:57Z"
},
{
"body": "@Saduf2019 This is still an issue as #49520 has not been merged yet. (the PR has been approved but has not been imported internally yet). See https://github.com/tensorflow/tensorflow/pull/49520#issuecomment-862739157",
"created_at": "2021-06-21T16:59:15Z"
},
{
"body": "@yongtang\r\nIs this still an issue?",
"created_at": "2021-08-18T09:37:20Z"
},
{
"body": "@Saduf2019 PR #49520 has been approved, though it has not been imported and merged yet. Once it is merged this issue can be resolved.",
"created_at": "2021-08-18T14:51:45Z"
},
{
"body": "@yongtang,\r\nRelated PR https://github.com/tensorflow/tensorflow/pull/49520 was closed. Could you please take a look at the comments from the developer and respond accordingly. Thank you!",
"created_at": "2023-03-07T11:51:43Z"
}
],
"number": 49515,
"title": "File system is not working on Windows through tf.io.gfile.GFile interface"
}
|
{
"body": "This PR tries to address the issue raised in #49515 where GCS is not available through the tf.io.gfile.GFile interface.\r\n\r\nThe reason was that on Windows -config=monolithic was used in bazel, which links multiple copies of `Env::Default` into different pyd files.\r\n\r\nThis PR removes all duplications to make GCS work on Windows.\r\n\r\nThis PR fixes #49515.\r\n\r\n/cc @mihaimaruseac @vnvo2409 @kvignesh1420 @terrytangyuan @burgerkingeater FYI\r\n\r\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>",
"number": 49520,
"review_comments": [
{
"body": "Is there a way to change this to not import TF fully? As it is written this raises this error (sanitized):\r\n\r\n```\r\nHOURGLASS IMPORT DETECTED\r\n\r\n'import tensorflow' statements are no longer allowed in the internal\r\nTensorFlow codebase, with the exception of /examples/ and /g3doc/\r\ndirectories. This is because they prevent Bazel and <testing tool> from caching\r\nanything.\r\n\r\nPlease import modules directly. Also please make sure your Python\r\nBUILD rules depend on all the labels whose sources they import.\r\n```",
"created_at": "2021-06-09T16:12:49Z"
},
{
"body": "@mihaimaruseac Let me take a look. I think it should be possible. The reason was to enable all modules (so that if multiple copies of `Env::Default` are ever present again in the future, the test will fail). I will see if I can find an internal import that can achieve the same thing.",
"created_at": "2021-06-09T16:18:52Z"
},
{
"body": "Alternatively, maybe we can change [the nightly smoke test](https://cs.opensource.google/tensorflow/tensorflow/+/master:tensorflow/tools/ci_build/builds/nightly_release_smoke_test.sh;l=24;drc=82f83aae97ae11736e577fa560e9d33cf764a018) to test for this instead. It will prevent nightlies from being pushed and will still give a signal.",
"created_at": "2021-06-09T16:22:34Z"
},
{
"body": "Thanks @mihaimaruseac . Let me see if I can update the PR.",
"created_at": "2021-06-09T16:26:05Z"
}
],
"title": "Remove duplicated Env::Default to fix gcs issue on Windows"
}
|
{
"commits": [
{
"message": "Remove duplicated `Env::Default` to fix gcs issue on Windows\n\nThis PR tries to address the issue raised in 49515 where\ngcs is not available through tf.io.gfile.GFile interface.\n\nThe reason was that on Windows -config=monolithic was used\nin bazel which includes multiple copies of `Env::Default`\nin different pyd files.\n\nThis PR removes all duplications to make gcs working on Windows.\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
},
{
"message": "Add missing dependency by using tf_python_pybind_extension\n\nand add missing linkage dependency\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
},
{
"message": "Add Env::Default to windows def file\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
},
{
"message": "Add test case in nightly release smoke test to make sure gcs is available on Windows through tf.io.gfile.GFile\n\nSee GitHub issue #49515 for details.\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
},
{
"message": "Use if_static to avoid breaking existing code (for Env::Default duplication fix)\n\nThis commit use if_static to avoid breaking existing code for\nEnv::Default duplication fix. In case of static build (Windows monolithic)\none copy of Env::Default is ensured; in case of non-static build (Linux)\nthen old behavior is maintained.\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
},
{
"message": "Fix duplicate dependencies\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
},
{
"message": "Fix merge conflict\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
}
],
"files": [
{
"diff": "@@ -957,6 +957,7 @@ tf_cc_shared_object(\n \"//tensorflow/core/common_runtime/pluggable_device:pluggable_device_runtime_impl\",\n \"//tensorflow/core/grappler/optimizers:custom_graph_optimizer_registry_impl\",\n \"//tensorflow/core:lib_internal_impl\",\n+ \"//tensorflow/core/platform:env_impl\",\n \"//tensorflow/core/profiler:profiler_impl\",\n \"//tensorflow/core/util:determinism\",\n \"//tensorflow/lite/kernels/shim:tf_kernel_shim\",",
"filename": "tensorflow/BUILD",
"status": "modified"
},
{
"diff": "@@ -773,6 +773,7 @@ cc_library(\n \"//tensorflow/core:lib_internal\",\n \"//tensorflow/core:op_gen_lib\",\n \"//tensorflow/core:protos_all_cc\",\n+ \"//tensorflow/core/platform:env_impl\",\n \"@com_google_absl//absl/strings\",\n ],\n )",
"filename": "tensorflow/cc/BUILD",
"status": "modified"
},
{
"diff": "@@ -111,6 +111,7 @@ tf_cc_binary(\n \"//tensorflow/compiler/mlir:init_mlir\",\n \"//tensorflow/compiler/mlir/tensorflow\",\n \"//tensorflow/core:lib\",\n+ \"//tensorflow/core/platform:env_impl\",\n \"//tensorflow/stream_executor/lib\",\n \"@com_google_absl//absl/strings\",\n \"@llvm-project//llvm:Analysis\",",
"filename": "tensorflow/compiler/mlir/tools/kernel_gen/BUILD",
"status": "modified"
},
{
"diff": "@@ -1430,7 +1430,6 @@ cc_library(\n \"//tensorflow/core/platform:denormal\",\n \"//tensorflow/core/platform:dynamic_annotations\",\n \"//tensorflow/core/platform:env\",\n- \"//tensorflow/core/platform:env_impl\",\n \"//tensorflow/core/platform:errors\",\n \"//tensorflow/core/platform:file_statistics\",\n \"//tensorflow/core/platform:fingerprint\",\n@@ -1474,7 +1473,14 @@ cc_library(\n \"@zlib\",\n \"@double_conversion//:double-conversion\",\n \"@com_google_protobuf//:protobuf\",\n- ] + select({\n+ ] + if_static(\n+ extra_deps = [],\n+ otherwise = [\n+ # Make sure one copy of env_impl in case of static build,\n+ # leave non-static build untouched to not break existing code.\n+ \"//tensorflow/core/platform:env_impl\",\n+ ],\n+ ) + select({\n \"//tensorflow:fuchsia\": [],\n \"//conditions:default\": [\"//tensorflow/core/platform:subprocess\"],\n }) + tf_protos_all_impl() + tf_protos_grappler_impl() + tf_protos_profiler_impl() + tf_monitoring_framework_deps(),\n@@ -1686,7 +1692,6 @@ tf_cuda_library(\n \"//tensorflow/core/framework:shape_inference\",\n \"//tensorflow/core/framework:tensor\",\n \"//tensorflow/core/framework:tensor_shape\",\n- \"//tensorflow/core/platform:env_impl\",\n \"//tensorflow/core/platform:fingerprint\",\n \"//tensorflow/core/platform/default/build_config:platformlib\",\n \"//tensorflow/core/profiler/lib:annotated_traceme\",\n@@ -1705,7 +1710,12 @@ tf_cuda_library(\n \"@local_config_cuda//cuda:cudnn_header\",\n ]) + if_static(\n extra_deps = [\"@com_google_protobuf//:protobuf\"],\n- otherwise = [\"@com_google_protobuf//:protobuf_headers\"],\n+ otherwise = [\n+ \"@com_google_protobuf//:protobuf_headers\",\n+ # Make sure one copy of env_impl in case of static build,\n+ # leave non-static build untouched to not break existing code.\n+ \"//tensorflow/core/platform:env_impl\",\n+ ],\n ),\n alwayslink = 1,\n )",
"filename": "tensorflow/core/BUILD",
"status": "modified"
},
{
"diff": "@@ -498,7 +498,6 @@ cc_library(\n \"//tensorflow/core/lib/strings:strcat\",\n \"//tensorflow/core/lib/strings:stringprintf\",\n \"//tensorflow/core/platform:env\",\n- \"//tensorflow/core/platform:env_impl\",\n \"//tensorflow/core/platform:logging\",\n \"//tensorflow/core/platform:macros\",\n \"//tensorflow/core/platform:mutex\",",
"filename": "tensorflow/core/framework/BUILD",
"status": "modified"
},
{
"diff": "@@ -450,7 +450,6 @@ cc_library(\n deps = [\n \":test_log_proto_impl_cc\",\n \"//tensorflow/core/platform:env\",\n- \"//tensorflow/core/platform:env_impl\",\n \"//tensorflow/core/platform:errors\",\n \"//tensorflow/core/platform:macros\",\n \"//tensorflow/core/platform:mutex\",",
"filename": "tensorflow/core/util/BUILD",
"status": "modified"
},
{
"diff": "@@ -116,6 +116,7 @@ cc_library(\n \"//tensorflow/core:lib_internal\",\n \"//tensorflow/core:op_gen_lib\",\n \"//tensorflow/core:protos_all_cc\",\n+ \"//tensorflow/core/platform:env_impl\",\n ],\n alwayslink = 1,\n )",
"filename": "tensorflow/python/framework/BUILD",
"status": "modified"
},
{
"diff": "@@ -242,7 +242,7 @@ tf_py_test(\n ],\n )\n \n-pybind_extension(\n+tf_python_pybind_extension(\n name = \"_pywrap_tf2\",\n srcs = [\"enable_tf2.cc\"],\n hdrs = [\"//tensorflow/core/platform:enable_tf2_hdr\"],",
"filename": "tensorflow/python/platform/BUILD",
"status": "modified"
},
{
"diff": "@@ -2113,6 +2113,18 @@ def pywrap_tensorflow_macro(\n ],\n )\n \n+ # There should only be one instance of Env::Default within all .dll/.so/.dylib/.pyd files\n+ # inside tensorflow, to make sure file system registration work.\n+ # //tensorflow/core/platform:env_impl is already part of\n+ # //tensorflow:libtensorflow_framework_import_lib (libtensorflow_framework.so).\n+ # The following only include //tensorflow/core/platform:env_impl if\n+ # pywrap_tensorflow_internal.so does not depends on libtensorflow_framework.so in\n+ # monolithic build (if_static).\n+ extra_deps += if_static(\n+ extra_deps = [\"//tensorflow/core/platform:env_impl\"],\n+ otherwise = [],\n+ )\n+\n tf_cc_shared_object(\n name = cc_library_name,\n srcs = srcs,",
"filename": "tensorflow/tensorflow.bzl",
"status": "modified"
},
{
"diff": "@@ -92,6 +92,32 @@ function test_tf_imports() {\n return 1\n fi\n \n+ # test for gcs file system import\n+ # Note: The following tests availability of gcs file system on Windows.\n+ # While the path gs://1234567890 is not visible, the error should be\n+ # something like tf.errors.InvalidArgumentError:\n+ # `GCS path doesn't contain an object name: gs://1234567`\n+ # On the other hand, if gcs file system is not available, the error will\n+ # be `UnimplementedError: File system scheme 'gs' not implemented`\n+ # See https://github.com/tensorflow/tensorflow/issues/49515 for details.\n+ # Note `echo` is used as we need to deal with multiple line to\n+ # capture right exception.\n+ # The expanded code is:\n+ #\n+ # import tensorflow as tf;\n+ # try:\n+ # tf.io.gfile.GFile('gs://1234567890').read()\n+ # except tf.errors.InvalidArgumentError:\n+ # print('SUCCESS')\n+ # except:\n+ # print('FAILURE')\n+ RET_VAL=$(echo -e \"import tensorflow as tf;\\ntry:\\n tf.io.gfile.GFile('gs://1234567890').read()\\nexcept tf.errors.InvalidArgumentError:\\n print('SUCCESS')\\nexcept:\\n print('FAILURE')\" | python)\n+ if ! [[ ${RET_VAL} == 'SUCCESS' ]]; then\n+ echo \"Unexpected return value: ${RET_VALUE}\"\n+ echo \"GCS file system import test on virtualenv FAILED, will not upload ${WHL_NAME} package.\"\n+ return 1\n+ fi\n+\n RESULT=$?\n \n popd",
"filename": "tensorflow/tools/ci_build/builds/nightly_release_smoke_test.sh",
"status": "modified"
},
{
"diff": "@@ -272,6 +272,7 @@ def main():\n def_fp.write(\"\\t ?MaybeSavedModelDirectory@tensorflow@@YA_NAEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z\\n\")\n def_fp.write(\"\\t ?_TensorShapeProto_default_instance_@tensorflow@@3VTensorShapeProtoDefaultTypeInternal@1@A\\n\")\n def_fp.write(\"\\t ?_GraphDef_default_instance_@tensorflow@@3VGraphDefDefaultTypeInternal@1@A\\n\")\n+ def_fp.write(\"\\t ?Default@Env@tensorflow@@SAPEAV12@XZ\\n\")\n \n # Each symbols returned by undname matches the same position in candidates.\n # We compare on undname but use the decorated name from candidates.",
"filename": "tensorflow/tools/def_file_filter/def_file_filter.py.tpl",
"status": "modified"
}
]
}
|
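The row above removes the `Env::Default` duplication so that registered file systems such as GCS are visible through `tf.io.gfile.GFile` on Windows. A Python version of the probe added to the nightly smoke test might look like the sketch below; the bogus bucket name is intentional, since a registered `gs://` scheme rejects it with `InvalidArgumentError`, while a missing registration raises `UnimplementedError`.

```python
# Sketch of the GCS-availability probe from nightly_release_smoke_test.sh.
import tensorflow as tf

def gcs_scheme_registered() -> bool:
    try:
        tf.io.gfile.GFile('gs://1234567890').read()
    except tf.errors.InvalidArgumentError:
        return True    # scheme resolved; path just lacks an object name
    except tf.errors.UnimplementedError:
        return False   # "File system scheme 'gs' not implemented" (issue #49515)
    return True

print('GCS file system available:', gcs_scheme_registered())
```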
{
"body": "The `ModelCheckpoint` callback has a parameter named `save_freq` that controls when the model is saved. If `save_freq` is set to `epoch`, the model is saved at the end of every epoch (this works perfectly fine). But when `save_freq` is set to an integer, say `N`, the callback should save the model after every `N` batches. The problem is that the callback doesn't accept a filepath such as `file.batch{batch:02d}epoch{epoch:02d}.h5` and raises an error because `batch` is an invalid key. \r\nWhat I have noticed in the code is that the `_save_model` function has access to `epoch` but not to `batch`, and that's why `_get_file_path()` has access to `epoch` but not `batch`. The functionality needs to change a little: I am raising a PR to pass `batch` to both `_save_model` and `_get_file_path`.\r\nI noticed this error in the TF code while working on my PR [#1702](https://github.com/tensorflow/addons/pull/1702) in tensorflow/addons. \r\n\r\ncc @gabrieldemarmiesse.\r\n",
"comments": [
{
"body": "@ashutosh1919 would you be willing to send a PR for this issue?",
"created_at": "2020-10-09T04:37:45Z"
},
{
"body": "Same problem here. It should support batch number formatting, especially when save_freq is not epoch but a batch count; having only the epoch number means previous weight files from the same epoch get overridden.",
"created_at": "2020-10-28T16:58:17Z"
},
{
"body": "Same issue here. Is there actually no support for this yet?",
"created_at": "2021-06-09T06:26:30Z"
},
{
"body": "@aningineer, I have raised PR #49376 to fix this issue, and you can see that it has been approved as well. Once it is merged you will be able to use this fix in `tf-nightly`.",
"created_at": "2021-06-09T07:49:37Z"
},
{
"body": "The PR that fixes this issue was merged into keras-team/keras as keras-team/keras@c567184. Closing this now.",
"created_at": "2021-06-17T22:20:09Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/38668\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/38668\">No</a>\n",
"created_at": "2021-06-17T22:20:13Z"
},
{
"body": "next PR",
"created_at": "2022-07-22T03:57:03Z"
}
],
"number": 38668,
"title": "In ModelCheckpoint, filepath is not accepting batch as formatting parameter."
}
|
{
"body": "Fixes #38668. It completes the work done in two of my previous stale PRs, #38669 and [#1702](https://github.com/tensorflow/addons/pull/1702).\r\n\r\ncc @mihaimaruseac, please review.",
"number": 49376,
"review_comments": [],
"title": "Added batch as formatting parameter during ModelCheckpoint callback"
}
|
{
"commits": [
{
"message": "Added batch as formatting parameter during ModelCheckpoint callback"
},
{
"message": "Merge remote-tracking branch 'upstream/master' into model_ckpt"
},
{
"message": "Merge remote-tracking branch 'upstream/master' into model_ckpt"
},
{
"message": "Merge remote-tracking branch 'upstream/master' into model_ckpt"
},
{
"message": "Merge remote-tracking branch 'upstream/master' into model_ckpt"
},
{
"message": "Merge remote-tracking branch 'upstream/master' into model_ckpt"
},
{
"message": "Merge remote-tracking branch 'upstream/master' into model_ckpt"
},
{
"message": "Resolved error"
}
],
"files": [
{
"diff": "@@ -1380,7 +1380,7 @@ def _implements_train_batch_hooks(self):\n \n def on_train_batch_end(self, batch, logs=None):\n if self._should_save_on_batch(batch):\n- self._save_model(epoch=self._current_epoch, logs=logs)\n+ self._save_model(epoch=self._current_epoch, batch=batch, logs=logs)\n \n def on_epoch_begin(self, epoch, logs=None):\n self._current_epoch = epoch\n@@ -1389,7 +1389,7 @@ def on_epoch_end(self, epoch, logs=None):\n self.epochs_since_last_save += 1\n # pylint: disable=protected-access\n if self.save_freq == 'epoch':\n- self._save_model(epoch=epoch, logs=logs)\n+ self._save_model(epoch=epoch, batch=None, logs=logs)\n \n def _should_save_on_batch(self, batch):\n \"\"\"Handles batch-level saving logic, supports steps_per_execution.\"\"\"\n@@ -1408,11 +1408,13 @@ def _should_save_on_batch(self, batch):\n return True\n return False\n \n- def _save_model(self, epoch, logs):\n+ def _save_model(self, epoch, batch, logs):\n \"\"\"Saves the model.\n \n Args:\n epoch: the epoch this iteration is in.\n+ batch: the batch this iteration is in. `None` if the `save_freq`\n+ is set to `epoch`.\n logs: the `logs` dict passed in to `on_batch_end` or `on_epoch_end`.\n \"\"\"\n logs = logs or {}\n@@ -1422,7 +1424,7 @@ def _save_model(self, epoch, logs):\n # Block only when saving interval is reached.\n logs = tf_utils.sync_to_numpy_or_python_type(logs)\n self.epochs_since_last_save = 0\n- filepath = self._get_file_path(epoch, logs)\n+ filepath = self._get_file_path(epoch, batch, logs)\n \n try:\n if self.save_best_only:\n@@ -1469,14 +1471,19 @@ def _save_model(self, epoch, logs):\n # Re-throw the error for any other causes.\n raise e\n \n- def _get_file_path(self, epoch, logs):\n+ def _get_file_path(self, epoch, batch, logs):\n \"\"\"Returns the file path for checkpoint.\"\"\"\n # pylint: disable=protected-access\n try:\n- # `filepath` may contain placeholders such as `{epoch:02d}` and\n- # `{mape:.2f}`. A mismatch between logged metrics and the path's\n+ # `filepath` may contain placeholders such as `{epoch:02d}`,`{batch:02d}`\n+ # and `{mape:.2f}`. A mismatch between logged metrics and the path's\n # placeholders can cause formatting to fail.\n- file_path = self.filepath.format(epoch=epoch + 1, **logs)\n+ if batch is None or \"batch\" in logs:\n+ file_path = self.filepath.format(epoch=epoch + 1, **logs)\n+ else:\n+ file_path = self.filepath.format(epoch=epoch + 1,\n+ batch=batch + 1,\n+ **logs)\n except KeyError as e:\n raise KeyError('Failed to format this callback filepath: \"{}\". '\n 'Reason: {}'.format(self.filepath, e))",
"filename": "tensorflow/python/keras/callbacks.py",
"status": "modified"
},
{
"diff": "@@ -785,6 +785,57 @@ def test_ModelCheckpoint(self):\n mode=mode,\n options=save_options_lib.SaveOptions())\n \n+ # Case 11: `ModelCheckpoint` save model with batch number in filename.\n+ filepath = os.path.join(temp_dir,\n+ 'checkpoint.epoch{epoch:02d}batch{batch:02d}.h5')\n+ cbks = [\n+ keras.callbacks.ModelCheckpoint(\n+ filepath,\n+ monitor=monitor,\n+ save_freq=1\n+ )\n+ ]\n+ assert not os.path.exists(filepath.format(epoch=1, batch=1))\n+ assert not os.path.exists(filepath.format(epoch=1, batch=2))\n+ assert not os.path.exists(filepath.format(epoch=2, batch=1))\n+ assert not os.path.exists(filepath.format(epoch=2, batch=2))\n+ assert not os.path.exists(filepath.format(epoch=3, batch=1))\n+ assert not os.path.exists(filepath.format(epoch=3, batch=2))\n+ assert not os.path.exists(filepath.format(epoch=4, batch=1))\n+ assert not os.path.exists(filepath.format(epoch=4, batch=2))\n+ assert not os.path.exists(filepath.format(epoch=5, batch=1))\n+ assert not os.path.exists(filepath.format(epoch=5, batch=2))\n+ model.fit(\n+ x_train,\n+ y_train,\n+ batch_size=5,\n+ validation_data=(x_test, y_test),\n+ callbacks=cbks,\n+ epochs=5,\n+ verbose=1)\n+\n+ assert os.path.exists(filepath.format(epoch=1, batch=1))\n+ assert os.path.exists(filepath.format(epoch=1, batch=2))\n+ assert os.path.exists(filepath.format(epoch=2, batch=1))\n+ assert os.path.exists(filepath.format(epoch=2, batch=2))\n+ assert os.path.exists(filepath.format(epoch=3, batch=1))\n+ assert os.path.exists(filepath.format(epoch=3, batch=2))\n+ assert os.path.exists(filepath.format(epoch=4, batch=1))\n+ assert os.path.exists(filepath.format(epoch=4, batch=2))\n+ assert os.path.exists(filepath.format(epoch=5, batch=1))\n+ assert os.path.exists(filepath.format(epoch=5, batch=2))\n+\n+ os.remove(filepath.format(epoch=1, batch=1))\n+ os.remove(filepath.format(epoch=1, batch=2))\n+ os.remove(filepath.format(epoch=2, batch=1))\n+ os.remove(filepath.format(epoch=2, batch=2))\n+ os.remove(filepath.format(epoch=3, batch=1))\n+ os.remove(filepath.format(epoch=3, batch=2))\n+ os.remove(filepath.format(epoch=4, batch=1))\n+ os.remove(filepath.format(epoch=4, batch=2))\n+ os.remove(filepath.format(epoch=5, batch=1))\n+ os.remove(filepath.format(epoch=5, batch=2))\n+\n @testing_utils.run_v2_only\n def test_ModelCheckpoint_subclass_save_weights_false(self):\n model = testing_utils.get_small_subclass_mlp(NUM_HIDDEN, NUM_CLASSES)",
"filename": "tensorflow/python/keras/callbacks_test.py",
"status": "modified"
}
]
}
|
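The record above adds `batch` as a formatting parameter for `ModelCheckpoint` filepaths. As a hedged illustration only (it assumes a TensorFlow build that already contains this change; the toy model, data, and file names below are illustrative and not part of the PR), the patched callback could be driven like this:

```python
# Sketch only: assumes a TF build containing the batch-formatting change above.
# The toy model, data, and checkpoint names are illustrative, not from the PR.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')

# With an integer save_freq, checkpoints are written every N batches, and the
# patched _get_file_path() fills in {batch:02d} alongside {epoch:02d}.
ckpt = tf.keras.callbacks.ModelCheckpoint(
    filepath='ckpt.epoch{epoch:02d}batch{batch:02d}.h5',
    save_freq=2)

x = np.random.rand(20, 4).astype('float32')
y = np.random.rand(20, 1).astype('float32')
model.fit(x, y, batch_size=5, epochs=2, callbacks=[ckpt], verbose=0)
# Expected files on a patched build: ckpt.epoch01batch02.h5, ckpt.epoch01batch04.h5, ...
```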
{
"body": "**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Colab\r\n- TensorFlow installed from (source or binary): Colab, so binary\r\n- TensorFlow version (use command below): 2.4.1\r\n- Python version: 3.7\r\n\r\n\r\n\r\n\r\n**Describe the current behavior**\r\nThis issue is a continuation of #47689 . Whenever the user opts for `implementation=2` or `implementation=3`, the model cannot be saved as a SavedModel format. \r\n\r\n**Describe the expected behavior**\r\nOne should be able to save it as SavedModel format.\r\n\r\n**Standalone code to reproduce the issue**\r\nPlease see the gist [here](https://colab.research.google.com/gist/AdityaKane2001/c74692d463f14517d84ee9be9443941d/tfpr_maker.ipynb).\r\n\r\n**Other info**\r\nI have played around a bit with the source code in the colab environment. I have inserted many `logging_ops.print_v2(...)` statements to make it easier to debug. Also, I have cloned my fork (it's a clean fork, no personal commits) and ran the `local_test.py` file. I have added `model_2.save('model2')` line, too check for the issue.\r\n\r\n**Some observations:**\r\n1. The `local.py` file has no errors (none that I found). But many function calls can be updated as they are deprecated.\r\n2. The `K.reshape` or `array_ops.reshape` calls are the ones that are causing the issue. There's some bug in `tensorflow/python/framework/op_def_library.py : _apply_op_helper(op_type_name, name=None, **keywords)` function. \r\n3. Note that `local.py: 790` calls `reshape`, which calls `gen_array_ops.reshape`, which calls `op_def_library._apply_op_helper`. But the arguments that go to the latter **were not recorded in `**keywords` argument.** I checked that by printing the keywords dict in the error message. \r\n\r\nPlease take a look at the error log under local_test.py execution cell in the colab notebook.\r\nThanks. \r\n",
"comments": [
{
"body": "> Note that local.py: 790 calls reshape, which calls gen_array_ops.reshape, which calls op_def_library._apply_op_helper. But the arguments that go to the latter were not recorded in **keywords argument. I checked that by printing the keywords dict in the error message.\r\n\r\nIt is that in_dims and out_dims have shape `shape=(3,)` so then `in_size/out_size` are `shape=()`. \r\n\r\n```\r\n in_size = math_ops.reduce_prod(in_dims)\r\n out_size = math_ops.reduce_prod(out_dims)\r\n```",
"created_at": "2021-04-17T15:59:37Z"
},
{
"body": "@bhack \r\nCan you please elaborate on that?",
"created_at": "2021-04-17T16:02:32Z"
},
{
"body": "In `local.py` add a print yourself:\r\n\r\n```\r\n in_size = math_ops.reduce_prod(in_dims)\r\n out_size = math_ops.reduce_prod(out_dims)\r\n print(\"== Shape in ==\" + str(in_size.shape))\r\n print(\"== Shape out ==\" + str(out_size.shape))\r\n return array_ops.reshape(tensor, (in_size, out_size))\r\n```",
"created_at": "2021-04-17T16:04:43Z"
},
{
"body": "Got it, will look into it\r\n",
"created_at": "2021-04-17T16:07:36Z"
},
{
"body": "P.s. I fixed a typo",
"created_at": "2021-04-17T16:07:58Z"
},
{
"body": "@bhack \r\nI understood that in `make_2d` function we are facing empty arrays. But then why in the test error logs we see that the error is on some other line, and not that one?",
"created_at": "2021-04-17T16:44:45Z"
},
{
"body": "> Note that local.py: 790 calls reshape, which calls gen_array_ops.reshape, which calls op_def_library._apply_op_helper. But the arguments that go to the latter were not recorded in **keywords argument. I checked that by printing the keywords dict in the error message.\r\n\r\nI was just commenting your point 3. Isn't that line?",
"created_at": "2021-04-17T17:30:18Z"
},
{
"body": "Yes, that's the one. \r\nMy question is, how do we pinpoint where the bug is? Where are we calling the `reshape` that has argument `shape` which has `None` in it?",
"created_at": "2021-04-17T17:32:35Z"
},
{
"body": "The problem is that on save the layer `def call(self, inputs):` is called with `inputs.shape` `(None, None, None, 1)` instead of `(None, 28, 28, 1)`. As in `implementation == 2` and `3` we have always have `self.compute_output_shape(inputs.shape)` we will get the error for `(None, None, None, 1)`.",
"created_at": "2021-04-17T19:56:30Z"
},
{
"body": "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.\n",
"created_at": "2021-04-28T00:35:35Z"
},
{
"body": "@AdityaKane2001 is this a duplicate of https://github.com/tensorflow/tensorflow/issues/47689 or there is something new?",
"created_at": "2021-04-28T00:41:31Z"
},
{
"body": "@bhack \r\nYes, this is a duplicate, but I had added some observations.",
"created_at": "2021-04-28T04:33:26Z"
},
{
"body": "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.\n",
"created_at": "2021-05-05T04:37:53Z"
},
{
"body": "@bhack \r\nFrankly, I don't have any idea how to proceed from here. I had read the related code and tried to understand it, but I couldn't identify what changes were required. How can we proceed from here?",
"created_at": "2021-05-05T09:24:37Z"
},
{
"body": "@bhack , @AdityaKane2001 - Please consider looking at the detailed description in the PR #49230 and provide your feedback.",
"created_at": "2021-05-17T12:49:58Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/48584\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/48584\">No</a>\n",
"created_at": "2021-05-25T19:43:52Z"
}
],
"number": 48584,
"title": "Models having locally connected layers with implementation = 2 or 3 cannot be saved in SavedModel format."
}
|
{
"body": "Fixes #48584 and #47689 .\r\n\r\n### My understanding:\r\nThe model save is failing because at the time of saving, `call()` of LocallyConnected2D will be invoked which is not passed with any inputs (None). That is why `compute_output_shape` and `local_conv_matmul` fails and thus `save` fails.\r\n\r\n### Solution that I propose:\r\nWe particularly don't require any input data at the time of saving. So, I am saving the input_shape at the time when LocallyConnected2D instance is `build()` and whenever the parameter `inputs` is `None` in `call()`, I am replacing `inputs` with the dummy tensor with shape `input_shape` which was saved in `build()`.\r\n\r\nAny suggestions on optimisation of solution are welcomed.\r\n\r\ncc @mihaimaruseac , @bhack , @ AdityaKane2001 ",
"number": 49230,
"review_comments": [],
"title": "Fixing LocallyConnected2D and LocallyConnected1D layer Save Model to tf issue"
}
|
{
"commits": [
{
"message": "Fixing LocallyConnected2D layer Save Model to tf issue"
},
{
"message": "Merge remote-tracking branch 'upstream/master' into local_connected_2d"
},
{
"message": "Resolving errors and adding tests"
},
{
"message": "seperate function for testing saved/loaded model output"
},
{
"message": "Merge remote-tracking branch 'upstream/master' into local_connected_2d"
},
{
"message": "Merge remote-tracking branch 'upstream/master' into local_connected_2d"
},
{
"message": "Merge remote-tracking branch 'upstream/master' into local_connected_2d"
},
{
"message": "overridden default input_spec_signature"
}
],
"files": [
{
"diff": "@@ -153,6 +153,10 @@ def __init__(self,\n self.implementation = implementation\n self.input_spec = InputSpec(ndim=3)\n \n+ @property\n+ def _use_input_spec_as_call_signature(self):\n+ return False\n+\n @tf_utils.shape_type_conversion\n def build(self, input_shape):\n if self.data_format == 'channels_first':\n@@ -456,6 +460,10 @@ def __init__(self,\n self.implementation = implementation\n self.input_spec = InputSpec(ndim=4)\n \n+ @property\n+ def _use_input_spec_as_call_signature(self):\n+ return False\n+\n @tf_utils.shape_type_conversion\n def build(self, input_shape):\n if self.data_format == 'channels_last':",
"filename": "tensorflow/python/keras/layers/local.py",
"status": "modified"
},
{
"diff": "@@ -14,6 +14,7 @@\n # ==============================================================================\n \"\"\"Tests for locally-connected layers.\"\"\"\n \n+import os\n from absl.testing import parameterized\n import numpy as np\n \n@@ -26,6 +27,7 @@\n from tensorflow.python.ops import math_ops\n from tensorflow.python.ops import nn\n from tensorflow.python.platform import test\n+from tensorflow.python.keras.optimizer_v2 import rmsprop\n from tensorflow.python.training.rmsprop import RMSPropOptimizer\n \n \n@@ -362,6 +364,90 @@ def test_locallyconnected_implementation(self, width, data_format):\n self.assertAllCloseAccordingToType(\n out_1, out_3, atol=2e-4)\n \n+ @parameterized.parameters([\n+ {'width': 1, 'data_format': 'channels_first'},\n+ {'width': 1, 'data_format': 'channels_last'},\n+ {'width': 6, 'data_format': 'channels_first'},\n+ {'width': 6, 'data_format': 'channels_last'},\n+ ])\n+ def test_locallyconnected_save(self, width, data_format):\n+ with self.cached_session():\n+ num_samples = 4\n+ num_classes = 3\n+ num_epochs = 2\n+\n+ np.random.seed(1)\n+ tf_test_util.random_seed.set_seed(1)\n+ targets = np.random.randint(0, num_classes, (num_samples,))\n+\n+ height = 7\n+ filters = 2\n+ inputs = get_inputs(data_format, filters, height, num_samples, width)\n+\n+ kernel_x = (3,)\n+ kernel_y = () if width == 1 else (2,)\n+ stride_x = (1,)\n+ stride_y = () if width == 1 else (3,)\n+ layers = 2\n+\n+ kwargs = {\n+ 'layers': layers,\n+ 'filters': filters,\n+ 'kernel_size': kernel_x + kernel_y,\n+ 'strides': stride_x + stride_y,\n+ 'data_format': data_format,\n+ 'num_classes': num_classes\n+ }\n+\n+ model_1 = get_model_saveable(implementation=1, **kwargs)\n+ model_2 = get_model_saveable(implementation=2, **kwargs)\n+ model_3 = get_model_saveable(implementation=3, **kwargs)\n+\n+ # Train.\n+ model_1.fit(\n+ x=inputs,\n+ y=targets,\n+ epochs=num_epochs,\n+ batch_size=num_samples,\n+ shuffle=False)\n+ model_2.fit(\n+ x=inputs,\n+ y=targets,\n+ epochs=num_epochs,\n+ batch_size=num_samples,\n+ shuffle=False)\n+ model_3.fit(\n+ x=inputs,\n+ y=targets,\n+ epochs=num_epochs,\n+ batch_size=num_samples,\n+ shuffle=False)\n+\n+ out_1_before = model_1(inputs)\n+ out_2_before = model_2(inputs)\n+ out_3_before = model_3(inputs)\n+\n+ path_1 = os.path.join(self.get_temp_dir(), 'model_1_path')\n+ model_1.save(path_1)\n+ model_1 = keras.models.load_model(path_1, custom_objects={'xent': xent})\n+ path_2 = os.path.join(self.get_temp_dir(), 'model_2_path')\n+ model_2.save(path_2)\n+ model_2 = keras.models.load_model(path_2, custom_objects={'xent': xent})\n+ path_3 = os.path.join(self.get_temp_dir(), 'model_3_path')\n+ model_3.save(path_3)\n+ model_3 = keras.models.load_model(path_3, custom_objects={'xent': xent})\n+\n+ out_1_after = model_1(inputs)\n+ out_2_after = model_2(inputs)\n+ out_3_after = model_3(inputs)\n+\n+ self.assertAllCloseAccordingToType(\n+ out_1_before, out_1_after, atol=2e-4)\n+ self.assertAllCloseAccordingToType(\n+ out_2_before, out_2_after, atol=2e-4)\n+ self.assertAllCloseAccordingToType(\n+ out_3_before, out_3_after, atol=2e-4)\n+\n def test_make_2d(self):\n input_shapes = [\n (0,),\n@@ -468,6 +554,44 @@ def get_model(implementation,\n return model\n \n \n+def get_model_saveable(implementation,\n+ filters,\n+ kernel_size,\n+ strides,\n+ layers,\n+ num_classes,\n+ data_format):\n+ model = keras.Sequential()\n+\n+ if len(kernel_size) == 1:\n+ lc_layer = keras.layers.LocallyConnected1D\n+ elif len(kernel_size) == 2:\n+ lc_layer = keras.layers.LocallyConnected2D\n+ else:\n+ raise 
NotImplementedError(kernel_size)\n+\n+ for _ in range(layers):\n+ model.add(lc_layer(\n+ padding='valid',\n+ kernel_initializer=keras.initializers.random_normal(),\n+ bias_initializer=keras.initializers.random_normal(),\n+ filters=filters,\n+ strides=strides,\n+ kernel_size=kernel_size,\n+ activation=keras.activations.relu,\n+ data_format=data_format,\n+ implementation=implementation))\n+\n+ model.add(keras.layers.Flatten())\n+ model.add(keras.layers.Dense(num_classes))\n+ model.compile(\n+ optimizer=rmsprop.RMSProp(learning_rate=0.01),\n+ metrics=[keras.metrics.categorical_accuracy],\n+ loss=xent\n+ )\n+ return model\n+\n+\n def copy_lc_weights_2_to_1(lc_layer_2_from, lc_layer_1_to):\n lc_2_kernel, lc_2_bias = lc_layer_2_from.weights\n lc_2_kernel_masked = lc_2_kernel * lc_layer_2_from.kernel_mask",
"filename": "tensorflow/python/keras/layers/local_test.py",
"status": "modified"
}
]
}
|
{
"body": "Hi, tensorflow community\r\n\r\nI'm playing around with LocallyConnected2D and found some weird error when I tried to save a model (model.save).\r\n\r\nWhen I gave the layer implementation=1, the model was saved without any errors.\r\nBut, if I set implementation=2, it gave me this error.\r\n\r\nTraceback (most recent call last):\r\n File \"/home/tonglab/Documents/Project/PycharmProjects/LCN/LCN_Keras/LocalConn2d_Keras.py\", line 84, in <module>\r\n model.save('keras1')\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py\", line 2002, in save\r\n signatures, options, save_traces)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/saving/save.py\", line 157, in save_model\r\n signatures, options, save_traces)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save.py\", line 89, in save\r\n save_lib.save(model, filepath, signatures, options)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/saved_model/save.py\", line 1033, in save\r\n obj, signatures, options, meta_graph_def)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/saved_model/save.py\", line 1198, in _build_meta_graph\r\n return _build_meta_graph_impl(obj, signatures, options, meta_graph_def)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/saved_model/save.py\", line 1133, in _build_meta_graph_impl\r\n checkpoint_graph_view)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/saved_model/signature_serialization.py\", line 75, in find_function_to_export\r\n functions = saveable_view.list_functions(saveable_view.root)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/saved_model/save.py\", line 151, in list_functions\r\n self._serialization_cache)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py\", line 2613, in _list_functions_for_serialization\r\n Model, self)._list_functions_for_serialization(serialization_cache)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 3087, in _list_functions_for_serialization\r\n .list_functions_for_serialization(serialization_cache))\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/base_serialization.py\", line 94, in list_functions_for_serialization\r\n fns = self.functions_to_serialize(serialization_cache)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py\", line 79, in functions_to_serialize\r\n serialization_cache).functions_to_serialize)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py\", line 95, in _get_serialized_attributes\r\n serialization_cache)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/model_serialization.py\", line 57, in _get_serialized_attributes_internal\r\n serialization_cache))\r\n File 
\"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py\", line 104, in _get_serialized_attributes_internal\r\n functions = save_impl.wrap_layer_functions(self.obj, serialization_cache)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py\", line 155, in wrap_layer_functions\r\n original_fns = _replace_child_layer_functions(layer, serialization_cache)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py\", line 274, in _replace_child_layer_functions\r\n serialization_cache).functions)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py\", line 95, in _get_serialized_attributes\r\n serialization_cache)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py\", line 104, in _get_serialized_attributes_internal\r\n functions = save_impl.wrap_layer_functions(self.obj, serialization_cache)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py\", line 193, in wrap_layer_functions\r\n fn.get_concrete_function()\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py\", line 549, in get_concrete_function\r\n self.call_collection.add_trace(*args, **kwargs)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py\", line 423, in add_trace\r\n fn.get_concrete_function(*args, **kwargs)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py\", line 550, in get_concrete_function\r\n return super(LayerCall, self).get_concrete_function(*args, **kwargs)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py\", line 1299, in get_concrete_function\r\n concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py\", line 1205, in _get_concrete_function_garbage_collected\r\n self._initialize(args, kwargs, add_initializers_to=initializers)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py\", line 726, in _initialize\r\n *args, **kwds))\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/eager/function.py\", line 2969, in _get_concrete_function_internal_garbage_collected\r\n graph_function, _ = self._maybe_define_function(args, kwargs)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/eager/function.py\", line 3361, in _maybe_define_function\r\n graph_function = self._create_graph_function(args, kwargs)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/eager/function.py\", line 3206, in _create_graph_function\r\n capture_by_value=self._capture_by_value),\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py\", line 990, in func_graph_from_py_func\r\n 
func_outputs = python_func(*func_args, **func_kwargs)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py\", line 634, in wrapped_fn\r\n out = weak_wrapped_fn().__wrapped__(*args, **kwds)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py\", line 527, in wrapper\r\n ret = method(*args, **kwargs)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py\", line 570, in call_and_return_conditional_losses\r\n call_output = layer_call(inputs, *args, **kwargs)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/layers/local.py\", line 615, in call\r\n self.compute_output_shape(inputs.shape))\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/layers/local.py\", line 782, in local_conv_matmul\r\n [K.shape(output_flat)[0],] + output_shape.as_list()[1:])\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py\", line 201, in wrapper\r\n return target(*args, **kwargs)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/keras/backend.py\", line 3020, in reshape\r\n return array_ops.reshape(x, shape)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py\", line 201, in wrapper\r\n return target(*args, **kwargs)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py\", line 195, in reshape\r\n result = gen_array_ops.reshape(tensor, shape, name)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/ops/gen_array_ops.py\", line 8378, in reshape\r\n \"Reshape\", tensor=tensor, shape=shape, name=name)\r\n File \"/home/tonglab/anaconda3/envs/tf_2_4_1/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py\", line 540, in _apply_op_helper\r\n (input_name, err))\r\nValueError: Tried to convert 'shape' to a tensor and failed. Error: None values not supported.\r\n\r\nMy tensorflow version is 2.4.1.\r\n\r\nAny advice will be appreciated.\r\nThank you.",
"comments": [
{
"body": "@hojin89 \r\nIt would be great if you shared the source code of this error. \r\nAlso, from the log, I speculate that the layer and its weights are initialized with a placeholder (dummy Tensor), and thus it says that the shape is not defined. Other cause may be that the shapes of some Tensors may not be defined before runtime, as layers are executed in graph mode.",
"created_at": "2021-03-10T08:38:54Z"
},
{
"body": "@hojin89,\r\nAs mentioned by @AdityaKane2001, could you please provide the complete code and the TensorFlow version you are using so that we can reproduce the issue on our end. Thanks!\r\n",
"created_at": "2021-03-10T13:57:27Z"
},
{
"body": "Thanks for the replies.\r\n\r\nHere is the complete code:\r\n\r\n import numpy as np\r\n import matplotlib.pyplot as plt\r\n import tensorflow as tf\r\n from tensorflow import keras\r\n from tensorflow.keras import layers\r\n from tensorflow.keras import initializers\r\n \r\n num_classes = 10\r\n input_shape = (28, 28, 1)\r\n \r\n (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()\r\n \r\n x_train = x_train.astype(\"float32\") / 255\r\n x_test = x_test.astype(\"float32\") / 255\r\n \r\n x_train = np.expand_dims(x_train, -1)\r\n x_test = np.expand_dims(x_test, -1)\r\n print(\"x_train shape:\", x_train.shape)\r\n print(x_train.shape[0], \"train samples\")\r\n print(x_test.shape[0], \"test samples\")\r\n \r\n y_train = keras.utils.to_categorical(y_train, num_classes)\r\n y_test = keras.utils.to_categorical(y_test, num_classes)\r\n \r\n strategy = tf.distribute.MirroredStrategy()\r\n with strategy.scope():\r\n\r\n model = keras.Sequential(\r\n [\r\n keras.Input(shape=input_shape),\r\n layers.LocallyConnected2D(16, kernel_size=(9, 9), activation=\"relu\", implementation=2),\r\n layers.MaxPooling2D(pool_size=(2, 2)),\r\n layers.Conv2D(64, kernel_size=(3, 3), activation=\"relu\"),\r\n layers.MaxPooling2D(pool_size=(2, 2)),\r\n layers.Flatten(),\r\n layers.Dropout(0.5),\r\n layers.Dense(num_classes, activation=\"softmax\"),\r\n ]\r\n )\r\n \r\n model.summary()\r\n \r\n batch_size = 128\r\n epochs = 1\r\n \r\n opt = keras.optimizers.Adam(learning_rate=0.001)\r\n model.compile(loss=\"categorical_crossentropy\", optimizer=opt, metrics=[\"accuracy\"])\r\n model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1)\r\n \r\n score = model.evaluate(x_test, y_test, verbose=0)\r\n print(\"Test loss:\", score[0])\r\n print(\"Test accuracy:\", score[1])\r\n \r\n model.save('keras1')\r\n\r\nAs mentioned earlier, my tensorflow version is 2.4.1.\r\nIf I use \"implementation=1\", the code works well without any errors.\r\n\r\nIt seems like the error occurs when it calls \"self.compute_output_shape(inputs.shape))\" in local.py.\r\ninputs.shape should be (None, 28, 28, 1), but for some reason, it shows me (None, None, None, 1) at some point.\r\n\r\nI'm still trying to figure this out. \r\nIf you could give any advice, that'll be appreciated.\r\n\r\n\r\n",
"created_at": "2021-03-10T15:25:45Z"
},
{
"body": "`model.save('keras1.h5')` works as expected. Saving model in a directory results in the aforementioned error. Gist [here](https://colab.research.google.com/gist/AdityaKane2001/c2072f8433a13975b815dbdfa3819d7c/47689.ipynb).",
"created_at": "2021-03-11T04:39:20Z"
},
{
"body": "@ymodak,\r\nI was able to reproduce the issue with TF v2.3, TF v2.4 and TF-nightly. Please find the gist of it [here](https://colab.research.google.com/gist/amahendrakar/cfbf7c9942e89e7b286509fe7776cdf5/47689.ipynb). Thanks!",
"created_at": "2021-03-17T17:38:23Z"
},
{
"body": "Adding the contributions welcome label to this issue for further investigation by the community. If you are interested in working on this issue, please leave a comment and I will assign it to you. Thanks!",
"created_at": "2021-04-06T17:08:53Z"
},
{
"body": "@AdityaKane2001 @hojin89 \r\nOne can observe in this [file](https://github.com/tensorflow/tensorflow/blob/be1598592ee45a0abe37a5d8fa8e53420a8c554e/tensorflow/python/keras/layers/local.py#L325) that LocallyConnected2D layer uses deprecated version of keras and a bunch of deprecated ops. According to its documention, implementation=2, stores weights of each layer in a dense (but sparsely-populated) 2D matrix and implements the forward pass as a single matrix-multiply. So while saving as TF SavedModel, compute_output_shape() is not able to convert the shape and raises the error you mentioned. However it gets saved as .h5 keras model. You can see the difference between TF SavedModel and .H5 model [here](https://www.tensorflow.org/guide/keras/save_and_serialize).\r\n",
"created_at": "2021-04-15T05:34:57Z"
},
{
"body": "@Suraj1199 \r\nThe link is broken..\r\n",
"created_at": "2021-04-15T05:36:32Z"
},
{
"body": "@AdityaKane2001 It should be working now.",
"created_at": "2021-04-15T05:40:48Z"
},
{
"body": "@Suraj1199\r\nThanks",
"created_at": "2021-04-15T05:41:56Z"
},
{
"body": "> `model.save('keras1.h5')` works as expected. Saving model in a directory results in the aforementioned error. Gist [here](https://colab.research.google.com/gist/AdityaKane2001/c2072f8433a13975b815dbdfa3819d7c/47689.ipynb).\r\n\r\nTo save the model in TF SavedModel format in a directory ('keras1' in this case) use Conv2D layer intead of LocallyConnected2D.\r\n",
"created_at": "2021-04-15T05:46:06Z"
},
{
"body": "Side note, this is true for `implementation=3` as well. Only `implementation=1` works.",
"created_at": "2021-04-16T05:53:02Z"
},
{
"body": "@hojin89 \r\nyou can Save the entire model to a HDF5 file.\r\n The '.h5' extension indicates that the model should be saved to HDF5.\r\n```model.save('my_model.h5')```",
"created_at": "2021-04-30T02:25:21Z"
},
{
"body": "@nikitamaia , please assign this issue to me since I am working on it as part of PR #49230 .",
"created_at": "2021-05-17T12:48:05Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47689\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47689\">No</a>\n",
"created_at": "2021-05-25T19:43:54Z"
}
],
"number": 47689,
"title": "layers.LocallyConnected2D throws error when saving a model in tf, saving in .h5 works"
}
|
{
"body": "Fixes #48584 and #47689 .\r\n\r\n### My understanding:\r\nThe model save is failing because at the time of saving, `call()` of LocallyConnected2D will be invoked which is not passed with any inputs (None). That is why `compute_output_shape` and `local_conv_matmul` fails and thus `save` fails.\r\n\r\n### Solution that I propose:\r\nWe particularly don't require any input data at the time of saving. So, I am saving the input_shape at the time when LocallyConnected2D instance is `build()` and whenever the parameter `inputs` is `None` in `call()`, I am replacing `inputs` with the dummy tensor with shape `input_shape` which was saved in `build()`.\r\n\r\nAny suggestions on optimisation of solution are welcomed.\r\n\r\ncc @mihaimaruseac , @bhack , @ AdityaKane2001 ",
"number": 49230,
"review_comments": [],
"title": "Fixing LocallyConnected2D and LocallyConnected1D layer Save Model to tf issue"
}
|
{
"commits": [
{
"message": "Fixing LocallyConnected2D layer Save Model to tf issue"
},
{
"message": "Merge remote-tracking branch 'upstream/master' into local_connected_2d"
},
{
"message": "Resolving errors and adding tests"
},
{
"message": "seperate function for testing saved/loaded model output"
},
{
"message": "Merge remote-tracking branch 'upstream/master' into local_connected_2d"
},
{
"message": "Merge remote-tracking branch 'upstream/master' into local_connected_2d"
},
{
"message": "Merge remote-tracking branch 'upstream/master' into local_connected_2d"
},
{
"message": "overridden default input_spec_signature"
}
],
"files": [
{
"diff": "@@ -153,6 +153,10 @@ def __init__(self,\n self.implementation = implementation\n self.input_spec = InputSpec(ndim=3)\n \n+ @property\n+ def _use_input_spec_as_call_signature(self):\n+ return False\n+\n @tf_utils.shape_type_conversion\n def build(self, input_shape):\n if self.data_format == 'channels_first':\n@@ -456,6 +460,10 @@ def __init__(self,\n self.implementation = implementation\n self.input_spec = InputSpec(ndim=4)\n \n+ @property\n+ def _use_input_spec_as_call_signature(self):\n+ return False\n+\n @tf_utils.shape_type_conversion\n def build(self, input_shape):\n if self.data_format == 'channels_last':",
"filename": "tensorflow/python/keras/layers/local.py",
"status": "modified"
},
{
"diff": "@@ -14,6 +14,7 @@\n # ==============================================================================\n \"\"\"Tests for locally-connected layers.\"\"\"\n \n+import os\n from absl.testing import parameterized\n import numpy as np\n \n@@ -26,6 +27,7 @@\n from tensorflow.python.ops import math_ops\n from tensorflow.python.ops import nn\n from tensorflow.python.platform import test\n+from tensorflow.python.keras.optimizer_v2 import rmsprop\n from tensorflow.python.training.rmsprop import RMSPropOptimizer\n \n \n@@ -362,6 +364,90 @@ def test_locallyconnected_implementation(self, width, data_format):\n self.assertAllCloseAccordingToType(\n out_1, out_3, atol=2e-4)\n \n+ @parameterized.parameters([\n+ {'width': 1, 'data_format': 'channels_first'},\n+ {'width': 1, 'data_format': 'channels_last'},\n+ {'width': 6, 'data_format': 'channels_first'},\n+ {'width': 6, 'data_format': 'channels_last'},\n+ ])\n+ def test_locallyconnected_save(self, width, data_format):\n+ with self.cached_session():\n+ num_samples = 4\n+ num_classes = 3\n+ num_epochs = 2\n+\n+ np.random.seed(1)\n+ tf_test_util.random_seed.set_seed(1)\n+ targets = np.random.randint(0, num_classes, (num_samples,))\n+\n+ height = 7\n+ filters = 2\n+ inputs = get_inputs(data_format, filters, height, num_samples, width)\n+\n+ kernel_x = (3,)\n+ kernel_y = () if width == 1 else (2,)\n+ stride_x = (1,)\n+ stride_y = () if width == 1 else (3,)\n+ layers = 2\n+\n+ kwargs = {\n+ 'layers': layers,\n+ 'filters': filters,\n+ 'kernel_size': kernel_x + kernel_y,\n+ 'strides': stride_x + stride_y,\n+ 'data_format': data_format,\n+ 'num_classes': num_classes\n+ }\n+\n+ model_1 = get_model_saveable(implementation=1, **kwargs)\n+ model_2 = get_model_saveable(implementation=2, **kwargs)\n+ model_3 = get_model_saveable(implementation=3, **kwargs)\n+\n+ # Train.\n+ model_1.fit(\n+ x=inputs,\n+ y=targets,\n+ epochs=num_epochs,\n+ batch_size=num_samples,\n+ shuffle=False)\n+ model_2.fit(\n+ x=inputs,\n+ y=targets,\n+ epochs=num_epochs,\n+ batch_size=num_samples,\n+ shuffle=False)\n+ model_3.fit(\n+ x=inputs,\n+ y=targets,\n+ epochs=num_epochs,\n+ batch_size=num_samples,\n+ shuffle=False)\n+\n+ out_1_before = model_1(inputs)\n+ out_2_before = model_2(inputs)\n+ out_3_before = model_3(inputs)\n+\n+ path_1 = os.path.join(self.get_temp_dir(), 'model_1_path')\n+ model_1.save(path_1)\n+ model_1 = keras.models.load_model(path_1, custom_objects={'xent': xent})\n+ path_2 = os.path.join(self.get_temp_dir(), 'model_2_path')\n+ model_2.save(path_2)\n+ model_2 = keras.models.load_model(path_2, custom_objects={'xent': xent})\n+ path_3 = os.path.join(self.get_temp_dir(), 'model_3_path')\n+ model_3.save(path_3)\n+ model_3 = keras.models.load_model(path_3, custom_objects={'xent': xent})\n+\n+ out_1_after = model_1(inputs)\n+ out_2_after = model_2(inputs)\n+ out_3_after = model_3(inputs)\n+\n+ self.assertAllCloseAccordingToType(\n+ out_1_before, out_1_after, atol=2e-4)\n+ self.assertAllCloseAccordingToType(\n+ out_2_before, out_2_after, atol=2e-4)\n+ self.assertAllCloseAccordingToType(\n+ out_3_before, out_3_after, atol=2e-4)\n+\n def test_make_2d(self):\n input_shapes = [\n (0,),\n@@ -468,6 +554,44 @@ def get_model(implementation,\n return model\n \n \n+def get_model_saveable(implementation,\n+ filters,\n+ kernel_size,\n+ strides,\n+ layers,\n+ num_classes,\n+ data_format):\n+ model = keras.Sequential()\n+\n+ if len(kernel_size) == 1:\n+ lc_layer = keras.layers.LocallyConnected1D\n+ elif len(kernel_size) == 2:\n+ lc_layer = keras.layers.LocallyConnected2D\n+ else:\n+ raise 
NotImplementedError(kernel_size)\n+\n+ for _ in range(layers):\n+ model.add(lc_layer(\n+ padding='valid',\n+ kernel_initializer=keras.initializers.random_normal(),\n+ bias_initializer=keras.initializers.random_normal(),\n+ filters=filters,\n+ strides=strides,\n+ kernel_size=kernel_size,\n+ activation=keras.activations.relu,\n+ data_format=data_format,\n+ implementation=implementation))\n+\n+ model.add(keras.layers.Flatten())\n+ model.add(keras.layers.Dense(num_classes))\n+ model.compile(\n+ optimizer=rmsprop.RMSProp(learning_rate=0.01),\n+ metrics=[keras.metrics.categorical_accuracy],\n+ loss=xent\n+ )\n+ return model\n+\n+\n def copy_lc_weights_2_to_1(lc_layer_2_from, lc_layer_1_to):\n lc_2_kernel, lc_2_bias = lc_layer_2_from.weights\n lc_2_kernel_masked = lc_2_kernel * lc_layer_2_from.kernel_mask",
"filename": "tensorflow/python/keras/layers/local_test.py",
"status": "modified"
}
]
}
|
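A hedged sketch of the behaviour discussed in the two records above: on builds without the fix, a model containing `LocallyConnected2D` with `implementation=2` (or `3`) can still be saved in HDF5 format, while SavedModel export fails. The layer sizes and file paths below are illustrative only.

```python
# Illustrative only; layer sizes and paths are made up for the example.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.LocallyConnected2D(4, kernel_size=(3, 3), activation='relu',
                              implementation=2),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

model.save('lc_model.h5')   # HDF5: works even without the fix above
# model.save('lc_model')    # SavedModel: raises the None-shape reshape error on unpatched builds
reloaded = keras.models.load_model('lc_model.h5')
```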
{
"body": "<em>Please make sure that this is a bug. As per our\r\n[GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md),\r\nwe only address code/doc bugs, performance issues, feature requests and\r\nbuild/installation issues on GitHub. tag:bug_template</em>\r\n\r\n**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): **no**\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): **any**, tested on MacOS \r\n- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: -\r\n- TensorFlow installed from (source or binary): **binary** (independent)\r\n- TensorFlow version (use command below): since **v2.3.0-rc0**, present in latest\r\n - since refactoring https://github.com/tensorflow/tensorflow/commit/bb15c97379f197a6a46ec1446d8fb0b292b860ba\r\n- Python version: **any**, 3.7 (independent)\r\n- Bazel version (if compiling from source): -\r\n- GCC/Compiler version (if compiling from source): -\r\n- CUDA/cuDNN version: **any** (independent)\r\n- GPU model and memory: -\r\n\r\nYou can collect some of this information using our environment capture\r\n[script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh)\r\nYou can also obtain the TensorFlow version with:\r\n1. TF 1.0: `python -c \"import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)\"`\r\n2. TF 2.0: `python -c \"import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)\"`\r\n\r\n**Describe the current behavior**\r\n\r\nThe function `convert_nested_model()` nested within `keras.saving.load_model_from_hdf5()` converts layers only nested within `Model` or `Sequential` submodels, but not yet `Functional`.\r\n\r\nFor a model where GRU is within Functional nested model, trained with CuDNN, conversion when loading without CuDNN is not applied.\r\n\r\nThe result is that loading fails when setting weight values of incompatible shape.\r\n\r\n**Describe the expected behavior**\r\n\r\nThe conversion is applied to GRU weights even when the layer is nested within a Functional submodel.\r\n\r\n**[Contributing](https://www.tensorflow.org/community/contribute)** - Do you\r\nwant to contribute a PR? (yes/no): - Briefly describe your candidate solution\r\n(if contributing):\r\n\r\nYes. I contributed the original code of some of those conversions.\r\n\r\nhttps://github.com/keras-team/keras/pull/10081\r\n\r\n**Standalone code to reproduce the issue**\r\nProvide a reproducible test case that is the bare minimum necessary to generate\r\nthe problem. If possible, please share a link to Colab/Jupyter/any notebook.\r\n\r\nI will provide a minimal example for reproduction.\r\n\r\n**Other info / logs** Include any logs or source code that would be helpful to\r\ndiagnose the problem. If including tracebacks, please include the full\r\ntraceback. Large logs and files should be attached.\r\n\r\n```\r\n# in K.batch_set_value() for the GRU bias (CuDNN-compatible, ie .reset_after=True)\r\nassign_op = x.assign(assign_placeholder)\r\n\r\nself:\r\n<tf.Variable 'gru_18/gru_cell_30/bias:0' shape=(2, 384) dtype=float32>\r\nvalue:\r\n<tf.Tensor 'Placeholder_173:0' shape=(?,) dtype=float32>\r\n```\r\n\r\nThe bias value array is 768 and placeholder shape is `(?,)` (1D). 
It's missing the conversion to the proper shape: `(2, 384)` (as the variable has).\r\n\r\n## Proposed solution\r\n\r\nI tried adding `'Functional'` to the list of nested class names to be converted.\r\n\r\n```\r\n- elif layer.__class__.__name__ in ['Model', 'Sequential']:\r\n+ elif layer.__class__.__name__ in ['Model', 'Sequential', 'Functional']:\r\n```\r\n\r\nAnd the whole model loaded correctly.\r\n\r\n+ make unit tests\r\n",
"comments": [
{
"body": "Thanks, can you open a PR with tests?",
"created_at": "2021-05-16T11:25:12Z"
},
{
"body": "Yes, I'm going to prepare some simple code for reproduction and unit tests.",
"created_at": "2021-05-16T11:33:07Z"
},
{
"body": "It looks that there's an existing test in [cudnn_recurrent_test.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/keras/layers/cudnn_recurrent_test.py#L274\r\n) but for some reason has been [disabled](https://github.com/tensorflow/tensorflow/commit/8d25e4bf616b7ae4ed101c580a23421616bf674c). It was done just before v2.3.0-rc0 so it's likely caused by the bug described above.\r\n\r\n```\r\n # TODO(b/156439419): Reenable after the bug is fixed.\r\n @parameterized.named_parameters(\r\n *testing_utils.generate_combinations_with_testcase_name(\r\n rnn_type=['LSTM', 'GRU'], to_cudnn=[True, False],\r\n bidirectional=[True, False], implementation=[1, 2],\r\n model_nest_level=[1, 2], model_type=['seq', 'func']))\r\n @test_util.run_v1_only('b/120911602, b/112083752')\r\n @test_util.run_gpu_only\r\n def DISALBED_test_load_weights_between_noncudnn_rnn(\r\n```\r\n",
"created_at": "2021-05-16T17:38:18Z"
},
{
"body": "A workaround to patch the module before the new Tensorflow is released (or for older TF 2 installations):\r\n\r\n```\r\nimport importlib\r\nfrom tensorflow.python.keras.saving import hdf5_format\r\n\r\nfilename = hdf5_format.__file__\r\nwith open(filename, 'r') as f:\r\n content = f.read()\r\nif \"['Model', 'Sequential']\" in content:\r\n print(f'Patching {filename} to support nested Functional model...')\r\n content = content.replace(\"['Model', 'Sequential']\", \"['Model', 'Sequential', 'Functional']\")\r\n with open(filename, 'w') as f:\r\n f.write(content)\r\n\r\n importlib.reload(hdf5_format)\r\n```",
"created_at": "2021-05-16T21:14:31Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/49214\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/49214\">No</a>\n",
"created_at": "2021-05-24T15:35:07Z"
}
],
"number": 49214,
"title": "RNN weight conversion not applied within nested Functional models"
}
|
{
"body": "Closes #49214",
"number": 49222,
"review_comments": [],
"title": "Convert layer weights even within nested Functional models."
}
|
{
"commits": [
{
"message": "Convert layer weights even within nested Functional models.\n\nCloses #49214"
}
],
"files": [
{
"diff": "@@ -263,17 +263,16 @@ def test_trainability(self):\n self.assertEqual(len(layer.trainable_weights), 3)\n self.assertEqual(len(layer.non_trainable_weights), 0)\n \n- # TODO(b/156439419): Reenable after the bug is fixed.\n @parameterized.named_parameters(\n *testing_utils.generate_combinations_with_testcase_name(\n rnn_type=['LSTM', 'GRU'], to_cudnn=[True, False],\n bidirectional=[True, False], implementation=[1, 2],\n model_nest_level=[1, 2], model_type=['seq', 'func']))\n @test_util.run_v1_only('b/120911602, b/112083752')\n @test_util.run_gpu_only\n- def DISALBED_test_load_weights_between_noncudnn_rnn(\n- self, rnn_type, to_cudnn, bidirectional, implementation,\n- model_nest_level, model_type):\n+ def test_load_weights_between_noncudnn_rnn(self, rnn_type, to_cudnn,\n+ bidirectional, implementation,\n+ model_nest_level, model_type):\n input_size = 10\n timesteps = 6\n input_shape = (timesteps, input_size)",
"filename": "tensorflow/python/keras/layers/cudnn_recurrent_test.py",
"status": "modified"
},
{
"diff": "@@ -322,7 +322,7 @@ def convert_nested_model(weights):\n weights = convert_nested_bidirectional(weights)\n if layer.__class__.__name__ == 'TimeDistributed':\n weights = convert_nested_time_distributed(weights)\n- elif layer.__class__.__name__ in ['Model', 'Sequential']:\n+ elif layer.__class__.__name__ in ['Model', 'Sequential', 'Functional']:\n weights = convert_nested_model(weights)\n \n if original_keras_version == '1':",
"filename": "tensorflow/python/keras/saving/hdf5_format.py",
"status": "modified"
}
]
}
|
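As a hedged illustration of the structure the one-line fix above addresses (layer sizes are made up, and reproducing the original failure additionally requires saving with CuDNN-compatible weights and loading in a non-CuDNN environment, or vice versa):

```python
# Sketch of a GRU nested inside a Functional submodel; sizes are illustrative.
import tensorflow as tf
from tensorflow import keras

inner_in = keras.Input(shape=(None, 8))
inner_out = keras.layers.GRU(16, reset_after=True)(inner_in)  # CuDNN-compatible bias layout
inner = keras.Model(inner_in, inner_out)   # a nested Functional model

outer_in = keras.Input(shape=(None, 8))
outer = keras.Model(outer_in, keras.layers.Dense(4)(inner(outer_in)))

outer.save_weights('nested_gru.h5')
# HDF5 loading walks convert_nested_model(); before the patch above, layers
# inside a submodel whose class name is 'Functional' were skipped, so the
# GRU bias conversion described in the issue never ran.
outer.load_weights('nested_gru.h5')
```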
{
"body": "\r\n**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):\r\n- OS Platform and Distribution (Linux Ubuntu 18.04):\r\n- TensorFlow installed from (pip3):\r\n- TensorFlow version (use command below):\r\n`pip3 install tensorflow-gpu==2.4.1`\r\n- Python version:3.8.0\r\n- CUDA/cuDNN version: cuda11.3 driver 465.19\r\n- GPU model and memory: eight 3090 cards\r\n\r\ncodes as the [URL](https://github.com/tensorflow/docs/blob/master/site/en/tutorials/distribute/custom_training.ipynb)\r\nnever modified\r\n\r\nbugs\r\n```\r\n2021-05-14 18:00:34.332244: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:656] In AUTO-mode, and switching to DATA-based sharding, instead of FILE-based sharding as we cannot find appropriate reader dataset op(s) to shard. Error: Found an unshardable source dataset: name: \"TensorSliceDataset/_2\"\r\nop: \"TensorSliceDataset\"\r\ninput: \"Placeholder/_0\"\r\ninput: \"Placeholder/_1\"\r\nattr {\r\n key: \"Toutput_types\"\r\n value {\r\n list {\r\n type: DT_FLOAT\r\n type: DT_UINT8\r\n }\r\n }\r\n}\r\nattr {\r\n key: \"output_shapes\"\r\n value {\r\n list {\r\n shape {\r\n dim {\r\n size: 28\r\n }\r\n dim {\r\n size: 28\r\n }\r\n dim {\r\n size: 1\r\n }\r\n }\r\n shape {\r\n }\r\n }\r\n }\r\n}\r\n\r\n2021-05-14 18:00:34.366102: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:656] In AUTO-mode, and switching to DATA-based sharding, instead of FILE-based sharding as we cannot find appropriate reader dataset op(s) to shard. Error: Found an unshardable source dataset: name: \"TensorSliceDataset/_2\"\r\nop: \"TensorSliceDataset\"\r\ninput: \"Placeholder/_0\"\r\ninput: \"Placeholder/_1\"\r\nattr {\r\n key: \"Toutput_types\"\r\n value {\r\n list {\r\n type: DT_FLOAT\r\n type: DT_UINT8\r\n }\r\n }\r\n}\r\nattr {\r\n key: \"output_shapes\"\r\n value {\r\n list {\r\n shape {\r\n dim {\r\n size: 28\r\n }\r\n dim {\r\n size: 28\r\n }\r\n dim {\r\n size: 1\r\n }\r\n }\r\n shape {\r\n }\r\n }\r\n }\r\n}\r\n\r\n2021-05-14 18:00:35.243992: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)\r\n2021-05-14 18:00:35.262293: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2500000000 Hz\r\nTraceback (most recent call last):\r\n File \"/data/prod/xulm1/custom_training.py\", line 113, in <module>\r\n total_loss += distributed_train_step(x)\r\n File \"/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py\", line 828, in __call__\r\n result = self._call(*args, **kwds)\r\n File \"/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py\", line 871, in _call\r\n self._initialize(args, kwds, add_initializers_to=initializers)\r\n File \"/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py\", line 725, in _initialize\r\n self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access\r\n File \"/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py\", line 2969, in _get_concrete_function_internal_garbage_collected\r\n graph_function, _ = self._maybe_define_function(args, kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py\", line 3361, in _maybe_define_function\r\n graph_function = self._create_graph_function(args, kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py\", line 3196, in 
_create_graph_function\r\n func_graph_module.func_graph_from_py_func(\r\n File \"/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py\", line 990, in func_graph_from_py_func\r\n func_outputs = python_func(*func_args, **func_kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py\", line 634, in wrapped_fn\r\n out = weak_wrapped_fn().__wrapped__(*args, **kwds)\r\n File \"/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py\", line 977, in wrapper\r\n raise e.ag_error_metadata.to_exception(e)\r\nAttributeError: in user code:\r\n\r\n /data/prod/xulm1/custom_training.py:99 distributed_train_step *\r\n per_replica_losses = strategy.experimental_run_v2(train_step,\r\n\r\n AttributeError: 'MirroredStrategy' object has no attribute 'experimental_run_v2'\r\n\r\n\r\n```\r\n\r\nhow to deal with this ?\r\nthx\r\n",
"comments": [
{
"body": "I found that v2 wasn't the Attribute, as below\r\n```\r\n>>> strategy.ex\r\nstrategy.experimental_distribute_dataset(\r\nstrategy.experimental_distribute_datasets_from_function(\r\nstrategy.experimental_distribute_values_from_function(\r\nstrategy.experimental_local_results(\r\n**strategy.experimental_run(**\r\n```\r\nand i use the last \r\nbut got another bug\r\n```\r\nTraceback (most recent call last):\r\n File \"/data/prod/xulm1/custom_training.py\", line 113, in <module>\r\n total_loss += distributed_train_step(x)\r\n File \"/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py\", line 828, in __call__\r\n result = self._call(*args, **kwds)\r\n File \"/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py\", line 871, in _call\r\n self._initialize(args, kwds, add_initializers_to=initializers)\r\n File \"/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py\", line 725, in _initialize\r\n self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access\r\n File \"/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py\", line 2969, in _get_concrete_function_internal_garbage_collected\r\n graph_function, _ = self._maybe_define_function(args, kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py\", line 3361, in _maybe_define_function\r\n graph_function = self._create_graph_function(args, kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/function.py\", line 3196, in _create_graph_function\r\n func_graph_module.func_graph_from_py_func(\r\n File \"/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py\", line 990, in func_graph_from_py_func\r\n func_outputs = python_func(*func_args, **func_kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/def_function.py\", line 634, in wrapped_fn\r\n out = weak_wrapped_fn().__wrapped__(*args, **kwds)\r\n File \"/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py\", line 977, in wrapper\r\n raise e.ag_error_metadata.to_exception(e)\r\nTypeError: in user code:\r\n\r\n /data/prod/xulm1/custom_training.py:99 distributed_train_step *\r\n per_replica_losses = strategy.experimental_run(train_step,\r\n\r\n TypeError: experimental_run() got an unexpected keyword argument 'args'\r\n\r\n```",
"created_at": "2021-05-14T10:29:12Z"
},
{
"body": "Hi @ucasiggcas ,I think that ```mirroredstrategy.experimental_run_v2``` has been deprecated since Tensorflow 2.2 .This can found from the release pages:\r\nhttps://github.com/tensorflow/tensorflow/blob/93360e5c3bb1c7f3d5de2267d564bc8c77dfe3de/RELEASE.md\r\nIt has been renamed to ```mirroredstrategy.run``` method as specified in this segment:\r\n\r\n```Deprecated experimental_run_v2 method for distribution strategies and renamed the method run as it is no longer experimental.```\r\n\r\n Raised a PR #49219 for logging messages.",
"created_at": "2021-05-16T15:20:43Z"
},
{
"body": "On a side note for using gpus with TF 2.4 pre built binary compatible cuda version is 11.0 and cudnn 8.0",
"created_at": "2021-05-20T00:10:56Z"
},
{
"body": "ok\r\nthx ",
"created_at": "2021-05-21T12:30:49Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/49187\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/49187\">No</a>\n",
"created_at": "2021-05-22T02:22:55Z"
}
],
"number": 49187,
"title": "AttributeError: 'MirroredStrategy' object has no attribute 'experimental_run_v2'"
}
|
{
"body": "To resolve PR #49187 , by providing logging messages for deprecated 'experimental_run_v2' in Tf 1.x\r\n\r\n",
"number": 49219,
"review_comments": [
{
"body": "```suggestion\r\n```",
"created_at": "2021-06-18T16:07:17Z"
},
{
"body": "Resolved. And added deprecation decorators where-ever specified.",
"created_at": "2021-06-18T18:29:48Z"
},
{
"body": "This is the TF1 Strategy base object. I don't think we would like to have deprecation warnings for TF1 users.",
"created_at": "2021-06-20T06:51:38Z"
},
{
"body": "Reverted the changes for TF1 Strategy based objects.",
"created_at": "2021-06-20T06:58:38Z"
},
{
"body": "Please don't insert additional whitespace",
"created_at": "2021-06-24T14:35:40Z"
},
{
"body": "Removed additional whitespace.",
"created_at": "2021-06-24T16:30:14Z"
},
{
"body": "use run() instead.",
"created_at": "2021-07-13T12:06:31Z"
},
{
"body": "Made the required change at line 963. Let me know if any other changes are required.",
"created_at": "2021-07-13T15:23:08Z"
},
{
"body": "whitespace at the end of the line",
"created_at": "2021-07-14T23:42:19Z"
},
{
"body": "whitespace",
"created_at": "2021-07-14T23:42:34Z"
},
{
"body": "whitespace",
"created_at": "2021-07-14T23:42:50Z"
},
{
"body": "whitespace",
"created_at": "2021-07-14T23:42:57Z"
},
{
"body": "Please add back the blank line.",
"created_at": "2021-07-14T23:44:05Z"
},
{
"body": "Have removed the whitespace",
"created_at": "2021-07-15T08:13:34Z"
},
{
"body": "Whitespace is removed",
"created_at": "2021-07-15T08:13:51Z"
},
{
"body": "Whitespace is removed",
"created_at": "2021-07-15T08:13:59Z"
},
{
"body": "This whitespace is a single blank line at 2904.",
"created_at": "2021-07-15T08:15:06Z"
},
{
"body": "Added back the blank line",
"created_at": "2021-07-15T08:15:32Z"
},
{
"body": "You also added trailing whitespace here",
"created_at": "2021-07-15T16:00:31Z"
},
{
"body": "Please don't add trailing whitespace at the end of lines.\r\n```suggestion\r\n\r\n```",
"created_at": "2021-07-15T16:01:37Z"
},
{
"body": "Removed the trailing whitespace\r\n",
"created_at": "2021-07-15T16:25:42Z"
}
],
"title": "Added tf_logging for deprecated 'experimental_run_v2' (Tf 1.x)"
}
|
{
"commits": [
{
"message": "Added tf_logging for deprecated use cases"
},
{
"message": "Use deprecated decorator"
},
{
"message": "Merge branch 'tensorflow:master' into master"
},
{
"message": "deprecated decorators"
},
{
"message": "Merge branch 'tensorflow:master' into master"
},
{
"message": "Merge branch 'master' of https://github.com/abhilash1910/tensorflow"
},
{
"message": "decprecated decorator"
},
{
"message": "Revert tf1.x deprecation changes"
},
{
"message": "Merge branch 'tensorflow:master' into master"
},
{
"message": "Merge branch 'tensorflow:master' into master"
},
{
"message": "remove spaces"
},
{
"message": "Merge branch 'master' of https://github.com/abhilash1910/tensorflow"
},
{
"message": "remove spaces"
},
{
"message": "rename to run()"
},
{
"message": "Merge branch 'tensorflow:master' into master"
},
{
"message": "remove whitespace"
},
{
"message": "Merge branch 'master' of https://github.com/abhilash1910/tensorflow"
},
{
"message": "remove whitespace"
}
],
"files": [
{
"diff": "@@ -937,6 +937,7 @@ def scope(self):\n # pylint: enable=line-too-long\n \n @doc_controls.do_not_doc_inheritable # DEPRECATED, moving to `extended`\n+ @deprecated(None,'use extended.colocate_vars_with() instead.')\n def colocate_vars_with(self, colocate_with_variable):\n \"\"\"DEPRECATED: use extended.colocate_vars_with() instead.\"\"\"\n return self._extended.colocate_vars_with(colocate_with_variable)\n@@ -959,6 +960,7 @@ def make_input_fn_iterator(self,\n input_fn, replication_mode=replication_mode)\n \n @doc_controls.do_not_generate_docs # DEPRECATED: TF 1.x only\n+ @deprecated(None,'use run() instead')\n def experimental_run(self, fn, input_iterator=None):\n \"\"\"DEPRECATED TF 1.x ONLY.\"\"\"\n with self.scope():\n@@ -1490,6 +1492,7 @@ def mean_reduce_fn(v):\n return math_ops.truediv(numer, denom)\n \n @doc_controls.do_not_doc_inheritable # DEPRECATED\n+ @deprecated(None,'use `experimental_local_results` instead.')\n def unwrap(self, value):\n \"\"\"Returns the list of all local per-replica values contained in `value`.\n \n@@ -1542,6 +1545,7 @@ def num_replicas_in_sync(self):\n return self._extended._num_replicas_in_sync # pylint: disable=protected-access\n \n @doc_controls.do_not_doc_inheritable # DEPRECATED: see doc string\n+ @deprecated(None, 'use `update_config_proto` instead.')\n def configure(self,\n session_config=None,\n cluster_spec=None,\n@@ -1942,6 +1946,7 @@ def experimental_make_numpy_dataset(self, numpy_input, session=None):\n return self.extended.experimental_make_numpy_dataset(\n numpy_input, session=session)\n \n+ @deprecated(None,'This method is not available in TF 2.x. Please switch to using `run` instead.')\n def experimental_run(self, fn, input_iterator=None): # pylint: disable=useless-super-delegation\n \"\"\"Runs ops in `fn` on each replica, with inputs from `input_iterator`.\n \n@@ -2747,6 +2752,7 @@ def broadcast_to(self, tensor, destinations):\n def _broadcast_to(self, tensor, destinations):\n raise NotImplementedError(\"must be implemented in descendants\")\n \n+ @deprecated(None,'please use `run` instead.')\n def experimental_run_steps_on_iterator(self,\n fn,\n iterator,",
"filename": "tensorflow/python/distribute/distribute_lib.py",
"status": "modified"
}
]
}
|
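A minimal sketch of the `experimental_run_v2` → `run` rename that the row above tracks; the real `tf.distribute` APIs are used, but the toy `step_fn` and constant input are illustrative assumptions, not from the thread.

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

def step_fn(x):
    # Toy per-replica computation (illustrative assumption).
    return tf.reduce_sum(x)

inputs = tf.constant([[1.0, 2.0], [3.0, 4.0]])

# TF <= 2.1: per_replica = strategy.experimental_run_v2(step_fn, args=(inputs,))
# TF 2.2+: the method is no longer experimental and is simply called `run`.
per_replica = strategy.run(step_fn, args=(inputs,))
total = strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica, axis=None)
```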
{
"body": "I want to use [`tf.keras.metrics`](https://www.tensorflow.org/api_docs/python/tf/keras/metrics) the same way I use [`tf.keras.optimizers`](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers). I can pass `'Adam'` and it'll work. I can pass an instantiation of [`Adam`](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam) and it'll work.\r\n\r\nUnfortunately the same isn't true for `tf.keras.metrics`, making it difficult to derive the full list of builtin metrics…\r\n\r\n**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):\r\nUse this example and just replace `'accuracy'` with `'Accuracy'` https://github.com/tensorflow/datasets/blob/8723d84/docs/keras_example.ipynb\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):\r\n- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:\r\n- TensorFlow installed from (source or binary):\r\n- TensorFlow version (use command below): 2.3.0\r\n- Python version: 3.8.5\r\n\r\n**Describe the current behavior**\r\nError\r\n\r\n**Describe the expected behavior**\r\nNo error. Looking here it doesn't appear they they are different:\r\nhttps://www.tensorflow.org/api_docs/python/tf/keras/metrics\r\nhttps://www.tensorflow.org/api_docs/python/tf/keras/metrics/Accuracy\r\n\r\nOr is one the class and one an instance? - Should we be able to pass `'Accuracy'` as a string, or is the expected behaviour?\r\n\r\n**Standalone code to reproduce the issue**\r\nUse this example and just replace `'accuracy'` with `'Accuracy'` https://github.com/tensorflow/datasets/blob/8723d84/docs/keras_example.ipynb\r\n\r\n**Other info / logs** Include any logs or source code that would be helpful to\r\ndiagnose the problem. If including tracebacks, please include the full\r\ntraceback. 
Large logs and files should be attached.\r\n```\r\nFile \"lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py\", line 108, in _method_wrapper\r\n return method(self, *args, **kwargs)\r\n File \"lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py\", line 1098, in fit\r\n tmp_logs = train_function(iterator)\r\n File \"lib/python3.8/site-packages/tensorflow/python/eager/def_function.py\", line 780, in __call__\r\n result = self._call(*args, **kwds)\r\n File \"lib/python3.8/site-packages/tensorflow/python/eager/def_function.py\", line 823, in _call\r\n self._initialize(args, kwds, add_initializers_to=initializers)\r\n File \"lib/python3.8/site-packages/tensorflow/python/eager/def_function.py\", line 696, in _initialize\r\n self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access\r\n File \"lib/python3.8/site-packages/tensorflow/python/eager/function.py\", line 2855, in _get_concrete_function_internal_garbage_collected\r\n graph_function, _, _ = self._maybe_define_function(args, kwargs)\r\n File \"lib/python3.8/site-packages/tensorflow/python/eager/function.py\", line 3213, in _maybe_define_function\r\n graph_function = self._create_graph_function(args, kwargs)\r\n File \"lib/python3.8/site-packages/tensorflow/python/eager/function.py\", line 3065, in _create_graph_function\r\n func_graph_module.func_graph_from_py_func(\r\n File \"lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py\", line 986, in func_graph_from_py_func\r\n func_outputs = python_func(*func_args, **func_kwargs)\r\n File \"lib/python3.8/site-packages/tensorflow/python/eager/def_function.py\", line 600, in wrapped_fn\r\n return weak_wrapped_fn().__wrapped__(*args, **kwds)\r\n File \"lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py\", line 973, in wrapper\r\n raise e.ag_error_metadata.to_exception(e)\r\nValueError: in user code:\r\n\r\n lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:806 train_function *\r\n return step_function(self, iterator)\r\n lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:796 step_function **\r\n outputs = model.distribute_strategy.run(run_step, args=(data,))\r\n lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:1211 run\r\n return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)\r\n lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica\r\n return self._call_for_each_replica(fn, args, kwargs)\r\n lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2945 _call_for_each_replica\r\n return fn(*args, **kwargs)\r\n lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:789 run_step **\r\n outputs = model.train_step(data)\r\n lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:759 train_step\r\n self.compiled_metrics.update_state(y, y_pred, sample_weight)\r\n lib/python3.8/site-packages/tensorflow/python/keras/engine/compile_utils.py:409 update_state\r\n metric_obj.update_state(y_t, y_p, sample_weight=mask)\r\n lib/python3.8/site-packages/tensorflow/python/keras/utils/metrics_utils.py:90 decorated\r\n update_op = update_state_fn(*args, **kwargs)\r\n lib/python3.8/site-packages/tensorflow/python/keras/metrics.py:176 update_state_fn\r\n return ag_update_state(*args, **kwargs)\r\n lib/python3.8/site-packages/tensorflow/python/keras/metrics.py:612 update_state **\r\n matches = ag_fn(y_true, y_pred, 
**self._fn_kwargs)\r\n lib/python3.8/site-packages/tensorflow/python/keras/metrics.py:3208 accuracy **\r\n y_pred.shape.assert_is_compatible_with(y_true.shape)\r\n lib/python3.8/site-packages/tensorflow/python/framework/tensor_shape.py:1134 assert_is_compatible_with\r\n raise ValueError(\"Shapes %s and %s are incompatible\" % (self, other))\r\n\r\n ValueError: Shapes (None, 10) and (None, 1) are incompatible\r\n```",
"comments": [
{
"body": "Hello, I would like to work on this issue, can you explain me what are you expecting from this. Should I have to make it to case insensitive or just put a warning/error.",
"created_at": "2020-08-15T18:20:09Z"
},
{
"body": "Was able to reproduce the issue in TF 2.3 and nightly version. Please find the gist [here](https://colab.research.google.com/gist/saikumarchalla/5c8d7694ffe4a3db2110f513d4805fb9/tensorflow-datasets.ipynb#scrollTo=XWqxdmS1NLKA).Thanks!",
"created_at": "2020-08-17T16:18:18Z"
},
{
"body": "@SamuelMarks I think you can pass 'accuracy' as well as 'Accuracy' metric. Following toy example exhibits successful behavior.\r\n```python\r\nimport tensorflow as tf\r\nimport numpy as np\r\ninputs = tf.keras.layers.Input(shape=(3,))\r\noutputs = tf.keras.layers.Dense(2)(inputs)\r\nmodel = tf.keras.models.Model(inputs=inputs, outputs=outputs)\r\nmodel.compile(optimizer=\"adam\", loss=\"mse\", metrics=[\"Accuracy\"])\r\n\r\nx = np.random.random((2, 3))\r\ny = np.random.randint(0, 2, (2, 2))\r\nmodel.fit(x, y)\r\n\r\nprint([m.name for m in model.metrics])\r\nprint(tf.__version__)\r\n#Ouput:\r\n1/1 [==============================] - 0s 282us/step - loss: 0.6162 - accuracy: 0.0000e+00\r\n['loss', 'accuracy']\r\n2.4.0-dev20200721\r\n```\r\nHowever the same logic fails in the example you are trying. Will investigate more. Thanks!",
"created_at": "2020-08-17T18:14:38Z"
},
{
"body": "Maybe it's because `metrics=['Accuracy']` means just plain `tf.keras.metrics.Accuracy` class. Since CIFAR-10 and MNIST classifies images into 10 classes (CIFAR-10 can be seen in [current tensorflow tutorial](https://www.tensorflow.org/tutorials/images/cnn)), metric should be categorical, for example, `tf.keras.metrics.SparseCategoricalAccuracy` class, or `'sparse_categorical_accuracy'`(The name of `tf.keras.metrics.SparseCategoricalAccuracy` class).\r\n\r\nBelow code(I tested for CIFAR-10) should work, test in\r\n- OS: Windows 10 Pro\r\n- GPU: GTX 1660 Ti with MAX-Q design\r\n- TF version: 2.3.1\r\n- CUDA: 10.1\r\n- cuDNN: 7.6.5\r\n- Python: 3.8.6\r\n\r\n```python\r\nimport tensorflow as tf\r\n\r\nfrom tensorflow.keras import datasets, layers, models\r\n\r\n(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()\r\n\r\ntrain_images, test_images = train_images / 255.0, test_images / 255.0\r\n\r\nmodel = models.Sequential()\r\nmodel.add(layers.Conv2D(32, (3, 3), activation=tf.keras.activations.relu, input_shape=(32, 32, 3)))\r\nmodel.add(layers.MaxPooling2D((2, 2)))\r\nmodel.add(layers.Conv2D(64, (3, 3), activation=tf.keras.activations.relu))\r\nmodel.add(layers.MaxPooling2D((2, 2)))\r\nmodel.add(layers.Conv2D(64, (3, 3), activation=tf.keras.activations.relu))\r\nmodel.add(layers.Flatten())\r\nmodel.add(layers.Dense(64, activation=tf.keras.activations.relu))\r\nmodel.add(layers.Dense(10))\r\n\r\nmodel.summary()\r\n\r\nmodel.compile(\r\n optimizer=tf.keras.optimizers.Adam(),\r\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\r\n metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]\r\n)\r\n\r\nhistory = model.fit(train_images, train_labels, epochs=10,\r\n validation_data=(test_images, test_labels))\r\n\r\ntest_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)\r\n\r\nprint(test_acc)\r\n```\r\nIf you change `metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]` to `metrics=['Accuracy']` above code will not work, with the error `ValueError: Shapes (None, 10) and (None, 1) are incompatible`.\r\n\r\nAfter searching some tensorflow code, found this comment:\r\n(https://github.com/tensorflow/tensorflow/blob/9278b9421f64a2103d18a67d825a9b6be243e211/tensorflow/python/keras/engine/training.py#L477)\r\n\r\n> When you pass the strings 'accuracy' or 'acc', we convert this to one of `tf.keras.metrics.BinaryAccuracy`, `tf.keras.metrics.CategoricalAccuracy`, `tf.keras.metrics.SparseCategoricalAccuracy` based on the loss function used and the model output shape. We do a similar conversion for the strings 'crossentropy' and 'ce' as well.\r\n\r\nThis explains why `'accuracy'` works but `'Accuracy'` does not.\r\n\r\nHope this will help :)",
"created_at": "2020-10-29T08:43:38Z"
},
{
"body": "@SamuelMarks, have you seen the comment from @amoretspero? Let me know if this answers your question.",
"created_at": "2021-02-01T19:36:18Z"
},
{
"body": "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.\n",
"created_at": "2021-02-10T01:03:03Z"
},
{
"body": "Thanks @nikitamaia / @amoretspero yeah it does resolve the issue… but not the issue. I'm still left with no way to infer this behaviour for the code I generate https://github.com/SamuelMarks/ml-params-tensorflow/blob/4eea761/ml_params_tensorflow/ml_params/metrics.py\r\n\r\nShould I just give up on having it work this way, or maybe implementing an `object` inheriting `class` which just constructs the relevant `Metric` would be in order?",
"created_at": "2021-02-10T12:50:03Z"
},
{
"body": "Adding the `contributions welcome` label to this issue for further investigation by the community. If you are interested in working on this issue, please leave a comment and I will assign it to you. Thanks!",
"created_at": "2021-03-15T14:37:01Z"
},
{
"body": "@SamuelMarks is this releated to https://github.com/tensorflow/tensorflow/issues/41361 or something different?",
"created_at": "2021-04-12T22:52:25Z"
},
{
"body": "@nikitamaia , Please assign this issue to me. I have worked on this and raised PR #49218 .",
"created_at": "2021-05-17T08:41:23Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/42383\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/42383\">No</a>\n",
"created_at": "2021-05-17T17:48:59Z"
},
{
"body": "@SamuelMarks , You can confirm. The [gist](https://colab.research.google.com/gist/saikumarchalla/5c8d7694ffe4a3db2110f513d4805fb9/tensorflow-datasets.ipynb#scrollTo=XWqxdmS1NLKA) works normally now.",
"created_at": "2021-05-18T13:39:45Z"
}
],
"number": 42383,
"title": "`metrics=['accuracy']` works, `metrics=['Accuracy']` gives `ValueError: Shapes (None, 10) and (None, 1) are incompatible`"
}
|
{
"body": "Fixes #42383\r\n\r\ncc: @mihaimaruseac , @bhack , @SamuelMarks , @nikitamaia ",
"number": 49218,
"review_comments": [],
"title": "Fixed Accuracy to be able to transform dynamically"
}
|
{
"commits": [
{
"message": "Fixed Accuracy to be able to transform dynamically"
}
],
"files": [
{
"diff": "@@ -500,7 +500,7 @@ def _get_metric_object(self, metric, y_t, y_p):\n \n # Convenience feature for selecting b/t binary, categorical,\n # and sparse categorical.\n- if metric not in ['accuracy', 'acc', 'crossentropy', 'ce']:\n+ if str(metric).lower() not in ['accuracy', 'acc', 'crossentropy', 'ce']:\n metric_obj = metrics_mod.get(metric)\n else:\n y_t_rank = len(y_t.shape.as_list())\n@@ -512,7 +512,7 @@ def _get_metric_object(self, metric, y_t, y_p):\n is_sparse_categorical = (\n y_t_rank < y_p_rank or y_t_last_dim == 1 and y_p_last_dim > 1)\n \n- if metric in ['accuracy', 'acc']:\n+ if str(metric).lower() in ['accuracy', 'acc']:\n if is_binary:\n metric_obj = metrics_mod.binary_accuracy\n elif is_sparse_categorical:",
"filename": "tensorflow/python/keras/engine/compile_utils.py",
"status": "modified"
}
]
}
|
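A minimal sketch of the string-vs-object metric resolution described in the thread above: the lowercase `'accuracy'` string is special-cased and converted based on the loss and output shape, while an explicit metric object is unambiguous. The model and shapes are illustrative assumptions, not taken from the issue.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10),
])

# The lowercase string 'accuracy' is special-cased: with integer labels and a
# 10-logit output it resolves to SparseCategoricalAccuracy.
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])

# Passing the metric object (or 'sparse_categorical_accuracy') is unambiguous
# and avoids the shape mismatch hit when 'Accuracy' is looked up literally.
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
```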
{
"body": "https://github.com/tensorflow/tensorflow/blob/40caef44549a199eaac327b673fa862194b66fc4/tensorflow/python/keras/backend.py#L6004-L6007\r\n\r\nIs line 6007 supposed to be `(len(bias_shape), ndim(x) - 1))`?",
"comments": [
{
"body": "Thanks.. Can you submit a small PR?",
"created_at": "2021-05-10T14:39:45Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/49046\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/49046\">No</a>\n",
"created_at": "2021-05-13T19:12:52Z"
}
],
"number": 49046,
"title": "Unclear error message in keras.backend.bias_add"
}
|
{
"body": "closes #49046 \r\n",
"number": 49049,
"review_comments": [],
"title": "Made error message in keras.backend.bias_add match the check"
}
|
{
"commits": [
{
"message": "Made error message in keras.backend.bias_add match check"
}
],
"files": [
{
"diff": "@@ -6004,7 +6004,7 @@ def bias_add(x, bias, data_format=None):\n if len(bias_shape) != 1 and len(bias_shape) != ndim(x) - 1:\n raise ValueError(\n 'Unexpected bias dimensions %d, expect to be 1 or %d dimensions' %\n- (len(bias_shape), ndim(x)))\n+ (len(bias_shape), ndim(x) - 1))\n \n if len(bias_shape) == 1:\n if data_format == 'channels_first':",
"filename": "tensorflow/python/keras/backend.py",
"status": "modified"
}
]
}
|
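A small sketch of the check whose message the patch above corrects: `bias` must have rank 1 or `ndim(x) - 1`, so the error should report `ndim(x) - 1` rather than `ndim(x)`. The shapes below are illustrative assumptions.

```python
import tensorflow as tf

x = tf.zeros((2, 5, 5, 3))        # ndim(x) == 4
bias_rank1 = tf.zeros((3,))       # rank 1: accepted
bias_rank3 = tf.zeros((5, 5, 3))  # rank ndim(x) - 1 == 3: accepted
bias_rank2 = tf.zeros((2, 3))     # rank 2: rejected; message should say "1 or 3"

tf.keras.backend.bias_add(x, bias_rank1)
tf.keras.backend.bias_add(x, bias_rank3)
# tf.keras.backend.bias_add(x, bias_rank2)  # raises ValueError
```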
{
"body": "<em>Please make sure that this is a bug. As per our\r\n[GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md),\r\nwe only address code/doc bugs, performance issues, feature requests and\r\nbuild/installation issues on GitHub. tag:bug_template</em>\r\n\r\n**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): OSX Catalina 10.15\r\n- TensorFlow installed from (source or binary): pip\r\n- TensorFlow version (use command below): 2.5.0rc2\r\n- Python version: 3.9.2\r\n\r\n**Describe the current behavior**\r\n\r\nPassing a list of Numpy arrays of rank > 1 one breaks tf.ragged.constant. Happens in both tf 2.4 and 2.5\r\n\r\n**Describe the expected behavior**\r\n\r\nThat it would create the requests multidimensional ragged constant. I think this is a pretty common use case; for example, I have N variable-sized lists of xyz points, shape (None, 3), and want to turn them into one ragged constant of shape (N, None, 3).\r\n\r\n**Standalone code to reproduce the issue**\r\n\r\nhttps://colab.research.google.com/drive/1Zi40gL5IdARNeKid_7R_2b7_JcnWDW0T?usp=sharing \r\n\r\n**Other info / logs** Include any logs or source code that would be helpful to\r\n`Traceback (most recent call last):\r\n File \"/Users/user1/Desktop/CS/ML/trees/trees-pointnet/test.py\", line 31, in <module>\r\n b = tf.ragged.constant(a, ragged_rank=1)\r\n File \"/Users/user1/anaconda3/envs/tf2.5/lib/python3.9/site-packages/tensorflow/python/util/dispatch.py\", line 206, in wrapper\r\n return target(*args, **kwargs)\r\n File \"/Users/user1/anaconda3/envs/tf2.5/lib/python3.9/site-packages/tensorflow/python/ops/ragged/ragged_factory_ops.py\", line 86, in constant\r\n return _constant_value(ragged_factory, constant_op.constant, pylist, dtype,\r\n File \"/Users/user1/anaconda3/envs/tf2.5/lib/python3.9/site-packages/tensorflow/python/ops/ragged/ragged_factory_ops.py\", line 218, in _constant_value\r\n inner_shape = _default_inner_shape_for_pylist(pylist, ragged_rank)\r\n File \"/Users/user1/anaconda3/envs/tf2.5/lib/python3.9/site-packages/tensorflow/python/ops/ragged/ragged_factory_ops.py\", line 311, in _default_inner_shape_for_pylist\r\n inner_shape = get_inner_shape(flat_values)\r\n File \"/Users/user1/anaconda3/envs/tf2.5/lib/python3.9/site-packages/tensorflow/python/ops/ragged/ragged_factory_ops.py\", line 285, in get_inner_shape\r\n return (len(item),) + get_inner_shape(item[0])\r\n File \"/Users/user1/anaconda3/envs/tf2.5/lib/python3.9/site-packages/tensorflow/python/ops/ragged/ragged_factory_ops.py\", line 284, in get_inner_shape\r\n elif item:\r\nValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()`\r\n",
"comments": [
{
"body": "Can you check with https://github.com/tensorflow/tensorflow/pull/48945?",
"created_at": "2021-05-06T22:47:38Z"
},
{
"body": "That looks like it should do it",
"created_at": "2021-05-17T04:50:25Z"
},
{
"body": "@JRice15 ,\r\n\r\nPlease close this issue once PR is merged.Thanks",
"created_at": "2021-05-17T11:53:10Z"
},
{
"body": "@Saduf2019 ,\r\n\r\nI was able to reproduce the issue in tf v2.4,v2.5 and nightly.Please find the [gist](https://colab.research.google.com/gist/tilakrayal/002794e031c0d95a282f4f108bcd7511/48941.ipynb) here.",
"created_at": "2021-05-24T11:54:10Z"
},
{
"body": "Please close this. The PR Is merged.",
"created_at": "2021-06-11T19:42:59Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/48941\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/48941\">No</a>\n",
"created_at": "2021-06-11T21:00:14Z"
}
],
"number": 48941,
"title": "tf.ragged.constant fails on list of np.array"
}
|
{
"body": "Explore a fix for https://github.com/tensorflow/tensorflow/issues/48941\r\n\r\nFixes #48941",
"number": 48945,
"review_comments": [],
"title": "Ragged factory Extra check item len"
}
|
{
"commits": [
{
"message": "Extra check item len"
},
{
"message": "Add test"
},
{
"message": "Fix space"
},
{
"message": "Fix"
},
{
"message": "More g-style friendly"
}
],
"files": [
{
"diff": "@@ -73,6 +73,11 @@ class RaggedConstantValueOpTest(test_util.TensorFlowTestCase,\n np.array([]), [[5, 6], [7, 8], [9, 0]]],\n ragged_rank=1,\n expected_shape=(3, None, 2)),\n+ dict(\n+ pylist=[[np.array([3, np.array(4)]), [1, 2]],\n+ np.array([]), [[5, 6], [7, 8], [9, 0]]],\n+ ragged_rank=1,\n+ expected_shape=(3, None, 2)),\n dict(\n pylist=[[[1, 2], np.array([3, np.array(4)])],\n np.array([]), [[5, 6], [7, 8], [9, 0]]],",
"filename": "tensorflow/python/ops/ragged/ragged_constant_value_op_test.py",
"status": "modified"
},
{
"diff": "@@ -281,7 +281,7 @@ def get_inner_shape(item):\n \"\"\"Returns the inner shape for a python list `item`.\"\"\"\n if not isinstance(item, (list, tuple)) and np.ndim(item) == 0:\n return ()\n- elif item:\n+ elif len(item) > 0:\n return (len(item),) + get_inner_shape(item[0])\n return (0,)\n ",
"filename": "tensorflow/python/ops/ragged/ragged_factory_ops.py",
"status": "modified"
}
]
}
|
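A minimal sketch of the failure mode and fix in the row above, assuming two toy NumPy point clouds of shape (n, 3); the list-conversion and `tf.ragged.stack` lines are generic workarounds for builds that predate the patch.

```python
import numpy as np
import tensorflow as tf

clouds = [np.ones((4, 3), np.float32), np.ones((7, 3), np.float32)]

# With the fix from PR #48945 this works directly on a list of np.ndarrays:
rt = tf.ragged.constant(clouds, ragged_rank=1)   # shape (2, None, 3)

# Workarounds on older versions: convert to plain lists, or stack tensors.
rt_lists = tf.ragged.constant([c.tolist() for c in clouds], ragged_rank=1)
rt_stack = tf.ragged.stack([tf.constant(c) for c in clouds])
```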
{
"body": "I am trying to add custom gradient support to the Java bindings (tensorflow/java#292). This is currently being prevented by the fact that `TF_AddGradientsWithPrefix` requires a lock on the graph, but so do any operation creation methods (i.e. `TF_NewOperation`), so it is impossible to create ops in gradient functions.\r\n\r\nIs there a way I can get around this? I am considering calling `TF_NewOperationLocked` directly but I assume the locks are there for a reason.\r\n",
"comments": [
{
"body": "Thanks for reporting the issue!",
"created_at": "2021-05-06T19:06:03Z"
},
{
"body": "Please feel free to send a PR!",
"created_at": "2021-05-06T19:07:07Z"
},
{
"body": "I can, but what's the preferred solution? Providing and exposing a `TF_AddGradientsWithPrefixLocked` that doesn't lock, or exposing `TF_NewOperationLocked` and `TF_FinishOperationLocked`? (See #48815)",
"created_at": "2021-05-06T19:10:19Z"
},
{
"body": "Exposing `TF_NewOperationLocked` and `TF_FinishOperationLocked` sgtm.",
"created_at": "2021-05-06T19:11:49Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/48767\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/48767\">No</a>\n",
"created_at": "2021-07-15T17:35:37Z"
}
],
"number": 48767,
"title": "C API Locking prevents custom gradient definition"
}
|
{
"body": "Expose `TF_NewOperationLocked`, `TF_FinishOperationLocked`, and `ToOperation` to the C API. `ToOperation` is in `c_api_internal.h`.\r\n\r\nFixes #48767\r\nFixes #48815",
"number": 48944,
"review_comments": [
{
"body": "I don't understand how this can be used in the public C API since tensorflow::Node is not a C type?",
"created_at": "2021-05-27T19:26:34Z"
},
{
"body": "Could you please indent this and others. The linter might complain during merge.",
"created_at": "2021-05-27T19:27:16Z"
},
{
"body": "Hmm, yeah, it's not really part of the C API, but we (tensorflow/java) are binding parts of the C++ API as well, and need this to go between. It probably shouldn't be exposed here, but it would be nice for it to be exposed somewhere, is there a better place to put it?",
"created_at": "2021-05-27T21:40:52Z"
},
{
"body": "Do you mean I should indent everything in the `extern \"C\" {` block? Or just the functions I changed (the other's aren't)? I went through and lined up all the parameters.",
"created_at": "2021-05-27T21:46:54Z"
},
{
"body": "I don't think we expose any APIs that mix C and C++ symbols. Would it be possible for tensorflow/java to fork this since this is a simple cast?",
"created_at": "2021-06-21T15:12:01Z"
},
{
"body": "Yep, I meant the params. Thanks for fixing.",
"created_at": "2021-06-21T15:12:44Z"
},
{
"body": "@saudet advice? Is this something we can do in an included file like adapters?",
"created_at": "2021-06-21T20:38:20Z"
},
{
"body": "This is just a reinterpret cast? Just taking its address with `new TF_Operation(node)` should work: https://github.com/tensorflow/java/blob/master/tensorflow-core/tensorflow-core-api/src/gen/java/org/tensorflow/internal/c_api/TF_Operation.java#L19",
"created_at": "2021-06-22T00:30:45Z"
}
],
"title": "Expose TF_NewOperationLocked, TF_FinishOperationLocked, and ToOperation"
}
|
{
"commits": [
{
"message": "Expose TF_NewOperationLocked, TF_FinishOperationLocked, and ToOperation"
},
{
"message": "Formatting"
},
{
"message": "Don't expose To_Operation"
}
],
"files": [
{
"diff": "@@ -782,9 +782,9 @@ void TF_GraphGetTensorShape(TF_Graph* graph, TF_Output output, int64_t* dims,\n \n extern \"C\" {\n \n-static TF_OperationDescription* TF_NewOperationLocked(TF_Graph* graph,\n- const char* op_type,\n- const char* oper_name)\n+TF_OperationDescription* TF_NewOperationLocked(TF_Graph* graph,\n+ const char* op_type,\n+ const char* oper_name)\n TF_EXCLUSIVE_LOCKS_REQUIRED(graph->mu) {\n return new TF_OperationDescription(graph, op_type, oper_name);\n }\n@@ -1041,8 +1041,8 @@ void TF_SetAttrValueProto(TF_OperationDescription* desc, const char* attr_name,\n status->status = Status::OK();\n }\n \n-static TF_Operation* TF_FinishOperationLocked(TF_OperationDescription* desc,\n- TF_Status* status)\n+TF_Operation* TF_FinishOperationLocked(TF_OperationDescription* desc,\n+ TF_Status* status)\n TF_EXCLUSIVE_LOCKS_REQUIRED(desc->graph->mu) {\n Node* ret = nullptr;\n ",
"filename": "tensorflow/c/c_api.cc",
"status": "modified"
},
{
"diff": "@@ -255,6 +255,12 @@ TF_CAPI_EXPORT extern void TF_GraphGetTensorShape(TF_Graph* graph,\n int64_t* dims, int num_dims,\n TF_Status* status);\n \n+// TF_NewOperation, but without locking the graph.\n+// Should prefer TF_NewOperation when possible.\n+TF_CAPI_EXPORT extern TF_OperationDescription* TF_NewOperationLocked(TF_Graph* graph,\n+ const char* op_type,\n+ const char* oper_name);\n+\n // Operation will only be added to *graph when TF_FinishOperation() is\n // called (assuming TF_FinishOperation() does not return an error).\n // *graph must not be deleted until after TF_FinishOperation() is\n@@ -406,6 +412,11 @@ TF_CAPI_EXPORT extern void TF_SetAttrValueProto(TF_OperationDescription* desc,\n size_t proto_len,\n TF_Status* status);\n \n+// TF_FinishOperation, but without locking the graph.\n+// TF_FinishOperation should be preferred when possible.\n+TF_CAPI_EXPORT extern TF_Operation* TF_FinishOperationLocked(TF_OperationDescription* desc,\n+ TF_Status* status);\n+\n // If this function succeeds:\n // * *status is set to an OK value,\n // * a TF_Operation is added to the graph,",
"filename": "tensorflow/c/c_api.h",
"status": "modified"
}
]
}
|
{
"body": "**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No\r\n- OS Platform and Distribution: Linux Ubuntu 16.04\r\n- TensorFlow installed from: binary (pip)\r\n- TensorFlow version: `2.0.0`\r\n- Python version: `3.5.2`\r\n- CUDA version: `10.1`\r\n- GPU model and memory: GTX 1060, 6GB\r\n\r\n``` python\r\n# example image batches\r\nvideo1 = tf.random.uniform(shape=[8, 64, 64, 1], minval=0, maxval=1)\r\nvideo2 = tf.random.uniform(shape=[8, 64, 64, 1], minval=0, maxval=1)\r\n```\r\nssim works fine but when I use the multiscale ssim, I am getting the following error message. What am I doing wrong? How do I fix this? \r\n\r\n**SSIM**\r\n``` python\r\nssim_score = tf.image.ssim(img1=video1, img2=video1, max_val=1.0)\r\nprint(ssim_score) # tf.Tensor([1. 1. 1. 1. 1. 1. 1. 1.], shape=(8,), dtype=float32)\r\n```\r\n\r\n**MS-SSIM**\r\n``` python\r\nms_ssim_score = tf.image.ssim_multiscale(img1=video1, img2=video2, max_val=1.0)\r\n```\r\n``` python\r\n---------------------------------------------------------------------------\r\nInvalidArgumentError Traceback (most recent call last)\r\n<ipython-input-9-cc68ceec0921> in <module>\r\n----> 1 ms_ssim_score = tf.image.ssim_multiscale(img1=video1, img2=video2, max_val=1.0)\r\n\r\n~/.local/lib/python3.5/site-packages/tensorflow_core/python/ops/image_ops_impl.py in ssim_multiscale(img1, img2, max_val, power_factors, filter_size, filter_sigma, k1, k2)\r\n 3405 filter_sigma=filter_sigma,\r\n 3406 k1=k1,\r\n-> 3407 k2=k2)\r\n 3408 mcs.append(nn_ops.relu(cs))\r\n 3409 \r\n\r\n~/.local/lib/python3.5/site-packages/tensorflow_core/python/ops/image_ops_impl.py in _ssim_per_channel(img1, img2, max_val, filter_size, filter_sigma, k1, k2)\r\n 3174 math_ops.greater_equal(shape1[-3:-1], filter_size)),\r\n 3175 [shape1, filter_size],\r\n-> 3176 summarize=8),\r\n 3177 control_flow_ops.Assert(\r\n 3178 math_ops.reduce_all(\r\n\r\n~/.local/lib/python3.5/site-packages/tensorflow_core/python/util/tf_should_use.py in wrapped(*args, **kwargs)\r\n 196 \"\"\"\r\n 197 def wrapped(*args, **kwargs):\r\n--> 198 return _add_should_use_warning(fn(*args, **kwargs))\r\n 199 return tf_decorator.make_decorator(\r\n 200 fn, wrapped, 'should_use_result',\r\n\r\n~/.local/lib/python3.5/site-packages/tensorflow_core/python/ops/control_flow_ops.py in Assert(condition, data, summarize, name)\r\n 154 op=None,\r\n 155 message=\"Expected '%s' to be true. Summarized data: %s\" %\r\n--> 156 (condition, \"\\n\".join(data_str)))\r\n 157 return\r\n 158 \r\n\r\nInvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true. Summarized data: 8, 8, 8, 1\r\n11\r\n\r\n```\r\n",
"comments": [
{
"body": "Issue is replicating on colab with TF 2.0. Please see the [gist](https://colab.sandbox.google.com/gist/gadagashwini/45c4eb25cbba648f1a48a0bade7587b8/untitled229.ipynb). Thanks!",
"created_at": "2019-10-31T09:25:22Z"
},
{
"body": "+1\r\nThe following code reproduces the error:\r\n\r\n```\r\na = np.random.randn(1, 10, 10, 1)\r\nmax_val = np.max(np.reshape(a, [-1]))\r\n\r\nl = tf.image.ssim_multiscale(a, a, max_val, filter_size=11)\r\n```",
"created_at": "2019-11-15T03:45:38Z"
},
{
"body": "Any update on this. Have the same issue with 2.1.0-rc2",
"created_at": "2020-03-06T03:27:15Z"
},
{
"body": "Could replicate the issue with Tf-nightly == 2.2.0.dev20200316.\r\nPlease find the gist [here](https://colab.sandbox.google.com/gist/gadagashwini/6bff2733702ece74d63d3de197bab46a/untitled458.ipynb). Thanks!",
"created_at": "2020-03-16T12:18:14Z"
},
{
"body": "I was also able to verify with your gist that psnr() has the same issue and is also broke.",
"created_at": "2020-03-17T13:19:07Z"
},
{
"body": "same here with tensorflow `2.2.0-rc3`\r\ninterestingly it works for me on `(255,255,1)` shape tensors but got the same error with `(128,128,1)` tensors",
"created_at": "2020-05-01T16:31:35Z"
},
{
"body": "Could replicate the issue with TF 2.2.0-rc4.Please, find the gist [here](https://colab.sandbox.google.com/gist/ravikyram/78e436329f8216cc39153b804181d402/untitled856.ipynb).Thanks!",
"created_at": "2020-05-05T10:45:35Z"
},
{
"body": "The issue appears to be with assertion after spatial-dimension reduction. In `_ssim_per_channel `, the `H` and `W` of images is [asserted](https://github.com/tensorflow/tensorflow/blob/2b96f3662bd776e277f86997659e61046b56c315/tensorflow/python/ops/image_ops_impl.py#L3524) against `filter_size` . Whereas in `ssim_multiscale`, [downsampling ](https://github.com/tensorflow/tensorflow/blob/2b96f3662bd776e277f86997659e61046b56c315/tensorflow/python/ops/image_ops_impl.py#L3741)is performed `len(power_factors)-1` times. \r\n\r\nHere are two workarounds:\r\n1. Make sure that `filter_size` is small enough to calculate ssim values for all the four spatial-scales(excluding first scale) after downsampling within `ssim_multiscale`. Contrarily, ensure both `H` and `W ` of your image are big enough such that **`H/(2**4) and W/(2**4) >= filter_size`** . \r\n\r\n2. Since downsampling is performed `len(power_factors)-1` times, you can also use lesser number of `_MSSSIM_WEIGHTS` or power_factors than default, which means `H/(2**(len(power_factors)-1)) and W/(2**(len(power_factors)-1)) >= filter_size` .\r\n\r\n```python3\r\nfield1 = tf.random.uniform(shape=[8, 64, 64, 1], minval=0, maxval=1)\r\nfield2 = tf.random.uniform(shape=[8, 64, 64, 1], minval=0, maxval=1) \r\n#Use smaller filter_size\r\nms_ssim_score = tf.image.ssim_multiscale(img1=field1, img2=field2, max_val=1.0,\r\n filter_size=4)\r\n#Or use lesser number of power_factors\r\nms_ssim_score = tf.image.ssim_multiscale(img1=field1, img2=field2, max_val=1.0,\r\n power_factors=(0.0448, 0.2856, 0.3001),\r\n filter_size=11)\r\n```\r\nHope this helps :-)",
"created_at": "2020-05-25T21:04:18Z"
},
{
"body": "@AravindGanesh,\r\nSorry for the delayed response. Can you please let us know if [this workaround](https://github.com/tensorflow/tensorflow/issues/33840#issuecomment-633715778) helped you so that we can work towards closure of this issue? Thanks!",
"created_at": "2021-04-29T11:28:26Z"
},
{
"body": "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.\n",
"created_at": "2021-05-06T11:37:53Z"
},
{
"body": "Closing as stale. Please reopen if you'd like to work on this further.\n",
"created_at": "2021-05-13T11:59:54Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/33840\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/33840\">No</a>\n",
"created_at": "2021-05-13T11:59:59Z"
},
{
"body": "@AravindGanesh , The issue has been fixed in Tensorflow 2.5 version, please find the gist [here](https://colab.research.google.com/gist/sachinprasadhs/586e5ad2732c96c6ee5b547f906b4213/224.ipynb).",
"created_at": "2021-05-20T11:44:06Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/33840\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/33840\">No</a>\n",
"created_at": "2021-05-20T11:44:08Z"
},
{
"body": "> @AravindGanesh , The issue has been fixed in Tensorflow 2.5 version, please find the gist [here](https://colab.research.google.com/gist/sachinprasadhs/586e5ad2732c96c6ee5b547f906b4213/224.ipynb).\r\n\r\n@sachinprasadhs the issue is not fixed even with tf2.5, you have pasted my workaround in your gist. The issue still persists, please check by pasting what OP @AravindGanesh has mentioned or check the proper gist [here](https://colab.research.google.com/gist/aakash30jan/7add0e61ea973844360cf569bf8dcffc/issue_33840.ipynb). ",
"created_at": "2021-07-08T12:37:51Z"
},
{
"body": "@aakash30jan, There was some mistake, reopened the issue.",
"created_at": "2021-07-12T13:51:35Z"
},
{
"body": "Hi! @AravindGanesh and Thanks @aakash30jan . Was able to replicate the issue with TF v2.5 with different numbers of power factors starting with default values as mentioned in official tensorflow document .The program ran successfully only when power_factor was 2 or 3 , Please find the [gist](https://colab.research.google.com/gist/mohantym/917ff98e449aeb85c98d2f58a5e065b4/33840.ipynb#scrollTo=08xZvKrGCfLn) here ..Thanks!",
"created_at": "2021-07-27T09:31:37Z"
},
{
"body": "This issue has been automatically marked as stale because it has no recent activity. It will be closed if no further activity occurs. Thank you.\n",
"created_at": "2021-08-16T13:05:20Z"
},
{
"body": "Closing as stale. Please reopen if you'd like to work on this further.\n",
"created_at": "2021-08-23T13:52:52Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/33840\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/33840\">No</a>\n",
"created_at": "2021-08-23T13:52:57Z"
},
{
"body": "The error persists here using python 3.7 and tensorflow 2.9.1\r\n\r\n> The issue appears to be with assertion after spatial-dimension reduction. In `_ssim_per_channel `, the `H` and `W` of images is [asserted](https://github.com/tensorflow/tensorflow/blob/2b96f3662bd776e277f86997659e61046b56c315/tensorflow/python/ops/image_ops_impl.py#L3524) against `filter_size` . Whereas in `ssim_multiscale`, [downsampling ](https://github.com/tensorflow/tensorflow/blob/2b96f3662bd776e277f86997659e61046b56c315/tensorflow/python/ops/image_ops_impl.py#L3741)is performed `len(power_factors)-1` times.\r\n> \r\n> Here are two workarounds:\r\n> \r\n> 1. Make sure that `filter_size` is small enough to calculate ssim values for all the four spatial-scales(excluding first scale) after downsampling within `ssim_multiscale`. Contrarily, ensure both `H` and `W ` of your image are big enough such that **`H/(2**4) and W/(2**4) >= filter_size`** .\r\n> \r\n> 2. Since downsampling is performed `len(power_factors)-1` times, you can also use lesser number of `_MSSSIM_WEIGHTS` or power_factors than default, which means `H/(2**(len(power_factors)-1)) and W/(2**(len(power_factors)-1)) >= filter_size` .\r\n> \r\n> \r\n> ```python\r\n> field1 = tf.random.uniform(shape=[8, 64, 64, 1], minval=0, maxval=1)\r\n> field2 = tf.random.uniform(shape=[8, 64, 64, 1], minval=0, maxval=1) \r\n> #Use smaller filter_size\r\n> ms_ssim_score = tf.image.ssim_multiscale(img1=field1, img2=field2, max_val=1.0,\r\n> filter_size=4)\r\n> #Or use lesser number of power_factors\r\n> ms_ssim_score = tf.image.ssim_multiscale(img1=field1, img2=field2, max_val=1.0,\r\n> power_factors=(0.0448, 0.2856, 0.3001),\r\n> filter_size=11)\r\n> ```\r\n> \r\n> Hope this helps :-)\r\n\r\nthis workaround works great, but I think it's not the fix. I don't know if define the max filter_size and alert with an warning is an option, but is possible with this code and I can contribute with something like this if you agree:\r\n`filter_size = max(1, min(img1.shape[1]//(2**(len(power_factors)-1)), img1.shape[2]//(2**(len(power_factors)-1)), img2.shape[2]//(2**(len(power_factors)-1)), img2.shape[1]//(2**(len(power_factors)-1))))`\r\n\r\nFor filter_size 11 do you need an image at least 176X176",
"created_at": "2022-08-26T21:45:57Z"
},
{
"body": "Ok @dmvieira ! \r\nThanks for the update. Re-opening as requested. Could you raise a PR from your end .\r\nThank you! ",
"created_at": "2022-08-29T04:46:46Z"
},
{
"body": "@dmvieira perhaps this is well suited \r\n\r\n```\r\ndef suggest_filter_size(image1_batch,image2_batch,power_factors,filter_size):\r\n shape1= image1_batch.shape[1:-1] \r\n shape2= image2_batch.shape[1:-1] \r\n if not(shape1[-3:-1][0]/(2**(len(power_factors)-1)) and shape2[-3:-1][0]/(2**(len(power_factors)-1)) >= filter_size):\r\n H = tf.math.reduce_min((shape1,shape2))\r\n suggested_filter_size = int(H/(2**(len(power_factors)-1)))\r\n else:\r\n suggested_filter_size = filter_size\r\n return suggested_filter_size\r\n```\r\nFor example: \r\n`\r\nsuggest_filter_size(field1, field2,power_factors = (0.0448, 0.2856, 0.3001, 0.2363, 0.1333), filter_size = 11)\r\n`\r\nA good fix would be on the user side, something like:\r\n\r\n```\r\nH,W = 128,128 \r\nfield1 = tf.random.uniform(shape=[8, H, W, 1], minval=0, maxval=1) \r\nfield2 = tf.random.uniform(shape=[8, H, W, 1], minval=0, maxval=1) \r\nfilter_size = 11 #default from tf.image.ssim_multiscale\r\npower_factors = (0.0448, 0.2856, 0.3001, 0.2363, 0.1333) #default from tf.image.ssim_multiscale\r\n#get a new filter_size from default values\r\nnew_filter_size = suggest_filter_size(field1, field2,power_factors,filter_size) \r\n#tf.image.ssim_multiscale would now work with new_filter_size\r\nms_ssim_score = tf.image.ssim_multiscale(img1=field1, img2=field1, max_val=1.0,filter_size=new_filter_size)\r\n```\r\n",
"created_at": "2022-08-29T14:20:21Z"
},
{
"body": "It's a good code example @aakash30jan ! And a great new feature, but my point was about how is the best Interface for the user. I don't know if build a new function to suggest filter_size is the best way, but perhaps just show a better error with the description explaining that the maximum filter_size should be the value returned by a code similar to `suggest_filter_size`, instead of a generic error. \r\n\r\nAnother option is create another param like `auto_change_filter` that uses `suggest_filter_size` or another one is `filter_size` accepts `None` and uses `suggest_filter_size` to determine one, but this last one should be very well documented.\r\n\r\nMy vote to close this issue is in the first one option that just show a better error and how to use. What do you think?\r\n",
"created_at": "2022-08-30T01:55:58Z"
}
],
"number": 33840,
"title": "tf.image.ssim_multiscale does not work in tf-2.0.0"
}
|
{
"body": "Fix for issue [tensorflow#33840](https://github.com/tensorflow/tensorflow/issues/33840) on tf.image.ssim_multiscale. #33840 ",
"number": 48938,
"review_comments": [],
"title": "Issue tensorflow#33840 in tf.image.ssim_multiscale"
}
|
{
"commits": [
{
"message": "Issue tensorflow#33840 - Fixed filter_size issue in tf.image.ssim_multiscale"
}
],
"files": [
{
"diff": "@@ -4393,6 +4393,12 @@ def ssim_multiscale(img1,\n divisor = [1, 2, 2, 1]\n divisor_tensor = constant_op.constant(divisor[1:], dtype=dtypes.int32)\n \n+ #fixes filter_size issue #33840\n+ if not(shape1[-3:-1][0]/(2**(len(power_factors)-1)) and shape2[-3:-1][0]/(2**(len(power_factors)-1)) >= filter_size):\n+ H = tf.math.reduce_min((shape1,shape2))\n+ suggested_filter_size = int(H/(2**(len(power_factors)-1)))\n+ filter_size = suggested_filter_size\n+\n def do_pad(images, remainder):\n padding = array_ops.expand_dims(remainder, -1)\n padding = array_ops.pad(padding, [[1, 0], [1, 0]])",
"filename": "tensorflow/python/ops/image_ops_impl.py",
"status": "modified"
}
]
}
|
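A small sketch of the size constraint behind the workarounds quoted above: with the default five power factors the inputs are downsampled four times, so both spatial dimensions divided by 2**4 must still be at least `filter_size`. The 64x64 batches are illustrative assumptions.

```python
import tensorflow as tf

a = tf.random.uniform([8, 64, 64, 1])
b = tf.random.uniform([8, 64, 64, 1])

# Default filter_size=11 with 5 power factors needs 64 / 2**4 >= 11, which fails.
# Either shrink the filter...
score_small_filter = tf.image.ssim_multiscale(a, b, max_val=1.0, filter_size=4)

# ...or use fewer scales, so that 64 / 2**2 = 16 >= 11.
score_fewer_scales = tf.image.ssim_multiscale(
    a, b, max_val=1.0, power_factors=(0.0448, 0.2856, 0.3001))
```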
{
"body": "**System information**\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04\r\n- TensorFlow installed from (source or binary): binary\r\n- TensorFlow version (use command below): v2.4.0-rc4-71-g582c8d236cb 2.4.0\r\n- Python version: 3.8.5\r\n\r\n**Describe the current behavior**\r\nThe [ReLu](https://www.tensorflow.org/api_docs/python/tf/keras/layers/ReLU) layer seems to have an issue with negative thresholds, even though it should support them according to the documentation. For example, for a `max_value=1` and `threshold=-1`, ReLu should produce f(x)=0.5 for x=0.5 according to the following formula from the documentation `f(x) = x if threshold <= x < max_value`. \r\nThe issue is illustrated in the following code.\r\n\r\n**Standalone code to reproduce the issue**\r\n```\r\nimport tensorflow as tf\r\nfrom tensorflow import keras\r\nmodel = keras.Sequential([\r\nkeras.layers.ReLU(max_value=1, threshold=-1, negative_slope=1, input_shape=(4,))])\r\nx = tf.constant([[1.5, 0.5,-0.5, -1.5]])\r\nprint (model.predict(x,steps=1))\r\n```\r\n`Output: [[ 1. 0.5 0. -0.5]] Expected Output: [[1 0.5 -0.5 -0.5]]`\r\n",
"comments": [
{
"body": "That's how \"ReLU\" looks like if we obey \r\n```\r\n f(x) = max_value if x >= max_value\r\n f(x) = x if threshold <= x < max_value\r\n f(x) = negative_slope * (x - threshold) otherwise\r\n\r\nnegative_slope = 1\r\nthreshold = -1\r\nmax_value = 1\r\n```\r\n\r\n\r\nIt does not look like \"**Re**ctified **L**inear **U**nit anymore so I think it's no point to fix it and I recommend modifying the docs. (https://github.com/tensorflow/tensorflow/pull/48654)",
"created_at": "2021-04-20T21:52:50Z"
},
{
"body": "@szutenberg \r\n> f(x) = negative_slope * (x - threshold) otherwise\r\n\r\nCan we do\r\n` f(x) = negative_slope * (x - threshold) + threshold `\r\n\r\nThis will work for the general case as `f(x) .= x for x = threshold` ",
"created_at": "2021-04-21T04:46:51Z"
},
{
"body": "@szutenberg \r\nAlso, there's something more weird going on with `tf.keras.backend.relu` function. Here's how it works:\r\n```python\r\na = np.array( [ [0.5,2] , [-1.5,-0.5] ])\r\n# array([[ 0.5, 2. ],\r\n# [-1.5, -0.5]])\r\nprint(tf.keras.backend.relu(a, alpha = 1, max_value = 1, threshold=-1)) \r\n# Output :\r\n# tf.Tensor(\r\n# [[ 0.5 1. ]\r\n# [-0.5 0. ]], shape=(2, 2), dtype=float64)\r\n```\r\n\r\nAccording to your graph the (1,1) entry should be -0.5. \r\n",
"created_at": "2021-04-21T06:54:16Z"
},
{
"body": "@AdityaKane2001 \r\n\r\nYes, it does not match my graph because I drew it by following:\r\n```\r\n f(x) = max_value if x >= max_value\r\n f(x) = x if threshold <= x < max_value\r\n f(x) = negative_slope * (x - threshold) otherwise\r\n```\r\n\r\nThat's why I propose PR https://github.com/tensorflow/tensorflow/pull/48654 which solves the problem (negative threshold is simply not supported).\r\n\r\nI tried to understand why the threshold parameter was introduced. I found commit https://github.com/tensorflow/tensorflow/commit/0cf2c612e5e6ff8c5026011e8186056801def747 . I don't understand what is the point to introduce such consolidation. Any ideas?\r\n\r\nNote that in [keras.layers.ThresholdedReLU](https://www.tensorflow.org/api_docs/python/tf/keras/layers/ThresholdedReLU) `theta >= 0` (`theta` is equivalent to `threshold` in ReLU).\r\n\r\nWe should not replace `f(x) = negative_slope * (x - threshold)` with `f(x) = negative_slope * (x - threshold) + threshold` because it would break compatibility with [keras.layers.ThresholdedReLU](https://www.tensorflow.org/api_docs/python/tf/keras/layers/ThresholdedReLU) by modifying behaviour for positive theshold value (f(x) would return threshold instead of 0).",
"created_at": "2021-04-25T11:45:11Z"
},
{
"body": "@szutenberg \r\nFair enough. But then should we modify the relu function itself, asserting that we do have a non-negative threshold? It will remove any ambiguity for the user in case the user tries to enter a negative threshold value.\r\n\r\nHowever, even though the use case is extremely rare, it is perhaps best to keep it like that and clear bugs, if any, because it gives more flexibility, as `ThresholdedReLU` only allows a normal ReLU to be used with a threshold, and not some custom variant of it.\r\n\r\nSo, another possible solution may be to resolve those errors and bugs in the relu function, and eliminate `ThresholdedReLU` as it will be sort of a duplicate.",
"created_at": "2021-04-25T12:26:56Z"
},
{
"body": "@AdityaKane2001 I added additional asserts to ReLU.\r\n\r\nFrom the graph I drew, we can see that such scenario **probably** does not make any sense. It's not continuous at -1, this sudden change may break gradient descent.\r\n\r\nI don't understand what is the idea behind https://github.com/tensorflow/tensorflow/commit/0cf2c612e5e6ff8c5026011e8186056801def747 and I'd rather revert it than eliminate ThresholdedReLU. People use ThresholdedReLU: [example 1](https://github.com/gao-lab/REVA-Data_Source_Code/blob/9da1a9fcfa04663b6a09a9c2549af9dddcc9848a/Variant_annotation/CNNs/model_structures.py#L17), [example 2](https://github.com/tincochan/vGame_bgm_remix/blob/4135b1a5ff9f1de9107facb8623d586200c0246e/rnn-cnn-gan-enhancer.py#L66)\r\n\r\nFor very rare use cases users can implement their own layers.",
"created_at": "2021-04-25T13:44:32Z"
},
{
"body": "@szutenberg \r\nI agree with you. The use case is very (very) small..",
"created_at": "2021-04-25T15:32:16Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/48646\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/48646\">No</a>\n",
"created_at": "2021-05-13T23:17:50Z"
}
],
"number": 48646,
"title": "ReLU layer wrong result with negative threshold"
}
|
{
"body": "threshold should not be negative because ReLU behavior does not match\r\nthe docs and it is not ReLU\r\n\r\n#48646",
"number": 48654,
"review_comments": [],
"title": "Fix keras.layers.ReLU docs #48646"
}
|
{
"commits": [
{
"message": "Fix keras.layers.ReLU docs #48646\n\nthreshold should not be negative because ReLU behaviour does not match\nthe docs and it is not ReLU"
},
{
"message": "Improve checks in keras.layers.ReLU #48646\n\n- added checking if negative_slope is None\n- added checking if threshold is negative\n- updated respective unit tests\n- renamed test_threshold_relu_with_invalid_alpha to\n test_threshold_relu_with_invalid_theta"
}
],
"files": [
{
"diff": "@@ -404,20 +404,21 @@ class ReLU(Layer):\n max_value: Float >= 0. Maximum activation value. Default to None, which\n means unlimited.\n negative_slope: Float >= 0. Negative slope coefficient. Default to 0.\n- threshold: Float. Threshold value for thresholded activation. Default to 0.\n+ threshold: Float >= 0. Threshold value for thresholded activation. Default\n+ to 0.\n \"\"\"\n \n def __init__(self, max_value=None, negative_slope=0, threshold=0, **kwargs):\n super(ReLU, self).__init__(**kwargs)\n if max_value is not None and max_value < 0.:\n- raise ValueError('max_value of Relu layer '\n- 'cannot be negative value: ' + str(max_value))\n- if negative_slope < 0.:\n- raise ValueError('negative_slope of Relu layer '\n- 'cannot be negative value: ' + str(negative_slope))\n- if threshold is None:\n- raise ValueError('threshold of Relu layer '\n- 'cannot be None. Required a float')\n+ raise ValueError('max_value of a ReLU layer cannot be a negative '\n+ 'value. Got: %s' % max_value)\n+ if negative_slope is None or negative_slope < 0.:\n+ raise ValueError('negative_slope of a ReLU layer cannot be a negative '\n+ 'value. Got: %s' % negative_slope)\n+ if threshold is None or threshold < 0.:\n+ raise ValueError('threshold of a ReLU layer cannot be a negative '\n+ 'value. Got: %s' % threshold)\n \n self.supports_masking = True\n if max_value is not None:",
"filename": "tensorflow/python/keras/layers/advanced_activations.py",
"status": "modified"
},
{
"diff": "@@ -78,21 +78,53 @@ def test_relu(self):\n # Test that we use `relu6` when appropriate in graph mode.\n self.assertTrue('Relu6' in keras.layers.ReLU(max_value=6)(x).name)\n \n- def test_relu_with_invalid_arg(self):\n+ def test_relu_with_invalid_max_value(self):\n with self.assertRaisesRegex(\n- ValueError, 'max_value of Relu layer cannot be negative value: -10'):\n- testing_utils.layer_test(keras.layers.ReLU,\n- kwargs={'max_value': -10},\n- input_shape=(2, 3, 4),\n- supports_masking=True)\n+ ValueError, 'max_value of a ReLU layer cannot be a negative '\n+ 'value. Got: -10'):\n+ testing_utils.layer_test(\n+ keras.layers.ReLU,\n+ kwargs={'max_value': -10},\n+ input_shape=(2, 3, 4),\n+ supports_masking=True)\n+\n+ def test_relu_with_invalid_negative_slope(self):\n+ with self.assertRaisesRegex(\n+ ValueError, 'negative_slope of a ReLU layer cannot be a negative '\n+ 'value. Got: None'):\n+ testing_utils.layer_test(\n+ keras.layers.ReLU,\n+ kwargs={'negative_slope': None},\n+ input_shape=(2, 3, 4),\n+ supports_masking=True)\n+\n+ with self.assertRaisesRegex(\n+ ValueError, 'negative_slope of a ReLU layer cannot be a negative '\n+ 'value. Got: -10'):\n+ testing_utils.layer_test(\n+ keras.layers.ReLU,\n+ kwargs={'negative_slope': -10},\n+ input_shape=(2, 3, 4),\n+ supports_masking=True)\n+\n+ def test_relu_with_invalid_threshold(self):\n+ with self.assertRaisesRegex(\n+ ValueError, 'threshold of a ReLU layer cannot be a negative '\n+ 'value. Got: None'):\n+ testing_utils.layer_test(\n+ keras.layers.ReLU,\n+ kwargs={'threshold': None},\n+ input_shape=(2, 3, 4),\n+ supports_masking=True)\n+\n with self.assertRaisesRegex(\n- ValueError,\n- 'negative_slope of Relu layer cannot be negative value: -2'):\n- with self.cached_session():\n- testing_utils.layer_test(\n- keras.layers.ReLU,\n- kwargs={'negative_slope': -2},\n- input_shape=(2, 3, 4))\n+ ValueError, 'threshold of a ReLU layer cannot be a negative '\n+ 'value. Got: -10'):\n+ testing_utils.layer_test(\n+ keras.layers.ReLU,\n+ kwargs={'threshold': -10},\n+ input_shape=(2, 3, 4),\n+ supports_masking=True)\n \n @keras_parameterized.run_with_all_model_types\n def test_layer_as_activation(self):\n@@ -126,7 +158,7 @@ def test_leaky_elu_with_invalid_alpha(self):\n input_shape=(2, 3, 4),\n supports_masking=True)\n \n- def test_threshold_relu_with_invalid_alpha(self):\n+ def test_threshold_relu_with_invalid_theta(self):\n with self.assertRaisesRegex(\n ValueError, 'Theta of a Thresholded ReLU layer cannot '\n 'be None, requires a float. Got None'):",
"filename": "tensorflow/python/keras/layers/advanced_activations_test.py",
"status": "modified"
}
]
}
|
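A plain-Python transcription of the documented piecewise formula, evaluated around a negative threshold to show the discontinuity discussed in the row above. The parameter values follow the thread's example; the epsilon probe is an assumption added for illustration.

```python
def relu_doc_formula(x, max_value=1.0, negative_slope=1.0, threshold=-1.0):
    # Literal transcription of the documented piecewise definition.
    if x >= max_value:
        return max_value
    if threshold <= x < max_value:
        return x
    return negative_slope * (x - threshold)

# Just below and exactly at the (negative) threshold:
print(relu_doc_formula(-1.0 - 1e-6))  # ~ -1e-6, from the negative_slope branch
print(relu_doc_formula(-1.0))         # -1.0, from the identity branch
# The jump from ~0 to -1 at x = threshold is the discontinuity discussed above.
```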
{
"body": "**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): **Yes**\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): **Debian Buster**\r\n- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:\r\n- TensorFlow installed from (source or binary): **binary**\r\n- TensorFlow version (use command below): **v1.12.1-55105-ga7116dd3913 2.6.0-dev20210418**\r\n- Python version: **3.7.10**\r\n- Bazel version (if compiling from source):\r\n- GCC/Compiler version (if compiling from source):\r\n- CUDA/cuDNN version:\r\n- GPU model and memory:\r\n\r\n**Describe the current behavior**\r\n\r\nThe ragged version of `tf.losses.SparseCategoricalCrossentropy()` fails when the inner shape (the number of classes in the prediction) is not ragged (which is the default if the predictions are generated by a Dense layer).\r\n\r\nIn other words, while the following works,\r\n```python\r\ny_true = tf.ragged.constant([[0, 1], [2]])\r\ny_pred = tf.ragged.constant([[[.9, .05, .05], [.5, .89, .6]], [[.05, .01, .94]]], dtype=tf.float32)\r\nprint(y_true.shape, y_pred.shape)\r\n>>> (2, None) (2, None, None)\r\nprint(tf.losses.SparseCategoricalCrossentropy()(y_true, y_pred))\r\n```\r\nthe following code fails:\r\n```python\r\ny_true = tf.ragged.constant([[0, 1], [2]])\r\ny_pred = tf.ragged.constant([[[.9, .05, .05], [.5, .89, .6]], [[.05, .01, .94]]], ragged_rank=1, dtype=tf.float32)\r\nprint(y_true.shape, y_pred.shape)\r\n>>> (2, None) (2, None, 3)\r\nprint(tf.losses.SparseCategoricalCrossentropy()(y_true, y_pred))\r\n```\r\n\r\n**Describe the expected behavior**\r\n\r\nThe computation should not crash.\r\n\r\n**Standalone code to reproduce the issue**\r\nhttps://colab.research.google.com/drive/1k8POHBqlHn4Q5_7GUuaINdmX4F-u3Ktb?usp=sharing\r\n\r\n**Other info / logs** Include any logs or source code that would be helpful to\r\n\r\nAdding @pedro-r-marques who wrote the code.",
"comments": [
{
"body": "Adding @pedro-r-marques who wrote the code.",
"created_at": "2021-04-18T23:33:57Z"
},
{
"body": "Was able to reproduce the issue in TF 2.4.1 and nightly versions. Please find the gist [here](https://colab.research.google.com/gist/saikumarchalla/17a0a4e7dd419bf42f096ccb9db1fca1/raggedsparsecategoricalcrossentropybug.ipynb). Thanks!",
"created_at": "2021-04-20T06:42:49Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/48609\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/48609\">No</a>\n",
"created_at": "2021-04-21T19:01:08Z"
}
],
"number": 48609,
"title": "Bug in ragged version of tf.losses.SparseCategoricalCrossentropy"
}
|
{
"body": "The last dimension of a prediction corresponding to the per category scores\r\nmay (or not) be ragged, depending on how the tensor was constructed.\r\nIgnore this last dimension, if present, but do not require it to be there.\r\n\r\nFixes #48609",
"number": 48634,
"review_comments": [],
"title": "Fix handling of last dimension of RaggedTensor in SparseCategoricalLoss."
}
|
{
"commits": [
{
"message": "Fix handling of last dimension of RaggedTensor in SparseCategoricalLoss.\n\nThe last dimension of a prediction corresponding to the per category scores\nmay (or not) be ragged, depending on how the tensor was constructed.\nIgnore this last dimension, if present, but do not require it to be there."
}
],
"files": [
{
"diff": "@@ -1278,7 +1278,10 @@ def _wrapper(inputs, ragged_output):\n \n nested_splits_list = [rt.nested_row_splits for rt in (y_true, y_pred)]\n if y_pred_extra_dim:\n- nested_splits_list[1] = nested_splits_list[1][:-1]\n+ # The last dimension of a categorical prediction may be ragged or not.\n+ rdims = [len(slist) for slist in nested_splits_list]\n+ if rdims[0] == rdims[1] - 1:\n+ nested_splits_list[1] = nested_splits_list[1][:-1]\n \n map_fn = functools.partial(_wrapper, ragged_output=len(lshape) > 1)\n ",
"filename": "tensorflow/python/keras/losses.py",
"status": "modified"
},
{
"diff": "@@ -1170,6 +1170,26 @@ def test_ragged_tensors(self):\n loss = cce_obj(y_true, logits, sample_weight=sample_weight)\n self.assertAlmostEqual(self.evaluate(loss), 0.1934, 3)\n \n+ def test_ragged_tensors_rank_1(self):\n+ cce_obj = losses.SparseCategoricalCrossentropy()\n+ y_true = ragged_factory_ops.constant([[0, 1], [2]])\n+ y_pred = ragged_factory_ops.constant(\n+ [[[.9, .05, .05], [.5, .89, .6]], [[.05, .01, .94]]],\n+ ragged_rank=1, dtype=dtypes.float32)\n+ # batch losses [[0.1054, 0.8047], [0.0619]]\n+ sample_weight = constant_op.constant([[1.2], [3.4]], shape=(2, 1))\n+ loss = cce_obj(y_true, y_pred, sample_weight=sample_weight)\n+ # sum([0.1054, 0.8047, 0.0619]) / 3\n+ self.assertAlmostEqual(self.evaluate(loss), 0.4341, 3)\n+\n+ # Test with logits.\n+ logits = ragged_factory_ops.constant([[[8., 1., 1.], [0., 9., 1.]],\n+ [[2., 3., 5.]]], ragged_rank=1)\n+ cce_obj = losses.SparseCategoricalCrossentropy(from_logits=True)\n+ # batch losses [[0.0018, 0.0004], [0.1698]]\n+ loss = cce_obj(y_true, logits, sample_weight=sample_weight)\n+ self.assertAlmostEqual(self.evaluate(loss), 0.1934, 3)\n+\n def test_ragged_tensors_3d(self):\n # shape [2, 1, None]\n y_true = ragged_factory_ops.constant([[[1, 1]], [[0]]])",
"filename": "tensorflow/python/keras/losses_test.py",
"status": "modified"
}
]
}
|
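Editor's note: to see why the fix in PR #48634 only trims the prediction's split list when its ragged rank is exactly one higher than the label's, it helps to compare the two ways the predictions from issue #48609 can be constructed. The sketch below assumes TF 2.x and only inspects `nested_row_splits`.

```python
import tensorflow as tf

values = [[[.9, .05, .05], [.5, .89, .6]], [[.05, .01, .94]]]

y_pred_full = tf.ragged.constant(values, dtype=tf.float32)    # shape (2, None, None)
y_pred_rank1 = tf.ragged.constant(values, ragged_rank=1,
                                  dtype=tf.float32)           # shape (2, None, 3)
y_true = tf.ragged.constant([[0, 1], [2]])                    # shape (2, None)

# The class dimension is ragged in the first case and uniform in the second,
# so the number of nested row splits differs by one.
print(len(y_true.nested_row_splits))        # 1
print(len(y_pred_full.nested_row_splits))   # 2
print(len(y_pred_rank1.nested_row_splits))  # 1
```

Unconditionally dropping the last split list, as the old code did, is only valid in the fully ragged case, which is exactly what the `rdims[0] == rdims[1] - 1` check in the patch captures.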
{
"body": "**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu\r\n- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A\r\n- TensorFlow installed from (source or binary): bianry\r\n- TensorFlow version (use command below): 2 .4.1\r\n- Python version: 3.7\r\n\r\n\r\n**Standalone code to reproduce the issue**\r\nWhen `Conv2D` with `kernel_size`=`2` and padding=`valid` receives an invalid input, it does not raise any exception. Instead it outputs a tensor with zero-dimension. This can lead to future crash for other APIs with 0-dim tensor as input.\r\n\r\n```\r\nimport tensorflow as tf\r\nimport numpy as np\r\n\r\nfilters, kernel_size, strides, padding = 3, [2, 2], 2, 'valid'\r\ndata = np.random.rand(1, 1, 1, 1)\r\nlayer = tf.keras.layers.Conv2D(filters, kernel_size, strides=strides, padding=padding)\r\nprint(layer(data).shape)\r\n```\r\n\r\nOutputs\r\n```\r\n(1, 0, 0, 3)\r\n```\r\n\r\n\r\n\r\n**Describe the current behavior**\r\nNo exception is raised for invalid input argument.\r\n\r\n**Describe the expected behavior**\r\nExpect `ValueError` to be raised.\r\n",
"comments": [
{
"body": "@lugalUrim \r\nWhen you make a model and compile, it gives the expected error.\r\nExample:\r\n```python\r\nmodel = tf.keras.Sequential()\r\nmodel.add(tf.keras.layers.Conv2D(filters, kernel_size, strides=strides, padding=padding))\r\nmodel.add(tf.keras.layers.Conv2D(filters, kernel_size, strides=strides, padding=padding))\r\nmodel.compile(loss = 'categorical_crossentropy',optimizer = 'adam', metrics = ['accuracy'])\r\nmodel.fit(np.random.rand(10, 1, 1, 3))\r\n```\r\nError trace:\r\n```\r\nValueError: in user code:\r\n\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:805 train_function *\r\n return step_function(self, iterator)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:795 step_function **\r\n outputs = model.distribute_strategy.run(run_step, args=(data,))\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run\r\n return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica\r\n return self._call_for_each_replica(fn, args, kwargs)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica\r\n return fn(*args, **kwargs)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:788 run_step **\r\n outputs = model.train_step(data)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:754 train_step\r\n y_pred = self(x, training=True)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py:1012 __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/sequential.py:389 call\r\n outputs = layer(inputs, **kwargs)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py:1012 __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/layers/convolutional.py:248 call\r\n outputs = self._convolution_op(inputs, self.kernel)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper\r\n return target(*args, **kwargs)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/nn_ops.py:1020 convolution_v2\r\n name=name)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/nn_ops.py:1150 convolution_internal\r\n name=name)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/nn_ops.py:2604 _conv2d_expanded_batch\r\n name=name)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/gen_nn_ops.py:973 conv2d\r\n data_format=data_format, dilations=dilations, name=name)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/op_def_library.py:750 _apply_op_helper\r\n attrs=attr_protos, op_def=op_def)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py:592 _create_op_internal\r\n compute_device)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py:3536 _create_op_internal\r\n op_def=op_def)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py:2016 __init__\r\n control_input_ops, op_def)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py:1856 _create_c_op\r\n raise ValueError(str(e))\r\n\r\n ValueError: Negative dimension size caused by subtracting 2 
from 1 for '{{node sequential_1/conv2d_5/Conv2D}} = Conv2D[T=DT_FLOAT, data_format=\"NHWC\", dilations=[1, 1, 1, 1], explicit_paddings=[], padding=\"VALID\", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true](IteratorGetNext, sequential_1/conv2d_5/Conv2D/ReadVariableOp)' with input shapes: [?,1,1,3], [2,2,3,3].\r\n\r\n```\r\n\r\nPlease close this issue if the query is resolved.\r\nThanks",
"created_at": "2021-04-17T14:46:10Z"
},
{
"body": "Thanks @AdityaKane2001 . When I make a model, it raises a `ValueError` indeed.\r\n\r\nAs far as I am concerned, it would be better if the error can also be raised when building a layer for invalid input. \r\n\r\n - Image this case: One breaks the model into multiple blocks to debug, and every single block (consisting of one or more layers) works fine with some input, but when putting them altogether into a model, it gives some error. It is a bit confusing, right.\r\n\r\n - This bug leads to future crash when taking gradient.\r\nFirst, build a layer. (Executes successfully)\r\n```\r\nimport tensorflow as tf\r\nimport numpy as np\r\n\r\nfilters, kernel_size, strides, padding = 3, [2, 2], 2, 'valid'\r\ndata = np.random.rand(1, 1, 1, 1)\r\nlayer = tf.keras.layers.Conv2D(filters, kernel_size, strides=strides, padding=padding)\r\nprint(layer(data).shape)\r\n```\r\nSecond, take the gradient. (**Session crashes**.)\r\n```\r\nwith tf.GradientTape() as tape:\r\n out = layer(data)\r\n loss = tf.reduce_sum(out)\r\n layer_variables = layer.trainable_variables\r\n grads = tape.gradient(loss, layer_variables)\r\n```",
"created_at": "2021-04-18T15:35:43Z"
},
{
"body": "@lugalUrim \r\nThe thing is, mostly one does not execute a layer like this without putting it into a model, for obvious reasons. But yes, it's a valid thing to expect, and we can perhaps implement it.",
"created_at": "2021-04-18T15:42:43Z"
},
{
"body": "/cc @nikitamaia Assign/Inprogress",
"created_at": "2021-04-19T17:00:43Z"
},
{
"body": "Requesting to close, solved in #48610 \r\nThanks",
"created_at": "2021-05-13T16:37:49Z"
},
{
"body": "Yes, as a suggestion please use a valid pattern in the PR next time for autolinking https://docs.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword",
"created_at": "2021-05-13T16:51:25Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/48589\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/48589\">No</a>\n",
"created_at": "2021-05-13T17:20:50Z"
},
{
"body": "Thank you @AdityaKane2001 @bhack closed this as the PR got merged",
"created_at": "2021-05-13T17:21:40Z"
},
{
"body": "> Yes, as a suggestion please use a valid pattern in the PR next time for autolinking https://docs.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword\n\nSure, would do",
"created_at": "2021-05-13T17:35:50Z"
}
],
"number": 48589,
"title": "Conv2d didn't raise exception for invalid input argument."
}
|
{
"body": "See #48589 ",
"number": 48610,
"review_comments": [
{
"body": "I'm wondering if we can somehow mentioned on why there's changes on the implementation on the PR's description, previously we're using `backend` instead of `array_ops`.",
"created_at": "2021-04-21T09:33:09Z"
},
{
"body": "@irvifa \r\nCan you please elaborate on that?",
"created_at": "2021-04-21T09:35:23Z"
},
{
"body": "Actually in the issue the user pointed out that some of the calls were deprecated. Also, backend also calls `array_ops` methods, so I called them directly.",
"created_at": "2021-04-21T09:37:02Z"
},
{
"body": "Please change to singular form: output_shapes => output_shape",
"created_at": "2021-04-22T18:47:03Z"
},
{
"body": "all() operator here is a bit unclear, please change to 0 in output_shapes, or do specific checks on the dimension.",
"created_at": "2021-04-22T18:48:07Z"
},
{
"body": "Is it an invalid shape? If true, consider changing the test name to test_conv1d_invalid_output_shapes()?",
"created_at": "2021-04-22T18:48:39Z"
},
{
"body": "Same as above",
"created_at": "2021-04-22T18:48:48Z"
},
{
"body": "Could you explain why we need to skip checks on the channel axis? Also please consider putting a comment before the if check.",
"created_at": "2021-04-30T17:17:41Z"
},
{
"body": "The code chunk under if and else are basically same, usually we discourage duplicated code. Could you create a small helper function to wrap the code? Thx!",
"created_at": "2021-04-30T17:19:20Z"
},
{
"body": "Sure, I'll add one",
"created_at": "2021-04-30T17:25:57Z"
},
{
"body": "@chenmoneygithub \r\nThe channel axis is dependent on no. of filters, which are checked by the additions made in #48566 . ",
"created_at": "2021-04-30T17:26:26Z"
},
{
"body": "Also the rest of the arguments for `conv_utils.conv_output_length` don't consider channels and batch size.",
"created_at": "2021-04-30T17:44:15Z"
},
{
"body": "Nit: delete the space after 'filters': 2. 'filters': 2 , => 'filters': 2,",
"created_at": "2021-05-04T17:24:32Z"
},
{
"body": "Same here",
"created_at": "2021-05-04T17:24:42Z"
},
{
"body": "Same here",
"created_at": "2021-05-04T17:25:00Z"
},
{
"body": "Same here",
"created_at": "2021-05-04T17:25:07Z"
},
{
"body": "Nit: space between var and operator: output_dimension<=0 => output_dimension <= 0",
"created_at": "2021-05-04T17:25:50Z"
}
],
"title": "Check if all dimensions in output are non-zero"
}
|
{
"commits": [
{
"message": "Merge pull request #5 from tensorflow/master\n\nStay up to date"
},
{
"message": "message"
},
{
"message": "message"
},
{
"message": "message"
},
{
"message": "Update convolutional.py\n\nSee issue #48589"
},
{
"message": "Update convolutional.py"
},
{
"message": "Added test functions in convolutional_test.py and local_test.py"
},
{
"message": "Merge pull request #6 from tensorflow/master\n\nStay up to date"
},
{
"message": "Create main.yml"
},
{
"message": "Update main.yml"
},
{
"message": "Delete main.yml"
},
{
"message": "Requested changes done"
},
{
"message": "Requested changes"
},
{
"message": "Requested changes"
},
{
"message": "Fixed indentation"
},
{
"message": "Fixed indentation"
},
{
"message": "Used conv_utils.output_length"
},
{
"message": "Fixed speeling mistake"
},
{
"message": "fixed spelling mistake"
},
{
"message": "Check for None"
},
{
"message": "fixed mistake"
},
{
"message": "Made helper function"
},
{
"message": "Updated comment and added spaces"
},
{
"message": "Improved code style"
},
{
"message": "Fixed bugs"
},
{
"message": "Removed wrong test case (conv1dtranspose test)"
}
],
"files": [
{
"diff": "@@ -87,8 +87,8 @@ class Conv(Layer):\n activation: Activation function to use.\n If you don't specify anything, no activation is applied.\n use_bias: Boolean, whether the layer uses a bias.\n- kernel_initializer: An initializer for the convolution kernel. If None, the \n- default initializer (glorot_uniform) will be used. \n+ kernel_initializer: An initializer for the convolution kernel. If None, the\n+ default initializer (glorot_uniform) will be used.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer (zeros) will be used.\n kernel_regularizer: Optional regularizer for the convolution kernel.\n@@ -231,6 +231,17 @@ def build(self, input_shape):\n if tf_op_name == 'Conv1D':\n tf_op_name = 'conv1d' # Backwards compat.\n \n+ # Check if output shapes are valid\n+ # They must not have 0 entries along any dimension\n+ # Check dimensions other than batch and channel, must be greater than 0\n+ if self._channels_first:\n+ for idx, dimension in enumerate(input_shape.as_list()[-self.rank:]):\n+ self._check_invalid_dimension(dimension, idx, input_shape)\n+\n+ else:\n+ for idx, dimension in enumerate(input_shape.as_list()[-self.rank-1:-1]):\n+ self._check_invalid_dimension(dimension, idx, input_shape)\n+\n self._convolution_op = functools.partial(\n nn_ops.convolution_v2,\n strides=tf_strides,\n@@ -303,6 +314,20 @@ def compute_output_shape(self, input_shape):\n def _recreate_conv_op(self, inputs): # pylint: disable=unused-argument\n return False\n \n+ def _check_invalid_dimension(self, dimension, idx, input_shape):\n+ \"\"\"Checks if output has all positive dimensions\"\"\"\n+ output_dimension = conv_utils.conv_output_length(\n+ dimension,\n+ self.kernel_size[idx],\n+ self.padding,\n+ self.strides[idx],\n+ dilation=self.dilation_rate[idx])\n+ if output_dimension is not None and output_dimension <= 0:\n+ raise ValueError('One of the dimensions in output tensor is less than or'\n+ ' equal to zero. Please check the input shape. '\n+ ' Recieved input: %s'%input_shape)\n+\n+\n def get_config(self):\n config = {\n 'filters':\n@@ -610,9 +635,9 @@ class Conv2D(Conv):\n bias_initializer: Initializer for the bias vector (see\n `keras.initializers`). Defaults to 'zeros'.\n kernel_regularizer: Regularizer function applied to the `kernel` weights\n- matrix (see `keras.regularizers`). \n+ matrix (see `keras.regularizers`).\n bias_regularizer: Regularizer function applied to the bias vector (see\n- `keras.regularizers`). \n+ `keras.regularizers`).\n activity_regularizer: Regularizer function applied to the output of the\n layer (its \"activation\") (see `keras.regularizers`).\n kernel_constraint: Constraint function applied to the kernel matrix (see\n@@ -1735,7 +1760,7 @@ class SeparableConv(Conv):\n see `keras.initializers`). If None, then the default initializer (\n 'glorot_uniform') will be used.\n pointwise_initializer: An initializer for the pointwise convolution kernel (\n- see `keras.initializers`). If None, then the default initializer \n+ see `keras.initializers`). If None, then the default initializer\n ('glorot_uniform') will be used.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer ('zeros') will be used (see `keras.initializers`).\n@@ -1944,7 +1969,7 @@ class SeparableConv1D(SeparableConv):\n see `keras.initializers`). If None, then the default initializer (\n 'glorot_uniform') will be used.\n pointwise_initializer: An initializer for the pointwise convolution kernel (\n- see `keras.initializers`). 
If None, then the default initializer \n+ see `keras.initializers`). If None, then the default initializer\n ('glorot_uniform') will be used.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer ('zeros') will be used (see `keras.initializers`).\n@@ -2106,7 +2131,7 @@ class SeparableConv2D(SeparableConv):\n strides: An integer or tuple/list of 2 integers,\n specifying the strides of the convolution along the height and width.\n Can be a single integer to specify the same value for\n- all spatial dimensions. Current implementation only supports equal \n+ all spatial dimensions. Current implementation only supports equal\n length strides in the row and column dimensions.\n Specifying any stride value != 1 is incompatible with specifying\n any `dilation_rate` value != 1.\n@@ -2140,7 +2165,7 @@ class SeparableConv2D(SeparableConv):\n see `keras.initializers`). If None, then the default initializer (\n 'glorot_uniform') will be used.\n pointwise_initializer: An initializer for the pointwise convolution kernel (\n- see `keras.initializers`). If None, then the default initializer \n+ see `keras.initializers`). If None, then the default initializer\n ('glorot_uniform') will be used.\n bias_initializer: An initializer for the bias vector. If None, the default\n initializer ('zeros') will be used (see `keras.initializers`).",
"filename": "tensorflow/python/keras/layers/convolutional.py",
"status": "modified"
},
{
"diff": "@@ -166,6 +166,12 @@ def fn(inpt):\n fn(inpt2)\n self.assertEqual(outp1_shape, layer(inpt1).shape)\n \n+ def test_conv1d_invalid_output_shapes(self):\n+ kwargs = {'filters': 2, 'kernel_size': 10}\n+ with self.assertRaises(ValueError):\n+ layer = keras.layers.Conv1D(**kwargs)\n+ layer.build((None, 5, 2))\n+\n \n @keras_parameterized.run_all_keras_modes\n class Conv2DTest(keras_parameterized.TestCase):\n@@ -298,6 +304,12 @@ def test_conv2d_zero_kernel_size(self):\n with self.assertRaises(ValueError):\n keras.layers.Conv2D(**kwargs)\n \n+ def test_conv2d_invalid_output_shapes(self):\n+ kwargs = {'filters': 2, 'kernel_size': 10}\n+ with self.assertRaises(ValueError):\n+ layer = keras.layers.Conv2D(**kwargs)\n+ layer.build((None, 5, 5, 2))\n+\n \n @keras_parameterized.run_all_keras_modes\n class Conv3DTest(keras_parameterized.TestCase):\n@@ -433,6 +445,12 @@ def test_conv3d_dynamic_shape(self):\n input_shape=(None, 3, None, None, None),\n input_data=input_data)\n \n+ def test_conv3d_invalid_output_shapes(self):\n+ kwargs = {'filters': 2, 'kernel_size': 10}\n+ with self.assertRaises(ValueError):\n+ layer = keras.layers.Conv3D(**kwargs)\n+ layer.build((None, 5, 5, 5, 2))\n+\n \n @keras_parameterized.run_all_keras_modes(always_skip_v1=True)\n class GroupedConvTest(keras_parameterized.TestCase):\n@@ -552,6 +570,7 @@ def test_conv3d_transpose(self, kwargs, expected_output_shape=None):\n self._run_test(kwargs, expected_output_shape)\n \n \n+\n @keras_parameterized.run_all_keras_modes\n class ConvSequentialTest(keras_parameterized.TestCase):\n ",
"filename": "tensorflow/python/keras/layers/convolutional_test.py",
"status": "modified"
},
{
"diff": "@@ -729,14 +729,14 @@ def local_conv_matmul(inputs, kernel, kernel_mask, output_shape):\n Returns:\n Output (N+2)-D tensor with shape `output_shape`.\n \"\"\"\n- inputs_flat = backend.reshape(inputs, (backend.shape(inputs)[0], -1))\n+ inputs_flat = array_ops.reshape(inputs, (array_ops.shape(inputs)[0], -1))\n \n kernel = kernel_mask * kernel\n kernel = make_2d(kernel, split_dim=backend.ndim(kernel) // 2)\n \n output_flat = math_ops.matmul(inputs_flat, kernel, b_is_sparse=True)\n- output = backend.reshape(output_flat, [\n- backend.shape(output_flat)[0],\n+ output = array_ops.reshape(output_flat, [\n+ array_ops.shape(output_flat)[0],\n ] + output_shape.as_list()[1:])\n return output\n ",
"filename": "tensorflow/python/keras/layers/local.py",
"status": "modified"
},
{
"diff": "@@ -158,6 +158,11 @@ def test_locallyconnected_1d_regularization(self, data_format, padding,\n self.assertEqual(layer.kernel.constraint, k_constraint)\n self.assertEqual(layer.bias.constraint, b_constraint)\n \n+ def test_locallyconnected1d_invalid_output_shapes(self):\n+ kwargs = {'filters': 2, 'kernel_size': 10}\n+ with self.assertRaises(ValueError):\n+ layer = keras.layers.LocallyConnected1D(**kwargs)\n+ layer.build((None, 5, 2))\n \n @combinations.generate(combinations.combine(mode=['graph', 'eager']))\n class LocallyConnected2DLayersTest(test.TestCase, parameterized.TestCase):\n@@ -265,6 +270,12 @@ def test_locallyconnected_2d_regularization(self, data_format, padding,\n self.assertEqual(layer.kernel.constraint, k_constraint)\n self.assertEqual(layer.bias.constraint, b_constraint)\n \n+ def test_locallyconnected2d_invalid_output_shapes(self):\n+ kwargs = {'filters': 2, 'kernel_size': 10}\n+ with self.assertRaises(ValueError):\n+ layer = keras.layers.LocallyConnected2D(**kwargs)\n+ layer.build((None, 5, 5, 2))\n+\n \n @combinations.generate(combinations.combine(mode=['graph', 'eager']))\n class LocallyConnectedImplementationModeTest(test.TestCase,",
"filename": "tensorflow/python/keras/layers/local_test.py",
"status": "modified"
}
]
}
|
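Editor's note: the check added in PR #48610 relies on the spatial output length a convolution produces. The standalone sketch below re-derives that calculation from the standard formula (it mirrors what `conv_utils.conv_output_length` computes but is written here for illustration, not copied from the source) and makes the degenerate case from issue #48589 easy to see.

```python
def conv_output_length(input_length, kernel_size, stride,
                       padding="valid", dilation=1):
    # Effective kernel size once dilation is taken into account.
    effective_kernel = kernel_size + (kernel_size - 1) * (dilation - 1)
    if padding in ("same", "causal"):
        length = input_length
    elif padding == "valid":
        length = input_length - effective_kernel + 1
    elif padding == "full":
        length = input_length + effective_kernel - 1
    else:
        raise ValueError(f"Unknown padding: {padding}")
    return (length + stride - 1) // stride

print(conv_output_length(1, 2, 2))  # 0 -> the (1, 0, 0, 3) output from the issue
print(conv_output_length(5, 2, 2))  # 2 -> a valid configuration
```

A result of zero here is precisely the condition that the new `_check_invalid_dimension` helper turns into a `ValueError` at `build()` time instead of silently producing a zero-sized tensor.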
{
"body": "**System information**\r\nTensorFlow version:\r\n```python\r\nprint(tf.version.GIT_VERSION, tf.version.VERSION)\r\nv2.4.0-rc4-71-g582c8d236cb 2.4.0\r\n```\r\n\r\n**Describe the current behavior**\r\nAccording to [MobileNetV3Small documentation](https://www.tensorflow.org/api_docs/python/tf/keras/applications/MobileNetV3Small)\r\n\r\n> The weights for all 6 models are obtained and translated from the Tensorflow checkpoints from TensorFlow checkpoints found here.\r\n\r\nBut keras models cannot reproduce performance on ImageNet #48066.\r\n\r\nTensorflow pb model contains GlobalAvgPool **before** Conv2D with filter <1x1x576x1024>:\r\n\r\n\r\nBut MobileNetV3Small from keras.application contains GlobalAvgPool **after** Conv2D with filter <1x1x576x1024>:\r\n\r\n\r\nhttps://github.com/tensorflow/tensorflow/blob/6a278b9cf2fbed781661079b13d88b4106622a09/tensorflow/python/keras/applications/mobilenet_v3.py#L295-L308\r\n\r\nSuch version of MobileNetV3Small achieves top1: 56.79% instead of 68.1%.\r\n\r\n**Describe the expected behavior**\r\nTo achieve reported accuracy, change [model code](https://github.com/tensorflow/tensorflow/blob/6a278b9cf2fbed781661079b13d88b4106622a09/tensorflow/python/keras/applications/mobilenet_v3.py#L295-L315) to:\r\n\r\n```python\r\n x = activation(x)\r\n\r\n x = layers.GlobalAveragePooling2D()(x)\r\n if channel_axis == 1:\r\n x = layers.Reshape((last_conv_ch, 1, 1))(x)\r\n else:\r\n x = layers.Reshape((1, 1, last_conv_ch))(x)\r\n\r\n x = layers.Conv2D(\r\n last_point_ch,\r\n kernel_size=1,\r\n padding='same',\r\n use_bias=True,\r\n name='Conv_2')(x)\r\n x = activation(x)\r\n\r\n if include_top:\r\n if dropout_rate > 0:\r\n x = layers.Dropout(dropout_rate)(x)\r\n x = layers.Conv2D(classes, kernel_size=1, padding='same', name='Logits')(x)\r\n x = layers.Flatten()(x)\r\n imagenet_utils.validate_activation(classifier_activation, weights)\r\n x = layers.Activation(activation=classifier_activation,\r\n name='Predictions')(x)\r\n```\r\n",
"comments": [
{
"body": "Thanks for the report. Please open a PR with the fix.",
"created_at": "2021-04-14T21:49:56Z"
},
{
"body": "Thanks for reporting the issue. Please https://github.com/tensorflow/tensorflow/pull/48542#issuecomment-824258134 for more updates. #48542 should fix the issue.",
"created_at": "2021-04-21T18:16:56Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/48504\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/48504\">No</a>\n",
"created_at": "2021-04-22T17:28:20Z"
}
],
"number": 48504,
"title": "MobileNetV3 keras models have bug with GlobalAveragePooling2D"
}
|
{
"body": "relates to #48504",
"number": 48542,
"review_comments": [],
"title": "Fixed MobilenetV3 from keras/application"
}
|
{
"commits": [
{
"message": "Fixed MobilenetV3 from keras/application"
},
{
"message": "fixed tests"
}
],
"files": [
{
"diff": "@@ -92,7 +92,7 @@ def test_application_base(self, app, _):\n \n @parameterized.parameters(*MODEL_LIST)\n def test_application_notop(self, app, last_dim):\n- if 'NASNet' in app.__name__:\n+ if 'NASNet' or 'MobileNetV3' in app.__name__:\n only_check_last_dim = True\n else:\n only_check_last_dim = False\n@@ -118,7 +118,10 @@ def test_application_variable_input_channels(self, app, last_dim):\n input_shape = (None, None, 1)\n output_shape = _get_output_shape(\n lambda: app(weights=None, include_top=False, input_shape=input_shape))\n- self.assertShapeEqual(output_shape, (None, None, None, last_dim))\n+ if 'MobileNetV3' in app.__name__:\n+ self.assertShapeEqual(output_shape, (None, 1, 1, last_dim))\n+ else:\n+ self.assertShapeEqual(output_shape, (None, None, None, last_dim))\n backend.clear_session()\n \n if backend.image_data_format() == 'channels_first':\n@@ -127,7 +130,10 @@ def test_application_variable_input_channels(self, app, last_dim):\n input_shape = (None, None, 4)\n output_shape = _get_output_shape(\n lambda: app(weights=None, include_top=False, input_shape=input_shape))\n- self.assertShapeEqual(output_shape, (None, None, None, last_dim))\n+ if 'MobileNetV3' in app.__name__:\n+ self.assertShapeEqual(output_shape, (None, 1, 1, last_dim))\n+ else:\n+ self.assertShapeEqual(output_shape, (None, None, None, last_dim))\n backend.clear_session()\n \n ",
"filename": "tensorflow/python/keras/applications/applications_test.py",
"status": "modified"
},
{
"diff": "@@ -292,6 +292,11 @@ def MobileNetV3(stack_fn,\n axis=channel_axis, epsilon=1e-3,\n momentum=0.999, name='Conv_1/BatchNorm')(x)\n x = activation(x)\n+ x = layers.GlobalAveragePooling2D()(x)\n+ if channel_axis == 1:\n+ x = layers.Reshape((last_conv_ch, 1, 1))(x)\n+ else:\n+ x = layers.Reshape((1, 1, last_conv_ch))(x)\n x = layers.Conv2D(\n last_point_ch,\n kernel_size=1,\n@@ -301,11 +306,6 @@ def MobileNetV3(stack_fn,\n x = activation(x)\n \n if include_top:\n- x = layers.GlobalAveragePooling2D()(x)\n- if channel_axis == 1:\n- x = layers.Reshape((last_point_ch, 1, 1))(x)\n- else:\n- x = layers.Reshape((1, 1, last_point_ch))(x)\n if dropout_rate > 0:\n x = layers.Dropout(dropout_rate)(x)\n x = layers.Conv2D(classes, kernel_size=1, padding='same', name='Logits')(x)",
"filename": "tensorflow/python/keras/applications/mobilenet_v3.py",
"status": "modified"
}
]
}
|
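Editor's note: the reason the head ordering in MobileNetV3 matters for the translated checkpoint is that global average pooling commutes with the pointwise 1x1 `Conv_2` (a linear map plus a spatially constant bias) but not with the nonlinearity that follows it. A tiny numeric sketch, using ReLU as a stand-in for the hard-swish activation actually used in the model:

```python
import numpy as np

# Two spatial positions of a single feature map.
x = np.array([-2.0, 4.0])

# Pool first, then apply the nonlinearity (the ordering the checkpoint was trained with):
print(np.maximum(np.mean(x), 0.0))   # 1.0

# Apply the nonlinearity first, then pool (the ordering of the old Keras head):
print(np.mean(np.maximum(x, 0.0)))   # 2.0
```

Because the two orderings feed different features to the classifier, evaluating the translated weights with the pooling in the wrong place is consistent with the accuracy drop reported in issue #48504.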
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator DEPTH_TO_SPACE from lite to micro. The port will be submitted in a number of PRs. Here's a rough flight plan:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro without making any changes or including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test",
"comments": [
{
"body": "Hi @rkuester !\r\nIt seems PR 3-5 have been already merged through this [PR](https://github.com/tensorflow/tensorflow/pull/48508). \r\nCan we consider this as resolved as respective kernels already moved to [tflite-micro](https://github.com/tensorflow/tflite-micro) repo now.\r\nThank you! ",
"created_at": "2022-10-27T08:53:41Z"
},
{
"body": "@mohantym Yes this issue is resolved.",
"created_at": "2022-10-27T19:26:18Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46025\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46025\">No</a>\n",
"created_at": "2022-10-27T19:26:20Z"
}
],
"number": 46025,
"title": "micro: port op DEPTH_TO_SPACE from lite"
}
|
{
"body": "PR steps 3 through 5 for the DEPTH_TO_SPACE operator as per Issue #46025 ",
"number": 48508,
"review_comments": [],
"title": "micro: DEPTH_TO_SPACE PR3-5"
}
|
{
"commits": [
{
"message": "micro: copy operator DEPTH_TO_SPACE kernel from lite\n\nThis is a copy with minimal modification of the kernel and test for\noperator DEPTH_TO_SPACE from tensorflow/lite/kernels.\nAdaptations to micro and addition to the micro build to follow.\n\nPR step 3 for issue #46025"
},
{
"message": "micro: prepare to port operator DEPTH_TO_SPACE kernel from lite with test\n\nImplement skeleton (non-working) code for operator and test.\nHeader files changed.\nNamespaces changed.\nSome original code deleted.\nSome original code modified.\n\nPR step 4 of the work to port operator DEPTH_TO_SPACE as tracked in Issue #46025"
},
{
"message": "micro: port operator DEPTH_TO_SPACE kernel from lite with test\n\nComplete implementation of TFLM operator DEPTH_TO_SPACE and associated TFLM test code.\n\nPR step 5 of the work to port operator DEPTH_TO_SPACE as tracked in Issue #46025"
}
],
"files": [
{
"diff": "@@ -43,7 +43,7 @@ inline void DepthToSpace(const tflite::DepthToSpaceParams& op_params,\n const int output_height = output_shape.Dims(1);\n const int output_batch = output_shape.Dims(0);\n \n- const int32 block_size = op_params.block_size;\n+ const int32_t block_size = op_params.block_size;\n \n TFLITE_DCHECK_EQ(input_width * block_size, output_width);\n TFLITE_DCHECK_EQ(input_height * block_size, output_height);",
"filename": "tensorflow/lite/kernels/internal/reference/depth_to_space.h",
"status": "modified"
},
{
"diff": "@@ -33,6 +33,7 @@ AllOpsResolver::AllOpsResolver() {\n AddConv2D();\n AddCos();\n AddCumSum();\n+ AddDepthToSpace();\n AddDepthwiseConv2D();\n AddDequantize();\n AddDetectionPostprocess();",
"filename": "tensorflow/lite/micro/all_ops_resolver.cc",
"status": "modified"
},
{
"diff": "@@ -267,6 +267,7 @@ cc_library(\n \"comparisons.cc\",\n \"concatenation.cc\",\n \"cumsum.cc\",\n+ \"depth_to_space.cc\",\n \"dequantize.cc\",\n \"detection_postprocess.cc\",\n \"elementwise.cc\",\n@@ -555,6 +556,21 @@ cc_test(\n ],\n )\n \n+cc_test(\n+ name = \"depth_to_space_test\",\n+ srcs = [\n+ \"depth_to_space_test.cc\",\n+ ],\n+ deps = [\n+ \":kernel_runner\",\n+ \"//tensorflow/lite/c:common\",\n+ \"//tensorflow/lite/micro:debug_log\",\n+ \"//tensorflow/lite/micro:op_resolvers\",\n+ \"//tensorflow/lite/micro:test_helpers\",\n+ \"//tensorflow/lite/micro/testing:micro_test\",\n+ ],\n+)\n+\n cc_test(\n name = \"depthwise_conv_test\",\n srcs = [",
"filename": "tensorflow/lite/micro/kernels/BUILD",
"status": "modified"
},
{
"diff": "@@ -1,4 +1,4 @@\n-/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n \n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n@@ -12,33 +12,28 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n ==============================================================================*/\n+#include \"tensorflow/lite/kernels/internal/reference/depth_to_space.h\"\n+\n #include <stdint.h>\n \n-#include \"tensorflow/lite/c/builtin_op_data.h\"\n #include \"tensorflow/lite/c/common.h\"\n-#include \"tensorflow/lite/kernels/internal/optimized/optimized_ops.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/reference_ops.h\"\n-#include \"tensorflow/lite/kernels/internal/tensor.h\"\n-#include \"tensorflow/lite/kernels/internal/tensor_ctypes.h\"\n #include \"tensorflow/lite/kernels/internal/types.h\"\n #include \"tensorflow/lite/kernels/kernel_util.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_util.h\"\n \n namespace tflite {\n-namespace ops {\n-namespace builtin {\n-namespace depth_to_space {\n-\n-// This file has two implementation of DepthToSpace. Note that DepthToSpace only\n-// works on 4D tensors.\n-enum KernelType {\n- kReference,\n- kGenericOptimized,\n-};\n+namespace {\n \n constexpr int kInputTensor = 0;\n constexpr int kOutputTensor = 0;\n \n-TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n+// input/output tensor shape rank associations\n+constexpr int kBatchRank = 0;\n+constexpr int kHeightRank = 1;\n+constexpr int kWidthRank = 2;\n+constexpr int kDepthRank = 3;\n+\n+TfLiteStatus CalculateOpData(TfLiteContext* context, TfLiteNode* node) {\n auto* params =\n reinterpret_cast<TfLiteDepthToSpaceParams*>(node->builtin_data);\n \n@@ -55,15 +50,13 @@ TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n \n auto data_type = output->type;\n TF_LITE_ENSURE(context,\n- data_type == kTfLiteFloat32 || data_type == kTfLiteUInt8 ||\n- data_type == kTfLiteInt8 || data_type == kTfLiteInt32 ||\n- data_type == kTfLiteInt64);\n+ data_type == kTfLiteFloat32 || data_type == kTfLiteInt8);\n TF_LITE_ENSURE_TYPES_EQ(context, input->type, output->type);\n \n const int block_size = params->block_size;\n- const int input_height = input->dims->data[1];\n- const int input_width = input->dims->data[2];\n- const int input_channels = input->dims->data[3];\n+ const int input_height = input->dims->data[kHeightRank];\n+ const int input_width = input->dims->data[kWidthRank];\n+ const int input_channels = input->dims->data[kDepthRank];\n int output_height = input_height * block_size;\n int output_width = input_width * block_size;\n int output_channels = input_channels / block_size / block_size;\n@@ -73,98 +66,77 @@ TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n TF_LITE_ENSURE_EQ(context, input_channels,\n output_channels * block_size * block_size);\n \n- TfLiteIntArray* output_size = TfLiteIntArrayCreate(4);\n- output_size->data[0] = input->dims->data[0];\n- output_size->data[1] = output_height;\n- output_size->data[2] = output_width;\n- output_size->data[3] = output_channels;\n+ // We must update the output tensor dimensions.\n+ // The dims storage is expected to be the same area in memory\n+ // for both TfLiteTensor and TfLiteEvalTensor. 
This is important\n+ // because TfLiteTensor in the MicroInterpreter is a temporary\n+ // allocation. For the KernelRunner interpreter, TfLiteEvalTensor\n+ // is a temporary allocation. We must therefore relocate the dims\n+ // from the FlatBuffer to the persistant storage arena.\n+ TfLiteEvalTensor* output_eval =\n+ tflite::micro::GetEvalOutput(context, node, kOutputTensor);\n+ TF_LITE_ENSURE_OK(context, tflite::micro::CreateWritableTensorDimsWithCopy(\n+ context, output, output_eval));\n+ output->dims->data[kBatchRank] = input->dims->data[kBatchRank];\n+ output->dims->data[kHeightRank] = output_height;\n+ output->dims->data[kWidthRank] = output_width;\n+ output->dims->data[kDepthRank] = output_channels;\n \n- return context->ResizeTensor(context, output, output_size);\n+ return kTfLiteOk;\n+}\n+\n+TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n+ return CalculateOpData(context, node);\n }\n \n-template <KernelType kernel_type>\n TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n auto* params =\n reinterpret_cast<TfLiteDepthToSpaceParams*>(node->builtin_data);\n \n- const TfLiteTensor* input;\n- TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, kInputTensor, &input));\n- TfLiteTensor* output;\n- TF_LITE_ENSURE_OK(context,\n- GetOutputSafe(context, node, kOutputTensor, &output));\n+ const TfLiteEvalTensor* input =\n+ tflite::micro::GetEvalInput(context, node, kInputTensor);\n+ TfLiteEvalTensor* output =\n+ tflite::micro::GetEvalOutput(context, node, kOutputTensor);\n+\n+ tflite::DepthToSpaceParams op_params;\n+ op_params.block_size = static_cast<int32_t>(params->block_size);\n \n-#define TF_LITE_DEPTH_TO_SPACE(type, scalar) \\\n- tflite::DepthToSpaceParams op_params; \\\n- op_params.block_size = params->block_size; \\\n- type::DepthToSpace(op_params, GetTensorShape(input), \\\n- GetTensorData<scalar>(input), GetTensorShape(output), \\\n- GetTensorData<scalar>(output))\n switch (input->type) { // Already know in/out types are same.\n case kTfLiteFloat32:\n- if (kernel_type == kReference) {\n- TF_LITE_DEPTH_TO_SPACE(reference_ops, float);\n- } else {\n- TF_LITE_DEPTH_TO_SPACE(optimized_ops, float);\n- }\n- break;\n- case kTfLiteUInt8:\n- if (kernel_type == kReference) {\n- TF_LITE_DEPTH_TO_SPACE(reference_ops, uint8_t);\n- } else {\n- TF_LITE_DEPTH_TO_SPACE(optimized_ops, uint8_t);\n- }\n+ reference_ops::DepthToSpace(op_params,\n+ tflite::micro::GetTensorShape(input),\n+ tflite::micro::GetTensorData<float>(input),\n+ tflite::micro::GetTensorShape(output),\n+ tflite::micro::GetTensorData<float>(output));\n break;\n case kTfLiteInt8:\n- if (kernel_type == kReference) {\n- TF_LITE_DEPTH_TO_SPACE(reference_ops, int8_t);\n- } else {\n- TF_LITE_DEPTH_TO_SPACE(optimized_ops, int8_t);\n- }\n- break;\n- case kTfLiteInt32:\n- if (kernel_type == kReference) {\n- TF_LITE_DEPTH_TO_SPACE(reference_ops, int32_t);\n- } else {\n- TF_LITE_DEPTH_TO_SPACE(optimized_ops, int32_t);\n- }\n- break;\n- case kTfLiteInt64:\n- if (kernel_type == kReference) {\n- TF_LITE_DEPTH_TO_SPACE(reference_ops, int64_t);\n- } else {\n- TF_LITE_DEPTH_TO_SPACE(optimized_ops, int64_t);\n- }\n+ reference_ops::DepthToSpace(op_params,\n+ tflite::micro::GetTensorShape(input),\n+ tflite::micro::GetTensorData<int8_t>(input),\n+ tflite::micro::GetTensorShape(output),\n+ tflite::micro::GetTensorData<int8_t>(output));\n break;\n default:\n- TF_LITE_KERNEL_LOG(context, \"Type '%s' not currently supported.\",\n- TfLiteTypeGetName(input->type));\n+ TF_LITE_KERNEL_LOG(\n+ context, \"DEPTH_TO_SPACE only supports 
FLOAT32 and INT8, got %s.\",\n+ TfLiteTypeGetName(output->type));\n return kTfLiteError;\n }\n-#undef TF_LITE_DEPTH_TO_SPACE\n \n return kTfLiteOk;\n }\n \n-} // namespace depth_to_space\n-\n-TfLiteRegistration* Register_DEPTH_TO_SPACE_REF() {\n- static TfLiteRegistration r = {\n- nullptr, nullptr, depth_to_space::Prepare,\n- depth_to_space::Eval<depth_to_space::kReference>};\n- return &r;\n-}\n-\n-TfLiteRegistration* Register_DEPTH_TO_SPACE_GENERIC_OPT() {\n- static TfLiteRegistration r = {\n- nullptr, nullptr, depth_to_space::Prepare,\n- depth_to_space::Eval<depth_to_space::kGenericOptimized>};\n- return &r;\n-}\n-\n-TfLiteRegistration* Register_DEPTH_TO_SPACE() {\n- return Register_DEPTH_TO_SPACE_GENERIC_OPT();\n+} // namespace\n+\n+TfLiteRegistration Register_DEPTH_TO_SPACE() {\n+ return {/*init=*/nullptr,\n+ /*free=*/nullptr,\n+ /*prepare=*/Prepare,\n+ /*invoke=*/Eval,\n+ /*profiling_string=*/nullptr,\n+ /*builtin_code=*/0,\n+ /*custom_name=*/nullptr,\n+ /*version=*/0};\n }\n \n-} // namespace builtin\n-} // namespace ops\n } // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/depth_to_space.cc",
"status": "modified"
},
{
"diff": "@@ -1,4 +1,4 @@\n-/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n \n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n@@ -12,97 +12,298 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n ==============================================================================*/\n-#include <stdint.h>\n+#include <type_traits>\n \n-#include <initializer_list>\n-#include <vector>\n-\n-#include \"flatbuffers/flatbuffers.h\" // from @flatbuffers\n-#include \"tensorflow/lite/kernels/test_util.h\"\n-#include \"tensorflow/lite/schema/schema_generated.h\"\n+#include \"tensorflow/lite/c/builtin_op_data.h\"\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_runner.h\"\n+#include \"tensorflow/lite/micro/test_helpers.h\"\n+#include \"tensorflow/lite/micro/testing/micro_test.h\"\n \n namespace tflite {\n+namespace testing {\n namespace {\n \n-using ::testing::ElementsAre;\n-using ::testing::ElementsAreArray;\n-\n-class DepthToSpaceOpModel : public SingleOpModel {\n- public:\n- DepthToSpaceOpModel(const TensorData& tensor_data, int block_size) {\n- input_ = AddInput(tensor_data);\n- output_ = AddOutput(tensor_data);\n- SetBuiltinOp(BuiltinOperator_DEPTH_TO_SPACE,\n- BuiltinOptions_DepthToSpaceOptions,\n- CreateDepthToSpaceOptions(builder_, block_size).Union());\n- BuildInterpreter({GetShape(input_)});\n- }\n+constexpr int kOutputDimsCount = 4;\n+\n+struct DepthToSpaceTestParams {\n+ int block_size;\n+ // output_dims_data is a TfLiteIntArray\n+ int output_dims_data[kOutputDimsCount + 1] = {kOutputDimsCount, 0, 0, 0, 0};\n+};\n+\n+void ExecuteDepthToSpaceTest(const DepthToSpaceTestParams& params,\n+ TfLiteTensor* tensors, int tensors_count) {\n+ constexpr int kInputArrayData[] = {1, 0};\n+ TfLiteIntArray* inputs_array = IntArrayFromInts(kInputArrayData);\n+ constexpr int kOutputArrayData[] = {1, 1};\n+ TfLiteIntArray* outputs_array = IntArrayFromInts(kOutputArrayData);\n+\n+ TfLiteDepthToSpaceParams op_params = {};\n+ op_params.block_size = params.block_size;\n \n- template <typename T>\n- void SetInput(std::initializer_list<T> data) {\n- PopulateTensor<T>(input_, data);\n+ const TfLiteRegistration registration = tflite::Register_DEPTH_TO_SPACE();\n+ micro::KernelRunner runner(registration, tensors, tensors_count, inputs_array,\n+ outputs_array, static_cast<void*>(&op_params));\n+\n+ TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.InitAndPrepare());\n+ TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.Invoke());\n+}\n+\n+template <typename T>\n+void TestDepthToSpace(const DepthToSpaceTestParams& params,\n+ const int* input_dims_data, const T* input_data,\n+ const int* expected_dims_data, const T* expected_data,\n+ T* output_data) {\n+ TfLiteIntArray* input_dims = IntArrayFromInts(input_dims_data);\n+ TfLiteIntArray* expected_dims = IntArrayFromInts(expected_dims_data);\n+ TfLiteIntArray* output_dims = IntArrayFromInts(params.output_dims_data);\n+ const int expected_count = ElementCount(*expected_dims);\n+\n+ TfLiteTensor tensors[] = {\n+ CreateTensor(input_data, input_dims),\n+ CreateTensor(output_data, output_dims),\n+ };\n+ constexpr int tensors_count = std::extent<decltype(tensors)>::value;\n+ ExecuteDepthToSpaceTest(params, tensors, tensors_count);\n+\n+ constexpr float kTolerance = 
1e-5;\n+ for (int i = 0; i < expected_count; i++) {\n+ TF_LITE_MICRO_EXPECT_NEAR(expected_data[i], output_data[i], kTolerance);\n }\n- template <typename T>\n- std::vector<T> GetOutput() {\n- return ExtractVector<T>(output_);\n+ for (int i = 0; i < expected_dims->size; i++) {\n+ // output dims will have been relocated during prepare phase,\n+ // so use the tensor dims pointer.\n+ TF_LITE_MICRO_EXPECT_EQ(expected_dims->data[i], tensors[1].dims->data[i]);\n }\n- std::vector<int> GetOutputShape() { return GetTensorShape(output_); }\n+}\n \n- private:\n- int input_;\n- int output_;\n+// min/max are used to compute scale, zero-point, compare tolerance\n+template <typename T, int kOutputSize>\n+struct TestQuantParams {\n+ float data_min; // input and output data minimum value\n+ float data_max; // input and output data maximum value\n+ T input_data[kOutputSize]; // quantized input storage\n+ T output_data[kOutputSize]; // quantized output storage\n };\n \n-#ifdef GTEST_HAS_DEATH_TEST\n-TEST(DepthToSpaceOpModel, BadBlockSize) {\n- EXPECT_DEATH(DepthToSpaceOpModel({TensorType_FLOAT32, {1, 1, 1, 4}}, 4),\n- \"Cannot allocate tensors\");\n+// for quantized, the error shouldn't exceed step\n+template <typename T>\n+float GetTolerance(float min, float max) {\n+ float kQuantizedStep =\n+ 2.0f * (max - min) /\n+ (std::numeric_limits<T>::max() - std::numeric_limits<T>::min());\n+ return kQuantizedStep;\n }\n-#endif\n-\n-TEST(DepthToSpaceOpModel, Float32) {\n- DepthToSpaceOpModel m({TensorType_FLOAT32, {1, 1, 1, 4}}, 2);\n- m.SetInput<float>({1.4, 2.3, 3.2, 4.1});\n- m.Invoke();\n- EXPECT_THAT(m.GetOutput<float>(), ElementsAreArray({1.4, 2.3, 3.2, 4.1}));\n- EXPECT_THAT(m.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n+\n+template <typename T, int kOutputSize>\n+void TestDepthToSpaceQuantized(const DepthToSpaceTestParams& params,\n+ TestQuantParams<T, kOutputSize>* quant_params,\n+ const int* input_dims_data,\n+ const float* input_data,\n+ const int* expected_dims_data,\n+ const float* expected_data, float* output_data) {\n+ TfLiteIntArray* input_dims = IntArrayFromInts(input_dims_data);\n+ TfLiteIntArray* expected_dims = IntArrayFromInts(expected_dims_data);\n+ TfLiteIntArray* output_dims = IntArrayFromInts(params.output_dims_data);\n+\n+ const float scale =\n+ ScaleFromMinMax<T>(quant_params->data_min, quant_params->data_max);\n+ const int zero_point =\n+ ZeroPointFromMinMax<T>(quant_params->data_min, quant_params->data_max);\n+\n+ TfLiteTensor tensors[] = {\n+ CreateQuantizedTensor(input_data, quant_params->input_data, input_dims,\n+ scale, zero_point),\n+ CreateQuantizedTensor(quant_params->output_data, output_dims, scale,\n+ zero_point),\n+ };\n+ constexpr int kTensorsCount = std::extent<decltype(tensors)>::value;\n+\n+ ExecuteDepthToSpaceTest(params, tensors, kTensorsCount);\n+\n+ Dequantize(quant_params->output_data, kOutputSize, scale, zero_point,\n+ output_data);\n+ const float kTolerance =\n+ GetTolerance<T>(quant_params->data_min, quant_params->data_max);\n+ for (int i = 0; i < kOutputSize; i++) {\n+ TF_LITE_MICRO_EXPECT_NEAR(expected_data[i], output_data[i], kTolerance);\n+ }\n+ for (int i = 0; i < expected_dims->size; i++) {\n+ // output dims will have been relocated during prepare phase,\n+ // so use the tensor dims pointer.\n+ TF_LITE_MICRO_EXPECT_EQ(expected_dims->data[i], tensors[1].dims->data[i]);\n+ }\n }\n \n-TEST(DepthToSpaceOpModel, Uint8) {\n- DepthToSpaceOpModel m({TensorType_UINT8, {1, 1, 2, 4}}, 2);\n- m.SetInput<uint8_t>({1, 2, 3, 4, 5, 6, 7, 8});\n- m.Invoke();\n- 
EXPECT_THAT(m.GetOutput<uint8_t>(),\n- ElementsAreArray({1, 2, 5, 6, 3, 4, 7, 8}));\n- EXPECT_THAT(m.GetOutputShape(), ElementsAre(1, 2, 4, 1));\n+} // namespace\n+} // namespace testing\n+} // namespace tflite\n+\n+TF_LITE_MICRO_TESTS_BEGIN\n+\n+TF_LITE_MICRO_TEST(DepthToSpaceOpModelFloat32_1114_2) {\n+ constexpr int kInputDims[] = {4, 1, 1, 1, 4};\n+ constexpr float kInput[] = {1.4, 2.3, 3.2, 4.1};\n+ constexpr int kExpectDims[] = {4, 1, 2, 2, 1};\n+ constexpr float kExpect[] = {1.4, 2.3, 3.2, 4.1};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+ tflite::testing::DepthToSpaceTestParams params;\n+ params.block_size = 2;\n+\n+ tflite::testing::TestDepthToSpace(params, kInputDims, kInput, kExpectDims,\n+ kExpect, output_data);\n }\n \n-TEST(DepthToSpaceOpModel, int8) {\n- DepthToSpaceOpModel m({TensorType_INT8, {1, 2, 1, 4}}, 2);\n- m.SetInput<int8_t>({1, 2, 3, 4, 5, 6, 7, 8});\n- m.Invoke();\n- EXPECT_THAT(m.GetOutput<int8_t>(),\n- ElementsAreArray({1, 2, 3, 4, 5, 6, 7, 8}));\n- EXPECT_THAT(m.GetOutputShape(), ElementsAre(1, 4, 2, 1));\n+TF_LITE_MICRO_TEST(DepthToSpaceOpModelFloat32_1124_2) {\n+ constexpr int kInputDims[] = {4, 1, 1, 2, 4};\n+ constexpr float kInput[] = {1, 2, 3, 4, 5, 6, 7, 8};\n+ constexpr int kExpectDims[] = {4, 1, 2, 4, 1};\n+ constexpr float kExpect[] = {1, 2, 5, 6, 3, 4, 7, 8};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+ tflite::testing::DepthToSpaceTestParams params;\n+ params.block_size = 2;\n+\n+ tflite::testing::TestDepthToSpace(params, kInputDims, kInput, kExpectDims,\n+ kExpect, output_data);\n }\n \n-TEST(DepthToSpaceOpModel, Int32) {\n- DepthToSpaceOpModel m({TensorType_INT32, {1, 2, 2, 4}}, 2);\n- m.SetInput<int32_t>({1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16});\n- m.Invoke();\n- EXPECT_THAT(m.GetOutput<int32_t>(),\n- ElementsAreArray(\n- {1, 2, 5, 6, 3, 4, 7, 8, 9, 10, 13, 14, 11, 12, 15, 16}));\n- EXPECT_THAT(m.GetOutputShape(), ElementsAre(1, 4, 4, 1));\n+TF_LITE_MICRO_TEST(DepthToSpaceOpModelFloat32_1214_2) {\n+ constexpr int kInputDims[] = {4, 1, 2, 1, 4};\n+ constexpr float kInput[] = {1, 2, 3, 4, 5, 6, 7, 8};\n+ constexpr int kExpectDims[] = {4, 1, 4, 2, 1};\n+ constexpr float kExpect[] = {1, 2, 3, 4, 5, 6, 7, 8};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+ tflite::testing::DepthToSpaceTestParams params;\n+ params.block_size = 2;\n+\n+ tflite::testing::TestDepthToSpace(params, kInputDims, kInput, kExpectDims,\n+ kExpect, output_data);\n }\n \n-TEST(DepthToSpaceOpModel, Int64) {\n- DepthToSpaceOpModel m({TensorType_INT64, {1, 1, 1, 1}}, 1);\n- m.SetInput<int64_t>({4});\n- m.Invoke();\n- EXPECT_THAT(m.GetOutput<int64_t>(), ElementsAreArray({4}));\n- EXPECT_THAT(m.GetOutputShape(), ElementsAre(1, 1, 1, 1));\n+TF_LITE_MICRO_TEST(DepthToSpaceOpModelFloat32_1224_2) {\n+ constexpr int kInputDims[] = {4, 1, 2, 2, 4};\n+ constexpr float kInput[] = {1, 2, 3, 4, 5, 6, 7, 8,\n+ 9, 10, 11, 12, 13, 14, 15, 16};\n+ constexpr int kExpectDims[] = {4, 1, 4, 4, 1};\n+ constexpr float kExpect[] = {1, 2, 5, 6, 3, 4, 7, 8,\n+ 9, 10, 13, 14, 11, 12, 15, 16};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+ tflite::testing::DepthToSpaceTestParams params;\n+ params.block_size = 2;\n+\n+ tflite::testing::TestDepthToSpace(params, kInputDims, kInput, kExpectDims,\n+ kExpect, output_data);\n }\n \n-} // namespace\n-} 
// namespace tflite\n+TF_LITE_MICRO_TEST(DepthToSpaceOpModelFloat32_1111_1) {\n+ constexpr int kInputDims[] = {4, 1, 1, 1, 1};\n+ constexpr float kInput[] = {4};\n+ constexpr int kExpectDims[] = {4, 1, 1, 1, 1};\n+ constexpr float kExpect[] = {4};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+ tflite::testing::DepthToSpaceTestParams params;\n+ params.block_size = 1;\n+\n+ tflite::testing::TestDepthToSpace(params, kInputDims, kInput, kExpectDims,\n+ kExpect, output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(DepthToSpaceOpModelInt8_1114_2) {\n+ constexpr int kInputDims[] = {4, 1, 1, 1, 4};\n+ constexpr float kInput[] = {1.4, 2.3, 3.2, 4.1};\n+ constexpr int kExpectDims[] = {4, 1, 2, 2, 1};\n+ constexpr float kExpect[] = {1.4, 2.3, 3.2, 4.1};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+ tflite::testing::DepthToSpaceTestParams params;\n+ params.block_size = 2;\n+ tflite::testing::TestQuantParams<int8_t, kOutputCount> quant_params = {};\n+ quant_params.data_min = 0.0;\n+ quant_params.data_max = 5.0;\n+\n+ tflite::testing::TestDepthToSpaceQuantized<int8_t, kOutputCount>(\n+ params, &quant_params, kInputDims, kInput, kExpectDims, kExpect,\n+ output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(DepthToSpaceOpModelInt8_1124_2) {\n+ constexpr int kInputDims[] = {4, 1, 1, 2, 4};\n+ constexpr float kInput[] = {1, 2, 3, 4, 5, 6, 7, 8};\n+ constexpr int kExpectDims[] = {4, 1, 2, 4, 1};\n+ constexpr float kExpect[] = {1, 2, 5, 6, 3, 4, 7, 8};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+ tflite::testing::DepthToSpaceTestParams params;\n+ params.block_size = 2;\n+ tflite::testing::TestQuantParams<int8_t, kOutputCount> quant_params = {};\n+ quant_params.data_min = 0.0;\n+ quant_params.data_max = 9.0;\n+\n+ tflite::testing::TestDepthToSpaceQuantized<int8_t, kOutputCount>(\n+ params, &quant_params, kInputDims, kInput, kExpectDims, kExpect,\n+ output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(DepthToSpaceOpModelInt8_1214_2) {\n+ constexpr int kInputDims[] = {4, 1, 2, 1, 4};\n+ constexpr float kInput[] = {1, 2, 3, 4, 5, 6, 7, 8};\n+ constexpr int kExpectDims[] = {4, 1, 4, 2, 1};\n+ constexpr float kExpect[] = {1, 2, 3, 4, 5, 6, 7, 8};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+ tflite::testing::DepthToSpaceTestParams params;\n+ params.block_size = 2;\n+ tflite::testing::TestQuantParams<int8_t, kOutputCount> quant_params = {};\n+ quant_params.data_min = 0.0;\n+ quant_params.data_max = 9.0;\n+\n+ tflite::testing::TestDepthToSpaceQuantized<int8_t, kOutputCount>(\n+ params, &quant_params, kInputDims, kInput, kExpectDims, kExpect,\n+ output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(DepthToSpaceOpModelInt8_1224_2) {\n+ constexpr int kInputDims[] = {4, 1, 2, 2, 4};\n+ constexpr float kInput[] = {1, 2, 3, 4, 5, 6, 7, 8,\n+ 9, 10, 11, 12, 13, 14, 15, 16};\n+ constexpr int kExpectDims[] = {4, 1, 4, 4, 1};\n+ constexpr float kExpect[] = {1, 2, 5, 6, 3, 4, 7, 8,\n+ 9, 10, 13, 14, 11, 12, 15, 16};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+ tflite::testing::DepthToSpaceTestParams params;\n+ params.block_size = 2;\n+ tflite::testing::TestQuantParams<int8_t, kOutputCount> quant_params = {};\n+ quant_params.data_min = 0.0;\n+ quant_params.data_max = 17.0;\n+\n+ tflite::testing::TestDepthToSpaceQuantized<int8_t, kOutputCount>(\n+ params, 
&quant_params, kInputDims, kInput, kExpectDims, kExpect,\n+ output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(DepthToSpaceOpModelInt8_1111_1) {\n+ constexpr int kInputDims[] = {4, 1, 1, 1, 1};\n+ constexpr float kInput[] = {4};\n+ constexpr int kExpectDims[] = {4, 1, 1, 1, 1};\n+ constexpr float kExpect[] = {4};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+ tflite::testing::DepthToSpaceTestParams params;\n+ params.block_size = 1;\n+ tflite::testing::TestQuantParams<int8_t, kOutputCount> quant_params = {};\n+ quant_params.data_min = 3.0;\n+ quant_params.data_max = 5.0;\n+\n+ tflite::testing::TestDepthToSpaceQuantized<int8_t, kOutputCount>(\n+ params, &quant_params, kInputDims, kInput, kExpectDims, kExpect,\n+ output_data);\n+}\n+\n+TF_LITE_MICRO_TESTS_END",
"filename": "tensorflow/lite/micro/kernels/depth_to_space_test.cc",
"status": "modified"
},
{
"diff": "@@ -36,6 +36,7 @@ TfLiteRegistration Register_BATCH_TO_SPACE_ND();\n TfLiteRegistration Register_CAST();\n TfLiteRegistration Register_CONV_2D();\n TfLiteRegistration Register_CUMSUM();\n+TfLiteRegistration Register_DEPTH_TO_SPACE();\n TfLiteRegistration Register_DEPTHWISE_CONV_2D();\n TfLiteRegistration Register_DIV();\n TfLiteRegistration Register_ELU();",
"filename": "tensorflow/lite/micro/kernels/micro_ops.h",
"status": "modified"
},
{
"diff": "@@ -182,6 +182,11 @@ class MicroMutableOpResolver : public MicroOpResolver {\n ParseCumsum);\n }\n \n+ TfLiteStatus AddDepthToSpace() {\n+ return AddBuiltin(BuiltinOperator_DEPTH_TO_SPACE,\n+ tflite::Register_DEPTH_TO_SPACE(), ParseDepthToSpace);\n+ }\n+\n TfLiteStatus AddDepthwiseConv2D() {\n return AddBuiltin(BuiltinOperator_DEPTHWISE_CONV_2D,\n Register_DEPTHWISE_CONV_2D(), ParseDepthwiseConv2D);",
"filename": "tensorflow/lite/micro/micro_mutable_op_resolver.h",
"status": "modified"
},
{
"diff": "@@ -276,6 +276,7 @@ tensorflow/lite/micro/kernels/comparisons_test.cc \\\n tensorflow/lite/micro/kernels/concatenation_test.cc \\\n tensorflow/lite/micro/kernels/conv_test.cc \\\n tensorflow/lite/micro/kernels/cumsum_test.cc \\\n+tensorflow/lite/micro/kernels/depth_to_space_test.cc \\\n tensorflow/lite/micro/kernels/depthwise_conv_test.cc \\\n tensorflow/lite/micro/kernels/dequantize_test.cc \\\n tensorflow/lite/micro/kernels/detection_postprocess_test.cc \\\n@@ -337,6 +338,7 @@ tensorflow/lite/micro/kernels/concatenation.cc \\\n tensorflow/lite/micro/kernels/conv.cc \\\n tensorflow/lite/micro/kernels/conv_common.cc \\\n tensorflow/lite/micro/kernels/cumsum.cc \\\n+tensorflow/lite/micro/kernels/depth_to_space.cc \\\n tensorflow/lite/micro/kernels/depthwise_conv.cc \\\n tensorflow/lite/micro/kernels/depthwise_conv_common.cc \\\n tensorflow/lite/micro/kernels/dequantize.cc \\\n@@ -434,6 +436,7 @@ tensorflow/lite/kernels/internal/reference/comparisons.h \\\n tensorflow/lite/kernels/internal/reference/concatenation.h \\\n tensorflow/lite/kernels/internal/reference/conv.h \\\n tensorflow/lite/kernels/internal/reference/cumsum.h \\\n+tensorflow/lite/kernels/internal/reference/depth_to_space.h \\\n tensorflow/lite/kernels/internal/reference/depthwiseconv_float.h \\\n tensorflow/lite/kernels/internal/reference/depthwiseconv_uint8.h \\\n tensorflow/lite/kernels/internal/reference/dequantize.h \\",
"filename": "tensorflow/lite/micro/tools/make/Makefile",
"status": "modified"
}
]
}
|
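The DEPTH_TO_SPACE kernel tests above encode the NHWC block rearrangement the TFLM kernel is expected to perform. As a rough cross-check (illustrative only, not part of the PR), a minimal NumPy sketch of that rearrangement reproduces the 1x1x2x4, block_size=2 expectation {1, 2, 5, 6, 3, 4, 7, 8}:

```python
import numpy as np

def depth_to_space_nhwc(x, block):
    # Move channel sub-blocks of size block*block into the spatial dimensions.
    n, h, w, c = x.shape
    assert c % (block * block) == 0
    c_out = c // (block * block)
    y = x.reshape(n, h, w, block, block, c_out)  # split channels into (bh, bw, c_out)
    y = y.transpose(0, 1, 3, 2, 4, 5)            # reorder to (n, h, bh, w, bw, c_out)
    return y.reshape(n, h * block, w * block, c_out)

x = np.arange(1, 9, dtype=np.float32).reshape(1, 1, 2, 4)
print(depth_to_space_nhwc(x, 2).flatten())       # [1. 2. 5. 6. 3. 4. 7. 8.]
```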
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator CUMSUM from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1 (step 1): Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2 (step 2): Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\n\r\nThe next 3 steps are combined into a single PR3 with separate commits:\r\n\r\n(step 3): Copy operator from lite to micro making minimal changes and not including in the build\r\n(step 4): Delete extra code from the micro copy of the operator\r\n(step 5): Port micro copy of operator as necessary and add a corresponding test",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47290\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47290\">No</a>\n",
"created_at": "2021-06-02T16:07:22Z"
}
],
"number": 47290,
"title": "micro: port op CUMSUM from lite"
}
|
{
"body": "Added support for INT8 to the CUMSUM operator.\r\n\r\nReference Issue #47290",
"number": 48472,
"review_comments": [],
"title": "micro: add INT8 support to CUMSUM op"
}
|
{
"commits": [
{
"message": "micro: add INT8 support to CUMSUM op\n\nAdded support for INT8 to the CUMSUM operator.\n\nReference Issue #47290"
}
],
"files": [
{
"diff": "@@ -15,10 +15,12 @@ limitations under the License.\n #ifndef TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_CUMSUM_H_\n #define TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_CUMSUM_H_\n \n+#include <algorithm>\n #include <cstdint>\n+#include <limits>\n \n+#include \"tensorflow/lite/kernels/internal/common.h\"\n #include \"tensorflow/lite/kernels/internal/compatibility.h\"\n-#include \"tensorflow/lite/kernels/internal/types.h\"\n \n namespace tflite {\n namespace reference_ops {\n@@ -79,6 +81,94 @@ inline void CumSum(const T* input_data, const RuntimeShape& shape, int32_t axis,\n }\n }\n \n+//\n+// Quantized INT8 CUMSUM\n+//\n+inline void CumSum(const ArithmeticParams& params, const int8_t* input_data,\n+ const RuntimeShape& shape, int32_t axis, bool exclusive,\n+ bool reverse, int8_t* output_data) {\n+ TFLITE_DCHECK_LE(params.quantized_activation_min,\n+ params.quantized_activation_max);\n+ // Input offset is negative input zero point. Activation tensors are\n+ // asymmetric quantized so they span the full int8 range.\n+ // All inputs should have same zero-point and scale, this is checked during\n+ // Prepare stage.\n+ TFLITE_DCHECK_GE(-params.input1_offset, std::numeric_limits<int8_t>::min());\n+ TFLITE_DCHECK_LE(-params.input1_offset, std::numeric_limits<int8_t>::max());\n+\n+ const int32_t rank = shape.DimensionsCount();\n+ TFLITE_DCHECK_GE(rank, 1);\n+ TFLITE_DCHECK_GE(axis, 0);\n+ TFLITE_DCHECK_LT(axis, rank);\n+\n+ size_t inner = 1;\n+ size_t outer = 1;\n+ size_t depth = 1;\n+ for (int32_t i = 0; i < rank; i++) {\n+ if (i < axis)\n+ inner *= shape.Dims(i);\n+ else if (i > axis)\n+ outer *= shape.Dims(i);\n+ else\n+ depth = shape.Dims(i);\n+ }\n+\n+ for (size_t outer_index = 0; outer_index < outer; outer_index++) {\n+ size_t outer_index_adj;\n+ if (reverse)\n+ outer_index_adj = (outer - 1) - outer_index;\n+ else\n+ outer_index_adj = outer_index;\n+ for (size_t inner_index = 0; inner_index < inner; inner_index++) {\n+ int32_t accumulator = params.input1_offset; // accumulator = 0\n+ accumulator *= (1 << params.left_shift);\n+ accumulator = MultiplyByQuantizedMultiplierSmallerThanOneExp(\n+ accumulator, params.input1_multiplier, params.input1_shift);\n+\n+ size_t inner_index_adj;\n+ if (reverse)\n+ inner_index_adj = (inner - 1) - inner_index;\n+ else\n+ inner_index_adj = inner_index;\n+\n+ for (size_t depth_index = 0; depth_index < depth; depth_index++) {\n+ size_t depth_index_adj;\n+ if (reverse)\n+ depth_index_adj = (depth - 1) - depth_index;\n+ else\n+ depth_index_adj = depth_index;\n+\n+ size_t index = outer_index_adj;\n+ index += inner_index_adj * depth * outer;\n+ index += depth_index_adj * outer;\n+\n+ const int32_t y = params.input1_offset + input_data[index];\n+ const int32_t shifted_y = y * (1 << params.left_shift);\n+ const int32_t scaled_y = MultiplyByQuantizedMultiplierSmallerThanOneExp(\n+ shifted_y, params.input1_multiplier, params.input1_shift);\n+\n+ int32_t scaled_output;\n+ if (exclusive) {\n+ scaled_output = accumulator;\n+ accumulator += scaled_y;\n+ } else {\n+ accumulator += scaled_y;\n+ scaled_output = accumulator;\n+ }\n+\n+ const int32_t raw_output =\n+ MultiplyByQuantizedMultiplierSmallerThanOneExp(\n+ scaled_output, params.output_multiplier, params.output_shift) +\n+ params.output_offset;\n+ const int32_t clamped_output =\n+ std::min(params.quantized_activation_max,\n+ std::max(params.quantized_activation_min, raw_output));\n+ output_data[index] = static_cast<int8_t>(clamped_output);\n+ }\n+ }\n+ }\n+}\n+\n } // namespace reference_ops\n } // namespace 
tflite\n ",
"filename": "tensorflow/lite/kernels/internal/reference/cumsum.h",
"status": "modified"
},
{
"diff": "@@ -16,16 +16,32 @@ limitations under the License.\n #include \"tensorflow/lite/kernels/internal/reference/cumsum.h\"\n \n #include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/kernels/internal/quantization_util.h\"\n #include \"tensorflow/lite/kernels/internal/types.h\"\n #include \"tensorflow/lite/kernels/kernel_util.h\"\n #include \"tensorflow/lite/micro/kernels/kernel_util.h\"\n \n namespace tflite {\n namespace {\n \n-static const int kInputTensor = 0;\n-static const int kAxisTensor = 1;\n-static const int kOutputTensor = 0;\n+constexpr int kInputTensor = 0;\n+constexpr int kAxisTensor = 1;\n+constexpr int kOutputTensor = 0;\n+\n+constexpr int kCumSumIntegerShift = 20;\n+\n+// only used with INT8 tensors\n+struct OpData {\n+ int32_t output_activation_min;\n+ int32_t output_activation_max;\n+ int32_t input_offset;\n+ int32_t output_offset;\n+ int32_t input_multiplier;\n+ int32_t output_multiplier;\n+ int input_shift;\n+ int output_shift;\n+ int left_shift;\n+};\n \n TfLiteStatus CalculateOpData(TfLiteContext* context, TfLiteNode* node) {\n TF_LITE_ENSURE_EQ(context, NumInputs(node), 2);\n@@ -34,7 +50,8 @@ TfLiteStatus CalculateOpData(TfLiteContext* context, TfLiteNode* node) {\n const TfLiteTensor* input = GetInput(context, node, kInputTensor);\n const TfLiteTensor* axis = GetInput(context, node, kAxisTensor);\n \n- TF_LITE_ENSURE(context, input->type == kTfLiteFloat32);\n+ TF_LITE_ENSURE(context,\n+ input->type == kTfLiteFloat32 || input->type == kTfLiteInt8);\n TF_LITE_ENSURE_EQ(context, axis->type, kTfLiteInt32);\n \n TF_LITE_ENSURE_EQ(context, NumElements(axis), 1);\n@@ -46,6 +63,34 @@ TfLiteStatus CalculateOpData(TfLiteContext* context, TfLiteNode* node) {\n TF_LITE_ENSURE_EQ(context, input->type, output->type);\n TF_LITE_ENSURE(context, HaveSameShapes(input, output));\n \n+ if (output->type == kTfLiteInt8) {\n+ node->user_data =\n+ context->AllocatePersistentBuffer(context, sizeof(OpData));\n+ OpData* data = static_cast<OpData*>(node->user_data);\n+\n+ // 8bit -> 8bit general quantized path, with general rescalings\n+ data->input_offset = -input->params.zero_point;\n+ data->output_offset = output->params.zero_point;\n+ data->left_shift = kCumSumIntegerShift;\n+ const double twice_max_input_scale =\n+ 2 * static_cast<double>(input->params.scale);\n+ const double real_input_multiplier =\n+ static_cast<double>(input->params.scale) / twice_max_input_scale;\n+ const double real_output_multiplier =\n+ twice_max_input_scale /\n+ ((1 << data->left_shift) * static_cast<double>(output->params.scale));\n+\n+ QuantizeMultiplierSmallerThanOneExp(\n+ real_input_multiplier, &data->input_multiplier, &data->input_shift);\n+\n+ QuantizeMultiplierSmallerThanOneExp(\n+ real_output_multiplier, &data->output_multiplier, &data->output_shift);\n+\n+ TF_LITE_ENSURE_STATUS(CalculateActivationRangeQuantized(\n+ context, kTfLiteActNone, output, &data->output_activation_min,\n+ &data->output_activation_max));\n+ }\n+\n return kTfLiteOk;\n }\n \n@@ -62,7 +107,7 @@ TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n TfLiteEvalTensor* output =\n tflite::micro::GetEvalOutput(context, node, kOutputTensor);\n \n- auto* params = static_cast<TfLiteCumsumParams*>(node->builtin_data);\n+ auto* cs_params = static_cast<TfLiteCumsumParams*>(node->builtin_data);\n auto input_shape = tflite::micro::GetTensorShape(input);\n \n int32_t axis = *tflite::micro::GetTensorData<int32_t>(axis_tensor);\n@@ -76,14 +121,35 @@ TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n switch 
(input->type) {\n case kTfLiteFloat32: {\n reference_ops::CumSum(tflite::micro::GetTensorData<float>(input),\n- input_shape, axis, params->exclusive,\n- params->reverse,\n+ input_shape, axis, cs_params->exclusive,\n+ cs_params->reverse,\n tflite::micro::GetTensorData<float>(output));\n return kTfLiteOk;\n } break;\n+\n+ case kTfLiteInt8: {\n+ auto* data = static_cast<OpData*>(node->user_data);\n+ ArithmeticParams params;\n+ params.left_shift = data->left_shift;\n+ params.input1_offset = data->input_offset;\n+ params.input1_multiplier = data->input_multiplier;\n+ params.input1_shift = data->input_shift;\n+ params.output_offset = data->output_offset;\n+ params.output_multiplier = data->output_multiplier;\n+ params.output_shift = data->output_shift;\n+ SetActivationParams(data->output_activation_min,\n+ data->output_activation_max, ¶ms);\n+ reference_ops::CumSum(params, tflite::micro::GetTensorData<int8_t>(input),\n+ input_shape, axis, cs_params->exclusive,\n+ cs_params->reverse,\n+ tflite::micro::GetTensorData<int8_t>(output));\n+ return kTfLiteOk;\n+ } break;\n+\n default: {\n- TF_LITE_KERNEL_LOG(\n- context, \"Unsupported input type, CUMSUM only supports FLOAT32.\");\n+ TF_LITE_KERNEL_LOG(context,\n+ \"CUMSUM only supports FLOAT32 and INT8, got %s.\",\n+ TfLiteTypeGetName(output->type));\n return kTfLiteError;\n }\n }",
"filename": "tensorflow/lite/micro/kernels/cumsum.cc",
"status": "modified"
},
{
"diff": "@@ -77,6 +77,59 @@ void TestCumSum(const CumSumTestParams& test_params, const int* input_dims_data,\n }\n }\n \n+// min/max are used to compute scale, zero-point, compare tolerance\n+template <typename T, int kOutputSize>\n+struct TestQuantParams {\n+ float data_min; // input and output data minimum value\n+ float data_max; // input and output data maximum value\n+ T input_data[kOutputSize]; // quantized input storage\n+ T output_data[kOutputSize]; // quantized output storage\n+};\n+\n+// for quantized int, the error shouldn't exceed step\n+template <typename T>\n+float GetTolerance(float min, float max) {\n+ float kQuantizedStep =\n+ 2.0f * (max - min) /\n+ (std::numeric_limits<T>::max() - std::numeric_limits<T>::min());\n+ return kQuantizedStep;\n+}\n+\n+template <typename T, int kOutputSize>\n+void TestCumSumQuantized(const CumSumTestParams& test_params,\n+ TestQuantParams<T, kOutputSize>* params,\n+ const int* input_dims_data, const float* input_data,\n+ const int* expected_dims, const float* expected_data,\n+ float* output_data) {\n+ TfLiteIntArray* input_dims = IntArrayFromInts(input_dims_data);\n+ TfLiteIntArray* output_dims = IntArrayFromInts(expected_dims);\n+\n+ constexpr int axis_dims_data[] = {1, 1};\n+ TfLiteIntArray* axis_dims = IntArrayFromInts(axis_dims_data);\n+ const int32_t axis_data[] = {test_params.axis};\n+\n+ const float scale = ScaleFromMinMax<T>(params->data_min, params->data_max);\n+ const int zero_point =\n+ ZeroPointFromMinMax<T>(params->data_min, params->data_max);\n+\n+ TfLiteTensor tensors[] = {\n+ CreateQuantizedTensor(input_data, params->input_data, input_dims, scale,\n+ zero_point),\n+ CreateTensor(axis_data, axis_dims),\n+ CreateQuantizedTensor(params->output_data, output_dims, scale,\n+ zero_point),\n+ };\n+\n+ constexpr int tensors_count = std::extent<decltype(tensors)>::value;\n+ ExecuteCumSumTest(test_params, tensors, tensors_count);\n+\n+ Dequantize(params->output_data, kOutputSize, scale, zero_point, output_data);\n+ const float kTolerance = GetTolerance<T>(params->data_min, params->data_max);\n+ for (int i = 0; i < kOutputSize; i++) {\n+ TF_LITE_MICRO_EXPECT_NEAR(expected_data[i], output_data[i], kTolerance);\n+ }\n+}\n+\n } // namespace\n } // namespace testing\n } // namespace tflite\n@@ -177,4 +230,121 @@ TF_LITE_MICRO_TEST(CumSumOpTestSimpleReverseExclusiveTest) {\n output_data);\n }\n \n+TF_LITE_MICRO_TEST(CumSumOpTestSimpleTestInt8) {\n+ constexpr int kDims[] = {2, 2, 4};\n+ constexpr float kInput[] = {1, 2, 3, 4, 5, 6, 7, 8};\n+ constexpr float kExpect[] = {1, 3, 6, 10, 5, 11, 18, 26};\n+\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ tflite::testing::CumSumTestParams test_params;\n+ test_params.axis = 1;\n+\n+ tflite::testing::TestQuantParams<int8_t, kOutputCount> params = {};\n+ params.data_min = -26.0f;\n+ params.data_max = 26.0f;\n+\n+ tflite::testing::TestCumSumQuantized<int8_t, kOutputCount>(\n+ test_params, ¶ms, kDims, kInput, kDims, kExpect, output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(CumSumOpTestSimpleAxis0TestInt8) {\n+ constexpr int kDims[] = {2, 2, 4};\n+ constexpr float kInput[] = {1, 2, 3, 4, 5, 6, 7, 8};\n+ constexpr float kExpect[] = {1, 2, 3, 4, 6, 8, 10, 12};\n+\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ tflite::testing::CumSumTestParams test_params;\n+ test_params.axis = 0;\n+\n+ tflite::testing::TestQuantParams<int8_t, kOutputCount> params = {};\n+ params.data_min = -12.0f;\n+ 
params.data_max = 12.0f;\n+\n+ tflite::testing::TestCumSumQuantized<int8_t, kOutputCount>(\n+ test_params, ¶ms, kDims, kInput, kDims, kExpect, output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(CumSumOpTestSimple1DTestInt8) {\n+ constexpr int kDims[] = {1, 8};\n+ constexpr float kInput[] = {1, 2, 3, 4, 5, 6, 7, 8};\n+ constexpr float kExpect[] = {1, 3, 6, 10, 15, 21, 28, 36};\n+\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ tflite::testing::CumSumTestParams test_params;\n+ test_params.axis = 0;\n+\n+ tflite::testing::TestQuantParams<int8_t, kOutputCount> params = {};\n+ params.data_min = -36.0f;\n+ params.data_max = 36.0f;\n+\n+ tflite::testing::TestCumSumQuantized<int8_t, kOutputCount>(\n+ test_params, ¶ms, kDims, kInput, kDims, kExpect, output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(CumSumOpTestSimpleReverseTestInt8) {\n+ constexpr int kDims[] = {2, 2, 4};\n+ constexpr float kInput[] = {1, 2, 3, 4, 5, 6, 7, 8};\n+ constexpr float kExpect[] = {10, 9, 7, 4, 26, 21, 15, 8};\n+\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ tflite::testing::CumSumTestParams test_params;\n+ test_params.axis = 1;\n+ test_params.reverse = true;\n+\n+ tflite::testing::TestQuantParams<int8_t, kOutputCount> params = {};\n+ params.data_min = -26.0f;\n+ params.data_max = 26.0f;\n+\n+ tflite::testing::TestCumSumQuantized<int8_t, kOutputCount>(\n+ test_params, ¶ms, kDims, kInput, kDims, kExpect, output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(CumSumOpTestSimpleExclusiveTestInt8) {\n+ constexpr int kDims[] = {2, 2, 4};\n+ constexpr float kInput[] = {1, 2, 3, 4, 5, 6, 7, 8};\n+ constexpr float kExpect[] = {0, 1, 3, 6, 0, 5, 11, 18};\n+\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ tflite::testing::CumSumTestParams test_params;\n+ test_params.axis = 1;\n+ test_params.exclusive = true;\n+\n+ tflite::testing::TestQuantParams<int8_t, kOutputCount> params = {};\n+ params.data_min = -18.0f;\n+ params.data_max = 18.0f;\n+\n+ tflite::testing::TestCumSumQuantized<int8_t, kOutputCount>(\n+ test_params, ¶ms, kDims, kInput, kDims, kExpect, output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(CumSumOpTestSimpleReverseExclusiveTestInt8) {\n+ constexpr int kDims[] = {2, 2, 4};\n+ constexpr float kInput[] = {1, 2, 3, 4, 5, 6, 7, 8};\n+ constexpr float kExpect[] = {9, 7, 4, 0, 21, 15, 8, 0};\n+\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ tflite::testing::CumSumTestParams test_params;\n+ test_params.axis = -1;\n+ test_params.exclusive = true;\n+ test_params.reverse = true;\n+\n+ tflite::testing::TestQuantParams<int8_t, kOutputCount> params = {};\n+ params.data_min = -21.0f;\n+ params.data_max = 21.0f;\n+\n+ tflite::testing::TestCumSumQuantized<int8_t, kOutputCount>(\n+ test_params, ¶ms, kDims, kInput, kDims, kExpect, output_data);\n+}\n TF_LITE_MICRO_TESTS_END",
"filename": "tensorflow/lite/micro/kernels/cumsum_test.cc",
"status": "modified"
}
]
}
|
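The quantized CUMSUM path added above removes the input zero point, rescales each element with fixed-point multiplier/shift pairs computed during Prepare, accumulates in int32, then requantizes, adds the output offset, and clamps to the int8 range. A simplified 1-D sketch of that flow follows; floating-point scales stand in for the kernel's `left_shift`/`QuantizeMultiplierSmallerThanOneExp` fixed-point arithmetic, so rounding may differ slightly from the reference kernel:

```python
import numpy as np

def cumsum_int8_sketch(x_q, in_scale, in_zp, out_scale, out_zp,
                       exclusive=False, reverse=False):
    x = x_q.astype(np.int32) - in_zp                    # remove input zero point
    if reverse:
        x = x[::-1]
    acc = np.cumsum(x)                                  # accumulate in int32
    if exclusive:
        acc = np.concatenate(([0], acc[:-1]))           # shift: output before adding current
    if reverse:
        acc = acc[::-1]
    y = np.round(acc * (in_scale / out_scale)) + out_zp  # requantize to output scale
    return np.clip(y, -128, 127).astype(np.int8)         # clamp to int8 range
```

Dequantizing the result and comparing against the float reference is exactly what the `TestCumSumQuantized` helper above does, with a tolerance derived from the data range.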
{
"body": "**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10\r\n- TensorFlow installed from (source or binary): binary\r\n- TensorFlow version (use command below): 2.3.0\r\n- Python version: 3.6.8\r\n\r\n\r\n**Describe the current behavior**\r\nSome callbacks (e.g. ProgbarLogger, ModelCheckpoint, ...) have the flag `self._supports_tf_logs = True`. If other callbacks (especially custom Callback) don't have this property, then those callbacks do not have acces to the same logs. \r\nIn the code example below, `ModelCheckpoint` can not use the `'val_log_loss'` as a monitor value from the `CustomMetric` callback.\r\nThis results from the commit https://github.com/tensorflow/tensorflow/commit/50480faea75f56def464b84f251b4aee388dfce9 where a new `numpy_logs` property has been introduced, without making sure to sync it with the pre-existing `logs` property.\r\n\r\n**Describe the expected behavior**\r\nThe two propertys `numpy_logs` and `logs` should contain the same information OR it should be made clear in the docs (https://www.tensorflow.org/guide/keras/custom_callback#keras_callbacks_overview) what `_supports_tf_logs` does and that there could be compatibility issues.\r\n\r\n**Standalone code to reproduce the issue**\r\n```\r\n...\r\nfrom tensorflow.keras.callbacks import Callback, ModelCheckpoint\r\n...\r\n\r\nclass CustomMetric(Callback):\r\n def __init__(self, x_valid, y_valid):\r\n super().__init__()\r\n self.x_valid = x_valid\r\n self.y_valid = y_valid\r\n\r\n def on_epoch_end(self, epoch, logs=None):\r\n y_pred = self.model.predict(self.x_valid, batch_size=BATCHSIZE)\r\n\r\n logs['val_log_loss'] = metrics.log_loss(self.y_valid, y_pred)\r\n\r\n...\r\n\r\nmodel.fit(\r\n x_train,\r\n y_train,\r\n validation_data=(x_valid, y_valid),\r\n shuffle=True,\r\n batch_size=BATCHSIZE,\r\n epochs=EPOCHS,\r\n verbose=1,\r\n callbacks=[CustomMetric(x_valid, y_valid), ModelCheckpoint('test.h5', 'val_log_loss', verbose=1, save_best_only=True, mode='min')]\r\n )\r\n\r\n...\r\n```\r\n\r\n**Other info / logs** \r\nSee commit https://github.com/tensorflow/tensorflow/commit/50480faea75f56def464b84f251b4aee388dfce9",
"comments": [
{
"body": "@albert-92 Can you please provide a standalone code to reproduce the issue? Thanks!",
"created_at": "2020-07-29T23:57:26Z"
},
{
"body": "@jvishnuvardhan Sure. Here's a standalone code to reproduce the issue:\r\n\r\n```\r\nfrom __future__ import print_function\r\n\r\nfrom tensorflow.keras.datasets import mnist\r\nfrom tensorflow.keras.models import Sequential\r\nfrom tensorflow.keras.layers import Dense\r\nfrom tensorflow.keras.optimizers import RMSprop\r\nfrom tensorflow.keras.callbacks import Callback, ModelCheckpoint, History\r\nfrom tensorflow.keras import utils\r\nfrom sklearn import metrics\r\n\r\nbatch_size = 128\r\nnum_classes = 10\r\nepochs = 2\r\n\r\n# Custom callback, where the logs are actually the numpy_logs object \r\n# if the flag self._supports_tf_logs is not set to True\r\nclass CustomMetric(Callback):\r\n def __init__(self, x_valid, y_valid):\r\n super().__init__()\r\n self.x_valid = x_valid\r\n self.y_valid = y_valid\r\n\r\n def on_epoch_end(self, epoch, logs=None):\r\n y_pred = self.model.predict(self.x_valid, batch_size=batch_size)\r\n\r\n logs['val_log_loss'] = metrics.log_loss(self.y_valid, y_pred)\r\n\r\n\r\n# the data, split between train and test sets\r\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\r\n\r\nx_train = x_train.reshape(60000, 784).astype('float32') / 255.\r\nx_test = x_test.reshape(10000, 784).astype('float32') / 255.\r\n\r\n# convert class vectors to binary class matrices\r\ny_train = utils.to_categorical(y_train, num_classes)\r\ny_test = utils.to_categorical(y_test, num_classes)\r\n\r\nmodel = Sequential()\r\nmodel.add(Dense(64, activation='relu', input_shape=(784,)))\r\nmodel.add(Dense(32, activation='relu'))\r\nmodel.add(Dense(num_classes, activation='softmax'))\r\n\r\nmodel.summary()\r\n\r\nmodel.compile(loss='categorical_crossentropy',\r\n optimizer=RMSprop(),\r\n metrics=['accuracy'])\r\n\r\n# The following part works partly as intended.\r\n# history.history contains the key 'val_log_loss' even though it is not printed by the ProgbarLogger\r\n# (since ProgbarLogger uses logs and CustomMetric numpy_logs)\r\nhistory = model.fit(x_train, y_train,\r\n batch_size=batch_size,\r\n epochs=epochs,\r\n verbose=1,\r\n validation_data=(x_test, y_test),\r\n callbacks=[\r\n CustomMetric(x_test, y_test)\r\n ])\r\n\r\nprint(history.history)\r\n\r\n# This following part does not work as intented.\r\n# ModelCheckpoint outputs the warning\r\n# \"WARNING:tensorflow:Can save best model only with val_log_loss available, skipping.\"\r\n# because 'val_log_loss' is in the numpy_logs object and ModelCheckpoint uses the logs object\r\nmodel.fit(x_train, y_train,\r\n batch_size=batch_size,\r\n epochs=epochs,\r\n verbose=1,\r\n validation_data=(x_test, y_test),\r\n callbacks=[\r\n CustomMetric(x_test, y_test),\r\n ModelCheckpoint('test.h5', monitor='val_log_loss', verbose=1, save_best_only=True, mode='min')\r\n ])\r\n\r\n```",
"created_at": "2020-07-30T07:08:46Z"
},
{
"body": "I have tried in colab with TF version 2.3, nightly version(`2.4.0-dev20200729`) and was able to reproduce the issue.Please, find the gist [here](https://colab.research.google.com/gist/ravikyram/87ab302844f49a73e3c53b032a8565b8/untitled200.ipynb).Thanks!",
"created_at": "2020-07-30T08:11:47Z"
},
{
"body": "Facing the same issue when moved from 2.2 to 2.3",
"created_at": "2020-09-28T20:25:53Z"
},
{
"body": "@reedwm @omalleyt12 this issue is affecting our Keras callbacks in Horovod as well. As reported in https://github.com/horovod/horovod/issues/2440, when using `MetricAverageCallback` to average metrics across workers, the history is correctly reporting averages, but the logs are not. When setting `callback._supports_tf_logs = True` we get the exact opposite behavior: logs are correctly averaged but history is not. \r\n\r\nCan someone from your team help in providing a fix / workaround for this?\r\n\r\nHere's a standalone script using Horovod that repros the issue:\r\n\r\n```\r\n import tensorflow as tf\r\n from tensorflow import keras\r\n import horovod.tensorflow.keras as hvd\r\n\r\n hvd.init()\r\n\r\n opt = tf.keras.optimizers.Adam(0.01)\r\n opt = hvd.DistributedOptimizer(opt)\r\n\r\n def test_metric(y_true, y_pred):\r\n return hvd.rank()\r\n\r\n model = keras.models.Sequential()\r\n model.add(keras.layers.Dense(2, input_shape=(3,)))\r\n model.compile(loss=keras.losses.mean_squared_error,\r\n optimizer=opt,\r\n metrics=[test_metric],\r\n experimental_run_tf_function=False)\r\n\r\n x = np.random.random((1, 3))\r\n y = np.random.random((1, 3, 2))\r\n\r\n callbacks = [\r\n hvd.callbacks.BroadcastGlobalVariablesCallback(0),\r\n hvd.callbacks.MetricAverageCallback(),\r\n ]\r\n\r\n train_history = model.fit(\r\n x,\r\n y,\r\n steps_per_epoch=10,\r\n callbacks=callbacks,\r\n epochs=1\r\n )\r\n\r\n expected = sum(range(hvd.size())) / hvd.size()\r\n results = train_history.history.get('test_metric')\r\n assert results[0] == expected\r\n```",
"created_at": "2020-11-23T17:50:24Z"
},
{
"body": "/CC @fchollet",
"created_at": "2020-12-01T22:36:30Z"
},
{
"body": "I made a PR to fix this: https://github.com/tensorflow/tensorflow/pull/47922",
"created_at": "2021-03-19T16:40:54Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/41851\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/41851\">No</a>\n",
"created_at": "2021-04-05T15:00:51Z"
},
{
"body": "Thanks for the PR @lgeiger \r\nThis doesn't seem to fix the progress bar values on the example provided by tgaddair on tf2.5. Do you have any recommendation for that case?\r\nThank you",
"created_at": "2021-07-21T12:35:10Z"
}
],
"number": 41851,
"title": "Keras Callbacks logs / numpy_logs not in sync"
}
|
{
"body": "This PR cherry-picks #47922 onto the `r2.5` branch which addresses Keras callback issues reported in #41851 and #45895.\r\nWould be great if this fix could make it into the 2.5.",
"number": 48315,
"review_comments": [],
"title": "[r2.5 cherry-pick] Fix Keras Callbacks logs / numpy_logs sync"
}
|
{
"commits": [
{
"message": "Fix Keras Callbacks logs sync"
},
{
"message": "Only convert logs if batch hooks do not support TF logs\n\nThis is a small performance optimization that prevents conversion if not\nnecessary."
}
],
"files": [
{
"diff": "@@ -234,6 +234,15 @@ def __init__(self,\n \n # Performance optimization: determines if batch hooks need to be called.\n # pylint: disable=protected-access\n+ self._supports_tf_logs = all(\n+ getattr(cb, '_supports_tf_logs', False) for cb in self.callbacks)\n+ self._batch_hooks_support_tf_logs = all(\n+ getattr(cb, '_supports_tf_logs', False)\n+ for cb in self.callbacks\n+ if cb._implements_train_batch_hooks()\n+ or cb._implements_test_batch_hooks()\n+ or cb._implements_predict_batch_hooks())\n+\n self._should_call_train_batch_hooks = any(\n cb._implements_train_batch_hooks() for cb in self.callbacks)\n self._should_call_test_batch_hooks = any(\n@@ -272,6 +281,16 @@ def _add_default_callbacks(self, add_history, add_progbar):\n self._history = History()\n self.callbacks.append(self._history)\n \n+ def _process_logs(self, logs, is_batch_hook=False):\n+ \"\"\"Turns tensors into numpy arrays or Python scalars if necessary.\"\"\"\n+ if logs is None:\n+ return {}\n+ if self._supports_tf_logs:\n+ return logs\n+ if is_batch_hook and self._batch_hooks_support_tf_logs:\n+ return logs\n+ return tf_utils.sync_to_numpy_or_python_type(logs)\n+\n def append(self, callback):\n self.callbacks.append(callback)\n \n@@ -347,19 +366,13 @@ def _call_batch_end_hook(self, mode, batch, logs):\n \n def _call_batch_hook_helper(self, hook_name, batch, logs):\n \"\"\"Helper function for `on_*_batch_*` methods.\"\"\"\n- logs = logs or {}\n- numpy_logs = None\n if self._check_timing:\n start_time = time.time()\n \n+ logs = self._process_logs(logs, is_batch_hook=True)\n for callback in self.callbacks:\n hook = getattr(callback, hook_name)\n- if getattr(callback, '_supports_tf_logs', False):\n- hook(batch, logs)\n- else:\n- if numpy_logs is None: # Only convert once.\n- numpy_logs = tf_utils.sync_to_numpy_or_python_type(logs)\n- hook(batch, numpy_logs)\n+ hook(batch, logs)\n \n if self._check_timing:\n if hook_name not in self._hook_times:\n@@ -402,15 +415,9 @@ def on_epoch_begin(self, epoch, logs=None):\n logs: Dict. Currently no data is passed to this argument for this method\n but that may change in the future.\n \"\"\"\n- logs = logs or {}\n- numpy_logs = None\n+ logs = self._process_logs(logs)\n for callback in self.callbacks:\n- if getattr(callback, '_supports_tf_logs', False):\n- callback.on_epoch_begin(epoch, logs)\n- else:\n- if numpy_logs is None: # Only convert once.\n- numpy_logs = tf_utils.sync_to_numpy_or_python_type(logs)\n- callback.on_epoch_begin(epoch, numpy_logs)\n+ callback.on_epoch_begin(epoch, logs)\n \n def on_epoch_end(self, epoch, logs=None):\n \"\"\"Calls the `on_epoch_end` methods of its callbacks.\n@@ -423,15 +430,9 @@ def on_epoch_end(self, epoch, logs=None):\n validation epoch if validation is performed. Validation result keys\n are prefixed with `val_`.\n \"\"\"\n- logs = logs or {}\n- numpy_logs = None\n+ logs = self._process_logs(logs)\n for callback in self.callbacks:\n- if getattr(callback, '_supports_tf_logs', False):\n- callback.on_epoch_end(epoch, logs)\n- else:\n- if numpy_logs is None: # Only convert once.\n- numpy_logs = tf_utils.sync_to_numpy_or_python_type(logs)\n- callback.on_epoch_end(epoch, numpy_logs)\n+ callback.on_epoch_end(epoch, logs)\n \n def on_train_batch_begin(self, batch, logs=None):\n \"\"\"Calls the `on_train_batch_begin` methods of its callbacks.\n@@ -506,15 +507,9 @@ def on_train_begin(self, logs=None):\n logs: Dict. 
Currently no data is passed to this argument for this method\n but that may change in the future.\n \"\"\"\n- logs = logs or {}\n- numpy_logs = None\n+ logs = self._process_logs(logs)\n for callback in self.callbacks:\n- if getattr(callback, '_supports_tf_logs', False):\n- callback.on_train_begin(logs)\n- else:\n- if numpy_logs is None: # Only convert once.\n- numpy_logs = tf_utils.sync_to_numpy_or_python_type(logs)\n- callback.on_train_begin(numpy_logs)\n+ callback.on_train_begin(logs)\n \n def on_train_end(self, logs=None):\n \"\"\"Calls the `on_train_end` methods of its callbacks.\n@@ -523,15 +518,9 @@ def on_train_end(self, logs=None):\n logs: Dict. Currently no data is passed to this argument for this method\n but that may change in the future.\n \"\"\"\n- logs = logs or {}\n- numpy_logs = None\n+ logs = self._process_logs(logs)\n for callback in self.callbacks:\n- if getattr(callback, '_supports_tf_logs', False):\n- callback.on_train_end(logs)\n- else:\n- if numpy_logs is None: # Only convert once.\n- numpy_logs = tf_utils.sync_to_numpy_or_python_type(logs)\n- callback.on_train_end(numpy_logs)\n+ callback.on_train_end(logs)\n \n def on_test_begin(self, logs=None):\n \"\"\"Calls the `on_test_begin` methods of its callbacks.\n@@ -540,15 +529,9 @@ def on_test_begin(self, logs=None):\n logs: Dict. Currently no data is passed to this argument for this method\n but that may change in the future.\n \"\"\"\n- logs = logs or {}\n- numpy_logs = None\n+ logs = self._process_logs(logs)\n for callback in self.callbacks:\n- if getattr(callback, '_supports_tf_logs', False):\n- callback.on_test_begin(logs)\n- else:\n- if numpy_logs is None: # Only convert once.\n- numpy_logs = tf_utils.sync_to_numpy_or_python_type(logs)\n- callback.on_test_begin(numpy_logs)\n+ callback.on_test_begin(logs)\n \n def on_test_end(self, logs=None):\n \"\"\"Calls the `on_test_end` methods of its callbacks.\n@@ -557,15 +540,9 @@ def on_test_end(self, logs=None):\n logs: Dict. Currently no data is passed to this argument for this method\n but that may change in the future.\n \"\"\"\n- logs = logs or {}\n- numpy_logs = None\n+ logs = self._process_logs(logs)\n for callback in self.callbacks:\n- if getattr(callback, '_supports_tf_logs', False):\n- callback.on_test_end(logs)\n- else:\n- if numpy_logs is None: # Only convert once.\n- numpy_logs = tf_utils.sync_to_numpy_or_python_type(logs)\n- callback.on_test_end(numpy_logs)\n+ callback.on_test_end(logs)\n \n def on_predict_begin(self, logs=None):\n \"\"\"Calls the 'on_predict_begin` methods of its callbacks.\n@@ -574,15 +551,9 @@ def on_predict_begin(self, logs=None):\n logs: Dict. Currently no data is passed to this argument for this method\n but that may change in the future.\n \"\"\"\n- logs = logs or {}\n- numpy_logs = None\n+ logs = self._process_logs(logs)\n for callback in self.callbacks:\n- if getattr(callback, '_supports_tf_logs', False):\n- callback.on_predict_begin(logs)\n- else:\n- if numpy_logs is None: # Only convert once.\n- numpy_logs = tf_utils.sync_to_numpy_or_python_type(logs)\n- callback.on_predict_begin(numpy_logs)\n+ callback.on_predict_begin(logs)\n \n def on_predict_end(self, logs=None):\n \"\"\"Calls the `on_predict_end` methods of its callbacks.\n@@ -591,15 +562,9 @@ def on_predict_end(self, logs=None):\n logs: Dict. 
Currently no data is passed to this argument for this method\n but that may change in the future.\n \"\"\"\n- logs = logs or {}\n- numpy_logs = None\n+ logs = self._process_logs(logs)\n for callback in self.callbacks:\n- if getattr(callback, '_supports_tf_logs', False):\n- callback.on_predict_end(logs)\n- else:\n- if numpy_logs is None: # Only convert once.\n- numpy_logs = tf_utils.sync_to_numpy_or_python_type(logs)\n- callback.on_predict_end(numpy_logs)\n+ callback.on_predict_end(logs)\n \n def __iter__(self):\n return iter(self.callbacks)",
"filename": "tensorflow/python/keras/callbacks.py",
"status": "modified"
},
{
"diff": "@@ -76,6 +76,13 @@\n NUM_HIDDEN = 5\n BATCH_SIZE = 5\n \n+CALLBACK_HOOKS = [\n+ 'on_batch_begin', 'on_batch_end', 'on_epoch_begin', 'on_epoch_end',\n+ 'on_predict_batch_begin', 'on_predict_batch_end', 'on_predict_begin',\n+ 'on_predict_end', 'on_test_batch_begin', 'on_test_batch_end',\n+ 'on_test_begin', 'on_test_end', 'on_train_batch_begin',\n+ 'on_train_batch_end', 'on_train_begin', 'on_train_end'\n+]\n \n class Counter(keras.callbacks.Callback):\n \"\"\"Counts the number of times each callback method was run.\n@@ -87,14 +94,7 @@ class Counter(keras.callbacks.Callback):\n \n def __init__(self):\n self.method_counts = collections.defaultdict(int)\n- methods_to_count = [\n- 'on_batch_begin', 'on_batch_end', 'on_epoch_begin', 'on_epoch_end',\n- 'on_predict_batch_begin', 'on_predict_batch_end', 'on_predict_begin',\n- 'on_predict_end', 'on_test_batch_begin', 'on_test_batch_end',\n- 'on_test_begin', 'on_test_end', 'on_train_batch_begin',\n- 'on_train_batch_end', 'on_train_begin', 'on_train_end'\n- ]\n- for method_name in methods_to_count:\n+ for method_name in CALLBACK_HOOKS:\n setattr(self, method_name,\n self.wrap_with_counts(method_name, getattr(self, method_name)))\n \n@@ -107,6 +107,17 @@ def _call_and_count(*args, **kwargs):\n return _call_and_count\n \n \n+class CallAllHooks(keras.callbacks.Callback):\n+ \"\"\"A callback that calls self._run for all hooks\"\"\"\n+\n+ def __init__(self):\n+ for method_name in CALLBACK_HOOKS:\n+ setattr(self, method_name, self._run)\n+\n+ def _run(self, *args, logs=None):\n+ raise NotImplementedError\n+\n+\n def _get_numpy():\n return np.ones((10, 10)), np.ones((10, 1))\n \n@@ -1683,6 +1694,12 @@ def on_test_batch_end(self, batch, logs=None):\n def on_predict_batch_end(self, batch, logs=None):\n self.predict_batches += 1\n \n+ class MyCallbackWithTFBatchHooks(keras.callbacks.Callback):\n+\n+ def __init__(self):\n+ super(MyCallbackWithTFBatchHooks, self).__init__()\n+ self._supports_tf_logs = True\n+\n class MyCallbackWithoutBatchHooks(keras.callbacks.Callback):\n \n def __init__(self):\n@@ -1700,6 +1717,7 @@ def on_epoch_end(self, epoch, logs=None):\n self.assertTrue(cb_list._should_call_train_batch_hooks)\n self.assertTrue(cb_list._should_call_test_batch_hooks)\n self.assertTrue(cb_list._should_call_predict_batch_hooks)\n+ self.assertFalse(cb_list._batch_hooks_support_tf_logs)\n \n model.fit(x, y, epochs=2, batch_size=10, callbacks=[my_cb], verbose=0)\n model.evaluate(x, y, batch_size=10, callbacks=[my_cb], verbose=0)\n@@ -1709,6 +1727,10 @@ def on_epoch_end(self, epoch, logs=None):\n self.assertEqual(my_cb.test_batches, 1)\n self.assertEqual(my_cb.predict_batches, 1)\n \n+ my_cb = MyCallbackWithTFBatchHooks()\n+ cb_list = keras.callbacks.CallbackList([my_cb], verbose=0)\n+ self.assertTrue(cb_list._batch_hooks_support_tf_logs)\n+\n my_cb = MyCallbackWithoutBatchHooks()\n cb_list = keras.callbacks.CallbackList([my_cb], verbose=0)\n self.assertLen(cb_list.callbacks, 1)\n@@ -1720,6 +1742,56 @@ def on_epoch_end(self, epoch, logs=None):\n model.evaluate(x, y, batch_size=10, callbacks=[my_cb], verbose=0)\n model.predict(x, batch_size=10, callbacks=[my_cb], verbose=0)\n \n+ @keras_parameterized.run_all_keras_modes(always_skip_v1=True)\n+ def test_logs_conversion(self):\n+ assert_dict_equal = self.assertDictEqual\n+\n+ class MutateNumpyLogs(CallAllHooks):\n+ def _run(self, *args, logs=None):\n+ logs = logs or args[-1]\n+ logs[\"numpy\"] = 1\n+\n+ class MutateTensorFlowLogs(CallAllHooks):\n+ def __init__(self):\n+ super(MutateTensorFlowLogs, 
self).__init__()\n+ self._supports_tf_logs = True\n+\n+ def _run(self, *args, logs=None):\n+ logs = logs or args[-1]\n+ logs[\"tf\"] = 2\n+\n+ class AssertNumpyLogs(CallAllHooks):\n+ def _run(self, *args, logs=None):\n+ logs = logs or args[-1]\n+ assert_dict_equal(logs, {\"all\": 0, \"numpy\": 1, \"tf\": 2})\n+\n+ class AssertTensorFlowLogs(AssertNumpyLogs):\n+ def __init__(self):\n+ super(AssertTensorFlowLogs, self).__init__()\n+ self._supports_tf_logs = True\n+\n+ cb_list = keras.callbacks.CallbackList([\n+ MutateNumpyLogs(),\n+ MutateTensorFlowLogs(),\n+ AssertNumpyLogs(),\n+ AssertTensorFlowLogs()])\n+\n+ assert len(cb_list.callbacks) == 4\n+ cb_list.on_epoch_begin(0, logs={\"all\": 0})\n+ cb_list.on_epoch_end(0, logs={\"all\": 0})\n+ cb_list.on_predict_batch_begin(0, logs={\"all\": 0})\n+ cb_list.on_predict_batch_end(0, logs={\"all\": 0})\n+ cb_list.on_predict_begin(logs={\"all\": 0})\n+ cb_list.on_predict_end(logs={\"all\": 0})\n+ cb_list.on_test_batch_begin(0, logs={\"all\": 0})\n+ cb_list.on_test_batch_end(0, logs={\"all\": 0})\n+ cb_list.on_test_begin(logs={\"all\": 0})\n+ cb_list.on_test_end(logs={\"all\": 0})\n+ cb_list.on_train_batch_begin(0, logs={\"all\": 0})\n+ cb_list.on_train_batch_end(0, logs={\"all\": 0})\n+ cb_list.on_train_begin(logs={\"all\": 0})\n+ cb_list.on_train_end(logs={\"all\": 0})\n+\n @keras_parameterized.run_all_keras_modes(always_skip_v1=True)\n def test_implements_batch_hooks_override(self):\n ",
"filename": "tensorflow/python/keras/callbacks_test.py",
"status": "modified"
}
]
}
|
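The core of the fix above is the new `CallbackList._process_logs`: the logs dict is converted from TF tensors to numpy at most once per hook, and only when some callback cannot accept TF logs, so every callback mutates the same dict and a key injected by one callback (e.g. `val_log_loss`) becomes visible to the others. A rough plain-Python paraphrase of that dispatch (not the actual Keras code; the real conversion helper is `tf_utils.sync_to_numpy_or_python_type`):

```python
def process_logs(callbacks, logs, to_numpy):
    # Paraphrase of CallbackList._process_logs: convert at most once, and only
    # if some callback cannot handle TF tensors, so all callbacks share one dict.
    if logs is None:
        return {}
    if all(getattr(cb, "_supports_tf_logs", False) for cb in callbacks):
        return logs           # every callback accepts TF logs: keep tensors as-is
    return to_numpy(logs)     # stand-in for tf_utils.sync_to_numpy_or_python_type

def on_epoch_end(callbacks, epoch, logs, to_numpy):
    logs = process_logs(callbacks, logs, to_numpy)
    for cb in callbacks:
        cb.on_epoch_end(epoch, logs)   # every callback now sees the same dict
```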
{
"body": "@tensorflow/micro\r\n\r\n**System information**\r\n- Host OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 20.04\r\n- TensorFlow installed from (source or binary): source\r\n- Tensorflow version (commit SHA if source): 1e8f4666f2fbc1bdd4ce2797b218de0453cffc63\r\n- Target platform (e.g. Arm Mbed OS, Arduino Nano 33 etc.): all\r\n\r\n**Describe the problem**\r\n\r\nThe script `tensorflow/lite/micro/tools/make/flatbuffers_download.sh` creates a temporary files/dirs in `/tmp`\r\nwhose names are not uniqified and not all of which are subsequently removed. This means builds by different users on a shared server host fail and junk is left lying around in `/tmp`.\r\n\r\nA small patch correcting these issues by using `mktemp` is attached. This approach is the same\r\nas that used in the `tensorflow/lite/micro/tools/make/download_and_extract.sh` script.\r\n\r\n[tmp_filename_clash_fix.patch.txt](https://github.com/tensorflow/tensorflow/files/6221742/tmp_filename_clash_fix.patch.txt)\r\n\r\n\r\n**Please provide the exact sequence of commands/steps when you ran into the problem**\r\n\r\nmake -f tensorflow/lite/micro/tools/make/Makefile\r\n\r\n",
"comments": [
{
"body": "Thanks for the fix. Created https://github.com/tensorflow/tensorflow/pull/48241 with the patch.\r\n\r\nFeel free to send a PR as well.",
"created_at": "2021-04-01T20:03:59Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/48155\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/48155\">No</a>\n",
"created_at": "2021-04-02T02:05:13Z"
}
],
"number": 48155,
"title": "Filename clashes in /tmp during flatbuffer download"
}
|
{
"body": "This PR is exactly the patch from @andrewstevens-infineon attached to #48155\r\n",
"number": 48241,
"review_comments": [],
"title": "Fix for #48155"
}
|
{
"commits": [
{
"message": "Fix for #48155\n\nThis PR is exactly the patch from @anrewstevens-infineon attached to #48155"
}
],
"files": [
{
"diff": "@@ -25,6 +25,8 @@ function compute_md5() {\n tflm_md5sum=md5sum\n elif [ ${UNAME_S} == Darwin ]; then\n tflm_md5sum='md5 -r'\n+ else\n+ tflm_md5sum=md5sum\n fi\n ${tflm_md5sum} ${1} | awk '{print $1}'\n }",
"filename": "tensorflow/lite/micro/tools/make/bash_helpers.sh",
"status": "modified"
},
{
"diff": "@@ -51,7 +51,7 @@ fi\n # $1 - full path to the downloaded flexbuffers.h that will be patched in-place.\n function patch_to_avoid_strtod() {\n local input_flexbuffers_path=\"$1\"\n- local temp_flexbuffers_path=\"/tmp/flexbuffers_patched.h\"\n+ local temp_flexbuffers_path=\"$(mktemp)\"\n local string_to_num_line=`awk '/StringToNumber/{ print NR; }' ${input_flexbuffers_path}`\n local case_string_line=$((${string_to_num_line} - 2))\n \n@@ -94,11 +94,14 @@ else\n FLATBUFFERS_URL=\"http://mirror.tensorflow.org/github.com/google/flatbuffers/archive/${ZIP_PREFIX}.zip\"\n FLATBUFFERS_MD5=\"aa9adc93eb9b33fa1a2a90969e48baee\"\n \n- wget ${FLATBUFFERS_URL} -O /tmp/${ZIP_PREFIX}.zip >&2\n- check_md5 /tmp/${ZIP_PREFIX}.zip ${FLATBUFFERS_MD5}\n+ TMPDIR=\"$(mktemp -d)\"\n+ TMPFILE=\"${TMPDIR}/${ZIP_PREFIX}.zip\"\n+ wget ${FLATBUFFERS_URL} -O \"$TMPFILE\" >&2\n+ check_md5 \"${TMPFILE}\" ${FLATBUFFERS_MD5}\n \n- unzip -qo /tmp/${ZIP_PREFIX}.zip -d /tmp >&2\n- mv /tmp/flatbuffers-${ZIP_PREFIX} ${DOWNLOADED_FLATBUFFERS_PATH}\n+ unzip -qo \"$TMPFILE\" -d \"${TMPDIR}\" >&2\n+ mv \"${TMPDIR}/flatbuffers-${ZIP_PREFIX}\" ${DOWNLOADED_FLATBUFFERS_PATH}\n+ rm -rf \"${TMPDIR}\"\n \n patch_to_avoid_strtod ${DOWNLOADED_FLATBUFFERS_PATH}/include/flatbuffers/flexbuffers.h\n delete_build_files ${DOWNLOADED_FLATBUFFERS_PATH}",
"filename": "tensorflow/lite/micro/tools/make/flatbuffers_download.sh",
"status": "modified"
}
]
}
|
{
"body": "**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 20.04\r\n- TensorFlow installed from (source or binary): binary \r\n- TensorFlow version (use command below): v2.3.0-rc2-23-gb36436b087 2.3.0\r\n- Python version: 3.8.5\r\n\r\n**Describe the current behavior**\r\n\r\nWhen running a `Lambda` layer with `dynamic=True`, the code crashes with a `RecursionError`.\r\n\r\n**Describe the expected behavior**\r\n\r\nNo crash. \r\n\r\n**Standalone code to reproduce the issue**\r\n\r\n```python\r\nimport tensorflow as tf\r\ninp = tf.keras.Input(shape=(10,))\r\nout = tf.keras.layers.Lambda(\r\n lambda x_input: x_input,\r\n dynamic=True,\r\n)(inp)\r\nmodel = tf.keras.Model(inputs=inp, outputs=out)\r\n```\r\n\r\n**Other info / logs** \r\n\r\n[traceback_recursion_error.log](https://github.com/tensorflow/tensorflow/files/5546954/traceback_recursion_error.log)\r\n\r\n",
"comments": [
{
"body": "I have tried in colab with TF version 2.2, 2.3 and nightly version(`2.5.0-dev20201116`) and was able to reproduce the issue.Please, find the gist [here](https://colab.research.google.com/gist/ravikyram/08c4a080d0ab6b9451e0ec9536747bfb/untitled519.ipynb). Thanks!",
"created_at": "2020-11-17T06:06:39Z"
},
{
"body": "@Lescurel Do you like a better error message?\r\n\r\nExtracted from the doc https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer?version=nightly\r\n>dynamic: Set this to True if your layer should only be run eagerly, and should not be used to generate a static computation graph. This would be the case for a Tree-RNN or a recursive network, for example, or generally for any layer that manipulates tensors using Python control flow. If False, we assume that the layer can safely be used to generate a static computation graph. \r\n\r\nIf you run a similar example you can check:\r\n```\r\nimport tensorflow as tf\r\ninp = tf.keras.Input(shape=(10,))\r\ndef my_lambda_func(x):\r\n print(tf.executing_eagerly())\r\nx = tf.keras.layers.Lambda(my_lambda_func)(inp)\r\n```\r\n\r\n",
"created_at": "2020-11-17T15:08:22Z"
},
{
"body": "@bhack If passing `dynamic=True` to a Lambda layer is indeed impossible, then yes, I would have expected a better error message. \r\n\r\nBut I don't really see why it would be impossible. It feels to me that it is more likely to end up writing Python control flow code in a Lambda layer than in any other layer. ",
"created_at": "2020-11-18T14:51:08Z"
},
{
"body": "It Is ok to me to keep this open for a better error message.\nAs you can see with my print in lambda you are no more in eager mode this match with documentation.",
"created_at": "2020-11-18T15:05:38Z"
},
{
"body": "To add a bit of context, I ended finding that \"bug\" in a more complex program that ended up throwing that Traceback : \r\n\r\n[traceback.log](https://github.com/tensorflow/tensorflow/files/5560985/traceback.log)\r\n\r\nIn that Traceback, the first error I saw was : \r\n\r\n> OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did not convert this function. Try decorating it directly with @tf.function.\r\n\r\nWhich I ignored, given that the function was already decorated with `@tf.function`. (This is also a strange behaviour, but I can't reproduce it)\r\n\r\nThe second error is: \r\n\r\n> TypeError: You are attempting to use Python control flow in a layer that was not declared to be dynamic. Pass `dynamic=True` to the class constructor.\r\n\r\nThe problem is that apparently, those two suggestions are incompatible with each other, if I believe your example. \r\n",
"created_at": "2020-11-18T15:16:19Z"
},
{
"body": "I cannot see your context with the current code.",
"created_at": "2020-11-18T15:42:25Z"
},
{
"body": "Sorry, I forgot to add that the error was caused by a Lambda layer. Basically something like the example below caused the error. However, I have been unable to reproduce with a minimal example, so the example below is completely functional. \r\n\r\n```python\r\nimport tensorflow as tf\r\n\r\n@tf.function\r\ndef python_control_flow_fn(tensor):\r\n return tf.concat([t for t in tensor],axis=0)\r\n\r\ninp = tf.keras.Input(shape=(10,))\r\nlayer = tf.keras.layers.Lambda(python_control_flow_fn)(inp)\r\n```\r\n\r\nThe traceback seems to suggest that I can either add `@tf.function` to my Python control flow function, or add `dynamic=True` to the layer. ",
"created_at": "2020-11-18T16:07:34Z"
},
{
"body": "> OperatorNotAllowedInGraphError: iterating over tf.Tensor is not allowed: AutoGraph did not convert this function. Try decorating it directly with @tf.function.\r\n\r\nThis is not related to lambda:\r\n````python\r\n@tf.function\r\ndef python_control_flow_fn(tensor):\r\n return tf.concat([t for t in tensor],axis=0)\r\n\r\ninp = tf.keras.Input(shape=(10,))\r\npython_control_flow_fn(inp)",
"created_at": "2020-11-18T18:24:47Z"
},
{
"body": "Adding the `contributions welcome` label to this issue for further investigation by the community. If you are interested in working on this issue, please leave a comment and I will assign it to you. Thanks!",
"created_at": "2021-03-15T14:30:48Z"
},
{
"body": "Hi,\r\nthis error happens even in eager mode (tf 2.4.1):\r\n\r\n```python\r\n\r\nimport tensorflow as tf\r\n\r\ntf.config.run_functions_eagerly(True)\r\n\r\ninput = tf.keras.Input(shape=())\r\n\r\n@tf.function\r\ndef fn(x):\r\n return x\r\n\r\noutput = tf.keras.layers.Lambda(fn, dynamic=True)(input)\r\ntf.keras.Model(inputs=input, outputs=output)\r\n```\r\nthrows:\r\n\r\n`RecursionError: maximum recursion depth exceeded while calling a Python object`",
"created_at": "2021-03-18T17:06:39Z"
},
{
"body": "@nikitamaia Can you assign this to @fsx950223 he has already a submitted PR.",
"created_at": "2021-04-13T19:31:18Z"
},
{
"body": "Thanks for the heads up @bhack!\r\n\r\n@fsx950223 can you leave a comment on this thread requesting to be assigned? Github settings won't let me assign someone unless they have commented on the thread :)",
"created_at": "2021-04-13T19:45:49Z"
},
{
"body": "Please assign the issue to me.",
"created_at": "2021-04-14T05:15:53Z"
},
{
"body": "Was able to reproduce your issue in Tf Nightly 2.6.0-dev20210602, please find the gist [here](https://colab.research.google.com/gist/sachinprasadhs/4b23f0c1e384137b1666c3a22a4c4ddf/44906.ipynb). Thanks!",
"created_at": "2021-06-03T08:34:25Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/44906\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/44906\">No</a>\n",
"created_at": "2021-06-09T16:25:12Z"
},
{
"body": "I also observed the following alternative names of the API have the same behavior that causes the `RecursionError` Exception:\r\n\r\n- `(tf.keras.layers.Lambda)`, `tf.compat.v1.keras.layers.Lambda`\r\n\r\nThis behavior still exists in tensorflow nightly (2.15.0-dev20230920), and users should be cautious when using them on both CPU and GPU.\r\n\r\n<details>\r\n <summary>Code to reproduce the issue in <code>tf.compat.v1.keras.layers.Lambda</code></summary>\r\n\r\n```python\r\nimport tensorflow as tf\r\nprint(tf.version.GIT_VERSION, tf.version.VERSION, flush=True)\r\nprint(tf.config.list_physical_devices(), flush=True)\r\n\r\n\r\ninp = tf.keras.Input(shape=(10,))\r\nout = tf.compat.v1.keras.layers.Lambda(\r\n lambda x_input: x_input,\r\n dynamic=True,\r\n)(inp)\r\nmodel = tf.keras.Model(inputs=inp, outputs=out)\r\n```\r\n\r\nOn my GPU machine, the above code produces the following output, and the `RecursionError` Exception is raised.\r\n\r\n```text\r\nv2.14.0-rc0-34-gdd01672d9a9 2.14.0-rc1\r\n[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]\r\nTraceback (most recent call last):\r\n File \"/tmp/analyze/44906-4-s/tf.compat.v1.keras.layers.Lambda.py\", line 7, in <module>\r\n out = tf.compat.v1.keras.layers.Lambda(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/dist-packages/keras/src/utils/traceback_utils.py\", line 70, in error_handler\r\n raise e.with_traceback(filtered_tb) from None\r\n File \"/usr/lib/python3.11/inspect.py\", line 3272, in signature\r\n return Signature.from_callable(obj, follow_wrapped=follow_wrapped,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/inspect.py\", line 3020, in from_callable\r\n return _signature_from_callable(obj, sigcls=cls,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/inspect.py\", line 2457, in _signature_from_callable\r\n obj = unwrap(obj, stop=(lambda f: hasattr(f, \"__signature__\")))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/inspect.py\", line 760, in unwrap\r\n memo = {id(f): f}\r\n ^^^^^\r\nRecursionError: maximum recursion depth exceeded while calling a Python object\r\n```\r\n\r\nThis behavior is also reproducible on my CPU machine:\r\n\r\n```text\r\nv2.14.0-rc0-34-gdd01672d9a9 2.14.0-rc1\r\n[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')]\r\nTraceback (most recent call last):\r\n File \"/tmp/analyze/44906-4-s/tf.compat.v1.keras.layers.Lambda.py\", line 7, in <module>\r\n out = tf.compat.v1.keras.layers.Lambda(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/dist-packages/keras/src/utils/traceback_utils.py\", line 70, in error_handler\r\n raise e.with_traceback(filtered_tb) from None\r\n File \"/usr/lib/python3.11/inspect.py\", line 3272, in signature\r\n return Signature.from_callable(obj, follow_wrapped=follow_wrapped,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/inspect.py\", line 3020, in from_callable\r\n return _signature_from_callable(obj, sigcls=cls,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/inspect.py\", line 2457, in _signature_from_callable\r\n obj = unwrap(obj, stop=(lambda f: hasattr(f, \"__signature__\")))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/inspect.py\", line 760, in unwrap\r\n memo = {id(f): f}\r\n 
^^^^^\r\nRecursionError: maximum recursion depth exceeded while calling a Python object\r\n```\r\n</details>\r\n",
"created_at": "2023-09-21T11:22:48Z"
}
],
"number": 44906,
"title": "RecursionError with `dynamic=True` when using a `Lambda` layer "
}
|
{
"body": "Fix #44906",
"number": 48207,
"review_comments": [
{
"body": "This is already what `super(Lambda, self).compute_output_shape(input_shape)` is supposed to do (`compute_output_shape` method of `Layer` base class). Why can't we leverage it, instead of duplicating the functionality?",
"created_at": "2021-04-15T17:58:45Z"
},
{
"body": "```_functional_construction_call``` under ```__call__``` raises RecusionError bug.\r\nI use call instead in order to fix it.",
"created_at": "2021-04-17T16:20:30Z"
},
{
"body": "I would recommend modifying the line `outputs = self(inputs, training=False)` to use `call` in `compute_output_shape` in `Layer`. That should be more generic, and it will avoid reimplementing functionality twice.",
"created_at": "2021-04-19T17:33:35Z"
},
{
"body": "Done",
"created_at": "2021-04-20T08:02:13Z"
}
],
"title": "fix lambda dynamic shape inference"
}
|
{
"commits": [
{
"message": "fix lambda dynamic shape inference"
},
{
"message": "fix keras v1 bug"
}
],
"files": [
{
"diff": "@@ -773,7 +773,16 @@ def _make_placeholder_like(shape):\n return ph\n inputs = nest.map_structure(_make_placeholder_like, input_shape)\n try:\n- outputs = self(inputs, training=False)\n+ if (base_layer_utils.is_subclassed(self) and\n+ not base_layer_utils.from_saved_model(self)):\n+ call_fn = autograph.tf_convert(self.call,\n+ ag_ctx.control_status_ctx())\n+ else:\n+ call_fn = self.call\n+ if self._expects_training_arg:\n+ outputs = call_fn(inputs, training=False)\n+ else:\n+ outputs = call_fn(inputs)\n except TypeError as e:\n raise NotImplementedError(\n 'We could not automatically infer the static shape of the '",
"filename": "tensorflow/python/keras/engine/base_layer.py",
"status": "modified"
},
{
"diff": "@@ -567,7 +567,16 @@ def compute_output_shape(self, input_shape):\n inputs = nest.map_structure(\n base_layer_utils.generate_placeholders_from_shape, input_shape)\n try:\n- outputs = self(inputs, training=False)\n+ if (base_layer_utils.is_subclassed(self) and\n+ not base_layer_utils.from_saved_model(self)):\n+ call_fn = autograph.tf_convert(self.call,\n+ ag_ctx.control_status_ctx())\n+ else:\n+ call_fn = self.call\n+ if self._expects_training_arg:\n+ outputs = call_fn(inputs, training=False)\n+ else:\n+ outputs = call_fn(inputs)\n except TypeError as e:\n raise NotImplementedError(\n 'We could not automatically infer the static shape of the '",
"filename": "tensorflow/python/keras/engine/base_layer_v1.py",
"status": "modified"
},
{
"diff": "@@ -95,6 +95,14 @@ def test_dropout_partial_noise_shape(self):\n @keras_parameterized.run_all_keras_modes\n class LambdaLayerTest(keras_parameterized.TestCase):\n \n+ def test_dynamic(self):\n+ inp = keras.Input(shape=(10,))\n+ out = keras.layers.Lambda(\n+ lambda x_input: x_input,\n+ dynamic=True)(inp)\n+ model = keras.Model(inputs=inp, outputs=out)\n+ self.assertEqual(model.output_shape, (None, 10))\n+\n def test_lambda(self):\n testing_utils.layer_test(\n keras.layers.Lambda,",
"filename": "tensorflow/python/keras/layers/core_test.py",
"status": "modified"
}
]
}
|
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator TRANSPOSE from lite to micro. @advaitjain \r\n\r\nIt will be delivered in a series of PRs.\r\n\r\nPR 1 (merged): Refactor flatbuffer_conversions #45439 \r\nPR 2 (merged): Refactor transpose reference op #45438 \r\nPR 3 (merged): Copy of the reference kernel from lite to micro without changes #45843 \r\nPR 4: Modify the micro kernel, port the tests and add the kernel to the micro build (as three separate commits) #47446\r\n\r\n",
"comments": [
{
"body": "@driedler Sorry for the delayed response. I missed your comment. Actually, this issue relates to porting the TRANSPOSE op; the files you linked are for the TRANSPOSE_CONV op. It was ported here https://github.com/tensorflow/tensorflow/commit/d9841dfd9689f9c4e0bc4e1229dbc354f01ebc1b\r\n\r\nIs it the transpose or the transpose_conv you are interested in? :)",
"created_at": "2021-02-02T11:58:02Z"
},
{
"body": "@patriklaurell Yes. My apologies, TRANSPOSE_CONV has the issue. Please disregard.",
"created_at": "2021-02-11T00:24:13Z"
},
{
"body": "Hi @patriklaurell,\r\n\r\nWhat is the status of your PR5? I need to consume TRANSPOSE from TFLM as well, and I would prefer not to duplicate the effort if you are already working on it :)\r\n\r\nThank you.",
"created_at": "2021-03-29T12:18:32Z"
},
{
"body": "I created a PR [48192](https://github.com/tensorflow/tensorflow/pull/48192) that should solve this issue.",
"created_at": "2021-03-30T15:56:42Z"
},
{
"body": "@dmpiergiacomo Sorry for the late response. I have been on vacation over the Easter week. I don't know if it is still relevant, but I have the code for PR5 ready locally. I have not uploaded it since it depends on the changes in PR4 #47446. ",
"created_at": "2021-04-06T09:50:42Z"
},
{
"body": "With the merge of #47446 this issue is fixed.",
"created_at": "2021-05-21T07:47:54Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/45695\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/45695\">No</a>\n",
"created_at": "2021-05-21T07:47:56Z"
}
],
"number": 45695,
"title": "micro: Port TRANSPOSE from lite to micro"
}
|
{
"body": "Fixes #45695 \r\nFixes #43472\r\n\r\nAddition of the TRANSPOSE operation and its relevant test file to TF Lite for Microcontrollers. This operation has been successfully tested in the following ways:\r\n\r\n1. Running `transpose_test` with Bazel\r\n2. Building TFLM for a target nRF52840 DK (Cortex-M) and verifying that the target error `Didn't find op for builtin opcode 'TRANSPOSE' version '2'` disappeared\r\n\r\n**More details about point 1.**\r\nThe command used is:\r\n```\r\nbazel test //tensorflow/lite/micro/kernels:transpose_test --verbose_failures\r\n```\r\n\r\nIt returns:\r\n```\r\n.....\r\nRemoved for brevity\r\n.....\r\nINFO: Analyzed target //tensorflow/lite/micro/kernels:transpose_test (0 packages loaded, 0 targets configured).\r\nINFO: Found 1 test target...\r\nTarget //tensorflow/lite/micro/kernels:transpose_test up-to-date:\r\n bazel-bin/tensorflow/lite/micro/kernels/transpose_test\r\nINFO: Elapsed time: 0.156s, Critical Path: 0.00s\r\nINFO: 1 process: 1 internal.\r\nINFO: Build completed successfully, 1 total action\r\n//tensorflow/lite/micro/kernels:transpose_test (cached) PASSED in 0.0s\r\n\r\nExecuted 0 out of 1 test: 1 test passes.\r\nINFO: Build completed successfully, 1 total action\r\n```\r\n\r\n**More details about point 2.**\r\nI always build my version of TFLM locally with the command:\r\n```\r\nmake -f tensorflow/lite/micro/tools/make/Makefile \\\r\n TARGET=cortex_m_generic \\\r\n TARGET_ARCH=cortex-m4+fp \\\r\n TARGET_TOOLCHAIN_ROOT=/opt/gcc-arm-none-eabi-9-2020-q2-update/bin/ \\\r\n OPTIMIZED_KERNEL_DIR=cmsis_nn microlite\r\n```\r\n\r\nBefore applying the fixes of this PR, the error on the target nRF52840 DK was:\r\n```\r\n[ERR] ./model/debug_log.cc:12: Didn't find op for builtin opcode 'TRANSPOSE' version '2'. An older version of this builtin might be supported. Are you using an old TFLite binary with a newer model?\r\n[ERR] ./model/debug_log.cc:12: \r\n[ERR] ./model/debug_log.cc:12: Failed to get registration from op code TRANSPOSE\r\n[ERR] ./model/debug_log.cc:12: \r\n[ERR] ./model/debug_log.cc:12: Failed starting model allocation.\r\n[ERR] ./model/debug_log.cc:12: \r\n[ERR] ./model/debug_log.cc:12: AllocateTensors() failed\r\n```\r\n\r\nNow, after applying the fixes of this PR, the error disappears and the code can proceed further at runtime.",
"number": 48192,
"review_comments": [],
"title": "Micro transpose op ported and tested for TFLM"
}
|
{
"commits": [
{
"message": "Added the Add* function for the missing Builtin operator Fill"
},
{
"message": "Merge pull request #1 from dmpiergiacomo/addfill-patch\n\nAdded the Add* function for the missing Builtin operator Fill"
},
{
"message": "Cherry-Picking and Conflict Solving for Init Transpose"
},
{
"message": "Cherry-Picking and Conflicts Resolving for Fixed ParseOp"
},
{
"message": "Cherry-Picking and Conflicts Resolving for Add missing AddTranspose"
},
{
"message": "Transpose V2 expected"
},
{
"message": "Removed op_type from ParseTranspose and fixed Register_TRANSPOSE. Fixed micro BUILD file and transpose_test.cc"
},
{
"message": "Moved Register_TRANSPOSE() to tflite namespace and update micro Makefile"
}
],
"files": [
{
"diff": "@@ -13,6 +13,7 @@ node_modules\n __pycache__\n *.swp\n .vscode/\n+venv/\n cmake_build/\n tensorflow/contrib/cmake/_build/\n .idea/**",
"filename": ".gitignore",
"status": "modified"
},
{
"diff": "@@ -474,6 +474,10 @@ TfLiteStatus ParseOpDataTfLite(const Operator* op, BuiltinOperator op_type,\n return ParseTanh(op, error_reporter, allocator, builtin_data);\n }\n \n+ case BuiltinOperator_TRANSPOSE: {\n+ return ParseTranspose(op, error_reporter, allocator, builtin_data);\n+ }\n+\n case BuiltinOperator_TRANSPOSE_CONV: {\n return ParseTransposeConv(op, error_reporter, allocator, builtin_data);\n }\n@@ -806,7 +810,6 @@ TfLiteStatus ParseOpDataTfLite(const Operator* op, BuiltinOperator op_type,\n case BuiltinOperator_SLICE:\n case BuiltinOperator_TILE:\n case BuiltinOperator_TOPK_V2:\n- case BuiltinOperator_TRANSPOSE:\n case BuiltinOperator_RANGE:\n case BuiltinOperator_SQUARED_DIFFERENCE:\n case BuiltinOperator_REVERSE_V2:\n@@ -2059,8 +2062,10 @@ TfLiteStatus ParseTanh(const Operator*, ErrorReporter*, BuiltinDataAllocator*,\n // We have this parse function instead of directly returning kTfLiteOk from the\n // switch-case in ParseOpData because this function is used as part of the\n // selective registration for the OpResolver implementation in micro.\n-TfLiteStatus ParseTranspose(const Operator*, ErrorReporter*,\n- BuiltinDataAllocator*, void**) {\n+TfLiteStatus ParseTranspose(const Operator* op,\n+ ErrorReporter* error_reporter,\n+ BuiltinDataAllocator* allocator,\n+ void** builtin_data) {\n return kTfLiteOk;\n }\n ",
"filename": "tensorflow/lite/core/api/flatbuffer_conversions.cc",
"status": "modified"
},
{
"diff": "@@ -334,9 +334,10 @@ TfLiteStatus ParseSvdf(const Operator* op, ErrorReporter* error_reporter,\n TfLiteStatus ParseTanh(const Operator* op, ErrorReporter* error_reporter,\n BuiltinDataAllocator* allocator, void** builtin_data);\n \n-TfLiteStatus ParseTranspose(const Operator* op, ErrorReporter* error_reporter,\n- BuiltinDataAllocator* allocator,\n- void** builtin_data);\n+TfLiteStatus ParseTranspose(const Operator* op,\n+ ErrorReporter* error_reporter,\n+\t\t\t\t\t BuiltinDataAllocator* allocator,\n+\t\t\t\t\t\t\tvoid** builtin_data);\n \n TfLiteStatus ParseTransposeConv(const Operator* op,\n ErrorReporter* error_reporter,",
"filename": "tensorflow/lite/core/api/flatbuffer_conversions.h",
"status": "modified"
},
{
"diff": "@@ -86,6 +86,7 @@ AllOpsResolver::AllOpsResolver() {\n AddSub();\n AddSvdf();\n AddTanh();\n+ AddTranspose();\n AddTransposeConv();\n AddUnpack();\n }",
"filename": "tensorflow/lite/micro/all_ops_resolver.cc",
"status": "modified"
},
{
"diff": "@@ -302,6 +302,7 @@ cc_library(\n \"sub.cc\",\n \"svdf_common.cc\",\n \"tanh.cc\",\n+ \"transpose.cc\",\n \"transpose_conv.cc\",\n \"unpack.cc\",\n \"zeros_like.cc\",\n@@ -1089,6 +1090,19 @@ cc_test(\n ],\n )\n \n+cc_test(\n+ name = \"transpose_test\",\n+ srcs = [\n+ \"transpose_test.cc\"\n+ ],\n+ deps = [\n+ \"//tensorflow/lite/c:common\",\n+ \"//tensorflow/lite/micro:micro_framework\",\n+ \"//tensorflow/lite/micro:op_resolvers\",\n+ \"//tensorflow/lite/micro/testing:micro_test\",\n+ ],\n+)\n+\n cc_test(\n name = \"transpose_conv_test\",\n srcs = [\n@@ -1130,4 +1144,4 @@ cc_test(\n \"//tensorflow/lite/micro:test_helpers\",\n \"//tensorflow/lite/micro/testing:micro_test\",\n ],\n-)\n+)\n\\ No newline at end of file",
"filename": "tensorflow/lite/micro/kernels/BUILD",
"status": "modified"
},
{
"diff": "@@ -49,6 +49,7 @@ TfLiteRegistration Register_SOFTMAX();\n TfLiteRegistration Register_SPACE_TO_BATCH_ND();\n TfLiteRegistration Register_SQUEEZE();\n TfLiteRegistration Register_SVDF();\n+TfLiteRegistration Register_TRANSPOSE();\n TfLiteRegistration Register_TRANSPOSE_CONV();\n TfLiteRegistration Register_ZEROS_LIKE();\n ",
"filename": "tensorflow/lite/micro/kernels/micro_ops.h",
"status": "modified"
},
{
"diff": "@@ -12,170 +12,113 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n ==============================================================================*/\n-#include <stdint.h>\n \n+#include \"tensorflow/lite/c/builtin_op_data.h\"\n #include \"tensorflow/lite/c/common.h\"\n-#include \"tensorflow/lite/kernels/internal/compatibility.h\"\n-#include \"tensorflow/lite/kernels/internal/optimized/optimized_ops.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/reference_ops.h\"\n-#include \"tensorflow/lite/kernels/internal/tensor.h\"\n #include \"tensorflow/lite/kernels/internal/tensor_ctypes.h\"\n-#include \"tensorflow/lite/kernels/internal/types.h\"\n #include \"tensorflow/lite/kernels/kernel_util.h\"\n+#include \"tensorflow/lite/kernels/op_macros.h\"\n+#include \"tensorflow/lite/micro/memory_helpers.h\"\n+#include \"tensorflow/lite/micro/micro_utils.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/transpose.h\"\n \n namespace tflite {\n-namespace ops {\n-namespace builtin {\n-namespace transpose {\n-\n-// This file has two implementations of Transpose.\n-enum KernelType {\n- kReference,\n- kGenericOptimized,\n-};\n+namespace {\n+\n+constexpr int kInputTensor = 0;\n+constexpr int kPermTensor = 1;\n+constexpr int kOutputTensor = 0;\n \n struct TransposeContext {\n- TransposeContext(TfLiteContext* context, TfLiteNode* node) {\n- input = GetInput(context, node, 0);\n- perm = GetInput(context, node, 1);\n- output = GetOutput(context, node, 0);\n- }\n- const TfLiteTensor* input;\n- const TfLiteTensor* perm;\n- TfLiteTensor* output;\n+ TransposeContext(TfLiteContext* context, TfLiteNode* node) {\n+ input = GetInput(context, node, kInputTensor);\n+ perm = GetInput(context, node, kPermTensor);\n+ output = GetOutput(context, node, kOutputTensor);\n+ }\n+ const TfLiteTensor* input;\n+ const TfLiteTensor* perm;\n+ TfLiteTensor* output;\n };\n \n-TfLiteStatus ResizeOutputTensor(TfLiteContext* context,\n- TransposeContext* op_context) {\n- int dims = NumDimensions(op_context->input);\n- const int* perm_data = GetTensorData<int32_t>(op_context->perm);\n-\n- // Ensure validity of the permutations tensor as a 1D tensor.\n- TF_LITE_ENSURE_EQ(context, NumDimensions(op_context->perm), 1);\n- TF_LITE_ENSURE_EQ(context, op_context->perm->dims->data[0], dims);\n- for (int idx = 0; idx < dims; ++idx) {\n- TF_LITE_ENSURE_MSG(context, (perm_data[idx] >= 0 && perm_data[idx] < dims),\n- \"Transpose op permutations array is out of bounds.\");\n- }\n-\n- // Determine size of output tensor.\n- TfLiteIntArray* input_size = op_context->input->dims;\n- TfLiteIntArray* output_size = TfLiteIntArrayCopy(input_size);\n- for (int idx = 0; idx < dims; ++idx) {\n- output_size->data[idx] = input_size->data[perm_data[idx]];\n- }\n-\n- return context->ResizeTensor(context, op_context->output, output_size);\n-}\n-\n TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n- TF_LITE_ENSURE_EQ(context, NumInputs(node), 2);\n- TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);\n+ TF_LITE_ENSURE_EQ(context, NumInputs(node), 2);\n+ TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);\n \n- TransposeContext op_context(context, node);\n+ TransposeContext op_context(context, node);\n \n- // Ensure validity of input tensor.\n- TF_LITE_ENSURE_MSG(context, NumDimensions(op_context.input) <= 5,\n- \"Transpose op only supports 1D-5D input arrays.\");\n- TF_LITE_ENSURE_TYPES_EQ(context, 
op_context.input->type,\n- op_context.output->type);\n+ // Ensure validity of input tensor.\n+ TF_LITE_ENSURE_MSG(context, NumDimensions(op_context.input) <= 5,\n+ \"Transpose op only supports 1D-5D input arrays.\");\n+ TF_LITE_ENSURE_TYPES_EQ(context, op_context.input->type,\n+ op_context.output->type);\n \n- if (!IsConstantTensor(op_context.perm)) {\n- SetTensorToDynamic(op_context.output);\n return kTfLiteOk;\n- }\n- return ResizeOutputTensor(context, &op_context);\n }\n \n-template <KernelType kernel_type>\n TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n TransposeContext op_context(context, node);\n \n- // Resize the output tensor if the output tensor is dynamic.\n- if (IsDynamicTensor(op_context.output)) {\n- TF_LITE_ENSURE_OK(context, ResizeOutputTensor(context, &op_context));\n- }\n+ // Retrieve the perm permutation array\n+ const int32_t* perm_data = GetTensorData<int32_t>(op_context.perm);\n \n- const int* perm_data = GetTensorData<int32_t>(op_context.perm);\n+ // Determine the number of dimensions in the perm array\n const int size = op_context.perm->dims->data[0];\n+\n+ // Prepare an params object to store the perm data whilst implementing\n+ // the conversion \n TransposeParams params;\n params.perm_count = size;\n for (int i = 0; i < size; ++i) {\n params.perm[i] = perm_data[i];\n }\n \n-#define TF_LITE_TRANSPOSE(type, scalar) \\\n- type::Transpose(params, GetTensorShape(op_context.input), \\\n+ // Helper operation to acquire and convert data types\n+#define TF_LITE_TRANSPOSE(scalar) \\\n+ reference_ops::Transpose(params, GetTensorShape(op_context.input), \\\n GetTensorData<scalar>(op_context.input), \\\n GetTensorShape(op_context.output), \\\n GetTensorData<scalar>(op_context.output))\n \n- // Transpose kernel only does rearranging values not numeric evaluations on\n- // each cell. 
It's safe to implement per size of scalar type and this trick\n- // keeps the total code size in a reasonable range.\n+ // Transpose really operates at the byte level,\n+ // and therefore we only really need to get the \n+ // size of the scalar datatype in bytes.\n+ // Using this we can simplify the calls\n+ // to only use a small number of data types\n switch (op_context.input->type) {\n case kTfLiteFloat32:\n case kTfLiteInt32:\n- if (kernel_type == kGenericOptimized) {\n- TF_LITE_TRANSPOSE(optimized_ops, int32_t);\n- } else {\n- TF_LITE_TRANSPOSE(reference_ops, int32_t);\n- }\n+ TF_LITE_TRANSPOSE(int32_t);\n break;\n- case kTfLiteUInt8:\n case kTfLiteInt8:\n- if (kernel_type == kGenericOptimized) {\n- TF_LITE_TRANSPOSE(optimized_ops, int8_t);\n- } else {\n- TF_LITE_TRANSPOSE(reference_ops, int8_t);\n- }\n+ case kTfLiteUInt8:\n+ TF_LITE_TRANSPOSE(int8_t);\n break;\n case kTfLiteInt16:\n- TF_LITE_TRANSPOSE(reference_ops, int16_t);\n- break;\n- case kTfLiteInt64:\n- TF_LITE_TRANSPOSE(reference_ops, int64_t);\n- break;\n- case kTfLiteBool:\n- if (sizeof(bool) == 1) {\n- if (kernel_type == kGenericOptimized) {\n- TF_LITE_TRANSPOSE(optimized_ops, int8_t);\n- } else {\n- TF_LITE_TRANSPOSE(reference_ops, int8_t);\n- }\n- } else {\n- TF_LITE_TRANSPOSE(reference_ops, bool);\n- }\n+ TF_LITE_TRANSPOSE(int16_t);\n break;\n default:\n TF_LITE_KERNEL_LOG(context,\n \"Type %s is currently not supported by Transpose.\",\n TfLiteTypeGetName(op_context.input->type));\n return kTfLiteError;\n }\n+\n #undef TF_LITE_TRANSPOSE\n \n return kTfLiteOk;\n }\n \n-} // namespace transpose\n-\n-TfLiteRegistration* Register_TRANSPOSE_REF() {\n- static TfLiteRegistration r = {nullptr, nullptr, transpose::Prepare,\n- transpose::Eval<transpose::kReference>};\n- return &r;\n-}\n-\n-TfLiteRegistration* Register_TRANSPOSE_GENERIC_OPTIMIZED() {\n- static TfLiteRegistration r = {nullptr, nullptr, transpose::Prepare,\n- transpose::Eval<transpose::kGenericOptimized>};\n- return &r;\n-}\n-\n-TfLiteRegistration* Register_TRANSPOSE() {\n- return Register_TRANSPOSE_GENERIC_OPTIMIZED();\n+} // namespace transpose\n+\n+TfLiteRegistration Register_TRANSPOSE() {\n+ return {/*init=*/nullptr,\n+ /*free=*/nullptr,\n+ /*prepare=*/Prepare,\n+ /*invoke=*/Eval,\n+ /*profiling_string=*/nullptr,\n+ /*builtin_code=*/0,\n+ /*custom_name=*/nullptr,\n+ /*version=*/2};\n }\n \n-} // namespace builtin\n-} // namespace ops\n-} // namespace tflite\n+} // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/transpose.cc",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,536 @@\n+/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+ http://www.apache.org/licenses/LICENSE-2.0\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#include <stdint.h>\n+\n+#include <initializer_list>\n+#include <vector>\n+\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/kernels/internal/tensor_ctypes.h\"\n+#include \"tensorflow/lite/micro/micro_utils.h\"\n+#include \"tensorflow/lite/micro/test_helpers.h\"\n+#include \"tensorflow/lite/micro/testing/micro_test.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/transpose.h\"\n+\n+namespace tflite {\n+namespace testing {\n+namespace {\n+\n+// template <typename T = float>\n+// void ValidateTransposeGoldens(TfLiteTensor* tensors, int tensors_size, TfLiteIntArray* inputs_array,\n+// TfLiteIntArray* outputs_array, const T* expected_output,\n+// const size_t expected_output_len, const int* expected_dims,\n+// const size_t expected_dims_len, bool expect_failure) {\n+\n+// const TfLiteRegistration registration =\n+// tflite::ops::micro::Register_TRANSPOSE();\n+\n+// micro::KernelRunner runner(registration, tensors, tensors_size, inputs_array,\n+// outputs_array,\n+// /*builtin_data=*/nullptr, micro_test::reporter);\n+\n+// if (expect_failure) {\n+// TF_LITE_MICRO_EXPECT_NE(kTfLiteOk, runner.InitAndPrepare());\n+// return;\n+// }\n+\n+// TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.InitAndPrepare());\n+// TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.Invoke());\n+\n+// TfLiteTensor* output_tensor = &tensors[outputs_array->data[0]];\n+// const T* output_data = GetTensorData<T>(output_tensor);\n+// for (size_t i = 0; i < expected_output_len; ++i) {\n+// TF_LITE_MICRO_EXPECT_NEAR(expected_output[i], output_data[i], 1e-5f);\n+// }\n+// TF_LITE_MICRO_EXPECT_EQ(expected_dims_len,\n+// static_cast<size_t>(output_tensor->dims->size));\n+// for (size_t i = 0; i < expected_dims_len; ++i) {\n+// TF_LITE_MICRO_EXPECT_EQ(expected_dims[i], output_tensor->dims->data[i]);\n+// }\n+// }\n+\n+// template <typename T = float>\n+// void TestTransposeWithShape(TfLiteTensor* input_tensor, \n+// TfLiteTensor* perm_tensor, \n+// TfLiteTensor* output_tensor, \n+// const T* expected_output,\n+// const size_t expected_output_len,\n+// const int* expected_dims,\n+// const size_t expected_dims_len, \n+// bool expect_failure) {\n+\n+// constexpr int inputs_size = 2;\n+// constexpr int outputs_size = 1;\n+// constexpr int tensors_size = inputs_size + outputs_size;\n+// TfLiteTensor tensors[tensors_size];\n+// tensors[0] = *input_tensor;\n+// tensors[1] = *perm_tensor;\n+// tensors[2] = *output_tensor;\n+\n+// int inputs_data[] = {2, 0, 1};\n+// TfLiteIntArray* inputs_array = IntArrayFromInts(inputs_data);\n+// int outputs_data[] = {1, 2};\n+// TfLiteIntArray* outputs_array = IntArrayFromInts(outputs_data);\n+\n+// ValidateTransposeGoldens(tensors, tensors_size, \n+// inputs_array, outputs_array,\n+// expected_output, expected_output_len, \n+// expected_dims, expected_dims_len, \n+// 
expect_failure);\n+\n+// }\n+\n+// template <typename T = float, TfLiteType tensor_type = kTfLiteFloat32>\n+// void TestTranspose(const int* input_dims_data, const T* input_data,\n+// const int* perm_dims_data, const int32_t* perm_data,\n+// int* output_dims_data, T* output_data,\n+// const T* expected_output, const size_t expected_output_len,\n+// const int* expected_dims, const size_t expected_dims_len,\n+// bool expect_failure = false) {\n+\n+// TfLiteIntArray* input_dims = IntArrayFromInts(input_dims_data);\n+// TfLiteIntArray* perm_dims = IntArrayFromInts(perm_dims_data);\n+// TfLiteIntArray* output_dims = IntArrayFromInts(output_dims_data);\n+\n+// TfLiteTensor input_tensor =\n+// CreateTensor<T, tensor_type>(input_data, input_dims);\n+// TfLiteTensor perm_tensor =\n+// CreateTensor<int32_t, kTfLiteInt32>(perm_data, perm_dims);\n+// TfLiteTensor output_tensor =\n+// CreateTensor<T, tensor_type>(output_data, output_dims);\n+\n+// TestTransposeWithShape(&input_tensor, &perm_tensor, &output_tensor,\n+// expected_output, expected_output_len, expected_dims,\n+// expected_dims_len, expect_failure);\n+// }\n+\n+template <typename T>\n+inline RuntimeShape GetTensorShape(std::vector<T> data) {\n+ return RuntimeShape(data.size(), data.data());\n+}\n+\n+template <typename T>\n+void RunTestPermutation(const std::vector<int>& shape,\n+ const std::vector<int>& perms,\n+ std::vector<T>* input_transposed) {\n+ // Count elements and allocate output.\n+ int count = 1;\n+ for (auto factor : shape) count *= factor;\n+ input_transposed->resize(count);\n+\n+ // Create the dummy data\n+ std::vector<T> input(count);\n+ for (unsigned int i = 0; i < input.size(); i++) {\n+ input[i] = i;\n+ }\n+\n+ // Make input and output shapes.\n+ const RuntimeShape input_shape = GetTensorShape(shape);\n+ RuntimeShape output_shape(perms.size());\n+ for (unsigned int i = 0; i < perms.size(); i++) {\n+ output_shape.SetDim(i, input_shape.Dims(perms[i]));\n+ }\n+\n+ TransposeParams params;\n+ params.perm_count = perms.size();\n+ for (unsigned int i = 0; i < perms.size(); ++i) {\n+ params.perm[i] = perms[i];\n+ }\n+\n+ tflite::reference_ops::Transpose<T>(params, \n+ input_shape, input.data(), \n+ output_shape, input_transposed->data());\n+}\n+\n+} // namespace\n+} // namespace testing\n+} // namespace tflite\n+\n+#define TF_LITE_MICRO_ARRAY_COMP_EQ(_a,_b) \\\n+ { \\\n+ TF_LITE_MICRO_EXPECT_EQ(_a.size(),_b.size()); \\\n+ for (unsigned int _e = 0; _e < _a.size(); _e++) { \\\n+ TF_LITE_MICRO_EXPECT_EQ(_a[_e], _b[_e]); \\\n+ } \\\n+ }\n+\n+#define TF_LITE_MICRO_ARRAY_COMP_NE(_a,_b) \\\n+ { \\\n+ bool size_eq = _a.size() == _b.size(); \\\n+ bool cont_eq = true; \\\n+ if (size_eq) { \\\n+ for (unsigned int _e = 0; _e < _a.size(); _e++) \\\n+ cont_eq &= _a[_e] == _b[_e]; \\\n+ } \\\n+ if (size_eq & cont_eq) { \\\n+ TF_LITE_MICRO_FAIL(\"Arrays are equal\"); \\\n+ } \\\n+ }\n+\n+template <typename T>\n+void TransposeTestTestRefOps1D() {\n+ // Basic 1D identity.\n+ std::vector<T> out;\n+ tflite::testing::RunTestPermutation<T>({3}, {0}, &out);\n+ std::vector<T> expected({0, 1, 2});\n+\n+ TF_LITE_MICRO_ARRAY_COMP_EQ(out, expected);\n+}\n+\n+template <typename T>\n+void TransposeTestTestRefOps2D() {\n+ std::vector<T> out;\n+ // Basic 2D.\n+ tflite::testing::RunTestPermutation<T>({3, 2}, {1, 0}, &out);\n+ TF_LITE_MICRO_ARRAY_COMP_EQ(out, std::vector<T>({0, 2, 4, 1, 3, 5}));\n+ // Identity.\n+ tflite::testing::RunTestPermutation<T>({3, 2}, {0, 1}, &out);\n+ TF_LITE_MICRO_ARRAY_COMP_EQ(out, std::vector<T>({0, 1, 2, 3, 4, 5}));\n+}\n+\n+template 
<typename T>\n+void TransposeTestTestRefOps3D() {\n+ std::vector<T> out; \n+ {\n+ std::vector<T> ref({0, 4, 8, 12, 16, 20, 1, 5, 9, 13, 17, 21,\n+ 2, 6, 10, 14, 18, 22, 3, 7, 11, 15, 19, 23});\n+ tflite::testing::RunTestPermutation<T>(/*shape=*/{2, 3, 4}, /*perms=*/{2, 0, 1}, &out); \n+ TF_LITE_MICRO_ARRAY_COMP_EQ(out, ref);\n+ }\n+\n+ // Test 3 dimensional identity transform\n+ {\n+ tflite::testing::RunTestPermutation<T>(/*shape=*/{2, 3, 4}, /*perms=*/{0, 1, 2}, &out);\n+ std::vector<T> ref(out.size());\n+ for (unsigned int k = 0; k < ref.size(); k++) ref[k] = k;\n+ TF_LITE_MICRO_ARRAY_COMP_EQ(out, ref);\n+ }\n+\n+ /**\n+ * Additional tests that mimic first case, but with different perm.\n+ */\n+ {\n+ std::vector<T> ref({0, 12, 1, 13, 2, 14, 3, 15, 4, 16, 5, 17,\n+ 6, 18, 7, 19, 8, 20, 9, 21, 10, 22, 11, 23});\n+ tflite::testing::RunTestPermutation<T>(/*shape=*/{2, 3, 4}, /*perms=*/{1, 2, 0}, &out);\n+ TF_LITE_MICRO_ARRAY_COMP_EQ(out, ref);\n+ }\n+\n+ {\n+ std::vector<T> ref({0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11,\n+ 12, 16, 20, 13, 17, 21, 14, 18, 22, 15, 19, 23});\n+ tflite::testing::RunTestPermutation<T>(/*shape=*/{2, 3, 4}, /*perms=*/{0, 2, 1}, &out);\n+ TF_LITE_MICRO_ARRAY_COMP_EQ(out, ref);\n+ }\n+\n+ {\n+ std::vector<T> ref({0, 1, 2, 3, 12, 13, 14, 15, 4, 5, 6, 7,\n+ 16, 17, 18, 19, 8, 9, 10, 11, 20, 21, 22, 23});\n+ tflite::testing::RunTestPermutation<T>(/*shape=*/{2, 3, 4}, /*perms=*/{1, 0, 2}, &out);\n+ TF_LITE_MICRO_ARRAY_COMP_EQ(out, ref);\n+ }\n+\n+ {\n+ std::vector<T> ref({0, 12, 4, 16, 8, 20, 1, 13, 5, 17, 9, 21,\n+ 2, 14, 6, 18, 10, 22, 3, 15, 7, 19, 11, 23});\n+ tflite::testing::RunTestPermutation<T>(/*shape=*/{2, 3, 4}, /*perms=*/{2, 1, 0}, &out);\n+ TF_LITE_MICRO_ARRAY_COMP_EQ(out, ref);\n+ }\n+}\n+\n+template <typename T>\n+void TransposeTestTestRefOps3D_OneInDimension() {\n+ std::vector<T> out;\n+ // Shape with 1 as first dim -> transposed.\n+ {\n+ std::vector<T> ref({0, 3, 1, 4, 2, 5});\n+ tflite::testing::RunTestPermutation<T>(/*shape=*/{1, 2, 3}, /*perms=*/{2, 0, 1}, &out);\n+ TF_LITE_MICRO_ARRAY_COMP_EQ(out, ref);\n+ }\n+ // Shape with 1 as first dim -> identity.\n+ {\n+ std::vector<T> ref({0, 1, 2, 3, 4, 5});\n+ tflite::testing::RunTestPermutation<T>(/*shape=*/{1, 2, 3}, /*perms=*/{1, 2, 0}, &out);\n+ TF_LITE_MICRO_ARRAY_COMP_EQ(out, ref);\n+ }\n+ // Shape with 1 as third dim -> transposed.\n+ {\n+ std::vector<T> ref({0, 3, 1, 4, 2, 5});\n+ tflite::testing::RunTestPermutation<T>(/*shape=*/{2, 3, 1}, /*perms=*/{1, 2, 0}, &out);\n+ TF_LITE_MICRO_ARRAY_COMP_EQ(out, ref);\n+ }\n+ // Shape with 1 as third dim -> identity.\n+ {\n+ std::vector<T> ref({0, 1, 2, 3, 4, 5});\n+ tflite::testing::RunTestPermutation<T>(/*shape=*/{2, 3, 1}, /*perms=*/{2, 0, 1}, &out);\n+ TF_LITE_MICRO_ARRAY_COMP_EQ(out, ref);\n+ }\n+}\n+\n+template <typename T>\n+void TransposeTestTestRefOps4D() {\n+ std::vector<T> out;\n+ // Basic 4d.\n+ tflite::testing::RunTestPermutation<T>({2, 3, 4, 5}, {2, 0, 1, 3}, &out);\n+ TF_LITE_MICRO_ARRAY_COMP_EQ(\n+ out,\n+ std::vector<T>(\n+ {0, 1, 2, 3, 4, 20, 21, 22, 23, 24, 40, 41, 42, 43, 44,\n+ 60, 61, 62, 63, 64, 80, 81, 82, 83, 84, 100, 101, 102, 103, 104,\n+ 5, 6, 7, 8, 9, 25, 26, 27, 28, 29, 45, 46, 47, 48, 49,\n+ 65, 66, 67, 68, 69, 85, 86, 87, 88, 89, 105, 106, 107, 108, 109,\n+ 10, 11, 12, 13, 14, 30, 31, 32, 33, 34, 50, 51, 52, 53, 54,\n+ 70, 71, 72, 73, 74, 90, 91, 92, 93, 94, 110, 111, 112, 113, 114,\n+ 15, 16, 17, 18, 19, 35, 36, 37, 38, 39, 55, 56, 57, 58, 59,\n+ 75, 76, 77, 78, 79, 95, 96, 97, 98, 99, 115, 116, 117, 118, 119}));\n+ 
tflite::testing::RunTestPermutation<T>({2, 3, 4, 5}, {0, 1, 2, 3}, &out);\n+ // Basic identity.\n+ std::vector<T> ref(out.size());\n+ for (unsigned int k = 0; k < ref.size(); k++) ref[k] = k;\n+ TF_LITE_MICRO_ARRAY_COMP_EQ(out, ref);\n+};\n+\n+TF_LITE_MICRO_TESTS_BEGIN\n+\n+// TF_LITE_MICRO_TEST(MustFail) {\n+// TF_LITE_MICRO_FAIL(\"Boom\");\n+// }\n+\n+// Safety test to ensure the array tests \n+// are passing successfully\n+TF_LITE_MICRO_TEST(ARRAY_COMP_ShouldSucceed) {\n+ std::vector<float> a({0, 1, 2, 3, 4, 5});\n+ std::vector<float> b({0, 1, 2, 3, 4, 5});\n+\n+ TF_LITE_MICRO_ARRAY_COMP_EQ(a,b);\n+}\n+\n+// Safety test to ensure the array tests \n+// are failing as expected\n+TF_LITE_MICRO_TEST(ARRAY_COMP_ShouldFail) {\n+ std::vector<float> a({0, 1, 2, 3, 4, 6});\n+ std::vector<float> b({0, 1, 2, 3, 4, 5});\n+ std::vector<float> c({0, 1, 2, 3, 4});\n+\n+ TF_LITE_MICRO_ARRAY_COMP_NE(a, b);\n+ TF_LITE_MICRO_ARRAY_COMP_NE(b, c);\n+}\n+\n+TF_LITE_MICRO_TEST(TestRefOps1D) { TransposeTestTestRefOps1D<float>(); }\n+\n+TF_LITE_MICRO_TEST(TestRefOps2DFloat) { TransposeTestTestRefOps2D<float>(); }\n+TF_LITE_MICRO_TEST(TestRefOps2DInt8) { TransposeTestTestRefOps2D<int8_t>(); }\n+TF_LITE_MICRO_TEST(TestRefOps2DUInt8) { TransposeTestTestRefOps2D<uint8_t>(); }\n+\n+TF_LITE_MICRO_TEST(TestRefOps3DFloat) { TransposeTestTestRefOps3D<float>(); }\n+TF_LITE_MICRO_TEST(TestRefOps3DInt8) { TransposeTestTestRefOps3D<int8_t>(); }\n+TF_LITE_MICRO_TEST(TestRefOps3DUInt8) { TransposeTestTestRefOps3D<uint8_t>(); }\n+\n+TF_LITE_MICRO_TEST(TestRefOps3D_OneInDimensionFloat) { TransposeTestTestRefOps3D_OneInDimension<float>(); }\n+TF_LITE_MICRO_TEST(TestRefOps3D_OneInDimensionInt8) { TransposeTestTestRefOps3D_OneInDimension<int8_t>(); }\n+TF_LITE_MICRO_TEST(TestRefOps3D_OneInDimensionUInt8) { TransposeTestTestRefOps3D_OneInDimension<uint8_t>(); }\n+\n+TF_LITE_MICRO_TEST(TestRefOps4DFloat) { TransposeTestTestRefOps4D<float>(); }\n+TF_LITE_MICRO_TEST(TestRefOps4DInt8) { TransposeTestTestRefOps4D<int8_t>(); }\n+TF_LITE_MICRO_TEST(TestRefOps4DInt16) { TransposeTestTestRefOps4D<int16_t>(); }\n+\n+\n+// TF_LITE_MICRO_TEST(TransposeCreateTensorPerm) {\n+// const int perm_dims_data[] = { 1 };\n+// const int32_t perm_int32[] = { 1 };\n+\n+// TfLiteIntArray* perm_dims = tflite::testing::IntArrayFromInts(perm_dims_data);\n+// TfLiteTensor perm_tensor = tflite::testing::CreateTensor<int32_t, kTfLiteInt32>(perm_dims, perm_int32);\n+\n+// TF_LITE_MICRO_EXPECT_EQ(perm_tensor.dims.data[0], 1);\n+// }\n+\n+// TF_LITE_MICRO_TEST(TransposeBasic1DIdentityShouldSucceed) {\n+\n+// float output_data_float[32];\n+// int8_t output_data_int8[32];\n+// uint8_t output_data_uint8[32];\n+\n+// const int input_dims[] = { 3 };\n+// const float input_float[] = { 0, 1, 2 };\n+// const int8_t input_int8[] = { 0, 1, 2 };\n+// const uint8_t input_uint8[] = { 0, 1, 2 };\n+\n+// const int perm_dims[] = { 1 };\n+// const int32_t perm_int32[] = { 1 };\n+\n+// int output_dims[] = { 3 };\n+\n+// const int golden_output_len = 3;\n+// const float golden_output_float[] = { 0, 1, 2 };;\n+// const int8_t golden_output_int8[] = { 0, 1, 2 };;\n+// const uint8_t golden_output_uint8[] = { 0, 1, 2 };;\n+\n+// const int golden_dims_len = 1;\n+// const int golden_dims[] = { 3 };\n+\n+// tflite::testing::TestTranspose(input_dims, input_float, perm_dims, perm_int32,\n+// output_dims, output_data_float,\n+// golden_output_float, golden_output_len,\n+// golden_dims, golden_dims_len);\n+\n+// tflite::testing::TestTranspose<int8_t, kTfLiteInt8>(\n+// input_dims, input_int8, 
perm_dims, perm_int32,\n+// output_dims, output_data_int8,\n+// golden_output_int8, golden_output_len,\n+// golden_dims, golden_dims_len);\n+\n+// tflite::testing::TestTranspose<uint8_t, kTfLiteUInt8>(\n+// input_dims, input_uint8, perm_dims, perm_int32,\n+// output_dims, output_data_uint8,\n+// golden_output_uint8, golden_output_len,\n+// golden_dims, golden_dims_len);\n+\n+// }\n+\n+// TF_LITE_MICRO_TEST(TransposeBasic2DShouldSucceed) {\n+\n+// float output_data_float[32];\n+// int8_t output_data_int8[32];\n+// uint8_t output_data_uint8[32];\n+\n+// const int input_dims[] = { 3, 2 };\n+// const float input_float[] = { 0, 1, 2, 3, 4, 5 };\n+// const int8_t input_int8[] = { 0, 1, 2, 3, 4, 5 };\n+// const uint8_t input_uint8[] = { 0, 1, 2, 3, 4, 5 };\n+\n+// const int perm_dims[] = { 1 };\n+// const int32_t perm_int32[] = { 1, 0 };\n+\n+// int output_dims[] = { 2, 3 };\n+\n+// const int golden_output_len = 6;\n+// const float golden_output_float[] = { 0, 2, 4, 1, 3, 5 };\n+// const int8_t golden_output_int8[] = { 0, 2, 4, 1, 3, 5 };\n+// const uint8_t golden_output_uint8[] = { 0, 2, 4, 1, 3, 5 };\n+\n+// const int golden_dims_len = 1;\n+// const int golden_dims[] = { 2, 3 };\n+\n+// tflite::testing::TestTranspose<float, kTfLiteFloat32>(\n+// input_dims, input_float, perm_dims, perm_int32,\n+// output_dims, output_data_float,\n+// golden_output_float, golden_output_len,\n+// golden_dims, golden_dims_len);\n+\n+// tflite::testing::TestTranspose<int8_t, kTfLiteInt8>(\n+// input_dims, input_int8, perm_dims, perm_int32,\n+// output_dims, output_data_int8,\n+// golden_output_int8, golden_output_len,\n+// golden_dims, golden_dims_len);\n+\n+// tflite::testing::TestTranspose<uint8_t, kTfLiteUInt8>(\n+// input_dims, input_uint8, perm_dims, perm_int32,\n+// output_dims, output_data_uint8,\n+// golden_output_uint8, golden_output_len,\n+// golden_dims, golden_dims_len);\n+// }\n+\n+// TF_LITE_MICRO_TEST(TransposeBasic3D) {\n+\n+// float output_data_float[32];\n+// int8_t output_data_int8[32];\n+// uint8_t output_data_uint8[32];\n+\n+// const int input_dims[] = { 1, 2, 3 };\n+// const float input_float[] = { 0, 1, 2, 3, 4, 5 };\n+// const int8_t input_int8[] = { 0, 1, 2, 3, 4, 5 };\n+// const uint8_t input_uint8[] = { 0, 1, 2, 3, 4, 5 };\n+\n+// const int perm_dims[] = { 3 };\n+// const int32_t perm_int32[] = { 2, 0, 1 };\n+\n+// int output_dims[] = { 2, 3 };\n+\n+// const int golden_output_len = 6;\n+// const float golden_output_float[] = { 0, 2, 4, 1, 3, 5 };\n+// const int8_t golden_output_int8[] = { 0, 2, 4, 1, 3, 5 };\n+// const uint8_t golden_output_uint8[] = { 0, 2, 4, 1, 3, 5 };\n+\n+// const int golden_dims_len = 1;\n+// const int golden_dims[] = { 2, 3 };\n+\n+// tflite::testing::TestTranspose<float, kTfLiteFloat32>(\n+// input_dims, input_float, perm_dims, perm_int32,\n+// output_dims, output_data_float,\n+// golden_output_float, golden_output_len,\n+// golden_dims, golden_dims_len);\n+\n+// tflite::testing::TestTranspose<int8_t, kTfLiteInt8>(\n+// input_dims, input_int8, perm_dims, perm_int32,\n+// output_dims, output_data_int8,\n+// golden_output_int8, golden_output_len,\n+// golden_dims, golden_dims_len);\n+\n+// tflite::testing::TestTranspose<uint8_t, kTfLiteUInt8>(\n+// input_dims, input_uint8, perm_dims, perm_int32,\n+// output_dims, output_data_uint8,\n+// golden_output_uint8, golden_output_len,\n+// golden_dims, golden_dims_len);\n+\n+// }\n+\n+\n+// TF_LITE_MICRO_TEST(TransposeBasic4DShouldSucceed) {\n+\n+// float output_data_float[64];\n+// int8_t output_data_int8[64];\n+// uint8_t 
output_data_uint8[64];\n+\n+// const int input_dims[] = { 2, 1, 5, 4 };\n+// const float input_float[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40 };\n+// const int8_t input_int8[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40 };\n+// const uint8_t input_uint8[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40 };\n+\n+// const int perm_dims[] = { 1 };\n+// const int32_t perm_int32[] = { 3, 2, 1, 0 };\n+\n+// int output_dims[] = { 4 };\n+\n+// const int golden_output_len = 40;\n+\n+// const float golden_output_float[] = { 1, 5, 9, 13, 17, 2, 6, 10, 14, 18, 3, 7, 11, 15, 19, 4, 8, 12, 16, 20, 21, 25, 29, 33, 37, 22, 26, 30, 34, 38, 23, 27, 31, 35, 39, 24, 28, 32, 36, 40 };\n+// const int8_t golden_output_int8[] = { 1, 5, 9, 13, 17, 2, 6, 10, 14, 18, 3, 7, 11, 15, 19, 4, 8, 12, 16, 20, 21, 25, 29, 33, 37, 22, 26, 30, 34, 38, 23, 27, 31, 35, 39, 24, 28, 32, 36, 40 };\n+// const uint8_t golden_output_uint8[] = { 1, 5, 9, 13, 17, 2, 6, 10, 14, 18, 3, 7, 11, 15, 19, 4, 8, 12, 16, 20, 21, 25, 29, 33, 37, 22, 26, 30, 34, 38, 23, 27, 31, 35, 39, 24, 28, 32, 36, 40 };\n+\n+// const int golden_dims_len = 4;\n+// const int golden_dims[] = { 2, 4, 5, 1 };\n+\n+// tflite::testing::TestTranspose(\n+// input_dims, input_float, perm_dims, perm_int32,\n+// output_dims, output_data_float,\n+// golden_output_float, golden_output_len,\n+// golden_dims, golden_dims_len);\n+\n+// tflite::testing::TestTranspose<int8_t, kTfLiteInt8>(\n+// input_dims, input_int8, perm_dims, perm_int32,\n+// output_dims, output_data_int8,\n+// golden_output_int8, golden_output_len,\n+// golden_dims, golden_dims_len);\n+\n+// tflite::testing::TestTranspose<uint8_t, kTfLiteUInt8>(\n+// input_dims, input_uint8, perm_dims, perm_int32,\n+// output_dims, output_data_uint8,\n+// golden_output_uint8, golden_output_len,\n+// golden_dims, golden_dims_len);\n+\n+// }\n+\n+TF_LITE_MICRO_TESTS_END\n+\n+#undef TF_LITE_MICRO_ARRAY_COMP_EQ\n+#undef TF_LITE_MICRO_ARRAY_COMP_NE ",
"filename": "tensorflow/lite/micro/kernels/transpose_test.cc",
"status": "added"
},
{
"diff": "@@ -223,6 +223,11 @@ class MicroMutableOpResolver : public MicroOpResolver {\n ParseExpandDims);\n }\n \n+ TfLiteStatus AddFill() {\n+ return AddBuiltin(BuiltinOperator_FILL,\n+ tflite::Register_FILL(), ParseFill);\n+ }\n+\n TfLiteStatus AddFloor() {\n return AddBuiltin(BuiltinOperator_FLOOR,\n tflite::ops::micro::Register_FLOOR(), ParseFloor);\n@@ -466,6 +471,11 @@ class MicroMutableOpResolver : public MicroOpResolver {\n ParseTanh);\n }\n \n+ TfLiteStatus AddTranspose() {\n+ return AddBuiltin(BuiltinOperator_TRANSPOSE,\n+ \t\t Register_TRANSPOSE(), ParseTranspose);\n+ }\n+\n TfLiteStatus AddTransposeConv() {\n return AddBuiltin(BuiltinOperator_TRANSPOSE_CONV,\n tflite::Register_TRANSPOSE_CONV(), ParseTransposeConv);",
"filename": "tensorflow/lite/micro/micro_mutable_op_resolver.h",
"status": "modified"
},
{
"diff": "@@ -315,6 +315,7 @@ tensorflow/lite/micro/kernels/strided_slice_test.cc \\\n tensorflow/lite/micro/kernels/sub_test.cc \\\n tensorflow/lite/micro/kernels/svdf_test.cc \\\n tensorflow/lite/micro/kernels/tanh_test.cc \\\n+tensorflow/lite/micro/kernels/transpose_test.cc \\\n tensorflow/lite/micro/kernels/transpose_conv_test.cc \\\n tensorflow/lite/micro/kernels/unpack_test.cc \\\n tensorflow/lite/micro/kernels/zeros_like_test.cc \\\n@@ -381,6 +382,7 @@ tensorflow/lite/micro/kernels/sub.cc \\\n tensorflow/lite/micro/kernels/svdf.cc \\\n tensorflow/lite/micro/kernels/svdf_common.cc \\\n tensorflow/lite/micro/kernels/tanh.cc \\\n+tensorflow/lite/micro/kernels/transpose.cc \\\n tensorflow/lite/micro/kernels/transpose_conv.cc \\\n tensorflow/lite/micro/kernels/unpack.cc \\\n tensorflow/lite/micro/kernels/zeros_like.cc\n@@ -470,6 +472,7 @@ tensorflow/lite/kernels/internal/reference/sub.h \\\n tensorflow/lite/kernels/internal/reference/logistic.h \\\n tensorflow/lite/kernels/internal/reference/strided_slice.h \\\n tensorflow/lite/kernels/internal/reference/tanh.h \\\n+tensorflow/lite/kernels/internal/reference/transpose.h \\\n tensorflow/lite/kernels/internal/reference/transpose_conv.h \\\n tensorflow/lite/kernels/internal/cppmath.h \\\n tensorflow/lite/kernels/internal/max.h \\",
"filename": "tensorflow/lite/micro/tools/make/Makefile",
"status": "modified"
}
]
}
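For readers following this port, a minimal usage sketch is below. It assumes a tree where this PR is applied, so `AddTranspose()` exists on `MicroMutableOpResolver` (as in the diff above); the arena size, the model symbol `g_model`, and the extra `AddFullyConnected()` call are placeholders for whatever a real application actually uses.

```cpp
#include <cstdint>

#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Placeholders: a real application supplies its own model data and arena size.
extern const unsigned char g_model[];
constexpr int kTensorArenaSize = 10 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];

void RunModelWithTranspose() {
  static tflite::MicroErrorReporter error_reporter;
  const tflite::Model* model = tflite::GetModel(g_model);

  // Register only the ops the model needs; TRANSPOSE is pulled in through
  // the AddTranspose() call introduced by this PR.
  static tflite::MicroMutableOpResolver<2> resolver;
  resolver.AddTranspose();
  resolver.AddFullyConnected();

  static tflite::MicroInterpreter interpreter(
      model, resolver, tensor_arena, kTensorArenaSize, &error_reporter);
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(&error_reporter, "AllocateTensors() failed");
    return;
  }
  // Fill interpreter.input(0), call interpreter.Invoke(), read interpreter.output(0).
}
```

With the kernel registered, the `Didn't find op for builtin opcode 'TRANSPOSE'` error from the PR description should no longer be hit during allocation.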
|
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator ADD_N from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro making minimal changes and not including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test\r\n",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46162\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46162\">No</a>\n",
"created_at": "2021-04-12T10:38:38Z"
}
],
"number": 46162,
"title": "micro: port op ADD_N from lite"
}
|
{
"body": "Added support for INT8 to the ADD_N operator.\r\n\r\nReference Issue #46162",
"number": 48160,
"review_comments": [
{
"body": "Could you leave this as-is, unless there's a reason this is necessary? Even though it's cleaner your way, it's nice to keep the code churn to a minimum.",
"created_at": "2021-03-29T17:43:40Z"
},
{
"body": "Fixed.",
"created_at": "2021-03-29T18:24:12Z"
}
],
"title": "micro: add INT8 support to ADD_N op"
}
|
{
"commits": [
{
"message": "micro: add INT8 support to ADD_N op\n\nAdded support for INT8 to the ADD_N operator.\n\nReference Issue #46162"
},
{
"message": "fix for requested change to PR"
}
],
"files": [
{
"diff": "@@ -15,7 +15,10 @@ limitations under the License.\n #ifndef TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_ADD_N_H_\n #define TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_ADD_N_H_\n \n-#include \"tensorflow/lite/kernels/internal/types.h\"\n+#include <algorithm>\n+#include <limits>\n+\n+#include \"tensorflow/lite/kernels/internal/common.h\"\n \n namespace tflite {\n namespace reference_ops {\n@@ -36,6 +39,47 @@ inline void AddN(const RuntimeShape& input_shape, const size_t num_inputs,\n }\n }\n \n+inline void AddN(const ArithmeticParams& params,\n+ const RuntimeShape& input_shape, const size_t num_inputs,\n+ const int8_t* const* input_data, int8_t* output_data) {\n+ TFLITE_DCHECK_LE(params.quantized_activation_min,\n+ params.quantized_activation_max);\n+ // Input offset is negative input zero point. Activation tensors are\n+ // asymmetric quantized so they span the full int8 range.\n+ // All inputs should have same zero-point and scale, this is checked during\n+ // Prepare stage.\n+ TFLITE_DCHECK_GE(-params.input1_offset, std::numeric_limits<int8_t>::min());\n+ TFLITE_DCHECK_LE(-params.input1_offset, std::numeric_limits<int8_t>::max());\n+\n+ // All inputs and output should have the same shape, this is checked during\n+ // Prepare stage.\n+ const size_t size = input_shape.FlatSize();\n+ for (size_t i = 0; i < size; ++i) {\n+ // accumulate in scaled_x before clamping to avoid overflow\n+ const int32_t x = params.input1_offset; // x = 0\n+ const int32_t shifted_x = x * (1 << params.left_shift);\n+ int32_t scaled_x = MultiplyByQuantizedMultiplierSmallerThanOneExp(\n+ shifted_x, params.input1_multiplier, params.input1_shift);\n+\n+ for (size_t j = 0; j < num_inputs; ++j) {\n+ const int32_t y = params.input1_offset + input_data[j][i];\n+ const int32_t shifted_y = y * (1 << params.left_shift);\n+ int32_t scaled_y = MultiplyByQuantizedMultiplierSmallerThanOneExp(\n+ shifted_y, params.input1_multiplier, params.input1_shift);\n+ scaled_x += scaled_y;\n+ }\n+\n+ const int32_t raw_output =\n+ MultiplyByQuantizedMultiplierSmallerThanOneExp(\n+ scaled_x, params.output_multiplier, params.output_shift) +\n+ params.output_offset;\n+ const int32_t clamped_output =\n+ std::min(params.quantized_activation_max,\n+ std::max(params.quantized_activation_min, raw_output));\n+ output_data[i] = static_cast<int8_t>(clamped_output);\n+ }\n+}\n+\n } // namespace reference_ops\n } // namespace tflite\n ",
"filename": "tensorflow/lite/kernels/internal/reference/add_n.h",
"status": "modified"
},
{
"diff": "@@ -18,6 +18,7 @@ limitations under the License.\n #include <cstdint>\n \n #include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/kernels/internal/quantization_util.h\"\n #include \"tensorflow/lite/kernels/internal/tensor_ctypes.h\"\n #include \"tensorflow/lite/kernels/kernel_util.h\"\n #include \"tensorflow/lite/micro/kernels/kernel_util.h\"\n@@ -28,6 +29,22 @@ namespace {\n constexpr int kInputTensor0 = 0;\n constexpr int kOutputTensor = 0;\n \n+constexpr int kAddNIntegerShift = 20;\n+\n+// only used with INT8 tensors\n+struct OpData {\n+ int32_t output_activation_min;\n+ int32_t output_activation_max;\n+ int32_t input_offset;\n+ int32_t output_offset;\n+ int32_t input_multiplier;\n+ int32_t output_multiplier;\n+ int input_shift;\n+ int output_shift;\n+ int left_shift;\n+ int scratch_index;\n+};\n+\n TfLiteStatus CalculateOpData(TfLiteContext* context, TfLiteNode* node) {\n int num_inputs = NumInputs(node);\n TF_LITE_ENSURE(context, num_inputs >= 2);\n@@ -47,19 +64,61 @@ TfLiteStatus CalculateOpData(TfLiteContext* context, TfLiteNode* node) {\n TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, i, &input));\n TF_LITE_ENSURE(context, HaveSameShapes(input_tensor_first, input));\n TF_LITE_ENSURE_TYPES_EQ(context, input_tensor_first->type, input->type);\n+\n+ // Check that all INT8 input tensors have the same zero-point and scale.\n+ if (input_tensor_first->type == kTfLiteInt8) {\n+ TF_LITE_ENSURE(context, input_tensor_first->params.zero_point ==\n+ input->params.zero_point);\n+ TF_LITE_ENSURE(context,\n+ input_tensor_first->params.scale == input->params.scale);\n+ }\n }\n \n- // Allocate scratch buffer space for pointer to each tensor's data\n- // and store the scratch buffer index in the node's user_data\n if (output->type == kTfLiteFloat32) {\n+ // Allocate scratch buffer space for pointer to each tensor's data\n+ // and store the scratch buffer index in the node's user_data\n int scratch_index;\n size_t scratch_size = sizeof(float*) * num_inputs;\n TF_LITE_ENSURE_OK(context, context->RequestScratchBufferInArena(\n context, scratch_size, &scratch_index));\n node->user_data =\n reinterpret_cast<decltype(node->user_data)>(scratch_index);\n+ } else if (output->type == kTfLiteInt8) {\n+ node->user_data =\n+ context->AllocatePersistentBuffer(context, sizeof(OpData));\n+ OpData* data = static_cast<OpData*>(node->user_data);\n+\n+ // Allocate scratch buffer space for pointer to each tensor's data\n+ // and store the scratch buffer index in OpData\n+ size_t scratch_size = sizeof(int8_t*) * num_inputs;\n+ TF_LITE_ENSURE_OK(\n+ context, context->RequestScratchBufferInArena(context, scratch_size,\n+ &data->scratch_index));\n+\n+ // 8bit -> 8bit general quantized path, with general rescalings\n+ data->input_offset = -input_tensor_first->params.zero_point;\n+ data->output_offset = output->params.zero_point;\n+ data->left_shift = kAddNIntegerShift;\n+ const double twice_max_input_scale =\n+ 2 * static_cast<double>(input_tensor_first->params.scale);\n+ const double real_input_multiplier =\n+ static_cast<double>(input_tensor_first->params.scale) /\n+ twice_max_input_scale;\n+ const double real_output_multiplier =\n+ twice_max_input_scale /\n+ ((1 << data->left_shift) * static_cast<double>(output->params.scale));\n+\n+ QuantizeMultiplierSmallerThanOneExp(\n+ real_input_multiplier, &data->input_multiplier, &data->input_shift);\n+\n+ QuantizeMultiplierSmallerThanOneExp(\n+ real_output_multiplier, &data->output_multiplier, &data->output_shift);\n+\n+ 
TF_LITE_ENSURE_STATUS(CalculateActivationRangeQuantized(\n+ context, kTfLiteActNone, output, &data->output_activation_min,\n+ &data->output_activation_max));\n } else {\n- TF_LITE_KERNEL_LOG(context, \"ADD_N only supports FLOAT32, got %s.\",\n+ TF_LITE_KERNEL_LOG(context, \"ADD_N only supports FLOAT32 and INT8, got %s.\",\n TfLiteTypeGetName(output->type));\n return kTfLiteError;\n }\n@@ -72,12 +131,10 @@ TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n }\n \n template <typename T>\n-void EvalAddN(TfLiteContext* context, TfLiteNode* node,\n- TfLiteEvalTensor* output) {\n+inline const T** CopyInputsToScratchBuffer(TfLiteContext* context,\n+ TfLiteNode* node,\n+ const int scratch_index) {\n int num_inputs = NumInputs(node);\n-\n- int scratch_index =\n- static_cast<int>(reinterpret_cast<intptr_t>(node->user_data));\n void* scratch_buffer = context->GetScratchBuffer(context, scratch_index);\n const T** all_inputs = static_cast<decltype(all_inputs)>(scratch_buffer);\n for (int i = 0; i < num_inputs; i++) {\n@@ -86,17 +143,56 @@ void EvalAddN(TfLiteContext* context, TfLiteNode* node,\n all_inputs[i] = tflite::micro::GetTensorData<T>(next_input);\n }\n \n+ return all_inputs;\n+}\n+\n+template <typename T>\n+void EvalAddN(TfLiteContext* context, TfLiteNode* node,\n+ TfLiteEvalTensor* output) {\n+ int num_inputs = NumInputs(node);\n+\n+ int scratch_index =\n+ static_cast<int>(reinterpret_cast<intptr_t>(node->user_data));\n+ const T** all_inputs =\n+ CopyInputsToScratchBuffer<T>(context, node, scratch_index);\n+\n reference_ops::AddN<T>(tflite::micro::GetTensorShape(output), num_inputs,\n all_inputs, tflite::micro::GetTensorData<T>(output));\n }\n \n+template <typename T>\n+void EvalAddNQuantized(TfLiteContext* context, TfLiteNode* node,\n+ TfLiteEvalTensor* output) {\n+ int num_inputs = NumInputs(node);\n+\n+ OpData* data = static_cast<OpData*>(node->user_data);\n+ const T** all_inputs =\n+ CopyInputsToScratchBuffer<T>(context, node, data->scratch_index);\n+\n+ ArithmeticParams params;\n+ params.left_shift = data->left_shift;\n+ params.input1_offset = data->input_offset;\n+ params.input1_multiplier = data->input_multiplier;\n+ params.input1_shift = data->input_shift;\n+ params.output_offset = data->output_offset;\n+ params.output_multiplier = data->output_multiplier;\n+ params.output_shift = data->output_shift;\n+ SetActivationParams(data->output_activation_min, data->output_activation_max,\n+ ¶ms);\n+\n+ reference_ops::AddN(params, tflite::micro::GetTensorShape(output), num_inputs,\n+ all_inputs, tflite::micro::GetTensorData<T>(output));\n+}\n+\n TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n TfLiteEvalTensor* output =\n tflite::micro::GetEvalOutput(context, node, kOutputTensor);\n if (output->type == kTfLiteFloat32) {\n EvalAddN<float>(context, node, output);\n+ } else if (output->type == kTfLiteInt8) {\n+ EvalAddNQuantized<int8_t>(context, node, output);\n } else {\n- TF_LITE_KERNEL_LOG(context, \"ADD_N only supports FLOAT32, got %s.\",\n+ TF_LITE_KERNEL_LOG(context, \"ADD_N only supports FLOAT32 and INT8, got %s.\",\n TfLiteTypeGetName(output->type));\n return kTfLiteError;\n }",
"filename": "tensorflow/lite/micro/kernels/add_n.cc",
"status": "modified"
},
{
"diff": "@@ -69,6 +69,55 @@ void TestAddN(const int* input_dims_data, const T* const* input_data,\n }\n }\n \n+// min/max are used to compute scale, zero-point, compare tolerance\n+template <typename T, int kNumInputs, int kOutputSize>\n+struct TestQuantParams {\n+ float data_min; // input and output data minimum value\n+ float data_max; // input and output data maximum value\n+ T input_data[kNumInputs][kOutputSize]; // quantized input storage\n+ T output_data[kOutputSize]; // quantized output storage\n+};\n+\n+// for quantized Add, the error shouldn't exceed step\n+template <typename T>\n+float GetTolerance(float min, float max) {\n+ float kQuantizedStep =\n+ 2.0f * (max - min) /\n+ (std::numeric_limits<T>::max() - std::numeric_limits<T>::min());\n+ return kQuantizedStep;\n+}\n+\n+template <typename T, int kNumInputs, int kOutputSize>\n+void TestAddNQuantized(TestQuantParams<T, kNumInputs, kOutputSize>* params,\n+ const int* input_dims_data,\n+ const float* const* input_data, const int* expected_dims,\n+ const float* expected_data, float* output_data) {\n+ TF_LITE_MICRO_EXPECT_LE(kNumInputs, kMaxInputTensors);\n+\n+ TfLiteIntArray* input_dims = IntArrayFromInts(input_dims_data);\n+ TfLiteIntArray* output_dims = IntArrayFromInts(expected_dims);\n+\n+ const float scale = ScaleFromMinMax<T>(params->data_min, params->data_max);\n+ const int zero_point =\n+ ZeroPointFromMinMax<T>(params->data_min, params->data_max);\n+\n+ TfLiteTensor tensors[kMaxInputTensors + kMaxOutputTensors] = {};\n+ for (int i = 0; i < kNumInputs; i++) {\n+ tensors[i] = CreateQuantizedTensor(input_data[i], params->input_data[i],\n+ input_dims, scale, zero_point);\n+ }\n+ tensors[kNumInputs] = CreateQuantizedTensor(params->output_data, output_dims,\n+ scale, zero_point);\n+\n+ ExecuteAddN(tensors, kNumInputs + 1);\n+\n+ Dequantize(params->output_data, kOutputSize, scale, zero_point, output_data);\n+ const float kTolerance = GetTolerance<T>(params->data_min, params->data_max);\n+ for (int i = 0; i < kOutputSize; i++) {\n+ TF_LITE_MICRO_EXPECT_NEAR(expected_data[i], output_data[i], kTolerance);\n+ }\n+}\n+\n } // namespace\n } // namespace testing\n } // namespace tflite\n@@ -94,4 +143,28 @@ TF_LITE_MICRO_TEST(FloatAddNOpAddMultipleTensors) {\n output_data);\n }\n \n+TF_LITE_MICRO_TEST(Int8AddNOpAddMultipleTensors) {\n+ constexpr int kDims[] = {4, 1, 2, 2, 1};\n+ constexpr float kInput1[] = {-2.0, 0.2, 0.7, 0.8};\n+ constexpr float kInput2[] = {0.1, 0.2, 0.3, 0.5};\n+ constexpr float kInput3[] = {0.5, 0.1, 0.1, 0.2};\n+ constexpr float kExpect[] = {-1.4, 0.5, 1.1, 1.5};\n+ const float* kInputs[tflite::testing::kMaxInputTensors] = {\n+ kInput1,\n+ kInput2,\n+ kInput3,\n+ };\n+ constexpr int kInputCount = std::extent<decltype(kInputs)>::value;\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ tflite::testing::TestQuantParams<int8_t, kInputCount, kOutputCount> params =\n+ {};\n+ params.data_min = -3.0;\n+ params.data_max = 3.0;\n+\n+ tflite::testing::TestAddNQuantized<int8_t, kInputCount, kOutputCount>(\n+ ¶ms, kDims, kInputs, kDims, kExpect, output_data);\n+}\n+\n TF_LITE_MICRO_TESTS_END",
"filename": "tensorflow/lite/micro/kernels/add_n_test.cc",
"status": "modified"
}
]
}
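The new int8 path mirrors the existing float reference kernel, so as a quick orientation here is a self-contained call into the float overload using the same input values as the quantized test above. This is an illustrative sketch only; the header path and the `reference_ops::AddN` signature are taken from the diff, and the function name is made up.

```cpp
#include <cstdint>

#include "tensorflow/lite/kernels/internal/reference/add_n.h"
#include "tensorflow/lite/kernels/internal/types.h"

// Sums three 4-element tensors with the float reference kernel.
void AddNFloatExample() {
  const int32_t dims[] = {4};
  const tflite::RuntimeShape shape(1, dims);
  const float in1[] = {-2.0f, 0.2f, 0.7f, 0.8f};
  const float in2[] = {0.1f, 0.2f, 0.3f, 0.5f};
  const float in3[] = {0.5f, 0.1f, 0.1f, 0.2f};
  const float* inputs[] = {in1, in2, in3};
  float out[4];
  tflite::reference_ops::AddN<float>(shape, /*num_inputs=*/3, inputs, out);
  // out is {-1.4, 0.5, 1.1, 1.5}, the same values the int8 test expects
  // after dequantization.
}
```

The quantized test then compares the dequantized output against these values with a tolerance of one quantization step, i.e. 2 * (3 - (-3)) / 255 ≈ 0.047 for the int8 range configured in `TestQuantParams`.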
|
{
"body": "@tensorflow/micro\r\n\r\n**System information**\r\n- Host OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04.5 LTS\r\n- TensorFlow installed from (source or binary): Source built locally\r\n- Tensorflow version (commit SHA if source): 771c870a81c1025c4886a4fb60ca33971e98c577\r\n- Target platform (e.g. Arm Mbed OS, Arduino Nano 33 etc.): nRF52840 DK\r\n\r\n**Describe the problem**\r\nI successfully converted my model to TF Lite for Microcontrollers and I am now trying to consume it from an nRF52840 DK target, but I get a failure when allocating the tensors:\r\n```\r\n[ERR] ./model/debug_log.cc:12: Didn't find op for builtin opcode 'FILL' version '1'. An older version of this builtin might be supported. Are you using an old TFLite binary with a newer model?\r\n[ERR] ./model/debug_log.cc:12: \r\n[ERR] ./model/debug_log.cc:12: Failed to get registration from op code FILL\r\n[ERR] ./model/debug_log.cc:12: \r\n[ERR] ./model/debug_log.cc:12: Failed starting model allocation.\r\n[ERR] ./model/debug_log.cc:12: \r\n[ERR] ./model/debug_log.cc:12: AllocateTensors() failed\r\n```\r\n\r\nThe model is proprietary software, so unfortunately I cannot share more details about it.\r\n\r\nI already have a proposed resolution of the issue and I'm opening this Issue ticket in order to open a PR.\r\n\r\n**Please provide the exact sequence of commands/steps when you ran into the problem**\r\nThe first step is building TFLM locally:\r\n```\r\ncd tensorflow\r\nmake -f tensorflow/lite/micro/tools/make/Makefile \\\r\n TARGET=cortex_m_generic \\\r\n TARGET_ARCH=cortex-m4+fp \\\r\n TARGET_TOOLCHAIN_ROOT=/opt/gcc-arm-none-eabi-9-2020-q2-update/bin/ \\\r\n OPTIMIZED_KERNEL_DIR=cmsis_nn microlite\r\n```\r\nOnce `libtensorflow-microlite.a` is built I can include it as a library in the Makefile of my project.\r\n\r\nThe second step is to consume my TFLM model from the target nRF52840. The code fails within this function call:\r\n```\r\nvoid model_init(){\r\n\t// Set up logging\r\n\tstatic tflite::MicroErrorReporter micro_error_reporter;\r\n\terror_reporter = &micro_error_reporter;\r\n\r\n\t// Map the model into a usable data structure\r\n\tmodel = ::tflite::GetModel(g_model);\r\n\tif (model->version() != TFLITE_SCHEMA_VERSION) {\r\n\t\tTF_LITE_REPORT_ERROR(error_reporter,\r\n\t\t\t\t\"Model provided is schema version %d not equal \"\r\n\t\t\t\t\"to supported version %d.\\n\",\r\n\t\t\t\tmodel->version(), TFLITE_SCHEMA_VERSION);\r\n\t}\r\n\r\n\t// This pulls in all the operation implementations we need\r\n\tstatic tflite::AllOpsResolver resolver;\r\n\tresolver.AddExpandDims();\r\n\r\n\t// Build an interpreter to run the model with.\r\n\tstatic tflite::MicroInterpreter static_interpreter(\r\n\t\t\tmodel, resolver, tensor_arena, kTensorArenaSize, error_reporter);\r\n\tinterpreter = &static_interpreter;\r\n\r\n\t// Allocate memory from the tensor_arena for the model's tensors.\r\n\tTfLiteStatus allocate_status = interpreter->AllocateTensors();\r\n\tif (allocate_status != kTfLiteOk){\r\n\t\tTF_LITE_REPORT_ERROR(error_reporter, \"AllocateTensors() failed\");\r\n\t\treturn;\r\n\t}\r\n\r\n\t// Obtain pointers to the model's input and output tensors.\r\n\tinput = interpreter->input(0);\r\n\toutput = interpreter->output(0);\r\n}\r\n```",
"comments": [
{
"body": "Hi,\r\n\r\nI created a PR with a fix [(48144)](https://github.com/tensorflow/tensorflow/pull/48144).\r\n\r\nApologies if did not respecting your contributing guidelines this time.\r\n\r\nCheers",
"created_at": "2021-03-29T08:55:16Z"
},
{
"body": "Hi, could you please tell me how big is your libtensorflow-microlite.a?",
"created_at": "2021-03-30T08:43:31Z"
},
{
"body": "> Hi, could you please tell me how big is your libtensorflow-microlite.a?\r\n\r\nHi @napoleonwar, my `libtensorflow-microlite.a` seems to be 1.0 MB on my host. I currently don't know the size on the target though.",
"created_at": "2021-03-30T09:11:34Z"
},
{
"body": "> Hi @napoleonwar, my `libtensorflow-microlite.a` seems to be 1.0 MB on my host. I currently don't know the size on the target though.\r\n\r\nOk Thanks! I am wondering how big is it after you compiling the hello world example if you did before. Because in my case, the binary file is 350KB for this simplest example, it will become bigger for more complex cases. Did you get some result whose size is under 100KB by using TFLM?",
"created_at": "2021-03-30T12:35:53Z"
}
],
"number": 48145,
"title": "Missing Add* function for Builtin operator FILL"
}
|
{
"body": "Fixes #48145\r\n\r\nAdded the Add* function for the missing Builtin operator Fill in TFLM. I can now consume Fill op in my Cortex-M micro-controller.",
"number": 48144,
"review_comments": [],
"title": "Added the Add* function for the missing Builtin operator Fill"
}
|
{
"commits": [
{
"message": "Added the Add* function for the missing Builtin operator Fill"
}
],
"files": [
{
"diff": "@@ -223,6 +223,11 @@ class MicroMutableOpResolver : public MicroOpResolver {\n ParseExpandDims);\n }\n \n+ TfLiteStatus AddFill() {\n+ return AddBuiltin(BuiltinOperator_FILL,\n+ tflite::Register_FILL(), ParseFill);\n+ }\n+\n TfLiteStatus AddFloor() {\n return AddBuiltin(BuiltinOperator_FLOOR,\n tflite::ops::micro::Register_FLOOR(), ParseFloor);",
"filename": "tensorflow/lite/micro/micro_mutable_op_resolver.h",
"status": "modified"
}
]
}
|
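With the `AddFill()` method from PR 48144 in place, the model from issue 48145 no longer has to rely on `AllOpsResolver`; the ops it uses can be registered explicitly. The following is a hedged sketch only: the op capacity `<3>` and the extra ops listed are illustrative assumptions, not taken from the (proprietary) model.

```cpp
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"

// Minimal sketch: register only the ops the model needs, including the
// previously missing FILL builtin (available once PR 48144 is merged).
// The op capacity <3> and the op list are assumptions for illustration.
tflite::MicroMutableOpResolver<3>* BuildOpResolver() {
  static tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddFill();            // the builtin that was missing
  resolver.AddExpandDims();
  resolver.AddFullyConnected();  // placeholder for the model's other ops
  return &resolver;
}
```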
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator L2_POOL_2D from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1 (step 1): Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2 (step 2): Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\n\r\nThe next 3 steps are combined into a single PR3 with separate commits:\r\n\r\n(step 3): Copy operator from lite to micro making minimal changes and not including in the build\r\n(step 4): Delete extra code from the micro copy of the operator\r\n(step 5): Port micro copy of operator as necessary and add a corresponding test",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47814\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47814\">No</a>\n",
"created_at": "2021-06-09T16:03:51Z"
}
],
"number": 47814,
"title": "micro: port op L2_POOL_2D from lite"
}
|
{
"body": "Fix for shape data for tensors sometimes existing in the flatbuffer, which is supposed to be read-only. L2_POOL_2D now modifies the tensor shape data after relocating it.\r\n\r\nAdditional fix for Issue #47814",
"number": 48141,
"review_comments": [],
"title": "micro: L2_POOL_2D flatbuffer fix"
}
|
{
"commits": [
{
"message": "micro: L2_POOL_2D flatbuffer fix\n\nFix for shape data for tensors sometimes existing in the flatbuffer, which is supposed to be read-only. L2_POOL_2D now modifies the tensor shape data after relocating it.\n\nAdditional fix for Issue #47814"
},
{
"message": "fixes for requested changes to PR"
},
{
"message": "Clarify the CreateWritableTensorDimsWithCopy() allowed usage."
}
],
"files": [
{
"diff": "@@ -49,5 +49,29 @@ PaddingType RuntimePaddingType(TfLitePadding padding) {\n }\n }\n \n+// Relocate tensor dims from FlatBuffer to the persistent storage arena.\n+// The old dims data is copied to the new storage area.\n+// The tensor and eval_tensor must be the same tensor.\n+// Only use during Prepare phase.\n+TfLiteStatus CreateWritableTensorDimsWithCopy(TfLiteContext* context,\n+ TfLiteTensor* tensor,\n+ TfLiteEvalTensor* eval_tensor) {\n+ TF_LITE_ENSURE(context, tensor != nullptr);\n+ TF_LITE_ENSURE(context, eval_tensor != nullptr);\n+ int ranks = tensor->dims->size;\n+ size_t alloc_size = TfLiteIntArrayGetSizeInBytes(ranks);\n+ TfLiteIntArray* new_dims = static_cast<TfLiteIntArray*>(\n+ context->AllocatePersistentBuffer(context, alloc_size));\n+ TfLiteIntArray* old_dims = tensor->dims;\n+ new_dims->size = ranks;\n+ tensor->dims = new_dims;\n+ eval_tensor->dims = new_dims;\n+ for (int i = 0; i < ranks; i++) {\n+ new_dims->data[i] = old_dims->data[i];\n+ }\n+\n+ return kTfLiteOk;\n+}\n+\n } // namespace micro\n } // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/kernel_util.cc",
"status": "modified"
},
{
"diff": "@@ -72,6 +72,14 @@ bool HaveSameShapes(const TfLiteEvalTensor* input1,\n \n PaddingType RuntimePaddingType(TfLitePadding padding);\n \n+// Relocate tensor dims from FlatBuffer to the persistent storage arena.\n+// The old dims data is copied to the new storage area.\n+// The tensor and eval_tensor must be the same tensor.\n+// Only use during Prepare phase.\n+TfLiteStatus CreateWritableTensorDimsWithCopy(TfLiteContext* context,\n+ TfLiteTensor* tensor,\n+ TfLiteEvalTensor* eval_tensor);\n+\n } // namespace micro\n } // namespace tflite\n ",
"filename": "tensorflow/lite/micro/kernels/kernel_util.h",
"status": "modified"
},
{
"diff": "@@ -70,7 +70,13 @@ TfLiteStatus L2Prepare(TfLiteContext* context, TfLiteNode* node) {\n // The dims storage is expected to be the same area in memory\n // for both TfLiteTensor and TfLiteEvalTensor. This is important\n // because TfLiteTensor in the MicroInterpreter is a temporary\n- // allocation.\n+ // allocation. For the KernelRunner interpreter, TfLiteEvalTensor\n+ // is a temporary allocation. We must therefore relocate the dims\n+ // from the FlatBuffer to the persistant storage arena.\n+ TfLiteEvalTensor* output_eval =\n+ tflite::micro::GetEvalOutput(context, node, kOutputTensor);\n+ TF_LITE_ENSURE_OK(context, tflite::micro::CreateWritableTensorDimsWithCopy(\n+ context, output, output_eval));\n output->dims->data[kBatchRank] = batches;\n output->dims->data[kHeightRank] = out_height;\n output->dims->data[kWidthRank] = out_width;",
"filename": "tensorflow/lite/micro/kernels/l2_pool_2d.cc",
"status": "modified"
},
{
"diff": "@@ -84,7 +84,9 @@ void TestL2Pool2D(const L2Pool2DTestParams& params, const int* input_dims_data,\n params.compare_tolerance);\n }\n for (int i = 0; i < expected_dims->size; i++) {\n- TF_LITE_MICRO_EXPECT_EQ(expected_dims->data[i], output_dims->data[i]);\n+ // output dims will have been relocated during prepare phase,\n+ // so use the tensor dims pointer.\n+ TF_LITE_MICRO_EXPECT_EQ(expected_dims->data[i], tensors[1].dims->data[i]);\n }\n }\n ",
"filename": "tensorflow/lite/micro/kernels/l2_pool_2d_test.cc",
"status": "modified"
}
]
}
|
{
"body": "It seems that the path isn't split correctly. Filenames come with an extra `\\` prefix. Minimal reproducible example:\r\n\r\n```python3\r\nimport os\r\nimport tensorflow as tf\r\n\r\n\r\ntf.io.gfile.makedirs(\"ram://folder\")\r\nwith tf.io.gfile.GFile(\"ram://folder/file.txt\", mode=\"w\") as f:\r\n f.write(\"data\")\r\n\r\nfor root, _, filenames in tf.io.gfile.walk(\"ram://folder\"):\r\n for filename in filenames:\r\n assert tf.io.gfile.exists(os.path.join(root, filename))\r\n```\r\n\r\nThis passes on *nix but not on Windows. Here is a quick CI run in GitHub actions showing this: https://github.com/adriangb/tensorflow-test/actions/runs/688190284\r\n\r\nccing @mihaimaruseac @bhack ",
"comments": [
{
"body": "Some relevant discussion: https://github.com/tensorflow/tensorflow/pull/39609#discussion_r600667357",
"created_at": "2021-03-25T23:11:28Z"
},
{
"body": "Thanks",
"created_at": "2021-03-25T23:11:45Z"
},
{
"body": "The problem in this specific example is that you could not use `os.path.join` cause it will add in python `\\` native separator on Windows that is not the ram filesystem separator `/`.\r\nSee https://github.com/bhack/tensorflow-test/blob/master/test.py",
"created_at": "2021-03-26T00:11:16Z"
},
{
"body": "Hmm good point, I was trying to simplify the example to not include model saving, but maybe that's the source of the problem? I reverted to [8c3dae](https://github.com/adriangb/tensorflow-test/blob/8c3dae59efbbf62e746b24ab2227307fad072466/test.py), which [does reproduce the issue](https://github.com/adriangb/tensorflow-test/runs/2198176548?check_suite_focus=true).",
"created_at": "2021-03-26T00:48:42Z"
},
{
"body": "Yes I suppose that the issue is more in the pythoh native join in model save and load. E.g. see https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/keras/saving/saved_model/save.py#L96.\r\n\r\nX filesystem separator in c++ is here but I don't think it is exposed in pyhton:\r\nhttps://github.com/tensorflow/tensorflow/blob/343cb03f21c73cfe84e47ab2974a6faa1ad80972/tensorflow/core/platform/file_system.h#L394-L399",
"created_at": "2021-03-26T00:55:14Z"
},
{
"body": "I have a reproducible version without model saving (although of course these two could be unrelated, you make a good point above):\r\n\r\n[Test passing on *nix](https://github.com/adriangb/tensorflow-test/runs/2198281295?check_suite_focus=true)\r\n[Test failing on Windows](https://github.com/adriangb/tensorflow-test/runs/2198281305?check_suite_focus=true)\r\n\r\nSource (also [here](https://github.com/adriangb/tensorflow-test/blob/02a7cdb7dbb386366573b775a8231ac64dfacd85/test.py)):\r\n\r\n```python3\r\nimport tensorflow as tf\r\n\r\n\r\ntf.io.gfile.makedirs(\"ram://test/inner\")\r\n\r\nwith tf.io.gfile.GFile(\"ram://test/inner/file.txt\", mode=\"w\") as f:\r\n f.write(\"data\")\r\n\r\nfor root, _, filenames in tf.io.gfile.walk(\"ram://test\"):\r\n for filename in filenames:\r\n path = root + \"/\" + filename\r\n print(f\"root: {root}\")\r\n print(f\"filename: {filename}\")\r\n print(f\"path: {path}\")\r\n assert path == \"ram://test/inner/file.txt\"\r\n```",
"created_at": "2021-03-26T01:07:50Z"
},
{
"body": "I suspect this is another problem. \r\nI don't know if you can patch on the fly in you repo action https://github.com/tensorflow/tensorflow/blob/306904197c95cc01cdcd30462fd62984329f5cef/tensorflow/python/lib/io/file_io.py#L838-L839\r\nTo print `_make_full_path` and `is_directory` result in Ubuntu/Win ",
"created_at": "2021-03-26T02:15:46Z"
},
{
"body": "I should be able to. I'll give it a try tomorrow.",
"created_at": "2021-03-26T02:24:53Z"
},
{
"body": "> This passes on *nix but not on Windows\r\n\r\n@adriangb,\r\nWith [TF v2.4](https://colab.research.google.com/gist/amahendrakar/244269ccabf4e7b5f175ce5eaf662264/48086.ipynb), I was able to reproduce the `AssertionError` on Linux as well. \r\n\r\nHowever, I did not face any error while running the code with the latest [TF-nightly](https://colab.research.google.com/gist/amahendrakar/6fcf1ad6081f233913593481a765c7b8/48086-tf-nightly.ipynb). Please check the linked gist for reference. \r\n\r\nCould you please check if you are facing the same error with TF-nightly as well? Thanks!",
"created_at": "2021-03-26T10:01:19Z"
},
{
"body": "@amahendrakar We was always working to debug this with nightly. See https://github.com/adriangb/tensorflow-test/blob/master/.github/workflows/test.yml#L14",
"created_at": "2021-03-26T11:03:02Z"
},
{
"body": "@adriangb Ok I've Win emulated your last example on Linux with Docker+Wine. \r\n\r\nYou can test yourself with\r\n\r\n```\r\ndocker run -it --rm tobix/pywine /bin/bash\r\nwine pip install tf-nighlty-cpu\r\nwine python <your_code_stub>.py\r\n```\r\nYou can add debug prints to `/opt/wineprefix/drive_c/Python39/Lib/site-packages/tensorflow/python/lib/io/file_io.py` in `def _make_full_path(parent, item):`\r\n\r\nJust for your last **specific test case** you could replace `return os.path.join(parent, item)` with `return os.path.join(parent, item).replace(\"\\\\\",\"/\")` and run your last example in the Win emulated Docker.\r\n\r\n\r\n\r\n",
"created_at": "2021-03-26T13:29:22Z"
},
{
"body": "@frankchn @mihaimaruseac As I see in your ram filesystem [tests](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/platform/ram_file_system_test.py) do you expect an universal \"/\" for the ram filesystem and not an OS dependent one right?",
"created_at": "2021-03-26T13:38:31Z"
},
{
"body": "I was able to test using docker and wine, thanks for the tip @bhack !\r\n\r\nIt looks like what is happening is that this `os.path.join`:\r\nhttps://github.com/tensorflow/tensorflow/blob/204082475214b3f08d1301998780592b53076951/tensorflow/python/lib/io/file_io.py#L824\r\nIs joining `parent=\"ram://test\"` and `item=\"inner\"` -> `path=\"ram://test\\inner`, which then cascades into `is_directory(path) == False`, so `\\inner` gets returned as a file, etc (btw, returning `False` for a non-existing dir/file is the same behavior as `os.path.isdir`, so that's fine)\r\n\r\nIt seems to me like generally the problem stems from the fact that, on windows, `os.path.join(\"a\", \"b\") == \"a\\b\"` but you are using `\"/\"` as the path separator for ram filesystems regardless of the platform.",
"created_at": "2021-03-26T14:16:07Z"
},
{
"body": "So it seems like 2 things are needed to solve this:\r\n1. Some way to determine what the path separator is given the full path.\r\n2. An implementation of `os.path.join` that accepts a path separator.",
"created_at": "2021-03-26T14:25:48Z"
},
{
"body": "If you see the ram filesystem tests I mentioned above you see that it is only tested with \"/\" separator.\r\n\r\nBut OS dep separator is not illegal. E.g. run in your Wine environment:\r\n\r\n```\r\nimport tensorflow as tf\r\n\r\n\r\ntf.io.gfile.makedirs(\"ram://test\\\\inner\")\r\n\r\nwith tf.io.gfile.GFile(\"ram://test\\\\inner\\\\file.txt\", mode=\"w\") as f:\r\n f.write(\"data\")\r\n\r\nassert tf.io.gfile.exists(\"ram://test\\\\inner\\\\file.txt\")\r\n\r\n```\r\n",
"created_at": "2021-03-26T14:27:23Z"
},
{
"body": "Perhaps. But those files wouldn't exist since they are saved with `/`. For example:\r\n\r\n```python3\r\ntf.io.gfile.makedirs(\"ram://test/inner\")\r\n\r\nwith tf.io.gfile.GFile(\"ram://test/inner/file.txt\", mode=\"w\") as f:\r\n f.write(\"data\") # this is what save model does\r\n\r\nprint(tf.io.gfile.exists(\"ram://test\\\\inner\\\\file.txt\"))\r\n```\r\n\r\nThat is, the files are saved with `/`, but when `walk` calls `os.path.join` they end up with `\\`, so they are different files.",
"created_at": "2021-03-26T16:43:49Z"
},
{
"body": "I see in your old debug print with save `listing (L832) = ['\\\\assets', '\\\\keras_metadata.pb', '\\\\saved_model.pb', '\\\\variables']`\r\nAs we told above `save` is using the native Os separator at save time right?\r\nhttps://github.com/tensorflow/tensorflow/blob/94f92c9949c2dc6afe1db959e25631520602c6ea/tensorflow/python/keras/saving/saved_model/save.py#L96\r\n",
"created_at": "2021-03-26T17:41:17Z"
},
{
"body": "Maybe? It's very much possible that there is more than one issue at play here. Well, I think it's the same issue, just different places in the codebase that it's occurring. The end result is the same: the `ram://` filesystem (and presumably other non-native filesystems) are not fully functional on Windows.",
"created_at": "2021-03-26T17:44:19Z"
},
{
"body": "Let's wait to understand what the original design scope was. If `ram://` works only with `/` but it isn't enforced or it needs to work on native Sep by design.",
"created_at": "2021-03-26T17:48:45Z"
},
{
"body": "So the original design for the RAM file system is quite limited -- it was originally just for Cloud TPUs to have a place to write temporary files to (since Cloud TPUs don't have access to local file systems and writing to GCS is slow). \r\n\r\nI think if someone wants to get `ram://` working with Windows-style separators etc, we are happy to accept the controbution.",
"created_at": "2021-03-27T04:32:45Z"
},
{
"body": "@adriangb As `test_savedmodel` test was passing on Win and Linux is this enough for your use case at https://github.com/tensorflow/tensorflow/pull/39609#discussion_r600667357?\r\nhttps://github.com/tensorflow/tensorflow/blob/3c16284eb619732b69948f0200ee06c5dd7312d0/tensorflow/core/platform/ram_file_system_test.py#L146-L156",
"created_at": "2021-03-27T13:57:35Z"
},
{
"body": "@bhack that test is not enough for #39609.\r\n\r\nWe need to load all of `ram://my_module,` into a single contagious string of bytes so that it can be serialized. There is no other way to do this other than iterating over each file, which is exactly what `walk` is for. I don't see any way to do this without fixing `walk` or re-implementing it using `listdir`, `isdir`, etc.\r\n\r\n@frankchn so I take it that support (or explicitly not supporting) `\\` for `ram://` on Windows was not part of the original design? Does this mean that if I were to implement fixes that enabled the use case in #39609 but possibly broke other use cases involving `\\` on Windows, that would be okay? Assuming it doesn't break any exisiting tests. I do not have the bandwidth to fully implement support for `ram://test\\other` on Windows (or alternatively explicit lack of support), especially if that includes submitting RFCs or editing source on the C++ side. I may however be able to find a way in which we can edit 1-2 lines of Python to at least make #39609 work.",
"created_at": "2021-03-27T15:43:15Z"
},
{
"body": "@adriangb Can you extend that test or a new one in the same file with a new PR just to cover your case. If I have time to explore a C++ fix I need to use the CI cause Wine is too slow to compile a large source code like TF without an available cache produced in the same environment.",
"created_at": "2021-03-27T15:55:42Z"
},
{
"body": "We have tests in #39609 . Why do we need new tests or a new PR?\r\n\r\nThanks for looking into a fix.",
"created_at": "2021-03-27T15:57:20Z"
},
{
"body": "I meant that we need a new test (with a new PR) in `ram_filesystem_test.py` to test the isolated feature that you need in the ram_file_system.",
"created_at": "2021-03-27T16:19:13Z"
},
{
"body": "I don't think this is necessarily a feature. Depending on whether it was intentional for `\\` to be supported on `ram://` or not (it sounds like it was neither), it is either a design oversight or a bug. Adding a minimal explicit test (i.e. not including savemodel or anything) is likely to (1) force a design decision (2) require multiple PRs/tests since the use of `os.path.join` evidently exists outside of `walk` (eg. in `save` like you point out in https://github.com/tensorflow/tensorflow/issues/48086#issuecomment-808403294)\r\n\r\nBut let me do a bit of testing to see what needs to be fixed to get #39609 working and go from there. It may be simple or it may indeed require a new feature.",
"created_at": "2021-03-27T16:25:59Z"
},
{
"body": "I think the easiest solution here is to implement a TensorFlow specific `os.path.join`, maybe `tf.io.gfile.join`:\r\n\r\n```python\r\nimport os\r\nfrom posixpath import join as urljoin\r\n\r\ndef join(*paths):\r\n root = str(paths[0])\r\n if root.startswith(\"ram://\") or root.startswith(\"gs://\"):\r\n return urljoin(*paths)\r\n return os.path.join(*paths)\r\n```\r\n\r\nI'm not sure if `root.startswith(\"ram://\")` is the best we can do here, that will be up to your team to decide.\r\n\r\nThe more difficult part of this will be replacing `os.path.join` with `tf.python.io.file_io.join` all around the codebase.\r\n\r\nThen finally comes the issue of testing, features and backwards compatibility. I suppose this might break things for anyone relying on `ram://test\\other` like behavior on Windows. But since this was never documented (or intentional) I think that should be okay. We can/should add a small test for this, but I think the test should focus on `tf.python.io.file_io.join`, not on the behavior of other things like `walk` or `SavedModel` (i.e. let's not make any promises about those).\r\n\r\nDoes this sound reasonable?",
"created_at": "2021-03-27T17:13:54Z"
},
{
"body": "I don't know if `model.save` it is supported on GCS see https://github.com/tensorflow/tensorflow/issues/36453. \r\nProbably `tf.saved_model.save` is going to work cause it use c++ impl with `io::JoinPath`.\r\nBut if you still need `model.save` in your case and `model.save` is using os native join at the Python level probably it is easier to let r`ram://` to support Win native separator as `ram://` is currently not opinionated. \r\nLet me know if you have a test for `tensorflow/tensorflow/core/platform/ram_file_system_test.py`\r\n ",
"created_at": "2021-03-27T18:24:45Z"
},
{
"body": "My use cases don't require saving anything to gcs. I only threw `root.startswith(\"gs://\")` in there because I thought you'd want it implemented (and since it's only a couple extra characters).\r\n\r\nI also don't care about `tf.saved_model.save` vs `model.save` vs. any other way to save things using SaveModel, they would all be equally fine for #39609. But I tested `tf.saved_model.save` and it does not solve the problems surrounding the use of `os.path.join` on `ram://` in Windows. For example, see here:\r\n\r\nhttps://github.com/tensorflow/tensorflow/blob/3c16284eb619732b69948f0200ee06c5dd7312d0/tensorflow/python/saved_model/save.py#L878\r\n\r\nThe simplest solution I see here is what I proposed above, implementing a version of `os.path.join` that is aware of using `/` and not `\\` for the `ram://` filesystem on Windows. We can write a test for this and put it wherever you deem best (I would suggest it live with the rest of the tests that correspond to `tf.python.io.file_io` since it is not specific to `ram://`). I tested this solution via monkey patching and indeed it does fix all problems related to #39609 . I have a branch and test for it, I'm waiting for bazel to compile and run tests on my machine to at least make sure they run on linux, but this takes >10 hours on my machine so I probably won't push the branch / make a PR until tomorrow.\r\n\r\nAlternatively, you would have to modify all of the tooling surrounding the `ram://` filesystem to support both `\\` and `/`, eg. I think you'd have to make changes here:\r\n\r\nhttps://github.com/tensorflow/tensorflow/blob/3c16284eb619732b69948f0200ee06c5dd7312d0/tensorflow/core/platform/ram_file_system.h#L178",
"created_at": "2021-03-27T18:43:06Z"
},
{
"body": "What I meant is that using the ticket info and if a test like this will pass also on GCS I suppose that it is not using `os.join` at the python level cause GCS doesn't work with `\\`. \r\n\r\nhttps://github.com/tensorflow/tensorflow/blob/3c16284eb619732b69948f0200ee06c5dd7312d0/tensorflow/core/platform/ram_file_system_test.py#L146-L156\r\n\r\nAnd probably it is why `model.save` doesn't work also with GCS (I've not tested this case so it is just a reference to the mentioned ticket ticket).\r\n\r\nYour mentioned `os.path.join` is only for the extra debug info case so probably it could now work on GCS but the default value is `save_debug_info=False`.\r\n\r\nI think that we need to separate Walk fix vs if different save methods add os native sep in python.\r\n\r\nMy hypothesis In the case we don't have the os native separator at python level we could make a specific test and fix only the Walk case.\r\n\r\nThis is why I want to have an addition test to fail for `ram_filesystem_test.py`\r\n",
"created_at": "2021-03-27T19:23:52Z"
}
],
"number": 48086,
"title": "tf.io.gfile.walk broken on Windows"
}
|
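A minimal sketch of the separator mismatch discussed in this thread, assuming a Windows host for the first output (as reported above): `os.path.join` inserts the native `\` separator, while `posixpath.join` keeps the `/` that url-like TF filesystems such as `ram://` expect, which is the approach the follow-up PR builds on.

```python
import os
import posixpath

parent, item = "ram://test", "inner"

# On Windows this yields 'ram://test\\inner', which the ram filesystem does
# not recognize; on Linux it happens to yield 'ram://test/inner'.
print(os.path.join(parent, item))

# posixpath.join always uses '/', matching url-like filesystem paths.
print(posixpath.join(parent, item))  # 'ram://test/inner' on every platform
```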
{
"body": "Possibly needed for #48086",
"number": 48125,
"review_comments": [
{
"body": "I think it's might be more suitable to check `not (path.startswith(\"://\") or path.startswith(\"file://\"))` because we have a lot of cloud filesystem, like `s3`, `hdfs`, etc. WDYT ?",
"created_at": "2021-03-28T11:18:24Z"
},
{
"body": "I was hoping someone would come along with a better idea :)\r\n\r\nIt seems like `ram://`, `gc://` or anything else with `://` would not be valid folder or filename is mac, linux or windows (based on [SO](https://stackoverflow.com/a/31976060/6582418)). So maybe we can do:\r\n\r\n```python\r\nif \"://\" in path[1:] and not path.startswith(\"file://\")\r\n```\r\n\r\nThat avoids more complex alternatives like a regex or something.",
"created_at": "2021-03-28T18:05:12Z"
},
{
"body": "Should we also delete/move this file?",
"created_at": "2021-05-12T17:56:44Z"
},
{
"body": "Sorry if I am misunderstanding, but I believe `ram_file_system_test.py` _is_ being moved: https://github.com/tensorflow/tensorflow/commit/ee7a51028fb04dae61936e702449d493997741e8#diff-18772cb1894d1c5e6364f679cee08b916774e945094644a52b17735188b9f907",
"created_at": "2021-05-12T17:59:30Z"
},
{
"body": "Sorry, I had the page displaying the status wrong.",
"created_at": "2021-05-12T18:16:08Z"
},
{
"body": "> We should add tests for this new API endpoint in file_io_test.py\r\n\r\n(from internal review)",
"created_at": "2021-05-16T23:52:56Z"
},
{
"body": "> Maybe add examples here of the TF filesystem behavior?\r\n\r\n(from internal review)",
"created_at": "2021-05-16T23:53:30Z"
},
{
"body": "That's a good idea.\r\n\r\nI added some tests in `file_io_test.py`, very similar to what was already in `ram_file_system_test.py ` but also testing `gcs://` and that the behavior for native filesystems.\r\n\r\nI also replaced the numerous `os.path.join` calls in `file_io_test.py` with `file_io.join`, I think this is a good idea, but let me know if you disagree and I can revert it.",
"created_at": "2021-05-17T00:41:47Z"
},
{
"body": "I added an example in the docstring.",
"created_at": "2021-05-17T00:41:55Z"
},
{
"body": "@mihaimaruseac let me know if this is the desired behavior. I realize that `file://` is the native filesystem, but maybe it should be treated like other ulr-like filesystems? What does `GFile(join(\"file://\", \"dir\", \"file.py\"))` expect?",
"created_at": "2021-05-17T00:45:35Z"
},
{
"body": "I think it should be `\"file://dir/file.py\"`. It's similar to URLs",
"created_at": "2021-05-18T23:01:47Z"
},
{
"body": "easy enough change; and it simplifies the implementation/testing.",
"created_at": "2021-05-19T01:36:07Z"
},
{
"body": "Could you add a slightly more complex example here? I tried the provided example with os.path.join and I see the same result. So might be helpful to find something that has a different result.... ",
"created_at": "2021-05-19T21:42:37Z"
},
{
"body": "results would only differ from os.path.join on Windows",
"created_at": "2021-05-19T21:45:04Z"
},
{
"body": "On windows you get:\r\n\r\n```\r\n>>> tf.io.join(\"gcs://folder\", \"file.py\") \r\n\"gcs://folder\\\\file.py\"\r\n```",
"created_at": "2021-05-19T21:49:22Z"
},
{
"body": "I updated the docstring to:\r\n```\r\nThis is the same as os.path.join except that it guarantees correct\r\nhandling of tensorflow specific filesystems like `gcs://` and `ram://`\r\non all platforms.\r\n```\r\nTo try to clarify. Let me know if that works or if there's any other wording you'd prefer.",
"created_at": "2021-05-19T21:55:36Z"
},
{
"body": "@mihaimaruseac I just pushed this change, please re-approve so CI runs when you get a chance 😃 ",
"created_at": "2021-05-20T05:01:02Z"
},
{
"body": "Looks like this needs to be sorted (moved one down). I'll push it once current CI finishes.",
"created_at": "2021-06-18T17:35:22Z"
},
{
"body": "Since I added `testJoinUrlLike` and `testJoinFilesystem` in `file_io_test.py`, should we minimize the changes by not adding this test (which would also mean not moving the test file)?",
"created_at": "2021-06-18T17:42:13Z"
},
{
"body": "Sounds good to me",
"created_at": "2021-06-20T19:36:32Z"
},
{
"body": "Cool, reverted those changes. Would you mind kicking off CI? Thanks\r\n",
"created_at": "2021-06-20T19:44:05Z"
},
{
"body": "> Docstring contains a backslash, use r\"\"\" instead of \"\"\". [g-docstring-has-escape]\r\n\r\nPlease fix.",
"created_at": "2021-06-23T17:27:11Z"
},
{
"body": "attempted!",
"created_at": "2021-06-23T17:41:31Z"
}
],
"title": "Custom os.path.join that is aware of TF filesystems"
}
|
{
"commits": [
{
"message": "Custom os.path.join that is aware of ram fs"
},
{
"message": "BUILD edits"
},
{
"message": "try to support more fs"
},
{
"message": "fix str byte mixing"
},
{
"message": "add missing import"
},
{
"message": "Add tests"
},
{
"message": "sanity fixes"
},
{
"message": "test fixes"
},
{
"message": "update api"
},
{
"message": "Merge branch 'master' into ram-gcs-join"
},
{
"message": "simpler test"
},
{
"message": "sanity formatting"
},
{
"message": "move build to get tests to run on windows"
},
{
"message": "Merge branch 'master' into ram-gcs-join"
},
{
"message": "add tests in file_io_test.py"
},
{
"message": "formatting"
},
{
"message": "add example in docstring"
},
{
"message": "Update docstring"
},
{
"message": "make file:// behave url-like"
},
{
"message": "Remove special file:// case"
},
{
"message": "remove examples from doctest"
},
{
"message": "Merge branch 'ram-gcs-join' of https://github.com/adriangb/tensorflow into ram-gcs-join"
},
{
"message": "fix doctest?"
},
{
"message": "add native FS doc example"
},
{
"message": "add native FS doc example"
},
{
"message": "better doctest"
},
{
"message": "re-remove file:// special case"
},
{
"message": "Remove comments from docstring output"
},
{
"message": "Merge branch 'master' into ram-gcs-join"
},
{
"message": "Fix doctest syntax"
}
],
"files": [
{
"diff": "@@ -19,6 +19,7 @@\n \n import binascii\n import os\n+from posixpath import join as urljoin\n import uuid\n \n import six\n@@ -778,6 +779,43 @@ def list_directory_v2(path):\n for filename in _pywrap_file_io.GetChildren(compat.path_to_bytes(path))\n ]\n \n+@tf_export(\"io.gfile.join\")\n+def join(path, *paths):\n+ r\"\"\"Join one or more path components intelligently.\n+\n+ TensorFlow specific filesystems will be joined\n+ like a url (using \"/\" as the path seperator) on all platforms:\n+ \n+ On Windows or Linux/Unix-like:\n+ >>> tf.io.gfile.join(\"gcs://folder\", \"file.py\")\n+ 'gcs://folder/file.py'\n+\n+ >>> tf.io.gfile.join(\"ram://folder\", \"file.py\")\n+ 'ram://folder/file.py'\n+\n+ But the native filesystem is handled just like os.path.join:\n+\n+ >>> path = tf.io.gfile.join(\"folder\", \"file.py\")\n+ >>> if os.name == \"nt\":\n+ ... expected = \"folder\\\\file.py\" # Windows\n+ ... else:\n+ ... expected = \"folder/file.py\" # Linux/Unix-like\n+ >>> path == expected\n+ True\n+\n+ Args:\n+ path: string, path to a directory\n+ paths: string, additional paths to concatenate\n+\n+ Returns:\n+ path: the joined path.\n+ \"\"\"\n+ # os.path.join won't take mixed bytes/str, so don't overwrite the incoming `path` var\n+ path_ = compat.as_str_any(compat.path_to_str(path))\n+ if \"://\" in path_[1:]:\n+ return urljoin(path, *paths)\n+ return os.path.join(path, *paths)\n+\n \n @tf_export(v1=[\"gfile.Walk\"])\n def walk(top, in_order=True):\n@@ -816,12 +854,12 @@ def walk_v2(top, topdown=True, onerror=None):\n \"\"\"\n \n def _make_full_path(parent, item):\n- # Since `os.path.join` discards paths before one that starts with the path\n- # separator (https://docs.python.org/3/library/os.path.html#os.path.join),\n+ # Since `join` discards paths before one that starts with the path\n+ # separator (https://docs.python.org/3/library/os.path.html#join),\n # we have to manually handle that case as `/` is a valid character on GCS.\n if item[0] == os.sep:\n- return \"\".join([os.path.join(parent, \"\"), item])\n- return os.path.join(parent, item)\n+ return \"\".join([join(parent, \"\"), item])\n+ return join(parent, item)\n \n top = compat.as_str_any(compat.path_to_str(top))\n try:",
"filename": "tensorflow/python/lib/io/file_io.py",
"status": "modified"
},
{
"diff": "@@ -44,14 +44,14 @@ def __str__(self):\n \n \n run_all_path_types = parameterized.named_parameters(\n- (\"str\", os.path.join),\n- (\"pathlike\", lambda *paths: PathLike(os.path.join(*paths))))\n+ (\"str\", file_io.join),\n+ (\"pathlike\", lambda *paths: PathLike(file_io.join(*paths))))\n \n \n class FileIoTest(test.TestCase, parameterized.TestCase):\n \n def setUp(self):\n- self._base_dir = os.path.join(self.get_temp_dir(), \"base_dir\")\n+ self._base_dir = file_io.join(self.get_temp_dir(), \"base_dir\")\n file_io.create_dir(self._base_dir)\n \n def tearDown(self):\n@@ -62,6 +62,53 @@ def testEmptyFilename(self):\n with self.assertRaises(errors.NotFoundError):\n _ = f.read()\n \n+ def testJoinUrlLike(self):\n+ \"\"\"file_io.join joins url-like filesystems with '/' on all platform.\n+ \"\"\"\n+ for fs in (\"ram://\", \"gcs://\", \"file://\"):\n+ expected = fs + 'exists/a/b/c.txt'\n+ self.assertEqual(\n+ file_io.join(fs, 'exists', 'a', 'b', 'c.txt'),\n+ expected\n+ )\n+ self.assertEqual(\n+ file_io.join(fs + 'exists', 'a', 'b', 'c.txt'),\n+ expected\n+ )\n+ self.assertEqual(\n+ file_io.join(fs, 'exists/a', 'b', 'c.txt'),\n+ expected\n+ )\n+ self.assertEqual(\n+ file_io.join(fs, 'exists', 'a', 'b/c.txt'),\n+ expected\n+ )\n+\n+ def testJoinFilesystem(self):\n+ \"\"\"file_io.join respects the os.path.join behavior for native filesystems.\n+ \"\"\"\n+ for sep in (\"/\", \"\\\\\", os.sep):\n+ self.assertEqual(\n+ os.path.join(\"a\", \"b\", \"c\"),\n+ file_io.join(\"a\", \"b\", \"c\")\n+ )\n+ self.assertEqual(\n+ os.path.join(sep + \"a\", \"b\", \"c\"),\n+ file_io.join(sep + \"a\", \"b\", \"c\")\n+ )\n+ self.assertEqual(\n+ os.path.join(\"a\", sep + \"b\", \"c\"),\n+ file_io.join(\"a\", sep + \"b\", \"c\")\n+ )\n+ self.assertEqual(\n+ os.path.join(\"a\", \"b\", sep + \"c\"),\n+ file_io.join(\"a\", \"b\", sep + \"c\")\n+ )\n+ self.assertEqual(\n+ os.path.join(\"a\", \"b\", \"c\" + sep),\n+ file_io.join(\"a\", \"b\", \"c\" + sep)\n+ )\n+\n @run_all_path_types\n def testFileDoesntExist(self, join):\n file_path = join(self._base_dir, \"temp_file\")\n@@ -78,14 +125,14 @@ def testWriteToString(self, join):\n self.assertEqual(\"testing\", file_contents)\n \n def testAtomicWriteStringToFile(self):\n- file_path = os.path.join(self._base_dir, \"temp_file\")\n+ file_path = file_io.join(self._base_dir, \"temp_file\")\n file_io.atomic_write_string_to_file(file_path, \"testing\")\n self.assertTrue(file_io.file_exists(file_path))\n file_contents = file_io.read_file_to_string(file_path)\n self.assertEqual(\"testing\", file_contents)\n \n def testAtomicWriteStringToFileOverwriteFalse(self):\n- file_path = os.path.join(self._base_dir, \"temp_file\")\n+ file_path = file_io.join(self._base_dir, \"temp_file\")\n file_io.atomic_write_string_to_file(file_path, \"old\", overwrite=False)\n with self.assertRaises(errors.AlreadyExistsError):\n file_io.atomic_write_string_to_file(file_path, \"new\", overwrite=False)\n@@ -111,7 +158,7 @@ def testWriteBinaryMode(self, join):\n self.assertEqual(\"testing\", f.read())\n \n def testAppend(self):\n- file_path = os.path.join(self._base_dir, \"temp_file\")\n+ file_path = file_io.join(self._base_dir, \"temp_file\")\n with file_io.FileIO(file_path, mode=\"w\") as f:\n f.write(\"begin\\n\")\n with file_io.FileIO(file_path, mode=\"a\") as f:\n@@ -123,7 +170,7 @@ def testAppend(self):\n self.assertEqual(\"begin\\na1\\na2\\n\", file_contents)\n \n def testMultipleFiles(self):\n- file_prefix = os.path.join(self._base_dir, \"temp_file\")\n+ file_prefix = 
file_io.join(self._base_dir, \"temp_file\")\n for i in range(5000):\n f = file_io.FileIO(file_prefix + str(i), mode=\"w+\")\n f.write(\"testing\")\n@@ -132,20 +179,20 @@ def testMultipleFiles(self):\n f.close()\n \n def testMultipleWrites(self):\n- file_path = os.path.join(self._base_dir, \"temp_file\")\n+ file_path = file_io.join(self._base_dir, \"temp_file\")\n with file_io.FileIO(file_path, mode=\"w\") as f:\n f.write(\"line1\\n\")\n f.write(\"line2\")\n file_contents = file_io.read_file_to_string(file_path)\n self.assertEqual(\"line1\\nline2\", file_contents)\n \n def testFileWriteBadMode(self):\n- file_path = os.path.join(self._base_dir, \"temp_file\")\n+ file_path = file_io.join(self._base_dir, \"temp_file\")\n with self.assertRaises(errors.PermissionDeniedError):\n file_io.FileIO(file_path, mode=\"r\").write(\"testing\")\n \n def testFileReadBadMode(self):\n- file_path = os.path.join(self._base_dir, \"temp_file\")\n+ file_path = file_io.join(self._base_dir, \"temp_file\")\n file_io.FileIO(file_path, mode=\"w\").write(\"testing\")\n self.assertTrue(file_io.file_exists(file_path))\n with self.assertRaises(errors.PermissionDeniedError):\n@@ -159,39 +206,39 @@ def testFileDelete(self, join):\n self.assertFalse(file_io.file_exists(file_path))\n \n def testFileDeleteFail(self):\n- file_path = os.path.join(self._base_dir, \"temp_file\")\n+ file_path = file_io.join(self._base_dir, \"temp_file\")\n with self.assertRaises(errors.NotFoundError):\n file_io.delete_file(file_path)\n \n def testGetMatchingFiles(self):\n- dir_path = os.path.join(self._base_dir, \"temp_dir\")\n+ dir_path = file_io.join(self._base_dir, \"temp_dir\")\n file_io.create_dir(dir_path)\n files = [\"file1.txt\", \"file2.txt\", \"file3.txt\", \"file*.txt\"]\n for name in files:\n- file_path = os.path.join(dir_path, name)\n+ file_path = file_io.join(dir_path, name)\n file_io.FileIO(file_path, mode=\"w\").write(\"testing\")\n- expected_match = [os.path.join(dir_path, name) for name in files]\n+ expected_match = [file_io.join(dir_path, name) for name in files]\n self.assertItemsEqual(\n- file_io.get_matching_files(os.path.join(dir_path, \"file*.txt\")),\n+ file_io.get_matching_files(file_io.join(dir_path, \"file*.txt\")),\n expected_match)\n self.assertItemsEqual(file_io.get_matching_files(tuple()), [])\n files_subset = [\n- os.path.join(dir_path, files[0]), os.path.join(dir_path, files[2])\n+ file_io.join(dir_path, files[0]), file_io.join(dir_path, files[2])\n ]\n self.assertItemsEqual(\n file_io.get_matching_files(files_subset), files_subset)\n file_io.delete_recursively(dir_path)\n- self.assertFalse(file_io.file_exists(os.path.join(dir_path, \"file3.txt\")))\n+ self.assertFalse(file_io.file_exists(file_io.join(dir_path, \"file3.txt\")))\n \n def testGetMatchingFilesWhenParentDirContainsParantheses(self):\n- dir_path = os.path.join(self._base_dir, \"dir_(special)\")\n+ dir_path = file_io.join(self._base_dir, \"dir_(special)\")\n file_io.create_dir(dir_path)\n files = [\"file1.txt\", \"file(2).txt\"]\n for name in files:\n- file_path = os.path.join(dir_path, name)\n+ file_path = file_io.join(dir_path, name)\n file_io.FileIO(file_path, mode=\"w\").write(\"testing\")\n- expected_match = [os.path.join(dir_path, name) for name in files]\n- glob_pattern = os.path.join(dir_path, \"*\")\n+ expected_match = [file_io.join(dir_path, name) for name in files]\n+ glob_pattern = file_io.join(dir_path, \"*\")\n self.assertItemsEqual(\n file_io.get_matching_files(glob_pattern), expected_match)\n \n@@ -200,10 +247,10 @@ def 
testCreateRecursiveDir(self, join):\n dir_path = join(self._base_dir, \"temp_dir/temp_dir1/temp_dir2\")\n file_io.recursive_create_dir(dir_path)\n file_io.recursive_create_dir(dir_path) # repeat creation\n- file_path = os.path.join(str(dir_path), \"temp_file\")\n+ file_path = file_io.join(str(dir_path), \"temp_file\")\n file_io.FileIO(file_path, mode=\"w\").write(\"testing\")\n self.assertTrue(file_io.file_exists(file_path))\n- file_io.delete_recursively(os.path.join(self._base_dir, \"temp_dir\"))\n+ file_io.delete_recursively(file_io.join(self._base_dir, \"temp_dir\"))\n self.assertFalse(file_io.file_exists(file_path))\n \n @run_all_path_types\n@@ -218,18 +265,18 @@ def testCopy(self, join):\n self.assertEqual(7, f.tell())\n \n def testCopyOverwrite(self):\n- file_path = os.path.join(self._base_dir, \"temp_file\")\n+ file_path = file_io.join(self._base_dir, \"temp_file\")\n file_io.FileIO(file_path, mode=\"w\").write(\"testing\")\n- copy_path = os.path.join(self._base_dir, \"copy_file\")\n+ copy_path = file_io.join(self._base_dir, \"copy_file\")\n file_io.FileIO(copy_path, mode=\"w\").write(\"copy\")\n file_io.copy(file_path, copy_path, overwrite=True)\n self.assertTrue(file_io.file_exists(copy_path))\n self.assertEqual(\"testing\", file_io.FileIO(file_path, mode=\"r\").read())\n \n def testCopyOverwriteFalse(self):\n- file_path = os.path.join(self._base_dir, \"temp_file\")\n+ file_path = file_io.join(self._base_dir, \"temp_file\")\n file_io.FileIO(file_path, mode=\"w\").write(\"testing\")\n- copy_path = os.path.join(self._base_dir, \"copy_file\")\n+ copy_path = file_io.join(self._base_dir, \"copy_file\")\n file_io.FileIO(copy_path, mode=\"w\").write(\"copy\")\n with self.assertRaises(errors.AlreadyExistsError):\n file_io.copy(file_path, copy_path, overwrite=False)\n@@ -244,26 +291,26 @@ def testRename(self, join):\n self.assertFalse(file_io.file_exists(file_path))\n \n def testRenameOverwrite(self):\n- file_path = os.path.join(self._base_dir, \"temp_file\")\n+ file_path = file_io.join(self._base_dir, \"temp_file\")\n file_io.FileIO(file_path, mode=\"w\").write(\"testing\")\n- rename_path = os.path.join(self._base_dir, \"rename_file\")\n+ rename_path = file_io.join(self._base_dir, \"rename_file\")\n file_io.FileIO(rename_path, mode=\"w\").write(\"rename\")\n file_io.rename(file_path, rename_path, overwrite=True)\n self.assertTrue(file_io.file_exists(rename_path))\n self.assertFalse(file_io.file_exists(file_path))\n \n def testRenameOverwriteFalse(self):\n- file_path = os.path.join(self._base_dir, \"temp_file\")\n+ file_path = file_io.join(self._base_dir, \"temp_file\")\n file_io.FileIO(file_path, mode=\"w\").write(\"testing\")\n- rename_path = os.path.join(self._base_dir, \"rename_file\")\n+ rename_path = file_io.join(self._base_dir, \"rename_file\")\n file_io.FileIO(rename_path, mode=\"w\").write(\"rename\")\n with self.assertRaises(errors.AlreadyExistsError):\n file_io.rename(file_path, rename_path, overwrite=False)\n self.assertTrue(file_io.file_exists(rename_path))\n self.assertTrue(file_io.file_exists(file_path))\n \n def testDeleteRecursivelyFail(self):\n- fake_dir_path = os.path.join(self._base_dir, \"temp_dir\")\n+ fake_dir_path = file_io.join(self._base_dir, \"temp_dir\")\n with self.assertRaises(errors.NotFoundError):\n file_io.delete_recursively(fake_dir_path)\n \n@@ -298,7 +345,7 @@ def testListDirectory(self, join):\n self.assertItemsEqual(files + [\"sub_dir\"], dir_list)\n \n def testListDirectoryFailure(self):\n- dir_path = os.path.join(self._base_dir, \"test_dir\")\n+ 
dir_path = file_io.join(self._base_dir, \"test_dir\")\n with self.assertRaises(errors.NotFoundError):\n file_io.list_directory(dir_path)\n \n@@ -309,18 +356,18 @@ def _setupWalkDirectories(self, dir_path):\n # subdir1_2 -> dir: subdir2\n file_io.create_dir(dir_path)\n file_io.FileIO(\n- os.path.join(dir_path, \"file1.txt\"), mode=\"w\").write(\"testing\")\n+ file_io.join(dir_path, \"file1.txt\"), mode=\"w\").write(\"testing\")\n sub_dirs1 = [\"subdir1_1\", \"subdir1_2\", \"subdir1_3\"]\n for name in sub_dirs1:\n- file_io.create_dir(os.path.join(dir_path, name))\n+ file_io.create_dir(file_io.join(dir_path, name))\n file_io.FileIO(\n- os.path.join(dir_path, \"subdir1_1/file2.txt\"),\n+ file_io.join(dir_path, \"subdir1_1/file2.txt\"),\n mode=\"w\").write(\"testing\")\n- file_io.create_dir(os.path.join(dir_path, \"subdir1_2/subdir2\"))\n+ file_io.create_dir(file_io.join(dir_path, \"subdir1_2/subdir2\"))\n \n @run_all_path_types\n def testWalkInOrder(self, join):\n- dir_path_str = os.path.join(self._base_dir, \"test_dir\")\n+ dir_path_str = file_io.join(self._base_dir, \"test_dir\")\n dir_path = join(self._base_dir, \"test_dir\")\n self._setupWalkDirectories(dir_path_str)\n # Now test the walk (in_order = True)\n@@ -332,13 +379,13 @@ def testWalkInOrder(self, join):\n all_subdirs.append(w_subdirs)\n all_files.append(w_files)\n self.assertItemsEqual(all_dirs, [dir_path_str] + [\n- os.path.join(dir_path_str, item) for item in\n+ file_io.join(dir_path_str, item) for item in\n [\"subdir1_1\", \"subdir1_2\", \"subdir1_2/subdir2\", \"subdir1_3\"]\n ])\n self.assertEqual(dir_path_str, all_dirs[0])\n self.assertLess(\n- all_dirs.index(os.path.join(dir_path_str, \"subdir1_2\")),\n- all_dirs.index(os.path.join(dir_path_str, \"subdir1_2/subdir2\")))\n+ all_dirs.index(file_io.join(dir_path_str, \"subdir1_2\")),\n+ all_dirs.index(file_io.join(dir_path_str, \"subdir1_2/subdir2\")))\n self.assertItemsEqual(all_subdirs[1:5], [[], [\"subdir2\"], [], []])\n self.assertItemsEqual(all_subdirs[0],\n [\"subdir1_1\", \"subdir1_2\", \"subdir1_3\"])\n@@ -347,7 +394,7 @@ def testWalkInOrder(self, join):\n all_files.index([\"file1.txt\"]), all_files.index([\"file2.txt\"]))\n \n def testWalkPostOrder(self):\n- dir_path = os.path.join(self._base_dir, \"test_dir\")\n+ dir_path = file_io.join(self._base_dir, \"test_dir\")\n self._setupWalkDirectories(dir_path)\n # Now test the walk (in_order = False)\n all_dirs = []\n@@ -358,14 +405,14 @@ def testWalkPostOrder(self):\n all_subdirs.append(w_subdirs)\n all_files.append(w_files)\n self.assertItemsEqual(all_dirs, [\n- os.path.join(dir_path, item)\n+ file_io.join(dir_path, item)\n for item in\n [\"subdir1_1\", \"subdir1_2/subdir2\", \"subdir1_2\", \"subdir1_3\"]\n ] + [dir_path])\n self.assertEqual(dir_path, all_dirs[4])\n self.assertLess(\n- all_dirs.index(os.path.join(dir_path, \"subdir1_2/subdir2\")),\n- all_dirs.index(os.path.join(dir_path, \"subdir1_2\")))\n+ all_dirs.index(file_io.join(dir_path, \"subdir1_2/subdir2\")),\n+ all_dirs.index(file_io.join(dir_path, \"subdir1_2\")))\n self.assertItemsEqual(all_subdirs[0:4], [[], [], [\"subdir2\"], []])\n self.assertItemsEqual(all_subdirs[4],\n [\"subdir1_1\", \"subdir1_2\", \"subdir1_3\"])\n@@ -374,7 +421,7 @@ def testWalkPostOrder(self):\n all_files.index([\"file2.txt\"]), all_files.index([\"file1.txt\"]))\n \n def testWalkFailure(self):\n- dir_path = os.path.join(self._base_dir, \"test_dir\")\n+ dir_path = file_io.join(self._base_dir, \"test_dir\")\n # Try walking a directory that wasn't created.\n all_dirs = []\n all_subdirs = 
[]\n@@ -399,7 +446,7 @@ def testStat(self, join):\n self.assertFalse(file_statistics.is_directory)\n \n def testReadLine(self):\n- file_path = os.path.join(self._base_dir, \"temp_file\")\n+ file_path = file_io.join(self._base_dir, \"temp_file\")\n with file_io.FileIO(file_path, mode=\"r+\") as f:\n f.write(\"testing1\\ntesting2\\ntesting3\\n\\ntesting5\")\n self.assertEqual(36, f.size())\n@@ -411,7 +458,7 @@ def testReadLine(self):\n self.assertEqual(\"\", f.readline())\n \n def testRead(self):\n- file_path = os.path.join(self._base_dir, \"temp_file\")\n+ file_path = file_io.join(self._base_dir, \"temp_file\")\n with file_io.FileIO(file_path, mode=\"r+\") as f:\n f.write(\"testing1\\ntesting2\\ntesting3\\n\\ntesting5\")\n self.assertEqual(36, f.size())\n@@ -421,7 +468,7 @@ def testRead(self):\n self.assertEqual(\"esting3\\n\\ntesting5\", f.read())\n \n def testReadErrorReacquiresGil(self):\n- file_path = os.path.join(self._base_dir, \"temp_file\")\n+ file_path = file_io.join(self._base_dir, \"temp_file\")\n with file_io.FileIO(file_path, mode=\"r+\") as f:\n f.write(\"testing1\\ntesting2\\ntesting3\\n\\ntesting5\")\n with self.assertRaises(errors.InvalidArgumentError):\n@@ -433,7 +480,7 @@ def testReadErrorReacquiresGil(self):\n f.read(-2)\n \n def testTell(self):\n- file_path = os.path.join(self._base_dir, \"temp_file\")\n+ file_path = file_io.join(self._base_dir, \"temp_file\")\n with file_io.FileIO(file_path, mode=\"r+\") as f:\n f.write(\"testing1\\ntesting2\\ntesting3\\n\\ntesting5\")\n self.assertEqual(0, f.tell())\n@@ -451,7 +498,7 @@ def testTell(self):\n self.assertEqual(36, f.tell())\n \n def testSeek(self):\n- file_path = os.path.join(self._base_dir, \"temp_file\")\n+ file_path = file_io.join(self._base_dir, \"temp_file\")\n with file_io.FileIO(file_path, mode=\"r+\") as f:\n f.write(\"testing1\\ntesting2\\ntesting3\\n\\ntesting5\")\n self.assertEqual(\"testing1\\n\", f.readline())\n@@ -485,7 +532,7 @@ def testSeek(self):\n self.assertEqual(\"testing2\\n\", f.readline())\n \n def testSeekFromWhat(self):\n- file_path = os.path.join(self._base_dir, \"temp_file\")\n+ file_path = file_io.join(self._base_dir, \"temp_file\")\n with file_io.FileIO(file_path, mode=\"r+\") as f:\n f.write(\"testing1\\ntesting2\\ntesting3\\n\\ntesting5\")\n self.assertEqual(\"testing1\\n\", f.readline())\n@@ -509,7 +556,7 @@ def testSeekFromWhat(self):\n f.seek(0, 3)\n \n def testReadingIterator(self):\n- file_path = os.path.join(self._base_dir, \"temp_file\")\n+ file_path = file_io.join(self._base_dir, \"temp_file\")\n data = [\"testing1\\n\", \"testing2\\n\", \"testing3\\n\", \"\\n\", \"testing5\"]\n with file_io.FileIO(file_path, mode=\"r+\") as f:\n f.write(\"\".join(data))\n@@ -519,7 +566,7 @@ def testReadingIterator(self):\n self.assertSequenceEqual(actual_data, data)\n \n def testReadlines(self):\n- file_path = os.path.join(self._base_dir, \"temp_file\")\n+ file_path = file_io.join(self._base_dir, \"temp_file\")\n data = [\"testing1\\n\", \"testing2\\n\", \"testing3\\n\", \"\\n\", \"testing5\"]\n f = file_io.FileIO(file_path, mode=\"r+\")\n f.write(\"\".join(data))\n@@ -528,15 +575,15 @@ def testReadlines(self):\n self.assertSequenceEqual(lines, data)\n \n def testUTF8StringPath(self):\n- file_path = os.path.join(self._base_dir, \"UTF8测试_file\")\n+ file_path = file_io.join(self._base_dir, \"UTF8测试_file\")\n file_io.write_string_to_file(file_path, \"testing\")\n with file_io.FileIO(file_path, mode=\"rb\") as f:\n self.assertEqual(b\"testing\", f.read())\n \n def testEof(self):\n \"\"\"Test that 
reading past EOF does not raise an exception.\"\"\"\n \n- file_path = os.path.join(self._base_dir, \"temp_file\")\n+ file_path = file_io.join(self._base_dir, \"temp_file\")\n f = file_io.FileIO(file_path, mode=\"r+\")\n content = \"testing\"\n f.write(content)\n@@ -551,90 +598,90 @@ def testUTF8StringPathExists(self, join):\n self.assertEqual(v, True)\n \n def testFilecmp(self):\n- file1 = os.path.join(self._base_dir, \"file1\")\n+ file1 = file_io.join(self._base_dir, \"file1\")\n file_io.write_string_to_file(file1, \"This is a sentence\\n\" * 100)\n \n- file2 = os.path.join(self._base_dir, \"file2\")\n+ file2 = file_io.join(self._base_dir, \"file2\")\n file_io.write_string_to_file(file2, \"This is another sentence\\n\" * 100)\n \n- file3 = os.path.join(self._base_dir, \"file3\")\n+ file3 = file_io.join(self._base_dir, \"file3\")\n file_io.write_string_to_file(file3, u\"This is another sentence\\n\" * 100)\n \n self.assertFalse(file_io.filecmp(file1, file2))\n self.assertTrue(file_io.filecmp(file2, file3))\n \n def testFilecmpSameSize(self):\n- file1 = os.path.join(self._base_dir, \"file1\")\n+ file1 = file_io.join(self._base_dir, \"file1\")\n file_io.write_string_to_file(file1, \"This is a sentence\\n\" * 100)\n \n- file2 = os.path.join(self._base_dir, \"file2\")\n+ file2 = file_io.join(self._base_dir, \"file2\")\n file_io.write_string_to_file(file2, \"This is b sentence\\n\" * 100)\n \n- file3 = os.path.join(self._base_dir, \"file3\")\n+ file3 = file_io.join(self._base_dir, \"file3\")\n file_io.write_string_to_file(file3, u\"This is b sentence\\n\" * 100)\n \n self.assertFalse(file_io.filecmp(file1, file2))\n self.assertTrue(file_io.filecmp(file2, file3))\n \n def testFilecmpBinary(self):\n- file1 = os.path.join(self._base_dir, \"file1\")\n+ file1 = file_io.join(self._base_dir, \"file1\")\n file_io.FileIO(file1, \"wb\").write(\"testing\\n\\na\")\n \n- file2 = os.path.join(self._base_dir, \"file2\")\n+ file2 = file_io.join(self._base_dir, \"file2\")\n file_io.FileIO(file2, \"wb\").write(\"testing\\n\\nb\")\n \n- file3 = os.path.join(self._base_dir, \"file3\")\n+ file3 = file_io.join(self._base_dir, \"file3\")\n file_io.FileIO(file3, \"wb\").write(\"testing\\n\\nb\")\n \n- file4 = os.path.join(self._base_dir, \"file4\")\n+ file4 = file_io.join(self._base_dir, \"file4\")\n file_io.FileIO(file4, \"wb\").write(\"testing\\n\\ntesting\")\n \n self.assertFalse(file_io.filecmp(file1, file2))\n self.assertFalse(file_io.filecmp(file1, file4))\n self.assertTrue(file_io.filecmp(file2, file3))\n \n def testFileCrc32(self):\n- file1 = os.path.join(self._base_dir, \"file1\")\n+ file1 = file_io.join(self._base_dir, \"file1\")\n file_io.write_string_to_file(file1, \"This is a sentence\\n\" * 100)\n crc1 = file_io.file_crc32(file1)\n \n- file2 = os.path.join(self._base_dir, \"file2\")\n+ file2 = file_io.join(self._base_dir, \"file2\")\n file_io.write_string_to_file(file2, \"This is another sentence\\n\" * 100)\n crc2 = file_io.file_crc32(file2)\n \n- file3 = os.path.join(self._base_dir, \"file3\")\n+ file3 = file_io.join(self._base_dir, \"file3\")\n file_io.write_string_to_file(file3, \"This is another sentence\\n\" * 100)\n crc3 = file_io.file_crc32(file3)\n \n self.assertTrue(crc1 != crc2)\n self.assertEqual(crc2, crc3)\n \n def testFileCrc32WithBytes(self):\n- file1 = os.path.join(self._base_dir, \"file1\")\n+ file1 = file_io.join(self._base_dir, \"file1\")\n file_io.write_string_to_file(file1, \"This is a sentence\\n\" * 100)\n crc1 = file_io.file_crc32(file1, block_size=24)\n \n- file2 = 
os.path.join(self._base_dir, \"file2\")\n+ file2 = file_io.join(self._base_dir, \"file2\")\n file_io.write_string_to_file(file2, \"This is another sentence\\n\" * 100)\n crc2 = file_io.file_crc32(file2, block_size=24)\n \n- file3 = os.path.join(self._base_dir, \"file3\")\n+ file3 = file_io.join(self._base_dir, \"file3\")\n file_io.write_string_to_file(file3, \"This is another sentence\\n\" * 100)\n crc3 = file_io.file_crc32(file3, block_size=-1)\n \n self.assertTrue(crc1 != crc2)\n self.assertEqual(crc2, crc3)\n \n def testFileCrc32Binary(self):\n- file1 = os.path.join(self._base_dir, \"file1\")\n+ file1 = file_io.join(self._base_dir, \"file1\")\n file_io.FileIO(file1, \"wb\").write(\"testing\\n\\n\")\n crc1 = file_io.file_crc32(file1)\n \n- file2 = os.path.join(self._base_dir, \"file2\")\n+ file2 = file_io.join(self._base_dir, \"file2\")\n file_io.FileIO(file2, \"wb\").write(\"testing\\n\\n\\n\")\n crc2 = file_io.file_crc32(file2)\n \n- file3 = os.path.join(self._base_dir, \"file3\")\n+ file3 = file_io.join(self._base_dir, \"file3\")\n file_io.FileIO(file3, \"wb\").write(\"testing\\n\\n\\n\")\n crc3 = file_io.file_crc32(file3)\n \n@@ -643,31 +690,31 @@ def testFileCrc32Binary(self):\n \n def testMatchingFilesPermission(self):\n # Create top level directory test_dir.\n- dir_path = os.path.join(self._base_dir, \"test_dir\")\n+ dir_path = file_io.join(self._base_dir, \"test_dir\")\n file_io.create_dir(dir_path)\n # Create second level directories `noread` and `any`.\n- noread_path = os.path.join(dir_path, \"noread\")\n+ noread_path = file_io.join(dir_path, \"noread\")\n file_io.create_dir(noread_path)\n- any_path = os.path.join(dir_path, \"any\")\n+ any_path = file_io.join(dir_path, \"any\")\n file_io.create_dir(any_path)\n files = [\"file1.txt\", \"file2.txt\", \"file3.txt\"]\n for name in files:\n- file_path = os.path.join(any_path, name)\n+ file_path = file_io.join(any_path, name)\n file_io.FileIO(file_path, mode=\"w\").write(\"testing\")\n- file_path = os.path.join(noread_path, \"file4.txt\")\n+ file_path = file_io.join(noread_path, \"file4.txt\")\n file_io.FileIO(file_path, mode=\"w\").write(\"testing\")\n # Change noread to noread access.\n os.chmod(noread_path, 0)\n- expected_match = [os.path.join(any_path, name) for name in files]\n+ expected_match = [file_io.join(any_path, name) for name in files]\n self.assertItemsEqual(\n- file_io.get_matching_files(os.path.join(dir_path, \"*\", \"file*.txt\")),\n+ file_io.get_matching_files(file_io.join(dir_path, \"*\", \"file*.txt\")),\n expected_match)\n # Change noread back so that it could be cleaned during tearDown.\n os.chmod(noread_path, 0o777)\n \n def testFileSeekableWithZip(self):\n # Note: Test case for GitHub issue 27276, issue only exposed in python 3.7+.\n- filename = os.path.join(self._base_dir, \"a.npz\")\n+ filename = file_io.join(self._base_dir, \"a.npz\")\n np.savez_compressed(filename, {\"a\": 1, \"b\": 2})\n with gfile.GFile(filename, \"rb\") as f:\n info = np.load(f, allow_pickle=True) # pylint: disable=unexpected-keyword-arg",
"filename": "tensorflow/python/lib/io/file_io_test.py",
"status": "modified"
},
{
"diff": "@@ -420,6 +420,7 @@ py_library(\n \"//tensorflow/python:errors\",\n \"//tensorflow/python:framework_ops\",\n \"//tensorflow/python:handle_data_util\",\n+ \"//tensorflow/python:lib\",\n \"//tensorflow/python:lookup_ops\",\n \"//tensorflow/python:resource_variable_ops\",\n \"//tensorflow/python:tensor_util\",",
"filename": "tensorflow/python/saved_model/BUILD",
"status": "modified"
},
{
"diff": "@@ -419,12 +419,12 @@ def save(self, as_text=False):\n file_io.recursive_create_dir(self._export_dir)\n \n if as_text:\n- path = os.path.join(\n+ path = file_io.join(\n compat.as_bytes(self._export_dir),\n compat.as_bytes(constants.SAVED_MODEL_FILENAME_PBTXT))\n file_io.write_string_to_file(path, str(self._saved_model))\n else:\n- path = os.path.join(\n+ path = file_io.join(\n compat.as_bytes(self._export_dir),\n compat.as_bytes(constants.SAVED_MODEL_FILENAME_PB))\n file_io.write_string_to_file(\n@@ -770,7 +770,7 @@ def copy_assets_to_destination_dir(asset_filename_map, destination_dir):\n \n # Copy each asset from source path to destination path.\n for asset_basename, asset_source_filepath in asset_filename_map.items():\n- asset_destination_filepath = os.path.join(\n+ asset_destination_filepath = file_io.join(\n compat.as_bytes(assets_destination_dir),\n compat.as_bytes(asset_basename))\n ",
"filename": "tensorflow/python/saved_model/builder_impl.py",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@\n from __future__ import print_function\n \n import functools\n-import os\n import sys\n \n from tensorflow.core.protobuf import graph_debug_info_pb2\n@@ -34,6 +33,7 @@\n from tensorflow.python.framework import errors\n from tensorflow.python.framework import ops\n from tensorflow.python.framework import tensor_util\n+from tensorflow.python.lib.io import file_io\n from tensorflow.python.ops import array_ops\n from tensorflow.python.ops import control_flow_ops\n from tensorflow.python.ops import handle_data_util\n@@ -582,7 +582,7 @@ class _UserObject(tracking.AutoTrackable):\n return _UserObject(), setattr\n \n def _recreate_asset(self, proto):\n- filename = os.path.join(\n+ filename = file_io.join(\n saved_model_utils.get_assets_dir(self._export_dir),\n self._asset_file_def[proto.asset_file_def_index].filename)\n asset = tracking.Asset(filename)",
"filename": "tensorflow/python/saved_model/load.py",
"status": "modified"
},
{
"diff": "@@ -59,7 +59,7 @@ def parse_saved_model_with_debug_info(export_dir):\n \"\"\"\n saved_model = _parse_saved_model(export_dir)\n \n- debug_info_path = os.path.join(\n+ debug_info_path = file_io.join(\n saved_model_utils.get_debug_dir(export_dir),\n constants.DEBUG_INFO_FILENAME_PB)\n debug_info = graph_debug_info_pb2.GraphDebugInfo()\n@@ -88,11 +88,11 @@ def parse_saved_model(export_dir):\n IOError: If the file does not exist, or cannot be successfully parsed.\n \"\"\"\n # Build the path to the SavedModel in pbtxt format.\n- path_to_pbtxt = os.path.join(\n+ path_to_pbtxt = file_io.join(\n compat.as_bytes(compat.path_to_str(export_dir)),\n compat.as_bytes(constants.SAVED_MODEL_FILENAME_PBTXT))\n # Build the path to the SavedModel in pb format.\n- path_to_pb = os.path.join(\n+ path_to_pb = file_io.join(\n compat.as_bytes(compat.path_to_str(export_dir)),\n compat.as_bytes(constants.SAVED_MODEL_FILENAME_PB))\n \n@@ -155,14 +155,14 @@ def get_asset_tensors(export_dir, meta_graph_def_to_load, import_scope=None):\n asset_protos.append(asset_proto)\n \n # Location of the assets for SavedModel.\n- assets_directory = os.path.join(\n+ assets_directory = file_io.join(\n compat.as_bytes(export_dir), compat.as_bytes(constants.ASSETS_DIRECTORY))\n # Process each asset and add it to the asset tensor dictionary.\n for asset_proto in asset_protos:\n tensor_name = asset_proto.tensor_info.name\n if import_scope:\n tensor_name = \"%s/%s\" % (import_scope, tensor_name)\n- asset_tensor_dict[tensor_name] = os.path.join(\n+ asset_tensor_dict[tensor_name] = file_io.join(\n compat.as_bytes(assets_directory),\n compat.as_bytes(asset_proto.filename))\n \n@@ -249,8 +249,8 @@ def maybe_saved_model_directory(export_dir):\n Returns:\n True if the export directory contains SavedModel files, False otherwise.\n \"\"\"\n- txt_path = os.path.join(export_dir, constants.SAVED_MODEL_FILENAME_PBTXT)\n- pb_path = os.path.join(export_dir, constants.SAVED_MODEL_FILENAME_PB)\n+ txt_path = file_io.join(export_dir, constants.SAVED_MODEL_FILENAME_PBTXT)\n+ pb_path = file_io.join(export_dir, constants.SAVED_MODEL_FILENAME_PB)\n return file_io.file_exists(txt_path) or file_io.file_exists(pb_path)\n \n ",
"filename": "tensorflow/python/saved_model/loader_impl.py",
"status": "modified"
},
{
"diff": "@@ -21,8 +21,6 @@\n from __future__ import division\n from __future__ import print_function\n \n-import os\n-\n from tensorflow.python.lib.io import file_io\n from tensorflow.python.platform import tf_logging\n from tensorflow.python.saved_model import constants\n@@ -127,20 +125,20 @@ def save(self, new_export_dir=None):\n errors.OpError: If there are errors during the file save operation.\n \"\"\"\n \n- is_input_text_proto = file_io.file_exists(os.path.join(\n+ is_input_text_proto = file_io.file_exists(file_io.join(\n compat.as_bytes(self._export_dir),\n compat.as_bytes(constants.SAVED_MODEL_FILENAME_PBTXT)))\n if not new_export_dir:\n new_export_dir = self._export_dir\n \n if is_input_text_proto:\n # TODO(jdchung): Add a util for the path creation below.\n- path = os.path.join(\n+ path = file_io.join(\n compat.as_bytes(new_export_dir),\n compat.as_bytes(constants.SAVED_MODEL_FILENAME_PBTXT))\n file_io.write_string_to_file(path, str(self._saved_model))\n else:\n- path = os.path.join(\n+ path = file_io.join(\n compat.as_bytes(new_export_dir),\n compat.as_bytes(constants.SAVED_MODEL_FILENAME_PB))\n file_io.write_string_to_file(",
"filename": "tensorflow/python/saved_model/method_name_updater.py",
"status": "modified"
},
{
"diff": "@@ -83,6 +83,7 @@ py_strict_library(\n \":mode_keys\",\n \"//tensorflow/python:platform\",\n \"//tensorflow/python:util\",\n+ \"//tensorflow/python/lib/io:lib\",\n \"//tensorflow/python/saved_model:signature_constants\",\n \"//tensorflow/python/saved_model:signature_def_utils\",\n \"//tensorflow/python/saved_model:tag_constants\",",
"filename": "tensorflow/python/saved_model/model_utils/BUILD",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n import os\n import time\n \n+from tensorflow.python.lib.io import file_io\n from tensorflow.python.platform import gfile\n from tensorflow.python.platform import tf_logging as logging\n from tensorflow.python.saved_model import signature_constants\n@@ -208,7 +209,7 @@ def get_timestamped_export_dir(export_dir_base):\n while attempts < MAX_DIRECTORY_CREATION_ATTEMPTS:\n timestamp = int(time.time())\n \n- result_dir = os.path.join(\n+ result_dir = file_io.join(\n compat.as_bytes(export_dir_base), compat.as_bytes(str(timestamp)))\n if not gfile.Exists(result_dir):\n # Collisions are still possible (though extremely unlikely): this\n@@ -241,7 +242,7 @@ def get_temp_export_dir(timestamped_export_dir):\n str_name = basename.decode('utf-8')\n else:\n str_name = str(basename)\n- temp_export_dir = os.path.join(\n+ temp_export_dir = file_io.join(\n compat.as_bytes(dirname),\n compat.as_bytes('temp-{}'.format(str_name)))\n return temp_export_dir",
"filename": "tensorflow/python/saved_model/model_utils/export_utils.py",
"status": "modified"
},
{
"diff": "@@ -1029,7 +1029,7 @@ def _export_debug_info(exported_graph, export_dir):\n graph_debug_info = error_interpolation.create_graph_debug_info_def(\n exported_operations)\n file_io.atomic_write_string_to_file(\n- os.path.join(\n+ file_io.join(\n utils_impl.get_or_create_debug_dir(export_dir),\n constants.DEBUG_INFO_FILENAME_PB),\n graph_debug_info.SerializeToString(deterministic=True))\n@@ -1283,7 +1283,7 @@ def save_and_return_nodes(obj,\n # as we build up the C++ API.\n pywrap_saved_model.Save(export_dir)\n \n- path = os.path.join(\n+ path = file_io.join(\n compat.as_str(export_dir),\n compat.as_str(constants.SAVED_MODEL_FILENAME_PB))\n file_io.atomic_write_string_to_file(",
"filename": "tensorflow/python/saved_model/save.py",
"status": "modified"
},
{
"diff": "@@ -18,8 +18,6 @@\n from __future__ import division\n from __future__ import print_function\n \n-import os\n-\n from tensorflow.core.framework import types_pb2\n from tensorflow.core.protobuf import meta_graph_pb2\n from tensorflow.core.protobuf import struct_pb2\n@@ -232,14 +230,14 @@ def get_or_create_variables_dir(export_dir):\n \n def get_variables_dir(export_dir):\n \"\"\"Return variables sub-directory in the SavedModel.\"\"\"\n- return os.path.join(\n+ return file_io.join(\n compat.as_text(export_dir),\n compat.as_text(constants.VARIABLES_DIRECTORY))\n \n \n def get_variables_path(export_dir):\n \"\"\"Return the variables path, used as the prefix for checkpoint files.\"\"\"\n- return os.path.join(\n+ return file_io.join(\n compat.as_text(get_variables_dir(export_dir)),\n compat.as_text(constants.VARIABLES_FILENAME))\n \n@@ -255,7 +253,7 @@ def get_or_create_assets_dir(export_dir):\n \n def get_assets_dir(export_dir):\n \"\"\"Return path to asset directory in the SavedModel.\"\"\"\n- return os.path.join(\n+ return file_io.join(\n compat.as_text(export_dir),\n compat.as_text(constants.ASSETS_DIRECTORY))\n \n@@ -270,20 +268,20 @@ def get_or_create_debug_dir(export_dir):\n \n \n def get_saved_model_pbtxt_path(export_dir):\n- return os.path.join(\n+ return file_io.join(\n compat.as_bytes(compat.path_to_str(export_dir)),\n compat.as_bytes(constants.SAVED_MODEL_FILENAME_PBTXT))\n \n \n def get_saved_model_pb_path(export_dir):\n- return os.path.join(\n+ return file_io.join(\n compat.as_bytes(compat.path_to_str(export_dir)),\n compat.as_bytes(constants.SAVED_MODEL_FILENAME_PB))\n \n \n def get_debug_dir(export_dir):\n \"\"\"Returns path to the debug sub-directory in the SavedModel.\"\"\"\n- return os.path.join(\n+ return file_io.join(\n compat.as_text(export_dir), compat.as_text(constants.DEBUG_DIRECTORY))\n \n # Based on tensor_bundle/byte_swap.cc",
"filename": "tensorflow/python/saved_model/utils_impl.py",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,10 @@ tf_module {\n name: \"isdir\"\n argspec: \"args=[\\'path\\'], varargs=None, keywords=None, defaults=None\"\n }\n+ member_method {\n+ name: \"join\"\n+ argspec: \"args=[\\'path\\'], varargs=paths, keywords=None, defaults=None\"\n+ }\n member_method {\n name: \"listdir\"\n argspec: \"args=[\\'path\\'], varargs=None, keywords=None, defaults=None\"",
"filename": "tensorflow/tools/api/golden/v1/tensorflow.io.gfile.pbtxt",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,10 @@ tf_module {\n name: \"isdir\"\n argspec: \"args=[\\'path\\'], varargs=None, keywords=None, defaults=None\"\n }\n+ member_method {\n+ name: \"join\"\n+ argspec: \"args=[\\'path\\'], varargs=paths, keywords=None, defaults=None\"\n+ }\n member_method {\n name: \"listdir\"\n argspec: \"args=[\\'path\\'], varargs=None, keywords=None, defaults=None\"",
"filename": "tensorflow/tools/api/golden/v2/tensorflow.io.gfile.pbtxt",
"status": "modified"
}
]
}
|
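The diffs in the row above migrate SavedModel path handling from `os.path.join` to `file_io.join` and, per the golden-API updates, expose the helper publicly as `tf.io.gfile.join(path, *paths)`. A minimal sketch of the practical difference, assuming a TF build that already ships the new API and using a purely illustrative bucket name: `os.path.join` joins with the host OS separator, which can splice a backslash into a URI-style path on Windows, while `gfile.join` always joins with `/`.

```python
import os

import tensorflow as tf

export_dir = "gs://example-bucket/saved_model"  # hypothetical path for illustration

# Host-OS join: on Windows this yields 'gs://example-bucket/saved_model\\variables',
# which is not a valid object name for the GCS filesystem.
os_style = os.path.join(export_dir, "variables")

# Filesystem-aware join: always uses '/', so URI-style paths stay well formed
# on every host platform.
gfile_style = tf.io.gfile.join(export_dir, "variables")

print(os_style)
print(gfile_style)
```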
{
"body": "**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 20.04\r\n- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: /\r\n- TensorFlow installed from (source or binary): binary\r\n- TensorFlow version (use command below): 2.3.0\r\n- Python version: 3.7\r\n- Bazel version (if compiling from source): /\r\n- GCC/Compiler version (if compiling from source): /\r\n- CUDA/cuDNN version: 10.1\r\n- GPU model and memory: Titan Xp\r\n\r\n**Describe the current behavior**\r\n`tf.math.floordiv` produces different values if compiled with `experimental_compile=True` vs not\r\n\r\n**Describe the expected behavior**\r\n`tf.math.floordiv` should return the same values regardless of compilation method.\r\n\r\n**Standalone code to reproduce the issue**\r\n[colab](https://colab.research.google.com/drive/1RgVH3RaLmfcjdzKA5pDtpWhK-23tHmrX#scrollTo=2Iw5kpILGneG)\r\n```python\r\nimport tensorflow as tf\r\n\r\ndef floordiv(x, y):\r\n # x // y\r\n return tf.math.floordiv(x, y)\r\n\r\n@tf.function\r\ndef floordiv_tffn(x, y):\r\n # x // y\r\n return tf.math.floordiv(x, y)\r\n\r\n@tf.function(experimental_compile=True)\r\ndef floordiv_compiled(x, y):\r\n # x // y\r\n return tf.math.floordiv(x, y)\r\n\r\nx, y = tf.constant([0., 0.1, 0.9]), 1.\r\nprint(floordiv(x, y))\r\nprint(floordiv_tffn(x, y))\r\nprint(floordiv_compiled(x, y))\r\n```\r\n```\r\ntf.Tensor([0. 0. 0.], shape=(3,), dtype=float32)\r\ntf.Tensor([0. 0.1 0.9], shape=(3,), dtype=float32)\r\ntf.Tensor([0. 0. 0.], shape=(3,), dtype=float32)\r\n```",
"comments": [
{
"body": "I am able to replicate the issue reported on tf 2.3,2.4 and nightly, please find the [gist here](https://colab.research.google.com/gist/Saduf2019/e701dd589462a69a246c57135c1e6920/untitled567.ipynb).",
"created_at": "2021-03-22T09:37:03Z"
},
{
"body": "Opened #47986 with a fix.\r\n\r\nAs a temporary workaround you can use `tf.config.optimizer.set_experimental_options({\"constant_folding\": False})` although that could slightly reduce performance in some cases.",
"created_at": "2021-03-22T20:24:30Z"
},
{
"body": "It works for me if I do `experimental_compile=True` (since I anyways want to do that for performance). I'm wondering though, doesn't experimental compile constant fold too?",
"created_at": "2021-03-23T08:21:47Z"
},
{
"body": "> I'm wondering though, doesn't experimental compile constant fold too?\r\n\r\nexperimental compile does constant folding too, but uses a different code path as far as I know which doesn't have this bug.",
"created_at": "2021-03-23T10:48:28Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47970\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47970\">No</a>\n",
"created_at": "2021-04-10T01:13:49Z"
}
],
"number": 47970,
"title": "tf.math.floordiv (//) is broken when inside @tf.function if second argument is 1."
}
|
{
"body": "This PR prevents grappler constant folding of `FloorDiv(x, 1.0)` since `FloorDiv(x, 1.0) == Floor(x) != x`.\r\n\r\nFixes #47970",
"number": 47986,
"review_comments": [],
"title": "Do not constant fold FloorDiv(x, 1)"
}
|
{
"commits": [
{
"message": "Do not constant fold FloorDiv(x, 1)"
}
],
"files": [
{
"diff": "@@ -2923,7 +2923,7 @@ Status ConstantFolding::SimplifyArithmeticOperations(\n const bool is_matmul = IsAnyMatMul(*node);\n const bool is_add = IsAdd(*node) || IsBiasAdd(*node) || IsLogicalOr(*node);\n const bool is_sub = IsSub(*node);\n- const bool is_any_div = IsAnyDiv(*node);\n+ const bool is_any_div = IsAnyDiv(*node) && !IsFloorDiv(*node);\n // Simplify arithmetic operations with ones or zeros.\n if (use_shape_info &&\n (is_mul || is_matmul || is_add || is_sub || is_any_div) &&",
"filename": "tensorflow/core/grappler/optimizers/constant_folding.cc",
"status": "modified"
},
{
"diff": "@@ -800,6 +800,7 @@ TEST_F(ConstantFoldingTest, NeutralElement) {\n Output mul6 = ops::MulNoNan(s.WithOpName(\"mul6\"), zeros_1d, y);\n Output div1 = ops::Div(s.WithOpName(\"div1\"), x, ones);\n Output div2 = ops::Div(s.WithOpName(\"div2\"), ones, y);\n+ Output floordiv = ops::FloorDiv(s.WithOpName(\"floordiv\"), x, ones);\n Output matmul1 = ops::MatMul(s.WithOpName(\"matmul1\"), x, zeros);\n Output matmul2 = ops::MatMul(s.WithOpName(\"matmul2\"), zeros, y);\n Output matmul3 = ops::MatMul(s.WithOpName(\"matmul3\"), a, zeros);\n@@ -814,10 +815,10 @@ TEST_F(ConstantFoldingTest, NeutralElement) {\n Output bias_add2 = ops::BiasAdd(s.WithOpName(\"bias_add2\"), zeros, bias);\n Output sub1 = ops::Sub(s.WithOpName(\"sub1\"), x, zeros);\n Output sub2 = ops::Sub(s.WithOpName(\"sub2\"), zeros, y);\n- Output concat =\n- ops::Stack(s.WithOpName(\"stack\"),\n- {mul1, mul2, mul3, mul4, mul5, mul6, div1, div2, matmul1,\n- matmul2, add1, add2, bias_add1, bias_add2, sub1, sub2});\n+ Output concat = ops::Stack(\n+ s.WithOpName(\"stack\"),\n+ {mul1, mul2, mul3, mul4, mul5, mul6, div1, div2, floordiv, matmul1,\n+ matmul2, add1, add2, bias_add1, bias_add2, sub1, sub2});\n GrapplerItem item;\n TF_CHECK_OK(s.ToGraphDef(&item.graph));\n item.fetch = {\"stack\", \"matmul3\", \"matmul4\", \"mul1_bcast\",\n@@ -836,7 +837,7 @@ TEST_F(ConstantFoldingTest, NeutralElement) {\n const string ctrl_zeros_name = strings::StrCat(\"^zeros\", suffix);\n const string ctrl_ones_name = strings::StrCat(\"^ones\", suffix);\n \n- EXPECT_EQ(const_type == kFill ? 42 : 38, output.node_size());\n+ EXPECT_EQ(const_type == kFill ? 43 : 39, output.node_size());\n for (int i = 0; i < output.node_size(); ++i) {\n const NodeDef& node = output.node(i);\n const string& name = node.name();\n@@ -880,6 +881,10 @@ TEST_F(ConstantFoldingTest, NeutralElement) {\n EXPECT_EQ(\"Reciprocal\", node.op());\n EXPECT_EQ(\"y\", node.input(0));\n EXPECT_EQ(ctrl_ones_name, node.input(1));\n+ } else if (name == \"floordiv\") {\n+ EXPECT_EQ(\"FloorDiv\", node.op());\n+ EXPECT_EQ(\"x\", node.input(0));\n+ EXPECT_EQ(ones_name, node.input(1));\n } else if (name == \"matmul1\") {\n EXPECT_EQ(\"Const\", node.op());\n EXPECT_EQ(\"^x\", node.input(0));",
"filename": "tensorflow/core/grappler/optimizers/constant_folding_test.cc",
"status": "modified"
}
]
}
|
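A small worked check of why the `FloorDiv(x, 1)` simplification removed in the row above was unsound (a sketch built from the issue's own repro; printed formatting may differ by TF version): floor division by 1.0 is `Floor(x)`, which differs from `x` whenever `x` has a fractional part, so folding `FloorDiv(x, 1.0)` into `x` changes the result.

```python
import tensorflow as tf

x = tf.constant([0.0, 0.1, 0.9])

# Correct semantics: x // 1.0 == floor(x).
print(tf.math.floordiv(x, 1.0).numpy())  # -> [0. 0. 0.]
print(tf.math.floor(x).numpy())          # -> [0. 0. 0.]

# The buggy constant-folding rewrite simplified FloorDiv(x, 1.0) to x itself,
# which is why the un-compiled tf.function path in the issue printed [0. 0.1 0.9].
```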
{
"body": "**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10\r\n- TensorFlow installed from (source or binary): binary\r\n- TensorFlow version (use command below): 2.3.0\r\n- Python version: 3.6.8\r\n\r\n\r\n**Describe the current behavior**\r\nSome callbacks (e.g. ProgbarLogger, ModelCheckpoint, ...) have the flag `self._supports_tf_logs = True`. If other callbacks (especially custom Callback) don't have this property, then those callbacks do not have acces to the same logs. \r\nIn the code example below, `ModelCheckpoint` can not use the `'val_log_loss'` as a monitor value from the `CustomMetric` callback.\r\nThis results from the commit https://github.com/tensorflow/tensorflow/commit/50480faea75f56def464b84f251b4aee388dfce9 where a new `numpy_logs` property has been introduced, without making sure to sync it with the pre-existing `logs` property.\r\n\r\n**Describe the expected behavior**\r\nThe two propertys `numpy_logs` and `logs` should contain the same information OR it should be made clear in the docs (https://www.tensorflow.org/guide/keras/custom_callback#keras_callbacks_overview) what `_supports_tf_logs` does and that there could be compatibility issues.\r\n\r\n**Standalone code to reproduce the issue**\r\n```\r\n...\r\nfrom tensorflow.keras.callbacks import Callback, ModelCheckpoint\r\n...\r\n\r\nclass CustomMetric(Callback):\r\n def __init__(self, x_valid, y_valid):\r\n super().__init__()\r\n self.x_valid = x_valid\r\n self.y_valid = y_valid\r\n\r\n def on_epoch_end(self, epoch, logs=None):\r\n y_pred = self.model.predict(self.x_valid, batch_size=BATCHSIZE)\r\n\r\n logs['val_log_loss'] = metrics.log_loss(self.y_valid, y_pred)\r\n\r\n...\r\n\r\nmodel.fit(\r\n x_train,\r\n y_train,\r\n validation_data=(x_valid, y_valid),\r\n shuffle=True,\r\n batch_size=BATCHSIZE,\r\n epochs=EPOCHS,\r\n verbose=1,\r\n callbacks=[CustomMetric(x_valid, y_valid), ModelCheckpoint('test.h5', 'val_log_loss', verbose=1, save_best_only=True, mode='min')]\r\n )\r\n\r\n...\r\n```\r\n\r\n**Other info / logs** \r\nSee commit https://github.com/tensorflow/tensorflow/commit/50480faea75f56def464b84f251b4aee388dfce9",
"comments": [
{
"body": "@albert-92 Can you please provide a standalone code to reproduce the issue? Thanks!",
"created_at": "2020-07-29T23:57:26Z"
},
{
"body": "@jvishnuvardhan Sure. Here's a standalone code to reproduce the issue:\r\n\r\n```\r\nfrom __future__ import print_function\r\n\r\nfrom tensorflow.keras.datasets import mnist\r\nfrom tensorflow.keras.models import Sequential\r\nfrom tensorflow.keras.layers import Dense\r\nfrom tensorflow.keras.optimizers import RMSprop\r\nfrom tensorflow.keras.callbacks import Callback, ModelCheckpoint, History\r\nfrom tensorflow.keras import utils\r\nfrom sklearn import metrics\r\n\r\nbatch_size = 128\r\nnum_classes = 10\r\nepochs = 2\r\n\r\n# Custom callback, where the logs are actually the numpy_logs object \r\n# if the flag self._supports_tf_logs is not set to True\r\nclass CustomMetric(Callback):\r\n def __init__(self, x_valid, y_valid):\r\n super().__init__()\r\n self.x_valid = x_valid\r\n self.y_valid = y_valid\r\n\r\n def on_epoch_end(self, epoch, logs=None):\r\n y_pred = self.model.predict(self.x_valid, batch_size=batch_size)\r\n\r\n logs['val_log_loss'] = metrics.log_loss(self.y_valid, y_pred)\r\n\r\n\r\n# the data, split between train and test sets\r\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\r\n\r\nx_train = x_train.reshape(60000, 784).astype('float32') / 255.\r\nx_test = x_test.reshape(10000, 784).astype('float32') / 255.\r\n\r\n# convert class vectors to binary class matrices\r\ny_train = utils.to_categorical(y_train, num_classes)\r\ny_test = utils.to_categorical(y_test, num_classes)\r\n\r\nmodel = Sequential()\r\nmodel.add(Dense(64, activation='relu', input_shape=(784,)))\r\nmodel.add(Dense(32, activation='relu'))\r\nmodel.add(Dense(num_classes, activation='softmax'))\r\n\r\nmodel.summary()\r\n\r\nmodel.compile(loss='categorical_crossentropy',\r\n optimizer=RMSprop(),\r\n metrics=['accuracy'])\r\n\r\n# The following part works partly as intended.\r\n# history.history contains the key 'val_log_loss' even though it is not printed by the ProgbarLogger\r\n# (since ProgbarLogger uses logs and CustomMetric numpy_logs)\r\nhistory = model.fit(x_train, y_train,\r\n batch_size=batch_size,\r\n epochs=epochs,\r\n verbose=1,\r\n validation_data=(x_test, y_test),\r\n callbacks=[\r\n CustomMetric(x_test, y_test)\r\n ])\r\n\r\nprint(history.history)\r\n\r\n# This following part does not work as intented.\r\n# ModelCheckpoint outputs the warning\r\n# \"WARNING:tensorflow:Can save best model only with val_log_loss available, skipping.\"\r\n# because 'val_log_loss' is in the numpy_logs object and ModelCheckpoint uses the logs object\r\nmodel.fit(x_train, y_train,\r\n batch_size=batch_size,\r\n epochs=epochs,\r\n verbose=1,\r\n validation_data=(x_test, y_test),\r\n callbacks=[\r\n CustomMetric(x_test, y_test),\r\n ModelCheckpoint('test.h5', monitor='val_log_loss', verbose=1, save_best_only=True, mode='min')\r\n ])\r\n\r\n```",
"created_at": "2020-07-30T07:08:46Z"
},
{
"body": "I have tried in colab with TF version 2.3, nightly version(`2.4.0-dev20200729`) and was able to reproduce the issue.Please, find the gist [here](https://colab.research.google.com/gist/ravikyram/87ab302844f49a73e3c53b032a8565b8/untitled200.ipynb).Thanks!",
"created_at": "2020-07-30T08:11:47Z"
},
{
"body": "Facing the same issue when moved from 2.2 to 2.3",
"created_at": "2020-09-28T20:25:53Z"
},
{
"body": "@reedwm @omalleyt12 this issue is affecting our Keras callbacks in Horovod as well. As reported in https://github.com/horovod/horovod/issues/2440, when using `MetricAverageCallback` to average metrics across workers, the history is correctly reporting averages, but the logs are not. When setting `callback._supports_tf_logs = True` we get the exact opposite behavior: logs are correctly averaged but history is not. \r\n\r\nCan someone from your team help in providing a fix / workaround for this?\r\n\r\nHere's a standalone script using Horovod that repros the issue:\r\n\r\n```\r\n import tensorflow as tf\r\n from tensorflow import keras\r\n import horovod.tensorflow.keras as hvd\r\n\r\n hvd.init()\r\n\r\n opt = tf.keras.optimizers.Adam(0.01)\r\n opt = hvd.DistributedOptimizer(opt)\r\n\r\n def test_metric(y_true, y_pred):\r\n return hvd.rank()\r\n\r\n model = keras.models.Sequential()\r\n model.add(keras.layers.Dense(2, input_shape=(3,)))\r\n model.compile(loss=keras.losses.mean_squared_error,\r\n optimizer=opt,\r\n metrics=[test_metric],\r\n experimental_run_tf_function=False)\r\n\r\n x = np.random.random((1, 3))\r\n y = np.random.random((1, 3, 2))\r\n\r\n callbacks = [\r\n hvd.callbacks.BroadcastGlobalVariablesCallback(0),\r\n hvd.callbacks.MetricAverageCallback(),\r\n ]\r\n\r\n train_history = model.fit(\r\n x,\r\n y,\r\n steps_per_epoch=10,\r\n callbacks=callbacks,\r\n epochs=1\r\n )\r\n\r\n expected = sum(range(hvd.size())) / hvd.size()\r\n results = train_history.history.get('test_metric')\r\n assert results[0] == expected\r\n```",
"created_at": "2020-11-23T17:50:24Z"
},
{
"body": "/CC @fchollet",
"created_at": "2020-12-01T22:36:30Z"
},
{
"body": "I made a PR to fix this: https://github.com/tensorflow/tensorflow/pull/47922",
"created_at": "2021-03-19T16:40:54Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/41851\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/41851\">No</a>\n",
"created_at": "2021-04-05T15:00:51Z"
},
{
"body": "Thanks for the PR @lgeiger \r\nThis doesn't seem to fix the progress bar values on the example provided by tgaddair on tf2.5. Do you have any recommendation for that case?\r\nThank you",
"created_at": "2021-07-21T12:35:10Z"
}
],
"number": 41851,
"title": "Keras Callbacks logs / numpy_logs not in sync"
}
|
{
"body": "50480faea75f56def464b84f251b4aee388dfce9 introduced a bug where callbacks with `_supports_tf_logs=True` would access a different `logs` dictionary compared to callbacks with `_supports_tf_logs=False`.\r\nThis lead to problems when users would mutate the dictionary in callbacks which is a common pattern that is also used in some built in callbacks.\r\n\r\nThis PR fixes this by converting the logs dictionary to numpy in cases where not all callbacks support TF logs. This makes sure that all callbacks will access the same dictionary. The changes make the assumption that callbacks with `_supports_tf_logs=True` also support numpy logs. For all builtin callbacks this is already the case and in fact the logs dictionary frequently includes a mix of TensorFlow and Python scalars in the current implementation as well, so I don't think this causes a problem.\r\n\r\nThe PR also includes a small performance optimization in c38818b65da0f1284dcce4fb7aaf0c92ee98c1c1 which removes the need for converting logs during batch hooks in cases where all callbacks implementing batch hooks support TF logs.\r\n\r\nFixes #41851\r\nFixes #45895\r\n\r\n@fchollet @rmothukuru @reedwm @omalleyt12 would you be able to take a look at this PR? It would be great if this fix could still make it into the TF 2.5 release since it currently breaks many custom callbacks in user space (e.g. https://github.com/horovod/horovod/pull/2549).\r\nNote: For easier review, I'd recommend looking at the two commits one by one.",
"number": 47922,
"review_comments": [],
"title": "Fix Keras Callbacks logs / numpy_logs sync"
}
|
{
"commits": [
{
"message": "Fix Keras Callbacks logs sync"
},
{
"message": "Only convert logs if batch hooks do not support TF logs\n\nThis is a small performance optimization that prevents conversion if not\nnecessary."
}
],
"files": [
{
"diff": "@@ -234,6 +234,15 @@ def __init__(self,\n \n # Performance optimization: determines if batch hooks need to be called.\n # pylint: disable=protected-access\n+ self._supports_tf_logs = all(\n+ getattr(cb, '_supports_tf_logs', False) for cb in self.callbacks)\n+ self._batch_hooks_support_tf_logs = all(\n+ getattr(cb, '_supports_tf_logs', False)\n+ for cb in self.callbacks\n+ if cb._implements_train_batch_hooks()\n+ or cb._implements_test_batch_hooks()\n+ or cb._implements_predict_batch_hooks())\n+\n self._should_call_train_batch_hooks = any(\n cb._implements_train_batch_hooks() for cb in self.callbacks)\n self._should_call_test_batch_hooks = any(\n@@ -272,6 +281,16 @@ def _add_default_callbacks(self, add_history, add_progbar):\n self._history = History()\n self.callbacks.append(self._history)\n \n+ def _process_logs(self, logs, is_batch_hook=False):\n+ \"\"\"Turns tensors into numpy arrays or Python scalars if necessary.\"\"\"\n+ if logs is None:\n+ return {}\n+ if self._supports_tf_logs:\n+ return logs\n+ if is_batch_hook and self._batch_hooks_support_tf_logs:\n+ return logs\n+ return tf_utils.sync_to_numpy_or_python_type(logs)\n+\n def append(self, callback):\n self.callbacks.append(callback)\n \n@@ -347,19 +366,13 @@ def _call_batch_end_hook(self, mode, batch, logs):\n \n def _call_batch_hook_helper(self, hook_name, batch, logs):\n \"\"\"Helper function for `on_*_batch_*` methods.\"\"\"\n- logs = logs or {}\n- numpy_logs = None\n if self._check_timing:\n start_time = time.time()\n \n+ logs = self._process_logs(logs, is_batch_hook=True)\n for callback in self.callbacks:\n hook = getattr(callback, hook_name)\n- if getattr(callback, '_supports_tf_logs', False):\n- hook(batch, logs)\n- else:\n- if numpy_logs is None: # Only convert once.\n- numpy_logs = tf_utils.sync_to_numpy_or_python_type(logs)\n- hook(batch, numpy_logs)\n+ hook(batch, logs)\n \n if self._check_timing:\n if hook_name not in self._hook_times:\n@@ -402,15 +415,9 @@ def on_epoch_begin(self, epoch, logs=None):\n logs: Dict. Currently no data is passed to this argument for this method\n but that may change in the future.\n \"\"\"\n- logs = logs or {}\n- numpy_logs = None\n+ logs = self._process_logs(logs)\n for callback in self.callbacks:\n- if getattr(callback, '_supports_tf_logs', False):\n- callback.on_epoch_begin(epoch, logs)\n- else:\n- if numpy_logs is None: # Only convert once.\n- numpy_logs = tf_utils.sync_to_numpy_or_python_type(logs)\n- callback.on_epoch_begin(epoch, numpy_logs)\n+ callback.on_epoch_begin(epoch, logs)\n \n def on_epoch_end(self, epoch, logs=None):\n \"\"\"Calls the `on_epoch_end` methods of its callbacks.\n@@ -423,15 +430,9 @@ def on_epoch_end(self, epoch, logs=None):\n validation epoch if validation is performed. Validation result keys\n are prefixed with `val_`.\n \"\"\"\n- logs = logs or {}\n- numpy_logs = None\n+ logs = self._process_logs(logs)\n for callback in self.callbacks:\n- if getattr(callback, '_supports_tf_logs', False):\n- callback.on_epoch_end(epoch, logs)\n- else:\n- if numpy_logs is None: # Only convert once.\n- numpy_logs = tf_utils.sync_to_numpy_or_python_type(logs)\n- callback.on_epoch_end(epoch, numpy_logs)\n+ callback.on_epoch_end(epoch, logs)\n \n def on_train_batch_begin(self, batch, logs=None):\n \"\"\"Calls the `on_train_batch_begin` methods of its callbacks.\n@@ -506,15 +507,9 @@ def on_train_begin(self, logs=None):\n logs: Dict. 
Currently no data is passed to this argument for this method\n but that may change in the future.\n \"\"\"\n- logs = logs or {}\n- numpy_logs = None\n+ logs = self._process_logs(logs)\n for callback in self.callbacks:\n- if getattr(callback, '_supports_tf_logs', False):\n- callback.on_train_begin(logs)\n- else:\n- if numpy_logs is None: # Only convert once.\n- numpy_logs = tf_utils.sync_to_numpy_or_python_type(logs)\n- callback.on_train_begin(numpy_logs)\n+ callback.on_train_begin(logs)\n \n def on_train_end(self, logs=None):\n \"\"\"Calls the `on_train_end` methods of its callbacks.\n@@ -523,15 +518,9 @@ def on_train_end(self, logs=None):\n logs: Dict. Currently no data is passed to this argument for this method\n but that may change in the future.\n \"\"\"\n- logs = logs or {}\n- numpy_logs = None\n+ logs = self._process_logs(logs)\n for callback in self.callbacks:\n- if getattr(callback, '_supports_tf_logs', False):\n- callback.on_train_end(logs)\n- else:\n- if numpy_logs is None: # Only convert once.\n- numpy_logs = tf_utils.sync_to_numpy_or_python_type(logs)\n- callback.on_train_end(numpy_logs)\n+ callback.on_train_end(logs)\n \n def on_test_begin(self, logs=None):\n \"\"\"Calls the `on_test_begin` methods of its callbacks.\n@@ -540,15 +529,9 @@ def on_test_begin(self, logs=None):\n logs: Dict. Currently no data is passed to this argument for this method\n but that may change in the future.\n \"\"\"\n- logs = logs or {}\n- numpy_logs = None\n+ logs = self._process_logs(logs)\n for callback in self.callbacks:\n- if getattr(callback, '_supports_tf_logs', False):\n- callback.on_test_begin(logs)\n- else:\n- if numpy_logs is None: # Only convert once.\n- numpy_logs = tf_utils.sync_to_numpy_or_python_type(logs)\n- callback.on_test_begin(numpy_logs)\n+ callback.on_test_begin(logs)\n \n def on_test_end(self, logs=None):\n \"\"\"Calls the `on_test_end` methods of its callbacks.\n@@ -557,15 +540,9 @@ def on_test_end(self, logs=None):\n logs: Dict. Currently no data is passed to this argument for this method\n but that may change in the future.\n \"\"\"\n- logs = logs or {}\n- numpy_logs = None\n+ logs = self._process_logs(logs)\n for callback in self.callbacks:\n- if getattr(callback, '_supports_tf_logs', False):\n- callback.on_test_end(logs)\n- else:\n- if numpy_logs is None: # Only convert once.\n- numpy_logs = tf_utils.sync_to_numpy_or_python_type(logs)\n- callback.on_test_end(numpy_logs)\n+ callback.on_test_end(logs)\n \n def on_predict_begin(self, logs=None):\n \"\"\"Calls the 'on_predict_begin` methods of its callbacks.\n@@ -574,15 +551,9 @@ def on_predict_begin(self, logs=None):\n logs: Dict. Currently no data is passed to this argument for this method\n but that may change in the future.\n \"\"\"\n- logs = logs or {}\n- numpy_logs = None\n+ logs = self._process_logs(logs)\n for callback in self.callbacks:\n- if getattr(callback, '_supports_tf_logs', False):\n- callback.on_predict_begin(logs)\n- else:\n- if numpy_logs is None: # Only convert once.\n- numpy_logs = tf_utils.sync_to_numpy_or_python_type(logs)\n- callback.on_predict_begin(numpy_logs)\n+ callback.on_predict_begin(logs)\n \n def on_predict_end(self, logs=None):\n \"\"\"Calls the `on_predict_end` methods of its callbacks.\n@@ -591,15 +562,9 @@ def on_predict_end(self, logs=None):\n logs: Dict. 
Currently no data is passed to this argument for this method\n but that may change in the future.\n \"\"\"\n- logs = logs or {}\n- numpy_logs = None\n+ logs = self._process_logs(logs)\n for callback in self.callbacks:\n- if getattr(callback, '_supports_tf_logs', False):\n- callback.on_predict_end(logs)\n- else:\n- if numpy_logs is None: # Only convert once.\n- numpy_logs = tf_utils.sync_to_numpy_or_python_type(logs)\n- callback.on_predict_end(numpy_logs)\n+ callback.on_predict_end(logs)\n \n def __iter__(self):\n return iter(self.callbacks)",
"filename": "tensorflow/python/keras/callbacks.py",
"status": "modified"
},
{
"diff": "@@ -76,6 +76,13 @@\n NUM_HIDDEN = 5\n BATCH_SIZE = 5\n \n+CALLBACK_HOOKS = [\n+ 'on_batch_begin', 'on_batch_end', 'on_epoch_begin', 'on_epoch_end',\n+ 'on_predict_batch_begin', 'on_predict_batch_end', 'on_predict_begin',\n+ 'on_predict_end', 'on_test_batch_begin', 'on_test_batch_end',\n+ 'on_test_begin', 'on_test_end', 'on_train_batch_begin',\n+ 'on_train_batch_end', 'on_train_begin', 'on_train_end'\n+]\n \n class Counter(keras.callbacks.Callback):\n \"\"\"Counts the number of times each callback method was run.\n@@ -87,14 +94,7 @@ class Counter(keras.callbacks.Callback):\n \n def __init__(self):\n self.method_counts = collections.defaultdict(int)\n- methods_to_count = [\n- 'on_batch_begin', 'on_batch_end', 'on_epoch_begin', 'on_epoch_end',\n- 'on_predict_batch_begin', 'on_predict_batch_end', 'on_predict_begin',\n- 'on_predict_end', 'on_test_batch_begin', 'on_test_batch_end',\n- 'on_test_begin', 'on_test_end', 'on_train_batch_begin',\n- 'on_train_batch_end', 'on_train_begin', 'on_train_end'\n- ]\n- for method_name in methods_to_count:\n+ for method_name in CALLBACK_HOOKS:\n setattr(self, method_name,\n self.wrap_with_counts(method_name, getattr(self, method_name)))\n \n@@ -107,6 +107,17 @@ def _call_and_count(*args, **kwargs):\n return _call_and_count\n \n \n+class CallAllHooks(keras.callbacks.Callback):\n+ \"\"\"A callback that calls self._run for all hooks\"\"\"\n+\n+ def __init__(self):\n+ for method_name in CALLBACK_HOOKS:\n+ setattr(self, method_name, self._run)\n+\n+ def _run(self, *args, logs=None):\n+ raise NotImplementedError\n+\n+\n def _get_numpy():\n return np.ones((10, 10)), np.ones((10, 1))\n \n@@ -1683,6 +1694,12 @@ def on_test_batch_end(self, batch, logs=None):\n def on_predict_batch_end(self, batch, logs=None):\n self.predict_batches += 1\n \n+ class MyCallbackWithTFBatchHooks(keras.callbacks.Callback):\n+\n+ def __init__(self):\n+ super(MyCallbackWithTFBatchHooks, self).__init__()\n+ self._supports_tf_logs = True\n+\n class MyCallbackWithoutBatchHooks(keras.callbacks.Callback):\n \n def __init__(self):\n@@ -1700,6 +1717,7 @@ def on_epoch_end(self, epoch, logs=None):\n self.assertTrue(cb_list._should_call_train_batch_hooks)\n self.assertTrue(cb_list._should_call_test_batch_hooks)\n self.assertTrue(cb_list._should_call_predict_batch_hooks)\n+ self.assertFalse(cb_list._batch_hooks_support_tf_logs)\n \n model.fit(x, y, epochs=2, batch_size=10, callbacks=[my_cb], verbose=0)\n model.evaluate(x, y, batch_size=10, callbacks=[my_cb], verbose=0)\n@@ -1709,6 +1727,10 @@ def on_epoch_end(self, epoch, logs=None):\n self.assertEqual(my_cb.test_batches, 1)\n self.assertEqual(my_cb.predict_batches, 1)\n \n+ my_cb = MyCallbackWithTFBatchHooks()\n+ cb_list = keras.callbacks.CallbackList([my_cb], verbose=0)\n+ self.assertTrue(cb_list._batch_hooks_support_tf_logs)\n+\n my_cb = MyCallbackWithoutBatchHooks()\n cb_list = keras.callbacks.CallbackList([my_cb], verbose=0)\n self.assertLen(cb_list.callbacks, 1)\n@@ -1720,6 +1742,56 @@ def on_epoch_end(self, epoch, logs=None):\n model.evaluate(x, y, batch_size=10, callbacks=[my_cb], verbose=0)\n model.predict(x, batch_size=10, callbacks=[my_cb], verbose=0)\n \n+ @keras_parameterized.run_all_keras_modes(always_skip_v1=True)\n+ def test_logs_conversion(self):\n+ assert_dict_equal = self.assertDictEqual\n+\n+ class MutateNumpyLogs(CallAllHooks):\n+ def _run(self, *args, logs=None):\n+ logs = logs or args[-1]\n+ logs[\"numpy\"] = 1\n+\n+ class MutateTensorFlowLogs(CallAllHooks):\n+ def __init__(self):\n+ super(MutateTensorFlowLogs, 
self).__init__()\n+ self._supports_tf_logs = True\n+\n+ def _run(self, *args, logs=None):\n+ logs = logs or args[-1]\n+ logs[\"tf\"] = 2\n+\n+ class AssertNumpyLogs(CallAllHooks):\n+ def _run(self, *args, logs=None):\n+ logs = logs or args[-1]\n+ assert_dict_equal(logs, {\"all\": 0, \"numpy\": 1, \"tf\": 2})\n+\n+ class AssertTensorFlowLogs(AssertNumpyLogs):\n+ def __init__(self):\n+ super(AssertTensorFlowLogs, self).__init__()\n+ self._supports_tf_logs = True\n+\n+ cb_list = keras.callbacks.CallbackList([\n+ MutateNumpyLogs(),\n+ MutateTensorFlowLogs(),\n+ AssertNumpyLogs(),\n+ AssertTensorFlowLogs()])\n+\n+ assert len(cb_list.callbacks) == 4\n+ cb_list.on_epoch_begin(0, logs={\"all\": 0})\n+ cb_list.on_epoch_end(0, logs={\"all\": 0})\n+ cb_list.on_predict_batch_begin(0, logs={\"all\": 0})\n+ cb_list.on_predict_batch_end(0, logs={\"all\": 0})\n+ cb_list.on_predict_begin(logs={\"all\": 0})\n+ cb_list.on_predict_end(logs={\"all\": 0})\n+ cb_list.on_test_batch_begin(0, logs={\"all\": 0})\n+ cb_list.on_test_batch_end(0, logs={\"all\": 0})\n+ cb_list.on_test_begin(logs={\"all\": 0})\n+ cb_list.on_test_end(logs={\"all\": 0})\n+ cb_list.on_train_batch_begin(0, logs={\"all\": 0})\n+ cb_list.on_train_batch_end(0, logs={\"all\": 0})\n+ cb_list.on_train_begin(logs={\"all\": 0})\n+ cb_list.on_train_end(logs={\"all\": 0})\n+\n @keras_parameterized.run_all_keras_modes(always_skip_v1=True)\n def test_implements_batch_hooks_override(self):\n ",
"filename": "tensorflow/python/keras/callbacks_test.py",
"status": "modified"
}
]
}
|
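To make the fix in the row above concrete, here is a minimal sketch of the pattern the issue describes (the key name `val_custom` and the value are illustrative placeholders, not from the original report): one callback writes an extra entry into `logs`, and a built-in callback later in the list monitors it. With `CallbackList._process_logs` converting the logs at most once and handing every callback the same dictionary, `ModelCheckpoint` can see the added key whether or not the custom callback sets the private `_supports_tf_logs` flag.

```python
import tensorflow as tf


class AddCustomMetric(tf.keras.callbacks.Callback):
    """Adds an externally computed value to `logs` at the end of each epoch."""

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # With the shared-logs fix, this mutation is visible to callbacks that
        # run after this one in the callback list (e.g. ModelCheckpoint below).
        logs["val_custom"] = 0.5  # placeholder for a real externally computed metric


checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best.h5", monitor="val_custom", save_best_only=True, mode="min")

# Usage sketch: list the custom callback before the checkpoint so the key
# exists by the time ModelCheckpoint inspects the logs.
# model.fit(x, y, epochs=2, callbacks=[AddCustomMetric(), checkpoint])
```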
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator L2_POOL_2D from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1 (step 1): Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2 (step 2): Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\n\r\nThe next 3 steps are combined into a single PR3 with separate commits:\r\n\r\n(step 3): Copy operator from lite to micro making minimal changes and not including in the build\r\n(step 4): Delete extra code from the micro copy of the operator\r\n(step 5): Port micro copy of operator as necessary and add a corresponding test",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47814\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47814\">No</a>\n",
"created_at": "2021-06-09T16:03:51Z"
}
],
"number": 47814,
"title": "micro: port op L2_POOL_2D from lite"
}
|
{
"body": "PR steps 3 through 5 for the L2_POOL_2D operator as per Issue #47814\r\n",
"number": 47864,
"review_comments": [
{
"body": "This looks like an unrelated fix for Leaky Relu? Could you add it to a separate PR?",
"created_at": "2021-03-17T16:20:45Z"
},
{
"body": "There was a merge conflict resolution error. Fixed now.",
"created_at": "2021-03-17T16:42:34Z"
}
],
"title": "micro: L2_POOL_2D PR3-5"
}
|
{
"commits": [
{
"message": "micro: copy operator L2_POOL_2D kernel from lite\n\nThis is a copy with minimal modification of the kernel and test for\noperator L2_POOL_2D from tensorflow/lite/kernels.\nAdaptations to micro and addition to the micro build to follow.\n\nPR step 3 for issue #47814"
},
{
"message": "micro: prepare to port operator L2_POOL_2D kernel from lite with test\n\nImplement skeleton (non-working) code for operator and test.\nHeader files changed.\nNamespaces changed.\nSome original code deleted.\nSome original code modified.\n\nPR step 4 of the work to port operator L2_POOL_2D as tracked in Issue #47814"
},
{
"message": "micro: port operator L2_POOL_2D kernel from lite with test\n\nComplete implementation of TFLM operator L2_POOL_2D and associated TFLM test code.\n\nPR step 5 of the work to port operator L2_POOL_2D as tracked in Issue #47814"
},
{
"message": "Fix merge conflict error"
}
],
"files": [
{
"diff": "@@ -44,6 +44,7 @@ AllOpsResolver::AllOpsResolver() {\n AddGreaterEqual();\n AddHardSwish();\n AddL2Normalization();\n+ AddL2Pool2D();\n AddLeakyRelu();\n AddLess();\n AddLessEqual();",
"filename": "tensorflow/lite/micro/all_ops_resolver.cc",
"status": "modified"
},
{
"diff": "@@ -275,6 +275,7 @@ cc_library(\n \"fill.cc\",\n \"floor.cc\",\n \"l2norm.cc\",\n+ \"l2_pool_2d.cc\",\n \"leaky_relu.cc\",\n \"logical.cc\",\n \"logistic.cc\",\n@@ -702,6 +703,21 @@ cc_test(\n ],\n )\n \n+cc_test(\n+ name = \"l2_pool_2d_test\",\n+ srcs = [\n+ \"l2_pool_2d_test.cc\",\n+ ],\n+ deps = [\n+ \":kernel_runner\",\n+ \"//tensorflow/lite/c:common\",\n+ \"//tensorflow/lite/micro:debug_log\",\n+ \"//tensorflow/lite/micro:op_resolvers\",\n+ \"//tensorflow/lite/micro:test_helpers\",\n+ \"//tensorflow/lite/micro/testing:micro_test\",\n+ ],\n+)\n+\n cc_test(\n name = \"leaky_relu_test\",\n srcs = [",
"filename": "tensorflow/lite/micro/kernels/BUILD",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,137 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#include <stddef.h>\n+#include <stdint.h>\n+\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/pooling.h\"\n+#include \"tensorflow/lite/kernels/internal/types.h\"\n+#include \"tensorflow/lite/kernels/kernel_util.h\"\n+#include \"tensorflow/lite/kernels/padding.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_util.h\"\n+\n+namespace tflite {\n+namespace {\n+\n+// Input/output tensor index.\n+constexpr int kInputTensor = 0;\n+constexpr int kOutputTensor = 0;\n+\n+// required rank for input/output tensor shape\n+constexpr int kTensorShapeRank = 4;\n+\n+// input/output tensor shape rank associations\n+enum { kBatchRank = 0, kHeightRank, kWidthRank, kChannelRank };\n+\n+TfLiteStatus L2Prepare(TfLiteContext* context, TfLiteNode* node) {\n+ auto* params = static_cast<TfLitePoolParams*>(node->builtin_data);\n+\n+ TF_LITE_ENSURE_EQ(context, NumInputs(node), 1);\n+ TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);\n+ TfLiteTensor* output;\n+ TF_LITE_ENSURE_OK(context,\n+ GetOutputSafe(context, node, kOutputTensor, &output));\n+ const TfLiteTensor* input;\n+ TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, kInputTensor, &input));\n+ TF_LITE_ENSURE_EQ(context, NumDimensions(input), kTensorShapeRank);\n+ TF_LITE_ENSURE_EQ(context, NumDimensions(output), kTensorShapeRank);\n+ TF_LITE_ENSURE_TYPES_EQ(context, input->type, output->type);\n+\n+ int batches = SizeOfDimension(input, kBatchRank);\n+ int height = SizeOfDimension(input, kHeightRank);\n+ int width = SizeOfDimension(input, kWidthRank);\n+ int channels_out = SizeOfDimension(input, kChannelRank);\n+\n+ // Matching GetWindowedOutputSize in TensorFlow.\n+ auto padding = params->padding;\n+ int out_width, out_height;\n+\n+ params->computed.padding = ComputePaddingHeightWidth(\n+ params->stride_height, params->stride_width, 1, 1, height, width,\n+ params->filter_height, params->filter_width, padding, &out_height,\n+ &out_width);\n+\n+ // We currently don't have a quantized implementation of L2Pool\n+ TF_LITE_ENSURE_TYPES_EQ(context, input->type, kTfLiteFloat32);\n+\n+ // We must update the output tensor dimensions.\n+ // The dims storage is expected to be the same area in memory\n+ // for both TfLiteTensor and TfLiteEvalTensor. 
This is important\n+ // because TfLiteTensor in the MicroInterpreter is a temporary\n+ // allocation.\n+ output->dims->data[kBatchRank] = batches;\n+ output->dims->data[kHeightRank] = out_height;\n+ output->dims->data[kWidthRank] = out_width;\n+ output->dims->data[kChannelRank] = channels_out;\n+\n+ return kTfLiteOk;\n+}\n+\n+void L2EvalFloat(const TfLitePoolParams& params, const TfLiteEvalTensor& input,\n+ tflite::PoolParams* op_params, TfLiteEvalTensor* output) {\n+ float activation_min, activation_max;\n+ CalculateActivationRange(params.activation, &activation_min, &activation_max);\n+\n+ op_params->float_activation_min = activation_min;\n+ op_params->float_activation_max = activation_max;\n+ reference_ops::L2Pool(*op_params, tflite::micro::GetTensorShape(&input),\n+ tflite::micro::GetTensorData<float>(&input),\n+ tflite::micro::GetTensorShape(output),\n+ tflite::micro::GetTensorData<float>(output));\n+}\n+\n+TfLiteStatus L2Eval(TfLiteContext* context, TfLiteNode* node) {\n+ auto* params = static_cast<const TfLitePoolParams*>(node->builtin_data);\n+\n+ TfLiteEvalTensor* output =\n+ tflite::micro::GetEvalOutput(context, node, kOutputTensor);\n+ const TfLiteEvalTensor* input =\n+ tflite::micro::GetEvalInput(context, node, kInputTensor);\n+\n+ tflite::PoolParams op_params;\n+ op_params.stride_height = params->stride_height;\n+ op_params.stride_width = params->stride_width;\n+ op_params.filter_height = params->filter_height;\n+ op_params.filter_width = params->filter_width;\n+ op_params.padding_values.height = params->computed.padding.height;\n+ op_params.padding_values.width = params->computed.padding.width;\n+\n+ switch (input->type) { // Already know in/out types are same.\n+ case kTfLiteFloat32:\n+ L2EvalFloat(*params, *input, &op_params, output);\n+ break;\n+ default:\n+ TF_LITE_KERNEL_LOG(context,\n+ \"L2_POOL_2D only supports float32 currently, got %s.\",\n+ TfLiteTypeGetName(input->type));\n+ return kTfLiteError;\n+ }\n+ return kTfLiteOk;\n+}\n+\n+} // namespace\n+\n+TfLiteRegistration Register_L2_POOL_2D() {\n+ return {/*init=*/nullptr,\n+ /*free=*/nullptr,\n+ /*prepare=*/L2Prepare,\n+ /*invoke=*/L2Eval,\n+ /*profiling_string=*/nullptr,\n+ /*builtin_code=*/0,\n+ /*custom_name=*/nullptr,\n+ /*version=*/0};\n+}\n+\n+} // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/l2_pool_2d.cc",
"status": "added"
},
{
"diff": "@@ -0,0 +1,222 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#include <type_traits>\n+\n+#include \"tensorflow/lite/c/builtin_op_data.h\"\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_runner.h\"\n+#include \"tensorflow/lite/micro/test_helpers.h\"\n+#include \"tensorflow/lite/micro/testing/micro_test.h\"\n+\n+namespace tflite {\n+namespace testing {\n+namespace {\n+\n+constexpr float kTolerance = 1e-5;\n+\n+constexpr int kOutputDimsCount = 4;\n+\n+struct L2Pool2DTestParams {\n+ TfLitePadding padding = kTfLitePaddingValid;\n+ int stride_width = 2;\n+ int stride_height = 2;\n+ int filter_width = 2;\n+ int filter_height = 2;\n+ TfLiteFusedActivation activation = kTfLiteActNone;\n+ float compare_tolerance = kTolerance;\n+ // output_dims_data is a TfLiteIntArray\n+ int output_dims_data[kOutputDimsCount + 1] = {kOutputDimsCount, 0, 0, 0, 0};\n+};\n+\n+void ExecuteL2Pool2DTest(const L2Pool2DTestParams& params,\n+ TfLiteTensor* tensors, int tensors_count) {\n+ constexpr int kInputArrayData[] = {1, 0};\n+ TfLiteIntArray* inputs_array = IntArrayFromInts(kInputArrayData);\n+ constexpr int kOutputArrayData[] = {1, 1};\n+ TfLiteIntArray* outputs_array = IntArrayFromInts(kOutputArrayData);\n+\n+ TfLitePoolParams op_params = {};\n+ op_params.activation = params.activation;\n+ op_params.filter_height = params.filter_height;\n+ op_params.filter_width = params.filter_width;\n+ op_params.padding = params.padding;\n+ op_params.stride_height = params.stride_height;\n+ op_params.stride_width = params.stride_width;\n+\n+ const TfLiteRegistration registration = tflite::Register_L2_POOL_2D();\n+ micro::KernelRunner runner(registration, tensors, tensors_count, inputs_array,\n+ outputs_array, static_cast<void*>(&op_params));\n+\n+ TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.InitAndPrepare());\n+ TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.Invoke());\n+}\n+\n+template <typename T>\n+void TestL2Pool2D(const L2Pool2DTestParams& params, const int* input_dims_data,\n+ const T* input_data, const int* expected_dims_data,\n+ const T* expected_data, T* output_data) {\n+ TfLiteIntArray* input_dims = IntArrayFromInts(input_dims_data);\n+ TfLiteIntArray* expected_dims = IntArrayFromInts(expected_dims_data);\n+ TfLiteIntArray* output_dims = IntArrayFromInts(params.output_dims_data);\n+ const int expected_count = ElementCount(*expected_dims);\n+\n+ TfLiteTensor tensors[] = {\n+ CreateTensor(input_data, input_dims),\n+ CreateTensor(output_data, output_dims),\n+ };\n+ constexpr int tensors_count = std::extent<decltype(tensors)>::value;\n+ ExecuteL2Pool2DTest(params, tensors, tensors_count);\n+\n+ for (int i = 0; i < expected_count; i++) {\n+ TF_LITE_MICRO_EXPECT_NEAR(expected_data[i], output_data[i],\n+ params.compare_tolerance);\n+ }\n+ for (int i = 0; i < expected_dims->size; i++) {\n+ TF_LITE_MICRO_EXPECT_EQ(expected_dims->data[i], 
output_dims->data[i]);\n+ }\n+}\n+\n+} // namespace\n+} // namespace testing\n+} // namespace tflite\n+\n+TF_LITE_MICRO_TESTS_BEGIN\n+\n+TF_LITE_MICRO_TEST(FloatPoolingOpTestL2Pool) {\n+ constexpr int kInputDims[] = {4, 1, 2, 4, 1};\n+ constexpr float kInput[] = {\n+ 0, 6, 2, 4, //\n+ 3, 2, 10, 7, //\n+ };\n+ constexpr int kExpectDims[] = {4, 1, 1, 2, 1};\n+ constexpr float kExpect[] = {3.5, 6.5};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+ tflite::testing::L2Pool2DTestParams params;\n+ params.compare_tolerance = 0;\n+\n+ tflite::testing::TestL2Pool2D(params, kInputDims, kInput, kExpectDims,\n+ kExpect, output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(FloatPoolingOpTestL2PoolActivationRelu) {\n+ constexpr int kInputDims[] = {4, 1, 2, 4, 1};\n+ constexpr float kInput[] = {\n+ -1, -6, 2, 4, //\n+ -3, -2, 10, 7, //\n+ };\n+ constexpr int kExpectDims[] = {4, 1, 1, 2, 1};\n+ constexpr float kExpect[] = {3.53553, 6.5};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+ tflite::testing::L2Pool2DTestParams params;\n+ params.activation = kTfLiteActRelu;\n+\n+ tflite::testing::TestL2Pool2D(params, kInputDims, kInput, kExpectDims,\n+ kExpect, output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(FloatPoolingOpTestL2PoolActivationRelu1) {\n+ constexpr int kInputDims[] = {4, 1, 2, 4, 1};\n+ constexpr float kInput[] = {\n+ -0.1, -0.6, 2, 4, //\n+ -0.3, -0.2, 10, 7, //\n+ };\n+ constexpr int kExpectDims[] = {4, 1, 1, 2, 1};\n+ constexpr float kExpect[] = {0.353553, 1.0};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+ tflite::testing::L2Pool2DTestParams params;\n+ params.activation = kTfLiteActReluN1To1;\n+\n+ tflite::testing::TestL2Pool2D(params, kInputDims, kInput, kExpectDims,\n+ kExpect, output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(FloatPoolingOpTestL2PoolActivationRelu6) {\n+ constexpr int kInputDims[] = {4, 1, 2, 4, 1};\n+ constexpr float kInput[] = {\n+ -0.1, -0.6, 2, 4, //\n+ -0.3, -0.2, 10, 7, //\n+ };\n+ constexpr int kExpectDims[] = {4, 1, 1, 2, 1};\n+ constexpr float kExpect[] = {0.353553, 6.0};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+ tflite::testing::L2Pool2DTestParams params;\n+ params.activation = kTfLiteActRelu6;\n+\n+ tflite::testing::TestL2Pool2D(params, kInputDims, kInput, kExpectDims,\n+ kExpect, output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(FloatPoolingOpTestL2PoolPaddingSame) {\n+ constexpr int kInputDims[] = {4, 1, 2, 4, 1};\n+ constexpr float kInput[] = {\n+ 0, 6, 2, 4, //\n+ 3, 2, 10, 7, //\n+ };\n+ constexpr int kExpectDims[] = {4, 1, 1, 2, 1};\n+ constexpr float kExpect[] = {3.5, 6.5};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+ tflite::testing::L2Pool2DTestParams params;\n+ params.padding = kTfLitePaddingSame;\n+ params.compare_tolerance = 0;\n+\n+ tflite::testing::TestL2Pool2D(params, kInputDims, kInput, kExpectDims,\n+ kExpect, output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(FloatPoolingOpTestL2PoolPaddingSameStride1) {\n+ constexpr int kInputDims[] = {4, 1, 2, 4, 1};\n+ constexpr float kInput[] = {\n+ 0, 6, 2, 4, //\n+ 3, 2, 10, 7, //\n+ };\n+ constexpr int kExpectDims[] = {4, 1, 2, 4, 1};\n+ constexpr float kExpect[] = {3.5, 6.0, 6.5, 5.70088,\n+ 2.54951, 7.2111, 8.63134, 7.0};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+ 
tflite::testing::L2Pool2DTestParams params;\n+ params.padding = kTfLitePaddingSame;\n+ params.compare_tolerance = 1e-4;\n+ params.stride_width = 1;\n+ params.stride_height = 1;\n+\n+ tflite::testing::TestL2Pool2D(params, kInputDims, kInput, kExpectDims,\n+ kExpect, output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(FloatPoolingOpTestL2PoolPaddingValidStride1) {\n+ constexpr int kInputDims[] = {4, 1, 2, 4, 1};\n+ constexpr float kInput[] = {\n+ 0, 6, 2, 4, //\n+ 3, 2, 10, 7, //\n+ };\n+ constexpr int kExpectDims[] = {4, 1, 1, 3, 1};\n+ constexpr float kExpect[] = {3.5, 6.0, 6.5};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+ tflite::testing::L2Pool2DTestParams params;\n+ params.stride_width = 1;\n+ params.stride_height = 1;\n+\n+ tflite::testing::TestL2Pool2D(params, kInputDims, kInput, kExpectDims,\n+ kExpect, output_data);\n+}\n+\n+TF_LITE_MICRO_TESTS_END",
"filename": "tensorflow/lite/micro/kernels/l2_pool_2d_test.cc",
"status": "added"
},
{
"diff": "@@ -40,6 +40,7 @@ TfLiteRegistration Register_ELU();\n TfLiteRegistration Register_EXP();\n TfLiteRegistration Register_EXPAND_DIMS();\n TfLiteRegistration Register_FILL();\n+TfLiteRegistration Register_L2_POOL_2D();\n TfLiteRegistration Register_LEAKY_RELU();\n TfLiteRegistration Register_QUANTIZE();\n TfLiteRegistration Register_SHAPE();",
"filename": "tensorflow/lite/micro/kernels/micro_ops.h",
"status": "modified"
},
{
"diff": "@@ -253,6 +253,11 @@ class MicroMutableOpResolver : public MicroOpResolver {\n ParseL2Normalization);\n }\n \n+ TfLiteStatus AddL2Pool2D() {\n+ return AddBuiltin(BuiltinOperator_L2_POOL_2D, tflite::Register_L2_POOL_2D(),\n+ ParsePool);\n+ }\n+\n TfLiteStatus AddLeakyRelu() {\n return AddBuiltin(BuiltinOperator_LEAKY_RELU, tflite::Register_LEAKY_RELU(),\n ParseLeakyRelu);",
"filename": "tensorflow/lite/micro/micro_mutable_op_resolver.h",
"status": "modified"
},
{
"diff": "@@ -287,6 +287,7 @@ tensorflow/lite/micro/kernels/floor_test.cc \\\n tensorflow/lite/micro/kernels/fully_connected_test.cc \\\n tensorflow/lite/micro/kernels/hard_swish_test.cc \\\n tensorflow/lite/micro/kernels/l2norm_test.cc \\\n+tensorflow/lite/micro/kernels/l2_pool_2d_test.cc \\\n tensorflow/lite/micro/kernels/leaky_relu_test.cc \\\n tensorflow/lite/micro/kernels/logical_test.cc \\\n tensorflow/lite/micro/kernels/logistic_test.cc \\\n@@ -349,6 +350,7 @@ tensorflow/lite/micro/kernels/hard_swish.cc \\\n tensorflow/lite/micro/kernels/kernel_runner.cc \\\n tensorflow/lite/micro/kernels/kernel_util.cc \\\n tensorflow/lite/micro/kernels/l2norm.cc \\\n+tensorflow/lite/micro/kernels/l2_pool_2d.cc \\\n tensorflow/lite/micro/kernels/leaky_relu.cc \\\n tensorflow/lite/micro/kernels/logical.cc \\\n tensorflow/lite/micro/kernels/logistic.cc \\",
"filename": "tensorflow/lite/micro/tools/make/Makefile",
"status": "modified"
}
]
}
|
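For reference, the `FloatPoolingOpTestL2Pool` golden values in the test above can be checked by hand: L2 pooling emits the root-mean-square of each pooling window, so the 2x2 windows {0, 6, 3, 2} and {2, 4, 10, 7} give sqrt(49/4) = 3.5 and sqrt(169/4) = 6.5. The snippet below is a standalone, illustrative C++ sketch (not TFLM code; the helper name is made up) that recomputes those two expected outputs.

~~~cpp
#include <cmath>
#include <cstdio>

// Illustrative only: L2 pooling outputs sqrt(mean(x^2)) over each window.
float L2PoolWindow(const float* v, int n) {
  float sum_sq = 0.0f;
  for (int i = 0; i < n; ++i) sum_sq += v[i] * v[i];
  return std::sqrt(sum_sq / n);
}

int main() {
  // Input is 1x2x4x1; a 2x2 window with stride 2 yields two windows.
  const float w0[] = {0, 6, 3, 2};   // left 2x2 block  -> expect 3.5
  const float w1[] = {2, 4, 10, 7};  // right 2x2 block -> expect 6.5
  std::printf("%g %g\n", L2PoolWindow(w0, 4), L2PoolWindow(w1, 4));
  return 0;
}
~~~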
{
"body": "@tensorflow/micro\r\n\r\n**System information**\r\n- Host OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04\r\n- TensorFlow installed from (source or binary): source\r\n- Tensorflow version (commit SHA if source): 5a16264ba6f12883726d12d484d4cd61405ddab7\r\n- Target platform (e.g. Arm Mbed OS, Arduino Nano 33 etc.): Host\r\n\r\n**Describe the problem**\r\nFunction in question: tflite::PopulateConvolutionQuantizationParams() in tensorflow/lite/kernels/kernel_util.cc (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/kernels/kernel_util.cc#L138)\r\nOperators in question: conv and depthwise_conv\r\n\r\nPointer arguments 'per_channel_multiplier' and 'per_channel_shift' are accessed and written to in all cases. \r\nIn the non-int<8,16> case, these arguments can be NULL pointers or uninitialized pointers. The reason it doesn't\r\ncrash now for reference kernels is because memory is allocated for per-channel quant parameters irrespective\r\nof the quantization type. This ticket is for protecting accesses of per-channel params in PopulateConvolutionQuantizationParams().\r\n\r\nOnce that is done, memory usage for non per-channel cases can be reduced for TFLu(and TFL) as an improvement.\r\n\r\n\r\n**Please provide the exact sequence of commands/steps when you ran into the problem**\r\n_Simple Step:_\r\nSimplest way is to run the unit test for conv or depthwise_conv and see that per-channel arguments are accessed and \r\nupdated in the non-per channel case. \r\n\r\n./tensorflow/lite/micro/tools/make/gen/linux_x86_64/bin/kernel_depthwise_conv_test\r\n\r\n_How it was discovered:_\r\nSince it is now possible to dynamically allocate per-channel params in cmsis-nn/<op>.cc (Thanks to https://github.com/tensorflow/tensorflow/commit/59d177d9acabe8e70bc33e554a364d2620bc6999)\r\nthe conv.cc and depthwise_conv.cc in cmsis-nn folder was updated based on PR https://github.com/tensorflow/tensorflow/pull/42770 with some additional\r\ncorrection to not allocate per-channel params for uint8 operators. This led to a crash.\r\n",
"comments": [
{
"body": "Affects PR https://github.com/tensorflow/tensorflow/pull/43486",
"created_at": "2020-09-25T06:43:31Z"
},
{
"body": "With changes coming in, I realize linking the line number for the function start wasn't a smart idea!\r\nIt is PopulateConvolutionQuantizationParams()\r\n\r\n// Per-axis & per-tensor\r\nTfLiteStatus PopulateConvolutionQuantizationParams(\r\n TfLiteContext* context, const TfLiteTensor* input,\r\n const TfLiteTensor* filter, const TfLiteTensor* bias, TfLiteTensor* output,\r\n const TfLiteFusedActivation& activation, int32_t* multiplier, int* shift,\r\n int32_t* output_activation_min, int32_t* output_activation_max,\r\n int32_t* per_channel_multiplier, int* per_channel_shift, int num_channels) {",
"created_at": "2020-09-29T07:04:48Z"
},
{
"body": "Adding a permalink to the function:\r\nhttps://github.com/tensorflow/tensorflow/blob/195369c5a0c63fb51f1deea1e05bd78e23e90cc2/tensorflow/lite/kernels/kernel_util.cc#L201-L217\r\n\r\n",
"created_at": "2020-09-29T18:18:21Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/42883\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/42883\">No</a>\n",
"created_at": "2021-03-16T23:20:09Z"
}
],
"number": 42883,
"title": "Uninitialized memory access of per-channel params"
}
|
{
"body": "See the discussion on https://github.com/tensorflow/tensorflow/pull/47471 and #44912 for more details.\r\n\r\nFixes #42883",
"number": 47830,
"review_comments": [],
"title": "Remove uint8 support from conv and depthwise conv."
}
|
{
"commits": [
{
"message": "Remove uint8 support from conv and depthwise conv.\n\nSee #44912 for more details."
},
{
"message": "Disable in internal CI."
}
],
"files": [
{
"diff": "@@ -26,6 +26,7 @@ cc_test(\n srcs = [\"image_recognition_test.cc\"],\n tags = [\n \"no_oss\", # TODO(b/174680668): Exclude from OSS.\n+ \"notap\", # TODO(#44912): Consider removing this (uint8) example.\n ],\n deps = [\n \":image_model_data\",",
"filename": "tensorflow/lite/micro/examples/image_recognition_experimental/BUILD",
"status": "modified"
},
{
"diff": "@@ -280,20 +280,6 @@ TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n return EvalQuantizedPerChannel(context, node, params, data, input, filter,\n bias, output, nullptr);\n break;\n- case kTfLiteUInt8: {\n- reference_ops::Conv(ConvParamsQuantized(params, data.reference_op_data),\n- tflite::micro::GetTensorShape(input),\n- tflite::micro::GetTensorData<uint8_t>(input),\n- tflite::micro::GetTensorShape(filter),\n- tflite::micro::GetTensorData<uint8_t>(filter),\n- tflite::micro::GetTensorShape(bias),\n- tflite::micro::GetTensorData<int32_t>(bias),\n- tflite::micro::GetTensorShape(output),\n- tflite::micro::GetTensorData<uint8_t>(output),\n- tflite::micro::GetTensorShape(nullptr), nullptr,\n- nullptr);\n- break;\n- }\n default:\n TF_LITE_KERNEL_LOG(context, \"Type %s (%d) not supported.\",\n TfLiteTypeGetName(input->type), input->type);",
"filename": "tensorflow/lite/micro/kernels/cmsis_nn/conv.cc",
"status": "modified"
},
{
"diff": "@@ -247,52 +247,6 @@ void EvalQuantizedPerChannel(TfLiteContext* context, TfLiteNode* node,\n }\n }\n \n-void EvalQuantized(TfLiteContext* context, TfLiteNode* node,\n- const TfLiteDepthwiseConvParams& params, const OpData& data,\n- const TfLiteEvalTensor* input,\n- const TfLiteEvalTensor* filter, const TfLiteEvalTensor* bias,\n- TfLiteEvalTensor* output) {\n- tflite::DepthwiseParams op_params =\n- DepthwiseConvParamsQuantized(params, data.reference_op_data);\n-\n- if (1 == op_params.dilation_width_factor &&\n- 1 == op_params.dilation_height_factor) {\n- RuntimeShape filter_shape = tflite::micro::GetTensorShape(filter);\n- const int filter_height = filter_shape.Dims(1);\n- const int filter_width = filter_shape.Dims(2);\n- RuntimeShape input_shape = tflite::micro::GetTensorShape(input);\n- const int input_height = input_shape.Dims(1);\n- const int input_width = input_shape.Dims(2);\n- const int input_depth = input_shape.Dims(3);\n- RuntimeShape output_shape = tflite::micro::GetTensorShape(output);\n- const int output_height = output_shape.Dims(1);\n- const int output_width = output_shape.Dims(2);\n- arm_depthwise_conv_u8_basic_ver1(\n- tflite::micro::GetTensorData<uint8_t>(input), input_width, input_height,\n- input_depth, tflite::micro::GetTensorData<uint8_t>(filter),\n- filter_width, filter_height, op_params.depth_multiplier,\n- op_params.padding_values.width, op_params.padding_values.height,\n- op_params.stride_width, op_params.stride_height,\n- op_params.dilation_width_factor, op_params.dilation_height_factor,\n- tflite::micro::GetTensorData<int32_t>(bias), op_params.input_offset,\n- op_params.weights_offset, op_params.output_offset,\n- tflite::micro::GetTensorData<uint8_t>(output), output_width,\n- output_height, op_params.quantized_activation_min,\n- op_params.quantized_activation_max, op_params.output_shift,\n- op_params.output_multiplier);\n- } else {\n- tflite::reference_ops::DepthwiseConv(\n- op_params, tflite::micro::GetTensorShape(input),\n- tflite::micro::GetTensorData<uint8_t>(input),\n- tflite::micro::GetTensorShape(filter),\n- tflite::micro::GetTensorData<uint8_t>(filter),\n- tflite::micro::GetTensorShape(bias),\n- tflite::micro::GetTensorData<int32_t>(bias),\n- tflite::micro::GetTensorShape(output),\n- tflite::micro::GetTensorData<uint8_t>(output));\n- }\n-}\n-\n TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n TFLITE_DCHECK(node->user_data != nullptr);\n TFLITE_DCHECK(node->builtin_data != nullptr);\n@@ -312,8 +266,6 @@ TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n ? tflite::micro::GetEvalInput(context, node, kDepthwiseConvBiasTensor)\n : nullptr;\n \n- // TODO(aselle): Consider whether float conv and quantized conv should be\n- // separate ops to avoid dispatch overhead here.\n switch (input->type) { // Already know in/out types are same.\n case kTfLiteFloat32: {\n tflite::reference_ops::DepthwiseConv(\n@@ -332,9 +284,6 @@ TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n EvalQuantizedPerChannel(context, node, params, data, input, filter, bias,\n output);\n break;\n- case kTfLiteUInt8:\n- EvalQuantized(context, node, params, data, input, filter, bias, output);\n- break;\n default:\n TF_LITE_KERNEL_LOG(context, \"Type %s (%d) not supported.\",\n TfLiteTypeGetName(input->type), input->type);",
"filename": "tensorflow/lite/micro/kernels/cmsis_nn/depthwise_conv.cc",
"status": "modified"
},
{
"diff": "@@ -83,20 +83,6 @@ TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n tflite::micro::GetTensorData<int8_t>(output));\n break;\n }\n- case kTfLiteUInt8: {\n- reference_ops::Conv(ConvParamsQuantized(params, data),\n- tflite::micro::GetTensorShape(input),\n- tflite::micro::GetTensorData<uint8_t>(input),\n- tflite::micro::GetTensorShape(filter),\n- tflite::micro::GetTensorData<uint8_t>(filter),\n- tflite::micro::GetTensorShape(bias),\n- tflite::micro::GetTensorData<int32_t>(bias),\n- tflite::micro::GetTensorShape(output),\n- tflite::micro::GetTensorData<uint8_t>(output),\n- tflite::micro::GetTensorShape(nullptr), nullptr,\n- nullptr);\n- break;\n- }\n default:\n TF_LITE_KERNEL_LOG(context, \"Type %s (%d) not supported.\",\n TfLiteTypeGetName(input->type), input->type);",
"filename": "tensorflow/lite/micro/kernels/conv.cc",
"status": "modified"
},
{
"diff": "@@ -98,32 +98,6 @@ TF_LITE_MICRO_TEST(InputAndFilterSameWidthHeight) {\n tflite::Register_CONV_2D(), output_data));\n }\n \n-TF_LITE_MICRO_TEST(SimpleTestQuantized) {\n- const int output_dims_count = 12;\n- uint8_t output_data[output_dims_count];\n-\n- const float input_scale = 0.5f;\n- const float filter_scale = 0.5f;\n- const float output_scale = 1.0f;\n-\n- uint8_t input_quantized[tflite::testing::kInputElements];\n- uint8_t filter_quantized[tflite::testing::kFilterElements];\n- int32_t bias_quantized[tflite::testing::kBiasElements];\n- uint8_t golden_quantized[tflite::testing::kOutputElements];\n-\n- TF_LITE_MICRO_EXPECT_EQ(\n- kTfLiteOk,\n- tflite::testing::TestConvQuantizedPerLayer(\n- tflite::testing::kInputShape, tflite::testing::kInputData,\n- input_quantized, input_scale, tflite::testing::kFilterShape,\n- tflite::testing::kFilterData, filter_quantized, filter_scale,\n- tflite::testing::kBiasShape, tflite::testing::kBiasData,\n- bias_quantized, tflite::testing::kOutputShape,\n- tflite::testing::kGoldenData, golden_quantized, output_scale,\n- &tflite::testing::common_conv_params, tflite::Register_CONV_2D(),\n- output_data));\n-}\n-\n TF_LITE_MICRO_TEST(InputOutputDifferentTypeIsError) {\n using tflite::testing::CreateQuantizedTensor;\n using tflite::testing::CreateTensor;\n@@ -184,46 +158,6 @@ TF_LITE_MICRO_TEST(HybridModeIsError) {\n tflite::Register_CONV_2D(), output_data));\n }\n \n-TF_LITE_MICRO_TEST(SimpleTestDilatedQuantized) {\n- const int output_dims_count = 24;\n- uint8_t output_data[output_dims_count];\n-\n- const float input_scale = 0.5f;\n- const float filter_scale = 0.5f;\n- const float output_scale = 1.0f;\n-\n- const int input_elements = 48;\n- const int input_shape[] = {4, 2, 4, 6, 1};\n- const float input_data[] = {\n- // b = 0\n- 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4,\n- // b = 1\n- 1, 2, 3, 4, 5, 6, 2, 6, 2, 4, 4, 2, 3, 2, 6, 5, 1, 4, 1, 2, 1, 4, 6, 3};\n- const int output_elements = 24;\n- const int output_shape[] = {4, 2, 2, 2, 3};\n- const float golden_data[] = {25, 2, 7, 25, 2, 7, 10, 2, -3, 10, 2, -3,\n- 39, 7, 6, 50, 3, 4, 14, 4, -5, 15, 0, -7};\n-\n- uint8_t input_quantized[input_elements];\n- uint8_t filter_quantized[tflite::testing::kFilterElements];\n- int32_t bias_quantized[tflite::testing::kBiasElements];\n- uint8_t golden_quantized[output_elements];\n-\n- TfLiteConvParams conv_params{tflite::testing::common_conv_params};\n- conv_params.dilation_width_factor = 3;\n- conv_params.dilation_height_factor = 2;\n-\n- TF_LITE_MICRO_EXPECT_EQ(\n- kTfLiteOk,\n- tflite::testing::TestConvQuantizedPerLayer(\n- input_shape, input_data, input_quantized, input_scale,\n- tflite::testing::kFilterShape, tflite::testing::kFilterData,\n- filter_quantized, filter_scale, tflite::testing::kBiasShape,\n- tflite::testing::kBiasData, bias_quantized, output_shape, golden_data,\n- golden_quantized, output_scale, &conv_params,\n- tflite::Register_CONV_2D(), output_data));\n-}\n-\n TF_LITE_MICRO_TEST(SimpleTestQuantizedPerChannel) {\n const int output_dims_count = 12;\n int8_t output_data[output_dims_count];",
"filename": "tensorflow/lite/micro/kernels/conv_test.cc",
"status": "modified"
},
{
"diff": "@@ -71,13 +71,6 @@ TfLiteStatus InvokeConv(TfLiteTensor* tensors, int tensors_size,\n registration, output_data);\n }\n \n-TfLiteStatus InvokeConv(TfLiteTensor* tensors, int tensors_size,\n- int output_length, TfLiteConvParams* conv_params,\n- TfLiteRegistration registration, uint8_t* output_data) {\n- return InvokeConv<uint8_t>(tensors, tensors_size, output_length, conv_params,\n- registration, output_data);\n-}\n-\n TfLiteStatus ValidateConvGoldens(TfLiteTensor* tensors, int tensors_size,\n const float* expected_output_data,\n int output_length,\n@@ -100,17 +93,6 @@ TfLiteStatus ValidateConvGoldens(TfLiteTensor* tensors, int tensors_size,\n registration, output_data, tolerance);\n }\n \n-TfLiteStatus ValidateConvGoldens(TfLiteTensor* tensors, int tensors_size,\n- const uint8_t* expected_output_data,\n- int output_length,\n- TfLiteConvParams* conv_params,\n- TfLiteRegistration registration,\n- uint8_t* output_data, float tolerance) {\n- return ValidateConvGoldens<uint8_t>(\n- tensors, tensors_size, expected_output_data, output_length, conv_params,\n- registration, output_data, tolerance);\n-}\n-\n TfLiteStatus TestConvFloat(const int* input_dims_data, const float* input_data,\n const int* filter_dims_data,\n const float* filter_data, const int* bias_dims_data,\n@@ -139,48 +121,6 @@ TfLiteStatus TestConvFloat(const int* input_dims_data, const float* input_data,\n output_data);\n }\n \n-TfLiteStatus TestConvQuantizedPerLayer(\n- const int* input_dims_data, const float* input_data,\n- uint8_t* input_quantized, float input_scale, const int* filter_dims_data,\n- const float* filter_data, uint8_t* filter_quantized, float filter_scale,\n- const int* bias_dims_data, const float* bias_data, int32_t* bias_quantized,\n- const int* output_dims_data, const float* expected_output_data,\n- uint8_t* expected_output_quantized, float output_scale,\n- TfLiteConvParams* conv_params, TfLiteRegistration registration,\n- uint8_t* output_data) {\n- TfLiteIntArray* input_dims = IntArrayFromInts(input_dims_data);\n- TfLiteIntArray* filter_dims = IntArrayFromInts(filter_dims_data);\n- TfLiteIntArray* bias_dims = IntArrayFromInts(bias_dims_data);\n- TfLiteIntArray* output_dims = IntArrayFromInts(output_dims_data);\n- const int output_dims_count = ElementCount(*output_dims);\n-\n- tflite::Quantize(expected_output_data, expected_output_quantized,\n- output_dims_count, output_scale, 128);\n-\n- constexpr int inputs_size = 3;\n- constexpr int outputs_size = 1;\n- constexpr int tensors_size = inputs_size + outputs_size;\n- TfLiteTensor tensors[tensors_size] = {\n- CreateQuantizedTensor(input_data, input_quantized, input_dims,\n- input_scale, 128),\n- CreateQuantizedTensor(filter_data, filter_quantized, filter_dims,\n- filter_scale, 128),\n- CreateQuantizedBiasTensor(bias_data, bias_quantized, bias_dims,\n- input_scale, filter_scale),\n- CreateQuantizedTensor(output_data, output_dims, output_scale, 128)};\n-\n- float filter_scales[] = {1, filter_scale};\n- int filter_zero_points[] = {1, 128};\n- TfLiteAffineQuantization filter_quant = {FloatArrayFromFloats(filter_scales),\n- IntArrayFromInts(filter_zero_points),\n- 0};\n- tensors[1].quantization = {kTfLiteAffineQuantization, &filter_quant};\n-\n- return ValidateConvGoldens(tensors, tensors_size, expected_output_quantized,\n- output_dims_count, conv_params, registration,\n- output_data);\n-}\n-\n TfLiteStatus TestConvQuantizedPerChannel(\n const int* input_dims_data, const float* input_data,\n int8_t* input_quantized, float input_scale, int input_zero_point,",
"filename": "tensorflow/lite/micro/kernels/conv_test_common.cc",
"status": "modified"
},
{
"diff": "@@ -82,19 +82,6 @@ TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n tflite::micro::GetTensorData<int8_t>(output));\n break;\n }\n- case kTfLiteUInt8: {\n- reference_ops::DepthwiseConv(\n- DepthwiseConvParamsQuantized(params, data),\n- tflite::micro::GetTensorShape(input),\n- tflite::micro::GetTensorData<uint8_t>(input),\n- tflite::micro::GetTensorShape(filter),\n- tflite::micro::GetTensorData<uint8_t>(filter),\n- tflite::micro::GetTensorShape(bias),\n- tflite::micro::GetTensorData<int32_t>(bias),\n- tflite::micro::GetTensorShape(output),\n- tflite::micro::GetTensorData<uint8_t>(output));\n- break;\n- }\n default:\n TF_LITE_KERNEL_LOG(context, \"Type %s (%d) not supported.\",\n TfLiteTypeGetName(input->type), input->type);",
"filename": "tensorflow/lite/micro/kernels/depthwise_conv.cc",
"status": "modified"
},
{
"diff": "@@ -109,57 +109,6 @@ void TestDepthwiseConvFloat(const int* input_dims_data, const float* input_data,\n conv_params, 1e-5, tensors_size, tensors);\n }\n \n-void TestDepthwiseConvQuantizedPerLayer(\n- const int* input_dims_data, const float* input_data,\n- uint8_t* input_quantized, float input_scale, int input_zero_point,\n- const int* filter_dims_data, const float* filter_data,\n- uint8_t* filter_quantized, float filter_scale, int filter_zero_point,\n- const int* bias_dims_data, const float* bias_data, int32_t* bias_quantized,\n- const float* golden, uint8_t* golden_quantized, const int* output_dims_data,\n- uint8_t* output_data, float output_scale, int output_zero_point,\n- TfLiteDepthwiseConvParams* conv_params) {\n- TfLiteIntArray* input_dims = IntArrayFromInts(input_dims_data);\n- TfLiteIntArray* filter_dims = IntArrayFromInts(filter_dims_data);\n- TfLiteIntArray* bias_dims = IntArrayFromInts(bias_dims_data);\n- TfLiteIntArray* output_dims = IntArrayFromInts(output_dims_data);\n- const int output_dims_count = ElementCount(*output_dims);\n-\n- constexpr int inputs_size = 3;\n- constexpr int outputs_size = 1;\n- constexpr int tensors_size = inputs_size + outputs_size;\n- TfLiteTensor tensors[tensors_size] = {\n- tflite::testing::CreateQuantizedTensor(input_data, input_quantized,\n- input_dims, input_scale,\n- input_zero_point),\n- tflite::testing::CreateQuantizedTensor(filter_data, filter_quantized,\n- filter_dims, filter_scale,\n- filter_zero_point),\n- tflite::testing::CreateQuantizedBiasTensor(\n- bias_data, bias_quantized, bias_dims, input_scale, filter_scale),\n- tflite::testing::CreateQuantizedTensor(output_data, output_dims,\n- output_scale, output_zero_point),\n- };\n-\n- // TODO(njeff): Affine Quantization Params should be set on tensor creation.\n- float filter_scales[] = {1, filter_scale};\n- int filter_zero_points[] = {1, 128};\n- TfLiteAffineQuantization filter_quant = {FloatArrayFromFloats(filter_scales),\n- IntArrayFromInts(filter_zero_points),\n- 0};\n- tensors[1].quantization = {kTfLiteAffineQuantization, &filter_quant};\n-\n- float bias_scales[] = {1, filter_scale * input_scale};\n- int bias_zero_points[] = {1, 128};\n- TfLiteAffineQuantization bias_quant = {FloatArrayFromFloats(bias_scales),\n- IntArrayFromInts(bias_zero_points), 0};\n- tensors[2].quantization = {kTfLiteAffineQuantization, &bias_quant};\n-\n- Quantize(golden, golden_quantized, output_dims_count, output_scale,\n- output_zero_point);\n- ValidateDepthwiseConvGoldens(golden_quantized, output_dims_count, conv_params,\n- 1.0, tensors_size, tensors);\n-}\n-\n void TestDepthwiseConvQuantizedPerChannel(\n const int* input_dims_data, const float* input_data,\n int8_t* input_quantized, float input_scale, int input_zero_point,\n@@ -265,96 +214,6 @@ TF_LITE_MICRO_TEST(SimpleTest) {\n bias_values, golden, output_shape, &conv_params, output_data);\n }\n \n-TF_LITE_MICRO_TEST(SimpleTestQuantized) {\n- const int input_elements = 12;\n- const int input_shape[] = {4, 1, 3, 2, 2};\n- const float input_values[] = {1, 2, 7, 8, 3, 4, 9, 10, 5, 6, 11, 12};\n- const int filter_elements = 16;\n- const int filter_shape[] = {4, 1, 2, 2, 4};\n- const float filter_values[] = {1, 2, 3, 4, -9, 10, -11, 12,\n- 5, 6, 7, 8, 13, -14, 15, -16};\n- const int bias_elements = 4;\n- const int bias_shape[] = {4, 1, 1, 1, 4};\n- const int output_elements = 8;\n- const float bias_values[] = {1, 2, 3, 4};\n- const float golden[] = {\n- 71, -34, 99, -20, 91, -26, 127, -4,\n- };\n- const int output_shape[] = {4, 1, 2, 1, 4};\n-\n- const 
float input_scale = 0.5f;\n- const int input_zero_point = 128;\n- const float filter_scale = 0.5f;\n- const int filter_zero_point = 128;\n- const float output_scale = 1.0f;\n- const int output_zero_point = 128;\n-\n- uint8_t input_quantized[input_elements];\n- uint8_t filter_quantized[filter_elements];\n- int32_t bias_quantized[bias_elements];\n- uint8_t golden_quantized[output_elements];\n- uint8_t output_data[output_elements];\n-\n- TfLiteDepthwiseConvParams conv_params;\n- conv_params.activation = kTfLiteActNone;\n- conv_params.dilation_width_factor = 1;\n- conv_params.dilation_height_factor = 1;\n-\n- tflite::testing::TestDepthwiseConvQuantizedPerLayer(\n- input_shape, input_values, input_quantized, input_scale, input_zero_point,\n- filter_shape, filter_values, filter_quantized, filter_scale,\n- filter_zero_point, bias_shape, bias_values, bias_quantized, golden,\n- golden_quantized, output_shape, output_data, output_scale,\n- output_zero_point, &conv_params);\n-}\n-\n-TF_LITE_MICRO_TEST(SimpleTestDilatedQuantized) {\n- const int input_elements = 48;\n- const int input_shape[] = {4, 1, 4, 6, 2};\n- const float input_values[] = {1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, // h = 0\n- 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, // h = 1\n- 1, 2, 3, 4, 5, 6, 2, 6, 2, 4, 4, 2, // h = 2\n- 3, 2, 6, 5, 1, 4, 1, 2, 1, 4, 6, 3}; // h = 3\n- const int filter_elements = 16;\n- const int filter_shape[] = {4, 1, 2, 2, 4};\n- const float filter_values[] = {1, 2, 3, 4, -9, 10, -11, 12,\n- 5, 6, 7, 8, 13, -14, 15, -16};\n- const int bias_elements = 4;\n- const int bias_shape[] = {4, 1, 1, 1, 4};\n- const int output_elements = 24;\n- const float bias_values[] = {1, 2, 3, 4};\n- const float golden[] = {\n- 15, 2, 88, -48, 25, 14, 72, 0, 61, -2, 56, 48, // h = 0\n- -4, 52, 12, 48, 11, 70, 63, 40, 51, -30, 41, 48 // h = 1\n- };\n- const int output_shape[] = {4, 1, 2, 3, 4};\n-\n- const float input_scale = 0.5f;\n- const int input_zero_point = 128;\n- const float filter_scale = 0.5f;\n- const int filter_zero_point = 128;\n- const float output_scale = 1.0f;\n- const int output_zero_point = 128;\n-\n- uint8_t input_quantized[input_elements];\n- uint8_t filter_quantized[filter_elements];\n- int32_t bias_quantized[bias_elements];\n- uint8_t golden_quantized[output_elements];\n- uint8_t output_data[output_elements];\n-\n- TfLiteDepthwiseConvParams conv_params;\n- conv_params.activation = kTfLiteActNone;\n- conv_params.dilation_width_factor = 3;\n- conv_params.dilation_height_factor = 2;\n-\n- tflite::testing::TestDepthwiseConvQuantizedPerLayer(\n- input_shape, input_values, input_quantized, input_scale, input_zero_point,\n- filter_shape, filter_values, filter_quantized, filter_scale,\n- filter_zero_point, bias_shape, bias_values, bias_quantized, golden,\n- golden_quantized, output_shape, output_data, output_scale,\n- output_zero_point, &conv_params);\n-}\n-\n TF_LITE_MICRO_TEST(SimpleTestRelu) {\n const int input_shape[] = {4, 1, 3, 2, 2};\n const float input_values[] = {1, 2, 7, 8, 3, 4, 9, 10, 5, 6, 11, 12};\n@@ -378,90 +237,6 @@ TF_LITE_MICRO_TEST(SimpleTestRelu) {\n bias_values, golden_relu, output_shape, &conv_params, output_data);\n }\n \n-TF_LITE_MICRO_TEST(SimpleTestReluQuantized) {\n- const int input_elements = 12;\n- const int input_shape[] = {4, 1, 3, 2, 2};\n- const float input_values[] = {1, 2, 7, 8, 3, 4, 9, 10, 5, 6, 11, 12};\n- const int filter_elements = 16;\n- const int filter_shape[] = {4, 1, 2, 2, 4};\n- const float filter_values[] = {1, 2, 3, 4, -9, 10, -11, 12,\n- 5, 6, 7, 8, 13, -14, 15, -16};\n- const 
int bias_elements = 4;\n- const int bias_shape[] = {4, 1, 1, 1, 4};\n- const int output_elements = 8;\n- const float bias_values[] = {1, 2, 3, 4};\n- const int output_shape[] = {4, 1, 2, 1, 4};\n- const float golden_relu[] = {71, 0, 99, 0, 91, 0, 127, 0};\n-\n- const float input_scale = 0.5f;\n- const int input_zero_point = 128;\n- const float filter_scale = 0.5f;\n- const int filter_zero_point = 128;\n- const float output_scale = 1.0f;\n- const int output_zero_point = 128;\n-\n- uint8_t input_quantized[input_elements];\n- uint8_t filter_quantized[filter_elements];\n- int32_t bias_quantized[bias_elements];\n- uint8_t golden_quantized[output_elements];\n- uint8_t output_data[output_elements];\n-\n- TfLiteDepthwiseConvParams conv_params;\n- conv_params.activation = kTfLiteActRelu;\n- conv_params.dilation_width_factor = 1;\n- conv_params.dilation_height_factor = 1;\n-\n- tflite::testing::TestDepthwiseConvQuantizedPerLayer(\n- input_shape, input_values, input_quantized, input_scale, input_zero_point,\n- filter_shape, filter_values, filter_quantized, filter_scale,\n- filter_zero_point, bias_shape, bias_values, bias_quantized, golden_relu,\n- golden_quantized, output_shape, output_data, output_scale,\n- output_zero_point, &conv_params);\n-}\n-\n-TF_LITE_MICRO_TEST(SimpleTestQuantizedOptimizedFilterWidth) {\n- const int input_elements = 12;\n- const float input_values[] = {1, 2, 7, 8, 3, 4, 9, 10, 5, 6, 11, 12};\n- const int filter_elements = 16;\n- const float filter_values[] = {1, 2, 3, 4, -9, 10, -11, 12,\n- 5, 6, 7, 8, 13, -14, 15, -16};\n- const int bias_elements = 4;\n- const float bias_values[] = {1, 2, 3, 4};\n- const int output_dims_count = 9;\n- const int input_shape[] = {4, 1, 1, 9, 1};\n- const int filter_shape[] = {4, 2, 1, 8, 1};\n- const int bias_shape[] = {1, 1};\n- const float goldens[] = {\n- 92, 56, 12, 22, 33, 72, 44, 20, 5,\n- };\n- const int output_shape[] = {4, 1, 1, 9, 1};\n-\n- const float input_scale = 1.0f;\n- const int input_zero_point = 128;\n- const float filter_scale = 0.5f;\n- const int filter_zero_point = 128;\n- const float output_scale = 1.0f;\n- const int output_zero_point = 128;\n-\n- uint8_t input_quantized[input_elements];\n- uint8_t filter_quantized[filter_elements];\n- int32_t bias_quantized[bias_elements];\n- uint8_t golden_quantized[output_dims_count];\n- uint8_t output_data[output_dims_count];\n-\n- TfLiteDepthwiseConvParams conv_params;\n- conv_params.activation = kTfLiteActNone;\n- conv_params.dilation_width_factor = 1;\n- conv_params.dilation_height_factor = 1;\n-\n- tflite::testing::TestDepthwiseConvQuantizedPerLayer(\n- input_shape, input_values, input_quantized, input_scale, input_zero_point,\n- filter_shape, filter_values, filter_quantized, filter_scale,\n- filter_zero_point, bias_shape, bias_values, bias_quantized, goldens,\n- golden_quantized, output_shape, output_data, output_scale,\n- output_zero_point, &conv_params);\n-}\n-\n TF_LITE_MICRO_TEST(SimpleTestQuantizedPerChannel) {\n const int input_elements = 12;\n const int input_shape[] = {4, 1, 3, 2, 2};",
"filename": "tensorflow/lite/micro/kernels/depthwise_conv_test.cc",
"status": "modified"
},
{
"diff": "@@ -236,6 +236,14 @@ MICROLITE_LIB_NAME := libtensorflow-microlite.a\n # to bypass this check and allow for deeper directory structures.\n MICRO_LITE_EXAMPLE_TESTS := $(shell find tensorflow/lite/micro/examples/ -maxdepth 2 -name Makefile.inc)\n MICRO_LITE_EXAMPLE_TESTS += $(shell find tensorflow/lite/micro/examples/ -name Makefile_internal.inc)\n+\n+# Image recognition experimental uses uint8 quantization and is no longer\n+# supported (See #44912 for more details). We should consider deleting\n+# the image_recognition_experimental example.\n+EXCLUDED_EXAMPLE_TESTS := \\\n+ tensorflow/lite/micro/examples/image_recognition_experimental/Makefile.inc\n+MICRO_LITE_EXAMPLE_TESTS := $(filter-out $(EXCLUDED_EXAMPLE_TESTS), $(MICRO_LITE_EXAMPLE_TESTS))\n+\n MICRO_LITE_BENCHMARKS := $(wildcard tensorflow/lite/micro/benchmarks/Makefile.inc)\n \n # TODO(b/152645559): move all benchmarks to benchmarks directory.\n@@ -803,6 +811,8 @@ $(eval $(call microlite_test,$(notdir $(basename $(TEST_TARGET))),$(TEST_TARGET)\n $(foreach TEST_TARGET,$(filter tensorflow/lite/micro/kernels/%,$(MICROLITE_TEST_SRCS)),\\\n $(eval $(call microlite_test,kernel_$(notdir $(basename $(TEST_TARGET))),$(TEST_TARGET))))\n \n+\n+\n ifeq ($(TARGET_SPECIFIC_MAKE_TEST),0)\n test: $(MICROLITE_TEST_TARGETS)\n endif",
"filename": "tensorflow/lite/micro/tools/make/Makefile",
"status": "modified"
}
]
}
|
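The row above removes the uint8 (per-tensor quantized) paths from the conv and depthwise_conv kernels, which is how the hazard in issue #42883 — unconditional writes through `per_channel_multiplier` and `per_channel_shift` — was ultimately avoided. The sketch below is not TensorFlow code; it is a minimal, self-contained C++ illustration (all names are hypothetical) of the defensive pattern the issue asks for: only write per-channel buffers when the quantization scheme actually provides them.

~~~cpp
#include <cstdint>
#include <cstdio>

// Hypothetical, simplified stand-in for per-channel quantization setup; the
// real function is tflite::PopulateConvolutionQuantizationParams(). The point
// is only that per-channel buffers are written iff they were provided.
bool PopulateQuantParamsSketch(bool is_per_channel, int num_channels,
                               int32_t* per_channel_multiplier,
                               int32_t* per_channel_shift) {
  if (!is_per_channel) {
    // Per-tensor (e.g. legacy uint8) case: never touch the pointers, so the
    // caller may legitimately pass nullptr.
    return true;
  }
  if (per_channel_multiplier == nullptr || per_channel_shift == nullptr) {
    return false;  // caller did not allocate per-channel buffers
  }
  for (int i = 0; i < num_channels; ++i) {
    per_channel_multiplier[i] = 1;  // placeholder values, not real math
    per_channel_shift[i] = 0;
  }
  return true;
}

int main() {
  // Safe even with null buffers, because the per-tensor path never writes.
  std::printf("per-tensor ok=%d\n",
              PopulateQuantParamsSketch(false, 4, nullptr, nullptr));
  int32_t mult[4];
  int32_t shift[4];
  std::printf("per-channel ok=%d\n",
              PopulateQuantParamsSketch(true, 4, mult, shift));
  return 0;
}
~~~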
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator CUMSUM from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1 (step 1): Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2 (step 2): Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\n\r\nThe next 3 steps are combined into a single PR3 with separate commits:\r\n\r\n(step 3): Copy operator from lite to micro making minimal changes and not including in the build\r\n(step 4): Delete extra code from the micro copy of the operator\r\n(step 5): Port micro copy of operator as necessary and add a corresponding test",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47290\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47290\">No</a>\n",
"created_at": "2021-06-02T16:07:22Z"
}
],
"number": 47290,
"title": "micro: port op CUMSUM from lite"
}
|
{
"body": "PR steps 3 through 5 for the CUMSUM operator as per Issue #47290",
"number": 47790,
"review_comments": [],
"title": "micro: CUMSUM PR3-5"
}
|
{
"commits": [
{
"message": "micro: copy operator CUMSUM kernel from lite\n\nThis is a copy with minimal modification of the kernel and test for\noperator CUMSUM from tensorflow/lite/kernels.\nAdaptations to micro and addition to the micro build to follow.\n\nPR step 3 for issue #47290"
},
{
"message": "micro: prepare to port operator CUMSUM kernel from lite with test\n\nImplement skeleton (non-working) code for operator and test.\nHeader files changed.\nNamespaces changed.\nSome original code deleted.\nSome original code modified.\n\nPR step 4 of the work to port operator CUMSUM as tracked in Issue #47290"
},
{
"message": "micro: port operator CUMSUM kernel from lite with test\n\nComplete implementation of TFLM operator CUMSUM and associated TFLM test code.\n\nPR step 5 of the work to port operator CUMSUM as tracked in Issue #47290"
}
],
"files": [
{
"diff": "@@ -467,6 +467,7 @@ cc_library(\n \"reference/concatenation.h\",\n \"reference/conv.h\",\n \"reference/conv3d.h\",\n+ \"reference/cumsum.h\",\n \"reference/densify.h\",\n \"reference/depth_to_space.h\",\n \"reference/depthwiseconv_float.h\",\n@@ -575,6 +576,7 @@ cc_library(\n \"reference/concatenation.h\",\n \"reference/conv.h\",\n \"reference/conv3d.h\",\n+ \"reference/cumsum.h\",\n \"reference/densify.h\",\n \"reference/depth_to_space.h\",\n \"reference/depthwiseconv_float.h\",",
"filename": "tensorflow/lite/kernels/internal/BUILD",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,85 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#ifndef TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_CUMSUM_H_\n+#define TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_CUMSUM_H_\n+\n+#include <cstdint>\n+\n+#include \"tensorflow/lite/kernels/internal/compatibility.h\"\n+#include \"tensorflow/lite/kernels/internal/types.h\"\n+\n+namespace tflite {\n+namespace reference_ops {\n+\n+template <typename T>\n+inline void CumSum(const T* input_data, const RuntimeShape& shape, int32_t axis,\n+ bool exclusive, bool reverse, T* output_data) {\n+ const int32_t rank = shape.DimensionsCount();\n+ TFLITE_DCHECK_GE(rank, 1);\n+ TFLITE_DCHECK_GE(axis, 0);\n+ TFLITE_DCHECK_LT(axis, rank);\n+\n+ size_t inner = 1;\n+ size_t outer = 1;\n+ size_t depth = 1;\n+ for (int32_t i = 0; i < rank; i++) {\n+ if (i < axis)\n+ inner *= shape.Dims(i);\n+ else if (i > axis)\n+ outer *= shape.Dims(i);\n+ else\n+ depth = shape.Dims(i);\n+ }\n+\n+ for (size_t outer_index = 0; outer_index < outer; outer_index++) {\n+ size_t outer_index_adj;\n+ if (reverse)\n+ outer_index_adj = (outer - 1) - outer_index;\n+ else\n+ outer_index_adj = outer_index;\n+ for (size_t inner_index = 0; inner_index < inner; inner_index++) {\n+ T accumulator = 0;\n+ size_t inner_index_adj;\n+ if (reverse)\n+ inner_index_adj = (inner - 1) - inner_index;\n+ else\n+ inner_index_adj = inner_index;\n+ for (size_t depth_index = 0; depth_index < depth; depth_index++) {\n+ size_t depth_index_adj;\n+ if (reverse)\n+ depth_index_adj = (depth - 1) - depth_index;\n+ else\n+ depth_index_adj = depth_index;\n+\n+ size_t index = outer_index_adj;\n+ index += inner_index_adj * depth * outer;\n+ index += depth_index_adj * outer;\n+\n+ if (exclusive) {\n+ output_data[index] = accumulator;\n+ accumulator += input_data[index];\n+ } else {\n+ accumulator += input_data[index];\n+ output_data[index] = accumulator;\n+ }\n+ }\n+ }\n+ }\n+}\n+\n+} // namespace reference_ops\n+} // namespace tflite\n+\n+#endif // TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_CUMSUM_H_",
"filename": "tensorflow/lite/kernels/internal/reference/cumsum.h",
"status": "added"
},
{
"diff": "@@ -32,6 +32,7 @@ AllOpsResolver::AllOpsResolver() {\n AddConcatenation();\n AddConv2D();\n AddCos();\n+ AddCumSum();\n AddDepthwiseConv2D();\n AddDequantize();\n AddDetectionPostprocess();",
"filename": "tensorflow/lite/micro/all_ops_resolver.cc",
"status": "modified"
},
{
"diff": "@@ -266,6 +266,7 @@ cc_library(\n \"circular_buffer.cc\",\n \"comparisons.cc\",\n \"concatenation.cc\",\n+ \"cumsum.cc\",\n \"dequantize.cc\",\n \"detection_postprocess.cc\",\n \"div.cc\",\n@@ -538,6 +539,21 @@ cc_test(\n ],\n )\n \n+cc_test(\n+ name = \"cumsum_test\",\n+ srcs = [\n+ \"cumsum_test.cc\",\n+ ],\n+ deps = [\n+ \":kernel_runner\",\n+ \"//tensorflow/lite/c:common\",\n+ \"//tensorflow/lite/micro:debug_log\",\n+ \"//tensorflow/lite/micro:op_resolvers\",\n+ \"//tensorflow/lite/micro:test_helpers\",\n+ \"//tensorflow/lite/micro/testing:micro_test\",\n+ ],\n+)\n+\n cc_test(\n name = \"depthwise_conv_test\",\n srcs = [",
"filename": "tensorflow/lite/micro/kernels/BUILD",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,107 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+\n+#include \"tensorflow/lite/kernels/internal/reference/cumsum.h\"\n+\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/kernels/internal/types.h\"\n+#include \"tensorflow/lite/kernels/kernel_util.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_util.h\"\n+\n+namespace tflite {\n+namespace {\n+\n+static const int kInputTensor = 0;\n+static const int kAxisTensor = 1;\n+static const int kOutputTensor = 0;\n+\n+TfLiteStatus CalculateOpData(TfLiteContext* context, TfLiteNode* node) {\n+ TF_LITE_ENSURE_EQ(context, NumInputs(node), 2);\n+ TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);\n+\n+ const TfLiteTensor* input = GetInput(context, node, kInputTensor);\n+ const TfLiteTensor* axis = GetInput(context, node, kAxisTensor);\n+\n+ TF_LITE_ENSURE(context, input->type == kTfLiteFloat32);\n+ TF_LITE_ENSURE_EQ(context, axis->type, kTfLiteInt32);\n+\n+ TF_LITE_ENSURE_EQ(context, NumElements(axis), 1);\n+\n+ TF_LITE_ENSURE(context, NumDimensions(input) >= 1);\n+\n+ TfLiteTensor* output = GetOutput(context, node, kOutputTensor);\n+\n+ TF_LITE_ENSURE_EQ(context, input->type, output->type);\n+ TF_LITE_ENSURE(context, HaveSameShapes(input, output));\n+\n+ return kTfLiteOk;\n+}\n+\n+TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n+ return CalculateOpData(context, node);\n+}\n+\n+TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n+ const TfLiteEvalTensor* input =\n+ tflite::micro::GetEvalInput(context, node, kInputTensor);\n+ const TfLiteEvalTensor* axis_tensor =\n+ tflite::micro::GetEvalInput(context, node, kAxisTensor);\n+\n+ TfLiteEvalTensor* output =\n+ tflite::micro::GetEvalOutput(context, node, kOutputTensor);\n+\n+ auto* params = static_cast<TfLiteCumsumParams*>(node->builtin_data);\n+ auto input_shape = tflite::micro::GetTensorShape(input);\n+\n+ int32_t axis = *tflite::micro::GetTensorData<int32_t>(axis_tensor);\n+ if (axis < 0) axis += input_shape.DimensionsCount();\n+\n+ if (axis < 0 || axis >= input_shape.DimensionsCount()) {\n+ TF_LITE_KERNEL_LOG(context, \"CUMSUM Invalid axis: %d\", axis);\n+ return kTfLiteError;\n+ }\n+\n+ switch (input->type) {\n+ case kTfLiteFloat32: {\n+ reference_ops::CumSum(tflite::micro::GetTensorData<float>(input),\n+ input_shape, axis, params->exclusive,\n+ params->reverse,\n+ tflite::micro::GetTensorData<float>(output));\n+ return kTfLiteOk;\n+ } break;\n+ default: {\n+ TF_LITE_KERNEL_LOG(\n+ context, \"Unsupported input type, CUMSUM only supports FLOAT32.\");\n+ return kTfLiteError;\n+ }\n+ }\n+\n+ return kTfLiteError;\n+}\n+\n+} // namespace\n+\n+TfLiteRegistration Register_CUMSUM() {\n+ return {/*init=*/nullptr,\n+ /*free=*/nullptr,\n+ /*prepare=*/Prepare,\n+ /*invoke=*/Eval,\n+ /*profiling_string=*/nullptr,\n+ /*builtin_code=*/0,\n+ /*custom_name=*/nullptr,\n+ /*version=*/0};\n+}\n+\n+} // 
namespace tflite",
"filename": "tensorflow/lite/micro/kernels/cumsum.cc",
"status": "added"
},
{
"diff": "@@ -0,0 +1,180 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+\n+#include <limits>\n+#include <type_traits>\n+\n+#include \"tensorflow/lite/c/builtin_op_data.h\"\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_runner.h\"\n+#include \"tensorflow/lite/micro/test_helpers.h\"\n+#include \"tensorflow/lite/micro/testing/micro_test.h\"\n+\n+namespace tflite {\n+namespace testing {\n+namespace {\n+\n+struct CumSumTestParams {\n+ bool exclusive = false;\n+ bool reverse = false;\n+ int32_t axis = std::numeric_limits<int32_t>::max();\n+};\n+\n+void ExecuteCumSumTest(const CumSumTestParams& test_params,\n+ TfLiteTensor* tensors, int tensors_count) {\n+ constexpr int kInputArrayData[] = {2, 0, 1};\n+ TfLiteIntArray* inputs_array = IntArrayFromInts(kInputArrayData);\n+ constexpr int kOutputArrayData[] = {1, 2};\n+ TfLiteIntArray* outputs_array = IntArrayFromInts(kOutputArrayData);\n+\n+ TfLiteCumsumParams params;\n+ params.exclusive = test_params.exclusive;\n+ params.reverse = test_params.reverse;\n+\n+ const TfLiteRegistration registration = tflite::Register_CUMSUM();\n+ micro::KernelRunner runner(registration, tensors, tensors_count, inputs_array,\n+ outputs_array, static_cast<void*>(¶ms));\n+\n+ TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.InitAndPrepare());\n+ TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.Invoke());\n+}\n+\n+template <typename T>\n+void TestCumSum(const CumSumTestParams& test_params, const int* input_dims_data,\n+ const T* input_data, const int* expected_dims,\n+ const T* expected_data, T* output_data) {\n+ TfLiteIntArray* input_dims = IntArrayFromInts(input_dims_data);\n+ TfLiteIntArray* output_dims = IntArrayFromInts(expected_dims);\n+ const int output_count = ElementCount(*output_dims);\n+\n+ constexpr int axis_dims_data[] = {1, 1};\n+ TfLiteIntArray* axis_dims = IntArrayFromInts(axis_dims_data);\n+ const int32_t axis_data[] = {test_params.axis};\n+\n+ TfLiteTensor tensors[] = {\n+ CreateTensor(input_data, input_dims),\n+ CreateTensor(axis_data, axis_dims),\n+ CreateTensor(output_data, output_dims),\n+ };\n+ constexpr int tensors_count = std::extent<decltype(tensors)>::value;\n+ ExecuteCumSumTest(test_params, tensors, tensors_count);\n+\n+ constexpr float kTolerance = 1e-5;\n+ for (int i = 0; i < output_count; i++) {\n+ TF_LITE_MICRO_EXPECT_NEAR(expected_data[i], output_data[i], kTolerance);\n+ }\n+}\n+\n+} // namespace\n+} // namespace testing\n+} // namespace tflite\n+\n+TF_LITE_MICRO_TESTS_BEGIN\n+\n+TF_LITE_MICRO_TEST(CumSumOpTestSimpleTest) {\n+ constexpr int kDims[] = {2, 2, 4};\n+ constexpr float kInput[] = {1, 2, 3, 4, 5, 6, 7, 8};\n+ constexpr float kExpect[] = {1, 3, 6, 10, 5, 11, 18, 26};\n+\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ tflite::testing::CumSumTestParams test_params;\n+ test_params.axis = 1;\n+\n+ 
tflite::testing::TestCumSum(test_params, kDims, kInput, kDims, kExpect,\n+ output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(CumSumOpTestSimpleAxis0Test) {\n+ constexpr int kDims[] = {2, 2, 4};\n+ constexpr float kInput[] = {1, 2, 3, 4, 5, 6, 7, 8};\n+ constexpr float kExpect[] = {1, 2, 3, 4, 6, 8, 10, 12};\n+\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ tflite::testing::CumSumTestParams test_params;\n+ test_params.axis = 0;\n+\n+ tflite::testing::TestCumSum(test_params, kDims, kInput, kDims, kExpect,\n+ output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(CumSumOpTestSimple1DTest) {\n+ constexpr int kDims[] = {1, 8};\n+ constexpr float kInput[] = {1, 2, 3, 4, 5, 6, 7, 8};\n+ constexpr float kExpect[] = {1, 3, 6, 10, 15, 21, 28, 36};\n+\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ tflite::testing::CumSumTestParams test_params;\n+ test_params.axis = 0;\n+\n+ tflite::testing::TestCumSum(test_params, kDims, kInput, kDims, kExpect,\n+ output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(CumSumOpTestSimpleReverseTest) {\n+ constexpr int kDims[] = {2, 2, 4};\n+ constexpr float kInput[] = {1, 2, 3, 4, 5, 6, 7, 8};\n+ constexpr float kExpect[] = {10, 9, 7, 4, 26, 21, 15, 8};\n+\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ tflite::testing::CumSumTestParams test_params;\n+ test_params.axis = 1;\n+ test_params.reverse = true;\n+\n+ tflite::testing::TestCumSum(test_params, kDims, kInput, kDims, kExpect,\n+ output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(CumSumOpTestSimpleExclusiveTest) {\n+ constexpr int kDims[] = {2, 2, 4};\n+ constexpr float kInput[] = {1, 2, 3, 4, 5, 6, 7, 8};\n+ constexpr float kExpect[] = {0, 1, 3, 6, 0, 5, 11, 18};\n+\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ tflite::testing::CumSumTestParams test_params;\n+ test_params.axis = 1;\n+ test_params.exclusive = true;\n+\n+ tflite::testing::TestCumSum(test_params, kDims, kInput, kDims, kExpect,\n+ output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(CumSumOpTestSimpleReverseExclusiveTest) {\n+ constexpr int kDims[] = {2, 2, 4};\n+ constexpr float kInput[] = {1, 2, 3, 4, 5, 6, 7, 8};\n+ constexpr float kExpect[] = {9, 7, 4, 0, 21, 15, 8, 0};\n+\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ tflite::testing::CumSumTestParams test_params;\n+ test_params.axis = -1;\n+ test_params.exclusive = true;\n+ test_params.reverse = true;\n+\n+ tflite::testing::TestCumSum(test_params, kDims, kInput, kDims, kExpect,\n+ output_data);\n+}\n+\n+TF_LITE_MICRO_TESTS_END",
"filename": "tensorflow/lite/micro/kernels/cumsum_test.cc",
"status": "added"
},
{
"diff": "@@ -35,6 +35,7 @@ TfLiteRegistration Register_ADD_N();\n TfLiteRegistration Register_BATCH_TO_SPACE_ND();\n TfLiteRegistration Register_CAST();\n TfLiteRegistration Register_CONV_2D();\n+TfLiteRegistration Register_CUMSUM();\n TfLiteRegistration Register_DEPTHWISE_CONV_2D();\n TfLiteRegistration Register_DIV();\n TfLiteRegistration Register_ELU();",
"filename": "tensorflow/lite/micro/kernels/micro_ops.h",
"status": "modified"
},
{
"diff": "@@ -177,6 +177,11 @@ class MicroMutableOpResolver : public MicroOpResolver {\n ParseCos);\n }\n \n+ TfLiteStatus AddCumSum() {\n+ return AddBuiltin(BuiltinOperator_CUMSUM, tflite::Register_CUMSUM(),\n+ ParseCumsum);\n+ }\n+\n TfLiteStatus AddDepthwiseConv2D() {\n return AddBuiltin(BuiltinOperator_DEPTHWISE_CONV_2D,\n Register_DEPTHWISE_CONV_2D(), ParseDepthwiseConv2D);",
"filename": "tensorflow/lite/micro/micro_mutable_op_resolver.h",
"status": "modified"
},
{
"diff": "@@ -275,6 +275,7 @@ tensorflow/lite/micro/kernels/circular_buffer_test.cc \\\n tensorflow/lite/micro/kernels/comparisons_test.cc \\\n tensorflow/lite/micro/kernels/concatenation_test.cc \\\n tensorflow/lite/micro/kernels/conv_test.cc \\\n+tensorflow/lite/micro/kernels/cumsum_test.cc \\\n tensorflow/lite/micro/kernels/depthwise_conv_test.cc \\\n tensorflow/lite/micro/kernels/dequantize_test.cc \\\n tensorflow/lite/micro/kernels/detection_postprocess_test.cc \\\n@@ -334,6 +335,7 @@ tensorflow/lite/micro/kernels/comparisons.cc \\\n tensorflow/lite/micro/kernels/concatenation.cc \\\n tensorflow/lite/micro/kernels/conv.cc \\\n tensorflow/lite/micro/kernels/conv_common.cc \\\n+tensorflow/lite/micro/kernels/cumsum.cc \\\n tensorflow/lite/micro/kernels/depthwise_conv.cc \\\n tensorflow/lite/micro/kernels/depthwise_conv_common.cc \\\n tensorflow/lite/micro/kernels/dequantize.cc \\\n@@ -429,6 +431,7 @@ tensorflow/lite/kernels/internal/reference/ceil.h \\\n tensorflow/lite/kernels/internal/reference/comparisons.h \\\n tensorflow/lite/kernels/internal/reference/concatenation.h \\\n tensorflow/lite/kernels/internal/reference/conv.h \\\n+tensorflow/lite/kernels/internal/reference/cumsum.h \\\n tensorflow/lite/kernels/internal/reference/depthwiseconv_float.h \\\n tensorflow/lite/kernels/internal/reference/depthwiseconv_uint8.h \\\n tensorflow/lite/kernels/internal/reference/dequantize.h \\",
"filename": "tensorflow/lite/micro/tools/make/Makefile",
"status": "modified"
}
]
}
|
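As a quick sanity check on the CUMSUM port in the row above, its exclusive/reverse accumulator semantics can be reproduced outside TFLM. The standalone C++ sketch below (illustrative only; 1-D case; the function name is not a TensorFlow API) mirrors the reference kernel's loop and, with the default flags, reproduces the `CumSumOpTestSimple1DTest` golden values 1 3 6 10 15 21 28 36.

~~~cpp
#include <cstdio>
#include <vector>

// Illustrative 1-D cumulative sum with the same exclusive/reverse flags as
// the CUMSUM reference kernel: "exclusive" writes the running total before
// adding the current element, "reverse" walks the axis back to front.
std::vector<float> CumSum1D(const std::vector<float>& in, bool exclusive,
                            bool reverse) {
  std::vector<float> out(in.size(), 0.0f);
  float acc = 0.0f;
  for (size_t i = 0; i < in.size(); ++i) {
    const size_t idx = reverse ? in.size() - 1 - i : i;
    if (exclusive) {
      out[idx] = acc;
      acc += in[idx];
    } else {
      acc += in[idx];
      out[idx] = acc;
    }
  }
  return out;
}

int main() {
  // Matches CumSumOpTestSimple1DTest: expect 1 3 6 10 15 21 28 36.
  for (float v : CumSum1D({1, 2, 3, 4, 5, 6, 7, 8}, /*exclusive=*/false,
                          /*reverse=*/false)) {
    std::printf("%g ", v);
  }
  std::printf("\n");
  return 0;
}
~~~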
{
"body": "`ImageResizerState::ValidateAndCreateOutput` needs a tensor as the second argument. Actually it is unnecessary. Because `input` can be retrieved through `context->input(0)`. `ImageResizerGradientState` has a similar issue.",
"comments": [
{
"body": "I sent a pull request which should fix this issue",
"created_at": "2021-03-14T00:44:38Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47789\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47789\">No</a>\n",
"created_at": "2021-04-07T19:52:43Z"
}
],
"number": 47789,
"title": "ImageResizerState's public function contains redundant arguments"
}
|
{
"body": "Fix #47789 \r\n\r\nCurrently `ImageResizerState::ValidateAndCreateOutput()` needs a tensor as the second argument. Actually its value can be retrieved through `context->input(0)`. Hence, it is unnecessary to provide the second argument. `ImageResizerGradientState::ValidateAndCreateOutput()` has the similar issue.\r\n\r\nThis code change removes the redundant arguments of the two functions mentioned above.",
"number": 47788,
"review_comments": [],
"title": "Remove redundant arguments of ImageResizerState's API"
}
|
{
"commits": [
{
"message": "refactoring image_resize_state"
},
{
"message": "refactoring image_resize_state"
},
{
"message": "Merge branch 'refactor_submit' of https://github.com/CyangXu/tensorflow into refactor_submit"
}
],
"files": [
{
"diff": "@@ -653,7 +653,7 @@ class FusedResizeConv2DUsingGemmOp : public OpKernel {\n ImageResizerState st(false, false);\n if (DoResize) {\n st = ImageResizerState(align_corners_, false);\n- st.ValidateAndCalculateOutputSize(context, input);\n+ st.ValidateAndCalculateOutputSize(context);\n if (!context->status().ok()) return;\n } else {\n // Set up the resize parameters to do no scaling at all.",
"filename": "tensorflow/core/kernels/conv_ops_fused_image_transform.cc",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@ limitations under the License.\n #include <algorithm>\n #include <memory>\n \n-#include \"third_party/eigen3/unsupported/Eigen/CXX11/Tensor\"\n #include \"tensorflow/core/framework/op_kernel.h\"\n #include \"tensorflow/core/framework/register_types.h\"\n #include \"tensorflow/core/framework/tensor.h\"\n@@ -28,6 +27,7 @@ limitations under the License.\n #include \"tensorflow/core/lib/core/status.h\"\n #include \"tensorflow/core/platform/logging.h\"\n #include \"tensorflow/core/util/image_resizer_state.h\"\n+#include \"third_party/eigen3/unsupported/Eigen/CXX11/Tensor\"\n \n namespace tensorflow {\n \n@@ -144,17 +144,17 @@ class ResizeAreaOp : public OpKernel {\n }\n \n void Compute(OpKernelContext* context) override {\n- const Tensor& input = context->input(0);\n // The op always did the correct thing with regard to pixel centers, so we\n // always pass false here for half_pixel_centers since ImageResizerState\n // enforces that if align_corners_ is true, half_pixel_centers must be\n // false.\n ImageResizerState st(align_corners_, /*unused half_pixel_centers=*/false);\n- st.ValidateAndCreateOutput(context, input);\n+ st.ValidateAndCreateOutput(context);\n \n if (!context->status().ok()) return;\n \n- typename TTypes<T, 4>::ConstTensor input_data(input.tensor<T, 4>());\n+ typename TTypes<T, 4>::ConstTensor input_data(\n+ context->input(0).tensor<T, 4>());\n \n // Precompute values used when iterating over x coordinates within a row.\n // Note that it may be useful to cache x_interps for a given",
"filename": "tensorflow/core/kernels/image/resize_area_op.cc",
"status": "modified"
},
{
"diff": "@@ -21,7 +21,6 @@ limitations under the License.\n #include <algorithm>\n #include <array>\n \n-#include \"third_party/eigen3/unsupported/Eigen/CXX11/Tensor\"\n #include \"tensorflow/core/framework/op_kernel.h\"\n #include \"tensorflow/core/framework/register_types.h\"\n #include \"tensorflow/core/framework/tensor.h\"\n@@ -30,6 +29,7 @@ limitations under the License.\n #include \"tensorflow/core/lib/core/status.h\"\n #include \"tensorflow/core/platform/logging.h\"\n #include \"tensorflow/core/util/image_resizer_state.h\"\n+#include \"third_party/eigen3/unsupported/Eigen/CXX11/Tensor\"\n \n namespace tensorflow {\n namespace {\n@@ -557,13 +557,13 @@ class ResizeBicubicOp : public OpKernel {\n }\n \n void Compute(OpKernelContext* context) override {\n- const Tensor& input = context->input(0);\n ImageResizerState st(align_corners_, half_pixel_centers_);\n- st.ValidateAndCreateOutput(context, input);\n+ st.ValidateAndCreateOutput(context);\n \n if (!context->status().ok()) return;\n \n- typename TTypes<T, 4>::ConstTensor input_data(input.tensor<T, 4>());\n+ typename TTypes<T, 4>::ConstTensor input_data(\n+ context->input(0).tensor<T, 4>());\n TTypes<float, 4>::Tensor output_data = st.output->tensor<float, 4>();\n \n interpolate_with_caching<T>(input_data, st, half_pixel_centers_,\n@@ -587,16 +587,15 @@ class ResizeBicubicOpGrad : public OpKernel {\n \n void Compute(OpKernelContext* context) override {\n // Validate input.\n- // First argument is gradient with respect to resized image.\n- const Tensor& input = context->input(0);\n- const Tensor& original_image = context->input(1);\n-\n ImageResizerGradientState st(align_corners_, half_pixel_centers_);\n- st.ValidateAndCreateOutput(context, input, original_image);\n+ st.ValidateAndCreateOutput(context);\n \n if (!context->status().ok()) return;\n \n- TTypes<float, 4>::ConstTensor input_grad = input.tensor<float, 4>();\n+ // First argument is gradient with respect to resized image.\n+ TTypes<float, 4>::ConstTensor input_grad =\n+ context->input(0).tensor<float, 4>();\n+\n typename TTypes<T, 4>::Tensor output_grad(st.output->tensor<T, 4>());\n \n ResizeBicubicGrad<T>(input_grad, st, half_pixel_centers_, output_grad);",
"filename": "tensorflow/core/kernels/image/resize_bicubic_op.cc",
"status": "modified"
},
{
"diff": "@@ -28,7 +28,6 @@ limitations under the License.\n \n #include <memory>\n \n-#include \"third_party/eigen3/unsupported/Eigen/CXX11/Tensor\"\n #include \"tensorflow/core/framework/op_kernel.h\"\n #include \"tensorflow/core/framework/register_types.h\"\n #include \"tensorflow/core/framework/tensor.h\"\n@@ -38,6 +37,7 @@ limitations under the License.\n #include \"tensorflow/core/lib/core/status.h\"\n #include \"tensorflow/core/platform/logging.h\"\n #include \"tensorflow/core/util/image_resizer_state.h\"\n+#include \"third_party/eigen3/unsupported/Eigen/CXX11/Tensor\"\n \n namespace tensorflow {\n \n@@ -54,16 +54,16 @@ class ResizeBilinearOp : public OpKernel {\n }\n \n void Compute(OpKernelContext* context) override {\n- const Tensor& input = context->input(0);\n ImageResizerState st(align_corners_, half_pixel_centers_);\n- st.ValidateAndCreateOutput(context, input);\n+ st.ValidateAndCreateOutput(context);\n \n if (!context->status().ok()) return;\n \n // Return if the output is empty.\n if (st.output->NumElements() == 0) return;\n \n- typename TTypes<T, 4>::ConstTensor image_data(input.tensor<T, 4>());\n+ typename TTypes<T, 4>::ConstTensor image_data(\n+ context->input(0).tensor<T, 4>());\n TTypes<float, 4>::Tensor output_data = st.output->tensor<float, 4>();\n \n functor::ResizeBilinear<Device, T>()(\n@@ -370,16 +370,14 @@ class ResizeBilinearOpGrad : public OpKernel {\n \n void Compute(OpKernelContext* context) override {\n // Validate input.\n- // First argument is gradient with respect to resized image.\n- const Tensor& input = context->input(0);\n- const Tensor& original_image = context->input(1);\n-\n ImageResizerGradientState st(align_corners_, half_pixel_centers_);\n- st.ValidateAndCreateOutput(context, input, original_image);\n+ st.ValidateAndCreateOutput(context);\n \n if (!context->status().ok()) return;\n \n- TTypes<float, 4>::ConstTensor input_grad = input.tensor<float, 4>();\n+ // First argument is gradient with respect to resized image.\n+ TTypes<float, 4>::ConstTensor input_grad =\n+ context->input(0).tensor<float, 4>();\n \n if (!std::is_same<T, Eigen::half>::value &&\n !std::is_same<T, Eigen::bfloat16>::value) {",
"filename": "tensorflow/core/kernels/image/resize_bilinear_op.cc",
"status": "modified"
},
{
"diff": "@@ -20,7 +20,6 @@ limitations under the License.\n \n #include <memory>\n \n-#include \"third_party/eigen3/unsupported/Eigen/CXX11/Tensor\"\n #include \"tensorflow/core/framework/op_kernel.h\"\n #include \"tensorflow/core/framework/register_types.h\"\n #include \"tensorflow/core/framework/tensor.h\"\n@@ -29,6 +28,7 @@ limitations under the License.\n #include \"tensorflow/core/lib/core/status.h\"\n #include \"tensorflow/core/platform/logging.h\"\n #include \"tensorflow/core/util/image_resizer_state.h\"\n+#include \"third_party/eigen3/unsupported/Eigen/CXX11/Tensor\"\n \n namespace tensorflow {\n \n@@ -46,9 +46,8 @@ class ResizeNearestNeighborOp : public OpKernel {\n }\n \n void Compute(OpKernelContext* context) override {\n- const Tensor& input = context->input(0);\n ImageResizerState st(align_corners_, half_pixel_centers_);\n- st.ValidateAndCreateOutput(context, input);\n+ st.ValidateAndCreateOutput(context);\n \n if (!context->status().ok()) return;\n \n@@ -59,7 +58,8 @@ class ResizeNearestNeighborOp : public OpKernel {\n // Return if the output is empty.\n if (st.output->NumElements() == 0) return;\n \n- typename TTypes<T, 4>::ConstTensor input_data(input.tensor<T, 4>());\n+ typename TTypes<T, 4>::ConstTensor input_data(\n+ context->input(0).tensor<T, 4>());\n typename TTypes<T, 4>::Tensor output_data(st.output->tensor<T, 4>());\n \n bool status;",
"filename": "tensorflow/core/kernels/image/resize_nearest_neighbor_op.cc",
"status": "modified"
},
{
"diff": "@@ -700,19 +700,19 @@ class QuantizedResizeBilinearOp : public OpKernel {\n }\n \n void Compute(OpKernelContext* context) override {\n- const Tensor& input = context->input(0);\n const float in_min = context->input(2).flat<float>()(0);\n const float in_max = context->input(3).flat<float>()(0);\n \n ImageResizerState st(align_corners_, false);\n- st.ValidateAndCreateOutput(context, input);\n+ st.ValidateAndCreateOutput(context);\n \n if (!context->status().ok()) return;\n \n // Return if the output is empty.\n if (st.output->NumElements() == 0) return;\n \n- typename TTypes<T, 4>::ConstTensor image_data(input.tensor<T, 4>());\n+ typename TTypes<T, 4>::ConstTensor image_data(\n+ context->input(0).tensor<T, 4>());\n typename TTypes<T, 4>::Tensor output_data(st.output->tensor<T, 4>());\n \n ResizeBilinear<T>(image_data, st.height_scale, st.width_scale, in_min,",
"filename": "tensorflow/core/kernels/quantized_resize_bilinear_op.cc",
"status": "modified"
},
{
"diff": "@@ -27,13 +27,13 @@ limitations under the License.\n #include <algorithm>\n #include <array>\n \n-#include \"third_party/eigen3/unsupported/Eigen/CXX11/Tensor\"\n #include \"tensorflow/core/framework/bounds_check.h\"\n #include \"tensorflow/core/framework/op_kernel.h\"\n #include \"tensorflow/core/framework/register_types.h\"\n #include \"tensorflow/core/framework/tensor.h\"\n #include \"tensorflow/core/framework/tensor_shape.h\"\n #include \"tensorflow/core/framework/types.h\"\n+#include \"third_party/eigen3/unsupported/Eigen/CXX11/Tensor\"\n \n namespace tensorflow {\n \n@@ -76,45 +76,53 @@ struct ImageResizerState {\n // height_scale and width_scale, and calculates the output size.\n // If any of these operations fails, it sets an error status in\n // the context, which the caller must check.\n- void ValidateAndCalculateOutputSize(OpKernelContext* context,\n- const Tensor& input) {\n+ void ValidateAndCalculateOutputSize(OpKernelContext* context) {\n OP_REQUIRES(\n context,\n !half_pixel_centers_ || (half_pixel_centers_ && !align_corners_),\n errors::InvalidArgument(\"If half_pixel_centers is True, \"\n \"align_corners must be False.\"));\n- OP_REQUIRES(context, input.dims() == 4,\n+\n+ const TensorShape& input_shape = context->input(0).shape();\n+ OP_REQUIRES(context, input_shape.dims() == 4,\n errors::InvalidArgument(\"input must be 4-dimensional\",\n- input.shape().DebugString()));\n+ input_shape.DebugString()));\n+ batch_size = input_shape.dim_size(0);\n+ channels = input_shape.dim_size(3);\n+ OP_REQUIRES(\n+ context, channels > 0,\n+ errors::InvalidArgument(\"image must have at least one channel\"));\n+\n+ // Verify and assign `in_height` and `in_width`.\n+ OP_REQUIRES(\n+ context, input_shape.dim_size(1) > 0 && input_shape.dim_size(2) > 0,\n+ errors::InvalidArgument(\"input image must be of non-zero size\"));\n+ OP_REQUIRES(\n+ context,\n+ FastBoundsCheck(input_shape.dim_size(1),\n+ std::numeric_limits<int32>::max()) &&\n+ FastBoundsCheck(input_shape.dim_size(2),\n+ std::numeric_limits<int32>::max()),\n+ errors::InvalidArgument(\"input sizes must be between 0 and max int32\"));\n+ in_height = static_cast<int32>(input_shape.dim_size(1));\n+ in_width = static_cast<int32>(input_shape.dim_size(2));\n+\n+ // Verify the output tensor's shape.\n const Tensor& shape_t = context->input(1);\n OP_REQUIRES(context, shape_t.dims() == 1,\n errors::InvalidArgument(\"shape_t must be 1-dimensional\",\n shape_t.shape().DebugString()));\n OP_REQUIRES(context, shape_t.NumElements() == 2,\n errors::InvalidArgument(\"shape_t must have two elements\",\n shape_t.shape().DebugString()));\n+\n+ // Verify and assign `out_height` and `out_width`.\n auto Svec = shape_t.vec<int32>();\n- batch_size = input.dim_size(0);\n out_height = internal::SubtleMustCopy(Svec(0));\n out_width = internal::SubtleMustCopy(Svec(1));\n- OP_REQUIRES(\n- context,\n- FastBoundsCheck(input.dim_size(1), std::numeric_limits<int32>::max()) &&\n- FastBoundsCheck(input.dim_size(2),\n- std::numeric_limits<int32>::max()),\n- errors::InvalidArgument(\"input sizes must be between 0 and max int32\"));\n-\n- in_height = static_cast<int32>(input.dim_size(1));\n- in_width = static_cast<int32>(input.dim_size(2));\n- channels = input.dim_size(3);\n OP_REQUIRES(context, out_height > 0 && out_width > 0,\n errors::InvalidArgument(\"output dimensions must be positive\"));\n- OP_REQUIRES(\n- context, channels > 0,\n- errors::InvalidArgument(\"image must have at least one channel\"));\n- OP_REQUIRES(\n- context, input.dim_size(1) > 0 && 
input.dim_size(2) > 0,\n- errors::InvalidArgument(\"input image must be of non-zero size\"));\n+\n height_scale = CalculateResizeScale(in_height, out_height, align_corners_);\n width_scale = CalculateResizeScale(in_width, out_width, align_corners_);\n \n@@ -132,14 +140,14 @@ struct ImageResizerState {\n }\n \n // Calculates all the required variables, and allocates the output.\n- void ValidateAndCreateOutput(OpKernelContext* context, const Tensor& input) {\n- ValidateAndCalculateOutputSize(context, input);\n+ void ValidateAndCreateOutput(OpKernelContext* context) {\n+ ValidateAndCalculateOutputSize(context);\n if (!context->status().ok()) return;\n- OP_REQUIRES_OK(context, context->allocate_output(\n- 0,\n- TensorShape({input.dim_size(0), out_height,\n- out_width, input.dim_size(3)}),\n- &output));\n+ OP_REQUIRES_OK(\n+ context,\n+ context->allocate_output(\n+ 0, TensorShape({batch_size, out_height, out_width, channels}),\n+ &output));\n }\n \n int64 batch_size;\n@@ -163,41 +171,43 @@ struct ImageResizerGradientState {\n : align_corners_(align_corners),\n half_pixel_centers_(half_pixel_centers) {}\n \n- void ValidateAndCreateOutput(OpKernelContext* context, const Tensor& input,\n- const Tensor& original_image) {\n+ void ValidateAndCreateOutput(OpKernelContext* context) {\n OP_REQUIRES(\n context,\n !half_pixel_centers_ || (half_pixel_centers_ && !align_corners_),\n errors::InvalidArgument(\"If half_pixel_centers is True, \"\n \"align_corners must be False.\"));\n \n+ const Tensor& input = context->input(0);\n OP_REQUIRES(context, input.dims() == 4,\n errors::InvalidArgument(\"input_grad must be 4-dimensional\",\n input.shape().DebugString()));\n+\n // Resizers always produce float images, so input gradient must\n // always be a float.\n OP_REQUIRES(context, input.dtype() == DT_FLOAT,\n errors::InvalidArgument(\"input_grad must be of type float\",\n DataTypeString(input.dtype())));\n \n- OP_REQUIRES(context, original_image.dims() == 4,\n- errors::InvalidArgument(\"original_image must be 4-dimensional\",\n- original_image.shape().DebugString()));\n-\n- // Allocate output and initialize to zeros.\n batch_size = input.dim_size(0);\n channels = input.dim_size(3);\n+\n resized_height = input.dim_size(1);\n resized_width = input.dim_size(2);\n- original_height = original_image.dim_size(1);\n- original_width = original_image.dim_size(2);\n \n // The following check is also carried out for the forward op. It is added\n // here to prevent a divide-by-zero exception when either height_scale or\n // width_scale is being calculated.\n OP_REQUIRES(context, resized_height > 0 && resized_width > 0,\n errors::InvalidArgument(\"resized dimensions must be positive\"));\n \n+ const TensorShape& output_shape = context->input(1).shape();\n+ OP_REQUIRES(context, output_shape.dims() == 4,\n+ errors::InvalidArgument(\"original_image must be 4-dimensional\",\n+ output_shape.DebugString()));\n+ original_height = output_shape.dim_size(1);\n+ original_width = output_shape.dim_size(2);\n+\n // The following check is also carried out for the forward op. 
It is added\n // here to prevent either height_scale or width_scale from being set to\n // zero, which would cause a divide-by-zero exception in the deterministic\n@@ -217,7 +227,7 @@ struct ImageResizerGradientState {\n CalculateResizeScale(original_height, resized_height, align_corners_);\n width_scale =\n CalculateResizeScale(original_width, resized_width, align_corners_);\n- output = nullptr;\n+\n OP_REQUIRES_OK(context, context->allocate_output(\n 0,\n TensorShape({batch_size, original_height,\n@@ -233,7 +243,7 @@ struct ImageResizerGradientState {\n int64 original_width;\n float height_scale;\n float width_scale;\n- Tensor* output;\n+ Tensor* output = nullptr;\n \n private:\n bool align_corners_;",
"filename": "tensorflow/core/util/image_resizer_state.h",
"status": "modified"
}
]
}
|
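The refactored ImageResizerState in the diffs above now reads its inputs directly from the OpKernelContext and validates rank, channel count, and size bounds before allocating the output. As an illustration only (not part of the diff), here is a minimal Python sketch of the behavior those checks enforce, assuming an eager TF 2.x session; the exact error wording may differ between versions.

```python
# Minimal sketch, assuming eager TF 2.x; the error text follows the checks in
# the diff above ("input must be 4-dimensional") but may vary by version.
import tensorflow as tf

images = tf.ones([1, 8, 8, 3])  # valid 4-D input: [batch, height, width, channels]
out = tf.raw_ops.ResizeBilinear(images=images, size=[16, 16])
print(out.shape)  # (1, 16, 16, 3)

try:
    # A 3-D input should be rejected by the shape validation instead of crashing.
    tf.raw_ops.ResizeBilinear(images=tf.ones([8, 8, 3]), size=[16, 16])
except tf.errors.InvalidArgumentError as e:
    print("rejected:", e.message)
```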
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator CUMSUM from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1 (step 1): Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2 (step 2): Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\n\r\nThe next 3 steps are combined into a single PR3 with separate commits:\r\n\r\n(step 3): Copy operator from lite to micro making minimal changes and not including in the build\r\n(step 4): Delete extra code from the micro copy of the operator\r\n(step 5): Port micro copy of operator as necessary and add a corresponding test",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47290\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47290\">No</a>\n",
"created_at": "2021-06-02T16:07:22Z"
}
],
"number": 47290,
"title": "micro: port op CUMSUM from lite"
}
|
{
"body": "Create the reference implementation in its own header so that micro\r\ncan use it without the unrelated depedencies of reference_ops.h.\r\n\r\nPR step 2 for issue #47290",
"number": 47732,
"review_comments": [],
"title": "micro: CUMSUM PR2"
}
|
{
"commits": [
{
"message": "micro: CUMSUM PR2\n\nCreate the reference implementation in its own header so that micro\ncan use it without the unrelated depedencies of reference_ops.h.\n\nPR step 2 for issue #47290"
},
{
"message": "Correct operator/class/test naming.\nRemove extraneous header files.\nRemove extraneous class methods from test code."
},
{
"message": "Merge branch 'master' into Cumsum-pr2"
},
{
"message": "remove TFLite CUMSUM reference op test\n\nThe TFLite reference op test will be added in a later PR."
}
],
"files": [
{
"diff": "@@ -467,6 +467,7 @@ cc_library(\n \"reference/concatenation.h\",\n \"reference/conv.h\",\n \"reference/conv3d.h\",\n+ \"reference/cumsum.h\",\n \"reference/densify.h\",\n \"reference/depth_to_space.h\",\n \"reference/depthwiseconv_float.h\",\n@@ -575,6 +576,7 @@ cc_library(\n \"reference/concatenation.h\",\n \"reference/conv.h\",\n \"reference/conv3d.h\",\n+ \"reference/cumsum.h\",\n \"reference/densify.h\",\n \"reference/depth_to_space.h\",\n \"reference/depthwiseconv_float.h\",",
"filename": "tensorflow/lite/kernels/internal/BUILD",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,85 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#ifndef TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_CUMSUM_H_\n+#define TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_CUMSUM_H_\n+\n+#include <cstdint>\n+\n+#include \"tensorflow/lite/kernels/internal/compatibility.h\"\n+#include \"tensorflow/lite/kernels/internal/types.h\"\n+\n+namespace tflite {\n+namespace reference_ops {\n+\n+template <typename T>\n+inline void CumSum(const T* input_data, const RuntimeShape& shape, int32_t axis,\n+ bool exclusive, bool reverse, T* output_data) {\n+ const int32_t rank = shape.DimensionsCount();\n+ TFLITE_DCHECK_GE(rank, 1);\n+ TFLITE_DCHECK_GE(axis, 0);\n+ TFLITE_DCHECK_LT(axis, rank);\n+\n+ size_t inner = 1;\n+ size_t outer = 1;\n+ size_t depth = 1;\n+ for (int32_t i = 0; i < rank; i++) {\n+ if (i < axis)\n+ inner *= shape.Dims(i);\n+ else if (i > axis)\n+ outer *= shape.Dims(i);\n+ else\n+ depth = shape.Dims(i);\n+ }\n+\n+ for (size_t outer_index = 0; outer_index < outer; outer_index++) {\n+ size_t outer_index_adj;\n+ if (reverse)\n+ outer_index_adj = (outer - 1) - outer_index;\n+ else\n+ outer_index_adj = outer_index;\n+ for (size_t inner_index = 0; inner_index < inner; inner_index++) {\n+ T accumulator = 0;\n+ size_t inner_index_adj;\n+ if (reverse)\n+ inner_index_adj = (inner - 1) - inner_index;\n+ else\n+ inner_index_adj = inner_index;\n+ for (size_t depth_index = 0; depth_index < depth; depth_index++) {\n+ size_t depth_index_adj;\n+ if (reverse)\n+ depth_index_adj = (depth - 1) - depth_index;\n+ else\n+ depth_index_adj = depth_index;\n+\n+ size_t index = outer_index_adj;\n+ index += inner_index_adj * depth * outer;\n+ index += depth_index_adj * outer;\n+\n+ if (exclusive) {\n+ output_data[index] = accumulator;\n+ accumulator += input_data[index];\n+ } else {\n+ accumulator += input_data[index];\n+ output_data[index] = accumulator;\n+ }\n+ }\n+ }\n+ }\n+}\n+\n+} // namespace reference_ops\n+} // namespace tflite\n+\n+#endif // TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_CUMSUM_H_",
"filename": "tensorflow/lite/kernels/internal/reference/cumsum.h",
"status": "added"
}
]
}
|
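As an aside (not part of the PR), the reference kernel added above walks inner/outer/depth indices to accumulate along one axis with optional `exclusive` and `reverse` behavior. A small NumPy sketch of the same per-slice semantics, for illustration only:

```python
# Illustration only: per-slice semantics of the CumSum reference kernel above,
# expressed with NumPy (exclusive shifts the sum right; reverse flips the axis).
import numpy as np

def cumsum_1d(x, exclusive=False, reverse=False):
    x = np.asarray(x)
    if reverse:
        x = x[::-1]
    out = np.cumsum(x)
    if exclusive:
        out = np.concatenate(([0], out[:-1]))  # each element excludes its own input
    if reverse:
        out = out[::-1]
    return out

print(cumsum_1d([1, 2, 3, 4]))                  # [ 1  3  6 10]
print(cumsum_1d([1, 2, 3, 4], exclusive=True))  # [0 1 3 6]
print(cumsum_1d([1, 2, 3, 4], reverse=True))    # [10  9  7  4]
```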
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator CUMSUM from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1 (step 1): Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2 (step 2): Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\n\r\nThe next 3 steps are combined into a single PR3 with separate commits:\r\n\r\n(step 3): Copy operator from lite to micro making minimal changes and not including in the build\r\n(step 4): Delete extra code from the micro copy of the operator\r\n(step 5): Port micro copy of operator as necessary and add a corresponding test",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47290\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47290\">No</a>\n",
"created_at": "2021-06-02T16:07:22Z"
}
],
"number": 47290,
"title": "micro: port op CUMSUM from lite"
}
|
{
"body": "Extract the parsing out of a switch statement case to create a\r\nstandalone function which can be called by the micro op resolver.\r\n\r\nPR step 1 for issue #47290",
"number": 47731,
"review_comments": [],
"title": "micro: CUMSUM PR1"
}
|
{
"commits": [
{
"message": "micro: CUMSUM PR1\n\nExtract the parsing out of a switch statement case to create a\nstandalone function which can be called by the micro op resolver.\n\nPR step 1 for issue #47290"
}
],
"files": [
{
"diff": "@@ -205,6 +205,10 @@ TfLiteStatus ParseOpDataTfLite(const Operator* op, BuiltinOperator op_type,\n return ParseConv2D(op, error_reporter, allocator, builtin_data);\n }\n \n+ case BuiltinOperator_CUMSUM: {\n+ return ParseCumsum(op, error_reporter, allocator, builtin_data);\n+ }\n+\n case BuiltinOperator_DEPTH_TO_SPACE: {\n return ParseDepthToSpace(op, error_reporter, allocator, builtin_data);\n }\n@@ -753,16 +757,6 @@ TfLiteStatus ParseOpDataTfLite(const Operator* op, BuiltinOperator op_type,\n *builtin_data = params.release();\n return kTfLiteOk;\n }\n- case BuiltinOperator_CUMSUM: {\n- auto params = safe_allocator.Allocate<TfLiteCumsumParams>();\n- TF_LITE_ENSURE(error_reporter, params != nullptr);\n- if (const auto* cumsum_params = op->builtin_options_as_CumsumOptions()) {\n- params->exclusive = cumsum_params->exclusive();\n- params->reverse = cumsum_params->reverse();\n- }\n- *builtin_data = params.release();\n- return kTfLiteOk;\n- }\n case BuiltinOperator_CONV_3D: {\n auto params = safe_allocator.Allocate<TfLiteConv3DParams>();\n TF_LITE_ENSURE(error_reporter, params != nullptr);\n@@ -1105,6 +1099,24 @@ TfLiteStatus ParseConv2D(const Operator* op, ErrorReporter* error_reporter,\n return kTfLiteOk;\n }\n \n+// We have this parse function instead of directly returning kTfLiteOk from the\n+// switch-case in ParseOpData because this function is used as part of the\n+// selective registration for the OpResolver implementation in micro.\n+TfLiteStatus ParseCumsum(const Operator* op, ErrorReporter* error_reporter,\n+ BuiltinDataAllocator* allocator, void** builtin_data) {\n+ CheckParsePointerParams(op, error_reporter, allocator, builtin_data);\n+\n+ SafeBuiltinDataAllocator safe_allocator(allocator);\n+ auto params = safe_allocator.Allocate<TfLiteCumsumParams>();\n+ TF_LITE_ENSURE(error_reporter, params != nullptr);\n+ if (const auto* cumsum_params = op->builtin_options_as_CumsumOptions()) {\n+ params->exclusive = cumsum_params->exclusive();\n+ params->reverse = cumsum_params->reverse();\n+ }\n+ *builtin_data = params.release();\n+ return kTfLiteOk;\n+}\n+\n // We have this parse function instead of directly returning kTfLiteOk from the\n // switch-case in ParseOpData because this function is used as part of the\n // selective registration for the OpResolver implementation in micro.",
"filename": "tensorflow/lite/core/api/flatbuffer_conversions.cc",
"status": "modified"
},
{
"diff": "@@ -110,6 +110,9 @@ TfLiteStatus ParseConv2D(const Operator* op, ErrorReporter* error_reporter,\n TfLiteStatus ParseCos(const Operator* op, ErrorReporter* error_reporter,\n BuiltinDataAllocator* allocator, void** builtin_data);\n \n+TfLiteStatus ParseCumsum(const Operator* op, ErrorReporter* error_reporter,\n+ BuiltinDataAllocator* allocator, void** builtin_data);\n+\n TfLiteStatus ParseDepthToSpace(const Operator* op,\n ErrorReporter* error_reporter,\n BuiltinDataAllocator* allocator,",
"filename": "tensorflow/lite/core/api/flatbuffer_conversions.h",
"status": "modified"
}
]
}
|
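For context (an illustration, not part of the diff), the two flags that ParseCumsum copies out of CumsumOptions correspond to the `exclusive` and `reverse` arguments of the TensorFlow-level op:

```python
# Illustration: the flags extracted by ParseCumsum mirror tf.cumsum's arguments.
import tensorflow as tf

x = tf.constant([1, 2, 3, 4])
print(tf.cumsum(x).numpy())                                # [ 1  3  6 10]
print(tf.cumsum(x, exclusive=True).numpy())                # [0 1 3 6]
print(tf.cumsum(x, reverse=True).numpy())                  # [10  9  7  4]
print(tf.cumsum(x, exclusive=True, reverse=True).numpy())  # [9 7 4 0]
```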
{
"body": "<em>Please make sure that this is a bug. As per our\r\n[GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md),\r\nwe only address code/doc bugs, performance issues, feature requests and\r\nbuild/installation issues on GitHub. tag:bug_template</em>\r\n\r\n**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): **Yes**\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): **Ubuntu 18.04**\r\n- TensorFlow installed from (source or binary): **Binary**\r\n- TensorFlow version (use command below): **TF:2.5.0-dev20210114**\r\n- Python version: **3.7**\r\n- CUDA/cuDNN version: **11.0, 8.0.4**\r\n- GPU model and memory: **1060**\r\n\r\n**Describe the current behavior**\r\nTensorRT converter crashes with a segmentation fault when I try to export my `saved_model`.\r\nInterestingly, if I set `minimum_segment_size=10`, it works because it skips \r\n\r\n*Replaced segment 5 consisting of 7 nodes by StatefulPartitionedCall/decode_predictions/TRTEngineOp_0_5.\r\n2021-01-15 15:21:38.915310: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:858] Segment consists of nodes: StatefulPartitionedCall/decode_predictions/combined_non_max_suppression/CombinedNonMaxSuppression, StatefulPartitionedCall/decode_predictions/combined_non_max_suppression/CombinedNonMaxSuppression/max_output_size_per_class, StatefulPartitionedCall/decode_predictions/combined_non_max_suppression/Const, StatefulPartitionedCall/decode_predictions/combined_non_max_suppression/iou_threshold, StatefulPartitionedCall/decode_predictions/combined_non_max_suppression/score_threshold, StatefulPartitionedCall/decode_predictions/transpose_1, StatefulPartitionedCall/decode_predictions/transpose_1/perm*\r\n\r\nI have attached the full log after running with these flags\r\n`TF_CPP_VMODULE=trt_engine_op=2,convert_nodes=2,convert_graph=2,segment=2,trt_shape_optimization_profiles=2,trt_engine_resource_ops=2 python trt.py`\r\n\r\n**Standalone code to reproduce the issue**\r\n```python\r\nimport os\r\n\r\nimport tensorflow as tf\r\n\r\n## Download and extract the zip \r\n## URL: https://drive.google.com/file/d/1Zxqdnm2iHpJGdUl17cAi-lV7wZ3UhMDA/view\r\n\r\nparams = tf.experimental.tensorrt.ConversionParams(\r\n precision_mode='FP32',\r\n maximum_cached_engines=1,\r\n minimum_segment_size=5)\r\n\r\nconverter = tf.experimental.tensorrt.Converter(\r\n input_saved_model_dir='retinanet-18-640-30x-64-tpu',\r\n conversion_params=params)\r\nconverter.convert()\r\n\r\ndef input_fn(steps=1):\r\n for i in range(steps):\r\n yield (tf.random.uniform([640, 640, 3]), tf.constant(1, dtype=tf.int32))\r\n \r\nconverter.build(input_fn=input_fn)\r\nconverter.save('trt')\r\n```\r\n\r\n**Other info / logs** Include any logs or source code that would be helpful to\r\ndiagnose the problem. If including tracebacks, please include the full\r\ntraceback. Large logs and files should be attached.\r\n[trt_log.txt](https://github.com/tensorflow/tensorflow/files/5819748/trt_log.txt)\r\n",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46453\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46453\">No</a>\n",
"created_at": "2021-01-15T10:16:57Z"
},
{
"body": "Was able to run the code without any issues on [TF v2.4](https://colab.research.google.com/gist/amahendrakar/7f400de432bddbf6b4e47a0feb33ed7a/46453.ipynb).\r\n\r\nHowever, session crashes on running the code with [TF-nightly](https://colab.research.google.com/gist/amahendrakar/be033a6c68b0028f5522924ef378e66b/46453-tf-nightly.ipynb#scrollTo=rdbO5RSdFM6a) (i.e. v2.5.0-dev20210114). Please check the linked gist for reference. Thanks!",
"created_at": "2021-01-15T17:44:59Z"
},
{
"body": "I can successfully convert, build, and then load back the model in `2.4.0`. But if I include `tf.image.combined_non_max_suppression` op for conversion, I get totally wrong predictions from the converted model. \r\n\r\nThe same code fails in nightly `dev20210114`.\r\n\r\n\r\nFor reference, this is how i call `tf.image.combined_non_max_suppression`\r\n\r\n```python\r\ndef call(self, predictions):\r\n box_predictions, class_predictions = predictions\r\n\r\n class_predictions = tf.cast(class_predictions, dtype=tf.float32)\r\n box_predictions = tf.cast(box_predictions, dtype=tf.float32)\r\n\r\n class_predictions = tf.nn.sigmoid(class_predictions) # [batch_size, num_anchors, num_classes]\r\n boxes = self._decode_box_predictions(self._anchors.boxes[None, ...],\r\n box_predictions) # [batch_size, num_anchors, 4]; (absolute coordinates)\r\n\r\n if self.pre_nms_top_k > 0: # This condition return false because `pre_nms_top_k` is -1 always\r\n top_k_class_predictions, top_k_boxes = self._filter_top_k(\r\n class_predictions, boxes)\r\n\r\n else:\r\n top_k_boxes = tf.expand_dims(boxes, axis=2) # [batch_size, num_anchors, 1, 4]; (absolute coordinates)\r\n top_k_class_predictions = class_predictions # [batch_size, num_anchors, num_classes]\r\n\r\n return tf.image.combined_non_max_suppression(\r\n top_k_boxes,\r\n top_k_class_predictions,\r\n self.max_detections_per_class, # 100\r\n self.max_detections, # 100\r\n self.nms_iou_threshold, # 0.5\r\n self.confidence_threshold, # 0.05\r\n clip_boxes=False,\r\n )\r\n```\r\n\r\n**EDIT 1**: I could fix the wrong predictions issue (after conversion) by using normalized coordinates (when x1, y1, x2, y2 lie in [0, 1])\r\nBut according to [the documentation](https://www.tensorflow.org/api_docs/python/tf/image/combined_non_max_suppression), both normalized and absolute coordinates should work. I think the TRT plugin only supports normalized coordinates.\r\n```\r\n... \"Bounding boxes are supplied as [y1, x1, y2, x2], where (y1, x1) and (y2, x2) \r\nare the coordinates of any diagonal pair of box corners and the coordinates can\r\nbe provided as normalized (i.e., lying in the interval [0, 1]) or absolute\"...\r\n```\r\n\r\n**EDIT 2**\r\nLooks like this is the reason why we always send normalized coordinates to TRT plugin\r\nhttps://github.com/tensorflow/tensorflow/blob/4fa4184a5a454eceb5b567c8b3c4fce46faf2de8/tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc#L5911-L5917",
"created_at": "2021-01-15T19:07:55Z"
},
{
"body": "@bixia1 what do you think? Is [this](https://github.com/tensorflow/tensorflow/blob/4fa4184a5a454eceb5b567c8b3c4fce46faf2de8/tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc#L5914) correct?",
"created_at": "2021-01-21T23:58:34Z"
},
{
"body": "The [PR in question](https://github.com/tensorflow/tensorflow/pull/40062) (merged on 1/13/2021) doesn’t make any change in terms of always use normalized coordinate in TF-TRT.\r\n\r\nThe [TF document](https://www.tensorflow.org/api_docs/python/tf/image/combined_non_max_suppression) says the bounding box coordinates can be normalized or absolute values. But I couldn't tell how the operation representation indicate whether the coordinates are normalized or absolute values. I looked at [the implement of the compute method for the operation](https://github.com/tensorflow/tensorflow/blob/9045dcaf276cb7b24fde33da09165cb38d157a5e/tensorflow/core/kernels/image/non_max_suppression_op.cc#L916) and couldn't figure out this either. I am asking for information about this in an internal channel.\r\n\r\n",
"created_at": "2021-01-22T02:25:50Z"
},
{
"body": "@tfeher ",
"created_at": "2021-01-22T02:31:04Z"
},
{
"body": "I got an answer from the TensorFlow people which help me understand the situation. The TensorFlow implementation for the operation is the same regardless whether the coordinates are normalized or absolute values. On the other hand, the TensorRT implementation requires a parameter that specifies whether the coordinates are normalized or not, see code [here](https://github.com/tensorflow/tensorflow/blob/ab9eb3d10400c92b15bfae62b284bf9937cb1845/tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc#L5951-L5952). If this field is indeed used in the TensorRT implementation, then we have a problem here. Now the question is why it was working before the [PR in question](https://github.com/tensorflow/tensorflow/pull/40062)? I will let @tfeher and @DEKHTIARJonathan take care of this.",
"created_at": "2021-01-22T06:42:09Z"
},
{
"body": "@bixia1 There are two issues here, \r\n - Conversion of `tf.image.combined_non_max_suppression` in nightly fails. But works in `2.4.0`\r\n - Reading the comments [here](https://github.com/tensorflow/tensorflow/blob/4fa4184a5a454eceb5b567c8b3c4fce46faf2de8/tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc#L5911-L5917), calculation of width/height differs in tensorrt vs tensorflow op. To avoid this, the converter always assumes the users are sending normalized coordinates [here](https://github.com/tensorflow/tensorflow/blob/4fa4184a5a454eceb5b567c8b3c4fce46faf2de8/tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc#L5918).\r\n\r\nHence, if the users want to export `tf.image.combined_non_max_suppression` to tensorrt, they should make sure that they are sending normalized coordinates. But the documentation fails to warn the user about this.",
"created_at": "2021-01-22T13:11:18Z"
},
{
"body": "> The TensorFlow implementation for the operation is the same regardless whether the coordinates are normalized or absolute values. On the other hand, the TensorRT implementation requires a parameter that specifies whether the coordinates are normalized or not\r\n\r\nTensorRT indeed has two implementation for the IOU calculation one for coordinates that should be interpreted as [pixels](https://github.com/NVIDIA/TensorRT/blob/183f891191f08fd016216fd0b94bc9c8c52d0ac2/plugin/common/kernels/allClassNMS.cu#L142-L143), and one otherwise (activated by[ isNormalized=true](https://github.com/NVIDIA/TensorRT/blob/183f891191f08fd016216fd0b94bc9c8c52d0ac2/plugin/common/kernels/allClassNMS.cu#L135-L139)). Note that `isNormalized` is an unfortunate name for the option, what it does is simply switching between these two modes. The latter implementation agrees with the [TF implementation](https://github.com/tensorflow/tensorflow/blob/508374a893df7999633d9ebbe55e94d92eef7280/tensorflow/core/kernels/image/non_max_suppression_op.cc#L123-L134), therefore the converter uses only that mode [mode of the TRT plugin](https://github.com/tensorflow/tensorflow/blob/ac4b2997b6c47e06a4f689355c08418856905594/tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc#L5911-L5918). We do have an unit test which checks that the converter works for non-normalized coordinates. \r\n\r\nTo summarize, the TF-TRT converted combinedNMS op should work with not-normalized coordinates. #40062 did not touch this. Among other things, the handling of the `clib_boxes` attribute was corrected in that PR, but not sure if that is relevant here.\r\n\r\nI will have a closer look at the problem and report back.\r\n\r\n\r\n\r\n",
"created_at": "2021-01-22T14:26:28Z"
},
{
"body": "I could reproduce the bug. The TRT engine throws a segfault when we call [enqueue](https://github.com/tensorflow/tensorflow/blob/3db793ee031f2abede180043b016c085f1d8c26b/tensorflow/compiler/tf2tensorrt/utils/trt_engine_utils.cc#L280). The problem indeed happens after the converter was changed in #40062, if I revert the changes then the segfault disappears. It still needs to be clarified whether we make an error on TF-TRT side while setting up the TRT NMS plugin parameters, or it is a bug in TRT.",
"created_at": "2021-01-27T19:39:37Z"
},
{
"body": "The issue is caused by the change in handling the `top_k` parameter for the plugin. The value that we are passing (5000) is larger than what TRT can handle (4096). There are two options to fix this:\r\n- Cap top_k to 4096. We have to check whether this leads to incompatibility between the TF and TRT results.\r\n- Mark node as incompatible, this is always safe but bad for performance.",
"created_at": "2021-02-03T19:55:57Z"
},
{
"body": "> > The TensorFlow implementation for the operation is the same regardless whether the coordinates are normalized or absolute values. On the other hand, the TensorRT implementation requires a parameter that specifies whether the coordinates are normalized or not\r\n> \r\n> TensorRT indeed has two implementation for the IOU calculation one for coordinates that should be interpreted as [pixels](https://github.com/NVIDIA/TensorRT/blob/183f891191f08fd016216fd0b94bc9c8c52d0ac2/plugin/common/kernels/allClassNMS.cu#L142-L143), and one otherwise (activated by[ isNormalized=true](https://github.com/NVIDIA/TensorRT/blob/183f891191f08fd016216fd0b94bc9c8c52d0ac2/plugin/common/kernels/allClassNMS.cu#L135-L139)). Note that `isNormalized` is an unfortunate name for the option, what it does is simply switching between these two modes. The latter implementation agrees with the [TF implementation](https://github.com/tensorflow/tensorflow/blob/508374a893df7999633d9ebbe55e94d92eef7280/tensorflow/core/kernels/image/non_max_suppression_op.cc#L123-L134), therefore the converter uses only that mode [mode of the TRT plugin](https://github.com/tensorflow/tensorflow/blob/ac4b2997b6c47e06a4f689355c08418856905594/tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc#L5911-L5918). We do have an unit test which checks that the converter works for non-normalized coordinates.\r\n> \r\n> To summarize, the TF-TRT converted combinedNMS op should work with not-normalized coordinates. #40062 did not touch this. Among other things, the handling of the `clib_boxes` attribute was corrected in that PR, but not sure if that is relevant here.\r\n> \r\n> I will have a closer look at the problem and report back.\r\n\r\n@tfeher \r\nhttps://github.com/tensorflow/tensorflow/blob/a6f927400dfa09f445faee953e5694512226eed5/tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc#L5980-L5987\r\nCan you please clarify: Since `is_normalized` explicitly being set to `true`, does this mean that the converter expects the coordinates to be normalized `[0, 1]`?. When I tried passing unnormalized coordinates, I did not get correct results.",
"created_at": "2021-02-03T21:37:42Z"
},
{
"body": "As I have described above, the combinedNMS op is expected to work with not normalized coordinates. And yes, we need to set `is_normalized = true` even if we have not normalied coordinates. TRT's naming of the argument is somewhat unfortunate. I have confirmed with a TRT engineer that the `isNormalized` arg only switches modes of IOU calculation, it does not require us to provide normalized coordinates.\r\n\r\nNote that TRT has an option `clipBoxes` with the following meaning:\r\n\r\n> Forcibly restrict bounding boxes to the normalized range [0,1]. Only applicable if isNormalized is also true. Defaults to true.\r\n\r\nIf it is set to false, then the conversion should work with unnormalized coordinates.\r\n\r\nNow the problem is, that before #40062, the `clipBoxes` arg was not set by the converter, so it defaulted to truncating the box coordinates. This is one of the things which was fixed by #40062. \r\n\r\nThe question is, which version of TF are you using when you get the incorrect results?",
"created_at": "2021-02-03T22:48:01Z"
},
{
"body": "@tfeher, I guess this explains the behaviour. I was successfully able to convert the models in `2.4.0` but the output boxes were being clipped to 1 for the converted model. \r\n> clipBoxes arg was not set by the converter",
"created_at": "2021-02-03T22:56:39Z"
},
{
"body": "@tfeher, we encountered similar crash for a google internal customer and verified that setting top_k to 4096 fix the problem. For their case, num_boxes is 70K+. \r\n\r\nI think we probably misunderstand the relationship between num_boxes and top_k in [this code block](https://github.com/tensorflow/tensorflow/blob/ed22f400428a669c1c6e4553cd7f4900abeaf954/tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc#L5999-L6002). top_k in the plugin is more like the num of top values we keep in the internal of the algorithm, in order to support the selection of keep_top_k as output, is it?\r\n\r\nI think we should fix this to something like this:\r\nif (keep_top_k > 4096) return \"tensorrt not support\" else top_k = 4096 (or top_k = keep_top_k?)\r\n\r\nThey also help me check the performance for these two ways of setting top_k:\r\n top_k = keep_top_k\r\n top_k = 4096\r\nand didn't see any perf diff for their app.\r\n",
"created_at": "2021-02-05T18:33:19Z"
},
{
"body": "See cloud_tpu is setting top_k to 5000 [here](https://github.com/tensorflow/tpu/blob/d4daff70a8a17625cb43386b2a564cb0e0e0e130/models/official/detection/ops/postprocess_ops.py#L60-L80)\r\nHere is their definition of top_k: \r\npre_nms_num_boxes: an int number of top candidate detections per class\r\n before NMS",
"created_at": "2021-02-05T18:37:21Z"
},
{
"body": "@bixia1 pre_nms_top_k is used in models like RetinaNet, EfficientDet and other similar models. And in most of the cases, the literature asks us to pick the top 5000 boxes (depending on the score), this is where the number 5000 comes from.\r\n\r\nBut what troubles me is there is no way in which the user can set this value when calling `combined_non_max_suppression`. I feel that since tensorrt plugin already has this field, it would be beneficial to expose this param in `tf.image.combined_non_max_suppression`\r\n\r\n```python\r\ntf.image.combined_non_max_suppression(\r\n boxes, scores, max_output_size_per_class, max_total_size, iou_threshold=0.5,\r\n score_threshold=float('-inf'), pad_per_class=False, clip_boxes=True,\r\n name=None\r\n)\r\n```",
"created_at": "2021-02-05T19:02:59Z"
},
{
"body": "Hi, Tama's PR is going to fix this by introducing an environment for you to overwrite the default behavior. With his PR, the default behavior won't be changed, that is, we still reject the case where top_k > 4096 as we do currently unless you explicit request to change this behavior through providing the environment variable. We need your feedback on this solution.",
"created_at": "2021-03-10T19:59:28Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46453\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46453\">No</a>\n",
"created_at": "2021-03-12T00:56:33Z"
}
],
"number": 46453,
"title": "TensorRT converter fails for CombinedNonMaxSuppression"
}
|
{
"body": "Fixes #46453\r\n\r\nThe TRT plugin has a limitation on the max value of the top_k input parameter. This PR modifies the converter to refuse conversion if top_k > 4096. \r\n\r\nIn some cases it might be desirable to do the conversion, even if the top_k<=4096 restriction would lead to loss of accuracy. The user can set TF_TRT_ALLOW_NMS_TOPK_OVERRIDE=1 environment variable can be set to opt-in for conversion in this case.\r\n",
"number": 47698,
"review_comments": [
{
"body": "Not sure if you agree, but I think making the env for user to over the top_k value is more flexible. If that is what we will do, this code will look like this:\r\n```\r\nstd::optional<int64> user_request_top_k = read from env\r\nif (user_request_top_k) {\r\n if (top_k <= user_request_top_k.value()) {\r\n issue a VLOG message saying the user request is not used because it is bigger than the natural top_k.\r\n } else {\r\n issue a VLOG to tell the \"natural top_k\" and user requested top_k values, and the use requested one is use\r\n top_k = user_requested_top_k.value();\r\n keep_top_k = min(top_k, keep_top_k)\r\n }\r\n}\r\nif (top_k > 4096)\r\n return errors::InvalidaArgument(...)\r\n```\r\n",
"created_at": "2021-03-10T16:41:12Z"
},
{
"body": "I believe what we need here is a way for the user to opt-in for a conversion in case the TRT converted op is not strictly equivalent with the TF op. While your suggestion can be used to reach the same goal, I do not see the need for flexibly overriding top_k, and therefore I would choose the current simpler solution.\r\n\r\nWhy would the user want to override top_k with a concrete value? More importantly, how would he/she decide what value to use? The TF-TRT converter does this job: it sets a value for top_k based on the input parameters. If that value is not compatible with TRT, then we either skip conversion, or not (this case only with the users consent). If we do not skip conversion, then the next best thing (in terms of correctness of the results) is to set the max value allowed by the plugin.\r\n",
"created_at": "2021-03-10T17:34:55Z"
},
{
"body": "See [this comment](https://github.com/tensorflow/tensorflow/issues/46453#issuecomment-774227477).\r\nCan this value affect performance? That is the only reason I can think of to allow users change the value.\r\n",
"created_at": "2021-03-10T17:51:56Z"
}
],
"title": "TF-TRT CombinedNMS: fix top_k parameter max value"
}
|
{
"commits": [
{
"message": "TF-TRT CombinedNMS: fix top_k parameter max value"
}
],
"files": [
{
"diff": "@@ -6151,6 +6151,20 @@ Status ConvertSquaredDifference(OpConverterParams* params) {\n }\n \n #if IS_TRT_VERSION_GE(7, 1, 3, 0)\n+\n+bool AllowNmsTopkOverride() {\n+ static bool result = [] {\n+ bool value;\n+ Status status = ReadBoolFromEnvVar(\"TF_TRT_ALLOW_NMS_TOPK_OVERRIDE\",\n+ /*default_value=*/false, &value);\n+ if (!status.ok()) {\n+ LOG(ERROR) << status;\n+ }\n+ return value;\n+ }();\n+ return result;\n+}\n+\n Status ConvertCombinedNMS(OpConverterParams* params) {\n TF_RETURN_IF_ERROR(\n CheckInputsWeights(*params, {{\"boxes\", false},\n@@ -6235,8 +6249,6 @@ Status ConvertCombinedNMS(OpConverterParams* params) {\n node_def.name());\n }\n \n- if (params->validation_only) return Status::OK();\n-\n // TRT op is_normalized=False treats input corrdinates as pixels and\n // calculates width/height as (max - min + 1).\n //\n@@ -6256,10 +6268,30 @@ Status ConvertCombinedNMS(OpConverterParams* params) {\n } else {\n keep_top_k = max_total_size;\n }\n+\n // According to the batchedNMS plugin description we need to set top_k so that\n // keep_top_k <= top_k\n // https://github.com/NVIDIA/TensorRT/tree/master/plugin/batchedNMSPlugin\n- const int top_k = std::max(num_boxes, keep_top_k);\n+ // Before the NMS step, TRT selects top_k candidate from each class and\n+ // discards the rest. The NMS step is performed only among the top_k\n+ // candidates. To be strictly compatible with the TF op, we need that top_k is\n+ // greater equal to num_boxes.\n+ int top_k = std::max(num_boxes, keep_top_k);\n+ // TRT has a limitation: top_k <=4096.\n+ if (top_k > 4096) {\n+ if (AllowNmsTopkOverride()) {\n+ top_k = 4096;\n+ keep_top_k = std::min(top_k, keep_top_k);\n+ } else {\n+ return errors::InvalidArgument(\n+ \"TRT NMS plugin allow top_k<=4096, where top_k = max(num_boxes, \"\n+ \"max_total_size). You can override this by setting \"\n+ \"TF_TRT_ALLOW_NMS_TOPK_OVERRIDE=1 environment variable, but this can \"\n+ \"result in a loss of accuracy.\");\n+ }\n+ }\n+\n+ if (params->validation_only) return Status::OK();\n float score_thresh = *(static_cast<float*>(score_threshold.GetValues()));\n const int background_id = -1;\n nvinfer1::PluginField fields[9] = {",
"filename": "tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc",
"status": "modified"
},
{
"diff": "@@ -3419,6 +3419,30 @@ TEST_P(OpConverter_FP32_Test, ConvertCombinedNMS) {\n {0, 0, 0, -1}, // exp_classes\n {3}, // exp_num_detections\n conv_status},\n+ TestParams{\"Test 5: TopK error\",\n+ {1, 5000, 1, 4}, // boxes dims\n+ {1, 5000, 1}, // scores dims\n+ {}, // boxes values:\n+ {}, // scores values\n+ 4, // max_output_size_per_class\n+ 4, // max_total_size\n+ 0.1, // IOU threshold\n+ 0, // score threshold\n+ false, // pad_per_class\n+ false, // clip_boxes\n+ {}, // expected_valid_detections_dims\n+ {}, // exp_boxes_values\n+ {}, // exp_scores\n+ {}, // exp_classes\n+ {}, // exp_num_detections\n+ conv_status.ok()\n+ ? errors::InvalidArgument(\n+ \"TRT NMS plugin allow top_k<=4096, where top_k = \"\n+ \"max(num_boxes, max_total_size). You can override \"\n+ \"this by setting TF_TRT_ALLOW_NMS_TOPK_OVERRIDE=1 \"\n+ \"environment variable, but this can result in a \"\n+ \"loss of accuracy.\")\n+ : conv_status},\n };\n \n for (auto p : params) {",
"filename": "tensorflow/compiler/tf2tensorrt/convert/convert_nodes_test.cc",
"status": "modified"
},
{
"diff": "@@ -31,6 +31,10 @@\n class CombinedNmsTest(trt_test.TfTrtIntegrationTestBase):\n \"\"\"Test for CombinedNMS op in TF-TRT.\"\"\"\n \n+ def setUp(self):\n+ super().setUp()\n+ self.num_boxes = 200\n+\n def GraphFn(self, boxes, scores):\n max_output_size_per_class = 3\n max_total_size = 3\n@@ -64,12 +68,11 @@ def GetParams(self):\n # Parameters\n q = 1\n batch_size = 2\n- num_boxes = 200\n num_classes = 2\n max_total_size = 3\n \n- boxes_shape = [batch_size, num_boxes, q, 4]\n- scores_shape = [batch_size, num_boxes, num_classes]\n+ boxes_shape = [batch_size, self.num_boxes, q, 4]\n+ scores_shape = [batch_size, self.num_boxes, num_classes]\n nmsed_boxes_shape = [batch_size, max_total_size, 4]\n nmsed_scores_shape = [batch_size, max_total_size]\n nmsed_classes_shape = [batch_size, max_total_size]\n@@ -200,5 +203,20 @@ def _get_graph_fn(x, y):\n ])\n \n \n+class CombinedNmsTopKOverride(CombinedNmsTest):\n+ def setUp(self):\n+ super().setUp()\n+ self.num_boxes = 5000\n+ os.environ['TF_TRT_ALLOW_NMS_TOPK_OVERRIDE'] = '1'\n+\n+ def tearDown(self):\n+ super().tearDown()\n+ os.environ['TF_TRT_ALLOW_NMS_TOPK_OVERRIDE'] = '0'\n+\n+ def GetMaxBatchSize(self, run_params):\n+ \"\"\"Returns the max_batch_size that the converter should use for tests.\"\"\"\n+ if run_params.dynamic_engine:\n+ return None\n+\n if __name__ == '__main__':\n test.main()",
"filename": "tensorflow/python/compiler/tensorrt/test/combined_nms_test.py",
"status": "modified"
}
]
}
|
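To show how the opt-in added by this PR would be used (a sketch, not an official recipe): the environment variable has to be set in the converting process before conversion runs. The saved-model paths below are placeholders and a TensorRT-enabled TensorFlow build is assumed.

```python
# Sketch: opting in to the top_k override before TF-TRT conversion.
# TF_TRT_ALLOW_NMS_TOPK_OVERRIDE is the variable added by this PR; paths are placeholders.
import os
os.environ['TF_TRT_ALLOW_NMS_TOPK_OVERRIDE'] = '1'  # accept possible loss of accuracy

import tensorflow as tf

params = tf.experimental.tensorrt.ConversionParams(
    precision_mode='FP32', minimum_segment_size=5)
converter = tf.experimental.tensorrt.Converter(
    input_saved_model_dir='saved_model_dir',  # placeholder input model
    conversion_params=params)
converter.convert()
converter.save('trt_saved_model')             # placeholder output directory
```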
{
"body": "The tests described in this issue are with the RI-2020.4-linux toolchain and the Fusion F1 core.\r\n\r\nBuild the keyword_benchmark with:\r\n```\r\nmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=fusion_f1 XTENSA_CORE=F1_190305_swupgrade keyword_benchmark -j8 BUILD_TYPE=release\r\n```\r\n\r\nList all the symbols:\r\n```\r\nXTENSA_CORE=F1_190305_swupgrade xt-nm --print-size --size-sort --radix=d -C tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/bin/keyword_benchmark\r\n```\r\n\r\nWe see symbols that appear to be related to exception handling:\r\n```\r\n00082100 00000304 T _Unwind_ForcedUnwind\r\n00070712 00000305 T __cxxabiv1::__vmi_class_type_info::__do_upcast(__cxxabiv1::__class_type_info const*, void const*, __cxxabiv1::__class_type_info::__upcast_result&) const\r\n00076024 00000307 T __divdf3\r\n00082796 00000310 T _Unwind_Resume_or_Rethrow\r\n00104616 00000326 t _xa_nn_dot_product_4_rows_1_vec_mat_aligned_vec_aligned\r\n00080392 00000340 t uw_update_context_1\r\n00081736 00000364 T _Unwind_RaiseException\r\n00082404 00000392 T _Unwind_Resume\r\n00081140 00000408 t uw_advance_context\r\n00080732 00000408 t uw_update_context\r\n00072160 00000413 t get_ttype_entry(lsda_header_info*, unsigned int)\r\n00089100 00000415 T _FDscalex\r\n00085964 00000444 t add_fdes\r\n00086600 00000456 t linear_search_fdes\r\n00085504 00000457 t classify_object_over_fdes\r\n00084100 00000526 t fde_single_encoding_compare\r\n00070064 00000584 T _FExp\r\n00079584 00000806 t uw_frame_state_for\r\n00078136 00001446 t execute_cfa_program\r\n00116320 00002048 b emergency_buffer_72\r\n```\r\n\r\nSome of the full command line options (to confirm that we are passing in `-fno-exceptions` when building the .cc files):\r\n```\r\nxt-clang++ -std=c++11 -fno-exceptions -fno-threadsafe-statics -fno-unwind-tables -ffunction-sections -fdata-sections -fmessage-length=0 -DTF_LITE_STATIC_MEMORY -DTF_LITE_DISABLE_X86_NEON -O3 -Werror -Wsign-compare -Wdouble-promotion -Wshadow -Wunused-variable -Wmissing-field-initializers -Wunused-function -Wswitch -Wvla -Wall -Wextra -Wstrict-aliasing -Wno-unused-parameter -DXTENSA -DXTENSA -DNDEBUG -DTF_LITE_STRIP_ERROR_STRINGS -DTF_LITE_MCU_DEBUG_LOG -DTF_LITE_USE_CTIME --xtensa-core=F1_190305_swupgrade -mcoproc -DMAX_RFFT_PWR=9 -DMIN_RFFT_PWR=MAX_RFFT_PWR -DFUSION_F1 -Wno-unused-private-field -DNNLIB_V2 -Wno-shadow -I. 
-Itensorflow/lite/micro/tools/make/downloads/gemmlowp -Itensorflow/lite/micro/tools/make/downloads/flatbuffers/include -Itensorflow/lite/micro/tools/make/downloads/ruy -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/kernels/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/include/nnlib/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/include/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/common/include/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/ndsp/hifi4/include/ -c tensorflow/lite/micro/kernels/xtensa/fully_connected.cc -o tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/obj/tensorflow/lite/micro/kernels/xtensa/fully_connected.o\r\n\r\nxt-clang -std=c11 -fno-unwind-tables -ffunction-sections -fdata-sections -fmessage-length=0 -DTF_LITE_STATIC_MEMORY -DTF_LITE_DISABLE_X86_NEON -O3 -Werror -Wsign-compare -Wdouble-promotion -Wshadow -Wunused-variable -Wmissing-field-initializers -Wunused-function -Wswitch -Wvla -Wall -Wextra -Wstrict-aliasing -Wno-unused-parameter -DXTENSA -DXTENSA -DNDEBUG -DTF_LITE_STRIP_ERROR_STRINGS -DTF_LITE_MCU_DEBUG_LOG -DTF_LITE_USE_CTIME --xtensa-core=F1_190305_swupgrade -mcoproc -DMAX_RFFT_PWR=9 -DMIN_RFFT_PWR=MAX_RFFT_PWR -DFUSION_F1 -Wno-unused-private-field -DNNLIB_V2 -Wno-shadow -I. -Itensorflow/lite/micro/tools/make/downloads/gemmlowp -Itensorflow/lite/micro/tools/make/downloads/flatbuffers/include -Itensorflow/lite/micro/tools/make/downloads/ruy -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/kernels/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/include/nnlib/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/include/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/common/include/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/ndsp/hifi4/include/ -c tensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/ndsp/hifi4/src/scl_tanhf_hifi4.c -o tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/obj/tensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/ndsp/hifi4/src/scl_tanhf_hifi4.o\r\n\r\nxt-clang++ -std=c++11 -fno-exceptions -fno-threadsafe-statics -fno-unwind-tables -ffunction-sections -fdata-sections -fmessage-length=0 -DTF_LITE_STATIC_MEMORY -DTF_LITE_DISABLE_X86_NEON -O3 -Werror -Wsign-compare -Wdouble-promotion -Wshadow -Wunused-variable -Wmissing-field-initializers -Wunused-function -Wswitch -Wvla -Wall -Wextra -Wstrict-aliasing -Wno-unused-parameter -DXTENSA -DXTENSA -DNDEBUG -DTF_LITE_STRIP_ERROR_STRINGS -DTF_LITE_MCU_DEBUG_LOG -DTF_LITE_USE_CTIME --xtensa-core=F1_190305_swupgrade -mcoproc -DMAX_RFFT_PWR=9 -DMIN_RFFT_PWR=MAX_RFFT_PWR -DFUSION_F1 -Wno-unused-private-field -DNNLIB_V2 -Wno-shadow -I. 
-Itensorflow/lite/micro/tools/make/downloads/gemmlowp -Itensorflow/lite/micro/tools/make/downloads/flatbuffers/include -Itensorflow/lite/micro/tools/make/downloads/ruy -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/kernels/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/include/nnlib/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/include/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/common/include/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/ndsp/hifi4/include/ -o tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/bin/keyword_benchmark tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/obj/tensorflow/lite/micro/benchmarks/keyword_benchmark.o tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/obj/tensorflow/lite/micro/benchmarks/keyword_scrambled_model_data.o tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/lib/libtensorflow-microlite.a -Wl,--fatal-warnings -Wl,--gc-sections -lm\r\n```\r\n\r\n\r\n",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47575\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47575\">No</a>\n",
"created_at": "2021-03-09T00:01:52Z"
}
],
"number": 47575,
"title": "Exception related symbols appear to be showing up in the final binary with the xtensa toolchain"
}
|
{
"body": "Confirmed that the following two commands build and we see different sizes with and without `XTENSA_USE_LIBC=true`.\r\n\r\n```\r\nmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=fusion_f1 XTENSA_CORE=F1_190305_swupgrade keyword_benchmark -j8 BUILD_TYPE=release XTENSA_USE_LIBC=true\r\nxt-size tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/bin/keyword_benchmark\r\n```\r\ngives:\r\n```\r\n text\t data\t bss\t dec\t hex\tfilename\r\n 84496\t 384\t 22704\t 107584\t 1a440\ttensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/bin/keyword_benchmark\r\n```\r\n\r\nAnd\r\n```\r\nmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=fusion_f1 XTENSA_CORE=F1_190305_swupgrade keyword_benchmark -j8 BUILD_TYPE=release\r\nxt-size tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/bin/keyword_benchmark\r\n```\r\n\r\ngives:\r\n```\r\n text\t data\t bss\t dec\t hex\tfilename\r\n 66696\t 40212\t 24856\t 131764\t 202b4 tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/bin/keyword_benchmark\r\n```\r\n\r\nProgress towards #47575 and http://b/182209217\r\n",
"number": 47681,
"review_comments": [],
"title": "Add command line option to select `-stdlib=libc++` with Xtensa."
}
|
{
"commits": [
{
"message": "Add command line option to select `-stdlib=libc++` with Xtensa.\n\nConfirmed that the following two commands build and we see different sizes with and without `-stdlib=libc++`.\n\n```\nmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=fusion_f1 XTENSA_CORE=F1_190305_swupgrade keyword_benchmark -j8 BUILD_TYPE=release XTENSA_USE_LIBC=true\nxt-size tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/bin/keyword_benchmark\n```\ngives:\n```\n text\t data\t bss\t dec\t hex\tfilename\n 84496\t 384\t 22704\t 107584\t 1a440\ttensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/bin/keyword_benchmark\n```\n\nAnd\n```\nmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=fusion_f1 XTENSA_CORE=F1_190305_swupgrade keyword_benchmark -j8 BUILD_TYPE=release\nxt-size tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/bin/keyword_benchmark\n```\n\ngives:\n```\n text\t data\t bss\t dec\t hex\tfilename\n 66696\t 40212\t 24856\t 131764\t 202b4\ttensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/bin/keyword_benchmark\n```\n\nProgress towards #47575 and http://b/182209217"
}
],
"files": [
{
"diff": "@@ -9,6 +9,7 @@\n # For example: hifimini\n \n TARGET_ARCH :=\n+XTENSA_USE_LIBC :=\n \n ifndef XTENSA_BASE\n $(error XTENSA_BASE is undefined)\n@@ -36,7 +37,6 @@ PLATFORM_FLAGS = \\\n -DTF_LITE_USE_CTIME \\\n --xtensa-core=$(XTENSA_CORE) \\\n -mcoproc \\\n- -stdlib=libc++ \\\n -DMAX_RFFT_PWR=9 \\\n -DMIN_RFFT_PWR=MAX_RFFT_PWR \\\n $(TARGET_ARCH_DEFINES)\n@@ -50,6 +50,21 @@ TARGET_TOOLCHAIN_PREFIX := xt-\n CXX_TOOL := clang++\n CC_TOOL := clang\n \n+# Unused exception related symbols make their way into a binary that links\n+# against TFLM as described in https://github.com/tensorflow/tensorflow/issues/47575.\n+# We have two options to avoid this. The first involves using -stdlib=libc++ and\n+# the second involves stubbing out and modifying some of the files in the Xtensa\n+# toolchain to prevent inclusion of the exception handling code\n+# (http://b/182209217#comment3). This Makefile supports building TFLM in a way\n+# that is compatible with either of the two approaches.\n+ifeq ($(XTENSA_USE_LIBC), true)\n+ PLATFORM_FLAGS += -stdlib=libc++\n+else\n+ # TODO(b/150240249): Do not filter-out -fno-rtti once that works for the\n+ # Xtensa toolchain.\n+ CXXFLAGS := $(filter-out -fno-rtti, $(CXXFLAGS))\n+endif\n+\n CXXFLAGS += $(PLATFORM_FLAGS)\n CCFLAGS += $(PLATFORM_FLAGS)\n ",
"filename": "tensorflow/lite/micro/tools/make/targets/xtensa_makefile.inc",
"status": "modified"
}
]
}
|
{
"body": "The tests described in this issue are with the RI-2020.4-linux toolchain and the Fusion F1 core.\r\n\r\nBuild the keyword_benchmark with:\r\n```\r\nmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=fusion_f1 XTENSA_CORE=F1_190305_swupgrade keyword_benchmark -j8 BUILD_TYPE=release\r\n```\r\n\r\nList all the symbols:\r\n```\r\nXTENSA_CORE=F1_190305_swupgrade xt-nm --print-size --size-sort --radix=d -C tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/bin/keyword_benchmark\r\n```\r\n\r\nWe see symbols that appear to be related to exception handling:\r\n```\r\n00082100 00000304 T _Unwind_ForcedUnwind\r\n00070712 00000305 T __cxxabiv1::__vmi_class_type_info::__do_upcast(__cxxabiv1::__class_type_info const*, void const*, __cxxabiv1::__class_type_info::__upcast_result&) const\r\n00076024 00000307 T __divdf3\r\n00082796 00000310 T _Unwind_Resume_or_Rethrow\r\n00104616 00000326 t _xa_nn_dot_product_4_rows_1_vec_mat_aligned_vec_aligned\r\n00080392 00000340 t uw_update_context_1\r\n00081736 00000364 T _Unwind_RaiseException\r\n00082404 00000392 T _Unwind_Resume\r\n00081140 00000408 t uw_advance_context\r\n00080732 00000408 t uw_update_context\r\n00072160 00000413 t get_ttype_entry(lsda_header_info*, unsigned int)\r\n00089100 00000415 T _FDscalex\r\n00085964 00000444 t add_fdes\r\n00086600 00000456 t linear_search_fdes\r\n00085504 00000457 t classify_object_over_fdes\r\n00084100 00000526 t fde_single_encoding_compare\r\n00070064 00000584 T _FExp\r\n00079584 00000806 t uw_frame_state_for\r\n00078136 00001446 t execute_cfa_program\r\n00116320 00002048 b emergency_buffer_72\r\n```\r\n\r\nSome of the full command line options (to confirm that we are passing in `-fno-exceptions` when building the .cc files):\r\n```\r\nxt-clang++ -std=c++11 -fno-exceptions -fno-threadsafe-statics -fno-unwind-tables -ffunction-sections -fdata-sections -fmessage-length=0 -DTF_LITE_STATIC_MEMORY -DTF_LITE_DISABLE_X86_NEON -O3 -Werror -Wsign-compare -Wdouble-promotion -Wshadow -Wunused-variable -Wmissing-field-initializers -Wunused-function -Wswitch -Wvla -Wall -Wextra -Wstrict-aliasing -Wno-unused-parameter -DXTENSA -DXTENSA -DNDEBUG -DTF_LITE_STRIP_ERROR_STRINGS -DTF_LITE_MCU_DEBUG_LOG -DTF_LITE_USE_CTIME --xtensa-core=F1_190305_swupgrade -mcoproc -DMAX_RFFT_PWR=9 -DMIN_RFFT_PWR=MAX_RFFT_PWR -DFUSION_F1 -Wno-unused-private-field -DNNLIB_V2 -Wno-shadow -I. 
-Itensorflow/lite/micro/tools/make/downloads/gemmlowp -Itensorflow/lite/micro/tools/make/downloads/flatbuffers/include -Itensorflow/lite/micro/tools/make/downloads/ruy -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/kernels/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/include/nnlib/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/include/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/common/include/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/ndsp/hifi4/include/ -c tensorflow/lite/micro/kernels/xtensa/fully_connected.cc -o tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/obj/tensorflow/lite/micro/kernels/xtensa/fully_connected.o\r\n\r\nxt-clang -std=c11 -fno-unwind-tables -ffunction-sections -fdata-sections -fmessage-length=0 -DTF_LITE_STATIC_MEMORY -DTF_LITE_DISABLE_X86_NEON -O3 -Werror -Wsign-compare -Wdouble-promotion -Wshadow -Wunused-variable -Wmissing-field-initializers -Wunused-function -Wswitch -Wvla -Wall -Wextra -Wstrict-aliasing -Wno-unused-parameter -DXTENSA -DXTENSA -DNDEBUG -DTF_LITE_STRIP_ERROR_STRINGS -DTF_LITE_MCU_DEBUG_LOG -DTF_LITE_USE_CTIME --xtensa-core=F1_190305_swupgrade -mcoproc -DMAX_RFFT_PWR=9 -DMIN_RFFT_PWR=MAX_RFFT_PWR -DFUSION_F1 -Wno-unused-private-field -DNNLIB_V2 -Wno-shadow -I. -Itensorflow/lite/micro/tools/make/downloads/gemmlowp -Itensorflow/lite/micro/tools/make/downloads/flatbuffers/include -Itensorflow/lite/micro/tools/make/downloads/ruy -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/kernels/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/include/nnlib/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/include/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/common/include/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/ndsp/hifi4/include/ -c tensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/ndsp/hifi4/src/scl_tanhf_hifi4.c -o tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/obj/tensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/ndsp/hifi4/src/scl_tanhf_hifi4.o\r\n\r\nxt-clang++ -std=c++11 -fno-exceptions -fno-threadsafe-statics -fno-unwind-tables -ffunction-sections -fdata-sections -fmessage-length=0 -DTF_LITE_STATIC_MEMORY -DTF_LITE_DISABLE_X86_NEON -O3 -Werror -Wsign-compare -Wdouble-promotion -Wshadow -Wunused-variable -Wmissing-field-initializers -Wunused-function -Wswitch -Wvla -Wall -Wextra -Wstrict-aliasing -Wno-unused-parameter -DXTENSA -DXTENSA -DNDEBUG -DTF_LITE_STRIP_ERROR_STRINGS -DTF_LITE_MCU_DEBUG_LOG -DTF_LITE_USE_CTIME --xtensa-core=F1_190305_swupgrade -mcoproc -DMAX_RFFT_PWR=9 -DMIN_RFFT_PWR=MAX_RFFT_PWR -DFUSION_F1 -Wno-unused-private-field -DNNLIB_V2 -Wno-shadow -I. 
-Itensorflow/lite/micro/tools/make/downloads/gemmlowp -Itensorflow/lite/micro/tools/make/downloads/flatbuffers/include -Itensorflow/lite/micro/tools/make/downloads/ruy -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/kernels/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/include/nnlib/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/include/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/common/include/ -Itensorflow/lite/micro/tools/make/downloads/xa_nnlib_hifi4/algo/ndsp/hifi4/include/ -o tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/bin/keyword_benchmark tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/obj/tensorflow/lite/micro/benchmarks/keyword_benchmark.o tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/obj/tensorflow/lite/micro/benchmarks/keyword_scrambled_model_data.o tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/lib/libtensorflow-microlite.a -Wl,--fatal-warnings -Wl,--gc-sections -lm\r\n```\r\n\r\n\r\n",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47575\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47575\">No</a>\n",
"created_at": "2021-03-09T00:01:52Z"
}
],
"number": 47575,
"title": "Exception related symbols appear to be showing up in the final binary with the xtensa toolchain"
}
|
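The issue above is essentially a how-to for spotting exception-handling overhead in a linked binary. As a purely illustrative aid (the symbol-name pattern and the input path below are assumptions, not taken from the issue), a small script in this spirit could total the bytes attributed to unwind/exception symbols in saved `xt-nm --print-size --radix=d -C` output:

```python
# Illustrative sketch only: sum the sizes of unwind/exception-related symbols
# from saved `xt-nm --print-size --radix=d -C` output. The regex of symbol
# names and the file path are assumptions for illustration.
import re
import sys

EXCEPTION_PATTERN = re.compile(r"_Unwind|__cxa|uw_frame|uw_update|lsda_header")

def exception_symbol_bytes(nm_output_path):
    total = 0
    with open(nm_output_path) as f:
        for line in f:
            parts = line.split()
            # Expected columns: address, size, type, (demangled) symbol name.
            if len(parts) >= 4 and EXCEPTION_PATTERN.search(line):
                total += int(parts[1])  # size column, decimal because of --radix=d
    return total

if __name__ == "__main__":
    print(exception_symbol_bytes(sys.argv[1]), "bytes in exception-related symbols")
```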
{
"body": "Manually confirmed with the steps outlined in #47575 that exception related symbols are no longer part of the keyword_benchmark binary when build with the Xtensa toolchain.\r\n\r\nManually tested the size with:\r\n```\r\nmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=fusion_f1 XTENSA_CORE=F1_190305_swupgrade keyword_benchmark -j8 BUILD_TYPE=release\r\nxt-size tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/bin/keyword_benchmark\r\n```\r\n\r\nWithout this change:\r\n```\r\n text\t data\t bss\t dec\t hex\tfilename\r\n 70912\t 40212\t 24856\t 135980\t 2132c\ttensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/bin/keyword_benchmark\r\n```\r\n\r\nWith this change:\r\n```\r\n text\t data\t bss\t dec\t hex\tfilename\r\n 88712\t 384\t 22704\t 111800\t 1b4b8\ttensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/bin/keyword_benchmark\r\n```\r\n\r\nWhile what goes in the text and data sections has changed, the overall binary size is reduced by ~24KB.\r\n\r\nAlso confirmed that the cycles for the keyword benchmark are unaffected:\r\n```\r\nmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=fusion_f1 XTENSA_CORE=F1_190305_swupgrade run_keyword_benchmark -j8\r\n```\r\n\r\ngives:\r\n```\r\nInitializeKeywordRunner took 159001 ticks (159 ms).\r\n\r\nKeywordRunNIerations(1) took 34253 ticks (34 ms)\r\nQUANTIZE took 800 ticks (0 ms).\r\nSVDF took 4753 ticks (4 ms).\r\nFULLY_CONNECTED took 1353 ticks (1 ms).\r\nSVDF took 4211 ticks (4 ms).\r\nFULLY_CONNECTED took 1353 ticks (1 ms).\r\nSVDF took 3145 ticks (3 ms).\r\nFULLY_CONNECTED took 1353 ticks (1 ms).\r\nSVDF took 4211 ticks (4 ms).\r\nFULLY_CONNECTED took 1353 ticks (1 ms).\r\nSVDF took 2890 ticks (2 ms).\r\nSVDF took 3583 ticks (3 ms).\r\nSVDF took 3054 ticks (3 ms).\r\nFULLY_CONNECTED took 1091 ticks (1 ms).\r\nSOFTMAX took 749 ticks (0 ms).\r\nQUANTIZE took 354 ticks (0 ms).\r\n\r\nKeywordRunNIerations(10) took 342530 ticks (342 ms)\r\n```\r\n\r\nAnd all the unit tests pass:\r\n```\r\nmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=fusion_f1 XTENSA_CORE=F1_190305_swupgrade test -j8\r\n```\r\n\r\nFixes #47575\r\n\r\nWith this change, we no longer need to remove `-fno-rtti` for Xtensa and http://b/150240249 is also fixed.\r\n",
"number": 47653,
"review_comments": [],
"title": "Use -stdlib=libc++ and -fno-rtti with Xtensa."
}
|
{
"commits": [
{
"message": "Use -stdlib=libc++ and -fno-rtti with Xtensa.\n\nManually confirmed with the steps outlined in #47575 that exception\nrelated symbols are no longer part of the keyword_benchmark binary when\nbuild with the Xtensa toolchain.\n\nManually tested the size with:\n```\nmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=fusion_f1 XTENSA_CORE=F1_190305_swupgrade keyword_benchmark -j8 BUILD_TYPE=release\nxt-size tensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/bin/keyword_benchmark\n```\n\nWithout this change:\n```\n text\t data\t bss\t dec\t hex\tfilename\n 70912\t 40212\t 24856\t 135980\t 2132c\ttensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/bin/keyword_benchmark\n```\n\nWith this change:\n```\n text\t data\t bss\t dec\t hex\tfilename\n 88712\t 384\t 22704\t 111800\t 1b4b8\ttensorflow/lite/micro/tools/make/gen/xtensa_fusion_f1_release/bin/keyword_benchmark\n```\n\nWhile what goes in the text and data sections has changed, the overall\nbinary size is reduced by ~24KB.\n\nAlso confirmed that the cycles for the keyword benchmark are unaffected:\n```\nmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=fusion_f1 XTENSA_CORE=F1_190305_swupgrade run_keyword_benchmark -j8\n```\n\ngives:\n```\nInitializeKeywordRunner took 159001 ticks (159 ms).\n\nKeywordRunNIerations(1) took 34253 ticks (34 ms)\nQUANTIZE took 800 ticks (0 ms).\nSVDF took 4753 ticks (4 ms).\nFULLY_CONNECTED took 1353 ticks (1 ms).\nSVDF took 4211 ticks (4 ms).\nFULLY_CONNECTED took 1353 ticks (1 ms).\nSVDF took 3145 ticks (3 ms).\nFULLY_CONNECTED took 1353 ticks (1 ms).\nSVDF took 4211 ticks (4 ms).\nFULLY_CONNECTED took 1353 ticks (1 ms).\nSVDF took 2890 ticks (2 ms).\nSVDF took 3583 ticks (3 ms).\nSVDF took 3054 ticks (3 ms).\nFULLY_CONNECTED took 1091 ticks (1 ms).\nSOFTMAX took 749 ticks (0 ms).\nQUANTIZE took 354 ticks (0 ms).\n\nKeywordRunNIerations(10) took 342530 ticks (342 ms)\n```\n\nAnd all the unit tests pass:\n```\nmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=fusion_f1 XTENSA_CORE=F1_190305_swupgrade test -j8\n```\n\nFixes #47575\n\nWith this change, we no longer need to remove `-fno-rtti` for Xtensa and\nhttp://b/150240249 is also fixed."
}
],
"files": [
{
"diff": "@@ -36,6 +36,7 @@ PLATFORM_FLAGS = \\\n -DTF_LITE_USE_CTIME \\\n --xtensa-core=$(XTENSA_CORE) \\\n -mcoproc \\\n+ -stdlib=libc++ \\\n -DMAX_RFFT_PWR=9 \\\n -DMIN_RFFT_PWR=MAX_RFFT_PWR \\\n $(TARGET_ARCH_DEFINES)\n@@ -52,9 +53,6 @@ CC_TOOL := clang\n CXXFLAGS += $(PLATFORM_FLAGS)\n CCFLAGS += $(PLATFORM_FLAGS)\n \n-# TODO(b/150240249): Do not remove -fno-rtti once that works for the Xtensa toolchain.\n-CXXFLAGS := $(filter-out -fno-rtti, $(CXXFLAGS))\n-\n TEST_SCRIPT := tensorflow/lite/micro/testing/test_xtensa_binary.sh\n \n # TODO(b/158651472): Fix the memory_arena_threshold_test",
"filename": "tensorflow/lite/micro/tools/make/targets/xtensa_makefile.inc",
"status": "modified"
}
]
}
|
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator ELU from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro making minimal changes and not including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test\r\nPR 6: Extract common activation code into activations.cc and activation_utils.h files. Extract common test code into activation_test_utils.h file.\r\n",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46323\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46323\">No</a>\n",
"created_at": "2021-02-24T02:01:04Z"
}
],
"number": 46323,
"title": "micro: port op ELU from lite"
}
|
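For orientation while reading the ELU port tracked above, here is a minimal NumPy sketch of the activation the kernel computes. The default `alpha=1.0` is an illustrative choice and is not stated in the issue.

```python
# Minimal NumPy sketch of ELU for reference; alpha=1.0 is an illustrative
# choice, not a value taken from the issue above.
import numpy as np

def elu(x, alpha=1.0):
    x = np.asarray(x, dtype=np.float32)
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

print(elu([-2.0, -0.5, 0.0, 1.5]))  # negative inputs saturate toward -alpha
```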
{
"body": "Additional fix for Issue #46323",
"number": 47614,
"review_comments": [],
"title": "micro: Add ELU to AllOpsResolver"
}
|
{
"commits": [
{
"message": "Add ELU to AllOpsResolver\n\nAdditional fix for Issue #46323"
}
],
"files": [
{
"diff": "@@ -33,6 +33,7 @@ AllOpsResolver::AllOpsResolver() {\n AddDepthwiseConv2D();\n AddDequantize();\n AddDetectionPostprocess();\n+ AddElu();\n AddEqual();\n AddEthosU();\n AddFloor();",
"filename": "tensorflow/lite/micro/all_ops_resolver.cc",
"status": "modified"
}
]
}
|
{
"body": "**System information**\r\n- Have I written custom code: Yes\r\n- OS Platform and Distribution: Ubuntu 18.04\r\n- TensorFlow installed from: binary\r\n- TensorFlow version: 2.4.0\r\n- Python version: 3.6.9\r\n\r\n**Describe the current behavior**\r\n\r\n`tf.keras.layers.LayerNormalization` crashes when the input is empty and the layer is executed on CPU.\r\n\r\n**Describe the expected behavior**\r\n\r\nThe layer should not crash but return a tensor with the same shape.\r\n\r\n**Standalone code to reproduce the issue**\r\n\r\n```python\r\nimport os\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"\"\r\nimport tensorflow as tf\r\nlayer = tf.keras.layers.LayerNormalization()\r\nlayer(tf.zeros([1, 0, 10]))\r\n```\r\n\r\n**Other info / logs**\r\n\r\nThe code above exits with this error:\r\n\r\n```text\r\nFloating point exception (core dumped)\r\n```\r\n",
"comments": [
{
"body": "Was able to reproduce the issue with TF v2.3, TF v2.4 and TF-nightly. Colab session crashes on running the code, please find the gist of it [here](https://colab.research.google.com/gist/amahendrakar/a17885e619f64587f0946ef115003ff9/46366.ipynb#scrollTo=L8qDirj_Zx4v). Thanks!",
"created_at": "2021-01-13T11:03:04Z"
},
{
"body": "@guillaumekln , @amahendrakar , @jvishnuvardhan a tensor of shape [1,0,10] would return a tensor\r\n `tf.Tensor([], shape=(1, 0, 10), dtype=float32) TensorShape([1, 0, 10])` , Any tensor of value [] would cause the runtime to crash. I will send a pull request to raise a valid error regarding the faulty input tensor.",
"created_at": "2021-02-11T10:14:39Z"
},
{
"body": "Can we close this?",
"created_at": "2021-04-16T13:26:49Z"
},
{
"body": "The crash is not fixed. I believe the `FusedBatchNorm` CPU kernel should check if the input is empty. I tried to fix the issue but eventually moved to something else.",
"created_at": "2021-04-16T13:35:00Z"
},
{
"body": "The PRs at the python level was rejected. Do you think that it will be accepted at cpp level?",
"created_at": "2021-04-16T13:51:05Z"
},
{
"body": "The GPU kernel [is checking for empty inputs](https://github.com/tensorflow/tensorflow/blob/v2.5.0-rc1/tensorflow/core/kernels/fused_batch_norm_op.cc#L818), but not the CPU kernel. So I think a code change would make sense here.",
"created_at": "2021-04-16T13:55:27Z"
},
{
"body": "@nikitamaia I think that we could remove the Keras label here as this is a c++ contribution.",
"created_at": "2021-04-16T13:59:33Z"
},
{
"body": "Yes, the issue is more specifically related to `tf.compat.v1.nn.fused_batch_norm` that is called by `tf.keras.layers.LayerNormalization`.",
"created_at": "2021-04-16T14:06:59Z"
},
{
"body": "Still an issue in TF 2.6 Nightly as well.Thanks!",
"created_at": "2021-05-28T16:52:38Z"
},
{
"body": "I believe the issue has been resolved by commit 4b4bc60. Also verified with latest tf-nightly:\r\n```\r\n# python3\r\nPython 3.8.10 (default, Jun 2 2021, 10:49:15) \r\n[GCC 9.4.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import os\r\n>>> os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"\"\r\n>>> import tensorflow as tf\r\n2021-09-02 15:23:41.438425: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n2021-09-02 15:23:41.438474: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n>>> layer = tf.keras.layers.LayerNormalization()\r\n>>> layer(tf.zeros([1, 0, 10]))\r\n2021-09-02 15:23:42.905740: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory\r\n2021-09-02 15:23:42.905782: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)\r\n2021-09-02 15:23:42.905806: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (ip-172-31-87-192): /proc/driver/nvidia/version does not exist\r\n2021-09-02 15:23:42.906125: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\r\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n<tf.Tensor: shape=(1, 0, 10), dtype=float32, numpy=array([], shape=(1, 0, 10), dtype=float32)>\r\n```\r\n\r\nI will close this issue for now, but feel free to re-open if the issue persists.",
"created_at": "2021-09-02T15:25:04Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46366\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46366\">No</a>\n",
"created_at": "2021-09-02T15:25:06Z"
},
{
"body": "I also observed the following API aliases can cause the same issue in older versions of tensorflow.\r\nUsers should be cautious when using them on CPU up to tensorflow 2.5.1 (v2.5.0-160-g8222c1cfc86).\r\n\r\n- `(tf.keras.layers.LayerNormalization)`, `tf.compat.v1.keras.layers.LayerNormalization`\r\n\r\n<details>\r\n <summary>Code to reproduce the issue in <code>tf.compat.v1.keras.layers.LayerNormalization</code> in older versions</summary>\r\n\r\n```python\r\nimport tensorflow as tf\r\nprint(tf.version.GIT_VERSION, tf.version.VERSION, flush=True)\r\nprint(tf.config.list_physical_devices(), flush=True)\r\n\r\n\r\ntry:\r\n layer = tf.compat.v1.keras.layers.LayerNormalization()\r\n layer(tf.zeros([1, 0, 10]))\r\nexcept Exception as e:\r\n print(\"Error:\", str(e), flush=True)\r\nprint(\"Success!\", flush=True)\r\n```\r\n\r\nOn my CPU machine, the process aborts with a Floating point exception(core dumped), which is not expected.\r\n\r\n```text\r\nv2.5.0-160-g8222c1cfc86 2.5.1\r\n[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')]\r\nFloating point exception(core dumped)\r\n```\r\n</details>\r\n\r\nIt seems to be fixed in tensorflow 2.6.0 (v2.6.0-rc2-32-g919f693420e) and later versions.\r\n",
"created_at": "2023-09-21T10:47:33Z"
}
],
"number": 46366,
"title": "LayerNormalization crashes on empty inputs when run on CPU"
}
|
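Until running on a version with the kernel-level fix mentioned in the comments above, a user-level guard is one way to avoid the crash. The sketch below assumes that returning the empty tensor unchanged is the desired behavior; it is not the fix that was merged.

```python
# Sketch of a user-level guard for TF versions whose CPU kernel crashes on
# empty inputs. Returning the input unchanged is an assumption about the
# desired behavior; this is not the merged fix.
import tensorflow as tf

def safe_layer_norm(layer, inputs):
    if tf.size(inputs) == 0:  # nothing to normalize
        return inputs
    return layer(inputs)

layer = tf.keras.layers.LayerNormalization()
out = safe_layer_norm(layer, tf.zeros([1, 0, 10]))
print(out.shape)  # (1, 0, 10)
```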
{
"body": "Fix #46366 . `LayerNormalization` crashes on getting an empty input while `BatchNormalization` returns back the empty input. This PR will help both the normlaization layers to raise an error upon encountering an empty input.",
"number": 47604,
"review_comments": [],
"title": "Fix normalization layers"
}
|
{
"commits": [
{
"message": "Update normalization.py"
},
{
"message": "Update normalization.py"
}
],
"files": [
{
"diff": "@@ -750,7 +750,9 @@ def _get_training_value(self, training=None):\n \n def call(self, inputs, training=None):\n training = self._get_training_value(training)\n-\n+ if 0 in inputs.shape:\n+ raise ValueError(\"Input shape cannot have a 0 dimension but got a shape {}\".format(inputs.shape))\n+ \n if self.virtual_batch_size is not None:\n # Virtual batches (aka ghost batches) can be simulated by reshaping the\n # Tensor and reusing the existing batch norm implementation\n@@ -1217,6 +1219,8 @@ def build(self, input_shape):\n def call(self, inputs):\n # Compute the axes along which to reduce the mean / variance\n input_shape = inputs.shape\n+ if 0 in input_shape:\n+ raise ValueError(\"Input shape cannot have a 0 dimension but got a shape {}\".format(input_shape))\n ndims = len(input_shape)\n \n # Broadcasting only necessary for norm when the axis is not just",
"filename": "tensorflow/python/keras/layers/normalization.py",
"status": "modified"
}
]
}
|
{
"body": "I suspect these two code blocks are doing very similar tasks and can potentially be merged: \r\n\r\n- [Code block 1](https://cs.opensource.google/tensorflow/tensorflow/+/master:tensorflow/core/kernels/image/resize_bilinear_op.cc;l=254;drc=9e274c0b2ff75f64a97c9aec57aa59b030c5a01b;bpv=1;bpt=0)\r\n- [Code block 2](https://cs.opensource.google/tensorflow/tensorflow/+/master:tensorflow/core/kernels/image/resize_bilinear_op.cc;l=265-283;drc=9e274c0b2ff75f64a97c9aec57aa59b030c5a01b;bpv=1;bpt=0)\r\n\r\nThe only difference which I can notice is that the code block 1 **might** benefit from sequential cache read. But I think the compiler should assist to achieve the same efficiency for the code block 2. If so, we should merge these two blocks to eliminate duplicate code. ",
"comments": [
{
"body": "I can help to fix if my judgement is correct. That would be my first code contribution to TF :) ",
"created_at": "2021-02-28T20:02:02Z"
},
{
"body": "@ymodak I sent in a pull request which must solve this.",
"created_at": "2021-03-04T18:06:51Z"
},
{
"body": "I expected that I could become the assignee and fixed it. Why not give a newbie an opportunity? So sad. ",
"created_at": "2021-03-04T18:30:04Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47464\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47464\">No</a>\n",
"created_at": "2021-03-05T17:27:49Z"
},
{
"body": "@CyangXu Thanks for your issue. Assignee's generally facilitate the PR merging process, you may want to raise PR as an author to fix issue/add new feature. I am sorry you missed this opportunity however feel free to raise PR to address any other issue of your choice and we can help merge it. Contributions from the community are highly encouraged and welcomed. Thanks!",
"created_at": "2021-03-06T01:30:07Z"
},
{
"body": "@ymodak Thank you for explanation!",
"created_at": "2021-03-14T00:50:23Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\r\n[Yes](https://goo.gl/forms/Oe0tEvODFRoI2gJF3)\r\n[No](https://goo.gl/forms/fUjzOfrtkFbrOT8d2)",
"created_at": "2021-03-14T05:29:59Z"
}
],
"number": 47464,
"title": "Suspected duplicate code in resize_bilinear_op.cc"
}
|
{
"body": "Fix #47464 . It removes the redundant code in `resize_bilinear_op.cc`",
"number": 47567,
"review_comments": [],
"title": "Update resize_bilinear_op.cc"
}
|
{
"commits": [
{
"message": "Update resize_bilinear_op.cc"
}
],
"files": [
{
"diff": "@@ -132,41 +132,28 @@ inline __m128 compute_lerp_v(const __m128 top_left, const __m128 top_right,\n #endif\n \n template <typename T>\n-void ResizeLine3Channels(const T* const ys_input_lower_ptr,\n+void ResizeLineChannels(const T* const ys_input_lower_ptr,\n const T* const ys_input_upper_ptr,\n const CachedInterpolation* const xs,\n const float ys_lerp, const int64 out_width,\n- float* out_y) {\n+ float* out_y,\n+ const int channels) {\n for (int64 x = 0; x < out_width; ++x) {\n const int64 xs_lower = xs[x].lower;\n const int64 xs_upper = xs[x].upper;\n const float xs_lerp = xs[x].lerp;\n \n- // Read channel 0.\n- const float top_left0(ys_input_lower_ptr[xs_lower + 0]);\n- const float top_right0(ys_input_lower_ptr[xs_upper + 0]);\n- const float bottom_left0(ys_input_upper_ptr[xs_lower + 0]);\n- const float bottom_right0(ys_input_upper_ptr[xs_upper + 0]);\n-\n- // Read channel 1.\n- const float top_left1(ys_input_lower_ptr[xs_lower + 1]);\n- const float top_right1(ys_input_lower_ptr[xs_upper + 1]);\n- const float bottom_left1(ys_input_upper_ptr[xs_lower + 1]);\n- const float bottom_right1(ys_input_upper_ptr[xs_upper + 1]);\n-\n- // Read channel 2.\n- const float top_left2(ys_input_lower_ptr[xs_lower + 2]);\n- const float top_right2(ys_input_lower_ptr[xs_upper + 2]);\n- const float bottom_left2(ys_input_upper_ptr[xs_lower + 2]);\n- const float bottom_right2(ys_input_upper_ptr[xs_upper + 2]);\n-\n- // Compute output.\n- out_y[x * 3 + 0] = compute_lerp(top_left0, top_right0, bottom_left0,\n- bottom_right0, xs_lerp, ys_lerp);\n- out_y[x * 3 + 1] = compute_lerp(top_left1, top_right1, bottom_left1,\n- bottom_right1, xs_lerp, ys_lerp);\n- out_y[x * 3 + 2] = compute_lerp(top_left2, top_right2, bottom_left2,\n- bottom_right2, xs_lerp, ys_lerp);\n+ for (int c = 0; c < channels; ++c){\n+ const float top_left(ys_input_lower_ptr[xs_lower + c]);\n+ const float top_right(ys_input_lower_ptr[xs_upper + c]);\n+ const float bottom_left(ys_input_upper_ptr[xs_lower + c]);\n+ const float bottom_right(ys_input_upper_ptr[xs_upper + c]);\n+\n+ out_y[x * channels + c] =\n+ compute_lerp(top_left, top_right, bottom_left, bottom_right,\n+ xs_lerp, ys_lerp);\n+ }\n+\n }\n }\n \n@@ -212,9 +199,9 @@ void ResizeLine3ChannelsVector(const T* const ys_input_lower_ptr,\n }\n // The last pixel of each row must be done in a non-vectorized way\n // because we cannot overflow.\n- ResizeLine3Channels(ys_input_lower_ptr, ys_input_upper_ptr,\n+ ResizeLineChannels(ys_input_lower_ptr, ys_input_upper_ptr,\n xs + out_width - 1, ys_lerp, 1,\n- out_y + (out_width - 1) * 3);\n+ out_y + (out_width - 1) * 3, 3);\n }\n #endif\n \n@@ -251,8 +238,8 @@ void resize_image(typename TTypes<T, 4>::ConstTensor images,\n ResizeLine3ChannelsVector(ys_input_lower_ptr, ys_input_upper_ptr, xs,\n ys[y].lerp, out_width, output_y_ptr);\n #else\n- ResizeLine3Channels(ys_input_lower_ptr, ys_input_upper_ptr, xs,\n- ys[y].lerp, out_width, output_y_ptr);\n+ ResizeLineChannels(ys_input_lower_ptr, ys_input_upper_ptr, xs,\n+ ys[y].lerp, out_width, output_y_ptr, 3);\n #endif\n output_y_ptr += out_row_size;\n }\n@@ -264,21 +251,10 @@ void resize_image(typename TTypes<T, 4>::ConstTensor images,\n for (int64 y = 0; y < out_height; ++y) {\n const T* ys_input_lower_ptr = input_b_ptr + ys[y].lower * in_row_size;\n const T* ys_input_upper_ptr = input_b_ptr + ys[y].upper * in_row_size;\n- const float ys_lerp = ys[y].lerp;\n- for (int64 x = 0; x < out_width; ++x) {\n- auto xs_lower = xs[x].lower;\n- auto xs_upper = xs[x].upper;\n- auto xs_lerp = xs[x].lerp;\n- for (int 
c = 0; c < channels; ++c) {\n- const float top_left(ys_input_lower_ptr[xs_lower + c]);\n- const float top_right(ys_input_lower_ptr[xs_upper + c]);\n- const float bottom_left(ys_input_upper_ptr[xs_lower + c]);\n- const float bottom_right(ys_input_upper_ptr[xs_upper + c]);\n- output_y_ptr[x * channels + c] =\n- compute_lerp(top_left, top_right, bottom_left, bottom_right,\n- xs_lerp, ys_lerp);\n- }\n- }\n+ \n+ ResizeLineChannels(ys_input_lower_ptr, ys_input_upper_ptr, xs,\n+ ys[y].lerp, out_width, output_y_ptr, channels);\n+ \n output_y_ptr += out_row_size;\n }\n input_b_ptr += in_batch_num_values;",
"filename": "tensorflow/core/kernels/image/resize_bilinear_op.cc",
"status": "modified"
}
]
}
|
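For readers of the diff above, the computation that both duplicated blocks performed (and that the merged `ResizeLineChannels` now expresses once) is a per-channel bilinear interpolation. The NumPy sketch below mirrors that logic for illustration; it is not the TensorFlow C++ kernel.

```python
# Illustrative NumPy sketch (not the TF C++ kernel): the per-channel bilinear
# lerp that both duplicated code blocks computed, written once.
import numpy as np

def compute_lerp(top_left, top_right, bottom_left, bottom_right, x_lerp, y_lerp):
    top = top_left + (top_right - top_left) * x_lerp
    bottom = bottom_left + (bottom_right - bottom_left) * x_lerp
    return top + (bottom - top) * y_lerp

def resize_line_channels(lower_row, upper_row, xs_lower, xs_upper, xs_lerp, y_lerp):
    # lower_row/upper_row: (in_width, channels) arrays for the two source rows.
    out = np.empty((len(xs_lower), lower_row.shape[1]), dtype=np.float32)
    for x, (lo, hi, xl) in enumerate(zip(xs_lower, xs_upper, xs_lerp)):
        out[x] = compute_lerp(lower_row[lo], lower_row[hi],
                              upper_row[lo], upper_row[hi], xl, y_lerp)
    return out
```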
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator LOG_SOFTMAX from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro making minimal changes and not including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test\r\n",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47291\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47291\">No</a>\n",
"created_at": "2021-06-02T16:07:00Z"
}
],
"number": 47291,
"title": "micro: port op LOG_SOFTMAX from lite"
}
|
{
"body": "Move the reference implementation to its own header so that micro\r\ncan use it without the unrelated depedencies of reference_ops.h.\r\n\r\nPR step 2 for issue #47291",
"number": 47482,
"review_comments": [
{
"body": "How about creating this change separately? It would be better to split this big change into two, one for refactoring and one for quantization addition in order to make the overall code review procedure easier and faster.",
"created_at": "2021-03-01T21:12:26Z"
},
{
"body": "After discussion with Pete Warden, we have decided it is OK to continue with the PRs as currently submitted.",
"created_at": "2021-04-07T21:07:38Z"
}
],
"title": "micro: LOG_SOFTMAX PR2"
}
|
{
"commits": [
{
"message": "Move the reference implementation to its own header so that micro\ncan use it without the unrelated depedencies of reference_ops.h.\n\nPR step 2 for issue #47291"
},
{
"message": "Merge branch 'master' into LogSoftMax-pr2"
},
{
"message": "Fix CI build failure"
},
{
"message": "Merge branch 'master' into LogSoftMax-pr2"
}
],
"files": [
{
"diff": "@@ -497,6 +497,7 @@ cc_library(\n \"reference/l2normalization.h\",\n \"reference/leaky_relu.h\",\n \"reference/logistic.h\",\n+ \"reference/log_softmax.h\",\n \"reference/maximum_minimum.h\",\n \"reference/mul.h\",\n \"reference/neg.h\",\n@@ -595,6 +596,7 @@ cc_library(\n \"reference/l2normalization.h\",\n \"reference/leaky_relu.h\",\n \"reference/legacy_reference_ops.h\",\n+ \"reference/log_softmax.h\",\n \"reference/logistic.h\",\n \"reference/maximum_minimum.h\",\n \"reference/mul.h\",",
"filename": "tensorflow/lite/kernels/internal/BUILD",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,258 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#ifndef TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_LOG_SOFTMAX_H_\n+#define TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_LOG_SOFTMAX_H_\n+\n+#include <algorithm>\n+#include <limits>\n+\n+#include \"tensorflow/lite/kernels/internal/common.h\"\n+\n+namespace tflite {\n+namespace reference_ops {\n+\n+inline void LogSoftmax(const SoftmaxParams& params,\n+ const RuntimeShape& input_shape, const float* input_data,\n+ const RuntimeShape& output_shape, float* output_data) {\n+ const int trailing_dim = input_shape.DimensionsCount() - 1;\n+ const int outer_size =\n+ MatchingFlatSizeSkipDim(input_shape, trailing_dim, output_shape);\n+ const int depth =\n+ MatchingDim(input_shape, trailing_dim, output_shape, trailing_dim);\n+\n+ for (int i = 0; i < outer_size; ++i) {\n+ // Find max element value which we'll use to ensure numerical stability\n+ // taking advantage of the following equality:\n+ // log(exp(x[i])/sum(exp(x[i]))) == log(exp(x[i]+C)/sum(exp(x[i]+C)))\n+ float max = std::numeric_limits<float>::lowest();\n+ for (int c = 0; c < depth; ++c) {\n+ max = std::max(max, input_data[i * depth + c]);\n+ }\n+\n+ // Compute sum.\n+ float sum = 0.f;\n+ for (int c = 0; c < depth; ++c) {\n+ sum += std::exp(input_data[i * depth + c] - max);\n+ }\n+\n+ // Compute result.\n+ const float log_sum = std::log(sum);\n+ for (int c = 0; c < depth; ++c) {\n+ output_data[i * depth + c] = input_data[i * depth + c] - max - log_sum;\n+ }\n+ }\n+}\n+\n+inline void LogSoftmax(const SoftmaxParams& params,\n+ const RuntimeShape& input_shape, const uint8* input_data,\n+ const RuntimeShape& output_shape, uint8* output_data) {\n+ const int32 input_multiplier = params.input_multiplier;\n+ const int32 input_left_shift = params.input_left_shift;\n+ const int32 reverse_scaling_divisor = params.reverse_scaling_divisor;\n+ const int32 reverse_scaling_right_shift = params.reverse_scaling_right_shift;\n+ const int diff_min = params.diff_min;\n+ // The representation chosen for the input to the exp() function is Q5.26.\n+ // We need to leave extra space since values that we skip might be as large\n+ // as -32 before multiplying by input_beta_multiplier, and therefore as\n+ // large as -16 afterwards. 
Note that exp(-8) is definitely not\n+ // insignificant to accumulation, but exp(-16) definitely is.\n+ static constexpr int kScaledDiffIntegerBits = 5;\n+ static constexpr int kAccumulationIntegerBits = 12;\n+ static constexpr int kOutputIntegerBits = 4;\n+ using FixedPointScaledDiff =\n+ gemmlowp::FixedPoint<int32, kScaledDiffIntegerBits>;\n+ using FixedPointAccum = gemmlowp::FixedPoint<int32, kAccumulationIntegerBits>;\n+\n+ const int trailing_dim = input_shape.DimensionsCount() - 1;\n+ const int outer_size =\n+ MatchingFlatSizeSkipDim(input_shape, trailing_dim, output_shape);\n+ const int depth =\n+ MatchingDim(input_shape, trailing_dim, output_shape, trailing_dim);\n+\n+ for (int i = 0; i < outer_size; ++i) {\n+ uint8 max_in_row = 0;\n+ for (int c = 0; c < depth; ++c) {\n+ max_in_row = std::max(max_in_row, input_data[i * depth + c]);\n+ }\n+\n+ FixedPointAccum sum_of_exps = FixedPointAccum::Zero();\n+ for (int c = 0; c < depth; ++c) {\n+ int32 input_diff =\n+ static_cast<int32>(input_data[i * depth + c]) - max_in_row;\n+ if (input_diff >= diff_min) {\n+ const int32 input_diff_rescaled =\n+ MultiplyByQuantizedMultiplierGreaterThanOne(\n+ input_diff, input_multiplier, input_left_shift);\n+ const FixedPointScaledDiff scaled_diff_f8 =\n+ FixedPointScaledDiff::FromRaw(input_diff_rescaled);\n+ sum_of_exps = sum_of_exps + gemmlowp::Rescale<kAccumulationIntegerBits>(\n+ exp_on_negative_values(scaled_diff_f8));\n+ }\n+ }\n+\n+ const int32 fixed_log_sum_of_exps =\n+ log_x_for_x_greater_than_or_equal_to_1<kScaledDiffIntegerBits>(\n+ sum_of_exps)\n+ .raw();\n+\n+ // rescaled_diff_min is smallest representable in\n+ // Q(kScaledDiffIntegerBits).(31-kScaledDiffIntegerBits) plus the\n+ // log-sub-exps that will be subtracted in the loop.\n+ //\n+ // The thresholds diff_min, etc are negative.\n+ const int rescaled_diff_min =\n+ fixed_log_sum_of_exps + std::numeric_limits<int32>::lowest();\n+ const int adjusted_diff_min =\n+ std::max(diff_min - 1, // Note use of > below instead of >= above.\n+ MultiplyByQuantizedMultiplierSmallerThanOneExp(\n+ rescaled_diff_min, reverse_scaling_divisor,\n+ -reverse_scaling_right_shift));\n+\n+ for (int c = 0; c < depth; ++c) {\n+ int32 input_diff =\n+ static_cast<int32>(input_data[i * depth + c]) - max_in_row;\n+ if (input_diff > adjusted_diff_min) {\n+ const int32 input_diff_rescaled =\n+ MultiplyByQuantizedMultiplierGreaterThanOne(\n+ input_diff, input_multiplier, input_left_shift);\n+ int32 unsat_output =\n+ gemmlowp::RoundingDivideByPOT(\n+ (input_diff_rescaled - fixed_log_sum_of_exps),\n+ 31 - kScaledDiffIntegerBits - kOutputIntegerBits) +\n+ 255;\n+\n+ output_data[i * depth + c] = static_cast<uint8>(\n+ std::max(std::min(unsat_output, static_cast<int32>(255)), 0));\n+ } else {\n+ // Set output to smallest value.\n+ output_data[i * depth + c] = 0;\n+ }\n+ }\n+ }\n+}\n+\n+template <typename T>\n+inline void LogSoftmaxQuantized(const SoftmaxParams& params,\n+ const RuntimeShape& input_shape,\n+ const T* input_data,\n+ const RuntimeShape& output_shape,\n+ T* output_data) {\n+ const int32_t input_multiplier = params.input_multiplier;\n+ const int32_t input_left_shift = params.input_left_shift;\n+ const int32_t reverse_scaling_divisor = params.reverse_scaling_divisor;\n+ const int32_t reverse_scaling_right_shift =\n+ params.reverse_scaling_right_shift;\n+ const int diff_min = params.diff_min;\n+\n+ static constexpr T kMinT8 = std::numeric_limits<T>::min();\n+ static constexpr T kMaxT8 = std::numeric_limits<T>::max();\n+ static constexpr int32_t kMinInt32 = 
std::numeric_limits<int32_t>::min();\n+\n+ // zero-point is set by Prepare function.\n+ // value 127 for int8_t\n+ // value 255 for uint8_t\n+\n+ // All IntegerBits must agree with Prepare function.\n+ // Input is chosen as Q5.26 so exp(-1 * 2^5 * 2^-1) = exp(-16) is negligible.\n+ static constexpr int kInputIntegerBits = 5;\n+ static constexpr int kAccumulationIntegerBits = 12;\n+ static constexpr int kOutputIntegerBits = 4;\n+ using F5 = gemmlowp::FixedPoint<int32, kInputIntegerBits>;\n+ using F12 = gemmlowp::FixedPoint<int32, kAccumulationIntegerBits>;\n+\n+ const int trailing_dim = input_shape.DimensionsCount() - 1;\n+ const int outer_size =\n+ MatchingFlatSizeSkipDim(input_shape, trailing_dim, output_shape);\n+ const int depth =\n+ MatchingDim(input_shape, trailing_dim, output_shape, trailing_dim);\n+\n+ for (int outer_index = 0; outer_index < outer_size; ++outer_index) {\n+ T max_in_row = kMinT8;\n+ for (int inner_index = 0; inner_index < depth; ++inner_index) {\n+ max_in_row =\n+ std::max(max_in_row, input_data[outer_index * depth + inner_index]);\n+ }\n+\n+ // Accumulator \"sum_of_exps_in_q12\" is safe from overflowing in 2^12 steps.\n+ F12 sum_of_exps_in_q12 = F12::FromRaw(0);\n+ for (int inner_index = 0; inner_index < depth; ++inner_index) {\n+ int32_t input_diff =\n+ static_cast<int32_t>(input_data[outer_index * depth + inner_index]) -\n+ max_in_row;\n+ if (input_diff >= diff_min) {\n+ const int32_t input_diff_in_q5 = MultiplyByQuantizedMultiplier(\n+ input_diff, input_multiplier, input_left_shift);\n+ sum_of_exps_in_q12 =\n+ sum_of_exps_in_q12 +\n+ gemmlowp::Rescale<kAccumulationIntegerBits>(\n+ exp_on_negative_values(F5::FromRaw(input_diff_in_q5)));\n+ }\n+ }\n+\n+ const int32_t log_sum_of_exps_in_q5 =\n+ log_x_for_x_greater_than_or_equal_to_1<kInputIntegerBits>(\n+ sum_of_exps_in_q12)\n+ .raw();\n+\n+ // Potentially reduced the valid range. 
shifted_log_sum_of_exps_in_q5 is\n+ // smallest representable in Q5.26 plus the log_sum_of_exps.\n+ const int32_t shifted_log_sum_of_exps_in_q5 =\n+ log_sum_of_exps_in_q5 + kMinInt32;\n+ const int32_t adjusted_diff_min =\n+ std::max(diff_min - 1,\n+ MultiplyByQuantizedMultiplier(shifted_log_sum_of_exps_in_q5,\n+ reverse_scaling_divisor,\n+ -reverse_scaling_right_shift));\n+\n+ for (int inner_index = 0; inner_index < depth; ++inner_index) {\n+ int32_t input_diff =\n+ static_cast<int32_t>(input_data[outer_index * depth + inner_index]) -\n+ max_in_row;\n+ // Note use of > below instead of >= above.\n+ if (input_diff > adjusted_diff_min) {\n+ const int32_t input_diff_in_q5 = MultiplyByQuantizedMultiplier(\n+ input_diff, input_multiplier, input_left_shift);\n+\n+ // Rescale and downcast.\n+ int32_t output_in_q27 =\n+ gemmlowp::RoundingDivideByPOT(\n+ (input_diff_in_q5 - log_sum_of_exps_in_q5),\n+ 31 - kInputIntegerBits - kOutputIntegerBits) +\n+ params.zero_point;\n+\n+ output_in_q27 =\n+ std::max(std::min(output_in_q27, static_cast<int32_t>(kMaxT8)),\n+ static_cast<int32_t>(kMinT8));\n+ output_data[outer_index * depth + inner_index] =\n+ static_cast<T>(output_in_q27);\n+ } else {\n+ output_data[outer_index * depth + inner_index] = kMinT8;\n+ }\n+ }\n+ }\n+}\n+\n+inline void LogSoftmax(const SoftmaxParams& params,\n+ const RuntimeShape& input_shape,\n+ const int8_t* input_data,\n+ const RuntimeShape& output_shape, int8_t* output_data) {\n+ LogSoftmaxQuantized(params, input_shape, input_data, output_shape,\n+ output_data);\n+}\n+\n+} // namespace reference_ops\n+} // namespace tflite\n+\n+#endif // TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_LOG_SOFTMAX_H_",
"filename": "tensorflow/lite/kernels/internal/reference/log_softmax.h",
"status": "added"
},
{
"diff": "@@ -58,6 +58,7 @@ limitations under the License.\n #include \"tensorflow/lite/kernels/internal/reference/l2normalization.h\"\n #include \"tensorflow/lite/kernels/internal/reference/leaky_relu.h\"\n #include \"tensorflow/lite/kernels/internal/reference/logistic.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/log_softmax.h\"\n #include \"tensorflow/lite/kernels/internal/reference/maximum_minimum.h\"\n #include \"tensorflow/lite/kernels/internal/reference/mul.h\"\n #include \"tensorflow/lite/kernels/internal/reference/neg.h\"\n@@ -901,127 +902,6 @@ inline void LocalResponseNormalization(\n }\n }\n \n-inline void LogSoftmax(const SoftmaxParams& params,\n- const RuntimeShape& input_shape, const float* input_data,\n- const RuntimeShape& output_shape, float* output_data) {\n- const int trailing_dim = input_shape.DimensionsCount() - 1;\n- const int outer_size =\n- MatchingFlatSizeSkipDim(input_shape, trailing_dim, output_shape);\n- const int depth =\n- MatchingDim(input_shape, trailing_dim, output_shape, trailing_dim);\n-\n- for (int i = 0; i < outer_size; ++i) {\n- // Find max element value which we'll use to ensure numerical stability\n- // taking advantage of the following equality:\n- // log(exp(x[i])/sum(exp(x[i]))) == log(exp(x[i]+C)/sum(exp(x[i]+C)))\n- float max = std::numeric_limits<float>::lowest();\n- for (int c = 0; c < depth; ++c) {\n- max = std::max(max, input_data[i * depth + c]);\n- }\n-\n- // Compute sum.\n- float sum = 0.f;\n- for (int c = 0; c < depth; ++c) {\n- sum += std::exp(input_data[i * depth + c] - max);\n- }\n-\n- // Compute result.\n- const float log_sum = std::log(sum);\n- for (int c = 0; c < depth; ++c) {\n- output_data[i * depth + c] = input_data[i * depth + c] - max - log_sum;\n- }\n- }\n-}\n-\n-inline void LogSoftmax(const SoftmaxParams& params,\n- const RuntimeShape& input_shape, const uint8* input_data,\n- const RuntimeShape& output_shape, uint8* output_data) {\n- ruy::profiler::ScopeLabel label(\"LogSoftmax/8bit\");\n- const int32 input_multiplier = params.input_multiplier;\n- const int32 input_left_shift = params.input_left_shift;\n- const int32 reverse_scaling_divisor = params.reverse_scaling_divisor;\n- const int32 reverse_scaling_right_shift = params.reverse_scaling_right_shift;\n- const int diff_min = params.diff_min;\n- // The representation chosen for the input to the exp() function is Q5.26.\n- // We need to leave extra space since values that we skip might be as large\n- // as -32 before multiplying by input_beta_multiplier, and therefore as\n- // large as -16 afterwards. 
Note that exp(-8) is definitely not\n- // insignificant to accumulation, but exp(-16) definitely is.\n- static constexpr int kScaledDiffIntegerBits = 5;\n- static constexpr int kAccumulationIntegerBits = 12;\n- static constexpr int kOutputIntegerBits = 4;\n- using FixedPointScaledDiff =\n- gemmlowp::FixedPoint<int32, kScaledDiffIntegerBits>;\n- using FixedPointAccum = gemmlowp::FixedPoint<int32, kAccumulationIntegerBits>;\n-\n- const int trailing_dim = input_shape.DimensionsCount() - 1;\n- const int outer_size =\n- MatchingFlatSizeSkipDim(input_shape, trailing_dim, output_shape);\n- const int depth =\n- MatchingDim(input_shape, trailing_dim, output_shape, trailing_dim);\n-\n- for (int i = 0; i < outer_size; ++i) {\n- uint8 max_in_row = 0;\n- for (int c = 0; c < depth; ++c) {\n- max_in_row = std::max(max_in_row, input_data[i * depth + c]);\n- }\n-\n- FixedPointAccum sum_of_exps = FixedPointAccum::Zero();\n- for (int c = 0; c < depth; ++c) {\n- int32 input_diff =\n- static_cast<int32>(input_data[i * depth + c]) - max_in_row;\n- if (input_diff >= diff_min) {\n- const int32 input_diff_rescaled =\n- MultiplyByQuantizedMultiplierGreaterThanOne(\n- input_diff, input_multiplier, input_left_shift);\n- const FixedPointScaledDiff scaled_diff_f8 =\n- FixedPointScaledDiff::FromRaw(input_diff_rescaled);\n- sum_of_exps = sum_of_exps + gemmlowp::Rescale<kAccumulationIntegerBits>(\n- exp_on_negative_values(scaled_diff_f8));\n- }\n- }\n-\n- const int32 fixed_log_sum_of_exps =\n- log_x_for_x_greater_than_or_equal_to_1<kScaledDiffIntegerBits>(\n- sum_of_exps)\n- .raw();\n-\n- // rescaled_diff_min is smallest representable in\n- // Q(kScaledDiffIntegerBits).(31-kScaledDiffIntegerBits) plus the\n- // log-sub-exps that will be subtracted in the loop.\n- //\n- // The thresholds diff_min, etc are negative.\n- const int rescaled_diff_min =\n- fixed_log_sum_of_exps + std::numeric_limits<int32>::lowest();\n- const int adjusted_diff_min =\n- std::max(diff_min - 1, // Note use of > below instead of >= above.\n- MultiplyByQuantizedMultiplierSmallerThanOneExp(\n- rescaled_diff_min, reverse_scaling_divisor,\n- -reverse_scaling_right_shift));\n-\n- for (int c = 0; c < depth; ++c) {\n- int32 input_diff =\n- static_cast<int32>(input_data[i * depth + c]) - max_in_row;\n- if (input_diff > adjusted_diff_min) {\n- const int32 input_diff_rescaled =\n- MultiplyByQuantizedMultiplierGreaterThanOne(\n- input_diff, input_multiplier, input_left_shift);\n- int32 unsat_output =\n- gemmlowp::RoundingDivideByPOT(\n- (input_diff_rescaled - fixed_log_sum_of_exps),\n- 31 - kScaledDiffIntegerBits - kOutputIntegerBits) +\n- 255;\n-\n- output_data[i * depth + c] = static_cast<uint8>(\n- std::max(std::min(unsat_output, static_cast<int32>(255)), 0));\n- } else {\n- // Set output to smallest value.\n- output_data[i * depth + c] = 0;\n- }\n- }\n- }\n-}\n-\n inline void Dequantize(const RuntimeShape& input_shape,\n const Eigen::half* input_data,\n const RuntimeShape& output_shape, float* output_data) {",
"filename": "tensorflow/lite/kernels/internal/reference/reference_ops.h",
"status": "modified"
}
]
}
|
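The float path in the extracted `log_softmax.h` above follows the standard numerically stable formulation: shift by the row maximum, then subtract the log of the sum of exponentials. A NumPy sketch of that math, for illustration:

```python
# Illustrative NumPy sketch of the float log-softmax in the reference header
# above: shift by the row max for stability, then subtract log-sum-exp.
import numpy as np

def log_softmax(x, axis=-1):
    x = np.asarray(x, dtype=np.float32)
    shifted = x - np.max(x, axis=axis, keepdims=True)
    return shifted - np.log(np.sum(np.exp(shifted), axis=axis, keepdims=True))

x = np.array([[1.0, 2.0, 3.0]])
print(np.exp(log_softmax(x)).sum())  # ~1.0, since exp(log_softmax) is softmax
```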
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator LOG_SOFTMAX from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro making minimal changes and not including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test\r\n",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47291\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47291\">No</a>\n",
"created_at": "2021-06-02T16:07:00Z"
}
],
"number": 47291,
"title": "micro: port op LOG_SOFTMAX from lite"
}
|
{
"body": "Extract the parsing out of a switch statement case to create a\r\nstandalone function which can be called by the micro op resolver.\r\n\r\nPR step 1 for issue #47291",
"number": 47481,
"review_comments": [],
"title": "micro: LOG_SOFTMAX PR1"
}
|
{
"commits": [
{
"message": "Extract the parsing out of a switch statement case to create a\nstandalone function which can be called by the micro op resolver.\n\nPR step 1 for issue #47291"
},
{
"message": "Merge branch 'master' into LogSoftMax-pr1"
}
],
"files": [
{
"diff": "@@ -309,6 +309,10 @@ TfLiteStatus ParseOpDataTfLite(const Operator* op, BuiltinOperator op_type,\n return ParseLogistic(op, error_reporter, allocator, builtin_data);\n }\n \n+ case BuiltinOperator_LOG_SOFTMAX: {\n+ return ParseLogSoftmax(op, error_reporter, allocator, builtin_data);\n+ }\n+\n case BuiltinOperator_MAXIMUM: {\n return ParseMaximum(op, error_reporter, allocator, builtin_data);\n }\n@@ -814,7 +818,6 @@ TfLiteStatus ParseOpDataTfLite(const Operator* op, BuiltinOperator op_type,\n case BuiltinOperator_CUSTOM:\n case BuiltinOperator_EMBEDDING_LOOKUP:\n case BuiltinOperator_EQUAL:\n- case BuiltinOperator_LOG_SOFTMAX:\n case BuiltinOperator_MATRIX_DIAG:\n case BuiltinOperator_MATRIX_SET_DIAG:\n case BuiltinOperator_RELU_N1_TO_1:\n@@ -1468,6 +1471,14 @@ TfLiteStatus ParseLogistic(const Operator*, ErrorReporter*,\n return kTfLiteOk;\n }\n \n+// We have this parse function instead of directly returning kTfLiteOk from the\n+// switch-case in ParseOpData because this function is used as part of the\n+// selective registration for the OpResolver implementation in micro.\n+TfLiteStatus ParseLogSoftmax(const Operator*, ErrorReporter*,\n+ BuiltinDataAllocator*, void**) {\n+ return kTfLiteOk;\n+}\n+\n // We have this parse function instead of directly returning kTfLiteOk from the\n // switch-case in ParseOpData because this function is used as part of the\n // selective registration for the OpResolver implementation in micro.",
"filename": "tensorflow/lite/core/api/flatbuffer_conversions.cc",
"status": "modified"
},
{
"diff": "@@ -213,6 +213,10 @@ TfLiteStatus ParseLogistic(const Operator* op, ErrorReporter* error_reporter,\n BuiltinDataAllocator* allocator,\n void** builtin_data);\n \n+TfLiteStatus ParseLogSoftmax(const Operator* op, ErrorReporter* error_reporter,\n+ BuiltinDataAllocator* allocator,\n+ void** builtin_data);\n+\n TfLiteStatus ParseMaximum(const Operator* op, ErrorReporter* error_reporter,\n BuiltinDataAllocator* allocator, void** builtin_data);\n ",
"filename": "tensorflow/lite/core/api/flatbuffer_conversions.h",
"status": "modified"
}
]
}
|
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator LOG_SOFTMAX from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro making minimal changes and not including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test\r\n",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47291\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47291\">No</a>\n",
"created_at": "2021-06-02T16:07:00Z"
}
],
"number": 47291,
"title": "micro: port op LOG_SOFTMAX from lite"
}
|
{
"body": "PR steps 3 through 5 for the LOG_SOFTMAX operator as per Issue #47291 ",
"number": 47477,
"review_comments": [
{
"body": "Is this type change required for correctness, or is it just a cleanup step? If it's just cleanup, I'd prefer to avoid the code churn.",
"created_at": "2021-04-13T16:17:34Z"
},
{
"body": "This is a slightly surprising change - I just wanted to check that it was deliberate?",
"created_at": "2021-04-13T16:19:28Z"
},
{
"body": "This was required to compile on ARM platforms. However, the fix was incorrect as it had int and int32_t being used interchangeably without a static_cast.\r\n\r\nFixed.",
"created_at": "2021-04-13T20:19:10Z"
},
{
"body": "Yes that is intentional. I saved 4 bytes during Init phase by not storing the zero point. The correct zero point for output is checked during the Prepare phase. So the zero point will always match kMaxT8 when calling this template.",
"created_at": "2021-04-13T22:11:03Z"
}
],
"title": "micro: LOG_SOFTMAX PR3-5"
}
|
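The zero-point discussion in the review comments above rests on the int8 output quantization for LOG_SOFTMAX: the output zero point is 127 (kMaxT8), as also noted in the copied kernel comments, and with 4 output integer bits the scale works out to 16/256. The sketch below illustrates that mapping; the exact scale is inferred from kOutputIntegerBits and should be read as an assumption rather than a quote from this PR.

```python
# Illustration of the int8 LOG_SOFTMAX output quantization discussed above.
# zero_point = 127 comes from the kernel comments; scale = 16/256 is inferred
# from kOutputIntegerBits = 4 and is stated here as an assumption.
import numpy as np

scale, zero_point = 16.0 / 256.0, 127

def quantize_log_softmax_output(real_values):
    q = np.round(np.asarray(real_values) / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

print(quantize_log_softmax_output([0.0, -1.0, -16.0]))  # -> [127, 111, -128]
```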
{
"commits": [
{
"message": "Merge branch 'LogSoftMax-pr2' into LogSoftMax-pr3"
},
{
"message": "This is a copy with minimal modification of the kernel and test for\noperator LOG_SOFTMAX from tensorflow/lite/kernels.\nAdaptations to micro and addition to the micro build to follow.\n\nPR step 3 for issue #47291"
},
{
"message": "Implement skeleton (non-working) code for operator and test.\nHeader files changed.\nNamespaces changed.\nSome original code deleted.\nSome original code modified.\n\nPR step 4 of the work to port operator LOG_SOFTMAX as tracked in Issue #47291"
},
{
"message": "micro: port operator LOG_SOFTMAX kernel from lite with test\n\nComplete implementation of TFLM operator LOG_SOFTMAX and associated TFLM test code.\n\nPR step 5 of the work to port operator LOG_SOFTMAX as tracked in Issue #47291"
},
{
"message": "Merge branch 'master' into LogSoftMax-pr3"
},
{
"message": "fix CI build failure"
},
{
"message": "Merge branch 'master' into LogSoftMax-pr3"
},
{
"message": "Only allocate LogSoftmaxOpData when quantizing."
},
{
"message": "Fixes for review issues."
}
],
"files": [
{
"diff": "@@ -575,7 +575,8 @@ log_x_for_x_greater_than_or_equal_to_1_impl(\n // InputIntegerBits - z_b_headroom - 0.25);\n const FixedPointAccum z_a_pow_2_adj = SaturatingAddNonGemmlowp(\n FixedPointAccum::FromRaw(SaturatingRoundingMultiplyByPOTParam(\n- InputIntegerBits - z_a_headroom_plus_1, 31 - kAccumIntegerBits)),\n+ static_cast<int32_t>(InputIntegerBits - z_a_headroom_plus_1),\n+ 31 - kAccumIntegerBits)),\n shifted_quarter);\n \n // z_b is treated like z_a, but premultiplying by sqrt(0.5).\n@@ -585,7 +586,8 @@ log_x_for_x_greater_than_or_equal_to_1_impl(\n SaturatingRoundingMultiplyByPOTParam(z_a.raw(), z_b_headroom);\n const FixedPointAccum z_b_pow_2_adj = SaturatingSub(\n FixedPointAccum::FromRaw(SaturatingRoundingMultiplyByPOTParam(\n- InputIntegerBits - z_b_headroom, 31 - kAccumIntegerBits)),\n+ static_cast<int32_t>(InputIntegerBits - z_b_headroom),\n+ 31 - kAccumIntegerBits)),\n shifted_quarter);\n \n const FixedPoint0 r = FixedPoint0::FromRaw(std::min(r_a_raw, r_b_raw));",
"filename": "tensorflow/lite/kernels/internal/common.h",
"status": "modified"
},
{
"diff": "@@ -16,8 +16,10 @@ limitations under the License.\n #define TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_LOG_SOFTMAX_H_\n \n #include <algorithm>\n+#include <cstddef>\n #include <limits>\n \n+#include \"fixedpoint/fixedpoint.h\"\n #include \"tensorflow/lite/kernels/internal/common.h\"\n \n namespace tflite {\n@@ -56,12 +58,14 @@ inline void LogSoftmax(const SoftmaxParams& params,\n }\n \n inline void LogSoftmax(const SoftmaxParams& params,\n- const RuntimeShape& input_shape, const uint8* input_data,\n- const RuntimeShape& output_shape, uint8* output_data) {\n- const int32 input_multiplier = params.input_multiplier;\n- const int32 input_left_shift = params.input_left_shift;\n- const int32 reverse_scaling_divisor = params.reverse_scaling_divisor;\n- const int32 reverse_scaling_right_shift = params.reverse_scaling_right_shift;\n+ const RuntimeShape& input_shape,\n+ const uint8_t* input_data,\n+ const RuntimeShape& output_shape, uint8_t* output_data) {\n+ const int32_t input_multiplier = params.input_multiplier;\n+ const int32_t input_left_shift = params.input_left_shift;\n+ const int32_t reverse_scaling_divisor = params.reverse_scaling_divisor;\n+ const int32_t reverse_scaling_right_shift =\n+ params.reverse_scaling_right_shift;\n const int diff_min = params.diff_min;\n // The representation chosen for the input to the exp() function is Q5.26.\n // We need to leave extra space since values that we skip might be as large\n@@ -72,8 +76,9 @@ inline void LogSoftmax(const SoftmaxParams& params,\n static constexpr int kAccumulationIntegerBits = 12;\n static constexpr int kOutputIntegerBits = 4;\n using FixedPointScaledDiff =\n- gemmlowp::FixedPoint<int32, kScaledDiffIntegerBits>;\n- using FixedPointAccum = gemmlowp::FixedPoint<int32, kAccumulationIntegerBits>;\n+ gemmlowp::FixedPoint<int32_t, kScaledDiffIntegerBits>;\n+ using FixedPointAccum =\n+ gemmlowp::FixedPoint<int32_t, kAccumulationIntegerBits>;\n \n const int trailing_dim = input_shape.DimensionsCount() - 1;\n const int outer_size =\n@@ -82,17 +87,17 @@ inline void LogSoftmax(const SoftmaxParams& params,\n MatchingDim(input_shape, trailing_dim, output_shape, trailing_dim);\n \n for (int i = 0; i < outer_size; ++i) {\n- uint8 max_in_row = 0;\n+ uint8_t max_in_row = 0;\n for (int c = 0; c < depth; ++c) {\n max_in_row = std::max(max_in_row, input_data[i * depth + c]);\n }\n \n FixedPointAccum sum_of_exps = FixedPointAccum::Zero();\n for (int c = 0; c < depth; ++c) {\n- int32 input_diff =\n- static_cast<int32>(input_data[i * depth + c]) - max_in_row;\n+ int32_t input_diff =\n+ static_cast<int32_t>(input_data[i * depth + c]) - max_in_row;\n if (input_diff >= diff_min) {\n- const int32 input_diff_rescaled =\n+ const int32_t input_diff_rescaled =\n MultiplyByQuantizedMultiplierGreaterThanOne(\n input_diff, input_multiplier, input_left_shift);\n const FixedPointScaledDiff scaled_diff_f8 =\n@@ -102,7 +107,7 @@ inline void LogSoftmax(const SoftmaxParams& params,\n }\n }\n \n- const int32 fixed_log_sum_of_exps =\n+ const int32_t fixed_log_sum_of_exps =\n log_x_for_x_greater_than_or_equal_to_1<kScaledDiffIntegerBits>(\n sum_of_exps)\n .raw();\n@@ -113,28 +118,30 @@ inline void LogSoftmax(const SoftmaxParams& params,\n //\n // The thresholds diff_min, etc are negative.\n const int rescaled_diff_min =\n- fixed_log_sum_of_exps + std::numeric_limits<int32>::lowest();\n+ fixed_log_sum_of_exps + std::numeric_limits<int32_t>::lowest();\n const int adjusted_diff_min =\n- std::max(diff_min - 1, // Note use of > below instead of >= above.\n+ 
std::max(static_cast<int32_t>(\n+ diff_min - 1), // Note use of > below instead of >= above.\n MultiplyByQuantizedMultiplierSmallerThanOneExp(\n rescaled_diff_min, reverse_scaling_divisor,\n -reverse_scaling_right_shift));\n \n for (int c = 0; c < depth; ++c) {\n- int32 input_diff =\n- static_cast<int32>(input_data[i * depth + c]) - max_in_row;\n+ int32_t input_diff =\n+ static_cast<int32_t>(input_data[i * depth + c]) - max_in_row;\n if (input_diff > adjusted_diff_min) {\n- const int32 input_diff_rescaled =\n+ const int32_t input_diff_rescaled =\n MultiplyByQuantizedMultiplierGreaterThanOne(\n input_diff, input_multiplier, input_left_shift);\n- int32 unsat_output =\n+ int32_t unsat_output =\n gemmlowp::RoundingDivideByPOT(\n (input_diff_rescaled - fixed_log_sum_of_exps),\n 31 - kScaledDiffIntegerBits - kOutputIntegerBits) +\n 255;\n \n- output_data[i * depth + c] = static_cast<uint8>(\n- std::max(std::min(unsat_output, static_cast<int32>(255)), 0));\n+ output_data[i * depth + c] = static_cast<uint8_t>(\n+ std::max(std::min(unsat_output, static_cast<int32_t>(255)),\n+ static_cast<int32_t>(0)));\n } else {\n // Set output to smallest value.\n output_data[i * depth + c] = 0;\n@@ -145,6 +152,7 @@ inline void LogSoftmax(const SoftmaxParams& params,\n \n template <typename T>\n inline void LogSoftmaxQuantized(const SoftmaxParams& params,\n+ const size_t outer_size, const size_t depth,\n const RuntimeShape& input_shape,\n const T* input_data,\n const RuntimeShape& output_shape,\n@@ -160,34 +168,24 @@ inline void LogSoftmaxQuantized(const SoftmaxParams& params,\n static constexpr T kMaxT8 = std::numeric_limits<T>::max();\n static constexpr int32_t kMinInt32 = std::numeric_limits<int32_t>::min();\n \n- // zero-point is set by Prepare function.\n- // value 127 for int8_t\n- // value 255 for uint8_t\n-\n // All IntegerBits must agree with Prepare function.\n // Input is chosen as Q5.26 so exp(-1 * 2^5 * 2^-1) = exp(-16) is negligible.\n static constexpr int kInputIntegerBits = 5;\n static constexpr int kAccumulationIntegerBits = 12;\n static constexpr int kOutputIntegerBits = 4;\n- using F5 = gemmlowp::FixedPoint<int32, kInputIntegerBits>;\n- using F12 = gemmlowp::FixedPoint<int32, kAccumulationIntegerBits>;\n+ using F5 = gemmlowp::FixedPoint<int32_t, kInputIntegerBits>;\n+ using F12 = gemmlowp::FixedPoint<int32_t, kAccumulationIntegerBits>;\n \n- const int trailing_dim = input_shape.DimensionsCount() - 1;\n- const int outer_size =\n- MatchingFlatSizeSkipDim(input_shape, trailing_dim, output_shape);\n- const int depth =\n- MatchingDim(input_shape, trailing_dim, output_shape, trailing_dim);\n-\n- for (int outer_index = 0; outer_index < outer_size; ++outer_index) {\n+ for (size_t outer_index = 0; outer_index < outer_size; ++outer_index) {\n T max_in_row = kMinT8;\n- for (int inner_index = 0; inner_index < depth; ++inner_index) {\n+ for (size_t inner_index = 0; inner_index < depth; ++inner_index) {\n max_in_row =\n std::max(max_in_row, input_data[outer_index * depth + inner_index]);\n }\n \n // Accumulator \"sum_of_exps_in_q12\" is safe from overflowing in 2^12 steps.\n F12 sum_of_exps_in_q12 = F12::FromRaw(0);\n- for (int inner_index = 0; inner_index < depth; ++inner_index) {\n+ for (size_t inner_index = 0; inner_index < depth; ++inner_index) {\n int32_t input_diff =\n static_cast<int32_t>(input_data[outer_index * depth + inner_index]) -\n max_in_row;\n@@ -211,12 +209,12 @@ inline void LogSoftmaxQuantized(const SoftmaxParams& params,\n const int32_t shifted_log_sum_of_exps_in_q5 =\n 
log_sum_of_exps_in_q5 + kMinInt32;\n const int32_t adjusted_diff_min =\n- std::max(diff_min - 1,\n+ std::max(static_cast<int32_t>(diff_min - 1),\n MultiplyByQuantizedMultiplier(shifted_log_sum_of_exps_in_q5,\n reverse_scaling_divisor,\n -reverse_scaling_right_shift));\n \n- for (int inner_index = 0; inner_index < depth; ++inner_index) {\n+ for (size_t inner_index = 0; inner_index < depth; ++inner_index) {\n int32_t input_diff =\n static_cast<int32_t>(input_data[outer_index * depth + inner_index]) -\n max_in_row;\n@@ -230,7 +228,7 @@ inline void LogSoftmaxQuantized(const SoftmaxParams& params,\n gemmlowp::RoundingDivideByPOT(\n (input_diff_in_q5 - log_sum_of_exps_in_q5),\n 31 - kInputIntegerBits - kOutputIntegerBits) +\n- params.zero_point;\n+ kMaxT8;\n \n output_in_q27 =\n std::max(std::min(output_in_q27, static_cast<int32_t>(kMaxT8)),\n@@ -244,12 +242,12 @@ inline void LogSoftmaxQuantized(const SoftmaxParams& params,\n }\n }\n \n-inline void LogSoftmax(const SoftmaxParams& params,\n- const RuntimeShape& input_shape,\n+inline void LogSoftmax(const SoftmaxParams& params, const size_t outer_size,\n+ const size_t depth, const RuntimeShape& input_shape,\n const int8_t* input_data,\n const RuntimeShape& output_shape, int8_t* output_data) {\n- LogSoftmaxQuantized(params, input_shape, input_data, output_shape,\n- output_data);\n+ LogSoftmaxQuantized(params, outer_size, depth, input_shape, input_data,\n+ output_shape, output_data);\n }\n \n } // namespace reference_ops",
"filename": "tensorflow/lite/kernels/internal/reference/log_softmax.h",
"status": "modified"
},
{
"diff": "@@ -282,6 +282,7 @@ cc_library(\n \"leaky_relu.cc\",\n \"logical.cc\",\n \"logistic.cc\",\n+ \"log_softmax.cc\",\n \"maximum_minimum.cc\",\n \"mul.cc\",\n \"neg.cc\",\n@@ -805,6 +806,21 @@ cc_test(\n ],\n )\n \n+cc_test(\n+ name = \"log_softmax_test\",\n+ srcs = [\n+ \"log_softmax_test.cc\",\n+ ],\n+ deps = [\n+ \":kernel_runner\",\n+ \"//tensorflow/lite/c:common\",\n+ \"//tensorflow/lite/micro:debug_log\",\n+ \"//tensorflow/lite/micro:op_resolvers\",\n+ \"//tensorflow/lite/micro:test_helpers\",\n+ \"//tensorflow/lite/micro/testing:micro_test\",\n+ ],\n+)\n+\n cc_test(\n name = \"maximum_minimum_test\",\n srcs = [",
"filename": "tensorflow/lite/micro/kernels/BUILD",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,150 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#include \"tensorflow/lite/kernels/internal/reference/log_softmax.h\"\n+\n+#include <cstddef>\n+#include <cstdint>\n+\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/kernels/internal/quantization_util.h\"\n+#include \"tensorflow/lite/kernels/internal/tensor_ctypes.h\"\n+#include \"tensorflow/lite/kernels/internal/types.h\"\n+#include \"tensorflow/lite/kernels/kernel_util.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_util.h\"\n+\n+namespace tflite {\n+namespace {\n+\n+// used only with quantized data\n+struct LogSoftmaxOpData {\n+ int32_t input_multiplier;\n+ int32_t input_left_shift;\n+ int32_t reverse_scaling_divisor;\n+ int32_t reverse_scaling_right_shift;\n+ int diff_min;\n+ size_t outer_size; // number of tensor elements skipping computation axis\n+ size_t depth; // number of tensor elements on computation axis\n+};\n+\n+// input/output tensor index\n+constexpr int kInputTensor = 0;\n+constexpr int kOutputTensor = 0;\n+\n+TfLiteStatus CalculateOpData(TfLiteContext* context, TfLiteNode* node) {\n+ TF_LITE_ENSURE_EQ(context, NumInputs(node), 1);\n+ TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);\n+ const TfLiteTensor* input;\n+ TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, kInputTensor, &input));\n+ TfLiteTensor* output;\n+ TF_LITE_ENSURE_OK(context,\n+ GetOutputSafe(context, node, kOutputTensor, &output));\n+ TF_LITE_ENSURE_TYPES_EQ(context, input->type, output->type);\n+\n+ TF_LITE_ENSURE(context, HaveSameShapes(input, output));\n+\n+ if (input->type == kTfLiteInt8) {\n+ node->user_data =\n+ context->AllocatePersistentBuffer(context, sizeof(LogSoftmaxOpData));\n+ auto data = static_cast<LogSoftmaxOpData*>(node->user_data);\n+\n+ // quantization datum\n+ constexpr int32_t kOutputZeroPoint = 127;\n+ constexpr float kOutputScale = 16.0 / 256;\n+ constexpr double kBeta = 1.0;\n+ constexpr int kScaledDiffIntegerBits = 5;\n+\n+ TF_LITE_ENSURE(context, output->params.scale == kOutputScale);\n+ TF_LITE_ENSURE(context, output->params.zero_point == kOutputZeroPoint);\n+\n+ int input_left_shift;\n+ int reverse_scaling_right_shift;\n+ tflite::PreprocessLogSoftmaxScalingExp(\n+ kBeta, static_cast<double>(input->params.scale), kScaledDiffIntegerBits,\n+ &data->input_multiplier, &input_left_shift,\n+ &data->reverse_scaling_divisor, &reverse_scaling_right_shift);\n+ data->input_left_shift = static_cast<int32_t>(input_left_shift);\n+ data->reverse_scaling_right_shift =\n+ static_cast<int32_t>(-reverse_scaling_right_shift);\n+ // diff_min has a negative value, and is used to limit the maximum magnitude\n+ // of the diffs, which are <= 0.\n+ data->diff_min =\n+ -tflite::CalculateInputRadius(kScaledDiffIntegerBits, input_left_shift);\n+\n+ RuntimeShape input_shape = GetTensorShape(input);\n+ const int trailing_dim = input_shape.DimensionsCount() - 
1;\n+ data->outer_size =\n+ static_cast<size_t>(FlatSizeSkipDim(input_shape, trailing_dim));\n+ data->depth = static_cast<size_t>(input_shape.Dims(trailing_dim));\n+ }\n+\n+ return kTfLiteOk;\n+}\n+\n+TfLiteStatus LogSoftmaxPrepare(TfLiteContext* context, TfLiteNode* node) {\n+ return CalculateOpData(context, node);\n+}\n+\n+TfLiteStatus LogSoftmaxEval(TfLiteContext* context, TfLiteNode* node) {\n+ const LogSoftmaxOpData* data =\n+ static_cast<LogSoftmaxOpData*>(node->user_data);\n+ const TfLiteEvalTensor* input =\n+ tflite::micro::GetEvalInput(context, node, kInputTensor);\n+ TfLiteEvalTensor* output =\n+ tflite::micro::GetEvalOutput(context, node, kOutputTensor);\n+ switch (input->type) {\n+ case kTfLiteFloat32: {\n+ SoftmaxParams op_params = {};\n+ reference_ops::LogSoftmax(op_params, tflite::micro::GetTensorShape(input),\n+ tflite::micro::GetTensorData<float>(input),\n+ tflite::micro::GetTensorShape(output),\n+ tflite::micro::GetTensorData<float>(output));\n+ return kTfLiteOk;\n+ }\n+ case kTfLiteInt8: {\n+ SoftmaxParams op_params = {};\n+ op_params.input_multiplier = data->input_multiplier;\n+ op_params.input_left_shift = data->input_left_shift;\n+ op_params.reverse_scaling_divisor = data->reverse_scaling_divisor;\n+ op_params.reverse_scaling_right_shift = data->reverse_scaling_right_shift;\n+ op_params.diff_min = data->diff_min;\n+ reference_ops::LogSoftmax(op_params, data->outer_size, data->depth,\n+ tflite::micro::GetTensorShape(input),\n+ tflite::micro::GetTensorData<int8_t>(input),\n+ tflite::micro::GetTensorShape(output),\n+ tflite::micro::GetTensorData<int8_t>(output));\n+ return kTfLiteOk;\n+ }\n+ default:\n+ TF_LITE_KERNEL_LOG(context,\n+ \"LOG_SOFTMAX only supports float32, int8, got %s.\",\n+ TfLiteTypeGetName(input->type));\n+ return kTfLiteError;\n+ }\n+}\n+\n+} // namespace\n+\n+TfLiteRegistration Register_LOG_SOFTMAX() {\n+ return {/*init=*/nullptr,\n+ /*free=*/nullptr,\n+ /*prepare=*/LogSoftmaxPrepare,\n+ /*invoke=*/LogSoftmaxEval,\n+ /*profiling_string=*/nullptr,\n+ /*builtin_code=*/0,\n+ /*custom_name=*/nullptr,\n+ /*version=*/0};\n+}\n+\n+} // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/log_softmax.cc",
"status": "added"
},
{
"diff": "@@ -0,0 +1,231 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+\n+#include <cstdint>\n+#include <type_traits>\n+\n+#include \"tensorflow/lite/c/builtin_op_data.h\"\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_runner.h\"\n+#include \"tensorflow/lite/micro/test_helpers.h\"\n+#include \"tensorflow/lite/micro/testing/micro_test.h\"\n+\n+namespace tflite {\n+namespace testing {\n+namespace {\n+\n+void ExecuteLogSoftmaxTest(int tensors_count, TfLiteTensor* tensors) {\n+ constexpr int kInputArrayData[] = {1, 0};\n+ TfLiteIntArray* inputs_array = IntArrayFromInts(kInputArrayData);\n+ constexpr int kOutputArrayData[] = {1, 1};\n+ TfLiteIntArray* outputs_array = IntArrayFromInts(kOutputArrayData);\n+\n+ const TfLiteRegistration registration = tflite::Register_LOG_SOFTMAX();\n+ micro::KernelRunner runner(registration, tensors, tensors_count, inputs_array,\n+ outputs_array, nullptr);\n+\n+ TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.InitAndPrepare());\n+ TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.Invoke());\n+}\n+\n+template <typename T>\n+void TestLogSoftmax(const float tolerance, const int* input_dims_data,\n+ const T* input_data, const int* expected_dims,\n+ const T* expected_data, T* output_data) {\n+ TfLiteIntArray* input_dims = IntArrayFromInts(input_dims_data);\n+ TfLiteIntArray* output_dims = IntArrayFromInts(expected_dims);\n+ const int output_count = ElementCount(*output_dims);\n+\n+ TfLiteTensor tensors[] = {\n+ CreateTensor(input_data, input_dims),\n+ CreateTensor(output_data, output_dims),\n+ };\n+ constexpr int kTensorsCount = std::extent<decltype(tensors)>::value;\n+ ExecuteLogSoftmaxTest(kTensorsCount, tensors);\n+\n+ for (int i = 0; i < output_count; i++) {\n+ TF_LITE_MICRO_EXPECT_NEAR(expected_data[i], output_data[i], tolerance);\n+ }\n+}\n+\n+// min/max are used to compute scale, zero-point\n+template <typename T>\n+struct TestLogSoftmaxParams {\n+ // quantization parameters\n+ float data_min; // input and output data minimum value\n+ float data_max; // input and output data maximum value\n+ T* input_data; // quantized input storage\n+ T* output_data; // quantized output storage\n+ float tolerance; // maximum compare difference\n+};\n+\n+template <typename T>\n+void TestLogSoftmaxQuantized(const TestLogSoftmaxParams<T>& params,\n+ const int* input_dims_data,\n+ const float* input_data, const int* expected_dims,\n+ const float* expected_data,\n+ const T* expected_data_quantized,\n+ float* output_data) {\n+ TfLiteIntArray* input_dims = IntArrayFromInts(input_dims_data);\n+ TfLiteIntArray* output_dims = IntArrayFromInts(expected_dims);\n+ const int output_count = ElementCount(*output_dims);\n+\n+ constexpr float kOutputScale = 16.0 / 256;\n+ constexpr int kOutputZeroPoint = 127;\n+ const float scale = ScaleFromMinMax<T>(params.data_min, params.data_max);\n+ const int zero_point =\n+ 
ZeroPointFromMinMax<T>(params.data_min, params.data_max);\n+\n+ TfLiteTensor tensors[] = {\n+ CreateQuantizedTensor(input_data, params.input_data, input_dims, scale,\n+ zero_point),\n+ CreateQuantizedTensor(params.output_data, output_dims, kOutputScale,\n+ kOutputZeroPoint),\n+ };\n+ constexpr int kTensorsCount = std::extent<decltype(tensors)>::value;\n+\n+ ExecuteLogSoftmaxTest(kTensorsCount, tensors);\n+\n+ for (int i = 0; i < output_count; i++) {\n+ TF_LITE_MICRO_EXPECT_EQ(expected_data_quantized[i], params.output_data[i]);\n+ }\n+ Dequantize(params.output_data, output_count, kOutputScale, kOutputZeroPoint,\n+ output_data);\n+ for (int i = 0; i < output_count; i++) {\n+ TF_LITE_MICRO_EXPECT_NEAR(expected_data[i], output_data[i],\n+ params.tolerance);\n+ }\n+}\n+\n+} // namespace\n+} // namespace testing\n+} // namespace tflite\n+\n+TF_LITE_MICRO_TESTS_BEGIN\n+\n+// This contains the same test values as the Softmax test, but reference answer\n+// generated via the following snippet of python:\n+// logits1 = tf.constant([[0, -6, 2, 4],[3, -2, 10, 1]], dtype=tf.float32)\n+// logits2 = tf.constant([[0,-6],[2,4],[3,-2],[10,1]], dtype=tf.float32)\n+// lsm1 = tf.nn.log_softmax(logits1)\n+// lsm2 = tf.nn.log_softmax(logits2)\n+// with tf.Session() as sess:\n+// print('lsm1', sess.run(lsm1))\n+// print('lsm2', sess.run(lsm2))\n+TF_LITE_MICRO_TEST(FloatActivationsOpTestLogSoftmax) {\n+ constexpr int kDims1[] = {2, 2, 4};\n+ constexpr float kInput[] = {\n+ 0, -6, 2, 4, 3, -2, 10, 1,\n+ };\n+ constexpr float kExpect1[] = {\n+ -4.14297, -10.14297, -2.14297, -.142971, //\n+ -7.00104, -12.00104, -.00104087, -9.00104, //\n+ };\n+ constexpr int kOutputCount = std::extent<decltype(kExpect1)>::value;\n+ float output_data[kOutputCount];\n+\n+ constexpr float kTolerance = 1e-5;\n+\n+ tflite::testing::TestLogSoftmax(kTolerance, kDims1, kInput, kDims1, kExpect1,\n+ output_data);\n+\n+ // Same input, but a different shape.\n+ constexpr int kDims2[] = {2, 4, 2};\n+ constexpr float kExpect2[] = {\n+ -.00247565, -6.00247, -2.12692, -.126928,\n+ -.00671534, -5.00671, -.000123374, -9.00012,\n+ };\n+\n+ tflite::testing::TestLogSoftmax(kTolerance, kDims2, kInput, kDims2, kExpect2,\n+ output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(LogSoftmaxOpTestSimpleTest) {\n+ constexpr int kDims[] = {2, 2, 5};\n+ constexpr float kInput[] = {\n+ 1.0, 2.0, 3.0, 4.0, 5.0, //\n+ -1.0, -2.0, -3.0, -4.0, -5.0, //\n+ };\n+ constexpr float kExpect[] = {\n+ -4.45191431, -3.45191431, -2.45191431, -1.45191443, -0.4519144, //\n+ -0.4519144, -1.45191443, -2.45191431, -3.45191431, -4.45191431 //\n+ };\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ constexpr float kTolerance = 1e-6;\n+\n+ tflite::testing::TestLogSoftmax(kTolerance, kDims, kInput, kDims, kExpect,\n+ output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(QuantizedActivationsOpTestLogSoftmaxInt8) {\n+ constexpr int kDims[] = {2, 2, 4};\n+ constexpr float kInput[] = {\n+ 0, -6, 2, 4, 3, -2, 10, 1,\n+ };\n+ constexpr float kExpect[] = {\n+ -4.14297, -10.14297, -2.14297, -.142971,\n+ -7.00104, -12.00104, -.00104087, -9.00104,\n+ };\n+ constexpr int8_t kExpectQuantized[] = {\n+ 61, -36, 93, 125, 15, -65, 127, -16,\n+ };\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ // setup quantization storage and parameters\n+ int8_t q_output_data[kOutputCount];\n+ int8_t q_input_data[kOutputCount];\n+ constexpr float kMin = -10;\n+ constexpr float kMax = 10;\n+ constexpr float 
kLogSoftmaxQuantizedTolerance = 0.06355;\n+ tflite::testing::TestLogSoftmaxParams<int8_t> params = {};\n+ params.data_min = kMin;\n+ params.data_max = kMax;\n+ params.input_data = q_input_data;\n+ params.output_data = q_output_data;\n+ params.tolerance = kLogSoftmaxQuantizedTolerance;\n+\n+ tflite::testing::TestLogSoftmaxQuantized(\n+ params, kDims, kInput, kDims, kExpect, kExpectQuantized, output_data);\n+}\n+\n+TF_LITE_MICRO_TEST(ExtraTestLogSoftmaxInt8) {\n+ constexpr int kDims[] = {2, 3, 1};\n+ constexpr float kInput[] = {0, -1, 1};\n+ constexpr float kExpect[] = {0, 0, 0};\n+ constexpr int8_t kExpectQuantized[] = {127, 127, 127};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ // setup quantization storage and parameters\n+ int8_t q_output_data[kOutputCount];\n+ int8_t q_input_data[kOutputCount];\n+ constexpr float kMin = -1;\n+ constexpr float kMax = 1;\n+ constexpr float kLogSoftmaxQuantizedTolerance = 0.06355;\n+ tflite::testing::TestLogSoftmaxParams<int8_t> params = {};\n+ params.data_min = kMin;\n+ params.data_max = kMax;\n+ params.input_data = q_input_data;\n+ params.output_data = q_output_data;\n+ params.tolerance = kLogSoftmaxQuantizedTolerance;\n+\n+ tflite::testing::TestLogSoftmaxQuantized(\n+ params, kDims, kInput, kDims, kExpect, kExpectQuantized, output_data);\n+}\n+\n+TF_LITE_MICRO_TESTS_END",
"filename": "tensorflow/lite/micro/kernels/log_softmax_test.cc",
"status": "added"
},
{
"diff": "@@ -46,6 +46,7 @@ TfLiteRegistration Register_FLOOR_DIV();\n TfLiteRegistration Register_FLOOR_MOD();\n TfLiteRegistration Register_L2_POOL_2D();\n TfLiteRegistration Register_LEAKY_RELU();\n+TfLiteRegistration Register_LOG_SOFTMAX();\n TfLiteRegistration Register_QUANTIZE();\n TfLiteRegistration Register_SHAPE();\n TfLiteRegistration Register_SOFTMAX();",
"filename": "tensorflow/lite/micro/kernels/micro_ops.h",
"status": "modified"
},
{
"diff": "@@ -294,6 +294,7 @@ tensorflow/lite/micro/kernels/l2_pool_2d_test.cc \\\n tensorflow/lite/micro/kernels/leaky_relu_test.cc \\\n tensorflow/lite/micro/kernels/logical_test.cc \\\n tensorflow/lite/micro/kernels/logistic_test.cc \\\n+tensorflow/lite/micro/kernels/log_softmax_test.cc \\\n tensorflow/lite/micro/kernels/maximum_minimum_test.cc \\\n tensorflow/lite/micro/kernels/mul_test.cc \\\n tensorflow/lite/micro/kernels/neg_test.cc \\\n@@ -360,6 +361,7 @@ tensorflow/lite/micro/kernels/l2_pool_2d.cc \\\n tensorflow/lite/micro/kernels/leaky_relu.cc \\\n tensorflow/lite/micro/kernels/logical.cc \\\n tensorflow/lite/micro/kernels/logistic.cc \\\n+tensorflow/lite/micro/kernels/log_softmax.cc \\\n tensorflow/lite/micro/kernels/maximum_minimum.cc \\\n tensorflow/lite/micro/kernels/mul.cc \\\n tensorflow/lite/micro/kernels/neg.cc \\\n@@ -458,6 +460,7 @@ tensorflow/lite/kernels/internal/reference/integer_ops/tanh.h \\\n tensorflow/lite/kernels/internal/reference/integer_ops/transpose_conv.h \\\n tensorflow/lite/kernels/internal/reference/l2normalization.h \\\n tensorflow/lite/kernels/internal/reference/leaky_relu.h \\\n+tensorflow/lite/kernels/internal/reference/log_softmax.h \\\n tensorflow/lite/kernels/internal/reference/maximum_minimum.h \\\n tensorflow/lite/kernels/internal/reference/mul.h \\\n tensorflow/lite/kernels/internal/reference/neg.h \\",
"filename": "tensorflow/lite/micro/tools/make/Makefile",
"status": "modified"
}
]
}
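For reference, the float path exercised by the FloatActivationsOpTestLogSoftmax case above implements log_softmax(x) = (x - max(x)) - log(sum(exp(x - max(x)))) along the last axis. The following is a minimal sketch of that arithmetic (not the TFLM kernel itself), reproducing the first expected row from the test:

```cpp
#include <cmath>
#include <cstdio>

// Minimal log-softmax over one row, max-subtracted for numerical stability.
void LogSoftmaxRow(const float* x, int depth, float* out) {
  float max_val = x[0];
  for (int i = 1; i < depth; ++i) max_val = std::fmax(max_val, x[i]);
  float sum_exp = 0.0f;
  for (int i = 0; i < depth; ++i) sum_exp += std::exp(x[i] - max_val);
  const float log_sum = std::log(sum_exp);
  for (int i = 0; i < depth; ++i) out[i] = (x[i] - max_val) - log_sum;
}

int main() {
  const float row[4] = {0.0f, -6.0f, 2.0f, 4.0f};
  float out[4];
  LogSoftmaxRow(row, 4, out);
  // Expected values from the test above: -4.14297 -10.14297 -2.14297 -0.142971
  for (float v : out) std::printf("%g ", v);
  std::printf("\n");
  return 0;
}
```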
|
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator TRANSPOSE from lite to micro. @advaitjain \r\n\r\nIt will be delivered in a series of PRs.\r\n\r\nPR 1 (merged): Refactor flatbuffer_conversions #45439 \r\nPR 2 (merged): Refactor transpose reference op #45438 \r\nPR 3 (merged): Copy of the reference kernel from lite to micro without changes #45843 \r\nPR 4: Modify the micro kernel, port the tests and add the kernel to the micro build (as three separate commits) #47446\r\n\r\n",
"comments": [
{
"body": "@driedler Sorry for a delayed response. I missed your comment. Actually this issue relates to porting the TRANSPOSE op, the files you liked are for the TRANSPOSE_CONV op. It was ported here https://github.com/tensorflow/tensorflow/commit/d9841dfd9689f9c4e0bc4e1229dbc354f01ebc1b\r\n\r\nIs it the transpose or transpose_conv you are interested in? :)",
"created_at": "2021-02-02T11:58:02Z"
},
{
"body": "@patriklaurell Yes. My apologies, TRANSPOSE_CONV has the issue. Please disregard.",
"created_at": "2021-02-11T00:24:13Z"
},
{
"body": "Hi @patriklaurell,\r\n\r\nWhich is the status of your PR5? I would need to consume TRANSPOSE from TFLM as well and I would prefer not to duplicate the effort, if you are already working on it :)\r\n\r\nThank you.",
"created_at": "2021-03-29T12:18:32Z"
},
{
"body": "I created a PR [48192](https://github.com/tensorflow/tensorflow/pull/48192) that should solve this issue.",
"created_at": "2021-03-30T15:56:42Z"
},
{
"body": "@dmpiergiacomo sorry for a late response. I have been on vacation over the easter week. I don't know if it is still relevant but I have the code for PR5 ready locally. I have not uploaded it since it depends on the changes in PR4 #47446. ",
"created_at": "2021-04-06T09:50:42Z"
},
{
"body": "With the merge of #47446 this issue is fixed.",
"created_at": "2021-05-21T07:47:54Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/45695\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/45695\">No</a>\n",
"created_at": "2021-05-21T07:47:56Z"
}
],
"number": 45695,
"title": "micro: Port TRANSPOSE from lite to micro"
}
|
{
"body": "This PR modifies the transpose kernel to make it run in micro, ports the tests and adds the kernel to the micro build.\r\n\r\nThis is PR 4/4 in delivering #45695",
"number": 47446,
"review_comments": [
{
"body": "remove this #define and directly call the reference function from the switch case.",
"created_at": "2021-05-05T16:24:13Z"
},
{
"body": "only add support for int8 and float32 for the initial port PR. Everything else should be done separately on an as-needed basis.",
"created_at": "2021-05-05T16:25:17Z"
}
],
"title": "Modify Transpose kernel to work in TFLu"
}
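In the micro port described above, the kernel no longer resizes its output at runtime; the Prepare step (see the transpose.cc diff in the next cell) only verifies that every perm entry is in range, and the tests derive the output shape as output_dims[i] = input_dims[perm[i]]. Below is a minimal standalone sketch of that rule; the function and variable names are illustrative, not TFLM code:

```cpp
#include <cstdio>

// Sketch of the shape rule: each perm entry must lie in [0, dims) and the
// transposed shape is output_dims[i] = input_dims[perm[i]].
bool ComputeTransposedShape(const int* input_dims, const int* perm, int dims,
                            int* output_dims) {
  for (int i = 0; i < dims; ++i) {
    if (perm[i] < 0 || perm[i] >= dims) return false;  // out-of-bounds permutation entry
    output_dims[i] = input_dims[perm[i]];
  }
  return true;
}

int main() {
  const int input_dims[3] = {2, 3, 4};
  const int perm[3] = {2, 0, 1};
  int output_dims[3];
  if (ComputeTransposedShape(input_dims, perm, 3, output_dims)) {
    // Prints 4 2 3: the last input axis becomes the first output axis, and so on.
    std::printf("%d %d %d\n", output_dims[0], output_dims[1], output_dims[2]);
  }
  return 0;
}
```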
|
{
"commits": [
{
"message": "Modify Transpose kernel to work in TFLu"
},
{
"message": "Port tests for Transpose to micro"
},
{
"message": "Add transpose to micro build"
},
{
"message": "Address advaitjain's comments"
}
],
"files": [
{
"diff": "@@ -91,6 +91,7 @@ AllOpsResolver::AllOpsResolver() {\n AddSvdf();\n AddTanh();\n AddTransposeConv();\n+ AddTranspose();\n AddUnpack();\n }\n ",
"filename": "tensorflow/lite/micro/all_ops_resolver.cc",
"status": "modified"
},
{
"diff": "@@ -308,6 +308,7 @@ cc_library(\n \"svdf_common.cc\",\n \"tanh.cc\",\n \"transpose_conv.cc\",\n+ \"transpose.cc\",\n \"unpack.cc\",\n \"zeros_like.cc\",\n ] + select({\n@@ -1166,6 +1167,17 @@ cc_test(\n ],\n )\n \n+cc_test(\n+ name = \"transpose_test\",\n+ srcs = [\"transpose_test.cc\"],\n+ deps = [\n+ \":kernel_runner\",\n+ \"//tensorflow/lite/c:common\",\n+ \"//tensorflow/lite/micro:test_helpers\",\n+ \"//tensorflow/lite/micro/testing:micro_test\",\n+ ],\n+)\n+\n cc_test(\n name = \"transpose_conv_test\",\n srcs = [",
"filename": "tensorflow/lite/micro/kernels/BUILD",
"status": "modified"
},
{
"diff": "@@ -55,6 +55,7 @@ TfLiteRegistration Register_SOFTMAX();\n TfLiteRegistration Register_SPACE_TO_BATCH_ND();\n TfLiteRegistration Register_SQUEEZE();\n TfLiteRegistration Register_SVDF();\n+TfLiteRegistration Register_TRANSPOSE();\n TfLiteRegistration Register_TRANSPOSE_CONV();\n TfLiteRegistration Register_ZEROS_LIKE();\n ",
"filename": "tensorflow/lite/micro/kernels/micro_ops.h",
"status": "modified"
},
{
"diff": "@@ -12,27 +12,15 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n ==============================================================================*/\n-#include <stdint.h>\n+#include \"tensorflow/lite/kernels/internal/reference/transpose.h\"\n \n #include \"tensorflow/lite/c/common.h\"\n-#include \"tensorflow/lite/kernels/internal/compatibility.h\"\n-#include \"tensorflow/lite/kernels/internal/optimized/optimized_ops.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/reference_ops.h\"\n-#include \"tensorflow/lite/kernels/internal/tensor.h\"\n #include \"tensorflow/lite/kernels/internal/tensor_ctypes.h\"\n #include \"tensorflow/lite/kernels/internal/types.h\"\n #include \"tensorflow/lite/kernels/kernel_util.h\"\n \n namespace tflite {\n-namespace ops {\n-namespace builtin {\n-namespace transpose {\n-\n-// This file has two implementations of Transpose.\n-enum KernelType {\n- kReference,\n- kGenericOptimized,\n-};\n+namespace {\n \n struct TransposeContext {\n TransposeContext(TfLiteContext* context, TfLiteNode* node) {\n@@ -45,29 +33,6 @@ struct TransposeContext {\n TfLiteTensor* output;\n };\n \n-TfLiteStatus ResizeOutputTensor(TfLiteContext* context,\n- TransposeContext* op_context) {\n- int dims = NumDimensions(op_context->input);\n- const int* perm_data = GetTensorData<int32_t>(op_context->perm);\n-\n- // Ensure validity of the permutations tensor as a 1D tensor.\n- TF_LITE_ENSURE_EQ(context, NumDimensions(op_context->perm), 1);\n- TF_LITE_ENSURE_EQ(context, op_context->perm->dims->data[0], dims);\n- for (int idx = 0; idx < dims; ++idx) {\n- TF_LITE_ENSURE_MSG(context, (perm_data[idx] >= 0 && perm_data[idx] < dims),\n- \"Transpose op permutations array is out of bounds.\");\n- }\n-\n- // Determine size of output tensor.\n- TfLiteIntArray* input_size = op_context->input->dims;\n- TfLiteIntArray* output_size = TfLiteIntArrayCopy(input_size);\n- for (int idx = 0; idx < dims; ++idx) {\n- output_size->data[idx] = input_size->data[perm_data[idx]];\n- }\n-\n- return context->ResizeTensor(context, op_context->output, output_size);\n-}\n-\n TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n TF_LITE_ENSURE_EQ(context, NumInputs(node), 2);\n TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);\n@@ -80,102 +45,68 @@ TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n TF_LITE_ENSURE_TYPES_EQ(context, op_context.input->type,\n op_context.output->type);\n \n- if (!IsConstantTensor(op_context.perm)) {\n- SetTensorToDynamic(op_context.output);\n- return kTfLiteOk;\n+ int dims = NumDimensions(op_context.input);\n+ const int32_t* perm_data = GetTensorData<int32_t>(op_context.perm);\n+\n+ // Ensure validity of the permutations tensor as a 1D tensor.\n+ TF_LITE_ENSURE_EQ(context, NumDimensions(op_context.perm), 1);\n+ TF_LITE_ENSURE_EQ(context, op_context.perm->dims->data[0], dims);\n+ for (int idx = 0; idx < dims; ++idx) {\n+ TF_LITE_ENSURE_MSG(context, (perm_data[idx] >= 0 && perm_data[idx] < dims),\n+ \"Transpose op permutations array is out of bounds.\");\n }\n- return ResizeOutputTensor(context, &op_context);\n+\n+ return kTfLiteOk;\n }\n \n-template <KernelType kernel_type>\n TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n TransposeContext op_context(context, node);\n \n- // Resize the output tensor if the output tensor is dynamic.\n- if (IsDynamicTensor(op_context.output)) {\n- TF_LITE_ENSURE_OK(context, 
ResizeOutputTensor(context, &op_context));\n- }\n-\n- const int* perm_data = GetTensorData<int32_t>(op_context.perm);\n+ const int32_t* perm_data = GetTensorData<int32_t>(op_context.perm);\n const int size = op_context.perm->dims->data[0];\n TransposeParams params;\n params.perm_count = size;\n for (int i = 0; i < size; ++i) {\n params.perm[i] = perm_data[i];\n }\n \n-#define TF_LITE_TRANSPOSE(type, scalar) \\\n- type::Transpose(params, GetTensorShape(op_context.input), \\\n- GetTensorData<scalar>(op_context.input), \\\n- GetTensorShape(op_context.output), \\\n- GetTensorData<scalar>(op_context.output))\n-\n- // Transpose kernel only does rearranging values not numeric evaluations on\n- // each cell. It's safe to implement per size of scalar type and this trick\n- // keeps the total code size in a reasonable range.\n+ // Transpose kernel only does rearranging values not numeric evaluations\n+ // on each cell. It's safe to implement per size of scalar type and this\n+ // trick keeps the total code size in a reasonable range.\n switch (op_context.input->type) {\n case kTfLiteFloat32:\n- case kTfLiteInt32:\n- if (kernel_type == kGenericOptimized) {\n- TF_LITE_TRANSPOSE(optimized_ops, int32_t);\n- } else {\n- TF_LITE_TRANSPOSE(reference_ops, int32_t);\n- }\n+ reference_ops::Transpose(params, GetTensorShape(op_context.input),\n+ GetTensorData<float>(op_context.input),\n+ GetTensorShape(op_context.output),\n+ GetTensorData<float>(op_context.output));\n break;\n- case kTfLiteUInt8:\n case kTfLiteInt8:\n- if (kernel_type == kGenericOptimized) {\n- TF_LITE_TRANSPOSE(optimized_ops, int8_t);\n- } else {\n- TF_LITE_TRANSPOSE(reference_ops, int8_t);\n- }\n- break;\n- case kTfLiteInt16:\n- TF_LITE_TRANSPOSE(reference_ops, int16_t);\n- break;\n- case kTfLiteInt64:\n- TF_LITE_TRANSPOSE(reference_ops, int64_t);\n- break;\n- case kTfLiteBool:\n- if (sizeof(bool) == 1) {\n- if (kernel_type == kGenericOptimized) {\n- TF_LITE_TRANSPOSE(optimized_ops, int8_t);\n- } else {\n- TF_LITE_TRANSPOSE(reference_ops, int8_t);\n- }\n- } else {\n- TF_LITE_TRANSPOSE(reference_ops, bool);\n- }\n+ reference_ops::Transpose(params, GetTensorShape(op_context.input),\n+ GetTensorData<int8_t>(op_context.input),\n+ GetTensorShape(op_context.output),\n+ GetTensorData<int8_t>(op_context.output));\n break;\n default:\n TF_LITE_KERNEL_LOG(context,\n- \"Type %s is currently not supported by Transpose.\",\n+ \"Type %s is currently not supported by Transpose. \"\n+ \"Only float32 and int8 is supported\",\n TfLiteTypeGetName(op_context.input->type));\n return kTfLiteError;\n }\n-#undef TF_LITE_TRANSPOSE\n \n return kTfLiteOk;\n }\n \n-} // namespace transpose\n-\n-TfLiteRegistration* Register_TRANSPOSE_REF() {\n- static TfLiteRegistration r = {nullptr, nullptr, transpose::Prepare,\n- transpose::Eval<transpose::kReference>};\n- return &r;\n-}\n-\n-TfLiteRegistration* Register_TRANSPOSE_GENERIC_OPTIMIZED() {\n- static TfLiteRegistration r = {nullptr, nullptr, transpose::Prepare,\n- transpose::Eval<transpose::kGenericOptimized>};\n- return &r;\n-}\n-\n-TfLiteRegistration* Register_TRANSPOSE() {\n- return Register_TRANSPOSE_GENERIC_OPTIMIZED();\n+} // namespace\n+\n+TfLiteRegistration Register_TRANSPOSE() {\n+ return {/*init=*/nullptr,\n+ /*free=*/nullptr,\n+ /*prepare=*/Prepare,\n+ /*invoke=*/Eval,\n+ /*profiling_string=*/nullptr,\n+ /*builtin_code=*/0,\n+ /*custom_name=*/nullptr,\n+ /*version=*/0};\n }\n-\n-} // namespace builtin\n-} // namespace ops\n } // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/transpose.cc",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,614 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+\n+#include \"tensorflow/lite/kernels/internal/reference/transpose.h\"\n+\n+#include \"tensorflow/lite/c/builtin_op_data.h\"\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/kernels/internal/portable_tensor.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_runner.h\"\n+#include \"tensorflow/lite/micro/micro_utils.h\"\n+#include \"tensorflow/lite/micro/test_helpers.h\"\n+#include \"tensorflow/lite/micro/testing/micro_test.h\"\n+\n+namespace tflite {\n+namespace testing {\n+namespace {\n+\n+template <typename T>\n+void RunTestPermutation(int num_dims, const int32_t* shape,\n+ const int32_t* perms, T* input, T* input_transposed) {\n+ // Count elements and allocate output.\n+ int count = 1;\n+ for (int i = 0; i < num_dims; i++) {\n+ count *= shape[i];\n+ }\n+\n+ // Create the dummy data\n+ for (int i = 0; i < count; i++) {\n+ input[i] = i;\n+ }\n+\n+ // Make input and output shapes.\n+ const RuntimeShape input_shape = RuntimeShape(num_dims, shape);\n+ RuntimeShape output_shape(num_dims);\n+\n+ for (int i = 0; i < num_dims; i++) {\n+ output_shape.SetDim(i, shape[perms[i]]);\n+ }\n+\n+ TransposeParams params;\n+ params.perm_count = num_dims;\n+ for (int i = 0; i < num_dims; ++i) {\n+ params.perm[i] = perms[i];\n+ }\n+\n+ reference_ops::Transpose<T>(params, input_shape, input, output_shape,\n+ input_transposed);\n+}\n+\n+template <typename T>\n+TfLiteStatus InvokeTranspose(TfLiteTensor* tensors, int tensors_size,\n+ T* output_data, int output_length,\n+ TransposeParams* params) {\n+ int inputs_array_data[] = {2, 0, 1};\n+ TfLiteIntArray* inputs_array = IntArrayFromInts(inputs_array_data);\n+ int outputs_array_data[] = {1, 2};\n+ TfLiteIntArray* outputs_array = IntArrayFromInts(outputs_array_data);\n+\n+ const TfLiteRegistration registration = Register_TRANSPOSE();\n+ micro::KernelRunner runner(registration, tensors, tensors_size, inputs_array,\n+ outputs_array, reinterpret_cast<void*>(params));\n+\n+ const char* init_data = reinterpret_cast<const char*>(params);\n+ TfLiteStatus status = runner.InitAndPrepare(init_data);\n+ if (status != kTfLiteOk) {\n+ return status;\n+ }\n+ return runner.Invoke();\n+}\n+\n+template <typename T>\n+TfLiteStatus ValidateTranspose(TfLiteTensor* tensors, int tensors_size,\n+ const T* expected_output_data, T* output_data,\n+ int output_length,\n+ tflite::TransposeParams* params,\n+ float tolerance = 1e-5) {\n+ TfLiteStatus status = InvokeTranspose(tensors, tensors_size, output_data,\n+ output_length, params);\n+ if (status != kTfLiteOk) {\n+ return status;\n+ }\n+\n+ for (int i = 0; i < output_length; ++i) {\n+ TF_LITE_MICRO_EXPECT_EQ(expected_output_data[i], output_data[i]);\n+ }\n+ return kTfLiteOk;\n+}\n+\n+template <typename T>\n+void TestTranspose(const int* input_dims_data, T* input_data,\n+ const int* output_dims_data, 
const T* expected_output_data,\n+ T* output_data, TransposeParams* params) {\n+ TfLiteIntArray* input_dims = IntArrayFromInts(input_dims_data);\n+ TfLiteIntArray* output_dims = IntArrayFromInts(output_dims_data);\n+ const int input_size = ElementCount(*input_dims);\n+ for (int i = 0; i < input_size; i++) {\n+ input_data[i] = i;\n+ }\n+\n+ for (int i = 0; i < input_dims->size; i++) {\n+ output_dims->data[i] = input_dims->data[params->perm[i]];\n+ }\n+\n+ const int perm_dims_data[] = {1, params->perm_count};\n+ TfLiteIntArray* perm_dims = IntArrayFromInts(perm_dims_data);\n+ const int output_dims_count = ElementCount(*output_dims);\n+ constexpr int inputs_size = 2;\n+ constexpr int outputs_size = 1;\n+ constexpr int tensors_size = inputs_size + outputs_size;\n+ TfLiteTensor tensors[tensors_size] = {\n+ CreateTensor(input_data, input_dims),\n+ CreateTensor(params->perm, perm_dims),\n+ CreateTensor(output_data, output_dims),\n+ };\n+\n+ TF_LITE_MICRO_EXPECT_EQ(\n+ kTfLiteOk, ValidateTranspose(tensors, tensors_size, expected_output_data,\n+ output_data, output_dims_count, params));\n+}\n+\n+} // namespace\n+} // namespace testing\n+} // namespace tflite\n+\n+TF_LITE_MICRO_TESTS_BEGIN\n+\n+TF_LITE_MICRO_TEST(1D) {\n+ const int input_dims_data[] = {1, 3};\n+ const int output_dims_data[] = {1, 3};\n+\n+ int8_t input_data[3];\n+ int8_t output_data[3];\n+ const int8_t expected_output_data[] = {0, 1, 2};\n+\n+ tflite::TransposeParams params = {1, {0}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(2DPerm1) {\n+ const int input_dims_data[] = {2, 3, 2};\n+ const int output_dims_data[] = {2, 3, 2};\n+\n+ int8_t input_data[6];\n+ int8_t output_data[6];\n+ const int8_t expected_output_data[] = {0, 2, 4, 1, 3, 5};\n+\n+ tflite::TransposeParams params = {2, {1, 0}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(2D4x4KernelLeftOverRightSide) {\n+ const int input_dims_data[] = {2, 4, 6};\n+ const int output_dims_data[] = {2, 4, 6};\n+\n+ int8_t input_data[24];\n+ int8_t output_data[24];\n+ const int8_t expected_output_data[] = {0, 6, 12, 18, 1, 7, 13, 19,\n+ 2, 8, 14, 20, 3, 9, 15, 21,\n+ 4, 10, 16, 22, 5, 11, 17, 23};\n+\n+ tflite::TransposeParams params = {2, {1, 0}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(2D4x4KernelLeftOverBottomSide) {\n+ const int input_dims_data[] = {2, 6, 4};\n+ const int output_dims_data[] = {2, 4, 6};\n+\n+ int8_t input_data[24];\n+ int8_t output_data[24];\n+ const int8_t expected_output_data[] = {0, 4, 8, 12, 16, 20, 1, 5,\n+ 9, 13, 17, 21, 2, 6, 10, 14,\n+ 18, 22, 3, 7, 11, 15, 19, 23};\n+\n+ tflite::TransposeParams params = {2, {1, 0}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(3D) {\n+ const int input_dims_data[] = {3, 2, 3, 4};\n+ const int output_dims_data[] = {3, 2, 3, 4};\n+\n+ int8_t input_data[24];\n+ int8_t output_data[24];\n+ const int8_t expected_output_data[] = {0, 4, 8, 12, 16, 20, 1, 5,\n+ 9, 13, 17, 21, 2, 6, 10, 14,\n+ 18, 22, 3, 7, 11, 15, 19, 23};\n+\n+ tflite::TransposeParams params = {3, {2, 0, 1}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, 
¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(1DNotShrinked) {\n+ const int input_dims_data[] = {1, 1};\n+ const int output_dims_data[] = {1, 1};\n+\n+ float input_data[1];\n+ float output_data[1];\n+ const float expected_output_data[] = {0};\n+\n+ tflite::TransposeParams params = {1, {0}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(2DShrinkedOneTime) {\n+ const int input_dims_data[] = {2, 2, 1};\n+ const int output_dims_data[] = {2, 2, 1};\n+\n+ float input_data[2];\n+ float output_data[2];\n+ const float expected_output_data[] = {0, 1};\n+\n+ tflite::TransposeParams params = {2, {1, 0}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(2DShrinkedTwoTimes) {\n+ const int input_dims_data[] = {2, 1, 1};\n+ const int output_dims_data[] = {2, 1, 1};\n+\n+ float input_data[1];\n+ float output_data[1];\n+ const float expected_output_data[] = {0};\n+\n+ tflite::TransposeParams params = {2, {1, 0}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(3DShrinkedOneTime) {\n+ const int input_dims_data[] = {3, 2, 1, 3};\n+ const int output_dims_data[] = {3, 2, 1, 3};\n+\n+ float input_data[6];\n+ float output_data[6];\n+ const float expected_output_data[] = {0, 1, 2, 3, 4, 5};\n+\n+ tflite::TransposeParams params = {3, {0, 2, 1}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(3DShrinkedTwoTimes) {\n+ const int input_dims_data[] = {3, 1, 1, 3};\n+ const int output_dims_data[] = {3, 1, 1, 3};\n+\n+ float input_data[3];\n+ float output_data[3];\n+ const float expected_output_data[] = {0, 1, 2};\n+\n+ tflite::TransposeParams params = {3, {1, 2, 0}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(3DShrinkedAll) {\n+ const int input_dims_data[] = {3, 1, 1, 1};\n+ const int output_dims_data[] = {3, 1, 1, 1};\n+\n+ float input_data[1];\n+ float output_data[1];\n+ const float expected_output_data[] = {0};\n+\n+ tflite::TransposeParams params = {3, {1, 2, 0}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(4DShrinkedOneTimes) {\n+ const int input_dims_data[] = {4, 2, 2, 3, 1};\n+ const int output_dims_data[] = {4, 2, 2, 3, 1};\n+\n+ float input_data[12];\n+ float output_data[12];\n+ const float expected_output_data[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11};\n+\n+ tflite::TransposeParams params = {4, {3, 0, 1, 2}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(4DShrinkedTwoTimes) {\n+ const int input_dims_data[] = {4, 2, 1, 3, 1};\n+ const int output_dims_data[] = {4, 2, 1, 3, 1};\n+\n+ float input_data[6];\n+ float output_data[6];\n+ const float expected_output_data[] = {0, 1, 2, 3, 4, 5};\n+\n+ tflite::TransposeParams params = {4, {0, 3, 1, 2}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(4DShrinkedThreeTimes) {\n+ const int input_dims_data[] = {4, 2, 1, 1, 
1};\n+ const int output_dims_data[] = {4, 2, 1, 1, 1};\n+\n+ float input_data[2];\n+ float output_data[2];\n+ const float expected_output_data[] = {0, 1};\n+\n+ tflite::TransposeParams params = {4, {3, 2, 1, 0}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(4DShrinkedFourTimes) {\n+ const int input_dims_data[] = {4, 1, 1, 1, 1};\n+ const int output_dims_data[] = {4, 1, 1, 1, 1};\n+\n+ float input_data[1];\n+ float output_data[1];\n+ const float expected_output_data[] = {0};\n+\n+ tflite::TransposeParams params = {4, {2, 3, 1, 0}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(3DFlatten) {\n+ const int input_dims_data[] = {3, 2, 2, 3};\n+ const int output_dims_data[] = {3, 2, 2, 3};\n+\n+ float input_data[12];\n+ float output_data[12];\n+ const float expected_output_data[] = {0, 3, 1, 4, 2, 5, 6, 9, 7, 10, 8, 11};\n+\n+ tflite::TransposeParams params = {3, {0, 2, 1}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(4DFlatten) {\n+ const int input_dims_data[] = {4, 2, 2, 2, 2};\n+ const int output_dims_data[] = {4, 2, 2, 2, 2};\n+\n+ float input_data[16];\n+ float output_data[16];\n+ const float expected_output_data[] = {0, 2, 1, 3, 4, 6, 5, 7,\n+ 8, 10, 9, 11, 12, 14, 13, 15};\n+\n+ tflite::TransposeParams params = {4, {0, 1, 3, 2}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(4DFlattenTwo) {\n+ const int input_dims_data[] = {4, 2, 2, 2, 2};\n+ const int output_dims_data[] = {4, 2, 2, 2, 2};\n+\n+ float input_data[16];\n+ float output_data[16];\n+ const float expected_output_data[] = {0, 4, 1, 5, 2, 6, 3, 7,\n+ 8, 12, 9, 13, 10, 14, 11, 15};\n+\n+ tflite::TransposeParams params = {4, {0, 2, 3, 1}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(3DDividedIntoTwo2DsOne) {\n+ float input_data[24];\n+ float expected_output_data[24];\n+ int32_t shape[] = {2, 3, 4};\n+ int32_t perms[] = {1, 2, 0};\n+ tflite::testing::RunTestPermutation(3, shape, perms, input_data,\n+ expected_output_data);\n+ const int input_dims_data[] = {3, 2, 3, 4};\n+ const int output_dims_data[] = {3, 2, 3, 4};\n+\n+ float output_data[24];\n+\n+ tflite::TransposeParams params = {3, {1, 2, 0}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(3DDividedIntoTwo2DsTwo) {\n+ float input_data[24];\n+ float expected_output_data[24];\n+ int32_t shape[] = {2, 3, 4};\n+ int32_t perms[] = {2, 0, 1};\n+ tflite::testing::RunTestPermutation(3, shape, perms, input_data,\n+ expected_output_data);\n+ const int input_dims_data[] = {3, 2, 3, 4};\n+ const int output_dims_data[] = {3, 2, 3, 4};\n+\n+ float output_data[24];\n+\n+ tflite::TransposeParams params = {3, {2, 0, 1}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(4DDividedIntoTwo2DsOne) {\n+ int32_t shape[] = {2, 3, 4, 2};\n+ int32_t perms[] = {1, 2, 3, 0};\n+ float input_data[48];\n+ float expected_output_data[48];\n+ 
tflite::testing::RunTestPermutation(4, shape, perms, input_data,\n+ expected_output_data);\n+ const int input_dims_data[] = {4, 2, 3, 4, 2};\n+ const int output_dims_data[] = {4, 2, 3, 4, 2};\n+\n+ float output_data[48];\n+\n+ tflite::TransposeParams params = {4, {1, 2, 3, 0}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+TF_LITE_MICRO_TEST(4DDividedIntoTwo2DsTwo) {\n+ int32_t shape[] = {2, 3, 4, 2};\n+ int32_t perms[] = {2, 3, 0, 1};\n+ float input_data[48];\n+ float expected_output_data[48];\n+ tflite::testing::RunTestPermutation(4, shape, perms, input_data,\n+ expected_output_data);\n+ const int input_dims_data[] = {4, 2, 3, 4, 2};\n+ const int output_dims_data[] = {4, 2, 3, 4, 2};\n+\n+ float output_data[48];\n+\n+ tflite::TransposeParams params = {4, {2, 3, 0, 1}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(4DDividedIntoTwo2DsThree) {\n+ int32_t shape[] = {2, 3, 4, 2};\n+ int32_t perms[] = {3, 0, 1, 2};\n+ float input_data[48];\n+ float expected_output_data[48];\n+ tflite::testing::RunTestPermutation(4, shape, perms, input_data,\n+ expected_output_data);\n+ const int input_dims_data[] = {4, 2, 3, 4, 2};\n+ const int output_dims_data[] = {4, 2, 3, 4, 2};\n+\n+ float output_data[48];\n+\n+ tflite::TransposeParams params = {4, {3, 0, 1, 2}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(5DDividedIntoTwo2DsOne) {\n+ int32_t shape[] = {2, 3, 2, 2, 2};\n+ int32_t perms[] = {1, 4, 2, 3, 0};\n+ float input_data[48];\n+ float expected_output_data[48];\n+ tflite::testing::RunTestPermutation(5, shape, perms, input_data,\n+ expected_output_data);\n+ const int input_dims_data[] = {5, 2, 3, 2, 2, 2};\n+ const int output_dims_data[] = {5, 2, 3, 2, 2, 2};\n+\n+ float output_data[48];\n+\n+ tflite::TransposeParams params = {5, {1, 4, 2, 3, 0}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(5DDividedIntoTwo2DsTwo) {\n+ int32_t shape[] = {2, 3, 2, 2, 2};\n+ int32_t perms[] = {2, 3, 0, 4, 1};\n+ float input_data[48];\n+ float expected_output_data[48];\n+ tflite::testing::RunTestPermutation(5, shape, perms, input_data,\n+ expected_output_data);\n+ const int input_dims_data[] = {5, 2, 3, 2, 2, 2};\n+ const int output_dims_data[] = {5, 2, 3, 2, 2, 2};\n+\n+ float output_data[48];\n+\n+ tflite::TransposeParams params = {5, {2, 3, 0, 4, 1}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(5DDividedIntoTwo2DsThree) {\n+ int32_t shape[] = {2, 3, 2, 2, 2};\n+ int32_t perms[] = {3, 0, 4, 1, 2};\n+ float input_data[48];\n+ float expected_output_data[48];\n+ tflite::testing::RunTestPermutation(5, shape, perms, input_data,\n+ expected_output_data);\n+ const int input_dims_data[] = {5, 2, 3, 2, 2, 2};\n+ const int output_dims_data[] = {5, 2, 3, 2, 2, 2};\n+\n+ float output_data[48];\n+\n+ tflite::TransposeParams params = {5, {3, 0, 4, 1, 2}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(SimpleTestNoReorder) {\n+ const int input_dims_data[] = {4, 1, 2, 3, 1};\n+ const int 
output_dims_data[] = {4, 1, 2, 3, 1};\n+\n+ float input_data[6];\n+ float output_data[6];\n+ const float expected_output_data[] = {0, 1, 2, 3, 4, 5};\n+\n+ tflite::TransposeParams params = {4, {0, 1, 2, 3}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(SimpleTestWithReorder) {\n+ const int input_dims_data[] = {4, 1, 2, 3, 1};\n+ const int output_dims_data[] = {4, 1, 2, 3, 1};\n+\n+ float input_data[6];\n+ float output_data[6];\n+ const float expected_output_data[] = {0, 3, 1, 4, 2, 5};\n+\n+ tflite::TransposeParams params = {4, {2, 1, 3, 0}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(ComplexTestWithReorder) {\n+ const int input_dims_data[] = {4, 2, 3, 4, 5};\n+ const int output_dims_data[] = {4, 2, 3, 4, 5};\n+\n+ float input_data[120];\n+ float output_data[120];\n+ const float expected_output_data[] = {\n+ 0, 1, 2, 3, 4, 20, 21, 22, 23, 24, 40, 41, 42, 43, 44,\n+ 60, 61, 62, 63, 64, 80, 81, 82, 83, 84, 100, 101, 102, 103, 104,\n+ 5, 6, 7, 8, 9, 25, 26, 27, 28, 29, 45, 46, 47, 48, 49,\n+ 65, 66, 67, 68, 69, 85, 86, 87, 88, 89, 105, 106, 107, 108, 109,\n+ 10, 11, 12, 13, 14, 30, 31, 32, 33, 34, 50, 51, 52, 53, 54,\n+ 70, 71, 72, 73, 74, 90, 91, 92, 93, 94, 110, 111, 112, 113, 114,\n+ 15, 16, 17, 18, 19, 35, 36, 37, 38, 39, 55, 56, 57, 58, 59,\n+ 75, 76, 77, 78, 79, 95, 96, 97, 98, 99, 115, 116, 117, 118, 119};\n+\n+ tflite::TransposeParams params = {4, {2, 0, 1, 3}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TEST(Complex5DTestWithReorder) {\n+ const int input_dims_data[] = {5, 2, 3, 2, 2, 5};\n+ const int output_dims_data[] = {5, 2, 3, 2, 2, 5};\n+\n+ float input_data[120];\n+ float output_data[120];\n+ const float expected_output_data[] = {\n+ 0, 5, 1, 6, 2, 7, 3, 8, 4, 9, 20, 25, 21, 26, 22,\n+ 27, 23, 28, 24, 29, 40, 45, 41, 46, 42, 47, 43, 48, 44, 49,\n+ 60, 65, 61, 66, 62, 67, 63, 68, 64, 69, 80, 85, 81, 86, 82,\n+ 87, 83, 88, 84, 89, 100, 105, 101, 106, 102, 107, 103, 108, 104, 109,\n+ 10, 15, 11, 16, 12, 17, 13, 18, 14, 19, 30, 35, 31, 36, 32,\n+ 37, 33, 38, 34, 39, 50, 55, 51, 56, 52, 57, 53, 58, 54, 59,\n+ 70, 75, 71, 76, 72, 77, 73, 78, 74, 79, 90, 95, 91, 96, 92,\n+ 97, 93, 98, 94, 99, 110, 115, 111, 116, 112, 117, 113, 118, 114, 119};\n+\n+ tflite::TransposeParams params = {5, {2, 0, 1, 4, 3}};\n+\n+ tflite::testing::TestTranspose(input_dims_data, input_data, output_dims_data,\n+ expected_output_data, output_data, ¶ms);\n+}\n+\n+TF_LITE_MICRO_TESTS_END",
"filename": "tensorflow/lite/micro/kernels/transpose_test.cc",
"status": "added"
},
{
"diff": "@@ -496,6 +496,11 @@ class MicroMutableOpResolver : public MicroOpResolver {\n tflite::Register_TRANSPOSE_CONV(), ParseTransposeConv);\n }\n \n+ TfLiteStatus AddTranspose() {\n+ return AddBuiltin(BuiltinOperator_TRANSPOSE, Register_TRANSPOSE(),\n+ ParseTranspose);\n+ }\n+\n TfLiteStatus AddUnpack() {\n return AddBuiltin(BuiltinOperator_UNPACK,\n tflite::ops::micro::Register_UNPACK(), ParseUnpack);",
"filename": "tensorflow/lite/micro/micro_mutable_op_resolver.h",
"status": "modified"
},
{
"diff": "@@ -313,6 +313,7 @@ tensorflow/lite/micro/kernels/strided_slice_test.cc \\\n tensorflow/lite/micro/kernels/sub_test.cc \\\n tensorflow/lite/micro/kernels/svdf_test.cc \\\n tensorflow/lite/micro/kernels/tanh_test.cc \\\n+tensorflow/lite/micro/kernels/transpose_test.cc \\\n tensorflow/lite/micro/kernels/transpose_conv_test.cc \\\n tensorflow/lite/micro/kernels/unpack_test.cc \\\n tensorflow/lite/micro/kernels/zeros_like_test.cc \\\n@@ -384,6 +385,7 @@ tensorflow/lite/micro/kernels/sub.cc \\\n tensorflow/lite/micro/kernels/svdf.cc \\\n tensorflow/lite/micro/kernels/svdf_common.cc \\\n tensorflow/lite/micro/kernels/tanh.cc \\\n+tensorflow/lite/micro/kernels/transpose.cc \\\n tensorflow/lite/micro/kernels/transpose_conv.cc \\\n tensorflow/lite/micro/kernels/unpack.cc \\\n tensorflow/lite/micro/kernels/zeros_like.cc\n@@ -478,6 +480,7 @@ tensorflow/lite/kernels/internal/reference/sub.h \\\n tensorflow/lite/kernels/internal/reference/logistic.h \\\n tensorflow/lite/kernels/internal/reference/strided_slice.h \\\n tensorflow/lite/kernels/internal/reference/tanh.h \\\n+tensorflow/lite/kernels/internal/reference/transpose.h \\\n tensorflow/lite/kernels/internal/reference/transpose_conv.h \\\n tensorflow/lite/kernels/internal/cppmath.h \\\n tensorflow/lite/kernels/internal/max.h \\",
"filename": "tensorflow/lite/micro/tools/make/Makefile",
"status": "modified"
}
]
}
|
{
"body": "@tensorflow/micro\r\n\r\nI hit this issue while working on https://github.com/tensorflow/tensorflow/pull/46904:\r\n\r\n * If I create a global pointer variable (explicitly initialized to nullptr) and check for the variable == nullptr in my factory function, the check always returns false.\r\n\r\nAFAICT, this behavior is specific to our use of Renode. I have not been able to reproduce on Linux or with the Xtensa simulator.\r\n\r\nI have a workaround that should allow #46904 to be merged and I will then update this issue with a cleaner way to reproduce this error.",
"comments": [
{
"body": "The reason is in `_zero_initialize_bss_data` (tensorflow/lite/micro/tools/make/downloads/stm32_bare_lib/source/startup.c). It initializes the BSS data with DEADBEEF.\r\n\r\nTo verify that:\r\n\r\nBuild with debug info\r\n```\r\nmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=bluepill BUILD_TYPE=debug test_renode -j`nproc`\r\n```\r\n\r\nRun in Renode:\r\n```\r\ninclude @tensorflow/lite/micro/testing/bluepill_nontest.resc; sysbus LoadELF @tensorflow/lite/micro/tools/make/gen/bluepill_cortex-m3_debug/bin/test_renode; machine StartGdbServer 3333\r\n```\r\n\r\nStart GDB:\r\n```\r\ntensorflow/lite/micro/tools/make/downloads/gcc_embedded/bin/arm-none-eabi-gdb tensorflow/lite/micro/tools/make/gen/bluepill_cortex-m3_debug/bin/test_renode\r\n```\r\n\r\nIn GDB:\r\n```\r\ntar rem :3333\r\nwatch *0x20000008\r\nmon s\r\nc\r\n```\r\n\r\nIt will break in the code that actually writes this data.\r\n\r\nNow why do they go to bss instead of data? I don't know.\r\n\r\nHere's an excerpt from the objdump:\r\n\r\n```\r\nDisassembly of section .data:\r\n\r\n20000000 <a>: <- OK\r\n20000000: 0000002d andeq r0, r0, sp, lsr #32\r\n\r\n20000004 <init_to_true>: <- OK\r\n20000004: 00000001 andeq r0, r0, r1\r\n\r\nDisassembly of section .bss: \r\n\r\n20000008 <init_to_false>: <- NOT OK\r\n20000008: 00000000 andeq r0, r0, r0\r\n\r\n2000000c <init_to_nullptr>: <- NOT OK\r\n2000000c: 00000000 andeq r0, r0, r0\r\n\r\n20000010 <g_tick_count>: <- NOT OK\r\n20000010: 00000000 andeq r0, r0, r0\r\n\r\nDisassembly of section ._user_heap_stack:\r\n\r\n20000014 <._user_heap_stack>:\r\n ... \r\n```\r\n\r\nFor the record, this is HAL specific, so it should behave the same way on HW, not only in Renode.",
"created_at": "2021-02-24T11:00:14Z"
},
{
"body": "Changing 0xDEADBEEF to 0 of course fixes the problem",
"created_at": "2021-02-24T12:46:29Z"
},
{
"body": "Your example @advaitjain has similar results on `stm32f4` also built with `stm32_bare_lib` that causes the problem.\r\n\r\nThe `-559038737` number that is printed as `init_to_nullptr` is exactly a signed int value for `0xDEADBEEF`. The `239` in `init_to_false` is the ending `0xEF` byte.\r\n\r\nI remember stumbling upon an issue with this BSS initialization when code from such an `if` was never executed:\r\n```\r\nstatic TYPE* some_pointer = nullptr;\r\nif (!some_pointer){\r\n // Code never called\r\n}\r\n```\r\nThe compiler simply assumed a static pointer is always `nullptr` (`0x0`) right after declaration and skipped the \"additional\" `nullptr` initialization while it was `0xDEADBEEF` instead.\r\n\r\nNow I can see there's no such problem anymore so perhaps the compiler has been fixed.\r\n\r\nNevertheless, this issue made me wonder again about what the purpose of that `0xDEADBEEF` initialization is. The binaries work well without it. Perhaps it could be simply disabled @aselle @petewarden? https://github.com/google/stm32_bare_lib/blob/55bf49816f1a9dc7d9e35951c135e852ce7a98df/source/startup.c#L114",
"created_at": "2021-02-24T20:06:22Z"
},
{
"body": "Thanks for the debugging @PiotrZierhoffer.\r\n\r\n@ajelinski, I'm not sure about the thinking behind the decision to go with not zero initializing the .bss -- it does seem non-standard.\r\n\r\nI have made https://github.com/tensorflow/tensorflow/pull/47382 that fixes this issue independent of upstream changes to STM32 Bare Lib and also answers @PiotrZierhoffer's comment about why the variables are ending up in .bss instead of .data\r\n\r\nThis link has some useful info as well: https://stackoverflow.com/q/8721475\r\n",
"created_at": "2021-02-24T22:02:06Z"
},
{
"body": "Talked to @petewarden and we decided that changine STM32 Bare Lib is the way to go.\r\n\r\n> \r\n> I have made #47382 that fixes this issue independent of upstream changes to STM32 Bare Lib and also answers @PiotrZierhoffer's comment about why the variables are ending up in .bss instead of .data\r\n> \r\n> This link has some useful info as well: [stackoverflow.com/q/8721475](https://stackoverflow.com/q/8721475)\r\n\r\n#47382 originally added `-fno-zero-initialized-in-bss` and verified that the globals were in the .data section instead of .bss using the tips from https://stackoverflow.com/q/8721475\r\n\r\nIt has since been updated to simply pull in an updated version of STM32 Bare Lib (with the zero initialization of .bss)",
"created_at": "2021-02-24T22:37:57Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46937\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46937\">No</a>\n",
"created_at": "2021-02-25T00:22:28Z"
}
],
"number": 46937,
"title": "Initialization of global pointer variable seems suspect with our current use of Renode."
}
|
{
"body": "With google/stm32_bare_lib@aaabdeb STM32 Bare Lib zero-initializes the bss section.\r\n\r\nThis change is also pulling out the download into a standalone bash script.\r\n\r\nSee #46937 for more discussion on this.\r\n\r\nFixes #46937",
"number": 47382,
"review_comments": [],
"title": "Update STM32 Bare Lib for zero initialization of the bss section"
}
|
{
"commits": [
{
"message": "Update STM32 Bare Lib for zero initialization of the bss section.\n\nWith\nhttps://github.com/google/stm32_bare_lib/commit/aaabdeb0d6098322a0874b29f6ed547a39b3929f\nSTM32 Bare Lib zero-initializes the bss section.\n\nThis change is also pulling out the download into a standalone bash script.\n\nSee https://github.com/tensorflow/tensorflow/issues/46937 for more\ndiscussion on this.\n\nFixes #46937"
}
],
"files": [
{
"diff": "@@ -54,17 +54,9 @@ void MicroPrintf(const char* format, ...) {\n \n namespace tflite {\n ErrorReporter* GetMicroErrorReporter() {\n-#if !defined(RENODE)\n if (error_reporter_ == nullptr) {\n error_reporter_ = new (micro_error_reporter_buffer) MicroErrorReporter();\n }\n-#else\n- // TODO(#46937): Until we resolve the global variable issue with Renode, we\n- // will be creating a new ErrorReporter object each time. While this is\n- // inefficient, it still allows us to make progress.\n- error_reporter_ = new (micro_error_reporter_buffer) MicroErrorReporter();\n-#endif\n-\n return error_reporter_;\n }\n ",
"filename": "tensorflow/lite/micro/micro_error_reporter.cc",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,51 @@\n+#!/bin/bash\n+# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n+#\n+# Licensed under the Apache License, Version 2.0 (the \"License\");\n+# you may not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n+# ==============================================================================\n+#\n+# Called with following arguments:\n+# 1 - Path to the downloads folder which is typically\n+# tensorflow/lite/micro/tools/make/downloads\n+#\n+# This script is called from the Makefile and uses the following convention to\n+# enable determination of sucess/failure:\n+#\n+# - If the script is successful, the only output on stdout should be SUCCESS.\n+# The makefile checks for this particular string.\n+#\n+# - Any string on stdout that is not SUCCESS will be shown in the makefile as\n+# the cause for the script to have failed.\n+#\n+# - Any other informational prints should be on stderr.\n+\n+set -e\n+\n+DOWNLOADS_DIR=${1}\n+if [ ! -d ${DOWNLOADS_DIR} ]; then\n+ echo \"The top-level downloads directory: ${DOWNLOADS_DIR} does not exist.\"\n+ exit 1\n+fi\n+\n+DOWNLOADED_STM32_BARE_LIB_PATH=${DOWNLOADS_DIR}/stm32_bare_lib\n+\n+if [ -d ${DOWNLOADED_STM32_BARE_LIB_PATH} ]; then\n+ echo >&2 \"${DOWNLOADED_STM32_BARE_LIB_PATH} already exists, skipping the download.\"\n+else\n+ git clone https://github.com/google/stm32_bare_lib.git ${DOWNLOADED_STM32_BARE_LIB_PATH} >&2\n+ pushd ${DOWNLOADED_STM32_BARE_LIB_PATH} > /dev/null\n+ git checkout aaabdeb0d6098322a0874b29f6ed547a39b3929f >&2\n+ popd > /dev/null\n+fi\n+\n+echo \"SUCCESS\"",
"filename": "tensorflow/lite/micro/tools/make/ext_libs/stm32_bare_lib_download.sh",
"status": "added"
},
{
"diff": "@@ -7,8 +7,6 @@ ifneq ($(DOWNLOAD_RESULT), SUCCESS)\n $(error Something went wrong with the GCC download: $(DOWNLOAD_RESULT))\n endif\n \n-$(eval $(call add_third_party_download,$(STM32_BARE_LIB_URL),$(STM32_BARE_LIB_MD5),stm32_bare_lib,))\n-\n DOWNLOAD_RESULT := $(shell $(MAKEFILE_DIR)/renode_download.sh ${MAKEFILE_DIR}/downloads)\n ifneq ($(DOWNLOAD_RESULT), SUCCESS)\n $(error Something went wrong with the renode download: $(DOWNLOAD_RESULT))\n@@ -19,6 +17,11 @@ ifneq ($(DOWNLOAD_RESULT), SUCCESS)\n $(error Something went wrong with the CMSIS download: $(DOWNLOAD_RESULT))\n endif\n \n+DOWNLOAD_RESULT := $(shell $(MAKEFILE_DIR)/ext_libs/stm32_bare_lib_download.sh ${MAKEFILE_DIR}/downloads)\n+ifneq ($(DOWNLOAD_RESULT), SUCCESS)\n+ $(error Something went wrong with the STM32 Bare Lib download: $(DOWNLOAD_RESULT))\n+endif\n+\n PLATFORM_FLAGS = \\\n -DTF_LITE_MCU_DEBUG_LOG \\\n -mcpu=cortex-m3 \\",
"filename": "tensorflow/lite/micro/tools/make/targets/bluepill_makefile.inc",
"status": "modified"
},
{
"diff": "@@ -10,8 +10,6 @@ ifneq ($(DOWNLOAD_RESULT), SUCCESS)\n $(error Something went wrong with the GCC download: $(DOWNLOAD_RESULT))\n endif\n \n-$(eval $(call add_third_party_download,$(STM32_BARE_LIB_URL),$(STM32_BARE_LIB_MD5),stm32_bare_lib,))\n-\n DOWNLOAD_RESULT := $(shell $(MAKEFILE_DIR)/renode_download.sh ${MAKEFILE_DIR}/downloads)\n ifneq ($(DOWNLOAD_RESULT), SUCCESS)\n $(error Something went wrong with the renode download: $(DOWNLOAD_RESULT))\n@@ -22,6 +20,11 @@ ifneq ($(DOWNLOAD_RESULT), SUCCESS)\n $(error Something went wrong with the CMSIS download: $(DOWNLOAD_RESULT))\n endif\n \n+DOWNLOAD_RESULT := $(shell $(MAKEFILE_DIR)/ext_libs/stm32_bare_lib_download.sh ${MAKEFILE_DIR}/downloads)\n+ifneq ($(DOWNLOAD_RESULT), SUCCESS)\n+ $(error Something went wrong with the STM32 Bare Lib download: $(DOWNLOAD_RESULT))\n+endif\n+\n # TODO(b/161478030): change -Wno-vla to -Wvla and remove -Wno-shadow once\n # we have a solution for fixing / avoiding being tripped up by these warnings.\n PLATFORM_FLAGS = \\",
"filename": "tensorflow/lite/micro/tools/make/targets/stm32f4_makefile.inc",
"status": "modified"
},
{
"diff": "@@ -22,9 +22,6 @@ SF_BSPS_URL := \"http://mirror.tensorflow.org/github.com/sparkfun/SparkFun_Apollo\n SF_BSPS_MD5 := \"34199f7e754735661d1c8a70a40ca7a3\"\n SF_BSPS_DEST := boards_sfe\n \n-STM32_BARE_LIB_URL := \"http://mirror.tensorflow.org/github.com/google/stm32_bare_lib/archive/c07d611fb0af58450c5a3e0ab4d52b47f99bc82d.zip\"\n-STM32_BARE_LIB_MD5 := \"282bff40d4d0b92278fd123a3b6e3123\"\n-\n ifeq ($(HOST_OS),osx)\n RISCV_TOOLCHAIN_URL := \"http://mirror.tensorflow.org/static.dev.sifive.com/dev-tools/riscv64-unknown-elf-gcc-8.1.0-2019.01.0-x86_64-apple-darwin.tar.gz\"\n RISCV_TOOLCHAIN_MD5 := \"2ac2fa00618b9ab7fa0c7d0ec173de94\"",
"filename": "tensorflow/lite/micro/tools/make/third_party_downloads.inc",
"status": "modified"
}
]
}
|
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator ELU from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro making minimal changes and not including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test\r\nPR 6: Extract common activation code into activations.cc and activation_utils.h files. Extract common test code into activation_test_utils.h file.\r\n",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46323\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46323\">No</a>\n",
"created_at": "2021-02-24T02:01:04Z"
}
],
"number": 46323,
"title": "micro: port op ELU from lite"
}
|
{
"body": "Complete implementation of TFLM operator ELU and associated TFLM test code.\r\n\r\nPR step 5 of the work to port operator ELU as tracked in Issue #46323",
"number": 47284,
"review_comments": [],
"title": "micro: port operator ELU kernel from lite with test"
}
|
{
"commits": [
{
"message": "micro: port operator ELU kernel from lite with test\n\nComplete implementation of TFLM operator ELU and associated TFLM test code.\n\nPR step 5 of the work to port operator ELU as tracked in Issue #46323"
}
],
"files": [
{
"diff": "@@ -218,6 +218,7 @@ cc_library(\n \"detection_postprocess.cc\",\n \"elementwise.cc\",\n \"exp.cc\",\n+ \"elu.cc\",\n \"floor.cc\",\n \"l2norm.cc\",\n \"logical.cc\",\n@@ -521,6 +522,21 @@ cc_test(\n ],\n )\n \n+cc_test(\n+ name = \"elu_test\",\n+ srcs = [\n+ \"elu_test.cc\",\n+ ],\n+ deps = [\n+ \":kernel_runner\",\n+ \"//tensorflow/lite/c:common\",\n+ \"//tensorflow/lite/micro:debug_log\",\n+ \"//tensorflow/lite/micro:op_resolvers\",\n+ \"//tensorflow/lite/micro:test_helpers\",\n+ \"//tensorflow/lite/micro/testing:micro_test\",\n+ ],\n+)\n+\n cc_test(\n name = \"exp_test\",\n srcs = [\"exp_test.cc\"],",
"filename": "tensorflow/lite/micro/kernels/BUILD",
"status": "modified"
},
{
"diff": "@@ -17,7 +17,6 @@ limitations under the License.\n \n #include <algorithm>\n #include <cmath>\n-#include <functional>\n #include <limits>\n \n #include \"tensorflow/lite/c/common.h\"\n@@ -28,23 +27,26 @@ limitations under the License.\n #include \"tensorflow/lite/micro/kernels/kernel_util.h\"\n \n namespace tflite {\n-namespace ops {\n-namespace micro {\n-namespace activations {\n namespace {\n \n+// Input/output tensor index.\n+constexpr int kInputTensor = 0;\n+constexpr int kOutputTensor = 0;\n+\n // OLD-TODO(b/142762739): We should figure out a multi-threading plan for most\n // of the activation ops below.\n \n struct OpData {\n- uint8_t table[256] = {0};\n+ int8_t table[256];\n };\n \n+using TransformFunc = float (*)(float);\n+\n template <typename T>\n-void PopulateLookupTable(struct OpData* data, const TfLiteTensor* input,\n- TfLiteTensor* output,\n- const std::function<float(float)>& transform) {\n- static_assert(sizeof(T) == 1, \"Lookup table valid only for 8bit\");\n+void PopulateLookupTable(const TfLiteTensor* input, const TfLiteTensor* output,\n+ const TransformFunc transform, OpData* data) {\n+ if (sizeof(T) != 1) TF_LITE_FATAL(\"Lookup table valid only for 8bit\");\n+\n const float inverse_scale = 1 / output->params.scale;\n int32_t maxval = std::numeric_limits<T>::max();\n int32_t minval = std::numeric_limits<T>::min();\n@@ -56,90 +58,94 @@ void PopulateLookupTable(struct OpData* data, const TfLiteTensor* input,\n const int32_t quantized =\n static_cast<int32_t>(rescaled + output->params.zero_point);\n data->table[static_cast<uint8_t>(static_cast<T>(val))] =\n- static_cast<uint8_t>(\n- static_cast<T>(std::max(std::min(maxval, quantized), minval)));\n+ static_cast<T>(std::max(std::min(maxval, quantized), minval));\n }\n }\n \n // OLD-TODO(b/143696793): move this to optimized_ops.\n-void EvalUsingLookupTable(struct OpData* data, const TfLiteTensor* input,\n- TfLiteTensor* output) {\n- const int size =\n- MatchingFlatSize(GetTensorShape(input), GetTensorShape(output));\n- uint8_t* output_data = GetTensorData<uint8_t>(output);\n- const uint8_t* input_data = GetTensorData<uint8_t>(input);\n- int i = 0;\n-\n- for (; i < size; ++i) {\n- output_data[i] = data->table[input_data[i]];\n+void EvalUsingLookupTable(const OpData* data, const TfLiteEvalTensor* input,\n+ TfLiteEvalTensor* output) {\n+ const int size = MatchingFlatSize(tflite::micro::GetTensorShape(input),\n+ tflite::micro::GetTensorShape(output));\n+ int8_t* output_data = tflite::micro::GetTensorData<int8_t>(output);\n+ const int8_t* input_data = tflite::micro::GetTensorData<int8_t>(input);\n+\n+ for (int i = 0; i < size; ++i) {\n+ output_data[i] = data->table[static_cast<uint8_t>(input_data[i])];\n }\n }\n \n-} // namespace\n-\n-void* Init(TfLiteContext* context, const char* buffer, size_t length) {\n- // This is a builtin op, so we don't use the contents in 'buffer', if any.\n- // Instead, we allocate a new object to carry information from Prepare() to\n- // Eval().\n- return nullptr;\n-}\n-\n-TfLiteStatus GenericPrepare(TfLiteContext* context, TfLiteNode* node) {\n+TfLiteStatus CalculateOpData(TfLiteContext* context, TfLiteNode* node) {\n TF_LITE_ENSURE_EQ(context, NumInputs(node), 1);\n TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);\n const TfLiteTensor* input;\n- TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, 0, &input));\n+ TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, kInputTensor, &input));\n TfLiteTensor* output;\n- TF_LITE_ENSURE_OK(context, GetOutputSafe(context, node, 0, &output));\n+ 
TF_LITE_ENSURE_OK(context,\n+ GetOutputSafe(context, node, kOutputTensor, &output));\n TF_LITE_ENSURE_TYPES_EQ(context, input->type, output->type);\n \n- return kTfLiteError;\n-}\n-\n-TfLiteStatus EluPrepare(TfLiteContext* context, TfLiteNode* node) {\n- const TfLiteTensor* input;\n- TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, 0, &input));\n- TfLiteTensor* output;\n- TF_LITE_ENSURE_OK(context, GetOutputSafe(context, node, 0, &output));\n- OpData* data = reinterpret_cast<OpData*>(node->user_data);\n-\n // Use LUT to handle quantized elu path.\n if (input->type == kTfLiteInt8) {\n- PopulateLookupTable<int8_t>(data, input, output, [](float value) {\n- return value < 0.0 ? std::exp(value) - 1.0f : value;\n- });\n+ OpData* data = static_cast<OpData*>(node->user_data);\n+ TransformFunc transform = [](float value) {\n+ return value < 0.0f ? std::exp(value) - 1.0f : value;\n+ };\n+ PopulateLookupTable<int8_t>(input, output, transform, data);\n }\n- return GenericPrepare(context, node);\n+\n+ return kTfLiteOk;\n+}\n+\n+void* EluInit(TfLiteContext* context, const char* buffer, size_t length) {\n+ // This is a builtin op, so we don't use the contents in 'buffer', if any.\n+ // Instead, we allocate a new object to carry information from Prepare() to\n+ // Eval().\n+ TFLITE_DCHECK(context->AllocatePersistentBuffer != nullptr);\n+ return context->AllocatePersistentBuffer(context, sizeof(OpData));\n+}\n+\n+TfLiteStatus EluPrepare(TfLiteContext* context, TfLiteNode* node) {\n+ return CalculateOpData(context, node);\n }\n \n TfLiteStatus EluEval(TfLiteContext* context, TfLiteNode* node) {\n- const TfLiteTensor* input;\n- TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, 0, &input));\n- TfLiteTensor* output;\n- TF_LITE_ENSURE_OK(context, GetOutputSafe(context, node, 0, &output));\n+ const TfLiteEvalTensor* input =\n+ tflite::micro::GetEvalInput(context, node, kInputTensor);\n+ TfLiteEvalTensor* output =\n+ tflite::micro::GetEvalOutput(context, node, kOutputTensor);\n switch (input->type) {\n case kTfLiteFloat32: {\n- optimized_ops::Elu(GetTensorShape(input), GetTensorData<float>(input),\n- GetTensorShape(output), GetTensorData<float>(output));\n+ reference_ops::Elu(tflite::micro::GetTensorShape(input),\n+ tflite::micro::GetTensorData<float>(input),\n+ tflite::micro::GetTensorShape(output),\n+ tflite::micro::GetTensorData<float>(output));\n return kTfLiteOk;\n }\n case kTfLiteInt8: {\n- OpData* data = reinterpret_cast<OpData*>(node->user_data);\n+ const OpData* data = static_cast<OpData*>(node->user_data);\n EvalUsingLookupTable(data, input, output);\n return kTfLiteOk;\n }\n default:\n TF_LITE_KERNEL_LOG(\n- context, \"Only float32 and int8 is supported currently, got %s.\",\n+ context, \"ELU only supports float32 and int8 currently, got %s.\",\n TfLiteTypeGetName(input->type));\n return kTfLiteError;\n }\n }\n \n-} // namespace activations\n+} // namespace\n \n-TfLiteRegistration* Register_ELU() { return nullptr; }\n+TfLiteRegistration Register_ELU() {\n+ return {/*init=*/EluInit,\n+ /*free=*/nullptr,\n+ /*prepare=*/EluPrepare,\n+ /*invoke=*/EluEval,\n+ /*profiling_string=*/nullptr,\n+ /*builtin_code=*/0,\n+ /*custom_name=*/nullptr,\n+ /*version=*/0};\n+}\n \n-} // namespace micro\n-} // namespace ops\n } // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/elu.cc",
"status": "modified"
},
{
"diff": "@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n ==============================================================================*/\n-#include <limits>\n #include <type_traits>\n \n #include \"tensorflow/lite/c/builtin_op_data.h\"\n@@ -25,20 +24,16 @@ namespace tflite {\n namespace testing {\n namespace {\n \n-#ifdef notdef\n-BaseActivationsOpModel(BuiltinOperator type, TensorData input) {\n- input_ = AddInput(input);\n- if (input.type == TensorType_UINT8) {\n- output_ = AddOutput({input.type, {}, 0, 0, 1. / 256});\n- } else if (input.type == TensorType_INT8) {\n- output_ = AddOutput({input.type, {}, 0, 0, 1. / 256, -128});\n- } else {\n- output_ = AddOutput({input.type, {}});\n- }\n- SetBuiltinOp(type, BuiltinOptions_NONE, 0);\n- BuildInterpreter({GetShape(input_)});\n-}\n-#endif // notdef\n+// min/max are used to compute scale, zero-point\n+template <typename T>\n+struct TestEluParams {\n+ // quantization parameters\n+ float data_min; // input and output data minimum value\n+ float data_max; // input and output data maximum value\n+ T* input_data; // quantized input storage\n+ T* output_data; // quantized output storage\n+ float tolerance; // output vs expected value tolerance\n+};\n \n // Our fixed-point math function implementations have roughly 12 bits of\n // accuracy, when specialized to 16-bit fixed-point arithmetic.\n@@ -56,53 +51,120 @@ BaseActivationsOpModel(BuiltinOperator type, TensorData input) {\n // has signed fixed-point arithmetic (SQRDMULH)). As the width of [-1, 1]\n // is 2, our representable values are often diluted by a factor of 2, whence\n // the factor of 2 below.\n-const float kQuantizedTolerance = 2 * (1. / 256);\n-const float kQuantizedToleranceInt16 = 2 * (1. / 4096);\n+constexpr float kQuantizedTolerance = 2 * (1. 
/ 256);\n+\n+void ExecuteEluTest(TfLiteTensor* tensors, int tensors_count) {\n+ constexpr int kInputArrayData[] = {1, 0};\n+ TfLiteIntArray* inputs_array = IntArrayFromInts(kInputArrayData);\n+ constexpr int kOutputArrayData[] = {1, 1};\n+ TfLiteIntArray* outputs_array = IntArrayFromInts(kOutputArrayData);\n+\n+ const TfLiteRegistration registration = tflite::Register_ELU();\n+ micro::KernelRunner runner(registration, tensors, tensors_count, inputs_array,\n+ outputs_array, nullptr);\n+\n+ TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.InitAndPrepare());\n+ TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.Invoke());\n+}\n+\n+template <typename T>\n+void TestElu(const int* input_dims_data, const T* input_data,\n+ const int* expected_dims, const T* expected_data, T* output_data) {\n+ TfLiteIntArray* input_dims = IntArrayFromInts(input_dims_data);\n+ TfLiteIntArray* output_dims = IntArrayFromInts(expected_dims);\n+ const int output_count = ElementCount(*output_dims);\n+\n+ TfLiteTensor tensors[] = {\n+ CreateTensor(input_data, input_dims),\n+ CreateTensor(output_data, output_dims),\n+ };\n+ constexpr int tensors_count = std::extent<decltype(tensors)>::value;\n+ ExecuteEluTest(tensors, tensors_count);\n+\n+ constexpr float kTolerance = 1e-5;\n+ for (int i = 0; i < output_count; i++) {\n+ TF_LITE_MICRO_EXPECT_NEAR(expected_data[i], output_data[i], kTolerance);\n+ }\n+}\n+\n+template <typename T>\n+void TestEluQuantized(const TestEluParams<T>& params,\n+ const int* input_dims_data, const float* input_data,\n+ const int* expected_dims, const float* expected_data,\n+ float* output_data) {\n+ TfLiteIntArray* input_dims = IntArrayFromInts(input_dims_data);\n+ TfLiteIntArray* output_dims = IntArrayFromInts(expected_dims);\n+ const int output_count = ElementCount(*output_dims);\n+\n+ const float scale = ScaleFromMinMax<T>(params.data_min, params.data_max);\n+ const int zero_point =\n+ ZeroPointFromMinMax<T>(params.data_min, params.data_max);\n+\n+ TfLiteTensor tensors[] = {\n+ CreateQuantizedTensor(input_data, params.input_data, input_dims, scale,\n+ zero_point),\n+ CreateQuantizedTensor(params.output_data, output_dims, scale, zero_point),\n+ };\n+ constexpr int kTensorsCount = std::extent<decltype(tensors)>::value;\n+\n+ ExecuteEluTest(tensors, kTensorsCount);\n+\n+ Dequantize(params.output_data, output_count, scale, zero_point, output_data);\n+ const float kTolerance = params.tolerance;\n+ for (int i = 0; i < output_count; i++) {\n+ TF_LITE_MICRO_EXPECT_NEAR(expected_data[i], output_data[i], kTolerance);\n+ }\n+}\n+\n+} // namespace\n+} // namespace testing\n+} // namespace tflite\n \n TF_LITE_MICRO_TESTS_BEGIN\n \n TF_LITE_MICRO_TEST(FloatActivationsOpTestElu) {\n-#ifdef notdef\n- FloatActivationsOpModel m(BuiltinOperator_ELU,\n- /*input=*/{TensorType_FLOAT32, {1, 2, 4, 1}});\n- m.SetInput({\n- 0, -6, 2, -4, //\n+ constexpr int kDims[] = {4, 1, 2, 4, 1};\n+ constexpr float kInput[] = {\n+ 0, -6, 2, -4, //\n 3, -2, 10, -0.1, //\n- });\n- EXPECT_THAT(m.GetOutput(), ElementsAreArray(ArrayFloatNear({\n- 0.0, -0.997521, 2.0, -0.981684, //\n- 3.0, -0.864665, 10.0, -0.0951626, //\n- })));\n-#endif // notdef\n+ };\n+ constexpr float kExpect[] = {\n+ 0.0, -0.997521, 2.0, -0.981684, //\n+ 3.0, -0.864665, 10.0, -0.0951626, //\n+ };\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ tflite::testing::TestElu(kDims, kInput, kDims, kExpect, output_data);\n }\n \n TF_LITE_MICRO_TEST(QuantizedActivationsOpTestEluInt8) {\n-#ifdef notdef\n- const float kMin = -1;\n- 
const float kMax = 127.f / 128.f;\n- QuantizedActivationsOpModel model(\n- BuiltinOperator_ELU,\n- /*input=*/{TensorType_INT8, {1, 2, 4, 1}, 8 * kMin, 8 * kMax},\n- /*output=*/{TensorType_INT8, {1, 2, 4, 1}, 8 * kMin, 8 * kMax});\n-\n- model.SetInput<int8_t>({\n+ constexpr int kDims[] = {4, 1, 2, 4, 1};\n+ constexpr float kInput[] = {\n 0, -6, 2, -4, //\n 3, -2, 6, -0.1, //\n- });\n-\n- model.Invoke();\n- EXPECT_THAT(model.GetDequantizedOutput<int8_t>(),\n- ElementsAreArray(ArrayFloatNear(\n- {\n- 0, -1.0, 2.0, -1, //\n- 3.0, -0.875, 6.0, -0.125, //\n- },\n- kQuantizedTolerance)));\n-#endif // notdef\n+ };\n+ constexpr float kExpect[] = {\n+ 0, -1.0, 2.0, -1, //\n+ 3.0, -0.875, 6.0, -0.125, //\n+ };\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ // setup quantization storage and parameters\n+ int8_t q_output_data[kOutputCount];\n+ int8_t q_input_data[kOutputCount];\n+ constexpr float kMin = -1;\n+ constexpr float kMax = 127.f / 128.f;\n+ tflite::testing::TestEluParams<int8_t> params = {};\n+ params.data_min = 8 * kMin;\n+ params.data_max = 8 * kMax;\n+ params.input_data = q_input_data;\n+ params.output_data = q_output_data;\n+ params.tolerance = tflite::testing::kQuantizedTolerance;\n+\n+ tflite::testing::TestEluQuantized(params, kDims, kInput, kDims, kExpect,\n+ output_data);\n }\n \n TF_LITE_MICRO_TESTS_END\n-\n-} // namespace\n-} // namespace testing\n-} // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/elu_test.cc",
"status": "modified"
},
{
"diff": "@@ -35,6 +35,7 @@ TfLiteRegistration Register_BATCH_TO_SPACE_ND();\n TfLiteRegistration Register_CAST();\n TfLiteRegistration Register_CONV_2D();\n TfLiteRegistration Register_DEPTHWISE_CONV_2D();\n+TfLiteRegistration Register_ELU();\n TfLiteRegistration Register_EXP();\n TfLiteRegistration Register_QUANTIZE();\n TfLiteRegistration Register_SHAPE();",
"filename": "tensorflow/lite/micro/kernels/micro_ops.h",
"status": "modified"
},
{
"diff": "@@ -271,6 +271,7 @@ tensorflow/lite/micro/kernels/dequantize_test.cc \\\n tensorflow/lite/micro/kernels/detection_postprocess_test.cc \\\n tensorflow/lite/micro/kernels/elementwise_test.cc \\\n tensorflow/lite/micro/kernels/exp_test.cc \\\n+tensorflow/lite/micro/kernels/elu_test.cc \\\n tensorflow/lite/micro/kernels/floor_test.cc \\\n tensorflow/lite/micro/kernels/fully_connected_test.cc \\\n tensorflow/lite/micro/kernels/hard_swish_test.cc \\\n@@ -322,6 +323,7 @@ tensorflow/lite/micro/kernels/depthwise_conv.cc \\\n tensorflow/lite/micro/kernels/dequantize.cc \\\n tensorflow/lite/micro/kernels/detection_postprocess.cc \\\n tensorflow/lite/micro/kernels/elementwise.cc \\\n+tensorflow/lite/micro/kernels/elu.cc \\\n tensorflow/lite/micro/kernels/ethosu.cc \\\n tensorflow/lite/micro/kernels/exp.cc \\\n tensorflow/lite/micro/kernels/flexbuffers_generated_data.cc \\\n@@ -409,6 +411,7 @@ tensorflow/lite/kernels/internal/reference/depthwiseconv_float.h \\\n tensorflow/lite/kernels/internal/reference/depthwiseconv_uint8.h \\\n tensorflow/lite/kernels/internal/reference/dequantize.h \\\n tensorflow/lite/kernels/internal/reference/exp.h \\\n+tensorflow/lite/kernels/internal/reference/elu.h \\\n tensorflow/lite/kernels/internal/reference/floor.h \\\n tensorflow/lite/kernels/internal/reference/fully_connected.h \\\n tensorflow/lite/kernels/internal/reference/hard_swish.h \\",
"filename": "tensorflow/lite/micro/tools/make/Makefile",
"status": "modified"
}
]
}
|
{
"body": "@tensorflow/micro\r\n\r\nI hit this issue while working on https://github.com/tensorflow/tensorflow/pull/46904:\r\n\r\n * If I create a global pointer variable (explicitly initialized to nullptr) and check for the variable == nullptr in my factory function, the check always returns false.\r\n\r\nAFAICT, this behavior is specific to our use of Renode. I have not been able to reproduce on Linux or with the Xtensa simulator.\r\n\r\nI have a workaround that should allow #46904 to be merged and I will then update this issue with a cleaner way to reproduce this error.",
"comments": [
{
"body": "The reason is in `_zero_initialize_bss_data` (tensorflow/lite/micro/tools/make/downloads/stm32_bare_lib/source/startup.c). It initializes the BSS data with DEADBEEF.\r\n\r\nTo verify that:\r\n\r\nBuild with debug info\r\n```\r\nmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=bluepill BUILD_TYPE=debug test_renode -j`nproc`\r\n```\r\n\r\nRun in Renode:\r\n```\r\ninclude @tensorflow/lite/micro/testing/bluepill_nontest.resc; sysbus LoadELF @tensorflow/lite/micro/tools/make/gen/bluepill_cortex-m3_debug/bin/test_renode; machine StartGdbServer 3333\r\n```\r\n\r\nStart GDB:\r\n```\r\ntensorflow/lite/micro/tools/make/downloads/gcc_embedded/bin/arm-none-eabi-gdb tensorflow/lite/micro/tools/make/gen/bluepill_cortex-m3_debug/bin/test_renode\r\n```\r\n\r\nIn GDB:\r\n```\r\ntar rem :3333\r\nwatch *0x20000008\r\nmon s\r\nc\r\n```\r\n\r\nIt will break in the code that actually writes this data.\r\n\r\nNow why do they go to bss instead of data? I don't know.\r\n\r\nHere's an excerpt from the objdump:\r\n\r\n```\r\nDisassembly of section .data:\r\n\r\n20000000 <a>: <- OK\r\n20000000: 0000002d andeq r0, r0, sp, lsr #32\r\n\r\n20000004 <init_to_true>: <- OK\r\n20000004: 00000001 andeq r0, r0, r1\r\n\r\nDisassembly of section .bss: \r\n\r\n20000008 <init_to_false>: <- NOT OK\r\n20000008: 00000000 andeq r0, r0, r0\r\n\r\n2000000c <init_to_nullptr>: <- NOT OK\r\n2000000c: 00000000 andeq r0, r0, r0\r\n\r\n20000010 <g_tick_count>: <- NOT OK\r\n20000010: 00000000 andeq r0, r0, r0\r\n\r\nDisassembly of section ._user_heap_stack:\r\n\r\n20000014 <._user_heap_stack>:\r\n ... \r\n```\r\n\r\nFor the record, this is HAL specific, so it should behave the same way on HW, not only in Renode.",
"created_at": "2021-02-24T11:00:14Z"
},
{
"body": "Changing 0xDEADBEEF to 0 of course fixes the problem",
"created_at": "2021-02-24T12:46:29Z"
},
{
"body": "Your example @advaitjain has similar results on `stm32f4` also built with `stm32_bare_lib` that causes the problem.\r\n\r\nThe `-559038737` number that is printed as `init_to_nullptr` is exactly a signed int value for `0xDEADBEEF`. The `239` in `init_to_false` is the ending `0xEF` byte.\r\n\r\nI remember stumbling upon an issue with this BSS initialization when code from such an `if` was never executed:\r\n```\r\nstatic TYPE* some_pointer = nullptr;\r\nif (!some_pointer){\r\n // Code never called\r\n}\r\n```\r\nThe compiler simply assumed a static pointer is always `nullptr` (`0x0`) right after declaration and skipped the \"additional\" `nullptr` initialization while it was `0xDEADBEEF` instead.\r\n\r\nNow I can see there's no such problem anymore so perhaps the compiler has been fixed.\r\n\r\nNevertheless, this issue made me wonder again about what the purpose of that `0xDEADBEEF` initialization is. The binaries work well without it. Perhaps it could be simply disabled @aselle @petewarden? https://github.com/google/stm32_bare_lib/blob/55bf49816f1a9dc7d9e35951c135e852ce7a98df/source/startup.c#L114",
"created_at": "2021-02-24T20:06:22Z"
},
{
"body": "Thanks for the debugging @PiotrZierhoffer.\r\n\r\n@ajelinski, I'm not sure about the thinking behind the decision to go with not zero initializing the .bss -- it does seem non-standard.\r\n\r\nI have made https://github.com/tensorflow/tensorflow/pull/47382 that fixes this issue independent of upstream changes to STM32 Bare Lib and also answers @PiotrZierhoffer's comment about why the variables are ending up in .bss instead of .data\r\n\r\nThis link has some useful info as well: https://stackoverflow.com/q/8721475\r\n",
"created_at": "2021-02-24T22:02:06Z"
},
{
"body": "Talked to @petewarden and we decided that changine STM32 Bare Lib is the way to go.\r\n\r\n> \r\n> I have made #47382 that fixes this issue independent of upstream changes to STM32 Bare Lib and also answers @PiotrZierhoffer's comment about why the variables are ending up in .bss instead of .data\r\n> \r\n> This link has some useful info as well: [stackoverflow.com/q/8721475](https://stackoverflow.com/q/8721475)\r\n\r\n#47382 originally added `-fno-zero-initialized-in-bss` and verified that the globals were in the .data section instead of .bss using the tips from https://stackoverflow.com/q/8721475\r\n\r\nIt has since been updated to simply pull in an updated version of STM32 Bare Lib (with the zero initialization of .bss)",
"created_at": "2021-02-24T22:37:57Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46937\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46937\">No</a>\n",
"created_at": "2021-02-25T00:22:28Z"
}
],
"number": 46937,
"title": "Initialization of global pointer variable seems suspect with our current use of Renode."
}
|
{
"body": "Simple reproduction of issue decribed in #46937\r\n\r\nExpected output (i.e. what I get on x86):\r\n```\r\nmake -f tensorflow/lite/micro/tools/make/Makefile run_test_renode -j8\r\n```\r\ngives:\r\n```\r\na: 45\r\ninit_to_false: 0\r\nWas initialized to false\r\ninit_to_false: 1\r\ninit_to_true: 1\r\ninit_to_nullptr: 0\r\n```\r\n\r\nWhat I get with Renode + bluepill:\r\n```\r\nmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=bluepill test_renode -j8\r\ntensorflow/lite/micro/tools/make/downloads/renode/renode\r\n```\r\nand in the renode terminal:\r\n```\r\nClear; include @tensorflow/lite/micro/testing/bluepill_nontest.resc; sysbus LoadELF @tensorflow/lite/micro/tools/make/gen/bluepill_cortex-m3_default/bin/test_renode; start\r\n```\r\n\r\ngives:\r\n```\r\na: 45\r\ninit_to_false: 239\r\nWas not initialized to false\r\ninit_to_false: 1\r\ninit_to_true: 1\r\ninit_to_nullptr: -559038737\r\n```\r\n\r\nNeed to debug further, but creating a PR in case @PiotrZierhoffer has any ideas.",
"number": 47276,
"review_comments": [],
"title": "Simple reproduction of issue decribed in #46937"
}
|
{
"commits": [
{
"message": "Simple reproduction of issue decribed in #46937"
}
],
"files": [
{
"diff": "@@ -0,0 +1,23 @@\n+\n+#include \"tensorflow/lite/micro/micro_error_reporter.h\"\n+\n+int a = 45;\n+bool init_to_false = false;\n+bool init_to_true = true;\n+int* init_to_nullptr = nullptr;\n+\n+int main(int argc, char** argv) {\n+ MicroPrintf(\"a: %d\", a);\n+ MicroPrintf(\"init_to_false: %d\", init_to_false);\n+ if (init_to_false == false) {\n+ MicroPrintf(\"Was initialized to false\");\n+ } else {\n+ MicroPrintf(\"Was not initialized to false\");\n+ }\n+\n+ init_to_false = true;\n+ MicroPrintf(\"init_to_false: %d\", init_to_false);\n+ MicroPrintf(\"init_to_true: %d\", init_to_true);\n+ MicroPrintf(\"init_to_nullptr: %d\", init_to_nullptr);\n+}\n+",
"filename": "tensorflow/lite/micro/test_renode.cc",
"status": "added"
},
{
"diff": "@@ -742,3 +742,6 @@ $(DEPDIR)/%.d: ;\n .PRECIOUS: $(BINDIR)%_test\n \n -include $(patsubst %,$(DEPDIR)/%.d,$(basename $(ALL_SRCS)))\n+\n+$(eval $(call microlite_test,test_renode, tensorflow/lite/micro/test_renode.cc,))\n+",
"filename": "tensorflow/lite/micro/tools/make/Makefile",
"status": "modified"
}
]
}
|
{
"body": "\r\n**System information**\r\n- Have I written custom code: Yes\r\n- OS Platform and Distribution: Ubuntu 20.04 `Linux XXX 5.8.0-43-generic #49~20.04.1-Ubuntu SMP Fri Feb 5 09:57:56 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux`\r\n- TensorFlow installed from (source or binary): `pip install tensorflow`\r\n- TensorFlow version (use command below): `v2.4.0-49-g85c8b2a817f 2.4.1`\r\n- Python version: `3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]`\r\n\r\n**Describe the current behavior**\r\nI get the error `TypeError: __array__() takes 1 positional argument but 2 were given`. Hence it might be related to bug #46840. \r\n\r\n\r\n**Describe the expected behavior**\r\nShould just set the weight to what they are right now.\r\n\r\n**Standalone code to reproduce the issue**\r\n```\r\nimport tensorflow as tf\r\n\r\ndenseLayer = tf.keras.layers.Dense(1,activation=\"relu\")\r\ndenseLayer.build(input_shape=(4))\r\n\r\ndenseLayer.set_weights(denseLayer.weights)\r\n```\r\nLink: [Colab](https://colab.research.google.com/drive/1bwHmMvktEsLzOK-x8fsmIJBiNJGfsM9F?usp=sharing)\r\n\r\n**Other info / logs** \r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-21-99e968378f7a> in <module>\r\n 4 denseLayer.build(input_shape=(4))\r\n 5 \r\n----> 6 denseLayer.set_weights(denseLayer.weights)\r\n\r\n~/Projects/NotebooksEnvs/py38Env/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py in set_weights(self, weights)\r\n 1875 weight_index += 1\r\n 1876 \r\n-> 1877 backend.batch_set_value(weight_value_tuples)\r\n 1878 \r\n 1879 def get_weights(self):\r\n\r\n~/Projects/NotebooksEnvs/py38Env/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)\r\n 199 \"\"\"Call target, and fall back on dispatchers if there is a TypeError.\"\"\"\r\n 200 try:\r\n--> 201 return target(*args, **kwargs)\r\n 202 except (TypeError, ValueError):\r\n 203 # Note: convert_to_eager_tensor currently raises a ValueError, not a\r\n\r\n~/Projects/NotebooksEnvs/py38Env/lib/python3.8/site-packages/tensorflow/python/keras/backend.py in batch_set_value(tuples)\r\n 3704 if ops.executing_eagerly_outside_functions():\r\n 3705 for x, value in tuples:\r\n-> 3706 x.assign(np.asarray(value, dtype=dtype(x)))\r\n 3707 else:\r\n 3708 with get_graph().as_default():\r\n\r\n~/Projects/NotebooksEnvs/py38Env/lib/python3.8/site-packages/numpy/core/_asarray.py in asarray(a, dtype, order)\r\n 81 \r\n 82 \"\"\"\r\n---> 83 return array(a, dtype, copy=False, order=order)\r\n 84 \r\n 85 \r\n\r\nTypeError: __array__() takes 1 positional argument but 2 were given\r\n```\r\n",
"comments": [
{
"body": "It turns out, that it works with `get_weights()` but still, this should yield a reasonable error message.",
"created_at": "2021-02-17T14:56:03Z"
},
{
"body": "Was able to reproduce the issue with TF v2.4 and TF-nightly. Please find the gist of it [here](https://colab.research.google.com/gist/ravikyram/be8bceb82372692740dc74da02749466/untitled674.ipynb). Thanks!",
"created_at": "2021-02-18T10:27:25Z"
},
{
"body": "Hi, I want to work on creating a more suitable error message for this scenario. I was thinking something like\r\nTypeError: set_weights() expects a list of expected_num_weights weights in the form of arrays, but instead received list of tf.variable ",
"created_at": "2021-02-19T01:44:38Z"
},
{
"body": "@shivaditya-meduri,\r\nWith [respect to this comment in the PR](https://github.com/tensorflow/tensorflow/pull/47489#issuecomment-833838923) can you please let us know if you are working on the fix? Thanks!",
"created_at": "2021-05-11T11:57:16Z"
},
{
"body": "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.\n",
"created_at": "2021-05-18T12:50:46Z"
},
{
"body": "Closing as stale. Please reopen if you'd like to work on this further.\n",
"created_at": "2021-05-25T13:48:17Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47216\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47216\">No</a>\n",
"created_at": "2021-05-25T13:48:23Z"
}
],
"number": 47216,
"title": "TypeError if set the weights to the current weights via `set_weights`"
}
|
{
"body": "I have made some changes to address issue #47216 - https://github.com/tensorflow/tensorflow/issues/47216\r\nThis is my first pull request to TensorFlow, so I would appreciate complete feedback including the coding style too.\r\n",
"number": 47255,
"review_comments": [],
"title": "First Commit for issue #47216"
}
|
{
"commits": [
{
"message": "First Commit for issue #47216"
}
],
"files": [
{
"diff": "@@ -1867,7 +1867,14 @@ def set_weights(self, weights):\n 'with a weight list of length %s, but the layer was '\n 'expecting %s weights. Provided weights: %s...' %\n (self.name, len(weights), expected_num_weights, str(weights)[:50]))\n-\n+ for i in weights:\n+ if type(i)!=np.ndarray:\n+ raise TypeError(\n+ 'The weight values in the form of numpy arrays should be passed in the order they are created by the layer,'\n+ 'Instead encountered type \"%s\".'\n+ %\n+ (type(i)))\n+ break\n weight_index = 0\n weight_value_tuples = []\n for param in params:",
"filename": "tensorflow/python/keras/engine/base_layer.py",
"status": "modified"
}
]
}
|
{
"body": "@tensorflow/micro\r\n\r\nThis came up during the review for PR #42020 in [this comment](https://github.com/tensorflow/tensorflow/pull/42020#discussion_r483160984)\r\n\r\nThe TFLM team would like to understand better what this `reduce_codesize` tag is doing in `micro/examples/micro_speech/arc_emsdp/Makefile`\r\n\r\nTopics of discussion:\r\n\r\n * In the current design, TAGS was mostly meant as a way to allow for multiple optimized kernel implementations. While not enforced, the expectation is that each tag has a corresponding directory in micro/kernels/\r\n\r\n * We are planning on making some changes in the interest of being able to register different kernel variants that might be useful for this instead of what appears to be a find and replace.\r\n",
"comments": [
{
"body": "Tagging @JaccovG and @dzakhar since this issue is requesting more info.\r\n",
"created_at": "2020-09-03T18:28:42Z"
},
{
"body": "Hi,\r\nAs described, in the makefile, this tag does sources modifications to use a specific version of kernel in the exact “find and replace” way. It gives a notable reductione in total size of application by removing not-used versions of optimized kernels, which is important for us. Same mechanism is used in `micro/examples/person_detection_experimental/arc_emsdp/Makefile` \r\n\r\nWe considered several options to deal with it including an application specific version of the library, or moving part of library available for application build. However, since we are planning some related changes to our library in the future, we concluded that this current solution would be acceptable in the interim as something which could be done quickly\r\n\r\nWe are looking forward to migrating to your proposed kernels registration methodology when it will be available. Can you give more information/reference about it so we can assess how it can fits to our needs? \r\n",
"created_at": "2020-09-14T14:35:55Z"
},
{
"body": "#43682 is a first implementation of the changes that allow kernels to support registration of a subset of functionality from the application code via the MicroMutableOpResolver.\r\n\r\nLet me know if this is something that you guys can leverage.",
"created_at": "2020-09-30T22:34:56Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/42932\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/42932\">No</a>\n",
"created_at": "2021-02-17T19:00:54Z"
}
],
"number": 42932,
"title": "More details on the reduce_codesize tag used for arc"
}
|
{
"body": "This pull request removes deprecated functionality and fixes build for ARC targets.\r\n\r\nFixes #42932",
"number": 47190,
"review_comments": [],
"title": "Removed deprecated TAGS option and fixed few READMEs for ARC."
}
|
{
"commits": [
{
"message": "Removed deprecated TAGS option and fixed few READMEs with adding OPTIMIZED_KERNEL_DIR=arc_mli option for ARC target"
}
],
"files": [
{
"diff": "@@ -45,7 +45,7 @@ The example project for ARC EM SDP platform can be generated with the following\n command:\n \n ```\n-make -f tensorflow/lite/micro/tools/make/Makefile TARGET=arc_emsdp TAGS=no_arc_mli generate_hello_world_make_project\n+make -f tensorflow/lite/micro/tools/make/Makefile TARGET=arc_emsdp OPTIMIZED_KERNEL_DIR=arc_mli ARC_TAGS=no_arc_mli generate_hello_world_make_project\n ```\n \n ### Build and Run Example\n@@ -245,7 +245,7 @@ make -f tensorflow/lite/micro/tools/make/Makefile TARGET=himax_we1_evb third_par\n Generate hello world project\n \n ```\n-make -f tensorflow/lite/micro/tools/make/Makefile generate_hello_world_make_project TARGET=himax_we1_evb TAGS=no_arc_mli\n+make -f tensorflow/lite/micro/tools/make/Makefile generate_hello_world_make_project TARGET=himax_we1_evb ARC_TAGS=no_arc_mli\n ```\n \n ### Build and Burn Example",
"filename": "tensorflow/lite/micro/examples/hello_world/README.md",
"status": "modified"
},
{
"diff": "@@ -66,11 +66,12 @@ SDP platform can be generated with the following command:\n \n ```\n make -f tensorflow/lite/micro/tools/make/Makefile \\\n-TARGET=arc_emsdp TAGS=reduce_codesize \\\n+TARGET=arc_emsdp ARC_TAGS=reduce_codesize \\\n+OPTIMIZED_KERNEL_DIR=arc_mli \\\n generate_micro_speech_mock_make_project\n ```\n \n-Note that `TAGS=reduce_codesize` applies example specific changes of code to\n+Note that `ARC_TAGS=reduce_codesize` applies example specific changes of code to\n reduce total size of application. It can be omitted.\n \n ### Build and Run Example",
"filename": "tensorflow/lite/micro/examples/micro_speech/README.md",
"status": "modified"
},
{
"diff": "@@ -4,7 +4,7 @@ ifeq ($(TARGET), arc_emsdp)\n # In particular:\n # - Extend Heap and stack size for application needs\n # - Use Linker command file with better usage of fast memory\n-# - Optional (TAGS=reduce_codesize): In case project was \n+# - Optional (ARC_TAGS=reduce_codesize): In case project was \n # generated with MLI usage, reduce scratch buffers.\n \n MICRO_SPEECH_HDRS += \\\n@@ -36,7 +36,7 @@ ifeq ($(TARGET), arc_emsdp)\n \t@echo Makefile: No Reference fallback for MLI supported functions >> $@\n \n \n-ifneq ($(filter $(ALL_TAGS), reduce_codesize),)\n+ifneq ($(filter $(ARC_TAGS), reduce_codesize),)\n # In case 'reduce_codesize' tag is present, we replace common MLI functions with \n # specializations appropriate for this particular graph. But such changes of code \n # with high probability may not be acceptable for other graphs and will need ",
"filename": "tensorflow/lite/micro/examples/micro_speech/arc_emsdp/Makefile.inc",
"status": "modified"
},
{
"diff": "@@ -52,11 +52,12 @@ command:\n \n ```\n make -f tensorflow/lite/micro/tools/make/Makefile \\\n-TARGET=arc_emsdp TAGS=reduce_codesize \\\n+TARGET=arc_emsdp ARC_TAGS=reduce_codesize \\\n+OPTIMIZED_KERNEL_DIR=arc_mli \\\n generate_person_detection_int8_make_project\n ```\n \n-Note that `TAGS=reduce_codesize` applies example specific changes of code to\n+Note that `ARC_TAGS=reduce_codesize` applies example specific changes of code to\n reduce total size of application. It can be omitted.\n \n ### Build and Run Example",
"filename": "tensorflow/lite/micro/examples/person_detection/README.md",
"status": "modified"
},
{
"diff": "@@ -25,7 +25,7 @@ ifeq ($(TARGET), arc_emsdp)\n \t@sed -E -i 's#MLI_ONLY *\\?= *false#MLI_ONLY \\?= true#' $(word 2, $^)\n \t@echo Makefile: No Reference fallback for MLI supported functions >> $@\n \n-ifneq ($(filter $(ALL_TAGS), reduce_codesize),)\n+ifneq ($(filter $(ARC_TAGS), reduce_codesize),)\n #In case 'reduce_codesize' tag is present, we replace common MLI functions with\n #specializations appropriate for this particular graph.But such changes of code\n #with high probability may not be acceptable for other graphs and will need",
"filename": "tensorflow/lite/micro/examples/person_detection/arc_emsdp/Makefile.inc",
"status": "modified"
},
{
"diff": "@@ -21,16 +21,16 @@ ARC specific target implies usage of embARC MLI.\n For example:\n \n ```\n-make -f tensorflow/lite/micro/tools/make/Makefile TARGET=arc_emsdp generate_person_detection_int8_make_project\n+make -f tensorflow/lite/micro/tools/make/Makefile TARGET=arc_emsdp OPTIMIZED_KERNEL_DIR=arc_mli generate_person_detection_int8_make_project\n ```\n \n In case MLI implementation can’t be used, kernels in this folder fallback to\n TFLM reference implementations. For applications which may not benefit from MLI\n library, projects can be generated without these implementations by adding\n-`TAGS=no_arc_mli` in the command line, which can reduce overall code size:\n+`ARC_TAGS=no_arc_mli` in the command line, which can reduce overall code size:\n \n ```\n-make -f tensorflow/lite/micro/tools/make/Makefile TARGET=arc_emsdp TAGS=no_arc_mli generate_person_detection_int8_make_project\n+make -f tensorflow/lite/micro/tools/make/Makefile TARGET=arc_emsdp OPTIMIZED_KERNEL_DIR=arc_mli ARC_TAGS=no_arc_mli generate_person_detection_int8_make_project\n ```\n \n For ARC EM SDP board, a pre-compiled MLI library is downloaded and used in the\n@@ -39,7 +39,7 @@ and compiled during project generation phase. To build library from sources for\n ARC EM SDP platform, add `BUILD_ARC_MLI=true` option to make command:\n \n ```\n-make -f tensorflow/lite/micro/tools/make/Makefile TARGET=arc_emsdp BUILD_ARC_MLI=true generate_person_detection_int8_make_project\n+make -f tensorflow/lite/micro/tools/make/Makefile TARGET=arc_emsdp OPTIMIZED_KERNEL_DIR=arc_mli BUILD_ARC_MLI=true generate_person_detection_int8_make_project\n ```\n \n If an application exclusively uses accelerated MLI kernel implementations, one",
"filename": "tensorflow/lite/micro/kernels/arc_mli/README.md",
"status": "modified"
},
{
"diff": "@@ -18,8 +18,8 @@ ifeq ($(TARGET_ARCH), arc)\n \n # MLI Library is used by default for ARC platform whenever it is possible.\n # To use TFLM reference implementation MLI should be intentionally turned off \n-# by passing 'no_arc_mli' tag (make -f <tflm_main_makefile> TAGS=no_arc_mli ...)\n-ifeq ($(filter no_arc_mli,$(ALL_TAGS)),)\n+# by passing 'no_arc_mli' tag (make -f <tflm_main_makefile> ARC_TAGS=no_arc_mli ...)\n+ifeq ($(filter no_arc_mli,$(ARC_TAGS)),)\n \n ALL_TAGS += arc_mli\n ",
"filename": "tensorflow/lite/micro/tools/make/ext_libs/arc_mli.inc",
"status": "modified"
},
{
"diff": "@@ -149,7 +149,7 @@ use a shell to execute the following command from the root directory of the\n TensorFlow repo:\n \n ```\n-make -f tensorflow/lite/micro/tools/make/Makefile generate_person_detection_test_int8_make_project TARGET=arc_emsdp\n+make -f tensorflow/lite/micro/tools/make/Makefile generate_person_detection_test_int8_make_project TARGET=arc_emsdp OPTIMIZED_KERNEL_DIR=arc_mli\n ```\n \n The application project will be generated into\n@@ -166,7 +166,7 @@ is used by default to speed up execution of some kernels for asymmetrically\n quantized layers. Kernels which use MLI-based implementations are kept in the\n *tensorflow/lite/micro/kernels/arc_mli* folder. For applications which may not\n benefit from MLI library, the project can be generated without these\n-implementations by adding `TAGS=no_arc_mli` in the command line. This can reduce\n+implementations by adding `ARC_TAGS=no_arc_mli` in the command line. This can reduce\n code size when the optimized kernels are not required.\n \n For more options on embARC MLI usage see\n@@ -279,7 +279,7 @@ For instance, to build **Person Detection** test application, use the following\n command from the root directory of the TensorFlow repo:\n \n ```\n-make -f tensorflow/lite/micro/tools/make/Makefile generate_person_detection_test_int8_make_project TARGET=arc_custom TCF_FILE=<path_to_tcf_file> LCF_FILE=<path_to_lcf_file>\n+make -f tensorflow/lite/micro/tools/make/Makefile generate_person_detection_test_int8_make_project TARGET=arc_custom OPTIMIZED_KERNEL_DIR=arc_mli TCF_FILE=<path_to_tcf_file> LCF_FILE=<path_to_lcf_file>\n ```\n \n The application project will be generated into\n@@ -291,7 +291,7 @@ is used by default to speed up execution of some kernels for asymmetrically\n quantized layers. Kernels which use MLI-based implementations are kept in the\n *tensorflow/lite/micro/kernels/arc_mli* folder. For applications which may not\n benefit from MLI library, the project can be generated without these\n-implementations by adding `TAGS=no_arc_mli` in the command line. This can reduce\n+implementations by adding `ARC_TAGS=no_arc_mli` in the command line. This can reduce\n code size when the optimized kernels are not required.\n \n For more options on embARC MLI usage see",
"filename": "tensorflow/lite/micro/tools/make/targets/arc/README.md",
"status": "modified"
},
{
"diff": "@@ -21,7 +21,7 @@ ARC_TOOLCHAIN := mwdt\n BUILD_ARC_MLI := false\n ARC_MLI_PRE_COMPILED_TARGET := emsdp_em11d_em9d_dfss\n \n-ifneq ($(filter no_arc_mli,$(ALL_TAGS)),)\n+ifneq ($(filter no_arc_mli,$(ARC_TAGS)),)\n MLI_LIB_DIR = arc_mli_package\n $(eval $(call add_third_party_download,$(EMBARC_MLI_PRE_COMPILED_URL),$(EMBARC_MLI_PRE_COMPILED_MD5),$(MLI_LIB_DIR),))\n else ifeq ($(BUILD_ARC_MLI), true)",
"filename": "tensorflow/lite/micro/tools/make/targets/arc_emsdp_makefile.inc",
"status": "modified"
}
]
}
|
{
"body": "**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): colab\r\n- TensorFlow installed from (source or binary): colab\r\n- TensorFlow version (use command below): 2.4.0, v2.4.0-0-g582c8d236cb\r\n- Python version: 3.6.9\r\n\r\n**Describe the current behavior**\r\nRunning `tf.keras.applications.densenet.preprocess_input` on a `data_format=\"channels_first\"` symbolic tensor raises an Exception. This is caused by the input transposition not being applied if the `mode` parameter to [_preprocess_symbolic_input](https://github.com/tensorflow/tensorflow/blob/7a49c87f9f56a7fc169669cfe97728859798967c/tensorflow/python/keras/applications/imagenet_utils.py#L242) is set to \"torch\" (as is the case for densenet preprocessing).\r\n\r\nSee the relevant lines here:\r\nhttps://github.com/tensorflow/tensorflow/blob/7a49c87f9f56a7fc169669cfe97728859798967c/tensorflow/python/keras/applications/imagenet_utils.py#L262-L281\r\n\r\nThis is not caught by the unit tests as only the default `mode=\"caffe\"` is tested.\r\n\r\nThe equivalent numpy function `_preprocess_numpy_input` handles this correctly (treating the input differently depending on `data_format` for all modes\r\n\r\n**Describe the expected behavior**\r\nPreprocessing should work for `data_format=\"channels_first\"`\r\n\r\n**Standalone code to reproduce the issue**\r\nColab notebook with minimum example:\r\nhttps://colab.research.google.com/drive/1THrNYTAAzPxw9135h-sFt-LxMz_7SEeQ?usp=sharing\r\n\r\n",
"comments": [
{
"body": "Was able to reproduce the issue with TF v2.3, TF v2.4 and TF-nightly. Please find the gist of it [here](https://colab.research.google.com/gist/amahendrakar/88605d7199898f076e057b8ed35457c9/46539.ipynb#scrollTo=NRQ2tF_zk5qY). Thanks!",
"created_at": "2021-01-20T15:39:13Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46539\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46539\">No</a>\n",
"created_at": "2021-02-23T15:47:21Z"
}
],
"number": 46539,
"title": "tf.keras.applications.densenet.preprocess_input does not work for data_format=\"channels_first\" for symbolic tensors"
}
|
{
"body": "Fixes #46539\r\n\r\nThe rescaling by `std` in keras.applications.imagenet_utils._preprocess_symbolic_input was not taking into account `data_format`, causing a shape mismatch error.\r\n\r\nThis was not caught by the unit tests as only the default `mode='caffe'` was tested, which does not use `std`. I changed the test to run for all modes.",
"number": 47135,
"review_comments": [],
"title": "Fix for keras applications preprocess_input with data_format=\"channels_first\" symbolic tensors"
}
|
{
"commits": [
{
"message": "fix for keras applications preprocessing with channels_first symbolic tensors\n\nThe scaling by std deviation was not taking into account the channel order\nand caused an error when a symbolic tensor was provided with mode='torch'\n(e.g. for tf.keras.applications.densenet.preprocess_input).\nThis commit corrects that, and runs the tests for all modes instead of\njust the default."
}
],
"files": [
{
"diff": "@@ -289,7 +289,10 @@ def _preprocess_symbolic_input(x, data_format, mode):\n else:\n x = backend.bias_add(x, mean_tensor, data_format)\n if std is not None:\n- x /= std\n+ std_tensor = backend.constant(np.array(std))\n+ if data_format == \"channels_first\":\n+ std_tensor = backend.reshape(std_tensor, (-1, 1, 1))\n+ x /= std_tensor\n return x\n \n ",
"filename": "tensorflow/python/keras/applications/imagenet_utils.py",
"status": "modified"
},
{
"diff": "@@ -79,27 +79,36 @@ def test_preprocess_input(self):\n xint2 = utils.preprocess_input(xint)\n self.assertAllClose(x, x2[..., ::-1])\n self.assertNotEqual(xint.astype('float').max(), xint2.max())\n-\n- def test_preprocess_input_symbolic(self):\n+ \n+ @parameterized.named_parameters([\n+ {'testcase_name': 'mode_torch',\n+ 'mode': 'torch'},\n+ {'testcase_name': 'mode_tf',\n+ 'mode': 'tf'},\n+ {'testcase_name': 'mode_caffe',\n+ 'mode': 'caffe'},\n+ ])\n+ def test_preprocess_input_symbolic(self, mode):\n # Test image batch\n x = np.random.uniform(0, 255, (2, 10, 10, 3))\n inputs = keras.layers.Input(shape=x.shape[1:])\n outputs = keras.layers.Lambda(\n- utils.preprocess_input, output_shape=x.shape[1:])(\n+ lambda x: utils.preprocess_input(x, mode=mode),\n+ output_shape=x.shape[1:])(\n inputs)\n model = keras.Model(inputs, outputs)\n self.assertEqual(model.predict(x).shape, x.shape)\n \n outputs1 = keras.layers.Lambda(\n- lambda x: utils.preprocess_input(x, 'channels_last'),\n+ lambda x: utils.preprocess_input(x, 'channels_last', mode=mode),\n output_shape=x.shape[1:])(\n inputs)\n model1 = keras.Model(inputs, outputs1)\n out1 = model1.predict(x)\n x2 = np.transpose(x, (0, 3, 1, 2))\n inputs2 = keras.layers.Input(shape=x2.shape[1:])\n outputs2 = keras.layers.Lambda(\n- lambda x: utils.preprocess_input(x, 'channels_first'),\n+ lambda x: utils.preprocess_input(x, 'channels_first', mode=mode),\n output_shape=x2.shape[1:])(\n inputs2)\n model2 = keras.Model(inputs2, outputs2)\n@@ -110,21 +119,22 @@ def test_preprocess_input_symbolic(self):\n x = np.random.uniform(0, 255, (10, 10, 3))\n inputs = keras.layers.Input(shape=x.shape)\n outputs = keras.layers.Lambda(\n- utils.preprocess_input, output_shape=x.shape)(\n+ lambda x: utils.preprocess_input(x, mode=mode),\n+ output_shape=x.shape)(\n inputs)\n model = keras.Model(inputs, outputs)\n self.assertEqual(model.predict(x[np.newaxis])[0].shape, x.shape)\n \n outputs1 = keras.layers.Lambda(\n- lambda x: utils.preprocess_input(x, 'channels_last'),\n+ lambda x: utils.preprocess_input(x, 'channels_last', mode=mode),\n output_shape=x.shape)(\n inputs)\n model1 = keras.Model(inputs, outputs1)\n out1 = model1.predict(x[np.newaxis])[0]\n x2 = np.transpose(x, (2, 0, 1))\n inputs2 = keras.layers.Input(shape=x2.shape)\n outputs2 = keras.layers.Lambda(\n- lambda x: utils.preprocess_input(x, 'channels_first'),\n+ lambda x: utils.preprocess_input(x, 'channels_first', mode=mode),\n output_shape=x2.shape)(\n inputs2)\n model2 = keras.Model(inputs2, outputs2)",
"filename": "tensorflow/python/keras/applications/imagenet_utils_test.py",
"status": "modified"
}
]
}
|
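Editor's note: the `imagenet_utils.py` patch in the record above reshapes the "torch"-mode `std` to `(-1, 1, 1)` before dividing. A quick NumPy sketch of the broadcasting mismatch it addresses follows; the array shapes and the std values are illustrative assumptions, not values taken from the record.

```python
import numpy as np

# One RGB image in each layout (shapes chosen only for illustration).
x_channels_last = np.zeros((1, 224, 224, 3), dtype=np.float32)
x_channels_first = np.zeros((1, 3, 224, 224), dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)  # assumed "torch"-mode stds

_ = x_channels_last / std  # fine: (3,) broadcasts against the trailing channel axis

try:
    _ = x_channels_first / std  # trailing axis is 224, not 3, so this cannot broadcast
except ValueError as err:
    print("broadcast error:", err)

# Reshaping std to (-1, 1, 1) lines it up with the channel axis, as in the patch.
_ = x_channels_first / std.reshape((-1, 1, 1))
```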
{
"body": "[Here](https://www.tensorflow.org/api_docs/python/tf/keras/initializers/Zeros) it's written that `tf.keras.initializers.zeros` is a shortcut for `tf.keras.initializers.Zeros()`.\r\nIf it is a shortcut, then both should work same\r\n\r\n\r\nWhile saving the model, if I use `tf.keras.initializers.zeros` , model save failes, but using `tf.keras.initializers.Zeros()` works great.\r\n\r\nSame issue raised by someone at [Stackoverflow](https://stackoverflow.com/questions/57154799/keras-model-saving-erroring-typeerror-get-config-missing-1-required-position) \r\n\r\n[Colab](https://colab.research.google.com/drive/1E0P-aBU9B7RO_QUtDrlRDPcOWqd6UfkD?usp=sharing#scrollTo=weBjeZAFJOP4)\r\n\r\n",
"comments": [
{
"body": "I'm new to open source, but to clarify, are you calling or referencing the function directly? If you reference the function object rather than initialize a new one (i.e. `tf.keras.initializers.zeros` instead of `tf.keras.initializers.zeros()`) without calling it, it will not work.",
"created_at": "2021-02-10T03:57:43Z"
},
{
"body": "@fawazahmed0 \r\nPlease provide with minimal stand alone indented code to replicate the issue reported or if possible share a colab gist with the error.\r\nThe usage of these depends on the context they are used in,Zeros behavior is more suitable for including it inside models and serializing them.",
"created_at": "2021-02-10T13:12:25Z"
},
{
"body": "Well, I was going through, [timeseries forecasting tutorial](https://www.tensorflow.org/tutorials/structured_data/time_series) and when I tried to save the model it failed, I just had to change `tf.initializers.zeros` to `tf.initializers.zeros()` to make it working.\r\n\r\nHere is the [colab](https://colab.research.google.com/drive/1E0P-aBU9B7RO_QUtDrlRDPcOWqd6UfkD?usp=sharing#scrollTo=weBjeZAFJOP4)\r\n\r\n ",
"created_at": "2021-02-10T19:34:05Z"
},
{
"body": "Minimal code sample which reproduces the error ([colab](https://colab.research.google.com/drive/1ehWxx6PaDKP6nFMaI2Nf84O3-Xyv3-Sm?usp=sharing) link):\r\n\r\n```python\r\nimport tensorflow as tf # issue was reproduced with tf.__version__ == 2.4.1\r\n\r\nmodel = tf.keras.Sequential([\r\n tf.keras.layers.Reshape((-1, 784)),\r\n tf.keras.layers.Lambda(lambda x: tf.divide(tf.cast(x, tf.float32), 255.)),\r\n tf.keras.layers.Dense(256, activation='relu', bias_initializer=tf.initializers.zeros),\r\n tf.keras.layers.Dense(10, activation='softmax',)\r\n])\r\n\r\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()\r\nmodel.compile('adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\r\nhistory = model.fit(x_train, y_train, validation_data=(x_test, y_test))\r\nmodel.save('test')\r\n```\r\n\r\nexception:\r\n```\r\nTypeError Traceback (most recent call last)\r\n<...>\r\n\r\nTypeError: get_config() missing 1 required positional argument: 'self'\r\n```\r\n\r\nThe same code with tf.initializers.Zeros() would work fine `tf.initializers.Zeros()`, as @fawazahmed0 mentioned:\r\n\r\n```python\r\nimport tensorflow as tf\r\n\r\nmodel = tf.keras.Sequential([\r\n tf.keras.layers.Reshape((-1, 784)),\r\n tf.keras.layers.Lambda(lambda x: tf.divide(tf.cast(x, tf.float32), 255.)),\r\n tf.keras.layers.Dense(256, activation='relu', bias_initializer=tf.initializers.Zeros()),\r\n tf.keras.layers.Dense(10, activation='softmax',)\r\n])\r\n\r\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()\r\nmodel.compile('adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\r\nhistory = model.fit(x_train, y_train, validation_data=(x_test, y_test))\r\nmodel.save('test')\r\n```\r\n",
"created_at": "2021-02-11T17:03:26Z"
},
{
"body": "I guess it could be simplified to the following gist ([colab](https://colab.research.google.com/drive/1jqONjLpZe1V6DYt9XulC9YFVifetUnFm?usp=sharing)):\r\n\r\n```python\r\nimport tensorflow as tf\r\n\r\nlayer = tf.keras.layers.Dense(256, bias_initializer=tf.initializers.zeros)\r\nlayer.get_config()\r\n\r\nOut: \r\n...\r\nTypeError: get_config() missing 1 required positional argument: 'self'\r\n```\r\n\r\nBut the following snippet works just fine:\r\n```python\r\nimport tensorflow as tf\r\n\r\nlayer = tf.keras.layers.Dense(256, bias_initializer=tf.initializers.Zeros())\r\nlayer.get_config()\r\nOut: \r\n{'activation': 'linear',\r\n 'activity_regularizer': None,\r\n 'bias_constraint': None,\r\n 'bias_initializer': {'class_name': 'Zeros', 'config': {}},,\r\n ...\r\n 'units': 256,\r\n 'use_bias': True}\r\n```\r\n\r\nThe same logic holds for other initializers. For example, `tf.initializers.he_uniform` will fail with the same error if `get_config` is called, but `tf.initializers.HeUniform` will not.\r\n\r\nI'm not sure if it is a problem or it's by design. But it's at least confusing because models trained with \"`tf.initializers.zeros` - like\" initializers use them properly except for the model serialization. \r\n",
"created_at": "2021-02-11T17:41:24Z"
},
{
"body": "@mishc9 To me, it appears that the `tf.initializers.zeros` shortcut function was intended to be _called_ rather than referenced as-is, i.e. it generates an object of the `tf.keras.initializers.Zeros` class. The class and the shortcut function are exported with the same `@keras_export` decorator in the source code. \r\n\r\nIf you run the colab with `tf.initializers.zeros()` in place of `tf.initializers.zeros`, it works just fine.",
"created_at": "2021-02-11T18:55:13Z"
},
{
"body": "@MaanasArora you're right, it should be provided to the layer constructor as `tf.initializers.zeros()` (better way to do this), not `tf.initializers.zeros`. But the strange thing is that it still would work even in the later case ([colab](https://colab.research.google.com/drive/1R1K3olMovqhX7z7qW1AwbmhbK12WeEjq#scrollTo=In65cIrYpBst)).\r\n\r\nI think this behaviour should be treated as an issue if the way to provide an initializer in the form of `tf.initializers.zeros` is allowed and it initializers weights the right way (because it's the main purpose of initializers). \r\n\r\nOn the other hand, if this way to use initializers is not a 'regular' way to use them and trained model could fail, there should be a warning (at least, may be exception) during the model build/compile stage in my opinion. Not sure about backward compatibility - raise an exception could break some old code. It is 'correct' way to initialize weights & biases in the [standalone](https://github.com/keras-team/keras) `keras` API. ",
"created_at": "2021-02-11T20:35:45Z"
},
{
"body": "@mishc9 Yes, it oddly works in both cases and it's inconsistent because `tf.initializers.zeros` only creates issues during serialization.\r\n\r\nFurther, I noticed that when I passed `tf.initializers.Zeros` instead of `tf.initializers.Zeros`, it _also_ worked, and it also had the same error as `tf.initializers.zeros` when serializing the model. So I think that the issue is not with the shortcut function but that the class itself can be used directly without initializing an instance but breaks when serializing the model.\r\n\r\nLooking at the source, I suspect that the class itself was not intended to be passed to the layer constructor. I will review the source further to be sure.",
"created_at": "2021-02-11T21:04:05Z"
},
{
"body": "@MaanasArora that is interesting. I looked here and there in the source code too and still do not understand what's the reason of this issue :( \r\n[Function](https://github.com/tensorflow/tensorflow/blob/132437408620c947aaa43db31ce442a2b30dec12/tensorflow/python/keras/initializers/__init__.py#L150) `initializers.get` dispatches initializers:\r\n\r\n```python \r\n@keras_export('keras.initializers.get')\r\ndef get(identifier):\r\n if identifier is None:\r\n return None\r\n if isinstance(identifier, dict):\r\n return deserialize(identifier)\r\n elif isinstance(identifier, six.string_types):\r\n identifier = str(identifier)\r\n return deserialize(identifier)\r\n elif callable(identifier):\r\n return identifier\r\n else:\r\n raise ValueError('Could not interpret initializer identifier: ' +\r\n str(identifier))\r\n```\r\n\r\nIt is invoked [here](https://github.com/tensorflow/tensorflow/blob/132437408620c947aaa43db31ce442a2b30dec12/tensorflow/python/keras/layers/core.py#L1164) in `__init__` of the layer and later, during 'build' stage, in `add_weight` [method](https://github.com/tensorflow/tensorflow/blob/132437408620c947aaa43db31ce442a2b30dec12/tensorflow/python/keras/engine/base_layer.py#L591). But it should return the identity of the provided object, because the `tf.initializers.zeros`/`tf.initializers.Zeros` is callable. And we couldn't use this object to initialize weights - it is just initializer class. So perhaps I missed something, probably would use debugger to localise the error. \r\n\r\n**Edit**\r\n\r\nJust found instantiation of type object initializer [here](https://github.com/tensorflow/tensorflow/blob/132437408620c947aaa43db31ce442a2b30dec12/tensorflow/python/keras/engine/base_layer_utils.py#L121):\r\n\r\n```python\r\n else:\r\n # Instantiate initializer if provided initializer is a type object.\r\n if tf_inspect.isclass(initializer):\r\n initializer = initializer()\r\n```\r\n\r\nSo, it seems like we could provide an initializer as a type object by design.\r\n\r\nFrames (`tf` of the older version `2.2.1`, so lines could differ now):\r\n<img width=\"405\" alt=\"image\" src=\"https://user-images.githubusercontent.com/15159090/107704354-28e39b80-6cce-11eb-9d34-3150363dc18c.png\">\r\n",
"created_at": "2021-02-11T21:18:36Z"
},
{
"body": "@mishc9 Note that the variable is instantiated in the `add_weight` method, so when `get_config` is called, the initializer attribute of the layer object is still a class. So, the problem is that the `get_config` method is not accessing the object that is created in `add_weight`, but rather the (class type) attribute itself.\r\n\r\nI'm not sure if it fits the design or function very well, but a possible solution would be to check if an attribute is a class during serialization and instantiate it if it is. Should I create a pull request with these changes?",
"created_at": "2021-02-12T16:38:25Z"
},
{
"body": "@MaanasArora yes, exactly. In a layer object initializer is still a class, because `getter` function called in `add_weight` does not change the layer object itself. \r\n\r\nI've tested locally following changes to the function `initialisers.get`: \r\n\r\n```python\r\n@keras_export('keras.initializers.get')\r\ndef get(identifier):\r\n if identifier is None:\r\n return None\r\n if isinstance(identifier, dict):\r\n return deserialize(identifier)\r\n elif isinstance(identifier, six.string_types):\r\n identifier = str(identifier)\r\n return deserialize(identifier)\r\n elif callable(identifier):\r\n if inspect.isclass(identifier): # Additional check copied from the snippet above\r\n identifier = identifier()\r\n return identifier\r\n else:\r\n raise ValueError('Could not interpret initializer identifier: ' +\r\n str(identifier))\r\n```\r\nand it worked. This function transforms type object initializer to the object initializer, as the `getter` in `add_weight` method does. \r\n\r\nI thought of changes in the serialisation function too. But it seems this solution could affect other serialised objects: activations, layers, regularizers etc. It's hards to handle all this cases carefully. But `get` function is only for initializers. There's a cons to the `initializers.get` change too: type of provided argument will be changed implicitly. But it is true for `string` and `dict` initializers now.\r\n\r\nI would make a PR with this change, but have had a problem with tensorflow local unit testing 🤷♂️. Just could not run them. I would send PR without any additional unit tests covering this issue though. By the way, have you seen any additional intstructions how to dev & tests tensorflow on the local machine, except the contributing guidelines? Than would be great :) ",
"created_at": "2021-02-12T18:47:33Z"
},
{
"body": "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.\n",
"created_at": "2021-02-22T05:52:16Z"
},
{
"body": "Closing as stale. Please reopen if you'd like to work on this further.\n",
"created_at": "2021-03-01T06:06:06Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47054\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/47054\">No</a>\n",
"created_at": "2021-03-01T06:06:09Z"
}
],
"number": 47054,
"title": "tf.keras.initializers.zeros causes model.save to fail, while tf.keras.initializers.Zeros() works great"
}
|
{
"body": "Closes #47054\r\n\r\nSummary: \r\n\r\nModels with layers instantiated with initializers provided as type objects couldn't be saved after training, despite models were initialized and trained fine ([colab](https://colab.research.google.com/drive/1ehWxx6PaDKP6nFMaI2Nf84O3-Xyv3-Sm?usp=sharing) with example). This problem have been resolved the same way as in the `add_weight` method: now type objects are converting to objects in the function `initializers.get`. A unit test covering this case was implemented.\r\n\r\n\r\nPossible alternative fixes:\r\n* Remove support for type object initializers from `add_weight`. Drawback: would break backward compatibility\r\n* Make any initializer class (not just object) serializable. Could work, but hard to maintain and confusing.\r\n* Add instantiation of type objects of initializer type to the serialization function. Drawbacks: this function affects many other things and we should not overcomplicate it. Also it could lead to bugs in the parts of the code base non-related to initializets. ",
"number": 47128,
"review_comments": [
{
"body": "Actually, in certain conditions we will stick a dtype into the config, causing this to fail. Could you update this to \r\n\r\nconfig['config']['bias_initializer']['class_name'], 'Zeros'\r\n\r\nThat should be more robust and allow us to merge this.",
"created_at": "2021-03-08T20:50:22Z"
},
{
"body": "Here too",
"created_at": "2021-03-08T20:50:35Z"
},
{
"body": "Done",
"created_at": "2021-03-09T13:10:44Z"
}
],
"title": "Fix `tf.keras.initializers.get`: convert provided type object to object"
}
|
{
"commits": [
{
"message": "fix initializers.get: convert class to object"
},
{
"message": "fix mistype"
},
{
"message": "fix bad indentation"
},
{
"message": "make tests more robust"
}
],
"files": [
{
"diff": "@@ -156,6 +156,8 @@ def get(identifier):\n identifier = str(identifier)\n return deserialize(identifier)\n elif callable(identifier):\n+ if inspect.isclass(identifier):\n+ identifier = identifier()\n return identifier\n else:\n raise ValueError('Could not interpret initializer identifier: ' +",
"filename": "tensorflow/python/keras/initializers/__init__.py",
"status": "modified"
},
{
"diff": "@@ -310,6 +310,19 @@ def from_config(cls, config):\n self.assertEqual(new_layer.units, 3)\n self.assertIs(new_layer.units.fn, serializable_fn)\n \n+ def test_serialize_type_object_initializer(self):\n+ layer = keras.layers.Dense(\n+ 1,\n+ kernel_initializer=keras.initializers.ones,\n+ bias_initializer=keras.initializers.zeros)\n+ config = keras.layers.serialize(layer)\n+ self.assertEqual(\n+ config['config']['bias_initializer']['class_name'], 'Zeros'\n+ )\n+ self.assertEqual(\n+ config['config']['kernel_initializer']['class_name'], 'Ones'\n+ )\n+\n def test_serializable_with_old_config(self):\n # model config generated by tf-1.2.1\n old_model_config = {",
"filename": "tensorflow/python/keras/utils/generic_utils_test.py",
"status": "modified"
}
]
}
|
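Editor's note: the one-line change to `initializers/__init__.py` above instantiates a bare initializer class before returning it, so later `get_config()` calls see an instance. The snippet below is a self-contained sketch of that idea; the `Zeros` stand-in class and the `get_initializer` helper are hypothetical, only the `inspect.isclass` check mirrors the patch.

```python
import inspect

class Zeros:
    """Hypothetical stand-in for a Keras initializer class."""
    def __call__(self, shape, dtype=None):
        return [[0.0] * shape[1] for _ in range(shape[0])]
    def get_config(self):
        return {}

def get_initializer(identifier):
    """Sketch of the patched lookup: instantiate a type object before returning it."""
    if identifier is None:
        return None
    if callable(identifier):
        if inspect.isclass(identifier):
            identifier = identifier()  # the added check: class -> instance
        return identifier
    raise ValueError(f"Could not interpret initializer identifier: {identifier!r}")

inst = get_initializer(Zeros)        # a bare class is instantiated
assert isinstance(inst, Zeros) and inst.get_config() == {}
assert isinstance(get_initializer(Zeros()), Zeros)  # instances pass through unchanged
```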
{
"body": "\r\n@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator FLOOR_MOD from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/floor_div.cc into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro without making any changes or including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/45749\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/45749\">No</a>\n",
"created_at": "2021-04-12T10:37:50Z"
}
],
"number": 45749,
"title": "micro: port op FLOOR_MOD from lite"
}
|
{
"body": "Complete implementation of TFLM operator FLOOR_MOD and associated TFLM test code.\r\n\r\nPR step 5 of the work to port operator FLOOR_MOD as tracked in Issue #45749",
"number": 47108,
"review_comments": [
{
"body": "2021? :)",
"created_at": "2021-03-29T17:51:05Z"
},
{
"body": "This was first time code complete last year",
"created_at": "2021-03-29T18:04:56Z"
}
],
"title": "micro: port operator FLOOR_MOD kernel from lite with test"
}
|
{
"commits": [
{
"message": "micro: port operator FLOOR_MOD kernel from lite with test\n\nComplete implementation of TFLM operator FLOOR_MOD and associated TFLM test code.\n\nPR step 5 of the work to port operator FLOOR_MOD as tracked in Issue #45749"
},
{
"message": "Merge branch 'master' into FloorMod-pr5"
}
],
"files": [
{
"diff": "@@ -15,6 +15,7 @@ limitations under the License.\n #ifndef TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_FLOOR_MOD_H_\n #define TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_FLOOR_MOD_H_\n \n+#include <cmath>\n #include <functional>\n \n namespace tflite {",
"filename": "tensorflow/lite/kernels/internal/reference/floor_mod.h",
"status": "modified"
},
{
"diff": "@@ -40,6 +40,7 @@ AllOpsResolver::AllOpsResolver() {\n AddEthosU();\n AddFloor();\n AddFloorDiv();\n+ AddFloorMod();\n AddFullyConnected();\n AddGreater();\n AddGreaterEqual();",
"filename": "tensorflow/lite/micro/all_ops_resolver.cc",
"status": "modified"
},
{
"diff": "@@ -275,6 +275,7 @@ cc_library(\n \"fill.cc\",\n \"floor.cc\",\n \"floor_div.cc\",\n+ \"floor_mod.cc\",\n \"l2norm.cc\",\n \"l2_pool_2d.cc\",\n \"leaky_relu.cc\",\n@@ -662,6 +663,19 @@ cc_test(\n ],\n )\n \n+cc_test(\n+ name = \"floor_mod_test\",\n+ srcs = [\"floor_mod_test.cc\"],\n+ deps = [\n+ \":kernel_runner\",\n+ \"//tensorflow/lite/c:common\",\n+ \"//tensorflow/lite/micro:debug_log\",\n+ \"//tensorflow/lite/micro:op_resolvers\",\n+ \"//tensorflow/lite/micro:test_helpers\",\n+ \"//tensorflow/lite/micro/testing:micro_test\",\n+ ],\n+)\n+\n cc_test(\n name = \"floor_test\",\n srcs = [",
"filename": "tensorflow/lite/micro/kernels/BUILD",
"status": "modified"
},
{
"diff": "@@ -1,4 +1,4 @@\n-/* Copyright 2018 The TensorFlow Authors. All Rights Reserved.\n+/* Copyright 2020 The TensorFlow Authors. All Rights Reserved.\n \n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n@@ -21,13 +21,11 @@ limitations under the License.\n #include \"tensorflow/lite/kernels/internal/types.h\"\n #include \"tensorflow/lite/kernels/kernel_util.h\"\n #include \"tensorflow/lite/micro/kernels/kernel_util.h\"\n+#include \"tensorflow/lite/micro/micro_utils.h\"\n \n // OLD-TODO(b/117523611): We should factor out a binary_op and put binary ops\n // there.\n namespace tflite {\n-namespace ops {\n-namespace micro {\n-namespace floor_mod {\n namespace {\n \n // Input/output tensor index.\n@@ -37,11 +35,7 @@ constexpr int kOutputTensor = 0;\n \n // OLD-TODO(b/117912880): Support quantization.\n \n-void* Init(TfLiteContext* context, const char* buffer, size_t length) {\n- return nullptr;\n-}\n-\n-TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n+TfLiteStatus CalculateOpData(TfLiteContext* context, TfLiteNode* node) {\n TF_LITE_ENSURE_EQ(context, NumInputs(node), 2);\n TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);\n \n@@ -56,89 +50,79 @@ TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n GetOutputSafe(context, node, kOutputTensor, &output));\n \n TF_LITE_ENSURE_TYPES_EQ(context, input1->type, input2->type);\n+ TF_LITE_ENSURE_TYPES_EQ(context, input1->type, output->type);\n \n- const TfLiteType type = input1->type;\n- if (type != kTfLiteInt32 && type != kTfLiteFloat32 && type != kTfLiteInt64) {\n- TF_LITE_KERNEL_LOG(context, \"Type '%s' is not supported by floor_mod.\",\n- TfLiteTypeGetName(type));\n- return kTfLiteError;\n- }\n- output->type = type;\n+ return kTfLiteOk;\n+}\n \n- return kTfLiteError;\n+void* Init(TfLiteContext* context, const char* buffer, size_t length) {\n+ return nullptr;\n+}\n+\n+TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n+ return CalculateOpData(context, node);\n }\n \n template <typename T>\n-TfLiteStatus EvalImpl(TfLiteContext* context, bool requires_broadcast,\n- const TfLiteTensor* input1, const TfLiteTensor* input2,\n- TfLiteTensor* output) {\n- const T* denominator_data = GetTensorData<T>(input2);\n-\n- if (input2->type == kTfLiteInt32 || input2->type == kTfLiteInt64) {\n- // Validate the denominator only for integer.\n- const int num_elements = NumElements(input2);\n- for (int i = 0; i < num_elements; ++i) {\n- if (denominator_data[i] == 0) {\n- TF_LITE_KERNEL_LOG(context, \"Division by 0\");\n- return kTfLiteError;\n- }\n- }\n- }\n+TfLiteStatus EvalFloorMod(TfLiteContext* context, bool requires_broadcast,\n+ const TfLiteEvalTensor* input1,\n+ const TfLiteEvalTensor* input2,\n+ TfLiteEvalTensor* output) {\n+ const T* denominator_data = tflite::micro::GetTensorData<T>(input2);\n+\n if (requires_broadcast) {\n reference_ops::BroadcastBinaryFunction4DSlow<T, T, T>(\n- GetTensorShape(input1), GetTensorData<T>(input1),\n- GetTensorShape(input2), denominator_data, GetTensorShape(output),\n- GetTensorData<T>(output), reference_ops::FloorMod<T>);\n+ tflite::micro::GetTensorShape(input1),\n+ tflite::micro::GetTensorData<T>(input1),\n+ tflite::micro::GetTensorShape(input2), denominator_data,\n+ tflite::micro::GetTensorShape(output),\n+ tflite::micro::GetTensorData<T>(output), reference_ops::FloorMod<T>);\n } else {\n reference_ops::BinaryFunction<T, T, T>(\n- GetTensorShape(input1), GetTensorData<T>(input1),\n- 
GetTensorShape(input2), GetTensorData<T>(input2),\n- GetTensorShape(output), GetTensorData<T>(output),\n- reference_ops::FloorMod<T>);\n+ tflite::micro::GetTensorShape(input1),\n+ tflite::micro::GetTensorData<T>(input1),\n+ tflite::micro::GetTensorShape(input2), denominator_data,\n+ tflite::micro::GetTensorShape(output),\n+ tflite::micro::GetTensorData<T>(output), reference_ops::FloorMod<T>);\n }\n \n return kTfLiteOk;\n }\n \n TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n- const TfLiteTensor* input1;\n- TF_LITE_ENSURE_OK(context,\n- GetInputSafe(context, node, kInputTensor1, &input1));\n- const TfLiteTensor* input2;\n- TF_LITE_ENSURE_OK(context,\n- GetInputSafe(context, node, kInputTensor2, &input2));\n- TfLiteTensor* output;\n- TF_LITE_ENSURE_OK(context,\n- GetOutputSafe(context, node, kOutputTensor, &output));\n+ const TfLiteEvalTensor* input1 =\n+ tflite::micro::GetEvalInput(context, node, kInputTensor1);\n+ const TfLiteEvalTensor* input2 =\n+ tflite::micro::GetEvalInput(context, node, kInputTensor2);\n+ TfLiteEvalTensor* output =\n+ tflite::micro::GetEvalOutput(context, node, kOutputTensor);\n \n- bool requires_broadcast = false;\n+ bool requires_broadcast = !tflite::micro::HaveSameShapes(input1, input2);\n \n switch (input1->type) {\n- case kTfLiteInt32: {\n- return EvalImpl<int32_t>(context, requires_broadcast, input1, input2,\n- output);\n- }\n- case kTfLiteInt64: {\n- return EvalImpl<int64_t>(context, requires_broadcast, input1, input2,\n- output);\n- }\n case kTfLiteFloat32: {\n- return EvalImpl<float>(context, requires_broadcast, input1, input2,\n- output);\n+ return EvalFloorMod<float>(context, requires_broadcast, input1, input2,\n+ output);\n }\n default: {\n- TF_LITE_KERNEL_LOG(context, \"Type '%s' is not supported by floor_mod.\",\n+ TF_LITE_KERNEL_LOG(context, \"Type '%s' is not supported by FLOOR_MOD.\",\n TfLiteTypeGetName(input1->type));\n return kTfLiteError;\n }\n }\n }\n \n } // namespace\n-} // namespace floor_mod\n \n-TfLiteRegistration* Register_FLOOR_MOD() { return nullptr; }\n+TfLiteRegistration Register_FLOOR_MOD() {\n+ return {/*init=*/Init,\n+ /*free=*/nullptr,\n+ /*prepare=*/Prepare,\n+ /*invoke=*/Eval,\n+ /*profiling_string=*/nullptr,\n+ /*builtin_code=*/0,\n+ /*custom_name=*/nullptr,\n+ /*version=*/0};\n+}\n \n-} // namespace micro\n-} // namespace ops\n } // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/floor_mod.cc",
"status": "modified"
},
{
"diff": "@@ -23,86 +23,87 @@ limitations under the License.\n \n namespace tflite {\n namespace testing {\n-namespace {}\n+namespace {\n \n-TF_LITE_MICRO_TESTS_BEGIN\n+void ExecuteFloorModTest(TfLiteTensor* tensors, int tensors_count) {\n+ constexpr int kInputArrayData[] = {2, 0, 1};\n+ TfLiteIntArray* inputs_array = IntArrayFromInts(kInputArrayData);\n+ constexpr int kOutputArrayData[] = {1, 2};\n+ TfLiteIntArray* outputs_array = IntArrayFromInts(kOutputArrayData);\n \n-TF_LITE_MICRO_TEST(FloorModSimple) {\n-#ifdef notdef\n- FloorMod<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n- {TensorType_INT32, {1, 2, 2, 1}},\n- {TensorType_INT32, {}});\n- model.PopulateTensor<int32_t>(model.input1(), {10, 9, 11, 3});\n- model.PopulateTensor<int32_t>(model.input2(), {2, 2, 3, 4});\n- EXPECT_THAT(model.GetOutput(), ElementsAre(0, 1, 2, 3));\n-#endif // notdef\n-}\n+ const TfLiteRegistration registration = tflite::Register_FLOOR_MOD();\n+ micro::KernelRunner runner(registration, tensors, tensors_count, inputs_array,\n+ outputs_array, nullptr);\n \n-TF_LITE_MICRO_TEST(FloorModNegativeValue) {\n-#ifdef notdef\n- FloorMod<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n- {TensorType_INT32, {1, 2, 2, 1}},\n- {TensorType_INT32, {}});\n- model.PopulateTensor<int32_t>(model.input1(), {10, -9, -11, 7});\n- model.PopulateTensor<int32_t>(model.input2(), {2, 2, -3, -4});\n- EXPECT_THAT(model.GetOutput(), ElementsAre(0, 1, -2, -1));\n-#endif // notdef\n+ TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.InitAndPrepare());\n+ TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.Invoke());\n }\n \n-TF_LITE_MICRO_TEST(FloorModBroadcast) {\n-#ifdef notdef\n- FloorMod<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n- {TensorType_INT32, {1}}, {TensorType_INT32, {}});\n- model.PopulateTensor<int32_t>(model.input1(), {10, -9, -11, 7});\n- model.PopulateTensor<int32_t>(model.input2(), {-3});\n- EXPECT_THAT(model.GetOutput(), ElementsAre(-2, 0, -2, -2));\n-#endif // notdef\n+template <typename T>\n+void TestFloorMod(const int* input1_dims_data, const T* input1_data,\n+ const int* input2_dims_data, const T* input2_data,\n+ const int* expected_dims, const T* expected_data,\n+ T* output_data) {\n+ TfLiteIntArray* input1_dims = IntArrayFromInts(input1_dims_data);\n+ TfLiteIntArray* input2_dims = IntArrayFromInts(input2_dims_data);\n+ TfLiteIntArray* output_dims = IntArrayFromInts(expected_dims);\n+ const int output_count = ElementCount(*output_dims);\n+\n+ TfLiteTensor tensors[] = {\n+ CreateTensor(input1_data, input1_dims),\n+ CreateTensor(input2_data, input2_dims),\n+ CreateTensor(output_data, output_dims),\n+ };\n+ constexpr int tensors_count = std::extent<decltype(tensors)>::value;\n+\n+ ExecuteFloorModTest(tensors, tensors_count);\n+\n+ for (int i = 0; i < output_count; i++) {\n+ TF_LITE_MICRO_EXPECT_EQ(expected_data[i], output_data[i]);\n+ }\n }\n \n-TF_LITE_MICRO_TEST(FloorModInt64WithBroadcast) {\n-#ifdef notdef\n- FloorMod<int64_t> model({TensorType_INT64, {1, 2, 2, 1}},\n- {TensorType_INT64, {1}}, {TensorType_INT64, {}});\n- model.PopulateTensor<int64_t>(model.input1(), {10, -9, -11, (1LL << 34) + 9});\n- model.PopulateTensor<int64_t>(model.input2(), {-(1LL << 33)});\n- EXPECT_THAT(model.GetOutput(),\n- ElementsAre(-8589934582, -9, -11, -8589934583));\n-#endif // notdef\n-}\n+} // namespace\n+} // namespace testing\n+} // namespace tflite\n+\n+TF_LITE_MICRO_TESTS_BEGIN\n \n TF_LITE_MICRO_TEST(FloorModFloatSimple) {\n-#ifdef notdef\n- FloorMod<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n- {TensorType_FLOAT32, {1, 2, 2, 
1}},\n- {TensorType_FLOAT32, {}});\n- model.PopulateTensor<float>(model.input1(), {10, 9, 11, 3});\n- model.PopulateTensor<float>(model.input2(), {2, 2, 3, 4});\n- EXPECT_THAT(model.GetOutput(), ElementsAre(0, 1, 2, 3));\n-#endif // notdef\n+ constexpr int kDims[] = {4, 1, 2, 2, 1};\n+ constexpr float kInput1[] = {10, 9, 11, 3};\n+ constexpr float kInput2[] = {2, 2, 3, 4};\n+ constexpr float kExpect[] = {0, 1, 2, 3};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ tflite::testing::TestFloorMod(kDims, kInput1, kDims, kInput2, kDims, kExpect,\n+ output_data);\n }\n \n TF_LITE_MICRO_TEST(FloorModFloatNegativeValue) {\n-#ifdef notdef\n- FloorMod<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n- {TensorType_FLOAT32, {1, 2, 2, 1}},\n- {TensorType_FLOAT32, {}});\n- model.PopulateTensor<float>(model.input1(), {10, -9, -11, 7});\n- model.PopulateTensor<float>(model.input2(), {2, 2, -3, -4});\n- EXPECT_THAT(model.GetOutput(), ElementsAre(0, 1, -2, -1));\n-#endif // notdef\n+ constexpr int kDims[] = {4, 1, 2, 2, 1};\n+ constexpr float kInput1[] = {10, -9, -11, 7};\n+ constexpr float kInput2[] = {2, 2, -3, -4};\n+ constexpr float kExpect[] = {0, 1, -2, -1};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ tflite::testing::TestFloorMod(kDims, kInput1, kDims, kInput2, kDims, kExpect,\n+ output_data);\n }\n \n TF_LITE_MICRO_TEST(FloorModFloatBroadcast) {\n-#ifdef notdef\n- FloorMod<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n- {TensorType_FLOAT32, {1}}, {TensorType_FLOAT32, {}});\n- model.PopulateTensor<float>(model.input1(), {10, -9, -11, 7});\n- model.PopulateTensor<float>(model.input2(), {-3});\n- EXPECT_THAT(model.GetOutput(), ElementsAre(-2, 0, -2, -2));\n-#endif // notdef\n+ constexpr int kDims1[] = {4, 1, 2, 2, 1};\n+ constexpr int kDims2[] = {1, 1};\n+ constexpr float kInput1[] = {10, -9, -11, 7};\n+ constexpr float kInput2[] = {-3};\n+ constexpr float kExpect[] = {-2, 0, -2, -2};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ tflite::testing::TestFloorMod(kDims1, kInput1, kDims2, kInput2, kDims1,\n+ kExpect, output_data);\n }\n \n TF_LITE_MICRO_TESTS_END\n-\n-} // namespace testing\n-} // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/floor_mod_test.cc",
"status": "modified"
},
{
"diff": "@@ -42,6 +42,7 @@ TfLiteRegistration Register_EXP();\n TfLiteRegistration Register_EXPAND_DIMS();\n TfLiteRegistration Register_FILL();\n TfLiteRegistration Register_FLOOR_DIV();\n+TfLiteRegistration Register_FLOOR_MOD();\n TfLiteRegistration Register_L2_POOL_2D();\n TfLiteRegistration Register_LEAKY_RELU();\n TfLiteRegistration Register_QUANTIZE();",
"filename": "tensorflow/lite/micro/kernels/micro_ops.h",
"status": "modified"
},
{
"diff": "@@ -233,6 +233,11 @@ class MicroMutableOpResolver : public MicroOpResolver {\n ParseFloorDiv);\n }\n \n+ TfLiteStatus AddFloorMod() {\n+ return AddBuiltin(BuiltinOperator_FLOOR_MOD, tflite::Register_FLOOR_MOD(),\n+ ParseFloorMod);\n+ }\n+\n TfLiteStatus AddFullyConnected(\n const TfLiteRegistration& registration = Register_FULLY_CONNECTED()) {\n return AddBuiltin(BuiltinOperator_FULLY_CONNECTED, registration,",
"filename": "tensorflow/lite/micro/micro_mutable_op_resolver.h",
"status": "modified"
},
{
"diff": "@@ -285,6 +285,7 @@ tensorflow/lite/micro/kernels/expand_dims_test.cc \\\n tensorflow/lite/micro/kernels/fill_test.cc \\\n tensorflow/lite/micro/kernels/floor_test.cc \\\n tensorflow/lite/micro/kernels/floor_div_test.cc \\\n+tensorflow/lite/micro/kernels/floor_mod_test.cc \\\n tensorflow/lite/micro/kernels/fully_connected_test.cc \\\n tensorflow/lite/micro/kernels/hard_swish_test.cc \\\n tensorflow/lite/micro/kernels/l2norm_test.cc \\\n@@ -346,6 +347,7 @@ tensorflow/lite/micro/kernels/expand_dims.cc \\\n tensorflow/lite/micro/kernels/fill.cc \\\n tensorflow/lite/micro/kernels/floor.cc \\\n tensorflow/lite/micro/kernels/floor_div.cc \\\n+tensorflow/lite/micro/kernels/floor_mod.cc \\\n tensorflow/lite/micro/kernels/fully_connected.cc \\\n tensorflow/lite/micro/kernels/fully_connected_common.cc \\\n tensorflow/lite/micro/kernels/hard_swish.cc \\\n@@ -437,6 +439,7 @@ tensorflow/lite/kernels/internal/reference/exp.h \\\n tensorflow/lite/kernels/internal/reference/fill.h \\\n tensorflow/lite/kernels/internal/reference/floor.h \\\n tensorflow/lite/kernels/internal/reference/floor_div.h \\\n+tensorflow/lite/kernels/internal/reference/floor_mod.h \\\n tensorflow/lite/kernels/internal/reference/fully_connected.h \\\n tensorflow/lite/kernels/internal/reference/hard_swish.h \\\n tensorflow/lite/kernels/internal/reference/integer_ops/add.h \\",
"filename": "tensorflow/lite/micro/tools/make/Makefile",
"status": "modified"
}
]
}
|
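Editor's note: Python's `%` operator uses floored-modulo semantics, so it can serve as a quick cross-check of the expected values wired into the `FloorModFloat*` test cases in the record above (the inputs below are copied from those tests).

```python
inputs1 = [10.0, -9.0, -11.0, 7.0]
inputs2 = [2.0, 2.0, -3.0, -4.0]

print([a % b for a, b in zip(inputs1, inputs2)])  # [0.0, 1.0, -2.0, -1.0]
print([a % -3.0 for a in inputs1])                # [-2.0, -0.0, -2.0, -2.0] (-0.0 == 0.0)
```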
{
"body": "@tensorflow/micro\r\n\r\nWhile the following command passes:\r\n```\r\nmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=bluepill test\r\n```\r\n\r\nRunning a single test with renode (for example):\r\n```bash\r\nmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=bluepill test_kernel_add_test\r\n```\r\n\r\nfails with:\r\n```\r\ntensorflow/lite/micro/testing/test_with_renode.sh tensorflow/lite/micro/tools/make/gen/bluepill_cortex-m3/bin/kernel_add_test '~~~ALL TESTS PASSED~~~'\r\ntensorflow/lite/micro/testing/test_with_renode.sh: line 69: $ROBOT_SCRIPT: ambiguous redirect\r\nmake: *** [tensorflow/lite/micro/tools/make/Makefile:663: test_kernel_add_test] Error 1\r\n```\r\n\r\nThe reason is that the changes from https://github.com/tensorflow/tensorflow/pull/45787 are incompatible with how the Makefile calls the test script when running an individual test (as opposed to `make test`).\r\n\r\n",
"comments": [],
"number": 46186,
"title": "running a single test with renode is broken."
}
|
{
"body": "This will help prevent issues like #46186 and #45348\r\n",
"number": 47094,
"review_comments": [],
"title": "Add an individual kernel test with Renode to the CI."
}
|
{
"commits": [
{
"message": "Add an individual kernel test with Renode to the CI.\n\nThis will help prevent issues like #46186 and #45348"
}
],
"files": [
{
"diff": "@@ -40,3 +40,8 @@ readable_run make -j8 -f tensorflow/lite/micro/tools/make/Makefile TARGET=${TARG\n # debugging info on failures.\n readable_run make -f tensorflow/lite/micro/tools/make/Makefile clean\n readable_run make -j8 -f tensorflow/lite/micro/tools/make/Makefile TARGET=${TARGET} OPTIMIZATION_LEVEL=-Os test\n+\n+# We use Renode differently when running the full test suite (make test) vs an\n+# individual test. So, we test only of the kernels individually as well to have\n+# both of the Renode variations be part of the CI.\n+readable_run make -j8 -f tensorflow/lite/micro/tools/make/Makefile TARGET=${TARGET} test_kernel_add_test",
"filename": "tensorflow/lite/micro/tools/ci_build/test_bluepill.sh",
"status": "modified"
}
]
}
|
{
"body": "**System information**\r\n- Have I written custom code: Yes\r\n- OS Platform and Distribution: Ubuntu 18.04\r\n- TensorFlow installed from: binary\r\n- TensorFlow version: 2.4.0\r\n- Python version: 3.6.9\r\n\r\n**Describe the current behavior**\r\n\r\n`tf.keras.layers.LayerNormalization` crashes when the input is empty and the layer is executed on CPU.\r\n\r\n**Describe the expected behavior**\r\n\r\nThe layer should not crash but return a tensor with the same shape.\r\n\r\n**Standalone code to reproduce the issue**\r\n\r\n```python\r\nimport os\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"\"\r\nimport tensorflow as tf\r\nlayer = tf.keras.layers.LayerNormalization()\r\nlayer(tf.zeros([1, 0, 10]))\r\n```\r\n\r\n**Other info / logs**\r\n\r\nThe code above exits with this error:\r\n\r\n```text\r\nFloating point exception (core dumped)\r\n```\r\n",
"comments": [
{
"body": "Was able to reproduce the issue with TF v2.3, TF v2.4 and TF-nightly. Colab session crashes on running the code, please find the gist of it [here](https://colab.research.google.com/gist/amahendrakar/a17885e619f64587f0946ef115003ff9/46366.ipynb#scrollTo=L8qDirj_Zx4v). Thanks!",
"created_at": "2021-01-13T11:03:04Z"
},
{
"body": "@guillaumekln , @amahendrakar , @jvishnuvardhan a tensor of shape [1,0,10] would return a tensor\r\n `tf.Tensor([], shape=(1, 0, 10), dtype=float32) TensorShape([1, 0, 10])` , Any tensor of value [] would cause the runtime to crash. I will send a pull request to raise a valid error regarding the faulty input tensor.",
"created_at": "2021-02-11T10:14:39Z"
},
{
"body": "Can we close this?",
"created_at": "2021-04-16T13:26:49Z"
},
{
"body": "The crash is not fixed. I believe the `FusedBatchNorm` CPU kernel should check if the input is empty. I tried to fix the issue but eventually moved to something else.",
"created_at": "2021-04-16T13:35:00Z"
},
{
"body": "The PRs at the python level was rejected. Do you think that it will be accepted at cpp level?",
"created_at": "2021-04-16T13:51:05Z"
},
{
"body": "The GPU kernel [is checking for empty inputs](https://github.com/tensorflow/tensorflow/blob/v2.5.0-rc1/tensorflow/core/kernels/fused_batch_norm_op.cc#L818), but not the CPU kernel. So I think a code change would make sense here.",
"created_at": "2021-04-16T13:55:27Z"
},
{
"body": "@nikitamaia I think that we could remove the Keras label here as this is a c++ contribution.",
"created_at": "2021-04-16T13:59:33Z"
},
{
"body": "Yes, the issue is more specifically related to `tf.compat.v1.nn.fused_batch_norm` that is called by `tf.keras.layers.LayerNormalization`.",
"created_at": "2021-04-16T14:06:59Z"
},
{
"body": "Still an issue in TF 2.6 Nightly as well.Thanks!",
"created_at": "2021-05-28T16:52:38Z"
},
{
"body": "I believe the issue has been resolved by commit 4b4bc60. Also verified with latest tf-nightly:\r\n```\r\n# python3\r\nPython 3.8.10 (default, Jun 2 2021, 10:49:15) \r\n[GCC 9.4.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import os\r\n>>> os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"\"\r\n>>> import tensorflow as tf\r\n2021-09-02 15:23:41.438425: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n2021-09-02 15:23:41.438474: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n>>> layer = tf.keras.layers.LayerNormalization()\r\n>>> layer(tf.zeros([1, 0, 10]))\r\n2021-09-02 15:23:42.905740: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory\r\n2021-09-02 15:23:42.905782: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)\r\n2021-09-02 15:23:42.905806: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (ip-172-31-87-192): /proc/driver/nvidia/version does not exist\r\n2021-09-02 15:23:42.906125: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\r\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n<tf.Tensor: shape=(1, 0, 10), dtype=float32, numpy=array([], shape=(1, 0, 10), dtype=float32)>\r\n```\r\n\r\nI will close this issue for now, but feel free to re-open if the issue persists.",
"created_at": "2021-09-02T15:25:04Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46366\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46366\">No</a>\n",
"created_at": "2021-09-02T15:25:06Z"
},
{
"body": "I also observed the following API aliases can cause the same issue in older versions of tensorflow.\r\nUsers should be cautious when using them on CPU up to tensorflow 2.5.1 (v2.5.0-160-g8222c1cfc86).\r\n\r\n- `(tf.keras.layers.LayerNormalization)`, `tf.compat.v1.keras.layers.LayerNormalization`\r\n\r\n<details>\r\n <summary>Code to reproduce the issue in <code>tf.compat.v1.keras.layers.LayerNormalization</code> in older versions</summary>\r\n\r\n```python\r\nimport tensorflow as tf\r\nprint(tf.version.GIT_VERSION, tf.version.VERSION, flush=True)\r\nprint(tf.config.list_physical_devices(), flush=True)\r\n\r\n\r\ntry:\r\n layer = tf.compat.v1.keras.layers.LayerNormalization()\r\n layer(tf.zeros([1, 0, 10]))\r\nexcept Exception as e:\r\n print(\"Error:\", str(e), flush=True)\r\nprint(\"Success!\", flush=True)\r\n```\r\n\r\nOn my CPU machine, the process aborts with a Floating point exception(core dumped), which is not expected.\r\n\r\n```text\r\nv2.5.0-160-g8222c1cfc86 2.5.1\r\n[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')]\r\nFloating point exception(core dumped)\r\n```\r\n</details>\r\n\r\nIt seems to be fixed in tensorflow 2.6.0 (v2.6.0-rc2-32-g919f693420e) and later versions.\r\n",
"created_at": "2023-09-21T10:47:33Z"
}
],
"number": 46366,
"title": "LayerNormalization crashes on empty inputs when run on CPU"
}
|
{
"body": "Fix #46366 . `LayerNormalization` layer crashes on empty inputs on CPU. This pull request would help `LayerNormalization` return the input as the output if the input value is `[]` . This is a similar condition to `LayerNormalization on GPU` and `BatchNormalization` which returns back the empty input without throwing in an error.",
"number": 47093,
"review_comments": [],
"title": "Fix LayerNormalization on CPU"
}
|
{
"commits": [
{
"message": "Update normalization.py"
}
],
"files": [
{
"diff": "@@ -1206,6 +1206,8 @@ def build(self, input_shape):\n def call(self, inputs):\n # Compute the axes along which to reduce the mean / variance\n input_shape = inputs.shape\n+ if 0 in input_shape:\n+ return inputs\n ndims = len(input_shape)\n \n # Broadcasting only necessary for norm when the axis is not just",
"filename": "tensorflow/python/keras/layers/normalization.py",
"status": "modified"
}
]
}
|
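Editor's note: the `normalization.py` patch above adds an early return when any input dimension is zero. The following is a minimal NumPy sketch of that guard, not the Keras layer itself; it shows how an empty input passes through unchanged instead of reaching the statistics computation that crashes the CPU kernel.

```python
import numpy as np

def layer_norm_sketch(inputs, epsilon=1e-3):
    # Guard mirrored from the patch: empty inputs are returned as-is.
    if 0 in inputs.shape:
        return inputs
    mean = inputs.mean(axis=-1, keepdims=True)
    var = inputs.var(axis=-1, keepdims=True)
    return (inputs - mean) / np.sqrt(var + epsilon)

print(layer_norm_sketch(np.zeros((1, 0, 10))).shape)  # (1, 0, 10), no crash
print(layer_norm_sketch(np.ones((1, 2, 10))).shape)   # (1, 2, 10)
```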
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator ELU from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro making minimal changes and not including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test\r\nPR 6: Extract common activation code into activations.cc and activation_utils.h files. Extract common test code into activation_test_utils.h file.\r\n",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46323\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46323\">No</a>\n",
"created_at": "2021-02-24T02:01:04Z"
}
],
"number": 46323,
"title": "micro: port op ELU from lite"
}
|
{
"body": "Implement skeleton (non-working) code for operator and test.\r\nHeader files changed.\r\nNamespaces changed.\r\nSome original code deleted.\r\nSome original code modified.\r\n\r\nPR step 4 of the work to port operator ELU as tracked in Issue #46323",
"number": 47078,
"review_comments": [],
"title": "micro: prepare to port operator ELU kernel from lite with test"
}
|
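Editor's note: the `elu.cc` diff in the pr_details record that follows keeps an int8 path that evaluates ELU through a 256-entry table (`OpData::table`, `EvalUsingLookupTable`). The code that populates the table is not shown in the diff, so the snippet below is only a hedged sketch of the general lookup-table technique, with assumed quantization parameters and `alpha = 1`.

```python
import numpy as np

def build_int8_elu_table(in_scale, in_zero_point, out_scale, out_zero_point):
    # With int8 activations there are only 256 possible inputs, so the op can
    # precompute ELU once per tensor and evaluate with a single table lookup.
    table = np.zeros(256, dtype=np.int8)
    for q in range(-128, 128):
        x = in_scale * (q - in_zero_point)               # dequantize
        y = x if x >= 0.0 else np.expm1(x)               # float ELU, alpha = 1
        q_out = int(round(y / out_scale)) + out_zero_point
        table[q + 128] = np.clip(q_out, -128, 127)       # requantize and clamp
    return table

lut = build_int8_elu_table(0.1, 0, 0.1, 0)  # assumed quantization parameters
print(lut[128 + 10], lut[128 - 10])         # ELU(1.0) -> ~10, ELU(-1.0) -> ~-6
```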
{
"commits": [
{
"message": "micro: prepare to port operator ELU kernel from lite with test\n\nImplement skeleton (non-working) code for operator and test.\nHeader files changed.\nNamespaces changed.\nSome original code deleted.\nSome original code modified.\n\nPR step 4 of the work to port operator ELU as tracked in Issue #46323"
}
],
"files": [
{
"diff": "@@ -1,4 +1,4 @@\n-/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n \n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n@@ -12,59 +12,31 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n ==============================================================================*/\n-#include <stddef.h>\n+\n+#include \"tensorflow/lite/kernels/internal/reference/elu.h\"\n \n #include <algorithm>\n #include <cmath>\n-#include <cstdint>\n #include <functional>\n #include <limits>\n \n-#include \"tensorflow/lite/c/builtin_op_data.h\"\n #include \"tensorflow/lite/c/common.h\"\n-#include \"tensorflow/lite/kernels/cpu_backend_context.h\"\n-#include \"tensorflow/lite/kernels/internal/common.h\"\n-#include \"tensorflow/lite/kernels/internal/compatibility.h\"\n-#include \"tensorflow/lite/kernels/internal/cppmath.h\"\n-#include \"tensorflow/lite/kernels/internal/optimized/optimized_ops.h\"\n #include \"tensorflow/lite/kernels/internal/quantization_util.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/binary_function.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/integer_ops/log_softmax.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/integer_ops/logistic.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/integer_ops/tanh.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/logistic.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/prelu.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/reference_ops.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/softmax.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/tanh.h\"\n-#include \"tensorflow/lite/kernels/internal/tensor.h\"\n-#include \"tensorflow/lite/kernels/internal/tensor_ctypes.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/process_broadcast_shapes.h\"\n #include \"tensorflow/lite/kernels/internal/types.h\"\n #include \"tensorflow/lite/kernels/kernel_util.h\"\n-\n-#if __aarch64__ && __clang__\n-#include <arm_neon.h>\n-#endif\n+#include \"tensorflow/lite/micro/kernels/kernel_util.h\"\n \n namespace tflite {\n namespace ops {\n-namespace builtin {\n+namespace micro {\n namespace activations {\n+namespace {\n \n // OLD-TODO(b/142762739): We should figure out a multi-threading plan for most\n // of the activation ops below.\n \n-enum KernelType {\n- kReference,\n- kGenericOptimized,\n- kFixedPointOptimized,\n-};\n-\n struct OpData {\n- int32_t input_multiplier = 0;\n- int input_left_shift = 0;\n- int32_t input_range_radius = 0;\n- int diff_min = 0;\n uint8_t table[256] = {0};\n };\n \n@@ -97,42 +69,19 @@ void EvalUsingLookupTable(struct OpData* data, const TfLiteTensor* input,\n uint8_t* output_data = GetTensorData<uint8_t>(output);\n const uint8_t* input_data = GetTensorData<uint8_t>(input);\n int i = 0;\n-#if __aarch64__ && __clang__\n- // This code uses ARM64-only instructions.\n- // OLD-TODO(b/143709993): Port to ARMv7\n-\n- // Load the tables into registers. 
(4*4 128-bit registers)\n- uint8x16x4_t table[4];\n- table[0] = vld1q_u8_x4(data->table + 16 * 4 * 0);\n- table[1] = vld1q_u8_x4(data->table + 16 * 4 * 1);\n- table[2] = vld1q_u8_x4(data->table + 16 * 4 * 2);\n- table[3] = vld1q_u8_x4(data->table + 16 * 4 * 3);\n-\n- // Vectorized loop; process uint8x16_t (16 elements) at a time.\n- constexpr int vectorized_16_loop_step = 16;\n- const int vectorized_16_loop_end =\n- size / vectorized_16_loop_step * vectorized_16_loop_step;\n- for (; i < vectorized_16_loop_end; i += vectorized_16_loop_step) {\n- uint8x16_t input = vld1q_u8(input_data + i);\n- uint8x16_t output = optimized_ops::aarch64_lookup_vector(table, input);\n- vst1q_u8(output_data + i, output);\n- }\n- // Postamble and non-ARM64 code: simple for loop.\n-#endif\n+\n for (; i < size; ++i) {\n output_data[i] = data->table[input_data[i]];\n }\n }\n \n+} // namespace\n+\n void* Init(TfLiteContext* context, const char* buffer, size_t length) {\n // This is a builtin op, so we don't use the contents in 'buffer', if any.\n // Instead, we allocate a new object to carry information from Prepare() to\n // Eval().\n- return new OpData;\n-}\n-\n-void Free(TfLiteContext* context, void* buffer) {\n- delete reinterpret_cast<OpData*>(buffer);\n+ return nullptr;\n }\n \n TfLiteStatus GenericPrepare(TfLiteContext* context, TfLiteNode* node) {\n@@ -144,8 +93,7 @@ TfLiteStatus GenericPrepare(TfLiteContext* context, TfLiteNode* node) {\n TF_LITE_ENSURE_OK(context, GetOutputSafe(context, node, 0, &output));\n TF_LITE_ENSURE_TYPES_EQ(context, input->type, output->type);\n \n- return context->ResizeTensor(context, output,\n- TfLiteIntArrayCopy(input->dims));\n+ return kTfLiteError;\n }\n \n TfLiteStatus EluPrepare(TfLiteContext* context, TfLiteNode* node) {\n@@ -174,12 +122,12 @@ TfLiteStatus EluEval(TfLiteContext* context, TfLiteNode* node) {\n optimized_ops::Elu(GetTensorShape(input), GetTensorData<float>(input),\n GetTensorShape(output), GetTensorData<float>(output));\n return kTfLiteOk;\n- } break;\n+ }\n case kTfLiteInt8: {\n OpData* data = reinterpret_cast<OpData*>(node->user_data);\n EvalUsingLookupTable(data, input, output);\n return kTfLiteOk;\n- } break;\n+ }\n default:\n TF_LITE_KERNEL_LOG(\n context, \"Only float32 and int8 is supported currently, got %s.\",\n@@ -190,12 +138,8 @@ TfLiteStatus EluEval(TfLiteContext* context, TfLiteNode* node) {\n \n } // namespace activations\n \n-TfLiteRegistration* Register_ELU() {\n- static TfLiteRegistration r = {activations::Init, activations::Free,\n- activations::EluPrepare, activations::EluEval};\n- return &r;\n-}\n+TfLiteRegistration* Register_ELU() { return nullptr; }\n \n-} // namespace builtin\n+} // namespace micro\n } // namespace ops\n } // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/elu.cc",
"status": "modified"
},
{
"diff": "@@ -1,4 +1,4 @@\n-/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n \n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n@@ -12,150 +12,33 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n ==============================================================================*/\n-#include <math.h>\n-#include <stdint.h>\n-#include <stdlib.h>\n-\n-#include <algorithm>\n-#include <initializer_list>\n #include <limits>\n-#include <map>\n-#include <memory>\n-#include <random>\n-#include <string>\n-#include <utility>\n-#include <vector>\n+#include <type_traits>\n \n-#include \"absl/memory/memory.h\"\n-#include \"flatbuffers/flatbuffers.h\" // from @flatbuffers\n-#include \"tensorflow/lite/core/api/op_resolver.h\"\n-#include \"tensorflow/lite/interpreter.h\"\n-#include \"tensorflow/lite/kernels/test_util.h\"\n-#include \"tensorflow/lite/schema/schema_generated.h\"\n-#include \"tensorflow/lite/string_type.h\"\n+#include \"tensorflow/lite/c/builtin_op_data.h\"\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_runner.h\"\n+#include \"tensorflow/lite/micro/test_helpers.h\"\n+#include \"tensorflow/lite/micro/testing/micro_test.h\"\n \n namespace tflite {\n-\n+namespace testing {\n namespace {\n \n-using ::testing::ElementsAreArray;\n-\n-class BaseActivationsOpModel : public SingleOpModel {\n- public:\n- // Most activations don't take any options, so this constructor works for\n- // them.\n- BaseActivationsOpModel(BuiltinOperator type, TensorData input) {\n- input_ = AddInput(input);\n- if (input.type == TensorType_UINT8) {\n- output_ = AddOutput({input.type, {}, 0, 0, 1. / 256});\n- } else if (input.type == TensorType_INT8) {\n- output_ = AddOutput({input.type, {}, 0, 0, 1. / 256, -128});\n- } else {\n- output_ = AddOutput({input.type, {}});\n- }\n- SetBuiltinOp(type, BuiltinOptions_NONE, 0);\n- BuildInterpreter({GetShape(input_)});\n- }\n-\n- BaseActivationsOpModel(TfLiteRegistration* registration, BuiltinOperator type,\n- TensorData input) {\n- input_ = AddInput(input);\n- if (input.type == TensorType_UINT8) {\n- output_ = AddOutput({input.type, {}, 0, 0, 1. / 256});\n- } else if (input.type == TensorType_INT8) {\n- output_ = AddOutput({input.type, {}, 0, 0, 1. / 256, -128});\n- } else {\n- output_ = AddOutput({input.type, {}});\n- }\n- SetBuiltinOp(type, BuiltinOptions_NONE, 0);\n- resolver_ = absl::make_unique<SingleOpResolver>(type, registration);\n- BuildInterpreter({GetShape(input_)});\n- }\n-\n- // A dedicated constructor for SOFTMAX, which does some options.\n- BaseActivationsOpModel(float softmax_beta, TensorData input,\n- TensorType output_type) {\n- input_ = AddInput(input);\n- if (output_type == TensorType_UINT8) {\n- output_ = AddOutput({TensorType_UINT8, {}, 0, 0, 1. / 256});\n- } else if (output_type == TensorType_INT8) {\n- output_ = AddOutput({TensorType_INT8, {}, 0, 0, 1. / 256, -128});\n- } else if (input.type == TensorType_INT16 &&\n- output_type == TensorType_INT16) {\n- output_ = AddOutput({TensorType_INT16,\n- {},\n- 0,\n- 0,\n- 1.0f / (std::numeric_limits<int16_t>::max() + 1),\n- 0});\n- } else if (input.type != TensorType_INT16 &&\n- output_type == TensorType_INT16) {\n- output_ = AddOutput({TensorType_INT16, {}, 0, 0, 1. 
/ 32768, -16384});\n- } else {\n- output_ = AddOutput({output_type, {}});\n- }\n- SetBuiltinOp(BuiltinOperator_SOFTMAX, BuiltinOptions_SoftmaxOptions,\n- CreateSoftmaxOptions(builder_, softmax_beta).Union());\n- BuildInterpreter({GetShape(input_)});\n- }\n-\n- // A dedicated constructor for LeakyRelu, which does some options.\n- BaseActivationsOpModel(TensorData input, float alpha) {\n- input_ = AddInput(input);\n- // The output scale and input scale might be different.\n- if (input.type == TensorType_UINT8 || input.type == TensorType_INT8 ||\n- input.type == TensorType_INT16) {\n- auto output_min = (input.min >= 0) ? input.min : input.min * alpha;\n- auto output_max = (input.max >= 0) ? input.max : input.max * alpha;\n- if (input.type == TensorType_INT16) {\n- output_ = AddOutput({TensorType_INT16,\n- {},\n- 0,\n- 0,\n- output_max / (std::numeric_limits<int16_t>::max()),\n- 0});\n- } else {\n- output_ = AddOutput({input.type, {}, output_min, output_max});\n- }\n- } else {\n- output_ = AddOutput({input.type, {}});\n- }\n- SetBuiltinOp(BuiltinOperator_LEAKY_RELU, BuiltinOptions_LeakyReluOptions,\n- CreateLeakyReluOptions(builder_, alpha).Union());\n- BuildInterpreter({GetShape(input_)});\n- }\n-\n- BaseActivationsOpModel(BuiltinOperator type, const TensorData& input,\n- const TensorData& output) {\n- input_ = AddInput(input);\n- output_ = AddOutput(output);\n- SetBuiltinOp(type, BuiltinOptions_NONE, 0);\n- BuildInterpreter({GetShape(input_)});\n- }\n-\n- BaseActivationsOpModel(TfLiteRegistration* registration, BuiltinOperator type,\n- const TensorData& input, const TensorData& output) {\n- input_ = AddInput(input);\n- output_ = AddOutput(output);\n- SetBuiltinOp(type, BuiltinOptions_NONE, 0);\n- resolver_ = absl::make_unique<SingleOpResolver>(type, registration);\n- BuildInterpreter({GetShape(input_)});\n- }\n-\n- protected:\n- int input_;\n- int output_;\n-};\n-\n-class FloatActivationsOpModel : public BaseActivationsOpModel {\n- public:\n- using BaseActivationsOpModel::BaseActivationsOpModel;\n-\n- void SetInput(const std::vector<float>& data) {\n- PopulateTensor(input_, data);\n+#ifdef notdef\n+BaseActivationsOpModel(BuiltinOperator type, TensorData input) {\n+ input_ = AddInput(input);\n+ if (input.type == TensorType_UINT8) {\n+ output_ = AddOutput({input.type, {}, 0, 0, 1. / 256});\n+ } else if (input.type == TensorType_INT8) {\n+ output_ = AddOutput({input.type, {}, 0, 0, 1. / 256, -128});\n+ } else {\n+ output_ = AddOutput({input.type, {}});\n }\n- std::vector<float> GetOutput() { return ExtractVector<float>(output_); }\n-};\n+ SetBuiltinOp(type, BuiltinOptions_NONE, 0);\n+ BuildInterpreter({GetShape(input_)});\n+}\n+#endif // notdef\n \n // Our fixed-point math function implementations have roughly 12 bits of\n // accuracy, when specialized to 16-bit fixed-point arithmetic.\n@@ -176,41 +59,25 @@ class FloatActivationsOpModel : public BaseActivationsOpModel {\n const float kQuantizedTolerance = 2 * (1. / 256);\n const float kQuantizedToleranceInt16 = 2 * (1. 
/ 4096);\n \n-class QuantizedActivationsOpModel : public BaseActivationsOpModel {\n- public:\n- using BaseActivationsOpModel::BaseActivationsOpModel;\n+TF_LITE_MICRO_TESTS_BEGIN\n \n- template <typename T>\n- void SetInput(const std::vector<float>& data) {\n- QuantizeAndPopulate<T>(input_, data);\n- }\n- template <typename T>\n- std::vector<T> GetOutput() {\n- return ExtractVector<T>(output_);\n- }\n-\n- template <typename T>\n- std::vector<float> GetDequantizedOutput() {\n- return Dequantize<T>(ExtractVector<T>(output_), GetScale(output_),\n- GetZeroPoint(output_));\n- }\n-};\n-\n-TEST(FloatActivationsOpTest, Elu) {\n+TF_LITE_MICRO_TEST(FloatActivationsOpTestElu) {\n+#ifdef notdef\n FloatActivationsOpModel m(BuiltinOperator_ELU,\n /*input=*/{TensorType_FLOAT32, {1, 2, 4, 1}});\n m.SetInput({\n 0, -6, 2, -4, //\n 3, -2, 10, -0.1, //\n });\n- m.Invoke();\n EXPECT_THAT(m.GetOutput(), ElementsAreArray(ArrayFloatNear({\n 0.0, -0.997521, 2.0, -0.981684, //\n 3.0, -0.864665, 10.0, -0.0951626, //\n })));\n+#endif // notdef\n }\n \n-TEST(QuantizedActivationsOpTest, EluInt8) {\n+TF_LITE_MICRO_TEST(QuantizedActivationsOpTestEluInt8) {\n+#ifdef notdef\n const float kMin = -1;\n const float kMax = 127.f / 128.f;\n QuantizedActivationsOpModel model(\n@@ -231,7 +98,11 @@ TEST(QuantizedActivationsOpTest, EluInt8) {\n 3.0, -0.875, 6.0, -0.125, //\n },\n kQuantizedTolerance)));\n+#endif // notdef\n }\n \n+TF_LITE_MICRO_TESTS_END\n+\n } // namespace\n+} // namespace testing\n } // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/elu_test.cc",
"status": "modified"
}
]
}
|
{
"body": "@tensorflow/micro\r\n\r\n**System information**\r\n- Host OS Platform and Distribution (e.g., Linux Ubuntu 16.04):\r\n- TensorFlow installed from (source or binary):\r\n- Tensorflow version (commit SHA if source):\r\n- Target platform (e.g. Arm Mbed OS, Arduino Nano 33 etc.):\r\n\r\n**Describe the problem**\r\nThe CI script tensorflow/lite/micro/tools/ci_build/test_stm32f4.sh tests OPTIMIZED_KERNEL_DIR=cmsis_nn with DSP extension.\r\nHowever there is no equivalent test for MVEI extension, i.e. Cortex-M55.\r\n\r\n**Please provide the exact sequence of commands/steps when you ran into the problem**\r\n\r\n",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46829\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46829\">No</a>\n",
"created_at": "2021-05-07T07:24:08Z"
}
],
"number": 46829,
"title": "Missing CI for OPTIMIZED_KERNEL_DIR=cmsis_nn with MVEI extension"
}
|
{
"body": "This will allow the unit tests to be run on additional targets that need some addiitonal initialization (for example corstone_300 from #46830).\r\n\r\nThis particular change is broken out from the Corstone PR #46830 to be able to have smaller more reviewable PRs.\r\n\r\nIn the past, we have added state to the DebugLog() and GetCurrentTimeTicks() functions as a way to avoid having an InitializeTarget function. With this change, we are deciding to go with an explicit intitialization step instead.\r\n\r\nThis change has added calls to tflite::InitializeTarget to the tests, benchmarks, and examples and converted the Arduino and SparkfunEdge to make use of this explicit initialization.\r\n\r\nThe changes for the Arduino and SparkfunEdge have not been tested on actual hardware.\r\n\r\nProgress towards #46829\r\nFixes http://b/150808076",
"number": 47077,
"review_comments": [
{
"body": "nitpick: typo",
"created_at": "2021-02-11T06:16:34Z"
},
{
"body": "fixed.",
"created_at": "2021-02-11T17:58:06Z"
}
],
"title": "Add an InitializeTarget function that can be specialized for a given target."
}
|
{
"commits": [
{
"message": "Add an InitializeTarget function that can be sepcialized for a given target.\n\nThis will allow the unit tests to be run on additional targets that need\nsome addiitonal initialization (for example cornstone_300 from #46830).\n\nThis particular change is broken out from the Cornstone PR #46830 to\nbe able to have smaller more reviewable PRs.\n\nIn the past, we have added state to the DebugLog() and\nGetCurrentTimeTicks() functions as a way to avoid having an\nInitializeTarget function. With this change, we are deciding to go with\nan explicit intitialization step instead.\n\nThis change has added calls to tflite::InitializeTarget to the tests,\nbenchmarks, and examples and converted the Arduino and SparkfunEdge to\nmake use of this explicit initialization.\n\nThe changes for the Arduino and SparkfunEdge have not been tested on\nactual hardware.\n\nProgress towards #46829"
}
],
"files": [
{
"diff": "@@ -216,6 +216,17 @@ cc_library(\n ],\n )\n \n+cc_library(\n+ name = \"system_setup\",\n+ srcs = [\n+ \"system_setup.cc\",\n+ ],\n+ hdrs = [\n+ \"system_setup.h\",\n+ ],\n+ copts = micro_copts(),\n+)\n+\n cc_test(\n name = \"micro_error_reporter_test\",\n srcs = [",
"filename": "tensorflow/lite/micro/BUILD",
"status": "modified"
},
{
"diff": "@@ -1,4 +1,4 @@\n-/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n \n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n@@ -12,27 +12,10 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n ==============================================================================*/\n-\n-#include \"tensorflow/lite/micro/debug_log.h\"\n-\n-#include \"Arduino.h\"\n-\n-// The Arduino DUE uses a different object for the default serial port shown in\n-// the monitor than most other models, so make sure we pick the right one. See\n-// https://github.com/arduino/Arduino/issues/3088#issuecomment-406655244\n-#if defined(__SAM3X8E__)\n-#define DEBUG_SERIAL_OBJECT (SerialUSB)\n-#else\n-#define DEBUG_SERIAL_OBJECT (Serial)\n-#endif\n-\n-// On Arduino platforms, we set up a serial port and write to it for debug\n-// logging.\n-extern \"C\" void DebugLog(const char* s) {\n- static bool is_initialized = false;\n- if (!is_initialized) {\n- DEBUG_SERIAL_OBJECT.begin(9600);\n- is_initialized = true;\n- }\n- DEBUG_SERIAL_OBJECT.print(s);\n-}\n+// This file is empty to ensure that a specialized implementation of\n+// debug_log.h is used (instead of the default implementation from\n+// tensorflow/lite/micro/debug_log.cc).\n+//\n+// The actual target-specific implementation of debug_log.h is in\n+// system_setup.cc since that allows us to consolidate all the target-specific\n+// specializations into one source file.",
"filename": "tensorflow/lite/micro/arduino/debug_log.cc",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,36 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+\n+#include \"tensorflow/lite/micro/system_setup.h\"\n+\n+#include \"Arduino.h\"\n+#include \"tensorflow/lite/micro/debug_log.h\"\n+\n+// The Arduino DUE uses a different object for the default serial port shown in\n+// the monitor than most other models, so make sure we pick the right one. See\n+// https://github.com/arduino/Arduino/issues/3088#issuecomment-406655244\n+#if defined(__SAM3X8E__)\n+#define DEBUG_SERIAL_OBJECT (SerialUSB)\n+#else\n+#define DEBUG_SERIAL_OBJECT (Serial)\n+#endif\n+\n+extern \"C\" void DebugLog(const char* s) { DEBUG_SERIAL_OBJECT.print(s); }\n+\n+namespace tflite {\n+\n+void InitializeTarget() { DEBUG_SERIAL_OBJECT.begin(9600); }\n+\n+} // namespace tflite",
"filename": "tensorflow/lite/micro/arduino/system_setup.cc",
"status": "added"
},
{
"diff": "@@ -46,6 +46,7 @@ cc_binary(\n \"//tensorflow/lite/micro:micro_error_reporter\",\n \"//tensorflow/lite/micro:micro_framework\",\n \"//tensorflow/lite/micro:op_resolvers\",\n+ \"//tensorflow/lite/micro:system_setup\",\n \"//tensorflow/lite/micro/kernels:fully_connected\",\n ],\n )\n@@ -63,6 +64,7 @@ cc_binary(\n \"//tensorflow/lite/micro:micro_framework\",\n \"//tensorflow/lite/micro:micro_utils\",\n \"//tensorflow/lite/micro:op_resolvers\",\n+ \"//tensorflow/lite/micro:system_setup\",\n \"//tensorflow/lite/micro/examples/person_detection:model_settings\",\n \"//tensorflow/lite/micro/examples/person_detection:person_detect_model_data\",\n \"//tensorflow/lite/micro/examples/person_detection:simple_images_test_data\",",
"filename": "tensorflow/lite/micro/benchmarks/BUILD",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,7 @@ limitations under the License.\n #include \"tensorflow/lite/micro/micro_interpreter.h\"\n #include \"tensorflow/lite/micro/micro_mutable_op_resolver.h\"\n #include \"tensorflow/lite/micro/micro_profiler.h\"\n+#include \"tensorflow/lite/micro/system_setup.h\"\n \n /*\n * Keyword Spotting Benchmark for performance optimizations. The model used in\n@@ -77,6 +78,7 @@ void KeywordRunNIerations(int iterations, const char* tag,\n } // namespace tflite\n \n int main(int argc, char** argv) {\n+ tflite::InitializeTarget();\n tflite::MicroProfiler profiler;\n \n uint32_t event_handle = profiler.BeginEvent(\"InitializeKeywordRunner\");",
"filename": "tensorflow/lite/micro/benchmarks/keyword_benchmark.cc",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@ limitations under the License.\n #include \"tensorflow/lite/micro/micro_error_reporter.h\"\n #include \"tensorflow/lite/micro/micro_interpreter.h\"\n #include \"tensorflow/lite/micro/micro_utils.h\"\n+#include \"tensorflow/lite/micro/system_setup.h\"\n #include \"tensorflow/lite/schema/schema_generated.h\"\n \n /*\n@@ -74,6 +75,8 @@ void PersonDetectionNIerations(const int8_t* input, int iterations,\n } // namespace tflite\n \n int main(int argc, char** argv) {\n+ tflite::InitializeTarget();\n+\n tflite::MicroProfiler profiler;\n \n uint32_t event_handle = profiler.BeginEvent(\"InitializeBenchmarkRunner\");",
"filename": "tensorflow/lite/micro/benchmarks/person_detection_benchmark.cc",
"status": "modified"
},
{
"diff": "@@ -82,6 +82,7 @@ cc_binary(\n \"//tensorflow/lite/micro:micro_error_reporter\",\n \"//tensorflow/lite/micro:micro_framework\",\n \"//tensorflow/lite/micro:op_resolvers\",\n+ \"//tensorflow/lite/micro:system_setup\",\n \"//tensorflow/lite/schema:schema_fbs\",\n ],\n )",
"filename": "tensorflow/lite/micro/examples/hello_world/BUILD",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@ limitations under the License.\n #include \"tensorflow/lite/micro/examples/hello_world/output_handler.h\"\n #include \"tensorflow/lite/micro/micro_error_reporter.h\"\n #include \"tensorflow/lite/micro/micro_interpreter.h\"\n+#include \"tensorflow/lite/micro/system_setup.h\"\n #include \"tensorflow/lite/schema/schema_generated.h\"\n \n // Globals, used for compatibility with Arduino-style sketches.\n@@ -38,6 +39,8 @@ uint8_t tensor_arena[kTensorArenaSize];\n \n // The name of this function is important for Arduino compatibility.\n void setup() {\n+ tflite::InitializeTarget();\n+\n // Set up logging. Google style is to avoid globals or statics because of\n // lifetime uncertainty, but since this has a trivial destructor it's okay.\n // NOLINTNEXTLINE(runtime-global-variables)",
"filename": "tensorflow/lite/micro/examples/hello_world/main_functions.cc",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@ limitations under the License.\n #include \"tensorflow/lite/micro/micro_error_reporter.h\"\n #include \"tensorflow/lite/micro/micro_interpreter.h\"\n #include \"tensorflow/lite/micro/micro_mutable_op_resolver.h\"\n+#include \"tensorflow/lite/micro/system_setup.h\"\n #include \"tensorflow/lite/schema/schema_generated.h\"\n \n #define NUM_OUT_CH 3\n@@ -34,6 +35,7 @@ static const char* labels[] = {\"Plane\", \"Car\", \"Bird\", \"Cat\", \"Deer\",\n \"Dog\", \"Frog\", \"Horse\", \"Ship\", \"Truck\"};\n \n int main(int argc, char** argv) {\n+ tflite::InitializeTarget();\n init_lcd();\n wait_ms(100);\n ",
"filename": "tensorflow/lite/micro/examples/image_recognition_experimental/main.cc",
"status": "modified"
},
{
"diff": "@@ -154,6 +154,7 @@ cc_binary(\n \"//tensorflow/lite/micro:micro_error_reporter\",\n \"//tensorflow/lite/micro:micro_framework\",\n \"//tensorflow/lite/micro:op_resolvers\",\n+ \"//tensorflow/lite/micro:system_setup\",\n \"//tensorflow/lite/schema:schema_fbs\",\n ],\n )",
"filename": "tensorflow/lite/micro/examples/magic_wand/BUILD",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@ limitations under the License.\n #include \"tensorflow/lite/micro/micro_error_reporter.h\"\n #include \"tensorflow/lite/micro/micro_interpreter.h\"\n #include \"tensorflow/lite/micro/micro_mutable_op_resolver.h\"\n+#include \"tensorflow/lite/micro/system_setup.h\"\n #include \"tensorflow/lite/schema/schema_generated.h\"\n \n // Globals, used for compatibility with Arduino-style sketches.\n@@ -42,6 +43,8 @@ uint8_t tensor_arena[kTensorArenaSize];\n \n // The name of this function is important for Arduino compatibility.\n void setup() {\n+ tflite::InitializeTarget();\n+\n // Set up logging. Google style is to avoid globals or statics because of\n // lifetime uncertainty, but since this has a trivial destructor it's okay.\n static tflite::MicroErrorReporter micro_error_reporter; // NOLINT",
"filename": "tensorflow/lite/micro/examples/magic_wand/main_functions.cc",
"status": "modified"
},
{
"diff": "@@ -362,6 +362,7 @@ cc_binary(\n \"//tensorflow/lite/micro:micro_error_reporter\",\n \"//tensorflow/lite/micro:micro_framework\",\n \"//tensorflow/lite/micro:op_resolvers\",\n+ \"//tensorflow/lite/micro:system_setup\",\n \"//tensorflow/lite/micro/examples/micro_speech/micro_features:micro_model_settings\",\n \"//tensorflow/lite/micro/examples/micro_speech/micro_features:model\",\n \"//tensorflow/lite/schema:schema_fbs\",\n@@ -383,6 +384,7 @@ cc_binary(\n \"//tensorflow/lite/micro:micro_error_reporter\",\n \"//tensorflow/lite/micro:micro_framework\",\n \"//tensorflow/lite/micro:op_resolvers\",\n+ \"//tensorflow/lite/micro:system_setup\",\n \"//tensorflow/lite/micro/examples/micro_speech/micro_features:micro_model_settings\",\n \"//tensorflow/lite/micro/examples/micro_speech/micro_features:model\",\n \"//tensorflow/lite/schema:schema_fbs\",",
"filename": "tensorflow/lite/micro/examples/micro_speech/BUILD",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,7 @@ limitations under the License.\n #include \"tensorflow/lite/micro/micro_error_reporter.h\"\n #include \"tensorflow/lite/micro/micro_interpreter.h\"\n #include \"tensorflow/lite/micro/micro_mutable_op_resolver.h\"\n+#include \"tensorflow/lite/micro/system_setup.h\"\n #include \"tensorflow/lite/schema/schema_generated.h\"\n \n // Globals, used for compatibility with Arduino-style sketches.\n@@ -47,6 +48,8 @@ int8_t* model_input_buffer = nullptr;\n \n // The name of this function is important for Arduino compatibility.\n void setup() {\n+ tflite::InitializeTarget();\n+\n // Set up logging. Google style is to avoid globals or statics because of\n // lifetime uncertainty, but since this has a trivial destructor it's okay.\n // NOLINTNEXTLINE(runtime-global-variables)",
"filename": "tensorflow/lite/micro/examples/micro_speech/main_functions.cc",
"status": "modified"
},
{
"diff": "@@ -138,6 +138,7 @@ cc_binary(\n \"//tensorflow/lite/micro:micro_error_reporter\",\n \"//tensorflow/lite/micro:micro_framework\",\n \"//tensorflow/lite/micro:op_resolvers\",\n+ \"//tensorflow/lite/micro:system_setup\",\n \"//tensorflow/lite/schema:schema_fbs\",\n ],\n )",
"filename": "tensorflow/lite/micro/examples/person_detection/BUILD",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@ limitations under the License.\n #include \"tensorflow/lite/micro/micro_error_reporter.h\"\n #include \"tensorflow/lite/micro/micro_interpreter.h\"\n #include \"tensorflow/lite/micro/micro_mutable_op_resolver.h\"\n+#include \"tensorflow/lite/micro/system_setup.h\"\n #include \"tensorflow/lite/schema/schema_generated.h\"\n \n // Globals, used for compatibility with Arduino-style sketches.\n@@ -45,6 +46,8 @@ static uint8_t tensor_arena[kTensorArenaSize];\n \n // The name of this function is important for Arduino compatibility.\n void setup() {\n+ tflite::InitializeTarget();\n+\n // Set up logging. Google style is to avoid globals or statics because of\n // lifetime uncertainty, but since this has a trivial destructor it's okay.\n // NOLINTNEXTLINE(runtime-global-variables)",
"filename": "tensorflow/lite/micro/examples/person_detection/main_functions.cc",
"status": "modified"
},
{
"diff": "@@ -1,4 +1,4 @@\n-/* Copyright 2018 The TensorFlow Authors. All Rights Reserved.\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n \n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n@@ -12,24 +12,10 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n ==============================================================================*/\n-\n-// Implementation for the DebugLog() function that prints to the UART on the\n-// SparkFun Edge microcontroller. The same should work for other targets using\n-// the Ambiq Apollo 3.\n-\n-#include \"tensorflow/lite/micro/debug_log.h\"\n-\n-#include \"am_bsp.h\" // NOLINT\n-#include \"am_util.h\" // NOLINT\n-\n-extern \"C\" void DebugLog(const char* s) {\n-#ifndef TF_LITE_STRIP_ERROR_STRINGS\n- static bool is_initialized = false;\n- if (!is_initialized) {\n- am_bsp_uart_printf_enable();\n- is_initialized = true;\n- }\n-\n- am_util_stdio_printf(\"%s\", s);\n-#endif\n-}\n+// This file is empty to ensure that a specialized implementation of\n+// debug_log.h is used (instead of the default implementation from\n+// tensorflow/lite/micro/debug_log.cc).\n+//\n+// The actual target-specific implementation of debug_log.h is in\n+// system_setup.cc since that allows us to consolidate all the target-specific\n+// specializations into one source file.",
"filename": "tensorflow/lite/micro/sparkfun_edge/debug_log.cc",
"status": "modified"
},
{
"diff": "@@ -1,4 +1,4 @@\n-/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n \n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n@@ -12,91 +12,10 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n ==============================================================================*/\n-\n-// Reference implementation of timer functions. Platforms are not required to\n-// implement these timer methods, but they are required to enable profiling.\n-\n-// On platforms that have a POSIX stack or C library, it can be written using\n-// methods from <sys/time.h> or clock() from <time.h>.\n-\n-// To add an equivalent function for your own platform, create your own\n-// implementation file, and place it in a subfolder with named after the OS\n-// you're targeting. For example, see the Cortex M bare metal version in\n-// tensorflow/lite/micro/bluepill/micro_timer.cc or the mbed one on\n-// tensorflow/lite/micro/mbed/micro_timer.cc.\n-\n-#include \"tensorflow/lite/micro/micro_time.h\"\n-\n-#include \"tensorflow/lite/micro/debug_log.h\"\n-\n-// These are headers from Ambiq's Apollo3 SDK.\n-#include \"am_bsp.h\" // NOLINT\n-#include \"am_mcu_apollo.h\" // NOLINT\n-#include \"am_util.h\" // NOLINT\n-\n-namespace tflite {\n-namespace {\n-\n-// Select CTIMER 1 as benchmarking timer on Sparkfun Edge. This timer must not\n-// be used elsewhere.\n-constexpr int kTimerNum = 1;\n-\n-// Clock set to operate at 12MHz.\n-constexpr int kClocksPerSecond = 12e6;\n-\n-// Enables 96MHz burst mode on Sparkfun Edge. Enable in timer since most\n-// benchmarks and profilers want maximum performance for debugging.\n-void BurstModeEnable() {\n- am_hal_clkgen_control(AM_HAL_CLKGEN_CONTROL_SYSCLK_MAX, 0);\n-\n- // Set the default cache configuration\n- am_hal_cachectrl_config(&am_hal_cachectrl_defaults);\n- am_hal_cachectrl_enable();\n-\n- am_hal_burst_avail_e eBurstModeAvailable;\n- am_hal_burst_mode_e eBurstMode;\n-\n- // Check that the Burst Feature is available.\n- int status = am_hal_burst_mode_initialize(&eBurstModeAvailable);\n- if (status != AM_HAL_STATUS_SUCCESS ||\n- eBurstModeAvailable != AM_HAL_BURST_AVAIL) {\n- DebugLog(\"Failed to initialize burst mode.\");\n- return;\n- }\n-\n- status = am_hal_burst_mode_enable(&eBurstMode);\n-\n- if (status != AM_HAL_STATUS_SUCCESS || eBurstMode != AM_HAL_BURST_MODE) {\n- DebugLog(\"Failed to Enable Burst Mode operation\\n\");\n- }\n-}\n-\n-} // namespace\n-\n-int32_t ticks_per_second() { return kClocksPerSecond; }\n-\n-// Calling this method enables a timer that runs for eternity. 
The user is\n-// responsible for avoiding trampling on this timer's config, otherwise timing\n-// measurements may no longer be valid.\n-int32_t GetCurrentTimeTicks() {\n- // TODO(b/150808076): Split out initialization, intialize in interpreter.\n- static bool is_initialized = false;\n- if (!is_initialized) {\n- BurstModeEnable();\n- am_hal_ctimer_config_t timer_config;\n- // Operate as a 32-bit timer.\n- timer_config.ui32Link = 1;\n- // Set timer A to continuous mode at 12MHz.\n- timer_config.ui32TimerAConfig =\n- AM_HAL_CTIMER_FN_CONTINUOUS | AM_HAL_CTIMER_HFRC_12MHZ;\n-\n- am_hal_ctimer_stop(kTimerNum, AM_HAL_CTIMER_BOTH);\n- am_hal_ctimer_clear(kTimerNum, AM_HAL_CTIMER_BOTH);\n- am_hal_ctimer_config(kTimerNum, &timer_config);\n- am_hal_ctimer_start(kTimerNum, AM_HAL_CTIMER_TIMERA);\n- is_initialized = true;\n- }\n- return CTIMERn(kTimerNum)->TMR0;\n-}\n-\n-} // namespace tflite\n+// This file is empty to ensure that a specialized implementation of\n+// micro_time.h is used (instead of the default implementation from\n+// tensorflow/lite/micro/micro_time.cc).\n+//\n+// The actual target-specific implementation of micro_time.h is in\n+// system_setup.cc since that allows us to consolidate all the target-specific\n+// specializations into one source file.",
"filename": "tensorflow/lite/micro/sparkfun_edge/micro_time.cc",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,99 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+\n+#include \"tensorflow/lite/micro/system_setup.h\"\n+\n+#include \"tensorflow/lite/micro/debug_log.h\"\n+#include \"tensorflow/lite/micro/micro_time.h\"\n+\n+// These are headers from Ambiq's Apollo3 SDK.\n+#include \"am_bsp.h\" // NOLINT\n+#include \"am_mcu_apollo.h\" // NOLINT\n+#include \"am_util.h\" // NOLINT\n+\n+namespace {\n+\n+// Select CTIMER 1 as benchmarking timer on Sparkfun Edge. This timer must not\n+// be used elsewhere.\n+constexpr int kTimerNum = 1;\n+\n+// Clock set to operate at 12MHz.\n+constexpr int kClocksPerSecond = 12e6;\n+\n+// Enables 96MHz burst mode on Sparkfun Edge. Enable in timer since most\n+// benchmarks and profilers want maximum performance for debugging.\n+void BurstModeEnable() {\n+ am_hal_clkgen_control(AM_HAL_CLKGEN_CONTROL_SYSCLK_MAX, 0);\n+\n+ // Set the default cache configuration\n+ am_hal_cachectrl_config(&am_hal_cachectrl_defaults);\n+ am_hal_cachectrl_enable();\n+\n+ am_hal_burst_avail_e eBurstModeAvailable;\n+ am_hal_burst_mode_e eBurstMode;\n+\n+ // Check that the Burst Feature is available.\n+ int status = am_hal_burst_mode_initialize(&eBurstModeAvailable);\n+ if (status != AM_HAL_STATUS_SUCCESS ||\n+ eBurstModeAvailable != AM_HAL_BURST_AVAIL) {\n+ DebugLog(\"Failed to initialize burst mode.\\n\");\n+ return;\n+ }\n+\n+ status = am_hal_burst_mode_enable(&eBurstMode);\n+\n+ if (status != AM_HAL_STATUS_SUCCESS || eBurstMode != AM_HAL_BURST_MODE) {\n+ DebugLog(\"Failed to Enable Burst Mode operation\\n\");\n+ }\n+}\n+\n+} // namespace\n+\n+// Implementation for the DebugLog() function that prints to the UART on the\n+// SparkFun Edge microcontroller. The same should work for other targets using\n+// the Ambiq Apollo 3.\n+extern \"C\" void DebugLog(const char* s) {\n+#ifndef TF_LITE_STRIP_ERROR_STRINGS\n+ am_util_stdio_printf(\"%s\", s);\n+#endif\n+}\n+\n+namespace tflite {\n+\n+// Calling this method enables a timer that runs for eternity. The user is\n+// responsible for avoiding trampling on this timer's config, otherwise timing\n+// measurements may no longer be valid.\n+void InitializeTarget() {\n+ am_bsp_uart_printf_enable();\n+\n+ BurstModeEnable();\n+ am_hal_ctimer_config_t timer_config;\n+ // Operate as a 32-bit timer.\n+ timer_config.ui32Link = 1;\n+ // Set timer A to continuous mode at 12MHz.\n+ timer_config.ui32TimerAConfig =\n+ AM_HAL_CTIMER_FN_CONTINUOUS | AM_HAL_CTIMER_HFRC_12MHZ;\n+\n+ am_hal_ctimer_stop(kTimerNum, AM_HAL_CTIMER_BOTH);\n+ am_hal_ctimer_clear(kTimerNum, AM_HAL_CTIMER_BOTH);\n+ am_hal_ctimer_config(kTimerNum, &timer_config);\n+ am_hal_ctimer_start(kTimerNum, AM_HAL_CTIMER_TIMERA);\n+}\n+\n+int32_t ticks_per_second() { return kClocksPerSecond; }\n+\n+int32_t GetCurrentTimeTicks() { return CTIMERn(kTimerNum)->TMR0; }\n+\n+} // namespace tflite",
"filename": "tensorflow/lite/micro/sparkfun_edge/system_setup.cc",
"status": "added"
},
{
"diff": "@@ -0,0 +1,25 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+\n+#include \"tensorflow/lite/micro/system_setup.h\"\n+\n+namespace tflite {\n+\n+// To add an equivalent function for your own platform, create your own\n+// implementation file, and place it in a subfolder named after the target. See\n+// tensorflow/lite/micro/debug_log.cc for a similar example.\n+void InitializeTarget() {}\n+\n+} // namespace tflite",
"filename": "tensorflow/lite/micro/system_setup.cc",
"status": "added"
},
{
"diff": "@@ -0,0 +1,27 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#ifndef TENSORFLOW_LITE_MICRO_SYSTEM_SETUP_H_\n+#define TENSORFLOW_LITE_MICRO_SYSTEM_SETUP_H_\n+\n+namespace tflite {\n+\n+// This should called during initialization of TFLM binaries and tests. It can\n+// be specialized if there is a need for custom target-specific intialization.\n+// For more information, see tensorflow/lite/micro/system_setup.cc.\n+void InitializeTarget();\n+\n+} // namespace tflite\n+\n+#endif // TENSORFLOW_LITE_MICRO_SYSTEM_SETUP_H_",
"filename": "tensorflow/lite/micro/system_setup.h",
"status": "added"
},
{
"diff": "@@ -30,6 +30,7 @@ cc_library(\n \"//tensorflow/lite/micro:micro_error_reporter\",\n \"//tensorflow/lite/micro:micro_framework\",\n \"//tensorflow/lite/micro:micro_utils\",\n+ \"//tensorflow/lite/micro:system_setup\",\n \"//tensorflow/lite/micro:test_helpers\",\n ],\n )",
"filename": "tensorflow/lite/micro/testing/BUILD",
"status": "modified"
},
{
"diff": "@@ -56,6 +56,7 @@ limitations under the License.\n \n #include \"tensorflow/lite/c/common.h\"\n #include \"tensorflow/lite/micro/micro_error_reporter.h\"\n+#include \"tensorflow/lite/micro/system_setup.h\"\n \n namespace micro_test {\n extern int tests_passed;\n@@ -64,6 +65,19 @@ extern bool is_test_complete;\n extern bool did_test_fail;\n } // namespace micro_test\n \n+namespace tflite {\n+\n+// This additional helper function is used (instead of directly calling\n+// tflite::InitializeTarget from the TF_LITE_MICRO_TESTS_BEGIN macro) to avoid\n+// adding a dependency from every bazel test target to micro:system_setp (which\n+// is the target that implements InitializeTarget().\n+//\n+// The underlying issue here is that the use of the macros results in\n+// dependencies that can be containted within the micro/testing:micro_test\n+// target bleeding on to all the tests.\n+inline void InitializeTest() { InitializeTarget(); }\n+} // namespace tflite\n+\n #define TF_LITE_MICRO_TESTS_BEGIN \\\n namespace micro_test { \\\n int tests_passed; \\\n@@ -74,7 +88,8 @@ extern bool did_test_fail;\n \\\n int main(int argc, char** argv) { \\\n micro_test::tests_passed = 0; \\\n- micro_test::tests_failed = 0;\n+ micro_test::tests_failed = 0; \\\n+ tflite::InitializeTest();\n \n #define TF_LITE_MICRO_TESTS_END \\\n MicroPrintf(\"%d/%d tests passed\", micro_test::tests_passed, \\",
"filename": "tensorflow/lite/micro/testing/micro_test.h",
"status": "modified"
}
]
}
|
{
"body": "@tensorflow/micro\r\n\r\n**System information**\r\n- Host OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu\r\n- TensorFlow installed from (source or binary): source\r\n- Tensorflow version (commit SHA if source):\r\n- Target platform (e.g. Arm Mbed OS, Arduino Nano 33 etc.):\r\n\r\n**Describe the problem**\r\nWhen building the keyword benchmark project like this:\r\nmake -f tensorflow/lite/micro/tools/make/Makefile generate_keyword_benchmark_make_project\r\n\r\nI get 2 errors. One is due to micro_benchmark.h not being copied into the generated project, the other is a duplicate object error for **g_keyword_scrambled_model_data**. That happens because keyword_scrambled_model_data.cc somehow appears twice in the generated Makefile :)\r\n\r\nI will open a PR with a fix shortly.\r\n\r\n\r\n\r\n**Please provide the exact sequence of commands/steps when you ran into the problem**\r\n\r\n",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46860\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46860\">No</a>\n",
"created_at": "2021-02-10T00:49:58Z"
}
],
"number": 46860,
"title": "[TFLM] keyword benchmark broken when using generated Makefile projects"
}
|
{
"body": "These sources are only needed for specific tests and are now explicitly specified as part of creating a test target.\r\n\r\nFrom this change onwards, we will explicitly create test targets for tests that depend on sources outside of libtensorflow-microlite.a. This avoids putting unnecessary files into MICROLITE_CC_SRCS.\r\n\r\nManually verified that the following command does not error out:\r\n```\r\nmake -f tensorflow/lite/micro/tools/make/Makefile generate_keyword_benchmark_make_project && cd tensorflow/lite/micro/tools/make/gen/linux_x86_64_default/prj/keyword_benchmark/make/ && make -j8\r\n```\r\n\r\nFixes #46860\r\n",
"number": 47018,
"review_comments": [],
"title": "Remove the sources containing the models from the TFLM static lib."
}
|
{
"commits": [
{
"message": "Remove the sources containing the models from the TFLM static lib.\n\nThese sources are only needed for specific tests and are now explicitly\nspecified as part of creating a test target.\n\nFrom this change onwards, we will explicitly create test targets for\ntests that depend on sources outside of libtensorflow-microlite.a. This\navoids putting unnecessary files into MICROLITE_CC_SRCS.\n\nManually verified that the following command does not error out:\n```\nmake -f tensorflow/lite/micro/tools/make/Makefile generate_keyword_benchmark_make_project && cd tensorflow/lite/micro/tools/make/gen/linux_x86_64_default/prj/keyword_benchmark/make/ && make -j8\n```\n\nFixes #46860"
}
],
"files": [
{
"diff": "@@ -3,7 +3,8 @@ tensorflow/lite/micro/benchmarks/keyword_benchmark.cc \\\n tensorflow/lite/micro/benchmarks/keyword_scrambled_model_data.cc\n \n KEYWORD_BENCHMARK_HDRS := \\\n-tensorflow/lite/micro/benchmarks/keyword_scrambled_model_data.h\n+tensorflow/lite/micro/benchmarks/keyword_scrambled_model_data.h \\\n+tensorflow/lite/micro/benchmarks/micro_benchmark.h\n \n PERSON_DETECTION_BENCHMARK_SRCS := \\\n tensorflow/lite/micro/benchmarks/person_detection_benchmark.cc \\\n@@ -20,4 +21,3 @@ $(KEYWORD_BENCHMARK_SRCS),$(KEYWORD_BENCHMARK_HDRS)))\n \n $(eval $(call microlite_test,person_detection_benchmark,\\\n $(PERSON_DETECTION_BENCHMARK_SRCS),$(PERSON_DETECTION_BENCHMARK_HDRS)))\n-",
"filename": "tensorflow/lite/micro/benchmarks/Makefile.inc",
"status": "modified"
},
{
"diff": "@@ -366,9 +366,7 @@ $(wildcard tensorflow/lite/micro/testing/*.h)\n \n MICROLITE_CC_BASE_SRCS := \\\n $(wildcard tensorflow/lite/micro/*.cc) \\\n-$(wildcard tensorflow/lite/micro/benchmarks/*model_data.cc) \\\n $(wildcard tensorflow/lite/micro/memory_planner/*.cc) \\\n-$(wildcard tensorflow/lite/micro/testing/*model.cc) \\\n tensorflow/lite/c/common.c \\\n tensorflow/lite/core/api/error_reporter.cc \\\n tensorflow/lite/core/api/flatbuffer_conversions.cc \\\n@@ -665,9 +663,55 @@ $(BINDIR)%.bin: $(BINDIR)%\n \t@mkdir -p $(dir $@)\n \t$(OBJCOPY) $< $@ -O binary\n \n-# Generate standalone makefile projects for all of the test targets.\n+\n+# Some tests have additional dependencies (beyond libtensorflow-microlite.a) and\n+# those need to be explicitly specified with their own individual call to the\n+# microlite_test helper function. For these tests, we also need to make sure to\n+# not add targets for them if they have been excluded as part of the target\n+# specific Makefile.\n+EXPLICITLY_SPECIFIED_TEST:= tensorflow/lite/micro/memory_arena_threshold_test.cc\n+ifneq ($(findstring $(EXPLICITLY_SPECIFIED_TEST),$(MICROLITE_TEST_SRCS)),)\n+ MICROLITE_TEST_SRCS := $(filter-out $(EXPLICITLY_SPECIFIED_TEST), $(MICROLITE_TEST_SRCS))\n+ EXPLICITLY_SPECIFIED_TEST_SRCS := \\\n+ $(EXPLICITLY_SPECIFIED_TEST) \\\n+ tensorflow/lite/micro/benchmarks/keyword_scrambled_model_data.cc \\\n+ tensorflow/lite/micro/testing/test_conv_model.cc\n+ EXPLICITLY_SPECIFIED_TEST_HDRS := \\\n+ tensorflow/lite/micro/benchmarks/keyword_scrambled_model_data.h \\\n+ tensorflow/lite/micro/testing/test_conv_model.h\n+ $(eval $(call microlite_test,memory_arena_threshold_test,\\\n+ $(EXPLICITLY_SPECIFIED_TEST_SRCS),$(EXPLICITLY_SPECIFIED_TEST_HDRS)))\n+endif\n+\n+EXPLICITLY_SPECIFIED_TEST:= tensorflow/lite/micro/micro_allocator_test.cc\n+ifneq ($(findstring $(EXPLICITLY_SPECIFIED_TEST),$(MICROLITE_TEST_SRCS)),)\n+ MICROLITE_TEST_SRCS := $(filter-out $(EXPLICITLY_SPECIFIED_TEST), $(MICROLITE_TEST_SRCS))\n+ EXPLICITLY_SPECIFIED_TEST_SRCS := \\\n+ $(EXPLICITLY_SPECIFIED_TEST) \\\n+ tensorflow/lite/micro/testing/test_conv_model.cc\n+ EXPLICITLY_SPECIFIED_TEST_HDRS := \\\n+ tensorflow/lite/micro/testing/test_conv_model.h\n+ $(eval $(call microlite_test,micro_allocator_test,\\\n+ $(EXPLICITLY_SPECIFIED_TEST_SRCS),$(EXPLICITLY_SPECIFIED_TEST_HDRS)))\n+endif\n+\n+EXPLICITLY_SPECIFIED_TEST:= tensorflow/lite/micro/recording_micro_allocator_test.cc\n+ifneq ($(findstring $(EXPLICITLY_SPECIFIED_TEST),$(MICROLITE_TEST_SRCS)),)\n+ MICROLITE_TEST_SRCS := $(filter-out $(EXPLICITLY_SPECIFIED_TEST), $(MICROLITE_TEST_SRCS))\n+ EXPLICITLY_SPECIFIED_TEST_SRCS := \\\n+ $(EXPLICITLY_SPECIFIED_TEST) \\\n+ tensorflow/lite/micro/testing/test_conv_model.cc\n+ EXPLICITLY_SPECIFIED_TEST_HDRS := \\\n+ tensorflow/lite/micro/testing/test_conv_model.h\n+ $(eval $(call microlite_test,recording_micro_allocator_test,\\\n+ $(EXPLICITLY_SPECIFIED_TEST_SRCS),$(EXPLICITLY_SPECIFIED_TEST_HDRS)))\n+endif\n+\n+# For all the tests that do not have any additional dependencies, we can\n+# add a make target in a common way.\n $(foreach TEST_TARGET,$(filter-out tensorflow/lite/micro/kernels/%,$(MICROLITE_TEST_SRCS)),\\\n $(eval $(call microlite_test,$(notdir $(basename $(TEST_TARGET))),$(TEST_TARGET))))\n+\n $(foreach TEST_TARGET,$(filter tensorflow/lite/micro/kernels/%,$(MICROLITE_TEST_SRCS)),\\\n $(eval $(call microlite_test,kernel_$(notdir $(basename $(TEST_TARGET))),$(TEST_TARGET))))\n ",
"filename": "tensorflow/lite/micro/tools/make/Makefile",
"status": "modified"
}
]
}
|
{
"body": "**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04\r\n- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A\r\n- TensorFlow installed from (source or binary): binary\r\n- TensorFlow version (use command below):2.1.0\r\n- Python version:3.7.6\r\n- Bazel version (if compiling from source):N/A\r\n- GCC/Compiler version (if compiling from source):N/A\r\n- CUDA/cuDNN version:N/A\r\n- GPU model and memory:N/A\r\n\r\n\r\n**Describe the current behavior**\r\n`tf.keras.layers.ELU` and `tf.keras.layers.LeakyReLU` outputs nan if `alpha=None`\r\n\r\n\r\n**Describe the expected behavior**\r\nexpect no nan as output\r\n\r\n\r\n**Standalone code to reproduce the issue**\r\n~~~python\r\nimport tensorflow as tf\r\nimport numpy as np\r\nlayer = tf.keras.layers.ELU(alpha=None)\r\nout=layer(np.array([-2, 6.]))\r\nprint(out)\r\n~~~\r\n\r\nOutput:\r\n~~~\r\ntf.Tensor([nan 6.], shape=(2,), dtype=float32)\r\n~~~\r\n\r\n\r\n~~~python\r\nimport tensorflow as tf\r\nimport numpy as np\r\nlayer = tf.keras.layers.LeakyReLU(alpha=None)\r\nout=layer(np.array([-2, 6]))\r\n~~~\r\nOutput:\r\n~~~\r\n<tf.Tensor: shape=(2,), dtype=float32, numpy=array([nan, 6.], dtype=float32)>\r\n~~~\r\n\r\n\r\nRelated: #13787",
"comments": [
{
"body": "I am able to replicate the issue reported on tf 2.4 and tf-nightly, please find the [gist here](https://colab.research.google.com/gist/Saduf2019/cdb1c001d0fc15d73735806bba9a2cbd/untitled522.ipynb).\r\nThanks!\r\n\r\n",
"created_at": "2021-02-08T05:01:49Z"
},
{
"body": "I think when `alpha=None` is passed a ValueError could be thrown. Added a PR #47017 for the fix.",
"created_at": "2021-02-08T21:18:14Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46993\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46993\">No</a>\n",
"created_at": "2021-02-10T00:18:15Z"
}
],
"number": 46993,
"title": "`tf.keras.layers.ELU` and `tf.keras.layers.LeakyReLU` outputs nan if `alpha=None`"
}
|
{
"body": "This PR tries to address the issue raised in #46993 where\r\nincorrect nan value is returned when alpha=None is passed for\r\ntf.keras.layers.LeakyReLU. The nan could be misleading to users.\r\n\r\nThis PR address the issue and throw out ValueError instead.\r\n\r\nThis PR fixes #46993.\r\n\r\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>\r\n",
"number": 47017,
"review_comments": [],
"title": "Thrown out ValueError when alpha=None is passed for tf.keras.layers.LeakyReLU"
}
|
{
"commits": [
{
"message": "Thrown out ValueError when alpha=None is passed for tf.keras.layers.LeakyReLU\n\nThis PR tries to address the issue raised in 46993 where\nincorrect nan value is returned when alpha=None is passed for\ntf.keras.layers.LeakyReLU. The nan could be misleading to users.\n\nThis PR address the issue and throw out ValueError instead.\n\nThis PR fixes 46993.\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
},
{
"message": "Thrown out ValueError if alpha is None for ELU\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
}
],
"files": [
{
"diff": "@@ -71,6 +71,9 @@ class LeakyReLU(Layer):\n \n def __init__(self, alpha=0.3, **kwargs):\n super(LeakyReLU, self).__init__(**kwargs)\n+ if alpha is None:\n+ raise ValueError('alpha of leaky Relu layer '\n+ 'cannot be None. Required a float')\n self.supports_masking = True\n self.alpha = K.cast_to_floatx(alpha)\n \n@@ -206,6 +209,9 @@ class ELU(Layer):\n \n def __init__(self, alpha=1.0, **kwargs):\n super(ELU, self).__init__(**kwargs)\n+ if alpha is None:\n+ raise ValueError('alpha of ELU layer '\n+ 'cannot be None. Required a float')\n self.supports_masking = True\n self.alpha = K.cast_to_floatx(alpha)\n ",
"filename": "tensorflow/python/keras/layers/advanced_activations.py",
"status": "modified"
},
{
"diff": "@@ -108,6 +108,24 @@ def test_layer_as_activation(self):\n run_eagerly=testing_utils.should_run_eagerly())\n model.fit(np.ones((10, 10)), np.ones((10, 1)), batch_size=2)\n \n+ def test_leaky_relu_with_invalid_alpha(self):\n+ # Test case for GitHub issue 46993.\n+ with self.assertRaisesRegex(\n+ ValueError, 'alpha of leaky Relu layer cannot be None'):\n+ testing_utils.layer_test(keras.layers.LeakyReLU,\n+ kwargs={'alpha': None},\n+ input_shape=(2, 3, 4),\n+ supports_masking=True)\n+\n+ def test_leaky_elu_with_invalid_alpha(self):\n+ # Test case for GitHub issue 46993.\n+ with self.assertRaisesRegex(\n+ ValueError, 'alpha of ELU layer cannot be None'):\n+ testing_utils.layer_test(keras.layers.ELU,\n+ kwargs={'alpha': None},\n+ input_shape=(2, 3, 4),\n+ supports_masking=True)\n+\n \n if __name__ == '__main__':\n test.main()",
"filename": "tensorflow/python/keras/layers/advanced_activations_test.py",
"status": "modified"
}
]
}
|
{
"body": "**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04\r\n- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A\r\n- TensorFlow installed from (source or binary): binary\r\n- TensorFlow version (use command below):2.1.0\r\n- Python version:3.7.6\r\n- Bazel version (if compiling from source):N/A\r\n- GCC/Compiler version (if compiling from source):N/A\r\n- CUDA/cuDNN version:N/A\r\n- GPU model and memory:N/A\r\n\r\n\r\n**Describe the current behavior**\r\n`tf.strings.substr` crashes(aborts) when `len(pos)` > `len(input)`\r\n\r\n**Describe the expected behavior**\r\nexpect an exception message if the input unexpected instead of crash. \r\n\r\n\r\n**Standalone code to reproduce the issue**\r\n~~~python\r\nimport tensorflow as tf\r\ntf.strings.substr(input='abc', len=1, pos=[1,-1])\r\n~~~\r\n\r\n~~~python\r\nimport tensorflow as tf\r\ntf.strings.substr(input='abc', len=1, pos=[1,2])\r\n~~~\r\n\r\n\r\nOutput:\r\n~~~python\r\n2021-02-03 22:46:41.234297: F ./tensorflow/core/framework/tensor.h:806] Check failed: new_num_elements == NumElements() (2 vs. 1)\r\nAborted (core dumped)\r\n~~~",
"comments": [
{
"body": "@rmothukuru \r\nI ran the code shared on tf 2.4 and nightly, colab crashes, please find the [gist here](https://colab.research.google.com/gist/Saduf2019/e258b6a130140d89c1fd3325317982bc/untitled520.ipynb).",
"created_at": "2021-02-04T05:02:42Z"
},
{
"body": "According to documentation https://www.tensorflow.org/api_docs/python/tf/strings/substr an error should be thrown out gracefully (instead of a crash). Added a PR #46974 for the fix.",
"created_at": "2021-02-06T20:28:49Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46900\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46900\">No</a>\n",
"created_at": "2021-02-09T17:43:32Z"
}
],
"number": 46900,
"title": "tf.strings.substr crashes(aborts) "
}
|
{
"body": "This PR tries to address the issue raised in #46900 where\r\ntf.strings.substr will crash when pos and len have different shapes.\r\nAccording to the documentation of tf.strings.substr, ValueError\r\nshould be raised instead when pos and len does not have the same shape.\r\n\r\nThis PR add shape check in kernel to allows grace error throw (instead of crash).\r\n\r\nThis PR fixes #46900.\r\n\r\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>",
"number": 46974,
"review_comments": [],
"title": "Fix crash of tf.strings.substr when pos and len have different shapes"
}
|
{
"commits": [
{
"message": "Fix crash of tf.strings.substr when pos and len have different shapes\n\nThis PR tries to address the issue raised in 46900 where\ntf.strings.substr will crash when pos and len have different shapes.\nAccording to the documentation of tf.strings.substr, ValueError\nshould be raised instead when pos and len does not have the same shape.\n\nThis PR add shape check in kernel to allows grace error throw (instead of crash).\n\nThis PR fixes 46900.\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
}
],
"files": [
{
"diff": "@@ -51,6 +51,12 @@ class SubstrOp : public OpKernel {\n const Tensor& len_tensor = context->input(2);\n const TensorShape& input_shape = input_tensor.shape();\n const TensorShape& pos_shape = pos_tensor.shape();\n+ const TensorShape& len_shape = len_tensor.shape();\n+ OP_REQUIRES(\n+ context, (pos_shape == len_shape),\n+ errors::InvalidArgument(\"pos and len should have the same shape, got: \",\n+ pos_shape.DebugString(), \" vs. \",\n+ len_shape.DebugString()));\n \n bool is_scalar = TensorShapeUtils::IsScalar(pos_shape);\n ",
"filename": "tensorflow/core/kernels/substr_op.cc",
"status": "modified"
},
{
"diff": "@@ -492,6 +492,16 @@ def testInvalidUnit(self):\n with self.assertRaises(ValueError):\n string_ops.substr(b\"test\", 3, 1, unit=\"UTF8\")\n \n+ def testInvalidPos(self):\n+ # Test case for GitHub issue 46900.\n+ with self.assertRaises((ValueError, errors_impl.InvalidArgumentError)):\n+ x = string_ops.substr(b\"abc\", len=1, pos=[1, -1])\n+ self.evaluate(x)\n+\n+ with self.assertRaises((ValueError, errors_impl.InvalidArgumentError)):\n+ x = string_ops.substr(b\"abc\", len=1, pos=[1, 2])\n+ self.evaluate(x)\n+\n \n if __name__ == \"__main__\":\n test.main()",
"filename": "tensorflow/python/kernel_tests/substr_op_test.py",
"status": "modified"
}
]
}
|
{
"body": "**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04\r\n- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A\r\n- TensorFlow installed from (source or binary): binary\r\n- TensorFlow version (use command below):2.1.0\r\n- Python version:3.7.6\r\n- Bazel version (if compiling from source):N/A\r\n- GCC/Compiler version (if compiling from source):N/A\r\n- CUDA/cuDNN version:N/A\r\n- GPU model and memory:N/A\r\n\r\n**Describe the current behavior**\r\n`tf.transpose` crashes(abort) if `a` is complex and `conjugate`=True\r\n\r\n\r\n**Describe the expected behavior**\r\nexpect no crash\r\n\r\n**Standalone code to reproduce the issue**\r\n~~~python\r\nimport tensorflow as tf\r\ntf.transpose(conjugate=True, a=complex(1))\r\n~~~\r\n\r\nOutput:\r\n~~~python\r\n2021-02-03 17:58:05.565680: F ./tensorflow/core/kernels/transpose_functor.h:169] Check failed: in.dims() >= 2 (0 vs. 2)\r\nAborted (core dumped)\r\n~~~",
"comments": [
{
"body": "Was able to reproduce the issue with TF v2.3, TF v2.4 and TF-nightly. Please find the gist of it [here](https://colab.research.google.com/gist/amahendrakar/3beaf907cf0caf938eefcab8d92def67/46891.ipynb). Thanks!",
"created_at": "2021-02-04T10:18:08Z"
},
{
"body": "Added PR #46973 for the fix.",
"created_at": "2021-02-06T19:04:41Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46891\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46891\">No</a>\n",
"created_at": "2021-04-07T16:36:31Z"
},
{
"body": "I also observed the following API aliases can cause the same issue in older versions of tensorflow.\r\nUsers should be cautious when using them on both CPU and GPU up to tensorflow 2.4.0 (v2.4.0-rc4-71-g582c8d236cb).\r\n\r\n- `(tf.transpose)`, `tf.compat.v1.transpose`\r\n\r\n<details>\r\n <summary>Code to reproduce the issue in <code>tf.compat.v1.transpose</code> in older versions</summary>\r\n\r\n```python\r\nimport tensorflow as tf\r\nprint(tf.version.GIT_VERSION, tf.version.VERSION, flush=True)\r\nprint(tf.config.list_physical_devices(), flush=True)\r\n\r\n\r\ntry:\r\n tf.compat.v1.transpose(conjugate=True, a=complex(1))\r\nexcept Exception as e:\r\n print(\"Error:\", str(e), flush=True)\r\nprint(\"Success!\", flush=True)\r\n```\r\n\r\nOn GPU, the Check failed error occurs:\r\n\r\n```text\r\nv2.4.0-rc4-71-g582c8d236cb 2.4.0\r\n[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]\r\n2023-09-08 10:49:02.743006: F ./tensorflow/core/kernels/transpose_functor.h:169] Check failed: in.dims() >= 2 (0 vs. 2)\r\nAborted (core dumped)\r\n```\r\n\r\nThis behavior is also reproducible on my CPU machine:\r\n\r\n```text\r\nv2.4.0-rc4-71-g582c8d236cb 2.4.0\r\n[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')]\r\n2023-09-08 10:48:56.754273: F ./tensorflow/core/kernels/transpose_functor.h:169] Check failed: in.dims() >= 2 (0 vs. 2)\r\nAborted (core dumped)\r\n```\r\n</details>\r\n\r\nIt seems to be fixed in tensorflow 2.4.3 (v2.4.2-142-g72bb4c22adb) and later versions.\r\n",
"created_at": "2023-09-21T10:39:57Z"
}
],
"number": 46891,
"title": "tf.transpose crashes(abort) if `a` is complex"
}
|
{
"body": "This PR tries to address the issue raised in #46891 where\r\ntf.transpose will crash when a is complex and conjugate is True.\r\nThe issue comes from:\r\nhttps://github.com/tensorflow/tensorflow/blob/57bbc5e0d4b93483b8ae853352173516f1c08018/tensorflow/core/kernels/transpose_functor.h#L169\r\n\r\nHowever, as ndims < 2 has already been handled properly:\r\nhttps://github.com/tensorflow/tensorflow/blob/57bbc5e0d4b93483b8ae853352173516f1c08018/tensorflow/core/kernels/transpose_functor_cpu.cc#L103-L105\r\nThe check could be removed.\r\n\r\nThis PR fixes #46891.\r\n\r\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>",
"number": 46973,
"review_comments": [
{
"body": "I hope you don't need this decorator. Can you remove it and see if the tests pass?",
"created_at": "2021-03-27T05:11:02Z"
},
{
"body": "nit: s/Scaler/Scalar",
"created_at": "2021-03-27T05:11:14Z"
},
{
"body": "ditto",
"created_at": "2021-03-27T05:11:20Z"
},
{
"body": "Thanks @rohan100jain, Updated.",
"created_at": "2021-03-28T18:07:58Z"
},
{
"body": "Updated.",
"created_at": "2021-03-28T18:08:03Z"
},
{
"body": "Updated.",
"created_at": "2021-03-28T18:08:09Z"
},
{
"body": "nit: put these above the others so they test in increasing rank order.",
"created_at": "2021-04-06T15:27:16Z"
},
{
"body": "Thanks @cantonios. Done.",
"created_at": "2021-04-06T15:42:01Z"
}
],
"title": "Fix crash with tf.transpose when a is complex and conjugate is True"
}
|
{
"commits": [
{
"message": "Fix crash with tf.transpose when a is complex and conjugate is True\n\nThis PR tries to address the issue raised in 46891 where\ntf.transpose will crash when a is complex and conjugate is True.\nThe issue comes from:\nhttps://github.com/tensorflow/tensorflow/blob/57bbc5e0d4b93483b8ae853352173516f1c08018/tensorflow/core/kernels/transpose_functor.h#L169\n\nHowever, as ndims < 2 has already been handled properly:\nhttps://github.com/tensorflow/tensorflow/blob/57bbc5e0d4b93483b8ae853352173516f1c08018/tensorflow/core/kernels/transpose_functor_cpu.cc#L103-L105\nThe check could be removed.\n\nThis PR fixes 46891.\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
},
{
"message": "Address review comment and merge tests into testComplex64() and testComplex128()\n\nwith additional update to sort test cases in increasing rank order\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
}
],
"files": [
{
"diff": "@@ -166,7 +166,6 @@ template <typename Device>\n Status DoTransposeImpl(const Device& d, const Tensor& in,\n const gtl::ArraySlice<int32> perm, bool conjugate,\n Tensor* out) {\n- CHECK_GE(in.dims(), 2);\n CHECK_EQ(in.dims(), out->dims());\n CHECK_EQ(in.dims(), perm.size());\n CHECK_EQ(in.dtype(), out->dtype());",
"filename": "tensorflow/core/kernels/transpose_functor.h",
"status": "modified"
},
{
"diff": "@@ -379,6 +379,8 @@ def testDouble(self):\n np.arange(0, 16).reshape([1, 2, 1, 2, 1, 2, 1, 2]).astype(np.float64))\n \n def testComplex64(self):\n+ self._testBoth(np.array(np.complex(1, 2)).astype(np.complex64))\n+ self._testBoth(np.complex(1, 2) * np.arange(0, 21).astype(np.complex64))\n self._testBoth(\n np.complex(1, 2) *\n np.arange(0, 21).reshape([3, 7]).astype(np.complex64))\n@@ -390,6 +392,8 @@ def testComplex64(self):\n np.arange(0, 1260).reshape([2, 3, 5, 7, 2, 3]).astype(np.complex64))\n \n def testComplex128(self):\n+ self._testBoth(np.array(np.complex(1, 2)).astype(np.complex128))\n+ self._testBoth(np.complex(1, 2) * np.arange(0, 21).astype(np.complex128))\n self._testBoth(\n np.complex(1, 2) *\n np.arange(0, 21).reshape([3, 7]).astype(np.complex128))",
"filename": "tensorflow/python/kernel_tests/transpose_op_test.py",
"status": "modified"
}
]
}
|
{
"body": "micro/kernels/exp_test.cc checks that two inf values are near eachother:\r\nhttps://github.com/tensorflow/tensorflow/blob/ed22f400428a669c1c6e4553cd7f4900abeaf954/tensorflow/lite/micro/kernels/exp_test.cc#L67-L72\r\n\r\nThis works ok for all the CI targets, but broke the xtensa build:\r\n```\r\nmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=fusion_f1 XTENSA_CORE=F1_190305_swupgrade test_kernel_exp_test -j8\r\n```\r\nfails with:\r\n```\r\nTesting SingleDim\r\nexpected_output_data[i] (Inf) near output_data[i] (Inf) failed at tensorflow/lite/micro/kernels/exp_test.cc:54\r\n0/1 tests passed\r\n~~~SOME TESTS FAILED~~~\r\n```\r\n\r\nThe underlying issue that the EXPECT_NEAR_MACRO is taking a difference of two infinities which at least with the xtensa toolchain can give a `nan`, which in turn results in the check failing, even though inf==inf is true:\r\nhttps://github.com/tensorflow/tensorflow/blob/ed22f400428a669c1c6e4553cd7f4900abeaf954/tensorflow/lite/micro/testing/micro_test.h#L153-L165\r\n\r\n",
"comments": [],
"number": 46960,
"title": "TF_LITE_MICRO_EXPECT_NEAR(inf, inf) gives incorrect result for some platforms."
}
|
{
"body": "Underlying issue is a bug in the EXPECT_NEAR macro, as described in #46960\r\n\r\nAlso,\r\n * added an exp_test rule to the BUILD file.\r\n * changed the golden value computation to make use of std::exp instead of hard-coded values. This is closer to the TfLite test as well.\r\n\r\nManually confirmed that the following command passes:\r\n```\r\nmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=fusion_f1 XTENSA_CORE=F1_190305_swupgrade test -j8\r\n```\r\n\r\nFixes #46960\r\n",
"number": 46962,
"review_comments": [],
"title": "Fix kernel_exp_test with the Xtensa toolchain."
}
|
{
"commits": [
{
"message": "Fix kernel_exp_test with the Xtensa toolchain.\n\nUnderlying issue is a bug in the EXPECT_NEAR macro, as described in\n\nAlso,\n * added an exp_test rule to the BUILD file.\n * changed the golden value computation to make use of std::exp instead\n of hard-coded values. This is closer to the TfLite test as well.\n\nManually confirmed that the following command passes:\n```\nmake -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=fusion_f1 XTENSA_CORE=F1_190305_swupgrade test -j8\n```\n\nFixes #46960"
}
],
"files": [
{
"diff": "@@ -235,6 +235,19 @@ tflite_micro_cc_test(\n ],\n )\n \n+tflite_micro_cc_test(\n+ name = \"exp_test\",\n+ srcs = [\"exp_test.cc\"],\n+ deps = [\n+ \":kernel_runner\",\n+ \"//tensorflow/lite/c:common\",\n+ \"//tensorflow/lite/micro:debug_log\",\n+ \"//tensorflow/lite/micro:op_resolvers\",\n+ \"//tensorflow/lite/micro:test_helpers\",\n+ \"//tensorflow/lite/micro/testing:micro_test\",\n+ ],\n+)\n+\n tflite_micro_cc_test(\n name = \"pooling_test\",\n srcs = [",
"filename": "tensorflow/lite/micro/kernels/BUILD",
"status": "modified"
},
{
"diff": "@@ -54,21 +54,23 @@ void TestExp(const int* input_dims_data, const float* input_data,\n TF_LITE_MICRO_EXPECT_NEAR(expected_output_data[i], output_data[i], 1e-5f);\n }\n }\n-\n } // namespace\n } // namespace testing\n } // namespace tflite\n \n TF_LITE_MICRO_TESTS_BEGIN\n \n TF_LITE_MICRO_TEST(SingleDim) {\n- float output_data[7];\n- const int input_dims[] = {2, 1, 7};\n- const float input_values[] = {0.0f, 1.0f, -1.0f, 100.0f,\n- -100.0f, 0.01f, -0.01f};\n- const float golden[] = {\n- 1.0f, 2.71828f, 0.36788f, std::numeric_limits<float>::infinity(),\n- 1.17549e-38f, 1.01005f, 0.99005f};\n+ constexpr int kInputSize = 7;\n+ float output_data[kInputSize];\n+ const int input_dims[] = {2, 1, kInputSize};\n+ const float input_values[kInputSize] = {0.0f, 1.0f, -1.0f, 100.0f,\n+ -100.0f, 0.01f, -0.01f};\n+ float golden[kInputSize];\n+ for (int i = 0; i < kInputSize; ++i) {\n+ golden[i] = std::exp(input_values[i]);\n+ }\n+\n tflite::testing::TestExp(input_dims, input_values, golden, output_data);\n }\n ",
"filename": "tensorflow/lite/micro/kernels/exp_test.cc",
"status": "modified"
},
{
"diff": "@@ -142,12 +142,14 @@ extern bool did_test_fail;\n } \\\n } while (false)\n \n+// The check vx != vy is needed to properly handle the case where both\n+// x and y evaluate to infinity. See #46960 for more details.\n #define TF_LITE_MICRO_EXPECT_NEAR(x, y, epsilon) \\\n do { \\\n auto vx = (x); \\\n auto vy = (y); \\\n auto delta = ((vx) > (vy)) ? ((vx) - (vy)) : ((vy) - (vx)); \\\n- if (delta > epsilon) { \\\n+ if (vx != vy && delta > epsilon) { \\\n MicroPrintf(#x \" (%f) near \" #y \" (%f) failed at %s:%d\", \\\n static_cast<double>(vx), static_cast<double>(vy), __FILE__, \\\n __LINE__); \\",
"filename": "tensorflow/lite/micro/testing/micro_test.h",
"status": "modified"
}
]
}
|
{
"body": "\r\n@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator FLOOR_DIV from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/floor_div.cc into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro without making any changes or including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/45657\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/45657\">No</a>\n",
"created_at": "2021-04-12T10:38:08Z"
}
],
"number": 45657,
"title": "micro: port op FLOOR_DIV from lite"
}
|
{
"body": "Complete implementation of TFLM operator FLOOR_DIV and associated TFLM test code.\r\n\r\nThis represents PR step 5 of the work to port operator FLOOR_DIV as tracked in Issue #45657",
"number": 46880,
"review_comments": [],
"title": "micro: port operator FLOOR_DIV kernel from lite with test"
}
|
{
"commits": [
{
"message": "micro: port operator FLOOR_DIV kernel from lite with test\n\nComplete implementation of TFLM operator FLOOR_DIV and associated TFLM test code.\n\nThis represents PR step 5 of the work to port operator FLOOR_DIV as tracked in Issue #45657"
}
],
"files": [
{
"diff": "@@ -40,6 +40,7 @@ AllOpsResolver::AllOpsResolver() {\n AddEqual();\n AddEthosU();\n AddFloor();\n+ AddFloorDiv();\n AddFullyConnected();\n AddGreater();\n AddGreaterEqual();",
"filename": "tensorflow/lite/micro/all_ops_resolver.cc",
"status": "modified"
},
{
"diff": "@@ -275,6 +275,7 @@ cc_library(\n \"expand_dims.cc\",\n \"fill.cc\",\n \"floor.cc\",\n+ \"floor_div.cc\",\n \"l2norm.cc\",\n \"l2_pool_2d.cc\",\n \"leaky_relu.cc\",\n@@ -662,6 +663,19 @@ cc_test(\n ],\n )\n \n+cc_test(\n+ name = \"floor_div_test\",\n+ srcs = [\"floor_div_test.cc\"],\n+ deps = [\n+ \":kernel_runner\",\n+ \"//tensorflow/lite/c:common\",\n+ \"//tensorflow/lite/micro:debug_log\",\n+ \"//tensorflow/lite/micro:op_resolvers\",\n+ \"//tensorflow/lite/micro:test_helpers\",\n+ \"//tensorflow/lite/micro/testing:micro_test\",\n+ ],\n+)\n+\n cc_test(\n name = \"floor_test\",\n srcs = [",
"filename": "tensorflow/lite/micro/kernels/BUILD",
"status": "modified"
},
{
"diff": "@@ -13,30 +13,24 @@ See the License for the specific language governing permissions and\n limitations under the License.\n ==============================================================================*/\n \n+#include \"tensorflow/lite/kernels/internal/reference/floor_div.h\"\n+\n #include \"tensorflow/lite/c/common.h\"\n-#include \"tensorflow/lite/kernels/internal/quantization_util.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/div.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/process_broadcast_shapes.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/binary_function.h\"\n #include \"tensorflow/lite/kernels/internal/types.h\"\n #include \"tensorflow/lite/kernels/kernel_util.h\"\n #include \"tensorflow/lite/micro/kernels/kernel_util.h\"\n+#include \"tensorflow/lite/micro/micro_utils.h\"\n \n namespace tflite {\n-namespace ops {\n-namespace micro {\n-namespace floor_div {\n namespace {\n \n // Input/output tensor index.\n constexpr int kInputTensor1 = 0;\n constexpr int kInputTensor2 = 1;\n constexpr int kOutputTensor = 0;\n \n-void* Init(TfLiteContext* context, const char* buffer, size_t length) {\n- return nullptr;\n-}\n-\n-TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n+TfLiteStatus CalculateOpData(TfLiteContext* context, TfLiteNode* node) {\n TF_LITE_ENSURE_EQ(context, NumInputs(node), 2);\n TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);\n \n@@ -51,86 +45,86 @@ TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n GetOutputSafe(context, node, kOutputTensor, &output));\n \n TF_LITE_ENSURE_TYPES_EQ(context, input1->type, input2->type);\n+ TF_LITE_ENSURE_TYPES_EQ(context, input1->type, output->type);\n \n- const TfLiteType type = input1->type;\n- switch (type) {\n- case kTfLiteFloat32:\n- case kTfLiteInt32:\n- break;\n- default:\n- TF_LITE_KERNEL_LOG(context, \"Type '%s' is not supported by floor_div.\",\n- TfLiteTypeGetName(type));\n- return kTfLiteError;\n- }\n- output->type = type;\n+ return kTfLiteOk;\n+}\n \n- return kTfLiteError;\n+void* Init(TfLiteContext* context, const char* buffer, size_t length) {\n+ return nullptr;\n+}\n+\n+TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n+ return CalculateOpData(context, node);\n }\n \n template <typename T>\n-TfLiteStatus EvalImpl(TfLiteContext* context, bool requires_broadcast,\n- const TfLiteTensor* input1, const TfLiteTensor* input2,\n- TfLiteTensor* output) {\n- const T* denominator_data = GetTensorData<T>(input2);\n+TfLiteStatus EvalFloorDiv(TfLiteContext* context,\n+ const TfLiteEvalTensor* input1,\n+ const TfLiteEvalTensor* input2,\n+ TfLiteEvalTensor* output) {\n+ const T* denominator_data = tflite::micro::GetTensorData<T>(input2);\n \n // Validate the denominator.\n- for (int i = 0; i < NumElements(input2); ++i) {\n+ for (int i = 0; i < tflite::ElementCount(*input2->dims); ++i) {\n if (std::equal_to<T>()(denominator_data[i], 0)) {\n TF_LITE_KERNEL_LOG(context, \"Division by 0\");\n return kTfLiteError;\n }\n }\n+\n+ bool requires_broadcast = !tflite::micro::HaveSameShapes(input1, input2);\n+\n if (requires_broadcast) {\n reference_ops::BroadcastBinaryFunction4DSlow<T, T, T>(\n- GetTensorShape(input1), GetTensorData<T>(input1),\n- GetTensorShape(input2), denominator_data, GetTensorShape(output),\n- GetTensorData<T>(output), reference_ops::FloorDiv<T>);\n+ tflite::micro::GetTensorShape(input1),\n+ tflite::micro::GetTensorData<T>(input1),\n+ tflite::micro::GetTensorShape(input2), denominator_data,\n+ tflite::micro::GetTensorShape(output),\n+ 
tflite::micro::GetTensorData<T>(output), reference_ops::FloorDiv<T>);\n } else {\n reference_ops::BinaryFunction<T, T, T>(\n- GetTensorShape(input1), GetTensorData<T>(input1),\n- GetTensorShape(input2), GetTensorData<T>(input2),\n- GetTensorShape(output), GetTensorData<T>(output),\n- reference_ops::FloorDiv<T>);\n+ tflite::micro::GetTensorShape(input1),\n+ tflite::micro::GetTensorData<T>(input1),\n+ tflite::micro::GetTensorShape(input2), denominator_data,\n+ tflite::micro::GetTensorShape(output),\n+ tflite::micro::GetTensorData<T>(output), reference_ops::FloorDiv<T>);\n }\n \n return kTfLiteOk;\n }\n \n TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n- const TfLiteTensor* input1;\n- TF_LITE_ENSURE_OK(context,\n- GetInputSafe(context, node, kInputTensor1, &input1));\n- const TfLiteTensor* input2;\n- TF_LITE_ENSURE_OK(context,\n- GetInputSafe(context, node, kInputTensor2, &input2));\n- TfLiteTensor* output;\n- TF_LITE_ENSURE_OK(context,\n- GetOutputSafe(context, node, kOutputTensor, &output));\n-\n- bool requires_broadcast = false;\n+ const TfLiteEvalTensor* input1 =\n+ tflite::micro::GetEvalInput(context, node, kInputTensor1);\n+ const TfLiteEvalTensor* input2 =\n+ tflite::micro::GetEvalInput(context, node, kInputTensor2);\n+ TfLiteEvalTensor* output =\n+ tflite::micro::GetEvalOutput(context, node, kOutputTensor);\n \n switch (input1->type) {\n- case kTfLiteInt32: {\n- return EvalImpl<int32_t>(context, requires_broadcast, input1, input2,\n- output);\n- }\n case kTfLiteFloat32: {\n- return EvalImpl<float>(context, requires_broadcast, input1, input2,\n- output);\n+ return EvalFloorDiv<float>(context, input1, input2, output);\n }\n default: {\n- TF_LITE_KERNEL_LOG(context, \"Type '%s' is not supported by floor_div.\",\n+ TF_LITE_KERNEL_LOG(context, \"Type '%s' is not supported by FLOOR_DIV.\",\n TfLiteTypeGetName(input1->type));\n return kTfLiteError;\n }\n }\n }\n \n } // namespace\n-} // namespace floor_div\n \n-TfLiteRegistration* Register_FLOOR_DIV() { return nullptr; }\n+TfLiteRegistration Register_FLOOR_DIV() {\n+ return {/*init=*/Init,\n+ /*free=*/nullptr,\n+ /*prepare=*/Prepare,\n+ /*invoke=*/Eval,\n+ /*profiling_string=*/nullptr,\n+ /*builtin_code=*/0,\n+ /*custom_name=*/nullptr,\n+ /*version=*/0};\n+}\n \n-} // namespace micro\n-} // namespace ops\n } // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/floor_div.cc",
"status": "modified"
},
{
"diff": "@@ -25,75 +25,85 @@ namespace tflite {\n namespace testing {\n namespace {\n \n-TF_LITE_MICRO_TESTS_BEGIN\n+void ExecuteFloorDivTest(TfLiteTensor* tensors, int tensors_count) {\n+ constexpr int kInputArrayData[] = {2, 0, 1};\n+ TfLiteIntArray* inputs_array = IntArrayFromInts(kInputArrayData);\n+ constexpr int kOutputArrayData[] = {1, 2};\n+ TfLiteIntArray* outputs_array = IntArrayFromInts(kOutputArrayData);\n \n-TF_LITE_MICRO_TEST(FloorDivModelSimple) {\n-#ifdef notdef\n- FloorDivModel<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n- {TensorType_INT32, {1, 2, 2, 1}},\n- {TensorType_INT32, {}});\n- model.PopulateTensor<int32_t>(model.input1(), {10, 9, 11, 3});\n- model.PopulateTensor<int32_t>(model.input2(), {2, 2, 3, 4});\n- EXPECT_THAT(model.GetOutput(), ElementsAre(5, 4, 3, 0));\n-#endif\n-}\n+ const TfLiteRegistration registration = tflite::Register_FLOOR_DIV();\n+ micro::KernelRunner runner(registration, tensors, tensors_count, inputs_array,\n+ outputs_array, nullptr);\n \n-TF_LITE_MICRO_TEST(FloorDivModelNegativeValue) {\n-#ifdef notdef\n- FloorDivModel<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n- {TensorType_INT32, {1, 2, 2, 1}},\n- {TensorType_INT32, {}});\n- model.PopulateTensor<int32_t>(model.input1(), {10, -9, -11, 7});\n- model.PopulateTensor<int32_t>(model.input2(), {2, 2, -3, -4});\n- EXPECT_THAT(model.GetOutput(), ElementsAre(5, -5, 3, -2));\n-#endif\n+ TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.InitAndPrepare());\n+ TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.Invoke());\n }\n \n-TF_LITE_MICRO_TEST(FloorDivModelBroadcastFloorDiv) {\n-#ifdef notdef\n- FloorDivModel<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n- {TensorType_INT32, {1}}, {TensorType_INT32, {}});\n- model.PopulateTensor<int32_t>(model.input1(), {10, -9, -11, 7});\n- model.PopulateTensor<int32_t>(model.input2(), {-3});\n- EXPECT_THAT(model.GetOutput(), ElementsAre(-4, 3, 3, -3));\n-#endif\n+template <typename T>\n+void TestFloorDiv(const int* input1_dims_data, const T* input1_data,\n+ const int* input2_dims_data, const T* input2_data,\n+ const int* expected_dims, const T* expected_data,\n+ T* output_data) {\n+ TfLiteIntArray* input1_dims = IntArrayFromInts(input1_dims_data);\n+ TfLiteIntArray* input2_dims = IntArrayFromInts(input2_dims_data);\n+ TfLiteIntArray* output_dims = IntArrayFromInts(expected_dims);\n+ const int output_count = ElementCount(*output_dims);\n+\n+ TfLiteTensor tensors[] = {\n+ CreateTensor(input1_data, input1_dims),\n+ CreateTensor(input2_data, input2_dims),\n+ CreateTensor(output_data, output_dims),\n+ };\n+ constexpr int tensors_count = std::extent<decltype(tensors)>::value;\n+\n+ ExecuteFloorDivTest(tensors, tensors_count);\n+\n+ for (int i = 0; i < output_count; i++) {\n+ TF_LITE_MICRO_EXPECT_EQ(expected_data[i], output_data[i]);\n+ }\n }\n \n-TF_LITE_MICRO_TEST(FloorDivModelSimpleFloat) {\n-#ifdef notdef\n- FloorDivModel<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n- {TensorType_FLOAT32, {1, 2, 2, 1}},\n- {TensorType_FLOAT32, {}});\n- model.PopulateTensor<float>(model.input1(), {10.05, 9.09, 11.9, 3.01});\n- model.PopulateTensor<float>(model.input2(), {2.05, 2.03, 3.03, 4.03});\n- EXPECT_THAT(model.GetOutput(), ElementsAre(4.0, 4.0, 3.0, 0.0));\n-#endif\n+} // namespace\n+} // namespace testing\n+} // namespace tflite\n+\n+TF_LITE_MICRO_TESTS_BEGIN\n+\n+TF_LITE_MICRO_TEST(FloorDivTestSimpleFloat) {\n+ constexpr int kDims[] = {4, 1, 2, 2, 1};\n+ constexpr float kInput1[] = {10.05, 9.09, 11.9, 3.01};\n+ constexpr float kInput2[] = {2.05, 2.03, 3.03, 4.03};\n+ 
constexpr float kExpect[] = {4.0, 4.0, 3.0, 0.0};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ tflite::testing::TestFloorDiv(kDims, kInput1, kDims, kInput2, kDims, kExpect,\n+ output_data);\n }\n \n-TF_LITE_MICRO_TEST(FloorDivModelNegativeValueFloat) {\n-#ifdef notdef\n- FloorDivModel<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n- {TensorType_FLOAT32, {1, 2, 2, 1}},\n- {TensorType_FLOAT32, {}});\n- model.PopulateTensor<float>(model.input1(), {10.03, -9.9, -11.0, 7.0});\n- model.PopulateTensor<float>(model.input2(), {2.0, 2.3, -3.0, -4.1});\n- EXPECT_THAT(model.GetOutput(), ElementsAre(5.0, -5.0, 3.0, -2.0));\n-#endif\n+TF_LITE_MICRO_TEST(FloorDivTestNegativeValueFloat) {\n+ constexpr int kDims[] = {4, 1, 2, 2, 1};\n+ constexpr float kInput1[] = {10.03, -9.9, -11.0, 7.0};\n+ constexpr float kInput2[] = {2.0, 2.3, -3.0, -4.1};\n+ constexpr float kExpect[] = {5.0, -5.0, 3.0, -2.0};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ tflite::testing::TestFloorDiv(kDims, kInput1, kDims, kInput2, kDims, kExpect,\n+ output_data);\n }\n \n-TF_LITE_MICRO_TEST(FloorDivModelBroadcastFloorDivFloat) {\n-#ifdef notdef\n- FloorDivModel<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n- {TensorType_FLOAT32, {1}},\n- {TensorType_FLOAT32, {}});\n- model.PopulateTensor<float>(model.input1(), {10.03, -9.9, -11.0, 7.0});\n- model.PopulateTensor<float>(model.input2(), {-3.3});\n- EXPECT_THAT(model.GetOutput(), ElementsAre(-4.0, 2.0, 3.0, -3.0));\n-#endif\n+TF_LITE_MICRO_TEST(FloorDivTestBroadcastFloat) {\n+ constexpr int kDims1[] = {4, 1, 2, 2, 1};\n+ constexpr int kDims2[] = {1, 1};\n+ constexpr float kInput1[] = {10.03, -9.9, -11.0, 7.0};\n+ constexpr float kInput2[] = {-3.3};\n+ constexpr float kExpect[] = {-4.0, 2.0, 3.0, -3.0};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ tflite::testing::TestFloorDiv(kDims1, kInput1, kDims2, kInput2, kDims1,\n+ kExpect, output_data);\n }\n \n TF_LITE_MICRO_TESTS_END\n-\n-} // namespace\n-} // namespace testing\n-} // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/floor_div_test.cc",
"status": "modified"
},
{
"diff": "@@ -41,6 +41,7 @@ TfLiteRegistration Register_ELU();\n TfLiteRegistration Register_EXP();\n TfLiteRegistration Register_EXPAND_DIMS();\n TfLiteRegistration Register_FILL();\n+TfLiteRegistration Register_FLOOR_DIV();\n TfLiteRegistration Register_L2_POOL_2D();\n TfLiteRegistration Register_LEAKY_RELU();\n TfLiteRegistration Register_QUANTIZE();",
"filename": "tensorflow/lite/micro/kernels/micro_ops.h",
"status": "modified"
},
{
"diff": "@@ -228,6 +228,11 @@ class MicroMutableOpResolver : public MicroOpResolver {\n tflite::ops::micro::Register_FLOOR(), ParseFloor);\n }\n \n+ TfLiteStatus AddFloorDiv() {\n+ return AddBuiltin(BuiltinOperator_FLOOR_DIV, tflite::Register_FLOOR_DIV(),\n+ ParseFloorDiv);\n+ }\n+\n TfLiteStatus AddFullyConnected(\n const TfLiteRegistration& registration = Register_FULLY_CONNECTED()) {\n return AddBuiltin(BuiltinOperator_FULLY_CONNECTED, registration,",
"filename": "tensorflow/lite/micro/micro_mutable_op_resolver.h",
"status": "modified"
},
{
"diff": "@@ -285,6 +285,7 @@ tensorflow/lite/micro/kernels/exp_test.cc \\\n tensorflow/lite/micro/kernels/expand_dims_test.cc \\\n tensorflow/lite/micro/kernels/fill_test.cc \\\n tensorflow/lite/micro/kernels/floor_test.cc \\\n+tensorflow/lite/micro/kernels/floor_div_test.cc \\\n tensorflow/lite/micro/kernels/fully_connected_test.cc \\\n tensorflow/lite/micro/kernels/hard_swish_test.cc \\\n tensorflow/lite/micro/kernels/l2norm_test.cc \\\n@@ -346,6 +347,7 @@ tensorflow/lite/micro/kernels/exp.cc \\\n tensorflow/lite/micro/kernels/expand_dims.cc \\\n tensorflow/lite/micro/kernels/fill.cc \\\n tensorflow/lite/micro/kernels/floor.cc \\\n+tensorflow/lite/micro/kernels/floor_div.cc \\\n tensorflow/lite/micro/kernels/fully_connected.cc \\\n tensorflow/lite/micro/kernels/fully_connected_common.cc \\\n tensorflow/lite/micro/kernels/hard_swish.cc \\\n@@ -437,6 +439,7 @@ tensorflow/lite/kernels/internal/reference/elu.h \\\n tensorflow/lite/kernels/internal/reference/exp.h \\\n tensorflow/lite/kernels/internal/reference/fill.h \\\n tensorflow/lite/kernels/internal/reference/floor.h \\\n+tensorflow/lite/kernels/internal/reference/floor_div.h \\\n tensorflow/lite/kernels/internal/reference/fully_connected.h \\\n tensorflow/lite/kernels/internal/reference/hard_swish.h \\\n tensorflow/lite/kernels/internal/reference/integer_ops/add.h \\",
"filename": "tensorflow/lite/micro/tools/make/Makefile",
"status": "modified"
}
]
}
|
{
"body": "<em>Please make sure that this is a bug. As per our\r\n[GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md),\r\nwe only address code/doc bugs, performance issues, feature requests and\r\nbuild/installation issues on GitHub. tag:bug_template</em>\r\n\r\n**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Debian testing\r\n- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:\r\n- TensorFlow installed from (source or binary): binary\r\n- TensorFlow version (use command below): 2.4.0\r\n- Python version: 3.8\r\n- Bazel version (if compiling from source):\r\n- GCC/Compiler version (if compiling from source):\r\n- CUDA/cuDNN version: NA\r\n- GPU model and memory: NA\r\n\r\n\r\n\r\n**Describe the current behavior**\r\n`tf.raw_ops.ImageProjectiveTransformV3` interpolates pixels that are outside image boundary (instead of using fill_value)\r\nIn the example below, the corner pixels are mapped from coordinates that lie outside the image. Hence, they must be set to fill_value (like, for example, scipy.ndimage.affine_transform and skimage.transform.AffineTransform). However, they are interpolated instead\r\n\r\n**Describe the expected behavior**\r\nPixels outside image boundaries should be set to `fill_value`. Namely, the corner pixels, in the example below must be zeros.\r\n\r\n**Standalone code to reproduce the issue**\r\n```python\r\nimport numpy as np\r\nimport tensorflow as tf\r\nimport tensorflow_addons as tfa\r\n\r\n# Rotate 45 degrees\r\ntransform = tfa.image.angles_to_projective_transforms(np.pi/4, 3, 3)\r\n\r\nx = tf.ones((1, 3, 3, 1))\r\nres = tf.raw_ops.ImageProjectiveTransformV3(images=x,\r\n transforms=transform,\r\n output_shape=(3,3),\r\n interpolation=\"BILINEAR\",\r\n fill_value=0)\r\nnp.squeeze(res)\r\n\r\narray([[0.58578646, 1. , 0.58578634],\r\n [1. , 1. , 1. ],\r\n [0.58578646, 1. , 0.58578634]], dtype=float32)\r\n```\r\n",
"comments": [
{
"body": "Running the code with TF v2.3 throws an error stating `AttributeError: module 'tensorflow._api.v2.raw_ops' has no attribute 'ImageProjectiveTransformV3'`\r\n\r\nHowever, I was able to reproduce the issue with TF v2.4 and TF-nightly. Please find the gist of it [here](https://colab.research.google.com/gist/amahendrakar/4be3af55b34eeecf51bfd8f426d4cfcb/46637.ipynb). Thanks!",
"created_at": "2021-01-25T15:11:40Z"
},
{
"body": "Tensorflow 2.3 and 2.4 have `ImageProjectiveTransformV2` which also must set pixels outside the image to zeros. The bug is present there as well",
"created_at": "2021-01-27T09:24:23Z"
},
{
"body": "Submitted a PR that fixes it: #46752 ",
"created_at": "2021-01-28T07:40:58Z"
},
{
"body": "@eli-osherovich The PR you have submitted was closed already. Please check PR for detailed information and seems the changes cannot be implemented due to several reasons as mentioned in PR. \r\n\r\nPlease go ahead and close the issue if you don't have any concern.Thanks!",
"created_at": "2021-05-29T03:46:22Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46637\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46637\">No</a>\n",
"created_at": "2021-05-30T15:41:23Z"
},
{
"body": "Warning says it all.\r\n```\r\n>>> import tensorflow_addons as tfa\r\nThe versions of TensorFlow you are currently using is 2.3.1 and is not supported. \r\nSome things might work, some things might not.\r\nIf you were to encounter a bug, do not file an issue.\r\nIf you want to make sure you're using a tested and supported configuration, either change the TensorFlow version or the TensorFlow Addons's version. \r\n\r\n```",
"created_at": "2021-10-05T08:39:29Z"
}
],
"number": 46637,
"title": "tf.raw_ops.ImageProjectiveTransformV3 interpolates pixels that are outside image boundary (instead of using `fill_value`)"
}
|
{
"body": "This is a fix for incorrectly set out-of-boundary pixels: #46637 ",
"number": 46752,
"review_comments": [],
"title": "Correctly set out-of-boundary values."
}
|
{
"commits": [
{
"message": "Correctly set out-of-boundary values."
},
{
"message": "Added tests."
},
{
"message": "Minor cleanup."
}
],
"files": [
{
"diff": "@@ -323,6 +323,7 @@ tf_cc_tests(\n \"adjust_contrast_op_test.cc\",\n \"colorspace_op_test.cc\",\n \"crop_and_resize_op_test.cc\",\n+ \"image_projective_transform_test.cc\",\n \"mirror_pad_op_test.cc\",\n \"non_max_suppression_op_test.cc\",\n \"resize_area_op_test.cc\",",
"filename": "tensorflow/core/kernels/image/BUILD",
"status": "modified"
},
{
"diff": "@@ -162,6 +162,12 @@ class ProjectiveGenerator {\n const float x = map_functor(input_x, input_.dimension(2));\n const float y = map_functor(input_y, input_.dimension(1));\n \n+ // Only MODE::FILL_CONSTANT keeps cooridnates out-of-boundary\n+ if (x < 0 || x > input_.dimension(2) - 1 || y < 0 ||\n+ y > input_.dimension(1) - 1) {\n+ return fill_value_;\n+ }\n+\n const DenseIndex batch = coords[0];\n const DenseIndex channels = coords[3];\n switch (interpolation_) {",
"filename": "tensorflow/core/kernels/image/image_ops.h",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,113 @@\n+/* Copyright 2015 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+\n+#include \"tensorflow/core/framework/allocator.h\"\n+#include \"tensorflow/core/framework/fake_input.h\"\n+#include \"tensorflow/core/framework/node_def_builder.h\"\n+#include \"tensorflow/core/framework/op_kernel.h\"\n+#include \"tensorflow/core/framework/tensor.h\"\n+#include \"tensorflow/core/framework/tensor_testutil.h\"\n+#include \"tensorflow/core/framework/types.h\"\n+#include \"tensorflow/core/framework/types.pb.h\"\n+#include \"tensorflow/core/kernels/ops_testutil.h\"\n+#include \"tensorflow/core/kernels/ops_util.h\"\n+#include \"tensorflow/core/lib/core/status_test_util.h\"\n+#include \"tensorflow/core/platform/test.h\"\n+\n+namespace tensorflow {\n+class ImageProjectiveTransformV3OpTest : public OpsTestBase {\n+ protected:\n+ template <typename T>\n+ void MakeOp(const string& interpolation, const string& fill_mode) {\n+ TF_EXPECT_OK(NodeDefBuilder(\"image_projective_transform_v3_op\",\n+ \"ImageProjectiveTransformV3\")\n+ .Input(FakeInput(DataTypeToEnum<T>::value))\n+ .Input(FakeInput(DT_FLOAT)) // transform\n+ .Input(FakeInput(DT_INT32)) // output shape\n+ .Input(FakeInput(DT_FLOAT)) // fill_value\n+ .Attr(\"interpolation\", interpolation)\n+ .Attr(\"fill_mode\", fill_mode)\n+ .Finalize(node_def()));\n+ TF_EXPECT_OK(InitOp());\n+ }\n+};\n+\n+#define REGISTER_TEST(T) \\\n+ TEST_F(ImageProjectiveTransformV3OpTest, TestConstantFill##T##nearest) { \\\n+ constexpr uint8 FILL_VALUE = 42; \\\n+ MakeOp<T>(\"NEAREST\", \"CONSTANT\"); \\\n+ /* Input: */ \\\n+ /* [[1, 1, 1] */ \\\n+ /* [1, 1, 1] */ \\\n+ /* [1, 1, 1]] */ \\\n+ AddInputFromArray<T>(TensorShape({1, 3, 3, 1}), \\\n+ {1, 1, 1, 1, 1, 1, 1, 1, 1}); \\\n+ \\\n+ /* Rotation 45 degrees */ \\\n+ AddInputFromArray<float>(TensorShape({1, 8}), \\\n+ {0.70710677, -0.70710677, 1., 0.70710677, \\\n+ 0.70710677, -0.41421354, 0., 0.}); \\\n+ AddInputFromArray<int32>(TensorShape({2}), {3, 3}); \\\n+ AddInputFromArray<float>(TensorShape({}), {FILL_VALUE}); \\\n+ TF_ASSERT_OK(RunOpKernel()); \\\n+ \\\n+ Tensor expected(allocator(), DataTypeToEnum<T>::value, \\\n+ TensorShape({1, 3, 3, 1})); \\\n+ /* Output (C = fill_value): */ \\\n+ /* [[C, 1, C] */ \\\n+ /* [1, 1, 1] */ \\\n+ /* [C, 1, C]] */ \\\n+ test::FillValues<T>(&expected, {FILL_VALUE, 1, FILL_VALUE, 1, 1, 1, \\\n+ FILL_VALUE, 1, FILL_VALUE}); \\\n+ test::ExpectTensorEqual<T>(expected, *GetOutput(0)); \\\n+ } \\\n+ \\\n+ TEST_F(ImageProjectiveTransformV3OpTest, TestConstantFill##T##bilinear) { \\\n+ constexpr uint8 FILL_VALUE = 42; \\\n+ MakeOp<T>(\"BILINEAR\", \"CONSTANT\"); \\\n+ /* Input: */ \\\n+ /* [[1, 1, 1] */ \\\n+ /* [1, 1, 1] */ \\\n+ /* [1, 1, 1]] */ \\\n+ AddInputFromArray<T>(TensorShape({1, 3, 3, 1}), \\\n+ {1, 1, 1, 1, 1, 1, 1, 1, 1}); \\\n+ \\\n+ /* Rotation 45 degrees */ \\\n+ AddInputFromArray<float>(TensorShape({1, 8}), \\\n+ {0.70710677, -0.70710677, 
1., 0.70710677, \\\n+ 0.70710677, -0.41421354, 0., 0.}); \\\n+ AddInputFromArray<int32>(TensorShape({2}), {3, 3}); \\\n+ AddInputFromArray<float>(TensorShape({}), {FILL_VALUE}); \\\n+ TF_ASSERT_OK(RunOpKernel()); \\\n+ \\\n+ Tensor expected(allocator(), DataTypeToEnum<T>::value, \\\n+ TensorShape({1, 3, 3, 1})); \\\n+ /* Output (C = fill_value): */ \\\n+ /* [[C, 1, C] */ \\\n+ /* [1, 1, 1] */ \\\n+ /* [C, 1, C]] */ \\\n+ test::FillValues<T>(&expected, {FILL_VALUE, 1, FILL_VALUE, 1, 1, 1, \\\n+ FILL_VALUE, 1, FILL_VALUE}); \\\n+ test::ExpectTensorEqual<T>(expected, *GetOutput(0)); \\\n+ }\n+\n+REGISTER_TEST(float)\n+REGISTER_TEST(double)\n+REGISTER_TEST(uint8)\n+REGISTER_TEST(int32)\n+REGISTER_TEST(int64)\n+\n+#undef REGISTER_TEST\n+} // namespace tensorflow",
"filename": "tensorflow/core/kernels/image/image_projective_transform_test.cc",
"status": "added"
}
]
}
|
{
"body": "**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04\r\n- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A\r\n- TensorFlow installed from (source or binary): binary\r\n- TensorFlow version (use command below):2.1.0\r\n- Python version:3.7.6\r\n- Bazel version (if compiling from source):N/A\r\n- GCC/Compiler version (if compiling from source):N/A\r\n- CUDA/cuDNN version:N/A\r\n- GPU model and memory:N/A\r\n\r\n\r\n\r\n**Describe the current behavior**\r\n`tf.sequence_mask` abortion when lengths contains large value\r\n\r\n**Describe the expected behavior**\r\nexpect an exception message if the input is not expected, instead of crash.\r\n**Standalone code to reproduce the issue**\r\n~~~python\r\nimport tensorflow as tf\r\nimport numpy as np\r\ntf.sequence_mask(lengths=np.array([3.05524638e+307], dtype=np.float64))\r\n~~~\r\n\r\nOutput:\r\n~~~python\r\n2021-01-26 16:02:42.088240: F tensorflow/core/framework/tensor_shape.cc:187] Non-OK-status: InitDims(dim_sizes) status: Internal: Expected shape dimensions to be non-negative, got -9223372036854775808\r\nAborted (core dumped)\r\n~~~\r\n\r\n",
"comments": [
{
"body": "I ran the code on tf 2.4 and tf-nightly colab crashes, please find the [gist here](https://colab.research.google.com/gist/Saduf2019/37011b0996cc4cebc25af92ee4fa1839/untitled509.ipynb)",
"created_at": "2021-01-27T04:56:49Z"
},
{
"body": "Added a PR #46742 for the fix.",
"created_at": "2021-01-27T21:22:43Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46698\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46698\">No</a>\n",
"created_at": "2021-02-01T20:13:51Z"
},
{
"body": "Reopening as #46742 was rolled back",
"created_at": "2021-02-15T17:45:34Z"
},
{
"body": "Was able to reproduce in Nightly version TF 2.6 and the colab crashes. Pease find the gist [here](https://colab.research.google.com/gist/saikumarchalla/5df7d4f625bc4be94b76698dd78a1517/untitled92.ipynb#scrollTo=UnuuCWdFJcbp). Thanks1",
"created_at": "2021-05-29T04:07:40Z"
},
{
"body": "Hi @DNXie ! I think this bug has been addressed now. I getting value error instead of [Colab ](https://colab.sandbox.google.com/gist/mohantym/69e5e1575579a85cc94199ca9f03da35/github_46698.ipynb)getting crashed. Thanks!",
"created_at": "2022-02-15T12:00:51Z"
},
{
"body": "@mohantym It seems to be fixed in the nightly version also. Thanks!\r\n\r\n",
"created_at": "2022-02-16T17:47:07Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46698\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46698\">No</a>\n",
"created_at": "2022-02-16T17:47:13Z"
}
],
"number": 46698,
"title": "tf.sequence_mask abortion when lengths contains large value"
}
|
{
"body": "This PR tries to address the issue raised in #46698 where\r\ntf.sequence_mask will crash abruptly if lengths is not passed\r\nwith an integer tensor.\r\n\r\nThis PR applies a dtype check and throw out ValueError to avoid\r\nprogram crash.\r\n\r\nThis PR fixes #46698.\r\n\r\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>",
"number": 46742,
"review_comments": [],
"title": "Fix crash when tf.sequence_mask takes a non-integer lengths"
}
|
{
"commits": [
{
"message": "Fix crash when tf.sequence_mask takes a non-integer lengths\n\nThis PR tries to address the issue raised in 46698 where\ntf.sequence_mask will crash abruptly if lengths is not passed\nwith an integer tensor.\n\nThis PR applies a dtype check and throw out ValueError to avoid\nprogram crash.\n\nThis PR fixes 46698.\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
}
],
"files": [
{
"diff": "@@ -1427,6 +1427,13 @@ def check_output_dtype(output_dtype):\n check_output_dtype(\"float64\")\n check_output_dtype(np.float64)\n \n+ def testInvalidLengthsDTypeD(self):\n+ with self.cached_session():\n+ with self.assertRaisesRegex(\n+ ValueError, \"lengths must be integer for sequence_mask\"):\n+ array_ops.sequence_mask(\n+ lengths=np.array([3.05524638e+307], dtype=np.float64))\n+\n \n class ConcatSliceResourceTest(test_util.TensorFlowTestCase):\n ",
"filename": "tensorflow/python/kernel_tests/array_ops_test.py",
"status": "modified"
},
{
"diff": "@@ -4382,10 +4382,12 @@ def sequence_mask(lengths, maxlen=None, dtype=dtypes.bool, name=None):\n Returns:\n A mask tensor of shape `lengths.shape + (maxlen,)`, cast to specified dtype.\n Raises:\n- ValueError: if `maxlen` is not a scalar.\n+ ValueError: if `maxlen` is not a scalar or lengths is not an integer tensor.\n \"\"\"\n with ops.name_scope(name, \"SequenceMask\", [lengths, maxlen]):\n lengths = ops.convert_to_tensor(lengths)\n+ if not lengths.dtype.is_integer:\n+ raise ValueError(\"lengths must be integer for sequence_mask\")\n \n if maxlen is None:\n maxlen = gen_math_ops._max(lengths, _all_dimensions(lengths))",
"filename": "tensorflow/python/ops/array_ops.py",
"status": "modified"
}
]
}
|
{
"body": "**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04\r\n- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A\r\n- TensorFlow installed from (source or binary): binary\r\n- TensorFlow version (use command below):2.1.0\r\n- Python version:3.7.6\r\n- Bazel version (if compiling from source):N/A\r\n- GCC/Compiler version (if compiling from source):N/A\r\n- CUDA/cuDNN version:N/A\r\n- GPU model and memory:N/A\r\n\r\n**Describe the current behavior**\r\n`tf.math.reduce_prod` aborts when `keepdims` contain large values\r\n\r\n**Describe the expected behavior**\r\nexpect an exception message if the input is not expected, instead of crash.\r\n\r\n**Standalone code to reproduce the issue**\r\n~~~python\r\nimport tensorflow as tf\r\nimport numpy as np\r\ntf.math.reduce_prod(input_tensor=1, keepdims=np.array([63600, 1], dtype=np.float16))\r\n~~~\r\n\r\n\r\nOutput:\r\n~~~python\r\n2021-01-26 17:02:24.497049: F ./tensorflow/python/eager/pywrap_tensor_conversion.h:58] Check failed: !PyErr_Occurred()\r\nAborted (core dumped)\r\n~~~",
"comments": [
{
"body": "@ymodak \r\nI ran the code on tf 2.4 and nightly but colab crashes, please find the [gist here](https://colab.research.google.com/gist/Saduf2019/0f6ab8c1a411eac72ae6e4502faed92b/untitled509.ipynb).",
"created_at": "2021-01-27T04:53:43Z"
},
{
"body": "Added a PR #46741 for the fix.",
"created_at": "2021-01-27T20:31:23Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46700\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46700\">No</a>\n",
"created_at": "2021-02-01T19:58:15Z"
}
],
"number": 46700,
"title": "tf.math.reduce_prod aborts when keepdims contain large values"
}
|
{
"body": "This PR tries to address the issue raised in #46700 where\r\ntf.math.reduce_prod will crash if keepdims is being passed\r\nwith a non-boolean value (e.g. numpy value)\r\n\r\nThe issue was that keepdims is passed through pywrap\r\nwhich can not interprete numpy values, thus crashes.\r\n\r\nA way to detect the type mismatch before being passed\r\nto pywrap is to use `bool(keepdims)` to give python a chance\r\nto convert to bool (and throw out error when appropriate).\r\n\r\nThis PR also fixes all reduce_ ops.\r\n\r\nThis PR fixes #46700.\r\n\r\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>",
"number": 46741,
"review_comments": [],
"title": "Fix crash when invalid keepdims value is being passed to tf.math.reduce_prod"
}
|
{
"commits": [
{
"message": "Fix crash when invalid keepdims value is being passed to tf.math.reduce_prod\n\nThis PR tries to address the issue raised in 46700 where\ntf.math.reduce_prod will crash if keepdims is being passed\nwith a non-boolean value (e.g. numpy value)\n\nThe issue was that keepdims is passed through pywrap\nwhich can not interprete numpy values, thus crashes.\n\nA way to detect the type mismatch before being passed\nto pywrap is to use `bool(keepdims)` to give python a chance\nto convert to bool (and throw out error when appropriate).\n\nThis PR also fixes all reduce_ ops.\n\nThis PR fixes 46700.\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
},
{
"message": "Add test case for GitHub issue 46700.\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
}
],
"files": [
{
"diff": "@@ -116,6 +116,25 @@ def testBasic(self):\n self.assertEqual(y.shape, ())\n \n \n+class ReductionInvalidKeepdims(test.TestCase):\n+\n+ def testBasic(self):\n+ # Test case for GitHub issue 46700.\n+ for dtype, reductions in [(dtypes.float32,\n+ (math_ops.reduce_sum, math_ops.reduce_mean,\n+ math_ops.reduce_prod, math_ops.reduce_max,\n+ math_ops.reduce_min,\n+ math_ops.reduce_euclidean_norm)),\n+ (dtypes.bool, (math_ops.reduce_all,\n+ math_ops.reduce_any))]:\n+ for reduction in reductions:\n+ with self.assertRaisesRegex(ValueError, \"The truth value\"):\n+ x = True if dtype == dtypes.bool else 1\n+ y = reduction(\n+ input_tensor=x, keepdims=np.array([63600, 1], dtype=np.float16))\n+ self.evaluate(y)\n+\n+\n class BaseReductionTest(test.TestCase):\n \n def _tf_reduce(self, x, reduction_axes, keepdims):",
"filename": "tensorflow/python/kernel_tests/reduction_ops_test.py",
"status": "modified"
},
{
"diff": "@@ -2016,7 +2016,7 @@ def reduce_sum_with_dims(input_tensor,\n keepdims=False,\n name=None,\n dims=None):\n- keepdims = False if keepdims is None else keepdims\n+ keepdims = False if keepdims is None else bool(keepdims)\n return _may_reduce_to_scalar(\n keepdims, axis,\n gen_math_ops._sum(input_tensor, dims, keepdims, name=name))\n@@ -2059,6 +2059,7 @@ def reduce_euclidean_norm(input_tensor, axis=None, keepdims=False, name=None):\n Returns:\n The reduced tensor, of the same dtype as the input_tensor.\n \"\"\"\n+ keepdims = bool(keepdims)\n return _may_reduce_to_scalar(\n keepdims, axis,\n gen_math_ops.euclidean_norm(\n@@ -2331,7 +2332,7 @@ def reduce_mean(input_tensor, axis=None, keepdims=False, name=None):\n \n @end_compatibility\n \"\"\"\n- keepdims = False if keepdims is None else keepdims\n+ keepdims = False if keepdims is None else bool(keepdims)\n return _may_reduce_to_scalar(\n keepdims, axis,\n gen_math_ops.mean(\n@@ -2491,7 +2492,7 @@ def reduce_prod(input_tensor, axis=None, keepdims=False, name=None):\n Equivalent to np.prod\n @end_compatibility\n \"\"\"\n- keepdims = False if keepdims is None else keepdims\n+ keepdims = False if keepdims is None else bool(keepdims)\n return _may_reduce_to_scalar(\n keepdims, axis,\n gen_math_ops.prod(\n@@ -2678,7 +2679,7 @@ def reduce_min(input_tensor, axis=None, keepdims=False, name=None):\n Equivalent to np.min\n @end_compatibility\n \"\"\"\n- keepdims = False if keepdims is None else keepdims\n+ keepdims = False if keepdims is None else bool(keepdims)\n return _may_reduce_to_scalar(\n keepdims, axis,\n gen_math_ops._min(\n@@ -2805,7 +2806,7 @@ def reduce_max_with_dims(input_tensor,\n keepdims=False,\n name=None,\n dims=None):\n- keepdims = False if keepdims is None else keepdims\n+ keepdims = False if keepdims is None else bool(keepdims)\n return _may_reduce_to_scalar(\n keepdims, axis,\n gen_math_ops._max(input_tensor, dims, keepdims, name=name))\n@@ -2909,7 +2910,7 @@ def reduce_all(input_tensor, axis=None, keepdims=False, name=None):\n Equivalent to np.all\n @end_compatibility\n \"\"\"\n- keepdims = False if keepdims is None else keepdims\n+ keepdims = False if keepdims is None else bool(keepdims)\n return _may_reduce_to_scalar(\n keepdims, axis,\n gen_math_ops._all(\n@@ -3015,7 +3016,7 @@ def reduce_any(input_tensor, axis=None, keepdims=False, name=None):\n Equivalent to np.any\n @end_compatibility\n \"\"\"\n- keepdims = False if keepdims is None else keepdims\n+ keepdims = False if keepdims is None else bool(keepdims)\n return _may_reduce_to_scalar(\n keepdims, axis,\n gen_math_ops._any(",
"filename": "tensorflow/python/ops/math_ops.py",
"status": "modified"
}
]
}
|
{
"body": "**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04\r\n- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A\r\n- TensorFlow installed from (source or binary): binary\r\n- TensorFlow version (use command below):2.1.0\r\n- Python version:3.7.6\r\n- Bazel version (if compiling from source):N/A\r\n- GCC/Compiler version (if compiling from source):N/A\r\n- CUDA/cuDNN version:N/A\r\n- GPU model and memory:N/A\r\n\r\n\r\n**Describe the current behavior**\r\n`tf.keras.backend.constant` abortion\r\n\r\n**Describe the expected behavior**\r\nexpect an exception message if the input is not expected, instead of crash.\r\n\r\n**Standalone code to reproduce the issue**\r\n~~~python\r\nimport tensorflow as tf\r\nimport numpy as np\r\ntf.keras.backend.constant(value=np.ones((0,1,1)), shape=[36,23,53,24,117,82,47,124,112,69,53,0])\r\n~~~\r\n\r\nOutput:\r\n~~~python\r\n2021-01-26 16:30:57.093291: F tensorflow/core/framework/tensor_shape.cc:405] Check failed: 0 <= new_num_elements (0 vs. -1)\r\nAborted (core dumped)\r\n~~~",
"comments": [
{
"body": "I have tried in colab with TF version 2.1, nightly version(`2.5.0-dev20210126`) and was able to reproduce the issue. Please, find the gist [here](https://colab.research.google.com/gist/ravikyram/20eee659ad8f62f7d9af60e27df1ac65/untitled634.ipynb). Thanks!",
"created_at": "2021-01-26T21:42:37Z"
},
{
"body": "I think the issue will be fixed by PR #46717.",
"created_at": "2021-01-27T20:47:19Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46699\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46699\">No</a>\n",
"created_at": "2021-03-04T13:17:53Z"
}
],
"number": 46699,
"title": "tf.keras.backend.constant abortion"
}
|
{
"body": "This PR tries to address the issue raised in #46693 where\r\na shape with large number of elements will cause the\r\ntf.reshape to crash.\r\n\r\nThis PR adds relevant shape check so that error message can\r\nbe returned gracefully.\r\n\r\nThis PR fixes #46693\r\n\r\nThis PR also fixes #46699\r\n\r\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>",
"number": 46717,
"review_comments": [
{
"body": "The indentation here looks wrong; might be a spaces (the rest of the file) vs. tabs issue?",
"created_at": "2021-02-22T22:44:04Z"
}
],
"title": "Add relevant shape check for tf.reshape to prevent crash"
}
|
{
"commits": [
{
"message": "Add relevant shape check for tf.reshape to prevent crash\n\nThis PR tries to address the issue raised in 46693 where\na shape with large number of elements will cause the\ntf.reshape to crash.\n\nThis PR adds relevant shape check so that error message can\nbe returned gracefully.\n\nThis PR fixes 46693\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
}
],
"files": [
{
"diff": "@@ -24,6 +24,7 @@ limitations under the License.\n #include \"tensorflow/core/framework/types.h\"\n #include \"tensorflow/core/lib/core/status.h\"\n #include \"tensorflow/core/platform/logging.h\"\n+#include \"tensorflow/core/util/overflow.h\"\n \n namespace tensorflow {\n \n@@ -135,6 +136,17 @@ class ReshapeOp : public OpKernel {\n shape->AddDim(size);\n *has_zero_dim = true;\n } else {\n+ if (MultiplyWithoutOverflow(shape->num_elements(), size) < 0) {\n+ string msg;\n+ for (int ii = 0; ii < num_dims; ++ii) {\n+ if (ii != 0) {\n+ strings::StrAppend(&msg, \", \");\n+ }\n+ strings::StrAppend(&msg, Svec(ii));\n+ }\n+ return errors::InvalidArgument(\"Shape [\", msg,\n+ \"] has too many elements\");\n+ }\n shape->AddDim(size);\n (*product) *= size;\n }",
"filename": "tensorflow/core/kernels/reshape_op.h",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n \n from tensorflow.python.framework import constant_op\n from tensorflow.python.framework import dtypes\n+from tensorflow.python.framework import errors_impl\n from tensorflow.python.framework import ops\n from tensorflow.python.framework import tensor_shape\n from tensorflow.python.framework import test_util\n@@ -214,6 +215,14 @@ def testInt64Shape(self):\n y = array_ops.reshape(x, [1, 50000**2])\n self.assertEqual([1, 50000**2], y.get_shape().as_list())\n \n+ @test_util.run_v2_only\n+ def testTooLargeShape(self):\n+ with self.assertRaisesRegex(\n+ errors_impl.InvalidArgumentError, \"too many elements\"):\n+ x = array_ops.reshape([1], np.array([21943, 45817, 30516, 61760, 38987]))\n+ self.evaluate(x)\n+\n+\n \n if __name__ == \"__main__\":\n test.main()",
"filename": "tensorflow/python/kernel_tests/reshape_op_test.py",
"status": "modified"
}
]
}
|
{
"body": "**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04\r\n- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A\r\n- TensorFlow installed from (source or binary): binary\r\n- TensorFlow version (use command below):2.1.0\r\n- Python version:3.7.6\r\n- Bazel version (if compiling from source):N/A\r\n- GCC/Compiler version (if compiling from source):N/A\r\n- CUDA/cuDNN version:N/A\r\n- GPU model and memory:N/A\r\n\r\n\r\n**Describe the current behavior**\r\n`tf.keras.backend.reshape` abortion when `shape` contain large values\r\n\r\n**Describe the expected behavior**\r\nexpect an exception message if the input is not expected, instead of crash. \r\n\r\n\r\n**Standalone code to reproduce the issue**\r\n~~~python\r\nimport tensorflow as tf\r\nimport numpy as np\r\ntf.keras.backend.reshape(x=[1], shape=np.array([21943, 45817, 30516, 61760, 38987], dtype=np.uint16))\r\n~~~\r\n\r\nOutput:\r\n~~~python\r\n2021-01-26 15:32:50.289333: F tensorflow/core/framework/tensor_shape.cc:405] Check failed: 0 <= new_num_elements (0 vs. -1)\r\nAborted (core dumped)\r\n~~~",
"comments": [
{
"body": "I have tried in colab with TF version 2.1, nightly version(2.5.0-dev20210126) and was able to reproduce the issue. Please, find the gist [here](https://colab.research.google.com/gist/ravikyram/a63148028cea8674427b82f612d8adeb/untitled636.ipynb). Thanks!",
"created_at": "2021-01-26T21:49:59Z"
},
{
"body": "Added a PR #46717 for the fix.",
"created_at": "2021-01-27T03:34:46Z"
},
{
"body": "@yongtang Thanks for th PR!\r\n\r\nBTW I just found similar abortion in `tf.reshape` and `tf.constant`.\r\n\r\nHere are the reproduce code:\r\n~~~python\r\ntf.reshape(tensor=[1], shape=np.array([21943, 45817, 30516, 61760, 38987], dtype=np.uint16))\r\ntf.constant(value=np.ones((0,1,1)), shape=[36,23,53,24,117,82,47,124,112,69,53,0])\r\n~~~\r\n\r\nCould you please make sure that the PR also fixes these two APIs? Thanks!\r\n",
"created_at": "2021-02-03T16:18:28Z"
},
{
"body": "@DNXie Yes tf.reshape and tf.keras.backend.reshape are through the same kernel so both will be fixed by PR #46717 ",
"created_at": "2021-02-03T16:29:00Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46693\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46693\">No</a>\n",
"created_at": "2021-03-04T13:17:50Z"
}
],
"number": 46693,
"title": "tf.keras.backend.reshape abortion when shape contain large values"
}
|
{
"body": "This PR tries to address the issue raised in #46693 where\r\na shape with large number of elements will cause the\r\ntf.reshape to crash.\r\n\r\nThis PR adds relevant shape check so that error message can\r\nbe returned gracefully.\r\n\r\nThis PR fixes #46693\r\n\r\nThis PR also fixes #46699\r\n\r\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>",
"number": 46717,
"review_comments": [
{
"body": "The indentation here looks wrong; might be a spaces (the rest of the file) vs. tabs issue?",
"created_at": "2021-02-22T22:44:04Z"
}
],
"title": "Add relevant shape check for tf.reshape to prevent crash"
}
|
{
"commits": [
{
"message": "Add relevant shape check for tf.reshape to prevent crash\n\nThis PR tries to address the issue raised in 46693 where\na shape with large number of elements will cause the\ntf.reshape to crash.\n\nThis PR adds relevant shape check so that error message can\nbe returned gracefully.\n\nThis PR fixes 46693\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
}
],
"files": [
{
"diff": "@@ -24,6 +24,7 @@ limitations under the License.\n #include \"tensorflow/core/framework/types.h\"\n #include \"tensorflow/core/lib/core/status.h\"\n #include \"tensorflow/core/platform/logging.h\"\n+#include \"tensorflow/core/util/overflow.h\"\n \n namespace tensorflow {\n \n@@ -135,6 +136,17 @@ class ReshapeOp : public OpKernel {\n shape->AddDim(size);\n *has_zero_dim = true;\n } else {\n+ if (MultiplyWithoutOverflow(shape->num_elements(), size) < 0) {\n+ string msg;\n+ for (int ii = 0; ii < num_dims; ++ii) {\n+ if (ii != 0) {\n+ strings::StrAppend(&msg, \", \");\n+ }\n+ strings::StrAppend(&msg, Svec(ii));\n+ }\n+ return errors::InvalidArgument(\"Shape [\", msg,\n+ \"] has too many elements\");\n+ }\n shape->AddDim(size);\n (*product) *= size;\n }",
"filename": "tensorflow/core/kernels/reshape_op.h",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n \n from tensorflow.python.framework import constant_op\n from tensorflow.python.framework import dtypes\n+from tensorflow.python.framework import errors_impl\n from tensorflow.python.framework import ops\n from tensorflow.python.framework import tensor_shape\n from tensorflow.python.framework import test_util\n@@ -214,6 +215,14 @@ def testInt64Shape(self):\n y = array_ops.reshape(x, [1, 50000**2])\n self.assertEqual([1, 50000**2], y.get_shape().as_list())\n \n+ @test_util.run_v2_only\n+ def testTooLargeShape(self):\n+ with self.assertRaisesRegex(\n+ errors_impl.InvalidArgumentError, \"too many elements\"):\n+ x = array_ops.reshape([1], np.array([21943, 45817, 30516, 61760, 38987]))\n+ self.evaluate(x)\n+\n+\n \n if __name__ == \"__main__\":\n test.main()",
"filename": "tensorflow/python/kernel_tests/reshape_op_test.py",
"status": "modified"
}
]
}
|
{
"body": "@tensorflow/micro\r\n\r\nDisclaimer: This is my first contribution to a bigger open source project, so please let me know if I'm doing something wrong or if I forget something - I highly appreciate your feedback.\r\n\r\nIn my project I use causal convolution, implemented with the Keras Layer Conv1D. After the conversion to the TFLite model, the convolution will be performed by the Conv2D op. This requires the ops BATCH_TO_SPACE_ND and SPACE_TO_BATCH_ND and since causal convolution has a huge field of use, I think that it makes sense to add these two ops to tflite micro as well. Since these ops are inverses of each other and (at least sometimes) used together, I thought it is appropriate to create only one issue to add both ops.\r\n\r\nI already got a model running on my mcu using these ops and now I would like to share this with the other tensorflow users. I am not sure yet about all the required steps and which Pull Requests I need to make but I am sure that I will figure this out and I will link the pull requests to this issue and document it properly.\r\n\r\nSteps\r\n* PR 1: refactor flatbuffer_conversions parsing function #45696 \r\n* PR 2: refactor reference implementation from lite/kernels/internal/reference/reference_ops.h into its own header without making any changes. #45699\r\n* PR 3: copy the reference kernel from lite to micro and adjust it to micro by removing optimized ops. Add the File to the build. #45704\r\n* PR 4 (by @njeffrie): Port batch_to_space from TFLite to micro for int8 and float #46681\r\n* PR 5 (by @njeffrie): Port space_to_batch from TFLite to micro for int8 and float #46714\r\n* PR 6: Bugfix in batch_to_space_nd.cc #47304\r\n",
"comments": [
{
"body": "Hi Stephan,\r\n\r\nWe are getting some quite high-priority requests from internal teams to port basic versions of these operators. In order to speed things along, I have created a TFLM float and int8 implementation for batch_to_space along with relevant tests in [this PR](https://github.com/tensorflow/tensorflow/pull/46681/files). I hope this can be a good starting point for fully featured batch_to_space and for space_to_batch, and help serve as an example for porting TFLM tests.",
"created_at": "2021-01-26T02:23:43Z"
},
{
"body": "> Hi Stephan,\r\n> \r\n> We are getting some quite high-priority requests from internal teams to port basic versions of these operators. In order to speed things along, I have created a TFLM float and int8 implementation for batch_to_space along with relevant tests in [this PR](https://github.com/tensorflow/tensorflow/pull/46681/files). I hope this can be a good starting point for fully featured batch_to_space and for space_to_batch, and help serve as an example for porting TFLM tests.\r\n\r\nHi Nat,\r\n\r\nThanks for your PR! Looks really good to me. I think I could continue working on this next week, this week I won't have any time. If it's so time critical, feel free to do it, if not, I will do it next week :)",
"created_at": "2021-01-26T07:52:25Z"
},
{
"body": "Sounds good. I think with the time pressure, I'll upload a similar version for space_to_batch_nd so that our internal teams can go ahead with their work. Hopefully these versions can serve as a starting point, and if you have additional features or especially additional tests we can work together to land those.",
"created_at": "2021-01-26T17:16:56Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/45693\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/45693\">No</a>\n",
"created_at": "2021-03-16T20:30:51Z"
}
],
"number": 45693,
"title": "micro: port ops BATCH_TO_SPACE_ND and SPACE_TO_BATCH_ND from lite "
}
|
{
"body": "Commit 1 copies the TFLite operator into TFLM\r\nCommit 2 implements basic float, int8 and error checking tests along with float and int8 implementations of space_to_batch_nd\r\n\r\nThis version requires that the flat size of input matches output, since TFLM does not support tensor resizing.\r\n\r\n#45693",
"number": 46714,
"review_comments": [
{
"body": "nit: move closer to where its first used",
"created_at": "2021-01-27T18:38:41Z"
},
{
"body": "2021",
"created_at": "2021-01-27T18:39:11Z"
},
{
"body": "2021",
"created_at": "2021-01-27T18:39:19Z"
},
{
"body": "constexpr",
"created_at": "2021-01-27T18:39:31Z"
},
{
"body": "Done.",
"created_at": "2021-01-28T09:16:46Z"
},
{
"body": "Done.",
"created_at": "2021-01-28T09:16:50Z"
},
{
"body": "Done.",
"created_at": "2021-01-28T09:17:17Z"
},
{
"body": "done.",
"created_at": "2021-01-28T09:17:38Z"
},
{
"body": "is it still worth adding a check for DimensionsCount ==4?\r\n\r\n```cc\r\nconst RuntimeSHare input1_shape = (unextended_input1_shape.DimensionsCount() == 4) ? unextended_input1_shape : ExtendSHape();\r\n```\r\n",
"created_at": "2021-01-28T23:05:51Z"
},
{
"body": "Done.",
"created_at": "2021-01-29T00:34:37Z"
},
{
"body": "I'm a bit confused by what this commit is fixing. types.h is ok to be included for TFLM.\r\n\r\nIn fact, the params are declared in that header:\r\nhttps://github.com/tensorflow/tensorflow/blob/ef214a5a9d217aedcfcff25027d7eab488385e29/tensorflow/lite/kernels/internal/types.h#L1096-L1099\r\n",
"created_at": "2021-01-29T20:52:16Z"
},
{
"body": "underlying issue what elsewhere, and fixed with [`0ec1f46` (#46714)](https://github.com/tensorflow/tensorflow/pull/46714/commits/0ec1f462f82afe9c0e0d5a95958deb71451bc831)",
"created_at": "2021-01-29T22:08:22Z"
}
],
"title": "Port int8 and float versions of space_to_batch to TFLM"
}
|
{
"commits": [
{
"message": "Copy space_to_batch to TFLM"
},
{
"message": "Add TFLM space_to_batch along with tests"
},
{
"message": "Merge branch 'master' of https://github.com/tensorflow/tensorflow into space_to_batch"
},
{
"message": "Address review comments."
},
{
"message": "Fix type issue"
},
{
"message": "Fix linker error due to lambda"
},
{
"message": "Add space_to_batch_nd.h to makefile"
},
{
"message": "Avoid unnecessary creation of RuntimeShape"
},
{
"message": "Fix bad dependencies bringing in __exidx symbol"
},
{
"message": "replace tflite::ExtendedShape to fix TFLite test."
}
],
"files": [
{
"diff": "@@ -21,9 +21,19 @@ limitations under the License.\n #include \"tensorflow/lite/kernels/internal/types.h\"\n \n namespace tflite {\n-\n namespace reference_ops {\n \n+inline RuntimeShape ExtendShape(const RuntimeShape& shape) {\n+ if (shape.DimensionsCount() == 4) {\n+ return shape;\n+ }\n+ RuntimeShape new_shape(4, 1);\n+ new_shape.SetDim(0, shape.Dims(0));\n+ new_shape.SetDim(1, shape.Dims(1));\n+ new_shape.SetDim(3, shape.Dims(2));\n+ return new_shape;\n+}\n+\n template <typename T>\n inline void SpaceToBatchND(const SpaceToBatchParams& params,\n const RuntimeShape& unextended_input1_shape,\n@@ -41,18 +51,8 @@ inline void SpaceToBatchND(const SpaceToBatchParams& params,\n unextended_output_shape.DimensionsCount());\n \n // Extends the input/output shape from 3D to 4D if needed, NHC -> NH1C.\n- auto extend_shape = [](const RuntimeShape& shape) {\n- if (shape.DimensionsCount() == 4) {\n- return shape;\n- }\n- RuntimeShape new_shape(4, 1);\n- new_shape.SetDim(0, shape.Dims(0));\n- new_shape.SetDim(1, shape.Dims(1));\n- new_shape.SetDim(3, shape.Dims(2));\n- return new_shape;\n- };\n- const RuntimeShape input1_shape = extend_shape(unextended_input1_shape);\n- const RuntimeShape output_shape = extend_shape(unextended_output_shape);\n+ const RuntimeShape input1_shape = ExtendShape(unextended_input1_shape);\n+ const RuntimeShape output_shape = ExtendShape(unextended_output_shape);\n \n const int depth = input1_shape.Dims(3);\n const int input_width = input1_shape.Dims(2);",
"filename": "tensorflow/lite/kernels/internal/reference/space_to_batch_nd.h",
"status": "modified"
},
{
"diff": "@@ -184,7 +184,11 @@ class RuntimeShape {\n // rolls out.\n RuntimeShape(RuntimeShape const& other) : size_(other.DimensionsCount()) {\n if (size_ > kMaxSmallSize) {\n+#ifdef TF_LITE_STATIC_MEMORY\n+ TFLITE_CHECK(false && \"No shape resizing supported on this platform\");\n+#else\n dims_pointer_ = new int32_t[size_];\n+#endif\n }\n std::memcpy(DimsData(), other.DimsData(), sizeof(int32_t) * size_);\n }",
"filename": "tensorflow/lite/kernels/internal/types.h",
"status": "modified"
},
{
"diff": "@@ -134,6 +134,7 @@ cc_library(\n \"round.cc\",\n \"shape.cc\",\n \"softmax_common.cc\",\n+ \"space_to_batch_nd.cc\",\n \"split.cc\",\n \"split_v.cc\",\n \"strided_slice.cc\",\n@@ -837,6 +838,21 @@ cc_test(\n ],\n )\n \n+tflite_micro_cc_test(\n+ name = \"space_to_batch_nd_test\",\n+ srcs = [\n+ \"space_to_batch_nd_test.cc\",\n+ ],\n+ deps = [\n+ \":conv_test_common\",\n+ \":kernel_runner\",\n+ \"//tensorflow/lite/c:common\",\n+ \"//tensorflow/lite/micro:micro_utils\",\n+ \"//tensorflow/lite/micro:test_helpers\",\n+ \"//tensorflow/lite/micro/testing:micro_test\",\n+ ],\n+)\n+\n tflite_micro_cc_test(\n name = \"transpose_conv_test\",\n srcs = [",
"filename": "tensorflow/lite/micro/kernels/BUILD",
"status": "modified"
},
{
"diff": "@@ -38,6 +38,7 @@ TfLiteRegistration Register_EXP();\n TfLiteRegistration Register_QUANTIZE();\n TfLiteRegistration Register_SHAPE();\n TfLiteRegistration Register_SOFTMAX();\n+TfLiteRegistration Register_SPACE_TO_BATCH_ND();\n TfLiteRegistration Register_SVDF();\n TfLiteRegistration Register_TRANSPOSE_CONV_2D();\n ",
"filename": "tensorflow/lite/micro/kernels/micro_ops.h",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,123 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+\n+#include \"tensorflow/lite/kernels/internal/reference/space_to_batch_nd.h\"\n+\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/kernels/internal/tensor_ctypes.h\"\n+#include \"tensorflow/lite/kernels/internal/types.h\"\n+#include \"tensorflow/lite/kernels/kernel_util.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_util.h\"\n+\n+namespace tflite {\n+\n+namespace {\n+\n+constexpr int kInputTensor = 0;\n+constexpr int kBlockShapeTensor = 1;\n+constexpr int kCropsTensor = 2;\n+constexpr int kOutputTensor = 0;\n+\n+constexpr int kInputDims = 4;\n+constexpr int kOutputDims = 4;\n+\n+} // namespace.\n+\n+void* Init(TfLiteContext* context, const char* buffer, size_t length) {\n+ TFLITE_DCHECK(context->AllocatePersistentBuffer != nullptr);\n+ return context->AllocatePersistentBuffer(context, sizeof(SpaceToBatchParams));\n+}\n+\n+TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n+ TF_LITE_ENSURE_EQ(context, NumInputs(node), 3);\n+ TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);\n+\n+ const TfLiteTensor* input = GetInput(context, node, kInputTensor);\n+ TfLiteTensor* output = GetOutput(context, node, kOutputTensor);\n+ TF_LITE_ENSURE(context, input != nullptr && output != nullptr);\n+\n+ SpaceToBatchParams* params =\n+ static_cast<SpaceToBatchParams*>(node->user_data);\n+ params->output_offset = output->params.zero_point;\n+\n+ // Only 4D input and output tensors are supported for this op on TFLM.\n+ TF_LITE_ENSURE_EQ(context, NumDimensions(input), kInputDims);\n+ TF_LITE_ENSURE_EQ(context, NumDimensions(output), kOutputDims);\n+ TF_LITE_ENSURE_EQ(context, input->type, output->type);\n+\n+ // Input and output must have the same flat size since TFLM does not support\n+ // tensor resizing.\n+ TF_LITE_ENSURE_EQ(context, GetTensorShape(input).FlatSize(),\n+ GetTensorShape(output).FlatSize());\n+ return kTfLiteOk;\n+}\n+\n+TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n+ TFLITE_DCHECK(node->user_data != nullptr);\n+ const SpaceToBatchParams& params =\n+ *(static_cast<const SpaceToBatchParams*>(node->user_data));\n+\n+ const TfLiteEvalTensor* input =\n+ tflite::micro::GetEvalInput(context, node, kInputTensor);\n+ const TfLiteEvalTensor* block_shape =\n+ tflite::micro::GetEvalInput(context, node, kBlockShapeTensor);\n+ const TfLiteEvalTensor* crops =\n+ tflite::micro::GetEvalInput(context, node, kCropsTensor);\n+ TfLiteEvalTensor* output =\n+ tflite::micro::GetEvalOutput(context, node, kOutputTensor);\n+\n+ switch (input->type) { // Already know in/out types are same.\n+ case kTfLiteFloat32:\n+ reference_ops::SpaceToBatchND(\n+ params, tflite::micro::GetTensorShape(input),\n+ tflite::micro::GetTensorData<float>(input),\n+ tflite::micro::GetTensorShape(block_shape),\n+ tflite::micro::GetTensorData<int32_t>(block_shape),\n+ 
tflite::micro::GetTensorShape(crops),\n+ tflite::micro::GetTensorData<int32_t>(crops),\n+ tflite::micro::GetTensorShape(output),\n+ tflite::micro::GetTensorData<float>(output));\n+ break;\n+ case kTfLiteInt8:\n+ reference_ops::SpaceToBatchND(\n+ params, tflite::micro::GetTensorShape(input),\n+ tflite::micro::GetTensorData<int8_t>(input),\n+ tflite::micro::GetTensorShape(block_shape),\n+ tflite::micro::GetTensorData<int32_t>(block_shape),\n+ tflite::micro::GetTensorShape(crops),\n+ tflite::micro::GetTensorData<int32_t>(crops),\n+ tflite::micro::GetTensorShape(output),\n+ tflite::micro::GetTensorData<int8_t>(output));\n+ break;\n+ default:\n+ TF_LITE_KERNEL_LOG(context, \"Type %s (%d) not supported.\",\n+ TfLiteTypeGetName(input->type), input->type);\n+ return kTfLiteError;\n+ }\n+ return kTfLiteOk;\n+}\n+\n+TfLiteRegistration Register_SPACE_TO_BATCH_ND() {\n+ return {/*init=*/Init,\n+ /*free=*/nullptr,\n+ /*prepare=*/Prepare,\n+ /*invoke=*/Eval,\n+ /*profiling_string=*/nullptr,\n+ /*builtin_code=*/0,\n+ /*custom_name=*/nullptr,\n+ /*version=*/0};\n+}\n+\n+} // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/space_to_batch_nd.cc",
"status": "added"
},
{
"diff": "@@ -0,0 +1,168 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+\n+#include <cstdint>\n+\n+#include \"tensorflow/lite/c/builtin_op_data.h\"\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_runner.h\"\n+#include \"tensorflow/lite/micro/test_helpers.h\"\n+#include \"tensorflow/lite/micro/testing/micro_test.h\"\n+\n+namespace tflite {\n+namespace testing {\n+namespace {\n+\n+constexpr int kBasicInputOutputSize = 16;\n+const int basic_input_dims[] = {4, 1, 4, 4, 1};\n+const float basic_input[kBasicInputOutputSize] = {\n+ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};\n+const int basic_block_shape_dims[] = {1, 2};\n+const int32_t basic_block_shape[] = {2, 2};\n+const int basic_crops_dims[] = {1, 4};\n+const int32_t basic_crops[] = {0, 0, 0, 0};\n+const int basic_output_dims[] = {4, 4, 2, 2, 1};\n+const float basic_golden[kBasicInputOutputSize] = {1, 3, 9, 11, 2, 4, 10, 12,\n+ 5, 7, 13, 15, 6, 8, 14, 16};\n+\n+template <typename T>\n+TfLiteStatus ValidateSpaceToBatchNdGoldens(TfLiteTensor* tensors,\n+ int tensors_size, const T* golden,\n+ T* output, int output_size) {\n+ int inputs_array_data[] = {3, 0, 1, 2};\n+ TfLiteIntArray* inputs_array = IntArrayFromInts(inputs_array_data);\n+ int outputs_array_data[] = {1, 3};\n+ TfLiteIntArray* outputs_array = IntArrayFromInts(outputs_array_data);\n+\n+ const TfLiteRegistration registration = Register_SPACE_TO_BATCH_ND();\n+ micro::KernelRunner runner(registration, tensors, tensors_size, inputs_array,\n+ outputs_array, nullptr, micro_test::reporter);\n+\n+ TF_LITE_ENSURE_STATUS(runner.InitAndPrepare());\n+ TF_LITE_ENSURE_STATUS(runner.Invoke());\n+\n+ for (int i = 0; i < output_size; ++i) {\n+ // TODO(b/158102673): workaround for not having fatal test assertions.\n+ TF_LITE_MICRO_EXPECT_EQ(golden[i], output[i]);\n+ if (golden[i] != output[i]) {\n+ return kTfLiteError;\n+ }\n+ }\n+ return kTfLiteOk;\n+}\n+\n+TfLiteStatus TestSpaceToBatchNdFloat(\n+ const int* input_dims_data, const float* input_data,\n+ const int* block_shape_dims_data, const int32_t* block_shape_data,\n+ const int* crops_dims_data, const int32_t* crops_data,\n+ const int* output_dims_data, const float* golden, float* output_data) {\n+ TfLiteIntArray* input_dims = IntArrayFromInts(input_dims_data);\n+ TfLiteIntArray* block_shape_dims = IntArrayFromInts(block_shape_dims_data);\n+ TfLiteIntArray* crops_dims = IntArrayFromInts(crops_dims_data);\n+ TfLiteIntArray* output_dims = IntArrayFromInts(output_dims_data);\n+\n+ constexpr int inputs_size = 3;\n+ constexpr int outputs_size = 1;\n+ constexpr int tensors_size = inputs_size + outputs_size;\n+ TfLiteTensor tensors[tensors_size] = {\n+ CreateTensor(input_data, input_dims),\n+ CreateTensor(block_shape_data, block_shape_dims),\n+ CreateTensor(crops_data, crops_dims),\n+ CreateTensor(output_data, output_dims),\n+ };\n+\n+ return 
ValidateSpaceToBatchNdGoldens(tensors, tensors_size, golden,\n+ output_data, ElementCount(*output_dims));\n+}\n+\n+template <typename T>\n+TfLiteStatus TestSpaceToBatchNdQuantized(\n+ const int* input_dims_data, const float* input_data, T* input_quantized,\n+ float input_scale, int input_zero_point, const int* block_shape_dims_data,\n+ const int32_t* block_shape_data, const int* crops_dims_data,\n+ const int32_t* crops_data, const int* output_dims_data, const float* golden,\n+ T* golden_quantized, float output_scale, int output_zero_point,\n+ T* output_data) {\n+ TfLiteIntArray* input_dims = IntArrayFromInts(input_dims_data);\n+ TfLiteIntArray* block_shape_dims = IntArrayFromInts(block_shape_dims_data);\n+ TfLiteIntArray* crops_dims = IntArrayFromInts(crops_dims_data);\n+ TfLiteIntArray* output_dims = IntArrayFromInts(output_dims_data);\n+\n+ constexpr int inputs_size = 3;\n+ constexpr int outputs_size = 1;\n+ constexpr int tensors_size = inputs_size + outputs_size;\n+ TfLiteTensor tensors[tensors_size] = {\n+ tflite::testing::CreateQuantizedTensor(input_data, input_quantized,\n+ input_dims, input_scale,\n+ input_zero_point),\n+ tflite::testing::CreateTensor(block_shape_data, block_shape_dims),\n+ tflite::testing::CreateTensor(crops_data, crops_dims),\n+ tflite::testing::CreateQuantizedTensor(output_data, output_dims,\n+ output_scale, output_zero_point),\n+ };\n+ tflite::Quantize(golden, golden_quantized, ElementCount(*output_dims),\n+ output_scale, output_zero_point);\n+\n+ return ValidateSpaceToBatchNdGoldens(tensors, tensors_size, golden_quantized,\n+ output_data, ElementCount(*output_dims));\n+}\n+\n+} // namespace\n+} // namespace testing\n+} // namespace tflite\n+\n+TF_LITE_MICRO_TESTS_BEGIN\n+\n+TF_LITE_MICRO_TEST(SpaceToBatchBasicFloat) {\n+ float output[tflite::testing::kBasicInputOutputSize];\n+ TF_LITE_MICRO_EXPECT_EQ(\n+ kTfLiteOk,\n+ tflite::testing::TestSpaceToBatchNdFloat(\n+ tflite::testing::basic_input_dims, tflite::testing::basic_input,\n+ tflite::testing::basic_block_shape_dims,\n+ tflite::testing::basic_block_shape, tflite::testing::basic_crops_dims,\n+ tflite::testing::basic_crops, tflite::testing::basic_output_dims,\n+ tflite::testing::basic_golden, output));\n+}\n+\n+TF_LITE_MICRO_TEST(SpaceToBatchBasicInt8) {\n+ int8_t output[tflite::testing::kBasicInputOutputSize];\n+ int8_t input_quantized[tflite::testing::kBasicInputOutputSize];\n+ int8_t golden_quantized[tflite::testing::kBasicInputOutputSize];\n+ TF_LITE_MICRO_EXPECT_EQ(\n+ kTfLiteOk,\n+ tflite::testing::TestSpaceToBatchNdQuantized(\n+ tflite::testing::basic_input_dims, tflite::testing::basic_input,\n+ input_quantized, 1.0f, 0, tflite::testing::basic_block_shape_dims,\n+ tflite::testing::basic_block_shape, tflite::testing::basic_crops_dims,\n+ tflite::testing::basic_crops, tflite::testing::basic_output_dims,\n+ tflite::testing::basic_golden, golden_quantized, 1.0f, 0, output));\n+}\n+\n+TF_LITE_MICRO_TEST(SpaceToBatchInvalidOutputDimensionShouldFail) {\n+ constexpr int output_length = 12;\n+ const int output_dims[] = {4, 1, 4, 3, 1};\n+ float output[output_length];\n+ TF_LITE_MICRO_EXPECT_EQ(\n+ kTfLiteError,\n+ tflite::testing::TestSpaceToBatchNdFloat(\n+ tflite::testing::basic_input_dims, tflite::testing::basic_input,\n+ tflite::testing::basic_block_shape_dims,\n+ tflite::testing::basic_block_shape, tflite::testing::basic_crops_dims,\n+ tflite::testing::basic_crops, output_dims,\n+ tflite::testing::basic_golden, output));\n+}\n+\n+TF_LITE_MICRO_TESTS_END",
"filename": "tensorflow/lite/micro/kernels/space_to_batch_nd_test.cc",
"status": "added"
},
{
"diff": "@@ -383,6 +383,11 @@ class MicroMutableOpResolver : public MicroOpResolver {\n ParseSoftmax);\n }\n \n+ TfLiteStatus AddSpaceToBatchNd() {\n+ return AddBuiltin(BuiltinOperator_SPACE_TO_BATCH_ND,\n+ Register_SPACE_TO_BATCH_ND(), ParseSpaceToBatchNd);\n+ }\n+\n TfLiteStatus AddSplit() {\n return AddBuiltin(BuiltinOperator_SPLIT,\n tflite::ops::micro::Register_SPLIT(), ParseSplit);",
"filename": "tensorflow/lite/micro/micro_mutable_op_resolver.h",
"status": "modified"
},
{
"diff": "@@ -34,9 +34,9 @@ readable_run make -f tensorflow/lite/micro/tools/make/Makefile TARGET=${TARGET}\n \n # check that the release build is ok.\n readable_run make -f tensorflow/lite/micro/tools/make/Makefile clean\n-readable_run make -j8 -f tensorflow/lite/micro/tools/make/Makefile TARGET=${TARGET} build BUILD_TYPE=release\n+readable_run make -j8 -f tensorflow/lite/micro/tools/make/Makefile TARGET=${TARGET} OPTIMIZATION_LEVEL=-O3 BUILD_TYPE=release build\n \n # Next, build w/o release so that we can run the tests and get additional\n # debugging info on failures.\n readable_run make -f tensorflow/lite/micro/tools/make/Makefile clean\n-readable_run make -j8 -f tensorflow/lite/micro/tools/make/Makefile TARGET=${TARGET} test\n+readable_run make -j8 -f tensorflow/lite/micro/tools/make/Makefile TARGET=${TARGET} OPTIMIZATION_LEVEL=-Os test",
"filename": "tensorflow/lite/micro/tools/ci_build/test_bluepill.sh",
"status": "modified"
},
{
"diff": "@@ -290,6 +290,7 @@ tensorflow/lite/micro/kernels/resize_nearest_neighbor_test.cc \\\n tensorflow/lite/micro/kernels/round_test.cc \\\n tensorflow/lite/micro/kernels/shape_test.cc \\\n tensorflow/lite/micro/kernels/softmax_test.cc \\\n+tensorflow/lite/micro/kernels/space_to_batch_nd_test.cc \\\n tensorflow/lite/micro/kernels/split_test.cc \\\n tensorflow/lite/micro/kernels/split_v_test.cc \\\n tensorflow/lite/micro/kernels/strided_slice_test.cc \\\n@@ -343,6 +344,7 @@ tensorflow/lite/micro/kernels/round.cc \\\n tensorflow/lite/micro/kernels/shape.cc \\\n tensorflow/lite/micro/kernels/softmax.cc \\\n tensorflow/lite/micro/kernels/softmax_common.cc \\\n+tensorflow/lite/micro/kernels/space_to_batch_nd.cc \\\n tensorflow/lite/micro/kernels/split.cc \\\n tensorflow/lite/micro/kernels/split_v.cc \\\n tensorflow/lite/micro/kernels/strided_slice.cc \\\n@@ -429,6 +431,7 @@ tensorflow/lite/kernels/internal/reference/requantize.h \\\n tensorflow/lite/kernels/internal/reference/resize_nearest_neighbor.h \\\n tensorflow/lite/kernels/internal/reference/round.h \\\n tensorflow/lite/kernels/internal/reference/softmax.h \\\n+tensorflow/lite/kernels/internal/reference/space_to_batch_nd.h \\\n tensorflow/lite/kernels/internal/reference/sub.h \\\n tensorflow/lite/kernels/internal/reference/logistic.h \\\n tensorflow/lite/kernels/internal/reference/strided_slice.h \\",
"filename": "tensorflow/lite/micro/tools/make/Makefile",
"status": "modified"
}
]
}
|
{
"body": "\r\n@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator FLOOR_DIV from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/floor_div.cc into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro without making any changes or including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/45657\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/45657\">No</a>\n",
"created_at": "2021-04-12T10:38:08Z"
}
],
"number": 45657,
"title": "micro: port op FLOOR_DIV from lite"
}
|
{
"body": "Implement skeleton (non-working) code for operator and test.\r\nHeader files changed.\r\nNamespaces changed.\r\nSome original code deleted.\r\nSome original code modified.\r\n\r\nThis represents PR step 4 of the work to port operator FLOOR_DIV as tracked in Issue #45657",
"number": 46710,
"review_comments": [],
"title": "micro: prepare to port operator FLOOR_DIV kernel from lite with test"
}
|
{
"commits": [
{
"message": "micro: prepare to port operator FLOOR_DIV kernel from lite with test\n\nImplement skeleton (non-working) code for operator and test.\nHeader files changed.\nNamespaces changed.\nSome original code deleted.\nSome original code modified.\n\nThis represents PR step 4 of the work to port operator FLOOR_DIV as tracked in Issue #45657"
}
],
"files": [
{
"diff": "@@ -1,4 +1,4 @@\n-/* Copyright 2018 The TensorFlow Authors. All Rights Reserved.\n+/* Copyright 2020 The TensorFlow Authors. All Rights Reserved.\n \n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n@@ -12,22 +12,18 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n ==============================================================================*/\n-#include <math.h>\n-#include <stddef.h>\n-#include <stdint.h>\n-\n-#include <functional>\n \n #include \"tensorflow/lite/c/common.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/binary_function.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/reference_ops.h\"\n-#include \"tensorflow/lite/kernels/internal/tensor.h\"\n-#include \"tensorflow/lite/kernels/internal/tensor_ctypes.h\"\n+#include \"tensorflow/lite/kernels/internal/quantization_util.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/div.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/process_broadcast_shapes.h\"\n+#include \"tensorflow/lite/kernels/internal/types.h\"\n #include \"tensorflow/lite/kernels/kernel_util.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_util.h\"\n \n namespace tflite {\n namespace ops {\n-namespace builtin {\n+namespace micro {\n namespace floor_div {\n namespace {\n \n@@ -36,28 +32,14 @@ constexpr int kInputTensor1 = 0;\n constexpr int kInputTensor2 = 1;\n constexpr int kOutputTensor = 0;\n \n-// Op data for floor_div op.\n-struct OpData {\n- bool requires_broadcast;\n-};\n-\n void* Init(TfLiteContext* context, const char* buffer, size_t length) {\n- auto* data = new OpData;\n- data->requires_broadcast = false;\n- return data;\n-}\n-\n-void Free(TfLiteContext* context, void* buffer) {\n- delete reinterpret_cast<OpData*>(buffer);\n+ return nullptr;\n }\n \n TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n TF_LITE_ENSURE_EQ(context, NumInputs(node), 2);\n TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);\n \n- // Reinterprete the opaque data provided by user.\n- OpData* data = reinterpret_cast<OpData*>(node->user_data);\n-\n const TfLiteTensor* input1;\n TF_LITE_ENSURE_OK(context,\n GetInputSafe(context, node, kInputTensor1, &input1));\n@@ -82,17 +64,7 @@ TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n }\n output->type = type;\n \n- data->requires_broadcast = !HaveSameShapes(input1, input2);\n-\n- TfLiteIntArray* output_size = nullptr;\n- if (data->requires_broadcast) {\n- TF_LITE_ENSURE_OK(context, CalculateShapeForBroadcast(\n- context, input1, input2, &output_size));\n- } else {\n- output_size = TfLiteIntArrayCopy(input1->dims);\n- }\n-\n- return context->ResizeTensor(context, output, output_size);\n+ return kTfLiteError;\n }\n \n template <typename T>\n@@ -125,8 +97,6 @@ TfLiteStatus EvalImpl(TfLiteContext* context, bool requires_broadcast,\n }\n \n TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n- OpData* data = reinterpret_cast<OpData*>(node->user_data);\n-\n const TfLiteTensor* input1;\n TF_LITE_ENSURE_OK(context,\n GetInputSafe(context, node, kInputTensor1, &input1));\n@@ -137,13 +107,15 @@ TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n TF_LITE_ENSURE_OK(context,\n GetOutputSafe(context, node, kOutputTensor, &output));\n \n+ bool requires_broadcast = false;\n+\n switch (input1->type) {\n case kTfLiteInt32: {\n- return 
EvalImpl<int32_t>(context, data->requires_broadcast, input1,\n- input2, output);\n+ return EvalImpl<int32_t>(context, requires_broadcast, input1, input2,\n+ output);\n }\n case kTfLiteFloat32: {\n- return EvalImpl<float>(context, data->requires_broadcast, input1, input2,\n+ return EvalImpl<float>(context, requires_broadcast, input1, input2,\n output);\n }\n default: {\n@@ -157,14 +129,8 @@ TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n } // namespace\n } // namespace floor_div\n \n-TfLiteRegistration* Register_FLOOR_DIV() {\n- // Init, Free, Prepare, Eval are satisfying the Interface required by\n- // TfLiteRegistration.\n- static TfLiteRegistration r = {floor_div::Init, floor_div::Free,\n- floor_div::Prepare, floor_div::Eval};\n- return &r;\n-}\n+TfLiteRegistration* Register_FLOOR_DIV() { return nullptr; }\n \n-} // namespace builtin\n+} // namespace micro\n } // namespace ops\n } // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/floor_div.cc",
"status": "modified"
},
{
"diff": "@@ -1,4 +1,4 @@\n-/* Copyright 2018 The TensorFlow Authors. All Rights Reserved.\n+/* Copyright 2020 The TensorFlow Authors. All Rights Reserved.\n \n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n@@ -12,106 +12,88 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n ==============================================================================*/\n-#include <stdint.h>\n \n-#include <vector>\n+#include <type_traits>\n \n-#include \"tensorflow/lite/kernels/test_util.h\"\n-#include \"tensorflow/lite/schema/schema_generated.h\"\n+#include \"tensorflow/lite/c/builtin_op_data.h\"\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_runner.h\"\n+#include \"tensorflow/lite/micro/test_helpers.h\"\n+#include \"tensorflow/lite/micro/testing/micro_test.h\"\n \n namespace tflite {\n+namespace testing {\n namespace {\n \n-using ::testing::ElementsAre;\n+TF_LITE_MICRO_TESTS_BEGIN\n \n-template <typename T>\n-class FloorDivModel : public SingleOpModel {\n- public:\n- FloorDivModel(const TensorData& input1, const TensorData& input2,\n- const TensorData& output) {\n- input1_ = AddInput(input1);\n- input2_ = AddInput(input2);\n- output_ = AddOutput(output);\n- SetBuiltinOp(BuiltinOperator_FLOOR_DIV, BuiltinOptions_FloorDivOptions,\n- CreateFloorDivOptions(builder_).Union());\n- BuildInterpreter({GetShape(input1_), GetShape(input2_)});\n- }\n-\n- int input1() { return input1_; }\n- int input2() { return input2_; }\n-\n- std::vector<T> GetOutput() { return ExtractVector<T>(output_); }\n- std::vector<int> GetOutputShape() { return GetTensorShape(output_); }\n-\n- private:\n- int input1_;\n- int input2_;\n- int output_;\n-};\n-\n-TEST(FloorDivModel, Simple) {\n+TF_LITE_MICRO_TEST(FloorDivModelSimple) {\n+#ifdef notdef\n FloorDivModel<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n {TensorType_INT32, {1, 2, 2, 1}},\n {TensorType_INT32, {}});\n model.PopulateTensor<int32_t>(model.input1(), {10, 9, 11, 3});\n model.PopulateTensor<int32_t>(model.input2(), {2, 2, 3, 4});\n- model.Invoke();\n- EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n EXPECT_THAT(model.GetOutput(), ElementsAre(5, 4, 3, 0));\n+#endif\n }\n \n-TEST(FloorDivModel, NegativeValue) {\n+TF_LITE_MICRO_TEST(FloorDivModelNegativeValue) {\n+#ifdef notdef\n FloorDivModel<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n {TensorType_INT32, {1, 2, 2, 1}},\n {TensorType_INT32, {}});\n model.PopulateTensor<int32_t>(model.input1(), {10, -9, -11, 7});\n model.PopulateTensor<int32_t>(model.input2(), {2, 2, -3, -4});\n- model.Invoke();\n- EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n EXPECT_THAT(model.GetOutput(), ElementsAre(5, -5, 3, -2));\n+#endif\n }\n \n-TEST(FloorDivModel, BroadcastFloorDiv) {\n+TF_LITE_MICRO_TEST(FloorDivModelBroadcastFloorDiv) {\n+#ifdef notdef\n FloorDivModel<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n {TensorType_INT32, {1}}, {TensorType_INT32, {}});\n model.PopulateTensor<int32_t>(model.input1(), {10, -9, -11, 7});\n model.PopulateTensor<int32_t>(model.input2(), {-3});\n- model.Invoke();\n- EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n EXPECT_THAT(model.GetOutput(), ElementsAre(-4, 3, 3, -3));\n+#endif\n }\n \n-TEST(FloorDivModel, SimpleFloat) {\n+TF_LITE_MICRO_TEST(FloorDivModelSimpleFloat) {\n+#ifdef notdef\n 
FloorDivModel<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n {TensorType_FLOAT32, {1, 2, 2, 1}},\n {TensorType_FLOAT32, {}});\n model.PopulateTensor<float>(model.input1(), {10.05, 9.09, 11.9, 3.01});\n model.PopulateTensor<float>(model.input2(), {2.05, 2.03, 3.03, 4.03});\n- model.Invoke();\n- EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n EXPECT_THAT(model.GetOutput(), ElementsAre(4.0, 4.0, 3.0, 0.0));\n+#endif\n }\n \n-TEST(FloorDivModel, NegativeValueFloat) {\n+TF_LITE_MICRO_TEST(FloorDivModelNegativeValueFloat) {\n+#ifdef notdef\n FloorDivModel<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n {TensorType_FLOAT32, {1, 2, 2, 1}},\n {TensorType_FLOAT32, {}});\n model.PopulateTensor<float>(model.input1(), {10.03, -9.9, -11.0, 7.0});\n model.PopulateTensor<float>(model.input2(), {2.0, 2.3, -3.0, -4.1});\n- model.Invoke();\n- EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n EXPECT_THAT(model.GetOutput(), ElementsAre(5.0, -5.0, 3.0, -2.0));\n+#endif\n }\n \n-TEST(FloorDivModel, BroadcastFloorDivFloat) {\n+TF_LITE_MICRO_TEST(FloorDivModelBroadcastFloorDivFloat) {\n+#ifdef notdef\n FloorDivModel<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n {TensorType_FLOAT32, {1}},\n {TensorType_FLOAT32, {}});\n model.PopulateTensor<float>(model.input1(), {10.03, -9.9, -11.0, 7.0});\n model.PopulateTensor<float>(model.input2(), {-3.3});\n- model.Invoke();\n- EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n EXPECT_THAT(model.GetOutput(), ElementsAre(-4.0, 2.0, 3.0, -3.0));\n+#endif\n }\n+\n+TF_LITE_MICRO_TESTS_END\n+\n } // namespace\n+} // namespace testing\n } // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/floor_div_test.cc",
"status": "modified"
}
]
}
|
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator BATCH_MATMUL from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro making minimal changes and not including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test\r\n",
"comments": [
{
"body": "Hi @ddavis-2015, are you stilling planning on integrating this?",
"created_at": "2023-07-19T21:39:06Z"
},
{
"body": "@pkgoogle A new PR based on this PR is in progress. The new PR will appear in the tflite-micro repo when ready.",
"created_at": "2023-07-20T06:28:58Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46504\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46504\">No</a>\n",
"created_at": "2023-07-20T06:39:58Z"
}
],
"number": 46504,
"title": "micro: port op BATCH_MATMUL from lite"
}
|
{
"body": "Move the reference implementation to its own header so that micro\r\ncan use it without the unrelated depedencies of reference_ops.h.\r\n\r\nPR step 2 for issue #46504",
"number": 46670,
"review_comments": [
{
"body": "Include order needs to be as the original for Eigen/Core.",
"created_at": "2021-01-25T18:49:21Z"
},
{
"body": "Changed include file order. Still fails the TFLite Makefile internal test.",
"created_at": "2021-01-25T21:52:32Z"
}
],
"title": "Extract reference for operator BATCH_MATMUL to standalone header"
}
|
{
"commits": [
{
"message": "Extract reference for operator BATCH_MATMUL to standalone header\n\nMove the reference implementation to its own header so that micro\ncan use it without the unrelated depedencies of reference_ops.h.\n\nPR step 2 for issue #46504"
}
],
"files": [
{
"diff": "@@ -570,6 +570,7 @@ cc_library(\n \"reference/add.h\",\n \"reference/add_n.h\",\n \"reference/arg_min_max.h\",\n+ \"reference/batch_matmul.h\",\n \"reference/batch_to_space_nd.h\",\n \"reference/binary_function.h\",\n \"reference/cast.h\",\n@@ -807,6 +808,7 @@ cc_library(\n ],\n hdrs = [\n \"tensor_utils.h\",\n+ \"tensor_utils_common.h\",\n ],\n compatible_with = get_compatible_with_portable(),\n copts = tflite_copts() + NEON_FLAGS_IF_APPLICABLE,",
"filename": "tensorflow/lite/kernels/internal/BUILD",
"status": "modified"
},
{
"diff": "@@ -15,16 +15,40 @@ limitations under the License.\n #ifndef TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_BATCH_MATMUL_H_\n #define TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_BATCH_MATMUL_H_\n \n-#include <stdint.h>\n-#include <string.h>\n+#include <algorithm>\n+#include <cstdint>\n \n #include \"tensorflow/lite/kernels/internal/common.h\"\n #include \"tensorflow/lite/kernels/internal/compatibility.h\"\n-#include \"tensorflow/lite/kernels/internal/tensor_utils.h\"\n+#include \"tensorflow/lite/kernels/internal/tensor_utils_common.h\"\n #include \"tensorflow/lite/kernels/internal/types.h\"\n \n namespace tflite {\n namespace reference_ops {\n+namespace batch_matmul {\n+\n+// Determine which dimension is the broadcast dimension.\n+inline int broadcast_dim(int lhs_dim, int rhs_dim) {\n+ if (lhs_dim == rhs_dim) return lhs_dim;\n+ if (lhs_dim == 1) return rhs_dim;\n+ TFLITE_DCHECK_EQ(rhs_dim, 1);\n+ return lhs_dim;\n+}\n+\n+// Compute the \"extent\" for iterating on this dimension.\n+// If we are broadcasting, then don't advance (i.e return 0).\n+inline int extent(const RuntimeShape& shape, int x) {\n+ if (shape.Dims(x) == 1) {\n+ return 0;\n+ }\n+ int prod = 1;\n+ for (int i = x + 1; i < shape.DimensionsCount(); ++i) {\n+ prod *= shape.Dims(i);\n+ }\n+ return prod;\n+}\n+\n+} // namespace batch_matmul\n \n inline void BatchMatMul(const RuntimeShape& lhs_shape, const float* lhs_data,\n const RuntimeShape& rhs_shape, const float* rhs_data,\n@@ -34,40 +58,19 @@ inline void BatchMatMul(const RuntimeShape& lhs_shape, const float* lhs_data,\n const RuntimeShape extended_rhs_shape =\n RuntimeShape::ExtendedShape(5, rhs_shape);\n \n- // Determine which dimension is the broadcast dimension.\n- auto broadcast_dim = [](int lhs_dim, int rhs_dim) {\n- if (lhs_dim == rhs_dim) return lhs_dim;\n- if (lhs_dim == 1) return rhs_dim;\n- TFLITE_DCHECK_EQ(rhs_dim, 1);\n- return lhs_dim;\n- };\n-\n- // Compute the \"extent\" for iterating on this dimension.\n- // If we are broadcasting, then don't advance (i.e return 0).\n- auto extent = [](const RuntimeShape& shape, int x) {\n- if (shape.Dims(x) == 1) {\n- return 0;\n- }\n- int prod = 1;\n- for (int i = x + 1; i < shape.DimensionsCount(); ++i) {\n- prod *= shape.Dims(i);\n- }\n- return prod;\n- };\n-\n- const int batch_dim0 =\n- broadcast_dim(extended_lhs_shape.Dims(0), extended_rhs_shape.Dims(0));\n- const int batch_dim1 =\n- broadcast_dim(extended_lhs_shape.Dims(1), extended_rhs_shape.Dims(1));\n- const int batch_dim2 =\n- broadcast_dim(extended_lhs_shape.Dims(2), extended_rhs_shape.Dims(2));\n+ const int batch_dim0 = batch_matmul::broadcast_dim(\n+ extended_lhs_shape.Dims(0), extended_rhs_shape.Dims(0));\n+ const int batch_dim1 = batch_matmul::broadcast_dim(\n+ extended_lhs_shape.Dims(1), extended_rhs_shape.Dims(1));\n+ const int batch_dim2 = batch_matmul::broadcast_dim(\n+ extended_lhs_shape.Dims(2), extended_rhs_shape.Dims(2));\n \n- const int lhs_ext0 = extent(extended_lhs_shape, 0);\n- const int lhs_ext1 = extent(extended_lhs_shape, 1);\n- const int lhs_ext2 = extent(extended_lhs_shape, 2);\n- const int rhs_ext0 = extent(extended_rhs_shape, 0);\n- const int rhs_ext1 = extent(extended_rhs_shape, 1);\n- const int rhs_ext2 = extent(extended_rhs_shape, 2);\n+ const int lhs_ext0 = batch_matmul::extent(extended_lhs_shape, 0);\n+ const int lhs_ext1 = batch_matmul::extent(extended_lhs_shape, 1);\n+ const int lhs_ext2 = batch_matmul::extent(extended_lhs_shape, 2);\n+ const int rhs_ext0 = batch_matmul::extent(extended_rhs_shape, 0);\n+ const int rhs_ext1 = 
batch_matmul::extent(extended_rhs_shape, 1);\n+ const int rhs_ext2 = batch_matmul::extent(extended_rhs_shape, 2);\n \n // Set params for each matrix multiply.\n const int lhs_rows = extended_lhs_shape.Dims(3);\n@@ -113,40 +116,19 @@ inline void BatchMatMul(const RuntimeShape& lhs_shape, const int8_t* lhs_data,\n const RuntimeShape extended_rhs_shape =\n RuntimeShape::ExtendedShape(5, rhs_shape);\n \n- // Determine which dimension is the broadcast dimension.\n- auto broadcast_dim = [](int lhs_dim, int rhs_dim) {\n- if (lhs_dim == rhs_dim) return lhs_dim;\n- if (lhs_dim == 1) return rhs_dim;\n- TFLITE_DCHECK_EQ(rhs_dim, 1);\n- return lhs_dim;\n- };\n+ const int batch_dim0 = batch_matmul::broadcast_dim(\n+ extended_lhs_shape.Dims(0), extended_rhs_shape.Dims(0));\n+ const int batch_dim1 = batch_matmul::broadcast_dim(\n+ extended_lhs_shape.Dims(1), extended_rhs_shape.Dims(1));\n+ const int batch_dim2 = batch_matmul::broadcast_dim(\n+ extended_lhs_shape.Dims(2), extended_rhs_shape.Dims(2));\n \n- // Compute the \"extent\" for iterating on this dimension.\n- // If we are broadcasting, then don't advance (i.e return 0).\n- auto extent = [](const RuntimeShape& shape, int x) {\n- if (shape.Dims(x) == 1) {\n- return 0;\n- }\n- int prod = 1;\n- for (int i = x + 1; i < shape.DimensionsCount(); ++i) {\n- prod *= shape.Dims(i);\n- }\n- return prod;\n- };\n-\n- const int batch_dim0 =\n- broadcast_dim(extended_lhs_shape.Dims(0), extended_rhs_shape.Dims(0));\n- const int batch_dim1 =\n- broadcast_dim(extended_lhs_shape.Dims(1), extended_rhs_shape.Dims(1));\n- const int batch_dim2 =\n- broadcast_dim(extended_lhs_shape.Dims(2), extended_rhs_shape.Dims(2));\n-\n- const int lhs_ext0 = extent(extended_lhs_shape, 0);\n- const int lhs_ext1 = extent(extended_lhs_shape, 1);\n- const int lhs_ext2 = extent(extended_lhs_shape, 2);\n- const int rhs_ext0 = extent(extended_rhs_shape, 0);\n- const int rhs_ext1 = extent(extended_rhs_shape, 1);\n- const int rhs_ext2 = extent(extended_rhs_shape, 2);\n+ const int lhs_ext0 = batch_matmul::extent(extended_lhs_shape, 0);\n+ const int lhs_ext1 = batch_matmul::extent(extended_lhs_shape, 1);\n+ const int lhs_ext2 = batch_matmul::extent(extended_lhs_shape, 2);\n+ const int rhs_ext0 = batch_matmul::extent(extended_rhs_shape, 0);\n+ const int rhs_ext1 = batch_matmul::extent(extended_rhs_shape, 1);\n+ const int rhs_ext2 = batch_matmul::extent(extended_rhs_shape, 2);\n \n // Set params for each matrix multiply.\n const int lhs_rows = extended_lhs_shape.Dims(3);\n@@ -223,40 +205,19 @@ inline void BatchMatMul(const FullyConnectedParams& params,\n const RuntimeShape extended_rhs_shape =\n RuntimeShape::ExtendedShape(5, rhs_shape);\n \n- // Determine which dimension is the broadcast dimension.\n- auto broadcast_dim = [](int lhs_dim, int rhs_dim) {\n- if (lhs_dim == rhs_dim) return lhs_dim;\n- if (lhs_dim == 1) return rhs_dim;\n- TFLITE_DCHECK_EQ(rhs_dim, 1);\n- return lhs_dim;\n- };\n-\n- // Compute the \"extent\" for iterating on this dimension.\n- // If we are broadcasting, then don't advance (i.e return 0).\n- auto extent = [](const RuntimeShape& shape, int x) {\n- if (shape.Dims(x) == 1) {\n- return 0;\n- }\n- int prod = 1;\n- for (int i = x + 1; i < shape.DimensionsCount(); ++i) {\n- prod *= shape.Dims(i);\n- }\n- return prod;\n- };\n-\n- const int batch_dim0 =\n- broadcast_dim(extended_lhs_shape.Dims(0), extended_rhs_shape.Dims(0));\n- const int batch_dim1 =\n- broadcast_dim(extended_lhs_shape.Dims(1), extended_rhs_shape.Dims(1));\n- const int batch_dim2 =\n- 
broadcast_dim(extended_lhs_shape.Dims(2), extended_rhs_shape.Dims(2));\n-\n- const int lhs_ext0 = extent(extended_lhs_shape, 0);\n- const int lhs_ext1 = extent(extended_lhs_shape, 1);\n- const int lhs_ext2 = extent(extended_lhs_shape, 2);\n- const int rhs_ext0 = extent(extended_rhs_shape, 0);\n- const int rhs_ext1 = extent(extended_rhs_shape, 1);\n- const int rhs_ext2 = extent(extended_rhs_shape, 2);\n+ const int batch_dim0 = batch_matmul::broadcast_dim(\n+ extended_lhs_shape.Dims(0), extended_rhs_shape.Dims(0));\n+ const int batch_dim1 = batch_matmul::broadcast_dim(\n+ extended_lhs_shape.Dims(1), extended_rhs_shape.Dims(1));\n+ const int batch_dim2 = batch_matmul::broadcast_dim(\n+ extended_lhs_shape.Dims(2), extended_rhs_shape.Dims(2));\n+\n+ const int lhs_ext0 = batch_matmul::extent(extended_lhs_shape, 0);\n+ const int lhs_ext1 = batch_matmul::extent(extended_lhs_shape, 1);\n+ const int lhs_ext2 = batch_matmul::extent(extended_lhs_shape, 2);\n+ const int rhs_ext0 = batch_matmul::extent(extended_rhs_shape, 0);\n+ const int rhs_ext1 = batch_matmul::extent(extended_rhs_shape, 1);\n+ const int rhs_ext2 = batch_matmul::extent(extended_rhs_shape, 2);\n \n // Set params for each matrix multiply.\n const int lhs_rows = extended_lhs_shape.Dims(3);",
"filename": "tensorflow/lite/kernels/internal/reference/batch_matmul.h",
"status": "modified"
},
{
"diff": "@@ -35,6 +35,7 @@ limitations under the License.\n #include \"tensorflow/lite/kernels/internal/reference/add.h\"\n #include \"tensorflow/lite/kernels/internal/reference/add_n.h\"\n #include \"tensorflow/lite/kernels/internal/reference/arg_min_max.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/batch_matmul.h\"\n #include \"tensorflow/lite/kernels/internal/reference/batch_to_space_nd.h\"\n #include \"tensorflow/lite/kernels/internal/reference/binary_function.h\"\n #include \"tensorflow/lite/kernels/internal/reference/cast.h\"",
"filename": "tensorflow/lite/kernels/internal/reference/reference_ops.h",
"status": "modified"
},
{
"diff": "@@ -17,9 +17,11 @@ limitations under the License.\n \n #include <algorithm>\n #include <cmath>\n+#include <cstdint>\n \n #include \"third_party/eigen3/Eigen/Core\"\n #include \"tensorflow/lite/c/builtin_op_data.h\"\n+#include \"tensorflow/lite/kernels/internal/tensor_utils_common.h\"\n \n #if defined(_MSC_VER)\n #define __restrict__ __restrict\n@@ -34,106 +36,6 @@ class CpuBackendContext;\n \n namespace tensor_utils {\n \n-// Checks if all entries of vector are zero for float.\n-bool IsZeroVector(const float* vector, int v_size);\n-\n-// Checks if all entries of vector are zero for int8.\n-bool IsZeroVector(const int8_t* vector, int v_size);\n-\n-// Quantizes a buffer of floating point values using a symmetric quantization\n-// (i.e. linear quantization without an offset) to 8-bit signed integers.\n-// It also outputs the range (min, max) of the floating point buffer, and the\n-// scaling factor used to quantize the values.\n-void SymmetricQuantizeFloats(const float* values, const int size,\n- int8_t* quantized_values, float* min_value,\n- float* max_value, float* scaling_factor);\n-\n-// Quantizes a buffer of floating point values using a symmetric quantization\n-// (i.e. linear quantization without an offset) to 8-bit signed integers.\n-// It uses the range (min, max) provided to the function to calculate the\n-// appropriate scaling factor to quantize the values.\n-void SymmetricQuantizeFloats(const float* values, const int size,\n- int8_t* quantized_values, float min_value,\n- float max_value, float* scaling_factor);\n-\n-void AsymmetricQuantizeFloats(const float* values, const int size,\n- int8_t* quantized_values, float* scaling_factor,\n- int32_t* offset);\n-\n-// Helper function to quantize floats.\n-// float_data_ptr input float vectors\n-// n_batch number of input vectors\n-// n_data size of a single input vector\n-// quantized_data_ptr (out) vector with quantized data\n-// scaling_factors (out) scaling factors (one per vector)\n-// zero_points (out) zero points (one per vector)\n-// do_asymmetric controls if the quantization should be asymmetric.\n-inline void BatchQuantizeFloats(const float* float_data_ptr, int n_batch,\n- int n_data, int8_t* quantized_data_ptr,\n- float* scaling_factors, int32_t* zero_points,\n- bool do_asymmetric) {\n- for (int b = 0; b < n_batch; ++b) {\n- const int offset = b * n_data;\n- if (do_asymmetric) {\n- tensor_utils::AsymmetricQuantizeFloats(\n- float_data_ptr + offset, n_data, quantized_data_ptr + offset,\n- &scaling_factors[b], &zero_points[b]);\n- } else {\n- float unused_min, unused_max;\n- tensor_utils::SymmetricQuantizeFloats(\n- float_data_ptr + offset, n_data, quantized_data_ptr + offset,\n- &unused_min, &unused_max, &scaling_factors[b]);\n- }\n- }\n-}\n-\n-// Multiplies a matrix by a \"batched\" vector (i.e. a matrix with a batch\n-// dimension composed by input vectors independent from each other). 
The result\n-// of the multiplication is accumulated to the passed result buffer.\n-// More specifically, for a matrix M of shape [n, i] and a batched-vector\n-// of shape [i, batch] it will first compute the product of shape [n, batch].\n-// This product will be accumulated to the result buffer.\n-void MatrixBatchVectorMultiplyAccumulate(const float* matrix, int m_rows,\n- int m_cols, const float* vector,\n- int n_batch, float* result);\n-\n-// Same as the function above, but the matrix is a sparse tensor with block\n-// pattern 1x4.\n-// This function assumes that m_cols is a multiple of the block size (4 in this\n-// case) so that there's no incomplete block.\n-void SparseMatrixBatchVectorMultiplyAccumulate1x4(\n- const float* __restrict__ matrix, const int32_t* __restrict__ segments,\n- const int32_t* __restrict__ indices, int m_rows, int m_cols,\n- const float* __restrict__ vector, int n_batch, float* __restrict__ result);\n-\n-// Same as the function above, but the matrix is stored in block compressed\n-// sparse row format with block pattern 1x16 which consists of two arrays:\n-// 1. A matrix array stores non-zero blocks of the matrix in row major.\n-// 2. A ledger array stores nrows groups, one group per row. Each group starts\n-// with an integer representing the number of non-zero blocks for the\n-// corresponding row and follows with column indexes of the first element\n-// of each non-zero block.\n-// This function assumes that\n-// 1. m_cols is a multiple of 16 so that all blocks are full blocks.\n-// 2. m_cols < 254 * 16 so that block index can be represented by uint8.\n-void SparseMatrixBatchVectorMultiplyAccumulate(\n- const float* __restrict__ matrix, const uint8_t* __restrict__ ledger,\n- int m_rows, int m_cols, const float* __restrict__ vector, int n_batch,\n- float* __restrict__ result);\n-\n-// Same as the function above, but for values quantized using symmetric\n-// quantization (e.g. by calling SymmetricQuantizeFloats).\n-// The passed scaling factors is a buffer of the quantization scaling factors\n-// that will be used to dequentize the products into the final result buffer.\n-// These scaling factors are the multiplication of the matrix scaling factor\n-// by the vector's scaling factor, one per batch (i.e. 
this allows quantizing\n-// each batch in the batch-vector matrix independently).\n-void MatrixBatchVectorMultiplyAccumulate(\n- const int8_t* __restrict__ matrix, const int m_rows, const int m_cols,\n- const int8_t* __restrict__ vectors,\n- const float* __restrict__ scaling_factors, int n_batch,\n- float* __restrict__ result);\n-\n // Same as the function above, but provide a scratch buffer for the\n // int8 x int8 -> int32 and a CpuBackendContext for the accumulator\n // computation.\n@@ -144,16 +46,6 @@ void MatrixBatchVectorMultiplyAccumulate(\n int32_t* __restrict__ scratch, float* __restrict__ result,\n CpuBackendContext* __restrict__ context);\n \n-// Same as the function above except that vector values\n-// are quantized with asymmetric quantization per-batch and the matrix\n-// is quantized per row.\n-void MatrixBatchVectorMultiplyAccumulate(\n- const int8_t* __restrict__ matrix, const int m_rows, const int m_cols,\n- const int8_t* __restrict__ vectors,\n- const float* __restrict__ scaling_factors, int n_batch,\n- float* __restrict__ result, const float* __restrict__ per_channel_scale,\n- const int32_t* __restrict__ input_offset);\n-\n // Same as the function above except that can make use of cached row sums.\n void MatrixBatchVectorMultiplyAccumulate(\n const int8_t* __restrict__ matrix, const int m_rows, const int m_cols,\n@@ -183,22 +75,6 @@ inline void MatrixBatchVectorMultiplyAccumulate(\n row_sums, compute_row_sums, context);\n }\n \n-// Same as the function above, but the matrix is stored in block compressed\n-// sparse row format with block pattern 1x16 which consists of two arrays:\n-// 1. A matrix array stores non-zero blocks of the matrix in row major.\n-// 2. A ledger array stores nrows groups, one group per row. Each group starts\n-// with an integer representing the number of non-zero blocks for the\n-// corresponding row followed by column index of the first element of\n-// each non-zero block.\n-// This function assumes that\n-// 1. m_cols is a multiple of 16 so that all blocks are full blocks.\n-// 2. m_cols < 254 * 16 so that block index can be represented by uint8.\n-void SparseMatrixBatchVectorMultiplyAccumulate(\n- const int8_t* __restrict__ matrix, const uint8_t* __restrict__ ledger,\n- const int m_rows, const int m_cols, const int8_t* __restrict__ vectors,\n- const float* __restrict__ scaling_factors, int n_batch,\n- float* __restrict__ result);\n-\n // Multiplies a matrix by a \"batched\" vector (i.e. a matrix with a batch\n // dimension composed by input vectors independent from each other). 
The result\n // of the multiplication is accumulated to the passed result buffer.\n@@ -223,8 +99,8 @@ void SparseMatrixBatchVectorMultiplyAccumulate(\n // - multiplier and shift combined gives the scale.\n // - assumes input zero point is 0.\n // - scratch is created for optimization purpose only.\n-// TODO(b/152066492): this can be removed if some future optimization\n-// work makes it unnecessary.\n+// TODO(b/152066492): this can be removed if some future optimization\n+// work makes it unnecessary.\n void MatrixBatchVectorMultiplyAccumulate(\n const int8_t* input, const int32_t* bias,\n const int8_t* input_to_gate_weights, int32_t multiplier, int32_t shift,\n@@ -254,280 +130,14 @@ void MatrixBatchVectorMultiplyAccumulate(\n // - multiplier and shift combined gives the scale.\n // - assumes input zero point is 0.\n // - scratch is created for optimization purpose only.\n-// TODO(b/152066492): this can be removed if some future optimization\n-// work makes it unnecessary.\n+// TODO(b/152066492): this can be removed if some future optimization\n+// work makes it unnecessary.\n void MatrixBatchVectorMultiplyAccumulate(\n const int8_t* input, const int32_t* bias,\n const int8_t* input_to_gate_weights, int32_t multiplier, int32_t shift,\n int32_t n_batch, int32_t n_input, int32_t n_output, int32_t output_zp,\n int32_t* scratch, int8_t* output, CpuBackendContext* context);\n \n-// Same as the above 8, 8, 8 integer matmul except for the presence of zero\n-// point and non-accumulative.\n-// TODO(b/148688698): remove this function by folding zero point calculation in\n-// prepare() function.\n-void MatrixBatchVectorMultiply(const int8_t* input, int32_t input_zeropoint,\n- const int8_t* input_to_gate_weights,\n- int32_t input_to_gate_effective_scale_a,\n- int32_t input_to_gate_effective_scale_b,\n- int32_t n_batch, int32_t n_input, int32_t n_cell,\n- int8_t* gate_output, int8_t gate_output_zp);\n-\n-// Same as above but has 16 bit and 8 bit input and 8 bit output.\n-// Used in projection when hidden is 16bit.\n-void MatrixBatchVectorMultiply(const int16_t* hidden,\n- const int8_t* hidden_to_output_weights,\n- int32_t proj_effective_scale_a,\n- int32_t proj_effective_scale_b,\n- const int32_t* gate_bias, int32_t n_batch,\n- int32_t n_hidden, int32_t n_output,\n- int32_t output_zp, int8_t* proj_output);\n-\n-// Multiplies a matrix with a scalar and reduce the result on each row to a\n-// scalar.\n-// Parameters:\n-// - matrix: matrix of size n_row * n_col\n-// - scalar: the scalar that is multiplied to each element in the matrix\n-// - n_row: the row count of the matrix\n-// - n_col: the column count of the matrix\n-// - output: the 32bit output\n-// Note: We do not need saturation because the int8 * int8 is safe from overflow\n-// in (2^31-1) / (2^14) = 131072, which is bigger than the n_row. 
Non-zero\n-// initial output value is not exceptionally large.\n-void MatrixScalarMultiplyAccumulate(const int8_t* matrix, int32_t scalar,\n- int32_t n_row, int32_t n_col,\n- int32_t* output);\n-\n-// Apply Layer Normalization (https://arxiv.org/abs/1607.06450) to a Quantized\n-// vector.\n-// Parameters:\n-// - input: batch vector of size n_batch * n_input; 16 bit.\n-// - layer_norm_weights: the quantized layer normalization weights.\n-// - bias: the bias for the layer normalization.\n-// - layer_norm_scale_a: multiplier for scale factor.\n-// - layer_norm_scale_b: shift for scale factor.\n-// - variance_limit: the guard to make sure the inverse does not overflow.\n-// - n_batch: the number of batches.\n-// - n_input: the size for input and output.\n-// - output: the 16 bit output\n-void ApplyLayerNorm(const int16_t* input, const int16_t* layer_norm_weights,\n- const int32_t* bias, int32_t layer_norm_scale_a,\n- int32_t layer_norm_scale_b, int32_t variance_limit,\n- int n_batch, int n_input, int16_t* output);\n-\n-// Same as above but the internal calculation is done in float.\n-void ApplyLayerNormFloat(const int16_t* input,\n- const int16_t* layer_norm_weights,\n- int32_t layer_norm_scale_a, int32_t layer_norm_scale_b,\n- const int32_t* bias, int n_batch, int n_input,\n- int16_t* output);\n-\n-// Apply Sigmoid to a quantized vector.\n-// Parameters:\n-// - input: batch vector of size n_batch * n_input; 16 bit.\n-// - n_batch: the number of batches.\n-// - n_input: the size for input and output.\n-// - output: the 16 bit output\n-// The input is in Q3.12 format and the output is in Q0.15 format.\n-void ApplySigmoid(const int16_t* input, int32_t n_batch, int32_t n_input,\n- int16_t* output);\n-\n-// Same as above but the internal calcualtion is float.\n-void ApplySigmoidFloat(const int16_t* input, int32_t n_batch, int32_t n_input,\n- int16_t* output);\n-\n-// Apply Tanh to a quantized vector.\n-// Parameters:\n-// - integer_bits: the integer bits of the input.\n-// Currently supports 0, 1, 2, 3, 4, 5, 6.\n-// - input: batch vector of size n_batch * n_input; 16 bit.\n-// - n_batch: the number of batches.\n-// - n_input: the size for input and output.\n-// - output: the 16 bit output\n-// The input is in Qm.15-m format and the output is in Q0.15 format.\n-void ApplyTanh(int32_t integer_bits, const int16_t* input, int32_t n_batch,\n- int32_t n_input, int16_t* output);\n-\n-// Apply Tanh to a quantized vector. 
Tbe internal calculation is in float.\n-// - Input has 2^(integer_bits) as scale.\n-// - Output has Q0.15 as scale.\n-void ApplyTanhFloat(const int16_t* input, int32_t n_batch, int32_t n_input,\n- int32_t integer_bits, int16_t* output);\n-\n-// Element-wise multiplication of two quantized vectors.\n-// Parameters:\n-// - input_1: batch vector of size n_batch * n_input; 16 bit.\n-// - input_2: batch vector of size n_batch * n_input; 16 bit.\n-// - n_batch: the number of batches.\n-// - n_input: the size for input and output.\n-// - shift: the shift needed to produce the output.\n-// - output: the 16 bit output of size n_batch * n_input.\n-// Output does not need to be initialized.\n-void CwiseMul(const int16_t* input_1, const int16_t* input_2, int n_batch,\n- int n_input, int shift, int16_t* output);\n-\n-// Element-wise multiplication of two quantized vectors.\n-// Parameters:\n-// - input_1: batch vector of size n_batch * n_input; 16 bit.\n-// - input_2: batch vector of size n_batch * n_input; 16 bit.\n-// - n_batch: the number of batches.\n-// - n_input: the size for input and output.\n-// - shift: the shift needed to produce the output.\n-// - output: the 8 bit output of size n_batch * n_input.\n-// Output does not need to be initialized.\n-void CwiseMul(const int16_t* input_1, const int16_t* input_2, int n_batch,\n- int n_input, int shift, int8_t* output);\n-\n-// Element-wise multiplication of two quantized vectors with rescaling.\n-// Parameters:\n-// - input_1: batch vector of size n_batch * n_input; 16 bit.\n-// - input_2: batch vector of size n_batch * n_input; 16 bit.\n-// - multiplier: the multiplier part of scale.\n-// - shift: the shift part of scale.\n-// - n_batch: the number of batches.\n-// - n_input: the size for input and output.\n-// - output: the 8 bit output of size n_batch * n_input.\n-// - output_zp: the zero point of output.\n-// Output does not need to be initialized.\n-// Multiplier (\"m\") and shift (\"s\") are connected to scale (\"s\") with s = m *\n-// 2^(s - 31).\n-void CwiseMul(const int16_t* input_1, const int16_t* input_2,\n- int32_t multiplier, int32_t shift, int32_t n_batch,\n- int32_t n_input, int32_t output_zp, int8_t* output);\n-\n-// Element-wise saturating addition of two quantized vectors without rescaling.\n-// Parameters:\n-// - input_1: batch vector of size n_batch * n_input; 16 bit.\n-// - input_2: batch vector of size n_batch * n_input; 16 bit.\n-// - n_batch: the number of batches.\n-// - n_input: the size for input and output.\n-// - output: the 8 bit output of size n_batch * n_input.\n-// Output does not need to be initialized.\n-void CwiseAdd(const int16_t* input_1, const int16_t* input_2, int n_batch,\n- int n_input, int16_t* output);\n-\n-// Element-wise in-place clipping of a vector. Overloaded for float, int16_t,\n-// int8_t. 
Parameters:\n-// - vector: vector of size v_size.\n-// - v_size: the size of the vector.\n-// - clipping_value: the value used for clipping.\n-void CwiseClipping(float* vector, const int v_size, const float clipping_value);\n-void CwiseClipping(int16_t* vector, const int v_size,\n- const int16_t clipping_value);\n-void CwiseClipping(int8_t* vector, const int v_size,\n- const int8_t clipping_value);\n-\n-// Cwise product of two vectors.\n-template <typename T>\n-inline void VectorVectorCwiseProduct(const T* __restrict__ vector1,\n- const T* __restrict__ vector2, int v_size,\n- T* __restrict__ result) {\n- for (int v = 0; v < v_size; v++) {\n- *result++ = *vector1++ * *vector2++;\n- }\n-}\n-\n-// Cwise product and accumulate of two vectors. Since it's a MAC operation, the\n-// assumption here is that result array is initialized to valid values.\n-template <typename T>\n-inline void VectorVectorCwiseProductAccumulate(const T* __restrict__ vector1,\n- const T* __restrict__ vector2,\n- int v_size,\n- T* __restrict__ result) {\n- for (int v = 0; v < v_size; v++) {\n- *result++ += *vector1++ * *vector2++;\n- }\n-}\n-\n-// Dot product of two vectors.\n-float VectorVectorDotProduct(const float* vector1, const float* vector2,\n- int v_size);\n-\n-// Dot product of two batch vectors of size n_batch * v_size:\n-// vector1 = [x_1_1, x_1_2, ..., x_1_vsize,\n-// x_2_1, x_2_2, ..., x_2_vsize,\n-// ...\n-// x_nbatch_1,..., x_nbatch_vsize]\n-// vector2 = [y_1_1, y_1_2, ..., y_1_vsize,\n-// y_2_1, y_2_2, ..., y_2_vsize,\n-// ...\n-// y_nbatch_1,..., y_nbatch_vsize]\n-// Then result will be a vector of n_batch size starting from 'result':\n-// [x_1_1 * y_1_1 + x_1_2 * y_1_2 + ... + x_1_vsize * y_1_vsize,\n-// x_2_1 * y_2_1 + x_2_2 * y_2_2 + ... + x_2_vsize * y_2_vsize,\n-// ...\n-// x_nbatch_1 * y_nbatch_1 + ... + x_nbatch_vsize * y_nbatch_vsize]\n-template <typename T>\n-inline void BatchVectorBatchVectorDotProduct(const T* vector1, const T* vector2,\n- int v_size, int n_batch,\n- T* result) {\n- for (int b = 0; b < n_batch; b++) {\n- result[b] = VectorVectorDotProduct(vector1, vector2, v_size);\n- vector1 += v_size;\n- vector2 += v_size;\n- }\n-}\n-\n-// Same as above but input is 16bit and output is 32bit.\n-void BatchVectorBatchVectorDotProduct(const int16_t* vector1,\n- const int16_t* vector2, int v_size,\n- int n_batch, int32_t* result);\n-\n-// Cwise product of a vector and a batch-vector.\n-template <typename T>\n-inline void VectorBatchVectorCwiseProduct(const T* vector, int v_size,\n- const T* batch_vector, int n_batch,\n- T* result) {\n- for (int b = 0; b < n_batch; b++) {\n- VectorVectorCwiseProduct(vector, batch_vector, v_size, result);\n- // Update the pointers.\n- result += v_size;\n- batch_vector += v_size;\n- }\n-}\n-\n-// Cwise product and accumulate of a vector and a batch-vector. 
Since it's a MAC\n-// operation, the assumption here is that result array is initialized to valid\n-// values.\n-template <typename T>\n-inline void VectorBatchVectorCwiseProductAccumulate(const T* vector, int v_size,\n- const T* batch_vector,\n- int n_batch, T* result) {\n- for (int b = 0; b < n_batch; b++) {\n- VectorVectorCwiseProductAccumulate(vector, batch_vector, v_size, result);\n- // Update the pointers.\n- result += v_size;\n- batch_vector += v_size;\n- }\n-}\n-\n-// Same as above, but inputs are 16bit integer and output is 16bit integer.\n-void VectorBatchVectorCwiseProductAccumulate(const int16_t* vector, int v_size,\n- const int16_t* batch_vector,\n- int n_batch, int32_t multiplier,\n- int shift, int16_t* result);\n-\n-// Add another vector for each batch in the batch vector.\n-template <typename T>\n-void VectorBatchVectorAdd(const T* vector, int v_size, int n_batch,\n- T* batch_vector) {\n- for (int b = 0; b < n_batch; b++) {\n- for (int i = 0; i < v_size; ++i) {\n- batch_vector[i] += vector[i];\n- }\n- batch_vector += v_size;\n- }\n-}\n-\n-// Batch vector initialization with another vector.\n-template <typename T>\n-void VectorBatchVectorAssign(const T* vector, int v_size, int n_batch,\n- T* batch_vector) {\n- for (int b = 0; b < n_batch; b++) {\n- std::copy_n(vector, v_size, batch_vector + b * v_size);\n- }\n-}\n-\n // Apply Rectified Linear to elements of a vector.\n inline void ApplyReluToVector(const float* __restrict__ vector, int v_size,\n float* __restrict__ result) {\n@@ -601,48 +211,6 @@ inline void ApplyActivationToVector(const float* __restrict__ vector,\n }\n }\n \n-// Compute \"1.0f - elements of vector\" (used in CIFG).\n-void Sub1Vector(const float* vector, int v_size, float* result);\n-\n-// Compute \"1.0f - elements of vector\" (used in CIFG) for int16 input.\n-// \"vector\" has range [0, 32767] because it is the output of sigmoid function.\n-void Sub1Vector(const int16_t* vector, int v_size, int16_t* result);\n-\n-// Multiply all elements of vector with a scalar.\n-void VectorScalarMultiply(const int8_t* vector, int v_size, float scale,\n- float* result);\n-\n-// Reduce-sum on a float input vector:\n-// input_vector: float pointer to input vector.\n-// output_vector: float pointer to vector.\n-// output_size: output vector size.\n-// reduction_size: number of consecutive elements from input vector which are\n-// added to get one element of output.\n-void ReductionSumVector(const float* input_vector, float* output_vector,\n- int output_size, int reduction_size);\n-\n-// Same as above but input/output is 32 bit integer.\n-void ReductionSumVector(const int32_t* input_vector, int32_t* output_vector,\n- int output_size, int reduction_size);\n-\n-// Same as above but input is 8 bit integer.\n-void ReductionSumVector(const int8_t* input_vector, int32_t* output_vector,\n- int output_size, int reduction_size);\n-\n-// Layer norm for each batch.\n-void MeanStddevNormalization(const float* __restrict__ input_vector,\n- float* __restrict__ output_vector, int v_size,\n- int n_batch);\n-\n-// Saturate Add with rescale on both inputs.\n-void TwoGateSaturatingAdd(const int8_t* input, int8_t input_zp,\n- const int8_t* recurrent, int8_t recurrent_zp,\n- int32_t input_effective_scale_a,\n- int32_t input_effective_scale_b,\n- int32_t recurrent_effective_scale_a,\n- int32_t recurrent_effective_scale_b, int32_t n_batch,\n- int32_t n_cell, int16_t* output);\n-\n } // namespace tensor_utils\n } // namespace tflite\n ",
"filename": "tensorflow/lite/kernels/internal/tensor_utils.h",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,466 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#ifndef TENSORFLOW_LITE_KERNELS_INTERNAL_TENSOR_UTILS_COMMON_H_\n+#define TENSORFLOW_LITE_KERNELS_INTERNAL_TENSOR_UTILS_COMMON_H_\n+\n+#include <algorithm>\n+#include <cstdint>\n+\n+#if defined(_MSC_VER)\n+#define __restrict__ __restrict\n+#endif\n+\n+namespace tflite {\n+\n+namespace tensor_utils {\n+\n+// Checks if all entries of vector are zero for float.\n+bool IsZeroVector(const float* vector, int v_size);\n+\n+// Checks if all entries of vector are zero for int8.\n+bool IsZeroVector(const int8_t* vector, int v_size);\n+\n+// Quantizes a buffer of floating point values using a symmetric quantization\n+// (i.e. linear quantization without an offset) to 8-bit signed integers.\n+// It also outputs the range (min, max) of the floating point buffer, and the\n+// scaling factor used to quantize the values.\n+void SymmetricQuantizeFloats(const float* values, const int size,\n+ int8_t* quantized_values, float* min_value,\n+ float* max_value, float* scaling_factor);\n+\n+// Quantizes a buffer of floating point values using a symmetric quantization\n+// (i.e. linear quantization without an offset) to 8-bit signed integers.\n+// It uses the range (min, max) provided to the function to calculate the\n+// appropriate scaling factor to quantize the values.\n+void SymmetricQuantizeFloats(const float* values, const int size,\n+ int8_t* quantized_values, float min_value,\n+ float max_value, float* scaling_factor);\n+\n+void AsymmetricQuantizeFloats(const float* values, const int size,\n+ int8_t* quantized_values, float* scaling_factor,\n+ int32_t* offset);\n+\n+// Helper function to quantize floats.\n+// float_data_ptr input float vectors\n+// n_batch number of input vectors\n+// n_data size of a single input vector\n+// quantized_data_ptr (out) vector with quantized data\n+// scaling_factors (out) scaling factors (one per vector)\n+// zero_points (out) zero points (one per vector)\n+// do_asymmetric controls if the quantization should be asymmetric.\n+inline void BatchQuantizeFloats(const float* float_data_ptr, int n_batch,\n+ int n_data, int8_t* quantized_data_ptr,\n+ float* scaling_factors, int32_t* zero_points,\n+ bool do_asymmetric) {\n+ for (int b = 0; b < n_batch; ++b) {\n+ const int offset = b * n_data;\n+ if (do_asymmetric) {\n+ tensor_utils::AsymmetricQuantizeFloats(\n+ float_data_ptr + offset, n_data, quantized_data_ptr + offset,\n+ &scaling_factors[b], &zero_points[b]);\n+ } else {\n+ float unused_min, unused_max;\n+ tensor_utils::SymmetricQuantizeFloats(\n+ float_data_ptr + offset, n_data, quantized_data_ptr + offset,\n+ &unused_min, &unused_max, &scaling_factors[b]);\n+ }\n+ }\n+}\n+\n+// Multiplies a matrix by a \"batched\" vector (i.e. a matrix with a batch\n+// dimension composed by input vectors independent from each other). 
The result\n+// of the multiplication is accumulated to the passed result buffer.\n+// More specifically, for a matrix M of shape [n, i] and a batched-vector\n+// of shape [i, batch] it will first compute the product of shape [n, batch].\n+// This product will be accumulated to the result buffer.\n+void MatrixBatchVectorMultiplyAccumulate(const float* matrix, int m_rows,\n+ int m_cols, const float* vector,\n+ int n_batch, float* result);\n+\n+// Same as the function above, but the matrix is a sparse tensor with block\n+// pattern 1x4.\n+// This function assumes that m_cols is a multiple of the block size (4 in this\n+// case) so that there's no incomplete block.\n+void SparseMatrixBatchVectorMultiplyAccumulate1x4(\n+ const float* __restrict__ matrix, const int32_t* __restrict__ segments,\n+ const int32_t* __restrict__ indices, int m_rows, int m_cols,\n+ const float* __restrict__ vector, int n_batch, float* __restrict__ result);\n+\n+// Same as the function above, but the matrix is stored in block compressed\n+// sparse row format with block pattern 1x16 which consists of two arrays:\n+// 1. A matrix array stores non-zero blocks of the matrix in row major.\n+// 2. A ledger array stores nrows groups, one group per row. Each group starts\n+// with an integer representing the number of non-zero blocks for the\n+// corresponding row and follows with column indexes of the first element\n+// of each non-zero block.\n+// This function assumes that\n+// 1. m_cols is a multiple of 16 so that all blocks are full blocks.\n+// 2. m_cols < 254 * 16 so that block index can be represented by uint8.\n+void SparseMatrixBatchVectorMultiplyAccumulate(\n+ const float* __restrict__ matrix, const uint8_t* __restrict__ ledger,\n+ int m_rows, int m_cols, const float* __restrict__ vector, int n_batch,\n+ float* __restrict__ result);\n+\n+// Same as the function above, but for values quantized using symmetric\n+// quantization (e.g. by calling SymmetricQuantizeFloats).\n+// The passed scaling factors is a buffer of the quantization scaling factors\n+// that will be used to dequentize the products into the final result buffer.\n+// These scaling factors are the multiplication of the matrix scaling factor\n+// by the vector's scaling factor, one per batch (i.e. this allows quantizing\n+// each batch in the batch-vector matrix independently).\n+void MatrixBatchVectorMultiplyAccumulate(\n+ const int8_t* __restrict__ matrix, const int m_rows, const int m_cols,\n+ const int8_t* __restrict__ vectors,\n+ const float* __restrict__ scaling_factors, int n_batch,\n+ float* __restrict__ result);\n+\n+// Same as the function above except that vector values\n+// are quantized with asymmetric quantization per-batch and the matrix\n+// is quantized per row.\n+void MatrixBatchVectorMultiplyAccumulate(\n+ const int8_t* __restrict__ matrix, const int m_rows, const int m_cols,\n+ const int8_t* __restrict__ vectors,\n+ const float* __restrict__ scaling_factors, int n_batch,\n+ float* __restrict__ result, const float* __restrict__ per_channel_scale,\n+ const int32_t* __restrict__ input_offset);\n+\n+// Same as the function above, but the matrix is stored in block compressed\n+// sparse row format with block pattern 1x16 which consists of two arrays:\n+// 1. A matrix array stores non-zero blocks of the matrix in row major.\n+// 2. A ledger array stores nrows groups, one group per row. 
Each group starts\n+// with an integer representing the number of non-zero blocks for the\n+// corresponding row followed by column index of the first element of\n+// each non-zero block.\n+// This function assumes that\n+// 1. m_cols is a multiple of 16 so that all blocks are full blocks.\n+// 2. m_cols < 254 * 16 so that block index can be represented by uint8.\n+void SparseMatrixBatchVectorMultiplyAccumulate(\n+ const int8_t* __restrict__ matrix, const uint8_t* __restrict__ ledger,\n+ const int m_rows, const int m_cols, const int8_t* __restrict__ vectors,\n+ const float* __restrict__ scaling_factors, int n_batch,\n+ float* __restrict__ result);\n+\n+// Same as the above 8, 8, 8 integer matmul except for the presence of zero\n+// point and non-accumulative.\n+// TODO(b/148688698): remove this function by folding zero point calculation in\n+// prepare() function.\n+void MatrixBatchVectorMultiply(const int8_t* input, int32_t input_zeropoint,\n+ const int8_t* input_to_gate_weights,\n+ int32_t input_to_gate_effective_scale_a,\n+ int32_t input_to_gate_effective_scale_b,\n+ int32_t n_batch, int32_t n_input, int32_t n_cell,\n+ int8_t* gate_output, int8_t gate_output_zp);\n+\n+// Same as above but has 16 bit and 8 bit input and 8 bit output.\n+// Used in projection when hidden is 16bit.\n+void MatrixBatchVectorMultiply(const int16_t* hidden,\n+ const int8_t* hidden_to_output_weights,\n+ int32_t proj_effective_scale_a,\n+ int32_t proj_effective_scale_b,\n+ const int32_t* gate_bias, int32_t n_batch,\n+ int32_t n_hidden, int32_t n_output,\n+ int32_t output_zp, int8_t* proj_output);\n+\n+// Multiplies a matrix with a scalar and reduce the result on each row to a\n+// scalar.\n+// Parameters:\n+// - matrix: matrix of size n_row * n_col\n+// - scalar: the scalar that is multiplied to each element in the matrix\n+// - n_row: the row count of the matrix\n+// - n_col: the column count of the matrix\n+// - output: the 32bit output\n+// Note: We do not need saturation because the int8 * int8 is safe from overflow\n+// in (2^31-1) / (2^14) = 131072, which is bigger than the n_row. 
Non-zero\n+// initial output value is not exceptionally large.\n+void MatrixScalarMultiplyAccumulate(const int8_t* matrix, int32_t scalar,\n+ int32_t n_row, int32_t n_col,\n+ int32_t* output);\n+\n+// Apply Layer Normalization (https://arxiv.org/abs/1607.06450) to a Quantized\n+// vector.\n+// Parameters:\n+// - input: batch vector of size n_batch * n_input; 16 bit.\n+// - layer_norm_weights: the quantized layer normalization weights.\n+// - bias: the bias for the layer normalization.\n+// - layer_norm_scale_a: multiplier for scale factor.\n+// - layer_norm_scale_b: shift for scale factor.\n+// - variance_limit: the guard to make sure the inverse does not overflow.\n+// - n_batch: the number of batches.\n+// - n_input: the size for input and output.\n+// - output: the 16 bit output\n+void ApplyLayerNorm(const int16_t* input, const int16_t* layer_norm_weights,\n+ const int32_t* bias, int32_t layer_norm_scale_a,\n+ int32_t layer_norm_scale_b, int32_t variance_limit,\n+ int n_batch, int n_input, int16_t* output);\n+\n+// Same as above but the internal calculation is done in float.\n+void ApplyLayerNormFloat(const int16_t* input,\n+ const int16_t* layer_norm_weights,\n+ int32_t layer_norm_scale_a, int32_t layer_norm_scale_b,\n+ const int32_t* bias, int n_batch, int n_input,\n+ int16_t* output);\n+\n+// Apply Sigmoid to a quantized vector.\n+// Parameters:\n+// - input: batch vector of size n_batch * n_input; 16 bit.\n+// - n_batch: the number of batches.\n+// - n_input: the size for input and output.\n+// - output: the 16 bit output\n+// The input is in Q3.12 format and the output is in Q0.15 format.\n+void ApplySigmoid(const int16_t* input, int32_t n_batch, int32_t n_input,\n+ int16_t* output);\n+\n+// Same as above but the internal calcualtion is float.\n+void ApplySigmoidFloat(const int16_t* input, int32_t n_batch, int32_t n_input,\n+ int16_t* output);\n+\n+// Apply Tanh to a quantized vector.\n+// Parameters:\n+// - integer_bits: the integer bits of the input.\n+// Currently supports 0, 1, 2, 3, 4, 5, 6.\n+// - input: batch vector of size n_batch * n_input; 16 bit.\n+// - n_batch: the number of batches.\n+// - n_input: the size for input and output.\n+// - output: the 16 bit output\n+// The input is in Qm.15-m format and the output is in Q0.15 format.\n+void ApplyTanh(int32_t integer_bits, const int16_t* input, int32_t n_batch,\n+ int32_t n_input, int16_t* output);\n+\n+// Apply Tanh to a quantized vector. 
Tbe internal calculation is in float.\n+// - Input has 2^(integer_bits) as scale.\n+// - Output has Q0.15 as scale.\n+void ApplyTanhFloat(const int16_t* input, int32_t n_batch, int32_t n_input,\n+ int32_t integer_bits, int16_t* output);\n+\n+// Element-wise multiplication of two quantized vectors.\n+// Parameters:\n+// - input_1: batch vector of size n_batch * n_input; 16 bit.\n+// - input_2: batch vector of size n_batch * n_input; 16 bit.\n+// - n_batch: the number of batches.\n+// - n_input: the size for input and output.\n+// - shift: the shift needed to produce the output.\n+// - output: the 16 bit output of size n_batch * n_input.\n+// Output does not need to be initialized.\n+void CwiseMul(const int16_t* input_1, const int16_t* input_2, int n_batch,\n+ int n_input, int shift, int16_t* output);\n+\n+// Element-wise multiplication of two quantized vectors.\n+// Parameters:\n+// - input_1: batch vector of size n_batch * n_input; 16 bit.\n+// - input_2: batch vector of size n_batch * n_input; 16 bit.\n+// - n_batch: the number of batches.\n+// - n_input: the size for input and output.\n+// - shift: the shift needed to produce the output.\n+// - output: the 8 bit output of size n_batch * n_input.\n+// Output does not need to be initialized.\n+void CwiseMul(const int16_t* input_1, const int16_t* input_2, int n_batch,\n+ int n_input, int shift, int8_t* output);\n+\n+// Element-wise multiplication of two quantized vectors with rescaling.\n+// Parameters:\n+// - input_1: batch vector of size n_batch * n_input; 16 bit.\n+// - input_2: batch vector of size n_batch * n_input; 16 bit.\n+// - multiplier: the multiplier part of scale.\n+// - shift: the shift part of scale.\n+// - n_batch: the number of batches.\n+// - n_input: the size for input and output.\n+// - output: the 8 bit output of size n_batch * n_input.\n+// - output_zp: the zero point of output.\n+// Output does not need to be initialized.\n+// Multiplier (\"m\") and shift (\"s\") are connected to scale (\"s\") with s = m *\n+// 2^(s - 31).\n+void CwiseMul(const int16_t* input_1, const int16_t* input_2,\n+ int32_t multiplier, int32_t shift, int32_t n_batch,\n+ int32_t n_input, int32_t output_zp, int8_t* output);\n+\n+// Element-wise saturating addition of two quantized vectors without rescaling.\n+// Parameters:\n+// - input_1: batch vector of size n_batch * n_input; 16 bit.\n+// - input_2: batch vector of size n_batch * n_input; 16 bit.\n+// - n_batch: the number of batches.\n+// - n_input: the size for input and output.\n+// - output: the 8 bit output of size n_batch * n_input.\n+// Output does not need to be initialized.\n+void CwiseAdd(const int16_t* input_1, const int16_t* input_2, int n_batch,\n+ int n_input, int16_t* output);\n+\n+// Element-wise in-place clipping of a vector. Overloaded for float, int16_t,\n+// int8_t. 
Parameters:\n+// - vector: vector of size v_size.\n+// - v_size: the size of the vector.\n+// - clipping_value: the value used for clipping.\n+void CwiseClipping(float* vector, const int v_size, const float clipping_value);\n+void CwiseClipping(int16_t* vector, const int v_size,\n+ const int16_t clipping_value);\n+void CwiseClipping(int8_t* vector, const int v_size,\n+ const int8_t clipping_value);\n+\n+// Cwise product of two vectors.\n+template <typename T>\n+inline void VectorVectorCwiseProduct(const T* __restrict__ vector1,\n+ const T* __restrict__ vector2, int v_size,\n+ T* __restrict__ result) {\n+ for (int v = 0; v < v_size; v++) {\n+ *result++ = *vector1++ * *vector2++;\n+ }\n+}\n+\n+// Cwise product and accumulate of two vectors. Since it's a MAC operation, the\n+// assumption here is that result array is initialized to valid values.\n+template <typename T>\n+inline void VectorVectorCwiseProductAccumulate(const T* __restrict__ vector1,\n+ const T* __restrict__ vector2,\n+ int v_size,\n+ T* __restrict__ result) {\n+ for (int v = 0; v < v_size; v++) {\n+ *result++ += *vector1++ * *vector2++;\n+ }\n+}\n+\n+// Dot product of two vectors.\n+float VectorVectorDotProduct(const float* vector1, const float* vector2,\n+ int v_size);\n+\n+// Dot product of two batch vectors of size n_batch * v_size:\n+// vector1 = [x_1_1, x_1_2, ..., x_1_vsize,\n+// x_2_1, x_2_2, ..., x_2_vsize,\n+// ...\n+// x_nbatch_1,..., x_nbatch_vsize]\n+// vector2 = [y_1_1, y_1_2, ..., y_1_vsize,\n+// y_2_1, y_2_2, ..., y_2_vsize,\n+// ...\n+// y_nbatch_1,..., y_nbatch_vsize]\n+// Then result will be a vector of n_batch size starting from 'result':\n+// [x_1_1 * y_1_1 + x_1_2 * y_1_2 + ... + x_1_vsize * y_1_vsize,\n+// x_2_1 * y_2_1 + x_2_2 * y_2_2 + ... + x_2_vsize * y_2_vsize,\n+// ...\n+// x_nbatch_1 * y_nbatch_1 + ... + x_nbatch_vsize * y_nbatch_vsize]\n+template <typename T>\n+inline void BatchVectorBatchVectorDotProduct(const T* vector1, const T* vector2,\n+ int v_size, int n_batch,\n+ T* result) {\n+ for (int b = 0; b < n_batch; b++) {\n+ result[b] = VectorVectorDotProduct(vector1, vector2, v_size);\n+ vector1 += v_size;\n+ vector2 += v_size;\n+ }\n+}\n+\n+// Same as above but input is 16bit and output is 32bit.\n+void BatchVectorBatchVectorDotProduct(const int16_t* vector1,\n+ const int16_t* vector2, int v_size,\n+ int n_batch, int32_t* result);\n+\n+// Cwise product of a vector and a batch-vector.\n+template <typename T>\n+inline void VectorBatchVectorCwiseProduct(const T* vector, int v_size,\n+ const T* batch_vector, int n_batch,\n+ T* result) {\n+ for (int b = 0; b < n_batch; b++) {\n+ VectorVectorCwiseProduct(vector, batch_vector, v_size, result);\n+ // Update the pointers.\n+ result += v_size;\n+ batch_vector += v_size;\n+ }\n+}\n+\n+// Cwise product and accumulate of a vector and a batch-vector. 
Since it's a MAC\n+// operation, the assumption here is that result array is initialized to valid\n+// values.\n+template <typename T>\n+inline void VectorBatchVectorCwiseProductAccumulate(const T* vector, int v_size,\n+ const T* batch_vector,\n+ int n_batch, T* result) {\n+ for (int b = 0; b < n_batch; b++) {\n+ VectorVectorCwiseProductAccumulate(vector, batch_vector, v_size, result);\n+ // Update the pointers.\n+ result += v_size;\n+ batch_vector += v_size;\n+ }\n+}\n+\n+// Same as above, but inputs are 16bit integer and output is 16bit integer.\n+void VectorBatchVectorCwiseProductAccumulate(const int16_t* vector, int v_size,\n+ const int16_t* batch_vector,\n+ int n_batch, int32_t multiplier,\n+ int shift, int16_t* result);\n+\n+// Add another vector for each batch in the batch vector.\n+template <typename T>\n+void VectorBatchVectorAdd(const T* vector, int v_size, int n_batch,\n+ T* batch_vector) {\n+ for (int b = 0; b < n_batch; b++) {\n+ for (int i = 0; i < v_size; ++i) {\n+ batch_vector[i] += vector[i];\n+ }\n+ batch_vector += v_size;\n+ }\n+}\n+\n+// Batch vector initialization with another vector.\n+template <typename T>\n+void VectorBatchVectorAssign(const T* vector, int v_size, int n_batch,\n+ T* batch_vector) {\n+ for (int b = 0; b < n_batch; b++) {\n+ std::copy_n(vector, v_size, batch_vector + b * v_size);\n+ }\n+}\n+\n+// Compute \"1.0f - elements of vector\" (used in CIFG).\n+void Sub1Vector(const float* vector, int v_size, float* result);\n+\n+// Compute \"1.0f - elements of vector\" (used in CIFG) for int16 input.\n+// \"vector\" has range [0, 32767] because it is the output of sigmoid function.\n+void Sub1Vector(const int16_t* vector, int v_size, int16_t* result);\n+\n+// Multiply all elements of vector with a scalar.\n+void VectorScalarMultiply(const int8_t* vector, int v_size, float scale,\n+ float* result);\n+\n+// Reduce-sum on a float input vector:\n+// input_vector: float pointer to input vector.\n+// output_vector: float pointer to vector.\n+// output_size: output vector size.\n+// reduction_size: number of consecutive elements from input vector which are\n+// added to get one element of output.\n+void ReductionSumVector(const float* input_vector, float* output_vector,\n+ int output_size, int reduction_size);\n+\n+// Same as above but input/output is 32 bit integer.\n+void ReductionSumVector(const int32_t* input_vector, int32_t* output_vector,\n+ int output_size, int reduction_size);\n+\n+// Same as above but input is 8 bit integer.\n+void ReductionSumVector(const int8_t* input_vector, int32_t* output_vector,\n+ int output_size, int reduction_size);\n+\n+// Layer norm for each batch.\n+void MeanStddevNormalization(const float* __restrict__ input_vector,\n+ float* __restrict__ output_vector, int v_size,\n+ int n_batch);\n+\n+// Saturate Add with rescale on both inputs.\n+void TwoGateSaturatingAdd(const int8_t* input, int8_t input_zp,\n+ const int8_t* recurrent, int8_t recurrent_zp,\n+ int32_t input_effective_scale_a,\n+ int32_t input_effective_scale_b,\n+ int32_t recurrent_effective_scale_a,\n+ int32_t recurrent_effective_scale_b, int32_t n_batch,\n+ int32_t n_cell, int16_t* output);\n+\n+} // namespace tensor_utils\n+} // namespace tflite\n+\n+#endif // TENSORFLOW_LITE_KERNELS_INTERNAL_TENSOR_UTILS_COMMON_H_",
"filename": "tensorflow/lite/kernels/internal/tensor_utils_common.h",
"status": "added"
}
]
}
|
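The pr_details above end with the new `tensorflow/lite/kernels/internal/tensor_utils_common.h`, which receives the portable declarations moved out of `tensor_utils.h`. As a hedged illustration only (this file is not part of the PR), the sketch below shows the kind of client the split enables: a translation unit that includes just the common header and calls one of the relocated functions, assuming the binary is still linked against the usual TFLite tensor-utils implementation.

~~~cpp
// Illustrative sketch, not PR code: includes only tensor_utils_common.h,
// avoiding the CpuBackendContext-dependent declarations left in tensor_utils.h.
#include <cstdint>
#include <vector>

#include "tensorflow/lite/kernels/internal/tensor_utils_common.h"

int main() {
  const std::vector<float> values = {0.1f, -0.5f, 0.25f, 0.9f};
  std::vector<int8_t> quantized(values.size());
  float min_value = 0.0f, max_value = 0.0f, scaling_factor = 0.0f;

  // Symmetric (zero-offset) int8 quantization, one of the declarations that
  // moved into tensor_utils_common.h in the diff above.
  tflite::tensor_utils::SymmetricQuantizeFloats(
      values.data(), static_cast<int>(values.size()), quantized.data(),
      &min_value, &max_value, &scaling_factor);

  return 0;
}
~~~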
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator BATCH_MATMUL from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro making minimal changes and not including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test\r\n",
"comments": [
{
"body": "Hi @ddavis-2015, are you stilling planning on integrating this?",
"created_at": "2023-07-19T21:39:06Z"
},
{
"body": "@pkgoogle A new PR based on this PR is in progress. The new PR will appear in the tflite-micro repo when ready.",
"created_at": "2023-07-20T06:28:58Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46504\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46504\">No</a>\n",
"created_at": "2023-07-20T06:39:58Z"
}
],
"number": 46504,
"title": "micro: port op BATCH_MATMUL from lite"
}
|
{
"body": "Extract the parsing out of a switch statement case to create a\r\nstandalone function which can be called by the micro op resolver.\r\n\r\nPR step 1 for issue #46504",
"number": 46668,
"review_comments": [],
"title": "Extract a function for parsing operator BATCH_MATMUL"
}
|
{
"commits": [
{
"message": "Extract a function for parsing operator BATCH_MATMUL\n\nExtract the parsing out of a switch statement case to create a\nstandalone function which can be called by the micro op resolver.\n\nPR step 1 for issue #46504"
}
],
"files": [
{
"diff": "@@ -185,6 +185,10 @@ TfLiteStatus ParseOpDataTfLite(const Operator* op, BuiltinOperator op_type,\n return ParsePool(op, error_reporter, allocator, builtin_data);\n }\n \n+ case BuiltinOperator_BATCH_MATMUL: {\n+ return ParseBatchMatMul(op, error_reporter, allocator, builtin_data);\n+ }\n+\n case BuiltinOperator_BATCH_TO_SPACE_ND: {\n return ParseBatchToSpaceNd(op, error_reporter, allocator, builtin_data);\n }\n@@ -741,19 +745,6 @@ TfLiteStatus ParseOpDataTfLite(const Operator* op, BuiltinOperator op_type,\n *builtin_data = params.release();\n return kTfLiteOk;\n }\n- case BuiltinOperator_BATCH_MATMUL: {\n- auto params = safe_allocator.Allocate<TfLiteBatchMatMulParams>();\n- TF_LITE_ENSURE(error_reporter, params != nullptr);\n- if (const auto* bmm_params =\n- op->builtin_options_as_BatchMatMulOptions()) {\n- params->adj_x = bmm_params->adj_x();\n- params->adj_y = bmm_params->adj_y();\n- params->asymmetric_quantize_inputs =\n- bmm_params->asymmetric_quantize_inputs();\n- }\n- *builtin_data = params.release();\n- return kTfLiteOk;\n- }\n case BuiltinOperator_CALL_ONCE: {\n auto params = safe_allocator.Allocate<TfLiteCallOnceParams>();\n TF_LITE_ENSURE(error_reporter, params != nullptr);\n@@ -971,6 +962,27 @@ TfLiteStatus ParseArgMin(const Operator* op, ErrorReporter* error_reporter,\n return kTfLiteOk;\n }\n \n+// We have this parse function instead of directly returning kTfLiteOk from the\n+// switch-case in ParseOpData because this function is used as part of the\n+// selective registration for the OpResolver implementation in micro.\n+TfLiteStatus ParseBatchMatMul(const Operator* op, ErrorReporter* error_reporter,\n+ BuiltinDataAllocator* allocator,\n+ void** builtin_data) {\n+ CheckParsePointerParams(op, error_reporter, allocator, builtin_data);\n+\n+ SafeBuiltinDataAllocator safe_allocator(allocator);\n+ auto params = safe_allocator.Allocate<TfLiteBatchMatMulParams>();\n+ TF_LITE_ENSURE(error_reporter, params != nullptr);\n+ if (const auto* bmm_params = op->builtin_options_as_BatchMatMulOptions()) {\n+ params->adj_x = bmm_params->adj_x();\n+ params->adj_y = bmm_params->adj_y();\n+ params->asymmetric_quantize_inputs =\n+ bmm_params->asymmetric_quantize_inputs();\n+ }\n+ *builtin_data = params.release();\n+ return kTfLiteOk;\n+}\n+\n // We have this parse function instead of directly returning kTfLiteOk from the\n // switch-case in ParseOpData because this function is used as part of the\n // selective registration for the OpResolver implementation in micro.",
"filename": "tensorflow/lite/core/api/flatbuffer_conversions.cc",
"status": "modified"
},
{
"diff": "@@ -84,6 +84,10 @@ TfLiteStatus ParseArgMax(const Operator* op, ErrorReporter* error_reporter,\n TfLiteStatus ParseArgMin(const Operator* op, ErrorReporter* error_reporter,\n BuiltinDataAllocator* allocator, void** builtin_data);\n \n+TfLiteStatus ParseBatchMatMul(const Operator* op, ErrorReporter* error_reporter,\n+ BuiltinDataAllocator* allocator,\n+ void** builtin_data);\n+\n TfLiteStatus ParseBatchToSpaceNd(const Operator* op,\n ErrorReporter* error_reporter,\n BuiltinDataAllocator* allocator,",
"filename": "tensorflow/lite/core/api/flatbuffer_conversions.h",
"status": "modified"
}
]
}
|
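For context on the record above: the PR moves the BATCH_MATMUL option parsing into the standalone `ParseBatchMatMul` declared in `flatbuffer_conversions.h`. The snippet below is a hypothetical call site, not code from the PR; it assumes only the signature shown in the diff and that the caller already has the flatbuffer `Operator`, an `ErrorReporter`, and a `BuiltinDataAllocator` (for example, inside a micro op resolver's selective registration path). The wrapper name is invented.

~~~cpp
// Hypothetical sketch of invoking the extracted parser directly, bypassing
// the large switch in ParseOpDataTfLite.
#include "tensorflow/lite/core/api/flatbuffer_conversions.h"

TfLiteStatus ParseBatchMatMulBuiltinData(
    const tflite::Operator* op, tflite::ErrorReporter* error_reporter,
    tflite::BuiltinDataAllocator* allocator, void** builtin_data) {
  // On success, *builtin_data points to a TfLiteBatchMatMulParams allocated
  // through `allocator`, with adj_x / adj_y / asymmetric_quantize_inputs set
  // from the operator's BatchMatMulOptions.
  return tflite::ParseBatchMatMul(op, error_reporter, allocator, builtin_data);
}
~~~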
{
"body": "When I run a testcase on my `aarch64` platform, it reports error as:\r\n```\r\ntensorflow/core/platform/profile_utils/cpu_utils.cc:106] Failed to find bogomips or clock in /proc/cpuinfo; cannot determine CPU frequency\r\n```\r\n\r\nAnd I found `core/platform/profile_utils/cpu_utils.cc` not support `aarch64` yet as follows:\r\n```\r\n#if (defined(__powerpc__) || \\\r\n defined(__ppc__) && (__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__))\r\n retval = sscanf(line.c_str(), \"clock : %lfMHz\", &cpu_freq);\r\n freq_factor = 1.0;\r\n#else\r\n retval = sscanf(line.c_str(), \"bogomips : %lf\", &cpu_freq);\r\n#endif\r\n if (retval > 0) {\r\n const double freq_ghz = cpu_freq / 1000.0 / freq_factor;\r\n if (retval != 1 || freq_ghz < 0.01) {\r\n LOG(WARNING) << \"Failed to get CPU frequency: \" << freq_ghz << \" GHz\";\r\n return INVALID_FREQUENCY;\r\n }\r\n const int64 freq_n =\r\n static_cast<int64>(freq_ghz * 1000.0 * 1000.0 * 1000.0);\r\n LOG(INFO) << \"CPU Frequency: \" << freq_n << \" Hz\";\r\n return freq_n;\r\n }\r\n }\r\n LOG(WARNING)\r\n << \"Failed to find bogomips or clock in /proc/cpuinfo; cannot determine \"\r\n \"CPU frequency\";\r\n return INVALID_FREQUENCY;\r\n```\r\n\r\nBut my `proc/cpuinfo` is\r\n```\r\nBogoMIPS : xxx.xx\r\n```\r\nwhich not match `bogomips : %lf`\r\n\r\nCould anyone help to fix this issue?",
"comments": [
{
"body": "@darmac,\r\nIn order to expedite the trouble-shooting process, could you please fill in the below template and provide the exact sequence of commands / steps that you executed before running into the problem?\r\n\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):\r\n- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:\r\n- TensorFlow installed from (source or binary):\r\n- TensorFlow version:\r\n- Python version:\r\n- Installed using virtualenv? pip? conda?:\r\n- Bazel version (if compiling from source):\r\n- GCC/Compiler version (if compiling from source):\r\n- CUDA/cuDNN version:\r\n- GPU model and memory:\r\n\r\nThanks!",
"created_at": "2020-08-21T13:41:28Z"
},
{
"body": "OS Platform and Distribution: `CentOS Linux 8`\r\nMobile device if the issue happens on mobile device: `N/A (TaiShan2280 V2 Server)`\r\nTensorFlow installed from (source or binary): `source`\r\nTensorFlow version: `V2.2.0`\r\nPython version: `3.6.3`\r\nInstalled using: `pip`\r\nBazel version (if compiling from source): `2.0.0`\r\nGCC/Compiler version (if compiling from source): `8.2.1`\r\nCUDA/cuDNN version: `N/A`\r\nGPU model and memory: `N/A`",
"created_at": "2020-08-22T01:06:36Z"
},
{
"body": "@darmac,\r\nA similar issue [#39185](https://github.com/tensorflow/tensorflow/issues/39185), was fixed in TensorFlow v2.3. Could you please update TensorFlow to v2.3 and check if you are facing the same issue?\r\n\r\nAlso, please provide the exact sequence of commands or the code that you executed before running into the error? Thanks!",
"created_at": "2020-08-24T16:30:56Z"
},
{
"body": "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.\n",
"created_at": "2020-08-31T17:22:15Z"
},
{
"body": "Got it, I have not test the latest version yet. Thanks.",
"created_at": "2020-09-01T01:50:19Z"
},
{
"body": "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.\n",
"created_at": "2020-09-08T14:16:24Z"
},
{
"body": "Got it.",
"created_at": "2020-09-09T00:58:21Z"
},
{
"body": "> Got it, I have not test the latest version yet. Thanks.\r\n\r\n@darmac,\r\nIn this case, can we close the issue. Please feel free to re-open the issue when you have updates regarding it. Thanks!",
"created_at": "2020-09-11T15:46:01Z"
},
{
"body": "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.\n",
"created_at": "2020-09-18T16:41:12Z"
},
{
"body": "Closing as stale. Please reopen if you'd like to work on this further.\n",
"created_at": "2020-09-25T20:50:11Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/42545\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/42545\">No</a>\n",
"created_at": "2020-09-25T20:50:24Z"
}
],
"number": 42545,
"title": "Tensorflow V2.2.0 boot fail on aarch64"
}
|
{
"body": "Detect CPU frequency on aarch64. Fixes #42545 and #38260.",
"number": 46643,
"review_comments": [],
"title": "Find bogomips on aarch64"
}
|
{
"commits": [
{
"message": "find bogomips on aarch64"
}
],
"files": [
{
"diff": "@@ -98,6 +98,8 @@ static ICpuUtilsHelper* cpu_utils_helper_instance_ = nullptr;\n freq_factor = 1.0;\n #elif defined(__s390x__)\n retval = sscanf(line.c_str(), \"bogomips per cpu: %lf\", &cpu_freq);\n+#elif defined(__aarch64__)\n+ retval = sscanf(line.c_str(), \"BogoMIPS : %lf\", &cpu_freq);\n #else\n retval = sscanf(line.c_str(), \"bogomips : %lf\", &cpu_freq);\n #endif",
"filename": "tensorflow/core/platform/profile_utils/cpu_utils.cc",
"status": "modified"
}
]
}
|
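The one-line fix in the record above hinges on `sscanf` patterns being case sensitive: aarch64 kernels report `BogoMIPS`, so the pre-existing lowercase `bogomips : %lf` pattern never matches and the frequency lookup fails with the warning quoted in the issue. The standalone snippet below is illustrative only (the sample `/proc/cpuinfo` line is made up, and this is not TensorFlow code); it demonstrates the difference between the two patterns.

~~~cpp
// Minimal demonstration of why the aarch64-specific pattern is needed.
#include <cstdio>

int main() {
  const char* line = "BogoMIPS        : 200.00";  // hypothetical aarch64 line
  double cpu_freq = 0.0;

  // Literal characters in scanf patterns are case sensitive, so the generic
  // lowercase pattern converts 0 fields on aarch64...
  int generic_pattern = std::sscanf(line, "bogomips : %lf", &cpu_freq);
  // ...while the pattern added by the PR converts the value (1 field);
  // the blanks in the format match any run of whitespace in the input.
  int arm64_pattern = std::sscanf(line, "BogoMIPS : %lf", &cpu_freq);

  std::printf("generic=%d aarch64=%d cpu_freq=%.2f\n", generic_pattern,
              arm64_pattern, cpu_freq);
  return 0;
}
~~~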
{
"body": "\r\n@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator FLOOR_DIV from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/floor_div.cc into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro without making any changes or including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/45657\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/45657\">No</a>\n",
"created_at": "2021-04-12T10:38:08Z"
}
],
"number": 45657,
"title": "micro: port op FLOOR_DIV from lite"
}
|
{
"body": "This is a copy with minimal modification of the kernel and test for\r\noperator FLOOR_DIV from tensorflow/lite/kernels.\r\nAdaptations to micro and addition to the micro build to follow.\r\n\r\nPR step 3 for issue #45657\r\n",
"number": 46505,
"review_comments": [],
"title": "micro: copy operator FLOOR_DIV kernel from lite"
}
|
{
"commits": [
{
"message": "micro: copy operator FLOOR_DIV kernel from lite\n\nThis is a copy with minimal modification of the kernel and test for\noperator FLOOR_DIV from tensorflow/lite/kernels.\nAdaptations to micro and addition to the micro build to follow.\n\nPR step 3 for issue #45657"
}
],
"files": [
{
"diff": "@@ -0,0 +1,170 @@\n+/* Copyright 2018 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#include <math.h>\n+#include <stddef.h>\n+#include <stdint.h>\n+\n+#include <functional>\n+\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/binary_function.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/reference_ops.h\"\n+#include \"tensorflow/lite/kernels/internal/tensor.h\"\n+#include \"tensorflow/lite/kernels/internal/tensor_ctypes.h\"\n+#include \"tensorflow/lite/kernels/kernel_util.h\"\n+\n+namespace tflite {\n+namespace ops {\n+namespace builtin {\n+namespace floor_div {\n+namespace {\n+\n+// Input/output tensor index.\n+constexpr int kInputTensor1 = 0;\n+constexpr int kInputTensor2 = 1;\n+constexpr int kOutputTensor = 0;\n+\n+// Op data for floor_div op.\n+struct OpData {\n+ bool requires_broadcast;\n+};\n+\n+void* Init(TfLiteContext* context, const char* buffer, size_t length) {\n+ auto* data = new OpData;\n+ data->requires_broadcast = false;\n+ return data;\n+}\n+\n+void Free(TfLiteContext* context, void* buffer) {\n+ delete reinterpret_cast<OpData*>(buffer);\n+}\n+\n+TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n+ TF_LITE_ENSURE_EQ(context, NumInputs(node), 2);\n+ TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);\n+\n+ // Reinterprete the opaque data provided by user.\n+ OpData* data = reinterpret_cast<OpData*>(node->user_data);\n+\n+ const TfLiteTensor* input1;\n+ TF_LITE_ENSURE_OK(context,\n+ GetInputSafe(context, node, kInputTensor1, &input1));\n+ const TfLiteTensor* input2;\n+ TF_LITE_ENSURE_OK(context,\n+ GetInputSafe(context, node, kInputTensor2, &input2));\n+ TfLiteTensor* output;\n+ TF_LITE_ENSURE_OK(context,\n+ GetOutputSafe(context, node, kOutputTensor, &output));\n+\n+ TF_LITE_ENSURE_TYPES_EQ(context, input1->type, input2->type);\n+\n+ const TfLiteType type = input1->type;\n+ switch (type) {\n+ case kTfLiteFloat32:\n+ case kTfLiteInt32:\n+ break;\n+ default:\n+ TF_LITE_KERNEL_LOG(context, \"Type '%s' is not supported by floor_div.\",\n+ TfLiteTypeGetName(type));\n+ return kTfLiteError;\n+ }\n+ output->type = type;\n+\n+ data->requires_broadcast = !HaveSameShapes(input1, input2);\n+\n+ TfLiteIntArray* output_size = nullptr;\n+ if (data->requires_broadcast) {\n+ TF_LITE_ENSURE_OK(context, CalculateShapeForBroadcast(\n+ context, input1, input2, &output_size));\n+ } else {\n+ output_size = TfLiteIntArrayCopy(input1->dims);\n+ }\n+\n+ return context->ResizeTensor(context, output, output_size);\n+}\n+\n+template <typename T>\n+TfLiteStatus EvalImpl(TfLiteContext* context, bool requires_broadcast,\n+ const TfLiteTensor* input1, const TfLiteTensor* input2,\n+ TfLiteTensor* output) {\n+ const T* denominator_data = GetTensorData<T>(input2);\n+\n+ // Validate the denominator.\n+ for (int i = 0; i < NumElements(input2); ++i) {\n+ if (std::equal_to<T>()(denominator_data[i], 0)) {\n+ 
TF_LITE_KERNEL_LOG(context, \"Division by 0\");\n+ return kTfLiteError;\n+ }\n+ }\n+ if (requires_broadcast) {\n+ reference_ops::BroadcastBinaryFunction4DSlow<T, T, T>(\n+ GetTensorShape(input1), GetTensorData<T>(input1),\n+ GetTensorShape(input2), denominator_data, GetTensorShape(output),\n+ GetTensorData<T>(output), reference_ops::FloorDiv<T>);\n+ } else {\n+ reference_ops::BinaryFunction<T, T, T>(\n+ GetTensorShape(input1), GetTensorData<T>(input1),\n+ GetTensorShape(input2), GetTensorData<T>(input2),\n+ GetTensorShape(output), GetTensorData<T>(output),\n+ reference_ops::FloorDiv<T>);\n+ }\n+\n+ return kTfLiteOk;\n+}\n+\n+TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n+ OpData* data = reinterpret_cast<OpData*>(node->user_data);\n+\n+ const TfLiteTensor* input1;\n+ TF_LITE_ENSURE_OK(context,\n+ GetInputSafe(context, node, kInputTensor1, &input1));\n+ const TfLiteTensor* input2;\n+ TF_LITE_ENSURE_OK(context,\n+ GetInputSafe(context, node, kInputTensor2, &input2));\n+ TfLiteTensor* output;\n+ TF_LITE_ENSURE_OK(context,\n+ GetOutputSafe(context, node, kOutputTensor, &output));\n+\n+ switch (input1->type) {\n+ case kTfLiteInt32: {\n+ return EvalImpl<int32_t>(context, data->requires_broadcast, input1,\n+ input2, output);\n+ }\n+ case kTfLiteFloat32: {\n+ return EvalImpl<float>(context, data->requires_broadcast, input1, input2,\n+ output);\n+ }\n+ default: {\n+ TF_LITE_KERNEL_LOG(context, \"Type '%s' is not supported by floor_div.\",\n+ TfLiteTypeGetName(input1->type));\n+ return kTfLiteError;\n+ }\n+ }\n+}\n+\n+} // namespace\n+} // namespace floor_div\n+\n+TfLiteRegistration* Register_FLOOR_DIV() {\n+ // Init, Free, Prepare, Eval are satisfying the Interface required by\n+ // TfLiteRegistration.\n+ static TfLiteRegistration r = {floor_div::Init, floor_div::Free,\n+ floor_div::Prepare, floor_div::Eval};\n+ return &r;\n+}\n+\n+} // namespace builtin\n+} // namespace ops\n+} // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/floor_div.cc",
"status": "added"
},
{
"diff": "@@ -0,0 +1,117 @@\n+/* Copyright 2018 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#include <stdint.h>\n+\n+#include <vector>\n+\n+#include \"tensorflow/lite/kernels/test_util.h\"\n+#include \"tensorflow/lite/schema/schema_generated.h\"\n+\n+namespace tflite {\n+namespace {\n+\n+using ::testing::ElementsAre;\n+\n+template <typename T>\n+class FloorDivModel : public SingleOpModel {\n+ public:\n+ FloorDivModel(const TensorData& input1, const TensorData& input2,\n+ const TensorData& output) {\n+ input1_ = AddInput(input1);\n+ input2_ = AddInput(input2);\n+ output_ = AddOutput(output);\n+ SetBuiltinOp(BuiltinOperator_FLOOR_DIV, BuiltinOptions_FloorDivOptions,\n+ CreateFloorDivOptions(builder_).Union());\n+ BuildInterpreter({GetShape(input1_), GetShape(input2_)});\n+ }\n+\n+ int input1() { return input1_; }\n+ int input2() { return input2_; }\n+\n+ std::vector<T> GetOutput() { return ExtractVector<T>(output_); }\n+ std::vector<int> GetOutputShape() { return GetTensorShape(output_); }\n+\n+ private:\n+ int input1_;\n+ int input2_;\n+ int output_;\n+};\n+\n+TEST(FloorDivModel, Simple) {\n+ FloorDivModel<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n+ {TensorType_INT32, {1, 2, 2, 1}},\n+ {TensorType_INT32, {}});\n+ model.PopulateTensor<int32_t>(model.input1(), {10, 9, 11, 3});\n+ model.PopulateTensor<int32_t>(model.input2(), {2, 2, 3, 4});\n+ model.Invoke();\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n+ EXPECT_THAT(model.GetOutput(), ElementsAre(5, 4, 3, 0));\n+}\n+\n+TEST(FloorDivModel, NegativeValue) {\n+ FloorDivModel<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n+ {TensorType_INT32, {1, 2, 2, 1}},\n+ {TensorType_INT32, {}});\n+ model.PopulateTensor<int32_t>(model.input1(), {10, -9, -11, 7});\n+ model.PopulateTensor<int32_t>(model.input2(), {2, 2, -3, -4});\n+ model.Invoke();\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n+ EXPECT_THAT(model.GetOutput(), ElementsAre(5, -5, 3, -2));\n+}\n+\n+TEST(FloorDivModel, BroadcastFloorDiv) {\n+ FloorDivModel<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n+ {TensorType_INT32, {1}}, {TensorType_INT32, {}});\n+ model.PopulateTensor<int32_t>(model.input1(), {10, -9, -11, 7});\n+ model.PopulateTensor<int32_t>(model.input2(), {-3});\n+ model.Invoke();\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n+ EXPECT_THAT(model.GetOutput(), ElementsAre(-4, 3, 3, -3));\n+}\n+\n+TEST(FloorDivModel, SimpleFloat) {\n+ FloorDivModel<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n+ {TensorType_FLOAT32, {1, 2, 2, 1}},\n+ {TensorType_FLOAT32, {}});\n+ model.PopulateTensor<float>(model.input1(), {10.05, 9.09, 11.9, 3.01});\n+ model.PopulateTensor<float>(model.input2(), {2.05, 2.03, 3.03, 4.03});\n+ model.Invoke();\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n+ EXPECT_THAT(model.GetOutput(), ElementsAre(4.0, 4.0, 3.0, 0.0));\n+}\n+\n+TEST(FloorDivModel, 
NegativeValueFloat) {\n+ FloorDivModel<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n+ {TensorType_FLOAT32, {1, 2, 2, 1}},\n+ {TensorType_FLOAT32, {}});\n+ model.PopulateTensor<float>(model.input1(), {10.03, -9.9, -11.0, 7.0});\n+ model.PopulateTensor<float>(model.input2(), {2.0, 2.3, -3.0, -4.1});\n+ model.Invoke();\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n+ EXPECT_THAT(model.GetOutput(), ElementsAre(5.0, -5.0, 3.0, -2.0));\n+}\n+\n+TEST(FloorDivModel, BroadcastFloorDivFloat) {\n+ FloorDivModel<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n+ {TensorType_FLOAT32, {1}},\n+ {TensorType_FLOAT32, {}});\n+ model.PopulateTensor<float>(model.input1(), {10.03, -9.9, -11.0, 7.0});\n+ model.PopulateTensor<float>(model.input2(), {-3.3});\n+ model.Invoke();\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n+ EXPECT_THAT(model.GetOutput(), ElementsAre(-4.0, 2.0, 3.0, -3.0));\n+}\n+} // namespace\n+} // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/floor_div_test.cc",
"status": "added"
}
]
}
|
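The FloorDiv reference values in the test diff above follow floor-division semantics, i.e. the quotient is rounded toward negative infinity rather than toward zero. A minimal NumPy sketch (an illustration added here, not part of the TFLite test suite) reproduces the expected outputs:

```python
import numpy as np

# Same inputs as the Simple, NegativeValue and BroadcastFloorDiv cases above.
print(np.floor_divide([10, 9, 11, 3], [2, 2, 3, 4]))      # [5 4 3 0]
print(np.floor_divide([10, -9, -11, 7], [2, 2, -3, -4]))  # [ 5 -5  3 -2]
print(np.floor_divide([10, -9, -11, 7], -3))              # [-4  3  3 -3]  (broadcast)
```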
{
"body": "\r\n\r\n**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Mac OS 10.14.2\r\n- TensorFlow installed from (source or binary): binary\r\n- TensorFlow version (use command below): b'unknown' 1.13.0-rc1\r\n- Python version: Python 3.7.2\r\n- Bazel version (if compiling from source): N/A\r\n- GCC/Compiler version (if compiling from source): N/A\r\n- CUDA/cuDNN version: N/A\r\n- GPU model and memory: N/A\r\n\r\n**Describe the current behavior**\r\n\r\nI've encountered several operations that support int64 but not uint64, without any clear reasoning. `tf.equal`, `tf.fill`, `tf.where`, and `tf.stack`, for example, give errors like:\r\n\r\n InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'Pack' used by node stack (defined at <stdin>:4) with these attrs: [T=DT_UINT64, axis=0, N=2]\r\n\r\n\r\n**Describe the expected behavior**\r\n\r\nFunctions that work for int64s should also work for uint64s when the behavior would be the same.\r\n\r\n**Code to reproduce the issue**\r\n\r\nhttps://gist.github.com/hjfreyer/31ab2dd2d85d1a509272af1c5e011dde\r\n**Other info / logs**\r\nInclude any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.\r\n",
"comments": [
{
"body": "@hjfreyer Most of the operations in TF does not support uint64. There are some historical and reality reasons as far as I know. One is the binary size which could be really big if all signed and unsigned int (8/16/32/64) are enabled. Also, some integer types does not work on GPU for certain math operations yet. \r\n\r\nBy default, `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64` are supported for most of the ops. `uint32` and `uint64` are supported for `bitwise` ops additionally. There are also some ops that are merely memory/type manipulation (like `cast`) so `uint32` and `uint64` are supported as well.\r\n\r\nFrom the list you provide, `tf.equal`, `tf.fill`, `tf.where`, and `tf.stack`, my guess is that they will not be supported, unless `uint32` and `uint64` are added to the default list and applies to almost all other ops.\r\n\r\nThis might take some time and might need guidance from api team I believe.",
"created_at": "2019-03-09T18:44:38Z"
},
{
"body": "(For API owners) We can take a look when there is a specific proposal to review.",
"created_at": "2019-03-20T20:29:59Z"
},
{
"body": "> @hjfreyer Most of the operations in TF does not support uint64. There are some historical and reality reasons as far as I know. One is the binary size which could be really big if all signed and unsigned int (8/16/32/64) are enabled. Also, some integer types does not work on GPU for certain math operations yet.\r\n> \r\n> By default, `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64` are supported for most of the ops. `uint32` and `uint64` are supported for `bitwise` ops additionally. There are also some ops that are merely memory/type manipulation (like `cast`) so `uint32` and `uint64` are supported as well.\r\n> \r\n> From the list you provide, `tf.equal`, `tf.fill`, `tf.where`, and `tf.stack`, my guess is that they will not be supported, unless `uint32` and `uint64` are added to the default list and applies to almost all other ops.\r\n> \r\n> This might take some time and might need guidance from api team I believe.\r\n\r\n@yongtang is there some definitive list what is to be supported, or what shouldn't?\r\ne.g. I just stumbled over the fact that for dtype `tf.uint16` operator == is not implemented … something I would file as a bug, however if it is `wontfix`, then this should be documented somewhere …\r\n\r\nI just took a look at more combinations:\r\n```python \r\nimport tensorflow as tf\r\nfrom tensorflow.python.framework.errors_impl import NotFoundError\r\n\r\ndtypes = set([dtype for dtype in tf.dtypes.__dict__.values() if isinstance(dtype, tf.dtypes.DType)]) - {tf.resource, tf.variant}\r\n\r\nfor dtype in dtypes:\r\n a = tf.zeros(1, dtype=dtype)\r\n try:\r\n print(dtype, a == a)\r\n except NotFoundError:\r\n print(dtype, 'operator == not implemented')\r\n```\r\n```\r\n> TF_CPP_MIN_LOG_LEVEL=3 python test.py\r\n<dtype: 'float32'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'float64'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'int32'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'uint8'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'int16'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'int8'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'string'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'complex64'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'int64'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'bool'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'qint8'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'quint8'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'qint32'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'bfloat16'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'qint16'> operator == not implemented\r\n<dtype: 'quint16'> operator == not implemented\r\n<dtype: 'uint16'> operator == not implemented\r\n<dtype: 'complex128'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'float16'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'uint32'> operator == not implemented\r\n<dtype: 'uint64'> operator == not implemented\r\n```\r\n\r\nSo `qint16`, `quint16`, `uint16`, `uint32`, `uint64` don't even have an equality operator implemented … that makes those datatypes very rudimentary. ",
"created_at": "2020-03-26T18:41:33Z"
},
{
"body": "@csachs I added a PR #38288 for uint16, uint32, and uint64 for tf.math.equal and tf.math.not_equal.",
"created_at": "2020-04-06T23:40:26Z"
},
{
"body": "@hjfreyer \r\n\r\nI am not seeing any issue with TF version ,1.15,2.x.Please, find the gist [here](https://colab.research.google.com/gist/ravikyram/d8387b0dd947a2537fd55d3aec87267f/untitled123.ipynb).Please,verify once and close the issue.Thanks!",
"created_at": "2020-07-14T10:59:47Z"
},
{
"body": "@ravikyram \r\nThe problems raised in my comment remain open.\r\nFurthermore, there seems a regression, in earlier versions:\r\n`tf.zeros(1, tf.qint16)` worked, now it doesn't.\r\nShould I open another issue?",
"created_at": "2020-07-15T13:28:41Z"
},
{
"body": "@csachs The tf.zeros issue is being addressed in PR #41421 (tf.zeros uses FillOp implicitly).",
"created_at": "2020-07-15T16:16:42Z"
},
{
"body": "@csachs \r\n\r\nThe tf.zeros issue is being addressed in PR #41421 also got merged. Can you please verify once.Thanks!",
"created_at": "2020-07-27T15:38:18Z"
},
{
"body": "Added a PR #41795 to cover qint8/quint8/qint16/quint16 for `tf.math.[equal|not_equal]`.",
"created_at": "2020-07-28T02:17:46Z"
},
{
"body": "> @csachs\r\n> \r\n> The tf.zeros issue is being addressed in PR #41421 also got merged. Can you please verify once.Thanks!\r\n\r\nThank you @ravikyram , using version 2.4.0-dev20200728 , the `tf.zeros` issue is fixed.\r\n\r\nHowever the larger code snippet above does not run (producing a new error); I guess it will resolve with https://github.com/tensorflow/tensorflow/pull/41795 .",
"created_at": "2020-07-28T19:54:27Z"
},
{
"body": "Most of the dtypes in https://github.com/tensorflow/tensorflow/issues/26069#issuecomment-604608722 are already supported now except for `tf.qint32` (with `tf.zeros`). Added a PR #46313 to add `tf.qint32` for `tf.zeros`.",
"created_at": "2021-01-10T05:05:12Z"
},
{
"body": "The support of tf.qint16 and tf.quint16 for tf.stack is being added in #46404",
"created_at": "2021-01-13T18:30:02Z"
},
{
"body": "@hjfreyer,\r\nWith @yongtang's PRs, many operations support uint64 now (**`Tensorflow Version 2.4.1`**). Please find [the Gist](https://colab.research.google.com/gist/rmothukuru/4f4124c514714df8fe4456a3c589209f/gh_26069.ipynb) of the Working Code. Thanks! ",
"created_at": "2021-04-22T08:50:22Z"
},
{
"body": "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.\n",
"created_at": "2021-04-29T09:35:35Z"
},
{
"body": "Closing as stale. Please reopen if you'd like to work on this further.\n",
"created_at": "2021-05-06T09:37:57Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/26069\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/26069\">No</a>\n",
"created_at": "2021-05-06T09:38:03Z"
}
],
"number": 26069,
"title": "Many operations don't support uint64"
}
|
{
"body": "This PR is part of the effort for #26069 where `tf.stack` support most of the qtypes (`tf.qint8/tf.quint8/tf.qint32`)\r\nbut not `tf.qint16` and `tf.quint16`. The reason was that `TF_CALL_QUANTIZED_TYPES` does not include `qint16` and `quint16`.\r\n\r\nThis PR also update to add `qint32` for tf.equal and tf.not_equal to be consistent with other ops.\r\n\r\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>",
"number": 46404,
"review_comments": [],
"title": "Add support of tf.qint16 and tf.quint16 for tf.stack"
}
|
{
"commits": [
{
"message": "Add support of tf.qint16 and tf.quint16 for tf.stack\n\nThis PR is part of the effort for 26069 where tf.stack\nsupport most of the qtypes (`tf.qint8/tf.quint8/tf.qint32`)\nbut not `tf.qint16` and `tf.quint16`. The reason\nwas that `TF_CALL_QUANTIZED_TYPES` does not include\nqint16 and quint16.\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
},
{
"message": "Update to add support of tf.qint32 to tf.equal\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
}
],
"files": [
{
"diff": "@@ -18,8 +18,8 @@ limitations under the License.\n namespace tensorflow {\n REGISTER7(BinaryOp, CPU, \"Equal\", functor::equal_to, float, Eigen::half, double,\n uint8, int8, int16, bfloat16);\n-REGISTER7(BinaryOp, CPU, \"Equal\", functor::equal_to, uint16, uint32, uint64,\n- qint8, qint16, quint8, quint16);\n+REGISTER8(BinaryOp, CPU, \"Equal\", functor::equal_to, uint16, uint32, uint64,\n+ qint8, qint16, quint8, quint16, qint32);\n REGISTER_KERNEL_BUILDER(\n Name(\"ApproximateEqual\").Device(DEVICE_CPU).TypeConstraint<float>(\"T\"),\n ApproximateEqualOp<CPUDevice, float>);",
"filename": "tensorflow/core/kernels/cwise_op_equal_to_1.cc",
"status": "modified"
},
{
"diff": "@@ -18,8 +18,8 @@ limitations under the License.\n namespace tensorflow {\n REGISTER7(BinaryOp, CPU, \"NotEqual\", functor::not_equal_to, float, Eigen::half,\n double, uint8, int8, int16, bfloat16);\n-REGISTER7(BinaryOp, CPU, \"NotEqual\", functor::not_equal_to, uint16, uint32,\n- uint64, qint8, qint16, quint8, quint16);\n+REGISTER8(BinaryOp, CPU, \"NotEqual\", functor::not_equal_to, uint16, uint32,\n+ uint64, qint8, qint16, quint8, quint16, qint32);\n #if GOOGLE_CUDA || TENSORFLOW_USE_ROCM\n #if !defined(MLIR_GENERATED_GPU_KERNELS_ENABLED) || \\\n !defined(MLIR_GENERATED_EXPERIMENTAL_GPU_KERNELS_ENABLED)",
"filename": "tensorflow/core/kernels/cwise_op_not_equal_to_1.cc",
"status": "modified"
},
{
"diff": "@@ -127,6 +127,8 @@ class PackOp : public OpKernel {\n \n TF_CALL_ALL_TYPES(REGISTER_PACK);\n TF_CALL_QUANTIZED_TYPES(REGISTER_PACK);\n+TF_CALL_qint16(REGISTER_PACK);\n+TF_CALL_quint16(REGISTER_PACK);\n \n #if defined(IS_MOBILE_PLATFORM) && !defined(SUPPORT_SELECTIVE_REGISTRATION)\n // Primarily used for SavedModel support on mobile.",
"filename": "tensorflow/core/kernels/pack_op.cc",
"status": "modified"
},
{
"diff": "@@ -27,6 +27,7 @@\n from tensorflow.python.framework import test_util\n from tensorflow.python.ops import array_ops\n from tensorflow.python.ops import gradient_checker\n+from tensorflow.python.ops import math_ops\n from tensorflow.python.ops import variables\n from tensorflow.python.platform import test\n \n@@ -274,6 +275,22 @@ def testComplex(self):\n c = array_ops.stack(xs)\n self.assertAllEqual(self.evaluate(c), data)\n \n+ def testQTypes(self):\n+ np.random.seed(7)\n+ with self.session(use_gpu=True):\n+ shape = [2]\n+ for dtype in [\n+ dtypes.quint8,\n+ dtypes.quint16,\n+ dtypes.qint8,\n+ dtypes.qint16,\n+ dtypes.qint32]:\n+ with self.subTest(shape=shape, dtype=dtype):\n+ data = self.randn(shape, dtype.as_numpy_dtype)\n+ xs = list(map(constant_op.constant, data))\n+ c = math_ops.equal(array_ops.stack(xs), data)\n+ self.assertAllEqual(self.evaluate(c), [True, True])\n+\n \n class AutomaticStackingTest(test.TestCase):\n ",
"filename": "tensorflow/python/kernel_tests/array_ops/stack_op_test.py",
"status": "modified"
},
{
"diff": "@@ -998,6 +998,7 @@ def testEqualQuantizeDType(self):\n dtypes_lib.qint16,\n dtypes_lib.quint8,\n dtypes_lib.quint16,\n+ dtypes_lib.qint32,\n ]\n x = np.asarray([0, 1, 2, 3, 4])\n y = np.asarray([0, 1, 2, 3, 4])",
"filename": "tensorflow/python/kernel_tests/cwise_ops_binary_test.py",
"status": "modified"
}
]
}
|
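The issue above notes that ops like `tf.equal` and `tf.stack` historically had no uint64 kernels registered, and the PR diffs show how missing dtypes get added via the kernel registration macros. As a hedged aside (a workaround sketch, not something proposed in the thread or added by the PR), bit-preserving ops such as equality and stacking can also be routed through `tf.bitcast`, since reinterpreting uint64 as int64 keeps the bit pattern intact:

```python
import numpy as np
import tensorflow as tf

x = tf.constant(np.array([1, 2**63 + 5], dtype=np.uint64))
y = tf.constant(np.array([1, 7], dtype=np.uint64))

# Equality is bit-exact, so comparing the int64 views gives the same answer.
eq = tf.equal(tf.bitcast(x, tf.int64), tf.bitcast(y, tf.int64))

# Stacking only moves memory, so bitcast in, stack, and bitcast back.
stacked = tf.bitcast(
    tf.stack([tf.bitcast(x, tf.int64), tf.bitcast(y, tf.int64)]), tf.uint64)

print(eq.numpy(), stacked.dtype)  # [ True False] <dtype: 'uint64'>
```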
{
"body": "@tensorflow/micro\r\n\r\nThe Google-internal bazel builds have layering checks turned on while the open-source builds do not. This results in PRs passing external checks but then failing internally (see https://github.com/tensorflow/tensorflow/pull/46242 as an example).\r\n\r\nIf we can have the same behavior in the OSS bazel build then there will be one less discrepancy between the internal and open-source builds.",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46347\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46347\">No</a>\n",
"created_at": "2021-01-12T20:09:27Z"
}
],
"number": 46347,
"title": "layering check mismatch between internal and open-source TFLM bazel builds"
}
|
{
"body": "Note that we will need to manually ensure that any new bazel package has the layering_check disabled.\r\n\r\nThe internal builds have layering_check turned on by default, while the open-source builds have them turned off by default. Ideally, we would explicitly turn them on for the open-source build.\r\n\r\nHowever, turning it on (with `layering_check` instead of `-layering_check`) and building with this command:\r\n```\r\nbazel build tensorflow/lite/micro/kernels:add_test --repo_env=CC=`which clang`\r\n```\r\n\r\nresults in a number of additional build errors that will need much broader changes to the TFLM BUILD files to fix.\r\n\r\nAs a result, we are currently turning off the layering_check to at least make the internal and external builds consistent.\r\n\r\nFixes #46347\r\n\r\nSee http://b/177257332 for more internal-only context.\r\n",
"number": 46350,
"review_comments": [],
"title": "Explicitly disable layering check for TFLM bazel packages."
}
|
{
"commits": [
{
"message": "Explicitly disable layering check for TFLM bazel packages.\n\nNote that we will need to manually ensure that any new bazel package has\nthe leyering_check disabled.\n\nThe internal builds have layering_check turned on by default, while the\nopen-source builds have them turned off by default. Ideally, we would\nexplicitly turn them on for the open-source build.\n\nHowever, turning it on (with `layering_check` instead of\n`-layering_check`) and building with this command:\n\n```\nbazel build tensorflow/lite/micro/kernels:add_test --repo_env=CC=`which clang`\n```\n\nresults in a number of additional build errors that will need much\nbroader changes to the TFLM BUILD files to fix.\n\nAs a result, we are currently turning off the layering_check to at least\nmake the internal and external builds consistent.\n\nFixes #46347\n\nSee http://b/177257332 for more internal-only context."
},
{
"message": "fix build formatting."
}
],
"files": [
{
"diff": "@@ -10,6 +10,7 @@ load(\n \n package(\n default_visibility = [\"//visibility:public\"],\n+ features = [\"-layering_check\"],\n licenses = [\"notice\"], # Apache 2.0\n )\n ",
"filename": "tensorflow/lite/micro/BUILD",
"status": "modified"
},
{
"diff": "@@ -1,6 +1,9 @@\n load(\"@bazel_skylib//rules:build_test.bzl\", \"build_test\")\n \n-licenses([\"notice\"]) # Apache 2.0\n+package(\n+ features = [\"-layering_check\"],\n+ licenses = [\"notice\"], # Apache 2.0\n+)\n \n package_group(\n name = \"micro_top_level\",",
"filename": "tensorflow/lite/micro/benchmarks/BUILD",
"status": "modified"
},
{
"diff": "@@ -10,9 +10,11 @@ load(\n \"micro_copts\",\n )\n \n-package(default_visibility = [\"//visibility:public\"])\n-\n-licenses([\"notice\"]) # Apache 2.0\n+package(\n+ default_visibility = [\"//visibility:public\"],\n+ features = [\"-layering_check\"],\n+ licenses = [\"notice\"], # Apache 2.0\n+)\n \n cc_library(\n name = \"model\",",
"filename": "tensorflow/lite/micro/examples/hello_world/BUILD",
"status": "modified"
},
{
"diff": "@@ -6,7 +6,10 @@ load(\n \"tflite_micro_cc_test\",\n )\n \n-licenses([\"notice\"]) # Apache 2.0\n+package(\n+ features = [\"-layering_check\"],\n+ licenses = [\"notice\"], # Apache 2.0\n+)\n \n cc_library(\n name = \"image_model_data\",",
"filename": "tensorflow/lite/micro/examples/image_recognition_experimental/BUILD",
"status": "modified"
},
{
"diff": "@@ -6,9 +6,11 @@ load(\n \"tflite_micro_cc_test\",\n )\n \n-package(default_visibility = [\"//visibility:public\"])\n-\n-licenses([\"notice\"]) # Apache 2.0\n+package(\n+ default_visibility = [\"//visibility:public\"],\n+ features = [\"-layering_check\"],\n+ licenses = [\"notice\"], # Apache 2.0\n+)\n \n cc_library(\n name = \"magic_wand_model_data\",",
"filename": "tensorflow/lite/micro/examples/magic_wand/BUILD",
"status": "modified"
},
{
"diff": "@@ -8,6 +8,7 @@ load(\n \n package(\n default_visibility = [\"//visibility:public\"],\n+ features = [\"-layering_check\"],\n licenses = [\"notice\"], # Apache 2.0\n )\n ",
"filename": "tensorflow/lite/micro/examples/micro_speech/BUILD",
"status": "modified"
},
{
"diff": "@@ -7,6 +7,7 @@ load(\n \n package(\n default_visibility = [\"//visibility:public\"],\n+ features = [\"-layering_check\"],\n licenses = [\"notice\"], # Apache 2.0\n )\n ",
"filename": "tensorflow/lite/micro/examples/micro_speech/micro_features/BUILD",
"status": "modified"
},
{
"diff": "@@ -6,9 +6,11 @@ load(\n \"tflite_micro_cc_test\",\n )\n \n-package(default_visibility = [\"//visibility:public\"])\n-\n-licenses([\"notice\"]) # Apache 2.0\n+package(\n+ default_visibility = [\"//visibility:public\"],\n+ features = [\"-layering_check\"],\n+ licenses = [\"notice\"], # Apache 2.0\n+)\n \n cc_library(\n name = \"model_settings\",",
"filename": "tensorflow/lite/micro/examples/person_detection/BUILD",
"status": "modified"
},
{
"diff": "@@ -1,6 +1,9 @@\n # Description:\n # TensorFlow Lite for Microcontrollers Vision Example Utils.\n-licenses([\"notice\"]) # Apache 2.0\n+package(\n+ features = [\"-layering_check\"],\n+ licenses = [\"notice\"], # Apache 2.0\n+)\n \n py_binary(\n name = \"raw_to_bitmap\",",
"filename": "tensorflow/lite/micro/examples/person_detection/utils/BUILD",
"status": "modified"
},
{
"diff": "@@ -7,7 +7,10 @@ load(\n \"micro_copts\",\n )\n \n-licenses([\"notice\"]) # Apache 2.0\n+package(\n+ features = [\"-layering_check\"],\n+ licenses = [\"notice\"], # Apache 2.0\n+)\n \n config_setting(\n name = \"xtensa_hifimini\",\n@@ -670,6 +673,7 @@ cc_library(\n \"//tensorflow/lite/c:common\",\n \"//tensorflow/lite/kernels/internal:compatibility\",\n \"//tensorflow/lite/kernels/internal:types\",\n+ \"//tensorflow/lite/micro:debug_log\",\n ],\n )\n ",
"filename": "tensorflow/lite/micro/kernels/BUILD",
"status": "modified"
},
{
"diff": "@@ -1,6 +1,9 @@\n load(\"@bazel_skylib//rules:build_test.bzl\", \"build_test\")\n \n-licenses([\"notice\"])\n+package(\n+ features = [\"-layering_check\"],\n+ licenses = [\"notice\"], # Apache 2.0\n+)\n \n cc_binary(\n name = \"generate_flexbuffers_data\",",
"filename": "tensorflow/lite/micro/kernels/test_data_generation/BUILD",
"status": "modified"
},
{
"diff": "@@ -9,6 +9,7 @@ load(\n \n package(\n default_visibility = [\"//visibility:public\"],\n+ features = [\"-layering_check\"],\n licenses = [\"notice\"], # Apache 2.0\n )\n ",
"filename": "tensorflow/lite/micro/memory_planner/BUILD",
"status": "modified"
},
{
"diff": "@@ -4,7 +4,10 @@ load(\n \"tflite_micro_cc_test\",\n )\n \n-licenses([\"notice\"]) # Apache 2.0\n+package(\n+ features = [\"-layering_check\"],\n+ licenses = [\"notice\"], # Apache 2.0\n+)\n \n package_group(\n name = \"micro\",",
"filename": "tensorflow/lite/micro/testing/BUILD",
"status": "modified"
}
]
}
|
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator ELU from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro making minimal changes and not including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test\r\nPR 6: Extract common activation code into activations.cc and activation_utils.h files. Extract common test code into activation_test_utils.h file.\r\n",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46323\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46323\">No</a>\n",
"created_at": "2021-02-24T02:01:04Z"
}
],
"number": 46323,
"title": "micro: port op ELU from lite"
}
|
{
"body": "This is a copy with minimal modification of the kernel and test for\r\noperator ELU from tensorflow/lite/kernels.\r\nAdaptations to micro and addition to the micro build to follow.\r\n\r\nPR step 3 for issue #46323",
"number": 46328,
"review_comments": [],
"title": "micro: copy operator ELU kernel from lite"
}
|
{
"commits": [
{
"message": "micro: copy operator ELU kernel from lite\n\nThis is a copy with minimal modification of the kernel and test for\noperator ELU from tensorflow/lite/kernels.\nAdaptations to micro and addition to the micro build to follow.\n\nPR step 3 for issue #46323"
},
{
"message": "Remove header files that do not pass backend tests\n\nRemoved gtest/gtest.h include file."
}
],
"files": [
{
"diff": "@@ -0,0 +1,201 @@\n+/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#include <stddef.h>\n+\n+#include <algorithm>\n+#include <cmath>\n+#include <cstdint>\n+#include <functional>\n+#include <limits>\n+\n+#include \"tensorflow/lite/c/builtin_op_data.h\"\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/kernels/cpu_backend_context.h\"\n+#include \"tensorflow/lite/kernels/internal/common.h\"\n+#include \"tensorflow/lite/kernels/internal/compatibility.h\"\n+#include \"tensorflow/lite/kernels/internal/cppmath.h\"\n+#include \"tensorflow/lite/kernels/internal/optimized/optimized_ops.h\"\n+#include \"tensorflow/lite/kernels/internal/quantization_util.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/binary_function.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/integer_ops/log_softmax.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/integer_ops/logistic.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/integer_ops/tanh.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/logistic.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/prelu.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/reference_ops.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/softmax.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/tanh.h\"\n+#include \"tensorflow/lite/kernels/internal/tensor.h\"\n+#include \"tensorflow/lite/kernels/internal/tensor_ctypes.h\"\n+#include \"tensorflow/lite/kernels/internal/types.h\"\n+#include \"tensorflow/lite/kernels/kernel_util.h\"\n+\n+#if __aarch64__ && __clang__\n+#include <arm_neon.h>\n+#endif\n+\n+namespace tflite {\n+namespace ops {\n+namespace builtin {\n+namespace activations {\n+\n+// OLD-TODO(b/142762739): We should figure out a multi-threading plan for most\n+// of the activation ops below.\n+\n+enum KernelType {\n+ kReference,\n+ kGenericOptimized,\n+ kFixedPointOptimized,\n+};\n+\n+struct OpData {\n+ int32_t input_multiplier = 0;\n+ int input_left_shift = 0;\n+ int32_t input_range_radius = 0;\n+ int diff_min = 0;\n+ uint8_t table[256] = {0};\n+};\n+\n+template <typename T>\n+void PopulateLookupTable(struct OpData* data, const TfLiteTensor* input,\n+ TfLiteTensor* output,\n+ const std::function<float(float)>& transform) {\n+ static_assert(sizeof(T) == 1, \"Lookup table valid only for 8bit\");\n+ const float inverse_scale = 1 / output->params.scale;\n+ int32_t maxval = std::numeric_limits<T>::max();\n+ int32_t minval = std::numeric_limits<T>::min();\n+ for (int32_t val = minval; val <= maxval; ++val) {\n+ const float dequantized =\n+ input->params.scale * (val - input->params.zero_point);\n+ const float transformed = transform(dequantized);\n+ const float rescaled = std::round(transformed * inverse_scale);\n+ const int32_t quantized =\n+ static_cast<int32_t>(rescaled + output->params.zero_point);\n+ 
data->table[static_cast<uint8_t>(static_cast<T>(val))] =\n+ static_cast<uint8_t>(\n+ static_cast<T>(std::max(std::min(maxval, quantized), minval)));\n+ }\n+}\n+\n+// OLD-TODO(b/143696793): move this to optimized_ops.\n+void EvalUsingLookupTable(struct OpData* data, const TfLiteTensor* input,\n+ TfLiteTensor* output) {\n+ const int size =\n+ MatchingFlatSize(GetTensorShape(input), GetTensorShape(output));\n+ uint8_t* output_data = GetTensorData<uint8_t>(output);\n+ const uint8_t* input_data = GetTensorData<uint8_t>(input);\n+ int i = 0;\n+#if __aarch64__ && __clang__\n+ // This code uses ARM64-only instructions.\n+ // OLD-TODO(b/143709993): Port to ARMv7\n+\n+ // Load the tables into registers. (4*4 128-bit registers)\n+ uint8x16x4_t table[4];\n+ table[0] = vld1q_u8_x4(data->table + 16 * 4 * 0);\n+ table[1] = vld1q_u8_x4(data->table + 16 * 4 * 1);\n+ table[2] = vld1q_u8_x4(data->table + 16 * 4 * 2);\n+ table[3] = vld1q_u8_x4(data->table + 16 * 4 * 3);\n+\n+ // Vectorized loop; process uint8x16_t (16 elements) at a time.\n+ constexpr int vectorized_16_loop_step = 16;\n+ const int vectorized_16_loop_end =\n+ size / vectorized_16_loop_step * vectorized_16_loop_step;\n+ for (; i < vectorized_16_loop_end; i += vectorized_16_loop_step) {\n+ uint8x16_t input = vld1q_u8(input_data + i);\n+ uint8x16_t output = optimized_ops::aarch64_lookup_vector(table, input);\n+ vst1q_u8(output_data + i, output);\n+ }\n+ // Postamble and non-ARM64 code: simple for loop.\n+#endif\n+ for (; i < size; ++i) {\n+ output_data[i] = data->table[input_data[i]];\n+ }\n+}\n+\n+void* Init(TfLiteContext* context, const char* buffer, size_t length) {\n+ // This is a builtin op, so we don't use the contents in 'buffer', if any.\n+ // Instead, we allocate a new object to carry information from Prepare() to\n+ // Eval().\n+ return new OpData;\n+}\n+\n+void Free(TfLiteContext* context, void* buffer) {\n+ delete reinterpret_cast<OpData*>(buffer);\n+}\n+\n+TfLiteStatus GenericPrepare(TfLiteContext* context, TfLiteNode* node) {\n+ TF_LITE_ENSURE_EQ(context, NumInputs(node), 1);\n+ TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);\n+ const TfLiteTensor* input;\n+ TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, 0, &input));\n+ TfLiteTensor* output;\n+ TF_LITE_ENSURE_OK(context, GetOutputSafe(context, node, 0, &output));\n+ TF_LITE_ENSURE_TYPES_EQ(context, input->type, output->type);\n+\n+ return context->ResizeTensor(context, output,\n+ TfLiteIntArrayCopy(input->dims));\n+}\n+\n+TfLiteStatus EluPrepare(TfLiteContext* context, TfLiteNode* node) {\n+ const TfLiteTensor* input;\n+ TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, 0, &input));\n+ TfLiteTensor* output;\n+ TF_LITE_ENSURE_OK(context, GetOutputSafe(context, node, 0, &output));\n+ OpData* data = reinterpret_cast<OpData*>(node->user_data);\n+\n+ // Use LUT to handle quantized elu path.\n+ if (input->type == kTfLiteInt8) {\n+ PopulateLookupTable<int8_t>(data, input, output, [](float value) {\n+ return value < 0.0 ? 
std::exp(value) - 1.0f : value;\n+ });\n+ }\n+ return GenericPrepare(context, node);\n+}\n+\n+TfLiteStatus EluEval(TfLiteContext* context, TfLiteNode* node) {\n+ const TfLiteTensor* input;\n+ TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, 0, &input));\n+ TfLiteTensor* output;\n+ TF_LITE_ENSURE_OK(context, GetOutputSafe(context, node, 0, &output));\n+ switch (input->type) {\n+ case kTfLiteFloat32: {\n+ optimized_ops::Elu(GetTensorShape(input), GetTensorData<float>(input),\n+ GetTensorShape(output), GetTensorData<float>(output));\n+ return kTfLiteOk;\n+ } break;\n+ case kTfLiteInt8: {\n+ OpData* data = reinterpret_cast<OpData*>(node->user_data);\n+ EvalUsingLookupTable(data, input, output);\n+ return kTfLiteOk;\n+ } break;\n+ default:\n+ TF_LITE_KERNEL_LOG(\n+ context, \"Only float32 and int8 is supported currently, got %s.\",\n+ TfLiteTypeGetName(input->type));\n+ return kTfLiteError;\n+ }\n+}\n+\n+} // namespace activations\n+\n+TfLiteRegistration* Register_ELU() {\n+ static TfLiteRegistration r = {activations::Init, activations::Free,\n+ activations::EluPrepare, activations::EluEval};\n+ return &r;\n+}\n+\n+} // namespace builtin\n+} // namespace ops\n+} // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/elu.cc",
"status": "added"
},
{
"diff": "@@ -0,0 +1,237 @@\n+/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#include <math.h>\n+#include <stdint.h>\n+#include <stdlib.h>\n+\n+#include <algorithm>\n+#include <initializer_list>\n+#include <limits>\n+#include <map>\n+#include <memory>\n+#include <random>\n+#include <string>\n+#include <utility>\n+#include <vector>\n+\n+#include \"absl/memory/memory.h\"\n+#include \"flatbuffers/flatbuffers.h\" // from @flatbuffers\n+#include \"tensorflow/lite/core/api/op_resolver.h\"\n+#include \"tensorflow/lite/interpreter.h\"\n+#include \"tensorflow/lite/kernels/test_util.h\"\n+#include \"tensorflow/lite/schema/schema_generated.h\"\n+#include \"tensorflow/lite/string_type.h\"\n+\n+namespace tflite {\n+\n+namespace {\n+\n+using ::testing::ElementsAreArray;\n+\n+class BaseActivationsOpModel : public SingleOpModel {\n+ public:\n+ // Most activations don't take any options, so this constructor works for\n+ // them.\n+ BaseActivationsOpModel(BuiltinOperator type, TensorData input) {\n+ input_ = AddInput(input);\n+ if (input.type == TensorType_UINT8) {\n+ output_ = AddOutput({input.type, {}, 0, 0, 1. / 256});\n+ } else if (input.type == TensorType_INT8) {\n+ output_ = AddOutput({input.type, {}, 0, 0, 1. / 256, -128});\n+ } else {\n+ output_ = AddOutput({input.type, {}});\n+ }\n+ SetBuiltinOp(type, BuiltinOptions_NONE, 0);\n+ BuildInterpreter({GetShape(input_)});\n+ }\n+\n+ BaseActivationsOpModel(TfLiteRegistration* registration, BuiltinOperator type,\n+ TensorData input) {\n+ input_ = AddInput(input);\n+ if (input.type == TensorType_UINT8) {\n+ output_ = AddOutput({input.type, {}, 0, 0, 1. / 256});\n+ } else if (input.type == TensorType_INT8) {\n+ output_ = AddOutput({input.type, {}, 0, 0, 1. / 256, -128});\n+ } else {\n+ output_ = AddOutput({input.type, {}});\n+ }\n+ SetBuiltinOp(type, BuiltinOptions_NONE, 0);\n+ resolver_ = absl::make_unique<SingleOpResolver>(type, registration);\n+ BuildInterpreter({GetShape(input_)});\n+ }\n+\n+ // A dedicated constructor for SOFTMAX, which does some options.\n+ BaseActivationsOpModel(float softmax_beta, TensorData input,\n+ TensorType output_type) {\n+ input_ = AddInput(input);\n+ if (output_type == TensorType_UINT8) {\n+ output_ = AddOutput({TensorType_UINT8, {}, 0, 0, 1. / 256});\n+ } else if (output_type == TensorType_INT8) {\n+ output_ = AddOutput({TensorType_INT8, {}, 0, 0, 1. / 256, -128});\n+ } else if (input.type == TensorType_INT16 &&\n+ output_type == TensorType_INT16) {\n+ output_ = AddOutput({TensorType_INT16,\n+ {},\n+ 0,\n+ 0,\n+ 1.0f / (std::numeric_limits<int16_t>::max() + 1),\n+ 0});\n+ } else if (input.type != TensorType_INT16 &&\n+ output_type == TensorType_INT16) {\n+ output_ = AddOutput({TensorType_INT16, {}, 0, 0, 1. 
/ 32768, -16384});\n+ } else {\n+ output_ = AddOutput({output_type, {}});\n+ }\n+ SetBuiltinOp(BuiltinOperator_SOFTMAX, BuiltinOptions_SoftmaxOptions,\n+ CreateSoftmaxOptions(builder_, softmax_beta).Union());\n+ BuildInterpreter({GetShape(input_)});\n+ }\n+\n+ // A dedicated constructor for LeakyRelu, which does some options.\n+ BaseActivationsOpModel(TensorData input, float alpha) {\n+ input_ = AddInput(input);\n+ // The output scale and input scale might be different.\n+ if (input.type == TensorType_UINT8 || input.type == TensorType_INT8 ||\n+ input.type == TensorType_INT16) {\n+ auto output_min = (input.min >= 0) ? input.min : input.min * alpha;\n+ auto output_max = (input.max >= 0) ? input.max : input.max * alpha;\n+ if (input.type == TensorType_INT16) {\n+ output_ = AddOutput({TensorType_INT16,\n+ {},\n+ 0,\n+ 0,\n+ output_max / (std::numeric_limits<int16_t>::max()),\n+ 0});\n+ } else {\n+ output_ = AddOutput({input.type, {}, output_min, output_max});\n+ }\n+ } else {\n+ output_ = AddOutput({input.type, {}});\n+ }\n+ SetBuiltinOp(BuiltinOperator_LEAKY_RELU, BuiltinOptions_LeakyReluOptions,\n+ CreateLeakyReluOptions(builder_, alpha).Union());\n+ BuildInterpreter({GetShape(input_)});\n+ }\n+\n+ BaseActivationsOpModel(BuiltinOperator type, const TensorData& input,\n+ const TensorData& output) {\n+ input_ = AddInput(input);\n+ output_ = AddOutput(output);\n+ SetBuiltinOp(type, BuiltinOptions_NONE, 0);\n+ BuildInterpreter({GetShape(input_)});\n+ }\n+\n+ BaseActivationsOpModel(TfLiteRegistration* registration, BuiltinOperator type,\n+ const TensorData& input, const TensorData& output) {\n+ input_ = AddInput(input);\n+ output_ = AddOutput(output);\n+ SetBuiltinOp(type, BuiltinOptions_NONE, 0);\n+ resolver_ = absl::make_unique<SingleOpResolver>(type, registration);\n+ BuildInterpreter({GetShape(input_)});\n+ }\n+\n+ protected:\n+ int input_;\n+ int output_;\n+};\n+\n+class FloatActivationsOpModel : public BaseActivationsOpModel {\n+ public:\n+ using BaseActivationsOpModel::BaseActivationsOpModel;\n+\n+ void SetInput(const std::vector<float>& data) {\n+ PopulateTensor(input_, data);\n+ }\n+ std::vector<float> GetOutput() { return ExtractVector<float>(output_); }\n+};\n+\n+// Our fixed-point math function implementations have roughly 12 bits of\n+// accuracy, when specialized to 16-bit fixed-point arithmetic.\n+// That is purely an implementation compromise, it would have been possible\n+// to get closer to 16 bits of accuracy but that would be more expensive,\n+// and not needed for our purposes as ultimately the output is either\n+// immediately down-quantized to 8 bits, or will typically be at the output\n+// of the surrounding LSTM cell.\n+// So we can require roughly 2^-12 accuracy when the output is 16-bit, and\n+// we can more or less expect the full 2^-8 accuracy when the output is 8-bit.\n+//\n+// However, the representable output interval is often [-1, 1] (it has to be\n+// for tanh, and even for logistic, when we implement it in fixed-point, we\n+// typically have to do so on such a symmetric interval, e.g. ARM NEON only\n+// has signed fixed-point arithmetic (SQRDMULH)). As the width of [-1, 1]\n+// is 2, our representable values are often diluted by a factor of 2, whence\n+// the factor of 2 below.\n+const float kQuantizedTolerance = 2 * (1. / 256);\n+const float kQuantizedToleranceInt16 = 2 * (1. 
/ 4096);\n+\n+class QuantizedActivationsOpModel : public BaseActivationsOpModel {\n+ public:\n+ using BaseActivationsOpModel::BaseActivationsOpModel;\n+\n+ template <typename T>\n+ void SetInput(const std::vector<float>& data) {\n+ QuantizeAndPopulate<T>(input_, data);\n+ }\n+ template <typename T>\n+ std::vector<T> GetOutput() {\n+ return ExtractVector<T>(output_);\n+ }\n+\n+ template <typename T>\n+ std::vector<float> GetDequantizedOutput() {\n+ return Dequantize<T>(ExtractVector<T>(output_), GetScale(output_),\n+ GetZeroPoint(output_));\n+ }\n+};\n+\n+TEST(FloatActivationsOpTest, Elu) {\n+ FloatActivationsOpModel m(BuiltinOperator_ELU,\n+ /*input=*/{TensorType_FLOAT32, {1, 2, 4, 1}});\n+ m.SetInput({\n+ 0, -6, 2, -4, //\n+ 3, -2, 10, -0.1, //\n+ });\n+ m.Invoke();\n+ EXPECT_THAT(m.GetOutput(), ElementsAreArray(ArrayFloatNear({\n+ 0.0, -0.997521, 2.0, -0.981684, //\n+ 3.0, -0.864665, 10.0, -0.0951626, //\n+ })));\n+}\n+\n+TEST(QuantizedActivationsOpTest, EluInt8) {\n+ const float kMin = -1;\n+ const float kMax = 127.f / 128.f;\n+ QuantizedActivationsOpModel model(\n+ BuiltinOperator_ELU,\n+ /*input=*/{TensorType_INT8, {1, 2, 4, 1}, 8 * kMin, 8 * kMax},\n+ /*output=*/{TensorType_INT8, {1, 2, 4, 1}, 8 * kMin, 8 * kMax});\n+\n+ model.SetInput<int8_t>({\n+ 0, -6, 2, -4, //\n+ 3, -2, 6, -0.1, //\n+ });\n+\n+ model.Invoke();\n+ EXPECT_THAT(model.GetDequantizedOutput<int8_t>(),\n+ ElementsAreArray(ArrayFloatNear(\n+ {\n+ 0, -1.0, 2.0, -1, //\n+ 3.0, -0.875, 6.0, -0.125, //\n+ },\n+ kQuantizedTolerance)));\n+}\n+\n+} // namespace\n+} // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/elu_test.cc",
"status": "added"
}
]
}
|
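The int8 path in the copied kernel above works by precomputing a 256-entry lookup table in `EluPrepare` (via `PopulateLookupTable`) and then doing a plain table gather in `EvalUsingLookupTable`. A rough NumPy sketch of that idea is below; the scale and zero-point values are made up for illustration, and this is not the TFLite Micro code itself:

```python
import numpy as np

def build_elu_table(in_scale, in_zp, out_scale, out_zp):
    # One entry per possible int8 input value, indexed by its uint8 bit pattern.
    table = np.zeros(256, dtype=np.int8)
    for val in range(-128, 128):
        x = in_scale * (val - in_zp)              # dequantize
        y = np.expm1(x) if x < 0.0 else float(x)  # ELU in float
        q = int(round(y / out_scale)) + out_zp    # requantize
        table[val & 0xff] = np.clip(q, -128, 127)
    return table

# Hypothetical quantization parameters, chosen only for the sketch.
table = build_elu_table(in_scale=8 / 128, in_zp=0, out_scale=8 / 128, out_zp=0)

x_q = np.array([0, -16, 32, -64], dtype=np.int8)
y_q = table[x_q.view(np.uint8)]  # evaluation is just one gather per element
```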
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator ELU from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro making minimal changes and not including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test\r\nPR 6: Extract common activation code into activations.cc and activation_utils.h files. Extract common test code into activation_test_utils.h file.\r\n",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46323\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46323\">No</a>\n",
"created_at": "2021-02-24T02:01:04Z"
}
],
"number": 46323,
"title": "micro: port op ELU from lite"
}
|
{
"body": "Move the reference implementation to its own header so that micro\r\ncan use it without the unrelated depedencies of reference_ops.h.\r\n\r\nPR step 2 for issue #46323",
"number": 46327,
"review_comments": [],
"title": "Extract reference for operator ELU to standalone header"
}
|
{
"commits": [
{
"message": "Extract reference for operator ELU to standalone header\n\nMove the reference implementation to its own header so that micro\ncan use it without the unrelated depedencies of reference_ops.h.\n\nPR step 2 for issue #46323"
}
],
"files": [
{
"diff": "@@ -466,6 +466,7 @@ cc_library(\n \"reference/depthwiseconv_uint8.h\",\n \"reference/dequantize.h\",\n \"reference/div.h\",\n+ \"reference/elu.h\",\n \"reference/exp.h\",\n \"reference/fill.h\",\n \"reference/floor.h\",\n@@ -581,6 +582,7 @@ cc_library(\n \"reference/depthwiseconv_uint8.h\",\n \"reference/dequantize.h\",\n \"reference/div.h\",\n+ \"reference/elu.h\",\n \"reference/exp.h\",\n \"reference/fill.h\",\n \"reference/floor.h\",",
"filename": "tensorflow/lite/kernels/internal/BUILD",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,38 @@\n+/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#ifndef TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_ELU_H_\n+#define TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_ELU_H_\n+\n+#include <cmath>\n+\n+#include \"tensorflow/lite/kernels/internal/types.h\"\n+\n+namespace tflite {\n+\n+namespace reference_ops {\n+\n+inline void Elu(const RuntimeShape& input_shape, const float* input_data,\n+ const RuntimeShape& output_shape, float* output_data) {\n+ const int flat_size = MatchingFlatSize(input_shape, output_shape);\n+ for (int i = 0; i < flat_size; ++i) {\n+ const float val = input_data[i];\n+ output_data[i] = val < 0.0f ? std::expm1(val) : val;\n+ }\n+}\n+\n+} // namespace reference_ops\n+} // namespace tflite\n+\n+#endif // TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_ELU_H_",
"filename": "tensorflow/lite/kernels/internal/reference/elu.h",
"status": "added"
},
{
"diff": "@@ -45,6 +45,7 @@ limitations under the License.\n #include \"tensorflow/lite/kernels/internal/reference/depth_to_space.h\"\n #include \"tensorflow/lite/kernels/internal/reference/dequantize.h\"\n #include \"tensorflow/lite/kernels/internal/reference/div.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/elu.h\"\n #include \"tensorflow/lite/kernels/internal/reference/exp.h\"\n #include \"tensorflow/lite/kernels/internal/reference/fill.h\"\n #include \"tensorflow/lite/kernels/internal/reference/floor.h\"\n@@ -82,15 +83,6 @@ namespace tflite {\n \n namespace reference_ops {\n \n-inline void Elu(const RuntimeShape& input_shape, const float* input_data,\n- const RuntimeShape& output_shape, float* output_data) {\n- const int flat_size = MatchingFlatSize(input_shape, output_shape);\n- for (int i = 0; i < flat_size; ++i) {\n- const float val = input_data[i];\n- output_data[i] = val < 0.0f ? std::expm1(val) : val;\n- }\n-}\n-\n template <typename T>\n inline void Relu(const RuntimeShape& input_shape, const T* input_data,\n const RuntimeShape& output_shape, T* output_data) {",
"filename": "tensorflow/lite/kernels/internal/reference/reference_ops.h",
"status": "modified"
}
]
}
|
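For reference, the float kernel moved into `elu.h` above is just an element-wise `expm1`-based ELU. An equivalent NumPy one-liner (a sketch for illustration, assuming float32 inputs) is:

```python
import numpy as np

def elu(x):
    x = np.asarray(x, dtype=np.float32)
    return np.where(x < 0.0, np.expm1(x), x)

print(elu([0.0, -6.0, 2.0, -0.1]))  # approx [ 0. -0.9975  2. -0.0952]
```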
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator ELU from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro making minimal changes and not including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test\r\nPR 6: Extract common activation code into activations.cc and activation_utils.h files. Extract common test code into activation_test_utils.h file.\r\n",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46323\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46323\">No</a>\n",
"created_at": "2021-02-24T02:01:04Z"
}
],
"number": 46323,
"title": "micro: port op ELU from lite"
}
|
{
"body": "Extract the parsing out of a switch statement case to create a\r\nstandalone function which can be called by the micro op resolver.\r\n\r\nPR step 1 for issue #46323",
"number": 46326,
"review_comments": [],
"title": "Extract a function for parsing operator ELU"
}
|
{
"commits": [
{
"message": "Extract a function for parsing operator ELU\n\nExtract the parsing out of a switch statement case to create a\nstandalone function which can be called by the micro op resolver.\n\nPR step 1 for issue #46323"
}
],
"files": [
{
"diff": "@@ -217,6 +217,10 @@ TfLiteStatus ParseOpDataTfLite(const Operator* op, BuiltinOperator op_type,\n return ParseDiv(op, error_reporter, allocator, builtin_data);\n }\n \n+ case BuiltinOperator_ELU: {\n+ return ParseElu(op, error_reporter, allocator, builtin_data);\n+ }\n+\n case BuiltinOperator_EXP: {\n return ParseExp(op, error_reporter, allocator, builtin_data);\n }\n@@ -798,7 +802,6 @@ TfLiteStatus ParseOpDataTfLite(const Operator* op, BuiltinOperator op_type,\n case BuiltinOperator_CONCAT_EMBEDDINGS:\n case BuiltinOperator_COS:\n case BuiltinOperator_CUSTOM:\n- case BuiltinOperator_ELU:\n case BuiltinOperator_EMBEDDING_LOOKUP:\n case BuiltinOperator_EQUAL:\n case BuiltinOperator_LOG_SOFTMAX:\n@@ -1160,6 +1163,14 @@ TfLiteStatus ParseDiv(const Operator* op, ErrorReporter* error_reporter,\n return kTfLiteOk;\n }\n \n+// We have this parse function instead of directly returning kTfLiteOk from the\n+// switch-case in ParseOpData because this function is used as part of the\n+// selective registration for the OpResolver implementation in micro.\n+TfLiteStatus ParseElu(const Operator*, ErrorReporter*, BuiltinDataAllocator*,\n+ void**) {\n+ return kTfLiteOk;\n+}\n+\n // We have this parse function instead of directly returning kTfLiteOk from the\n // switch-case in ParseOpData because this function is used as part of the\n // selective registration for the OpResolver implementation in micro.",
"filename": "tensorflow/lite/core/api/flatbuffer_conversions.cc",
"status": "modified"
},
{
"diff": "@@ -123,6 +123,9 @@ TfLiteStatus ParseDequantize(const Operator* op, ErrorReporter* error_reporter,\n TfLiteStatus ParseDiv(const Operator* op, ErrorReporter* error_reporter,\n BuiltinDataAllocator* allocator, void** builtin_data);\n \n+TfLiteStatus ParseElu(const Operator* op, ErrorReporter* error_reporter,\n+ BuiltinDataAllocator* allocator, void** builtin_data);\n+\n TfLiteStatus ParseEqual(const Operator* op, ErrorReporter* error_reporter,\n BuiltinDataAllocator* allocator, void** builtin_data);\n ",
"filename": "tensorflow/lite/core/api/flatbuffer_conversions.h",
"status": "modified"
}
]
}
|
{
"body": "**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Google Colab\r\n- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: No\r\n- TensorFlow installed from (source or binary): Google Colab\r\n- TensorFlow version (use command below): v2.4.0-0-g582c8d236cb 2.4.0\r\n- Python version: 3\r\n- Bazel version (if compiling from source): No\r\n- GCC/Compiler version (if compiling from source): No\r\n- CUDA/cuDNN version: No\r\n- GPU model and memory: No\r\n\r\n**Describe the current behavior**\r\nWhen using Attention or AdditiveAttention with mixed precision policy issue occurred due to wrong casting (mask casted to floatx but should be casted to scores.dtype)\r\n\r\n**Describe the expected behavior**\r\nLayers should work without issues with mixed_fp16\r\n\r\n**Standalone code to reproduce the issue**\r\nhttps://colab.research.google.com/drive/1R2MKXAIrmYBBGkij11m9aG7hLm2AEHjF?usp=sharing\r\n\r\n**Other info / logs** Include any logs or source code that would be helpful to\r\ndiagnose the problem. If including tracebacks, please include the full\r\ntraceback. Large logs and files should be attached.\r\nThe issue comes from here https://github.com/tensorflow/tensorflow/blob/v2.4.0/tensorflow/python/keras/layers/dense_attention.py#L129\r\nShould be this:\r\n```python\r\nscores -= 1.e9 * math_ops.cast(padding_mask, dtype=scores.dtype)\r\n```",
"comments": [
{
"body": "I have tried in colab with TF version 2.4, Nightly version(`2.5.0-dev20201229`) and was able to reproduce the issue. Please, find the gist [here](https://colab.research.google.com/gist/ravikyram/7b6b6b54fbf4b7df517cb537b726e371/untitled588.ipynb). Thanks!",
"created_at": "2020-12-30T09:06:13Z"
},
{
"body": "Added a PR #46321 for the fix.",
"created_at": "2021-01-10T19:10:36Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46064\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46064\">No</a>\n",
"created_at": "2021-01-22T06:12:03Z"
}
],
"number": 46064,
"title": "Wrong casting (mixed precision) in attention layers"
}
|
{
"body": "This PR tries to address the issue raised in #46064 where\r\nInvalidArgumentError error is thrown when mixed precision policy is used\r\nin keras Attention/AdditiveAttention layer.\r\n\r\nThis PR fixes #46064.\r\n\r\nThis PR also fixes #43261.\r\n\r\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>",
"number": 46321,
"review_comments": [
{
"body": "Need input from @reedwm to see if this is an appropriate way to test mixed precision behavior.\r\n\r\nLGTM otherwise!",
"created_at": "2021-01-14T17:52:05Z"
},
{
"body": "Can you also add a test to [layer_correctness_test.py](https://github.com/tensorflow/tensorflow/blob/4133fef6917fbf973229a8a8047ba2056cd8b8ee/tensorflow/python/keras/mixed_precision/layer_correctness_test.py#L75) both for Attention and AdditiveAttention? Just need to add a parameter for both classes to the parameterized test. This automatically tests the outputs and gradients are the same between mixed precision and float32.\r\n\r\nAlso better to use the internal-only [policy_scope](https://github.com/tensorflow/tensorflow/blob/7d43911faa7bae5c56e3ffe4bd3c48e75ba37e5c/tensorflow/python/keras/mixed_precision/policy.py#L530) function to set the policy, which is equivalent to the try-finally statement used here",
"created_at": "2021-01-14T19:34:25Z"
},
{
"body": "@yongtang please update the PR based on this feedback.",
"created_at": "2021-01-15T20:51:16Z"
},
{
"body": "Thanks @fchollet, the PR has been updated with comments addressed.",
"created_at": "2021-01-15T22:41:28Z"
},
{
"body": "Pass `causal=True` here, as in the next test. Since there already is an `Attention` test above that runs without causal.",
"created_at": "2021-01-15T23:58:11Z"
},
{
"body": "Thanks @reedwm, updated.",
"created_at": "2021-01-18T17:30:13Z"
}
],
"title": "Fix InvalidArgumentError error when mixed precision policy is used in Attention/AdditiveAttention layer"
}
|
{
"commits": [
{
"message": "Fix InvalidArgumentError error when mixed precision plicy is used in Attention/AdditiveAttention layer\n\nThis PR tries to address the issue raised in 46064 where\nInvalidArgumentError error is thrown when mixed precision plicy is used\nin keras Attention/AdditiveAttention layer.\n\nThis PR fixes 46064.\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
},
{
"message": "Update to use policy.policy_scope to address review comment.\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
},
{
"message": "Add test in layer_correctness_test.py, and use 65504 for float16 padding_mask in Attention.\n\nUpdate: additionally passing causal=True to Attention as well (from comment feedback)\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
}
],
"files": [
{
"diff": "@@ -126,7 +126,11 @@ def _apply_scores(self, scores, value, scores_mask=None, training=None):\n if scores_mask is not None:\n padding_mask = math_ops.logical_not(scores_mask)\n # Bias so padding positions do not contribute to attention distribution.\n- scores -= 1.e9 * math_ops.cast(padding_mask, dtype=K.floatx())\n+ # Note 65504. is the max float16 value.\n+ if scores.dtype is dtypes.float16:\n+ scores -= 65504. * math_ops.cast(padding_mask, dtype=scores.dtype)\n+ else:\n+ scores -= 1.e9 * math_ops.cast(padding_mask, dtype=scores.dtype)\n if training is None:\n training = K.learning_phase()\n weights = nn.softmax(scores)",
"filename": "tensorflow/python/keras/layers/dense_attention.py",
"status": "modified"
},
{
"diff": "@@ -24,9 +24,12 @@\n from tensorflow.python import keras\n from tensorflow.python.eager import context\n from tensorflow.python.keras import combinations\n+from tensorflow.python.keras.mixed_precision import policy\n from tensorflow.python.keras.layers import core\n from tensorflow.python.keras.layers import dense_attention\n from tensorflow.python.ops import array_ops\n+from tensorflow.python.ops import math_ops\n+from tensorflow.python.ops import random_ops\n from tensorflow.python.platform import test\n \n \n@@ -757,6 +760,16 @@ def test_serialization(self):\n new_layer = dense_attention.AdditiveAttention.from_config(config)\n self.assertEqual(new_layer.use_scale, True)\n \n+ def test_mixed_float16_policy(self):\n+ # Test case for GitHub issue:\n+ # https://github.com/tensorflow/tensorflow/issues/46064\n+ with policy.policy_scope('mixed_float16'):\n+ q = math_ops.cast(random_ops.random_uniform((2, 3, 4), seed=1), 'float16')\n+ v = math_ops.cast(random_ops.random_uniform((2, 3, 4), seed=2), 'float16')\n+ k = math_ops.cast(random_ops.random_uniform((2, 3, 4), seed=3), 'float16')\n+ layer = dense_attention.AdditiveAttention(causal=True)\n+ _ = layer([q, v, k])\n+\n \n @combinations.generate(combinations.combine(mode=['graph', 'eager']))\n class LowerTriangularMaskTest(test.TestCase, parameterized.TestCase):",
"filename": "tensorflow/python/keras/layers/dense_attention_test.py",
"status": "modified"
},
{
"diff": "@@ -139,6 +139,12 @@ def _create_model_from_layer(self, layer, input_shapes):\n (2, 2, 2)),\n ('Bidirectional',\n lambda: wrappers.Bidirectional(recurrent.SimpleRNN(units=4)), (2, 2, 2)),\n+ ('AttentionLayer',\n+ lambda: dense_attention.Attention(causal=True),\n+ [(2, 2, 3), (2, 3, 3), (2, 3, 3)]),\n+ ('AdditiveAttentionLayerCausal',\n+ lambda: dense_attention.AdditiveAttention(causal=True),\n+ [(2, 3, 4), (2, 3, 4), (2, 3, 4)]),\n )\n def test_layer(self, f32_layer_fn, input_shape, rtol=2e-3, atol=2e-3,\n input_data=None):",
"filename": "tensorflow/python/keras/mixed_precision/layer_correctness_test.py",
"status": "modified"
}
]
}
|
{
"body": "\r\n\r\n**System information**\r\n- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Mac OS 10.14.2\r\n- TensorFlow installed from (source or binary): binary\r\n- TensorFlow version (use command below): b'unknown' 1.13.0-rc1\r\n- Python version: Python 3.7.2\r\n- Bazel version (if compiling from source): N/A\r\n- GCC/Compiler version (if compiling from source): N/A\r\n- CUDA/cuDNN version: N/A\r\n- GPU model and memory: N/A\r\n\r\n**Describe the current behavior**\r\n\r\nI've encountered several operations that support int64 but not uint64, without any clear reasoning. `tf.equal`, `tf.fill`, `tf.where`, and `tf.stack`, for example, give errors like:\r\n\r\n InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'Pack' used by node stack (defined at <stdin>:4) with these attrs: [T=DT_UINT64, axis=0, N=2]\r\n\r\n\r\n**Describe the expected behavior**\r\n\r\nFunctions that work for int64s should also work for uint64s when the behavior would be the same.\r\n\r\n**Code to reproduce the issue**\r\n\r\nhttps://gist.github.com/hjfreyer/31ab2dd2d85d1a509272af1c5e011dde\r\n**Other info / logs**\r\nInclude any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.\r\n",
"comments": [
{
"body": "@hjfreyer Most of the operations in TF does not support uint64. There are some historical and reality reasons as far as I know. One is the binary size which could be really big if all signed and unsigned int (8/16/32/64) are enabled. Also, some integer types does not work on GPU for certain math operations yet. \r\n\r\nBy default, `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64` are supported for most of the ops. `uint32` and `uint64` are supported for `bitwise` ops additionally. There are also some ops that are merely memory/type manipulation (like `cast`) so `uint32` and `uint64` are supported as well.\r\n\r\nFrom the list you provide, `tf.equal`, `tf.fill`, `tf.where`, and `tf.stack`, my guess is that they will not be supported, unless `uint32` and `uint64` are added to the default list and applies to almost all other ops.\r\n\r\nThis might take some time and might need guidance from api team I believe.",
"created_at": "2019-03-09T18:44:38Z"
},
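Given the comment above that `cast` (and the bitwise ops) do accept uint64, one workaround for ops that lack a uint64 kernel is to route through int64 and cast back. This is only a sketch: it assumes a TF 2.x eager session, and it is not safe for order-dependent ops on values above 2**63 - 1.

```python
import tensorflow as tf

a = tf.constant([1, 2, 3], dtype=tf.uint64)
b = tf.constant([3, 2, 1], dtype=tf.uint64)

# Equality is preserved after casting both operands to int64.
eq = tf.equal(tf.cast(a, tf.int64), tf.cast(b, tf.int64))

# Stack via int64, then cast the result back to uint64.
stacked = tf.cast(tf.stack([tf.cast(a, tf.int64), tf.cast(b, tf.int64)]),
                  tf.uint64)

print(eq.numpy(), stacked.numpy())
```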
{
"body": "(For API owners) We can take a look when there is a specific proposal to review.",
"created_at": "2019-03-20T20:29:59Z"
},
{
"body": "> @hjfreyer Most of the operations in TF does not support uint64. There are some historical and reality reasons as far as I know. One is the binary size which could be really big if all signed and unsigned int (8/16/32/64) are enabled. Also, some integer types does not work on GPU for certain math operations yet.\r\n> \r\n> By default, `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64` are supported for most of the ops. `uint32` and `uint64` are supported for `bitwise` ops additionally. There are also some ops that are merely memory/type manipulation (like `cast`) so `uint32` and `uint64` are supported as well.\r\n> \r\n> From the list you provide, `tf.equal`, `tf.fill`, `tf.where`, and `tf.stack`, my guess is that they will not be supported, unless `uint32` and `uint64` are added to the default list and applies to almost all other ops.\r\n> \r\n> This might take some time and might need guidance from api team I believe.\r\n\r\n@yongtang is there some definitive list what is to be supported, or what shouldn't?\r\ne.g. I just stumbled over the fact that for dtype `tf.uint16` operator == is not implemented … something I would file as a bug, however if it is `wontfix`, then this should be documented somewhere …\r\n\r\nI just took a look at more combinations:\r\n```python \r\nimport tensorflow as tf\r\nfrom tensorflow.python.framework.errors_impl import NotFoundError\r\n\r\ndtypes = set([dtype for dtype in tf.dtypes.__dict__.values() if isinstance(dtype, tf.dtypes.DType)]) - {tf.resource, tf.variant}\r\n\r\nfor dtype in dtypes:\r\n a = tf.zeros(1, dtype=dtype)\r\n try:\r\n print(dtype, a == a)\r\n except NotFoundError:\r\n print(dtype, 'operator == not implemented')\r\n```\r\n```\r\n> TF_CPP_MIN_LOG_LEVEL=3 python test.py\r\n<dtype: 'float32'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'float64'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'int32'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'uint8'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'int16'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'int8'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'string'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'complex64'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'int64'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'bool'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'qint8'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'quint8'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'qint32'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'bfloat16'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'qint16'> operator == not implemented\r\n<dtype: 'quint16'> operator == not implemented\r\n<dtype: 'uint16'> operator == not implemented\r\n<dtype: 'complex128'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'float16'> tf.Tensor([ True], shape=(1,), dtype=bool)\r\n<dtype: 'uint32'> operator == not implemented\r\n<dtype: 'uint64'> operator == not implemented\r\n```\r\n\r\nSo `qint16`, `quint16`, `uint16`, `uint32`, `uint64` don't even have an equality operator implemented … that makes those datatypes very rudimentary. ",
"created_at": "2020-03-26T18:41:33Z"
},
{
"body": "@csachs I added a PR #38288 for uint16, uint32, and uint64 for tf.math.equal and tf.math.not_equal.",
"created_at": "2020-04-06T23:40:26Z"
},
{
"body": "@hjfreyer \r\n\r\nI am not seeing any issue with TF version ,1.15,2.x.Please, find the gist [here](https://colab.research.google.com/gist/ravikyram/d8387b0dd947a2537fd55d3aec87267f/untitled123.ipynb).Please,verify once and close the issue.Thanks!",
"created_at": "2020-07-14T10:59:47Z"
},
{
"body": "@ravikyram \r\nThe problems raised in my comment remain open.\r\nFurthermore, there seems a regression, in earlier versions:\r\n`tf.zeros(1, tf.qint16)` worked, now it doesn't.\r\nShould I open another issue?",
"created_at": "2020-07-15T13:28:41Z"
},
{
"body": "@csachs The tf.zeros issue is being addressed in PR #41421 (tf.zeros uses FillOp implicitly).",
"created_at": "2020-07-15T16:16:42Z"
},
{
"body": "@csachs \r\n\r\nThe tf.zeros issue is being addressed in PR #41421 also got merged. Can you please verify once.Thanks!",
"created_at": "2020-07-27T15:38:18Z"
},
{
"body": "Added a PR #41795 to cover qint8/quint8/qint16/quint16 for `tf.math.[equal|not_equal]`.",
"created_at": "2020-07-28T02:17:46Z"
},
{
"body": "> @csachs\r\n> \r\n> The tf.zeros issue is being addressed in PR #41421 also got merged. Can you please verify once.Thanks!\r\n\r\nThank you @ravikyram , using version 2.4.0-dev20200728 , the `tf.zeros` issue is fixed.\r\n\r\nHowever the larger code snippet above does not run (producing a new error); I guess it will resolve with https://github.com/tensorflow/tensorflow/pull/41795 .",
"created_at": "2020-07-28T19:54:27Z"
},
{
"body": "Most of the dtypes in https://github.com/tensorflow/tensorflow/issues/26069#issuecomment-604608722 are already supported now except for `tf.qint32` (with `tf.zeros`). Added a PR #46313 to add `tf.qint32` for `tf.zeros`.",
"created_at": "2021-01-10T05:05:12Z"
},
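With a build that includes PR #46313, the new path can be exercised along the lines of the test that PR adds; qint32 has no NumPy counterpart, so the values are cast to int32 before inspection. Sketch only, assuming TF 2.x eager execution:

```python
import tensorflow as tf

z = tf.zeros([2, 3], dtype=tf.qint32)
print(z.dtype, z.shape)                    # <dtype: 'qint32'> (2, 3)

# Cast to int32 so the values can be compared against NumPy.
print(tf.cast(z, tf.int32).numpy().any())  # False -- every element is zero
```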
{
"body": "The support of tf.qint16 and tf.quint16 for tf.stack is being added in #46404",
"created_at": "2021-01-13T18:30:02Z"
},
{
"body": "@hjfreyer,\r\nWith @yongtang's PRs, many operations support uint64 now (**`Tensorflow Version 2.4.1`**). Please find [the Gist](https://colab.research.google.com/gist/rmothukuru/4f4124c514714df8fe4456a3c589209f/gh_26069.ipynb) of the Working Code. Thanks! ",
"created_at": "2021-04-22T08:50:22Z"
},
{
"body": "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.\n",
"created_at": "2021-04-29T09:35:35Z"
},
{
"body": "Closing as stale. Please reopen if you'd like to work on this further.\n",
"created_at": "2021-05-06T09:37:57Z"
},
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/26069\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/26069\">No</a>\n",
"created_at": "2021-05-06T09:38:03Z"
}
],
"number": 26069,
"title": "Many operations don't support uint64"
}
|
{
"body": "This PR is part of #26069 where `tf.zeros` does not support\r\nbasic type of `tf.qint32` while all other qtypes have been supported\r\n(tf.{qint8|qint16|quint8|quint16} supported).\r\n\r\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>",
"number": 46313,
"review_comments": [],
"title": "Add tf.qint32 support for tf.zeros"
}
|
{
"commits": [
{
"message": "Add tf.qint32 support for tf.zeros\n\nThis PR is part of 26069 where tf.zeros does not support\nbasic type of tf.qint32 while all other qtypes have been supported\n(tf.{qint8|qint16|quint8|quint16} supported).\n\nSigned-off-by: Yong Tang <yong.tang.github@outlook.com>"
}
],
"files": [
{
"diff": "@@ -187,6 +187,7 @@ REGISTER_KERNEL(CPU, quint8);\n REGISTER_KERNEL(CPU, quint16);\n REGISTER_KERNEL(CPU, qint8);\n REGISTER_KERNEL(CPU, qint16);\n+REGISTER_KERNEL(CPU, qint32);\n #undef REGISTER_CPU_KERNEL\n \n ",
"filename": "tensorflow/core/kernels/constant_op.cc",
"status": "modified"
},
{
"diff": "@@ -109,6 +109,7 @@ DEFINE_FILL_CPU(quint8);\n DEFINE_FILL_CPU(quint16);\n DEFINE_FILL_CPU(qint8);\n DEFINE_FILL_CPU(qint16);\n+DEFINE_FILL_CPU(qint32);\n #undef DEFINE_FILL_CPU\n \n ",
"filename": "tensorflow/core/kernels/fill_functor.cc",
"status": "modified"
},
{
"diff": "@@ -478,6 +478,17 @@ def testQint16Dtype(self):\n z_value = self.evaluate(math_ops.cast(z, dtypes_lib.int32))\n self.assertFalse(np.any(z_value))\n \n+ @test_util.disable_tfrt(\"b/169901260\")\n+ def testQint32Dtype(self):\n+ dtype = dtypes_lib.qint32\n+ z = array_ops.zeros([2, 3], dtype=dtype)\n+ self.assertEqual(z.dtype, dtype)\n+ self.assertEqual([2, 3], z.get_shape())\n+ # cast to int32 so that it can be compred with numpy\n+ # where [qint|quint][8|16] are not available.\n+ z_value = self.evaluate(math_ops.cast(z, dtypes_lib.int32))\n+ self.assertFalse(np.any(z_value))\n+\n \n class ZerosLikeTest(test.TestCase):\n ",
"filename": "tensorflow/python/kernel_tests/constant_op_test.py",
"status": "modified"
}
]
}
|
{
"body": "\r\n@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator FLOOR_MOD from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/floor_div.cc into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro without making any changes or including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/45749\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/45749\">No</a>\n",
"created_at": "2021-04-12T10:37:50Z"
}
],
"number": 45749,
"title": "micro: port op FLOOR_MOD from lite"
}
|
{
"body": "Implement skeleton (non-working) code for operator and test.\r\nHeader files changed.\r\nNamespaces changed.\r\nSome original code deleted.\r\nSome original code modified.\r\n\r\nThis represents PR step 4 of the work to port operator FLOOR_MOD as tracked in Issue #45749",
"number": 46312,
"review_comments": [],
"title": "micro: prepare to port operator FLOOR_MOD kernel from lite with test"
}
|
{
"commits": [
{
"message": "micro: prepare to port operator FLOOR_MOD kernel from lite with test\n\nImplement skeleton (non-working) code for operator and test.\nHeader files changed.\nNamespaces changed.\nSome original code deleted.\nSome original code modified.\n\nThis represents PR step 4 of the work to port operator FLOOR_MOD as tracked in Issue #45749"
}
],
"files": [
{
"diff": "@@ -12,21 +12,21 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n ==============================================================================*/\n-#include <stddef.h>\n-#include <stdint.h>\n+\n+#include \"tensorflow/lite/kernels/internal/reference/floor_mod.h\"\n \n #include \"tensorflow/lite/c/common.h\"\n #include \"tensorflow/lite/kernels/internal/reference/binary_function.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/reference_ops.h\"\n-#include \"tensorflow/lite/kernels/internal/tensor.h\"\n-#include \"tensorflow/lite/kernels/internal/tensor_ctypes.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/process_broadcast_shapes.h\"\n+#include \"tensorflow/lite/kernels/internal/types.h\"\n #include \"tensorflow/lite/kernels/kernel_util.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_util.h\"\n \n // OLD-TODO(b/117523611): We should factor out a binary_op and put binary ops\n // there.\n namespace tflite {\n namespace ops {\n-namespace builtin {\n+namespace micro {\n namespace floor_mod {\n namespace {\n \n@@ -35,30 +35,16 @@ constexpr int kInputTensor1 = 0;\n constexpr int kInputTensor2 = 1;\n constexpr int kOutputTensor = 0;\n \n-// Op data for floor_mod op.\n-struct OpData {\n- bool requires_broadcast;\n-};\n-\n // OLD-TODO(b/117912880): Support quantization.\n \n void* Init(TfLiteContext* context, const char* buffer, size_t length) {\n- auto* data = new OpData;\n- data->requires_broadcast = false;\n- return data;\n-}\n-\n-void Free(TfLiteContext* context, void* buffer) {\n- delete reinterpret_cast<OpData*>(buffer);\n+ return nullptr;\n }\n \n TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n TF_LITE_ENSURE_EQ(context, NumInputs(node), 2);\n TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);\n \n- // Reinterprete the opaque data provided by user.\n- OpData* data = reinterpret_cast<OpData*>(node->user_data);\n-\n const TfLiteTensor* input1;\n TF_LITE_ENSURE_OK(context,\n GetInputSafe(context, node, kInputTensor1, &input1));\n@@ -79,17 +65,7 @@ TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n }\n output->type = type;\n \n- data->requires_broadcast = !HaveSameShapes(input1, input2);\n-\n- TfLiteIntArray* output_size = nullptr;\n- if (data->requires_broadcast) {\n- TF_LITE_ENSURE_OK(context, CalculateShapeForBroadcast(\n- context, input1, input2, &output_size));\n- } else {\n- output_size = TfLiteIntArrayCopy(input1->dims);\n- }\n-\n- return context->ResizeTensor(context, output, output_size);\n+ return kTfLiteError;\n }\n \n template <typename T>\n@@ -125,8 +101,6 @@ TfLiteStatus EvalImpl(TfLiteContext* context, bool requires_broadcast,\n }\n \n TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n- OpData* data = reinterpret_cast<OpData*>(node->user_data);\n-\n const TfLiteTensor* input1;\n TF_LITE_ENSURE_OK(context,\n GetInputSafe(context, node, kInputTensor1, &input1));\n@@ -137,17 +111,19 @@ TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n TF_LITE_ENSURE_OK(context,\n GetOutputSafe(context, node, kOutputTensor, &output));\n \n+ bool requires_broadcast = false;\n+\n switch (input1->type) {\n case kTfLiteInt32: {\n- return EvalImpl<int32_t>(context, data->requires_broadcast, input1,\n- input2, output);\n+ return EvalImpl<int32_t>(context, requires_broadcast, input1, input2,\n+ output);\n }\n case kTfLiteInt64: {\n- return EvalImpl<int64_t>(context, 
data->requires_broadcast, input1,\n- input2, output);\n+ return EvalImpl<int64_t>(context, requires_broadcast, input1, input2,\n+ output);\n }\n case kTfLiteFloat32: {\n- return EvalImpl<float>(context, data->requires_broadcast, input1, input2,\n+ return EvalImpl<float>(context, requires_broadcast, input1, input2,\n output);\n }\n default: {\n@@ -161,14 +137,8 @@ TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n } // namespace\n } // namespace floor_mod\n \n-TfLiteRegistration* Register_FLOOR_MOD() {\n- // Init, Free, Prepare, Eval are satisfying the Interface required by\n- // TfLiteRegistration.\n- static TfLiteRegistration r = {floor_mod::Init, floor_mod::Free,\n- floor_mod::Prepare, floor_mod::Eval};\n- return &r;\n-}\n+TfLiteRegistration* Register_FLOOR_MOD() { return nullptr; }\n \n-} // namespace builtin\n+} // namespace micro\n } // namespace ops\n } // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/floor_mod.cc",
"status": "modified"
},
{
"diff": "@@ -12,118 +12,97 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n ==============================================================================*/\n-#include <stdint.h>\n \n-#include <vector>\n+#include <type_traits>\n \n-#include \"tensorflow/lite/kernels/test_util.h\"\n-#include \"tensorflow/lite/schema/schema_generated.h\"\n+#include \"tensorflow/lite/c/builtin_op_data.h\"\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_runner.h\"\n+#include \"tensorflow/lite/micro/test_helpers.h\"\n+#include \"tensorflow/lite/micro/testing/micro_test.h\"\n \n namespace tflite {\n-namespace {\n-\n-using ::testing::ElementsAre;\n-\n-template <typename T>\n-class FloorModModel : public SingleOpModel {\n- public:\n- FloorModModel(const TensorData& input1, const TensorData& input2,\n- const TensorData& output) {\n- input1_ = AddInput(input1);\n- input2_ = AddInput(input2);\n- output_ = AddOutput(output);\n- SetBuiltinOp(BuiltinOperator_FLOOR_MOD, BuiltinOptions_FloorModOptions,\n- CreateFloorModOptions(builder_).Union());\n- BuildInterpreter({GetShape(input1_), GetShape(input2_)});\n- }\n-\n- int input1() { return input1_; }\n- int input2() { return input2_; }\n-\n- std::vector<T> GetOutput() { return ExtractVector<T>(output_); }\n- std::vector<int> GetOutputShape() { return GetTensorShape(output_); }\n-\n- private:\n- int input1_;\n- int input2_;\n- int output_;\n-};\n-\n-TEST(FloorModModel, Simple) {\n- FloorModModel<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n- {TensorType_INT32, {1, 2, 2, 1}},\n- {TensorType_INT32, {}});\n+namespace testing {\n+namespace {}\n+\n+TF_LITE_MICRO_TESTS_BEGIN\n+\n+TF_LITE_MICRO_TEST(FloorModSimple) {\n+#ifdef notdef\n+ FloorMod<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n+ {TensorType_INT32, {1, 2, 2, 1}},\n+ {TensorType_INT32, {}});\n model.PopulateTensor<int32_t>(model.input1(), {10, 9, 11, 3});\n model.PopulateTensor<int32_t>(model.input2(), {2, 2, 3, 4});\n- model.Invoke();\n- EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n EXPECT_THAT(model.GetOutput(), ElementsAre(0, 1, 2, 3));\n+#endif // notdef\n }\n \n-TEST(FloorModModel, NegativeValue) {\n- FloorModModel<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n- {TensorType_INT32, {1, 2, 2, 1}},\n- {TensorType_INT32, {}});\n+TF_LITE_MICRO_TEST(FloorModNegativeValue) {\n+#ifdef notdef\n+ FloorMod<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n+ {TensorType_INT32, {1, 2, 2, 1}},\n+ {TensorType_INT32, {}});\n model.PopulateTensor<int32_t>(model.input1(), {10, -9, -11, 7});\n model.PopulateTensor<int32_t>(model.input2(), {2, 2, -3, -4});\n- model.Invoke();\n- EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n EXPECT_THAT(model.GetOutput(), ElementsAre(0, 1, -2, -1));\n+#endif // notdef\n }\n \n-TEST(FloorModModel, BroadcastFloorMod) {\n- FloorModModel<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n- {TensorType_INT32, {1}}, {TensorType_INT32, {}});\n+TF_LITE_MICRO_TEST(FloorModBroadcast) {\n+#ifdef notdef\n+ FloorMod<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n+ {TensorType_INT32, {1}}, {TensorType_INT32, {}});\n model.PopulateTensor<int32_t>(model.input1(), {10, -9, -11, 7});\n model.PopulateTensor<int32_t>(model.input2(), {-3});\n- model.Invoke();\n- EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n EXPECT_THAT(model.GetOutput(), ElementsAre(-2, 0, -2, -2));\n+#endif // 
notdef\n }\n \n-TEST(FloorModModel, Int64WithBroadcast) {\n- FloorModModel<int64_t> model({TensorType_INT64, {1, 2, 2, 1}},\n- {TensorType_INT64, {1}}, {TensorType_INT64, {}});\n+TF_LITE_MICRO_TEST(FloorModInt64WithBroadcast) {\n+#ifdef notdef\n+ FloorMod<int64_t> model({TensorType_INT64, {1, 2, 2, 1}},\n+ {TensorType_INT64, {1}}, {TensorType_INT64, {}});\n model.PopulateTensor<int64_t>(model.input1(), {10, -9, -11, (1LL << 34) + 9});\n model.PopulateTensor<int64_t>(model.input2(), {-(1LL << 33)});\n- model.Invoke();\n- EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n EXPECT_THAT(model.GetOutput(),\n ElementsAre(-8589934582, -9, -11, -8589934583));\n+#endif // notdef\n }\n \n-TEST(FloorModModel, FloatSimple) {\n- FloorModModel<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n- {TensorType_FLOAT32, {1, 2, 2, 1}},\n- {TensorType_FLOAT32, {}});\n+TF_LITE_MICRO_TEST(FloorModFloatSimple) {\n+#ifdef notdef\n+ FloorMod<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n+ {TensorType_FLOAT32, {1, 2, 2, 1}},\n+ {TensorType_FLOAT32, {}});\n model.PopulateTensor<float>(model.input1(), {10, 9, 11, 3});\n model.PopulateTensor<float>(model.input2(), {2, 2, 3, 4});\n- model.Invoke();\n- EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n EXPECT_THAT(model.GetOutput(), ElementsAre(0, 1, 2, 3));\n+#endif // notdef\n }\n \n-TEST(FloorModModel, FloatNegativeValue) {\n- FloorModModel<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n- {TensorType_FLOAT32, {1, 2, 2, 1}},\n- {TensorType_FLOAT32, {}});\n+TF_LITE_MICRO_TEST(FloorModFloatNegativeValue) {\n+#ifdef notdef\n+ FloorMod<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n+ {TensorType_FLOAT32, {1, 2, 2, 1}},\n+ {TensorType_FLOAT32, {}});\n model.PopulateTensor<float>(model.input1(), {10, -9, -11, 7});\n model.PopulateTensor<float>(model.input2(), {2, 2, -3, -4});\n- model.Invoke();\n- EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n EXPECT_THAT(model.GetOutput(), ElementsAre(0, 1, -2, -1));\n+#endif // notdef\n }\n \n-TEST(FloorModModel, FloatBroadcastFloorMod) {\n- FloorModModel<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n- {TensorType_FLOAT32, {1}},\n- {TensorType_FLOAT32, {}});\n+TF_LITE_MICRO_TEST(FloorModFloatBroadcast) {\n+#ifdef notdef\n+ FloorMod<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n+ {TensorType_FLOAT32, {1}}, {TensorType_FLOAT32, {}});\n model.PopulateTensor<float>(model.input1(), {10, -9, -11, 7});\n model.PopulateTensor<float>(model.input2(), {-3});\n- model.Invoke();\n- EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n EXPECT_THAT(model.GetOutput(), ElementsAre(-2, 0, -2, -2));\n+#endif // notdef\n }\n \n-} // namespace\n+TF_LITE_MICRO_TESTS_END\n+\n+} // namespace testing\n } // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/floor_mod_test.cc",
"status": "modified"
}
]
}
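For reference while the ported tests above are still stubbed out behind `#ifdef notdef`: floor-mod follows the sign of the divisor, the same semantics as Python's `%` and NumPy's `np.mod`, so the expected values in the tests can be sanity-checked with a short sketch (not part of the PR):

```python
import numpy as np

x = np.array([10, -9, -11, 7])
y = np.array([2, 2, -3, -4])

# Matches the expectations in the FloorModNegativeValue test: (0, 1, -2, -1).
print(np.mod(x, y))  # [ 0  1 -2 -1]
```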
|
{
"body": "\r\n@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator FLOOR_MOD from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/floor_div.cc into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro without making any changes or including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/45749\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/45749\">No</a>\n",
"created_at": "2021-04-12T10:37:50Z"
}
],
"number": 45749,
"title": "micro: port op FLOOR_MOD from lite"
}
|
{
"body": "This is a copy with minimal modification of the kernel and test for\r\noperator FLOOR_MOD from tensorflow/lite/kernels.\r\nAdaptations to micro and addition to the micro build to follow.\r\n\r\nPR step 3 for issue #45749",
"number": 46311,
"review_comments": [],
"title": "micro: copy operator FLOOR_MOD kernel from lite"
}
|
{
"commits": [
{
"message": "micro: copy operator FLOOR_MOD kernel from lite\n\nThis is a copy with minimal modification of the kernel and test for\noperator FLOOR_MOD from tensorflow/lite/kernels.\nAdaptations to micro and addition to the micro build to follow.\n\nPR step 3 for issue #45749"
},
{
"message": "Remove include files that do not pass backend tests\n\nRemoved gmock/gtest header files"
}
],
"files": [
{
"diff": "@@ -0,0 +1,174 @@\n+/* Copyright 2018 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#include <stddef.h>\n+#include <stdint.h>\n+\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/binary_function.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/reference_ops.h\"\n+#include \"tensorflow/lite/kernels/internal/tensor.h\"\n+#include \"tensorflow/lite/kernels/internal/tensor_ctypes.h\"\n+#include \"tensorflow/lite/kernels/kernel_util.h\"\n+\n+// OLD-TODO(b/117523611): We should factor out a binary_op and put binary ops\n+// there.\n+namespace tflite {\n+namespace ops {\n+namespace builtin {\n+namespace floor_mod {\n+namespace {\n+\n+// Input/output tensor index.\n+constexpr int kInputTensor1 = 0;\n+constexpr int kInputTensor2 = 1;\n+constexpr int kOutputTensor = 0;\n+\n+// Op data for floor_mod op.\n+struct OpData {\n+ bool requires_broadcast;\n+};\n+\n+// OLD-TODO(b/117912880): Support quantization.\n+\n+void* Init(TfLiteContext* context, const char* buffer, size_t length) {\n+ auto* data = new OpData;\n+ data->requires_broadcast = false;\n+ return data;\n+}\n+\n+void Free(TfLiteContext* context, void* buffer) {\n+ delete reinterpret_cast<OpData*>(buffer);\n+}\n+\n+TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n+ TF_LITE_ENSURE_EQ(context, NumInputs(node), 2);\n+ TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);\n+\n+ // Reinterprete the opaque data provided by user.\n+ OpData* data = reinterpret_cast<OpData*>(node->user_data);\n+\n+ const TfLiteTensor* input1;\n+ TF_LITE_ENSURE_OK(context,\n+ GetInputSafe(context, node, kInputTensor1, &input1));\n+ const TfLiteTensor* input2;\n+ TF_LITE_ENSURE_OK(context,\n+ GetInputSafe(context, node, kInputTensor2, &input2));\n+ TfLiteTensor* output;\n+ TF_LITE_ENSURE_OK(context,\n+ GetOutputSafe(context, node, kOutputTensor, &output));\n+\n+ TF_LITE_ENSURE_TYPES_EQ(context, input1->type, input2->type);\n+\n+ const TfLiteType type = input1->type;\n+ if (type != kTfLiteInt32 && type != kTfLiteFloat32 && type != kTfLiteInt64) {\n+ TF_LITE_KERNEL_LOG(context, \"Type '%s' is not supported by floor_mod.\",\n+ TfLiteTypeGetName(type));\n+ return kTfLiteError;\n+ }\n+ output->type = type;\n+\n+ data->requires_broadcast = !HaveSameShapes(input1, input2);\n+\n+ TfLiteIntArray* output_size = nullptr;\n+ if (data->requires_broadcast) {\n+ TF_LITE_ENSURE_OK(context, CalculateShapeForBroadcast(\n+ context, input1, input2, &output_size));\n+ } else {\n+ output_size = TfLiteIntArrayCopy(input1->dims);\n+ }\n+\n+ return context->ResizeTensor(context, output, output_size);\n+}\n+\n+template <typename T>\n+TfLiteStatus EvalImpl(TfLiteContext* context, bool requires_broadcast,\n+ const TfLiteTensor* input1, const TfLiteTensor* input2,\n+ TfLiteTensor* output) {\n+ const T* denominator_data = GetTensorData<T>(input2);\n+\n+ if (input2->type == kTfLiteInt32 || input2->type 
== kTfLiteInt64) {\n+ // Validate the denominator only for integer.\n+ const int num_elements = NumElements(input2);\n+ for (int i = 0; i < num_elements; ++i) {\n+ if (denominator_data[i] == 0) {\n+ TF_LITE_KERNEL_LOG(context, \"Division by 0\");\n+ return kTfLiteError;\n+ }\n+ }\n+ }\n+ if (requires_broadcast) {\n+ reference_ops::BroadcastBinaryFunction4DSlow<T, T, T>(\n+ GetTensorShape(input1), GetTensorData<T>(input1),\n+ GetTensorShape(input2), denominator_data, GetTensorShape(output),\n+ GetTensorData<T>(output), reference_ops::FloorMod<T>);\n+ } else {\n+ reference_ops::BinaryFunction<T, T, T>(\n+ GetTensorShape(input1), GetTensorData<T>(input1),\n+ GetTensorShape(input2), GetTensorData<T>(input2),\n+ GetTensorShape(output), GetTensorData<T>(output),\n+ reference_ops::FloorMod<T>);\n+ }\n+\n+ return kTfLiteOk;\n+}\n+\n+TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n+ OpData* data = reinterpret_cast<OpData*>(node->user_data);\n+\n+ const TfLiteTensor* input1;\n+ TF_LITE_ENSURE_OK(context,\n+ GetInputSafe(context, node, kInputTensor1, &input1));\n+ const TfLiteTensor* input2;\n+ TF_LITE_ENSURE_OK(context,\n+ GetInputSafe(context, node, kInputTensor2, &input2));\n+ TfLiteTensor* output;\n+ TF_LITE_ENSURE_OK(context,\n+ GetOutputSafe(context, node, kOutputTensor, &output));\n+\n+ switch (input1->type) {\n+ case kTfLiteInt32: {\n+ return EvalImpl<int32_t>(context, data->requires_broadcast, input1,\n+ input2, output);\n+ }\n+ case kTfLiteInt64: {\n+ return EvalImpl<int64_t>(context, data->requires_broadcast, input1,\n+ input2, output);\n+ }\n+ case kTfLiteFloat32: {\n+ return EvalImpl<float>(context, data->requires_broadcast, input1, input2,\n+ output);\n+ }\n+ default: {\n+ TF_LITE_KERNEL_LOG(context, \"Type '%s' is not supported by floor_mod.\",\n+ TfLiteTypeGetName(input1->type));\n+ return kTfLiteError;\n+ }\n+ }\n+}\n+\n+} // namespace\n+} // namespace floor_mod\n+\n+TfLiteRegistration* Register_FLOOR_MOD() {\n+ // Init, Free, Prepare, Eval are satisfying the Interface required by\n+ // TfLiteRegistration.\n+ static TfLiteRegistration r = {floor_mod::Init, floor_mod::Free,\n+ floor_mod::Prepare, floor_mod::Eval};\n+ return &r;\n+}\n+\n+} // namespace builtin\n+} // namespace ops\n+} // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/floor_mod.cc",
"status": "added"
},
{
"diff": "@@ -0,0 +1,129 @@\n+/* Copyright 2018 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#include <stdint.h>\n+\n+#include <vector>\n+\n+#include \"tensorflow/lite/kernels/test_util.h\"\n+#include \"tensorflow/lite/schema/schema_generated.h\"\n+\n+namespace tflite {\n+namespace {\n+\n+using ::testing::ElementsAre;\n+\n+template <typename T>\n+class FloorModModel : public SingleOpModel {\n+ public:\n+ FloorModModel(const TensorData& input1, const TensorData& input2,\n+ const TensorData& output) {\n+ input1_ = AddInput(input1);\n+ input2_ = AddInput(input2);\n+ output_ = AddOutput(output);\n+ SetBuiltinOp(BuiltinOperator_FLOOR_MOD, BuiltinOptions_FloorModOptions,\n+ CreateFloorModOptions(builder_).Union());\n+ BuildInterpreter({GetShape(input1_), GetShape(input2_)});\n+ }\n+\n+ int input1() { return input1_; }\n+ int input2() { return input2_; }\n+\n+ std::vector<T> GetOutput() { return ExtractVector<T>(output_); }\n+ std::vector<int> GetOutputShape() { return GetTensorShape(output_); }\n+\n+ private:\n+ int input1_;\n+ int input2_;\n+ int output_;\n+};\n+\n+TEST(FloorModModel, Simple) {\n+ FloorModModel<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n+ {TensorType_INT32, {1, 2, 2, 1}},\n+ {TensorType_INT32, {}});\n+ model.PopulateTensor<int32_t>(model.input1(), {10, 9, 11, 3});\n+ model.PopulateTensor<int32_t>(model.input2(), {2, 2, 3, 4});\n+ model.Invoke();\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n+ EXPECT_THAT(model.GetOutput(), ElementsAre(0, 1, 2, 3));\n+}\n+\n+TEST(FloorModModel, NegativeValue) {\n+ FloorModModel<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n+ {TensorType_INT32, {1, 2, 2, 1}},\n+ {TensorType_INT32, {}});\n+ model.PopulateTensor<int32_t>(model.input1(), {10, -9, -11, 7});\n+ model.PopulateTensor<int32_t>(model.input2(), {2, 2, -3, -4});\n+ model.Invoke();\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n+ EXPECT_THAT(model.GetOutput(), ElementsAre(0, 1, -2, -1));\n+}\n+\n+TEST(FloorModModel, BroadcastFloorMod) {\n+ FloorModModel<int32_t> model({TensorType_INT32, {1, 2, 2, 1}},\n+ {TensorType_INT32, {1}}, {TensorType_INT32, {}});\n+ model.PopulateTensor<int32_t>(model.input1(), {10, -9, -11, 7});\n+ model.PopulateTensor<int32_t>(model.input2(), {-3});\n+ model.Invoke();\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n+ EXPECT_THAT(model.GetOutput(), ElementsAre(-2, 0, -2, -2));\n+}\n+\n+TEST(FloorModModel, Int64WithBroadcast) {\n+ FloorModModel<int64_t> model({TensorType_INT64, {1, 2, 2, 1}},\n+ {TensorType_INT64, {1}}, {TensorType_INT64, {}});\n+ model.PopulateTensor<int64_t>(model.input1(), {10, -9, -11, (1LL << 34) + 9});\n+ model.PopulateTensor<int64_t>(model.input2(), {-(1LL << 33)});\n+ model.Invoke();\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n+ EXPECT_THAT(model.GetOutput(),\n+ ElementsAre(-8589934582, -9, -11, 
-8589934583));\n+}\n+\n+TEST(FloorModModel, FloatSimple) {\n+ FloorModModel<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n+ {TensorType_FLOAT32, {1, 2, 2, 1}},\n+ {TensorType_FLOAT32, {}});\n+ model.PopulateTensor<float>(model.input1(), {10, 9, 11, 3});\n+ model.PopulateTensor<float>(model.input2(), {2, 2, 3, 4});\n+ model.Invoke();\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n+ EXPECT_THAT(model.GetOutput(), ElementsAre(0, 1, 2, 3));\n+}\n+\n+TEST(FloorModModel, FloatNegativeValue) {\n+ FloorModModel<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n+ {TensorType_FLOAT32, {1, 2, 2, 1}},\n+ {TensorType_FLOAT32, {}});\n+ model.PopulateTensor<float>(model.input1(), {10, -9, -11, 7});\n+ model.PopulateTensor<float>(model.input2(), {2, 2, -3, -4});\n+ model.Invoke();\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n+ EXPECT_THAT(model.GetOutput(), ElementsAre(0, 1, -2, -1));\n+}\n+\n+TEST(FloorModModel, FloatBroadcastFloorMod) {\n+ FloorModModel<float> model({TensorType_FLOAT32, {1, 2, 2, 1}},\n+ {TensorType_FLOAT32, {1}},\n+ {TensorType_FLOAT32, {}});\n+ model.PopulateTensor<float>(model.input1(), {10, -9, -11, 7});\n+ model.PopulateTensor<float>(model.input2(), {-3});\n+ model.Invoke();\n+ EXPECT_THAT(model.GetOutputShape(), ElementsAre(1, 2, 2, 1));\n+ EXPECT_THAT(model.GetOutput(), ElementsAre(-2, 0, -2, -2));\n+}\n+\n+} // namespace\n+} // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/floor_mod_test.cc",
"status": "added"
}
]
}
|
{
"body": "@tensorflow/micro\r\n\r\nWhile porting OPs from lite to micro, one of the intermediate steps results in test code copied over from lite that has gtest headers. While this code is not (and can not) be compiled for TFLM it still trips up the formatting checks as described in https://github.com/tensorflow/tensorflow/pull/46159#discussion_r553568396.\r\n\r\nDeleting these includes (the workaround in https://github.com/tensorflow/tensorflow/pull/46159#discussion_r553568396) works just fine and it would be nice to give pull request authors this feedback directly via the TF Micro CI instead of waiting for the change to be imported internally before the error is detected.\r\n\r\nThe overarching goal is to get to a place where if a pull request passes the external CI, it also passes the internal CI (unless the code that a PR is breaking is internal-only).",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46297\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46297\">No</a>\n",
"created_at": "2021-01-11T22:05:34Z"
}
],
"number": 46297,
"title": "gtest headers result in conflicting clang-format requirements"
}
|
{
"body": " * gtest includes will be flagged as errors\r\n * use of the error reporter without the wrapper macros will be an error (https://github.com/tensorflow/tensorflow/pull/45457#discussion_r547434839)\r\n * assert can not be used (static_assert is ok).\r\n\r\nManually tested by adding the disallowed strings to the code and confirmed that an error is raised.\r\n\r\nFixes #46297\r\nFixes http://b/175657165\r\n",
"number": 46298,
"review_comments": [],
"title": "Allow more of the internal checks to have open-source counterparts."
}
|
{
"commits": [
{
"message": "Allow more of the internal checks to have open-source counterparts.\n\n * gtest includes will be flagged as errors\n * use of the error reporter without the wrapper macros will be an error.\n * assert can not be used (static_assert is ok).\n\nManually tested by adding the disallowed strings to the code and\nconfirmed that an error is raised.\n\nFixes #46297\nFixes http://b/175657165"
},
{
"message": "Fix the checks to determine pass/fail for test_code_style."
}
],
"files": [
{
"diff": "@@ -81,7 +81,7 @@ struct CenterSizeEncoding {\n float h;\n float w;\n };\n-// We make sure that the memory allocations are contiguous with static assert.\n+// We make sure that the memory allocations are contiguous with static_assert.\n static_assert(sizeof(BoxCornerEncoding) == sizeof(float) * kNumCoordBox,\n \"Size of BoxCornerEncoding is 4 float values\");\n static_assert(sizeof(CenterSizeEncoding) == sizeof(float) * kNumCoordBox,",
"filename": "tensorflow/lite/micro/kernels/detection_postprocess.cc",
"status": "modified"
},
{
"diff": "@@ -33,3 +33,18 @@ function readable_run {\n echo \"Command completed successfully at $(date)\"\n set -x\n }\n+\n+# Check if the regex ${1} is to be found in the pathspec ${2}.\n+# An optional error messsage can be passed with ${3}\n+function check_contents() {\n+ GREP_OUTPUT=$(git grep -E -rn ${1} -- ${2})\n+\n+ if [ \"${GREP_OUTPUT}\" ]; then\n+ echo \"==============================================\"\n+ echo \"Found matches for ${1} that are not permitted.\"\n+ echo \"${3}\"\n+ echo \"==============================================\"\n+ echo \"${GREP_OUTPUT}\"\n+ return 1\n+ fi\n+}",
"filename": "tensorflow/lite/micro/tools/ci_build/helper_functions.sh",
"status": "modified"
},
{
"diff": "@@ -26,8 +26,9 @@ source tensorflow/lite/micro/tools/ci_build/helper_functions.sh\n # and clang-format checks.\n make -f tensorflow/lite/micro/tools/make/Makefile third_party_downloads\n \n-# Explicitly disable exit on error so that we can properly clean up the\n-# temporary git repository even when one of the scripts fail with an error code.\n+# Explicitly disable exit on error so that we can report all the style errors in\n+# one pass and clean up the temporary git repository even when one of the\n+# scripts fail with an error code.\n set +e\n \n # The pigweed scripts only work from a git repository and the Tensorflow CI\n@@ -42,7 +43,9 @@ if [[ ${1} == \"PRESUBMIT\" ]]; then\n git commit -a -m \"Commit for a temporary repository.\" > /dev/null\n fi\n \n-# Check for license with the necessary exclusions.\n+############################################################\n+# License Check\n+############################################################\n micro/tools/make/downloads/pigweed/pw_presubmit/py/pw_presubmit/pigweed_presubmit.py \\\n kernels/internal/reference/ \\\n micro/ \\\n@@ -65,10 +68,12 @@ micro/tools/make/downloads/pigweed/pw_presubmit/py/pw_presubmit/pigweed_presubmi\n \n LICENSE_CHECK_RESULT=$?\n \n-# Check that the TFLM-only code is clang-formatted We are currently ignoring\n-# Python files (with yapf as the formatter) because that needs additional setup.\n-# We are also ignoring the markdown files to allow for a more gradual rollout of\n-# this presubmit check.\n+############################################################\n+# Formatting Check\n+############################################################\n+# We are currently ignoring Python files (with yapf as the formatter) because\n+# that needs additional setup. We are also ignoring the markdown files to allow\n+# for a more gradual rollout of this presubmit check.\n micro/tools/make/downloads/pigweed/pw_presubmit/py/pw_presubmit/format_code.py \\\n kernels/internal/reference/ \\\n micro/ \\\n@@ -80,6 +85,44 @@ micro/tools/make/downloads/pigweed/pw_presubmit/py/pw_presubmit/format_code.py \\\n \n CLANG_FORMAT_RESULT=$?\n \n+#############################################################################\n+# Avoided specific-code snippets for TFLM\n+#############################################################################\n+\n+CHECK_CONTENTS_PATHSPEC=\\\n+\"micro \"\\\n+\":(exclude)micro/tools/ci_build/test_code_style.sh\"\n+\n+# See https://github.com/tensorflow/tensorflow/issues/46297 for more context.\n+check_contents \"gtest|gmock\" \"${CHECK_CONTENTS_PATHSPEC}\" \\\n+ \"These matches can likely be deleted.\"\n+GTEST_RESULT=$?\n+\n+# See http://b/175657165 for more context.\n+ERROR_REPORTER_MESSAGE=\\\n+\"TF_LITE_REPORT_ERROR should be used instead, so that log strings can be \"\\\n+\"removed to save space, if needed.\"\n+\n+check_contents \"error_reporter.*Report\\(|context->ReportError\\(\" \\\n+ \"${CHECK_CONTENTS_PATHSPEC}\" \"${ERROR_REPORTER_MESSAGE}\"\n+ERROR_REPORTER_RESULT=$?\n+\n+# See http://b/175657165 for more context.\n+ASSERT_PATHSPEC=\\\n+\"${CHECK_CONTENTS_PATHSPEC}\"\\\n+\" :(exclude)micro/examples/micro_speech/esp/ringbuf.c\"\\\n+\" :(exclude)*\\.ipynb\"\\\n+\" :(exclude)*\\.py\"\\\n+\" :(exclude)*zephyr_riscv/Makefile.inc\"\n+\n+check_contents \"\\<assert\\>\" \"${ASSERT_PATHSPEC}\" \\\n+ \"assert should not be used in TFLM code..\"\n+ASSERT_RESULT=$?\n+\n+###########################################################################\n+# All checks are complete, clean 
up.\n+###########################################################################\n+\n popd\n if [[ ${1} == \"PRESUBMIT\" ]]; then\n rm -rf tensorflow/lite/.git\n@@ -88,7 +131,12 @@ fi\n # Re-enable exit on error now that we are done with the temporary git repo.\n set -e\n \n-if [[ ${LICENSE_CHECK_RESULT} != 0 || ${CLANG_FORMAT_RESULT} != 0 ]]\n+if [[ ${LICENSE_CHECK_RESULT} != 0 || \\\n+ ${CLANG_FORMAT_RESULT} != 0 || \\\n+ ${GTEST_RESULT} != 0 || \\\n+ ${ERROR_REPORTER_RESULT} != 0 || \\\n+ ${ASSERT_RESULT} != 0 \\\n+ ]]\n then\n exit 1\n fi",
"filename": "tensorflow/lite/micro/tools/ci_build/test_code_style.sh",
"status": "modified"
}
]
}
|
{
"body": "@tensorflow/micro\r\n\r\nIn https://github.com/tensorflow/tensorflow/pull/46242#discussion_r553049656, I was suggesting that the linker was not correctly dropping unused symbols.\r\n\r\nIn fact, what was very likely happening was that I did not do a `make clean` between switching to `BUILD_TYPE=release`. And since the TFLM makefile currently uses the same directory for all `BUILD_TYPE`, only the modified files were being rebuilt with the smaller `release` build.\r\n\r\nWe can reproduce this with the following sequence of commands:\r\n\r\nFirst check what the binary size is for the release build.\r\n```\r\nmake -f tensorflow/lite/micro/tools/make/Makefile clean\r\n\r\nmake -f tensorflow/lite/micro/tools/make/Makefile -j8 TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=hifimini XTENSA_CORE=mini1m1m_RG keyword_benchmark BUILD_TYPE=release\r\n\r\nxt-size tensorflow/lite/micro/tools/make/gen/xtensa_hifimini/bin/keyword_benchmark \r\n text\t data\t bss\t dec\t hex\tfilename\r\n 46080\t 40204\t 24952\t 111236\t 1b284\ttensorflow/lite/micro/tools/make/gen/xtensa_hifimini/bin/keyword_benchmark\r\n```\r\n\r\nNext have some intermediate non-release objects and then do a release build:\r\n```\r\nmake -f tensorflow/lite/micro/tools/make/Makefile clean\r\n\r\n# build non-release\r\nmake -f tensorflow/lite/micro/tools/make/Makefile -j8 TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=hifimini XTENSA_CORE=mini1m1m_RG keyword_benchmark\r\n\r\ntouch tensorflow/lite/micro/kernels/xtensa/fully_connected.cc\r\n\r\n#build for release\r\nmake -f tensorflow/lite/micro/tools/make/Makefile -j8 TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=hifimini XTENSA_CORE=mini1m1m_RG keyword_benchmark BUILD_TYPE=release\r\n\r\nxt-size tensorflow/lite/micro/tools/make/gen/xtensa_hifimini/bin/keyword_benchmark \r\n text\t data\t bss\t dec\t hex\tfilename\r\n 54736\t 48168\t 25032\t 127936\t 1f3c0\ttensorflow/lite/micro/tools/make/gen/xtensa_hifimini/bin/keyword_benchmark\r\n```\r\n\r\nWhat we really should be doing is to change the output directory based on the build type.",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46261\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46261\">No</a>\n",
"created_at": "2021-01-11T23:02:46Z"
}
],
"number": 46261,
"title": "different TFLM builds use the same output directory."
}
|
{
"body": "Fixes #46261\n",
"number": 46265,
"review_comments": [],
"title": "Have artifcat directory be different for different build_types."
}
|
{
"commits": [
{
"message": "Have artifcat directory be different for different build_types.\n\nFixes #46261"
}
],
"files": [
{
"diff": "@@ -39,4 +39,4 @@ readable_run make -f tensorflow/lite/micro/tools/make/Makefile \\\n readable_run tensorflow/lite/micro/tools/ci_build/install_arduino_cli.sh\n \n readable_run tensorflow/lite/micro/tools/ci_build/test_arduino_library.sh \\\n- tensorflow/lite/micro/tools/make/gen/arduino_x86_64/prj/tensorflow_lite.zip\n+ tensorflow/lite/micro/tools/make/gen/arduino_x86_64_default/prj/tensorflow_lite.zip",
"filename": "tensorflow/lite/micro/tools/ci_build/test_arduino.sh",
"status": "modified"
},
{
"diff": "@@ -195,6 +195,7 @@ TARGET_TOOLCHAIN_ROOT :=\n # This default build is most suited for usual development and testing as is\n # highlighted by the discussion on this github pull request:\n # https://github.com/tensorflow/tensorflow/pull/42314#issuecomment-694360567\n+BUILD_TYPE := default\n ifeq ($(BUILD_TYPE), debug)\n \t# Specifying BUILD_TYPE=debug adds debug symbols to the binary (and makes it\n \t# larger) and should be used to run a binary with gdb.\n@@ -588,7 +589,8 @@ ALL_SRCS := \\\n \t$(MICROLITE_TEST_SRCS)\n \n # Where compiled objects are stored.\n-GENDIR := $(MAKEFILE_DIR)/gen/$(TARGET)_$(TARGET_ARCH)/\n+\n+GENDIR := $(MAKEFILE_DIR)/gen/$(TARGET)_$(TARGET_ARCH)_$(BUILD_TYPE)/\n OBJDIR := $(GENDIR)obj/\n BINDIR := $(GENDIR)bin/\n LIBDIR := $(GENDIR)lib/",
"filename": "tensorflow/lite/micro/tools/make/Makefile",
"status": "modified"
}
]
}
|
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator LEAKY_RELU from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro making minimal changes and not including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46161\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46161\">No</a>\n",
"created_at": "2021-03-06T18:47:38Z"
}
],
"number": 46161,
"title": "micro: port op LEAKY_RELU from lite"
}
|
{
"body": "Move the reference implementation to its own header so that micro\r\ncan use it without the unrelated depedencies of reference_ops.h.\r\n\r\nPR step 2 for issue #46161",
"number": 46216,
"review_comments": [],
"title": "Extract reference for operator LEAKY_RELU to standalone header"
}
|
{
"commits": [
{
"message": "Extract reference for operator LEAKY_RELU to standalone header\n\nMove the reference implementation to its own header so that micro\ncan use it without the unrelated depedencies of reference_ops.h.\n\nPR step 2 for issue #46161"
},
{
"message": "correct copyright notice formatting"
},
{
"message": "Merge branch 'master' into LeakyRelu-pr2"
}
],
"files": [
{
"diff": "@@ -481,6 +481,7 @@ cc_library(\n \"reference/integer_ops/tanh.h\",\n \"reference/integer_ops/transpose_conv.h\",\n \"reference/l2normalization.h\",\n+ \"reference/leaky_relu.h\",\n \"reference/logistic.h\",\n \"reference/maximum_minimum.h\",\n \"reference/mul.h\",\n@@ -578,6 +579,7 @@ cc_library(\n \"reference/fully_connected.h\",\n \"reference/hard_swish.h\",\n \"reference/l2normalization.h\",\n+ \"reference/leaky_relu.h\",\n \"reference/legacy_reference_ops.h\",\n \"reference/logistic.h\",\n \"reference/maximum_minimum.h\",",
"filename": "tensorflow/lite/kernels/internal/BUILD",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,70 @@\n+/* Copyright 2020 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#ifndef TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_LEAKY_RELU_H_\n+#define TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_LEAKY_RELU_H_\n+\n+#include <algorithm>\n+#include <limits>\n+\n+#include \"tensorflow/lite/kernels/internal/common.h\"\n+#include \"tensorflow/lite/kernels/internal/types.h\"\n+\n+namespace tflite {\n+namespace reference_ops {\n+\n+inline void LeakyRelu(const tflite::LeakyReluParams& params,\n+ const RuntimeShape& input_shape, const float* input_data,\n+ const RuntimeShape& output_shape, float* output_data) {\n+ const int flat_size = MatchingFlatSize(input_shape, output_shape);\n+ for (int i = 0; i < flat_size; ++i) {\n+ const float val = input_data[i];\n+ // Note that alpha might be > 1 or < 0, so we don't use std::max here.\n+ output_data[i] = val > 0 ? val : val * params.alpha;\n+ }\n+}\n+\n+template <typename T>\n+inline void QuantizeLeakyRelu(const LeakyReluParams& params,\n+ const RuntimeShape& input_shape,\n+ const T* input_data,\n+ const RuntimeShape& output_shape,\n+ T* output_data) {\n+ const int flat_size = MatchingFlatSize(input_shape, output_shape);\n+ static const int32_t quantized_min = std::numeric_limits<T>::min();\n+ static const int32_t quantized_max = std::numeric_limits<T>::max();\n+ for (int i = 0; i < flat_size; ++i) {\n+ const int32_t input_value = input_data[i] - params.input_offset;\n+ int32_t unclamped_output;\n+ if (input_value >= 0) {\n+ unclamped_output = params.output_offset +\n+ MultiplyByQuantizedMultiplier(\n+ input_value, params.output_multiplier_identity,\n+ params.output_shift_identity);\n+ } else {\n+ unclamped_output = params.output_offset +\n+ MultiplyByQuantizedMultiplier(\n+ input_value, params.output_multiplier_alpha,\n+ params.output_shift_alpha);\n+ }\n+ const T clamped_output =\n+ std::min(quantized_max, std::max(quantized_min, unclamped_output));\n+ output_data[i] = static_cast<T>(clamped_output);\n+ }\n+}\n+\n+} // namespace reference_ops\n+} // namespace tflite\n+\n+#endif // TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_LEAKY_RELU_H_",
"filename": "tensorflow/lite/kernels/internal/reference/leaky_relu.h",
"status": "added"
},
{
"diff": "@@ -49,6 +49,7 @@ limitations under the License.\n #include \"tensorflow/lite/kernels/internal/reference/fully_connected.h\"\n #include \"tensorflow/lite/kernels/internal/reference/hard_swish.h\"\n #include \"tensorflow/lite/kernels/internal/reference/l2normalization.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/leaky_relu.h\"\n #include \"tensorflow/lite/kernels/internal/reference/logistic.h\"\n #include \"tensorflow/lite/kernels/internal/reference/maximum_minimum.h\"\n #include \"tensorflow/lite/kernels/internal/reference/mul.h\"\n@@ -212,48 +213,6 @@ inline void ReluX(const tflite::ActivationParams& params,\n }\n }\n \n-inline void LeakyRelu(const tflite::LeakyReluParams& params,\n- const RuntimeShape& input_shape, const float* input_data,\n- const RuntimeShape& output_shape, float* output_data) {\n- ruy::profiler::ScopeLabel label(\"LeakyRelu (not fused)\");\n- const int flat_size = MatchingFlatSize(input_shape, output_shape);\n- for (int i = 0; i < flat_size; ++i) {\n- const float val = input_data[i];\n- // Note that alpha might be > 1 or < 0, so we don't use std::max here.\n- output_data[i] = val > 0 ? val : val * params.alpha;\n- }\n-}\n-\n-template <typename T>\n-inline void QuantizeLeakyRelu(const LeakyReluParams& params,\n- const RuntimeShape& input_shape,\n- const T* input_data,\n- const RuntimeShape& output_shape,\n- T* output_data) {\n- ruy::profiler::ScopeLabel label(\"Quantized LeakyRelu (not fused)\");\n- const int flat_size = MatchingFlatSize(input_shape, output_shape);\n- static const int32 quantized_min = std::numeric_limits<T>::min();\n- static const int32 quantized_max = std::numeric_limits<T>::max();\n- for (int i = 0; i < flat_size; ++i) {\n- const int32 input_value = input_data[i] - params.input_offset;\n- int32 unclamped_output;\n- if (input_value >= 0) {\n- unclamped_output = params.output_offset +\n- MultiplyByQuantizedMultiplier(\n- input_value, params.output_multiplier_identity,\n- params.output_shift_identity);\n- } else {\n- unclamped_output = params.output_offset +\n- MultiplyByQuantizedMultiplier(\n- input_value, params.output_multiplier_alpha,\n- params.output_shift_alpha);\n- }\n- const T clamped_output =\n- std::min(quantized_max, std::max(quantized_min, unclamped_output));\n- output_data[i] = static_cast<T>(clamped_output);\n- }\n-}\n-\n // TODO(jiawen): We can implement BroadcastMul on buffers of arbitrary\n // dimensionality if the runtime code does a single loop over one dimension\n // that handles broadcasting as the base case. The code generator would then",
"filename": "tensorflow/lite/kernels/internal/reference/reference_ops.h",
"status": "modified"
}
]
}
|
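For context on the code being moved in the row above: the float LeakyRelu reference path is just an element-wise `v > 0 ? v : v * alpha`. The sketch below is a minimal, standalone illustration in plain C++ with no TFLite types; the sample values mirror the float test vector used later in this PR series, and it is not the library implementation itself.

```cc
// Minimal standalone sketch of the element-wise LeakyRelu math that the
// extracted reference header implements. Plain C++, no TFLite types.
#include <cstdio>

int main() {
  const float alpha = 0.5f;  // slope applied to negative inputs
  const float input[6] = {0.0f, 1.0f, 3.0f, 1.0f, -1.0f, -2.0f};
  float output[6];
  for (int i = 0; i < 6; ++i) {
    const float v = input[i];
    // alpha may legitimately be > 1 or < 0, so a plain max() is avoided.
    output[i] = v > 0 ? v : v * alpha;
  }
  // Prints: 0.0 1.0 3.0 1.0 -0.5 -1.0
  for (int i = 0; i < 6; ++i) std::printf("%.1f ", output[i]);
  std::printf("\n");
  return 0;
}
```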
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator LEAKY_RELU from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro making minimal changes and not including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46161\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46161\">No</a>\n",
"created_at": "2021-03-06T18:47:38Z"
}
],
"number": 46161,
"title": "micro: port op LEAKY_RELU from lite"
}
|
{
"body": "Extract the parsing out of a switch statement case to create a\r\nstandalone function which can be called by the micro op resolver.\r\n\r\nPR step 1 for issue #46161",
"number": 46215,
"review_comments": [],
"title": "Extract a function for parsing operator LEAKY_RELU"
}
|
{
"commits": [
{
"message": "Extract a function for parsing operator LEAKY_RELU\n\nExtract the parsing out of a switch statement case to create a\nstandalone function which can be called by the micro op resolver.\n\nPR step 1 for issue #46161"
}
],
"files": [
{
"diff": "@@ -245,6 +245,10 @@ TfLiteStatus ParseOpDataTfLite(const Operator* op, BuiltinOperator op_type,\n return ParsePool(op, error_reporter, allocator, builtin_data);\n }\n \n+ case BuiltinOperator_LEAKY_RELU: {\n+ return ParseLeakyRelu(op, error_reporter, allocator, builtin_data);\n+ }\n+\n case BuiltinOperator_LESS: {\n return ParseLess(op, error_reporter, allocator, builtin_data);\n }\n@@ -674,16 +678,6 @@ TfLiteStatus ParseOpDataTfLite(const Operator* op, BuiltinOperator op_type,\n *builtin_data = params.release();\n return kTfLiteOk;\n }\n- case BuiltinOperator_LEAKY_RELU: {\n- auto params = safe_allocator.Allocate<TfLiteLeakyReluParams>();\n- TF_LITE_ENSURE(error_reporter, params != nullptr);\n- if (const auto* leaky_relu_params =\n- op->builtin_options_as_LeakyReluOptions()) {\n- params->alpha = leaky_relu_params->alpha();\n- }\n- *builtin_data = params.release();\n- return kTfLiteOk;\n- }\n case BuiltinOperator_MIRROR_PAD: {\n auto params = safe_allocator.Allocate<TfLiteMirrorPaddingParams>();\n TF_LITE_ENSURE(error_reporter, params != nullptr);\n@@ -1247,6 +1241,22 @@ TfLiteStatus ParseL2Normalization(const Operator* op,\n return kTfLiteOk;\n }\n \n+TfLiteStatus ParseLeakyRelu(const Operator* op, ErrorReporter* error_reporter,\n+ BuiltinDataAllocator* allocator,\n+ void** builtin_data) {\n+ CheckParsePointerParams(op, error_reporter, allocator, builtin_data);\n+\n+ SafeBuiltinDataAllocator safe_allocator(allocator);\n+ auto params = safe_allocator.Allocate<TfLiteLeakyReluParams>();\n+ TF_LITE_ENSURE(error_reporter, params != nullptr);\n+ if (const auto* leaky_relu_params =\n+ op->builtin_options_as_LeakyReluOptions()) {\n+ params->alpha = leaky_relu_params->alpha();\n+ }\n+ *builtin_data = params.release();\n+ return kTfLiteOk;\n+}\n+\n // We have this parse function instead of directly returning kTfLiteOk from the\n // switch-case in ParseOpData because this function is used as part of the\n // selective registration for the OpResolver implementation in micro.",
"filename": "tensorflow/lite/core/api/flatbuffer_conversions.cc",
"status": "modified"
},
{
"diff": "@@ -148,6 +148,10 @@ TfLiteStatus ParseL2Normalization(const Operator* op,\n BuiltinDataAllocator* allocator,\n void** builtin_data);\n \n+TfLiteStatus ParseLeakyRelu(const Operator* op, ErrorReporter* error_reporter,\n+ BuiltinDataAllocator* allocator,\n+ void** builtin_data);\n+\n TfLiteStatus ParseLess(const Operator* op, ErrorReporter* error_reporter,\n BuiltinDataAllocator* allocator, void** builtin_data);\n ",
"filename": "tensorflow/lite/core/api/flatbuffer_conversions.h",
"status": "modified"
}
]
}
|
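The point of pulling ParseLeakyRelu out of the big switch in ParseOpDataTfLite, as the row above does, is that TFLM's selective registration pairs each builtin with its own parsing function. The sketch below shows how that pairing surfaces to users; it relies on the AddLeakyRelu() wrapper that a later PR in this series adds to MicroMutableOpResolver, so treat the exact call as an assumption at this point in the sequence.

```cc
// Sketch only: registering LEAKY_RELU with TFLM's selective op resolver once
// the standalone parser exists. AddLeakyRelu() (added later in this series)
// wires BuiltinOperator_LEAKY_RELU to Register_LEAKY_RELU() and ParseLeakyRelu.
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"

// The template argument is the number of ops the resolver will hold.
void RegisterLeakyReluOnly(tflite::MicroMutableOpResolver<1>& resolver) {
  resolver.AddLeakyRelu();
}
```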
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator LEAKY_RELU from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro making minimal changes and not including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46161\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46161\">No</a>\n",
"created_at": "2021-03-06T18:47:38Z"
}
],
"number": 46161,
"title": "micro: port op LEAKY_RELU from lite"
}
|
{
"body": "Complete implementation of TFLM operator LEAKY_RELU and associated TFLM test code.\r\n\r\nPR step 5 of the work to port operator LEAKY_RELU as tracked in Issue #46161",
"number": 46214,
"review_comments": [
{
"body": "remove, it is not relevant to TFLM.",
"created_at": "2021-02-11T19:18:10Z"
},
{
"body": "I think we should keep these as int32_t. We do have some non-32 bit targets as well.",
"created_at": "2021-02-11T19:19:22Z"
},
{
"body": "The TfLite code doesn't always follow the Google style guide and we are incrementally moving towards better conformance within TFLM.\r\n\r\n\r\nReorder the params to be inputs then outputs. And consider changing the params to be const and non-const references (this may be awkward given the existing APIs, but do it if possible).\r\n\r\nSame suggestion for the other (non-kernel API) functions in this kernel and the test as well.\r\n\r\nhttps://google.github.io/styleguide/cppguide.html#Inputs_and_Outputs",
"created_at": "2021-02-11T19:23:24Z"
},
{
"body": "only add support for Int8 and Float32 to begin with. Additional data-type support should be part of follow-on PR.",
"created_at": "2021-02-11T19:25:08Z"
},
{
"body": "its somewhat awkward but there are some implicit float to double conversions happening that break the xtensa build (not part of the CI right now), and we will need some casting to get blupill to pass if we keep the structs as all int32_t.\r\n\r\nHere's what I did after checking out this PR:\r\n```cc\r\n double alpha_multiplier = static_cast<double>(input->params.scale) *\r\n static_cast<double>(params->alpha) /\r\n static_cast<double>(output->params.scale);\r\n\r\n int output_shift_alpha;\r\n QuantizeMultiplier(alpha_multiplier, &data->output_multiplier_alpha,\r\n &output_shift_alpha);\r\n data->output_shift_alpha = static_cast<int32_t>(output_shift_alpha);\r\n\r\n double identity_multiplier = static_cast<double>(input->params.scale) /\r\n static_cast<double>(output->params.scale);\r\n int output_shift_identity;\r\n QuantizeMultiplier(identity_multiplier, &data->output_multiplier_identity,\r\n &output_shift_identity);\r\n data->output_shift_identity = static_cast<int32_t>(output_shift_identity);\r\n```\r\n\r\nThe static_cast<double> are only needed with the xtensa toolchain (since it is not reproducible for you, if you miss some of them, that is ok).\r\n\r\nThe static_cast<int32_t> is for the bluepill target.\r\n\r\nI'm going to see if it is possible to get the double promotion errors on xtensa to also be reproducible with either arm-gcc or gcc / clang. If not, I can fix any remaining issues separate from this PR.",
"created_at": "2021-02-11T19:49:17Z"
},
{
"body": "we are going with a flat tflite namespace instead of the nested namespaces that are common in the TfLite code.\r\n\r\nSome context: https://abseil.io/tips/130",
"created_at": "2021-02-11T19:53:01Z"
},
{
"body": "Fixed",
"created_at": "2021-02-20T21:44:02Z"
},
{
"body": "Fixed",
"created_at": "2021-02-20T21:44:20Z"
},
{
"body": "Fixed",
"created_at": "2021-02-20T21:44:36Z"
},
{
"body": "Fixed",
"created_at": "2021-02-20T21:44:52Z"
},
{
"body": "Fixed",
"created_at": "2021-02-20T21:45:03Z"
},
{
"body": "Removed",
"created_at": "2021-02-20T21:45:18Z"
},
{
"body": "A bit of a nit-pick, but could you add \"break\" back in here, and below? I know the return earlier makes it redundant, but if the body of the case statement was changed in the future, we'd end up with an accidental fall through.",
"created_at": "2021-03-03T18:22:49Z"
},
{
"body": "Fixed.",
"created_at": "2021-03-04T23:16:39Z"
}
],
"title": "micro: port operator LEAKY_RELU kernel from lite with test"
}
|
{
"commits": [
{
"message": "micro: port operator LEAKY_RELU kernel from lite with test\n\nComplete implementation of TFLM operator LEAKY_RELU and associated TFLM test code.\n\nPR step 5 of the work to port operator LEAKY_RELU as tracked in Issue #46161"
},
{
"message": "Merge branch 'master' into LeakyRelu-pr5"
},
{
"message": "fix review issues\n\nFlatten namespace\nReorder input/output parameters\nExplicit int/int32_t and float/double conversions"
},
{
"message": "Support only float32/int8"
},
{
"message": "add LEAKY_RELU to AllOpsResolver"
},
{
"message": "restore break statement to end of case block"
},
{
"message": "Merge branch 'master' into LeakyRelu-pr5"
},
{
"message": "Merge branch 'master' into LeakyRelu-pr5"
}
],
"files": [
{
"diff": "@@ -41,6 +41,7 @@ AllOpsResolver::AllOpsResolver() {\n AddGreaterEqual();\n AddHardSwish();\n AddL2Normalization();\n+ AddLeakyRelu();\n AddLess();\n AddLessEqual();\n AddLog();",
"filename": "tensorflow/lite/micro/all_ops_resolver.cc",
"status": "modified"
},
{
"diff": "@@ -263,6 +263,7 @@ cc_library(\n \"fill.cc\",\n \"floor.cc\",\n \"l2norm.cc\",\n+ \"leaky_relu.cc\",\n \"logical.cc\",\n \"logistic.cc\",\n \"maximum_minimum.cc\",\n@@ -659,6 +660,21 @@ cc_test(\n ],\n )\n \n+cc_test(\n+ name = \"leaky_relu_test\",\n+ srcs = [\n+ \"leaky_relu_test.cc\",\n+ ],\n+ deps = [\n+ \":kernel_runner\",\n+ \"//tensorflow/lite/c:common\",\n+ \"//tensorflow/lite/micro:debug_log\",\n+ \"//tensorflow/lite/micro:op_resolvers\",\n+ \"//tensorflow/lite/micro:test_helpers\",\n+ \"//tensorflow/lite/micro/testing:micro_test\",\n+ ],\n+)\n+\n cc_test(\n name = \"logical_test\",\n srcs = [",
"filename": "tensorflow/lite/micro/kernels/BUILD",
"status": "modified"
},
{
"diff": "@@ -23,122 +23,131 @@ limitations under the License.\n #include \"tensorflow/lite/micro/kernels/kernel_util.h\"\n \n namespace tflite {\n-namespace ops {\n-namespace micro {\n-namespace activations {\n namespace {\n \n-// OLD-TODO(b/142762739): We should figure out a multi-threading plan for most\n-// of the activation ops below.\n-\n-struct OpData {};\n-\n-struct LeakyReluOpData : public OpData {\n- int32_t output_multiplier_alpha = 0;\n- int32_t output_shift_alpha = 0;\n- int32_t output_multiplier_identity = 0;\n- int32_t output_shift_identity = 0;\n+// Input/output tensor index.\n+constexpr int kInputTensor = 0;\n+constexpr int kOutputTensor = 0;\n+\n+struct LeakyReluOpData {\n+ // quantization parameters\n+ int32_t output_multiplier_alpha;\n+ int32_t output_shift_alpha;\n+ int32_t output_multiplier_identity;\n+ int32_t output_shift_identity;\n+ int32_t input_zero_point;\n+ int32_t output_zero_point;\n };\n \n template <typename T>\n-void QuantizeLeakyRelu(const TfLiteTensor* input, TfLiteTensor* output,\n- const LeakyReluOpData* data) {\n- LeakyReluParams op_params;\n-\n- op_params.input_offset = input->params.zero_point;\n- op_params.output_offset = output->params.zero_point;\n- op_params.output_multiplier_alpha = data->output_multiplier_alpha;\n- op_params.output_shift_alpha = data->output_shift_alpha;\n- op_params.output_multiplier_identity = data->output_multiplier_identity;\n- op_params.output_shift_identity = data->output_shift_identity;\n- reference_ops::QuantizeLeakyRelu(\n- op_params, GetTensorShape(input), GetTensorData<T>(input),\n- GetTensorShape(output), GetTensorData<T>(output));\n-}\n-\n-} // namespace\n-\n-void* LeakyReluInit(TfLiteContext* context, const char* buffer, size_t length) {\n- return nullptr;\n+void QuantizeLeakyRelu(const LeakyReluOpData& data,\n+ const TfLiteEvalTensor* input,\n+ TfLiteEvalTensor* output) {\n+ LeakyReluParams op_params = {};\n+\n+ op_params.input_offset = data.input_zero_point;\n+ op_params.output_offset = data.output_zero_point;\n+ op_params.output_multiplier_alpha = data.output_multiplier_alpha;\n+ op_params.output_shift_alpha = data.output_shift_alpha;\n+ op_params.output_multiplier_identity = data.output_multiplier_identity;\n+ op_params.output_shift_identity = data.output_shift_identity;\n+ reference_ops::QuantizeLeakyRelu(op_params,\n+ tflite::micro::GetTensorShape(input),\n+ tflite::micro::GetTensorData<T>(input),\n+ tflite::micro::GetTensorShape(output),\n+ tflite::micro::GetTensorData<T>(output));\n }\n \n-TfLiteStatus LeakyReluPrepare(TfLiteContext* context, TfLiteNode* node) {\n+TfLiteStatus CalculateOpData(TfLiteContext* context, TfLiteNode* node) {\n TF_LITE_ENSURE_EQ(context, NumInputs(node), 1);\n TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);\n const TfLiteTensor* input;\n- TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, 0, &input));\n+ TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, kInputTensor, &input));\n TfLiteTensor* output;\n- TF_LITE_ENSURE_OK(context, GetOutputSafe(context, node, 0, &output));\n+ TF_LITE_ENSURE_OK(context,\n+ GetOutputSafe(context, node, kOutputTensor, &output));\n TF_LITE_ENSURE_TYPES_EQ(context, input->type, output->type);\n \n- LeakyReluOpData* data = reinterpret_cast<LeakyReluOpData*>(node->user_data);\n-\n- if (output->type == kTfLiteUInt8 || output->type == kTfLiteInt8 ||\n- output->type == kTfLiteInt16) {\n+ if (output->type == kTfLiteInt8) {\n+ LeakyReluOpData* data = static_cast<LeakyReluOpData*>(node->user_data);\n const auto* params =\n- 
reinterpret_cast<TfLiteLeakyReluParams*>(node->builtin_data);\n+ static_cast<TfLiteLeakyReluParams*>(node->builtin_data);\n+\n+ data->input_zero_point = input->params.zero_point;\n+ data->output_zero_point = output->params.zero_point;\n \n- double alpha_multiplier =\n- input->params.scale * params->alpha / output->params.scale;\n+ int output_shift_alpha;\n+ double alpha_multiplier = static_cast<double>(\n+ input->params.scale * params->alpha / output->params.scale);\n QuantizeMultiplier(alpha_multiplier, &data->output_multiplier_alpha,\n- &data->output_shift_alpha);\n- double identity_multiplier = input->params.scale / output->params.scale;\n+ &output_shift_alpha);\n+ data->output_shift_alpha = static_cast<int32_t>(output_shift_alpha);\n+\n+ int output_shift_identity;\n+ double identity_multiplier =\n+ static_cast<double>(input->params.scale / output->params.scale);\n QuantizeMultiplier(identity_multiplier, &data->output_multiplier_identity,\n- &data->output_shift_identity);\n+ &output_shift_identity);\n+ data->output_shift_identity = static_cast<int32_t>(output_shift_identity);\n }\n \n- if (input->type == kTfLiteInt16 && output->type == kTfLiteInt16) {\n- TF_LITE_ENSURE_EQ(context, input->params.zero_point, 0);\n- TF_LITE_ENSURE_EQ(context, output->params.zero_point, 0);\n- }\n+ return kTfLiteOk;\n+}\n \n- return kTfLiteError;\n+void* LeakyReluInit(TfLiteContext* context, const char* buffer, size_t length) {\n+ TFLITE_DCHECK(context->AllocatePersistentBuffer != nullptr);\n+ return context->AllocatePersistentBuffer(context, sizeof(LeakyReluOpData));\n+}\n+\n+TfLiteStatus LeakyReluPrepare(TfLiteContext* context, TfLiteNode* node) {\n+ return CalculateOpData(context, node);\n }\n \n TfLiteStatus LeakyReluEval(TfLiteContext* context, TfLiteNode* node) {\n- const TfLiteTensor* input;\n- TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, 0, &input));\n- TfLiteTensor* output;\n- TF_LITE_ENSURE_OK(context, GetOutputSafe(context, node, 0, &output));\n- const auto* params =\n- reinterpret_cast<TfLiteLeakyReluParams*>(node->builtin_data);\n- const LeakyReluOpData* data =\n- reinterpret_cast<LeakyReluOpData*>(node->user_data);\n+ const TfLiteEvalTensor* input =\n+ tflite::micro::GetEvalInput(context, node, kInputTensor);\n+ TfLiteEvalTensor* output =\n+ tflite::micro::GetEvalOutput(context, node, kOutputTensor);\n+ const LeakyReluOpData& data = *static_cast<LeakyReluOpData*>(node->user_data);\n \n- LeakyReluParams op_params;\n switch (input->type) {\n case kTfLiteFloat32: {\n+ LeakyReluParams op_params = {};\n+ const auto* params =\n+ static_cast<TfLiteLeakyReluParams*>(node->builtin_data);\n+\n op_params.alpha = params->alpha;\n- reference_ops::LeakyRelu(\n- op_params, GetTensorShape(input), GetTensorData<float>(input),\n- GetTensorShape(output), GetTensorData<float>(output));\n- return kTfLiteOk;\n- } break;\n- case kTfLiteUInt8: {\n- QuantizeLeakyRelu<uint8_t>(input, output, data);\n+ reference_ops::LeakyRelu(op_params, tflite::micro::GetTensorShape(input),\n+ tflite::micro::GetTensorData<float>(input),\n+ tflite::micro::GetTensorShape(output),\n+ tflite::micro::GetTensorData<float>(output));\n return kTfLiteOk;\n } break;\n case kTfLiteInt8: {\n- QuantizeLeakyRelu<int8_t>(input, output, data);\n- return kTfLiteOk;\n- } break;\n- case kTfLiteInt16: {\n- QuantizeLeakyRelu<int16_t>(input, output, data);\n+ QuantizeLeakyRelu<int8_t>(data, input, output);\n return kTfLiteOk;\n } break;\n default:\n TF_LITE_KERNEL_LOG(\n- context,\n- \"Only float32, int8, int16 and uint8 is supported currently, 
got %s.\",\n+ context, \"Only float32, int8 are supported by LEAKY_RELU, got %s.\",\n TfLiteTypeGetName(input->type));\n return kTfLiteError;\n }\n+\n+ return kTfLiteError;\n }\n \n-} // namespace activations\n+} // namespace\n \n-TfLiteRegistration* Register_LEAKY_RELU() { return nullptr; }\n+TfLiteRegistration Register_LEAKY_RELU() {\n+ return {/*init=*/LeakyReluInit,\n+ /*free=*/nullptr,\n+ /*prepare=*/LeakyReluPrepare,\n+ /*invoke=*/LeakyReluEval,\n+ /*profiling_string=*/nullptr,\n+ /*builtin_code=*/0,\n+ /*custom_name=*/nullptr,\n+ /*version=*/0};\n+}\n \n-} // namespace micro\n-} // namespace ops\n } // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/leaky_relu.cc",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,88 @@ namespace tflite {\n namespace testing {\n namespace {\n \n+// min/max are used to compute scale, zero-point, compare tolerance\n+template <typename T>\n+struct TestLeakyReluParams {\n+ // general parameters\n+ float alpha; // alpha multiplier\n+\n+ // quantization parameters\n+ float data_min; // input and output data minimum value\n+ float data_max; // input and output data maximum value\n+ T* input_data; // quantized input storage\n+ T* output_data; // quantized output storage\n+ float tolerance; // output vs expected value tolerance\n+};\n+\n+void ExecuteLeakyReluTest(const float alpha, const int tensors_count,\n+ TfLiteTensor* tensors) {\n+ TfLiteLeakyReluParams builtin_data = {};\n+ builtin_data.alpha = alpha;\n+\n+ constexpr int kInputArrayData[] = {1, 0};\n+ TfLiteIntArray* inputs_array = IntArrayFromInts(kInputArrayData);\n+ constexpr int kOutputArrayData[] = {1, 1};\n+ TfLiteIntArray* outputs_array = IntArrayFromInts(kOutputArrayData);\n+\n+ const TfLiteRegistration registration = tflite::Register_LEAKY_RELU();\n+ micro::KernelRunner runner(registration, tensors, tensors_count, inputs_array,\n+ outputs_array, static_cast<void*>(&builtin_data));\n+\n+ TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.InitAndPrepare());\n+ TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.Invoke());\n+}\n+\n+template <typename T>\n+void TestLeakyRelu(const TestLeakyReluParams<T>& params,\n+ const int* input_dims_data, const T* input_data,\n+ const int* expected_dims, const T* expected_data,\n+ T* output_data) {\n+ TfLiteIntArray* input_dims = IntArrayFromInts(input_dims_data);\n+ TfLiteIntArray* output_dims = IntArrayFromInts(expected_dims);\n+ const int output_count = ElementCount(*output_dims);\n+\n+ TfLiteTensor tensors[] = {\n+ CreateTensor(input_data, input_dims),\n+ CreateTensor(output_data, output_dims),\n+ };\n+ constexpr int tensors_count = std::extent<decltype(tensors)>::value;\n+ ExecuteLeakyReluTest(params.alpha, tensors_count, tensors);\n+\n+ for (int i = 0; i < output_count; i++) {\n+ TF_LITE_MICRO_EXPECT_EQ(expected_data[i], output_data[i]);\n+ }\n+}\n+\n+template <typename T>\n+void TestLeakyReluQuantized(const TestLeakyReluParams<T>& params,\n+ const int* input_dims_data, const float* input_data,\n+ const int* expected_dims,\n+ const float* expected_data, float* output_data) {\n+ TfLiteIntArray* input_dims = IntArrayFromInts(input_dims_data);\n+ TfLiteIntArray* output_dims = IntArrayFromInts(expected_dims);\n+ const int output_count = ElementCount(*output_dims);\n+\n+ const float scale = ScaleFromMinMax<T>(params.data_min, params.data_max);\n+ const int zero_point =\n+ ZeroPointFromMinMax<T>(params.data_min, params.data_max);\n+\n+ TfLiteTensor tensors[] = {\n+ CreateQuantizedTensor(input_data, params.input_data, input_dims, scale,\n+ zero_point),\n+ CreateQuantizedTensor(params.output_data, output_dims, scale, zero_point),\n+ };\n+ constexpr int kTensorsCount = std::extent<decltype(tensors)>::value;\n+\n+ ExecuteLeakyReluTest(params.alpha, kTensorsCount, tensors);\n+\n+ Dequantize(params.output_data, output_count, scale, zero_point, output_data);\n+ const float kTolerance = params.tolerance;\n+ for (int i = 0; i < output_count; i++) {\n+ TF_LITE_MICRO_EXPECT_NEAR(expected_data[i], output_data[i], kTolerance);\n+ }\n+}\n+\n // Our fixed-point math function implementations have roughly 12 bits of\n // accuracy, when specialized to 16-bit fixed-point arithmetic.\n // That is purely an implementation compromise, it would have been possible\n@@ -43,92 +125,90 @@ namespace 
{\n // is 2, our representable values are often diluted by a factor of 2, whence\n // the factor of 2 below.\n const float kQuantizedTolerance = 2 * (1. / 256);\n-const float kQuantizedToleranceInt16 = 2 * (1. / 4096);\n \n-template <TensorType tensor_type, typename integer_dtype>\n+template <typename integer_dtype>\n void QuantizedActivationsOpTestLeakyRelu() {\n- const float kMin = -1;\n- const float kMax =\n- std::numeric_limits<integer_dtype>::max() /\n- static_cast<float>(std::numeric_limits<integer_dtype>::max() + 1);\n-#ifdef notdef\n- QuantizedActivationsOpModel m(\n- /*input=*/{tensor_type, {5, 5}, 5 * kMin, 5 * kMax}, 0.1);\n-\n- m.SetInput<integer_dtype>({\n+ constexpr int kDims[] = {2, 5, 5};\n+ constexpr float kInput[] = {\n -5.0f, -4.6f, -4.2f, -3.8f, -3.4f, // Row 1\n -3.0f, -2.6f, -2.2f, -1.8f, -1.4f, // Row 2\n -1.0f, -0.6f, -0.2f, 0.2f, 0.6f, // Row 3\n 1.0f, 1.4f, 1.8f, 2.2f, 2.6f, // Row 4\n 3.0f, 3.4f, 3.8f, 4.2f, 4.6f, // Row 5\n- });\n-\n- float kTestQuantizedTolerance = tensor_type == TensorType_INT16\n- ? kQuantizedToleranceInt16\n- : kQuantizedTolerance * 5;\n-\n- EXPECT_THAT(m.GetDequantizedOutput<integer_dtype>(),\n- ElementsAreArray(ArrayFloatNear(\n- {\n- -0.50f, -0.46f, -0.42f, -0.38f, -0.34f, // Row 1\n- -0.30f, -0.26f, -0.22f, -0.18f, -0.14f, // Row 2\n- -0.10f, -0.06f, -0.02f, 0.20f, 0.60f, // Row 3\n- 1.00f, 1.40f, 1.80f, 2.20f, 2.60f, // Row 4\n- 3.00f, 3.40f, 3.80f, 4.20f, 4.60f, // Row 5\n- },\n- kTestQuantizedTolerance)));\n-#endif // notdef\n+ };\n+ constexpr float kExpect[] = {\n+ -0.50f, -0.46f, -0.42f, -0.38f, -0.34f, // Row 1\n+ -0.30f, -0.26f, -0.22f, -0.18f, -0.14f, // Row 2\n+ -0.10f, -0.06f, -0.02f, 0.20f, 0.60f, // Row 3\n+ 1.00f, 1.40f, 1.80f, 2.20f, 2.60f, // Row 4\n+ 3.00f, 3.40f, 3.80f, 4.20f, 4.60f, // Row 5\n+ };\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ // setup quantization storage and parameters\n+ integer_dtype q_output_data[kOutputCount];\n+ integer_dtype q_input_data[kOutputCount];\n+ constexpr float kMin = -1;\n+ constexpr float kMax =\n+ std::numeric_limits<integer_dtype>::max() /\n+ static_cast<float>(std::numeric_limits<integer_dtype>::max() + 1);\n+ TestLeakyReluParams<integer_dtype> params = {};\n+ params.alpha = 0.1f;\n+ params.data_min = 5 * kMin;\n+ params.data_max = 5 * kMax;\n+ params.input_data = q_input_data;\n+ params.output_data = q_output_data;\n+ params.tolerance = kQuantizedTolerance * 5;\n+\n+ TestLeakyReluQuantized(params, kDims, kInput, kDims, kExpect, output_data);\n }\n \n-TF_LITE_MICRO_TESTS_BEGIN\n+} // namespace\n+} // namespace testing\n+} // namespace tflite\n \n-TF_LITE_MICRO_TEST(QuantizedActivationsOpTestLeakyReluUint8) {\n- const float kMin = -1;\n- const float kMax = 127.f / 128.f;\n-#ifdef notdef\n- QuantizedActivationsOpModel m(\n- /*input=*/{TensorType_UINT8, {2, 3}, 8 * kMin, 8 * kMax}, 0.5);\n-\n- m.SetInput<uint8_t>({\n- 0.0f, 1.0f, 3.0f, // Row 1\n- 1.0f, -1.0f, -2.0f, // Row 2\n- });\n- EXPECT_THAT(m.GetDequantizedOutput<uint8_t>(),\n- ElementsAreArray(ArrayFloatNear(\n- {\n- 0.0f, 1.0f, 3.0f, // Row 1\n- 1.0f, -0.5f, -1.0f, // Row 2\n- },\n- kQuantizedTolerance * 8)));\n-#endif // notdef\n-}\n+TF_LITE_MICRO_TESTS_BEGIN\n \n-TF_LITE_MICRO_TEST(QuantizedActivationsOpTestLeakyReluInt8) {\n- QuantizedActivationsOpTestLeakyRelu<TensorType_INT8, int8_t>();\n+TF_LITE_MICRO_TEST(QuantizedActivationsOpTestLeakyReluInt8_1) {\n+ constexpr int kDims[] = {2, 2, 3};\n+ constexpr float kInput[] = {0.0f, 1.0f, 3.0f, 1.0f, -1.0f, 
-2.0f};\n+ constexpr float kExpect[] = {0.0f, 1.0f, 3.0f, 1.0f, -0.5f, -1.0f};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+\n+ // setup quantization storage and parameters\n+ int8_t q_output_data[kOutputCount];\n+ int8_t q_input_data[kOutputCount];\n+ constexpr float kMin = -1;\n+ constexpr float kMax = 127.f / 128.f;\n+ tflite::testing::TestLeakyReluParams<int8_t> params = {};\n+ params.alpha = 0.5f;\n+ params.data_min = 8 * kMin;\n+ params.data_max = 8 * kMax;\n+ params.input_data = q_input_data;\n+ params.output_data = q_output_data;\n+ params.tolerance = tflite::testing::kQuantizedTolerance * 8;\n+\n+ tflite::testing::TestLeakyReluQuantized(params, kDims, kInput, kDims, kExpect,\n+ output_data);\n }\n \n-TF_LITE_MICRO_TEST(QuantizedActivationsOpTestLeakyReluInt16) {\n- QuantizedActivationsOpTestLeakyRelu<TensorType_INT16, int16_t>();\n+TF_LITE_MICRO_TEST(QuantizedActivationsOpTestLeakyReluInt8_2) {\n+ tflite::testing::QuantizedActivationsOpTestLeakyRelu<int8_t>();\n }\n \n TF_LITE_MICRO_TEST(FloatActivationsOpTestLeakyRelu) {\n-#ifdef notdef\n- LeakyReluOpModel m({TensorType_FLOAT32, {2, 3}}, 0.5f);\n-\n- m.SetInput({\n- 0.0f, 1.0f, 3.0f, // Row 1\n- 1.0f, -1.0f, -2.0f, // Row 2\n- });\n- m.Invoke();\n- EXPECT_THAT(m.GetOutput(), ElementsAreArray({\n- 0.0f, 1.0f, 3.0f, // Row 1\n- 1.0f, -0.5f, -1.0f, // Row 2\n- }));\n-#endif // notdef\n+ constexpr int kDims[] = {2, 2, 3};\n+ constexpr float kInput[] = {0.0f, 1.0f, 3.0f, 1.0f, -1.0f, -2.0f};\n+ constexpr float kExpect[] = {0.0f, 1.0f, 3.0f, 1.0f, -0.5f, -1.0f};\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n+ tflite::testing::TestLeakyReluParams<float> params = {};\n+ params.alpha = 0.5f;\n+\n+ tflite::testing::TestLeakyRelu(params, kDims, kInput, kDims, kExpect,\n+ output_data);\n }\n \n TF_LITE_MICRO_TESTS_END\n-\n-} // namespace\n-} // namespace testing\n-} // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/leaky_relu_test.cc",
"status": "modified"
},
{
"diff": "@@ -37,6 +37,7 @@ TfLiteRegistration Register_CONV_2D();\n TfLiteRegistration Register_DEPTHWISE_CONV_2D();\n TfLiteRegistration Register_ELU();\n TfLiteRegistration Register_EXP();\n+TfLiteRegistration Register_LEAKY_RELU();\n TfLiteRegistration Register_FILL();\n TfLiteRegistration Register_QUANTIZE();\n TfLiteRegistration Register_SHAPE();",
"filename": "tensorflow/lite/micro/kernels/micro_ops.h",
"status": "modified"
},
{
"diff": "@@ -243,6 +243,11 @@ class MicroMutableOpResolver : public MicroOpResolver {\n ParseL2Normalization);\n }\n \n+ TfLiteStatus AddLeakyRelu() {\n+ return AddBuiltin(BuiltinOperator_LEAKY_RELU, tflite::Register_LEAKY_RELU(),\n+ ParseLeakyRelu);\n+ }\n+\n TfLiteStatus AddLess() {\n return AddBuiltin(BuiltinOperator_LESS, tflite::ops::micro::Register_LESS(),\n ParseLess);",
"filename": "tensorflow/lite/micro/micro_mutable_op_resolver.h",
"status": "modified"
},
{
"diff": "@@ -277,6 +277,7 @@ tensorflow/lite/micro/kernels/floor_test.cc \\\n tensorflow/lite/micro/kernels/fully_connected_test.cc \\\n tensorflow/lite/micro/kernels/hard_swish_test.cc \\\n tensorflow/lite/micro/kernels/l2norm_test.cc \\\n+tensorflow/lite/micro/kernels/leaky_relu_test.cc \\\n tensorflow/lite/micro/kernels/logical_test.cc \\\n tensorflow/lite/micro/kernels/logistic_test.cc \\\n tensorflow/lite/micro/kernels/maximum_minimum_test.cc \\\n@@ -337,6 +338,7 @@ tensorflow/lite/micro/kernels/hard_swish.cc \\\n tensorflow/lite/micro/kernels/kernel_runner.cc \\\n tensorflow/lite/micro/kernels/kernel_util.cc \\\n tensorflow/lite/micro/kernels/l2norm.cc \\\n+tensorflow/lite/micro/kernels/leaky_relu.cc \\\n tensorflow/lite/micro/kernels/logical.cc \\\n tensorflow/lite/micro/kernels/logistic.cc \\\n tensorflow/lite/micro/kernels/maximum_minimum.cc \\\n@@ -431,6 +433,7 @@ tensorflow/lite/kernels/internal/reference/integer_ops/pooling.h \\\n tensorflow/lite/kernels/internal/reference/integer_ops/tanh.h \\\n tensorflow/lite/kernels/internal/reference/integer_ops/transpose_conv.h \\\n tensorflow/lite/kernels/internal/reference/l2normalization.h \\\n+tensorflow/lite/kernels/internal/reference/leaky_relu.h \\\n tensorflow/lite/kernels/internal/reference/maximum_minimum.h \\\n tensorflow/lite/kernels/internal/reference/mul.h \\\n tensorflow/lite/kernels/internal/reference/neg.h \\",
"filename": "tensorflow/lite/micro/tools/make/Makefile",
"status": "modified"
}
]
}
|
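The Prepare/Eval split in the ported kernel above comes down to simple real-number arithmetic: Prepare turns input_scale * alpha / output_scale and input_scale / output_scale into fixed-point (multiplier, shift) pairs via QuantizeMultiplier, and Eval applies whichever pair matches the sign of the zero-point-centered input. The floating-point sketch below approximates that fixed-point path; the helper and its sample values are illustrative, not the kernel's actual code.

```cc
// Floating-point approximation of the int8 LeakyRelu path in the ported
// kernel: the two double ratios stand in for Prepare's (multiplier, shift)
// pairs. Illustrative helper only, not TFLite code.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>

int8_t QuantizedLeakyReluApprox(int8_t q_in, float in_scale, int32_t in_zp,
                                float out_scale, int32_t out_zp, float alpha) {
  const int32_t centered = static_cast<int32_t>(q_in) - in_zp;
  const double identity_ratio = static_cast<double>(in_scale) / out_scale;
  const double alpha_ratio = static_cast<double>(in_scale) * alpha / out_scale;
  const double scaled =
      centered >= 0 ? centered * identity_ratio : centered * alpha_ratio;
  const int32_t unclamped = out_zp + static_cast<int32_t>(std::lround(scaled));
  // Clamp to the int8 representable range, as the reference kernel does.
  return static_cast<int8_t>(
      std::min<int32_t>(127, std::max<int32_t>(-128, unclamped)));
}

int main() {
  // With matching scales and zero points and alpha = 0.5, an input two
  // quantization steps below zero lands one step below zero on the output.
  std::printf("%d\n",
              QuantizedLeakyReluApprox(-2, 0.0625f, 0, 0.0625f, 0, 0.5f));
  return 0;
}
```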
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator LEAKY_RELU from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro making minimal changes and not including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46161\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46161\">No</a>\n",
"created_at": "2021-03-06T18:47:38Z"
}
],
"number": 46161,
"title": "micro: port op LEAKY_RELU from lite"
}
|
{
"body": "Implement skeleton (non-working) code for operator and test.\r\nHeader files changed.\r\nNamespaces changed.\r\nSome original code deleted.\r\nSome original code modified.\r\n\r\nPR step 4 of the work to port operator LEAKY_RELU as tracked in Issue #46161",
"number": 46213,
"review_comments": [],
"title": "micro: prepare to port operator LEAKY_RELU kernel from lite with test"
}
|
{
"commits": [
{
"message": "micro: prepare to port operator LEAKY_RELU kernel from lite with test\n\nImplement skeleton (non-working) code for operator and test.\nHeader files changed.\nNamespaces changed.\nSome original code deleted.\nSome original code modified.\n\nPR step 4 of the work to port operator LEAKY_RELU as tracked in Issue #46161"
}
],
"files": [
{
"diff": "@@ -19,7 +19,6 @@ limitations under the License.\n #include <limits>\n \n #include \"tensorflow/lite/kernels/internal/common.h\"\n-#include \"tensorflow/lite/kernels/internal/types.h\"\n \n namespace tflite {\n namespace reference_ops {",
"filename": "tensorflow/lite/kernels/internal/reference/leaky_relu.h",
"status": "modified"
},
{
"diff": "@@ -1,4 +1,4 @@\n-/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.\n+/* Copyright 2020 The TensorFlow Authors. All Rights Reserved.\n \n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n@@ -12,58 +12,26 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n ==============================================================================*/\n-#include <stddef.h>\n \n-#include <algorithm>\n-#include <cmath>\n-#include <cstdint>\n-#include <functional>\n-#include <limits>\n+#include \"tensorflow/lite/kernels/internal/reference/leaky_relu.h\"\n \n-#include \"tensorflow/lite/c/builtin_op_data.h\"\n #include \"tensorflow/lite/c/common.h\"\n-#include \"tensorflow/lite/kernels/cpu_backend_context.h\"\n-#include \"tensorflow/lite/kernels/internal/common.h\"\n-#include \"tensorflow/lite/kernels/internal/compatibility.h\"\n-#include \"tensorflow/lite/kernels/internal/cppmath.h\"\n-#include \"tensorflow/lite/kernels/internal/optimized/optimized_ops.h\"\n #include \"tensorflow/lite/kernels/internal/quantization_util.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/binary_function.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/integer_ops/log_softmax.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/integer_ops/logistic.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/integer_ops/tanh.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/logistic.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/prelu.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/reference_ops.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/softmax.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/tanh.h\"\n-#include \"tensorflow/lite/kernels/internal/tensor.h\"\n-#include \"tensorflow/lite/kernels/internal/tensor_ctypes.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/process_broadcast_shapes.h\"\n #include \"tensorflow/lite/kernels/internal/types.h\"\n #include \"tensorflow/lite/kernels/kernel_util.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_util.h\"\n \n namespace tflite {\n namespace ops {\n-namespace builtin {\n+namespace micro {\n namespace activations {\n namespace {\n \n // OLD-TODO(b/142762739): We should figure out a multi-threading plan for most\n // of the activation ops below.\n \n-enum KernelType {\n- kReference,\n- kGenericOptimized,\n- kFixedPointOptimized,\n-};\n-\n-struct OpData {\n- int32_t input_multiplier = 0;\n- int input_left_shift = 0;\n- int32_t input_range_radius = 0;\n- int diff_min = 0;\n- uint8_t table[256] = {0};\n-};\n+struct OpData {};\n \n struct LeakyReluOpData : public OpData {\n int32_t output_multiplier_alpha = 0;\n@@ -91,11 +59,7 @@ void QuantizeLeakyRelu(const TfLiteTensor* input, TfLiteTensor* output,\n } // namespace\n \n void* LeakyReluInit(TfLiteContext* context, const char* buffer, size_t length) {\n- return new LeakyReluOpData;\n-}\n-\n-void LeakyReluFree(TfLiteContext* context, void* buffer) {\n- delete reinterpret_cast<LeakyReluOpData*>(buffer);\n+ return nullptr;\n }\n \n TfLiteStatus LeakyReluPrepare(TfLiteContext* context, TfLiteNode* node) {\n@@ -128,8 +92,7 @@ TfLiteStatus LeakyReluPrepare(TfLiteContext* context, TfLiteNode* node) {\n TF_LITE_ENSURE_EQ(context, output->params.zero_point, 0);\n }\n \n- return context->ResizeTensor(context, 
output,\n- TfLiteIntArrayCopy(input->dims));\n+ return kTfLiteError;\n }\n \n TfLiteStatus LeakyReluEval(TfLiteContext* context, TfLiteNode* node) {\n@@ -146,7 +109,7 @@ TfLiteStatus LeakyReluEval(TfLiteContext* context, TfLiteNode* node) {\n switch (input->type) {\n case kTfLiteFloat32: {\n op_params.alpha = params->alpha;\n- optimized_ops::LeakyRelu(\n+ reference_ops::LeakyRelu(\n op_params, GetTensorShape(input), GetTensorData<float>(input),\n GetTensorShape(output), GetTensorData<float>(output));\n return kTfLiteOk;\n@@ -174,13 +137,8 @@ TfLiteStatus LeakyReluEval(TfLiteContext* context, TfLiteNode* node) {\n \n } // namespace activations\n \n-TfLiteRegistration* Register_LEAKY_RELU() {\n- static TfLiteRegistration r = {\n- activations::LeakyReluInit, activations::LeakyReluFree,\n- activations::LeakyReluPrepare, activations::LeakyReluEval};\n- return &r;\n-}\n+TfLiteRegistration* Register_LEAKY_RELU() { return nullptr; }\n \n-} // namespace builtin\n+} // namespace micro\n } // namespace ops\n } // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/leaky_relu.cc",
"status": "modified"
},
{
"diff": "@@ -1,4 +1,4 @@\n-/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.\n+/* Copyright 2020 The TensorFlow Authors. All Rights Reserved.\n \n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n@@ -12,65 +12,20 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n ==============================================================================*/\n-#include <math.h>\n-#include <stdint.h>\n-#include <stdlib.h>\n \n-#include <algorithm>\n-#include <initializer_list>\n #include <limits>\n-#include <map>\n-#include <memory>\n-#include <random>\n-#include <string>\n-#include <utility>\n-#include <vector>\n-\n-#include \"flatbuffers/flatbuffers.h\" // from @flatbuffers\n-#include \"tensorflow/lite/core/api/op_resolver.h\"\n-#include \"tensorflow/lite/interpreter.h\"\n-#include \"tensorflow/lite/kernels/test_util.h\"\n-#include \"tensorflow/lite/schema/schema_generated.h\"\n-#include \"tensorflow/lite/string_type.h\"\n+#include <type_traits>\n+\n+#include \"tensorflow/lite/c/builtin_op_data.h\"\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_runner.h\"\n+#include \"tensorflow/lite/micro/test_helpers.h\"\n+#include \"tensorflow/lite/micro/testing/micro_test.h\"\n \n namespace tflite {\n+namespace testing {\n namespace {\n \n-using ::testing::ElementsAreArray;\n-\n-class BaseActivationsOpModel : public SingleOpModel {\n- public:\n- // A dedicated constructor for LeakyRelu, which does some options.\n- BaseActivationsOpModel(TensorData input, float alpha) {\n- input_ = AddInput(input);\n- // The output scale and input scale might be different.\n- if (input.type == TensorType_UINT8 || input.type == TensorType_INT8 ||\n- input.type == TensorType_INT16) {\n- auto output_min = (input.min >= 0) ? input.min : input.min * alpha;\n- auto output_max = (input.max >= 0) ? input.max : input.max * alpha;\n- if (input.type == TensorType_INT16) {\n- output_ = AddOutput({TensorType_INT16,\n- {},\n- 0,\n- 0,\n- output_max / (std::numeric_limits<int16_t>::max()),\n- 0});\n- } else {\n- output_ = AddOutput({input.type, {}, output_min, output_max});\n- }\n- } else {\n- output_ = AddOutput({input.type, {}});\n- }\n- SetBuiltinOp(BuiltinOperator_LEAKY_RELU, BuiltinOptions_LeakyReluOptions,\n- CreateLeakyReluOptions(builder_, alpha).Union());\n- BuildInterpreter({GetShape(input_)});\n- }\n-\n- protected:\n- int input_;\n- int output_;\n-};\n-\n // Our fixed-point math function implementations have roughly 12 bits of\n // accuracy, when specialized to 16-bit fixed-point arithmetic.\n // That is purely an implementation compromise, it would have been possible\n@@ -90,53 +45,13 @@ class BaseActivationsOpModel : public SingleOpModel {\n const float kQuantizedTolerance = 2 * (1. / 256);\n const float kQuantizedToleranceInt16 = 2 * (1. 
/ 4096);\n \n-class QuantizedActivationsOpModel : public BaseActivationsOpModel {\n- public:\n- using BaseActivationsOpModel::BaseActivationsOpModel;\n-\n- template <typename T>\n- void SetInput(const std::vector<float>& data) {\n- QuantizeAndPopulate<T>(input_, data);\n- }\n- template <typename T>\n- std::vector<T> GetOutput() {\n- return ExtractVector<T>(output_);\n- }\n-\n- template <typename T>\n- std::vector<float> GetDequantizedOutput() {\n- return Dequantize<T>(ExtractVector<T>(output_), GetScale(output_),\n- GetZeroPoint(output_));\n- }\n-};\n-\n-TEST(QuantizedActivationsOpTest, LeakyReluUint8) {\n- const float kMin = -1;\n- const float kMax = 127.f / 128.f;\n- QuantizedActivationsOpModel m(\n- /*input=*/{TensorType_UINT8, {2, 3}, 8 * kMin, 8 * kMax}, 0.5);\n-\n- m.SetInput<uint8_t>({\n- 0.0f, 1.0f, 3.0f, // Row 1\n- 1.0f, -1.0f, -2.0f, // Row 2\n- });\n- m.Invoke();\n- EXPECT_THAT(m.GetDequantizedOutput<uint8_t>(),\n- ElementsAreArray(ArrayFloatNear(\n- {\n- 0.0f, 1.0f, 3.0f, // Row 1\n- 1.0f, -0.5f, -1.0f, // Row 2\n- },\n- kQuantizedTolerance * 8)));\n-}\n-\n template <TensorType tensor_type, typename integer_dtype>\n void QuantizedActivationsOpTestLeakyRelu() {\n const float kMin = -1;\n const float kMax =\n std::numeric_limits<integer_dtype>::max() /\n static_cast<float>(std::numeric_limits<integer_dtype>::max() + 1);\n-\n+#ifdef notdef\n QuantizedActivationsOpModel m(\n /*input=*/{tensor_type, {5, 5}, 5 * kMin, 5 * kMax}, 0.1);\n \n@@ -147,7 +62,6 @@ void QuantizedActivationsOpTestLeakyRelu() {\n 1.0f, 1.4f, 1.8f, 2.2f, 2.6f, // Row 4\n 3.0f, 3.4f, 3.8f, 4.2f, 4.6f, // Row 5\n });\n- m.Invoke();\n \n float kTestQuantizedTolerance = tensor_type == TensorType_INT16\n ? kQuantizedToleranceInt16\n@@ -163,36 +77,42 @@ void QuantizedActivationsOpTestLeakyRelu() {\n 3.00f, 3.40f, 3.80f, 4.20f, 4.60f, // Row 5\n },\n kTestQuantizedTolerance)));\n+#endif // notdef\n }\n \n-TEST(QuantizedActivationsOpTest, LeakyReluInt8) {\n+TF_LITE_MICRO_TESTS_BEGIN\n+\n+TF_LITE_MICRO_TEST(QuantizedActivationsOpTestLeakyReluUint8) {\n+ const float kMin = -1;\n+ const float kMax = 127.f / 128.f;\n+#ifdef notdef\n+ QuantizedActivationsOpModel m(\n+ /*input=*/{TensorType_UINT8, {2, 3}, 8 * kMin, 8 * kMax}, 0.5);\n+\n+ m.SetInput<uint8_t>({\n+ 0.0f, 1.0f, 3.0f, // Row 1\n+ 1.0f, -1.0f, -2.0f, // Row 2\n+ });\n+ EXPECT_THAT(m.GetDequantizedOutput<uint8_t>(),\n+ ElementsAreArray(ArrayFloatNear(\n+ {\n+ 0.0f, 1.0f, 3.0f, // Row 1\n+ 1.0f, -0.5f, -1.0f, // Row 2\n+ },\n+ kQuantizedTolerance * 8)));\n+#endif // notdef\n+}\n+\n+TF_LITE_MICRO_TEST(QuantizedActivationsOpTestLeakyReluInt8) {\n QuantizedActivationsOpTestLeakyRelu<TensorType_INT8, int8_t>();\n }\n \n-TEST(QuantizedActivationsOpTest, LeakyReluInt16) {\n+TF_LITE_MICRO_TEST(QuantizedActivationsOpTestLeakyReluInt16) {\n QuantizedActivationsOpTestLeakyRelu<TensorType_INT16, int16_t>();\n }\n \n-class LeakyReluOpModel : public SingleOpModel {\n- public:\n- LeakyReluOpModel(const TensorData& input, float alpha) {\n- input_ = AddInput(input);\n- output_ = AddOutput(input);\n- SetBuiltinOp(BuiltinOperator_LEAKY_RELU, BuiltinOptions_LeakyReluOptions,\n- CreateLeakyReluOptions(builder_, alpha).Union());\n- BuildInterpreter({GetShape(input_)});\n- }\n- void SetInput(std::initializer_list<float> data) {\n- PopulateTensor(input_, data);\n- }\n- std::vector<float> GetOutput() { return ExtractVector<float>(output_); }\n-\n- protected:\n- int input_;\n- int output_;\n-};\n-\n-TEST(FloatActivationsOpTest, LeakyRelu) 
{\n+TF_LITE_MICRO_TEST(FloatActivationsOpTestLeakyRelu) {\n+#ifdef notdef\n LeakyReluOpModel m({TensorType_FLOAT32, {2, 3}}, 0.5f);\n \n m.SetInput({\n@@ -204,7 +124,11 @@ TEST(FloatActivationsOpTest, LeakyRelu) {\n 0.0f, 1.0f, 3.0f, // Row 1\n 1.0f, -0.5f, -1.0f, // Row 2\n }));\n+#endif // notdef\n }\n \n+TF_LITE_MICRO_TESTS_END\n+\n } // namespace\n+} // namespace testing\n } // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/leaky_relu_test.cc",
"status": "modified"
}
]
}
|
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator LEAKY_RELU from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro making minimal changes and not including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46161\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46161\">No</a>\n",
"created_at": "2021-03-06T18:47:38Z"
}
],
"number": 46161,
"title": "micro: port op LEAKY_RELU from lite"
}
|
{
"body": "This is a copy with minimal modification of the kernel and test for\r\noperator LEAKY_RELU from tensorflow/lite/kernels.\r\nAdaptations to micro and addition to the micro build to follow.\r\n\r\nPR step 3 for issue #46161",
"number": 46212,
"review_comments": [],
"title": "micro: copy operator LEAKY_RELU kernel from lite"
}
|
{
"commits": [
{
"message": "Extract reference for operator LEAKY_RELU to standalone header\n\nMove the reference implementation to its own header so that micro\ncan use it without the unrelated depedencies of reference_ops.h.\n\nPR step 2 for issue #46161"
},
{
"message": "micro: copy operator LEAKY_RELU kernel from lite\n\nThis is a copy with minimal modification of the kernel and test for\noperator LEAKY_RELU from tensorflow/lite/kernels.\nAdaptations to micro and addition to the micro build to follow.\n\nPR step 3 for issue #46161"
},
{
"message": "correct copyright notice formatting"
},
{
"message": "Merge branch 'LeakyRelu-pr2' into LeakyRelu-pr3"
},
{
"message": "Remove include files that do not pass backend tests\n\nRemoved gmock/gtest header files"
}
],
"files": [
{
"diff": "@@ -245,6 +245,10 @@ TfLiteStatus ParseOpDataTfLite(const Operator* op, BuiltinOperator op_type,\n return ParsePool(op, error_reporter, allocator, builtin_data);\n }\n \n+ case BuiltinOperator_LEAKY_RELU: {\n+ return ParseLeakyRelu(op, error_reporter, allocator, builtin_data);\n+ }\n+\n case BuiltinOperator_LESS: {\n return ParseLess(op, error_reporter, allocator, builtin_data);\n }\n@@ -674,16 +678,6 @@ TfLiteStatus ParseOpDataTfLite(const Operator* op, BuiltinOperator op_type,\n *builtin_data = params.release();\n return kTfLiteOk;\n }\n- case BuiltinOperator_LEAKY_RELU: {\n- auto params = safe_allocator.Allocate<TfLiteLeakyReluParams>();\n- TF_LITE_ENSURE(error_reporter, params != nullptr);\n- if (const auto* leaky_relu_params =\n- op->builtin_options_as_LeakyReluOptions()) {\n- params->alpha = leaky_relu_params->alpha();\n- }\n- *builtin_data = params.release();\n- return kTfLiteOk;\n- }\n case BuiltinOperator_MIRROR_PAD: {\n auto params = safe_allocator.Allocate<TfLiteMirrorPaddingParams>();\n TF_LITE_ENSURE(error_reporter, params != nullptr);\n@@ -1247,6 +1241,22 @@ TfLiteStatus ParseL2Normalization(const Operator* op,\n return kTfLiteOk;\n }\n \n+TfLiteStatus ParseLeakyRelu(const Operator* op, ErrorReporter* error_reporter,\n+ BuiltinDataAllocator* allocator,\n+ void** builtin_data) {\n+ CheckParsePointerParams(op, error_reporter, allocator, builtin_data);\n+\n+ SafeBuiltinDataAllocator safe_allocator(allocator);\n+ auto params = safe_allocator.Allocate<TfLiteLeakyReluParams>();\n+ TF_LITE_ENSURE(error_reporter, params != nullptr);\n+ if (const auto* leaky_relu_params =\n+ op->builtin_options_as_LeakyReluOptions()) {\n+ params->alpha = leaky_relu_params->alpha();\n+ }\n+ *builtin_data = params.release();\n+ return kTfLiteOk;\n+}\n+\n // We have this parse function instead of directly returning kTfLiteOk from the\n // switch-case in ParseOpData because this function is used as part of the\n // selective registration for the OpResolver implementation in micro.",
"filename": "tensorflow/lite/core/api/flatbuffer_conversions.cc",
"status": "modified"
},
{
"diff": "@@ -148,6 +148,10 @@ TfLiteStatus ParseL2Normalization(const Operator* op,\n BuiltinDataAllocator* allocator,\n void** builtin_data);\n \n+TfLiteStatus ParseLeakyRelu(const Operator* op, ErrorReporter* error_reporter,\n+ BuiltinDataAllocator* allocator,\n+ void** builtin_data);\n+\n TfLiteStatus ParseLess(const Operator* op, ErrorReporter* error_reporter,\n BuiltinDataAllocator* allocator, void** builtin_data);\n ",
"filename": "tensorflow/lite/core/api/flatbuffer_conversions.h",
"status": "modified"
},
{
"diff": "@@ -480,6 +480,7 @@ cc_library(\n \"reference/integer_ops/tanh.h\",\n \"reference/integer_ops/transpose_conv.h\",\n \"reference/l2normalization.h\",\n+ \"reference/leaky_relu.h\",\n \"reference/logistic.h\",\n \"reference/maximum_minimum.h\",\n \"reference/mul.h\",\n@@ -576,6 +577,7 @@ cc_library(\n \"reference/fully_connected.h\",\n \"reference/hard_swish.h\",\n \"reference/l2normalization.h\",\n+ \"reference/leaky_relu.h\",\n \"reference/legacy_reference_ops.h\",\n \"reference/logistic.h\",\n \"reference/maximum_minimum.h\",",
"filename": "tensorflow/lite/kernels/internal/BUILD",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,70 @@\n+/* Copyright 2020 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#ifndef TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_LEAKY_RELU_H_\n+#define TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_LEAKY_RELU_H_\n+\n+#include <algorithm>\n+#include <limits>\n+\n+#include \"tensorflow/lite/kernels/internal/common.h\"\n+#include \"tensorflow/lite/kernels/internal/types.h\"\n+\n+namespace tflite {\n+namespace reference_ops {\n+\n+inline void LeakyRelu(const tflite::LeakyReluParams& params,\n+ const RuntimeShape& input_shape, const float* input_data,\n+ const RuntimeShape& output_shape, float* output_data) {\n+ const int flat_size = MatchingFlatSize(input_shape, output_shape);\n+ for (int i = 0; i < flat_size; ++i) {\n+ const float val = input_data[i];\n+ // Note that alpha might be > 1 or < 0, so we don't use std::max here.\n+ output_data[i] = val > 0 ? val : val * params.alpha;\n+ }\n+}\n+\n+template <typename T>\n+inline void QuantizeLeakyRelu(const LeakyReluParams& params,\n+ const RuntimeShape& input_shape,\n+ const T* input_data,\n+ const RuntimeShape& output_shape,\n+ T* output_data) {\n+ const int flat_size = MatchingFlatSize(input_shape, output_shape);\n+ static const int32_t quantized_min = std::numeric_limits<T>::min();\n+ static const int32_t quantized_max = std::numeric_limits<T>::max();\n+ for (int i = 0; i < flat_size; ++i) {\n+ const int32_t input_value = input_data[i] - params.input_offset;\n+ int32_t unclamped_output;\n+ if (input_value >= 0) {\n+ unclamped_output = params.output_offset +\n+ MultiplyByQuantizedMultiplier(\n+ input_value, params.output_multiplier_identity,\n+ params.output_shift_identity);\n+ } else {\n+ unclamped_output = params.output_offset +\n+ MultiplyByQuantizedMultiplier(\n+ input_value, params.output_multiplier_alpha,\n+ params.output_shift_alpha);\n+ }\n+ const T clamped_output =\n+ std::min(quantized_max, std::max(quantized_min, unclamped_output));\n+ output_data[i] = static_cast<T>(clamped_output);\n+ }\n+}\n+\n+} // namespace reference_ops\n+} // namespace tflite\n+\n+#endif // TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_LEAKY_RELU_H_",
"filename": "tensorflow/lite/kernels/internal/reference/leaky_relu.h",
"status": "added"
},
{
"diff": "@@ -48,6 +48,7 @@ limitations under the License.\n #include \"tensorflow/lite/kernels/internal/reference/fully_connected.h\"\n #include \"tensorflow/lite/kernels/internal/reference/hard_swish.h\"\n #include \"tensorflow/lite/kernels/internal/reference/l2normalization.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/leaky_relu.h\"\n #include \"tensorflow/lite/kernels/internal/reference/logistic.h\"\n #include \"tensorflow/lite/kernels/internal/reference/maximum_minimum.h\"\n #include \"tensorflow/lite/kernels/internal/reference/mul.h\"\n@@ -211,48 +212,6 @@ inline void ReluX(const tflite::ActivationParams& params,\n }\n }\n \n-inline void LeakyRelu(const tflite::LeakyReluParams& params,\n- const RuntimeShape& input_shape, const float* input_data,\n- const RuntimeShape& output_shape, float* output_data) {\n- ruy::profiler::ScopeLabel label(\"LeakyRelu (not fused)\");\n- const int flat_size = MatchingFlatSize(input_shape, output_shape);\n- for (int i = 0; i < flat_size; ++i) {\n- const float val = input_data[i];\n- // Note that alpha might be > 1 or < 0, so we don't use std::max here.\n- output_data[i] = val > 0 ? val : val * params.alpha;\n- }\n-}\n-\n-template <typename T>\n-inline void QuantizeLeakyRelu(const LeakyReluParams& params,\n- const RuntimeShape& input_shape,\n- const T* input_data,\n- const RuntimeShape& output_shape,\n- T* output_data) {\n- ruy::profiler::ScopeLabel label(\"Quantized LeakyRelu (not fused)\");\n- const int flat_size = MatchingFlatSize(input_shape, output_shape);\n- static const int32 quantized_min = std::numeric_limits<T>::min();\n- static const int32 quantized_max = std::numeric_limits<T>::max();\n- for (int i = 0; i < flat_size; ++i) {\n- const int32 input_value = input_data[i] - params.input_offset;\n- int32 unclamped_output;\n- if (input_value >= 0) {\n- unclamped_output = params.output_offset +\n- MultiplyByQuantizedMultiplier(\n- input_value, params.output_multiplier_identity,\n- params.output_shift_identity);\n- } else {\n- unclamped_output = params.output_offset +\n- MultiplyByQuantizedMultiplier(\n- input_value, params.output_multiplier_alpha,\n- params.output_shift_alpha);\n- }\n- const T clamped_output =\n- std::min(quantized_max, std::max(quantized_min, unclamped_output));\n- output_data[i] = static_cast<T>(clamped_output);\n- }\n-}\n-\n // T is expected to be either float or int.\n template <typename T>\n inline void AddN(const RuntimeShape& input_shape, const size_t num_inputs,",
"filename": "tensorflow/lite/kernels/internal/reference/reference_ops.h",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,186 @@\n+/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#include <stddef.h>\n+\n+#include <algorithm>\n+#include <cmath>\n+#include <cstdint>\n+#include <functional>\n+#include <limits>\n+\n+#include \"tensorflow/lite/c/builtin_op_data.h\"\n+#include \"tensorflow/lite/c/common.h\"\n+#include \"tensorflow/lite/kernels/cpu_backend_context.h\"\n+#include \"tensorflow/lite/kernels/internal/common.h\"\n+#include \"tensorflow/lite/kernels/internal/compatibility.h\"\n+#include \"tensorflow/lite/kernels/internal/cppmath.h\"\n+#include \"tensorflow/lite/kernels/internal/optimized/optimized_ops.h\"\n+#include \"tensorflow/lite/kernels/internal/quantization_util.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/binary_function.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/integer_ops/log_softmax.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/integer_ops/logistic.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/integer_ops/tanh.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/logistic.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/prelu.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/reference_ops.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/softmax.h\"\n+#include \"tensorflow/lite/kernels/internal/reference/tanh.h\"\n+#include \"tensorflow/lite/kernels/internal/tensor.h\"\n+#include \"tensorflow/lite/kernels/internal/tensor_ctypes.h\"\n+#include \"tensorflow/lite/kernels/internal/types.h\"\n+#include \"tensorflow/lite/kernels/kernel_util.h\"\n+\n+namespace tflite {\n+namespace ops {\n+namespace builtin {\n+namespace activations {\n+namespace {\n+\n+// OLD-TODO(b/142762739): We should figure out a multi-threading plan for most\n+// of the activation ops below.\n+\n+enum KernelType {\n+ kReference,\n+ kGenericOptimized,\n+ kFixedPointOptimized,\n+};\n+\n+struct OpData {\n+ int32_t input_multiplier = 0;\n+ int input_left_shift = 0;\n+ int32_t input_range_radius = 0;\n+ int diff_min = 0;\n+ uint8_t table[256] = {0};\n+};\n+\n+struct LeakyReluOpData : public OpData {\n+ int32_t output_multiplier_alpha = 0;\n+ int32_t output_shift_alpha = 0;\n+ int32_t output_multiplier_identity = 0;\n+ int32_t output_shift_identity = 0;\n+};\n+\n+template <typename T>\n+void QuantizeLeakyRelu(const TfLiteTensor* input, TfLiteTensor* output,\n+ const LeakyReluOpData* data) {\n+ LeakyReluParams op_params;\n+\n+ op_params.input_offset = input->params.zero_point;\n+ op_params.output_offset = output->params.zero_point;\n+ op_params.output_multiplier_alpha = data->output_multiplier_alpha;\n+ op_params.output_shift_alpha = data->output_shift_alpha;\n+ op_params.output_multiplier_identity = data->output_multiplier_identity;\n+ op_params.output_shift_identity = data->output_shift_identity;\n+ reference_ops::QuantizeLeakyRelu(\n+ op_params, GetTensorShape(input), 
GetTensorData<T>(input),\n+ GetTensorShape(output), GetTensorData<T>(output));\n+}\n+\n+} // namespace\n+\n+void* LeakyReluInit(TfLiteContext* context, const char* buffer, size_t length) {\n+ return new LeakyReluOpData;\n+}\n+\n+void LeakyReluFree(TfLiteContext* context, void* buffer) {\n+ delete reinterpret_cast<LeakyReluOpData*>(buffer);\n+}\n+\n+TfLiteStatus LeakyReluPrepare(TfLiteContext* context, TfLiteNode* node) {\n+ TF_LITE_ENSURE_EQ(context, NumInputs(node), 1);\n+ TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);\n+ const TfLiteTensor* input;\n+ TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, 0, &input));\n+ TfLiteTensor* output;\n+ TF_LITE_ENSURE_OK(context, GetOutputSafe(context, node, 0, &output));\n+ TF_LITE_ENSURE_TYPES_EQ(context, input->type, output->type);\n+\n+ LeakyReluOpData* data = reinterpret_cast<LeakyReluOpData*>(node->user_data);\n+\n+ if (output->type == kTfLiteUInt8 || output->type == kTfLiteInt8 ||\n+ output->type == kTfLiteInt16) {\n+ const auto* params =\n+ reinterpret_cast<TfLiteLeakyReluParams*>(node->builtin_data);\n+\n+ double alpha_multiplier =\n+ input->params.scale * params->alpha / output->params.scale;\n+ QuantizeMultiplier(alpha_multiplier, &data->output_multiplier_alpha,\n+ &data->output_shift_alpha);\n+ double identity_multiplier = input->params.scale / output->params.scale;\n+ QuantizeMultiplier(identity_multiplier, &data->output_multiplier_identity,\n+ &data->output_shift_identity);\n+ }\n+\n+ if (input->type == kTfLiteInt16 && output->type == kTfLiteInt16) {\n+ TF_LITE_ENSURE_EQ(context, input->params.zero_point, 0);\n+ TF_LITE_ENSURE_EQ(context, output->params.zero_point, 0);\n+ }\n+\n+ return context->ResizeTensor(context, output,\n+ TfLiteIntArrayCopy(input->dims));\n+}\n+\n+TfLiteStatus LeakyReluEval(TfLiteContext* context, TfLiteNode* node) {\n+ const TfLiteTensor* input;\n+ TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, 0, &input));\n+ TfLiteTensor* output;\n+ TF_LITE_ENSURE_OK(context, GetOutputSafe(context, node, 0, &output));\n+ const auto* params =\n+ reinterpret_cast<TfLiteLeakyReluParams*>(node->builtin_data);\n+ const LeakyReluOpData* data =\n+ reinterpret_cast<LeakyReluOpData*>(node->user_data);\n+\n+ LeakyReluParams op_params;\n+ switch (input->type) {\n+ case kTfLiteFloat32: {\n+ op_params.alpha = params->alpha;\n+ optimized_ops::LeakyRelu(\n+ op_params, GetTensorShape(input), GetTensorData<float>(input),\n+ GetTensorShape(output), GetTensorData<float>(output));\n+ return kTfLiteOk;\n+ } break;\n+ case kTfLiteUInt8: {\n+ QuantizeLeakyRelu<uint8_t>(input, output, data);\n+ return kTfLiteOk;\n+ } break;\n+ case kTfLiteInt8: {\n+ QuantizeLeakyRelu<int8_t>(input, output, data);\n+ return kTfLiteOk;\n+ } break;\n+ case kTfLiteInt16: {\n+ QuantizeLeakyRelu<int16_t>(input, output, data);\n+ return kTfLiteOk;\n+ } break;\n+ default:\n+ TF_LITE_KERNEL_LOG(\n+ context,\n+ \"Only float32, int8, int16 and uint8 is supported currently, got %s.\",\n+ TfLiteTypeGetName(input->type));\n+ return kTfLiteError;\n+ }\n+}\n+\n+} // namespace activations\n+\n+TfLiteRegistration* Register_LEAKY_RELU() {\n+ static TfLiteRegistration r = {\n+ activations::LeakyReluInit, activations::LeakyReluFree,\n+ activations::LeakyReluPrepare, activations::LeakyReluEval};\n+ return &r;\n+}\n+\n+} // namespace builtin\n+} // namespace ops\n+} // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/leaky_relu.cc",
"status": "added"
},
{
"diff": "@@ -0,0 +1,210 @@\n+/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.\n+\n+Licensed under the Apache License, Version 2.0 (the \"License\");\n+you may not use this file except in compliance with the License.\n+You may obtain a copy of the License at\n+\n+ http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+==============================================================================*/\n+#include <math.h>\n+#include <stdint.h>\n+#include <stdlib.h>\n+\n+#include <algorithm>\n+#include <initializer_list>\n+#include <limits>\n+#include <map>\n+#include <memory>\n+#include <random>\n+#include <string>\n+#include <utility>\n+#include <vector>\n+\n+#include \"flatbuffers/flatbuffers.h\" // from @flatbuffers\n+#include \"tensorflow/lite/core/api/op_resolver.h\"\n+#include \"tensorflow/lite/interpreter.h\"\n+#include \"tensorflow/lite/kernels/test_util.h\"\n+#include \"tensorflow/lite/schema/schema_generated.h\"\n+#include \"tensorflow/lite/string_type.h\"\n+\n+namespace tflite {\n+namespace {\n+\n+using ::testing::ElementsAreArray;\n+\n+class BaseActivationsOpModel : public SingleOpModel {\n+ public:\n+ // A dedicated constructor for LeakyRelu, which does some options.\n+ BaseActivationsOpModel(TensorData input, float alpha) {\n+ input_ = AddInput(input);\n+ // The output scale and input scale might be different.\n+ if (input.type == TensorType_UINT8 || input.type == TensorType_INT8 ||\n+ input.type == TensorType_INT16) {\n+ auto output_min = (input.min >= 0) ? input.min : input.min * alpha;\n+ auto output_max = (input.max >= 0) ? input.max : input.max * alpha;\n+ if (input.type == TensorType_INT16) {\n+ output_ = AddOutput({TensorType_INT16,\n+ {},\n+ 0,\n+ 0,\n+ output_max / (std::numeric_limits<int16_t>::max()),\n+ 0});\n+ } else {\n+ output_ = AddOutput({input.type, {}, output_min, output_max});\n+ }\n+ } else {\n+ output_ = AddOutput({input.type, {}});\n+ }\n+ SetBuiltinOp(BuiltinOperator_LEAKY_RELU, BuiltinOptions_LeakyReluOptions,\n+ CreateLeakyReluOptions(builder_, alpha).Union());\n+ BuildInterpreter({GetShape(input_)});\n+ }\n+\n+ protected:\n+ int input_;\n+ int output_;\n+};\n+\n+// Our fixed-point math function implementations have roughly 12 bits of\n+// accuracy, when specialized to 16-bit fixed-point arithmetic.\n+// That is purely an implementation compromise, it would have been possible\n+// to get closer to 16 bits of accuracy but that would be more expensive,\n+// and not needed for our purposes as ultimately the output is either\n+// immediately down-quantized to 8 bits, or will typically be at the output\n+// of the surrounding LSTM cell.\n+// So we can require roughly 2^-12 accuracy when the output is 16-bit, and\n+// we can more or less expect the full 2^-8 accuracy when the output is 8-bit.\n+//\n+// However, the representable output interval is often [-1, 1] (it has to be\n+// for tanh, and even for logistic, when we implement it in fixed-point, we\n+// typically have to do so on such a symmetric interval, e.g. ARM NEON only\n+// has signed fixed-point arithmetic (SQRDMULH)). 
As the width of [-1, 1]\n+// is 2, our representable values are often diluted by a factor of 2, whence\n+// the factor of 2 below.\n+const float kQuantizedTolerance = 2 * (1. / 256);\n+const float kQuantizedToleranceInt16 = 2 * (1. / 4096);\n+\n+class QuantizedActivationsOpModel : public BaseActivationsOpModel {\n+ public:\n+ using BaseActivationsOpModel::BaseActivationsOpModel;\n+\n+ template <typename T>\n+ void SetInput(const std::vector<float>& data) {\n+ QuantizeAndPopulate<T>(input_, data);\n+ }\n+ template <typename T>\n+ std::vector<T> GetOutput() {\n+ return ExtractVector<T>(output_);\n+ }\n+\n+ template <typename T>\n+ std::vector<float> GetDequantizedOutput() {\n+ return Dequantize<T>(ExtractVector<T>(output_), GetScale(output_),\n+ GetZeroPoint(output_));\n+ }\n+};\n+\n+TEST(QuantizedActivationsOpTest, LeakyReluUint8) {\n+ const float kMin = -1;\n+ const float kMax = 127.f / 128.f;\n+ QuantizedActivationsOpModel m(\n+ /*input=*/{TensorType_UINT8, {2, 3}, 8 * kMin, 8 * kMax}, 0.5);\n+\n+ m.SetInput<uint8_t>({\n+ 0.0f, 1.0f, 3.0f, // Row 1\n+ 1.0f, -1.0f, -2.0f, // Row 2\n+ });\n+ m.Invoke();\n+ EXPECT_THAT(m.GetDequantizedOutput<uint8_t>(),\n+ ElementsAreArray(ArrayFloatNear(\n+ {\n+ 0.0f, 1.0f, 3.0f, // Row 1\n+ 1.0f, -0.5f, -1.0f, // Row 2\n+ },\n+ kQuantizedTolerance * 8)));\n+}\n+\n+template <TensorType tensor_type, typename integer_dtype>\n+void QuantizedActivationsOpTestLeakyRelu() {\n+ const float kMin = -1;\n+ const float kMax =\n+ std::numeric_limits<integer_dtype>::max() /\n+ static_cast<float>(std::numeric_limits<integer_dtype>::max() + 1);\n+\n+ QuantizedActivationsOpModel m(\n+ /*input=*/{tensor_type, {5, 5}, 5 * kMin, 5 * kMax}, 0.1);\n+\n+ m.SetInput<integer_dtype>({\n+ -5.0f, -4.6f, -4.2f, -3.8f, -3.4f, // Row 1\n+ -3.0f, -2.6f, -2.2f, -1.8f, -1.4f, // Row 2\n+ -1.0f, -0.6f, -0.2f, 0.2f, 0.6f, // Row 3\n+ 1.0f, 1.4f, 1.8f, 2.2f, 2.6f, // Row 4\n+ 3.0f, 3.4f, 3.8f, 4.2f, 4.6f, // Row 5\n+ });\n+ m.Invoke();\n+\n+ float kTestQuantizedTolerance = tensor_type == TensorType_INT16\n+ ? 
kQuantizedToleranceInt16\n+ : kQuantizedTolerance * 5;\n+\n+ EXPECT_THAT(m.GetDequantizedOutput<integer_dtype>(),\n+ ElementsAreArray(ArrayFloatNear(\n+ {\n+ -0.50f, -0.46f, -0.42f, -0.38f, -0.34f, // Row 1\n+ -0.30f, -0.26f, -0.22f, -0.18f, -0.14f, // Row 2\n+ -0.10f, -0.06f, -0.02f, 0.20f, 0.60f, // Row 3\n+ 1.00f, 1.40f, 1.80f, 2.20f, 2.60f, // Row 4\n+ 3.00f, 3.40f, 3.80f, 4.20f, 4.60f, // Row 5\n+ },\n+ kTestQuantizedTolerance)));\n+}\n+\n+TEST(QuantizedActivationsOpTest, LeakyReluInt8) {\n+ QuantizedActivationsOpTestLeakyRelu<TensorType_INT8, int8_t>();\n+}\n+\n+TEST(QuantizedActivationsOpTest, LeakyReluInt16) {\n+ QuantizedActivationsOpTestLeakyRelu<TensorType_INT16, int16_t>();\n+}\n+\n+class LeakyReluOpModel : public SingleOpModel {\n+ public:\n+ LeakyReluOpModel(const TensorData& input, float alpha) {\n+ input_ = AddInput(input);\n+ output_ = AddOutput(input);\n+ SetBuiltinOp(BuiltinOperator_LEAKY_RELU, BuiltinOptions_LeakyReluOptions,\n+ CreateLeakyReluOptions(builder_, alpha).Union());\n+ BuildInterpreter({GetShape(input_)});\n+ }\n+ void SetInput(std::initializer_list<float> data) {\n+ PopulateTensor(input_, data);\n+ }\n+ std::vector<float> GetOutput() { return ExtractVector<float>(output_); }\n+\n+ protected:\n+ int input_;\n+ int output_;\n+};\n+\n+TEST(FloatActivationsOpTest, LeakyRelu) {\n+ LeakyReluOpModel m({TensorType_FLOAT32, {2, 3}}, 0.5f);\n+\n+ m.SetInput({\n+ 0.0f, 1.0f, 3.0f, // Row 1\n+ 1.0f, -1.0f, -2.0f, // Row 2\n+ });\n+ m.Invoke();\n+ EXPECT_THAT(m.GetOutput(), ElementsAreArray({\n+ 0.0f, 1.0f, 3.0f, // Row 1\n+ 1.0f, -0.5f, -1.0f, // Row 2\n+ }));\n+}\n+\n+} // namespace\n+} // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/leaky_relu_test.cc",
"status": "added"
}
]
}
|
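The LEAKY_RELU port in the row above routes kTfLiteFloat32 inputs to an element-wise kernel; the underlying rule is simply f(x) = x for x >= 0 and f(x) = alpha * x otherwise. Below is a minimal standalone sketch of that rule — plain C++ with no TFLite dependency; the `LeakyRelu` helper is illustrative rather than the library's API, and the hard-coded values mirror the FloatActivationsOpTest.LeakyRelu case in the diff.

~~~cpp
#include <cstddef>

// Illustrative only: leaky_relu(x, alpha) = x for x >= 0, alpha * x otherwise.
// Values reproduce the float test case in leaky_relu_test.cc
// (alpha = 0.5, input {0, 1, 3, 1, -1, -2} -> {0, 1, 3, 1, -0.5, -1}).
static void LeakyRelu(const float* input, float* output, std::size_t size,
                      float alpha) {
  for (std::size_t i = 0; i < size; ++i) {
    const float x = input[i];
    output[i] = x >= 0.0f ? x : alpha * x;
  }
}

int main() {
  const float input[] = {0.0f, 1.0f, 3.0f, 1.0f, -1.0f, -2.0f};
  float output[6];
  LeakyRelu(input, output, 6, /*alpha=*/0.5f);
  // output == {0.0f, 1.0f, 3.0f, 1.0f, -0.5f, -1.0f}
  return 0;
}
~~~

The quantized paths in the kernel follow the same rule, but apply it through the precomputed alpha and identity multipliers set up in LeakyReluPrepare.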
{
"body": "@tensorflow/micro\r\n\r\nThis issue tracks my work porting operator ADD_N from lite to micro.\r\n\r\nThe port will be submitted in a number of PRs. Here's a rough flight plan per @advaitjain and @petewarden:\r\n\r\nPR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver\r\nPR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header which can be included without dragging in reference_ops.h's dependences\r\nPR 3: Copy operator from lite to micro making minimal changes and not including in the build\r\nPR 4: Delete extra code from the micro copy of the operator\r\nPR 5: Port micro copy of operator as necessary and add a corresponding test\r\n",
"comments": [
{
"body": "Are you satisfied with the resolution of your issue?\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=Yes&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46162\">Yes</a>\n<a href=\"https://docs.google.com/forms/d/e/1FAIpQLSfaP12TRhd9xSxjXZjcZFNXPGk4kc1-qMdv3gc6bEP90vY1ew/viewform?entry.85265664=No&entry.2137816233=https://github.com/tensorflow/tensorflow/issues/46162\">No</a>\n",
"created_at": "2021-04-12T10:38:38Z"
}
],
"number": 46162,
"title": "micro: port op ADD_N from lite"
}
|
{
"body": "Complete implementation of TFLM operator ADD_N and associated TFLM test code.\r\n\r\nPR step 5 of the work to port operator ADD_N as tracked in Issue #46162",
"number": 46208,
"review_comments": [],
"title": "micro: port operator ADD_N kernel from lite with test"
}
|
{
"commits": [
{
"message": "micro: port operator ADD_N kernel from lite with test\n\nComplete implementation of TFLM operator ADD_N and associated TFLM test code.\n\nPR step 5 of the work to port operator ADD_N as tracked in Issue #46162"
}
],
"files": [
{
"diff": "@@ -23,13 +23,13 @@ namespace reference_ops {\n // T is expected to be either float or int.\n template <typename T>\n inline void AddN(const RuntimeShape& input_shape, const size_t num_inputs,\n- T* const* input_data, T* output_data) {\n+ const T* const* input_data, T* output_data) {\n // All inputs and output should have the same shape, this is checked during\n // Prepare stage.\n const size_t size = input_shape.FlatSize();\n- for (int i = 0; i < size; ++i) {\n+ for (size_t i = 0; i < size; ++i) {\n T x = 0;\n- for (int j = 0; j < num_inputs; ++j) {\n+ for (size_t j = 0; j < num_inputs; ++j) {\n x += input_data[j][i];\n }\n output_data[i] = x;",
"filename": "tensorflow/lite/kernels/internal/reference/add_n.h",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@ AllOpsResolver::AllOpsResolver() {\n // Please keep this list of Builtin Operators in alphabetical order.\n AddAbs();\n AddAdd();\n+ AddAddN();\n AddArgMax();\n AddArgMin();\n AddAveragePool2D();",
"filename": "tensorflow/lite/micro/all_ops_resolver.cc",
"status": "modified"
},
{
"diff": "@@ -258,6 +258,7 @@ cc_library(\n \"activations.cc\",\n \"hard_swish.cc\",\n \"add.cc\",\n+ \"add_n.cc\",\n \"arg_min_max.cc\",\n \"batch_to_space_nd.cc\",\n \"cast.cc\",\n@@ -393,6 +394,21 @@ cc_test(\n ],\n )\n \n+cc_test(\n+ name = \"add_n_test\",\n+ srcs = [\n+ \"add_n_test.cc\",\n+ ],\n+ deps = [\n+ \":kernel_runner\",\n+ \"//tensorflow/lite/c:common\",\n+ \"//tensorflow/lite/micro:debug_log\",\n+ \"//tensorflow/lite/micro:op_resolvers\",\n+ \"//tensorflow/lite/micro:test_helpers\",\n+ \"//tensorflow/lite/micro/testing:micro_test\",\n+ ],\n+)\n+\n cc_test(\n name = \"add_test\",\n srcs = [",
"filename": "tensorflow/lite/micro/kernels/BUILD",
"status": "modified"
},
{
"diff": "@@ -1,4 +1,4 @@\n-/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n+/* Copyright 2020 The TensorFlow Authors. All Rights Reserved.\n \n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n@@ -12,84 +12,108 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n ==============================================================================*/\n-#include <stdint.h>\n+\n+#include \"tensorflow/lite/kernels/internal/reference/add_n.h\"\n+\n+#include <cstdint>\n \n #include \"tensorflow/lite/c/common.h\"\n-#include \"tensorflow/lite/kernels/internal/reference/reference_ops.h\"\n-#include \"tensorflow/lite/kernels/internal/tensor.h\"\n #include \"tensorflow/lite/kernels/internal/tensor_ctypes.h\"\n #include \"tensorflow/lite/kernels/kernel_util.h\"\n+#include \"tensorflow/lite/micro/kernels/kernel_util.h\"\n \n namespace tflite {\n-namespace ops {\n-namespace micro {\n-namespace add_n {\n namespace {\n \n-constexpr int kInputTensor1 = 0;\n+constexpr int kInputTensor0 = 0;\n constexpr int kOutputTensor = 0;\n \n-TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n+TfLiteStatus CalculateOpData(TfLiteContext* context, TfLiteNode* node) {\n int num_inputs = NumInputs(node);\n TF_LITE_ENSURE(context, num_inputs >= 2);\n TF_LITE_ENSURE_EQ(context, NumOutputs(node), 1);\n \n- const TfLiteTensor* input1;\n- TF_LITE_ENSURE_OK(context,\n- GetInputSafe(context, node, kInputTensor1, &input1));\n+ const TfLiteTensor* input_tensor_first;\n+ TF_LITE_ENSURE_OK(\n+ context, GetInputSafe(context, node, kInputTensor0, &input_tensor_first));\n TfLiteTensor* output;\n TF_LITE_ENSURE_OK(context,\n GetOutputSafe(context, node, kOutputTensor, &output));\n- output->type = input1->type;\n \n- // Check that all input tensors have the same shape and type.\n- for (int i = kInputTensor1 + 1; i < num_inputs; ++i) {\n+ // Check that all tensors have the same shape and type.\n+ TF_LITE_ENSURE_TYPES_EQ(context, output->type, input_tensor_first->type);\n+ for (int i = kInputTensor0 + 1; i < num_inputs; ++i) {\n const TfLiteTensor* input;\n TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, i, &input));\n- TF_LITE_ENSURE(context, HaveSameShapes(input1, input));\n- TF_LITE_ENSURE_TYPES_EQ(context, input1->type, input->type);\n+ TF_LITE_ENSURE(context, HaveSameShapes(input_tensor_first, input));\n+ TF_LITE_ENSURE_TYPES_EQ(context, input_tensor_first->type, input->type);\n }\n \n- return kTfLiteError;\n+ // Allocate scratch buffer space for pointer to each tensor's data\n+ // and store the scratch buffer index in the node's user_data\n+ if (output->type == kTfLiteFloat32) {\n+ int scratch_index;\n+ size_t scratch_size = sizeof(float*) * num_inputs;\n+ TF_LITE_ENSURE_OK(context, context->RequestScratchBufferInArena(\n+ context, scratch_size, &scratch_index));\n+ node->user_data =\n+ reinterpret_cast<decltype(node->user_data)>(scratch_index);\n+ } else {\n+ TF_LITE_KERNEL_LOG(context, \"ADD_N only supports FLOAT32, got %s.\",\n+ TfLiteTypeGetName(output->type));\n+ return kTfLiteError;\n+ }\n+\n+ return kTfLiteOk;\n+}\n+\n+TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {\n+ return CalculateOpData(context, node);\n }\n \n template <typename T>\n-void EvalAddN(TfLiteContext* context, TfLiteNode* node) {\n- // OLD-TODO(haoliang): Initialize all_inputs only once during init.\n- 
VectorOfTensors<T> all_inputs(*context, *node->inputs);\n- // Safe to use unchecked since caller checks that tensor is valid\n- TfLiteTensor* output = GetOutput(context, node, kOutputTensor);\n+void EvalAddN(TfLiteContext* context, TfLiteNode* node,\n+ TfLiteEvalTensor* output) {\n int num_inputs = NumInputs(node);\n- // Safe to use unchecked since caller checks that tensor is valid\n- const TfLiteTensor* input1 = GetInput(context, node, kInputTensor1);\n- reference_ops::AddN<T>(GetTensorShape(input1), num_inputs, all_inputs.data(),\n- GetTensorData<T>(output));\n+\n+ int scratch_index =\n+ static_cast<int>(reinterpret_cast<intptr_t>(node->user_data));\n+ void* scratch_buffer = context->GetScratchBuffer(context, scratch_index);\n+ const T** all_inputs = static_cast<decltype(all_inputs)>(scratch_buffer);\n+ for (int i = 0; i < num_inputs; i++) {\n+ const TfLiteEvalTensor* next_input =\n+ tflite::micro::GetEvalInput(context, node, kInputTensor0 + i);\n+ all_inputs[i] = tflite::micro::GetTensorData<T>(next_input);\n+ }\n+\n+ reference_ops::AddN<T>(tflite::micro::GetTensorShape(output), num_inputs,\n+ all_inputs, tflite::micro::GetTensorData<T>(output));\n }\n \n TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {\n- const TfLiteTensor* input1;\n- TF_LITE_ENSURE_OK(context,\n- GetInputSafe(context, node, kInputTensor1, &input1));\n- TfLiteTensor* output;\n- TF_LITE_ENSURE_OK(context,\n- GetOutputSafe(context, node, kOutputTensor, &output));\n+ TfLiteEvalTensor* output =\n+ tflite::micro::GetEvalOutput(context, node, kOutputTensor);\n if (output->type == kTfLiteFloat32) {\n- EvalAddN<float>(context, node);\n- } else if (output->type == kTfLiteInt32) {\n- EvalAddN<int32_t>(context, node);\n+ EvalAddN<float>(context, node, output);\n } else {\n- TF_LITE_KERNEL_LOG(context, \"AddN only supports FLOAT32|INT32 now, got %s.\",\n+ TF_LITE_KERNEL_LOG(context, \"ADD_N only supports FLOAT32, got %s.\",\n TfLiteTypeGetName(output->type));\n return kTfLiteError;\n }\n return kTfLiteOk;\n }\n \n } // namespace\n-} // namespace add_n\n \n-TfLiteRegistration* Register_ADD_N() { return nullptr; }\n+TfLiteRegistration Register_ADD_N() {\n+ return {/*init=*/nullptr,\n+ /*free=*/nullptr,\n+ /*prepare=*/Prepare,\n+ /*invoke=*/Eval,\n+ /*profiling_string=*/nullptr,\n+ /*builtin_code=*/0,\n+ /*custom_name=*/nullptr,\n+ /*version=*/0};\n+}\n \n-} // namespace micro\n-} // namespace ops\n } // namespace tflite",
"filename": "tensorflow/lite/micro/kernels/add_n.cc",
"status": "modified"
},
{
"diff": "@@ -1,4 +1,4 @@\n-/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n+/* Copyright 2020 The TensorFlow Authors. All Rights Reserved.\n \n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n@@ -23,36 +23,75 @@ limitations under the License.\n \n namespace tflite {\n namespace testing {\n-namespace {} // namespace\n+namespace {\n+\n+constexpr int kMaxInputTensors = 3;\n+constexpr int kMaxOutputTensors = 1;\n+\n+void ExecuteAddN(TfLiteTensor* tensors, int tensors_count) {\n+ int input_array_data[kMaxInputTensors + kMaxOutputTensors] = {tensors_count -\n+ 1};\n+ for (int i = 1; i < tensors_count; i++) {\n+ input_array_data[i] = i - 1;\n+ }\n+ TfLiteIntArray* inputs_array = IntArrayFromInts(input_array_data);\n+ const int kOutputArrayData[] = {1, tensors_count - 1};\n+ TfLiteIntArray* outputs_array = IntArrayFromInts(kOutputArrayData);\n+\n+ const TfLiteRegistration registration = tflite::Register_ADD_N();\n+ micro::KernelRunner runner(registration, tensors, tensors_count, inputs_array,\n+ outputs_array, nullptr);\n+\n+ TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.InitAndPrepare());\n+ TF_LITE_MICRO_EXPECT_EQ(kTfLiteOk, runner.Invoke());\n+}\n+\n+template <typename T>\n+void TestAddN(const int* input_dims_data, const T* const* input_data,\n+ int input_data_count, const int* expected_dims,\n+ const T* expected_data, T* output_data) {\n+ TF_LITE_MICRO_EXPECT_LE(input_data_count, kMaxInputTensors);\n+\n+ TfLiteIntArray* input_dims = IntArrayFromInts(input_dims_data);\n+ TfLiteIntArray* output_dims = IntArrayFromInts(expected_dims);\n+ const int output_count = ElementCount(*output_dims);\n+\n+ TfLiteTensor tensors[kMaxInputTensors + kMaxOutputTensors] = {};\n+ for (int i = 0; i < input_data_count; i++) {\n+ tensors[i] = CreateTensor(input_data[i], input_dims);\n+ }\n+ tensors[input_data_count] = CreateTensor(output_data, output_dims);\n+\n+ ExecuteAddN(tensors, input_data_count + 1);\n+\n+ for (int i = 0; i < output_count; i++) {\n+ TF_LITE_MICRO_EXPECT_EQ(expected_data[i], output_data[i]);\n+ }\n+}\n+\n+} // namespace\n } // namespace testing\n } // namespace tflite\n \n TF_LITE_MICRO_TESTS_BEGIN\n \n TF_LITE_MICRO_TEST(FloatAddNOpAddMultipleTensors) {\n-#ifdef notdef\n- FloatAddNOpModel m({{TensorType_FLOAT32, {1, 2, 2, 1}},\n- {TensorType_FLOAT32, {1, 2, 2, 1}},\n- {TensorType_FLOAT32, {1, 2, 2, 1}}},\n- {TensorType_FLOAT32, {}});\n- m.PopulateTensor<float>(m.input(0), {-2.0, 0.2, 0.7, 0.8});\n- m.PopulateTensor<float>(m.input(1), {0.1, 0.2, 0.3, 0.5});\n- m.PopulateTensor<float>(m.input(2), {0.5, 0.1, 0.1, 0.2});\n- EXPECT_THAT(m.GetOutput(), ElementsAreArray({-1.4, 0.5, 1.1, 1.5}));\n-#endif // notdef\n-}\n+ constexpr int kDims[] = {4, 1, 2, 2, 1};\n+ constexpr float kInput1[] = {-2.0, 0.2, 0.7, 0.8};\n+ constexpr float kInput2[] = {0.1, 0.2, 0.3, 0.5};\n+ constexpr float kInput3[] = {0.5, 0.1, 0.1, 0.2};\n+ constexpr float kExpect[] = {-1.4, 0.5, 1.1, 1.5};\n+ const float* kInputs[tflite::testing::kMaxInputTensors] = {\n+ kInput1,\n+ kInput2,\n+ kInput3,\n+ };\n+ constexpr int kInputCount = std::extent<decltype(kInputs)>::value;\n+ constexpr int kOutputCount = std::extent<decltype(kExpect)>::value;\n+ float output_data[kOutputCount];\n \n-TF_LITE_MICRO_TEST(IntegerAddNOpAddMultipleTensors) {\n-#ifdef notdef\n- IntegerAddNOpModel m({{TensorType_INT32, {1, 2, 2, 1}},\n- {TensorType_INT32, {1, 2, 2, 1}},\n- {TensorType_INT32, {1, 2, 2, 1}}},\n- {TensorType_INT32, {}});\n- 
m.PopulateTensor<int32_t>(m.input(0), {-20, 2, 7, 8});\n- m.PopulateTensor<int32_t>(m.input(1), {1, 2, 3, 5});\n- m.PopulateTensor<int32_t>(m.input(2), {10, -5, 1, -2});\n- EXPECT_THAT(m.GetOutput(), ElementsAreArray({-9, -1, 11, 11}));\n-#endif // notdef\n+ tflite::testing::TestAddN(kDims, kInputs, kInputCount, kDims, kExpect,\n+ output_data);\n }\n \n TF_LITE_MICRO_TESTS_END",
"filename": "tensorflow/lite/micro/kernels/add_n_test.cc",
"status": "modified"
},
{
"diff": "@@ -31,6 +31,7 @@ namespace tflite {\n // (https://abseil.io/tips/130). Any new ops (or cleanup of existing ops should\n // have their Register function declarations in the tflite namespace.\n \n+TfLiteRegistration Register_ADD_N();\n TfLiteRegistration Register_BATCH_TO_SPACE_ND();\n TfLiteRegistration Register_CAST();\n TfLiteRegistration Register_CONV_2D();",
"filename": "tensorflow/lite/micro/kernels/micro_ops.h",
"status": "modified"
},
{
"diff": "@@ -122,6 +122,11 @@ class MicroMutableOpResolver : public MicroOpResolver {\n ParseAdd);\n }\n \n+ TfLiteStatus AddAddN() {\n+ return AddBuiltin(BuiltinOperator_ADD_N, tflite::Register_ADD_N(),\n+ ParseAddN);\n+ }\n+\n TfLiteStatus AddArgMax() {\n return AddBuiltin(BuiltinOperator_ARG_MAX,\n tflite::ops::micro::Register_ARG_MAX(), ParseArgMax);",
"filename": "tensorflow/lite/micro/micro_mutable_op_resolver.h",
"status": "modified"
},
{
"diff": "@@ -258,6 +258,7 @@ tensorflow/lite/micro/simple_memory_allocator_test.cc \\\n tensorflow/lite/micro/testing_helpers_test.cc \\\n tensorflow/lite/micro/kernels/activations_test.cc \\\n tensorflow/lite/micro/kernels/add_test.cc \\\n+tensorflow/lite/micro/kernels/add_n_test.cc \\\n tensorflow/lite/micro/kernels/arg_min_max_test.cc \\\n tensorflow/lite/micro/kernels/batch_to_space_nd_test.cc \\\n tensorflow/lite/micro/kernels/cast_test.cc \\\n@@ -312,6 +313,7 @@ tensorflow/lite/micro/memory_planner/linear_memory_planner_test.cc\n MICROLITE_CC_KERNEL_SRCS := \\\n tensorflow/lite/micro/kernels/activations.cc \\\n tensorflow/lite/micro/kernels/add.cc \\\n+tensorflow/lite/micro/kernels/add_n.cc \\\n tensorflow/lite/micro/kernels/arg_min_max.cc \\\n tensorflow/lite/micro/kernels/batch_to_space_nd.cc \\\n tensorflow/lite/micro/kernels/cast.cc \\\n@@ -406,6 +408,7 @@ tensorflow/lite/kernels/internal/compatibility.h \\\n tensorflow/lite/kernels/internal/optimized/neon_check.h \\\n tensorflow/lite/kernels/internal/quantization_util.h \\\n tensorflow/lite/kernels/internal/reference/add.h \\\n+tensorflow/lite/kernels/internal/reference/add_n.h \\\n tensorflow/lite/kernels/internal/reference/arg_min_max.h \\\n tensorflow/lite/kernels/internal/reference/batch_to_space_nd.h \\\n tensorflow/lite/kernels/internal/reference/binary_function.h \\",
"filename": "tensorflow/lite/micro/tools/make/Makefile",
"status": "modified"
}
]
}
|
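The ADD_N diffs above expose the op through `tflite::Register_ADD_N()` and `MicroMutableOpResolver::AddAddN()`, and route the float path to `reference_ops::AddN`, whose signature appears in the add_n.h hunk. Below is a minimal sketch — assuming a build that links against the TFLite headers — that calls that reference kernel directly, using the shape and values from the FloatAddNOpAddMultipleTensors test.

~~~cpp
#include "tensorflow/lite/kernels/internal/reference/add_n.h"
#include "tensorflow/lite/kernels/internal/types.h"

int main() {
  // Three 1x2x2x1 inputs; all inputs and the output share one shape,
  // which is what the kernel's Prepare stage enforces.
  const float in0[] = {-2.0f, 0.2f, 0.7f, 0.8f};
  const float in1[] = {0.1f, 0.2f, 0.3f, 0.5f};
  const float in2[] = {0.5f, 0.1f, 0.1f, 0.2f};
  const float* inputs[] = {in0, in1, in2};
  float out[4] = {};

  tflite::RuntimeShape shape({1, 2, 2, 1});
  tflite::reference_ops::AddN<float>(shape, /*num_inputs=*/3, inputs, out);
  // out == {-1.4f, 0.5f, 1.1f, 1.5f}, the expected values in add_n_test.cc.
  return 0;
}
~~~

In a full TFLM application the same kernel would normally be reached through the resolver instead — e.g. calling `AddAddN()` on a MicroMutableOpResolver before constructing the interpreter — as the micro_mutable_op_resolver.h diff shows.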