| hash | msg | author | email | date |
|---|---|---|---|---|
3de2fa4c243e7bd5af43def28c9f38025cc12da7 | Replace the `imp` module with the `types` one in a test.
`imp` is removed in Python 3.12.
PiperOrigin-RevId: 573210289 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,205,846,000 |
fbf5e0fc096328eaf776a8efa6cf09d0da2c3012 | Priority fusion: remove heuristics, model performance instead.
Currently, priority fusion applies some heuristics to prevent bad
fusion decisions (e.g. fusing a reduce into a transpose). These
heuristics are hard to debug and follow, so it is better to cost
model the effect instead.
For now, we do this with a rather crude coalescing analysis. We
assume memory accesses are coalesced iff there is a) no
transpose or b) we can use the transpose emitter.
Drive-by fix: use the proper launch grid for DUS. We always
assume it's in place. We're doing this in the wrong place, but it's the only
place we have for now.
The new `ShouldFuse` function was contributed by kramerb@.
PiperOrigin-RevId: 573215172 | Johannes Reifferscheid | jreiffers@google.com | 1,697,207,199,000 |
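The coalescing rule described in the entry above can be sketched as a small predicate. This is a hypothetical simplification in Python; `has_transpose` and `can_use_transpose_emitter` are illustrative stand-ins for the analysis inputs, not actual XLA APIs:

```python
def accesses_coalesced(has_transpose: bool, can_use_transpose_emitter: bool) -> bool:
    """Crude coalescing model from the commit message: memory accesses
    are assumed coalesced iff (a) there is no transpose, or (b) the
    dedicated transpose emitter can be used."""
    return (not has_transpose) or can_use_transpose_emitter
```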
0f989fde74125e80d826fab50c82a63d1d9c7a62 | Update a test to be compatible with typing_extensions==4.8.0.
PiperOrigin-RevId: 573221105 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,208,935,000 |
951933f0d067f4a25d4d8f69c400ad64f0c353ff | [XLA] Add all-reduce+dyn-slice pattern to all_reduce_reassociate.
Support the case with no formatting, as otherwise it requires rewriting all the shapes.
PiperOrigin-RevId: 573237155 | Marcello Maggioni | maggioni@google.com | 1,697,213,375,000 |
8d461d166a97d07f17db66a3a5d9d5be10edcefc | [XLA] Fix collective-pipeliner to maintain control dependencies of cloned instructions.
We were wrongfully ignoring them.
PiperOrigin-RevId: 573237685 | Marcello Maggioni | maggioni@google.com | 1,697,213,520,000 |
c9fd68d3ca87d08a0dceead30fdbca379b56a7ef | Consider maximum of ids/uniques across all replicas.
PiperOrigin-RevId: 573243295 | Matthias Kramm | kramm@google.com | 1,697,215,107,000 |
49ffc4d64acb3d12ba7b5b9ae37825f093a89e14 | #tf-data Record metrics for experiment opt outs and ins.
PiperOrigin-RevId: 573244072 | Matt Callanan | mpcallanan@google.com | 1,697,215,317,000 |
75d34a89cebf90988e0f8a6172123bd08a59b016 | Re-enable layering_check for package.
PiperOrigin-RevId: 573247013 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,216,029,000 |
7d7e0991964682af036c77580b74e943a2a5ec6f | Create Plugin Tracer for plugins.
PiperOrigin-RevId: 573248197 | Clive Verghese | cliveverghese@google.com | 1,697,216,319,000 |
e64c828d086c2c892264e317c1ea3020925de154 | Add reshape to the mhlo->tfl path
PiperOrigin-RevId: 573253209 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,217,394,000 |
b597c9d09dced6817d50e5469da635d5f44094e0 | [XLA] Add some extra debugging in collective pipeliner
PiperOrigin-RevId: 573253247 | Marcello Maggioni | maggioni@google.com | 1,697,217,402,000 |
e062297d627be248162f44bfd603eaf71c0a325b | Redirect references from the lib target to the new single-source-file targets.
PiperOrigin-RevId: 573254774 | Juan Martinez Castellanos | juanantoniomc@google.com | 1,697,217,744,000 |
28abc8b591f2219d2daab1f81f2cf6fae8601a37 | Make runtime lowering ops for CPU/GPU
PiperOrigin-RevId: 573257395 | Mason Chang | masonchang@google.com | 1,697,218,328,000 |
1536f2ca228099f0cf7793d3031d9b9bebbf03ed | Fix path to cuda_build_defs in NCCL system build.
Copy of https://github.com/openxla/xla/pull/6291 that resolves a TSL merge problem.
PiperOrigin-RevId: 573259068 | Peter Hawkins | phawkins@google.com | 1,697,218,675,000 |
2283506a7c0013bcbfd885b75b01a1157b46098b | [tflite] Move some function implementations to .cc file
PiperOrigin-RevId: 573262586 | Majid Dadashi | majiddadashi@google.com | 1,697,219,467,000 |
7e721abb56c0d8d3ca4f216da56c38a34fb7ab8b | Fix buildifier.yml by removing `github.head_ref` which should be unnecessary
PiperOrigin-RevId: 573270610 | David Dunleavy | ddunleavy@google.com | 1,697,221,013,000 |
e43d4dfb8a2598b22241159a6b32c505fa1d505e | Migrate PrepareQuantize Pass from Experimental
This replaces the PrepareSrqQuantize pass and supports integer bit widths and per-channel quantization for conv.
It also removes some functions that appear irrelevant.
PiperOrigin-RevId: 573273946 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,221,619,000 |
f59ab9d18679baafd62f2d05b59dff89e6c58480 | Set up an API to top trace and fdo profile in memory.
PiperOrigin-RevId: 573276173 | Tao Wang | wangtao@google.com | 1,697,222,027,000 |
d8762fcae260e2fcc7d53767ff1e0643aeed323c | Adds GatherTest and GatherTestNoReshard.
PiperOrigin-RevId: 573278212 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,222,394,000 |
247e00d3380df343c8691550db72dc9b93d858d6 | [XLA] Allow ConvertRandomBitsToUniformFloatingPoint to support a wider range of (value_type, bit_type) combinations, including (bfloat16, uint8), as long as the bit_type bit width is large enough to hold the explicit mantissa bits of the value_type.
PiperOrigin-RevId: 573279151 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,222,579,000 |
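The constraint in the entry above (the bit type must be wide enough to hold the value type's explicit mantissa bits) can be illustrated with a small check. The mantissa-bit counts are standard IEEE 754 / bfloat16 format facts; the function itself is a hypothetical sketch, not the XLA implementation:

```python
# Explicit (stored) mantissa bits per floating-point value type.
EXPLICIT_MANTISSA_BITS = {"bfloat16": 7, "f16": 10, "f32": 23, "f64": 52}
# Width in bits of each random-bits integer type.
BIT_TYPE_WIDTH = {"uint8": 8, "uint16": 16, "uint32": 32, "uint64": 64}

def combination_supported(value_type: str, bit_type: str) -> bool:
    """A (value_type, bit_type) pair works iff the random bits can
    fill the value type's explicit mantissa."""
    return BIT_TYPE_WIDTH[bit_type] >= EXPLICIT_MANTISSA_BITS[value_type]
```

For example, (bfloat16, uint8) is allowed because 8 bits cover bfloat16's 7 explicit mantissa bits, while (f32, uint8) is not, since f32 needs 23.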
f7a2f53111a827bd309b7afb45d151d97b4f0fdc | Integrate LLVM at llvm/llvm-project@41418ca13323
Updates LLVM usage to match
[41418ca13323](https://github.com/llvm/llvm-project/commit/41418ca13323)
PiperOrigin-RevId: 573288219 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,224,473,000 |
e20b1fbb0301431334cfd7bc6c50a6771a88805b | Merge pull request #62059 from jeromemassot:patch-1
PiperOrigin-RevId: 573289085 | TensorFlower Gardener | gardener@tensorflow.org | 1,697,225,225,000 |
eb0d446807bf8db3eb8b2aeda9014192a5f074c2 | [XLA:Runtime] Moved the custom call thunk to a new folder, as part of a thunk clean up, and updated the necessary directories pointing to this thunk. #5758
PiperOrigin-RevId: 573293473 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,225,770,000 |
d251c1e6b76decee0b683d1718dc04d7d064562c | fix spmd expander to correctly handle input types of sliceop for begin and size.
PiperOrigin-RevId: 573294289 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,225,987,000 |
893aa7518fe3175739ac1ba70d7355a0b091115c | Added a null check in `string_util.cc`
PiperOrigin-RevId: 573300006 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,227,427,000 |
0d12bf9204b352d46bdfa1b5c6838d92a6cbc169 | Fix a typo
PiperOrigin-RevId: 573300125 | Austin Anderson | angerson@google.com | 1,697,227,451,000 |
570f23de0ff8098398e626c545be9b74e2dde6ca | [xla:gpu] Replace XLA:FFI example with a statically linked custom call
XLA:FFI will be revamped as a part of XLA runtime work in 2024, and currently statically linked custom calls should be using a regular XLA runtime mechanism.
Link: https://docs.google.com/document/d/1XHzJyfq-ZFn9WHoKe4o_urnwS991dFHgWoNRboBK_3I/edit#bookmark=id.696pyshem503
PiperOrigin-RevId: 573303572 | Eugene Zhulenev | ezhulenev@google.com | 1,697,228,377,000 |
af04317a0dff65effbe9baeb63647d5660cef31e | Avoid adding max/cast ops if clip_norm is a python scalar.
PiperOrigin-RevId: 573306353 | Antonio Sanchez | cantonios@google.com | 1,697,229,089,000 |
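The optimization in the entry above hinges on the fact that a plain Python scalar can be validated at trace time, so no runtime max/cast ops are needed to guard it. A minimal sketch of that idea, assuming a hypothetical helper (this is not the TensorFlow implementation; the tensor branch stands in for `tf.maximum` plus a cast):

```python
def resolve_clip_norm(clip_norm):
    """If clip_norm is a static Python scalar, validate it eagerly and
    return it unchanged -- no graph ops are added. Otherwise it must be
    clamped at runtime (illustrated here with plain max())."""
    if isinstance(clip_norm, (int, float)):
        if clip_norm <= 0:
            raise ValueError("clip_norm must be positive")
        return float(clip_norm)  # statically known: no max/cast ops
    # Tensor-like input: clamp at runtime instead.
    return max(clip_norm, 1e-9)
```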
d8b1843aab60822ba333bf11c8e3a2191f1cf1cb | Update 3.11 test skips to work for 3.12 as well.
PiperOrigin-RevId: 573306433 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,229,110,000 |
1e9555996e3498d64a62507e176d4b89a0a2bd18 | Hook in lower runtime ops into tf-opt reproducer
PiperOrigin-RevId: 573314493 | Mason Chang | masonchang@google.com | 1,697,231,089,000 |
1f0b0ca414ffdbf4f8ab62214ba6b0587e706ba3 | Updating support notice for Hexagon delegate
PiperOrigin-RevId: 573314634 | Joe Fernandez | joefernandez@google.com | 1,697,231,116,000 |
c6d7c1ed9bb603b7c3b69cb32a707ba0ccef2ea9 | Handle a device tensor without an associated device buffer.
When an invalid device tensor is given, the device_buffer could be nullptr,
and it causes a crash.
PiperOrigin-RevId: 573317379 | Hyojun Kim | hyojun@google.com | 1,697,231,836,000 |
b3380511cba2ee4bf16d11cbcd10bd31290f7f0a | Internal change only
PiperOrigin-RevId: 573320195 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,232,559,000 |
1c2f7c744ae939d0573a1d4d9990dc1c3ed5dab3 | Legalize TensorListPushBack to tfl custom.
PiperOrigin-RevId: 573324785 | Luke Boyer | lukeboyer@google.com | 1,697,233,697,000 |
1b806972297affeb24dda954a0aa6882433043a1 | Correct re2 header for windows builds
PiperOrigin-RevId: 573326882 | Jake Harmon | jakeharmon@google.com | 1,697,234,233,000 |
cd4d5662cab51df9c98a6deb52e494a82ac3bd01 | Add helper function in mlir_bridge_pass.cc and add unit tests in mlir_bridge_pass_test.cc
PiperOrigin-RevId: 573327993 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,234,515,000 |
c851db84f3dabd05abc1c498daa6ffb391437b24 | [jax] Expose executable input layouts and add test for setting input layouts.
This just does the underlying PJRT, IFRT, and pybind plumbing and
doesn't expose this functionality directly in jax yet.
PiperOrigin-RevId: 573331198 | Skye Wanderman-Milne | skyewm@google.com | 1,697,235,347,000 |
1c798a52f7accd2d50585a553d7257712091166a | Refactor variant end-to-end unit tests. Reuse a common helper to invoke TF and TFLite. Enforce a 1:1 relationship between test case and tf.function.
Add extensive tests for the variant AddN case, which are skipped until this behavior is implemented.
PiperOrigin-RevId: 573331528 | Luke Boyer | lukeboyer@google.com | 1,697,235,443,000 |
09b896603799fdbf66294a6aeb9c06b796479d36 | Do not allow stateless random ops to be constant folded.
If they are then the numbers are generated on the CPU which always generates different numbers than a TPU.
PiperOrigin-RevId: 573333061 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,235,886,000 |
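The policy in the entry above amounts to excluding stateless random ops from the constant-foldable set: folding them would run the RNG on the host CPU and bake in numbers that differ from what a TPU would generate. A hypothetical predicate sketching the rule (op names are real TF op types, but the function is illustrative, not the actual pass):

```python
# Stateless RNG ops that must execute on the target device.
STATELESS_RANDOM_OPS = {
    "StatelessRandomUniform",
    "StatelessRandomNormal",
    "StatelessTruncatedNormal",
}

def is_constant_foldable(op_name: str, all_inputs_constant: bool) -> bool:
    """Even with all-constant inputs, stateless RNG ops are excluded so
    CPU-folded results can't diverge from TPU-generated ones."""
    if op_name in STATELESS_RANDOM_OPS:
        return False
    return all_inputs_constant
```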
8aec005e33cf6e54b095f7dddcf63989d9bcc497 | Integrate LLVM at llvm/llvm-project@b1115f8ccefb
Updates LLVM usage to match
[b1115f8ccefb](https://github.com/llvm/llvm-project/commit/b1115f8ccefb)
PiperOrigin-RevId: 573339161 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,237,625,000 |
f3066d080a3d12118e5cf8b19348c52f93b19927 | PR #6295: [ROCm] Fixed hlo_runner_main build error on ROCm
Imported from GitHub PR https://github.com/openxla/xla/pull/6295
This PR enables some ROCm features on the PjRt side based on https://github.com/openxla/xla/commit/5324ef5ccc05d70c9cd3dfcd8e1c77896fd1a562 and https://github.com/openxla/xla/commit/8422226150968965613e6d998f0dd0cff1b9abfe
Thanks in advance! @akuegel
As a side note, we could also remove this `if_static` https://github.com/openxla/xla/blob/main/xla/stream_executor/gpu/BUILD#L184 to avoid the change to metrics.h deps, but we're not sure whether it's OK to do it this way.
Copybara import of the project:
--
ecf1c7d8a4157a1b7ae9be6901c7772228488fa5 by Chao Chen <cchen104@amd.com>:
pjrt RecordFreeGpuSystemMemory on rocm
Merging this change closes #6295
PiperOrigin-RevId: 573343631 | Chao | cchen104@amd.com | 1,697,238,959,000 |
57143e96db8eb9d6e397fb0a79eed1077bd2cc7f | [xla:gpu] Add LaunchCmd for recording kernel launches into command buffers #6242
PiperOrigin-RevId: 573351559 | Eugene Zhulenev | ezhulenev@google.com | 1,697,241,487,000 |
55d3c680e2a835d8219e41936c3c09a369fc2cae | Update release notes for TensorFlow 2.15.0
PiperOrigin-RevId: 573357875 | Raviteja Gorijala | gorijala@google.com | 1,697,243,823,000 |
f44f20b3e734544f11af11fc3038f1593223572f | [xla:gpu] Initialize command sequence from CommandBufferThunk #6242
PiperOrigin-RevId: 573371008 | Eugene Zhulenev | ezhulenev@google.com | 1,697,249,782,000 |
6305dff960a8b1dd3dad719bf528625d930973b1 | [xla:gpu] CommandBufferThunk: Add support for automatic command buffer updates #6242
PiperOrigin-RevId: 573378622 | Eugene Zhulenev | ezhulenev@google.com | 1,697,253,066,000 |
0acfec16254cf45b1b09254f2cd6f0168fef4b5a | [XLA:Runtime] Moved the triangular solve thunk to a new folder and removed unused dependencies, as part of a thunk clean up, and updated the necessary directories pointing to this thunk. #5758
PiperOrigin-RevId: 573390554 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,259,506,000 |
9ff124fe7c0952f922b56325270efede6f3a6bf1 | [stream_executor] NFC: Remove internal API uses outside of StreamExecutor package
PiperOrigin-RevId: 573401361 | Eugene Zhulenev | ezhulenev@google.com | 1,697,264,716,000 |
65c4c0633df49e40887b84a6803e3e4ea596735b | fix uint32 issue | who who who | fsx950223@gmail.com | 1,697,270,741,000 |
5bfc2e63ee47737ef615fbf23e89578e1c6061f1 | Update GraphDef version to 1649.
PiperOrigin-RevId: 573422828 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,274,142,000 |
ef0ba723bd7bc9bdeea2cfcfb34869a3246e51e4 | compat: Update forward compatibility horizon to 2023-10-14
PiperOrigin-RevId: 573422835 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,274,145,000 |
af6b97feca236902f107d28f808467ad9e8f58fc | Internal change only.
PiperOrigin-RevId: 573482212 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,300,281,000 |
53a818c88415d308e00968e012f6380da0cabc9f | [jax] Make shape_from_pyval definition supportable by Python<3.10.
PiperOrigin-RevId: 573482286 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,300,318,000 |
c8b5e2b0d9d91ff199a5eeb30ec2f5d33a5d92c6 | PR #6318: [NVIDIA XLA] Use a separate mutex for callback_stream_map to avoid deadlocks during shutdown.
Imported from GitHub PR https://github.com/openxla/xla/pull/6318
In `LocalDeviceState`, currently one mutex `mu_` is used to guard multiple things. In `ThenExecuteCallback()` it is used to guard `callback_stream_map_` while in `ReturnStreamToPool()` it is used to guard `usage_stream_pool_`.
I have encountered a deadlock during executor shutdown where the mutex is held by `ThenExecuteCallback()` while it waits on `cuEventRecord`. At the same time, a CUDA callback which runs `ReturnStreamToPool()` is blocked while it waits for the mutex. The stalled CUDA callback prevents forward progress on the stream, and `cuEventRecord` is unable to finish.
To fix this, I added a separate mutex for `ThenExecuteCallback` which is only used to guard `callback_stream_map_`, avoiding the false dependency on changes to `usage_stream_pool_`.
Copybara import of the project:
--
1b52f38384376d0764c81b61336779aa3d31b5a3 by Trevor Morris <tmorris@nvidia.com>:
Use a separate mutex for callback_stream_map to avoid deadlocks during shutdown.
--
0641594fffff8c2c815f3ad77012c4c175abe801 by Trevor Morris <tmorris@nvidia.com>:
Fix build warnings
Merging this change closes #6318
PiperOrigin-RevId: 573504792 | Trevor Morris | tmorris@nvidia.com | 1,697,313,575,000 |
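The fix described in the entry above, splitting one lock into two so that unrelated critical sections can no longer deadlock each other, follows a common pattern. A minimal Python sketch of the idea (the member names mirror the commit message, but the class itself is illustrative, not the C++ `LocalDeviceState`):

```python
import threading

class LocalDeviceStateSketch:
    """Before the fix, a single mutex guarded both structures, so a
    callback holding it could block stream-pool maintenance and vice
    versa. After the fix, each structure has its own lock."""

    def __init__(self):
        self._callback_mu = threading.Lock()  # guards only callback_stream_map
        self._pool_mu = threading.Lock()      # guards only usage_stream_pool
        self.callback_stream_map = {}
        self.usage_stream_pool = []

    def then_execute_callback(self, stream, callback):
        # Holding _callback_mu no longer blocks return_stream_to_pool().
        with self._callback_mu:
            self.callback_stream_map[stream] = callback

    def return_stream_to_pool(self, stream):
        with self._pool_mu:
            self.usage_stream_pool.append(stream)
```

The key design point is lock granularity: each mutex protects exactly one data structure, so waiting on a slow operation while holding one lock cannot stall code paths that only need the other.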
6a2e17273de9277d4bb0c9c816fec7edbf007e75 | compat: Update forward compatibility horizon to 2023-10-15
PiperOrigin-RevId: 573586642 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,360,535,000 |
a92c53c7732e7492efcf45aa6742401aa6778220 | Update GraphDef version to 1650.
PiperOrigin-RevId: 573586644 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,360,536,000 |
dc6e7551b873e2f4974aaeced0bcb2ee1db0be40 | Consider maximum of ids/uniques across all replicas.
PiperOrigin-RevId: 573644687 | Pankaj Kanwar | pkanwar@google.com | 1,697,397,200,000 |
538a07f84173ab094233dc5580f04fccf848c82d | Break out clustering to a subpipeline for now in preparation to call clustering -> runtime -> export.
PiperOrigin-RevId: 573658195 | Mason Chang | masonchang@google.com | 1,697,406,293,000 |
6bd2ee57f10423972fc05bfb3dd29a8aacc00949 | [XLA] Fix a type mismatch in HloEvaluator.
The reproduction comes from https://github.com/google/jax/discussions/18103 but I'm having trouble reproducing it in a unit test.
If this code path is triggered, a tuple with the incorrect size is returned.
Fixes https://github.com/google/jax/issues/18106
PiperOrigin-RevId: 573666720 | Peter Hawkins | phawkins@google.com | 1,697,411,825,000 |
e94ef549fc24b248b7ae9c27e55614b63ee561e7 | [xla:gpu] NFC: Remove gpu2/runtime2 GPU backend
PiperOrigin-RevId: 573691082 | Eugene Zhulenev | ezhulenev@google.com | 1,697,425,693,000 |
115850e81298e5b52d8c936a7d3952b1861855a8 | Do not create dependencies among instances of an op with TF_RandomGeneratorSideEffect trait
This makes the MLIR side effect modelling of such ops consistent with ACD.
PiperOrigin-RevId: 573704574 | Jian Cai | jiancai@google.com | 1,697,432,054,000 |
ba6822c07520ebb2621e4b0eccb5babd6e9ad898 | Redirect references from the framework target to the new single-source-file targets.
PiperOrigin-RevId: 573730927 | Juan Martinez Castellanos | juanantoniomc@google.com | 1,697,442,325,000 |
b5b7ef83cd98133b3566a1952a755daef28557e8 | [XLA:GPU] Allow dumping/loading autotune results in tests
For an example, see persisted_autotuning.md and these tests:
- load_autotune_results_using_execpath_test
- load_autotune_results_from_test_workspace_test
- dump_autotune_results_to_test_outputs_test
PiperOrigin-RevId: 573739932 | Tamás Danyluk | tdanyluk@google.com | 1,697,445,397,000 |
34cbb8feb30e75ebe8b5a8aca74a4894f7e2886b | Remove user 'Varsha-anjanappa' from auto-assignment list. | Shivam Mishra | 124146945+shmishra99@users.noreply.github.com | 1,697,446,040,000 |
f4c391d4f2f2df90fd1a9c8a0b657a7c8d532592 | compat: Update forward compatibility horizon to 2023-10-16
PiperOrigin-RevId: 573744686 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,447,025,000 |
a8b3db13e4c2f33ac8d1b6d736865f43faf2df12 | Update GraphDef version to 1651.
PiperOrigin-RevId: 573745039 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,447,118,000 |
4c07487c787fe838ac9fee6136af8e5c40faa3e4 | [XLA] Add util to create a multi-node GPU client in the functional HLO runner.
PiperOrigin-RevId: 573752810 | Benjamin Chetioui | bchetioui@google.com | 1,697,449,537,000 |
7793f575e8efb345e4dd673137a5f4ff0c55f583 | [XLA:GPU] Log more details when the Triton autotuner check fails during compilation.
PiperOrigin-RevId: 573754602 | Thomas Joerg | tjoerg@google.com | 1,697,450,113,000 |
6c8ac5e8b101c973493a094f3a75364ff2aaf2fd | [stream_executor] NFC: Remove unused target
PiperOrigin-RevId: 573754736 | Eugene Zhulenev | ezhulenev@google.com | 1,697,450,171,000 |
de57aee0ce44f46c04c544b5a485eac692da7ed3 | Use faster radix sort using CUB library for simple shapes
Adds a custom call target that maps to pre-compiled cub::DeviceRadixSort kernels
Reference: https://nvlabs.github.io/cub/structcub_1_1_device_radix_sort.html
The compiler pass for converting sort operations to custom calls is implemented separately.
PiperOrigin-RevId: 573762086 | Sergey Kozub | sergeykozub@google.com | 1,697,452,550,000 |
fcdbd352c9983bbf1715d6b7f0bcc8a6e6f1a75b | PR #62030: [ROCm] bugfixing conv_parameters and activated respective test
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/62030
This is a bugfix in conv_parameters.cc, originally appearing in https://github.com/ROCmSoftwarePlatform/tensorflow-upstream/pull/2000 and then in the follow-up PR: https://github.com/tensorflow/tensorflow/pull/61941
As requested, I am submitting this bugfix as a separate PR.
I also had to extend the tests for the Eigen::half datatype and the 'Tanh' and 'Sigmoid' activations,
since in this case fused matmul uses the cuDNN fallback which, in turn,
uses the MatmulParameters constructor (from tensorflow/core/util/autotune_maps/conv_parameters.cc).
Additionally, I removed the MatmulParameters class from tensorflow/core/kernels/matmul_op.h, which is dead code (to avoid any confusion).
@akuegel: could you please review this as this is a follow up to https://github.com/tensorflow/tensorflow/pull/61941 PR ?
Copybara import of the project:
--
14b5614ad6157ffd2665c755ae625d69a25824a8 by Pavel Emeliyanenko <pavel.emeliyanenko@amd.com>:
bugfixing conv_parameters and activated respective test
--
43fe51bfc54772488f1074bdaeb9939bda31da57 by Pavel Emeliyanenko <pavel.emeliyanenko@amd.com>:
fixed activation functions handling
--
5ae9396c93454f4aa9fd2ce7ccd843f9d0e2b08a by Pavel Emeliyanenko <pavel.emeliyanenko@amd.com>:
fixes according to review
--
4fbe6d27d98e55f88a3dbbd9153f966b7f00318b by Pavel Emeliyanenko <pavel.emeliyanenko@amd.com>:
skipping subtests for which no matmul algorithm is available
Merging this change closes #62030
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/62030 from ROCmSoftwarePlatform:conv_parameters_bugfixing 4fbe6d27d98e55f88a3dbbd9153f966b7f00318b
PiperOrigin-RevId: 573764721 | pemeliya | 141146080+pemeliya@users.noreply.github.com | 1,697,453,466,000 |
2e6b19faf7b19311f720ae38d3c0ebafa8547314 | PR #6289: [NVIDIA XLA] Added a helper function to assign suffix to instruction name when cloning
Imported from GitHub PR https://github.com/openxla/xla/pull/6289
The CloneWithNewOperands function doesn't add a suffix to the cloned instruction even when a suffix is specified in the context.
Reuse existing logic to add the suffix.
Copybara import of the project:
--
60eda3042a3c9f1702d6b6b053cfc9eba3fb06f9 by TJ <tjx@nvidia.com>:
added a helper function to assign suffix to instruction name when
cloning
Merging this change closes #6289
PiperOrigin-RevId: 573767789 | TJ Xu | tjx@nvidia.com | 1,697,454,411,000 |
be25f1140259daa3b8ebb5a34a5d58a9423f54de | [XLA:GPU] Add handling of kTranspose to split-K GEMM rewriter.
PiperOrigin-RevId: 573771736 | Ilia Sergachev | sergachev@google.com | 1,697,455,821,000 |
bd04383776c7aee88ee21a6305f4ec69c6da6212 | [XLA:GPU] Remove old gpu_performance_model files.
The code was moved to gpu/model.
PiperOrigin-RevId: 573781479 | Oleg Shyshkov | shyshkov@google.com | 1,697,459,098,000 |
55bc38d2f42f0a5619107519f3d65d07b6edf90b | Fix typo in forward_compatible_horizon function signature | Ragu | 88898517+Raguggg@users.noreply.github.com | 1,697,461,063,000 |
0aa0c0e1346cf341b877e51033dcffed820327bd | Fix function to get the singleprint of a SavedModel if the fingerprint proto is not present.
PiperOrigin-RevId: 573787810 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,461,383,000 |
86187a22ec129afd94fb2527dd244cb687e516df | [XLA:GPU] Teach priority fusion about what scatters can be emitted
This recently became more strict with 238685d95df91659531325f343548d8f09d0d3f8.
Share the logic with existing instruction fusion to make sure we don't emit
fusions the scatter emitter cannot handle.
PiperOrigin-RevId: 573791865 | Benjamin Kramer | kramerb@google.com | 1,697,462,621,000 |
e295549ca2e626bb33aa66d180faa4b1b8e8a390 | Integrate LLVM at llvm/llvm-project@ab737a86993b
Updates LLVM usage to match
[ab737a86993b](https://github.com/llvm/llvm-project/commit/ab737a86993b)
PiperOrigin-RevId: 573810444 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,468,401,000 |
63675b3221ec54bf1a23ab32f632c2c3d4cdc8c5 | Fix the documentation of `tf.raw_ops.Tile`.
Input can be 0-D as well.
PiperOrigin-RevId: 573813552 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,469,090,000 |
1c1a5c18c30ef2f874f8ff13b72e5119390e838a | [stream_executor] NFC: Rename and document GpuStreamHack API
PiperOrigin-RevId: 573819952 | Eugene Zhulenev | ezhulenev@google.com | 1,697,470,751,000 |
9a2139c9b24dedb1d6cf7231e6b663d04f2487d2 | Register mhlo dialect with compile_tf_graph.c.
This is needed for constructs like #"mhlo"<"type_extensions<bounds = [4096]>">,
which show up even in otherwise pure-TF graphs.
PiperOrigin-RevId: 573839404 | Matthias Kramm | kramm@google.com | 1,697,474,499,000 |
e20cdd628d6221b76b1e870adaec2a4c7145cf64 | Go: Update generated wrapper functions for TensorFlow ops.
PiperOrigin-RevId: 573840904 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,474,766,000 |
3dbb56eb50d35214a1c06c5e5593868934040854 | Remove lazy batching
Lazy batching was partially written two years ago but it was never
rolled out because it was deprioritized. Clean up the code because
the surrounding code has changed meaningfully since then.
PiperOrigin-RevId: 573841312 | Marissa Ikonomidis | marissaw@google.com | 1,697,474,851,000 |
31e227ef69094949088d54338cb6cc47003c7165 | Adds TupleReduceTest, ReduceTest, ScatterTest2D, ScatterTest3D, and GatherConvTest.
PiperOrigin-RevId: 573841913 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,474,964,000 |
bab9c153411544dc9cb980feed91a0e8b4467499 | Make some tests 3.12 compatible by adjusting the regex.
PiperOrigin-RevId: 573842370 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,475,044,000 |
2758a19a232454d3317db2181b0672753806e3f1 | [XlaCallModule] Allow i64 platform index arguments.
Previously, for multi-platform serialization the platform index
argument was required to be an i32. Now we allow also
i64, just like we do for dimension variables. This flexibility
is useful for JAX when running in 64-bit mode.
PiperOrigin-RevId: 573843239 | George Necula | necula@google.com | 1,697,475,194,000 |
86ad5efb3398be344c5a505ad62f764b66e518b6 | Break out clustering to a subpipeline for now in preparation to call clustering -> runtime -> export.
PiperOrigin-RevId: 573846301 | Mason Chang | masonchang@google.com | 1,697,475,774,000 |
92b81403305ddba765d3fb9a64bae8f52bf90dfd | Cap numpy version to below 2.0.0 to prevent breakages from new numpy version
Numpy 2.0.0 is scheduled for Jan 2024.
PiperOrigin-RevId: 573848261 | Michael Hudgins | michaelhudgins@google.com | 1,697,476,077,000 |
eb3668137804c96163fe75e471db7ffe0b88fba9 | Add TF Device and MHLO Dialect to Runtime lowering op passes. Required as sometimes MLIR would crash since these passes work on tf_device.cluster and serialized MHLO attributes.
PiperOrigin-RevId: 573849465 | Mason Chang | masonchang@google.com | 1,697,476,310,000 |
8fb401efdd4e5af66183b593f37c808e378bb25d | [stream_executor] NFC: Remove internal API uses outside of StreamExecutor package
Export platform specific handles via se::StreamExecutor APIs and do not depend on transitive includes of stream_executor_internal.h (it will be removed).
PiperOrigin-RevId: 573855693 | Eugene Zhulenev | ezhulenev@google.com | 1,697,477,469,000 |
55589f561d88834ca98f50c14c0aacb46eb8a4b0 | Check in generated pyi files for some py_extension targets.
PiperOrigin-RevId: 573860831 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,478,367,000 |
62e7b0309d629524db44df9e03d43f162b7a7b40 | Propagate stack traces when creating a GraphExecutionState. Constructing a FunctionLibraryDefinition was previously throwing away stack traces. Extract them from the GraphDef, and store them in the FunctionLibraryDefinition.
PiperOrigin-RevId: 573864820 | Alan Liu | liualan@google.com | 1,697,479,045,000 |
fb3ca0891874fd73869b2fa475e2e44300caf422 | Adds tests for KeepUserShardingTupleReduce, GetTupleElementWithUserSharding (disabled), While, DynamicSlice, Alias, AliasTupleParameter, JaxRandomUniform, and Broadcast.
PiperOrigin-RevId: 573868895 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,479,754,000 |
e717adf7bf1e4e5e0029a95f61d2baef966604d5 | [stream_executor] NFC: Clean up StreamExecutor plugin registration
PiperOrigin-RevId: 573875238 | Eugene Zhulenev | ezhulenev@google.com | 1,697,480,910,000 |
4dcb32a7a4cd168b0bc959de199d25bf0d50bae8 | Integrate LLVM at llvm/llvm-project@97217d188469
Updates LLVM usage to match
[97217d188469](https://github.com/llvm/llvm-project/commit/97217d188469)
PiperOrigin-RevId: 573876627 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,481,154,000 |
22e98b3423dcd3aadee197fa6ba0ed3482cadaf6 | [xla:gpu] NFC: Remove XLA runtime FFI support from XLA:GPU executable #6335
As a preparation for adding XLA:FFI on top of Thunks runtime remove previous generations of XLA:FFI that was never officially launched.
PiperOrigin-RevId: 573878414 | Eugene Zhulenev | ezhulenev@google.com | 1,697,481,475,000 |
93133d4abf15256950a46ab3130934f9110726ff | Defer deletion of cache entries to avoid reentrant erases.
PiperOrigin-RevId: 573878553 | Parker Schuh | parkers@google.com | 1,697,481,497,000 |
af72eb3128a0244fb22b914462833eb0c9d99377 | [xla:gpu] Emit Cholesky thunk from HLO #6224
This will be the first of many CLs that enable the IrEmitterUnnested to emit from HLO. This CL adds the emit_ir_from_hlo flag, which will tell the IrEmitterUnnested to emit the cholesky thunk from HLO. Setting the flag to true will not pass any test yet because there are many places in IrEmitterUnnested that rely on MLIR-based buffer allocations.
PiperOrigin-RevId: 573878706 | Anlun Xu | anlunx@google.com | 1,697,481,519,000 |
1aed4d46e4611d4f87631456dbadc30a811c480d | Shifts the value of 'overbudget_var' by a constant amount and adjusts the objective function to compensate. Also eliminates the old 'fixed_cost' logic which no longer provides a runtime speedup
PiperOrigin-RevId: 573883034 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,482,316,000 |
aedffb485a50cc83aaa27e8f635e9760ea90b4c9 | [xla:gpu] Remove std::cerr
PiperOrigin-RevId: 573883137 | Anlun Xu | anlunx@google.com | 1,697,482,338,000 |
de14a85396a501294c137c5b7f507532ae26b565 | Split client target into single-source targets.
PiperOrigin-RevId: 573884676 | Juan Martinez Castellanos | juanantoniomc@google.com | 1,697,482,652,000 |