| hash | msg | author | email | date |
|---|---|---|---|---|
543f1fb6c5e8de4e8b5ec328a6e30f0a6d9b38cb | BUILD visibility change only
PiperOrigin-RevId: 568936117 | A. Unique TensorFlower | gardener@tensorflow.org | 1,695,844,852,000 |
6df7096432a0db764d4f7e0d4d692af0842dc363 | Fix double-specified libtensorflow config target
PiperOrigin-RevId: 568937970 | Austin Anderson | angerson@google.com | 1,695,845,241,000 |
5101489aa60c58adcb0fa796ec11618f86f8b1bb | create boilerplate graph export file in api/v1, which will be used to expose the graph export for MLIR.
PiperOrigin-RevId: 568953653 | Mason Chang | masonchang@google.com | 1,695,848,512,000 |
30d28de817f7477dde22974ffff5af268279c26c | Fix issue in which, in an environment with both tf_keras and keras 2 installed, and with TF_USE_LEGACY_KERAS=1 set, doing `from tensorflow.keras import x` would pull x from keras 2 rather than tf_keras.
PiperOrigin-RevId: 568955899 | Francois Chollet | fchollet@google.com | 1,695,848,946,000 |
a54e168668782a85a040791d1d678993842e8df4 | Add wildcards after JNI_OnLoad and JNI_OnUnload in linker script.
This avoids errors of the form
ld.lld: error: version script assignment of 'VERS_1.0' to symbol 'JNI_OnLoad' failed: symbol not defined
ld.lld: error: version script assignment of 'VERS_1.0' to symbol 'JNI_OnUnload' failed: symbol not defined
when the... | Fergus Henderson | fergus@google.com | 1,695,849,316,000 |
84dbea0f5d61f772f7d4930b94b90fc737f85336 | Fix nn gradient registrations.
Ensure nn_fused_batch_grad and ctc_ops grads are always available by adding unused imports to nn_impl.py.
PiperOrigin-RevId: 568959466 | Fiona Lang | flang@google.com | 1,695,849,767,000 |
c01b5622108d504d6bacd0022d01607037180acf | [stream_executor] NFC: Rename cuda_gpu_executor to cuda_executor for consistency
No other cuda platform header or cc file has `gpu` in it.
PiperOrigin-RevId: 568960040 | Eugene Zhulenev | ezhulenev@google.com | 1,695,849,898,000 |
fc5cdebd8e9b04d22a6075b8902bc3d17918d204 | [MemorySpaceAssignment] Remove memory_space_assignment from file names in memory_space_assignment folder. Move all MemorySpaceAssignment code to memory_space_assignment namespace.
PiperOrigin-RevId: 568960729 | Subhankar Shah | subhankarshah@google.com | 1,695,850,044,000 |
88a60479b5a873049ec40cacd0c5ad402f54e6ad | Legalize some MHLO broadcasted compare ops to TF GreaterEqual/Greater/LessEqual/Less op directly. tf.BroadcastTo op folder folds constants by default, this would increase the size of models converted from StableHLO because StableHLO requires explicit broadcasting. This change helps reduce model size by removing unneces... | Yishuang Pang | ypang@google.com | 1,695,850,381,000 |
6033f90d201e705a08fdf7bbdeabf974730c457a | [stream_executor] Add initial test for CUDA command buffer
https://github.com/openxla/xla/issues/5857
PiperOrigin-RevId: 568963141 | Eugene Zhulenev | ezhulenev@google.com | 1,695,850,501,000 |
48dc0584aa026822dcac00073b76c495e4954b16 | Link tflite_jni with -Wl,--undefined-version
The version script has some symbols that are undefined for some configurations,
which will lead to a linker error with ld.lld's new --no-undefined-version
default.
```
ld: error: version script assignment of 'VERS_1.0' to symbol 'JNI_OnLoad' failed: symbol not defined
ld: ... | Arian Arfaian | aarfaian@google.com | 1,695,851,511,000 |
9704448afcf256a9908cd34405a826483e3aa81f | [XLA:GPU] Allows fusion of binary element-wise ops when exactly one operand is a (shared or unshared) splat constant. Previously, only splat constant operands with a single user would be accepted.
PiperOrigin-RevId: 568968119 | A. Unique TensorFlower | gardener@tensorflow.org | 1,695,851,589,000 |
58cf4efca092f58c4c1b47e3c99bd306d7054990 | Implement the PjRtStreamExecutorBuffer::DonateWithControlDependency API for HostCallback in Pathways
PiperOrigin-RevId: 568968580 | A. Unique TensorFlower | gardener@tensorflow.org | 1,695,851,683,000 |
19ce6551b9fd0f65e2467d3ba435243f9224d61e | Remove "no_tap" tag for aot test.
PiperOrigin-RevId: 568968719 | Shixin Li | shixinli@google.com | 1,695,851,713,000 |
11d397a9868a9b9531ee3ee112784508f846b6d7 | 1. Update the StatisticsGen logic to handle the custom file pattern defined by the user.
2. Update get_split_tfxio to handle splits with custom file pattern properties.
PiperOrigin-RevId: 568978781 | A. Unique TensorFlower | gardener@tensorflow.org | 1,695,853,951,000 |
f95a4a1910d5bcc379b96093b6a83ef7be216880 | Cleanup error logging when falling back to combined bridge
PiperOrigin-RevId: 568987102 | Mason Chang | masonchang@google.com | 1,695,855,848,000 |
5bb0daf59df2264da0c0e251f2eb2bc90c61ce9f | Upgraded oneDNN version to v3.3-rc | Bhavani Subramanian | bhavani1.subramanian@intel.com | 1,695,320,598,000 |
62b992d77c932b663daf96d2f353f7729a731d4b | Not an error if failed to open libtensorflow_framework
PiperOrigin-RevId: 568995336 | Haibo Huang | hhb@google.com | 1,695,857,786,000 |
b210366946ce38e3392a515c29b6b71a98df1fd8 | Delete remaining imports from python/__init__.py.
PiperOrigin-RevId: 569002242 | Fiona Lang | flang@google.com | 1,695,859,598,000 |
66aa52232dcf00105c6bab09633568eb6b022c8e | Add APIs for retrieving the TF Lite Extension APIs version number.
PiperOrigin-RevId: 569014286 | Fergus Henderson | fergus@google.com | 1,695,863,112,000 |
812e85802a90c4158598bed297fe946da642cd12 | Remove some unnecessary deps.
PiperOrigin-RevId: 569015036 | Fergus Henderson | fergus@google.com | 1,695,863,289,000 |
ea1e54bb56fc701d9930e6855574b7151a48ee5b | Fix docstring test for tf.summary.
PiperOrigin-RevId: 569022676 | Fiona Lang | flang@google.com | 1,695,865,613,000 |
d2d93477676a1c7735b01b51d4700eccc838b258 | [xla:gpu] Enable cuBLAS gemms in GPU graphs
PiperOrigin-RevId: 569034056 | Anlun Xu | anlunx@google.com | 1,695,869,710,000 |
ef40d017f8dd0a8602ab0565c14a689d41ea4e3c | [TSL] Use down_cast when down casting unique_ptr
No functional changes.
PiperOrigin-RevId: 569045182 | David Majnemer | majnemer@google.com | 1,695,872,495,000 |
f92a70a41e76db5d0829120f46d5b001f89decdf | Destruct objects owned by `WeakRefLRUCache::CacheEntry` out of band using `GlobalPyRefManager()`
This assumes less about whether the thread that destructs `CacheEntry` has GIL or not, which is difficult to reason about due to the `xla::LRUCache`'s use of `std::shared_ptr<CacheEntry>`.
The following changes have been ... | Junwhan Ahn | junwhan@google.com | 1,695,878,085,000 |
354634b03f7e9fa0816e304ad6397f119ae00417 | [stream_executor] NFC: Remove includes of a private stream_executor_pimpl header
`stream_executor_pimpl` is an implementation detail and should not be included outside of `stream_executor` package, instead include `stream_executor.h` which exports StreamExecutor public API
PiperOrigin-RevId: 569070331 | Eugene Zhulenev | ezhulenev@google.com | 1,695,880,509,000 |
f2957eb767ee8ffdd142b7a7ea8a271bc46e98a5 | Reenable file upload in bazel remote config
This replaces the "transfer script via cli argument" hack by the upload support for remote configurations that landed in Bazel 3.1.0.
PiperOrigin-RevId: 569079584 | A. Unique TensorFlower | gardener@tensorflow.org | 1,695,883,171,000 |
435df2af17d6c826743a72637ae6b4b909f09448 | [stream_executor] Add support for submitting command buffers for execution
https://github.com/openxla/xla/issues/5857
PiperOrigin-RevId: 569080650 | Eugene Zhulenev | ezhulenev@google.com | 1,695,883,532,000 |
76b5a6c9ee28d6e944c4736a6c82ce54df16b857 | Fixed typos in TF document
Corrected several typos in documentation. Please have a look. Thank you! | sushreebarsa | 84765720+sushreebarsa@users.noreply.github.com | 1,695,884,807,000 |
6c629becc76bb09f348d316b6c7507af3cc5c157 | Update result_analyzer.py | sushreebarsa | 84765720+sushreebarsa@users.noreply.github.com | 1,695,884,988,000 |
2585893a4732485afa1ec253c8c0d0e576b16419 | Update result_analyzer.py | sushreebarsa | 84765720+sushreebarsa@users.noreply.github.com | 1,695,885,044,000 |
c60f0baa24ce624fe0dfd9964f4c9372dadb4e9c | Update shape_output_test.py | sushreebarsa | 84765720+sushreebarsa@users.noreply.github.com | 1,695,885,104,000 |
2aff7cfae2a73e07f461abfa608e060eef3881e5 | Update shape_output_test.py | sushreebarsa | 84765720+sushreebarsa@users.noreply.github.com | 1,695,885,212,000 |
a913e6810f07969ea9c3f0120ae7b0e3f61d5d17 | Internal Code Change
PiperOrigin-RevId: 569095712 | A. Unique TensorFlower | gardener@tensorflow.org | 1,695,888,272,000 |
4fe8284a052ba496c4067ad56cc25d332a014704 | Fixed typos in wrap_function.py
Fixed typos in wrap_function.py | tilakrayal | 81610181+tilakrayal@users.noreply.github.com | 1,695,888,974,000 |
2d1b0e17a553a77202a107c66897dc855dc6c8c3 | Update trt_convert.py | tilakrayal | 81610181+tilakrayal@users.noreply.github.com | 1,695,889,252,000 |
a6cce9268431d0176da1f85e30c9c031312a851d | Update GraphDef version to 1633.
PiperOrigin-RevId: 569106700 | A. Unique TensorFlower | gardener@tensorflow.org | 1,695,891,729,000 |
40390d99e49213914c2fdbf07a34dfadd292c66b | compat: Update forward compatibility horizon to 2023-09-28
PiperOrigin-RevId: 569106716 | A. Unique TensorFlower | gardener@tensorflow.org | 1,695,891,733,000 |
4cc57f1c3e7036060e056679097e7a8a093b418b | [XLA:GPU] Trigger Triton GEMM fusions also on kCopy input operations.
PiperOrigin-RevId: 569114224 | Ilia Sergachev | sergachev@google.com | 1,695,893,900,000 |
ffbe5e18646b26f2d373d8e78bfec5740deafd55 | Change layouts of matmul([bf16 x bf16] -> bf16) operands to ensure that the contracting dimensions for both sides are the most minor for matmul([bf16 x bf16] -> bf16). Do this behind the flag --xla_gpu_ensure_minor_dot_contraction_dims.
PiperOrigin-RevId: 569123880 | Aliia Khasanova | aliia@google.com | 1,695,896,564,000 |
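The commit above changes operand layouts so that the contracting dimensions of a bf16 matmul are most minor on both sides. As a hedged illustration of what "most minor" means (a plain NumPy sketch, not XLA's actual layout-assignment code): in row-major storage the most minor axis is the fastest-varying one, so for `C = A @ B` the contracting dimension K is already last (minor) in A, while making it minor in B amounts to storing B transposed (column-major).

```python
import numpy as np

M, K, N = 4, 8, 3
a = np.zeros((M, K), dtype=np.float32)  # contracting dim K is the last axis
b = np.zeros((K, N), dtype=np.float32)  # contracting dim K is the first axis

# The most minor axis is the one whose stride equals the element size.
b_minor_k = np.asfortranarray(b)  # column-major storage makes K fastest-varying

assert a.strides == (K * 4, 4)          # K already most minor in a
assert b_minor_k.strides == (4, K * 4)  # K now most minor in b
```

Keeping the contracting dimension contiguous lets the GEMM kernel stream both operands linearly, which is why a flag like `--xla_gpu_ensure_minor_dot_contraction_dims` can pay off.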
405c15974e16366b8314689b4d22f720ed4d7424 | [XLA:GPU] Do not run kernels in autotuning which spill registers
Not running kernels which spill registers saves some compilation time and apparently does not affect performance.
PiperOrigin-RevId: 569159250 | Son Tuan Vu | vuson@google.com | 1,695,907,115,000 |
72c4b7675c51b946485c5fef7c78e07271c629b7 | [XLA:GPU][NFC] Disable cuBLAS fallback in Triton emitter tests.
PiperOrigin-RevId: 569159562 | Ilia Sergachev | sergachev@google.com | 1,695,907,208,000 |
1697b89f58d06cd9293ed444976e162f49747735 | Support `[0,1,i,o]` transposed convolution kernels
PiperOrigin-RevId: 569168220 | A. Unique TensorFlower | gardener@tensorflow.org | 1,695,909,564,000 |
20b4d65b911a9752128a73a7cd866c22f804ecca | [XLA:GPU] [NFC] Remove redundant indirection layer
PiperOrigin-RevId: 569191826 | George Karpenkov | cheshire@google.com | 1,695,915,402,000 |
aff7c7f18eb40f0ae33d6616f54b36f789c64a7d | [XLA:GPU][NFC] Change logging in TritonAutotuner
PiperOrigin-RevId: 569195141 | Tamás Danyluk | tdanyluk@google.com | 1,695,916,124,000 |
614a467679fdbdb74e65a7b89f52745d5b2b0456 | Do not use PJRT to transfer int4 tensors.
XLA PR https://github.com/openxla/xla/pull/5926 will cause PJRT to pack int4 tensors when transferring them to device, but TensorFlow represents int4 tensors as unpacked. This changes makes PJRT not be used to transfer int4 tensors. In the future, TensorFlow will likely pack i... | Reed Wanderman-Milne | reedwm@google.com | 1,695,916,999,000 |
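The commit above hinges on the difference between "packed" int4 (two values per byte, as PJRT transfers them) and "unpacked" int4 (one value per int8 byte, as TensorFlow represents them). A hypothetical NumPy sketch of that packing, for illustration only (not TensorFlow's or PJRT's actual code):

```python
import numpy as np

def pack_int4(values):
    """Pack "unpacked" int4 values (one per int8 byte) two per byte."""
    v = (np.asarray(values, dtype=np.int8) & 0x0F).astype(np.uint8)
    if v.size % 2:  # pad to an even count so values pair up
        v = np.concatenate([v, np.zeros(1, np.uint8)])
    return v[0::2] | (v[1::2] << 4)  # low nibble first, high nibble second

def unpack_int4(packed, count):
    """Inverse of pack_int4: expand back to one int8 per value."""
    p = np.asarray(packed, dtype=np.uint8)
    out = np.empty(p.size * 2, dtype=np.int8)
    out[0::2] = p & 0x0F
    out[1::2] = p >> 4
    out = np.where(out >= 8, out - 16, out)  # sign-extend the 4-bit values
    return out[:count]

vals = [1, -2, 7, -8, 3]
assert list(unpack_int4(pack_int4(vals), len(vals))) == vals
```

The round trip shows why the two representations are incompatible on the wire: a packed buffer is half the size, so a transfer path that expects one layout will misread the other.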
5d4c1b0c67e2cfdcdea416e2314fa676aa072c07 | [stream_executor] NFC: Make temporary memory allocators private and export them via stream_executor target
Temporary memory manager is an implementation detail of StreamExecutor, and all SE clients should get access to headers via `xla/stream_executor` dependency.
PiperOrigin-RevId: 569207046 | Eugene Zhulenev | ezhulenev@google.com | 1,695,918,835,000 |
3f50ac64300be574470ec8bc529adba6671d54f4 | Implement ReduceWindow in TFLite.
PiperOrigin-RevId: 569210176 | Quentin Khan | qkhan@google.com | 1,695,919,553,000 |
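For readers unfamiliar with the op being implemented above: ReduceWindow applies a reduction over each sliding window of its input. A naive 1-D sketch of those semantics (window stride 1, no padding), purely illustrative and unrelated to the actual TFLite kernel:

```python
import numpy as np

def reduce_window_1d(x, window, init, reduce_fn):
    """Naive 1-D ReduceWindow: reduce each length-`window` sliding slice."""
    out = []
    for i in range(len(x) - window + 1):
        acc = init
        for v in x[i:i + window]:
            acc = reduce_fn(acc, v)
        out.append(acc)
    return np.array(out)

windows_max = reduce_window_1d(np.array([1, 3, 2, 5, 4]), 3, float("-inf"), max)
```

With `max` as the reduction this is a 1-D max-pool; swapping in `operator.add` with `init=0` gives a sliding sum.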
96368f2e618af97a1d40a70447315610259031cd | Update release branch to 2.15 | cjflan | 89868659+cjflan@users.noreply.github.com | 1,695,920,807,000 |
c97bf4d1d92b5fbd60f52fe783be378aa72c869a | #tf-data-service Increase version number for internal change.
PiperOrigin-RevId: 569217384 | Matt Callanan | mpcallanan@google.com | 1,695,921,047,000 |
98ef794c0ae6ea7e52ef843175db1871fa761966 | Open source `_pywrap_tpu_embedding` for SparseCore.
PiperOrigin-RevId: 569224412 | Hye Soo Yang | hyey@google.com | 1,695,922,479,000 |
522caf71145fed24fb2a3cfa75c263ff4c6e64e5 | Add deserialization for `stablehlo.rng_bit_generator` op and an e2e gumbel test which emits the op.
PiperOrigin-RevId: 569224472 | Yishuang Pang | ypang@google.com | 1,695,922,489,000 |
59cac69e4ecb54759d318826bc5b5887ee8d203b | format fixed | Raunak | mayank.kumar.raunak@intel.com | 1,695,923,398,000 |
8c6a0b5c1d201c17f5cc26125d058ef6a194d92a | [IFRT] Add on-demand canonicalization for new sharding's memory kind in `PjRtArray::Reshard` in case it has not been done before.
PiperOrigin-RevId: 569228579 | A. Unique TensorFlower | gardener@tensorflow.org | 1,695,923,330,000 |
fe0941c441095625558e99e53a72676081b8d11f | Enable cross compilation for GPU PJRT compiler/client.
PiperOrigin-RevId: 569233566 | Shixin Li | shixinli@google.com | 1,695,924,282,000 |
0947680e554c3b89aba29102d9a934ec38a2f417 | Fix missed --config setting
PiperOrigin-RevId: 569237506 | Austin Anderson | angerson@google.com | 1,695,924,978,000 |
5f9815d4d0ca9b7ebe31163dc4ec44ded6869ac7 | added release notes | Raunak | mayank.kumar.raunak@intel.com | 1,695,925,709,000 |
cb557ce896ddc662cb404213dcd778eeb432e258 | Add functions to clean up tensor allocation types in TFLite.
`TfLiteAllocationType` values carry several orthogonal meanings at the same time. For instance, `kTfLiteDynamic` means “we don't know the shape until runtime” and “is malloc allocated”. This prevents writing clear code using `TfLiteTensor::allocation_type` t... | Quentin Khan | qkhan@google.com | 1,695,925,581,000 |
0d84e0456b26c4bd8ee84225bd977093da21558c | fixed the readme format | Raunak | mayank.kumar.raunak@intel.com | 1,695,926,686,000 |
1739a1bba88ee6f7774b73c47e0ac28bb8dcd072 | Remove outdated TODOs in Toco.
PiperOrigin-RevId: 569245248 | Yu-Cheng Ling | ycling@google.com | 1,695,926,547,000 |
a314feadc49fad04260bd8796eb23d628afe5f5b | Merge pull request #60190 from linux-on-ibm-z:add_s390x_support_in_TFLite_String_TensorType
PiperOrigin-RevId: 569249297 | TensorFlower Gardener | gardener@tensorflow.org | 1,695,927,885,000 |
8e8967a83ff910253e5f2880385c010a89ab09f8 | [XLA:GPU] Let Triton GEMM fusions reevaluate operations adding more parameters.
Other operations in the fusion queue may reduce the number of parameters per fusion which allows adding more parameters again.
PiperOrigin-RevId: 569252026 | Ilia Sergachev | sergachev@google.com | 1,695,927,972,000 |
9db5cf6d77fbfb6bacefcd2d861d060b868f63d0 | Replace the deprecated TensorFlow's `gtl::optional` alias with its target `std::optional`
PiperOrigin-RevId: 569261141 | Dmitri Gribenko | dmitrig@google.com | 1,695,930,111,000 |
b8d797390486f4bb14dd72887df4f380f08d479b | Check for bf16 support | Kanvi Khanna | kanvi.khanna@intel.com | 1,695,930,943,000 |
5d16b1d6cf4a34cccccb33bcead11e048deda365 | Integrate LLVM at llvm/llvm-project@23ef8bf9c0f3
Updates LLVM usage to match
[23ef8bf9c0f3](https://github.com/llvm/llvm-project/commit/23ef8bf9c0f3)
PiperOrigin-RevId: 569276827 | Sam McCall | sammccall@google.com | 1,695,933,466,000 |
50b94e93fc6fe8e6efbcb640ec7674f11b2d1323 | [xla] Fix a problem in cloning wrapped async instructions.
Previously, we cloned the wrapped async computation twice, once for each async
instruction in a pair of async instructions. We now make sure that both async
instructions use the same wrapped async computation.
PiperOrigin-RevId: 569282322 | Bixia Zheng | bixia@google.com | 1,695,934,593,000 |
5acba3f5acb78e480dc30cc5c03143f470e1303a | Refine result types for TensorListPopBack during shape inference.
PiperOrigin-RevId: 569285288 | Luke Boyer | lukeboyer@google.com | 1,695,935,082,000 |
4ea0eb06620d77ee5e70117a15ae560bc12edc8a | Update TFRT dependency to use revision
http://github.com/tensorflow/runtime/commit/703dbd9190461909be83200d6a1f1c495f1177d5.
PiperOrigin-RevId: 569287679 | A. Unique TensorFlower | gardener@tensorflow.org | 1,695,935,594,000 |
f26b3dfc360f37f9948aff41dcade560e6314fdb | Roll back https://github.com/tensorflow/tensorflow/commit/f92a70a41e76db5d0829120f46d5b001f89decdf
PiperOrigin-RevId: 569292433 | Junwhan Ahn | junwhan@google.com | 1,695,936,700,000 |
8314e4cf4f030db93138bc44868e95ded4ce2128 | Fix copying of xla aot runtime sources.
With the recent vendoring of XLA/TSL, the previous required XLA/TSL
runtime sources now have a path prefix `../local_{xla|tsl}/`, which
when copied into the pip package `xla_aot_runtime_src/` folder end up
being copied outside the directory.
We fix this by removing the `../loc... | Antonio Sanchez | cantonios@google.com | 1,695,937,301,000 |
1c22354116b9dc9943e6026bf449d67ac2bfd696 | Update RELEASE.md with final 2.14.0 release notes
PiperOrigin-RevId: 569297075 | Raviteja Gorijala | gorijala@google.com | 1,695,937,769,000 |
d2981d776d2ee0ba155c1d88fa1197395f7e2990 | Enable cross compilation for GPU PJRT compiler/client.
PiperOrigin-RevId: 569299837 | Shixin Li | shixinli@google.com | 1,695,938,394,000 |
047bc82d7c654376822bfa0cfe0976c84266e059 | [mlir][sparse] Change to new syntax in mlir tests
Example:
`#sparse_tensor.encoding<{ lvlTypes = [ "compressed" ] }>` becomes
`map = (d0) -> (d0 : compressed)`
FileCheck tests are changed to the more general form for the sparse encoding: `#sparse_tensor.encoding<{{{.*}}}>`.
PiperOrigin-RevId: 569301180 | A. Unique TensorFlower | gardener@tensorflow.org | 1,695,938,675,000 |
f7c2dda7a7d1d6bffad118e8a88b4a44535a3751 | Merge pull request #60678 from tensorflow:pjpratik-patch-3
PiperOrigin-RevId: 569304601 | TensorFlower Gardener | gardener@tensorflow.org | 1,695,940,026,000 |
e0be634722c26f8ec3562e0b0945da7e4879ec0a | Allow running xla aot test locally outside docker.
The existing test makes assumptions about the python version and cmake paths
that do not generally hold on local workstations.
Added a cmake variable `TENSORFLOW_PACKAGE_PATH` that allows setting the full
directory of the python tensorflow package, and explicitly set... | Antonio Sanchez | cantonios@google.com | 1,695,939,849,000 |
96a12741adab2a0326ec187cc37367b15174bd74 | Roll back https://github.com/tensorflow/tensorflow/commit/297ec1f82d90599d24033a0bbb677f79565ffeb9
PiperOrigin-RevId: 569314745 | Junwhan Ahn | junwhan@google.com | 1,695,942,006,000 |
335cbac3c2e50494764ef9441c02ce1e164c6f36 | Add serving model run
PiperOrigin-RevId: 569315072 | Feng Wang | wffw@google.com | 1,695,942,080,000 |
0128337b3388fbf2f0025dd013a305d8f09d84cd | Keep attributes in C++ canonicalization rewrites
By default, pattern rewrites do not preserve op attributes. This is problematic for GPU graphs, where the device placement of ops is stored in the "device" attribute. Losing this information caused a significant performance loss in some cases. This fixes the patterns defined in... | Jian Cai | jiancai@google.com | 1,695,943,537,000 |
8597a45659de0f0776535adc42769f91ce1000a6 | [stream_executor] Add support for tracing into command buffers (graph capture)
We should never mix traced command buffers (captured CUDA graphs) with explicitly constructed command buffers (graphs).
Traced command buffers should be recorded into "parent" command buffers as nested submissions.
Public API is structure... | Eugene Zhulenev | ezhulenev@google.com | 1,695,946,373,000 |
1314592ab06ada38cc339cdd62094e245a2bcdae | Update ARM64 docker containers to have a target for JAX builds and add retry logic for flaky networks
PiperOrigin-RevId: 569333946 | Michael Hudgins | michaelhudgins@google.com | 1,695,946,903,000 |
0dd7a59c937f61f968a4083de1332f2cb537feba | [xla:gpu] Add CommandBufferScheduling pass
The pass outlines command buffers that contain a single fusion instruction.
https://github.com/openxla/xla/issues/5756
PiperOrigin-RevId: 569338364 | Anlun Xu | anlunx@google.com | 1,695,948,325,000 |
d85de9d8479c54349824fe9e9bff9eb8d71ec271 | [stream_executor] Make stream_executor_headers internal and document what's going on in a BUILD file
Clearly document what StreamExecutor clients should depend on, and how BUILD file is structured.
PiperOrigin-RevId: 569339256 | Eugene Zhulenev | ezhulenev@google.com | 1,695,948,571,000 |
24cc7e24b8091abd0b1bc527665e94a1b087d633 | GetMaxIdsAndUniques refactoring and logging
PiperOrigin-RevId: 569365982 | Pat Notz | patn@google.com | 1,695,957,450,000 |
05ab3513abbcd20263e728f432a789082edbcf71 | Also run PostSchedulingCopyInsertion in CompileModuleToLLVMIrImpl.
We want to run this always after scheduling, but so far we only run it in the
AssignBuffers() function. CompileModuleToLLVMIrImpl is on the regular
compilation path.
PiperOrigin-RevId: 569395560 | Adrian Kuegel | akuegel@google.com | 1,695,968,818,000 |
81d69f4a69d7ad7921eea635597d4421f78ba656 | compat: Update forward compatibility horizon to 2023-09-29
PiperOrigin-RevId: 569427815 | A. Unique TensorFlower | gardener@tensorflow.org | 1,695,978,135,000 |
14e4c41bf1eaa595f24e88ab94c54b60304117a4 | Update GraphDef version to 1634.
PiperOrigin-RevId: 569428007 | A. Unique TensorFlower | gardener@tensorflow.org | 1,695,978,192,000 |
8886b41b2c50cf1caebcf96e7c0a750c2e8ea3e2 | PR #5300: A new pass to optimize the AllGather->Binary_Op order sequence
Imported from GitHub PR https://github.com/openxla/xla/pull/5300
This is a new GPU SPMD optimization pass for the following pattern:
binary-op(all-gather(a), all-gather(b))
to
all-gather(binary-op(a, b))
Copybara import of the project:
--
198c... | kushanam | kahmadian@nvidia.com | 1,695,984,149,000 |
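The rewrite described in the PR above, `binary-op(all-gather(a), all-gather(b))` to `all-gather(binary-op(a, b))`, is valid because all-gather concatenates per-device shards and element-wise ops commute with concatenation; applying the op before gathering halves the collective traffic. A small NumPy model of that identity (modeling all-gather as concatenation over a 2-device mesh, not the actual XLA pass):

```python
import numpy as np

# Model a 2-device mesh: each "device" holds one shard of a and b.
a_shards = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
b_shards = [np.array([10.0, 20.0]), np.array([30.0, 40.0])]

def all_gather(shards):
    # all-gather gives every device the concatenation of all shards
    return np.concatenate(shards)

# Before the rewrite: two all-gathers, then the element-wise op.
before = all_gather(a_shards) + all_gather(b_shards)

# After the rewrite: the op runs per shard, then a single all-gather.
after = all_gather([a + b for a, b in zip(a_shards, b_shards)])

assert np.array_equal(before, after)
```

The identity holds for any element-wise binary op, which is the pattern the pass targets.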
9ec9136eb1612d8bdc365719ddf1d9698db04053 | PR #5491: Add frontend attributes to HLO module
Imported from GitHub PR https://github.com/openxla/xla/pull/5491
This is a PR that addresses #5424.
Copybara import of the project:
--
6cf274695c9bdbee263c8e136abd5c2a6279593e by Boian Petkantchin <boian@nod-labs.com>:
Add frontend attributes to HLO module
--
7242475... | Boian Petkantchin | boian@nod-labs.com | 1,695,985,582,000 |
c0574c422ffa5a7675f96b5c5ff881dfe037279b | Fix null pointer deref in gif_io. | Eli Kobrin | kobrineli@ispras.ru | 1,695,124,407,000 |
63feaf321165e1e2795f43e3834c007364921df6 | Add check for raster bits. | Eli Kobrin | kobrineli@ispras.ru | 1,695,982,043,000 |
13dca754ed367979511b29b71b6ee6e70641267a | Integrate LLVM at llvm/llvm-project@512739ebbb25
Updates LLVM usage to match
[512739ebbb25](https://github.com/llvm/llvm-project/commit/512739ebbb25)
PiperOrigin-RevId: 569465927 | Sam McCall | sammccall@google.com | 1,695,990,792,000 |
0af9d56fe45e7b5dfe29a02190590ee0f14d292d | [XLA:GPU] Replace the use of global TSL flag with precision config in Triton GEMM.
PiperOrigin-RevId: 569471716 | Ilia Sergachev | sergachev@google.com | 1,695,992,762,000 |
86cba519e59c3ca7b5b6e5be4e08eb12e17c58f8 | Update TFRT dependency to use revision
http://github.com/tensorflow/runtime/commit/88e2ee6c6aaed74fc4201ad62e0b6dad0c40d56f.
PiperOrigin-RevId: 569472682 | A. Unique TensorFlower | gardener@tensorflow.org | 1,695,993,088,000 |
3d1787658f2665a7b4f76c9cc7eb8ede55fa4535 | [XLA:GPU] Add VLOG for ShouldFuse.
PiperOrigin-RevId: 569475824 | Oleg Shyshkov | shyshkov@google.com | 1,695,994,150,000 |
39a8b772355f7ee57e8076dc1908dbcdf52a4c4b | [XLA:GPU] Unravel the bool expression in IsUniversallyLoopFusible, IsLoopFusibleAsConsumer and IsLoopFusibleAsProducer
PiperOrigin-RevId: 569476546 | Oleg Shyshkov | shyshkov@google.com | 1,695,994,396,000 |
a1ccb74fbc61f4f926beae9548ea861d22106a0e | Simplify implementation of ReduceWindow by removing template recursions.
PiperOrigin-RevId: 569486188 | Quentin Khan | qkhan@google.com | 1,695,997,538,000 |
61b4313557b43bf6bd98d7c584e9d029e063fd2f | Copy/Paste AddGraphExportLoweringPassesV2 from bridge.cc to the tf_Dialect_to_executor API.
PiperOrigin-RevId: 569496764 | Mason Chang | masonchang@google.com | 1,696,000,731,000 |
ae244a6866bf5efc02ab5806e1037df9a52a5523 | [SE] [NFC] Remove unused APIs from dnn.h
PiperOrigin-RevId: 569504937 | George Karpenkov | cheshire@google.com | 1,696,003,078,000 |
8df5223f556b7a0cf7126348bed0d872bcd79883 | Integrate StableHLO at openxla/stablehlo@2eec6db
PiperOrigin-RevId: 569507553 | Sandeep Dasgupta | sdasgup@google.com | 1,696,003,706,000 |
334d12a1a4d2cda5f3b4b3cdca1a9214051e238b | [PJRT] Split the GpuId() platform constants into CudaId()/RocmId().
Similarly for the GpuName() constant.
While most of the time we treat CUDA and ROCm GPUs identically, we sometimes want to distinguish between CUDA and ROCm (e.g., for DLPack exports) and it's helpful if this is encoded in the platform ID.
PiperOrig... | Peter Hawkins | phawkins@google.com | 1,696,005,277,000 |