| hash (string, len 40) | msg (string, len 1–131k) | author (string, len 1–33) | email (string, len 0–57) | date (int64, 1,447B–1,698B) |
|---|---|---|---|---|
68235c1107f86381777554d602d39ffc5824653e | [xla:pjrt] Remove tfrt::HostContext dependency from se_gpu_pjrt_client
PiperOrigin-RevId: 574469489 | Eugene Zhulenev | ezhulenev@google.com | 1,697,639,269,000 |
3267bbe50ec5550f3d27ecb95c3e58e0db9cd9a8 | [xla] Switch to tsl/concurrency library inside XLA
This removes remaining dependencies on TFRT from XLA and fixes #6410
PiperOrigin-RevId: 574471178 | Eugene Zhulenev | ezhulenev@google.com | 1,697,639,749,000 |
9f9154c3ea6f2c10d912ba26afe07b24a67ba1f6 | Add `compatible_with` to `xnnpack_plugin` build rules.
PiperOrigin-RevId: 574489845 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,644,307,000 |
9b2fb302a8d345b80c94ee90090234e8808f0eb4 | Fix TF build issues with TPU Embedding V3 and add its APIs.
PiperOrigin-RevId: 574492941 | Hye Soo Yang | hyey@google.com | 1,697,644,993,000 |
b7b42a81e034540e6efb3d721e1614347f89a647 | Integrate StableHLO at openxla/stablehlo@03216ba4
PiperOrigin-RevId: 574493035 | Eugene Burmako | burmako@google.com | 1,697,645,012,000 |
d318ec01dda401dd80a6ba0ee5ab152c5f3ca6f5 | Fix uq->int lowering for Hybrid ops
Use correct tensor shape for RHS and remove unnecessary +0.5.
PiperOrigin-RevId: 574493183 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,645,041,000 |
f37fad4c3cad7e4eff186fb4f14ff096ae79f5d0 | Fix high memory usage when running eager ops as a function.
Fixes https://github.com/tensorflow/tensorflow/issues/58676.
When running eager op as a function, tf.data ops are getting repeatedly
cached leading to high memory usage as reported in the above issue. The
fix is to recognize such ops and prevent them from getting cached again. | Bhavani Subramanian | bhavani1.subramanian@intel.com | 1,697,248,583,000 |
33730f65c4e912fdd962d65f5dcf9ca0d8c9ddb4 | [xla] Remove tf_runtime from XLA OSS dependencies
PiperOrigin-RevId: 574502175 | Eugene Zhulenev | ezhulenev@google.com | 1,697,647,046,000 |
41e118bfed2257f53f56c9e4d647f25abffa14e6 | Migrate RestoreFunctionNamePass to prod directory
PiperOrigin-RevId: 574524632 | Doyeon Kim | doyeonkim@google.com | 1,697,651,172,000 |
b6852c553af40171ce20234ee355e046599dfc80 | Add test to ensure that StatelessRandomNormal does not get constant folded.
PiperOrigin-RevId: 574526910 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,651,579,000 |
7da8b5ab35078a3875e67ee486693bcd578ac1cc | Add reshape to the mhlo->tfl path
PiperOrigin-RevId: 574529700 | Yishuang Pang | ypang@google.com | 1,697,652,080,000 |
f7ddb3c17be6f9f9e5a97c3e156907e64025659e | Re-enable layering_check for package.
PiperOrigin-RevId: 574530532 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,652,232,000 |
ce4adf9f5b0d676213415b98fdbf6f918e14ec5d | Merge pull request #62154 from Intel-tensorflow:bhavanis/eager-mem
PiperOrigin-RevId: 574534799 | TensorFlower Gardener | gardener@tensorflow.org | 1,697,653,410,000 |
cc0c9ffd332feb777c371baff96e2557608a1d93 | Re-enable layering_check for package.
PiperOrigin-RevId: 574535790 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,653,149,000 |
97fa259d145d1da0bd410be2a365b098754c41e3 | Allow `SentencepieceTokenizeOp` to pass through the native variable legalization.
PiperOrigin-RevId: 574544194 | Haoliang Zhang | haoliang@google.com | 1,697,654,800,000 |
bd996707821cbd542ae9c4e4acf665f86be36769 | Update NDK version to r25b
PiperOrigin-RevId: 574544957 | Grant Jensen | grantjensen@google.com | 1,697,654,966,000 |
4744c75142dd5e91286e5e4e04bc5c42c9979bf3 | Reverts a previous, breaking change.
PiperOrigin-RevId: 574552019 | David Majnemer | majnemer@google.com | 1,697,656,395,000 |
f5415709fa049a90c9d530cfb6d75676238f6a8c | Uses a separate iteration variable 'j' for the inner tuple loop, since 'i' is already defined.
PiperOrigin-RevId: 574565511 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,659,117,000 |
d4c9435333c36db1a60f575f38fed3c8c495e38a | [XLA:GPU] Put back FusionFitsInBudget heuristic
This avoids cases where ptxas runs out of parameters.
PiperOrigin-RevId: 574567569 | Benjamin Kramer | kramerb@google.com | 1,697,659,428,000 |
1e81afeec94da321260895526adefd13bdef9628 | PR #6423: point the PJRT integration guide to its new home
Imported from GitHub PR https://github.com/openxla/xla/pull/6423
the google doc just redirects back to Github...
Copybara import of the project:
--
a84e6ef61aeede68b6490bccefd736ca5ca35bbd by David Hall <david.lw.hall@gmail.com>:
point the PJRT integration guide to its new home
the google doc just redirects back to Github...
Merging this change closes #6423
PiperOrigin-RevId: 574580741 | David Hall | david.lw.hall@gmail.com | 1,697,661,630,000 |
610e550d9263a5be5b52b70bd0fadcef3718c4cf | No public description
PiperOrigin-RevId: 574581799 | Yishuang Pang | ypang@google.com | 1,697,661,794,000 |
c64561e1d3d667620ea1969619fce2f2e2615870 | Move TPU Bridge v2 to Clustering, Runtime Lowering, Export
PiperOrigin-RevId: 574587741 | Mason Chang | masonchang@google.com | 1,697,662,698,000 |
0fa88ccc34d7e995de61e930e1c98227215e5530 | Start switching tf-nightly to use Keras 3.
PiperOrigin-RevId: 574606796 | Francois Chollet | fchollet@google.com | 1,697,665,760,000 |
785e9b45d493673774314d817c7acaf8ed3c84e8 | Bump urllib3 from 2.0.6 to 2.0.7
Bumps [urllib3](https://github.com/urllib3/urllib3) from 2.0.6 to 2.0.7.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/2.0.6...2.0.7)
---
updated-dependencies:
- dependency-name: urllib3
dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com> | dependabot[bot] | 49699333+dependabot[bot]@users.noreply.github.com | 1,697,666,275,000 |
04fb826f98b92dd172ad665d8a5522a2f8201867 | Report errors that trigger step abort to improve debuggability.
PiperOrigin-RevId: 574613218 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,666,817,000 |
7f9ae0fd09d38e9354e92248e0cdbbbbe22bceeb | No public description
PiperOrigin-RevId: 574617068 | Fiona Lang | flang@google.com | 1,697,667,475,000 |
e18f6bc59cda068d1433b95d78a3a88419096bf7 | #tf-data internal-only change
PiperOrigin-RevId: 574618703 | Jim Lin | jimlintw@google.com | 1,697,667,775,000 |
db6739c28ad6360bfad1da5235991f5bdde723fe | Add `compatible_with` to `xnnpack_plugin` build rules.
PiperOrigin-RevId: 574626273 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,669,090,000 |
6b98b1dadc053cf355cc7b4071b8249e2a5b6fbd | Rollback of PR #6289
Rolling back as it was causing issues in prod
PiperOrigin-RevId: 574640435 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,671,555,000 |
492f6e956cacd99081102ad5cde069eb1792821c | Add HloSharding and ShardingParamSharding to CopyToDeviceWithSharding and PjRt-IFRT client.
PiperOrigin-RevId: 574646177 | Ionel Gog | icgog@google.com | 1,697,672,707,000 |
8eb75a591ae5a57728dc378378d80fd742f9ec69 | Redirect more references from the framework target to the new single-source-file targets.
PiperOrigin-RevId: 574655387 | Juan Martinez Castellanos | juanantoniomc@google.com | 1,697,674,536,000 |
37381cb317b5281d4d301e48d85360bd8959b8b1 | Redirect more references from the framework target to the new single-source-file targets.
PiperOrigin-RevId: 574661446 | Juan Martinez Castellanos | juanantoniomc@google.com | 1,697,675,883,000 |
31c2e673cf13ce46d811aed894adec06e39e0ca3 | Redirect more references from the framework target to the new single-source-file targets.
PiperOrigin-RevId: 574662103 | Juan Martinez Castellanos | juanantoniomc@google.com | 1,697,676,023,000 |
d3a1c4532e14aafde58dc86cac99ae94734bbd10 | Integrate LLVM at llvm/llvm-project@b42738805acf
Updates LLVM usage to match
[b42738805acf](https://github.com/llvm/llvm-project/commit/b42738805acf)
PiperOrigin-RevId: 574668765 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,677,483,000 |
4a60c95b13b21d635f619a03e6e7f064a5885c48 | Moving simplification of compare instructions under a boolean var.
PiperOrigin-RevId: 574676431 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,679,585,000 |
47bfd7876d4d16bf367937e61dc5999abd7fab20 | Internal code changes
PiperOrigin-RevId: 574677955 | Yu Feng | feyu@google.com | 1,697,680,008,000 |
f8e05f7c465d4e2e8470dab6e5c3841620fbedb7 | Introduced an Op to compute the deduplicated data size.
PiperOrigin-RevId: 574679418 | Dateng Lin | datenglin@google.com | 1,697,680,407,000 |
d6e336324c9e3d1ad4a6e9b19489a1a436a78a11 | Update ops-related pbtxt files.
PiperOrigin-RevId: 574685536 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,682,118,000 |
393e7e613b6cc840fdf473613ee8c3d0f5420bf9 | Go: Update generated wrapper functions for TensorFlow ops.
PiperOrigin-RevId: 574690266 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,683,580,000 |
8c25f8252175c6a4f9614797e953fcbfb93c3aed | Add AotCompileXlaFunctionsInMetaGraphDef to compile all the exported xla functions given the function signature.
PiperOrigin-RevId: 574699962 | Shixin Li | shixinli@google.com | 1,697,685,712,000 |
9e4a43cb9fda3dd3668bd508fb36e20aaf30cf45 | Internal change only
PiperOrigin-RevId: 574708331 | Ziyin Huang | ziyinh@google.com | 1,697,688,178,000 |
3fbde8e534e226e359b037008cb42b2578722cf9 | [xla:gpu] Support creating ThunkInfo from HLO #6224
PiperOrigin-RevId: 574728456 | Anlun Xu | anlunx@google.com | 1,697,693,807,000 |
2ff1abf01b7a1d609a7b735a36b2db52144d7d79 | [stream_executor] NFC: Move stream library into a separate build target
PiperOrigin-RevId: 574756391 | Eugene Zhulenev | ezhulenev@google.com | 1,697,700,237,000 |
822079e0d048a45967c36f0a560363f56e44098f | [XLA] Get rid of NaNs when the base equals infinity for complex-type exponentiation.
The following cutoffs are implemented in this change:
1. inf^(a + 0i) = inf, if a > 0.
2. inf^(a + 0i) = 0, if a < 0.
PiperOrigin-RevId: 574770628 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,703,611,000 |
30047b64bda3e9aee09798ab20e67c2c81ff2406 | Update GraphDef version to 1654.
PiperOrigin-RevId: 574781106 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,706,131,000 |
c2052d56f076bb6e73fe90b88b2dd104bbf7b417 | compat: Update forward compatibility horizon to 2023-10-19
PiperOrigin-RevId: 574781116 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,706,133,000 |
016f0c7736720ad410f391d0e80d67e056da135c | [stream_executor] Add support for launching command buffers within command buffers
PiperOrigin-RevId: 574786989 | Son Tuan Vu | vuson@google.com | 1,697,707,434,000 |
c4737d32b961c422f5908b5957696aae1094ff6f | Integrate LLVM at llvm/llvm-project@fd1a0b0ee4d8
Updates LLVM usage to match
[fd1a0b0ee4d8](https://github.com/llvm/llvm-project/commit/fd1a0b0ee4d8)
PiperOrigin-RevId: 574819686 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,715,268,000 |
169d70564d96d94d572fba7d87193c6936fcc94e | [XLA] [PjRT] Unify AOT mode and normal compilation for GPU
Current setup leads to excessive code duplication, as AOT relies on a separate
codepath entirely. I'd like to get AOT to a state where it seamlessly interops
with all other XLA tools, and reuses all the codepaths.
This CL instead of using a separate entry point, adds TargetConfig to
CompileOptions. If TargetConfig is present, the current GPU is not queried for
the compilation, and instead, the TargetConfig is used, emulating
cross-compiling for a given GPU.
Two recently added tests in PjRT were removed as they weren't correct:
StreamExecutorClient was serializing to a different proto than a parent class.
PiperOrigin-RevId: 574822280 | George Karpenkov | cheshire@google.com | 1,697,715,948,000 |
d276779a4323b504d6f9b51da549ae21b14f3efd | [XLA] [NFC] Do not lose stack traces attached to Status when converting to Python
PiperOrigin-RevId: 574823833 | George Karpenkov | cheshire@google.com | 1,697,716,351,000 |
8f13e41725df61b3790043229f0fc5359986e396 | Fix the documentation of DynamicStitch.
There is a `+ 1` missing in the merged shape.
PiperOrigin-RevId: 574827146 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,717,132,000 |
42ae35d6d79528570f2963e8fd7effe38e14a676 | Go: Update generated wrapper functions for TensorFlow ops.
PiperOrigin-RevId: 574836385 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,719,578,000 |
1684b747cd0f3e815385ed2c88247dc472a75992 | Pull out clustering and runtime sections into a submodule for V1 Compat Bridge. Done in preparation to call Bridge Clustering -> Runtime -> Export.
Required as current logic depends on pulling out a submodule and only running clustering / runtime lowering on the submodule.
PiperOrigin-RevId: 574843152 | Mason Chang | masonchang@google.com | 1,697,721,185,000 |
b65f6b23952438216627c50f8ca4aa86dd653c65 | Add ReplaceStablehloOpsInMainFunctionWithXlaCallModuleOps to StableHLO Quantizer
PiperOrigin-RevId: 574846622 | Jiyoun (Jen) Ha | jiyounha@google.com | 1,697,722,081,000 |
ae709a6f0780160d9451fa1b81cc25aef2a5d2aa | PR #6377: Fix the check failure where while loop is not rooted on a tuple
Imported from GitHub PR https://github.com/openxla/xla/pull/6377
Some while loops might not be rooted on a tuple.
This removes the check that asserts in such situations.
Addressed issue: https://github.com/openxla/xla/issues/6353
Copybara import of the project:
--
d454d73618f606360f2eff896093045896419c46 by TJ <tjx@nvidia.com>:
Fix the check failure where while loop is not rooted on a tuple
--
c5fccd1f594cf2bf1b54daa60f05e71eecea58e8 by TJ <tjx@nvidia.com>:
removed redundant code in tests
Merging this change closes #6377
PiperOrigin-RevId: 574847170 | TJ Xu | tjx@nvidia.com | 1,697,722,218,000 |
9a723656eb4d5c0f3d0c2e1d69e46e3720385c54 | Redirect references from the lib target to the new single-source-file targets.
PiperOrigin-RevId: 574850420 | Juan Martinez Castellanos | juanantoniomc@google.com | 1,697,722,923,000 |
9c1b568f0c58cea81dcfcda172b74d8099b55a83 | Integrate LLVM at llvm/llvm-project@c122b9727a27
Updates LLVM usage to match
[c122b9727a27](https://github.com/llvm/llvm-project/commit/c122b9727a27)
PiperOrigin-RevId: 574859172 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,724,838,000 |
fc09820fddd2077cfd03238130241410da706696 | Redirect references from the lib target to the new single-source-file targets.
PiperOrigin-RevId: 574892978 | Juan Martinez Castellanos | juanantoniomc@google.com | 1,697,731,815,000 |
37cba98ff8f9b7ce09246387f5a79e31e22a55df | Move CPU/GPU Bridge to Clustering, Runtime, Export
PiperOrigin-RevId: 574893474 | Mason Chang | masonchang@google.com | 1,697,731,905,000 |
ca1b62783b11846943c39c81fa55543d83e6519a | Remove extra source files from client target, since all references to the other source files have been migrated.
PiperOrigin-RevId: 574905911 | Juan Martinez Castellanos | juanantoniomc@google.com | 1,697,734,219,000 |
5a56eb116548befbda40a496085e97dc8d5fc7e4 | Remove old training_ops target and redirect references to the new target.
PiperOrigin-RevId: 574906325 | Juan Martinez Castellanos | juanantoniomc@google.com | 1,697,734,298,000 |
6d9e3b2b1c1bbe3076eec0d7b166addd786fcfed | Pass in enable_fallback from caller to re-enable logging
PiperOrigin-RevId: 574915149 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,735,577,000 |
b995cfac5aad28d8d1199d7047d3b2aba8dd9708 | Internal change only. This code is not being used yet.
PiperOrigin-RevId: 574933526 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,738,263,000 |
c92ebf84d15b0ec69002739840fea7eaa049dcea | [xla:gpu] Add a test for pipelined Send and Recv chains.
Also remove the control predecessors from a test input, as the GPU scheduler now
calls the P2PSchedulePreparation pass to insert such control predecessors.
PiperOrigin-RevId: 574933777 | Bixia Zheng | bixia@google.com | 1,697,738,294,000 |
b512c66983e0ce9301d056e78e952bd0324703a6 | Adjust wheel verification scripts to look for Keras v3.
PiperOrigin-RevId: 574937596 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,738,839,000 |
426e0c0af3a6beea3df6f4d77936d4acd0edb623 | Improve convert_tf_quant_to_mhlo_int_test
This is aiming at easier numerical verification of quant->int lowering passes.
This CL implements a function to evaluate TF function return value using constant-folding utility in TF quantizer. Under the hood, it uses TF kernels to numerically evaluate TF ops.
Then we execute the lowered int graph using PJRT client and verify the results against TF kernel results. This makes creating test cases easier because we don't need to manually calculate the expected results.
There appears to be some numerical differences for dot. Will investigate further.
PiperOrigin-RevId: 574938998 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,739,068,000 |
00ad8bd78fb20b2e0103f01a00546e793b10d0d2 | [stream_executor] NFC: Do not leak internal stream executor header
PiperOrigin-RevId: 574939070 | Eugene Zhulenev | ezhulenev@google.com | 1,697,739,081,000 |
b3eece95e62ac2c3eee1d7545ac79e903b6fcc19 | Add indirection point `xla_cub_deps`
PiperOrigin-RevId: 574943976 | David Dunleavy | ddunleavy@google.com | 1,697,739,826,000 |
46db6483a0ea3d6662f9a6486582b3d13760c19e | [stream_executor] NFC: Move stream library into a separate build target
PiperOrigin-RevId: 574951363 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,741,042,000 |
654f9e903846e64cc7afc0e2df48b6f89fb9fdc3 | Internal change only.
PiperOrigin-RevId: 574951488 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,741,068,000 |
89081573e5e80c45c9ab682a0422a96417a76a22 | Redirect more references away from the `lib/io:lib` target and onto the new single-source-file targets.
PiperOrigin-RevId: 574954173 | Juan Martinez Castellanos | juanantoniomc@google.com | 1,697,741,514,000 |
835734132fdd8c4be69acca7f70ff4d9b935badc | [mlir][sparse] Use new syntax in stablehlo tests
Example:
lvlTypes = ["compressed"] to
map = (d0) -> (d0 : compressed)
PiperOrigin-RevId: 574963314 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,743,023,000 |
900b6c7f31b248ee0f9933bd93276499c08161fd | Always load Eigen from @eigen_archive in OSS.
With the vendoring of XLA, bazel gets confused when trying to load Eigen
in some cases, since dependencies were loading Eigen from two different places.
This was preventing us from using certain Eigen modules in XLA.
PiperOrigin-RevId: 574975911 | Antonio Sanchez | cantonios@google.com | 1,697,745,581,000 |
e1ec7973da65e9fb83e906e1d3d67cd10f00c387 | Break Python version into its own env for new ci directory
PiperOrigin-RevId: 574978317 | Michael Hudgins | michaelhudgins@google.com | 1,697,746,083,000 |
287fe6bea57c87e1e0e0f36cbaeb0cfb599e4f1f | Migrate callers of v2 clustering to explicit calls to clustering, runtime, export
PiperOrigin-RevId: 574979231 | Mason Chang | masonchang@google.com | 1,697,746,277,000 |
7438c8cd4084c02332c3dc96c477bc7f2399199a | Bumping Triton version
PiperOrigin-RevId: 574986341 | Mohammed Anany | manany@google.com | 1,697,747,754,000 |
bdf49d98a9e10451c1e4ad3940dcec2f5c4055ba | Separate out lower cluster to runtime ops in preparation to move it to
mlir_bridge_pass
PiperOrigin-RevId: 575004818 | Mason Chang | masonchang@google.com | 1,697,751,279,000 |
1db75cd948996b2ce2e56f75fcbdda1eaa6aa6e0 | Fix use-after-free in tfrt_stub::SavedModel
PiperOrigin-RevId: 575005509 | Kuangyuan Chen | chky@google.com | 1,697,751,422,000 |
6e79d06c8dd9d8e9f0e4edc023555ea2489ad24c | Integrate LLVM at llvm/llvm-project@0446c589afd6
Updates LLVM usage to match
[0446c589afd6](https://github.com/llvm/llvm-project/commit/0446c589afd6)
PiperOrigin-RevId: 575008572 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,752,096,000 |
84fe37c81e3bbee3f6c50c41d9f77f8351bb1407 | Integrate StableHLO at openxla/stablehlo@a621c6df
PiperOrigin-RevId: 575010545 | Eugene Burmako | burmako@google.com | 1,697,752,440,000 |
9a93cdff20c39652803f938fa4f582257f6efda1 | Handle the case where a bitcast is used as a reshape.
PiperOrigin-RevId: 575010734 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,752,473,000 |
93c275d84b965ff3f4ca6fbda1ffdcd2c8436d5f | Optimize op_profile_builder's GetComputationSize eliminating dead recursion calls.
PiperOrigin-RevId: 575015789 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,753,495,000 |
95d1760d2a7ee3e3bd7a51498ac84942d710a1e2 | Fix matmul CPU f16 precision issues, enable fused matmul.
Matmul/fused matmul for f16 on CPU should use f32 compute precision
to avoid excessive precision loss. This should also improve performance
on CPU, since the Eigen kernels will otherwise keep casting f16->f32->f16
internally for every intermediate operation.
Also enabled fused bias-add with Tanh and Sigmoid on CPU.
This fixes the currently failing test `matmul_op_test_cpu` when CUDA is enabled.
PiperOrigin-RevId: 575016644 | Antonio Sanchez | cantonios@google.com | 1,697,753,682,000 |
e544ac5837d62e7c8bcdaaad8abba1c5d37e6dc1 | [xla:gpu] Add support for building KernelArguments from HLO #6224
- KernelArgument will not contain the MLIR value if emitting from HLO.
- Properly build CompileModuleResults.
- CholeskyTest passes after this change
PiperOrigin-RevId: 575020293 | Anlun Xu | anlunx@google.com | 1,697,754,510,000 |
c7c34af14edfc2f08fb92b54d142b4be26657fb8 | Remove old training_ops target and redirect references to the new target.
PiperOrigin-RevId: 575020818 | Juan Martinez Castellanos | juanantoniomc@google.com | 1,697,754,628,000 |
2df158f312f28a90b0ff0d2d26bebb2030c6ffbe | PR #6427: Fix configure.py to more directly specify the find_cuda_config.py instead of using `glob()`
Imported from GitHub PR https://github.com/openxla/xla/pull/6427
We know the exact location and don't need to glob() here. The issue with glob is that it can be slow, in particular when re-running the configure script after building with Bazel: the glob will process tens of GB of build directories!
Copybara import of the project:
--
d0ccab91c465871c6c87f9bcf1746cbd5da47967 by Mehdi Amini <mamini@nvidia.com>:
Fix configure.py to more directly specify the find_cuda_config.py instead of using `glob()`
We know the exact location and don't need to glob() here.
The issue with glob is that it can be slow, in particular when re-running the
configure script after building with Bazel: the glob will process tens of GB
of build directories!
Merging this change closes #6427
PiperOrigin-RevId: 575020900 | Mehdi Amini | mamini@nvidia.com | 1,697,754,646,000 |
ff87b5c143db4ab36bfb2c7c7f6135b9b7829bfe | Merge pull request #55427 from btlorch:master
PiperOrigin-RevId: 575021229 | TensorFlower Gardener | gardener@tensorflow.org | 1,697,755,868,000 |
9b03a7c96d643c474082babe9fbd262db16e64c6 | Disable AVX512 GEMM kernels again.
We are receiving reports of out-of-bounds memory accesses within the kernel.
PiperOrigin-RevId: 575030259 | Antonio Sanchez | cantonios@google.com | 1,697,756,738,000 |
c3918454567203f64c9522b806e75db23cbe81cf | Remove unused deps for '//third_party/tensorflow/lite:allocation' target.
PiperOrigin-RevId: 575032090 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,757,168,000 |
7893fc2144f1c1ea804b73fe42cae550e6d3e732 | Add CPU/GPU support for int4 conversions in XLA.
int4 is currently only supported for convert instructions and a few instructions that do not produce any GPU code, such as tuple.
int4 is not yet supported in the Triton emitter, so using int4 will currently be slow on GPUs (and it is likely slow on CPUs as well).
int4 is packed on device but not on host. To represent int4 being packed, a pass, SubByteNormalization, sets element_size_in_bits() on every int4 layout to 4. Care is taken to ensure subsequent passes do not create int4 layouts with element_size_in_bits unset.
PiperOrigin-RevId: 575041571 | Reed Wanderman-Milne | reedwm@google.com | 1,697,759,511,000 |
d1293aed1891739f986bae7c90a158fa9d2f2e17 | #tf-data-service Add metrics for tf.data distributed snapshots.
PiperOrigin-RevId: 575041688 | Yang Chen | yangchen@google.com | 1,697,759,537,000 |
b945983cd7eaacda4e98d88c6162a4533db7d690 | Change default hermetic python version to 3.11. Change HERMETIC_PYTHON_VERSION to TF_PYTHON_VERSION for less confusion.
PiperOrigin-RevId: 575046805 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,760,843,000 |
969cf4477621feee49ac90be6e387b819179208a | Adds EdgeTPU MLIR to xProf
PiperOrigin-RevId: 575050063 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,761,659,000 |
f6c12d8155a02513535835d9d0aa2ce82f4c8dc8 | Merge pull request #60227 from PaDarochek:fix-direct-session
PiperOrigin-RevId: 575052163 | TensorFlower Gardener | gardener@tensorflow.org | 1,697,762,535,000 |
b4b13d6edf44eb5cd35141daa48f4fc450372406 | Remove an unused function parameter.
PiperOrigin-RevId: 575060563 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,764,639,000 |
8cc5a4b24cfb205a04b6d8de9674790447c15a2c | Remove redundant inline comment marker
PiperOrigin-RevId: 575099754 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,778,339,000 |
f0006212d2e213900b103312a8266c3e3a88e105 | [XLA] Add functionality to filter operations that are allowed to be pipelined with collective and add support for reduces that remove trivial dimensions.
PiperOrigin-RevId: 575101000 | Marcello Maggioni | maggioni@google.com | 1,697,778,760,000 |
93d11498ce1e914502c413f187110d26426585ac | Merge pull request #61737 from Tai78641:pr_legalize_erf
PiperOrigin-RevId: 575105888 | TensorFlower Gardener | gardener@tensorflow.org | 1,697,780,755,000 |
3c20d045e1d8db2b508070c32ceabb70b39597a7 | [XLA:GPU] Only retrieve stream capturing status before launching kernel when VLOG is enabled.
PiperOrigin-RevId: 575106504 | Chris Jones | cjfj@google.com | 1,697,780,664,000 |
8530d41fe9041c9204da79666eaeded329d3063d | [stream_executor] NFC: Clean up and document StreamExecutor BUILD file
PiperOrigin-RevId: 575107026 | Eugene Zhulenev | ezhulenev@google.com | 1,697,780,829,000 |