| hash | msg | author | email | date |
|---|---|---|---|---|
04ca99cc59bd5754d43718fe7ea018df0a5b9073 | Simplify Device API
PiperOrigin-RevId: 559550786 | Haibo Huang | hhb@google.com | 1,692,827,574,000 |
c69f7eb64f1b284e1efaede8d2d16d7c7aa68ddd | Fix memory leak in Rendezvous API
PiperOrigin-RevId: 559552415 | Haibo Huang | hhb@google.com | 1,692,827,875,000 |
9dbbe006419d50174b3711c44e1df578a7714dff | Optimize BroadcastAdd6DSlow in TFLite reference kernel.
It removes redundant dimensions, compresses compressible dimensions, and handles broadcasting.
PiperOrigin-RevId: 559554841 | Jared Junyoung Lim | jaredlim@google.com | 1,692,828,380,000 |
84fa457347d92305c9a7bacb1da6a8ed0944741f | Unit test for Rendezvous API
PiperOrigin-RevId: 559560973 | Haibo Huang | hhb@google.com | 1,692,829,725,000 |
2e927ced12de17eb1a6564735f35ee76fecfba94 | [tflite-gpu] Add helper functions so that GPU Async can be zero-copy.
PiperOrigin-RevId: 559561240 | Grant Jensen | grantjensen@google.com | 1,692,829,777,000 |
e1fa66c2e5f84179dd6ec5d1a2075f573ec42a72 | lite: add option to disable serialization of stablehlo ops
PiperOrigin-RevId: 559573264 | Zichuan Wei | zichuanwei@google.com | 1,692,832,428,000 |
ec0540a5a70ef22d3db915d198db22e296879f39 | output incorrectly taken from inputs
PiperOrigin-RevId: 559579114 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,833,859,000 |
1706f7613c88a4ff21956d5796306501b6ffcc12 | Rendezvous API fortify: nullify deleted variables
PiperOrigin-RevId: 559584664 | Haibo Huang | hhb@google.com | 1,692,835,219,000 |
26b5341b0d8e5f5382854bef7eecf2e4d8321249 | Add support to discover multiple GPUs.
PiperOrigin-RevId: 559585088 | Changhui Lin | changhuilin@google.com | 1,692,835,316,000 |
c3af98bca9e153d6a68473b8be94ec58302b6424 | Fix startup config options
startup options cannot be affected by --config=... flags, as they are initialized before the configs are evaluated.
PiperOrigin-RevId: 559586164 | Austin Anderson | angerson@google.com | 1,692,835,575,000 |
606a03ba1226b18946640536f2b251cc7920d299 | Add additional dialects to GetHloModules
The func dialect was found missing; I added some others that commonly appear in ingress too.
PiperOrigin-RevId: 559587193 | Jacques Pienaar | jpienaar@google.com | 1,692,835,816,000 |
21f553c44158bfb044b05278e09208f9c1417253 | Add `experimental_write_callbacks` to `CheckpointOptions`.
PiperOrigin-RevId: 559594012 | Yu Feng | feyu@google.com | 1,692,837,696,000 |
e46b690df18637c58771906059010939df958937 | lite: flatbuffer_import: convert stablehlo attribute to tensor type instead of vector type
PiperOrigin-RevId: 559606029 | Zichuan Wei | zichuanwei@google.com | 1,692,841,163,000 |
5d18993f81c9990a35dce7c34ad233ad14c29591 | Release the `TensorHandle` lock when waiting for `OpIdAndOutputNum`.
PiperOrigin-RevId: 559621835 | Russell Power | power@google.com | 1,692,846,984,000 |
f2f703aa361d41c65364443995cd964d8ba52a3a | compat: Update forward compatibility horizon to 2023-08-24
PiperOrigin-RevId: 559685208 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,867,852,000 |
36d8091284cc54c50ecd056ab6683b4a1c41b76b | Update GraphDef version to 1598.
PiperOrigin-RevId: 559685225 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,867,857,000 |
f8916e252e546a773ddae6bd2206b8a9516206f4 | [XLA:GPU] Triton GEMM: fix handling of degenerate fragments of split tensor dimensions.
PiperOrigin-RevId: 559704209 | Ilia Sergachev | sergachev@google.com | 1,692,873,844,000 |
289bf39526d33f8fdf9cc0970cbcffd837268fb1 | Add missing dependent dialect to rank specialization.
PiperOrigin-RevId: 559711590 | Stephan Herhut | herhut@google.com | 1,692,876,150,000 |
928efe9b36c480af9fd28c9d4af38fb455fdcd68 | Call TfLiteInitializeShimsForTest() in SetUpTestSuite(), rather than in SetUp().
This is needed because some tests invoke TFLite API functions/methods
in the test fixture's constructor, before SetUp() has been called.
PiperOrigin-RevId: 559718230 | Fergus Henderson | fergus@google.com | 1,692,878,513,000 |
46e403cb7fd27d377e687a07437346caff683368 | Explain how new instructions are handled during dfs traversal.
PiperOrigin-RevId: 559719036 | Adrian Kuegel | akuegel@google.com | 1,692,878,741,000 |
25b4c9fd886f5f905d0460f65e555b64f5c69883 | Rename tf_to_kernel to hlo_to_kernel and remove last dependencies on tensorflow dialects.
PiperOrigin-RevId: 559720803 | Stephan Herhut | herhut@google.com | 1,692,879,360,000 |
9958a34b74839484cbc2bf79f28c89aaa8dc7dbe | Add type annotations.
PiperOrigin-RevId: 559723269 | Shashank Viswanadha | shashankvi@google.com | 1,692,880,177,000 |
75cb7394f3a6e5c52065344719179e6d4a067d46 | [XLA:GPU] Check for hlo_module_ not being nullptr
PiperOrigin-RevId: 559730555 | Tamás Danyluk | tdanyluk@google.com | 1,692,882,400,000 |
9048a15d439b859f1656710f8cf866e4537af5d8 | Do not choose bitcasts as fusion roots.
This just adds indexing overhead without any benefit, and may require extra buffers.
Bitcasts outside of fusions are no-ops.
We still allow fusing a bitcast producer into an already existing fusion.
Otherwise they would act as fusion blockers.
PiperOrigin-RevId: 559732964 | Adrian Kuegel | akuegel@google.com | 1,692,883,146,000 |
7fb51777aa28f70f6d9f3bf5f6e919778efbac2f | Allow cuBLAS LT FP8 gemms to run on Ada GPUs.
cuBLAS supports running FP8 gemms on Ada GPUs (compute capability 8.9) but XLA previously only ran them on Hopper GPUs (compute capability 9.0).
cuDNN still only supports FP8 on Hopper, so we cannot run FP8 convolutions on Ada yet.
PiperOrigin-RevId: 559733091 | Reed Wanderman-Milne | reedwm@google.com | 1,692,883,186,000 |
8bb9a24e5a1f8d41dfe15480fdd5d43761a1ffd6 | [XLA:GPU] Enable single wave autotuning by default
This change should be relatively safe, but if you encounter any problem, please check with --xla_gpu_single_wave_autotuning=false.
This change speeds up the autotuning compilation (not the overall compilation time) ~1-2x, but may cause some increase in memory usage.
... | Tamás Danyluk | tdanyluk@google.com | 1,692,883,252,000 |
ce2c2951a923b72bb52d78f398260885d664ff7d | [XLA:GPU] Avoid leaking mlir::Operations in TritonWrapper
PiperOrigin-RevId: 559740141 | Tamás Danyluk | tdanyluk@google.com | 1,692,885,336,000 |
405dc5d4bd058d18d73fed7a2dac5f8ebd1477ae | [NFC] Explicitly set dialects to `usePropertiesForAttributes=0` in preparation for https://reviews.llvm.org/D158581 (flipping the default to `1`) to land.
This allows us to switch dialects to use properties one by one.
PiperOrigin-RevId: 559751065 | Christian Sigg | csigg@google.com | 1,692,888,413,000 |
37a598ad832d6b438eaeb05f9f9cadceefb9335c | Update TFRT dependency to use revision
http://github.com/tensorflow/runtime/commit/9ca593e8dc03e4d11582fa20a2ff0dc5b5c70e4d.
PiperOrigin-RevId: 559761231 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,890,799,000 |
cce02236eae10c87086e5a54a2006fd271260fe5 | Add variable quantization option test for float16 in lite conversion
PiperOrigin-RevId: 559761410 | Doyoung Gwak | doyounggwak@google.com | 1,692,890,846,000 |
6b01491a8726929d3f9060709f5f153168feeb52 | [XLA:GPU] Load binary if ptx is missing in ResolveConstantGlobals
In a future CL we will leave the ptx empty if the cubin is available,
and this CL is needed to make that work.
Also update related comments.
PiperOrigin-RevId: 559767896 | Tamás Danyluk | tdanyluk@google.com | 1,692,892,385,000 |
11fc64b1ef9b37692ff9ae58799f7a7fe4ff4365 | [XLA:GPU][NFC] Refactor Triton fusion analysis.
Prepare the analysis class to process not only GEMM fusions: rename the analysis class and several functions, move several functions into the fusion context, update the dimension order construction interface, update comments, change some pointers to references.
PiperOri... | Ilia Sergachev | sergachev@google.com | 1,692,893,539,000 |
aa5673adf05f1508cadd961f0cb28964bf5934d2 | [XLA] Relax the restriction on passing preferred element types
PiperOrigin-RevId: 559777378 | Amit Sabne | asabne@google.com | 1,692,894,435,000 |
c71cae7b140bfea31f106f9fd2f583af047c8d8a | Ensure that outfeed ops have their shardings preserved as they go through auto-sharding.
PiperOrigin-RevId: 559782721 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,895,561,000 |
3f1ca93b581fa4c1bc1f9365349c1a484eb4a79c | [XLA] Delete `_xla_host_transfer_original_type` and `_xla_host_transfer_is_lower_bits`.
PiperOrigin-RevId: 559786239 | Ce Zheng | zce@google.com | 1,692,896,316,000 |
9a16aeac0dc7503681345633d0fa60c84704ce3d | Add streamz metric to count gpu compiled programs.
PiperOrigin-RevId: 559790245 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,897,017,000 |
f26dc498d89c503985433953fed5bba2ff55619e | Use similarly signed types for comparison
In the downstream TFLM project, comparing int and size_t types results in sign-compare compiler warnings.
PiperOrigin-RevId: 559793295 | RJ Ascani | rjascani@google.com | 1,692,897,584,000 |
8c2b83f5593a618a48402af6a04629980a127fb5 | Remove TPU specific init in NextPluggableDevice.
The only remaining framework init is in tensorflow/core/tpu/tpu_api_dlsym_initializer.cc which handles both SE and TFRT cases. This change completes the TPU init simplification related to using PJRT plugin.
Also fixes third_party/tensorflow/core/tpu/tpu_model_server_in... | Jieying Luo | jieying@google.com | 1,692,897,876,000 |
174d3230094e35e80eebdb03c23e6ae0f5730d6f | [NFC] Add comment to indicate default value of XLA_PYTHON_CLIENT_MEM_FRACTION
PiperOrigin-RevId: 559795181 | Rahul Joshi | jurahul@google.com | 1,692,897,927,000 |
68d398dc16ed901b142ccd7f57b34e0324c5e6a0 | Support degenerate case of mul operation.
PiperOrigin-RevId: 559796643 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,898,216,000 |
0b8d45fdd073e29511a9d6ba20bd6bbe9aa2bf99 | PR #5152: [ROCM] Update failing xla/service/gpu tests
Imported from GitHub PR https://github.com/openxla/xla/pull/5152
Copybara import of the project:
--
c8e616402d84ca56904fa2338e10d13ae65b7324 by Dragan Mladjenovic <Dragan.Mladjenovic@amd.com>:
[ROCM] Update failing xla/service/gpu tests
Merging this change clos... | Dragan Mladjenovic | Dragan.Mladjenovic@amd.com | 1,692,898,246,000 |
62479f9b3e6c54f4dbcf99e4e026450606d7d63a | Integrate LLVM at llvm/llvm-project@382b97554dd3
Updates LLVM usage to match
[382b97554dd3](https://github.com/llvm/llvm-project/commit/382b97554dd3)
PiperOrigin-RevId: 559804563 | Krasimir Georgiev | krasimir@google.com | 1,692,899,625,000 |
4b34b37da2b7d9852f6f5c096c4ad1516f87c2b5 | [XLA/Conditional code motion] Remove invalid get-tuple-element operations when moving operands inside conditional branches.
PiperOrigin-RevId: 559810879 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,900,756,000 |
d1a5c9af980cdfb4da1abb5abf52590103f5b80a | Always return error status if serialization to FlatBuffer failed.
An error is not emitted to the DiagnosticHandler in all cases in which
serialization can fail. This change bases failure on the result of the
`MlirToFlatBufferTranslateFunction` method call rather than the value of the
status handler's status.
PiperOri... | Arian Arfaian | aarfaian@google.com | 1,692,900,932,000 |
22f749bf0bd07f804b98eaf029ccbbbbbb22b9c9 | Update TFRT dependency to use revision
http://github.com/tensorflow/runtime/commit/812b5fd6c28cd43131344174faee6adee20b1619.
PiperOrigin-RevId: 559816076 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,901,635,000 |
934d5040c94cdccd165503e02f62682a89a4a10c | Add micro LIBTPU_EXCLUDE_C_API_IMPL to remove API implementations from libtpu
PiperOrigin-RevId: 559818210 | Haibo Huang | hhb@google.com | 1,692,902,017,000 |
28649cd109f6cec8a0e02293d8b9eeb261c3335d | Merge pull request #61332 from Intel-tensorflow:kanvi/New_Layernorm_pattern
PiperOrigin-RevId: 559818560 | TensorFlower Gardener | gardener@tensorflow.org | 1,692,902,673,000 |
b726a7ce9954459c811b2a97d79423010b58101c | Update FindTpuIdx for the DCNMessage.
PiperOrigin-RevId: 559820271 | Clive Verghese | cliveverghese@google.com | 1,692,902,398,000 |
c0a59d7d11f967e1adac45a71add4f281c3423f4 | [xla:gpu] Fix GetLatencyBetween.
We didn't return the intended latency for Send/SendDone and Recv/RecvDone, due
to a problem in using IsAsyncPair and a problem in checking opcode for the
instruction pair.
PiperOrigin-RevId: 559825578 | Bixia Zheng | bixia@google.com | 1,692,903,445,000 |
616c71eeddf09c706ba1e289009770b16b8aa81d | Make remaining targets under tensorflow/python/ have strict dependencies.
PiperOrigin-RevId: 559852588 | Juan Martinez Castellanos | juanantoniomc@google.com | 1,692,909,057,000 |
9ab898c000cffa664829b31f1e1c78a57b467887 | Add headers BUILD targets for tensorflow C API constructs
Context: Will remove `tf_tensor` impl from libtpu, which requires some cleanup of headers and separation of interface and implementation
PiperOrigin-RevId: 559853472 | David Silverstone | silverstone@google.com | 1,692,909,251,000 |
a94b7e777a2c2621aa80e7e15ae9e45c6bf2b5cd | put rocm config back in bazelrc | weihanmines | wei.han3@amd.com | 1,691,781,285,000 |
39afd5255c8d4d0a89d46a8af20e98ffb571f09c | [XLA:Python] Remove --enable_tpu flag from the Python client.
This is no longer needed now TPU support is shipped as a PJRT plugin.
PiperOrigin-RevId: 559855269 | Peter Hawkins | phawkins@google.com | 1,692,909,602,000 |
c1244dd1cf9ace7f8b473f0e78504cf4580089a7 | DTensor: Clean up C++ headers, includes and build_targets
Context: Will remove `tf_tensor` impl from libtpu, which requires some cleanup of headers and separation of interface and implementation
PiperOrigin-RevId: 559860084 | David Silverstone | silverstone@google.com | 1,692,910,556,000 |
6182de75f9b2b765c986fcf44e2bd8f4d31b7433 | Add the op stats pass in the TFLite converter for printing stats of ops that are not converted to TFL ops.
PiperOrigin-RevId: 559861123 | Yishuang Pang | ypang@google.com | 1,692,910,768,000 |
331276a552d40b9c15acaf4d7203d9eca20ae63d | Remove never called StreamExecutor::PlatformDeviceCount, ::SupportsDnn, ::SupportsBlas, and ::SupportsFft methods.
PiperOrigin-RevId: 559869460 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,912,306,000 |
e5b257fee18494348d43b3b4318473ba29f9d9ac | Remove unused MultiPlatformManager methods.
PiperOrigin-RevId: 559885166 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,915,447,000 |
f69bcdec0ca15c2ee85c92c5fcf5d848576f34c1 | Eliminate unused Platform::ForceExecutorShutdown, ::GetPeerAccessMap, and EnablePeerAccess.
PiperOrigin-RevId: 559885414 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,915,502,000 |
935ce2ea1c3c848d213f04c9ff68ea4c51e02f89 | PR #5103: Force the cublasCreate call before we start the graph.
Imported from GitHub PR https://github.com/openxla/xla/pull/5103
Calling cublasCreate() need locks and can sync the context. If we have kernels that are already running and are waiting for something, like a NCCL kernel, we end up with deadlock. [Section... | Frédéric Bastien | frederic.bastien@gmail.com | 1,692,915,939,000 |
487a0015dc4e9a3b4e2c4b66ed290bcdf207cc01 | Add support for transferring a tensor with PjRtTensorBuffer.
PiperOrigin-RevId: 559887956 | Changhui Lin | changhuilin@google.com | 1,692,916,043,000 |
de806069cfc73867d1e5cb6a9ad5e42fd696a66f | Integrate StableHLO at openxla/stablehlo@d9a17eb
PiperOrigin-RevId: 559892197 | Kevin Gleason | gleasonk@google.com | 1,692,916,973,000 |
ca6374646ea9e98c59994dc24dd88b57680c3059 | [TF][MLIR] Fix `RegionBranchOpInterface` method in `tf.WhileRegion` op
This commit fixes the following two minor bugs in the `getSuccessorRegions()` interface method of the `tf.WhileRegion` op:-
1. The `cond` region successor of both `body` and the op is incorrectly marked to have the arguments of `body` (instead of ... | Srishti Srivastava | srisrivastava@google.com | 1,692,917,640,000 |
79c43c3d54adaa759240e80f2758183c53547def | Update variable name to adhere to the style guide.
PiperOrigin-RevId: 559897422 | Juhyun Lee | impjdi@google.com | 1,692,918,107,000 |
5322fd40cd58cfa8c551e602fede7a3be19fff95 | [PJRT] Fix checking for output sharding
Output sharding for empty tuple needs to have one "replicated" element.
PiperOrigin-RevId: 559899447 | Marcello Maggioni | maggioni@google.com | 1,692,918,516,000 |
4a5a381c95918a3f2484a154016aaf518a58c720 | lite:stablehlo: add serialization for stablehlo ops: abs, and, cos, exp, floor, log, min, negate, or, power, remainder, rsqrt, select, sub & tanh
PiperOrigin-RevId: 559900601 | Zichuan Wei | zichuanwei@google.com | 1,692,918,797,000 |
669c19aee668ec5c6eda630bc1c1a0bdca8b3158 | [XLA:GPU] Parametrizes Triton Softmax tests from ir_emitter_triton. Currently, F32 and F16 are tested.
PiperOrigin-RevId: 559905690 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,919,966,000 |
f8763b7ea65f1757288fdbfa56553a4cc3cf70bc | Makes `memory_space` a pure virtual method in PjRtBuffer.
PiperOrigin-RevId: 559906991 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,920,255,000 |
6ee904dca19a5fd584b2cf8341769e54c03b5e8f | #tf-data Add log statements for symbolic checkpointing.
PiperOrigin-RevId: 559909678 | Wilsin Gosti | wilsin@google.com | 1,692,920,895,000 |
d9e5091f8e517d65ddcbec6bad00ab241bcaa153 | Add `_copy_trackable_to_cpu()`, `_serialize_to_tensor()`, and `_restore_from_tensor()` to `DelegatingTrackableMixin`.
PiperOrigin-RevId: 559918267 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,922,933,000 |
07eff3770b8414aa5814b5e4d9d2e7a6ac1d6b3d | Avoid clamping in Lowering UQ mhlo.add for i32 storage type in ConvertMHLOQuantToInt pass
For i32 storage type, clamping against int32_max/min is no-op.
PiperOrigin-RevId: 559922810 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,924,247,000 |
844e7e5f3c227f4bb32be67da5f36e919a91d11c | #tf-data-service Include snapshot path in the log message.
PiperOrigin-RevId: 559929314 | Yang Chen | yangchen@google.com | 1,692,925,917,000 |
3d50778c6d32129afd13abfdcbb7e8e118feb1d4 | Fix delegation of VarHandle nodes for unsupported datatype
VarHandle nodes don't have a type associated with it. We rely on a ReadVariable/AssignVariable to figure out the data type. When visiting VarHandle, we currently incorrectly assume we can always handle it, but it really depends on the data type, which we only ... | Zhi An Ng | zhin@google.com | 1,692,926,861,000 |
159f2b1d3cdb6c783e4d6d389ded63cc86baf62f | Allow i8->i32 convert op for zero point constants in `ComposeUniformQuantizedDotGeneralOpWithTwoQuantizedActivations`.
Currently the `mlir::odml::ComposeUniformQuantizedDotGeneralOpWithTwoQuantizedActivations` assumes the zero point values are constant folded into i32 values.
However, there are use cases where the zer... | Dan Suh | dansuh@google.com | 1,692,927,062,000 |
1e6e95965950f076d8c1880567055cfaa09796d5 | Add missing element-wise functions to xla operation semantics doc | Alexander Pivovarov | apivovarov@gmail.com | 1,692,933,627,000 |
c6ecfeac886e6c39193d422d577cb23b55f3530c | Integrate LLVM at llvm/llvm-project@1ff0bdb86dbf
Updates LLVM usage to match
[1ff0bdb86dbf](https://github.com/llvm/llvm-project/commit/1ff0bdb86dbf)
PiperOrigin-RevId: 559954302 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,934,045,000 |
a7f3934a67900720af3d3b15389551483bee50b8 | Better support for reversed sharding strategies (generated due to the presence of the kReverse HLO op). This implies handling some cases where we could not previously compute resharding costs.
PiperOrigin-RevId: 559960750 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,936,384,000 |
7d9fb81b8a605f736ddbb99a5397d8a69fb4f4e2 | Update TFRT dependency to use revision
http://github.com/tensorflow/runtime/commit/32c232c66f5224228d1544158c458a9e8368a5cd.
PiperOrigin-RevId: 559965494 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,938,003,000 |
4eb74b85d75cf823b8f6faf350e6c4e13a397d4d | Add a proper jax config for memories so that we can iteratively develop and enable it.
PiperOrigin-RevId: 559977015 | Yash Katariya | yashkatariya@google.com | 1,692,940,993,000 |
a4e65d7b453387b124c7a34aa39de1a9a1c8af44 | Internal Code Change
PiperOrigin-RevId: 559996011 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,946,256,000 |
6bd368dd8c509aef655ba6c5615bfc8e0cf75abc | Internal Code Change
PiperOrigin-RevId: 559999040 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,947,137,000 |
1290b847d80684c1f6a3f193d9ffdf8431659e35 | Merge pull request #61690 from apivovarov:fix_xla_op_sem
PiperOrigin-RevId: 560020394 | TensorFlower Gardener | gardener@tensorflow.org | 1,692,953,675,000 |
b79a11c222085ab958ab5e9ef41450d437bf0ff3 | Update GraphDef version to 1599.
PiperOrigin-RevId: 560023468 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,954,224,000 |
20108002d564107ec8613b057d053331f7834ae4 | compat: Update forward compatibility horizon to 2023-08-25
PiperOrigin-RevId: 560023473 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,954,225,000 |
9af1d271c240e22d20196cbc3a0c993450b35d4e | Update TFLite `SingleOpModel` to test ops that use the new `BuiltinOptions2` schema union.
PiperOrigin-RevId: 560030520 | Quentin Khan | qkhan@google.com | 1,692,956,441,000 |
bfe5732c0423c893a23eff90b35aa90027c41c38 | [XLA:GPU] Remove dependency of ProfileExecutable on Executable::module()
This will allow not storing the HLO module in the autotuning executables, thus saving memory.
PiperOrigin-RevId: 560036254 | Tamás Danyluk | tdanyluk@google.com | 1,692,958,242,000 |
62296aa33e61aad357d43489ce6df0d0846b9ded | [XLA:GPU] Unset some debug options for autotuning compilations
This would probably just waste memory.
PiperOrigin-RevId: 560047723 | Tamás Danyluk | tdanyluk@google.com | 1,692,961,688,000 |
cd0f81459f474b69c9200f2da66aaca704f5348b | [XLA:GPU] Autotuning: Also try convolution algorithms returned by heuristic_mode_a.
On A100, heuristic_mode_b currently does not return enough fast algorithms. So as a
workaround, also try the algorithms returned from heuristic_mode_a. This comes at a
cost though, as we will try more algorithms during autotuning.
Pip... | Adrian Kuegel | akuegel@google.com | 1,692,961,732,000 |
5eaf14034fefaef4258f55bd87be609b32e11093 | PR #5074: XLA runtime support for fused attention
Imported from GitHub PR https://github.com/openxla/xla/pull/5074
This PR add XLA Runtime support for cuDNN runtime based Fused attention feature in XLA.
Copybara import of the project:
--
4fe6ba760c00e3f82e631ff8f7d6404dbdd5cf9e by Ayan Moitra <amoitra@nvidia.com>:
... | Ayan Moitra | amoitra@nvidia.com | 1,692,964,149,000 |
d8031689066ca52c4a8f8f10f47c524fbd979c7a | PR #5185: Add a debug option to toggle reduction epilogue fusion
Imported from GitHub PR https://github.com/openxla/xla/pull/5185
This is to follow up on the original [pr](https://github.com/openxla/xla/commit/f91402af93e86752c5e7c5d0a42ec54f89c007fe) to enable reduction epilogue fusion in xla gpu fusion modules.
We ... | TJ Xu | tjx@nvidia.com | 1,692,964,888,000 |
5c79f0cf5943ef5fdf1dd809e2538eeceaa7512c | Enable DUS tests on GPU.
They are passing now, most likely because we now have the DynamicUpdateSlice
in-place emitter.
PiperOrigin-RevId: 560066499 | Adrian Kuegel | akuegel@google.com | 1,692,967,840,000 |
27ac120221813b19352299ead14ac5a0e5a5e3dc | Enable convolution test on GPU which is working now.
PiperOrigin-RevId: 560067697 | Adrian Kuegel | akuegel@google.com | 1,692,968,259,000 |
61b272d94361b8b80896b77e5bad35ed45c3b3b4 | #tf-data Reuse the `FunctionLibraryDefinition` rather than creating a new one for every identity map function check in `map` transformations to speed up `noop_elimination` rewrite.
PiperOrigin-RevId: 560076996 | Wilsin Gosti | wilsin@google.com | 1,692,971,142,000 |
cc3c71696b077c85f989ba7ca4d1474901cc617a | Enable ManyParametersIntoWhileLoop test on CPU and GPU.
This runs reasonably fast now; it is only the second-slowest test after the
ThreeThousandParametersAndOutputElements test, which runs more than twice as
long.
PiperOrigin-RevId: 560080763 | Adrian Kuegel | akuegel@google.com | 1,692,972,240,000 |
b10b155d44bcb3bed5f10065326c8bb7f164f734 | Increase number of tuple parameters in LargeTuple test.
Previously, a bigger number resulted in timeouts or OOM on different backends.
This seems to work now.
PiperOrigin-RevId: 560080982 | Adrian Kuegel | akuegel@google.com | 1,692,972,301,000 |
35dd4818cb17c2c1a6981e52fb4842b84652a314 | [XLA:GPU] Triton GEMM: support more broadcasts.
This requires these additional changes:
- Broadcasts now rely on the dimension analysis class. Because Softmax and GEMM emitters share the broadcast generation code the dimension analysis of Softmax fusions is added.
- Softmax fusions contain reductions, so a minimal s... | Ilia Sergachev | sergachev@google.com | 1,692,972,777,000 |
4ceb8dc4bc27bf2d0dd5517318c9002ced72bfd7 | Enable ReduceWindow test on GPU.
Apparently this was failing at some point due to a PTXAS bug that got fixed by
now.
PiperOrigin-RevId: 560082727 | Adrian Kuegel | akuegel@google.com | 1,692,972,828,000 |
65e35d558215f44e5a542ec5c586df8a307f1140 | Integrate LLVM at llvm/llvm-project@2acf00bd0ac2
Updates LLVM usage to match
[2acf00bd0ac2](https://github.com/llvm/llvm-project/commit/2acf00bd0ac2)
PiperOrigin-RevId: 560101344 | Krasimir Georgiev | krasimir@google.com | 1,692,977,960,000 |
f3631aff4b00ed0ffa5d197548d4bb831052d993 | Update TFRT dependency to use revision
http://github.com/tensorflow/runtime/commit/7807f8c4fcb384394aed0dc05dce7604bc60e4b6.
PiperOrigin-RevId: 560110922 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,980,325,000 |
dd3182796834b1888b2380e419c4cf8f5b2e791d | Add control dependencies for input output buffer alias. If needed, we add control dependency to enforce output-related instructions after the users of input buffers.
PiperOrigin-RevId: 560122049 | A. Unique TensorFlower | gardener@tensorflow.org | 1,692,982,755,000 |
83373223fce5da628907042e16d221755684ec21 | clip_by_global_norm: Support tensors in t_list with different dtypes.
Casts the scale factor to each tensor's dtype before multiplying.
Motivation: When using a mixture of float32 and bfloat16 variables, avoids trivial dtype mismatch errors.
PiperOrigin-RevId: 560122639 | RJ Skerry-Ryan | rjryan@google.com | 1,692,982,863,000 |
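The `clip_by_global_norm` entry above describes casting the scale factor to each tensor's dtype before multiplying, so mixed float32/bfloat16 variable lists avoid dtype-mismatch errors. A minimal NumPy sketch of that behavior (the function name and structure here are illustrative, not TensorFlow's actual implementation):

```python
import numpy as np

def clip_by_global_norm(t_list, clip_norm):
    """Sketch: compute one global norm over all tensors, then cast the
    scale factor to each tensor's own dtype before multiplying."""
    # Accumulate squared norms in float64 for a stable global norm.
    global_norm = np.sqrt(sum(
        float(np.sum(np.square(t.astype(np.float64)))) for t in t_list))
    scale = clip_norm / max(global_norm, clip_norm)
    # Cast the scale to each tensor's dtype so the product keeps its dtype.
    clipped = [t * np.asarray(scale, dtype=t.dtype) for t in t_list]
    return clipped, global_norm
```

With this per-tensor cast, a float16 tensor stays float16 after clipping instead of being promoted (or erroring) when multiplied by a higher-precision scale.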