| hash | msg | author | email | date |
|---|---|---|---|---|
1a7d326c217c1fe5b79a05ca7fa009d2bc7082a8 | Fix a bazel path still referencing TF in tsl
PiperOrigin-RevId: 573888446 | Michael Hudgins | michaelhudgins@google.com | 1,697,483,362,000 |
95ff30206c8b5a97226dbae6f26d87f09943a502 | Refactor `tensorflow/lite/kernels:subgraph_test_util` single binary op build functions.
PiperOrigin-RevId: 573896661 | Quentin Khan | qkhan@google.com | 1,697,485,075,000 |
923378a04853f9a6eb6ac73cb1c0a5b2f26d8522 | Replace the deprecated Python `imp` module usages with `types`.
PiperOrigin-RevId: 573908302 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,487,364,000 |
c7c003e2445f39028c174727a1814b8fd12bf60f | #tf-data make expired() and node->output() atomic
PiperOrigin-RevId: 573909416 | Jim Lin | jimlintw@google.com | 1,697,487,596,000 |
4a6515ab4737a497bd3a30893e8edd8a1a4bd9b5 | Maintain `output_batch_size * batch_group_count = lhs_input_batch_size` for the convolution instructions in SPMD partitioner.
Referring to the operation semantics on convolution (https://www.tensorflow.org/xla/operation_semantics#convwithgeneralpadding_convolution) for more details on the batch_group_count.
PiperOrigin-RevId: 573911344 | Zixuan Jiang | zixuanjiang@google.com | 1,697,487,944,000 |
a88a4142d5c8c6c522b3e1875c60140187d1716d | #tf-data-service Use absl::Status in snapshot_stream_writer.
PiperOrigin-RevId: 573914350 | Yang Chen | yangchen@google.com | 1,697,488,535,000 |
99dbde1e9e643425adf4099922cbb8b06e1e127b | Reduce the number of nodes in fuzz test to avoid integer overflow.
PiperOrigin-RevId: 573924050 | Dinghua Li | dinghua@google.com | 1,697,490,353,000 |
c95107b1959f36a55952f9fc01a8f8f388ab66cf | [xla:cpu] Remove xla/runtime:ffi target and FFI support from XLA:CPU
PiperOrigin-RevId: 573937288 | Eugene Zhulenev | ezhulenev@google.com | 1,697,493,159,000 |
899f688c787aaefd73609a3d3a6202c054fb4ec3 | [xla:runtime] Remove unused FFI modules
PiperOrigin-RevId: 573940390 | Eugene Zhulenev | ezhulenev@google.com | 1,697,493,816,000 |
e6d7cb2c86e591830cef8e580e618ed0ebe6fc0f | Add packed int4 support to GenericTransferManager.
GenericTransferManager has a virtual method added, PackSubbyteTypes. If overridden to return true, int4 arrays, which are always unpacked on the host, will be packed on the device. In particular, transferring int4 arrays from host to device packs them, and transferring them from device to host unpacks them.
Once int4 support is added to XLA:CPU and XLA:GPU, the CPU/GPU transfer managers will override PackSubbyteTypes to return true.
Also two methods of TransferManager, TransferBufferFromDevice and TransferBufferToDevice, are moved to generic_transfer_manager.cc since they are only used by GenericTransferManager.
PiperOrigin-RevId: 573942280 | Reed Wanderman-Milne | reedwm@google.com | 1,697,494,239,000 |
8cb571149fc512a026407e0a49356fbf02c8ae21 | Re-enable layering_check for package.
PiperOrigin-RevId: 573943913 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,494,577,000 |
dd7d89c15651945adc05d257217885ce3e259ee3 | Skip TPU V1 Compat bridge for inference graph outside execution of TPUPartitionedCall op
PiperOrigin-RevId: 573946927 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,495,262,000 |
68d2f3cb05f1c03b322be038bdc27f551e71c573 | #tf-data-service Add a test for split provider restoration.
PiperOrigin-RevId: 573949811 | Yang Chen | yangchen@google.com | 1,697,495,875,000 |
f5c45726087ae66e07ef953007d3ce7eba11d016 | Remove lazy batching
Lazy batching was partially written two years ago but it was never
rolled out because it was deprioritized. Clean up the code because
the surrounding code has changed meaningfully since then.
PiperOrigin-RevId: 573952915 | Haifeng Jin | haifengj@google.com | 1,697,496,617,000 |
2065d03aa48d38327fe7fbc201c8d7f4b2fa6a24 | [XlaCallModule] Allow i64 platform index arguments.
Previously, for multi-platform serialization the platform index
argument was required to be an i32. Now we also allow
i64, just like we do for dimension variables. This flexibility
is useful for JAX when running in 64-bit mode.
PiperOrigin-RevId: 573952962 | Haifeng Jin | haifengj@google.com | 1,697,496,630,000 |
69d5de1b9b4e82db45690d353ada590b6b61b9a7 | Integrate StableHLO at openxla/stablehlo@801bb9c6
PiperOrigin-RevId: 573953802 | Gunhyun Park | gunhyun@google.com | 1,697,496,828,000 |
4e5b34c2d4897199d7668fac3bfc801f9b86211d | [stream_executor] NFC: Do not export plugin_manager header via public API headers
plugin_manager is an internal implementation detail and should not be accessible via public targets
PiperOrigin-RevId: 573955491 | Eugene Zhulenev | ezhulenev@google.com | 1,697,497,253,000 |
3e9ce8de7952b86bfe3cbeb607154d3d7dccced2 | [XLA:GPU] Fix bug in all-to-all for complex data types.
The multiplier for complex data types wasn't being applied correctly; the chunk_bytes calculation double-applied the multiplier.
Fixes https://github.com/google/jax/issues/18122
PiperOrigin-RevId: 573955671 | Peter Hawkins | phawkins@google.com | 1,697,497,294,000 |
e0c4304cf88ac8485a52483f1f666bff223b191e | [xla] Enhance PrintCycle to consider control dependence.
Enhance PrintCycle to print cycles caused by control dependence. Add cycle
string to the error message.
Add a test.
PiperOrigin-RevId: 573955739 | Bixia Zheng | bixia@google.com | 1,697,497,313,000 |
8ead3e4714cf048e66916a5f01acc144972ea384 | Use absl::int128 in PresizedCuckooMap
No need to roll our own.
PiperOrigin-RevId: 573958973 | David Majnemer | majnemer@google.com | 1,697,498,062,000 |
7b81b8c94a3fcf067ce50c31ce031a9ea449a59b | [PJRT] Fold all the specializations of TransposeMicroKernel away
We can rely on some `if constexpr` logic to determine which implementation to use for a given block size and element width.
PiperOrigin-RevId: 573959077 | David Majnemer | majnemer@google.com | 1,697,498,092,000 |
e8d1e13c6283bd3d9477e66bc8d8dd1b4f010c89 | Move `device_tracer` (which registers the GPU profiler) from `profiler_backends` to the targets that depend on it.
`third_party/tensorflow/core/profiler/lib:profiler_backends` uses `if_cuda_or_rocm` to decide whether to include `device_tracer`. `third_party/tensorflow/compiler/xla/python:profiler` uses `gpu_enabled` to decide whether to include `device_tracer`.
With this change, we can use `gpu_enabled` to control whether the GPU profiler is registered in jaxlib.
PiperOrigin-RevId: 573961594 | Jieying Luo | jieying@google.com | 1,697,498,687,000 |
50a1340c79e6fcb96a9526fefe3fbd0cbb87cfcb | [stream_executor] NFC: Move StreamExecutor external frameworks integrations into separate folder
PiperOrigin-RevId: 573963795 | Eugene Zhulenev | ezhulenev@google.com | 1,697,499,235,000 |
d8c9ec3f697779ad7e3d77368d9662fd54078fc7 | Add pass that moves all TPUCompileMlir ops as far to the front as possible.
(This only adds the pass, it doesn't hook it up to any pipeline.)
PiperOrigin-RevId: 573974640 | Matthias Kramm | kramm@google.com | 1,697,502,034,000 |
dc805474308d1c5563844c6442ca195e251d17f6 | Override boringssl static linking by explicitly importing openssl
PiperOrigin-RevId: 573974999 | Chandra Devarakonda | chandrasekhard@google.com | 1,697,502,116,000 |
a08b56cef687247065534d47a18012693f7162ca | PR #5740: Add tuple input support to all-gather and reduce-scatter
Imported from GitHub PR https://github.com/openxla/xla/pull/5740
This PR adds tuple input support to all-gather and reduce-scatter. This is a revival of part of https://github.com/tensorflow/tensorflow/pull/58377 and to be used in conjunction with https://github.com/pytorch/xla/pull/5624 .
In FSDP, different layers' weights need to be all-gathered/reduced-scatter during training. If some layers are small, multiple layers' weights can be aggregated for more efficient data transfer (same concept as bucket_cap_mb in DDP). With existing all-gather and reduce-scatter in PyTorch-XLA, you would have to do the bucketing and decomposing outside of the operation. This PR enables multiple different tensors to be all-gathered/reduce-scatter, keeping the original tensor shapes to enable bucketing and decomposing optimizations inside the operation.
Original PR has token support like the token used for allreduce to ensure order between CCops. That will be separate PR if needed.
Copybara import of the project:
--
7ea1159a1464efddebe9384e87ed6df504d89b2e by Junmin Hao <junminh@amazon.com>:
Add Tuple input and token support to all-gather and reduce-scatter.
Committer: Junmin Hao <junminh@amazon.com>
--
cdb873e6d97f5f12b3d3c587bb5782d58e3554c5 by Junmin Hao <junminh@amazon.com>:
lint fix
--
aad352117ba950ac5ae62330e3980f4b5898a701 by Jeffrey Huynh <jthuynh@amazon.com>:
Fix hlo_verifier_test failure due to changed expectation
--
32e814524b88a474af5e4e904c0dd19841430b86 by Jeffrey Huynh <jthuynh@amazon.com>:
Separate the token change out into a separate PR with RFC.
--
b301c2a2a5b52180f9e9626173e6b67a78782960 by Jeffrey Huynh <jthuynh@amazon.com>:
Change *WithToken tests to *WithTuple
--
5890278fc16c9f900782d32a92d40ecf548aea85 by Jeffrey Huynh <jthuynh@amazon.com>:
Fix missing parenthesis
Merging this change closes #5740
PiperOrigin-RevId: 573976449 | jeffhataws | jthuynh@amazon.com | 1,697,502,454,000 |
90db0b748c94afb1bb97a7898d7ebc82263f777a | Add TPU Profiler for PJRT.
PiperOrigin-RevId: 573978441 | Clive Verghese | cliveverghese@google.com | 1,697,503,010,000 |
88c41bba412d9e1810e030453810ac5cf5b7c9a6 | Consider maximum of ids/uniques across all replicas.
In contrast to the previous change, this now uses different group keys for each
collective.
PiperOrigin-RevId: 573985589 | Matthias Kramm | kramm@google.com | 1,697,505,305,000 |
95d1f9f20d94506ea28a465c1e6a148a55050b85 | achange names | fsx950223 | fsx950223@outlook.com | 1,697,510,938,000 |
ec374476ed6394b4a1c457452fbea7656e9fba89 | Internal Code Change
PiperOrigin-RevId: 574018506 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,517,776,000 |
12a50f9504cd389380031a8fa1ab47f0eb6f9aa2 | Add LiftQuantizableSpotsAsFunctionsPass to StableHLO Quantizer
PiperOrigin-RevId: 574027714 | Jiyoun (Jen) Ha | jiyounha@google.com | 1,697,520,958,000 |
d8d2c2a905768bb9a452a0976d2f7a9a401f3280 | Migrate `QuantizePass` to the StableHLO quantizer.
This change forks the `QuantizePass` to contain a minimal subset of the original features for StableHLO quantizer.
Weight-only quantization and dynamic-range quantization are disabled for now, as the minimal version only supports static-range PTQ.
PiperOrigin-RevId: 574058482 | Dan Suh | dansuh@google.com | 1,697,530,275,000 |
c1d2e5c9974f4d41fc6c876ffab01eb4656fcacc | Update GraphDef version to 1652.
PiperOrigin-RevId: 574069686 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,533,336,000 |
cd17951efe3edc530795ca2daf449d62a51d22e8 | compat: Update forward compatibility horizon to 2023-10-17
PiperOrigin-RevId: 574075341 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,534,844,000 |
966cee980f05a1d7d4383e131aeb51601f0ce43a | [XLA:GPU] Flatten call graph after while loop double buffering has run.
This is necessary to ensure that computations are referred to by a single
caller, an invariant that is expected to hold for alias analysis.
PiperOrigin-RevId: 574082722 | Benjamin Chetioui | bchetioui@google.com | 1,697,537,015,000 |
5a39244519e46bfba66e377e5ee533bc2ab3969b | Fix for build breakage on MacOS
PiperOrigin-RevId: 574084230 | Jonathan B. Coe | jbcoe@google.com | 1,697,537,448,000 |
57e0ff1aa84464b3578849068488558e13afd2fe | Integrate LLVM at llvm/llvm-project@f6f944e77f74
Updates LLVM usage to match
[f6f944e77f74](https://github.com/llvm/llvm-project/commit/f6f944e77f74)
PiperOrigin-RevId: 574085237 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,537,745,000 |
5d64ceeb286c4fa1b75b5d0fba7ee75710bce338 | Add github user 'venkat6871' into auto assignment list. | Shivam Mishra | 124146945+shmishra99@users.noreply.github.com | 1,697,543,287,000 |
5f7f05a80aac9b01325a78ec3fcff0dbedb1cc23 | Update Python packages' requirements for Python 3.12.
PiperOrigin-RevId: 574111451 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,544,280,000 |
b2a7a25d102fec8fc7e3690218b627738d8a6fc2 | Integrate LLVM at llvm/llvm-project@233c3e6c53a5
Updates LLVM usage to match
[233c3e6c53a5](https://github.com/llvm/llvm-project/commit/233c3e6c53a5)
PiperOrigin-RevId: 574122982 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,547,894,000 |
b9fe6f7ed026c640f86db81f9f248c864034d73e | [XLA:GPU] Make multiple tests work with --xla_gpu_triton_gemm_any=true
Without this change, it would break after we change the default value.
PiperOrigin-RevId: 574128536 | Tamás Danyluk | tdanyluk@google.com | 1,697,549,240,000 |
2a5d57455309389b45b9bf4166baca32612d526c | [XLA:GPU] Fix comment. Fusion merger works in post-order.
PiperOrigin-RevId: 574130265 | Oleg Shyshkov | shyshkov@google.com | 1,697,549,780,000 |
8f065a5d0df2b34904f8b5313018bb8cab83ea3c | Fix a bazel path still referencing TF in xla
PiperOrigin-RevId: 574130820 | Michael Hudgins | michaelhudgins@google.com | 1,697,549,944,000 |
e8df9ca8f36d7e673899b8955af586133d7ca799 | Update TensorFlow built CUDA version to 12.2 in PyPi classifiers.
PiperOrigin-RevId: 574134994 | Michael Hudgins | michaelhudgins@google.com | 1,697,551,194,000 |
2da9196400f88ae747d464fef08b88904000f66b | Fix format of CUDA classifier used for upload to PyPi
For CUDA 12 there is an extra field in the classifier so
add that in to fix the upload failures. | Andrew Goodbody | andrew.goodbody@linaro.org | 1,697,474,178,000 |
062e249895c67e9da9a9a55ea15145d6ad914f8c | Merge branch 'master' into fix_classifier | Michael Hudgins | michaelhudgins@google.com | 1,697,552,056,000 |
42a169ef18c4d489ab582145149aa108e792457f | fix cos op for nnapi delegate | Koan-Sin Tan | koansin.tan@gmail.com | 1,697,552,089,000 |
341a0320140f94a57eb81bbba8f70615afca161e | Reverts a previous, breaking change.
PiperOrigin-RevId: 574144696 | Thomas Köppe | tkoeppe@google.com | 1,697,553,331,000 |
9ee2567b3b94db0861e62af253d91660c9713f3f | Merge pull request #62128 from elfringham:fix_classifier
PiperOrigin-RevId: 574150483 | TensorFlower Gardener | gardener@tensorflow.org | 1,697,555,178,000 |
97cc7e8763482fc6e214f379c3978fbb85098581 | Add parameter for dump prefix
PiperOrigin-RevId: 574161859 | Mason Chang | masonchang@google.com | 1,697,557,231,000 |
9c3413b846b72abdf264beec2d07bc4d944f22ab | Check in generated pyi files for some py_extension targets.
PiperOrigin-RevId: 574165828 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,558,214,000 |
73f4e8bf376a4807773e930e997e1dc05890443f | Fix a typo in the sparse core preprocessing ops.
PiperOrigin-RevId: 574173701 | Ziyin Huang | ziyinh@google.com | 1,697,559,933,000 |
28daabc2a6a431de45f11018c9eddf63d89085f4 | Add support for simplifying compare instructions inside loop bodies.
If a compare instruction compares the loop induction variable with a constant and the constant has certain values, then the compare can be replaced with a constant. This addition by itself might not give us performance, however, it allows us for further optimizations.
PiperOrigin-RevId: 574175424 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,560,296,000 |
c1e90c836b05b5127cdea66c916b7da8abb905ba | Move lower_cluster_to_runtime out of TFRT directory into third_party tensorflow so that Kokoro Windows builds work.
PiperOrigin-RevId: 574177255 | Mason Chang | masonchang@google.com | 1,697,560,680,000 |
a812588ddd794817fbe1869698b7bb17b956b9fe | Remove last remaining usages of TFRT_SYNC_KERNEL registration macro
PiperOrigin-RevId: 574178210 | Rohit Upadhyaya | rohitju@google.com | 1,697,560,915,000 |
b7907132ba04c3347b6258a4a9ba519f5e6b3813 | Remove Kokoro build configs
PiperOrigin-RevId: 574179788 | David Dunleavy | ddunleavy@google.com | 1,697,561,253,000 |
b5b458c30da41d3e07ecb59fb5a4e43fe524b41a | Re-enable layering_check for package.
PiperOrigin-RevId: 574181331 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,561,568,000 |
1ad232cf97680c5e1a71a108efb25c4caeb73a62 | Do not use sharding propagation for convolutions ops for now as the pass is not always successful in inferring operand shardings for convolutions, even when no contraction dimension is sharded.
PiperOrigin-RevId: 574183489 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,561,974,000 |
f04aa0267deb24e3e9b8969b838fe35ec0323ed8 | [stream_executor] NFC: Move StreamExecutor external frameworks integrations into separate folder
PiperOrigin-RevId: 574192405 | Eugene Zhulenev | ezhulenev@google.com | 1,697,563,553,000 |
2213d5137c540ababd6d390399afbf52b5fd1ff4 | Add the CUDA 12.2 classifier to setup.py.
PiperOrigin-RevId: 574194212 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,563,876,000 |
516828e158d2fda1bd44553753334c5a6d357cd0 | Add type annotations to third_party/tensorflow/python/client.
PiperOrigin-RevId: 574197799 | Shashank Viswanadha | shashankvi@google.com | 1,697,564,495,000 |
279c1c4f50fce08779199413349f7bfa34e41fab | Internal Change. Not visible in OSS.
PiperOrigin-RevId: 574202870 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,565,340,000 |
e12ad5fdddfe97d2a6cfeddf5a401db0f3d7dd9d | Replace test methods deprecated in 3.12.
PiperOrigin-RevId: 574206799 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,566,035,000 |
493ecf7b6e4bba3508f7e68c255b61e5e3c863df | [NFC] Fix typo.
PiperOrigin-RevId: 574211195 | Benjamin Chetioui | bchetioui@google.com | 1,697,566,877,000 |
5b9068a9c5e01ae85c83ed694ef2d2f3ce8f7965 | [xla:gpu] NFC: Remove LMHLO op argument from EmitKernel #6224
PiperOrigin-RevId: 574213434 | Anlun Xu | anlunx@google.com | 1,697,567,301,000 |
ea61818d5004c17231ad6d659910f1dc0eef84b5 | Re-enable layering_check for package, except two targets that use headers that have no targets.
PiperOrigin-RevId: 574214035 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,567,404,000 |
71d8cb4550c6a2023b781aec42ff8e3d5528f4a5 | Add `compatible_with` attribute to `//third_party/tensorflow/lite:namespace` target.
This is needed in order to use that target from MediaPipe.
PiperOrigin-RevId: 574215278 | Fergus Henderson | fergus@google.com | 1,697,567,627,000 |
ba33dabe1bb2df84e7a68df2c3298c39f861e6dd | Temporarily expose Runtime passes and use them globally and remove from the Bridge. Used as a temporary migration to make follow up CLs easier and to ensure the two passes stay in sync.
PiperOrigin-RevId: 574222388 | Mason Chang | masonchang@google.com | 1,697,569,001,000 |
4152512974b489c989c7937249e68cebf897d012 | Update numerical_utils.cc
Used descriptive variable names. Renamed the q_fixed variable to quantized_multiplier.
Check for errors. For example, check that the shift amount is not greater than 31 or less than -31. If it is, return an error instead of converting the multiplier to zero. | Gautam | gautamrbharadwaj@gmail.com | 1,697,572,624,000 |
1c2cf76da9592743cb0c82bcf71aa8d887e3f4be | Merged commit includes the following changes:
574254316 by A. Unique TensorFlower<gardener@tensorflow.org>:
Remove line in build file causing sync issue
--
574252274 by A. Unique TensorFlower<gardener@tensorflow.org>:
Improve NNAPI documentation so that it is more self-contained
--
574249622 by A. Unique TensorFlower<gardener@tensorflow.org>:
Moves saved_model loading steps into function calls.
--
574243633 by A. Unique TensorFlower<gardener@tensorflow.org>:
change of libtpu.so path
--
574238910 by A. Unique TensorFlower<gardener@tensorflow.org>:
#tf-data-service Add detailed debugging info for split providers.
--
574223193 by A. Unique TensorFlower<gardener@tensorflow.org>:
Redirect references from the lib target to the new single-source-file targets.
--
574222818 by A. Unique TensorFlower<gardener@tensorflow.org>:
Add visibility for recml.
--
PiperOrigin-RevId: 574254316 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,575,803,000 |
4011352f6af3f35d6531411373ac6e2c35ca4b10 | [xla] Enhance P2P schedule preparation pass.
Add control dependence to linearize a P2P chain with other instructions that
may invoke collective operations.
Add test cases.
PiperOrigin-RevId: 574254750 | Bixia Zheng | bixia@google.com | 1,697,575,901,000 |
f84d49db76322fc85e76c900e93702245484a087 | Updates Auto Sharding to consider strategies for the gather operation that follow operand 0.
PiperOrigin-RevId: 574261233 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,577,195,000 |
c5f953171ee565021e79fba0394c61d509242f34 | Integrate LLVM at llvm/llvm-project@c4ba84d65551
Updates LLVM usage to match
[c4ba84d65551](https://github.com/llvm/llvm-project/commit/c4ba84d65551)
PiperOrigin-RevId: 574267541 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,578,411,000 |
5db2fba5bd05f37a25b2fa9a368e7247eb11d4d8 | Clean up users of TPUBridge and migrate them to Bridge Clustering API
PiperOrigin-RevId: 574276561 | Mason Chang | masonchang@google.com | 1,697,580,365,000 |
5dd83e4e84338193765edb63acd8112dbba3cb40 | Add tensorlist support in native AddNOp
PiperOrigin-RevId: 574279230 | Luke Boyer | lukeboyer@google.com | 1,697,580,970,000 |
a6c54b68700ef8b160e6dd0b48a6bacc4dc9f49f | Add env var TF_PYTHON_VERSION in Dockerfile documentation
Set TF_PYTHON_VERSION to use hermetic Python in TensorFlow. See ci/official/requirements_updater/README.md for details.
PiperOrigin-RevId: 574287043 | Kanglan Tang | kanglan@google.com | 1,697,582,715,000 |
52be3db886b750d1c4bd0ccc3241591f748d6f0b | Extract out preparation to export to tf executor
PiperOrigin-RevId: 574291204 | Mason Chang | masonchang@google.com | 1,697,583,690,000 |
bc388720fdc7c48274bff1dbb24194fe7b2d5859 | Re-enable layering_check for this package.
PiperOrigin-RevId: 574297009 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,585,086,000 |
cdb29b10dfe8517d1eeaf10148d866838042c1c0 | [XLA:LatencyHidingScheduler] Add support to rerun the scheduler if memory_peak > memory_limit.
Adds a flag `xla_latency_hiding_scheduler_rerun` to denote the number of iterations the scheduler is allowed to rerun with 90% of the memory limit used in the previous iteration. Also adds a test case called RerunWithSmallerMemoryLimit in latency_hiding_scheduler_test.
PiperOrigin-RevId: 574304139 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,586,952,000 |
0aa20a615ff8819e128c0c54cd6b3b90e4eafc52 | Remove logging suffix underscore
PiperOrigin-RevId: 574304908 | Mason Chang | masonchang@google.com | 1,697,587,180,000 |
609d09e4cb2374f1a735d68acc61091a3445ddb0 | #tf-data-service Add vlogging for tf.data service worker graph rewrites.
PiperOrigin-RevId: 574306531 | Matt Callanan | mpcallanan@google.com | 1,697,587,590,000 |
403c8c24e73eadb715ca1502cde68536bf6fe92f | [XlaCallModule] Allow i64 platform index arguments.
Previously, for multi-platform serialization the platform index
argument was required to be an i32. Now we also allow
i64, just like we do for dimension variables. This flexibility
is useful for JAX when running in 64-bit mode.
PiperOrigin-RevId: 574313345 | George Necula | necula@google.com | 1,697,589,668,000 |
d33e8b802f29646325f21ad178b5a7a347499827 | Moves saved_model loading steps into function calls.
PiperOrigin-RevId: 574313410 | Chris Minge | chrisminge@google.com | 1,697,589,693,000 |
c7b0be3271a1ce18ea8f9ace30fb69e242110318 | Add AotCompileToGpuPjRtLoadedExecutableWithDevice to compile function into serialized PjRtLoadedExecutable, with device.
PiperOrigin-RevId: 574333714 | Shixin Li | shixinli@google.com | 1,697,597,857,000 |
60ccf67fad192696512a3da2988be300bdcb34ce | [tensorflow] Update TFRT dependency
PiperOrigin-RevId: 574338996 | Eugene Zhulenev | ezhulenev@google.com | 1,697,599,117,000 |
68c593fe79162e3cd0689d41d33fa7f92cc3ab87 | Adds training metadata to checkpoint.
PiperOrigin-RevId: 574358552 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,605,549,000 |
9fc9c6ab9fc2e2d1c0c188df5529842b45e053fd | Merge pull request #62112 from fsx950223:fix123
PiperOrigin-RevId: 574364373 | TensorFlower Gardener | gardener@tensorflow.org | 1,697,607,895,000 |
85f5aad9e5a210835648813d779956a0db68046c | Merge pull request #62124 from Raguggg:Raguggg-patch-1
PiperOrigin-RevId: 574366628 | TensorFlower Gardener | gardener@tensorflow.org | 1,697,608,670,000 |
a28c4ba5a4bce9d8e7a5c07555f9c5d7018c96bd | Enable layering_check for package (NFC).
Add the required dependencies, for each header include there should be a
corresponding dependency that includes this header in the hdrs attribute.
PiperOrigin-RevId: 574375035 | Adrian Kuegel | akuegel@google.com | 1,697,610,913,000 |
cff5f1d8ff1880f55737f895c1cfb386c23ac696 | Legalize TF::AddNOp with variant inputs to tfl custom
PiperOrigin-RevId: 574381590 | Luke Boyer | lukeboyer@google.com | 1,697,612,846,000 |
c8ae6659ab7bf3ddc044ff812b020e3ef99321de | Fix typo in unittest setup.
PiperOrigin-RevId: 574392823 | Weiyi Wang | weiyiw@google.com | 1,697,615,965,000 |
cd894e0fc57317a1506d82d7a09eff14ba4ea343 | Update GraphDef version to 1653.
PiperOrigin-RevId: 574405560 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,619,729,000 |
80c6d43a37a857b00f07eb2a92dc6d1e75afb3dd | compat: Update forward compatibility horizon to 2023-10-18
PiperOrigin-RevId: 574405566 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,619,730,000 |
7675a582dc3c7622ecc9a6ecdc67f83da5346916 | [XLA:GPU] Refuse Triton codegen for dots with at least one trivial non-contracting dimension
It doesn't really work anyway. We could implement it later, but now it's important to refuse it, so we could enable xla_gpu_triton_gemm_any soon.
Runtime: Does not change runtime for our benchmarks.
PiperOrigin-RevId: 574414044 | Tamás Danyluk | tdanyluk@google.com | 1,697,622,106,000 |
66488ee032cded09c3d2c2865109441ef73e5ee0 | [XLA:GPU] Splat constants emitted in EmitScope to make them tensors
This issue came up with xla_gpu_triton_gemm_any: there was a type mismatch between f32 and tensor<f32> in xla/tests:multioutput_fusion_test.
I added one more test that tests this specifically.
PiperOrigin-RevId: 574444066 | Tamás Danyluk | tdanyluk@google.com | 1,697,631,501,000 |
93778997361af4599dbf557e56ddaef128249b2d | [XlaCallModule] Drop support for dim_args_spec attribute.
This attribute was used to support shape polymorphism in versions
up to and including version 4. Starting on March 28th 2023 with
JAX version 0.4.6 we stopped using this attribute. We are now
beyond the 6 month backward compatibility version and we drop
support for this attribute.
We also increase the minimum supported serialization version to 5.
See https://github.com/google/jax/blob/main/jax/experimental/jax2tf/README.md#native-serialization-versions
PiperOrigin-RevId: 574450204 | George Necula | necula@google.com | 1,697,633,616,000 |
fa5904b5eb47acbdb098d814e689871a9dc6fd72 | [XLA:GPU] Don't test tsl::enable_tensor_float_32_execution(false) for dot anymore
The `dot` instruction doesn't support the flag directly, just by checking the operand precisions.
tf2xla/transforms/legalize_tf.cc sets them to highest if the TensorFloat-32 global variable is false.
So now we also do that in the test.
Hopefully b/280130359 will handle this in a nicer way.
This came up when trying to enable xla_gpu_triton_gemm_any. When we do that, we'll switch a lot of dots over to the Triton codegen, which ignores the TensorFloat-32 global variable and only checks the operand precisions.
PiperOrigin-RevId: 574452125 | Tamás Danyluk | tdanyluk@google.com | 1,697,634,209,000 |
c4ebe045afaef3416d794623a72cf4eab41e8bb4 | [XLA][NFC] Minor cleanups to the functional HLO runner.
PiperOrigin-RevId: 574452162 | Benjamin Chetioui | bchetioui@google.com | 1,697,634,219,000 |
3571d5b1da5f3b1c334996741485d5c2f10a5a3a | [xla] NFC: Switch to tsl/concurrency library in xla runtime
PiperOrigin-RevId: 574466106 | Eugene Zhulenev | ezhulenev@google.com | 1,697,638,263,000 |
fa378d172c802c59eaf1dde92d104dfed312e6a1 | [xla:pjrt] Switch PjRt to TSL concurrency library
After this change only `se_gpu_pjrt_client` has few remaining dependencies on TFRT that will be removed in the next change.
PiperOrigin-RevId: 574467786 | Eugene Zhulenev | ezhulenev@google.com | 1,697,638,774,000 |