| hash | msg | author | email | date |
|---|---|---|---|---|
949e03f3c52418d152b29d3896d253e2f53ba807 | raise error early in ResourceScatterUpdateOp and QuantizeDequantizeV2Op.
PiperOrigin-RevId: 572689848 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,060,161,000 |
0d78f2a50d5098cfeaa95e333b428e140b794053 | #tf-data-service Close the TF record writer after it writes a split.
PiperOrigin-RevId: 572689954 | Yang Chen | yangchen@google.com | 1,697,060,177,000 |
27a6398effea50467daffc048c784fc36d9f8aa6 | Add more descriptive logging for solver results in auto-sharding
PiperOrigin-RevId: 572697720 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,061,820,000 |
6e4a5e81ac31fb274036c8aa93130f138528cc6c | Implement profiler error related C APIs.
PiperOrigin-RevId: 572699789 | Jieying Luo | jieying@google.com | 1,697,062,272,000 |
10a9d177a85761c337ee4d599413a4d782b766c6 | [XLA] Make HloModule::config_ copy-on-write.
PiperOrigin-RevId: 572700723 | Ce Zheng | zce@google.com | 1,697,062,454,000 |
dd969261c029817fe19cf7be56579bc397dab882 | [XLA] Remove more open-coded switches of types
Some logic was quite redundant BitWidth vs ByteWidth along with another bespoke implementation of ByteWidth hiding in ShapeUtil.
Let's merge as much as we can and write it in a generic way.
While we are here, make more of our machinery constexpr friendly so we can more easily metaprogram this stuff.
PiperOrigin-RevId: 572704974 | David Majnemer | majnemer@google.com | 1,697,063,353,000 |
f0a860c12e468c9b1ba43c3b432fc1b842c0ad47 | Cut/paste CPU/GPU bridge passes into clustering_bridge_passes.
PiperOrigin-RevId: 572706474 | Mason Chang | masonchang@google.com | 1,697,063,645,000 |
72e534f7b1b4095dc701b467d01bfcdac4a36760 | Remove extra logic in `glob_lit_test` now that vendoring is completed
PiperOrigin-RevId: 572715850 | David Dunleavy | ddunleavy@google.com | 1,697,065,823,000 |
4174f9c841e908b61385241ef63d58637ea77177 | Re-enable layering_check for package.
PiperOrigin-RevId: 572717125 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,066,123,000 |
2be7e21b72d5ed16fac7902ce2e5cc39dd735927 | #tf-data fix the ram budget calculation from `share * AvailableRam() + total_buffered_bytes` to `share * (AvailableRam() + total_buffered_bytes)`
PiperOrigin-RevId: 572721593 | Jim Lin | jimlintw@google.com | 1,697,067,227,000 |
2b1cbd92331d9b0e607a0640937933121572afbe | Remove the deps of pjrt_c_api_helper from plugin_tracer.
PiperOrigin-RevId: 572722764 | Jieying Luo | jieying@google.com | 1,697,067,514,000 |
33df98aab569fa9a3150e96fed37d094ff6ebce3 | Update curl from 8.2.1 to 8.4.0
PiperOrigin-RevId: 572723857 | Laura Pak | lpak@google.com | 1,697,067,810,000 |
6111ad2d1da05d53494115ee32394a583ef6f9fb | Remove duplicate code.
PiperOrigin-RevId: 572730410 | Hye Soo Yang | hyey@google.com | 1,697,069,406,000 |
229e40f4bda36bd1012951690d4da818bddfb74c | [xla:gpu] Add CommandBufferCmd interpreter for constructing command buffers #6243
PiperOrigin-RevId: 572732154 | Eugene Zhulenev | ezhulenev@google.com | 1,697,069,877,000 |
3f31f99655d127cb2b22c9bd048630652016ac49 | Redirect references from the framework target to the new single-source-file targets.
PiperOrigin-RevId: 572742805 | Juan Martinez Castellanos | juanantoniomc@google.com | 1,697,073,128,000 |
a09b9dae1fa49d9863a5c7239c80fa48466d1a35 | #tf-data-service Using ShardingPolicy instead of `distributed_epoch`.
PiperOrigin-RevId: 572743752 | Yang Chen | yangchen@google.com | 1,697,073,420,000 |
dc3bfac742e76dc16e4c9bc609b16a7494a9d320 | Fill in BARBIE API For Runtime where it copy/pastes tpu_rewrite_pass and everything after into a runtime owned place.
PiperOrigin-RevId: 572748278 | Mason Chang | masonchang@google.com | 1,697,074,965,000 |
b183786bed2f7af5ba534f2b5c3838aed4ca7b7d | Redirect references from the framework target to the new single-source-file targets.
PiperOrigin-RevId: 572748733 | Juan Martinez Castellanos | juanantoniomc@google.com | 1,697,075,160,000 |
e6ce6598cc6dbccef8a2cbcf6c5c878077582197 | Integrate LLVM at llvm/llvm-project@25935c384dd8
Updates LLVM usage to match
[25935c384dd8](https://github.com/llvm/llvm-project/commit/25935c384dd8)
PiperOrigin-RevId: 572754541 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,077,520,000 |
780b81d41b6162775911db3d36426a33ec9cd5f3 | Fix config flags for TPU builds
PiperOrigin-RevId: 572755450 | Austin Anderson | angerson@google.com | 1,697,077,861,000 |
18f9ee9ec60038a442b4e20a506536a46d172730 | Fix test breaking
PiperOrigin-RevId: 572769531 | Tongfei Guo | tongfei@google.com | 1,697,082,886,000 |
027c924e514d6a7d52646bf313fd1abef74f7654 | Fix buildifier warnings in XLA
This is in preparation for enabling buildifier in OSS
PiperOrigin-RevId: 572777994 | David Dunleavy | ddunleavy@google.com | 1,697,086,216,000 |
504b994d8fd2b66a912f4699991b6c939f223829 | Defer to sharding propagation when inferring operand shardings in dot_handler when possible.
Specifically, we defer to sharding propagation in cases where the contracting dimension is not split. This is because, given just the output sharding, which is what sharding propagation takes as an input to infer operand shardings, it is not possible to say whether or not the contraction dimensions are sharded (as the output contains no contraction dimension)
PiperOrigin-RevId: 572779347 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,086,706,000 |
61e467207f6b5c0c2136e29dabae50e90ac1a6ac | Insert new blurb for new release notes TF 2.16.0.
PiperOrigin-RevId: 572785450 | Raviteja Gorijala | gorijala@google.com | 1,697,088,791,000 |
9ad83f93f6743cf5fb1cfa506be42a660049a13a | [stream_executor] NFC: Replace internal API uses with a public one
PiperOrigin-RevId: 572791524 | Eugene Zhulenev | ezhulenev@google.com | 1,697,090,802,000 |
3ecb4fcf2e84cf9c92bcb9b45d560cd7d8d17bd6 | Internal Code Change
PiperOrigin-RevId: 572792702 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,091,159,000 |
397f4af93e04c7d7ccbe6e36997ed3f92035c4b4 | Add passes that unfuses MHLO ops that do not have equivalents in StableHLO
PiperOrigin-RevId: 572794176 | Doyeon Kim | doyeonkim@google.com | 1,697,091,683,000 |
ab0e72720579bf2c9da481a02cb3550c777d2830 | Remove Windows and Darwinn functionality from cuda_configure.bzl.
Remove @cub_archive (shipped with CUDA toolkit since v11).
PiperOrigin-RevId: 572800718 | Christian Sigg | csigg@google.com | 1,697,093,778,000 |
3ef4aba94c67da462efbe97e4f572b54445f2c12 | Testing triton integration 2023-09-14
PiperOrigin-RevId: 572808737 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,096,102,000 |
571fb412a417f780fc4fe68605513d3d7a429374 | NFC: replace uses of `TF_DISALLOW_COPY_AND_ASSIGN`/`SE_DISALLOW_COPY_AND_ASSIGN` macros with implementation.
PiperOrigin-RevId: 572823885 | Christian Sigg | csigg@google.com | 1,697,100,323,000 |
10e0f675d89c8e7023f0d2814a6becaef4d69f9f | Improve test coverage for priority fusion.
New cases:
- reduce-reduce
- transpose-reduce
- convert-reduce
- DUS-reduce
- a regression test for a non-trivial transpose fusion.
PiperOrigin-RevId: 572823935 | Johannes Reifferscheid | jreiffers@google.com | 1,697,100,342,000 |
29ad9d92d2e00f6a36f08557900f0e8df1adbe30 | Update GraphDef version to 1647.
PiperOrigin-RevId: 572827499 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,101,320,000 |
be666b396369dade78d326bc6bb93e5cadca16d8 | compat: Update forward compatibility horizon to 2023-10-12
PiperOrigin-RevId: 572827501 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,101,320,000 |
c4811a2cb32c6c2a6a5096a9e3eb8c1cba6d89b1 | [XLA:GPU] Fix class inheritance in Triton GEMM tests.
This properly applies debug options from parent classes to the child ones.
PiperOrigin-RevId: 572850466 | Ilia Sergachev | sergachev@google.com | 1,697,107,369,000 |
0c7c1774c2e5227cc352fe09e859e63d6e472391 | Rollback of PR #6126
Rollback, breaks internal project
PiperOrigin-RevId: 572860033 | Malcolm Reynolds | mareynolds@google.com | 1,697,110,078,000 |
2728dc9316fdbc4639c54aaf70b9270ee9e6c4d7 | Merge pull request #62008 from Intel-tensorflow:bhavanis/onednn-v3.3-rc
PiperOrigin-RevId: 572864301 | TensorFlower Gardener | gardener@tensorflow.org | 1,697,111,755,000 |
3d0abfedac5c85796b3517c7fb61b0e3ba30b837 | Don't allow to fuse into the first operand of scatter.
This allows to emit the scatter in-place. We also need
to avoid forming fusions where using the in-place emitter
would run into race conditions. Also, adjust the buffer
sharing logic so that it also detects for scatter fusions
whether the buffer of a fusion operand can be shared with
the output.
PiperOrigin-RevId: 572869882 | Adrian Kuegel | akuegel@google.com | 1,697,113,036,000 |
d480bc71044407380f2d186c26ff816ff43cd6aa | Integrate LLVM at llvm/llvm-project@b15b84610f63
Updates LLVM usage to match
[b15b84610f63](https://github.com/llvm/llvm-project/commit/b15b84610f63)
PiperOrigin-RevId: 572873835 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,114,291,000 |
b03af632b4e6731b1e8ef45abff849ffac6d6c6e | [XLA:GPU] Add structure proto logs to priority fusion pass.
The first use of the dump is to match ops in fused and unfused hlos. The proto is minimalistic, but can be extended with more details when needed.
PiperOrigin-RevId: 572880045 | Oleg Shyshkov | shyshkov@google.com | 1,697,115,739,000 |
623f17637ee4cade7c939e56d7cb6b577ba37457 | [NFC] Remove FixBazelEnvPath which doesn't do anything and simplify GetTestUndeclaredOutputsDir
PiperOrigin-RevId: 572894933 | Tamás Danyluk | tdanyluk@google.com | 1,697,119,640,000 |
341992662855485a204af4cc602cc12cf2c657ac | [xla:gpu][costmodel] Incorporate occupancy into bandwidth estimation.
If the launched grid only contains a small number of blocks, available transfer bandwidth is lower.
PiperOrigin-RevId: 572910498 | Christian Sigg | csigg@google.com | 1,697,123,657,000 |
a28e97b2431dbf76fe49628f169c7810fff124df | Update Cython: 3.0.011a -> 3.0.3.
PiperOrigin-RevId: 572924608 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,127,097,000 |
b79d44ffc80adefd15192f0ca7fe7deb54c3aa5a | [XLA:GPU] Add a SymbolUploader hook to enable uploading compiled HLO.
PiperOrigin-RevId: 572928845 | pizzud | pizzud@google.com | 1,697,128,099,000 |
052445e04ce20fd747657e0198a1bcec2b6dff5b | Update rules-python for requirements_updater and wheel_test: 0.18.1 -> 0.26.0.
PiperOrigin-RevId: 572936856 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,129,909,000 |
5618f2ebb47233dd40afdb8daae125fc9fe241bb | Add buildifier linting to CI
PiperOrigin-RevId: 572945962 | David Dunleavy | ddunleavy@google.com | 1,697,131,735,000 |
56cf3e14d3b88cf24b2b564aa6eff4f6142fde60 | Enable flaky test attempts for experimental arm64 config
PiperOrigin-RevId: 572947090 | Michael Hudgins | michaelhudgins@google.com | 1,697,131,953,000 |
685615a458e4844a69dd745c6ad51b2125dbe027 | [mlir][sparse] Change general FileCheck tests to new syntax
Example:
#sparse_tensor.encoding<{{.*}}> to #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : dense, d1 : compressed) }>>
PiperOrigin-RevId: 572949662 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,132,434,000 |
82ab6e32a5f6f095d21b602e57ee4d4fe21290a3 | Move CPU/GPU Bridge to cluster_tf API
PiperOrigin-RevId: 572954433 | Mason Chang | masonchang@google.com | 1,697,133,295,000 |
88e7d79e2b57790d4bffa7aefbfc448726af22c3 | [XLA:GPU] Add a SymbolUploader hook to enable uploading symbol mappings.
The mapping is between unoptimized and optimized HLO.
PiperOrigin-RevId: 572956390 | pizzud | pizzud@google.com | 1,697,133,638,000 |
2bca1e4263670ee4ff5759e12ac0e8b14d5704f4 | Change buildifier action to run on `pull_request_target`
PiperOrigin-RevId: 572957573 | David Dunleavy | ddunleavy@google.com | 1,697,133,869,000 |
b0813a11493df3f23739466634355a6001635bbc | Describe tf.math.bincount improvements in RELEASE.md
PiperOrigin-RevId: 572957826 | Edward Schwartz | schwartzedward@google.com | 1,697,133,915,000 |
7201f58e854e16a889af701d79e9529bc277b790 | [stream_executor] NFC: Remove unused GpuStreamMemberHack
PiperOrigin-RevId: 572960399 | Eugene Zhulenev | ezhulenev@google.com | 1,697,134,400,000 |
ff9c88d37e6dc19f7c6772159a42d25107ba3a9e | Re-enable layering check for package.
PiperOrigin-RevId: 572960619 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,134,443,000 |
961442748f26f3dd9250c58720b9170d73e6b85c | [xla:gpu] Lower command buffer call to LMHLO
PiperOrigin-RevId: 572961396 | Anlun Xu | anlunx@google.com | 1,697,134,598,000 |
07ee199e2aba35303cf3025fbed3c30a334468b3 | Redirect references from the lib target to the new single-source-file targets.
PiperOrigin-RevId: 572965247 | Juan Martinez Castellanos | juanantoniomc@google.com | 1,697,135,363,000 |
21ca6d68e197b6c603976de8daea1c6efacc5da6 | Ensure necessary keras imports are complete.
PiperOrigin-RevId: 572966483 | Fiona Lang | flang@google.com | 1,697,135,587,000 |
2815df87eb9cfb971f1eeb1cbe4f72f80190b707 | Do not register TPU profiler through profiler.register_plugin_profiler yet.
Will do it after the plugin implementation is in.
PiperOrigin-RevId: 572969879 | Jieying Luo | jieying@google.com | 1,697,136,336,000 |
9cb9b9931b6297d700a337b2b62aa98d2d6d7820 | Re-enable layering check for package.
PiperOrigin-RevId: 572971691 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,136,750,000 |
72eb2bf1357258b1a60a0eafd238a4688a7b313b | [XLA:GPU] Run DotDecomposer before OperandUpcaster, and ResultUpcaster.
This fixes a case when we have matrix with dimensionality greater than two, and some optimisations (e.g matching `[s8, s8] -> s32` to cublas call) are not getting picked up due to premature upcasting.
PiperOrigin-RevId: 572975894 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,137,632,000 |
807c635d82e9dae1072390dd360aecd6733d1b85 | Attempts to remove some legacy checks (e.g., IsFollowedByBroadcast for the Iota instruction).
PiperOrigin-RevId: 572977142 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,137,929,000 |
0709b4d04f01fbfa82e1cf1d02cb9fbdb7079783 | Fix determinism bug when Triton is used.
The issue was that the autotuner would choose between Triton and cuBLAS depending on which was faster. This behavior is now disabled when determinism is enabled.
Also add XLA determinism tests, including cases where Triton is not used. Before, determinism was only tested through TensorFlow tests but these tests do not reset the autotuning cache between runs of an HLO module and so can not catch autotuning bugs.
I didn't use HloTestBase's RunMultipleTimes method because it doesn't reset the GPU autotuning cache, and cannot reset it because I don't think we can add a dependency from HloTestBase to GPU-specific code.
PiperOrigin-RevId: 572980526 | Reed Wanderman-Milne | reedwm@google.com | 1,697,138,805,000 |
66700e534cf8c3ca4d049ff262832e40da5cc733 | No public description
PiperOrigin-RevId: 572981162 | Fiona Lang | flang@google.com | 1,697,138,979,000 |
b4fe0bb705bafedee4d9c32e2dce760f0a97620d | [XLA] Factor out the logic for creating typed visitors
No functional change is intended.
PiperOrigin-RevId: 572983388 | David Majnemer | majnemer@google.com | 1,697,139,555,000 |
b8feff7233ee2ac283fa5403ab12a1d6029a4cc9 | [tsl:concurrency] NFC: Port changes from TFRT concurrency library
In preparation for switch from TFRT async value to TSL concurrency library, port changes that were added after the split
PiperOrigin-RevId: 572987347 | Eugene Zhulenev | ezhulenev@google.com | 1,697,140,486,000 |
ad5fa0a39ea9308ff4d2c14b9878a61dd1c0cac1 | Add libtpu download for tensorflow-tpu builds
This is a prerequisite for switching TensorFlow's official CI build scripts over to these new ones.
This change also re-sorts a few env files, since one of the arg names changed.
PiperOrigin-RevId: 572990394 | Austin Anderson | angerson@google.com | 1,697,141,167,000 |
073ad58e6600a3877a5965b155f75e6fb9ef6f8b | Internal change only
PiperOrigin-RevId: 572994406 | Ziyin Huang | ziyinh@google.com | 1,697,142,008,000 |
6cf39ac3614961654be68505b743c4ef655bfe8e | Update keras_injection_test so that keras is no longer required to be imported at TF import time.
PiperOrigin-RevId: 572999728 | Fiona Lang | flang@google.com | 1,697,143,184,000 |
b8601edb9880bbd95526126f2eba34c7bdaab392 | Adding a RowIdInitializer to the embedding API.
PiperOrigin-RevId: 573009164 | Yu Feng | feyu@google.com | 1,697,145,264,000 |
1c273ce6604cae7fdd2aca015ae00fad9859153c | PR #6270: [ROCm] fixed rocm configure by remove exception
Imported from GitHub PR https://github.com/openxla/xla/pull/6270
Merging this change closes #6270
PiperOrigin-RevId: 573014999 | Chao | cchen104@amd.com | 1,697,146,556,000 |
347fb1717b038293f93a0c4d2a6f900f9dcff371 | #tf-data-service Use Internal errors during snapshot restoration.
These errors are due to program bugs, so make it clear that they are
server-side internal errors.
PiperOrigin-RevId: 573023277 | Yang Chen | yangchen@google.com | 1,697,148,439,000 |
731cb5831358c9b85d35b3798552cd36db64bb97 | Add dependent mhlo dialect to TPUAnnotateDynamicShapeInputsPass. Required as this pass sets mhlo.attributes for bounded shapes. This could cause a crash in MLIR if the dialect wasn't registered.
PiperOrigin-RevId: 573025807 | Mason Chang | masonchang@google.com | 1,697,149,066,000 |
166d4db25ed948cd15f20868ebe9d7a3c31c908e | Internal BUILD rule changes.
PiperOrigin-RevId: 573025941 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,149,104,000 |
9c671cec5625cfd1c53b7d2dbc062a27e0ee064e | [xla:gpu] Add CommandBufferThunk for executing command buffers via StreamExecutor #6242
PiperOrigin-RevId: 573029971 | Eugene Zhulenev | ezhulenev@google.com | 1,697,150,080,000 |
b46b4bcc44b7d496b23f7bb5326bc88df32364ab | [XLA:GPU] Fix segfault when accessing HloModule::config(). Don't use the const ref before running passes since some passes may destroy the old config object due to the recent copy-on-write change.
PiperOrigin-RevId: 573034218 | Ce Zheng | zce@google.com | 1,697,151,170,000 |
0f96a832a6b85c5c12bf0a31a85c3c02dacf2c38 | Add a pass that prints the current MLIR module.
PiperOrigin-RevId: 573038491 | Matthias Kramm | kramm@google.com | 1,697,152,249,000 |
d0a18229d0981e2c612aec3085205abe2aabe61d | #tf-data-service Snapshot manager recovers failed splits.
It fixes this error:
```
Found missing global split index, 1565, in /path/to/dataset_snapshot
```
If the dispatcher fails when writing a split, the split
should be recovered. Otherwise, the dispatcher cannot find
the split with the specified global index.
**Fix:**
Instead of skipping a split, rewrite the split to the split file.
PiperOrigin-RevId: 573048774 | Yang Chen | yangchen@google.com | 1,697,155,008,000 |
71d0b4ab8dcf98dff99a28c89a5e1c7819881ee1 | Upgrade the TensorFlow version at HEAD from 2.15 to 2.16
PiperOrigin-RevId: 573049235 | Raviteja Gorijala | gorijala@google.com | 1,697,155,141,000 |
cb7e91648889efd1c09c65707dc6108c47672bc6 | [PJRT C API] Relax the exact struct size check between jaxlib and the plugin to be greater than or equal to the minimum supported version.
This is guarded by an environment variable, and the default behavior is not changed.
Also remove the version check to use the PJRT_Api.pjrt_api_version and PJRT_Api::PJRT_Plugin_Initialize.
PiperOrigin-RevId: 573050839 | Jieying Luo | jieying@google.com | 1,697,155,559,000 |
03eb7ed2877d0a8ce5b8e5802ee7e9cb3832a86c | Redirect references from the framework target to the new single-source-file targets.
PiperOrigin-RevId: 573054623 | Juan Martinez Castellanos | juanantoniomc@google.com | 1,697,156,603,000 |
95646fc194a6bb561d943d6852fb24c156a6640c | Add uq->int conversion patterns for func.func and func.return in ConvertMhloQuantToInt pass
This allows using uq type in func ops and simplifying test cases. Also updated tests.
PiperOrigin-RevId: 573055849 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,157,008,000 |
86aa7ef960146497a436ffb0bc3369706f930a40 | [PJRT C API] Change CheckMatchingStructSizes to be ActualStructSizeIsGreaterOrEqual.
With this change, it is no longer required that the plugin has the same struct size as the framework. Because ActualStructSizeIsGreaterOrEqual is called in the plugin, this means it will check whether framework has a greater or equal struct size than the plugin (meaning the framework is newer than the plugin).
PiperOrigin-RevId: 573059095 | Jieying Luo | jieying@google.com | 1,697,158,071,000 |
1b6a515ed3cc09b8193d93c7c4c4a5f5eb534f7e | Only export XLA function names that correspond to root "XlaLaunch" op, not their nested functions.
PiperOrigin-RevId: 573064635 | Shixin Li | shixinli@google.com | 1,697,159,559,000 |
681f93af51cb90203ba323176fe6cda074391075 | Redirect references from the lib target to the new single-source-file targets.
PiperOrigin-RevId: 573066122 | Juan Martinez Castellanos | juanantoniomc@google.com | 1,697,160,059,000 |
defa3db494900879dd483a836fe1327541231842 | #tf-data-service Move atomic file writing code to file_utils.
PiperOrigin-RevId: 573076252 | Yang Chen | yangchen@google.com | 1,697,164,324,000 |
222bce1c8f02bc29d9c1ad3a0909c2c9a105eb31 | [xla:gpu] Break a dependency cycle between Thunk and GpuExecutable #5758
Because of GpuExecutable<->Thunk dependency cycle we can't split individual thunks into separate build targets.
PiperOrigin-RevId: 573080521 | Eugene Zhulenev | ezhulenev@google.com | 1,697,166,090,000 |
1b3c202ec02403336804bbf2eab31ee0f5c33e90 | [PJRT C API] Fixed pjrt_c_api_gpu and remove `noincompatible_remove_legacy_whole_archive`
PiperOrigin-RevId: 573094387 | Jieying Luo | jieying@google.com | 1,697,171,088,000 |
cb896bc1be5140848bdd6f577f4eb5a7b07da1bf | Collect GPU performance modeling code in a package.
Keeping it together in one place makes it easier to justify splitting large
targets up. `gpu_performance_model` in particular is starting to accumulate too
many responsibilities.
PiperOrigin-RevId: 573122863 | Johannes Reifferscheid | jreiffers@google.com | 1,697,180,699,000 |
b5953b857b7e8f4f0eb6e67ac7c3c40c9d4ddae3 | PR #5911: [ROCm] Unifying hip/cuda blas-lt APIs
Imported from GitHub PR https://github.com/openxla/xla/pull/5911
This is a follow-up PR for these two issues:
https://github.com/openxla/xla/pull/4406, https://github.com/openxla/xla/pull/3953
We unified hip/cuda blas-lt APIs by providing a common virtual interface defined in
xla/stream_executor/gpu/gpu_blas_lt.h/.cc with implementations in
xla/stream_executor/cuda/cuda_blas_lt.h/.cc and xla/stream_executor/rocm/hip_blas_lt.h/.cc, respectively.
The main design decision was that we made the class MatmulPlan (originally defined in xla/service/gpu/matmul_utils.h/.cc) **polymorphic** and moved it's interface declaration to gpu_blas_lt.h.
There are two reasons for that, namely:
1. MatmulPlan provided a public function **ExecuteOnStream** which was implemented in terms of conditional compilation
with macros '#if GOOGLE_CUDA' or '#if TF_HIPBLASLT' in order to integrate library-specific data-types. This function now becomes part of the gpu_blas_lt interface.
2. MatmulPlan contained a library-specific member variable 'plan_' of type 'se::gpu::BlasLt::MatmulPlan' which is basically a plain container of MatmulDesc and several MatrixLayouts. These underlying types are again BLASLT library-specific and are **never** used directly, hence there is no need to expose BlasLt::MatmulDesc and BlasLt::MatrixLayout in the public interface.
Besides ExecuteOnStream, the class MatmulPlan also provides a number of overloaded 'DoMatmul' member functions (some of them are template functions) which were extracted as a common part from the original BlasLt implementations. These DoMatmul functions are also required for the oncoming integration of Blas-lt interface into Tensorflow: see tensorflow\core\kernels\matmul_util.h/.cc.
We also extracted the library-specific argument type-checks from templated DoMatmul functions and moved them into a virtual function MatmulPlan::ValidateInputs().
The polymorphic class gpu::BlasLt (defined in gpu_blas_lt.h) is responsible for constructing the objects of type MatmulPlan, the rest blas-lt functionality is solely handled by MatmulPlan interface.
The instantiations of gpu::BlasLt interface, as before, are defined in xla/stream_executor/cuda/cuda_blas.h and xla/stream_executor/rocm/rocm_blas.h, respectively.
We have also tried to compile the code with TF_HIPBLASLT=0 to make sure it also works fine if no hipblas-lt is available.
@akuegel: can you perhaps have a look at our implementation ?
Copybara import of the project:
--
daea33c73b142340481360d020bab10c4d64c79d by Pavel Emeliyanenko <pavel.emeliyanenko@amd.com>:
Unifying hip/cuda blas-lt APIs
work in progress
ongoing work
make sure the code runs with TF_HIPBLASLT=0
adaptions for CUDA compile
moving BlasLt and related stuff to se::gpu namespace
hipblas_lt interface cleanup
adapted the last blas-lt inteface changes for CUDA
--
b4ff019b278dfc93c93f17eaab2eccd772852cd3 by Pavel Emeliyanenko <pavel.emeliyanenko@amd.com>:
protected code by TF_HIPBLASLT macro to make sure code builds without hipblas-lt too
--
7248f692e0ed1262f11ea8c370c0771e9539b342 by Pavel Emeliyanenko <pavel.emeliyanenko@amd.com>:
resolving conflicts
--
d48e6ee7bd320de421b7c870af744d1bca160d8b by Pavel Emeliyanenko <pavel.emeliyanenko@amd.com>:
appliyng reviewer changes
--
1d7cc54d3ce1df1ba6f798c659b4f87292425869 by Pavel Emeliyanenko <pavel.emeliyanenko@amd.com>:
rebased and adapted API for TF blas-lt part
Merging this change closes #5911
PiperOrigin-RevId: 573136621 | pemeliya | 141146080+pemeliya@users.noreply.github.com | 1,697,184,663,000 |
16350d08903ed5a4d53d4c9e3a3c8cbcdd7d02c9 | [XLA:GPU] Triton GEMM: enable more fusions of binary elementwise operations of broadcasts.
PiperOrigin-RevId: 573143351 | Ilia Sergachev | sergachev@google.com | 1,697,186,631,000 |
8d8bdcc5e2f24e1cbfdf25b01f926854c308a840 | compat: Update forward compatibility horizon to 2023-10-13
PiperOrigin-RevId: 573147381 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,187,723,000 |
d5fdbd144cbd7eaf67ded1c057e1bf04826423fd | Update GraphDef version to 1648.
PiperOrigin-RevId: 573147725 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,187,809,000 |
e73b1cb81ddaad82132e6247f82de98965209ac3 | Merge pull request #59533 from gaoyiyeah:audio_spectrogram
PiperOrigin-RevId: 573149223 | TensorFlower Gardener | gardener@tensorflow.org | 1,697,188,976,000 |
30b577f6b811a5e9e98afeb8944184d4423329d0 | Fix analysis of partial epilogue-fused reductions.
When the epilogue crosses a fusion instruction, we currently do
not analyze this correctly, because we are unable to follow edges
through fusion instructions (they are considered non-intermediate).
The fix is to reuse the logic from HloTraversal for these edges:
when encountering a parameter or fusion, continue outside or inside
the fusion, respectively.
PiperOrigin-RevId: 573151739 | Johannes Reifferscheid | jreiffers@google.com | 1,697,188,932,000 |
59fbd612c76b5fff191bde72c3cde799bb37e1b6 | Integrate LLVM at llvm/llvm-project@30faaaf62670
Updates LLVM usage to match
[30faaaf62670](https://github.com/llvm/llvm-project/commit/30faaaf62670)
PiperOrigin-RevId: 573152919 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,189,288,000 |
e0ff250e898f3a2e909ba210cc061d50164f2a52 | Internal Code Change
PiperOrigin-RevId: 573155529 | A. Unique TensorFlower | gardener@tensorflow.org | 1,697,190,052,000 |
1f58ffa2b518c1534748882e0cbede131508677a | Merge pull request #61745 from georgthegreat:master
PiperOrigin-RevId: 573155982 | TensorFlower Gardener | gardener@tensorflow.org | 1,697,190,824,000 |
999398db308da20497a510f173f320da4945bea6 | Add GetTestWorkspaceDir and ResolveTestPrefixes to path.h
They will be used in a future cl.
Unittests are not added because these functions depend on environment variables.
PiperOrigin-RevId: 573162204 | Tamás Danyluk | tdanyluk@google.com | 1,697,191,967,000 |
b2493fdf7943f02bb01558d9d5f4343a15ca44d6 | [XLA:GPU][NFC] Remove unused argument.
PiperOrigin-RevId: 573171515 | Ilia Sergachev | sergachev@google.com | 1,697,194,595,000 |
e7aa1cadef44349d57019401e54d4fb390bf48f1 | Add support for argmin/argmax of bool.
Use All and Any in place of Min and Max for bool.
PiperOrigin-RevId: 573188469 | Jonathan B. Coe | jbcoe@google.com | 1,697,199,907,000 |
c3b98ea5a8387eec21e808caa6e999417494e09a | Update TensorFlow built CUDA version to 12.2 in PyPi classifiers.
PiperOrigin-RevId: 573209600 | Ramesh Sampath | rameshsampath@google.com | 1,697,205,676,000 |