Dataset schema:
  data_type   large_string  (3 distinct values)
  source      large_string  (29 distinct values)
  code        large_string  (lengths 98 to 49.4M)
  filepath    large_string  (lengths 5 to 161)
  message     large_string  (234 distinct values)
  commit      large_string  (234 distinct values)
  subject     large_string  (418 distinct values)
  critique    large_string  (lengths 101 to 1.26M)
  metadata    dict
lkml_critique
lkml
Hi, This is based on today's linux.git. A git branch with this (plus a fix for a CLIPPY warning on a core Rust for Linux issue which I suspect others have already found and fixed) is here: https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5 This is quite a large overhaul, multiple passes to fix up a...
null
null
null
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
Add the FSP boot path for Hopper and Blackwell GPUs. These architectures use FSP with FMC firmware for Chain of Trust boot, rather than SEC2. boot() now dispatches to boot_via_sec2() or boot_via_fsp() based on architecture. The SEC2 path keeps its original command ordering. The FSP path sends SetSystemInfo/SetRegistry...
{ "author": "John Hubbard <jhubbard@nvidia.com>", "date": "Fri, 20 Feb 2026 18:09:50 -0800", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
lkml_critique
lkml
Hi, This is based on today's linux.git. A git branch with this (plus a fix for a CLIPPY warning on a core Rust for Linux issue which I suspect others have already found and fixed) is here: https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5 This is quite a large overhaul, multiple passes to fix up a...
null
null
null
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
Hopper, Blackwell and later GPUs require a larger heap for WPR2. Signed-off-by: John Hubbard <jhubbard@nvidia.com> --- drivers/gpu/nova-core/fb.rs | 2 +- drivers/gpu/nova-core/gsp/fw.rs | 74 ++++++++++++++++++++++++--------- 2 files changed, 55 insertions(+), 21 deletions(-) diff --git a/drivers/gpu/nova-core...
{ "author": "John Hubbard <jhubbard@nvidia.com>", "date": "Fri, 20 Feb 2026 18:09:46 -0800", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
lkml_critique
lkml
Hi, This is based on today's linux.git. A git branch with this (plus a fix for a CLIPPY warning on a core Rust for Linux issue which I suspect others have already found and fixed) is here: https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5 This is quite a large overhaul, multiple passes to fix up a...
null
null
null
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
On Hopper and Blackwell, FSP boots GSP with hardware lockdown enabled. After FSP Chain of Trust completes, the driver must poll for lockdown release before proceeding with GSP initialization. Add the register bit and helper functions needed for this polling. Cc: Gary Guo <gary@garyguo.net> Cc: Timur Tabi <ttabi@nvidia...
{ "author": "John Hubbard <jhubbard@nvidia.com>", "date": "Fri, 20 Feb 2026 18:09:48 -0800", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
lkml_critique
lkml
Hi, This is based on today's linux.git. A git branch with this (plus a fix for a CLIPPY warning on a core Rust for Linux issue which I suspect others have already found and fixed) is here: https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5 This is quite a large overhaul, multiple passes to fix up a...
null
null
null
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
Add dedicated FB HALs for Hopper (GH100) and Blackwell (GB100) with architecture-specific non-WPR heap sizes. Hopper uses 2 MiB, Blackwell uses 2 MiB + 128 KiB. These are needed for the larger reserved memory regions that Hopper/Blackwell GPUs require. Also adds the non_wpr_heap_size() method to the FbHal trait, and t...
{ "author": "John Hubbard <jhubbard@nvidia.com>", "date": "Fri, 20 Feb 2026 18:09:43 -0800", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
lkml_critique
lkml
Hi, This is based on today's linux.git. A git branch with this (plus a fix for a CLIPPY warning on a core Rust for Linux issue which I suspect others have already found and fixed) is here: https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5 This is quite a large overhaul, multiple passes to fix up a...
null
null
null
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
On Sat, Feb 21, 2026 at 3:11 AM John Hubbard <jhubbard@nvidia.com> wrote: Link: https://lore.kernel.org/rust-for-linux/20260206171253.2704684-2-gary@kernel.org/ [1] Ah, I thought you wanted to put this in `drivers/gpu/nova-core/num.rs` like in the previous version. If it is here instead, then you shouldn't need the...
{ "author": "Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>", "date": "Sat, 21 Feb 2026 21:50:38 +0100", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
lkml_critique
lkml
Hi, This is based on today's linux.git. A git branch with this (plus a fix for a CLIPPY warning on a core Rust for Linux issue which I suspect others have already found and fixed) is here: https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5 This is quite a large overhaul, multiple passes to fix up a...
null
null
null
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
On 2026-02-21 02:09, John Hubbard wrote: This is wrong. Either this function is always used in const context, in which case you take `ALIGN` as normal function parameter and use `build_assert` and `build_error`, or this function can be called from runtime and you shouldn't have a panic call here. Best, Gary
{ "author": "Gary Guo <gary@garyguo.net>", "date": "Sun, 22 Feb 2026 07:46:47 +0000", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
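The review comment above (a `panic!` in a helper that may be called at runtime vs. const context) can be sketched as follows. This is a hypothetical illustration of the pattern under discussion, not the actual nova-core code: the helper name, signature, and use of a const generic `ALIGN` are my assumptions. The point is that in const evaluation a failed `assert!` is rejected at compile time, while the same call at runtime would panic, which is why the review asks for the kernel's `build_assert!`/`build_error!` when the function is const-only.

```rust
/// Hypothetical sketch: align `value` up to the next multiple of `ALIGN`.
/// `ALIGN` is assumed to be a non-zero power of two; violating that trips
/// the assert, which under const evaluation becomes a compile-time error.
pub const fn align_up<const ALIGN: u64>(value: u64) -> u64 {
    assert!(ALIGN.is_power_of_two(), "ALIGN must be a power of two");
    (value + ALIGN - 1) & !(ALIGN - 1)
}

// Const context: the power-of-two check is evaluated at compile time.
const WPR2_HEAP: u64 = align_up::<4096>(5000);

fn main() {
    assert_eq!(WPR2_HEAP, 8192);
    // Runtime context: the same assert would panic at runtime instead of
    // failing the build -- the mismatch the review points out.
    assert_eq!(align_up::<8>(13), 16);
}
```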
lkml_critique
lkml
Hi, This is based on today's linux.git. A git branch with this (plus a fix for a CLIPPY warning on a core Rust for Linux issue which I suspect others have already found and fixed) is here: https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5 This is quite a large overhaul, multiple passes to fix up a...
null
null
null
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
On 2/21/26 12:50 PM, Miguel Ojeda wrote: Works for me. I was anticipating that people wanted it in rust/ but I'm perfectly happy to keep it local to nova-core. I see. thanks, -- John Hubbard
{ "author": "John Hubbard <jhubbard@nvidia.com>", "date": "Sun, 22 Feb 2026 11:03:08 -0800", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
lkml_critique
lkml
Hi, This is based on today's linux.git. A git branch with this (plus a fix for a CLIPPY warning on a core Rust for Linux issue which I suspect others have already found and fixed) is here: https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5 This is quite a large overhaul, multiple passes to fix up a...
null
null
null
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
On 2/21/26 11:46 PM, Gary Guo wrote: ... I will have another go at this, and put it in nova-core as per Miguel's comment as well. Thanks for catching this, Gary! thanks, -- John Hubbard
{ "author": "John Hubbard <jhubbard@nvidia.com>", "date": "Sun, 22 Feb 2026 11:04:53 -0800", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
lkml_critique
lkml
Hi, This is based on today's linux.git. A git branch with this (plus a fix for a CLIPPY warning on a core Rust for Linux issue which I suspect others have already found and fixed) is here: https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5 This is quite a large overhaul, multiple passes to fix up a...
null
null
null
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
On Sun, Feb 22, 2026 at 8:03 PM John Hubbard <jhubbard@nvidia.com> wrote: Sorry, I didn't mean you necessarily need to move it -- I only meant to point out that if you do, then you don't need the other changes. Cheers, Miguel
{ "author": "Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>", "date": "Sun, 22 Feb 2026 20:08:43 +0100", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
lkml_critique
lkml
Hi, This is based on today's linux.git. A git branch with this (plus a fix for a CLIPPY warning on a core Rust for Linux issue which I suspect others have already found and fixed) is here: https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5 This is quite a large overhaul, multiple passes to fix up a...
null
null
null
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
On Mon Feb 23, 2026 at 4:08 AM JST, Miguel Ojeda wrote: FWIW I think it makes more sense to keep it in `kernel` - even though Nova is the only user for now, this is a useful addition in general.
{ "author": "\"Alexandre Courbot\" <acourbot@nvidia.com>", "date": "Mon, 23 Feb 2026 12:36:12 +0900", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
lkml_critique
lkml
Hi, This is based on today's linux.git. A git branch with this (plus a fix for a CLIPPY warning on a core Rust for Linux issue which I suspect others have already found and fixed) is here: https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5 This is quite a large overhaul, multiple passes to fix up a...
null
null
null
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
On Sun Feb 22, 2026 at 8:04 PM CET, John Hubbard wrote: I think the most common case is that ALIGN is const, but value is not. What about keeping the function as is (with the panic() replaced with a Result) and also add #[inline(always)] pub const fn const_expect<T: Copy>(opt: Result<T>, &'static str) -> T { ...
{ "author": "\"Danilo Krummrich\" <dakr@kernel.org>", "date": "Mon, 23 Feb 2026 12:07:14 +0100", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
lkml_critique
lkml
Hi, This is based on today's linux.git. A git branch with this (plus a fix for a CLIPPY warning on a core Rust for Linux issue which I suspect others have already found and fixed) is here: https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5 This is quite a large overhaul, multiple passes to fix up a...
null
null
null
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
On Fri, Feb 20, 2026 at 06:09:35PM -0800, John Hubbard wrote: Note that Rust Binder's ptr_align could use this if you want another user. Alice
{ "author": "Alice Ryhl <aliceryhl@google.com>", "date": "Mon, 23 Feb 2026 11:23:44 +0000", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
lkml_critique
lkml
Hi, This is based on today's linux.git. A git branch with this (plus a fix for a CLIPPY warning on a core Rust for Linux issue which I suspect others have already found and fixed) is here: https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5 This is quite a large overhaul, multiple passes to fix up a...
null
null
null
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
On 2026-02-23 11:07, Danilo Krummrich wrote: We already have `Alignable::align_up` for non-const cases, so this would only be used in const context, and I don't see the need for an explicit const_expect. Best, Gary
{ "author": "Gary Guo <gary@garyguo.net>", "date": "Mon, 23 Feb 2026 14:16:55 +0000", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
lkml_critique
lkml
Hi, This is based on today's linux.git. A git branch with this (plus a fix for a CLIPPY warning on a core Rust for Linux issue which I suspect others have already found and fixed) is here: https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5 This is quite a large overhaul, multiple passes to fix up a...
null
null
null
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
On Mon Feb 23, 2026 at 3:16 PM CET, Gary Guo wrote: Fair enough -- unfortunate we can't call this from const context.
{ "author": "\"Danilo Krummrich\" <dakr@kernel.org>", "date": "Mon, 23 Feb 2026 15:20:40 +0100", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
lkml_critique
lkml
Hi, This is based on today's linux.git. A git branch with this (plus a fix for a CLIPPY warning on a core Rust for Linux issue which I suspect others have already found and fixed) is here: https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5 This is quite a large overhaul, multiple passes to fix up a...
null
null
null
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
On 2/21/26 3:09 AM, John Hubbard wrote: Applied to drm-rust-next, thanks! [ Use LKMM atomics; inline and slightly reword TODO comment. - Danilo ]
{ "author": "Danilo Krummrich <dakr@kernel.org>", "date": "Tue, 24 Feb 2026 15:47:53 +0100", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
lkml_critique
lkml
Hi, This is based on today's linux.git. A git branch with this (plus a fix for a CLIPPY warning on a core Rust for Linux issue which I suspect others have already found and fixed) is here: https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5 This is quite a large overhaul, multiple passes to fix up a...
null
null
null
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
On Tue Feb 24, 2026 at 2:47 PM GMT, Danilo Krummrich wrote: Danilo, can you drop this patch from drm-rust-next? The patch that is supposed to be queued is https://lore.kernel.org/rust-for-linux/20260205221758.219192-1-jhubbard@nvidia.com/#t, which does correctly use LKMM atomics and add comments about possible use of...
{ "author": "\"Gary Guo\" <gary@garyguo.net>", "date": "Fri, 27 Feb 2026 15:37:31 +0000", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
lkml_critique
lkml
Hi, This is based on today's linux.git. A git branch with this (plus a fix for a CLIPPY warning on a core Rust for Linux issue which I suspect others have already found and fixed) is here: https://github.com/johnhubbard/linux/tree/nova-core-blackwell-v5 This is quite a large overhaul, multiple passes to fix up a...
null
null
null
[PATCH v5 00/38] gpu: nova-core: firmware: Hopper/Blackwell support
On Fri Feb 27, 2026 at 3:37 PM GMT, Gary Guo wrote: Hmm, actually this patch contains the updated comment but somehow has the LKMM atomics changed back to Rust atomics. Not sure what happened. Anyhow that patch should be picked instead. Best, Gary
{ "author": "\"Gary Guo\" <gary@garyguo.net>", "date": "Fri, 27 Feb 2026 15:41:20 +0000", "is_openbsd": false, "thread_id": "DGPUWPJCFPZH.4NVAAQS1I6HR@garyguo.net.mbox.gz" }
lkml_critique
lkml
Add reference counting using kref to the fastrpc_user structure to prevent use-after-free issues when contexts are freed from workqueue after device release. The issue occurs when fastrpc_device_release() frees the user structure while invoke contexts are still pending in the workqueue. When the workqueue later calls ...
null
null
null
[PATCH v1] misc: fastrpc: Add reference counting for fastrpc_user structure
On Thu, Feb 26, 2026 at 08:41:21PM +0530, Anandu Krishnan E wrote: Please follow https://docs.kernel.org/process/submitting-patches.html#describe-your-changes and start your commit message by clearly establishing the problem, once that's done you can describe the technical solution. But why does it do that? The rea...
{ "author": "Bjorn Andersson <andersson@kernel.org>", "date": "Thu, 26 Feb 2026 11:50:11 -0600", "is_openbsd": false, "thread_id": "07d585fe-dfd1-41c9-9c58-b2f9893e572e@oss.qualcomm.com.mbox.gz" }
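The use-after-free pattern in this row (device release freeing `fastrpc_user` while queued contexts still reference it) is C code using `kref`; as a rough analogue only, the invariant can be sketched in Rust with `Arc`, where each pending work item holds its own reference so the object cannot be freed underneath it. The struct and field names here are illustrative, not from the driver.

```rust
use std::sync::Arc;
use std::thread;

// Stand-in for fastrpc_user: the object whose lifetime is in question.
struct User {
    id: u32,
}

fn main() {
    let user = Arc::new(User { id: 7 });

    // The "queued work" takes its own reference, the way the proposed fix
    // has each pending invoke context hold a kref on fastrpc_user.
    let for_work = Arc::clone(&user);
    let worker = thread::spawn(move || for_work.id);

    // "Device release" drops its reference; the object survives because
    // the worker still holds one, so no use-after-free is possible.
    drop(user);
    assert_eq!(worker.join().unwrap(), 7);
}
```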
lkml_critique
lkml
Add reference counting using kref to the fastrpc_user structure to prevent use-after-free issues when contexts are freed from workqueue after device release. The issue occurs when fastrpc_device_release() frees the user structure while invoke contexts are still pending in the workqueue. When the workqueue later calls ...
null
null
null
[PATCH v1] misc: fastrpc: Add reference counting for fastrpc_user structure
On 2/26/2026 11:20 PM, Bjorn Andersson wrote: Sure, I will update the commit message and send it as patch v2. I agree with the refactoring direction you're suggesting, and separating the address spaces does make the driver easier to reason about. That said, the UAF isn't limited to the buffer free path. fastrpc_context_fr...
{ "author": "Anandu Krishnan E <anandu.e@oss.qualcomm.com>", "date": "Fri, 27 Feb 2026 19:52:00 +0530", "is_openbsd": false, "thread_id": "07d585fe-dfd1-41c9-9c58-b2f9893e572e@oss.qualcomm.com.mbox.gz" }
lkml_critique
lkml
File-backed Large folios were initially implemented with dependencies on Transparent Huge Pages (THP) infrastructure. As large folio adoption expanded across the kernel, CONFIG_TRANSPARENT_HUGEPAGE has become an overloaded configuration option, sometimes used as a proxy for large folio support [1][2][3]. This series i...
null
null
null
[RFC v2 0/3] Decoupling large folios dependency on THP
Large folios in the page cache depend on the splitting infrastructure from THP. To remove the dependency between large folios and CONFIG_TRANSPARENT_HUGEPAGE, set the min order == max order if THP is disabled. This will make sure the splitting code will not be required when THP is disabled, therefore, removing the depe...
{ "author": "Pankaj Raghav <p.raghav@samsung.com>", "date": "Sat, 6 Dec 2025 04:08:56 +0100", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
lkml_critique
lkml
File-backed Large folios were initially implemented with dependencies on Transparent Huge Pages (THP) infrastructure. As large folio adoption expanded across the kernel, CONFIG_TRANSPARENT_HUGEPAGE has become an overloaded configuration option, sometimes used as a proxy for large folio support [1][2][3]. This series i...
null
null
null
[RFC v2 0/3] Decoupling large folios dependency on THP
When THP is disabled, file-backed large folios max order is capped to the min order to avoid using the splitting infrastructure. Currently, splitting calls will create a warning when called with THP disabled. But splitting call does not have to do anything when min order is same as the folio order. So skip the warnin...
{ "author": "Pankaj Raghav <p.raghav@samsung.com>", "date": "Sat, 6 Dec 2025 04:08:57 +0100", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
lkml_critique
lkml
File-backed Large folios were initially implemented with dependencies on Transparent Huge Pages (THP) infrastructure. As large folio adoption expanded across the kernel, CONFIG_TRANSPARENT_HUGEPAGE has become an overloaded configuration option, sometimes used as a proxy for large folio support [1][2][3]. This series i...
null
null
null
[RFC v2 0/3] Decoupling large folios dependency on THP
Now that dependency between CONFIG_TRANSPARENT_HUGEPAGES and large folios are removed, enable LBS devices even when THP config is disabled. Signed-off-by: Pankaj Raghav <p.raghav@samsung.com> --- include/linux/blkdev.h | 5 ----- 1 file changed, 5 deletions(-) diff --git a/include/linux/blkdev.h b/include/linux/blkd...
{ "author": "Pankaj Raghav <p.raghav@samsung.com>", "date": "Sat, 6 Dec 2025 04:08:58 +0100", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
lkml_critique
lkml
File-backed Large folios were initially implemented with dependencies on Transparent Huge Pages (THP) infrastructure. As large folio adoption expanded across the kernel, CONFIG_TRANSPARENT_HUGEPAGE has become an overloaded configuration option, sometimes used as a proxy for large folio support [1][2][3]. This series i...
null
null
null
[RFC v2 0/3] Decoupling large folios dependency on THP
On 12/6/25 04:08, Pankaj Raghav wrote: The description is actually misleading. It's not that you remove the dependency from THP for large folios _in general_ (the CONFIG_THP is retained in this patch). Rather you remove the dependency for large folios _for the block layer_. And that should be made explicit in the descr...
{ "author": "Hannes Reinecke <hare@suse.de>", "date": "Tue, 9 Dec 2025 08:45:46 +0100", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
lkml_critique
lkml
File-backed Large folios were initially implemented with dependencies on Transparent Huge Pages (THP) infrastructure. As large folio adoption expanded across the kernel, CONFIG_TRANSPARENT_HUGEPAGE has become an overloaded configuration option, sometimes used as a proxy for large folio support [1][2][3]. This series i...
null
null
null
[RFC v2 0/3] Decoupling large folios dependency on THP
On 5 Dec 2025, at 22:08, Pankaj Raghav wrote: But are large folios really created? IIUC, in do_sync_mmap_readahead(), when THP is disabled, force_thp_readahead is never set to true and later ra->order is set to 0. Oh, page_cache_ra_order() later bumps new_order to mapping_min_folio_order(). So large folios are creat...
{ "author": "Zi Yan <ziy@nvidia.com>", "date": "Tue, 09 Dec 2025 11:03:23 -0500", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
lkml_critique
lkml
File-backed Large folios were initially implemented with dependencies on Transparent Huge Pages (THP) infrastructure. As large folio adoption expanded across the kernel, CONFIG_TRANSPARENT_HUGEPAGE has become an overloaded configuration option, sometimes used as a proxy for large folio support [1][2][3]. This series i...
null
null
null
[RFC v2 0/3] Decoupling large folios dependency on THP
On 12/9/25 13:15, Hannes Reinecke wrote: Hmm, that is not what I am doing. This has nothing to do with the block layer directly. I mentioned this in the cover letter but I can reiterate it again. Large folios depended on THP infrastructure when it was introduced. When we added LBS support to the block layer, we...
{ "author": "Pankaj Raghav <kernel@pankajraghav.com>", "date": "Tue, 9 Dec 2025 22:03:40 +0530", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
lkml_critique
lkml
File-backed Large folios were initially implemented with dependencies on Transparent Huge Pages (THP) infrastructure. As large folio adoption expanded across the kernel, CONFIG_TRANSPARENT_HUGEPAGE has become an overloaded configuration option, sometimes used as a proxy for large folio support [1][2][3]. This series i...
null
null
null
[RFC v2 0/3] Decoupling large folios dependency on THP
On 12/9/25 17:33, Pankaj Raghav wrote: Yes, and no. That patch limited the maximum blocksize without THP to 4k, so effectively disabling LBS. But this is what I meant. We do _not_ disable the dependency on THP for LBS, we just remove checks for the config option itself in the block layer code. The actual dependency o...
{ "author": "Hannes Reinecke <hare@suse.de>", "date": "Wed, 10 Dec 2025 01:38:09 +0100", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
lkml_critique
lkml
File-backed Large folios were initially implemented with dependencies on Transparent Huge Pages (THP) infrastructure. As large folio adoption expanded across the kernel, CONFIG_TRANSPARENT_HUGEPAGE has become an overloaded configuration option, sometimes used as a proxy for large folio support [1][2][3]. This series i...
null
null
null
[RFC v2 0/3] Decoupling large folios dependency on THP
On Tue, Dec 09, 2025 at 11:03:23AM -0500, Zi Yan wrote: I think this is the key question to be discussed at LPC. How much of the current THP code should we say "OK, this is large folio support and everybody needs it" and how much is "This is PMD (or mTHP) support; this architecture doesn't have it, we don't need to c...
{ "author": "Matthew Wilcox <willy@infradead.org>", "date": "Wed, 10 Dec 2025 04:27:17 +0000", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
lkml_critique
lkml
File-backed Large folios were initially implemented with dependencies on Transparent Huge Pages (THP) infrastructure. As large folio adoption expanded across the kernel, CONFIG_TRANSPARENT_HUGEPAGE has become an overloaded configuration option, sometimes used as a proxy for large folio support [1][2][3]. This series i...
null
null
null
[RFC v2 0/3] Decoupling large folios dependency on THP
On 9 Dec 2025, at 23:27, Matthew Wilcox wrote: I am not going, so would like to get a summary afterwards. :) I agree with most of it, except mTHP part. mTHP should be part of large folio, since I see mTHP is anon equivalent to file backed large folio. Both are a >0 order folio mapped by PTEs (ignoring to-be-impleme...
{ "author": "Zi Yan <ziy@nvidia.com>", "date": "Wed, 10 Dec 2025 11:37:51 -0500", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
lkml_critique
lkml
File-backed Large folios were initially implemented with dependencies on Transparent Huge Pages (THP) infrastructure. As large folio adoption expanded across the kernel, CONFIG_TRANSPARENT_HUGEPAGE has become an overloaded configuration option, sometimes used as a proxy for large folio support [1][2][3]. This series i...
null
null
null
[RFC v2 0/3] Decoupling large folios dependency on THP
On Wed, Dec 10, 2025 at 11:37:51AM -0500, Zi Yan wrote: You can join the fun at meet.lpc.events, or there's apparently a youtube stream. Maybe we disagree about what words mean ;-) When I said "mTHP" what I meant was "support for TLB entries which cover more than one page". I have no objection to supporting large f...
{ "author": "Matthew Wilcox <willy@infradead.org>", "date": "Thu, 11 Dec 2025 07:37:57 +0000", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
lkml_critique
lkml
File-backed Large folios were initially implemented with dependencies on Transparent Huge Pages (THP) infrastructure. As large folio adoption expanded across the kernel, CONFIG_TRANSPARENT_HUGEPAGE has become an overloaded configuration option, sometimes used as a proxy for large folio support [1][2][3]. This series i...
null
null
null
[RFC v2 0/3] Decoupling large folios dependency on THP
On Sat, Dec 06, 2025 at 04:08:55AM +0100, Pankaj Raghav wrote: Here's an argument. The one remaining caller of add_to_page_cache_lru() is ramfs_nommu_expand_for_mapping(). Attached is a patch which eliminates it ... but it doesn't compile because folio_split() is undefined on nommu. So either we need to reimplement...
{ "author": "Matthew Wilcox <willy@infradead.org>", "date": "Fri, 27 Feb 2026 05:31:38 +0000", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
lkml_critique
lkml
File-backed Large folios were initially implemented with dependencies on Transparent Huge Pages (THP) infrastructure. As large folio adoption expanded across the kernel, CONFIG_TRANSPARENT_HUGEPAGE has become an overloaded configuration option, sometimes used as a proxy for large folio support [1][2][3]. This series i...
null
null
null
[RFC v2 0/3] Decoupling large folios dependency on THP
On 2/27/26 06:31, Matthew Wilcox wrote: I guess it would be rather trivial to just replace add_to_page_cache_lru() by filemap_add_folio() in below code. In the current code base that should work just great unless I am missing something important. folio splitting usually involves unmapping pages, which is rather cum...
{ "author": "\"David Hildenbrand (Arm)\" <david@kernel.org>", "date": "Fri, 27 Feb 2026 09:45:07 +0100", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
lkml_critique
lkml
File-backed Large folios were initially implemented with dependencies on Transparent Huge Pages (THP) infrastructure. As large folio adoption expanded across the kernel, CONFIG_TRANSPARENT_HUGEPAGE has become an overloaded configuration option, sometimes used as a proxy for large folio support [1][2][3]. This series i...
null
null
null
[RFC v2 0/3] Decoupling large folios dependency on THP
On Fri, Feb 27, 2026 at 09:45:07AM +0100, David Hildenbrand (Arm) wrote: In the Ottawa interpretation, that's true, but I'd prefer not to revisit this code when transitioning to the New York interpretation. This is the NOMMU code after all, and the less time we spend on it, the better. Depending on your point of vi...
{ "author": "Matthew Wilcox <willy@infradead.org>", "date": "Fri, 27 Feb 2026 15:19:54 +0000", "is_openbsd": false, "thread_id": "aaG2GkICML-St3B4@casper.infradead.org.mbox.gz" }
lkml_critique
lkml
The DMA controller can be used to transfer data to and from the SPI controller without involving the CPU for each word of a SPI transfer. Add support for DMA mode, and do some other cleanups while touching the same code. The dts changes in this series depend on the DMA series [1]. [1]: https://lore.kernel.org/lkml/2...
null
null
null
[PATCH v3 0/3] Add DMA support for RZ/T2H RSPI
All supported SoCs have multiple DMA controllers that can be used with the RSPI peripheral. The current bindings only allow a single pair of RX and TX DMAs. The DMA core allows specifying multiple DMAs with the same name, and it will pick the first available one. There is an exception in the base dt-schema rules spec...
{ "author": "Cosmin Tanislav <cosmin-gabriel.tanislav.xa@renesas.com>", "date": "Wed, 28 Jan 2026 23:51:30 +0200", "is_openbsd": false, "thread_id": "20260128215132.1353381-1-cosmin-gabriel.tanislav.xa@renesas.com.mbox.gz" }
lkml_critique
lkml
The DMA controller can be used to transfer data to and from the SPI controller without involving the CPU for each word of a SPI transfer. Add support for DMA mode, and do some other cleanups while touching the same code. The dts changes in this series depend on the DMA series [1]. [1]: https://lore.kernel.org/lkml/2...
null
null
null
[PATCH v3 0/3] Add DMA support for RZ/T2H RSPI
RZ/N2H (R9A09G087) has three DMA controllers that can be used by peripherals like SPI to offload data transfers from the CPU. Wire up the DMA channels for the SPI peripherals. Signed-off-by: Cosmin Tanislav <cosmin-gabriel.tanislav.xa@renesas.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> --- V3: * ...
{ "author": "Cosmin Tanislav <cosmin-gabriel.tanislav.xa@renesas.com>", "date": "Wed, 28 Jan 2026 23:51:32 +0200", "is_openbsd": false, "thread_id": "20260128215132.1353381-1-cosmin-gabriel.tanislav.xa@renesas.com.mbox.gz" }
lkml_critique
lkml
The DMA controller can be used to transfer data to and from the SPI controller without involving the CPU for each word of a SPI transfer. Add support for DMA mode, and do some other cleanups while touching the same code. The dts changes in this series depend on the DMA series [1]. [1]: https://lore.kernel.org/lkml/2...
null
null
null
[PATCH v3 0/3] Add DMA support for RZ/T2H RSPI
RZ/T2H (R9A09G077) has three DMA controllers that can be used by peripherals like SPI to offload data transfers from the CPU. Wire up the DMA channels for the SPI peripherals. Signed-off-by: Cosmin Tanislav <cosmin-gabriel.tanislav.xa@renesas.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> --- V3: * ...
{ "author": "Cosmin Tanislav <cosmin-gabriel.tanislav.xa@renesas.com>", "date": "Wed, 28 Jan 2026 23:51:31 +0200", "is_openbsd": false, "thread_id": "20260128215132.1353381-1-cosmin-gabriel.tanislav.xa@renesas.com.mbox.gz" }
lkml_critique
lkml
The DMA controller can be used to transfer data to and from the SPI controller without involving the CPU for each word of a SPI transfer. Add support for DMA mode, and do some other cleanups while touching the same code. The dts changes in this series depend on the DMA series [1]. [1]: https://lore.kernel.org/lkml/2...
null
null
null
[PATCH v3 0/3] Add DMA support for RZ/T2H RSPI
On Wed, Jan 28, 2026 at 11:51:30PM +0200, Cosmin Tanislav wrote: What's the rationale behind not setting minItems to 6 here and to 10 here? Do any of the spi controllers on these SoCs not have the ability to use all of the available dma controllers?
{ "author": "Conor Dooley <conor@kernel.org>", "date": "Thu, 29 Jan 2026 17:44:59 +0000", "is_openbsd": false, "thread_id": "20260128215132.1353381-1-cosmin-gabriel.tanislav.xa@renesas.com.mbox.gz" }
lkml_critique
lkml
The DMA controller can be used to transfer data to and from the SPI controller without involving the CPU for each word of a SPI transfer. Add support for DMA mode, and do some other cleanups while touching the same code. The dts changes in this series depend on the DMA series [1]. [1]: https://lore.kernel.org/lkml/2...
null
null
null
[PATCH v3 0/3] Add DMA support for RZ/T2H RSPI
I left minItems at 2 in case it is necessary to wire up SPI to only a subset of the DMA controllers, maybe for performance reasons in a board-specific dts? I know that dts is only supposed to describe the hardware itself, but for now this would be the only way to pre-select which DMA controller is used for a specific ...
{ "author": "Cosmin-Gabriel Tanislav <cosmin-gabriel.tanislav.xa@renesas.com>", "date": "Thu, 29 Jan 2026 17:55:21 +0000", "is_openbsd": false, "thread_id": "20260128215132.1353381-1-cosmin-gabriel.tanislav.xa@renesas.com.mbox.gz" }
lkml_critique
lkml
The DMA controller can be used to transfer data to and from the SPI controller without involving the CPU for each word of a SPI transfer. Add support for DMA mode, and do some other cleanups while touching the same code. The dts changes in this series depend on the DMA series [1]. [1]: https://lore.kernel.org/lkml/2...
null
null
null
[PATCH v3 0/3] Add DMA support for RZ/T2H RSPI
On Thu, Jan 29, 2026 at 05:55:21PM +0000, Cosmin-Gabriel Tanislav wrote: Yeah, I can buy that argument. Acked-by: Conor Dooley <conor.dooley@microchip.com> pw-bot: not-applicable
{ "author": "Conor Dooley <conor@kernel.org>", "date": "Thu, 29 Jan 2026 18:03:37 +0000", "is_openbsd": false, "thread_id": "20260128215132.1353381-1-cosmin-gabriel.tanislav.xa@renesas.com.mbox.gz" }
lkml_critique
lkml
The DMA controller can be used to transfer data to and from the SPI controller without involving the CPU for each word of a SPI transfer. Add support for DMA mode, and do some other cleanups while touching the same code. The dts changes in this series depend on the DMA series [1]. [1]: https://lore.kernel.org/lkml/2...
null
null
null
[PATCH v3 0/3] Add DMA support for RZ/T2H RSPI
On 28/01/2026 22:51, Cosmin Tanislav wrote: As pointed out by Renesas, this is not correct or finished. I don't understand why Renesas people don't review THEIR own code instead, but send a patch correcting other un-merged patch. Really, start working on each other submissions. NAK Best regards, Krzysztof
{ "author": "Krzysztof Kozlowski <krzk@kernel.org>", "date": "Wed, 18 Feb 2026 08:49:48 +0100", "is_openbsd": false, "thread_id": "20260128215132.1353381-1-cosmin-gabriel.tanislav.xa@renesas.com.mbox.gz" }
lkml_critique
lkml
The DMA controller can be used to transfer data to and from the SPI controller without involving the CPU for each word of a SPI transfer. Add support for DMA mode, and do some other cleanups while touching the same code. The dts changes in this series depend on the DMA series [1]. [1]: https://lore.kernel.org/lkml/2...
null
null
null
[PATCH v3 0/3] Add DMA support for RZ/T2H RSPI
On Wed, 28 Jan 2026 23:51:29 +0200, Cosmin Tanislav wrote: Applied to https://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi.git for-next Thanks! [1/3] dt-bindings: spi: renesas,rzv2h-rspi: allow multiple DMAs commit: 4d28f38f64ef69ab27839069ef3346c3c878d137 All being well this means that it will be ...
{ "author": "Mark Brown <broonie@kernel.org>", "date": "Wed, 25 Feb 2026 19:07:41 +0000", "is_openbsd": false, "thread_id": "20260128215132.1353381-1-cosmin-gabriel.tanislav.xa@renesas.com.mbox.gz" }
lkml_critique
lkml
The DMA controller can be used to transfer data to and from the SPI controller without involving the CPU for each word of a SPI transfer. Add support for DMA mode, and do some other cleanups while touching the same code. The dts changes in this series depend on the DMA series [1]. [1]: https://lore.kernel.org/lkml/2...
null
null
null
[PATCH v3 0/3] Add DMA support for RZ/T2H RSPI
On Wed, 28 Jan 2026 at 22:52, Cosmin Tanislav <cosmin-gabriel.tanislav.xa@renesas.com> wrote: Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> i.e. will queue in renesas-devel for v7.1. Gr{oetje,eeting}s, Geert -- Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@lin...
{ "author": "Geert Uytterhoeven <geert@linux-m68k.org>", "date": "Fri, 27 Feb 2026 15:54:34 +0100", "is_openbsd": false, "thread_id": "20260128215132.1353381-1-cosmin-gabriel.tanislav.xa@renesas.com.mbox.gz" }
lkml_critique
lkml
The DMA controller can be used to transfer data to and from the SPI controller without involving the CPU for each word of a SPI transfer. Add support for DMA mode, and do some other cleanups while touching the same code. The dts changes in this series depend on the DMA series [1]. [1]: https://lore.kernel.org/lkml/2...
null
null
null
[PATCH v3 0/3] Add DMA support for RZ/T2H RSPI
On Wed, 28 Jan 2026 at 22:52, Cosmin Tanislav <cosmin-gabriel.tanislav.xa@renesas.com> wrote: Thanks, will queue in renesas-devel for v7.1. Gr{oetje,eeting}s, Geert -- Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org In personal conversations with technical p...
{ "author": "Geert Uytterhoeven <geert@linux-m68k.org>", "date": "Fri, 27 Feb 2026 15:55:39 +0100", "is_openbsd": false, "thread_id": "20260128215132.1353381-1-cosmin-gabriel.tanislav.xa@renesas.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
We want isolation of misplaced folios to work in contexts where VMA isn't available, typically when performing migrations from a kernel thread context. In order to prepare for that, allow migrate_misplaced_folio_prepare() to be called with a NULL VMA. When migrate_misplaced_folio_prepare() is called with non-NULL VMA,...
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Thu, 29 Jan 2026 20:10:34 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
From: Gregory Price <gourry@gourry.net> Tiered memory systems often require migrating multiple folios at once. Currently, migrate_misplaced_folio() handles only one folio per call, which is inefficient for batch operations. This patch introduces migrate_misplaced_folios_batch(), a batch variant that leverages migrate_...
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Thu, 29 Jan 2026 20:10:35 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
This introduces a subsystem for collecting memory access information from different sources. It maintains the hotness information based on the access history and time of access. Additionally, it provides per-lower-tier-node kernel threads (named kmigrated) that periodically promote the pages that are eligible for prom...
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Thu, 29 Jan 2026 20:10:36 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
By default, one byte per PFN is used to store hotness information. A limited number of bits is used to store the access time, leading to coarse-grained time tracking. Also there aren't enough bits to track the toptier NID explicitly and hence the default target_nid is used for promotion. This precise mode relaxes the ab...
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Thu, 29 Jan 2026 20:10:37 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
Currently hot page promotion (NUMA_BALANCING_MEMORY_TIERING mode of NUMA Balancing) does hot page detection (via hint faults), hot page classification and eventual promotion, all by itself and sits within the scheduler. With pghot, the new hot page tracking and promotion mechanism being available, NUMA Balancing can l...
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Thu, 29 Jan 2026 20:10:38 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
Use the IBS (Instruction Based Sampling) feature present in AMD processors for memory access tracking. The access information obtained from IBS via NMI is fed to the pghot sub-system for further action. In addition to much other information related to the memory access, IBS provides physical (and virtual) address of the access...
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Thu, 29 Jan 2026 20:10:39 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
Enable IBS memory access data collection for user memory accesses by programming the required MSRs. The profiling is turned ON only for user mode execution and turned OFF for kernel mode execution. Profiling is explicitly disabled for NMI handler too. TODOs: - IBS sampling rate is kept fixed for now. - Arch/vendor se...
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Thu, 29 Jan 2026 20:10:40 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
From: Kinsey Ho <kinseyho@google.com> Refactor the existing MGLRU page table walking logic to make it resumable. Additionally, introduce two hooks into the MGLRU page table walk: accessed callback and flush callback. The accessed callback is called for each accessed page detected via the scanned accessed bit. The flu...
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Thu, 29 Jan 2026 20:10:41 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
From: Kinsey Ho <kinseyho@google.com> Introduce a new kernel daemon, klruscand, that periodically invokes the MGLRU page table walk. It leverages the new callbacks to gather access information and forwards it to pghot sub-system for promotion decisions. This benefits from reusing the existing MGLRU page table walk in...
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Thu, 29 Jan 2026 20:10:42 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
Unmapped page cache pages that end up in lower tiers don't get promoted easily. There were attempts to identify such pages and get them promoted as part of NUMA Balancing earlier [1]. The same idea is taken forward here by using folio_mark_accessed() as a source of hotness. Lower tier accesses from folio_mark_accessed...
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Thu, 29 Jan 2026 20:10:43 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On 29-Jan-26 8:10 PM, Bharata B Rao wrote: Here is the first set of results from a microbenchmark: Test system details ------------------- 3 node AMD Zen5 system with 2 regular NUMA nodes (0, 1) and a CXL node (2) $ numactl -H available: 3 nodes (0-2) node 0 cpus: 0-95,192-287 node 0 size: 128460 MB node 1 cpus: 96-...
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Mon, 9 Feb 2026 08:55:44 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On 29-Jan-26 8:10 PM, Bharata B Rao wrote: Numbers from redis-memtier benchmark: Test system details ------------------- 3 node AMD Zen5 system with 2 regular NUMA nodes (0, 1) and a CXL node (2) $ numactl -H available: 3 nodes (0-2) node 0 cpus: 0-95,192-287 node 0 size: 128460 MB node 1 cpus: 96-191,288-383 node 1...
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Mon, 9 Feb 2026 09:00:48 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On 29-Jan-26 8:10 PM, Bharata B Rao wrote: Here are Graph500 numbers for the hint fault source: Test system details ------------------- 3 node AMD Zen5 system with 2 regular NUMA nodes (0, 1) and a CXL node (2) $ numactl -H available: 3 nodes (0-2) node 0 cpus: 0-95,192-287 node 0 size: 128460 MB node 1 cpus: 96-191...
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Wed, 11 Feb 2026 21:00:26 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On 29-Jan-26 8:10 PM, Bharata B Rao wrote: We should hold a folio reference before the above call which will isolate the folio from LRU. Otherwise we may hit VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio) in folio_isolate_lru(). I hit this only when running Graph500 benchmark and have fixed it in the github at: ht...
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Wed, 11 Feb 2026 21:10:23 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On Wed, Feb 11, 2026 at 09:00:26PM +0530, Bharata B Rao wrote: Can you contextualize TEPS? Higher better? Higher worse? etc. Unfamiliar with this benchmark. ~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Wed, 11 Feb 2026 11:04:42 -0500", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On Wed, Feb 11, 2026 at 09:00:26PM +0530, Bharata B Rao wrote: Lacking access-nid data, maybe it's better to select a random (or round-robin) node in the upper tier? That would at least approach 1/N accuracy in promotion for most access patterns. ~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Wed, 11 Feb 2026 11:06:57 -0500", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On Wed, Feb 11, 2026 at 09:10:23PM +0530, Bharata B Rao wrote: Also relevant note from other work I'm doing, we may want a fast-out for zone-device folios here. We should not bother tracking those at all. (this may also become relevant for private-node memory as well, but I may try to generalize zone_device & privat...
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Wed, 11 Feb 2026 11:08:59 -0500", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On 11-Feb-26 9:38 PM, Gregory Price wrote: Yes, zone device folios aren't tracked by pghot. They get discarded by pghot_record_access() itself. Good. Regards, Bharata.
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Thu, 12 Feb 2026 07:33:43 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On 11-Feb-26 9:34 PM, Gregory Price wrote: In the Graph500 benchmark, higher TEPS (Traversed Edges Per Second) values are better. Regards, Bharata.
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Thu, 12 Feb 2026 07:46:34 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On 11-Feb-26 9:00 PM, Bharata B Rao wrote: These numbers are from scenario where demotion is present: ============================================= Over-committed scenario, promotion + demotion ============================================= Command: mpirun -n 128 --bind-to core --map-by core /home/bharata/benchmarks/g...
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Thu, 12 Feb 2026 21:45:40 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On Thu, Jan 29, 2026 at 08:10:33PM +0530, Bharata B Rao wrote: In the future can you add a base-commit: for the series? Makes it easier to automate pulling it in for testing and backports etc. ~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Fri, 13 Feb 2026 09:56:11 -0500", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On 13-Feb-26 8:26 PM, Gregory Price wrote: Good suggestion, will do thanks. BTW this series applies on f0b9d8eb98df. Latest github branch: https://github.com/AMDESE/linux-mm/tree/bharata/pghot-rfcv6-pre Regards, Bharata.
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Mon, 16 Feb 2026 08:30:21 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On 29-Jan-26 8:10 PM, Bharata B Rao wrote: Here are some numbers from NAS Parallel Benchmark (NPB) with BT application: Test system details ------------------- 3 node AMD Zen5 system with 2 regular NUMA nodes (0, 1) and a CXL node (2) $ numactl -H available: 3 nodes (0-2) node 0 cpus: 0-95,192-287 node 0 size: 12846...
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Mon, 23 Feb 2026 19:57:39 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On Mon, Feb 23, 2026 at 07:57:39PM +0530, Bharata B Rao wrote: Wow, this really seems to justify the extra memory usage. Is it possible for you to change pghot-default to move the page to a random (or round-robin) node on the top tier instead of NID(0) by default? At least then pghot-default would be correct 1/N % o...
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Mon, 23 Feb 2026 10:02:30 -0500", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On 23-Feb-26 8:32 PM, Gregory Price wrote: For pghot-default, with target_nid alternating between the available toptier nodes 0 and 1, the numbers catch up with pghot-precise and base NUMAB2 case as seen below: ================================ Time in seconds 4337.98 Mop/s total 90217.86 pgpromote...
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Tue, 24 Feb 2026 17:25:13 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On Tue, Feb 24, 2026 at 05:25:13PM +0530, Bharata B Rao wrote: Fascinating! Thank you for the quick follow up. I wonder if this was a lucky run, it almost seems *too* perfect. ~Gregory
{ "author": "Gregory Price <gourry@gourry.net>", "date": "Tue, 24 Feb 2026 10:30:07 -0500", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On 24-Feb-26 9:00 PM, Gregory Price wrote: It consistently performs that way. Here are the numbers from another run: ================================ Time in seconds 4329.22 Mop/s total 90400.27 pgpromote_success 41967282 pgpromote_candidate 0 pgpromote_candidate_nrl 41968339 pgdemote_k...
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Wed, 25 Feb 2026 10:05:58 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On Thu, 29 Jan 2026 20:10:35 +0530 Bharata B Rao <bharata@amd.com> wrote: [...snip...] Hello Bharata, I hope you are doing well! Thank you for the series. I saw the numbers and they look great. I'm hoping to do some more testing myself as well : -) I'm also going through the series as well!! The single-folio case...
{ "author": "Joshua Hahn <joshua.hahnjy@gmail.com>", "date": "Thu, 26 Feb 2026 12:40:59 -0800", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
Hi, This is v5 of pghot, a hot-page tracking and promotion subsystem. The major change in v5 is reducing the default hotness record size to 1 byte per PFN and adding an optional precision mode (CONFIG_PGHOT_PRECISE) that uses 4 bytes per PFN. This patchset introduces a new subsystem for hot page tracking and promotio...
null
null
null
[RFC PATCH v5 00/10] mm: Hot page tracking and promotion infrastructure
On 27-Feb-26 2:10 AM, Joshua Hahn wrote: Thanks Joshua for looking at the patchset and for your testing offer! Ideally yes, but right now the batch variant gets called only for promotion case. Firstly the hotness is tracked only for lower tier pages. pghot_record_access() ensures this. Next, there is one kmigrate...
{ "author": "Bharata B Rao <bharata@amd.com>", "date": "Fri, 27 Feb 2026 20:11:22 +0530", "is_openbsd": false, "thread_id": "b2a255d1-9c84-4519-bfed-8b3a5d756293@amd.com.mbox.gz" }
lkml_critique
lkml
The aspeed video (be compatible for ast2400, ast2500, ast2600) now needs the reset DTS handle specified, otherwise it will fail to load: [ 0.000000] OF: reserved mem: initialized node video, compatible id shared-dma-pool [ 0.000000] OF: reserved mem: 0xbb000000..0xbeffffff (65536 KiB) map reusable video [ 0.3...
null
null
null
[PATCH v1] media: aspeed: Fix driver probe failure
On 27/02/2026 13:38, Haiyue Wang wrote: Please run scripts/checkpatch.pl on the patches and fix reported warnings. After that, run also 'scripts/checkpatch.pl --strict' on the patches and (probably) fix more warnings. Some warnings can be ignored, especially from --strict run, but the code here looks like it needs a f...
{ "author": "Krzysztof Kozlowski <krzk@kernel.org>", "date": "Fri, 27 Feb 2026 13:59:54 +0100", "is_openbsd": false, "thread_id": "54b36faa-62b3-4561-bfc9-0c507d9e148e@kernel.org.mbox.gz" }
lkml_critique
lkml
The aspeed video driver (compatible with ast2400, ast2500 and ast2600) now needs the reset DTS handle specified, otherwise it will fail to load: [ 0.000000] OF: reserved mem: initialized node video, compatible id shared-dma-pool [ 0.000000] OF: reserved mem: 0xbb000000..0xbeffffff (65536 KiB) map reusable video [ 0.3...
null
null
null
[PATCH v1] media: aspeed: Fix driver probe failure
On 2/27/2026 8:59 PM, Krzysztof Kozlowski wrote: Separated into two patches in v2, please help review.
{ "author": "Haiyue Wang <haiyuewa@163.com>", "date": "Fri, 27 Feb 2026 23:18:13 +0800", "is_openbsd": false, "thread_id": "54b36faa-62b3-4561-bfc9-0c507d9e148e@kernel.org.mbox.gz" }
lkml_critique
lkml
The aspeed video driver (compatible with ast2400, ast2500 and ast2600) now needs the reset DTS handle specified, otherwise it will fail to load: [ 0.000000] OF: reserved mem: initialized node video, compatible id shared-dma-pool [ 0.000000] OF: reserved mem: 0xbb000000..0xbeffffff (65536 KiB) map reusable video [ 0.3...
null
null
null
[PATCH v1] media: aspeed: Fix driver probe failure
On 27/02/2026 16:18, Haiyue Wang wrote: No, please wait. One posting per day. We have enough of other patches to review. Best regards, Krzysztof
{ "author": "Krzysztof Kozlowski <krzk@kernel.org>", "date": "Fri, 27 Feb 2026 16:35:20 +0100", "is_openbsd": false, "thread_id": "54b36faa-62b3-4561-bfc9-0c507d9e148e@kernel.org.mbox.gz" }
lkml_critique
lkml
CONFIG_NODES_SHIFT (which influences MAX_NUMNODES) is often configured generously by distros while the actual number of possible NUMA nodes on most systems is often quite conservative. Instead of reserving MAX_NUMNODES worth of space for futex_queues, dynamically allocate it based on "nr_node_ids" at the time of futex...
null
null
null
[RFC PATCH] futex: Dynamically allocate futex_queues depending on nr_node_ids
On 2026-01-28 10:13:58 [+0000], K Prateek Nayak wrote: With the Debian config CONFIG_NODES_SHIFT is set to 10 as of 6.18.12+deb14 for amd64 probably due to MAXSMP. so we are getting slightly worse? I didn't want to do this because now we have two pointers to resolve, nr_node_ids vs nr_futex_queues should be largel...
{ "author": "Sebastian Andrzej Siewior <bigeasy@linutronix.de>", "date": "Tue, 24 Feb 2026 12:13:42 +0100", "is_openbsd": false, "thread_id": "a30c225f-d7df-49a0-8725-6d7baa69728b@amd.com.mbox.gz" }
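The allocation change under discussion can be modeled in userspace. This is a hedged sketch of the idea only — the real futex_queues entries are per-node hash-bucket arrays, and the `NodeQueues` type here is a stand-in; only the names MAX_NUMNODES and nr_node_ids mirror the kernel's:

```rust
// Userspace model of the two allocation schemes discussed above.

const NODES_SHIFT: usize = 10;                // Debian's CONFIG_NODES_SHIFT
const MAX_NUMNODES: usize = 1 << NODES_SHIFT; // 1024 possible nodes

#[derive(Clone)]
struct NodeQueues; // stand-in for a per-node futex hash table

/// Static scheme: the array always has MAX_NUMNODES slots, no matter how
/// many NUMA nodes the machine actually has.
fn static_slots() -> usize {
    MAX_NUMNODES
}

/// Dynamic scheme: allocate only nr_node_ids entries at init time. The cost
/// is the extra pointer resolution Sebastian mentions: the queues now live
/// behind a heap pointer instead of in a statically addressed array.
fn alloc_dynamic(nr_node_ids: usize) -> Vec<NodeQueues> {
    vec![NodeQueues; nr_node_ids]
}

fn main() {
    let nr_node_ids = 2; // a typical two-socket box
    let queues = alloc_dynamic(nr_node_ids);
    println!("static slots: {}, dynamic slots: {}", static_slots(), queues.len());
}
```

On a two-node box the dynamic scheme keeps 2 entries instead of reserving 1024, which is the memory saving the RFC is after; the thread is weighing that against the added indirection on each futex operation.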
lkml_critique
lkml
CONFIG_NODES_SHIFT (which influences MAX_NUMNODES) is often configured generously by distros while the actual number of possible NUMA nodes on most systems is often quite conservative. Instead of reserving MAX_NUMNODES worth of space for futex_queues, dynamically allocate it based on "nr_node_ids" at the time of futex...
null
null
null
[RFC PATCH] futex: Dynamically allocate futex_queues depending on nr_node_ids
Hello Sebastian, On 2/24/2026 4:43 PM, Sebastian Andrzej Siewior wrote: I have it on good faith that some EPYC users on distro kernels turn on the "L3 as NUMA" option which currently results in 32 NUMA nodes on our largest configuration. Adding a little bit more margin for CXL nodes should make even CONFIG_NODES_SHIFT=...
{ "author": "K Prateek Nayak <kprateek.nayak@amd.com>", "date": "Wed, 25 Feb 2026 09:06:08 +0530", "is_openbsd": false, "thread_id": "a30c225f-d7df-49a0-8725-6d7baa69728b@amd.com.mbox.gz" }
lkml_critique
lkml
CONFIG_NODES_SHIFT (which influences MAX_NUMNODES) is often configured generously by distros while the actual number of possible NUMA nodes on most systems is often quite conservative. Instead of reserving MAX_NUMNODES worth of space for futex_queues, dynamically allocate it based on "nr_node_ids" at the time of futex...
null
null
null
[RFC PATCH] futex: Dynamically allocate futex_queues depending on nr_node_ids
On 2026-02-25 09:06:08 [+0530], K Prateek Nayak wrote: Hi Prateek, Okay. According to Kconfig, this is the default for X86_64. The 10 gets set by MAXSMP. This option raises the NR_CPUS_DEFAULT to 8192. That might be overkill. What would be a sane value for NR_CPUS_DEFAULT? I don't have anything that exceeds 3 digits...
{ "author": "Sebastian Andrzej Siewior <bigeasy@linutronix.de>", "date": "Wed, 25 Feb 2026 08:39:39 +0100", "is_openbsd": false, "thread_id": "a30c225f-d7df-49a0-8725-6d7baa69728b@amd.com.mbox.gz" }
lkml_critique
lkml
CONFIG_NODES_SHIFT (which influences MAX_NUMNODES) is often configured generously by distros while the actual number of possible NUMA nodes on most systems is often quite conservative. Instead of reserving MAX_NUMNODES worth of space for futex_queues, dynamically allocate it based on "nr_node_ids" at the time of futex...
null
null
null
[RFC PATCH] futex: Dynamically allocate futex_queues depending on nr_node_ids
On 2/25/2026 1:09 PM, Sebastian Andrzej Siewior wrote: I would have thought a quarter of that would be plenty but looking at the footnote in [1] that says "16 socket GNR system" and the fact that GNR can feature up to 256 threads per socket - that could theoretically put such systems at that NR_CPUS_DEFAULT limit - I ...
{ "author": "K Prateek Nayak <kprateek.nayak@amd.com>", "date": "Wed, 25 Feb 2026 14:21:33 +0530", "is_openbsd": false, "thread_id": "a30c225f-d7df-49a0-8725-6d7baa69728b@amd.com.mbox.gz" }
lkml_critique
lkml
CONFIG_NODES_SHIFT (which influences MAX_NUMNODES) is often configured generously by distros while the actual number of possible NUMA nodes on most systems is often quite conservative. Instead of reserving MAX_NUMNODES worth of space for futex_queues, dynamically allocate it based on "nr_node_ids" at the time of futex...
null
null
null
[RFC PATCH] futex: Dynamically allocate futex_queues depending on nr_node_ids
On 2026-02-25 14:21:33 [+0530], K Prateek Nayak wrote: Hi Prateek, I am still trying to figure out if this is practical or some drunk guys saying "you know what would be fun?" Sounds like it. What would be a sane default upper limit then? Something like 1024 CPUs? 2048? Or even more than that? I would try to use thi...
{ "author": "Sebastian Andrzej Siewior <bigeasy@linutronix.de>", "date": "Wed, 25 Feb 2026 10:22:13 +0100", "is_openbsd": false, "thread_id": "a30c225f-d7df-49a0-8725-6d7baa69728b@amd.com.mbox.gz" }
lkml_critique
lkml
CONFIG_NODES_SHIFT (which influences MAX_NUMNODES) is often configured generously by distros while the actual number of possible NUMA nodes on most systems is often quite conservative. Instead of reserving MAX_NUMNODES worth of space for futex_queues, dynamically allocate it based on "nr_node_ids" at the time of futex...
null
null
null
[RFC PATCH] futex: Dynamically allocate futex_queues depending on nr_node_ids
Hey Sebastian, Sorry for the delay! On 2/25/2026 2:52 PM, Sebastian Andrzej Siewior wrote: I feel the current default for NR_CPUS can be retained as is, just to be on the safer side. Turns out QEMU allows for a ridiculous amount of vCPUs per guest and I've found enough evidence of extremely large guests running o...
{ "author": "K Prateek Nayak <kprateek.nayak@amd.com>", "date": "Fri, 27 Feb 2026 14:17:31 +0530", "is_openbsd": false, "thread_id": "a30c225f-d7df-49a0-8725-6d7baa69728b@amd.com.mbox.gz" }
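The Kconfig numbers being traded back and forth above all reduce to `1 << NODES_SHIFT`. A small sketch, using only values quoted in the thread (these are the MAXSMP defaults and the floated alternative, not figures of my own):

```rust
// The Kconfig arithmetic from the exchange above, written out.

fn max_numnodes(nodes_shift: u32) -> u64 {
    1u64 << nodes_shift
}

fn main() {
    // Today's MAXSMP defaults on x86-64: NR_CPUS=8192, CONFIG_NODES_SHIFT=10.
    assert_eq!(max_numnodes(10), 1024);
    // The alternative floated in the thread: NODES_SHIFT=8.
    assert_eq!(max_numnodes(8), 256);
    // The "L3 as NUMA" EPYC case (32 nodes) still fits with plenty of margin.
    assert!(32 < max_numnodes(8));
    println!(
        "MAX_NUMNODES: shift 10 -> {}, shift 8 -> {}",
        max_numnodes(10),
        max_numnodes(8)
    );
}
```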
lkml_critique
lkml
CONFIG_NODES_SHIFT (which influences MAX_NUMNODES) is often configured generously by distros while the actual number of possible NUMA nodes on most systems is often quite conservative. Instead of reserving MAX_NUMNODES worth of space for futex_queues, dynamically allocate it based on "nr_node_ids" at the time of futex...
null
null
null
[RFC PATCH] futex: Dynamically allocate futex_queues depending on nr_node_ids
On Wed, Jan 28, 2026 at 10:13:58AM +0000, K Prateek Nayak wrote: Both will result in at least one extra deref/cacheline for each futex op, no?
{ "author": "Peter Zijlstra <peterz@infradead.org>", "date": "Fri, 27 Feb 2026 15:42:03 +0100", "is_openbsd": false, "thread_id": "a30c225f-d7df-49a0-8725-6d7baa69728b@amd.com.mbox.gz" }
lkml_critique
lkml
CONFIG_NODES_SHIFT (which influences MAX_NUMNODES) is often configured generously by distros while the actual number of possible NUMA nodes on most systems is often quite conservative. Instead of reserving MAX_NUMNODES worth of space for futex_queues, dynamically allocate it based on "nr_node_ids" at the time of futex...
null
null
null
[RFC PATCH] futex: Dynamically allocate futex_queues depending on nr_node_ids
Hello Peter, On 2/27/2026 8:12 PM, Peter Zijlstra wrote: Ack, but I was wondering if that penalty can be offset by the fact that we no longer need to look at "nr_node_ids" in a separate cacheline? I ran the futex bench enough times before posting to come to the conclusion that there isn't any noticeable regression - the numbe...
{ "author": "K Prateek Nayak <kprateek.nayak@amd.com>", "date": "Fri, 27 Feb 2026 20:29:03 +0530", "is_openbsd": false, "thread_id": "a30c225f-d7df-49a0-8725-6d7baa69728b@amd.com.mbox.gz" }
lkml_critique
lkml
CONFIG_NODES_SHIFT (which influences MAX_NUMNODES) is often configured generously by distros while the actual number of possible NUMA nodes on most systems is often quite conservative. Instead of reserving MAX_NUMNODES worth of space for futex_queues, dynamically allocate it based on "nr_node_ids" at the time of futex...
null
null
null
[RFC PATCH] futex: Dynamically allocate futex_queues depending on nr_node_ids
On 2026-02-27 14:17:31 [+0530], K Prateek Nayak wrote: Hi Prateek, No worries. You mean a distro kernel in an 8k-CPU guest? I do this kind of thing for testing but not with a distro kernel. Oh well. So you are saying NODES_SHIFT=8 and NR_CPUS=4k is what should be the default given "sane" upper limits as of today? I did h...
{ "author": "Sebastian Andrzej Siewior <bigeasy@linutronix.de>", "date": "Fri, 27 Feb 2026 16:15:18 +0100", "is_openbsd": false, "thread_id": "a30c225f-d7df-49a0-8725-6d7baa69728b@amd.com.mbox.gz" }
lkml_critique
lkml
Hi all, After merging the drm-nova tree, today's linux-next build (arm64 allyesconfig) failed like this: ld: drivers/hid/hid-lenovo-go-s.o:(.data+0x840): multiple definition of `rgb_enabled'; drivers/hid/hid-lenovo-go.o:(.data+0xb00): first defined here ld: drivers/hid/hid-lenovo-go-s.o:(.data+0xa80): multiple defini...
null
null
null
linux-next: build failure in the hid tree
On Fri, 27 Feb 2026, Mark Brown wrote: I'll just drop the branch from for-next for now, and will let Mark and Derek look into this and send followup fixes. Thanks, -- Jiri Kosina SUSE Labs
{ "author": "Jiri Kosina <jikos@kernel.org>", "date": "Fri, 27 Feb 2026 15:50:31 +0100 (CET)", "is_openbsd": false, "thread_id": "05q1sn9q-0075-303n-5q49-707o5p208083@xreary.bet.mbox.gz" }
lkml_critique
lkml
Hi all, After merging the drm-nova tree, today's linux-next build (arm64 allyesconfig) failed like this: ld: drivers/hid/hid-lenovo-go-s.o:(.data+0x840): multiple definition of `rgb_enabled'; drivers/hid/hid-lenovo-go.o:(.data+0xb00): first defined here ld: drivers/hid/hid-lenovo-go-s.o:(.data+0xa80): multiple defini...
null
null
null
linux-next: build failure in the hid tree
On Fri, 27 Feb 2026, Jiri Kosina wrote: Seems like both drivers are polluting a lot of global namespace actually. I normally catch this using sparse, but my installation doesn't work currently because of [1], so I missed it. Derek, Mark -- you need to add a lot of 'static' all over the place :) The for-7.1/lenovo...
{ "author": "Jiri Kosina <jikos@kernel.org>", "date": "Fri, 27 Feb 2026 16:28:21 +0100 (CET)", "is_openbsd": false, "thread_id": "05q1sn9q-0075-303n-5q49-707o5p208083@xreary.bet.mbox.gz" }
lkml_critique
lkml
Code allocates standard kernel memory to pass to the MPAM, which expects __iomem. The code is safe, because __iomem accessors should work fine on kernel mapped memory, however leads to sparse warnings: test_mpam_devices.c:327:42: warning: incorrect type in initializer (different address spaces) test_mpam_devices....
null
null
null
[PATCH 1/2] arm_mpam: Force __iomem casts
No maintainers handling the code (so subsystem maintainers) are shown with scripts/get_maintainer.pl on MPAM drivers in drivers/resctrl/. It seems that there is no dedicated subsystem for resctrl and existing drivers went through ARM64 port maintainers, so make that explicit to avoid patches being lost/ignored. Sign...
{ "author": "Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com>", "date": "Mon, 16 Feb 2026 12:02:42 +0100", "is_openbsd": false, "thread_id": "18f3f73a-9bc6-4518-b3e4-730fa0793146@oss.qualcomm.com.mbox.gz" }
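sparse's `__iomem` is an address-space tag: plain pointers and I/O pointers must not mix without an explicit `__force` cast, which is exactly what the test-code fix above applies. As a rough analogy only — this is not how the kernel's Rust bindings model MMIO, and the type names are mine — the same type-tagging idea can be sketched with a newtype:

```rust
// A newtype playing the role of the `__iomem` address-space tag: ordinary
// pointers cannot be used where an IoMem is expected without going through
// the explicit "force" constructor, mirroring a `(__force void __iomem *)`
// cast. Purely illustrative.

struct IoMem(*mut u32); // tagged pointer, analogous to `u32 __iomem *`

impl IoMem {
    /// The analogue of the test code's __force cast: wrap ordinary memory
    /// so the I/O accessors accept it. Safe here only because the backing
    /// store is normal RAM, which is the situation the commit message
    /// describes (kernel memory passed to code expecting __iomem).
    fn force_from_ram(p: *mut u32) -> IoMem {
        IoMem(p)
    }

    /// readl-style accessor: volatile read through the tagged pointer.
    fn readl(&self) -> u32 {
        unsafe { self.0.read_volatile() }
    }
}

fn main() {
    let mut backing: u32 = 0xdead_beef; // stands in for kzalloc'd test memory
    let io = IoMem::force_from_ram(&mut backing);
    println!("read back {:#x}", io.readl());
}
```

The point of the tag, in both sparse and this sketch, is that accidental mixing fails loudly at the cast site rather than silently at runtime.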
lkml_critique
lkml
Code allocates standard kernel memory to pass to the MPAM, which expects __iomem. The code is safe, because __iomem accessors should work fine on kernel mapped memory, however leads to sparse warnings: test_mpam_devices.c:327:42: warning: incorrect type in initializer (different address spaces) test_mpam_devices....
null
null
null
[PATCH 1/2] arm_mpam: Force __iomem casts
On Mon, Feb 16, 2026 at 12:02:42PM +0100, Krzysztof Kozlowski wrote: What's wrong with the current entry? $ ./scripts/get_maintainer.pl -f drivers/resctrl/mpam_* James Morse <james.morse@arm.com> (maintainer:MPAM DRIVER) Ben Horgan <ben.horgan@arm.com> (maintainer:MPAM DRIVER) Reinette Chatre <reinette.chatre@intel.c...
{ "author": "Catalin Marinas <catalin.marinas@arm.com>", "date": "Wed, 18 Feb 2026 16:23:08 +0000", "is_openbsd": false, "thread_id": "18f3f73a-9bc6-4518-b3e4-730fa0793146@oss.qualcomm.com.mbox.gz" }
lkml_critique
lkml
Code allocates standard kernel memory to pass to the MPAM, which expects __iomem. The code is safe, because __iomem accessors should work fine on kernel mapped memory, however leads to sparse warnings: test_mpam_devices.c:327:42: warning: incorrect type in initializer (different address spaces) test_mpam_devices....
null
null
null
[PATCH 1/2] arm_mpam: Force __iomem casts
On 18/02/2026 17:23, Catalin Marinas wrote: I explained in the commit msg: "No maintainers handling the code (so subsystem maintainers)" It does not list the maintainers picking up patches, so if you use standard tools (like b4, patman or scripted get_maintainers), you will never appear on To/Cc list (relying on git-...
{ "author": "Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com>", "date": "Wed, 18 Feb 2026 17:46:40 +0100", "is_openbsd": false, "thread_id": "18f3f73a-9bc6-4518-b3e4-730fa0793146@oss.qualcomm.com.mbox.gz" }
lkml_critique
lkml
Code allocates standard kernel memory to pass to the MPAM, which expects __iomem. The code is safe, because __iomem accessors should work fine on kernel mapped memory, however leads to sparse warnings: test_mpam_devices.c:327:42: warning: incorrect type in initializer (different address spaces) test_mpam_devices....
null
null
null
[PATCH 1/2] arm_mpam: Force __iomem casts
On Wed, Feb 18, 2026 at 05:46:40PM +0100, Krzysztof Kozlowski wrote: Yeah, I realised what you meant after sending my reply ;). The arm64 maintainers won't proactively pick these patches up unless we are asked by the MPAM maintainers. I don't mind whether the patches go in via the arm64 or Greg's drivers tree. We ju...
{ "author": "Catalin Marinas <catalin.marinas@arm.com>", "date": "Wed, 18 Feb 2026 17:13:45 +0000", "is_openbsd": false, "thread_id": "18f3f73a-9bc6-4518-b3e4-730fa0793146@oss.qualcomm.com.mbox.gz" }
lkml_critique
lkml
Code allocates standard kernel memory to pass to the MPAM, which expects __iomem. The code is safe, because __iomem accessors should work fine on kernel mapped memory, however leads to sparse warnings: test_mpam_devices.c:327:42: warning: incorrect type in initializer (different address spaces) test_mpam_devices....
null
null
null
[PATCH 1/2] arm_mpam: Force __iomem casts
On 18/02/2026 18:13, Catalin Marinas wrote: OK, there are few subsystems (e.g. cdx) doing something similar - listing only reviewing maintainer, which later has to poke the actual maintainer picking up patches. I find it confusing practice and might lead to patches being lost on the mailing list (happened for example...
{ "author": "Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com>", "date": "Wed, 18 Feb 2026 18:21:26 +0100", "is_openbsd": false, "thread_id": "18f3f73a-9bc6-4518-b3e4-730fa0793146@oss.qualcomm.com.mbox.gz" }
lkml_critique
lkml
Code allocates standard kernel memory to pass to the MPAM, which expects __iomem. The code is safe, because __iomem accessors should work fine on kernel mapped memory, however leads to sparse warnings: test_mpam_devices.c:327:42: warning: incorrect type in initializer (different address spaces) test_mpam_devices....
null
null
null
[PATCH 1/2] arm_mpam: Force __iomem casts
Hi Krzysztof, On 2/16/26 11:02, Krzysztof Kozlowski wrote: This change looks good to me. As sparse is more broken I needed to use the patch from [1] to reproduce this. Copied here for convenience. diff --git a/include/linux/gfp.h b/include/linux/gfp.h index 2b30a0529d48..90536b2bc42e 100644 --- a/include/linux/gfp....
{ "author": "Ben Horgan <ben.horgan@arm.com>", "date": "Fri, 27 Feb 2026 14:06:27 +0000", "is_openbsd": false, "thread_id": "18f3f73a-9bc6-4518-b3e4-730fa0793146@oss.qualcomm.com.mbox.gz" }
lkml_critique
lkml
Code allocates standard kernel memory to pass to the MPAM, which expects __iomem. The code is safe, because __iomem accessors should work fine on kernel mapped memory, however leads to sparse warnings: test_mpam_devices.c:327:42: warning: incorrect type in initializer (different address spaces) test_mpam_devices....
null
null
null
[PATCH 1/2] arm_mpam: Force __iomem casts
On 27/02/2026 15:06, Ben Horgan wrote: The branch from Al Viro was working fine at that time, now merged to master. That I did not know. Anyone can run sparse, as I am doing every now and then, and find issues. Best regards, Krzysztof
{ "author": "Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com>", "date": "Fri, 27 Feb 2026 15:51:13 +0100", "is_openbsd": false, "thread_id": "18f3f73a-9bc6-4518-b3e4-730fa0793146@oss.qualcomm.com.mbox.gz" }
lkml_critique
lkml
Previously, `KBox::pin_slice` required the initializer error type to match the return error type via `E: From<AllocError>`. This prevented using infallible initializers like `new_mutex!` inside `pin_slice`, because `Infallible` does not implement `From<AllocError>`. Introduce a separate type parameter `E2` for the ini...
null
null
null
[PATCH] rust: alloc: allow different error types in `KBox::pin_slice`
Andreas Hindborg <a.hindborg@kernel.org> writes: It just occured to me that we should probably add a zulip Link as well: Link: https://rust-for-linux.zulipchat.com/#narrow/channel/288089-General/topic/.E2.9C.94.20Constructing.20Mutex.20from.20PinInit.3CT.2C.20Error.3E/with/567385936 Best regards, Andreas Hindborg
{ "author": "Andreas Hindborg <a.hindborg@kernel.org>", "date": "Sat, 14 Feb 2026 14:37:12 +0100", "is_openbsd": false, "thread_id": "DGPTKG9OT59O.1N5K1CSI1KJMV@garyguo.net.mbox.gz" }
lkml_critique
lkml
Previously, `KBox::pin_slice` required the initializer error type to match the return error type via `E: From<AllocError>`. This prevented using infallible initializers like `new_mutex!` inside `pin_slice`, because `Infallible` does not implement `From<AllocError>`. Introduce a separate type parameter `E2` for the ini...
null
null
null
[PATCH] rust: alloc: allow different error types in `KBox::pin_slice`
On Sat Feb 14, 2026 at 2:28 PM CET, Andreas Hindborg wrote: I assume you have a user? I.e. do you need this patch in another tree? I think we should keep this bound and just add: E: From<E2>, This... ...and this match becomes unnecessary then.
{ "author": "\"Danilo Krummrich\" <dakr@kernel.org>", "date": "Sat, 14 Feb 2026 15:17:34 +0100", "is_openbsd": false, "thread_id": "DGPTKG9OT59O.1N5K1CSI1KJMV@garyguo.net.mbox.gz" }
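The bound Danilo suggests can be shown in a small userspace model. This is a sketch of the shape of the fix only — `AllocError`, the error enum, and the closure standing in for a `PinInit` are all stand-ins, not the kernel's real `KBox::pin_slice` API:

```rust
// Keep `E: From<AllocError>` and add `E: From<E2>` for the per-element
// initializer error, so an infallible initializer (E2 = Infallible) now
// type-checks. All types here are illustrative stand-ins.

use core::convert::Infallible;

#[derive(Debug)]
struct AllocError;

#[derive(Debug)]
enum Error {
    Alloc,
}

impl From<AllocError> for Error {
    fn from(_: AllocError) -> Error {
        Error::Alloc
    }
}

impl From<Infallible> for Error {
    fn from(e: Infallible) -> Error {
        match e {} // Infallible has no values, so nothing to convert
    }
}

/// Build a boxed slice from a fallible per-element initializer.
fn pin_slice<T, E, E2>(
    len: usize,
    mut init: impl FnMut(usize) -> Result<T, E2>,
) -> Result<Box<[T]>, E>
where
    E: From<AllocError> + From<E2>,
{
    let mut v = Vec::new();
    v.try_reserve(len).map_err(|_| AllocError)?; // AllocError -> E
    for i in 0..len {
        v.push(init(i)?); // E2 -> E via the new From<E2> bound
    }
    Ok(v.into_boxed_slice())
}

fn main() {
    // An infallible initializer now works with a fallible return type:
    let s: Result<Box<[u32]>, Error> =
        pin_slice(4, |i| Ok::<u32, Infallible>(i as u32 * 2));
    assert_eq!(&*s.unwrap(), &[0, 2, 4, 6]);
}
```

With a single `E: From<AllocError>` parameter, the `Infallible` case could not satisfy the bound; splitting the initializer error into `E2` and converting at the call sites resolves that without a separate error-mapping match.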
lkml_critique
lkml
Previously, `KBox::pin_slice` required the initializer error type to match the return error type via `E: From<AllocError>`. This prevented using infallible initializers like `new_mutex!` inside `pin_slice`, because `Infallible` does not implement `From<AllocError>`. Introduce a separate type parameter `E2` for the ini...
null
null
null
[PATCH] rust: alloc: allow different error types in `KBox::pin_slice`
On Sat Feb 14, 2026 at 3:17 PM CET, Danilo Krummrich wrote: The `Into` trait bounds are the idiomatic ones for functions consuming things. See https://doc.rust-lang.org/std/convert/trait.Into.html: Prefer using Into over From when specifying trait bounds on a generic function to ensure that types that only im...
{ "author": "\"Benno Lossin\" <lossin@kernel.org>", "date": "Sat, 14 Feb 2026 15:40:21 +0100", "is_openbsd": false, "thread_id": "DGPTKG9OT59O.1N5K1CSI1KJMV@garyguo.net.mbox.gz" }
lkml_critique
lkml
Previously, `KBox::pin_slice` required the initializer error type to match the return error type via `E: From<AllocError>`. This prevented using infallible initializers like `new_mutex!` inside `pin_slice`, because `Infallible` does not implement `From<AllocError>`. Introduce a separate type parameter `E2` for the ini...
null
null
null
[PATCH] rust: alloc: allow different error types in `KBox::pin_slice`
On Sat Feb 14, 2026 at 3:40 PM CET, Benno Lossin wrote: Yeah, but isn't this only because of [1], which does not apply to the kernel because our minimum compiler version is 1.78 anyways? I.e. are there any cases where we can't implement From in the kernel and have to fall back to Into? [1] https://doc.rust-lang.org/...
{ "author": "\"Danilo Krummrich\" <dakr@kernel.org>", "date": "Sat, 14 Feb 2026 15:56:43 +0100", "is_openbsd": false, "thread_id": "DGPTKG9OT59O.1N5K1CSI1KJMV@garyguo.net.mbox.gz" }
lkml_critique
lkml
Previously, `KBox::pin_slice` required the initializer error type to match the return error type via `E: From<AllocError>`. This prevented using infallible initializers like `new_mutex!` inside `pin_slice`, because `Infallible` does not implement `From<AllocError>`. Introduce a separate type parameter `E2` for the ini...
null
null
null
[PATCH] rust: alloc: allow different error types in `KBox::pin_slice`
On Sat Feb 14, 2026 at 3:56 PM CET, Danilo Krummrich wrote: Hmm that's interesting. I'm not sure if that's the only reason. It would be interesting to ask the Rust folks if there 1) is a different use-case for `Into` today; and 2) if they could remove `Into`, would they? If the answer to 2 is "yes", then we could thin...
{ "author": "\"Benno Lossin\" <lossin@kernel.org>", "date": "Mon, 16 Feb 2026 00:29:45 +0100", "is_openbsd": false, "thread_id": "DGPTKG9OT59O.1N5K1CSI1KJMV@garyguo.net.mbox.gz" }
lkml_critique
lkml
Previously, `KBox::pin_slice` required the initializer error type to match the return error type via `E: From<AllocError>`. This prevented using infallible initializers like `new_mutex!` inside `pin_slice`, because `Infallible` does not implement `From<AllocError>`. Introduce a separate type parameter `E2` for the ini...
null
null
null
[PATCH] rust: alloc: allow different error types in `KBox::pin_slice`
On Sat, Feb 14, 2026 at 03:56:43PM +0100, Danilo Krummrich wrote: Probably not, but it's still best practice to use Into over From when specifying trait bounds. Alice
{ "author": "Alice Ryhl <aliceryhl@google.com>", "date": "Mon, 16 Feb 2026 08:48:43 +0000", "is_openbsd": false, "thread_id": "DGPTKG9OT59O.1N5K1CSI1KJMV@garyguo.net.mbox.gz" }
lkml_critique
lkml
Previously, `KBox::pin_slice` required the initializer error type to match the return error type via `E: From<AllocError>`. This prevented using infallible initializers like `new_mutex!` inside `pin_slice`, because `Infallible` does not implement `From<AllocError>`. Introduce a separate type parameter `E2` for the ini...
null
null
null
[PATCH] rust: alloc: allow different error types in `KBox::pin_slice`
On Mon Feb 16, 2026 at 9:48 AM CET, Alice Ryhl wrote: I'm aware; my point is that I'm questioning this best practice in the context of a modern and self-contained project like Rust in the kernel. This patch is a very good example, as there seem to be zero downsides to a From trait bound, while using the From trait bo...
{ "author": "\"Danilo Krummrich\" <dakr@kernel.org>", "date": "Mon, 16 Feb 2026 10:37:08 +0100", "is_openbsd": false, "thread_id": "DGPTKG9OT59O.1N5K1CSI1KJMV@garyguo.net.mbox.gz" }
lkml_critique
lkml
Previously, `KBox::pin_slice` required the initializer error type to match the return error type via `E: From<AllocError>`. This prevented using infallible initializers like `new_mutex!` inside `pin_slice`, because `Infallible` does not implement `From<AllocError>`. Introduce a separate type parameter `E2` for the ini...
null
null
null
[PATCH] rust: alloc: allow different error types in `KBox::pin_slice`
On Sat Feb 14, 2026 at 2:56 PM GMT, Danilo Krummrich wrote: There's one benefit in using `From` in a trait bound -- you can call both `From::from` and `Into::into` inside the function. If you only have an `Into` bound, then `From::from` is not callable. A very minor benefit, though. Another interesting observation is that...
{ "author": "\"Gary Guo\" <gary@garyguo.net>", "date": "Fri, 27 Feb 2026 14:38:18 +0000", "is_openbsd": false, "thread_id": "DGPTKG9OT59O.1N5K1CSI1KJMV@garyguo.net.mbox.gz" }
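Gary's observation compiles as stated, thanks to the standard library's blanket `impl<T, U: From<T>> Into<U> for T`. A toy sketch (the error types are stand-ins of my own, not kernel types):

```rust
// With `E: From<E2>`, both `From::from` and `.into()` are callable in the
// body; with only `E2: Into<E>`, just `.into()` is.

#[derive(Debug, PartialEq, Clone)]
struct Low(u8);

#[derive(Debug, PartialEq)]
struct High(u32);

impl From<Low> for High {
    fn from(l: Low) -> High {
        High(l.0 as u32)
    }
}

/// `From` bound: both call forms work. `e.into()` is available because the
/// blanket impl derives `E2: Into<E>` from `E: From<E2>`.
fn via_from<E, E2>(e: E2) -> (E, E)
where
    E: From<E2>,
    E2: Clone,
{
    (E::from(e.clone()), e.into())
}

/// `Into` bound: only `.into()` is callable here; `E::from(e)` would not
/// compile without a `From` bound.
fn via_into<E, E2>(e: E2) -> E
where
    E2: Into<E>,
{
    e.into()
}

fn main() {
    let (a, b): (High, High) = via_from(Low(7));
    assert_eq!(a, High(7));
    assert_eq!(b, High(7));
    let c: High = via_into(Low(9));
    assert_eq!(c, High(9));
}
```

This is why the `From` bound is strictly more permissive for the function body, while the `Into` bound is the one std's docs recommend for callers of pre-1.41 vintage code.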