---
title: Utilities
description: A list of various utilities provided by TypeGPU.
---

## *root.createGuardedComputePipeline*

The `root.createGuardedComputePipeline` method streamlines running simple computations on the GPU.
Under the hood, it creates a compute pipeline that calls the provided callback only if the current thread ID is within the requested range, and returns an object with a `dispatchThreads` method that executes the pipeline.
Since the pipeline is reused, there’s no additional overhead for subsequent calls.

```ts twoslash
import tgpu from 'typegpu';
import * as d from 'typegpu/data';
const root = await tgpu.init();
// ---cut---
const data = root.createMutable(d.arrayOf(d.u32, 8), [0, 1, 2, 3, 4, 5, 6, 7]);

const doubleUpPipeline = root['~unstable']
  .createGuardedComputePipeline((x) => {
    'use gpu';
    data.$[x] *= 2;
  });

doubleUpPipeline.dispatchThreads(8);
doubleUpPipeline.dispatchThreads(8);
doubleUpPipeline.dispatchThreads(4);

// the command encoder will queue the read after `doubleUpPipeline`
console.log(await data.read()); // [0, 8, 16, 24, 16, 20, 24, 28]
```

:::note
Remember to mark the callback with the `'use gpu'` directive to let TypeGPU know that this function is TGSL.
:::

The callback can have up to three arguments (dimensions).
`createGuardedComputePipeline` also simplifies initializing buffers with data generated directly on the GPU, which helps avoid the serialization overhead of uploading values from JavaScript.
Buffer initialization commonly relies on random number generators;
for that, you can use the [`@typegpu/noise`](TypeGPU/ecosystem/typegpu-noise) library.

```ts twoslash
import tgpu from 'typegpu';
import * as d from 'typegpu/data';
// ---cut---
import { randf } from '@typegpu/noise';

const root = await tgpu.init();

// buffer of 1024x512 floats
const waterLevelMutable = root.createMutable(
  d.arrayOf(d.arrayOf(d.f32, 512), 1024),
);

root['~unstable'].createGuardedComputePipeline((x, y) => {
  'use gpu';
  randf.seed2(d.vec2f(x, y).div(1024));
  waterLevelMutable.$[x][y] = 10 + randf.sample();
}).dispatchThreads(1024, 512);
// callback will be called for x in range 0..1023 and y in range 0..511

// (optional) read values in JS
console.log(await waterLevelMutable.read());
```

The result of `createGuardedComputePipeline` can have bind groups bound using the `with` method.

```ts twoslash
import tgpu from 'typegpu';
import * as d from 'typegpu/data';
import * as std from 'typegpu/std';
const root = await tgpu.init();
// ---cut---
const layout = tgpu.bindGroupLayout({
  values: { storage: d.arrayOf(d.u32), access: 'mutable' },
});
const buffer1 = root
  .createBuffer(d.arrayOf(d.u32, 3), [1, 2, 3]).$usage('storage');
const buffer2 = root
  .createBuffer(d.arrayOf(d.u32, 4), [2, 4, 8, 16]).$usage('storage');
const bindGroup1 = root.createBindGroup(layout, {
  values: buffer1,
});
const bindGroup2 = root.createBindGroup(layout, {
  values: buffer2,
});

const doubleUpPipeline = root['~unstable'].createGuardedComputePipeline((x) => {
  'use gpu';
  layout.$.values[x] *= 2;
});

doubleUpPipeline.with(bindGroup1).dispatchThreads(3);
doubleUpPipeline.with(bindGroup2).dispatchThreads(4);

console.log(await buffer1.read()); // [2, 4, 6];
console.log(await buffer2.read()); // [4, 8, 16, 32];
```

It is recommended NOT to use guarded compute pipelines for:

- More complex compute shaders.
Guarded compute pipelines don't let you change workgroup sizes or effectively utilize workgroup shared memory.
For such cases, a manually created pipeline is more suitable.

- Small workloads.
For small data, shader creation and dispatch usually cost more than the serialization they avoid.
Small buffers can be initialized more efficiently with the `buffer.write()` method.
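For instance, a handful of values can be uploaded straight from JavaScript (a minimal sketch; `write` accepts a value matching the buffer's schema):

```ts twoslash
import tgpu from 'typegpu';
import * as d from 'typegpu/data';
const root = await tgpu.init();
// ---cut---
// For just a few elements, a direct write is cheaper than
// creating and dispatching a compute pipeline.
const smallBuffer = root.createBuffer(d.arrayOf(d.u32, 4));
smallBuffer.write([1, 2, 3, 4]);
```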

:::note
The default workgroup sizes are:

- `[1, 1, 1]` for 0D dispatches,
- `[256, 1, 1]` for 1D dispatches,
- `[16, 16, 1]` for 2D dispatches,
- `[8, 8, 4]` for 3D dispatches.

The callback is not called if the global invocation id of a thread would exceed the size in any dimension.
:::
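The guard arithmetic can be sketched as follows (the helper below is illustrative, not part of the TypeGPU API): each dimension launches `ceil(threads / workgroupSize)` workgroups, and any thread whose global invocation id falls outside the requested size is masked off by the guard.

```ts
// Illustrative: workgroups launched per dimension for a given
// thread count and workgroup size.
function workgroupCount(threads: number, workgroupSize: number): number {
  return Math.ceil(threads / workgroupSize);
}

// dispatchThreads(1000) with the default 1D workgroup size of 256
// launches 4 workgroups (1024 threads); the guard skips the last 24.
console.log(workgroupCount(1000, 256)); // 4
```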

:::tip
`TgpuGuardedComputePipeline` provides getters for the underlying pipeline and the size buffer.
Those might be useful for `tgpu.resolve`, since you cannot resolve a guarded pipeline directly.

```ts
const innerPipeline = doubleUpPipeline.with(bindGroup1).pipeline;
tgpu.resolve([innerPipeline]);
```
:::

## *tgpu.comptime*

`tgpu['~unstable'].comptime(func)` creates a version of `func` that, instead of being transpiled to WGSL, is called during WGSL code generation.
This can be used to precompute and inject a value into the final shader code.

```ts twoslash
import tgpu from 'typegpu';
import * as d from 'typegpu/data';

const hsvToRgb = tgpu.fn([d.f32, d.f32, d.f32], d.vec3f)``;

// ---cut---
const injectRand01 = tgpu['~unstable']
  .comptime(() => Math.random());

const getColor = tgpu.fn([d.vec3f])((diffuse) => {
  'use gpu';
  const albedo = hsvToRgb(injectRand01(), 1, 0.5);
  return albedo.mul(diffuse);
});
```

Note how the function passed into `comptime` doesn't have to be marked with
`'use gpu'` and can freely use `Math`. That's because the function doesn't execute on the GPU; it runs before the shader code is sent to the GPU.

## *tgpu.rawCodeSnippet*

When working on top of existing shader code, you may sometimes know for certain that a variable is already defined and accessible in the code.
In such a scenario, you can use `tgpu['~unstable'].rawCodeSnippet`, an advanced API that creates a typed shader expression which is injected into the final shader bundle upon use.

```ts twoslash
import tgpu from 'typegpu';
import * as d from 'typegpu/data';

// ---cut---
// `EXISTING_GLOBAL` is an identifier that we know will be in the
// final shader bundle, but we cannot
// refer to it in any other way.
const existingGlobal = tgpu['~unstable']
  .rawCodeSnippet('EXISTING_GLOBAL', d.f32, 'constant');

const foo = tgpu.fn([], d.f32)(() => {
  'use gpu';
  return existingGlobal.$ * 22;
});

const wgsl = tgpu.resolve([foo]);
// fn foo() -> f32 {
//   return (EXISTING_GLOBAL * 22f);
// }
```

:::note
There currently is no way to create a `rawCodeSnippet` that refers to an existing function.
:::

### Which origin to choose?

The optional third parameter, `origin`, lets the TypeGPU transpiler know how to optimize the code snippet and enables some transpilation-time validity checks.

Usually `'runtime'` (the default) is a safe bet, but if you're sure that the expression or
computation is constant (either a reference to a constant, a numeric literal,
or an operation on constants), then pass `'constant'` as it might lead to better
optimizations.

If the expression is a direct reference to an existing value (e.g., a uniform or a
storage binding), then choose from `'uniform'`, `'mutable'`, `'readonly'`, `'workgroup'`,
`'private'`, or `'handle'`, depending on the address space of the referenced value.

## *console.log*

Yes, you read that correctly, TypeGPU implements logging to the console on the GPU!
Just call `console.log` like you would in plain JavaScript, and open the console to see the results.

```ts twoslash
import tgpu from 'typegpu';
import * as d from 'typegpu/data';

const root = await tgpu.init();
// ---cut---
const callCountMutable = root.createMutable(d.u32, 0);
const compute = root['~unstable'].createGuardedComputePipeline(() => {
  'use gpu';
  callCountMutable.$ += 1;
  console.log('Call number', callCountMutable.$);
});

compute.dispatchThreads();
compute.dispatchThreads();

// Eventually...
// "[GPU] Call number 1"
// "[GPU] Call number 2"
```

Currently supported data types for logging include scalars, vectors, matrices, structs, and fixed-size arrays.

Under the hood, TypeGPU translates `console.log` to a series of serializing functions that write the logged arguments to a buffer that is read and deserialized after every draw/dispatch call.

The buffer is of fixed size, which may limit the total amount of information that can be logged; if the buffer overflows, additional logs are dropped.
If that's an issue, you may specify the size manually when creating the `root` object.

```ts twoslash
import tgpu from 'typegpu';
import * as d from 'typegpu/data';

const presentationFormat = undefined as any;
const canvas = undefined as any;
const context = canvas.getContext('webgpu') as any;
// ---cut---
const root = await tgpu.init({
  unstable_logOptions: {
    logCountLimit: 32,
    logSizeLimit: 8, // in bytes, enough to fit 2*u32
  },
});

/* vertex shader */

const mainFragment = tgpu['~unstable'].fragmentFn({
  in: { pos: d.builtin.position },
  out: d.vec4f,
})(({ pos }) => {
  // this log fits in 8 bytes
  // static strings do not count towards the serialized log size
  console.log('X:', d.u32(pos.x), 'Y:', d.u32(pos.y));
  return d.vec4f(0, 1, 1, 1);
});

/* pipeline creation and draw call */
```
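As a back-of-envelope check of the limit above (the helper is illustrative, not a TypeGPU API): each logged 32-bit scalar serializes to 4 bytes, while static strings are kept on the CPU side and take no buffer space.

```ts
// Illustrative: estimated serialized size of one log entry.
// u32/f32/i32 scalars occupy 4 bytes each; static strings cost nothing.
const SCALAR_SIZE = 4;

function estimateLogSize(scalarArgCount: number): number {
  return scalarArgCount * SCALAR_SIZE;
}

// console.log('X:', u32, 'Y:', u32) serializes two scalars:
console.log(estimateLogSize(2)); // 8 bytes, exactly the logSizeLimit above
```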

:::note
The logs are written to console only after the dispatch finishes and the buffer is read.
This may happen with a noticeable delay.
:::

:::caution
When using `console.log`, atomic operations are injected into the WGSL code to safely synchronize logging from multiple threads.
This synchronization can introduce overhead and significantly impact shader performance.
:::

Other supported `console` functionalities include `console.debug`, `console.info`, `console.warn`, `console.error` and `console.clear`.

There are some limitations (some of which we intend to alleviate in the future):

- `console.log` only works when used in TGSL, when calling or resolving a TypeGPU pipeline.
Otherwise, for example when using `tgpu.resolve` on a WGSL template, logs are ignored.
- `console.log` only works in fragment and compute shaders.
This is due to a [WebGPU limitation](https://www.w3.org/TR/WGSL/#address-space) that does not allow modifying buffers during the vertex shader stage.
- `console.log` currently does not support template literals (but you can use [string substitutions](https://developer.mozilla.org/en-US/docs/Web/API/console#using_string_substitutions), or just pass multiple arguments instead).
