---
title: Jupyter Notebooks
sidebar_position: 6
description: Using Mojo in local and Colab Jupyter Notebooks
---
[Jupyter notebooks](https://jupyter.org) provide a web-based environment for
creating and sharing Mojo computational documents. They combine code, results,
and explanation so readers can explore what you built, how you built it, and
why it matters.

You can run Mojo notebooks locally or in GPU-backed Google Colab environments
to accelerate workloads. For teaching, learning, and exploration, notebooks
provide a hands-on, iterative workflow.

#### Choose an environment
This overview assumes you'll work with Mojo notebooks in one of two ways:

- **Google Colab** <br />
  Fast setup, optional GPU acceleration, ideal for quick experiments and for
  learning GPU programming when you don't have a compatible GPU-enabled
  computer on-hand.
- **Local JupyterLab** <br />
  Private environment with full control of code, data, and dependencies.

Both options use the same notebook model and the same Mojo cell magic.

## Using Mojo on Google Colab

#### 1. Create a notebook
Visit [Google Colab](https://colab.google) and create a new notebook.

#### 2. Install Mojo
For the nightly release:

```python
!pip install mojo --index-url https://dl.modular.com/public/nightly/python/simple/
```

For the stable release:

```python
!pip install mojo
```

Wait for the "Successfully installed" message.

#### 3. Enable Mojo
In the first cell, run:

```python
import mojo.notebook
```

This adds the `%%mojo` cell magic, so you can compile and
run Mojo code.

Your Colab notebook is now ready to run Mojo programs.

## Using Mojo in Local Jupyter Notebooks

For local notebooks, use `pixi` to manage an environment that includes
Jupyter and Mojo.

#### 1. Create a project
```shell
pixi init notebooks \
    -c https://conda.modular.com/max-nightly/ \
    -c conda-forge
cd notebooks
pixi shell
```

This creates a project directory and enters the Pixi shell.
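After `pixi init`, the project's `pixi.toml` records both channels. It should look roughly like this (a sketch only; the exact table names and fields vary by pixi version and platform):

```toml
[workspace]
name = "notebooks"
channels = ["https://conda.modular.com/max-nightly/", "conda-forge"]
platforms = ["linux-64"]

[dependencies]
```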

#### 2. Install required tools
```shell
pixi add mojo jupyterlab ipykernel
```

This installs:
- Mojo
- JupyterLab
- The Python kernel required for notebook execution

#### 3. Start JupyterLab
```shell
jupyter lab
```

JupyterLab opens in your browser.

#### 4. Create a Python-backed notebook
In your web browser:

- Select _File > New > Notebook_.
- Choose the _Python_ kernel.

#### 5. Enable Mojo support
In the first cell, run:

```python
import mojo.notebook
```

This registers the `%%mojo` cell magic.
Your local environment is now ready for interactive Mojo development.

## Writing and running Mojo code
Mojo code runs inside notebook cells that begin with the `%%mojo` cell magic.
Each Mojo cell must contain a complete program, including a `main()` function.

#### Example: Hello Mojo
```mojo
%%mojo

def main():
    print("Hello Mojo")
```

Output:

```output
Hello Mojo
```

#### Example: Parameterized compilation
```mojo
%%mojo

# Function parameterized on a compile-time count
fn repeat[count: Int](msg: String):
    @parameter
    for i in range(count):
        print(msg)

# Call repeat with a compile-time argument
fn threehello():
    repeat[3]("Hello 🔥!")

def main():
    threehello()
```

Output:

```output
Hello 🔥!
Hello 🔥!
Hello 🔥!
```
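The `@parameter` decorator makes the `for` loop a compile-time loop: the compiler unrolls it and emits the body once per iteration, so `repeat[3]` behaves as if its body were written out three times. Here is a Python sketch of that unrolled form (for illustration only; the function name is hypothetical):

```python
def repeat_unrolled_3(msg):
    # Conceptual expansion of repeat[3](msg): the @parameter loop is
    # unrolled at compile time, so the print is emitted three times.
    print(msg)
    print(msg)
    print(msg)

repeat_unrolled_3("Hello 🔥!")
```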

## Using Mojo with GPU support
Google Colab offers GPU-backed runtimes (T4, L4, and A100 accelerators), so
you can run Mojo GPU examples even without local hardware. Before running
GPU code, select _Runtime > Change runtime type_ and choose a GPU.

#### Example: GPU Hello World
```mojo
%%mojo

from gpu.host import DeviceContext

fn kernel():
    print("Hello from the GPU")

def main():
    # Launch GPU kernel
    with DeviceContext() as ctx:
        ctx.enqueue_function_checked[kernel, kernel](grid_dim=1, block_dim=1)
        ctx.synchronize()
```

Output:

```output
Hello from the GPU
```

#### Example: GPU vector addition
This example runs elementwise vector addition on the GPU.
Each GPU thread updates one element.
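Before reading the kernel, it can help to see the same computation in plain Python: adding one to each input byte decodes the final message (a CPU sketch for illustration, not Mojo code):

```python
# CPU reference for the GPU kernel: each "thread" index idx computes
# output[idx] = left[idx] + right[idx].
message_bytes = [71, 100, 107, 107, 110, 31, 76, 110, 105, 110]
ones = [1] * len(message_bytes)
output = [left + right for left, right in zip(message_bytes, ones)]
print("".join(chr(b) for b in output))  # prints: Hello Mojo
```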

```mojo
%%mojo

from gpu import thread_idx
from gpu.host import DeviceContext
from layout import Layout, LayoutTensor
from sys import has_nvidia_gpu_accelerator, has_amd_gpu_accelerator

comptime VECTOR_WIDTH = 10
comptime layout = Layout.row_major(VECTOR_WIDTH)
comptime active_dtype = DType.uint8
comptime Tensor = LayoutTensor[active_dtype, layout, MutAnyOrigin]


# Elementwise vector addition on GPU threads
fn vector_addition(left: Tensor, right: Tensor, output: Tensor):
    var idx = thread_idx.x
    output[idx] = left[idx] + right[idx]


def main():
    # Ensure a supported GPU (NVIDIA or AMD) is available
    constrained[
        has_nvidia_gpu_accelerator() or has_amd_gpu_accelerator(),
        "This example requires a supported GPU",
    ]()

    # Create GPU device context
    var ctx = DeviceContext()

    # Allocate buffers and tensors for left and right operands, and output
    var left_buffer = ctx.enqueue_create_buffer[active_dtype](VECTOR_WIDTH)
    var left_tensor = Tensor(left_buffer)

    var right_buffer = ctx.enqueue_create_buffer[active_dtype](VECTOR_WIDTH)
    var right_tensor = Tensor(right_buffer)

    var output_buffer = ctx.enqueue_create_buffer[active_dtype](VECTOR_WIDTH)
    var output_tensor = Tensor(output_buffer)

    # Initialize input buffers with sample data
    var message_bytes: List[UInt8] = [
        71, 100, 107, 107, 110, 31, 76, 110, 105, 110
    ]
    with left_buffer.map_to_host() as mapped_buffer:
        var mapped_tensor = Tensor(mapped_buffer)
        for idx in range(VECTOR_WIDTH):
            mapped_tensor[idx] = message_bytes[idx]
    _ = right_buffer.enqueue_fill(1)

    # Launch GPU kernel
    ctx.enqueue_function_checked[vector_addition, vector_addition](
        left_tensor,
        right_tensor,
        output_tensor,
        grid_dim=1,
        block_dim=VECTOR_WIDTH,
    )
    ctx.synchronize()

    # Read results back and print as ASCII
    with output_buffer.map_to_host() as mapped_buffer:
        var mapped_tensor = Tensor(mapped_buffer)
        for idx in range(VECTOR_WIDTH):
            print(chr(Int(mapped_tensor[idx])), end="")
        print()
```

Output:

```output
Hello Mojo
```

:::tip
Learn Mojo GPU programming through the interactive
[Mojo GPU Puzzles](https://puzzles.modular.com/introduction.html).
:::
