<p>
  Implement a GPU program that computes the Fast Fourier Transform (FFT) of a
  complex-valued 1-D signal. Given an input <code>signal</code> array containing
  <code>N</code> complex numbers stored as interleaved real/imaginary pairs,
  compute the discrete Fourier transform and store the result in the
  <code>spectrum</code> array. The FFT converts a time-domain signal into its
  frequency-domain representation using the formula: \[ X_k = \sum_{n=0}^{N-1}
  x_n \cdot e^{-j 2\pi kn / N} \quad \text{for } k = 0, 1, \ldots, N-1 \] The
  FFT algorithm reduces the computational complexity from O(N²) to O(N log N) by
  exploiting symmetries in the twiddle factors.
</p>

<h2>Implementation Requirements</h2>
<ul>
  <li>External FFT libraries (e.g., cuFFT) are not permitted</li>
  <li>The <code>solve</code> function signature must remain unchanged</li>
  <li>The final result must be stored in the <code>spectrum</code> array</li>
  <li>The kernel must be entirely GPU-resident—no host-side FFT calls</li>
  <li>
    Both input and output use interleaved real/imaginary layout:
    <code>[real₀, imag₀, real₁, imag₁, ...]</code>
  </li>
</ul>

<h2>Example 1:</h2>
<pre>
Input:  N = 4
        signal = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
        (represents: [1+0j, 0+0j, 0+0j, 0+0j])

Output: spectrum = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
        (represents: [1+0j, 1+0j, 1+0j, 1+0j])
</pre>

<h2>Example 2:</h2>
<pre>
Input:  N = 2
        signal = [1.0, 0.0, 1.0, 0.0]
        (represents: [1+0j, 1+0j])

Output: spectrum = [2.0, 0.0, 0.0, 0.0]
        (represents: [2+0j, 0+0j])
</pre>

<h2>Constraints</h2>
<ul>
  <li><code>1 ≤ N ≤ 262,144</code></li>
  <li>All values are 32-bit floating-point numbers</li>
  <li>Absolute error ≤ 1e-3 and relative error ≤ 1e-3</li>
  <li>Input and output arrays have length <code>2 × N</code></li>
</ul>
