Dataset Preview

The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 12 new columns ({'right_sampler', 'domain', 'physics', 'frequency_samples', 'top_samples', 'top_sampler', 'right_samples', 'top_boundary', 'mesh', 'n_observations', 'right_boundary', 'frequency_sampler'}) and 12 missing columns ({'transport_0', 'rank', 'mkdir_mus', 'threads', 'meta_sort_merge_mus', 'minmax_mus', 'bytes', 'start', 'transport_1', 'buffering_mus', 'aggregation_mus', 'memcpy_mus'}).

This happened while the json dataset builder was generating data using

hf://datasets/JakobEWagner/helmholtz_pulsating_sphere_s4096_f400-500_Yre0-1_Yim-1-0/train/properties.json (at revision 87bd9acb3b41ac1cd447848a711de833ff02aa6a)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              domain: struct<box_lengths: list<item: double>, sphere_radius: double, ndim: int64>
                child 0, box_lengths: list<item: double>
                    child 0, item: double
                child 1, sphere_radius: double
                child 2, ndim: int64
              mesh: struct<elements_per_wavelengths: int64, elements_per_radians: double, excitation_boundary: int64, top_boundary: int64, right_boundary: int64>
                child 0, elements_per_wavelengths: int64
                child 1, elements_per_radians: double
                child 2, excitation_boundary: int64
                child 3, top_boundary: int64
                child 4, right_boundary: int64
              physics: struct<c: double, rho: double>
                child 0, c: double
                child 1, rho: double
              n_observations: int64
              frequency_sampler: struct<type: string, x_min: list<item: double>, x_max: list<item: double>>
                child 0, type: string
                child 1, x_min: list<item: double>
                    child 0, item: double
                child 2, x_max: list<item: double>
                    child 0, item: double
              top_boundary: string
              top_sampler: struct<type: string, x_min: list<item: double>, x_max: list<item: double>>
                child 0, type: string
                child 1, x_min: list<item: double>
                    child 0, item: double
                child 2, x_max: list<item: double>
                    child 0, item: double
              right_boundary: string
              right_sampler: struct<type: string, x_min: list<item: double>, x_max: list<item: double>>
                child 0, type: string
                child 1, x_min: list<item: double>
                    child 0, item: double
                child 2, x_max: list<item: double>
                    child 0, item: double
              frequency_samples: list<item: double>
                child 0, item: double
              top_samples: list<item: list<item: double>>
                child 0, item: list<item: double>
                    child 0, item: double
              right_samples: list<item: list<item: double>>
                child 0, item: list<item: double>
                    child 0, item: double
              to
              {'transport_0': {'close_mus': Value(dtype='int64', id=None), 'open_mus': Value(dtype='int64', id=None), 'type': Value(dtype='string', id=None), 'write_mus': Value(dtype='int64', id=None)}, 'rank': Value(dtype='int64', id=None), 'mkdir_mus': Value(dtype='int64', id=None), 'threads': Value(dtype='int64', id=None), 'meta_sort_merge_mus': Value(dtype='int64', id=None), 'minmax_mus': Value(dtype='int64', id=None), 'bytes': Value(dtype='int64', id=None), 'start': Value(dtype='string', id=None), 'transport_1': {'close_mus': Value(dtype='int64', id=None), 'open_mus': Value(dtype='int64', id=None), 'type': Value(dtype='string', id=None), 'write_mus': Value(dtype='int64', id=None)}, 'buffering_mus': Value(dtype='int64', id=None), 'aggregation_mus': Value(dtype='int64', id=None), 'memcpy_mus': Value(dtype='int64', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1577, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1191, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
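The multiple-configurations fix suggested in the error message can be sketched in the dataset README's YAML front matter. A hedged example, assuming the conflicting I/O-profiling records live in a separate JSON file — the `train/profiling.json` path below is a placeholder, not a file name confirmed by the error message:

```yaml
configs:
- config_name: properties
  data_files:
  - split: train
    path: train/properties.json
- config_name: profiling   # placeholder name for the file with transport_0, rank, ...
  data_files:
  - split: train
    path: train/profiling.json
```

With separate configurations, the viewer builds each schema independently and no longer needs to cast both column sets into a single Arrow schema.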


The preview shows two incompatible schemas side by side: an I/O-profiling record (row 1) and a simulation-properties record (row 2).

| Column | Type | Row 1 (profiling) | Row 2 (properties) |
|---|---|---|---|
| transport_0 | dict | { "close_mus": 10, "open_mus": 111, "type": "File_POSIX", "write_mus": 695151 } | null |
| rank | int64 | 0 | null |
| mkdir_mus | int64 | 100 | null |
| threads | int64 | 1 | null |
| meta_sort_merge_mus | int64 | 163,071 | null |
| minmax_mus | int64 | 225,509 | null |
| bytes | int64 | 762,404,487 | null |
| start | string | Tue_Jul_09_12:07:07_2024 | null |
| transport_1 | dict | { "close_mus": 0, "open_mus": 106, "type": "File_POSIX", "write_mus": 29679 } | null |
| buffering_mus | int64 | 549,520 | null |
| aggregation_mus | int64 | 0 | null |
| memcpy_mus | int64 | 45,455 | null |
| domain | dict | null | { "box_lengths": [ 1, 1 ], "sphere_radius": 0.2, "ndim": 2 } |
| mesh | dict | null | {"elements_per_wavelengths":15,"elements_per_radians":30.0,"excitation_boundary":101,"top_boundary":(...TRUNCATED) |
| physics | dict | null | { "c": 343.2, "rho": 1.2043 } |
| n_observations | int64 | null | 4,096 |
| frequency_sampler | dict | null | { "type": "UniformSampler", "x_min": [ 400 ], "x_max": [ 500 ] } |
| top_boundary | string | null | lambda a: lambda x: np.ones(x[0].shape) * a[0] + 1j * a[1] * np.ones(x[0].shape) |
| top_sampler | dict | null | { "type": "UniformSampler", "x_min": [ 0, -1 ], "x_max": [ 1, 0 ] } |
| right_boundary | string | null | lambda a: lambda x: np.ones(x[1].shape) * a[0] + 1j * a[1] * np.ones(x[1].shape) |
| right_sampler | dict | null | { "type": "UniformSampler", "x_min": [ 0, -1 ], "x_max": [ 1, 0 ] } |
| frequency_samples | sequence | null | [434.6165019391972,420.4460772608176,405.8112930826629,405.62020055309284,448.8348308228759,466.2431(...TRUNCATED) |
| top_samples | sequence | null | [[0.8601686212786032,-0.2786319944558162],[0.791580906537889,-0.7637266205339663],[0.780039091103954(...TRUNCATED) |
| right_samples | sequence | null | [[0.7649005505092011,-0.5652698979192223],[0.37385365784649094,-0.5976737086596499],[0.3398699743841(...TRUNCATED) |
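A properties record like row 2 can be inspected with the standard library alone. A minimal sketch, with values copied from the preview above (nested fields abridged; the real file contains more keys, e.g. `mesh` and the sampler entries):

```python
import json

# A trimmed properties record; values are taken from the preview above.
record = json.loads("""
{
  "domain": {"box_lengths": [1, 1], "sphere_radius": 0.2, "ndim": 2},
  "physics": {"c": 343.2, "rho": 1.2043},
  "n_observations": 4096,
  "frequency_sampler": {"type": "UniformSampler", "x_min": [400], "x_max": [500]}
}
""")

# The shortest resolved wavelength follows from c and the highest frequency.
f_max = record["frequency_sampler"]["x_max"][0]
wavelength_min = record["physics"]["c"] / f_max  # 343.2 m/s divided by 500 Hz
```

The derived `wavelength_min` is the length scale the mesh resolution criterion (15 elements per wavelength, see below) is based on.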

The Helmholtz pulsating sphere dataset provides numerical solutions of the time-harmonic Helmholtz equation. The dataset was generated using the DOLFINx framework.

[Figure: sketch of the simulation domain]

Dataset Details

  • Domain Description: The 2D domain consists of a pulsating sphere placed at the center of an enclosing square.
  • Physics Constants: The speed of sound is $c = 343.20\,\mathrm{m\,s^{-1}}$ and the fluid density is $\rho = 1.2043\,\mathrm{kg\,m^{-3}}$.
  • Frequency: The frequency is sampled uniformly from the interval $[400\,\mathrm{Hz},\, 500\,\mathrm{Hz}]$.
  • Velocity Boundary Condition: The surface $S$ of the sphere carries a velocity boundary condition with $v_0 = 10^{-3}\,\mathrm{mm\,s^{-1}}$.
  • Normalized Admittance Boundary Condition: Both boundaries $\Gamma_1$ and $\Gamma_2$ carry a normalized admittance boundary condition. The value for each boundary is sampled uniformly and independently, with real part in $[0,\, 1]$ and imaginary part in $[-1,\, 0]$ (matching the sampler bounds in the preview).
  • Samples: The train dataset contains a total of 4096 samples and the test dataset 512 samples.
  • Mesh: Gmsh is used to create the mesh, with parameters chosen to ensure at least 15 elements per wavelength at the highest frequency. The mesh is created once and reused for every sample.
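The sampler records in the preview suggest a simple box-uniform sampler. A minimal sketch of that idea (assuming independent per-component uniform draws; the dataset's actual generator code may differ):

```python
import numpy as np

def uniform_sampler(x_min, x_max, n, seed=0):
    """Draw n points uniformly from the box [x_min, x_max], component-wise.

    Mirrors the {"type": "UniformSampler", "x_min": ..., "x_max": ...}
    records shown in the preview; treat this as an illustration, not the
    published generator implementation.
    """
    rng = np.random.default_rng(seed)
    lo = np.asarray(x_min, dtype=float)
    hi = np.asarray(x_max, dtype=float)
    return rng.uniform(lo, hi, size=(n, lo.size))

# Frequency in [400 Hz, 500 Hz]; admittance Y with Re in [0, 1], Im in [-1, 0].
freqs = uniform_sampler([400.0], [500.0], 4096)
admittance = uniform_sampler([0.0, -1.0], [1.0, 0.0], 4096)
```

Each boundary gets its own draw, which is consistent with the separate `top_sampler` and `right_sampler` records.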

Generation Method

To define, assemble, and solve the problem, we use the DOLFINx FEM framework. DOLFINx is the computational environment of FEniCSx, the next generation of FEniCS. FEniCSx provides a rich user interface for defining the PDE and its boundary conditions in the Unified Form Language (UFL), assembling the problem, configuring a solver and its options, solving the system, and writing the solution to files. It benefits from a large community and is one of the most popular open-source alternatives to commercial software such as COMSOL. The solution process involves several steps:

  1. Mesh Loading: The mesh is created using the Gmsh API and is then imported into DOLFINx, along with the previously defined tags for cells and boundaries.
  2. Function Space Definition: The function space on which the trial and test functions are defined is a Lagrange space of degree 2. Additionally, a second Lagrange space of degree 1 is defined to facilitate saving the solution in simpler file formats, as higher-order elements are not supported by all output schemes.
  3. Defining the Linear Problem: The linear problem is defined using the weak formulation of the Helmholtz equation in UFL.
  4. Solving the System: A “preonly” Krylov method with a Cholesky preconditioner is employed, using the MUltifrontal Massively Parallel sparse direct Solver (MUMPS), a parallel sparse direct solver, to obtain efficient and precise solutions.
  5. Output Saving: Solutions are saved in the hierarchical ADIOS2 native binary-pack (bp, version 4) format. The detailed implementation can be found in the GitHub repository: https://github.com/JakobEliasWagner/Helmholtz-Pulsating-Sphere.
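Step 3 can be made concrete. A sketch of the weak formulation with the velocity and normalized-admittance boundary terms described above; the signs of the boundary terms depend on the time convention ($e^{+\mathrm{i}\omega t}$ vs. $e^{-\mathrm{i}\omega t}$), which the card does not state, so treat the signs as an assumption:

```latex
% Find p in V such that, for all test functions q in V:
\int_\Omega \nabla p \cdot \nabla \bar q \,\mathrm{d}x
  - k^2 \int_\Omega p \, \bar q \,\mathrm{d}x
  + \mathrm{i} k \int_{\Gamma_1 \cup \Gamma_2} Y \, p \, \bar q \,\mathrm{d}s
  = \mathrm{i} \, \omega \rho \, v_0 \int_S \bar q \,\mathrm{d}s,
\qquad k = \frac{2\pi f}{c}, \quad \omega = 2\pi f.
```

Expressed in UFL, each integral maps directly to a `dx` or `ds` measure over the tagged cells and boundaries.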

Usage

The dataset is saved in the hierarchical ADIOS2 native binary-pack (bp, version 4) format. The solution $p$ is a complex-valued function stored as p_real and p_imag. As the domain is symmetric (indicated by dash-dotted lines in the figure), only one quarter of the domain is simulated and saved to file; the saved data covers the upper-right (first) quadrant.
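Reading the .bp files requires an ADIOS2 build with Python bindings, which is not sketched here. The quarter-domain symmetry itself can be illustrated with plain NumPy, assuming the field has been resampled onto a regular grid of the first quadrant with index [0, 0] at the symmetry center (the files store an unstructured FEM solution, so this is an illustration, not a reader for the dataset format):

```python
import numpy as np

def mirror_quadrant(p_quadrant):
    """Rebuild the full symmetric field from the stored first quadrant.

    p_quadrant: (ny, nx) array on a regular grid of the upper-right
    quadrant, with [0, 0] at the symmetry center. Note that this naive
    mirroring duplicates the samples lying on the symmetry axes.
    """
    top = np.concatenate([p_quadrant[:, ::-1], p_quadrant], axis=1)  # mirror across x = 0
    return np.concatenate([top[::-1, :], top], axis=0)               # mirror across y = 0

# Placeholder arrays standing in for p_real and p_imag read from a .bp file.
p_real = np.arange(6.0).reshape(2, 3)
p_imag = np.zeros((2, 3))
p_full = mirror_quadrant(p_real + 1j * p_imag)  # complex pressure on the full domain
```

The reconstructed array is symmetric under reflection in both axes, matching the dash-dotted symmetry lines in the figure.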

References

[DOLFINx] Igor A. Baratta, Joseph P. Dean, Jørgen S. Dokken, Michal Habera, Jack S. Hale, Chris N. Richardson, Marie E. Rognes, Matthew W. Scroggs, Nathan Sime, and Garth N. Wells. DOLFINx: the next generation FEniCS problem solving environment.

[Gmsh] Christophe Geuzaine and Jean-François Remacle. Gmsh: A 3-D finite element mesh generator with built-in pre- and post-processing facilities. 79(11):1309–1331. Publisher: Wiley Online Library.

[Acoustics] Steffen Marburg and Bodo Nolte. Computational acoustics of noise propagation in fluids: finite and boundary element methods, volume 578. Springer.
