Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown (the error below explains why).
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowNotImplementedError
Message:      Cannot write struct type 'activation_fn_kwargs' with no child field to Parquet. Consider adding a dummy child field.
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1869, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 578, in write_table
                  self._build_writer(inferred_schema=pa_table.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 399, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'activation_fn_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1885, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 597, in finalize
                  self._build_writer(self.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 399, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'activation_fn_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1392, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1041, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 999, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1740, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1896, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
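
The root cause is a dict-typed column whose only value is an empty dict (activation_fn_kwargs in the preview row below, and likewise model_kwargs): Arrow infers it as a struct type with zero child fields, which PyArrow's Parquet writer cannot serialize. Below is a minimal sketch reproducing the failure and one possible workaround (JSON-encoding the dict); it is illustrative only, not the worker's actual code.

```python
# Minimal reproduction of the ArrowNotImplementedError above, plus one possible
# workaround. Illustrative only; the column name mirrors this dataset's schema.
import json
import pyarrow as pa
import pyarrow.parquet as pq

# A column containing only empty dicts is inferred as struct<> (no child fields).
table = pa.table({"activation_fn_kwargs": [{}]})
print(table.schema)  # activation_fn_kwargs: struct<>

try:
    pq.write_table(table, "/tmp/config.parquet")
except pa.ArrowNotImplementedError as err:
    print(err)  # Cannot write struct type ... with no child field to Parquet.

# Workaround: serialize the dict to a JSON string so Parquet sees a plain
# string column (an alternative is adding a dummy child field, as the error
# message suggests).
fixed = pa.table({"activation_fn_kwargs": [json.dumps({})]})
pq.write_table(fixed, "/tmp/config.parquet")  # succeeds
```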


Dataset schema and preview row. The single previewed row holds an SAE Lens training configuration; each entry is listed as column (type): value.

model_name (string): llava-hf/llava-v1.6-mistral-7b-hf
local_model_path (string): /home/saev/changye/model/llava
model_class_name (string): HookedLlava
hook_name (string): blocks.16.hook_resid_post
hook_eval (string): NOT_IN_USE
hook_layer (int64): 16
hook_head_index (null): null
dataset_path (string): /home/saev/changye/data/pile100k-tokenized-text-llava2048
dataset_trust_remote_code (bool): true
streaming (bool): true
is_dataset_tokenized (bool): true
context_size (int64): 2048
use_cached_activations (bool): false
cached_activations_path (null): null
architecture (string): standard
d_in (int64): 4096
d_sae (int64): 65536
b_dec_init_method (string): zeros
expansion_factor (int64): 16
activation_fn (string): relu
activation_fn_kwargs (dict): {}
normalize_sae_decoder (bool): false
noise_scale (float64): 0
from_pretrained_path (null): null
apply_b_dec_to_input (bool): false
decoder_orthogonal_init (bool): false
decoder_heuristic_init (bool): true
init_encoder_as_decoder_transpose (bool): true
n_batches_in_buffer (int64): 64
training_tokens (int64): 163840000
finetuning_tokens (int64): 0
store_batch_size_prompts (int64): 4
train_batch_size_tokens (int64): 4096
normalize_activations (string): expected_average_only_in
device (string): cuda:3
act_store_device (string): cuda:3
seed (int64): 42
dtype (string): float32
prepend_bos (bool): true
autocast (bool): false
autocast_lm (bool): false
compile_llm (bool): false
llm_compilation_mode (null): null
compile_sae (bool): false
sae_compilation_mode (null): null
adam_beta1 (float64): 0.9
adam_beta2 (float64): 0.999
mse_loss_normalization (null): null
l1_coefficient (int64): 5
lp_norm (float64): 1
scale_sparsity_penalty_by_decoder_norm (bool): true
l1_warm_up_steps (int64): 2000
lr (float64): 0.00005
lr_scheduler_name (string): constant
lr_warm_up_steps (int64): 0
lr_end (float64): 0.000005
lr_decay_steps (int64): 8000
n_restart_cycles (int64): 1
finetuning_method (null): null
use_ghost_grads (bool): false
feature_sampling_window (int64): 1000
dead_feature_window (int64): 1000
dead_feature_threshold (float64): 0.0001
n_eval_batches (int64): 10
eval_batch_size_prompts (null): null
log_to_wandb (bool): true
log_activations_store_to_wandb (bool): false
log_optimizer_state_to_wandb (bool): false
wandb_project (string): interp
wandb_id (null): null
run_name (string): 65536-L1-5-LR-5e-05-Tokens-1.638e+08
wandb_entity (null): null
wandb_log_frequency (int64): 30
eval_every_n_wandb_logs (int64): 20
resume (bool): false
n_checkpoints (int64): 20
checkpoint_path (string): checkpoints/xepk4xea
verbose (bool): true
model_kwargs (dict): {}
model_from_pretrained_kwargs (dict): { "n_devices": 3 }
sae_lens_version (string): 3.20.0
sae_lens_training_version (string): 3.20.0
vision (bool): false
tokens_per_buffer (int64): 536870912
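
The dict-typed columns whose only value is an empty dict (activation_fn_kwargs and model_kwargs) are what produce the empty-struct schema the Parquet conversion rejects. The sketch below shows one way such a config row could be published so the viewer works; the repo id is a placeholder and the JSON-encoding step is an assumption, not the uploader's actual pipeline.

```python
# Hedged sketch: publish a config row like the one above with dict-valued
# fields stored as JSON strings, so the Hub's Parquet conversion succeeds.
import json
from datasets import Dataset

config = {
    "model_name": "llava-hf/llava-v1.6-mistral-7b-hf",
    "hook_name": "blocks.16.hook_resid_post",
    "hook_layer": 16,
    "d_in": 4096,
    "d_sae": 65536,
    "activation_fn": "relu",
    "activation_fn_kwargs": {},   # empty dict -> struct<> -> Parquet error
    "model_kwargs": {},
    "model_from_pretrained_kwargs": {"n_devices": 3},
    # ... remaining fields from the table above
}

# Encode dict-valued fields as JSON strings so Arrow infers plain string columns.
for key in ("activation_fn_kwargs", "model_kwargs", "model_from_pretrained_kwargs"):
    config[key] = json.dumps(config[key])

ds = Dataset.from_list([config])
print(ds.features)  # the former dict fields now appear as string columns
# ds.push_to_hub("your-username/llava-sae-config")  # placeholder repo id
```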

No dataset card yet

Downloads last month: 5