---
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-1b7
tags:
- generated_from_trainer
model-index:
- name: Bloom-1b7-dialogsum-IT
  results: []
---

# Bloom-1b7-dialogsum-IT

This model is an instruction-tuned version of [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7) on a dialogue summarization dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

Instruction-tuned on the dialogue summarization (DialogSum) task here: https://huggingface.co/datasets/adambjorn/UnrelatedForgettingOverhead/viewer/dialogsum/train

## Training procedure

Given a set of prompts:

```python
prompts = [
    "Provide a concise summary for the following dialogue:",
    "Summarize this conversation in a few sentences:",
    "Here is a dialogue. Can you summarize it briefly?",
    "Read the following dialogue and write a short summary:",
    "Condense the essence of this conversation into a summary:"
]
```

each training example concatenates a randomly chosen prompt, the dialogue, an end-of-sequence marker, and the summary:

```python
import random

concatenated_texts = [
    random.choice(prompts) + " " + dialogue + "</s>" + " Summary:" + summary
    for dialogue, summary in zip(examples['dialogue'], examples['summary'])
]
```
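For illustration, here is a minimal sketch of how this preprocessing might be applied to the dataset. It assumes the `prompts` list above is in scope; the dataset config and split names are inferred from the viewer URL above, and the `max_length` value is an assumption:

```python
import random

from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b7")
# Config ("dialogsum") and split ("train") inferred from the dataset viewer URL.
dataset = load_dataset("adambjorn/UnrelatedForgettingOverhead", "dialogsum", split="train")

def preprocess(examples):
    # Same concatenation as above: random prompt + dialogue + EOS + "Summary:" + summary.
    concatenated_texts = [
        random.choice(prompts) + " " + dialogue + "</s>" + " Summary:" + summary
        for dialogue, summary in zip(examples["dialogue"], examples["summary"])
    ]
    # Tokenize for causal-LM fine-tuning; max_length is an assumed value.
    return tokenizer(concatenated_texts, truncation=True, max_length=1024)

tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset.column_names)
```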
### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
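These settings map onto `transformers.TrainingArguments` roughly as follows. This is a sketch, not the exact configuration used; `output_dir` is illustrative and any argument not listed above is left at its default:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Bloom-1b7-dialogsum-IT",  # illustrative name
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 1 * 4 = 4
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,  # Native AMP mixed precision
)
```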
### Training results

Final epoch results:

{'loss': 0.0137, 'grad_norm': 0.6599154472351074, 'learning_rate': 7.000000000000001e-07, 'epoch': 10.0}

Overall training results (train_loss is the average over the full run):

{'train_runtime': 1142.1524, 'train_samples_per_second': 1.751, 'train_steps_per_second': 0.438, 'train_loss': 0.37129621666669843, 'epoch': 10.0}

### Framework versions

- Transformers 4.38.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
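As an illustration of the expected prompt format at inference time, here is a minimal generation sketch that mirrors the training concatenation. The repo id, dialogue text, and generation settings are assumptions, not the card's documented usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative repo id; substitute wherever the fine-tuned weights actually live.
model = AutoModelForCausalLM.from_pretrained("Bloom-1b7-dialogsum-IT")
tokenizer = AutoTokenizer.from_pretrained("Bloom-1b7-dialogsum-IT")

dialogue = "#Person1#: Hi, how are you? #Person2#: Great, thanks!"
# Mirror the training format: prompt + dialogue + EOS + "Summary:", then generate.
prompt = "Provide a concise summary for the following dialogue: " + dialogue + "</s> Summary:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```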