---
license: mit
language:
- en
pipeline_tag: text2text-generation
arxiv: 2310.04921
model-index:
- name: crystal-11b
  results:
  - task:
      type: question-answering
      name: Commonsense Question Answering
    dataset:
      type: openbookqa
      name: OpenBookQA
    metrics:
    - type: accuracy
      value: 84.58
      name: Accuracy
  - task:
      type: question-answering
      name: Commonsense Question Answering
    dataset:
      type: ai2_arc
      name: ARC (easy)
      config: ARC-Easy
    metrics:
    - type: accuracy
      value: 87.54
      name: Accuracy
  - task:
      type: question-answering
      name: Commonsense Question Answering
    dataset:
      type: ai2_arc
      name: ARC (challenge)
      config: ARC-Challenge
    metrics:
    - type: accuracy
      value: 73.24
      name: Accuracy
  - task:
      type: question-answering
      name: Commonsense Question Answering
    dataset:
      type: commonsense_qa
      name: CommonsenseQA
    metrics:
    - type: accuracy
      value: 82.31
      name: Accuracy
  - task:
      type: question-answering
      name: Commonsense Question Answering
    dataset:
      type: qasc
      name: QASC
    metrics:
    - type: accuracy
      value: 81.97
      name: Accuracy
  - task:
      type: question-answering
      name: Commonsense Question Answering
    dataset:
      type: piqa
      name: Physical IQA
    metrics:
    - type: accuracy
      value: 88.08
      name: Accuracy
  - task:
      type: question-answering
      name: Commonsense Question Answering
    dataset:
      type: social_i_qa
      name: Social IQA
    metrics:
    - type: accuracy
      value: 82.24
      name: Accuracy
  - task:
      type: question-answering
      name: Commonsense Question Answering
    dataset:
      type: winogrande
      name: Winogrande
      config: winogrande_xl
    metrics:
    - type: accuracy
      value: 90.77
      name: Accuracy
---

# Model Card for Crystal

Crystal is an introspective reasoning model for commonsense QA. See our paper at https://arxiv.org/abs/2310.04921.

## Model Details

### Model Description

Crystal answers a given commonsense question by first generating a relevant knowledge statement, and then predicting the final answer by referencing the generated knowledge. We call this process "introspective reasoning"; it improves both the prediction accuracy and the interpretability of neural models on reasoning tasks.

- **Developed by:** Jiacheng Liu, Ramakanth Pasunuru, Hannaneh Hajishirzi, Yejin Choi, Asli Celikyilmaz
- **Shared by [optional]:** Jiacheng Liu
- **Model type:** Transformers
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model [optional]:** t5-11b

### Model Sources [optional]

- **Repository:**
- **Paper [optional]:** https://arxiv.org/abs/2310.04921
- **Demo [optional]:**

## Uses

### Direct Use

Crystal is intended to answer commonsense questions via an "introspective reasoning" process.

### Out-of-Scope Use

Crystal is a research prototype and may produce incorrect answers or flawed reasoning. Do not use it for making critical decisions. It is intended to answer commonsense questions, and may be unreliable on inputs outside this scope.

## Bias, Risks, and Limitations

See the **Limitations** section of our paper.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained('liujch1998/crystal-11b')
model = AutoModelForSeq2SeqLM.from_pretrained('liujch1998/crystal-11b')
model.eval()

max_question_len, max_knowledge_len, max_answer_len = 128, 32, 2
k = 1  # number of knowledge statements to generate
top_p = 0.0001  # near-greedy sampling

question = 'If the mass of an object gets bigger what will happen to the amount of matter contained within it? \n (A) gets bigger (B) gets smaller'
choices = ['A', 'B']
choices_ids = tokenizer(choices, return_tensors='pt', padding='max_length', truncation='longest_first', max_length=max_answer_len).input_ids  # (C, AL)

# Stage 1: generate a knowledge statement for the question
prompt = question + ' \n Knowledge: '
prompt_tok = tokenizer(prompt, return_tensors='pt', padding='max_length', truncation='longest_first', max_length=max_question_len)  # (1, QL)
knowledges_ids = model.generate(
    input_ids=prompt_tok.input_ids,
    attention_mask=prompt_tok.attention_mask,
    max_length=max_knowledge_len + 1,
    min_length=3,
    do_sample=True,
    num_return_sequences=k,
    top_p=top_p,
)  # (K, 1+KL); begins with 0 (decoder start token); ends with 1 ([EOS])
knowledges_ids = knowledges_ids[:, 1:].contiguous()  # (K, KL); drop the decoder start token
knowledges = tokenizer.batch_decode(knowledges_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)

# Stage 2: predict the answer, conditioned on the question and the generated knowledge
prompts = [question + (f' \n Knowledge: {knowledge} \n Answer: ' if knowledge != '' else ' \n Answer:') for knowledge in knowledges]
prompts_tok = tokenizer(prompts, return_tensors='pt', padding='max_length', truncation='longest_first', max_length=max_question_len + max_knowledge_len)  # (K, QL+KL)
output = model(
    input_ids=prompts_tok.input_ids,
    attention_mask=prompts_tok.attention_mask,
    labels=choices_ids[0].unsqueeze(0).repeat(len(knowledges), 1),
)
logitsss = output.logits  # (K, AL, V)
logitss = logitsss[:, 0, :]  # (K, V); logits at the first answer-token position
choice_ids = choices_ids[:, 0]  # (C,); first token id of each answer choice
answer_logitss = logitss.gather(dim=1, index=choice_ids.unsqueeze(0).expand(len(knowledges), -1))  # (K, C)
answer_probss = answer_logitss.softmax(dim=1)  # (K, C)
answer_probs = answer_probss.max(dim=0).values  # (C,); pool over knowledge statements
pred = choices[answer_probs.argmax(dim=0).item()]

print(f'Question: {question}\nKnowledge: {knowledges[0]}\nAnswer: {pred}')
```
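Note that this is an 11B-parameter T5 checkpoint, so loading it in full precision requires substantial memory. If that is a constraint, a half-precision, device-mapped load along the following lines may help. This is a minimal sketch, not part of the reference snippet above; it assumes a recent `transformers` with `accelerate` installed and enough GPU memory for fp16 weights:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained('liujch1998/crystal-11b')
# fp16 weights halve memory use relative to fp32; device_map='auto' lets
# accelerate place the layers across available devices (assumption:
# `accelerate` is installed and GPU memory suffices).
model = AutoModelForSeq2SeqLM.from_pretrained(
    'liujch1998/crystal-11b',
    torch_dtype=torch.float16,
    device_map='auto',
)
model.eval()
```

With a device-mapped model, move the tokenized inputs onto the model's device (e.g., `prompt_tok.input_ids.to(model.device)`) before calling `generate` or the forward pass.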
You may also refer to our repository for the implementation.

## Citation [optional]

**BibTeX:**

```
@article{Liu2023CrystalIR,
  title={Crystal: Introspective Reasoners Reinforced with Self-Feedback},
  author={Jiacheng Liu and Ramakanth Pasunuru and Hannaneh Hajishirzi and Yejin Choi and Asli Celikyilmaz},
  journal={ArXiv},
  year={2023},
  volume={abs/2310.04921}
}
```

## Model Card Contact

Jiacheng Liu