
Creative Leap-of-Thought (CLoT)

This repository contains the model checkpoint for "Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative Humor Generation" [paper].

Introduction

To the best of our knowledge, we are the first to deeply explore the Leap-of-Thought (LoT) ability in multimodal large language models (LLMs). LoT challenges LLMs to think outside the box, a non-sequential thinking skill that is just as crucial as the popular sequential thinking abilities, such as Chain-of-Thought-based methods. In this study, we examine the LLM's LoT ability through the lens of a humor generation game called Oogiri (大喜利). The Oogiri game is an ideal platform for exploring LoT, as it compels participants to think outside the box and give unexpected, humorous responses to multimodal prompts, covering image-to-text (I2T), text-to-text (T2T), and image&text-to-text (IT2T) settings.

🤣👉Click [project page] for funny examples👈.

Quickstart 🤗

We provide a simple Chinese example of zero-shot inference with CLoT; only a few lines of code are needed, as shown below. The example assumes the transformers, peft, and torch packages are installed.

from transformers import AutoTokenizer
from transformers.generation import GenerationConfig
from peft import AutoPeftModelForCausalLM
import torch

mpath = "zhongshsh/CLoT-cn"

# Load the tokenizer and generation config shipped with the checkpoint.
tokenizer = AutoTokenizer.from_pretrained(mpath, trust_remote_code=True)
generation_config = GenerationConfig.from_pretrained(mpath, trust_remote_code=True)

# Load the base model together with its LoRA adapter weights via PEFT.
model = AutoPeftModelForCausalLM.from_pretrained(
    mpath, 
    device_map="cuda",
    trust_remote_code=True
).eval()

# Build a multimodal (image + text) query. The Chinese prompt translates to:
# "Let's think outside the box. Look at the image carefully and write an
# unexpected and humorous sentence."
query = tokenizer.from_list_format([
    {'image': 'https://i.postimg.cc/Fz0bVzpm/test.png'},
    {'text': '让我们打破常规思维思考问题。请仔细阅读图片,写出一个令人感到意外且搞笑的句子。'},
])
response, history = model.chat(tokenizer, query=query, history=None, generation_config=generation_config)
print(response)
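
Since Oogiri also covers a text-only (T2T) setting, the same interface can likely be driven without an image. The snippet below is a minimal sketch that continues the example above; it assumes from_list_format accepts lists without an 'image' entry (as the underlying Qwen-VL tokenizer does), and the prompt text is our own illustration, not from the original card.

# Hedged sketch: a text-only (T2T) query, reusing the tokenizer and model
# loaded above. The illustrative prompt translates to: "Let's think outside
# the box. Write an unexpected and humorous reply to the sentence
# 'It's Monday again.'"
query = tokenizer.from_list_format([
    {'text': '让我们打破常规思维思考问题。请针对句子"又到星期一了"写出一个令人感到意外且搞笑的回复。'},
])
response, _ = model.chat(tokenizer, query=query, history=None, generation_config=generation_config)
print(response)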

Notice

We strongly advise users not to spread, or assist others in spreading, harmful content generated with this model, including but not limited to hate speech, violence, pornography, and fraudulent materials.

Citation

@article{zhong2023clot,
  title={Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative Humor Generation},
  author={Zhong, Shanshan and Huang, Zhongzhan and Gao, Shanghua and Wen, Wushao and Lin, Liang and Zitnik, Marinka and Zhou, Pan},
  journal={arXiv preprint arXiv:2312.02439},
  year={2023}
}