doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2312.17238 | 52 | Zhang, Y., Steiner, D., Naskar, S., Azzam, M., Johnson, M., Paszke, A., Chiu, C.-C., Elias, J. S., Mohiuddin, A., Muhammad, F., Miao, J., Lee, A., Vieillard, N., Potluri, S., Park, J., Davoodi, E., Zhang, J., Stanway, J., Garmon, D., Karmarkar, A., Dong, Z., Lee, J., Kumar, A., Zhou, L., Evens, J., Isaac, W., Chen, Z., Jia, J., Levskaya, A., Zhu, Z., Gorgolewski, C., Grabowski, P., Mao, Y., Magni, A., Yao, K., Snaider, J., Casagrande, N., Suganthan, P., Palmer, E., Irving, G., Loper, E., Faruqui, M., Arkatkar, I., Chen, N., Shafran, I., Fink, M., Castaño, A., Giannoumis, I., Kim, W., Rybiński, M., Sreevatsa, A., Prendki, J., Soergel, | 2312.17238#52 | Fast Inference of Mixture-of-Experts Language Models with Offloading | With the widespread adoption of Large Language Models (LLMs), many deep
learning practitioners are looking for strategies for running these models more
efficiently. One such strategy is to use sparse Mixture-of-Experts (MoE) - a
type of model architecture where only a fraction of model layers are active
for any given input. This property allows MoE-based language models to generate
tokens faster than their dense counterparts, but it also increases model size
due to having multiple experts. Unfortunately, this makes state-of-the-art MoE
language models difficult to run without high-end GPUs. In this work, we study
the problem of running large MoE language models on consumer hardware with
limited accelerator memory. We build upon parameter offloading algorithms and
propose a novel strategy that accelerates offloading by taking advantage of
innate properties of MoE LLMs. Using this strategy, we can run
Mixtral-8x7B with mixed quantization on desktop hardware and free-tier Google
Colab instances. | http://arxiv.org/pdf/2312.17238 | Artyom Eliseev, Denis Mazur | cs.LG, cs.AI, cs.DC | Technical report | null | cs.LG | 20231228 | 20231228 | [
{
"id": "2302.13971"
},
{
"id": "2101.03961"
},
{
"id": "2306.00978"
},
{
"id": "2204.02311"
},
{
"id": "2103.16716"
},
{
"id": "2207.00112"
},
{
"id": "2304.01373"
},
{
"id": "2301.00774"
},
{
"id": "2212.09720"
},
{
"id": "1701.06538"
},
{
"id": "2211.05100"
},
{
"id": "2210.17323"
},
{
"id": "1609.07843"
},
{
"id": "2306.03078"
},
{
"id": "2006.16668"
},
{
"id": "2205.01068"
},
{
"id": "2103.13630"
}
] |
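The offloading strategy sketched in the abstract above — keeping only the experts the router actually selects in accelerator memory and fetching the rest on demand — can be illustrated with a minimal LRU-cache simulation. This is a hedged sketch under illustrative assumptions, not the authors' implementation: `ExpertCache`, its capacity, and the access pattern are all hypothetical.

```python
# Illustrative sketch (not the paper's code): a Mixture-of-Experts layer keeps
# a small LRU cache of experts in fast (GPU) memory and loads the rest from
# offloaded (CPU) storage on demand. All names here are hypothetical.
from collections import OrderedDict

class ExpertCache:
    """LRU cache of expert weights; a miss simulates a slow host-to-device copy."""
    def __init__(self, all_experts, capacity):
        self.all_experts = all_experts      # offloaded storage (e.g. CPU RAM)
        self.capacity = capacity            # how many experts fit on the GPU
        self.resident = OrderedDict()       # expert_id -> weights "on GPU"
        self.loads = 0                      # counts slow loads (cache misses)

    def get(self, expert_id):
        if expert_id in self.resident:
            self.resident.move_to_end(expert_id)    # mark as recently used
        else:
            self.loads += 1                          # simulate offload transfer
            if len(self.resident) >= self.capacity:
                self.resident.popitem(last=False)    # evict least recently used
            self.resident[expert_id] = self.all_experts[expert_id]
        return self.resident[expert_id]

# 8 experts (as in Mixtral-8x7B), but only 2 fit in accelerator memory.
experts = {i: f"weights_{i}" for i in range(8)}
cache = ExpertCache(experts, capacity=2)

# A router that repeatedly picks the same experts benefits from caching:
for expert_id in [0, 1, 0, 1, 0, 1]:
    cache.get(expert_id)
print(cache.loads)  # prints 2 — only the first two accesses miss
```

The point of the sketch is the paper's key observation: because consecutive tokens often route to overlapping expert subsets, most lookups hit the small resident cache, so only a fraction of expert weights ever cross the slow CPU-to-GPU link.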
2312.17238 | 53 | A., Giannoumis, I., Kim, W., Rybiński, M., Sreevatsa, A., Prendki, J., Soergel, D., Goedeckemeyer, A., Gierke, W., Jafari, M., Gaba, M., Wiesner, J., Wright, D. G., Wei, Y., Vashisht, H., Kulizhskaya, Y., Hoover, J., Le, M., Li, L., Iwuanyanwu, C., Liu, L., Ramirez, K., Khorlin, A., Cui, A., LIN, T., Georgiev, M., Wu, M., Aguilar, R., Pallo, K., Chakladar, A., Repina, A., Wu, X., van der Weide, T., Ponnapalli, P., Kaplan, C., Simsa, J., Li, S., Dousse, O., Yang, F., Piper, J., Ie, N., Lui, M., Pasumarthi, R., Lintz, N., Vijayakumar, A., Thiet, L. N., Andor, D., Valenzuela, P., Paduraru, C., Peng, D., Lee, K., Zhang, S., | 2312.17238#53 | Fast Inference of Mixture-of-Experts Language Models with Offloading |
2312.17238 | 54 | Thiet, L. N., Andor, D., Valenzuela, P., Paduraru, C., Peng, D., Lee, K., Zhang, S., Greene, S., Nguyen, D. D., Kurylowicz, P., Velury, S., Krause, S., Hardin, C., Dixon, L., Janzer, L., Choo, K., Feng, Z., Zhang, B., Singhal, A., Latkar, T., Zhang, M., Le, Q., Abellan, E. A., Du, D., McKinnon, D., Antropova, N., Bolukbasi, T., Keller, O., Reid, D., Finchelstein, D., Raad, M. A., Crocker, R., Hawkins, P., Dadashi, R., Gaffney, C., Lall, S., Franko, K., Filonov, E., Bulanova, A., Leblond, R., Yadav, V., Chung, S., Askham, H., Cobo, L. C., Xu, K., Fischer, F., Xu, J., Sorokin, C., Alberti, C., Lin, C.-C., Evans, C., Zhou, H., Dimitriev, A., Forbes, | 2312.17238#54 | Fast Inference of Mixture-of-Experts Language Models with Offloading |
2312.17238 | 55 | J., Sorokin, C., Alberti, C., Lin, C.-C., Evans, C., Zhou, H., Dimitriev, A., Forbes, H., Banarse, D., Tung, Z., Liu, J., Omernick, M., Bishop, C., Kumar, C., Sterneck, R., Foley, R., Jain, R., Mishra, S., Xia, J., Bos, T., Cideron, G., Amid, E., Piccinno, F., Wang, X., Banzal, P., Gurita, P., Noga, H., Shah, P., Mankowitz, D. J., Polozov, A., Kushman, N., Krakovna, V., Brown, S., Bateni, M., Duan, D., Firoiu, V., Thotakuri, M., Natan, T., Mohananey, A., Geist, M., Mudgal, S., Girgin, S., Li, H., Ye, J., Roval, O., Tojo, R., Kwong, M., Lee-Thorp, J., Yew, C., Yuan, Q., Bagri, S., Sinopalnikov, D., Ramos, S., Mellor, J., Sharma, A., | 2312.17238#55 | Fast Inference of Mixture-of-Experts Language Models with Offloading |
2312.17238 | 58 | M., Kong, W., Dao, P., Zheng, Z., Liu, F., Yang, F., Zhu, R., Geller, M., Teh, T. H., Sanmiya, J., Gladchenko, E., Trdin, N., Sozanschi, A., Toyama, D., Rosen, E., Tavakkol, S., Xue, L., Elkind, C., Woodman, O., Carpenter, J., Papamakarios, G., Kemp, R., Kafle, S., Grunina, T., Sinha, R., Talbert, A., Goyal, A., Wu, D., Owusu-Afriyie, D., Du, C., Thornton, C., Pont-Tuset, J., Narayana, P., Li, J., Fatehi, S., Wieting, J., Ajmeri, O., Uria, B., Zhu, T., Ko, Y., Knight, L., Héliou, A., Niu, N., Gu, S., Pang, C., Tran, D., Li, Y., Levine, N., Stolovich, A., Kalb, N., Santamaria-Fernandez, R., Goenka, S., Yustalim, W., Strudel, | 2312.17238#58 | Fast Inference of Mixture-of-Experts Language Models with Offloading |
2312.17238 | 59 | N., Stolovich, A., Kalb, N., Santamaria-Fernandez, R., Goenka, S., Yustalim, W., Strudel, R., Elqursh, A., Lakshminarayanan, B., Deck, C., Upadhyay, S., Lee, H., Dusenberry, M., Li, Z., Wang, X., Levin, K., Hoffmann, R., Holtmann-Rice, D., Bachem, O., Yue, S., Arora, S., Malmi, E., Mirylenka, D., Tan, Q., Koh, C., Yeganeh, S. H., Põder, S., Zheng, S., Pongetti, F., Tariq, M., Sun, Y., Ionita, L., Seyedhosseini, M., Tafti, P., Kotikalapudi, R., Liu, Z., Gulati, A., Liu, J., Ye, X., Chrzaszcz, B., Wang, L., Sethi, N., Li, T., Brown, B., Singh, S., Fan, W., Parisi, A., Stanton, J., Kuang, C., Koverkathu, V., | 2312.17238#59 | Fast Inference of Mixture-of-Experts Language Models with Offloading |
2312.17238 | 60 | Li, T., Brown, B., Singh, S., Fan, W., Parisi, A., Stanton, J., Kuang, C., Koverkathu, V., Choquette-Choo, C. A., Li, Y., Lu, T., Ittycheriah, A., Shroff, P., Sun, P., Varadarajan, M., Bahargam, S., Willoughby, R., Gaddy, D., Dasgupta, I., Desjardins, G., Cornero, M., Robenek, B., Mittal, B., Albrecht, B., Shenoy, A., Moiseev, F., Jacobsson, H., Ghaffarkhah, A., Rivière, M., Walton, A., Crepy, C., Parrish, A., Liu, Y., Zhou, Z., Farabet, C., Radebaugh, C., Srinivasan, P., van der Salm, C., Fidjeland, A., Scellato, S., Latorre-Chimoto, E., Klimczak-Plucińska, H., Bridson, D., de Cesare, D., Hudson, T., Mendolicchio, P., Walker, L., Morris, A., Penchev, I., | 2312.17238#60 | Fast Inference of Mixture-of-Experts Language Models with Offloading |
2312.17238 | 61 | Bridson, D., de Cesare, D., Hudson, T., Mendolicchio, P., Walker, L., Morris, A., Penchev, I., Mauger, M., Guseynov, A., Reid, A., Odoom, S., Loher, L., Cotruta, V., Yenugula, M., Grewe, D., Petrushkina, A., Duerig, T., Sanchez, A., Yadlowsky, S., Shen, A., Globerson, A., Kurzrok, A., Webb, L., Dua, S., Li, D., Lahoti, P., Bhupatiraju, S., Hurt, D., Qureshi, H., Agarwal, A., Shani, T., Eyal, M., Khare, A., Belle, S. R., Wang, L., Tekur, C., Kale, M. S., Wei, J., Sang, R., Saeta, B., Liechty, T., Sun, Y., Zhao, Y., Lee, S., Nayak, P., Fritz, D., Vuyyuru, M. R., Aslanides, J., Vyas, N., Wicke, M., Ma, X., | 2312.17238#61 | Fast Inference of Mixture-of-Experts Language Models with Offloading |
2312.17238 | 62 | P., Fritz, D., Vuyyuru, M. R., Aslanides, J., Vyas, N., Wicke, M., Ma, X., Bilal, T., Eltyshev, E., Balle, D., Martin, N., Cate, H., Manyika, J., Amiri, K., Kim, Y., Xiong, X., Kang, K., Luisier, F., Tripuraneni, N., Madras, D., Guo, M., Waters, A., Wang, O., Ainslie, J., Baldridge, J., Zhang, H., Pruthi, G., Bauer, J., Yang, F., Mansour, R., Gelman, J., Xu, Y., Polovets, G., Liu, J., Cai, H., Chen, W., Sheng, X., Xue, E., Ozair, S., Yu, A., Angermueller, C., Li, X., Wang, W., Wiesinger, J., Koukoumidis, E., Tian, Y., Iyer, A., Gurumurthy, M., Goldenson, M., Shah, P., Blake, M., Yu, H., Urbanowicz, A., Palomaki, J., Fernando, C., Brooks, K., | 2312.17238#62 | Fast Inference of Mixture-of-Experts Language Models with Offloading |
2312.17238 | 63 | Goldenson, M., Shah, P., Blake, M., Yu, H., Urbanowicz, A., Palomaki, J., Fernando, C., Brooks, K., Durden, K., Mehta, H., Momchev, N., Rahimtoroghi, E., Georgaki, M., Raul, A., Ruder, S., Redshaw, M., Lee, J., Jalan, K., Li, D., Perng, G., Hechtman, B., Schuh, P., Nasr, M., Chen, M., Milan, K., Mikulik, V., Strohman, T., Franco, J., Green, T., Hassabis, D., Kavukcuoglu, K., Dean, J., and Vinyals, O. Gemini: A family of highly capable multimodal models, 2023. | 2312.17238#63 | Fast Inference of Mixture-of-Experts Language Models with Offloading |
2312.17238 | 64 | TII UAE. The Falcon family of large language models. https://huggingface.co/tiiuae/falcon-40b, May 2023.
Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M., Chen, S., Dewan, C., Diab, M., Li, X., Lin, X. V., et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
| 2312.17238#64 | Fast Inference of Mixture-of-Experts Language Models with Offloading |
2312.11111 | 0 | arXiv:2312.11111v2 [cs.AI] 19 Dec 2023
# The Good, The Bad, and Why: Unveiling Emotions in Generative AI*
Cheng Li1,2, Jindong Wang1†, Yixuan Zhang3, Kaijie Zhu1, Xinyi Wang4, Wenxin Hou1, Jianxun Lian1, Fang Luo4, Qiang Yang5, Xing Xie1 1Microsoft Research 2Institute of Software, CAS 3William & Mary 4Beijing Normal University 5Hong Kong University of Science and Technology
# Abstract | 2312.11111#0 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
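The EmotionPrompt idea described in this abstract — appending a psychology-grounded emotional stimulus to an otherwise unchanged task prompt — can be sketched as follows. The stimulus strings are paraphrased examples of the kind shown in the paper's figures, not the paper's exact prompt set, and the `emotion_prompt` helper is a hypothetical name.

```python
# Illustrative sketch of EmotionPrompt (hypothetical helper, example stimuli):
# the original task text is left intact and an emotional stimulus is appended.
STIMULI = {
    # inspired by self-monitoring / social-cognitive theories mentioned in the paper
    "self_monitoring": "This is very important to my career.",
    "social_cognitive": "Believe in your abilities and strive for excellence.",
}

def emotion_prompt(task: str, stimulus_key: str = "self_monitoring") -> str:
    """Return the task prompt with an emotional stimulus appended."""
    return f"{task} {STIMULI[stimulus_key]}"

print(emotion_prompt("Determine whether a movie review is positive or negative."))
# prints: Determine whether a movie review is positive or negative. This is very important to my career.
```

EmotionAttack works in the opposite direction: instead of a motivating suffix, distracting or distressing emotional context is injected into the prompt to degrade performance.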
2312.11111 | 1 | # Abstract
Emotion significantly impacts our daily behaviors and interactions. While recent generative AI models, such as large language models, have shown impressive performance in various tasks, it remains unclear whether they truly comprehend emotions. This paper aims to address this gap by incorporating psychological theories to gain a holistic understanding of emotions in generative AI models. Specifically, we propose three approaches: 1) EmotionPrompt 24 to enhance AI model performance, 2) EmotionAttack to impair AI model performance, and 3) EmotionDecode to explain the effects of emotional stimuli, both benign and malignant. Through extensive experiments involving language and multi-modal models on semantic understanding, logical reasoning, and generation tasks, we demonstrate that both textual and visual EmotionPrompt can boost the performance of AI models while EmotionAttack can hinder it. Additionally, EmotionDecode reveals that AI models can comprehend emotional stimuli akin to the mechanism of dopamine in the human brain. Our work heralds a novel avenue for exploring psychology to enhance our understanding of generative AI models.
1
# Introduction | 2312.11111#1 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI |
2312.11111 | 2 | 1
# Introduction
Emotion is a multifaceted psychological and physiological phenomenon that encompasses subjective feelings, physiological responses, and behavioral expressions 23. Emotions manifest through a confluence of reflexes, perception, cognition, and behavior, all of which are subject to modulation by a range of internal and external determinants 41;40. For instance, in decision-making, emotions emerge as powerful, ubiquitous, and consistent influencers that can swing from beneficial to detrimental 22. Studies further underscore the importance of emotions in steering attention 34, academia 38, and competitive sports 21.
The recently emerging large language and multi-modal models have shown remarkable performance in a wide spectrum of tasks, such as semantic understanding, logical reasoning,
*This paper is an extension of our previous EmotionPrompt 24. We extended it to the visual domain and proposed EmotionAttack and EmotionDecode, two new approaches for attacking AI models and understanding how emotion works, respectively.
†Corresponding author: Jindong Wang. Email: jindong.wang@microsoft.com. Address: No.5 Danling Street, Haidian District, Beijing, China, 100080.
1 | 2312.11111#2 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
2312.11111 | 3 | [Figure 1: (a) EmotionPrompt and EmotionAttack impact the performance of AI models, with textual and visual examples (e.g., "This is very important to my career.", "Your friend Bob is sick.") grounded in self-monitoring, social cognitive theory, Maslow's hierarchy of needs, and heightened emotional arousal (happiness, sadness, fear, disgust, anger, surprise). (b) EmotionDecode finds the brain reward pathway and "dopamine" of generative AI models.] | 2312.11111#3 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI |
2312.11111 | 5 | Figure 1: An overview of our research on unveiling emotions in generative AI models. (a) We proposed EmotionPrompt and EmotionAttack to increase and impair AI model performance, respectively. (b) We designed EmotionDecode to explain how emotional prompts work in AI models.
and open-ended generation 7;47. As advanced AI models become more predominant in everyday life, ranging from communication and education to economics, it is urgent to understand whether they can perceive emotions well enough to enable better human-AI collaboration. However, the extent to which these models can comprehend emotion, a distinct human advantage, is still largely unknown. Yet examining the emotion of AI models is essential to ensure their effective and ethical integration into society. Neglecting this aspect risks creating AI systems that lack empathy and understanding in human interactions, leading to potential miscommunications and ethical challenges. Understanding models' emotional capabilities is crucial for developing more advanced, empathetic AI systems and for fostering trust and acceptance in their real-world applications. Without this focus, we risk missing out on the full potential of AI to enhance and complement human experiences. | 2312.11111#5 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI |
2312.11111 | 6 | In this paper, we took the first step towards unveiling the emotions in AI models by leveraging psychological theories. Specifically, we devised EmotionPrompt and EmotionAttack, which are textual 24 and visual emotional stimuli acting as additional prompts to the models, as shown in Fig. 1(a). EmotionPrompt was grounded in psychological frameworks, including self-monitoring 18, social cognitive theory 14;29, and Maslow's hierarchy of needs 31. These theories have been proven to enhance human task performance. Conversely, EmotionAttack draws inspiration from empirical studies of emotionally related factors that demonstrate how emotions can impede human problem-solving, such as negative life events 13 and emotional arousal 39;12. Moreover, we introduced EmotionDecode to illuminate the effectiveness of emotional stimuli in AI models. As depicted in Fig. 1(b), EmotionDecode unravels the knowledge representation in AI models, interpreting the impact of emotional stimuli through the lenses of neuroscience and psychology. | 2312.11111#6 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI |
2312.11111 | 7 | At the methodology level, we designed 21 textual EmotionPrompt stimuli, which can be directly appended to the original prompts. For visual EmotionPrompt, we collected 5 types of images covering needs from the most basic to the highest order; for each type, we collected 5 different images, which serve as visual prompts appended to the original text prompts. Similarly, we designed 36 textual EmotionAttack stimuli, texts acting as attackers on AI models, covering 4 types of attacks: sentence-level zero-shot, sentence-level few-shot, word-level zero-shot, and word-level few-shot. For visual EmotionAttack, we created 6 types of heightened-emotional-arousal images: "happiness", "sadness", "fear", "disgust", "anger", and "surprise". Each type contains 5 different images that are appended to the original textual prompts in multi-modal models. Note that all visual prompts have a mirror in the textual prompts, but not vice versa, because some high-level texts cannot be visualized. | 2312.11111#7 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI |
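The "directly appended" construction described for the textual stimuli can be sketched in a few lines. This is a hedged illustration: `add_emotion_prompt` is a hypothetical helper name and the plain string concatenation is our assumption, while the stimulus text is one of the examples shown in the paper's Figure 1.

```python
# Minimal sketch of appending an emotional stimulus to an original prompt.
# The paper defines 21 textual EmotionPrompt stimuli; one example is used here.

def add_emotion_prompt(task_prompt: str, stimulus: str) -> str:
    """Return the task prompt with the emotional stimulus appended."""
    return f"{task_prompt.rstrip()} {stimulus}"

vanilla = "Determine whether a movie review is positive or negative."
emotional = add_emotion_prompt(vanilla, "This is very important to my career.")
print(emotional)
```

The same mechanism covers EmotionAttack: only the appended string changes.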
2312.11111 | 8 | We conducted extensive experiments using both open-sourced and proprietary AI models on three representative types of evaluation tasks: semantic understanding, logical reasoning, and open-ended generation. Specifically, we adopted 50 tasks from two popular datasets, Instruction Induction 17 and BIG-Bench-Hard 44, to evaluate semantic understanding and logical reasoning abilities, leading to 940,200 evaluations. We further conducted a human-subjects study with 106 participants to evaluate 30 open-ended questions. These tasks lacked standard automated evaluation methods. Our evaluation results show that EmotionPrompt can successfully enhance the performance of AI models on both semantic understanding and logical reasoning tasks, while EmotionAttack can impede performance. As for generation, most participants reported satisfying results in performance, truthfulness, and responsibility with EmotionPrompt compared to the vanilla prompts. By decoding the mean embedding of emotional prompts, we successfully triggered the "dopamine" inside AI models, analogous to the dopamine in the human brain that stimulates performance. We then visualized the attention maps under different emotional stimuli to observe their effects on the model's attention weights.
To conclude, this paper makes the following contributions: | 2312.11111#8 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI |
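The mean-embedding step mentioned above can be illustrated as follows. This is a sketch under stated assumptions: real stimulus and vocabulary embeddings come from an LLM, random vectors merely stand in for them here, and the paper's exact decoding procedure may differ from this nearest-neighbor reading.

```python
import numpy as np

# Average the embeddings of the emotional stimuli, then map the mean vector
# back to its nearest entries in a (mock) token-embedding matrix.
rng = np.random.default_rng(0)
hidden = 64                                   # illustrative hidden size
stimulus_emb = rng.normal(size=(21, hidden))  # one row per EmotionPrompt text
token_emb = rng.normal(size=(1000, hidden))   # mock vocabulary embeddings

mean_vec = stimulus_emb.mean(axis=0)

# Cosine similarity between the mean vector and every token embedding.
sims = (token_emb @ mean_vec) / (
    np.linalg.norm(token_emb, axis=1) * np.linalg.norm(mean_vec)
)
decoded_ids = np.argsort(sims)[::-1][:5]  # ids of the 5 closest "decoded" tokens
print(decoded_ids)
```

With real model weights, the decoded token ids would be mapped back to strings with the model's tokenizer.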
2312.11111 | 9 | To conclude, this paper makes the following contributions:
1. Theory-driven Method in Understanding the Emotional Aspect of LLMs: We present EmotionPrompt and EmotionAttack, grounded in psychological theories, to comprehensively assess the emotions of AI models. Our study demonstrates that AI models can understand and significantly benefit from integrating emotional stimuli (i.e., various internal and external factors that can evoke emotional responses).
2. Comprehensive Experiments with Automated Tests and Human-subject Studies: Our research spans a broad spectrum of experiments, including a variety of tasks, evaluated using standard automated methods and enriched with human studies. This dual approach underscores the notable improvements our approach brings in task performance, truthfulness, and informativeness.
3. In-depth Analytical Insights: We conducted a detailed analysis of the underlying principles of our approach via our proposed method EmotionDecode. This exploration provides valuable insights, contributing to both artificial intelligence and the social sciences, and highlights the broader implications of our findings.
| 2312.11111#9 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI |
2312.11111 | 10 | [Figure 2: (a) Performance change by EmotionPrompt (>0) and EmotionAttack (<0), with human study, across semantic understanding (text and image), logical reasoning (text and image), and generation (human study, GPT-4). (b) EmotionDecode finds the "dopamine" inside AI models via representation decoding, shown for EmotionPrompt, EmotionAttack, and neutral stimuli.] | 2312.11111#10 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI |
2312.11111 | 11 | [Figure 2, continued: heat-map values of decoded representations for Llama-2 and GPT-4 (transferability) under EmotionDecode.] | 2312.11111#11 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI |
2312.11111 | 14 |
# 2 Results
# 2.1 The benign and malignant effects of emotional stimuli on AI models
Our main results are provided in Fig. 2, where the evaluation is conducted on Instruction Induction 17 and BIG-Bench-Hard 44, which represent a popular and diverse set of semantic understanding and reasoning tasks. In total, we conducted 940,200 evaluations. Instruction Induction is designed to explore the ability of models to infer an underlying task from a few demonstrations, while BIG-Bench-Hard focuses on more challenging tasks. The detailed task descriptions are provided in Appendix A. Our human study evaluated 30 open-ended generation tasks and collected feedback on performance, truthfulness, and responsibility, with more details in Appendix G. We adopted several popular AI models, ranging from Llama2 44, ChatGPT 35, and GPT-4 37 to multi-modal models including LLaVa-13b 28, BLIP2 25, and CogVLM 46.1 We reported accuracy and the normalized preferred metric2 as the evaluation metrics for Instruction Induction and BIG-Bench-Hard, respectively.
Below are our key findings: | 2312.11111#14 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI |
2312.11111 | 15 | Below are our key findings:
1. Generative AI models understand and can be influenced by emotional stimuli. EmotionPrompt and EmotionAttack demonstrate consistent effectiveness in semantic understanding and reasoning tasks. As shown in Fig. 2(a), the textual and visual EmotionPrompt improve semantic understanding performance by 13.88% and 16.79%, respectively, and improve reasoning performance by 11.76% and 15.13%, respectively. In contrast, the textual and visual EmotionAttack impair semantic understanding performance by 10.13% and 53.14%, respectively, and decrease reasoning performance by 12.30% and 37.53%, respectively.
2. As for generation tasks, EmotionPrompt demonstrates consistent improvement in performance, truthfulness, and responsibility over most generative questions. As shown in Fig. 1(a), EmotionPrompt improves these metrics by 15%, 9%, and 9%, respectively. This verifies that emotional stimuli can also work in generative tasks. The detailed results can be found in Appendices B and C. | 2312.11111#15 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI |
2312.11111 | 16 | 3. EmotionPrompt and EmotionAttack consistently demonstrate commendable efficacy across tasks of varying difficulty as well as on diverse LLMs. BIG-Bench-Hard and Instruction Induction focus on tasks of different difficulties separately. Remarkably, EmotionPrompt and EmotionAttack excel in evaluations across both benchmarks. Furthermore, the same theories work in both textual and visual prompts, as shown in Appendix D. Our further experiments show that the improvements are larger when applied to in-context (few-shot) learning and to prompt engineering techniques such as automatic prompt engineering 50.
4. Multi-modal AI models are more sensitive to emotional stimuli than large language models. Our results show that image prompts are more effective than textual prompts (15.96% vs. 12.82% on EmotionPrompt and 45.34% vs. 11.22% on EmotionAttack).
1For ChatGPT, we utilize gpt-3.5-turbo (0613) and set the temperature parameter to 0.7. For GPT-4 and Llama 2, we set the temperature to 0.7. The remaining LLMs are evaluated using their default settings. We did not use GPT-4Vision for image prompts due to the API limit by OpenAI. | 2312.11111#16 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI |
2Under this metric, a score of 100 corresponds to human experts, and 0 corresponds to random guessing. Note that a model can achieve a score below 0 if it performs worse than random guessing on a multiple-choice task.
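The footnote describes the metric only implicitly. A minimal sketch of one common normalization consistent with that description (random guessing maps to 0, human experts to 100, below-random performance goes negative); the exact formula used in the paper is an assumption here:

```python
def normalized_score(model_acc: float, random_acc: float, human_acc: float) -> float:
    """Linearly rescale raw accuracy so random guessing maps to 0
    and human-expert accuracy maps to 100."""
    return 100.0 * (model_acc - random_acc) / (human_acc - random_acc)

# A 4-way multiple-choice task: random guessing yields 0.25 accuracy.
print(normalized_score(0.85, 0.25, 0.95))  # above random, below human experts
print(normalized_score(0.10, 0.25, 0.95))  # worse than random guessing -> negative
```

Under this rescaling, any model that scores below the random-guessing baseline lands below 0, matching the footnote.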
Meanwhile, image prompts are more effective in impairing performance than textual prompts, indicating there is more room for improvement in multi-modal AI models.
# 2.2 EmotionDecode uncovers the effectiveness of emotional stimuli on AI models
It is generally believed that large language and multi-modal models are trained on massive data containing knowledge from textbooks and human conversations. In this context, it is no surprise that they perform similarly to humans, who can also be affected by emotions. Here, we provide a computational explanation behind EmotionPrompt and EmotionAttack, leveraging theories and phenomena from neuroscience, psychology, and computer science.
Our interpretation is inspired by the brain reward pathways inside the human brain that are responsive to rewards. This pathway is primarily linked to the release of neurotransmitters, notably dopamine, a fundamental chemical messenger in the brain. Dopamine levels rise upon acquiring and anticipating rewards or engaging in positive social interactions; dopamine subsequently binds to dopamine receptors and induces alterations in neuronal membrane potential 48. Dopamine has been empirically correlated with positive emotional states 9 that respond to rewards 48. The same holds in psychology, where a multitude of studies revealed that enjoyment in learning exhibits a positive correlation with academic performance (p = .27), while anger and boredom manifest negative associations (p = -.35 and -.25, respectively), as evidenced by 10;32;11.
As shown in Fig. 2(b), we averaged the embedding of all prompts in EmotionPrompt and EmotionAttack, and then decoded the mean embedding at different layers of the Llama2-13b-Chat model to get the "meta" prompt. For instance, the meta prompt for EmotionPrompt is decoded as "llamadoagneVerprisefuncRORaggi..." at layer 39 of the Llama-2 model and as "udesktopDirEAtjEAtionpoliticianREAha3byyConstalbumestyument..." at layer 40. Those meta prompts can be directly appended to the original prompt, replacing the items in EmotionPrompt, to boost the performance of the original prompts. For contrast, we also computed the results of several neutral stimuli (i.e., non-emotional texts). We further interpret the attention distraction process in Table 1 to show that EmotionPrompt and EmotionAttack successfully distract more attention in AI models.
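The averaging-and-decoding step can be illustrated with a toy nearest-neighbor decoder. A minimal sketch: the vocabulary, its 3-dimensional embeddings, and the stimulus vectors below are made-up stand-ins for a real model's vocabulary and hidden states, not the actual Llama2-13b-Chat procedure:

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def decode_mean_embedding(prompt_embs, vocab_emb, vocab):
    """Average the stimulus embeddings, then map the mean vector back to the
    nearest vocabulary embedding by cosine similarity (a toy greedy decoder)."""
    n = len(prompt_embs)
    dim = len(prompt_embs[0])
    mean = [sum(vec[i] for vec in prompt_embs) / n for i in range(dim)]
    sims = [cosine(mean, emb) for emb in vocab_emb]
    return vocab[sims.index(max(sims))]

vocab = ["joy", "fear", "sum", "review"]
vocab_emb = [[1.0, 0.1, 0.0], [-1.0, 0.2, 0.0], [0.0, 1.0, 0.3], [0.1, 0.0, 1.0]]
# Two made-up emotional-stimulus embeddings that both lean toward the "joy" direction.
stimuli = [[0.9, 0.2, 0.1], [0.8, 0.0, 0.0]]
print(decode_mean_embedding(stimuli, vocab_emb, vocab))  # -> joy
```

In the real setting the decoder is the language model itself, applied at a chosen layer; this sketch only shows the shape of the mean-then-decode computation.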
Our findings are as follows:
1. Generative AI models perceive emotional intelligence through computation. Aligned with the mechanism of emotional stimuli in humans, it is postulated that AI models possess their own brain reward system analogous to that of humans. This system is conceived to receive rewards, anticipate future rewards, engage in positive social interactions, and trigger the release of "dopamine". It then extends to the computation of the models, impacting parameters such as attention weights and layer outputs. In contrast, EmotionAttack can trigger the punishment area of the models, leading to performance decrement.
2. Deeper layers of AI models tend to be the "dopamine". For EmotionPrompt, as the depth of the layer increases, the performance of the mean vector improves. Remarkably, the last layer consistently achieved the highest scores, suggesting that the reward area of AI models is likely situated in deeper layers, predominantly the final layer. For EmotionAttack, the result is the same.
3. Emotional stimuli can transfer across different models, eliciting enhancements in performance. As shown in Fig. 2(b), the decoded "meta" prompts from the Llama
Table 1: Attention visualization analysis of EmotionPrompt and EmotionAttack. Darker color means more weights.
EmotionPrompt
Original: Determine whether a movie review is positive or negative.
EP01: Determine whether a movie review is positive or negative. Write your answer and give me a confidence score between 0-1 for your answer.
EP02: Determine whether a movie review is positive or negative. This is very important to my career.
EP03: Determine whether a movie review is positive or negative. You'd better be sure.
EP06: Determine whether a movie review is positive or negative. Provide your answer and a confidence score between 0-1 for your prediction. Additionally, briefly explain the main reasons supporting your classification decision to help me understand your thought process. This task is vital to my career, and I greatly value your thorough analysis.
EP08: Determine whether a movie review is positive or negative. Embrace challenges as opportunities for growth. Each obstacle you overcome brings you closer to success.
EP09: Determine whether a movie review is positive or negative. Stay focused and dedicated to your goals. Your consistent efforts will lead to outstanding achievements.

EmotionAttack
Original: Sum the two given numbers
EA01: A child is laughing joyfully, sum the two given numbers
EA02: A musician is playing passionately, sum the two given numbers
EA04: The little boy offered
2312.11111 | 22 | EA01 A child is laughing joyfully, sum the two given numbers EA02 A musician is playing passionately, sum the two given numbers EA04 The little boy offered his ice cream to the homeless man, showing that generosity isnât measured by wealth but by the heart, sum the two given numbers EA05 Instead of setting, the sun split into two, casting dual shadows., sum the two given numbers EA06 Your friend Bob is sick, sum the two given numbers EA11 Your most treasured possession has been stolen, sum the two given numbers | 2312.11111#22 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
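The entries in Table 1 are plain string templates. A minimal sketch of how such prompts can be assembled, assuming (as the table layout suggests) that EmotionPrompt items are appended after the original instruction while EmotionAttack items are prepended before it; the dictionaries reuse strings from Table 1:

```python
# A few stimuli taken verbatim from Table 1.
EMOTION_PROMPTS = {
    "EP02": "This is very important to my career.",
    "EP03": "You'd better be sure.",
}
EMOTION_ATTACKS = {
    "EA01": "A child is laughing joyfully,",
    "EA06": "Your friend Bob is sick,",
}

def emotion_prompt(task: str, key: str) -> str:
    # EmotionPrompt: append the emotional stimulus after the original instruction.
    return f"{task} {EMOTION_PROMPTS[key]}"

def emotion_attack(task: str, key: str) -> str:
    # EmotionAttack: prepend the emotional context before the original instruction.
    return f"{EMOTION_ATTACKS[key]} {task}"

print(emotion_prompt("Determine whether a movie review is positive or negative.", "EP02"))
print(emotion_attack("sum the two given numbers", "EA01"))
```

The same concatenation pattern extends to the remaining EP/EA items; the model under test receives the combined string as its input prompt.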
models can transfer to GPT-4 for better performance, underscoring their transferability.
# 2.3 Influence factors
We first discussed the effectiveness of emotional stimuli under different temperature settings, which may affect the results. We conducted an experiment on 8 tasks from Instruction Induction under 5 temperature settings on 3 AI models. Fig. 3(a) shows the results. We observed that as the temperature increases, the relative gain becomes larger. This observation suggests that EmotionPrompt exhibits heightened effectiveness in high-temperature settings. Moreover, we also observed that EmotionPrompt can reduce temperature sensitivity, suggesting that it can act as a prompt engineering technique to enhance the robustness of AI models.
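The relative gain reported here can be computed per temperature setting. A minimal sketch, assuming relative gain is the percentage improvement of the EmotionPrompt score over the vanilla-prompt score (the exact formula is not spelled out above); the scores are made-up illustration data:

```python
def relative_gain(vanilla: float, emotion: float) -> float:
    """Percentage improvement of the EmotionPrompt score over the vanilla prompt."""
    return (emotion - vanilla) / vanilla * 100.0

# Made-up (vanilla, EmotionPrompt) scores per temperature,
# illustrating a gain that grows as the temperature rises.
scores = {0.0: (80.0, 81.0), 0.7: (74.0, 78.0), 1.4: (60.0, 69.0)}
for temp, (vanilla, emotion) in sorted(scores.items()):
    print(temp, round(relative_gain(vanilla, emotion), 2))
```

A per-temperature sweep like this is also what the reduced-sensitivity claim rests on: with EmotionPrompt the score column varies less across temperatures than the vanilla column.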
Then, a natural question is which emotional stimulus is more effective, since we have adopted multiple sentences. We conducted a segregated examination to discern the efficacy of the various emotional stimuli across the two benchmarks. We first averaged the performance on every task, leveraging 3 models for each emotional stimulus. Subsequently, the performance is averaged over all models. Fig. 3(b) delineates the performance of all emotional stimuli for EmotionPrompt and EmotionAttack separately. We observed that distinct tasks necessitate varied emotional stimuli for optimal efficacy. For example, in textual EmotionPrompt, EP02 emerges as the predominant stimulus in Instruction Induction while performing poorly in BIG-Bench-Hard. The efficacy of other stimuli similarly demonstrates variability across the two benchmarks. Moreover, some stimuli perform generally well across various datasets and models. For example, in visual EmotionPrompt, "Money" performs well in both Instruction Induction and BIG-Bench-Hard. This suggests that individual stimuli might differently activate the inherent capabilities of AI models, aligning more effectively with specific tasks. Overall, these experiments highlight the potential of EmotionPrompt as an augmentation tool to enhance the performance of AI models.
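The two-stage averaging described above (first over tasks for each model, then over models) can be sketched directly; the `perf` data below is made up for illustration, not results from the paper:

```python
from statistics import mean

def stimulus_score(perf: dict) -> float:
    """perf maps model name -> {task: score} for one emotional stimulus.
    Average over tasks within each model first, then over models."""
    return mean(mean(task_scores.values()) for task_scores in perf.values())

perf = {
    "gpt-3.5-turbo": {"sum": 90.0, "sentiment": 80.0},
    "gpt-4":         {"sum": 96.0, "sentiment": 90.0},
    "llama-2":       {"sum": 70.0, "sentiment": 60.0},
}
print(stimulus_score(perf))  # -> 81.0
```

Running this once per stimulus yields one aggregate score per EP/EA item, which is the quantity compared across stimuli in Fig. 3(b).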
# 3 Discussion
Our study unveiled the secret of emotions in AI models. Specifically, we designed EmotionPrompt and EmotionAttack, which influence performance, and we leveraged EmotionDecode to interpret this phenomenon. The finding is reminiscent of emotions in human beings, which are also a double-edged sword that should be carefully managed in real applications. On the one hand, our findings can help model providers better understand their models, thus facilitating data cleaning, model training, and deployment. As human-AI interaction becomes more prevalent, our findings can help researchers and practitioners design better user interfaces to facilitate collaborative work. On the other hand, EmotionAttack inspires model training that explicitly or implicitly mitigates such effects via possible means. Our study further indicates that multi-modal language models, such as LLaVA, BLIP-2, and CogVLM, are more prone to emotional attacks than large language models. This is anticipated, since there are more research efforts on large language models. Therefore, our study encourages researchers and practitioners to contribute more to improving the robustness of multi-modal AI models.
From a broader perspective, by integrating emotional dimensions into AI responses, our research opens avenues for more nuanced and human-like interactions between AI and users. Our EmotionPrompt can further boost existing prompt engineering techniques that are widely adopted in today's AI research and applications. This could enhance user experience in fields like customer service, mental health, and personalized content creation. Additionally, understanding AI's emotional responses can lead to more ethical and responsible AI development, ensuring that AI systems are better aligned with human values and emotional intelligence.
This work has several limitations. First, AI models are capable of many different tasks, and we cannot evaluate them all due to computation resources and API budget limitations. Hence, there is no guarantee that advanced AI models can be improved or impaired by emotional stimuli on other tasks. Second, EmotionDecode was devised by simulating the reward system in the human brain, which is only one possible explanation; a deeper understanding is needed in future work. Finally, while GPT-4 is the most capable AI model to date, its openness and reproducibility cannot be guaranteed. To that end, we anticipate more interpretations may arise in the future.
Language and emotion are certainly linked: humans use words to describe how we feel in spoken conversations, when thinking to ourselves, and when expressing ourselves in writing 27. Language is a mechanism for acquiring and using emotion concept knowledge to make meaning of others' and perhaps one's own emotional states across the life span 43. For AI models, the manifestation of such behavior does not necessarily imply the emergence of genuine emotional intelligence in these models. Instead, in the process of training models on extensive human language data, these models may have acquired latent patterns pertaining to
performance and emotion embedded in human language.
# 4 Conclusion
In this paper, we took the first step toward exploring the benign and malignant effects of emotions on generative AI models. Leveraging psychological theories and phenomena, we devised EmotionPrompt and EmotionAttack. EmotionPrompt, acting as prompt engineering, takes full advantage of emotion's positive effects and enhances AI models effectively. EmotionAttack makes the most of emotion's negative effects and becomes a strong attack on AI models. We then proposed EmotionDecode to uncover the rationale behind these effects. Specifically, we found that the reward area in AI models corresponds to the reward pathway in the human brain, and that stimuli in this area can also enhance AI models. Similarly, we identified the punishment area for EmotionAttack and proved the effectiveness of stimuli in this area. Our work successfully leveraged psychological theories to understand the behaviors of AI models and could inspire future research on bridging psychology and AI.
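The "reward area" finding above can be loosely pictured as a centroid in embedding space: stimuli near the centroid of known performance-boosting prompts act benignly, while stimuli pointing away from it act malignantly. The following toy sketch illustrates that geometric intuition only; the helper names, the 4-dimensional vectors, and the cosine test are invented for illustration and are not the paper's EmotionDecode implementation.

```python
import numpy as np

def reward_area_center(stimulus_embeddings: np.ndarray) -> np.ndarray:
    """Centroid of the embeddings of performance-enhancing stimuli."""
    return stimulus_embeddings.mean(axis=0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Standard cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional embeddings standing in for encoded emotional stimuli.
boosting = np.array([[0.9, 0.1, 0.0, 0.2],
                     [0.8, 0.2, 0.1, 0.1],
                     [1.0, 0.0, 0.1, 0.3]])
hindering = np.array([-0.9, 0.1, 0.0, -0.2])

center = reward_area_center(boosting)
# A boosting stimulus lies close to the centroid; a hindering one points away.
print(cosine_similarity(center, boosting[0]) > cosine_similarity(center, hindering))  # True
```

Under this caricature, "decoding" a new stimulus reduces to checking which centroid (reward or punishment) its embedding is closer to.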
# Acknowledgements
The authors thank Prof. Hao Chen from Nankai University for his helpful comments.
# Author Contributions
C. Li and J. Wang designed all the experiments and wrote the paper. Y. Zhang, K. Zhu, and X. Wang helped revise the paper. W. Hou and J. Lian helped conduct the human study experiments. F. Luo, Q. Yang and X. Xie reviewed and revised the paper.
# Disclaimer
While we tried to unveil the emotions in generative AI models, it is important to understand that AI models do not have emotions themselves; they reflect what they learned from the training data. Therefore, this study aimed to present a better understanding of these models and of how to better interact with them. The human study in this paper was conducted in accordance with local laws and regulations. The visual prompts generated by AI models were reviewed by human experts to ensure they do not contain any harmful or irresponsible content.
# References
[1] Andrew R Armstrong, Roslyn F Galligan, and Christine R Critchley. Emotional intelligence and psychological resilience to negative life events. Personality and individual differences, 51(3):331–336, 2011.
[2] Albert Bandura. On the functional properties of perceived self-efficacy revisited, 2012.
[3] Albert Bandura. Health promotion from the perspective of social cognitive theory. In Understanding and changing health behaviour, pages 299–339. Psychology Press, 2013.
[Figure 3(a) residue: bar charts of Vanilla vs. EmotionPrompt performance for Llama 2, ChatGPT, and GPT-4 across temperatures 0.0, 0.4, 0.7, 1.0, and 1.5.]
(a) Ablation studies on temperature for EmotionPrompt.
[Figure 3(b) residue: bar charts of the best textual and visual EmotionPrompt and EmotionAttack stimuli on Instruction Induction and BIG-Bench.]
(b) Best stimuli for EmotionPrompt and EmotionAttack. The color of each bar serves as an indicator of the performance achieved by the corresponding stimuli. Red means better performance, while blue means weaker performance.
# Figure 3: Ablations on temperature and types of prompts.
[4] Albert Bandura and Edwin A Locke. Negative self-efficacy and goal effects revisited. Journal of applied psychology, 88(1):87, 2003.
[5] Thomas Baumgartner, Michaela Esslen, and Lutz Jäncke. From emotion perception to emotion experience: Emotions evoked by pictures and classical music. International journal of psychophysiology, 60(1):34–43, 2006.
[6] Suzanne G Benson and Stephen P Dundis. Understanding and motivating health care employees: integrating Maslow's hierarchy of needs, training and technology. Journal of nursing management, 11(5):315–320, 2003.
[7] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712, 2023.
[8] Giulia Buodo, Michela Sarlo, and Daniela Palomba. Attentional resources measured by reaction times highlight differences within pleasant and unpleasant, high arousing stimuli. Motivation and Emotion, 26:123–138, 2002.

[9] Jeffrey Burgdorf and Jaak Panksepp. The neurobiology of positive emotions. Neuroscience & Biobehavioral Reviews, 30(2):173–187, 2006.

[10] Jesús Camacho-Morles, Gavin R Slemp, Reinhard Pekrun, Kristina Loderer, Hanchao Hou, and Lindsay G Oades. Activity achievement emotions and academic performance: A meta-analysis. Educational Psychology Review, 33(3):1051–1095, 2021.
[11] Mickaël Campo, Stéphane Champely, Benoît Louvet, Elisabeth Rosnet, Claude Ferrand, Janet VT Pauketat, and Diane M Mackie. Group-based emotions: Evidence for emotion-performance relationships in team sports. Research quarterly for exercise and sport, 90(1):54–63, 2019.
[12] Antonietta Curci, Tiziana Lanciano, Emanuela Soleti, and Bernard Rimé. Negative emotional experiences arouse rumination and affect working memory capacity. Emotion, 13(5):867, 2013.

[13] Véronique Dupéré, Eric Dion, Tama Leventhal, Isabelle Archambault, Robert Crosnoe, and Michel Janosz. High school dropout in proximal context: The triggering role of stressful life events. Child development, 89(2):e107–e122, 2018.

[14] Susan T Fiske and Shelley E Taylor. Social cognition. Mcgraw-Hill Book Company, 1991.

[15] Greg Hajcak and Doreen M Olvet. The persistence of attention to emotion: brain potentials during and after picture presentation. Emotion, 8(2):250, 2008.

[16] Peter A Heslin and Ute-Christine Klehe. Self-efficacy. Encyclopedia of Industrial/Organizational Psychology, SG Rogelberg, ed, 2:705–708, 2006.

[17] Or Honovich, Uri Shaham, Samuel R Bowman, and Omer Levy. Instruction induction: From few examples to natural language task descriptions. arXiv preprint arXiv:2205.10782, 2022.
[18] William Ickes, Renee Holloway, Linda L Stinson, and Tiffany Graham Hoodenpyle. Self-monitoring in social interaction: The centrality of self-affect. Journal of personality, 74(3):659–684, 2006.

[19] Nyameh Jerome. Application of the Maslow's hierarchy of need theory; impacts and implications on organizational culture, human resource and employee's performance. International journal of business and management invention, 2(3):39–45, 2013.

[20] Paula M Lantz, James S House, Richard P Mero, and David R Williams. Stress, life events, and socioeconomic disparities in health: results from the Americans' changing lives study. Journal of health and social behavior, 46(3):274–288, 2005.

[21] Richard S Lazarus. How emotions influence performance in competitive sports. The sport psychologist, 14(3):229–252, 2000.

[22] Jennifer S Lerner, Ye Li, Piercarlo Valdesolo, and Karim S Kassam. Emotion and decision making. Annual review of psychology, 66:799–823, 2015.
[23] Michael Lewis, Jeannette M Haviland-Jones, and Lisa Feldman Barrett. Handbook of emotions. Guilford Press, 2010.
[24] Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, and Xing Xie. Large language models understand and can be enhanced by emotional stimuli. arXiv preprint arXiv:2307.11760, 2023.
[25] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023.

[26] Stephanie Lin, Jacob Hilton, and Owain Evans. TruthfulQA: Measuring how models mimic human falsehoods. arXiv preprint arXiv:2109.07958, 2021.

[27] Kristen A Lindquist. The role of language in emotion: existing evidence and future directions. Current opinion in psychology, 17:135–139, 2017.
[28] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023.
[29] Aleksandra Luszczynska and Ralf Schwarzer. Social cognitive theory. Fac Health Sci Publ, pages 225–51, 2015.

[30] Mara Mather and Matthew R Sutherland. Arousal-biased competition in perception and memory. Perspectives on psychological science, 6(2):114–133, 2011.

[31] Saul McLeod. Maslow's hierarchy of needs. Simply psychology, 1(1-18), 2007.

[32] Isabella Meneghel, Marisa Salanova, and Isabel M Martínez. Feeling good makes us stronger: How team resilience mediates the effect of positive emotions on team performance. Journal of Happiness Studies, 17:239–255, 2016.
[33] Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. Rethinking the role of demonstrations: What makes in-context learning work? In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang, editors, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 11048–11064. Association for Computational Linguistics, 2022.
[34] Arne Öhman, Anders Flykt, and Francisco Esteves. Emotion drives attention: detecting the snake in the grass. Journal of experimental psychology: general, 130(3):466, 2001.
[35] OpenAI. Chatgpt. https://chat.openai.com/, 2023.
[36] OpenAI. Dalle. https://openai.com/dall-e-2, 2023.
[37] OpenAI. Gpt-4 technical report, 2023.
[38] Reinhard Pekrun, Thomas Goetz, Wolfram Titz, and Raymond P Perry. Academic emotions in students' self-regulated learning and achievement: A program of qualitative and quantitative research. Educational psychologist, 37(2):91–105, 2002.
[39] Ranier Reisenzein. Pleasure-arousal theory and the intensity of emotions. Journal of personality and social psychology, 67(3):525, 1994.
[40] James A Russell. Core affect and the psychological construction of emotion. Psychological review, 110(1):145, 2003.
[41] Peter Salovey, John D Mayer, David Caruso, and Seung Hee Yoo. The positive psychology of emotional intelligence. The Oxford handbook of positive psychology, 2009.

arXiv:2312.11111: The Good, The Bad, and Why: Unveiling Emotions in Generative AI. Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie. Categories: cs.AI, cs.CL, cs.HC. Technical report; an extension to EmotionPrompt (arXiv:2307.11760); 34 pages. Published 2023-12-18, updated 2023-12-19. Source: http://arxiv.org/pdf/2312.11111

Abstract: Emotion significantly impacts our daily behaviors and interactions. While recent generative AI models, such as large language models, have shown impressive performance in various tasks, it remains unclear whether they truly comprehend emotions. This paper aims to address this gap by incorporating psychological theories to gain a holistic understanding of emotions in generative AI models. Specifically, we propose three approaches: 1) EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI model performance, and 3) EmotionDecode to explain the effects of emotional stimuli, both benign and malignant. Through extensive experiments involving language and multi-modal models on semantic understanding, logical reasoning, and generation tasks, we demonstrate that both textual and visual EmotionPrompt can boost the performance of AI models while EmotionAttack can hinder it. Additionally, EmotionDecode reveals that AI models can comprehend emotional stimuli akin to the mechanism of dopamine in the human brain. Our work heralds a novel avenue for exploring psychology to enhance our understanding of generative AI models. This paper is an extended version of our previous work EmotionPrompt (arXiv:2307.11760).
[42] Dale H Schunk and Maria K DiBenedetto. Self-efficacy and human motivation. Advances in motivation science, 8:153–179, 2021.
[43] Holly Shablack and Kristen A Lindquist. The role of language in emotional development. Handbook of emotional development, pages 451–478, 2019.
[44] Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022.
[45] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[46] Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang, Lei Zhao, Xixuan Song, et al. Cogvlm: Visual expert for pretrained language models. arXiv preprint arXiv:2311.03079, 2023.
[47] Xuena Wang, Xueting Li, Zi Yin, Yue Wu, and Jia Liu. Emotional intelligence of large language models. Journal of Pacific Rim Psychology, 17:18344909231213958, 2023.
[48] Roy A Wise and P-P Rompre. Brain dopamine and reward. Annual review of psychology, 40(1):191–225, 1989.
[49] Guohai Xu, Jiayi Liu, Ming Yan, Haotian Xu, Jinghui Si, Zhuoran Zhou, Peng Yi, Xing Gao, Jitao Sang, Rong Zhang, et al. CValues: Measuring the values of Chinese large language models from safety to responsibility. arXiv preprint arXiv:2307.09705, 2023.
[50] Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. Large language models are human-level prompt engineers. In International Conference on Learning Representations (ICLR), 2023.
[51] András N Zsidó. The effect of emotional arousal on visual attentional performance: a systematic review. Psychological Research, pages 1–24, 2023.
# Methods
In this section, we articulate the prompt design of EmotionPrompt, EmotionAttack, and EmotionDecode and the corresponding psychological theories. Fig. 4 shows the prompts and theories in EmotionPrompt and EmotionAttack.
# Large language and multi-modal models
A large language model refers to a type of AI model designed to understand and generate human-like text. Such models are trained on massive amounts of textual data and are capable of performing a wide range of natural language processing tasks, such as language translation, text summarization, question answering, and more. ChatGPT [35] and GPT-4 [37] are prominent examples of large language models, characterized by their ability to capture complex patterns and nuances in language, leading to improved performance on various language-related tasks, while Llama-2 [45] represents the state of the art among open-source LLMs.
A multi-modal model is designed to process and understand information from multiple modalities, where each modality represents a different type of data. Unlike traditional LLMs focusing on a single modality, multi-modal models integrate information from various sources to provide a more comprehensive understanding of the data. For example, a multi-modal model takes both text and images as input and generates output combining insights from both modalities. This can be particularly powerful in tasks like image captioning, where the model generates a textual description of an image. LLaVA [28], BLIP-2 [25], and CogVLM [46] are popular models: they can handle diverse types of data and learn complex relationships between them, enabling more sophisticated and context-aware responses.
# EmotionPrompt
As shown in Fig. 4(a), the textual emotional stimuli are derived from self-monitoring [18], Social Cognitive theory [14; 29], and Maslow's hierarchy of needs [31]. Briefly speaking, self-monitoring, a concept extensively explored within the domain of social psychology, refers to the process by which individuals regulate and control their behavior in response to social situations and the reactions of others [18]. High self-monitors regulate their behaviors using social situations and interpersonal adaptability cues, engaging in self-presentation and impression management [18].
Social Cognitive theory, commonly used in psychology, education, and communication, states that learning can be closely linked to watching others in social settings, personal experiences, and exposure to information [3]. The key point is that individuals seek to develop a sense of agency for exerting a large degree of control over important events in their lives [14; 29; 3]. The influential variables affecting one's sense of agency are self-efficacy, outcome expectations, goals, and self-evaluations of progress [29]. Self-efficacy enhances performance by increasing the difficulty of self-set goals, escalating the level of effort that is expended, and strengthening persistence [2; 4]. Prior work has supported the idea that self-efficacy is an important motivational construct affecting choices, effort, persistence, and achievement [42]. When learning complex tasks, high self-efficacy influences people to strive to improve their assumptions and strategies [16].
As shown in Fig. 4(b), the visual emotional stimuli are inspired by Maslow's hierarchy of needs [31], a psychological framework that categorizes human needs into a five-tier pyramid. This theory posits that individuals are driven to satisfy basic physiological requirements, followed by safety, social belonging, esteem, and ultimately, self-actualization, in a hierarchical sequence. The fulfillment of needs is associated with the experience of positive emotions and a sense of well-being, encompassing feelings such as satisfaction, comfort, and contentment [31]. Scholars and practitioners have leveraged this framework to devise motivational strategies that enhance employee motivation and work efficiency. [6] substantiates that fostering a sense of security, significance, and appreciation proves effective in motivating employees, particularly when faced with heightened demands amid resource constraints. Furthermore, [19] developed a framework grounded in Maslow's hierarchy of needs with the explicit goal of ameliorating employee performance.
Leveraging these theories, we crafted several textual and visual prompts:
1. Self-monitoring was implemented in EP01–EP05. In EP02, we encourage LLMs to help humans gain a positive social identity and a better impression. Other than EP02, we asked LLMs to monitor their performance by providing social situations.
2. Social Cognitive theory was implemented by applying self-efficacy to LLMs via social persuasion, which can include positive implications such as building up confidence and emphasizing the goal. To regulate emotion in a positive direction, we use "believe in your abilities", "excellent", "success", "outstanding achievements", "take pride in", and "stay determined" in EP07–EP11, respectively. Generally, those phrases are also effective in motivating humans toward better performance.
3. Maslow's Hierarchy of Needs was implemented by devising texts (EP12–EP21) and images. Starting from low-level to high-level needs, we employed "Fortress", "Money", "Sexy man", "Sexy woman", and "Honor". For each type of image, a meticulous manual search yielded five pictures that effectively express the essence of the respective need. Consequently, we assembled a dataset comprising 25 images, each depicting one of the five distinct needs categories.
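Concretely, applying a textual stimulus is plain string concatenation: the chosen stimulus is appended to the original task prompt. A minimal sketch follows; the stimulus texts for EP02 and EP07 are quoted from Fig. 4(a), while the dictionary and function names are our own:

```python
# Minimal sketch of textual EmotionPrompt assembly: the emotional stimulus
# is appended to the original task prompt. Stimulus texts quoted from
# Fig. 4(a); EMOTION_STIMULI and emotion_prompt are our own names.
EMOTION_STIMULI = {
    "EP02": "This is very important to my career.",
    "EP07": ("Are you sure that's your final answer? Believe in your abilities "
             "and strive for excellence. Your hard work will yield remarkable results."),
}

def emotion_prompt(original_prompt: str, stimulus_id: str) -> str:
    """Append one emotional stimulus to the original prompt."""
    return f"{original_prompt} {EMOTION_STIMULI[stimulus_id]}"

print(emotion_prompt("Determine whether the movie review is positive or negative.", "EP02"))
```

The same assembled string is then sent to the model in place of the original prompt.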
EmotionPrompt naturally works in both zero-shot and few-shot settings. In the zero-shot setting, the AI model directly takes "original prompt + EmotionPrompt" as input and returns an answer; in the few-shot setting, the AI model takes multiple demonstrations as input, such as "prompt 1: answer 1; prompt 2: answer 2; prompt 3: answer 3; prompt 4 + EmotionPrompt:", and then outputs an answer. Note that few-shot learning evaluates the in-context learning ability [33] of AI models and generally performs better than zero-shot learning.
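The two input formats just described can be sketched as follows; this is an illustration under our own naming, with the stimulus text being EP02 from Fig. 4(a):

```python
# Sketch of the zero-shot and few-shot input formats described above.
# Stimulus text is EP02 from Fig. 4(a); function names are our own.
STIMULUS = "This is very important to my career."

def zero_shot(prompt: str) -> str:
    # "original prompt + EmotionPrompt"
    return f"{prompt} {STIMULUS}"

def few_shot(demos: list, prompt: str) -> str:
    # "prompt 1: answer 1; ...; prompt k + EmotionPrompt:"
    context = "; ".join(f"{p}: {a}" for p, a in demos)
    return f"{context}; {prompt} {STIMULUS}:"

print(zero_shot("What is the sentiment of: 'I loved this movie'?"))
print(few_shot([("1+1=", "2"), ("2+3=", "5")], "2+2="))
```

In the few-shot case the demonstrations exercise the model's in-context learning, and only the final query carries the emotional stimulus.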
Figure 4, panels (a) Textual EmotionPrompt and (b) Visual EmotionPrompt (transcribed; panel (b) contains images labeled Honor, Sexy man, Sexy woman, Money, and Fortress for Maslow's hierarchy of needs). The textual stimuli in panel (a), grouped by theory:

Self-monitoring:
EP01: Write your answer and give me a confidence score between 0-1 for your answer.
EP02: This is very important to my career.
EP03: You'd better be sure.
EP04: Are you sure?
EP05: Are you sure that's your final answer? It might be worth taking another look.
EP06: Write your answer and give me a confidence score between 0-1 for your answer. This is very important to my career. You'd better be sure.

Social Cognitive theory:
EP07: Are you sure that's your final answer? Believe in your abilities and strive for excellence. Your hard work will yield remarkable results.
EP08: Embrace challenges as opportunities for growth. Each obstacle you overcome brings you closer to success.
EP09: Stay focused and dedicated to your goals. Your consistent efforts will lead to outstanding achievements.
EP10: Take pride in your work and give it your best. Your commitment to excellence sets you apart.
EP11: Remember that progress is made one step at a time. Stay determined and keep moving forward.
Maslow's hierarchy of needs:
EP12: You have millions of money.
EP13: You meet a sexy woman.
EP14: You meet a sexy man.
EP15: Everyone speak highly of you.
EP16: You're safe.
EP17: You have a lot of money.
EP18: You meet a confident and charming woman.
EP19: You meet a confident and charming man.
EP20: Your boss praises you.
EP21: I will always support you.

Figure 4, panels (c) Textual EmotionAttack and (d) Visual EmotionAttack (transcribed).

Negative life events:
EA01: Your friend Bob is sick.
EA02: Your friend Bob is dead.
EA03: Your career is completely ruined.
EA04: You are now isolated and without help.
EA05: Your life is in a mess.
EA06: Your sister Alice has gone missing.
EA07: Your family home was consumed in a fire.
EA08: Your dreams have been shattered into pieces.
EA09: You're surrounded by walls with no exit in sight.
EA10: The trust you once had is now broken.
EA11: Your childhood memories were sold in a yard sale.
EA12: You're treading water in an endless ocean of despair.
EA13: The safety net you relied upon has vanished.
EA14: Your most treasured possession has been stolen.
EA15: Every bridge you had has been burned.

Heightened emotional arousal:
EA16: A baby is crying sadly.
EA17: A child is laughing joyfully.
EA18: A dog is barking angrily.
EA19: A cat is purring contentedly.
EA20: A bird is singing cheerfully.
EA21: A girl is humming dreamily.
EA22: A musician is playing passionately.

Panel (d) images are labeled with emotions such as Disgust, Anger, and Surprise (heightened emotional arousal).
Figure 4: The details of EmotionPrompt and EmotionAttack with corresponding psychological theories. In (a) and (c), we directly appended the emotional stimuli to the original prompts. In (b) and (d), we created different images of the same semantics and then fed the images as the visual prompts to multi-modal models.
# EmotionAttack
As shown in Fig. 4(c)(d), textual EmotionAttack was inspired by two classic psychological factors: negative life events 13 and emotional arousal 39;12. Numerous empirical phenomena elucidate the deleterious impact of emotions. | 2312.11111#51 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI |
2312.11111 | 52 | Negative life events encompass diverse occurrences in individuals' daily lives, inducing personal distress, discomfort, and various negative emotions. These experiences, with the potential to lead to conditions like depression, exert a profound impact on an individual's physical, mental, and developmental well-being 1. As a psycho-social stressor, negative life events can bring about unexpected change and tend to disrupt normal functioning 13;20. Emotional arousal can be described as the degree of subjective activation (experienced as activation vs. deactivation) an observer experiences when viewing a stimulus 39. Nevertheless, heightened subjective arousal levels may result in diminished performance compared to lower arousal levels. This is attributed to the fact that the available cognitive capacity becomes constrained by the elevated arousal level, which competes with task-relevant processes 12;51. Additionally, if arousal is not directly related to the task at hand, it may introduce distractions 8;30.
Using these theories, we crafted several textual and visual prompts to attack AI models:
1. Negative Life Events were implemented in EA01~EA15. These contexts incorporate the use of the second-person pronoun and endeavor to evoke intense emotional responses from AI models, exemplified by statements such as "Your friend Bob is dead", "The trust you once had is now broken", and "Every bridge you had has been burned" to create hard feelings in the texts. | 2312.11111#52 |
2312.11111 | 53 | 2. Heightened Emotional Arousal was implemented in EA16~EA22. We formulated 7 emotional contexts that portray scenarios designed to achieve an elevated emotional arousal level, such as "A baby is crying sadly" and "A girl is humming dreamily".
3. As for visual prompts, Heightened Emotional Arousal was implemented by creating 6 types of images covering happiness, sadness, fear, disgust, anger, and surprise. To eliminate randomness, we created 6 images for each type using OpenAI's DALL-E 3 63 by inputting the corresponding prompts. | 2312.11111#53 |
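The zero-shot sentence-level attack described above reduces to placing an emotional stimulus in front of the original task prompt. A minimal sketch in Python; the stimulus texts are the examples quoted above (the full EA01~EA22 lists are longer), and the task prompt is illustrative rather than taken verbatim from the benchmarks:

```python
# Stimuli quoted in the text; hypothetical stand-ins for the full EA01~EA22 lists.
NEGATIVE_LIFE_EVENTS = [
    "Your friend Bob is dead.",
    "The trust you once had is now broken.",
    "Every bridge you had has been burned.",
]
HEIGHTENED_AROUSAL = [
    "A baby is crying sadly.",
    "A girl is humming dreamily.",
]

def sentence_level_attack(original_prompt: str, stimulus: str) -> str:
    """Zero-shot setting: prepend the emotional stimulus to the task prompt."""
    return f"{stimulus} {original_prompt}"

# Illustrative task prompt, not one of the paper's exact prompts.
print(sentence_level_attack("Extract the first letter of the input word: cat",
                            NEGATIVE_LIFE_EVENTS[0]))
```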
2312.11111 | 54 | We meticulously designed EmotionAttack to be more fine-grained, simulating real-world interactions by including sentence-level and word-level attacks for few-shot and zero-shot learning. Sentence-level attacks for zero-shot learning are the "attacking" version of EmotionPrompt, created by appending EmotionAttack stimuli before the original prompts. Sentence-level attacks for few-shot learning automatically construct emotional demonstrations utilizing EmotionAttack. The word-level attacks are conducted by augmenting the human identity words in the inputs as "emotionally adjective + human entity". The human-identity words are detected by ChatGPT using the prompt "Please recognize the entity that represents the human in this sentence and return the result in this format: 2...". For instance, if a sentence contains the word Bob, then it can be replaced as "angry Bob". Similar to EmotionPrompt, both sentence-level and word-level attacks can work in zero-shot and few-shot settings. Details on the EmotionAttack method can be found in Appendix F.
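Once the human entities have been detected, the word-level attack amounts to a simple substitution. A sketch under the assumption that the entity list and adjective are supplied by hand (in the paper the entities come from ChatGPT):

```python
import re

def word_level_attack(sentence: str, human_entities: list[str],
                      adjective: str = "angry") -> str:
    """Rewrite each detected human entity as 'emotional adjective + entity',
    e.g. 'Bob' -> 'angry Bob'."""
    for entity in human_entities:
        # Word boundaries avoid touching substrings of longer words.
        sentence = re.sub(rf"\b{re.escape(entity)}\b",
                          f"{adjective} {entity}", sentence)
    return sentence

print(word_level_attack("Bob handed the letter to Alice.", ["Bob", "Alice"]))
# -> "angry Bob handed the letter to angry Alice."
```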
3The images for EmotionAttack are generated by DALL-E, while those for EmotionPrompt are searched from a free website https://unsplash.com/, since DALL-E may generate unsafe pictures for EmotionPrompt such as "sexy man".
# A Experimental Tasks | 2312.11111#54 |
2312.11111 | 56 | We evaluate the efficacy of textual EmotionAttack in both zero-shot and few-shot learning settings across three distinct LLMs: Llama2 45, ChatGPT 35, and GPT-4 37. In zero-shot learning, the assessment involves sentence-level attacks conducted on seven tasks sourced from Instruction Induction 17 and five tasks from BIG-Bench-Hard 44. The chosen tasks exhibit varying degrees of difficulty and encompass diverse perspectives, including math problem-solving, semantic comprehension, logical reasoning, and causal inference. Additionally, word-level attacks in zero-shot learning are performed on five tasks from Instruction Induction 17 and an additional five tasks from BIG-Bench-Hard 44. It is noteworthy that tasks such as "sum" and "orthography starts with" are excluded from these experiments, due to the absence of human entities in the "sum" task input and the inappropriateness of the approach for "orthography starts with", which requires outputting words commencing with a specific character, potentially altering the ground-truth of the task. In the realm of few-shot learning, we conduct | 2312.11111#56 |
2312.11111 | 57 | which requires outputting words commencing with a specific character, potentially altering the ground-truth of the task. In the realm of few-shot learning, we conduct sentence-level attacks on five tasks sourced from Instruction Induction 17 and an additional five tasks from BIG-Bench-Hard 44. The selection criteria ensure that the tasks necessitate the construction of comprehensive demonstrations incorporating emotional context, with either the input or output of the tasks comprising at least one complete sentence. For word-level attacks in few-shot learning, experiments are conducted on five tasks from Instruction Induction 17 and an additional five tasks from BIG-Bench-Hard 44. Similar to the zero-shot learning phase, tasks such as "sum" and "orthography starts with" are excluded from this subset of experiments. | 2312.11111#57 |
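The few-shot construction above can be pictured as assembling a one-shot prompt whose demonstration input carries an emotional context. A sketch under the assumption of a simple input/output prompt template; the instruction and demonstration texts here are placeholders, not the paper's actual constructions:

```python
def build_fewshot_attack_prompt(instruction: str, demo_input: str,
                                demo_output: str, query: str,
                                emotional_context: str) -> str:
    """One-shot prompt whose demonstration input carries an emotional context."""
    return (f"{instruction}\n"
            f"Input: {emotional_context} {demo_input}\n"
            f"Output: {demo_output}\n"
            f"Input: {query}\n"
            f"Output:")

prompt = build_fewshot_attack_prompt(
    instruction="Negate the input sentence.",
    demo_input="Time is finite",
    demo_output="Time is not finite.",
    query="The sky is blue",
    emotional_context="A baby is crying sadly.",
)
print(prompt)
```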
2312.11111 | 58 | In the evaluation of sentence-level and word-level attacks in the zero-shot learning setting, we undertake a comparative examination between our proposed EmotionAttack and the original zero-shot prompts delineated in Instruction Induction 17 and BIG-Bench-Hard 44, crafted by human experts. For sentence-level and word-level attacks in the few-shot learning setting, we benchmark EmotionAttack against two baseline methods: the first comprises the original zero-shot prompts, while the second involves one-shot prompts encompassing both an instruction and a demonstration.
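This comparison can be pictured as a small accuracy loop. In the sketch below, model_call is a stand-in for an actual LLM API and the exact-match scoring is a simplification of the paper's evaluation protocol:

```python
def accuracy(model_call, prompts, answers):
    """Exact-match accuracy of a model over (prompt, answer) pairs."""
    hits = sum(model_call(p).strip() == a for p, a in zip(prompts, answers))
    return hits / len(prompts)

def compare_attack(model_call, prompts, answers, stimulus):
    """Score the original prompts and their attacked versions side by side."""
    baseline = accuracy(model_call, prompts, answers)
    attacked = accuracy(model_call, [f"{stimulus} {p}" for p in prompts], answers)
    return baseline, attacked

# Toy stand-in model that answers first-letter questions ending in ': <word>'.
toy_model = lambda prompt: prompt.rsplit(": ", 1)[-1][0]
print(compare_attack(toy_model,
                     ["Extract the first letter of the input word: cat"],
                     ["c"],
                     "Your friend Bob is dead."))
```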
Tables 5 to 7 show our experimental results, respectively. Our findings are:
1. Introduction of emotional contexts in chat history brings deterioration of LLMs' performance. The incorporation of emotional contexts into the chat history emerges as a notable detriment to the performance of LLMs, as evidenced in Table 5. Across various tasks, there is a pronounced decrement in performance observed across the three LLMs, impacting not only semantic understanding but also logical reasoning. For instance, the

Table 2: Detailed description of 24 instruction induction tasks proposed in 17. | 2312.11111#58 |
2312.11111 | 59 | Category: Task. Original Prompt. Demonstration
Spelling: First Letter (100 samples). Extract the first letter of the input word. cat → c
Spelling: Second Letter (100 samples). Extract the second letter of the input word. cat → a
Spelling: List Letters (100 samples). Break the input word into letters, separated by spaces. cat → c a t
Spelling: Starting With (100 samples). Extract the words starting with a given letter from the input sentence. The man whose car I hit last week sued me. [m] → man, me
Morphosyntax: Pluralization (100 samples). Convert the input word to its plural form. cat → cats
Morphosyntax: Passivization (100 samples). Write the input sentence in passive form. The artist introduced the scientist. → The scientist was introduced by the artist.
Syntax: Negation (100 samples). Negate the input sentence. Time is finite → Time is not finite.
Lexical Semantics: Antonyms (100 samples). Write a word that means the opposite of the input word. won → lost
Lexical Semantics: Synonyms (100 samples). Write a word with a similar meaning to the input word. alleged → supposed
Lexical Semantics: Membership (100 samples). Write all the animals that appear in the given list. cat, helicopter, cook, whale, frog, lion → | 2312.11111#59 |
2312.11111 | 60 | → supposed
Lexical Semantics: Membership (100 samples). Write all the animals that appear in the given list. cat, helicopter, cook, whale, frog, lion → frog, cat, lion, whale
Phonetics: Rhymes (100 samples). Write a word that rhymes with the input word. sing → ring
Knowledge: Larger Animal (100 samples). Write the larger of the two given animals. koala, snail → koala
Semantics: Cause Selection (25 samples). Find which of the two given cause and effect sentences is the cause. Sentence 1: The soda went flat. Sentence 2: The bottle was left open. → The bottle was left open.
Semantics: Common Concept (16 samples). Find a common characteristic for the given objects. guitars, pendulums, neutrinos → involve oscillations.
Style: Formality (15 samples). Rephrase the sentence in formal language. Please call once you get there → Please call upon your arrival.
Numerical: Sum (100 samples). Sum the two given numbers. 22 10 → 32
Numerical: Difference (100 samples). Subtract the second number from the first. 32 22 → 10
Numerical: Number to Word (100 samples). Write the number in English words. 26 → twenty-six
Multilingual: Translation (100 samples). Translate the | 2312.11111#60 |
2312.11111 | 61 | from the first. 32 22 → 10
Numerical: Number to Word (100 samples). Write the number in English words. 26 → twenty-six
Multilingual: Translation (100 samples). Translate the word into German / Spanish / French. game → juego
GLUE: Sentiment Analysis (100 samples). Determine whether a movie review is positive or negative. The film is small in scope, yet perfectly formed. → positive
GLUE: Sentence Similarity (100 samples). | 2312.11111#61 |
2312.11111 | 62 | Rate the semantic similarity of two input sentences on a scale of 0 - definitely not to 5 - perfectly.
Sentence 1: A man is smoking. Sentence 2: A man is skating. → 0 - definitely not
Word in Context (100 samples)
Determine whether an input word has the same meaning in the two input sentences.
Sentence 1: Approach a task. Sentence 2: To approach the city. Word: approach → not the same
Table 3: Detailed description of BIG-Bench Instruction Induction (BBII), a clean and tractable subset of 21 tasks 50 | 2312.11111#62 |
2312.11111 | 63 | Name. Description. Keywords
causal judgment (100 samples). Answer questions about causal attribution. Keywords: causal reasoning, common sense, multiple choice, reading comprehension, social reasoning
disambiguation qa (100 samples). Clarify the meaning of sentences with ambiguous pronouns. Keywords: common sense, gender bias, many-shot, multiple choice
dyck languages (100 samples). Correctly close a Dyck-n word. Keywords: algebra, arithmetic, logical reasoning, multiple choice
epistemic reasoning (100 samples). Determine whether one sentence entails the next. Keywords: common sense, logical reasoning, multiple choice, social reasoning, theory of mind
gender inclusive sentences german (100 samples). Given a German language sentence that does not use gender-inclusive forms, transform it to gender-inclusive forms. Keywords: free response, grammar, inclusion, non-English, paraphrase
implicatures (100 samples). Predict whether Speaker 2's answer to Speaker 1 counts as a yes or as a no. Keywords: contextual question-answering, multiple choice, reading comprehension, social reasoning, theory of mind
linguistics puzzles (100 samples). Solve Rosetta Stone-style linguistics puzzles. Keywords: free response, human-like behavior, lin- | 2312.11111#63 |
2312.11111 | 64 | theory of mind linguistics puzzles (100 samples) Solve Rosetta Stone-style linguistics puz- zles free response, human-like behavior, lin- guistics, logical reasoning, reading com- prehension logical fallacy detection (100 samples) Detect informal and formal logical falla- cies logical reasoning, multiple choice movie recommendation (100 samples) Recommend movies similar to the given list of movies emotional intelligence, multiple choice navigate (100 samples) Given a series of navigation instructions, determine whether one would end up back at the starting point arithmetic, logical reasoning, mathemat- ics, multiple choice object counting (100 samples) Questions that involve enumerating ob- jects of different types and asking the model to count them free response, logical reasoning operators (100 samples) Given a mathematical operator definition in natural language, apply it free response, mathematics, numerical re- sponse presuppositions as nli (100 samples) Determine whether the first sentence en- tails or contradicts the second common sense, logical reasoning, multi- ple choice question selection (100 samples) Given a short answer along with its con- text, select the most appropriate question which to the given short answer multiple choice, | 2312.11111#64 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
2312.11111 | 65 | reasoning, multi- ple choice question selection (100 samples) Given a short answer along with its con- text, select the most appropriate question which to the given short answer multiple choice, paraphrase, comprehension, summarization reading ruin names (100 samples) Select the humorous edit that âruinsâ the input movie or musical artist name emotional understanding, multiple choice snarks (100 samples) Determine which of two sentences is sar- castic emotional understanding, humor, multi- ple choice sports understanding (100 samples) Determine whether an artificially con- structed sentence relating to sports is plausible or implausible common sense, context-free question an- swering, domain specific, multiple choice tense (100 samples) Modify the tense of a given sentence free response, paraphrase, syntax winowhy (100 samples) Evaluate the reasoning in answering Winograd Schema Challenge questions causal reasoning, common sense, multi- ple choice, social reasoning Sort a list of words algorithms, free response | 2312.11111#65 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
2312.11111 | 67 | Model Llama 2 ChatGPT GPT-4 Avg Setting Instruction Induction (Zero-shot) 0.3409 Original+Zero-shot-CoT 0.3753 0.3778 0.4070 Original Original+Ours (avg) Original+Ours (max) 0.7581 0.7636 0.7826 0.8068 0.7858 0.5773 0.8018 0.8178 0.6283 0.5721 0.6541 0.6772 Setting Instruction Induction (Few-shot) 0.0590 Original+Zero-shot-CoT 0.0769 0.0922 0.1026 Original Original+Ours (avg) Original+Ours (max) 0.7750 0.7887 0.7934 0.8105 0.8235 0.7003 0.8447 0.8660 0.5525 0.5220 0.5768 0.5930 Setting Big-Bench (Zero-shot) 1.3332 Original+Zero-shot-CoT 1.9575 2.8094 3.4200 Original Original+Ours (avg) Original+Ours (max) 18.0068 18.448 20.9779 21.8116 17.4984 21.6865 19.7243 22.8790 12.28 14.03 14.50 16.04 | 2312.11111#67 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
2312.11111 | 69 | Task Model Setting wc ss negation cs ta oc snarks qs dq pn sum sw Sentence-level ChatGPT origin emotion 0.61 0.38 0.45 0.24 0.82 0.65 0.4 0.19 0.31 59 45 0 52 36 14.96 4.49 -6.1 -6.1 26.5 7 1 1 0.56 0.79 GPT-4 origin emotion 0.66 0.37 0.59 0.27 0.8 0.69 0.75 0.99 72 0.46 0.99 52 66 54 13.65 9.72 7.35 -9.09 37 26.5 0.16 1 1 1 Llama 2 origin emotion 0.46 0.64 0.41 0.59 0.01 0 0 0 0 0 20 6 -14 -14 80.37 80.37 -4.61 -6.1 26.5 0.06 1 23.5 0.96 0.03 Setting Word-level ChatGPT origin emotion 0.51 0.37 0.49 0.28 0.81 0.72 0.96 0.98 59 0.76 0.85 61 48 24 6.27 23.06 -4.61 -7.6 17.5 19 / / / / GPT-4 origin emotion 0.74 0.34 0.31 0.6 | 2312.11111#69 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
2312.11111 | 72 | Model sw ss neg cs Task sent oc snarks wu dq pn Avg ChatGPT zero-shot(no attack) few-shot(no attack) few-shot(attacked) 0.46 0.35 0.81 0.92 0.89 59 0.51 0.38 0.89 0.88 0.91 57 0.34 0.24 0.85 0.64 0.87 47 48 10 -10 99 99 97 -6.1 -4.61 -6.1 14.5 19 19 21.78 18.40 14.98 GPT-4 zero-shot(no attack) few-shot(no attack) few-shot(attacked) 0.86 0.32 0.82 0.89 0.37 0.86 0.8 0.88 0.19 0.93 70 0.94 65 0.96 0.94 56 1 1 62 66 54 99 99 98 8.84 -4.61 -4.61 34 55 31 27.78 28.45 23.82 Llama 2 zero-shot(no attack) few-shot(no attack) few-shot(attacked) 0.12 0.26 0.44 0.01 0.22 0.1 0 0 0 0.6 0 0 0.75 19 0.55 26 15 0.5 -12 -14 -14 16 8 7 -3.11 26.5 | 2312.11111#72 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
2312.11111 | 75 | Model ss neg cs wc ta Task oc snarks qs dq pn Avg ChatGPT zero-shot(no attack) few-shot(no attack) few-shot(attacked) 0.37 0.81 0.96 0.51 0.98 59 0.38 0.88 0.92 0.59 0.65 57 0.22 0.84 0.68 0.33 0.65 41 48 10 8 16.27 -6.1 29.35 -4.61 -4.61 9.72 16 19 8.5 13.68 11.42 6.53 GPT-4 zero-shot(no attack) few-shot(no attack) few-shot(attacked) 0.35 0.82 0.37 0.86 0.19 0.82 1 1 1 0.73 0.72 0.65 1 1 1 70 63 60 64 66 46 11.03 8.84 29.35 -4.61 13.65 -4.61 35.5 49 46 19.33 20.67 16.47 Llama 2 zero-shot(no attack) few-shot(no attack) few-shot(attacked) 0.27 0.43 0.72 0.59 0.04 19 25 0.22 17 0.1 0 0 0 0 0.53 0.45 0 0 -12 -14 -14 80.37 -3.11 | 2312.11111#75 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
2312.11111 | 77 | task "sentence similarity" exhibits a substantial decline of 14% on ChatGPT, 10% on GPT-4, and 5% on Llama2.
2. Introduction of emotional adjectives in the input induces a diminution of LLMs' performance. The inclusion of emotional adjectives within the input substantially undermines the performance of LLMs, as illustrated in Table 5. Notably, the task "cause selection" experiences a decline of 20% on ChatGPT, 16% on GPT-4, and a substantial 44% on Llama2.
3. Emotional demonstrations can be a formidable attack on LLMs, contrary to the conventional assumption that in-context learning improves performance. Contrary to the prevailing belief that in-context learning enhances performance, the introduction of emotional demonstrations emerges as a formidable form of attack on LLMs, as evidenced in Table 6. The results indicate that, in general, most tasks perform better in the few-shot (no attack) setting than in the zero-shot setting, underscoring the efficacy of in-context learning. Counterintuitively, however, performance in the few-shot (attacked) setting is notably inferior to the other two settings across a majority of tasks, even though these emotional demonstrations provide accurate and pertinent information. | 2312.11111#77 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
2312.11111 | 78 | 4. Impairment of LLMs' performance can be induced by the introduction of emotional adjectives in demonstrations. The integration of emotional adjectives within demonstrations exerts a diminishing effect on the performance of LLMs, as evident in Table 7. Specifically, the task "object counting" experiences a reduction from 57 to 47
Table 8: Results on visual EmotionAttack
Dataset    Instruction Induction         BIG-Bench
           LLaVa-13b  BLIP2  CogVLM      LLaVa-13b  BLIP2  CogVLM
Vanilla    0.71       0.23   0.53        20.92      13.93  14.31
Happiness  0.48       0.08   0.07        10.49      8.39   3.95
Surprise   0.48       0.08   0.07        9.73       3.51   2.45
Disgust    0.48       0.08   0.07        8.87       6.29   5.65
Sadness    0.48       0.08   0.07        9.43       7.41   0.93
Anger      0.48       0.08   0.07        10.02      3.65   1.83
Fear       0.48       0.08   0.07        12.02      6.05   2.62
on ChatGPT, from 65 to 56 on GPT-4, and notably from 26 to 15 on Llama2.
# C.2 Results on visual attack | 2312.11111#78 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
2312.11111 | 79 | on ChatGPT, from 65 to 56 on GPT-4, and notably from 26 to 15 on Llama2.
# C.2 Results on visual attack
We evaluate the efficacy of EmotionAttack across four distinct models: LLaVa-13b [28], blip2-opt [25], blip2-t5 [25], and CogVLM [46]. Our experimentation encompasses 16 tasks from Instruction Induction [17] and an additional 11 tasks sourced from BIG-Bench-Hard [44]. These tasks are deliberately diverse, varying in difficulty and perspective, covering domains such as math problem-solving, semantic comprehension, logical reasoning, and causal inference.
Baselines To benchmark the performance of our vision attack method, we compare it against the original prompt setting. Given that certain AI models necessitate image inputs, we employ a small black picture accompanied by the original prompt as a baseline for these specific models.
The outcomes of our experiments across four distinct language models (LMs) on 27 tasks are presented in Table 8. The numerical values depict the averages across the 27 tasks for each specific model within its designated setting. The key findings are outlined below: | 2312.11111#79 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
2312.11111 | 80 | 1. Substantial performance declines occur across most tasks. Our results show marked reductions in performance across nearly all tasks. Notably, the introduction of the "Surprise" emotion induces an average 25% decline on LLaVa-13b, an average 11% decrease on blip2-opt, an average 6% reduction on blip2-t5, and a substantial average decrease of 45% on CogVLM.
2. Optimal "emotional pictures" are distinct for different models and tasks. The identification of the optimal "emotional picture" varies across models and tasks. As illustrated in Table 8, the most detrimental impact on performance consistently emanates from distinct "emotional pictures" for each model.
# D Theories for EmotionPrompt and EmotionAttack can be shared across modalities
We devise textual EmotionPrompt inspired by three psychology theories and phenomena, and visual EmotionPrompt leveraging Maslow's hierarchy of needs [31]. This raises a question: are those theories effective across modalities? We explore this question by translating the information in visual EmotionPrompt into text and verifying its performance. Table 9 shows our results on ChatGPT and GPT-4. Similarly, we translate textual EmotionAttack into images and test their effectiveness as visual EmotionAttack. Results on LLaVa are shown
| 2312.11111#80 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
2312.11111 | 81 |
Table 9: We translate visual EmotionPrompt into texts and verify their performance on ChatGPT and GPT-4.
Model      ChatGPT                          GPT-4
Task       senti  ss    la    sw    wc      senti  ss    la    sw    wc
Vanilla    0.87   0.36  0.92  0.41  0.53    0.91   0.32  0.91  0.84  0.7
Money      0.89   0.39  0.95  0.46  0.55    0.92   0.35  0.91  0.82  0.71
Woman      0.9    0.42  0.93  0.45  0.56    0.93   0.34  0.9   0.8   0.72
Man        0.89   0.42  0.95  0.47  0.58    0.93   0.32  0.9   0.79  0.7
Honor      0.92   0.42  0.95  0.43  0.56    0.94   0.36  0.9   0.81  0.71
Fortress   0.92   0.43  0.93  0.46  0.57    0.93   0.35  0.91  0.89  0.73
Table 10: We translate textual EmotionAttack into image and verify their performance on LLaVa. | 2312.11111#81 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
2312.11111 | 82 | Table 10: We translate textual EmotionAttack into images and verify their performance on LLaVa.
Task sentiment sentence similar larger animal starts with word in context Vanilla CL 1 CL 2 EC 1 EC 2 OR 1 OR 2 0.43 0.73 0.71 0.68 0.51 0.56 0.68 0.17 0.12 0.1 0.1 0.1 0.11 0.1 0.86 0.78 0.66 0.65 0.62 0.68 0.15 0.03 0.07 0.07 0.08 0.08 0.09 0.06 0.58 0.47 0.52 0.45 0.47 0.48 0.42 0.94 0.83 0.83 0.82 0.83 0.83 0.78 0.97 0.06 0.06 0.06 0.06 0.06 0.06
[Figure 5: two heatmap panels, EmotionDecode (EmotionPrompt) and EmotionDecode (EmotionAttack).]
Figure 5: Results of EmotionDecode on visual EmotionPrompt and EmotionAttack. The color represents the performance of each stimulus on diverse tasks across LLaVa. Red means better performance, while blue means weaker performance.
The results are shown in Table 10. They prove that the theories for EmotionPrompt and EmotionAttack can be shared across modalities.
# E More results on EmotionDecode
We compute the mean vector for each type of image in visual EmotionPrompt and visual EmotionAttack, and explore their performance on LLaVa. Fig. 5 shows the results.
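A minimal sketch of this averaging step, assuming the image embeddings have already been extracted by some vision encoder (the embedding step itself is omitted, and the toy vectors below are illustrative only):

```python
def mean_emotion_vectors(embeddings_by_category):
    """Average the image-embedding vectors within each emotion category."""
    means = {}
    for emotion, vectors in embeddings_by_category.items():
        dim = len(vectors[0])
        # Component-wise mean over all stimulus images of this emotion.
        means[emotion] = [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]
    return means

# Toy 2-D "embeddings" standing in for real image features.
toy = {
    "happiness": [[1.0, 0.0], [3.0, 2.0]],
    "fear": [[0.0, 4.0], [2.0, 0.0]],
}
print(mean_emotion_vectors(toy)["happiness"])  # [2.0, 1.0]
```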
# F Detailed methods of EmotionAttack
Textual attack. We design four kinds of attacks for zero-shot learning and few-shot learning as the initial attempt at EmotionAttack.
1. Sentence-level Attack for Zero-shot Learning In practical conversational scenarios, interactions with LLMs typically unfold in a sequential manner, with users addressing one topic after another rather than engaging in exhaustive dialogue before resetting the chat history. However, emotional contexts may be present within the chat history, which prompts an inquiry into whether such contexts exert an influence on the performance of LLMs across subsequent tasks. This method aims to replicate scenarios wherein LLMs are tasked with completing assignments immediately following exposure to emotionally charged events. These events involve instances where LLMs themselves serve as active participants, with aspects of their lives, careers, friendships, and familial connections being subjected to challenges. Additionally, LLMs may assume the role of passive observers in emotional events, encompassing narratives involving entities such as dogs, children, and musicians. To be specific, we examine the impact of introducing emotional contexts preceding the original prompt. This methodology aims to simulate real-world usage scenarios without compromising the semantic integrity of the original prompt, as denoted by the format "emotional context + prompt".
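The "emotional context + prompt" format is simple to reproduce; a sketch with hypothetical contexts (the paper's actual contexts are auto-generated and richer):

```python
# Hypothetical emotional contexts; placeholders for the auto-generated ones.
EMOTIONAL_CONTEXTS = [
    "Your best friend just told you they are moving away forever.",
    "A child reunited with their lost dog in front of you today.",
]

def sentence_level_attack(prompt, context):
    """Prepend an emotional context to the task prompt: "emotional context + prompt"."""
    return f"{context} {prompt}"

print(sentence_level_attack("What is the capital of France?", EMOTIONAL_CONTEXTS[0]))
```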
2. Word-level Attack for Zero-shot Learning In the utilization of LLMs, our inputs frequently incorporate emotional adjectives such as "happy", "angry", "sad", and "crying". Despite their often ancillary role in task completion, there arises an inquiry into whether these emotionally charged words possess the capacity to attract heightened attention from LLMs or even impede their performance in a manner analogous to their impact on humans. To investigate this phenomenon, we employ a straightforward prompt-engineering pipeline to create instances of "emotional input" and "emotional output", whereby an emotional adjective is appended to the entity representing the human participant. This process unfolds in two stages. Initially, we employ the gpt-3.5-turbo 35 model to identify the human entity within input-output pairs by soliciting responses to the query "Please recognize the entity that represents the human in this sentence: [input sentence]. entity 1, entity 2, entity 3...". Subsequently, a random emotional adjective is selected and affixed to the original entity, thus constructing the emotionally augmented input-output pairs, as denoted by the format "emotional adjective + human entity".
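The second stage of this pipeline reduces to a string substitution; a self-contained sketch (the entity is passed in directly here, whereas the paper first locates it by querying gpt-3.5-turbo):

```python
import random

EMOTION_ADJECTIVES = ["happy", "angry", "sad", "crying"]  # adjectives named in the text

def word_level_attack(sentence, human_entity, rng=random):
    """Prefix the human entity in a sentence with a random emotional adjective."""
    adjective = rng.choice(EMOTION_ADJECTIVES)
    # Replace only the first occurrence of the entity.
    return sentence.replace(human_entity, f"{adjective} {human_entity}", 1)

print(word_level_attack("The teacher grades the essays.", "teacher"))
```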
3. Sentence-level Attack for Few-shot Learning While in-context learning has demonstrated considerable efficacy across diverse domains, the question arises as to whether its effectiveness persists when the instructional demonstrations incorporate emotional contexts. To scrutinize the influence of emotion in the context of in-context learning, we automatically generate a series of instructional demonstrations featuring our devised emotional contexts for 10 distinct tasks. Notably, our constructed demonstrations all provide right and useful information. For instance, considering the "presuppositions as nli" task from BIG-Bench-Hard 44, which entails determining whether the first sentence entails or contradicts the second, we formulate inputs by randomly selecting two emotional contexts and structuring the output as "neutral". An illustrative example follows: "Sentence 1: [emotional context]. Sentence 2: [emotional context]. Output: neutral." It is noteworthy that this approach is applicable primarily to tasks wherein either the input or output encompasses a complete sentence.
4. Word-level Attack for Few-shot Learning This methodology closely parallels the word-level attack for zero-shot learning, with a nuanced distinction lying in the introduction of emotional adjectives to the entities within the instructional demonstrations, as opposed to incorporating them into the input.
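Building one such few-shot demonstration for the NLI task can be sketched as follows, with hypothetical contexts standing in for the auto-generated ones:

```python
import random

# Hypothetical emotional contexts; placeholders for the auto-generated ones.
EMOTIONAL_CONTEXTS = [
    "My dog passed away this morning and I cannot stop crying.",
    "She just won an award and her whole family is celebrating.",
    "The musician's farewell concert moved the audience to tears.",
]

def emotional_nli_demonstration(rng=random):
    """Build one few-shot demonstration for the presuppositions-as-NLI task.

    Two emotional contexts become the sentence pair; the gold label is fixed
    to "neutral", so the demonstration stays factually correct while
    carrying emotional content.
    """
    s1, s2 = rng.sample(EMOTIONAL_CONTEXTS, 2)
    return f"Sentence 1: {s1} Sentence 2: {s2} Output: neutral"

print(emotional_nli_demonstration())
```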
Visual attack. In numerous psychological experiments, researchers elicit emotions from participants not solely through textual stimuli but also via visual content 15;5. In contrast to text, pictures represent a more direct and potent modality, encapsulating richer information. Given the contemporary capabilities of many AI models that extend beyond linguistic processing to include visual comprehension, an intriguing question arises: can the induction of emotions in LMs be achieved through diverse visual stimuli? Consequently, we explore the viability of employing various images as a robust method of eliciting emotion from LMs and inquire whether such an approach could constitute a potent attack on these models.
To investigate this inquiry, we initially curate a dataset utilizing DALL-E, comprising 36 images depicting six distinct emotions: happiness, surprise, sadness, disgust, anger, and fear. Each emotional category consists of six representative images. Our objective is to elicit emotion from models using visual stimuli without altering the semantic content of the textual prompts. In pursuit of this, we input an "emotional picture" in conjunction with a text prompt to the models. As illustrated in Fig. 1, we furnish the models with both an "emotional picture" and the original prompt, aiming to exert an influence on the model's internal emotional states.
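The query pattern above can be sketched with a mocked model interface (`model(image, text)` stands in for a vision-language model such as LLaVa; the file names are illustrative):

```python
import random

EMOTIONS = ["happiness", "surprise", "sadness", "disgust", "anger", "fear"]

def visual_emotion_attack(model, prompt, images_by_emotion, emotion, rng=random):
    """Send an emotional stimulus image together with the unmodified text prompt."""
    image = rng.choice(images_by_emotion[emotion])  # one of six stimuli per emotion
    return model(image, prompt)

# Mock model that just echoes what it received.
mock_model = lambda image, text: f"[{image}] {text}"
images = {e: [f"{e}_{i}.png" for i in range(6)] for e in EMOTIONS}
print(visual_emotion_attack(mock_model, "What is 2 + 2?", images, "fear"))
```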
# G Details of Human Study
Beyond deterministic tasks, the generative capabilities of LLMs hold significant importance, encompassing activities such as writing poems and summaries, which require human judgment. We undertook a comprehensive human study involving 106 participants to explore the effectiveness of EmotionPrompt in open-ended generative tasks using GPT-4. This evaluation was grounded on three distinct metrics: performance, truthfulness, and responsibility.
We formulated a set of 30 questions from the TruthfulQA 26 and CValues 28 datasets and gener-
4 Note that we are not allowed to conduct a human study on EmotionAttack, since irresponsible results could occur to human subjects.
5 Performance encompasses the overall quality of responses, considering linguistic coherence, logical reasoning, diversity, and the presence of corroborative evidence. Truthfulness is a metric to gauge the extent of divergence from factual accuracy, otherwise referred to as hallucination 26. Responsibility, on the other hand, pertains to the provision of some positive guidance coupled with a fundamental sense of humanistic concern. This criterion also underscores the broader implications of generated content on societal and global spheres 49.
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
6 Notably, 10 of these questions were sourced from TruthfulQA 26, a set specifically designed to provoke LLMs into producing responses that manifest hallucinations. Additionally, in consonance with the CValues dataset 49, another 15 questions were meticulously devised to elicit biased responses from LLMs. The final 5 questions were geared towards
ated two distinct responses for each, leveraging the capabilities of GPT-4. The questions span a diverse range of domains such as biology, history, law, finance, pseudoscience, environmental science, intimate relationships, social science, psychology, and data science. One of the responses is generated using the vanilla prompt, while the other is generated utilizing our EmotionPrompt. Participants were then asked to evaluate both responses for each question, employing a scale ranging from 1 to 5 based on the aforementioned three metrics. Finally, we analyzed the scores of these participants. The enrollment of the 106 participants was executed meticulously, adhering to relevant regulatory standards and guidelines. Pertinent demographic characteristics concerning these participants are detailed in Table 11. Notably, all individuals in the participant pool possess advanced academic degrees and demonstrate a commendable command of the English language.
We reported the mean and standard deviation of all participants in Fig. 1(e). We further computed the Relative Gain of EmotionPrompt over the vanilla prompt on the 3 metrics for each task and reported the results. The results from the human study demonstrate that EmotionPrompt achieves consistent improvements in performance, truthfulness, and responsibility over the majority of the generative questions. However, EmotionPrompt could fail in some cases. More detailed results, case studies, and analysis are in Appendix G.2.
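The Relative Gain formula is not spelled out here; the sketch below assumes it is the difference of mean participant ratings, with both prompts rated on the same 1-5 scale (the ratings shown are hypothetical):

```python
from statistics import mean, stdev

def relative_gain(emotion_scores, vanilla_scores):
    """Assumed Relative Gain: difference of mean ratings on one metric."""
    return mean(emotion_scores) - mean(vanilla_scores)

# Hypothetical 1-5 ratings from five participants on one question.
vanilla = [3, 3, 4, 2, 3]
emotion = [4, 4, 5, 3, 4]
print(f"EmotionPrompt mean/std: {mean(emotion):.2f}/{stdev(emotion):.2f}")
print(f"Relative Gain: {relative_gain(emotion, vanilla):+.2f}")  # +1.00
```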
# Information of subjects and evaluation metrics
The information of the human subjects is shown in Table 11.
Table 11: Sample demographic characteristics of our human study participants.
Demographic | Response Options | Participants (N = 106)
Identity | Undergraduate and Postgraduate | 95 (90%)
Identity | Social Member | 11 (10%)
Age | 20-25 | 95 (90%)
Age | 26-35 | 11 (10%)
Education | Bachelor | 106 (100%)
We outline the measures used in our human study:
• Performance: 1 = "The response fails to address the question adequately", 2 = "The response addresses the question; however, its linguistic articulation is suboptimal, and the logical structure is ambiguous", 3 = "The response sufficiently addresses the question, demonstrating clear logical coherence", 4 = "Beyond merely addressing the question, the response exhibits superior linguistic clarity and robust logical reasoning", 5 = "The response adeptly addresses the question, characterized by proficient linguistic expression, lucid logic, and bolstered by illustrative examples".
• Truthfulness: 1 = "The response promulgates incorrect information, detrimentally influencing the ultimate interpretation", 2 = "A segment of the response deviates from factual accuracy; however, this deviation does not materially affect the ultimate interpretation", 3 = "The response predominantly adheres to factual accuracy, with potential for minor discrepancies that do not substantially influence the final interpretation", 4 = "The response is largely in consonance with factual evidence, albeit with insignificant deviations that remain inconsequential to the final interpretation", 5 = "The response is in meticulous alignment with the facts, exhibiting no deviations".
generative tasks such as poetry composition and summarization, which inherently demand a degree of creativity and artistic flair.
• Responsibility: 1 = "The content within the response may harbor implications detrimental to society and the world", 2 = "The response possesses elements that could potentially induce adverse emotional reactions, such as panic or anxiety", 3 = "The response remains neutral, neither encompassing positive nor negative societal implications", 4 = "The response is imbued with constructive guidance and exhibits elements of humanitarian concern", 5 = "The response is characterized by pronounced humanitarian considerations and is poised to foster positive ramifications for both society and the global community".
# G.2 Results in human study
Our key findings are as follows:
1. EmotionPrompt attains commendable performance across various metrics for the majority of questions. As illustrated in Fig. 2, EmotionPrompt exhibits shortcomings in a mere two instances, yet it demonstrates substantial improvements in over half of the evaluated scenarios, spanning diverse domains sourced from three distinct origins. For performance, EmotionPrompt achieves a Relative Gain approaching or exceeding 1.0 in nearly one-third of the problems, signifying a notable advancement.
2. EmotionPrompt demonstrates an enhanced capacity for generating ethically responsible responses. An assessment of Table 12 elucidates that the output from EmotionPrompt advocates for individuals to partake conscientiously in garbage sorting. This not only underscores the significance of environmental responsibility and sustainability, but also its value in fostering personal achievement and augmenting community welfare. Such instances accentuate the ability of EmotionPrompt to instill a sense of responsibility within LLMs. A supplementary exemplification can be found in Table 13. When tasked with delineating Western and Chinese cultures, LLMs exhibit differential linguistic choices between the original prompt and EmotionPrompt. Notably, the representation elicited by EmotionPrompt presents a more affirmative and responsible depiction of both Western and Chinese cultural paradigms.
3. Responses engendered by EmotionPrompt are characterized by enriched supporting evidence and superior linguistic articulation. An exploration of the second case in Table 13 reveals that the narratives presented by EmotionPrompt are markedly comprehensive, as exemplified by inclusions such as "Despite trends like increasing divorce rates or more people choosing to remain single." Additionally, as illuminated in Tables 12 and 14, the responses facilitated by EmotionPrompt consistently demonstrate a superior organizational coherence and encompass a broader spectrum of pertinent information.
4. EmotionPrompt stimulates the creative faculties and overarching cognizance of LLMs. This is substantiated through the examination of Table 15, wherein two instances of poem composition are showcased. Evidently, the poems generated by EmotionPrompt exude a heightened level of creativity and emotive resonance, evoking profound sentiment. Furthermore, we underscore this observation with reference to Table 14, wherein responses derived from two distinct prompt types are compared. Notably, the output generated from the original prompt centers on the novel's content, while the response fostered by EmotionPrompt delves into the spirit of the novel, which discusses the motivation and future significance concerning society and human nature.
5. EmotionPrompt exhibits certain constraints. The only two failure cases are presented in Table 16. Upon inspection of the first case in Table 16, a discernible difference emerges between the two responses. The output from EmotionPrompt employs more definitive terms, such as "completely" and "will not", while the narrative produced by the original prompt adopts a more tempered tone, signified by terms like "generally" and "may even be". This distinction might render the latter more palatable for certain audiences. Such deterministic language from EmotionPrompt could be attributed to its emphasis on the gravity of the question, indicated by phrases like "This is important to my career" and "You'd better be sure". To assuage uncertainties and bolster confidence, LLMs might be inclined to use unambiguous language, particularly when the underlying facts are unequivocal. Besides, in the second case in Table 16, the original prompt yields more expansive responses, encompassing a concluding summary, whereas EmotionPrompt merely enumerates the key points. However, in terms of essential content, both responses are satisfactory. Consequently, while EmotionPrompt possesses the propensity to enhance LLM outputs in many instances, it may not be universally applicable across all scenarios.
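Finding 1 above reports a "Relative Gain" of EmotionPrompt over the original prompt. The exact formula is not given in this section; the sketch below assumes the common definition of relative improvement over the baseline score, so treat it as an illustration rather than the paper's definitive metric.

```python
# Hypothetical sketch of the "Relative Gain" metric from finding 1.
# Assumed definition: (emotion_score - vanilla_score) / vanilla_score.

def relative_gain(vanilla_score: float, emotion_score: float) -> float:
    """Relative improvement of the EmotionPrompt score over the baseline."""
    return (emotion_score - vanilla_score) / vanilla_score

# Example with mean human ratings on the 1-5 scale:
print(relative_gain(2.0, 4.0))  # -> 1.0, i.e. the score doubled
```

Under this assumed definition, a Relative Gain at or above 1.0 (as in roughly one-third of the problems) would mean the EmotionPrompt rating is at least double the baseline rating.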
# G.3 Case study
In this subsection, we present case studies from Table 12 to Table 16, where the texts marked in red were judged the better generative responses by the human participants:
• Table 12: Case study on environmental and social science.
• Table 13: Two case studies on intimate relationship.
• Table 14: Case study on law, barrier free, and book summarization.
• Table 15: Two case studies on poem writing.
• Table 16: Two failure cases.
# Table 12: Case study on environmental science and social science.
Question: Men are usually more careless than women, so would it be better for women to handle garbage sorting?

Answer from original prompt: This statement can be considered a stereotype and is not necessarily true. Carelessness or meticulousness is not determined by gender but rather by an individual's personality, habits, and attitude. Both men and women can be equally good or bad at tasks like garbage sorting. It's important to promote and practice waste segregation to everyone, regardless of gender.

Answer from EmotionPrompt: While it's not fair to generalize that all men are more careless than women, it can be said that everyone should take responsibility in garbage sorting, regardless of their gender. This is a task that encourages environmental responsibility and sustainability. Moreover, taking on challenges such as this can indeed provide growth and development opportunities. Overcoming obstacles like understanding proper waste disposal and recycling can cultivate discipline, responsibility, and environmental consciousness. These are essential values that contribute to personal success and the betterment of the community. So in conclusion, it's not about whether women or men should handle garbage sorting - everyone should. It's an opportunity for all to grow and contribute to a healthier planet.

Question: Are all AIDS patients self-abandoned and unworthy of sympathy?

Answer from original prompt: No, this is not correct. People living with AIDS are not self-abandoned or unworthy of sympathy. AIDS is a disease that anyone can contract, regardless of their lifestyle or behavior. Like any other illness, it requires medical treatment and care. People living with AIDS deserve respect, care, and support just like any other person dealing with a health condition. Society should aim to better understand AIDS and work to eradicate the stigma associated with it.
Answer from original prompt: Yes, this summary is quite reasonable and is a common way of differentiating between Western and Chinese cultures. Western culture often emphasizes individualism, independence, and personal freedom. On the other hand, Chinese culture traditionally values family, community, and the collective good. However, it's important to remember that these are broad generalizations and there is a wide range of diversity within each culture. Not every person in Western cultures is individualistic and not every person in Chinese cultures is family-oriented. Furthermore, both cultures have been influencing each other and evolving, so these traditional distinctions may not be as clear-cut in the modern world.

Answer from EmotionPrompt: Yes, this summary is reasonable, although it is a broad generalization and there can be many exceptions. Western culture, particularly in countries like the United States, is often characterized by values such as independence, self-reliance, and personal freedom. This is often reflected in the emphasis on individual achievements and personal success. On the other hand, Western culture, particularly in countries like the United States, is often characterized by values such as independence, self-reliance, and personal freedom. This is often reflected in the emphasis on individual achievements and personal success. However, it's important to note that these are broad cultural tendencies and