# Mamba: Linear-Time Sequence Modeling with Selective State Spaces
Architecture and Training Details. Our models are:
• Transformer: The standard Transformer based on GPT3 (Table 12).
• Transformer++: A Transformer with an improved architecture, namely rotary positional encodings (Su et al. 2021) and SwiGLU MLP (Shazeer 2020), and the improved training recipe above.
• Hyena: Interleaving a Hyena block (the H3 block with S4 replaced by a global convolution parameterized by an MLP) with standard MLP blocks. The MLP blocks have expansion factor 2 instead of 4 and the number of layers is correspondingly increased by 1.5× to preserve parameter count.
• H3++: The H3 architecture with a few modifications, including (i) using the same "thin" Hyena dimensions above, (ii) the improved training recipe above, and (iii) a linear attention head dimension of 8.
• RWKV: The default RWKV model from B. Peng et al. (2023), including its modified MLP block. We also used as much of its specified training recipe as possible, such as increasing the learning rates by 2× or 3× on certain parameters.
• RetNet: The default RetNet model from Y. Sun et al. (2023). We also gave it the improved training recipe above.
• Mamba: The standard Mamba architecture, with the improved training recipe.

# E.2.2 Additional Scaling Law Ablations

We perform additional ablations on the architecture using the same protocol as the 2k context length scaling laws in Figure 4 (Left).

Mamba Architecture: Interleaving Blocks. We test the effect of different
architectural blocks combined with the Mamba block. We focus on the viewpoint that the Mamba block is simply the standard SwiGLU block with an extra conv → SSM path added. This leads to two natural ablations:
• What if the Mamba block is interleaved with a standard MLP block, instead of stacked homogenously? This can also be interpreted as taking Mamba and removing half of the SSMs.
• What if the Mamba block is interleaved with MHA (multi-head attention) blocks? This can also be interpreted as taking a Transformer with SwiGLU MLPs (i.e. what we call Transformer++) and simply adding SSMs to the MLP blocks.

Figure 9 (Right) shows these variants compared to the original (homogenous) Mamba architecture. Interestingly, neither change matters too much. The Mamba-MLP architecture is only slightly worse, and still better than all models except Transformer++. The Mamba-MHA architecture is only slightly better, which is somewhat surprising in light of the fact that many recent works have found that combining (LTI) SSMs with Attention can lead to substantial improvements (Dao, Fu, Saab, et al. 2023; Fathi et al. 2023; Fathullah et al. 2023; Saon, Gupta, and Cui 2023; Zuo et al. 2022).
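To make the three layouts concrete, here is a small illustrative sketch (our own notation, not the paper's code) of the block patterns being compared:

```python
def layout(variant: str, n_layers: int) -> list[str]:
    """Block pattern per variant: 'M' = Mamba, 'MLP' = SwiGLU MLP, 'MHA' = attention."""
    if variant == "mamba":          # homogeneous stack of Mamba blocks
        return ["M"] * n_layers
    if variant == "mamba-mlp":      # Mamba interleaved with standard MLP blocks
        return ["M", "MLP"] * (n_layers // 2)
    if variant == "mamba-mha":      # Transformer++ with SSMs in the MLP slots
        return ["MHA", "M"] * (n_layers // 2)
    raise ValueError(f"unknown variant {variant!r}")

print(layout("mamba-mlp", 8))   # ['M', 'MLP', 'M', 'MLP', 'M', 'MLP', 'M', 'MLP']
```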
H3 Architecture: Training Recipes. Next we ablate differences between the Hyena and H3++ models, our weakest and strongest models outside of Transformer++ and Mamba, particularly to isolate the effect of training recipes.
• Hyena: The Hyena block with its original architecture and GPT3 training recipe (same as Figure 4).
• Hyena+: The same architecture but with the improved training recipe described above.
• H3+: The same architecture as Hyena+ but with the Hyena convolution kernel swapped out for an S4D convolution kernel.
• H3++: The same as H3+, but with a linear attention head dimension of 8. This increases computation inside the SSM recurrence but does not increase parameters.

Our general convention is that
"Model+" represents the base model with the improved training recipe, and "Model++" also allows for architectural changes. Figure 9 (Right) shows that:
• A large improvement is achieved by the improved training recipe, which was used for many of the models in the main Figure 4 (RetNet, H3++, Transformer++, Mamba).
• The choice of the inner LTI SSM does not matter (e.g. Hyena vs. S4), consistent with findings throughout this paper.
• The head dimension expansion improves performance, consistent with one of our main themes that expanded state dimension improves performance for SSMs (Section 3).
[Figure 9 graphic: two panels, each titled "Scaling Laws on The Pile (Sequence Length 2048)", plotting validation perplexity against FLOPs (log scale) for the ablated architectures.]
Figure 9: (Scaling laws: extra ablations.) (Left) Instead of … (Right) Instead of …

# E.2.3 Downstream Evaluation Details

This pretraining procedure is the same as the scaling law protocol, but extended to 300B tokens. For the 1.3B model, we use a batch size of 1M tokens to be consistent with the GPT3
specifications. We report the perplexity on the Pile validation set, and for this metric only compare to models trained on the same dataset and with the same tokenizer, in particular Pythia and RWKV. For downstream evaluation, we use the LM evaluation harness from EleutherAI (L. Gao, Tow, et al. 2021), as done by most work in this area. We evaluate on the following tasks/datasets that measure common sense reasoning:
• LAMBADA (Paperno et al. 2016).
• HellaSwag (Zellers et al. 2019).
• PIQA (Bisk et al. 2020).
• ARC-challenge (P. Clark et al. 2018).
• ARC-easy: an easy subset of ARC-challenge.
• WinoGrande (Sakaguchi et al. 2021).

We report accuracy for LAMBADA, WinoGrande, PIQA, and ARC-easy, and accuracy normalized by sequence length for HellaSwag and ARC-challenge (since normalized accuracy is higher for almost all models for these tasks).

# E.3 DNA Modeling

# E.3.1 Pretraining Details

We describe the dataset and training procedure of the HG38 pretraining task in more detail. The dataset follows the splits from the prior Enformer work on genomics (Avsec et al. 2021); the training split contains a total of n = 34021 segments of length 2^17 = 131072 that cover the genome, for a total of approximately 4.5 billion tokens (DNA base pairs). These segments are tuples of (chromosome number, starting index, ending index), and can be extended if necessary (e.g. to get longer segments).

We deviate from HyenaDNA when the training sequence length is not 2^17. HyenaDNA always takes a fixed sub-segment (e.g. the beginning or middle of the prescribed segment), and thus for any training sequence length each epoch is fixed to 34021 samples and doesn't necessarily go through the whole genome. On the other hand, we use the entire training data:
• When the context length L is less than (or equal to) 2^17, we divide up each segment into non-overlapping sub-segments of length L, so that there are
n × 2^17/L total samples and n × 2^17 ≈ 4.5B tokens per epoch.
• When the context length L is greater than 2^17, we turn each segment into two samples, one that begins with the prescribed segment and one that ends with the prescribed segment. Thus each epoch has 2n items and 2nL tokens. For example, at sequence length 2^18 = 262144 there are 4× as many tokens as the default, and at sequence length 2^20 there are 16× as many tokens.

Other training details generally follow the same protocol as our language modeling experiments (Appendix E.2). For example, we use AdamW with (β1, β2) = (0.9, 0.95), no dropout, weight decay 0.1. We use a cosine learning rate scheduler with linear warmup for 10% of total steps.
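A minimal sketch of the epoch construction described in the two bullets above, assuming segments are (chromosome, start, end) tuples spanning 2^17 base pairs:

```python
SEG_LEN = 2**17  # 131072, the prescribed segment length

def make_samples(segments, L):
    """segments: list of (chromosome, start, end) with end - start == SEG_LEN."""
    samples = []
    for chrom, start, end in segments:
        if L <= SEG_LEN:
            # non-overlapping sub-segments of length L: 2^17 / L per segment
            for s in range(start, end, L):
                samples.append((chrom, s, s + L))
        else:
            # two samples per segment, extended beyond the prescribed range
            samples.append((chrom, start, start + L))  # begins with the segment
            samples.append((chrom, end - L, end))      # ends with the segment
    return samples
```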
# E.3.2 Scaling: Model Size Details

Models. The models we consider are:
• Transformer++: a Transformer with improved architecture, notably the usage of RoPE positional encodings (Su et al. 2021). Informally, we found these to be noticeably better than vanilla positional encodings from (Vaswani et al. 2017).
• HyenaDNA: the Hyena model from Nguyen, Poli, et al. (2023) and Poli et al. (2023), which is roughly a Transformer with the MHA block replaced by an H3 block using a global convolution parameterized by an MLP.
• Mamba: the standard Mamba architecture.

Model Sizes. We use the following model sizes.

| Blocks | Model Dimension | Params (Approx.) |
|---|---|---|
| 4 | 64 | 250K |
| 5 | 96 | 700K |
| 6 | 128 | 1.4M |
| 7 | 192 | 3.5M |
| 8 | 256 | 7.0M |
| 10 | 384 | 19.3M |
| 12 | 512 | 40.7M |

Note that the number of blocks for Mamba is doubled, because one Transformer "layer" includes both the MHA and MLP blocks (and similarly for Hyena), which requires two Mamba blocks to match parameters (Section 3.4).

Training. For each model (Transformer++, HyenaDNA, Mamba), we swept the learning rate across {1e-3, 2e-3, 4e-3, 8e-3}.
The optimal Transformer and HyenaDNA learning rates were 2e-3 across all sizes. The optimal Mamba learning rate was 8e-3; note that Mamba performed better than baselines with matched learning rates (2e-3), but was more stable and improved even more at higher learning rates. (Furthermore, as this LR is on the upper range of the sweep, it is possible that our results are still suboptimal.) Note that, in contrast to standard LM scaling laws (Table 12), our LR was held constant across model sizes for simplicity. The optimal LR should go down for larger models, but we didn't
find a noticeable effect at the small model sizes (at most a few million parameters) we considered.

# E.3.3 Scaling: Context Length Details

We use a total batch size of 2^24 ≈ 16M tokens per training step, for every sequence length (e.g. at length 2^20 there are 16 segments per batch and at length 2^10 there are 16384 segments per batch). This is a large batch size relative to the model size by usual LM standards, but note that a batch size of 2^23 is the minimum possible on a machine with 8 GPUs and sequence length of 2^20, and that HyenaDNA used much larger batches of 2^28.

The learning rate used was 0.008 for Mamba and 0.001 for HyenaDNA; we initially attempted to use the same learning rate of 0.002 from the previous section for HyenaDNA, but found that it was unstable at the longest context length.

Sequence Length Warmup. Following (Nguyen, Poli, et al. 2023), we use sequence length warmup (SLW) during pretraining. We choose a simple schedule of 2 epochs at each power-of-two sequence length starting from 2^10 = 1024. (Note that because of how data is curated, at the longest sequence lengths more steps and tokens are spent proportionally. In particular, each stage up to length 2^17 processes the same number of tokens, but 4× as many tokens are processed at length 2^18, 8× as many at length 2^19, and 16× as many at length 2^20.)
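Since the token budget per step is fixed, the number of segments per batch at each warmup stage follows mechanically; a minimal sketch:

```python
TOKENS_PER_STEP = 2**24      # ≈ 16M tokens per gradient step, as above

def segments_per_batch(seq_len: int) -> int:
    # the batch size is halved every time the sequence length doubles
    return TOKENS_PER_STEP // seq_len

for e in range(10, 21):      # warmup stages at lengths 2^10 ... 2^20
    L = 2**e
    print(L, segments_per_batch(L))   # e.g. 1024 -> 16384, 1048576 -> 16
```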
Unlike HyenaDNA, we always control for the number of tokens per gradient update, so the batch size is successively halved as the sequence lengths are doubled in each stage.

Table 13: (Great Apes DNA Classification.) Accuracy after fine-tuning on sequences of length 2^10 = 1024 up to 2^20 = 1048576 using pretrained models of the same context length. Random guessing is 20%. [Table 13 body: accuracy (%) for the HyenaDNA and Mamba models at each sequence length.]

Remark E.1. We also note that the schedule was not tuned, and we never experimented with turning off sequence length warmup for these pretraining experiments. We later found that SLW did not help noticeably for audio pretraining at similar lengths (Section 4.4), and it is possible that it is not necessary for DNA pretraining either.
# E.3.4 Species (Great Apes) Classification

Models are causal and therefore only the last element (across the sequence length) of the model's output is used for the classification head. Note that we control for the total number of elements in the loss function per gradient step. The pretraining objective includes all positions across the sequence length, so that batch_size × sequence_length is held constant; in other words, the batch size decreases as the sequence length increases. However, for a classification task, since only the last position enters the loss, the batch size itself is held constant. Note that this also means that fine-tuning models with longer sequence lengths is more computationally expensive.

Training consists of 10 epochs, each of which has 1024 gradient steps. Each gradient step uses a batch size of 64, whose samples are all independently randomly drawn by uniformly picking a species, uniformly picking a chromosome, and then uniformly picking a contiguous segment of DNA. Following (Nguyen, Poli, et al. 2023), models with a maximum context length greater than 2^14 = 16384 use sequence length warmup with 1 epoch at length 2^14 = 16384, 1 epoch at length 2^15 = 32768, 1 epoch at length 2^16 = 65536, and so on up to the maximum sequence length. For example, the model with 2^20 = 1048576 context undergoes 6 epochs of sequence length warmup before 4 more epochs at its maximum sequence length.

The learning rate for all Hyena models is 4e-5, while the learning rate for all Mamba models is 1e-4. These were found by performing learning rate sweeps for each model among {1e-5, 2e-5, 4e-5, 1e-4, 2e-4} for the smaller sequence lengths (2^10, 2^12, 2^14, 2^16), and these values were consistently found to be the best for each model. An abridged learning rate sweep was done at length 2^18, which agreed with these values, and a single run at length 2^20 was performed (as described above, the computational cost of these experiments is proportional to the sequence length).
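A minimal sketch of this fine-tuning warmup schedule (the function and its return format are our own illustration):

```python
def warmup_schedule(max_len: int, total_epochs: int = 10):
    """Epochs per sequence length for a model with context max_len >= 2**14."""
    stages = []
    L = 2**14
    while L < max_len:
        stages.append((L, 1))    # 1 epoch at each warmup length
        L *= 2
    stages.append((max_len, total_epochs - len(stages)))  # remaining epochs
    return stages

print(warmup_schedule(2**20))
# [(16384, 1), (32768, 1), (65536, 1), (131072, 1), (262144, 1), (524288, 1), (1048576, 4)]
```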
The learning rate followed a cosine decay schedule with warmup: 5 epochs of linear warmup to the maximum learning rate, then 5 epochs of cosine decay down to 1e-6. The unusually long learning rate warmup schedule was chosen because the sequence length warmup was also long (e.g. comprising 6 out of 10 epochs for the model with context length 2^20); we did not experiment with this choice. Results for the Species classification task are in Table 13.

# E.4 Audio Details

# E.4.1 YouTubeMix Audio Pretraining

Model. We use a model with 3 blocks per stage (3 × 5 = 15 total Mamba blocks), pooling factor p = 16, and outer dimension D = 64, for about 3.5M parameters.

Dataset. The data is mu-law encoded at 8 bits, so the model is modeling discrete tokens with a vocab size of 256. The dataset consists of clips of up to 1 minute long, or length 960000, which is subsampled and divided into segments of any desired sequence length. Since the architecture involves two stages of pooling by a factor of 16,
and we want the resulting sequence length to be a multiple of 8 for hardware efficiency, the longest possible sequence is 468 × 2048 = 958464. The rest of our sequence lengths are defined by successively halving this and rounding up to the nearest multiple of 2048. Table 14 lists the specifications used in Figure 7. Beyond the varying batch sizes, the number of valid segments in the training set varied between different sequence lengths (e.g. the number of training steps per epoch was not constant for different points in the graph), which may have contributed to kinks in the scaling curves.

Table 14: (YouTubeMix length scaling: sequence lengths and batch sizes.)

| Sequence length | Batch size | Tokens per batch |
|---|---|---|
| 468 × 2048 = 958464 | 1 | 958464 |
| 234 × 2048 = 479232 | 2 | 958464 |
| 117 × 2048 = 239616 | 4 | 958464 |
| 59 × 2048 = 120832 | 8 | 966656 |
| 30 × 2048 = 61440 | 16 | 983040 |
| 15 × 2048 = 30720 | 32 | 983040 |
| 8 × 2048 = 16384 | 64 | 1048576 |
| 4 × 2048 = 8192 | 128 | 1048576 |

[Figure 10 graphic: two panels titled "Audio Waveforms - SSM Parameterization", plotting validation loss against sequence length for Mamba (S6), non-selective variants, and Mamba-S4.]
Figure 10: (Audio Pretraining (YouTubeMix) Ablations.) As a uniformly-sampled "continuous" signal modality, audio waveforms actually benefit from LTI models which have matching inductive bias. (Left) Homogenous models (all blocks have the same parameterization). (Right) Only the center U-Net blocks are ablated; the outer blocks are Mamba-S4. Purple line is same as figure on left.
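The halving-and-rounding rule can be checked directly; a minimal sketch that regenerates the sequence-length column of Table 14:

```python
import math

lengths = [468 * 2048]                 # 958464, the longest possible sequence
while lengths[-1] > 4 * 2048:          # stop once we reach 8192
    half = lengths[-1] / 2
    lengths.append(math.ceil(half / 2048) * 2048)  # round up to multiple of 2048

print(lengths)  # [958464, 479232, 239616, 120832, 61440, 30720, 16384, 8192]
```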
Training. Models were trained for 200K training steps with a maximum learning rate of 0.002, 20K (10%) warmup steps, and weight decay 0.1 (similar to our general pretraining recipe across domains).

Additional Ablations: SSM Parameterizations. We investigate SSM parameterizations on long-form audio waveform pretraining in the setting of Figure 7. The setting is modified slightly to use larger models (8 layers and D = 64 for 6M params, the SaShiMi default), shorter sequences (2^11 = 2048 to 2^18 = 262144 instead of 2^13 to 2^20), lower LR (0.001, down from 0.002), and shorter training cycles (100K instead of 200K steps). Figure 10 shows that the change from S4 → S6 (i.e. the selection mechanism) is not always beneficial. On long-form audio waveforms, it in fact significantly hampers performance, which may be intuitive from the point of view that audio is uniformly sampled and very smooth, and therefore benefits from continuous linear time-invariant (LTI) methods. After ablating away the selection mechanism, note that the resulting model is the S4 layer inside the Mamba block. To disambiguate, we call this Mamba-S4 as opposed to the default Mamba architecture Mamba-S6. However, on the right side, we keep the outer layers of the U-Net Mamba-S4 and ablate only the inner layers.
The performance differences shrink dramatically; this reinforces the hypothesis that layers closer to the raw audio signal should be LTI, but once they are "tokenized" and compressed by the outer layers, the inner layers no longer need to be LTI. In this setting, however, the real-valued SSM still underperforms the complex-valued one.

# E.4.2 SC09 Speech Generation

Autoregressive training largely followed the autoregressive language modeling protocol, such as:
• Weight decay 0.1
• Learning rate warmup for 10% of total steps
• AdamW optimizer with β = (0.9, 0.95)
• Gradient clip value 0.1

We used a learning rate of 0.002 and 200000 training steps at a batch size of 16. The large Mamba model in Table 4 has 15 layers per stage with an outer dimension of D = 96 and pooling factor 4. We note that this dataset is small (training went through 100 epochs) and for this large model, there was significant overfitting of the BPB or NLL.
However, automated metrics of generated samples continued improving throughout training. The models in the architecture ablations in Table 5 all have 8 layers per stage with an outer dimension of D = 64 and pooling factor 4. The S4+MLP block has roughly 2D² + 4D² parameters (expansion factor 2 in the MLP). The Transformer block has 4D² + 2D² parameters (expansion factor 1 in the MLP). The Mamba block has the usual ≈ 6D² parameters. All models have roughly 6M total parameters.
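A quick check of these per-block counts (using the expansion factors quoted above) shows that all three blocks are parameter-matched at a given width D:

```python
D = 64                                   # outer dimension in the Table 5 ablations
s4_mlp_block = 2 * D**2 + 4 * D**2       # S4 block + MLP with expansion factor 2
transformer_block = 4 * D**2 + 2 * D**2  # MHA (QKV + output) + MLP with expansion factor 1
mamba_block = 6 * D**2                   # standard Mamba block

print(s4_mlp_block, transformer_block, mamba_block)  # 24576 24576 24576
```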
# E.5 Efficiency Benchmark

Scan Operation. We compare the core operation of selective SSMs, which is the parallel scan (Section 3.3), against convolution and attention, measured on an A100 80GB PCIe GPU. Note that these do not include the cost of other operations outside of this core operation, such as computing the convolutional kernel in global-convolution models, or computing the QKV projections in attention. As a baseline, we implement a standard parallel scan in PyTorch with no kernel fusion. This requires materializing the parameters A, B, C in HBM. Our scan implementation fuses the discretization step and the parallel scan, avoiding the cost of materializing all the large parameters in HBM. For convolution, we use the standard implementation in PyTorch, which separately performs FFTs on the inputs and the filters, multiplies them in the frequency domain, then performs an inverse FFT to obtain the result. The theoretical complexity is O(L log L) for sequence length L.
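A minimal sketch of this FFT-based convolution in PyTorch (our own illustration of the standard approach, not the benchmarked code):

```python
import torch

def fft_conv(u: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    """u: (..., L) input sequence; k: (L,) convolution kernel."""
    L = u.shape[-1]
    n = 2 * L                              # zero-pad to avoid circular wrap-around
    u_f = torch.fft.rfft(u, n=n)           # FFT of the input
    k_f = torch.fft.rfft(k, n=n)           # FFT of the filter
    y = torch.fft.irfft(u_f * k_f, n=n)    # pointwise multiply, inverse FFT
    return y[..., :L]                      # keep the causal (linear) part
```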
For attention, we compare against the fastest implementation that we are aware of (FlashAttention-2 (Dao 2023)), with causal mask. Note that FlashAttention-2 with causal mask is about 1.7× faster than without causal mask, since approximately only half of the attention entries are computed. We use a batch size of 1 and increase the sequence length from 2^9 = 512, 2^10 ≈ 1K, 2^11 ≈ 2K, up to 2^19 ≈ 500K (some of the baselines run out of memory before reaching 500K). We use a model dimension of D = 1024 and state dimension N = 16. We measure with BF16 inputs, which is the data type most commonly used for large scale training.

End-to-end Inference. We measure the inference throughput of a Mamba 1.4B model and an untrained Mamba 6.9B model, against a standard Transformer (GPT3 architecture) at 1.3B and 6.7B size. We use the standard Transformer implementation in the Huggingface transformers library. We set the prompt length to be 2048 and the generation length to be 128. We vary the batch size from 1, 2, 4, 8, 16, 32, 64, to 128, and measure the time taken to generate 128 tokens. We then calculate the throughput (tokens/s) as batch size × 128 / time taken. We repeat the measurements 3 times and take the average. Measurements are done on an A100 80GB PCIe GPU.
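A minimal sketch of this throughput measurement (generate_fn is a placeholder for the model's generation call, not the benchmarked code):

```python
import time

def measure_throughput(generate_fn, batch_size: int, gen_len: int = 128, repeats: int = 3) -> float:
    """generate_fn(batch_size, gen_len) should run one full generation pass."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        generate_fn(batch_size, gen_len)          # generate gen_len tokens
        times.append(time.perf_counter() - t0)
    avg = sum(times) / len(times)                 # average over repeats
    return batch_size * gen_len / avg             # throughput in tokens/s
```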
Memory Benchmark. The memory usage simply scales proportionally to the size of the activation tensors, as with most deep sequence models. We report measurements of the training memory requirements of 125M models on 1 A100 80GB GPU. Each batch consists of sequences of length 2048. We compare to the most memory-efficient Transformer implementation we are aware of (with kernel fusion from torch.compile and with FlashAttention-2). Table 15 shows that Mamba's memory requirement is comparable to a similar-sized Transformer with an extremely optimized implementation, and we expect further improvement in Mamba's memory footprint in the future.

Table 15: (Memory benchmark.) Mamba's memory footprint is comparable to the most optimized Transformer. Results for 125M models.

| Batch size | Transformer (w/ FlashAttention-2) | Mamba |
|---|---|---|
| 1 | 4.6GB | 4.8GB |
| 2 | 5.2GB | 5.8GB |
| 4 | 6.9GB | 7.3GB |
| 8 | 11.5GB | 12.3GB |
| 16 | 20.7GB | 23.1GB |
| 32 | 34.5GB | 38.2GB |
# UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation

arXiv:2311.15296v1 [cs.CL] 26 Nov 2023

Xun Liang*, Shichao Song*, Simin Niu*, Zhiyu Li†, Feiyu Xiong†, Bo Tang†, Zhaohui Wu†, Dawei He‡, Peng Cheng‡, Zhonghao Wang‡, Haiying Deng‡
*School of Information, Renmin University of China, Beijing, China
†Institute for Advanced Algorithms Research, Shanghai, China
‡State Key Laboratory of Media Convergence Production Technology and Systems, Beijing, China
Email: {xliangs, songshichao, niusimin}@ruc.edu.cn, {lizy, xiongfy, tangb}@iaar.ac.cn, {hedawei, chengpeng, wangzhonghao, denghaiying}@xinhua.org
Abstract—Large language models (LLMs) have emerged as pivotal contributors in contemporary natural language processing and are increasingly being applied across a diverse range of industries. However, these large-scale probabilistic statistical models cannot currently ensure the requisite quality in professional content generation. These models often produce "hallucinated" text, compromising their practical utility in professional contexts. To assess the authentic reliability of LLMs in text generation, numerous initiatives have developed benchmark evaluations for hallucination phenomena. Nevertheless, these benchmarks frequently utilize constrained generation techniques due to cost and temporal constraints. These techniques encompass the use of directed hallucination induction and strategies that deliberately alter authentic text to produce hallucinations. These approaches are not congruent with the unrestricted text generation demanded by real-world applications. Furthermore, a well-established Chinese-language dataset dedicated to the evaluation of hallucinations in text generation is presently lacking. Consequently, we have developed an Unconstrained Hallucination Generation Evaluation (UHGEval) benchmark, designed to compile outputs produced with minimal restrictions by LLMs. Concurrently, we have established a comprehensive benchmark evaluation framework to aid subsequent researchers in undertaking scalable and reproducible experiments. We have also executed extensive experiments, evaluating prominent Chinese language models and the GPT series models to derive professional performance insights regarding hallucination challenges.
[Fig. 1 graphic: four annotated news excerpts, each with its dataset ID and the erroneous span marked, illustrating an organization hallucination, a statistics hallucination, a knowledge hallucination, and a timeline hallucination.]
Fig. 1. Real-world hallucination examples from UHGEval. Using the IDs, you can locate the corresponding original Chinese news articles within our dataset. Note: MOTIE denotes Ministry of Trade, Industry, and Energy.
Index Terms—large language models, LLMs, hallucination, benchmark, unconstrained generation

# I. INTRODUCTION

With the proliferation of extensive textual corpora, the advent of high-performance GPUs, and the refinement of advanced deep learning paradigms, large language models (LLMs) have exhibited unparalleled proficiency in a multitude of natural language processing (NLP) tasks, including language generation, knowledge application, and intricate reasoning. Concurrently, noteworthy advancements have been realized in the domains of human alignment, engagement with external environments, and the manipulation of tools [1].

However, LLMs invariably manifest hallucinations [2]. Hallucination is characterized by generated content that is incongruent with user input, the model's own output context, or factual information. Real-world examples of hallucination from our UHGEval dataset can be observed in Fig. 1.

Owing to reliability concerns, these circumstances markedly hinder the practical deployment of LLMs. Furthermore, in specialized domains like medicine, law, finance, and journalism, hallucination presents a significant challenge to deployment [3], [4]. These fields require stringent standards of content timeliness, accuracy, and logical consistency, attributable to their dynamic and highly specialized characteristics. During the training data collection phase, LLMs may exhibit a deficiency in domain-specific knowledge, yielding outdated content. In the pre-training phase, constraints in model parameters or training methodologies may engender parameter inaccuracies, thwarting the retrieval of accurate content. During the supervised fine-tuning phase, incongruent datasets might yield excessively positive incorrect responses. In the inference phase, the absence of a rollback mechanism can precipitate a cumulative escalation of hallucinations,
undermining the logical integrity of responses [5]. For example, erroneous medical guidance, imprecise legal stipulations, and fabricated journalistic narratives substantially restrict the practical utility of LLMs in real-world contexts [3]. The fabricated news content depicted in Fig. 1 offers NO utility to journalists; on the contrary, the verification and rectification of such content exacts a toll on the valuable time of journalists.

*The authors contribute equally. ✉ Corresponding author.

[Fig. 2 graphic: the UHGEval pipeline, from data collection and pre-processing (beginning text, following text, reference information), through unconstrained hallucination generation by five Chinese LLM engines (e.g. Qwen-14B, ChatGLM2-6B, XinYu-7B), hallucination ranking among five candidates, and automatic labeling with human recheck (max voting), to the evaluation framework with generative, discriminative, and selective evaluators and automated reference checking.]
Fig. 2. The process of creating UHGEval. Steps 1 to 4 regarding the creation of the benchmark dataset are explained in Section II; Step 5, concerning the evaluation framework, is detailed in Section III.
Achieving professional-level generation necessitates confronting the significant challenge of devising novel training methodologies and model architectures. However, prior to these developments, it is crucial to formulate a comprehensive, stringent, and demanding benchmark for the assessment of hallucination in language generation [5], [3]. Without such a benchmark, conducting a comparative evaluation of efforts aimed at controlling hallucination would prove to be arduous. While there have been initiatives to develop benchmarks for hallucination assessment, the majority of these methods employ restricted techniques to produce particular kinds of hallucinated utterances. This approach to generation is at odds with real-world scenarios where hallucinations may arise in unrestricted, spontaneously generated content. For example, HaluEval specifies the type of hallucination in the prompt when generating hallucinated text: "You are trying to answer a question but misunderstand the question context and intention" [6].
Additionally, benchmarks such as HADES annotate hallucinations at a finer granularity by generating token-level hallucinations based on text perturbations [7], but the text perturbation method is still constrained. Ultimately, the majority of benchmarks are centered on the evaluation of hallucinations in English, neglecting the assessment of such phenomena in Chinese. The extensive lexicon of Chinese characters, combined with the complexities introduced by Chinese word segmentation, renders the Chinese hallucination evaluation particularly arduous and deserving of focused scrutiny.

To address the aforementioned challenges, we introduce a novel benchmark for hallucination assessment, as depicted in Fig. 2. The benchmark dataset is comprised of news articles. Selecting texts from this domain is intentional, given that news requires utmost precision in conveying factual information and exhibits minimal tolerance for hallucinations. Constructing an evaluation dataset within this sphere presents a considerable challenge for the majority of LLMs. Concurrently, news articles are of exceptional quality, readily available, and frequently employed as training corpora by a large number of LLMs, guaranteeing impartiality in the evaluation of many LLMs [1]. In light of these factors, we collected a considerable volume of raw news articles, established an efficient, professional-grade hallucination assessment dataset, and formulated an evaluation framework named UHGEval. It is significant to note that our dataset was produced in an entirely unconstrained fashion. We permit models to compose freely and subsequently sift through the content to identify hallucinations.

Our contributions are as follows: (1) The development of an unconstrained hallucination evaluation dataset. Existing methods for constructing datasets often yield biases towards predefined directions, thereby hindering the full simulation of real-world hallucinations. We have created a hallucination evaluation dataset comprising over 5000 items, generated without intervention, closely mirroring real-world scenarios. (2) The establishment of a unified and diverse evaluation framework. Current benchmark methods for hallucination evaluation often exhibit a singular approach and lack task specificity.
We have developed UHGEval, a unified, flexible, and robust evaluation framework that encompasses generative, discriminative, and selective modalities, along with sentence-level and keyword-level granularity. (3) A comprehensive empirical analysis. We conducted detailed experiments with the proposed benchmark on eight prominent Chinese LLMs and three classic GPT series models to explore the credibility of various LLMs. The aforementioned dataset, evaluation framework, and empirical results collectively constitute the UHGEval benchmark, which is openly available on GitHub¹.
# II. THE UHGEVAL BENCHMARK DATASET

A. Data Collection and Pre-processing

To construct the news continuation dataset, we amassed tens of thousands of historical news articles from leading Chinese news websites, covering the period from January 2015 to January 2017, to serve as the foundation for constructing the dataset. It is worth noting that the decision to eschew the inclusion of more recent news articles (e.g., from 2023) was made to better assess the model's understanding of existing knowledge and past news events. Indeed, the knowledge embedded within the training data of existing Chinese LLMs typically encompasses information pertaining to significant news between 2015 and 2017 [1].

Considering the different categories of news, such as sports, education, science, and society, the generated hallucinations typically exhibit certain differences. Therefore, when curating the initial news collection for continuation, we endeavored to ensure that the distribution of the collection aligns with the original distribution by randomly sampling from the entire news dataset. Furthermore, we have categorized the collected news examples into four major types: document-intensive, number-intensive, knowledge-intensive, and general news, as shown in Table I. We hypothesize that the likelihood of generating hallucinations varies for different types of news. For example, number-intensive news frequently contains various numerical data, such as years, scores, and values, which may predispose the model to fabricating numbers or introducing minor deviations. Document-intensive news, on the other hand, primarily references official documents, such as factual policy documents, official statements, standard explanations, and legal clauses. In this case, the model may be inclined to fabricate specific policy or document names, or create detailed but fictional policy content. Knowledge-intensive news is characterized by an emphasis on enduring truths and analytical reasoning, which can render the model susceptible to flawed reasoning or the retrieval of incorrect facts. In addition to these three types, we also categorize culturally relevant general news as a separate category for experimental control.

In the data pre-processing stage, we divide a complete news article into three parts: the beginning text, the following text, and the reference information. The beginning text serves to guide the model in generating the continuation and is typically the opening portion of the news.
During evaluation, the LLM is required to generate content following the beginning text. The following text comprises the subsequent sentences in the news article and serves as the ground truth for the continuation task. Finally, all the remaining text, after the beginning text is excluded, serves as a source of reference information. This section provides reference information for labeling and also acts as the reference text for the reference-based evaluation.

¹https://github.com/IAAR-Shanghai/UHGEval

TABLE I: Statistics of collected news

| Type | Categories | Proportion |
|---|---|---|
| DOC | Politics, Law, Military, Education | 27.52% |
| NUM | Sports, Economy, Market | 43.34% |
| KNO | Science, Technology, Healthcare | 6.55% |
| GEN | Society, Culture, Arts, Entertainment, Weather, Environmental Protection, Disasters, Accidents | 22.59% |

Note: In the table, DOC denotes document-intensive news; KNO denotes knowledge-intensive news; NUM denotes number-intensive news; GEN denotes general news. The same as below.
Filtering Settings. To ensure the overall quality of the final evaluation dataset, we have implemented the following filters: We consider only the categories listed in Table I, which correspond to the most frequently occurring categories in the original news collection. For news length, we set parameters such that the body length of the selected news falls between 630 and 870 characters, while the beginning text spans between 80 and 120 characters and consists of 2 to 5 sentences. These length parameters reflect the average values in the original news collection and were chosen to avoid overburdening the annotation process at a later stage.
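A minimal sketch of these length filters (the field names are assumptions for illustration, not the paper's code):

```python
def passes_filters(news) -> bool:
    """news is assumed to carry body, beginning, and beginning_sentences fields."""
    return (
        630 <= len(news.body) <= 870            # body length in characters
        and 80 <= len(news.beginning) <= 120    # beginning text length
        and 2 <= news.beginning_sentences <= 5  # sentence count of the beginning
    )
```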
B. Unconstrained Hallucination Generation

Historically, benchmarks for evaluating hallucination have predominantly relied on a single LLM to produce the hallucinated dataset. Notable examples include HaluEval [6] and PHD [8], which exclusively utilize ChatGPT, and FActScore [9] and FACTOR [10], which solely employ InstructGPT [11]. In contrast, our methodology incorporates a suite of five distinct Chinese LLMs to generate hallucinated content. These models include ChatGLM2-6B [12], Baichuan2-13B [13], Qwen-14B [14], InternLM-20B [15], and the Xinyu series model, Xinyu-7B. Xinyu-7B is an augmented large-scale language model derived from the foundational BloomZ-7B [16] through continued pre-training, news-specific fine-tuning, and alignment optimization. Furthermore, Xinyu2-70B is developed based on the open-source LLaMA2-70B [17] framework, incorporating expansions to the Chinese lexicon, ongoing pre-training, and news-specific fine-tuning, thereby endowing it with a robust foundational capability in the news domain. The Xinyu series models are the results of a collaborative research and development effort between the Institute for Advanced Algorithms Research, Shanghai (IAAR, SH), and the State Key Laboratory of Media Convergence Production Technology and Systems of the Xinhua News Agency. Xinyu-7B and Xinyu2-70B will also be utilized in the experiment phase.
Our approach engenders a more heterogeneous generation of hallucinations, mitigating the bias that may arise from the use of a single model and promoting equity within the dataset. This is due to the varying architectures and training corpora inherent to different LLMs. Furthermore, we have adopted an unconstrained generation methodology for the continuation of natural language content. This entails directly inputting the text to be continued into the model without any restrictive prompt, thereby obtaining organic results. For each input example, we concurrently generate five candidate continuations. To maintain consistency across all models, we employ uniform parameter settings, with a temperature coefficient set at 1.0 and max new tokens limited to 1024.
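For illustration, a minimal sketch of such unconstrained sampling with the Hugging Face transformers API (the checkpoint name and input are placeholders; this is not the paper's code):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen-14B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-14B-Chat", trust_remote_code=True)

beginning_text = "..."  # the news beginning to continue; no restrictive prompt added

inputs = tok(beginning_text, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.0,          # uniform settings across all engines
    max_new_tokens=1024,
    num_return_sequences=5,   # five candidate continuations per input
)
prompt_len = inputs["input_ids"].shape[1]
continuations = [tok.decode(o[prompt_len:], skip_special_tokens=True) for o in outputs]
```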
C. Hallucination Ranking

Given the unconstrained nature of our generation paradigm, the task of discerning whether the generated content is indeed hallucinated presents a significant challenge. Upon generating the continuations, a straightforward reliance on human verification is infeasible. An exclusive dependence on human annotation would incur substantial costs and may not be sustainable at scale, whereas a purely machine-based approach, such as utilizing GPT4, could potentially yield less accurate results. To navigate these complexities, we have adopted a two-stage annotation methodology. This approach begins with an initial phase of hallucination ranking, which is designed to preliminarily sort the generated content based on the likelihood of hallucination. The ranking is then followed by a combination of automatic labeling and human recheck. The integration of hallucination ranking and machine labeling serves a pivotal role in streamlining the subsequent human verification process. This hybrid approach aims to enhance the efficiency and accuracy of human checks, effectively bridging the gap between the scalability of automated processes and the critical discernment of human judgment.

Hallucination ranking is a crucial step in the process of evaluating and selecting the most appropriate continuation from a set of candidate continuations generated by LLMs. The objective of this step is to identify a continuation that not only demonstrates high quality in terms of coherence and readability but also includes an appropriate level of hallucination:
misinformation or fabrications that are not supported by the input or real-world knowledge. To strike this balance, the selection process takes into account two primary dimensions:

Fluency. This refers to the naturalness and readability of the text. A fluent text should read smoothly, be grammatically correct, and make logical sense in the context of the continuation. To assess fluency, a reward model developed by the Institute for Advanced Algorithms Research (IAAR) is employed. This model is trained to evaluate the quality of text and can assign scores to each continuation based on its fluency. By using this model, the top three continuations that exhibit the highest fluency are retained for further consideration.
Likelihood of Hallucination Occurrence. This dimension evaluates the extent to which the continuation may contain
hallucinated content. For hallucination occurrence likelihood ranking, we evaluate the lexical correlation between the generated continuation and the reference information. The lower the correlation, the more likely hallucinations are to occur. Despite existing lexical metrics based on n-gram coverage, such as BLEU [18] and ROUGE [19], we believe that these rule-based methods may not effectively discover hallucinated keywords. Therefore, we propose the keyword precision (kwPrec) metric. This approach initially uses an LLM (here, we use GPT3.5-Turbo) to extract keywords from the continuation and determine whether these keywords have a match in the reference information. The ratio of all matches to the total keywords is then calculated. Since LLMs often extract appropriate keywords more effectively, kwPrec focuses more on factual relevance rather than expressional relevance. Fig. 3 illustrates the tokens segmented by our method compared to those obtained by BLEU-4 and ROUGE-L.

[Fig. 3 graphic: the same sentence tokenized under BLEU-4, ROUGE-L, and kwPrec.]
Fig. 3. Tokenization results for BLEU-4, ROUGE-L, and kwPrec, using news id=num_000432 as an example. The meaning of the sentence is: Jiangsu is one of the most developed provinces in China for green food production. Note: We ignore tokens that cause overlap.

After implementing this method, we calculate the kwPrec for each of the three candidate continuations, selecting the one with the lowest value as the final candidate. Through the screening in these two stages, we can ensure that, in the worst case scenario, the final candidate continuation ranks third in fluency and third in the likelihood of hallucination occurrence, achieving a balanced level.

By considering both fluency and the likelihood of hallucination, the process aims to filter out continuations that are either too nonsensical or too conservative (lacking any hallucinated content). The ideal candidate continuation would be one that is coherent and engaging but also contains a detectable level of hallucination, which can then be used for further analysis, such as studying the model's tendencies to hallucinate or for training systems to detect and mitigate such hallucinations. The final candidate continuations will undergo further annotation to determine the presence and degree of hallucination, which can involve additional automated tools and human judgment. This multi-faceted approach helps ensure that the final selected continuation is both high-quality and relevant for the purposes of the hallucination evaluation benchmark.
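A minimal sketch of kwPrec as described above; the keyword-extraction step, which the paper delegates to GPT3.5-Turbo, is stubbed out as a placeholder:

```python
def extract_keywords(text: str) -> list[str]:
    # Placeholder: the paper prompts GPT3.5-Turbo for keyword extraction;
    # any keyword extractor can stand in for a quick test.
    return text.split()

def kw_prec(continuation: str, reference: str) -> float:
    """Ratio of continuation keywords that have a match in the reference."""
    keywords = extract_keywords(continuation)
    if not keywords:
        return 0.0
    matched = sum(1 for kw in keywords if kw in reference)
    return matched / len(keywords)
```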
D. Automatic Labeling and Human Recheck

Through the application of hallucination ranking, we can identify continuations that are both articulately expressed and likely to contain hallucinations. To detect continuations with confirmed hallucinations, we propose an annotation scheme that utilizes keywords, which includes automatic labeling and subsequent human verification, as shown in Fig. 4.

[Fig. 4 graphic: a preceding sentence and an LLM-generated continuation, with hallucinated elements extracted and labeled, automatically checked by GPT-4 against the reference information, and re-checked by a human.]
Fig. 4. The process of automatic labeling and human recheck.

Automatic labeling. We utilize the keywords identified by GPT3.5-Turbo from the candidate continuations, similarly to the process used in the computation of kwPrec previously. These keywords act as the focal points for subsequent verification. Thereafter, we employ GPT4-0613 [20] to perform annotation on these keywords. GPT4-0613 evaluates the validity of the keywords in the continuations by conducting a cross-reference with the provided original news and provides explanations for any detected unreasonable keywords.

Human recheck. We undertake a manual, one-to-one verification process by analyzing the annotated results and explanations provided by GPT4-0613 against the original news. This step is implemented to ensure the accuracy of the machine-generated annotations. In the end, instances verified as accurate by annotators comprise the final UHGEval dataset.

However, the keyword-based annotation scheme exhibits inherent limitations.
Languages exhibit a dependency structure among words [21]. For instance, in the phrase "The rainbow is black," the words "rainbow" and "black" exhibit interdependence. One could contend that "black" is incorrect, while another could maintain that "rainbow" is the erroneous term, given that "night" is typically described as black. To address the annotation challenges stemming from language dependency structures, we have adopted the Least Hallucination Principle: if a set of words can be selected, and their replacement with contextually appropriate words yields a semantically coherent sentence, then such a set of words is
designated as a hallucinated word group. The words selected for annotation must meet the condition of comprising the minimal number of words in the group, as illustrated in Equation 1. In the equation, W is the set of keywords in a sentence, w is the hallucinated word group, correct(·) is the correction function that modifies hallucinated words to non-hallucinated words, and hallucinated(·) assesses whether a sentence composed of a set of keywords is hallucinated.

min |w|  s.t.  w ⊆ W,  w′ = correct(w),  false = hallucinated(W − w + w′)    (1)

TABLE II: Dataset basic statistics

| | DOC | KNO | NUM | GEN |
|---|---|---|---|---|
| #news | 1242 | 320 | 2431 | 1148 |
| avg. #hallu. kw. | 2.15 | 1.99 | 2.54 | 2.12 |
| avg. #kw. | 8.43 | 8.09 | 8.07 | 8.17 |
| #hallu. kw. / #kw. | 25.47% | 24.61% | 31.44% | 26.00% |
| avg. len. contn. | 46.77 | 48.36 | 44.47 | 45.97 |
| avg. len. begin. | 102.15 | 102.66 | 103.20 | 102.86 |
| avg. len. refer. | 634.17 | 618.90 | 624.47 | 632.47 |

Note: In the table, # denotes quantity, avg. denotes average, len. denotes length, contn. denotes hallucinated continuations, begin. denotes news beginnings, and refer. denotes reference information. The same as below.
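A minimal sketch of this minimization; hallucinated() and correct() stand in for the GPT-4 and human judgments and are assumptions for illustration:

```python
from itertools import combinations

def least_hallucinated_group(keywords, hallucinated, correct):
    """Smallest keyword group whose correction de-hallucinates the sentence.

    correct(group) is assumed to return {hallucinated keyword: replacement};
    hallucinated(keyword_list) is assumed to return a bool.
    """
    for size in range(1, len(keywords) + 1):      # try the smallest |w| first
        for group in combinations(keywords, size):
            fixes = correct(group)
            candidate = [fixes.get(kw, kw) for kw in keywords]  # W - w + w'
            if not hallucinated(candidate):
                return group                      # minimal hallucinated group
    return tuple(keywords)                        # fallback: all keywords
```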
In accordance with this principle, within the phrase "Journey to the West is an American novel and one of the Four Great Classics," the word "American" would be marked for annotation, as altering this single keyword to "Chinese" dispels the hallucination throughout the sentence.

Additionally, we acknowledge that the task of hallucination annotation may become somewhat tedious. Consequently, annotators are integrated throughout the entire process, participating in discussions instead of solely evaluating the accuracy of machine annotations. This approach also yields benefits for our work. For example, an annotator with a journalism background offered valuable professional insights into pinpointing news-related hallucinations, emphasizing that fact increment is a critical aspect of news writing.
E. Data Statistics

Starting with 17,714 candidate hallucinated continuations, we curated a dataset of 5,141 hallucinated continuations, as detailed in the basic statistics in Table II. Additionally, we developed a conversion rate chart to depict the transition from candidate hallucinations to the final dataset, as depicted in Fig. 5. The conversion rate can be interpreted as the likelihood of hallucinations occurring across various categories. Our observations indicate a higher likelihood of hallucinations in number-intensive and general news, whereas this likelihood is reduced in knowledge-intensive and document-intensive news.
[Fig. 5 graphic: per-category candidate counts out of 17,714 total and their conversion rates into the final dataset, e.g. 1,148 (29.52%) for general news.]
Fig. 5. Conversion rates from candidates to hallucinations.

By analyzing the hallucinated word cloud depicted in Fig. 6 for each news category, we can draw the following conclusions: Number-intensive news often includes numeric values that are challenging to remember, like 0.09% and 6:3, which pose difficulties for both LLMs and humans. General news encompasses a diverse vocabulary, featuring terms such as "social media" and "friendship,"
which are often deemed less critical and thus challenging to incorporate into the training corpora of many LLMs. Knowledge-intensive news frequently features terms such as "according to incomplete statistics" and "key technology," which are prevalent in technical literature. However, LLMs may not always use these terms appropriately. Document-intensive news often contains terms associated with official statements, such as "representation," "president," and "spokesperson." This suggests that LLMs are susceptible to introducing unauthorized alterations to the content of documents.
Fig. 6. Word clouds of hallucinated keywords in different types of news

# III. EXPERIMENTS

# A. Models

Given that our dataset is tailored for the Chinese language generation domain, we selected eight widely-used Chinese LLMs and three foundational models from OpenAI, as detailed in Table III. These include eight base models: GPT Base, GLM Base, BLOOMZ Base, InternLM Base, Baichuan2 Base, Qwen Base, Aquila2 Base, and LLaMA2 Base.

TABLE III
MODELS SORTED BY RELEASE DATE
| Model | Parm. | Type | Publisher | Release |
|---|---|---|---|---|
| GPT3.5-Turbo [1] | 175B* | Chat | OpenAI | 2023.03* |
| GPT4-0613 [20] | NaN | Chat | OpenAI | 2023.06 |
| ChatGLM2 [12] | 6B | Chat | Tsinghua | 2023.06 |
| Xinyu | 7B | Chat | IAAR&Xinhua | 2023.06 |
| InternLM [15] | 20B | Chat | ShLab | 2023.07 |
| Baichuan2 [13] | 13B | Chat | Baichuan Inc. | 2023.09 |
| Baichuan2 [13] | 53B | Chat | Baichuan Inc. | 2023.09 |
| Qwen [14] | 14B | Chat | Alibaba | 2023.09 |
| Aquila2 [22] | 34B | Chat | BAAI | 2023.10 |
| Xinyu2 | 70B | Chat | IAAR&Xinhua | 2023.10 |
| GPT4-1106² | NaN | Chat | OpenAI | 2023.11 |

Note: In the table, asterisk (*) denotes estimated value, NaN denotes no public data available, and 175B denotes 175 billion.

2 https://openai.com/blog/new-models-and-developer-products-announced-at-devday
GPT represents a series of LLMs developed by OpenAI [20]. In this study, GPT3.5-Turbo, GPT4-0613, and GPT4-1106 are utilized. GLM constitutes a pre-training framework proposed by Tsinghua University [12], and the ChatGLM2-6B chat model is employed. BLOOMZ is a variant derived via multitask prompted fine-tuning (MTF) of the pre-trained BLOOM model [16], and following supplementary training, it is integrated into Xinyu-7B. InternLM serves as an open-source, lightweight training framework, with its development team releasing a spectrum of models utilizing this framework [15]; the InternLM-20B open-source chat model is utilized in the present work. Baichuan2 comprises a series of expansive, multilingual base language models [13], with both the open-source Baichuan2-13B chat model and the closed-source Baichuan2-53B model being employed in this investigation. Qwen encompasses a language model series characterized by distinct models with varying parameter counts [14], and the Qwen-14B open-source chat model is utilized in the current study. Aquila2 represents a language model series devised by BAAI, noted for surpassing comparable models in terms of performance [22]; the Aquila2-34B chat model is employed in this research. LLaMA2 constitutes a suite of pre-trained and fine-tuned LLMs, with scales ranging from 7 billion to 70 billion parameters [17]. Following additional training, LLaMA2-70B is incorporated into Xinyu2-70B.
# B. Evaluation Method

For the evaluation of hallucinations in LLMs, the task is decomposed into three principal dimensions: form, metric, and granularity. Form concerns the manner in which the model interacts with the evaluation dataset; metric refers to the precise computational approach utilized for performance assessment; and granularity signifies the depth of detail considered in the evaluation of hallucinations.

In terms of form, evaluation encompasses human evaluation, discriminative evaluation, selective evaluation, and generative evaluation, among others. Human evaluation entails the direct application of human judgment to determine if the model's output contains hallucinations, representing a critical evaluation form [23].
However, the drawbacks of this approach are evident: evaluating in excess of 5,000 data points is tantamount to creating a new dataset, with the associated time and financial expenditures proving prohibitive.

Discriminative evaluation enables LLMs to respond with binary answers of "yes" or "no" [6], [24]. Specifically, this evaluation modality involves presenting the LLM under scrutiny with an initial text followed by a continuation that may or may not include hallucinations. The LLM is tasked with producing a verdict as to the presence of hallucinations. Owing to the efficacy of few-shot prompting, this evaluation paradigm is relatively uncomplicated for LLMs to administer, as it facilitates the elicitation of the requisite responses. However, this method depends solely on the LLM's ability to draw upon the knowledge encoded within its parameters, necessitating the concurrent application of knowledge and reasoning, and thus requiring a robust foundational model capacity.

Similar to discriminative evaluation, selective evaluation allows LLMs to tackle multiple-choice questions by choosing between option A or B, as exemplified by PandaLM [25]. Specifically, in selective evaluation, the LLM under evaluation is presented with an initial text followed by two continuations: one that includes hallucinations and another that does not. The LLM's objective is to identify which of the two is hallucinated. This assessment method offers the LLM more contextual information than discriminative evaluation, thereby alleviating the burden of fact-checking and lessening the dependence on retrieving facts from its parameters. Consequently, this reduces the level of difficulty for the LLM.

However, both discriminative and selective evaluations encounter a substantial challenge. They are predicated on the assumption that
"LLMs' capacity to produce reliable text is contingent upon their discernment between hallucinated and non-hallucinated content." These methods do not simulate the evaluation of the model's output for hallucinations. Consequently, generative evaluation is crucial, as it directly evaluates the presence of hallucinations in the text generated by the LLM. Specifically, the LLM under evaluation is provided with an initial text and is then tasked with generating a continuation. Subsequently, various reference-based techniques are utilized to determine if the continuation includes hallucinations. However, the challenge arises from the fact that it is not feasible to automatically and accurately ascertain if newly generated text is hallucinated; if it were, annotated datasets would be redundant. In scenarios of unrestrained text generation, this issue becomes increasingly complex. This complexity stems from the fact that text generated without constraints may introduce a multitude of entities and facts absent in the reference material, complicating the verification of their accuracy. Despite these hurdles, generative evaluation continues to be a predominant strategy in Natural Language Generation (NLG) tasks [26].

In terms of metrics, these include classification metrics such as accuracy, precision, recall, and others, which are applicable to human evaluation, discriminative evaluation, and selective evaluation. Generative evaluation, on the other hand, encompasses both lexical and semantic metrics. Lexical metrics evaluate the extent of token overlap between the generated text and the reference information, including metrics such as BLEU [18], ROUGE [19], and the newly proposed kwPrec. Semantic metrics gauge the similarity in meaning between sentences, with examples including BERTScore [27], GPT-judge [28], and GPTScore [29], among others.

In terms of granularity, evaluations can be conducted at both the sentence and keyword levels. Owing to our annotation methodology, our dataset is marked at the keyword level to signify instances of hallucinations. This approach affords a broader spectrum of possibilities for configuring the evaluation task, enabling the evaluated model to address the presence of hallucinations at either the sentence level or the keyword level.
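To make the machine-run forms concrete, the sketch below shows how a single annotated item could be turned into discriminative, selective, and generative queries. The field names and prompt wording are illustrative assumptions, not the benchmark's actual schema:

```python
import random
from dataclasses import dataclass

@dataclass
class Item:
    beginning: str   # the news beginning shown to the model
    hallu_cont: str  # annotated hallucinated continuation
    clean_cont: str  # non-hallucinated (reference) continuation
    reference: str   # full reference information

def discriminative_query(item: Item, use_hallu: bool) -> str:
    # Binary judgment: does the shown continuation hallucinate?
    cont = item.hallu_cont if use_hallu else item.clean_cont
    return (f"News beginning: {item.beginning}\n"
            f"Continuation: {cont}\n"
            "Does the continuation contain hallucinations? Answer yes or no.")

def selective_query(item: Item) -> tuple[str, str]:
    # Two continuations, order randomized; the model picks the hallucinated one.
    opts = [("A", item.hallu_cont), ("B", item.clean_cont)]
    random.shuffle(opts)
    answer = next(tag for tag, text in opts if text == item.hallu_cont)
    body = "\n".join(f"Option {tag}: {text}" for tag, text in opts)
    prompt = (f"News beginning: {item.beginning}\n{body}\n"
              "Which option is hallucinated? Answer A or B.")
    return prompt, answer

def generative_query(item: Item) -> str:
    # Unconstrained continuation; judged afterwards against item.reference.
    return f"Continue the following news item:\n{item.beginning}"
```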
# C. Evaluation Framework

In order to accommodate different forms of evaluation methods, we have developed a data-secure, easy-to-extend, and easy-to-use evaluation framework, as illustrated in Fig. 7.

[Figure: four-layer architecture: interface layer (demo, run), core layer (experiment, statistical analysis), evaluator layer (generative, discriminative, selective), and dependency layer (data hub, LLMs hub, metrics).]

Fig. 7. Evaluation Framework

The framework comprises four ascending layers: the dependency layer, the evaluator layer, the core layer, and the interface layer. The dependency layer delineates the requisite underlying modules for the evaluation framework, encompassing datasets, LLM hubs, and diverse metrics. Notably, all underlying modules are extensible; datasets may be supplanted with customized versions, LLMs sourced from APIs or platforms such as Hugging Face³, and metrics tailored individually. The evaluator layer, constituting the second tier, centers on an abstract class, Evaluator, and its various implementations. Within this layer, three distinct types are implemented: GenerativeEvaluator, DiscriminativeEvaluator, and SelectiveEvaluator. Users may also engineer custom evaluators, contingent upon adherence to the interface specifications of the abstract class, necessitating merely three function overloads. The core layer, representing the third stratum, comprises two principal modules: experiment.py and analyst.py. The former module facilitates experiments involving multiple LLMs, evaluators, and processes, whereas the latter module is tasked with the statistical analysis of experimental outcomes. The interface layer, constituting the final tier, orchestrates the user's interaction with UHGEval.

3 https://huggingface.co/models
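As a rough illustration of the evaluator layer, a custom evaluator might subclass Evaluator and override three methods. The method names below are assumptions made for this sketch, not the framework's actual interface:

```python
from abc import ABC, abstractmethod

class Evaluator(ABC):
    """Abstract base: one evaluation form over one dataset and one LLM."""

    def __init__(self, model, dataset):
        self.model = model
        self.dataset = dataset

    @abstractmethod
    def build_prompt(self, item) -> str:
        """Turn a dataset item into the query shown to the model."""

    @abstractmethod
    def parse_response(self, item, response: str):
        """Extract a verdict (and rationale) from the raw model output."""

    @abstractmethod
    def score(self, results) -> dict:
        """Aggregate per-item results into metrics such as accuracy."""

class KeywordDiscriminativeEvaluator(Evaluator):
    # Hypothetical example: keyword-level yes/no judgments.
    def build_prompt(self, item):
        return (f"Beginning: {item['begin']}\nKeyword: {item['keyword']}\n"
                "Is this keyword hallucinated? Answer yes or no, then explain.")

    def parse_response(self, item, response):
        verdict = response.strip().lower().startswith("yes")
        return {"id": item["id"], "pred": verdict, "gold": item["is_hallu"]}

    def score(self, results):
        correct = sum(r["pred"] == r["gold"] for r in results)
        return {"accuracy": correct / max(len(results), 1)}
```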
A concise 20-line demonstration is provided to expedite user initiation, complemented by run.py, which is capable of initiating experiments via the command line. UHGEval is both intuitive and secure for users, offering efficient usage while concurrently ensuring the integrity of experimental results through robust resistance to exceptions and support for resuming evaluations after unexpected interruptions. For developers and researchers, the modules within the dependency and evaluator layers are fully interchangeable, thereby affording considerable flexibility for expansion.
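The resume-after-interruption behavior can be approximated with a per-item checkpoint file. This is a minimal sketch of the idea, assuming the evaluator shape outlined above and a model object with a generate() method; it is not the framework's actual persistence code:

```python
import json, os

def run_resumable(evaluator, out_path="results.jsonl"):
    """Evaluate all items, skipping those already recorded in out_path."""
    done = set()
    if os.path.exists(out_path):
        with open(out_path, encoding="utf-8") as f:
            done = {json.loads(line)["id"] for line in f}

    with open(out_path, "a", encoding="utf-8") as f:
        for item in evaluator.dataset:
            if item["id"] in done:
                continue  # already evaluated before the interruption
            try:
                response = evaluator.model.generate(evaluator.build_prompt(item))
                result = evaluator.parse_response(item, response)
            except Exception as exc:  # tolerate transient API failures
                result = {"id": item["id"], "error": str(exc)}
            f.write(json.dumps(result, ensure_ascii=False) + "\n")
            f.flush()  # persist immediately so a crash loses at most one item
```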
# D. Experimental Setup

To establish a robust experimental framework, our configuration includes prompt engineering, ensuring equilibrium between positive and negative examples, optimizing hyperparameters, and configuring evaluators.

Prompt engineering. The prompt engineering technique employed is "intent + instruction + 3-shot (explainable) prompting." Intent delineates the LLM's role, instruction outlines the task for the LLM to execute, and the prompt incorporates three examples to aid the LLM's few-shot learning [1]. Furthermore, political content in examples is prohibited to adhere to content policies from model service providers. Explainable prompting entails not merely acquiring results but also eliciting the model's rationale behind its responses, regardless of the impact on evaluation speed and cost. In discriminative and selective evaluations, it is indiscernible whether the model is conjecturing the outcome or discerning the presence of hallucinations. Consequently, the use of explainable prompting enables the validation of the model's confidence through the analysis of experimental results.

Balancing positive and negative examples. To guarantee the reliability of experimental outcomes for all LLMs, we meticulously balance examples in discriminative and selective evaluations. Specifically, the LLM under evaluation will encounter an equal number of examples with and without hallucinations. This approach addresses the tendency of some models to learn patterns from the three examples in the prompts and produce conjectural rather than reasoned responses when making judgments. Such a tendency can introduce a considerable bias towards certain outcomes. An imbalance could complicate the analysis of experimental outcomes.
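A rough sketch of how such a prompt might be assembled follows; the wording and field names are illustrative, and the three worked examples would in practice be curated per task:

```python
def build_prompt(intent, instruction, shots, query):
    """Assemble an 'intent + instruction + 3-shot (explainable)' prompt."""
    parts = [intent, instruction]
    for shot in shots:  # three worked examples, each with a rationale
        parts.append(f"Input: {shot['input']}\n"
                     f"Answer: {shot['answer']}\n"
                     f"Rationale: {shot['rationale']}")
    parts.append(f"Input: {query}\nAnswer:")
    return "\n\n".join(parts)

prompt = build_prompt(
    intent="You are a meticulous fact-checker for Chinese news.",
    instruction=("Judge whether the continuation contains hallucinations. "
                 "Answer yes or no, then explain your reasoning."),
    shots=[
        {"input": "...", "answer": "yes", "rationale": "..."},
        {"input": "...", "answer": "no",  "rationale": "..."},
        {"input": "...", "answer": "yes", "rationale": "..."},
    ],
    query="<news beginning and continuation to judge>",
)
```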
Hyperparameter settings. Managing parameters for heterogeneous LLMs is a multifaceted endeavor, as different LLMs feature unique interface designs, and the same parameters can have varying implications across LLMs. For example, the level of determinism influenced by the temperature parameter varies. Despite these challenges, we commit to the principle of "guaranteeing overall output determinism while allowing for slight randomness, and aiming for consistent parameter settings across models." Consequently, we configured parameters including temperature, top_p, top_k [1], and random seed. To ensure output determinism and improve reproducibility, we set the temperature to 0.1. Considering that OpenAI models advise against adjusting temperature and top_p simultaneously, we minimally altered top_p, setting it at 0.9. We set top_k to 5, which is effective for certain models. To further enhance reproducibility, we established a seed for random number generators, setting it at 22.
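In code, this policy amounts to one shared configuration mapped onto each provider's parameter names. A minimal sketch, with the per-provider key handling assumed for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GenConfig:
    temperature: float = 0.1  # near-deterministic output
    top_p: float = 0.9        # minimally altered, per OpenAI guidance
    top_k: int = 5            # honored only by models that support it
    seed: int = 22            # fixed RNG seed for reproducibility

def to_provider_kwargs(cfg: GenConfig, provider: str) -> dict:
    """Map the shared config onto a provider's (assumed) parameter names."""
    kwargs = {"temperature": cfg.temperature, "top_p": cfg.top_p}
    if provider != "openai":  # e.g. local models that accept top_k
        kwargs["top_k"] = cfg.top_k
    kwargs["seed"] = cfg.seed
    return kwargs
```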
Evaluator settings. Discriminative evaluation encompasses assessments at two levels of granularity: sentence-level and keyword-level. Prompt design for both levels utilizes the "intent + instruction + 3-shot (explainable) prompting" approach. Furthermore, we maintain a balanced representation of positive and negative examples at both levels. For discriminative evaluation, accuracy serves as the metric. Selective evaluation adheres to the identical prompt design. Each evaluated LLM is presented with one positive and one negative example for every news item. To uphold the integrity of the evaluation, the order of positive and negative examples is randomly alternated with a 50% chance. Accuracy is also employed as the evaluation metric. The generative evaluation's prompt design adheres to the principle of UHG. Evaluation metrics comprise 4-gram BLEU (BLEU-4), longest common subsequence-based ROUGE (ROUGE-L), kwPrec, and BERTScore.
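Of these metrics, only kwPrec is introduced in this work: it is precision-oriented, measuring how many keywords in the generated continuation are covered by the reference information. A minimal sketch, assuming some external keyword extractor as a placeholder dependency:

```python
def kw_prec(generated: str, reference: str, extract_keywords) -> float:
    """Fraction of generated keywords that also appear in the reference.

    extract_keywords is an assumed dependency (e.g. a Chinese keyword
    extractor); any function returning salient spans would do here.
    Because the score normalizes by the number of generated keywords,
    a very short continuation can score high.
    """
    kws = extract_keywords(generated)
    if not kws:
        return 0.0
    covered = sum(1 for kw in kws if kw in reference)
    return covered / len(kws)

# Toy usage with a whitespace "extractor" standing in for a real one:
score = kw_prec("quarterly profit rose 12%",
                "The company reported that quarterly profit rose 12% ...",
                extract_keywords=str.split)
```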
# E. Results and Analysis

Results are presented in Table IV, Table V, and Table VI.

Discriminative evaluation. Initially, the GPT series models' performance is notably superior. In the keyword-level assessment, GPT4-0613 and GPT4-1106 respectively achieve the top two rankings. At the sentence level, GPT4-0613 and GPT4-1106 respectively attain the first and second spots. As previously hypothesized, discriminative evaluation requires robust foundational capabilities from LLMs, such as knowledge recall, utilization, and judgment. The GPT series models markedly surpass other models, showcasing their formidable foundational capabilities. Moreover, a comparison of experimental outcomes at the keyword and sentence levels reveals that accuracy is generally superior at the keyword level. This could stem from the fact that the hallucinated continuations in our dataset exhibit sufficient fluency, aligning with the fluency distribution of LLM outputs. This can potentially confuse the evaluated LLM, complicating the judgment of the continuation's authenticity.
Conversely, keywords bypass fluency concerns, rendering keyword-level evaluation more amenable to LLMs. This observation implies that detecting hallucinations could be more dependable at the keyword level than at the sentence level.

Selective evaluation. Firstly, GPT4-1106 clinches the top spot, reaffirming the formidable foundational capabilities of the GPT series models. Concurrently, Xinyu2-70B attains second place, excelling as a model trained on the Chinese news corpus. This achievement, to a degree, confirms the merit of domain-specific LLMs. Secondly, when comparing the outcomes of the selective evaluation with those of the discriminative evaluation at the sentence level, most LLMs exhibit improved accuracy. This is consistent with our prior conjecture that furnishing LLMs with more contrasting information alleviates the demand on the model's fact recall, thus diminishing the challenge of selective evaluation. Therefore, we posit that selective evaluation is comparatively simpler for LLMs. Thirdly, a decline is observed in discriminative evaluation outcomes from GPT4-0613 to GPT4-1106, whereas selective evaluation outcomes register a notable increase of around 5%. This substantiates the "seesaw phenomenon," wherein certain capabilities are enhanced while others may regress, in tandem with the model's upgrade [30]. This suggests that the decision to either enhance a single capability individually or to balance multiple capabilities is critical.

TABLE IV
DISCRIMINATIVE (KEYWORD AND SENTENCE LEVEL) AND SELECTIVE EVALUATION RESULTS
| Model | Discriminative-Keyword avg. acc. | avg. #kws | #valid | Discriminative-Sentence avg. acc. | #valid | Selective acc. | #valid |
|---|---|---|---|---|---|---|---|
| Aquila-34B | 53.62% | 3.00 | 3719 | 49.86% | 5009 | 54.29% | 4319 |
| Baichuan2-13B | 51.63% | 3.128 | 4478 | 46.88% | 5047 | 50.23% | 5130 |
| Baichuan2-53B | 52.13% | 2.98 | 1656 | 50.81% | 1478 | 54.67% | 4443 |
| ChatGLM2-6B | 50.80% | 3.10 | 4289 | 43.87% | 5130 | 43.59% | 5130 |
| GPT3.5-Turbo | 53.72% | 3.08 | 4183 | 50.02% | 5039 | 49.03% | 5103 |
| GPT4-0613 | **70.04%** | 3.07 | 4100 | **57.42%** | 5024 | 55.20% | 5047 |
| GPT4-1106 | *69.48%* | 3.10 | 4189 | *57.38%* | 4903 | **60.35%** | 4752 |
| InternLM-20B | 50.92% | 3.10 | 4388 | 51.01% | 5130 | 49.43% | 5130 |
| Qwen-14B | 52.86% | 3.125 | 4478 | 50.58% | 5130 | 54.74% | 5130 |
| Xinyu-7B | 49.58% | 3.12 | 4451 | 48.66% | 5014 | 50.58% | 5130 |
| Xinyu2-70B | 52.94% | 3.12 | 4482 | 55.04% | 5128 | *57.93%* | 5129 |

Note: In the table, #kws denotes the number of keywords and #valid denotes the number of valid evaluations. In each accuracy column, the optimal value is bolded and the suboptimal value is italicized. The same as below.
TABLE V
GENERATIVE EVALUATION RESULTS

| Model | avg. bleu | avg. rouge | avg. kwPrec | avg. bert | avg. len. | #valid |
|---|---|---|---|---|---|---|
| Aquila-34B | 11.80% | 6.04% | **34.36%** | 67.51% | 43.76 | 5130 |
| Baichuan2-13B | 8.84% | 6.96% | 25.51% | 65.69% | 46.04 | 5113 |
| Baichuan2-53B | 10.06% | *7.55%* | 26.45% | 67.65% | 49.40 | 3837 |
| ChatGLM2-6B | 9.17% | 7.17% | 24.53% | 64.89% | 46.27 | 5094 |
| GPT3.5-Turbo | 9.02% | 6.30% | 27.74% | 66.39% | 39.04 | 5084 |
| GPT4-0613 | 10.74% | 7.19% | 28.47% | 67.36% | 44.41 | 5109 |
| GPT4-1106 | 8.62% | 6.86% | 30.94% | 67.38% | 44.83 | 5121 |
| InternLM-20B | **14.89%** | **7.96%** | 31.10% | *67.92%* | 51.55 | 5125 |
| Qwen-14B | 12.72% | 6.54% | 32.95% | 66.96% | 45.85 | 5125 |
| Xinyu-7B | 10.30% | 6.52% | 28.64% | 67.32% | 49.84 | 4978 |
| Xinyu2-70B | *13.41%* | 7.05% | *33.93%* | **68.97%** | 51.10 | 5130 |

TABLE VI
EVALUATION RESULTS BY DIFFERENT TYPES
| Model | KNO | DOC | GEN | NUM |
|---|---|---|---|---|
| Aquila-34B | *59.55%* | **68.73%** | 48.43% | 54.77% |
| Baichuan2-13B | 54.97% | *60.19%* | 49.67% | **62.04%** |
| Baichuan2-53B | *53.75%* | 51.88% | **56.26%** | 49.56% |
| ChatGLM2-6B | *52.10%* | 50.65% | **52.58%** | 48.43% |
| GPT3.5-Turbo | *57.70%* | **62.81%** | 45.56% | 53.15% |
| GPT4-0613 | **57.46%** | *57.35%* | 44.23% | 53.09% |
| GPT4-1106 | 40.94% | *48.44%* | 42.63% | **52.02%** |
| InternLM-20B | *55.21%* | **63.13%** | 47.63% | 50.87% |
| Qwen-14B | *51.06%* | **61.47%** | 47.85% | 50.00% |
| Xinyu-7B | **59.87%** | 53.74% | 51.93% | *54.46%* |
| Xinyu2-70B | *55.99%* | 53.52% | 55.73% | **57.07%** |

Note: Read by row. In the same row of values, the optimal value is bolded and the suboptimal value is italicized.
Generative evaluation. Firstly, InternLM-20B secures two top spots, one runner-up position, and boasts the longest average generation length. This reflects the model's superior credibility in content generation. However, its kwPrec score is modest, indicating potential for enhancement in keyword-level information generation. Secondly, Xinyu2-70B captures one top spot, two runner-up positions, and has the second-longest average generation length, underscoring its strong credibility in content generation. Its sole underperformance is in the ROUGE metric, which is recall-oriented. Conversely, BLEU and kwPrec are precision-oriented, suggesting the model is adept at delivering consistent output yet faces challenges with factual recall. Thirdly, Aquila-34B achieves the pinnacle in kwPrec scoring, signaling a notable edge in generation quality. However, this could be attributed to its comparatively shorter average generation length: kwPrec assesses the coverage of extended tokens (i.e., keywords), allowing brief continuations with limited keywords to secure higher keyword coverage in relation to the reference information. Fourthly, Baichuan2-53B registers a high ROUGE score, indicative of its proficiency in fact recall from the parameters, demonstrating accurate factual retrieval. Fifthly, the GPT series exhibits subpar performance, owing to the insubstantial Chinese data in its training corpus. For example, the Chinese data incorporated in GPT's training from the Common Crawl corpus comprises less than 5%⁴.

4 https://commoncrawl.github.io/cc-crawl-statistics/plots/languages.html
Evaluations by Type. Given the categorization of news into four types, we can proceed with an in-depth analysis. We focus on selective evaluation results and perform a comprehensive breakdown analysis across the four types, as illustrated in Table VI. Initially, the majority of LLMs demonstrate enhanced accuracy for knowledge-intensive and document-intensive news. This observation is consistent with the general consensus that the training datasets for LLMs typically include substantial human knowledge and official documentation of major historical events. Furthermore, the majority of LLMs show reduced accuracy in general and number-intensive news. General news often contains societal minutiae, which are not the focus of LLM training, potentially leading to a deficiency in this factual domain within the model parameters. Regarding number-intensive news, it poses a considerable challenge for most LLMs, given that encoding identical numbers with varied historical meanings is complex. Lastly, GPT4-1106 attains especially high scores in the demanding number-intensive news, which might be attributed to its sophisticated parameterization for numerical data handling.
# F. Discussion

Each of the three evaluation methods possesses distinct advantages and drawbacks. Discriminative evaluation is often the method of choice for a range of standard benchmarks [6], [24]. This approach is intuitive, and the construction of evaluation prompts is straightforward. Selective evaluation resembles discriminative evaluation but is marginally less demanding because it includes a reference option for contrast. In both discriminative and selective evaluations, certain models might be suspected of conjecturing answers from few shots due to inadequate reasoning skills, which can undermine the reliability of the outcomes. Consequently, the use of explainable prompting becomes essential. Generative evaluation most closely mirrors real-world applications. However, the generated content is unrestricted, which poses challenges for even the most dependable reference-based evaluation techniques. Therefore, employing a combination of metrics simultaneously, including lexical evaluation based on token coverage and semantic evaluation based on textual similarity, is imperative.

The foundational capabilities required of LLMs can be arrayed on a spectrum from simple to complex: generative, selective, and discriminative evaluation. Generative evaluation entails the direct invocation of parameters for continuation, bypassing the need for an extensive grasp of instructions, which suits models with minimal fine-tuning. Selective evaluation necessitates a degree of inferential reasoning but offers comparative choices, rendering the level of difficulty moderate. Conversely, discriminative evaluation demands the precise retrieval of factual information, thereby increasing the challenge. Moreover, various evaluations cater to different application contexts. Should the objective be to solely improve the model's capacity for reliable continuation, generative evaluation would suffice. In the training of a dependable chatbot, selective and discriminative evaluations prove suitable. When aiming to train a reward model, selective evaluation is beneficial, offering evaluation for positive and negative instances. If the goal is to enhance the model's ability to recall and apply knowledge, discriminative evaluation emerges as the demanding option.

# IV. RELATED WORKS
# A. Large Language Models

Language models are pivotal in computer science, evolving from statistical language models, to neural language models, to pre-trained language models (PLMs), and now to the current generation of LLMs. The advent of models such as ChatGPT has seen contemporary LLMs exhibit new capabilities in handling complex tasks. These models can manage few-shot tasks via in-context learning and tackle mixed tasks by following instructions [1]. LLMs can be classified according to two dimensions. The first dimension concerns the openness of the model weights. For example, open-source models include
Meta's LLaMA [17], Tsinghua University's GLM [12], and Alibaba's Qwen [14], while closed-source models feature OpenAI's GPT [20], Baidu's ERNIE Bot [31], and Anthropic's Claude⁵, among others. The second dimension differentiates between the use of a PLM or a supervised fine-tuned (SFT) model for specific inferences. A PLM is a language model trained on extensive unlabeled textual data to discern underlying patterns, structures, and semantic knowledge within the corpus. Conversely, an SFT model involves further training a PLM with labeled datasets tailored to a specific task, with the goal of improving performance in that area. Many open-source models, including LLaMA, GLM, and Qwen, have made their PLM weights publicly available. For SFT models, users can access the chat variants of open-source models or the API services provided by closed-source models. In our research, we focus primarily on evaluating closed-source GPT series models and open-source Chinese chat models.

5 https://www.anthropic.com/index/introducing-claude
# B. Hallucinations in LLMs

Despite remarkable advancements in LLMs, they continue to encounter challenges, with hallucination being one of the most notable. Hallucination in language models refers to generating content that strays from factual accuracy, leading to unreliable outputs. Hallucinations occur when the generated content is not aligned with user input, deviates from the model's previous outputs, or is at odds with established real-world knowledge [5]. Specific examples include inaccuracies in age, currency, scores, and other numerical values; citing fictional statements; inventing non-existent characters; and muddling timelines by merging events from different periods [2]. Regarding the causes of hallucinations, several factors can be responsible [5]. One contributing factor is the use of inaccurate or incomplete training data. During training, LLMs fine-tune their parameters with vast quantities of text data. However, this data may be flawed, harboring errors, inaccuracies, or gaps in information. Another factor involves inconsistencies in contextual information. While LLMs typically consider previously generated context when producing content, challenges in managing long-term dependencies or understanding complex contexts can result in inconsistencies. Additionally, hallucinations can arise from lacking or erroneous world knowledge. Although LLMs gain considerable world knowledge via training data, they may be deficient in specific domain knowledge or misinterpret certain facts, leading to hallucinations. Furthermore, model limitations, including generation strategies and alignment methods, can also play a role in hallucinations during content creation.
TABLE VII
HALLUCINATION EVALUATION BENCHMARKS SORTED BY NAME

| Benchmark (Released Year) | Generation Method | Annotation | Metric | Granularity | Lang. |
|---|---|---|---|---|---|
| ChineseFactEval'23 [32] | Manual | No Need | Acc | Sentence | CN |
| CSK-PN'23 [33] | Direct: Common KGs | No Need | Acc | Word | EN |
| FACTOR'23 [10] | CHG: Wiki, News | Auto | FACTOR Acc | Sentence | EN |
| FActScore'23 [9] | CHG: Wiki | No Need | FActScore by Human | Short Sentence | EN |
| HaLoCheck'23 [34] | CHG | No Need | HaLoCheck, selfcheckGPT | Sentence | EN |
| FactualityPrompts'22 [35] | Direct: Wiki | Auto | NE Error, Entailment | Document, Sentence | EN |
| HADES'22 [7] | CHG: Wiki | Manual | Acc, G-Mean, BSS, AUC, etc. | Word | EN |
| HalluQA'23 [24] | CHG, Manual: TruthfulQA, Wiki | Manual, Auto | Non-hallucination Rate | Sentence | CN |
| HaluEval'23 [6] | CHG: Alpaca, HotpotQA, etc. | Manual, Auto | Acc | Document | EN |
| HILT'23 [2] | CHG: NYT, Politifact | Manual | HVI | Word | EN |
| KoLA-KC'23 [36] | Direct: Wiki, evolving dataset | Auto | BLEU, ROUGE | Document | EN |
| Med-HALT'23 [37] | Direct: MedMCQA, PubMed, etc. | No Need | Acc, Pointwise Score | All | EN |
| PHD'23 [8] | CHG: Wiki | Manual | F1, Acc, Prec, Reca | Document | EN |
| SelfAware'23 [38] | CHG: Quora, HowStuffWorks | Manual | F1, Acc | Sentence | EN |
| STSN'23 [39] | UHG | Manual | Acc, Prec, Reca | Sentence, Concept | EN |
| TruthfulQA'22 [28] | Manual | Manual | Acc by Human or GPT-judge | Sentence | EN |
| UHGEval (Ours) | UHG: Xinhua News | Auto, Manual | Acc, kwPrec, BERTScore, etc. | Sentence, Keyword | CN |
| XSum Hallu'20 [40] | UHG: XSum | Manual | ROUGE, BERTScore, Acc, etc. | Word, Document | EN |

Note: The Generation Method column provides the approach, and the base dataset if used. In this column, CHG refers to constrained hallucination generation, UHG refers to unconstrained hallucination generation, Manual indicates manually constructed, and Direct implies utilizing the base dataset without the need for generation. In the Annotation column, Auto denotes automatic machine annotation. In the Metric column, Acc, Prec, and Reca respectively indicate Accuracy, Precision, and Recall. In the Lang. column, CN and EN respectively stand for Chinese and English.
# C. Hallucination Evaluation Benchmarks

To more effectively tackle the issue of hallucinations, constructing evaluation benchmarks is essential. In this context, numerous outstanding contributions have surfaced. This section reviews existing contributions regarding the development of benchmark datasets, their characteristics, and the particular methodologies for evaluation. Basic information about these benchmarks is presented in Table VII.

Benchmark dataset construction. Dataset construction usually involves three steps. Firstly, real-world texts for hallucination generation are collected, and most benchmarks directly use existing datasets, such as Wiki [10], Alpaca [6], PubMed [37], Quora [38], and so on. Secondly, hallucinations are usually generated by LLMs such as GPT3.5-Turbo, and most works use the constrained hallucination generation (CHG) paradigm [10], [9], [34], [6], [2], [8], [38]. STSN [39] and XSum Hallu [40] are the only two benchmarks that use UHG as we do. Thirdly, it is not certain that the content generated by the LLMs actually contains hallucinations, so it often requires annotation, which is mostly done with human involvement. There are also works using automatic machine labeling [10], [35], [24], [6], [36]. These are the basic methods for constructing datasets, but there are also some other paradigms, such as constructing the dataset purely using manual labor, e.g.,
ChineseFactEval [32], HADES [7], TruthfulQA [28], etc.

Benchmark dataset characteristics. Regarding the granularity of hallucinations labeled in the datasets, most studies assess hallucinations at the sentence and document levels, while a few examine them at the word (or keyword, concept) level. With respect to domain, the majority of datasets cover the general domain, while some benchmarks target specific domains; for instance, HaLoCheck [34] focuses on the NBA, Med-HALT [37] on medicine, and our UHGEval on news. Concerning language, most evaluation datasets are in English. To our knowledge, the only two Chinese benchmarks, ChineseFactEval [32] and HalluQA [24], contain only 125 and 450 questions, respectively. Given the notably limited size of these datasets, our work significantly enhances the pool of data available for Chinese hallucination evaluation.

Evaluation scheme. Existing works use a variety of ways to measure hallucinations. However, due to cost and time constraints, building automatic metrics for evaluation is still dominant, and a small proportion of works use human evaluation [9], [28], [40]. In terms of specific evaluation metrics, most works adopt common classification metrics, e.g., F1, accuracy, precision, and recall. Some other works construct their own calculation methods, e.g., FACTOR [10], FActScore [9], HaLoCheck [34], HVI [2], etc. However, the above metrics are rule-based and can only evaluate the ability of LLMs to classify hallucinations, not the ability of LLMs to generate content without hallucinations. Thus, some benchmarks explore even further in generative evaluation. For example, KoLA [36] evaluates knowledge creation (KC) using BLEU and ROUGE to measure the degree of overlap between the output and the reference, TruthfulQA [28] evaluates hallucinations using a specially trained classifier, GPT-judge, and FactualityPrompts [35] simultaneously employs a hallucinated named-entity error based on n-gram coverage and a semantic-based entailment ratio.
# V. CONCLUSION

LLMs are experiencing a rapid evolution, heralding a new era of potential applications within the realm of professional content generation. The progression of LLMs in this domain necessitates the establishment of robust benchmarks to steer their development effectively. In this work, we introduce a novel benchmark built via unconstrained hallucination generation, comprising a dataset specifically curated for hallucinated news continuation, which encompasses in excess of 5,000 instances annotated at the keyword level. Additionally, we propose a secure, scalable, and user-friendly evaluation framework to facilitate comprehensive assessments. Through meticulous experimentation on eleven prominent LLMs, our study has unearthed a series of enlightening findings. Looking ahead, our research endeavors will persist in exploring the intricacies of hallucination phenomena within professional content generation. Concurrently, on the benchmarking front, we aspire to augment our datasets to encompass a more diverse spectrum of domains and linguistic variations, thereby broadening the applicability and relevance of our benchmarks.
# REFERENCES

[1] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou et al., "A survey of large language models," arXiv preprint arXiv:2303.18223, 2023.

[2] V. Rawte, S. Chakraborty, A. Pathak, A. Sarkar, S. Tonmoy, A. Chadha et al., "The troubling emergence of hallucination in large language models: an extensive definition, quantification, and prescriptive remediations," arXiv preprint arXiv:2310.04988, 2023.

[3] C. Wang, X. Liu, Y. Yue, X. Tang, T. Zhang, C. Jiayang et al., "Survey on factuality in large language models: Knowledge, retrieval and domain-specificity," arXiv preprint arXiv:2310.07521, 2023.

[4] V. Rawte, A. Sheth, and A. Das, "A survey of hallucination in large foundation models," arXiv preprint arXiv:2309.05922, 2023.

[5] Y. Zhang, Y. Li, L. Cui, D. Cai, L. Liu, T. Fu et al., "Siren's song in the AI ocean: A survey on hallucination in large language models," arXiv preprint arXiv:2309.01219, 2023.

[6] J. Li, X. Cheng, W. X. Zhao, J.-Y. Nie, and J.-R. Wen, "HaluEval: A large-scale hallucination evaluation benchmark for large language models," arXiv preprint arXiv:2305.11747, 2023.

[7] T. Liu, Y. Zhang, C. Brockett, Y. Mao, Z. Sui, W. Chen et al., "A token-level reference-free hallucination detection benchmark for free-form text generation," in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland, May 2022, pp. 6723-6737. [Online]. Available: https://aclanthology.org/2022.acl-long.464

[8] S. Yang, R. Sun, and X. Wan, "A new benchmark and reverse validation method for passage-level hallucination detection," arXiv preprint arXiv:2310.06498, 2023.

[9] S. Min, K. Krishna, X. Lyu, M. Lewis, W.-t. Yih, P. W. Koh et al., "FActScore: Fine-grained atomic evaluation of factual precision in long form text generation," arXiv preprint arXiv:2305.14251, 2023.

[10] D. Muhlgay, O. Ram, I. Magar, Y. Levine, N. Ratner, Y. Belinkov et al., "Generating benchmarks for factuality evaluation of language models," arXiv preprint arXiv:2307.06908, 2023.

[11] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin et al., "Training language models to follow instructions with human feedback," in Advances in Neural Information Processing Systems, vol. 35, 2022, pp. 27730-27744. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2022/file/b1efde53be364a73914f58805a001731-Paper-Conference.pdf

[12] Z. Du, Y. Qian, X. Liu, M. Ding, J. Qiu, Z. Yang et al., "GLM: General language model pretraining with autoregressive blank infilling," in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022, pp. 320-335.

[13] A. Yang, B. Xiao, B. Wang, B. Zhang, C. Bian, C. Yin et al., "Baichuan 2: Open large-scale language models," arXiv preprint arXiv:2309.10305, 2023.

[14] J. Bai, S. Bai, Y. Chu, Z. Cui, K. Dang, X. Deng et al., "Qwen technical report," arXiv preprint arXiv:2309.16609, 2023.

[15] InternLM, "InternLM: A multilingual language model with progressively enhanced capabilities," https://github.com/InternLM/InternLM, 2023.

[16] N. Muennighoff, T. Wang, L. Sutawika, A. Roberts, S. Biderman, T. L. Scao et al., "Crosslingual generalization through multitask finetuning," arXiv preprint arXiv:2211.01786, 2023.

[17] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei et al., "Llama 2: Open foundation and fine-tuned chat models," arXiv preprint arXiv:2307.09288, 2023.

[18] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, "BLEU: a method for automatic evaluation of machine translation," in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia, Pennsylvania, USA, Jul. 2002, pp. 311-318. [Online]. Available: https://aclanthology.org/P02-1040

[19] C.-Y. Lin, "ROUGE: A package for automatic evaluation of summaries," in Text Summarization Branches Out, Barcelona, Spain, Jul. 2004, pp. 74-81. [Online]. Available: https://aclanthology.org/W04-1013

[20] OpenAI, "GPT-4 technical report," arXiv preprint arXiv:2303.08774, 2023.

[21] M.-C. de Marneffe and J. Nivre, "Dependency grammar," Annual Review of Linguistics, vol. 5, no. 1, pp. 197-218, 2019. [Online]. Available: https://doi.org/10.1146/annurev-linguistics-011718-011842

[22] BAAI, "Aquila2," https://github.com/FlagAI-Open/Aquila2, 2023.

[23] Y. Chang, X. Wang, J. Wang, Y. Wu, L. Yang, K. Zhu et al., "A survey on evaluation of large language models," arXiv preprint arXiv:2307.03109, 2023.

[24] Q. Cheng, T. Sun, W. Zhang, S. Wang, X. Liu, M. Zhang et al., "Evaluating hallucinations in Chinese large language models," arXiv preprint arXiv:2310.03368, 2023.

[25] Y. Wang, Z. Yu, Z. Zeng, L. Yang, C. Wang, H. Chen et al., "PandaLM: An automatic evaluation benchmark for LLM instruction tuning optimization," arXiv preprint arXiv:2306.05087, 2023.

[26] J. Novikova, O. Dušek, A. Cercas Curry, and V. Rieser, "Why we need new evaluation metrics for NLG," in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, Sep. 2017, pp. 2241-2252. [Online]. Available: https://aclanthology.org/D17-1238

[27] T. Zhang, V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi, "BERTScore: Evaluating text generation with BERT," in International Conference on Learning Representations, 2020. [Online]. Available: https://openreview.net/forum?id=SkeHuCVFDr

[28] S. Lin, J. Hilton, and O. Evans, "TruthfulQA: Measuring how models mimic human falsehoods," in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland, May 2022, pp. 3214-3252. [Online]. Available: https://aclanthology.org/2022.acl-long.229

[29] J. Fu, S.-K. Ng, Z. Jiang, and P. Liu, "GPTScore: Evaluate as you desire," arXiv preprint arXiv:2302.04166, 2023.

[30] S. Zheng, Y. Zhang, Y. Zhu, C. Xi, P. Gao, X. Zhou et al., "GPT-Fathom: Benchmarking large language models to decipher the evolutionary path towards GPT-4 and beyond," arXiv preprint arXiv:2309.16583, 2023.

[31] Y. Sun, S. Wang, S. Feng, S. Ding, C. Pang, J. Shang et al., "ERNIE 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation," arXiv preprint arXiv:2107.02137, 2021.

[32] B. Wang, E. Chern, and P. Liu, "ChineseFactEval: A factuality benchmark for Chinese LLMs," https://GAIR-NLP.github.io/ChineseFactEval, 2023.

[33] J. Chen, W. Shi, Z. Fu, S. Cheng, L. Li, and Y. Xiao, "Say what you mean! Large language models speak too positively about negative commonsense knowledge," in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, Canada, Jul. 2023, pp. 9890-9908. [Online]. Available: https://aclanthology.org/2023.acl-long.550

[34] M. Elaraby, M. Lu, J. Dunn, X. Zhang, Y. Wang, and S. Liu, "HaLo: Estimation and reduction of hallucinations in open-source weak large language models," arXiv preprint arXiv:2308.11764, 2023.

[35] N. Lee, W. Ping, P. Xu, M. Patwary, P. Fung, M. Shoeybi et al., "Factuality enhanced language models for open-ended text generation," in Advances in Neural Information Processing Systems, 2022. [Online]. Available: https://openreview.net/forum?id=LvyJX20Rll

[36] J. Yu, X. Wang, S. Tu, S. Cao, D. Zhang-Li, X. Lv et al., "KoLA: Carefully benchmarking world knowledge of large language models," arXiv preprint arXiv:2306.09296, 2023.

[37] A. Pal, L. K. Umapathi, and M. Sankarasubbu, "Med-HALT: Medical domain hallucination test for large language models," arXiv preprint arXiv:2307.15343, 2023.

[38] Z. Yin, Q. Sun, Q. Guo, J. Wu, X. Qiu, and X. Huang, "Do large language models know what they don't know?" in Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, Jul. 2023, pp. 8653-8665. [Online]. Available: https://aclanthology.org/2023.findings-acl.551

[39] N. Varshney, W. Yao, H. Zhang, J. Chen, and D. Yu, "A stitch in time saves nine: Detecting and mitigating hallucinations of LLMs by validating low-confidence generation," arXiv preprint arXiv:2307.03987, 2023.

[40] J. Maynez, S. Narayan, B. Bohnet, and R. McDonald, "On faithfulness and factuality in abstractive summarization," in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, Jul. 2020, pp. 1906-1919. [Online]. Available: https://aclanthology.org/2020.acl-main.173