## 4.3. Evaluation

[Figure: Perplexity (C4), Perplexity (HQ), and Over-scaling (%) plotted against Dataset Size (Billion Tokens) for each sampler.]
{ "creation_datetime": "2024-03-04", "file_name": "2402.09668v1.md", "file_path": "paper_data/2402.09668v1.md", "file_size": 114352, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
## 4.3. Evaluation

[Figure legend: Full data, Random, Density, Ask-LLM (Small), Ask-LLM (XL), Perplexity (Small), Perplexity (XL), SemDeDup, Prototypes; x-axis: Dataset Size (Billion Tokens).]

| LLM | Sampler | # Tokens | Over-scaling (%) | GLUE |
|----------|--------------------|--------------|------------------|------|
| T5-Small | - | 184B | - | 78.6 |
| T5-Small | Random | 36B (≡ 20%) | -0.2 | 79.9 |
| T5-Small | Density | 36B (≡ 20%) | -2.1 | 80.5 |
| T5-Small | SemDeDup | 46B (≡ 25%) | -4.5 | 80.7 |
| T5-Small | Prototypes | 46B (≡ 25%) | -8.0 | 79.7 |
| T5-Small | Perplexity (Small) | 36B (≡ 20%) | -7.8 | 79.9 |
| T5-Small | Ask-LLM (XL) | 36B (≡ 20%) | 4.2 | 80.3 |
| T5-Large | - | 184B | - | - |
| T5-Large | Random | 36B (≡ 20%) | -6.5 | 88.6 |
| T5-Large | Density | 36B (≡ 20%) | 2.8 | 88.8 |
| T5-Large | SemDeDup | 46B (≡ 25%) | -20.5 | 88.3 |
| T5-Large | Prototypes | 46B (≡ 25%) | 0.2 | 88.4 |
| T5-Large | Perplexity (XL) | 36B (≡ 20%) | -32.7 | 87.9 |
| T5-Large | Ask-LLM (XL) | 36B (≡ 20%) | 33.0 | 88.8 |

[The remaining downstream-task and FLAN instruction-tuning columns of this table were not recoverable from the extraction.]
## 4.4. Does Reasoning Improve Data Efficiency?

…to a scoring model of the same model capacity (XL). Similar findings hold for training efficiency (Figure 5): ASK-LLM converges faster than perplexity filters, both on average (expected final performance over all proxy-model sizes) and pointwise for the best configuration (Small and XL for training T5-Small and T5-Large, respectively). Figure 4c shows that ASK-LLM closes up to 33% of the performance gap to the next-largest model size (i.e., the over-scaling metric). ASK-LLM consistently outperforms training on the full dataset as well as perplexity filtering (and the coverage-maximizing baselines), despite having access to less data.

Figure 7 further demonstrates that prompting adds critical information to the sampler that is not present in perplexity: ASK-LLM scores show no correlation with the perplexity scores. Based on this clear behavioral difference, we conclude that reasoning and context are crucial ingredients. We expect prompting techniques such as chain-of-thought reasoning (Wei et al., 2022) to further drive performance.
## 4.5. When Are Expensive Quality Scores Justified?

Figures 4c and 4f suggest that coverage scores—especially those provided by DENSITY—perform well in the mid-data regime (roughly 25% to 50% sampling rate). On the other hand, expensive quality scoring—via the ASK-LLM procedure—is Pareto optimal across the entire quantity-quality trade-off. The higher costs of LLM-based filters are most justified in two scenarios: (i) improving full-data performance, where removing the lowest-quality data is the main way to push the upper limit of model performance; or (ii) the low-data regime, where keeping only the highest-quality data drives the most model performance compared to other sampling strategies.

We also observe that random sampling is a strong baseline, in line with recent observations in the literature. Guo et al. (2022a) found that only three methods outperformed random sampling in a computer-vision benchmark of 15 algorithms. Ayed & Hayou (2023a) prove the existence of adversarial problem instances where score-based sampling cannot outperform random sampling. These results only serve to highlight the significance of ASK-LLM's gains.
## 4.6. Effect Of Quality-Scoring Model Capacity

Figure 6 demonstrates a clear scaling trend for ASK-LLM's quality-scoring model: larger scoring models are increasingly beneficial as the scale of the to-be-trained LLM increases. Perplexity filters do not seem to exhibit such trends. The strongly consistent scaling for ASK-LLM also suggests a performance recipe: to improve downstream data-efficiency, use better quality-scoring models. Creating better quality scorers for ASK-LLM (via fine-tuning, chain-of-thought prompting, more capable scoring models, etc.) is thus an exciting direction for future work.

Despite these scaling trends, we emphasize that even small ASK-LLM models already provide compelling sampling performance for training both T5-Small and T5-Large. For example, ASK-LLM (Small) outperforms perplexity filtering with any scoring model in Figure 4f (including T5-XXL) by a sizable margin.
## 4.7. Do Samplers Prioritize Different Examples?

To understand whether different algorithms prioritize different examples, we sorted examples by score and computed the Kendall Tau rank correlation between samplers (Figure 7). We find that samplers differ in significant and interesting ways. For example, the "T5-Large" row shows that (i) T5-Large outputs perplexity scores similar to T5-Small early in training, but becomes progressively more nuanced on the path from 20k to 700k training steps, and (ii) perplexity and ASK-LLM select for wildly different criteria, with almost no ranking correlation.

DENSITY prioritizes coverage over de-noising, maintaining the in-distribution test perplexity better than any other strategy (Figures 4a and 4d). This suggests that coverage sampling preserves the objective function, in contrast with other methods that preferentially select for quality in addition to diversity.
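The rank-correlation analysis in this section can be reproduced in miniature. Below is a small, self-contained Kendall tau implementation (tau-a, no tie correction; the paper does not prescribe a particular variant) applied to two mock score lists:

```python
from itertools import combinations

def kendall_tau(scores_a, scores_b):
    """Kendall tau-a rank correlation via O(n^2) pair counting."""
    assert len(scores_a) == len(scores_b)
    concordant = discordant = 0
    for i, j in combinations(range(len(scores_a)), 2):
        da = scores_a[i] - scores_a[j]
        db = scores_b[i] - scores_b[j]
        if da * db > 0:
            concordant += 1      # pair ranked the same way by both samplers
        elif da * db < 0:
            discordant += 1      # pair ranked in opposite ways
    n_pairs = len(scores_a) * (len(scores_a) - 1) // 2
    return (concordant - discordant) / n_pairs

# Mock scores: identical rankings give tau = 1; reversed rankings give tau = -1,
# and uncorrelated samplers (like ASK-LLM vs. perplexity above) land near 0.
quality = [0.9, 0.7, 0.5, 0.3, 0.1]
perplexity = [0.1, 0.3, 0.5, 0.7, 0.9]
print(kendall_tau(quality, quality))      # 1.0
print(kendall_tau(quality, perplexity))   # -1.0
```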
## 5. Discussion

Amortized scoring. The ASK-LLM and perplexity scorers require considerable computation—one LLM inference call for every training sample—which is concerning from both a carbon-emissions and a cost perspective (Strubell et al., 2019). However, we argue that the scoring costs are amortized over many pre-training runs, which together cost significantly more than the ASK-LLM inference calls (Luccioni et al., 2023). In practical systems, cheaper samplers / scoring models can also pre-filter examples for our more expensive scorers. While LLM pre-training is often thought of as a one-time cost, this has historically not been the case. We therefore view quality scores as a long-term investment. See Appendix A.1 for a deeper discussion of the cost of ASK-LLM scoring.

LLM-Based Data Refinement. Recursively training on model-generated data causes degradation in both diffusion models and LLMs, inciting concerns about whether the internet will remain a viable source of training data (Shumailov et al., 2023; Alemohammad et al., 2023; Briesch et al., 2023). It is therefore somewhat surprising that LLMs are so effective at deciding which training data to consume. Our ASK-LLM results raise important questions about whether LLM-based filters can function as an intervention in the self-consumption loop, allowing LLMs to self-improve.
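The amortization argument can be made concrete with back-of-the-envelope FLOP counting. The numbers below are our illustrative assumptions (model sizes, the 2P-FLOPs-per-token inference and 6P-FLOPs-per-token training rules of thumb), not figures from the paper:

```python
# Rule of thumb: a forward pass costs ~2*P FLOPs/token; training costs ~6*P FLOPs/token.
P_SCORER = 3e9     # assumed XL-scale scoring model (illustrative)
P_TRAIN = 770e6    # assumed T5-Large-scale model being trained (illustrative)
TOKENS = 524e9     # assumed corpus size (illustrative)

score_flops = 2 * P_SCORER * TOKENS   # one-time scoring cost: forward passes only
train_flops = 6 * P_TRAIN * TOKENS    # one pre-training run over the full corpus

print(f"scoring cost / one training run = {score_flops / train_flops:.2f}x")
# The scoring cost is paid once, while training runs recur; amortized over several
# runs (each consuming only the filtered subset), scoring becomes a small fraction.
```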
## 6. Conclusion

We studied the performance of sampling algorithms that select high-quality data through highly capable proxies and maximize coverage through embedding similarity. Our experiments reveal that LLM-based quality filtering yields a Pareto-optimal trade-off between data quantity and model quality, with important implications for training cost, self-improvement, and LLM training-data curation.
## Impact Statement

While increased LLM accessibility has well-documented risks, we expect data-efficient pre-training to be a net social good that reduces (amortized) carbon emissions and pre-training cost while improving quality.
## Acknowledgements

We sincerely thank Xinyun Chen and Kelvin Guu for their insightful feedback on early drafts of this paper.
## Appendices

- A Algorithms
  - A.1 ASK-LLM Sampling
  - A.2 DENSITY Sampling
- B Data-curation Techniques
  - B.1 Random Sampling
  - B.2 DENSITY Sampling
  - B.3 SemDeDup
  - B.4 SSL Prototypes
  - B.5 Perplexity Filtering
  - B.6 ASK-LLM Sampling
- C Downstream Evaluation Tasks
  - C.1 Perplexity
  - C.2 HQ Perplexity
  - C.3 GLUE
  - C.4 SuperGLUE
  - C.5 CNN/DM
  - C.6 SQuAD
  - C.7 FLAN Instruction Tuning
- D Additional Results
  - D.1 (Figure 9) Quality-score Distribution for Different Samplers
  - D.2 (Figures 10 to 16) Data-quantity vs. …
  - D.3 (Figures 17 to 23) Quality of Fresh vs. …
  - D.4 (Figures 24 to 30) Data-efficiency of Different Samplers
- E Qualitative Results
  - E.1 High-quality Samples Identified by ASK-LLM
  - E.2 Low-quality Samples Identified by ASK-LLM
  - E.3 Increasing-quality Samples Identified by ASK-LLM
  - E.4 Decreasing-quality Samples Identified by ASK-LLM
## A. Algorithms

## A.1. ASK-LLM Sampling

**Algorithm 1** ASK-LLM Sampling

1: **Input:** Dataset D = {x1, x2, ..., xN} s.t. xi ∈ X is a training sample in plain text, sample size k, scoring model M : X → ℝ
2: Initialize list of scores S = [].
3: **for** n = 1 → N **do**
4: &nbsp;&nbsp; promptn ← make_prompt(xn) // Make ASK-LLM prompts as in Figure 3
5: &nbsp;&nbsp; Append M("yes" | promptn) to S // Use M to score xn
6: **end for**
7: **Output:** Select k elements from D with top-k scores in S, without replacement.

**Discussion on the cost of ASK-LLM scoring.** Even though ASK-LLM sampling yields impressive performance and training-efficiency improvements compared to training on the full dataset (Appendix D), the data-quality scoring cost might seem prohibitive. On top of the improved results, we argue the following points justify ASK-LLM's one-time, amortized data-scoring cost:

- ASK-LLM only requires *forward passes* over the dataset. This is much cheaper than (i) training the model itself, which requires both forward and backward passes over multiple repetitions of the entire dataset, and (ii) gradient-based data-curation techniques (Sachdeva & McAuley, 2023; Sachdeva et al., 2023), which also require backward passes.

- ASK-LLM can leverage memory-efficient, quantized LLM inference setups (Dettmers et al., 2022), which is strictly not possible, *e.g.*, when pre-training LLMs. Notably, quantization is not the only ASK-LLM-friendly technique: all recent (and future) advances in efficient LLM *inference* (Weng, 2023) directly reduce the amortized cost of the ASK-LLM framework.

- Quality scoring in ASK-LLM is naïvely parallelizable: we can simply scale up the number of *small and independent* inference resources and run inference calls for different training samples in parallel. Inference hardware requirements are much smaller than, *e.g.*, pre-training or fine-tuning requirements, primarily because inference has no large-batch-size requirement, unlike training. This lets hardware scale via many small-compute setups (*e.g.*, 4 interconnected GPUs per node) rather than a few large-compute setups (*e.g.*, 1000s of interconnected GPUs per node).

- ASK-LLM also uses strictly less compute than teacher-student knowledge-distillation training setups (Agarwal et al., 2023), simply because knowledge distillation requires (i) the bigger teacher model's softmax predictions (ii) for each token in the training data, whereas ASK-LLM requires just the score of the token "yes" given the prompt.
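Algorithm 1 reduces to a few lines of code. The sketch below is a minimal mock-up: `score_yes` stands in for the scoring model's probability of the token "yes" given the prompt (here stubbed with a toy heuristic; a real setup would read off a T5-class model's softmax mass on "yes"), and the prompt wording is our paraphrase, not the paper's exact Figure 3 template.

```python
import heapq

def make_prompt(text):
    # Hypothetical paraphrase of the ASK-LLM prompt (exact template: paper's Figure 3).
    return (f"###\n{text}\n###\n"
            "Does the previous paragraph contain informative signal for "
            "pre-training a large language model? OPTIONS: yes / no")

def ask_llm_sample(dataset, k, score_yes):
    """Algorithm 1: score every example with M("yes" | prompt), keep the top-k."""
    scores = [score_yes(make_prompt(x)) for x in dataset]
    # Select the k highest-scoring examples, without replacement.
    top = heapq.nlargest(k, range(len(dataset)), key=scores.__getitem__)
    return [dataset[i] for i in sorted(top)]

# Toy stand-in scorer (longer prompt text scores higher) purely to make the demo run.
dataset = ["buy now!!!", "a short note",
           "a detailed paragraph explaining kernel density estimation"]
print(ask_llm_sample(dataset, k=2, score_yes=lambda prompt: len(prompt)))
```

Swapping `score_yes` for a real LLM call is the only change needed to match the paper's procedure.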
## A.2. Density Sampling

Our density sampler is adapted from that of Coleman et al. (2022), with a few critical departures:

- We use a two-pass procedure that allows for more rigorous theoretical guarantees (and different sampling behavior).
- We conduct the density estimation in the model's latent space rather than using Jaccard similarity over n-grams.

Improvements: Jaccard similarities are sufficient to construct a reasonable sampling distribution for genomics applications, which are significantly more structured than natural language. However, this is not the case with text: we found that sampling based on Jaccard density is no better than random. For this reason, we must use different kernels (p-stable rather than MinHash) and different input representations (embeddings rather than n-grams). Our more interesting departure from Coleman et al. (2022), however, is the two-pass sampling procedure, which changes the behavior of the algorithm and allows for more rigorous theoretical guarantees. The original method was only able to demonstrate convergence of cluster populations in the sampled dataset. While this leads to (weak) convergence for some measures of diversity, it also requires strong assumptions about the cluster structure.

Theory: We use a recent result demonstrating consistent sketch-based estimation of the kernel sum (Theorem 3.3 of Liu et al. (2023)), which we paraphrase below.

Lemma A.1. *Let P(x) denote a probability density function, and let D ∼ iid P(x) denote a dataset. Let k(x, y) be a positive-definite LSH kernel, and let S be the DENSITY score. Then S(x) is a consistent estimator of the kernel sum,*

$$S(x)\underset{\mathrm{i.p.}}{\rightarrow}\frac{1}{N}\sum_{x_{i}\in\mathcal{D}}k(x_{i},x),$$

*with convergence rate* $O(\sqrt{\log R/R})$.

If we perform inverse propensity sampling using the score in Lemma A.1, we obtain a sampling procedure that outputs a uniformly-distributed sample.

Theorem A.2. *Let Q(x) be the distribution formed by (i) drawing N samples i.i.d. from a distribution P, e.g., D = {x1, ..., xN} ∼ P, and (ii) keeping x with probability proportional to 1/S(x). Under the conditions of Lemma A.1,* $Q(x) \underset{\mathrm{i.p.}}{\rightarrow} U(x)$*, where U(x) is the uniform distribution.*

Proof. Under the conditions of Wied & Weißbach (2012) (specifically, positive-definiteness and ℓ1 integrability / bounded domain), the kernel sum is a consistent estimator of the density; that is, the sum converges in probability to P(x):

$$\frac{1}{N}\sum_{x_{i}\in\mathcal{D}}k(x_{i},x)\underset{\mathrm{i.p.}}{\rightarrow}P(x).$$

Lemma A.1 shows that S(x) converges in probability to this sum (and thus to P(x)). By Slutsky's Theorem, $\frac{1}{S(x)} \rightarrow \frac{1}{P(x)}$ for all x in the support of the distribution (i.e., where P(x) ≠ 0). The probability of generating x as part of the sample is:

$$Q(x) = \Pr[\mathrm{Select}_x \cap \mathrm{Generate}_x] = \Pr[\mathrm{Select}_x]\,\Pr[\mathrm{Generate}_x] = \frac{c}{S(x)}\,P(x)$$

for some constant c. Because $\frac{1}{S(x)} \rightarrow \frac{1}{P(x)}$, we have that Q(x) → c.

Theorem A.2 demonstrates that our DENSITY sampler outputs a uniformly-distributed collection of points over the input space (the latent LLM representation space).
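Theorem A.2 can be sanity-checked numerically. The sketch below is our illustration, not from the paper: it replaces the kernel-sum score with an exact empirical density over three symbols and shows that inverse-propensity selection flattens a skewed distribution:

```python
import random
from collections import Counter

random.seed(0)

# Skewed source distribution P over three symbols (70% / 20% / 10%).
population = ["a"] * 70 + ["b"] * 20 + ["c"] * 10
draws = [random.choice(population) for _ in range(30_000)]

# S(x): empirical density, standing in for the sketch-based kernel-sum score.
counts = Counter(draws)
S = {x: counts[x] / len(draws) for x in counts}

# Inverse propensity selection: keep x with probability c / S(x), where
# c = min S keeps every probability in [0, 1].
c = min(S.values())
kept = [x for x in draws if random.random() < c / S[x]]

kept_counts = Counter(kept)
print(kept_counts)  # the three symbols survive in roughly equal numbers
```

Each symbol's expected surviving count is N·P(x)·c/S(x) ≈ N·c, independent of x, which is exactly the uniformization the theorem predicts.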
## Algorithm 2: Inverse Propensity Sampling (IPS) via Kernel Density Estimation (KDE)

1: **Input:** Dataset D = {x1, x2, ..., xN} of embeddings, sample size k, kernel k with corresponding locality-sensitive hash family H (see Coleman & Shrivastava (2020)), hash range B, rows R, random seed s
2: **Initialize:** KDE sketch S ← 0R×B
3: Generate R independent hash functions h1, ..., hR from H with range B and random seed s.
4: **for** n = 1 → N **do** // Construct the KDE estimator for D
5: &nbsp;&nbsp; **for** r = 1 → R **do** // Add xn to the KDE estimator
6: &nbsp;&nbsp;&nbsp;&nbsp; S[r, hr(xn)] += 1
7: &nbsp;&nbsp; **end for**
8: **end for**
9: Initialize list of scores s = [].
10: **for** n = 1 → N **do** // Score each example xn
11: &nbsp;&nbsp; score = 0
12: &nbsp;&nbsp; **for** r = 1 → R **do** // Compute the approximate KDE using S
13: &nbsp;&nbsp;&nbsp;&nbsp; score += S[r, hr(xn)]
14: &nbsp;&nbsp; **end for**
15: &nbsp;&nbsp; Append score/R to s
16: **end for**
17: **Output:** Select k elements from D with probability p = s/∑s, without replacement.
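The two scoring passes of Algorithm 2 can be sketched in plain Python. This is a simplified illustration with names of our choosing (`density_scores`, `make_hash`), and it substitutes a SimHash-style cosine LSH for the paper's p-stable hash family:

```python
import random

def make_hash(dim, n_bits, rng):
    """One LSH function: n_bits random hyperplanes -> bucket id in [0, 2**n_bits)."""
    planes = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n_bits)]
    def h(x):
        bucket = 0
        for plane in planes:
            dot = sum(p * xi for p, xi in zip(plane, x))
            bucket = (bucket << 1) | (1 if dot >= 0 else 0)
        return bucket
    return h

def density_scores(embeddings, R=20, n_bits=4, seed=0):
    """Two linear passes: build an R x B count sketch, then read each
    example's average bucket count as its approximate KDE score."""
    rng = random.Random(seed)
    dim = len(embeddings[0])
    hashes = [make_hash(dim, n_bits, rng) for _ in range(R)]
    sketch = [[0] * (2 ** n_bits) for _ in range(R)]
    for x in embeddings:                              # pass 1: build the sketch
        for r, h in enumerate(hashes):
            sketch[r][h(x)] += 1
    return [sum(sketch[r][h(x)] for r, h in enumerate(hashes)) / R
            for x in embeddings]                      # pass 2: score each example

# A tight cluster plus one far-away outlier: the cluster scores high, the outlier low.
pts = [(1.0, 0.01 * i) for i in range(10)] + [(-1.0, 0.0)]
scores = density_scores(pts)
print(scores[0] > scores[-1])
```

The sketch uses O(R·B) memory regardless of dataset size, which is what lets the full 364M-embedding corpus be scored in 80 MB; Theorem A.2's inverse-propensity step would then keep examples with probability proportional to 1/score.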
## Total Training = 524B Tokens

**Cost:** Like SemDeDup, D4, and SSL prototypes, our DENSITY sampler requires access to embeddings for each example in the training corpus. However, by eliminating the expensive clustering step, we remove a significant computational overhead: our DENSITY sampling routine required just 80MB of memory and two linear passes through the dataset to score all 364M embeddings. This is significantly less expensive than clustering.

**Tuning:** We also eliminate a large number of hyperparameters, simplifying tuning. Cluster-based samplers must choose the number of clusters, the clustering optimizer and objective, and the per-cluster sampling rate or deduplication similarity. Kernel density estimation, on the other hand, has just *two* hyperparameters: the choice of kernel and the bandwidth. We did not observe a significant performance variation among different bandwidth and kernel choices (e.g., the L2 and cosine kernels of Coleman & Shrivastava (2020) perform nearly identically). This is likely because all positive-definite kernels enjoy strong guarantees on the distribution-approximation error (Devroye, 1983).
## B. Data-Curation Techniques

## B.1. Random Sampling

Random sampling, i.e., sampling training examples uniformly at random, is the de-facto standard for obtaining subsets of large datasets. Notably, random sampling has also been accompanied by strong results in a variety of applications in the data-curation literature, primarily due to its unbiased sampling (Ayed & Hayou, 2023b; Guo et al., 2022b).
## B.2. Density Sampling

See Section 3.2 for technical details about the DENSITY sampler. We use Sentence-T5-Base (Ni et al., 2021) as our embedding model for training samples, primarily because its contrastive training gives us confidence in computing distances among its 768-dimensional embeddings. We use the p-stable hash (Datar et al., 2004) to hash the embeddings, along with a [1,000 × 20,000] sketch matrix.
## B.3. Semdedup

The key idea is to perform (coverage-maximizing) semantic deduplication inside clusters of the original dataset (Abbas et al., 2023). We re-use the Sentence-T5-Base embeddings of data-points (Appendix B.2), and perform k-means clustering to obtain 10,000 clusters of the entire dataset.
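The per-cluster deduplication step can be sketched as a greedy scan that drops any point too similar to one already kept. This is an illustrative sketch of the idea only; the similarity `threshold` below is a hypothetical value, not the paper's setting.

```python
import numpy as np

def semdedup_cluster(E, threshold=0.9):
    """Greedy semantic dedup inside one k-means cluster.

    Drops any embedding whose cosine similarity to an already-kept
    embedding exceeds `threshold` (hypothetical value); the first
    occurrence of each near-duplicate group survives.
    """
    E = E / np.linalg.norm(E, axis=1, keepdims=True)  # unit-normalize rows
    kept = []
    for i in range(len(E)):
        if all(E[i] @ E[j] <= threshold for j in kept):
            kept.append(i)
    return kept

# Two near-duplicate embeddings and one distinct embedding.
E = np.array([[1.0, 0.0], [0.99, 0.14], [0.0, 1.0]])
print(semdedup_cluster(E))  # → [0, 2]: the near-duplicate of row 0 is dropped
```

Running this inside each of the 10,000 clusters (rather than over all pairs) is what keeps the quadratic similarity search tractable.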
## B.4. Ssl Prototypes

The key idea is to remove *prototypical* points in a dataset (Sorscher et al., 2022). As a meaningful proxy, this method removes the points closest to cluster centroids of a dataset. For brevity, we use the name "Prototypes" when reporting our results. We re-use the same embeddings and clustering for both SemDeDup and Prototypes.
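In code, the pruning rule for one cluster is simply "sort by similarity to the centroid and drop the top of the list". A minimal sketch, assuming cosine similarity and a hypothetical `keep_fraction` (the actual pruning rate is set by the sampling budget):

```python
import numpy as np

def prune_prototypes(E, centroid, keep_fraction=0.6):
    """Drop the most prototypical points of one cluster, i.e. those
    closest (by cosine similarity) to the cluster centroid.
    `keep_fraction` is a hypothetical setting for illustration."""
    E_n = E / np.linalg.norm(E, axis=1, keepdims=True)
    c = centroid / np.linalg.norm(centroid)
    sim = E_n @ c                                   # similarity to centroid
    n_keep = int(np.ceil(keep_fraction * len(E)))
    return np.argsort(sim)[:n_keep]                 # keep points farthest away

E = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [-1.0, 0.0], [1.0, 0.1]])
kept = prune_prototypes(E, centroid=np.array([1.0, 0.0]))
# Rows 0 and 4 (nearly parallel to the centroid) are pruned.
```

Note the contrast with SemDeDup: Prototypes removes points *near the centroid*, while SemDeDup removes points near *each other*.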
## B.5. Perplexity Filtering

A popular quality-filtering approach in the literature is to use the perplexity of proxy language models to filter out data-points that have a high perplexity under that language model. While the literature historically used small language models for perplexity filtering (Wenzek et al., 2019; Muennighoff et al., 2023), recent work (Marion et al., 2023) suggests improved filtering performance when using LLMs for this task. To this end, we employ perplexity filtering with T5-{Small, Base, Large, XL, XXL} models, as well as intermediate checkpoints taken during the course of training T5-Large: {20k, 100k, 300k, 500k, 700k}.
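The selection rule reduces to scoring each example by its perplexity and keeping the lowest-perplexity fraction. A toy sketch, with per-token log-probabilities supplied directly rather than computed by a T5-family scoring model, and a hypothetical `keep_fraction`:

```python
import numpy as np

def perplexity(token_logprobs):
    """Perplexity of one example: exponentiated mean negative
    log-likelihood of its tokens under the proxy language model."""
    return float(np.exp(-np.mean(token_logprobs)))

def perplexity_filter(logprobs_per_example, keep_fraction=0.5):
    """Keep the `keep_fraction` of examples with the lowest perplexity.
    Both inputs here are mocked for illustration."""
    ppl = np.array([perplexity(lp) for lp in logprobs_per_example])
    n_keep = int(np.ceil(keep_fraction * len(ppl)))
    return np.argsort(ppl)[:n_keep]

# A fluent example (high token probabilities) survives; a garbled one does not.
examples = [[np.log(0.5)] * 8, [np.log(0.05)] * 8]
kept = perplexity_filter(examples, keep_fraction=0.5)  # → index 0 only
```

The only thing that changes between scoring models is where `token_logprobs` comes from; the selection rule is identical.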
## B.6. Ask-Llm Sampling

See Section 3.1 for technical details about the ASK-LLM sampler. Since ASK-LLM relies on the reasoning capabilities of instruction-tuned models, we use the Flan-T5-{Small, Base, Large, XL, XXL} (Longpre et al., 2023a) models for obtaining the quality scores in ASK-LLM.
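Mechanically, the ASK-LLM score is the probability mass the scoring model places on answering "yes" when prompted with the example and the question from Figure 3. A minimal sketch of that final step, with the next-token logits mocked (in the paper they come from Flan-T5 scoring models):

```python
import numpy as np

def askllm_score(next_token_logits, vocab, yes_token="yes"):
    """ASK-LLM quality score: softmax probability of the "yes" token
    after the quality-assessment prompt. Logits and vocab are mocked
    here for illustration."""
    z = next_token_logits - next_token_logits.max()   # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum()
    return float(p[vocab.index(yes_token)])

vocab = ["yes", "no", "maybe"]
score = askllm_score(np.array([2.0, 0.0, 0.0]), vocab)  # ≈ 0.787
```

Because the score is a probability rather than a hard yes/no, examples can be ranked and thresholded at any sampling ratio.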
## C. Downstream Evaluation Tasks

## C.1. Perplexity

Perplexity is defined as the exponentiated average negative log-likelihood of an average sequence in the dataset; we compute it over the default validation set of C4. Note that C4's validation set is a random sample of the dataset, so it is prone to be of much lower quality than curated sources and is, hence, a less reliable indicator of true model quality.
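In symbols, one common formulation of this definition for a single tokenized sequence $x = (x_1, \dots, x_T)$ under the evaluated model $p_\theta$ is

$$\mathrm{PPL}(x) = \exp\left(-\frac{1}{T}\sum_{t=1}^{T}\log p_\theta(x_t \mid x_{<t})\right)$$

with the reported number averaged over the validation sequences.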
## C.2. Hq Perplexity

As our best effort to devise an inexpensive-to-compute metric that is better aligned with model quality than perplexity on C4's validation set, and inspired by the evaluation conducted in Tirumala et al. (2023), we construct a *high-quality* validation set from non-web-scrape sources. We collate the validation sets from (1) the English portion of wiki40b (Guo et al., 2020), (2) the realnews and webtext subsets of C4, and (3) the news commentary portion of the LM1B dataset (Chelba et al., 2013).
## C.3. Glue

A popular natural-language-understanding meta-benchmark comprising eleven different tasks (Wang et al., 2018). Note that we report the average score over all individual tasks, after finetuning on the concatenation of all individual tasks' training sets, as is done in the original T5 implementation.
## C.4. Superglue

A harder meta-benchmark (vs. GLUE) built to further test the natural language understanding abilities of language models (Wang et al., 2019). Similar to GLUE, we report the average score of all tasks, and conduct fine-tuning on the concatenation of all tasks' training sets.
## C.5. Cnn/Dm

We use the CNN/DM dataset (Hermann et al., 2015) for testing our models' abstractive-summarization abilities. Following the original T5 setting, we finetune on the train-set and report ROUGE-2 scores.
## C.6. Squad

A popular dataset (Rajpurkar et al., 2016) used to evaluate the question-answering capabilities of language models; we compare the finetuned performance of our models using exact-match as the metric.
## C.7. Flan Instruction Tuning

A popular application of LLMs has been instruction-following and chatting capabilities. To test our models' quality on this front, we finetune our models on the FLANv2 dataset (Longpre et al., 2023a), and test the instruction-tuned models' performance on four fronts:

- 5-shot MMLU (Hendrycks et al., 2020): a popular benchmark consisting of exam questions from 57 tasks.
- 3-shot Big Bench Hard (BBH) (Srivastava et al., 2022): a popular set of the 23 hardest tasks from BIG-Bench.
- Reasoning: macro-average 8-shot performance on the GSM8k (Cobbe et al., 2021), SVAMP (Patel et al., 2021), ASDIV (Miao et al., 2021), and StrategyQA (Geva et al., 2021) benchmarks.
- QA: macro-average 0-shot performance on the UnifiedQA (Khashabi et al., 2020), BoolQ (Clark et al., 2019), Arc-Easy, and Arc-Challenge (Clark et al., 2018) benchmarks.
- Average: macro-average of all four benchmarking suites listed above: MMLU, BBH, Reasoning, and QA.

Please note that all of our reported numbers are based on *single checkpoint* evaluations, *i.e.*, we first select the best checkpoint during FLAN finetuning using the *average* performance on all tasks, and report the individual task performance for that checkpoint.
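The "Average" row above is a macro-average over suites, not a per-task mean, so a suite with few tasks weighs as much as one with many. A one-liner makes the distinction concrete (the per-suite numbers below are hypothetical):

```python
def macro_average(suite_scores):
    """Macro-average over benchmark suites: an unweighted mean, so each
    suite counts equally regardless of how many tasks it contains."""
    return sum(suite_scores.values()) / len(suite_scores)

# Hypothetical per-suite scores for one checkpoint.
avg = macro_average({"MMLU": 45.0, "BBH": 32.0, "Reasoning": 18.0, "QA": 61.0})  # → 39.0
```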
## D. Additional Results

## D.1. (Figure 9) Quality-Score Distribution For Different Samplers

For the different data-curation techniques listed in Appendix B, we examine the distribution of estimated *data-quality* scores, normalized so that higher represents better data quality.

- For the DENSITY sampler, the plotted score is proportional to the likelihood of the example under the kernel density estimate.
- For the Prototypes sampler, the plotted score is the negated cosine similarity of a data-point with its assigned cluster centroid.
- For the SemDeDup sampler, the plotted score is the negated maximum cosine similarity of a data-point to all other data-points in its respective cluster.
- For the perplexity-filtering sampler, the plotted score is the negated perplexity of a training sample.
- For the ASK-LLM sampler, the plotted score is the log probability of the token "yes" given the prompt in Figure 3.
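Since the raw scores above live on incomparable scales, comparisons across samplers use percentiles, as in the qualitative tables of Appendix E. A small hypothetical helper that maps any raw score vector to percentiles (higher = better):

```python
import numpy as np

def score_percentiles(scores):
    """Map raw sampler scores to percentiles in [0, 100], higher = better.
    Illustrative helper; ties and streaming computation are ignored."""
    ranks = np.argsort(np.argsort(scores))   # rank of each score, 0 = worst
    return 100.0 * ranks / (len(scores) - 1)

pct = score_percentiles(np.array([0.1, 0.5, 0.3]))  # → [0.0, 100.0, 50.0]
```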
## D.2. (Figures 10 To 16) Data-Quantity *Vs.* Model-Quality For Different Samplers

For the different data-curation techniques listed in Appendix B, we investigate the tradeoff between the sampling rate and the respectively trained model's quality on the downstream evaluations listed in Appendix C. We plot our results in the following figures:

- (Figure 10) **T5-Small, coverage**: Pre-training T5-Small on different amounts of data sampled by {Random sampling, DENSITY sampling, Self-supervised Prototypes sampling, SemDeDup}.
- (Figure 11) **T5-Large, coverage**: Pre-training T5-Large on different amounts of data sampled by {Random sampling, DENSITY sampling, Self-supervised Prototypes sampling, SemDeDup}.
- (Figure 12) **T5-Small, ASK-LLM**: Pre-training T5-Small on different amounts of data sampled by ASK-LLM using the {Flan-T5-Small, Flan-T5-Base, Flan-T5-Large, Flan-T5-XL, Flan-T5-XXL} scoring models.
- (Figure 13) **T5-Large, ASK-LLM**: Pre-training T5-Large on different amounts of data sampled by ASK-LLM using the {Flan-T5-Small, Flan-T5-Base, Flan-T5-Large, Flan-T5-XL, Flan-T5-XXL} scoring models.
- (Figure 14) **T5-Small, Perplexity filtering**: Pre-training T5-Small on different amounts of data sampled by Perplexity filtering using the {T5-Small, T5-Base, T5-Large, T5-XL, T5-XXL} scoring models.
- (Figure 15) **T5-Large, Perplexity filtering**: Pre-training T5-Large on different amounts of data sampled by Perplexity filtering using the {T5-Small, T5-Base, T5-Large, T5-XL, T5-XXL} scoring models.
- (Figure 16) **T5-Large, Perplexity filtering**: Pre-training T5-Large on different amounts of data sampled by Perplexity filtering using the {20k, 100k, 300k, 500k, 700k} intermediate checkpoints of T5-Large as data quality scoring models.
## D.3. (Figures 17 To 23) Quality Of Fresh *Vs.* Repeated Tokens For Different Samplers

We investigate the data-efficiency of the different data-curation techniques listed in Appendix B over the downstream evaluations listed in Appendix C, when stratifying by the maximum number of repetitions allowed over the sampled dataset. We plot our results in the following figures:

- (Figure 17) **T5-Small, coverage**: Average data-efficiency of pre-training T5-Small on data sampled by {Random sampling, DENSITY sampling, Self-supervised Prototypes sampling, SemDeDup}, stratified by the maximum number of allowed repetitions over the sampled dataset.
- (Figure 18) **T5-Large, coverage**: Average data-efficiency of pre-training T5-Large on data sampled by {Random sampling, DENSITY sampling, Self-supervised Prototypes sampling, SemDeDup}, stratified by the maximum number of allowed repetitions over the sampled dataset.
- (Figure 19) **T5-Small, ASK-LLM**: Average data-efficiency of pre-training T5-Small on data sampled by ASK-LLM using the {Flan-T5-Small, Flan-T5-Base, Flan-T5-Large, Flan-T5-XL, Flan-T5-XXL} scoring models, stratified by the maximum number of allowed repetitions over the sampled dataset.
- (Figure 20) **T5-Large, ASK-LLM**: Average data-efficiency of pre-training T5-Large on data sampled by ASK-LLM using the {Flan-T5-Small, Flan-T5-Base, Flan-T5-Large, Flan-T5-XL, Flan-T5-XXL} scoring models, stratified by the maximum number of allowed repetitions over the sampled dataset.
- (Figure 21) **T5-Small, Perplexity filtering**: Average data-efficiency of pre-training T5-Small on data sampled by Perplexity filtering using the {T5-Small, T5-Base, T5-Large, T5-XL, T5-XXL} scoring models, stratified by the maximum number of allowed repetitions over the sampled dataset.
- (Figure 22) **T5-Large, Perplexity filtering**: Average data-efficiency of pre-training T5-Large on data sampled by Perplexity filtering using the {T5-Small, T5-Base, T5-Large, T5-XL, T5-XXL} scoring models, stratified by the maximum number of allowed repetitions over the sampled dataset.
- (Figure 23) **T5-Large, Perplexity filtering**: Average data-efficiency of pre-training T5-Large on data sampled by Perplexity filtering using the {20k, 100k, 300k, 500k, 700k} intermediate checkpoints of T5-Large as data quality scoring models, stratified by the maximum number of allowed repetitions over the sampled dataset.
## D.4. (Figures 24 To 30) Data-Efficiency Of Different Samplers

We investigate the data-efficiency of the different data-curation techniques listed in Appendix B over the downstream evaluations listed in Appendix C, when stratifying by the sampling ratio (the size of the sampled dataset). We plot our results in the following figures:

- (Figure 24) **T5-Small, coverage**: Data-efficiency of pre-training T5-Small on data sampled by {Random sampling, DENSITY sampling, Self-supervised Prototypes sampling, SemDeDup}, stratified by the sampling ratio.
- (Figure 25) **T5-Large, coverage**: Data-efficiency of pre-training T5-Large on data sampled by {Random sampling, DENSITY sampling, Self-supervised Prototypes sampling, SemDeDup}, stratified by the sampling ratio.
- (Figure 26) **T5-Small, ASK-LLM**: Data-efficiency of pre-training T5-Small on data sampled by ASK-LLM using the {Flan-T5-Small, Flan-T5-Base, Flan-T5-Large, Flan-T5-XL, Flan-T5-XXL} scoring models, stratified by the sampling ratio.
- (Figure 27) **T5-Large, ASK-LLM**: Data-efficiency of pre-training T5-Large on data sampled by ASK-LLM using the {Flan-T5-Small, Flan-T5-Base, Flan-T5-Large, Flan-T5-XL, Flan-T5-XXL} scoring models, stratified by the sampling ratio.
- (Figure 28) **T5-Small, Perplexity filtering**: Data-efficiency of pre-training T5-Small on data sampled by Perplexity filtering using the {T5-Small, T5-Base, T5-Large, T5-XL, T5-XXL} scoring models, stratified by the sampling ratio.
- (Figure 29) **T5-Large, Perplexity filtering**: Data-efficiency of pre-training T5-Large on data sampled by Perplexity filtering using the {T5-Small, T5-Base, T5-Large, T5-XL, T5-XXL} scoring models, stratified by the sampling ratio.
- (Figure 30) **T5-Large, Perplexity filtering**: Data-efficiency of pre-training T5-Large on data sampled by Perplexity filtering using the {20k, 100k, 300k, 500k, 700k} intermediate checkpoints of T5-Large as data quality scoring models, stratified by the sampling ratio.
## E. Qualitative Results

In this section, we look at qualitative training samples sorted according to various data-quality scores. Along with the textual content of each training sample, we also list the estimated data-quality percentile for the ASK-LLM and perplexity-filtering samplers, *i.e.*, the percentile of the given data-point's quality score amongst the entire training set. A high percentile indicates that the sampler estimates this training sample to be of higher quality compared to other training samples in the dataset. To the best of our knowledge, we have manually excluded all NSFW examples.
## E.1. High-Quality Samples Identified By Ask-Llm We look at the training samples that *all* ASK-LLM scoring models, on average, think are good (*i.e.*, have a high percentile). To the best of our understanding, the overarching conclusions we make by observing these qualitative samples are: - ASK-LLM doesn't seem to have any length bias for good examples. - ASK-LLM can accurately tag high-quality training samples that contain a lot of proper nouns and named entities. Perplexity filtering gets these kind of samples wrong. - Even looking at this slice of only the highest-quality data tagged by ASK-LLM, perplexity filtering scores don't seem to correlate well with ASK-LLM scores as suggested by Figure 7. ASK-LLM Perplexity Filtering Small Base Large XL XXL Small Base Large XL XXL 93.33% 88.21% 88.11% 100.0% 99.99% 50.29% 30.34% 32.56% 31.61% 25.62% What constitutes overtime for a part-time employee? Question: What is overtime for a part-time employee? Overtime for a part-time employee is time that is beyond the part-time employee's ordinary hours of work or outside the agreed number of hours of work, as specified in their employment contract. ASK-LLM Perplexity Filtering Small Base Large XL XXL Small Base Large XL XXL 99.86% 98.54% 96.4% 96.3% 96.67% 46.2% 54.65% 46.2% 49.85% 20.33% Viva La Vegan! - Can a Vegan Lifestyle Help to Get Rid of Ocean Dead Zones? Can a Vegan Lifestyle Help to Get Rid of Ocean Dead Zones? A dead zone is an area at the bottom of the ocean that is oxygen depleted and cannot maintain any marine life. The biggest cause of these dead zones is an overflow of fertilizers, sewage and industrial pollutants being pumped into rivers all over the world. Thankfully dead zones can be reversed and living a vegan lifestyle can help enormously and I'll
{ "creation_datetime": "2024-03-04", "file_name": "2402.09668v1.md", "file_path": "paper_data/2402.09668v1.md", "file_size": 114352, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
deb4c8e7-2545-46b3-a81d-0444706d0722
## E.1. High-Quality Samples Identified By Ask-Llm 2% 54.65% 46.2% 49.85% 20.33% Viva La Vegan! - Can a Vegan Lifestyle Help to Get Rid of Ocean Dead Zones? Can a Vegan Lifestyle Help to Get Rid of Ocean Dead Zones? A dead zone is an area at the bottom of the ocean that is oxygen depleted and cannot maintain any marine life. The biggest cause of these dead zones is an overflow of fertilizers, sewage and industrial pollutants being pumped into rivers all over the world. Thankfully dead zones can be reversed and living a vegan lifestyle can help enormously and I'll show you how. What are Ocean Dead Zones? ...... Vegans don't want to harm the planet. On the contrary they want to save it and what better way than living with nature instead of against it and helping the planet in ways we probably never even realised, like helping to reverse our oceans dead zones. Next time you think about buying something you don't need, or eating food that is highly processed or non-organic, spare a thought for the largely unknown dead zones and how overconsumption and an unnatural lifestyle is slowly killing both you and them. ASK-LLM Perplexity Filtering Small Base Large XL XXL Small Base Large XL XXL 98.81% 98.96% 95.42% 99.53% 99.56% 88.1% 80.99% 77.13% 65.89% 73.79% Question: Is it necessary to dredge ponds and lakes in the upper coastal region of South Carolina? Answer: It is necessary to dredge ponds and lakes in South Carolina, in the upper coastal region of South Carolina. Each lake and each pond is a different environment and as years pass, these environments accumulate a lot of sediment. They tend to fill in with storm water runoff, they tend from natural leafy materials—whether it be grass clippings, leafy materials, storm water fun off, sand, silt, sediment, muck, mire. All of these produce in the bottoms of pond beds and lake beds. 
So it is absolutely necessary to do an evaluation every so many years to determine whether or not you need to remove the sediment that's accumulated. ASK-LLM Perplexity Filtering Small Base
{ "creation_datetime": "2024-03-04", "file_name": "2402.09668v1.md", "file_path": "paper_data/2402.09668v1.md", "file_size": 114352, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
59b64f69-2e8b-453f-996c-b633766435d5
## E.1. High-Quality Samples Identified By Ask-Llm each pond is a different environment and as years pass, these environments accumulate a lot of sediment. They tend to fill in with storm water runoff, they tend from natural leafy materials—whether it be grass clippings, leafy materials, storm water fun off, sand, silt, sediment, muck, mire. All of these produce in the bottoms of pond beds and lake beds. So it is absolutely necessary to do an evaluation every so many years to determine whether or not you need to remove the sediment that's accumulated. ASK-LLM Perplexity Filtering Small Base Large XL XXL Small Base Large XL XXL 88.93% 92.16% 90.3% 95.14% 93.44% 26.83% 34.32% 32.98% 31.14% 28.35% However, it's a long and challenging way to mass production. New Tesla Model 3 is an electric game-changer worth $35,000 and comes in classic black color. A single masterpiece in black now belongs to Tesla's CEO and co-founder Elon Musk. Why not mass market yet? Company has a quite complicated reason. Tesla needs to make sure that it can build, deliver and service enormous numbers of these awesome electric cars without sacrificing quality. Tesla will present 30 first cars at a launch celebration dated on July 28. 100 cars with production speed 3 cars per day dated for August. 1,500 cars will be ready for September. ... Owners of new Teslas will also enjoy exquisite aerodynamic wheel face. An itemized list of the Tesla Model 3's features, specs, and pricing is expected to be revealed on July 28, at the car's launch party. 5.6 seconds is what it gets the Model 3 to go from zero to 60 miles per hour, as May news says. Hot, right? It accelerates even faster than the base model BMW 3 Series or the famous Mercedes-Benz C Class, which are leaders in the compact luxury space. A single charge will allow minimum 215 miles of single drive. The roof in Model 3 is made almost entirely of glass, providing an incredible sense of space and infinity. 
Moreover, it blocks UV rays and manages the level of heat. ASK-LLM Perplexity Filtering Small Base Large XL
{ "creation_datetime": "2024-03-04", "file_name": "2402.09668v1.md", "file_path": "paper_data/2402.09668v1.md", "file_size": 114352, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d1439b6e-e246-41ed-a993-7e5e89e58a0f
## E.1. High-Quality Samples Identified By Ask-Llm 6 seconds is what it gets the Model 3 to go from zero to 60 miles per hour, as May news says. Hot, right? It accelerates even faster than the base model BMW 3 Series or the famous Mercedes-Benz C Class, which are leaders in the compact luxury space. A single charge will allow minimum 215 miles of single drive. The roof in Model 3 is made almost entirely of glass, providing an incredible sense of space and infinity. Moreover, it blocks UV rays and manages the level of heat. ASK-LLM Perplexity Filtering Small Base Large XL XXL Small Base Large XL XXL 89.28% 98.11% 98.93% 98.7% 96.32% 26.24% 19.14% 26.25% 26.05% 24.29% Landmines. Every month, 1200 people are maimed, and a further 800 killed throughout the world due to landmines. Landmine removal efforts are clearing about 100,000 mines a year, but at rate it will still be over 1000 years to get them all. The cost of clearing them is huge, with estimates in excess of $50 billion. Worse still, for every 5000 mines cleared, one person will die in the process. Hopefully the work that people like Vandiver and Tan can be built upon and further progress can be made in the fight to clear the world of landmines. The video below shows a group of minesweepers working with the kits- and it is clear even watching them that the level of understanding as to how the mine operates is already improving- giving them the knowledge they need to safely diffuse any mines they encounter. ASK-LLM Perplexity Filtering Small Base Large XL XXL Small Base Large XL XXL 87.79% 98.52% 90.11% 91.65% 88.09% 19.72% 17.88% 21.13% 16.95% 11.92% By all measures a successful chemical engineering undergraduate at Oregon Agricultural College, and wanting very much to continue his education and earn his PhD in chemistry, Linus Pauling wrote to several graduate programs across the country, inquiring in particular about
{ "creation_datetime": "2024-03-04", "file_name": "2402.09668v1.md", "file_path": "paper_data/2402.09668v1.md", "file_size": 114352, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3f77b603-8d53-407a-899e-c931d2a29bc3
## E.1. High-Quality Samples Identified By Ask-Llm M Perplexity Filtering Small Base Large XL XXL Small Base Large XL XXL 87.79% 98.52% 90.11% 91.65% 88.09% 19.72% 17.88% 21.13% 16.95% 11.92% By all measures a successful chemical engineering undergraduate at Oregon Agricultural College, and wanting very much to continue his education and earn his PhD in chemistry, Linus Pauling wrote to several graduate programs across the country, inquiring in particular about fellowships. Though he had proven himself to be prodigious talent as a student and, already, as a teacher, Pauling's location in Corvallis didn't carry a great deal of cache with the country's elite institutions. And given his family's shaky financial health, some measure of institutional funding was going to be required if he were to advance in the academy. ... During his sparse free time, Pauling wrote letter after letter to his girlfriend, Ava Helen Miller, who remained in Corvallis to continue work on her Home Economics degree at OAC. Having expressed a desire to marry at least twice before Linus left for California, only to be rebuffed by their families, the two decided in their letters that they would absolutely be wed once Pauling had finished his first year of classes and just prior to his resumption of more construction work during the summer. Their plan came to fruition in Salem, Oregon on June 17, 1923, and Ava Helen moved to Pasadena that fall to accompany her new husband during his second year as a graduate student. ASK-LLM Perplexity Filtering Small Base Large XL XXL Small Base Large XL XXL 87.08% 89.33% 95.26% 99.13% 99.94% 98.09% 97.52% 98.83% 97.39% 97.38% Bonelli, N.; Giordano, S.; Procissi, G. Enif-Lang: A Specialized Language for Programming Network Functions on Commodity Hardware. J. Sens. Actuator Netw. 2018, 7, 34. Bonelli N, Giordano S, Procissi G. Enif-Lang
Enif-Lang: A Specialized Language for Programming Network Functions on Commodity Hardware. Journal of Sensor and Actuator Networks. 2018; 7(3):34. Bonelli, Nicola; Giordano, Stefano; Procissi, Gregorio. 2018. "Enif-Lang: A Specialized Language for Programming Network Functions on Commodity Hardware." J. Sens. Actuator Netw. 7, no. 3: 34.

| Scorer | Small | Base | Large | XL | XXL |
|--------|-------|------|-------|----|-----|
| ASK-LLM | 96.41% | 86.03% | 97.38% | 95.91% | 90.8% |
| Perplexity filtering | 34.7% | 44.8% | 56.87% | 60.15% | 77.25% |

"What is your number one secret to productivity?" In recording their responses, Kruse came across some fascinating suggestions. What follows are some of my favorites. They focus on minutes, not hours. Most people default to hour and half-hour blocks on their calendar; highly successful people know that there are 1,440 minutes in every day and that there is nothing more valuable than time. Money can be lost and made again, but time spent can never be reclaimed. As legendary Olympic gymnast Shannon Miller told Kevin, "To this day, I keep a schedule that is almost minute by minute." You must master your minutes to master your life. ... Energy is everything. You can't make more minutes in the day, but you can increase your energy to increase your attention, focus, and productivity. Highly successful people don't skip meals, sleep, or breaks in the pursuit of more, more, more. Instead, they view food as fuel, sleep as recovery, and
breaks as opportunities to recharge in order to get even more done. Author of #1 bestselling book, Emotional Intelligence 2.0, and president of TalentSmart, world's leading provider of emotional intelligence.
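The scores reported for each sample are percentiles within the scoring pool. As a minimal sketch, assuming the reported percentile is the share of pool samples with a strictly lower raw score (the exact tie-breaking convention is our assumption, not stated in the paper):

```python
import bisect

def percentile_ranks(scores):
    """Map each raw quality score to its percentile within the pool.

    A sample's percentile is taken here as the fraction of pool samples
    scoring strictly lower -- the quantity these appendix tables report
    for both the ASK-LLM and perplexity scorers.
    """
    ordered = sorted(scores)
    n = len(scores)
    return [100.0 * bisect.bisect_left(ordered, s) / n for s in scores]
```

Under this convention, a sample at the 87.79th percentile for the Small scorer simply outscores roughly 88% of the pool on that scorer.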
## E.2. Low-Quality Samples Identified by ASK-LLM

We look at the training samples that *all* ASK-LLM scoring models, on average, think are bad (*i.e.*, that have a low percentile). To the best of our understanding, the overarching conclusions we draw from these qualitative samples are:

- ASK-LLM doesn't seem to have any length bias for bad examples.
- ASK-LLM filters hateful or toxic examples that might hurt LLM training.
- ASK-LLM rejects non-contextual samples, *e.g.*, those containing only questions with no answers, repeated nonsensical content, etc. Notably, perplexity filtering performs poorly in these cases, as such low-quality examples tend to have a low perplexity score.

| Scorer | Small | Base | Large | XL | XXL |
|--------|-------|------|-------|----|-----|
| ASK-LLM | 0.01% | 0.01% | 0.01% | 0.0% | 0.0% |
| Perplexity filtering | 40.46% | 25.66% | 27.42% | 25.6% | 28.12% |

Release name : Juiced2.Hot.Import.Nights-Multi5-RELOADED. ? Format : iso Juiced 2: HIN evolves the current street racing scene, letting players experience PC Repack DiRT Rally v1.1 ? Black Box Bears Cant Drift PC torrent uploaded. ? Juiced 2 ? ? ?? ? ???? ???? ? ??? ? ?? ? ? ? ? ????! . ... HIN evolves the current street racing scene, letting players experience the culture of the real-life HIN tour, the nation?s largest lifestyle custom. Juiced 2 Hot Import Nights Torrent. Bittorrent 729.64 MB. Juiced 2 Hot Import Nights Download free torrent at Largest Bittorrent Source with Several Listed Files. Now you can upload screenshots or other images (cover scans, disc scans,...
| Scorer | Small | Base | Large | XL | XXL |
|--------|-------|------|-------|----|-----|
| ASK-LLM | 5.41% | 3.86% | 0.49% | 0.8% | 6.24% |
| Perplexity filtering | 62.97% | 75.91% | 86.3% | 85.26% | 88.11% |

You were a good daughter the first day or two. Now, you are only showing the worst sides of yourself. I can only be sad and disappointed in you.

| Scorer | Small | Base | Large | XL | XXL |
|--------|-------|------|-------|----|-----|
| ASK-LLM | 1.08% | 0.41% | 6.16% | 2.46% | 1.44% |
| Perplexity filtering | 35.97% | 24.13% | 31.46% | 51.15% | 38.19% |

Kids can help you enrich your life? Be a better person? Learn to think about someone else? Apparently whoever said these things has never had children because from everything we have seen and experienced, kids are flat out horrible. College can't come fast enough.

| Scorer | Small | Base | Large | XL | XXL |
|--------|-------|------|-------|----|-----|
| ASK-LLM | 1.89% | 3.58% | 3.11% | 6.02% | 0.09% |
| Perplexity filtering | 18.09% | 22.8% | 25.61% | 19.14% | 47.01% |

EventsThis is how you can go ice skating with real penguinsGrab your tickets before they sell out! Can you spot anyone you know in these fun pics? EventsHow do I get tickets for Wimbledon 2018?
| Scorer | Small | Base | Large | XL | XXL |
|--------|-------|------|-------|----|-----|
| ASK-LLM | 2.17% | 1.11% | 3.75% | 2.0% | 5.31% |
| Perplexity filtering | 92.49% | 89.88% | 86.79% | 97.04% | 96.78% |

That I don't make you happy? We can start all over some day? Somewhere, are you dreaming of me? Won't you come back home to me?

| Scorer | Small | Base | Large | XL | XXL |
|--------|-------|------|-------|----|-----|
| ASK-LLM | 0.06% | 0.04% | 0.08% | 0.11% | 0.07% |
| Perplexity filtering | 68.86% | 51.15% | 44.08% | 35.81% | 19.28% |

? , ? , ? , ? , ? ? , ? ? . (1395). ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? . ? ? ? ? , 26(2), 145-159. ? ? ; ? ? ; ? ? ? ? . " ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ". ? ? ? ? , 26, 2, 1395, 145-159. ? , ? , ? , ? , ? ? , ? ? . (1395). ' ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ', ? ? ? ? , 26(2), pp. 145-159. ? , ? , ? , ? , ? ? , ? ? . ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? . ? ? ? ? , 1395; 26(2): 145-159. ? ? ? ? ? ? ? ? ? ? ? ? ? BHT ? ? ? ? ? ? ? DPPH ? ? ? ? ? ? ? ? ? ? ? ? . ? ? ? ? ? ? ? ? ? ? ? ?
(HPMC) ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ... Effect of the plasticizer on permeability, mechanical resistance and thermal behaviour of composite coating films. Powder Technology 238:14-19. Martos MV, Mohamady MA, Fern?ndez?L?pez J, Abd ElRazik KA, Omer EA, P?rez?Alvarez JA and Sendra E, 2011. In vitro antioxidant and antibacterial activities of essentials oils obtained from Egyptian aromatic plants. Food Control 22: 1715?1722. Phoopuritham P, Thongngam M, Yoksan R and Suppakul P, 2011. Antioxidant Properties of Selected Plant Extracts and Application in Packaging as Antioxidant Cellulose?Based Films for Vegetable Oil. Packaging Technology and Science 25: 125?136. Rojas?Gra? MA, Avena?Bustillos RJ, Olsen C, Friedman M, Henika PR, Martin?Belloso O, Pan Zh and McHughTH, 2007. Effects...

| Scorer | Small | Base | Large | XL | XXL |
|--------|-------|------|-------|----|-----|
| ASK-LLM | 0.01% | 0.02% | 0.02% | 0.01% | 0.0% |
| Perplexity filtering | 59.41% | 36.81% | 23.01% | 12.95% | 17.24% |

Showing results for tags 'A3arma_start'. I have a Error mesage "Addon 'A3_epoch_server' requires addon 'A3_epoch_config'" why is that and how can i fix
this? When i click Ok i get this My Start.cmd losk like this: arma3server.exe [email protected];@EpochHive; -config=C: ? arma 3 ? SC ? config.cfg -ip=192.168.71.234 -port=2301 -profiles=SC -cfg=C: ? arma 3 ? SC ? basic.cfg -name=SC This is my RPT file: ===================================================================== == C: ? arma 3 ? arma3server.exe == arma3server.exe [email protected];@EpochHive; -config=C: ? arma 3 ? SC ? ... 2:05:23 Updating base class ->RscListBox, by a3 ? ui_f ? config.bin/RscIGUIListBox/ 2:05:23 Updating base class ->RscListNBox, by a3 ? ui_f ? config.bin/RscIGUIListNBox/ 2:05:23 Updating base class ->RscText, by a3 ? ui_f ? config.bin/RscBackground/ 2:05:23 Updating base class ->RscText, by a3 ? ui_f ? config.bin/RscBackgroundGUI/ 2:05:23 Updating base class ->RscPicture, by a3 ? ui_f ? config.bin/RscBackgroundGUILeft/ 2:05:23 Updating base class ->RscPicture, by a3 ? ui_f ? config.bin/RscBackgroundGUIRight/ 2:05:23 Updating base class ->RscPicture, by a3 ? ui_f
? config.bin/RscBackgroundGUIBottom/ 2:05:23 Updating base class ->RscText, by a3...

| Scorer | Small | Base | Large | XL | XXL |
|--------|-------|------|-------|----|-----|
| ASK-LLM | 0.47% | 3.79% | 1.93% | 1.08% | 10.22% |
| Perplexity filtering | 51.15% | 46.92% | 63.04% | 44.77% | 41.35% |

10 February 2019 I have 2 houses (joint - me & my wife) in my name and 2 land (plots). Recently sold one of flat (100% cheque payment). Can I reinvest the Capital gains arriving out of sale in purchasing a flat? Note: I had reinvested earlier on (4 years ago) the similar captial gains to buy land from a house sale.
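The ASK-LLM score behind these percentiles is derived from a scoring model's answer to a quality prompt. The readout below is a minimal sketch of one plausible implementation — the softmax probability of the "yes" token at the scorer's first decoding step; the prompt template and token handling here are our assumptions, not the paper's exact code:

```python
import math

def yes_probability(first_step_logits, yes_token_id):
    """Softmax probability assigned to the 'yes' token, given the
    scoring model's vocabulary logits at its first decoding step.
    The max-logit shift keeps the exponentials numerically stable."""
    m = max(first_step_logits)
    exps = [math.exp(l - m) for l in first_step_logits]
    return exps[yes_token_id] / sum(exps)
```

Samples like the ones above receive a near-zero "yes" probability from every scorer size, even while a perplexity scorer can place them at a middling percentile.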
## E.3. Increasing-Quality Samples Identified by ASK-LLM

We look at the training samples that the ASK-LLM scoring models *disagree on* as we go from Flan-T5-Small → Flan-T5-XXL. Specifically, we look at training samples that Flan-T5-Small thinks are of low quality, whereas Flan-T5-XXL thinks otherwise. To the best of our understanding, our overarching conclusions from observing these qualitative samples are:

- Larger scoring models in ASK-LLM are able to identify training samples containing *tail-end* knowledge, *e.g.*, rare world events, rare named entities, etc.
- The increasing-quality trend going from Flan-T5-Small → Flan-T5-XXL isn't correlated with the size of the quality-scoring model in perplexity filtering.

| Scorer | Small | Base | Large | XL | XXL |
|--------|-------|------|-------|----|-----|
| ASK-LLM | 7.67% | 30.45% | 57.41% | 78.17% | 97.41% |
| Perplexity filtering | 15.56% | 31.02% | 24.14% | 50.59% | 49.64% |

The historic city of Manchester now features one of the most interesting public art installations that art lovers have ever witnessed. Design studio, Acrylicize installed five giant lamps in Piccadilly Place that represent the many historic periods that the city has gone through, including; Art Deco, Art Nouveau, Victorian, mid-century, and contemporary. The installation is without any doubt, a great piece of art but unlike other artworks, these are absolutely functional as well. Each lamp provides the many visitors with seating, shelter, light and even heat in the winters. The admirers can also witness the historic stories of Manchester via graphic illustrations on the lamps.

| Scorer | Small | Base | Large | XL | XXL |
|--------|-------|------|-------|----|-----|
| ASK-LLM | 10.48% | 31.26% | 54.17% | 84.17% | 97.93% |
| Perplexity filtering | 30.52% | 39.49% | 35.79% | 30.89% | 25.39% |

The Cokin Yellow and Pink Center Spot filter has a clear center and diffused yellow and pink edges. Theses diffused edges will be
produce blur while leaving the center sharp. The filter effect is directly influenced by the f-stop and the focal length. A lens shot at f/1.4 will see a greater blurring effect than f/8.0 and a 85mm lens will see more blur than a 28mm. Additionally, a longer focal length lens will visually increase the size of the center spot area because it sees less of the filter area.

| Scorer | Small | Base | Large | XL | XXL |
|--------|-------|------|-------|----|-----|
| ASK-LLM | 7.05% | 20.29% | 38.23% | 50.38% | 63.94% |
| Perplexity filtering | 22.41% | 14.8% | 12.69% | 20.68% | 8.62% |

Provide hoist coverage and 200 degree rotation for individual use in bays, along walls, or columns of plants, or as a supplement to an overhead crane or monorail system. This jib has the advantage of providing maximum lift for the hoist, since it can be installed very close to the underside of the lowest ceiling obstruction. It is composed of a vertical mast mounted to 2 brackets on a wall or vertical building beam with a boom that cantilevers out, perpendicular from the wall at the top.

| Scorer | Small | Base | Large | XL | XXL |
|--------|-------|------|-------|----|-----|
| ASK-LLM | 20.76% | 45.81% | 60.22% | 73.95% | 84.14% |
| Perplexity filtering | 2.98% | 2.94% | 3.49% | 2.51% | 2.09% |

The mighty Adyar River that flows through Chennai has a tale to tell.
Arun Krishnamurthy, founder, Environmentalist Foundation of India has documented the origin of the river, the journey and the culmination all captured in images aimed at sensitizing citizens of Chennai to a treasure that they are being denied. Titled Urban Waters, the photo exhibition on Adyar river will bring out Adyar's rich history, fine ecology, urban exploitation and her innate beauty through framed images. The exhibition is organised at Max Mueller Bhavan in Chennai. Goethe Institut, Max Mueller Bhavan is at 4, 5th Street, Rutland Gate, Chennai.

| Scorer | Small | Base | Large | XL | XXL |
|--------|-------|------|-------|----|-----|
| ASK-LLM | 4.27% | 22.22% | 47.57% | 82.58% | 92.4% |
| Perplexity filtering | 6.34% | 4.77% | 3.89% | 8.75% | 7.55% |

The Pendaries Village Skyline Subdivision is located near both the Santa Fe National Forest and the Pecos Wilderness in North Central New Mexico. It has the charm of small town New Mexico, perhaps even more so than its better known nearby sister cities. It offers a unique opportunity for people wishing to enjoy the quiet beauty of Northern New Mexico.

| Scorer | Small | Base | Large | XL | XXL |
|--------|-------|------|-------|----|-----|
| ASK-LLM | 22.09% | 66.57% | 76.56% | 85.51% | 96.98% |
| Perplexity filtering | 20.8% | 24.82% | 17.42% | 18.65% | 15.55% |

Anderson .Paak's new album, Oxnard, is a nod to the Southern California city
where Anderson grew up. It is the Grammy-nominated artist's third studio album and the first to be released on Dr. Dre's label Aftermath Entertainment. Oxnard includes his latest single, Tints featuring Kendrick Lamar along with album features from J Cole, Pusha T and many more. This is the album he dreamed of making in high school, when he was listening to Jay-Z's The Blueprint, The Game's The Documentary, and Kanye West's The College Dropout. The classic fourth album from the rap-god Eminem.

| Scorer | Small | Base | Large | XL | XXL |
|--------|-------|------|-------|----|-----|
| ASK-LLM | 0.98% | 24.84% | 53.36% | 88.98% | 98.18% |
| Perplexity filtering | 2.3% | 1.48% | 2.03% | 2.1% | 3.07% |

The Disknet is a networking solution which uses the external floppy drive port of the Amiga. It uses the same coax cabling as 10Base2 Ethernet (RG-58U/50Ohm) but is NOT compatible and is capable of transferring at around 45k/sec. The Disknet may be the same device as the AmigaLink, but this has not been confirmed.
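The samples in this section and the next are selected by comparing percentiles across scorer sizes. A sketch of one plausible selection rule — a fixed percentile-gap threshold, which is our illustrative assumption rather than the paper's stated criterion:

```python
def scorer_disagreements(small_pct, xxl_pct, margin=50.0):
    """Split samples by how the Small and XXL scorers' percentiles move.

    'rising' collects indices the XXL scorer rates at least `margin`
    percentile points higher than the Small scorer (the E.3 cases);
    'falling' collects the reverse (the E.4 cases)."""
    rising = [i for i, (s, x) in enumerate(zip(small_pct, xxl_pct)) if x - s >= margin]
    falling = [i for i, (s, x) in enumerate(zip(small_pct, xxl_pct)) if s - x >= margin]
    return rising, falling
```

For instance, the Manchester-lamps sample above (Small 7.67%, XXL 97.41%) lands in `rising` under any reasonable margin.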
## E.4. Decreasing-Quality Samples Identified by ASK-LLM

We look at the training samples that the ASK-LLM scoring models *disagree on* as we go from Flan-T5-Small → Flan-T5-XXL. Specifically, we look at training samples that Flan-T5-XXL thinks are of low quality, whereas Flan-T5-Small thinks otherwise. To the best of our understanding, our overarching conclusions from observing these qualitative samples are:

- Smaller quality-scoring models sometimes mislabel training samples that contain, *e.g.*, non-informative or repeated content.
- The decreasing-quality trend going from Flan-T5-Small → Flan-T5-XXL isn't correlated with the size of the quality-scoring model in perplexity filtering.

| Scorer | Small | Base | Large | XL | XXL |
|--------|-------|------|-------|----|-----|
| ASK-LLM | 64.05% | 46.39% | 35.92% | 25.29% | 9.63% |
| Perplexity filtering | 4.3% | 10.21% | 3.47% | 3.34% | 3.35% |

one filled with goodwill and cheer. who have supported me thru the year. I wouldn't be changing careers. instead of on strange people's rears. Wishes You a Healthy, Happy Holidays! Ah, how the mighty have fallen! And a Merry fave to you ... and a happy new rear. From one Xmas humor story to another, enjoyed this! Thanks Jack & Susan! Doug, I checked him out–wonderful stuff! Will pass along the good word. Fun and funny–as always! Thanks for the cheer! I can only fave this once, but I've looked at it repeatedly over what has been a bizarre week– and each time you've given me a laugh. That's a gift Bob and I'm grateful! Best of holidays to you and a great New Year!

| Scorer | Small | Base | Large | XL | XXL |
|--------|-------|------|-------|----|-----|
| ASK-LLM | 91.25% | 71.8% | 53.1% | 24.11% | 4.53% |
| Perplexity filtering | 32.4% | 36.56% | 46.53% | 48.19% | 54.84% |
I hear people saying that vinyl records have a better sound quality than CDs or even DVDs. A mini LP is a CD version of something that was originally released as a 12" (12 inch) vinyl LP. In many cases the packaging is superior to, or at least. Vitalogy; Studio album by Pearl Jam; Released: Vinyl: November 22, 1994 CD: December 6, 1994: Recorded: November 1993 - October 1994: Studio: Bad Animals Studio. Browse best sellers, new releases, AutoRip CDs and vinyl records, deals, vinyl Audio CD. 7.99. From A Room: Volume 1. Chris Stapleton. Audio. The one and only CD, DVD, VIDEO, DJ, VINYL, ERO store. Search our full catalog. Recordstore.co.uk. The UK's leading online record store. Buy new and exclusive signed bundles, CDs, LPs, Merchandise and box sets. Recordstore Day, every. Vinyl Records to CD Conversion - Cheapest on the net! High-quality, standards-compliant CD-Audio of your favorite vinyl records, saved for posterity. Custom CD, DVD Vinyl Packaging You're just a click away from a gorgeous, retail-ready CD or DVD in professional disc packaging. We also offer a full-range of Vinyl. ... Buy with confidence as the. Mar 4, 2017 Despite the decline in mainstream CD usage, some consumers still have CD recording needs for radio, vinyl and other formats. Here are our. 12 results . You can finally burn your cassettes and vinyl records to CD with Crosley's Memory Master II CD Recorder. Just play your cassette or record One Nation is back after the Sold Out New Years Eve event with yet another From its esoteric origins releasing field recordings of steam engines on vinyl to our latest
critically acclaimed Ultradisc UHR™ SACDs, Mobile Fidelity Sound. How much are worth and valued your rare and collectable vinyl and cd by searching on Music Price Guide archive. Heel veel CD, LP, Vinyl SACD op voorraad, snelle levertijden en altijd superscherp geprijsd en lage verzendkosten, voor 17:00 besteld morgen Some of the greatest music ever made isn t available digitally, on mp3, or on CD; but rather is only available on vinyl. Moreover, if you already have purchased.

| Scorer | Small | Base | Large | XL | XXL |
|--------|-------|------|-------|----|-----|
| ASK-LLM | 96.67% | 76.07% | 47.33% | 30.0% | 7.97% |
| Perplexity filtering | 32.02% | 21.27% | 24.31% | 25.77% | 23.7% |
## �� A brilliant performance by Year 6 based on The Lion King. Brilliant singing and acting from everyone, congratulations Year 6! A big thank you to all the staff that helped with everything from costumes, set design, make up and directing. A wonderful commemoration of the seven years that Year 6 students have spent at The Good Shepherd. Thank you to all of the parents and staff for attending this celebration and we wish all of the children continued success in their new schools and hope they continue to do themselves proud. Well done to Foundation for showing us what it is to be good friends! This week we have been looking at all the countries in the world that speak Spanish as their native language, there are 21! So throughout school we spent a day learning lots of wonderful things about our chosen country. We looked at maps, flags, famous people, food and so much more! Below is a little glimpse into our fabulous week. ... Click on the links to take a look at some of the brilliant things we got up to! Faith in Families is a charity based here in Nottingham who believe, as we do, that all children have the right to grow up as part of a loving and nurturing family and they provide services for children and families. We learnt lots about adoption and what it can mean for children and their family. We learnt about Fairtrade and all the fantastic work they do around the world. We also discovered lots of products that we did not know were Fairtrade. There was also a sell out Fairtrade food sale, well done everyone! Year 2 have been able to show off our brilliant new high visibility jackets! Now we will be able to stay safe and visible on any out of school trips. We are very lucky to have these donated by Walton & Allen. Thank you! Click on the high visibility jacket to take a look at our super jackets! Year 4 have wowed us with their acting skills in a brilliant performance of Ali Baba - well done Year 4! Year... 
| Scorer | Small | Base | Large | XL | XXL |
|--------|-------|------|-------|----|-----|
| ASK-LLM | 90.79% | 75.97% | 58.89% | 18.06% | 3.0% |
| Perplexity filtering | 13.65% | 16.88% | 17.85% | 14.36% | 13.67% |

Search result for " For Sale " We supply Germany made embalming powder in small quantities from 1
## �� their acting skills in a brilliant performance of Ali Baba - well done Year 4! Year... ASK-LLM Perplexity Filtering Small Base Large XL XXL Small Base Large XL XXL 90.79% 75.97% 58.89% 18.06% 3.0% 13.65% 16.88% 17.85% 14.36% 13.67% Search result for " For Sale " We supply Germany made embalming powder in small quantities from 1 kg at affordable prices. We have white and pink 100% hot and 98% pink in stock. Call us on +27786893835 for details. EMBALMING.. EMBALMING POWDER CALL +27786893835 Hager Werken Embalming Compound Pink Powder call +27786893835 in General items from Germany Embalming compound in powder form both PINK and WHITE Radio active.. Sierra Residences Type B, Sg Ara near PISA, Factory,Air-port Sierra Residences (ID: 5695) ================== Monthly Rent: RM 1,000 BU: 1182 sq.ft. Newly Renovated/NOT Furnished - 3.. Very Strategic and Highly Potential LAND 9.7 Acres Converted Residential Land For Sale in Taman Melawati !!!!! Taman Melawati development land , Titile : Freehold, non bumi land. Status:.. I am a Certified Private Loan Lender, Do you need a Fast and Guarantee loan to pay your bills or start up a Business? I offer both local and international loan services to meet your financial needs.. ... Introducing our mining company to you for a very fruitful business transaction. we are a miners who have come together to upgrade our production through the introduction of modern technology and.. Commercial land for sale. Location near to Premium Outlet. Size = 32 acres Good land shape and very suitable for development. Selling price RM 60 per sf. Interested party kindly contact.. Keterangan : * Tanah yang rata dan sangat startegik untuk buat rumah kediaman/rumah rehat (homestay), atau untuk rumah penginapan sendirian/Percutian (vacation home
## �� a very fruitful business transaction. we are a miners who have come together to upgrade our production through the introduction of modern technology and.. Commercial land for sale. Location near to Premium Outlet. Size = 32 acres Good land shape and very suitable for development. Selling price RM 60 per sf. Interested party kindly contact.. Keterangan : * Tanah yang rata dan sangat startegik untuk buat rumah kediaman/rumah rehat (homestay), atau untuk rumah penginapan sendirian/Percutian (vacation home) * Tanah lot tepi berdekatan.. Limited gated Semi D at Sri petaling,fully furnish with lift and move in condition.newly buit,modern,spacius and practical.Prime location for own stay,good gated security and easy access to few main.. Land for sale in MELAKA ! Price : RM 65 per sq fit (or roughly U$D 17 per sq fit ) Size : 53000 sf Property type :freehold housing land Location : Jalan Laksamana Cheng Ho, .. ASK-LLM Perplexity Filtering Small Base Large XL XXL Small Base Large XL XXL 94.72% 87.31% 78.07% 13.77% 6.51% 5.75% 9.63% 13.12% 17.51% 17.12% FIFA 20 CONFIRMED TRANSFERS SUMMER 2019 & RUMOURS | w/ ALEX SANDRO BALE & NEYMAR JR. TO BARCELONA!! Top 10 Worst Transfers In Football History! 70 CONFIRMED TRANSFERS JANUARY 2019 ———————— Thank You For Watching ——————————— * Like + Subscribe * =================. FIFA 20 | CONFIRMED TRANSFERS SUMMER 2019 & RUMOURS | w ZIDANE COUTINHO & RONALDO BACK TO R.MADRID! REBUILDING REAL MADRID | DREAM TEAM LINEUP 2019-2020 | POTENTIAL TRANSFERS | w/ NEYMAR & RONALDO! FIFA 20 | CONFIRMED TRANSFERS SUM
> …MER 2019 & RUMOURS | w BALE FEKIR UMTITI & NEYMAR £300M TO MADRID! SUBSCRIBE http://bit.ly/SoccerAMSub Dean from 442oons is back with his list of the top 5 deals that were done on transfer deadline day. Do you agree with .. FIFA 20 | CONFIRMED TRANSFERS SUMMER 2019 & RUMOURS | w STERLING JAMES AUBAMEYANG & GRIEZMANN! SUBSCRIBE to FOOTBALL DAILY: http://bit.ly/fdsubscribe Last week we broke down our best signings of the summer so far. Now lets expose the worst! Top 150 confirmed transfers / signings of the summer transfer window 2018 ft. Ronaldo, Mbappe, Mahrez, Vidal, Courtois... THANK FOR WATCHING! FIFA 20 | CONFIRMED TRANSFERS SUMMER 2019 & RUMOURS | w/ POGBA SANCHO THIAGO & MESSI TO INTER!!

| Scorer | Small | Base | Large | XL | XXL |
|---|---|---|---|---|---|
| ASK-LLM | 86.25% | 69.2% | 61.9% | 46.57% | 19.99% |
| Perplexity Filtering | 76.61% | 71.91% | 94.86% | 92.93% | 94.99% |

> Phone 1300 616 202 if you're looking for a trustworthy, experienced and licensed Plumber Leopold. We know that getting plumbing repairs in Leopold can be a pain and you've got better things to do than look for a plumber. Clearwater Plumbing and Maintenance will save you from any unnecessary hassle and expense for
> …a Plumber Leopold. We make sure that wherever you need a Plumber Leopold, Clearwater Plumbing and Maintenance will assist you with your plumbing worries. Plumbing problems with your taps, toilets, gas, hot water and drains are painful enough. You don't need the extra stress of finding a Plumber Leopold that you can trust. And what about all of those plumbers in Leopold who don't clean up after themselves, leaving mud and materials all over your home? Our professional team are different! ... Do you have hot water system repairs Leopold. We have highly experienced plumbers who know how to fix hot water systems Leopold. There can be many possible reasons why your hot water system Leopold is broken. Our Leopold plumbers are reliable, fast and know hot to diagnose problems. Our hot water system repairs Leopold plumbers are trained and qualified. To book an appointment, please call 1300 616 202. We will do our best to get a plumber to you in Leopold as soon as possible. If you notice that there is water leaking from the bottom of your hot water system in Leopold, chances are the system is completely broken. In this scenario, you will need to replace your hot water system in Leopold. Our team of plumbers can help you to choose what hot water system you will need.

| Scorer | Small | Base | Large | XL | XXL |
|---|---|---|---|---|---|
| ASK-LLM | 82.64% | 75.2% | 63.2% | 29.51% | 8.94% |
| Perplexity Filtering | 78.34% | 82.07% | 91.01% | 87.78% | 88.02% |
> You can now configure the minimum TLS protocol level for client connections and connections to other servers. Refer to the following page for more information: Advanced TLS. You can now set an Integrated Capture Point (ICP) to stopped mode by changing the state of the corresponding configuration object to disabled; changing the state to enabled restarts the inbound cycle of the ICP. You can now set the minimum TLS protocol level for the Web Service Capture Point by configuring the option <sec-protocol> in the section <settings> of the Capture Point object. ... Support for the following databases. See the Supported Operating Environment: eServices page for more detailed information and a list of all supported databases. No special procedure is required to upgrade to release 8.5.201.05. Retrieved from "https://docs.genesys.com/Documentation:RN:mm-ixn-svr85rn:mm-ixn-svr8520105:8.5.x (2019-04-21 22:59:48)" This page was last modified on November 8, 2018, at 08:48.

| Scorer | Small | Base | Large | XL | XXL |
|---|---|---|---|---|---|
| ASK-LLM | 62.21% | 54.71% | 35.73% | 22.64% | 6.76% |
| Perplexity Filtering | 64.82% | 85.95% | 94.65% | 93.35% | 85.29% |

> are willing to provide you with perfect services and striding for Display Stand For Boutique , Display Stand for Boutique , Display Stand for Phone , Our product quality is one of the major concerns and has been produced to meet the customer's standards. "Customer services
> …and relationship" is another important area which we understand good communication and relationships with our customers is the most significant power to run it as a long term business. "We have quite a few great team customers very good at internet marketing, QC, and dealing with kinds of troublesome trouble while in the output approach for Display Stand For Boutique , Display Stand for Boutique , Display Stand for Phone , We set a strict quality control system. We've got return and exchange policy and you can exchange within 7 days after receive the wigs if it is in new station and we service repairing free for our solutions. You should feel free to contact us for further information and we are going to give you competitive price list then.
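The percentile scores attached to the examples above come from asking a proxy language model whether each sample is informative for pre-training, then reducing the model's answer to a probability. A minimal sketch of that reduction (the softmax over yes/no logits and the helper names here are illustrative assumptions, not the paper's exact implementation):

```python
import math

def ask_llm_score(yes_logit: float, no_logit: float) -> float:
    """Reduce a judge model's logits for 'yes'/'no' answers to a quality
    score in (0, 1): the softmax probability assigned to 'yes'."""
    m = max(yes_logit, no_logit)  # subtract max to stabilize exp()
    ey = math.exp(yes_logit - m)
    en = math.exp(no_logit - m)
    return ey / (ey + en)

def rank_by_quality(scored_docs):
    """Sort (doc, yes_logit, no_logit) triples by descending quality,
    as a pruning sampler would before keeping the top fraction."""
    return sorted(scored_docs, key=lambda d: ask_llm_score(d[1], d[2]),
                  reverse=True)

docs = [("school play recap", 2.3, -1.1),      # coherent prose
        ("embalming powder spam", -3.0, 1.5)]  # keyword-stuffed ad
print([d[0] for d in rank_by_quality(docs)])
# → ['school play recap', 'embalming powder spam']
```

In contrast, perplexity filtering above scores each document only by how predictable its tokens are, which is why the two columns often disagree on spammy but low-perplexity text.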
# AgentLens: Visual Analysis for Agent Behaviors in LLM-Based Autonomous Systems

Jiaying Lu, Bo Pan, Jieyi Chen, Yingchaojie Feng, Jingyuan Hu, Yuchen Peng, Wei Chen
{ "creation_datetime": "2024-03-04", "file_name": "2402.08995v1.md", "file_path": "paper_data/2402.08995v1.md", "file_size": 100775, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3dd379c9-5f17-4643-aa06-a5e22960aeae
## Abstract

Recently, Large Language Model-based Autonomous Systems (LLMAS) have gained great popularity for their potential to simulate the complicated behaviors of human societies. One of the main challenges is to present and analyze the dynamic evolution of events in an LLMAS. In this work, we present a visualization approach to explore the detailed statuses and agents' behaviors within an LLMAS. We propose a general pipeline that establishes a behavior structure from raw LLMAS execution events, leverages a behavior summarization algorithm to construct a hierarchical summary of the entire structure in time sequence, and applies a cause trace method to mine the causal relationships between agent behaviors. We then develop *AgentLens*, a visual analysis system that leverages a hierarchical temporal visualization to illustrate the evolution of an LLMAS and supports users in interactively investigating the details and causes of agents' behaviors. Two usage scenarios and a user study demonstrate the effectiveness and usability of our *AgentLens*.

Index Terms—LLM, autonomous system, agent, visual analysis.
## 1 Introduction

Autonomous agents, as computational entities that possess a certain degree of autonomy [1], [2], are seen as a promising pathway toward achieving artificial general intelligence (AGI) [3], [4]. In recent years, owing to the breakthroughs in natural language processing [5]–[7] achieved by Large Language Models (LLM), the LLM-based autonomous agent has gained widespread adoption in both academia and industry [8], [9]. Built upon LLM-based agents, LLM-based autonomous systems (LLMAS) deploy multiple agents within a shared environment, enabling them to display behavior and social patterns akin to humans. This collective intelligence fosters emergent social dynamics, such as the formation of new relationships, the diffusion of information, and the rise of coordination among agents [10]. Consequently, LLMAS exhibit significant potential in society simulation [10], [11], software engineering [12], [13], and scientific research [14]. However, monitoring and analyzing the dynamic evolution of an LLMAS, including its agents and the event sequences they undertake, can be challenging due to the tremendous amount of information generated during system evolution and the inherent unpredictability of LLMs. The most straightforward approach for analyzing an LLMAS is to inject logging code into it to trace agent events of interest and check the raw output logs in text format [15]. However, this approach requires expertise with the specific LLMAS and is unintuitive for general users. To address this, many LLMAS projects provide a graphical representation of the simulation process [9], typically a re-playable 2D [10], [16]–[18] or 3D video [19]–[22]. By transforming a fixed sequence of intermediate simulation events into expressive visual recordings, users can digest that information more efficiently and intuitively.
However, a re-playable recording with a fixed level of abstraction limits the flexibility of analysis for LLMAS. Even for a specific LLMAS and a fixed usage scenario, a user's short-term analysis target will change frequently during the analysis process. As the users' analysis target varies, the type, quantity, and granularity of agent events to be visualized also need to change. Moreover, analyzing the agent's behavior at a specific time point requires users to switch the recording back and forth to trace the cause and consequence of this behavior, which is tedious and unreliable. This work thus presents a visualization approach to assist users in efficiently analyzing the evolving status and complex behaviors of agents within an
LLMAS. To mitigate cognitive overload due to the profusion of data produced throughout the evolution of an LLMAS, and to enhance adaptability for subsequent analytical processes, we introduce a general pipeline that establishes a hierarchical behavior structure over the agent entities and raw event sequences in the LLMAS operational records. The formulation of this structure is based on our survey of prevalent architectures within extant LLMAS, coupled with a design study that engaged 4 LLMAS developers and 4 layman users. We design an LLM-based algorithm for summarizing agent behavior that furnishes a hierarchical depiction of sequences of agent events. Additionally, we employ a cause trace method to unearth the causal linkages among disparate agent events. Based on the extracted hierarchical structure, we then develop *AgentLens*, a visual analysis system designed to facilitate interactive analysis and exploration of agent behaviors in LLMAS. AgentLens provides a multi-faceted perspective on LLMAS through its three distinct but interrelated views, each offering a different level of abstraction. The Outline View (Fig.
1, A ) illustrates the spatiotemporal trajectory of each agent with curves of different colors, aiding users in identifying notable agents or their intriguing behaviors throughout the evolution of LLMAS. Users can quickly scan agent behaviors at different granularity (Fig. 1, A1 ), identify agent interaction of interest (Fig. 1, A2 ), perform topic search
(Fig. 1, A3 ), and click any time point on an agent curve to further investigate it in the *Agent View* (Fig. 1, B ). The Agent View allows users to progressively reveal agent event information on demand and trace the cause of certain agent behaviors. The *Monitor View* (Fig. 1, C ) automatically adjusts the graphical representation of the LLMAS based on the user's current point of interest in the *Outline View* or the Agent View. To evaluate the performance of *AgentLens*, we present two cases and conduct a user study with 14 participants to gather their feedback. The results indicate that *AgentLens* is capable of assisting users in LLMAS evolution analysis and agent behavior investigation. The main contributions of our work are as follows:

• To the best of our knowledge, our work is the first visual analysis system that enables analysis and exploration of agent behaviors within LLMAS.
• We propose a general pipeline that establishes a hierarchical behavior structure from raw LLMAS execution events to facilitate downstream analysis.
• We conduct two case studies and a user study to demonstrate the capabilities of our system. The evaluation results confirm the usefulness and effectiveness of the behavior structure and *AgentLens*.
## 2 Related Work

## 2.1 LLM-Based Autonomous Agents

Franklin *et al.* [23] defined the agent as an entity situated in the environment that senses the environment and acts on it over time, in pursuit of its own agenda and so as to affect what it senses in the future. Possessing the ability to perform intelligent operations without human intervention, the autonomous agent remains a steadfast goal in artificial intelligence research [3], [24]. The progression of LLMs [6], [25] has underscored exceptional proficiency in areas of comprehension, reasoning, and language generation [26], which kindled optimism for continued advancements in the realm of autonomous agents. With the advent of LLMs, the study of LLM-based autonomous agents began to thrive. This includes enhancing agents' self-reflective capabilities [27], [28], implementing superior task decomposition strategies [29], and endowing the ability to utilize and create tools [30]–[33]. There is also a vibrant development of applications of LLM-based agents in the open source community [15], [34], [35]. Recently, researchers have found that LLM-based agents can address a wider range of tasks through collaboration or competition. Camel [36] presented a framework that emphasizes the autonomous interaction between communicative agents. It is capable of creating varied, detailed instructions across numerous tasks, thereby providing a platform for these agents to demonstrate their cognitive operations. Talebirad *et al.* [37] introduced a comprehensive framework for multi-agent collaboration based on LLMs. ProAgent [18] exhibited the distinctive ability for agents to foresee the upcoming decisions of collaborators and adjust their behaviors, enabling them to excel in cooperative reasoning tasks.
Multi-Agent Debate (MAD) [38] introduced an approach in which several agents present their arguments collaboratively while a judge guides the discourse, enhancing agents' divergent thinking for deep-reflective tasks. However, as the number and the intricacy of agents increase, the complexity of analyzing their behaviors escalates rapidly. While past works have focused on elevating the capabilities of LLM-based agents in emulating human-like behaviors, they often overlooked how to effectively analyze agent behaviors. In this work, we identify this research gap and present a visualization approach for analyzing agent behaviors in LLM-based multi-agent systems.
## 2.2 LLM-Based Autonomous System

By incorporating numerous LLM-based agents into a cohesive environment, an LLMAS is capable of handling diverse complex scenarios. For example, WebAgent [39] demonstrated the possibility of building agents that can complete tasks on real websites following natural language instructions. ChatDev [12] and MetaGPT [13] experimented with software development in multi-agent communication settings. Zhang *et al.* [19] built embodied agents to cooperate effectively with humans. Park *et al.* [10] situate generative agents with unique characteristics in a societal context, in order to mimic human social behaviors. Several task-independent frameworks designed for diverse usages have received considerable attention within the community. AgentVerse [17] dynamically assembled multi-agent teams tailored to task complexities, outperforming individual agents with adaptable team structures. AgentSims [16] offered a real-time evaluation platform for LLM-based agents, enabling adaptable configurations to facilitate the performance evaluation of different modules. AutoGen [40] fostered conversations among multiple agents and organized individual insights in a general manner, offering an interconnected way to coordinate multiple agents within the LLMAS. MetaGPT [13] injects effective human workflows into multi-agent collaboration by encoding Standardized Operational Procedures (SOP) into prompts, underscoring the potential of incorporating human domain expertise into LLMAS. CGMI [11] replicated human interactions and imitated human routines in real-world scenarios, which enhances the realism of more humanized simulation of complex social scenarios.
Previous LLMAS research has primarily focused on constructing more universal frameworks or designing for specific domains, yet there has been a noticeable lack of emphasis on methods for analyzing parallel behaviors among agents within LLMAS. Contemporary LLMAS predominantly depend on conventional methods for surveillance and analysis. MetaGPT [13] utilizes log outputs for record maintenance, while Park *et al.* [10] adopt panoramic videos for observation, providing detailed maps with agent avatars to denote their locations and behaviors. Distinct from preceding efforts, our work offers an interactive visual system that hierarchically organizes events, facilitating users in quickly grasping the happenings within LLMAS.
## 2.3 Event Sequence Visualization

Data featuring time-based event sequences is widespread and can be found in various sectors, including healthcare records [41]–[43], career design [44], [45] and social interactions [46]–[48]. In these fields, distinct types of time-stamped events are sequentially organized, each relevant to a particular subject or entity. While earlier methods [49], [50] have been geared toward simpler, low-dimensional data, the data sets encountered in real-world scenarios frequently display a higher level of complexity, calling for more comprehensive analytical ideas and methods. A substantial amount of research on event sequence visualization is notably correlated with fields where there is a prevalent demand for event information condensation, such as the realm of social media data [51], the sphere of smart manufacturing [52], and the study of anomalous user behaviors [53]. Guo *et al.* [54] proposed an organizational framework for event sequences to summarize the common goal of different properties with great heterogeneity. EventThread [44] focuses on visualization and cluster analysis, providing an interactive interface for browsing and summarizing event sequence data. Building on past frameworks for condensing and visualizing events, we focus on the behavioral patterns of LLM agents and propose an LLM-driven approach to handle unstructured, natural-language-based event sequences. Event sequence visualization has highly relevant applications in the realm of collective behavior analysis, which aligns closely with the focus of our research, both referring to activities conducted by a temporary and unstructured group of people [47], [55], [56]. In the field of social media, collective actions emerge from the collaborative efforts of users engaged in disseminating information and navigating through virtual spaces.
A variety of sophisticated visual analytics methodologies have been introduced to scrutinize these group dynamics. R-map [57], Socialwave [58], FluxFlow [59] and Google+ ripples [60] are specifically tailored to examine the mechanics of information propagation, while Maqui [61] and Frequence [46] offers insights into the complexities of human mobility within this context. While existing research has made significant contributions to the field, there's a growing need to address the increasingly complex behaviors and interactions that call for the advancement of autonomous systems. Our work introduces event sequence visualization as an integral tool for the analysis and exploration of LLMAS.
## 3 Overview

## 3.1 Common Architecture of LLMAS

To ensure maximum compatibility with various LLMAS, we survey LLMAS-related papers [27], [28], [62]–[69] as well as some projects [70]–[79] with high stars in open source communities published before August 31, 2023. We analyzed their system architectures and components, based on which we abstract a common architecture (as shown in Fig. 2) for LLMAS. The **system state** in LLMAS provides the environmental information at any time point. At each time point, each **agent** executes its own **task**, which consists of several atomic **operations**. A raw event is generated whenever an operation is executed by an agent, thereby advancing the evolution of LLMAS. System State provides a comprehensive understanding of the environment. By acquiring environmental information from the system state, agents can comprehend the current context and conditions. For example, the system state can inform agents about object locations and environmental properties, which significantly impact their decision-making and planning processes. In addition, the system state governs the timelines of each agent, ensuring events by different agents are temporally aligned. Agents are autonomous entities with cognitive abilities and action capability. By performing various types of tasks, agents can interact with the environment and gradually change the system state to achieve their goals. Additionally, agents can communicate and collaborate with each other. They can share their knowledge and exchange messages to accomplish more complex duties. Tasks are typically customized for the usage scenario of LLMAS. A sequence of operations with a common goal can be grouped as a task. Extending prior research that has focused on different scenarios for agents, we classify tasks into three categories: Perceive, Think, and Act.
In *Perceive* tasks, the agent obtains perception of the external system. Such perception includes sensing the environment (virtual, real, or external resources), as well as perceiving other agents. In *Think* tasks, the agent engages in decision-making, reasoning, planning, and other behaviors based on external perception and its own memory. In *Act* tasks, the agent interacts with the external system by providing outputs, including text outputs, virtual actions, or specific invocations such as tool usage. Operations are the basic units for Tasks. Operations can be classified based on their target, including Environmental Operations, Memory Operations, and Decision Operations. *Environment Operations* execute interactions toward the external system, including other agents and the environment defined by LLMAS.
*Memory Operations* involve storing and updating the memory of an agent. *Decision Operations* are for decision-making and action planning, where LLM-based agents typically utilize LLMs for decision operations.
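The architecture above (agents executing tasks composed of atomic operations, aligned on system-state timestamps) can be sketched as a small set of data types. The class and field names here are illustrative assumptions for exposition, not AgentLens's actual implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Operation:
    kind: str      # "environment" | "memory" | "decision"
    time: int      # system-state timestamp aligning all agents
    detail: str

@dataclass
class Task:
    category: str  # "perceive" | "think" | "act"
    operations: List[Operation] = field(default_factory=list)

@dataclass
class Agent:
    name: str
    tasks: List[Task] = field(default_factory=list)

    def events_at(self, t: int) -> List[Operation]:
        """All raw events (operations) this agent executed at time t."""
        return [op for task in self.tasks
                for op in task.operations if op.time == t]

alice = Agent("Alice", tasks=[
    Task("perceive", [Operation("environment", 1, "sense nearby agents")]),
    Task("think", [Operation("memory", 2, "retrieve related memories"),
                   Operation("decision", 2, "plan a greeting via LLM")]),
    Task("act", [Operation("environment", 3, "say hello to Bob")]),
])
print(len(alice.events_at(2)))  # → 2
```

Grouping operations under tasks, and tasks under agents, is what lets a downstream summarizer collapse the event stream at any of the three levels of granularity.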
## 3.2 Design Requirements

Our study focuses on users involved in analyzing, exploring, and monitoring LLMAS. Our primary goal is to create a system that enhances users' comprehension of LLMAS. We recruited 4 developers highly familiar with LLMAS and 4 users who have a basic understanding of LLMAS and have previously utilized such systems. To identify the design requirements, we asked participants to explore the behaviors of agents in Reverie1, a typical autonomous system consisting of 25 LLM-based agents. We asked participants to actively explore and identify agent behaviors that intrigued them, as well as to investigate their underlying causes or consequences. To facilitate this, we encouraged participants to "think aloud", articulating the information they sought and the type of assistance they desired throughout the process. We then conducted a first interview with them to collect their feedback on the whole exploration process, and maintained regular contact with them to keep them updated on the design requirements. Based on their feedback, combined with the survey of existing LLMAS work in Section 3.1, the following 4 design requirements were summarized.

R1. Provide suitable generality of information for different analysis targets. During the evolution of LLMAS, a significant volume of information is continuously generated, which is overwhelming for users to comprehend. While the current 2D graphical interface of Reverie provides a fixed visual abstraction, many users express their desire to change the generality of presented information to better match their current analysis target.
For instance, users want to scan summarized agent traces across a large time scale when they analyze the long-term relationships among several agents, while they prefer a detailed presentation of an agent's operations when they analyze how the agent performs a certain task. Therefore, the system should provide users with flexible levels of abstraction for the generated information of LLMAS, and allow users to reveal details according to their analysis target.

R2. Present agents' transitions of physical location and thought content. The physical and mental changes of agents play a vital role in driving and reflecting the evolution of the entire LLMAS. Nevertheless, currently, users can only stare at the re-playable recording to see if there is a location transition of the agent and check the raw execution log to find when the agent starts to think about a certain idea, which is inefficient and error-prone. Therefore, the system should provide visual emphasis on agents' transitions of location and highlight the time points when the agent starts to think about a topic the user wishes to explore.
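The highlighting called for by R2 reduces to finding the time points at which a topic first enters an agent's thought stream. A minimal sketch under the assumption that thoughts are time-stamped strings (simple keyword matching stands in for whatever semantic matching a real system would use):

```python
from typing import Dict, List

def topic_onsets(thoughts: Dict[int, str], topic: str) -> List[int]:
    """Return time points where the topic appears in a thought while the
    previous recorded thought did not mention it, i.e. the moments the
    agent *starts* thinking about the topic."""
    onsets = []
    prev_mentions = False
    for t in sorted(thoughts):
        mentions = topic.lower() in thoughts[t].lower()
        if mentions and not prev_mentions:
            onsets.append(t)
        prev_mentions = mentions
    return onsets

thoughts = {1: "plan breakfast",
            2: "organize a Valentine's party",
            3: "invite friends to the party",
            4: "buy groceries",
            5: "decorate for the party"}
print(topic_onsets(thoughts, "party"))  # → [2, 5]
```

Returning onsets rather than every match keeps the visual emphasis sparse: a timeline marks where a topic begins, not every step during which it persists.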