skdrx committed
Commit 1630768
1 Parent(s): 9740715

Upload blacklist.csv with huggingface_hub

Files changed (1)
  1. blacklist.csv +394 -0
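
The commit message above indicates the file was pushed with the huggingface_hub client. A minimal sketch of the kind of call that produces such a commit, using huggingface_hub's upload_file API; the repo_id below is a placeholder assumption, not taken from this page:

from huggingface_hub import HfApi

# Minimal sketch: push a local CSV to the Hub, producing a commit like
# the one shown on this page. The repo_id is hypothetical.
api = HfApi()
api.upload_file(
    path_or_fileobj="blacklist.csv",   # local file to upload
    path_in_repo="blacklist.csv",      # destination path in the repo
    repo_id="skdrx/paper-blacklist",   # hypothetical repo id
    repo_type="dataset",               # assumption: a dataset repo
    commit_message="Upload blacklist.csv with huggingface_hub",
)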
blacklist.csv ADDED
@@ -0,0 +1,394 @@
+ title,url,reason
+ a brief history of prompt leveraging language models, https://arxiv.org/abs/2310.04438, AI Generated
+ hydrogenrich supernovae beyond the neutrinodriven corecollapse paradigm,,About Space not Prompting
+ fewshot learning with localization in realistic settings,,not related to prompting
+ crosslingual alignment of contextual word embeddings with applications to zeroshot dependency parsing,, no Prompting
+ analogyforming transformers for fewshot 3d parsing,, no prompting
+ generalpurpose incontext learning by metalearning transformers,, no prompting
+ a survey of deep learning for lowshot object detection,, no prompting
+ fewshot classincremental learning a survey,, no prompting
+ balanced and explainable social media analysis for public health with large language models,,uses BERT
+ querydependent prompt evaluation and optimization with offline inverse rl,,more about deep RL than prompting
+ deltaedit exploring textfree training for textdriven image manipulation,,too training focused
+ deep language networks joint prompt training of stacked llms using variational inference,, too training focused
+ unnatural language processing how do language models handle machinegenerated prompts,, too training focused
+ give me the facts! a survey on factual knowledge probing in pretrained language models,, cloze focused
+ taskdriven prompt evolution for foundation models,, training related
+ diversityaware meta visual prompting,, training focused
+ drpt disentangled and recurrent prompt tuning for compositional zeroshot learning,, tuning
+ deltaspace a semanticaligned feature space for flexible textguided image editing,, training focused
+ instructpix2nerf instructed 3d portrait editing from a single image,, not really about prompting
+ what changes can largescale language models bring intensive study on hyperclova billionsscale korean generative pretrained transformers,, about a model not prompts
+ mllmdataengine an iterative refinement approach for mllm,,soft prompting
+ unleashing the power of pretrained language models for offline reinforcement learning,, out-of-scope
+ expt synthetic pretraining for fewshot experimental design,, no prompting
+ improving inputlabel mapping with demonstration replay for incontext learning,, out-of-domain
+ apollo zeroshot multimodal reasoning with multiple experts, 2310.18369v1.pdf, Lower-Level Transformer Modification - Not Prompting
+ fewshot learning with siamese networks and label tuning,, no prompting
+ mgimn multigrained interactive matching network for fewshot text classification,, no prompting
+ zero and fewshot learning for author profiling,, about models not prompting
+ "prompt, generate, then cache cascade of foundation models makes strong fewshot learners", http://arxiv.org/pdf/2303.02151v1.pdf, training
+ gradientregulated metaprompt learning for generalizable visionlanguage models, http://arxiv.org/pdf/2303.06571v2.pdf, soft prompting
+ decomposed prototype learning for fewshot scene graph generation,http://arxiv.org/pdf/2303.10863v1.pdf, continuous prompts
+ supervised masked knowledge distillation for fewshot transformers,, no prompting
+ "multimodal c4 an open, billionscale corpus of images interleaved with text", http://arxiv.org/pdf/2303.15466v2.pdf, no prompting
+ a survey on fewshot classincremental learning,http://arxiv.org/pdf/2304.06939v3.pdf, no prompting
+ unified quantum state tomography and hamiltonian learning using transformer models a languagetranslationlike approach for quantum systems, http://arxiv.org/pdf/2304.08130v2.pdf, no prompting
+ pointgpt autoregressively generative pretraining from point clouds, http://arxiv.org/pdf/2305.11487v2.pdf, continuous prompts
+ a survey of diffusion models in natural language processing,http://arxiv.org/pdf/2305.14671v2.pdf, no prompting
+ oneforall generalized lora for parameterefficient finetuning, http://arxiv.org/pdf/2306.07967v2.pdf, tuning
+ protodiff learning to learn prototypical networks by taskguided diffusion, http://arxiv.org/pdf/2306.14770v2.pdf, no prompting
+ effective transfer of pretrained large visual model for fabric defect segmentation via specific knowledge injection, http://arxiv.org/pdf/2306.16186v1.pdf, no prompting
+ metatraining with demonstration retrieval for efficient fewshot learning, http://arxiv.org/pdf/2307.00119v1.pdf, cloze prompting
+ tableye seeing small tables through the lens of images, http://arxiv.org/pdf/2307.02491v1.pdf, no prompting
+ identifying misinformation on youtube through transcript contextual analysis with transformer models, http://arxiv.org/pdf/2307.12155v1.pdf, no prompting
+ linkcontext learning for multimodal llms, http://arxiv.org/pdf/2308.07891v1.pdf, no prompting
+ less is more towards efficient fewshot 3d semantic segmentation via trainingfree networks, http://arxiv.org/pdf/2308.12961v1.pdf, no prompting
+ transprompt v2 a transferable prompting framework for crosstask text classification, http://arxiv.org/pdf/2308.15010v1.pdf, soft prompting
+ selfsampling meta sam enhancing fewshot medical image segmentation with metalearning, http://arxiv.org/pdf/2308.16466v3.pdf, training
+ promptbased node feature extractor for fewshot learning on textattributed graphs, http://arxiv.org/pdf/2309.02848v1.pdf, cloze prompts
+ crossimage context matters for bongard problems, http://arxiv.org/pdf/2309.03468v1.pdf, no prompting
+ dept decomposed prompt tuning for parameterefficient finetuning, http://arxiv.org/pdf/2309.05173v2.pdf, tuning
+ glad contentaware dynamic graphs for log anomaly detection, http://arxiv.org/pdf/2309.05953v1.pdf, cloze prompting
+ sct a simple baseline for parameterefficient finetuning via salient channels, http://arxiv.org/pdf/2309.08513v2.pdf, tuning
+ pactuning finetuning pretrained language models with pacdriven perturbed gradient descent, http://arxiv.org/pdf/2310.17588v1.pdf, no prompting
+ on taskpersonalized multimodal fewshot learning for visuallyrich document entity retrieval, http://arxiv.org/pdf/2311.00693v1.pdf, no prompting
+ robust finetuning of visionlanguage models for domain generalization, http://arxiv.org/pdf/2311.02236v1.pdf, no prompting
+ lesion2vec deep metric learning for fewshot multiple lesions recognition in wireless capsule endoscopy video, http://arxiv.org/pdf/2101.04240v2.pdf, no prompting
+ unsupervised law article mining based on deep pretrained language representation models with application to the italian civil code, http://arxiv.org/pdf/2112.03033v1.pdf, no prompting
+ "using deepspeed and megatron to train megatronturing nlg 530b, a largescale generative language model", http://arxiv.org/pdf/2201.11990v3.pdf, training
+ data distributional properties drive emergent incontext learning in transformers, http://arxiv.org/pdf/2205.05055v6.pdf, no prompting
+ hungry hungry hippos towards language modeling with state space models, http://arxiv.org/pdf/2212.14052v3.pdf, no prompting
+ clip2scene towards labelefficient 3d scene understanding by clip, http://arxiv.org/pdf/2301.04926v2.pdf, cloze prompting
+ learning to detect an animal sound from five examples, http://arxiv.org/pdf/2305.13210v1.pdf, no prompting
+ the rise of ai language pathologists exploring twolevel prompt learning for fewshot weaklysupervised whole slide image classification, http://arxiv.org/pdf/2305.17891v1.pdf, training
+ language models are fewshot learners, http://arxiv.org/pdf/2005.14165v4.pdf, training
+ when promptbased incremental learning does not meet strong pretraining, http://arxiv.org/pdf/2308.10445v1.pdf, training
+ "fewer errors, but more stereotypes the effect of model size on gender bias", http://arxiv.org/pdf/2206.09860v1.pdf, MLMs and cloze prompting
+ promptattack promptbased attack for language models via gradient search, http://arxiv.org/pdf/2209.01882v1.pdf, cloze prompting
+ can language models be specific how, http://arxiv.org/pdf/2210.05159v2.pdf, cloze prompting
+ multilingual relation classification via efficient and effective prompting, http://arxiv.org/pdf/2210.13838v2.pdf, soft prompting
+ spe symmetrical prompt enhancement for fact probing, http://arxiv.org/pdf/2211.07078v1.pdf, soft prompting
+ evaluating the robustness of discrete prompts, http://arxiv.org/pdf/2302.05619v1.pdf, cloze prompting
+ syntaxaware hybrid prompt model for fewshot multimodal sentiment analysis, http://arxiv.org/pdf/2306.01312v2.pdf, soft and cloze prompting
+ unified multimodal pretraining and promptbased tuning for visionlanguage understanding and generation, http://arxiv.org/pdf/2112.05587v2.pdf, MLMs and cloze prompting
+ learning to transfer prompts for text generation, http://arxiv.org/pdf/2205.01543v2.pdf, soft prompting
+ towards realistic lowresource relation extraction a benchmark with empirical baseline study, http://arxiv.org/pdf/2210.10678v3.pdf, tuning and cloze prompting
+ promptfusion decoupling stability and plasticity for continual learning, http://arxiv.org/pdf/2303.07223v1.pdf, tuning
+ are promptbased models clueless, http://arxiv.org/pdf/2205.09295v2.pdf, cloze prompting
+ avoiding inference heuristics in fewshot promptbased finetuning, http://arxiv.org/pdf/2109.04144v1.pdf, tuning
+ p4e fewshot event detection as promptguided identification and localization, http://arxiv.org/pdf/2202.07615v3.pdf, cloze prompting
+ partslip lowshot part segmentation for 3d point clouds via pretrained imagelanguage models, http://arxiv.org/pdf/2212.01558v2.pdf, tuning
+ sparsefit fewshot prompting with sparse finetuning for jointly generating predictions and natural language explanations, http://arxiv.org/pdf/2305.13235v2.pdf, training and tuning
+ large language model distillation doesn't need a teacher, http://arxiv.org/pdf/2305.14864v1.pdf, training
+ multiqgti towards question generation from multimodal sources, http://arxiv.org/pdf/2307.04643v1.pdf, no prompting
+ why is prompt tuning for visionlanguage models robust to noisy labels, http://arxiv.org/pdf/2307.11978v1.pdf, tuning
+ lowparameter federated learning with large language models, http://arxiv.org/pdf/2307.13896v1.pdf, tuning and MLM
+ olala ontology matching with large language models, http://arxiv.org/pdf/2311.03837v1.pdf, uses BERT; no specified prefix prompting
+ crosslingual supervision improves large language models pretraining, http://arxiv.org/pdf/2305.11778v1.pdf, training focused
+ explaincpe a freetext explanation benchmark of chinese pharmacist examination,http://arxiv.org/pdf/2305.12945v2.pdf, training focused
+ adapting language models to compress contexts, http://arxiv.org/pdf/2305.14788v2.pdf, soft prompting
+ a mechanism for sampleefficient incontext learning for sparse retrieval tasks, http://arxiv.org/pdf/2305.17040v1.pdf, more about LM interpretability than prompting
+ large language models are partially primed in pronoun interpretation, http://arxiv.org/pdf/2305.16917v1.pdf, uses in-context learning but is not about prompting methods
+ contextual vision transformers for robust representation learning,http://arxiv.org/pdf/2305.19402v2.pdf, not about prefix prompting
+ selfverification improves fewshot clinical information extraction, http://arxiv.org/pdf/2306.00024v1.pdf, is about verifying output not modifying input
+ measuring and modifying factual knowledge in large language models,http://arxiv.org/pdf/2306.06264v1.pdf, mentions in context learning but it is not the focus
+ a survey on multimodal large language models,http://arxiv.org/pdf/2306.13549v1.pdf, not focused on prompting
+ potential benefits of employing large language models in research in moral education and development,http://arxiv.org/pdf/2306.13805v2.pdf, not particularly about prompting
+ assessing the efficacy of large language models in generating accurate teacher responses,http://arxiv.org/pdf/2307.04274v1.pdf, does not focus on prompting methods
+ unsupervised calibration through prior adaptation for text classification using large language models,http://arxiv.org/pdf/2307.06713v3.pdf, does not focus on prompting methods
+ baby's cothought leveraging large language models for enhanced reasoning in compact models,http://arxiv.org/pdf/2308.01684v2.pdf, focuses on training other models
+ diffusion language models can perform many tasks with scaling and instructionfinetuning,http://arxiv.org/pdf/2308.12219v2.pdf, focuses on training
+ large language model as autonomous decision maker,http://arxiv.org/pdf/2308.12519v1.pdf, not about prompting methods
+ speechtospeech translation with discreteunitbased style transfer,http://arxiv.org/pdf/2309.07566v1.pdf, speech to speech translation
+ language modeling is compression,http://arxiv.org/pdf/2309.10668v1.pdf, more about explaining in-context learning than proposing a method
+ text data augmentation in lowresource settings via finetuning of large language models,http://arxiv.org/pdf/2310.01119v1.pdf, focuses on training
+ humans and language models diverge when predicting repeating text,http://arxiv.org/pdf/2310.06408v2.pdf, focuses on evaluating humans and comparing to prompting method
+ amago scalable incontext reinforcement learning for adaptive agents,http://arxiv.org/pdf/2310.09971v2.pdf, not about LMs; this is an RL paper
+ meta (outofcontext) learning in neural networks,http://arxiv.org/pdf/2310.15047v2.pdf, evaluates in-context learning but is not based on it
+ towards trainingfree openworld segmentation via image prompting foundation models,http://arxiv.org/pdf/2310.10912v1.pdf,image segmentation
+ videoprompter an ensemble of foundational models for zeroshot video understanding,http://arxiv.org/pdf/2310.15324v1.pdf,"video understanding, different domain"
+ improving diversity of demographic representation in large language models via collectivecritiques and selfvoting,http://arxiv.org/pdf/2310.16523v1.pdf,"model representation, not prompting"
+ the power of large language models for wireless communication system development a case study on fpga platforms,http://arxiv.org/pdf/2307.07319v4.pdf,not prompting
+ large language models enable fewshot clustering,http://arxiv.org/pdf/2307.00524v1.pdf,"few-shot clustering, not prompting"
+ universal fuzzing via large language models,http://arxiv.org/pdf/2308.04748v1.pdf,does not use hard-prefix prompts
+ trainingfree openworld segmentation via image prompting foundation models,,image segmentation
+ fire food image to recipe generation,http://arxiv.org/pdf/2308.14391v1.pdf,image to text translation
+ large language models can accurately predict searcher preferences,http://arxiv.org/pdf/2309.10621v1.pdf,does not use hard-prefix prompts
+ understanding incontext learning from repetitions,http://arxiv.org/pdf/2310.00297v2.pdf,"focus is on effects of repetition in in-context learning, not prompting"
+ small language models finetuned to coordinate larger language models improve complex reasoning,http://arxiv.org/pdf/2310.18338v1.pdf,"focus on fine-tuning, not hard-prefix prompting"
+ revisiting large language models as zeroshot relation extractors,http://arxiv.org/pdf/2310.05028v3.pdf,zero-shot learning for relation extraction
+ characterizing attribution and fluency tradeoffs for retrievalaugmented large language models,http://arxiv.org/pdf/2302.05578v2.pdf,RAG
+ llmeval unified multidimensional automatic evaluation for opendomain conversations with large language models,http://arxiv.org/pdf/2305.13711v1.pdf,eval of LLMs
+ robot task planning based on large language model representing knowledge with directed graph structures,http://arxiv.org/pdf/2306.05171v1.pdf,knowledge representation
+ optimus optimization modeling using mip solvers and large language models,http://arxiv.org/pdf/2310.06116v2.pdf,"different approach, MIP solvers"
+ promptinfuser how tightly coupling ai and ui design impacts designers' workflows,http://arxiv.org/pdf/2310.15435v1.pdf,focus on UI
+ a monte carlo language model pipeline for zeroshot sociopolitical event extraction,http://arxiv.org/pdf/2305.15051v1.pdf,"monte carlo methods, not prompting"
+ finetune language models to approximate unbiased incontext learning,http://arxiv.org/pdf/2310.03331v1.pdf,fine-tuning
+ on the compositional generalization gap of incontext learning,http://arxiv.org/pdf/2211.08473v1.pdf,"compositional generalization, not hard-prefix prompting"
+ fewshot finetuning vs incontext learning a fair comparison and evaluation,http://arxiv.org/pdf/2305.16938v2.pdf,no hard-prefix prompting
+ stylemc multichannel based fast textguided image generation and manipulation, http://arxiv.org/pdf/2112.08493v1.pdf, not prompt engineering
+ testtime training on nearest neighbors for large language models, http://arxiv.org/pdf/2305.18466v2.pdf, fine-tuning
+ chain of natural language inference for reducing large language model ungrounded hallucinations, http://arxiv.org/pdf/2310.03951v2.pdf, no prompt engineering
+ differentiable prompt makes pretrained language models better fewshot learners, http://arxiv.org/pdf/2108.13161v7.pdf, not hard prompts
+ mme a comprehensive evaluation benchmark for multimodal large language models, http://arxiv.org/pdf/2306.13394v2.pdf, not specifically hard prompting
+ protoclip visionlanguage prototypical network for fewshot learning, http://arxiv.org/pdf/2307.03073v2.pdf, not prompting
+ a survey on recent named entity recognition and relation classification methods with focus on fewshot learning approaches, http://arxiv.org/pdf/2310.19055v1.pdf, not prompting
+ improving incontext fewshot learning via selfsupervised training, http://arxiv.org/pdf/2205.01703v2.pdf, pretraining
+ revisiting fewshot learning from a causal perspective, http://arxiv.org/pdf/2209.13816v1.pdf, not prompting
+ film how can fewshot image classification benefit from pretrained language models, http://arxiv.org/pdf/2307.04114v1.pdf, not hard prefix prompting
+ clues fewshot learning evaluation in natural language understanding, http://arxiv.org/pdf/2111.02570v1.pdf, no prompt engineering
+ improving fewshot generalization by exploring and exploiting auxiliary data, http://arxiv.org/pdf/2302.00674v4.pdf, not prompt engineering.
+ prompt space optimizing fewshot reasoning success with large language models, http://arxiv.org/pdf/2306.03799v1.pdf, not prompt engineering
+ universal fewshot learning of dense prediction tasks with visual token matching, http://arxiv.org/pdf/2303.14969v1.pdf, not prompting
+ fdalign feature discrimination alignment for finetuning pretrained models in fewshot learning, http://arxiv.org/pdf/2310.15105v3.pdf, fine tuning
+ modelagnostic graph regularization for fewshot learning, http://arxiv.org/pdf/2102.07077v1.pdf, not prompting
+ uniform sampling over episode difficulty, http://arxiv.org/pdf/2108.01662v2.pdf, not prompting
+ metalearning with taskadaptive loss function for fewshot learning, http://arxiv.org/pdf/2110.03909v2.pdf, focuses on meta-learning
+ on measuring the intrinsic fewshot hardness of datasets, http://arxiv.org/pdf/2211.09113v1.pdf, not prompting
+ mera merging pretrained adapters for fewshot learning, http://arxiv.org/pdf/2308.15982v1.pdf, not prompting
+ metaadapter an online fewshot learner for visionlanguage model, http://arxiv.org/pdf/2311.03774v1.pdf, not prompting
+ pushing the limits of simple pipelines for fewshot learning external data and finetuning make a difference, http://arxiv.org/pdf/2204.07305v1.pdf, focus on few-shot learning.
+ multilevel finetuning data augmentation and fewshot learning for specialized cyber threat intelligence, http://arxiv.org/pdf/2207.11076v1.pdf, training
+ fewshot classification with hypersphere modeling of prototypes, http://arxiv.org/pdf/2211.05319v1.pdf, not prompting
+ styleadv meta style adversarial training for crossdomain fewshot learning, http://arxiv.org/pdf/2302.09309v2.pdf, not prompting
+ federated fewshot learning for cough classification with edge devices, http://arxiv.org/pdf/2309.01076v1.pdf, not prompting
+ is support set diversity necessary for metalearning, http://arxiv.org/pdf/2011.14048v2.pdf, not prompting
+ entailment as fewshot learner, http://arxiv.org/pdf/2104.14690v1.pdf, not prompt engineering
+ wavprompt towards fewshot spoken language understanding with frozen language models, http://arxiv.org/pdf/2203.15863v2.pdf, fine-tuning
+ aligning magma by fewshot learning and finetuning, http://arxiv.org/pdf/2210.14161v1.pdf, finetuning not prompting.
+ stunt fewshot tabular learning with selfgenerated tasks from unlabeled tables, http://arxiv.org/pdf/2303.00918v1.pdf, not prompting
+ prototypesoriented transductive fewshot learning with conditional transport, http://arxiv.org/pdf/2308.03047v1.pdf, not prompting
+ coca classifieroriented calibration for sourcefree universal domain adaptation via textual prototype, http://arxiv.org/pdf/2308.10450v1.pdf, no prompt engineering
+ improving generalization in large language models by learning prefix subspaces, http://arxiv.org/pdf/2310.15793v1.pdf, not prompting
+ zeroshot and fewshot learning with knowledge graphs a comprehensive survey, http://arxiv.org/pdf/2112.10006v6.pdf, not prompting
+ on unifying misinformation detection, http://arxiv.org/pdf/2104.05243v1.pdf, training
+ human in the loop how to effectively create coherent topics by manually labeling only a few documents per class, http://arxiv.org/pdf/2212.09422v1.pdf, not prompting.
+ neuroclip neuromorphic data understanding by clip and snn, http://arxiv.org/pdf/2306.12073v1.pdf, not prompting
+ ppt pretrained prompt tuning for fewshot learning, http://arxiv.org/pdf/2109.04332v3.pdf, soft prompts
+ yuan 10 largescale pretrained language model in zeroshot and fewshot learning, http://arxiv.org/pdf/2110.04725v2.pdf, training
+ perfect promptfree and efficient fewshot learning with language models, http://arxiv.org/pdf/2204.01172v2.pdf, literally not prompting
+ on the effect of pretraining corpora on incontext learning by a largescale language model, http://arxiv.org/pdf/2204.13509v2.pdf, pretraining
+ fewshot learning for clinical natural language processing using siamese neural networks, http://arxiv.org/pdf/2208.14923v2.pdf, not prompting
+ prompting through prototype a prototypebased prompt learning on pretrained visionlanguage models, http://arxiv.org/pdf/2210.10841v1.pdf, soft prompts
+ sgvaclip semanticguided visual adapting of visionlanguage models for fewshot image classification, http://arxiv.org/pdf/2211.16191v2.pdf, training
+ auggpt leveraging chatgpt for text data augmentation, http://arxiv.org/pdf/2302.13007v3.pdf, not prompting
+ semantic prompt for fewshot image recognition, http://arxiv.org/pdf/2303.14123v1.pdf, not really prompt engineering
+ the cot collection improving zeroshot and fewshot learning of language models via chainofthought finetuning, http://arxiv.org/pdf/2305.14045v2.pdf, training
+ fewshot learning for inference in medical imaging with subspace feature representations, http://arxiv.org/pdf/2306.11152v1.pdf, no prompting
+ visually grounded fewshot word learning in lowresource settings, http://arxiv.org/pdf/2306.11371v2.pdf, not prompting
+ crossmodal concept learning and inference for visionlanguage models, http://arxiv.org/pdf/2307.15460v1.pdf, not prompt engineering.
+ uniap towards universal animal perception in vision via fewshot learning, http://arxiv.org/pdf/2308.09953v1.pdf, not text prompts
+ palm scaling language modeling with pathways, http://arxiv.org/pdf/2204.02311v5.pdf, not prompting
+ fewshot electronic health record coding through graph contrastive learning, http://arxiv.org/pdf/2106.15467v1.pdf, not prompting
+ ernie 30 largescale knowledge enhanced pretraining for language understanding and generation, http://arxiv.org/pdf/2107.02137v1.pdf, pre-training
+ alleviating the incompatibility between cross entropy loss and episode training for fewshot skin disease classification, http://arxiv.org/pdf/2004.09694v1.pdf, not prompting
+ fewshot learning through contextual data augmentation, http://arxiv.org/pdf/2103.16911v1.pdf, not prompting
+ metalearning gnn initializations for lowresource molecular property prediction, http://arxiv.org/pdf/2003.05996v2.pdf, not prompt engineering.
+ neural data augmentation via example extrapolation, http://arxiv.org/pdf/2102.01335v1.pdf, data augmentation
+ oneshot learning for the long term consolidation with an artificial hippocampal algorithm, http://arxiv.org/pdf/2102.07503v2.pdf, not prompting
+ the power of scale for parameterefficient prompt tuning, http://arxiv.org/pdf/2104.08691v2.pdf, soft prompts
+ design of a graphical user interface for fewshot machine learning classification of electron microscopy data, http://arxiv.org/pdf/2107.10387v1.pdf, not prompting
+ flipda effective and robust data augmentation for fewshot learning, http://arxiv.org/pdf/2108.06332v2.pdf, not prompting
+ on the multilingual capabilities of very largescale english language models, http://arxiv.org/pdf/2108.13349v1.pdf, not prompting
+ learning opinion summarizers by selecting informative reviews, http://arxiv.org/pdf/2109.04325v1.pdf, not prompting
+ strata selftraining with task augmentation for better fewshot learning, http://arxiv.org/pdf/2109.06270v2.pdf, not prompting
+ what does clip know about a red circle visual prompt engineering for vlms, http://arxiv.org/pdf/2304.06712v2.pdf, not text prompting
+ conformal prediction with large language models for multichoice question answering, http://arxiv.org/pdf/2305.18404v3.pdf, not prompting.
+ p2p tuning pretrained image models for point cloud analysis with pointtopixel prompting, http://arxiv.org/pdf/2208.02812v2.pdf, not text prompting
+ evoprompting language models for codelevel neural architecture search, http://arxiv.org/pdf/2302.14838v2.pdf, soft prompts
+ right to be forgotten in the era of large language models implications challenges and solutions, http://arxiv.org/pdf/2307.03941v3.pdf, not related
+ label supervised llama finetuning, http://arxiv.org/pdf/2310.01208v1.pdf, focus on finetuning not prompting
+ incontext learning distillation transferring fewshot learning ability of pretrained language models, http://arxiv.org/pdf/2212.10670v1.pdf, distillation not prompting.
+ a neural network solves explains and generates university math problems by program synthesis and fewshot learning at human level, http://arxiv.org/pdf/2112.15594v4.pdf, focuses on fine-tuning
+ crossfit a fewshot learning challenge for crosstask generalization in nlp, http://arxiv.org/pdf/2104.08835v2.pdf, not prompting
+ jasmine arabic gpt models for fewshot learning, http://arxiv.org/pdf/2212.10755v2.pdf, training
+ conversation style transfer using fewshot learning, http://arxiv.org/pdf/2302.08362v2.pdf, not prompting
+ cancergpt fewshot drug pair synergy prediction using large pretrained language models, http://arxiv.org/pdf/2304.10946v1.pdf, training
+ meta learning to bridge vision and language models for multimodal fewshot learning, http://arxiv.org/pdf/2302.14794v1.pdf, not prompting
+ demonstrationbased learning for fewshot biomedical named entity recognition under machine reading comprehension, http://arxiv.org/pdf/2308.06454v1.pdf, not prompt engineering
+ robustness over time understanding adversarial examples' effectiveness on longitudinal versions of large language models, http://arxiv.org/pdf/2308.07847v1.pdf, not prompting.
+ fewshot natural language generation for taskoriented dialog, http://arxiv.org/pdf/2002.12328v1.pdf, not prompting
+ promptfree diffusion taking text out of texttoimage diffusion models, http://arxiv.org/pdf/2305.16223v2.pdf, literally not prompting.
+ cutting down on prompts and parameters simple fewshot learning with language models, http://arxiv.org/pdf/2106.13353v2.pdf, not prompt engineering
+ executive function a contrastive value policy for resampling and relabeling perceptions via hindsight summarization, http://arxiv.org/pdf/2204.12639v1.pdf, not prompting
+ tart a plugandplay transformer module for taskagnostic reasoning, http://arxiv.org/pdf/2306.07536v1.pdf, not prompting
+ synergistic integration of large language models and cognitive architectures for robust ai an exploratory analysis, http://arxiv.org/pdf/2308.09830v3.pdf, brief mention of prompting but not related
+ visionlanguage models are zeroshot reward models for reinforcement learning, http://arxiv.org/pdf/2310.12921v1.pdf, maybe tangential but not prompt engineering
+ fewshot multimodal multitask multilingual learning, http://arxiv.org/pdf/2303.12489v1.pdf, maybe tangential but not prompt engineering
+ fewshot learning with visual distribution calibration and crossmodal distribution alignment, http://arxiv.org/pdf/2305.11439v1.pdf, not prompting.
+ active learning principles for incontext learning with large language models, http://arxiv.org/pdf/2305.14264v1.pdf, not prompting
+ flame fewshot learning from natural language explanations, http://arxiv.org/pdf/2306.08042v1.pdf, not prompting.
+ approximating humanlike fewshot learning with gptbased compression, http://arxiv.org/pdf/2308.06942v1.pdf, not prompting
+ from human days to machine seconds automatically answering and generating machine learning final exams, http://arxiv.org/pdf/2206.05442v7.pdf, not prompting
+ cedille a large autoregressive french language model, http://arxiv.org/pdf/2202.03371v1.pdf, not prompting
+ finetune like you pretrain improved finetuning of zeroshot vision models, http://arxiv.org/pdf/2212.00638v1.pdf, focuses on fine-tuning
+ wordcraft a humanai collaborative editor for story writing, http://arxiv.org/pdf/2107.07430v1.pdf, not prompt engineering
+ want to reduce labeling cost gpt3 can help, http://arxiv.org/pdf/2108.13487v1.pdf, not prompting
+ cut the carp fishing for zeroshot story evaluation, http://arxiv.org/pdf/2110.03111v3.pdf, tangential but not prompt engineering
+ fake it till you make it learning transferable representations from synthetic imagenet clones, http://arxiv.org/pdf/2212.08420v2.pdf, not prompt engineering
+ activation addition steering language models without optimization, http://arxiv.org/pdf/2308.10248v2.pdf, messes with activations not prompt engineering
+ safurai 001 new qualitative approach for code llm evaluation, http://arxiv.org/pdf/2309.11385v1.pdf, tangential but not prompt engineering
+ controlled and conditional text to image generation with diffusion prior, http://arxiv.org/pdf/2302.11710v2.pdf, image prompts
+ ipadapter text compatible image prompt adapter for texttoimage diffusion models, http://arxiv.org/pdf/2308.06721v1.pdf, image prompts
+ revisiting selftraining for fewshot learning of language model, http://arxiv.org/pdf/2110.01256v1.pdf, tangential but not prompt engineering
+ multimodal large language model for visual navigation, http://arxiv.org/pdf/2310.08669v2.pdf, tangential but not prompt engineering
+ taskdiff a similarity metric for taskoriented conversations, http://arxiv.org/pdf/2310.15298v2.pdf, tangential but not prompt engineering
+ clipadapter better visionlanguage models with feature adapters, http://arxiv.org/pdf/2110.04544v1.pdf, tangential but not prompt engineering
+ cones concept embedding search for parameter efficient tuning large vision language models, http://arxiv.org/pdf/2305.18993v1.pdf, tangential but not prompt engineering
+ logoprompt synthetic text images can be good visual prompts for visionlanguage models, http://arxiv.org/pdf/2309.01155v2.pdf, visual prompts
+ manipulating embeddings of stable diffusion prompts, http://arxiv.org/pdf/2308.12059v1.pdf, manipulates embeddings not text.
+ multimodal prompt transformer with hybrid contrastive learning for emotion recognition in conversation,http://arxiv.org/pdf/2310.04456v1.pdf, multimodal RL
+ promptenhanced selfsupervised representation learning for remote sensing image understanding,http://arxiv.org/pdf/2310.00022v1.pdf, about fine-tuning
+ discrete prompt compression with reinforcement learning,http://arxiv.org/pdf/2308.08758v1.pdf, They compressed prompts using fine-tuning
+ automatic short math answer grading via incontext metalearning,http://arxiv.org/pdf/2205.15219v3.pdf, About Fine-tuning
+ graphprompt biomedical entity normalization using graphbased prompt templates,http://arxiv.org/pdf/2112.03002v1.pdf, About fine-tuning
+ transformers generalize differently from information stored in context vs in weights,http://arxiv.org/pdf/2210.05675v2.pdf, tangentially related
+ large language models meet harry potter a bilingual dataset for aligning dialogue agents with characters,http://arxiv.org/pdf/2211.06869v4.pdf, tangentially related
+ operationalizing specifications in addition to test sets for evaluating constrained generative models,http://arxiv.org/pdf/2212.00006v1.pdf, tangentially related as stated in their introduction
+ language model acceptability judgements are not always robust to context,http://arxiv.org/pdf/2212.08979v1.pdf, I believe it is tangentially related
+ training trajectories of language models across scales,http://arxiv.org/pdf/2212.09803v3.pdf, More focused on training than anything else
+ sparks of gpts in edge intelligence for metaverse caching and inference for mobile aigc services,http://arxiv.org/pdf/2304.08782v2.pdf, Too tangentially related
+ tallrec an effective and efficient tuning framework to align large language model with recommendation,http://arxiv.org/pdf/2305.00447v3.pdf, More about fine-tuning
+ memoryefficient finetuning of compressed large language models via sub4bit integer quantization,http://arxiv.org/pdf/2305.14152v2.pdf, About Fine-Tuning I believe
+ do large language models know what they don't know,http://arxiv.org/pdf/2305.18153v2.pdf, No Mention of Prompting
+ revisiting outofdistribution robustness in nlp benchmark analysis and llms evaluations,http://arxiv.org/pdf/2306.04618v2.pdf, Not the main focus; barely mentioned
+ transformers as statisticians provable incontext learning with incontext algorithm selection,http://arxiv.org/pdf/2306.04637v2.pdf, Hardly mentioned; not the main focus
+ trained transformers learn linear models incontext,http://arxiv.org/pdf/2306.09927v3.pdf, As I understand it; this is about training and not prompting
+ generative multimodal entity linking,http://arxiv.org/pdf/2306.12725v2.pdf, Only soft prompting
+ supervised pretraining can learn incontext reinforcement learning,http://arxiv.org/pdf/2306.14892v1.pdf, Different Contexts I believe
+ hyenadna longrange genomic sequence modeling at single nucleotide resolution,http://arxiv.org/pdf/2306.15794v1.pdf, Only Soft Prompting
+ explainable depression symptom detection in social media,http://arxiv.org/pdf/2310.13664v2.pdf, Only one mention of prompting
+ ensembleinstruct generating instructiontuning data with a heterogeneous mixture of lms,http://arxiv.org/pdf/2310.13961v1.pdf, About fine-tuning
+ anomalygpt detecting industrial anomalies using large visionlanguage models,http://arxiv.org/pdf/2308.15366v3.pdf, More about training the model
+ uncovering hidden geometry in transformers via disentangling position and context,http://arxiv.org/pdf/2310.04861v1.pdf, Completely irrelevant
+ mitigating word bias in zeroshot promptbased classifiers,http://arxiv.org/pdf/2309.04992v1.pdf, about reweighting probabilities for prompt-based classifiers
+ ideal influencedriven selective annotations empower incontext learners in large language models,http://arxiv.org/pdf/2310.10873v1.pdf, About fine-tuning
+ incontext pretraining language modeling beyond document boundaries,http://arxiv.org/pdf/2310.10638v3.pdf, Not about prompting
+ alt towards finegrained alignment between language and ctr models for clickthrough rate prediction,http://arxiv.org/pdf/2310.19453v1.pdf, Not really about prompting
+ understanding catastrophic forgetting in language models via implicit inference,http://arxiv.org/pdf/2309.10105v1.pdf, About fine-tuning
+ do pretrained transformers really learn incontext by gradient descent,http://arxiv.org/pdf/2310.08540v1.pdf, About fine-tuning
+ ccprompt counterfactual contrastive prompttuning for manyclass classification,http://arxiv.org/pdf/2211.05987v1.pdf, About fine-tuning
+ one step of gradient descent is provably the optimal incontext learner with one layer of linear selfattention,http://arxiv.org/pdf/2307.03576v1.pdf, Different type of prompt?
+ cyclealign iterative distillation from blackbox llm to whitebox models for better human alignment,http://arxiv.org/pdf/2310.16271v1.pdf, About fine-tuning
+ transformers are efficient incontext estimators for wireless communication,http://arxiv.org/pdf/2311.00226v1.pdf, About fine-tuning
+ scaling incontext demonstrations with structured attention,http://arxiv.org/pdf/2307.02690v1.pdf,new architecture
+ incontext learning and induction heads,http://arxiv.org/pdf/2209.11895v1.pdf,new architecture
+ what makes good examples for visual incontext learning,http://arxiv.org/pdf/2301.13670v2.pdf,visual only
+ mmicl empowering visionlanguage model with multimodal incontext learning,http://arxiv.org/pdf/2309.07915v2.pdf,visual only
+ visual incontext learning for fewshot eczema segmentation,http://arxiv.org/pdf/2309.16656v1.pdf,visual only
+ scone benchmarking negation reasoning in language models with finetuning and incontext learning,http://arxiv.org/pdf/2305.19426v1.pdf,fine-tuning
+ can whisper perform speechbased incontext learning,http://arxiv.org/pdf/2309.07081v1.pdf,speech
+ salm speechaugmented language model with incontext learning for speech recognition and translation,http://arxiv.org/pdf/2310.09424v1.pdf,speech
+ can foundation models help us achieve perfect secrecy,http://arxiv.org/pdf/2205.13722v2.pdf,overview paper
+ se factual knowledge in frozen giant code model a study on fqn and its retrieval,http://arxiv.org/pdf/2212.08221v1.pdf,unclear task
+ incontext learning for attention scheme from single softmax regression to multiple softmax regression via a tensor trick,http://arxiv.org/pdf/2307.02419v1.pdf,new architecture
+ synergpt incontext learning for personalized drug synergy prediction and drug design,http://arxiv.org/pdf/2307.11694v2.pdf,new architecture
+ twostage llm finetuning with less specialization and more generalization,http://arxiv.org/pdf/2211.00635v2.pdf,fine-tuning
+ conceptaware training improves incontext learning ability of language models,http://arxiv.org/pdf/2305.13775v1.pdf,fine-tuning
+ probing in context toward building robust classifiers via probing large language models,http://arxiv.org/pdf/2305.14171v2.pdf,uses probes for task
+ towards incontext scene understanding,http://arxiv.org/pdf/2306.01667v2.pdf,visual only
+ the cost of downscaling language models fact recall deteriorates before incontext learning,http://arxiv.org/pdf/2310.04680v1.pdf,analysis of pruning / LM size
+ "last one standing a comparative analysis of security and privacy of soft prompt tuning, lora, and incontext learning",http://arxiv.org/pdf/2310.11397v1.pdf,analysis of lora / tuning / ICL
+ when do prompting and prefixtuning work a theory of capabilities and limitations,http://arxiv.org/pdf/2310.19698v1.pdf,analysis of lora / tuning / ICL
+ instruct me more! random prompting for visual incontext learning,http://arxiv.org/pdf/2311.03648v1.pdf,visual only
+ incontext alignment chat with vanilla language models before finetuning,http://arxiv.org/pdf/2308.04275v1.pdf,fine-tuning
+ gpt4 vision on medical image classification a case study on covid19 dataset,http://arxiv.org/pdf/2310.18498v1.pdf,visual only
+ fewshot parameterefficient finetuning is better and cheaper than incontext learning,http://arxiv.org/pdf/2205.05638v2.pdf,fine-tuning
+ images speak in images a generalist painter for incontext visual learning,http://arxiv.org/pdf/2212.02499v2.pdf,visual only
+ how does incontext learning help prompt tuning,http://arxiv.org/pdf/2302.11521v1.pdf,fine-tuning
+ symbol tuning improves incontext learning in language models,http://arxiv.org/pdf/2305.08298v1.pdf,fine-tuning
+ iterative forward tuning boosts incontext learning in language models,http://arxiv.org/pdf/2305.13016v2.pdf,fine-tuning
+ estimating large language model capabilities without labeled test data,http://arxiv.org/pdf/2305.14802v2.pdf,out of scope analysis
+ augmenting language models with longterm memory,http://arxiv.org/pdf/2306.07174v1.pdf,new architecture
+ o3d offline datadriven discovery and distillation for sequential decisionmaking with large language models,http://arxiv.org/pdf/2310.14403v1.pdf,fine-tuning
+ deja vu contextual sparsity for efficient llms at inference time,http://arxiv.org/pdf/2310.17157v1.pdf,new architecture
+ principledriven selfalignment of language models from scratch with minimal human supervision,http://arxiv.org/pdf/2305.03047v1.pdf,fine-tuning
+ one for all towards training one graph model for all classification tasks,http://arxiv.org/pdf/2310.00149v1.pdf,new architecture
+ magma multimodal augmentation of generative models through adapterbased finetuning,http://arxiv.org/pdf/2112.05253v2.pdf,fine-tuning
+ blackbox tuning for languagemodelasaservice,http://arxiv.org/pdf/2201.03514v4.pdf,fine-tuning
+ contrastive learning for promptbased fewshot language learners,http://arxiv.org/pdf/2205.01308v1.pdf,fine-tuning
+ exploring length generalization in large language models,http://arxiv.org/pdf/2207.04901v2.pdf,out of scope analysis
+ explanations from large language models make small reasoners better,http://arxiv.org/pdf/2210.06726v1.pdf,out of scope analysis
+ visual programming compositional visual reasoning without training,http://arxiv.org/pdf/2211.11559v1.pdf,visual only
+ "don't generate, discriminate a proposal for grounding language models to realworld environments",http://arxiv.org/pdf/2212.09736v2.pdf,new architecture
+ neural codec language models are zeroshot text to speech synthesizers,http://arxiv.org/pdf/2301.02111v1.pdf,speech
+ looped transformers as programmable computers,http://arxiv.org/pdf/2301.13196v1.pdf,out of scope analysis
+ grounding language models to images for multimodal inputs and outputs,http://arxiv.org/pdf/2301.13823v4.pdf,new architecture
+ proofnet autoformalizing and formally proving undergraduatelevel mathematics,http://arxiv.org/pdf/2302.12433v1.pdf,new architecture
+ speak foreign languages with your own voice crosslingual neural codec language modeling,http://arxiv.org/pdf/2303.03926v1.pdf,speech
+ when braininspired ai meets agi,http://arxiv.org/pdf/2303.15935v1.pdf,overview paper
+ larger probes tell a different story extending psycholinguistic datasets via incontext learning,http://arxiv.org/pdf/2303.16445v1.pdf,dataset
+ seggpt segmenting everything in context,http://arxiv.org/pdf/2304.03284v1.pdf,new architecture
+ towards robust prompts on visionlanguage models,http://arxiv.org/pdf/2304.08479v1.pdf,vision-only
+ understanding and predicting human label variation in natural language inference through explanation,http://arxiv.org/pdf/2304.12443v1.pdf,out of scope analysis
+ otter a multimodal model with incontext instruction tuning,http://arxiv.org/pdf/2305.03726v1.pdf,new architecture
+ transformers learn incontext by gradient descent,http://arxiv.org/pdf/2212.07677v2.pdf, analysis of ICL as a learning algorithm
+ the closeness of incontext learning and weight shifting for softmax regression,http://arxiv.org/pdf/2304.13276v1.pdf, analysis of ICL as a learning algorithm
+ what learning algorithm is incontext learning investigations with linear models,http://arxiv.org/pdf/2211.15661v3.pdf, analysis of ICL as a learning algorithm
+ transformers as algorithms generalization and stability in incontext learning,http://arxiv.org/pdf/2301.07067v2.pdf, analysis of ICL as a learning algorithm
+ explaining emergent incontext learning as kernel regression,http://arxiv.org/pdf/2305.12766v2.pdf, analysis of ICL as a learning algorithm
+ label words are anchors an information flow perspective for understanding incontext learning,http://arxiv.org/pdf/2305.14160v1.pdf, analysis of ICL as a learning algorithm
+ transformers learn to implement preconditioned gradient descent for incontext learning,http://arxiv.org/pdf/2306.00297v1.pdf, analysis of ICL as a learning algorithm
+ investigating the learning behaviour of incontext learning a comparison with supervised learning,http://arxiv.org/pdf/2307.15411v2.pdf, analysis of ICL as a learning algorithm
+ incontext learning with transformer is really equivalent to a contrastive learning pattern,http://arxiv.org/pdf/2310.13220v1.pdf, analysis of ICL as a learning algorithm
+ incontext learning creates task vectors,http://arxiv.org/pdf/2310.15916v1.pdf, analysis of ICL as a learning algorithm
+ "what and how does incontext learning learn bayesian model averaging, parameterization, and generalization",http://arxiv.org/pdf/2305.19420v2.pdf, analysis of ICL as a learning algorithm
+ how do transformers learn incontext beyond simple functions a case study on learning with representations,http://arxiv.org/pdf/2310.10616v1.pdf, analysis of ICL as a learning algorithm
+ transformers learn higherorder optimization methods for incontext learning a study with linear models,http://arxiv.org/pdf/2310.17086v1.pdf, analysis of ICL as a learning algorithm
+ a contemporaneous infrared flash from a long gammaray burst an echo from the central engine,http://dx.doi.org/10.1038/nature03520,Not prompting related
+ stellar explosions by magnetic towers,http://dx.doi.org/10.1086/505621,Not prompting related
+ high energy radiation from gamma ray bursts,http://dx.doi.org/10.1063/1.1291372,Not prompting related
+ the fireball shock model of gamma ray bursts,http://dx.doi.org/10.1063/1.1361591,Not prompting related
+ origin of gamma ray bursters,http://dx.doi.org/10.1143/PTPS.136.300,Not prompting related
+ the updated e_peak e_gamma correlation in grbs,http://dx.doi.org/10.1393/ncc/i2005-10046-0,Not prompting related
+ gammaray burst early afterglows,http://dx.doi.org/10.1063/1.2141841,Not prompting related
+ mevgev emission from neutronloaded short gammaray burst jets,http://dx.doi.org/10.1086/507261,Not prompting related
+ a two component jet model for the xray afterglow flat segment in short grb 051221a,http://dx.doi.org/10.1086/512971,Not prompting related
+ the shallow phase of xray afterglows,http://dx.doi.org/10.1063/1.2943505,Not prompting related
+ hyperaccretion after the blandfordznajek process a new model for grbs with xray flares observed in early afterglows,http://dx.doi.org/10.1088/1009-9271/8/4/04,Not prompting related
+ high energy gammaray emission from gammaray bursts before glast,http://dx.doi.org/10.1007/s11467-008-0033-z,Not prompting related
+ expected performance of a hard xray polarimeter (polar) by monte carlo simulation,http://dx.doi.org/10.1016/j.nima.2009.04.033,Not prompting related
+ what do we know about gammaray bursts,http://arxiv.org/abs/1009.4648v2,Not prompting related
+ possible origin of rapid variability of gammaray bursts due to convective energy transfer in hyperaccretion disks,http://dx.doi.org/10.1111/j.1365-2966.2011.19733.x,Not prompting related
+ gammaray burst without baryonic and magnetic load,http://dx.doi.org/10.1143/PTP.126.555,Not prompting related
+ the physical origin of optical flares following grb 110205a and the nature of the outflow,http://dx.doi.org/10.1088/1674-4527/11/11/007,Not prompting related
+ magnetic structures in gammaray burst jets probed by gammaray polarization,http://dx.doi.org/10.1088/2041-8205/758/1/L1,Not prompting related
+ astrophysical zev acceleration in the relativistic jet from an accreting supermassive blackhole,http://dx.doi.org/10.1016/j.astropartphys.2014.02.004,Not prompting related
+ neutrinocooled accretion model with magnetic coupling for xray flares in grbs,http://dx.doi.org/10.1088/0004-637X/773/2/142,Not prompting related
+ jet luminosity from neutrinodominated accretion flows in grbs,http://arxiv.org/abs/1308.3236v1,Not prompting related
+ 3d manipulation with scanning near field optical nanotweezers,http://dx.doi.org/10.1038/nnano.2014.24,Not prompting related
+ tuning a multiple classifier system for side effect discovery using genetic algorithms,http://arxiv.org/abs/1409.1053v1,Not prompting related
+ moltensalt depleteduranium reactor,http://arxiv.org/abs/1503.03183v1,Not prompting related
+ xray flares in grbs general considerations and photospheric origin,http://dx.doi.org/10.1093/mnrasl/slw003,Not prompting related
+ waterinduced bimetallic alloy surface segregation a first principle study,http://arxiv.org/abs/1601.02346v1,Not prompting related
+ rates and singlettriplet ratios from tadf transients,http://arxiv.org/abs/1603.08998v2,Not prompting related
+ physical limits to magnetogenetics,http://dx.doi.org/10.7554/eLife.17210,Not prompting related
+ the dark side of ethical robots,http://arxiv.org/abs/1606.02583v1,Not prompting related
+ numerical and analytical solutions of neutrinodominated accretion flows with a nonzero torque boundary condition and its applications in gammaray bursts,http://dx.doi.org/10.3847/1538-4357/833/2/129,Not prompting related
+ highenergy emission as signature of magnetic field amplification in neutron star mergers,http://arxiv.org/abs/1701.01184v1,Not prompting related
+ gammaray burst models in light of the grb 170817a gw170817 connection,http://arxiv.org/abs/1802.07328v1,Not prompting related
+ surface modified mesoporous gc3n4@feni3 as prompt and proficient magnetic adsorbent for crude oil recovery,http://dx.doi.org/10.1016/j.apsusc.2018.12.166,Not prompting related
+ the perfect state transfer graph limbo,http://arxiv.org/abs/1808.00696v2,Not prompting related
+ variabilities of gammaray bursts from black hole hyperaccretion disks,http://dx.doi.org/10.1093/mnras/stw1985,Not prompting related
+ data driven exploratory attacks on black box classifiers in adversarial domains,http://dx.doi.org/10.1016/j.neucom.2018.02.007,Not prompting related
+ migrating large codebases to c++ modules,http://dx.doi.org/10.1088/1742-6596/1525/1/012051,Not prompting related
+ mn(ii)doped 2d perovskite for light emitting devices,http://arxiv.org/abs/1906.05099v1,Not prompting related
+ deep sequential feature learning in clinical image classification of infectious keratitis,http://arxiv.org/abs/2006.02666v1,Not prompting related
+ hydrodynamics of corecollapse supernovae and their progenitors,http://dx.doi.org/10.1007/s41115-020-0008-5,Not prompting related
+ xray plateaus in gammaray bursts explained by structured jets,http://arxiv.org/abs/2006.13966v1,Not prompting related
+ polar a spaceborne xray polarimeter for transient sources,http://dx.doi.org/10.5194/astra-7-43-2011,Not prompting related
+ the change of grb polarization angles in the magneticdominated jet model,http://dx.doi.org/10.1093/mnras/stu2051,Not prompting related
+ perspective quantum thermodynamics,http://dx.doi.org/10.1088/1367-2630/18/1/011002,Not prompting related
+ observational evidence for mass ejection accompanying short gamma ray bursts,http://dx.doi.org/10.1093/mnrasl/slx131,Not prompting related
+ photospheric emission from variable engine gamma ray burst simulations,http://dx.doi.org/10.3847/1538-4357/aaeed1,Not prompting related
+ the divideandconquer framework a suitable setting for the ddm of the future,http://arxiv.org/abs/1901.00229v1,Not prompting related
+ spectral puzzle of the offaxis gammaray burst in gw170817,http://dx.doi.org/10.1093/mnras/stz1650,Not prompting related
+ "equationofstate, critical constants, and thermodynamic properties of lithium at high energy density",http://dx.doi.org/10.1063/1.5143308,Not prompting related
+ interpreting the xray afterglows of gammaray bursts with radiative losses and millisecond magnetars,http://dx.doi.org/10.1093/mnras/staa3090,Not prompting related
+ wavelet denoising and attentionbased rnnarima model to predict forex price,http://arxiv.org/abs/2008.06841v1,Not prompting related
+ testing blandfordznajek mechanism in black hole hyperaccretion flows for longduration gammaray bursts,http://dx.doi.org/10.3847/1538-4357/abd6bd,Not prompting related
+ deep learningbased detection of the acute respiratory distress syndrome what are the models learning,http://arxiv.org/abs/2109.12323v1,Not prompting related
+ "continuationpassing style, defunctionalization, accumulations, and associativity",http://dx.doi.org/10.22152/programming-journal.org/2022/6/7,Not prompting related
+ helyos a customized offtheshelf solution for autonomous driving applications in delimited areas,http://dx.doi.org/10.1109/SII55687.2023.10039276,Not prompting related
+ the structure of gamma ray burst jets,http://arxiv.org/abs/2206.11088v2,Not prompting related
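
For readers who want to apply this blacklist downstream, a minimal sketch of filtering a candidate paper list against it using Python's csv module; the papers.csv input file and its title column are illustrative assumptions, not part of this commit:

import csv

# Minimal sketch: drop any paper whose title appears in blacklist.csv.
# papers.csv and its "title" column are hypothetical inputs.
with open("blacklist.csv", newline="", encoding="utf-8") as f:
    blacklisted = {row["title"].strip().lower() for row in csv.DictReader(f)}

with open("papers.csv", newline="", encoding="utf-8") as f:
    kept = [row for row in csv.DictReader(f)
            if row["title"].strip().lower() not in blacklisted]

print(f"kept {len(kept)} papers after blacklist filtering")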