
by AK and the research community

Apr 23

I-SHEEP: Self-Alignment of LLM from Scratch through an Iterative Self-Enhancement Paradigm

Large Language Models (LLMs) have achieved significant advancements; however, the common learning paradigm treats LLMs as passive information repositories, neglecting their potential for active learning and alignment. Some approaches train LLMs on their own generated synthetic data, exploring the possibility of active alignment. However, a large gap remains between these one-time alignment methods and the continuous, automatic alignment of humans. In this paper, we introduce I-SHEEP, an Iterative Self-EnHancEmEnt Paradigm. This human-like paradigm enables LLMs to continuously self-align from scratch with nothing but the base model itself. Compared to the one-time alignment method Dromedary (sun2023principledriven), which corresponds to the first iteration in this paper, I-SHEEP significantly enhances the capacities of both Qwen and Llama models. I-SHEEP achieves a maximum relative improvement of 78.2% on Alpaca Eval and 24.0% on MT Bench, and an absolute increase of 8.88% in IFEval accuracy over subsequent iterations on the Qwen-1.5 72B model. Additionally, I-SHEEP surpasses the base model on various standard benchmark generation tasks, achieving average improvements of 24.77% on code generation tasks, 12.04% on TriviaQA, and 20.29% on SQuAD. We also provide new insights based on the experimental results. Our code, datasets, and models are available at https://anonymous.4open.science/r/I-SHEEP.
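The iterative loop the abstract describes — self-generate, self-assess, self-train, repeat — can be sketched as below. This is a minimal illustration, not the paper's implementation: `generate_pairs`, `assess_quality`, and `finetune` are hypothetical toy stand-ins for the model's actual synthesis, self-judging, and training steps.

```python
# Toy sketch of an I-SHEEP-style iterative self-alignment loop.
# All three helpers are hypothetical placeholders, NOT the paper's code.

def generate_pairs(model, n):
    # In the paper the model synthesizes its own instruction-response
    # pairs; here we fabricate placeholder strings tagged by iteration.
    return [(f"instr-{i}", f"resp-v{model['version']}-{i}") for i in range(n)]

def assess_quality(model, pair):
    # Self-assessment step: the model scores its own output.
    # Toy deterministic scorer in [0, 1) based on the response text.
    return (sum(ord(c) for c in pair[1]) % 100) / 100.0

def finetune(model, curated):
    # Training on the curated self-generated data; toy version just
    # advances an iteration counter and tallies the data used.
    return {"version": model["version"] + 1,
            "data_seen": model["data_seen"] + len(curated)}

def i_sheep(model, iterations=3, n=50, threshold=0.5):
    for _ in range(iterations):
        pairs = generate_pairs(model, n)          # self-generate
        curated = [p for p in pairs
                   if assess_quality(model, p) >= threshold]  # self-curate
        model = finetune(model, curated)          # self-train
    return model

final = i_sheep({"version": 0, "data_seen": 0})
```

The key contrast with one-time alignment (the Dromedary-style first iteration) is the outer loop: each round's model produces and filters the data for the next round.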

Localized Heating and Dynamics of the Solar Corona due to a Symbiosis of Waves and Reconnection

The Sun's outer atmosphere, the corona, is maintained at mega-Kelvin temperatures and fills the heliosphere with a supersonic outflowing wind. The dissipation of magnetic waves and of direct electric currents are likely the most significant processes for heating the corona, but a lively debate exists on their relative roles. Here, we suggest that the two are often intrinsically linked, since magnetic waves may trigger current dissipation, and impulsive reconnection can launch magnetic waves. We present a study of the first of these processes using a 2D physics-based numerical simulation with the Adaptive Mesh Refinement (AMR) Versatile Advection Code (VAC). Magnetic waves such as fast magnetoacoustic waves are often observed to propagate in the large-scale corona and interact with local magnetic structures. The present numerical simulations show how the propagation of magnetic disturbances towards a null point or separator can lead to the accumulation of electric currents. Lorentz forces can laterally push and vertically stretch the magnetic fields, forming a current sheet with a strong magnetic-field gradient. The magnetic field lines then break and reconnect, and so contribute towards coronal heating. Numerical results are presented that support these ideas and support the concept of a symbiosis between waves and reconnection in heating the solar corona.
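The current sheet described above — a strong magnetic-field gradient carrying concentrated current — can be illustrated with a standard Harris-sheet field, which is an assumed textbook configuration rather than the paper's simulation setup. In 2D, the out-of-plane current density is J_z = ∂B_y/∂x − ∂B_x/∂y (in units with μ₀ = 1), and for B_x = tanh(y) it peaks sharply at the sheet center:

```python
import numpy as np

# Harris-sheet sketch (NOT the paper's AMRVAC setup): B_x = tanh(y), B_y = 0.
x = np.linspace(-5.0, 5.0, 201)
y = np.linspace(-5.0, 5.0, 201)
X, Y = np.meshgrid(x, y)          # rows vary in y, columns in x
Bx = np.tanh(Y)
By = np.zeros_like(X)

# J_z = dB_y/dx - dB_x/dy; np.gradient axis 0 is y, axis 1 is x here.
dBy_dx = np.gradient(By, x, axis=1)
dBx_dy = np.gradient(Bx, y, axis=0)
Jz = dBy_dx - dBx_dy              # analytically -sech^2(y), mu_0 = 1

# The current magnitude is concentrated at the sheet center, y = 0,
# where the field gradient is steepest.
peak_row = np.unravel_index(np.argmax(np.abs(Jz)), Jz.shape)[0]
```

This is the sense in which a wave-driven steepening of the field gradient near a null "accumulates" current: the thinner the sheet, the larger the peak of |J_z|.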

Language Models (Mostly) Know What They Know

We study whether language models can evaluate the validity of their own claims and predict which questions they will be able to answer correctly. We first show that larger models are well-calibrated on diverse multiple choice and true/false questions when they are provided in the right format. Thus we can approach self-evaluation on open-ended sampling tasks by asking models to first propose answers, and then to evaluate the probability "P(True)" that their answers are correct. We find encouraging performance, calibration, and scaling for P(True) on a diverse array of tasks. Performance at self-evaluation further improves when we allow models to consider many of their own samples before predicting the validity of one specific possibility. Next, we investigate whether models can be trained to predict "P(IK)", the probability that "I know" the answer to a question, without reference to any particular proposed answer. Models perform well at predicting P(IK) and partially generalize across tasks, though they struggle with calibration of P(IK) on new tasks. The predicted P(IK) probabilities also increase appropriately in the presence of relevant source materials in the context, and in the presence of hints towards the solution of mathematical word problems. We hope these observations lay the groundwork for training more honest models, and for investigating how honesty generalizes to cases where models are trained on objectives other than the imitation of human writing.