Yang Lee (innovation64)
AI & ML interests: AGI
Organizations

Collections (2)

- Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping • Paper • 2402.14083 • Published • 47
- Linear Transformers are Versatile In-Context Learners • Paper • 2402.14180 • Published • 6
- Training-Free Long-Context Scaling of Large Language Models • Paper • 2402.17463 • Published • 19
- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits • Paper • 2402.17764 • Published • 603

RAG research
- Beyond Chain-of-Thought: A Survey of Chain-of-X Paradigms for LLMs • Paper • 2404.15676 • Published
- How faithful are RAG models? Quantifying the tug-of-war between RAG and LLMs' internal prior • Paper • 2404.10198 • Published • 7
- RAFT: Adapting Language Model to Domain Specific RAG • Paper • 2403.10131 • Published • 67
- FaaF: Facts as a Function for the evaluation of RAG systems • Paper • 2403.03888 • Published
Papers (1)

Models (22)
- innovation64/llama3.1-8B-instruct-4bit-ruozhiba-4bit • Text Generation • Updated • 6
- innovation64/llama3.1-8B-instruct-4bit-ruozhiba-GGUF • Updated • 114
- innovation64/llama3.1-8B-instruct-4bit-ruozhiba-lora • Updated
- innovation64/llama3.1-8B-instruct-4bit-ruozhiba-16 • Text Generation • Updated • 10
- innovation64/speecht5_finetuned_voxpopuli_sl • Text-to-Speech • Updated • 86
- innovation64/whisper-tiny-dv • Automatic Speech Recognition • Updated • 7
- innovation64/distilhubert-finetuned-gtzan • Audio Classification • Updated • 10
- innovation64/poca-aSoccerTwos • Reinforcement Learning • Updated • 11
- innovation64/rl_course_vizdoom_health_gathering_supreme • Reinforcement Learning • Updated
- innovation64/lunralandsss • Reinforcement Learning • Updated
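
Checkpoints like these can usually be loaded by pairing the repo id with its task tag via the Hugging Face pipeline API. A minimal sketch for the speech-recognition model listed above (this assumes the repo follows the standard transformers layout and that a local audio file such as sample.wav exists; the GGUF repo is intended for llama.cpp-style runtimes instead):

    # A sketch, not a verified usage of these specific repos.
    from transformers import pipeline

    # Build an ASR pipeline from the fine-tuned Whisper checkpoint listed above.
    asr = pipeline(
        "automatic-speech-recognition",
        model="innovation64/whisper-tiny-dv",
    )

    # Transcribe a local audio file (hypothetical path).
    result = asr("sample.wav")
    print(result["text"])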
Datasets: None public yet