flash / requirements.txt
NickNYU, bd59653: [bugfix] Fix the response cut-off caused by the LLM's predicted-token limit (the OpenAI Python library defaults to 256 tokens) by setting temperature to 0 and switching the LLM response mode from compact-refine to refine (see the sketch after the dependency list below).
llama_index>=0.6.3
llama_hub
streamlit
ruff
black
mypy
accelerate
python-dotenv
sentence_transformers
wandb
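
The commit note above describes the fix only in prose. Below is a minimal sketch of what such a change could look like against the llama_index 0.6.x release pinned in this file; the "data" directory, model name, and query string are hypothetical, and the specific calls (LLMPredictor, ServiceContext.from_defaults, as_query_engine with response_mode="refine") are assumptions based on that release line, not code taken from this repository.

# Minimal sketch (assumed llama_index 0.6.x interface): temperature 0 and an
# explicit "refine" response mode instead of the default compact-and-refine,
# so long answers are less likely to be truncated at the 256-token default.
from langchain.chat_models import ChatOpenAI
from llama_index import GPTVectorStoreIndex, LLMPredictor, ServiceContext, SimpleDirectoryReader

# Deterministic sampling; ChatOpenAI comes from langchain, which llama_index 0.6.x depends on.
llm_predictor = LLMPredictor(llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

# "data" is a hypothetical folder of documents to index.
documents = SimpleDirectoryReader("data").load_data()
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)

# Refine mode synthesizes the answer chunk by chunk instead of compacting prompts first.
query_engine = index.as_query_engine(response_mode="refine")
print(query_engine.query("What does this project do?"))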