Spaces: X-Pipe / flash
main · 22 commits · 3 contributors
Latest commit: Merge pull request #2 from NickNYU/feature/multi-docs · Chen · 5819ffe (unverified) · 12 months ago
.idea · reformat well · about 1 year ago
core · [bugfix] fix the cut-off caused by the LLM predict token limit (256 by default in the openai Python lib) by setting temperature to 0 and switching the LLM predict method from compact-refine to refine · 12 months ago
dataset · [bugfix] fix the cut-off caused by the LLM predict token limit (256 by default in the openai Python lib) by setting temperature to 0 and switching the LLM predict method from compact-refine to refine · 12 months ago
docs · reformat well · about 1 year ago
langchain_manager · [bugfix] fix the cut-off caused by the LLM predict token limit (256 by default in the openai Python lib) by setting temperature to 0 and switching the LLM predict method from compact-refine to refine · 12 months ago
llama · [bugfix] fix the cut-off caused by the LLM predict token limit (256 by default in the openai Python lib) by setting temperature to 0 and switching the LLM predict method from compact-refine to refine · 12 months ago
xpipe_wiki · switch to tree-summarize to try map-reduce · 12 months ago
.gitignore · 1.97 kB · [bugfix] fix the cut-off caused by the LLM predict token limit (256 by default in the openai Python lib) by setting temperature to 0 and switching the LLM predict method from compact-refine to refine · 12 months ago
.pre-commit-config.yaml · 112 Bytes · add tools · about 1 year ago
Makefile · 163 Bytes · fix app.py to match Hugging Face's app portal · about 1 year ago
README.md · 1.24 kB · first edition · about 1 year ago
app.py · 1.19 kB · [bugfix] fix the cut-off caused by the LLM predict token limit (256 by default in the openai Python lib) by setting temperature to 0 and switching the LLM predict method from compact-refine to refine · 12 months ago
local-requirements.txt · 13 Bytes · reformat well · about 1 year ago
pyproject.toml · 370 Bytes · fix app.py to match Hugging Face's app portal · about 1 year ago
requirements.txt · 116 Bytes · [bugfix] fix the cut-off caused by the LLM predict token limit (256 by default in the openai Python lib) by setting temperature to 0 and switching the LLM predict method from compact-refine to refine · 12 months ago
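
The recurring bugfix commit above describes a concrete remedy for truncated answers: pin the sampling temperature to 0 and synthesize responses with plain refine instead of the default compact-refine, since the openai completion API historically defaulted to 256 max tokens. Below is a minimal sketch of that configuration, assuming the Space uses the pre-0.10 LlamaIndex API (suggested by the llama directory) with an OpenAI backend; the model name, directory path, and query are illustrative, not taken from the repo.

```python
# Hedged sketch, not the Space's actual code: avoid cut-off answers by
# pinning temperature to 0 and using the plain "refine" response mode.
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms import OpenAI

# temperature=0 for deterministic output; raise max_tokens above the old
# 256-token openai default so completions are not truncated mid-answer.
llm = OpenAI(model="gpt-3.5-turbo", temperature=0, max_tokens=1024)
service_context = ServiceContext.from_defaults(llm=llm)

# "docs" is an illustrative path; requires OPENAI_API_KEY in the environment.
documents = SimpleDirectoryReader("docs").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

# "refine" walks retrieved chunks one at a time, refining the answer with
# each, instead of packing chunks into one prompt ("compact") that is more
# prone to hitting the token limit.
query_engine = index.as_query_engine(response_mode="refine")
print(query_engine.query("What is X-Pipe?"))
```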
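
The xpipe_wiki commit then moves to tree-summarize, a map-reduce style mode that summarizes chunks independently and merges the partial summaries up a tree. Reusing the index from the sketch above, again as an assumption rather than the repo's actual code:

```python
# tree_summarize: summarize each retrieved chunk (map), then recursively
# merge the partial summaries (reduce) instead of refining sequentially.
summary_engine = index.as_query_engine(response_mode="tree_summarize")
print(summary_engine.query("Summarize the xpipe_wiki documents."))
```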