Big Deeper (BigDeeper)
AI & ML interests
Differentiable hashing, orthonormal polynomial language modeling, image compression into language representations.
Recent Activity
New activity 3 days ago in Lightricks/LTX-Video: SamplerCustomAdvanced is INCREDIBLY slow
New activity 3 days ago in Lightricks/LTX-Video: What minimal VRAM does it require?
New activity 3 days ago in Lightricks/LTX-Video: Longer video?
Organizations
None yet
BigDeeper's activity
SamplerCustomAdvanced is INCREDIBLY slow
#28 opened 3 days ago by BigDeeper
What minimal VRAM does it require?
7 · #18 opened 7 days ago by DrNicefellow
Longer video?
1 · #25 opened 3 days ago by BigDeeper
Support for fp16?
2 · #2 opened 4 days ago by BigDeeper
VSCODE + Cline + Ollama + Qwen2.5-Coder-32B-Instruct.Q8_0
3 · #20 opened 13 days ago by BigDeeper
Minimum required files/models
#60 opened 4 months ago by BigDeeper
comfyui does not recognize model files in sft format
5 · #18 opened 4 months ago by peidong
Are there advantages or disadvantages in changing the format for translation?
3 · #10 opened 4 months ago by BigDeeper
Does it use a specific chat template?
1 · #4 opened 5 months ago by BigDeeper
Does it need a specific template?
#12 opened 5 months ago by BigDeeper
__ output
1 · #19 opened 5 months ago by BigDeeper
What does 120B really mean?
3 · #1 opened 7 months ago by BigDeeper
Does anyone know which specific Python library contains the tokenizer that was used to train Llama-3-70b?
2 · #11 opened 7 months ago by BigDeeper
15 TeraTokens = 190 Million books
2 · #4 opened 8 months ago by Languido
I was trying to fine-tune llama3 8b but getting following error - TypeError: LlamaForCausalLM.forward() got an unexpected keyword argument 'decoder_input_ids'
4 · #117 opened 7 months ago by aniiikket11
Llama-3-70b tokenizer.
3 · #116 opened 7 months ago by BigDeeper