talkie-web-13b-base-tf (BF16 Transformers + safetensors conversion)

This repository is a Transformers-compatible conversion of talkie-lm/talkie-web-13b-base, the original Talkie base completion model.

The upstream model is a 13B language model trained on 260B tokens of FineWeb. The original model card describes it as architecturally identical to talkie-lm/talkie-1930-13b-base and intended for controlled comparisons between vintage and modern language models.

The original base checkpoint is FP32. This repository stores a BF16 conversion of those weights and packages them for Transformers with custom trust_remote_code modules and BF16 sharded safetensors.
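The conversion step can be sketched as follows. This is an illustrative minimal version, not the actual conversion script used for this repository; the state-dict names are stand-ins.

```python
import torch

def to_bf16(state_dict):
    """Cast floating-point tensors to BF16; leave integer tensors (e.g. buffers) untouched."""
    return {
        name: t.to(torch.bfloat16) if t.is_floating_point() else t
        for name, t in state_dict.items()
    }

# Tiny stand-in for the FP32 checkpoint's state dict.
fp32_weights = {"lm_head.weight": torch.randn(8, 4), "ids": torch.arange(4)}
bf16_weights = to_bf16(fp32_weights)
assert bf16_weights["lm_head.weight"].dtype == torch.bfloat16
assert bf16_weights["ids"].dtype == torch.int64
# The real conversion then shards the result and writes it with
# safetensors.torch.save_file(...), one file per shard plus an index.
```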

This is not an official Talkie release; refer to the upstream model card for the author-provided provenance and usage notes.

Source Model

talkie-lm/talkie-web-13b-base (the original FP32 base completion checkpoint described above)

Conversion Details

  • Weight dtype: BF16
  • Weight format: sharded safetensors
  • Context length: 2048 tokens
  • Architecture: custom Talkie code loaded with trust_remote_code=True
  • Tokenizer: Talkie tiktoken-compatible tokenizer exposed through AutoTokenizer

Usage

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "xlr8harder/talkie-web-13b-base-tf"
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    path,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,  # spelled dtype= on recent Transformers releases
    device_map={"": "cuda"},
    use_safetensors=True,
)

For base completions:

inputs = tokenizer("The latest discoveries in physics suggest that", return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))

vLLM

The included remote-code model implements the Transformers attention-interface hooks expected by vLLM's Transformers modeling backend. For compatibility with that backend, the original single-scalar lm_head_gain is folded into lm_head.weight during conversion; the other Talkie gain parameters remain explicit model parameters. vLLM's logit_scale-style approach was not adopted because it applies the scaling after the output matmul, whereas Talkie applies the gain to the head weight before the matmul. In BF16 this ordering difference can introduce small rounding differences and, in smoke tests, it changed one near-tied top-token ordering.
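The distinction between the two scaling orders can be seen in a toy comparison (hypothetical shapes; the gain value is illustrative):

```python
import torch

torch.manual_seed(0)
g = torch.tensor(1.7)                       # stand-in for lm_head_gain
W = torch.randn(32, 16).to(torch.bfloat16)  # stand-in for lm_head.weight
h = torch.randn(16).to(torch.bfloat16)      # stand-in for the final hidden state

# What the conversion does: fold the gain into the weight before the matmul.
folded = (W * g).to(torch.bfloat16) @ h
# What a logit_scale-style approach would do: scale the logits after the matmul.
scaled = (W @ h) * g

# Equal in exact arithmetic, but the BF16 rounding points differ,
# which can flip the order of near-tied top tokens.
print((folded.float() - scaled.float()).abs().max())
```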

vllm serve xlr8harder/talkie-web-13b-base-tf \
  --task generate \
  --model-impl transformers \
  --trust-remote-code \
  --dtype bfloat16 \
  --max-model-len 2048

Validation

The Transformers safetensors model was compared against the original Talkie web FP32 checkpoint on a forward-pass smoke test. The top-10 next-token ordering matched exactly; observed max absolute logit difference was 0.03125.
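The shape of such a smoke test can be sketched as below, with random logits standing in for the two models' real outputs (the comparison helper is hypothetical, not the actual validation script):

```python
import torch

def compare_logits(ref, test, k=10):
    """Return (top-k ordering matches, max absolute logit difference)."""
    same_order = torch.equal(torch.topk(ref, k).indices, torch.topk(test, k).indices)
    return same_order, (ref - test).abs().max().item()

ref = torch.randn(50_000)              # stand-in for FP32 reference logits
test = ref.to(torch.bfloat16).float()  # BF16 round-trip introduces rounding
same_order, max_diff = compare_logits(ref, test)
print(same_order, max_diff)
```

Note that the reported 0.03125 is exactly one BF16 ulp at logit magnitudes in [4, 8), consistent with BF16 rounding being the dominant source of difference.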
