Cristian Diaz

CristianJD

AI & ML interests

OCR, Deep Learning, NLP

Organizations

Journalists on Hugging Face

CristianJD's activity

New activity in microsoft/trocr-base-handwritten 7 months ago
New activity in qantev/trocr-large-spanish 7 months ago
reacted to merve's post with 🔥 8 months ago
The demo for IDEFICS-8B is out! HuggingFaceM4/idefics-8b

This checkpoint is not optimized for chat, but it works very well for various tasks, including visual question answering and document tasks 💬📑
The chat-optimized version is coming soon!
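
For readers who want to try it, below is a minimal sketch of prompting the checkpoint for visual question answering. It assumes the model follows the standard transformers vision-to-text interface (AutoProcessor plus AutoModelForVision2Seq) and a completion-style prompt with an <image> placeholder; the exact prompt format for this checkpoint may differ, and the sample image URL is just a commonly used test image.

```python
# Hedged sketch: assumes this checkpoint uses the standard
# AutoProcessor / AutoModelForVision2Seq interface; the prompt
# format and image-token handling may differ in practice.
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

ckpt = "HuggingFaceM4/idefics-8b"  # checkpoint named in the post
processor = AutoProcessor.from_pretrained(ckpt)
model = AutoModelForVision2Seq.from_pretrained(ckpt)

# A widely used test image (two cats) from the COCO validation set.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Base (non-chat) checkpoints are typically prompted completion-style,
# with an image placeholder token embedded in the text.
inputs = processor(text="<image>Question: How many cats are there? Answer:",
                   images=image, return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```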
updated a collection 9 months ago
New activity in microsoft/trocr-base-handwritten 9 months ago

Full Page OCR

#5 opened 10 months ago by DDM007
New activity in qantev/trocr-small-spanish 9 months ago

Handwriting and preprinted

#1 opened 9 months ago by CristianJD
New activity in Bennet1996/donut-small 9 months ago

Number of parameters

#1 opened 9 months ago by CristianJD
New activity in Xenova/trocr-small-handwritten 9 months ago

Python

#2 opened 9 months ago by CristianJD
New activity in ademax/trocr-small 9 months ago

Context

#1 opened 9 months ago by CristianJD
reacted to gsarti's post with ❤️ 9 months ago
🔍 Today's pick in Interpretability & Analysis of LMs: Information Flow Routes: Automatically Interpreting Language Models at Scale by @javifer @lena-voita

This work presents a novel method to identify salient components in Transformer-based language models by decomposing the residual stream into the contributions of individual model components.

This method is more efficient and scalable than previous techniques such as activation patching: it requires only a single forward pass through the model to identify critical information-flow paths. Moreover, it needs no contrastive template, whereas activation patching is observed to produce results that depend on the chosen contrastive example.
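
To make the single-forward-pass idea concrete, here is a minimal sketch of the underlying intuition, not the paper's actual method: every attention block and feed-forward block adds its output to the residual stream, so hooking those submodules during one forward pass gives a rough per-component contribution signal. The GPT-2 choice, the hook bookkeeping, and the norm-based proxy are my own simplifications.

```python
# Minimal sketch (not the paper's code): measure how much each attention
# and MLP block writes into the residual stream in a single forward pass.
import torch
from transformers import GPT2Model, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2").eval()

contributions = {}  # (layer, component) -> mean update norm over positions

def make_hook(name):
    def hook(module, inputs, output):
        # GPT-2's attention returns a tuple; the MLP returns a tensor.
        out = output[0] if isinstance(output, tuple) else output
        contributions[name] = out.norm(dim=-1).mean().item()
    return hook

for i, block in enumerate(model.h):
    block.attn.register_forward_hook(make_hook((i, "attn")))
    block.mlp.register_forward_hook(make_hook((i, "mlp")))

with torch.no_grad():
    model(**tok("The capital of France is", return_tensors="pt"))

# Larger norms = components writing more into the residual stream here;
# the paper refines this into per-edge importances between components.
for (layer, comp), norm in sorted(contributions.items()):
    print(f"layer {layer:2d} {comp}: {norm:.3f}")
```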

Information flow routes are applied to Llama 2, showing that:

1. Models show “typical” information flow routes for non-content words, while content words don’t exhibit such patterns.
2. Feedforward networks are more active in the bottom layers of the network (where, e.g., subject enrichment is performed) and in the very last layer.
3. Positional and subword-merging attention heads are among the most active and important throughout the network.
4. The model can treat periods like BOS tokens, leaving their residual representations mostly untouched during the forward pass.

Finally, the paper also demonstrates that some model components are specialized for specific domains, such as coding or multilingual text, suggesting a high degree of modularity in the network. Projecting the right singular vectors of these domain-specific heads' OV circuits onto the unembedding matrix shows highly interpretable concepts being handled by granular model components.
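
As an illustration of that last analysis, here is a minimal sketch (my reconstruction on GPT-2, not the paper's Llama 2 setup) of projecting the right singular vectors of a single head's OV circuit onto the unembedding matrix to see which vocabulary items the head promotes; the layer/head indices are arbitrary.

```python
# Sketch (my reconstruction): SVD one attention head's OV circuit and
# project its right singular vectors onto the unembedding matrix.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

layer, head = 10, 7              # arbitrary head to inspect
d = model.config.n_embd          # hidden size (768)
hd = d // model.config.n_head    # per-head dimension (64)

block = model.transformer.h[layer]
with torch.no_grad():
    # GPT-2 uses Conv1D: weight shape is (in_features, out_features).
    # c_attn packs [Q | K | V] along the output dim, so V starts at 2*d.
    W_V = block.attn.c_attn.weight[:, 2 * d + head * hd : 2 * d + (head + 1) * hd]
    W_O = block.attn.c_proj.weight[head * hd : (head + 1) * hd, :]
    W_OV = W_V @ W_O             # (d, d): the head's residual-to-residual map

    # Rows of Vh span the head's output directions in the residual stream.
    _, S, Vh = torch.linalg.svd(W_OV)
    for i in range(3):           # top singular directions
        scores = model.lm_head.weight @ Vh[i]     # project to vocab space
        top = torch.topk(scores, 8).indices
        print(f"direction {i} (sigma={S[i].item():.2f}):", tok.decode(top))
```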

📄 Paper: Information Flow Routes: Automatically Interpreting Language Models at Scale (2403.00824)

🔍 All daily picks: gsarti/daily-picks-in-interpretability-and-analysis-of-lms-65ae3339949c5675d25de2f9
New activity in datificate/gpt2-small-spanish 9 months ago

Parameters

#2 opened 9 months ago by CristianJD