Custom Layers and Utilities

This page lists all the custom layers used by the library, as well as the utility functions it provides for modeling.

Most of these are only useful if you are studying the code of the models in the library.

PyTorch custom modules

class transformers.Conv1D

( nf nx )

Parameters

  • nf (int) — The number of output features.
  • nx (int) — The number of input features.

1D-convolutional layer as defined by Radford et al. for OpenAI GPT (and also used in GPT-2).

Basically works like a linear layer but the weights are transposed.
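
A minimal sketch of that relationship, using arbitrary example sizes for nf and nx: the stored weight has shape (nx, nf), so an nn.Linear holding the transposed weight computes the same output.

import torch
from torch import nn
from transformers import Conv1D

# Conv1D stores its weight as (nx, nf), i.e. transposed relative to nn.Linear
conv = Conv1D(nf=768, nx=256)
x = torch.randn(2, 10, 256)  # (batch, seq_len, nx)
print(conv(x).shape)  # torch.Size([2, 10, 768])

# an nn.Linear with the transposed weight produces the same result
linear = nn.Linear(256, 768)
with torch.no_grad():
    linear.weight.copy_(conv.weight.T)
    linear.bias.copy_(conv.bias)
torch.testing.assert_close(conv(x), linear(x))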

PyTorch Helper Functions

transformers.apply_chunking_to_forward

( forward_fn: Callable[..., torch.Tensor] chunk_size: int chunk_dim: int *input_tensors ) torch.Tensor

Parameters

  • forward_fn (Callable[..., torch.Tensor]) — The forward function of the model.
  • chunk_size (int) — The chunk size of a chunked tensor: num_chunks = input_tensors[0].shape[chunk_dim] / chunk_size.
  • chunk_dim (int) — The dimension over which the input_tensors should be chunked.
  • input_tensors (tuple[torch.Tensor]) — The input tensors of forward_fn which will be chunked.

Returns

torch.Tensor

A tensor with the same shape as forward_fn would have returned if applied directly to input_tensors.

This function chunks the input_tensors into smaller input tensor parts of size chunk_size over the dimension chunk_dim. It then applies a layer forward_fn to each chunk independently to save memory.

If forward_fn is independent across chunk_dim, this function yields the same result as directly applying forward_fn to input_tensors.

Examples:

# rename the usual forward() fn to forward_chunk()
def forward_chunk(self, hidden_states):
    hidden_states = self.decoder(hidden_states)
    return hidden_states


# implement a chunked forward function
def forward(self, hidden_states):
    return apply_chunking_to_forward(
        self.forward_chunk, self.chunk_size_lm_head, self.seq_len_dim, hidden_states
    )
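
For a self-contained illustration (the toy linear layer and the sizes below are made up for the example), chunking over the sequence dimension reproduces the unchunked result because the layer is applied position-wise:

import torch
from transformers import apply_chunking_to_forward

dense = torch.nn.Linear(16, 16)

def forward_fn(hidden_states):
    # applied position-wise, so it is independent across the sequence dimension
    return dense(hidden_states)

hidden_states = torch.randn(2, 8, 16)  # (batch, seq_len, hidden)

# split the sequence dimension (dim=1) into chunks of size 2
chunked = apply_chunking_to_forward(forward_fn, 2, 1, hidden_states)
torch.testing.assert_close(chunked, forward_fn(hidden_states))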

transformers.pytorch_utils.prune_linear_layer

( layer: nn.Linear index: torch.LongTensor dim: int = 0 ) torch.nn.Linear

Parameters

  • layer (torch.nn.Linear) — The layer to prune.
  • index (torch.LongTensor) — The indices to keep in the layer.
  • dim (int, optional, defaults to 0) — The dimension on which to keep the indices.

Returns

torch.nn.Linear

The pruned layer as a new layer with requires_grad=True.

Prune a linear layer to keep only entries in index.

Used to remove heads.
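
A minimal sketch of the effect, using arbitrary layer sizes and indices: keeping indices along dim=0 keeps the corresponding output features, and the pruned layer reproduces the selected outputs of the original.

import torch
from transformers.pytorch_utils import prune_linear_layer

layer = torch.nn.Linear(8, 4)

# keep only output features 0 and 2 (dim=0 indexes the output dimension)
index = torch.LongTensor([0, 2])
pruned = prune_linear_layer(layer, index, dim=0)

print(pruned.weight.shape)  # torch.Size([2, 8])
x = torch.randn(3, 8)
torch.testing.assert_close(pruned(x), layer(x)[:, index])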
