mustapha committed on
Commit
1588fa7
1 Parent(s): e6331ee

Mean pooling not max pooling


Hello there, I hope you are doing well.
If I understand correctly, you are performing mean pooling, not max pooling.
Thank you.

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -83,7 +83,7 @@ encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tenso
 with torch.no_grad():
     model_output = model(**encoded_input)
 
-# Perform pooling. In this case, max pooling.
+# Perform pooling. In this case, mean pooling.
 sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
 
 print("Sentence embeddings:")