fcakyon committed on
Commit c1e6c03
1 Parent(s): a18799e

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -5,7 +5,7 @@ tags:
 - video-classification
 ---
 
-# TimeSformer (base-sized model, fine-tuned on Something Something v2)
+# TimeSformer (high-resolution variant, fine-tuned on Something Something v2)
 
 TimeSformer model pre-trained on [Something Something v2](https://developer.qualcomm.com/software/ai-datasets/something-something). It was introduced in the paper [TimeSformer: Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Tong et al. and first released in [this repository](https://github.com/facebookresearch/TimeSformer).
 
@@ -24,12 +24,12 @@ from transformers import AutoImageProcessor, TimesformerForVideoClassification
 import numpy as np
 import torch
 
-video = list(np.random.randn(8, 3, 224, 224))
+video = list(np.random.randn(16, 3, 448, 448))
 
-processor = AutoImageProcessor.from_pretrained("fcakyon/timesformer-base-finetuned-ssv2")
-model = TimesformerForVideoClassification.from_pretrained("fcakyon/timesformer-base-finetuned-ssv2")
+processor = AutoImageProcessor.from_pretrained("fcakyon/timesformer-hr-finetuned-ssv2")
+model = TimesformerForVideoClassification.from_pretrained("fcakyon/timesformer-hr-finetuned-ssv2")
 
-inputs = processor(images=video, return_tensors="pt")
+inputs = feature_extractor(images=video, return_tensors="pt")
 
 with torch.no_grad():
   outputs = model(**inputs)
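For context, a runnable version of the updated example could look like the sketch below. It is a minimal sketch assuming `transformers`, `torch`, and `numpy` are installed; note that the committed snippet defines `processor` but calls `feature_extractor`, so the sketch uses `processor` for both calls, and the final label lookup is an illustrative addition rather than part of this commit.

```python
# Minimal sketch of the updated high-resolution usage example.
# Assumes the transformers, torch, and numpy packages are available.
from transformers import AutoImageProcessor, TimesformerForVideoClassification
import numpy as np
import torch

# 16 random frames at 448x448, matching the input size used by the
# high-resolution TimeSformer variant in the updated README.
video = list(np.random.randn(16, 3, 448, 448))

processor = AutoImageProcessor.from_pretrained("fcakyon/timesformer-hr-finetuned-ssv2")
model = TimesformerForVideoClassification.from_pretrained("fcakyon/timesformer-hr-finetuned-ssv2")

# The committed snippet calls feature_extractor here; this sketch reuses
# the object defined above as `processor`.
inputs = processor(images=video, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits

# Illustrative addition: map the top logit to a human-readable label.
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```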