jianght committed
Commit 9fa9b5e · verified · 1 Parent(s): 8ebe346

Upload 2 files

Files changed (2):
  1. README.md +47 -0
  2. UniCAS.pth +3 -0
README.md ADDED
@@ -0,0 +1,47 @@
+ ## How to use UniCAS to extract features.
+
+ The code below can be used to run inference; `UniCAS` expects images of size 224×224 that were extracted at 20× magnification.
+
+ ```python
+ import functools
+ import timm
+ import torch
+ from torchvision import transforms
+
+ params = {
+     'patch_size': 16,
+     'embed_dim': 1024,
+     'depth': 24,
+     'num_heads': 16,
+     'init_values': 1e-05,
+     'mlp_ratio': 2.671875 * 2,
+     'mlp_layer': functools.partial(
+         timm.layers.mlp.GluMlp, gate_last=False
+     ),
+     'act_layer': torch.nn.modules.activation.SiLU,
+     'no_embed_class': False,
+     'img_size': 224,
+     'num_classes': 0,
+     'in_chans': 3
+ }
+
+ model = timm.models.VisionTransformer(**params)
+ print(model.load_state_dict(torch.load("UniCAS.pth"), strict=False))
+ model = model.eval().to("cuda")
+
+
+ transform = transforms.Compose([
+     transforms.ToTensor(),
+     transforms.Normalize(
+         mean=(0.485, 0.456, 0.406),
+         std=(0.229, 0.224, 0.225),
+     ),
+ ])
+
+ input = torch.rand(3, 224, 224)
+ input = transforms.ToPILImage()(input)
+ input = transform(input).unsqueeze(0)
+ with torch.no_grad():
+     features = model(input.to("cuda"))
+ print(features.shape) # torch.Size([1, 1024])
+ ```
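
To extract features from real tiles rather than a random tensor, the `model` and `transform` objects defined above can be reused on a batch of images. The sketch below is a minimal example, not part of the original README; the tile file names are hypothetical and are assumed to be 224×224 RGB crops taken at 20× magnification.

```python
from PIL import Image
import torch

# Hypothetical paths to 224×224 tiles cropped at 20× magnification.
tile_paths = ["tile_0.png", "tile_1.png", "tile_2.png"]

# Apply the same ImageNet normalization defined above and stack into one batch.
batch = torch.stack(
    [transform(Image.open(p).convert("RGB")) for p in tile_paths]
)  # shape: [3, 3, 224, 224]

with torch.no_grad():
    features = model(batch.to("cuda"))

print(features.shape)  # torch.Size([3, 1024]), one 1024-dim embedding per tile
```
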
UniCAS.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:778e8da742d8b2775e70d934be9ac19536c6287d755ecdb429c097a78ee419e5
+ size 1215229559
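
`UniCAS.pth` is stored as a Git LFS pointer, so the actual ~1.2 GB checkpoint must be fetched (e.g. with `git lfs pull`) before `torch.load` can read it. As an optional sanity check, the downloaded file can be compared against the pointer's `oid` and `size`; the sketch below is a minimal example and assumes the checkpoint was saved locally as `UniCAS.pth`.

```python
import hashlib
import os

path = "UniCAS.pth"  # assumed local copy of the downloaded checkpoint

# Recompute the SHA-256 digest in chunks to avoid loading ~1.2 GB into memory.
sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

# Compare against the oid and size recorded in the LFS pointer above.
assert sha256.hexdigest() == "778e8da742d8b2775e70d934be9ac19536c6287d755ecdb429c097a78ee419e5"
assert os.path.getsize(path) == 1215229559
print("checkpoint matches the LFS pointer")
```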