zuppif committed
Commit f82decc
1 Parent(s): b13a1f6

Upload README.md

Files changed (1)
  1. README.md +13 -14
README.md CHANGED
@@ -2,33 +2,32 @@
 license: apache-2.0
 tags:
 - vision
-- image-segmentation
+- image-classification

 datasets:
-- imagenet-21k
 - imagenet-1k

 widget:
-- src:https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
-widget.title
+- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
+  example_title: Tiger

 ---

-# ConvNext
+# RegNet

-ConvNext model trained on imagenet-21k. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
+RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).

-Disclaimer: The team releasing ConvNext did not write a model card for this model so this model card has been written by the Hugging Face team.
+Disclaimer: The team releasing RegNet did not write a model card for this model, so this model card has been written by the Hugging Face team.

 ## Model description

-weiiii
+The authors design search spaces to perform Neural Architecture Search (NAS). They start from a high-dimensional search space and iteratively reduce it by empirically applying constraints based on the best-performing models sampled from the current search space.

-![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png)
+![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/regnet_architecture.png)

 ## Intended uses & limitations

-You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for
+You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
 fine-tuned versions on a task that interests you.

 ### How to use
@@ -36,15 +35,15 @@ fine-tuned versions on a task that interests you.
 Here is how to use this model:

 ```python
->>> from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
+>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
 >>> import torch
 >>> from datasets import load_dataset

 >>> dataset = load_dataset("huggingface/cats-image")
 >>> image = dataset["test"]["image"][0]

->>> feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-tiny-224")
->>> model = ConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224")
+>>> feature_extractor = AutoFeatureExtractor.from_pretrained("")
+>>> model = RegNetForImageClassification.from_pretrained("")

 >>> inputs = feature_extractor(image, return_tensors="pt")

@@ -59,4 +58,4 @@ Here is how to use this model:



-For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext).
+For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
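
The `## Model description` paragraph added in the diff above describes an iterative design-space reduction loop. Purely as an illustrative sketch of that idea (not the procedure from the paper; every dimension, range, and scoring function below is hypothetical):

```python
import random

def sample_config(space):
    """Draw one model configuration from the current search space."""
    return {name: random.choice(values) for name, values in space.items()}

def score(cfg):
    """Toy proxy score (placeholder, not a real accuracy measurement)."""
    return cfg["depth"] * cfg["width"] / (1 + cfg["groups"])

def refine(space, n_samples=32, top_k=8):
    """Shrink the space to the values seen among the best-scoring samples."""
    configs = [sample_config(space) for _ in range(n_samples)]
    best = sorted(configs, key=score, reverse=True)[:top_k]
    return {name: sorted({cfg[name] for cfg in best}) for name in space}

# Hypothetical search space; RegNet's real dimensions and ranges differ.
space = {"depth": [12, 16, 20, 24], "width": [64, 128, 256], "groups": [1, 2, 4]}
for _ in range(3):   # a few reduction rounds
    space = refine(space)
print(space)         # the narrower surviving design space
```

Each round keeps only the parameter values shared by the best-scoring samples, so the space shrinks toward a small family of good designs, which is the spirit of the design-space analysis the paragraph summarizes.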
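The `How to use` snippet in the diff is cut off at a hunk boundary, and the checkpoint names in the commit are left empty. Below is a minimal end-to-end sketch; the checkpoint `facebook/regnet-y-040` is an assumed example, not a name taken from this commit.

```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]

>>> # checkpoint name is an assumption; the commit leaves it blank
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-040")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-040")

>>> inputs = feature_extractor(image, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # the model predicts one of the 1,000 ImageNet-1k classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
```

Since the model is trained on imagenet-1k, the printed label is one of the 1,000 ImageNet-1k class names.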