lIlBrother committed on
Commit e2a78ee
1 Parent(s): 7d05d99

Update: fully add the README

Files changed (1)
  1. README.md +87 -3
README.md CHANGED
@@ -16,8 +16,8 @@ model-index:
  - name: barTNumText
    results:
    - task:
-     type: translation # Required. Example: automatic-speech-recognition
-     name: translation # Optional. Example: Speech Recognition
+     type: text2text-generation # Required. Example: automatic-speech-recognition
+     name: text2text-generation # Optional. Example: Speech Recognition
    metrics:
    - type: bleu # Required. Example: wer. Use metric id from https://hf.co/metrics
      value: 0.9161441917016176 # Required. Example: 20.90
@@ -39,4 +39,88 @@ model-index:
      value: 0.9500390902948073 # Required. Example: 20.90
      name: eval_rougeLsum # Optional. Example: Test WER
      verified: true # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
- ---
+ ---
+
+ # barTNumText
+
+ ## Table of Contents
+ - [barTNumText](#bartnumtext)
+ - [Table of Contents](#table-of-contents)
+ - [Model Details](#model-details)
+ - [Uses](#uses)
+ - [Evaluation](#evaluation)
+ - [How to Get Started With the Model](#how-to-get-started-with-the-model)
+
+
+ ## Model Details
+ - **Model Description:**
+ I could not find an existing model or algorithm for this task anywhere, so I built one myself. <br />
+ This is a BartForConditionalGeneration model fine-tuned to convert numbers into Korean text (Number To Korean). <br />
+
+ The dataset comes from [Korea aihub](https://aihub.or.kr/aihubdata/data/list.do?currMenu=115&topMenu=100&srchDataRealmCode=REALM002&srchDataTy=DATA004). <br />
+ For private reasons, I cannot release the data used for fine-tuning. <br />
+
+ Note that Korea aihub data is only available to Koreans, which is why this part of the card was originally written in Korean only. <br />
+ To be precise, the model was trained to translate phonetic transcription into orthographic transcription (following the ETRI transcription rules). <br />
+
+ For example, ten million (천만) may be written as "1000만" or as "10000000", so the results can differ depending on the training datasets; a couple of illustrative input/output pairs are sketched at the end of this section. <br />
+ - **Developed by:** Yoo SungHyun (https://github.com/YooSungHyun)
+ - **Language(s):** Korean
+ - **License:** apache-2.0
+ - **Parent Model:** See the [kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) for more information about the pre-trained base model.
+
+
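+ The pairs below are purely illustrative, reconstructed from the examples used elsewhere in this card; they are not taken from the training data:
+ ```python
+ # Illustrative (input, output) pairs for the number-to-Korean-text task.
+ examples = [
+     ("그러게 누가 6시까지 술을 마시래?", "그러게 누가 여섯 시까지 술을 마시래?"),
+     ("밥을 6시에 먹었어", "밥을 여섯 시에 먹었어"),
+ ]
+
+ # The same quantity can appear in different written forms in the source text:
+ # 천만 (ten million) may be written as "1000만" or as "10000000",
+ # which is why results depend on the training data.
+ ```
+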
+ ## Uses
+ This model generates its output tokens in reverse order at inference time, so you have to `flip` the output before calling `tokenizer.decode()`.
+ For example, "밥을 6시에 먹었어" is predicted as "어 먹었 시에 여섯 을 밥"; reverse the sequence with `flip` before `tokenizer.decode`.
+
+ For more details, see [KoGPT_num_converter](https://github.com/ddobokki/KoGPT_num_converter), in particular `bart_inference.py` and `bart_train.py`.
+ ```python
+ import torch
+ from transformers.pipelines import Text2TextGenerationPipeline
+ from transformers.pipelines.text2text_generation import ReturnType
+
+
+ class BartText2TextGenerationPipeline(Text2TextGenerationPipeline):
+     # Overrides postprocess to flip the reversed output ids back into reading
+     # order before decoding them to text.
+     def postprocess(self, model_outputs, return_type=ReturnType.TEXT, clean_up_tokenization_spaces=False):
+         records = []
+         reversed_model_outputs = torch.flip(model_outputs["output_ids"][0], dims=[-1])
+         for output_ids in reversed_model_outputs:
+             if return_type == ReturnType.TENSORS:
+                 record = {f"{self.return_name}_token_ids": output_ids}
+             elif return_type == ReturnType.TEXT:
+                 record = {
+                     f"{self.return_name}_text": self.tokenizer.decode(
+                         output_ids,
+                         skip_special_tokens=True,
+                         clean_up_tokenization_spaces=clean_up_tokenization_spaces,
+                     )
+                 }
+             records.append(record)
+         return records
+ ```
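+
+ If you do not want to subclass the pipeline, the same trick can be applied to the raw `generate()` output directly. This is only a minimal sketch (not code from the original repository), assuming `model` and `tokenizer` are loaded as in the example further below:
+ ```python
+ import torch
+
+ # Tokenize the input and generate the (token-reversed) output ids.
+ inputs = tokenizer(["그러게 누가 6시까지 술을 마시래?"], return_tensors="pt")
+ output_ids = model.generate(**inputs, max_length=128, num_beams=5)
+
+ # The model emits tokens backwards, so flip the time axis before decoding.
+ output_ids = torch.flip(output_ids, dims=[-1])
+ print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
+ ```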
+ ## Evaluation
+ Evaluation uses only `evaluate-metric/bleu` and `evaluate-metric/rouge` from the Hugging Face `evaluate` library.
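+ A rough sketch of how such scores can be computed with `evaluate` is shown below; the prediction and reference strings here are placeholders, not the actual evaluation data.
+ ```python
+ import evaluate
+
+ # Placeholder predictions/references; the real evaluation set is not public.
+ predictions = ["그러게 누가 여섯 시까지 술을 마시래?"]
+ references = ["그러게 누가 여섯 시까지 술을 마시래?"]
+
+ bleu = evaluate.load("bleu")
+ rouge = evaluate.load("rouge")
+
+ # BLEU expects a list of reference texts per prediction.
+ print(bleu.compute(predictions=predictions, references=[[r] for r in references]))
+ # ROUGE reports rouge1 / rouge2 / rougeL / rougeLsum.
+ print(rouge.compute(predictions=predictions, references=references))
+ ```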
+ ## How to Get Started With the Model
+ ```python
+ from transformers.pipelines import Text2TextGenerationPipeline
+ from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+
+ # Hub id of this model (assumed: lIlBrother/barTNumText); a local checkpoint path also works.
+ model_name_or_path = "lIlBrother/barTNumText"
+
+ texts = ["그러게 누가 6시까지 술을 마시래?"]
+ tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
+ model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)
+
+ # BartText2TextGenerationPipeline is implemented above (see 'Uses')
+ seq2seqlm_pipeline = BartText2TextGenerationPipeline(model=model, tokenizer=tokenizer)
+
+ # Example generation settings; adjust them for your own data.
+ kwargs = {
+     "min_length": 0,
+     "max_length": 128,
+     "num_beams": 5,
+     "do_sample": False,
+     "num_beam_groups": 1,
+ }
+ pred = seq2seqlm_pipeline(texts, **kwargs)
+ print(pred)
+ # 그러게 누가 여섯 시까지 술을 마시래?
+ ```