sagard21 committed
Commit: 82e18f8
Parent(s): a0fac8a

Update README.md

Files changed (1)
  1. README.md +3 −25
README.md CHANGED
@@ -34,34 +34,12 @@ This model is an attempt to simplify code understanding by generating line by li
 # Model Usage
 
 ```py
-from transformers import AutoTokenizer, T5ForConditionalGeneration, SummarizationPipeline
+from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
 
-pipeline = SummarizationPipeline(
-    model=T5ForConditionalGeneration.from_pretrained("sagard21/python-code-explainer"),
-    tokenizer=AutoTokenizer.from_pretrained("sagard21/python-code-explainer", skip_special_tokens=True),
-)
+tokenizer = AutoTokenizer.from_pretrained("sagard21/python-code-explainer")
 
-raw_code = """
-def preprocess(text: str) -> str:
-    text = str(text)
-    text = text.replace("\n", " ")
-    tokenized_text = text.split(" ")
-    preprocessed_text = " ".join([token for token in tokenized_text if token])
+model = AutoModelForSeq2SeqLM.from_pretrained("sagard21/python-code-explainer")
 
-    return preprocessed_text
-"""
-pipeline([raw_code])
-
-```
-
-### Expected JSON Output
-
-```
-[
-  {
-    "summary_text": "Create a function preprocess that will take the text as an argument and return the preprocessed text.\n1. In this case, the text will be converted to a string.\n2. At first, we will replace all \"\\n\" with \" \" and then split the text by \" \".\n3. Then we will call the tokenize function on the text and tokenize the text using the split() method.\n4. Next step is to create a list of all the tokens in the string and join them together.\n5. Then the function will return the string preprocessed_text.\n"
-  }
-]
 ```
 
 ## Validation Metrics
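The removed example fed a small `preprocess` helper to the summarization pipeline as its input code. That helper is plain Python and runs on its own; a quick sketch of what it does (collapse newlines and runs of spaces into single spaces):

```python
def preprocess(text: str) -> str:
    # Coerce to str, replace newlines with spaces,
    # then drop empty tokens to collapse repeated spaces.
    text = str(text)
    text = text.replace("\n", " ")
    tokenized_text = text.split(" ")
    preprocessed_text = " ".join([token for token in tokenized_text if token])
    return preprocessed_text

print(preprocess("def f():\n    return 1"))  # → "def f(): return 1"
```

Note the new README code only loads the tokenizer and model; how inference is invoked is left to the reader.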