Commit e217e55 by adymaharana
1 Parent(s): 458cfa9

new model url

Files changed (1):
  1. app.py +2 -2

app.py CHANGED
@@ -68,7 +68,7 @@ def save_story_results(images, video_len=4, n_candidates=1, mask=None):
 def main(args):
     device = 'cuda:0'

-    model_url = 'https://drive.google.com/u/1/uc?id=1SYQu_tKpTd7oODjF9fmohDp3PSqvww2T&export=download'
+    model_url = 'https://drive.google.com/u/1/uc?id=1KAXVtE8lEE2Yc83VY7w6ycOOMkdWbmJo&export=sharing'

     #model_url = 'https://drive.google.com/u/1/uc?id=1lJ6zMZ6qTvFu6H35-VEdFlN13MMslivJ&export=download'
     png_url = 'https://drive.google.com/u/1/uc?id=1C33A1IzSHDPoQ4QBsgFWbF61QWaAxRo_&export=download'
@@ -136,7 +136,7 @@ def main(args):
     StoryDALL-E \[1\] is a model trained for the task of Story Visualization \[2\].
     The model receives a sequence of captions as input and generates a corresponding sequence of images which form a visual story depicting the narrative in the captions.
     We modify this task to enable the model to receive an initial scene as input, which can be used as a cue for the setting of the story and also for generating unseen or low-resource visual elements. We refer to this task as Story Continuation \[1\].
-    StoryDALL-E is based on the [mega-dalle](https://github.com/borisdayma/dalle-mini) model and is adapted from the corresponding [PyTorch codebase](https://github.com/kuprel/min-dalle).
+    StoryDALL-E is based on the [dalle](https://github.com/kakaobrain/minDALL-E) model.
     **This model has been developed for academic purposes only.**

     \[[Paper](http://arxiv.org/abs/2209.06192)\] \[[Code](https://github.com/adymaharana/storydalle)\] \[[Model Card](https://github.com/adymaharana/storydalle/blob/main/MODEL_CARD.MD)\]
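
Both the old and new `model_url` values use Google Drive's `uc` endpoint, where the `id` query parameter is the stable file identifier (the trailing `export=` flag only changes how Drive serves it). A minimal sketch of pulling that id back out of such a URL — `drive_file_id` is an illustrative helper, not something from the repository:

```python
from urllib.parse import urlparse, parse_qs

def drive_file_id(url: str) -> str:
    """Extract the file id from a Google Drive 'uc' share/download URL."""
    # parse_qs returns a dict of lists, e.g. {'id': ['1KAX...'], 'export': ['sharing']}
    return parse_qs(urlparse(url).query)["id"][0]

model_url = 'https://drive.google.com/u/1/uc?id=1KAXVtE8lEE2Yc83VY7w6ycOOMkdWbmJo&export=sharing'
print(drive_file_id(model_url))  # → 1KAXVtE8lEE2Yc83VY7w6ycOOMkdWbmJo
```

Tools such as `gdown` accept this bare id directly, which sidesteps any difference between `export=download` and `export=sharing` links.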