docs(readme): typos
README.md (CHANGED)
@@ -31,10 +31,10 @@ To our knowledge, this is the first publicly available large-scale video-generat
 
 ## 🔥 Latest News
 
-*
-*
-*
-*
+* Oct. 21, 2025: 🎉 We are excited to announce the release of the **MUG-V 10B** [technical report](#). We welcome feedback and discussion.
+* Oct. 21, 2025: 🎉 We've released our Megatron-LM–based [training framework](https://github.com/Shopee-MUG/MUG-V-Megatron-LM-Training), which addresses the key challenges of training billion-parameter video generators.
+* Oct. 21, 2025: 🎉 We've released the **MUG-V video enhancement** [inference code](https://github.com/Shopee-MUG/MUG-V/tree/main/mug_enhancer) and [weights](https://huggingface.co/MUG-V/MUG-V-inference) (based on WAN-2.1 1.3B).
+* Oct. 21, 2025: 🎉 We've released **MUG-V 10B** ([e-commerce edition](https://github.com/Shopee-MUG/MUG-V)) inference code and weights.
 * Apr. 25, 2025: 🎉 We submitted our model to the [VBench-I2V leaderboard](https://huggingface.co/spaces/Vchitect/VBench_Leaderboard); at submission time, MUG-V ranked **#3**.
 