innocent-charles committed on
Commit 547251f
1 Parent(s): 111408f

Update README.md

Files changed (1): README.md (+0 −10)

README.md CHANGED
@@ -32,18 +32,8 @@ library_name: transformers
  # AViLaMa : African Vision-Languages Alignment Pre-Training Model.
  Learning Visual Concepts Directly From African Languages Supervision. [Click to see paper](www.sartify.com)
 
- ## Contents
- 1. [Model Details](#model-details)
- 2. [Uses](#uses)
- 3. [Training Details](#training-details)
- 4. [Evaluation](#evaluation)
- 5. [Acknowledgements](#acknowledgements)
- 6. [Citation](#citation)
- 7. [How To Get Started With the Model](#how-to-get-started-with-the-model)
 
  ## Model Details
-
- ### Model Description
  AViLaMa is a large open-source text-vision alignment pre-training model for African languages. It provides a way to learn visual concepts directly from African-language supervision. It is inspired by OpenAI's CLIP, but supports more modalities (video, audio, etc.) and additional techniques such as language-agnostic encoding and a data filtering network. It covers more than 12 African languages and is trained on the #AViLaDa-2B dataset of filtered image-, video-, and audio-text pairs. We are also working to make it usable directly in vision-vision tasks.
 
  - **Developed by :** Sartify LLC (www.sartify.com)
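The CLIP-style alignment the model card describes can be sketched as contrastive scoring: image and text embeddings are L2-normalized, compared by cosine similarity, and softmaxed over candidate captions. This is a minimal illustration with random vectors standing in for real encoder outputs; the actual AViLaMa encoders, checkpoint names, and temperature value are not shown here.

```python
# Illustrative CLIP-style text-vision alignment scoring (not the real
# AViLaMa API; embeddings are random stand-ins for encoder outputs).
import numpy as np

def normalize(x):
    """L2-normalize embeddings along the last axis."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
image_emb = normalize(rng.normal(size=(2, 512)))  # 2 images
text_emb = normalize(rng.normal(size=(3, 512)))   # 3 candidate captions

# Cosine similarities scaled by a temperature (CLIP learns one; 100 here).
logits = 100.0 * image_emb @ text_emb.T

# Numerically stable softmax over captions: each row becomes a
# probability distribution over the candidate captions for that image.
logits -= logits.max(axis=-1, keepdims=True)
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(probs.shape)
```

At training time the same matrix is used symmetrically (images over captions and captions over images) with a cross-entropy loss on the matching pairs.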
 