Ali El Filali
alielfilali01
AI & ML interests
AI Psychometrician? | NLP (mainly for Arabic) | Interests include Reinforcement Learning and Cognitive Sciences, among others
Recent Activity
updated
a dataset
2 days ago
inceptionai/requests-dataset
Organizations
alielfilali01's activity

reacted to
clem's
post with 🤗
4 days ago

reacted to
BrigitteTousi's
post
4 days ago
Post
3163
LeRobot goes to driving school!
Hugging Face just announced a new collab with Yaak to bring the largest open-source self-driving dataset to LeRobot!
Major kudos to HF's @cadene, as well as @sandhawalia, @Shnissen, and the Yaak team!
Check out the blog post here: https://huggingface.co/blog/lerobot-goes-to-driving-school

reacted to
MohamedRashad's
post with ❤️
24 days ago
Post
1692
A while back I shared the model MohamedRashad/arabic-small-nougat, a fine-tune of facebook/nougat-small for the Arabic language.
Today this humble project has been scaled up with new models, new datasets, a new space, and a new paper.
Check everything through this collection here:
MohamedRashad/arabic-nougat-673a3f540bd92904c9b92a8e

posted
an
update
25 days ago
Post
822
🚨 Arabic LLM Evaluation 🚨
A few models joined the ranking of inceptionai/AraGen-Leaderboard today.
The new Mistral AI model, Saba, is quite impressive: Top 10! Well done @arthurmensch and team.
Sadly, Mistral did not follow its public-weights strategy this time; we hope this changes soon and we get the model under a permissive license.
We added other Mistral models, and apparently we have been sleeping on mistralai/Mistral-Large-Instruct-2411!
Another impressive model that joined the ranking today is ALLaM-AI/ALLaM-7B-Instruct-preview. After a long wait, ALLaM is finally here, and it is IMPRESSIVE given its size!
ALLaM is ranked on OALL/Open-Arabic-LLM-Leaderboard as well.

reacted to
merve's
post with 🧠
25 days ago
Post
6075
Google just released PaliGemma 2 Mix: new versatile instruction vision language models 🔥
> Three new models: 3B, 10B, 28B, with resolutions 224 and 448
> Can do vision language tasks with open-ended prompts, understand documents, and segment or detect anything 🤯
Read more: https://huggingface.co/blog/paligemma2mix
Try the demo: google/paligemma2-10b-mix
All models are here: google/paligemma-2-mix-67ac6a251aaf3ee73679dcc4

reacted to
dreamerdeo's
post with 🤗
25 days ago
Post
2788
Excited to share our technical report on the Southeast Asian multilingual model Sailor2 and its latest updates!
Our 49-page report details Sailor2's development journey, including multilingual data cleaning, small-model data-mixture simulations, multi-stage continual pre-training, multi-stage post-training, and multi-cultural, multilingual evaluations. Sailor2 aims to streamline the multilingual model pre-training process efficiently for the community.
We highlight Sailor2's impressive performance in low-resource language translation scenarios and its cultural-understanding advantages in Southeast Asia, promoting practical applications for regional languages.
Model updates include:
- More precise outputs: reduced redundancy in model outputs through refined post-training data and optimization techniques.
- Handling longer texts: expanded to handle up to 128K context length in Southeast Asian languages through long-text training.
- Faster inference: achieved 2.5x faster inference speed with speculative decoding.
- More model sizes: introduced new sizes of 3B and 14B through model pruning.
All models are Apache-licensed for commercial use; development tools (code, resources) are open-source.
Technical report: Sailor2: Sailing in South-East Asia with Inclusive Multilingual LLMs (2502.12982)
Models: sail/sailor2-language-models-674d7c9e6b4dbbd9a869906b
Demo: sail/Sailor2-20B-Chat
Sailor2 community: https://huggingface.co/sailor2
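The 2.5x speedup from speculative decoding comes from a small draft model proposing several tokens that the large model then verifies in one pass. A minimal greedy sketch of the idea (the two "models" below are hypothetical lookup tables for illustration, not Sailor2):

```python
def greedy_next(model, context):
    """Most likely next token under a toy model (dict: last-token tuple -> token)."""
    return model.get(tuple(context[-1:]), "<eos>")

def speculative_decode(target, draft, prompt, k=3, max_len=8):
    """Draft model proposes k tokens; the target verifies them left to right.
    The agreed prefix is accepted; at the first disagreement the target's own
    token is taken instead, and drafting resumes from there."""
    out = list(prompt)
    while len(out) < max_len and out[-1] != "<eos>":
        proposals, ctx = [], list(out)
        for _ in range(k):                  # cheap draft pass
            tok = greedy_next(draft, ctx)
            proposals.append(tok)
            ctx.append(tok)
        for tok in proposals:               # single verification pass by the target
            t_tok = greedy_next(target, out)
            if t_tok == tok:
                out.append(tok)             # draft agreed with target: accept
            else:
                out.append(t_tok)           # disagreement: keep target's token, redraft
                break
            if len(out) >= max_len or out[-1] == "<eos>":
                break
    return out
```

With greedy decoding the output is identical to running the target model alone; the win is that the target is consulted in chunks rather than strictly token by token.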

reacted to
fantos's
post with 🔥
about 2 months ago
Post
4280
HuggingFace Spaces Ranking Tracker - Your Complete AI Trend Analytics!
Introducing the Spaces Ranking Tracker, a comprehensive analytics dashboard that tracks and analyzes every AI application in the HuggingFace ecosystem.
✨ Key Features:
• Real-time tracking of daily ranking changes over 30 days
• Detailed analysis of the top 100 trending spaces
• User-based integrated score visualization
• One-click access to space details
• Interactive rank change graphs
Dashboard Components:
1. Main Dashboard
- Daily rank trend graphs
- Top 20 creators' combined score chart
- Detailed space information cards
- Real-time trending score updates
2. Space Detailed Analysis
- Creation date, current rank, and trending score
- 30-day ranking history
- Direct space access
- Custom color coding for intuitive rank display
3. Interactive Features
- Custom filtering options
- Sorting by various metrics
- Detailed performance statistics
- Comprehensive trending scores
- Historical data tracking
How to Use:
• Monitor the latest AI community trends
• Track your project's performance
• Discover popular AI demos
• Analyze competing projects
• Follow AI ecosystem dynamics
Stay on top of every movement in the HuggingFace ecosystem with daily ranking updates! Try it now!
Access Dashboard: fantos/Ranking-Tracker
#HuggingFace #AI #DataVisualization #TrendAnalysis #AITrends

reacted to
burtenshaw's
post
about 2 months ago
Post
3345
A manic few days in open-source AI, with game-changing developments all over the place. Here's a round-up of the resources:
- The science team at @huggingface reproduced and open-sourced DeepSeek R1: https://github.com/huggingface/open-r1
- @qwen released a series of models with 1 million token context! https://qwenlm.github.io/blog/qwen2.5-1m/
- SmolVLM got even smaller, with completely new variants at 256M and 500M: https://huggingface.co/blog/smolervlm
There's so much you could do with these developments, especially combining them into agentic applications or fine-tuning them on your use case.

reacted to
AdinaY's
post with 🔥🧠
about 2 months ago
Post
2852
BIG release by DeepSeek AI 🔥🔥🔥
DeepSeek-R1 & DeepSeek-R1-Zero: two 660B reasoning models are here, alongside 6 distilled dense models (based on Llama & Qwen) for the community!
https://huggingface.co/deepseek-ai
deepseek-ai/DeepSeek-R1
✨ MIT License: enabling distillation for custom models
✨ 32B & 70B models match OpenAI o1-mini in multiple capabilities
✨ API live now! Access Chain-of-Thought reasoning with model='deepseek-reasoner'

reacted to
MohamedRashad's
post with ❤️
2 months ago
Post
2078
The winners of the Best Paper Award at NeurIPS 2024 (FoundationVision),
Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale
Prediction (2404.02905), have just released a new paper called Infinity:
Infinity: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis (2412.04431)
And I managed to build a space for it so anyone can try it out: MohamedRashad/Infinity
The idea of a text-to-image model using an autoregressive architecture is quite interesting, in my opinion.

posted
an
update
2 months ago
Post
2058
The 3C3H AraGen Leaderboard today welcomes
deepseek-ai/DeepSeek-V3 and 12 other models (including the late gpt-3.5) to the ranking of the best LLMs in Arabic!
Observations:
- DeepSeek-V3 ranked 3rd and is the only open model among the top 5!
- A 14B open model (Qwen/Qwen2.5-14B-Instruct) outperforms gpt-3.5-turbo-0125 (from last year). This shows how far we have come in advancing and supporting the Arabic presence within the LLM ecosystem!
- Contrary to what is observed on likelihood-accuracy leaderboards (like OALL/Open-Arabic-LLM-Leaderboard), further fine-tuned models like maldv/Qwentile2.5-32B-Instruct actually decreased performance compared to the original model, Qwen/Qwen2.5-32B-Instruct.
It's worth noting that the decrease is statistically insignificant, which implies that, at best, out-of-domain fine-tuning does not really hurt the capabilities the model acquired during pretraining.
Previous work addressed this (fine-tuning vs. pretraining), but more investigation is required (any PhDs here? This could be your research question...)
Check out the latest rankings: inceptionai/AraGen-Leaderboard
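One way to check whether such a score gap is statistically meaningful is a paired bootstrap over per-example scores: resample the eval set with replacement many times and count how often one model comes out ahead. A minimal sketch, with made-up per-example scores rather than real AraGen data:

```python
import random

def paired_bootstrap(scores_a, scores_b, n_resamples=10_000, seed=0):
    """Fraction of bootstrap resamples in which model A beats model B.
    Values near 0.5 mean the observed gap is within sampling noise;
    values near 0 or 1 suggest a significant difference."""
    rng = random.Random(seed)
    n = len(scores_a)
    wins = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]  # resample examples, same idx for both models
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return wins / n_resamples

# Hypothetical per-example 0/1 scores for two models on the same eval set.
base = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
tuned = [1, 0, 0, 1, 0, 1, 1, 1, 1, 0]
p = paired_bootstrap(base, tuned)
```

The paired design (same resampled indices for both models) is what makes the comparison sensitive to per-example differences rather than overall score variance.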

reacted to
prithivMLmods's
post
2 months ago
Post
6007
Reasoning SmolLM2
Fine-tuning SmolLM2 on a lightweight synthetic reasoning dataset for reasoning-specific tasks. Future updates will focus on lightweight, blazing-fast reasoning models. Until then, check out the blog for fine-tuning details.
Blog: https://huggingface.co/blog/prithivMLmods/smollm2-ft
Models:
+ SmolLM2-CoT-360M : prithivMLmods/SmolLM2-CoT-360M
+ Reasoning-SmolLM2-135M : prithivMLmods/Reasoning-SmolLM2-135M
+ SmolLM2-CoT-360M-GGUF : prithivMLmods/SmolLM2-CoT-360M-GGUF
Other Details:
+ Demo : prithivMLmods/SmolLM2-CoT-360M
+ Fine-tune nB : prithivMLmods/SmolLM2-CoT-360M

reacted to
merve's
post with ❤️
2 months ago
Post
4891
Supercharge your LLM apps with smolagents 🔥
However cool your LLM is, without being agentic it can only go so far.
Enter smolagents: a new agent library by Hugging Face that makes the LLM write code, do analysis, and automate boring stuff!
Here's our blog to get you started: https://huggingface.co/blog/smolagents

reacted to
suayptalha's
post with ❤️
3 months ago
Post
2150
Introducing the first HuggingFace integration of minGRU models, from the paper "Were RNNs All We Needed?"
🔥 I have integrated next-generation RNNs, specifically minGRU, which offer faster performance compared to Transformer architectures, into HuggingFace. This allows users to leverage the lighter and more efficient minGRU models with the "transformers" library for both usage and training.
💻 I integrated two main tasks: MinGRUForSequenceClassification and MinGRUForCausalLM.
MinGRUForSequenceClassification:
You can use this class for sequence classification tasks. I also trained a sentiment analysis model with the stanfordnlp/imdb dataset.
MinGRUForCausalLM:
You can use this class for causal language model tasks such as GPT and Llama. I also trained an example model with the roneneldan/TinyStories dataset. You can fine-tune and use it!
Links:
Models: suayptalha/mingru-676fe8d90760d01b7955d7ab
GitHub: https://github.com/suayptalha/minGRU-hf
LinkedIn Post: https://www.linkedin.com/posts/suayp-talha-kocabay_mingru-a-suayptalha-collection-activity-7278755484172439552-wNY1
Credits:
Paper Link: https://arxiv.org/abs/2410.01201
I am thankful to Leo Feng, Frederick Tung, Mohamed Osama Ahmed, Yoshua Bengio, and Hossein Hajimirsadeghi for their paper.
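For intuition, the minGRU recurrence itself is only a few lines: the gate and candidate state depend only on the current input, not on the previous hidden state, which is what makes training parallelizable. A scalar plain-Python sketch of the math from the paper (the weights `w_z`, `w_h` are illustrative toys, not the minGRU-hf implementation):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def min_gru_step(h_prev, x, w_z, w_h):
    """One minGRU step (scalar version of the recurrence in arXiv:2410.01201):
    z_t     = sigmoid(W_z x_t)     -- gate depends only on the input
    h~_t    = W_h x_t              -- candidate state, no h_{t-1} inside
    h_t     = (1 - z_t) * h_{t-1} + z_t * h~_t
    Dropping h_{t-1} from the gate and candidate is what allows a
    parallel prefix scan over the sequence at training time."""
    z = sigmoid(w_z * x)
    h_tilde = w_h * x
    return (1.0 - z) * h_prev + z * h_tilde

def min_gru(xs, w_z=0.5, w_h=1.0, h0=0.0):
    """Run the recurrence sequentially over an input sequence."""
    h = h0
    for x in xs:
        h = min_gru_step(h, x, w_z, w_h)
    return h
```

The real models use vector-valued states and learned weight matrices, but the structure of the update is exactly this gated interpolation.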

posted
an
update
3 months ago
Post
1988
~75% on the challenging GPQA with only 40M parameters 🔥🥳
GREAT ACHIEVEMENT! Or is it?
This new work, "Data Laundering: Artificially Boosting Benchmark Results through Knowledge Distillation", takes the mystery out of many models whose results I personally suspected, especially on leaderboards other than the English one, like the Open Arabic LLM Leaderboard OALL/Open-Arabic-LLM-Leaderboard.
The authors first trained a model on the GPQA data, which, unsurprisingly, led to the model achieving 100% performance.
Afterward, they trained what they referred to as a 'legitimate' model on legitimate data (MedMCQA). However, they introduced a distillation loss from the earlier, 'cheated' model.
What they discovered was fascinating: the knowledge of GPQA leaked through this distillation loss, even though the legitimate model was never explicitly trained on GPQA during this stage.
This raises important questions about the careful use of distillation in model training, especially when the training data is opaque. As they demonstrated, it's apparently possible to (intentionally or unintentionally) leak test data through this method.
Find out more: Data Laundering: Artificially Boosting Benchmark Results through Knowledge Distillation (2412.15255)
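The leak channel is the standard distillation objective itself: the student is pulled toward the teacher's output distribution on clean inputs, so whatever the teacher memorized shapes the student's gradients even though the benchmark items never appear in the student's training data. A toy sketch of that loss in plain Python (toy logits, not the paper's setup):

```python
import math

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    the usual Hinton-style knowledge-distillation objective.
    Minimizing it pulls the student toward the teacher's answer
    distribution -- including answers the teacher memorized from a
    benchmark the student itself never saw."""
    p = softmax([v / temperature for v in teacher_logits])
    q = softmax([v / temperature for v in student_logits])
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

The loss is zero only when the student matches the teacher's distribution, which is precisely why a contaminated teacher contaminates an otherwise clean training run.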

replied to
their
post
3 months ago
You are a HERO!