Klee Young

k-young

AI & ML interests

None yet

Recent Activity

liked a Space 23 days ago
ginigen/FLUX-Text-Tree-Image
liked a Space 23 days ago
enzostvs/deepsite
liked a model about 1 month ago
Jonjew/PatrickNagelStyle

Organizations

YoungDM

k-young's activity

reacted to openfree's post with 🔥 3 months ago
🌟 MoneyRadar - AI-Powered Global News Analysis System

💻 Live Demo: openfree/MoneyRadar

🎯 Core Features
1. 🤖 24/7 Automated News Scanning

Auto-collection of the Top 100 trending news items
Real-time monitoring across 60 countries
Smart filtering of investment-critical news (a minimal scanning sketch follows below)
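
A minimal sketch of this kind of periodic scan, assuming a public RSS search feed and an illustrative keyword list; neither the feed nor the filter reflects MoneyRadar's actual sources or logic:

```python
import time
import feedparser  # third-party: pip install feedparser

# Hypothetical keyword filter; MoneyRadar's real filtering criteria are not public.
KEYWORDS = {"earnings", "acquisition", "regulation", "forecast"}

def scan_once(query: str) -> None:
    # Google News exposes a public RSS search endpoint; used here purely for illustration.
    feed = feedparser.parse(f"https://news.google.com/rss/search?q={query}")
    for entry in feed.entries:
        if any(k in entry.title.lower() for k in KEYWORDS):
            print(entry.title, "->", entry.link)

while True:
    scan_once("NVIDIA")
    time.sleep(3600)  # re-scan hourly; the post describes 24/7 operation
```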

2. 🔍 Advanced Custom Search

Unlimited keyword search capability
Country/language-specific search options
Real-time trend-based related keywords

3. 🎨 Smart Analysis & Visualization

AI-powered sentiment analysis
Automated content summarization
Insights to support investment decisions (sketched below)
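
A minimal sketch of this analysis step, assuming the Hugging Face transformers pipelines with their default models; the post does not say which models MoneyRadar actually uses:

```python
from transformers import pipeline

# Default pipeline models are used here for illustration only.
sentiment = pipeline("sentiment-analysis")
summarizer = pipeline("summarization")

article = (
    "NVIDIA reported record quarterly revenue, beating analyst forecasts, "
    "as demand for data-center GPUs continued to accelerate."
)

print(sentiment(article))   # e.g. [{'label': 'POSITIVE', 'score': ...}]
print(summarizer(article, max_length=30, min_length=5))
```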

⚡ Automated Information Collection
Key Companies (NVIDIA, Apple, Tesla, etc.)

Earnings/Forecasts
Product/Technology announcements
Market share changes
M&A and major news

Financial Markets & Digital Assets

Macroeconomic indicators
Regulatory changes
Market sentiment analysis
Major exchange updates

📊 Business Applications

Real-time market trend tracking
Competitor movement monitoring
Early investment opportunity detection
Early risk warning system

🌟 Key Differentiators

Full Automation

Zero manual intervention
Real-time data updates
Automated result storage/management

User-Centric Design

Intuitive interface
Customizable alerts
Mobile optimization

Advanced Analytics

News cross-checking
Historical tracking
Trend prediction support

Join Community 💬
"With MoneyRadar, never miss a beat in global market movements!"
reacted to m-ric's post with ❤️ 5 months ago
Made a new app to visualize the LLM race ⇒ No European company in the top 10 🇪🇺❌

See the app here 👉 m-ric/llm-race-to-the-top

I've adapted an app by @andrewrreed that tracks the progress of LLMs on the Chatbot Arena leaderboard (andrewrreed/closed-vs-open-arena-elo) to compare companies from different countries.

The outcome is quite sad, as a Frenchman and European.

The top 10 is exclusively US 🇺🇸 and Chinese 🇨🇳 companies (after great recent Chinese LLM releases, like the Qwen2.5 series), with the notable exception of Mistral AI 🇫🇷.

American companies are making fast progress, Chinese ones even faster. Europe is at risk of being left behind. And the EU AI Act hasn't even come into force yet to slow down the EU market. We need to wake up 😬

โš ๏ธ Caution: This Chatbot Arena ELO ranking is not the most accurate, especially at high scores like this, because LLM makers can game it to some extent.
upvoted an article 7 months ago

The 5 Most Under-Rated Tools on Hugging Face

reacted to m-ric's post with 👀 7 months ago
🧠 A Stanford paper might be the key to OpenAI o1's performance: what's so effective about Chain of Thought? ⇒ It unlocks radically different sequential tasks!

💭 Reminder: a Chain of Thought (CoT) means that you instruct the model to "think step by step". Often it's literally just adding "let's think step by step" to the prompt.
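
A minimal illustration of zero-shot CoT prompting; the question is made up, and the resulting string would be sent to whatever LLM endpoint you use:

```python
# Zero-shot CoT: the only change is the appended instruction.
question = (
    "A juggler has 16 balls. Half are golf balls, and half of the golf balls "
    "are blue. How many blue golf balls are there?"
)
prompt = f"{question}\nLet's think step by step."
print(prompt)
```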

🤔 This method has been shown to be unreasonably effective at increasing performance on benchmarks. However, why it works so well remains unclear.

Here's the scoop: Transformers are amazing at parallel processing, but they've always struggled with tasks that require sequential reasoning.

⛔️ For instance, if you ask them for the result of 3^2^2^2^…, with 20 iterations, they'll nearly always fail.
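
To see why this task is serial, here is the iterated-squaring computation in plain Python; each step consumes the previous result, so nothing can be done in parallel:

```python
# x_0 = 3, x_{k+1} = x_k ** 2, so after 20 steps x = 3 ** (2 ** 20).
x = 3
for _ in range(20):
    x = x * x
print(x.bit_length())  # ~1.66 million bits; each squaring needs the one before it
```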

💡 Indeed, the researchers prove mathematically, by modeling transformer networks as logical circuits, that they effectively cannot solve sequential tasks requiring more than a certain depth of serial steps.

But CoT enables sequential reasoning:

- 🧱 Each step in the CoT corresponds to simulating one operation in a complex circuit.
- 🔄 This allows the transformer to "reset" the depth of intermediate outputs, overcoming previous limitations.
- 🚀 Thus, with CoT, constant-depth transformers can now solve ANY problem computable by polynomial-size circuits! (That's a huge class of problems in computer science.)
- 🔑 Transformers can now handle tricky tasks like iterated squares (computing 3^2^2^2^2), composed permutations, and evaluating circuits - stuff that requires serial computation (see the sketch below).
- 📊 The improvement is especially dramatic for transformers with limited depth. Empirical tests on four arithmetic problems showed massive accuracy gains with CoT on inherently serial tasks.
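
A small example of another inherently serial task from that list: composing a chain of permutations, where step k needs the composition through step k-1 (the sizes here are arbitrary):

```python
import random

n, steps = 5, 20
perms = [random.sample(range(n), n) for _ in range(steps)]

comp = list(range(n))                      # start from the identity permutation
for p in perms:                            # one sequential "step" per permutation
    comp = [p[comp[i]] for i in range(n)]  # apply p after the running composition
print(comp)
```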

Main takeaway: Chain-of-thought isn't just a neat trick - it fundamentally expands what transformer models can do!

Read the paper 👉 Chain of Thought Empowers Transformers to Solve Inherently Serial Problems (2402.12875)