Dataset schema (one row per tweet; each label column is a one-hot 0/1 flag):

  Unnamed: 0              int64    row index (0 to 10.9k)
  tweet                   string   tweet text (length 4 to 726)
  AI News                 int64    0 or 1
  AI Tools                int64    0 or 1
  AI Research             int64    0 or 1
  AI Models               int64    0 or 1
  AI Usecases             int64    0 or 1
  AI Open Source          int64    0 or 1
  Podcasts/Talks/Events   int64    0 or 1
  Opinions                int64    0 or 1
  Non AI                  int64    0 or 1
0
To be fair, Perplexity did a great job improving the underlying technology. I remember in the old days, honestly speaking it is just a wrapper over Google/Bing search api followed by prompt-engineered summarizer. However, over the last few months I noticed perplexity starting to… perplexity really does not have much technology depth and their business model really not good enough .
1
0
0
0
0
0
0
1
0
1
With this PR by @JustineTunney LLaMA Now Goes Faster on CPUs. Q4_0 and Q8_0 weights should go ~40% faster on CPU. The biggest benefits are with data types like f16 / f32, which process prompts 2x faster thus making them faster than quantized data types for prompt evals. On…
1
1
0
0
0
0
0
0
0
2
Authors demonstrate the contribution of FL cells progressively during the course of disease, due to a cont dynamic of clonal selection. This occurs to the detriment of the reactive microenvironment & is accompanied by intra-clonal diversity reductionhttps://bit.ly/3UUXrCn?twclid=2-23sl7c82545sm42cmwusa3abw
1
0
1
0
0
0
0
0
0
3
Brilliant Paper: "ReFT: Representation Finetuning for Language Models" 10x-50x more parameter-efficient than prior state-of-the-art PEFT methods. A hallmark of current state-of-the-art PEFTs is that they modify weights rather than representations. However, much prior…
1
0
1
1
0
0
0
0
0
4
Simple json mode in claude After you run the `client.messages.create` Claude follows instructions and outputs a nice dictionary, which we can extract with code. So then we use the `extract_json(response)` method to convert the text dictionary into an actual python dictionary…
1
0
0
0
0
0
1
0
0
5
Awesome performance from today's Mistral's release of Mixtral 8x22B Instruct. math performance, with a score of 90.8% on GSM8K maj@8 and a Math maj@4 score of 44.6%. The most efficient performance/cost ratio on MMLU. Mixtral 8x22B Instruct is out. It significantly outperforms existing open models, and only uses 39B active parameters (making it significantly faster than 70B models during inference). 1/n
1
0
0
1
0
0
0
1
0
6
Mixtral 8x22B has native multilingual capabilities. Strongly outperforms LLaMA 2 70B on HellaSwag, Arc Challenge and MMLU benchmarks in French, German, Spanish and Italian.
1
1
0
1
1
0
0
0
0
7
The new snowflake-arctic-embed family of text embedding models from @SnowflakeDB are super efficient options. ( Just 23M, 33M, 110M, 137M & 335M param ) Apache 2.0 license: Full commercial use allowed 4 models with 512 sequence length and 1 model with 8192 sequence…
1
1
0
1
0
1
0
0
0
8
Mixtral-8x22B and first instruct fine-tune, free for commercial use, native function calling, context window 64K, and open tokenizers for tools parsing and structured text. Alongside our Mixtral 8x22B release, we are releasing our tokenizers, which go beyond the usual text <-> tokens, adding parsing of tools and structured conversation. Repo: https://github.com/mistralai/mistral-common… Guide: https://docs.mistral.ai/guides/tokenization/…
1
1
0
0
0
1
0
0
0
9
New Robot from Boston Dynamics “That’s a huge range of motion. That really packs the power of an elite athlete into this tiny package, and we’ve used that package all over the robot.”
1
0
0
0
0
0
0
0
0
10
Recently Microsoft announced "ZeroQuant(4+2): Redefining LLMs Quantization with a New FP6-Centric Strategy for Diverse Generative Tasks" INT4 quantization can significantly underperform and simply shifting to higher precision formats like FP6 has been particularly…
1
0
1
1
0
0
0
0
0
11
"How faithful are RAG models" This paper aim to quantify the tension between LLMs’ internal knowledge and the retrieved information presented in RAG settings. To tease apart these two competing forces, we query LLMs to answer questions and measure the token probabilities while…
1
0
1
1
0
0
0
0
0
12
Toxicity Test Results for WizardLM-2-8x22B Vijil red-team tests for toxicity show that WizardLM-8x22B has a score of 98.33 compared to the base Mixtral-8x22B score of 89.46 and Mixtral 8x7B-Instruct score of 92.93 (higher is better). https://octo.ai/blog/toxicity-test-results-for-wizardlm-2-8x22b/…
1
0
0
1
0
0
0
0
0
13
distilabel 1.0.0 released, a framework for building pipelines for creating synthetic datasets The updated version, allows to build more complex data processing pipelines with LLMs https://github.com/argilla-io/distilabel…
1
1
0
0
0
1
0
0
0
14
We're a bronze sponsor at @ieeeICASSP this week, which is taking place in Seoul, South Korea. Learn more about our presence at the International Conference on Acoustics, Speech, and Signal Processing below. #ICASSP2024
0
0
0
0
0
0
1
0
1
15
There is no need to wing it in this private self-powered aircraft by @TechInsider #AI #ArtificialIntelligence #Tech #Aviation #Innovation #TechForGood cc: @terenceleungsf @jamesmarland @wil_bielert
1
0
0
0
0
0
0
0
0
16
We've upgraded from precise bytes to fuzzy tokens. The history of computing is repeating in an echo, except replace computers that do precise arithmetic on bytes with computers that do statistical arithmetic on tokens.
1
0
0
0
0
0
0
0
0
17
Thank you. Long time waiting for this 2-way tight control. "For example, initiating a Run with `max_prompt_tokens` set to 500 and `max_completion_tokens `set to 1000 means the first completion will truncate the thread to 500 tokens and cap the output at 1000 tokens. If only… New token controls allow you to set maximum input and output tokens per run to manage costs. You can also choose how many recent messages to use for context truncation.
1
0
0
0
1
0
0
0
0
18
“The only thing that is constant for us is seeking new ideas of doing things better” - Martin Ochieng, Group CEO of Sasini PLC. See how a partnership with Maersk helps East Africa’s largest agriculture company discover new paths to growth. ©2024 A.P. Moller - Maersk
0
0
0
0
0
0
0
0
1
19
The new `file_search` is faster, supports parallel queries through multi-threaded searches, and features enhanced reranking and query rewriting. Alongside file_search, `vector_store` objects is introduced in the API, which can be used across assistants and threads. Once a file… Introducing a series of updates to the Assistants API With the new file search tool, you can quickly integrate knowledge retrieval, now allowing up to 10,000 files per assistant. It works with our new vector store objects for automated file parsing, chunking, and embedding.
1
1
0
0
1
0
0
0
0
20
DBRX is a fine-grained MoE, meaning it uses a larger number of smaller experts. DBRX has 16 experts and chooses 4, while Mixtral and Grok-1 have 8 experts and choose 2. This provides 65x more possible combinations of experts.
1
0
0
0
0
0
0
0
0
21
Slides are done! See you tomorrow at 4:30pm PST for the whole story. From alpaca, openassistant, qlora, zephyr, tulu, to today. Slides: https://docs.google.com/presentation/d/1quMyI4BAx4rvcDfk8jjv063bmHg4RxZd9mhQloXpMn0/edit?usp=sharing… Course link (with zoom): https://web.stanford.edu/class/cs25/ Watch my @stanford CS 25 lecture next week, "aligning open language models," it'll be good, v excited
1
1
0
0
0
0
0
0
0
22
by far and away the most ridiculous slide deck I've ever made lol
0
0
0
0
0
0
0
1
0
23
We're thankful to @databricks for the great training experience we had with them for OLMo 1.7! Cheers to supporters of open science
0
0
0
0
0
1
0
1
0
24
Let's implement & train this neural network step-by-step, from scratch using PyTorch! 1/n
1
0
0
1
1
1
0
0
0
25
This video is generated by AI Can't believe right? This is the state of AI video generation today. Microsoft Research just unveiled VASA-1, an AI model that transforms a single portrait photo and speech audio into hyper-realistic talking face videos. This isn't just about…
1
0
0
1
1
0
0
0
0
26
Congratulations to Cristiana Lara from Amazon for being awarded the inaugural INFORMS Early Career Practitioner Award! #2024analytics #amazon #informs #datascience #operationsresearch #managementscience
0
0
0
0
0
0
0
0
1
27
We are beyond excited to be hosting a meetup on May 1st in San Francisco: DSPy End-to-End! Super grateful to our collaborators @arizeai and @cohere for co-hosting this with us, and beyond excited to be featuring a talk from @lateinteraction ! See you in San Francisco, it… One prompt does not fit all language models Luckily for you, DSPy automates the task of prompt engineering! Here is a thread with a few things to know about the collection of compilers in DSPy. It is also outlined in a new blog post from
1
1
0
0
0
0
1
0
0
28
Anyone in the Bay Area who is building AI apps should make time to attend this. Heck, I'm tempted to travel myself. We are beyond excited to be hosting a meetup on May 1st in San Francisco: DSPy End-to-End! Super grateful to our collaborators @arizeai and @cohere for co-hosting this with us, and beyond excited to be featuring a talk from
1
0
0
0
0
0
1
0
0
29
=( BREAKING -- Google fires 28 workers in indiscriminate act of mass retaliation https://medium.com/@notechforapartheid/statement-from-google-workers-with-the-no-tech-for-apartheid-campaign-on-googles-indiscriminate-28ba4c9b7ce8…
0
1
0
0
0
0
0
0
1
30
“We achieve state of the art on standard benchmarks”
1
0
0
1
0
0
0
0
0
31
One prompt does not fit all language models Luckily for you, DSPy automates the task of prompt engineering! Here is a thread with a few things to know about the collection of compilers in DSPy. It is also outlined in a new blog post from @CShorten30 and I, “Your Language…
1
1
0
0
0
0
0
0
0
32
There are so many new open-source AI models that there is not enough space on this graph anymore!! And the best part is that Llama-3 isn't out yet!
1
0
0
1
0
1
0
1
0
33
14 Technology trends highlighted by the Mckinsey Technology council by @antgrasso #ArtificialIntelligence #AI #CleanEnergy #Sustainability cc: @pbalakrishnarao @bigdata @rtehrani
1
0
0
0
0
0
0
0
0
34
And who's fault is that?
0
0
0
0
0
0
0
0
0
35
A Semiautonomous Deep Learning System to Reduce False-Positive Findings in Screening Mammography https://doi.org/10.1148/ryai.230033… @whiterabbitai #DeepLearning #mammo #ML
1
0
1
0
0
0
0
0
0
36
We're very happy to announce $HUG: the token bridging the gap between Al and blockchain technology. Hugging Face users are eligible to claim part of $HUG's initial supply. Holding $HUG will grant access to all of our future beta programs.
1
1
0
0
0
0
0
0
0
37
Comments turned off due to malicious links. Good luck all!
0
0
0
0
0
0
0
0
0
38
We're spotlighting the voices of #WomenInTech who are shaping the future of tech on our new geospatial storytelling site powered by @GMapsPlatform ! → http://goo.gle/3v47niw Meet @Bukecious , #WTMAmbassador from Berlin, sharing her journey towards gender parity in tech.
0
0
0
0
0
0
0
0
0
39
Four indispensable components to transform your enterprise into digitally adept organization by @antgrasso #DigitalTransformation #BigData #AI #DataScience #ArtificialIntelligence cc: @pbalakrishnarao @yvesmulkers @pascal_bornet
1
0
0
0
0
0
0
0
0
40
Let me show something that is ACTUALLY DIFFERENT. @perplexity_ai is NOT ABLE TO deal with new arxiv papers while our chrome extension, http://elmo.chat, does an excellent job. See this thread for details. Proof in this thread. You are welcome to check it out. Dude, this… Honored and proud of our designers!
1
1
0
0
0
0
0
1
0
41
We strongly believe that personal AI assistants like http://elmo.chat should be offered free from the influence of advertising-driven models. This being said, we don't intend to disrupt anyone, we just want your life to be easier.
0
0
0
0
0
0
0
1
0
42
Taking the screenshots also made me realize I have to upgrade Chrome yet again.
0
1
0
0
0
0
0
0
0
43
MLX is one of the best things to come out in the very exciting AI tooling space in the last 12 months. It's so good. One of my favorite things about MLX is it helps put ML research back in the hands of a single bold hobbyist. Don’t need a supercomputer to invent - just a nice laptop, a vision, and some persistence, (and maybe pip install mlx )
1
1
0
0
0
1
0
1
0
44
#Cohere For #AI Launches #Aya, an #LLM Covering More Than 100 Languages. CerboAI's #LSN technology enables project parties to establish their own LLM. Join us ⁦ @CerboAI Website: https://cerboai.com Discord: https://discord.gg/rJPbmnzzjz
1
1
0
0
0
0
0
0
0
45
What is the connection between CerboAI to #LSN? 2/ #CerboAI layer2 is a network system designed for LSN self-reinforcement and openness. It evaluates the data quality provided by all subnetworks connected to LSN and awards corresponding rewards.
1
1
0
0
0
0
0
0
0
46
Everything since 2022 is contaminated with AI. The Internet Archive : internet data :: the German High Seas Fleet at Scapa Flow : steel Over 13% of all images on Adobe Stock are AI generated. Most of the generated content comes from Dalle and Midjourney. Media tagged "fantasy" is 43% AI generated. Other tags are even higher This is a large portion of the training data that fuels Adobe's AI, firefly
1
0
0
0
0
0
0
1
0
47
It’s here: @Montreal_AI [ AGI Club: The Conclave of Visionaries ] "Through AGI, we inherit the cosmos; its stewardship, our ultimate testament to the zenith of intellect." - AGI King Mint in
1
0
0
0
0
0
1
0
0
48
China: Fabs America: Chips America/China: Data France/America: AI companions?
0
0
0
0
0
0
0
0
0
49
Exactly. A cell type in the fly visual system is analogous to a feature map or channel in a convolutional net, because every cell of one type performs the same computation at a different location. H/T @srinituraga https://biorxiv.org/content/10.1101/2023.03.11.532232v1…
0
0
1
0
0
0
0
0
0
50
New York Times @nytimes : Another Voice: Forever chemicals need to be cut off at the source - Buffalo News. #AI #MachineLearning #aistrategy
0
0
0
0
0
0
0
0
1
51
Some observations: - It appears that many tasks face diminishing return of perf around 25 shots. - LoRA finetuning may also be appealing in terms of perf-cost trade-off as the number of shots increases. I'd be curious how LoRA variants fare when you have few shots. Google presents Many-Shot In-Context Learning - Proposes many-shot ICL, i.e., adding up to thousands of examples in context with Gemini 1.5, which boosts the perf significantly - Using synthetic CoT is very effect in this setting.
1
0
0
0
0
0
0
0
0
52
Outlines is an amazing lib and more popular than @remilouf ’s modesty will admit. https://github.com/outlines-dev/outlines… It’s good to see people are finally starting to use what we were working on a year ago
0
1
0
0
0
1
0
1
0
53
50 Days of Data Analysis with Python: The Ultimate Challenge Book for Beginners https://gumroad.com/a/248064467/ixill…
0
0
0
0
0
0
0
1
0
54
Presenting: the world's fastest AI voice chat - 500ms latency, running locally, 2x faster than anyone else. How is this possible?
1
1
0
0
0
0
0
0
0
55
Bad day for Entrepreneurship in Canada . Capital gains tax rate is increasing from a 50% inclusion to 66%. This increases the net capital gains tax rate from 27% to 36%... Compared to the US which has a 20% capital gains tax rate (+ major incentives like QSBS) In my…
0
0
0
0
0
0
0
1
0
56
We found fine-tuning on synthetic data to be very effective in our prior work (often outperformed human data): https://arxiv.org/abs/2312.06585 So, @agarwl_ thought, why don't we prompt the model with synthetic data as well? That works too! Google presents Many-Shot In-Context Learning - Proposes many-shot ICL, i.e., adding up to thousands of examples in context with Gemini 1.5, which boosts the perf significantly - Using synthetic CoT is very effect in this setting.
1
0
1
1
0
0
0
0
0
57
Last, but not least!! We are hosting an event with @arizeai , @cohere , and @lateinteraction on May 1st in San Francisco! We hope to see you there, sign up below! https://lu.ma/dspy (10 / 10)
1
0
0
0
0
0
1
0
0
58
From building smart chatbots to making robots work alongside us, these roles are shaping the future of AI. Learn to Build and Deploy Custom LLM Applications from our LLM Bootcamp 2024: #AICareers #GetHired #FutureIsNow
1
0
0
0
1
0
0
0
0
59
AWS presents Fewer Truncations Improve Language Modeling Their packing algo achieves superior performance (e.g., relatively +4.7% on reading comprehension), and reduces closed domain hallucination effectively by up to 58.3% https://arxiv.org/abs/2404.10830
1
0
1
1
0
0
0
0
0
60
Just how big is this new generative #AI? Think internet-level disruption by @DavidGewirtz @ZDNET Learn more: https://buff.ly/3IXqK0W #MachineLearning #ML #BigData #ArtificialIntelligence #Cloud #MI #Digital #Innovation cc: @karpathy @ravikikan @patrickgunz_ch
1
0
0
0
0
0
0
0
0
61
Can Language Models Solve Olympiad Programming? - Uses self-reflection and retrieval over episodic knowledge to boost the perf of GPT-4 on USACO from 8.7% pass@1 to 20.2% - Giving a small number of targeted hints solves most of the questions repo: https://github.com/princeton-nlp/USACO… abs:…
1
0
0
0
1
1
0
0
0
62
The new OSS king in town Mixtral 8x22b instruct chat model now available on http://labs.perplexity.ai!
1
1
0
0
0
1
0
0
0
63
And via pplx-api! Online versions coming shortly!
1
1
0
0
0
0
0
0
0
64
How Faithful are RAG Models? This new paper aims to quantify the tug-of-war between RAG and LLMs' internal prior. It focuses on GPT-4 and other LLMs on question answering for the analysis. It finds that providing correct retrieved information fixes most of the model…
1
0
1
1
1
0
0
1
0
65
Google presents Many-Shot In-Context Learning - Proposes many-shot ICL, i.e., adding up to thousands of examples in context with Gemini 1.5, which boosts the perf significantly - Using synthetic CoT is very effect in this setting. https://arxiv.org/abs/2404.11018
1
1
1
1
1
0
0
0
0
66
May 1st DSPy with @weaviate_io , @ArizePhoenix , @cohere We are beyond excited to be hosting a meetup on May 1st in San Francisco: DSPy End-to-End! Super grateful to our collaborators @arizeai and @cohere for co-hosting this with us, and beyond excited to be featuring a talk from
1
1
0
0
0
0
1
0
0
67
Everyone is adding models to the MMLU vs activated params plot, so here is a super quick one with more models. Everyone seems to forget about those not trained in the US/Europe: 01-ai Yi, InternLM, Qwen, and DeepSeek. (btw just use https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard… to compare MMLU)
1
0
0
1
0
0
0
0
0
68
https://x.com/justinlin610/status/1780669775393517943?s=46… hey omar u gotta take this for qwen1.5-72b c'mon! u'd better take this!
1
0
0
0
0
0
0
0
0
69
Oh my god. GPT-4 uses the word “delve” so much because many of the RLHF’s (reinforcement learning human feedback) workers for GPT-4 were Nigerians who use the word “delve” a lot more relative to other countries. So GPT-4 writes like an educated anglophone African. MYSTERY SOLVED! Why does ChatGPT use the word "delve" so much? We've seen a 10x increase in the proportion of medical studies using the word "delve" from 2022 to 2024. But why?
1
0
0
0
0
0
0
1
0
70
After receiving community feedback, we added @GoogleDeepMind Gemini 1.5 Pro's results. Gemini 1.5 Pro's vision ability was significantly improved compared to 1.0 Pro and matched GPT-4's performance on our VisualWebBench! Its action prediction (e.g., predicting what would… Introducing VisualWebBench: A Comprehensive Benchmark for Multimodal Web Page Understanding and Grounding. https://visualwebbench.github.io What's this all about? Why this benchmark? > Back in Nov 2023, when we released
1
1
0
1
0
0
0
0
0
71
Update: We've added mixtral-8x22b-instruct to Perplexity Labs and our API! Mixtral-8X22B is now available on Perplexity Labs! Give it a spin on http://labs.pplx.ai.
0
1
0
0
0
0
0
0
0
72
I'm pleased to announce Coxcomb, a creative writing tuned LLM for short story generation: https://huggingface.co/N8Programs/Coxcomb… https://huggingface.co/N8Programs/Coxcomb-GGUF… I trained senseable/WestLake (amazing emotional reasoning) on N8Programs/CreativeGPT (synthetic GPT-4 generated short stories) using MLX w/ a…
1
1
0
0
0
0
0
0
0
73
InstructHumans can edit existing 3D human textures using text prompts. Maintains avatar consistency pretty well and enables easy animation. Links
1
1
0
0
0
0
0
0
0
74
If you like Tweets like this, you might enjoy my weekly newsletter, #aiartweekly. A free, once–weekly e-mail round-up of the latest AI art news, interviews with artists and useful tools & resources. Join 3500+ subscribers here: https://aiartweekly.com
1
0
0
0
0
0
1
1
0
75
Commentary on “teacher-student” semi-supervised learning to improve generalization of intracranial hemorrhage identification & segmentation https://doi.org/10.1148/ryai.240126… @MSKCancerCenter #NeuroRad #DeepLearning #MachineLearning
1
0
1
0
0
0
0
0
0
76
This "scooter" allows you to ascend a 275ft (84m) tall tree by @gigadgets_ #Innovation #Technology #Tech #Automotive #Transport cc: @pascal_bornet @pbalakrishnarao @ravikikan
0
0
0
0
0
0
0
0
1
77
Miss yesterday's demonstration of our new #ML model for real-time multi-mic audio separation? The team will be back with a hands-on demo at the #ICASSP2024 Google booth at 10:20 AM. Stop in and check it out!
1
0
0
1
1
0
0
0
0
78
> haiku is cracked > gpt-3.5-turbo is a joke > sonnet is the perfect 'go-to' model > mixtral 8*22b will probably offer outlier $/elo > opus is expensive > openai convincingly leads in no category imo
0
0
0
0
0
0
0
1
0
79
The surging field of NLPL — or natural language programming languages, but turns out that: 1) control flow should just be as symbolic & modular as before 2) the system needs a metric & a little bit of training data or context, to learn the boundaries of the fuzzy NL data types.
1
1
0
0
0
0
0
0
0
80
Fucking barbaric animals Coming to a city near you very very soon Coming soon to Europe. Keep importing more of them and see what happens.
0
0
0
0
0
0
0
0
1
81
I can hear the ticking from up here "New normal" classroom activities in the UK.
0
0
0
0
0
0
0
0
1
82
So NIST appoints straight up TESCREAL doomer Paul Christiano as "AI safety head" of the "US AI Safety Institute"???? Wonderful news! Endnote 102 from our paper: https://firstmonday.org/ojs/index.php/fm/article/view/13636/11599…
1
0
0
0
0
0
0
1
0
83
We introduce Social Science Policy Optimization, a new RLHF algorithm. Using 10 preferences, we set a new SOTA that cannot be reproduced
1
0
1
0
0
0
0
0
0
84
I haven’t been able to get this out of my head since someone called an algo Kahneman Tversky Optimization
1
0
0
0
0
0
0
0
0
85
Happy to celebrate May 1st, the International LLM Workers’ Day, by attending the DSPy End-to-end meetup with @CShorten30 and @lateinteraction !
0
0
0
0
0
0
1
0
0
86
The tech for AGI is already here; it's just a matter of details now.
1
0
0
0
0
0
0
1
0
87
How can you move an image by a tiny (sub-pixel) amount? It's easy if you know about optical flow! To move an image by a sub-pixel amount we can use the classic brightness constancy idea from optical flow. Say we start with an image f(x,y) and want to move by an amount (δx,δy)… The basics of optical flow computation are to invert the ill-posed brightness constancy constraint. Lucas/Kanade and Horn/Schunck methods are the two most popular methods. https://en.wikipedia.org/wiki/Optical_flow…
1
0
1
0
0
0
0
0
0
88
Boom! Run an open source 64K tokens context length on your laptop. No internet and YOU OWN IT. Meet CodeQwen-1.5-7B-Chat 7 billion parameters coding chat model (~5GB RAM needed) Model:
0
0
0
0
0
1
0
0
0
89
This device helps you find water leaks easily by @gigadgets_ #EmergingTech #Tech #Technology #Innovation cc: @terenceleungsf @ravikikan @patrickgunz_ch
0
0
0
0
0
0
0
0
0
90
Good news for the future of humanity!
0
0
0
0
1
0
0
0
1
91
omg i think u got a extraordinary vision! we're following mod. this is sth that rocks The next generation of models seem to mostly target infinite context and adaptive compute per token. Basically, these two papers: Google: Mixture of Depths Google: Infini-Attention
1
0
1
1
0
0
0
1
0
92
AI performance measurement is ultimately vibes. Lots of people saying Claude 3 got worse. Anthropic says literally nothing major changed. It could be user misperception. It could be that minor change. Or cosmic rays. It could be that Claude subtly reacts to the Ides of April. Hey Matt, appreciate you bringing this to our attention. We haven't modified any of the Claude 3 models since we launched them. On
1
0
0
0
0
0
0
1
0
93
No wonder IT departments are often baffled by working with LLMs (and teachers are often quite good at prompting them)
0
0
0
0
0
0
0
1
0
94
one day in sf is equivalent to a week in a tier 2 city
0
0
0
0
0
0
0
0
1
95
True or False: Gemini 1.5 is Google's latest AI model designed to enhance your projects with groundbreaking features and improved performance. Drop your answer below and let's test your tech knowledge! #BuildWithAI
1
1
0
1
1
0
0
0
0
96
PPO scales with batch size, a lesson we re-learn over and over
0
0
0
0
0
0
0
0
1
97
To all the defeatists who think there is nothing else but scale: * 5 years between Self-Attention Is All You Need and FlashAttention * Transformers still require warmup. Researchers: get back to work! The future is bright :)
1
0
1
1
0
0
0
0
0
98
"5 years between Self-Attention Is All You Need and FlashAttention" quite incredible stat, gives a pause
1
0
1
1
0
0
0
0
0
99
Dive into leadership in #gaming with @AIandGames and special guest @lilmissphillips , Director of the Games Leadership Network on Episode 18 of The Branching Factor! Learn about vital training, personal dev, and more. Listen now: https://hubs.la/Q02sTSmQ0 #gamingindustry
0
0
0
0
0
0
1
0
0

10.9k tweets with multilabel annotations across 9 categories:

'AI News', 'AI Tools', 'AI Research', 'AI Models', 'AI Usecases', 'AI Open Source', 'Podcasts/Talks/Events', 'Opinions', 'Non AI'
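Because each category is stored as its own 0/1 column rather than as a list of labels, a small helper is handy for mapping a row back to its active category names. This is a minimal sketch assuming rows are plain dicts keyed by the column names above (as produced, for example, by loading the dataset and indexing a split); the example row reuses row 6 of the preview.

```python
# Category names, matching the one-hot label columns of the dataset.
LABELS = [
    "AI News", "AI Tools", "AI Research", "AI Models", "AI Usecases",
    "AI Open Source", "Podcasts/Talks/Events", "Opinions", "Non AI",
]

def active_labels(row):
    """Return the category names whose one-hot column equals 1 in this row."""
    return [name for name in LABELS if row.get(name) == 1]

# Row 6 from the preview above: a Mixtral 8x22B announcement tweet.
row = {
    "tweet": "Mixtral 8x22B has native multilingual capabilities. ...",
    "AI News": 1, "AI Tools": 1, "AI Research": 0, "AI Models": 1,
    "AI Usecases": 1, "AI Open Source": 0, "Podcasts/Talks/Events": 0,
    "Opinions": 0, "Non AI": 0,
}
print(active_labels(row))  # → ['AI News', 'AI Tools', 'AI Models', 'AI Usecases']
```

Iterating over `LABELS` (rather than the row's own keys) keeps the output in a stable, documented category order regardless of how the row dict was built.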

Downloads last month: 2

Models trained or fine-tuned on omerarshad/ai_tweet_categories