---
title: README
emoji: πŸ“š
colorFrom: purple
colorTo: blue
sdk: static
pinned: false
---

Welcome to the llmware HuggingFace page. We believe that the ascendance of LLMs creates a major new application pattern and data
pipeline that will be transformative in the enterprise, especially in knowledge-intensive industries. Our open source research efforts
focus both on the new "ware" (the "middleware" and "software" that will wrap and integrate LLMs) and on building high-quality,
automation-focused enterprise Agent, RAG and embedding models.  

Our model training initiatives fall into four major categories:

- SLIMs (Structured Language Instruction Models) - small, specialized function-calling models for stacking in multi-model, Agent-based workflows  

- BLING/DRAGON - highly accurate, fact-based question-answering models  

- Industry-BERT - industry fine-tuned embedding models  

- Private Inference Self-Hosting, Packaging and Quantization - GGUF, ONNX, OpenVINO  


Please check out a few of our recent blog posts related to these initiatives:  
  [SMALL MODEL ACCURACY BENCHMARK](https://medium.com/@darrenoberst/best-small-language-models-for-accuracy-and-enterprise-use-cases-benchmark-results-cf71964759c8) | 
  [OUR JOURNEY BUILDING ACCURATE ENTERPRISE SMALL MODELS](https://medium.com/@darrenoberst/building-the-most-accurate-small-language-models-our-journey-781474f64d88) | 
  [THINKING DOES NOT HAPPEN ONE TOKEN AT A TIME](https://medium.com/@darrenoberst/thinking-does-not-happen-one-token-at-a-time-0dd0c6a528ec) | 
  [SLIMs](https://medium.com/@darrenoberst/slims-small-specialized-models-function-calling-and-multi-model-agents-8c935b341398) | 
  [BLING](https://medium.com/@darrenoberst/small-instruct-following-llms-for-rag-use-case-54c55e4b41a8) | 
  [RAG-INSTRUCT-TEST-DATASET](https://medium.com/@darrenoberst/how-accurate-is-rag-8f0706281fd9) | 
  [LLMWARE EMERGING STACK](https://medium.com/@darrenoberst/the-emerging-llm-stack-for-rag-deee093af5fa) | 
  [MODEL SIZE TRENDS](https://medium.com/@darrenoberst/are-the-mega-llms-driving-the-future-or-they-already-in-the-past-c3b949f9f5a5) |
  [OPEN SOURCE RAG](https://medium.com/@darrenoberst/open-source-llms-in-rag-89d397b39511) |
  [1B-3B-7B LLM CAPABILITIES](https://medium.com/@darrenoberst/rag-instruct-capabilities-they-grow-up-so-fast-2647550cdc0a)  
  
Interested? [Join us on Discord](https://discord.gg/MhZn5Nc39h)