Update README.md
README.md CHANGED
@@ -25,7 +25,7 @@ Without their independent research collaboration this model release would not ha
 
 - Finetuned with **SFT**
 - Aligned with **DPO**
-- **Using a novel training technique**
+- **Using a novel training technique** - we partially freeze the model according to a laser-like analysis (yet to be officially announced)
 It allows us to evaluate the no free lunch theorem and make a better choice for optimizing it - created by the [LaserRMT research group](https://github.com/cognitivecomputations/laserRMT)
 - Optimized with **LaserRMT**
 
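The partial-freezing idea named in the hunk above can be sketched in a few lines of PyTorch. This is a minimal illustration only: the layer-scoring criterion and threshold below are hypothetical stand-ins, since the actual laser-like analysis has not yet been published.

```python
import torch
import torch.nn as nn

# Toy stand-in for a transformer: a stack of linear "layers".
model = nn.Sequential(*[nn.Linear(16, 16) for _ in range(8)])

def snr_score(weight: torch.Tensor) -> float:
    # Hypothetical criterion: ratio of the largest singular value to the
    # mean of the rest, as a crude signal-to-noise proxy for the layer.
    s = torch.linalg.svdvals(weight.detach())
    return (s[0] / s[1:].mean()).item()

# Freeze layers whose score falls below an (assumed) threshold, so a
# subsequent SFT pass only updates the layers judged most informative.
THRESHOLD = 1.5  # hypothetical value, not from the model card
for layer in model:
    if snr_score(layer.weight) < THRESHOLD:
        layer.weight.requires_grad_(False)
        layer.bias.requires_grad_(False)

trainable = [p for p in model.parameters() if p.requires_grad]
print(f"{len(trainable)} of {len(list(model.parameters()))} tensors remain trainable")
```

An optimizer built afterwards with `filter(lambda p: p.requires_grad, model.parameters())` would then train only the unfrozen layers.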
@@ -52,7 +52,7 @@ It allows to evaluate the no free lunch theorem and make a better choice to opti
 - **Model Type:** SauerkrautLM-7b-LaserChat is a finetuned model based on [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
 - **Language(s):** German, English
 - **License:** Apache 2.0
-- **Contact:** [Website VAGO solutions](https://vago-solutions.de/#Kontakt), [Website Hyperspace.ai](https://hyperspace.
+- **Contact:** [Website VAGO solutions](https://vago-solutions.de/#Kontakt), [Website Hyperspace.ai](https://hyperspace.computer/)
 
 ### Training procedure:
 
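The DPO alignment stage listed in the training overview optimizes the standard Direct Preference Optimization objective (Rafailov et al., 2023). A minimal sketch of that loss follows; the random tensors are purely illustrative and this is not the model's actual training code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss over a batch of preference pairs.

    Inputs are summed log-probabilities of chosen/rejected responses
    under the policy being trained and a frozen reference model.
    """
    pi_logratio = policy_chosen_logps - policy_rejected_logps
    ref_logratio = ref_chosen_logps - ref_rejected_logps
    # Maximize the margin by which the policy prefers chosen over
    # rejected, relative to the reference model, scaled by beta.
    return -F.logsigmoid(beta * (pi_logratio - ref_logratio)).mean()

# Toy batch of 4 preference pairs (random log-probs, illustration only).
torch.manual_seed(0)
loss = dpo_loss(torch.randn(4), torch.randn(4), torch.randn(4), torch.randn(4))
print(loss.item())
```

In practice libraries such as TRL wrap this objective; `beta` controls how strongly the policy is kept close to the reference model.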
@@ -101,10 +101,10 @@ However, we cannot guarantee consistently appropriate behavior. Therefore, if yo
 Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.
 
 ## Contact
-If you are interested in customized LLMs for business applications, please get in contact with us via our
+If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.
 
 ## Collaborations
-We are also keenly seeking support and investment for our
+We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us.
 
 ## Acknowledgement
 Many thanks to [openchat](https://huggingface.co/openchat) for providing such a valuable model to the open-source community.
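The "Optimized with **LaserRMT**" step in the feature list builds on SVD-based weight denoising. The generic building block behind laser-style methods is a low-rank truncation of a weight matrix; the sketch below shows that primitive only, not the LaserRMT group's exact random-matrix-theory procedure, and the rank of 8 is an arbitrary example value.

```python
import torch

def low_rank_approx(weight: torch.Tensor, rank: int) -> torch.Tensor:
    # Keep only the top-`rank` singular components of the weight matrix,
    # discarding the small singular values treated as noise.
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    return U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank, :]

torch.manual_seed(0)
W = torch.randn(64, 64)          # stand-in for a layer's weight matrix
W_denoised = low_rank_approx(W, rank=8)
err = torch.linalg.norm(W - W_denoised)
print(err.item())
```

Methods in this family replace selected layer weights with such truncations, choosing which layers and ranks via an analysis of the singular-value spectrum.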