Vectara is the Agent Operating System for trusted enterprise AI: Agentic RAG with multi-modal retrieval, orchestration, and always-on governance, so enterprises can safely operate grounded, secure, auditable agents with real-time policy and factual-consistency enforcement.

We provide simple APIs for creating AI Agents that dramatically simplify the task of building scalable, secure, and reliable AI applications.
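As a rough illustration of what a query against the platform looks like, here is a minimal sketch in Python. The endpoint path, `x-api-key` header, and payload fields are assumptions based on the v2 REST API described in the docs; verify them against the API documentation before use.

```python
# Minimal sketch of a Vectara corpus query over REST.
# Endpoint path and payload fields are assumptions -- see https://docs.vectara.com/docs/
import json
import urllib.request

API_BASE = "https://api.vectara.com/v2"

def build_query_request(api_key: str, corpus_key: str, query: str, limit: int = 5):
    """Assemble the URL, headers, and JSON body for a single-corpus query."""
    url = f"{API_BASE}/corpora/{corpus_key}/query"
    headers = {
        "Content-Type": "application/json",
        "x-api-key": api_key,  # personal API key from the Vectara console
    }
    payload = {
        "query": query,
        "search": {"limit": limit},  # how many passages to retrieve
        "generation": {"max_used_search_results": limit},  # grounded summary
    }
    return url, headers, payload

# Sending the request (requires a real API key and corpus, so left commented):
# url, headers, payload = build_query_request("YOUR_API_KEY", "my-docs", "What is RAG?")
# req = urllib.request.Request(url, data=json.dumps(payload).encode(),
#                              headers=headers, method="POST")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```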
To learn more, here are some resources:

* [Sign up](https://console.vectara.com/signup/?utm_source=huggingface&utm_medium=space&utm_term=i[…]=console&utm_campaign=huggingface-space-integration-console) for a Vectara account.
* Check out our API [documentation](https://docs.vectara.com/docs/).
* We have created [vectara-ingest](https://github.com/vectara/vectara-ingest) to help you with data ingestion and [vectara-answer](https://github.com/vectara/vectara-answer) as a quick start for building the UI.
* Join us on [Discord](https://discord.gg/GFb8gMz6UH) or ask questions in the [Forums](https://discuss.vectara.com/).
* Here are a few demo applications:
  * [AskNews](https://asknews.demo.vectara.com/)
  * [Agentic Demo](https://agent-demo.vectara.com/)
* Our [Hughes Hallucination Evaluation Model](https://huggingface.co/vectara/hallucination_evaluation_model), or HHEM, is a model to detect LLM hallucinations.
* [HHEM leaderboard](https://huggingface.co/spaces/vectara/leaderboard)
* Our platform provides a production-grade [factual consistency score](https://vectara.com/blog/automating-hallucination-detection-introducing-vectara-factual-consistency-score/) (aka HHEM v2), which supports a longer sequence length, is calibrated, and is integrated into our Query APIs.
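To make the HHEM bullet above concrete, here is a sketch of scoring (source, summary) pairs with the open HHEM model. The model id comes from the link above; the `predict()` call and `trust_remote_code` flag reflect the model card at the time of writing and are assumptions — check the card before relying on them.

```python
# Sketch: checking a summary against its source with the open HHEM model.
# The predict() API and trust_remote_code flag are assumptions from the
# model card at https://huggingface.co/vectara/hallucination_evaluation_model

def make_hhem_pairs(sources, summaries):
    """Pair each source passage with the summary to be checked against it."""
    return [(src, summ) for src, summ in zip(sources, summaries)]

pairs = make_hhem_pairs(
    ["The capital of France is Paris."],
    ["Paris is the capital of Germany."],  # a factual inconsistency
)

# Scoring (downloads the model weights, so left commented out here):
# from transformers import AutoModelForSequenceClassification
# model = AutoModelForSequenceClassification.from_pretrained(
#     "vectara/hallucination_evaluation_model", trust_remote_code=True)
# scores = model.predict(pairs)  # one consistency score per pair
# A low score suggests the summary is not supported by the source.
```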