
This repository contains technical documents describing research and development by OLM Research. Our work focuses on fine-tuning AI models and integrating AI with decentralized platforms to advance the capabilities of the OLM ecosystem.

Research

R&D of OLM Instruction Fine-Tuning Models

This research outlines the methodologies applied in fine-tuning OpenLM models, including supervised instruction tuning and reinforcement learning from human feedback (RLHF). It describes the use of diverse datasets, combining human-annotated and GPT-generated data, to improve model performance on applications such as conversational AI and evaluation tasks. Fine-tuning techniques include cosine learning rate schedules, teacher-forcing with loss masking, and preference-based algorithms such as Direct Preference Optimization (DPO) and rejection sampling for iterative training.
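To make two of the techniques named above concrete, the sketch below shows teacher-forcing with loss masking (prompt tokens excluded from the loss) and the DPO preference loss. This is a minimal illustration, not OLM's actual training code; all token IDs, shapes, and the beta value are assumptions.

import torch
import torch.nn.functional as F

IGNORE_INDEX = -100  # label value ignored by PyTorch's cross_entropy

def build_labels(prompt_ids, response_ids):
    """Concatenate prompt and response; mask prompt positions so the loss
    is computed only on response tokens (teacher forcing with loss masking)."""
    input_ids = torch.tensor(prompt_ids + response_ids)
    labels = torch.tensor([IGNORE_INDEX] * len(prompt_ids) + response_ids)
    return input_ids, labels

def sft_loss(logits, labels):
    """Next-token prediction loss: position t predicts token t+1,
    with masked (prompt) positions excluded."""
    return F.cross_entropy(logits[:-1], labels[1:], ignore_index=IGNORE_INDEX)

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Direct Preference Optimization: push the policy to prefer the chosen
    response over the rejected one relative to a frozen reference policy."""
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -F.logsigmoid(margin).mean()

# Toy usage with made-up token IDs, random logits, and dummy log-probabilities.
input_ids, labels = build_labels([1, 5, 7, 2], [9, 4, 3])
logits = torch.randn(len(input_ids), 16)  # (seq_len, vocab_size)
print(sft_loss(logits, labels))
print(dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
               torch.tensor([-13.0]), torch.tensor([-14.0])))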

R&D of SearchOLM in ChatOLM

This document covers the development of SearchOLM, a decentralized search interface integrated with ChatOLM. It provides a technical overview of the system architecture, including search query processing, content retrieval from the web, natural language understanding via ORA's on-chain AI Oracle, and ranking of extracted information. The paper details components such as query rewriting, vector-based content embedding, and response generation, aimed at enabling seamless and decentralized information retrieval within the ChatOLM ecosystem.
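The sketch below illustrates the retrieval-and-ranking flow described above (query rewriting, content embedding, similarity ranking). The embedding function, rewrite rule, and passages are placeholders for illustration only; they are not SearchOLM's actual components or ORA's on-chain AI Oracle API.

import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder embedding: a deterministic pseudo-random unit vector per
    text. A real system would call an embedding model instead."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def rewrite_query(query: str) -> str:
    """Placeholder query-rewriting step (e.g. normalization or expansion)."""
    return query.strip().lower()

def rank_passages(query: str, passages: list[str], top_k: int = 3):
    """Embed the rewritten query and each passage, rank by cosine similarity."""
    q = embed(rewrite_query(query))
    scored = [(float(q @ embed(p)), p) for p in passages]
    return sorted(scored, reverse=True)[:top_k]

if __name__ == "__main__":
    docs = ["Decentralized AI oracles verify inference on-chain.",
            "Vector embeddings enable semantic search.",
            "Cosine similarity ranks retrieved passages."]
    for score, passage in rank_passages("How does semantic search rank results?", docs):
        print(f"{score:+.3f}  {passage}")

In a full pipeline, the top-ranked passages would then be passed to the response-generation step so the model can answer with retrieved context.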
