1. Background of Model Development

  • NTIS (www.ntis.go.kr) and KISTI (the Korea Institute of Science and Technology Information) have been working to make R&D data and science and technology information easier for users to access and use. The rise of ChatGPT and generative AI has transformed the paradigm of information delivery. NTIS and KISTI likewise aim to move beyond search-oriented information delivery and provide complex information through conversation by applying generative AI technology.
  • However, the tendency of generative AI models to hallucinate is a significant impediment for researchers and professionals in science and technology, where high accuracy is crucial. Hallucination is especially severe when generating R&D and science and technology information, because these fields rely heavily on specialist terminology that rarely appears in pre-training data and involve question types different from those existing models were trained on.
  • Therefore, we decided to build an LLM specialized for R&D data to provide more accurate R&D and scientific information.

2. KoRnDAlpaca

  • KoRnDAlpaca is a Korean-language model fine-tuned on 1 million instruction examples (R&D Instruction Dataset v1.3) generated from Korean national research reports.
  • The base model of KoRnDAlpaca is EleutherAI/polyglot-ko-12.8b.
  • For more information about the training procedure and model, please contact gsjang@kisti.re.kr.
  • To use the model, please see https://huggingface.co/NTIS/KoRnDAlpaca-Polyglot-12.8B; a minimal loading sketch is shown below.
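
A minimal loading and inference sketch using the Hugging Face transformers library, assuming the model loads as a standard causal LM; the prompt template, dtype, and generation settings are illustrative assumptions and may differ from what was used during training.

```python
# Minimal usage sketch for KoRnDAlpaca with transformers.
# The prompt format and generation parameters below are assumed, not official.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NTIS/KoRnDAlpaca-Polyglot-12.8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumed; adjust to your hardware
    device_map="auto",
)

# Hypothetical instruction-style prompt (the exact training template is not documented here).
prompt = "### 질문: 인공지능 반도체 기술 동향을 설명해 주세요.\n### 답변:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```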

3. R&D Instruction Dataset v1.3

  • The dataset is built using 30,000 original research reports from the last 5 years provided by KISTI (curation.kisti.re.kr).
  • The dataset cannot be released at this time due to licensing issues (a future release is under discussion).
  • The process of building the dataset is as follows:
    • A. Extract important texts related to technology, such as technology trends and technology definitions, from research reports.
    • B. Preprocess the extracted text
    • C. Generate question-and-answer pairs (1.5 million in total) from the extracted text using the ChatGPT API (temporary; scheduled to be replaced by our own question-and-answer generation model in `23.11).
    • D. Reformat the dataset into the form (Instruction, Output, Source), where ‘Instruction’ is the user's question, ‘Output’ is the answer, and ‘Source’ is the identification code of the research report the answer is based on (a hypothetical record is sketched after this list).
    • E. Remove low-quality data with a data quality evaluation module; only the roughly 1 million high-quality Q&A pairs are used for training.
      • ※ In KoRnDAlpaca v2 (planned for `23.10), instruction data for generating long-form technology trend reports will be added in addition to Q&A.
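
To make step D concrete, a hypothetical (Instruction, Output, Source) record is sketched below; the field names, Korean text, and report-code format are illustrative assumptions, not actual dataset entries.

```python
# Hypothetical example of a single (Instruction, Output, Source) record.
# Field names and the report identification code format are assumed for illustration.
record = {
    "instruction": "나노소재 기반 수소 저장 기술의 최근 동향은 무엇인가요?",
    "output": "최근 연구에서는 나노다공성 소재를 활용하여 수소 저장 밀도를 높이는 접근이 보고되고 있습니다. ...",
    "source": "TRKO202300001234",  # research report identification code (format assumed)
}
```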

4. Future plans

  • 23.10: Release KoRnDAlpaca v2 (adds the ability to generate long-form technology trend information in Markdown format)
  • 23.12: Release NTIS-searchGPT module v1 (Retriever + KoRnDAlpaca v3)
    • ※ An R&D-specific open-domain question answering module with a "Retriever + Generator" structure (a conceptual sketch follows this list)
    • ※ NTIS-searchGPT v1 is an early edition, with anticipated performance improvements scheduled for 2024.
  • 23.12: KoRnDAlpaca v2 will be applied to the chatbot of NTIS (www.ntis.go.kr)
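
The sketch below illustrates the general "Retriever + Generator" structure referred to above; the BM25 retriever, prompt format, toy corpus, and all identifiers are assumptions for illustration and do not describe the actual NTIS-searchGPT implementation.

```python
# Conceptual sketch of a "Retriever + Generator" open-domain QA pipeline.
# The retriever (BM25 via rank_bm25), prompt format, and corpus are assumed for illustration.
from rank_bm25 import BM25Okapi
from transformers import pipeline

# Toy corpus standing in for passages extracted from research reports.
passages = [
    "이차전지 음극 소재로 실리콘 복합체가 주목받고 있다.",
    "양자컴퓨팅 분야에서는 초전도 큐비트 연구가 활발하다.",
]
bm25 = BM25Okapi([p.split() for p in passages])

generator = pipeline("text-generation", model="NTIS/KoRnDAlpaca-Polyglot-12.8B")

def answer(question: str, top_k: int = 1) -> str:
    # 1) Retrieve the passages most relevant to the question.
    context = "\n".join(bm25.get_top_n(question.split(), passages, n=top_k))
    # 2) Condition the generator on the retrieved context (prompt format assumed).
    prompt = f"### 참고자료:\n{context}\n### 질문: {question}\n### 답변:"
    return generator(prompt, max_new_tokens=256)[0]["generated_text"]
```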
