ICKG Model Card

Model Details

ICKG (Integrated Contextual Knowledge Graph Generator) is a knowledge graph construction (KGC) task-specific instruction-following language model fine-tuned from LMSYS's Vicuna-7B, which itself is derived from Meta's LLaMA LLM.

  • Developed by: Xiaohui Li
  • Model type: Auto-regressive language model based on the transformer architecture.
  • License: Non-commercial
  • Finetuned from model: Vicuna-7B (originally from LLaMA).

Model Sources

Uses

The primary use of ICKG is generating knowledge graphs (KGs) from text via its instruction-following capability with specialized prompts. It is intended for researchers, data scientists, and developers working on natural language processing and knowledge graph construction.

How to Get Started with the Model
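
A minimal sketch of running the model for KG extraction with the Hugging Face `transformers` library. The model id below is a placeholder (the actual repo path is not stated in this card), and the generation settings are illustrative assumptions, not the authors' published inference setup.

```python
# Sketch of ICKG inference via transformers; MODEL_ID is a placeholder --
# substitute the actual repo id for this model card.
MODEL_ID = "path/to/ICKG"  # hypothetical, replace before use


def build_prompt(template: str, document: str) -> str:
    """Fill the <input_text> placeholder in the KGC prompt template."""
    return template.replace("<input_text>", document)


def extract_triplets(prompt: str, max_new_tokens: int = 512) -> str:
    """Generate the raw triplet list from the fine-tuned model."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the echoed prompt; keep only the generated continuation.
    return tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:],
        skip_special_tokens=True,
    )
```

`build_prompt` takes the prompt template shown below under Training Details and substitutes the source document for `<input_text>` before generation.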

Training Details

ICKG is fine-tuned from Vicuna-7B on ~3K instruction-following demonstrations, each pairing an input document for KG construction with the extracted KG triplets as the response. ICKG thus learns to extract a list of KG triplets from a given text document via prompt engineering. For more in-depth training details, refer to the "Generative Knowledge Graph Construction with Fine-tuned LLM" section of the accompanying paper.
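To make the demonstration format concrete, here is a plausible shape for one of the ~3K fine-tuning records, following the Vicuna conversation schema. The exact schema used for ICKG is not published in this card; this record, its `id`, and its truncated prompt text are illustrative assumptions only.

```python
import json

# Hypothetical fine-tuning demonstration in Vicuna conversation format
# (assumed schema, not confirmed by the model card).
demo = {
    "id": "kgc-0001",  # made-up identifier
    "conversations": [
        {
            "from": "human",
            # Instruction prompt with the source document substituted in
            "value": "From the provided document labeled as INPUT_TEXT, "
                     "your task is to extract structured information ... "
                     "INPUT_TEXT: Apple Inc. is set to introduce the new "
                     "iPhone 14 in the technology sector this month.",
        },
        {
            "from": "gpt",
            # Target response: the extracted triplet list
            "value": "[('Apple Inc.', 'COMP', 'Introduce', "
                     "'iPhone 14', 'PRODUCT')]",
        },
    ],
}

# Records like this would typically be stored one per entry in a JSON file.
serialized = json.dumps(demo, indent=2)
```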

  • Prompt Template: The entity and relationship types can be customized for specific tasks. <input_text> is a placeholder to be replaced with the document text.

    From the provided document labeled as INPUT_TEXT, your task is to extract structured information from it in the form of triplet for constructing a knowledge graph. Each tuple should be in the form of ('h', 'type',  'r', 'o', 'type'), where 'h' stands for the head entity, 'r' for the relationship, and 'o' for the tail entity. The 'type' denotes the category of the corresponding entity. Do NOT include redundant triplets, NOT include triplets with relationship that occurs in the past.   
    
    Note that the entities should not be generic, numerical or temporal (like dates or percentages).  Entities must be classified into the following categories:
    ORG: Organizations other than government or regulatory bodies
    ORG/GOV: Government bodies (e.g., "United States Government")
    ORG/REG: Regulatory bodies (e.g., "Federal Reserve")
    PERSON: Individuals (e.g., "Elon Musk")
    GPE: Geopolitical entities such as countries, cities, etc. (e.g., "Germany")
    COMP: Companies (e.g., "Google")
    PRODUCT: Products or services (e.g., "iPhone")
    EVENT: Specific and Material Events (e.g., "Olympic Games", "Covid-19")
    SECTOR: Company sectors or industries (e.g., "Technology sector")
    ECON_INDICATOR: Economic indicators (e.g., "Inflation rate"), numerical value like "10%" is not a ECON_INDICATOR;
    FIN_INSTRUMENT: Financial and market instruments (e.g., "Stocks", "Global Markets")
    CONCEPT: Abstract ideas or notions or themes (e.g., "Inflation", "AI", "Climate Change")
    
    The relationships 'r' between these entities must be represented by one of the following relation verbs set: Has, Announce, Operate_In, Introduce, Produce, Control, Participates_In, Impact, Positive_Impact_On, Negative_Impact_On, Relate_To, Is_Member_Of, Invests_In, Raise, Decrease.
    
    Remember to conduct entity disambiguation, consolidating different phrases or acronyms that refer to the same entity (for instance,  "UK Central Bank", "BOE" and "Bank of England" should be unified as "Bank of England"). Simplify each entity of the triplet to be less than four words.  
    
    Your output should strictly be in a list format of triplets in the JSON list format of ('h', 'type', 'r', 'o', 'type'), where the relationship 'r' must be in the given relation verbs set above. Only output the list. 
    ===========================================================
    As an Example, consider the following news excerpt:
    'Apple Inc. is set to introduce the new iPhone 14 in the technology sector this month. The product's release is likely to positively impact Apple's stock value.'
    
    From this text, your output should be:
    [('Apple Inc.', 'COMP', 'Introduce', 'iPhone 14', 'PRODUCT'),
     ('Apple Inc.', 'COMP', 'Operate_In', 'Technology Sector', 'SECTOR'),
     ('iPhone 14', 'PRODUCT', 'Positive_Impact_On', 'Apple's Stock Value', 'FIN_INSTRUMENT')]
    
    INPUT_TEXT:
    <input_text>
    

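The prompt above instructs the model to emit a list of 5-tuples. A small helper like the following can turn that raw string into structured data; this parser is a sketch and assumes well-formed output, whereas real generations may first need cleanup (e.g. unescaped apostrophes inside entity names such as `'Apple's Stock Value'`).

```python
import ast


def parse_triplets(raw: str):
    """Parse the model's output -- a Python-style list of
    (h, h_type, r, o, o_type) tuples -- into a list of tuples.
    Assumes the string is syntactically well-formed."""
    parsed = ast.literal_eval(raw.strip())
    # Keep only entries with the expected 5-tuple arity.
    return [t for t in parsed if isinstance(t, tuple) and len(t) == 5]


raw_output = """[('Apple Inc.', 'COMP', 'Introduce', 'iPhone 14', 'PRODUCT'),
 ('Apple Inc.', 'COMP', 'Operate_In', 'Technology Sector', 'SECTOR')]"""
triplets = parse_triplets(raw_output)  # -> 2 triplets
```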
Evaluation

ICKG has undergone preliminary evaluation comparing its performance to GPT-3.5, GPT-4, and the original Vicuna-7B model. On the KG construction task, it outperforms GPT-3.5 and Vicuna-7B while exhibiting capability comparable to GPT-4. ICKG excels at generating instruction-based knowledge graphs, with particular emphasis on output quality and adherence to the required format.

For a more detailed introduction, refer to the accompanying paper.
