---
license: cc-by-nc-4.0
---
# iKG Model Card
## Model Details
iKG ([Imperial](https://www.imperial.ac.uk/research-and-innovation/) Knowledge Graph Generator) is a knowledge graph construction (KGC) task-specific instruction-following language model fine-tuned from Vicuna-7B, which itself is derived from Meta's LLaMA LLM.
- **Developed by**: [Xiaohui Li](https://xiaohui-victor-li.github.io/)
- **Model type**: Auto-regressive language model based on the transformer architecture.
- **License**: Non-commercial
- **Finetuned from model**: [Vicuna-7B](https://huggingface.co/lmsys/vicuna-7b-v1.3) (originally from [LLaMA](https://arxiv.org/abs/2302.13971)).
## Model Sources
- **Repository**: [https://github.com/your-github-repo](https://github.com/your-github-repo)
- **Website**: [https://xiaohui-victor-li.github.io/FinDKG/](https://xiaohui-victor-li.github.io/FinDKG/)
- **Paper**: [https://arxiv.org/abs/your-paper-id](https://arxiv.org/abs/your-paper-id)
## Uses
The primary use of iKG is to generate knowledge graphs (KG) from text via its instruction-following capability with specialized prompts. It is intended for researchers, data scientists, and developers interested in natural language processing and knowledge graph construction.
## How to Get Started with the Model
- **Python Code**: [https://github.com/your-github-repo/tree/main#api](https://github.com/your-github-repo/tree/main#api)
- **Command line interface of FastChat**: [https://github.com/your-github-repo#ikg-weights](https://github.com/your-github-repo#ikg-weights)
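As a minimal sketch of programmatic use (assuming the weights are published as a standard Hugging Face checkpoint; the repository ID below is a placeholder, not the released path), the model can be loaded and queried with the `transformers` API like any other Vicuna-style causal LM:
```python
# Minimal sketch for loading iKG with the Hugging Face transformers API.
# The model ID is a placeholder; substitute the actual iKG weights location.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/iKG-7B"  # hypothetical; replace with the released iKG weights

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a 7B model on a single GPU
    device_map="auto",
)

# Fill the KG-construction prompt template (see "Training Details" below) with a document.
prompt = "From the provided document labeled as INPUT_TEXT, ..."  # full template below
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)

# Decode only the newly generated tokens (the triplet list), not the echoed prompt.
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```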
## Training Details
iKG is fine-tuned from Vicuna-7B on roughly 3K instruction-following demonstrations, each pairing an input document for KG construction with the extracted KG triplets as the response output. iKG thus learns to extract a list of KG triplets from a given text document via prompt engineering. For more in-depth training details, refer to the "Generative Knowledge Graph Construction with Fine-tuned LLM" section of [the accompanying paper](https://arxiv.org/abs/your-paper-id).
- **Prompt Template**: The entity types and relationships can be customized for specific tasks. `<input_text>` is replaced with the input document text.
```
From the provided document labeled as INPUT_TEXT, your task is to extract structured information from it in the form of triplet for constructing a knowledge graph. Each tuple should be in the form of ('h', 'type', 'r', 'o', 'type'), where 'h' stands for the head entity, 'r' for the relationship, and 'o' for the tail entity. The 'type' denotes the category of the corresponding entity. Do NOT include redundant triplets, NOT include triplets with relationship that occurs in the past.
Note that the entities should not be generic, numerical or temporal (like dates or percentages). Entities must be classified into the following categories:
ORG: Organizations other than government or regulatory bodies
ORG/GOV: Government bodies (e.g., "United States Government")
ORG/REG: Regulatory bodies (e.g., "Federal Reserve")
PERSON: Individuals (e.g., "Elon Musk")
GPE: Geopolitical entities such as countries, cities, etc. (e.g., "Germany")
COMP: Companies (e.g., "Google")
PRODUCT: Products or services (e.g., "iPhone")
EVENT: Specific and Material Events (e.g., "Olympic Games", "Covid-19")
SECTOR: Company sectors or industries (e.g., "Technology sector")
ECON_INDICATOR: Economic indicators (e.g., "Inflation rate"), numerical value like "10%" is not a ECON_INDICATOR;
FIN_INSTRUMENT: Financial and market instruments (e.g., "Stocks", "Global Markets")
CONCEPT: Abstract ideas or notions or themes (e.g., "Inflation", "AI", "Climate Change")
The relationships 'r' between these entities must be represented by one of the following relation verbs set: Has, Announce, Operate_In, Introduce, Produce, Control, Participates_In, Impact, Positive_Impact_On, Negative_Impact_On, Relate_To, Is_Member_Of, Invests_In, Raise, Decrease.
Remember to conduct entity disambiguation, consolidating different phrases or acronyms that refer to the same entity (for instance, "UK Central Bank", "BOE" and "Bank of England" should be unified as "Bank of England"). Simplify each entity of the triplet to be less than four words.
Your output should strictly be in a list format of triplets in the JSON list format of ('h', 'type', 'r', 'o', 'type'), where the relationship 'r' must be in the given relation verbs set above. Only output the list.
===========================================================
As an Example, consider the following news excerpt:
'Apple Inc. is set to introduce the new iPhone 14 in the technology sector this month. The product's release is likely to positively impact Apple's stock value.'
From this text, your output should be:
[('Apple Inc.', 'COMP', 'Introduce', 'iPhone 14', 'PRODUCT'),
('Apple Inc.', 'COMP', 'Operate_In', 'Technology Sector', 'SECTOR'),
('iPhone 14', 'PRODUCT', 'Positive_Impact_On', 'Apple's Stock Value', 'FIN_INSTRUMENT')]
INPUT_TEXT:
<input_text>
```
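A small helper along these lines (hypothetical names, not part of the released code) illustrates how the template above might be filled with a document and how the model's reply, a list of `('h', 'type', 'r', 'o', 'type')` tuples, could be parsed:
```python
# Sketch of filling the iKG prompt template and parsing the generated triplets.
# PROMPT_TEMPLATE is assumed to hold the full instruction text shown above, with
# an <input_text> placeholder; build_prompt() and parse_triplets() are
# illustrative helpers, not part of the iKG release.
import ast

PROMPT_TEMPLATE = """From the provided document labeled as INPUT_TEXT, ...
INPUT_TEXT:
<input_text>"""


def build_prompt(document: str) -> str:
    """Substitute the raw document text into the prompt template."""
    return PROMPT_TEMPLATE.replace("<input_text>", document)


def parse_triplets(model_output: str):
    """Parse the generated list of ('h', 'type', 'r', 'o', 'type') tuples.

    The model is expected to emit a list literal; fall back to an empty list
    if the output does not parse cleanly, and keep only well-formed 5-tuples.
    """
    try:
        candidates = ast.literal_eval(model_output.strip())
    except (ValueError, SyntaxError):
        return []
    return [t for t in candidates if isinstance(t, tuple) and len(t) == 5]
```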
## Evaluation
iKG has undergone preliminary evaluation against GPT-3.5, GPT-4, and the original Vicuna-7B model. On the KG construction task, it outperforms GPT-3.5 and Vicuna-7B while exhibiting capability comparable to GPT-4. iKG excels at generating instruction-based knowledge graphs, with particular strength in output quality and adherence to the required format.
For a more detailed introduction, refer to [the accompanying paper](https://arxiv.org/abs/your-paper-id).