Sristtee committed 500f1bd (verified) · 1 parent: 81f8888

Upload README.md

Files changed (1): README.md (+199 -0)
---
tags:
- patents
- climate
- green-technology
- text-classification
- patent-classification
- human-in-the-loop
- multi-agent
- patentsberta
language:
- en
pipeline_tag: text-classification
library_name: transformers
---

# Green Patent Detection: Multi-Agent HITL + PatentSBERTa

This repository contains an advanced green patent detection workflow built for **binary classification of patent claims** into:

- **1 = Green / climate mitigation related**
- **0 = Non-green**

The project extends a baseline PatentSBERTa workflow by adding a **Human-in-the-Loop (HITL)** review stage and a **multi-agent debate system** before final fine-tuning.

## Project overview

The goal of this project is to improve green patent detection by combining:

1. **High-risk sample selection** from uncertainty sampling
2. **Multi-agent LLM review** of difficult claims
3. **Human verification** of the AI suggestions
4. **Final fine-tuning of PatentSBERTa** using silver labels + gold HITL labels

This workflow was designed to test whether a more advanced labeling pipeline produces stronger training data than a simple single-LLM labeling approach.

## Base model

The final classifier is built from:

- **Base encoder:** `AI-Growth-Lab/PatentSBERTa`
- **Task:** Binary text classification
- **Domain:** Patent claim classification for climate mitigation / green technology

## Data used in the notebook

The notebook uses the following files (a minimal loading sketch follows the list):

- `patents_50k_green.parquet`
- `train_meta.csv`
- `y_train.npy`
- `eval_silver.parquet`
- `hitl_green_100.csv`
- `hitl_review_progress_with_llm.csv`
- `hitl_green_gold.csv`
- `hitl_three_agents.csv`
+
58
+ ## Methodology
59
+
60
+ ### 1. High-risk claim selection
61
+
62
+ A set of **100 high-risk patent claims** was selected from earlier uncertainty sampling outputs. These were the most difficult / ambiguous examples for the model.
63
+
64
+ ### 2. Multi-agent debate system
65
+
66
+ Three agents were created using `CrewAI` and an Ollama-hosted model (`qwen2.5:3b-instruct`):
67
+
68
+ - **Advocate Agent** – argues why the claim should be classified as green under Y02 climate mitigation logic
69
+ - **Skeptic Agent** – argues why the claim may not qualify and checks for weak evidence or greenwashing
70
+ - **Judge Agent** – reviews both sides and returns a structured final output with:
71
+ - predicted label
72
+ - confidence
73
+ - rationale
74
+
75
+ This produces an AI suggestion for each difficult claim.
76
+
77
+ ### 3. Human-in-the-Loop review
78
+
79
+ The AI-generated suggestion was then manually reviewed.
80
+
81
+ The final human label was stored as:
82
+
83
+ - `is_green_human`
84
+
85
+ These human-reviewed labels form the **gold dataset** for the difficult claims.
86
+
87
+ ### 4. Gold-enhanced training
88
+
89
+ The final training set combines:
90
+
91
+ - **Silver labels** from the earlier training data
92
+ - **100 gold human-reviewed claims** from the multi-agent workflow
93
+
94
+ This combined dataset was used to fine-tune PatentSBERTa.
95
+
96
+ ## Training configuration
97
+
98
+ The notebook fine-tunes the model with the following setup:
99
+
100
+ - **Model:** `AI-Growth-Lab/PatentSBERTa`
101
+ - **Max sequence length:** `256`
102
+ - **Epochs:** `1`
103
+ - **Learning rate:** `2e-5`
104
+ - **Train batch size:** `8`
105
+ - **Eval batch size:** `8`
106
+ - **Weight decay:** `0.01`
107
+ - **Framework:** Hugging Face Transformers Trainer
108
+
109
+ ## Dataset splits used during fine-tuning
110
+
111
+ From the notebook:
112
+
113
+ - **Training data:** silver training set + gold HITL labels
114
+ - **Evaluation data:** `eval_silver`
115
+ - **Additional check:** `gold_100`
116
+
117
+ The notebook text states that the final training dataset contains **35,200 claims**.
118
+
119
+ ## Human vs AI agreement
120
+
121
+ According to the notebook:
122
+
123
+ - **Simple LLM from Assignment 2:** `94%` agreement with human labels
124
+ - **Agentic system from Assignment 3:** `87%` agreement with human labels
125
+
126
+ This suggests that the multi-agent system used stricter reasoning criteria, which created more disagreement on borderline cases.
127
+
128
+ ## Repository contents
129
+
130
+ Depending on what you upload, this repository may include:
131
+
132
+ - the processed HITL dataset
133
+ - the final trained model
134
+ - tokenizer files
135
+ - training notebook
136
+ - prediction / rationale outputs for the 100 reviewed claims
137
+
138
+ ## Expected columns in the HITL dataset
139
+
140
+ The notebook shows or creates columns such as:
141
+
142
+ - `id`
143
+ - `text`
144
+ - `p_green`
145
+ - `u`
146
+ - `llm_green_suggested`
147
+ - `llm_confidence`
148
+ - `llm_rationale`
149
+ - `is_green_human`
150
+
151
+ ## Example use
152
+
153
+ ```python
154
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
155
+ import torch
156
+
157
+ model_name = "YOUR_HF_REPO_NAME"
158
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
159
+ model = AutoModelForSequenceClassification.from_pretrained(model_name)
160
+
161
+ text = "A company develops a carbon capture system that reduces CO2 emissions from cement factories."
162
+ inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=256)
163
+
164
+ with torch.no_grad():
165
+ logits = model(**inputs).logits
166
+ pred = torch.argmax(logits, dim=-1).item()
167
+
168
+ print("Predicted label:", pred)
169
+ ```
170
+
171
+ ## Intended use
172
+
173
+ This project is intended for:
174
+
175
+ - research and coursework on green patent detection
176
+ - experimentation with HITL labeling pipelines
177
+ - comparison of simple vs advanced AI-assisted annotation workflows
178
+ - climate-tech related document classification
179
+
180
+ ## Limitations
181
+
182
+ - The gold set is relatively small (**100 reviewed claims**)
183
+ - The multi-agent workflow depends on LLM reasoning quality
184
+ - Agreement with humans does not automatically guarantee better downstream model performance
185
+ - Final performance metrics should be reported from the actual training run in this repository
186
+
187
+ ## Notes
188
+
189
+ This README was prepared from the notebook workflow and code structure. If you are uploading the **model repo**, add the final evaluation metrics from your training output. If you are uploading the **dataset repo**, you can keep the methodology sections and remove the model inference example if not needed.
190
+
191
+ ## Citation
192
+
193
+ If you use this work, please cite the repository and the base model:
194
+
195
+ - `AI-Growth-Lab/PatentSBERTa`
196
+
197
+ You may also describe the workflow as:
198
+
199
+ > Multi-Agent Human-in-the-Loop green patent detection using PatentSBERTa with gold-enhanced fine-tuning.