---
language:
- en
tags:
- legal
- patent
size_categories:
- 1K<n<10K
pretty_name: HUPD-DCG
---
# Dataset Card for HUPD-DCG (Description-based Claim Generation)

<!-- Provide a quick summary of the dataset. -->

This dataset is used for generating patent claims based on the patent description.


## Dataset Information

<!-- Provide a longer summary of what this dataset is. -->

- **Repository:** https://github.com/scylj1/LLM4DPCG
- **Paper:** https://arxiv.org/abs/2406.19465


## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

There are two zip files, one for training and one for test data. The training and test folders contain 8,244 and 1,311 patent files, respectively. Each file is in JSON format and includes detailed information about a patent application. We use the claims and full description fields.
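
To get a feel for the data, each extracted JSON file can be read directly. A minimal sketch, assuming the training zip has been extracted to a `train/` folder and that the files use `claims` and `full_description` keys (both are assumptions modelled on the HUPD schema; print one file's keys to confirm):

```python
import json
from pathlib import Path

train_dir = Path("train")  # hypothetical path to the extracted training folder

pairs = []
for path in sorted(train_dir.glob("*.json")):
    with open(path, encoding="utf-8") as f:
        patent = json.load(f)
    # Key names are assumptions; run print(patent.keys()) on one file
    # to check the actual field names.
    pairs.append((patent["full_description"], patent["claims"]))

print(f"Loaded {len(pairs)} description-claims pairs")
```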


## Dataset Creation


### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

[The Harvard USPTO Patent Dataset (HUPD)](https://huggingface.co/datasets/HUPD/hupd/) is a recently collected large-scale multi-purpose patent dataset, comprising more than 4.5 million patent documents with 34 data fields (including the description, abstract, and claims). HUPD-DCG is constructed by filtering target documents from this larger dataset.
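
The parent HUPD dataset can also be browsed with the `datasets` library. A minimal sketch, assuming the small `sample` configuration described on the HUPD dataset card (the exact loading arguments may differ; recent `datasets` versions may also require `trust_remote_code=True` for script-based datasets):

```python
from datasets import load_dataset

# "sample" is a small HUPD configuration; see the HUPD dataset card
# for the full set of configurations and loading arguments.
hupd = load_dataset("HUPD/hupd", name="sample", trust_remote_code=True)
print(hupd)
```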

### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

First, we selected all the patent documents filed in 2017. We eliminated any pending or rejected documents and kept only granted documents, to form a high-quality dataset for claim generation. Considering the context length of some LLMs, such as Llama-3, we opted for documents with descriptions shorter than 8,000 tokens in our experiments. In practical settings, models are developed by training on existing patent documents and subsequently employed for new applications. To simulate this realistic scenario, we ordered the documents by date and used the last 1,311 documents for testing, which is about 14% of the whole dataset.
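
A schematic of this filtering and splitting procedure (a sketch under assumptions, not the authors' exact code: the `decision`, `filing_date`, and `full_description` field names and the `ACCEPTED` label are modelled on HUPD, and the tokenizer only approximates the 8,000-token cut-off):

```python
from transformers import AutoTokenizer

def build_splits(docs, test_size=1311, max_tokens=8000):
    """Filter granted filings by description length, then split by filing date."""
    # Llama-3's context window motivates the cut-off; any tokenizer gives an estimate.
    tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

    granted = [d for d in docs if d["decision"] == "ACCEPTED"]  # drop pending/rejected
    short = [
        d for d in granted
        if len(tok(d["full_description"]).input_ids) < max_tokens
    ]
    short.sort(key=lambda d: d["filing_date"])     # oldest to newest
    return short[:-test_size], short[-test_size:]  # last 1,311 documents form the test set
```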


## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

```bibtex
@article{jiang2024can,
  title={Can Large Language Models Generate High-quality Patent Claims?},
  author={Jiang, Lekang and Zhang, Caiqi and Scherz, Pascal A and Goetz, Stephan},
  journal={arXiv preprint arXiv:2406.19465},
  year={2024}
}
```