Summary

This dataset is composed of data manually extracted from a subset of publicly available Certificates of Analysis (COAs) for Standard Reference Materials (SRMs) produced by the National Institute of Standards and Technology (NIST). Not all SRMs produced by NIST are present in this dataset. SRMs span a broad range of material types, and so do the relevant measurements; multiple properties are usually measured for each material. This dataset includes chemical and physical properties, their uncertainties, and their units, all manually extracted by a subject matter expert.

There are 2 configurations for this dataset:

  • The "md" (default) configuration returns the properties for each SRM as a markdown table.
  • The "json" configuration returns the properties as a formatted JSON string (see the sketch after this list).
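
For reference, a minimal sketch (using the same loading pattern as in the Usage section below) of how the "json" configuration's 'data' string might be parsed; the structure of the resulting object is whatever the subject matter expert recorded and is not documented here:

import json
import os

from datasets import load_dataset

# Hypothetical sketch: load the "json" configuration instead of the default "md".
dset_json = load_dataset(
  "mahynski/nist-coa-pdf",
  split="certified",
  token=os.getenv('HF_TOKEN'),
  trust_remote_code=True,
  name='json'
)

# The 'data' field is a formatted JSON string; parse it for programmatic access.
properties = json.loads(dset_json[0]['data'])
print(type(properties))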

There are multiple "splits" of the data available, but they do not correspond to traditional train/test partitions. Instead, the data is divided according to the metrology type of the measurement (see the sketch after this list). These include:

  • "certified" - certified reference values reported in the COA.
  • "information" - other background information on the material reported in the COA.
  • "manufacturer_data" - data provided by manufacturers rather than NIST but listed in the COA.
  • "non_certified" - measured values which are not considered "certified" by NIST but can be found in the COA.
  • "reference" - other reference values found in the COA.
  • "all_types" - all of the above combined.
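
As a rough illustration, any of these metrology-type splits can be loaded by name; the sketch below simply loads two of them and compares their sizes (it assumes the same access pattern shown in the Usage section):

import os

from datasets import load_dataset

# Load two metrology-type splits and compare their sizes (names from the list above).
for split_name in ["certified", "all_types"]:
    dset = load_dataset(
      "mahynski/nist-coa-pdf",
      split=split_name,
      token=os.getenv('HF_TOKEN'),
      trust_remote_code=True,
      name='md'
    )
    print(f"{split_name}: {len(dset)} entries")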

Each entry in the dataset contains the following (a brief inspection sketch follows this list):

  • "data" - the stringified markdown table (or JSON string, depending on the configuration) as determined by a subject matter expert.
  • "pdf_file" - the absolute path to the COA PDF file on your computer.
  • "accurate;text-embedding-ada-002" - the StorageContext produced by indexing the parsed documents using LlamaCloud's "LlamaParse" tool set to "accurate" mode with embed_model = llama_index.embeddings.openai.OpenAIEmbedding(model='text-embedding-ada-002'). Node parsing was done at LlamaIndex default settings. See the code below and the LlamaIndex documentation for details on indexing.
  • "continuous;text-embedding-ada-002" - the same as "accurate;text-embedding-ada-002" except the "LlamaParse" tool was set to "continuous" mode to produce a single parsed document; this was subsequently broken into as few nodes as possible using the maximum chunk size (8192) that fits the embedding model.
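
A minimal inspection sketch, assuming the dataset has already been loaded as dset (as in the Usage section below):

# Inspect a single entry; expect the four keys listed above.
entry = dset[0]
print(sorted(entry.keys()))

# Preview the manually extracted table and the persisted index paths.
print(entry['data'][:500])
print(entry['pdf_file'])
print(entry['accurate;text-embedding-ada-002'])
print(entry['continuous;text-embedding-ada-002'])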

Intended Use

This dataset was originally assembled to train automated, retrieval-augmented generation (RAG) data extraction tools to extract properties from unstructured PDF documents into a structured format.

LlamaParse

The accurate;text-embedding-ada-002 key in each entry provides the absolute path to the persisted StorageContext for that COA document, which was generated as follows:

import nest_asyncio
import os
import tqdm

from datasets import load_dataset
from llama_parse import LlamaParse
from llama_index.core import VectorStoreIndex

dset = load_dataset(
  "mahynski/nist-coa-pdf", 
  split="all_types",
  token=os.getenv('HF_TOKEN'),
  trust_remote_code=True, # This is important to include
  name='md'
)

nest_asyncio.apply()

parser = LlamaParse(
    api_key=os.getenv("LLAMA_CLOUD_API_KEY"), 
    result_type="text",
    accurate_mode=True
)

for i in tqdm.tqdm(range(len(dset))):
    try:
        parsed_document = parser.load_data(dset[i]['pdf_file'])
        index = VectorStoreIndex.from_documents(
            parsed_document # Used default OpenAI 'text-embedding-ada-002' as embed_model, aka "Text-embedding-ada-002-v2" on OpenAI website
        )
        srm_no = os.path.basename(dset[i]['pdf_file']).split('.pdf')[0]
        index.storage_context.persist(os.path.join('./_llama_parse_accurate/', srm_no))
    except Exception as e:
        print(f'Failed on {i} : {e}')
        break

For the "continuous" case, the following modifications (including two additional imports) were made:

from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.openai import OpenAIEmbedding

parser = LlamaParse(
    api_key=os.getenv("LLAMA_CLOUD_API_KEY"), 
    result_type="text",
    continuous_mode=True
)

for i in tqdm.tqdm(range(len(dset))):
    try:
        parsed_document = parser.load_data(dset[i]['pdf_file'])
        index = VectorStoreIndex.from_documents(
            parsed_document,
            # Keep the single parsed document in as few nodes as possible by using
            # the maximum chunk size that fits the embedding model.
            transformations=[SentenceSplitter(chunk_size=8192, chunk_overlap=512)],
            embed_model=OpenAIEmbedding(model='text-embedding-ada-002')
        )
        srm_no = os.path.basename(dset[i]['pdf_file']).split('.pdf')[0]
        index.storage_context.persist(os.path.join('./_llama_parse_continuous/', srm_no))
    except Exception as e:
        print(f'Failed on {i} : {e}')
        break

The _llama_parse_accurate and _llama_parse_continuous directories are included in this dataset, and the appropriate subdirectory for each entry is returned under, e.g., the accurate;text-embedding-ada-002 dataset key. To convert this into an index which can be queried:

from llama_index.core import StorageContext, load_index_from_storage
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI

# 1. Set embedding model to be the same as what the indices were encoded with.
# This way the query text is embedded with the same model as the nodes in the documents.
# See https://docs.llamaindex.ai/en/stable/understanding/storing/storing/
embed_model = OpenAIEmbedding(model='text-embedding-ada-002') # aka "Text-embedding-ada-002-v2" on OpenAI website

# 2. Rebuild storage context.
storage_context = StorageContext.from_defaults(
    persist_dir=dset[10]['accurate;text-embedding-ada-002']
)

# 3. Load index and create QA engine.
index = load_index_from_storage(
    storage_context,
    embed_model=embed_model
)
query_engine = index.as_query_engine(
    llm=OpenAI(temperature=0.0, model="gpt-4o-mini-2024-07-18", max_tokens=16384) # Configure the AI model you wish to use
)
response = query_engine.query('What is the level of Chromium in this SRM?')

print(response.response)

Usage

The dataset can be accessed in multiple ways, but perhaps the easiest is to use the datasets package. Each entry is returned as a dictionary; the table extracted by a subject matter expert and the path to the raw PDF are available under the 'data' and 'pdf_file' keys, respectively, along with the StorageContext paths described above.

import os
from datasets import load_dataset
from dotenv import load_dotenv

load_dotenv('.env')

dset = load_dataset(
  "mahynski/nist-coa-pdf", 
  split="certified",
  token=os.getenv('HF_TOKEN'),
  trust_remote_code=True, # This is important to include
  name='md'
)

manually_extracted_datatable = dset[0]['data']
original_coa_pdf_file = dset[0]['pdf_file']
storage_context = dset[0]['accurate;text-embedding-ada-002']
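
As a rough sketch of the intended workflow, the manually extracted table can serve as ground truth against which a RAG pipeline's answers are checked. The comparison below is illustrative only; it assumes a query_engine has been built from this entry's 'accurate;text-embedding-ada-002' path as shown in the LlamaParse section above:

# Illustrative only: check a RAG answer against the expert-extracted table.
# `query_engine` is assumed to have been built from
# dset[0]['accurate;text-embedding-ada-002'] as shown in the LlamaParse section.
response = query_engine.query('What is the level of Chromium in this SRM?')

print('RAG answer:')
print(response.response)

print('Ground truth (manually extracted):')
print(manually_extracted_datatable)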