
UK PV dataset

Domestic solar photovoltaic (PV) power generation data from the United Kingdom. The dataset contains data from over 30,000 solar PV systems, spans 2010 to 2025, and is updated with new data every few months.

All PV systems in this dataset report cumulative energy generation every 30 minutes. Each reading is a true accumulation of the total energy generated over the preceding half hour, calculated on the meter itself. As such, the 30-minute data is generally of high quality and low noise.

Of the 30,000 systems, about 1,309 also report instantaneous readings every 5 minutes. This data is recorded in the 5_minutely Parquet files. Please note that this data is noisy: the readings are instantaneous, and solar PV systems respond within milliseconds to changes in solar irradiance. Even small, wispy clouds can cause changes in solar irradiance!

The 30-minutely data is not an aggregation of the 5-minutely data. The 5-minutely data is instantaneous. The 30-minutely data is a true sum of the total energy produced over the time period. As such, the 30-minutely data should be the preferred source of data.

The data is recorded by professional meters, wired into the AC side of the inverter.

To protect the privacy of the solar PV system owners, we have reduced the precision of the geographical location of each PV system to about 1 kilometer. If you are the owner of a PV system in the dataset and do not want your solar data to be shared, please email us at info@openclimatefix.org.

This dataset is made possible by Sheffield Solar.

Files

  • metadata.csv: The geographical location, orientation, tilt, and nominal generation capacity for each solar PV system.
  • {5,30}_minutely/year=YYYY/month=MM/data.parquet: Energy production time series data for each solar PV system, for one month.
  • bad_data.csv: This CSV describes time periods which we believe contain "bad" data.

metadata.csv

Metadata of the different PV systems.

The CSV columns are:

  • ss_id: The Sheffield Solar identifier of the solar PV system.
  • start_datetime_GMT: The datetime of the first observation in the 30-minutely dataset, in UTC.
  • end_datetime_GMT: The datetime of the last observation in the 30-minutely dataset, in UTC.
  • latitude_rounded: Latitude of the PV system, rounded to approximately the nearest km.
  • longitude_rounded: Longitude of the PV system, rounded to approximately the nearest km.
  • orientation: The orientation of the PV panels, in degrees.
  • tilt: The tilt of the PV panels with respect to the ground, in degrees. 0 degrees would be horizontal (parallel to the ground). 90 degrees would be standing perpendicular to the ground.
  • kWp: The nominal power generation capacity of the PV system (kilowatts peak). Note that some PV systems may occasionally produce more than the nominal capacity.
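
As a quick orientation, here's a minimal sketch that loads the metadata with Polars and summarises the nominal capacity column (it assumes metadata.csv has been downloaded to the working directory):

import polars as pl

# Assumes metadata.csv is in the working directory; adjust the path as needed.
metadata = pl.read_csv("metadata.csv", try_parse_dates=True)

# Number of systems and the spread of nominal capacities (kWp).
print(
    metadata.select(
        pl.len().alias("n_systems"),
        pl.col("kWp").min().alias("min_kWp"),
        pl.col("kWp").median().alias("median_kWp"),
        pl.col("kWp").max().alias("max_kWp"),
    )
)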

{5,30}_minutely/

Time series of solar generation, stored as partitioned Parquet files.

The files contain 3 columns:

  • ss_id: The Sheffield Solar ID of the system.
  • datetime_GMT: The datetime of the recording in Greenwich Mean Time (UTC+0). Each row represents data for the time period ending at datetime_GMT. For example, a row with a timestamp of 12:00 in the 30-minutely data represents the total energy generated from 11:30 to 12:00.
  • generation_Wh: The energy generated in the period (in watt hours) at the given timestamp for the given system. One "watt hour" is the amount of energy generated in one hour if the power output is one watt. So, to calculate the average power (in watts), multiply generation_Wh by 12 in the 5-minutely datasets, or multiply generation_Wh by 2 in the 30-minutely datasets (see the sketch below).
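
For example, here's a minimal sketch of that conversion (the rows below are hypothetical, just to make the snippet self-contained):

import polars as pl
from datetime import datetime

# Hypothetical 30-minutely rows with the three columns described above.
df = pl.DataFrame({
    "ss_id": [2405, 2405],
    "datetime_GMT": [datetime(2021, 6, 1, 11, 30), datetime(2021, 6, 1, 12, 0)],
    "generation_Wh": [850.0, 920.0],
})

# Average power (W) over each half hour = generation_Wh / 0.5 h = generation_Wh * 2.
# For the 5-minutely data, the factor would be 12 instead.
print(df.with_columns((pl.col("generation_Wh") * 2).alias("average_power_W")))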

These Parquet files are partitioned using the Hive convention of naming the partitions key=value. Polars, Pandas, and DuckDB should all automatically infer the partitioning schema.

The Parquet files are sorted by ss_id and then by datetime_GMT, and so should be fast to query. Full Parquet statistics are stored in the files.

bad_data.csv

The CSV columns are:

  • ss_id: The Sheffield Solar ID. There may be multiple rows with the same ss_id if a given solar system has multiple "bad" periods.
  • start_datetime_GMT: The first reading that should be dropped.
  • end_datetime_GMT: The last reading that should be dropped. If this field is blank then drop data for this solar system until the end of the dataset.
  • reason: A human-readable description of why this period should be dropped. Might be blank.
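
As a sketch of one way to apply these periods (assuming the 30-minutely data has already been loaded into a Polars DataFrame called df with the columns described above, and that bad_data.csv is in the working directory):

import polars as pl

# Load the bad-data periods. A blank end_datetime_GMT means "bad until the end
# of the dataset", so fill those with a date far in the future.
bad = pl.read_csv("bad_data.csv", try_parse_dates=True)
bad = bad.with_columns(pl.col("end_datetime_GMT").fill_null(pl.datetime(2100, 1, 1)))

# Find every (ss_id, datetime_GMT) that falls inside a bad period for its system...
flagged = (
    df.join(bad, on="ss_id", how="left")
    .filter(
        pl.col("datetime_GMT").is_between(
            pl.col("start_datetime_GMT"), pl.col("end_datetime_GMT")
        )
    )
    .select(["ss_id", "datetime_GMT"])
)

# ...and drop those rows from the generation data.
cleaned = df.join(flagged, on=["ss_id", "datetime_GMT"], how="anti")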

Using this dataset

Streaming from Hugging Face

You can stream data directly from Hugging Face like this:

import polars as pl
df = pl.scan_parquet(
    "hf://datasets/openclimatefix/uk_pv/30_minutely",
    hive_partitioning=True,
    hive_schema={"year": pl.Int16, "month": pl.Int8},
)

Because this dataset is gated, you'll first have to get an access token from your Hugging Face settings page, and set HF_TOKEN=<token> on your command line before running the Python snippet above.
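
If you prefer to authenticate from Python rather than via an environment variable, one option (a sketch using the huggingface_hub library) is:

from huggingface_hub import login

# Log this machine in to Hugging Face; paste your access token in place of "hf_...".
login(token="hf_...")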

(Note that if you want to use hive_partitioning in Polars, you'll have to wait for Polars PR #22661 to be merged, or compile Polars with that PR. Alternatively, you can use scan_parquet("hf://datasets/openclimatefix/uk_pv/30_minutely") without hive_partitioning or hive_schema.)

Downloading the dataset

Streaming from Hugging Face is slow, so it isn't practical if you're planning to use a lot of the data. It's best to download the data first: see the Hugging Face docs on downloading datasets. (Note that, on Ubuntu, you can install git lfs with apt install git-lfs. See the git lfs documentation for more info on installing git lfs on other platforms.)
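
Alternatively, you can download the whole repository programmatically with the huggingface_hub library (a sketch; adjust local_dir to taste):

import pathlib

from huggingface_hub import snapshot_download

# Requires authentication (HF_TOKEN or a prior login) because the dataset is gated.
snapshot_download(
    repo_id="openclimatefix/uk_pv",
    repo_type="dataset",
    local_dir=pathlib.Path("~/data/uk_pv").expanduser(),  # adjust to taste
)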

For the best performance when filtering by datetime_GMT, make sure that you only read the Parquet partitions that contain the data you need. In Polars, you have to explicitly filter by the Hive partitions year and month, like this:

from datetime import datetime

# Example sample window; replace with your own datetimes:
start_datetime = datetime(2021, 6, 1)
end_datetime = datetime(2021, 6, 30, 23, 59)

df.filter(
    # Select the Hive partitions for this sample (which doubles performance!)
    pl.date(pl.col("year"), pl.col("month"), day=1).is_between(
        start_datetime.date().replace(day=1),
        end_datetime.date().replace(day=1),
    ),
    # Select the precise date range for this sample:
    pl.col("datetime_GMT").is_between(start_datetime, end_datetime),
)

Known issues

  • There is no data at exactly midnight on the first day of each month, for the years 2010 to 2024.
  • There are some missing readings. This is inevitable with any large-scale real-world dataset. Sometimes there will be data missing for a single timestep. Sometimes longer periods of readings will be missing (days, weeks, or months).

Data cleaning

We have already dropped rows with NaNs, and dropped any rows in the half-hourly data where the generation_Wh is above 100,000 Wh.

We have not performed any further cleaning because data cleaning is slightly subjective, and "obviously wrong" data (like an absurdly high value) can indicate that all the readings near that value should also be dropped.

We recommend the following cleaning steps:

  • Remove all the periods of bad data described in bad_data.csv. (And feel free to suggest more periods of bad data!)
  • Remove all negative generation_Wh values. Consider also removing the readings immediately before and after any negative value (for a given ss_id). Or, if you're feeling really ruthless (!), drop the entire day of data whenever a solar system produces a negative value during that day (see the sketch after this list).
  • For the half-hourly data, remove any rows where the generation_Wh is more than that solar system's kWp x 750 (where kWp is from the metadata). (In principle, the highest legal generation_Wh should be kWp x 500: We multiply by 1,000 to get from kW to watts, and then divide by 2 to get from watts to watt-hours per half hour). We increase the threshold to 750 because some solar systems do sometimes generate more than their nominal capacity and/or perhaps the nominal capacity is slightly wrong.
  • For each day of data for a specific solar PV system:
    • Remove any day where the generation_Wh is non-zero at night.
    • Remove any day where generation_Wh is zero during daylight hours when there are significant amounts of irradiance. (Another way to find suspicious data is to compare each PV system's power output with the average of its geospatial neighbours.)
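
For example, here's a sketch of the "ruthless" option from the list above: dropping an entire day of data for a system whenever that system reports a negative generation_Wh on that day (assuming df is a Polars DataFrame with the usual three columns):

import polars as pl

# Tag each row with its calendar date.
df = df.with_columns(pl.col("datetime_GMT").dt.date().alias("date"))

# Find every (ss_id, date) that contains at least one negative reading...
bad_days = (
    df.filter(pl.col("generation_Wh") < 0)
    .select(["ss_id", "date"])
    .unique()
)

# ...and drop those whole days.
cleaned = df.join(bad_days, on=["ss_id", "date"], how="anti").drop("date")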

Here's a Python snippet for removing rows where the generation_Wh is higher than 750 x kWp:

import polars as pl
from datetime import date
import pathlib

# Change these paths!
PV_DATA_PATH = pathlib.Path("~/data/uk_pv").expanduser()
OUTPUT_PATH = pathlib.Path("~/data/uk_pv_cleaned/30_minutely").expanduser()

# Lazily open source data
df = pl.scan_parquet(
    PV_DATA_PATH / "30_minutely",
    hive_schema={"year": pl.Int16, "month": pl.Int8},
)

metadata = pl.read_csv(PV_DATA_PATH / "metadata.csv")

# Process one month of data at a time, to limit memory usage
months = pl.date_range(date(2010, 11, 1), date(2025, 4, 1), interval="1mo", eager=True)
for _first_day_of_month in months:
    output_path = OUTPUT_PATH / f"year={_first_day_of_month.year}" / f"month={_first_day_of_month.month}"
    output_path.mkdir(parents=True, exist_ok=True)
    (
        df.filter(
            # Select the Parquet partition for this month:
            pl.col.year == _first_day_of_month.year,
            pl.col.month == _first_day_of_month.month,
        )
        .join(metadata.select(["ss_id", "kWp"]).lazy(), on="ss_id", how="left")
        .filter(pl.col.generation_Wh < pl.col.kWp * 750)
        .drop(["year", "month", "kWp"])
        # Materialise this month before writing (write_parquet is a DataFrame method):
        .collect()
        .write_parquet(output_path / "data.parquet", statistics=True)
    )

Citing this dataset

For referencing, you can use the DOI 10.57967/hf/0878, or the full BibTeX reference:

@misc{open_climate_fix_2025,
    author    = { {Open Climate Fix} },
    title     = { uk_pv (Revision <ABBREVIATED GIT COMMIT HASH>) },
    year      = 2025,
    url       = { https://huggingface.co/datasets/openclimatefix/uk_pv },
    doi       = { 10.57967/hf/0878 },
    publisher = { Hugging Face }
}

Useful links

https://huggingface.co/docs/datasets/share - this repo was made by following this tutorial
