feat: update readmes and adjust home value forecasts
Changed files:

- README.md +188 -8
- data/README.md +3 -0
- processed/README.md +3 -0
- processed/home_value_forecasts/final.jsonl +3 -0
- processors/README.md +4 -0
- processors/home_value_forecasts.ipynb +34 -26
- tester.ipynb +74 -20
- zillow.py +11 -13
README.md
CHANGED
@@ -1,10 +1,190 @@
---
# language:
# - "List of ISO 639-1 code for your language"
# - lang1
# - lang2
pretty_name: "Zillow"
# tags:
# - tag1
# - tag2
license: "other"
# task_categories:
# - task1
# - task2
---

# Housing Data Provided by Zillow

## Updated 2023-02-01

This dataset contains several configs produced from files available at https://www.zillow.com/research/data/.

Supported configs:

<!-- list each with a short description (1 sentence) -->

- [`home_values`](https://huggingface.co/datasets/misikoff/zillow#home-values): Zillow Home Value Index (ZHVI) for all homes, mid-tier, bottom-tier, and top-tier homes.
- [`home_value_forecasts`](https://huggingface.co/datasets/misikoff/zillow#home-value-forecasts): Zillow Home Value Forecast (ZHVF), month-ahead, quarter-ahead, and year-ahead forecasts of the ZHVI.
- [`rentals`](https://huggingface.co/datasets/misikoff/zillow#rentals): Zillow Observed Rent Index (ZORI), a smoothed measure of the typical observed market-rate rent across a given region.
- [`for_sale_listings`](https://huggingface.co/datasets/misikoff/zillow#for-sale-listings): Median listing price, new listings, and new pending listings.
- [`sales`](https://huggingface.co/datasets/misikoff/zillow#sales): Median sale price, median sale price per square foot, and sales count.
- [`days_on_market`](https://huggingface.co/datasets/misikoff/zillow#days-on-market): Days to pending, days to close, share of listings with a price cut, and price cuts.
- [`new_constructions`](https://huggingface.co/datasets/misikoff/zillow#new-constructions): Median sale price, median sale price per square foot, and sales count for new construction homes.

## HOME VALUES

<!-- Zillow Home Value Index (ZHVI): A measure of the typical home value and market changes across a given region and housing type. It reflects the typical value for homes in the 35th to 65th percentile range. Available as a smoothed, seasonally adjusted measure and as a raw measure. -->

<!-- Zillow publishes top-tier ZHVI ($, typical value for homes within the 65th to 95th percentile range for a given region) and bottom-tier ZHVI ($, typical value for homes within the 5th to 35th percentile range for a given region). -->

<!-- Zillow also publishes ZHVI for all single-family residences ($, typical value for all single-family homes in a given region), for condo/co-ops ($), for all homes with 1, 2, 3, 4 and 5+ bedrooms ($), and the ZHVI per square foot ($, typical value of all homes per square foot, calculated by taking the estimated home value for each home in a given region and dividing it by the home's square footage). -->

<!-- Note: Starting with the January 2023 data release, and for all subsequent releases, the full ZHVI time series has been upgraded to harness the power of the neural Zestimate. -->

<!-- More information about what ZHVI is and how it's calculated is available on this overview page. Here's a handy ZHVI User Guide for information about properly citing and making calculations with this metric. -->

Base Columns

- `Region ID`: dtype="string", a unique identifier for the region
- `Size Rank`: dtype="int32", a rank of the region's size
- `Region`: dtype="string", the name of the region
- `Region Type`: dtype="string", the type of region
- `State`: dtype="string", the US state abbreviation for the state containing the region
- `Home Type`: dtype="string", the type of home
- `Date`: dtype="string", the date of the last day of the month for this data

Value Columns

- `Mid Tier ZHVI (Smoothed) (Seasonally Adjusted)`: dtype="float32"
- `Bottom Tier ZHVI (Smoothed) (Seasonally Adjusted)`: dtype="float32"
- `Top Tier ZHVI (Smoothed) (Seasonally Adjusted)`: dtype="float32"
- `ZHVI`: dtype="float32"
- `Mid Tier ZHVI`: dtype="float32"

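The ZHVI value columns support simple appreciation calculations. A minimal offline sketch (the two observations below are invented, not real ZHVI values):

```python
# Toy example of using ZHVI values for one region (numbers are made up):
# year-over-year appreciation from two end-of-December observations.
zhvi = {"2022-12-31": 320000.0, "2023-12-31": 336000.0}

dates = sorted(zhvi)
# Relative change between the two most recent observations.
yoy = zhvi[dates[-1]] / zhvi[dates[-2]] - 1
print(round(yoy, 4))  # (336000 - 320000) / 320000 = 0.05
```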
## HOME VALUES FORECASTS

<!-- Zillow Home Value Forecast (ZHVF): A month-ahead, quarter-ahead and year-ahead forecast of the Zillow Home Value Index (ZHVI). ZHVF is created using the all-homes, mid-tier cut of ZHVI and is available both raw and smoothed, seasonally adjusted. -->

<!-- Note: Starting with the January 2023 forecast (made available in February 2023), Zillow's Home Value Forecast is based on the upgraded ZHVI that harnesses the power of the neural Zestimate. More information about what ZHVI is and how it's calculated is available on this overview page. -->

Base Columns

- `Region ID`: dtype="string", a unique identifier for the region
- `Size Rank`: dtype="int32", a rank of the region's size
- `Region`: dtype="string", the name of the region
- `Region Type`: dtype="string", the type of region
- `State`: dtype="string", the US state abbreviation for the state containing the region
- `City`: dtype="string", the name of the city containing the region
- `Metro`: dtype="string", the name of the metro area containing the region
- `County`: dtype="string", the name of the county containing the region
- `Home Type`: dtype="string", the type of home
- `Date`: dtype="string", the date of these forecasts

Value Columns

- `Month Over Month % (Smoothed)`: dtype="float32"
- `Quarter Over Quarter % (Smoothed)`: dtype="float32"
- `Year Over Year % (Smoothed)`: dtype="float32"
- `Month Over Month % (Raw)`: dtype="float32"
- `Quarter Over Quarter % (Raw)`: dtype="float32"
- `Year Over Year % (Raw)`: dtype="float32"

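Because the value columns are percentage changes rather than dollar levels, turning a forecast into an implied future value requires a base ZHVI level. A sketch with made-up numbers (neither the row nor the base value is real ZHVF output):

```python
# Hypothetical forecast row shaped like this config (values are made up).
row = {
    "Region": "Chicago, IL",
    "Date": "2023-12-31",  # base date of the forecast
    "Month Over Month % (Raw)": 0.4,
    "Quarter Over Quarter % (Raw)": 1.2,
    "Year Over Year % (Raw)": 3.5,
}

def implied_value(base_zhvi: float, pct_change: float) -> float:
    """Apply a ZHVF percentage change (in percent) to a base ZHVI level."""
    return base_zhvi * (1 + pct_change / 100)

base = 290000.0  # assumed mid-tier ZHVI for the region on the base date
print(round(implied_value(base, row["Year Over Year % (Raw)"]), 2))
```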
## RENTALS

Base Columns

- `Region ID`: dtype="string", a unique identifier for the region
- `Size Rank`: dtype="int32", a rank of the region's size
- `Region`: dtype="string", the name of the region
- `Region Type`: dtype="string", the type of region
- `State`: dtype="string", the US state abbreviation for the state containing the region
- `Home Type`: dtype="string", the type of home
- `Date`: dtype="string", the date of the last day of the month for this data

Value Columns

- `Rent (Smoothed)`: dtype="float32", Zillow Observed Rent Index (ZORI): a smoothed measure of the typical observed market-rate rent across a given region. ZORI is a repeat-rent index that is weighted to the rental housing stock to ensure representativeness across the entire market, not just those homes currently listed for rent. The index is dollar-denominated by computing the mean of listed rents that fall into the 40th to 60th percentile range for all homes and apartments in a given region, which is weighted to reflect the rental housing stock.
- `Rent (Smoothed) (Seasonally Adjusted)`: dtype="float32", the same ZORI measure, seasonally adjusted.

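As a toy illustration of the 40th-to-60th-percentile mean mentioned in the ZORI description (Zillow's real methodology adds repeat-rent weighting and smoothing, so this is only a sketch, and the rents below are invented):

```python
def middle_band_mean(rents: list[float], lo: float = 0.4, hi: float = 0.6) -> float:
    """Mean of listed rents falling in the lo-to-hi percentile band (toy version)."""
    ordered = sorted(rents)
    n = len(ordered)
    # Slice out the middle band; keep at least one element.
    band = ordered[int(n * lo): max(int(n * lo) + 1, int(n * hi))]
    return sum(band) / len(band)

rents = [900, 1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 2000]
print(middle_band_mean(rents))  # mean of [1300, 1400] -> 1350.0
```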
## FOR-SALE LISTINGS

Base Columns

- `Region ID`: dtype="string", a unique identifier for the region
- `Size Rank`: dtype="int32", a rank of the region's size
- `Region`: dtype="string", the name of the region
- `Region Type`: dtype="string", the type of region
- `State`: dtype="string", the US state abbreviation for the state containing the region
- `Home Type`: dtype="string", the type of home
- `Date`: dtype="string", the date of the last day of the month for this data

Value Columns

- `Median Listing Price`: dtype="float32", the median price at which homes across various geographies were listed
- `Median Listing Price (Smoothed)`: dtype="float32", the median listing price, smoothed
- `New Listings`: dtype="int32", how many new listings came on the market in a given month
- `New Listings (Smoothed)`: dtype="int32", how many new listings came on the market in a given month, smoothed
- `New Pending`: dtype="int32", the count of listings that changed from for-sale to pending status on Zillow.com in a given time period
- `New Pending (Smoothed)`: dtype="int32", the count of new pending listings, smoothed

## SALES (TODO investigate columns)

<!-- Sale-to-List Ratio (mean/median): Ratio of sale vs. final list price. -->
<!-- Percent of Sales Below/Above List: Share of sales where the sale price was below/above the final list price; excludes homes sold for exactly the list price. -->

Base Columns

- `Region ID`: dtype="string", a unique identifier for the region
- `Size Rank`: dtype="int32", a rank of the region's size
- `Region`: dtype="string", the name of the region
- `Region Type`: dtype="string", the type of region
- `State`: dtype="string", the US state abbreviation for the state containing the region
- `Home Type`: dtype="string", the type of home
- `Date`: dtype="string", the date of the last day of the month for this data

Value Columns

- `Median Sale Price`: dtype="float32", the median price at which homes across various geographies were sold
- `Median Sale Price per Sqft`: dtype="float32", the median price per square foot at which homes across various geographies were sold
- `Sales Count`: dtype="int32", the "Sales Count Nowcast": the estimated number of unique properties that sold during the month, after accounting for the latency between when sales occur and when they are reported

## DAYS ON MARKET AND PRICE CUTS (TODO investigate columns more)

Days to Pending: How long it takes homes in a region to change to pending status on Zillow.com after first being shown as for sale. The reported figure indicates the number of days (mean or median) that it took for homes that went pending during the week being reported to go pending. This differs from the old "Days on Zillow" metric in that it excludes the in-contract period before a home sells.

Days to Close (mean/median): Number of days between the listing going pending and the sale date.

Share of Listings With a Price Cut: The number of unique properties with a list price at the end of the month that's less than the list price at the beginning of the month, divided by the number of unique properties with an active listing at some point during the month.

Price Cuts: The mean and median price cut for listings in a given region during a given time period, expressed as both dollars ($) and as a percentage (%) of list price.

Base Columns

- `Region ID`: dtype="string", a unique identifier for the region
- `Size Rank`: dtype="int32", a rank of the region's size
- `Region`: dtype="string", the name of the region
- `Region Type`: dtype="string", the type of region
- `State`: dtype="string", the US state abbreviation for the state containing the region
- `Home Type`: dtype="string", the type of home
- `Date`: dtype="string", the date of the last day of the week for this data

Value Columns

- `Mean Listings Price Cut Amount`: dtype="float32"
- `Mean Listings Price Cut Amount (Smoothed)`: dtype="float32"
- `Percent Listings Price Cut`: dtype="float32", the number of unique properties with a list price at the end of the month that's less than the list price at the beginning of the month, divided by the number of unique properties with an active listing at some point during the month
- `Percent Listings Price Cut (Smoothed)`: dtype="float32"
- `Median Days on Pending`: dtype="float32", the median number of days it takes for homes in a region to change to pending status on Zillow.com after first being shown as for sale
- `Median Days on Pending (Smoothed)`: dtype="float32", the same measure, smoothed

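The price-cut share can be sketched directly from its definition above (all listing prices below are invented):

```python
# Each tuple is (list price at start of month, list price at end of month)
# for one property with an active listing during the month; numbers are made up.
listings = [(500000, 480000), (350000, 350000), (425000, 410000), (600000, 605000)]

# A "price cut" means the end-of-month list price is below the start-of-month price.
cut = sum(1 for start, end in listings if end < start)
share = cut / len(listings)
print(share)  # 2 of 4 listings ended the month below their starting price -> 0.5
```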
## NEW CONSTRUCTION

Base Columns

- `Region ID`: dtype="string", a unique identifier for the region
- `Size Rank`: dtype="int32", a rank of the region's size
- `Region`: dtype="string", the name of the region
- `Region Type`: dtype="string", the type of region
- `State`: dtype="string", the US state abbreviation for the state containing the region
- `Home Type`: dtype="string", the type of home
- `Date`: dtype="string", the date of the last day of the month for this data

Value Columns

- `Median Sale Price`: dtype="float32", the median sale price of new construction homes that sold during the month in the specified region
- `Median Sale Price per Sqft`: dtype="float32", the median sale price per square foot of new construction homes that sold during the month in the specified region
- `Sales Count`: dtype="int32", the number of new construction homes that sold during the month in the specified region

## DEFINITIONS OF HOME TYPES

- All Homes: Zillow defines all homes as single-family, condominium, and co-operative homes with a county record. Unless specified, all series cover this segment of the housing stock.
- Condo/Co-op: Condominium and co-operative homes.
- Multifamily 5+ units: Units in buildings with 5 or more housing units that are not condominiums or co-ops.
- Duplex/Triplex/Quadplex: Housing units in buildings with 2, 3, or 4 housing units.

# Example Usage

```python
from datasets import load_dataset

dataset = load_dataset("misikoff/zillow", "home_values", trust_remote_code=True)
```
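Once a config is loaded, its rows behave like records. As an offline sketch (the rows below are made-up stand-ins for `home_value_forecasts` records, not real data), filtering by `Region Type` looks like:

```python
# Made-up rows shaped like home_value_forecasts records.
rows = [
    {"Region": "United States", "Region Type": "country", "Year Over Year % (Raw)": 3.1},
    {"Region": "New York, NY", "Region Type": "msa", "Year Over Year % (Raw)": 2.4},
    {"Region": "Los Angeles, CA", "Region Type": "msa", "Year Over Year % (Raw)": 1.9},
]

# Keep only metro-level (msa) rows.
msas = [r for r in rows if r["Region Type"] == "msa"]
print([r["Region"] for r in msas])  # ['New York, NY', 'Los Angeles, CA']
```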
data/README.md
ADDED
@@ -0,0 +1,3 @@
# Raw Data

This is the raw data downloaded directly from https://www.zillow.com/research/data/. It is processed by the processors, and the result is stored in the processed directory.
processed/README.md
ADDED
@@ -0,0 +1,3 @@
# Processed Data stored as *.jsonl files

This is where the processed files are stored, to be ingested by zillow.py.
processed/home_value_forecasts/final.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f0557666842cd52409b8f1509e1402270d649ddb9371f9992007422a807d29fa
size 8185465
processors/README.md
ADDED
@@ -0,0 +1,4 @@
# Processors

These processors build the processed files found in the `processed` directory. They ingest the raw data and prepare it for analysis.
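A minimal sketch of the kind of step these processors perform, mirroring the column renaming and the City/State split for `msa` rows seen in `home_value_forecasts.ipynb` (the one-row DataFrame is illustrative, not real Zillow data):

```python
import pandas as pd

# Illustrative raw row using Zillow's original column names.
raw = pd.DataFrame(
    {
        "RegionID": ["394913"],
        "RegionName": ["New York, NY"],
        "RegionType": ["msa"],
        "SizeRank": [1],
    }
)

# Rename raw Zillow columns to the names used in the dataset card.
df = raw.rename(
    columns={"RegionID": "Region ID", "RegionName": "Region", "SizeRank": "Size Rank"}
)

# For msa rows the region name is "City, ST": split it into City and State.
is_msa = df["RegionType"] == "msa"
df.loc[is_msa, "City"] = df.loc[is_msa, "Region"].str.split(", ").str[0]
df.loc[is_msa, "State"] = df.loc[is_msa, "Region"].str.split(", ").str[1]
print(df.loc[0, "City"], df.loc[0, "State"])  # New York NY
```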
processors/home_value_forecasts.ipynb
CHANGED
@@ -2,7 +2,7 @@
  "cells": [
   {
    "cell_type": "code",
-   "execution_count":
+   "execution_count": 1,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -12,7 +12,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count":
+   "execution_count": 2,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -25,7 +25,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count":
+   "execution_count": 3,
    "metadata": {},
    "outputs": [
     {
@@ -361,7 +361,7 @@
      "[21062 rows x 16 columns]"
     ]
    },
-   "execution_count":
+   "execution_count": 3,
    "metadata": {},
    "output_type": "execute_result"
   }
@@ -418,7 +418,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count":
+   "execution_count": 7,
    "metadata": {},
    "outputs": [
     {
@@ -442,15 +442,15 @@
     "  <thead>\n",
     "    <tr style=\"text-align: right;\">\n",
     "      <th></th>\n",
-    "      <th>
-    "      <th>
+    "      <th>Region ID</th>\n",
+    "      <th>Region</th>\n",
     "      <th>RegionType</th>\n",
-    "      <th>
+    "      <th>Size Rank</th>\n",
     "      <th>State</th>\n",
     "      <th>City</th>\n",
     "      <th>Metro</th>\n",
     "      <th>County</th>\n",
-    "      <th>
+    "      <th>Date</th>\n",
     "      <th>Month Over Month % (Smoothed)</th>\n",
     "      <th>Quarter Over Quarter % (Smoothed)</th>\n",
     "      <th>Year Over Year % (Smoothed)</th>\n",
@@ -664,20 +664,20 @@
     "</div>"
    ],
    "text/plain": [
-    "
-    "0
-    "1
-    "2
-    "3
-    "4
-    "...
-    "20162
-    "20163
-    "20164
-    "20165
-    "20166
+    "      Region ID  Region RegionType  Size Rank State  City  \\\n",
+    "0     102001  United States  country  0  NaN  NaN  \n",
+    "1     394913  New York, NY  msa  1  NY  New York  \n",
+    "2     753899  Los Angeles, CA  msa  2  CA  Los Angeles  \n",
+    "3     394463  Chicago, IL  msa  3  IL  Chicago  \n",
+    "4     394514  Dallas, TX  msa  4  TX  Dallas  \n",
+    "...   ...  ...  ...  ...  ...  ...  \n",
+    "20162  82097  55087  zip  39992  MN  Warsaw  \n",
+    "20163  85325  62093  zip  39992  IL  NaN  \n",
+    "20164  92085  77661  zip  39992  TX  NaN  \n",
+    "20165  92811  79078  zip  39992  TX  NaN  \n",
+    "20166  98183  95419  zip  39992  CA  Camp Meeker  \n",
     "\n",
-    "      Metro County
+    "      Metro County  Date  \\\n",
    "0  NaN  NaN  2023-12-31  \n",
    "1  NaN  NaN  2023-12-31  \n",
    "2  NaN  NaN  2023-12-31  \n",
@@ -732,7 +732,7 @@
     "[21062 rows x 15 columns]"
    ]
   },
-  "execution_count":
+  "execution_count": 7,
   "metadata": {},
   "output_type": "execute_result"
  }
@@ -756,12 +756,20 @@
    "\n",
    "final_df = combined_df[all_cols]\n",
    "final_df = final_df.drop(\"StateName\", axis=1)\n",
-   "final_df = final_df.rename(
+   "final_df = final_df.rename(\n",
+   "    columns={\n",
+   "        \"CountyName\": \"County\",\n",
+   "        \"BaseDate\": \"Date\",\n",
+   "        \"RegionName\": \"Region\",\n",
+   "        \"RegionID\": \"Region ID\",\n",
+   "        \"SizeRank\": \"Size Rank\",\n",
+   "    }\n",
+   ")\n",
    "\n",
    "# iterate over rows of final_df and populate State and City columns if the regionType is msa\n",
    "for index, row in final_df.iterrows():\n",
    "    if row[\"RegionType\"] == \"msa\":\n",
-   "        regionName = row[\"
+   "        regionName = row[\"Region\"]\n",
    "        # final_df.at[index, 'Metro'] = regionName\n",
    "\n",
    "        city = regionName.split(\", \")[0]\n",
@@ -775,7 +783,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count":
+   "execution_count": 8,
    "metadata": {},
    "outputs": [],
    "source": [
tester.ipynb
CHANGED
@@ -2,7 +2,7 @@
|
|
2 |
"cells": [
|
3 |
{
|
4 |
"cell_type": "code",
|
5 |
-
"execution_count":
|
6 |
"metadata": {},
|
7 |
"outputs": [],
|
8 |
"source": [
|
@@ -13,9 +13,71 @@
|
|
13 |
},
|
14 |
{
|
15 |
"cell_type": "code",
|
16 |
-
"execution_count":
|
17 |
"metadata": {},
|
18 |
"outputs": [
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
19 |
{
|
20 |
"name": "stdout",
|
21 |
"output_type": "stream",
|
@@ -24,29 +86,21 @@
|
|
24 |
]
|
25 |
},
|
26 |
{
|
27 |
-
"
|
28 |
-
"
|
29 |
-
"
|
30 |
-
|
31 |
-
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
|
32 |
-
"\u001b[0;31mUnboundLocalError\u001b[0m Traceback (most recent call last)",
|
33 |
-
"Cell \u001b[0;32mIn[14], line 12\u001b[0m\n\u001b[1;32m 10\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m config \u001b[38;5;129;01min\u001b[39;00m configs:\n\u001b[1;32m 11\u001b[0m \u001b[38;5;28mprint\u001b[39m(config)\n\u001b[0;32m---> 12\u001b[0m dataset \u001b[38;5;241m=\u001b[39m \u001b[43mload_dataset\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mmisikoff/zillow\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mconfig\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mtrust_remote_code\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mTrue\u001b[39;49;00m\u001b[43m)\u001b[49m\n",
|
34 |
-
"File \u001b[0;32m~/opt/anaconda3/envs/sta663/lib/python3.12/site-packages/datasets/load.py:2548\u001b[0m, in \u001b[0;36mload_dataset\u001b[0;34m(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)\u001b[0m\n\u001b[1;32m 2543\u001b[0m verification_mode \u001b[38;5;241m=\u001b[39m VerificationMode(\n\u001b[1;32m 2544\u001b[0m (verification_mode \u001b[38;5;129;01mor\u001b[39;00m VerificationMode\u001b[38;5;241m.\u001b[39mBASIC_CHECKS) \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m save_infos \u001b[38;5;28;01melse\u001b[39;00m VerificationMode\u001b[38;5;241m.\u001b[39mALL_CHECKS\n\u001b[1;32m 2545\u001b[0m )\n\u001b[1;32m 2547\u001b[0m \u001b[38;5;66;03m# Create a dataset builder\u001b[39;00m\n\u001b[0;32m-> 2548\u001b[0m builder_instance \u001b[38;5;241m=\u001b[39m \u001b[43mload_dataset_builder\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 2549\u001b[0m \u001b[43m \u001b[49m\u001b[43mpath\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mpath\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2550\u001b[0m \u001b[43m \u001b[49m\u001b[43mname\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mname\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2551\u001b[0m \u001b[43m \u001b[49m\u001b[43mdata_dir\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mdata_dir\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2552\u001b[0m \u001b[43m \u001b[49m\u001b[43mdata_files\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mdata_files\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2553\u001b[0m \u001b[43m \u001b[49m\u001b[43mcache_dir\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcache_dir\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2554\u001b[0m \u001b[43m 
\u001b[49m\u001b[43mfeatures\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mfeatures\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2555\u001b[0m \u001b[43m \u001b[49m\u001b[43mdownload_config\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mdownload_config\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2556\u001b[0m \u001b[43m \u001b[49m\u001b[43mdownload_mode\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mdownload_mode\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2557\u001b[0m \u001b[43m \u001b[49m\u001b[43mrevision\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mrevision\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2558\u001b[0m \u001b[43m \u001b[49m\u001b[43mtoken\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mtoken\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2559\u001b[0m \u001b[43m \u001b[49m\u001b[43mstorage_options\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstorage_options\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2560\u001b[0m \u001b[43m \u001b[49m\u001b[43mtrust_remote_code\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mtrust_remote_code\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2561\u001b[0m \u001b[43m \u001b[49m\u001b[43m_require_default_config_name\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mname\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;129;43;01mis\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mNone\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[1;32m 2562\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mconfig_kwargs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2563\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 2565\u001b[0m \u001b[38;5;66;03m# Return iterable dataset in case of streaming\u001b[39;00m\n\u001b[1;32m 2566\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m streaming:\n",
|
35 |
-
"File \u001b[0;32m~/opt/anaconda3/envs/sta663/lib/python3.12/site-packages/datasets/load.py:2257\u001b[0m, in \u001b[0;36mload_dataset_builder\u001b[0;34m(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)\u001b[0m\n\u001b[1;32m 2255\u001b[0m builder_cls \u001b[38;5;241m=\u001b[39m get_dataset_builder_class(dataset_module, dataset_name\u001b[38;5;241m=\u001b[39mdataset_name)\n\u001b[1;32m 2256\u001b[0m \u001b[38;5;66;03m# Instantiate the dataset builder\u001b[39;00m\n\u001b[0;32m-> 2257\u001b[0m builder_instance: DatasetBuilder \u001b[38;5;241m=\u001b[39m \u001b[43mbuilder_cls\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 2258\u001b[0m \u001b[43m \u001b[49m\u001b[43mcache_dir\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcache_dir\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2259\u001b[0m \u001b[43m \u001b[49m\u001b[43mdataset_name\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mdataset_name\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2260\u001b[0m \u001b[43m \u001b[49m\u001b[43mconfig_name\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mconfig_name\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2261\u001b[0m \u001b[43m \u001b[49m\u001b[43mdata_dir\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mdata_dir\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2262\u001b[0m \u001b[43m \u001b[49m\u001b[43mdata_files\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mdata_files\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2263\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43mhash\u001b[39;49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mdataset_module\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mhash\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2264\u001b[0m \u001b[43m \u001b[49m\u001b[43minfo\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43minfo\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2265\u001b[0m 
\u001b[43m \u001b[49m\u001b[43mfeatures\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mfeatures\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2266\u001b[0m \u001b[43m \u001b[49m\u001b[43mtoken\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mtoken\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2267\u001b[0m \u001b[43m \u001b[49m\u001b[43mstorage_options\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstorage_options\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2268\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mbuilder_kwargs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2269\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mconfig_kwargs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 2270\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 2271\u001b[0m builder_instance\u001b[38;5;241m.\u001b[39m_use_legacy_cache_dir_if_possible(dataset_module)\n\u001b[1;32m 2273\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m builder_instance\n",
36 | - "File \u001b[0;32m~/opt/anaconda3/envs/sta663/lib/python3.12/site-packages/datasets/builder.py:382\u001b[0m, in \u001b[0;36mDatasetBuilder.__init__\u001b[0;34m(self, cache_dir, dataset_name, config_name, hash, base_path, info, features, token, use_auth_token, repo_id, data_files, data_dir, storage_options, writer_batch_size, name, **config_kwargs)\u001b[0m\n\u001b[1;32m 379\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m info \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m 380\u001b[0m \u001b[38;5;66;03m# TODO FOR PACKAGED MODULES IT IMPORTS DATA FROM src/packaged_modules which doesn't make sense\u001b[39;00m\n\u001b[1;32m 381\u001b[0m info \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mget_exported_dataset_info()\n\u001b[0;32m--> 382\u001b[0m info\u001b[38;5;241m.\u001b[39mupdate(\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_info\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m)\n\u001b[1;32m 383\u001b[0m info\u001b[38;5;241m.\u001b[39mbuilder_name \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mname\n\u001b[1;32m 384\u001b[0m info\u001b[38;5;241m.\u001b[39mdataset_name \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mdataset_name\n",
37 | - "File \u001b[0;32m~/.cache/huggingface/modules/datasets_modules/datasets/misikoff--zillow/d642880e153f01354c57f69b68ea9e02d46260977e73b26b4c4853d95d4fccac/zillow.py:266\u001b[0m, in \u001b[0;36mNewDataset._info\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 234\u001b[0m \u001b[38;5;28;01melif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mconfig\u001b[38;5;241m.\u001b[39mname \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mhome_values\u001b[39m\u001b[38;5;124m\"\u001b[39m:\n\u001b[1;32m 235\u001b[0m features \u001b[38;5;241m=\u001b[39m datasets\u001b[38;5;241m.\u001b[39mFeatures(\n\u001b[1;32m 236\u001b[0m {\n\u001b[1;32m 237\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mRegion ID\u001b[39m\u001b[38;5;124m\"\u001b[39m: datasets\u001b[38;5;241m.\u001b[39mValue(dtype\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mstring\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;28mid\u001b[39m\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mRegion ID\u001b[39m\u001b[38;5;124m\"\u001b[39m),\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 260\u001b[0m }\n\u001b[1;32m 261\u001b[0m )\n\u001b[1;32m 262\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m datasets\u001b[38;5;241m.\u001b[39mDatasetInfo(\n\u001b[1;32m 263\u001b[0m \u001b[38;5;66;03m# This is the description that will appear on the datasets page.\u001b[39;00m\n\u001b[1;32m 264\u001b[0m description\u001b[38;5;241m=\u001b[39m_DESCRIPTION,\n\u001b[1;32m 265\u001b[0m \u001b[38;5;66;03m# This defines the different columns of the dataset and their types\u001b[39;00m\n\u001b[0;32m--> 266\u001b[0m features\u001b[38;5;241m=\u001b[39m\u001b[43mfeatures\u001b[49m, \u001b[38;5;66;03m# Here we define them above because they are different between the two configurations\u001b[39;00m\n\u001b[1;32m 267\u001b[0m \u001b[38;5;66;03m# If there's a common (input, target) tuple from the features, uncomment supervised_keys line below and\u001b[39;00m\n\u001b[1;32m 268\u001b[0m \u001b[38;5;66;03m# specify them. They'll be used if as_supervised=True in builder.as_dataset.\u001b[39;00m\n\u001b[1;32m 269\u001b[0m \u001b[38;5;66;03m# supervised_keys=(\"sentence\", \"label\"),\u001b[39;00m\n\u001b[1;32m 270\u001b[0m \u001b[38;5;66;03m# Homepage of the dataset for documentation\u001b[39;00m\n\u001b[1;32m 271\u001b[0m homepage\u001b[38;5;241m=\u001b[39m_HOMEPAGE,\n\u001b[1;32m 272\u001b[0m \u001b[38;5;66;03m# License for the dataset if available\u001b[39;00m\n\u001b[1;32m 273\u001b[0m license\u001b[38;5;241m=\u001b[39m_LICENSE,\n\u001b[1;32m 274\u001b[0m \u001b[38;5;66;03m# Citation for the dataset\u001b[39;00m\n\u001b[1;32m 275\u001b[0m citation\u001b[38;5;241m=\u001b[39m_CITATION,\n\u001b[1;32m 276\u001b[0m )\n",
38 | - "\u001b[0;31mUnboundLocalError\u001b[0m: cannot access local variable 'features' where it is not associated with a value"
39 |   ]
40 |   }
41 |   ],
42 |   "source": [
43 |   "configs = [\n",
44 | - "
45 | - "
46 | - "
47 | - "
48 | - "
49 | - "
50 |   " \"days_on_market\",\n",
51 |   "]\n",
52 |   "for config in configs:\n",
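The `UnboundLocalError` in the removed traceback above is the bug this commit works around: in `_info`, `features` is assigned only inside `if`/`elif` branches on `self.config.name`, so a config name that matches no branch reaches the final `return datasets.DatasetInfo(...)` with `features` never bound. A minimal sketch of that failure mode and one possible guard (the feature dicts here are placeholders, not the script's real schemas):

```python
def build_features(config_name: str) -> dict:
    # Mirrors the branching in NewDataset._info: each known config
    # binds `features`; an unknown name previously fell through and
    # raised UnboundLocalError at the final return.
    if config_name == "home_value_forecasts":
        features = {"Region ID": "string", "Size Rank": "int32"}
    elif config_name == "home_values":
        features = {"Region ID": "string", "Date": "string"}
    else:
        # Guard: fail with a clear message instead of letting the
        # unbound local surface far from the real cause.
        raise ValueError(f"unknown config: {config_name!r}")
    return features
```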
|
|
2 |   "cells": [
3 |   {
4 |   "cell_type": "code",
5 | + "execution_count": 4,
6 |   "metadata": {},
7 |   "outputs": [],
8 |   "source": [
|
|
13 |   },
14 |   {
15 |   "cell_type": "code",
16 | + "execution_count": 7,
17 |   "metadata": {},
18 |   "outputs": [
19 | + {
20 | + "name": "stdout",
21 | + "output_type": "stream",
22 | + "text": [
23 | + "home_value_forecasts\n",
24 | + "new_constructions\n",
25 | + "for_sale_listings\n"
26 | + ]
27 | + },
28 | + {
29 | + "name": "stderr",
30 | + "output_type": "stream",
31 | + "text": [
32 | + "Downloading data: 100%|██████████| 215M/215M [00:05<00:00, 37.3MB/s] \n",
33 | + "Generating train split: 693661 examples [00:20, 34052.02 examples/s]\n"
34 | + ]
35 | + },
36 | + {
37 | + "name": "stdout",
38 | + "output_type": "stream",
39 | + "text": [
40 | + "rentals\n"
41 | + ]
42 | + },
43 | + {
44 | + "name": "stderr",
45 | + "output_type": "stream",
46 | + "text": [
47 | + "Downloading data: 100%|██████████| 413M/413M [00:12<00:00, 34.2MB/s] \n",
48 | + "Generating train split: 1258740 examples [00:28, 44715.39 examples/s]\n"
49 | + ]
50 | + },
51 | + {
52 | + "name": "stdout",
53 | + "output_type": "stream",
54 | + "text": [
55 | + "sales\n"
56 | + ]
57 | + },
58 | + {
59 | + "name": "stderr",
60 | + "output_type": "stream",
61 | + "text": [
62 | + "Downloading data: 100%|██████████| 280M/280M [00:06<00:00, 41.1MB/s] \n",
63 | + "Generating train split: 504608 examples [00:19, 25569.29 examples/s]\n"
64 | + ]
65 | + },
66 | + {
67 | + "name": "stdout",
68 | + "output_type": "stream",
69 | + "text": [
70 | + "home_values\n"
71 | + ]
72 | + },
73 | + {
74 | + "name": "stderr",
75 | + "output_type": "stream",
76 | + "text": [
77 | + "Downloading data: 100%|██████████| 47.3M/47.3M [00:01<00:00, 29.7MB/s]\n",
78 | + "Generating train split: 117912 examples [00:03, 35540.83 examples/s]\n"
79 | + ]
80 | + },
81 |   {
82 |   "name": "stdout",
83 |   "output_type": "stream",
|
|
86 |   ]
87 |   },
88 |   {
89 | + "name": "stderr",
90 | + "output_type": "stream",
91 | + "text": [
92 | + "Generating train split: 586714 examples [00:16, 34768.33 examples/s]\n"
93 |   ]
94 |   }
95 |   ],
96 |   "source": [
97 |   "configs = [\n",
98 | + " \"home_value_forecasts\",\n",
99 | + " \"new_constructions\",\n",
100 | + " \"for_sale_listings\",\n",
101 | + " \"rentals\",\n",
102 | + " \"sales\",\n",
103 | + " \"home_values\",\n",
104 |   " \"days_on_market\",\n",
105 |   "]\n",
106 |   "for config in configs:\n",
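The notebook cell above iterates over the config names and loads each one, printing the name as it goes (the stdout/stderr streams in the cell output reflect that loop). A hedged sketch of the loop (its body isn't shown in this hunk; in the notebook the loader would be `datasets.load_dataset("misikoff/zillow", config)`, injected here as a parameter so the loop can run without a network fetch):

```python
# Config names taken from the diff above; the loop body is illustrative.
CONFIGS = [
    "home_value_forecasts",
    "new_constructions",
    "for_sale_listings",
    "rentals",
    "sales",
    "home_values",
    "days_on_market",
]

def load_all(load_fn):
    # In the notebook this would call:
    #   datasets.load_dataset("misikoff/zillow", config)
    # The loader is injected so the loop itself is exercisable offline.
    results = {}
    for config in CONFIGS:
        print(config)  # mirrors the stdout lines in the cell output
        results[config] = load_fn("misikoff/zillow", config)
    return results
```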
zillow.py
CHANGED
@@ -37,15 +37,13 @@ _DESCRIPTION = """\
37 |   This new dataset is designed to solve this great NLP task and is crafted with a lot of care.
38 |   """
39 |
40 | -
41 | - _HOMEPAGE = ""
42 |
43 |   # TODO: Add the licence for the dataset here if you can find it
44 |   _LICENSE = ""
45 |
46 |
47 | -
48 | - class NewDataset(datasets.GeneratorBasedBuilder):
49 |   """TODO: Short description of my dataset."""
50 |
51 |   VERSION = datasets.Version("1.1.0")
@@ -88,21 +86,21 @@ class NewDataset(datasets.GeneratorBasedBuilder):
88 |   ),
89 |   ]
90 |
91 | - DEFAULT_CONFIG_NAME = "
92 |
93 |   def _info(self):
94 |   if self.config.name == "home_value_forecasts":
95 |   features = datasets.Features(
96 |   {
97 | - "
98 | - "
99 | - "
100 |   "RegionType": datasets.Value(dtype="string", id="RegionType"),
101 |   "State": datasets.Value(dtype="string", id="State"),
102 |   "City": datasets.Value(dtype="string", id="City"),
103 |   "Metro": datasets.Value(dtype="string", id="Metro"),
104 |   "County": datasets.Value(dtype="string", id="County"),
105 | - "
106 |   "Month Over Month % (Smoothed)": datasets.Value(
107 |   dtype="float32", id="Month Over Month % (Smoothed)"
108 |   ),
@@ -347,15 +345,15 @@ class NewDataset(datasets.GeneratorBasedBuilder):
347 |   data = json.loads(row)
348 |   if self.config.name == "home_value_forecasts":
349 |   yield key, {
350 | - "
351 | - "
352 | - "
353 |   "RegionType": data["RegionType"],
354 |   "State": data["State"],
355 |   "City": data["City"],
356 |   "Metro": data["Metro"],
357 |   "County": data["County"],
358 | - "
359 |   "Month Over Month % (Smoothed)": data[
360 |   "Month Over Month % (Smoothed)"
361 |   ],
|
|
37 |   This new dataset is designed to solve this great NLP task and is crafted with a lot of care.
38 |   """
39 |
40 | + _HOMEPAGE = "https://huggingface.co/datasets/misikoff/zillow"
41 |
42 |   # TODO: Add the licence for the dataset here if you can find it
43 |   _LICENSE = ""
44 |
45 |
46 | + class Zillow(datasets.GeneratorBasedBuilder):
47 |   """TODO: Short description of my dataset."""
48 |
49 |   VERSION = datasets.Version("1.1.0")
|
|
86 |   ),
87 |   ]
88 |
89 | + DEFAULT_CONFIG_NAME = ""
90 |
91 |   def _info(self):
92 |   if self.config.name == "home_value_forecasts":
93 |   features = datasets.Features(
94 |   {
95 | + "Region ID": datasets.Value(dtype="string", id="Region ID"),
96 | + "Size Rank": datasets.Value(dtype="int32", id="Size Rank"),
97 | + "Region": datasets.Value(dtype="string", id="Region"),
98 |   "RegionType": datasets.Value(dtype="string", id="RegionType"),
99 |   "State": datasets.Value(dtype="string", id="State"),
100 |   "City": datasets.Value(dtype="string", id="City"),
101 |   "Metro": datasets.Value(dtype="string", id="Metro"),
102 |   "County": datasets.Value(dtype="string", id="County"),
103 | + "Date": datasets.Value(dtype="string", id="Date"),
104 |   "Month Over Month % (Smoothed)": datasets.Value(
105 |   dtype="float32", id="Month Over Month % (Smoothed)"
106 |   ),
|
|
345 |   data = json.loads(row)
346 |   if self.config.name == "home_value_forecasts":
347 |   yield key, {
348 | + "Region ID": data["Region ID"],
349 | + "Size Rank": data["Size Rank"],
350 | + "Region": data["Region"],
351 |   "RegionType": data["RegionType"],
352 |   "State": data["State"],
353 |   "City": data["City"],
354 |   "Metro": data["Metro"],
355 |   "County": data["County"],
356 | + "Date": data["Date"],
357 |   "Month Over Month % (Smoothed)": data[
358 |   "Month Over Month % (Smoothed)"
359 |   ],
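The `_generate_examples` hunk above reads the processed JSONL file line by line and yields one record per line, now including the added `Region ID`, `Size Rank`, `Region`, and `Date` keys. A self-contained sketch of that JSONL pattern (field names match the diff; the sample rows are made up for illustration and stand in for `processed/home_value_forecasts/final.jsonl`):

```python
import io
import json

def generate_examples(fp):
    # Mirror of the pattern in _generate_examples: one JSON object
    # per line, yielded as the (key, example) pairs that a
    # GeneratorBasedBuilder expects.
    for key, row in enumerate(fp):
        data = json.loads(row)
        yield key, {
            "Region ID": data["Region ID"],
            "Size Rank": data["Size Rank"],
            "Region": data["Region"],
            "Date": data["Date"],
        }

# Two illustrative rows; values are hypothetical, not from the dataset.
sample = io.StringIO(
    '{"Region ID": "102001", "Size Rank": 0, "Region": "United States", "Date": "2023-02-28"}\n'
    '{"Region ID": "394913", "Size Rank": 1, "Region": "New York, NY", "Date": "2023-02-28"}\n'
)
examples = dict(generate_examples(sample))
```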