---
language:
- en
pipeline_tag: text-classification
widget:
- title: Rating Example
  text: '4.7'
- title: Reviews Example
  text: (188)
- title: Reviews Example 2
  text: '188'
- title: Reviews Example 3
  text: No Reviews
- title: Price Example
  text: $
- title: Type Example
  text: Coffee shop
- title: Address Example
  text: Frederick, MD
- title: Address Example 2
  text: 552 W 48th St
- title: Address Example 3
  text: In Hilton Hotel
- title: Hours Example
  text: Closed
- title: Hours Example 2
  text: Opens 7 AM Fri
- title: Hours Example 3
  text: Permanently closed
- title: Service Option Example
  text: Dine-in
- title: Service Option Example 2
  text: Takeout
- title: Service Option Example 3
  text: Delivery
- title: Phone Example
  text: (301) 000-0000
- title: Years In Business Example
  text: 5+ Years in Business
- title: Button Text Example
  text: Directions
- title: Description Example
  text: 'Provides: Auto maintenance'
license: mit
datasets:
- serpapi/local-results-en
tags:
- scraping
- parsing
- serp
- api
- opensource
---

# BERT-Based Classification Model for Google Local Listings

This repository contains a BERT-based classification model developed with the Hugging Face library and trained on a dataset gathered by SerpApi's Google Local API. The model is designed to classify different texts extracted from Google Local Listings.
You may check out the open-source GitHub repository that contains the source code of the Ruby gem `google-local-results-ai-parser`.
## Usage and Classification for Parsing
The example code below shows how to call the model from Python via the Hugging Face Inference API for prototyping. You may use other programming languages to call the endpoint, and you may parallelize your requests. Note that the prototyping endpoint allows only a limited number of calls. For production purposes or large-scale prototyping, consider setting up a dedicated Inference Endpoint on Hugging Face, or a private API server for serving the model.
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/serpapi/bert-base-local-results"
headers = {"Authorization": "Bearer xxxxx"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": "5540 N Lamar Blvd #12, Austin, TX 78756, United States",
})
```
Output: address
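As an alternative to the hosted Inference API, the checkpoint can also be run locally. The snippet below is a minimal sketch using the `transformers` pipeline; it assumes the model files are publicly downloadable and that `transformers` and a backend such as `torch` are installed.

```python
from transformers import pipeline

# Minimal local sketch, assuming the checkpoint can be downloaded from the Hub.
classifier = pipeline(
    "text-classification",
    model="serpapi/bert-base-local-results",
)

texts = ["5540 N Lamar Blvd #12, Austin, TX 78756, United States", "4.7", "(188)"]
for text in texts:
    prediction = classifier(text)[0]  # top prediction: {"label": ..., "score": ...}
    print(f"{text!r} -> {prediction['label']} ({prediction['score']:.3f})")
```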
## Strong Features

The BERT-based model excels in the following areas:

- Differentiating difficult semantic similarities with ease
  - `"No Reviews"` → `reviews`
  - `"(5K+)"` → `reviews`
- Handling partial texts that can be combined later
  - `"Open ⋅ Closes 5 pm"`
    - `"Open"` → `hours`
    - `"Closes 5 pm"` → `hours`
- Handling vocabulary from diverse areas with ease
  - `"Doctor"` → `type`
  - `"Restaurant"` → `type`
- Returning an assurance score for after-correction
  - `"4.7"` → `rating` (0.999)
- Staying strong against grammatical mistakes
  - `"Krebside Pickup"` → `service options`
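The assurance scores highlighted above come with every prediction. The snippet below is a minimal sketch of reading them through the `query` helper from the usage example; it assumes the standard text-classification response shape of the Inference API (a list of `{"label", "score"}` entries per input), and the 0.9 threshold is an illustrative choice rather than a recommendation from this card.

```python
# Minimal sketch: read the top label and its assurance score for one input.
# Assumes the Inference API returns [[{"label": ..., "score": ...}, ...]] per input.
predictions = query({"inputs": "4.7"})[0]
top = max(predictions, key=lambda p: p["score"])
print(top["label"], round(top["score"], 3))  # e.g. rating 0.999

# Low-assurance predictions can be routed to traditional after-correction code.
CONFIDENCE_THRESHOLD = 0.9  # illustrative value, not taken from the model card
needs_review = top["score"] < CONFIDENCE_THRESHOLD
```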
## Parts Covered and Corresponding Keys in SerpApi Parsers

- Type of Place: `type`
- Number of Reviews: `reviews`
- Phone Number: `phone`
- Rating: `rating`
- Address: `address`
- Operating Hours: `hours`
- Description or Descriptive Review: `description`
- Expensiveness: `expensiveness`
- Service Options: `service options`
- Button Text: `links`
- Years in Business: `years_in_business`
Please refer to the documentation of SerpApi's Google Local API and Google Local Pack API for more details on different parts:
References:
- SerpApi's Google Local API: https://serpapi.com/google-local-api
- SerpApi's Google Local Pack API: https://serpapi.com/local-pack
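To give a concrete picture of how these labels can drive parsing, the sketch below classifies each raw text fragment of a listing with the `query` helper from the usage example and groups the fragments under their predicted labels. The helper name and the response-shape assumption are illustrative, not part of SerpApi's parser.

```python
from collections import defaultdict

def group_fragments_by_label(fragments):
    """Group raw listing fragments under the label predicted for each one.

    Illustrative helper built on the `query` function from the usage example.
    Assumes the Inference API returns [[{"label": ..., "score": ...}, ...]]
    for a single input string.
    """
    grouped = defaultdict(list)
    for fragment in fragments:
        predictions = query({"inputs": fragment})[0]
        top = max(predictions, key=lambda p: p["score"])
        grouped[top["label"]].append(fragment)
    return dict(grouped)

listing = ["Coffee shop", "4.7", "(188)", "$", "Frederick, MD", "Open", "Closes 5 pm"]
print(group_fragments_by_label(listing))
# e.g. {"type": ["Coffee shop"], "rating": ["4.7"], "reviews": ["(188)"], ...}
```

Partial fragments that share a label, such as the two `hours` pieces above, can then be joined back together in the traditional parsing code.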
## Known Limitations

The model has a few limitations that should be taken into account:
- The model does not classify the title of a place. This is because the title often contains many elements that can easily be confused with other parts, even to the human eye.
- The `label` key is not covered by the model, as it can be easily handled with traditional code.
- In some cases, `button text` could be classified as `service options` or `address`. However, this can be easily avoided by checking whether a text comes from a button in the traditional part of the code (see the sketch after this list); the `button text` label is only there to catch such edge cases.
  - `"Delivery"` → `service options` [Correct label is `button text`]
  - `"Share"` → `address` [Correct label is `button text`]
- In some cases, the model may classify a portion of the `description` as `hours` if the description is about operating hours. For example:
  - `"Drive through: Open ⋅ Closes 12 AM"`
    - `"Drive through: Open"` → `description`
    - `"Closes 12 AM"` → `hours`
- In some cases, the model may classify some `description` as `type`, because some descriptions read much like types. For example:
  - `"Iconic Seattle-based coffeehouse chain"` → `type` [Correct label is `description`]
- In some cases, the model may classify some `button text` as `hours`. This is most likely a deficiency in the training dataset, and may be resolved in coming versions. For example:
  - `"Expand more"` → `hours` [Correct label is `button text`]
- In some cases, the model may classify some `service options` as `type`. This is most likely a deficiency in the training dataset, and may be resolved in coming versions. For example:
  - `"Takeaway"` → `type` [Correct label is `service options`]
- In some cases, the model may classify some `reviews` as `rating` or `price`. This is most likely a deficiency in the training dataset, and may be resolved in coming versions. For example:
  - `"(1.4K)"` → `rating` [Correct label is `reviews`]
  - `"(1.6K)"` → `price` [Correct label is `reviews`]
- In some cases, the model may classify some `service options` as `description` or `type`. The confusion with `description` comes from a recent change in how these values are categorized in SerpApi keys; the training data contains labels from before that change. For example:
  - `"On-site services"` → `type` [Correct label is `service options`]
  - `"Online appointments"` → `description` [Correct label is `service options`]
- The model may be susceptible to errors on one-word entries. These are a minority of cases, and they can be corrected with assurance scores (see the sketch after this list). For example:
  - `"Sushi"` → `address` (0.984), `type` (0.0493) [Correct label is `type`]
  - `"Diagorou 4"` → `address` (0.999) [Correct `address` in the same listing]
- The model cannot differentiate between extra parts that are extracted by SerpApi's Google Local API and Google Local Pack API. Those parts are not feasible to extract with a classification model.
- The model is not designed for listings outside the English language.
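As noted in the items above, several of these confusions can be handled outside the model: texts known to come from buttons can be relabeled directly, and low-assurance one-word entries can be deferred to traditional parsing. The following is a rough sketch of such after-correction; the `is_button` flag, the helper name, and the 0.9 threshold are illustrative assumptions, not part of the model or of SerpApi's parser.

```python
def correct_label(text, predictions, is_button=False, threshold=0.9):
    """Apply simple after-corrections on top of the model's raw output.

    `predictions` is the list of {"label", "score"} dicts returned for one
    input by the Inference API. `is_button` has to be determined by the
    scraper (for example, from the HTML element the text was taken from);
    the 0.9 threshold is an illustrative choice, not a value from this card.
    """
    top = max(predictions, key=lambda p: p["score"])

    # Button texts such as "Directions" or "Delivery" are known from the HTML,
    # so the prediction can be overridden outright.
    if is_button:
        return "button text"

    # Low-assurance one-word entries (e.g. "Sushi") are deferred to the
    # traditional parsing code instead of being trusted blindly.
    if len(text.split()) == 1 and top["score"] < threshold:
        return None

    return top["label"]
```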
## Disclaimer
We value full transparency and painful honesty both in our internal and external communications. We believe a world with complete and open transparency is a better world.
However, while we strive for transparency, there are certain situations where sharing specific datasets may not be feasible or advisable. In the case of the dataset used to train our model, which contains different parts of a Google Local Listing including addresses and phone numbers, we have made a careful decision not to share it. We prioritize the well-being and safety of individuals, and sharing this dataset could potentially cause harm to people whose personal information is included.
Protecting the privacy and security of individuals is of utmost importance to us. Disclosing personal information, such as addresses and phone numbers, without proper consent or safeguards could lead to privacy violations, identity theft, harassment, or other forms of misuse. Our commitment to responsible data usage means that we handle sensitive information with great care and take appropriate measures to ensure its protection.
While we understand the value of transparency, we also recognize the need to strike a balance between transparency and safeguarding individuals' privacy and security. In this particular case, the potential harm that could result from sharing the dataset outweighs the benefits of complete transparency. By prioritizing privacy, we aim to create a safer and more secure environment for all individuals involved.
We appreciate your understanding and support in our commitment to responsible and ethical data practices. If you have any further questions or concerns, please feel free to reach out to us.