Columns:
code: string (lengths 501 to 5.19M)
package: string (lengths 2 to 81)
path: string (lengths 9 to 304)
filename: string (lengths 4 to 145)
Copyright 2023 Zymbit Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
zymbitwalletsdk
/zymbitwalletsdk-1.0.0.tar.gz/zymbitwalletsdk-1.0.0/LICENSE.md
LICENSE.md
# Zymbit Wallet Python SDK

## Overview

Ethereum accounts, signatures, and transactions have an additional layer of complexity over traditional cryptographic keys and signatures. The Zymbit Wallet SDK aims to abstract away this complexity, enabling you to create and manage multiple blockchain wallets and seamlessly integrate with various blockchains without having to deal with their technical intricacies.

The first iteration of the SDK encapsulates all wallet creation, management, and use (sending transactions and interacting with dApps) capabilities for Ethereum and EVM-compatible chains.

If you are a developer interested in creating your own custom implementations of Accounts and/or Keyrings to work with ZymbitKeyringManager, you should explore this repository further. By extending the Account and [Keyring Abstract Base Classes (ABCs)](https://docs.python.org/3/library/abc.html), you can implement the required methods and any additional functionality as needed. The elliptic curves we support (secp256k1, secp256r1, and ed25519) are used by many major blockchains, including Bitcoin, Ethereum, Cardano, Solana, and Polkadot. Developing your own keyrings can be beneficial for a wide range of applications, such as key management or on-chain interactions like sending transactions or interacting with smart contracts.

**NOTE:** Only compatible with [HSM6](https://www.zymbit.com/hsm6/), [SCM](https://www.zymbit.com/scm/), and [SEN](https://www.zymbit.com/secure-compute-node/).

## Installation

```
pip install zymbitwalletsdk
```

## Documentation

[Zymbit Wallet Python SDK Documentation](https://docs.zymbit.com/zymbit-wallet-sdk/zymbit-wallet-python-sdk/)
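The extension point described above follows Python's standard `abc` pattern. As a purely illustrative sketch (the class and method names `Account`, `get_address`, and `sign_message` are assumptions for demonstration, not the SDK's documented interface), a custom account might be structured like this:

```python
from abc import ABC, abstractmethod


class Account(ABC):
    """Hypothetical stand-in for the SDK's Account ABC; names are illustrative only."""

    @abstractmethod
    def get_address(self) -> str:
        """Return the account's public address."""

    @abstractmethod
    def sign_message(self, message: bytes) -> bytes:
        """Sign a message and return the signature bytes."""


class DemoAccount(Account):
    """Toy implementation that satisfies the abstract interface."""

    def __init__(self, address: str) -> None:
        self._address = address

    def get_address(self) -> str:
        return self._address

    def sign_message(self, message: bytes) -> bytes:
        # A real implementation would delegate signing to the Zymbit module,
        # so private keys never leave the secure hardware.
        raise NotImplementedError("signing requires the secure hardware element")


account = DemoAccount("0x0000000000000000000000000000000000000000")
print(account.get_address())
```

Consult the linked documentation for the actual ABCs and for how implementations register with `ZymbitKeyringManager`.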
zymbitwalletsdk
/zymbitwalletsdk-1.0.0.tar.gz/zymbitwalletsdk-1.0.0/README.md
README.md
# zyme > Short blurb about what your product does. [![PyPI][pypi-image]][pypi-url] [![Downloads][downloads-image]][downloads-url] [![Status][status-image]][pypi-url] [![Python Version][python-version-image]][pypi-url] [![Format][format-image]][pypi-url] [![Requirements][requirements-status-image]][requirements-status-url] [![tests][tests-image]][tests-url] [![Codecov][codecov-image]][codecov-url] [![CodeFactor][codefactor-image]][codefactor-url] [![Codeclimate][codeclimate-image]][codeclimate-url] [![Lgtm alerts][lgtm-alerts-image]][lgtm-alerts-url] [![Lgtm quality][lgtm-quality-image]][lgtm-quality-url] [![CodeQl][codeql-image]][codeql-url] [![readthedocs][readthedocs-image]][readthedocs-url] [![pre-commit][pre-commit-image]][pre-commit-url] [![pre-commit.ci status][pre-commit.ci-image]][pre-commit.ci-url] [![Imports: isort][isort-image]][isort-url] [![Code style: black][black-image]][black-url] [![Checked with mypy][mypy-image]][mypy-url] [![security: bandit][bandit-image]][bandit-url] [![Commitizen friendly][commitizen-image]][commitizen-url] [![Conventional Commits][conventional-commits-image]][conventional-commits-url] [![DeepSource][deepsource-image]][deepsource-url] [![license][license-image]][license-url] One to two paragraph statement about your product and what it does. ![](assets/header.png) ## Installation OS X & Linux: ```sh pip3 install zyme ``` Windows: ```sh pip install zyme ``` ## Usage example A few motivating and useful examples of how your product can be used. Spice this up with code blocks and potentially more screenshots. _For more examples and usage, please refer to the [Wiki][wiki]._ ## Development setup Describe how to install all development dependencies and how to run an automated test-suite of some kind. Potentially do this for multiple platforms. ```sh pip install --editable zyme ``` ## Documentation ### - [**Read the Docs**](https://zyme.readthedocs.io/en/latest/) ### - [**Wiki**](https://github.com/Stephen-RA-King/zyme/wiki) ## Meta [![](assets/linkedin.png)](https://linkedin.com/in/stephen-k-3a4644210) [![](assets/github.png)](https://github.com/Stephen-RA-King) [![](assets/pypi.png)](https://pypi.org/project/zyme/) [![](assets/www.png)](https://www.justpython.tech) [![](assets/email.png)](mailto:stephen.ra.king@gmail.com) [![](assets/cv.png)](https://www.justpython.tech/cv) Stephen R A King : stephen.ra.king@gmail.com Distributed under the MIT license. See [license](license-url) for more information. 
[https://github.com/Stephen-RA-King/zyme](https://github.com/Stephen-RA-King/zyme) Created with Cookiecutter template: [**cc_template**][cc_template-url] version 1.1.1 <!-- Markdown link & img dfn's --> [bandit-image]: https://img.shields.io/badge/security-bandit-yellow.svg [bandit-url]: https://github.com/PyCQA/bandit [black-image]: https://img.shields.io/badge/code%20style-black-000000.svg [black-url]: https://github.com/psf/black [cc_template-url]: https://github.com/Stephen-RA-King/cc_template [codeclimate-image]: https://api.codeclimate.com/v1/badges/7fc352185512a1dab75d/maintainability [codeclimate-url]: https://codeclimate.com/github/Stephen-RA-King/zyme/maintainability [codecov-image]: https://codecov.io/gh/Stephen-RA-King/zyme/branch/main/graph/badge.svg [codecov-url]: https://app.codecov.io/gh/Stephen-RA-King/zyme [codefactor-image]: https://www.codefactor.io/repository/github/Stephen-RA-King/zyme/badge [codefactor-url]: https://www.codefactor.io/repository/github/Stephen-RA-King/zyme [codeql-image]: https://github.com/Stephen-RA-King/zyme/actions/workflows/codeql-analysis.yml/badge.svg [codeql-url]: https://github.com/Stephen-RA-King/zyme/actions/workflows/codeql-analysis.yml [commitizen-image]: https://img.shields.io/badge/commitizen-friendly-brightgreen.svg [commitizen-url]: http://commitizen.github.io/cz-cli/ [conventional-commits-image]: https://img.shields.io/badge/Conventional%20Commits-1.0.0-yellow.svg?style=flat-square [conventional-commits-url]: https://conventionalcommits.org [deepsource-image]: https://static.deepsource.io/deepsource-badge-light-mini.svg [deepsource-url]: https://deepsource.io/gh/Stephen-RA-King/zyme/?ref=repository-badge [downloads-image]: https://static.pepy.tech/personalized-badge/zyme?period=total&units=international_system&left_color=black&right_color=orange&left_text=Downloads [downloads-url]: https://pepy.tech/project/zyme [format-image]: https://img.shields.io/pypi/format/zyme [isort-image]: https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336 [isort-url]: https://github.com/pycqa/isort/ [lgtm-alerts-image]: https://img.shields.io/lgtm/alerts/g/Stephen-RA-King/zyme.svg?logo=lgtm&logoWidth=18 [lgtm-alerts-url]: https://lgtm.com/projects/g/Stephen-RA-King/zyme/alerts/ [lgtm-quality-image]: https://img.shields.io/lgtm/grade/python/g/Stephen-RA-King/zyme.svg?logo=lgtm&logoWidth=18 [lgtm-quality-url]: https://lgtm.com/projects/g/Stephen-RA-King/zyme/context:python [license-image]: https://img.shields.io/pypi/l/zyme [license-url]: https://github.com/Stephen-RA-King/zyme/blob/main/license [mypy-image]: http://www.mypy-lang.org/static/mypy_badge.svg [mypy-url]: http://mypy-lang.org/ [pre-commit-image]: https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white [pre-commit-url]: https://github.com/pre-commit/pre-commit [pre-commit.ci-image]: https://results.pre-commit.ci/badge/github/Stephen-RA-King/gitwatch/main.svg [pre-commit.ci-url]: https://results.pre-commit.ci/latest/github/Stephen-RA-King/gitwatch/main [pypi-url]: https://pypi.org/project/zyme/ [pypi-image]: https://img.shields.io/pypi/v/zyme.svg [python-version-image]: https://img.shields.io/pypi/pyversions/zyme [readthedocs-image]: https://readthedocs.org/projects/zyme/badge/?version=latest [readthedocs-url]: https://zyme.readthedocs.io/en/latest/?badge=latest [requirements-status-image]: https://requires.io/github/Stephen-RA-King/zyme/requirements.svg?branch=main [requirements-status-url]: 
https://requires.io/github/Stephen-RA-King/zyme/requirements/?branch=main [status-image]: https://img.shields.io/pypi/status/zyme.svg [tests-image]: https://github.com/Stephen-RA-King/zyme/actions/workflows/tests.yml/badge.svg [tests-url]: https://github.com/Stephen-RA-King/zyme/actions/workflows/tests.yml [wiki]: https://github.com/Stephen-RA-King/zyme/wiki
zyme
/zyme-0.1.1.tar.gz/zyme-0.1.1/README.md
README.md
Zymp
====

Zymp is a Python library to design "restriction site arrays", which are compact sequences with many restriction sites. For instance, here is a 159-nucleotide sequence made with Zymp, with 49 enzyme recognition sites (out of the 52 provided). That's a frequency of around 3 nucleotides per site:

.. image:: https://raw.githubusercontent.com/Edinburgh-Genome-Foundry/zymp/master/docs/_static/images/example_array.png
   :width: 800

Infos
-----

**PIP installation:**

.. code:: bash

   pip install zymp

**Github Page:** `<https://github.com/Edinburgh-Genome-Foundry/zymp>`_

**License:** MIT, Copyright Edinburgh Genome Foundry

More biology software
---------------------

.. image:: https://raw.githubusercontent.com/Edinburgh-Genome-Foundry/Edinburgh-Genome-Foundry.github.io/master/static/imgs/logos/egf-codon-horizontal.png
   :target: https://edinburgh-genome-foundry.github.io/

Zymp is part of the `EGF Codons <https://edinburgh-genome-foundry.github.io/>`_ synthetic biology software suite for DNA design, manufacturing and validation.
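A minimal usage sketch, trimmed down from the example in the project's full README (the short enzyme selection below is illustrative only):

.. code:: python

    from zymp import stacked_sites_array

    # Illustrative subset of enzyme names; the full README example uses 52 of them.
    enzymes_names = ['EcoRI', 'BamHI', 'HindIII', 'NotI', 'XhoI']

    seq, sites_in_seq, leftover = stacked_sites_array(
        enzymes_names, forbidden_enzymes=['BsmBI', 'BsaI'],
        unique_sites=True, tries=100)
    print("Length:", len(seq), "Sites:", sites_in_seq, "Left out:", leftover)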
zymp
/zymp-0.1.3.tar.gz/zymp-0.1.3/pypi-readme.rst
pypi-readme.rst
.. raw:: html

    <p align="center">
    <img alt="Zymp" title="Zymp" src="https://raw.githubusercontent.com/Edinburgh-Genome-Foundry/zymp/master/docs/_static/images/title.png" width="300">
    <br />
    </p>

.. image:: https://github.com/Edinburgh-Genome-Foundry/zymp/actions/workflows/build.yml/badge.svg
   :target: https://github.com/Edinburgh-Genome-Foundry/zymp/actions/workflows/build.yml
   :alt: GitHub CI build status

.. image:: https://coveralls.io/repos/github/Edinburgh-Genome-Foundry/zymp/badge.svg?branch=master
   :target: https://coveralls.io/github/Edinburgh-Genome-Foundry/zymp?branch=master

**Zymp** is a Python library to produce small sequences of DNA packed with enzyme restriction sites. You specify the enzymes you want, the ones you don't want, whether you want the sites to be unique, or any other condition, and Zymp will attempt to find a compact sequence that satisfies all of this (it really focuses on sequence shortness).

**Warning:** Zymp is implemented with a "whatever works well enough" philosophy. It has a lot of "whatever" but it generally works "well enough". The algorithm is greedy with many simplifications, so don't expect perfect solutions.

Examples
--------

Here is how you design a sequence:

.. code:: python

    from zymp import (stacked_sites_array, plot_sequence_sites,
                      annotate_enzymes_sites, write_record)

    enzymes_names = [
        'AccI', 'AclI', 'AflII', 'AflIII', 'AgeI', 'ApaLI', 'AseI', 'AvaI',
        'BamHI', 'BanII', 'BlnI', 'BmtI', 'BsmI', 'BssHII', 'DdeI', 'DraI',
        'Eco47III', 'EcoRI', 'EcoRV', 'HindII', 'HindIII', 'HinfI', 'HpaI',
        'KpnI', 'MfeI', 'MluI', 'MspA1I', 'MunI', 'NaeI', 'NcoI', 'NdeI',
        'NheI', 'NotI', 'NsiI', 'NspI', 'PstI', 'PvuI', 'PvuII', 'SacI',
        'SacII', 'SalI', 'ScaI', 'SfaNI', 'SnaBI', 'SpeI', 'SphI', 'SspI',
        'StyI', 'VspI', 'XhoI', 'XmaI', 'ZraI'
    ]
    forbidden_enzymes = ['BsmBI', 'BsaI']

    # DESIGN AN OPTIMIZED SEQUENCE WITH ZYMP
    seq, sites_in_seq, leftover = stacked_sites_array(
        enzymes_names, forbidden_enzymes=forbidden_enzymes,
        unique_sites=True, tries=100)
    print("Sequence length:", len(seq),
          "\nRestriction sites:", len(sites_in_seq),
          "\nSites not included: ", leftover)

    # PLOT A SUMMARY
    ax = plot_sequence_sites(seq, enzymes_names)
    ax.figure.savefig("stacked_array.pdf", bbox_inches='tight')

    # WRITE THE SEQUENCE AND SITE ANNOTATIONS AS A RECORD
    record = annotate_enzymes_sites(
        seq, enzymes_names, forbidden_enzymes=forbidden_enzymes)
    write_record(record, 'stacked_site_array.gb')

**Plot output:**

.. raw:: html

    <p align="center">
    <img alt="stacked array" title="stacked array" src="https://raw.githubusercontent.com/Edinburgh-Genome-Foundry/zymp/master/docs/_static/images/example_array.png" width="800">
    <br />
    </p>

**Console output:**

.. code:: bash

    Sequence length: 159
    Restriction sites: 49
    Sites not included: {'NcoI', 'HpaI', 'SacII'}

Zymp has created a 159-nucleotide sequence with 49 of the 52 restriction sites we specified, that's only ~3 nucleotides per site! And the sequence is free of BsmBI and BsaI sites, so it is compatible with Golden Gate assembly.

If NcoI and HpaI are your favorite enzymes, you may be disappointed that they are not in the final sequence. Zymp allows you to add validity conditions for the result:

.. code:: python

    from zymp import stacked_sites_array

    def success_condition(seq, sites_in_seq, leftover):
        return {'NcoI', 'HpaI'}.issubset(sites_in_seq)

    seq, sites_in_seq, leftover = stacked_sites_array(
        enzymes_names, forbidden_enzymes=forbidden_enzymes, tries=100,
        success_condition=success_condition)
    print("Sequence length:", len(seq),
          "\nRestriction sites:", len(sites_in_seq),
          "\nSites not included: ", leftover)

**New console output:**

.. code:: bash

    Sequence length: 158
    Restriction sites: 47
    Sites not included: {'SacII', 'SacI', 'XhoI', 'BlnI', 'XmaI'}

Installation
------------

You can install zymp through PIP:

.. code::

    pip install zymp

Alternatively, you can unzip the sources in a folder and type:

.. code::

    python setup.py install

License = MIT
-------------

Zymp is an open-source software originally written at the `Edinburgh Genome Foundry <http://genomefoundry.org>`_ by `Zulko <https://github.com/Zulko>`_ and `released on Github <https://github.com/Edinburgh-Genome-Foundry/zymp>`_ under the MIT licence (Copyright 2018 Edinburgh Genome Foundry). Everyone is welcome to contribute!

More biology software
---------------------

.. image:: https://raw.githubusercontent.com/Edinburgh-Genome-Foundry/Edinburgh-Genome-Foundry.github.io/master/static/imgs/logos/egf-codon-horizontal.png
   :target: https://edinburgh-genome-foundry.github.io/

Zymp is part of the `EGF Codons <https://edinburgh-genome-foundry.github.io/>`_ synthetic biology software suite for DNA design, manufacturing and validation.
zymp
/zymp-0.1.3.tar.gz/zymp-0.1.3/README.rst
README.rst
import os import shutil import sys import tempfile import tarfile import optparse import subprocess from distutils import log try: from site import USER_SITE except ImportError: USER_SITE = None DEFAULT_VERSION = "0.9.6" DEFAULT_URL = "https://pypi.python.org/packages/source/s/setuptools/" def _python_cmd(*args): args = (sys.executable,) + args return subprocess.call(args) == 0 def _install(tarball, install_args=()): # extracting the tarball tmpdir = tempfile.mkdtemp() log.warn('Extracting in %s', tmpdir) old_wd = os.getcwd() try: os.chdir(tmpdir) tar = tarfile.open(tarball) _extractall(tar) tar.close() # going in the directory subdir = os.path.join(tmpdir, os.listdir(tmpdir)[0]) os.chdir(subdir) log.warn('Now working in %s', subdir) # installing log.warn('Installing Setuptools') if not _python_cmd('setup.py', 'install', *install_args): log.warn('Something went wrong during the installation.') log.warn('See the error message above.') # exitcode will be 2 return 2 finally: os.chdir(old_wd) shutil.rmtree(tmpdir) def _build_egg(egg, tarball, to_dir): # extracting the tarball tmpdir = tempfile.mkdtemp() log.warn('Extracting in %s', tmpdir) old_wd = os.getcwd() try: os.chdir(tmpdir) tar = tarfile.open(tarball) _extractall(tar) tar.close() # going in the directory subdir = os.path.join(tmpdir, os.listdir(tmpdir)[0]) os.chdir(subdir) log.warn('Now working in %s', subdir) # building an egg log.warn('Building a Setuptools egg in %s', to_dir) _python_cmd('setup.py', '-q', 'bdist_egg', '--dist-dir', to_dir) finally: os.chdir(old_wd) shutil.rmtree(tmpdir) # returning the result log.warn(egg) if not os.path.exists(egg): raise IOError('Could not build the egg.') def _do_download(version, download_base, to_dir, download_delay): egg = os.path.join(to_dir, 'setuptools-%s-py%d.%d.egg' % (version, sys.version_info[0], sys.version_info[1])) if not os.path.exists(egg): tarball = download_setuptools(version, download_base, to_dir, download_delay) _build_egg(egg, tarball, to_dir) sys.path.insert(0, egg) import setuptools setuptools.bootstrap_install_from = egg def use_setuptools(version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir, download_delay=15): # making sure we use the absolute path to_dir = os.path.abspath(to_dir) was_imported = 'pkg_resources' in sys.modules or \ 'setuptools' in sys.modules try: import pkg_resources except ImportError: return _do_download(version, download_base, to_dir, download_delay) try: pkg_resources.require("setuptools>=" + version) return except pkg_resources.VersionConflict: e = sys.exc_info()[1] if was_imported: sys.stderr.write( "The required version of setuptools (>=%s) is not available,\n" "and can't be installed while this script is running. Please\n" "install a more recent version first, using\n" "'easy_install -U setuptools'." "\n\n(Currently using %r)\n" % (version, e.args[0])) sys.exit(2) else: del pkg_resources, sys.modules['pkg_resources'] # reload ok return _do_download(version, download_base, to_dir, download_delay) except pkg_resources.DistributionNotFound: return _do_download(version, download_base, to_dir, download_delay) def download_setuptools(version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir, delay=15): """Download setuptools from a specified location and return its filename `version` should be a valid setuptools version number that is available as an egg for download under the `download_base` URL (which should end with a '/'). `to_dir` is the directory where the egg will be downloaded. 
`delay` is the number of seconds to pause before an actual download attempt. """ # making sure we use the absolute path to_dir = os.path.abspath(to_dir) try: from urllib.request import urlopen except ImportError: from urllib2 import urlopen tgz_name = "setuptools-%s.tar.gz" % version url = download_base + tgz_name saveto = os.path.join(to_dir, tgz_name) src = dst = None if not os.path.exists(saveto): # Avoid repeated downloads try: log.warn("Downloading %s", url) src = urlopen(url) # Read/write all in one block, so we don't create a corrupt file # if the download is interrupted. data = src.read() dst = open(saveto, "wb") dst.write(data) finally: if src: src.close() if dst: dst.close() return os.path.realpath(saveto) def _extractall(self, path=".", members=None): """Extract all members from the archive to the current working directory and set owner, modification time and permissions on directories afterwards. `path' specifies a different directory to extract to. `members' is optional and must be a subset of the list returned by getmembers(). """ import copy import operator from tarfile import ExtractError directories = [] if members is None: members = self for tarinfo in members: if tarinfo.isdir(): # Extract directories with a safe mode. directories.append(tarinfo) tarinfo = copy.copy(tarinfo) tarinfo.mode = 448 # decimal for oct 0700 self.extract(tarinfo, path) # Reverse sort directories. if sys.version_info < (2, 4): def sorter(dir1, dir2): return cmp(dir1.name, dir2.name) directories.sort(sorter) directories.reverse() else: directories.sort(key=operator.attrgetter('name'), reverse=True) # Set correct owner, mtime and filemode on directories. for tarinfo in directories: dirpath = os.path.join(path, tarinfo.name) try: self.chown(tarinfo, dirpath) self.utime(tarinfo, dirpath) self.chmod(tarinfo, dirpath) except ExtractError: e = sys.exc_info()[1] if self.errorlevel > 1: raise else: self._dbg(1, "tarfile: %s" % e) def _build_install_args(options): """ Build the arguments to 'python setup.py install' on the setuptools package """ install_args = [] if options.user_install: if sys.version_info < (2, 6): log.warn("--user requires Python 2.6 or later") raise SystemExit(1) install_args.append('--user') return install_args def _parse_args(): """ Parse the command line for options """ parser = optparse.OptionParser() parser.add_option( '--user', dest='user_install', action='store_true', default=False, help='install in user site package (requires Python 2.6 or later)') parser.add_option( '--download-base', dest='download_base', metavar="URL", default=DEFAULT_URL, help='alternative URL from where to download the setuptools package') options, args = parser.parse_args() # positional arguments are ignored return options def main(version=DEFAULT_VERSION): """Install or upgrade setuptools and EasyInstall""" options = _parse_args() tarball = download_setuptools(download_base=options.download_base) return _install(tarball, _build_install_args(options)) if __name__ == '__main__': sys.exit(main())
zymp
/zymp-0.1.3.tar.gz/zymp-0.1.3/ez_setup.py
ez_setup.py
MIT License Copyright (c) 2018 The Python Packaging Authority Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
zymptest
/zymptest-0.2.2.tar.gz/zymptest-0.2.2/README.rst
README.rst
import random import sys import time sys.path.append(r'../../') import requests from threadpool import ThreadPool, makeRequests from oyospider.common.db_operate import MySQLdbHelper class ProxyIPHelper(object): def __init__(self): self.proxy_ip_table = "dm_proxy_ip_t" self.mydb = MySQLdbHelper() def get_usable_proxy_ip(self): sql = "select * from dm_proxy_ip_t" records = self.mydb.executeSql(sql) for record in records: print("get_usable_proxy_ip=" + record[1]) return records def get_usable_anon_proxy_ip(self): """获取可用的高匿 代理IP """ sql = "SELECT * FROM dm_proxy_ip_t p WHERE p.anon LIKE '%高匿%' AND DATE_FORMAT( succTime, '%Y-%m-%d' ) = ( SELECT DATE_FORMAT( max( succTime ), '%Y-%m-%d' ) FROM dm_proxy_ip_t )" records = self.mydb.executeSql(sql) # for record in records: # print record[1] return records def get_usable_anon_proxy_ip_str(self): records = self.get_usable_anon_proxy_ip() ip_port = [] for t in records: ip_port.append("http://" + t[1] + ":" + t[2]) return ip_port def find_all_proxy_ip(self): """ 查出所有代理IP """ db_helper = MySQLdbHelper() # proxy_ip_list = db_helper.select("proxy_ip", fields=["protocol", "ip", "port"]) # proxy_ip_list = db_helper.executeSql("select protocol,ip,port from proxy_ip where 1=1 limit 1") proxy_ip_list = db_helper.executeSql( "SELECT protocol,ip,port,source FROM proxy_ip as t order by t.id DESC limit 2000;") return proxy_ip_list def find_china_proxy_ip(self, limit): """ 查出中国境内代理IP,作为打底数据 """ db_helper = MySQLdbHelper() # proxy_ip_list = db_helper.select("proxy_ip", fields=["protocol", "ip", "port"]) sql = "select protocol,ip,`port`,source from proxy_ip t where 1=1 and ( t.area like '%山东%' or t.area like '%江苏%' " \ "or t.area like '%上海%' or t.area like '%浙江%' or t.area like '%安徽%' or t.area like '%福建%' or t.area like '%江西%' " \ "or t.area like '%广东%' or t.area like '%广西%' or t.area like '%海南%' or t.area like '%河南%' or t.area like '%湖南%' " \ "or t.area like '%湖北%' or t.area like '%北京%' or t.area like '%天津%' or t.area like '%河北%' or t.area like '%山西%' " \ "or t.area like '%内蒙%' or t.area like '%宁夏%' or t.area like '%青海%' or t.area like '%陕西%' or t.area like '%甘肃%' " \ "or t.area like '%新疆%' or t.area like '%四川%' or t.area like '%贵州%' or t.area like '%云南%' or t.area like '%重庆%' " \ "or t.area like '%西藏%' or t.area like '%辽宁%' or t.area like '%吉林%' or t.area like '%黑龙%' or t.area like '%香港%' " \ "or t.area like '%澳门%' or t.area like '%台湾%') order by t.create_time desc limit " + str(limit) proxy_ip_list = db_helper.executeSql(sql) return proxy_ip_list def callback_test(self, request, result): print("callback_test") def get_all_proxy_ip_useable(self, target_site, target_url, put_proxy_to_redis): """ 测试指定URL代理的有效性 """ proxy_ip_list = self.find_all_proxy_ip() # useable_ip_list = [] batchno = int(round(time.time() * 1000)) # timestamp = int(round(time.time())) par_list = [] for proxy_ip in proxy_ip_list: paras = [] paras.append(proxy_ip[0]) paras.append(proxy_ip[1]) paras.append(proxy_ip[2]) paras.append(proxy_ip[3]) paras.append(target_site) paras.append(target_url) paras.append(batchno) paras.append(put_proxy_to_redis) par_list.append((paras, None)) # print paras print(par_list) pool = ThreadPool(50) requests = makeRequests(self.test_proxy_ip_useable1, par_list, self.callback_test) for req in requests: pool.putRequest(req) pool.wait() # for proxy_ip in proxy_ip_list: # # protocol = proxy_ip[0] # # ip = proxy_ip[1] # # port = proxy_ip[2] # # test_proxy_id = self.test_proxy_ip_useable(proxy_ip[0], proxy_ip[1], proxy_ip[2], target_url) # print "proxy_ip = " + 
str(test_proxy_id) # if test_proxy_id: # put_proxy_to_redis(proxy_ip[0], proxy_ip[1], proxy_ip[2]) # useable_ip_list.append(test_proxy_id) # # redis_helper # return useable_ip_list # redis_helper def test_proxy_ip_useable(self, protocol, ip, port, target_url): proxy = "" if protocol: proxy = protocol + "://" + ip + ":" + port else: proxy = "http://" + ip + ":" + port # proxy ="18017115578:194620chao@"+ ip + port # user_agent_list = RotateUserAgentMiddleware() user_agent_list = [ \ "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1" \ "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11", \ "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6", \ "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6", \ "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1", \ "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5", \ "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5", \ "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3", \ "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3", \ "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3", \ "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3", \ "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3", \ "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3", \ "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3", \ "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3", \ "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3", \ "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24", \ "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24" ] headers = { "User-Agent": random.choice(user_agent_list) } proxy_obj = requests.utils.urlparse(proxy) if proxy_obj.scheme.upper() == 'HTTP': test_url = target_url test_proxies = { "http": proxy_obj.netloc } elif proxy_obj.scheme.upper() == 'HTTPS': test_url = target_url test_proxies = { "https": proxy_obj.netloc } if test_proxies: # 测试代理有效性 try: print("proxy:'%s',test_url:'%s'" % (proxy, test_url)) response = requests.head(test_url, headers=headers, proxies=test_proxies, timeout=8) print("proxy:'%s',test_url:'%s',status_code:'%s'" % (proxy, test_url, response.status_code)) if response.status_code == 200: # return proxy_ip return protocol, ip, port except Exception as e: print(e) else: return None def test_proxy_ip_useable1(self, protocol, ip, port, source, target_site, target_url, batchno, put_proxy_to_redis): proxy = "" if protocol: proxy = protocol + "://" + ip + ":" + port else: proxy = "http://" + ip + ":" + port # user_agent_list = RotateUserAgentMiddleware() user_agent_list = [ \ "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 
Safari/537.1" \ "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11", \ "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6", \ "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6", \ "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1", \ "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5", \ "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5", \ "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3", \ "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3", \ "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3", \ "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3", \ "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3", \ "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3", \ "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3", \ "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3", \ "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3", \ "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24", \ "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24" ] headers = { "User-Agent": random.choice(user_agent_list) } proxy_obj = requests.utils.urlparse(proxy) if proxy_obj.scheme.upper() == 'HTTP': test_url = target_url test_proxies = { "http": proxy_obj.netloc } elif proxy_obj.scheme.upper() == 'HTTPS': test_url = target_url test_proxies = { "https": proxy_obj.netloc } if test_proxies: # 测试代理有效性 try: print("proxy:'%s',test_url:'%s',source:'%s'" % (proxy, test_url, source)) response = requests.head(test_url, headers=headers, proxies=test_proxies, timeout=8) print("proxy:'%s',test_url:'%s',source:'%s',status_code:'%s'" % ( proxy, test_url, source, response.status_code)) if response.status_code == 200: # return proxy_ip if put_proxy_to_redis: print("put_proxy_to_redis:%s,%s,%s" % (protocol, ip, port)) put_proxy_to_redis(protocol, ip, port, source, target_site, batchno, 60 * 15) return protocol, ip, port except Exception as e: print(e) else: return None
zymtest2
/zymtest2-0.1.1-py3-none-any.whl/pytest/proxy_ip_oper.py
proxy_ip_oper.py
import datetime import json import sys import time import urllib2 sys.path.append(r'../../') from oyospider.common.db_operate import MySQLdbHelper reload(sys) sys.setdefaultencoding('utf-8') def send_monitor_info(): db_helper = MySQLdbHelper() sql = """ SELECT ht.ota_name, ht.ota_hotel_count, tp.hotel_crawl_count, room_price_count, DATE_FORMAT(begin_time,'%%Y-%%m-%%d %%H:%%i') begin_time, DATE_FORMAT(end_time,'%%Y-%%m-%%d %%T') end_time, DATE_FORMAT(checkin_date,'%%Y-%%m-%%d') checkin_date, batch_no FROM ( SELECT h.ota_name, count( 1 ) ota_hotel_count FROM dm_hotel_monitor_ota_map_t h WHERE h.ota_hotel_url <> '' AND h.ota_hotel_url <> '/' GROUP BY h.ota_name ) ht INNER JOIN ( SELECT t.ota_name, count( DISTINCT t.ota_hotel_id ) hotel_crawl_count, count( 1 ) room_price_count, min( create_time ) begin_time, max( create_time ) end_time, t.checkin_date, t.batch_no batch_no FROM hotel_room_price_monitor t WHERE t.create_time >= '%s' AND t.create_time < '%s' GROUP BY t.ota_name, t.checkin_date, DATE_FORMAT( t.create_time, '%%Y-%%m-%%d %%H' ) ORDER BY t.ota_name ) tp WHERE ht.ota_name = tp.ota_name and ht.ota_name = '%s' order by ota_name ,batch_no desc """ end_time = datetime.datetime.strptime(time.strftime('%Y-%m-%d %H', time.localtime(time.time())) + ":59:59", "%Y-%m-%d %H:%M:%S") end_time_str = datetime.datetime.strftime(end_time, "%Y-%m-%d %H:%M:%S") begin_time_str = datetime.datetime.strftime(end_time + datetime.timedelta(hours=-3), "%Y-%m-%d %H:%M:%S") send_url = "https://oapi.dingtalk.com/robot/send?access_token=3b0cb4f0d390d8b3d12d76c198d733c780ebc0532f876d9e7801c6ff011f3da1" for ota_name in ["ctrip", "meituan"]: record = db_helper.executeSql(sql % (begin_time_str, end_time_str, ota_name)) msg_body = [] hotel_count = 0 for r in record: hotel_count = r[1] msg_body.append( " > ###### 爬取时间:%s \n\n > ###### 入住日期:%s \n\n> ###### 酒店总数:%s \n\n > ###### 房价总数:%s \n\n ###### \n\n" % ( r[4], r[6], r[2], r[3])) head_msg = " #### 全网最低价项目 #### \n\n %s 最近三次爬取统计:\n\n ##### 映射酒店总数:%s \n\n ———————————————— \n\n " % ( ota_name, hotel_count) head_msg = head_msg + "\n\n ———————————————— \n\n".join(msg_body) # 发送消息 post_data = {'msgtype': 'markdown', 'markdown': {'title': '全网最低价', 'text': head_msg} } headers = {'Content-Type': 'application/json; charset=utf-8'} req = urllib2.Request(url=send_url, headers=headers, data=json.dumps(post_data)) res_data = urllib2.urlopen(req) res = res_data.read() print res def send_scrapy_log_info(): print "test" if __name__ == '__main__': send_monitor_info()
zymtest2
/zymtest2-0.1.1-py3-none-any.whl/pytest/ding_talk_warn.py
ding_talk_warn.py
import sys import threading import time import schedule sys.path.append(r'../../') from oyospider.common.get_meituan_token import MeiTuanTokenHelper from oyospider.common.proxy_ip_pull_redis import RedisIPHelper from oyospider.common.redis_operate import RedisHelper def get_all_proxy_to_db_and_redis_job(): redis_helper = RedisHelper() ctrip_thread = threading.Thread(target=redis_helper.load_usable_proxy_ip_to_redis, args=("ctrip", "https://hotels.ctrip.com/hotel/428365.html",)) ctrip_thread.start() meituan_thread = threading.Thread(target=redis_helper.load_usable_proxy_ip_to_redis, args=("meituan", "https://www.meituan.com/jiudian/157349277/",)) meituan_thread.start() ip_thread = threading.Thread(target=redis_helper.get_database_proxy_ip) ip_thread.start() def get_dailiyun_proxy_to_redis_job(): redis_helper = RedisIPHelper() ctrip_thread = threading.Thread(target=redis_helper.load_usable_proxy_ip_to_redis, args=("ctrip", "https://hotels.ctrip.com/hotel/428365.html",)) ctrip_thread.start() meituan_thread = threading.Thread(target=redis_helper.load_usable_proxy_ip_to_redis, args=("meituan", "https://www.meituan.com/jiudian/157349277/",)) meituan_thread.start() def get_meituan_token(): meituan_helper = MeiTuanTokenHelper() meituan_token_thread = threading.Thread(target=meituan_helper.start_requests) meituan_token_thread.start() if __name__ == '__main__': try: get_all_proxy_to_db_and_redis_job() get_dailiyun_proxy_to_redis_job() get_meituan_token() # schedule.every(10).minutes.do(get_all_proxy_to_db_and_redis_job) schedule.every(2).minutes.do(get_dailiyun_proxy_to_redis_job) schedule.every(20).seconds.do(get_meituan_token) except Exception as e: print(e) # while True: try: schedule.run_pending() time.sleep(1) except Exception as e: print(e) # num = [1, 3, 6, 4, 2, ] # for i in range(3): # print i, num[i]
zymtest2
/zymtest2-0.1.1-py3-none-any.whl/pytest/schedule_task.py
schedule_task.py
import re import sys import time sys.path.append(r'../../') import requests from oyospider.common.db_operate import MySQLdbHelper class ProxyIpExtractHelper(object): """ 从各网获取代理IP操作类 """ def get_from_xiguan(self, fetch_num): """ 西瓜代理提取接口,并入库 接口文档:http://www.xiguadaili.com/api """ for protocol in ["http", "https"]: if not fetch_num: fetch_num = "100" # protocol = "http" api_url = "http://api3.xiguadaili.com/ip/?tid=556077616504319&category=2&show_area=true&show_operator=true&num=%s&protocol=%s" % ( fetch_num, protocol) # api_url = "http://dly.134t.com/query.txt?key=NPBF565B9C&word=&count=%s"%(fetch_num) # api_url = "http://svip.kdlapi.com/api/getproxy/?orderid=963803204081436&num=%s&b_pcchrome=1&b_pcie=1&b_pcff=1&protocol=2&method=2&an_an=1&an_ha=1&sep=1"%(fetch_num) print("get_from_xiguan url = " + api_url) proxy_ips = [] response = requests.get(api_url) res = response.text # print res if res: ip_list = res.split("\r\n") field = ["ip", "port", "operator", "area", "protocol", "anon", "delay", "source", "type", "create_time"] values = [] for ip_str in ip_list: # print type(ip_str) # print re.findall(r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}", ip_str)[0] # print ip_str ip = re.findall(r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}", ip_str)[0] port = re.findall(r":(\d+).*", ip_str)[0] area = "" if re.findall(r"@(.*)#", ip_str): area = re.findall(r"@(.*)#", ip_str)[0] operator = "" if re.findall(r"#(.*)", ip_str): operator = re.findall(r"#(.*)", ip_str)[0] # proxy_ip = ({"ip": ip, "port": port, "area": area, "operator": operator, "protocol": protocol}) value = [] value.append(ip) value.append(port) value.append(operator) value.append(area) value.append(protocol) value.append("2") value.append("") value.append("xiguadaili") # 代理IP来源 value.append("1") # 收费 value.append(time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time()))) values.append(value) # print value # print proxy_ip # proxy_ips.append(proxy_ip) db_helper = MySQLdbHelper() # 插入临时表 db_helper.insertMany("proxy_ip_swap", field, values) # 插入正式表,用于去重 insert_sql = "insert into proxy_ip(ip,port,operator,area,protocol,anon,delay,source,type,create_time) select ip,port,operator,area,protocol,anon,delay,source,type,create_time from proxy_ip_swap s where not exists (select null from proxy_ip p where p.ip = s.ip and p.port = s.port and p.protocol = s.protocol)" db_helper.executeCommit(insert_sql) return proxy_ips def get_from_dailiyun(self): """ 代理云提取接口,直接入redist 接口文档:https://www.showdoc.cc/bjt5521?page_id=157160154849769 """ # api_url = "http://dly.134t.com/query.txt?key=NPBF565B9C&word=&count=1000" api_url = "http://dly.134t.com/query.txt?key=NPBF565B9C&word=&count=100&detail=true" print("get_from_dailiyun url = " + api_url) response = requests.get(api_url) res = response.text if res: ip_list = res.split("\r\n") return ip_list def get_all_proxy_site(self): """ 从网站或API获得所有代理IP """ print("get_all_proxy_site") db_helper = MySQLdbHelper() # 1.西瓜代理 self.get_from_xiguan(1000) # 清空临时表 truncate_sql = "truncate table proxy_ip_swap" db_helper.executeCommit(truncate_sql) # print proxy_ip["ip"] + "," + proxy_ip["port"] + "," + proxy_ip["area"] + "," + proxy_ip[ # "operator"] + "," + proxy_ip["protocol"] # for ip_str in range(5): # print proxy_ip["ip"] + "," + proxy_ip["port"] + "," + proxy_ip["area"] + "," + proxy_ip[ # "operator"] + "," + proxy_ip["protocol"] if __name__ == '__main__': # str = "61.222.87.87:38157@台湾省#电信" # print re.findall(r":(\d+).*", str)[0] # print re.findall(r"@(.*)#", str)[0] # print re.findall(r"#(.*)", str)[0] # # print 
re.findall(r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}", str)[0] extract_helper = ProxyIpExtractHelper() extract_helper.get_all_proxy_site() # adapter.get_all_proxy_site() # adapter.test_proxy_ip_useable("hotel.meituan.com/shanghai/") # adapter.load_usable_proxy_ip_to_redis("meiTuan")
zymtest2
/zymtest2-0.1.1-py3-none-any.whl/pytest/proxy_ip_pull.py
proxy_ip_pull.py
import random import sys import threading sys.path.append(r'../../') import redis from redis import ConnectionError from scrapy.utils.project import get_project_settings from oyospider.common.proxy_ip_oper import ProxyIPHelper from oyospider.common.proxy_ip_pull import ProxyIpExtractHelper import gevent.monkey gevent.monkey.patch_all() class RedisHelper(object): def __init__(self): settings = get_project_settings() host = settings.get('REDIS_HOST', '') port = settings.get('REDIS_PORT') password = settings.get('REDIS_PASSWORD') self.dailiyun_username = settings.get('DAILIYUN_USERNAME') self.dailiyun_password = settings.get('DAILIYUN_PASSWORD') # self.pool = Pool(1) # password = settings.get("REDIS_PARAMS").get('password') try: self.redis_con = redis.StrictRedis(host=host, port=port, password=password) # ping = self.ping() except NameError: return {'error': 'cannot import redis library'} except ConnectionError as e: return {'error': str(e)} def get_redis_conn(self): return self.redis_con def put_proxy_to_redis_pool(self, protocol, ip, port, source, target_site, batchno, expire_time): """ 将可用的代理IP放入redis池中 :param protocol: :param ip: :param port: :param source: :param target_site: :param batchno: :param expire_time :return: """ key = "proxy_ip_pool:%s:%s|%s|%s|%s" % (target_site, source, protocol, ip, port) self.redis_con.set(key, "") self.redis_con.expire(key, expire_time) def put_proxy_ip_to_redis_queue(self, protocol, ip, port, source, target_site, batchno, expire_time): """ 将可用的代理IP放入redis队列中 :param protocol: :param ip: :param port: :param source: :param target_site: :param batchno: :param expire_time :return: """ key = "proxy_ip_queue:%s:%s|%s|%s|%s" % (target_site, source, protocol, ip, port) self.redis_con.set(key, "") self.redis_con.expire(key, expire_time) def put_proxy_ip_to_redis_queue(self, targer_site, proxy_ip_str): """ 将可用的代理IP放入redis队列中 :param targer_site: :param proxy_ip_str: :return: """ key = "proxy_ip_queue:%s" % targer_site self.redis_con.rpush(key, proxy_ip_str) self.redis_con.expire(key, 60 * 10) def load_repeat_proxy_ip_ctrip(self): name = "ctrip_ip" proxy = self.redis_con.lpop(name) return proxy def load_repeat_proxy_ip_meituan(self): name = "meituan_ip" proxy = self.redis_con.lpop(name) return proxy def load_usable_proxy_ip_to_redis(self, target_site, target_url): """ 加载可用的代理IP :param target_site: :param target_url: :return: """ # 加载到redis中 # print"============ load_usable_proxy_ip_to_redis init=============" proxy_ip_helper = ProxyIPHelper() proxy_ip_helper.get_all_proxy_ip_useable(target_site, target_url, self.put_proxy_to_redis_pool) def get_usable_proxy_ip(self, site): """ 获得可以用的代理IP,没有的话直接从数据库里拿最近的代理IP,同时加载可用的代理IP到redis中 :param site: :return: """ # 判断redis中是否有代理 # print "len = %s" % len(self.redis_con.sscan_iter(site + "_Ips")) site_keys = [] print "get ip from redis " for key in self.redis_con.keys(site + "Ips*"): site_keys.append(key) print "redis keys = " + str(site_keys) if site_keys: site_ips = self.redis_con.srandmember(max(site_keys)) if site_ips: return site_ips.split("|") # print site_ips(0) # print random.choice(site_ips) proxy_ip_helper = ProxyIPHelper() china_proxy_ips = proxy_ip_helper.find_china_proxy_ip(100) if china_proxy_ips: # 异步加载到redis中 # self.pool.apply_async(self.load_usable_proxy_ip_to_redis, args=(site,)) thread = threading.Thread(target=self.load_usable_proxy_ip_to_redis, args=(site,)) # thread.setDaemon(True) thread.start() thread.join() # 先返回表中随机IP给调用者 return random.choice(china_proxy_ips) else: return None def 
get_database_proxy_ip(self): p_ip = ProxyIpExtractHelper() p_ip.get_all_proxy_site() def get_usable_proxy_ip_from_redis_queue(self, target_site): """ 从队列中取代理ip :param target_site: :return:格式:代理来源|代理协议|代理IP|代理port """ key = "proxy_ip_queue:%s" % target_site proxy_ip_queue = self.redis_con.lpop(key) print "get_usable_proxy_ip_from_redis_queue,proxy_ip = %s" % proxy_ip_queue return proxy_ip_queue def get_usable_proxy_ip_from_redis_pool(self, target_site): """ 从ip池中取代理ip :param target_site: :return:格式:代理来源|代理协议|代理IP|代理port """ # 代理云数量相对西瓜代理数量较少,需要增加代理云的随机选中机率 # 查询西代理IP数量 random_key = ["dailiyun|*", "xiguadaili|*", "*"] sub_key = random.choice(random_key) match_key = "proxy_ip_pool:%s:%s" % (target_site, sub_key) print "match_key = %s" % match_key # print "get_usable_proxy_ip_from_redis_pool = %s" % match_key site_keys = [] for key in self.redis_con.keys(match_key): site_keys.append(key) # print "get_usable_proxy_ip_from_redis_pool size :%s " % len(site_keys) proxy_ip_pool = None if len(site_keys) > 0: proxy_ip_key = random.choice(site_keys) proxy_ip_pool = proxy_ip_key.split(":")[2] print "get_usable_proxy_ip_from_redis_pool,proxy_ip = %s" % proxy_ip_pool return proxy_ip_pool def get_usable_proxy_ip_from_db(self): """ 从数据库中取代理ip :return:格式:代理来源|代理协议|代理IP|代理port """ proxy_ip_helper = ProxyIPHelper() china_proxy_ips = proxy_ip_helper.find_all_proxy_ip() proxy_ip_recrod = random.choice(china_proxy_ips) proxy_ip_db = None if proxy_ip_recrod: proxy_ip_db = "%s|%s|%s|%s" % ( proxy_ip_recrod[3], proxy_ip_recrod[0], proxy_ip_recrod[1], proxy_ip_recrod[2]) print "get_usable_proxy_ip_from_db,proxy_ip = %s" % proxy_ip_db return proxy_ip_db def get_usable_proxy_ip_v2(self, target_site): """ 根据优先级获取可用ip :return:格式:代理来源|代理协议|代理IP|代理port """ # 1.从队列中取 ip proxy_ip_str = self.get_usable_proxy_ip_from_redis_queue(target_site) if not proxy_ip_str: # 如果队列中有代理IP,则使用队列中的ip,如果没有,则从ip池中取 # 2.从IP池中取 ip # proxy_ip_str = self.get_usable_proxy_ip_from_redis_pool(target_site) if not proxy_ip_str: # 3.从数据库中取ip proxy_ip_str = self.get_usable_proxy_ip_from_db() return proxy_ip_str def get_usable_request_proxy_ip(self, target_site): """ 获得可直接用于设置的代理IP :return:格式:scrapy resquest标准格式,可以直接使用,其它格式需要处理 """ proxy_ip_str = self.get_usable_proxy_ip_v2(target_site) proxy_ip_req = None if proxy_ip_str: # 根据代理来源判断生成代理ip的正确字符串 proxy_ip_info = proxy_ip_str.split("|") proxy_source = proxy_ip_info[0] if proxy_source == "dailiyun": user_name = self.dailiyun_username password = self.dailiyun_password proxy_ip_req = "%s://%s:%s@%s:%s" % ( proxy_ip_info[1], user_name, password, proxy_ip_info[2], proxy_ip_info[3]) elif proxy_source == "xiguadaili": proxy_ip_req = "%s://%s:%s" % (proxy_ip_info[1], proxy_ip_info[2], proxy_ip_info[3]) else: print "unkown proxy_source:" + target_site return proxy_ip_req, proxy_ip_str if __name__ == '__main__': redis_helper = RedisHelper() ctrip_thread = threading.Thread(target=redis_helper.load_usable_proxy_ip_to_redis, args=("ctrip", "https://hotels.ctrip.com/hotel/428365.html",)) ctrip_thread.start() meituan_thread = threading.Thread(target=redis_helper.load_usable_proxy_ip_to_redis, args=("meituan", "https://www.meituan.com/jiudian/157349277/",)) meituan_thread.start() ip_thread = threading.Thread(target=redis_helper.get_database_proxy_ip) ip_thread.start()
zymtest2
/zymtest2-0.1.1-py3-none-any.whl/pytest/redis_operate.py
redis_operate.py
import re import MySQLdb from scrapy.utils.project import get_project_settings class MySQLdbHelper(object): """操作mysql数据库,基本方法 """ def __init__(self): settings = get_project_settings() self.DB_CONF = settings.get('DB_CONF') db_conf = self.DB_CONF self.host = db_conf['host'] self.username = db_conf['user'] self.password = db_conf['passwd'] self.database = db_conf['db'] self.port = db_conf['port'] self.charset = db_conf['charset'] self.con = None self.cur = None try: self.con = MySQLdb.connect(host=self.host, user=self.username, passwd=self.password, db=self.database, port=self.port, charset=self.charset) # print self.host # 所有的查询,都在连接 con 的一个模块 cursor 上面运行的 self.cur = self.con.cursor() except: raise Exception("DataBase connect error,please check the db config.") def close(self): """关闭数据库连接 """ if not self.con: self.con.close() else: raise Exception("DataBase doesn't connect,close connectiong error;please check the db config.") def getVersion(self): """获取数据库的版本号 """ self.cur.execute("SELECT VERSION()") return self.getOneData() def getOneData(self): # 取得上个查询的结果,是单个结果 data = self.cur.fetchone() return data def creatTable(self, tablename, attrdict, constraint): """创建数据库表 args: tablename :表名字 attrdict :属性键值对,{'book_name':'varchar(200) NOT NULL'...} constraint :主外键约束,PRIMARY KEY(`id`) """ if self.isExistTable(tablename): return sql = '' sql_mid = '`id` bigint(11) NOT NULL AUTO_INCREMENT,' for attr, value in attrdict.items(): sql_mid = sql_mid + '`' + attr + '`' + ' ' + value + ',' sql = sql + 'CREATE TABLE IF NOT EXISTS %s (' % tablename sql = sql + sql_mid sql = sql + constraint sql = sql + ') ENGINE=InnoDB DEFAULT CHARSET=utf8' print 'creatTable:' + sql self.executeCommit(sql) def executeSql(self, sql=''): """执行sql语句,针对读操作返回结果集 args: sql :sql语句 """ try: self.cur.execute(sql) records = self.cur.fetchall() return records except MySQLdb.Error, e: error = 'MySQL execute failed! ERROR (%s): %s' % (e.args[0], e.args[1]) print error def executeCommit(self, sql=''): """执行数据库sql语句,针对更新,删除,事务等操作失败时回滚 """ try: self.cur.execute(sql) self.con.commit() except MySQLdb.Error, e: self.con.rollback() error = 'MySQL execute failed! 
ERROR (%s): %s' % (e.args[0], e.args[1]) print "error:", error return error def insert(self, tablename, params): """创建数据库表 args: tablename :表名字 key :属性键 value :属性值 """ key = [] value = [] for tmpkey, tmpvalue in params.items(): key.append(tmpkey) if isinstance(tmpvalue, str): value.append("\'" + tmpvalue + "\'") else: value.append(tmpvalue) attrs_sql = '(' + ','.join(key) + ')' values_sql = ' values(' + ','.join(value) + ')' sql = 'insert into %s' % tablename sql = sql + attrs_sql + values_sql print '_insert:' + sql self.executeCommit(sql) def select(self, tablename, cond_dict='', order='', fields='*'): """查询数据 args: tablename :表名字 cond_dict :查询条件 order :排序条件 example: print mydb.select(table) print mydb.select(table, fields=["name"]) print mydb.select(table, fields=["name", "age"]) print mydb.select(table, fields=["age", "name"]) """ consql = ' ' if cond_dict != '': for k, v in cond_dict.items(): consql = consql + k + '=' + v + ' and' consql = consql + ' 1=1 ' if fields == "*": sql = 'select * from %s where ' % tablename else: if isinstance(fields, list): fields = ",".join(fields) sql = 'select %s from %s where ' % (fields, tablename) else: raise Exception("fields input error, please input list fields.") sql = sql + consql + order print 'select:' + sql return self.executeSql(sql) def insertMany(self, table, attrs, values): """插入多条数据 args: tablename :表名字 attrs :属性键 values :属性值 example: table='test_MySQLdb' key = ["id" ,"name", "age"] value = [[101, "liuqiao", "25"], [102,"liuqiao1", "26"], [103 ,"liuqiao2", "27"], [104 ,"liuqiao3", "28"]] mydb.insertMany(table, key, value) """ values_sql = ['%s' for v in attrs] attrs_sql = '(' + ','.join(attrs) + ')' values_sql = ' values(' + ','.join(values_sql) + ')' sql = 'insert into %s' % table sql = sql + attrs_sql + values_sql print 'insertMany:' + sql try: print sql for i in range(0, len(values), 20000): self.cur.executemany(sql, values[i:i + 20000]) self.con.commit() except MySQLdb.Error, e: self.con.rollback() error = 'insertMany executemany failed! ERROR (%s): %s' % (e.args[0], e.args[1]) print error def delete(self, tablename, cond_dict): """删除数据 args: tablename :表名字 cond_dict :删除条件字典 example: params = {"name" : "caixinglong", "age" : "38"} mydb.delete(table, params) """ consql = ' ' if cond_dict != '': for k, v in cond_dict.items(): if isinstance(v, str): v = "\'" + v + "\'" consql = consql + tablename + "." + k + '=' + v + ' and ' consql = consql + ' 1=1 ' sql = "DELETE FROM %s where%s" % (tablename, consql) print sql return self.executeCommit(sql) def update(self, tablename, attrs_dict, cond_dict): """更新数据 args: tablename :表名字 attrs_dict :更新属性键值对字典 cond_dict :更新条件字典 example: params = {"name" : "caixinglong", "age" : "38"} cond_dict = {"name" : "liuqiao", "age" : "18"} mydb.update(table, params, cond_dict) """ attrs_list = [] consql = ' ' for tmpkey, tmpvalue in attrs_dict.items(): attrs_list.append("`" + tmpkey + "`" + "=" + "\'" + tmpvalue + "\'") attrs_sql = ",".join(attrs_list) print "attrs_sql:", attrs_sql if cond_dict != '': for k, v in cond_dict.items(): if isinstance(v, str): v = "\'" + v + "\'" consql = consql + "`" + tablename + "`." 
+ "`" + k + "`" + '=' + v + ' and ' consql = consql + ' 1=1 ' sql = "UPDATE %s SET %s where%s" % (tablename, attrs_sql, consql) print sql return self.executeCommit(sql) def dropTable(self, tablename): """删除数据库表 args: tablename :表名字 """ sql = "DROP TABLE %s" % tablename self.executeCommit(sql) def deleteTable(self, tablename): """清空数据库表 args: tablename :表名字 """ sql = "DELETE FROM %s" % tablename self.executeCommit(sql) def isExistTable(self, tablename): """判断数据表是否存在 args: tablename :表名字 Return: 存在返回True,不存在返回False """ sql = "select * from %s" % tablename result = self.executeCommit(sql) if result is None: return True else: if re.search("doesn't exist", result): return False else: return True if __name__ == "__main__": mydb = MySQLdbHelper() print mydb.getVersion() table = 'test_MySQLdb' attrs = {'name': 'varchar(200) DEFAULT NULL', 'age': 'int(11) DEFAULT NULL'} constraint = 'PRIMARY KEY(`id`)' print mydb.creatTable(table, attrs, constraint) params = {"name": "caixinglong", "age": "38"} mydb.insert('test_MySQLdb', params) print mydb.select(table) print mydb.select(table, fields=["name", "age"]) print mydb.select(table, fields=["age", "name"]) key = ["id", "name", "age"] value = [[101, "liuqiao", "25"], [102, "liuqiao1", "26"], [103, "liuqiao2", "27"], [104, "liuqiao3", "28"]] mydb.insertMany(table, key, value) mydb.delete(table, params) cond_dict = {"name": "liuqiao", "age": "18"} mydb.update(table, params, cond_dict) # mydb.deleteTable(table) # mydb.dropTable(table) print mydb.select(table + "1") print mydb.isExistTable(table + "1")
zymtest2
/zymtest2-0.1.1-py3-none-any.whl/pytest/db_operate.py
db_operate.py
import random import sys import threading import time sys.path.append(r'../../') import requests from threadpool import ThreadPool, makeRequests from oyospider.common.proxy_ip_pull import ProxyIpExtractHelper import redis from redis import ConnectionError from scrapy.utils.project import get_project_settings class RedisIPHelper(object): def __init__(self): settings = get_project_settings() host = settings.get('REDIS_HOST', '') port = settings.get('REDIS_PORT') password = settings.get('REDIS_PASSWORD') self.dailiyun_username = settings.get('DAILIYUN_USERNAME') self.dailiyun_password = settings.get('DAILIYUN_PASSWORD') try: self.redis_con = redis.StrictRedis(host=host, port=port, password=password) except NameError: return {'error': 'cannot import redis library'} except ConnectionError as e: return {'error': str(e)} def get_redis_ip(self): r = self.redis_con keys = r.keys("yunIps_*") # print(keys) if keys: IPs = [] for key in keys: proxy_ip = r.get(key) # print key # print proxy_ip IPs.append(proxy_ip) return IPs else: return "" def load_usable_proxy_ip_to_redis(self, target_site, target_url): """ 加载可用的代理IP :param target_site: :param target_url: :return: """ ip_helper = ProxyIpExtractHelper() ip_list = ip_helper.get_from_dailiyun() # 加载到redis中 self.get_all_proxy_ip_usable(target_site, target_url, "dailiyun", ip_list, self.put_proxy_to_redis_pool) def callback_test(self, request, result): print("callback_test") def put_proxy_to_redis_pool(self, protocol, ip, port, source, target_site, batchno, expire_time): """ 将可用的meituan代理IP放入内存中 :param protocol: :param ip: :param port: :param source: :param target_site: :param batchno: :param expire_time :return: """ key = "proxy_ip_pool:%s:%s|%s|%s|%s" % (target_site, source, protocol, ip, port) self.redis_con.set(key, "") self.redis_con.expire(key, expire_time) def get_all_proxy_ip_usable(self, target_site, target_url, source, ip_list, put_proxy_to_redis): """ 测试指定URL代理的有效性 """ # useable_ip_list = [] batchno = int(round(time.time() * 1000)) # timestamp = int(round(time.time())) par_list = [] for proxy_ip in ip_list: paras = [] paras.append(proxy_ip) paras.append(target_site) paras.append(target_url) paras.append(source) paras.append(batchno) paras.append(put_proxy_to_redis) par_list.append((paras, None)) # print paras print(" par_list = " + str(par_list)) pool = ThreadPool(20) requests = makeRequests(self.test_proxy_ip_useable, par_list, self.callback_test) for req in requests: pool.putRequest(req) pool.wait() def test_proxy_ip_useable(self, ip_str, target_site, target_url, source, batchno, put_proxy_to_redis): """ 测试指定URL代理的有效性 """ user_agent_list = [ \ "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1" \ "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11", \ "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6", \ "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6", \ "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1", \ "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5", \ "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5", \ "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3", \ "Mozilla/5.0 (Windows NT 5.1) 
AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3", \ "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3", \ "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3", \ "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3", \ "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3", \ "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3", \ "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3", \ "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3", \ "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24", \ "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24" ] headers = { "User-Agent": random.choice(user_agent_list) } ip_info = ip_str.split(",") ip_port = ip_info[0] protocol = "http" ip_addr = ip_port.split(":")[0] port = ip_port.split(":")[1] ip_effect_time = ip_info[3] ip_expire_time = int(ip_info[4]) # 当前时间 cur_timestamp = int(round(time.time())) + 5 # 计算Ip的过期时间 redis_expire_time = ip_expire_time - cur_timestamp print "ip_expire_time = %s,redis_expire_time = %s" % (ip_expire_time, redis_expire_time) user_name = self.dailiyun_username password = self.dailiyun_password proxy_url = "%s://%s:%s@%s:%s" % (protocol, user_name, password, ip_addr, port) proxy_obj = requests.utils.urlparse(proxy_url) test_url = target_url test_proxies = { "http": proxy_obj.netloc } if redis_expire_time > 0: # 测试代理有效性 try: print("proxy:'%s',test_url:'%s'" % (proxy_url, test_url)) response = requests.head(test_url, headers=headers, proxies=test_proxies, timeout=8) print("proxy:'%s',test_url:'%s',status_code:'%s'" % (test_proxies, test_url, response.status_code)) if response.status_code == 200: # return proxy_ip if put_proxy_to_redis: print("put_proxy_to_redis:%s,%s,%s,%s" % (protocol, ip_addr, port, redis_expire_time)) put_proxy_to_redis(protocol, ip_addr, port, source, target_site, batchno, redis_expire_time) return proxy_url except Exception as e: print(e) else: return None if __name__ == '__main__': redis_helper = RedisIPHelper() ctrip_thread = threading.Thread(target=redis_helper.load_usable_proxy_ip_to_redis, args=("ctrip", "https://hotels.ctrip.com/hotel/428365.html",)) ctrip_thread.start() meituan_thread = threading.Thread(target=redis_helper.load_usable_proxy_ip_to_redis, args=("meituan", "https://www.meituan.com/jiudian/157349277/",)) meituan_thread.start()
zymtest2
/zymtest2-0.1.1-py3-none-any.whl/pytest/proxy_ip_pull_redis.py
proxy_ip_pull_redis.py
import json import os import re import sys import time from redis import StrictRedis from selenium import webdriver sys.path.append(r'../../') from oyospider.common.db_operate import MySQLdbHelper from oyospider.items import Meituan_tokenItem from oyospider.settings import REDIS_HOST, REDIS_PORT, PHANTOMJS_PATH, SERVICE_LOG_PATH, REDIS_PASSWORD class MeiTuanTokenHelper(object): def __init__(self): mydb = MySQLdbHelper() # self.ipdb = ProxyIP() # look up the monitored hotels that need to be crawled sql = "select * from dm_hotel_monitor_ota_map_t h where h.ota_name = 'meituan' limit 5" records = mydb.executeSql(sql) urls = [] for row in records: if row[5] != '/': urls.append(row[5]) self.start_urls = urls def start_requests(self): item = Meituan_tokenItem() for url in self.start_urls: browser = webdriver.PhantomJS(PHANTOMJS_PATH, service_log_path=SERVICE_LOG_PATH) browser.get(url) har = str(json.loads(browser.get_log('har')[0]['message'])) if len(re.findall(r"_token=(.+?)&", har)) > 0: token_str = re.findall(r"_token=(.+?)&", har)[0] item['meituan_token'] = token_str if 'meituan_token' in item: sr = StrictRedis(host=REDIS_HOST, port=REDIS_PORT, password=REDIS_PASSWORD, db=15) cur_timestamp = (int(round(time.time() * 1000))) key = "meituan_token:%s" % cur_timestamp expire_time = 240 value = item["meituan_token"] sr.setex(key, expire_time, value) # return item print(item) continue if __name__ == '__main__': # t = time.time() # print (int(round(t * 1000))) sp = MeiTuanTokenHelper() # while True: try: sp.start_requests() # automatically kill any phantomjs process that has used more than 5 minutes of CPU time cmd = '''kill -9 `ps -aux|grep phantomjs|awk '{split($10,arr,":");if(arr[1]*60+arr[2]>5){print $2}}'` ''' os.system(cmd) time.sleep(10) except Exception as e: print(e)
zymtest2
/zymtest2-0.1.1-py3-none-any.whl/pytest/get_meituan_token.py
get_meituan_token.py
# zync zync is a utility tool for Python operations. [![zync-ci](https://github.com/tjbredemeyer/zync/actions/workflows/ci.yml/badge.svg)](https://github.com/tjbredemeyer/zync/actions/workflows/ci.yml) ## INSTALLATION ```bash pip install zyncify ``` ## Usage ### 1. IMPORT ```python from zync import * ``` ### 2. FUNCTIONS #### logger logger takes in a string and logs it with an INFO level. ```python from zync import logger # logging a string INFO logger("info message") # logging a variable INFO message = "info message" logger(message) ### # returns: INFO info message ``` #### bugger bugger takes in a string and logs it with a DEBUG level. ```python from zync import bugger # logging a string DEBUG bugger("debug message") # logging a variable DEBUG message = "debug message" bugger(message) ### # returns: DEBUG debug message ``` #### wegger wegger takes in a string and logs it with an ERROR level. ```python from zync import wegger # logging a string ERROR wegger("error message") # logging a variable ERROR message = "error message" wegger(message) ### # returns: ERROR error message ``` #### Slugger Slugger converts a string to a slug while maintaining capitalization. ```python from zync import Slugger # Slugging a string with Caps Slugger("Test String") # Slugging a variable with caps string = "Test String" Slugger(string) ### # returns: Test-String ``` #### slugger slugger converts a string to a slug with no capitalization. ```python from zync import slugger # Slugging a string without Caps slugger("Test String") # Slugging a variable without caps string = "Test String" slugger(string) ### # returns: test-string ``` ### 3. TAIL LOG FILE ```bash tail -f ./.zync.log ``` ## Author TJ Bredemeyer twitter: @tjbredemeyer
zyncify
/zyncify-0.1.9.tar.gz/zyncify-0.1.9/README.md
README.md
import logging import inspect import os W = "\033[39m" B = "\033[94m" G = "\033[92m" Y = "\033[33m" R = "\033[91m" M = "\033[35m" C = "\033[36m" L = "\033[2m" X = "\033[0m" class BuggerFormat(logging.Formatter): """Formatting bugger output""" def format(self, record): """Formatting bugger output""" record.levelname = "bugger" levelname = record.levelname.upper() record.levelname = levelname return super().format(record) class LoggerFormat(logging.Formatter): """Formatting logger output""" def format(self, record): """Formatting logger output""" record.levelname = "logger" levelname = record.levelname.upper() record.levelname = levelname return super().format(record) class EggerFormat(logging.Formatter): """Formatting egger output""" def format(self, record): """Formatting egger output""" record.levelname = "wegger" levelname = record.levelname.upper() record.levelname = levelname return super().format(record) class Bugger: """the bugger log class""" def __init__(self, name): self.logger = logging.getLogger(name) self.logger.setLevel(logging.DEBUG) file_handler = logging.FileHandler(".zync.log") formatter = BuggerFormat( f"{W}{L}%(asctime)s {X}" f"{G}[{X}" f"{G}%(levelname)s{X}" f"{G}] {X}" f"{W}%(url)s {X}" f"%(message)s{X}", ) file_handler.setFormatter(formatter) self.logger.addHandler(file_handler) def __call__(self, log, url): self.logger.debug(log, extra={"url": url}) class Logger: """the logger log class""" def __init__(self, name): self.logger = logging.getLogger(name) self.logger.setLevel(logging.INFO) file_handler = logging.FileHandler(".zync.log") formatter = LoggerFormat( f"{W}{L}%(asctime)s {X}" f"{C}[{X}" f"{C}%(levelname)s{X}" f"{C}] {X}" f"{W}%(url)s {X}" f"%(message)s{X}", ) file_handler.setFormatter(formatter) self.logger.addHandler(file_handler) def __call__(self, log, url): self.logger.info(log, extra={"url": url}) class Egger: """the egger log class""" def __init__(self, name): self.logger = logging.getLogger(name) self.logger.setLevel(logging.ERROR) file_handler = logging.FileHandler(".zync.log") formatter = EggerFormat( f"{W}{L}%(asctime)s {X}" f"{R}[{X}" f"{R}%(levelname)s{X}" f"{R}] {X}" f"{W}%(url)s {X}" f"%(message)s{X}", ) file_handler.setFormatter(formatter) self.logger.addHandler(file_handler) def __call__(self, log, url): self.logger.error(log, extra={"url": url}) bugger_base = Bugger("bugger") logger_base = Logger("logger") wegger_base = Egger("wegger") def link(frame): """getting the relative path for logging position""" filename = inspect.getframeinfo(frame).filename current_dir = os.getcwd() path = os.path.relpath(filename, current_dir) line = inspect.getframeinfo(frame).positions.lineno col = inspect.getframeinfo(frame).positions.col_offset # pylint disable=C0209 href = f"{path}:{line}:{col}" href_link = "file '" + href + "'" return href_link def bugger(log): """the bugger method""" frame = inspect.currentframe().f_back url = link(frame) return bugger_base(log, url) def logger(log): """the logger method""" frame = inspect.currentframe().f_back url = link(frame) return logger_base(log, url) def wegger(log): """the egger method""" frame = inspect.currentframe().f_back url = link(frame) return wegger_base(log, url)
zyncify
/zyncify-0.1.9.tar.gz/zyncify-0.1.9/zync/logger.py
logger.py
# Bytecomp v1.1.0 Utilities for working with bytecode. **Magic:** ```py import bytecomp bytecomp.MAGIC # Returns Magic ``` **PYC Headers:** ```py import bytecomp bytecomp.HEADER # Returns .pyc Header bytecomp.generate_header() # Also returns a .pyc header ``` **Compiling Bytecode:** ```py import bytecomp code_object = compile("""print('Hello!')""",'file','exec') pyc = open('compiled.pyc','wb') pyc.write(bytecomp.compile_object(code_object)) pyc.close() # Above code generates a working .pyc file from a code object. ``` **Executing Bytecode:** ```py import bytecomp code_object = b'U\r\r\n\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00@\x00\x00\x00s\x0c\x00\x00\x00e\x00d\x00\x83\x01\x01\x00d\x01S\x00)\x02z\x03Hi!N)\x01\xda\x05print\xa9\x00r\x01\x00\x00\x00r\x01\x00\x00\x00\xda\x03idk\xda\x08<module>\x01\x00\x00\x00\xf3\x00\x00\x00\x00' bytecomp.exec_bytecode(code_object) # Above code executes the bytes-like object (Can have a header or not have a header) ``` **Removing a header from Bytecode:** ```py import bytecomp bytecomp.remove_header(b'U\r\r\n\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00@\x00\x00\x00s\x0c\x00\x00\x00e\x00d\x00\x83\x01\x01\x00d\x01S\x00)\x02z\x03Hi!N)\x01\xda\x05print\xa9\x00r\x01\x00\x00\x00r\x01\x00\x00\x00\xda\x03idk\xda\x08<module>\x01\x00\x00\x00\xf3\x00\x00\x00\x00') # Above code removes the header (First 16 bytes) so you can unmarshal it and execute it ``` **Encrypting Bytecode:** ```py import bytecomp code_object = compile("print('This is a test.')",'file','exec') crypted = bytecomp.crypt_bytecode(code_object) # Above code returns a string, which can be executed with the code below. ``` **Executing Encrypted Bytecode:** ```py import bytecomp bytecomp.exec_crypted('c%0*YdNS#d&&L@bBZH4CS3P4z1MEQT3dCicKq7%Pk+qG5g*A~Sj8%udo+~gnr%V-yQdA2Q$_ll;by)5*l$PgY7p`F~2WbQo_ZgFOG869eT4rP=7Gx$^vjD}ufs6(KfJq*%') # Above code executes the encrypted code we made earlier. ``` **Bytecomp** is created by DeKrypt. <br> [Support the project!](https://github.com/dekrypted/bytecomp) Leave a star.
zynpacker
/zynpacker-0.6.tar.gz/zynpacker-0.6/README.md
README.md
zype-python 0.1.0 ----------------- .. image:: https://travis-ci.org/khfayzullaev/zype-python.svg?branch=master :target: https://travis-ci.org/khfayzullaev/zype-python A simple wrapper around the Zype API, inspired by the SoundCloud API `client <https://github.com/soundcloud/soundcloud-python>`_. Installation ------------ Run:: pip install zype To use: .. code:: python from zype import Zype client = Zype(api_key="<YOUR API KEY>") Examples -------- To get all videos available on your account, you can do: .. code:: python from zype import Zype client = Zype(api_key="<YOUR API KEY>") videos = client.get('videos') if videos is not None: for v in videos: print(v.title)
zype
/zype-0.1.0.tar.gz/zype-0.1.0/README.rst
README.rst
## Introduction This here is a Python interface module meant to streamline obtaining macroeconomic data from Zypl.ai's alternative data API macro endpoint. It offers a few simple methods to obtain the data from the server and store it locally for future usage, whatever that may be. Please keep in mind that for successful usage of this module it is absolutely essential for you to be in a good mood and healthy disposition, otherwise it might not work. To be fair, it might not work either way, but if you meet the requirement stated above you, at the very least, won't get upset by this fact nearly as much. ## Usage This module is obtained from pip with the usual installation line: ``` pip install zypl_macro ``` If you're not running your machine under Windows or do not know how to use pip, please refer [here](https://pip.pypa.io/en/stable/) for pointers. It is all very straightforward. After installing the module, the first order of business is to import and instantiate its utility class, like so: ``` from zypl_macro.library import DataGetter getter_instance = DataGetter() ``` After this you're going to have to provide an authorization token aka API key in order to be allowed to query the data endpoint. It is done via a dedicated method: ``` getter_instance.auth('your-very-very-secret-token') ``` You can get an API key from zypl's alternative data API server administration, if they'll feel like providing you with one. Please don't lose it. Once you have successfully got an instance of the class in your code and provided it with the token, you can start querying data. For now there are three main methods you can utilize. ### get_countries You can obtain the list of all the countries supported in the alt data system by calling this method. ``` getter_instance.get_countries() ``` ### get_indicators Works similarly to the previous one and provides you with a list of all the macroeconomic indicators in the database. You can call it with a country specified in order to get only indicators pertaining to that country, otherwise you're gonna get them all. ``` getter_instance.get_indicators(country='Uzbekistan') ``` ### get_data This is the main method that allows you to obtain the data itself. The only mandatory argument is the country you want your data on: ``` getter_instance.get_data(country='Tajikistan') ``` You can also provide it with `start` and `end` arguments to specify the date range you want to get your data in. Dates must be in ISO format, e.g. YYYY-MM-DD. ``` getter_instance.get_data(country='Tajikistan', start='2020-02-01', end='2022-02-01') ``` You can provide either of these arguments or both of them or none, it'll be fine. The `frequency` argument lets you pick the frequency (duh) of the data you're going to get. Indicators are grouped by the frequency of their collection, which is one of: Daily, Monthly, Quarterly, Yearly. You'll get different sets of indicators depending on this argument. ``` getter_instance.get_data(country='Tajikistan', frequency='Monthly') ``` The `indicators` argument lets you specify the exact list of indicators you want to obtain. It should be passed as a list or tuple containing the names of the desired indicators as strings. These are case sensitive and should match exactly what you get from get_indicators(), so keep it in mind. ``` getter_instance.get_data(country='Tajikistan', indicators=['GDP', 'Inflation Food']) ``` Take care if you specify indicators together with frequency.
The latter takes priority, so you might not get all the indicators you asked for if some of them aren't in the selected frequency group. ## Misc All the utility functions return either a pandas DataFrame or a string message describing the error that occurred, if any. You're free to do with them what you will, just don't forget to actually check what you got returned (see the sketch below). If the alt data API endpoint gets changed or moved somewhere (it shouldn't, but weirder things have been known to happen), this module is not going to work properly. In this case, and if you happen to know its new living address, you can call the _set_url method to point the module there. Please don't touch this method otherwise; things will break.
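As an illustrative sketch of that check (reusing the instance and token from the examples above), you can simply test whether the returned value is a DataFrame before working with it:

```
import pandas as pd
from zypl_macro.library import DataGetter

getter_instance = DataGetter()
getter_instance.auth('your-very-very-secret-token')

result = getter_instance.get_data(country='Tajikistan', frequency='Monthly')
if isinstance(result, pd.DataFrame):
    print(result.head())  # a proper DataFrame came back, safe to use
else:
    print('Something went wrong: %s' % result)  # otherwise it is an error message string
```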
zypl-macro
/zypl_macro-1.0.5.tar.gz/zypl_macro-1.0.5/README.md
README.md
import pandas as pd from requests import get, exceptions import datetime import os class NoAuthorization(Exception): def __init__(self, message="You're not authorized! Please call auth() method with a valid authorization key"): self.message = message super().__init__(self.message) pass class DataGetter(): _URL = 'https://alt-data-api.azurewebsites.net/api/macro/get' # _URL = 'http://localhost:8000/api/macro/get' _API_KEY = '' def _set_url(self, url): self._URL = url def _prettify_indicators(self, ind_list): return [" ".join([name.capitalize() if name not in ['gdp', 'cpi'] else name.upper() for name in indicator.split("_")]) for indicator in ind_list] def _api_call(self, **kwargs): if self._API_KEY == '': raise NoAuthorization() params = { k:v for k,v in kwargs.items() if len(v) > 0 } try: api_response = get(url=self._URL, params=params, headers={ 'AD-Api-Key': self._API_KEY }) return api_response except exceptions.Timeout: return "API server doesn't respond" except exceptions.ConnectionError: return "Network connection error" def auth(self, token=''): self._API_KEY = token response = self._api_call(frequency='Yearly', country='Tajikistan') if response.status_code == 403: print('Invalid authorization key') self._API_KEY = '' def get_data(self, indicators=None, **kwargs): if 'start' in kwargs.keys(): try: datetime.date.fromisoformat(kwargs['start']) except ValueError: return "Dates should be provided in YYYY-MM-DD format!" if 'end' in kwargs.keys(): try: datetime.date.fromisoformat(kwargs['end']) except ValueError: return "Dates should be provided in YYYY-MM-DD format!" if not 'country' in kwargs.keys(): return 'Provide the country to get data for' try: data = self._api_call( country = kwargs['country'], frequency = kwargs.get('frequency') or '' ) except NoAuthorization as e: return e.message data = data.json() if len(data) == 0: return 'Invalid country name.' df = pd.DataFrame(data) df['date'] = pd.to_datetime(df['date']) if 'start' in kwargs.keys() or 'end' in kwargs.keys(): if 'start' in kwargs.keys() and not 'end' in kwargs.keys(): mask = (df['date'] >= kwargs['start']) elif not 'start' in kwargs.keys() and 'end' in kwargs.keys(): mask = (df['date'] <= kwargs['end']) else: mask = (df['date'] >= kwargs['start']) & (df['date'] <= kwargs['end']) df = df.loc[mask] if len(df) == 0: return 'Start or end date are out of bounds.' df.columns = self._prettify_indicators(df.columns) if isinstance(indicators, list): cols = list(filter(lambda name: name not in indicators and name not in ['Country', 'Date'], df.columns)) df.drop(columns=cols, inplace=True) df.dropna(subset=df.drop(columns=['Country', 'Date']).columns, inplace=True, how='all') # df.to_csv("%s/%s_macrodata.csv" % (os.getcwd(), kwargs['country']), header=df.columns, index=False, sep=";") df.sort_values(by='Date', inplace=True) return df def get_countries(self): try: data = self._api_call(frequency="Yearly").json() except NoAuthorization as e: return e.message entirety = pd.DataFrame(data) countries = pd.DataFrame({'Country name': entirety['country'].unique()}) # countries.to_csv('%s/supported_countries.csv' % os.getcwd(), index=False, sep=";") return countries def get_indicators(self, **kwargs): try: data = self._api_call(country=kwargs.get('country') or '').json() except NoAuthorization as e: return e.message if len(data) == 0: return 'Invalid country name.' 
entirety = pd.DataFrame(data) indicators = pd.DataFrame({'Indicator name': self._prettify_indicators([name for name in entirety.columns if name not in ['date', 'country']])}) # indicators.to_csv('%s/indicators.csv' % os.getcwd(), index=False, sep=";") return indicators
zypl-macro
/zypl_macro-1.0.5.tar.gz/zypl_macro-1.0.5/zypl_macro/library.py
library.py
============================= Zypper Patch Status Collector ============================= This queries the current patch status of the system from Zypper and exports it in a format compatible with the `Prometheus Node Exporter's`_ textfile collector. Usage ----- :: # HELP zypper_applicable_patches The current count of applicable patches # TYPE zypper_applicable_patches gauge zypper_applicable_patches{category="security",severity="critical"} 0 zypper_applicable_patches{category="security",severity="important"} 2 zypper_applicable_patches{category="security",severity="moderate"} 0 zypper_applicable_patches{category="security",severity="low"} 0 zypper_applicable_patches{category="security",severity="unspecified"} 0 zypper_applicable_patches{category="recommended",severity="critical"} 0 zypper_applicable_patches{category="recommended",severity="important"} 0 zypper_applicable_patches{category="recommended",severity="moderate"} 0 zypper_applicable_patches{category="recommended",severity="low"} 0 zypper_applicable_patches{category="recommended",severity="unspecified"} 0 zypper_applicable_patches{category="optional",severity="critical"} 0 zypper_applicable_patches{category="optional",severity="important"} 0 zypper_applicable_patches{category="optional",severity="moderate"} 0 zypper_applicable_patches{category="optional",severity="low"} 0 zypper_applicable_patches{category="optional",severity="unspecified"} 0 zypper_applicable_patches{category="feature",severity="critical"} 0 zypper_applicable_patches{category="feature",severity="important"} 0 zypper_applicable_patches{category="feature",severity="moderate"} 0 zypper_applicable_patches{category="feature",severity="low"} 0 zypper_applicable_patches{category="feature",severity="unspecified"} 0 zypper_applicable_patches{category="document",severity="critical"} 0 zypper_applicable_patches{category="document",severity="important"} 0 zypper_applicable_patches{category="document",severity="moderate"} 0 zypper_applicable_patches{category="document",severity="low"} 0 zypper_applicable_patches{category="document",severity="unspecified"} 0 zypper_applicable_patches{category="yast",severity="critical"} 0 zypper_applicable_patches{category="yast",severity="important"} 0 zypper_applicable_patches{category="yast",severity="moderate"} 0 zypper_applicable_patches{category="yast",severity="low"} 0 zypper_applicable_patches{category="yast",severity="unspecified"} 0 # HELP zypper_service_needs_restart Set to 1 if service requires a restart due to using no-longer-existing libraries. # TYPE zypper_service_needs_restart gauge zypper_service_needs_restart{service="nscd"} 1 zypper_service_needs_restart{service="dbus"} 1 zypper_service_needs_restart{service="cups"} 1 zypper_service_needs_restart{service="sshd"} 1 zypper_service_needs_restart{service="cron"} 1 # HELP zypper_product_end_of_life Unix timestamp on when support for the product will end. # TYPE zypper_product_end_of_life gauge zypper_product_end_of_life{product="openSUSE"} 1606694400 zypper_product_end_of_life{product="openSUSE_Addon_NonOss"} 1000000000000001 # HELP zypper_needs_rebooting Whether the system requires a reboot as core libraries or services have been updated. # TYPE zypper_needs_rebooting gauge zypper_needs_rebooting 0 # HELP zypper_scrape_success Whether the last scrape for zypper data was successful. 
# TYPE zypper_scrape_success gauge zypper_scrape_success 1 To get this picked up by the `Prometheus Node Exporter's`_ textfile collector dump the output into a ``zypper.prom`` file in the textfile collector directory:: > zypper-patch-status-collector > /var/lib/node_exporter/collector/zypper.prom Installation ------------ Running this requires Python. Install as any Python software via pip:: pip install zypper-patch-status-collector It also requires the reboot advisory and the lifecycle plug-in for zypper to be installed:: zypper install zypper-needs-restarting zypper-lifecycle-plugin Tests ----- The tests are based on pytest_. Just run the following in the project root:: pytest License ------- This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You can find a full version of the license in the `LICENSE file`_. If not, see https://www.gnu.org/licenses/. .. _`Prometheus Node Exporter's`: https://github.com/prometheus/node_exporter .. _pytest: https://docs.pytest.org/en/latest/ .. _`LICENSE file`: ./LICENSE.txt
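As a final usage note: to keep the exported metrics fresh, the collector is typically run on a schedule. The entry below is only an illustrative sketch (the 30-minute interval and the ``/etc/cron.d`` location are assumptions; adjust them to your node_exporter setup), reusing the collector command and textfile path shown in the Usage section::

    # /etc/cron.d/zypper-metrics
    */30 * * * * root zypper-patch-status-collector > /var/lib/node_exporter/collector/zypper.prom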
zypper-patch-status-collector
/zypper-patch-status-collector-0.2.1.tar.gz/zypper-patch-status-collector-0.2.1/README.rst
README.rst
========== CHANGE LOG ========== All notable changes to this project will be documented in this file. The format is based on `Keep a Changelog`_ and this project adheres to `Semantic Versioning`_. 0.2.1 – 2020-06-17 ================== Fixed ----- * Fix crash in rendering `zypper_service_needs_restart` when there is actually a service to restart. 0.2.0 – 2020-06-15 ================== Added ----- * New metric `zypper_needs_rebooting` exports whether the system requires a reboot according to ``zypper needs-rebooting``. * New metric `zypper_product_end_of_life` exports the end of life of products as reported by ``zypper lifecycle``. * New metric `zypper_service_needs_restart` is exported for each service reported by ``zypper ps -sss``. * Python 3.8 is now supported Removed ------- * Python 2 is no longer supported 0.1.0 – 2017-12-31 ================== Added ----- * Dump metrics on available patches to standard output .. _Keep a Changelog: http://keepachangelog.com/en/1.0.0/ .. _Semantic Versioning: http://semver.org/spec/v2.0.0.html
zypper-patch-status-collector
/zypper-patch-status-collector-0.2.1.tar.gz/zypper-patch-status-collector-0.2.1/CHANGELOG.rst
CHANGELOG.rst
import collections import itertools import re from typing import Iterable from ._model import CATEGORIES, SEVERITIES, Patch, Product GAUGE_META_TEMPLATE = '''\ # HELP {name} {help_text} # TYPE {name} gauge ''' GAUGE_VALUE_TEMPLATE = '''\ {name} {value} ''' def _render_gauge_meta(name, help_text): return GAUGE_META_TEMPLATE.format( name=name, help_text=help_text ) def _render_gauge_value(name, value): return GAUGE_VALUE_TEMPLATE.format( name=name, value=value, ) def _render_patch_meta(): return _render_gauge_meta( name='zypper_applicable_patches', help_text='The current count of applicable patches', ) def _render_patch_count(patch, count): return _render_gauge_value( name='zypper_applicable_patches{{category="{category}",severity="{severity}"}}'.format( category=patch.category, severity=patch.severity, ), value=count, ) def _render_service_needs_restart_meta(): return _render_gauge_meta( name='zypper_service_needs_restart', help_text='Set to 1 if service requires a restart due to using no-longer-existing libraries.', ) def _render_service_needs_restart_value(service: str): # There is only a specific set of characters allowed in labels. safe_name = re.sub(r'[^a-zA-Z0-9_]', '_', service) return _render_gauge_value( name=f'zypper_service_needs_restart{{service="{safe_name}"}}', value=1, ) def _render_product_meta(): return _render_gauge_meta( name='zypper_product_end_of_life', help_text='Unix timestamp on when support for the product will end.', ) def _render_product_eol(product: Product): # There is only a specific set of characters allowed in labels. safe_name = re.sub(r'[^a-zA-Z0-9_]', '_', product.name) return _render_gauge_value( name=f'zypper_product_end_of_life{{product="{safe_name}"}}', value=product.eol, ) def _render_needs_rebooting(needs_rebooting): return _render_gauge_meta( name='zypper_needs_rebooting', help_text='Whether the system requires a reboot as core libraries or services have been updated.', ) + _render_gauge_value( name='zypper_needs_rebooting', value=1 if needs_rebooting else 0 ) def _render_scrape_success(value): return _render_gauge_meta( name='zypper_scrape_success', help_text='Whether the last scrape for zypper data was successful.', ) + _render_gauge_value( name='zypper_scrape_success', value=value, ) def render( patches: Iterable[Patch], services_needing_restart: Iterable[str], needs_rebooting: bool, products: Iterable[Product], ): patch_histogram = collections.Counter(patches) if patches is None or services_needing_restart is None or products is None: return _render_scrape_success(0) metrics = [ _render_patch_meta() ] + [ _render_patch_count(patch, patch_histogram.get(patch, 0)) for patch in ( Patch(category, severity) for category, severity in itertools.product(CATEGORIES, SEVERITIES) ) ] + [ _render_service_needs_restart_meta() ] + [ _render_service_needs_restart_value(service) for service in services_needing_restart ] + [ _render_product_meta() ] + [ _render_product_eol(product) for product in products ] + [ _render_needs_rebooting(needs_rebooting), _render_scrape_success(1) ] return ''.join(metrics)
zypper-patch-status-collector
/zypper-patch-status-collector-0.2.1.tar.gz/zypper-patch-status-collector-0.2.1/zypper_patch_status_collector/_prometheus.py
_prometheus.py
import argparse import sys import textwrap import pkg_resources from ._prometheus import render from ._zypper import get_applicable_patches, get_lifecycle_info, get_services_needing_restart, check_needs_reboot LICENSE_TEXT = textwrap.dedent("""\ Copyright (C) 2017 Matthias Bach This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.\ """) def main(args=sys.argv[1:]): parser = argparse.ArgumentParser( description='Export patch status in Prometheus-compatible format..', ) parser.add_argument( '--license', action='store_true', default=False, help='Show license information' ) parser.add_argument('--version', action='version', version=str( pkg_resources.get_distribution('zypper-patch-status-collector').version ),) parsed_args = parser.parse_args(args) if parsed_args.license: print(LICENSE_TEXT) return run() def run(): try: patches = get_applicable_patches() except Exception as e: # in case of error, carry on print('Failed to query zypper: {}'.format(e), file=sys.stderr) patches = None try: services_needing_restart = get_services_needing_restart() except Exception as e: # in case of error, carry on print('Failed to query zypper: {}'.format(e), file=sys.stderr) services_needing_restart = None try: needs_reboot = check_needs_reboot() except Exception as e: # in case of error, carry on print('Failed to query zypper: {}'.format(e), file=sys.stderr) needs_reboot = False try: products = get_lifecycle_info() except Exception as e: # in case of error, carry on print('Failed to query zypper: {}'.format(e), file=sys.stderr) products = None metrics = render(patches, services_needing_restart, needs_reboot, products) print(metrics) if patches is None or products is None: sys.exit(1)
zypper-patch-status-collector
/zypper-patch-status-collector-0.2.1.tar.gz/zypper-patch-status-collector-0.2.1/zypper_patch_status_collector/_cli.py
_cli.py
import logging import pandas as pd import pymsteams from pymsteams import TeamsWebhookException from notify.types import DfsInfo class NotifyTeams: def __init__(self, webhook: str): """ Parameters ---------- webhook: str url for sending the teams message """ self.msg = pymsteams.connectorcard(webhook) self.msg.color("#F0B62E") def add_full_dataframe(self, df: pd.DataFrame) -> None: """ Parameters ---------- df: pd.DataFrame Dataframe that will be added to the card. Returns ------- None Adds a section for the table to the teams message object. """ if df.shape[0] > 30: logging.warning(f"only first 30 records will be added.({df.shape[0]}> the limit of 30).") df = df.head(n=30) section = pymsteams.cardsection() md_table = df.to_markdown(index=False) section.text(md_table) self.msg.addSection(section) def create_dataframe_report(self, dfs: DfsInfo) -> None: """ Parameters ---------- dfs: dict Dataframes containing {name, df} as key value pairs Returns ------- None Adds a section for the table to the teams message object. """ for df_name, df_shape in dfs.items(): section = pymsteams.cardsection() section.activityTitle(f"<h1><b>{df_name}</b></h1>") section.activityImage("https://pbs.twimg.com/profile_images/1269974132818620416/nt7fTdpB.jpg") section.text(f"> In totaal **{df_shape[0]}** records met **{df_shape[1]}** kolommen verwerkt") self.msg.addSection(section) def create_buttons(self, buttons: dict) -> None: """ Parameters ---------- buttons: dict dictionairy containing button_name, button_link as key, value pairs. Returns ------- None Adds the button(s) to the teams message """ for button_name, button_link in buttons.items(): self.msg.addLinkButton(button_name, button_link) def basic_message( self, title: str, message: str = None, buttons: dict = None, df: pd.DataFrame = pd.DataFrame(), dfs: DfsInfo = None, ) -> None: """ This function posts a message, containing a section, in a Microsoft Teams channel Parameters ---------- dfs: dict Dataframes dictionary, with keys as dataframe name and value as dataframe. df: pd.DataFrame df that will be added to a card section. length of dataframe should not exceed 10. title: str Title of the message (optional) message: str Content of the message (optional) buttons: dict dictionary of button_name, button_url as key value pairs Returns ------- None sends a message in a teams channel, reporting col en records as information. """ self.msg.title(title) # always required. if message: self.msg.text(message) if dfs: self.create_dataframe_report(dfs) if not df.empty: self.add_full_dataframe(df) if buttons: self.create_buttons(buttons) try: self.msg.send() except TeamsWebhookException: logging.warning("Teams notification not sent!")
zyppnotify
/zyppnotify-0.5.1-py3-none-any.whl/notify/teams.py
teams.py
import os import pandas as pd from babel.numbers import format_currency, format_decimal from notify.exceptions import EnvironmentVariablesError def format_numbers(df: pd.DataFrame, currency_columns: list = None, number_columns: list = None): """ This functions converts currencies (values) and numbers (digits) columns to formatted text columns. Parameters ---------- df: pd.DataFrame Dataframe with columns which need to be formatted currency_columns: list List of columns which will be formatted to currencies with a Euro sign number_columns: list List with columns which will be formatted to European standard. Returns ------- df: pd.DataFrame Dataframe with converted columns """ # format de bedrag kolommen if number_columns is None: number_columns = [] if currency_columns is None: currency_columns = [] for col in currency_columns: df[col] = df[col].apply(lambda x: format_currency(number=x, currency="EUR", locale="nl_NL")) # format de nummer kolommen for col in number_columns: df[col] = df[col].apply(lambda x: format_decimal(number=x, locale="nl_NL")) return df def check_environment_variables(required_variables: list): """ Test if environment variables are set. Parameters ---------- required_variables: list list of required variables that need to be present in environment variables. Returns ------- None """ values = [os.environ.get(x) for x in required_variables] if not all(values): raise EnvironmentVariablesError(f"One of the environment variables {', '.join(required_variables)} is not set") def dataframe_to_html(df: pd.DataFrame) -> str: """ This functions converts a dataframe to an HTML table. Parameters ---------- df: pd.DataFrame Dataframe which needs to be converted to HTML Returns ------- pretty_html_table: str html body with generated HTML table """ html_table = df.to_html(index=False, classes="styled-table", justify="center") pretty_html_table = ( """ <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <title>Dataframe report</title> <style type="text/css" media="screen"> h1 { background-color: #a8a8a8; display: flex; flex-direction: column; justify-content: center; text-align: center; } .styled-table { border-collapse: collapse; margin: 25px 0; font-size: 0.9em; font-family: sans-serif; min-width: 400px; box-shadow: 0 0 20px rgba(0, 0, 0, 0.15); } .styled-table thead tr { background-color: #009879; color: #ffffff; text-align: left; } .styled-table th, .styled-table td { padding: 12px 15px; } .styled-table tbody tr { border-bottom: thin solid #dddddd; } .styled-table tbody tr:nth-of-type(even) { background-color: #f3f3f3; } .styled-table tbody tr.active-row { font-weight: bold; color: #009879; } .styled-table tbody tr:last-of-type { border-bottom: 2px solid #009879; } </style> </head> <body>""" + html_table + "</body>" ) return pretty_html_table
zyppnotify
/zyppnotify-0.5.1-py3-none-any.whl/notify/utils.py
utils.py
import base64 import logging import os from urllib import request import pandas as pd from notify.msgraph import Graph from notify.utils import check_environment_variables, dataframe_to_html class NotifyMail: def __init__( self, to: str, subject: str, message: str, cc: str = None, bcc: str = None, files: dict = None, df: pd.DataFrame = pd.DataFrame(), ): """ This function sends an e-mail from Microsoft Exchange server Parameters ---------- to: str the e-mail adress to send email to subject: str subject of the message message: HTML or plain text content of the message cc: str e-mail address to add as cc bcc: str e-mail address to add as bcc files: str, list Path(s) to file(s) to add as attachment df: pd.DataFrame dataframe that needs to be added to the HTML message. """ check_environment_variables(["EMAIL_USER", "MAIL_TENANT_ID", "MAIL_CLIENT_ID", "MAIL_CLIENT_SECRET"]) self.sender = os.environ.get("EMAIL_USER") self.to = to.replace(";", ",") self.cc = cc.replace(";", ",") if cc is not None else cc self.bcc = bcc.replace(";", ",") if bcc is not None else bcc self.subject = subject self.message = message self.files = [files] if isinstance(files, str) else files self.df = df self.graph = Graph() self.graph.ensure_graph_for_app_only_auth() @staticmethod def read_file_content(path): if path.startswith("http") or path.startswith("www"): with request.urlopen(path) as download: content = base64.b64encode(download.read()) else: with open(path, "rb") as f: content = base64.b64encode(f.read()) return content def send_email(self): """ This function sends an e-mail from Microsoft Exchange server Returns ------- response: requests.Response """ endpoint = f"https://graph.microsoft.com/v1.0/users/{self.sender}/sendMail" msg = { "Message": { "Subject": self.subject, "Body": {"ContentType": "HTML", "Content": self.message}, "ToRecipients": [{"EmailAddress": {"Address": to.strip()}} for to in self.to.split(",")], }, "SaveToSentItems": "true", } if self.cc: msg["Message"]["CcRecipients"] = [{"EmailAddress": {"Address": cc.strip()}} for cc in self.cc.split(",")] if self.bcc: msg["Message"]["BccRecipients"] = [ {"EmailAddress": {"Address": bcc.strip()}} for bcc in self.bcc.split(",") ] # add html table (if table less than 30 records) if self.df.shape[0] in range(1, 31): html_table = dataframe_to_html(df=self.df) elif self.df.shape[0] > 30: logging.warning(f"Only first 30 records will be added. ({self.df.shape[0]} > the limit of 30).") html_table = dataframe_to_html(df=self.df.head(n=30)) else: html_table = "" # no data in dataframe (0 records) msg["Message"]["Body"]["Content"] += html_table if self.files: # There might be a more safe way to check if a string is an url, but for our purposes, this suffices. attachments = list() for name, path in self.files.items(): content = self.read_file_content(path) attachments.append( { "@odata.type": "#microsoft.graph.fileAttachment", "ContentBytes": content.decode("utf-8"), "Name": name, } ) msg["Message"]["Attachments"] = attachments response = self.graph.app_client.post(endpoint, json=msg) return response
zyppnotify
/zyppnotify-0.5.1-py3-none-any.whl/notify/mail.py
mail.py
from itertools import product, permutations import random def get_all_operation_combine(cards): c1, c2, c3, c4 = cards operators = ['+', '-', '*', '/'] expressions = [] for p in product(operators, repeat=len(cards) - 1): # operators go between the numbers, so use the number of cards - 1 op1, op2, op3 = p # unpack the three operators expressions.append('{} {} {} {} {} {} {}'.format(c1, op1, c2, op2, c3, op3, c4)) return expressions # returns the list of number/operator combinations def rand_card(): return random.randint(1, 14) # draw one card at random out of the fourteen values def get_all_operation_combine_with_number_exchange(cards): all_result = [] for p in permutations(cards): # take every permutation of the four drawn numbers, then call get_all_operation_combine() on each to collect the arithmetic expressions (still without parentheses) all_result += get_all_operation_combine(p) return all_result # add parentheses recursively def add_brace(numbers): if len(numbers) < 2: return [numbers] if len(numbers) == 2: return [['(' + str(numbers[0])] + [str(numbers[1]) + ')']] results = [] for i in range(1, len(numbers)): prefix = numbers[:i] prefix1 = add_brace(prefix) tail = numbers[i:] tails = add_brace(tail) for p, t in product(prefix1, tails): # combine each parenthesized prefix with each parenthesized tail, wrapping the whole thing in another pair of parentheses brace_with_around = ['(' + p[0]] + p[1:] + t[:-1] + [t[-1] + ')'] results.append(brace_with_around) return results # join numbers and operators back into an expression of arbitrary length def join_op_with_brace_number(operators, with_brace): finally_exp = with_brace[0] for i, op in enumerate(operators): finally_exp += (op + ' ' + with_brace[i + 1]) return finally_exp # add parentheses to an expression def join_brace_to_expression(expression): numbers = expression.split()[::2] # split out the numbers operators = expression.split()[1::2] # split out the operators with_braces = add_brace(numbers) # add parentheses with_operator_and_brace = [] for brace in with_braces: with_operator_and_brace.append(join_op_with_brace_number(operators, brace)) return with_operator_and_brace def simple_but_may_not_answer(cards): target = 24 for exp in get_all_operation_combine(cards): if eval(exp) == target: print(exp) def a_little_complicate_but_may_not_answer(cards): target = 24 for exp in get_all_operation_combine_with_number_exchange(cards): if eval(exp) == target: print(exp) # expressions of arbitrary length, with parentheses def complicate_but_useful_with_brace(cards): target = 24 for exp in get_all_operation_combine_with_number_exchange(cards): for b in join_brace_to_expression(exp): # every parenthesized variant of the expression, regardless of length try: if eval(b) == target: print(b) except ZeroDivisionError: continue new_cards = [rand_card() for _ in range(4)] # print('the cards I drew are: {}'.format(new_cards)) # # print('-- answers found without swapping positions') # simple_but_may_not_answer(new_cards) # # print('-- answers found with swapping positions') # a_little_complicate_but_may_not_answer(new_cards) if __name__ == '__main__': print('-- answers with parentheses:') complicate_but_useful_with_brace([12, 2, 7, 2])
zys0428
/zys0428-0.0.1-py3-none-any.whl/zys/24.py
24.py
## zyte-api-convertor A Python module to convert a Zyte API JSON payload into a [Scrapy ZyteAPI](https://github.com/scrapy-plugins/scrapy-zyte-api) project. It uses Scrapy and the scrapy-zyte-api plugin to generate the project, and black to format the generated code. ### Requirements ``` Python 3.6+ Scrapy scrapy-zyte-api black ``` ### Documentation [Zyte API Documentation](https://docs.zyte.com/zyte-api/get-started/index.html) Test the Zyte API payload using Postman or curl. Once it gives the desired response, use the same payload with this module to convert it into a Scrapy ZyteAPI project. ### Installation `pip install zyte-api-convertor` ### Usage ```shell Usage: zyte-api-convertor <payload> --project-name <project_name> --spider-name <spider_name> Example: zyte-api-convertor '{"url": "https://httpbin.org/ip", "browserHtml": true, "screenshot": true}' --project-name sample_project --spider-name sample_spider Usage: zyte-api-convertor <payload> --project-name <project_name> Example: zyte-api-convertor '{"url": "https://httpbin.org/ip", "browserHtml": true, "screenshot": true}' --project-name sample_project Usage: zyte-api-convertor <payload> --spider-name <spider_name> Example: zyte-api-convertor '{"url": "https://httpbin.org/ip", "browserHtml": true, "screenshot": true}' --spider-name sample_spider Usage: zyte-api-convertor <payload> Example: zyte-api-convertor '{"url": "https://httpbin.org/ip", "browserHtml": true, "screenshot": true}' ``` ### Example zyte-api-convertor expects a valid JSON payload at the very least, but it has other options as well. You can use the `--project-name` and `--spider-name` options to set the project and spider names. If you don't use these options, it will use the default project and spider names. ```shell zyte-api-convertor '{"url": "https://httpbin.org/ip", "browserHtml": true, "screenshot": true}' --project-name sample_project --spider-name sample_spider ``` Output: ```shell mukthy@Mukthys-MacBook-Pro % zyte-api-convertor '{"url": "https://httpbin.org/ip", "browserHtml": true, "screenshot": true}' --project-name sample_project --spider-name sample_spider Code Generated! Writing to file... Writing Done! reformatted sample_project/sample_project/spiders/sample_project.py All done! ✨ 🍰 ✨ 1 file reformatted. Formatting Done! ``` Project created successfully. ```shell mukthy@Mukthys-MacBook-Pro % sample_project % tree . 
├── sample_project │   ├── __init__.py │   ├── items.py │   ├── middlewares.py │   ├── pipelines.py │   ├── settings.py │   └── spiders │   ├── __init__.py │   └── sample_project.py └── scrapy.cfg 3 directories, 8 files ``` Sample Spider Code: ```python import scrapy class SampleQuotesSpider(scrapy.Spider): name = "sample_spider" custom_settings = { "DOWNLOAD_HANDLERS": { "http": "scrapy_zyte_api.ScrapyZyteAPIDownloadHandler", "https": "scrapy_zyte_api.ScrapyZyteAPIDownloadHandler", }, "DOWNLOADER_MIDDLEWARES": { "scrapy_zyte_api.ScrapyZyteAPIDownloaderMiddleware": 1000 }, "REQUEST_FINGERPRINTER_CLASS": "scrapy_zyte_api.ScrapyZyteAPIRequestFingerprinter", "TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor", "ZYTE_API_KEY": "YOUR_API_KEY", } def start_requests(self): yield scrapy.Request( url="https://httpbin.org/ip", meta={ "zyte_api": { "javascript": False, "screenshot": True, "browserHtml": True, "actions": [], "requestHeaders": {}, "geolocation": "US", "experimental": {"responseCookies": False}, } }, ) def parse(self, response): print(response.text) ``` Please note that the `ZYTE_API_KEY` is not set in the `custom_settings` of the spider. You need to set it before running it.
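As a minimal illustration of that last step (the key value below is a placeholder of our own, not something the generator produces), you can open the generated spider and swap the `YOUR_API_KEY` placeholder for your actual key before crawling:

```python
# sample_project/sample_project/spiders/sample_project.py (generated by zyte-api-convertor)
custom_settings = {
    # ... keep the other settings emitted by the generator as they are ...
    "ZYTE_API_KEY": "your-real-zyte-api-key",  # replace the YOUR_API_KEY placeholder
}
```

Alternatively, you can leave the placeholder untouched and override the setting at run time from inside the generated project, e.g. `scrapy crawl sample_spider -s ZYTE_API_KEY=your-real-zyte-api-key`.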
zyte-api-convertor
/zyte_api_convertor-1.0.3.tar.gz/zyte_api_convertor-1.0.3/README.md
README.md
import json import os import subprocess import sys def payload_to_zyte(spider_name): if os.name != 'nt': data = sys.argv[1] # print(data) data = data.replace(',}', '}') data = data.replace(',]', ']') data = json.loads(data) # print(data) else: print("Windows detected! Please enter the payload again.") data = input("Enter the JSON Payload with quotes (only): ") # print(data) data = data.replace(',}', '}') data = data.replace(',]', ']') data = data.replace("'{", "{") data = data.replace("}'", "}") data = json.loads(data) # print(data) url = data['url'] if 'actions' in data: actions = data['actions'] else: actions = [] # Post request with custom headers if 'httpRequestMethod' in data and data['httpRequestMethod'] == "POST" and ('customHttpRequestHeaders' in data): httpRequestMethod = data['httpRequestMethod'] httpResponseBody = data['httpResponseBody'] httpResponseHeaders = True if 'experimental' in data: experimental = data['experimental'] else: experimental = { "responseCookies": False, } if 'geolocation' in data: geolocation = data['geolocation'] else: geolocation = "US" if 'customHttpRequestHeaders' in data: customHttpRequestHeaders = data['customHttpRequestHeaders'] else: customHttpRequestHeaders = [] httpRequestBody = data['httpRequestBody'] meta = {"zyte_api": {"customHttpRequestHeaders": customHttpRequestHeaders, "geolocation": geolocation, "httpResponseBody": httpResponseBody, "httpResponseHeaders": httpResponseHeaders, "experimental": experimental, "httpRequestMethod": httpRequestMethod, "httpRequestBody": httpRequestBody}} # Post request with request headers elif ('httpRequestMethod' in data and data['httpRequestMethod'] == "POST") and ('requestHeaders' in data): httpRequestMethod = data['httpRequestMethod'] httpResponseBody = data['httpResponseBody'] httpResponseHeaders = True if 'experimental' in data: experimental = data['experimental'] else: experimental = { "responseCookies": False, } if 'geolocation' in data: geolocation = data['geolocation'] else: geolocation = "US" if 'requestHeaders' in data: requestHeaders = data['requestHeaders'] else: requestHeaders = {} httpRequestBody = data['httpRequestBody'] meta = {"zyte_api": {"requestHeaders": requestHeaders, "geolocation": geolocation, "httpResponseBody": httpResponseBody, "httpResponseHeaders": httpResponseHeaders, "experimental": experimental, "httpRequestMethod": httpRequestMethod, "httpRequestBody": httpRequestBody}} # Post request without request headers elif 'httpRequestMethod' in data and data['httpRequestMethod'] == "POST": httpRequestMethod = data['httpRequestMethod'] httpResponseBody = data['httpResponseBody'] httpResponseHeaders = True if 'experimental' in data: experimental = data['experimental'] else: experimental = { "responseCookies": False, } if 'geolocation' in data: geolocation = data['geolocation'] else: geolocation = "US" httpRequestBody = data['httpRequestBody'] meta = {"zyte_api": {"geolocation": geolocation, "httpResponseBody": httpResponseBody, "httpResponseHeaders": httpResponseHeaders, "experimental": experimental, "httpRequestMethod": httpRequestMethod, "httpRequestBody": httpRequestBody}} # Get request with custom headers elif ('httpResponseBody' in data and data['httpResponseBody'] == True) and ('customHttpRequestHeaders' in data): httpResponseBody = data['httpResponseBody'] httpResponseHeaders = True if 'experimental' in data: experimental = data['experimental'] else: experimental = { "responseCookies": False, } if 'geolocation' in data: geolocation = data['geolocation'] else: geolocation = "US" if 'customHttpRequestHeaders' in data: customHttpRequestHeaders = data['customHttpRequestHeaders'] else: customHttpRequestHeaders = [] meta = {"zyte_api": {"customHttpRequestHeaders": customHttpRequestHeaders, "geolocation": geolocation, "httpResponseBody": httpResponseBody, "httpResponseHeaders": httpResponseHeaders, "experimental": experimental}} # Get request with request headers elif ('httpResponseBody' in data and data['httpResponseBody'] == True) and ('requestHeaders' in data): httpResponseBody = data['httpResponseBody'] httpResponseHeaders = True if 'experimental' in data: experimental = data['experimental'] else: experimental = { "responseCookies": False, } if 'geolocation' in data: geolocation = data['geolocation'] else: geolocation = "US" if 'requestHeaders' in data: requestHeaders = data['requestHeaders'] else: requestHeaders = {} meta = {"zyte_api": {"requestHeaders": requestHeaders, "geolocation": geolocation, "httpResponseBody": httpResponseBody, "httpResponseHeaders": httpResponseHeaders, "experimental": experimental}} # BrowserHtml set to True elif 'browserHtml' in data and data['browserHtml'] == True: browserHtml = data['browserHtml'] if 'javascript' in data: javascript = data['javascript'] else: javascript = False if 'screenshot' in data: screenshot = data['screenshot'] else: screenshot = False if 'requestHeaders' in data: requestHeaders = data['requestHeaders'] else: requestHeaders = {} if 'geolocation' in data: geolocation = data['geolocation'] else: geolocation = "US" if 'experimental' in data: experimental = data['experimental'] else: experimental = { "responseCookies": False, } meta = {"zyte_api": {"javascript": javascript, "screenshot": screenshot, "browserHtml": browserHtml, "actions": actions, "requestHeaders": requestHeaders, "geolocation": geolocation, "experimental": experimental}} # Get request without any request headers else: httpResponseBody = True httpResponseHeaders = True if 'experimental' in data: experimental = data['experimental'] else: experimental = { "responseCookies": False, } if 'geolocation' in data: geolocation = data['geolocation'] else: geolocation = "US" meta = {"zyte_api": {"geolocation": geolocation, "httpResponseBody": httpResponseBody, "httpResponseHeaders": httpResponseHeaders, "experimental": experimental}} formatter = { 'url': url } custom_settings = { 'DOWNLOAD_HANDLERS': {"http": "scrapy_zyte_api.ScrapyZyteAPIDownloadHandler", "https": "scrapy_zyte_api.ScrapyZyteAPIDownloadHandler"}, 'DOWNLOADER_MIDDLEWARES': {"scrapy_zyte_api.ScrapyZyteAPIDownloaderMiddleware": 1000}, 'REQUEST_FINGERPRINTER_CLASS': "scrapy_zyte_api.ScrapyZyteAPIRequestFingerprinter", 'TWISTED_REACTOR': "twisted.internet.asyncioreactor.AsyncioSelectorReactor", 'ZYTE_API_KEY': "YOUR_API_KEY" } data = """ import scrapy class SampleQuotesSpider(scrapy.Spider): name = "{spider_name}" custom_settings = {custom_settings} def start_requests(self): yield scrapy.Request(url="{url}", meta={meta}) def parse(self, response): print(response.text) """.format(**formatter, meta=meta, custom_settings=custom_settings, spider_name=spider_name) return data def create_scrapy_project(code, project_name): subprocess.run(["scrapy", "startproject", f"{project_name}"], stdout=subprocess.DEVNULL) # create a new scrapy project. with open(f"{project_name}/{project_name}/spiders/{project_name}.py", "w") as f: # write the code to a file. f.write(code) print("Writing Done!") subprocess.run(["black", f"{project_name}/{project_name}/spiders/{project_name}.py"]) # format the code using black. 
print("Formatting Done!") def main(): try: args = sys.argv[1:] if "--help" in args: usage = ''' Usage: zyte-api-convertor <payload> --project-name <project_name> --spider-name <spider_name> Example: zyte-api-convertor '{"url": "https://httpbin.org/ip", "browserHtml": true, "screenshot": true}' --project-name sample_project --spider-name sample_spider Usage: zyte-api-convertor <payload> --project-name <project_name> Example: zyte-api-convertor '{"url": "https://httpbin.org/ip", "browserHtml": true, "screenshot": true}' --project-name sample_project Usage: zyte-api-convertor <payload> --spider-name <spider_name> Example: zyte-api-convertor '{"url": "https://httpbin.org/ip", "browserHtml": true, "screenshot": true}' --spider-name sample_spider Usage: zyte-api-convertor <payload> Example: zyte-api-convertor '{"url": "https://httpbin.org/ip", "browserHtml": true, "screenshot": true}' ''' print(usage) return elif "--project-name" in args and '--spider-name' in args: try: project_name = args[args.index("--project-name") + 1] spider_name = args[args.index("--spider-name") + 1] if "-" in project_name: print( "Error: Project names must begin with a letter and contain only\n letters, numbers and underscores") return code = payload_to_zyte(spider_name) print("Code Generated!") print("Writing to file...") create_scrapy_project(code, project_name) return except IndexError: print("Please provide a project name and spider name.") return elif "--project-name" in args: try: project_name = args[args.index("--project-name") + 1] spider_name = "sample_zyte_api" if "-" in project_name: print( "Error: Project names must begin with a letter and contain only\n letters, numbers and underscores") return code = payload_to_zyte(spider_name) print("Code Generated!") print("Writing to file...") create_scrapy_project(code, project_name) return except IndexError: print("Please provide a project name.") return elif "--spider-name" in args: try: spider_name = args[args.index("--spider-name") + 1] project_name = "sample_zyte_api_project" code = payload_to_zyte(spider_name) print("Code Generated!") print("Writing to file...") create_scrapy_project(code, project_name) return except IndexError: print("Please provide a spider name.") return elif len(args) < 1: print("Please provide a payload, Payload is Must. Use --help for more info") return else: spider_name = "sample_zyte_api" code = payload_to_zyte(spider_name) print("Code Generated!") print("Writing to file...") project_name = "sample_zyte_api_project" create_scrapy_project(code, project_name) return except IndexError: print("Please provide a payload, Payload is Must. Use --help for more info") return if __name__ == '__main__': main()
zyte-api-convertor
/zyte_api_convertor-1.0.3.tar.gz/zyte_api_convertor-1.0.3/src/zyte_api/convertor.py
convertor.py
Changes ======= 0.4.5 (2023-01-03) ------------------ * w3lib >= 2.1.1 is required in install_requires, to ensure that URLs are escaped properly. * unnecessary ``requests`` library is removed from install_requires * fixed tox 4 support 0.4.4 (2022-12-01) ------------------ * Fixed an issue with submitting URLs which contain unescaped symbols * New "retrying" argument for AsyncClient.__init__, which allows to set custom retrying policy for the client * ``--dont-retry-errors`` argument in the CLI tool 0.4.3 (2022-11-10) ------------------ * Connections are no longer reused between requests. This reduces the amount of ``ServerDisconnectedError`` exceptions. 0.4.2 (2022-10-28) ------------------ * Bump minimum ``aiohttp`` version to 3.8.0, as earlier versions don't support brotli decompression of responses * Declared Python 3.11 support 0.4.1 (2022-10-16) ------------------ * Network errors, like server timeouts or disconnections, are now retried for up to 15 minutes, instead of 5 minutes. 0.4.0 (2022-09-20) ------------------ * Require to install ``Brotli`` as a dependency. This changes the requests to have ``Accept-Encoding: br`` and automatically decompress brotli responses. 0.3.0 (2022-07-29) ------------------ Internal AggStats class is cleaned up: * ``AggStats.n_extracted_queries`` attribute is removed, as it was a duplicate of ``AggStats.n_results`` * ``AggStats.n_results`` is renamed to ``AggStats.n_success`` * ``AggStats.n_input_queries`` is removed as redundant and misleading; AggStats got a new ``AggStats.n_processed`` property instead. This change is backwards incompatible if you used stats directly. 0.2.1 (2022-07-29) ------------------ * ``aiohttp.client_exceptions.ClientConnectorError`` is now treated as a network error and retried accordingly. * Removed the unused ``zyte_api.sync`` module. 0.2.0 (2022-07-14) ------------------ * Temporary download errors are now retried 3 times by default. They were not retried in previous releases. 0.1.4 (2022-05-21) ------------------ This release contains usability improvements to the command-line script: * Instead of ``python -m zyte_api`` you can now run it as ``zyte-api``; * the type of the input file (``--intype`` argument) is guessed now, based on file extension and content; .jl, .jsonl and .txt files are supported. 0.1.3 (2022-02-03) ------------------ * Minor documentation fix * Remove support for Python 3.6 * Added support for Python 3.10 0.1.2 (2021-11-10) ------------------ * Default timeouts changed 0.1.1 (2021-11-01) ------------------ * CHANGES.rst updated properly 0.1.0 (2021-11-01) ------------------ * Initial release.
zyte-api
/zyte-api-0.4.5.tar.gz/zyte-api-0.4.5/CHANGES.rst
CHANGES.rst
=============== python-zyte-api =============== .. image:: https://img.shields.io/pypi/v/zyte-api.svg :target: https://pypi.python.org/pypi/zyte-api :alt: PyPI Version .. image:: https://img.shields.io/pypi/pyversions/zyte-api.svg :target: https://pypi.python.org/pypi/zyte-api :alt: Supported Python Versions .. image:: https://github.com/zytedata/python-zyte-api/actions/workflows/test.yml/badge.svg :target: https://github.com/zytedata/python-zyte-api/actions/workflows/test.yml :alt: Build Status .. image:: https://codecov.io/github/zytedata/zyte-api/coverage.svg?branch=master :target: https://codecov.io/gh/zytedata/zyte-api :alt: Coverage report Python client libraries for `Zyte API`_. Command-line utility and asyncio-based library are provided by this package. Installation ============ :: pip install zyte-api ``zyte-api`` requires Python 3.7+. API key ======= Make sure you have an API key for the `Zyte API`_ service. You can set ``ZYTE_API_KEY`` environment variable with the key to avoid passing it around explicitly. Read the `documentation <https://python-zyte-api.readthedocs.io>`_ for more information. License is BSD 3-clause. * Documentation: https://python-zyte-api.readthedocs.io * Source code: https://github.com/zytedata/python-zyte-api * Issue tracker: https://github.com/zytedata/python-zyte-api/issues .. _Zyte API: https://docs.zyte.com/zyte-api/get-started.html
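A minimal asyncio sketch, assuming ``ZYTE_API_KEY`` is set in the environment (the response keys depend on the query; ``browserHtml`` is only present when requested)::

    import asyncio

    from zyte_api.aio.client import AsyncClient, create_session


    async def main():
        client = AsyncClient()  # picks up the API key from ZYTE_API_KEY
        async with create_session() as session:
            # A single Zyte API query; fields mirror the API request schema.
            result = await client.request_raw(
                {"url": "https://httpbin.org/ip", "browserHtml": True},
                session=session,
            )
        print(result.get("browserHtml", "")[:200])


    asyncio.run(main())

See ``AsyncClient`` and ``create_session`` in ``zyte_api/aio/client.py`` for the available parameters.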
zyte-api
/zyte-api-0.4.5.tar.gz/zyte-api-0.4.5/README.rst
README.rst
import argparse import json import sys import asyncio import logging import random import tqdm from tenacity import retry_if_exception from zyte_api.aio.client import ( create_session, AsyncClient, ) from zyte_api.constants import ENV_VARIABLE, API_URL from zyte_api.utils import _guess_intype from zyte_api.aio.retry import RetryFactory, _is_throttling_error class DontRetryErrorsFactory(RetryFactory): retry_condition = retry_if_exception(_is_throttling_error) logger = logging.getLogger('zyte_api') _UNSET = object() async def run(queries, out, *, n_conn, stop_on_errors, api_url, api_key=None, retry_errors=True): retrying = None if retry_errors else DontRetryErrorsFactory().build() client = AsyncClient(n_conn=n_conn, api_key=api_key, api_url=api_url, retrying=retrying) async with create_session(connection_pool_size=n_conn) as session: result_iter = client.request_parallel_as_completed( queries=queries, session=session, ) pbar = tqdm.tqdm(smoothing=0, leave=True, total=len(queries), miniters=1, unit="url") pbar.set_postfix_str(str(client.agg_stats)) try: for fut in result_iter: try: result = await fut json.dump(result, out, ensure_ascii=False) out.write("\n") out.flush() pbar.update() except Exception as e: if stop_on_errors: raise logger.error(str(e)) finally: pbar.set_postfix_str(str(client.agg_stats)) finally: pbar.close() logger.info(client.agg_stats.summary()) logger.info(f"\nAPI error types:\n{client.agg_stats.api_error_types.most_common()}") logger.info(f"\nStatus codes:\n{client.agg_stats.status_codes.most_common()}") logger.info(f"\nException types:\n{client.agg_stats.exception_types.most_common()}") def read_input(input_fp, intype): assert intype in {"txt", "jl", _UNSET} lines = input_fp.readlines() if intype is _UNSET: intype = _guess_intype(input_fp.name, lines) if intype == "txt": urls = [u.strip() for u in lines if u.strip()] records = [{"url": url, "browserHtml": True} for url in urls] else: records = [ json.loads(line.strip()) for line in lines if line.strip() ] # Automatically replicating the url in echoData to being able to # to match URLs with content in the responses for record in records: record.setdefault("echoData", record.get("url")) return records def _main(program_name='zyte-api'): """ Process urls from input file through Zyte API """ p = argparse.ArgumentParser( prog=program_name, description=""" Process input URLs from a file using Zyte API. """, ) p.add_argument("input", type=argparse.FileType("r", encoding='utf8'), help="Input file with urls, url per line by default. The " "Format can be changed using `--intype` argument.") p.add_argument("--intype", default=_UNSET, choices=["txt", "jl"], help="Type of the input file. " "Allowed values are 'txt' (1 URL per line) and 'jl' " "(JSON Lines file, each object describing the " "parameters of a request). " "If not specified, the input type is guessed based on " "the input file name extension (.jl, .jsonl, .txt) or " "content, and assumed to be txt if guessing fails.") p.add_argument("--limit", type=int, help="Max number of URLs to take from the input") p.add_argument("--output", "-o", default=sys.stdout, type=argparse.FileType("w", encoding='utf8'), help=".jsonlines file to store extracted data. " "By default, results are printed to stdout.") p.add_argument("--n-conn", type=int, default=20, help="number of connections to the API server " "(default: %(default)s)") p.add_argument("--api-key", help="Zyte API key. " "You can also set %s environment variable instead " "of using this option." 
% ENV_VARIABLE) p.add_argument("--api-url", help="Zyte API endpoint (default: %(default)s)", default=API_URL) p.add_argument("--loglevel", "-L", default="INFO", choices=["DEBUG", "INFO", "WARNING", "ERROR"], help="log level (default: %(default)s)") p.add_argument("--shuffle", help="Shuffle input URLs", action="store_true") p.add_argument("--dont-retry-errors", help="Don't retry request and network errors", action="store_true") args = p.parse_args() logging.basicConfig( stream=sys.stderr, level=getattr(logging, args.loglevel) ) queries = read_input(args.input, args.intype) if args.shuffle: random.shuffle(queries) if args.limit: queries = queries[:args.limit] logger.info(f"Loaded {len(queries)} urls from {args.input.name}; shuffled: {args.shuffle}") logger.info(f"Running Zyte API (connections: {args.n_conn})") loop = asyncio.get_event_loop() coro = run(queries, out=args.output, n_conn=args.n_conn, stop_on_errors=False, api_url=args.api_url, api_key=args.api_key, retry_errors=not args.dont_retry_errors) loop.run_until_complete(coro) loop.close() if __name__ == '__main__': _main(program_name='python -m zyte_api')
zyte-api
/zyte-api-0.4.5.tar.gz/zyte-api-0.4.5/zyte_api/__main__.py
__main__.py
from typing import Optional from collections import Counter import functools import time import attr from runstats import Statistics from zyte_api.errors import ParsedError def zero_on_division_error(meth): @functools.wraps(meth) def wrapper(*args, **kwargs): try: return meth(*args, **kwargs) except ZeroDivisionError: return 0 return wrapper class AggStats: def __init__(self): self.time_connect_stats = Statistics() self.time_total_stats = Statistics() self.n_success = 0 # number of successful results returned to the user self.n_fatal_errors = 0 # number of errors returned to the user, after all retries self.n_attempts = 0 # total amount of requests made to Zyte API, including retries self.n_429 = 0 # number of 429 (throttling) responses self.n_errors = 0 # number of errors, including errors which were retried self.status_codes = Counter() self.exception_types = Counter() self.api_error_types = Counter() def __str__(self): return "conn:{:0.2f}s, resp:{:0.2f}s, throttle:{:.1%}, err:{}+{}({:.1%}) | success:{}/{}({:.1%})".format( self.time_connect_stats.mean(), self.time_total_stats.mean(), self.throttle_ratio(), self.n_errors - self.n_fatal_errors, self.n_fatal_errors, self.error_ratio(), self.n_success, self.n_processed, self.success_ratio() ) def summary(self): return ( "\n" + "Summary\n" + "-------\n" + "Mean connection time: {:0.2f}\n".format(self.time_connect_stats.mean()) + "Mean response time: {:0.2f}\n".format(self.time_total_stats.mean()) + "Throttle ratio: {:0.1%}\n".format(self.throttle_ratio()) + "Attempts: {}\n".format(self.n_attempts) + "Errors: {:0.1%}, fatal: {}, non fatal: {}\n".format( self.error_ratio(), self.n_fatal_errors, self.n_errors - self.n_fatal_errors) + "Successful URLs: {} of {}\n".format( self.n_success, self.n_processed) + "Success ratio: {:0.1%}\n".format(self.success_ratio()) ) @zero_on_division_error def throttle_ratio(self): return self.n_429 / self.n_attempts @zero_on_division_error def error_ratio(self): return self.n_errors / self.n_attempts @zero_on_division_error def success_ratio(self): return self.n_success / self.n_processed @property def n_processed(self): """ Total number of processed URLs """ return self.n_success + self.n_fatal_errors @attr.s class ResponseStats: _start = attr.ib(repr=False) # type: float # Wait time, before this request is sent. Can be large in case of retries. time_delayed = attr.ib(default=None) # type: Optional[float] # Time between sending a request and having a connection established time_connect = attr.ib(default=None) # type: Optional[float] # Time to read & decode the response time_read = attr.ib(default=None) # type: Optional[float] # time to get an exception (usually, a network error) time_exception = attr.ib(default=None) # type: Optional[float] # Total time to process the response, excluding the wait time caused # by retries. 
time_total = attr.ib(default=None) # type: Optional[float] # HTTP status code status = attr.ib(default=None) # type: Optional[int] # error (parsed), in case of error response error = attr.ib(default=None) # type: Optional[ParsedError] # exception raised exception = attr.ib(default=None) # type: Optional[Exception] @classmethod def create(cls, start_global): start = time.perf_counter() return cls( start=start, time_delayed=start - start_global, ) def record_connected(self, status: int, agg_stats: AggStats): self.status = status self.time_connect = time.perf_counter() - self._start agg_stats.time_connect_stats.push(self.time_connect) agg_stats.status_codes[self.status] += 1 def record_read(self, agg_stats: Optional[AggStats] = None): now = time.perf_counter() self.time_total = now - self._start self.time_read = self.time_total - (self.time_connect or 0) if agg_stats: agg_stats.time_total_stats.push(self.time_total) def record_exception(self, exception: Exception, agg_stats: AggStats): self.time_exception = time.perf_counter() - self._start self.exception = exception agg_stats.status_codes[0] += 1 agg_stats.exception_types[exception.__class__] += 1 def record_request_error(self, error_body: bytes, agg_stats: AggStats): self.error = ParsedError.from_body(error_body) if self.status == 429: # XXX: status must be set already! agg_stats.n_429 += 1 else: agg_stats.n_errors += 1 agg_stats.api_error_types[self.error.type] += 1
zyte-api
/zyte-api-0.4.5.tar.gz/zyte-api-0.4.5/zyte_api/stats.py
stats.py
import asyncio import time from functools import partial from typing import Optional, Iterator, List import aiohttp from aiohttp import TCPConnector from tenacity import AsyncRetrying from .errors import RequestError from .retry import zyte_api_retrying from ..apikey import get_apikey from ..constants import API_URL, API_TIMEOUT from ..stats import AggStats, ResponseStats from ..utils import _process_query, user_agent # 120 seconds is probably too long, but we are concerned about the case with # many concurrent requests and some processing logic running in the same reactor, # thus, saturating the CPU. This will make timeouts more likely. AIO_API_TIMEOUT = aiohttp.ClientTimeout(total=API_TIMEOUT + 120) def create_session(connection_pool_size=100, **kwargs) -> aiohttp.ClientSession: """ Create a session with parameters suited for Zyte API """ kwargs.setdefault('timeout', AIO_API_TIMEOUT) if "connector" not in kwargs: kwargs["connector"] = TCPConnector(limit=connection_pool_size, force_close=True) return aiohttp.ClientSession(**kwargs) def _post_func(session): """ Return a function to send a POST request """ if session is None: return partial(aiohttp.request, method='POST', timeout=AIO_API_TIMEOUT) else: return session.post class AsyncClient: def __init__(self, *, api_key=None, api_url=API_URL, n_conn=15, retrying: Optional[AsyncRetrying] = None, ): self.api_key = get_apikey(api_key) self.api_url = api_url self.n_conn = n_conn self.agg_stats = AggStats() self.retrying = retrying or zyte_api_retrying async def request_raw(self, query: dict, *, endpoint: str = 'extract', session=None, handle_retries=True, retrying: Optional[AsyncRetrying] = None, ): retrying = retrying or self.retrying post = _post_func(session) auth = aiohttp.BasicAuth(self.api_key) headers = {'User-Agent': user_agent(aiohttp), 'Accept-Encoding': 'br'} response_stats = [] start_global = time.perf_counter() async def request(): stats = ResponseStats.create(start_global) self.agg_stats.n_attempts += 1 post_kwargs = dict( url=self.api_url + endpoint, json=_process_query(query), auth=auth, headers=headers, ) try: async with post(**post_kwargs) as resp: stats.record_connected(resp.status, self.agg_stats) if resp.status >= 400: content = await resp.read() resp.release() stats.record_read() stats.record_request_error(content, self.agg_stats) raise RequestError( request_info=resp.request_info, history=resp.history, status=resp.status, message=resp.reason, headers=resp.headers, response_content=content ) response = await resp.json() stats.record_read(self.agg_stats) return response except Exception as e: if not isinstance(e, RequestError): self.agg_stats.n_errors += 1 stats.record_exception(e, agg_stats=self.agg_stats) raise finally: response_stats.append(stats) if handle_retries: request = retrying.wraps(request) try: # Try to make a request result = await request() self.agg_stats.n_success += 1 except Exception: self.agg_stats.n_fatal_errors += 1 raise return result def request_parallel_as_completed(self, queries: List[dict], *, endpoint: str = 'extract', session: Optional[aiohttp.ClientSession] = None, ) -> Iterator[asyncio.Future]: """ Send multiple requests to Zyte API in parallel. Return an `asyncio.as_completed` iterator. ``queries`` is a list of requests to process (dicts). ``session`` is an optional aiohttp.ClientSession object. Set the session TCPConnector limit to a value greater than the number of connections. 
""" sem = asyncio.Semaphore(self.n_conn) async def _request(query): async with sem: return await self.request_raw(query, endpoint=endpoint, session=session) return asyncio.as_completed([_request(query) for query in queries])
zyte-api
/zyte-api-0.4.5.tar.gz/zyte-api-0.4.5/zyte_api/aio/client.py
client.py
import asyncio import logging from aiohttp import client_exceptions from tenacity import ( wait_chain, wait_fixed, wait_random_exponential, wait_random, stop_after_attempt, stop_after_delay, retry_if_exception, RetryCallState, before_sleep_log, after_log, AsyncRetrying, before_log, retry_base, ) from tenacity.stop import stop_never from .errors import RequestError logger = logging.getLogger(__name__) _NETWORK_ERRORS = ( asyncio.TimeoutError, # could happen while reading the response body client_exceptions.ClientResponseError, client_exceptions.ClientOSError, client_exceptions.ServerConnectionError, client_exceptions.ServerDisconnectedError, client_exceptions.ServerTimeoutError, client_exceptions.ClientPayloadError, client_exceptions.ClientConnectorSSLError, client_exceptions.ClientConnectorError, ) def _is_network_error(exc: BaseException) -> bool: if isinstance(exc, RequestError): # RequestError is ClientResponseError, which is in the # _NETWORK_ERRORS list, but it should be handled # separately. return False return isinstance(exc, _NETWORK_ERRORS) def _is_throttling_error(exc: BaseException) -> bool: return isinstance(exc, RequestError) and exc.status in (429, 503) def _is_temporary_download_error(exc: BaseException) -> bool: return isinstance(exc, RequestError) and exc.status == 520 class RetryFactory: """ Build custom retry configuration """ retry_condition: retry_base = ( retry_if_exception(_is_throttling_error) | retry_if_exception(_is_network_error) | retry_if_exception(_is_temporary_download_error) ) # throttling throttling_wait = wait_chain( # always wait 20-40s first wait_fixed(20) + wait_random(0, 20), # wait 20-40s again wait_fixed(20) + wait_random(0, 20), # wait from 30 to 630s, with full jitter and exponentially # increasing max wait time wait_fixed(30) + wait_random_exponential(multiplier=1, max=600) ) # connection errors, other client and server failures network_error_wait = ( # wait from 3s to ~1m wait_random(3, 7) + wait_random_exponential(multiplier=1, max=55) ) temporary_download_error_wait = network_error_wait throttling_stop = stop_never network_error_stop = stop_after_delay(15 * 60) temporary_download_error_stop = stop_after_attempt(4) def wait(self, retry_state: RetryCallState) -> float: assert retry_state.outcome, "Unexpected empty outcome" exc = retry_state.outcome.exception() assert exc, "Unexpected empty exception" if _is_throttling_error(exc): return self.throttling_wait(retry_state=retry_state) elif _is_network_error(exc): return self.network_error_wait(retry_state=retry_state) elif _is_temporary_download_error(exc): return self.temporary_download_error_wait(retry_state=retry_state) else: raise RuntimeError("Invalid retry state exception: %s" % exc) def stop(self, retry_state: RetryCallState) -> bool: assert retry_state.outcome, "Unexpected empty outcome" exc = retry_state.outcome.exception() assert exc, "Unexpected empty exception" if _is_throttling_error(exc): return self.throttling_stop(retry_state) elif _is_network_error(exc): return self.network_error_stop(retry_state) elif _is_temporary_download_error(exc): return self.temporary_download_error_stop(retry_state) else: raise RuntimeError("Invalid retry state exception: %s" % exc) def reraise(self) -> bool: return True def build(self) -> AsyncRetrying: return AsyncRetrying( wait=self.wait, retry=self.retry_condition, stop=self.stop, reraise=self.reraise(), before=before_log(logger, logging.DEBUG), after=after_log(logger, logging.DEBUG), before_sleep=before_sleep_log(logger, logging.DEBUG), ) 
zyte_api_retrying: AsyncRetrying = RetryFactory().build()
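A sketch of customizing the retry policy through ``RetryFactory`` and passing the result to ``AsyncClient``; the limits chosen below are only examples:

from tenacity import stop_after_attempt, stop_after_delay

from zyte_api.aio.client import AsyncClient
from zyte_api.aio.retry import RetryFactory


class ShortRetryFactory(RetryFactory):
    # Example limits: stop retrying network errors after 5 minutes and
    # temporary download errors (HTTP 520) after 2 attempts.
    network_error_stop = stop_after_delay(5 * 60)
    temporary_download_error_stop = stop_after_attempt(2)


client = AsyncClient(retrying=ShortRetryFactory().build())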
zyte-api
/zyte-api-0.4.5.tar.gz/zyte-api-0.4.5/zyte_api/aio/retry.py
retry.py
""" Basic command-line interface for Zyte Automatic Extraction. """ import argparse import json import sys import asyncio import logging import random import tqdm from autoextract import Request from autoextract.aio import ( request_parallel_as_completed, create_session ) from autoextract.stats import AggStats from autoextract.aio.client import Result from autoextract.constants import ENV_VARIABLE from autoextract.request import Query logger = logging.getLogger('autoextract') async def run(query: Query, out, n_conn, batch_size, stop_on_errors=False, api_key=None, api_endpoint=None, max_query_error_retries=0, disable_cert_validation=False): agg_stats = AggStats() async with create_session(connection_pool_size=n_conn, disable_cert_validation=disable_cert_validation) as session: result_iter = request_parallel_as_completed( query=query, n_conn=n_conn, batch_size=batch_size, session=session, api_key=api_key, endpoint=api_endpoint, agg_stats=agg_stats, max_query_error_retries=max_query_error_retries ) pbar = tqdm.tqdm(smoothing=0, leave=True, total=len(query), miniters=1, unit="url") pbar.set_postfix_str(str(agg_stats)) try: for fut in result_iter: try: batch_result: Result = await fut for res in batch_result: json.dump(res, out, ensure_ascii=False) out.write("\n") out.flush() pbar.update() except Exception as e: if stop_on_errors: raise logger.error(str(e)) finally: pbar.set_postfix_str(str(agg_stats)) finally: pbar.close() logger.info(agg_stats.summary()) def read_input(input_fp, intype, page_type): assert intype in {"txt", "jl", ""} if intype == "txt": urls = [u.strip() for u in input_fp.readlines() if u.strip()] query = [Request(url, pageType=page_type) for url in urls] return query elif intype == "jl": records = [ json.loads(line.strip()) for line in input_fp.readlines() if line.strip() ] for rec in records: rec.setdefault("pageType", page_type) if not isinstance(rec.get("meta", ""), (str, type(None))): raise TypeError("meta must be str or null, got {!r}".format(rec['meta'])) return records if __name__ == '__main__': """ Process urls from input file through Zyte Automatic Extraction """ p = argparse.ArgumentParser( prog='python -m autoextract', description=""" Process input URLs from a file using Zyte Automatic Extraction. """, ) p.add_argument("input", type=argparse.FileType("r", encoding='utf8'), help="Input file with urls, url per line by default. The " "Format can be changed using `--intype` argument.") p.add_argument("--intype", default="txt", choices=["txt", "jl"], help='Type of the input file (default: %(default)s). ' 'Allowed values are "txt": input should be one ' 'URL per line, and "jl": input should be a jsonlines ' 'file, with {"url": "...", "meta": ...,} dicts; see ' 'https://docs.zyte.com/automatic-extraction.html#requests ' 'for the data format description.') p.add_argument("--output", "-o", default=sys.stdout, type=argparse.FileType("w", encoding='utf8'), help=".jsonlines file to store extracted data. " "By default, results are printed to stdout.") p.add_argument("--n-conn", type=int, default=20, help="number of connections to the API server " "(default: %(default)s)") p.add_argument("--batch-size", type=int, default=2, help="batch size (default: %(default)s)") p.add_argument("--page-type", "-t", default="article", help="type of the pages in the input file, " "e.g. article, product, jobPosting " "(default: %(default)s)") p.add_argument("--api-key", help="Zyte Automatic Extraction API key. " "You can also set %s environment variable instead " "of using this option." 
% ENV_VARIABLE) p.add_argument("--api-endpoint", help="Zyte Automatic Extraction API endpoint.") p.add_argument("--loglevel", "-L", default="INFO", choices=["DEBUG", "INFO", "WARNING", "ERROR"], help="log level") p.add_argument("--shuffle", help="Shuffle input URLs", action="store_true") p.add_argument("--max-query-error-retries", type=int, default=0, help="Max number of Query-level error retries. " "Enable Query-level error retries to increase the " "success rate at the cost of more requests being " "performed. It is recommended if you are interested " "in a higher success rate.") p.add_argument("--disable-cert-validation", action="store_true", help="Disable TLS certificate validation in HTTPS requests. " "Any certificate will be accepted. Consider the security consequences.") args = p.parse_args() logging.basicConfig(level=getattr(logging, args.loglevel)) query = read_input(args.input, args.intype, args.page_type) if args.shuffle: random.shuffle(query) logger.info(f"Loaded {len(query)} urls from {args.input.name}; shuffled: {args.shuffle}") logger.info(f"Running Zyte Automatic Extraction (connections: {args.n_conn}, " f"batch size: {args.batch_size}, page type: {args.page_type})") loop = asyncio.get_event_loop() coro = run(query, out=args.output, n_conn=args.n_conn, batch_size=args.batch_size, stop_on_errors=False, api_key=args.api_key, api_endpoint=args.api_endpoint, max_query_error_retries=args.max_query_error_retries, disable_cert_validation=args.disable_cert_validation) loop.run_until_complete(coro) loop.close()
zyte-autoextract
/zyte_autoextract-0.7.1-py3-none-any.whl/autoextract/__main__.py
__main__.py
from typing import Optional import functools import time import attr from runstats import Statistics def zero_on_division_error(meth): @functools.wraps(meth) def wrapper(*args, **kwargs): try: return meth(*args, **kwargs) except ZeroDivisionError: return 0 return wrapper class AggStats: def __init__(self): self.time_connect_stats = Statistics() self.time_total_stats = Statistics() self.n_results = 0 self.n_fatal_errors = 0 self.n_attempts = 0 self.n_429 = 0 self.n_errors = 0 self.n_input_queries = 0 self.n_extracted_queries = 0 # Queries answered without any type of error self.n_query_responses = 0 self.n_billable_query_responses = 0 # Some errors are also billed def __str__(self): return "conn:{:0.2f}s, resp:{:0.2f}s, throttle:{:.1%}, err:{}+{}({:.1%}) | success:{}/{}({:.1%})".format( self.time_connect_stats.mean(), self.time_total_stats.mean(), self.throttle_ratio(), self.n_errors - self.n_fatal_errors, self.n_fatal_errors, self.error_ratio(), self.n_extracted_queries, self.n_input_queries, self.success_ratio() ) def summary(self): return ( "\n" + "Summary\n" + "-------\n" + "Mean connection time: {:0.2f}\n".format(self.time_connect_stats.mean()) + "Mean response time: {:0.2f}\n".format(self.time_total_stats.mean()) + "Throttle ratio: {:0.1%}\n".format(self.throttle_ratio()) + "Attempts: {}\n".format(self.n_attempts) + "Errors: {:0.1%}, fatal: {}, non fatal: {}\n".format( self.error_ratio(), self.n_fatal_errors, self.n_errors - self.n_fatal_errors) + "Successful URLs: {} of {}\n".format( self.n_extracted_queries, self.n_input_queries) + "Success ratio: {:0.1%}\n".format(self.success_ratio()) + "Billable query responses: {} of {}\n".format( self.n_billable_query_responses, self.n_query_responses) ) @zero_on_division_error def throttle_ratio(self): return self.n_429 / self.n_attempts @zero_on_division_error def error_ratio(self): return self.n_errors / self.n_attempts @zero_on_division_error def success_ratio(self): return self.n_extracted_queries / self.n_input_queries @attr.s class ResponseStats: _start = attr.ib(repr=False) # type: float # Wait time, before this request is sent. Can be large in case of retries. time_delayed = attr.ib(default=None) # type: Optional[float] # Time between sending a request and having a connection established time_connect = attr.ib(default=None) # type: Optional[float] # Time to read & decode the response time_read = attr.ib(default=None) # type: Optional[float] # Total time to process the response, excluding the wait time caused # by retries. time_total = attr.ib(default=None) # type: Optional[float] # HTTP status code status = attr.ib(default=None) # type: Optional[int] # response content, in case of error response error = attr.ib(default=None) # type: Optional[bytes] @classmethod def create(cls, start_global): start = time.perf_counter() return cls( start=start, time_delayed=start - start_global, ) def record_connected(self, agg_stats: AggStats): self.time_connect = time.perf_counter() - self._start agg_stats.time_connect_stats.push(self.time_connect) def record_read(self, agg_stats: Optional[AggStats]=None): now = time.perf_counter() self.time_total = now - self._start self.time_read = self.time_total - (self.time_connect or 0) if agg_stats: agg_stats.time_total_stats.push(self.time_total)
zyte-autoextract
/zyte_autoextract-0.7.1-py3-none-any.whl/autoextract/stats.py
stats.py
import asyncio import time import warnings from typing import Optional, Dict, List, Iterator from functools import partial import aiohttp from aiohttp import TCPConnector from tenacity import AsyncRetrying from autoextract.constants import API_ENDPOINT, API_TIMEOUT from autoextract.apikey import get_apikey from autoextract.utils import chunks, user_agent from autoextract.request import Query, query_as_dict_list from autoextract.stats import ResponseStats, AggStats from .retry import autoextract_retrying from .errors import RequestError, _QueryError, is_billable_error_msg AIO_API_TIMEOUT = aiohttp.ClientTimeout(total=API_TIMEOUT + 60, sock_read=API_TIMEOUT + 30, sock_connect=10) def create_session(connection_pool_size=100, disable_cert_validation=False, **kwargs) -> aiohttp.ClientSession: """ Create a session with parameters suited for Zyte Automatic Extraction """ kwargs.setdefault('timeout', AIO_API_TIMEOUT) if "connector" not in kwargs: kwargs["connector"] = TCPConnector(limit=connection_pool_size, ssl=False if disable_cert_validation else None) return aiohttp.ClientSession(**kwargs) class Result(List[Dict]): retry_stats: Optional[Dict] = None response_stats: Optional[List[ResponseStats]] = None class RequestProcessor: """Help keeping track of query results and errors between retries. After initializing your Request Processor, you may use it for just a single or for multiple requests. This class is especially useful because it stores successful queries to avoid repeating them when retrying requests. """ def __init__(self, query: Query, max_retries: int = 0): """Reset temporary data structures and initialize them""" self._reset() self.pending_queries = query_as_dict_list(query) self._max_retries = max_retries self._complete_queries: List[Dict] = list() self._n_extracted_queries: int = 0 self._n_query_responses: int = 0 self._n_billable_query_responses: int = 0 def _reset(self): """Clear temporary variables between retries""" self.pending_queries: List[Dict] = list() self._retriable_queries: List[Dict] = list() self._retriable_query_exceptions: List[Dict] = list() def _enqueue_error(self, query_result, query_exception): """Enqueue Query-level error. Enqueued errors could be: - used in combination with successes with `get_latest_results` - retried using `pending_requests` """ self._retriable_queries.append(query_result) self._retriable_query_exceptions.append(query_exception) user_query = query_result["query"]["userQuery"] # Temporary workaround for a backend issue. Won't be needed soon. if 'userAgent' in user_query: del user_query['userAgent'] self.pending_queries.append(user_query) def get_latest_results(self): """Get latest results (errors + successes). This method could be used to retrieve results when an exception is raised while processing results. """ return self._complete_queries + self._retriable_queries def extracted_queries_count(self): """Number of queries extracted without any error""" return self._n_extracted_queries def query_responses_count(self): """Number of query responses received""" return self._n_query_responses def billable_query_responses_count(self): """Number of billable query responses (some errors are billable)""" return self._n_billable_query_responses def process_results(self, query_results): """Process query results. Return successful queries and also failed ones. If `self._max_retries` is greater than 0, this method might raise a `QueryError` exception. If multiple `QueryError` exceptions are parsed, the one with the longest timeout is raised. 
Successful requests are saved in `self._complete_queries` along with errors that cannot be retried, and they are kept between executions while retriable failures are saved in `self._retriable_queries`. Queries saved in `self._retriable_queries` are moved to `self.pending_queries` between executions. You can use the first or the n-th result: - You can get all queries successfully answered on the first try. - You can get all queries successfully answered by the n-th try. - You may stop with a partial number of successful queries. """ self._reset() for query_result in query_results: self._n_query_responses += 1 if "error" not in query_result: self._n_extracted_queries += 1 self._n_billable_query_responses += 1 else: if is_billable_error_msg(query_result["error"]): self._n_billable_query_responses += 1 if self._max_retries and "error" in query_result: query_exception = _QueryError.from_query_result( query_result, self._max_retries) if query_exception.retriable: self._enqueue_error(query_result, query_exception) continue self._complete_queries.append(query_result) if self._retriable_query_exceptions: # Prioritize exceptions that have retry seconds defined # and get the one with the longest timeout value exception_with_longest_timeout = max( self._retriable_query_exceptions, key=lambda exc: exc.retry_seconds ) raise exception_with_longest_timeout return self.get_latest_results() async def request_raw(query: Query, api_key: Optional[str] = None, endpoint: Optional[str] = None, *, handle_retries: bool = True, max_query_error_retries: int = 0, session: Optional[aiohttp.ClientSession] = None, agg_stats: AggStats = None, headers: Optional[Dict[str, str]] = None, retrying: Optional[AsyncRetrying] = None ) -> Result: """ Send a request to Zyte Automatic Extraction API. ``query`` is a list of dicts or Request objects, as described in the API docs (see https://docs.zyte.com/automatic-extraction.html). ``api_key`` is your Zyte Automatic Extraction API key. If not set, it is taken from ZYTE_AUTOEXTRACT_KEY environment variable. ``session`` is an optional aiohttp.ClientSession object; use it to enable HTTP Keep-Alive and to control connection pool size. This function retries http 429 errors and network errors by default; this allows to handle server-side throttling properly. Use ``handle_retries=False`` if you want to disable this behavior (e.g. to implement it yourself). Among others, this function can raise autoextract.errors.RequestError, if there is a Request-level error returned by the API after all attempts were exhausted. Throttling errors are retried indefinitely when handle_retries is True. When ``handle_retries=True``, we could also retry Query-level errors. Use ``max_query_error_retries > 0`` if you want to enable this behavior. ``agg_stats`` argument allows to keep track of various stats; pass an ``AggStats`` instance, and it'll be updated. Additional ``headers`` for the API request can be provided. These headers are included in the request made against the API endpoint: they won't be used in subsequent requests for fetching the URLs provided in the query. The default retry policy can be overridden by providing a custom ``retrying`` object of type :class:`tenacity.AsyncRetrying` that can be built with the class :class:`autoextract.retry.RetryFactory`.
The following is an example that configure 3 attempts for server type errors:: factory = RetryFactory() factory.server_error_stop = stop_after_attempt(3) retrying = factory.build() See :func:`request_parallel_as_completed` for a more high-level interface to send requests in parallel. """ endpoint = API_ENDPOINT if endpoint is None else endpoint retrying = retrying or autoextract_retrying if agg_stats is None: agg_stats = AggStats() # dummy stats, to simplify code if max_query_error_retries and not handle_retries: warnings.warn( "You've specified a max number of Query-level error retries, " "but retries are disabled. Consider passing the handle_retries " "argument as True.", stacklevel=2 ) # Keep state between executions/retries request_processor = RequestProcessor( query=query, max_retries=max_query_error_retries if handle_retries else 0, ) post = _post_func(session) auth = aiohttp.BasicAuth(get_apikey(api_key)) headers = {'User-Agent': user_agent(aiohttp), **(headers or {})} response_stats = [] start_global = time.perf_counter() async def request(): stats = ResponseStats.create(start_global) agg_stats.n_attempts += 1 post_kwargs = dict( url=endpoint, json=request_processor.pending_queries, auth=auth, headers=headers, ) try: async with post(**post_kwargs) as resp: stats.status = resp.status stats.record_connected(agg_stats) if resp.status >= 400: content = await resp.read() resp.release() stats.record_read() stats.error = content if resp.status == 429: agg_stats.n_429 += 1 else: agg_stats.n_errors += 1 raise RequestError( request_info=resp.request_info, history=resp.history, status=resp.status, message=resp.reason, headers=resp.headers, response_content=content ) response = await resp.json() stats.record_read(agg_stats) return request_processor.process_results(response) except Exception as e: if not isinstance(e, RequestError): agg_stats.n_errors += 1 raise finally: response_stats.append(stats) if handle_retries: request = retrying.wraps(request) try: # Try to make a batch request result = await request() except _QueryError: # If Tenacity fails to retry a _QueryError because the max number of # retries or a timeout was reached, get latest results combining # error and successes and consider it as the final result. result = request_processor.get_latest_results() except Exception: agg_stats.n_fatal_errors += 1 raise finally: agg_stats.n_input_queries += len(query) agg_stats.n_extracted_queries += request_processor.extracted_queries_count() agg_stats.n_billable_query_responses += request_processor.billable_query_responses_count() agg_stats.n_query_responses += request_processor.query_responses_count() result = Result(result) result.response_stats = response_stats if handle_retries and hasattr(request, 'retry'): result.retry_stats = request.retry.statistics # type: ignore agg_stats.n_results += 1 return result def request_parallel_as_completed(query: Query, api_key: Optional[str] = None, *, endpoint: Optional[str] = None, session: Optional[aiohttp.ClientSession] = None, batch_size=1, n_conn=1, agg_stats: AggStats = None, max_query_error_retries=0, ) -> Iterator[asyncio.Future]: """ Send multiple requests to Zyte Automatic Extraction API in parallel. Return an `asyncio.as_completed` iterator. ``query`` is a list of requests to process (autoextract.Request instances or dicts). ``api_key`` is your Zyte Automatic Extraction API key. If not set, it is taken from ZYTE_AUTOEXTRACT_KEY environment variable. ``n_conn`` is a number of parallel connections to a server. 
``batch_size`` is an amount of queries sent in a batch in each connection. Higher batch_size increase response time, but allows to achieve the same throughput with less connections to server. For example, if your API key has a limit of 3RPS, and average response time you observe for your websites is 10s, then to get to these 3RPS you may set e.g. batch_size=2, n_conn=15 - this would allow to process 30 requests in parallel. ``session`` is an optional aiohttp.ClientSession object; use it to enable HTTP Keep-Alive. ``agg_stats`` argument allows to keep track of various stats; pass an ``AggStats`` instance, and it'll be updated. Use ``max_query_error_retries > 0`` if you want Query-level errors to be retried. """ sem = asyncio.Semaphore(n_conn) async def _request(batch_query): async with sem: return await request_raw(batch_query, api_key=api_key, endpoint=endpoint, session=session, agg_stats=agg_stats, max_query_error_retries=max_query_error_retries, ) batches = chunks(query, batch_size) return asyncio.as_completed([_request(batch) for batch in batches]) def _post_func(session): """ Return a function to send a POST request """ if session is None: return partial(aiohttp.request, method='POST', timeout=AIO_API_TIMEOUT) else: return session.post
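A usage sketch for ``request_parallel_as_completed``; the page type, batch size and URLs below are illustrative, and the API key is read from ``ZYTE_AUTOEXTRACT_KEY`` when not passed explicitly:

import asyncio

from autoextract import Request
from autoextract.aio import create_session, request_parallel_as_completed


async def extract(urls):
    query = [Request(url, pageType="article") for url in urls]
    async with create_session() as session:
        for fut in request_parallel_as_completed(
            query, batch_size=2, n_conn=15, session=session
        ):
            batch_result = await fut
            for row in batch_result:
                # For article queries the extracted data, when present,
                # sits under the "article" key of each result row.
                print(row.get("article", {}).get("headline"))


asyncio.run(extract(["https://example.com/blog/post-1"]))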
zyte-autoextract
/zyte_autoextract-0.7.1-py3-none-any.whl/autoextract/aio/client.py
client.py
import json import logging import re from json import JSONDecodeError from typing import Optional from aiohttp import ClientResponseError logger = logging.getLogger(__name__) class DomainOccupied: DOMAIN_OCCUPIED_REGEX = re.compile( r".*domain (.+) is occupied, please retry in (.+) seconds.*", re.IGNORECASE ) DEFAULT_RETRY_SECONDS = 5 * 60 # 5 minutes def __init__(self, domain: str, retry_seconds: float): self.domain = domain self.retry_seconds = retry_seconds @classmethod def from_message(cls, message: str) -> Optional["DomainOccupied"]: match = cls.DOMAIN_OCCUPIED_REGEX.match(message) if not match: return None domain = match.group(1) try: retry_seconds = float(match.group(2)) except ValueError: logger.warning( f"Could not extract retry seconds " f"from Domain Occupied error message: {message}" ) retry_seconds = cls.DEFAULT_RETRY_SECONDS return cls(domain=domain, retry_seconds=retry_seconds) class RequestError(ClientResponseError): """ Exception which is raised when Request-level error is returned. In contrast with ClientResponseError, it allows to inspect response content. https://docs.zyte.com/automatic-extraction.html#request-level """ def __init__(self, *args, **kwargs): self.response_content = kwargs.pop("response_content") super().__init__(*args, **kwargs) def error_data(self): """ Parses request error ``response_content`` """ data = {} if self.response_content: try: data = json.loads(self.response_content.decode("utf-8")) if not isinstance(data, dict): data = {} logger.warning( "Wrong JSON format for RequestError content '{}'. " "A dict was expected".format(self.response_content) ) except (JSONDecodeError, UnicodeDecodeError) as _: # noqa: F841 logger.warning( "Wrong JSON format for RequestError content '{}'".format( self.response_content) ) return data def __str__(self): return f"RequestError: {self.status}, message={self.message}, " \ f"headers={self.headers}, body={self.response_content}" _RETRIABLE_ERR_MSGS = [ "query timed out", "Downloader error: No response", "Downloader error: http50", "Downloader error: 50", "Downloader error: GlobalTimeoutError", "Downloader error: ConnectionResetByPeer", "Proxy error: banned", "Proxy error: internal_error", "Proxy error: nxdomain", "Proxy error: timeout", "Proxy error: ssl_tunnel_error", "Proxy error: msgtimeout", "Proxy error: econnrefused", "Proxy error: connect_timeout", ] _RETRIABLE_ERR_MSGS_RE = re.compile( "|".join(re.escape(msg) for msg in _RETRIABLE_ERR_MSGS), re.IGNORECASE ) def is_retriable_error_msg(msg: Optional[str]) -> bool: """True if the error is one of those that could benefit from a retry""" msg = msg or "" return bool(_RETRIABLE_ERR_MSGS_RE.search(msg)) class _QueryError(Exception): """ Exception which is raised when a Query-level error is returned. 
https://docs.zyte.com/automatic-extraction.html#query-level """ def __init__(self, query: dict, message: str, max_retries: int = 0): self.query = query self.message = message self.max_retries = max_retries self.domain_occupied = DomainOccupied.from_message(message) def __str__(self): return f"_QueryError: query={self.query}, message={self.message}, " \ f"max_retries={self.max_retries}" @classmethod def from_query_result(cls, query_result: dict, max_retries: int = 0): return cls(query=query_result["query"], message=query_result["error"], max_retries=max_retries) @property def retriable(self) -> bool: if self.domain_occupied: return True return is_retriable_error_msg(self.message) @property def retry_seconds(self) -> float: if self.domain_occupied: return self.domain_occupied.retry_seconds return 0.0 # Based on https://docs.zyte.com/automatic-extraction.html#reference _NON_BILLABLE_ERR_MSGS = [ "malformed url", "URL cannot be longer than", "non-HTTP schemas are not allowed", "Extraction not permitted for this URL", ] _NON_BILLABLE_ERR_MSGS_RE = re.compile( "|".join(re.escape(msg) for msg in _NON_BILLABLE_ERR_MSGS), re.IGNORECASE ) def is_billable_error_msg(msg: Optional[str]) -> bool: """ Return true if the error message is billable. Based on https://docs.zyte.com/automatic-extraction.html#reference >>> is_billable_error_msg(None) True >>> is_billable_error_msg("") True >>> is_billable_error_msg(" URL cannot be longer than 4096 UTF-16 characters ") False >>> is_billable_error_msg(" malformed url ") False >>> is_billable_error_msg("Domain example.com is occupied, please retry in 23.5 seconds") False """ msg = msg or "" is_domain_ocupied = bool(DomainOccupied.from_message(msg)) is_no_billable = (_NON_BILLABLE_ERR_MSGS_RE.search(msg) or is_domain_ocupied) return not is_no_billable ACCOUNT_DISABLED_ERROR_TYPE = "http://errors.xod.scrapinghub.com/account-disabled.html"
zyte-autoextract
/zyte_autoextract-0.7.1-py3-none-any.whl/autoextract/aio/errors.py
errors.py
import asyncio import logging from aiohttp import client_exceptions from tenacity import ( wait_chain, wait_fixed, wait_random_exponential, wait_random, stop_after_attempt, stop_after_delay, retry_if_exception, RetryCallState, before_sleep_log, after_log, AsyncRetrying, ) from tenacity.stop import stop_never from .errors import RequestError, _QueryError logger = logging.getLogger(__name__) _NETWORK_ERRORS = ( asyncio.TimeoutError, # could happen while reading the response body client_exceptions.ClientResponseError, client_exceptions.ClientOSError, client_exceptions.ServerConnectionError, client_exceptions.ServerDisconnectedError, client_exceptions.ServerTimeoutError, client_exceptions.ClientPayloadError, client_exceptions.ClientConnectorSSLError, ) def _is_network_error(exc: BaseException) -> bool: if isinstance(exc, RequestError): # RequestError is ClientResponseError, which is in the # _NETWORK_ERRORS list, but it should be handled # separately. return False return isinstance(exc, _NETWORK_ERRORS) def _is_throttling_error(exc: BaseException) -> bool: return isinstance(exc, RequestError) and exc.status == 429 def _is_server_error(exc: BaseException) -> bool: return isinstance(exc, RequestError) and exc.status >= 500 def _is_retriable_query_error(exc: BaseException) -> bool: return isinstance(exc, _QueryError) and exc.retriable and exc.max_retries > 0 class RetryFactory: """ Build custom retry configuration """ retry_condition = ( retry_if_exception(_is_throttling_error) | retry_if_exception(_is_network_error) | retry_if_exception(_is_server_error) | retry_if_exception(_is_retriable_query_error) ) # throttling throttling_wait = wait_chain( # always wait 20-40s first wait_fixed(20) + wait_random(0, 20), # wait 20-40s again wait_fixed(20) + wait_random(0, 20), # wait from 30 to 630s, with full jitter and exponentially # increasing max wait time wait_fixed(30) + wait_random_exponential(multiplier=1, max=600) ) # connection errors, other client and server failures network_error_wait = ( # wait from 3s to ~1m wait_random(3, 7) + wait_random_exponential(multiplier=1, max=55) ) server_error_wait = network_error_wait retriable_query_error_wait = network_error_wait throttling_stop = stop_never network_error_stop = stop_after_delay(15 * 60) server_error_stop = stop_after_delay(15 * 60) retryable_query_error_stop = stop_after_delay(15 * 60) def wait(self, retry_state: RetryCallState) -> float: exc: BaseException = retry_state.outcome.exception() # type: ignore if _is_throttling_error(exc): return self.throttling_wait(retry_state=retry_state) elif _is_network_error(exc): return self.network_error_wait(retry_state=retry_state) elif _is_server_error(exc): return self.server_error_wait(retry_state=retry_state) elif _is_retriable_query_error(exc): assert isinstance(exc, _QueryError) return max( exc.retry_seconds, self.retriable_query_error_wait(retry_state=retry_state) ) else: raise RuntimeError("Invalid retry state exception: %s" % exc) def stop(self, retry_state: RetryCallState) -> bool: exc: BaseException = retry_state.outcome.exception() # type: ignore if _is_throttling_error(exc): return self.throttling_stop(retry_state) elif _is_network_error(exc): return self.network_error_stop(retry_state) elif _is_server_error(exc): return self.server_error_stop(retry_state) elif _is_retriable_query_error(exc): assert isinstance(exc, _QueryError) return ( self.retryable_query_error_stop | stop_after_attempt(exc.max_retries + 1) )(retry_state) else: raise RuntimeError("Invalid retry state exception: %s" % exc) def 
before_sleep(self, retry_state: RetryCallState): return before_sleep_log(logger, logging.DEBUG) def after(self, retry_state: RetryCallState): return after_log(logger, logging.DEBUG) def reraise(self) -> bool: return True def build(self) -> AsyncRetrying: return AsyncRetrying( wait=self.wait, retry=self.retry_condition, stop=self.stop, before_sleep=self.before_sleep, after=self.after, reraise=self.reraise() ) autoextract_retrying: AsyncRetrying = RetryFactory().build()
zyte-autoextract
/zyte_autoextract-0.7.1-py3-none-any.whl/autoextract/aio/retry.py
retry.py
================= zyte-common-items ================= .. image:: https://img.shields.io/pypi/v/zyte-common-items.svg :target: https://pypi.python.org/pypi/zyte-common-items :alt: PyPI Version .. image:: https://img.shields.io/pypi/pyversions/zyte-common-items.svg :target: https://pypi.python.org/pypi/zyte-common-items :alt: Supported Python Versions .. image:: https://github.com/zytedata/zyte-common-items/workflows/tox/badge.svg :target: https://github.com/zytedata/zyte-common-items/actions :alt: Build Status .. image:: https://codecov.io/github/zytedata/zyte-common-items/coverage.svg?branch=master :target: https://codecov.io/gh/zytedata/zyte-common-items :alt: Coverage report .. description starts ``zyte-common-items`` is a Python 3.8+ library of item_ and `page object`_ classes for web data extraction that we use at Zyte_ to maximize opportunities for code reuse. .. _item: https://docs.scrapy.org/en/latest/topics/items.html .. _page object: https://web-poet.readthedocs.io/en/stable/ .. _Zyte: https://www.zyte.com/ .. description ends * Documentation: https://zyte-common-items.readthedocs.io/en/latest/ * License: BSD 3-clause
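A small illustration of the item classes (a sketch; the field values are made up and field availability varies per item)::

    from zyte_common_items import AggregateRating, Breadcrumb

    breadcrumb = Breadcrumb(name="Books", url="https://example.com/books")
    rating = AggregateRating(bestRating=5.0, ratingValue=4.4, reviewCount=31)

    print(breadcrumb.url)        # URL fields are normalized to plain strings
    print(rating.ratingValue)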
zyte-common-items
/zyte-common-items-0.10.0.tar.gz/zyte-common-items-0.10.0/README.rst
README.rst
from typing import Any, Callable, Dict, Optional, Tuple, Type, Union from warnings import warn from weakref import WeakKeyDictionary import attrs from web_poet.page_inputs.url import _Url # Caches the attribute names for attr.s classes CLASS_ATTRS: WeakKeyDictionary = WeakKeyDictionary() def split_in_unknown_and_known_fields( data: Optional[dict], item_cls: Type ) -> Tuple[Dict, Dict]: """ Return a pair of dicts. The first one contains those elements not belonging to the attrs class ``item_cls``. The second one contains the rest, that is, those attributes that do belong to the ``item_cls`` class. """ data = data or {} if not attrs.has(item_cls): raise ValueError(f"The cls {item_cls} is not an attrs class") if item_cls not in CLASS_ATTRS: CLASS_ATTRS[item_cls] = {field.name for field in attrs.fields(item_cls)} unknown, known = split_dict(data, lambda k: k in CLASS_ATTRS[item_cls]) return unknown, known def split_dict(dict: Dict, key_pred: Callable[[Any], Any]) -> Tuple[Dict, Dict]: """Splits the dictionary in two. The first dict contains the records for which the key predicate is False and the second dict contains the rest. >>> split_dict({}, lambda k: False) ({}, {}) >>> split_dict(dict(a=1, b=2, c=3), lambda k: k != 'a') ({'a': 1}, {'b': 2, 'c': 3}) """ # noqa yes, no = {}, {} for k, v in dict.items(): if key_pred(k): yes[k] = v else: no[k] = v return (no, yes) def url_to_str(url: Union[str, _Url]) -> str: if not isinstance(url, (str, _Url)): raise ValueError( f"{url!r} is neither a string nor an instance of RequestURL or ResponseURL." ) return str(url) def format_datetime(dt): return f"{dt.isoformat(timespec='seconds')}Z" def convert_to_class(value: Any, new_cls: type) -> Any: if type(value) == new_cls: return value input_attributes = {attribute.name for attribute in attrs.fields(value.__class__)} output_attributes = {attribute.name for attribute in attrs.fields(new_cls)} shared_attributes = input_attributes & output_attributes new_value = new_cls( **{attribute: getattr(value, attribute) for attribute in shared_attributes} ) removed_nonempty_attributes = { attribute for attribute in (input_attributes - output_attributes) if getattr(value, attribute) != attrs.fields_dict(value.__class__)[attribute].default } if removed_nonempty_attributes: warn( ( f"Conversion of {value} into {new_cls} is dropping the non-default " f"values of the following attributes: " f"{removed_nonempty_attributes}." ), RuntimeWarning, ) return new_value def cast_metadata(value, cls): new_value = convert_to_class(value, cls) return new_value def metadata_processor(metadata, page): return cast_metadata(metadata, page.metadata_cls) class MetadataCaster: def __init__(self, target): self._target = target def __call__(self, value): return cast_metadata(value, self._target)
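A quick sketch of how ``split_in_unknown_and_known_fields`` behaves; the ``Book`` class and the data dict are made-up examples:

import attrs

from zyte_common_items.util import split_in_unknown_and_known_fields


@attrs.define
class Book:
    title: str
    price: float


data = {"title": "Dune", "price": 9.99, "currency": "USD"}
unknown, known = split_in_unknown_and_known_fields(data, Book)
# Keys that are not attributes of Book end up in the first dict.
assert unknown == {"currency": "USD"}
assert known == {"title": "Dune", "price": 9.99}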
zyte-common-items
/zyte-common-items-0.10.0.tar.gz/zyte-common-items-0.10.0/zyte_common_items/util.py
util.py
from collections.abc import Iterable from functools import wraps from typing import Any, Callable, List, Optional, Union from lxml.html import HtmlElement from parsel import Selector, SelectorList from web_poet.mixins import ResponseShortcutsMixin from zyte_parsers import Breadcrumb as zp_Breadcrumb from zyte_parsers import extract_brand_name, extract_breadcrumbs, extract_price from .items import Breadcrumb def _get_base_url(page: Any) -> Optional[str]: if isinstance(page, ResponseShortcutsMixin): return page.base_url return getattr(page, "url", None) def _handle_selectorlist(value: Any) -> Any: if not isinstance(value, SelectorList): return value if len(value) == 0: return None return value[0] def only_handle_nodes( f: Callable[[Union[Selector, HtmlElement], Any], Any] ) -> Callable[[Any, Any], Any]: @wraps(f) def wrapper(value: Any, page: Any) -> Any: value = _handle_selectorlist(value) if not isinstance(value, (Selector, HtmlElement)): return value result = f(value, page) return result return wrapper def breadcrumbs_processor(value: Any, page: Any) -> Any: """Convert the data into a list of :class:`~zyte_common_items.Breadcrumb` objects if possible. Supported inputs are :class:`~parsel.selector.Selector`, :class:`~parsel.selector.SelectorList`, :class:`~lxml.html.HtmlElement` and an iterable of :class:`zyte_parsers.Breadcrumb` objects. Other inputs are returned as is. """ def _from_zp_breadcrumb(value: zp_Breadcrumb) -> Breadcrumb: return Breadcrumb(name=value.name, url=value.url) value = _handle_selectorlist(value) if isinstance(value, (Selector, HtmlElement)): zp_breadcrumbs = extract_breadcrumbs(value, base_url=_get_base_url(page)) return ( [_from_zp_breadcrumb(b) for b in zp_breadcrumbs] if zp_breadcrumbs else None ) if not isinstance(value, Iterable) or isinstance(value, str): return value results: List[Any] = [] for item in value: if isinstance(item, zp_Breadcrumb): results.append(_from_zp_breadcrumb(item)) else: results.append(item) return results @only_handle_nodes def brand_processor(value: Union[Selector, HtmlElement], page: Any) -> Any: """Convert the data into a brand name if possible. Supported inputs are :class:`~parsel.selector.Selector`, :class:`~parsel.selector.SelectorList` and :class:`~lxml.html.HtmlElement`. Other inputs are returned as is. """ return extract_brand_name(value, search_depth=2) @only_handle_nodes def price_processor(value: Union[Selector, HtmlElement], page: Any) -> Any: """Convert the data into a price string if possible. Uses the price-parser_ library. Supported inputs are :class:`~parsel.selector.Selector`, :class:`~parsel.selector.SelectorList` and :class:`~lxml.html.HtmlElement`. Other inputs are returned as is. Puts the parsed Price object into ``page._parsed_price``. .. _price-parser: https://github.com/scrapinghub/price-parser """ price = extract_price(value) page._parsed_price = price if price.amount is None: return None return str(price.amount) @only_handle_nodes def simple_price_processor(value: Union[Selector, HtmlElement], page: Any) -> Any: """Convert the data into a price string if possible. Uses the price-parser_ library. Supported inputs are :class:`~parsel.selector.Selector`, :class:`~parsel.selector.SelectorList` and :class:`~lxml.html.HtmlElement`. Other inputs are returned as is. .. _price-parser: https://github.com/scrapinghub/price-parser """ price = extract_price(value) if price.amount is None: return None return str(price.amount)
zyte-common-items
/zyte-common-items-0.10.0.tar.gz/zyte-common-items-0.10.0/zyte_common_items/processors.py
processors.py
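The processors above are normally run as web-poet field processors, but they can also be exercised directly. Below is a minimal sketch, assuming a stand-in page object: the ``FakePage`` class, the example HTML, and the CSS selectors are all made up for illustration, and whether a brand or breadcrumb trail is actually found depends on the zyte-parsers heuristics.

```python
from parsel import Selector

from zyte_common_items.processors import brand_processor, breadcrumbs_processor


class FakePage:
    # Hypothetical stand-in: it only provides the ``url`` attribute that
    # ``_get_base_url`` falls back to for non-ResponseShortcutsMixin pages.
    url = "https://example.com/toys/robot"


html = """
<div>
  <ul class="breadcrumbs">
    <li><a href="/">Home</a></li>
    <li><a href="/toys">Toys</a></li>
  </ul>
  <p class="brand">Acme</p>
</div>
"""
page = FakePage()
selector = Selector(text=html)

# Selector/SelectorList input is parsed; the result is a list of
# zyte_common_items.Breadcrumb objects, or None if nothing is extracted.
breadcrumbs = breadcrumbs_processor(selector.css(".breadcrumbs"), page)

# Non-node input (e.g. an already-extracted string) is returned unchanged.
assert breadcrumbs_processor("Home > Toys", page) == "Home > Toys"

# brand_processor behaves analogously, returning a brand name if one is found.
brand = brand_processor(selector.css(".brand"), page)
```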
import base64 from typing import List, Optional, Type import attrs from zyte_common_items.base import Item from zyte_common_items.util import convert_to_class, url_to_str # Metadata #################################################################### @attrs.define(kw_only=True) class ProbabilityMetadata(Item): """Data extraction process metadata that indicates a probability.""" #: The probability (0 for 0%, 1 for 100%) that the resource features the #: expected data type. #: #: For example, if the extraction of a product from a given URL is #: requested, and that URL points to the webpage of a product with complete #: certainty, the value should be `1`. If with complete certainty the #: webpage features a job listing instead of a product, the value should be #: `0`. When there is no complete certainty, the value could be anything in #: between (e.g. `0.96`). probability: Optional[float] = 1.0 @attrs.define(kw_only=True) class _ListMetadata(Item): """Data extraction process metadata that indicates the download date. See :class:`ArticleList.metadata <zyte_common_items.ArticleList.metadata>`. """ #: Date and time when the product data was downloaded, in UTC timezone and #: the following format: ``YYYY-MM-DDThh:mm:ssZ``. dateDownloaded: Optional[str] = None @attrs.define(kw_only=True) class _DetailsMetadata(_ListMetadata): """Data extraction process metadata that indicates the download date and a probability.""" #: The probability (0 for 0%, 1 for 100%) that the resource features the #: expected data type. #: #: For example, if the extraction of a product from a given URL is #: requested, and that URL points to the webpage of a product with complete #: certainty, the value should be `1`. If with complete certainty the #: webpage features a job listing instead of a product, the value should be #: `0`. When there is no complete certainty, the value could be anything in #: between (e.g. `0.96`). probability: Optional[float] = 1.0 @attrs.define(kw_only=True) class Metadata(_DetailsMetadata): """Generic metadata class. It defines all attributes of metadata classes for specific item types, so that it can be used during extraction instead of a more specific class, and later converted to the corresponding, more specific metadata class. """ #: The search text used to find the item. searchText: Optional[str] = None @attrs.define(kw_only=True) class ArticleMetadata(_DetailsMetadata): pass @attrs.define(kw_only=True) class ArticleListMetadata(_ListMetadata): pass @attrs.define(kw_only=True) class ArticleNavigationMetadata(_ListMetadata): pass @attrs.define(kw_only=True) class BusinessPlaceMetadata(Metadata): pass @attrs.define(kw_only=True) class JobPostingMetadata(Metadata): """Metadata associated with a job posting.""" pass @attrs.define(kw_only=True) class ProductMetadata(_DetailsMetadata): pass @attrs.define(kw_only=True) class ProductListMetadata(_ListMetadata): pass @attrs.define(kw_only=True) class ProductNavigationMetadata(_ListMetadata): pass @attrs.define(kw_only=True) class RealEstateMetadata(_DetailsMetadata): pass ############################################################################### @attrs.define class _Media(Item): #: URL. #: #: When multiple URLs exist for a given media element, pointing to #: different-quality versions, the highest-quality URL should be used. #: #: `Data URIs`_ are not allowed in this attribute. #: #: .. 
_Data URIs: https://en.wikipedia.org/wiki/Data_URI_scheme url: str = attrs.field(converter=url_to_str) @attrs.define class AdditionalProperty(Item): """A name-value pair. See :attr:`Product.additionalProperties <zyte_common_items.Product.additionalProperties>`. """ #: Name. name: str #: Value. value: str @attrs.define(kw_only=True) class AggregateRating(Item): """Aggregate data about reviews and ratings. At least one of :attr:`ratingValue` or :attr:`reviewCount` is required. See :attr:`Product.aggregateRating <zyte_common_items.Product.aggregateRating>`. """ #: Maximum value of the rating system. bestRating: Optional[float] = None #: Average value of all ratings. ratingValue: Optional[float] = None #: Review count. reviewCount: Optional[int] = None @attrs.define class Audio(_Media): """Audio. See :class:`Article.audios <zyte_common_items.Article.audios>`. """ @attrs.define(kw_only=True) class Author(Item): """Author of an article. See :attr:`Article.authors <zyte_common_items.Article.authors>`. """ #: Email. email: Optional[str] = None #: URL of the details page of the author. url: Optional[str] = attrs.field( default=None, converter=attrs.converters.optional(url_to_str), kw_only=True ) #: Full name. name: Optional[str] = None #: Text from which :attr:`~zyte_common_items.Author.name` was #: extracted. nameRaw: Optional[str] = None @attrs.define class Brand(Item): """Brand. See :attr:`Product.brand <zyte_common_items.Product.brand>`. """ #: Name as it appears on the source webpage (no post-processing). name: str @attrs.define(kw_only=True) class Breadcrumb(Item): """A breadcrumb from the `breadcrumb trail`_ of a webpage. See :attr:`Product.breadcrumbs <zyte_common_items.Product.breadcrumbs>`. .. _breadcrumb trail: https://en.wikipedia.org/wiki/Breadcrumb_navigation """ #: Displayed name. name: Optional[str] = None #: Target URL. url: Optional[str] = attrs.field( default=None, converter=attrs.converters.optional(url_to_str), kw_only=True ) @attrs.define class Gtin(Item): """GTIN_ type-value pair. See :class:`Product.gtin <zyte_common_items.Product.gtin>`. .. _GTIN: https://en.wikipedia.org/wiki/Global_Trade_Item_Number """ #: Identifier of the GTIN format of ``value``. #: #: One of: ``"gtin13"``, ``"gtin8"``, ``"gtin14"``, ``"isbn10"``, #: ``"isbn13"``, ``"ismn"``, ``"issn"``, ``"upc"``. type: str #: Value. #: #: It should only contain digits. value: str @attrs.define class Image(_Media): """Image. See for example :class:`Product.images <zyte_common_items.Product.images>` and :class:`Product.mainImage <zyte_common_items.Product.mainImage>`. """ @attrs.define(kw_only=True) class Link(Item): """A link from a webpage to another webpage.""" #: Displayed text. text: Optional[str] = None #: Target URL. url: Optional[str] = attrs.field( default=None, converter=attrs.converters.optional(url_to_str), kw_only=True ) @attrs.define(kw_only=True) class NamedLink(Item): """A link from a webpage to another webpage.""" #: The name of the link. name: Optional[str] = None #: Target URL. url: Optional[str] = attrs.field( default=None, converter=attrs.converters.optional(url_to_str), kw_only=True ) @attrs.define(kw_only=True) class Address(Item): """Address item.""" #: The raw address information, as it appears on the website. addressRaw: Optional[str] = None #: The street address of the place. streetAddress: Optional[str] = None #: The city the place is located in. addressCity: Optional[str] = None #: The locality to which the place belongs. addressLocality: Optional[str] = None #: The region of the place. 
addressRegion: Optional[str] = None #: The country the place is located in. #: #: The country name or the `ISO 3166-1 alpha-2 country code #: <https://en.wikipedia.org/wiki/ISO_3166-1>`__. addressCountry: Optional[str] = None #: The postal code of the address. postalCode: Optional[str] = None #: The auxiliary part of the postal code. #: #: It may include a state abbreviation or town name, depending on local standards. postalCodeAux: Optional[str] = None #: Geographical latitude of the place. latitude: Optional[float] = None #: Geographical longitude of the place. longitude: Optional[float] = None @attrs.define(kw_only=True) class Amenity(Item): """An amenity that a business place has""" #: Name of amenity. name: str #: Availability of the amenity. value: bool @attrs.define(kw_only=True) class StarRating(Item): """Official star rating of a place.""" #: Star rating of the place, as it appears on the page, without processing. raw: Optional[str] = None #: Star rating value of the place. ratingValue: Optional[float] = None @attrs.define(kw_only=True) class ParentPlace(Item): """If the place is located inside another place, these are the details of the parent place.""" #: Name of the parent place. name: str #: Identifier of the parent place. placeId: str @attrs.define(kw_only=True) class OpeningHoursItem(Item): """Specification of opening hours of a business place.""" #: English weekday name. dayOfWeek: Optional[str] = None #: Opening time in ISO 8601 format, local time. opens: Optional[str] = None #: Closing time in ISO 8601 format, local time. closes: Optional[str] = None #: Day of the week, as it appears on the page, without processing. rawDayOfWeek: Optional[str] = None #: Opening time, as it appears on the page, without processing. rawOpens: Optional[str] = None #: Closing time, as it appears on the page, without processing. rawCloses: Optional[str] = None @attrs.define(kw_only=True) class RealEstateArea(Item): """Area of a place, with type, units, value and raw value.""" #: Area value: float #: Unit of the value field, one of: SQMT (square meters), SQFT (square #: feet), ACRE (acres). unitCode: str #: Type of area, one of: LOT, FLOOR areaType: Optional[str] = None #: Area in the raw format, as it appears on the website. raw: str @attrs.define(kw_only=True) class Header(Item): """An HTTP header""" #: Name of the header name: str #: Value of the header value: str @attrs.define(slots=False) class Request(Item): """Describe a web request to load a page""" #: HTTP URL url: str = attrs.field(converter=url_to_str) #: HTTP method method: str = "GET" #: HTTP request body, Base64-encoded body: Optional[str] = None #: HTTP headers headers: Optional[List[Header]] = None #: Name of the page being requested. name: Optional[str] = None _body_bytes = None @property def body_bytes(self) -> Optional[bytes]: """Request.body as bytes""" # todo: allow to set body bytes in __init__, to avoid encoding/decoding. if self._body_bytes is None: if self.body is not None: self._body_bytes = base64.b64decode(self.body) return self._body_bytes def to_scrapy(self, callback, **kwargs): """ Convert a request to scrapy.Request. All kwargs are passed to scrapy.Request as-is. """ import scrapy header_list = [(header.name, header.value) for header in self.headers or []] return scrapy.Request( url=self.url, callback=callback, method=self.method or "GET", headers=header_list, body=self.body_bytes, **kwargs ) @attrs.define class Video(_Media): """Video. See :class:`Article.videos <zyte_common_items.Article.videos>`. 
""" def cast_request(value: Request, cls: Type[Request]) -> Request: new_value = convert_to_class(value, cls) if type(value) is Request and cls is ProbabilityRequest: new_value.metadata = ProbabilityMetadata(probability=1.0) return new_value def request_list_processor(request_list): return [cast_request(request, ProbabilityRequest) for request in request_list] @attrs.define(kw_only=True) class ProbabilityRequest(Request): """A :class:`Request` that includes a probability value.""" #: Data extraction process metadata. metadata: Optional[ProbabilityMetadata] = None @attrs.define(kw_only=True) class JobLocation(Item): """Location of a job offer.""" #: Job location, as it appears on the website. raw: Optional[str] = None @attrs.define(kw_only=True) class BaseSalary(Item): """Base salary of a job offer.""" #: Salary amount as it appears on the website. raw: Optional[str] = None #: The minimum value of the base salary as a number string. valueMin: Optional[str] = None #: The maximum value of the base salary as a number string. valueMax: Optional[str] = None #: The type of rate associated with the salary, e.g. monthly, annual, daily. rateType: Optional[str] = None #: Currency associated with the salary amount. currency: Optional[str] = None #: Currency associated with the salary amount, without normalization. currencyRaw: Optional[str] = None @attrs.define(kw_only=True) class HiringOrganization(Item): """Organization that is hiring for a job offer.""" #: Name of the hiring organization. name: Optional[str] = None #: Organization information as available on the website. nameRaw: Optional[str] = None #: Identifier of the organization used by job posting website. id: Optional[str] = None
zyte-common-items
/zyte-common-items-0.10.0.tar.gz/zyte-common-items-0.10.0/zyte_common_items/components.py
components.py
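Among the components above, ``Request`` stores its body Base64-encoded and exposes a decoded ``body_bytes`` view, plus a ``to_scrapy()`` helper. A short sketch of that round trip follows; the URL, header, and the ``parse_products`` callback named in the comment are illustrative only.

```python
import base64

from zyte_common_items.components import Header, Request

request = Request(
    url="https://example.com/api/products",
    method="POST",
    body=base64.b64encode(b'{"page": 2}').decode(),
    headers=[Header(name="Content-Type", value="application/json")],
    name="[products] page 2",
)

# ``body_bytes`` lazily Base64-decodes ``body``.
assert request.body_bytes == b'{"page": 2}'

# With Scrapy installed, the same data can be turned into a scrapy.Request;
# extra keyword arguments are passed through to scrapy.Request as-is.
# scrapy_request = request.to_scrapy(callback=parse_products, dont_filter=True)
```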
from collections import ChainMap from typing import Dict, List, Optional, Union, get_args, get_origin import attrs from .util import split_in_unknown_and_known_fields _Trail = Optional[str] def is_data_container(cls_or_obj): """Used for discerning classes/instances if they are part of the Zyte Common Item definitions. """ try: return issubclass(cls_or_obj, Item) except TypeError: # must be an instance rather than a class return isinstance(cls_or_obj, Item) class _ItemBase: # Reserving an slot for _unknown_fields_dict. # This is done in a base class because otherwise attr.s won't pick it up __slots__ = ("_unknown_fields_dict",) def _get_import_path(obj: type): return f"{obj.__module__}.{obj.__qualname__}" def _extend_trail(trail: _Trail, key: Union[int, str]): if isinstance(key, str): if not trail: trail = key else: trail += f".{key}" else: assert isinstance(key, int) item = f"[{key}]" if not trail: trail = item else: trail += item return trail @attrs.define class Item(_ItemBase): def __attrs_post_init__(self): self._unknown_fields_dict = {} @classmethod def from_dict(cls, item: Optional[Dict]): """Read an item from a dictionary.""" return cls._from_dict(item) @classmethod def _from_dict(cls, item: Optional[Dict], *, trail: _Trail = None): """Read an item from a dictionary.""" if not item: return None if not isinstance(item, dict): path = _get_import_path(cls) if not trail: prefix = "Expected" else: prefix = f"Expected {trail} to be" raise ValueError(f"{prefix} a dict with fields from {path}, got {item!r}.") item = cls._apply_field_types_to_sub_fields(item, trail=trail) unknown_fields, known_fields = split_in_unknown_and_known_fields(item, cls) obj = cls(**known_fields) # type: ignore obj._unknown_fields_dict = unknown_fields return obj @classmethod def from_list(cls, items: Optional[List[Dict]], *, trail: _Trail = None) -> List: """Read items from a list.""" return cls._from_list(items) @classmethod def _from_list(cls, items: Optional[List[Dict]], *, trail: _Trail = None) -> List: """Read items from a list.""" result = [] for index, item in enumerate(items or []): index_trail = _extend_trail(trail, index) result.append(cls._from_dict(item, trail=index_trail)) return result @classmethod def _apply_field_types_to_sub_fields(cls, item: Dict, trail: _Trail = None): """This applies the correct data container class for some of the fields that need them. Specifically, this traverses recursively each field to determine the proper data container class based on the type annotations. This could handle both ``list`` and ``object`` type requirements. For example: * Article having ``breadcrumbs: List[Breadcrumb]`` * Product having ``brand: Optional[Brand]`` Moreover, fields that are not defined to be part of data container classes will be ignored. For example: * Article having ``headline: Optional[str]`` * Product having ``name: Optional[str]`` """ from_dict, from_list = {}, {} annotations = ChainMap( *(c.__annotations__ for c in cls.__mro__ if "__annotations__" in c.__dict__) ) for field, type_annotation in annotations.items(): origin = get_origin(type_annotation) is_optional = False if origin == Union: field_classes = get_args(type_annotation) if len(field_classes) != 2 or not isinstance(None, field_classes[1]): path = f"{_get_import_path(cls)}.{field}" raise ValueError( f"{path} is annotated with {type_annotation}. Fields " f"should only be annotated with one type (or " f"optional)." 
) is_optional = len(field_classes) == 2 and isinstance( None, field_classes[1] ) type_annotation = field_classes[0] origin = get_origin(type_annotation) if origin is list: value = item.get(field) if not isinstance(value, list) and not (is_optional and value is None): field_trail = _extend_trail(trail, field) raise ValueError( f"Expected {field_trail} to be a list, got " f"{value!r}." ) type_annotation = get_args(type_annotation)[0] if is_data_container(type_annotation): from_list[field] = type_annotation elif is_data_container(type_annotation): from_dict[field] = type_annotation if from_dict or from_list: item = dict(**item) for key, cls in (from_dict or {}).items(): key_trail = _extend_trail(trail, key) value = item.get(key) if value is not None and not isinstance(value, dict): path = _get_import_path(cls) raise ValueError( f"Expected {key_trail} to be a dict with fields " f"from {path}, got {value!r}." ) item[key] = cls._from_dict(value, trail=key_trail) for key, cls in (from_list or {}).items(): key_trail = _extend_trail(trail, key) value = item.get(key) if value is not None and not isinstance(value, list): path = _get_import_path(cls) raise ValueError( f"Expected {key_trail} to be a list of dicts " f"with fields from {path}, got {value!r}." ) item[key] = cls._from_list(value, trail=key_trail) return item
zyte-common-items
/zyte-common-items-0.10.0.tar.gz/zyte-common-items-0.10.0/zyte_common_items/base.py
base.py
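``Item.from_dict()`` above applies the annotated component classes to nested dicts and lists, and keeps keys it does not recognize in ``_unknown_fields_dict``. A short sketch using the ``Product`` item defined later in the package; the field values are made up.

```python
from zyte_common_items.components import Brand, Breadcrumb
from zyte_common_items.items import Product

data = {
    "url": "https://example.com/product/123",
    "name": "Sample product",
    "brand": {"name": "Acme"},
    "breadcrumbs": [
        {"name": "Home", "url": "https://example.com/"},
        {"name": "Toys", "url": "https://example.com/toys"},
    ],
    # Not a declared Product field, so it ends up in _unknown_fields_dict.
    "customField": "kept around",
}

product = Product.from_dict(data)

# Nested dicts/lists are converted into the annotated component classes.
assert isinstance(product.brand, Brand)
assert isinstance(product.breadcrumbs[0], Breadcrumb)

# Unknown keys are preserved separately instead of being dropped.
assert product._unknown_fields_dict == {"customField": "kept around"}
```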
from typing import List, Optional import attrs from zyte_common_items.base import Item from zyte_common_items.components import ( AdditionalProperty, Address, AggregateRating, Amenity, ArticleListMetadata, ArticleMetadata, ArticleNavigationMetadata, Audio, Author, BaseSalary, Brand, Breadcrumb, BusinessPlaceMetadata, Gtin, HiringOrganization, Image, JobLocation, JobPostingMetadata, Link, NamedLink, OpeningHoursItem, ParentPlace, ProbabilityMetadata, ProbabilityRequest, ProductListMetadata, ProductMetadata, ProductNavigationMetadata, RealEstateArea, RealEstateMetadata, Request, StarRating, Video, cast_request, ) from zyte_common_items.util import MetadataCaster, url_to_str @attrs.define(slots=True, kw_only=True) class ArticleFromList(Item): """Article from an article list from an article listing page. See :class:`ArticleList`. """ #: Clean text of the article, including sub-headings, with newline #: separators. #: #: Format: #: #: - trimmed (no whitespace at the beginning or the end of the body #: string), #: - line breaks included, #: - no length limit, #: - no normalization of Unicode characters. articleBody: Optional[str] = None #: All authors of the article. authors: Optional[List[Author]] = None #: Publication date of the article. #: #: Format: ISO 8601 format: "YYYY-MM-DDThh:mm:ssZ" or #: "YYYY-MM-DDThh:mm:ss±zz:zz". #: #: With timezone, if available. #: #: If the actual publication date is not found, the date of the last #: modification is used instead. datePublished: Optional[str] = None #: Same date as #: :attr:`~zyte_common_items.ArticleFromList.datePublished`, but #: :before parsing/normalization, i.e. as it appears on the website. datePublishedRaw: Optional[str] = None #: Headline or title. headline: Optional[str] = None #: Language of the article, as an ISO 639-1 language code. #: #: Sometimes the article language is not the same as the web page overall #: language. inLanguage: Optional[str] = None #: Main image. mainImage: Optional[Image] = None #: All images. images: Optional[List[Image]] = None #: Data extraction process metadata. metadata: Optional[ProbabilityMetadata] = attrs.field( default=None, converter=attrs.converters.optional(MetadataCaster(ProbabilityMetadata)), kw_only=True # type: ignore ) #: Main URL. url: Optional[str] = attrs.field( default=None, converter=attrs.converters.optional(url_to_str), kw_only=True ) @attrs.define(kw_only=True) class Article(Item): #: Headline or title. headline: Optional[str] = None #: Publication date of the article. #: #: Format: ISO 8601 format: "YYYY-MM-DDThh:mm:ssZ" or #: "YYYY-MM-DDThh:mm:ss±zz:zz". #: #: With timezone, if available. #: #: If the actual publication date is not found, the value of #: :attr:`~zyte_common_items.Article.dateModified` is used instead. datePublished: Optional[str] = None #: Same date as #: :attr:`~zyte_common_items.Article.datePublished`, but #: :before parsing/normalization, i.e. as it appears on the website. datePublishedRaw: Optional[str] = None #: Date when the article was most recently modified. #: #: Format: ISO 8601 format: "YYYY-MM-DDThh:mm:ssZ" or #: "YYYY-MM-DDThh:mm:ss±zz:zz". #: #: With timezone, if available. dateModified: Optional[str] = None #: Same date as #: :attr:`~zyte_common_items.Article.dateModified`, but #: :before parsing/normalization, i.e. as it appears on the website. dateModifiedRaw: Optional[str] = None #: All authors of the article. authors: Optional[List[Author]] = None #: Webpage `breadcrumb trail`_. #: #: .. 
_Breadcrumb trail: https://en.wikipedia.org/wiki/Breadcrumb_navigation breadcrumbs: Optional[List[Breadcrumb]] = None #: Language of the article, as an ISO 639-1 language code. #: #: Sometimes the article language is not the same as the web page overall #: language. inLanguage: Optional[str] = None #: Main image. mainImage: Optional[Image] = None #: All images. images: Optional[List[Image]] = None #: A short summary of the article. #: #: It can be either human-provided (if available), or auto-generated. description: Optional[str] = None #: Clean text of the article, including sub-headings, with newline #: separators. #: #: Format: #: #: - trimmed (no whitespace at the beginning or the end of the body #: string), #: - line breaks included, #: - no length limit, #: - no normalization of Unicode characters. articleBody: Optional[str] = None #: Simplified and standardized HTML of the article, including sub-headings, #: image captions and embedded content (videos, tweets, etc.). #: #: Format: HTML string normalized in a consistent way. articleBodyHtml: Optional[str] = None #: All videos. videos: Optional[List[Video]] = None #: All audios. audios: Optional[List[Audio]] = None #: Canonical form of the URL, as indicated by the website. #: #: See also ``url``. canonicalUrl: Optional[str] = attrs.field( default=None, converter=attrs.converters.optional(url_to_str), kw_only=True ) #: The main URL of the article page. #: #: The URL of the final response, after any redirects. #: #: Required attribute. #: #: In case there is no article data on the page or the page was not #: reached, the returned "empty" item would still contain this URL field. url: str = attrs.field(converter=url_to_str) #: Data extraction process metadata. metadata: Optional[ArticleMetadata] = attrs.field( default=None, converter=attrs.converters.optional(MetadataCaster(ArticleMetadata)), kw_only=True # type: ignore ) @attrs.define(slots=True, kw_only=True) class ArticleList(Item): """Article list from an article listing page. The :attr:`url` attribute is the only required attribute, all other fields are optional. """ #: The main URL of the article list. #: #: The URL of the final response, after any redirects. #: #: Required attribute. #: #: In case there is no article list data on the page or the page was not #: reached, the returned item still contain this URL field and all the #: other available datapoints. url: str = attrs.field(converter=url_to_str) #: Canonical form of the URL, as indicated by the website. #: #: See also ``url``. canonicalUrl: Optional[str] = attrs.field( default=None, converter=attrs.converters.optional(url_to_str), kw_only=True ) #: List of article details found on the page. #: #: The order of the articles reflects their position on the page. articles: Optional[List[ArticleFromList]] = None #: Webpage `breadcrumb trail`_. #: #: .. _Breadcrumb trail: https://en.wikipedia.org/wiki/Breadcrumb_navigation breadcrumbs: Optional[List[Breadcrumb]] = None #: Data extraction process metadata. metadata: Optional[ArticleListMetadata] = attrs.field( default=None, converter=attrs.converters.optional(MetadataCaster(ArticleListMetadata)), kw_only=True # type: ignore ) @attrs.define(kw_only=True) class ProductVariant(Item): """:class:`Product` variant. See :attr:`Product.variants`. """ #: List of name-value pais of data about a specific, otherwise unmapped #: feature. #: #: Additional properties usually appear in product pages in the form of a #: specification table or a free-form specification list. 
#: #: Additional properties that require 1 or more extra requests may not be #: extracted. #: #: See also ``features``. additionalProperties: Optional[List[AdditionalProperty]] = None #: Availability status. #: #: The value is expected to be one of: ``"InStock"``, ``"OutOfStock"``. availability: Optional[str] = None #: Canonical form of the URL, as indicated by the website. #: #: See also ``url``. canonicalUrl: Optional[str] = attrs.field( default=None, converter=attrs.converters.optional(url_to_str), kw_only=True ) #: Color. #: #: It is extracted as displayed (e.g. ``"white"``). #: #: See also ``size``, ``style``. color: Optional[str] = None #: Price currency `ISO 4217`_ alphabetic code (e.g. ``"USD"``). #: #: See also ``currencyRaw``. #: #: .. _ISO 4217: https://en.wikipedia.org/wiki/ISO_4217 currency: Optional[str] = None #: Price currency as it appears on the webpage (no post-processing), e.g. #: ``"$"``. #: #: See also ``currency``. currencyRaw: Optional[str] = None #: List of standardized GTIN_ product identifiers associated with the #: product, which are unique for the product across different sellers. #: #: See also: ``mpn``, ``productId``, ``sku``. #: #: .. _GTIN: https://en.wikipedia.org/wiki/Global_Trade_Item_Number gtin: Optional[List[Gtin]] = None #: All product images. #: #: The main image (see ``mainImage``) should be first in the list. #: #: Images only displayed as part of the product description are excluded. images: Optional[List[Image]] = None #: Main product image. mainImage: Optional[Image] = None #: `Manufacturer part number (MPN)`_. #: #: A product should have the same MPN across different e-commerce websites. #: #: See also: ``gtin``, ``productId``, ``sku``. #: #: .. _Manufacturer part number (MPN): https://en.wikipedia.org/wiki/Part_number mpn: Optional[str] = None #: Name as it appears on the webpage (no post-processing). name: Optional[str] = None #: Price at which the product is being offered. #: #: It is a string with the price amount, with a full stop as decimal #: separator, and no thousands separator or currency (see ``currency`` and #: ``currencyRaw``), e.g. ``"10500.99"``. #: #: If ``regularPrice`` is not ``None``, ``price`` should always be lower #: than ``regularPrice``. price: Optional[str] = None #: Product identifier, unique within an e-commerce website. #: #: It may come in the form of an SKU or any other identifier, a hash, or #: even a URL. #: #: See also: ``gtin``, ``mpn``, ``sku``. productId: Optional[str] = None #: Price at which the product was being offered in the past, and which is #: presented as a reference next to the current price. #: #: It may be labeled as the original price, the list price, or the maximum #: retail price for which the product is sold. #: #: See ``price`` for format details. #: #: If ``regularPrice`` is not ``None``, it should always be higher than #: ``price``. regularPrice: Optional[str] = None #: Size or dimensions. #: #: Pertinent to products such as garments, shoes, accessories, etc. #: #: It is extracted as displayed (e.g. ``"XL"``). #: #: See also ``color``, ``style``. size: Optional[str] = None #: `Stock keeping unit (SKU)`_ identifier, i.e. a merchant-specific product #: identifier. #: #: See also: ``gtin``, ``mpn``, ``productId``. #: #: .. _Stock keeping unit (SKU): https://en.wikipedia.org/wiki/Stock_keeping_unit sku: Optional[str] = None #: Style. #: #: Pertinent to products such as garments, shoes, accessories, etc. #: #: It is extracted as displayed (e.g. ``"polka dots"``). 
#: #: See also ``color``, ``size``. style: Optional[str] = None #: Main URL from which the product variant data could be extracted. #: #: See also ``canonicalUrl``. url: Optional[str] = attrs.field( default=None, converter=attrs.converters.optional(url_to_str), kw_only=True ) @attrs.define(kw_only=True) class Product(Item): """Product from an e-commerce website. The :attr:`url` attribute is the only required attribute, all other fields are optional. """ #: List of name-value pais of data about a specific, otherwise unmapped #: feature. #: #: Additional properties usually appear in product pages in the form of a #: specification table or a free-form specification list. #: #: Additional properties that require 1 or more extra requests may not be #: extracted. #: #: See also ``features``. additionalProperties: Optional[List[AdditionalProperty]] = None #: Aggregate data about reviews and ratings. aggregateRating: Optional[AggregateRating] = None #: Availability status. #: #: The value is expected to be one of: ``"InStock"``, ``"OutOfStock"``. availability: Optional[str] = None #: Brand. brand: Optional[Brand] = None #: Webpage `breadcrumb trail`_. #: #: .. _Breadcrumb trail: https://en.wikipedia.org/wiki/Breadcrumb_navigation breadcrumbs: Optional[List[Breadcrumb]] = None #: Canonical form of the URL, as indicated by the website. #: #: See also ``url``. canonicalUrl: Optional[str] = attrs.field( default=None, converter=attrs.converters.optional(url_to_str), kw_only=True ) #: Color. #: #: It is extracted as displayed (e.g. ``"white"``). #: #: See also ``size``, ``style``. color: Optional[str] = None #: Price currency `ISO 4217`_ alphabetic code (e.g. ``"USD"``). #: #: See also ``currencyRaw``. #: #: .. _ISO 4217: https://en.wikipedia.org/wiki/ISO_4217 currency: Optional[str] = None #: Price currency as it appears on the webpage (no post-processing), e.g. #: ``"$"``. #: #: See also ``currency``. currencyRaw: Optional[str] = None #: Plain-text description. #: #: If the description is split across different parts of the source #: webpage, only the main part, containing the most useful pieces of #: information, should be extracted into this attribute. #: #: It may contain data found in other attributes (``features``, #: ``additionalProperties``). #: #: Format-wise: #: #: - Line breaks and non-ASCII characters are allowed. #: #: - There is no length limit for this attribute, the content should not #: be truncated. #: #: - There should be no whitespace at the beginning or end. #: #: See also ``descriptionHtml``. description: Optional[str] = None #: HTML description. #: #: See ``description`` for extraction details. #: #: The format is not the raw HTML from the source webpage. See the `HTML #: normalization specification`_ for details. #: #: .. _HTML normalization specification: https://docs.zyte.com/automatic-extraction/article.html#format-of-articlebodyhtml-field descriptionHtml: Optional[str] = None #: List of features. #: #: They are usually listed as bullet points in product webpages. #: #: See also ``additionalProperties``. features: Optional[List[str]] = None #: List of standardized GTIN_ product identifiers associated with the #: product, which are unique for the product across different sellers. #: #: See also: ``mpn``, ``productId``, ``sku``. #: #: .. _GTIN: https://en.wikipedia.org/wiki/Global_Trade_Item_Number gtin: Optional[List[Gtin]] = None #: All product images. #: #: The main image (see ``mainImage``) should be first in the list. 
#: #: Images only displayed as part of the product description are excluded. images: Optional[List[Image]] = None #: Main product image. mainImage: Optional[Image] = None #: Data extraction process metadata. metadata: Optional[ProductMetadata] = attrs.field( default=None, converter=attrs.converters.optional(MetadataCaster(ProductMetadata)), kw_only=True # type: ignore ) #: `Manufacturer part number (MPN)`_. #: #: A product should have the same MPN across different e-commerce websites. #: #: See also: ``gtin``, ``productId``, ``sku``. #: #: .. _Manufacturer part number (MPN): https://en.wikipedia.org/wiki/Part_number mpn: Optional[str] = None #: Name as it appears on the webpage (no post-processing). name: Optional[str] = None #: Price at which the product is being offered. #: #: It is a string with the price amount, with a full stop as decimal #: separator, and no thousands separator or currency (see ``currency`` and #: ``currencyRaw``), e.g. ``"10500.99"``. #: #: If ``regularPrice`` is not ``None``, ``price`` should always be lower #: than ``regularPrice``. price: Optional[str] = None # Redefined to extend the documentation. #: Product identifier, unique within an e-commerce website. #: #: It may come in the form of an SKU or any other identifier, a hash, or #: even a URL. #: #: See also: ``gtin``, ``mpn``, ``sku``. productId: Optional[str] = None #: Price at which the product was being offered in the past, and which is #: presented as a reference next to the current price. #: #: It may be labeled as the original price, the list price, or the maximum #: retail price for which the product is sold. #: #: See ``price`` for format details. #: #: If ``regularPrice`` is not ``None``, it should always be higher than #: ``price``. regularPrice: Optional[str] = None #: Size or dimensions. #: #: Pertinent to products such as garments, shoes, accessories, etc. #: #: It is extracted as displayed (e.g. ``"XL"``). #: #: See also ``color``, ``style``. size: Optional[str] = None #: `Stock keeping unit (SKU)`_ identifier, i.e. a merchant-specific product #: identifier. #: #: See also: ``gtin``, ``mpn``, ``productId``. #: #: .. _Stock keeping unit (SKU): https://en.wikipedia.org/wiki/Stock_keeping_unit sku: Optional[str] = None #: Style. #: #: Pertinent to products such as garments, shoes, accessories, etc. #: #: It is extracted as displayed (e.g. ``"polka dots"``). #: #: See also ``color``, ``size``. style: Optional[str] = None #: Main URL from which the data has been extracted. #: #: See also ``canonicalUrl``. url: str = attrs.field(converter=url_to_str) #: List of variants. #: #: When slightly different versions of a product are displayed on the same #: product page, allowing you to choose a specific product version from a #: selection, each of those product versions are considered a product #: variant. #: #: Product variants usually differ in ``color`` or ``size``. #: #: The following items are *not* considered product variants: #: #: - Different products within the same bundle of products. #: #: - Product add-ons, e.g. premium upgrades of a base product. #: #: Only variant-specific data is extracted as product variant details. For #: example, if variant-specific versions of the product description do not #: exist in the source webpage, the description attributes of the product #: variant are *not* filled with the base product description. #: #: Extracted product variants may not include those that are not visible in #: the source webpage. 
#: #: Product variant details may not include those that require multiple #: additional requests (e.g. 1 or more requests per variant). variants: Optional[List[ProductVariant]] = None @attrs.define(slots=True, kw_only=True) class ProductFromList(Item): """Product from a product list from a product listing page of an e-commerce webpage. See :class:`ProductList`. """ #: Price currency `ISO 4217`_ alphabetic code (e.g. ``"USD"``). #: #: See also ``currencyRaw``. #: #: .. _ISO 4217: https://en.wikipedia.org/wiki/ISO_4217 currency: Optional[str] = None #: Price currency as it appears on the webpage (no post-processing), e.g. #: ``"$"``. #: #: See also ``currency``. currencyRaw: Optional[str] = None #: Main product image. mainImage: Optional[Image] = None #: Data extraction process metadata. metadata: Optional[ProbabilityMetadata] = attrs.field( default=None, converter=attrs.converters.optional(MetadataCaster(ProbabilityMetadata)), kw_only=True # type: ignore ) #: Name as it appears on the webpage (no post-processing). name: Optional[str] = None #: Price at which the product is being offered. #: #: It is a string with the price amount, with a full stop as decimal #: separator, and no thousands separator or currency (see ``currency`` and #: ``currencyRaw``), e.g. ``"10500.99"``. #: #: If ``regularPrice`` is not ``None``, ``price`` should always be lower #: than ``regularPrice``. price: Optional[str] = None #: Product identifier, unique within an e-commerce website. #: #: It may come in the form of an SKU or any other identifier, a hash, or #: even a URL. productId: Optional[str] = None #: Price at which the product was being offered in the past, and which is #: presented as a reference next to the current price. #: #: It may be labeled as the original price, the list price, or the maximum #: retail price for which the product is sold. #: #: See ``price`` for format details. #: #: If ``regularPrice`` is not ``None``, it should always be higher than #: ``price``. regularPrice: Optional[str] = None #: Main URL from which the product data could be extracted. url: Optional[str] = attrs.field( default=None, converter=attrs.converters.optional(url_to_str), kw_only=True ) @attrs.define(slots=True, kw_only=True) class ProductList(Item): """Product list from a product listing page of an e-commerce webpage. It represents, for example, a single page from a category. The :attr:`url` attribute is the only required attribute, all other fields are optional. """ #: Webpage `breadcrumb trail`_. #: #: .. _Breadcrumb trail: https://en.wikipedia.org/wiki/Breadcrumb_navigation breadcrumbs: Optional[List[Breadcrumb]] = None #: Canonical form of the URL, as indicated by the website. #: #: See also ``url``. canonicalUrl: Optional[str] = attrs.field( default=None, converter=attrs.converters.optional(url_to_str), kw_only=True ) #: Name of the product listing as it appears on the webpage #: (no post-processing). #: #: For example, if the webpage is one of the pages of the Robots category, #: ``categoryName`` is ``'Robots'``. categoryName: Optional[str] = None #: Data extraction process metadata. metadata: Optional[ProductListMetadata] = attrs.field( default=None, converter=attrs.converters.optional(MetadataCaster(ProductListMetadata)), kw_only=True # type: ignore ) #: Current page number, if displayed explicitly on the list page. #: #: Numeration starts with 1. pageNumber: Optional[int] = None #: Link to the next page. paginationNext: Optional[Link] = None #: List of products. 
#: #: It only includes product information found in the product listing page #: itself. Product information that requires visiting each product URL is #: not meant to be covered. #: #: The order of the products reflects their position on the rendered page. #: Product order is top-to-bottom, and left-to-right or right-to-left #: depending on the webpage locale. products: Optional[List[ProductFromList]] = None #: Main URL from which the data has been extracted. #: #: See also ``canonicalUrl``. url: str = attrs.field(converter=url_to_str) @attrs.define(slots=True, kw_only=True) class BusinessPlace(Item): """Business place, with properties typically seen on maps or business listings.""" #: Unique identifier of the place on the website. placeId: Optional[str] = None #: The main URL that the place data was extracted from. #: #: The URL of the final response, after any redirects. #: #: In case there is no product data on the page or the page was not reached, the returned "empty" #: item would still contain url field and metadata field with dateDownloaded. url: Optional[str] = attrs.field( default=None, converter=attrs.converters.optional(url_to_str), kw_only=True ) #: The name of the place. name: Optional[str] = None #: List of actions that can be performed directly from the URLs on the place page, including URLs. actions: Optional[List[NamedLink]] = None #: List of name-value pais of any unmapped additional properties specific to the place. additionalProperties: Optional[List[AdditionalProperty]] = None #: The address details of the place. address: Optional[Address] = None #: The details of the reservation action, #: e.g. table reservation in case of restaurants #: or room reservation in case of hotels. reservationAction: Optional[NamedLink] = None #: List of categories the place belongs to. categories: Optional[List[str]] = None #: The description of the place. #: #: Stripped of white spaces. description: Optional[str] = None #: List of frequently mentioned features of this place. features: Optional[List[str]] = None #: URL to a map of the place. map: Optional[str] = attrs.field( default=None, converter=attrs.converters.optional(url_to_str), kw_only=True ) #: A list of URL values of all images of the place. images: Optional[List[Image]] = None #: List of amenities of the place. amenityFeatures: Optional[List[Amenity]] = None #: The overall rating, based on a collection of reviews or ratings. aggregateRating: Optional[AggregateRating] = None #: Official star rating of the place. starRating: Optional[StarRating] = None #: If the place is located inside another place, these are the details of the parent place. containedInPlace: Optional[ParentPlace] = None #: Ordered specification of opening hours, including data for opening and closing time for each day of the week. openingHours: Optional[List[OpeningHoursItem]] = None #: List of partner review sites. reviewSites: Optional[List[NamedLink]] = None #: The phone number associated with the place, as it appears on the page. telephone: Optional[str] = None #: How is the price range of the place viewed by its customers (from z to zzzz). priceRange: Optional[str] = None #: Which timezone is the place situated in. #: #: Standard: Name compliant with IANA tz database (tzdata). timezone: Optional[str] = None #: If the information is verified by the owner of this place. isVerified: Optional[bool] = None #: The URL pointing to the official website of the place. 
website: Optional[str] = attrs.field( default=None, converter=attrs.converters.optional(url_to_str), kw_only=True ) #: List of the tags associated with the place. tags: Optional[List[str]] = None #: Data extraction process metadata. metadata: Optional[BusinessPlaceMetadata] = attrs.field( default=None, converter=attrs.converters.optional(MetadataCaster(BusinessPlaceMetadata)), kw_only=True # type: ignore ) @attrs.define(slots=True, kw_only=True) class RealEstate(Item): #: The url of the final response, after any redirects. url: str = attrs.field(converter=url_to_str) #: Webpage `breadcrumb trail`_. #: #: .. _Breadcrumb trail: https://en.wikipedia.org/wiki/Breadcrumb_navigation breadcrumbs: Optional[List[Breadcrumb]] = None #: The identifier of the real estate, usually assigned by the seller and unique within a website, similar to product SKU. realEstateId: Optional[str] = None #: The name of the real estate. name: Optional[str] = None #: Publication date of the real estate offer. #: #: Format: ISO 8601 format: "YYYY-MM-DDThh:mm:ssZ" #: #: With timezone, if available. datePublished: Optional[str] = None #: Same date as datePublished, but before parsing/normalization, i.e. as it appears on the website. datePublishedRaw: Optional[str] = None #: The description of the real estate. #: #: Format: #: #: - trimmed (no whitespace at the beginning or the end of the description string), #: #: - line breaks included, #: #: - no length limit, #: #: - no normalization of Unicode characters, #: #: - no concatenation of description from different parts of the page. description: Optional[str] = None #: The details of the main image of the real estate. mainImage: Optional[Image] = None #: A list of URL values of all images of the real estate. images: Optional[List[Image]] = None #: The details of the address of the real estate. address: Optional[Address] = None #: Real estate area details. area: Optional[RealEstateArea] = None #: The total number of bathrooms in the real estate. numberOfBathroomsTotal: Optional[int] = None #: The number of full bathrooms in the real estate. numberOfFullBathrooms: Optional[int] = None #: The number of partial bathrooms in the real estate. numberOfPartialBathrooms: Optional[int] = None #: The number of bedrooms in the real estate. numberOfBedrooms: Optional[int] = None #: The number of rooms (excluding bathrooms and closets) of the real estate. numberOfRooms: Optional[int] = None #: Type of a trade action: buying or renting. tradeType: Optional[str] = None #: The offer price of the real estate. price: Optional[str] = None #: The rental period to which the rental price applies, only available in case of rental. Usually weekly, monthly, quarterly, yearly. rentalPeriod: Optional[str] = None #: Currency associated with the price, as appears on the page (no post-processing). currencyRaw: Optional[str] = None #: The currency of the price, in 3-letter ISO 4217 format. currency: Optional[str] = None #: A name-value pair field holding information pertaining to specific features. Usually in a form of a specification table or freeform specification list. additionalProperties: Optional[List[AdditionalProperty]] = None #: Type of the property, e.g. flat, house, land. propertyType: Optional[str] = None #: The year the real estate was built. yearBuilt: Optional[int] = None #: The URL of the virtual tour of the real estate. virtualTourUrl: Optional[str] = None #: Contains metadata about the data extraction process. 
metadata: Optional[RealEstateMetadata] = attrs.field( default=None, converter=attrs.converters.optional(MetadataCaster(RealEstateMetadata)), kw_only=True # type: ignore ) class RequestListCaster: def __init__(self, target): self._target = target def __call__(self, value): return [cast_request(item, self._target) for item in value] @attrs.define(kw_only=True) class ProductNavigation(Item): """Represents the navigational aspects of a product listing page on an e-commerce website""" #: Main URL from which the data is extracted. url: str = attrs.field(converter=url_to_str) #: Name of the category/page with the product list. #: #: Format: #: #: - trimmed (no whitespace at the beginning or the end of the description string) categoryName: Optional[str] = None #: List of sub-category links ordered by their position in the page. subCategories: Optional[List[ProbabilityRequest]] = attrs.field( default=None, converter=attrs.converters.optional(RequestListCaster(ProbabilityRequest)), kw_only=True # type: ignore ) #: List of product links found on the page category ordered by their position in the page. items: Optional[List[ProbabilityRequest]] = attrs.field( default=None, converter=attrs.converters.optional(RequestListCaster(ProbabilityRequest)), kw_only=True # type: ignore ) #: A link to the next page, if available. nextPage: Optional[Request] = None #: Number of the current page. #: #: It should only be extracted if the webpage shows a page number. #: #: It must be 1-based. For example, if the first page of a listing is #: numbered as 0 on the website, it should be extracted as `1` nonetheless. pageNumber: Optional[int] = None #: Data extraction process metadata. metadata: Optional[ProductNavigationMetadata] = attrs.field( default=None, converter=attrs.converters.optional(MetadataCaster(ProductNavigationMetadata)), kw_only=True # type: ignore ) @attrs.define(kw_only=True) class ArticleNavigation(Item): """Represents the navigational aspects of an article listing webpage. See :class:`ArticleList`. """ #: Main URL from which the data is extracted. url: str = attrs.field(converter=url_to_str) #: Name of the category/page. #: #: Format: #: #: - trimmed (no whitespace at the beginning or the end of the description string) categoryName: Optional[str] = None #: List of sub-category links ordered by their position in the page. subCategories: Optional[List[ProbabilityRequest]] = attrs.field( default=None, converter=attrs.converters.optional(RequestListCaster(ProbabilityRequest)), kw_only=True # type: ignore ) #: Links to listed items in order of appearance. items: Optional[List[ProbabilityRequest]] = attrs.field( default=None, converter=attrs.converters.optional(RequestListCaster(ProbabilityRequest)), kw_only=True # type: ignore ) #: A link to the next page, if available. nextPage: Optional[Request] = None #: Number of the current page. #: #: It should only be extracted if the webpage shows a page number. #: #: It must be 1-based. For example, if the first page of a listing is #: numbered as 0 on the website, it should be extracted as `1` nonetheless. pageNumber: Optional[int] = None #: Data extraction process metadata. metadata: Optional[ArticleNavigationMetadata] = attrs.field( default=None, converter=attrs.converters.optional(MetadataCaster(ArticleNavigationMetadata)), kw_only=True # type: ignore ) @attrs.define(kw_only=True) class JobPosting(Item): #: The url of the final response, after any redirects. url: str = attrs.field(converter=url_to_str) #: The identifier of the job posting. 
jobPostingId: Optional[str] = None #: Publication date of the job posting. #: #: Format: ISO 8601 format: "YYYY-MM-DDThh:mm:ssZ" #: #: With timezone, if available. datePublished: Optional[str] = None #: Same date as datePublished, but before parsing/normalization, i.e. as it appears on the website. datePublishedRaw: Optional[str] = None #: The date when the job posting was most recently modified. #: #: Format: ISO 8601 format: "YYYY-MM-DDThh:mm:ssZ" #: #: With timezone, if available. dateModified: Optional[str] = None #: Same date as dateModified, but before parsing/normalization, i.e. as it appears on the website. dateModifiedRaw: Optional[str] = None #: The date after which the job posting is not valid, e.g. the end of an offer. #: #: Format: ISO 8601 format: "YYYY-MM-DDThh:mm:ssZ" #: #: With timezone, if available. validThrough: Optional[str] = None #: Same date as validThrough, but before parsing/normalization, i.e. as it appears on the website. validThroughRaw: Optional[str] = None #: The title of the job posting. jobTitle: Optional[str] = None #: The headline of the job posting. headline: Optional[str] = None #: A (typically single) geographic location associated with the job position. jobLocation: Optional[JobLocation] = None #: A description of the job posting including sub-headings, with newline separators. #: #: Format: #: #: - trimmed (no whitespace at the beginning or the end of the description string), #: #: - line breaks included, #: #: - no length limit, #: #: - no normalization of Unicode characters. description: Optional[str] = None #: Simplified HTML of the description, including sub-headings, image captions and embedded content. descriptionHtml: Optional[str] = None #: Type of employment (e.g. full-time, part-time, contract, temporary, seasonal, internship). employmentType: Optional[str] = None #: The base salary of the job or of an employee in the proposed role. baseSalary: Optional[BaseSalary] = None #: Candidate requirements for the job. requirements: Optional[List[str]] = None #: Information about the organization offering the job position. hiringOrganization: Optional[HiringOrganization] = None #: Job start date #: #: Format: ISO 8601 format: "YYYY-MM-DDThh:mm:ssZ" #: #: With timezone, if available. jobStartDate: Optional[str] = None #: Same date as jobStartDate, but before parsing/normalization, i.e. as it appears on the website. jobStartDateRaw: Optional[str] = None #: Specifies the remote status of the position. remoteStatus: Optional[str] = None #: Contains metadata about the data extraction process. metadata: Optional[JobPostingMetadata] = attrs.field( default=None, converter=attrs.converters.optional(MetadataCaster(JobPostingMetadata)), kw_only=True # type: ignore )
zyte-common-items
/zyte-common-items-0.10.0.tar.gz/zyte-common-items-0.10.0/zyte_common_items/items.py
items.py
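The navigation items above normalize their link lists through ``RequestListCaster``: plain ``Request`` objects are cast into ``ProbabilityRequest`` instances with a default probability of 1.0. A sketch of that behavior, assuming ``convert_to_class`` (defined in ``util.py``, not shown in this excerpt) returns an instance of the target class; the URLs are illustrative.

```python
from zyte_common_items.components import ProbabilityRequest, Request
from zyte_common_items.items import ProductNavigation

navigation = ProductNavigation(
    url="https://example.com/category/toys",
    categoryName="Toys",
    items=[
        Request(url="https://example.com/product/1", name="Product 1"),
        Request(url="https://example.com/product/2", name="Product 2"),
    ],
)

# The RequestListCaster converter runs on assignment, so the plain Request
# objects above become ProbabilityRequest instances with probability 1.0.
first = navigation.items[0]
assert isinstance(first, ProbabilityRequest)
assert first.metadata.probability == 1.0
```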
from types import MappingProxyType
from typing import Any, Collection, Iterator, KeysView

from itemadapter.adapter import AttrsAdapter

from zyte_common_items.base import Item


def _is_empty(value):
    """Return ``True`` if the value is to be considered empty for the purpose
    of excluding it from serialization.

    Empty values include: ``None``, empty collections (tuples, lists, etc.).

    Non-empty values include: empty ``bytes`` or ``str``, ``False``, ``0``.

    *value* is assumed not to be a mapping; a mapping should be treated as a
    non-empty value, but this function would treat it as an empty value.
    """
    return value is None or (
        not value
        and not isinstance(value, (bytes, str))
        and isinstance(value, Collection)
    )


class ZyteItemAdapter(AttrsAdapter):
    """Wrap an :ref:`item <items>` to interact with its content as if it was
    a dictionary.

    It can be :ref:`configured <configuration>` into itemadapter_ to improve
    interaction with :ref:`items <items>` for itemadapter users like Scrapy_.

    It extends AttrsAdapter_ with the following features:

    - Allows interaction and serialization of fields from
      :attr:`~zyte_common_items.Item._unknown_fields_dict` as if they were
      regular item fields.

    - Removes keys with empty values from the output of
      `ItemAdapter.asdict()`_, for a cleaner output.

    .. _AttrsAdapter: https://github.com/scrapy/itemadapter#built-in-adapters
    .. _itemadapter: https://github.com/scrapy/itemadapter#itemadapter
    .. _ItemAdapter.asdict(): https://github.com/scrapy/itemadapter#asdict---dict
    .. _Scrapy: https://scrapy.org/
    """

    @classmethod
    def is_item(cls, item: Any) -> bool:
        return isinstance(item, Item)

    def get_field_meta(self, field_name: str) -> MappingProxyType:
        if field_name in self._fields_dict:
            return self._fields_dict[field_name].metadata  # type: ignore
        elif field_name in self.item._unknown_fields_dict:
            return MappingProxyType({})
        raise KeyError(field_name)

    def field_names(self) -> KeysView:
        return KeysView({**self._fields_dict, **self.item._unknown_fields_dict})

    def __getitem__(self, field_name: str) -> Any:
        if field_name in self._fields_dict:
            return getattr(self.item, field_name)
        elif field_name in self.item._unknown_fields_dict:
            return self.item._unknown_fields_dict[field_name]
        raise KeyError(field_name)

    def __setitem__(self, field_name: str, value: Any) -> None:
        if field_name in self._fields_dict:
            setattr(self.item, field_name, value)
        else:
            self.item._unknown_fields_dict[field_name] = value

    def __delitem__(self, field_name: str) -> None:
        if field_name in self._fields_dict:
            del self._fields_dict[field_name]
            delattr(self.item, field_name)
        elif field_name in self.item._unknown_fields_dict:
            del self.item._unknown_fields_dict[field_name]
        else:
            raise KeyError(
                f"Object of type {self.item.__class__.__name__} does not contain a field with name {field_name}"
            )

    def __iter__(self) -> Iterator:
        fields = [
            attr
            for attr in self._fields_dict
            if not _is_empty(getattr(self.item, attr))
        ]
        fields.extend(
            attr
            for attr in self.item._unknown_fields_dict
            if not _is_empty(self.item._unknown_fields_dict[attr])
        )
        return iter(fields)


class ZyteItemKeepEmptyAdapter(ZyteItemAdapter):
    """Similar to :class:`~.ZyteItemAdapter` but doesn't remove empty values.

    It is intended to be used in tests and other use cases where it's
    important to differentiate between empty and missing fields.
    """

    def __iter__(self) -> Iterator:
        fields = [attr for attr in self._fields_dict if hasattr(self.item, attr)]
        fields.extend(self.item._unknown_fields_dict)
        return iter(fields)
zyte-common-items
/zyte-common-items-0.10.0.tar.gz/zyte-common-items-0.10.0/zyte_common_items/adapter.py
adapter.py
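To make the adapter above visible to itemadapter users (Scrapy exporters, item pipelines, and so on), it has to be registered first. Below is a minimal sketch following itemadapter's documented ``ADAPTER_CLASSES`` registration; the import uses the module path shown above, and the item values are made up.

```python
from itemadapter import ItemAdapter

from zyte_common_items.adapter import ZyteItemAdapter
from zyte_common_items.items import Product

# Give ZyteItemAdapter priority over the built-in adapters.
ItemAdapter.ADAPTER_CLASSES.appendleft(ZyteItemAdapter)

product = Product(url="https://example.com/product/123", name="Sample product")
adapter = ItemAdapter(product)

# Unset/empty fields are skipped by ZyteItemAdapter.__iter__, so they do not
# show up in the serialized output.
assert adapter.asdict() == {
    "url": "https://example.com/product/123",
    "name": "Sample product",
}
```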
from datetime import datetime from types import CoroutineType from typing import Generic, Optional, Type, TypeVar import attrs from price_parser import Price from web_poet import ItemPage, RequestUrl, Returns, WebPage, field from web_poet.fields import FieldsMixin from web_poet.pages import ItemT from web_poet.utils import get_generic_param from .components import ( ArticleListMetadata, ArticleMetadata, ArticleNavigationMetadata, BusinessPlaceMetadata, JobPostingMetadata, ProductListMetadata, ProductMetadata, ProductNavigationMetadata, RealEstateMetadata, request_list_processor, ) from .items import ( Article, ArticleList, ArticleNavigation, BusinessPlace, JobPosting, Product, ProductList, ProductNavigation, RealEstate, ) from .processors import ( brand_processor, breadcrumbs_processor, price_processor, simple_price_processor, ) from .util import format_datetime, metadata_processor #: Generic type for metadata classes for specific item types. MetadataT = TypeVar("MetadataT") def _date_downloaded_now(): return format_datetime(datetime.utcnow()) class HasMetadata(Generic[MetadataT]): """Inherit from this generic mixin to set the metadata class used by a page class.""" @property def metadata_cls(self) -> Optional[Type[MetadataT]]: """Metadata class.""" return _get_metadata_class(type(self)) def _get_metadata_class(cls: type) -> Optional[Type[MetadataT]]: return get_generic_param(cls, HasMetadata) class PriceMixin(FieldsMixin): """Provides price-related field implementations.""" _parsed_price: Optional[Price] = None async def _get_parsed_price(self) -> Optional[Price]: if self._parsed_price is None: # the price field wasn't executed or doesn't write _parsed_price price = getattr(self, "price", None) if isinstance(price, CoroutineType): price = await price if self._parsed_price is None: # the price field doesn't write _parsed_price (or doesn't exist) self._parsed_price = Price( amount=None, currency=None, amount_text=price ) return self._parsed_price @field def currency(self) -> Optional[str]: return getattr(self, "CURRENCY", None) @field async def currencyRaw(self) -> Optional[str]: parsed_price = await self._get_parsed_price() if parsed_price: return parsed_price.currency return None class _BasePage(ItemPage[ItemT], HasMetadata[MetadataT]): class Processors: metadata = [metadata_processor] @field def metadata(self) -> MetadataT: if self.metadata_cls is None: raise ValueError(f"{type(self)} doesn'have a metadata class configured.") value = self.metadata_cls() attributes = dir(value) if "dateDownloaded" in attributes: value.dateDownloaded = _date_downloaded_now() # type: ignore if "probability" in attributes: value.probability = 1.0 # type: ignore return value def no_item_found(self) -> ItemT: """Return an item with the current url and probability=0, indicating that the passed URL doesn't contain the expected item. Use it in your .validate_input implementation. 
""" if self.metadata_cls is None: raise ValueError(f"{type(self)} doesn'have a metadata class configured.") metadata = self.metadata_cls() metadata_attributes = dir(metadata) if "dateDownloaded" in metadata_attributes: metadata.dateDownloaded = _date_downloaded_now() # type: ignore if "probability" in metadata_attributes: metadata.probability = 0.0 # type: ignore return self.item_cls( # type: ignore url=self.url, # type: ignore[attr-defined] metadata=metadata, ) @attrs.define class BasePage(_BasePage): class Processors(_BasePage.Processors): pass request_url: RequestUrl @field def url(self) -> str: return str(self.request_url) class BaseArticlePage(BasePage, Returns[Article], HasMetadata[ArticleMetadata]): class Processors(BasePage.Processors): breadcrumbs = [breadcrumbs_processor] class BaseArticleListPage( BasePage, Returns[ArticleList], HasMetadata[ArticleListMetadata] ): class Processors(BasePage.Processors): breadcrumbs = [breadcrumbs_processor] class BaseArticleNavigationPage( BasePage, Returns[ArticleNavigation], HasMetadata[ArticleNavigationMetadata] ): pass class BaseBusinessPlacePage( BasePage, Returns[BusinessPlace], HasMetadata[BusinessPlaceMetadata] ): pass class BaseJobPostingPage( BasePage, Returns[JobPosting], HasMetadata[JobPostingMetadata] ): pass class BaseProductPage( BasePage, PriceMixin, Returns[Product], HasMetadata[ProductMetadata] ): class Processors(BasePage.Processors): brand = [brand_processor] breadcrumbs = [breadcrumbs_processor] price = [price_processor] regularPrice = [simple_price_processor] class BaseProductListPage( BasePage, Returns[ProductList], HasMetadata[ProductListMetadata] ): class Processors(BasePage.Processors): breadcrumbs = [breadcrumbs_processor] class BaseProductNavigationPage( BasePage, Returns[ProductNavigation], HasMetadata[ProductNavigationMetadata] ): class Processors(BasePage.Processors): subCategories = [request_list_processor] items = [request_list_processor] class BaseRealEstatePage( BasePage, Returns[RealEstate], HasMetadata[RealEstateMetadata] ): class Processors(BasePage.Processors): breadcrumbs = [breadcrumbs_processor] @attrs.define class Page(_BasePage, WebPage): class Processors(_BasePage.Processors): pass @field def url(self) -> str: return str(self.response.url) class ArticlePage(Page, Returns[Article], HasMetadata[ArticleMetadata]): class Processors(Page.Processors): breadcrumbs = [breadcrumbs_processor] class ArticleListPage(Page, Returns[ArticleList], HasMetadata[ArticleListMetadata]): class Processors(Page.Processors): breadcrumbs = [breadcrumbs_processor] class ArticleNavigationPage( Page, Returns[ArticleNavigation], HasMetadata[ArticleNavigationMetadata] ): pass class BusinessPlacePage( Page, Returns[BusinessPlace], HasMetadata[BusinessPlaceMetadata] ): pass class JobPostingPage(Page, Returns[JobPosting], HasMetadata[JobPostingMetadata]): pass class ProductPage(Page, PriceMixin, Returns[Product], HasMetadata[ProductMetadata]): class Processors(Page.Processors): brand = [brand_processor] breadcrumbs = [breadcrumbs_processor] price = [price_processor] regularPrice = [simple_price_processor] class ProductListPage(Page, Returns[ProductList], HasMetadata[ProductListMetadata]): class Processors(Page.Processors): breadcrumbs = [breadcrumbs_processor] class ProductNavigationPage( Page, Returns[ProductNavigation], HasMetadata[ProductNavigationMetadata] ): pass class RealEstatePage(Page, Returns[RealEstate], HasMetadata[RealEstateMetadata]): class Processors(Page.Processors): breadcrumbs = [breadcrumbs_processor]
zyte-common-items
/zyte-common-items-0.10.0.tar.gz/zyte-common-items-0.10.0/zyte_common_items/pages.py
pages.py
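As a rough illustration of how these base classes are meant to be used, here is a hedged sketch of a site-specific page object built on `ProductPage`. The class name and CSS selectors are made up; it assumes the `web_poet.WebPage`-style `.css()` shortcut that `Page` inherits, and relies on the `Processors` declared above (for example `price_processor`) being applied to fields with matching names.

```python
from web_poet import field
from zyte_common_items import ProductPage


class ExampleComProductPage(ProductPage):
    @field
    def name(self):
        return self.css("h1.product-title::text").get()

    @field
    def price(self):
        # The declared price_processor is expected to parse a raw string
        # such as "$ 19.99"; PriceMixin then exposes currencyRaw from it.
        return self.css(".price::text").get()
```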
============ zyte-parsers ============ .. image:: https://img.shields.io/pypi/v/zyte-parsers.svg :target: https://pypi.python.org/pypi/zyte-parsers :alt: PyPI Version .. image:: https://img.shields.io/pypi/pyversions/zyte-parsers.svg :target: https://pypi.python.org/pypi/zyte-parsers :alt: Supported Python Versions .. image:: https://github.com/zytedata/zyte-parsers/workflows/tox/badge.svg :target: https://github.com/zytedata/zyte-parsers/actions :alt: Build Status .. image:: https://codecov.io/github/zytedata/zyte-parsers/coverage.svg?branch=master :target: https://codecov.io/gh/zytedata/zyte-parsers :alt: Coverage report .. image:: https://readthedocs.org/projects/zyte-parsers/badge/?version=stable :target: https://zyte-parsers.readthedocs.io/en/stable/?badge=stable :alt: Documentation Status .. description starts ``zyte-parsers`` is a Python 3.7+ library that contains functions to extract data from webpage parts. .. description ends * Documentation: https://zyte-parsers.readthedocs.io/en/latest/ * License: BSD 3-clause
zyte-parsers
/zyte-parsers-0.3.0.tar.gz/zyte-parsers-0.3.0/README.rst
README.rst
import re import string from collections import Counter from typing import List, Optional, Tuple import attr from .api import SelectorOrElement, input_to_element from .utils import extract_link, extract_text, first_satisfying @attr.s(frozen=True, auto_attribs=True) class Breadcrumb: name: Optional[str] = None url: Optional[str] = None _PUNCTUATION_TRANS = str.maketrans("", "", string.punctuation) _BREADCRUMBS_SEP = ( "ᐊᐅ<>ᐸᐳ‹›≺≻≪≫«»⋘⋙❬❭❮❯❰❱⟨⟩⟪⟫⫷⫸〈〉《》⦉⦊⭅⭆⭠⭢←→↤↦⇐⇒⇠⇢" "⇦⇨⇽⇾⟵⟶⟸⟹⟻⟼⟽⟾⮘⮚⮜⮞⯇⯈⊲⊳◀▶◁▷◂▸◃▹◄►◅▻➜➝➞➟➠➡➢➣➤➧➨➩" "➪➫➬➭➮➯➱➲/⁄\\⟋⟍⫻⫼⫽|𐬻¦‖∣⎪⎟⎸⎹│┃┆┇┊┋❘❙❚.,+:-" ) SEP_REG_STR = rf"([{_BREADCRUMBS_SEP}]+|->)" SPLIT_REG = re.compile(rf"(^|\s+)[{_BREADCRUMBS_SEP}]+($|\s+)") SEP_REG = re.compile(rf"^{SEP_REG_STR}$") LSTRIP_SEP_REG = re.compile(rf"^{SEP_REG_STR}\s+") RSTRIP_SEP_REG = re.compile(rf"\s+{SEP_REG_STR}$") def extract_breadcrumbs( node: SelectorOrElement, *, base_url: Optional[str], max_search_depth: int = 10 ) -> Optional[Tuple[Breadcrumb, ...]]: """Extract breadcrumb items from node that represents breadcrumb component. It finds all anchor elements to specified maximal depth. Anchors are collected in pre-order traversal. Such strategy of traversing supports cases where structure of nodes representing breadcrumbs is flat, which means that breadcrumb's anchors are on the same depth of HTML structure and where breadcrumb items are nested, which means that element with next item can be a child of element with previous breadcrumb item. It also post-processes extracted breadcrumbs by using semantic markup or the location of breadcrumb separators. :param node: Node representing and including breadcrumb component. :param base_url: Base URL of site. :param max_search_depth: Max depth for searching anchors. :return: Tuple with breadcrumb items. """ def extract_breadcrumbs_rec( node, search_depth, breadcrumbs_accum, markup_hier_accum, separators_accum, list_tag_occured, curr_markup_hier, ): """ Traverse html tree and search for elements that represent breadcrumb items with maximal depth of searching equal to `max_search_depth`. It also extracts breadcrumb items from element's tails since it often happens that non-anchor items are placed without any surrounding element. Because breadcrumb elements may contain dropdowns, the function filters them out by doing the following: * does not go into nested HTML list elements (<ol> and <ul>). * does not go into any HTML list elements with classes that relate to drop down, like "dropdown", "drop-down", "DropDown", etc. For every found element it does the following clean-up: * extracts name of breadcrumb from element's text or `title` attribute. * name cannot be a single character with punctuation like "»" or "|". * is able to parse name and split it from separators. * breadcrumb item has to contain name or url. * relative URLs are joined with base URL. 
""" if node.tag in {"button"}: return if node.tag == "a" or len(node) == 0: name = first_satisfying( [ extract_text(node), node.get("title").strip() if node.get("title") else None, ] ) url = extract_link(node, base_url) left_sep, parsed_name, right_sep = _parse_breadcrumb_name(name) if left_sep and separators_accum and not separators_accum[-1]: separators_accum[-1] = left_sep if parsed_name or url: breadcrumbs_accum.append(Breadcrumb(parsed_name, url)) markup_hier_accum.append(curr_markup_hier) separators_accum.append(right_sep) else: is_list_tag = node.tag in {"ul", "ol"} skip_list_tag = is_list_tag and ( _has_special_class(node.get("class")) or list_tag_occured ) item_type = _extract_markup_type(node) if search_depth < max_search_depth and not skip_list_tag: for child in node: new_hierarchy = list(curr_markup_hier) if item_type: new_hierarchy.append(item_type) extract_breadcrumbs_rec( child, search_depth + 1, breadcrumbs_accum, markup_hier_accum, separators_accum, list_tag_occured=list_tag_occured or is_list_tag, curr_markup_hier=new_hierarchy, ) if node.tail is not None: left_sep, parsed_name, right_sep = _parse_breadcrumb_name(node.tail) if left_sep and separators_accum and not separators_accum[-1]: separators_accum[-1] = left_sep if parsed_name: breadcrumbs_accum.append(Breadcrumb(name=parsed_name)) markup_hier_accum.append(curr_markup_hier) separators_accum.append(right_sep) node = input_to_element(node) breadcrumbs: List[Breadcrumb] = [] markup_hier: List[List[str]] = [] separators: List[bool] = [] extract_breadcrumbs_rec( node, 0, breadcrumbs, markup_hier, separators, list_tag_occured=False, curr_markup_hier=[], ) assert len(breadcrumbs) == len(markup_hier) == len(separators) return _postprocess_breadcrumbs(breadcrumbs, markup_hier, separators) def _parse_breadcrumb_name( name: Optional[str], ) -> Tuple[Optional[str], Optional[str], Optional[str]]: """Split extracted name into left separator, clean name and right separator.""" if name: stripped_name = name.strip() if SEP_REG.match(stripped_name): return stripped_name.strip(), None, None left_match = LSTRIP_SEP_REG.match(stripped_name) left_sep = left_match.group().strip() if left_match else None without_left_sep = ( stripped_name[left_match.end() :] if left_match else stripped_name ) if SEP_REG.match(without_left_sep): return left_sep, None, without_left_sep.strip() right_match = RSTRIP_SEP_REG.search(without_left_sep) right_sep = right_match.group().strip() if right_match else None name = ( without_left_sep[: right_match.start()] if right_match else without_left_sep ) return left_sep, name or None, right_sep return None, None, None def _postprocess_breadcrumbs(breadcrumbs, markup_hier, separators): """ Post-process breadcrumbs using the following procedures: * If there is only a single breadcrumb with name and without link, try to split the name into separate breadcrumb items. * If markup exists, then use it for selecting correct breadcrumb items. * Otherwise, use location of separators to determine which breadcrumb items are relevant and which not (if there is separator between two items then these two items are relevant). 
""" if not breadcrumbs: return None if len(breadcrumbs) == 1 and breadcrumbs[0].name and not breadcrumbs[0].url: parts = (s.strip() for s in SPLIT_REG.split(breadcrumbs[0].name)) return tuple(Breadcrumb(name=p) for p in parts if p) markup_exists = any(len(h) > 0 for h in markup_hier) if markup_exists: breadcrumbs = _postprocess_using_markup(breadcrumbs, markup_hier) else: breadcrumbs = _postprocess_using_separators(breadcrumbs, separators) return tuple(_remove_duplicated_first_and_last_items(breadcrumbs)) def _postprocess_using_markup(breadcrumbs, markup_hier): breadcrumb_indices_with_markup = [ idx for idx, h in enumerate(markup_hier) if len(h) > 0 ] first_with_markup = min(breadcrumb_indices_with_markup, default=-1) last_with_markup = max(breadcrumb_indices_with_markup, default=-1) # often the items without markup at the beginning and the end are # respectively home and product items indices_to_leave = {first_with_markup - 1, last_with_markup + 1} return [ b for idx, (b, h) in enumerate(zip(breadcrumbs, markup_hier)) if idx in indices_to_leave or len(h) > 0 ] def _postprocess_using_separators(breadcrumbs, separators): def prev_sep(idx): return separators[idx - 1] if 0 <= idx - 1 < len(separators) else None most_common_seps = Counter(filter(None, separators)).most_common() main_sep = most_common_seps[0][0] if most_common_seps else None if not main_sep: return breadcrumbs return [ b for idx, (b, sep) in enumerate(zip(breadcrumbs, separators)) if sep == main_sep or (prev_sep(idx) == main_sep) ] def _extract_markup_type(node): def check_schema(name): for schema_attr in {"itemtype", "typeof"}: if name in node.get(schema_attr, "").lower(): return True return False if check_schema("data-vocabulary.org/breadcrumb"): return "data-vocabulary" if check_schema("listitem"): return "schema" def _remove_duplicated_first_and_last_items(breadcrumbs): """ Remove "go back" urls from the beginning or the end of breadcrumb element. There is an assumption that there can be only one such url. First it tries to remove url at the beginning by checking if there is any other the same url in further breadcrumb items. If not, it checks the last url by comparing it with remaining urls. """ first_url = breadcrumbs[0].url if first_url is not None and first_url in (b.url for b in breadcrumbs[1:] if b.url): return breadcrumbs[1:] last_url = breadcrumbs[-1].url if last_url is not None and last_url in (b.url for b in breadcrumbs[1:-1] if b.url): return breadcrumbs[:-1] return breadcrumbs def _has_special_class(class_attr: str) -> bool: """ Check if a given value of class attribute has a class that relates to drop down like "dropdown", "drop-down", "DropDown", etc. """ if class_attr: return any( cls_name in c.translate(_PUNCTUATION_TRANS).lower().strip() for cls_name in {"dropdown", "actions"} for c in class_attr.split() ) return False
zyte-parsers
/zyte-parsers-0.3.0.tar.gz/zyte-parsers-0.3.0/zyte_parsers/breadcrumbs.py
breadcrumbs.py
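A quick usage sketch for the function above. The HTML fragment is invented, and it assumes `extract_breadcrumbs` is re-exported from the top-level `zyte_parsers` package (otherwise import it from `zyte_parsers.breadcrumbs`).

```python
from parsel import Selector

from zyte_parsers import extract_breadcrumbs

html = """
<nav class="breadcrumbs">
  <a href="/">Home</a> &gt;
  <a href="/garden">Garden</a> &gt;
  <span>Chairs</span>
</nav>
"""
node = Selector(text=html).css("nav")[0]
crumbs = extract_breadcrumbs(node, base_url="https://example.com") or ()
for crumb in crumbs:
    print(crumb.name, crumb.url)  # e.g. Home https://example.com/
```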
import itertools from typing import Any, Callable, Iterable, Optional from urllib.parse import urljoin import html_text from lxml.html import HtmlElement, fromstring # noqa: F401 from parsel import Selector # noqa: F401 from w3lib.html import strip_html5_whitespace from zyte_parsers.api import SelectorOrElement, input_to_element def is_js_url(url: str) -> bool: """Check if the URL is intended for handling by JS. >>> is_js_url("http://example.com") False >>> is_js_url("/foo") False >>> is_js_url("javascript:void(0)") True >>> is_js_url("#") True """ normed = url.strip().lower() if normed.startswith("javascript:") or normed.startswith("#"): return True return False def strip_urljoin(base_url: Optional[str], url: Optional[str]) -> str: r"""Strip the URL and use ``urljoin`` on it. >>> strip_urljoin("http://example.com", None) 'http://example.com' >>> strip_urljoin("http://example.com", "foo") 'http://example.com/foo' >>> strip_urljoin("http://example.com", " ") 'http://example.com' >>> strip_urljoin("http://example.com", " foo\t") 'http://example.com/foo' >>> strip_urljoin(None, "foo") 'foo' >>> strip_urljoin(None, None) '' """ if url is not None: url = strip_html5_whitespace(url) # XXX: mypy doesn't like when one passes None to urljoin return urljoin(base_url or "", url or "") def extract_link(a_node: SelectorOrElement, base_url: str) -> Optional[str]: """ Extract the absolute url link from an ``<a>`` HTML tag. >>> extract_link(fromstring("<a href=' http://example.com'"), "") 'http://example.com' >>> extract_link(fromstring("<a href='/foo '"), "http://example.com") 'http://example.com/foo' >>> extract_link(fromstring("<a href='' data-url='http://example.com'"), "") 'http://example.com' >>> extract_link(fromstring("<a href='javascript:void(0)'"), "") >>> extract_link(Selector(text="<a href='http://example.com'").css("a")[0], "") 'http://example.com' """ a_node = input_to_element(a_node) link = a_node.get("href") or a_node.get("data-url") if not link or is_js_url(link): return None try: link = strip_urljoin(base_url, link) except ValueError: link = None return link def extract_text(node: SelectorOrElement, guess_layout: bool = False) -> Optional[str]: """Extract text from HTML using ``html_text``. >>> extract_text(fromstring("<p>foo bar </p>")) 'foo bar' >>> extract_text(Selector(text="<p>foo bar </p>")) 'foo bar' """ node = input_to_element(node) value = html_text.extract_text(node, guess_layout=guess_layout) if value: return value return None def first_satisfying( xs: Iterable, condition_fun: Callable[[Any], Any] = lambda x: x, default: Any = None ) -> Any: """Return the first item in ``xs`` that satisfies the condition. >>> first_satisfying([0, "", 1]) 1 >>> first_satisfying([1, 2, 3], condition_fun=lambda x: x > 1) 2 >>> first_satisfying([0, ""], default=2) 2 """ try: return next(x for x in xs if condition_fun(x)) except StopIteration: return default def iterwalk_limited(node: HtmlElement, search_depth: int) -> Iterable[HtmlElement]: yield node if search_depth <= 0: return for child in node: yield from iterwalk_limited(child, search_depth - 1) def take(iterable: Iterable[Any], n: int): return list(itertools.islice(iterable, n))
zyte-parsers
/zyte-parsers-0.3.0.tar.gz/zyte-parsers-0.3.0/zyte_parsers/utils.py
utils.py
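Most helpers above already carry doctests; `iterwalk_limited` and `take` do not, so here is a small sketch of the depth-limited traversal they provide.

```python
from lxml.html import fromstring

from zyte_parsers.utils import iterwalk_limited, take

tree = fromstring("<div><ul><li>a</li><li>b</li></ul></div>")
# search_depth=1 visits the root element and its direct children only.
print([el.tag for el in iterwalk_limited(tree, search_depth=1)])  # ['div', 'ul']
print(take(iter("abcdef"), 3))  # ['a', 'b', 'c']
```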
## Zyte-spmstats

A small Python module for interacting with the Zyte Smart Proxy Manager Stats API.

### Documentation

[Zyte SPM Stats API Documentation](https://docs.zyte.com/smart-proxy-manager/stats.html)

### Installation

`pip install zyte-spmstats`

### Usage

1. For a single domain/netloc:

   `python -m zyte.spmstats <ORG-API> amazon.com 2022-06-15T18:50:00 2022-06-17T23:00`

   Output:

       {
           "failed": 0,
           "clean": 29,
           "time_gte": "2022-06-10T18:55:00",
           "concurrency": 0,
           "domain": "amazon.com",
           "traffic": 3865060,
           "total_time": 1945
       }

2. For multiple domains/netlocs:

   `python -m zyte.spmstats <ORG-API> amazon.com,pharmamarket.be 2022-06-15T18:50:00 2022-06-17T23:00`

   Output:

       "results": [
           {
               "failed": 88,
               "clean": 230,
               "time_gte": "2022-06-13T07:50:00",
               "concurrency": 1,
               "domain": "pharmamarket.be",
               "traffic": 3690976,
               "total_time": 2386
           },
           {
               "failed": 224,
               "clean": 8497,
               "time_gte": "2022-06-16T01:45:00",
               "concurrency": 80,
               "domain": "amazon.com",
               "traffic": 2280046474,
               "total_time": 1373
           }
       ]
zyte-spmstats
/zyte-spmstats-1.0.0.tar.gz/zyte-spmstats-1.0.0/README.md
README.md
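For scripting on top of the CLI shown above, here is a hedged sketch: it shells out to the documented command and aggregates the per-domain counters, assuming the module prints the JSON shown in the Output blocks to stdout. `<ORG-API>` is the same placeholder used in the usage examples.

```python
import json
import subprocess

cmd = [
    "python", "-m", "zyte.spmstats",
    "<ORG-API>", "amazon.com,pharmamarket.be",
    "2022-06-15T18:50:00", "2022-06-17T23:00",
]
# Run the documented CLI and capture whatever it prints.
stdout = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
results = json.loads(stdout).get("results", [])
print(sum(r["clean"] for r in results), "clean requests in total")
```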
from typing import Union, Callable, Optional from zython.operations import _iternal from zython.operations._op_codes import _Op_code from zython.operations.constraint import Constraint from zython.operations.operation import Operation from zython.var_par.array import ArrayMixin from zython.var_par.types import ZnSequence def exists(seq: ZnSequence, func: Optional[Union["Constraint", Callable]] = None) -> Constraint: """ Specify constraint which should be true for `at least` one element in ``seq``. The method has the same signature as ``forall``. See Also -------- forall Examples -------- >>> import zython as zn >>> class MyModel(zn.Model): ... def __init__(self): ... self.a = zn.var(range(0, 10)) ... self.b = zn.var(range(0, 10)) ... self.c = zn.var(range(0, 10)) ... self.constraints = [zn.exists((self.a, self.b, self.c), lambda elem: elem > 0)] >>> model = MyModel() >>> result = model.solve_satisfy() >>> sorted((result["a"], result["b"], result["c"])) [0, 0, 1] """ iter_var, operation = _iternal.get_iter_var_and_op(seq, func) return Constraint.exists(seq, iter_var, operation) def forall(seq: ZnSequence, func: Optional[Union["Constraint", Callable]] = None) -> Constraint: """ Takes expression (that is, constraint) or function which return constraint and make them a single constraint which should be true for every element in the array. Parameters ---------- seq: range, array of var, or sequence (list or tuple) of var sequence to apply ``func`` func: Constraint or Callable, optional Constraint every element in seq should satisfy or function which returns such constraint. If function or lambda it should be with 0 or 1 arguments only. Returns ------- result: Constraint resulted constraint Examples -------- >>> import zython as zn >>> class MyModel(zn.Model): ... def __init__(self): ... self.a = zn.Array(zn.var(int), shape=3) ... self.constraints = [zn.forall(self.a, lambda elem: elem > 0)] >>> model = MyModel() >>> model.solve_satisfy() Solution(a=[1, 1, 1]) """ iter_var, operation = _iternal.get_iter_var_and_op(seq, func) return Constraint.forall(seq, iter_var, operation) def sum(seq: ZnSequence, func: Optional[Union["Constraint", Callable]] = None) -> Operation: """ Calculate the sum of the ``seq`` according with ``func`` Iterates through elements in seq and calculate their sum, you can modify summarized expressions by specifying ``func`` parameter. Parameters ---------- seq: range, array of var, or sequence (list or tuple) of var sequence to sum up func: Operation or Callable, optional Operation which will be executed with every element and later sum up. Or function which returns such operation. If function or lambda it should be with 0 or 1 arguments only. Returns ------- result: Operation Operation which will calculate the sum Examples -------- >>> import zython as zn >>> class MyModel(zn.Model): ... def __init__(self): ... self.a = zn.Array(zn.var(range(1, 10)), shape=4) >>> model = MyModel() >>> model.solve_minimize(zn.sum(model.a)) Solution(objective=4, a=[1, 1, 1, 1]) # find minimal integer sides of the right triangle >>> import zython as zn >>> class MyModel(zn.Model): ... def __init__(self): ... self.a = zn.var(int) ... self.b = zn.var(int) ... self.c = zn.var(int) ... self.constraints = [self.c ** 2 == zn.sum((self.a, self.b), lambda i: i ** 2), ... 
zn.forall((self.a, self.b, self.c), lambda i: i > 0)] >>> model = MyModel() >>> model.solve_minimize(model.c) Solution(objective=5, a=4, b=3, c=5) """ iter_var, operation = _iternal.get_iter_var_and_op(seq, func) if isinstance(seq, ArrayMixin) and operation is None: type_ = seq.type else: type_ = operation.type if type_ is None: raise ValueError("Can't derive the type of {} expression".format(func)) return Operation.sum(seq, iter_var, operation, type_=type_) def count(seq: ZnSequence, value: Union[int, Operation, Callable[[ZnSequence], Operation]]) -> Operation: """ Returns the number of occurrences of ``value`` in ``seq``. Parameters ---------- seq: range, array of var, or sequence (list or tuple) of var Sequence to count ``value`` in value: Operation or Callable, optional Operation or constant which will be counted in ``seq``. Or function which returns such value. If function or lambda it should be with 0 or 1 arguments only. Returns ------- result: Operation Operation which will calculate the number of ``value`` in ``seq``. Examples -------- Simple timeshedule problem: you with your neighbor wanted to deside who will wash the dishes in the next week. You should do it 3 days (because you've bought fancy doormat) and your neighbour - 4 days. >>> from collections import Counter >>> import zython as zn >>> class MyModel(zn.Model): ... def __init__(self): ... self.a = zn.Array(zn.var(range(2)), shape=7) ... self.constraints = [zn.count(self.a, 0) == 3, zn.count(self.a, 1) == 4] >>> model = MyModel() >>> result = model.solve_satisfy() >>> Counter(result["a"]) Counter({1: 4, 0: 3}) ``zn.alldifferent`` could be emulated via ``zn.count`` >>> import zython as zn >>> class MyModel(zn.Model): ... def __init__(self): ... self.a = zn.Array(zn.var(range(10)), shape=4) ... self.constraints = [zn.forall(range(self.a.size(0)), ... lambda i: zn.count(self.a, lambda elem: elem == self.a[i]) == 1)] >>> model = MyModel() >>> result = model.solve_satisfy() >>> Counter(result["a"]) Counter({3: 1, 2: 1, 1: 1, 0: 1}) """ iter_var, operation = _iternal.get_iter_var_and_op(seq, value) return Operation.count(seq, iter_var, operation, type_=int) def min(seq: ZnSequence, key: Union[Operation, Callable[[ZnSequence], Operation], None] = None) -> Operation: """ Finds the smallest object in ``seq``, according to ``key`` Parameters ---------- seq: range, array of var, or sequence (list or tuple) of var Sequence to find smallest element in key: Operation or Callable, optional The parameter has the same semantic as in python: specify the operation which result will be latter compared. Returns ------- result: Operation Operation which will find the smallest element. See Also -------- max Examples -------- >>> import zython as zn >>> class MyModel(zn.Model): ... def __init__(self): ... self.a = zn.Array([[1, 2, 3], [-1, -2, -3]]) ... self.m = zn.min(self.a) >>> model = MyModel() >>> model.solve_satisfy() Solution(m=-3) """ iter_var, operation = _iternal.get_iter_var_and_op(seq, key) return Operation.min(seq, iter_var, operation, type_=int) def max(seq: ZnSequence, key: Union[Operation, Callable[[ZnSequence], Operation], None] = None) -> Operation: """ Finds the biggest object in ``seq``, according to ``key`` Parameters ---------- seq: range, array of var, or sequence (list or tuple) of var Sequence to find smallest element in key: Operation or Callable, optional The parameter has the same semantic as in python: specify the operation which result will be latter compared. 
Returns ------- result: Operation Operation which will find the biggest element. See Also -------- min Examples -------- >>> import zython as zn >>> class MyModel(zn.Model): ... def __init__(self): ... self.a = zn.Array([[1, 2, 3], [-1, -2, -3]]) ... self.m = zn.max(range(self.a.size(0)), lambda row: zn.count(self.a[row, :], lambda elem: elem < 0)) >>> model = MyModel() >>> model.solve_satisfy() Solution(m=3) """ iter_var, operation = _iternal.get_iter_var_and_op(seq, key) return Operation.max(seq, iter_var, operation, type_=int) class alldifferent(Constraint): """ requires all the variables appearing in its argument to be different Parameters ---------- seq: range, array of var, or sequence (list or tuple) of var sequence which elements of which should be distinct except0: bool, optional if set - ``seq`` can contain any amount of 0. See Also -------- allequal ndistinct Examples -------- >>> import zython as zn >>> class MyModel(zn.Model): ... def __init__(self): ... self.a = zn.Array(zn.var(range(1, 10)), shape=5) ... self.x = zn.var(range(3)) ... self.y = zn.var(range(3)) ... self.z = zn.var(range(3)) ... self.constraints = [zn.alldifferent(self.a[:3]), zn.alldifferent((self.x, self.y, self.z))] >>> model = MyModel() >>> model.solve_satisfy() Solution(a=[3, 2, 1, 1, 1], x=2, y=1, z=0) If ``except0`` flag is set constraint doesn't affect 0'es in the ``seq`` >>> from collections import Counter >>> import zython as zn >>> class MyModel(zn.Model): ... def __init__(self): ... self.a = zn.Array(zn.var(range(5)), shape=6) ... self.constraints = [zn.alldifferent(self.a, except0=True), zn.sum(self.a) == 10] >>> model = MyModel() >>> result = model.solve_satisfy() >>> Counter(result["a"]) == {0: 2, 4: 1, 3: 1, 2: 1, 1: 1} True """ def __init__(self, seq: ZnSequence, except0: Optional[bool] = None): if except0: super().__init__(_Op_code.alldifferent_except_0, seq) else: super().__init__(_Op_code.alldifferent, seq) class allequal(Constraint): """ requires all the variables appearing in its argument to be equal Parameters ---------- seq: range, array of var, or sequence (list or tuple) of var sequence which elements of which should be distinct See Also -------- alldifferent ndistinct Examples -------- >>> import zython as zn >>> class MyModel(zn.Model): ... def __init__(self): ... self.a = zn.Array(zn.var(range(1, 10)), shape=(2, 4)) ... self.constraints = [self.a[0, 0] == 5, zn.allequal(self.a)] >>> model = MyModel() >>> model.solve_satisfy() Solution(a=[[5, 5, 5, 5], [5, 5, 5, 5]]) """ def __init__(self, seq: ZnSequence): super().__init__(_Op_code.allequal, seq) class ndistinct(Operation): """ returns the number of distinct values in ``seq``. Parameters ---------- seq: range, array of var, or sequence (list or tuple) of var sequence which elements of which should be distinct See Also -------- alldifferent ndistinct Returns ------- n: Operation Operation, which calculates the number of distinct values in ``seq`` Examples -------- >>> import zython as zn >>> class MyModel(zn.Model): ... def __init__(self, n): ... self.a = zn.Array(zn.var(range(1, 10)), shape=5) ... self.constraints = [zn.ndistinct(self.a) == n] >>> model = MyModel(3) >>> result = model.solve_satisfy() >>> len(set(result["a"])) 3 """ def __init__(self, seq: ZnSequence): super().__init__(_Op_code.ndistinct, seq) class circuit(Constraint): """ Constrains the elements of ``seq`` to define a circuit where x[i] = j means that j is the successor of i. Examples -------- >>> import zython as zn >>> class MyModel(zn.Model): ... 
def __init__(self): ... self.a = zn.Array(zn.var(range(5)), shape=5) ... self.constraints = [zn.circuit(self.a)] >>> model = MyModel() >>> model.solve_satisfy() Solution(a=[2, 4, 3, 1, 0]) """ def __init__(self, seq: ZnSequence): super().__init__(_Op_code.circuit, seq)
zython
/operations/functions_and_predicates.py
functions_and_predicates.py
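To tie several of these predicates together, here is a small end-to-end sketch that uses only the public API demonstrated in the docstrings above: pick four distinct digits from 1 to 9 that sum to 20, with the first one being the smallest.

```python
import zython as zn


class FourDigits(zn.Model):
    def __init__(self):
        self.digits = zn.Array(zn.var(range(1, 10)), shape=4)
        self.constraints = [
            zn.alldifferent(self.digits),
            zn.sum(self.digits) == 20,
            # the first digit is a lower bound for the rest
            zn.forall(range(1, 4), lambda i: self.digits[0] <= self.digits[i]),
        ]


result = FourDigits().solve_satisfy()
print(result["digits"])
```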
from numbers import Number from typing import Optional, Callable, Union, Type import zython from zython.operations._op_codes import _Op_code from zython.operations.constraint import Constraint def _get_wider_type(left, right): return int class Operation(Constraint): def __init__(self, op, *params, type_=None): super(Operation, self).__init__(op, *params, type_=type_) def __pow__(self, power, modulo=None): return self.pow(self, power, modulo) def __mul__(self, other): return self.mul(self, other) def __rmul__(self, other): return self.mul(other, self) # def __truediv__(self, other): # op = _Operation(_Op_code.truediv, self, other) # op._type = _get_wider_type(self, other) # return op # # def __rtruediv__(self, other): # op = _Operation(_Op_code.mul, other, self) # op._type = _get_wider_type(self, other) # return op def __floordiv__(self, other): return self.floordiv(self, other) def __rfloordiv__(self, other): return self.floordiv(other, self) def __mod__(self, other): return self.mod(self, other) def __rmod__(self, other): return self.mod(other, self) def __add__(self, other): return self.add(self, other) def __radd__(self, other): return self.add(other, self) def __sub__(self, other): return self.sub(self, other) def __rsub__(self, other): return self.sub(other, self) def __eq__(self, other): return Operation(_Op_code.eq, self, other, type_=int) def __ne__(self, other): return Operation(_Op_code.ne, self, other, type_=int) def __lt__(self, other): return Operation(_Op_code.lt, self, other, type_=int) def __gt__(self, other): return Operation(_Op_code.gt, self, other, type_=int) def __le__(self, other): return Operation(_Op_code.le, self, other, type_=int) def __ge__(self, other): return Operation(_Op_code.ge, self, other, type_=int) # below method is used for validation and control of _Operation creation # when you create _Operation as Operation(_Op_code.sum, seq, iter_var, func) # it is easy to forgot the order and number of variables, so it is better to call # Operation.sum which has param names and type hints @staticmethod def add(left, right): return Operation(_Op_code.add, left, right, type_=_get_wider_type(left, right)) @staticmethod def sub(left, right): return Operation(_Op_code.sub, left, right, type_=_get_wider_type(left, right)) @staticmethod def pow(base, power, modulo=None): if modulo is not None: raise ValueError("modulo is not supported") return Operation(_Op_code.pow, base, power, type_=_get_wider_type(base, power)) @staticmethod def mul(left, right): return Operation(_Op_code.mul, left, right, type_=_get_wider_type(left, right)) @staticmethod def floordiv(left, right): _validate_div(left, right) return Operation(_Op_code.floordiv, left, right, type_=_get_wider_type(left, right)) @staticmethod def mod(left, right): _validate_div(left, right) return Operation(_Op_code.mod, left, right, type_=_get_wider_type(left, right)) @staticmethod def size(array: "zython.var_par.array.ArrayMixin", dim: int): if 0 <= dim < array.ndims(): return Operation(_Op_code.size, array, dim, type_=int) raise ValueError(f"Array has 0..{array.ndims()} dimensions, but {dim} were specified") @staticmethod def sum(seq: "zython.var_par.types.ZnSequence", iter_var: Optional["zython.var_par.var.var"] = None, func: Optional[Union["Operation", Callable]] = None, type_: Optional[Type] = None): return Operation(_Op_code.sum_, seq, iter_var, func, type_=type_) @staticmethod def count(seq: "zython.var_par.types.ZnSequence", iter_var: Optional["zython.var_par.var.var"] = None, func: Optional[Union["Operation", 
Callable]] = None, type_: Optional[Type] = None): return Operation(_Op_code.count, seq, iter_var, func, type_=type_) @staticmethod def min(seq: "zython.var_par.types.ZnSequence", iter_var: Optional["zython.var_par.var.var"] = None, func: Optional[Union["Operation", Callable]] = None, type_: Optional[Type] = None): return Operation(_Op_code.min_, seq, iter_var, func, type_=type_) @staticmethod def max(seq: "zython.var_par.types.ZnSequence", iter_var: Optional["zython.var_par.var.var"] = None, func: Optional[Union["Operation", Callable]] = None, type_: Optional[Type] = None): return Operation(_Op_code.max_, seq, iter_var, func, type_=type_) def _validate_div(left, right): if isinstance(right, Number) and right == 0 or getattr(right, "value", 1) == 0: raise ValueError("right part of expression can't be 0")
zython
/operations/operation.py
operation.py
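Two things are worth seeing in isolation: the overloaded operators build `Operation` nodes rather than computing values, and `_validate_div` rejects a literal zero divisor while the expression tree is still being built. This sketch assumes `zn.var` shares these operator overloads, which the docstrings elsewhere imply.

```python
import zython as zn

x = zn.var(range(10))
expr = (x + 3) * 2            # builds an expression tree for the solver
print(type(expr).__name__)    # expected: "Operation"

try:
    _ = x // 0                # caught while building the model, before solving
except ValueError as exc:
    print(exc)                # "right part of expression can't be 0"
```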
import base64 import json import logging from Crypto.Cipher import PKCS1_v1_5 from Crypto.PublicKey import RSA import requests from .helpers import decrypt_response, encrypt_request _LOGGER = logging.getLogger(__name__) class ZyxelT50Modem: def __init__(self, password=None, host='192.168.1.1', username='admin') -> None: self.url = host self.user = username self.password = password self.r = requests.Session() self.r.trust_env = False # ignore proxy settings # we define the AesKey ourselves self.aes_key = b'\x42' * 32 self.enc_aes_key = None self.sessionkey = None self._model = None self._sw_version = None self._unique_id = None def connect(self) -> None: """Set up a Zyxel modem.""" self.enc_aes_key = self.__get_aes_key() try: self.__login() except CannotConnect as exp: _LOGGER.error("Failed to connect to modem") raise exp status = self.get_device_status() device_info = status["DeviceInfo"] if self._unique_id is None: self._unique_id = device_info["SerialNumber"] self._model = device_info["ModelName"] self._sw_version = device_info["SoftwareVersion"] def __get_aes_key(self): # ONCE # get pub key response = self.r.get(f"http://{self.url}/getRSAPublickKey") pubkey_str = response.json()['RSAPublicKey'] # Encrypt the aes key with RSA pubkey of the device pubkey = RSA.import_key(pubkey_str) cipher_rsa = PKCS1_v1_5.new(pubkey) return cipher_rsa.encrypt(base64.b64encode(self.aes_key)) def __login(self): login_data = { "Input_Account": self.user, "Input_Passwd": base64.b64encode(self.password.encode('ascii')).decode('ascii'), "RememberPassword": 0, "SHA512_password": False } enc_request = encrypt_request(self.aes_key, login_data) enc_request['key'] = base64.b64encode(self.enc_aes_key).decode('ascii') response = self.r.post(f"http://{self.url}/UserLogin", json.dumps(enc_request)) decrypted_response = decrypt_response(self.aes_key, response.json()) if decrypted_response is not None: response = json.loads(decrypted_response) self.sessionkey = response['sessionkey'] return 'result' in response and response['result'] == 'ZCFG_SUCCESS' _LOGGER.error("Failed to decrypt response") raise CannotConnect def logout(self): response = self.r.post(f"http://{self.url}/cgi-bin/UserLogout?sessionKey={self.sessionkey}") response = response.json() if 'result' in response and response['result'] == 'ZCFG_SUCCESS': return True else: return False def __get_device_info(self, oid): response = self.r.get(f"http://{self.url}/cgi-bin/DAL?oid={oid}") decrypted_response = decrypt_response(self.aes_key, response.json()) if decrypted_response is not None: json_string = decrypted_response.decode('utf8').replace("'", '"') json_data = json.loads(json_string) return json_data['Object'][0] _LOGGER.error("Failed to get device status") return None def get_device_status(self): result = self.__get_device_info("cardpage_status") if result is not None: return result _LOGGER.error("Failed to get device status") return None def get_connected_devices(self): result = self.__get_device_info("lanhosts") if result is not None: devices = {} for device in result['lanhosts']: devices[device['PhysAddress']] = { "hostName": device['HostName'], "physAddress": device['PhysAddress'], "ipAddress": device['IPAddress'], } return devices _LOGGER.error("Failed to connected devices") return [] class CannotConnect(Exception): """Error to indicate we cannot connect."""
zyxel-t50-modem
/zyxel_t50_modem-0.0.2-py3-none-any.whl/zyxelt50/modem.py
modem.py
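A minimal usage sketch for the class above; the host and credentials are placeholders for your own router.

```python
from zyxelt50.modem import CannotConnect, ZyxelT50Modem

modem = ZyxelT50Modem(password="router-password", host="192.168.1.1")
try:
    modem.connect()
except CannotConnect:
    raise SystemExit("could not log in to the modem")

# get_connected_devices() returns a dict keyed by MAC address.
for mac, device in modem.get_connected_devices().items():
    print(mac, device["hostName"], device["ipAddress"])

modem.logout()
```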
import re import sys # 控制仓数据 # 25(起始位)00(仓位数据)ttttpppppppphhhh(温湿度大气压模块传感器数据)xxxxyyyyzzzzxxxxyyyyzzzzRRRRPPPPYYYYxxxxyyyyzzzz(九轴数据)DDDDcccc(声呐数据)WWWWdddd(水深传感器)0000(奇偶校验位)FFFF(结束位) class base_parser(): def __init__(self): self.re_ = None self.pattern = None def parse(self, sentence): match = self.pattern.match(sentence) if not match: print('unregular sentence') sys.exit() sentence_dict_hex = match.groupdict() print(sentence_dict_hex) sentence_dict_dec = {} for key, value in sentence_dict_hex.items(): sentence_dict_dec[key] = int(value, 16) print(sentence_dict_dec) return sentence_dict_hex, sentence_dict_dec class up_parser(base_parser): def __init__(self): super().__init__() self.re_ = '(?P<起始位>25)' \ '(?P<仓位>00|01)' \ '(?P<温度>([0-9]|[a-f]|[A-F]){4})' \ '(?P<气压>([0-9]|[a-f]|[A-F]){8})' \ '(?P<湿度>([0-9]|[a-f]|[A-F]){4})' \ '(?P<加速度Ax>([0-9]|[a-f]|[A-F]){4})' \ '(?P<加速度Ay>([0-9]|[a-f]|[A-F]){4})' \ '(?P<加速度Az>([0-9]|[a-f]|[A-F]){4})' \ '(?P<角速度Wx>([0-9]|[a-f]|[A-F]){4})' \ '(?P<角速度Wy>([0-9]|[a-f]|[A-F]){4})' \ '(?P<角速度Wz>([0-9]|[a-f]|[A-F]){4})' \ '(?P<角度Roll>([0-9]|[a-f]|[A-F]){4})' \ '(?P<角度Pitch>([0-9]|[a-f]|[A-F]){4})' \ '(?P<角度Yaw>([0-9]|[a-f]|[A-F]){4})' \ '(?P<磁场Hx>([0-9]|[a-f]|[A-F]){4})' \ '(?P<磁场Hy>([0-9]|[a-f]|[A-F]){4})' \ '(?P<磁场Hz>([0-9]|[a-f]|[A-F]){4})' \ '(?P<声呐>([0-9]|[a-f]|[A-F]){8})' \ '(?P<声呐确信度>([0-9]|[a-f]|[A-F]){4})' \ '(?P<水温>([0-9]|[a-f]|[A-F]){4})' \ '(?P<水深>([0-9]|[a-f]|[A-F]){4})' \ '(?P<确认位>([0-9]|[a-f]|[A-F]){2})' \ '(?P<结束位>([0-9]|[a-f]|[A-F]){4})' self.pattern = re.compile(self.re_) # def parse(self, sentence): # match = self.pattern.match(sentence) # if not match: # print('unregular sentence') # sys.exit() # sentence_dict_hex = match.groupdict() # print(sentence_dict_hex) # sentence_dict_dec = {} # for key, value in sentence_dict_hex.items(): # sentence_dict_dec[key] = int(value, 16) # print(sentence_dict_dec) # return sentence_dict_dec # 起始位 前进后退 旋转或侧推 垂直 灯光 云台 传送 机械臂1 机械臂2 机械臂3 机械臂4 机械臂5 机械臂6 预留PWM 模式开关 验证位 结束位 # 0x25 500-2500 500-2500 500-2500 500-2500 500-2500 500-2500 500-2500 500-2500 500-2500 500-2500 500-2500 500-2500 500-2500 0x21 # 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 class down_parser(base_parser): def __init__(self): self.re_ = '(?P<起始位>25)' \ '(?P<前进后退>([0-9]|[a-f]|[A-F]){4})' \ '(?P<旋转或侧推>([0-9]|[a-f]|[A-F]){4})' \ '(?P<垂直>([0-9]|[a-f]|[A-F]){4})' \ '(?P<灯光>([0-9]|[a-f]|[A-F]){4})' \ '(?P<云台>([0-9]|[a-f]|[A-F]){4})' \ '(?P<传送>([0-9]|[a-f]|[A-F]){4})' \ '(?P<机械臂1>([0-9]|[a-f]|[A-F]){4})' \ '(?P<机械臂2>([0-9]|[a-f]|[A-F]){4})' \ '(?P<机械臂3>([0-9]|[a-f]|[A-F]){4})' \ '(?P<机械臂4>([0-9]|[a-f]|[A-F]){4})' \ '(?P<机械臂5>([0-9]|[a-f]|[A-F]){4})' \ '(?P<机械臂6>([0-9]|[a-f]|[A-F]){4})' \ '(?P<预留PWM>([0-9]|[a-f]|[A-F]){4})' \ '(?P<模式开关>([0-9]|[a-f]|[A-F]){2})' \ '(?P<验证位>([0-9]|[a-f]|[A-F]){2})' \ '(?P<结束位>21)' self.pattern = re.compile(self.re_) # def parse(self, sentence): # match = self.pattern.match(sentence) # if not match: # print('unregular sentence') # sys.exit() # sentence_dict_hex = match.groupdict() # print(sentence_dict_hex) # sentence_dict_dec = {} # for key, value in sentence_dict_hex.items(): # sentence_dict_dec[key] = int(value, 16) # print(sentence_dict_dec) if __name__ == '__main__': x = up_parser() x.parse('25001234123456780bcddddaaaacccceeeeffff99990000777755554444ccccaaaaaaaacccc1111dddd00FFFFeeeee') y = down_parser() y.parse('2505DC05DC05DC05DC05DC05DC05DC05DC05DC05DC05DC05DC05DC080021')
zyz-hello-world
/zyz-hello-world-0.0.1.tar.gz/zyz-hello-world-0.0.1/compar/sentence_parser.py
sentence_parser.py
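The `__main__` block above parses a hard-coded frame; the sketch below goes the other way and composes a downlink frame that `down_parser` accepts: 13 PWM channels of 4 hex digits each, a 2-digit mode byte, a 2-digit check byte (whose exact semantics the format comment leaves unspecified), and the `21` terminator. The import path assumes the sdist layout (`compar/sentence_parser.py`).

```python
from compar.sentence_parser import down_parser


def build_down_frame(pwm_values, mode=0x08, check=0x00):
    """Compose a downlink frame from 13 PWM values in the 500-2500 range."""
    assert len(pwm_values) == 13
    body = "".join(f"{value:04X}" for value in pwm_values)
    return f"25{body}{mode:02X}{check:02X}21"


frame = build_down_frame([1500] * 13)  # 1500 == 0x05DC, as in the example above
print(frame)
down_parser().parse(frame)             # prints the hex and decimal dicts
```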
import os import uuid from datetime import timedelta from threading import Lock from flask import Flask, Blueprint,cli from flask.helpers import get_debug_flag, _PackageBoundObject, _endpoint_from_view_func from flask.templating import _default_template_ctx_processor from werkzeug.routing import Rule, Map, MapAdapter,RequestSlash, RequestRedirect, RequestAliasRedirect, _simple_rule_re from werkzeug.urls import url_quote, url_join from werkzeug.datastructures import ImmutableDict from werkzeug._internal import _encode_idna, _get_environ from werkzeug._compat import to_unicode, string_types, wsgi_decoding_dance from werkzeug.exceptions import BadHost, NotFound, MethodNotAllowed # 自定义flask蓝图,给所有路由增加'methods': ['GET', 'POST']参数。不用每个都写 class ZyzBlueprint(Blueprint): def __init__(self, name, import_name, default_methods=None, static_folder=None,static_url_path=None, template_folder=None, url_prefix=None, subdomain=None, url_defaults=None, root_path=None): super().__init__(name, import_name, static_folder=static_folder, static_url_path=static_url_path, template_folder=template_folder, url_prefix=url_prefix, subdomain=subdomain, url_defaults=url_defaults, root_path=root_path) self.default_methods = default_methods # 装饰器 def route(self,rule,**options): # 设置默认请求 methods = options.get('methods') if not methods: options['methods'] = self.default_methods # 给函数名增加版本号,多个版本使用相同函数名时,蓝图存储监听时可以区分。 def decorator(f): endpoint = options.pop("endpoint", f.__name__ + str(options.get('version')).replace('.','_')) # add_url_rule 函数,就是将路由和函数名的对应关系,生成一个flask对象,存在Blueprint类中的deferred_functions类属性中 # rule: 路由 例:/userInfo/get.json # endpoint : 函数名,自定义加上了版本号 例:login_mobile['3_1_2'] # f : 函数 例:<function sheng_portal.backend.app.versions.version_1_3_1.controller.user.user.login_check> # **options : 路由后面带的参数 例{'methods': ['GET', 'POST'], 'version': ['1.3.1']} self.add_url_rule(rule, endpoint, f, **options) return f return decorator class ZyzRule(Rule): def __init__(self, string, version=None, **options): self.version = version super().__init__(string, **options) class ZyzFlask(Flask): url_rule_class = ZyzRule # 默认设置,配置参数 default_config = ImmutableDict({ 'DEBUG': get_debug_flag(), # 启用/禁用调试模式 'TESTING': False, # 启用/禁用测试模式 'PROPAGATE_EXCEPTIONS': None, # 显式地允许或禁用异常的传播。如果没有设置或显式地设置为 None ,当 TESTING 或 DEBUG 为真时,这个值隐式地为 true. 
'PRESERVE_CONTEXT_ON_EXCEPTION': None, # 你同样可以用这个设定来强制启用(允许调试器内省),即使没有调试执行,这对调试生产应用很有用(但风险也很大) 'SECRET_KEY': None, # 密匙 还不知道干嘛用的 'PERMANENT_SESSION_LIFETIME': timedelta(days=31), # 控制session的过期时间 'USE_X_SENDFILE': False, # 启用/禁用 x-sendfile(一种下载文件的工具),打开后,下载不可控,可能有漏洞,需谨慎 'LOGGER_NAME': None, # 日志记录器的名称 'LOGGER_HANDLER_POLICY': 'always', # 可以通过配置这个来组织flask默认记录日志 'SERVER_NAME': None, # 服务器名和端口。需要这个选项来支持子域名 (例如: 'myapp.dev:5000' ) 'APPLICATION_ROOT': None, # 如果应用不占用完整的域名或子域名,这个选项可以被设置为应用所在的路径。这个路径也会用于会话 cookie 的路径值。如果直接使用域名,则留作 None 'SESSION_COOKIE_NAME': 'session', # 会话 cookie 的名称。 'SESSION_COOKIE_DOMAIN': '.mofanghr.com', # 会话 cookie 的域。如果不设置这个值,则 cookie 对 SERVER_NAME 的全部子域名有效 'SESSION_COOKIE_PATH': '/', # 会话 cookie 的路径。如果不设置这个值,且没有给 '/' 设置过,则 cookie 对 APPLICATION_ROOT 下的所有路径有效 'SESSION_COOKIE_HTTPONLY': True, # 控制 cookie 是否应被设置 httponly 的标志, 默认为 True 'SESSION_COOKIE_SECURE': False, # 控制 cookie 是否应被设置安全标志,默认为 False 'SESSION_REFRESH_EACH_REQUEST': True, # 如果被设置为 True (这是默认值),每一个请求 cookie 都会被刷新。如果设置为 False ,只有当 cookie 被修改后才会发送一个 set-cookie 的标头 'MAX_CONTENT_LENGTH': None, # 如果设置为字节数, Flask 会拒绝内容长度大于此值的请求进入,并返回一个 413 状态码 'SEND_FILE_MAX_AGE_DEFAULT': timedelta(hours=12), # 默认缓存控制的最大期限,以秒计, 'TRAP_BAD_REQUEST_ERRORS': False, # 这个设置用于在不同的调试模式情形下调试,返回相同的错误信息 BadRequest 'TRAP_HTTP_EXCEPTIONS': False, # 如果这个值被设置为 True ,Flask不会执行 HTTP 异常的错误处理 'EXPLAIN_TEMPLATE_LOADING': False, # 解释模板加载 'PREFERRED_URL_SCHEME': 'http', # 生成URL的时候如果没有可用的 URL 模式话将使用这个值。默认为 http 'JSON_AS_ASCII': True, # 默认情况下 Flask 使用 ascii 编码来序列化对象。如果这个值被设置为 False , Flask不会将其编码为 ASCII,并且按原样输出,返回它的 unicode 字符串 'JSON_SORT_KEYS': True, # Flask 按照 JSON 对象的键的顺序来序来序列化它。这样做是为了确保键的顺序不会受到字典的哈希种子的影响,从而返回的值每次都是一致的,不会造成无用的额外 HTTP 缓存 'JSONIFY_PRETTYPRINT_REGULAR': True, # 如果这个配置项被 True (默认值), 如果不是 XMLHttpRequest 请求的话(由 X-Requested-With 标头控制) json 字符串的返回值会被漂亮地打印出来。 'JSONIFY_MIMETYPE': 'application/json', 'TEMPLATES_AUTO_RELOAD': None, }) def __init__( self, import_name, static_path=None, static_url_path=None, static_folder='static', template_folder='templates', instance_path=None, instance_relative_config=False, root_path=None ): self.version_dict = {} _PackageBoundObject.__init__( self, import_name, template_folder=template_folder, root_path=root_path ) if static_path is not None: from warnings import warn warn(DeprecationWarning('static_path is now called static_url_path'), stacklevel=2) static_url_path = static_path if static_url_path is not None: self.static_url_path = static_url_path if static_folder is not None: self.static_floder = static_folder if instance_path is None: instance_path = self.auto_find_instance_path() elif not os.path.isabs(instance_path): raise ValueError('If an instance path is provided it must be ' 'absolute. A relative path was given instead.') # 保存实例文件夹的路径。 versionadded:: 0.8 self.instance_path = instance_path # 这种行为就像普通字典,但支持其他方法从文件加载配置文件。 self.config = self.make_config(instance_relative_config) # 准备记录日志的设置 self._logger = None self.logger_name = self.import_name # 注册所有视图函数的字典。钥匙会是:用于生成URL的函数名值是函数对象本身。 # 注册一个视图函数,使用:route装饰器 self.view_functions = {} # 支持现在不推荐的Error处理程序属性。现在将使用 self._error_handlers = {} #: A dictionary of all registered error handlers. The key is ``None`` #: for error handlers active on the application, otherwise the key is #: the name of the blueprint. Each key points to another dictionary #: where the key is the status code of the http exception. The #: special key ``None`` points to a list of tuples where the first item #: is the class for the instance check and the second the error handler #: function. 
#: #: To register a error handler, use the :meth:`errorhandler` #: decorator. self.error_handler_spec = {None: self._error_handlers} #: A list of functions that are called when :meth:`url_for` raises a #: :exc:`~werkzeug.routing.BuildError`. Each function registered here #: is called with `error`, `endpoint` and `values`. If a function #: returns ``None`` or raises a :exc:`BuildError` the next function is #: tried. #: #: .. versionadded:: 0.9 self.url_build_error_handlers = [] self.before_request_funcs = {} self.before_first_request_funcs = [] self.after_request_funcs = {} self.teardown_request_funcs = {} self.teardown_appcontext_funcs = [] self.url_value_preprocessors = {} self.url_default_functions = {} self.template_context_processors = { None: [_default_template_ctx_processor] } self.shell_context_processor = [] self.blueprints = {} self._blueprint_order = [] self.extensions = {} # 使用这个可以在创建类之后改变路由转换器的但是在任何线路连接之前 # from werkzeug.routing import BaseConverter #: class ListConverter(BaseConverter): #: def to_python(self, value): #: return value.split(',') #: def to_url(self, values): #: return ','.join(BaseConverter.to_url(value) #: for value in values) #: #: app = Flask(__name__) #: app.url_map.converters['list'] = ListConverter #: @list.route("/job.json") >>> /list/job.json self.url_map = ZyzMap() # 如果应用程序已经处理至少一个,则在内部跟踪 self._got_first_request = False self._before_request_lock = Lock() if self.has_static_folder: self.add_url_rule(self.static_url_path + '/<path:filename>', endpoint='static', view_func=self.send_static_file) self.cli = cli.AppGroup(self.name) def make_config(self, instance_relative=False): """用于通过flask构造函数创建config属性。从构造函数传递“nstance_relative”参数 flask(名为“instance_relative_config”),并指示是否配置应该与实例路径或 根路径相对应。""" root_path = self.root_path if instance_relative: root_path = self.instance_path return self.config_class(root_path, self.default_config) def create_url_adapter(self, request): if request is not None: # 设置 request_id request.trace_id = str(uuid.uuid4()) # 设置 app version request.version = get_version(request) return self.url_map.bind_to_environ(request.environ, request=request, version_dict=self.version_dict, server_name=self.config['SERVER_NAME']) if self.config['SERVER_NAME'] is not None: return self.url_map.bind( self.config['SERVER_NAME'], script_name=self.config['APPLICATION_ROOT'] or '/', url_scheme=self.config['PREFERRED_URL_SCHEME'] ) def add_url_rule(self, rule, endpoint=None, view_func=None, **options): if endpoint is None: endpoint = _endpoint_from_view_func(view_func) options['endpoint'] = endpoint methods = options.pop('methods', None) if methods is None: methods = getattr(view_func,'methods',None) or ('GET',) methods = set(methods) required_methods = set(getattr(view_func, 'required_methods', ())) provide_automatic_options = getattr(view_func, 'provide_automatic_options', None) if provide_automatic_options is None: if "OPTIONS" not in methods: provide_automatic_options = True required_methods.add('OPTIONS') else: provide_automatic_options = False # 设置 version_dict if self.version_dict.get(rule) is None: self.version_dict[rule] = [] version_list = self.version_dict.get(rule.strip()) # 添加version list version = options.get('version') if version and isinstance(version, list): for item in version: if item not in version_list: version_list.append(item) # 增加回调时 的 methods methods |= required_methods rule = self.url_rule_class(rule, methods=methods, **options) rule.provide_automatic_options = provide_automatic_options self.url_map.add(rule) if view_func is not None: 
old_func = self.view_functions.get(endpoint) if old_func is not None and old_func != view_func: raise AssertionError('View function mapping is overwriting an ' 'existing endpoint function: %s' % endpoint) self.view_functions[endpoint] = view_func class ZyzMap(Map): def bind(self, server_name, script_name=None, subdomain=None, url_scheme='http', default_method='GET', path_info=None, query_args=None, request=None, version_dict=None): server_name = server_name.lower() if self.host_matching: if subdomain is not None: raise RuntimeError('host matching enabled and a ' 'subdomain was provided') elif subdomain is None: subdomain = self.default_subdomain if script_name is None: script_name = '/' try: server_name = _encode_idna(server_name) except UnicodeError: raise BadHost() return ZyzMapAdapter(self,server_name, script_name, subdomain, url_scheme, path_info, default_method, query_args, request, version_dict) def bind_to_environ(self, environ, server_name=None, subdomain=None, request=None, version_dict=None): environ = _get_environ(environ) if 'HTTP_HOST' in environ: wsgi_server_name = environ['HTTP_HOST'] if environ['wsgi.url_scheme'] == 'http' and wsgi_server_name.endswith(':80'): wsgi_server_name = wsgi_server_name[:-3] elif environ['wsgi.url_scheme'] == 'https' and wsgi_server_name.endswith(':443'): wsgi_server_name = wsgi_server_name[:-4] else: wsgi_server_name = environ['SERVER_NAME'] if (environ['wsgi.url_scheme'],environ['SERVER_PORT']) not in (('https','443'),('http','80')): wsgi_server_name += ':' + environ['SERVER_PORT'] wsgi_server_name = wsgi_server_name.lower() if server_name is None: server_name = wsgi_server_name else: server_name = server_name.lower() if subdomain is None and not self.host_matching: cur_server_name = wsgi_server_name.split('.') real_server_name = server_name.split('.') offset = -len(real_server_name) if cur_server_name[offset:] != real_server_name: subdomain = '<invalid>' else: subdomain = '.'.join(filter(None, cur_server_name[:offset])) def _get_wsgi_string(name): val = environ.get(name) if val is not None: return wsgi_decoding_dance(val, self.charset) script_name = _get_wsgi_string('SCRIPT_NAME') path_info = _get_wsgi_string('PATH_INFO') query_args = _get_wsgi_string('QUERY_STRING') return ZyzMap.bind(self, server_name, script_name, subdomain, environ['wsgi.url_scheme'], environ['REQUEST_METHOD'], path_info, query_args=query_args, request=request, version_dict=version_dict) class ZyzMapAdapter(MapAdapter): def __init__(self, map, server_name, script_name, subdomain, url_scheme, path_info, default_method, query_args=None, request = None, version_dict = None): self.request = request self.version_dict = version_dict if version_dict is not None else {} super().__init__(map, server_name, script_name, subdomain, url_scheme, path_info, default_method, query_args) def match(self, path_info=None, method=None, return_rule=False, query_args=None): self.map.update() if path_info is None: path_info = self.path_info else: path_info = to_unicode(path_info, self.map.charset) if query_args is None: query_args = self.query_args method = (method or self.default_method).upper() path = u'%s|%s' % ( self.map.host_matching and self.server_name or self.subdomain, path_info and '/%s' % path_info.lstrip('/') ) have_match_for = set() for rule in self.map.rules: try: rv = rule.match(path) except RequestSlash: raise RequestRedirect(self.make_redirect_url( url_quote(path_info, self.map.charset, safe='/:|+') + '/', query_args)) except RequestAliasRedirect as e: raise 
RequestRedirect(self.make_alias_redirect_url( path, rule.endpoint, e.matched_values, method,query_args)) if rv is None: continue if rule.methods is not None and method not in rule.methods: have_match_for.update(rule.methods) continue # determine the requested version version = get_version(self.request) if self.request and version: if not isinstance(rule.version, list) or not rule.version: rule.version = list() version_list = self.version_dict.get(rule.rule) if len(rule.version) == 0 \ and version_list is not None \ and version in version_list: continue elif len(rule.version) != 0 and version not in rule.version: continue self.request.rule_version = rule.version if self.map.redirect_defaults: redirect_url = self.get_default_redirect(rule, method, rv, query_args) if redirect_url is not None: if isinstance(rule.redirect_to,string_types): def _handle_match(match): value = rv[match.group(1)] return rule._converters[match.group(1)].to_url(value) redirect_url =_simple_rule_re.sub(_handle_match,rule.redirect_to) else: redirect_url = rule.redirect_to(self, **rv) raise RequestRedirect(str(url_join('%s://%s%s%s' % ( self.url_scheme or 'http', self.subdomain and self.subdomain + '.' or '', self.server_name, self.script_name ),redirect_url))) if return_rule: return rule, rv else: return rule.endpoint, rv if have_match_for: raise MethodNotAllowed(valid_methods=list(have_match_for)) raise NotFound() def get_version(request): try: return request.version except AttributeError: pass return request.args.get('version')
zyzFlask
/zyzFlask-0.2.0.tar.gz/zyzFlask-0.2.0/core/zyz_flask.py
zyz_flask.py
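A hedged sketch of how the pieces above fit together: a `ZyzFlask` app plus a `ZyzBlueprint` whose routes default to GET and POST, with two handlers registered for the same rule under different version lists. The import path assumes the sdist layout (`core/zyz_flask.py`); the `version` query parameter, read by `get_version`, decides which handler matches.

```python
from core.zyz_flask import ZyzBlueprint, ZyzFlask

app = ZyzFlask(__name__)
api = ZyzBlueprint("api", __name__, default_methods=["GET", "POST"])


@api.route("/userInfo/get.json", version=["1.3.1"])
def get_user_info_v131():
    return "handled by 1.3.1"


@api.route("/userInfo/get.json", version=["1.3.2"])
def get_user_info_v132():
    return "handled by 1.3.2"


app.register_blueprint(api)
# GET /userInfo/get.json?version=1.3.2 is dispatched to get_user_info_v132.
# In real use, the version suffix added to endpoint names lets the *same*
# function name live in separate per-version modules instead.
```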
import time import json import redis import hashlib import requests import traceback import urllib.parse from functools import wraps from flask import make_response, request from core.log import logger from core.utils import get_randoms, AesCrypt, get_hashlib from core.common import get_trace_id, is_none, get_version from core.global_settings import * from core.exceptions import BusinessException from core.check_param import build_check_rule, CheckParam from configs import AES_KEY, REDIS_HOST, REDIS_PORT, REDIS_PASSWORD, AUTH_COOKIE_KEY, SSO_VERSION, \ CALL_SYSTEM_ID check_param = CheckParam() class BaseError(object): def __init__(self): pass @staticmethod def not_login(): return return_data(code=LOGIN_FAIL, msg='用户未登录') @staticmethod def not_local_login(): return return_data(code=OTHER_LOGIN_FAIL, msg='您的帐号已在其他设备上登录\n请重新登录') @staticmethod def system_exception(): return return_data(code=REQUEST_FAIL, msg='后台系统异常') @staticmethod def request_params_incorrect(): return return_data(code=REQUEST_FAIL, msg='请求参数不正确') @staticmethod def common_feild_null(feild): raise BusinessException(code=-99, msg=feild + '不能为空') @staticmethod def common_feild_wrong(feild): raise BusinessException(code=-99, msg=feild + '错误') class Redis(object): def __init__(self,db=None): self.db = db if not hasattr(Redis,'REDIS_POOL_CACHE'): self.getRedisCoon() self.get_server() def getRedisCoon(self): REDIS_ = redis.ConnectionPool(host=REDIS_HOST, port=REDIS_PORT, password=REDIS_PASSWORD) REDIS_1 = redis.ConnectionPool(host=REDIS_HOST, port=REDIS_PORT, password=REDIS_PASSWORD, db=1) REDIS_2 = redis.ConnectionPool(host=REDIS_HOST, port=REDIS_PORT, password=REDIS_PASSWORD, db=2) REDIS_3 = redis.ConnectionPool(host=REDIS_HOST, port=REDIS_PORT, password=REDIS_PASSWORD, db=3) REDIS_4 = redis.ConnectionPool(host=REDIS_HOST, port=REDIS_PORT, password=REDIS_PASSWORD, db=4) self.REDIS_POOL_CACHE = { '0': REDIS_, '1': REDIS_1, '2': REDIS_2, '3': REDIS_3, '4': REDIS_4 } def __get_pool(self): if self.db is None: self.db = '0' return self.REDIS_POOL_CACHE.get(self.db) def get_server(self): self.conn = redis.Redis(connection_pool=self.__get_pool()) return self.conn def set_variable(self, name, value, ex=None, px=None, nx=False, xx=False): # 设置普通键值对 # EX — seconds – 设置键key的过期时间,单位时秒 (datetime.timedelta 格式) # PX — milliseconds – 设置键key的过期时间,单位时毫秒 (datetime.timedelta 格式) # NX – 只有键key不存在的时候才会设置key的值 (布尔值) # XX – 只有键key存在的时候才会设置key的值 (布尔值) self.conn.set(name, value, ex, px, nx, xx) def get_variable(self, name): # 获取普通键值对的值 return self.conn.get(name) def delete_variable(self, *names): # 删除指定的一个或多个键 根据`names` self.conn.delete(*names) def get_hget(self, name, key): return self.conn.hget(name, key) def get_hgetall(self, name): return self.conn.hgetall(name) def set_hset(self, name, key, value): self.conn.hset(name, key, value) def set_rpush(self, name, value): # 列表结尾中增加值 self.conn.rpush(name, value) def get_lpop(self, name): # 弹出列表的第一个值(非阻塞) self.conn.lpop(name) def set_blpop(self, *name, timeout=0): # 弹出传入所有列表的第一个有值的(阻塞),可以设置阻塞超时时间 self.conn.blpop(*name, timeout) def get_llen(self, name): # 返回列表的长度(列表不存在时返回0) self.conn.llen(name) def set_sadd(self, name, value): # 集合中增加元素 self.conn.sadd(name, value) def delete_srem(self, name, *value): # 删除集合中的一个或多个元素 self.conn.srem(name, *value) def spop(self, name): # 随机移除集合中的一个元素并返回 return self.get_server().spop(name) def smembers(self, name): return self.conn.smembers(name) def sismember(self, name, value): # 判断value是否是集合name中的元素。是返回1 ,不是返回0 return self.conn.sismember(name,value) def expire(self,name,time): # 
设置key的过期时间 self.conn.expire(name,time) class LoginAndReturn(object): def __init__(self): pass def login_required(f): @wraps(f) # 不改变使用装饰器原有函数的结构(如__name__, __doc__) def decorated_function(*args, **kw): #### 所有注释都是进行单点登录操作的 !!!!!!! auth_token = request.cookies.get('auth_token') refresh_time = request.cookies.get('refresh_time') user_id = get_cookie_info().get('user_id') # 这个个方法里存在单点登录状态 sso_code = get_cookie_info().get('sso_code') if is_none(auth_token) or is_none(refresh_time) or is_none(user_id): return BaseError.not_login() # 去redis中取 组装cookie时存的随机数 _redis = Redis() _sso_code = _redis.get_hget("app_sso_code", user_id) # 校验cookie解析出来的随机数 和存在redis中的随机数是否一致 if not is_none(_sso_code) and not is_none(sso_code) and sso_code != _sso_code: logger.info("账号在其他设备登陆了%s"% user_id) return BaseError.not_local_login() # 解密auth_token中的sign sign = aes_decrypt(auth_token) # 利用user_id + '#$%' + redis中随机数 + '#$%' + md5加密后的字符串 组装_sign _sign = hashlib.sha1(AUTH_COOKIE_KEY + user_id + refresh_time + sso_code).hexdigest() if sign == _sign: return f(*args, **kw) else: return BaseError.not_login() return decorated_function # 制作response并返回的函数,包括制作response的请求头和请求体 # login_data : 登录操作时必传参数,必须包括user_id,其余可以包括想带入cookie中的参数 格式{“user_id”:“12345”} def return_data(code=200, data=None, msg=u'成功', login_data=None): data = {} if data is None else data data_json = json.dumps({'traceID': get_trace_id(), 'code': code, 'msg': msg, 'data': data}) response = make_response(data_json, 200) response.headers['Content-Type'] = 'application/json' if request.headers.get('Origin') in MOBILE_ORIGIN_URL: response.headers['Access-Control-Allow-Origin'] = request.headers.get('Origin') else: response.headers['Access-Control-Allow-Origin'] = 'https://i.mofanghr.com' response.headers['Access-Control-Allow-Methods'] = 'PUT,GET,POST,DELETE' response.headers['Access-Control-Allow-Credentials'] = 'true' response.headers['Access-Control-Allow-Headers'] = "Referer,Accept,Origin,User-Agent" create_auth_cookie(response, login_data) return response def create_auth_cookie(response, login_data): # 进场获取缓存cookie中的信息 auth_token = request.cookies.get('auth_token') refresh_time = request.cookies.get('refresh_time') # refresh_time = 1482222171524 cookie_info = request.cookies.get('cookie_info') # 设置cookie过期时间点, time.time() + 60 表示一分钟后 outdate = time.time() + 60 * 60 * 24 * 30 # 记录登录态三天 _redis = Redis() #login 如果是登录操作,cookie中所有信息重新生成 if not is_none(login_data) and not is_none(login_data.get('user_id')): user_id = login_data.get('user_id') sso_code = "vJjPtawUC8" # 如果当前版本不设置单点登录,则使用固定随机码 if get_version() in SSO_VERSION: # 如果版本设置单点登录,随机生成10位随机数,当做单机唯一登录码,存在redis中方便对比 # 只要不清除登录态,单点登录则不会触发 sso_code = get_randoms(10) _redis.set_hset("app_sso_code", user_id, sso_code) # 产生新的refresh_time 和新的auth_token refresh_time = str(int(round(time.time() * 1000))) sign = get_hashlib(AUTH_COOKIE_KEY + user_id + refresh_time + sso_code) auth_token = aes_encrypt(sign) login_data['sso_code'] = sso_code cookie_info = aes_encrypt(json.dumps(login_data)) #not login 如果不是登录操作,并且cookie中auth_token和refresh_time存在 if not is_none(auth_token) and not is_none(refresh_time): now_time = int(round(time.time() * 1000)) differ_minuts = (now_time - int(refresh_time)) / (60*1000) if differ_minuts >= 30 and is_none(login_data): user_id = get_cookie_info().get('user_id') if not is_none(user_id): refresh_time = str(int(round(time.time() * 1000))) sso_code = _redis.get_hget("app_sso_code", user_id) # 获取单点登录码 sign = get_hashlib(AUTH_COOKIE_KEY + user_id + refresh_time + sso_code) auth_token = aes_encrypt(sign) 
response.set_cookie('auth_token', auth_token, path='/', domain='.mofanghr.com', expires=outdate) response.set_cookie('refresh_time', str(refresh_time), path='/', domain='.mofanghr.com', expires=outdate) response.set_cookie('cookie_info', cookie_info, path='/', domain='.mofanghr.com', expires=outdate) return response def request_check(func): @wraps(func) def decorator(*args, **kw): # 校验参数 try: check_rule = build_check_rule(str(request.url_rule),str(request.rule_version), list(request.url_rule.methods & set(METHODS))) check_func = check_param.get_check_rules().get(check_rule) if check_func: check_func(*args, **kw) except BusinessException as e: if not is_none(e.func): return e.func elif not is_none(e.code) and not is_none(e.msg): business_exception_log(e) return return_data(code=e.code, msg=e.msg) # 监听抛出的异常 try: if request.trace_id is not None and request.full_path is not None: logger.info('trace_id is:' + request.trace_id + ' request path:' + request.full_path) return func(*args, **kw) except BusinessException as e: if e.func is not None: return e.func() elif e.code is not None and e.msg is not None: business_exception_log(e) if e.code == SYSTEM_CODE_404 or e.code == SYSTEM_CODE_503: return return_data(code=e.code, msg='很抱歉服务器异常,请您稍后再试') else: return return_data(code=e.code, msg=e.msg) else: return request_fail() except Exception: return request_fail() return decorator # 使用AES算法对字符串进行加密 def aes_encrypt(text): aes_crypt = AesCrypt(AES_KEY) # 初始化密钥 encrypt_text = aes_crypt.encrypt(text) # 加密字符串 return encrypt_text # 使用AES算法对字符串进行解密 def aes_decrypt(text): aes_crypt = AesCrypt(AES_KEY) # 初始化密钥 decrypt_text = aes_crypt.decrypt(text) # 解密成字符串 return decrypt_text # 获取并解析cookie_info def get_cookie_info(): req_cookie = request.cookies.get('cookie_info') if req_cookie is not None: try: aes_crypt = AesCrypt(AES_KEY) # 初始化密钥 aes_crypt_cookie = aes_crypt.decrypt(req_cookie) req_cookie = json.loads(aes_crypt_cookie) return req_cookie except: return {} else: return {} def business_exception_log(e): if not is_none(request.trace_id) and not is_none(request.full_path): logger.error('BusinessException, code: %s, msg: %s trace_id: %s request path: %s' % (e.code, e.msg, request.trace_id, request.full_path)) else: logger.error('BusinessException, code: %s, msg: %s' % (e.code, e.msg)) def request_fail(func=BaseError.system_exception): if not is_none(request.trace_id): logger.error('request fail trace id is:' + str(request.trace_id)) logger.error(traceback.format_exc()) return func() # 工厂模式,根据不同的后端项目域名生成不同的项目对象,传入request_api中组成接口 # 针对不同的后端服务项目,实例化后生成不同的service对象 # 有三个属性 service.url 后端项目的域名 || "http://user.service.mofanghr.com/" # service.params 项目通用普通参数,放在params中 || {“userID”:"12345"} # service.common_params 项目通用公共参数,拼在url问号?后面 || {“userID”:"12345"} class Service_api(object): def __init__(self,url,params=None,common_params=None): self.url = url self.params = params self.common_params = common_params def __str__(self): return self.url # 工厂模式,根据传如的不同项目加上不同的后端接口生成不同的接口函数,请求接口并处理返回值 # base_prj 针对不同的后端服务项目,实例化后的service对象, # 有三个属性 base_prj.url 后端项目的域名 || "http://user.service.mofanghr.com/" # base_prj.params 项目通用普通参数,放在params中 || {“userID”:"12345"} # base_prj.common_params 项目通用公共参数,拼在url问号?后面 || {“userID”:"12345"} # # fixUrl 接口的后缀url,拼在项目域名后,形成完整的访问接口 # baseParams 如果同一个接口需要增加相同的参数,可以放在baseParams中,增加拓展性 || {‘reqSource’:‘Mf1.0’} class Requests_api(object): def __init__(self, base_prj, fixUrl, baseParams=None): self.base_prj = base_prj self.url = base_prj.url + fixUrl self.baseParams = baseParams # 执行请求后端的函数,get请求 # fixUrl 
同一个项目下的不同接口后缀 || inner/careerObjective/get.json # params 访问携带参数 || {“userID”:"12345"} def implement_get(self, params, **kwargs): self.url_add_common_param() self.url_add_business_param(params) logger.info(self.url) resp = requests.get(self.url, **kwargs) if resp.status_code == 200: ret_data = resp.json() else: raise BusinessException(code=resp.status_code, msg=resp.text, url=resp.url) # 如果请求成功,但是后端返回的code不是200,则记录日志 if 'code' not in ret_data or ret_data.get('code') != 200: logger.error( 'api_return_error, code: %s, msg: %s, url: %s' % (ret_data.get('code'), ret_data.get('msg'), self.url)) return ret_data # 执行请求后端的函数,post请求 # headers 请求头,如果有特殊的请求头要求,可以使用||{'Content-Type': 'application/json;charset=utf-8'} def implement_post(self, params, headers=None,**kwargs): self.url_add_common_param() params = {'params':json.dumps(params)} resp = requests.post(self.url, data=params, headers=headers, **kwargs) logger.info(self.url) if resp.status_code == 200: ret_data = resp.json() else: raise BusinessException(code=resp.status_code, msg=resp.text, url=resp.url) if 'code' not in ret_data or ret_data.get('code') != 200: logger.error( 'api_return_error, code: %s, msg: %s, url: %s' % (ret_data.get('code'), ret_data.get('msg'), self.url)) return ret_data # 格式化params,并组装URL的函数,将参数值转化为url编码拼接到self.url后面 # self.url http://user.service.mofanghr.com/inner/crm/getSessionAndJobList.json?params=%22%3a+%22jobStandardCardID%24% def url_add_business_param(self, params): # 如果存在需要整个项目传的参数,则增加进params中 if not is_none(self.base_prj.params): for _service_key in self.base_prj.params: params[_service_key] =self.base_prj.params.get(_service_key) # 如果存在需要整个接口传的参数,则增加进params中 if not is_none(self.baseParams): for _key in self.baseParams: params[_key] = self.baseParams.get(_key) self.url = self.url + '&params=' + urllib.parse.quote_plus(json.dumps(params)) # 给URL增加公共参数,所有接口都会有的参数 # self.base_prj.common_params 个性化公共参数,个别接口可以根据需求自行添加 || {“serviceName”:“send”} def url_add_common_param(self): self.url = self.url + '?traceID=' + get_trace_id() + '&callSystemID=' + str(CALL_SYSTEM_ID) if not is_none(self.base_prj.common_params): for key,value in self.base_prj.common_params.items(): self.url = self.url + "&" + str(key) + "=" + str(value) def __str__(self): return self.url # 例子 # SEARCH_API_URL = Service_api("http://search.service.mofanghr.com/", common_params={"callSystemID":str(CALL_SYSTEM_ID)}) # 每个service实例化一个 # job_search = Requests_api(SEARCH_API_URL,"inner/all/job/search.json") # 每个接口实例化一个 # # result = job_search.implement_get({"userID":"12345"}) # 前端函数中使用
zyzFlask
/zyzFlask-0.2.0.tar.gz/zyzFlask-0.2.0/core/core.py
core.py
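Following the core.py module above, here is a hedged usage sketch for its Service_api / Requests_api helpers, expanding the example comment at the end of that file. The service URL, endpoint and userID value come from that comment; the call has to run inside a Flask request (for example in a view decorated with request_check), because the helpers read the request-scoped trace id.

```python
from configs import CALL_SYSTEM_ID
from core.core import Service_api, Requests_api

# One Service_api per backend project: base URL plus parameters shared by
# every endpoint of that project (common_params are appended to the URL).
SEARCH_API_URL = Service_api(
    "http://search.service.mofanghr.com/",
    common_params={"callSystemID": str(CALL_SYSTEM_ID)},
)

# One Requests_api per endpoint of the project.
job_search = Requests_api(SEARCH_API_URL, "inner/all/job/search.json")

def search_jobs():
    # implement_get appends traceID/callSystemID, JSON-encodes the business
    # parameters into the `params` query argument and returns the decoded body.
    result = job_search.implement_get({"userID": "12345"})
    return result.get("data")
```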
from core.core import BaseError from core.core import return_data class Error(BaseError): @staticmethod def mobile_null_error(): return return_data(code=-100, msg=u'手机号不能为空') @staticmethod def password_null_error(): return return_data(code=-101, msg=u'密码不能为空') @staticmethod def account_status_close(): return return_data(code=-102, data={'phone':'01056216855'}, msg=u'抱歉,您的账号已经被冻结\n如有疑问请联系客服010-56216855') @staticmethod def account_not_exist(): return return_data(code=-103, msg=u'账号不存在') @staticmethod def invalid_verify_code(): return return_data(code=-104, msg=u'验证码错误') @staticmethod def invalid_account_password(): return return_data(code=-105, msg=u'账号不存在或密码错误') @staticmethod def verify_code_null(): return return_data(code=-106, msg=u'验证码不能为空') @staticmethod def user_info_null(): return return_data(code=-107, msg=u'用户信息不存在') @staticmethod def user_update_fail(): return return_data(code=-108, msg=u'更新用户资料失败') @staticmethod def verify_code_app_key_null(): return return_data(code=-109, msg=u'验证码appkey不能为空') @staticmethod def verify_code_verify_type_null(): return return_data(code=-110, msg=u'验证码verifyType不能为空') @staticmethod def verify_code_sms_limit(): return return_data(code=-111, msg=u'短信验证码请求超过上限,请使用语音验证码') @staticmethod def change_identity_fail(): return return_data(code=-112, msg=u'用户身份选择失败') @staticmethod def reset_password_fail(): return return_data(code=-113, msg=u'重置密码失败') @staticmethod def account_null(): return return_data(code=-114, msg=u'账号不能为空') @staticmethod def verify_code_limit(): return return_data(code=-115, msg=u'验证码请求超过上限,请明天再试') @staticmethod def password_null(): return return_data(code=-116, msg=u'您的账号没有设置密码\n请使用验证码进入') @staticmethod def update_company_permission(): return return_data(code=-117, msg=u'您没有修改公司权限') @staticmethod def company_no_repetition_binding(): return return_data(code=-118, msg=u'该用户已绑定其它公司\n不能再绑定') @staticmethod def verify_code_past_due(): return return_data(code=-119, msg=u'验证码错误') @staticmethod def verify_code_voice_limit(): return return_data(code=-120, msg=u'语音验证码请求超过6次') @staticmethod def save_fail(): return return_data(code=-121, msg=u'保存信息失败') @staticmethod def get_fail(): return return_data(code=-122, msg=u'获取信息失败') @staticmethod def upload_fail(): return return_data(code=-123, msg=u'上传失败') @staticmethod def send_job_fail(): return return_data(code=-124, msg=u'很遗憾,贵公司今天已经发布20个职位,请明天再来吧!') @staticmethod def enterprise_send_job_fail(): return return_data(code=-125, msg=u'您的账号尚未认证,认证后才能发布更多职位喔!') @staticmethod def old_password_fail(): return return_data(code=-126, msg=u'旧密码错误') @staticmethod def old_password_null(): return return_data(code=-127, msg=u'原密码不能为空') @staticmethod def password_type_error(): return return_data(code=-128, msg=u'修改密码类型不正确') @staticmethod def resume_not_full(): return return_data(code=-130, msg=u'简历信息不完整') @staticmethod def reserved_repeated(): return return_data(code=-131, msg=u'你已申请过,不能重复申请') @staticmethod def no_point(): return return_data(code=-132, msg=u'抱歉,账户余额不足\n可赚取积分或联系我们(010-56216855)充值M币') @staticmethod def limit_call_phone(): return return_data(code=-133, msg=u'拨打电话超过上限') @staticmethod def limit_im_unautherized_count(): return return_data(code=-135, msg=u'很抱歉,认证后才能与更多候选人主动发起沟通') @staticmethod def can_not_cancel_hr_auth(): return return_data(code=-136, msg=u'您的认证已通过\n不可取消认证') #所有时段都已预约已满 @staticmethod def reserved_full(): return return_data(code=-139, msg=u'该职位预约已满\n去看看更多好职位吧') #报错时,和可预约时段为0时 @staticmethod def session_overdue(): return return_data(code=-139, msg=u'非常遗憾!\n该职位没有可预约的场次\n去看看更多好职位吧') # 过期,还有其他场次可预约 
@staticmethod def has_effective_session(): return return_data(code=-140, msg=u'该时段面试已经开始\n换个时段吧') # 已满,还有其他场次可预约 @staticmethod def has_effective_session2(): return return_data(code=-141, msg=u'该时段面试预约已满\n换个时段吧') # 已暂停,还有其他场次可预约 @staticmethod def no_session_reserve2(): return return_data(code=-172, msg=u'该时段面试已暂停\n换个时段吧') # 已取消,还有其他场次可预约 @staticmethod def cancel_has_session(): return return_data(code=-171, msg=u'该时段面试已取消\n换个时段吧') # 没有可预约的时段 @staticmethod def no_session_reserve(): return return_data(code=-170, msg=u'没有可预约的时段\n去看看更多好职位吧') @staticmethod def verify_code_risk_limit(): return return_data(code=-142, data={'phone':'01056216855'}, msg=u'验证码请求超出上限,如您需要紧急登录请联系客服(010-56216855)') @staticmethod def invalid_image_verify_code(): return return_data(code=-143, msg=u'请先输入正确的图片验证码') @staticmethod def update_mobile_error(): return return_data(code=-144, msg=u'手机号码已经存在') @staticmethod def mobile_exist_error(): return return_data(code=-145, msg=u'手机号码已经存在') @staticmethod def created_job_error(): return return_data(code=-146, msg=u'请勿发布重复职位') @staticmethod def backend_system_error(): return return_data(code=-147, msg=u'后台系统异常') @staticmethod def verify_code_limit2(): return return_data(code=-148, data={'phone':'01056216855'}, msg=u'验证码请求超过上限,请明天尝试\n如需紧急修改手机号码,请联系客服(010-56216855)') @staticmethod def verify_code_risk_limit2(): return return_data(code=-149, data={'phone': '01056216855'}, msg=u'验证码请求超过上限,请明天尝试\n如需紧急修改手机号码,请联系客服(010-56216855)') @staticmethod def verify_code_risk_limit3(): return return_data(code=-150, data={'phone': '01056216855'}, msg=u'验证码请求超过上限,请明天尝试\n如需紧急修改密码,请联系客服(010-56216855)') @staticmethod def invalid_uuid_code(): return return_data(code=-151, msg=u'二维码过期') @staticmethod def get_resume_error(): return return_data(code=-152, msg=u'获取简历失败') @staticmethod def invalid_uuid_key(): return return_data(code=-153, msg=u'校验参数不合法') @staticmethod def get_question_error(): return return_data(code=-154, msg=u'获取答题失败') @staticmethod def get_company_error(): return return_data(code=-155, msg=u'获取公司失败') @staticmethod def delete_resume_error(): return return_data(code=-156, msg=u'删除简历失败') @staticmethod def send_email_error(): return return_data(code=-157, msg=u'发送邮件失败') @staticmethod def no_flow_ok(): return return_data(code=-158, msg=u'没有需要确认的面试') @staticmethod def get_company_list_error(): return return_data(code=-159, msg=u'获取公司列表信息失败') @staticmethod def get_advisor_seror(): return return_data(code=-160, msg=u'获取顾问信息失败') @staticmethod def get_postion_seror(): return return_data(code=-161, msg=u'获取职位信息失败') @staticmethod def no_career_Area(): return return_data(code=-163, msg=u'求职期望城市为空') @staticmethod def no_gender(): return return_data(code=-164, msg=u'性别为空') @staticmethod def get_survey_answer_error(): return return_data(code=-165, msg=u'获取答题失败') @staticmethod def open_red_packet_error(): return return_data(code=-166, msg=u'打开红包失败') @staticmethod def user_payment_error(): return return_data(code=-167, msg=u'提现失败') @staticmethod def user_wechat_mapping_error(): return return_data(code=-167, msg=u'微信绑定失败') @staticmethod def withdraw_filed_fail(): return return_data(code=-167, msg=u'提现失败') @staticmethod def withdraw_filed_not_enouth(): return return_data(code=-160, msg=u'余额小于20,不可提现,继续努力吧!') @staticmethod def create_resume_error(): return return_data(code=-168, msg=u'创建简历失败') @staticmethod def upload_number_error(): return return_data(code=-169, msg=u'上传简历数量超过限制')
zyzFlask
/zyzFlask-0.2.0.tar.gz/zyzFlask-0.2.0/core/error.py
error.py
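A hedged sketch of how the canned errors above are typically returned from a view, together with the login_required and request_check decorators from core.core shown earlier. The Flask app, route and parameter name are illustrative, and the sketch assumes the request middleware that sets request.trace_id / request.rule_version, which the decorators expect.

```python
from flask import Flask, request
from core.core import login_required, request_check, return_data
from core.error import Error

app = Flask(__name__)

@app.route("/user/mobile/update.json", methods=["POST"])
@request_check          # runs registered parameter checks, catches BusinessException
@login_required         # validates the auth_token / sso_code cookies
def update_mobile():
    mobile = request.values.get("mobile")
    if not mobile:
        return Error.mobile_null_error()          # code -100, canned message
    return return_data(data={"mobile": mobile})   # code 200 envelope
```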
class CheckParam(object): def __init__(self): self.check_rules = dict() def register_check_param(self, check_param=None, url_prefix=''): if not isinstance(check_param, SubCheckParam): raise RuntimeError('check_param is not a SubCheckParam object. ' 'type: %s' % type(check_param)) check_rules = check_param.get_check_rules() for check_rule in check_rules: url = check_rule.url version = check_rule.version methods = check_rule.methods f = check_rule.f self.check_rules[str({'url':url_prefix + url, 'version':sorted(version), 'methods': sorted(methods)})] = f def get_check_rules(self): return self.check_rules class SubCheckParam(object): def __init__(self,default_methods): self.check_rules = list() self.default_methods = default_methods def check(self, url=None, version=None, methods=None): methods = methods if methods is not None else self.default_methods def decorator(f): if not url: raise ValueError('A non-empty url is required.') if not methods: raise ValueError('A non-empty method is required.') self.__add_check_rule(url, version, methods, f) return f return decorator def __add_check_rule(self, url, version, methods, f): if version and isinstance(version, list): version = sorted(version) else: version = [] self.check_rules.append(CheckRule(url=url, version=version, methods=methods, f=f)) def get_check_rules(self): return self.check_rules class CheckRule(object): def __init__(self,url, version, methods, f): self.url = url self.version = version self.methods = methods self.f = f def build_check_rule(url=None, version=None, methods=None): if not url: raise ValueError('A non-empty url is required.') if not methods: raise ValueError('A non-empty method is required.') if version and isinstance(version, list): version = sorted(version) else: version = [] return str({'url': url,'version': version,'methods': sorted(methods)})
zyzFlask
/zyzFlask-0.2.0.tar.gz/zyzFlask-0.2.0/core/check_param.py
check_param.py
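A minimal sketch of registering a parameter check with the classes above; the URL, version tag and the imported BusinessException are illustrative. core.core builds the same lookup key with build_check_rule(request.url_rule, request.rule_version, methods) and calls the matching function before the view runs.

```python
from flask import request
from core.check_param import CheckParam, SubCheckParam
from core.exceptions import BusinessException

check_param = CheckParam()
user_checks = SubCheckParam(default_methods=["POST"])

@user_checks.check(url="/user/login.json", version=["1.0"])
def check_login(*args, **kwargs):
    # Raise BusinessException for a missing parameter; request_check() in
    # core.core turns it into an error response via return_data().
    if not request.values.get("mobile"):
        raise BusinessException(code=-100, msg="mobile is required")

# Register the sub-rules; the optional url_prefix is prepended to each URL.
check_param.register_check_param(check_param=user_checks, url_prefix="")
```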
import os import re import time import fcntl import shutil import logging.config from configs import LOGGING_PATH from stat import ST_MTIME from logging import FileHandler, StreamHandler from logging.handlers import TimedRotatingFileHandler, RotatingFileHandler class StreamHandler_MP(StreamHandler): """ 一个处理程序类,它将日志记录适当地格式化,写入流。用于多进程 """ def emit(self, record): """ 发出一个记录。首先seek(写文件流时控制光标的)到文件的最后,以便多进程登录到同一个文件。 """ try: if hasattr(self.stream, "seek"): self.stream.seek(0, 2) except IOError as e: pass StreamHandler.emit(self, record) class FileHandler_MP(FileHandler,StreamHandler_MP): """ 为多进程写入格式化日志记录到磁盘文件的处理程序类。 """ def emit(self, record): """ 发出一条记录。 如果因为在构造函数中指定了延迟而未打开流,则在调用超类发出之前打开它。 """ if self.stream is None: self.stream = self._open() StreamHandler_MP.emit(self,record) class RotatingFileHandler_MP(RotatingFileHandler,FileHandler_MP): """ 处理程序,用于记录一组文件,当当前文件达到一定大小时,该文件从一个文件切换到下一个文件。 基于logging.RotatingFileHandler文件处理程序,用于多进程的修正 """ _lock_dir = '.lock' if os.path.exists(_lock_dir): pass else: os.mkdir(_lock_dir) def doRollover(self): """ 做一个翻转,如__init__()中所描述的,对于多进程,我们使用深拷贝(shutil.copy)而不是重命名。 """ self.stream.close() if self.backupCount > 0: for i in range(self.backupCount - 1, 0, -1): sfn = "%s.%d" % (self.baseFilename, i) dfn = "%s.%d" % (self.baseFilename, i + 1) if os.path.exists(sfn): if os.path.exists(dfn): os.remove(dfn) shutil.copy(sfn, dfn) dfn = self.baseFilename + ".1" if os.path.exists(dfn): os.remove(dfn) if os.path.exists(self.baseFilename): shutil.copy(self.baseFilename, dfn) self.mode = "w" self.stream = self._open() def emit(self, record): """ 发送一条记录, 将记录输出到文件中,如doRollover().对于多进程,我们使用文件锁。还有更好的方法吗? """ try: if self.shouldRollover(record): self.doRollover() FileLock = self._lock_dir + '/' + os.path.basename(self.baseFilename) + '.' 
+ record.levelName f = open(FileLock, "w+") fcntl.flock(f.fileno(), fcntl.LOCK_EX) FileHandler_MP.emit(self, record) fcntl.flock(f.fileno(), fcntl.LOCK_UN) f.close() except (KeyboardInterrupt, SystemExit): raise except: self.handleError(record) class TimedRoeatingFileHandler_MP(TimedRotatingFileHandler, FileHandler_MP): """ 处理程序,用于logging文件,以一定的时间间隔旋转日志文件。 如果 backupCount > 0,则在进行翻转时,保持不超过backupCount个数的文件。 最老的被删除。 """ _lock_dir = '.lock' if os.path.exists(_lock_dir): pass else: os.mkdir(_lock_dir) def __init__(self, filename, when='h', interval=1, backupCount=0, encoding=None, delay=0, utc=0): FileHandler_MP.__init__(self, filename, 'a', encoding, delay) self.encoding = encoding self.when = when self.backupCount = backupCount self.utc = utc # 计算实际更新间隔,这只是更新之间的秒数。还设置在更新发生时使用的文件名后缀。 # 当前的'when'为什么是时得到支持: # S - Seconds 秒 # M - Minutes 分钟 # H - Hours 小时 # D - Days 天 # 在夜里进行文件更新 # W{0-6} -在某一天更新;0代表星期一 6代表星期日 # “when” 的的情况并不重要;小写或大写字母都会起作用。 if self.when == 'S': self.suffix = "%Y-%m-%d_%H-%M-%S" self.extMatch = r"^\d{4}-\d{2}-\d{2}_\d{2}-\d{2}-\d{2}$" elif self.when == "M": self.suffix = "%Y-%m-%d_%H-%M" self.extMatch = r"^\d{4}-\d{2}-\d{2}_\d{2}-\d{2}$" elif self.when == "H": self.suffix = "%Y-%m-%d_%H" self.extMatch = r"^\d{4}-\d{2}-\d{2}_\d{2}$" elif self.when == "D" or self.when == "MIDNIGHT": self.suffix = "%Y-%m-%d" self.extMatch = r"^\d{4}-\d{2}-\d{2}$" elif self.when.startswith('W'): if len(self.when) != 2: raise ValueError("你必须指定每周更新日志的时间从0到6(0是星期一): %s" % self.when) if self.when[1] < '0' or self.when[1] > '6': raise ValueError("每周更新时间是无效日期: %s" % self.when) self.dayOfWeek = int(self.when[1]) self.suffix = "%Y-%m-%d" self.extMatch = r"^\d{4}-\d{2}-\d{2}" else: raise ValueError("指定的更新间隔无效: %s" % self.when) self.extMatch = re.compile(self.extMatch) if interval != 1: raise ValueError("无效翻滚间隔,必须为1。") def shouldRollover(self, record): """ 确定是否发生翻滚。记录不被使用,因为我们只是比较时间,但它需要方法签名是相同的。 """ if not os.path.exists(self.baseFilename): # print(“文件不存在”) return 0 cTime = time.localtime(time.time()) mTime = time.localtime(os.stat(self.baseFilename)[ST_MTIME]) if self.when == "S" and cTime[5] != mTime[5]: # print("cTime:",cTime[5],"mTime:",mTime[5]) return 1 elif self.when == "M" and cTime[4] != mTime[4]: # print("cTime:",cTime[4],"mTime:",mTime[4]) return 1 elif self.when == "H" and cTime[3] != mTime[3]: # print("cTime:",cTime[3],"mTime:",mTime[3]) return 1 elif (self.when == "MIDNIGHT" or self.when == 'D') and cTime[2] != mTime[2]: # print("cTime:",cTime[2],"mTime:",mTime[2]) return 1 elif self.when == "W" and cTime[1] != mTime[1]: # print("cTime:",cTime[1],"mTime:",mTime[1]) return 1 else: return 0 def doRollover(self): """ 更新日志文件; 在这种情况下,当发生翻滚时,日期/时间戳被附加到文件名。但是,您希望文件被命名为间隔开始,而不是当前时间。 如果有一个备份计数,那么我们必须得到一个匹配的文件名列表,对它们进行排序,并删除具有最长后缀的列表。 对于多进程,我们使用深拷贝(shutil.copy)而不是重命名。 """ if self.stream: self.stream.close() # 获得这个序列开始的时间,并使它成为一个时间表. # t = self.rolloverAT - self.interval t = int(time.time()) if self.utc: timeTuple = time.gmtime(t) else: timeTuple = time.localtime(t) dfn = self.baseFilename + '.' 
+ time.strftime(self.suffix, timeTuple) if os.path.exists(dfn): os.remove(dfn) if os.path.exists(self.baseFilename): shutil.copy(self.baseFilename,dfn) # print("%s -> %s" %(self.baseFilename, dfn)) # os.rename(self.baseFilename,dfn) if self.backupCount > 0: # 查找最旧的日志文件并删除它 # s = glob.glob(self.baseFilename + ".20*") # if len(s) > self.backupCount: # s.sort() # os.remove(s[0]) for s in self.getFilesToDelete(): os.remove(s) self.mode = 'w' self.stream = self._open() def emit(self, record): """ 发送一条记录 将记录输出到文件中,如SE所述。对于多进程,我们使用文件锁。还有更好的方法吗? """ try: if self.shouldRollover(record): self.doRollover() FileLock = self._lock_dir + '/' + os.path.basename(self.baseFilename) + '.' + record.levelname f = open(FileLock,"w+") fcntl.flock(f.fileno(), fcntl.LOCK_EX) FileHandler_MP.emit(self,record) fcntl.flock(f.fileno(), fcntl.LOCK_UN) f.close() except (KeyboardInterrupt, SystemExit): raise except: self.handleError(record) logging.config.fileConfig(LOGGING_PATH) # 这个路径是log日志,的配置文件的路径 logger = logging.getLogger()
zyzFlask
/zyzFlask-0.2.0.tar.gz/zyzFlask-0.2.0/core/log.py
log.py
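A hedged sketch of using the multiprocess-safe handlers above directly; the file name and rotation settings are illustrative. Note that importing core.log runs logging.config.fileConfig(LOGGING_PATH) as a side effect, so the config file referenced by configs.LOGGING_PATH must exist, and the file-lock mechanism relies on fcntl, i.e. a Unix-like system.

```python
import logging
import os
from core.log import TimedRoeatingFileHandler_MP

log = logging.getLogger("worker")
log.setLevel(logging.INFO)

# Roll the file over once per day; every process locks a file in ./.lock while
# emitting, and rollover copies the current file instead of renaming it so
# concurrent writers do not lose their open handle.
handler = TimedRoeatingFileHandler_MP("worker.log", when="D", backupCount=7)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(process)d %(message)s"))
log.addHandler(handler)

log.info("hello from pid %s", os.getpid())
```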
# This package covers basic usage of message-queue middleware components (RabbitMQ and Kafka).

## RabbitMQ

```
import zz_spider
from zz_spider.RabbitMq import DealRabbitMQ
```

### Usage

```
# -*- coding: utf-8 -*-
# @Time : 10/14/21 5:38 PM
# @Author : ZZK
# @File : test_spider.py
# @describe :
from zz_spider.RabbitMq import DealRabbitMQ

host = xxx
port = xxx
user = xxx
password = xxx
queue_name = xxx
url_port = xxx


def spider(res):
    """
    :param res: one batch of messages taken from the queue
    :return:
    """
    for i in res:
        data = i
        # mongo(data)
        print(i)


mqobeject = DealRabbitMQ(host, user=user, passwd=password, port=port, url_port=url_port)
# spider_main takes the function that does the actual crawling work
mqobeject.consumer_mq(spider_main=spider, queue_name=queue_name)

# Write failed records into an error queue; the queue name must end with _error
mqobeject.send_mq(queue_name='123_error', msg={'1': 1})
```

## Kafka

```
# Producing
producer = KkProducer(
    bootstrap_servers=bootstrap_servers,
    options_name=options_name,
    try_max_times=try_max_times
)

params:
    bootstrap_servers  broker host:port to connect to (e.g. 127.0.0.1:9092)
    options_name       topic name (e.g. topic_{flow_id}_{execute_id}_{data_time})
    try_max_times      maximum number of retries on failure, default 3

test: connection test in test_send.py

# Consuming
```

> For help, contact zzk_python@163.com
zz-spider
/zz_spider-1.0.0.tar.gz/zz_spider-1.0.0/README.md
README.md
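The "Consuming" section of the README above is left empty; below is a hedged sketch of a matching consumer written directly against kafka-python (the broker address, topic and group id are placeholders, not values defined by this package). It decodes messages the same way KkProducer serializes them (JSON encoded as ASCII).

```python
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "topic_demo",
    bootstrap_servers="127.0.0.1:9092",
    group_id="demo_group",
    value_deserializer=lambda m: json.loads(m.decode("ascii")),
    auto_offset_reset="earliest",
    consumer_timeout_ms=10000,   # stop iterating after 10s without messages
)
for msg in consumer:
    print(msg.topic, msg.partition, msg.offset, msg.value)
consumer.close()
```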
import json from confluent_kafka import cimpl from kafka import (SimpleClient, KafkaConsumer) from kafka.common import (OffsetRequestPayload, TopicPartition) from confluent_kafka.admin import (AdminClient, NewTopic) class KkOffset(object): def __init__(self, bootstrap_servers=None, topic=None, group_id=None): self.bootstrap_servers = bootstrap_servers self.topic = topic self.group_id = group_id self.get_topic_offset = self._get_topic_offset self.get_group_offset = self._get_group_offset self.surplus_offset = self._surplus_offset @property def _get_topic_offset(self): # topic的offset总和 client = SimpleClient(self.bootstrap_servers) partitions = client.topic_partitions[self.topic] offset_requests = [OffsetRequestPayload(self.topic, p, -1, 1) for p in partitions.keys()] offsets_responses = client.send_offset_request(offset_requests) return sum([r.offsets[0] for r in offsets_responses]) @property def _get_group_offset(self): # topic特定group已消费的offset的总和 consumer = KafkaConsumer(bootstrap_servers=self.bootstrap_servers, group_id=self.group_id, ) pts = [TopicPartition(topic=self.topic, partition=i) for i in consumer.partitions_for_topic(self.topic)] result = consumer._coordinator.fetch_committed_offsets(pts) return sum([r.offset for r in result.values()]) @property def _surplus_offset(self) -> int: """ :param topic_offset: topic的offset总和 :param group_offset: topic特定group已消费的offset的总和 :return: 未消费的条数 """ lag = self.get_topic_offset - self.get_group_offset if lag < 0: return 0 return lag def watch_topics(topic: str, bootstrap_servers: str): # 查看所有话题 consumer = KafkaConsumer(topic, bootstrap_servers=bootstrap_servers, group_id=('',), value_deserializer=json.loads, auto_offset_reset='earliest', enable_auto_commit=True, auto_commit_interval_ms=1000) return consumer.topics() def create_topics(topic_list: list, bootstrap_servers: str): """ :param host_port_and: 10.0.0.1:9092,10.0.0.2:9092,10.0.0.3:9092 :type list: The split with "," and same as port :return: create topics infos """ a = AdminClient({'bootstrap.servers': bootstrap_servers}) new_topics = [ NewTopic(topic, num_partitions=3, replication_factor=1) for topic in topic_list ] fs = a.create_topics(new_topics) for topic, f in fs.items(): try: f.result() # result‘s itself 为空 return {'status': True, 'data': f"创建成功: {topic}"} except cimpl.KafkaException: return {'status': True, 'data': "话题已存在"} except Exception as ex: return {'status': False, 'msg': f"请检查连接异常: {ex}"} finally: pass
zz-spider
/zz_spider-1.0.0.tar.gz/zz_spider-1.0.0/zz_spider/KafkaInstant.py
KafkaInstant.py
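A hedged usage sketch for the KafkaInstant helpers above; the broker address, topic and group id are placeholders. KkOffset reports the backlog as the difference between the topic's total offsets and the group's committed offsets; note that SimpleClient, which _get_topic_offset relies on, only exists in older kafka-python releases.

```python
from zz_spider.KafkaInstant import KkOffset, create_topics, watch_topics

bootstrap_servers = "127.0.0.1:9092"

# Create the topic if it does not exist yet (3 partitions, replication factor 1).
print(create_topics(["topic_demo"], bootstrap_servers))

# List the topics known to the cluster.
print(watch_topics("topic_demo", bootstrap_servers))

# Unconsumed backlog = total produced offsets - offsets committed by the group.
offset = KkOffset(bootstrap_servers=bootstrap_servers,
                  topic="topic_demo",
                  group_id="demo_group")
print("backlog:", offset.surplus_offset)
```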
from __future__ import absolute_import from anti_useragent import UserAgent import json import re import time from kafka import KafkaProducer import sys, os sys.path.append(os.path.abspath('..')) try: from module import logging from exceptions import KafkaInternalError from KafkaInstant import (KkOffset, create_topics, watch_topics) except: from ..module import logging from ..exceptions import KafkaInternalError from ..KafkaInstant import (KkOffset, create_topics, watch_topics) """ :param 生产消息队列 : 传输时的压缩格式 compression_type="gzip" 每条消息的最大大小 max_request_size=1024 * 1024 * 20 重试次数 retries=3 """ class KkProducer(KafkaProducer): pending_futures = [] log = logging.get_logger('kafka_product') def __init__(self, options_name, try_max_times=3, *args, **kwargs): super(KkProducer, self).__init__( value_serializer=lambda m: json.dumps(m).encode('ascii'), retries=try_max_times, metadata_max_age_ms=10000000, request_timeout_ms=30000000, *args, **kwargs) self.topic = options_name self.try_max_times = try_max_times self.flush_now = self._flush def sync_producer(self, data_li: list or dict, partition=0, times: int = 0): """ 同步发送 数据 :param data_li: 发送数据 :return: """ if not self.det_topic(): raise KafkaInternalError(err_code=-1, err_msg='创建topic失败') if times > self.try_max_times: return False if not isinstance(data_li, list): future = self.send(self.topic, data_li, partition=partition) record_metadata = future.get(timeout=10) # 同步确认消费 partition = record_metadata.partition # 数据所在的分区 offset = record_metadata.offset # 数据所在分区的位置 print('save success:{0}, partition: {1}, offset: {2}'.format(record_metadata, partition, offset)) for data in data_li: if not isinstance(data, dict): raise TypeError data.update({ "options_name": self.topic, "data": data, }) future = self.send(self.topic, data, partition=partition) record_metadata = future.get(timeout=10) # 同步确认消费 partition = record_metadata.partition # 数据所在的分区 offset = record_metadata.offset # 数据所在分区的位置 print('save success:{0}, partition: {1}, offset: {2}'.format(record_metadata, partition, offset)) def asyn_producer_callback(self, data_li: list or dict, partition=0, times: int = 0): """ 异步发送数据 + 发送状态处理 :param data_li:发送数据 :return: """ if not self.det_topic(): raise KafkaInternalError(err_code=-1, err_msg='创建topic失败') if times > self.try_max_times: # 异常数据 self.log.debug( f'【数据丢失】:发送数据失败:{data_li}' ) return False data_item = { "data": None, "queue_name": self.topic, "result": None, } try: if isinstance(data_li, list): for data in data_li: if data and (len(data) > 0): result = True else: result = False data_item.update({"data": data, "result": result}) self.send(topic=self.topic, value=data_item, partition=partition).add_callback( self.send_success, data_item).add_errback(self.send_error, data_item, times + 1, self.topic) else: data_item.update({"data": data_li}) self.send(topic=self.topic, value=data_item, partition=partition).add_callback( self.send_success, data_item).add_errback(self.send_error, data_item, times + 1, self.topic) except Exception as ex: raise KafkaInternalError(err_code=-1, err_msg=ex) # pass finally: self.flush() # 批量提交 # self.flush_now() # 批量提交 print('这里批量提交') def det_topic(self): create_topics([self.topic], bootstrap_servers=self.config['bootstrap_servers']) all_topic = watch_topics(self.topic, bootstrap_servers=self.config['bootstrap_servers']) if self.topic not in all_topic: create_topics([self.topic], bootstrap_servers=self.config['bootstrap_servers']) return False else: return True def origin_length(self, datas, max_nums: int = 1000000): # 
判断数据数量 if not datas: return {'status': False, 'msg': u'数据为空'} if not isinstance(datas, list): return {'status': True, 'data': 1} if len(datas) > max_nums: return {'status': False, 'msg': u'批次数据量过大'} return {'status': True, 'data': len(datas)} @classmethod def send_success(*args, **kwargs): """异步发送成功回调函数""" print('save success is', args) return True @classmethod def send_error(*args, **kwargs): """异步发送错误回调函数""" print('save error', args) with open('/opt/ldp/{0}.json'.format(args[2]), 'a+', encoding='utf-8') as wf: json.dump(obj=args[0], fp=wf) return False def save_local(self, file_name: str, data_origin: dict): try: with open( f'{file_name}.json', 'a+', encoding='utf-8' ) as wf: json.dump(obj=data_origin, fp=wf) except Exception as ex: return ex @property def _flush(self, timeout=None): flush = super().flush(timeout=timeout) for future in self.pending_futures: if future.failed(): # raise KafkaError(err_code='flush', err_msg='Failed to send batch') pass self.pending_futures = [] return flush
zz-spider
/zz_spider-1.0.0.tar.gz/zz_spider-1.0.0/zz_spider/KafakProduct.py
KafakProduct.py
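A hedged sketch of producing with the KkProducer class above; the broker address and topic are placeholders, and extra keyword arguments are passed straight through to kafka-python's KafkaProducer. Only the asynchronous path is shown; it wraps each record in the data/queue_name/result envelope and flushes the batch at the end.

```python
from zz_spider.KafakProduct import KkProducer

producer = KkProducer(
    options_name="topic_demo",              # topic to create and send to
    try_max_times=3,
    bootstrap_servers="127.0.0.1:9092",
)

# Each dict is wrapped as {"data": ..., "queue_name": "topic_demo", "result": ...}
# and sent with success/error callbacks; flush() is called in the finally block.
producer.asyn_producer_callback([
    {"id": 1, "msg": "hello"},
    {"id": 2, "msg": "world"},
])
```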
import json import os import re import time import threading, multiprocessing from kafka import KafkaConsumer from KafakProduct import KkProducer from module import logging class SavePipeline: """ :parameter data save pipeline as file :param save_data to localfile.json """ def __init__(self, file_name, log_path=os.path.abspath(os.path.dirname(os.getcwd())) + '/'): self.log_path = log_path self.file_name = file_name def __setitem__(self, key, value): return getattr(key, value) def __getitem__(self, item): return self def save_data(self, doing_type: str = 'a+', data_item: dict = lambda d: json.loads(d)): """ 数据存储 :param doing_type: str :param data_item: dict :return: 写入json文件 """ whole_path = self.log_path + '{0}.json'.format(self.file_name) print(whole_path, 'A' * 20) try: with open(whole_path, doing_type, encoding='utf-8') as wf: result = wf.write(json.dumps(data_item) + '\n') if not result: return {'status': False, 'msg': '写入失败'} return {'status': True, 'data': '写入成功'} except Exception as e: return {'status': False, 'msg': str(e)} finally: pass class KafkaReceive(threading.Thread): log = logging.get_logger('kafka_consume') def __init__(self, bootstrap_servers, topic, group_id, client_id=''): self.bootstrap_servers = bootstrap_servers self.topic = topic self.group_id = group_id self.client_id = client_id threading.Thread.__init__(self) self.stop_event = threading.Event() def stop(self): self.stop_event.set() def run(self): consumer = KafkaConsumer(bootstrap_servers=self.bootstrap_servers, group_id=self.group_id, auto_offset_reset='latest', consumer_timeout_ms=1000) consumer.subscribe([self.topic]) sucess_is = 0 error_is = 0 get_list = [] for msg in consumer: self.log.info(f'topic: {msg.topic}') self.log.info(f'partition: {msg.partition}') self.log.info(f'key: {msg.key}; value: {msg.value}') self.log.info(f'offset: {msg.offset}') msg_item = json.loads(json.dumps(msg.value.decode('utf-8'))) result = SavePipeline(self.topic).save_data(data_item=msg_item) if result.get('status', False): sucess_is += 1 else: error_is += 1 get_list.append(msg_item) consumer.close() return sucess_is, error_is, get_list def restart_program(): import sys python = sys.executable os.execl(python, python, * sys.argv) def task(bootstrap_servers, topic, times_count: int = 4): from KafkaInstant import KkOffset offset = KkOffset( bootstrap_servers=bootstrap_servers, topic=topic, group_id=topic ) print(offset.surplus_offset, type(offset.surplus_offset)) import time tasks = [ KafkaReceive( bootstrap_servers=bootstrap_servers, topic=topic, group_id=topic) for c in range(times_count) ] for t in tasks: t.start() time.sleep(10) for task in tasks: task.stop() for task in tasks: task.join() if __name__ == '__main__': # task() KafkaReceive( bootstrap_servers='', topic='', group_id='').run()
zz-spider
/zz_spider-1.0.0.tar.gz/zz_spider-1.0.0/zz_spider/KafkaConsume.py
KafkaConsume.py
import pika import requests import json from retrying import retry from pika.exceptions import AMQPError def retry_if_rabbit_error(exception): return isinstance(exception, AMQPError) class DealRabbitMQ(object): def __init__(self,host,user, passwd,port,url_port): """ :param host: :param user: :param passwd: :param port: :param url_port: :param spider_main: """ self.host = host self.user = user self.passwd = passwd self.port = port self.url_port = url_port credentials = pika.PlainCredentials(user, passwd) connection = pika.BlockingConnection(pika.ConnectionParameters(host=host, port=port, credentials=credentials, heartbeat=0)) # heartbeat 表示7200时间没反应后就报错 self.channel = connection.channel() self.channel.basic_qos(prefetch_size=0, prefetch_count=1) @retry(retry_on_exception=retry_if_rabbit_error) def get_count_by_url(self,queue_name): """ :return: ready,unack,total """ try: url = 'http://{0}:{1}/api/queues/%2f/{2}'.format(self.host,self.url_port,queue_name) r = requests.get(url, auth=(self.user, self.passwd)) if r.status_code != 200: return -1 res = r.json() # ready,unack,total true_count = self.channel.queue_declare(queue=queue_name, durable=True).method.message_count lax_count = max(true_count,res['messages']) return res['messages_ready'], res['messages_unacknowledged'], lax_count # return dic['messages'] except Exception as e: print("rabbitmq connect url error:",e) raise ConnectionError("rabbitmq connect url error:{0}".format(e)) def callback(self,ch, method, properties, body): """ :param ch: :param method: :param properties: :param body: :return: """ res = json.loads(body) self.spider_main(res) ch.basic_ack(delivery_tag=method.delivery_tag) @retry(retry_on_exception=retry_if_rabbit_error) def consumer_mq(self,spider_main,queue_name): self.spider_main = spider_main self.channel.basic_consume(queue_name, self.callback, False) while self.channel._consumer_infos: ready_count, unack_count, total_count = self.get_count_by_url(queue_name) print("ready中的消息量:{0}",total_count) if total_count == 0: #当真实消息量以及ready中全为0才代表消耗完 self.channel.stop_consuming() # 退出监听 self.channel.connection.process_data_events(time_limit=1) try: self.channel.queue_delete(queue_name) print("消费完成:成功清除队列") except TypeError: print("消费完成:成功清除队列") @retry(retry_on_exception=retry_if_rabbit_error) def send_mq(self,queue_name,msg): """ 往错误队列中写入数据 :return: """ if not queue_name or not msg: raise ValueError("queue_name or msg is None") if 'error' not in queue_name: raise ValueError("queue_name is not error queue") self.channel.queue_declare(queue=queue_name, durable=True) self.channel.basic_publish(exchange='', routing_key=queue_name, body=str(msg), properties=pika.BasicProperties(delivery_mode=2) ) print('成功写入消息')
zz-spider
/zz_spider-1.0.0.tar.gz/zz_spider-1.0.0/zz_spider/RabbitMq.py
RabbitMq.py
from __future__ import absolute_import, unicode_literals import sys import socket from os.path import dirname, abspath, join # from misc import Utils try: from environs import Env except: # Utils().install('environs') from environs import Env try: from loguru import logger except: # Utils().install("loguru") from loguru import logger def set_log_config(formatter, logfile=None): return { "default": { "handlers": [ { "sink": sys.stdout, "format": formatter, "level": "TRACE" }, { "sink": "info.log" if not logfile else logfile, "format": formatter, "level": "INFO", "rotation": '1 week', "retention": '30 days', 'encoding': 'utf-8' }, ], "extra": { "host": socket.gethostbyname(socket.gethostname()), 'log_name': 'default', 'type': 'None' }, "levels": [ dict(name="TRACE", icon="✏️", color="<cyan><bold>"), dict(name="DEBUG", icon="❄️", color="<blue><bold>"), dict(name="INFO", icon="♻️", color="<bold>"), dict(name="SUCCESS", icon="✔️", color="<green><bold>"), dict(name="WARNING", icon="⚠️", color="<yellow><bold>"), dict(name="ERROR", icon="❌️", color="<red><bold>"), dict(name="CRITICAL", icon="☠️", color="<RED><bold>"), ] }, 'kafka': True } class LogFormatter(object): default_formatter = '<green>{time:YYYY-MM-DD HH:mm:ss,SSS}</green> | ' \ '[<cyan>{extra[log_name]}</cyan>] <cyan>{module}</cyan>:<cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> | ' \ '<red>{extra[host]}</red> | ' \ '<level>{level.icon}{level: <5}</level> | ' \ '<level>{level.no}</level> | ' \ '<level>{extra[type]}</level> | ' \ '<level>{message}</level> ' kafka_formatter = '{time:YYYY-MM-DD HH:mm:ss,SSS}| ' \ '[{extra[log_name]}] {module}:{name}:{function}:{line} | ' \ '{extra[host]} | ' \ '{process} | ' \ '{thread} | ' \ '{level: <5} | ' \ '{level.no} | ' \ '{extra[type]}| ' \ '{message} ' def __init__(self): self.logger = logger def setter_log_handler(self, callback=None): assert callable(callback), 'callback must be a callable object' self.logger.add(callback, format=self.kafka_formatter) def get_logger(self, name=None): log_config = set_log_config(self.default_formatter, f'{name}.log') config = log_config.pop('default', {}) if name: config['extra']['log_name'] = name self.logger.configure(**config) return self.logger @staticmethod def format(spider, meta): if hasattr(spider, 'logging_keys'): logging_txt = [] for key in spider.logging_keys: if meta.get(key, None) is not None: logging_txt.append(u'{0}:{1} '.format(key, meta[key])) logging_txt.append('successfully') return ' '.join(logging_txt)
zz-spider
/zz_spider-1.0.0.tar.gz/zz_spider-1.0.0/module/log.py
log.py
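A hedged usage sketch for the loguru-based formatter above; the logger name and messages are illustrative, and the sketch assumes module/log.py is importable as module.log.

```python
from module.log import LogFormatter

# get_logger() configures the shared loguru logger with a stdout sink and a
# rotating <name>.log file sink, and stamps every record with the host address.
log = LogFormatter().get_logger("kafka_product")
log.info("producer started")

# Extra fields referenced by the format string can be overridden per call.
log.bind(type="offset").warning("consumer backlog is growing")
```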
# zz-test 简单IP代理池 simple_pp 是个 异步并发IP代理验证工具,速度很快,一千个代理半分钟左右可完成。 ### 安装 ```pip install -U simple-proxy-pool``` 或下载 repo (e.g., ```git clone https://github.com/ffreemt/simple-proxy-pool.git``` 换到 simple-proxy-pool目录执行 ``` python install -r requirements.txt python setup.py develop ``` ### 代理验证原理 通过IP代理访问 www.baidu.com, 能成功获取百度返回的头则代理有效。再检查头里面是否含'via', 不含'via'即为匿名代理。参考 aio_headers.py。 ### 用法 #### 命令行 ##### 简单用法 ```python -m simple_pp``` simple_pp 会试着以各种方式搜集到不少于 200 个代理,验证后将有效代理输出到屏幕上。 ##### 普通用法 用户可以提供自己的代理:直接将自由格式的代理贴在命令行后面,或提供含自由格式代理的文件名贴在命令行后面,或在运行 `python -m simple_pp` 前将代理拷入系统剪贴板。 ```python -m simple_pp``` 贴入需验证的IP代理(格式 ip:端口, 以空格、回车非数字字母或中文隔开均可)。或: ```python -m simple_pp file1 file2 ...``` 文件内含以上格式的IP代理 也可以用pipe,例如 ``` curl "https://www.freeip.top/?page=1" | python -m simple_pp ``` #### 高级用法 显示详细用法 ```python -m simple_pp -h``` 给定代理数目 ```python -m simple_pp -p 500``` 只显示有效匿名代理 ```python -m simple_pp -a``` 给定代理数目、只显示有效匿名代理 ```python -m simple_pp -p 800 -a``` #### python 程序内调用 ``` from simple_pp import simple_pp from pprint import pprint ip_list = [ip1, ip2, ...] res = simple_pp(ip_list) pprint(res) ``` 输出 res 里格式为: res[0] = ip_list[0] +(是否有效,是否匿名,响应时间秒) 可参看__main__.py 或 tests 里面的文件。有疑问或反馈可发 Issues。 例如 ``` import asyncio import httpx from simple_pp import simple_pp simple_pp(['113.53.230.167:80', '36.25.243.51:80']) ``` 输出: [('113.53.230.167:80', True, False, 0.31), ('36.25.243.51:80', True, True, 0.51)] -> 第一个代理为透明代理,第二个代理为匿名代理 也可以直接将网页结果送给 simple_pp, 例如 ``` import re import asyncio import httpx from pprint import pprint from simple_pp import simple_pp arun = lambda x: asyncio.get_event_loop().run_until_complete(x) _ = [elm for elm in simple_pp([':'.join(elm) if elm[1] else elm[0] for elm in re.findall(r'(?:https?://)?(\d{1,3}(?:\.\d{1,3}){3})(?:[\s\t:\'",]+(\d{1,4}))?', arun(httpx.get('https://www.freeip.top/?page=1')).text)]) if elm[-3] is True] pprint(_) # 可能拿到将近 10 个代理 # 或 _ = [elm for elm in simple_pp(arun(httpx.get('https://www.freeip.top/?page=1')).text) if elm[-3] is True] pprint(_) # ditto ``` ### 鸣谢 * 用了 jhao 的 proxypool 项目里几个文件。感谢jhao开源。
zz-test
/zz-test-0.0.6.tar.gz/zz-test-0.0.6/README.md
README.md
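A hedged sketch of the validation idea the README above describes, using requests for brevity where the package itself uses async httpx (see aio_headers.py): fetch www.baidu.com through the proxy and treat a response without a Via header as an anonymous proxy. The proxy address is only an example.

```python
import requests

def check_proxy(proxy: str, timeout: float = 5.0):
    proxies = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
    try:
        resp = requests.head("http://www.baidu.com", proxies=proxies, timeout=timeout)
    except requests.RequestException:
        return proxy, False, False               # proxy unreachable
    valid = resp.status_code == 200
    anonymous = "Via" not in resp.headers        # no Via header -> anonymous
    return proxy, valid, anonymous

print(check_proxy("113.53.230.167:80"))
```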
# dg_pip #### 介绍 {**以下是 Gitee 平台说明,您可以替换此简介** Gitee 是 OSCHINA 推出的基于 Git 的代码托管平台(同时支持 SVN)。专为开发者提供稳定、高效、安全的云端软件开发协作平台 无论是个人、团队、或是企业,都能够用 Gitee 实现代码托管、项目管理、协作开发。企业项目请看 [https://gitee.com/enterprises](https://gitee.com/enterprises)} #### 软件架构 软件架构说明 #### 安装教程 1. xxxx 2. xxxx 3. xxxx #### 使用说明 1. xxxx 2. xxxx 3. xxxx #### 参与贡献 1. Fork 本仓库 2. 新建 Feat_xxx 分支 3. 提交代码 4. 新建 Pull Request #### 特技 1. 使用 Readme\_XXX.md 来支持不同的语言,例如 Readme\_en.md, Readme\_zh.md 2. Gitee 官方博客 [blog.gitee.com](https://blog.gitee.com) 3. 你可以 [https://gitee.com/explore](https://gitee.com/explore) 这个地址来了解 Gitee 上的优秀开源项目 4. [GVP](https://gitee.com/gvp) 全称是 Gitee 最有价值开源项目,是综合评定出的优秀开源项目 5. Gitee 官方提供的使用手册 [https://gitee.com/help](https://gitee.com/help) 6. Gitee 封面人物是一档用来展示 Gitee 会员风采的栏目 [https://gitee.com/gitee-stars/](https://gitee.com/gitee-stars/)
zzaTest
/zzaTest-0.0.1.tar.gz/zzaTest-0.0.1/README.md
README.md
[![Python application](https://github.com/AndreiPuchko/zzdb/actions/workflows/main.yml/badge.svg)](https://github.com/AndreiPuchko/zzdb/actions/workflows/main.yml)
# The light Python DB API wrapper with some ORM functions (MySQL, PostgreSQL, SQLite)
## Quick start (run demo files)
## - in docker:
```bash
git clone https://github.com/AndreiPuchko/zzdb && cd zzdb/database.docker
./up.sh
./down.sh
```
## - on your system:
```bash
pip install zzdb
git clone https://github.com/AndreiPuchko/zzdb && cd zzdb
# sqlite:
python3 ./demo/demo.py
# mysql and postgresql:
pip install mysql-connector-python psycopg2-binary
pushd database.docker && docker-compose up -d && popd
python3 ./demo/demo_mysql.py
python3 ./demo/demo_postgresql.py
pushd database.docker && docker-compose down -v && popd
```
# Features:
---
## Connect
```python
from zzdb.db import ZzDb

database_sqlite = ZzDb("sqlite3", database_name=":memory:")
# or just
database_sqlite = ZzDb()

database_mysql = ZzDb(
    "mysql",
    user="root",
    password="zztest",
    host="0.0.0.0",
    port="3308",
    database_name="zztest",
)
# or just
database_mysql = ZzDb(url="mysql://root:zztest@0.0.0.0:3308/zztest")

database_postgresql = ZzDb(
    "postgresql",
    user="zzuser",
    password="zztest",
    host="0.0.0.0",
    port=5432,
    database_name="zztest1",
)
```
---
## Define & migrate database schema (ADD COLUMN only).
```python
from zzdb.schema import ZzDbSchema

schema = ZzDbSchema()

schema.add(table="topic_table", column="uid", datatype="int", datalen=9, pk=True)
schema.add(table="topic_table", column="name", datatype="varchar", datalen=100)

schema.add(table="message_table", column="uid", datatype="int", datalen=9, pk=True)
schema.add(table="message_table", column="message", datatype="varchar", datalen=100)
schema.add(
    table="message_table",
    column="parent_uid",
    to_table="topic_table",
    to_column="uid",
    related="name"
)

database.set_schema(schema)
```
---
## INSERT, UPDATE, DELETE
```python
database.insert("topic_table", {"name": "topic 0"})
database.insert("topic_table", {"name": "topic 1"})
database.insert("topic_table", {"name": "topic 2"})
database.insert("topic_table", {"name": "topic 3"})

database.insert("message_table", {"message": "Message 0 in 0", "parent_uid": 0})
database.insert("message_table", {"message": "Message 1 in 0", "parent_uid": 0})
database.insert("message_table", {"message": "Message 0 in 1", "parent_uid": 1})
database.insert("message_table", {"message": "Message 1 in 1", "parent_uid": 1})

# this returns False because there is no value 2 in topic_table.uid - schema works!
database.insert("message_table", {"message": "Message 1 in 1", "parent_uid": 2})

database.delete("message_table", {"uid": 2})

database.update("message_table", {"uid": 0, "message": "updated message"})
```
---
## Cursor
```python
cursor = database.cursor(table_name="topic_table")
cursor = database.cursor(
    table_name="topic_table",
    where=" name like '%2%'",
    order="name desc"
)
cursor.insert({"name": "insert record via cursor"})
cursor.delete({"uid": 2})
cursor.update({"uid": 0, "message": "updated message"})

cursor = database.cursor(sql="select name from topic_table")

for x in cursor.records():
    print(x)
    print(cursor.r.name)

cursor.record(0)['name']
cursor.row_count()
cursor.first()
cursor.last()
cursor.next()
cursor.prev()
cursor.bof()
cursor.eof()
```
zzdb
/zzdb-0.1.11.tar.gz/zzdb-0.1.11/README.md
README.md
<H1 CLASS="western" style="text-align:center;">ZZDeepRollover</H1> This code enables the detection of rollovers performed by zebrafish larvae tracked by the open-source software <a href="https://github.com/oliviermirat/ZebraZoom" target="_blank">ZebraZoom</a>. This code is still in "beta mode". For more information visit <a href="https://zebrazoom.org/" target="_blank">zebrazoom.org</a> or email us at info@zebrazoom.org<br/> <H2 CLASS="western">Road Map:</H2> [Preparing the rollovers detection model](#preparing)<br/> [Testing the rollovers detection model](#testing)<br/> [Training the rollovers detection model](#training)<br/> [Using the rollovers detection model](#using)<br/> <a name="preparing"/> <H2 CLASS="western">Preparing the rollovers detection model:</H2> The detection of rollovers is based on deep learning. You must first install pytorch on your machine. It may be better to first create an anaconda environment for this purpose.<br/><br/> You then need to place the output result folders of <a href="https://github.com/oliviermirat/ZebraZoom" target="_blank">ZebraZoom</a> inside the folder "ZZoutput" of this repository.<br/><br/> In order to train the rollovers detection model, you must also manually classify the frames of some of the tracked videos in order to be able to create a training set. Look inside the folder "manualClassificationExamples" for examples of how to create such manual classifications. You then need to place those manual classifications inside the corresponding output result folders of ZebraZoom.<br/><br/> <a name="testing"/> <H2 CLASS="western">Testing the rollovers detection model:</H2> In order to test the accuracy of the rollovers detection model, you can use the script leaveOneOutVideoTest.py, you will need to adjust some variables at the beginning of that script. The variable "videos" is an array that must contain the name of videos for which a manual classification of frames exist and has been placed inside the corresponding output result folder (inside the folder ZZoutput of this repository).<br/><br/> The script leaveOneOutVideoTest.py will loop through all the videos learning the model on all but one video and testing on the video left out.<br/><br/> <a name="training"/> <H2 CLASS="western">Training the rollovers detection model:</H2> Once the model has been tested using the steps described in the previous section, you can now learn the final model on all the videos for which a manual classification of frames exist using the script trainModel.py (you will need to adjust a few variables in that script).<br/><br/> <a name="using"/> <H2 CLASS="western">Using the rollovers detection model:</H2> As mentionned above, you can then use the script useModel.py to apply the rollovers detection model on a video.<br/><br/>
zzdeeprollover
/zzdeeprollover-0.0.4.tar.gz/zzdeeprollover-0.0.4/README.md
README.md
# The light Python GUI builder (currently based on PyQt5) # How to start ## With docker && x11: ```bash git clone https://github.com/AndreiPuchko/zzgui.git # sudo if necessary cd zzgui/docker-x11 && ./build_and_run_menu.sh ``` ## With PyPI package: ```bash poetry new project_01 && cd project_01 && poetry shell poetry add zzgui cd project_01 python -m zzgui > example_app.py && python example_app.py ``` ## Explore sources: ```bash git clone https://github.com/AndreiPuchko/zzgui.git cd zzgui pip3 install poetry poetry shell poetry install python3 demo/demo_00.py # All demo launcher python3 demo/demo_01.py # basic: main menu, form & widgets python3 demo/demo_02.py # forms and forms in form python3 demo/demo_03.py # grid form (CSV data), automatic creation of forms based on data python3 demo/demo_04.py # progressbar, data loading, sorting and filtering python3 demo/demo_05.py # nonmodal form python3 demo/demo_06.py # code editor python3 demo/demo_07.py # database app (4 tables, mock data loading) - requires a zzdb package python3 demo/demo_08.py # database app, requires a zzdb package, autoschema ``` ## demo/demo_03.py screenshot ![Alt text](https://andreipuchko.github.io/zzgui/screenshot.png) # Build standalone executable (The resulting executable file will appear in the folder dist/) ## One file ```bash pyinstaller -F demo/demo.py ``` ## One directory ```bash pyinstaller -D demo/demo.py ```
zzgui
/zzgui-0.1.15.tar.gz/zzgui-0.1.15/README.md
README.md
import math import matplotlib.pyplot as plt from .Generaldistribution import Distribution class Gaussian(Distribution): """ Gaussian distribution class for calculating and visualizing a Gaussian distribution. Attributes: mean (float) representing the mean value of the distribution stdev (float) representing the standard deviation of the distribution data_list (list of floats) a list of floats extracted from the data file """ def __init__(self, mu=0, sigma=1): Distribution.__init__(self, mu, sigma) def calculate_mean(self): """Function to calculate the mean of the data set. Args: None Returns: float: mean of the data set """ avg = 1.0 * sum(self.data) / len(self.data) self.mean = avg return self.mean def calculate_stdev(self, sample=True): """Function to calculate the standard deviation of the data set. Args: sample (bool): whether the data represents a sample or population Returns: float: standard deviation of the data set """ if sample: n = len(self.data) - 1 else: n = len(self.data) mean = self.calculate_mean() sigma = 0 for d in self.data: sigma += (d - mean) ** 2 sigma = math.sqrt(sigma / n) self.stdev = sigma return self.stdev def plot_histogram(self): """Function to output a histogram of the instance variable data using matplotlib pyplot library. Args: None Returns: None """ plt.hist(self.data) plt.title('Histogram of Data') plt.xlabel('data') plt.ylabel('count') def pdf(self, x): """Probability density function calculator for the gaussian distribution. Args: x (float): point for calculating the probability density function Returns: float: probability density function output """ return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2) def plot_histogram_pdf(self, n_spaces = 50): """Function to plot the normalized histogram of the data and a plot of the probability density function along the same range Args: n_spaces (int): number of data points Returns: list: x values for the pdf plot list: y values for the pdf plot """ mu = self.mean sigma = self.stdev min_range = min(self.data) max_range = max(self.data) # calculates the interval between x values interval = 1.0 * (max_range - min_range) / n_spaces x = [] y = [] # calculate the x values to visualize for i in range(n_spaces): tmp = min_range + interval*i x.append(tmp) y.append(self.pdf(tmp)) # make the plots fig, axes = plt.subplots(2,sharex=True) fig.subplots_adjust(hspace=.5) axes[0].hist(self.data, density=True) axes[0].set_title('Normed Histogram of Data') axes[0].set_ylabel('Density') axes[1].plot(x, y) axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation') axes[0].set_ylabel('Density') plt.show() return x, y def __add__(self, other): """Function to add together two Gaussian distributions Args: other (Gaussian): Gaussian instance Returns: Gaussian: Gaussian distribution """ result = Gaussian() result.mean = self.mean + other.mean result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2) return result def __repr__(self): """Function to output the characteristics of the Gaussian instance Args: None Returns: string: characteristics of the Gaussian """ return "mean {}, standard deviation {}".format(self.mean, self.stdev)
zzhprob-second
/zzhprob_second-0.3-py3-none-any.whl/distributions/Gaussiandistribution.py
Gaussiandistribution.py
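A minimal usage sketch for the Gaussian class above; the sample data are illustrative and the data list is set directly on the instance, with the import path following the file location shown.

```python
from distributions.Gaussiandistribution import Gaussian

g = Gaussian()
g.data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(g.calculate_mean())              # 5.5
print(round(g.calculate_stdev(), 2))   # 3.03 (sample standard deviation)
print(round(g.pdf(5.5), 3))            # density at the sample mean

# __add__ sums the means and combines the standard deviations in quadrature.
print(Gaussian(25, 3) + Gaussian(30, 4))   # mean 55, standard deviation 5.0
```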
import math import matplotlib.pyplot as plt from .Generaldistribution import Distribution class Binomial(Distribution): """ Binomial distribution class for calculating and visualizing a Binomial distribution. Attributes: mean (float) representing the mean value of the distribution stdev (float) representing the standard deviation of the distribution data_list (list of floats) a list of floats to be extracted from the data file p (float) representing the probability of an event occurring n (int) number of trials TODO: Fill out all functions below """ def __init__(self, prob=.5, size=20): self.n = size self.p = prob Distribution.__init__(self, self.calculate_mean(), self.calculate_stdev()) def calculate_mean(self): """Function to calculate the mean from p and n Args: None Returns: float: mean of the data set """ self.mean = self.p * self.n return self.mean def calculate_stdev(self): """Function to calculate the standard deviation from p and n. Args: None Returns: float: standard deviation of the data set """ self.stdev = math.sqrt(self.n * self.p * (1 - self.p)) return self.stdev def replace_stats_with_data(self): """Function to calculate p and n from the data set Args: None Returns: float: the p value float: the n value """ self.n = len(self.data) self.p = 1.0 * sum(self.data) / len(self.data) self.mean = self.calculate_mean() self.stdev = self.calculate_stdev() def plot_bar(self): """Function to output a histogram of the instance variable data using matplotlib pyplot library. Args: None Returns: None """ plt.bar(x = ['0', '1'], height = [(1 - self.p) * self.n, self.p * self.n]) plt.title('Bar Chart of Data') plt.xlabel('outcome') plt.ylabel('count') def pdf(self, k): """Probability density function calculator for the gaussian distribution. Args: x (float): point for calculating the probability density function Returns: float: probability density function output """ a = math.factorial(self.n) / (math.factorial(k) * (math.factorial(self.n - k))) b = (self.p ** k) * (1 - self.p) ** (self.n - k) return a * b def plot_bar_pdf(self): """Function to plot the pdf of the binomial distribution Args: None Returns: list: x values for the pdf plot list: y values for the pdf plot """ x = [] y = [] # calculate the x values to visualize for i in range(self.n + 1): x.append(i) y.append(self.pdf(i)) # make the plots plt.bar(x, y) plt.title('Distribution of Outcomes') plt.ylabel('Probability') plt.xlabel('Outcome') plt.show() return x, y def __add__(self, other): """Function to add together two Binomial distributions with equal p Args: other (Binomial): Binomial instance Returns: Binomial: Binomial distribution """ try: assert self.p == other.p, 'p values are not equal' except AssertionError as error: raise result = Binomial() result.n = self.n + other.n result.p = self.p result.calculate_mean() result.calculate_stdev() return result def __repr__(self): """Function to output the characteristics of the Binomial instance Args: None Returns: string: characteristics of the Gaussian """ return "mean {}, standard deviation {}, p {}, n {}".\ format(self.mean, self.stdev, self.p, self.n)
zzhprob-second
/zzhprob_second-0.3-py3-none-any.whl/distributions/Binomialdistribution.py
Binomialdistribution.py
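The Binomial class above follows the same pattern; a short sketch, under the same packaging assumption, is:

```python
# Minimal usage sketch for the Binomial class above.
# Assumption: the distributions package re-exports Binomial in its __init__.py.
from distributions import Binomial

coin = Binomial(prob=0.4, size=25)
print(coin.mean)     # 10.0
print(coin.stdev)    # sqrt(25 * 0.4 * 0.6), about 2.449
print(coin.pdf(10))  # probability of exactly 10 successes

# p and n can also be re-estimated from a list of 0/1 outcomes.
coin.data = [0, 1, 1, 0, 1]
coin.replace_stats_with_data()
print(coin)          # mean 3.0, standard deviation about 1.095, p 0.6, n 5
```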
import math import matplotlib.pyplot as plt from .Generaldistribution import Distribution class Gaussian(Distribution): """ Gaussian distribution class for calculating and visualizing a Gaussian distribution. Attributes: mean (float) representing the mean value of the distribution stdev (float) representing the standard deviation of the distribution data_list (list of floats) a list of floats extracted from the data file """ def __init__(self, mu=0, sigma=1): Distribution.__init__(self, mu, sigma) def calculate_mean(self): """Function to calculate the mean of the data set. Args: None Returns: float: mean of the data set """ avg = 1.0 * sum(self.data) / len(self.data) self.mean = avg return self.mean def calculate_stdev(self, sample=True): """Function to calculate the standard deviation of the data set. Args: sample (bool): whether the data represents a sample or population Returns: float: standard deviation of the data set """ if sample: n = len(self.data) - 1 else: n = len(self.data) mean = self.calculate_mean() sigma = 0 for d in self.data: sigma += (d - mean) ** 2 sigma = math.sqrt(sigma / n) self.stdev = sigma return self.stdev def plot_histogram(self): """Function to output a histogram of the instance variable data using matplotlib pyplot library. Args: None Returns: None """ plt.hist(self.data) plt.title('Histogram of Data') plt.xlabel('data') plt.ylabel('count') def pdf(self, x): """Probability density function calculator for the gaussian distribution. Args: x (float): point for calculating the probability density function Returns: float: probability density function output """ return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2) def plot_histogram_pdf(self, n_spaces = 50): """Function to plot the normalized histogram of the data and a plot of the probability density function along the same range Args: n_spaces (int): number of data points Returns: list: x values for the pdf plot list: y values for the pdf plot """ mu = self.mean sigma = self.stdev min_range = min(self.data) max_range = max(self.data) # calculates the interval between x values interval = 1.0 * (max_range - min_range) / n_spaces x = [] y = [] # calculate the x values to visualize for i in range(n_spaces): tmp = min_range + interval*i x.append(tmp) y.append(self.pdf(tmp)) # make the plots fig, axes = plt.subplots(2,sharex=True) fig.subplots_adjust(hspace=.5) axes[0].hist(self.data, density=True) axes[0].set_title('Normed Histogram of Data') axes[0].set_ylabel('Density') axes[1].plot(x, y) axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation') axes[0].set_ylabel('Density') plt.show() return x, y def __add__(self, other): """Function to add together two Gaussian distributions Args: other (Gaussian): Gaussian instance Returns: Gaussian: Gaussian distribution """ result = Gaussian() result.mean = self.mean + other.mean result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2) return result def __repr__(self): """Function to output the characteristics of the Gaussian instance Args: None Returns: string: characteristics of the Gaussian """ return "mean {}, standard deviation {}".format(self.mean, self.stdev)
zzhprob
/zzhprob-0.3.tar.gz/zzhprob-0.3/distributions/Gaussiandistribution.py
Gaussiandistribution.py
import math import matplotlib.pyplot as plt from .Generaldistribution import Distribution class Binomial(Distribution): """ Binomial distribution class for calculating and visualizing a Binomial distribution. Attributes: mean (float) representing the mean value of the distribution stdev (float) representing the standard deviation of the distribution data_list (list of floats) a list of floats to be extracted from the data file p (float) representing the probability of an event occurring n (int) number of trials TODO: Fill out all functions below """ def __init__(self, prob=.5, size=20): self.n = size self.p = prob Distribution.__init__(self, self.calculate_mean(), self.calculate_stdev()) def calculate_mean(self): """Function to calculate the mean from p and n Args: None Returns: float: mean of the data set """ self.mean = self.p * self.n return self.mean def calculate_stdev(self): """Function to calculate the standard deviation from p and n. Args: None Returns: float: standard deviation of the data set """ self.stdev = math.sqrt(self.n * self.p * (1 - self.p)) return self.stdev def replace_stats_with_data(self): """Function to calculate p and n from the data set Args: None Returns: float: the p value float: the n value """ self.n = len(self.data) self.p = 1.0 * sum(self.data) / len(self.data) self.mean = self.calculate_mean() self.stdev = self.calculate_stdev() def plot_bar(self): """Function to output a histogram of the instance variable data using matplotlib pyplot library. Args: None Returns: None """ plt.bar(x = ['0', '1'], height = [(1 - self.p) * self.n, self.p * self.n]) plt.title('Bar Chart of Data') plt.xlabel('outcome') plt.ylabel('count') def pdf(self, k): """Probability density function calculator for the gaussian distribution. Args: x (float): point for calculating the probability density function Returns: float: probability density function output """ a = math.factorial(self.n) / (math.factorial(k) * (math.factorial(self.n - k))) b = (self.p ** k) * (1 - self.p) ** (self.n - k) return a * b def plot_bar_pdf(self): """Function to plot the pdf of the binomial distribution Args: None Returns: list: x values for the pdf plot list: y values for the pdf plot """ x = [] y = [] # calculate the x values to visualize for i in range(self.n + 1): x.append(i) y.append(self.pdf(i)) # make the plots plt.bar(x, y) plt.title('Distribution of Outcomes') plt.ylabel('Probability') plt.xlabel('Outcome') plt.show() return x, y def __add__(self, other): """Function to add together two Binomial distributions with equal p Args: other (Binomial): Binomial instance Returns: Binomial: Binomial distribution """ try: assert self.p == other.p, 'p values are not equal' except AssertionError as error: raise result = Binomial() result.n = self.n + other.n result.p = self.p result.calculate_mean() result.calculate_stdev() return result def __repr__(self): """Function to output the characteristics of the Binomial instance Args: None Returns: string: characteristics of the Gaussian """ return "mean {}, standard deviation {}, p {}, n {}".\ format(self.mean, self.stdev, self.p, self.n)
zzhprob
/zzhprob-0.3.tar.gz/zzhprob-0.3/distributions/Binomialdistribution.py
Binomialdistribution.py
import math import matplotlib.pyplot as plt from .Generaldistribution import Distribution class Gaussian(Distribution): """ Gaussian distribution class for calculating and visualizing a Gaussian distribution. Attributes: mean (float) representing the mean value of the distribution stdev (float) representing the standard deviation of the distribution data_list (list of floats) a list of floats extracted from the data file """ def __init__(self, mu=0, sigma=1): Distribution.__init__(self, mu, sigma) def calculate_mean(self): """Function to calculate the mean of the data set. Args: None Returns: float: mean of the data set """ avg = 1.0 * sum(self.data) / len(self.data) self.mean = avg return self.mean def calculate_stdev(self, sample=True): """Function to calculate the standard deviation of the data set. Args: sample (bool): whether the data represents a sample or population Returns: float: standard deviation of the data set """ if sample: n = len(self.data) - 1 else: n = len(self.data) mean = self.calculate_mean() sigma = 0 for d in self.data: sigma += (d - mean) ** 2 sigma = math.sqrt(sigma / n) self.stdev = sigma return self.stdev def plot_histogram(self): """Function to output a histogram of the instance variable data using matplotlib pyplot library. Args: None Returns: None """ plt.hist(self.data) plt.title('Histogram of Data') plt.xlabel('data') plt.ylabel('count') def pdf(self, x): """Probability density function calculator for the gaussian distribution. Args: x (float): point for calculating the probability density function Returns: float: probability density function output """ return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2) def plot_histogram_pdf(self, n_spaces = 50): """Function to plot the normalized histogram of the data and a plot of the probability density function along the same range Args: n_spaces (int): number of data points Returns: list: x values for the pdf plot list: y values for the pdf plot """ mu = self.mean sigma = self.stdev min_range = min(self.data) max_range = max(self.data) # calculates the interval between x values interval = 1.0 * (max_range - min_range) / n_spaces x = [] y = [] # calculate the x values to visualize for i in range(n_spaces): tmp = min_range + interval*i x.append(tmp) y.append(self.pdf(tmp)) # make the plots fig, axes = plt.subplots(2,sharex=True) fig.subplots_adjust(hspace=.5) axes[0].hist(self.data, density=True) axes[0].set_title('Normed Histogram of Data') axes[0].set_ylabel('Density') axes[1].plot(x, y) axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation') axes[0].set_ylabel('Density') plt.show() return x, y def __add__(self, other): """Function to add together two Gaussian distributions Args: other (Gaussian): Gaussian instance Returns: Gaussian: Gaussian distribution """ result = Gaussian() result.mean = self.mean + other.mean result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2) return result def __repr__(self): """Function to output the characteristics of the Gaussian instance Args: None Returns: string: characteristics of the Gaussian """ return "mean {}, standard deviation {}".format(self.mean, self.stdev)
zzhprobsecond
/zzhprobsecond-0.3.tar.gz/zzhprobsecond-0.3/distributions/Gaussiandistribution.py
Gaussiandistribution.py
import math import matplotlib.pyplot as plt from .Generaldistribution import Distribution class Binomial(Distribution): """ Binomial distribution class for calculating and visualizing a Binomial distribution. Attributes: mean (float) representing the mean value of the distribution stdev (float) representing the standard deviation of the distribution data_list (list of floats) a list of floats to be extracted from the data file p (float) representing the probability of an event occurring n (int) number of trials TODO: Fill out all functions below """ def __init__(self, prob=.5, size=20): self.n = size self.p = prob Distribution.__init__(self, self.calculate_mean(), self.calculate_stdev()) def calculate_mean(self): """Function to calculate the mean from p and n Args: None Returns: float: mean of the data set """ self.mean = self.p * self.n return self.mean def calculate_stdev(self): """Function to calculate the standard deviation from p and n. Args: None Returns: float: standard deviation of the data set """ self.stdev = math.sqrt(self.n * self.p * (1 - self.p)) return self.stdev def replace_stats_with_data(self): """Function to calculate p and n from the data set Args: None Returns: float: the p value float: the n value """ self.n = len(self.data) self.p = 1.0 * sum(self.data) / len(self.data) self.mean = self.calculate_mean() self.stdev = self.calculate_stdev() def plot_bar(self): """Function to output a histogram of the instance variable data using matplotlib pyplot library. Args: None Returns: None """ plt.bar(x = ['0', '1'], height = [(1 - self.p) * self.n, self.p * self.n]) plt.title('Bar Chart of Data') plt.xlabel('outcome') plt.ylabel('count') def pdf(self, k): """Probability density function calculator for the gaussian distribution. Args: x (float): point for calculating the probability density function Returns: float: probability density function output """ a = math.factorial(self.n) / (math.factorial(k) * (math.factorial(self.n - k))) b = (self.p ** k) * (1 - self.p) ** (self.n - k) return a * b def plot_bar_pdf(self): """Function to plot the pdf of the binomial distribution Args: None Returns: list: x values for the pdf plot list: y values for the pdf plot """ x = [] y = [] # calculate the x values to visualize for i in range(self.n + 1): x.append(i) y.append(self.pdf(i)) # make the plots plt.bar(x, y) plt.title('Distribution of Outcomes') plt.ylabel('Probability') plt.xlabel('Outcome') plt.show() return x, y def __add__(self, other): """Function to add together two Binomial distributions with equal p Args: other (Binomial): Binomial instance Returns: Binomial: Binomial distribution """ try: assert self.p == other.p, 'p values are not equal' except AssertionError as error: raise result = Binomial() result.n = self.n + other.n result.p = self.p result.calculate_mean() result.calculate_stdev() return result def __repr__(self): """Function to output the characteristics of the Binomial instance Args: None Returns: string: characteristics of the Gaussian """ return "mean {}, standard deviation {}, p {}, n {}".\ format(self.mean, self.stdev, self.p, self.n)
zzhprobsecond
/zzhprobsecond-0.3.tar.gz/zzhprobsecond-0.3/distributions/Binomialdistribution.py
Binomialdistribution.py
import math import matplotlib.pyplot as plt from .Generaldistribution import Distribution class Gaussian(Distribution): """ Gaussian distribution class for calculating and visualizing a Gaussian distribution. Attributes: mean (float) representing the mean value of the distribution stdev (float) representing the standard deviation of the distribution data_list (list of floats) a list of floats extracted from the data file """ def __init__(self, mu=0, sigma=1): Distribution.__init__(self, mu, sigma) def calculate_mean(self): """Function to calculate the mean of the data set. Args: None Returns: float: mean of the data set """ avg = 1.0 * sum(self.data) / len(self.data) self.mean = avg return self.mean def calculate_stdev(self, sample=True): """Function to calculate the standard deviation of the data set. Args: sample (bool): whether the data represents a sample or population Returns: float: standard deviation of the data set """ if sample: n = len(self.data) - 1 else: n = len(self.data) mean = self.calculate_mean() sigma = 0 for d in self.data: sigma += (d - mean) ** 2 sigma = math.sqrt(sigma / n) self.stdev = sigma return self.stdev def plot_histogram(self): """Function to output a histogram of the instance variable data using matplotlib pyplot library. Args: None Returns: None """ plt.hist(self.data) plt.title('Histogram of Data') plt.xlabel('data') plt.ylabel('count') def pdf(self, x): """Probability density function calculator for the gaussian distribution. Args: x (float): point for calculating the probability density function Returns: float: probability density function output """ return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2) def plot_histogram_pdf(self, n_spaces = 50): """Function to plot the normalized histogram of the data and a plot of the probability density function along the same range Args: n_spaces (int): number of data points Returns: list: x values for the pdf plot list: y values for the pdf plot """ mu = self.mean sigma = self.stdev min_range = min(self.data) max_range = max(self.data) # calculates the interval between x values interval = 1.0 * (max_range - min_range) / n_spaces x = [] y = [] # calculate the x values to visualize for i in range(n_spaces): tmp = min_range + interval*i x.append(tmp) y.append(self.pdf(tmp)) # make the plots fig, axes = plt.subplots(2,sharex=True) fig.subplots_adjust(hspace=.5) axes[0].hist(self.data, density=True) axes[0].set_title('Normed Histogram of Data') axes[0].set_ylabel('Density') axes[1].plot(x, y) axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation') axes[0].set_ylabel('Density') plt.show() return x, y def __add__(self, other): """Function to add together two Gaussian distributions Args: other (Gaussian): Gaussian instance Returns: Gaussian: Gaussian distribution """ result = Gaussian() result.mean = self.mean + other.mean result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2) return result def __repr__(self): """Function to output the characteristics of the Gaussian instance Args: None Returns: string: characteristics of the Gaussian """ return "mean {}, standard deviation {}".format(self.mean, self.stdev)
zzhtongjione
/zzhtongjione-0.5.tar.gz/zzhtongjione-0.5/distributions/Gaussiandistribution.py
Gaussiandistribution.py
import math import matplotlib.pyplot as plt from .Generaldistribution import Distribution class Binomial(Distribution): """ Binomial distribution class for calculating and visualizing a Binomial distribution. Attributes: mean (float) representing the mean value of the distribution stdev (float) representing the standard deviation of the distribution data_list (list of floats) a list of floats to be extracted from the data file p (float) representing the probability of an event occurring n (int) number of trials TODO: Fill out all functions below """ def __init__(self, prob=.5, size=20): self.n = size self.p = prob Distribution.__init__(self, self.calculate_mean(), self.calculate_stdev()) def calculate_mean(self): """Function to calculate the mean from p and n Args: None Returns: float: mean of the data set """ self.mean = self.p * self.n return self.mean def calculate_stdev(self): """Function to calculate the standard deviation from p and n. Args: None Returns: float: standard deviation of the data set """ self.stdev = math.sqrt(self.n * self.p * (1 - self.p)) return self.stdev def replace_stats_with_data(self): """Function to calculate p and n from the data set Args: None Returns: float: the p value float: the n value """ self.n = len(self.data) self.p = 1.0 * sum(self.data) / len(self.data) self.mean = self.calculate_mean() self.stdev = self.calculate_stdev() def plot_bar(self): """Function to output a histogram of the instance variable data using matplotlib pyplot library. Args: None Returns: None """ plt.bar(x = ['0', '1'], height = [(1 - self.p) * self.n, self.p * self.n]) plt.title('Bar Chart of Data') plt.xlabel('outcome') plt.ylabel('count') def pdf(self, k): """Probability density function calculator for the gaussian distribution. Args: x (float): point for calculating the probability density function Returns: float: probability density function output """ a = math.factorial(self.n) / (math.factorial(k) * (math.factorial(self.n - k))) b = (self.p ** k) * (1 - self.p) ** (self.n - k) return a * b def plot_bar_pdf(self): """Function to plot the pdf of the binomial distribution Args: None Returns: list: x values for the pdf plot list: y values for the pdf plot """ x = [] y = [] # calculate the x values to visualize for i in range(self.n + 1): x.append(i) y.append(self.pdf(i)) # make the plots plt.bar(x, y) plt.title('Distribution of Outcomes') plt.ylabel('Probability') plt.xlabel('Outcome') plt.show() return x, y def __add__(self, other): """Function to add together two Binomial distributions with equal p Args: other (Binomial): Binomial instance Returns: Binomial: Binomial distribution """ try: assert self.p == other.p, 'p values are not equal' except AssertionError as error: raise result = Binomial() result.n = self.n + other.n result.p = self.p result.calculate_mean() result.calculate_stdev() return result def __repr__(self): """Function to output the characteristics of the Binomial instance Args: None Returns: string: characteristics of the Gaussian """ return "mean {}, standard deviation {}, p {}, n {}".\ format(self.mean, self.stdev, self.p, self.n)
zzhtongjione
/zzhtongjione-0.5.tar.gz/zzhtongjione-0.5/distributions/Binomialdistribution.py
Binomialdistribution.py
## Project description

Athena is the core SDK for requesting the EnOS API.

### Example

#### 1.1 Query

```python
from poseidon import poseidon

appkey = '7b51b354-f200-45a9-a349-40cc97730c5a'
appsecret = '65417473-2da3-40cc-b235-513b9123aefg'

url = 'http://{apim-url}/someservice/v1/tyy?sid=28654780'

req = poseidon.urlopen(appkey, appsecret, url)
print(req)
```

#### 1.2 Header

```python
from poseidon import poseidon

appkey = '7b51b354-f200-45a9-a349-40cc97730c5a'
appsecret = '65417473-2da3-40cc-b235-513b9123aefg'

url = 'http://{apim-url}/someservice/v1/tyy?sid=28654780'

header = {}

req = poseidon.urlopen(appkey, appsecret, url, None, header)
print(req)
```

#### 1.3 Body

```python
from poseidon import poseidon

appkey = '7b51b354-f200-45a9-a349-40cc97730c5a'
appsecret = '65417473-2da3-40cc-b235-513b9123aefg'

url = 'http://{apim-url}/someservice/v1/tyy?sid=28654780'

data = {"username": "11111", "password": "11111"}

req = poseidon.urlopen(appkey, appsecret, url, data)
print(req)
```
zzltest
/zzltest-0.1.5.tar.gz/zzltest-0.1.5/README.md
README.md
import hashlib Q =str D =int G =round e =list h =None k =bytes F =KeyError w =print p =hashlib .sha256 import hmac u =hmac .new import base64 K =base64 .b64encode U =base64 .b64decode import time B =time .time from Crypto .PublicKey import RSA L =RSA .importKey from Crypto .Cipher import PKCS1_OAEP t =PKCS1_OAEP .new from urllib import request ,parse ,error n =error .URLError b =request .urlopen g =request .Request import ssl ssl ._create_default_https_context =ssl ._create_unverified_context import simplejson c =simplejson .loads import json X =json .dumps m =2 *60 *60 *1000 s =30 *24 *60 *60 *1000 sk ='knoczslufdasvhbivbewnrvuywachsrawqdpzesccknrhhetgmrcwfqfudywbeon' P =b'MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAokFjy0wLMKH0/39hxPN6JYRkMDXzvVIGQh55Keo2LIsP/jRU/yZHT/Vkg34yU9koNjSaacPvooXEoI5eFGuRrsBMrotZ5xfejCrTbGvZqjhnMPBheDmxfflIZzRrF/zoQvF0nIbmGNkxEfROHtDDkgNuGRdthXrNavCgfM2z3LNF83UT9CGpxJWBeKK3pXfYLsQ4f8uyrQRcy2BhKfJ/PKai1mocXYqr07JfQ0XZM4xIzuQ7E4ybNk5IFreDuuhF63wXAi1uonGzqjEYcbC1xT2boNiZORoOQWpAHhSbIRpljmW/uHBvoKZ573PQbbxE62hXv1Z1iVky0dtAV65dXwIDAQAB' def __OO0000OO0O0O00OOO (O00000O00OO0OOO0O ,O0000OOO000O0O00O ): O000O000O0O00O000 =[O00000O00OO0OOO0O .split ("-")[1 ],O0000OOO000O0O00O .split ("-")[1 ]] OO00O0O0OO0O0OO00 =''.join (O000O000O0O00O000 ) return Q (D (OO00O0O0OO0O0OO00 ,16 ))[:6 ] def __OO0O00O00O0O00O00 (O000O0O0O000OO000 ,O000O0O0OOO0O0OO0 ,O0OO00O0OOOOO00OO ): O0O00OOO0OO0O00OO =O000O0O0O000OO000 +Q (D (G (B ()*1000 ))+O000O0O0OOO0O0OO0 ) OOOO0O0OO0O0O00O0 =O000O0O0O000OO000 +Q (D (G (B ()*1000 ))+O0OO00O0OOOOO00OO ) O0OO0O00O0000000O =e () O0OO0O00O0000000O .append (u (sk .encode ('UTF-8'),O0O00OOO0OO0O00OO .encode ('UTF-8'),p ).hexdigest ()) O0OO0O00O0000000O .append (u (sk .encode ('UTF-8'),OOOO0O0OO0O0O00O0 .encode ('UTF-8'),p ).hexdigest ()) return O0OO0O00O0000000O def __O0OOO0OOO0000OO0O (O0O000000OOOO0OO0 ,OOOO0O00OOO00O000 ): O0000OO0O0O0OOOOO =O0O000000OOOO0OO0 +Q (D (G (B ()*1000 ))+OOOO0O00OOO00O000 ) return u (sk .encode ('UTF-8'),O0000OO0O0O0OOOOO .encode ('UTF-8'),p ).hexdigest () def __OOO0O00OO0OOO0000 (O0O0O0OOO000OOO00 ,OO00O0O0OO0O0000O ,OO0OOO0000O0O000O ,O00000OO0OOOO0O0O ,O0O0O00O00O0O0O0O ): OOOO0OO0O00OO0OOO =O0O0O0OOO000OOO00 +'$'+OO00O0O0OO0O0000O +'$'+OO0OOO0000O0O000O +'$'+O00000OO0OOOO0O0O +'$'+O0O0O00O00O0O0O0O O0OO0O0O00OO00OOO =U (P ) O00O000O00O0000OO =L (O0OO0O0O00OO00OOO ) O00OO0O00OOOO00OO =t (O00O000O00O0000OO ) OOOO0O0000000OOOO =O00OO0O00OOOO00OO .encrypt (OOOO0OO0O00OO0OOO .encode ('UTF-8')) return K (OOOO0O0000000OOOO ).decode () def urlopen (O000OO0O0O0O0O0OO ,O0O000O00OOOOO0OO ,O0O000O0O0O0OOO0O ,data =h ,headers ={}): OO00O00000O0000OO =__OO0000OO0O0O00OOO (O000OO0O0O0O0O0OO ,O0O000O00OOOOO0OO ) O00000O0O0OO0OO0O =__OO0O00O00O0O00O00 (OO00O00000O0000OO ,m ,s ) OO0OO000O0O0OO00O =__OOO0O00OO0OOO0000 (OO00O00000O0000OO ,O000OO0O0O0O0O0OO ,O0O000O00OOOOO0OO ,O00000O0O0OO0OO0O [0 ],O00000O0O0OO0OO0O [1 ]) if data !=h : data =X (data ) data =k (data ,'UTF-8') O0OOOO0OO0OOO0OO0 =g (O0O000O0O0O0OOO0O ,data ) O0OOOO0OO0OOO0OO0 .add_header ('apim-accesstoken',OO0OO000O0O0OO00O ) O0OOOO0OO0OOO0OO0 .add_header ('Content-Type','application/json;charset=utf-8') O0OOOO0OO0OOO0OO0 .add_header ('User-Agent','Python_enos_api') for O000OO0O0O0O0O0OO ,OO0OO0O0O00O0O00O in headers .items (): O0OOOO0OO0OOO0OO0 .add_header (O000OO0O0O0O0O0OO ,OO0OO0O0O00O0O00O ) try : O0O0OO0O0OOOO00O0 =b (O0OOOO0OO0OOO0OO0 ) O0O000O00OOOOOO0O =O0O0OO0O0OOOO00O0 .read ().decode ('UTF-8') OOO0O0O0O00OOO00O =c (O0O000O00OOOOOO0O ) try : OO0O0O0O000OO000O 
=OOO0O0O0O00OOO00O ['apim_status'] if OO0O0O0O000OO000O ==4011 : OO0O0O0OO0OOO0OO0 =OOO0O0O0O00OOO00O ['apim_refreshtoken'] OOOO0OOO0OOO0O00O =__O0OOO0OOO0000OO0O (OO00O00000O0000OO ,m ) OO0OO000O0O0OO00O =__OOO0O00OO0OOO0000 (OO00O00000O0000OO ,O000OO0O0O0O0O0OO ,O0O000O00OOOOO0OO ,OOOO0OOO0OOO0O00O ,OO0O0O0OO0OOO0OO0 ) elif OO0O0O0O000OO000O ==4012 : O00000O0O0OO0OO0O =__OO0O00O00O0O00O00 (OO00O00000O0000OO ,m ,s ) OO0OO000O0O0OO00O =__OOO0O00OO0OOO0000 (OO00O00000O0000OO ,O000OO0O0O0O0O0OO ,O0O000O00OOOOO0OO ,O00000O0O0OO0OO0O [0 ],O00000O0O0OO0OO0O [1 ]) if data !=h : data =X (data ) data =k (data ,'UTF-8') O0OOOO0OO0OOO0OO0 =g (O0O000O0O0O0OOO0O ,data ) O0OOOO0OO0OOO0OO0 .add_header ('apim-accesstoken',OO0OO000O0O0OO00O ) O0OOOO0OO0OOO0OO0 .add_header ('Content-Type','application/json;charset=utf-8') O0OOOO0OO0OOO0OO0 .add_header ('User-Agent','Python_enos_api') for O000OO0O0O0O0O0OO ,OO0OO0O0O00O0O00O in headers .items (): O0OOOO0OO0OOO0OO0 .add_header (O000OO0O0O0O0O0OO ,OO0OO0O0O00O0O00O ) O0O0OO0O0OOOO00O0 =b (O0OOOO0OO0OOO0OO0 ) return O0O0OO0O0OOOO00O0 .read ().decode ('UTF-8') except F : return OOO0O0O0O00OOO00O except n as OOO000O00000O0O00 : w (OOO000O00000O0O00 )
zzltest
/zzltest-0.1.5.tar.gz/zzltest-0.1.5/posedion/poseidon.py
poseidon.py
import sys import os import re import datetime import argparse from time import strptime import bibtexparser from bibtexparser.bparser import BibTexParser from bibtexparser.bwriter import BibTexWriter from bibtexparser.bibdatabase import BibDatabase def main(): parse(sys.argv[1:]) def parse(args): parser = argparse.ArgumentParser(description='zzo-bibtex-parser') parser.add_argument('--path', required=True, help='bib file path') args = parser.parse_args() if args.path == None: print('You must specify bib file path') else: # make list of dictionary for the data with open(args.path, 'r', encoding='utf8') as bibtex_file: parser = BibTexParser(common_strings=True) bib_database = bibtexparser.load(bibtex_file, parser=parser) for dic in bib_database.entries: year = '0000' month = '01' day = '01' entry_type = 'misc' result = ['---'] for key in dic: # delete { } \ from value if (key != 'month'): parsed_dict = re.sub('[{}\\\\]', '', dic[key]) if key != 'file' and key != 'ID' and key != 'urldate' and key != 'language': if key == 'author': authors = parsed_dict.split('and') result.append(f'authors: {authors}') elif key == 'keywords': keywords = parsed_dict.split(',') result.append(f'keywords: {keywords}') elif key == 'url': result.append(f'links:\n - name: url\n link: {parsed_dict})') elif key == 'journal': result.append(f'publication: "{parsed_dict}"') elif key == 'year': year = parsed_dict elif key == 'month': month = month_string_to_number(dic[key]) elif key == 'day': day = parsed_dict elif key == 'ENTRYTYPE': doubleQuoteEscape = parsed_dict.replace('"', '\\"') result.append(f'{key}: "{doubleQuoteEscape}"') entry_type = doubleQuoteEscape else: doubleQuoteEscape = parsed_dict.replace('"', '\\"') result.append(f'{key}: "{doubleQuoteEscape}"') result.append(f'enableToc: {False}') result.append(f'enableWhoami: {True}') result.append(f'pinned: {True if is_current_year(int(year)) else False}') result.append(f'publishDate: "{year}-{month}-{day}"') result.append('---') # make file if dic['ID']: # md file renamedDir = re.sub(r'(?<!^)(?=[A-Z])', '_', dic['ID']).lower() filename = f"content/publication/{entry_type}/{renamedDir}/index.md" listfilename = f"content/publication/{entry_type}/_index.md" dirname = os.path.dirname(filename) listdirname = os.path.dirname(listfilename) if not os.path.exists(dirname): os.makedirs(dirname) with open(filename, 'w', encoding='utf8') as file: file.write('\n'.join(result)) if not os.path.exists(listdirname): os.makedirs(listdirname) with open(listfilename, 'w', encoding='utf8') as listfile: listdata = [ '---', f'title: {entry_type}', f'date: {datetime.datetime.now()}', f'description: Publication - {entry_type}', '---' ] listfile.write('\n'.join(listdata)) # bib file db = BibDatabase() db.entries = [dic] writer = BibTexWriter() with open(f"content/publication/{entry_type}/{renamedDir}/cite.bib", 'w', encoding='utf8') as bibfile: bibfile.write(writer.write(db)) else: print('There is no ID') def month_string_to_number(string): m = { 'jan': "01", 'feb': "02", 'mar': "03", 'apr': "04", 'may': "05", 'jun': "06", 'jul': "07", 'aug': "08", 'sep': "09", 'oct': "10", 'nov': "11", 'dec': "12" } s = string.strip()[:3].lower() try: out = m[s] return out except: raise ValueError('Not a month') def is_current_year(year): now = datetime.datetime.now() return now.year == year if __name__ == "__main__": main()
zzo-bibtex-parser
/zzo-bibtex-parser-1.0.8.tar.gz/zzo-bibtex-parser-1.0.8/zzo/parser.py
parser.py
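For context, a minimal sketch of invoking this parser: it reads its arguments from `sys.argv` (the `args` parameter passed to `parse()` is not used), so the simplest programmatic call sets `sys.argv` before `main()`. Run it from the root of a Hugo site, since the output paths are relative to the working directory; the bib filename below is a placeholder.

```python
# Sketch only: 'references.bib' is a placeholder path.
import sys

from zzo.parser import main

# parse() builds its own argparse parser and reads sys.argv, so set argv first.
sys.argv = ['zzo-bibtex-parser', '--path', 'references.bib']

# Writes content/publication/<entrytype>/<entry_id>/index.md and cite.bib
# for every entry in the BibTeX file, relative to the current directory.
main()
```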
# zzpy

Python3 Utilities

```shell
# pip3 install zzpy -i https://pypi.org/simple
# pip3 install zzpy --upgrade -i https://pypi.org/simple
```

# To be verified

- [ ] zgeo

# Unfinished

- [ ] zcaptcha
- [ ] zbrowser
- [ ] zjson
- [ ] zrun
- [ ] ztime

# Completed

- [x] zrandom
- [x] zconfig
- [x] zfile
- [x] zmongo
- [x] zmysql
- [x] zredis
- [x] zsys

# Release log

* 1.1.39: +mongo_download, deprecated: mongo_download_collection
* 1.1.23: +mongo_download_collection
* 1.1.18: +area_matcher
* 1.1.16: +dict_extract
* 1.1.9: excel
* 1.1.6: +zfunction.list_or_args; +zredis.publish/listen; zredis.brpop/blpop+safe
* 1.0.20200827: +ES handling (zes)
* 1.0.20200825: +zfile.download_file
* 1.0.20200824: +zmongo.mongo_collection
* 1.0.20200821.4: removed unused methods from ztime
* 1.0.20200821.3: ztime
* 1.0.20200821: zalioss+url
* 1.0.20200820.10: zalioss+list+delete
* 1.0.20200820.8: push to private repository
* 1.0.20200817: refactored OssFile to support the oss_url format; removed OssFile, added AliOss
* 1.0.20200812.6: excel->csv
* 1.0.20200812.5: fixed a default-configuration bug in zalioss
* 1.0.20200812.4: +zalioss
* 1.0.20200812.3: zjson.jsondumps cls parameter now defaults to DateEncoder
* 1.0.20200812.2: zmysql.mysql_iter_table fix default parameters
* 1.0.20200812.1: zmysql.mysql_iter_table+fields
* 1.0.20200812: zmysql+mysql_iter_table
* 1.0.20200723: fixed the MYSQL_URL parsing rule in zmysql.MySQLConfig
* 1.0.20200721: ztime+get_month
* 1.0.9: build.sh
* 1.0.8: +get_today, +get_date
* 1.0.7: jsondumps+params
* 1.0.6: +jsondumps
zzpy
/zzpy-1.1.57.tar.gz/zzpy-1.1.57/README.md
README.md
# Databus Client

Databus Client is a Python client for making HTTP requests to the Databus API and retrieving JSON data.

## Installation

You can install Databus Client using pip:

```shell
pip install databus-client
```

## Usage

#### Get datasheet pack data

```python
from api.apitable_databus_client import DatabusClient

# Initialize the client with the base URL of the Databus API
host_address = "https://integration.vika.ltd"
client = DatabusClient(host_address)

# Make a GET request to fetch the datasheet pack data
datasheet_id = "dstbUhd5coNXQoXFD8"
result = client.get_datasheet_pack(datasheet_id)

if result:
    print(result)
else:
    print("Request failed.")
```

## Development

```shell
python3 -m venv .env
source .env/bin/activate
pip3 install -r requirements.txt
pip3 install twine
```

## Build and publish

```shell
python3 setup.py sdist
twine upload dist/*
```
zzq-strings-sum
/zzq-strings-sum-0.6.0.tar.gz/zzq-strings-sum-0.6.0/README.md
README.md
import requests import logging class DatabusClient: """ DatabusClient is a client for making HTTP requests to the Databus API. It provides a simple interface to send GET requests and retrieve JSON data. Parameters: host (str): The base URL of the Databus API. Usage: host_address = "https://integration.vika.ltd" client = DatabusClient(host_address) datasheet_id = "dstbUhd5coNXQoXFD8" result = client.get_datasheet_pack(datasheet_id) if result: print(result) else: print("Request failed.") """ def __init__(self, host): """ Initialize a new DatabusClient instance. Args: host (str): The base URL of the Databus API. """ self.host = host self.logger = logging.getLogger("DatabusClient") self.logger.setLevel(logging.DEBUG) # Create a console handler for logging ch = logging.StreamHandler() ch.setLevel(logging.DEBUG) # Create a formatter and attach it to the handler formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s") ch.setFormatter(formatter) # Add the console handler to the logger self.logger.addHandler(ch) def get_datasheet_pack(self, datasheet_id): """ Send a GET request to the Databus API to fetch the datasheet pack data. Args: datasheet_id (str): The ID of the datasheet to retrieve. Returns: dict or None: A dictionary containing the JSON data from the response, or None if the request failed. """ url = f"{self.host}/databus/get_datasheet_pack/{datasheet_id}" headers = {'accept': '*/*'} try: response = requests.get(url, headers=headers) response.raise_for_status() # Raise an exception if the request is unsuccessful json_data = response.json() return json_data except requests.exceptions.RequestException as e: self.logger.error(f"Error occurred during the request: {e}") return None # Example usage: if __name__ == "__main__": # Configure logging to show debug messages import logging logging.basicConfig(level=logging.DEBUG) host_address = "https://integration.vika.ltd" client = DatabusClient(host_address) datasheet_id = "dst9zyUXiLDYjowMvz" result = client.get_datasheet_pack(datasheet_id) if result: print("result message", result['message']) print("result code", result['code']) else: print("Request failed.")
zzq-strings-sum
/zzq-strings-sum-0.6.0.tar.gz/zzq-strings-sum-0.6.0/src/api/apitable_databus_client.py
apitable_databus_client.py
# zzsn_nlp

#### Introduction

Natural language processing by 郑州数能软件科技有限公司 (Zhengzhou Shuneng Software Technology Co., Ltd.).

#### Software architecture

Software architecture description.

#### Installation

1. xxxx
2. xxxx
3. xxxx

#### Usage

1. xxxx
2. xxxx
3. xxxx

#### Contributing

1. Fork this repository
2. Create a Feat_xxx branch
3. Commit your code
4. Create a Pull Request

#### Gitee features

1. Use Readme\_XXX.md to support different languages, e.g. Readme\_en.md, Readme\_zh.md
2. Official Gitee blog: [blog.gitee.com](https://blog.gitee.com)
3. Explore excellent open source projects on Gitee at [https://gitee.com/explore](https://gitee.com/explore)
4. GVP stands for Gitee's Most Valuable Projects, a curated selection of outstanding open source projects
5. The official Gitee user manual: [https://gitee.com/help](https://gitee.com/help)
6. Gitee Cover People is a column showcasing Gitee members: [https://gitee.com/gitee-stars/](https://gitee.com/gitee-stars/)
zzsn-nlp
/zzsn_nlp-0.0.1.tar.gz/zzsn_nlp-0.0.1/README.md
README.md
import os

from pyltp import Segmentor, Postagger, NamedEntityRecognizer, Parser, SementicRoleLabeller
from pyltp import SentenceSplitter


class LTP(object):

    def __init__(self, model_path: str):
        super(LTP, self).__init__()
        self._model_path = model_path
        self._build_model()

    def _build_model(self):
        self._cws = Segmentor()
        self._pos = Postagger()
        self._ner = NamedEntityRecognizer()
        self._parser = Parser()
        self._role_label = SementicRoleLabeller()
        self._cws.load(os.path.join(self._model_path, 'cws.model'))
        self._pos.load(os.path.join(self._model_path, 'pos.model'))
        self._ner.load(os.path.join(self._model_path, 'ner.model'))
        self._parser.load(os.path.join(self._model_path, 'parser.model'))
        self._role_label.load(os.path.join(self._model_path, 'pisrl.model'))

    def split(self, sentence: str) -> list:
        # sentence splitting
        sents = SentenceSplitter.split(sentence)
        sents_list = list(sents)
        return sents_list

    def cws(self, sentence: str) -> list:
        word_list = list(self._cws.segment(sentence))
        return word_list

    def pos(self, sentence: str) -> [list, list]:
        word_list = self.cws(sentence=sentence)
        tag_list = list(self._pos.postag(word_list))
        return word_list, tag_list

    def ner(self, sentence: str) -> [list, list]:
        word_list, tag_list = self.pos(sentence=sentence)
        tag_list = list(self._ner.recognize(word_list, tag_list))
        return word_list, tag_list

    def parse(self, sentence: str) -> [list, list, list]:
        word_list, tag_list = self.pos(sentence=sentence)
        arc_list = list(self._parser.parse(word_list, tag_list))
        return word_list, tag_list, arc_list

    def role_label(self, sentence: str) -> [list, list, list, list]:
        word_list, tag_list, arc_list = self.parse(sentence=sentence)
        role_list = list(self._role_label.label(word_list, tag_list, arc_list))
        return word_list, tag_list, arc_list, role_list

    def release(self):
        self._cws.release()
        self._pos.release()
        self._ner.release()
        self._parser.release()
        self._role_label.release()
zzsn-nlp
/zzsn_nlp-0.0.1.tar.gz/zzsn_nlp-0.0.1/sequence/model/model_pyltp.py
model_pyltp.py
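A minimal usage sketch for the LTP wrapper above, assuming the standard pyltp model files (cws.model, pos.model, ner.model, parser.model, pisrl.model) have been downloaded; the model directory and the example sentence are placeholders.

```python
# Sketch only: the model directory is a placeholder and must contain the
# pyltp model files loaded in _build_model().
from sequence.model.model_pyltp import LTP

ltp = LTP(model_path='/path/to/ltp_data_v3.4.0')

sentence = '郑州数能软件科技有限公司发布了新的自然语言处理工具。'
print(ltp.split(sentence))           # sentence splitting
print(ltp.cws(sentence))             # word segmentation
words, pos_tags = ltp.pos(sentence)  # POS tags aligned with words
words, ner_tags = ltp.ner(sentence)  # BIO-style named-entity tags

ltp.release()                        # free the loaded models
```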
import os import logging import logging.handlers from pathlib import Path __all__ = ['logger'] # 用户配置部分 ↓ import tqdm LEVEL_COLOR = { 'DEBUG': 'cyan', 'INFO': 'green', 'WARNING': 'yellow', 'ERROR': 'red', 'CRITICAL': 'red,bg_white', } STDOUT_LOG_FMT = '%(log_color)s[%(asctime)s] [%(levelname)s] [%(threadName)s] [%(filename)s:%(lineno)d] %(message)s' STDOUT_DATE_FMT = '%Y-%m-%d %H:%M:%S' FILE_LOG_FMT = '[%(asctime)s] [%(levelname)s] [%(threadName)s] [%(filename)s:%(lineno)d] %(message)s' FILE_DATE_FMT = '%Y-%m-%d %H:%M:%S' # 用户配置部分 ↑ class ColoredFormatter(logging.Formatter): COLOR_MAP = { 'black': '30', 'red': '31', 'green': '32', 'yellow': '33', 'blue': '34', 'magenta': '35', 'cyan': '36', 'white': '37', 'bg_black': '40', 'bg_red': '41', 'bg_green': '42', 'bg_yellow': '43', 'bg_blue': '44', 'bg_magenta': '45', 'bg_cyan': '46', 'bg_white': '47', 'light_black': '1;30', 'light_red': '1;31', 'light_green': '1;32', 'light_yellow': '1;33', 'light_blue': '1;34', 'light_magenta': '1;35', 'light_cyan': '1;36', 'light_white': '1;37', 'light_bg_black': '100', 'light_bg_red': '101', 'light_bg_green': '102', 'light_bg_yellow': '103', 'light_bg_blue': '104', 'light_bg_magenta': '105', 'light_bg_cyan': '106', 'light_bg_white': '107', } def __init__(self, fmt, datefmt): super(ColoredFormatter, self).__init__(fmt, datefmt) def parse_color(self, level_name): color_name = LEVEL_COLOR.get(level_name, '') if not color_name: return "" color_value = [] color_name = color_name.split(',') for _cn in color_name: color_code = self.COLOR_MAP.get(_cn, '') if color_code: color_value.append(color_code) return '\033[' + ';'.join(color_value) + 'm' def format(self, record): record.log_color = self.parse_color(record.levelname) message = super(ColoredFormatter, self).format(record) + '\033[0m' return message class TqdmLoggingHandler(logging.Handler): def __init__(self, level=logging.NOTSET): super().__init__(level) def emit(self, record): try: msg = self.format(record) tqdm.tqdm.write(msg) self.flush() except (KeyboardInterrupt, SystemExit): raise except: self.handleError(record) def _get_logger(log_to_file=True, log_filename='default.log', log_level='DEBUG'): _logger = logging.getLogger(__name__) stdout_handler = logging.StreamHandler() stdout_handler.setFormatter( ColoredFormatter( fmt=STDOUT_LOG_FMT, datefmt=STDOUT_DATE_FMT, ) ) _logger.addHandler(stdout_handler) # _logger.setLevel(logging.INFO) # _logger.addHandler(TqdmLoggingHandler()) if log_to_file: # _tmp_path = os.path.dirname(os.path.abspath(__file__)) # _tmp_path = os.path.join(_tmp_path, '../logs/{}'.format(log_filename)) _project_path = os.path.dirname(os.getcwd()) _tmp_path = os.path.join(_project_path, 'logs') Path(_tmp_path).mkdir(parents=True, exist_ok=True) _tmp_path = os.path.join(_tmp_path, log_filename) file_handler = logging.handlers.TimedRotatingFileHandler(_tmp_path, when='midnight', backupCount=30) file_formatter = logging.Formatter( fmt=FILE_LOG_FMT, datefmt=FILE_DATE_FMT, ) file_handler.setFormatter(file_formatter) _logger.addHandler(file_handler) _logger.setLevel(log_level) return _logger logger = _get_logger(log_to_file=False)
zzsn-nlp
/zzsn_nlp-0.0.1.tar.gz/zzsn_nlp-0.0.1/utils/log.py
log.py
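The module exposes a ready-made `logger` singleton (console output only by default, since it is created with `log_to_file=False`); a minimal sketch of using it:

```python
# Minimal sketch: `logger` is created at import time with log_to_file=False,
# so these calls only print colored output to the console.
from utils.log import logger

logger.debug('debug message (cyan)')
logger.info('info message (green)')
logger.warning('warning message (yellow)')
logger.error('error message (red)')
```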
import fasttext from pathlib import Path from utils.log import logger from utils.utils import timeit from base.runner.base_runner import BaseRunner from classification.config.fast_text_config import FastTextConfig from classification.evaluation.classify_evaluator import ClassifyEvaluator from classification.utils.utils import * class FastTextRunner(BaseRunner): def __init__(self, config_path: str): super(FastTextRunner, self).__init__() self._config_path = config_path self._config = None self._train_dataloader = None self._valid_dataloader = None self._test_dataloader = None self._model = None self._loss = None self._optimizer = None self._evaluator = None self._build() @timeit def _build(self): self._build_config() self._build_data() self._build_model() self._build_loss() self._build_optimizer() self._build_evaluator() pass def _build_config(self): self._config = FastTextConfig(config_path=self._config_path).load_config() pass def _build_data(self): self._train_path = self._config.data.train_path self._valid_path = self._config.data.valid_path self._test_path = self._config.data.test_path pass def _build_model(self): if self._config.status == 'test' or 'pred': self._load_checkpoint() pass def _build_loss(self): pass def _build_optimizer(self): pass def _build_evaluator(self): self._evaluator = ClassifyEvaluator() pass @timeit def train(self): self._model = fasttext.train_supervised( input=self._train_path, autotuneValidationFile=self._test_path, autotuneDuration=3000, autotuneModelSize='200M' ) self._save_checkpoint(epoch=100) self._valid(epoch=100) pass def _train_epoch(self, epoch: int): pass def _valid(self, epoch: int): with open(self._valid_path, encoding='utf-8') as file: self._valid_dataloader = file.readlines() labels = [] pre_labels = [] for text in self._valid_dataloader: label = text.replace('__label__', '')[0] text = text.replace('__label__', '')[1:-1] labels.append(int(label)) # print(model.predict(text)) pre_label = self._model.predict(text)[0][0].replace('__label__', '') # print(pre_label) pre_labels.append(int(pre_label)) # print(model.predict(text)) # p = precision_score(labels, pre_labels) # r = recall_score(labels, pre_labels) # f1 = f1_score(labels, pre_labels) p, r, f1 = self._evaluator.evaluate(true_list=labels, pred_list=pre_labels) logger.info('P: {:.4f}, R: {:.4f}, F1: {:.4f}'.format(p, r, f1)) pass def test(self): self._valid(epoch=100) pass def pred(self, id: int, title: str, content: str): text = (title + '。') * 2 + content text = clean_txt(raw=clean_tag(text=text)) if type(text) is str: text = text.replace('\n', '').replace('\r', '').replace('\t', '') pre_label = self._model.predict(text)[0][0].replace('__label__', '') if pre_label == '0': label = '非招聘股票' elif pre_label == '1': label = '招聘信息' else: label = '股票信息' return { 'handleMsg': 'success', 'isHandleSuccess': True, 'logs': None, 'resultData': { 'id': id, 'label': label } } pass def _display_result(self, epoch: int): pass @timeit def _save_checkpoint(self, epoch: int): Path(self._config.learn.dir.saved).mkdir(parents=True, exist_ok=True) self._model.save_model(self._config.learn.dir.save_model) pass def _load_checkpoint(self): self._model = fasttext.load_model(self._config.learn.dir.save_model) pass if __name__ == '__main__': ft_config_path = '../config/fast_text_config.yml' runner = FastTextRunner(config_path=ft_config_path) # runner.train() runner.test()
zzsn-nlp
/zzsn_nlp-0.0.1.tar.gz/zzsn_nlp-0.0.1/classification/runner/runner_fast_text.py
runner_fast_text.py
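A hedged sketch of serving a prediction with this runner; it assumes a config YAML whose `learn.dir.save_model` points at an already trained fastText model (the config path and the input texts below are placeholders).

```python
# Sketch only: config path and input texts are placeholders; a trained model
# must already exist at the path configured under learn.dir.save_model.
from classification.runner.runner_fast_text import FastTextRunner

runner = FastTextRunner(config_path='../config/fast_text_config.yml')

result = runner.pred(id=1, title='某公司招聘算法工程师', content='岗位职责与任职要求')
print(result['resultData']['label'])  # one of: 非招聘股票 / 招聘信息 / 股票信息
```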
import os import random import pandas as pd from pandas import DataFrame from sklearn.model_selection import train_test_split from classification.utils.utils import * from doc_similarity.model.cosine_similarity import CosineSimilarity from doc_similarity.model.jaccard import JaccardSimilarity from doc_similarity.model.levenshtein import LevenshteinSimilarity from doc_similarity.model.min_hash import MinHashSimilarity from doc_similarity.model.sim_hash import OldSimHashSimilarity from utils.utils import * root_path = '/home/zzsn/liuyan/word2vec/doc_similarity' stop_words_path = os.path.join(root_path, 'stop_words.txt') cos_sim = CosineSimilarity(stop_words_path=stop_words_path) jac_sim = JaccardSimilarity(stop_words_path=stop_words_path) lev_sim = LevenshteinSimilarity(stop_words_path=stop_words_path) min_hash_sim = MinHashSimilarity(stop_words_path=stop_words_path) old_sim_hash_sim = OldSimHashSimilarity() def remove_repetition(path: str) -> list: """ 根据文章标题进行数据去重清洗 :param path: :return: """ data_loader = pd.read_excel(path) delete_num = 0 article_list = [] for index in range(len(data_loader['id'])): title = data_loader['title'][index].replace('\n', '').replace('\r', '').replace('\t', '') if judge_sim(article_list=article_list, title=title): print('Add : \tindex: {} \t id: {} \t title: {}'.format( index, data_loader['id'][index], data_loader['title'][index]) ) article_list.append({ 'id': data_loader['id'][index], 'title': title, 'content': data_loader['content'][index].replace( '\n', '' ).replace('\r', '').replace('\t', ''), 'origin': data_loader['origin'][index], 'source_address': data_loader['sourceaddress'][index] }) else: delete_num += 1 print('Delete: \tindex: {} \t id: {} \t title: {}'.format( index, data_loader['id'][index], data_loader['title'][index]) ) print('Delete: \t{}'.format(delete_num)) return article_list pass def judge_sim(article_list: list, title: str) -> bool: if len(article_list) < 1: return True if len(article_list) > 100: article_list = article_list[-100: -1] for article in article_list: if cos_sim.calculate(article['title'], title) > 0.9: print('{} --- {}'.format(title, article['title'])) return False return True pass def process_txt(data_loader: DataFrame, train_file_path: str, valid_file_path: str): articles = data_loader['article'] labels = data_loader['label'] article_list = [] for article, label in zip(articles, labels): if type(article) is str: text = article.replace('\n', '').replace('\r', '').replace('\t', '') else: print('{} is not str!'.format(article)) continue text = seg(text=text, sw=stop_words(path='sample/stop_words.txt')) text = '__label__{} {}'.format(label, text) article_list.append(text) # for index in range(len(data_loader['article'])): # content = data_loader['article'][index].replace('\n', '').replace('\r', '').replace('\t', '') # # text = seg(content, NLPTokenizer, stop_words()) # text = seg(content, stop_words(path='sample/stop_words.txt')) # text = '__label__1 {}'.format(text) # # text = transform_data(text, data_loader['label'][index]) # article_list.append(text) train_data, valid_data = train_test_split( article_list, train_size=0.8, random_state=2021, shuffle=True ) with open( train_file_path, 'w', encoding='utf-8' ) as train_file, open( valid_file_path, 'w', encoding='utf-8' ) as valid_file: for train in train_data: train_file.write(train + '\n') for valid in valid_data: valid_file.write(valid + '\n') pass def process_fx(path='sample/风险训练集.xlsx'): data_list = pd.read_excel(path) data_list['article'] = (data_list['title'] + '。') * 2 + 
data_list['content'] pass def process_f_zp_gp(path: str, train_file_path: str, valid_file_path: str): data_loader = pd.read_excel(path) # data_loader['article'] = '{}。{}'.format(data_loader['title'] * 2, data_loader['content']) data_loader['article'] = data_loader['title'] * 2 + '。' + data_loader['content'] data_loader['article'] = data_loader.article.apply(clean_tag).apply(clean_txt) process_txt( data_loader=data_loader, train_file_path=train_file_path, valid_file_path=valid_file_path ) pass def merge_f_zp_gp(f_path: str, zp_path: str, gp_path: str, result_path: str): result_list = [] f_list = read_excel_random(f_path, label=0) zp_list = read_excel_random(zp_path, label=1) gp_list = read_excel_random(gp_path, label=2) result_list.extend(f_list) result_list.extend(zp_list) result_list.extend(gp_list[:5000]) df = pd.DataFrame(result_list) df.to_excel(result_path) pass def read_excel_random(path: str, label: int) -> list: df = pd.read_excel(path) result_list = [] titles, contents = df['title'], df['content'] for title, content in zip(titles, contents): result_list.append({ 'title': title, 'content': content, 'label': label }) random.shuffle(result_list) return result_list # return result_list[:5000] if len(result_list) > 5000 else result_list pass if __name__ == '__main__': # 源语料去重 # article_list = remove_repetition(path='sample/股票信息.xlsx') # df = pd.DataFrame(article_list) # df.to_excel('sample/去重股票信息.xlsx') # merge # merge_f_zp_gp( # f_path='sample/去重非招聘股票.xlsx', # zp_path='sample/去重招聘信息.xlsx', # gp_path='sample/去重股票信息.xlsx', # result_path='sample/去重_F_ZP_GP.xlsx' # ) # excel2txt 准备训练 # process_fx() process_f_zp_gp( path='sample/去重_F_ZP_GP.xlsx', train_file_path='/home/zzsn/liuyan/data/f_zp_gp/train.txt', valid_file_path='/home/zzsn/liuyan/data/f_zp_gp/valid.txt' ) # list2xlsx(result_list=article_list, xlsx_path='sample/去重招聘信息.xlsx') pass
zzsn-nlp
/zzsn_nlp-0.0.1.tar.gz/zzsn_nlp-0.0.1/classification/data/data_process.py
data_process.py
import pandas as pd total_list = ['资讯 研究热词', '研究领域 地域', '研究领域 企业', '研究领域 资讯', '研究领域 专家', '研究领域 领导', '研究领域 研究领域', '地域 资讯', '专家 地域', '专家 企业', '企业 资讯', '专家 专家', '专家 资讯', '领导 企业', '领导 地域', '领导 专家', '领导 资讯', '领导 领导', '企业 企业'] def stat(data_path: str): total_dict = { '资讯 研究热词': [], '研究领域 地域': [], '研究领域 企业': [], '研究领域 资讯': [], '研究领域 专家': [], '研究领域 领导': [], '研究领域 研究领域': [], '地域 资讯': [], '专家 地域': [], '专家 企业': [], '企业 资讯': [], '专家 专家': [], '专家 资讯': [], '领导 企业': [], '领导 地域': [], '领导 专家': [], '领导 资讯': [], '领导 领导': [], '企业 企业': [] } with open(data_path, 'r', encoding='utf-8') as f: lines = f.readlines() le_list, le_label_list, re_list, ri_list, ri_label_list, id_list, bo_list = [], [], [], [], [], [], [] data_list = [] double_list = [] for line in lines: list_ = line.strip('\n').split(',') left = list_[0].strip()[1: -1] left_label = list_[1].strip()[1: -1] relation = list_[2].strip()[1: -1] right = list_[3].strip()[1: -1] right_label = list_[4].strip()[1: -1] corpusID = list_[5].strip() bool_ = True if list_[6].strip()[1: -1] == '1' else False le_list.append(left) le_label_list.append(left_label) re_list.append(relation) ri_list.append(right) ri_label_list.append(right_label) # id_list.append(corpusID) bo_list.append(bool_) double_list.append(left_label + ' ' + right_label) # data_list.append({ # 'left': left, # 'left_label': left_label, # 'relation': relation, # 'right': right, # 'right_label': right_label, # 'corpusID': corpusID, # 'valid': bool_ # }) bool_dt = False for double_type in total_list: if bool_dt: break type_L, type_R = double_type.split(' ') if left_label == type_L and right_label == type_R: total_dict[double_type].append({ 'left': left, 'left_label': left_label, 'relation': relation, 'right': right, 'right_label': right_label, 'corpusID': corpusID, 'bool': bool_ }) bool_dt = True elif left_label == type_R and right_label == type_L: total_dict[double_type].append({ 'left': left, 'left_label': left_label, 'relation': relation, 'right': right, 'right_label': right_label, 'corpusID': corpusID, 'bool': bool_ }) bool_dt = True if not bool_dt: print('得了呵的!!!') # result_re = pd.value_counts(re_list) result_bo = pd.value_counts(bo_list) # result_double = pd.value_counts(double_list) # print(result_re) print(result_bo) # print(result_double) return total_dict def stats_re(total_dict: dict): for double_type in total_list: type_list = total_dict[double_type] re_list = [] for type_dict in type_list: if type_dict['bool']: re_list.append(type_dict['relation']) print('{}: \n{}\n'.format(double_type, pd.value_counts(re_list))) if __name__ == '__main__': data_path = '/home/zutnlp/zutnlp_student_2017/liuyan/datasets/zzsn/re/实体标签.csv' total_dict = stat(data_path=data_path) stats_re(total_dict=total_dict) pass
zzsn-nlp
/zzsn_nlp-0.0.1.tar.gz/zzsn_nlp-0.0.1/relation_extraction/data/data_stats.py
data_stats.py
from __future__ import print_function
import os
import time
import sys
import random
import gc

import numpy as np
import torch
import torch.optim as optim
import xlsxwriter

from doc_event.model.seqlabel import SeqLabel
from doc_event.data.data_loader import Data
from doc_event.evaluate.eval_entity import eval_entity

try:
    import cPickle as pickle
except ImportError:
    import pickle

seed_num = 42
random.seed(seed_num)
torch.manual_seed(seed_num)
np.random.seed(seed_num)
torch.cuda.manual_seed_all(seed_num)
torch.backends.cudnn.deterministic = True


def data_initialization(data):
    data.build_alphabet(data.train_dir)
    data.build_alphabet(data.dev_dir)
    data.build_alphabet(data.test_dir)
    data.fix_alphabet()


def predict_check(pred_variable, gold_variable, mask_variable):
    """
    input:
        pred_variable (batch_size, sent_len): pred tag result, in numpy format
        gold_variable (batch_size, sent_len): gold result variable
        mask_variable (batch_size, sent_len): mask variable
    """
    pred = pred_variable.cpu().data.numpy()
    gold = gold_variable.cpu().data.numpy()
    mask = mask_variable.cpu().data.numpy()
    overlaped = (pred == gold)
    right_token = np.sum(overlaped * mask)
    total_token = mask.sum()
    # print("right: %s, total: %s" % (right_token, total_token))
    return right_token, total_token


def recover_label(pred_variable, gold_variable, mask_variable, label_alphabet, word_recover):
    """
    input:
        pred_variable (batch_size, sent_len): pred tag result
        gold_variable (batch_size, sent_len): gold result variable
        mask_variable (batch_size, sent_len): mask variable
    """
    pred_variable = pred_variable[word_recover]
    gold_variable = gold_variable[word_recover]
    mask_variable = mask_variable[word_recover]
    batch_size = gold_variable.size(0)
    seq_len = gold_variable.size(1)
    mask = mask_variable.cpu().data.numpy()
    pred_tag = pred_variable.cpu().data.numpy()
    gold_tag = gold_variable.cpu().data.numpy()
    batch_size = mask.shape[0]
    pred_label = []
    gold_label = []
    for idx in range(batch_size):
        pred = [label_alphabet.get_instance(pred_tag[idx][idy]) for idy in range(seq_len) if mask[idx][idy] != 0]
        gold = [label_alphabet.get_instance(gold_tag[idx][idy]) for idy in range(seq_len) if mask[idx][idy] != 0]
        assert (len(pred) == len(gold))
        pred_label.append(pred)
        gold_label.append(gold)
    return pred_label, gold_label


def lr_decay(optimizer, epoch, decay_rate, init_lr):
    lr = init_lr / (1 + decay_rate * epoch)
    print(' Learning rate is set as:', lr)
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr
    return optimizer


def evaluate(data, model, name):
    if name == 'train':
        # texts are assigned as well so the 'train' branch does not leave instance_texts undefined
        instance_texts, instances = data.train_texts, data.train_Ids
    elif name == 'dev':
        instance_texts, instances = data.dev_texts, data.dev_Ids
    elif name == 'test':
        instance_texts, instances = data.test_texts, data.test_Ids
    else:
        print('Error: wrong evaluate name,', name)
        exit(1)
    right_token = 0
    whole_token = 0
    pred_results = []
    gold_results = []
    sequences, doc_ids = [], []
    # set model in eval mode
    model.eval()
    batch_size = data.HP_batch_size
    start_time = time.time()
    train_num = len(instances)
    total_batch = train_num // batch_size + 1
    for batch_id in range(total_batch):
        start = batch_id * batch_size
        end = (batch_id + 1) * batch_size
        if end > train_num:
            end = train_num
        instance = instances[start:end]
        instance_text = instance_texts[start:end]
        if not instance:
            continue
        batch_word, batch_word_len, batch_word_recover, list_sent_words_tensor, batch_label, mask = \
            batchify_sequence_labeling_with_label(instance, data.HP_gpu, False)
        tag_seq = model(batch_word, batch_word_len, list_sent_words_tensor, mask)
        pred_label, gold_label = recover_label(tag_seq, batch_label, mask, data.label_alphabet, batch_word_recover)
        pred_results += pred_label
        gold_results += gold_label
        sequences += [item[0] for item in instance_text]
        doc_ids += [item[-1] for item in instance_text]
    # import ipdb; ipdb.set_trace()
    decode_time = time.time() - start_time
    speed = len(instances) / decode_time
    # acc, p, r, f = get_ner_fmeasure(gold_results, pred_results, data.tagScheme)
    # p, r, f = get_macro_avg(sequences, pred_results, doc_ids)
    labels = list()
    for label in data.label_alphabet.instances:
        labels.append(label)
    labels.remove('O')

    from sklearn.metrics import classification_report
    tag_true_all, tag_pred_all, text_all = list(), list(), list()
    for gold_list, pred_list, seq_list in zip(gold_results, pred_results, sequences):
        tag_true_all.extend(gold_list)
        tag_pred_all.extend(pred_list)
        text_all.extend(seq_list)
    stat_info = classification_report(tag_true_all, tag_pred_all, labels=labels, output_dict=True)
    # print(stat_info)
    macro_avg = stat_info['macro avg']
    p, r, f1 = macro_avg['precision'], macro_avg['recall'], macro_avg['f1-score']
    print('macro avg precision: %.4f, recall: %.4f, f1-score: %.4f' % (p, r, f1))

    # merge sentence-level results back into documents
    result_true = merge(seq_lists=sequences, tag_lists=gold_results, doc_ids=doc_ids)
    result_pred = merge(seq_lists=sequences, tag_lists=pred_results, doc_ids=doc_ids)
    return speed, p, r, f1, pred_results, result_true, result_pred


def merge(seq_lists, tag_lists, doc_ids):
    # merge the result [sequences, pred_results, doc_ids]
    doc_id_ = None
    text_all, tag_all = list(), list()
    text, tag = [], []
    for text_list, tag_list, doc_id in zip(seq_lists, tag_lists, doc_ids):
        if doc_id_ is None or doc_id_ == doc_id:
            doc_id_ = doc_id
            text.extend(text_list)
            tag.extend(tag_list)
        else:
            text_all.append(text)
            tag_all.append(tag)
            doc_id_ = doc_id
            text = text_list
            tag = tag_list
    text_all.append(text)
    tag_all.append(tag)
    return [text_all, tag_all]


def batchify_sequence_labeling_with_label(input_batch_list, gpu, if_train=True):
    """
    input: list of words, chars and labels, various length.
        [[words, features, chars, labels], [words, features, chars, labels], ...]
        words: word ids for one sentence. (batch_size, sent_len)
        labels: label ids for one sentence. (batch_size, sent_len)
    output: zero padding for word and char, with their batch length
        word_seq_tensor: (batch_size, max_sent_len) Variable
        word_seq_lengths: (batch_size, 1) Tensor
        label_seq_tensor: (batch_size, max_sent_len)
        mask: (batch_size, max_sent_len)
    """
    batch_size = len(input_batch_list)
    words = [sent[0] for sent in input_batch_list]
    sent_words = [sent[1] for sent in input_batch_list]
    labels = [sent[2] for sent in input_batch_list]
    word_seq_lengths = torch.LongTensor(list(map(len, words)))
    max_seq_len = word_seq_lengths.max().item()
    word_seq_tensor = torch.zeros((batch_size, max_seq_len), requires_grad=if_train).long()
    label_seq_tensor = torch.zeros((batch_size, max_seq_len), requires_grad=if_train).long()
    mask = torch.zeros((batch_size, max_seq_len), requires_grad=if_train).bool()
    for idx, (seq, label, seqlen) in enumerate(zip(words, labels, word_seq_lengths)):
        seqlen = seqlen.item()
        word_seq_tensor[idx, :seqlen] = torch.LongTensor(seq)
        label_seq_tensor[idx, :seqlen] = torch.LongTensor(label)
        mask[idx, :seqlen] = torch.Tensor([1] * seqlen)
    word_seq_lengths, word_perm_idx = word_seq_lengths.sort(0, descending=True)
    word_seq_tensor = word_seq_tensor[word_perm_idx]
    label_seq_tensor = label_seq_tensor[word_perm_idx]
    mask = mask[word_perm_idx]
    _, word_seq_recover = word_perm_idx.sort(0, descending=False)
    # sentence-level word tensors (one tensor per sentence of each example)
    list_sent_words_tensor = []
    for sent_words_one_example in sent_words:
        one_example_list = []
        for sent in sent_words_one_example:
            sent_tensor = torch.zeros((1, len(sent)), requires_grad=if_train).long()
            sent_tensor[0, :len(sent)] = torch.LongTensor(sent)
            if gpu:
                one_example_list.append(sent_tensor.cuda())
            else:
                one_example_list.append(sent_tensor)
        list_sent_words_tensor.append(one_example_list)
    # reorder the sentence-level tensors to match the length-sorted batch
    word_perm_idx = word_perm_idx.data.numpy().tolist()
    list_sent_words_tensor_perm = []
    for idx in word_perm_idx:
        list_sent_words_tensor_perm.append(list_sent_words_tensor[idx])
    if gpu:
        word_seq_tensor = word_seq_tensor.cuda()
        word_seq_lengths = word_seq_lengths.cuda()
        word_seq_recover = word_seq_recover.cuda()
        label_seq_tensor = label_seq_tensor.cuda()
        mask = mask.cuda()
    return word_seq_tensor, word_seq_lengths, word_seq_recover, list_sent_words_tensor_perm, label_seq_tensor, mask


def train(data):
    print('Training model...')
    data.show_data_summary()
    save_data_name = data.model_dir + '.dset'
    data.save(save_data_name)
    model = SeqLabel(data)

    if data.optimizer.lower() == 'sgd':
        optimizer = optim.SGD(model.parameters(), lr=data.HP_lr, momentum=data.HP_momentum, weight_decay=data.HP_l2)
    elif data.optimizer.lower() == 'adagrad':
        optimizer = optim.Adagrad(model.parameters(), lr=data.HP_lr, weight_decay=data.HP_l2)
    elif data.optimizer.lower() == 'adadelta':
        optimizer = optim.Adadelta(model.parameters(), lr=data.HP_lr, weight_decay=data.HP_l2)
    elif data.optimizer.lower() == 'rmsprop':
        optimizer = optim.RMSprop(model.parameters(), lr=data.HP_lr, weight_decay=data.HP_l2)
    elif data.optimizer.lower() == 'adam':
        optimizer = optim.Adam(model.parameters(), lr=data.HP_lr, weight_decay=data.HP_l2)
    else:
        print('Optimizer illegal: %s' % (data.optimizer))
        exit(1)

    best_dev = -10
    best_epoch = -10
    # start training
    for idx in range(data.HP_iteration):
        epoch_start = time.time()
        temp_start = epoch_start
        print('\nEpoch: %s/%s' % (idx + 1, data.HP_iteration))
        if data.optimizer == 'SGD':
            optimizer = lr_decay(optimizer, idx, data.HP_lr_decay, data.HP_lr)
        instance_count = 0
        sample_id = 0
        sample_loss = 0
        total_loss = 0
        right_token = 0
        whole_token = 0
        random.shuffle(data.train_Ids)
        print('Shuffle: first input word list:', data.train_Ids[0][0])
        # set model in train mode
        model.train()
        model.zero_grad()
        batch_size = data.HP_batch_size
        train_num = len(data.train_Ids)
        total_batch = train_num // batch_size + 1
        for batch_id in range(total_batch):
            start = batch_id * batch_size
            end = (batch_id + 1) * batch_size
            if end > train_num:
                end = train_num
            instance = data.train_Ids[start: end]
            if not instance:
                continue
            batch_word, batch_word_len, batch_word_recover, list_sent_words_tensor, batch_label, mask = \
                batchify_sequence_labeling_with_label(instance, data.HP_gpu, True)
            instance_count += 1
            loss, tag_seq = model.calculate_loss(batch_word, batch_word_len, list_sent_words_tensor, batch_label, mask)
            right, whole = predict_check(tag_seq, batch_label, mask)
            right_token += right
            whole_token += whole
            # print("loss:", loss.item())
            sample_loss += loss.item()
            total_loss += loss.item()
            if end % 500 == 0:
                temp_time = time.time()
                temp_cost = temp_time - temp_start
                temp_start = temp_time
                print('     Instance: %s; Time: %.2fs; loss: %.4f; acc: %s/%s=%.4f' % (
                    end, temp_cost, sample_loss, right_token, whole_token, (right_token + 0.) / whole_token))
                if sample_loss > 1e8 or str(sample_loss) == 'nan':
                    print('ERROR: LOSS EXPLOSION (>1e8) ! PLEASE SET PROPER PARAMETERS AND STRUCTURE! EXIT....')
                    exit(1)
                sys.stdout.flush()
                sample_loss = 0
            loss.backward()
            optimizer.step()
            model.zero_grad()
        temp_time = time.time()
        temp_cost = temp_time - temp_start
        print('     Instance: %s; Time: %.2fs; loss: %.4f; acc: %s/%s=%.4f' % (
            end, temp_cost, sample_loss, right_token, whole_token, (right_token + 0.) / whole_token))

        epoch_finish = time.time()
        epoch_cost = epoch_finish - epoch_start
        print('Epoch: %s training finished. Time: %.2fs, speed: %.2fst/s, total loss: %s' % (
            idx + 1, epoch_cost, train_num / epoch_cost, total_loss))
        print('total_loss:', total_loss)
        if total_loss > 1e8 or str(total_loss) == 'nan':
            print('ERROR: LOSS EXPLOSION (>1e8) ! PLEASE SET PROPER PARAMETERS AND STRUCTURE! EXIT....')
            exit(1)
        # continue
        speed, p, r, f, _, result_true, result_pred = evaluate(data, model, 'dev')
        # generate results {true, pred}
        result_true_lists, result_pred_lists = generate_result_lists(result_true, result_pred)
        p, r, f1 = eval_entity(result_true_lists, result_pred_lists)
        dev_finish = time.time()
        dev_cost = dev_finish - epoch_finish
        current_score = f1
        print('Dev: time: %.2fs, speed: %.2fst/s; precision: %.4f, recall: %.4f, f1-score: %.4f' % (
            dev_cost, speed, p, r, f1))

        if current_score > best_dev:
            print('\n!!! Exceed previous best f1-score: {}'.format(best_dev))
            model_name = data.model_dir + '.best.model'
            print('Save current best model in file: {}\n'.format(model_name))
            torch.save(model.state_dict(), model_name)
            best_dev = current_score
            best_epoch = idx + 1
        else:
            print('\nBest model in epoch: {}, f1-score: {}\n'.format(best_epoch, best_dev))
        gc.collect()


def load_model_decode(data, name):
    print('Load Model from file: ', data.model_dir)
    model = SeqLabel(data)
    if data.HP_gpu:
        model.load_state_dict(torch.load(data.load_model_dir))
    else:
        model.load_state_dict(torch.load(data.load_model_dir, map_location=lambda storage, loc: storage))

    start_time = time.time()
    speed, p, r, f, pred_results, result_true, result_pred = evaluate(data, model, name)
    end_time = time.time()
    time_cost = end_time - start_time
    # generate results {true, pred}
    result_true_lists, result_pred_lists = generate_result_lists(result_true, result_pred)
    p, r, f1 = eval_entity(result_true_lists, result_pred_lists)
    print('\n{}: time_cost: {:.2f}s, speed: {:.2f}st/s, precision: {:.4f}, recall: {:.4f}, f1-score: {:.4f}'.format(
        name, time_cost, speed, p, r, f1))
    list2xlsx(xlsx_path=data.result_true_path, result_lists=result_true_lists)
    list2xlsx(xlsx_path=data.result_pred_path, result_lists=result_pred_lists)
    return pred_results


def generate_result_lists(result_true, result_pred):
    # generate results {true, pred}
    result_true_lists, result_pred_lists = list(), list()
    for word_true_list, tag_true_list, word_pred_list, tag_pred_list in zip(
            result_true[0], result_true[1], result_pred[0], result_pred[1]):
        result_true_dict = build_list2dict(len(word_true_list), word_true_list, tag_true_list, typ='true')
        result_pred_dict = build_list2dict(len(word_pred_list), word_pred_list, tag_pred_list, typ='pred')
        result_true_lists.append(result_true_dict)
        result_pred_lists.append(result_pred_dict)
    return result_true_lists, result_pred_lists


def build_list2dict(_len, _word_list, _tag_list, typ):
    ps_list = list()
    result_dict = {
        'content': ''.join(_word_list),
        'amount_of_cooperation': set(),
        'project_name': set(),
        'state': set(),
        'company_identification_Party_A': set(),
        'company_identification_Party_B': set(),
        'project_cycle': set(),
        'project_status': set()
    }
    # tag_dict = {'amount_of_cooperation': '合作金额', 'project_name': '项目名称', 'state': '国家',
    #             'company_identification_Party_A': '企业识别甲方', 'company_identification_Party_B': '企业识别乙方',
    #             'project_cycle': '项目周期', 'project_status': '项目状态'}
    for index, word, tag in zip(range(_len), _word_list, _tag_list):
        start_pos = index
        end_pos = index + 1
        label_type = tag[2:]
        if tag[0] == 'B' and end_pos != _len:  # two !=
            while _tag_list[end_pos][0] == 'I' and _tag_list[end_pos][2:] == label_type and end_pos + 1 != _len:
                end_pos += 1
            if _tag_list[end_pos][0] == 'E':
                chunk = ''.join(_word_list[start_pos: end_pos + 1])
                if label_type == 'project_status' and typ == 'pred':
                    ps_list.append(chunk)
                else:
                    result_dict[label_type].add(chunk)
    # for predictions, keep only the most frequent project_status chunk
    if typ == 'pred' and len(ps_list) > 0:
        result_dict['project_status'] = [max(ps_list, key=ps_list.count)]
    return result_dict


def list2xlsx(xlsx_path=None, result_lists=None):
    # create the workbook
    workbook = xlsxwriter.Workbook(xlsx_path)
    # create the worksheet
    worksheet = workbook.add_worksheet('sheet1')
    # write row by row; the header row keeps the original Chinese column names
    worksheet.write_row(
        0, 0,
        ['合同金额', '项目名称', '国家', '企业识别甲方', '企业识别乙方', '项目周期', '项目状态']
    )
    for index, result in enumerate(result_lists):
        worksheet.write_row(
            index + 1, 0,
            [
                ','.join(result['amount_of_cooperation']),
                ','.join(result['project_name']),
                ','.join(result['state']),
                ','.join(result['company_identification_Party_A']),
                ','.join(result['company_identification_Party_B']),
                ','.join(result['project_cycle']),
                ','.join(result['project_status'])
            ]
        )
    workbook.close()


if __name__ == '__main__':
    os.environ['CUDA_VISIBLE_DEVICES'] = '0'
    config_path = '../config/config'
    data = Data()
    data.read_config(config_file=config_path)
    status = data.status.lower()
    print('Seed num:', seed_num)

    if status == 'train':
        print('MODEL: train')
        data_initialization(data)
        data.generate_instance('train')
        data.generate_instance('dev')
        data.generate_instance('test')
        data.build_pretrain_emb()
        train(data)
        print('\n\nMODEL: decode')
        data.load(data.dset_dir)
        decode_results = load_model_decode(data, 'test')
        data.write_decoded_results(decode_results, 'test')
    elif status == 'decode':
        print('MODEL: decode')
        data.load(data.dset_dir)
        data.read_config(config_file=config_path)
        print(data.test_dir)
        data.generate_instance('test')
        decode_results = load_model_decode(data, 'test')
        data.write_decoded_results(decode_results, 'test')
    else:
        print('Invalid argument! Please use valid arguments! (train/decode)')
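
# Run sketch (illustrative; the exact working directory and config contents are assumptions):
# hyperparameters and file locations are read from ../config/config relative to this runner/
# directory, and paths in the config are resolved against the parent of the current working
# directory. With status=train the script builds the alphabets, trains, keeps the best dev-F1
# checkpoint as <model_dir>.best.model, then decodes the test set; with status=decode it
# reloads <dset_dir> and only decodes.
#   cd doc_event/runner && python runner.py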
zzsn-nlp
/zzsn_nlp-0.0.1.tar.gz/zzsn_nlp-0.0.1/doc_event/runner/runner.py
runner.py
def eval_entity(true_lists, pred_lists):
    TP, FN, FP = 0, 0, 0
    for true_dict, pred_dict in zip(true_lists, pred_lists):
        tp, fn, fp = compute_entity(true_dict, pred_dict)
        TP += tp
        FN += fn
        FP += fp
    p = TP / (TP + FP) if (TP + FP) != 0 else 0
    r = TP / (TP + FN) if (TP + FN) != 0 else 0
    f1 = (2 * p * r) / (p + r) if (p + r) != 0 else 0
    return p, r, f1


def compute_entity(true_dict, pred_dict):
    content_true, content_pred = true_dict['content'], pred_dict['content']
    amount_of_cooperation_true, amount_of_cooperation_pred = \
        true_dict['amount_of_cooperation'], pred_dict['amount_of_cooperation']
    project_name_true, project_name_pred = true_dict['project_name'], pred_dict['project_name']
    state_true, state_pred = true_dict['state'], pred_dict['state']
    company_identification_Party_A_true, company_identification_Party_A_pred = \
        true_dict['company_identification_Party_A'], pred_dict['company_identification_Party_A']
    company_identification_Party_B_true, company_identification_Party_B_pred = \
        true_dict['company_identification_Party_B'], pred_dict['company_identification_Party_B']
    project_cycle_true, project_cycle_pred = true_dict['project_cycle'], pred_dict['project_cycle']
    project_status_true, project_status_pred = true_dict['project_status'], pred_dict['project_status']

    TP, FP = 0, 0
    # compute TP + FN (total number of gold role values)
    TP_FN = len(amount_of_cooperation_true) + len(project_name_true) + len(state_true) + len(
        company_identification_Party_A_true
    ) + len(company_identification_Party_B_true) + len(project_cycle_true) + len(
        project_status_true
    )
    for aof_pred in amount_of_cooperation_pred:
        if judge_exist(aof_pred, amount_of_cooperation_true):
            TP += 1
        else:
            FP += 1
    for pn_pred in project_name_pred:
        if judge_exist(pn_pred, project_name_true):
            TP += 1
        else:
            FP += 1
    for s_pred in state_pred:
        if judge_exist(s_pred, state_true):
            TP += 1
        else:
            FP += 1
    for ciPA_pred in company_identification_Party_A_pred:
        if judge_exist(ciPA_pred, company_identification_Party_A_true):
            TP += 1
        else:
            FP += 1
    for ciPB_pred in company_identification_Party_B_pred:
        if judge_exist(ciPB_pred, company_identification_Party_B_true):
            TP += 1
        else:
            FP += 1
    for pc_pred in project_cycle_pred:
        if judge_exist(pc_pred, project_cycle_true):
            TP += 1
        else:
            FP += 1
    for ps_pred in project_status_pred:
        if judge_exist(ps_pred, project_status_true):
            TP += 1
        else:
            FP += 1
    return TP, TP_FN - TP, FP


def judge_exist(pred, true_list):
    for true in true_list:
        if pred == true:
            return True
    return False
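
# Usage sketch (hypothetical data, for illustration): each element of true_lists / pred_lists
# is the dict produced by build_list2dict in the runner, e.g.
#   true = [{'content': '...', 'amount_of_cooperation': {'1000万美元'}, 'project_name': {'某项目'},
#            'state': set(), 'company_identification_Party_A': set(),
#            'company_identification_Party_B': set(), 'project_cycle': set(), 'project_status': set()}]
#   pred = [...]  # same structure, predicted role values
#   p, r, f1 = eval_entity(true, pred)
# A predicted role value counts as TP only if it exactly matches a gold value of the same role;
# precision, recall and F1 are computed over all seven role fields pooled together.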
zzsn-nlp
/zzsn_nlp-0.0.1.tar.gz/zzsn_nlp-0.0.1/doc_event/evaluate/eval_entity.py
eval_entity.py
import xlsxwriter
from sklearn.model_selection import train_test_split

from data_process import *


def build_list2dict(_len, _word_list, _tag_list):
    result_dict = {
        'content': ''.join(_word_list),
        'amount_of_cooperation': set(),
        'project_name': set(),
        'state': set(),
        'company_identification_Party_A': set(),
        'company_identification_Party_B': set(),
        'project_cycle': set(),
        'project_status': set()
    }
    # tag_dict = {'amount_of_cooperation': '合作金额', 'project_name': '项目名称', 'state': '国家',
    #             'company_identification_Party_A': '企业识别甲方', 'company_identification_Party_B': '企业识别乙方',
    #             'project_cycle': '项目周期', 'project_status': '项目状态'}
    for index, word, tag in zip(range(_len), _word_list, _tag_list):
        start_pos = index
        end_pos = index + 1
        label_type = tag[2:]
        if tag[0] == 'B' and end_pos != _len:  # two !=
            while _tag_list[end_pos][0] == 'I' and _tag_list[end_pos][2:] == label_type and end_pos + 1 != _len:
                end_pos += 1
            if _tag_list[end_pos][0] == 'E':
                result_dict[label_type].add(''.join(_word_list[start_pos: end_pos + 1]))
                # build_list.append({'start_pos': start_pos,
                #                    'end_pos': end_pos + 1,
                #                    'label_type': tag_dict[label_type]})
    return result_dict


def list2xlsx(xlsx_path=None, result_lists=None):
    # create the workbook
    workbook = xlsxwriter.Workbook(xlsx_path)
    # create the worksheet
    worksheet = workbook.add_worksheet('sheet1')
    # write row by row; the header row keeps the original Chinese column names
    worksheet.write_row(
        0, 0,
        ['合同金额', '项目名称', '国家', '企业识别甲方', '企业识别乙方', '项目周期', '项目状态']
    )
    for index, result in enumerate(result_lists):
        worksheet.write_row(
            index + 1, 0,
            [
                ','.join(result['amount_of_cooperation']),
                ','.join(result['project_name']),
                ','.join(result['state']),
                ','.join(result['company_identification_Party_A']),
                ','.join(result['company_identification_Party_B']),
                ','.join(result['project_cycle']),
                ','.join(result['project_status'])
            ]
        )
    workbook.close()


def data_split(data_list):
    # split_str = ',,、;;。'
    # split_str = ';;。'      # 1
    # split_str = ';;。!!'    # 2
    split_str = ';;。!!??'    # 3
    # a space ' ' could also be used as a split boundary, i.e. split_str = ',,、;;。 '
    result_list = []
    for word_list, tag_list in data_list:
        length = 1
        split_words, split_tags = [], []
        split_list = []
        for word, tag in zip(word_list, tag_list):
            split_words.append(word)
            split_tags.append(tag)
            if length > 30 and tag[0] in ['O', 'E'] and word in split_str:
                split_list.append([split_words, split_tags])
                split_words, split_tags = [], []
                length = 1
            elif length > 120 and tag[0] in ['O', 'E']:
                split_list.append([split_words, split_tags])
                split_words, split_tags = [], []
                length = 1
            if length >= 200:
                print(111111111111111111111111111111111)  # Warning: segment still too long
            length += 1
        merge_list = merge_seq(seq_list=split_list)
        result_list.append(merge_list)
    assert len(data_list) == len(result_list), 'data_list: {} != result_list: {} !'.format(
        len(data_list), len(result_list))
    return result_list


def merge_seq(seq_list):
    i = 0
    num_sent_to_include, max_length = 3, 200
    merge_words, merge_tags = [], []
    merge_list, stats_list = [], []
    for word_list, tag_list in seq_list:
        if i == 0:
            merge_words.extend(word_list)
            merge_tags.extend(tag_list)
            i += 1
        elif i == num_sent_to_include:
            merge_list.append([merge_words, merge_tags])
            stats_list.append(i)
            merge_words = word_list
            merge_tags = tag_list
            i = 1
        elif len(merge_words) + len(word_list) < max_length:
            merge_words.append('#####')
            merge_tags.append('O')
            merge_words.extend(word_list)
            merge_tags.extend(tag_list)
            i += 1
        else:
            merge_list.append([merge_words, merge_tags])
            stats_list.append(i)
            merge_words = word_list
            merge_tags = tag_list
            i = 1
    print('Each merged segment contains {} sub-sentences on average'.format(sum(stats_list) / len(stats_list)))
    return merge_list


if __name__ == '__main__':
    xlsx_path = './sample/total_datasets.xlsx'
    total_list = xlsx2list(xlsx_path=xlsx_path)
    data_list = list()
    for sentence in total_list:
        word_list, tag_list = sentence2tag(sentence)
        data_list.append([word_list, tag_list])
    result_list = data_split(data_list=data_list)
    train_list, dev_list = train_test_split(
        result_list, test_size=0.1, random_state=2021
    )
    write2txt(train_list, 'train_3.txt', 'train')
    write2txt(dev_list, 'dev_3.txt', 'dev')
    write2txt(dev_list, 'test_3.txt', 'test')

    # test_data_path = 'test.txt'
    # with open(test_data_path, 'r', encoding='utf-8') as f:
    #     file = f.readlines()
    # doc_id = None
    # word_list, tag_list = list(), list()
    # for line in file:
    #     if doc_id is None:
    #         doc_id = line.strip('\n')
    #     else:
    #         word, tag = line.strip('\n').split('\t')
    # result_lists = list()
    # for word_list, tag_list in result_list:
    #     result_dict = build_list2dict(len(word_list), word_list, tag_list)
    #     result_lists.append(result_dict)
    # list2xlsx(xlsx_path='test_result_true.xlsx', result_lists=result_lists)
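
# Chunking sketch (illustrative): data_split first cuts each document into sub-sentences,
# breaking after one of ;;。!!?? once a piece exceeds 30 characters, or unconditionally
# after 120 characters, always on an O/E tag so no entity span is split. merge_seq then
# re-packs up to 3 consecutive sub-sentences into one training segment shorter than 200
# characters, inserting the literal token '#####' (tagged 'O') between them.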
zzsn-nlp
/zzsn_nlp-0.0.1.tar.gz/zzsn_nlp-0.0.1/doc_event/data/data_split.py
data_split.py
from __future__ import print_function
from __future__ import absolute_import
import os
import sys

from doc_event.utils.alphabet import Alphabet
from doc_event.utils.functions import *

try:
    import cPickle as pickle
except ImportError:
    import pickle as pickle

START = '</s>'
UNKNOWN = '</unk>'
PADDING = '</pad>'


class Data:
    def __init__(self):
        self.MAX_SENTENCE_LENGTH = 1000
        self.number_normalized = True
        self.norm_word_emb = False
        self.word_alphabet = Alphabet('word')
        self.label_alphabet = Alphabet('label', True)
        self.tagScheme = 'NoSeg'  # BMES/BIO
        self.split_token = '\t'
        self.seg = True
        # I/O
        self.train_dir = None
        self.dev_dir = None
        self.test_dir = None
        self.decode_dir = None
        self.dset_dir = None  # data vocabulary related file
        self.model_dir = None  # model save file
        self.load_model_dir = None  # model load file
        self.result_true_path = None
        self.result_pred_path = None
        self.word_emb_dir = None
        self.train_texts = []
        self.dev_texts = []
        self.test_texts = []
        self.train_Ids = []
        self.dev_Ids = []
        self.test_Ids = []
        self.pretrain_word_embedding = None
        self.pretrain_feature_embeddings = []
        self.label_size = 0
        self.word_alphabet_size = 0
        self.label_alphabet_size = 0
        self.feature_alphabet_sizes = []
        self.feature_emb_dims = []
        self.norm_feature_embs = []
        self.word_emb_dim = 50
        # Networks
        self.use_crf = True
        self.word_feature_extractor = 'LSTM'  # 'LSTM'/'CNN'/'GRU'
        self.use_bert = False
        self.bert_dir = None
        # Training
        self.average_batch_loss = False
        self.optimizer = 'SGD'  # 'SGD'/'AdaGrad'/'AdaDelta'/'RMSProp'/'Adam'
        self.status = 'train'
        # Hyperparameters
        self.HP_iteration = 100
        self.HP_batch_size = 10
        self.HP_hidden_dim = 200
        self.HP_dropout = 0.5
        self.HP_lstm_layer = 1
        self.HP_bilstm = True
        self.HP_gpu = False
        self.HP_lr = 0.015
        self.HP_lr_decay = 0.05
        self.HP_clip = None
        self.HP_momentum = 0
        self.HP_l2 = 1e-8

    def show_data_summary(self):
        print('++' * 50)
        print('DATA SUMMARY START:')
        print(' I/O:')
        print('     Start Sequence Labeling task...')
        print('     Tag scheme: %s' % (self.tagScheme))
        print('     Split token: %s' % (self.split_token))
        print('     MAX SENTENCE LENGTH: %s' % (self.MAX_SENTENCE_LENGTH))
        print('     Number normalized: %s' % (self.number_normalized))
        print('     Word alphabet size: %s' % (self.word_alphabet_size))
        print('     Label alphabet size: %s' % (self.label_alphabet_size))
        print('     Word embedding dir: %s' % (self.word_emb_dir))
        print('     Word embedding size: %s' % (self.word_emb_dim))
        print('     Norm word emb: %s' % (self.norm_word_emb))
        print('     Train file directory: %s' % (self.train_dir))
        print('     Dev file directory: %s' % (self.dev_dir))
        print('     Test file directory: %s' % (self.test_dir))
        print('     Dset file directory: %s' % (self.dset_dir))
        print('     Model file directory: %s' % (self.model_dir))
        print('     Loadmodel directory: %s' % (self.load_model_dir))
        print('     Decode file directory: %s' % (self.decode_dir))
        print('     Train instance number: %s' % (len(self.train_texts)))
        print('     Dev instance number: %s' % (len(self.dev_texts)))
        print('     Test instance number: %s' % (len(self.test_texts)))
        print('     ' + '++' * 20)
        print(' Model Network:')
        print('     Model use_crf: %s' % (self.use_crf))
        print('     Model word extractor: %s' % (self.word_feature_extractor))
        print('     ' + '++' * 20)
        print(' Training:')
        print('     Optimizer: %s' % (self.optimizer))
        print('     Iteration: %s' % (self.HP_iteration))
        print('     BatchSize: %s' % (self.HP_batch_size))
        print('     Average batch loss: %s' % (self.average_batch_loss))
        print('     ' + '++' * 20)
        print(' Hyperparameters:')
        print('     Hyper lr: %s' % (self.HP_lr))
        print('     Hyper lr_decay: %s' % (self.HP_lr_decay))
        print('     Hyper HP_clip: %s' % (self.HP_clip))
        print('     Hyper momentum: %s' % (self.HP_momentum))
        print('     Hyper l2: %s' % (self.HP_l2))
        print('     Hyper hidden_dim: %s' % (self.HP_hidden_dim))
        print('     Hyper dropout: %s' % (self.HP_dropout))
        print('     Hyper lstm_layer: %s' % (self.HP_lstm_layer))
        print('     Hyper bilstm: %s' % (self.HP_bilstm))
        print('     Hyper GPU: %s' % (self.HP_gpu))
        print('DATA SUMMARY END.')
        print('++' * 50)
        sys.stdout.flush()

    def build_alphabet(self, input_file):
        in_lines = open(input_file, 'r').readlines()
        for line in in_lines:
            if len(line) > 2:
                # sequence labeling data format, i.e. CoNLL 2003
                pairs = line.strip('\n').split('\t')
                if len(pairs) < 2:
                    continue
                word = pairs[0]
                if self.number_normalized:
                    word = normalize_word(word)
                label = pairs[-1]
                self.label_alphabet.add(label)
                self.word_alphabet.add(word)
        self.word_alphabet_size = self.word_alphabet.size()
        self.label_alphabet_size = self.label_alphabet.size()
        start_S = False
        start_B = False
        for label, _ in self.label_alphabet.iteritems():
            if 'S-' in label.upper():
                start_S = True
            elif 'B-' in label.upper():
                start_B = True
        if start_B:
            if start_S:
                self.tagScheme = 'BMES'
            else:
                self.tagScheme = 'BIO'

    def fix_alphabet(self):
        self.word_alphabet.close()
        self.label_alphabet.close()

    def build_pretrain_emb(self):
        if self.word_emb_dir:
            print('Load pretrained word embedding, norm: %s, dir: %s' % (self.norm_word_emb, self.word_emb_dir))
            self.pretrain_word_embedding, self.word_emb_dim = build_pretrain_embedding(
                self.word_emb_dir, self.word_alphabet, self.word_emb_dim, self.norm_word_emb)

    def generate_instance(self, name):
        self.fix_alphabet()
        if name == 'train':
            self.train_texts, self.train_Ids = read_instance(
                self.train_dir, self.word_alphabet, self.label_alphabet,
                self.number_normalized, self.MAX_SENTENCE_LENGTH, self.split_token)
        elif name == 'dev':
            self.dev_texts, self.dev_Ids = read_instance(
                self.dev_dir, self.word_alphabet, self.label_alphabet,
                self.number_normalized, self.MAX_SENTENCE_LENGTH, self.split_token)
        elif name == 'test':
            self.test_texts, self.test_Ids = read_instance(
                self.test_dir, self.word_alphabet, self.label_alphabet,
                self.number_normalized, self.MAX_SENTENCE_LENGTH, self.split_token)
        else:
            print('Error: you can only generate train/dev/test instance! Illegal input:%s' % (name))

    def write_decoded_results(self, predict_results, name):
        sent_num = len(predict_results)
        content_list = []
        if name == 'train':
            content_list = self.train_texts
        elif name == 'dev':
            content_list = self.dev_texts
        elif name == 'test':
            content_list = self.test_texts
        else:
            print('Error: illegal name during writing predict result, name should be within train/dev/test !')
        assert (sent_num == len(content_list))
        fout = open(self.decode_dir, 'w')
        for idx in range(sent_num):
            sent_length = len(predict_results[idx])
            fout.write(content_list[idx][-1] + '\n')
            for idy in range(sent_length):
                # content_list[idx] is a list with [word, char, label]
                try:  # will fail with python3
                    fout.write(content_list[idx][0][idy].encode('utf-8') + ' ' + predict_results[idx][idy] + '\n')
                except:
                    fout.write(content_list[idx][0][idy] + ' ' + predict_results[idx][idy] + '\n')
            fout.write('\n')
        fout.close()
        print('Predict %s result has been written into file. %s' % (name, self.decode_dir))

    def load(self, data_file):
        f = open(data_file, 'rb')
        tmp_dict = pickle.load(f)
        f.close()
        self.__dict__.update(tmp_dict)

    def save(self, save_file):
        f = open(save_file, 'wb')
        pickle.dump(self.__dict__, f, 2)
        f.close()

    def read_config(self, config_file):
        project_root_path = os.path.dirname(os.getcwd())
        config = config_file_to_dict(config_file)
        # read data (file locations are resolved relative to the project root):
        if 'train_dir' in config:
            self.train_dir = os.path.join(project_root_path, config['train_dir'])
        if 'dev_dir' in config:
            self.dev_dir = os.path.join(project_root_path, config['dev_dir'])
        if 'test_dir' in config:
            self.test_dir = os.path.join(project_root_path, config['test_dir'])
        if 'decode_dir' in config:
            self.decode_dir = os.path.join(project_root_path, config['decode_dir'])
        if 'dset_dir' in config:
            self.dset_dir = os.path.join(project_root_path, config['dset_dir'])
        if 'model_dir' in config:
            self.model_dir = os.path.join(project_root_path, config['model_dir'])
        if 'load_model_dir' in config:
            self.load_model_dir = os.path.join(project_root_path, config['load_model_dir'])
        if 'result_true_path' in config:
            self.result_true_path = os.path.join(project_root_path, config['result_true_path'])
        if 'result_pred_path' in config:
            self.result_pred_path = os.path.join(project_root_path, config['result_pred_path'])
        if 'word_emb_dir' in config:
            self.word_emb_dir = config['word_emb_dir']
        if 'MAX_SENTENCE_LENGTH' in config:
            self.MAX_SENTENCE_LENGTH = int(config['MAX_SENTENCE_LENGTH'])
        if 'norm_word_emb' in config:
            self.norm_word_emb = str2bool(config['norm_word_emb'])
        if 'number_normalized' in config:
            self.number_normalized = str2bool(config['number_normalized'])
        if 'seg' in config:
            self.seg = str2bool(config['seg'])
        if 'word_emb_dim' in config:
            self.word_emb_dim = int(config['word_emb_dim'])
        # read network:
        if 'use_crf' in config:
            self.use_crf = str2bool(config['use_crf'])
        if 'word_seq_feature' in config:
            self.word_feature_extractor = config['word_seq_feature']
        if 'use_bert' in config:
            self.use_bert = str2bool(config['use_bert'])
        if 'bert_dir' in config:
            self.bert_dir = config['bert_dir']
        # read training setting:
        if 'optimizer' in config:
            self.optimizer = config['optimizer']
        if 'ave_batch_loss' in config:
            self.average_batch_loss = str2bool(config['ave_batch_loss'])
        if 'status' in config:
            self.status = config['status']
        # read hyperparameters:
        if 'iteration' in config:
            self.HP_iteration = int(config['iteration'])
        if 'batch_size' in config:
            self.HP_batch_size = int(config['batch_size'])
        if 'hidden_dim' in config:
            self.HP_hidden_dim = int(config['hidden_dim'])
        if 'dropout' in config:
            self.HP_dropout = float(config['dropout'])
        if 'lstm_layer' in config:
            self.HP_lstm_layer = int(config['lstm_layer'])
        if 'bilstm' in config:
            self.HP_bilstm = str2bool(config['bilstm'])
        if 'gpu' in config:
            self.HP_gpu = str2bool(config['gpu'])
        if 'learning_rate' in config:
            self.HP_lr = float(config['learning_rate'])
        if 'lr_decay' in config:
            self.HP_lr_decay = float(config['lr_decay'])
        if 'clip' in config:
            self.HP_clip = float(config['clip'])
        if 'momentum' in config:
            self.HP_momentum = float(config['momentum'])
        if 'l2' in config:
            self.HP_l2 = float(config['l2'])


def config_file_to_dict(input_file):
    config = {}
    fins = open(input_file, 'r').readlines()
    for line in fins:
        if len(line) > 0 and line[0] == '#':
            continue
        if '=' in line:
            pair = line.strip().split('#', 1)[0].split('=', 1)
            item = pair[0]
            if item == 'feature':
                if item not in config:
                    feat_dict = {}
                    config[item] = feat_dict
                feat_dict = config[item]
                new_pair = pair[-1].split()
                feat_name = new_pair[0]
                one_dict = {}
                one_dict['emb_dir'] = None
                one_dict['emb_size'] = 10
                one_dict['emb_norm'] = False
                if len(new_pair) > 1:
                    for idx in range(1, len(new_pair)):
                        conf_pair = new_pair[idx].split('=')
                        if conf_pair[0] == 'emb_dir':
                            one_dict['emb_dir'] = conf_pair[-1]
                        elif conf_pair[0] == 'emb_size':
                            one_dict['emb_size'] = int(conf_pair[-1])
                        elif conf_pair[0] == 'emb_norm':
                            one_dict['emb_norm'] = str2bool(conf_pair[-1])
                feat_dict[feat_name] = one_dict
                # print("feat", feat_dict)
            else:
                if item in config:
                    print('Warning: duplicated config item found: %s, updated.' % (pair[0]))
                config[item] = pair[-1]
    return config


def str2bool(string):
    if string == 'True' or string == 'true' or string == 'TRUE':
        return True
    else:
        return False
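
# Example config file consumed by Data.read_config (the key names come from the parser above;
# the values shown here are placeholders, not the project's actual settings):
#   train_dir=doc_event/data/train.txt
#   dev_dir=doc_event/data/dev.txt
#   test_dir=doc_event/data/test.txt
#   model_dir=doc_event/output/model
#   dset_dir=doc_event/output/model.dset
#   decode_dir=doc_event/output/decode.txt
#   status=train
#   optimizer=SGD
#   iteration=100
#   batch_size=10
#   hidden_dim=200
#   learning_rate=0.015
#   bilstm=True
#   gpu=False
# Relative paths are joined with the parent of the current working directory (the project root).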
zzsn-nlp
/zzsn_nlp-0.0.1.tar.gz/zzsn_nlp-0.0.1/doc_event/data/data_loader.py
data_loader.py
import re

import xlrd
from sklearn.model_selection import train_test_split


def xlsx2list(xlsx_path=None) -> list:
    # open the excel workbook
    wb = xlrd.open_workbook(xlsx_path)
    # locate the worksheet by name
    sh = wb.sheet_by_name('Sheet1')
    print(sh.nrows)  # number of valid data rows
    print(sh.ncols)  # number of valid data columns
    print(sh.cell(0, 0).value)  # value of the first row, first column
    print(sh.row_values(0))  # all values of the first row
    # combine the header row with the first data row into a dict
    print(dict(zip(sh.row_values(0), sh.row_values(1))))
    # iterate over the sheet and collect all rows
    total_list = list()
    for i in range(sh.nrows):
        row = sh.row_values(i)
        total_list.append({
            'title': row[1].replace('\n', '').replace('\r', '').replace('\t', ''),
            'content': row[2].replace('\n', '').replace('\r', '').replace('\t', ''),
            'amount_of_cooperation': row[3].split(';') if len(row[3]) > 0 else None,
            'project_name': row[4].split(',') if len(row[4]) > 0 else None,
            'state': row[5].split(',') if len(row[5]) > 0 else None,
            'company_identification_Party_A': row[6].split(',') if len(row[6]) > 0 else None,
            'company_identification_Party_B': row[7].split(',') if len(row[7]) > 0 else None,
            'project_cycle': row[8].split(',') if len(row[8]) > 0 else None,
            'project_status': row[9].split(',') if len(row[9]) > 0 else None,
        })
    total_list = total_list[3:]
    return total_list


def stats(content=None, com_list=None) -> list:
    result_list = list()
    for com in com_list:
        pattern = re.compile(com)
        result = pattern.findall(content)
        result_list.append(len(result))
    return result_list


def sentence2tag(sentence=None):
    title, content = sentence['title'], sentence['content']
    content = title + content
    amount_of_cooperation = sentence['amount_of_cooperation']
    project_name = sentence['project_name']
    state = sentence['state']
    company_identification_Party_A = sentence['company_identification_Party_A']
    company_identification_Party_B = sentence['company_identification_Party_B']
    project_cycle = sentence['project_cycle']
    project_status = sentence['project_status']
    word_list = list(content)
    tag_list = ['O' for c in content]

    if amount_of_cooperation is None:
        pass  # print('None')
    else:
        for aoc in amount_of_cooperation:
            index_list = find_all(content, aoc)
            tag_list = tag_update(tag_list, index_list, aoc, 'amount_of_cooperation')
    if project_name is None:
        pass  # print('None')
    else:
        for pn in project_name:
            index_list = find_all(content, pn)
            tag_list = tag_update(tag_list, index_list, pn, 'project_name')
    if state is None:
        pass  # print('None')
    else:
        for s in state:
            index_list = find_all(content, s)
            tag_list = tag_update(tag_list, index_list, s, 'state')
    if company_identification_Party_A is None:
        pass  # print('None')
    else:
        for ciPA in company_identification_Party_A:
            index_list = find_all(content, ciPA)
            tag_list = tag_update(tag_list, index_list, ciPA, 'company_identification_Party_A')
    if company_identification_Party_B is None:
        pass  # print('None')
    else:
        for ciPB in company_identification_Party_B:
            index_list = find_all(content, ciPB)
            tag_list = tag_update(tag_list, index_list, ciPB, 'company_identification_Party_B')
    if project_cycle is None:
        pass  # print('None')
    else:
        for pc in project_cycle:
            index_list = find_all(content, pc)
            tag_list = tag_update(tag_list, index_list, pc, 'project_cycle')
    if project_status is None:
        pass  # print('None')
    else:
        for ps in project_status:
            index_list = find_all(content, ps[0:2])
            tag_list = tag_update(tag_list, index_list, ps[0:2], 'project_status')

    # sanity check: no empty or whitespace words/tags should remain
    s_word = ['', '\n', '\t']
    s_tag = ['', ' ', '\n', '\t']
    for word, tag in zip(word_list, tag_list):
        if word in s_word:
            print(111111111)
        if tag in s_tag:
            print(11111)
    return word_list, tag_list
    # result_list = stats(content, amount_of_cooperation)


def tag_update(tag_list, index_list, s, tag_name):
    if index_list is False:
        return tag_list
    for index in index_list:
        if judge_all_o(tag_list, index, index + len(s)):
            tag_list[index] = 'B-' + tag_name
            for i in range(index + 1, index + len(s) - 1):
                tag_list[i] = 'I-' + tag_name
            tag_list[index + len(s) - 1] = 'E-' + tag_name
    return tag_list


def judge_all_o(tag_list, index, index_end):
    if tag_list[index][0] == 'O' or tag_list[index][0] == 'B':
        if tag_list[index_end - 1][0] == 'O' or tag_list[index_end - 1][0] == 'E':
            if tag_list[index][0] == 'B':
                pass
            return True
    return False


def find_all(sub, s):
    """
    Find every chunk in the document (sub) that matches the role string (s)
    and return the list of start indices.
    :param sub: document text
    :param s: role string
    :return: index list, or False if no match is found
    """
    if len(s) < 2:
        print('Role string is too short: {}'.format(s))
    index_list = []
    index = sub.find(s)
    while index != -1:
        index_list.append(index)
        index = sub.find(s, index + 1)
    if len(index_list) > 0:
        return index_list
    else:
        print('Event role: {} could not be matched in the document!'.format(s))
        return False


def check_all(result_list):
    for word_list, tag_list in result_list:
        suffix = None
        for word, tag in zip(word_list, tag_list):
            if suffix is None:
                if tag[0] == 'I' or tag[0] == 'E':
                    print(111)
                if tag[0] == 'B':
                    suffix = tag[2:]
            if suffix is not None:
                if tag[0] == 'I' or tag[0] == 'E':
                    if tag[2:] != suffix:
                        print(111)
                if tag[0] == 'O':
                    suffix = None
            if word == ' ':
                if tag[0] != 'O':
                    pass


def write2txt(docs_list, txt_path, typ):
    i = 0
    with open(txt_path, 'w', encoding='utf-8') as f:
        for doc_list in docs_list:
            for word_list, tag_list in doc_list:
                if len(word_list) >= 250:
                    print(len(word_list))
                f.write(typ + '-' + str(i) + '\n')
                for index, word, tag in zip(range(len(word_list)), word_list, tag_list):
                    # if word == ' '
                    f.write(word + '\t' + tag + '\n')
                    if index + 1 == len(word_list):
                        f.write('\n')
                i += 1


def data_split_write2txt(result_list, txt_path, typ):
    """
    data_split + write2txt
    :param result_list: list
    :param txt_path:
    :param typ: train/dev/test
    :return:
    """
    i = 0
    # split_str = ',,、;;。'
    split_str = ';;。'
    # a space ' ' could also be used as a split boundary, i.e. split_str = ',,、;;。 '
    with open(txt_path, 'w', encoding='utf-8') as f:
        for word_list, tag_list in result_list:
            f.write(typ + '-' + str(i) + '\n')
            length = 1
            for index, word, tag in zip(range(len(word_list)), word_list, tag_list):
                f.write(word + '\t' + tag + '\n')
                if index + 1 == len(word_list):
                    f.write('\n')
                elif length > 30 and tag[0] in ['O', 'E'] and word in split_str:
                    f.write('\n' + typ + '-' + str(i) + '\n')
                    length = 1
                elif length > 120 and tag[0] in ['O', 'E']:
                    f.write('\n' + typ + '-' + str(i) + '\n')
                    length = 1
                if length >= 200:
                    print(111111111111111111111111111111111)  # Warning: segment still too long
                length += 1
            i += 1


if __name__ == '__main__':
    s = '27.52亿美元,2,436.03亿元'
    s_list = re.split('元,|币,', s)

    xlsx_path = './sample/total_datasets.xlsx'
    total_list = xlsx2list(xlsx_path=xlsx_path)
    result_list = list()
    for sentence in total_list:
        word_list, tag_list = sentence2tag(sentence)
        result_list.append([word_list, tag_list])
    check_all(result_list)
    train_list, dev_list = train_test_split(
        result_list, test_size=0.1, random_state=2021
    )
    data_split_write2txt(train_list, 'train_1_.txt', 'train')
    data_split_write2txt(dev_list, 'dev_1_.txt', 'dev')
    data_split_write2txt(dev_list, 'test_1_.txt', 'test')
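
# Tagging sketch (illustrative): for a role string such as '1000万元' found in the document,
# find_all returns every start offset and tag_update writes a BIOES-style span over those
# characters, e.g.
#   1 -> B-amount_of_cooperation
#   0 -> I-amount_of_cooperation
#   0 -> I-amount_of_cooperation
#   0 -> I-amount_of_cooperation
#   万 -> I-amount_of_cooperation
#   元 -> E-amount_of_cooperation
# A span is only written where judge_all_o accepts the boundaries (start tag O/B, end tag O/E),
# so existing annotations are not overwritten in the middle.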
zzsn-nlp
/zzsn_nlp-0.0.1.tar.gz/zzsn_nlp-0.0.1/doc_event/data/data_process.py
data_process.py
from __future__ import print_function
import json
import os


class Alphabet:
    def __init__(self, name, label=False, keep_growing=True):
        self.name = name
        self.UNKNOWN = '</unk>'
        self.label = label
        self.instance2index = {}
        self.instances = []
        self.keep_growing = keep_growing
        # Index 0 is occupied by default, all else following.
        self.default_index = 0
        self.next_index = 1
        if not self.label:
            self.add(self.UNKNOWN)

    def clear(self, keep_growing=True):
        self.instance2index = {}
        self.instances = []
        self.keep_growing = keep_growing
        # Index 0 is occupied by default, all else following.
        self.default_index = 0
        self.next_index = 1

    def add(self, instance):
        if instance not in self.instance2index:
            self.instances.append(instance)
            self.instance2index[instance] = self.next_index
            self.next_index += 1

    def get_index(self, instance):
        try:
            return self.instance2index[instance]
        except KeyError:
            if self.keep_growing:
                index = self.next_index
                self.add(instance)
                return index
            else:
                return self.instance2index[self.UNKNOWN]

    def get_instance(self, index):
        if index == 0:
            if self.label:
                return self.instances[0]
            # First index is occupied by the wildcard element.
            return None
        try:
            return self.instances[index - 1]
        except IndexError:
            print('WARNING: Alphabet get_instance, unknown instance, return the first label.')
            return self.instances[0]

    def size(self):
        # if self.label:
        #     return len(self.instances)
        # else:
        return len(self.instances) + 1

    def iteritems(self):
        return self.instance2index.items()

    def enumerate_items(self, start=1):
        if start < 1 or start >= self.size():
            raise IndexError('Enumerate is allowed between [1 : size of the alphabet)')
        return zip(range(start, len(self.instances) + 1), self.instances[start - 1:])

    def close(self):
        self.keep_growing = False

    def open(self):
        self.keep_growing = True

    def get_content(self):
        return {'instance2index': self.instance2index, 'instances': self.instances}

    def from_json(self, data):
        self.instances = data['instances']
        self.instance2index = data['instance2index']

    def save(self, output_directory, name=None):
        """
        Save both alphabet records to the given directory.
        :param output_directory: Directory to save model and weights.
        :param name: The alphabet saving name, optional.
        :return:
        """
        saving_name = name if name else self.name  # fixed: was self.__name, which is never defined
        try:
            json.dump(self.get_content(), open(os.path.join(output_directory, saving_name + '.json'), 'w'))
        except Exception as e:
            print('Exception: Alphabet is not saved: %s' % repr(e))  # fixed: missing %s placeholder

    def load(self, input_directory, name=None):
        """
        Load model architecture and weights from the given directory.
        This allows us to use old models even if the structure changes.
        :param input_directory: Directory to save model and weights
        :return:
        """
        loading_name = name if name else self.name  # fixed: was self.__name, which is never defined
        self.from_json(json.load(open(os.path.join(input_directory, loading_name + '.json'))))
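
# Usage sketch (illustrative only): instance indices start at 1; get_instance(0) returns None
# for non-label alphabets, and once the alphabet is closed unseen instances map to '</unk>'.
#   word_alphabet = Alphabet('word')
#   idx = word_alphabet.get_index('中国')   # added on the fly while keep_growing is True
#   word_alphabet.get_instance(idx)          # -> '中国'
#   word_alphabet.close()                    # freeze; unknown words now resolve to UNKNOWN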
zzsn-nlp
/zzsn_nlp-0.0.1.tar.gz/zzsn_nlp-0.0.1/doc_event/utils/alphabet.py
alphabet.py