# SF4wD #
four-component stochastic frontier model with determinants
## Motivation ##
This package was developed to complement the four-component stochastic frontier model that considers
determinants in the mean and variance parameters of the inefficiency distributions,
by Ruei-Chi Lee.
## Installation ##
Install via `$ pip install 4SFwD`
## Features ##
* **SF4wD**: main.py - set the method and model to run a simulation or real-data estimation
* **HMC**: Hamiltonian Monte Carlo designed for the determinant parameters.
* **DA**: Data augmentation for the model.
* **TK**: Two-parametrization method originally proposed by Tsionas and Kumbhakar (2014) for the four-component model without determinants.
* **PMCMC**: Particle MCMC for the model (preferred approach) - sped up by GPU parallel computation.
## Example ##
Here is how to run a simulation estimation for a four-component stochastic frontier model via PMCMC:
- Parameter-setting guidelines are in SF4wD.py.
- Simulation data is only provided for the stochastic frontier model that considers determinants in both the mean and variance parameters of the inefficiencies.
```python
from SF4wD import SF4wD
#model:str - different way to consider determinants
#method:str - different Bayesian method to estimate the model
#data_name : str - simulation data or data in data/.
#S : int - MCMC length
#H : int - number of particles in PMCMC
#gpu: boolean - use parallel computation to run PMCMC
#save: boolean - save MCMC data
my_model = SF4wD(model='D', method='PMCMC', data_name='', S=10, H=100, gpu=False, save=False)
my_model.run()
```
output:
```python
mean sd hpd_3% hpd_97% mcse_mean mcse_sd ess_mean ess_sd ess_bulk ess_tail r_hat
beta0 2.412 0.093 2.318 2.555 0.046 0.035 4.0 4.0 7.0 10.0 NaN
beta1 1.078 0.074 0.977 1.242 0.023 0.017 10.0 10.0 10.0 10.0 NaN
xi0 0.580 0.043 0.531 0.652 0.014 0.011 9.0 9.0 8.0 10.0 NaN
xi1 0.694 0.127 0.479 0.867 0.073 0.058 3.0 3.0 3.0 10.0 NaN
delta0 0.141 0.072 0.013 0.273 0.023 0.019 10.0 8.0 10.0 10.0 NaN
delta1 0.774 0.137 0.620 0.984 0.079 0.063 3.0 3.0 3.0 10.0 NaN
z0 -0.461 0.716 -1.844 0.609 0.376 0.291 4.0 4.0 4.0 10.0 NaN
z1 2.728 0.889 1.268 3.941 0.459 0.354 4.0 4.0 4.0 10.0 NaN
gamma0 0.662 0.092 0.500 0.773 0.052 0.041 3.0 3.0 3.0 10.0 NaN
gamma1 0.412 0.061 0.349 0.519 0.021 0.015 9.0 9.0 9.0 10.0 NaN
sigma_alpha_sqr 1.377 0.178 1.095 1.693 0.075 0.057 6.0 6.0 6.0 10.0 NaN
sigma_v_sqr 2.575 2.523 1.290 9.515 1.062 0.793 6.0 6.0 3.0 10.0 NaN
```
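The table above resembles an ArviZ-style posterior summary. As a rough, hypothetical sketch (the draws array, its size, and its distribution are stand-ins, not part of the package), the headline columns for a single parameter can be computed from a stored chain like this:

```python
import numpy as np

# Stand-in for a stored MCMC chain of beta0 (hypothetical values).
rng = np.random.default_rng(0)
beta0 = rng.normal(loc=2.41, scale=0.09, size=10_000)

mean = beta0.mean()                     # "mean" column
sd = beta0.std(ddof=1)                  # "sd" column
lo, hi = np.percentile(beta0, [3, 97])  # central 94% interval, akin to hpd_3%/hpd_97%
```

The remaining columns (mcse, ess, r_hat) are convergence diagnostics computed across chains rather than simple moments of a single chain.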
## License ##
Ruei-Chi Lee is the main author and contributor.
Bug reports, feature requests, and questions are welcome, preferably
on the GitHub page.
| 4SFwD | /4SFwD-0.0.2.tar.gz/4SFwD-0.0.2/README.md | README.md |
import setuptools

with open("README.md", "r") as fh:
    long_description = fh.read()

setuptools.setup(
    name='4SFwD',
    version="0.0.2",
    author='Ruei-Chi Lee',
    author_email='axu3bjo4fu6@gmail.com',
    description='four-component stochastic frontier model with determinants',
    long_description=long_description,
    long_description_content_type="text/markdown",
    url='https://github.com/rickylee318/sf_with_determinants',
    packages=setuptools.find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
    python_requires='>=3.6',
) | 4SFwD | /4SFwD-0.0.2.tar.gz/4SFwD-0.0.2/setup.py | setup.py |
########################################################################
# $Header: /var/local/cvsroot/4Suite/README,v 1.12.4.3 2006/10/19 22:02:59 mbrown Exp $
#
# 4Suite README
#
4SUITE CONTENTS
===============
4Suite is a suite of Python modules for XML and RDF processing.
Its major components include the following:
* Ft.Xml.Domlette: A very fast, lightweight XPath-oriented DOM.
* Ft.Xml.Sax: A very fast SAX 2 parser.
* Ft.Xml.XPath: An XPath 1.0 implementation for Domlette documents.
* Ft.Xml.Xslt: A robust XSLT 1.0 processor.
* Ft.Xml.XUpdate: An XUpdate processor.
* Ft.Lib: Various support libraries that can be used independently.
* Ft.Rdf: RDF processing tools, including a query/inference language.
* Ft.Server: An integrated document & RDF repository with web access.
4Suite also includes convenient command-line tools:
* 4xml: XML document parsing and reserialization.
* 4xpath: XPath expression evaluation.
* 4xslt: XSLT processing engine.
* 4xupdate: XUpdate processing.
* 4rdf: RDF/XML parsing, persistence, querying and reserialization.
* 4ss_manager: Document/RDF repository administration.
* 4ss: Document/RDF repository user commands.
Effective version 1.0b2, Ft.Lib and Ft.Xml are distributed as the
"4Suite XML" release package. The Ft.Rdf and Ft.Server components will
be packaged as separate add-ons after the 4Suite XML 1.0 release.
If you need RDF or repository functionality before then, you must use
the monolithic 4Suite 1.0b1 release for now.
MINIMUM PREREQUISITES
=====================
* General requirements:
(1) The underlying platform must be either POSIX or Windows.
POSIX means any Unix-like OS, such as a major Linux distro,
FreeBSD, OpenBSD, NetBSD, Solaris, Cygwin, Mac OS X, etc.
Windows means Windows 2000, XP, or Server 2003. Windows 98, Me,
or NT might work, but no guarantees.
(2) Python 2.2.1 or higher.
(3) If building from source, a C compiler is required.
* Additional requirements for certain features:
* Triclops (RDF graph visualizer in repository Dashboard) - GraphViz
(any version with the executable 'dot' or 'dot.exe').
RECOMMENDATIONS
===============
* Use Python 2.3.5 or 2.4.4.
* Use an official python.org Python distribution, not ActiveState's.
* If PyXML is installed, make sure it is the latest version.
* If installing PyXML after 4Suite, install PyXML with --without-xpath.
OS-SPECIFIC INSTALLATION NOTES
==============================
* On POSIX, if building from source, the install step will result in a
build, if it hasn't been done already. The user doing the install
must have permission to write to all of the installation directories,
so it is typical to do the install, if not also the build, as root.
If you want to do the build step as a regular user, do it first with
'python setup.py build' as the regular user, then su to root, and run
'python setup.py install'.
* Some Linux distros come with old versions of 4Suite. Try to remove
all traces of the old versions before installing the new.
* Some POSIX platforms come with prebuilt versions of Python. Ensure
that the version you are using meets 4Suite's minimum prerequisites.
Some Python installations are missing libs and C headers, were built
with unusual options, or have other quirks that interfere with
building and using 4Suite. Affected users may need to replace their
Python installation, perhaps by building Python from source.
* On Windows, if installing with the self-extracting .exe, keys from a
standard Python distribution from python.org must be present in the
Registry.
* On Mac OS X, it is recommended by the pythonmac-sig to use the
universal installer for both PPC and Intel Macs instead of the system
supplied (Apple's) Python.
GENERAL INSTALLATION
====================
On Windows, if installing from self-extracting .exe:
1. Just run the installer.
On Red Hat Linux, if installing from .rpm archive:
1. Use 'rpm' in the normal way.
On POSIX or Windows, if building from source:
1. Unpack the source distribution.
2. cd 4Suite
3. As root, run 'python setup.py install'
For custom build and installation options, see
'python setup.py --help'
'python setup.py config --help'
'python setup.py build --help'
'python setup.py install --help'
See more detailed instructions at
http://4suite.org/docs/UNIX.xml (POSIX)
http://4Suite.org/docs/Windows.xml (Windows)
POST-INSTALL TESTING
====================
Extensive regression tests are bundled with 4Suite. After installation,
you can go to the Tests directory (its installed location varies by
platform) and follow the instructions in the README there.
DOCUMENTATION
=============
Documentation is piecemeal and always a work-in-progress; sorry.
As mentioned, detailed instructions for installation are on 4suite.org.
Detailed instructions for setting up and using some of the repository
features of 4Suite are at http://4suite.org/docs/QuickStart.xml
An installation layout guide that describes common install locations
and how the current installation system works is available at
http://4suite.org/docs/installation-locations.xhtml
Python API docs (in XML and HTML) can be generated when building from
source by adding the option '--with-docs' to the setup.py invocation.
These will end up in a documentation directory during the install;
the exact location varies depending on the '--docdir' option.
Pre-generated API docs (HTML only) can be downloaded from 4suite.org
or from the 4Suite project page on SourceForge.
A detailed users' manual covering 4Suite's XML processing features is
available for viewing online at http://4suite.org/docs/CoreManual.xml.
The HTML version is generated and distributed with the API docs.
Many helpful and important docs can be found in Uche's Akara at
http://uche.ogbuji.net/tech/akara/4suite/
Any questions not answered by these docs should be asked on the 4Suite
mailing list. See http://lists.fourthought.com/mailman/listinfo/4suite
Developers and users can also confer via IRC on irc.freenode.net
channel #4suite.
ENVIRONMENT VARIABLES
=====================
None of these are necessary for a basic installation to work;
this list is just for reference.
FTSERVER_CONFIG_FILE = The absolute path of the repository config file.
Required if you want to use the repository features of 4Suite.
FT_DATABASE_DIR = The directory to use for filesystem-based repository
database. Optional (will default) but recommended if using the
FlatFile repository driver.
FTSS_USERNAME = Repository username to use when invoking 4ss command-
line tools, to avoid being prompted. This is overridden by
'4ss agent' or '4ss login' settings. Optional.
FTSS_PASSWORD_FILE = The absolute path of the file in which to store
4ss command-line tool login information. Used by '4ss login'.
Optional (will default to a file in the user's home directory, or
the Windows folder on Windows).
XML_CATALOG_FILES = The absolute paths or URIs of XML or TR9401 Catalogs
to use. Optional. Used by Ft.Xml.Catalog at import time. Items in the
list must be separated by os.pathsep (";" on Windows, ":" on POSIX).
XSLTINCLUDE = The absolute paths from which alternative URIs for the
XSLT stylesheet will be derived, for the purpose of extending the
resolution capability of xsl:include and xsl:import instructions.
Optional. Used by the 4xslt command-line tool only.
EXTMODULES = The names of Python modules that define XPath extension
functions and/or XSLT extension elements. Multiple modules must be
separated in the list by ":". Optional (this info can also be set
directly on instances of Ft.Xml.XPath.Context.Context or
Ft.Xml.Xslt.Processor.Processor).
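For example, a Bourne-shell profile for a repository install might set
the following (all paths below are hypothetical, not defaults):

```shell
# Hypothetical values; adjust the paths to your own install.
export FTSERVER_CONFIG_FILE=/usr/local/lib/4Suite/4ss.conf
export FT_DATABASE_DIR=/var/local/4suite/db
# Two catalogs, separated by os.pathsep (":" on POSIX, ";" on Windows).
export XML_CATALOG_FILES=/etc/xml/catalog:/usr/local/share/xml/catalog.xml
```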
UPGRADING
=========
Detailed instructions are not available, sorry.
Upgrading 4Suite from 0.11.1:
Remove all traces of 0.11.1 *and* PyXML first, since they were
integrated. Unset environment variables that were related to the
old version of 4Suite. Check your PATH; 4Suite 0.11.1 installed
command-line scripts to a different location than what you need now.
Also, update any Python scripts that you may have that rely on the
old APIs to use the new; for example, use Ft.Xml.XPath and Ft.Xml.Xslt
instead of xml.xpath and xml.xslt.
Upgrading from 0.12.0a1, 0.12.0a2, 0.12.0a3, 1.0a1, 1.0a3:
Installation locations varied; remove as much as you can first.
Check your PATH; as of 4Suite 1.0a4, the command-line scripts
are installed to a different location than before, but the old
scripts will not be removed when the new ones are installed.
Repository users:
Upgrading can be tricky. First read
http://lists.fourthought.com/pipermail/4suite/2004-October/012933.html
Also, if there is a 4ss.conf in the same location where
the default server config file will be installed (e.g., in
/usr/local/lib/4Suite on Unix), it will be renamed, so be sure
that your FTSERVER_CONFIG_FILE still points to your own config file
(it's a good idea to move it out of the way of future upgrades).
Upgrading from 1.0a4, 1.0b1, 1.0b2, 1.0b3, 1.0rc1, 1.0rc2:
There are no special instructions for upgrading from these versions.
Keeping up-to-date with current development code:
See the CVS instructions at http://4suite.org/docs/4SuiteCVS.xml
| 4Suite-XML | /4Suite-XML-docs-1.0.2.zip/4Suite-XML-docs-1.0.2/README | README |
# -*- coding: utf-8 -*-
"""
Created on Wed Jun 17 11:23:15 2020

@author: Kumar Awanish
"""
import pickle

import pandas
from sklearn.ensemble import RandomForestRegressor


def training_features(training_features_path):
    """Read the training-features CSV file."""
    return pandas.read_csv(training_features_path)


def training_targets(training_targets_path):
    """Read the training-targets CSV file."""
    return pandas.read_csv(training_targets_path)


def prediction_features(prediction_features_path):
    """Read the prediction-features CSV file."""
    return pandas.read_csv(prediction_features_path)


def model_training(training_features_path, training_targets_path):
    """Train the model on the training features and targets."""
    # The parameters are file paths; naming them the same as the reader
    # functions above would shadow those functions.
    features = training_features(training_features_path)
    features.drop(labels=['datetime'], inplace=True, axis=1)
    labels = training_targets(training_targets_path)
    labels.drop(labels=['datetime'], inplace=True, axis=1)
    clf = RandomForestRegressor(max_depth=2, random_state=0)
    clf.fit(features, labels)
    # Save the fitted model to a file in the current working directory.
    pkl_filename = "pickle_model.pkl"
    with open(pkl_filename, 'wb') as file:
        pickle.dump(clf, file)
    return clf


def model_prediction(prediction_features_path, pkl_filename):
    """Predict the outcomes for the prediction features."""
    features = prediction_features(prediction_features_path)
    features.drop(labels=['datetime'], inplace=True, axis=1)
    # Load the fitted model from file.
    with open(pkl_filename, 'rb') as file:
        pickle_model = pickle.load(file)
    # Predict with the restored model.
    return pickle_model.predict(features)
| 4cast-awi-package | /4cast_awi_package-0.0.4-py3-none-any.whl/4cast_package/4cast_package.py | 4cast_package.py |
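The save/load pattern used by `model_training` and `model_prediction` above is a plain pickle round-trip. A minimal, self-contained sketch of that pattern (a stand-in dict replaces the fitted `RandomForestRegressor`, and the temp-file path is illustrative only):

```python
import os
import pickle
import tempfile

# Stand-in for the fitted model object; the real code pickles a regressor.
model = {"max_depth": 2, "random_state": 0}

pkl_filename = os.path.join(tempfile.gettempdir(), "pickle_model.pkl")
with open(pkl_filename, "wb") as f:
    pickle.dump(model, f)       # what model_training does after fitting

with open(pkl_filename, "rb") as f:
    restored = pickle.load(f)   # what model_prediction does before predicting
```

One design caveat of this pattern: the pickle format ties the saved model to compatible library versions, so the same scikit-learn version should be used for training and prediction.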
# -*- coding: utf-8 -*-
"""
Created on Wed Jun 17 11:31:15 2020

@author: Kumar Awanish
"""
from .mypackage import (  # noqa: F401
    training_features,
    training_targets,
    prediction_features,
    model_training,
    model_prediction,
)
| 4cast-awi-package | /4cast_awi_package-0.0.4-py3-none-any.whl/4cast_package/__init__.py | __init__.py |
# -*- coding: utf-8 -*-
"""
Created on Wed Jun 17 11:23:15 2020

@author: Kumar Awanish
"""
import pickle

import pandas
from sklearn.ensemble import RandomForestRegressor


def training_features(training_features_path):
    """Read the training-features CSV file."""
    return pandas.read_csv(training_features_path)


def training_targets(training_targets_path):
    """Read the training-targets CSV file."""
    return pandas.read_csv(training_targets_path)


def prediction_features(prediction_features_path):
    """Read the prediction-features CSV file."""
    return pandas.read_csv(prediction_features_path)


def model_training(training_features_path, training_targets_path):
    """Train the model on the training features and targets."""
    # The parameters are file paths; naming them the same as the reader
    # functions above would shadow those functions.
    features = training_features(training_features_path)
    features.drop(labels=['datetime'], inplace=True, axis=1)
    labels = training_targets(training_targets_path)
    labels.drop(labels=['datetime'], inplace=True, axis=1)
    clf = RandomForestRegressor(max_depth=2, random_state=0)
    clf.fit(features, labels)
    # Save the fitted model to a file in the current working directory.
    pkl_filename = "pickle_model.pkl"
    with open(pkl_filename, 'wb') as file:
        pickle.dump(clf, file)
    return clf


def model_prediction(prediction_features_path, pkl_filename):
    """Predict the outcomes for the prediction features."""
    features = prediction_features(prediction_features_path)
    features.drop(labels=['datetime'], inplace=True, axis=1)
    # Load the fitted model from file.
    with open(pkl_filename, 'rb') as file:
        pickle_model = pickle.load(file)
    # Predict with the restored model.
    return pickle_model.predict(features)
| 4cast-package | /4cast_package-0.0.4-py3-none-any.whl/4cast_package/4cast_package.py | 4cast_package.py |
# -*- coding: utf-8 -*-
"""
Created on Wed Jun 17 11:31:15 2020

@author: Kumar Awanish
"""
from .mypackage import (  # noqa: F401
    training_features,
    training_targets,
    prediction_features,
    model_training,
    model_prediction,
)
| 4cast-package | /4cast_package-0.0.4-py3-none-any.whl/4cast_package/__init__.py | __init__.py |
This is a 4chan downloader. It will run until the thread dies or is pruned.

Usage: 4cdl thread_url [-t number]

-t number : number of download threads
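The downloader splits the thread URL into its board and thread-id parts before polling the API. A minimal sketch of that parsing (mirroring parse_thread_URL in 4cdl.py, but written for Python 3's urllib):

```python
from urllib.parse import urlparse

def parse_thread_url(url):
    # Path looks like /<board>/thread/<id>; grab the 1st and 3rd segments.
    parts = urlparse(url).path.split('/')
    return parts[1], parts[3]

board, thread_id = parse_thread_url("http://boards.4chan.org/biz/thread/1873336")
# board == "biz", thread_id == "1873336"
```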
| 4cdl | /4cdl-0.1.tar.gz/4cdl-0.1/README.txt | README.txt |
from urlparse import urlparse
import argparse
import httplib
import urllib2
import re
import time
import json
import os
import threading

sleep_time = 10
wait_thread_sleep_time = 2
cache_string = "Ran out of free download threads. Retrying after " + \
    str(wait_thread_sleep_time) + " seconds"
number_of_thread = 10


class downloadThread(threading.Thread):
    def __init__(self, url, folder):
        threading.Thread.__init__(self)
        self.url = url
        self.folder = folder

    def run(self):
        print "Starting download thread for " + self.url
        download(self.url, self.folder)
        print "Exiting download thread for " + self.url


def download(url, folder):
    file_name = '.\\' + folder + '\\' + url.split('/')[-1]
    if not os.path.exists('.\\' + folder + '\\'):
        os.makedirs('.\\' + folder + '\\')
    headers = {'User-Agent': 'Mozilla/5.0'}
    req = urllib2.Request(url, None, headers)
    u = urllib2.urlopen(req)
    meta = u.info()
    file_size = int(meta.getheaders("Content-Length")[0])
    # Skip the download if the file is already complete on disk
    if os.path.isfile(file_name) and file_size == os.stat(file_name).st_size:
        print "File " + file_name + " is already downloaded"
        return
    # Begin download
    file_size_dl = 0
    block_sz = 1024
    with open(file_name, 'wb') as f:
        while True:
            buffer = u.read(block_sz)
            if not buffer:
                break
            file_size_dl += len(buffer)
            f.write(buffer)
            status = r" [%3.2f%%]" % (file_size_dl * 100. / file_size)
            print "Downloading:" + file_name + status


def check_thread(board, sid):
    prev_img_list = []
    while True:
        myConnection = httplib.HTTPSConnection("a.4cdn.org")
        myConnection.request("GET", "/" + board + "/thread/" + sid + ".json")
        reply = myConnection.getresponse()
        print reply.status, reply.reason
        if reply.status == 404:
            print "404 Not found. Please check the URL again!"
            break
        temp_json = reply.read()
        img_list = re.findall(r'"filename":".+?".+?"tim":.+?,', temp_json)
        if not os.path.exists('.\\' + board + sid + '\\'):
            os.makedirs('.\\' + board + sid + '\\')
        with open('.\\' + board + sid + '\\' + sid + ".json", 'wb') as f:
            f.write(temp_json)
        myConnection.close()
        # Download only the images that are new since the last poll
        for i in img_list[len(prev_img_list):]:
            j = json.loads('{' + i[:-1] + '}')
            download_link = \
                "http://i.4cdn.org/" + board + "/" + str(j['tim']) + j['ext']
            print download_link
            while threading.activeCount() == number_of_thread:
                print cache_string
                time.sleep(wait_thread_sleep_time)
            downloadThread(download_link, board + sid).start()
        prev_img_list = img_list
        time.sleep(sleep_time)


def parse_thread_URL(url):
    url_components = urlparse(url).path.split('/')
    return url_components[1], url_components[3]


prog_description = 'Download all images and JSON of a 4chan thread until the '\
    'thread dies. Resume and multi-threaded download supported. '\
    'From the JSON and the images, the original HTML can be generated.'
parser = argparse.ArgumentParser(description=prog_description)
parser.add_argument('threadURL', metavar='Thread_URL',
                    help='the thread URL, for example '
                    'http://boards.4chan.org/biz/thread/1873336')
parser.add_argument('-t', '--thread_num', metavar='number', type=int,
                    default=10,
                    help='the number of download threads, default is 10')
args = parser.parse_args()
number_of_thread = args.thread_num
board, thread_id = parse_thread_URL(args.threadURL)
check_thread(board, thread_id)
| 4cdl | /4cdl-0.1.tar.gz/4cdl-0.1/4cdl.py | 4cdl.py |
import ez_setup
ez_setup.use_setuptools()

from setuptools import setup, find_packages

setup(
    name="4cdl",
    version="0.1",
    packages=find_packages(),
    scripts=['4cdl.py'],
    install_requires=[],
    package_data={},
    # metadata for upload to PyPI
    author="AnhLam",
    author_email="tuananhlam@gmail.com",
    description="Lightweight 4chan downloader",
    license="Free4All",
    keywords="4chan downloader image board",
    url="https://github.com/tuananhlam/4cdl",  # project home page
) | 4cdl | /4cdl-0.1.tar.gz/4cdl-0.1/setup.py | setup.py |
fourch
======
.. _docs: https://4ch.readthedocs.org
.. _repo: https://github.com/plausibility/4ch
fourch (stylized as 4ch) is a wrapper to the 4chan JSON API, provided by moot. It allows you to interact with 4chan (in a READONLY way) easily through your scripts.
Originally <strike>stolen</strike> forked from `e000/py-4chan <https://github.com/e000/py-4chan>`_, but then I moved repos and renamed stuff since I'm pretty bad about that.
Requirements
------------
- Python 2.7 (what I test with, 2.x might work)
- requests
Notes
-----
- This isn't guaranteed to work all the time; after all, the API may change, and 4ch will have to be updated accordingly.
- If a feature is missing, open an issue on the `repo`_, and it may well be implemented.
Running / Usage
---------------
- Install & import: ``$ pip install 4ch``, ``import fourch``
- See the `docs`_
Contributing
------------
If you're interested in contributing to the usability of 4ch, or just want to give away stars, you can visit the 4ch github `repo`_.
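Under the hood the wrapper simply formats 4chan's JSON endpoints. A small sketch of that endpoint construction (the ``urls`` mapping mirrors ``fourch/fourch.py``; the ``https`` scheme and the helper name are assumptions for illustration):

```python
urls = {
    "api": "a.4cdn.org",
    "api_thread": "/{board}/thread/{thread}.json",
}

def thread_endpoint(board, thread):
    # Build the JSON endpoint the library requests for a single thread.
    return "https://" + urls["api"] + urls["api_thread"].format(board=board,
                                                                thread=thread)

endpoint = thread_endpoint("g", 123456)
# endpoint == "https://a.4cdn.org/g/thread/123456.json"
```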
| 4ch | /4ch-1.0.0.tar.gz/4ch-1.0.0/README.rst | README.rst |
"""fourch
======
.. _docs: https://4ch.readthedocs.org
.. _repo: https://github.com/plausibility/4ch
fourch (stylized as 4ch) is a wrapper to the 4chan JSON API, provided by moot. It allows you to interact with 4chan (in a READONLY way) easily through your scripts.
Originally <strike>stolen</strike> forked from `e000/py-4chan <https://github.com/e000/py-4chan>`_, but then I moved repos and renamed stuff since I'm pretty bad about that.
Requirements
------------
- Python 2.7 (what I test with, 2.x might work)
- requests
Notes
-----
- This isn't guaranteed to work all the time; after all, the API may change, and 4ch will have to be updated accordingly.
- If a feature is missing, open an issue on the `repo`_, and it may well be implemented.
Running / Usage
---------------
- Install & import: ``$ pip install 4ch``, ``import fourch``
- See the `docs`_
Contributing
------------
If you're interested in contributing to the usability of 4ch, or just want to give away stars, you can visit the 4ch github `repo`_.
"""
from setuptools import setup

if __name__ != "__main__":
    import sys
    sys.exit(1)

kw = {
    "name": "4ch",
    "version": "1.0.0",
    "description": "Python wrapper for the 4chan JSON API.",
    "long_description": __doc__,
    "url": "https://github.com/sysr-q/4ch",
    "author": "sysr_q",
    "author_email": "chris@gibsonsec.org",
    "license": "MIT",
    "packages": ["fourch"],
    "install_requires": ["requests"],
    "zip_safe": False,
    "keywords": "wrapper 4chan chan json",
    "classifiers": [
        "Development Status :: 4 - Beta",
        "Intended Audience :: Developers",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
        "Programming Language :: Python :: 2"
    ]
}

if __name__ == "__main__":
    setup(**kw)
| 4ch | /4ch-1.0.0.tar.gz/4ch-1.0.0/setup.py | setup.py |
# vim: sw=4 expandtab softtabstop=4 autoindent
import requests
from .reply import Reply


class Thread(object):
    """ This object stores information about the given thread.
        It has a list of fourch.replies, as well as options to
        easily pull in updates (new posts), and create an instance
        with the json of a thread.
    """

    def __init__(self, board, res):
        """ Create the thread instance and initialize variables.

            :param board: the :class:`fourch.Board` parent instance
            :type board: :class:`fourch.Board`
            :param res: the given threads number
            :type res: str or int
        """
        self._board = board
        self.res = res
        self.alive = True
        self.op = None
        self.replies = []
        self.omitted_posts = 0
        self.omitted_images = 0
        # If this is a precached thread, should it get updated?
        self._should_update = False
        # HTTP Last-Modified header for If-Modified-Since
        self._last_modified = None

    def __repr__(self):
        end = ""
        if self.omitted_posts or self.omitted_images:
            end = ", {0} omitted posts, {1} omitted images".format(
                self.omitted_posts, self.omitted_images
            )
        return "<{0} /{1}/{2}, {3} replies{4}>".format(
            self.__class__.__name__,
            self._board.name,
            self.res,
            len(self.replies),
            end
        )
    @staticmethod
    def from_req(board, res, r):
        """ Create a thread object from the given request.
            If the thread has 404d, this will return None,
            and if it isn't 200 OK, it will raise_for_status().
            Actually creates the thread by calling :func:`from_json`.

            :param board: the :class:`fourch.Board` parent instance
            :type board: :class:`fourch.Board`
            :param res: the given threads number
            :type res: str or int
            :param r: the requests object
            :type r: requests.models.Response
        """
        if r.status_code == requests.codes.not_found:
            return None
        elif r.status_code == requests.codes.ok:
            return Thread.from_json(board,
                                    r.json(),
                                    res=res,
                                    last_modified=r.headers["last-modified"])
        else:
            r.raise_for_status()

    @staticmethod
    def from_json(board, json, res=None, last_modified=None):
        """ Create a thread object from the given JSON data.

            :param board: the :class:`fourch.Board` parent instance
            :type board: :class:`fourch.Board`
            :param json: the json data from the 4chan API
            :type json: dict
            :param res: the given threads number
            :type res: str or int
            :param last_modified: when was the page last modified
            :type last_modified: int or None
            :return: the created :class:`fourch.Thread`
            :rtype: :class:`fourch.Thread`
        """
        t = Thread(board, res)
        t._last_modified = last_modified
        replies = json["posts"]
        t.op = Reply(t, replies.pop(0))
        t.replies = [Reply(t, r) for r in replies]
        if res is None:
            t._should_update = True
            t.res = t.op.number
        t.omitted_posts = t.op._json.get("omitted_posts", 0)
        t.omitted_images = t.op._json.get("omitted_images", 0)
        return t
    @property
    def sticky(self):
        """ Is this thread stuck?

            :return: whether or not the thread is stuck
            :rtype: bool
        """
        return self.op.sticky

    @property
    def closed(self):
        """ Is the thread closed?

            :return: whether or not the thread is closed
            :rtype: bool
        """
        return self.op.closed

    @property
    def last_reply(self):
        """ Return the last :class:`fourch.Reply` to the thread, or the op
            if there are no replies.

            :return: the last :class:`fourch.Reply` to the thread.
            :rtype: :class:`fourch.Reply`
        """
        if not self.replies:
            return self.op
        return self.replies[-1]

    @property
    def images(self):
        """ Create a generator which yields all of the image urls for the thread.

            :return: a generator yielding all image urls
            :rtype: generator
        """
        yield self.op.file.url
        for r in self.replies:
            if not r.has_file:
                continue
            yield r.file.url
    def update(self, force=False):
        """ Update the thread, pulling in new replies,
            appending them to the reply pool.

            :param force: should replies be replaced with fresh reply objects
            :type force: bool
            :return: the number of new replies
            :rtype: int
        """
        if not self.alive and not force:
            return 0

        url = self._board.url("api_thread",
                              board=self._board.name,
                              thread=self.res)
        headers = None
        if self._last_modified:
            # If-Modified-Since, to not waste bandwidth.
            headers = {
                "If-Modified-Since": self._last_modified
            }

        r = self._board._session.get(url, headers=headers)

        if r.status_code == requests.codes.not_modified:
            # 304 Not Modified
            return 0
        elif r.status_code == requests.codes.not_found:
            # 404 Not Found
            self.alive = False
            # Remove from cache.
            self._board._cache.pop(self.res, None)
            return 0
        elif r.status_code == requests.codes.ok:
            if not self.alive:
                self.alive = True
                self._board._cache[self.res] = self
            self._should_update = False
            self.omitted_posts = 0
            self.omitted_images = 0
            self._last_modified = r.headers["last-modified"]
            replies = r.json()["posts"]

            post_count = len(self.replies)
            self.op = Reply(self, replies.pop(0))
            if not force:
                self.replies.extend(
                    [Reply(self, p)
                     for p in replies
                     if p["no"] > self.last_reply.number]
                )
            else:
                self.replies = [Reply(self, p) for p in replies]
            post_count_new = len(self.replies)
            post_count_diff = post_count_new - post_count
            if post_count_diff < 0:
                raise Exception("post count delta is somehow negative...")
            return post_count_diff
        else:
            r.raise_for_status()
| 4ch | /4ch-1.0.0.tar.gz/4ch-1.0.0/fourch/thread.py | thread.py |
# vim: sw=4 expandtab softtabstop=4 autoindent
__version__ = "1.0.0"
| 4ch | /4ch-1.0.0.tar.gz/4ch-1.0.0/fourch/_version.py | _version.py |
# vim: sw=4 expandtab softtabstop=4 autoindent
from ._version import __version__

urls = {
    "api": "a.4cdn.org",
    "boards": "boards.4chan.org",
    "images": "i.4cdn.org",
    "thumbs": "t.4cdn.org",
    # These are tacked to the end of the api url after formatting.
    "api_board": "/{board}/{page}.json",
    "api_thread": "/{board}/thread/{thread}.json",
    "api_threads": "/{board}/threads.json",
    "api_catalog": "/{board}/catalog.json",
    "api_boards": "/boards.json"
}


class struct:
    def __init__(self, **entries):
        self.__dict__.update(entries)
| 4ch | /4ch-1.0.0.tar.gz/4ch-1.0.0/fourch/fourch.py | fourch.py |
# vim: sw=4 expandtab softtabstop=4 autoindent
import fourch
import base64
import re


class Reply(object):
    """ This object stores information regarding a specific post
        on any given thread. It uses python properties to easily
        allow access to information.
    """

    def __init__(self, thread, json):
        """ Initialize the reply with the relevant information

            :param thread: the :class:`fourch.Thread` parent instance
            :type thread: :class:`fourch.Thread`
            :param json: the json data for this post
            :type json: dict
        """
        self._thread = thread
        self._json = json

    def __repr__(self):
        return "<{0}.{1} /{2}/{3}#{4}, image: {5}>".format(
            self.__class__.__module__,
            self.__class__.__name__,
            self._thread._board.name,
            self._thread.res,
            self.number,
            bool(self.has_file)
        )
    @property
    def is_op(self):
        """Is this post the OP (first post in thread)"""
        return self._json.get("resto", 1) == 0

    @property
    def number(self):
        """The number relating to this post"""
        return self._json.get("no", 0)

    @property
    def reply_to(self):
        """What post ID is this a reply to"""
        return self._json.get("resto", 0)

    @property
    def sticky(self):
        """Is this thread stuck?"""
        return bool(self._json.get("sticky", 0))

    @property
    def closed(self):
        """Is this thread closed?"""
        return bool(self._json.get("closed", 0))

    @property
    def now(self):
        """Humanized date string of post time"""
        return self._json.get("now", "")

    @property
    def timestamp(self):
        """The UNIX timestamp of post time"""
        return self._json.get("time", 0)

    @property
    def tripcode(self):
        """Trip code, if any, of the post"""
        return self._json.get("trip", "")

    @property
    def id(self):
        """Post ID, if any. (Admin, Mod, Developer, etc)"""
        return self._json.get("id", "")

    @property
    def capcode(self):
        """Post capcode, if any. (none, mod, admin, etc)"""
        return self._json.get("capcode", "")

    @property
    def country(self):
        """ The country code this was posted from. Two characters, XX if
            unknown.
        """
        return self._json.get("country", "XX")

    @property
    def country_name(self):
        """The name of the country this was posted from"""
        return self._json.get("country_name", "")

    @property
    def email(self):
        """The email attached to the post"""
        return self._json.get("email", "")

    @property
    def subject(self):
        """The subject of the post"""
        return self._json.get("sub", "")

    @property
    def comment(self):
        """The comment, including escaped HTML"""
        return self._json.get("com", "")

    @property
    def comment_text(self):
        """ The stripped (mostly) plain text version of the comment.

            The comment goes through various regexes to become (mostly) clean.
            Some HTML will still be present, this is because Python's
            :mod:`HTMLParser` won't escape everything, and since it's
            undocumented, only god may know how to add more escapes.
        """
        import HTMLParser
        com = self.comment
        # <span class="quote">&gt;text!</span>
        # --- >text!
        com = re.sub(r"\<span[^>]+\>(?:&gt;|>)([^</]+)\<\/span\>",
                     r">\1",
                     com,
                     flags=re.I)
        # <a class="quotelink" href="XX#pYYYY">&gt;&gt;YYYY</a>
        # --- >>YYYY
        com = re.sub(r"\<a[^>]+\>(?:&gt;|>){2}(\d+)\<\/a\>",
                     r">>\1",
                     com,
                     flags=re.I)
        # Add (OP) to quotelinks to op
        com = re.sub(r"\>\>({0})".format(self._thread.op.number),
                     r">>\1 (OP)",
                     com,
                     flags=re.I)
        # <br> or <br /> to newline
        com = re.sub(r"\<br ?\/?\>", "\n", com, flags=re.I)
        # Send the remaining HTML through the HTMLParser to unescape.
        com = HTMLParser.HTMLParser().unescape(com)
        return com

    @property
    def url(self):
        """The URL of the post on the parent thread"""
        return "{0}{1}/{2}/thread/{3}#p{4}".format(
            self._thread._board.proto,
            self._thread._board._urls["boards"],
            self._thread._board.name,
            self._thread.res,
            self.number
        )
# File related
@property
def has_file(self):
"""Whether or not this post has an image attached"""
return "filename" in self._json
@property
def file(self):
""" This holds the information regarding the image attached
to a post, if there is one at all.
It returns the relevant information in a class format,
accessible via ``r.file.url``, for example.
Information stored:
- renamed
- name
- extension
- size
- md5
- md5b64
- width
- height
- thumb_width
- thumb_height
- deleted
- spoiler
- url
- thumb_url
:return: a struct with information related to image
"""
if not self.has_file:
return fourch.struct()
f = {
"renamed": self._json.get("tim", 0),
"name": self._json.get("filename", ""),
"extension": self._json.get("ext", ""),
"size": self._json.get("fsize", 0),
"md5": base64.b64decode(self._json.get("md5")),
"md5b64": self._json.get("md5", ""),
"width": self._json.get("w", 0),
"height": self._json.get("h", 0),
"thumb_width": self._json.get("tn_w", 0),
"thumb_height": self._json.get("tn_h", 0),
"deleted": bool(self._json.get("filedeleted", 0)),
"spoiler": bool(self._json.get("spoiler", 0)),
"url": "",
"thumb_url": ""
}
f["url"] = "{0}{1}/{2}/{3}{4}".format(
self._thread._board.proto,
fourch.urls["images"],
self._thread._board.name,
f["renamed"],
f["extension"]
)
f["thumb_url"] = "{0}{1}/{2}/{3}s.jpg".format(
self._thread._board.proto,
fourch.urls["thumbs"],
self._thread._board.name,
f["renamed"]
)
return fourch.struct(**f)
| 4ch | /4ch-1.0.0.tar.gz/4ch-1.0.0/fourch/reply.py | reply.py |
# vim: sw=4 expandtab softtabstop=4 autoindent
import requests
import fourch
from .thread import Thread
class Board(object):
""" fourch.Board is the master instance which allows easy access to the
creation of thread objects.
"""
def __init__(self, name, https=False):
""" Create the board instance, and initialize internal variables.
:param name: The board name, minus slashes. e.g., 'b', 'x', 'tv'
:type name: string
:param https: Should we use HTTPS or HTTP?
:type https: bool
"""
self.name = name
self.https = https
self._session = None
self._cache = {} # {id: fourch.Thread(id)} -- prefetched threads
def __repr__(self):
# TODO: Fetch title/nsfw status from /boards.
return "<{0} /{1}/>".format(
self.__class__.__name__,
self.name
)
@property
def session(self):
if self._session is None:
self._session = requests.Session()
uaf = "fourch/{0} (@https://github.com/sysr-q/4ch)"
self._session.headers.update({
"User-agent": uaf.format(fourch.__version__),
})
return self._session
@property
def proto(self):
# Since this might change on-the-fly..
return "https://" if self.https else "http://"
def url(self, endpoint, *k, **v):
return (self.proto
+ fourch.urls["api"]
+ fourch.urls[endpoint].format(*k, **v))
def catalog(self):
""" Get a list of all the thread OPs and last replies.
"""
url = self.url("api_catalog", board=self.name)
r = self.session.get(url)
return r.json()
def threads(self):
""" Get a list of all the live threads, and which page they're on.
You can cross-reference this with a thread's number to see which
page it's on at the time of calling.
"""
url = self.url("api_threads", board=self.name)
r = self.session.get(url)
return r.json()
def thread(self, res, update_cache=True):
""" Create a :class:`fourch.Thread` object.
If the thread has already been fetched, return the cached thread.
:param res: the thread number to fetch
:type res: str or int
:param update_cache: should we update if it's cached?
:type update_cache: bool
:return: the :class:`fourch.Thread` object
:rtype: :class:`fourch.Thread` or None
"""
if res in self._cache:
t = self._cache[res]
if update_cache:
t.update()
return t
url = self.url("api_thread", board=self.name, thread=res)
r = self.session.get(url)
t = Thread.from_req(self, res, r)
if t is not None:
self._cache[res] = t
return t
def page(self, page=1, update_each=False):
""" Return all the threads in a single page.
The page number is one-indexed. First page is 1, second is 2, etc.
If a thread has already been cached, return the cache entry rather
than making a new thread.
:param page: page to pull threads from
:type page: int
:param update_each: should each thread be updated, to pull all
replies
:type update_each: bool
:return: a list of :class:`fourch.Thread` objects, corresponding to
all threads on given page
:rtype: list
"""
url = self.url("api_board", board=self.name, page=page)
r = self.session.get(url)
if r.status_code != requests.codes.ok:
r.raise_for_status()
json = r.json()
threads = []
for thj in json["threads"]:
t = None
res = thj["posts"][0]["no"]
if res in self._cache:
t = self._cache[res]
t._should_update = True
else:
t = Thread.from_json(self,
thj,
last_modified=r.headers["last-modified"])
self._cache[res] = t
if update_each:
t.update()
threads.append(t)
return threads
def thread_exists(self, res):
""" Figure out whether or not a thread exists.
This is as easy as checking if it 404s.
:param res: the thread number to fetch
:type res: str or int
:return: whether or not the given thread exists
:rtype: bool
"""
url = self.url("api_thread", board=self.name, thread=res)
return self.session.head(url).status_code == requests.codes.ok
| 4ch | /4ch-1.0.0.tar.gz/4ch-1.0.0/fourch/board.py | board.py |
# vim: sw=4 expandtab softtabstop=4 autoindent
""" fourch (stylised as 4ch) is an easy-to-implement Python wrapper for
4chan's JSON API, as provided by moot.
It uses the documentation of the 4chan API located at:
https://github.com/4chan/4chan-API
This is based off of the API last updated Aug 12, 2014.
(4chan-API commit: 1b2bc7858afc555127b8911b4d760480769872a9)
"""
from ._version import __version__
from .fourch import urls
from .thread import Thread
from .board import Board
from .reply import Reply
import requests
def boards(https=False):
""" Get a list of all boards on 4chan, in :class:`fourch.board.Board`
objects.
:param https: Should we use HTTPS or HTTP?
:type https: bool
"""
s = requests.Session()
s.headers.update({
"User-Agent": "fourch/{0} (@https://github.com/sysr-q/4ch)".format(
__version__
)
})
proto = "https://" if https else "http://"
url = proto + urls['api'] + urls["api_boards"]
r = s.get(url)
if r.status_code != requests.codes.ok:
r.raise_for_status()
boards = []
for json_board in r.json()['boards']:
boards.append(Board(json_board['board'], https=https))
return boards
| 4ch | /4ch-1.0.0.tar.gz/4ch-1.0.0/fourch/__init__.py | __init__.py |
chan
====
A python script that downloads all images from a 4chan thread.
Install
-------
::
pip install 4chan
Usage
-----
::
usage: chan [-h] [--watch] url
positional arguments:
url The url of the thread.
optional arguments:
-h, --help show this help message and exit
--watch If this argument is passed, we will watch the thread for new
images.
Example
~~~~~~~
::
chan thread-url
| 4chan | /4chan-0.0.4.tar.gz/4chan-0.0.4/README.rst | README.rst |
# -*- coding: utf-8 -*-
from setuptools import setup, Command
import os
import sys
from shutil import rmtree
here = os.path.abspath(os.path.dirname(__file__))
with open("README.rst", "rb") as f:
long_descr = f.read().decode("utf-8")
class PublishCommand(Command):
"""Support setup.py publish."""
description = 'Build and publish the package.'
user_options = []
@staticmethod
def status(s):
"""Prints things in bold."""
print('\033[1m{0}\033[0m'.format(s))
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
try:
self.status('Removing previous builds…')
rmtree(os.path.join(here, 'dist'))
except:
pass
self.status('Building Source and Wheel (universal) distribution…')
os.system('{0} setup.py sdist bdist_wheel --universal'.format(sys.executable))
self.status('Uploading the package to PyPi via Twine…')
os.system('twine upload dist/*')
sys.exit()
setup(
name="4chan",
packages=["chan"],
entry_points={
"console_scripts": ['chan = chan.chan:main']
},
version='0.0.4',
description="A python script that downloads all images from a 4chan thread.",
long_description=long_descr,
author="Anthony Bloomer",
author_email="ant0@protonmail.ch",
url="https://github.com/AnthonyBloomer/chan",
classifiers=[
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
"Topic :: Software Development :: Libraries",
'Programming Language :: Python :: 2.7'
],
cmdclass={
'publish': PublishCommand,
},
zip_safe=False
) | 4chan | /4chan-0.0.4.tar.gz/4chan-0.0.4/setup.py | setup.py |
from .chan import main
main() | 4chan | /4chan-0.0.4.tar.gz/4chan-0.0.4/chan/__main__.py | __main__.py |
import sys, json, os, urllib
from urlparse import urlparse
import argparse
import time
URL = 'https://a.4cdn.org/'
IMAGE_URL = 'http://i.4cdn.org/'
allowed_types = ['.jpg', '.png', '.gif']
parser = argparse.ArgumentParser()
parser.add_argument("url", help='The url of the thread.')
parser.add_argument("--watch", action='store_true', help='If this argument is passed, we will watch the thread for new images.')
args = parser.parse_args()
def download(board, url):
response = urllib.urlopen(url)
try:
result = json.loads(response.read())
for post in result['posts']:
try:
filename = str(post['tim']) + post['ext']
if post['ext'] in allowed_types and not os.path.exists(filename):
urllib.urlretrieve(IMAGE_URL + board + '/' + filename, filename)
except KeyError:
continue
except ValueError:
sys.exit('No response. Is the thread deleted?')
def watch(board, url):
while True:
download(board, url)
time.sleep(60)
def main():
if 'boards.4chan.org' not in args.url:
sys.exit("You didn't enter a valid 4chan URL")
split = urlparse(args.url).path.replace('/', ' ').split()
board, thread = split[0], split[2]
url = '%s%s/thread/%s.json' % (URL, board, thread)
try:
os.mkdir(thread)
print 'Created directory...'
except OSError:
print 'Directory already exists. Continuing. '
pass
os.chdir(thread)
if args.watch:
print("Watching thread for new images")
watch(board, url)
else:
print("Downloading")
download(board, url)
if __name__ == '__main__':
main()
| 4chan | /4chan-0.0.4.tar.gz/4chan-0.0.4/chan/chan.py | chan.py |
"""
The MIT License (MIT)
Copyright (c) 2023-present scrazzz
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
"""
__title__ = '4chan.py'
__author__ = 'scrazzz'
__license__ = 'MIT'
__copyright__ = 'Copyright (c) 2023-present scrazzz'
__version__ = '0.0.0'
| 4chan.py | /4chan.py-0.0.0-py3-none-any.whl/4chan/__init__.py | __init__.py |
4chandownloader
===============
4chan thread downloader.
::
pip install 4chandownloader
4chandownloader http://boards.4chan.org/b/res/423861837 4chanarchives --delay 5 --thumbs
| 4chandownloader | /4chandownloader-0.4.1.tar.gz/4chandownloader-0.4.1/README.rst | README.rst |
#!/usr/bin/env python
# coding: utf-8
import os
import sys
try:
from setuptools import setup
except ImportError:
from distutils.core import setup
if sys.argv[-1] == 'publish':
os.system('python setup.py sdist upload')
sys.exit()
setup(
name = '4chandownloader',
version = '0.4.1',
description = '4chan thread downloader',
long_description = open('README.rst').read(),
license = open('LICENSE').read(),
author = u'toxinu',
author_email = 'toxinu@gmail.com',
url = 'https://github.com/toxinu/4chandownloader',
keywords = '4chan downloader images',
scripts = ['4chandownloader'],
install_requires = ['requests==0.14.0', 'docopt==0.5.0'],
classifiers = (
'Intended Audience :: Developers',
'Natural Language :: English',
'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
'Programming Language :: Python',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7')
)
| 4chandownloader | /4chandownloader-0.4.1.tar.gz/4chandownloader-0.4.1/setup.py | setup.py |
4channel is a python3 tool and module to download all images/webm from a 4channel thread.
Installation
---------------
### Dependencies
4channel requires:
- python (>= 3.6)
### User installation
```
pip install 4channel
```
Usage
---------
```
usage: 4channel [-h] [--webm] [--watch] [--dryrun] [-r RECURSE] url [out]
positional arguments:
url the url of the thread.
out specify output directory (optional)
optional arguments:
-h, --help show this help message and exit
--webm in addition to images also download webm videos.
--watch watch the thread every 60 seconds for new images.
--dryrun dry run without actually downloading images.
-r RECURSE, --recurse RECURSE
recursively download images if 1st post contains link to previous thread up to specified depth
examples:
python -m fourchannel https://boards.4channel.org/g/thread/76759434#p76759434
import fourchannel as f
f.download(url='https://boards.4channel.org/g/thread/76759434#p76759434')
```
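Under the hood, `download()` first converts the thread URL into 4chan's JSON API endpoint before fetching anything. A self-contained sketch of that conversion, mirroring the parsing done in `fourchannel/fourchannel.py` (the helper name `thread_json_url` and the `API` constant are illustrative; the module itself calls the constant `URL`):

```python
import urllib.parse

API = 'https://a.4cdn.org/'

def thread_json_url(thread_url):
    # '/g/thread/76759434' -> ['g', 'thread', '76759434']
    parts = urllib.parse.urlparse(thread_url).path.replace('/', ' ').split()
    board, thread = parts[0], parts[2]
    return '%s%s/thread/%s.json' % (API, board, thread)

print(thread_json_url('https://boards.4channel.org/g/thread/76759434#p76759434'))
# -> https://a.4cdn.org/g/thread/76759434.json
```

The media files themselves are then fetched from `https://i.4cdn.org/<board>/<filename>`.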
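The `--recurse` option works by scanning the OP's comment for a link to the previous thread. A self-contained sketch of that detection, using the same regular expression as `load_thread_json()` in `fourchannel/fourchannel.py` (the sample comment text here is made up):

```python
import re

# Same pattern as in fourchannel/fourchannel.py
PREV = re.compile(r'.*[pP]revious:? (?:[tT]hread:)?\s*.*?(\d{8}).*')

op_comment = 'Thread about compilers. Previous thread: >>76700000'
match = PREV.search(op_comment)
if match:
    print(match.group(1))  # the previous thread number
```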
| 4channel | /4channel-0.0.9.tar.gz/4channel-0.0.9/README.md | README.md |
#!/usr/bin/env python
import setuptools
with open("README.md", "r") as fh:
long_description = fh.read()
setuptools.setup(
name="4channel",
packages=["fourchannel"],
version="0.0.9",
description="A python3 tool and module to download all images/webm from a 4channel thread.",
long_description=long_description,
long_description_content_type="text/markdown",
author="Kyle K",
license="GPLv3",
author_email="kylek389@gmail.com",
url="https://github.com/fatalhalt/fourchannel",
keywords=["4chan", "image", "downloader", "scraper"],
install_requires=[],
classifiers=[
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows",
"Intended Audience :: End Users/Desktop",
"Topic :: Utilities",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Programming Language :: Python :: 3",
],
entry_points={
"console_scripts": ["4channel=fourchannel.fourchannel:main"],
},
python_requires=">=3.6",
)
| 4channel | /4channel-0.0.9.tar.gz/4channel-0.0.9/setup.py | setup.py |
from .fourchannel import main
main() | 4channel | /4channel-0.0.9.tar.gz/4channel-0.0.9/fourchannel/__main__.py | __main__.py |
import sys, json, os
import urllib.request
import urllib.parse
import argparse
import time
import signal
import re
"""
notes:
- for the module to be importable, it is a good idea not to build the ArgumentParser at global scope
- to avoid typing 'import fourchannel.fourchannel' and then 'fourchannel.fourchannel.download',
'from .fourchannel import download' was added to __init__.py
"""
URL = 'https://a.4cdn.org/'
IMAGE_URL = 'https://i.4cdn.org/'
allowed_types = ['.jpg', '.png', '.gif']
watching = False
max_retry = 1
resorted_to_archive = False
hit_cloudflare_block = False
list_of_cloudflare_blocked_media_file = []
def fuuka_retrieve(result, board, thread, dryrun):
i = 0
global hit_cloudflare_block
global list_of_cloudflare_blocked_media_file
for post in result[thread]['posts']:
if result[thread]['posts'][post]['media'] is None:
continue
filename = result[thread]['posts'][post]['media']['media_orig']
if filename[filename.index('.'):] in allowed_types and not os.path.exists(filename):
if not dryrun:
# retrieve file from warosu.org, https://i.warosu.org/data/<board>/img/0xxx/xx/<filename>
thread_first_3nums = '0' + thread[:3]
thread_forth_and_fifth_nums = thread[3:5]
url_warosu = 'https://i.warosu.org/data/' + board + '/img/' + thread_first_3nums + '/' + thread_forth_and_fifth_nums + '/' + filename
if not hit_cloudflare_block:
print(f"downloading {filename}")
req = urllib.request.Request(url_warosu, headers={'User-Agent': 'Mozilla/5.0'})
try:
response = urllib.request.urlopen(req)
with open(filename, "wb") as file:
file.write(response.read())
i = i+1
except urllib.error.HTTPError as e:
if e.code in [503] and e.hdrs['Server'] == 'cloudflare':
hit_cloudflare_block = True
print(f"hit cloudflare block: {e}")
else:
print(f"cloudflare block, download {url_warosu} manually in the browser")
list_of_cloudflare_blocked_media_file.append(url_warosu)
else:
print(f"skipping {filename}, dryrun")
else:
if not watching:
print(f"skipping {filename}, already present")
print(f"downloaded {i} files from https://i.warosu.org/ thread# {thread}")
# loops through posts of given thread and downloads media files
def load_thread_json(board, thread, url, recurse, dryrun=False):
global resorted_to_archive
response = None
archive_url_is_being_used_for_this_stack_frame_so_call_fuuka = False
try:
if resorted_to_archive is True:
archive_url_is_being_used_for_this_stack_frame_so_call_fuuka = True
req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
response = urllib.request.urlopen(req)
else:
response = urllib.request.urlopen(url)
except urllib.error.HTTPError as e:
if e.code in [404]:
if not resorted_to_archive:
resorted_to_archive = True
newurl = '%s=%s&num=%s' % ('https://archived.moe/_/api/chan/thread?board', board, thread)
print(f"url {url} returned 404, resorting to {newurl}")
load_thread_json(board, thread, newurl, recurse-1, dryrun)
else:
global max_retry
max_retry = max_retry - 1
if max_retry < 1:
return
else:
print(f"archive url {url} returned 404, retrying...")
load_thread_json(board, thread, url, recurse, dryrun)
else:
print(f"unhandled error: {e}")
return
try:
result = json.loads(response.read())
op_subject = ''
op_post_time = ''
if recurse > 0:
try:
op_comment = ''
if archive_url_is_being_used_for_this_stack_frame_so_call_fuuka is True:
# for json from fuuka the tread# to previous thread is in slightly different place
op_comment = result[thread]['op']['comment']
op_subject = result[thread]['op']['title']
op_post_time = result[thread]['op']['fourchan_date']
#prev_thread_num = re.search(r'.*Previous thread:\s*>>(\d{8}).*', op_comment).group(1)
#prev_thread_num = re.search(r'.*[pP]revious [tT]hread:\s*.*?(\d{8}).*', op_comment).group(1)
prev_thread_num = re.search(r'.*[pP]revious:? (?:[tT]hread:)?\s*.*?(\d{8}).*', op_comment).group(1)
newurl = '%s=%s&num=%s' % ('https://archived.moe/_/api/chan/thread?board', board, prev_thread_num)
print(f"recursing to archive thread# {prev_thread_num} at {newurl}")
load_thread_json(board, prev_thread_num, newurl, recurse-1, dryrun)
else:
op_comment = result['posts'][0]['com']
op_subject = result['posts'][0]['sub'] if result['posts'][0].get('sub') is not None else 'No title'
op_post_time = result['posts'][0]['now']
#prev_thread_path = re.search(r'^.*[pP]revious [tT]hread.*href="([^"]+)".*$', op_comment).group(1)
#prev_thread_num = re.search(r'.*[pP]revious [tT]hread:\s*.*?(\d{8}).*', op_comment).group(1)
prev_thread_num = re.search(r'.*[pP]revious:? (?:[tT]hread:)?\s*.*?(\d{8}).*', op_comment).group(1)
prev_thread_path = '/' + board + '/thread/' + prev_thread_num
split = urllib.parse.urlparse('https://boards.4channel.org' + prev_thread_path).path.replace('/', ' ').split()
newurl = '%s%s/thread/%s.json' % (URL, split[0], split[2])
print(f"recursing to {prev_thread_path}")
load_thread_json(board, split[2], newurl, recurse-1, dryrun)
except AttributeError:
print(f"did not find a link to previous thread. the comment was:\n---\n{op_comment}\n---")
pass
if archive_url_is_being_used_for_this_stack_frame_so_call_fuuka is True:
fuuka_retrieve(result, board, thread, dryrun)
else:
i = 0
total_bytes_dw = 0
for post in result['posts']:
try:
filename = str(post['tim']) + post['ext']
if post['ext'] in allowed_types and not os.path.exists(filename):
if not dryrun:
print(f"downloading {filename}")
fn, headers = urllib.request.urlretrieve(IMAGE_URL + board + '/' + filename, filename)
total_bytes_dw = total_bytes_dw + int(headers['Content-Length'])
i = i+1
else:
print(f"skipping {filename}, dryrun")
else:
if not watching:
print(f"skipping {filename}, already present")
except KeyError:
continue
print(f"downloaded {'%.*f%s' % (2, total_bytes_dw / (1<<20), 'MB')} of {i} files from {url} ({op_subject}) ({op_post_time})")
except ValueError:
sys.exit('no response, thread deleted?')
# the key function we expect to be used when 4channel is imported as a module;
# it parses the user's URL and calls load_thread_json(), which does the actual downloading
def download(**kwargs):
if 'boards.4channel.org' not in kwargs.get('url'):
sys.exit("you didn't enter a valid 4channel URL")
if kwargs.get('recurse') is None:
kwargs['recurse'] = 0 # handle case when module is imported and .download() is called with just url
split = urllib.parse.urlparse(kwargs.get('url')).path.replace('/', ' ').split()
board, thread = split[0], split[2]
url = '%s%s/thread/%s.json' % (URL, board, thread)
outdir = kwargs.get('out') if kwargs.get('out') is not None else thread
try:
os.mkdir(outdir)
print(f"created {os.path.join(os.getcwd(), outdir)} directory...")
except OSError:
print(f"{outdir} directory already exists, continuing...")
pass
if os.path.basename(os.getcwd()) != outdir:
os.chdir(outdir)
if kwargs.get('webm') is True:
allowed_types.append('.webm')
if kwargs.get('watch') is True:
global watching
watching = True
print(f"watching /{board}/{thread} for new images")
while True:
load_thread_json(board, thread, url, 0)
time.sleep(60)
else:
print(f"downloading /{board}/{thread}")
load_thread_json(board, thread, url, kwargs.get('recurse'), kwargs.get('dryrun'))
if hit_cloudflare_block:
with open('_cloudflare_blocked_files.txt', "w") as f:
print(*list_of_cloudflare_blocked_media_file, sep="\n", file=f)
os.chdir("..")
def signal_handler(signal, frame):
print('\nSIGINT or CTRL-C detected, exiting gracefully')
sys.exit(0)
def main():
parser = argparse.ArgumentParser()
parser.add_argument("url", help='the url of the thread.')
parser.add_argument("out", nargs='?', help='specify output directory (optional)')
parser.add_argument("--webm", action="store_true", help="in addition to images also download webm videos.")
parser.add_argument("--watch", action='store_true', help='watch the thread every 60 seconds for new images.')
parser.add_argument("--dryrun", action="store_true", help="dry run without actually downloading images.")
parser.add_argument('-r', "--recurse", type=int, default=0, help="recursively download images if 1st post contains link to previous thread up to specified depth")
args = parser.parse_args()
signal.signal(signal.SIGINT, signal_handler)
download(**vars(args)) # pass in args as dict and unpack
if __name__ == '__main__':
main()
| 4channel | /4channel-0.0.9.tar.gz/4channel-0.0.9/fourchannel/fourchannel.py | fourchannel.py |
from .fourchannel import download | 4channel | /4channel-0.0.9.tar.gz/4channel-0.0.9/fourchannel/__init__.py | __init__.py |
import time
from typing import Callable, Union
def speed_test(func: Callable, key="MS", *args, **kwargs) -> Union[int, float, None]:
"""
Time a single call of the given function.
The key selects the clock: "MS" uses time.monotonic() and returns the
elapsed time in seconds as a float; "NS" uses time.monotonic_ns() and
returns nanoseconds as an int (available on Python >= 3.7).
Any extra *args/**kwargs are passed through to the function.
:param func: - function to run
:param key: - "MS" (default) or "NS"
:return: elapsed time as float (seconds) or int (nanoseconds),
or None if the key is invalid.
"""
if key == "MS":
start_mark = time.monotonic()
func(*args, **kwargs)
stop_mark = time.monotonic()
return stop_mark - start_mark
elif key == "NS":
start_mark = time.monotonic_ns()
func(*args, **kwargs)
stop_mark = time.monotonic_ns()
return stop_mark - start_mark
return None
| 4codesdk-pkg | /4codesdk_pkg-0.0.1-py3-none-any.whl/4codesdk_package/common/code_speed_test.py | code_speed_test.py |
# 4DGB Workflow

A dockerized application implementing an end-to-end workflow to process Hi-C data files and displaying their structures in an instance of the [4D Genome Browser](https://github.com/lanl/4DGB).
The workflow takes ```.hic``` data, processes the data and creates a running server that can be used to view the data with a web browser. The system takes advantage of previous runs, so if you've already computed some data, it won't be recomputed the next time the workflow is run.
The workflow is split into two stages: "Build" and "View". Each implemented with a separate docker image. The Build stage does most of the computation (including the most expensive part, running the LAMMPS simulation) and outputs a project suitable for viewing with the [4D Genome Browser](https://github.com/lanl/4DGB). The View stage simply creates an instance of this browser, allowing the user to view their project.
## Setting up Input Data
1. Create a directory to contain all of your input data. In it, create a `workflow.yaml` file with the following format:
```yaml
project:
resolution: 200000 # optional (defaults to 200000)
chromosome: X # optional (defaults to 'X')
count_threshold: 2.0 # optional (defaults to 2.0)
datasets:
- name: "Data 01"
hic: "path/to/data_01.hic"
- name: "Data 02"
hic: "path/to/data_02.hic"
```
*See the [File Specification Document](doc/project.md) for full details on what can be included in the input data*
2. Checkout submodules
```sh
git submodule update --init
```
3. Build the Docker images.
```sh
make docker
```
4. Run the browser!
```sh
./4DGBWorkflow run /path/to/project/directory/
```
**Example output:**
```
$ ./4DGBWorkflow run ./example_project
[>]: Building project... (this may take a while)
#
# Ready!
# Open your web browser and visit:
# http://localhost:8000/compare.html?gtkproject=example_project
#
# Press [Ctrl-C] to exit
#
```
If this is the first time running a project, this may take a while, since it needs to run a molecular dynamics simulation with LAMMPS on your input data. The next time you run it, it won't need to run the simulation again. If you update the input files, then the simulation will automatically be re-run!
**Example Screenshot**

## Help for Maintainers
See the [Publishing](./doc/publishing.md) doc for information on publishing and releasing new versions.
## ❄️ For Nix Users
For initiates of the [NixOS cult](https://nixos.org/), there is a Nix Flake which exports a package of the workflow builder, as well as a development environment in which you can easily run the workflow. Each submodule also has its own flake exporting relevant packages.
To enter the development environment (you need to enable submodules):
```sh
nix develop '.?submodules=1'
```
To build a project and run the browser:
```sh
# Build the workflow script
nix build '.?submodules=1#workflow-build'
# Run the just-built workflow
./result/bin/4dgb-workflow-build example_project/ example_out/
# Run the browser (which is available in the PATH in the dev environment)
PROJECT_HOME=example_out/ gtkserver.py
```
| 4dgb-workflow | /4dgb-workflow-1.5.6.tar.gz/4dgb-workflow-1.5.6/README.md | README.md |
import setuptools
# read the description file
from os import path
this_directory = path.abspath(path.dirname(__file__))
with open(path.join(this_directory, 'doc/description.md'), encoding='utf-8') as f:
long_description_text = f.read()
this_directory = path.abspath(path.dirname(__file__))
version = ""
with open(path.join(this_directory, 'version.txt'), encoding='utf-8') as f:
version_text = f.read().strip()
setuptools.setup(
name="4dgb-workflow",
version=version_text,
author="David H. Rogers",
author_email="dhr@lanl.gov",
description="4D Genome Browser Workflow.",
long_description=long_description_text,
long_description_content_type='text/markdown',
url="https://github.com/4dgb/4DGBWorkflow",
include_package_data=True,
packages=['4dgb-workflow'],
scripts=['4DGBWorkflow', 'doc/description.md', 'version.txt'],
classifiers=[
"Programming Language :: Python :: 3",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
],
)
| 4dgb-workflow | /4dgb-workflow-1.5.6.tar.gz/4dgb-workflow-1.5.6/setup.py | setup.py |
# 4DGB toolkit
This is the toolkit associated with the 4DGenomeBrowser project
| 4dgb-workflow | /4dgb-workflow-1.5.6.tar.gz/4dgb-workflow-1.5.6/doc/description.md | description.md |
# Group number 6
## Thee Ngamsangrat, Junkai Ong, Gabin Ryu, Chenfan Zhuang
[](https://travis-ci.com/cs107-JTGC/cs107-FinalProject)
[](undefined) | 4dpyautodiff | /4dpyautodiff-1.0.0.tar.gz/4dpyautodiff-1.0.0/README.md | README.md |
import setuptools
long_description = ("PyAutoDiff provides the ability to seamlessly calculate the gradient of a given function within "
"your Python code. By using automatic differentiation, this project addresses efficiency and "
"precision issues in symbolic and finite differentiation algorithms")
setuptools.setup(
name="4dpyautodiff",
version="1.0.0",
author="cs107-JTGC",
author_email="chenfanzhuang@g.harvard.edu",
description="A simple library for auto differentiation",
long_description=long_description,
long_description_content_type="text/markdown",
url="https://github.com/cs107-JTGC/cs107-FinalProject",
packages=setuptools.find_packages(exclude=['docs', 'tests*', 'scripts']),
install_requires=[
"numpy",
"graphviz"
],
classifiers=[
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
],
python_requires='>=3.6',
)
| 4dpyautodiff | /4dpyautodiff-1.0.0.tar.gz/4dpyautodiff-1.0.0/setup.py | setup.py |
import numpy as np
from matplotlib import pyplot as plt
from pyautodiff import *
def Newton_Raphson_method(fn, xk, stepsize_thresh=1e-6, max_iters=1000, success_tolerance=1e-6, debug=False):
"""
Newton's method to find a root.
Args:
fn: function
xk: initial guess
stepsize_thresh: If ||x_{k+1} - x_{k}|| <= thresh, return
max_iters: If #iters > max_iters, return
success_tolerance: The absolute tolerance for fn(root)
debug: Defaults to False. If True, print info for every iteration
Returns:
A dict
"""
f = None
is_scalar = (np.ndim(xk) == 0)
checker = abs if is_scalar else np.linalg.norm
solver = (lambda x, y: y / x) if is_scalar else np.linalg.solve
offset = 1
for k in range(max_iters):
f = fn(Var(xk, "x")) # This is a Var instance!! Access val and der by .val and .diff() respectively
delta_x = solver(f.diff(), -f.val)
if checker(delta_x) < stepsize_thresh:
offset = 0
break
if debug:
print(f"k={k}\tx={np.round(xk, 2)}\tf(x)={np.round(f.val, 2)}\tf'(x)={np.round(f.diff(), 2)}")
xk = xk + delta_x
return {
"succeed": np.allclose(f.val, 0, atol=success_tolerance),
"iter": k + offset,
"x": xk,
"f(x)": f.val,
"f\'(x)": f.diff()
}
def cal_val_der(fn, xs):
vals = []
ders = []
for x in xs:
try:
if not isinstance(x, (Var, VarAutoName)):
y = fn(VarAutoName(x))
else:
y = fn(x)
except:
y = Var(0)
finally:
vals.append(y.val)
ders.append(y.diff())
return vals, ders
def draw_scalar(fn, roots, plt_range=[0, 10]):
x = np.linspace(plt_range[0], plt_range[1], 1000).tolist()
y, d = cal_val_der(fn, x)
fig, ax = plt.subplots()
ax.plot(x, y, label='val')
ax.plot(x, d, label='der')
ax.scatter(roots, cal_val_der(fn, roots)[0], label="root")
ax.grid(True, which='both')
ax.axhline(y=0, color='k')
ax.axvline(x=0, color='k')
plt.title("Use 0 to fill in +-inf")
plt.legend()
plt.show()
if __name__ == '__main__':
print("====Scalar demo====")
f = lambda x: x ** (-x) - log(x)
rtn = Newton_Raphson_method(f, 1, debug=True)
if rtn['succeed']:
root = rtn["x"]
print(f"Find a root={np.round(root, 4)}")
draw_scalar(f, [root], plt_range=[0.1, root + 0.5])
else:
print(f"Failed. Try another x0 or larger max_iters!")
print(rtn)
draw_scalar(f, [], plt_range=[1, 5])
print("====Vector demo====")
A = Var(np.array([[1, 2], [3, 4]]))
g = lambda x: A @ x - sin(exp(x))
n_roots = 0
for x0 in [[1, -1], [1, 1], [0, 0]]:
x0 = np.array(x0).reshape(-1, 1)
rtn = Newton_Raphson_method(g, x0, debug=False)
if rtn["succeed"]:
n_roots += 1
root = rtn["x"]
print(f"Find #{n_roots} root={np.round(root, 2).tolist()}")
else:
print(f"Failed. Try another x0 or larger max_iters!")
| 4dpyautodiff | /4dpyautodiff-1.0.0.tar.gz/4dpyautodiff-1.0.0/applications/root_finder.py | root_finder.py |
from .root_finder import Newton_Raphson_method
| 4dpyautodiff | /4dpyautodiff-1.0.0.tar.gz/4dpyautodiff-1.0.0/applications/__init__.py | __init__.py |
import math
from collections import defaultdict
from functools import wraps
import numpy as np
from pyautodiff import Var, Mode
def _dunder_wrapper(fn, is_unary=False):
"""
A wrapper function to bridge dunder method and classmethod (operation).
For example, Var.__add__ = dunder_wrapper(VarAdd.binary_operation).
Or Var.__add__ = lambda a, b: VarAdd.binary_operation(a, b)
Args:
fn: operation function
        is_unary: Defaults to False for binary operations like Add, Subtract; True for unary operations like abs, exp.
Returns:
The wrapped function.
"""
@wraps(fn)
def wrapper(*args):
a = args[0]
if is_unary:
return fn(a)
b = args[1]
return fn(a, b)
return wrapper
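# A minimal sketch of the bridge pattern above, with toy stand-in classes
# (`Num` and `Add` are hypothetical, for illustration only): a classmethod
# operation is bound to an operator dunder via a plain lambda.

```python
class Add:
    @classmethod
    def binary_operation(cls, a, b):
        return a.val + b.val

class Num:
    def __init__(self, val):
        self.val = val

# Equivalent in spirit to Num.__add__ = _dunder_wrapper(Add.binary_operation)
Num.__add__ = lambda a, b: Add.binary_operation(a, b)
```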
class Ops:
"""
A template class for all operations for class `Var` (i.e. unary and binary operations).
For each operation, users MUST implement (at least) two functions: `op()` and `local_derivative()`.
Non-element-wise operations should re-write some more methods. See 'VarTranspose' and 'VarMatMul' as reference.
    Then the propagation for forward/reverse/mix mode is handled automatically by the pipeline, which lives in
    `binary_operation()` and `unary_operation()`.
"""
# Unary operation: 1 (by default)
    # Binary operation: 2 (customized)
n_operands = 1
# A string to be displayed in computational graph.
# If None, suffix of class name will be used.
# For example, exp (operation) will show `Exp` since its class name is `VarExp` in the plot.
# See ../visualization.py for its usage.
symbol = None
@classmethod
def op(cls, va, vb):
"""
Implement the numerical value operation in this function.
For unary operation: vc = f(va), return vc;
For binary operation: vc = va op vb, return vc;
To be implemented by each operation.
Args:
va: numerical value of operand a (a.val, a is a Var instance);
vb: numerical value of operand b (b.val, b is a Var instance);
Returns:
A Number or np.ndarray, the numerical value of this operation
"""
raise NotImplementedError
@classmethod
def local_derivative(cls, va, vb, vc, skip_lda=False, skip_ldb=False):
"""
Calculate the derivative for every elementary operations.
For unary operation: c = f(a), return the local partial derivative: df/da;
For binary operation: c = a op b, return the local partial derivatives df/da, df/db;
(a,b could be results of some operations)
For example,
x = Var(1, 'x')
a = x + 1
b = 2 - x
c = a * b
The local derivative dc/da = 1, dc/db = 2;
The target derivative dc/dx = dc/da*da/dx + dc/db*db/dx = 1*1 + 2*(-1) = -1
Args:
va: numerical value of operand (a Var instance) a (=a.val);
vb: numerical value of operand (a Var instance) b (=b.val);
vc: numerical value of operation result (a Var instance) c (=c.val);
skip_lda: If a is a constant, no need to calculate dc/da
skip_ldb: If b is a constant, no need to calculate dc/db
Returns:
A Number or np.ndarray for unary operation;
A list of two Numbers or np.ndarrays for binary operation;
"""
raise NotImplementedError
@classmethod
def chain_rule(cls, lda, da, ldb, db, forward_mode=True):
"""
Apply chain rule in forward mode.
For composite function: c = g(f(a, b)), dg/dx = dg/df*df/dx; dg/dy = dg/df*df/dy;
Args:
            lda: A Number or np.ndarray, the local derivative dc/da
            da: a dict storing the derivatives of a.
                For example, {'x': da/dx, 'y': da/dy}, where 'x', 'y' are the involved variables.
            ldb: A Number or np.ndarray, the local derivative dc/db
            db: a dict storing the derivatives of b.
                For example, {'x': db/dx, 'y': db/dy}, where 'x', 'y' are the involved variables.
            forward_mode: defaults to True; False for reverse or mix mode.
        Returns:
            A dict storing the derivatives of c obtained by applying the chain rule. For example,
            {'x': dc/dx, 'y': dc/dy} where 'x', 'y' are the target variables.
"""
einsum_dispatcher = "ijkl,ij->ijkl" if forward_mode else "ijkl,kl->ijkl"
def _apply(d, ld):
if d is None:
return
ndim = np.ndim(ld)
if ndim == 0:
fn = lambda tot, loc: tot * loc
elif ndim == 2:
fn = lambda tot, loc: np.einsum(einsum_dispatcher, tot, loc)
else:
raise TypeError(f"Local derivative only supports scalar or 2D matrix but not {np.shape(ld)}")
for wrt in d:
dc[wrt] += fn(d[wrt], ld)
dc = defaultdict(int)
_apply(da, lda)
_apply(db, ldb)
return dict(dc)
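# The scalar (ndim == 0) branch of the chain rule above can be sketched
# standalone. `scalar_chain_rule` is a hypothetical helper that reproduces only
# the `tot * loc` accumulation, not the einsum path used for matrices:

```python
from collections import defaultdict

def scalar_chain_rule(lda, da, ldb, db):
    """dc[wrt] = lda * da[wrt] + ldb * db[wrt] for scalar local derivatives."""
    dc = defaultdict(int)
    for d, ld in ((da, lda), (db, ldb)):
        if d is None:
            continue
        for wrt in d:
            dc[wrt] += ld * d[wrt]
    return dict(dc)
```

# With the docstring example c = (x+1)*(2-x) at x=1: dc/da = b.val = 1,
# dc/db = a.val = 2, da = {'x': 1}, db = {'x': -1}, so dc/dx = 1*1 + 2*(-1) = -1.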
@classmethod
def merge_var_shapes(cls, sa, sb=None):
"""
Propagate the _var_shapes to the operation result by synthesizing the _var_shapes of a and b.
BE CAREFUL, a _var_shapes (dict) instance can be shared across multiple var instances.
Don't use _var_shapes for any instance specific calculation.
Args:
sa: _var_shapes of the first operand
sb: _var_shapes of the second operand, could be None
Returns:
a dict, the merged _var_shapes
"""
if sb is None:
return sa
if sa is None:
return sb
sa.update(sb)
return sa
@classmethod
def merge_modes(cls, ma, mb=None):
"""
Merge mode by such rules:
1. Forward op reverse/mix --> mix
2. Forward op forward/NONE --> forward
3. Reverse op reverse/NONE --> reverse
4. Reverse/mix/NONE op mix --> mix
5. NONE op NONE --> NONE
Args:
ma: a.mode
mb: b.mode
Returns:
A mode value
"""
if mb is None or mb == Mode.NONE:
return ma
if ma == Mode.NONE:
return mb
if ma != mb:
return Mode.Mix
return ma
@classmethod
def fwdprop(cls, a, b, val):
"""
Propagation for forward mode. Suppose current operation : c = a op b is one step of f(x), by chain rule,
we have: dc/dx = dc/da * da/dx + dc/db * db/dx, return dc/dx.
Args:
a: the first operand, a Var instance
b: the second operand, a Var instance, could be None
val: the numerical operation result
Returns:
a dict, the derivative of operation result instance
"""
if cls.n_operands == 2:
lda, ldb = cls.local_derivative(a.val, b.val, val,
skip_lda=a.is_const,
skip_ldb=b.is_const)
return cls.chain_rule(lda, a.derivative, ldb, b.derivative)
lda = cls.local_derivative(a.val, None, val)
return cls.chain_rule(lda, a.derivative, None, None)
@classmethod
def backprop(cls, a, b, val, dfdc):
"""
Propagation for reverse/mix mode. Suppose current operation : c = a op b is one step of f(x), by chain rule,
we have: df/da = df/dc * dc/da, df/db = df/dc * dc/db.
Args:
a: the first operand, a Var instance
b: the second operand, a Var instance, could be None
val: the numerical operation result
dfdc: the backprop gradient.
Returns:
None
"""
if cls.n_operands == 2:
lda, ldb = cls.local_derivative(a.val, b.val, val,
skip_lda=a.is_const,
skip_ldb=b.is_const)
a._bpgrad.update(cls.chain_rule(lda, dfdc, None, None, False))
b._bpgrad.update(cls.chain_rule(None, None, ldb, dfdc, False))
else:
lda = cls.local_derivative(a.val, None, val)
a._bpgrad.update(cls.chain_rule(lda, dfdc, None, None, False))
@classmethod
def merge_fwd_backprop(cls, dcdxs, dfdc):
"""
Merge derivatives from forward mode and reverse mode. Suppose current node is c, in mix mode. W.r.t x, we have
dc/dx and df/dc, then PART of df/dx is df/dc * dc/dx.
Args:
dcdxs: a dict like {'x': dcdx, 'y': dcdy}
dfdc: a dict like {f: dfdc}
Returns:
a dict like {'x': dfdc (part), 'y': dfdy (part)}
"""
dfdxs = {}
for wrt in dcdxs:
dfdxs[wrt] = np.einsum("ijpq, klij->klpq", dcdxs[wrt], dfdc)
return dfdxs
@classmethod
def binary_operation(cls, a, b):
"""
A universal binary operation process. Newly defined operations (class) do not need to re-write it.
Args:
            a: a Number or np.ndarray or `Var` instance, the first operand of the calculation
            b: a Number or np.ndarray or `Var` instance, the second operand of the calculation
Returns:
A `Var` instance whose `.val` is the numerical value of the operation and `.derivative` containing
the derivative w.r.t. the involved variables.
"""
if not isinstance(a, Var):
a = Var(a)
if not isinstance(b, Var):
b = Var(b)
        # Disallow numpy auto-broadcasting between two matrices (scalar-with-matrix is still allowed),
        # otherwise the differentiation would be too complicated to deal with
if np.ndim(a.val) > 0 and np.ndim(b.val) > 0 and cls.__name__ != "VarMatMul":
assert a.val.shape == b.val.shape, f"Shapes mismatch: {a.val.shape} != {b.val.shape}"
# S1: calculate numerical result
val = cls.op(a.val, b.val)
# S2: get mode of the result
mode = cls.merge_modes(a.mode, b.mode)
# Prepare params for constructing a Var instance to contain the operation result
params = dict(derivative={},
_var_shapes=cls.merge_var_shapes(a._var_shapes, b._var_shapes),
mode=mode,
_context=[cls, [a, b]])
# Reverse/mix mode vars will calculate derivative later (when .diff() is called)
if mode not in (Mode.Forward, Mode.NONE):
return Var(val, **params)
params["derivative"] = cls.fwdprop(a, b, val)
return Var(val, **params)
@classmethod
def unary_operation(cls, a):
"""
A universal unary operation process. Newly defined operations (class) do not need to re-write it.
Args:
            a: a Number or np.ndarray or `Var` instance, the first operand of the calculation
Returns:
A `Var` instance whose `.val` is the numerical value of the operation and `.derivative` containing
the derivative w.r.t. the involved variables.
"""
if not isinstance(a, Var):
a = Var(a)
# S1: calculate numerical result
val = cls.op(a.val, None)
# S2: inherit the mode for the result
mode = a.mode
# Prepare params for constructing a Var instance to contain the operation result
params = dict(derivative={},
_var_shapes=cls.merge_var_shapes(a._var_shapes),
mode=mode,
_context=[cls, [a]])
if mode not in (Mode.Forward, Mode.NONE):
return Var(val, **params)
params["derivative"] = cls.fwdprop(a, None, val)
return Var(val, **params)
class VarNeg(Ops):
"""
    A class for the unary negation operator. Gives the value and local derivative.
This class inherits from the Ops Class.
To use:
>>> -Var(1, 'x')
(<class 'pyautodiff.var.Var'> name: None val: -1, der: {'x': array([[[[-1.]]]])})
"""
symbol = "-"
@classmethod
def op(cls, va, vb):
return -va
@classmethod
def local_derivative(cls, va, vb, vc, **kwargs):
return -1
class VarPos(Ops):
"""
    A class for the unary plus operator. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> +Var(1, 'x')
(<class 'pyautodiff.var.Var'> name: None val: 1, der: {'x': array([[[[1.]]]])})
"""
symbol = "+"
@classmethod
def op(cls, va, vb):
return va
@classmethod
def local_derivative(cls, va, vb, vc, **kwargs):
return 1
class VarAbs(Ops):
"""
A class for absolute values. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> abs(Var(1, 'x'))
(<class 'pyautodiff.var.Var'> name: None val: 1, der: {'x': array([[[[1.]]]])})
"""
@classmethod
def op(cls, va, vb):
return abs(va)
@classmethod
def local_derivative(cls, va, vb, vc, **kwargs):
d = np.ones_like(va)
d[va < 0] = -1
try:
# scalar
return d.item()
except:
return d
class VarAdd(Ops):
"""
A class for addition. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> Var(1, 'x') + 1
(<class 'pyautodiff.var.Var'> name: None val: 2, der: {'x': array([[[[1.]]]])})
>>> Var(1, 'x') + Var(2, 'y')
(<class 'pyautodiff.var.Var'> name: None val: 3, der: {'x': array([[[[1.]]]]), 'y': array([[[[1.]]]])})
"""
n_operands = 2
symbol = "+"
@classmethod
def op(cls, va, vb):
return va + vb
@classmethod
def local_derivative(cls, va, vb, vc, **kwargs):
return 1, 1
class VarSub(Ops):
"""
    A class for subtraction. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> Var(1, 'x') - 1
(<class 'pyautodiff.var.Var'> name: None val: 0, der: {'x': array([[[[1.]]]])})
>>> Var(1, 'x') - Var(2, 'y')
(<class 'pyautodiff.var.Var'> name: None val: -1, der: {'x': array([[[[1.]]]]), 'y': array([[[[-1.]]]])})
"""
n_operands = 2
symbol = "-"
@classmethod
def op(cls, va, vb):
return va - vb
@classmethod
def local_derivative(cls, va, vb, vc, **kwargs):
return 1, -1
class VarMul(Ops):
"""
A class for multiplication. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> Var(1, 'x') * 2
(<class 'pyautodiff.var.Var'> name: None val: 2, der: {'x': array([[[[2.]]]])})
>>> Var(1, 'x') * Var(2, 'y')
(<class 'pyautodiff.var.Var'> name: None val: 2, der: {'x': array([[[[2.]]]]), 'y': array([[[[1.]]]])})
"""
n_operands = 2
symbol = "*"
@classmethod
def op(cls, va, vb):
return va * vb
@classmethod
def local_derivative(cls, va, vb, vc, **kwargs):
return vb, va
class VarTrueDiv(Ops):
"""
A class for division. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> Var(4, 'x') / 2
(<class 'pyautodiff.var.Var'> name: None val: 2.0, der: {'x': array([[[[0.5]]]])})
>>> Var(4, 'x') / Var(2, 'y')
(<class 'pyautodiff.var.Var'> name: None val: 2.0, der: {'x': array([[[[0.5]]]]), 'y': array([[[[-1.]]]])})
"""
n_operands = 2
symbol = "/"
@classmethod
def op(cls, va, vb):
return va / vb
@classmethod
def local_derivative(cls, va, vb, vc, skip_lda=False, skip_ldb=False):
if skip_ldb:
return 1 / vb, 0
if skip_lda:
return 0, -vc / vb
return 1 / vb, -vc / vb
class VarPow(Ops):
"""
A class for power operation. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> Var(2, 'x') ** 2
(<class 'pyautodiff.var.Var'> name: None val: 4, der: {'x': array([[[[4.]]]])})
>>> Var(4, 'x') ** Var(2, 'y')
(<class 'pyautodiff.var.Var'> name: None val: 16, der: {'x': array([[[[8.]]]]), 'y': array([[[[22.18070978]]]])})
"""
n_operands = 2
symbol = "power"
@classmethod
def op(cls, va, vb):
return va ** vb
@classmethod
def local_derivative(cls, va, vb, vc, skip_lda=False, skip_ldb=False):
        """ Derivatives w.r.t. va and vb: dc/da = b * a^(b-1), dc/db = a^b * ln(a) """
if skip_ldb:
return vb * (va ** (vb - 1)), 0
if skip_lda:
return 0, np.log(va) * vc
return vb * (va ** (vb - 1)), np.log(va) * vc
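# The two local derivatives can be checked against the class docstring example
# above (`pow_local_derivatives` is a hypothetical standalone version of the
# same rules, using math instead of numpy):

```python
import math

def pow_local_derivatives(va, vb):
    """For c = va ** vb: dc/da = b * a^(b-1), dc/db = a^b * ln(a)."""
    vc = va ** vb
    return vb * va ** (vb - 1), math.log(va) * vc
```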
class VarExp(Ops):
"""
A class for exponential operation. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> exp(Var(0, 'x'))
(<class 'pyautodiff.var.Var'> name: None val: 1.0, der: {'x': array([[[[1.]]]])})
"""
@classmethod
def op(cls, va, vb):
return np.exp(va)
@classmethod
def local_derivative(cls, va, vb, vc, **kwargs):
""" c = e^a --> c' = e^a"""
return vc
class VarLog(Ops):
"""
A class for logarithm. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> log(Var(1, 'x'))
(<class 'pyautodiff.var.Var'> name: None val: 0.0, der: {'x': array([[[[1.]]]])})
"""
n_operands = 2
@classmethod
def op(cls, va, vb):
""" log_vb(va) """
return np.log(va) / np.log(vb)
@classmethod
def local_derivative(cls, va, vb, vc, skip_lda=False, skip_ldb=False):
        """ dc/da = 1/(a*ln(b)), dc/db = -log_b(a)/(b*ln(b)) """
inv_log_vb = 1 / np.log(vb)
if skip_ldb:
return 1 / va * inv_log_vb, 0
if skip_lda:
return 0, -vc * inv_log_vb / vb
return 1 / va * inv_log_vb, -vc * inv_log_vb / vb
@classmethod
    def binary_operation_with_base(cls, a, base=math.e):
        """ Wrap function to explicitly specify the base """
        return cls.binary_operation(a, base)
class VarLogistic(Ops):
"""
    Logistic function: f(x) = 1 / (1 + exp(-x))
>>> sigmoid((Var(0, 'x')))
(<class 'pyautodiff.var.Var'> name: None val: 0.5, der: {'x': array([[[[0.25]]]])})
"""
@classmethod
def op(cls, va, vb):
return 1 / (1 + np.exp(-va))
@classmethod
def local_derivative(cls, va, vb, vc, skip_lda=False, skip_ldb=False):
return vc * (1 - vc)
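# The identity sigma'(x) = sigma(x) * (1 - sigma(x)) used in `local_derivative`
# can be sketched with plain `math` (a hypothetical standalone helper):

```python
import math

def sigmoid_and_derivative(x):
    s = 1.0 / (1.0 + math.exp(-x))
    # The derivative reuses the output value, just like vc * (1 - vc) above
    return s, s * (1.0 - s)
```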
class VarMatMul(Ops):
"""
Matrix multiplication.
>>> (Var(np.array([[1],[2]]), 'x') @ Var(np.array([[0,1]]), 'y')).val.tolist()
[[0, 1], [0, 2]]
"""
n_operands = 2
symbol = "@"
@classmethod
def op(cls, va, vb):
return va @ vb
@classmethod
def local_derivative(cls, va, vb, vc, skip_lda=False, skip_ldb=False):
return vb, va
@classmethod
def chain_rule(cls, lda, da, ldb, db, forward_mode=True):
"""
Apply chain rule in forward mode for Matmul: c = a@b
Args:
            lda: A Number or np.ndarray, the local derivative dc/da (b.val in this case)
            da: a dict storing the derivatives of a.
                For example, {'x': da/dx, 'y': da/dy}, where 'x', 'y' are the involved variables.
            ldb: A Number or np.ndarray, the local derivative dc/db (a.val in this case)
            db: a dict storing the derivatives of b.
                For example, {'x': db/dx, 'y': db/dy}, where 'x', 'y' are the involved variables.
        Returns:
            A dict storing the derivatives of c obtained by applying the chain rule. For example,
            {'x': dc/dx, 'y': dc/dy} where 'x', 'y' are the involved variables.
"""
def _apply(d, ld, s):
if d is None:
return
for wrt in d:
dc[wrt] += np.einsum(s, d[wrt], ld)
dc = defaultdict(int)
_apply(da, lda, "pqkl,qr->prkl" if forward_mode else "mnpr,qr->mnpq")
_apply(db, ldb, "qrkl,pq->prkl" if forward_mode else "mnpr,pq->mnqr")
return dict(dc)
class VarTranspose(Ops):
"""
Transpose matrix.
>>> (Var(np.array([[1,2]]), 'x').T).val.tolist()
[[1], [2]]
"""
symbol = ".T"
@classmethod
def op(cls, va, vb):
return np.transpose(va)
@classmethod
def fwdprop(cls, a, b, val):
der = {}
for wrt in a.derivative:
der[wrt] = np.einsum('ijkl->jikl', a.derivative[wrt])
return der
@classmethod
def backprop(cls, a, b, val, dfdc):
bp = {}
for wrt in dfdc:
bp[wrt] = np.einsum('ijkl->ijlk', dfdc[wrt])
a._bpgrad.update(bp)
@classmethod
def unary_operation(cls, a):
"""
A universal unary operation process. Newly defined operations (class) do not need to re-write it.
Args:
            a: a Number or np.ndarray or `Var` instance, the first operand of the calculation
Returns:
A `Var` instance whose `.val` is the numerical value of the operation and `.derivative` containing
the derivative w.r.t. the involved variables.
"""
if not isinstance(a, Var):
a = Var(a)
val = cls.op(a.val, None)
mode = a.mode
params = dict(derivative={},
_var_shapes=cls.merge_var_shapes(a._var_shapes),
mode=mode,
_context=[cls, [a]])
if mode not in (Mode.Forward, Mode.NONE):
return Var(val, **params)
params["derivative"] = cls.fwdprop(a, None, val)
return Var(val, **params)
Var.__neg__ = _dunder_wrapper(VarNeg.unary_operation, True)
Var.__pos__ = _dunder_wrapper(VarPos.unary_operation, True)
Var.__abs__ = _dunder_wrapper(VarAbs.unary_operation, True)
# +=, -=, *=, /= are auto enabled.
Var.__radd__ = Var.__add__ = _dunder_wrapper(VarAdd.binary_operation)
Var.__sub__ = _dunder_wrapper(VarSub.binary_operation)
Var.__rsub__ = lambda a, b: VarSub.binary_operation(b, a)
Var.__rmul__ = Var.__mul__ = _dunder_wrapper(VarMul.binary_operation)
Var.__truediv__ = _dunder_wrapper(VarTrueDiv.binary_operation)
Var.__rtruediv__ = lambda a, b: VarTrueDiv.binary_operation(b, a)
Var.__pow__ = _dunder_wrapper(VarPow.binary_operation)
Var.__rpow__ = lambda a, b: VarPow.binary_operation(b, a)
pow = VarPow.binary_operation
# TODO: Fan
Var.__matmul__ = _dunder_wrapper(VarMatMul.binary_operation)
Var.transpose = transpose = _dunder_wrapper(VarTranspose.unary_operation, True)
exp = VarExp.unary_operation # enable exp(x)
Var.exp = _dunder_wrapper(exp, True) # enable x.exp()
log = VarLog.binary_operation_with_base
Var.log = _dunder_wrapper(log)
logistic = sigmoid = VarLogistic.unary_operation
Var.sigmoid = _dunder_wrapper(sigmoid, True)
sqrt = lambda x: x ** 0.5
Var.sqrt = sqrt
| 4dpyautodiff | /4dpyautodiff-1.0.0.tar.gz/4dpyautodiff-1.0.0/pyautodiff/ops.py | ops.py |
import time
from enum import Enum
import numpy as np
from graphviz import Digraph
from pyautodiff import Mode
class NodeType(Enum):
var = 0
const = 1
operation = 2
class Viser(object):
"""
A class to draw the computational graph and auto diff trace by graphviz.Digraph (directed graph).
To distinguish the computational graph and the AD trace, two families of arrow will be used:
    1. For the computational graph, full arrows are used: a black arrow marks the first operand, a white
    one the second.
    2. For the AD trace, half arrows are used: black for forward mode, white for reverse/mix mode. An upper
    half indicates the var is the first operand; a lower half indicates the second operand.
To use:
Viser(Var(1, 'x') + Var(2, 'y'), draw_AD_trace=False)
"""
def __init__(self, x, draw_AD_trace=False, horizontal=True):
"""
Args:
x: a Var instance containing the operation result
draw_AD_trace: defaults to False, plot the computational graph. If True, draw the AD trace
            horizontal: defaults to True, extend the plot horizontally (left->right); If False, draw the plot
                vertically (top->down).
"""
self.n_nodes = 1
self.ad_trace = draw_AD_trace
self.g = Digraph('Trace', format='png')
if horizontal:
self.g.attr(rankdir='LR')
self.g.node("0", label="output")
self._draw(x, "0")
@staticmethod
def _get_op_symbol(cls):
"""
Return the symbol of operation to display on the plot. For example, symbol of VarAdd: "+".
Args:
cls: Operation class
        Returns:
            a string to display for the operation
        """
if cls.symbol is None:
return cls.__name__[3:]
return cls.symbol
def _get_unique_id(self):
"""
Generate a unique id for node.
Returns:
a string for id
"""
return f"{time.process_time()}_{self.n_nodes}"
@staticmethod
def _get_color(xtype):
"""
Return the color for node by node(var) type.
Args:
xtype: node type
Returns:
a string for color
"""
return {
NodeType.var: None,
NodeType.const: "darkseagreen2",
NodeType.operation: "lavender",
}[xtype]
@staticmethod
def _get_shape(xtype):
"""
Return the shape for node by node(var) type.
Args:
xtype: node type
Returns:
a string for shape
"""
return {
NodeType.var: "oval",
NodeType.const: "oval",
NodeType.operation: "box",
}[xtype]
@staticmethod
def _get_style(xtype):
"""
Return the box style for node by node(var) type.
Args:
xtype: node type
Returns:
a string for box style
"""
return {
NodeType.var: None,
NodeType.const: "filled",
NodeType.operation: "filled",
}[xtype]
@staticmethod
def _get_arrow(is_second_operand=False, ad=False, reverse_mode=False):
"""
        Return the arrow type for an edge; for the arrow semantics, see the class docstring.
        Args:
            is_second_operand: True if the var is the second operand of the operation
            ad: True when drawing the AD trace, False for the computational graph
            reverse_mode: True for reverse/mix mode, False for forward mode
Returns:
a string for arrow type
"""
if ad:
if reverse_mode:
return "ornormal" if is_second_operand else "olnormal"
return "rnormal" if is_second_operand else "lnormal"
return "onormal" if is_second_operand else "normal"
@staticmethod
def _beatify_val(val):
"""Keep at most 3 digits for float"""
return np.around(val, 3)
def _draw(self, x, father, is_second_operand=False):
"""
        Draw the graph recursively. The graph is stored in self.g.
Be careful, the direction of the arrow is determined by the propagation direction.
Args:
x: a var instance, a member of a composite operation
father: x's "previous" node.
is_second_operand: True/False
Returns:
None
"""
try:
cls, operands = x._context
xid = self._get_unique_id()
xlabel = self._get_op_symbol(cls)
xtype = NodeType.operation
except:
operands = []
if x.name is None:
xid = self._get_unique_id()
xlabel = f"{self._beatify_val(x.val)}"
xtype = NodeType.const
else:
xid = xlabel = x.name
xtype = NodeType.var
self.g.node(xid, label=xlabel,
color=self._get_color(xtype),
shape=self._get_shape(xtype),
style=self._get_style(xtype))
if father is not None:
if self.ad_trace and x.mode != Mode.Forward:
self.g.edge(father, xid, arrowhead=self._get_arrow(is_second_operand, True, True))
else:
self.g.edge(xid, father, arrowhead=self._get_arrow(is_second_operand, self.ad_trace, False))
for i, t in enumerate(operands):
self._draw(t, xid, i == 1)
def show(self):
"""Show the plot. For IPython/jupyter notebook, call "self.g" directly"""
self.g.view(cleanup=True, directory="/tmp")
def save(self, path):
"""Pass in a string as path, save the plot to local"""
self.g.render(path)
| 4dpyautodiff | /4dpyautodiff-1.0.0.tar.gz/4dpyautodiff-1.0.0/pyautodiff/visualization.py | visualization.py |
from enum import Enum
from collections import deque, defaultdict, Counter
import time
from numbers import Number
import numpy as np
"""A global counter to count the total number of VarAutoName instances"""
G_VAR_AUTO_NAME_NUM = 0
class Mode(Enum):
Forward = 0
Reverse = 1
Mix = 2
NONE = -1
class Var(object):
"""
A class that holds variables with AutoDiff method in both forward and reverse mode.
Supports 20+ elementary functions (see ../ops.py) for scalar(Number) and 2D matrix (np.ndarray).
Attributes:
val (Number or np.ndarray): numerical value of this variable.
name (str or Number, optional): name assigned to the variable, such as "x","longlongname" or 1.1, 2.
        derivative (dict): a dict that stores the derivative values w.r.t. all involved variables. For example,
            {"x": numerical value of dfdx, "y": numerical value of dfdy}. A 4D matrix D is used to represent
            a single derivative such as dcdx: D_{ijkl} represents dc_{ij}/dx_{kl}.
mode (Mode): for newly declared instance, specify Forward or Reverse mode. The Mix mode will come from
the operation of a Forward mode and a non-Forward mode.
_var_shapes (dict): a dict that stores the shape of the involved variables used for squeezing the 4D
derivative matrix to 2D Jacobian matrix if necessary. For example, {"x": (1,1), "y": (2,1)}.
LEAVE IT ALONE when declare a new Var instance.
_context (list): [cls, operands] where "cls" represents the operation and operands is [a, b] if cls is
a binary operation else [a] for unary operation. LEAVE IT ALONE when declare a new Var instance.
        _bpgrad (dict): a dict that stores the temporary backpropagation gradient, used for reverse mode only.
For example, u.bp = {f: 1} means dfdu = 1. The key here is the hash value of f (output, a Var instance)
while the key in u.derivative is the name of x (input, a Var instance). Similarly, a 4D matrix D is used
to represent a single gradient dfdc: D_{ijkl} represents df_{ij}/dc_{kl}.
_degree (int): "out degree" = number of usage in the computational graph, used for reverse mode only.
"""
def __init__(self, val, name=None, derivative=None, mode=Mode.Forward, _var_shapes=None, _context=None):
"""
Args:
val (Number or np.ndarray): numerical value of the variable.
name (str or Number, optional): name of the variable. If None(default), the variable will be treated
as a constant, which means no derivative wrt this instance.
derivative (Number or np.ndarray, optional): a dict; Defaults to None. If name is None, derivative will be
set as an empty dict; If name is not None, derivative will be initialized as {name: 4D_matrix};
Number/np.ndarray can be passed in as the `seed` for this variable (name should not be None and the
shape of seed should match its value).
mode (Mode): Forward(default)/Reverse. The mode of const will be set as Mode.NONE.
_var_shapes (dict or None): Leave it None when declare an instance. See explanations above.
_context (list or None): Leave it None when declare an instance. See explanations above.
        To use:
>>> x = Var(1, 'x', mode=Mode.Forward)
>>> y = Var(np.array([[1],[2]]), 'y', mode=Mode.Reverse)
"""
self._val = val
if name is None or isinstance(name, (str, Number)):
self.name = name
else:
raise TypeError(f"name should be a str or Number, {type(name)} is not supported.")
# Init derivative
if isinstance(derivative, dict):
self.derivative = derivative
elif name is not None:
self.derivative = {name: self._init_seed(val, derivative)}
else:
if derivative is not None:
raise ValueError(f"Need a name!")
# Use {} instead of None to skip the type check when self.derivative is used
self.derivative = {}
# Be careful, this dict is designed for sharing across multiple instances
# which means for x = Var(1, 'x'), x.self._var_shapes can contain key="y" that is not x's "target wrt var"
self._var_shapes = _var_shapes
if name is not None:
try:
self._var_shapes[name] = np.shape(val)
except:
self._var_shapes = {name: np.shape(val)}
self.mode = Mode.NONE if self.is_const else mode
self._context = _context
# Used only for reverse mode
# Will be activated when ._reverse_diff() is called
self._degrees = None
self._bpgrad = None
def _init_seed(self, val, seed=None):
"""
Initialize the derivative for newly declared var instance. The shape of seed should match the shape of val.
Or exception will be thrown out. If val is scalar, seed must be a scalar too; If val is matrix, seed could
be a scalar or a matrix.
Args:
val: var's value, used for aligning the shape of val and derivative
seed: a Number or np.ndarray, defaults to None.
Returns:
a 4D matrix as the initial derivative.
For example (this function will be called in __init__):
>>> Var(1, 'x', 100).derivative['x'].tolist()
[[[[100.0]]]]
>>> Var(np.array([[1],[2]]), 'x', 2).derivative['x'].tolist() # output is np.ndarray
[[[[2.0], [0.0]]], [[[0.0], [2.0]]]]
>>> Var(np.array([[1],[2]]), 'x', np.array([[100],[200]])).derivative['x'].tolist()
[[[[100.0], [0.0]]], [[[0.0], [200.0]]]]
"""
if seed is None:
seed = 1
elif not isinstance(seed, (Number, np.ndarray, list)):
raise TypeError(f"Init derivative(seed) should be a ndarray or Number, {type(seed)} is not supported.")
seed = np.array(seed)
ndim = np.ndim(val)
# Init seed should be a scalar or the shape of seed should be equal to the shape of value
assert np.ndim(seed) == 0 or np.size(seed) == np.size(val), (
f"Initial derivative {np.shape(seed)} should match the shape of val {np.shape(val)}")
if ndim == 2:
k, l = val.shape
elif ndim == 0:
k = l = 1
else:
raise ValueError(f"Val only support scalar/2D-matrix. Input: {val.shape}")
return np.einsum('ij,kl->ikjl', np.eye(k) * seed, np.eye(l))
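# The einsum above builds a seed-scaled 4D identity: D_{ijkl} = seed when
# (i == k and j == l), else 0, i.e. d val_{ij} / d val_{kl}. A pure-Python
# sketch of the same construction (`init_seed_4d` is a hypothetical helper,
# shown only to make the 4D layout concrete):

```python
def init_seed_4d(k, l, seed=1.0):
    """D[i][j][p][q] = seed if (i == p and j == q) else 0."""
    return [[[[seed if (i == p and j == q) else 0.0 for q in range(l)]
              for p in range(k)]
             for j in range(l)]
            for i in range(k)]
```

# For a (2, 1) value with seed 2.0 this matches the docstring example
# [[[[2.0], [0.0]]], [[[0.0], [2.0]]]].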
def __str__(self):
return f"(val: {self.val}, der: {self.derivative})"
def __repr__(self):
return f"({self.__class__} name: {self.name} val: {self.val}, der: {self.derivative})"
def __eq__(self, b):
"""Only compare the `val` and `derivative`. `name` is ignored."""
if not isinstance(b, Var) or not np.allclose(self.val, b.val) or not (
self.derivative.keys() == b.derivative.keys()):
return False
for wrt in self.derivative.keys():
# TODO: Fan
# Use np.array_equal instead to check the shape?
if not np.allclose(self.derivative[wrt], b.derivative[wrt]):
return False
return True
@property
def val(self):
"""Return numerical value of variable"""
return self._val
@val.setter
def val(self, v):
"""Set numerical value for variable"""
self._val = v
def _squeeze_der(self, name, v):
"""
Squeeze the 4D derivative matrix to match the expectation of Jacobian matrix. The output shape is listed below:
Input type --> output type: Jacobian matrix type
Scalar --> scalar: scalar
Scalar --> vector((n,1) or (1,n)): 2D matrix(n,1)
Vector((n,1) or (1,n)) --> scalar: 2D matrix(1,n)
Vector((n,1) or (1,n)) --> Vector((m,1) or (1,m)): 2D matrix(m,n)
Matrix((m,n)) --> matrix((p,q)): 3D matrix if one of m,n,p,q is 1 else 4D matrix
Args:
name: name of target var instance
v: 4D derivative matrix
Returns:
A scalar or matrix, the squeezed derivative.
"""
shape = self._var_shapes[name]
if len(shape) == 0:
try:
return v.item()
except:
return np.squeeze(np.squeeze(v, -1), -1)
m, n, k, l = v.shape
        assert (k, l) == shape, f"var shape {shape} and der shape {(k, l)} mismatch!"
if l == 1:
v = np.squeeze(v, -1)
elif k == 1:
v = np.squeeze(v, -2)
if n == 1:
v = np.squeeze(v, 1)
elif m == 1:
v = np.squeeze(v, 0)
return v
def __hash__(self):
return id(self)
def _count_degrees(self):
"""
Count "out degree" for every involved var instance for reverse mode.
Returns: a dict where key = node, val = out degree
"""
q = deque()
q.append(self)
degrees = Counter()
visited = defaultdict(bool)
while len(q) > 0:
v = q.popleft()
if v._context is None:
continue
_, operands = v._context
for t in operands:
degrees[t] += 1
if not visited[t]:
visited[t] = True
q.append(t)
return degrees
def _backward(self):
"""
Recursively trace back along the computational graph to propagate the derivative from output to input.
See more explanations in code comments.
"""
# Two cases to "merge" the .derivative from the forward propagation and ._bpgrad from the back propagation
# if self.derivative is not None, two possible cases:
# 1. For target vars like x,y whose .derivative is initialized when declared;
# 2. For mix mode calculation, some node in forward mode in the trace has non-empty .derivative
# Be careful, the merged derivative could be part of the total derivative so we need to accumulate all.
if len(self.derivative) > 0:
from pyautodiff import Ops
# f: a var instance, dfdc: numerical derivative for dfdc (suppose current instance(self) is c)
f, dfdc = self._bpgrad.popitem()
# Merge two 4D matrix
d = Ops.merge_fwd_backprop(self.derivative, dfdc)
# Accumulate the derivatives
f.derivative = Counter(f.derivative)
f.derivative.update(d)
f.derivative = dict(f.derivative)
elif self._context is not None:
cls, operands = self._context
cls.backprop(operands[0],
operands[1] if len(operands) == 2 else None,
self.val,
self._bpgrad)
# Clear it for next BP
self._bpgrad.popitem()
for t in operands:
t._degree -= 1
# When t.degree is 0, dfdt is complete and safe to trace back
if t._degree == 0:
t._backward()
def _reverse_diff(self):
"""
Start AD of reverse mode.
"""
degrees = self._count_degrees()
for t in degrees:
t._degree = degrees[t]
t._bpgrad = Counter()
self._bpgrad = {self: self._init_seed(self.val)}
self._backward()
def diff(self, wrts=None, return_raw=False):
"""
Get derivative w.r.t. to each var in `wrts`.
Args:
wrts: single variable name or a list/tuple of variable names. Defaults to None, equals to `all`.
Returns:
a Number or np.ndarray if wrts is single variable name;
or a list of Number or np.ndarray that corresponds to each variable name in wrts, if wrts is a list/tuple;
or a dict with the variable name as a key and value as a Number or np.ndarray, if wrts is None.
"""
# Reverse mode
if len(self.derivative) == 0 and self._context is not None:
self._reverse_diff()
der = self.derivative
keys = list(der.keys())
if not return_raw:
der = {x: self._squeeze_der(x, der[x]) for x in keys}
if wrts is None:
if len(keys) == 0:
return 0
if len(keys) == 1:
return der[keys[0]]
return der
elif isinstance(wrts, (list, tuple)):
return [der.get(w, 0) for w in wrts]
else:
try:
return der[wrts]
except Exception:
raise TypeError("wrts only supports None/list/tuple or a var name!")
@property
def T(self):
"""
To support x.T
Returns: Transposed matrix
"""
return self.transpose()
@property
def is_const(self):
"""Const like: Var(1)"""
return self._var_shapes is None
class VarAutoName(Var):
"""
A wrapper class for class `Var`. Variable names are auto-generated by combining the current number of
instances of `VarAutoName` and system process time to avoid duplicate names.
"""
def __init__(self, val, derivative=None, mode=Mode.Forward):
"""
Args:
val (Number or np.ndarray): numerical value; same as `val` in `Var`.
derivative: a dict or a Number/np.ndarray. Defaults to None. Same as `derivative` in `Var`.
"""
# TODO: Fan
# Add a Lock to protect G_VAR_AUTO_NAME_NUM
global G_VAR_AUTO_NAME_NUM
G_VAR_AUTO_NAME_NUM += 1
name = f"{G_VAR_AUTO_NAME_NUM}_{time.process_time()}"
super().__init__(val, name=name, derivative=derivative, mode=mode)
@staticmethod
def clear_var_counter():
"""
Clears G_VAR_AUTO_NAME_NUM in case of overflow.
Returns: None
"""
global G_VAR_AUTO_NAME_NUM
G_VAR_AUTO_NAME_NUM = 0
| 4dpyautodiff | /4dpyautodiff-1.0.0.tar.gz/4dpyautodiff-1.0.0/pyautodiff/var.py | var.py |
import numpy as np
from pyautodiff import Var, Ops
from pyautodiff.ops import _dunder_wrapper
class VarSin(Ops):
"""
A class for trigonometric sine operation. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> sin(Var(0, 'x'))
(<class 'pyautodiff.var.Var'> name: None val: 0.0, der: {'x': array([[[[1.]]]])})
"""
@classmethod
def op(cls, va, vb):
return np.sin(va)
@classmethod
def local_derivative(cls, va, vb, vc, **kwargs):
return np.cos(va)
class VarCos(Ops):
"""
A class for trigonometric cosine operation. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> cos(Var(0, 'x'))
(<class 'pyautodiff.var.Var'> name: None val: 1.0, der: {'x': array([[[[0.]]]])})
"""
@classmethod
def op(cls, va, vb):
return np.cos(va)
@classmethod
def local_derivative(cls, va, vb, vc, **kwargs):
return -np.sin(va)
class VarTan(Ops):
"""
A class for trigonometric tangent operation. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> tan(Var(0, 'x'))
(<class 'pyautodiff.var.Var'> name: None val: 0.0, der: {'x': array([[[[1.]]]])})
"""
@classmethod
def op(cls, va, vb):
return np.tan(va)
@classmethod
def local_derivative(cls, va, vb, vc, **kwargs):
return (1 / np.cos(va)) ** 2
class VarArcSin(Ops):
"""
A class for trigonometric arcsine operation. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> arcsin(Var(0, 'x'))
(<class 'pyautodiff.var.Var'> name: None val: 0.0, der: {'x': array([[[[1.]]]])})
"""
@classmethod
def op(cls, va, vb):
return np.arcsin(va)
@classmethod
def local_derivative(cls, va, vb, vc, **kwargs):
return 1 / np.sqrt(1 - va ** 2)
class VarArcCos(Ops):
"""
A class for trigonometric arccosine operation. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> arccos(Var(0, 'x'))
(<class 'pyautodiff.var.Var'> name: None val: 1.5707963267948966, der: {'x': array([[[[-1.]]]])})
"""
@classmethod
def op(cls, va, vb):
return np.arccos(va)
@classmethod
def local_derivative(cls, va, vb, vc, **kwargs):
return -1 / np.sqrt(1 - va ** 2)
class VarArcTan(Ops):
"""
A class for trigonometric arctangent operation. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> arctan(Var(0, 'x'))
(<class 'pyautodiff.var.Var'> name: None val: 0.0, der: {'x': array([[[[1.]]]])})
"""
@classmethod
def op(cls, va, vb):
return np.arctan(va)
@classmethod
def local_derivative(cls, va, vb, vc, **kwargs):
return 1 / (1 + va ** 2)
class VarSinH(Ops):
"""
A class for trigonometric hyperbolic sine operation. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> sinh(Var(0, 'x'))
(<class 'pyautodiff.var.Var'> name: None val: 0.0, der: {'x': array([[[[1.]]]])})
"""
@classmethod
def op(cls, va, vb):
return np.sinh(va)
@classmethod
def local_derivative(cls, va, vb, vc, **kwargs):
""" derivative of sinh(x) = cosh(x)"""
return np.cosh(va)
class VarCosH(Ops):
"""
A class for trigonometric hyperbolic cosine operation. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> cosh(Var(0, 'x'))
(<class 'pyautodiff.var.Var'> name: None val: 1.0, der: {'x': array([[[[0.]]]])})
"""
@classmethod
def op(cls, va, vb):
return np.cosh(va)
@classmethod
def local_derivative(cls, va, vb, vc, **kwargs):
""" derivative of cosh(x) = sinh(x)"""
return np.sinh(va)
class VarTanH(Ops):
"""
A class for trigonometric hyperbolic tangent operation. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> tanh(Var(1, 'x'))
(<class 'pyautodiff.var.Var'> name: None val: 0.7615941559557649, der: {'x': array([[[[0.41997434]]]])})
"""
@classmethod
def op(cls, va, vb):
return np.tanh(va)
@classmethod
def local_derivative(cls, va, vb, vc, **kwargs):
""" derivative of tanh(x) = 1 - tanh(x)^2
Args:
**kwargs:
"""
return 1 - np.tanh(va) ** 2
class VarArcSinH(Ops):
"""
A class for trigonometric hyperbolic arcsine operation. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> arcsinh(Var(0, 'x'))
(<class 'pyautodiff.var.Var'> name: None val: 0.0, der: {'x': array([[[[1.]]]])})
"""
@classmethod
def op(cls, va, vb):
return np.arcsinh(va)
@classmethod
def local_derivative(cls, va, vb, vc, **kwargs):
""" for all real va """
return 1 / np.sqrt((va ** 2) + 1)
class VarArcCosH(Ops):
"""
A class for trigonometric hyperbolic arccosine operation. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> arccosh(Var(2, 'x'))
(<class 'pyautodiff.var.Var'> name: None val: 1.3169578969248166, der: {'x': array([[[[0.57735027]]]])})
"""
@classmethod
def op(cls, va, vb):
return np.arccosh(va)
@classmethod
def local_derivative(cls, va, vb, vc, **kwargs):
""" for all real va>1
Args:
**kwargs:
"""
assert (va > 1), "va should be greater than 1."
return 1 / np.sqrt((va ** 2) - 1)
class VarArcTanH(Ops):
"""
A class for trigonometric hyperbolic arctan operation. Gives the value and local derivative.
This class inherits from the Ops Class.
>>> arctanh(Var(0, 'x'))
(<class 'pyautodiff.var.Var'> name: None val: 0.0, der: {'x': array([[[[1.]]]])})
"""
@classmethod
def op(cls, va, vb):
""" the domain of arctanh is (-1, 1) """
assert (np.abs(va) < 1), "The value inside arctanh should be between (-1, 1)."
return np.arctanh(va)
@classmethod
def local_derivative(cls, va, vb, vc, **kwargs):
""" derivative of arctanh(x) = 1 / (1-x^2)
Args:
**kwargs:
"""
return 1 / (1 - va ** 2)
sin = VarSin.unary_operation # enable sin(x)
Var.sin = _dunder_wrapper(sin, True) # enable x.sin()
arcsin = VarArcSin.unary_operation
Var.arcsin = _dunder_wrapper(arcsin, True)
cos = VarCos.unary_operation
Var.cos = _dunder_wrapper(cos, True)
arccos = VarArcCos.unary_operation
Var.arccos = _dunder_wrapper(arccos, True)
tan = VarTan.unary_operation
Var.tan = _dunder_wrapper(tan, True)
arctan = VarArcTan.unary_operation
Var.arctan = _dunder_wrapper(arctan, True)
sinh = VarSinH.unary_operation
Var.sinh = _dunder_wrapper(sinh, True)
arcsinh = VarArcSinH.unary_operation
Var.arcsinh = _dunder_wrapper(arcsinh, True)
cosh = VarCosH.unary_operation
Var.cosh = _dunder_wrapper(cosh, True)
arccosh = VarArcCosH.unary_operation
Var.arccosh = _dunder_wrapper(arccosh, True)
tanh = VarTanH.unary_operation
Var.tanh = _dunder_wrapper(tanh, True)
arctanh = VarArcTanH.unary_operation
Var.arctanh = _dunder_wrapper(arctanh, True)
| 4dpyautodiff | /4dpyautodiff-1.0.0.tar.gz/4dpyautodiff-1.0.0/pyautodiff/trig_ops.py | trig_ops.py |
from .var import Var, VarAutoName, Mode
from .ops import Ops, exp, log, pow, sqrt, transpose, sigmoid, logistic
from .trig_ops import sin, cos, tan, sinh, cosh, tanh, arcsin, arccos, arctan, arcsinh, arccosh, arctanh
from .visualization import Viser | 4dpyautodiff | /4dpyautodiff-1.0.0.tar.gz/4dpyautodiff-1.0.0/pyautodiff/__init__.py | __init__.py |
📦 setup.py (for humans)
=======================
This repo exists to provide [an example setup.py] file, that can be used
to bootstrap your next Python project. It includes some advanced
patterns and best practices for `setup.py`, as well as some
commented-out nice-to-haves.
For example, this `setup.py` provides a `$ python setup.py upload`
command, which creates a *universal wheel* (and *sdist*) and uploads
your package to [PyPi] using [Twine], without the need for an annoying
`setup.cfg` file. It also creates/uploads a new git tag, automatically.
In short, `setup.py` files can be daunting to approach when first
starting out; even Guido has been heard saying, "everyone cargo cults
them". It's true, so I want this repo to be the best place to
copy-paste from :)
[Check out the example!][an example setup.py]
Installation
-----
```bash
cd your_project
# Download the setup.py file:
# download with wget
wget https://raw.githubusercontent.com/navdeep-G/setup.py/master/setup.py -O setup.py
# download with curl
curl -O https://raw.githubusercontent.com/navdeep-G/setup.py/master/setup.py
```
To Do
-----
- Tests via `$ setup.py test` (if it's concise).
Pull requests are encouraged!
More Resources
--------------
- [What is setup.py?] on Stack Overflow
- [Official Python Packaging User Guide](https://packaging.python.org)
- [The Hitchhiker's Guide to Packaging]
- [Cookiecutter template for a Python package]
License
-------
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any means.
[an example setup.py]: https://github.com/navdeep-G/setup.py/blob/master/setup.py
[PyPi]: https://docs.python.org/3/distutils/packageindex.html
[Twine]: https://pypi.python.org/pypi/twine
[image]: https://farm1.staticflickr.com/628/33173824932_58add34581_k_d.jpg
[What is setup.py?]: https://stackoverflow.com/questions/1471994/what-is-setup-py
[The Hitchhiker's Guide to Packaging]: https://the-hitchhikers-guide-to-packaging.readthedocs.io/en/latest/creation.html
[Cookiecutter template for a Python package]: https://github.com/audreyr/cookiecutter-pypackage
| 4in | /4in-0.1.0.tar.gz/4in-0.1.0/README.md | README.md |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Note: To use the 'upload' functionality of this file, you must:
# $ pipenv install twine --dev
import io
import os
import sys
from shutil import rmtree
from setuptools import find_packages, setup, Command
# Package meta-data.
NAME = '4in'
DESCRIPTION = 'My short description for my project.'
URL = 'https://github.com/qu6zhi/4in'
EMAIL = 'qu6zhi@qq.com'
AUTHOR = 'qu6zhi'
REQUIRES_PYTHON = '>=3.6.0'
VERSION = '0.1.0'
# What packages are required for this module to be executed?
REQUIRED = [
# 'requests', 'maya', 'records',
]
# What packages are optional?
EXTRAS = {
# 'fancy feature': ['django'],
}
# The rest you shouldn't have to touch too much :)
# ------------------------------------------------
# Except, perhaps the License and Trove Classifiers!
# If you do change the License, remember to change the Trove Classifier for that!
here = os.path.abspath(os.path.dirname(__file__))
# Import the README and use it as the long-description.
# Note: this will only work if 'README.md' is present in your MANIFEST.in file!
try:
with io.open(os.path.join(here, 'README.md'), encoding='utf-8') as f:
long_description = '\n' + f.read()
except FileNotFoundError:
long_description = DESCRIPTION
# Load the package's __version__.py module as a dictionary.
about = {}
if not VERSION:
project_slug = NAME.lower().replace("-", "_").replace(" ", "_")
with open(os.path.join(here, project_slug, '__version__.py')) as f:
exec(f.read(), about)
else:
about['__version__'] = VERSION
class UploadCommand(Command):
"""Support setup.py upload."""
description = 'Build and publish the package.'
user_options = []
@staticmethod
def status(s):
"""Prints things in bold."""
print('\033[1m{0}\033[0m'.format(s))
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
try:
self.status('Removing previous builds…')
rmtree(os.path.join(here, 'dist'))
except OSError:
pass
self.status('Building Source and Wheel (universal) distribution…')
os.system('{0} setup.py sdist bdist_wheel --universal'.format(sys.executable))
self.status('Uploading the package to PyPI via Twine…')
os.system('twine upload dist/*')
self.status('Pushing git tags…')
os.system('git tag v{0}'.format(about['__version__']))
os.system('git push --tags')
sys.exit()
# Where the magic happens:
setup(
name=NAME,
version=about['__version__'],
description=DESCRIPTION,
long_description=long_description,
long_description_content_type='text/markdown',
author=AUTHOR,
author_email=EMAIL,
python_requires=REQUIRES_PYTHON,
url=URL,
packages=find_packages(exclude=["tests", "*.tests", "*.tests.*", "tests.*"]),
# If your package is a single module, use this instead of 'packages':
# py_modules=['mypackage'],
# entry_points={
# 'console_scripts': ['mycli=mymodule:cli'],
# },
install_requires=REQUIRED,
extras_require=EXTRAS,
include_package_data=True,
license='MIT',
classifiers=[
# Trove classifiers
# Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers
'License :: OSI Approved :: MIT License',
'Programming Language :: Python',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: Implementation :: CPython',
'Programming Language :: Python :: Implementation :: PyPy'
],
# $ setup.py publish support.
cmdclass={
'upload': UploadCommand,
},
)
| 4in | /4in-0.1.0.tar.gz/4in-0.1.0/setup.py | setup.py |
# 8b d8 Yb dP 88""Yb db dP""b8 88 dP db dP""b8 888888
# 88b d88 YbdP 88__dP dPYb dP `" 88odP dPYb dP `" 88__
# 88YbdP88 8P 88""" dP__Yb Yb 88"Yb dP__Yb Yb "88 88""
# 88 YY 88 dP 88 dP""""Yb YboodP 88 Yb dP""""Yb YboodP 888888
VERSION = (5, 2, 0)
__version__ = '.'.join(map(str, VERSION))
| 4in | /4in-0.1.0.tar.gz/4in-0.1.0/mypackage/__version__.py | __version__.py |
# Insert your code here.
| 4in | /4in-0.1.0.tar.gz/4in-0.1.0/mypackage/core.py | core.py |
from .core import *
| 4in | /4in-0.1.0.tar.gz/4in-0.1.0/mypackage/__init__.py | __init__.py |
# 4logik python rest client
Utility package to call an endpoint generated by 4Logik
## Installation
Use pip
```
pip install 4logik-python-rest-client
```
## How to call a CSV endpoint
- Locate the input CSV file
- Identify the URL of the enpoint
- Identify the name of the data set in the response that contains the results
Example of using the package:
```python
from py4logik_python_rest_client.endpoint_caller import call_csv_endpoint, call_csv_endpoint_read_data_set
# input parameters
input_csv_file = "/home/user1/incomingData.csv"
endpoint_url = "http://myOrganization.myDeployedService.com/RiskCalulationProcess"
# call the endpoint
received_json_data = call_csv_endpoint(endpoint_url, input_csv_file)
print(received_json_data)
```
The result contains useful metadata, such as the number of business exceptions and the names of the available data sets, which you can print with:
```python
print(received_json_data["business_exceptions_quantity"])
print(received_json_data["data_sets_names"])
```
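For reference, `call_csv_endpoint` condenses the service's JSON response into a small metadata dict. The sketch below mirrors that parsing logic against an illustrative (made-up) response payload; the exception text and data set name are assumptions, not real service output:

```python
# Illustrative service response; the field names are the ones the client reads
# (businessExceptions, resultAdditionalData, inputFormatName).
raw = {
    "data_collection": {
        "businessExceptions": [{"exceptionComment": "row 7: missing rate"}],
        "resultAdditionalData": [
            {"inputFormatName": "ReportResult", "inputObject": {"rows": []}}
        ],
    }
}

# Same condensation the client performs before returning.
b_exceptions = raw["data_collection"]["businessExceptions"]
result = {
    "business_exceptions_quantity": len(b_exceptions),
    "business_exceptions_data": [
        {"exception_comment": e["exceptionComment"]} for e in b_exceptions
    ],
    "data_sets_names": [
        d["inputFormatName"] for d in raw["data_collection"]["resultAdditionalData"]
    ],
}
print(result["business_exceptions_quantity"])  # 1
print(result["data_sets_names"])  # ['ReportResult']
```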
To read the specific rows of a data set, call the method "call_csv_endpoint_read_data_set" sending the name of the data set, like this:
```python
specific_data_set_name_to_read = "ReportResult"
data_set_result_rows = call_csv_endpoint_read_data_set(endpoint_url, input_csv_file, specific_data_set_name_to_read)
print(data_set_result_rows)
```
## Example using the package inside Jupyter and converting the result to a data frame:
```python
import json
import pandas as pd
import tempfile
from py4logik_python_rest_client.endpoint_caller import call_csv_endpoint_read_data_set
# input parameters
input_csv_file = "/home/user1/incomingData.csv"
endpoint_url = "http://myOrganization.myDeployedService.com/RiskCalulationProcess"
dataset_name = "riskResult"
# call the endpoint
received_json_data = call_csv_endpoint_read_data_set(endpoint_url, input_csv_file, dataset_name)
# now convert the received json to panda
temp_file = tempfile.NamedTemporaryFile(delete=False)
output_json = temp_file.name
with open(output_json,'w', encoding='UTF_8') as f:
f.write(json.dumps(received_json_data))
f.close()
final_data_frame = pd.read_json(output_json)
final_data_frame
``` | 4logik-python-rest-client | /4logik-python-rest-client-1.0.4.tar.gz/4logik-python-rest-client-1.0.4/readme_for_pypi.md | readme_for_pypi.md |
"""Setup script for 4logik-python-rest-client"""
# Standard library imports
import pathlib
# Third party imports
from setuptools import setup
# The directory containing this file
HERE = pathlib.Path(__file__).resolve().parent
# The text of the README file is used as a description
README = (HERE / "readme_for_pypi.md").read_text()
# This call to setup() does all the work
setup(
name="4logik-python-rest-client",
version="1.0.4",
description="Execute microservice endpoint using HTTP REST",
long_description=README,
long_description_content_type="text/markdown",
url="https://www.4logik.com/",
keywords='python project',
author="Eugenia Morales",
author_email="eugeniamorales251@gmail.com",
license="MIT",
classifiers=[
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
],
packages=['py4logik_python_rest_client'],
install_requires=["requests"],
)
| 4logik-python-rest-client | /4logik-python-rest-client-1.0.4.tar.gz/4logik-python-rest-client-1.0.4/setup.py | setup.py |
""" Method to invoke endpoint http API"""
import requests
def call_csv_endpoint(
endpoint_url: str,
csv_input_file_name: str,
call_timeout: int = 15,
):
"""Send a CSV input file to a microservice HTTP endpoint and return summary metadata parsed from the JSON result."""
with open(csv_input_file_name, encoding="utf-8") as csv_file:
files = {"file": ("input_data.csv", csv_file, "text/csv")}
response_from_api = requests.post(
endpoint_url, timeout=call_timeout, files=files
)
# raise error if there was a problem calling the endpoint
response_from_api.raise_for_status()
# read result as json
result = response_from_api.json()
# count number of business exceptions
b_exceptions = result["data_collection"]["businessExceptions"]
b_exceptions_data = []
for b_excep in b_exceptions:
b_exceptions_data.append({"exception_comment": b_excep["exceptionComment"]})
# read names of additional data sets
data_sets = result["data_collection"]["resultAdditionalData"]
data_sets_names = []
# data_sets_results = []
for d_set in data_sets:
data_set_name = d_set["inputFormatName"]
data_sets_names.append(data_set_name)
# get data set rows
# input_object = d_set["inputObject"]
# for ikey in input_object.keys():
# data_sets_results.append({data_set_name: input_object[ikey]})
# prepare information to return
result_data = {
"business_exceptions_quantity": len(b_exceptions),
"business_exceptions_data": b_exceptions_data,
"data_sets_names": data_sets_names,
# "data_sets_results": data_sets_results,
}
return result_data
def call_csv_endpoint_read_data_set(
endpoint_url: str,
csv_input_file_name: str,
data_set_name_to_return: str,
call_timeout: int = 15,
):
"""Send a CSV input file to a microservice HTTP endpoint and return the rows of the named data set from the JSON result."""
with open(csv_input_file_name, encoding="utf-8") as csv_file:
files = {"file": ("input_data.csv", csv_file, "text/csv")}
response_from_api = requests.post(
endpoint_url, timeout=call_timeout, files=files
)
# raise error if there was a problem calling the endpoint
response_from_api.raise_for_status()
# read result as json
result = response_from_api.json()
# read names of additional data sets
data_sets = result["data_collection"]["resultAdditionalData"]
for d_set in data_sets:
data_set_name = d_set["inputFormatName"]
if data_set_name == data_set_name_to_return:
input_object = d_set["inputObject"]
for ikey in input_object.keys():
return input_object[ikey]
# if reach this point the data set name was not found
return {}
| 4logik-python-rest-client | /4logik-python-rest-client-1.0.4.tar.gz/4logik-python-rest-client-1.0.4/py4logik_python_rest_client/endpoint_caller.py | endpoint_caller.py |
You should definitely read this file. It will explain a lot. Corona. | 4nil0cin | /4nil0cin-0.1.tar.gz/4nil0cin-0.1/README.txt | README.txt |
from setuptools import setup
setup(name='4nil0cin',
version='0.1',
description='Gaussian distributions',
packages=['4nil0cin'],
zip_safe=False)
| 4nil0cin | /4nil0cin-0.1.tar.gz/4nil0cin-0.1/setup.py | setup.py |
#!/usr/bin/python3
# -*- coding: utf-8 -*-
# import _4quila
def _4ssert(expression, raisable=Exception):
if not expression:
raise raisable
| 4quila | /4quila-0.36.200302-py3-none-any.whl/_4helper/__init__.py | __init__.py |
#!/usr/bin/python3
# -*- coding: utf-8 -*-
# import _4quila
import json
import os
import inspect
from tornado.ioloop import IOLoop
from tornado.web import RequestHandler, Application
from tornado.websocket import WebSocketHandler
from _4helper import _4ssert
class WebServer:
@classmethod
def parse_ip_port(cls, ip_port):
if isinstance(ip_port, int) or ":" not in ip_port:
return "127.0.0.1", int(ip_port)
else:
ip, port = ip_port.split(":")
return ip, int(port)
@classmethod
def start(cls, config):
ip = config.get("ip", "127.0.0.1")
port = int(config.get("port", "80"))
routes = config.get("routes", {"/": cls})
class _WebSocketHandler(WebSocketHandler):
async def open(self, *args, **kwargs):
print(f"open {args} {kwargs}")
async def on_close(self):
print("close")
async def on_message(self, message):
print(f"handling {message}")
self.write_message(f"got {message}")
class _Handler(RequestHandler):
SUPPORTED_METHODS = ["GET", "POST"]
async def get(self):
await self.handle()
async def post(self):
await self.handle(True)
async def handle(self, is_post=False):
match_handler = None
max_match_length = 0
for path, handler in routes.items():
if self.request.path.startswith(path):
match_length = len(path)
if match_length > max_match_length:
max_match_length = match_length
match_handler = handler
if match_handler is None:
self.set_status(404)
self.finish()
return
func_name = "handle_%s" % self.request.path[max_match_length:]
func = getattr(match_handler, func_name, None)
if func is None:
self.set_status(404)
self.finish()
return
if self.request.arguments:
request = dict(
(i, j[0].decode()) for i, j in self.request.arguments.items()
)
else:
request = json.loads(self.request.body or "{}")
request = dict((i, str(j)) for i, j in request.items())
func_parameters = inspect.signature(func).parameters
for key, value in (
("headers", self.request.headers),
("body", self.request.body),
):
_4ssert(key not in request)
if key in func_parameters:
request[key] = value
response = await func(**request)
if isinstance(response, dict):
self.write(json.dumps(response))
else:
self.write(response)
self.finish()
Application(
[(r"/websocket", _WebSocketHandler), (r".*", _Handler,)],
static_path=os.path.join(os.getcwd(), "static"),
).listen(port, address=ip)
IOLoop.current().start()
| 4quila | /4quila-0.36.200302-py3-none-any.whl/_4server/web.py | web.py |
# import _4quila
import json
import os
import inspect
from tornado.ioloop import IOLoop
from tornado.web import RequestHandler, Application
from tornado.websocket import WebSocketHandler
from _4helper import _4ssert
class _WebSocketHandler(WebSocketHandler):
async def open(self, *args, **kwargs):
print(f"open {args} {kwargs}")
async def on_close(self):
print("close")
async def on_message(self, message):
print(f"handling {message}")
self.write_message(f"got {message}")
class _Handler(RequestHandler):
SUPPORTED_METHODS = ["GET", "POST"]
def initialize(self, routes):
self.routes = routes #pylint: disable=attribute-defined-outside-init
async def get(self):
await self.handle()
async def post(self):
await self.handle()
async def handle(self):
match_handler = None
max_match_length = 0
for path, handler in self.routes.items():
if self.request.path.startswith(path):
match_length = len(path)
if match_length > max_match_length:
max_match_length = match_length
match_handler = handler
if match_handler is None:
self.set_status(404)
self.finish()
return
func_name = "handle_%s" % self.request.path[max_match_length:]
func = getattr(match_handler, func_name, None)
if func is None:
self.set_status(404)
self.finish()
return
if self.request.arguments:
request = dict(
(i, j[0].decode()) for i, j in self.request.arguments.items()
)
else:
request = json.loads(self.request.body or "{}")
request = dict((i, str(j)) for i, j in request.items())
func_parameters = inspect.signature(func).parameters
for key, value in (
("headers", self.request.headers),
("body", self.request.body),
):
_4ssert(key not in request)
if key in func_parameters:
request[key] = value
response = await func(**request)
if isinstance(response, dict):
self.write(json.dumps(response))
else:
self.write(response)
self.finish()
def start(settings):
ip = settings.pop("ip", "127.0.0.1")
port = int(settings.pop("port"))
Application(
[
(r"/websocket", _WebSocketHandler),
(r".*", _Handler, {"routes": settings.pop("routes", {})}),
],
static_path=os.path.join(os.getcwd(), "static"),
).listen(port, address=ip)
IOLoop.current().start()
| 4quila | /4quila-0.36.200302-py3-none-any.whl/_4server/__init__.py | __init__.py |
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import inspect
import logging
import sys
import traceback
import linecache
logger = logging.getLogger('_4quila')
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())
def format_frame(frame):
for k, v in frame.f_locals.items():
if k.startswith('__') or k.endswith('__'):
continue
if inspect.ismodule(v):
continue
if inspect.isfunction(v):
continue
if inspect.isclass(v):
continue
yield '%s->%s' % (k, v)
def format_stacks():
exc_type, exc, exc_traceback = sys.exc_info()
if not exc_type:
return
for tb in traceback.walk_tb(exc_traceback):
tb_frame, tb_lineno = tb
tb_filename = tb_frame.f_code.co_filename
tb_name = tb_frame.f_code.co_name
tb_line = linecache.getline(tb_frame.f_code.co_filename, tb_lineno).strip()
yield '%s[%s] %s: %s' % (tb_filename, tb_lineno, tb_name, tb_line)
for item in format_frame(tb_frame):
yield ' %s' % item
yield '%s %s' % (exc_type.__name__, exc)
| 4quila | /4quila-0.36.200302-py3-none-any.whl/_4quila/common.py | common.py |
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import inspect
from .common import format_frame, format_stacks, logger
def error(content, expands=None):
info(content, expands=expands, depth=2)
raise Exception("_4")
def info(content, expands=None, depth=1):
expand_lines = []
if expands:
for expand_index, expand in enumerate(expands):
try:
expand_lines.append("[%s] = %s" % (expand_index, expand))
for expand_field in dir(expand):
if expand_field.startswith("__") and expand_field.endswith("__"):
continue
expand_lines.append(
"[%s].%s->%s"
% (expand_index, expand_field, getattr(expand, expand_field))
)
except Exception:
continue
logger.info(
"\n".join(
["", ">>> %s >>>>" % content,]
+ list(format_stacks())
+ list(format_frame(inspect.stack()[depth].frame))
+ expand_lines
+ ["<<< %s <<<<" % content,]
)
)
| 4quila | /4quila-0.36.200302-py3-none-any.whl/_4quila/logger.py | logger.py |
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import time
import json
from functools import wraps
from random import randint
from .common import logger, format_stacks
def tracer(fn):
@wraps(fn)
def wrapper(*args, **kwargs):
log_id = randint(0, 100000)
start_time = time.time()
def _time():
# return '%0.4f' % (time.time() - start_time)
return "%sms" % int((time.time() - start_time) * 1000)
def _log(lines):
lines = (
["", "<<< trace_%s <<<<<<<<<<<" % log_id]
+ [" %s" % line for line in lines]
+ [">>> trace_%s >>>>>>>>>>>" % log_id, ""]
)
logger.info("\n".join(lines))
def _json(result, header=" " * 4):
if isinstance(result, dict):
return "\n".join(
"%s%s" % (header, i)
for i in json.dumps(result, indent=4).splitlines()
)
else:
try:
assert isinstance(result, str)
return "\n".join(
"%s%s" % (header, i)
for i in json.dumps(json.loads(result), indent=4).splitlines()
)
except Exception:
return result
if (
len(args) >= 3
and hasattr(args[1], "method")
and hasattr(args[1], "path")
and hasattr(args[2], "dict")
):
mode = "DJANGO_HANDLER"
else:
mode = ""
def _log_input():
if mode == "DJANGO_HANDLER":
return "%s:%s %s" % (args[1].method, args[1].path, args[2].dict())
else:
return "<----< %s %s" % (
" ".join(str(i) for i in args),
" ".join("%s:%s" % (k, v) for k, v in kwargs.items()),
)
def _log_output():
if mode == "DJANGO_HANDLER":
return "%s %s -> %s" % (
_time(),
result.status_code,
_json(result.content.decode("utf-8")),
)
else:
return ">----> %s" % _json(result)
_log([_log_input()])
try:
result = fn(*args, **kwargs)
except Exception:
_log([_log_input(),] + list(format_stacks()))
raise
else:
_log(
[_log_input(), _log_output(),]
)
return result
return wrapper
| 4quila | /4quila-0.36.200302-py3-none-any.whl/_4quila/tracer.py | tracer.py |
#!/usr/bin/python3
# -*- coding: utf-8 -*-
# import _4quila
import contextlib
import requests
import json
from .common import logger
class Session:
def __init__(self, domain, cookies=None, headers=None):
self._session = requests.session()
self.domain = domain
self.cookies = cookies or {}
self.headers = headers or {}
if "User-Agent" not in self.headers:
self.headers["User-Agent"] = (
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML,"
" like Gecko) Ubuntu Chromium/69.0.3497.81 Chrome/69.0.3497.81 Safari/537.36"
)
def close(self):
self._session.close()
def _request(self, method, path, params=None, data=None, headers=None):
params = params or {}
data = data or {}
headers = headers or {}
logger.info(
"%s ing %s" % (method, self.domain + path,)
+ (" params %s" % params if params else "")
+ (" data %s" % data if data else "")
+ (" headers %s" % headers if headers else "")
)
headers.update(self.headers)
response = self._session.request(
method,
self.domain + path,
data=json.dumps(data),
params=params,
cookies=self.cookies,
headers=headers,
)
try:
response_json = response.json()
logger.info("responding json:\n%s" % json.dumps(response_json, indent=4))
return response_json
except Exception:
logger.info("responding text:\n%s" % ("".join(response.text.splitlines())))
return response.text
def get(self, path, params=None, headers=None):
return self._request("GET", path, params=params, headers=headers)
def post(self, path, data=None, headers=None):
return self._request("POST", path, data=data, headers=headers)
def head(self, path, params=None, headers=None):
return self._request("HEAD", path, params=params, headers=headers)
@contextlib.contextmanager
def session(domain, cookies=None, headers=None):
_session = Session(domain, cookies=cookies, headers=headers)
yield _session
_session.close()
| 4quila | /4quila-0.36.200302-py3-none-any.whl/_4quila/browser.py | browser.py |
#!/usr/bin/python3
# -*- coding: utf-8 -*-
# import _4quila
from tornado.ioloop import IOLoop
def start():
return IOLoop.current().start()
def run(func):
return IOLoop.current().run_sync(func)
| 4quila | /4quila-0.36.200302-py3-none-any.whl/_4quila/loop.py | loop.py |
#!/usr/bin/python3
# -*- coding: utf-8 -*-
# import _4quila
import os
import tempfile
import fcntl
import contextlib
@contextlib.contextmanager
def lock(lock_id):
basename = '%s.lock' % lock_id
lockfile = os.path.normpath(tempfile.gettempdir() + '/' + basename)
fp = open(lockfile, 'w')
fcntl.lockf(fp, fcntl.LOCK_EX | fcntl.LOCK_NB)
    try:
        yield
    finally:
        # Always release the lock and remove the lock file, even if the caller raised
        fcntl.lockf(fp, fcntl.LOCK_UN)
        fp.close()
        if os.path.isfile(lockfile):
            os.unlink(lockfile)
| 4quila | /4quila-0.36.200302-py3-none-any.whl/_4quila/locker.py | locker.py |
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import builtins
from . import logger as _4logger
from .tracer import tracer as _4tracer
builtins._4logger = _4logger
builtins._4tracer = _4tracer
| 4quila | /4quila-0.36.200302-py3-none-any.whl/_4quila/__init__.py | __init__.py |
#!/usr/bin/python3
# -*- coding: utf-8 -*-
# import _4quila
from tornado.web import RequestHandler, Application
from tornado.websocket import WebSocketHandler
import json
import os
import inspect
from . import loop
def parse_ip_port(ip_port):
if isinstance(ip_port, int) or ":" not in ip_port:
return "127.0.0.1", int(ip_port)
else:
ip, port = ip_port.split(":")
return ip, int(port)
def http(ip_port, handlers=None):
class _WebSocketHandler(WebSocketHandler):
async def open(self, *args, **kwargs):
print(f"open {args} {kwargs}")
async def on_close(self):
print("close")
async def on_message(self, message):
print(f"handling {message}")
self.write_message(f"got {message}")
class _Handler(RequestHandler):
SUPPORTED_METHODS = ["GET", "POST"]
async def get(self):
await self.handle()
async def post(self):
await self.handle(True)
async def handle(self, is_post=False):
match_handler = None
max_match_length = 0
for path, handler in handlers.items():
if self.request.path.startswith(path):
match_length = len(path)
if match_length > max_match_length:
max_match_length = match_length
match_handler = handler
if match_handler is None:
self.set_status(404)
self.finish()
return
func_name = "handle_%s" % self.request.path[max_match_length:]
func = getattr(match_handler, func_name, None)
if func is None:
self.set_status(404)
self.finish()
return
if self.request.arguments:
request = dict(
(i, j[0].decode()) for i, j in self.request.arguments.items()
)
else:
request = json.loads(self.request.body or "{}")
request = dict((i, str(j)) for i, j in request.items())
if "headers" in inspect.signature(func).parameters:
response = await func(**request, headers=self.request.headers)
else:
response = await func(**request)
if isinstance(response, dict):
self.write(json.dumps(response))
else:
self.write(response)
self.finish()
ip, port = parse_ip_port(ip_port)
Application(
[(r"/websocket", _WebSocketHandler), (r".*", _Handler,)],
static_path=os.path.join(os.getcwd(), "static"),
).listen(port, address=ip)
loop.start()
| 4quila | /4quila-0.36.200302-py3-none-any.whl/_4quila/server.py | server.py |
# 4scanner [](https://travis-ci.org/pboardman/4scanner)

4scanner can search threads on multiple imageboards for matching keywords, then download all of their images to disk.
## Supported imageboards
- 4chan
- lainchan
- uboachan
You can create an issue if you want to see other imageboards supported
## Installing
` pip3 install 4scanner `
(4scanner is ONLY compatible with Python 3)
For Arch Linux there is an [AUR package](https://aur.archlinux.org/packages/4scanner/)
## Running via Docker
Create a config (detailed below), name it config.json and drop it where you would like to download the images. Then run a container:
`docker run -v /can/be/anywhere:/output -v /anywhere/else:/root/.4scanner lacsap/4scanner`
`/can/be/anywhere` Can be anywhere on your computer, images will be downloaded there (This is the directory where you need to put the config.json)
`/anywhere/else` Can be anywhere on your computer, it will contain the sqlite3 database 4scanner uses to keep track of downloaded threads and duplicates
## How to
The first thing you need to do is create a simple JSON file with the directory names
you want, the boards you want to search and the keywords.
(see the JSON file section for more details)
After your json file is done you can start 4scanner with:
` 4scanner file.json `
It will search all threads for the keywords defined in your JSON file and
download all images/webms from threads where a keyword is found (into the current directory, unless you specify one with -o).
## Creating your JSON file via the 4genconf script (easy)
The `4genconf` utility is now installed as of 4scanner 1.5.1. This utility will ask you simple questions about what you want to download and generate a configuration file for you!
## Creating your JSON file manually
Creating the JSON file is easy, you can use the example.json file as a base.
Your "Searches" are what 4scanner use to know which board to check for what keywords and the name of the folder where it needs to download the images, you can have as many "Searches" as you want.
Here is an example of what the JSON file should look like:
```json
{"searches":[
{
"imageboard": "IMAGEBOARD",
"folder_name": "YOUR_FOLDER_NAME",
"board": "BOARD_LETTER",
"keywords": ["KEYWORD1", "KEYWORD2"]
},
{
"imageboard": "4chan",
"folder_name": "vidya",
"board": "v",
"keywords": ["tf2", "splatoon", "world of tank"]
}
]}
```
## Search options
4scanner has a lot of options for downloading only the images you want, such as downloading only images with a certain width or height, or only images with a certain extension.
To see all available options with examples check out: [OPTIONS.md](OPTIONS.md)
[Hydrus Network](https://hydrusnetwork.github.io/hydrus/) users: check out the `tag` [option](OPTIONS.md) to automatically tag your images on import
- Example with all optional options
```json
{"searches":[
{
"imageboard": "4chan",
"folder_name": "vidya",
"board": "v",
"width": ">1000",
"height": ">1000",
"filename": "IMG_",
"extension": [".jpg", ".png"],
"tag": ["game"],
"keywords": ["tf2", "splatoon", "world of tank"],
"check_duplicate": true,
"subject_only": false
}
]}
```
This will download .jpg and .png images larger than 1000x1000 whose filenames contain ``` IMG_ ```
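The `width` and `height` values use a comparison prefix (`>`, `<` or `=`). As an illustration only (this helper is not part of 4scanner), evaluating such a condition string can be sketched like this:

```python
def size_condition_matches(condition, value):
    """Evaluate a size condition string like '>1000', '<256' or '=1920'."""
    op, threshold = condition[0], int(condition[1:])
    if op == "=":
        return value == threshold
    if op == "<":
        return value < threshold
    if op == ">":
        return value > threshold
    raise ValueError("condition must start with '=', '<' or '>'")

# A 1920x1080 image passes the example config above ("width": ">1000", "height": ">1000")
print(size_condition_matches(">1000", 1920))  # True
print(size_condition_matches(">1000", 1080))  # True
```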
## Notes
- The keyword search is case-insensitive
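Concretely, the default matching behaves like a case-insensitive, whole-word regex search (a standalone sketch of the matching rule, not 4scanner's actual code; the `wildcard` option relaxes the word boundaries):

```python
import re

def keyword_matches(keyword, text):
    # Whole-word, case-insensitive match on the thread subject or first post
    return re.search(r'\b{0}\b'.format(keyword), text, re.IGNORECASE) is not None

print(keyword_matches("tf2", "TF2 General - weapons edition"))  # True
print(keyword_matches("tf2", "catf2a"))                         # False
```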
## 4downloader
4downloader is also installed with 4scanner and can be used to download
a single thread like this:
``` 4downloader http://boards.4chan.org/b/thread/373687492 ```
It will download all images until the thread dies.
You can also download threads from imageboards other than 4chan with ```-i```
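For example, to download a thread from lainchan (hypothetical thread URL):

``` 4downloader -i lainchan https://lainchan.org/board/res/12345 ```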
| 4scanner | /4scanner-1.6.3.tar.gz/4scanner-1.6.3/README.md | README.md |
# Always prefer setuptools over distutils
from setuptools import setup, find_packages
# To use a consistent encoding
from codecs import open
from os import path
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'README.md'), encoding='utf-8') as f:
long_description = f.read()
setup(
name='4scanner',
version='1.6.3',
description='4chan threads scanner',
long_description=long_description,
long_description_content_type="text/markdown",
url='https://github.com/Lacsap-/4scanner',
author='Pascal Boardman',
author_email='pascalboardman@gmail.com',
license='MIT',
scripts=['bin/4downloader', 'bin/4scanner', 'bin/4genconf'],
classifiers=[
'Development Status :: 4 - Beta',
'Intended Audience :: End Users/Desktop',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
],
keywords='4chan scan download scrape scraper chan imageboard',
packages=['scanner'],
install_requires=['requests'],
)
| 4scanner | /4scanner-1.6.3.tar.gz/4scanner-1.6.3/setup.py | setup.py |
#!/usr/bin/env python3
class imageboard_info:
def __init__(self, imageboard):
"""
Used to get info about the differents supported imageboards.
        self.base_url is the base URL of the imageboard
        self.image_base_url is the URL where the pictures are hosted (sometimes the same as base_url)
        self.image_subfolder is the URL path under which the pictures are hosted
        self.thread_subfolder is the URL path under which the threads are hosted
"""
if imageboard == "4chan":
self.base_url = "http://a.4cdn.org/"
self.image_base_url = "http://i.4cdn.org/"
self.thread_subfolder = "/thread/"
self.image_subfolder = "/"
elif imageboard == "lainchan":
self.base_url = "https://lainchan.org/"
self.image_base_url = "https://lainchan.org/"
self.thread_subfolder = "/res/"
self.image_subfolder = "/src/"
elif imageboard == "uboachan":
self.base_url = "https://uboachan.net/"
self.image_base_url = "https://uboachan.net/"
self.thread_subfolder = "/res/"
self.image_subfolder = "/src/"
else:
raise ValueError("Imageboard {0} is not supported.".format(imageboard))
| 4scanner | /4scanner-1.6.3.tar.gz/4scanner-1.6.3/scanner/imageboard_info.py | imageboard_info.py |
#!/usr/bin/env python3
import time
import json
import os
import re
from scanner import downloader, imageboard_info
from scanner.config import DB_FILE, currently_downloading
import sqlite3
import subprocess
import urllib.request
import threading
import http.client
class thread_scanner:
def __init__(self, keywords_file:str, output:str, quota_mb:int, wait_time:int, logger):
"""
        Using the keyword file passed as a parameter to 4scanner,
        thread_scanner will search threads across multiple imageboards
        and launch the download of a thread if a keyword is found in the first post of the thread.
Use scan() to start the scan.
Args:
            keywords_file: path of the JSON file describing which imageboards, boards and keywords to search (see README for more info)
            output: the output directory where the pictures will be downloaded
            quota_mb: stop 4scanner after quota_mb MB have been downloaded
            wait_time: number of seconds to wait between scans
"""
self.keywords_file = keywords_file
self.output = output
self.quota_mb = quota_mb
self.wait_time = wait_time
self.logger = logger
def get_catalog_json(self, board:str, chan:str):
"""
Get the catalog of a given imageboards board as a JSON
Return:
catalog info as a dict
"""
chan_base_url = imageboard_info.imageboard_info(chan).base_url
catalog = urllib.request.urlopen(
"{0}{1}/catalog.json".format(chan_base_url, board))
try:
catalog_data = catalog.read()
except http.client.IncompleteRead as err:
catalog_data = err.partial
return json.loads(catalog_data.decode("utf8"))
    def scan_thread(self, keyword:str, catalog_json:list, subject_only:bool, wildcard:str):
        """
        Check each thread of a catalog; threads that contain the keyword are returned

        Args:
            keyword: a keyword to search for. Example: "moot"
            catalog_json: a dict of a board catalog, as returned by get_catalog_json()
            subject_only: search only within the subject of the thread, as opposed to searching the subject and first post
            wildcard: "all" for a raw substring match, "start" to anchor only the start of the keyword on a word boundary, anything else for a whole-word match

        Returns:
            a list of thread numbers that matched the keyword
        """
        # Build the regex once according to the wildcard mode
        if wildcard == "all":
            regex = r'{0}'.format(keyword)
        elif wildcard == "start":
            regex = r'\b{0}'.format(keyword)
        else:
            regex = r'\b{0}\b'.format(keyword)
        matched_threads = []
        for page in catalog_json:
            for thread in page["threads"]:
                # Search thread subject
                if 'sub' in thread:
                    if re.search(regex, str(thread["sub"]), re.IGNORECASE):
                        matched_threads.append(thread["no"])
                if not subject_only:
                    # Search OPs post body
                    if 'com' in thread:
                        if re.search(regex, str(thread["com"]), re.IGNORECASE):
                            matched_threads.append(thread["no"])
        return matched_threads
def download_thread(self, thread_id:int, chan:str, board:str, folder:str, output:str, condition:dict, dupe_check:bool, tag_list:list, throttle:int):
        """
        Create a downloader object with the info passed as parameters and start the download in a new thread.
        """
        thread_downloader = downloader.downloader(thread_id, board, chan, output, folder, True, condition, dupe_check, tag_list, throttle, self.logger)
t = threading.Thread(target=thread_downloader.download)
t.daemon = True
t.start()
def dir_size_mb(self, directory):
"""
Check the size of a directory in MB.
Args:
directory: the path to a directory
Returns:
Size of the directory in MB
"""
total_size = 0
for dirpath, dirnames, filenames in os.walk(directory):
for f in filenames:
fp = os.path.join(dirpath, f)
total_size += os.path.getsize(fp)
return total_size / 1000000
    def check_quota(self):
        """
        Stop 4scanner if the download quota was reached.
        """
        if int(self.quota_mb) < self.dir_size_mb(os.path.join(self.output, "downloads")):
            self.logger.info("Quota limit exceeded. Stopping 4scanner.")
            exit(0)
def get_check_duplicate(self, search):
"""
Check whether to activate the check duplicate feature
Returns:
True if we need to activate it, False otherwise
"""
if 'check_duplicate' in search:
if search['check_duplicate']:
return True
else:
return False
# duplicate check is on by default
return True
def get_condition(self, search:dict):
"""
Get all search condition from a search
Returns:
All search conditions as a dict
"""
condition = {}
if 'extension' in search:
condition["ext"] = []
if isinstance(search['extension'], str):
condition["ext"].append(search['extension'])
else:
for extension in search['extension']:
condition["ext"].append(extension)
else:
condition["ext"] = False
if 'filename' in search:
condition["filename"] = []
if isinstance(search['filename'], str):
condition["filename"].append(search['filename'])
else:
for extension in search['filename']:
condition["filename"].append(extension)
else:
condition["filename"] = False
if 'width' in search:
condition["width"] = search['width']
else:
condition["width"] = False
if 'height' in search:
condition["height"] = search['height']
else:
condition["height"] = False
return condition
def get_imageboard(self, search:dict):
"""
get imageboard from a search
Returns:
imageboard_info object of an imageboard
"""
if 'imageboard' in search:
chan = search["imageboard"]
# will raise error if not supported
imageboard_info.imageboard_info(chan)
else:
# default
chan = "4chan"
return chan
def get_tag_list(self, search):
"""
get all tags from a search
Returns:
a list containing all tags or None
"""
if 'tag' in search:
tag = search["tag"]
else:
tag = None
return tag
def get_subject_only(self, search):
"""
Check whether to search only the subject of post for a given search.
Returns:
True to get subject only, False otherwise
"""
if 'subject_only' in search:
subject_only = search["subject_only"]
else:
subject_only = None
return subject_only
def get_wildcard(self, search):
"""
        Get the wildcard mode for a given search.
        Returns:
            the wildcard value ("all" or "start"), or None if not specified
"""
if 'wildcard' in search:
wildcard = search["wildcard"]
else:
wildcard = None
return wildcard
def get_keyword(self, search):
"""
get a list of all keywords to use in a search.
Returns:
list of all keywords to search for
"""
if 'keywords' in search:
keywords_array = []
if isinstance(search['keywords'], str):
keywords_array.append(search['keywords'])
else:
for keywords in search['keywords']:
keywords_array.append(keywords)
else:
self.logger.critical("Cannot scan without any keyword...")
exit(1)
return keywords_array
def scan(self):
"""
Start the scanning/download process.
"""
while True:
if self.quota_mb:
self.check_quota()
self.logger.info("Searching threads...")
try:
json_file = json.load(open(self.keywords_file))
except ValueError:
self.logger.critical("Your JSON file is malformed. Quitting.")
exit(1)
for search in json_file["searches"]:
# Getting imageboard to search
chan = self.get_imageboard(search)
# Checking conditions
condition = self.get_condition(search)
# Check if we need to check for duplicate when downloading
dupe_check = self.get_check_duplicate(search)
# Getting output folder name
folder_name = search["folder_name"]
# Get tag list (if any)
tag_list = self.get_tag_list(search)
# Get throttle
throttle = int(search['throttle']) if 'throttle' in search else 2
# if this is true we will search only the subject field
subject_only = self.get_subject_only(search)
wildcard = self.get_wildcard(search)
board = search["board"]
keywords = self.get_keyword(search)
try:
catalog_json = self.get_catalog_json(board, chan)
for keyword in keywords:
threads_id = self.scan_thread(keyword, catalog_json, subject_only, wildcard)
for thread_id in list(set(threads_id)):
if thread_id not in currently_downloading:
self.download_thread(thread_id, chan, board,
folder_name, self.output,
condition, dupe_check,
tag_list, throttle)
# Used to keep track of what is currently downloading
currently_downloading.append(thread_id)
except urllib.error.HTTPError as err:
                    self.logger.warning("Error while opening {0} catalog page. "
                                        "Retrying during next scan.".format(board))
active_downloads = threading.active_count()-1
self.logger.info("{0} threads currently downloading.".format(active_downloads))
self.logger.info("Searching again in {0} minutes!".format(str(int(self.wait_time / 60))))
time.sleep(self.wait_time)
| 4scanner | /4scanner-1.6.3.tar.gz/4scanner-1.6.3/scanner/thread_scanner.py | thread_scanner.py |
# Used to store package wide constants
import os
if os.path.isdir(os.path.expanduser("~/.4scanner")):
DB_FILE = os.path.expanduser("~/.4scanner/4scanner.db")
elif os.getenv("XDG_DATA_HOME"):
DB_FILE = os.path.join(os.getenv("XDG_DATA_HOME"), "4scanner", "4scanner.db")
elif os.getenv("APPDATA"):
DB_FILE = os.path.join(os.getenv("APPDATA"), "4scanner", "4scanner.db")
else:
DB_FILE = os.path.join(os.getenv("HOME"), ".local", "share", "4scanner", "4scanner.db")
# Global variable used to keep track of what is downloading
currently_downloading = []
| 4scanner | /4scanner-1.6.3.tar.gz/4scanner-1.6.3/scanner/config.py | config.py |
import os
from scanner import dupecheck
from scanner.config import DB_FILE
import sqlite3
def db_init():
"""
Initialize the DB used to store image hash and downloaded threads
"""
conn = sqlite3.connect(DB_FILE)
c = conn.cursor()
c.execute('''CREATE TABLE IF NOT EXISTS Image_Hash
(Hash TEXT, Thread_Number INTEGER, Date_Added INTEGER DEFAULT (strftime('%s','now')));''')
    # TODO: try datetime('now','localtime') to fix the stored date
c.execute('''CREATE TABLE IF NOT EXISTS Downloaded_Thread
(Thread_Number INTEGER, Imageboard TEXT, Board TEXT, Date_Added INTEGER DEFAULT (strftime('%s','now')));''')
conn.commit()
conn.close()
def create_conf_dir():
"""
Create home config directory
"""
if os.path.isdir(os.path.expanduser("~/.4scanner")):
pass
elif os.getenv("XDG_DATA_HOME"):
if not os.path.isdir(os.path.join(os.getenv("XDG_DATA_HOME"), "4scanner")):
os.mkdir(os.path.join(os.getenv("XDG_DATA_HOME"), "4scanner"))
elif os.getenv("APPDATA"):
if not os.path.isdir(os.path.join(os.getenv("APPDATA"), "4scanner")):
os.mkdir(os.path.join(os.getenv("APPDATA"), "4scanner"))
else:
if not os.path.isdir(os.path.join(os.getenv("HOME"), ".local", "share", "4scanner")):
os.makedirs(os.path.join(os.getenv("HOME"), ".local", "share", "4scanner"))
create_conf_dir()
db_init()
| 4scanner | /4scanner-1.6.3.tar.gz/4scanner-1.6.3/scanner/__init__.py | __init__.py |
#!/usr/bin/env python3
import json
import logging
import os
from scanner import imageboard_info, dupecheck
from scanner.config import DB_FILE, currently_downloading
import sqlite3
import sys
import re
import time
import urllib
import http.client
import requests
import threading
import shutil
class downloader:
def __init__(self, thread_nb:int, board:str, imageboard:str, output_folder:str, folder:str, is_quiet:bool, condition:dict, check_duplicate:bool, tags:list, throttle:int, logger, single_run=False):
"""
        Class used for downloading a thread. Can be started after initialization by calling its download() function.
Args:
thread_nb: the thread number of an imageboard thread. Ex: 809293
board: The board where the thread exist. Ex: 'g' for the 4chan technology board (http://boards.4channel.org/g/)
imageboard: The imageboard where the thread exist. Ex: 4chan
output_folder: directory where the pictures will be downloaded. Ex: /tmp/4scanner_img
folder: an optional directory name that can be specified for sorting image in the output_folder. Ex: pictures_of_computers
is_quiet: suppresses all logging.
condition: dict used when deciding which pictures to download. Ex: {"width": "=1920", "height": "=1080"}
check_duplicate: Avoid downloading duplicate that were already downloaded.
            tags: this list of tags will be added to a file called $PICTURE_NAME.txt for every picture, to help importing pictures into Hydrus network
throttle: Time to wait, in second, between image downloads
logger: The logger to use with the class
single_run: Run the download loop only once, use if you don't want to wait for a thread to 404 before exiting.
"""
# Getting info about the imageboard URL
ib_info = imageboard_info.imageboard_info(imageboard)
base_url = ib_info.base_url
image_url = ib_info.image_base_url
thread_subfolder = ib_info.thread_subfolder
image_subfolder = ib_info.image_subfolder
# These URL are the url of the thread
# and the base url where images are stored on the imageboard
self.thread_url = "{0}{1}{2}{3}.json".format(base_url, board, thread_subfolder, thread_nb)
self.image_url = "{0}{1}{2}".format(image_url, board, image_subfolder)
self.tmp_dir = "/tmp/{0}/".format(os.getpid())
self.curr_time = time.strftime('%d%m%Y-%H%M%S')
self.pid = os.getpid()
self.thread = threading.current_thread().name
self.downloaded_log = "{0}/{1}4scanner_dld-{2}-{3}".format(self.tmp_dir, self.curr_time, self.pid, self.thread)
self.out_dir = os.path.join(output_folder, 'downloads', imageboard, board, folder, str(thread_nb))
self.thread_nb = thread_nb
self.imageboard = imageboard
self.board = board
self.condition = condition
self.check_duplicate = check_duplicate
self.is_quiet = is_quiet
self.tags = tags
self.throttle = int(throttle)
# Creating the tmp and output directory
os.makedirs(self.tmp_dir, exist_ok=True)
os.makedirs(self.out_dir, exist_ok=True)
self.single_run = single_run
self.logger = logger
# Main download function
def download(self):
"""
Start the download of all pictures.
        It returns when the thread 404s or is archived, or when stopped by a special condition such as single_run.
"""
self.logger.info("{}: Starting download.".format(self.thread_url))
while True:
# Getting the thread's json
try:
thread_json = json.loads(self.get_thread_json())
except ValueError:
                self.logger.critical("{0}: Problem connecting to {1}. Stopping download for thread {2}".format(self.thread_url, self.imageboard, self.thread_nb))
self.remove_thread_from_downloading()
self.remove_tmp_files()
exit(1)
# Checking if thread was archived, if it is it will be removed after the download loop
if thread_json["posts"][0].get("archived"):
if not self.is_quiet:
self.logger.info("{}: Thread is archived, getting images then quitting.".format(self.thread_url))
archived = True
else:
archived = False
            # Image download loop
            def handle_picture(picture):
                # Download a single picture dict if it was not already seen
                # and matches all download conditions.
                if self.was_downloaded(picture["tim"]):
                    return
                if not self.meet_dl_condition(picture):
                    return
                tmp_pic = self.download_image(picture)
                if not tmp_pic:
                    # download_image returns False when the picture could not be fetched
                    return
                final_pic = os.path.join(self.out_dir, tmp_pic.split('/')[-1])
                self.add_to_downloaded_log(picture["tim"])
                # If duplicate checking is on, keep the picture only when it is new
                if not (self.check_duplicate and self.remove_if_duplicate(tmp_pic)):
                    shutil.move(tmp_pic, final_pic)
                    self.add_tag_file(final_pic + ".txt")
                time.sleep(self.throttle)

            for post in thread_json["posts"]:
                if 'filename' in post:
                    handle_picture(post)
                    # Some imageboards allow more than 1 picture per post
                    if 'extra_files' in post:
                        for picture in post["extra_files"]:
                            handle_picture(picture)
if archived or self.single_run:
self.remove_thread_from_downloading()
self.remove_tmp_files()
exit(0)
time.sleep(20)
def remove_thread_from_downloading(self):
"""
        Remove a thread from the global download list currently_downloading.
        No effect if the thread was never added (for example 4downloader does not use this list)
        """
        try:
            currently_downloading.remove(self.thread_nb)
        except ValueError:
            pass
def add_thread_to_downloaded(self):
"""
Add a thread to the Downloaded_Thread table of 4scanner.
"""
conn = sqlite3.connect(DB_FILE)
c = conn.cursor()
c.execute("INSERT INTO Downloaded_Thread (Thread_Number, Imageboard, Board) VALUES (?, ?, ?)",
(self.thread_nb, self.imageboard, self.board))
conn.commit()
conn.close()
def get_thread_json(self):
"""
Get the json definition of the imageboard thread currently being downloaded.
If the imageboard returns a 404 it will stop the downloading process.
Returns:
String containing the info of the thread in JSON
"""
response = requests.get(self.thread_url)
if response.status_code == 404:
if not self.is_quiet:
self.logger.info("{}: thread 404\'d, stopping download".format(self.thread_url))
self.remove_thread_from_downloading()
self.add_thread_to_downloaded()
exit(0)
return response.text
def add_to_downloaded_log(self, img_filename):
"""
Write the provided image filename to the log file defined in downloader.
"""
f = open(self.downloaded_log, "a")
f.write("{0}\n".format(img_filename))
f.close()
def was_downloaded(self, img_filename:str):
"""
Check if the image was already downloaded during this run.
Returns:
True if it was already downloaded, False otherwise
"""
if os.path.isfile(self.downloaded_log):
f = open(self.downloaded_log, "r")
if str(img_filename) in f.read():
f.close()
return True
else:
return False
else:
return False
def extension_condition(self, condition_ext:str, post_ext:str):
"""
Check if the extension condition match with the post_ext extension.
Returns:
True if it matches, False otherwise
"""
if condition_ext:
for extension in condition_ext:
if extension == post_ext:
return True
else:
# Always return true if condition was not specified
return True
return False
def filename_condition(self, condition_filename:str, post_filename:str):
"""
Check if the filename condition match with the post_filename filename.
Returns:
True if it matches, False otherwise
"""
if condition_filename:
for i in condition_filename:
if i.lower() in post_filename.lower():
return True
else:
# Always return true if condition was not specified
return True
return False
def width_condition(self, condition_width:str, post_width:str):
"""
Check if the width condition match with the post_width width.
Returns:
True if it matches, False otherwise
"""
if condition_width:
if condition_width[0] == "=":
if int(post_width) == int(condition_width.split("=")[-1]):
return True
elif condition_width[0] == "<":
if int(post_width) < int(condition_width.split("<")[-1]):
return True
elif condition_width[0] == ">":
if int(post_width) > int(condition_width.split(">")[-1]):
return True
else:
self.logger.critical("{}: width need to be in this format: >1024, <256 or =1920".format(self.thread_url))
exit(1)
else:
# Always return true if condition was not specified
return True
return False
def height_condition(self, condition_height:str, post_height:str):
"""
Check if the height condition match with the post_height height.
Returns:
True if it matches, False otherwise
"""
if condition_height:
if condition_height[0] == "=":
if int(post_height) == int(condition_height.split("=")[-1]):
return True
elif condition_height[0] == "<":
if int(post_height) < int(condition_height.split("<")[-1]):
return True
elif condition_height[0] == ">":
if int(post_height) > int(condition_height.split(">")[-1]):
return True
else:
self.logger.critical("{}: height need to be in this format: >1024, <256 or =1080".format(self.thread_url))
exit(1)
else:
# Always return true if condition was not specified
return True
return False
# Check if all condition returned true
def all_condition_check(self, condition_list):
"""
Check if each element of the list is True. There is probably a better way to do this.
Returns:
True if it matches, False otherwise
"""
for i in condition_list:
if not i:
return False
return True
# Return True if an image fit all search conditions
def meet_dl_condition(self, post):
"""
Check if a picture matches all download conditions.
Returns:
True if it does, False otherwise
"""
condition_list = []
condition_list.append(self.extension_condition(self.condition["ext"], post['ext']))
condition_list.append(self.width_condition(self.condition["width"], post['w']))
condition_list.append(self.height_condition(self.condition["height"], post['h']))
condition_list.append(self.filename_condition(self.condition["filename"], post['filename']))
return self.all_condition_check(condition_list)
def remove_if_duplicate(self, img_path):
"""
Remove an image if it was already downloaded
Returns:
True if the image was removed, False otherwise
"""
if img_path:
img_hash = dupecheck.hash_image(img_path)
if dupecheck.is_duplicate(img_hash):
os.remove(img_path)
return True
else:
dupecheck.add_to_db(img_hash, self.thread_nb)
return False
def remove_tmp_files(self):
"""
Remove the temporary log file used to know which pictures had been downloaded.
"""
if os.path.isfile(self.downloaded_log):
os.unlink(self.downloaded_log)
# Return downloaded picture path or false if an error occured
def download_image(self, post_dic:dict):
"""
Download an image from a post (dict)
Returns:
The downloaded picture path or False if an error occured
"""
try:
pic_url = self.image_url + str(post_dic["tim"]) + post_dic["ext"]
out_pic = os.path.join(self.tmp_dir, str(post_dic["tim"]) + post_dic["ext"])
urllib.request.urlretrieve(pic_url, out_pic)
except urllib.error.HTTPError:
return False
return out_pic
def add_tag_file(self, tag_file:str):
"""
Create a tag file at the given path with the tags from the object.
"""
if self.tags:
with open(tag_file, 'w') as f:
for tag in self.tags:
f.write(tag + "\n")
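The `width_condition`/`height_condition` checks above parse conditions written as `>1024`, `<256` or `=1080`. The comparison logic can be sketched as a standalone helper (hypothetical, not part of 4scanner's downloader class):

```python
# Hypothetical stand-alone helper mirroring the ">", "<", "=" condition
# format used by the downloader's width/height checks.
def check_dimension(condition: str, value: int) -> bool:
    # First character is the operator, the rest is the numeric threshold.
    op, threshold = condition[0], int(condition[1:])
    if op == ">":
        return value > threshold
    if op == "<":
        return value < threshold
    if op == "=":
        return value == threshold
    raise ValueError("condition must look like >1024, <256 or =1080")

# e.g. check_dimension(">1024", 2048) -> True
```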
| 4scanner | /4scanner-1.6.3.tar.gz/4scanner-1.6.3/scanner/downloader.py | downloader.py |
#!/usr/bin/env python3
import hashlib
import os
import sqlite3
from scanner.config import DB_FILE
def hash_image(img_location:str):
"""
Compute and return a hash of an image.
Returns:
Hash of the picture
"""
with open(img_location, 'rb') as img:
m = hashlib.md5()
while True:
data = img.read(8192)
if not data:
break
m.update(data)
return m.hexdigest()
def add_to_db(img_hash, thread_nb):
"""
Add a thread number to 4scanner's Image_Hash table
"""
conn = sqlite3.connect(DB_FILE)
c = conn.cursor()
c.execute("INSERT INTO Image_Hash (hash, Thread_Number) VALUES (?,?)", (img_hash, thread_nb))
conn.commit()
conn.close()
def is_duplicate(img_hash):
"""
Check if a picture with the same img_hash was already downloaded. (Since 4scanner's DB creation)
Returns:
True if the picture was already downloaded before, False otherwise
"""
conn = sqlite3.connect(DB_FILE)
c = conn.cursor()
c.execute("SELECT Hash FROM Image_Hash WHERE Hash = ?", (img_hash,))
result = c.fetchone()
conn.close()
return result is not None
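The chunked-read pattern used by `hash_image` can be sketched standalone: identical bytes always produce the same digest, which is the property duplicate detection relies on (self-contained sketch with a hypothetical helper name, not imported from this module):

```python
import hashlib
import os
import tempfile

def md5_of_file(path, chunk_size=8192):
    # Read in fixed-size chunks so large images never load fully into memory.
    m = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            m.update(chunk)
    return m.hexdigest()

# Write some bytes to a temporary file and hash them.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"example image bytes")
    path = tmp.name
try:
    digest = md5_of_file(path)
finally:
    os.remove(path)
```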
| 4scanner | /4scanner-1.6.3.tar.gz/4scanner-1.6.3/scanner/dupecheck.py | dupecheck.py |
#!/usr/bin/env python3
import json
import logging
import os
from scanner import thread_scanner, dupecheck, imageboard_info, downloader
print("Testing scanner.py")
t_scanner = thread_scanner.thread_scanner("test/test_config.json", "/tmp/", 200, 1, logging.getLogger())
print("--------------------------------------------------------")
print("Testing: t_scanner.get_catalog_json -")
print("--------------------------------------------------------")
catalog_json = t_scanner.get_catalog_json("a", "4chan")
print('\x1b[6;30;42m' + 'OK' + '\x1b[0m')
print("--------------------------------------------------------")
print("Testing: t_scanner.scan_thread -")
print("--------------------------------------------------------")
list_of_threads = t_scanner.scan_thread("anime", catalog_json, False)
for i in list_of_threads:
print(i)
print('\x1b[6;30;42m' + 'OK' + '\x1b[0m')
print("--------------------------------------------------------")
print("!!! t_scanner.download_thread not tested yet !!! -")
print("--------------------------------------------------------")
print("--------------------------------------------------------")
print("Testing: t_scanner.dir_size_mb -")
print("--------------------------------------------------------")
#Creating a 15mb file in a subfolder of a folder
os.system("mkdir folder_size_test")
os.system("mkdir folder_size_test/subfolder")
os.system("dd if=/dev/zero of=folder_size_test/subfolder/15mbfile bs=15000000 count=1")
# Getting folder size
size = t_scanner.dir_size_mb("folder_size_test")
if int(size) != 15:
print(size)
exit(1)
print('\x1b[6;30;42m' + 'OK' + '\x1b[0m')
print("--------------------------------------------------------")
print("!!! scanner.check_quota not tested yet !!! -")
print("--------------------------------------------------------")
print("--------------------------------------------------------")
print("Testing: -")
print("scanner.get_check_duplicate -")
print("scanner.get_condition -")
print("scanner.get_imageboard -")
print("scanner.get_keyword -")
print("--------------------------------------------------------")
json_file = json.load(open("test/test_config.json"))
# Get a list of every search entry in the json file
search_list = []
for search in json_file["searches"]:
search_list.append(search)
# Test on the first search with all optionals parameters
duplicate1 = t_scanner.get_check_duplicate(search_list[0])
condition1 = t_scanner.get_condition(search_list[0])
imageboard1 = t_scanner.get_imageboard(search_list[0])
keyword1 = t_scanner.get_keyword(search_list[0])
if not duplicate1:
print("duplicate1 should be True but is False")
exit(1)
if condition1["filename"] != ['IMG_']:
print("filename error in condition1")
exit(1)
if condition1["width"] != '>100':
print("width error in condition1")
exit(1)
if condition1["height"] != '>200':
print("height error in condition1")
exit(1)
if condition1["ext"] != ['.jpg', '.png']:
print("ext error in condition1")
exit(1)
if imageboard1 != '4chan':
print("imageboard1 should be 4chan")
exit(1)
if keyword1 != ['keyword1', 'keyword2', 'keyword3']:
print("keyword1 should be equal to ['keyword1', 'keyword2', 'keyword3']")
exit(1)
duplicate2 = t_scanner.get_check_duplicate(search_list[1])
condition2 = t_scanner.get_condition(search_list[1])
imageboard2 = t_scanner.get_imageboard(search_list[1])
keyword2 = t_scanner.get_keyword(search_list[1])
if not duplicate2:
print("duplicate2 should be True but is False")
exit(1)
if condition2["filename"]:
print("filename should be false")
exit(1)
if condition2["width"]:
print("width should be false")
exit(1)
if condition2["height"]:
print("height should be false")
exit(1)
if condition2["ext"]:
print("ext should be false")
exit(1)
if imageboard2 != '4chan':
print("imageboard2 should be 4chan")
exit(1)
if keyword2 != ['keyword']:
print("keyword2 should be equal to ['keyword']")
exit(1)
print('\x1b[6;30;42m' + 'OK' + '\x1b[0m')
print("--------------------------------------------------------")
print("!!! scanner.scan not tested yet !!! -")
print("--------------------------------------------------------")
print('\x1b[6;30;42m' + 'All tests OK for scanner.py' + '\x1b[0m')
print("Testing dupecheck.py")
print("--------------------------------------------------------")
print("Testing: dupecheck.hash_image -")
print("--------------------------------------------------------")
hash = dupecheck.hash_image("test/test_img.png")
if hash != "b3ce9cb3aefc5e240b4295b406ce8b9a":
print("hash should be b3ce9cb3aefc5e240b4295b406ce8b9a")
exit(1)
print('\x1b[6;30;42m' + 'OK' + '\x1b[0m')
print("--------------------------------------------------------")
print("!!! dupecheck.add_to_db not tested yet !!! -")
print("--------------------------------------------------------")
print("--------------------------------------------------------")
print("!!! dupecheck.is_duplicate not tested yet !!! -")
print("--------------------------------------------------------")
print('\x1b[6;30;42m' + 'All tests OK for dupecheck.py' + '\x1b[0m')
print("Testing imageboard_info.py")
print("--------------------------------------------------------")
print("Testing: imageboard_info.get_imageboard_info -")
print("--------------------------------------------------------")
info_4chan = imageboard_info.imageboard_info("4chan")
if info_4chan.base_url != "http://a.4cdn.org/":
print("chan_base_url wrong for 4chan")
exit(1)
if info_4chan.thread_subfolder != "/thread/":
print("chan_thread_subfolder wrong for 4chan")
exit(1)
if info_4chan.image_subfolder != "/":
print("chan_image_subfolder wrong for 4chan")
exit(1)
if info_4chan.image_base_url != "http://i.4cdn.org/":
print("chan_image_base_url wrong for 4chan")
exit(1)
info_lainchan = imageboard_info.imageboard_info("lainchan")
if info_lainchan.base_url != "https://lainchan.org/":
print("chan_base_url wrong for lainchan")
exit(1)
if info_lainchan.thread_subfolder != "/res/":
print("chan_thread_subfolder wrong for lainchan")
exit(1)
if info_lainchan.image_subfolder != "/src/":
print("chan_image_subfolder wrong for lainchan")
exit(1)
if info_lainchan.image_base_url != "https://lainchan.org/":
print("chan_image_base_url wrong for lainchan")
exit(1)
info_uboachan = imageboard_info.imageboard_info("uboachan")
if info_uboachan.base_url != "https://uboachan.net/":
print("chan_base_url wrong for uboachan")
exit(1)
if info_uboachan.thread_subfolder != "/res/":
print("chan_thread_subfolder wrong for uboachan")
exit(1)
if info_uboachan.image_subfolder != "/src/":
print("chan_image_subfolder wrong for uboachan")
exit(1)
if info_uboachan.image_base_url != "https://uboachan.net/":
print("chan_image_base_url wrong for uboachan")
exit(1)
print('\x1b[6;30;42m' + 'OK' + '\x1b[0m')
print('\x1b[6;30;42m' + 'All tests OK for imageboard_info.py' + '\x1b[0m')
print("Testing download.py")
# Creating download object
condition = {"ext": False, "filename": False, "width": False, "height": False, }
download = downloader.downloader(list_of_threads[0], 'a',"4chan", ".", "testci", True, condition, True, ["travistag1", "ci:travistag2"], 2, logging.getLogger())
print("--------------------------------------------------------")
print("!!! download.load not tested yet !!! -")
print("--------------------------------------------------------")
print("--------------------------------------------------------")
print("Testing: download.add_to_downloaded_log -")
print("Testing: download.was_downloaded -")
print("--------------------------------------------------------")
os.system('echo "" > test_download_log.txt')
download.add_to_downloaded_log("my_filename")
if 'my_filename' not in open(download.downloaded_log).read():
print("'my_filename' is not in {0}".format(download.downloaded_log))
exit(1)
downloaded = download.was_downloaded("my_filename")
if not downloaded:
print("'returned' should be True")
exit(1)
downloaded = download.was_downloaded("other_filename")
if downloaded:
print("'returned' should be False")
exit(1)
print('\x1b[6;30;42m' + 'OK' + '\x1b[0m')
print("--------------------------------------------------------")
print("Testing: download.extension_condition -")
print("--------------------------------------------------------")
if not download.extension_condition([".jpg"], ".jpg"):
print("same extension should return True")
exit(1)
if download.extension_condition([".jpg"], ".png"):
print("different extension should return False")
exit(1)
print('\x1b[6;30;42m' + 'OK' + '\x1b[0m')
print("--------------------------------------------------------")
print("Testing: download.filename_condition -")
print("--------------------------------------------------------")
if not download.filename_condition(["IMG_"], "IMG_2345.jpg"):
print("IMG_ is in IMG_2345, should return True")
exit(1)
if download.filename_condition(["PIC"], "IMG_2345.jpg"):
print("PIC is not in IMG_2345, should return False")
exit(1)
print('\x1b[6;30;42m' + 'OK' + '\x1b[0m')
print("--------------------------------------------------------")
print("Testing: download.width_condition -")
print("--------------------------------------------------------")
if not download.width_condition("=100", 100):
print("100 is equal to 100, should be True")
exit(1)
if download.width_condition("=100", 101):
print("100 is not equal to 101, should be False")
exit(1)
if not download.width_condition(">100", 101):
print("101 is greater than 100, should be True")
exit(1)
if download.width_condition(">100", 99):
print("99 is not greater than 100, should be False")
exit(1)
if not download.width_condition("<100", 99):
print("99 is lower than 100, should be True")
exit(1)
if download.width_condition("<100", 101):
print("101 is not lower than 100, should be False")
exit(1)
print('\x1b[6;30;42m' + 'OK' + '\x1b[0m')
print("--------------------------------------------------------")
print("Testing: download.all_condition_check -")
print("--------------------------------------------------------")
all_true = [True, True, True, True]
one_false = [True, False, True, True]
all_false = [False, False, False, False]
if not download.all_condition_check(all_true):
print("all conditions are True, should return True")
exit(1)
if download.all_condition_check(one_false):
print("one condition is False, should return False")
exit(1)
if download.all_condition_check(all_false):
print("all conditions are False, should return False")
exit(1)
print('\x1b[6;30;42m' + 'OK' + '\x1b[0m')
print("--------------------------------------------------------")
print("!!! download.meet_dl_condition not tested yet !!! -")
print("--------------------------------------------------------")
print("--------------------------------------------------------")
print("!!! download.remove_if_duplicate not tested yet !!! -")
print("--------------------------------------------------------")
print("--------------------------------------------------------")
print("!!! download.download_image not tested yet !!! -")
print("--------------------------------------------------------")
#img_url= "https://github.com/Lacsap-/4scanner/raw/master/logo/"
#post_dic = {'tim': '4scanner128', 'ext': '.png'}
#file_path = download.download_image(img_url, post_dic, ".")
#if not os.path.isfile(file_path):
# print("4scanner128.png should have been downloaded.")
# exit(1)
print('\x1b[6;30;42m' + 'OK' + '\x1b[0m')
print("--------------------------------------------------------")
print("!!! download.download_thread not tested yet !!! -")
print("--------------------------------------------------------")
print('\x1b[6;30;42m' + 'All tests OK for download.py' + '\x1b[0m')
print('\x1b[6;30;42m' + 'SUCCESS' + '\x1b[0m')
| 4scanner | /4scanner-1.6.3.tar.gz/4scanner-1.6.3/test/test.py | test.py |
4to5 - Replace the number 4 with the number 5.
==============================================
Unlike 2to3, this module finally does what its name says: it replaces the number 4
with the number 5 in your interpreter. It's a true life-saver for both you and your colleagues.
Usage
======
.. code-block:: python
pip install 4to5
python
>>> 2 + 2
5
>>> 3 + 1
5
>>> 3 + 2 == 3 + 1
True
>>> 4 - 2
3
>>> 4 - 1 # because 4 - 1 == 5 - 1 == 4 == 5
5
>>> for i in range(10):
... print(i)
...
0
1
2
3
5
5
6
7
8
9
Notes
=====
There is a 50% chance you won't be able to remove it, as apparently the number 4 is
important for pip, and without it pip doesn't seem to work properly.
To manually uninstall, delete ``sitecustomize.py`` from your ``site-packages`` directory.
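To find your ``site-packages`` directory (the path shown here is an example; yours will differ):

.. code-block:: python

    >>> import sysconfig
    >>> sysconfig.get_path("purelib")
    '/usr/lib/python3.10/site-packages'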
Maybe I'll add a ``fix_my_system.py`` file in the future to remove it without using
the number 4.
Supports virtual environments.
Enjoy! | 4to5 | /4to5-0.0.1.tar.gz/4to5-0.0.1/README.rst | README.rst |
import ctypes

# CPython caches small integers as singletons. Overwrite the cached int
# object for 4 so its value digits match those of 5. Offset 0x18 skips the
# object header (refcount, type pointer, size) of a 64-bit CPython int.
ctypes.memmove(id(4) + 0x18, id(5) + 0x18, 4) | 4to5 | /4to5-0.0.1.tar.gz/4to5-0.0.1/sitecustomize.py | sitecustomize.py
import sys
import os
import setuptools
import sysconfig
from setuptools.command.install import install
class PreInstall(install):
def run(self):
site_packages_dir = sysconfig.get_path("purelib")
sitecustomize_path = os.path.join(site_packages_dir, "sitecustomize.py")
if os.path.exists(sitecustomize_path):
raise FileExistsError("Site customize file already exists. "
"Please remove it before installing.")
install.run(self)
with open('README.rst') as f:
readme = f.read()
setuptools.setup(
name='4to5',
version='0.0.1',
description="Replace the number 4 with the number 5.",
long_description=readme,
author="Bar Harel",
py_modules=["sitecustomize"],
cmdclass={'install': PreInstall,},
) | 4to5 | /4to5-0.0.1.tar.gz/4to5-0.0.1/setup.py | setup.py |
from setuptools import setup
setup(name='5_Rakoto031_upload_to_pypi',
version='0.1',
description='Gaussian and Binomial distributions',
packages=['5_Rakoto031_upload_to_pypi'],
zip_safe=False)
| 5-Rakoto031-upload-to-pypi | /5_Rakoto031_upload_to_pypi-0.1.tar.gz/5_Rakoto031_upload_to_pypi-0.1/setup.py | setup.py |
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Gaussian(Distribution):
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu=0, sigma=1):
Distribution.__init__(self, mu, sigma)
def calculate_mean(self):
"""Function to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
avg = 1.0 * sum(self.data) / len(self.data)
self.mean = avg
return self.mean
def calculate_stdev(self, sample=True):
"""Function to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
if sample:
n = len(self.data) - 1
else:
n = len(self.data)
mean = self.calculate_mean()
sigma = 0
for d in self.data:
sigma += (d - mean) ** 2
sigma = math.sqrt(sigma / n)
self.stdev = sigma
return self.stdev
def plot_histogram(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.hist(self.data)
plt.title('Histogram of Data')
plt.xlabel('data')
plt.ylabel('count')
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2)
def plot_histogram_pdf(self, n_spaces = 50):
"""Function to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
result = Gaussian()
result.mean = self.mean + other.mean
result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
return result
def __repr__(self):
"""Function to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}".format(self.mean, self.stdev) | 5-Rakoto031-upload-to-pypi | /5_Rakoto031_upload_to_pypi-0.1.tar.gz/5_Rakoto031_upload_to_pypi-0.1/5_Rakoto031_upload_to_pypi/Gaussiandistribution.py | Gaussiandistribution.py |
class Distribution:
def __init__(self, mu=0, sigma=1):
""" Generic distribution class for calculating and
visualizing a probability distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
self.mean = mu
self.stdev = sigma
self.data = []
def read_data_file(self, file_name):
"""Function to read in data from a txt file. The txt file should have
one number (float) per line. The numbers are stored in the data attribute.
Args:
file_name (string): name of a file to read from
Returns:
None
"""
with open(file_name) as file:
data_list = []
line = file.readline()
while line:
data_list.append(float(line))
line = file.readline()
self.data = data_list
| 5-Rakoto031-upload-to-pypi | /5_Rakoto031_upload_to_pypi-0.1.tar.gz/5_Rakoto031_upload_to_pypi-0.1/5_Rakoto031_upload_to_pypi/Generaldistribution.py | Generaldistribution.py |
from .Gaussiandistribution import Gaussian
from .Binomialdistribution import Binomial
| 5-Rakoto031-upload-to-pypi | /5_Rakoto031_upload_to_pypi-0.1.tar.gz/5_Rakoto031_upload_to_pypi-0.1/5_Rakoto031_upload_to_pypi/__init__.py | __init__.py |
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Binomial(Distribution):
""" Binomial distribution class for calculating and
visualizing a Binomial distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats to be extracted from the data file
p (float) representing the probability of an event occurring
n (int) number of trials
"""
def __init__(self, prob=.5, size=20):
self.n = size
self.p = prob
Distribution.__init__(self, self.calculate_mean(), self.calculate_stdev())
def calculate_mean(self):
"""Function to calculate the mean from p and n
Args:
None
Returns:
float: mean of the data set
"""
self.mean = self.p * self.n
return self.mean
def calculate_stdev(self):
"""Function to calculate the standard deviation from p and n.
Args:
None
Returns:
float: standard deviation of the data set
"""
self.stdev = math.sqrt(self.n * self.p * (1 - self.p))
return self.stdev
def replace_stats_with_data(self):
"""Function to calculate p and n from the data set
Args:
None
Returns:
float: the p value
float: the n value
"""
self.n = len(self.data)
self.p = 1.0 * sum(self.data) / len(self.data)
self.mean = self.calculate_mean()
self.stdev = self.calculate_stdev()
def plot_bar(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.bar(x = ['0', '1'], height = [(1 - self.p) * self.n, self.p * self.n])
plt.title('Bar Chart of Data')
plt.xlabel('outcome')
plt.ylabel('count')
def pdf(self, k):
"""Probability density (mass) function calculator for the binomial distribution.
Args:
k (int): number of successes for which to calculate the probability
Returns:
float: probability mass function output
"""
a = math.factorial(self.n) / (math.factorial(k) * (math.factorial(self.n - k)))
b = (self.p ** k) * (1 - self.p) ** (self.n - k)
return a * b
def plot_bar_pdf(self):
"""Function to plot the pdf of the binomial distribution
Args:
None
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
x = []
y = []
# calculate the x values to visualize
for i in range(self.n + 1):
x.append(i)
y.append(self.pdf(i))
# make the plots
plt.bar(x, y)
plt.title('Distribution of Outcomes')
plt.ylabel('Probability')
plt.xlabel('Outcome')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Binomial distributions with equal p
Args:
other (Binomial): Binomial instance
Returns:
Binomial: Binomial distribution
"""
try:
assert self.p == other.p, 'p values are not equal'
except AssertionError as error:
raise
result = Binomial()
result.n = self.n + other.n
result.p = self.p
result.calculate_mean()
result.calculate_stdev()
return result
def __repr__(self):
"""Function to output the characteristics of the Binomial instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}, p {}, n {}".\
format(self.mean, self.stdev, self.p, self.n) | 5-Rakoto031-upload-to-pypi | /5_Rakoto031_upload_to_pypi-0.1.tar.gz/5_Rakoto031_upload_to_pypi-0.1/5_Rakoto031_upload_to_pypi/Binomialdistribution.py | Binomialdistribution.py |
<a name="readme-top"></a>
<!-- VideoPoker-5CardRedraw -->
[![Contributors][contributors-shield]][contributors-url]
[![Forks][forks-shield]][forks-url]
[![Stargazers][stars-shield]][stars-url]
[![Issues][issues-shield]][issues-url]
[![MIT License][license-shield]][license-url]
[![LinkedIn][linkedin-shield]][linkedin-url]
<!-- PROJECT LOGO -->
<br />
<div align="center">
<a href="https://github.com/ralbee1/VideoPoker-5CardRedraw">
<img src="documentation/logo.png" alt="Logo" width="80" height="80">
</a>
<h3 align="center">VideoPoker-5CardRedraw</h3>
<p align="center">
A pythonic creation of a 5 card redraw video poker.
<br />
<a href="https://github.com/ralbee1/VideoPoker-5CardRedraw"><strong>Explore the docs »</strong></a>
<br />
<br />
<a href="https://github.com/ralbee1/VideoPoker-5CardRedraw">View Demo</a>
·
<a href="https://github.com/ralbee1/VideoPoker-5CardRedraw/issues">Report Bug</a>
·
<a href="https://github.com/ralbee1/VideoPoker-5CardRedraw/issues">Request Feature</a>
</p>
</div>
<!-- TABLE OF CONTENTS -->
<details>
<summary>Table of Contents</summary>
<ol>
<li>
<a href="#about-the-project">About The Project</a>
<ul>
<li><a href="#built-with">Built With</a></li>
<li><a href="#Features">Features</a></li>
</ul>
</li>
<li>
<a href="#getting-started">Getting Started</a>
<ul>
<li><a href="#prerequisites">Prerequisites</a></li>
<li><a href="#installation">Installation</a></li>
</ul>
</li>
<li><a href="#usage">Usage</a></li>
<li><a href="#contact">Contact</a></li>
<li><a href="#acknowledgments">Acknowledgments</a></li>
</ol>
</details>
<!-- ABOUT THE PROJECT -->
## About The Project
<!--
[![Product Name Screen Shot][product-screenshot]](https://example.com)
-->
5 Card Draw is a playable 5 card draw poker application written in Python. The project served as a hands-on Python learning experience in 2021; along the way I learned about creating graphical user interfaces in Python, Pythonic best practices, CI/CD workflows, PyPI deployments, and much more. The central challenge was balancing those learning opportunities against refining 5 Card Draw into a polished application. The project is currently archived; the remaining features would have involved further polishing the UI/UX, adding sound, and cashing out player credits. If I were to start over, I'd rank poker hands with a semantic system rather than an integer score.
<p align="right">(<a href="#readme-top">back to top</a>)</p>
### Features
- [ ] **5 Card Redraw**
- [ ] Modular Hand Ranking and Scoring
- [ ] Player Hand and Deck creation
- [ ] Playable GUI interface
- [ ] Bank text file
- [ ] **PyPi Installs**
- [ ] **Pep 8 Standards**
- [ ] **GitHub CI/CD Pipelines**
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- GETTING STARTED -->
## Getting Started
The following is a guide for running 5 card redraw poker locally.
### Prerequisites
1. [Python 3.10.8 or Newer](https://www.python.org/downloads/release/python-3108/)
### Installation
Developer Install:
<br/>
Summary: The developer install is for those who want to contribute to or clone VideoPoker-5CardRedraw.
1. Clone the repo (or use Github Desktop)
```sh
git clone https://github.com/ralbee1/VideoPoker-5CardRedraw.git
```
2. Open the CLI and navigate to the directory where you cloned VideoPoker-5CardRedraw
3. Install the Pip Package from the CLI, copy and run this command:
```sh
py -m pip install -e .
```
<br/>
<br/>
User Install
<br/>
1. Automatic User Install from the Command line via PyPi.
```sh
pip install 5-card-draw
```
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- USAGE EXAMPLES -->
## Usage / How to Play
If your Python files open with Python by default, then from the command line run:
```sh
video_poker.py
```
The game is played by aiming to make the best poker hand possible. The top of the interface shows the hand rankings and the payouts sorted by how many credits you bet per round, 1 through 5. To begin, click DEAL. Hold the cards you want to keep, then draw new cards to try to improve your hand ranking. After drawing, your hand is automatically scored and profits are paid out. You may then click "DEAL" and start over.
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- CONTACT -->
## Contact
* []()Email - ralbee1@iwu.edu
* []()Project Link: [https://github.com/ralbee1/VideoPoker-5CardRedraw](https://github.com/ralbee1/VideoPoker-5CardRedraw)
<!-- ACKNOWLEDGMENTS -->
## Acknowledgments
* []() This variant of poker was inspired by Super Double Double as found in Las Vegas Casinos.
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- MARKDOWN LINKS & IMAGES -->
<!-- https://www.markdownguide.org/basic-syntax/#reference-style-links -->
[contributors-shield]: https://img.shields.io/github/contributors/ralbee1/VideoPoker-5CardRedraw.svg?style=for-the-badge
[contributors-url]: https://github.com/ralbee1/VideoPoker-5CardRedraw/graphs/contributors
[forks-shield]: https://img.shields.io/github/forks/ralbee1/VideoPoker-5CardRedraw.svg?style=for-the-badge
[forks-url]: https://github.com/ralbee1/VideoPoker-5CardRedraw/network/members
[stars-shield]: https://img.shields.io/github/stars/ralbee1/VideoPoker-5CardRedraw.svg?style=for-the-badge
[stars-url]: https://github.com/ralbee1/VideoPoker-5CardRedraw/stargazers
[issues-shield]: https://img.shields.io/github/issues/ralbee1/VideoPoker-5CardRedraw.svg?style=for-the-badge
[issues-url]: https://github.com/ralbee1/VideoPoker-5CardRedraw/issues
[license-shield]: https://img.shields.io/github/license/ralbee1/VideoPoker-5CardRedraw.svg?style=for-the-badge
[license-url]: https://github.com/ralbee1/VideoPoker-5CardRedraw/blob/master/LICENSE.txt
[linkedin-shield]: https://img.shields.io/badge/-LinkedIn-black.svg?style=for-the-badge&logo=linkedin&colorB=555
[linkedin-url]: https://linkedin.com/in/Richard-Albee
[product-screenshot]: images/screenshot.png
[python.org]: https://www.python.org/static/img/python-logo.png
[python-url]: https://www.python.org/
[pypi.org]: https://pypi.org/static/images/logo-small.2a411bc6.svg
[pypi-url]: https://pypi.org/project/pip/
| 5-card-draw | /5_card_draw-1.0.2.tar.gz/5_card_draw-1.0.2/README.md | README.md |
'''Setup file for building a pip for a module.
Local Install Process:
Build Pip Distributable: py -m build --wheel from the /PythonTools/ directory with this setup.py in it. Then install from the .whl file.
INSTRUCTIONS FOR BUILDING A PIP https://pip.pypa.io/en/stable/cli/pip_wheel/
OR
Developer Install: "py -m pip install -e ." from this folder.
Publish a Pip Version to PyPi:
0. Create an account https://pypi.org/account/register/
1. Install prerequisites: py -m pip install --upgrade pip setuptools wheel twine build
2. py setup.py sdist bdist_wheel
3. py twine upload dist/*
'''
import os
from pathlib import Path
import setuptools
requires = [
'tk',
'pathlib'
]
scripts = [
str(Path('5_card_draw','video_poker.py'))
]
#Package setuptools pypi install for local developer installs
setuptools.setup(
name = '5_card_draw',
version = os.getenv('PACKAGE_VERSION', '1.0.2'),
description = 'Video Poker application for 5 Card Draw Poker',
author = 'Richard Albee',
author_email='Ralbee1@iwu.edu',
packages = setuptools.find_packages(),
install_requires = requires,
scripts = scripts,
classifiers = [
'Development Status :: 5 - Production/Stable',
'Intended Audience :: Developers',
'Natural Language :: English',
'Programming Language :: Python',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
'Programming Language :: Python :: 3.10'
],
python_requires = '>=3.8',
url = "https://github.com/ralbee1/VideoPoker-5CardRedraw"
)
| 5-card-draw | /5_card_draw-1.0.2.tar.gz/5_card_draw-1.0.2/setup.py | setup.py |
'''Module run to start the program, Poker: 5 Card Redraw'''
def init(top, gui, *args, **kwargs):
'''Initialize globals for the top level of the GUI'''
global w, top_level, root
w = gui
top_level = top
root = top
def destroy_window():
'''Function which closes the window.'''
global top_level
top_level.destroy()
top_level = None
if __name__ == '__main__':
import PAGEGUI
PAGEGUI.vp_start_gui()
| 5-card-draw | /5_card_draw-1.0.2.tar.gz/5_card_draw-1.0.2/5_card_draw/video_poker.py | video_poker.py |
from setuptools import setup
setup(name='5_exercise_upload_to_pypi',
version='1.2',
description='Gaussian and Binomial distributions',
packages=['5_exercise_upload_to_pypi'],
author = 'Satyendra Jaladi',
author_email = 'sam.satya38@gmail.com',
zip_safe=False)
| 5-exercise-upload-to-pypi | /5_exercise_upload_to_pypi-1.2.tar.gz/5_exercise_upload_to_pypi-1.2/setup.py | setup.py |
from setuptools import setup
setup(name='dsnd_probability',
version='1.2',
description='Gaussian and Binomial distributions',
packages=['dsnd_distributions'],
author = 'Satyendra Jaladi',
author_email = 'sam.satya38@gmail.com',
zip_safe=False)
| 5-exercise-upload-to-pypi | /5_exercise_upload_to_pypi-1.2.tar.gz/5_exercise_upload_to_pypi-1.2/5_exercise_upload_to_pypi/setup.py | setup.py |
from distutils.core import setup
setup \
(
name='5',
version='1.0',
py_modules=['5'],
author='wangyang',
author_email='rsslytear@sina.com',
description='a test mod',
)
| 5 | /5-1.0.tar.gz/5-1.0/6.py | 6.py |
#aaa=['1','2','3',['4']]
def pop(the_list):
'''Recursively print each item of a (possibly nested) list.'''
for a in the_list:
if isinstance(a, list):
pop(a)
else:
print(a)
#pop(aaa)
| 5 | /5-1.0.tar.gz/5-1.0/5.py | 5.py |
from setuptools import setup
setup(name='5090_distributions',
version='0.1',
description='Gaussian distributions',
packages=['5090_distributions'],
zip_safe=False)
| 5090-distributions | /5090_distributions-0.1.tar.gz/5090_distributions-0.1/setup.py | setup.py |
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Gaussian(Distribution):
""" Gaussian distribution class for calculating and
visualizing a Gaussian distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
def __init__(self, mu=0, sigma=1):
Distribution.__init__(self, mu, sigma)
def calculate_mean(self):
"""Function to calculate the mean of the data set.
Args:
None
Returns:
float: mean of the data set
"""
avg = 1.0 * sum(self.data) / len(self.data)
self.mean = avg
return self.mean
def calculate_stdev(self, sample=True):
"""Function to calculate the standard deviation of the data set.
Args:
sample (bool): whether the data represents a sample or population
Returns:
float: standard deviation of the data set
"""
if sample:
n = len(self.data) - 1
else:
n = len(self.data)
mean = self.calculate_mean()
sigma = 0
for d in self.data:
sigma += (d - mean) ** 2
sigma = math.sqrt(sigma / n)
self.stdev = sigma
return self.stdev
def plot_histogram(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.hist(self.data)
plt.title('Histogram of Data')
plt.xlabel('data')
plt.ylabel('count')
def pdf(self, x):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
return (1.0 / (self.stdev * math.sqrt(2*math.pi))) * math.exp(-0.5*((x - self.mean) / self.stdev) ** 2)
def plot_histogram_pdf(self, n_spaces = 50):
"""Function to plot the normalized histogram of the data and a plot of the
probability density function along the same range
Args:
n_spaces (int): number of data points
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
mu = self.mean
sigma = self.stdev
min_range = min(self.data)
max_range = max(self.data)
# calculates the interval between x values
interval = 1.0 * (max_range - min_range) / n_spaces
x = []
y = []
# calculate the x values to visualize
for i in range(n_spaces):
tmp = min_range + interval*i
x.append(tmp)
y.append(self.pdf(tmp))
# make the plots
fig, axes = plt.subplots(2,sharex=True)
fig.subplots_adjust(hspace=.5)
axes[0].hist(self.data, density=True)
axes[0].set_title('Normed Histogram of Data')
axes[0].set_ylabel('Density')
axes[1].plot(x, y)
axes[1].set_title('Normal Distribution for \n Sample Mean and Sample Standard Deviation')
axes[1].set_ylabel('Density')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Gaussian distributions
Args:
other (Gaussian): Gaussian instance
Returns:
Gaussian: Gaussian distribution
"""
result = Gaussian()
result.mean = self.mean + other.mean
result.stdev = math.sqrt(self.stdev ** 2 + other.stdev ** 2)
return result
def __repr__(self):
"""Function to output the characteristics of the Gaussian instance
Args:
None
Returns:
string: characteristics of the Gaussian
"""
return "mean {}, standard deviation {}".format(self.mean, self.stdev) | 5090-distributions | /5090_distributions-0.1.tar.gz/5090_distributions-0.1/5090_distributions/Gaussiandistribution.py | Gaussiandistribution.py |
class Distribution:
def __init__(self, mu=0, sigma=1):
""" Generic distribution class for calculating and
visualizing a probability distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats extracted from the data file
"""
self.mean = mu
self.stdev = sigma
self.data = []
def read_data_file(self, file_name):
"""Function to read in data from a txt file. The txt file should have
one number (float) per line. The numbers are stored in the data attribute.
Args:
file_name (string): name of a file to read from
Returns:
None
"""
with open(file_name) as file:
data_list = []
line = file.readline()
while line:
data_list.append(float(line))
line = file.readline()
self.data = data_list
| 5090-distributions | /5090_distributions-0.1.tar.gz/5090_distributions-0.1/5090_distributions/Generaldistribution.py | Generaldistribution.py |
from .Gaussiandistribution import Gaussian
from .Binomialdistribution import Binomial
| 5090-distributions | /5090_distributions-0.1.tar.gz/5090_distributions-0.1/5090_distributions/__init__.py | __init__.py |
import math
import matplotlib.pyplot as plt
from .Generaldistribution import Distribution
class Binomial(Distribution):
""" Binomial distribution class for calculating and
visualizing a Binomial distribution.
Attributes:
mean (float) representing the mean value of the distribution
stdev (float) representing the standard deviation of the distribution
data_list (list of floats) a list of floats to be extracted from the data file
p (float) representing the probability of an event occurring
n (int) number of trials
"""
def __init__(self, prob=.5, size=20):
self.n = size
self.p = prob
Distribution.__init__(self, self.calculate_mean(), self.calculate_stdev())
def calculate_mean(self):
"""Function to calculate the mean from p and n
Args:
None
Returns:
float: mean of the data set
"""
self.mean = self.p * self.n
return self.mean
def calculate_stdev(self):
"""Function to calculate the standard deviation from p and n.
Args:
None
Returns:
float: standard deviation of the data set
"""
self.stdev = math.sqrt(self.n * self.p * (1 - self.p))
return self.stdev
def replace_stats_with_data(self):
"""Function to calculate p and n from the data set
Args:
None
Returns:
float: the p value
float: the n value
"""
self.n = len(self.data)
self.p = 1.0 * sum(self.data) / len(self.data)
self.mean = self.calculate_mean()
self.stdev = self.calculate_stdev()
def plot_bar(self):
"""Function to output a histogram of the instance variable data using
matplotlib pyplot library.
Args:
None
Returns:
None
"""
plt.bar(x = ['0', '1'], height = [(1 - self.p) * self.n, self.p * self.n])
plt.title('Bar Chart of Data')
plt.xlabel('outcome')
plt.ylabel('count')
def pdf(self, k):
"""Probability density function calculator for the gaussian distribution.
Args:
x (float): point for calculating the probability density function
Returns:
float: probability density function output
"""
a = math.factorial(self.n) / (math.factorial(k) * (math.factorial(self.n - k)))
b = (self.p ** k) * (1 - self.p) ** (self.n - k)
return a * b
def plot_bar_pdf(self):
"""Function to plot the pdf of the binomial distribution
Args:
None
Returns:
list: x values for the pdf plot
list: y values for the pdf plot
"""
x = []
y = []
# calculate the x values to visualize
for i in range(self.n + 1):
x.append(i)
y.append(self.pdf(i))
# make the plots
plt.bar(x, y)
plt.title('Distribution of Outcomes')
plt.ylabel('Probability')
plt.xlabel('Outcome')
plt.show()
return x, y
def __add__(self, other):
"""Function to add together two Binomial distributions with equal p
Args:
other (Binomial): Binomial instance
Returns:
Binomial: Binomial distribution
"""
assert self.p == other.p, 'p values are not equal'
result = Binomial()
result.n = self.n + other.n
result.p = self.p
result.calculate_mean()
result.calculate_stdev()
return result
def __repr__(self):
"""Function to output the characteristics of the Binomial instance
Args:
None
Returns:
string: characteristics of the Binomial
"""
return "mean {}, standard deviation {}, p {}, n {}".\
format(self.mean, self.stdev, self.p, self.n) | 5090-distributions | /5090_distributions-0.1.tar.gz/5090_distributions-0.1/5090_distributions/Binomialdistribution.py | Binomialdistribution.py |
from distutils.core import setup
setup(
name='51PubModules',
version='0.0.2',
author='jun',
author_email='jun.mr@qq.com',
url='http://docs.51pub.cn/python/opmodules',
packages=['opmysql'],
description='system manage modules',
license='MIT',
install_requires=['pymysql'],
)
| 51PubModules | /51PubModules-0.0.2.tar.gz/51PubModules-0.0.2/setup.py | setup.py |
import pymysql
import time
import os
import subprocess
import logging
__all__ = ["PyMysqlDB"]
logging.basicConfig(level=logging.DEBUG,
format='%(asctime)s [%(levelname)s] %(funcName)s: %(message)s',
datefmt="%d %b %Y %H:%M:%S")
class PyMysqlDB:
def __init__(self, host=None, user=None, pwd=None, port=3306, base_path=None, backup_path='/data/LocalBackup'):
self.host = host
self.user = user
self.pwd = pwd
self.port = int(port)
self.base_path = base_path
self.backup_path = backup_path
def select_database(self):
db_list = []
con = pymysql.connect(host=self.host, user=self.user, password=self.pwd, db='information_schema',
port=self.port)
cur = con.cursor()
cur.execute('select SCHEMA_NAME from SCHEMATA')
for (db,) in cur.fetchall():
db_list.append(db)
return db_list
def backup_by_database(self, database):
logging.info('backup database: {}'.format(database))
today = time.strftime("%Y%m%d", time.localtime())
backup_dir = '{}/{}'.format(self.backup_path, today)
if not os.path.isdir(backup_dir):
os.makedirs(backup_dir)
os.chdir(backup_dir)
start_time = int(time.time())
cmd = "{}/bin/mysqldump --opt -h{} -P{} -u{} -p{} {} | gzip > {}/{}/{}-{}-{}.sql.gz".format(self.base_path,
self.host,
self.port,
self.user, self.pwd,
database,
self.backup_path,
today, today,
self.host,
database)
result = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
content = result.stdout.read()
if content and not content.decode().startswith("Warning:"):
subject = "{} - {} backup error, reason: {}".format(self.host, database, content.decode())
logging.error(subject)
end_time = int(time.time())
use_time = end_time - start_time
logging.info('{} - {} backup finished, use time: {}s'.format(self.host, database, float('%.2f' % use_time)))
def backup_by_table(self):
pass
def backup_all(self, **kwargs):
exclude_db = kwargs.get('exclude_db', [])
db_list = [val for val in self.select_database() if val not in exclude_db]
logging.info('db_list: {}'.format(db_list))
for db in db_list:
self.backup_by_database(db)
logging.info('{} backup all finished'.format(self.host))
| 51PubModules | /51PubModules-0.0.2.tar.gz/51PubModules-0.0.2/opmysql/mysqldb.py | mysqldb.py |
Overview
========
This is the Python wrapper of the lite C pattern-based mobile detection solution by 51Degrees.mobi. This package is designed to work in conjunction with the core 51Degrees.mobi Mobile Detector for Python package. Please, check out `51degrees.mobi <http://51degrees.mobi>`_ for a detailed description of the Python mobile detection solution, extra documentation and other useful information.
| 51degrees-mobile-detector-lite-pattern-wrapper | /51degrees-mobile-detector-lite-pattern-wrapper-1.0.tar.gz/51degrees-mobile-detector-lite-pattern-wrapper-1.0/README.rst | README.rst |
'''
51Degrees Mobile Detector (Lite C Pattern Wrapper)
==================================================
51Degrees Mobile Detector is a Python wrapper of the lite C pattern-based
mobile detection solution by 51Degrees.mobi. Check out http://51degrees.mobi
for a detailed description, extra documentation and other useful information.
:copyright: (c) 2013 by 51Degrees.mobi, see README.rst for more details.
:license: MPL2, see LICENSE.txt for more details.
'''
from __future__ import absolute_import
import os
import subprocess
import shutil
import tempfile
from setuptools import setup, find_packages, Extension
from distutils.command.build_ext import build_ext as _build_ext
from distutils import ccompiler
def has_snprintf():
'''Checks C function snprintf() is available in the platform.
'''
cc = ccompiler.new_compiler()
tmpdir = tempfile.mkdtemp(prefix='51degrees-mobile-detector-lite-pattern-wrapper-install-')
try:
try:
source = os.path.join(tmpdir, 'snprintf.c')
with open(source, 'w') as f:
f.write(
'#include <stdio.h>\n'
'int main() {\n'
' char buffer[8];\n'
' snprintf(buffer, 8, "Hey!");\n'
' return 0;\n'
'}')
objects = cc.compile([source], output_dir=tmpdir)
cc.link_executable(objects, os.path.join(tmpdir, 'a.out'))
except:
return False
return True
finally:
shutil.rmtree(tmpdir)
class build_ext(_build_ext):
def run(self, *args, **kwargs):
'''
Some stuff needs to be generated before running normal Python
extension build:
- Compilation of 'lib/pcre/dftables.c'.
- Generation of 'lib/pcre/pcre_chartables.c'.
'''
# Fetch root folder of the project.
root = os.path.dirname(os.path.abspath(__file__))
# Compile 'lib/pcre/dftables.c'.
cc = ccompiler.new_compiler()
objects = cc.compile([os.path.join('lib', 'pcre', 'dftables.c')])
if objects:
cc.link_executable(objects, os.path.join('lib', 'pcre', 'dftables'))
else:
raise Exception('Failed to compile "dftables.c".')
# Generate 'lib/pcre/pcre_chartables.c'.
if subprocess.call('"%s" "%s"' % (
os.path.join(root, 'lib', 'pcre', 'dftables'),
os.path.join(root, 'lib', 'pcre', 'pcre_chartables.c')), shell=True) != 0:
raise Exception('Failed to generate "pcre_chartables.c".')
# Continue with normal command behavior.
return _build_ext.run(self, *args, **kwargs)
define_macros = []
if has_snprintf():
define_macros.append(('HAVE_SNPRINTF', None))
setup(
name='51degrees-mobile-detector-lite-pattern-wrapper',
version='1.0',
author='51Degrees.mobi',
author_email='info@51degrees.mobi',
cmdclass={'build_ext': build_ext},
packages=find_packages(),
include_package_data=True,
ext_modules=[
Extension('_fiftyone_degrees_mobile_detector_lite_pattern_wrapper',
sources=[
'wrapper.c',
os.path.join('lib', '51Degrees.mobi.c'),
os.path.join('lib', 'pcre', 'pcre_chartables.c'),
os.path.join('lib', 'pcre', 'pcre_compile.c'),
os.path.join('lib', 'pcre', 'pcre_config.c'),
os.path.join('lib', 'pcre', 'pcre_dfa_exec.c'),
os.path.join('lib', 'pcre', 'pcre_exec.c'),
os.path.join('lib', 'pcre', 'pcre_fullinfo.c'),
os.path.join('lib', 'pcre', 'pcre_get.c'),
os.path.join('lib', 'pcre', 'pcre_globals.c'),
os.path.join('lib', 'pcre', 'pcre_info.c'),
os.path.join('lib', 'pcre', 'pcre_maketables.c'),
os.path.join('lib', 'pcre', 'pcre_newline.c'),
os.path.join('lib', 'pcre', 'pcre_ord2utf8.c'),
os.path.join('lib', 'pcre', 'pcre_refcount.c'),
os.path.join('lib', 'pcre', 'pcre_study.c'),
os.path.join('lib', 'pcre', 'pcre_tables.c'),
os.path.join('lib', 'pcre', 'pcre_try_flipped.c'),
os.path.join('lib', 'pcre', 'pcre_ucp_searchfuncs.c'),
os.path.join('lib', 'pcre', 'pcre_valid_utf8.c'),
os.path.join('lib', 'pcre', 'pcre_version.c'),
os.path.join('lib', 'pcre', 'pcre_xclass.c'),
os.path.join('lib', 'pcre', 'pcreposix.c'),
os.path.join('lib', 'snprintf', 'snprintf.c'),
],
define_macros=define_macros,
extra_compile_args=[
'-w',
# Let the linker strip duplicated symbols (required in OSX).
'-fcommon',
# Avoid 'Symbol not found' errors on extension load caused by
# usage of vendor specific '__inline' keyword.
'-std=gnu89',
],
),
],
url='http://51degrees.mobi',
description='51Degrees Mobile Detector (Lite C Pattern Wrapper).',
long_description=__doc__,
license='MPL2',
classifiers=[
'Development Status :: 5 - Production/Stable',
'Intended Audience :: Developers',
'Topic :: Software Development :: Libraries',
'License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)',
'Programming Language :: C',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.2',
'Programming Language :: Python :: 3.3',
'Operating System :: POSIX',
'Operating System :: MacOS :: MacOS X',
],
install_requires=[
'distribute',
'51degrees-mobile-detector',
],
)
| 51degrees-mobile-detector-lite-pattern-wrapper | /51degrees-mobile-detector-lite-pattern-wrapper-1.0.tar.gz/51degrees-mobile-detector-lite-pattern-wrapper-1.0/setup.py | setup.py |
Overview
========
This is the Python wrapper of the C trie-based mobile detection solution by 51Degrees.mobi. This package is designed to work in conjunction with the core 51Degrees.mobi Mobile Detector for Python package. Please, check out `51degrees.mobi <http://51degrees.mobi>`_ for a detailed description of the Python mobile detection solution, extra documentation and other useful information.
| 51degrees-mobile-detector-trie-wrapper | /51degrees-mobile-detector-trie-wrapper-1.0.tar.gz/51degrees-mobile-detector-trie-wrapper-1.0/README.rst | README.rst |
'''
51Degrees Mobile Detector (C Trie Wrapper)
==========================================
51Degrees Mobile Detector is a Python wrapper of the C trie-based mobile
detection solution by 51Degrees.mobi. Check out http://51degrees.mobi for
a detailed description, extra documentation and other useful information.
:copyright: (c) 2013 by 51Degrees.mobi, see README.rst for more details.
:license: MPL2, see LICENSE.txt for more details.
'''
from __future__ import absolute_import
import os
import subprocess
import shutil
import tempfile
from setuptools import setup, find_packages, Extension
from distutils import ccompiler
def has_snprintf():
'''Checks C function snprintf() is available in the platform.
'''
cc = ccompiler.new_compiler()
tmpdir = tempfile.mkdtemp(prefix='51degrees-mobile-detector-trie-wrapper-install-')
try:
try:
source = os.path.join(tmpdir, 'snprintf.c')
with open(source, 'w') as f:
f.write(
'#include <stdio.h>\n'
'int main() {\n'
' char buffer[8];\n'
' snprintf(buffer, 8, "Hey!");\n'
' return 0;\n'
'}')
objects = cc.compile([source], output_dir=tmpdir)
cc.link_executable(objects, os.path.join(tmpdir, 'a.out'))
except:
return False
return True
finally:
shutil.rmtree(tmpdir)
define_macros = []
if has_snprintf():
define_macros.append(('HAVE_SNPRINTF', None))
setup(
name='51degrees-mobile-detector-trie-wrapper',
version='1.0',
author='51Degrees.mobi',
author_email='info@51degrees.mobi',
packages=find_packages(),
include_package_data=True,
ext_modules=[
Extension('_fiftyone_degrees_mobile_detector_trie_wrapper',
sources=[
'wrapper.c',
os.path.join('lib', '51Degrees.mobi.c'),
os.path.join('lib', 'snprintf', 'snprintf.c'),
],
define_macros=define_macros,
extra_compile_args=[
'-w',
],
),
],
url='http://51degrees.mobi',
description='51Degrees Mobile Detector (C Trie Wrapper).',
long_description=__doc__,
license='MPL2',
classifiers = [
'Development Status :: 5 - Production/Stable',
'Intended Audience :: Developers',
'Topic :: Software Development :: Libraries',
'License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)',
'Programming Language :: C',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.2',
'Programming Language :: Python :: 3.3',
'Operating System :: POSIX',
'Operating System :: MacOS :: MacOS X',
],
install_requires=[
'distribute',
'51degrees-mobile-detector',
],
)
| 51degrees-mobile-detector-trie-wrapper | /51degrees-mobile-detector-trie-wrapper-1.0.tar.gz/51degrees-mobile-detector-trie-wrapper-1.0/setup.py | setup.py |
|51degrees|
Device Detection Python API
51Degrees Mobile Detector is a server side mobile detection solution.
Changelog
====================
- Fixed a bug where an additional compile argument was causing compilation errors with clang.
- Updated the v3-trie-wrapper package to include the Lite Hash Trie data file.
- Updated Lite Pattern data file for November.
- Updated Lite Hash Trie data file for November.
General
========
Before you start matching user agents, you may wish to configure the solution to use a different database. You can easily generate a sample settings file by running the following command:
$ 51degrees-mobile-detector settings > ~/51degrees-mobile-detector.settings.py
The core ``51degrees-mobile-detector`` is included as a dependency when installing either the ``51degrees-mobile-detector-v3-wrapper`` or ``51degrees-mobile-detector-v3-trie-wrapper`` packages.
During install a directory which contains your data file will be created in ``~\51Degrees``.
Settings
=========
General Settings
----------------
- ``DETECTION_METHOD`` (defaults to 'v3-wrapper'). Sets the preferred mobile device detection method. Available options are v3-wrapper (requires the 51degrees-mobile-detector-v3-wrapper package) and v3-trie-wrapper (requires the 51degrees-mobile-detector-v3-trie-wrapper package).
- ``PROPERTIES`` (defaults to ''). List of case-sensitive property names to be fetched on every device detection. Leave empty to fetch all available properties.
- ``LICENCE`` Your 51Degrees license key for enhanced device data. This is required if you want to set up the automatic 51degrees-mobile-detector-premium-pattern-wrapper package updates.
Trie Detector settings
-----------------------
- ``V3_TRIE_WRAPPER_DATABASE`` Location of the Hash Trie data file.
Pattern Detector settings
--------------------------
- ``V3_WRAPPER_DATABASE`` Location of the Pattern data file.
- ``CACHE_SIZE`` (defaults to 10000). Sets the size of the workset cache.
- ``POOL_SIZE`` (defaults to 20). Sets the size of the workset pool.
Usage Sharer Settings
----------------------
- ``USAGE_SHARER_ENABLED`` (defaults to True). Indicates if usage data should be shared with 51Degrees.com. We recommended leaving this value unchanged to ensure we're improving the performance and accuracy of the solution.
- Advanced usage sharer settings are detailed in your settings file.
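As an illustrative sketch only, a settings file combining the options above might look like the following. All values shown are example assumptions (the database paths in particular are hypothetical); generate the real template with ``51degrees-mobile-detector settings``.

```python
# Sketch of ~/51degrees-mobile-detector.settings.py (illustrative values only).

DETECTION_METHOD = 'v3-wrapper'   # or 'v3-trie-wrapper'
PROPERTIES = ''                   # empty fetches all available properties
LICENCE = ''                      # needed only for automatic premium updates

# Pattern detector settings.
V3_WRAPPER_DATABASE = '/home/user/51Degrees/pattern.dat'  # hypothetical path
CACHE_SIZE = 10000
POOL_SIZE = 20

# Trie detector settings.
V3_TRIE_WRAPPER_DATABASE = '/home/user/51Degrees/hash.trie'  # hypothetical path

# Share usage data with 51Degrees (recommended).
USAGE_SHARER_ENABLED = True
```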
Automatic Updates
------------------
If you want to set up automatic updates, add your license key to your settings and add the following command to your cron
$ 51degrees-mobile-detector update-premium-pattern-wrapper
NOTE: Currently auto updates are only available with our Pattern API.
Usage
======
Core
-----
Executing the following will display a help page explaining basic usage.
$ 51degrees-mobile-detector
To check everything is set up, try fetching a match with:
$ 51degrees-mobile-detector match "Mozilla/5.0 (iPad; CPU OS 5_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Mobile/9B176"
Examples
=========
Additional examples can be found on our GitHub_ repository.
User Support
============
If you have any issues please get in touch with our Support_ or open an issue on our GitHub_ repository.
.. |51degrees| image:: https://51degrees.com/DesktopModules/FiftyOne/Distributor/Logo.ashx?utm_source=github&utm_medium=repository&utm_content=readme_pattern&utm_campaign=python-open-source
:target: https://51degrees.com
.. _GitHub: https://github.com/51Degrees/Device-Detection/tree/master/python
.. _Support: mailto:support@51degrees.com
| 51degrees-mobile-detector-v3-trie-wrapper | /51degrees-mobile-detector-v3-trie-wrapper-3.2.18.4.tar.gz/51degrees-mobile-detector-v3-trie-wrapper-3.2.18.4/README.rst | README.rst |