| hexsha stringlengths 40 40 | size int64 5 2.06M | ext stringclasses 11 values | lang stringclasses 1 value | max_stars_repo_path stringlengths 3 251 | max_stars_repo_name stringlengths 4 130 | max_stars_repo_head_hexsha stringlengths 40 78 | max_stars_repo_licenses listlengths 1 10 | max_stars_count int64 1 191k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 3 251 | max_issues_repo_name stringlengths 4 130 | max_issues_repo_head_hexsha stringlengths 40 78 | max_issues_repo_licenses listlengths 1 10 | max_issues_count int64 1 116k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 3 251 | max_forks_repo_name stringlengths 4 130 | max_forks_repo_head_hexsha stringlengths 40 78 | max_forks_repo_licenses listlengths 1 10 | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | content stringlengths 1 1.05M | avg_line_length float64 1 1.02M | max_line_length int64 3 1.04M | alphanum_fraction float64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ae219f0c0bca31c2d0339c1392885c6f7d746211 | 2,007 | py | Python | src/util/utils.py | purpleposeidon/texture-atlas-generator | 68864bf42a8bb38b4cdbd883f776b32816084c8e | [
"Unlicense",
"MIT"
] | null | null | null | src/util/utils.py | purpleposeidon/texture-atlas-generator | 68864bf42a8bb38b4cdbd883f776b32816084c8e | [
"Unlicense",
"MIT"
] | null | null | null | src/util/utils.py | purpleposeidon/texture-atlas-generator | 68864bf42a8bb38b4cdbd883f776b32816084c8e | [
"Unlicense",
"MIT"
] | null | null | null | import os.path
import shutil
from data_parsers.json_parser import JsonParser
from data_parsers.xml_parser import XmlParser
from data_parsers.parser import ParserError
from packing_algorithms.ratcliff.texture_packer_ratcliff import TexturePackerRatcliff
from packing_algorithms.maxrects.texture_packer_maxrects import TexturePackerMaxRects
from packing_algorithms.maxrects.texture_packer_maxrects import FreeRectChoiceHeuristicEnum
| 34.016949 | 93 | 0.757349 |
ae21aa10b6eac30d1836dbd0c8245d129f6fe3ff | 1,038 | py | Python | myslice/web/rest/confirm.py | loicbaron/myslice2 | 32af9462cc9e5654a6e3036978ae74b0a03a2698 | [
"MIT"
] | null | null | null | myslice/web/rest/confirm.py | loicbaron/myslice2 | 32af9462cc9e5654a6e3036978ae74b0a03a2698 | [
"MIT"
] | 1 | 2020-06-02T12:30:07.000Z | 2020-06-02T12:30:07.000Z | myslice/web/rest/confirm.py | loicbaron/myslice2 | 32af9462cc9e5654a6e3036978ae74b0a03a2698 | [
"MIT"
] | 1 | 2018-10-29T16:11:26.000Z | 2018-10-29T16:11:26.000Z | import json
import logging
import rethinkdb as r
from tornado import gen, escape
from myslice.db.activity import Event
from myslice.lib.util import myJSONEncoder
from myslice.web.rest import Api
logger = logging.getLogger('myslice.rest.confirm')
| 31.454545 | 95 | 0.628131 |
ae21fa093e46f99cba061fc2247a9c451b1f519b | 218 | py | Python | FullContact/utils.py | KamalAwasthi/FullContact | fa2e9f29079064b015848d980ddbb8da51f323c9 | [
"Apache-2.0"
] | 2 | 2018-05-31T16:21:06.000Z | 2019-11-28T11:58:12.000Z | FullContact/utils.py | KamalAwasthi/FullContact | fa2e9f29079064b015848d980ddbb8da51f323c9 | [
"Apache-2.0"
] | null | null | null | FullContact/utils.py | KamalAwasthi/FullContact | fa2e9f29079064b015848d980ddbb8da51f323c9 | [
"Apache-2.0"
] | 2 | 2018-02-12T16:37:08.000Z | 2019-11-28T11:58:24.000Z | import requests
| 31.142857 | 133 | 0.738532 |
ae22121e986bc6059cb536b9769429d2efd4c361 | 1,665 | py | Python | python/advent_of_code/y2015/day01.py | stonecharioteer/advent-of-code | c18e47e378e82f82b77558a114e7d7c3a43c8429 | [
"MIT"
] | null | null | null | python/advent_of_code/y2015/day01.py | stonecharioteer/advent-of-code | c18e47e378e82f82b77558a114e7d7c3a43c8429 | [
"MIT"
] | null | null | null | python/advent_of_code/y2015/day01.py | stonecharioteer/advent-of-code | c18e47e378e82f82b77558a114e7d7c3a43c8429 | [
"MIT"
] | null | null | null | """--- Day 1: Not Quite Lisp ---
Santa was hoping for a white Christmas, but his weather machine's "snow" function is powered by stars, and he's fresh out! To save Christmas, he needs you to collect fifty stars by December 25th.
Collect stars by helping Santa solve puzzles. Two puzzles will be made available on each day in the Advent calendar; the second puzzle is unlocked when you complete the first. Each puzzle grants one star. Good luck!
Here's an easy puzzle to warm you up.
Santa is trying to deliver presents in a large apartment building, but he can't find the right floor - the directions he got are a little confusing. He starts on the ground floor (floor 0) and then follows the instructions one character at a time.
An opening parenthesis, (, means he should go up one floor, and a closing parenthesis, ), means he should go down one floor.
The apartment building is very tall, and the basement is very deep; he will never find the top or bottom floors.
For example:
(()) and ()() both result in floor 0.
((( and (()(()( both result in floor 3.
))((((( also results in floor 3.
()) and ))( both result in floor -1 (the first basement level).
))) and )())()) both result in floor -3.
To what floor do the instructions take Santa?"""
from typing import TextIO, Tuple
def run(inp: TextIO) -> Tuple[int, int]:
"""Returns floor count"""
data = inp.read()
floor = 0
basement = None
for ix, character in enumerate(data):
if character == "(":
floor += 1
elif character == ")":
floor -= 1
if floor == -1 and basement is None:
basement = ix+1
return floor, basement
| 42.692308 | 247 | 0.684685 |
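For a quick check, the `run()` helper from the row above can be restated self-contained and exercised against the worked examples in the puzzle text (a hypothetical usage sketch; `io.StringIO` stands in for the input file handle):

```python
import io

def run(inp):
    """Returns (final floor, 1-based position of the first basement entry)."""
    data = inp.read()
    floor = 0
    basement = None
    for ix, character in enumerate(data):
        if character == "(":
            floor += 1
        elif character == ")":
            floor -= 1
        # Record the first time Santa reaches floor -1
        if floor == -1 and basement is None:
            basement = ix + 1
    return floor, basement

print(run(io.StringIO("(())")))      # (0, None): up two, down two, never below ground
print(run(io.StringIO("))((((("))) # (3, 1): enters the basement on character 1
print(run(io.StringIO("())")))       # (-1, 3)
```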
ae2625e0bfcb85513b735f8abfbccb014e1bc0b8 | 875 | py | Python | setup.py | nbgallery/ipylogging | fa54a7ace0262398b5d7a9dd3ec6938248a70752 | [
"MIT"
] | 1 | 2021-10-18T22:12:37.000Z | 2021-10-18T22:12:37.000Z | setup.py | nbgallery/ipylogging | fa54a7ace0262398b5d7a9dd3ec6938248a70752 | [
"MIT"
] | null | null | null | setup.py | nbgallery/ipylogging | fa54a7ace0262398b5d7a9dd3ec6938248a70752 | [
"MIT"
] | null | null | null | # vim: expandtab tabstop=4 shiftwidth=4
from setuptools import setup
# read the contents of your README file
from os import path
this_directory = path.abspath(path.dirname(__file__))
with open(path.join(this_directory, 'README.md'), 'r') as f:
long_description = f.read()
setup(
name='ipylogging',
version='2020.342.1',
author='Bill Allen',
author_email='photo.allen@gmail.com',
description='Easy log messages in Jupyter notebooks.',
long_description=long_description,
long_description_content_type='text/markdown',
license='MIT',
keywords='logging logger logs ipython jupyter notebook messages'.split(),
url='https://github.com/nbgallery/ipylogging',
packages=['ipylogging'],
classifiers=[
'Development Status :: 4 - Beta',
'Topic :: Utilities',
'License :: OSI Approved :: MIT License'
]
)
| 30.172414 | 77 | 0.691429 |
ae288231dc020ec00eec037bd175a4539730e6b8 | 2,594 | py | Python | utils/i18n.py | minsukkahng/pokr.kr | 169475778c998b4198ac7d6a1cebbc3c389e41b8 | [
"Apache-2.0"
] | 76 | 2015-01-19T12:39:43.000Z | 2021-10-14T06:10:25.000Z | utils/i18n.py | minsukkahng/pokr.kr | 169475778c998b4198ac7d6a1cebbc3c389e41b8 | [
"Apache-2.0"
] | 22 | 2015-01-03T01:00:53.000Z | 2019-09-14T11:55:06.000Z | utils/i18n.py | minsukkahng/pokr.kr | 169475778c998b4198ac7d6a1cebbc3c389e41b8 | [
"Apache-2.0"
] | 28 | 2015-01-14T15:45:00.000Z | 2020-06-03T13:29:41.000Z | from babel import Locale
from flask import current_app as cur_app, request
from flask.ext.babel import Babel, get_locale
from functools import wraps
from popong_nlp.utils.translit import translit
__all__ = ['PopongBabel']
def assert_valid_locale(locale):
if not is_valid_locale(locale):
raise InvalidLocaleError()
def filter_translit(*args, **kwargs):
locale = str(get_locale())
_type = kwargs.get('type')
if len(args) == 1:
string = args[0]
return translit(string, 'ko', locale, _type) if locale != 'ko' else string
elif args:
        raise Exception('filter_translit() only accepts one or zero arguments')
else:
return lambda x: filter_translit(x, type=_type)
| 24.018519 | 82 | 0.662298 |
ae28fbfcfc5475fc99a477407eec02fb25989dcb | 5,240 | py | Python | Model/lookalike-model/lookalike_model/application/pipeline/top_n_similarity_table_generator.py | sanjaynirmal/blue-marlin | 725d614e941e5de76562d354edf11ac18897f242 | [
"Apache-2.0"
] | 1 | 2020-03-06T09:41:49.000Z | 2020-03-06T09:41:49.000Z | Model/lookalike-model/lookalike_model/application/pipeline/top_n_similarity_table_generator.py | sanjaynirmal/blue-marlin | 725d614e941e5de76562d354edf11ac18897f242 | [
"Apache-2.0"
] | null | null | null | Model/lookalike-model/lookalike_model/application/pipeline/top_n_similarity_table_generator.py | sanjaynirmal/blue-marlin | 725d614e941e5de76562d354edf11ac18897f242 | [
"Apache-2.0"
] | null | null | null | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0.html
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import yaml
import argparse
import pyspark.sql.functions as fn
from pyspark import SparkContext
from pyspark.sql import HiveContext
from pyspark.sql.types import FloatType, StringType, StructType, StructField, ArrayType, MapType, StructType
# from rest_client import predict, str_to_intlist
import requests
import json
from pyspark.sql.functions import udf
from math import sqrt
import time
import numpy as np
import itertools
import heapq
'''
This process generates the top-n-similarity table.
spark-submit --master yarn --num-executors 20 --executor-cores 5 --executor-memory 16G --driver-memory 16G --conf spark.driver.maxResultSize=5g --conf spark.hadoop.hive.exec.dynamic.partition=true --conf spark.hadoop.hive.exec.dynamic.partition.mode=nonstrict top_n_similarity_table_generator.py config.yml
The top-n-similarity table is
|user| top-N-similarity|top-n-users
|:-------------| :------------: |
|user-1-did| [similarity-score-11, similarity-score-12, similarity-score-13] |[user-did-1, user-did-2, user-did-3]|
|user-2-did| [similarity-score-21, similarity-score-22, similarity-score-23] |[user-did-10, user-did-20, user-did-30]|
|user-3-did| [similarity-score-31, similarity-score-32, similarity-score-33] |[user-did-23, user-did-87, user-did-45]|
'''
if __name__ == "__main__":
start = time.time()
parser = argparse.ArgumentParser(description=" ")
parser.add_argument('config_file')
args = parser.parse_args()
with open(args.config_file, 'r') as yml_file:
cfg = yaml.safe_load(yml_file)
sc = SparkContext.getOrCreate()
sc.setLogLevel('INFO')
hive_context = HiveContext(sc)
run(sc=sc, hive_context=hive_context, cfg=cfg)
sc.stop()
end = time.time()
print('Runtime of the program is:', (end - start))
| 40.620155 | 306 | 0.717748 |
ae294288f339abaa44909776daf88e26d1673f50 | 1,056 | py | Python | lib/auth.py | p4lsec/autoshoppr | a0dba3060e26008c2d441358ff7f4a909ba4fcab | [
"MIT"
] | null | null | null | lib/auth.py | p4lsec/autoshoppr | a0dba3060e26008c2d441358ff7f4a909ba4fcab | [
"MIT"
] | null | null | null | lib/auth.py | p4lsec/autoshoppr | a0dba3060e26008c2d441358ff7f4a909ba4fcab | [
"MIT"
] | null | null | null | from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
import pickle
import configparser | 35.2 | 98 | 0.629735 |
ae29895c6324b4119860a3e674198d1b40dd9964 | 1,317 | py | Python | Verulean/days/aoc15.py | BasedJellyfish11/Advent-of-Code-2021 | 9ed84902958c99c341ec2444d5db561c84348911 | [
"MIT"
] | 3 | 2021-12-03T22:40:17.000Z | 2021-12-23T21:17:16.000Z | Verulean/days/aoc15.py | BasedJellyfish11/Advent-of-Code-2021 | 9ed84902958c99c341ec2444d5db561c84348911 | [
"MIT"
] | null | null | null | Verulean/days/aoc15.py | BasedJellyfish11/Advent-of-Code-2021 | 9ed84902958c99c341ec2444d5db561c84348911 | [
"MIT"
] | null | null | null | import numpy as np
import heapq
| 24.388889 | 64 | 0.525437 |
ae2d0af0f1b9daeb6ad913a0cc22fcfa911b9c6b | 5,291 | py | Python | pypy/module/_minimal_curses/fficurses.py | microvm/pypy-mu | 6b03fbe93052d0eb3a4c67152c987c16837b3484 | [
"Apache-2.0",
"OpenSSL"
] | 34 | 2015-07-09T04:53:27.000Z | 2021-07-19T05:22:27.000Z | pypy/module/_minimal_curses/fficurses.py | microvm/pypy-mu | 6b03fbe93052d0eb3a4c67152c987c16837b3484 | [
"Apache-2.0",
"OpenSSL"
] | 6 | 2015-05-30T17:20:45.000Z | 2017-06-12T14:29:23.000Z | pypy/module/_minimal_curses/fficurses.py | microvm/pypy-mu | 6b03fbe93052d0eb3a4c67152c987c16837b3484 | [
"Apache-2.0",
"OpenSSL"
] | 11 | 2015-09-07T14:26:08.000Z | 2020-04-10T07:20:41.000Z | """ The ffi for rpython, need to be imported for side effects
"""
from rpython.rtyper.lltypesystem import rffi
from rpython.rtyper.lltypesystem import lltype
from rpython.rtyper.tool import rffi_platform
from rpython.rtyper.extfunc import register_external
from pypy.module._minimal_curses import interp_curses
from rpython.translator.tool.cbuild import ExternalCompilationInfo
# We cannot trust ncurses5-config, it's broken in various ways in
# various versions. For example it might not list -ltinfo even though
# it's needed, or --cflags might be completely empty. On Ubuntu 10.04
# it gives -I/usr/include/ncurses, which doesn't exist at all. Crap.
def guess_eci():
for eci in try_eci():
if rffi_platform.configure(CConfig)['HAS']:
return eci
raise ImportError("failed to guess where ncurses is installed. "
"You might need to install libncurses5-dev or similar.")
eci = guess_eci()
INT = rffi.INT
INTP = lltype.Ptr(lltype.Array(INT, hints={'nolength':True}))
c_setupterm = rffi.llexternal('setupterm', [rffi.CCHARP, INT, INTP], INT,
compilation_info=eci)
c_tigetstr = rffi.llexternal('tigetstr', [rffi.CCHARP], rffi.CCHARP,
compilation_info=eci)
c_tparm = rffi.llexternal('tparm', [rffi.CCHARP, INT, INT, INT, INT, INT,
INT, INT, INT, INT], rffi.CCHARP,
compilation_info=eci)
ERR = rffi.CConstant('ERR', lltype.Signed)
OK = rffi.CConstant('OK', lltype.Signed)
register_external(interp_curses._curses_setupterm_null,
[int], llimpl=curses_setupterm_null_llimpl,
export_name='_curses.setupterm_null')
register_external(interp_curses._curses_setupterm,
[str, int], llimpl=curses_setupterm_llimpl,
export_name='_curses.setupterm')
register_external(interp_curses._curses_tigetstr, [str], str,
export_name='_curses.tigetstr', llimpl=tigetstr_llimpl)
register_external(interp_curses._curses_tparm, [str, [int]], str,
export_name='_curses.tparm', llimpl=tparm_llimpl)
| 36.743056 | 82 | 0.634096 |
ae2e72786b0e755905085b12bef4f3ce69f9d8fc | 34 | py | Python | pycalc.py | erhuabushuo/pycalc | a46b85aaafe37ad7cca95ac0198d9bfea985b598 | [
"MIT"
] | null | null | null | pycalc.py | erhuabushuo/pycalc | a46b85aaafe37ad7cca95ac0198d9bfea985b598 | [
"MIT"
] | null | null | null | pycalc.py | erhuabushuo/pycalc | a46b85aaafe37ad7cca95ac0198d9bfea985b598 | [
"MIT"
] | null | null | null | import calcpy
calcpy.calculcate() | 11.333333 | 19 | 0.823529 |
ae2f2e86a4f028f1e691a235394a547ef477d257 | 470 | py | Python | peripherals/cardreader.py | sparkoo/payterm | a8c783583017e65a6c2549a831a7dfa44367dbd1 | [
"WTFPL"
] | null | null | null | peripherals/cardreader.py | sparkoo/payterm | a8c783583017e65a6c2549a831a7dfa44367dbd1 | [
"WTFPL"
] | 4 | 2020-02-26T21:56:57.000Z | 2020-03-01T11:37:39.000Z | peripherals/cardreader.py | sparkoo/payterm | a8c783583017e65a6c2549a831a7dfa44367dbd1 | [
"WTFPL"
] | null | null | null | import RPi.GPIO as GPIO
import time
from mfrc522 import SimpleMFRC522
import importlib.util
spec = importlib.util.spec_from_file_location("conn", "lib/conn.py")
conn = importlib.util.module_from_spec(spec)
spec.loader.exec_module(conn)
conn.writeConn("cardreader", readCard)
| 20.434783 | 68 | 0.723404 |
ae313f7b22dd8a45cb53e8bfba694df52241d4b5 | 1,310 | py | Python | exercises/development/intermediate/exercise_5.py | littlekign/comp-think.github.io | 21bce306c7672b6355a6fdaf260824542dbca595 | [
"CC0-1.0",
"CC-BY-4.0"
] | 40 | 2019-01-25T11:14:30.000Z | 2021-12-05T15:04:11.000Z | exercises/development/intermediate/exercise_5.py | littlekign/comp-think.github.io | 21bce306c7672b6355a6fdaf260824542dbca595 | [
"CC0-1.0",
"CC-BY-4.0"
] | 1 | 2020-11-08T15:18:58.000Z | 2020-11-19T22:44:28.000Z | exercises/development/intermediate/exercise_5.py | littlekign/comp-think.github.io | 21bce306c7672b6355a6fdaf260824542dbca595 | [
"CC0-1.0",
"CC-BY-4.0"
] | 19 | 2019-12-28T16:06:01.000Z | 2021-12-14T15:52:44.000Z | # -*- coding: utf-8 -*-
# Copyright (c) 2019, Silvio Peroni <essepuntato@gmail.com>
#
# Permission to use, copy, modify, and/or distribute this software for any purpose
# with or without fee is hereby granted, provided that the above copyright notice
# and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
# REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
# FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT,
# OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE,
# DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS
# SOFTWARE.
from collections import deque
# Test case for the function
# Code of the function
# Tests
print(test_do_it(deque(["a", "b"]), 3, None))
print(test_do_it(deque(["a", "b", "c", "d", "e"]), 3, deque(["d", "e"])))
| 33.589744 | 84 | 0.70458 |
ae3401171f8e9d1d9a120271065ad3caf42b8ad2 | 38 | py | Python | tests/__init__.py | masasin/latexipy | 1f888a44f2077a5c0ef63216616cd24c279e44d0 | [
"MIT"
] | 144 | 2017-08-24T08:58:58.000Z | 2021-04-18T10:38:44.000Z | tests/__init__.py | masasin/latexipy | 1f888a44f2077a5c0ef63216616cd24c279e44d0 | [
"MIT"
] | 424 | 2017-09-04T16:21:10.000Z | 2022-03-28T02:23:25.000Z | tests/__init__.py | masasin/latexipy | 1f888a44f2077a5c0ef63216616cd24c279e44d0 | [
"MIT"
] | 15 | 2017-08-26T08:05:55.000Z | 2019-05-13T22:29:44.000Z | '''Unit test package for latexipy.'''
| 19 | 37 | 0.684211 |
ae340b92aadfe682e4f7ba5a3b3a05872dd322d2 | 192 | py | Python | pyravendb/data/counters.py | CDuPlooy/ravendb-python-client | dbe51ee8eea166e0d9e60897ab480dd9a693366b | [
"MIT"
] | 19 | 2019-02-16T14:39:38.000Z | 2022-03-23T12:27:00.000Z | pyravendb/data/counters.py | CDuPlooy/ravendb-python-client | dbe51ee8eea166e0d9e60897ab480dd9a693366b | [
"MIT"
] | 24 | 2018-10-21T07:31:21.000Z | 2022-03-27T17:27:29.000Z | pyravendb/data/counters.py | CDuPlooy/ravendb-python-client | dbe51ee8eea166e0d9e60897ab480dd9a693366b | [
"MIT"
] | 14 | 2018-08-14T07:58:46.000Z | 2022-01-05T12:20:08.000Z | from enum import Enum
| 16 | 33 | 0.625 |
ae3425a0e350725139bf2c51d7938fab7269b9d6 | 516 | py | Python | src/lib/spaces/orientedplane.py | Wombatlord/PygamePong | d56b1529fe095e6a30b27b6039d9d52105ad900d | [
"MIT"
] | null | null | null | src/lib/spaces/orientedplane.py | Wombatlord/PygamePong | d56b1529fe095e6a30b27b6039d9d52105ad900d | [
"MIT"
] | 2 | 2021-02-19T05:05:43.000Z | 2021-02-20T02:16:53.000Z | src/lib/spaces/orientedplane.py | Wombatlord/PygamePong | d56b1529fe095e6a30b27b6039d9d52105ad900d | [
"MIT"
] | 1 | 2020-08-13T10:14:46.000Z | 2020-08-13T10:14:46.000Z | from src.lib.spaces.vector import Vector
| 30.352941 | 70 | 0.660853 |
ae344aa9b51e47e1afbcdf4afc821fdaead42258 | 715 | py | Python | core/migrations/0003_alter_carro_chassi_alter_carro_montadora.py | montalvas/django05 | 199f2ba1c757d899a78f8fc40742081bc74a4187 | [
"MIT"
] | null | null | null | core/migrations/0003_alter_carro_chassi_alter_carro_montadora.py | montalvas/django05 | 199f2ba1c757d899a78f8fc40742081bc74a4187 | [
"MIT"
] | null | null | null | core/migrations/0003_alter_carro_chassi_alter_carro_montadora.py | montalvas/django05 | 199f2ba1c757d899a78f8fc40742081bc74a4187 | [
"MIT"
] | null | null | null | # Generated by Django 4.0.2 on 2022-02-02 19:01
import core.models
from django.db import migrations, models
import django.db.models.deletion
| 27.5 | 117 | 0.633566 |
ae371956e205306769109642792d7cbf72cc52de | 156 | py | Python | snake/__init__.py | lparolari/snake | ceaaec051584be768c9541fb106234e6de2b4900 | [
"MIT"
] | 1 | 2020-11-02T11:04:49.000Z | 2020-11-02T11:04:49.000Z | snake/__init__.py | lparolari/snake | ceaaec051584be768c9541fb106234e6de2b4900 | [
"MIT"
] | 112 | 2019-09-24T20:08:23.000Z | 2021-02-08T00:36:07.000Z | snake/__init__.py | lparolari/snake | ceaaec051584be768c9541fb106234e6de2b4900 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Top-level package for snake."""
__author__ = """Luca Parolari"""
__email__ = 'luca.parolari23@gmail.com'
__version__ = '0.2.2'
| 19.5 | 39 | 0.641026 |
ae371c01a5249a7ea65891e859df84f39ceed04c | 1,357 | py | Python | UI for prediction/prediction_file.py | berfin-t/HeartAttackPrediction | a9acbd0356f3c3e4100b1964862242f6afe7da3b | [
"Apache-2.0"
] | null | null | null | UI for prediction/prediction_file.py | berfin-t/HeartAttackPrediction | a9acbd0356f3c3e4100b1964862242f6afe7da3b | [
"Apache-2.0"
] | null | null | null | UI for prediction/prediction_file.py | berfin-t/HeartAttackPrediction | a9acbd0356f3c3e4100b1964862242f6afe7da3b | [
"Apache-2.0"
] | null | null | null | import pickle
import os
import sys
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
import warnings
warnings.filterwarnings("ignore", message="Reloaded modules: <module_name>")
if __name__=='__main__':
train()
| 26.096154 | 95 | 0.664702 |
ae386b890d023c6368568c73f1d37d4dc2112c5f | 1,432 | py | Python | bot/models.py | xammi/nash_dom_bot | 9d5dfc7e0120d56c95e020e7e20505b973a5d402 | [
"MIT"
] | null | null | null | bot/models.py | xammi/nash_dom_bot | 9d5dfc7e0120d56c95e020e7e20505b973a5d402 | [
"MIT"
] | null | null | null | bot/models.py | xammi/nash_dom_bot | 9d5dfc7e0120d56c95e020e7e20505b973a5d402 | [
"MIT"
] | null | null | null | from django.db import models
from django.db.models import CASCADE
| 31.822222 | 87 | 0.703212 |
ae3968619345b1a7bf80058788bc082425067214 | 3,592 | py | Python | tf_ops/genCompileScript.py | chenzhutian/MCCNN | e28ca4a2deeecbfd1c8939ca666fcc010554fcbb | [
"MIT"
] | 90 | 2018-07-05T13:43:43.000Z | 2022-01-21T08:23:06.000Z | tf_ops/genCompileScript.py | DylanWusee/MCCNN | 13c2afb81aa231779b2be564ae31931b1d82e3fa | [
"MIT"
] | 9 | 2018-11-08T14:22:59.000Z | 2022-03-13T08:35:15.000Z | tf_ops/genCompileScript.py | DylanWusee/MCCNN | 13c2afb81aa231779b2be564ae31931b1d82e3fa | [
"MIT"
] | 12 | 2018-11-09T09:31:46.000Z | 2021-06-21T01:23:11.000Z | '''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
\file genCompileScript.py
\brief Python script to generate the compile script for unix systems.
\copyright Copyright (c) 2018 Visual Computing group of Ulm University,
Germany. See the LICENSE file at the top-level directory of
this distribution.
\author pedro hermosilla (pedro-1.hermosilla-casajus@uni-ulm.de)
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
import argparse
import tensorflow as tf
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Generate the compile script for the MCCNN operations.')
parser.add_argument('--cudaFolder', required=True, help='Path to the CUDA folder')
parser.add_argument('--MLPSize', default=8, type=int, help='Size of the MLPs (default 8)')
parser.add_argument('--debugInfo', action='store_true', help='Print debug information during execution (default: False)')
args = parser.parse_args()
debugString = " -DPRINT_CONV_INFO" if args.debugInfo else ""
with open("compile.sh", "w") as myCompileScript:
myCompileScript.write(args.cudaFolder+"/bin/nvcc -DBLOCK_MLP_SIZE="+str(args.MLPSize)+debugString+" -std=c++11 aabb_gpu.cu -o aabb_gpu.cu.o -c -O2 -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC\n")
myCompileScript.write(args.cudaFolder+"/bin/nvcc -DBLOCK_MLP_SIZE="+str(args.MLPSize)+debugString+" -std=c++11 sort_gpu.cu -o sort_gpu.cu.o -c -O2 -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC\n")
myCompileScript.write(args.cudaFolder+"/bin/nvcc -DBLOCK_MLP_SIZE="+str(args.MLPSize)+debugString+" -std=c++11 find_neighbors.cu -o find_neighbors.cu.o -c -O2 -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC\n")
myCompileScript.write(args.cudaFolder+"/bin/nvcc -DBLOCK_MLP_SIZE="+str(args.MLPSize)+debugString+" -std=c++11 compute_pdf.cu -o compute_pdf.cu.o -c -O2 -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC\n")
myCompileScript.write(args.cudaFolder+"/bin/nvcc -DBLOCK_MLP_SIZE="+str(args.MLPSize)+debugString+" -std=c++11 poisson_sampling.cu -o poisson_sampling.cu.o -c -O2 -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC\n")
myCompileScript.write(args.cudaFolder+"/bin/nvcc -DBLOCK_MLP_SIZE="+str(args.MLPSize)+debugString+" -std=c++11 spatial_conv.cu -o spatial_conv.cu.o -c -O2 -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC\n")
tensorflowInclude = tf.sysconfig.get_include()
tensorflowLib = tf.sysconfig.get_lib()
myCompileScript.write("g++ -std=c++11 -DBLOCK_MLP_SIZE="+str(args.MLPSize)+debugString+" spatial_conv.cc poisson_sampling.cc compute_pdf.cc "\
"find_neighbors.cc sort_gpu.cc aabb_gpu.cc spatial_conv.cu.o poisson_sampling.cu.o compute_pdf.cu.o "\
"find_neighbors.cu.o sort_gpu.cu.o aabb_gpu.cu.o -o MCConv.so -shared -fPIC -I"+tensorflowInclude+" -I"+tensorflowInclude+"/external/nsync/public "\
"-I"+args.cudaFolder+"/include -lcudart -L "+args.cudaFolder+"/lib64/ -L"+tensorflowLib+" -ltensorflow_framework -O2 -D_GLIBCXX_USE_CXX11_ABI=0\n")
with open("MCConvModuleSrc", "r") as mySrcPyScript:
with open("MCConvModule.py", "w") as myDestPyScript:
for line in mySrcPyScript:
myDestPyScript.write(line)
myDestPyScript.write("\n")
myDestPyScript.write("\n")
myDestPyScript.write("def get_block_size():\n")
myDestPyScript.write(" return "+str(args.MLPSize)+"\n")
myDestPyScript.write("\n")
| 71.84 | 214 | 0.65618 |
ae39d5bb797b8ed6a1c3f37606a273b2c5c79dbb | 8,326 | py | Python | tests/test_optimization.py | davidusb-geek/emhass | 5d6a5ad45c26b819c6bc1cb0e8943940d7fc8f17 | [
"MIT"
] | 17 | 2021-09-12T22:32:09.000Z | 2022-03-17T17:45:29.000Z | tests/test_optimization.py | davidusb-geek/emhass | 5d6a5ad45c26b819c6bc1cb0e8943940d7fc8f17 | [
"MIT"
] | 1 | 2021-12-22T21:10:04.000Z | 2021-12-22T21:10:04.000Z | tests/test_optimization.py | davidusb-geek/emhass | 5d6a5ad45c26b819c6bc1cb0e8943940d7fc8f17 | [
"MIT"
] | 2 | 2021-11-03T10:29:05.000Z | 2021-11-19T12:08:24.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import unittest
import pandas as pd
import numpy as np
import pathlib
import pickle
from datetime import datetime, timezone
from emhass.retrieve_hass import retrieve_hass
from emhass.optimization import optimization
from emhass.forecast import forecast
from emhass.utils import get_root, get_yaml_parse, get_days_list, get_logger
# the root folder
root = str(get_root(__file__, num_parent=2))
# create logger
logger, ch = get_logger(__name__, root, save_to_file=False)
if __name__ == '__main__':
unittest.main()
ch.close()
logger.removeHandler(ch)
| 60.773723 | 138 | 0.700336 |
ae3a5afb8c080bcd642ec9b461aca11065494bcb | 4,555 | py | Python | experiments/counters.py | TenantBase/django-experiments | b75cf11159da4f4c75d9798dff3ddfd1ca454261 | [
"MIT"
] | null | null | null | experiments/counters.py | TenantBase/django-experiments | b75cf11159da4f4c75d9798dff3ddfd1ca454261 | [
"MIT"
] | 1 | 2019-05-29T00:00:15.000Z | 2019-05-29T00:00:15.000Z | experiments/counters.py | TenantBase/django-experiments | b75cf11159da4f4c75d9798dff3ddfd1ca454261 | [
"MIT"
] | null | null | null | from django.conf import settings
from django.utils.functional import cached_property
import redis
from redis.sentinel import Sentinel
from redis.exceptions import ConnectionError, ResponseError
COUNTER_CACHE_KEY = 'experiments:participants:%s'
COUNTER_FREQ_CACHE_KEY = 'experiments:freq:%s'
| 38.931624 | 130 | 0.639517 |
ae3bab1dfe4bf59579d4fb381bd53583200e99c5 | 447 | py | Python | irl_gym/envs/env_utils.py | uidilr/irl_gym | 3352cb9189f3d5076a116db6678207e186ff4fc6 | [
"MIT"
] | 1 | 2020-12-29T11:04:56.000Z | 2020-12-29T11:04:56.000Z | irl_gym/envs/env_utils.py | uidilr/irl_gym | 3352cb9189f3d5076a116db6678207e186ff4fc6 | [
"MIT"
] | null | null | null | irl_gym/envs/env_utils.py | uidilr/irl_gym | 3352cb9189f3d5076a116db6678207e186ff4fc6 | [
"MIT"
] | null | null | null | import os
ENV_ASSET_DIR = os.path.join(os.path.dirname(__file__), 'assets')
| 20.318182 | 65 | 0.568233 |
ae3bbabd4550be0f4670cf95d502fca83a0b0369 | 1,483 | py | Python | example/resnet/convert_resnet_pytorch.py | leonskim/webdnn | f97c798c9a659fe953f9dc8c8537b8917e4be7a2 | [
"MIT"
] | 1 | 2021-04-09T15:55:35.000Z | 2021-04-09T15:55:35.000Z | example/resnet/convert_resnet_pytorch.py | leonskim/webdnn | f97c798c9a659fe953f9dc8c8537b8917e4be7a2 | [
"MIT"
] | null | null | null | example/resnet/convert_resnet_pytorch.py | leonskim/webdnn | f97c798c9a659fe953f9dc8c8537b8917e4be7a2 | [
"MIT"
] | null | null | null | """
Example of converting ResNet-50 PyTorch model
"""
import argparse
import os
import torch, torchvision
import numpy as np
from webdnn.backend import generate_descriptor, backend_names
from webdnn.frontend.pytorch import PyTorchConverter
from webdnn.util import console
if __name__ == "__main__":
main()
| 28.519231 | 102 | 0.697235 |
ae3bbe9c1d9612593b0ae91960ae41d57309e29d | 436 | py | Python | test/test_getter_setter.py | msztolcman/fileperms | 8a99c5bb981265c18228b58cf44419c032d8d895 | [
"MIT"
] | null | null | null | test/test_getter_setter.py | msztolcman/fileperms | 8a99c5bb981265c18228b58cf44419c032d8d895 | [
"MIT"
] | null | null | null | test/test_getter_setter.py | msztolcman/fileperms | 8a99c5bb981265c18228b58cf44419c032d8d895 | [
"MIT"
] | null | null | null | from fileperms import Permission, Permissions
| 24.222222 | 45 | 0.577982 |
ae3c1e0a31bb35705167a39456828a45fbe7fc2b | 157 | py | Python | lessons/Test_and_function_programming_in_Python/project/tests/test_sum_benchmark.py | johnklee/oo_dp_lesson | 06814a88b86b38435e0ed8f305ce9e50c1aac1f6 | [
"MIT"
] | null | null | null | lessons/Test_and_function_programming_in_Python/project/tests/test_sum_benchmark.py | johnklee/oo_dp_lesson | 06814a88b86b38435e0ed8f305ce9e50c1aac1f6 | [
"MIT"
] | 7 | 2021-06-07T03:52:37.000Z | 2022-03-14T11:07:31.000Z | lessons/Test_and_function_programming_in_Python/project/tests/test_sum_benchmark.py | johnklee/oo_dp_lesson | 06814a88b86b38435e0ed8f305ce9e50c1aac1f6 | [
"MIT"
] | null | null | null | from my_sum import sum
| 22.428571 | 43 | 0.751592 |
ae3d3e28bf5a8518622d4a9ff1865444e5e3583f | 1,889 | py | Python | Project_1-Alien_Invasion/settings.py | Vandeilsonln/Python-Crash-Course | 39b4f421504618f947672304a8e97edf7bc7f13d | [
"MIT"
] | null | null | null | Project_1-Alien_Invasion/settings.py | Vandeilsonln/Python-Crash-Course | 39b4f421504618f947672304a8e97edf7bc7f13d | [
"MIT"
] | null | null | null | Project_1-Alien_Invasion/settings.py | Vandeilsonln/Python-Crash-Course | 39b4f421504618f947672304a8e97edf7bc7f13d | [
"MIT"
] | null | null | null | import pygame | 32.568966 | 105 | 0.643727 |
ae3d486c599d0d195d2c989a9d3e670a0c3383e1 | 431 | py | Python | connect.py | XtremeCurling/nextbus2pg | 4a7b32ecbc5232c3a7e4a81152aea87b2c80d517 | [
"MIT"
] | null | null | null | connect.py | XtremeCurling/nextbus2pg | 4a7b32ecbc5232c3a7e4a81152aea87b2c80d517 | [
"MIT"
] | 4 | 2018-04-03T21:12:24.000Z | 2018-05-13T22:53:43.000Z | connect.py | XtremeCurling/nextbus2pg | 4a7b32ecbc5232c3a7e4a81152aea87b2c80d517 | [
"MIT"
] | null | null | null | import urllib
import psycopg2
import psycopg2.extras
# Connect to a postgres database. Tweak some things.
| 28.733333 | 78 | 0.74942 |
ae3e733e97f3939f4c5a55b9fab69488409a8357 | 1,153 | py | Python | app/main/views/letter_jobs.py | karlchillmaid/notifications-admin | 9ef6da4ef9e2fa97b7debb4b573cb035a5cb8880 | [
"MIT"
] | null | null | null | app/main/views/letter_jobs.py | karlchillmaid/notifications-admin | 9ef6da4ef9e2fa97b7debb4b573cb035a5cb8880 | [
"MIT"
] | null | null | null | app/main/views/letter_jobs.py | karlchillmaid/notifications-admin | 9ef6da4ef9e2fa97b7debb4b573cb035a5cb8880 | [
"MIT"
] | null | null | null | from flask import redirect, render_template, request, session, url_for
from flask_login import login_required
from app import letter_jobs_client
from app.main import main
from app.utils import user_is_platform_admin
| 31.162162 | 100 | 0.666956 |
ae3ea51dd07df4bb77e861ac50689fed8f983f65 | 909 | py | Python | dev-test/1_euler/srayan/euler-2.py | sgango/Y1-Project | 89205600552ede6f8da29231cfa52a3538ae8df4 | [
"BSD-2-Clause"
] | 2 | 2020-09-23T13:27:26.000Z | 2021-09-14T14:15:30.000Z | dev-test/1_euler/srayan/euler-2.py | sgango/Y1-Project | 89205600552ede6f8da29231cfa52a3538ae8df4 | [
"BSD-2-Clause"
] | 1 | 2020-06-18T14:02:59.000Z | 2020-06-18T14:02:59.000Z | dev-test/1_euler/srayan/euler-2.py | sgango/Y1-Project | 89205600552ede6f8da29231cfa52a3538ae8df4 | [
"BSD-2-Clause"
] | null | null | null | """
Adapting Euler method to handle 2nd order ODEs
Srayan Gangopadhyay
2020-05-16
"""
import numpy as np
import matplotlib.pyplot as plt
"""
y' = dy/dx
For a function of form y'' = f(x, y, y')
Define y' = v so y'' = v'
"""
# RHS of y'' = f(x, y, y'). The original definition was not preserved in
# this extract; y'' = -y (simple harmonic motion) serves as a placeholder.
def func(y, v, x):
    return -y

# PARAMETERS
y0 = 1 # y(x=0) =
v0 = -2 # y'(x=0) =
delta = 0.01 # step size
end = 4 # x-value to stop integration
steps = int(end/delta) + 1 # number of steps
x = np.linspace(0, end, steps) # array of x-values (discrete time)
y = np.zeros(steps) # empty array for solution
v = np.zeros(steps)
y[0] = y0 # inserting initial value
v[0] = v0
# INTEGRATING
for i in range(1, steps):
v[i] = v[i-1] + (delta*func(y[i-1], v[i-1], x[i-1]))
y[i] = y[i-1] + (delta*v[i-1])
plt.plot(x, y, label='Approx. soln (Euler)')
plt.plot(x, y, 'o')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
| 21.139535 | 67 | 0.59516 |
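The docstring in the script above describes the standard reduction of a second-order ODE to a first-order system (define v = y′, so y″ = v′ = f(x, y, v)). A self-contained sketch of that reduction with a generic forward-Euler stepper — the right-hand side used here (y″ = −y, simple harmonic motion) is an illustrative assumption, not the function from the original script:

```python
def euler_second_order(f, y0, v0, x_end, delta):
    """Integrate y'' = f(x, y, y') with forward Euler, returning (xs, ys)."""
    xs, ys, vs = [0.0], [y0], [v0]
    while xs[-1] < x_end:
        x, y, v = xs[-1], ys[-1], vs[-1]
        ys.append(y + delta * v)           # y_{n+1} = y_n + h * y'_n
        vs.append(v + delta * f(x, y, v))  # v_{n+1} = v_n + h * f(x_n, y_n, v_n)
        xs.append(x + delta)
    return xs, ys

# Example RHS (assumption): simple harmonic motion y'' = -y,
# with y(0) = 1, y'(0) = 0, whose exact solution is cos(x).
xs, ys = euler_second_order(lambda x, y, v: -y, 1.0, 0.0, 1.0, 0.001)
```

With this small step size the Euler approximation at x = 1 stays close to cos(1) ≈ 0.5403.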
ae3ecabbefa60b62d05ffb5c99a5c524d7526637 | 73 | py | Python | task3/mask_r_cnn/mark_cvppp.py | HenryLiangzy/COMP9517_Group | 83be7304bee47d52781ea71f06838cd148dbd0bd | [
"Apache-2.0"
] | null | null | null | task3/mask_r_cnn/mark_cvppp.py | HenryLiangzy/COMP9517_Group | 83be7304bee47d52781ea71f06838cd148dbd0bd | [
"Apache-2.0"
] | null | null | null | task3/mask_r_cnn/mark_cvppp.py | HenryLiangzy/COMP9517_Group | 83be7304bee47d52781ea71f06838cd148dbd0bd | [
"Apache-2.0"
] | null | null | null | import cv2
import os
import glob
import numpy as np
| 9.125 | 18 | 0.712329 |
ae3f83f14ff4a0be7289a02711f0b034c72507db | 3,022 | py | Python | dss_sm_so/tests/test_backends.py | MobileCloudNetworking/dssaas | 87b6f7d60ecc397a88326a955b2ddfd3d73205d1 | [
"Apache-2.0"
] | null | null | null | dss_sm_so/tests/test_backends.py | MobileCloudNetworking/dssaas | 87b6f7d60ecc397a88326a955b2ddfd3d73205d1 | [
"Apache-2.0"
] | null | null | null | dss_sm_so/tests/test_backends.py | MobileCloudNetworking/dssaas | 87b6f7d60ecc397a88326a955b2ddfd3d73205d1 | [
"Apache-2.0"
] | 1 | 2018-10-09T06:28:36.000Z | 2018-10-09T06:28:36.000Z | __author__ = 'florian'
import unittest
from occi.backend import ActionBackend, KindBackend
from sm.sm.backends import ServiceBackend
from mock import patch
from sm.sm.so_manager import SOManager
from occi.core_model import Kind
from occi.core_model import Resource
| 38.74359 | 101 | 0.697551 |
ae41aa44d40af6f5bd17f7b224b76b24b0631ba4 | 4,738 | py | Python | demo/q0w_demo_analyzer/core/fonts.py | YourNorth/rezak-summarizator | 3ab2f4bf1044ea9654b4084a39030987e4b8bfe8 | [
"MIT"
] | 3 | 2020-03-28T16:48:10.000Z | 2020-12-01T17:18:55.000Z | demo/q0w_demo_analyzer/core/fonts.py | YourNorth/rezak-summarizator | 3ab2f4bf1044ea9654b4084a39030987e4b8bfe8 | [
"MIT"
] | 31 | 2020-03-20T17:53:08.000Z | 2021-03-10T11:48:11.000Z | demo/q0w_demo_analyzer/core/fonts.py | YourNorth/rezak-summarizator | 3ab2f4bf1044ea9654b4084a39030987e4b8bfe8 | [
"MIT"
] | 1 | 2020-03-20T05:01:16.000Z | 2020-03-20T05:01:16.000Z | TESTPHRASE = 'Lorem ipsum'
# ANSI COLORS
# ====== FAMILY ===== #
end = '\33[0m'
bold = '\33[1m'
italic = '\33[3m'
underline = '\33[4m'
blink = '\33[5m'
blink2 = '\33[6m'
selected = '\33[7m'
# ====== COLOR ====== #
# greyscale
black = '\33[97m'
grey = '\33[90m'
grey2 = '\33[37m'
white = '\33[30m'
# less saturation
red = '\33[91m'
yellow = '\33[33m'
green = '\33[32m'
beige = '\33[36m'
blue = '\33[94m'
violet = '\33[35m'
# more saturation
red2 = '\33[31m'
yellow2 = '\33[93m'
green2 = '\33[92m'
beige2 = '\33[96m'
blue2 = '\33[34m'
violet2 = '\33[95m'
# === BACKGROUND ==== #
# greyscale
blackbg = '\33[107m'
greybg = '\33[100m'
greybg2 = '\33[47m'
whitebg = '\33[40m'
# less saturation
redbg = '\33[101m'
yellowbg = '\33[43m'
greenbg = '\33[42m'
beigebg = '\33[46m'
bluebg = '\33[104m'
violetbg = '\33[45m'
# more saturation
redbg2 = '\33[41m'
yellowbg2 = '\33[103m'
greenbg2 = '\33[102m'
beigebg2 = '\33[106m'
bluebg2 = '\33[44m'
violetbg2 = '\33[105m'
backs = [whitebg, greybg, greybg2, blackbg, redbg, redbg2, yellowbg, yellowbg2, greenbg, greenbg2, beigebg, beigebg2,
bluebg,
bluebg2, violetbg, violetbg2]
simples = [white, grey, grey2, black, red, red2, yellow, yellow2, green, green2, beige, beige2, blue,
blue2, violet, violet2]
# TODO: lists => dict with pairs; bg, sm => invert value (bg <=> sm)
if __name__ == "__main__":
if blackbg in backs:
print(bg(simple_color=red2) + black + TESTPHRASE + end)
print(sm(back_color=beigebg) + TESTPHRASE + end)
print(paint(value=TESTPHRASE, content_color=red2))
print(enhance(color=violet) + TESTPHRASE + end)
print(enhance(color=whitebg) + TESTPHRASE + end)
family()
color()
background()
| 31.798658 | 117 | 0.58358 |
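The `__main__` block above calls helpers (`paint`, `bg`, `sm`, `enhance`, `family`, `color`, `background`) whose definitions were not preserved in this extract; only the call sites are visible. A minimal sketch of one such helper under that assumption, reusing the same ANSI SGR constants — the body of `paint` here is a hypothetical reconstruction, not the original:

```python
end = '\33[0m'    # SGR reset
red2 = '\33[31m'  # saturated red foreground

def paint(value, content_color=end):
    # Hypothetical reconstruction: wrap a string in an ANSI color code and
    # reset afterwards, matching the keyword signature seen at the call site.
    return '%s%s%s' % (content_color, value, end)

colored = paint(value='Lorem ipsum', content_color=red2)
```

Printing `colored` on an ANSI-capable terminal renders the phrase in red and then restores the default style.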
ae41b46656d025e136cbbd3d68dd912515307e97 | 1,370 | py | Python | setup.py | eduk8s/prototype-cli | 74443dafb08e5b65f48ea3b9a7a03a803f79437a | [
"Apache-2.0"
] | 1 | 2019-12-30T02:52:56.000Z | 2019-12-30T02:52:56.000Z | setup.py | eduk8s/prototype-cli | 74443dafb08e5b65f48ea3b9a7a03a803f79437a | [
"Apache-2.0"
] | null | null | null | setup.py | eduk8s/prototype-cli | 74443dafb08e5b65f48ea3b9a7a03a803f79437a | [
"Apache-2.0"
] | null | null | null | import sys
import os
from setuptools import setup
long_description = open("README.rst").read()
classifiers = [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
]
setup_kwargs = dict(
name="eduk8s-cli",
version="0.1.0",
description="Command line client for eduk8s.",
long_description=long_description,
url="https://github.com/eduk8s/eduk8s-cli",
author="Graham Dumpleton",
author_email="Graham.Dumpleton@gmail.com",
license="Apache License, Version 2.0",
python_requires=">=3.7.0",
classifiers=classifiers,
keywords="eduk8s kubernetes",
packages=["eduk8s", "eduk8s.cli", "eduk8s.kube",],
package_dir={"eduk8s": "src/eduk8s"},
package_data={"eduks.crds": ["session.yaml", "workshop.yaml"],},
entry_points={
"console_scripts": ["eduk8s = eduk8s.cli:main"],
"eduk8s_cli_plugins": [
"workshop = eduk8s.cli.workshop",
"session = eduk8s.cli.session",
"install = eduk8s.cli.install",
],
},
install_requires=[
"click",
"requests",
"rstr",
"PyYaml",
"kopf==0.23.2",
"openshift==0.10.1",
],
)
setup(**setup_kwargs)
| 27.4 | 68 | 0.607299 |
ae444d42c02b963c853a9f963e814c548f5a9dae | 1,652 | py | Python | torchsupport/training/score_supervised.py | bobelly/torchsupport | 5aa0a04f20c193ec99310f5d6a3375d2e95e740d | [
"MIT"
] | 18 | 2019-05-02T16:32:15.000Z | 2021-04-16T09:33:54.000Z | torchsupport/training/score_supervised.py | bobelly/torchsupport | 5aa0a04f20c193ec99310f5d6a3375d2e95e740d | [
"MIT"
] | 5 | 2019-10-14T13:46:49.000Z | 2021-06-08T11:48:34.000Z | torchsupport/training/score_supervised.py | bobelly/torchsupport | 5aa0a04f20c193ec99310f5d6a3375d2e95e740d | [
"MIT"
] | 12 | 2019-05-12T21:34:24.000Z | 2021-07-15T14:14:16.000Z | import random
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as func
from torchsupport.data.io import netwrite, to_device, make_differentiable
from torchsupport.training.energy import DenoisingScoreTraining
from torchsupport.training.samplers import AnnealedLangevin
| 30.592593 | 73 | 0.70339 |
ae44d99a916d3dc0c3e4c682ab78f7a52d9f0c8b | 332 | py | Python | src/modeling/models/timer.py | NovaSBE-DSKC/predict-campaing-sucess-rate | fec339aee7c883f55d64130eb69e490f765ee27d | [
"MIT"
] | null | null | null | src/modeling/models/timer.py | NovaSBE-DSKC/predict-campaing-sucess-rate | fec339aee7c883f55d64130eb69e490f765ee27d | [
"MIT"
] | null | null | null | src/modeling/models/timer.py | NovaSBE-DSKC/predict-campaing-sucess-rate | fec339aee7c883f55d64130eb69e490f765ee27d | [
"MIT"
] | null | null | null | import time
| 22.133333 | 70 | 0.581325 |
ae490aaf317fe81f8776bee9c9b05dfe568d8efd | 3,538 | py | Python | tests/system/workspace_factory.py | davetcoleman/catkin_tools | 3dd28ffab0e48775b14c6bab5a7b8b974cdd126c | [
"Apache-2.0"
] | null | null | null | tests/system/workspace_factory.py | davetcoleman/catkin_tools | 3dd28ffab0e48775b14c6bab5a7b8b974cdd126c | [
"Apache-2.0"
] | null | null | null | tests/system/workspace_factory.py | davetcoleman/catkin_tools | 3dd28ffab0e48775b14c6bab5a7b8b974cdd126c | [
"Apache-2.0"
] | null | null | null | import os
import shutil
from ..utils import temporary_directory
| 37.242105 | 113 | 0.623233 |
ae4a47fd92f1a12f864fae2ce0feac13263ca7ac | 1,019 | py | Python | setup.py | thermokarst-forks/q2-plugin-template | 0583ed514a7476ae75fd7a052043e0aec2faecb9 | [
"BSD-3-Clause"
] | 5 | 2021-05-10T14:23:11.000Z | 2022-03-04T14:37:15.000Z | setup.py | thermokarst-forks/q2-plugin-template | 0583ed514a7476ae75fd7a052043e0aec2faecb9 | [
"BSD-3-Clause"
] | 2 | 2021-05-12T15:08:31.000Z | 2021-07-13T13:57:24.000Z | setup.py | thermokarst-forks/q2-plugin-template | 0583ed514a7476ae75fd7a052043e0aec2faecb9 | [
"BSD-3-Clause"
] | 3 | 2021-05-12T15:02:12.000Z | 2022-02-09T13:33:19.000Z | # ----------------------------------------------------------------------------
# Copyright (c) 2021, QIIME 2 development team.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file LICENSE, distributed with this software.
# ----------------------------------------------------------------------------
from setuptools import find_packages, setup
import versioneer
setup(
name='q2-plugin-name',
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
license='BSD-3-Clause',
packages=find_packages(),
author="Michal Ziemski",
author_email="ziemski.michal@gmail.com",
description=("This is a template for building a new QIIME 2 plugin."),
url="https://github.com/bokulich-lab/q2-plugin-template",
entry_points={
'qiime2.plugins':
['q2-plugin-name=q2_plugin_name.plugin_setup:plugin']
},
package_data={
'q2_plugin_name': [
'citations.bib'
],
},
zip_safe=False,
)
| 29.970588 | 78 | 0.570167 |
ae4a834438c1be65dcec72110a53d1ee4b52eb26 | 7,722 | py | Python | zero/recommendation_algorithm.py | Akulen/mangaki-zero | 5eb2de06b8684ed948b8b903e9f567f06c35e3ef | [
"MIT"
] | null | null | null | zero/recommendation_algorithm.py | Akulen/mangaki-zero | 5eb2de06b8684ed948b8b903e9f567f06c35e3ef | [
"MIT"
] | null | null | null | zero/recommendation_algorithm.py | Akulen/mangaki-zero | 5eb2de06b8684ed948b8b903e9f567f06c35e3ef | [
"MIT"
] | null | null | null | from zero.side import SideInformation
from zero.chrono import Chrono
from collections import defaultdict
from itertools import product
import numpy as np
import pickle
import os.path
import logging
def compute_dcg(self, y_pred, y_true):
'''
Computes the discounted cumulative gain as stated in:
https://gist.github.com/bwhite/3726239
'''
ranked_gains = self.get_ranked_gains(y_pred, y_true)
return self.dcg_at_k(ranked_gains, 100)
def compute_ndcg(self, y_pred, y_true):
ranked_gains = self.get_ranked_gains(y_pred, y_true)
return self.ndcg_at_k(ranked_gains, 100)
def dcg_at_k(self, r, k):
r = np.asfarray(r)[:k]
if r.size:
return np.sum(np.subtract(np.power(2, r), 1) /
np.log2(np.arange(2, r.size + 2)))
return 0.
def ndcg_at_k(self, r, k):
idcg = self.dcg_at_k(sorted(r, reverse=True), k)
if not idcg:
return 0.
return self.dcg_at_k(r, k) / idcg
def compute_metrics(self):
if self.X_train is not None:
y_train_pred = self.predict(self.X_train)
train_rmse = self.compute_rmse(self.y_train, y_train_pred)
self.metrics['train']['rmse'].append(train_rmse)
logging.warning('Train RMSE=%f', train_rmse)
if self.X_test is not None:
y_test_pred = self.predict(self.X_test)
test_rmse = self.compute_rmse(self.y_test, y_test_pred)
self.metrics['test']['rmse'].append(test_rmse)
logging.warning('Test RMSE=%f', test_rmse)
def __str__(self):
return '[%s]' % self.get_shortname().upper()
def register_algorithm(algorithm_name, default_kwargs=None):
    if default_kwargs is None:
        default_kwargs = {}

    def decorator(cls):
        # Reconstructed stub: the inner decorator body was not preserved in
        # this extract; the original presumably records `cls` in a registry
        # under `algorithm_name`. Returning the class keeps decorated
        # definitions usable.
        return cls

    return decorator
| 35.916279 | 91 | 0.615644 |
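The ranking methods above (`compute_dcg`, `dcg_at_k`, `ndcg_at_k`) are class methods whose enclosing class was not preserved in this extract, and they rely on NumPy. The metric itself — sum of (2^rel − 1) / log2(rank + 1) over the top-k graded relevances — can be sketched as stand-alone functions without NumPy; this illustrates the same formula, not the original class:

```python
import math

def dcg_at_k(gains, k):
    """Discounted cumulative gain over the first k graded relevances."""
    return sum((2 ** g - 1) / math.log2(i + 2) for i, g in enumerate(gains[:k]))

def ndcg_at_k(gains, k):
    """DCG normalized by the ideal (descending-sorted) ordering."""
    ideal = dcg_at_k(sorted(gains, reverse=True), k)
    return dcg_at_k(gains, k) / ideal if ideal else 0.0
```

A perfectly ordered ranking scores 1.0; pushing the relevant item down the list lowers the score.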
ae4aea0f2b66c03f8fc9b59889443427e5fe285c | 150,103 | py | Python | venv/Lib/site-packages/pyo/lib/_wxwidgets.py | mintzer/pupillometry-rf-back | cfa86fa984a49dce0123798f8de5b838c02e10d5 | [
"CC-BY-4.0"
] | null | null | null | venv/Lib/site-packages/pyo/lib/_wxwidgets.py | mintzer/pupillometry-rf-back | cfa86fa984a49dce0123798f8de5b838c02e10d5 | [
"CC-BY-4.0"
] | null | null | null | venv/Lib/site-packages/pyo/lib/_wxwidgets.py | mintzer/pupillometry-rf-back | cfa86fa984a49dce0123798f8de5b838c02e10d5 | [
"CC-BY-4.0"
] | null | null | null | from __future__ import division
from __future__ import print_function
from __future__ import absolute_import
"""
Copyright 2009-2015 Olivier Belanger
This file is part of pyo, a python module to help digital signal
processing script creation.
pyo is free software: you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.
pyo is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with pyo. If not, see <http://www.gnu.org/licenses/>.
"""
import wx, os, sys, math, time, unicodedata
import wx.stc as stc
from ._core import rescale
if "phoenix" in wx.version():
wx.GraphicsContext_Create = wx.GraphicsContext.Create
wx.EmptyBitmap = wx.Bitmap
wx.EmptyImage = wx.Image
wx.BitmapFromImage = wx.Bitmap
wx.Image_HSVValue = wx.Image.HSVValue
wx.Image_HSVtoRGB = wx.Image.HSVtoRGB
if sys.version_info[0] < 3:
unicode_t = unicode
else:
unicode_t = str
BACKGROUND_COLOUR = "#EBEBEB"
def interpFloat(t, v1, v2):
"interpolator for a single value; interprets t in [0-1] between v1 and v2"
return (v2 - v1) * t + v1
def tFromValue(value, v1, v2):
"returns a t (in range 0-1) given a value in the range v1 to v2"
if (v2 - v1) == 0:
return 1.0
else:
return float(value - v1) / (v2 - v1)
def clamp(v, minv, maxv):
"clamps a value within a range"
if v < minv:
v = minv
if v > maxv:
v = maxv
return v
POWOFTWO = {
2: 1,
4: 2,
8: 3,
16: 4,
32: 5,
64: 6,
128: 7,
256: 8,
512: 9,
1024: 10,
2048: 11,
4096: 12,
8192: 13,
16384: 14,
32768: 15,
65536: 16,
}
def powOfTwo(x):
"Return 2 raised to the power of x."
return 2 ** x
def powOfTwoToInt(x):
"Return the exponent of 2 correponding to the value x."
return POWOFTWO[x]
# TODO: key, command and slmap should be removed from the multislider widget.
# It should work in the same way as the ControlSlider widget.
# TODO: BACKGROUND_COLOUR hard-coded all over the place in this class.
######################################################################
### Control window for PyoObject
######################################################################
######################################################################
### View window for PyoTableObject
######################################################################
######################################################################
## View window for PyoMatrixObject
#####################################################################
######################################################################
## Spectrum Display
######################################################################
# TODO: Adjust the font size according to the size of the panel.
######################################################################
## Spectrum Display
######################################################################
######################################################################
## Grapher window for PyoTableObject control
######################################################################
OFF = 10
OFF2 = OFF * 2
RAD = 3
RAD2 = RAD * 2
AREA = RAD + 2
AREA2 = AREA * 2
def ensureNFD(unistr):
if sys.platform == "win32" or sys.platform.startswith("linux"):
encodings = [sys.getdefaultencoding(), sys.getfilesystemencoding(), "cp1252", "iso-8859-1", "utf-16"]
format = "NFC"
else:
encodings = [sys.getdefaultencoding(), sys.getfilesystemencoding(), "macroman", "iso-8859-1", "utf-16"]
format = "NFC"
decstr = unistr
if type(decstr) != unicode_t:
for encoding in encodings:
try:
decstr = decstr.decode(encoding)
break
except UnicodeDecodeError:
continue
except:
decstr = "UnableToDecodeString"
print("Unicode encoding not in a recognized format...")
break
if decstr == "UnableToDecodeString":
return unistr
else:
return unicodedata.normalize(format, decstr)
| 35.987293 | 120 | 0.525166 |
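The three helpers near the top of the widgets module above (`interpFloat`, `tFromValue`, `clamp`) compose into the usual range-remapping idiom: normalize into [0, 1] with `tFromValue`, clamp, then re-expand with `interpFloat`. A sketch of that composition — the name `remap` is chosen here for illustration and does not appear in the module:

```python
def interpFloat(t, v1, v2):
    "interpolator for a single value; interprets t in [0-1] between v1 and v2"
    return (v2 - v1) * t + v1

def tFromValue(value, v1, v2):
    "returns a t (in range 0-1) given a value in the range v1 to v2"
    if (v2 - v1) == 0:
        return 1.0
    return float(value - v1) / (v2 - v1)

def clamp(v, minv, maxv):
    "clamps a value within a range"
    return max(minv, min(v, maxv))

def remap(value, in_min, in_max, out_min, out_max):
    """Map value from [in_min, in_max] into [out_min, out_max], clamped."""
    t = clamp(tFromValue(value, in_min, in_max), 0.0, 1.0)
    return interpFloat(t, out_min, out_max)
```

This is the pattern a slider widget needs to turn a pixel position into a parameter value (and back) without ever leaving the valid range.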
ae4bd329b0a39f201a2f41d92b1c573029070350 | 5,382 | py | Python | napalm_yang/utils.py | ckishimo/napalm-yang | 8f2bd907bd3afcde3c2f8e985192de74748baf6c | [
"Apache-2.0"
] | 64 | 2016-10-20T15:47:18.000Z | 2021-11-11T11:57:32.000Z | napalm_yang/utils.py | ckishimo/napalm-yang | 8f2bd907bd3afcde3c2f8e985192de74748baf6c | [
"Apache-2.0"
] | 126 | 2016-10-05T10:36:14.000Z | 2019-05-15T08:43:23.000Z | napalm_yang/utils.py | ckishimo/napalm-yang | 8f2bd907bd3afcde3c2f8e985192de74748baf6c | [
"Apache-2.0"
] | 63 | 2016-11-07T15:23:08.000Z | 2021-09-22T14:41:16.000Z | from napalm_yang import base
def model_to_dict(model, mode="", show_defaults=False):
"""
Given a model, return a representation of the model in a dict.
    This is mostly useful to have a quick visual representation of the model.
Args:
model (PybindBase): Model to transform.
mode (string): Whether to print config, state or all elements ("" for all)
Returns:
dict: A dictionary representing the model.
Examples:
>>> config = napalm_yang.base.Root()
>>>
>>> # Adding models to the object
>>> config.add_model(napalm_yang.models.openconfig_interfaces())
>>> config.add_model(napalm_yang.models.openconfig_vlan())
>>> # Printing the model in a human readable format
>>> pretty_print(napalm_yang.utils.model_to_dict(config))
>>> {
>>> "openconfig-interfaces:interfaces [rw]": {
>>> "interface [rw]": {
>>> "config [rw]": {
>>> "description [rw]": "string",
>>> "enabled [rw]": "boolean",
>>> "mtu [rw]": "uint16",
>>> "name [rw]": "string",
>>> "type [rw]": "identityref"
>>> },
>>> "hold_time [rw]": {
>>> "config [rw]": {
>>> "down [rw]": "uint32",
>>> "up [rw]": "uint32"
(trimmed for clarity)
"""
if model._yang_type in ("container", "list"):
cls = model if model._yang_type in ("container",) else model._contained_class()
result = {}
for k, v in cls:
r = model_to_dict(v, mode=mode, show_defaults=show_defaults)
if r:
result[get_key(k, v, model._defining_module, show_defaults)] = r
return result
else:
if show_defaults:
if model._default is False:
if model._yang_type != "boolean":
# Unless the datatype is bool, when the _default attribute
# is False, it means there is not default value defined in
# the YANG model.
return None
return model._default
return model._yang_type if is_mode(model, mode) else None
def diff(f, s):
"""
Given two models, return the difference between them.
Args:
f (Pybindbase): First element.
s (Pybindbase): Second element.
Returns:
dict: A dictionary highlighting the differences.
Examples:
>>> diff = napalm_yang.utils.diff(candidate, running)
>>> pretty_print(diff)
>>> {
>>> "interfaces": {
>>> "interface": {
>>> "both": {
>>> "Port-Channel1": {
>>> "config": {
>>> "mtu": {
>>> "first": "0",
>>> "second": "9000"
>>> }
>>> }
>>> }
>>> },
>>> "first_only": [
>>> "Loopback0"
>>> ],
>>> "second_only": [
>>> "Loopback1"
>>> ]
>>> }
>>> }
>>> }
"""
if isinstance(f, base.Root) or f._yang_type in ("container", None):
result = _diff_root(f, s)
elif f._yang_type in ("list",):
result = _diff_list(f, s)
else:
result = {}
first = "{}".format(f)
second = "{}".format(s)
if first != second:
result = {"first": first, "second": second}
return result
| 30.40678 | 87 | 0.473987 |
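The `diff` docstring above shows the shape of the output: per-key differences under `both`, plus `first_only`/`second_only` key lists, with leaf mismatches reported as `{"first": ..., "second": ...}`. The same structure can be sketched for plain nested dicts — a simplification, since the real function walks pybind model objects rather than dicts:

```python
def dict_diff(first, second):
    """Recursive diff of two nested dicts in napalm-yang's output shape."""
    if isinstance(first, dict) and isinstance(second, dict):
        result = {}
        for key in set(first) & set(second):
            sub = dict_diff(first[key], second[key])
            if sub:
                result.setdefault('both', {})[key] = sub
        first_only = sorted(set(first) - set(second))
        second_only = sorted(set(second) - set(first))
        if first_only:
            result['first_only'] = first_only
        if second_only:
            result['second_only'] = second_only
        return result
    if first != second:
        return {'first': first, 'second': second}
    return {}
```

For example, diffing `{'a': {'x': 1, 'y': 2}, 'b': 1}` against `{'a': {'x': 1, 'y': 3}, 'c': 2}` reports the changed `y` under `both`, `b` as first-only, and `c` as second-only.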
ae4dfb5b9ba2ae94cfbe34ece6b1afd93884dd8b | 2,430 | py | Python | config.py | kenykau/reinforcement-forex | cac8c59ae7f5593bb7d9bb47e85f4ba2435a7a33 | [
"MIT"
] | null | null | null | config.py | kenykau/reinforcement-forex | cac8c59ae7f5593bb7d9bb47e85f4ba2435a7a33 | [
"MIT"
] | null | null | null | config.py | kenykau/reinforcement-forex | cac8c59ae7f5593bb7d9bb47e85f4ba2435a7a33 | [
"MIT"
] | null | null | null | from enum import IntEnum
from typing import List, Dict
| 25.851064 | 169 | 0.488477 |
ae4e5f7fe6b5f5c3253e178b1b6eeb60c312745d | 3,020 | py | Python | metaci/release/models.py | giveclarity/MetaCI | f51bd50acf2e7d5e111f993f4816e5f0a5c5a441 | [
"BSD-3-Clause"
] | null | null | null | metaci/release/models.py | giveclarity/MetaCI | f51bd50acf2e7d5e111f993f4816e5f0a5c5a441 | [
"BSD-3-Clause"
] | null | null | null | metaci/release/models.py | giveclarity/MetaCI | f51bd50acf2e7d5e111f993f4816e5f0a5c5a441 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
import datetime
from django.db import models
from django.utils.translation import ugettext_lazy as _
from model_utils import Choices
from model_utils.fields import AutoCreatedField, AutoLastModifiedField
from model_utils.models import StatusModel
from metaci.release.utils import update_release_from_github
| 30.816327 | 82 | 0.67351 |
ae4f4fa26dd1ca5d802353abe3e69eef53ebe442 | 218 | py | Python | src/models/bert_crf.py | Zrealshadow/NLP_HW | 1fcc874b53cdf9465ab188c12082587d48644601 | [
"MIT"
] | 1 | 2020-08-19T03:27:18.000Z | 2020-08-19T03:27:18.000Z | src/models/bert_crf.py | OpenNLPhub/ChineseNER | 1fcc874b53cdf9465ab188c12082587d48644601 | [
"MIT"
] | null | null | null | src/models/bert_crf.py | OpenNLPhub/ChineseNER | 1fcc874b53cdf9465ab188c12082587d48644601 | [
"MIT"
] | null | null | null | import torch
import os
from torch import nn
from transformers import BertForTokenClassification,BertTokenizer,BertConfig;
cwd=os.getcwd()
| 16.769231 | 77 | 0.770642 |
ae5037c12585478c8a1c85ed99c80d350dbc79af | 415 | py | Python | mathMB/__init__.py | mburger-stsci/mathMB | 107c11a6e65429c5f7d2facb4ce4e199538a39d8 | [
"BSD-3-Clause"
] | null | null | null | mathMB/__init__.py | mburger-stsci/mathMB | 107c11a6e65429c5f7d2facb4ce4e199538a39d8 | [
"BSD-3-Clause"
] | null | null | null | mathMB/__init__.py | mburger-stsci/mathMB | 107c11a6e65429c5f7d2facb4ce4e199538a39d8 | [
"BSD-3-Clause"
] | null | null | null | from .interpu import interpu
from .minmaxmean import minmaxmean
from .randomdeviates import random_deviates_1d, random_deviates_2d
from .rotation_matrix import rotation_matrix
from .smooth import smooth, smooth_sphere
from .fit_model import fit_model
from .histogram import HistogramSphere, Histogram, Histogram2d
name = 'mathMB'
__author__ = 'Matthew Burger'
__email__ = 'mburger@stsci.edu'
__version__ = '1.10'
| 29.642857 | 66 | 0.824096 |
ae52b0c373a33d43af43b8a92c2a1b20dd0c87e2 | 3,841 | py | Python | dgraphpandas/strategies/horizontal.py | rohith-bs/dgraphpandas | 29e91e2e7bb1d5d991ab94709a2d7e27f7dd7316 | [
"MIT"
] | 1 | 2022-02-28T17:34:11.000Z | 2022-02-28T17:34:11.000Z | dgraphpandas/strategies/horizontal.py | rohith-bs/dgraphpandas | 29e91e2e7bb1d5d991ab94709a2d7e27f7dd7316 | [
"MIT"
] | null | null | null | dgraphpandas/strategies/horizontal.py | rohith-bs/dgraphpandas | 29e91e2e7bb1d5d991ab94709a2d7e27f7dd7316 | [
"MIT"
] | 1 | 2021-04-10T19:57:05.000Z | 2021-04-10T19:57:05.000Z | import logging
from typing import Any, Dict, List, Callable, Union
import pandas as pd
from dgraphpandas.config import get_from_config
from dgraphpandas.strategies.vertical import vertical_transform
logger = logging.getLogger(__name__)
def horizontal_transform(
frame: Union[str, pd.DataFrame],
config: Dict[str, Any],
config_file_key: str,
**kwargs):
'''
Horizontally Transform a Pandas DataFrame into Intrinsic and Edge DataFrames.
'''
if frame is None:
raise ValueError('frame')
if not config:
raise ValueError('config')
if not config_file_key:
raise ValueError('config_file_key')
file_config: Dict[str, Any] = config['files'][config_file_key]
type_overrides: Dict[str, str] = get_from_config('type_overrides', file_config, {}, **(kwargs))
subject_fields: Union[List[str], Callable[..., List[str]]] = get_from_config('subject_fields', file_config, **(kwargs))
date_fields: Dict[str, str] = get_from_config('date_fields', file_config, {}, **(kwargs))
if not subject_fields:
raise ValueError('subject_fields')
if isinstance(frame, str):
logger.debug(f'Reading file {frame}')
read_csv_options: Dict[str, Any] = get_from_config('read_csv_options', file_config, {}, **(kwargs))
frame = pd.read_csv(frame, **(read_csv_options))
if frame.shape[1] <= len(subject_fields):
raise ValueError(f'''
It looks like there are no data fields.
The subject_fields are {subject_fields}
The frame columns are {frame.columns}
''')
'''
Date Fields get special treatment as they can be represented in many different ways
from different sources. Therefore if the column has been defined in date_fields
then apply those options to that column.
'''
for col, date_format in date_fields.items():
date_format = date_fields[col]
logger.debug(f'Converting {col} to datetime: {date_format}')
frame[col] = pd.to_datetime(frame[col], **(date_format))
if col not in type_overrides:
logger.debug(f'Ensuring {col} has datetime64 type')
type_overrides[col] = 'datetime64'
'''
Ensure that object values have the correct type according to type_overrides.
For example, when pandas reads a csv and detects a numerical value it may decide to
represent them as a float e.g 10.0 so when it's melted into a string it will show as such
But we really want the value to be just 10 so it matches the corresponding rdf type.
Therefore before we melt the frame, we enforce these columns have the correct form.
'''
logger.debug('Applying Type Overrides %s', type_overrides)
for col, current_type in type_overrides.items():
try:
logger.debug(f'Converting {col} to {current_type}')
frame[col] = frame[col].astype(current_type)
except ValueError:
logger.exception(
f'''
Could not convert {col} to {current_type}.
Please confirm that the values in the {col} series are convertable to {current_type}.
A common scenario here is when we have NA values but the target type does not support them.
''')
exit()
'''
Pivot the Horizontal DataFrame based on the given key (subject).
Change the frame to be 3 columns with triples: subject, predicate, object
This changes the horizontal frame into a vertical frame as this more closely
resembles rdf triples.
'''
logger.debug(f'Melting frame with subject: {subject_fields}')
frame = frame.melt(
id_vars=subject_fields,
var_name='predicate',
value_name='object')
return vertical_transform(frame, config, config_file_key, **(kwargs))
| 40.431579 | 123 | 0.667534 |
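The final step above melts a wide frame into subject/predicate/object triples: the subject columns are repeated once per remaining data column. `pandas.melt` is what the real code uses; this pure-Python version over lists of dicts only illustrates the wide-to-long reshape, and the sample row values are invented for the example:

```python
def melt(rows, id_vars, var_name='predicate', value_name='object'):
    """Unpivot a list of dicts: one output row per (id_vars, data column)."""
    out = []
    for row in rows:
        subject = {k: row[k] for k in id_vars}
        for col, value in row.items():
            if col in id_vars:
                continue
            record = dict(subject)
            record[var_name] = col
            record[value_name] = value
            out.append(record)
    return out

triples = melt(
    [{'customer_id': 1, 'age': 30, 'city': 'Kampala'}],
    id_vars=['customer_id'],
)
```

One input row with two data columns yields two triples, each carrying the subject key alongside a predicate/object pair.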
ae533f8aecb8c3af4f9e6c1898e9747d30e5e6e5 | 2,675 | py | Python | classifier-start/lib/utils.py | sharifkaiser/codelabs-edgetpu-image-classifier-detector | da01229abec824994776507949adad1939fa45f0 | [
"Apache-2.0"
] | 4 | 2019-05-13T15:18:36.000Z | 2021-10-08T22:16:49.000Z | classifier-start/lib/utils.py | sharifkaiser/codelabs-edgetpu-image-classifier-detector | da01229abec824994776507949adad1939fa45f0 | [
"Apache-2.0"
] | 1 | 2019-06-30T14:43:31.000Z | 2019-10-25T17:49:52.000Z | classifier-start/lib/utils.py | sharifkaiser/codelabs-edgetpu-image-classifier-detector | da01229abec824994776507949adad1939fa45f0 | [
"Apache-2.0"
] | 3 | 2019-07-22T15:16:02.000Z | 2022-03-04T11:51:11.000Z | # Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import re
from .svg import *
CSS_STYLES = str(CssStyle({'.back': Style(fill='black',
stroke='black',
stroke_width='0.5em')}))
LABEL_PATTERN = re.compile(r'\s*(\d+)(.+)')
| 34.74026 | 94 | 0.575327 |
ae537a846977c181886563c30f2c68e1118f6d27 | 74,410 | py | Python | remit_admin/views.py | naamara/blink | 326c035b2f0ef0feae4cd7aa2d4e73fa4a40171a | [
"Unlicense",
"MIT"
] | null | null | null | remit_admin/views.py | naamara/blink | 326c035b2f0ef0feae4cd7aa2d4e73fa4a40171a | [
"Unlicense",
"MIT"
] | 10 | 2019-12-26T17:31:31.000Z | 2022-03-21T22:17:33.000Z | remit_admin/views.py | naamara/blink | 326c035b2f0ef0feae4cd7aa2d4e73fa4a40171a | [
"Unlicense",
"MIT"
] | null | null | null | # Create your views here.
from django.template import Template, context, RequestContext
from django.shortcuts import render_to_response, render, get_object_or_404, redirect, HttpResponseRedirect, HttpResponse
from django.contrib.auth.decorators import login_required
from remit_admin.forms import RateUpdateForm, ProfileUpdateForm, ProfileAddForm, PhonebookAddForm, TransactionAddForm, CreateAdminUserForm, TransactionUpdateForm, ContactUserForm, EditAdminUserForm, transactionPhonenumberSearchForm, ChargesLimitsForm,CreateHealthUserForm,AddInfoForm,AddHealthInfoForm,AddLawInfoForm,AddPubInfoForm,AddEducInfoForm
import remit.settings as settings
#from remit.utils import generate_sha1, mailer, sendsms, error_message, success_message
from remit.utils import error_message, success_message, admin_mail, sendsms, mailer
import payments.payment as p
from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
from remit.models import Transaction, Phonebook, Rate, Country, Charge
from remit.utils import COUNTRY_CHOICES, NETWORK_CHOICES
from accounts.models import Profile, AdminProfile, UserActions,Create_staff_User
from remit_admin.decorators import admin_required, superuser_required, permission_required, customer_care_required
from django.db.models import Q
from datetime import datetime, timedelta
import payments.payment as payments
from django.db.models import Sum, Max
from django.contrib import messages
from django.db import IntegrityError
import remit_admin.utils as admin_utils
import urllib2
from django.core.files.base import ContentFile
from StringIO import StringIO
from PIL import Image
from remit.utils import debug, log_unauthorized_access, render_to_pdf
#from dateutil.relativedelta import relativedelta
from django.contrib.auth.models import Permission
from django.contrib.contenttypes.models import ContentType
from django.core.urlresolvers import reverse
from remit_admin.models import EmailSupport, add_health_info,HealthInfo,LawhInfo,JounalisthInfo,EducationInfo
import pytz
from django.contrib.auth.models import User
from remit_admin.utils import log_action, store_login_info
from pesapot.pesapot import PesaPot
def dashboard_stats(request):
    '''Data for the admin template'''
data = {'boss_man': False}
countries = Country.objects.all()
if request.user.is_active and request.user.is_staff:
'''get data only when user is logged in'''
profile = User.objects.filter(
is_superuser=False, is_staff=False).count()
data['user_count'] = profile
data['verified_user_count'] = admin_utils.verified_users(
count=True)
data['blocked_user_count'] = admin_utils.blocked_users(count=True)
data['pending_user_count'] = admin_utils.users_pending_verification(
count=True)
transaction = Transaction.objects.filter(
visa_success=True, is_processed=False, amount_sent__isnull=False).aggregate(Sum('amount_sent'))
data['amount_pending'] = transaction['amount_sent__sum']
for country in countries:
currency = country.currency.lower()
# amount pending
transaction = Transaction.objects.filter(
visa_success=True, is_processed=False, to_country=country.pk, amount_sent__isnull=False).aggregate(Sum('amount_received'))
data['amount_pending_%s' % currency] = transaction[
'amount_received__sum']
# pending transactions
transaction = Transaction.objects.filter(
visa_success=True, is_processed=False, amount_sent__isnull=False, to_country=country.pk).count()
data['pending_transactions_%s' % currency] = transaction
data['pending_transactions'] = len(Transaction.momo.pending())
transaction = Transaction.objects.filter(
visa_success=False, is_processed=False, amount_sent__isnull=False).count()
data['failed_transactions'] = transaction
transaction = Transaction.objects.filter(
visa_success=True, is_processed=True, amount_sent__isnull=False).aggregate(Sum('amount_sent'))
data['total_amount_transfered'] = transaction['amount_sent__sum']
transaction = Transaction.objects.filter(
visa_success=True, is_processed=True, amount_sent__isnull=False).aggregate(Sum('amount_received'))
data['total_amount_transfered_ugx'] = transaction[
'amount_received__sum']
data['user_with_transaction'] = Transaction.objects.filter(
visa_success=True, is_processed=True, amount_sent__isnull=False).values('user').distinct().count()
data['complete_transactions'] = Transaction.objects.filter(
visa_success=True, is_processed=True, amount_sent__isnull=False).count()
data['pending_bills'] = Transaction.objects.filter(
visa_success=True,
is_processed=False,
amount_sent__isnull=False,
utility=True
).count()
        data['cancelled_bills'] = Transaction.objects.filter(
            is_canceled=True,
            visa_success=True,
            is_processed=True,
            amount_sent__isnull=False,
            utility=True
        ).count()
data['failed_bills'] = Transaction.objects.filter(
visa_success=False,
is_processed=False,
amount_sent__isnull=False,
utility=True
).count()
return data
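For reference, the pending-amount figure above is just a filtered sum; a Django-free sketch over hypothetical rows (dict keys mirror the model fields) shows the same logic as `.filter(...).aggregate(Sum('amount_sent'))`:

```python
def amount_pending(transactions):
    """Sum amount_sent over successful, still-unprocessed transactions."""
    return sum(
        t['amount_sent']
        for t in transactions
        if t['visa_success'] and not t['is_processed'] and t['amount_sent'] is not None
    )

txns = [
    {'visa_success': True, 'is_processed': False, 'amount_sent': 50},
    {'visa_success': True, 'is_processed': True, 'amount_sent': 20},
    {'visa_success': False, 'is_processed': False, 'amount_sent': 10},
]
print(amount_pending(txns))  # 50
```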
def create_superuser(user):
'''we are not doing this'''
profile = AdminProfile.objects.create(user=user)
def get_user_permissions(user):
'''return user permissions as a dict'''
permissions = {}
for x in Permission.objects.filter(user=user):
permissions.update({x.codename: True})
return permissions
def get_country_access(user):
'''get the users country access'''
countries = ()
if user.is_superuser:
countries = COUNTRY_CHOICES
else:
profile = AdminProfile.objects.get(user=user)
if not profile.country == 'False':
for keyword, value in COUNTRY_CHOICES:
if profile.country == keyword:
countries = ((keyword, value),)
else:
countries = COUNTRY_CHOICES
return countries
def get_network_access(user):
'''get the users network access'''
networks = {}
if user.is_superuser:
networks = NETWORK_CHOICES
else:
profile = AdminProfile.objects.get(user=user)
if not profile.mobile_network == 'False':
networks = profile.mobile_network
for keyword, value in NETWORK_CHOICES:
if profile.mobile_network == keyword:
networks = ((keyword, value),)
else:
networks = NETWORK_CHOICES
return networks
def check_user_permission(user, codename):
'''check if user has a particular permission to do something'''
    if user.is_superuser:
        # Admin is all-powerful
        return True
    else:
        perm = Permission.objects.filter(user=user, codename=codename)
        return perm.exists()
def transaction_receipt(request, name):
name = int(name) ^ 0xABCDEFAB
transaction = get_object_or_404(Transaction.objects.filter(pk=name))
template = settings.EMAIL_TEMPLATE_DIR + 'credit_card_charged_pdf.html'
#log_action(request,model_object=transaction, action_flag=6, change_message='Downloaded Receipt Transaction')
return render_to_pdf(
template, {
'data': transaction,
'BASE_URL': settings.BASE_URL
}
)
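The pk recovery in `transaction_receipt` is a plain XOR mask; because XOR is its own inverse, the same constant both hides and reveals the id. A minimal sketch (mask value taken from the view above):

```python
MASK = 0xABCDEFAB

def obfuscate_pk(pk):
    # the trick the view undoes: pk ^ MASK ^ MASK == pk
    return pk ^ MASK

def recover_pk(name):
    return int(name) ^ MASK

masked = obfuscate_pk(1234)
print(masked != 1234, recover_pk(masked))  # True 1234
```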
def stuff_transaction_list(user, status=1):
    '''
    status
    (1)-successful,
    (2)-pending,
    (3)-Failed,
    (4)-All,
    (5)-Cancelled,
    (6)-successful bills
    (7)-All bills
    (8)-All non bill transactions
    (9)-All pending bills
    (10)-All failed bills
    (11)-All cancelled bills
    '''
transaction_list = False
if status == 1:
transaction_list = Transaction.objects.filter(
visa_success=True, is_processed=True, amount_sent__isnull=False, utility=False)
elif status == 2:
transaction_list = Transaction.objects.filter(
visa_success=True, is_processed=False, amount_sent__isnull=False, utility=False)
elif status == 3:
transaction_list = Transaction.objects.filter(
visa_success=False, utility=False)
elif status == 4:
#transaction_list = Transaction.objects.all()
transaction_list = Transaction.objects.filter(utility=False)
elif status == 5:
transaction_list = Transaction.objects.filter(
is_canceled=True, visa_success=True, is_processed=True, amount_sent__isnull=False, utility=False
)
elif status == 6:
transaction_list = Transaction.objects.filter(
visa_success=True, is_processed=True, amount_sent__isnull=False, utility=True
)
elif status == 7:
transaction_list = Transaction.objects.filter(
utility=True
)
elif status == 8:
transaction_list = Transaction.objects.filter(
utility=False
)
elif status == 9:
transaction_list = Transaction.objects.filter(
visa_success=True, is_processed=False, amount_sent__isnull=False, utility=True)
elif status == 10:
#
transaction_list = Transaction.objects.filter(
visa_success=False, utility=True)
elif status == 11:
transaction_list = Transaction.objects.filter(
is_canceled=True, visa_success=True, is_processed=True, amount_sent__isnull=False, utility=True
)
# else:
# if len(transaction_list) > 0:
# transaction_list = transaction_list.filter(utility=False)
    '''restrict the transaction list to what staff users are allowed to access'''
if transaction_list and not user.is_superuser:
country_filter = network_filter = Q()
for value, keyword in get_country_access(user):
country_filter |= Q(to_country__code=value)
for value, keyword in get_network_access(user):
network_filter |= Q(mobile_network_code=value)
#transaction_list = Transaction.objects.filter(country_filter & network_filter)
transaction_list = transaction_list.filter(
country_filter & network_filter)
# if successful:
# transaction_list = transaction_list.filter(
# visa_success=True, is_processed=True, amount_sent__isnull=False)
return transaction_list
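The country/network restriction above OR-combines `Q` objects within each group and then ANDs the two groups; an ORM-free sketch of the same membership logic (hypothetical rows):

```python
def any_of(field, allowed):
    # analogue of building country_filter |= Q(field=value) per allowed value
    return lambda row: row[field] in allowed

rows = [
    {'to_country': 'UG', 'network': 'MTN'},
    {'to_country': 'KE', 'network': 'SAF'},
    {'to_country': 'US', 'network': 'VZ'},
]
country_ok = any_of('to_country', {'UG', 'KE'})
network_ok = any_of('network', {'MTN'})
visible = [r for r in rows if country_ok(r) and network_ok(r)]
print([r['to_country'] for r in visible])  # ['UG']
```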
def tradelance(request):
"""work with tradelance."""
pretitle = 'Pending Transactions'
page_title = 'Pending Transactions'
response_data = {}
return render_view(request,'admin/tradelance.html',
{'result':response_data
})
def tradelance_response(request):
"""Tradelance response."""
phone = None
amount = None
tlance_method = None
response_data = {}
pesapot = PesaPot()
if request.POST:
data = request.POST.copy()
amount = data.get('tlance_amount','')
number = data.get('tlance_number','')
tlance_id = data.get('tlance_status','')
tlance_method = data.get('selected_tmethod','')
if tlance_method == 'tlance_deposit':
response_data = pesapot.TradelanceDeposit(number,amount)
elif tlance_method == 'tlance_request':
response_data = pesapot.TradelanceRequest(number,amount)
elif tlance_method == 'tlance_balance':
response_data = pesapot.TradelanceBalance()
elif tlance_method == 'tlance_status':
response_data = pesapot.TradelanceStatus(tlance_id)
return render_view(request,'admin/tradelance_response.html',
{'result':response_data})
#@admin_required
def save_transaction(cur, user, pending=False):
for row in cur.fetchall():
debug(row, 'row data')
cur.execute(
"SELECT invoice_id,phon_num,phon_ext,amount_received,amount,added,exchange_rate from transaction_log where log_id = %d " %
row[0])
datarow = cur.fetchone()
if datarow:
data = {
'user': user.pk,
'receiver_number': datarow[1],
'receiver_country_code': datarow[2],
'amount_sent': datarow[4],
'processed_by': 1,
'rate': datarow[6],
'visa_success': True,
}
processed_on = datetime.fromtimestamp(int(datarow[5]))
if not pending:
data['processed_on'] = processed_on
data['is_processed'] = True
else:
debug(data, 'Pending Transaction')
data['is_processed'] = False
data['amount_received'] = float(datarow[4]) * float(datarow[6])
data['started_on'] = processed_on
transaction = TransactionAddForm(data)
if transaction.is_valid():
try:
transaction.save()
except IntegrityError as e:
print e
else:
print transaction.errors
def add_health_info(request, is_customer_care=False):
    '''create a health info entry'''
    form = AddHealthInfoForm()
    if request.POST:
        # bind the form only on POST so a plain GET shows an empty form
        form = AddHealthInfoForm(request.POST)
        title_health = request.POST.get('title_health', '')
        message = request.POST.get('message', '')
        if form.is_valid():
            health_info = HealthInfo(msg=message, sub=title_health)
            health_info.save()
            messages.success(request, "The Info Was Successfully Created")
    return render_view(request, 'add_health_info.html', {'form': form})
def add_law_info(request, is_customer_care=False):
    '''create a law info entry'''
    form = AddLawInfoForm()
    if request.POST:
        # bind the form only on POST so a plain GET shows an empty form
        form = AddLawInfoForm(request.POST)
        sub = request.POST.get('sub', '')
        msg = request.POST.get('msg', '')
        if form.is_valid():
            law_info = LawhInfo(msg=msg, sub=sub)
            law_info.save()
            messages.success(request, "The Info Was Successfully Created")
    return render_view(request, 'add_law_info.html', {'form': form})
def add_pub_info(request, is_customer_care=False):
    '''create a journalism info entry'''
    form = AddPubInfoForm()
    if request.POST:
        # bind the form only on POST so a plain GET shows an empty form
        form = AddPubInfoForm(request.POST)
        sub = request.POST.get('sub', '')
        msg = request.POST.get('msg', '')
        if form.is_valid():
            pub_info = JounalisthInfo(msg=msg, sub=sub)
            pub_info.save()
            messages.success(request, "The Info Was Successfully Created")
    return render_view(request, 'add_pub_info.html', {'form': form})
def add_educ_info(request, is_customer_care=False):
    '''create an education info entry'''
    form = AddEducInfoForm()
    if request.POST:
        # bind the form only on POST so a plain GET shows an empty form
        form = AddEducInfoForm(request.POST)
        sub = request.POST.get('sub', '')
        msg = request.POST.get('msg', '')
        if form.is_valid():
            educ_info = EducationInfo(msg=msg, sub=sub)
            educ_info.save()
            messages.success(request, "The Info Was Successfully Created")
    return render_view(request, 'add_educ_info.html', {'form': form})
def assign_permissions(user, form, update=False, is_customer_care=False):
'''assign staff members permissions'''
if user:
if is_customer_care:
# customer care options
content_type = ContentType.objects.get_for_model(Transaction)
view_transaction = Permission.objects.get(
content_type=content_type, codename="view_transaction")
edit_transactions = Permission.objects.get(
content_type=content_type, codename="edit_transaction")
user.user_permissions.add(view_transaction)
user.user_permissions.remove(edit_transactions)
else:
content_type = ContentType.objects.get_for_model(Profile)
view_profile = Permission.objects.get(
content_type=content_type, codename="view_profile")
edit_profile = Permission.objects.get(
content_type=content_type, codename="edit_profile")
if form.cleaned_data['users'] == '2':
user.user_permissions.add(view_profile)
user.user_permissions.remove(edit_profile)
elif form.cleaned_data['users'] == '3':
user.user_permissions.add(edit_profile, view_profile)
if update and form.cleaned_data['users'] == '1':
user.user_permissions.remove(edit_profile, view_profile)
# rates edit permissions
content_type = ContentType.objects.get_for_model(Rate)
view_rate = Permission.objects.get(
content_type=content_type, codename="view_rate")
edit_rate = Permission.objects.get(
content_type=content_type, codename="edit_rate")
if form.cleaned_data['rates'] == '2':
user.user_permissions.add(view_rate)
user.user_permissions.remove(edit_rate)
elif form.cleaned_data['rates'] == '3':
user.user_permissions.add(view_rate, edit_rate)
if update and form.cleaned_data['rates'] == '1':
user.user_permissions.remove(edit_rate, view_rate)
# transaction edit permissions
content_type = ContentType.objects.get_for_model(Transaction)
view_transaction = Permission.objects.get(
content_type=content_type, codename="view_transaction")
edit_transactions = Permission.objects.get(
content_type=content_type, codename="edit_transaction")
if form.cleaned_data['transactions'] == '2':
user.user_permissions.add(view_transaction)
user.user_permissions.remove(edit_transactions)
elif form.cleaned_data['transactions'] == '3':
user.user_permissions.add(view_transaction, edit_transactions)
if update and form.cleaned_data['transactions'] == '1':
user.user_permissions.remove(
edit_transactions, view_transaction)
# reports
content_type = ContentType.objects.get_for_model(Transaction)
view_reports = Permission.objects.get(
content_type=content_type,
codename="view_reports"
)
if form.cleaned_data['reports'] == '2':
user.user_permissions.add(view_reports)
if update and form.cleaned_data['reports'] == '1':
user.user_permissions.remove(view_reports)
# audit trails
content_type = ContentType.objects.get_for_model(AdminProfile)
view_audit_trail = Permission.objects.get(
content_type=content_type, codename="view_audit_trail")
try:
if form.cleaned_data['audit_trail'] == '2':
user.user_permissions.add(view_audit_trail)
if update and form.cleaned_data['audit_trail'] == '1':
user.user_permissions.remove(view_audit_trail)
except Exception, e:
print e
user.save()
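Each view/edit choice in `assign_permissions` reduces to the same set operations; a hypothetical, ORM-free distillation of that pattern:

```python
def apply_choice(current, view, edit, choice):
    """'1' revokes both, '2' grants view only, '3' grants view and edit."""
    if choice == '1':
        current -= {view, edit}
    elif choice == '2':
        current |= {view}
        current -= {edit}
    elif choice == '3':
        current |= {view, edit}
    return current

perms = apply_choice(set(), 'view_rate', 'edit_rate', '2')
print(sorted(perms))  # ['view_rate']
```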
def admin_503(request):
return render_view(request, 'admin_503.html', {})
def generate_csv_report(transaction, user=False, _file=False):
'''generate a csv report'''
import csv
from django.utils.encoding import smart_str
date = datetime.today().strftime("%B-%d-%Y")
response = HttpResponse(content_type='text/csv')
if _file:
        '''return an in-memory buffer instead of an HTTP response'''
response = StringIO()
else:
response[
'Content-Disposition'] = 'attachment; filename="report_%s.csv"' % date
writer = csv.writer(response)
header = [
smart_str(u"Transaction ID"),
smart_str(u"MOM Transaction ID"),
smart_str(u"Date"),
smart_str(u"Sender names"),
smart_str(u"Sender number"),
smart_str(u"Sender country"),
smart_str(u"Currency"),
smart_str(u"Recipient name"),
smart_str(u"Recipient number"),
smart_str(u"Amount"),
smart_str(u"Status"),
smart_str(u"Revenue Share"),
]
if user:
if user.is_superuser:
header.append(smart_str(u"Mobile network"))
header.append(smart_str(u"USD Amount Sent"))
#header.append(smart_str(u"Exchange Rate"))
writer.writerow(header)
for t in transaction:
if t.actual_delivery_date:
t_date = t.actual_delivery_date
else:
t_date = t.actual_initiation_date
content = [
smart_str(t.get_invoice()),
smart_str(t.get_network_transactionid()),
smart_str(t_date),
smart_str(t.get_sender_profile().get_names()),
smart_str(t.get_sender_profile().get_phonenumber()),
smart_str(t.sender_country),
smart_str(t.currency_sent),
smart_str(t.recipient_names()),
smart_str(t.recipient_number()),
smart_str(t.amount_received),
smart_str(t.actual_status),
smart_str(t.revenue_share()),
]
if user:
if user.is_superuser:
content.append(smart_str(t.get_mobile_network()))
content.append(smart_str(t.amount_sent))
# content.append(smart_str(t.exchange_rate))
writer.writerow(content)
return response
| 39.432962 | 430 | 0.619339 |
ae5506b0817f6bd6ba613b44f0005304d3cd1c5d | 932 | py | Python | example.py | rivasd/activiewBCI | b2278ebacc733e328f28d308146108a52d3deb78 | [
"MIT"
] | 1 | 2020-09-10T08:04:06.000Z | 2020-09-10T08:04:06.000Z | example.py | rivasd/activiewBCI | b2278ebacc733e328f28d308146108a52d3deb78 | [
"MIT"
] | null | null | null | example.py | rivasd/activiewBCI | b2278ebacc733e328f28d308146108a52d3deb78 | [
"MIT"
] | null | null | null | from ActiView import ActiveTwo
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore, QtGui
import numpy as np
app = QtGui.QApplication([])
win = pg.GraphicsWindow()
win.setWindowTitle("Mimicking ActiView's EEG monitoring screen")
monitor = win.addPlot()
#we have so many curves that we will store them in an array
curves = [monitor.plot() for x in range(64)]
#this is the data that will be continuously updated and plotted
rawdata = np.empty((64,0))
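A streaming monitor like this usually keeps only a fixed window of recent samples; a pure-Python sketch of that rolling-buffer pattern (numpy's `hstack` plus slicing achieves the same thing on the `(64, n)` array):

```python
def append_and_trim(buf, sample, max_len=4):
    # keep only the most recent max_len samples
    buf.append(sample)
    del buf[:-max_len]
    return buf

buf = []
for s in range(6):
    append_and_trim(buf, s, max_len=4)
print(buf)  # [2, 3, 4, 5]
```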
#initialize connection with ActiView
actiview = ActiveTwo()
def update():
    '''Refresh loop. Placeholder data source: swap the zero column for a
    real read from `actiview` (the exact read call is device-specific).'''
    global rawdata
    new_col = np.zeros((64, 1))
    rawdata = np.hstack((rawdata, new_col))[:, -1024:]
    for i, curve in enumerate(curves):
        curve.setData(rawdata[i])

timer = pg.QtCore.QTimer()
timer.timeout.connect(update)
timer.start(0)
if __name__ == '__main__':
import sys
if sys.flags.interactive != 1 or not hasattr(pg.QtCore, 'PYQT_VERSION'):
pg.QtGui.QApplication.exec_() | 23.897436 | 76 | 0.714592 |
ae56d2f7d0da35ed27472371e04da9a9312adf59 | 608 | py | Python | encoder_decoder_model/DNS_NetworkParameters_Jiao_Wan.py | mrjiao2018/LearningGroupStructure | 7426837b8c96f771724cfd663a57ed32f9d16560 | [
"MIT"
] | 1 | 2018-12-07T14:36:13.000Z | 2018-12-07T14:36:13.000Z | encoder_decoder_model/DNS_NetworkParameters_Jiao_Wan.py | mrjiao2018/LearningGroupStructure | 7426837b8c96f771724cfd663a57ed32f9d16560 | [
"MIT"
] | null | null | null | encoder_decoder_model/DNS_NetworkParameters_Jiao_Wan.py | mrjiao2018/LearningGroupStructure | 7426837b8c96f771724cfd663a57ed32f9d16560 | [
"MIT"
] | 1 | 2018-12-05T11:03:07.000Z | 2018-12-05T11:03:07.000Z | import os
| 33.777778 | 94 | 0.710526 |
ae581ce55f5c70d548f9c9c2e1e5bda4e73bac54 | 11,401 | py | Python | segmentron/utils/visualize.py | GhadeerElmkaiel/Trans2Seg | 6717db602205cbed494ae1913ac7cbbca8e83463 | [
"Apache-2.0"
] | null | null | null | segmentron/utils/visualize.py | GhadeerElmkaiel/Trans2Seg | 6717db602205cbed494ae1913ac7cbbca8e83463 | [
"Apache-2.0"
] | null | null | null | segmentron/utils/visualize.py | GhadeerElmkaiel/Trans2Seg | 6717db602205cbed494ae1913ac7cbbca8e83463 | [
"Apache-2.0"
] | null | null | null | import os
import logging
import numpy as np
import torch
from PIL import Image
#from torchsummary import summary
from thop import profile
__all__ = ['get_color_pallete', 'print_iou', 'set_img_color',
'show_prediction', 'show_colorful_images', 'save_colorful_images']
def save_colorful_images(prediction, filename, output_dir, palettes):
'''
:param prediction: [B, H, W, C]
'''
im = Image.fromarray(palettes[prediction.astype('uint8').squeeze()])
fn = os.path.join(output_dir, filename)
out_dir = os.path.split(fn)[0]
if not os.path.exists(out_dir):
os.mkdir(out_dir)
im.save(fn)
def get_color_pallete(npimg, dataset='cityscape'):
"""Visualize image.
Parameters
----------
npimg : numpy.ndarray
Single channel image with shape `H, W, 1`.
    dataset : str, default: 'cityscape'
        The dataset whose palette to apply ('pascal_voc', 'ade20k', 'cityscape', ...).
Returns
-------
out_img : PIL.Image
Image with color pallete
"""
# recovery boundary
if dataset in ('pascal_voc', 'pascal_aug'):
npimg[npimg == -1] = 255
# put colormap
if dataset == 'ade20k':
npimg = npimg + 1
out_img = Image.fromarray(npimg.astype('uint8'))
out_img.putpalette(adepallete)
return out_img
elif dataset == 'cityscape':
out_img = Image.fromarray(npimg.astype('uint8'))
out_img.putpalette(cityscapepallete)
return out_img
elif dataset == 'trans10kv2' or dataset == 'transparent11':
out_img = Image.fromarray(npimg.astype('uint8'))
out_img.putpalette(trans10kv2pallete)
return out_img
elif dataset == 'pascal_voc':
out_img = Image.fromarray(npimg.astype('uint8'))
out_img.putpalette(vocpallete)
return out_img
elif dataset == 'sber_dataset':
out_img = Image.fromarray(npimg.astype('uint8'))
out_img.putpalette(sberpallete)
return out_img
elif dataset == 'sber_dataset_all':
out_img = Image.fromarray(npimg.astype('uint8'))
out_img.putpalette(sberallpallete)
return out_img
elif dataset == 'sber_dataset_all_no_fu':
out_img = Image.fromarray(npimg.astype('uint8'))
out_img.putpalette(sberallNoFUpallete)
return out_img
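Each flat palette list below stores one (R, G, B) triple per class index, which `Image.putpalette` consumes in order; a tiny lookup helper (demo palette only, not one of the lists below):

```python
def palette_rgb(palette, label):
    """Return the (R, G, B) triple stored for class `label`."""
    return tuple(palette[label * 3: label * 3 + 3])

demo_palette = [255, 255, 255,  # class 0: white
                255, 0, 0,      # class 1: red
                0, 0, 0]        # class 2: black
print(palette_rgb(demo_palette, 1))  # (255, 0, 0)
```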
def _getvocpallete(num_cls):
    # Standard bit-interleaved VOC palette generator (as used in gluon-cv /
    # torchvision segmentation utilities).
    pallete = [0] * (num_cls * 3)
    for j in range(0, num_cls):
        lab = j
        i = 0
        while lab > 0:
            pallete[j * 3 + 0] |= (((lab >> 0) & 1) << (7 - i))
            pallete[j * 3 + 1] |= (((lab >> 1) & 1) << (7 - i))
            pallete[j * 3 + 2] |= (((lab >> 2) & 1) << (7 - i))
            i = i + 1
            lab >>= 3
    return pallete


vocpallete = _getvocpallete(256)
adepallete = [
0, 0, 0, 120, 120, 120, 180, 120, 120, 6, 230, 230, 80, 50, 50, 4, 200, 3, 120, 120, 80, 140, 140, 140, 204,
5, 255, 230, 230, 230, 4, 250, 7, 224, 5, 255, 235, 255, 7, 150, 5, 61, 120, 120, 70, 8, 255, 51, 255, 6, 82,
143, 255, 140, 204, 255, 4, 255, 51, 7, 204, 70, 3, 0, 102, 200, 61, 230, 250, 255, 6, 51, 11, 102, 255, 255,
7, 71, 255, 9, 224, 9, 7, 230, 220, 220, 220, 255, 9, 92, 112, 9, 255, 8, 255, 214, 7, 255, 224, 255, 184, 6,
10, 255, 71, 255, 41, 10, 7, 255, 255, 224, 255, 8, 102, 8, 255, 255, 61, 6, 255, 194, 7, 255, 122, 8, 0, 255,
20, 255, 8, 41, 255, 5, 153, 6, 51, 255, 235, 12, 255, 160, 150, 20, 0, 163, 255, 140, 140, 140, 250, 10, 15,
20, 255, 0, 31, 255, 0, 255, 31, 0, 255, 224, 0, 153, 255, 0, 0, 0, 255, 255, 71, 0, 0, 235, 255, 0, 173, 255,
31, 0, 255, 11, 200, 200, 255, 82, 0, 0, 255, 245, 0, 61, 255, 0, 255, 112, 0, 255, 133, 255, 0, 0, 255, 163,
0, 255, 102, 0, 194, 255, 0, 0, 143, 255, 51, 255, 0, 0, 82, 255, 0, 255, 41, 0, 255, 173, 10, 0, 255, 173, 255,
0, 0, 255, 153, 255, 92, 0, 255, 0, 255, 255, 0, 245, 255, 0, 102, 255, 173, 0, 255, 0, 20, 255, 184, 184, 0,
31, 255, 0, 255, 61, 0, 71, 255, 255, 0, 204, 0, 255, 194, 0, 255, 82, 0, 10, 255, 0, 112, 255, 51, 0, 255, 0,
194, 255, 0, 122, 255, 0, 255, 163, 255, 153, 0, 0, 255, 10, 255, 112, 0, 143, 255, 0, 82, 0, 255, 163, 255,
0, 255, 235, 0, 8, 184, 170, 133, 0, 255, 0, 255, 92, 184, 0, 255, 255, 0, 31, 0, 184, 255, 0, 214, 255, 255,
0, 112, 92, 255, 0, 0, 224, 255, 112, 224, 255, 70, 184, 160, 163, 0, 255, 153, 0, 255, 71, 255, 0, 255, 0,
163, 255, 204, 0, 255, 0, 143, 0, 255, 235, 133, 255, 0, 255, 0, 235, 245, 0, 255, 255, 0, 122, 255, 245, 0,
10, 190, 212, 214, 255, 0, 0, 204, 255, 20, 0, 255, 255, 255, 0, 0, 153, 255, 0, 41, 255, 0, 255, 204, 41, 0,
255, 41, 255, 0, 173, 0, 255, 0, 245, 255, 71, 0, 255, 122, 0, 255, 0, 255, 184, 0, 92, 255, 184, 255, 0, 0,
133, 255, 255, 214, 0, 25, 194, 194, 102, 255, 0, 92, 0, 255]
cityscapepallete = [
128, 64, 128,
244, 35, 232,
70, 70, 70,
102, 102, 156,
190, 153, 153,
153, 153, 153,
250, 170, 30,
220, 220, 0,
107, 142, 35,
152, 251, 152,
0, 130, 180,
220, 20, 60,
255, 0, 0,
0, 0, 142,
0, 0, 70,
0, 60, 100,
0, 80, 100,
0, 0, 230,
119, 11, 32,
]
trans10kv2pallete = [
0, 0, 0,
120, 120, 70,
235, 255, 7,
6, 230, 230,
204, 255, 4,
120, 120, 120,
140, 140, 140,
255, 51, 7,
224, 5, 255,
204, 5, 255,
150, 5, 61,
4, 250, 7]
sberpallete = [
255, 255, 255,
255, 0, 0,
0, 0, 0,
]
# sberallpallete = [
# 102, 255, 102, # Mirror
# 51, 221, 255, # Glass
# 245, 147, 49, # FU
# 184, 61, 245, # Other Optical Surface
# 250, 50, 83, # Floor
# 0, 0, 0,
# ]
sberallNoFUpallete = [
102, 255, 102, # Mirror
51, 221, 255, # Glass
# 245, 147, 49, # FU
250, 50, 83, # Floor
184, 61, 245, # Other Optical Surface
0, 0, 0,
]
sberallpallete = [
102, 255, 102, # Mirror
51, 221, 255, # Glass
245, 147, 49, # FU
184, 61, 245, # Other Optical Surface
250, 50, 83, # Floor
0, 0, 0,
6, 6, 6, 7, 7, 7, 8, 8, 8, 9, 9, 9, 10, 10, 10, 11, 11, 11, 12, 12, 12, 13, 13, 13, 14, 14, 14, 15, 15, 15, 16, 16, 16, 17, 17, 17, 18, 18, 18, 19, 19, 19, 20, 20, 20, 21, 21, 21, 22, 22, 22, 23, 23, 23, 24, 24, 24, 25, 25, 25, 26, 26, 26, 27, 27, 27, 28, 28, 28, 29, 29, 29, 30, 30, 30, 31, 31, 31, 32, 32, 32, 33, 33, 33, 34, 34, 34, 35, 35, 35, 36, 36, 36, 37, 37, 37, 38, 38, 38, 39, 39, 39, 40, 40, 40, 41, 41, 41, 42, 42, 42, 43, 43, 43, 44, 44, 44, 45, 45, 45, 46, 46, 46, 47, 47, 47, 48, 48, 48, 49, 49, 49, 50, 50, 50, 51, 51, 51, 52, 52, 52, 53, 53, 53, 54, 54, 54, 55, 55, 55, 56, 56, 56, 57, 57, 57, 58, 58, 58, 59, 59, 59, 60, 60, 60, 61, 61, 61, 62, 62, 62, 63, 63, 63, 64, 64, 64, 65, 65, 65, 66, 66, 66, 67, 67, 67, 68, 68, 68, 69, 69, 69, 70, 70, 70, 71, 71, 71, 72, 72, 72, 73, 73, 73, 74, 74, 74, 75, 75, 75, 76, 76, 76, 77, 77, 77, 78, 78, 78, 79, 79, 79, 80, 80, 80, 81, 81, 81, 82, 82, 82, 83, 83, 83, 84, 84, 84, 85, 85, 85, 86, 86, 86, 87, 87, 87, 88, 88, 88, 89, 89, 89, 90, 90, 90, 91, 91, 91, 92, 92, 92, 93, 93, 93, 94, 94, 94, 95, 95, 95, 96, 96, 96, 97, 97, 97, 98, 98, 98, 99, 99, 99, 100, 100, 100, 101, 101, 101, 102, 102, 102, 103, 103, 103, 104, 104, 104, 105, 105, 105, 106, 106, 106, 107, 107, 107, 108, 108, 108, 109, 109, 109, 110, 110, 110, 111, 111, 111, 112, 112, 112, 113, 113, 113, 114, 114, 114, 115, 115, 115, 116, 116, 116, 117, 117, 117, 118, 118, 118, 119, 119, 119, 120, 120, 120, 121, 121, 121, 122, 122, 122, 123, 123, 123, 124, 124, 124, 125, 125, 125, 126, 126, 126, 127, 127, 127, 128, 128, 128, 129, 129, 129, 130, 130, 130, 131, 131, 131, 132, 132, 132, 133, 133, 133, 134, 134, 134, 135, 135, 135, 136, 136, 136, 137, 137, 137, 138, 138, 138, 139, 139, 139, 140, 140, 140, 141, 141, 141, 142, 142, 142, 143, 143, 143, 144, 144, 144, 145, 145, 145, 146, 146, 146, 147, 147, 147, 148, 148, 148, 149, 149, 149, 150, 150, 150, 151, 151, 151, 152, 152, 152, 153, 153, 153, 154, 154, 154, 155, 155, 155, 156, 156, 156, 157, 157, 157, 158, 158, 
158, 159, 159, 159, 160, 160, 160, 161, 161, 161, 162, 162, 162, 163, 163, 163, 164, 164, 164, 165, 165, 165, 166, 166, 166, 167, 167, 167, 168, 168, 168, 169, 169, 169, 170, 170, 170, 171, 171, 171, 172, 172, 172, 173, 173, 173, 174, 174, 174, 175, 175, 175, 176, 176, 176, 177, 177, 177, 178, 178, 178, 179, 179, 179, 180, 180, 180, 181, 181, 181, 182, 182, 182, 183, 183, 183, 184, 184, 184, 185, 185, 185, 186, 186, 186, 187, 187, 187, 188, 188, 188, 189, 189, 189, 190, 190, 190, 191, 191, 191, 192, 192, 192, 193, 193, 193, 194, 194, 194, 195, 195, 195, 196, 196, 196, 197, 197, 197, 198, 198, 198, 199, 199, 199, 200, 200, 200, 201, 201, 201, 202, 202, 202, 203, 203, 203, 204, 204, 204, 205, 205, 205, 206, 206, 206, 207, 207, 207, 208, 208, 208, 209, 209, 209, 210, 210, 210, 211, 211, 211, 212, 212, 212, 213, 213, 213, 214, 214, 214, 215, 215, 215, 216, 216, 216, 217, 217, 217, 218, 218, 218, 219, 219, 219, 220, 220, 220, 221, 221, 221, 222, 222, 222, 223, 223, 223, 224, 224, 224, 225, 225, 225, 226, 226, 226, 227, 227, 227, 228, 228, 228, 229, 229, 229, 230, 230, 230, 231, 231, 231, 232, 232, 232, 233, 233, 233, 234, 234, 234, 235, 235, 235, 236, 236, 236, 237, 237, 237, 238, 238, 238, 239, 239, 239, 240, 240, 240, 241, 241, 241, 242, 242, 242, 243, 243, 243, 244, 244, 244, 245, 245, 245, 246, 246, 246, 247, 247, 247, 248, 248, 248, 249, 249, 249, 250, 250, 250, 251, 251, 251, 252, 252, 252, 253, 253, 253, 254, 254, 254, 255, 255, 255] | 46.534694 | 3,459 | 0.556179 |
ae585b1ebb6ddaac2ce15869614a545c6f947635 | 342 | py | Python | pangolin/core/context_processors.py | skylifewww/pangolinreact | 8d8a45fd15c442618f2ed1ecab15e2e2ab4b7a3a | [
"MIT"
] | null | null | null | pangolin/core/context_processors.py | skylifewww/pangolinreact | 8d8a45fd15c442618f2ed1ecab15e2e2ab4b7a3a | [
"MIT"
] | null | null | null | pangolin/core/context_processors.py | skylifewww/pangolinreact | 8d8a45fd15c442618f2ed1ecab15e2e2ab4b7a3a | [
"MIT"
] | null | null | null | from django.conf import settings
from django.utils.timezone import now
from .utils import intspace, set_param
| 26.307692 | 85 | 0.652047 |
ae5b1b9181972edef32d0c181d78511358cde1b1 | 2,671 | py | Python | 8_random_walker_segmentation_scikit-image.py | Data-Laboratory/WorkExamples | 27e58207e664da7813673e6792c0c30c0a5bf74c | [
"MIT"
] | 1 | 2021-12-15T22:27:27.000Z | 2021-12-15T22:27:27.000Z | 8_random_walker_segmentation_scikit-image.py | Data-Laboratory/WorkExamples | 27e58207e664da7813673e6792c0c30c0a5bf74c | [
"MIT"
] | null | null | null | 8_random_walker_segmentation_scikit-image.py | Data-Laboratory/WorkExamples | 27e58207e664da7813673e6792c0c30c0a5bf74c | [
"MIT"
] | null | null | null | #!/usr/bin/env python
__author__ = "Sreenivas Bhattiprolu"
__license__ = "Feel free to copy, I appreciate if you acknowledge Python for Microscopists"
# https://www.youtube.com/watch?v=6P8YhJa2V6o
"""
Using Random walker to generate lables and then segment and finally cleanup using closing operation.
"""
import matplotlib.pyplot as plt
from skimage import io, img_as_float
import numpy as np
img = img_as_float(io.imread("images/Alloy_noisy.jpg"))
#plt.hist(img.flat, bins=100, range=(0, 1))
# Very noisy image so histogram looks horrible. Let us denoise and see if it helps.
from skimage.restoration import denoise_nl_means, estimate_sigma
sigma_est = np.mean(estimate_sigma(img, multichannel=True))
denoise_img = denoise_nl_means(img, h=1.15 * sigma_est, fast_mode=True,
patch_size=5, patch_distance=3, multichannel=True)
#plt.hist(denoise_img.flat, bins=100, range=(0, 1))
# Much better histogram and now we can see two separate peaks.
#Still close enough so cannot use histogram based segmentation.
#Let us see if we can get any better by some preprocessing.
#Let's try histogram equalization
from skimage import exposure #Contains functions for hist. equalization
#eq_img = exposure.equalize_hist(denoise_img)
eq_img = exposure.equalize_adapthist(denoise_img)
#plt.imshow(eq_img, cmap='gray')
#plt.hist(denoise_img.flat, bins=100, range=(0., 1))
#Not any better. Let us stretch the hoistogram between 0.7 and 0.95
# The range of the binary image spans over (0, 1).
# For markers, let us include all between each peak.
markers = np.zeros(img.shape, dtype=np.uint)
markers[(eq_img < 0.8) & (eq_img > 0.7)] = 1
markers[(eq_img > 0.85) & (eq_img < 0.99)] = 2
from skimage.segmentation import random_walker
# Run random walker algorithm
# https://scikit-image.org/docs/dev/api/skimage.segmentation.html#skimage.segmentation.random_walker
labels = random_walker(eq_img, markers, beta=10, mode='bf')
plt.imsave("images/markers.jpg", markers)
segm1 = (labels == 1)
segm2 = (labels == 2)
all_segments = np.zeros((eq_img.shape[0], eq_img.shape[1], 3)) #nothing but denoise img size but blank
all_segments[segm1] = (1,0,0)
all_segments[segm2] = (0,1,0)
#plt.imshow(all_segments)
from scipy import ndimage as nd
segm1_closed = nd.binary_closing(segm1, np.ones((3,3)))
segm2_closed = nd.binary_closing(segm2, np.ones((3,3)))
all_segments_cleaned = np.zeros((eq_img.shape[0], eq_img.shape[1], 3))
all_segments_cleaned[segm1_closed] = (1,0,0)
all_segments_cleaned[segm2_closed] = (0,1,0)
plt.imshow(all_segments_cleaned)
plt.imsave("images/random_walker.jpg", all_segments_cleaned)
| 31.797619 | 102 | 0.7383 |
ae5c773b88cd0f9d9fbee09e572ef2fc27d6c119 | 40 | py | Python | smileyjoe_io/constant.py | SmileyJoe/smileyjoe_io | 29e3b55e33f17f799f59801158499809fcce0af4 | [
"MIT"
] | null | null | null | smileyjoe_io/constant.py | SmileyJoe/smileyjoe_io | 29e3b55e33f17f799f59801158499809fcce0af4 | [
"MIT"
] | 2 | 2020-02-11T23:34:28.000Z | 2020-06-05T17:33:09.000Z | smileyjoe_io/constant.py | SmileyJoe/smileyjoe_io | 29e3b55e33f17f799f59801158499809fcce0af4 | [
"MIT"
] | null | null | null | SUB_SECRET = 'secret'
SUB_MAIN = 'main'
| 13.333333 | 21 | 0.7 |
ae5dba7efd27593d74b0d517709967bd1f8e2e4a | 3,090 | py | Python | dockend/dockend.py | ChrisVidal10/dockend | 8904e1d017fcc1767d8593190df537a750a50b4c | [
"MIT"
] | null | null | null | dockend/dockend.py | ChrisVidal10/dockend | 8904e1d017fcc1767d8593190df537a750a50b4c | [
"MIT"
] | 1 | 2018-06-25T23:38:09.000Z | 2018-06-25T23:38:09.000Z | dockend/dockend.py | ChrisVidal10/dockend | 8904e1d017fcc1767d8593190df537a750a50b4c | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from termcolor import cprint
import argparse
import docker
DOCKER_CLIENT = docker.from_env()
if __name__ == '__main__':
main()
| 35.517241 | 124 | 0.612621 |
ae616a523c7cfa0788d9038fa4b59abb5b2c597c | 625 | py | Python | exp_figure/figure_3(grey).py | qqxx6661/LDSM | b2be6fdfdac00fc4a469a72b3a10686fa0f4bd80 | [
"MIT"
] | 4 | 2019-06-04T06:19:01.000Z | 2021-04-16T15:50:30.000Z | exp_figure/figure_3(grey).py | qqxx6661/LDSM | b2be6fdfdac00fc4a469a72b3a10686fa0f4bd80 | [
"MIT"
] | 1 | 2019-09-10T10:33:18.000Z | 2021-02-08T14:51:39.000Z | exp_figure/figure_3(grey).py | qqxx6661/LDSM | b2be6fdfdac00fc4a469a72b3a10686fa0f4bd80 | [
"MIT"
] | 2 | 2019-06-04T06:19:08.000Z | 2021-09-06T07:30:44.000Z | import random
import matplotlib.pyplot as plt
import numpy as np
#
fig = plt.figure(figsize=(10, 6))
ax1 = fig.add_subplot(1, 1, 1)
ax1.set_xlabel('Frame', fontsize=18)
ax1.set_ylabel('Overall Time Cost (s)', fontsize=18)
x = range(180)
y1 = []
y2 = []
for i in range(180):
y1.append(random.uniform(0.30, 0.32))
y2.append(random.uniform(0.36, 0.38))
print(y1)
print(y2)
ax1.plot(x, y1,linestyle=':',marker='o', label="1-cam scenario")
ax1.plot(x, y2,marker='>', label="8-cam scenario")
plt.xticks((0, 30, 60, 90, 120, 150, 180), fontsize=16)
plt.yticks(fontsize=18)
plt.legend(fontsize=12)
plt.show()
| 23.148148 | 64 | 0.68 |
ae6311f1dc6cb97bf176a5a088a3d0aac371ae07 | 176 | py | Python | src/radixlib/constants.py | 0xOmarA/RadixLib | 85d75a47d4c4df4c1a319b74857ae2c513933623 | [
"MIT"
] | 32 | 2022-01-12T16:52:28.000Z | 2022-03-24T18:05:47.000Z | src/radixlib/constants.py | 0xOmarA/RadixLib | 85d75a47d4c4df4c1a319b74857ae2c513933623 | [
"MIT"
] | 3 | 2022-01-12T17:01:55.000Z | 2022-02-12T15:14:16.000Z | src/radixlib/constants.py | 0xOmarA/RadixLib | 85d75a47d4c4df4c1a319b74857ae2c513933623 | [
"MIT"
] | 1 | 2022-01-21T04:28:07.000Z | 2022-01-21T04:28:07.000Z | from typing import Dict
XRD_RRI: Dict[str, str] = {
"mainnet": "xrd_rr1qy5wfsfh",
"stokenet": "xrd_tr1qyf0x76s",
"betanet": "xrd_br1qy73gwac",
"localnet": ""
} | 22 | 34 | 0.636364 |
ae63be6d85b78ced6ae0f350b22b8798f6f015df | 1,349 | py | Python | tests/examples/coreutils/ls/requirements/requirements.py | testflows/TestFlows-Core | 0aa17247dffd2f7199465031ab16cc4f12c9cfb0 | [
"Apache-2.0"
] | 3 | 2020-06-25T19:23:19.000Z | 2021-10-20T19:29:56.000Z | tests/examples/coreutils/ls/requirements/requirements.py | testflows/TestFlows-Core | 0aa17247dffd2f7199465031ab16cc4f12c9cfb0 | [
"Apache-2.0"
] | null | null | null | tests/examples/coreutils/ls/requirements/requirements.py | testflows/TestFlows-Core | 0aa17247dffd2f7199465031ab16cc4f12c9cfb0 | [
"Apache-2.0"
] | 1 | 2020-02-24T12:31:45.000Z | 2020-02-24T12:31:45.000Z | # These requirements were auto generated
# from software requirements specification (SRS)
# document by TestFlows v1.6.200716.1214830.
# Do not edit by hand but re-generate instead
# using 'tfs requirements generate' command.
from testflows.core import Requirement
RQ_SRS001_CU_LS = Requirement(
name='RQ.SRS001-CU.LS',
version='1.0',
priority=None,
group=None,
type=None,
uid=None,
description=(
'The [ls] utility SHALL list the contents of a directory.\n'
),
link=None
)
RQ_SRS001_CU_LS_Synopsis = Requirement(
name='RQ.SRS001-CU.LS.Synopsis',
version='1.0',
priority=None,
group=None,
type=None,
uid=None,
description=(
'The [ls] utility SHALL support the following synopsis.\n'
'\n'
'```bash\n'
'SYNOPSIS\n'
' ls [OPTION]... [FILE]...\n'
'```\n'
),
link=None
)
RQ_SRS001_CU_LS_Default_Directory = Requirement(
name='RQ.SRS001-CU.LS.Default.Directory',
version='1.0',
priority=None,
group=None,
type=None,
uid=None,
description=(
'The [ls] utility SHALL by default list information about the contents of the current directory.\n'
),
link=None
)
| 26.45098 | 107 | 0.578206 |
ae6410131f36b3418762c4c860f8c13f5bed9bd8 | 1,067 | py | Python | build/lib/flaskr/__init__.py | LayneWei/NLP-medical-information-extraction | 1657d956afd3a2c476da28e3e8a4f1c4ce4bdc4b | [
"MIT"
] | null | null | null | build/lib/flaskr/__init__.py | LayneWei/NLP-medical-information-extraction | 1657d956afd3a2c476da28e3e8a4f1c4ce4bdc4b | [
"MIT"
] | null | null | null | build/lib/flaskr/__init__.py | LayneWei/NLP-medical-information-extraction | 1657d956afd3a2c476da28e3e8a4f1c4ce4bdc4b | [
"MIT"
] | null | null | null | import os
from flask import Flask
#import SQLAlchemy
from flaskr import db
| 23.711111 | 66 | 0.66448 |
ae6515732de013312213bbfb2e08738b394327ad | 139 | py | Python | nlpatl/sampling/clustering/__init__.py | dumpmemory/nlpatl | 59209242d1ac26714b11b86261070ac50cc90432 | [
"MIT"
] | 18 | 2021-11-29T06:43:46.000Z | 2022-03-29T09:58:32.000Z | nlpatl/sampling/clustering/__init__.py | dumpmemory/nlpatl | 59209242d1ac26714b11b86261070ac50cc90432 | [
"MIT"
] | null | null | null | nlpatl/sampling/clustering/__init__.py | dumpmemory/nlpatl | 59209242d1ac26714b11b86261070ac50cc90432 | [
"MIT"
] | 1 | 2021-11-29T06:43:47.000Z | 2021-11-29T06:43:47.000Z | from nlpatl.sampling.clustering.nearest_mean import NearestMeanSampling
from nlpatl.sampling.clustering.farthest import FarthestSampling
| 46.333333 | 72 | 0.884892 |
ae689f1c1175daa6fc473f2cb48f19de2559deff | 830 | py | Python | EvaMap/Metrics/sameAs.py | benjimor/EvaMap | 42e616abe9f15925b885797d30496e30615989a0 | [
"MIT"
] | 1 | 2021-01-29T18:53:26.000Z | 2021-01-29T18:53:26.000Z | EvaMap/Metrics/sameAs.py | benjimor/EvaMap | 42e616abe9f15925b885797d30496e30615989a0 | [
"MIT"
] | 1 | 2021-06-06T17:56:00.000Z | 2021-06-06T17:56:00.000Z | EvaMap/Metrics/sameAs.py | benjimor/EvaMap | 42e616abe9f15925b885797d30496e30615989a0 | [
"MIT"
] | null | null | null | import rdflib
import requests
from EvaMap.Metrics.metric import metric | 31.923077 | 112 | 0.595181 |
ae69157e9e2838981548e0247bcf82bb4114ecc3 | 1,501 | py | Python | zeeko/handlers/setup_package.py | alexrudy/Zeeko | fb4992724620ed548dd32c3201f79f5b7bebfe32 | [
"BSD-3-Clause"
] | 2 | 2017-07-23T22:06:05.000Z | 2020-02-28T07:54:15.000Z | zeeko/handlers/setup_package.py | alexrudy/Zeeko | fb4992724620ed548dd32c3201f79f5b7bebfe32 | [
"BSD-3-Clause"
] | 1 | 2020-10-29T19:54:06.000Z | 2020-10-29T19:54:06.000Z | zeeko/handlers/setup_package.py | alexrudy/Zeeko | fb4992724620ed548dd32c3201f79f5b7bebfe32 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import absolute_import
import os
from zeeko._build_helpers import get_utils_extension_args, get_zmq_extension_args, _generate_cython_extensions, pxd, get_package_data
from astropy_helpers import setup_helpers
utilities = [pxd("..utils.rc"),
pxd("..utils.msg"),
pxd("..utils.pthread"),
pxd("..utils.lock"),
pxd("..utils.condition"),
pxd("..utils.clock")]
base = [ pxd("..cyloop.throttle"), pxd("..cyloop.statemachine"), pxd(".snail"), pxd(".base")]
dependencies = {
'base' : utilities + [ pxd("..cyloop.throttle") ],
'snail' : utilities + [ pxd("..cyloop.throttle"), pxd("..cyloop.statemachine") ],
'client' : utilities + base + [ pxd("..messages.receiver") ],
'server' : utilities + base + [ pxd("..messages.publisher") ],
}
def get_extensions(**kwargs):
"""Get the Cython extensions"""
extension_args = setup_helpers.DistutilsExtensionArgs()
extension_args.update(get_utils_extension_args())
extension_args.update(get_zmq_extension_args())
extension_args['include_dirs'].append('numpy')
package_name = __name__.split(".")[:-1]
extensions = [e for e in _generate_cython_extensions(extension_args, os.path.dirname(__file__), package_name)]
for extension in extensions:
name = extension.name.split(".")[-1]
if name in dependencies:
extension.depends.extend(dependencies[name])
return extensions | 38.487179 | 133 | 0.654231 |
ae69c8b77055ca55392fe8a19a30b6175954dde3 | 5,716 | py | Python | networkapi/api_route_map/v4/serializers.py | vinicius-marinho/GloboNetworkAPI | 94651d3b4dd180769bc40ec966814f3427ccfb5b | [
"Apache-2.0"
] | 73 | 2015-04-13T17:56:11.000Z | 2022-03-24T06:13:07.000Z | networkapi/api_route_map/v4/serializers.py | leopoldomauricio/GloboNetworkAPI | 3b5b2e336d9eb53b2c113977bfe466b23a50aa29 | [
"Apache-2.0"
] | 99 | 2015-04-03T01:04:46.000Z | 2021-10-03T23:24:48.000Z | networkapi/api_route_map/v4/serializers.py | shildenbrand/GloboNetworkAPI | 515d5e961456cee657c08c275faa1b69b7452719 | [
"Apache-2.0"
] | 64 | 2015-08-05T21:26:29.000Z | 2022-03-22T01:06:28.000Z | # -*- coding: utf-8 -*-
import logging
from django.db.models import get_model
from rest_framework import serializers
from networkapi.util.geral import get_app
from networkapi.util.serializers import DynamicFieldsModelSerializer
log = logging.getLogger(__name__)
| 31.065217 | 78 | 0.448216 |
ae6b38b5ef961be1df343f27398c8d975b548233 | 274 | py | Python | src/admin_panel/models.py | sahilsehgal1995/lenme-api | 65826619b039c5c4035b6e0c133c32014977489e | [
"MIT"
] | null | null | null | src/admin_panel/models.py | sahilsehgal1995/lenme-api | 65826619b039c5c4035b6e0c133c32014977489e | [
"MIT"
] | null | null | null | src/admin_panel/models.py | sahilsehgal1995/lenme-api | 65826619b039c5c4035b6e0c133c32014977489e | [
"MIT"
] | null | null | null | from src import db, BaseMixin
| 27.4 | 62 | 0.70073 |
ae6dc1d38a589fb6dfb638a55ee82b80c824df9d | 10,329 | py | Python | actions/line_mmc.py | fmariv/udt-qgis-plugin | 20cbf8889f2a2448d982c7057a4cfbe37d90d78b | [
"MIT"
] | null | null | null | actions/line_mmc.py | fmariv/udt-qgis-plugin | 20cbf8889f2a2448d982c7057a4cfbe37d90d78b | [
"MIT"
] | 2 | 2021-09-02T07:22:24.000Z | 2021-09-22T05:31:45.000Z | actions/line_mmc.py | fmariv/udt-qgis-plugin | 20cbf8889f2a2448d982c7057a4cfbe37d90d78b | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
/***************************************************************************
UDTPlugin
In this file is where the LineMMC class is defined. The main function
of this class is to run the automation process that exports the geometries
and generates the metadata of a municipal line.
***************************************************************************/
"""
import os
import numpy as np
from PyQt5.QtCore import QVariant
from qgis.core import (QgsVectorLayer,
QgsCoordinateReferenceSystem,
QgsVectorFileWriter,
QgsMessageLog,
QgsField,
QgsProject)
from ..config import *
from .adt_postgis_connection import PgADTConnection
from ..utils import *
# TODO in progress...
| 41.817814 | 127 | 0.618162 |
ae6e3ce852e6d0276690375427d2e2f3c5953dfb | 5,369 | py | Python | SentimentAnalysis.py | hoossainalik/instalyzer | 9ad7c59fba3f617801d3ec0c3ae216029ee0aece | [
"MIT"
] | null | null | null | SentimentAnalysis.py | hoossainalik/instalyzer | 9ad7c59fba3f617801d3ec0c3ae216029ee0aece | [
"MIT"
] | null | null | null | SentimentAnalysis.py | hoossainalik/instalyzer | 9ad7c59fba3f617801d3ec0c3ae216029ee0aece | [
"MIT"
] | null | null | null | """
Module: Sentiment Analysis
Author: Hussain Ali Khan
Version: 1.0.0
Last Modified: 29/11/2018 (Thursday)
"""
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
import pandas as pd
import re
import os
from emoji import UNICODE_EMOJI
import matplotlib.pyplot as plt
import seaborn as sns
def save_results_as_csv(results, fn, c_name):
results_df = pd.DataFrame(results.get_scores())
results_df['class'] = results_df[['pos', 'neg', 'neu']].idxmax(axis=1)
results_df['class'] = results_df['class'].map({'pos': 'Positive', 'neg': 'Negative', 'neu': 'Neutral'})
text_df = pd.DataFrame(results.get_data(), columns=[c_name])
final_df = text_df.join(results_df)
print(final_df)
print(final_df.describe())
pie_plot_title = "Pie Plot For Sentiments Of " + c_name + " In dataset <" + fn + ">"
final_df["class"].value_counts().plot(kind="pie", autopct='%.1f%%', figsize=(8, 8), title=pie_plot_title)
pp = sns.pairplot(final_df, hue="class", height=3)
pp.fig.suptitle("Pair Plot For Sentiments Of "+c_name+" In dataset <"+fn+">")
plt.show()
final_df.to_csv("SentimentAnalysisResults/" + fn + ".csv")
# search your emoji
# add space near your emoji
if __name__ == "__main__":
main() | 28.558511 | 109 | 0.596573 |
ae6e8bcab2c7710339f988ae2adebe63a8a6d860 | 11,100 | py | Python | deep_sort_/track.py | brjathu/PHALP | 0502c0aa515292bc70e358fe3b3ec65e63215327 | [
"MIT"
] | 45 | 2022-02-23T04:32:22.000Z | 2022-03-31T15:02:39.000Z | deep_sort_/track.py | brjathu/PHALP | 0502c0aa515292bc70e358fe3b3ec65e63215327 | [
"MIT"
] | 5 | 2022-02-23T15:08:29.000Z | 2022-03-24T19:54:55.000Z | deep_sort_/track.py | brjathu/PHALP | 0502c0aa515292bc70e358fe3b3ec65e63215327 | [
"MIT"
] | 2 | 2022-02-26T13:01:19.000Z | 2022-03-24T04:53:29.000Z | """
Modified code from https://github.com/nwojke/deep_sort
"""
import numpy as np
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F
import scipy.signal as signal
from scipy.ndimage.filters import gaussian_filter1d
| 42.692308 | 176 | 0.607748 |
ae722ba487151806c74bc4bb207a931bee8b9346 | 818 | py | Python | meiduo_mall/apps/users/utils.py | 150619/meiduo_mall_project | c0441ad744c6dd0e2962d5e734c842e237b9ad0b | [
"MIT"
] | null | null | null | meiduo_mall/apps/users/utils.py | 150619/meiduo_mall_project | c0441ad744c6dd0e2962d5e734c842e237b9ad0b | [
"MIT"
] | null | null | null | meiduo_mall/apps/users/utils.py | 150619/meiduo_mall_project | c0441ad744c6dd0e2962d5e734c842e237b9ad0b | [
"MIT"
] | null | null | null | import re
from django.contrib.auth.backends import ModelBackend
from django.contrib.auth.mixins import LoginRequiredMixin
from django.http import JsonResponse
from apps.users.models import User
| 29.214286 | 76 | 0.656479 |
ae738c094d5028bff43bc7f3386c802a9cf32a46 | 95 | py | Python | ginger/scripts/templates/app_templates/app_name/urls.py | vivsh/django-ginger | d293109becc72845a23f2aeb732ed808a7a67d69 | [
"MIT"
] | null | null | null | ginger/scripts/templates/app_templates/app_name/urls.py | vivsh/django-ginger | d293109becc72845a23f2aeb732ed808a7a67d69 | [
"MIT"
] | null | null | null | ginger/scripts/templates/app_templates/app_name/urls.py | vivsh/django-ginger | d293109becc72845a23f2aeb732ed808a7a67d69 | [
"MIT"
] | null | null | null |
from django.conf.urls import url, patterns
from . import views
urlpatterns = patterns("",
)
| 11.875 | 42 | 0.726316 |
ae73d85a5f4dcb6adc5dcd4a48bf418ba8cec3c0 | 1,184 | py | Python | ppln/gatkRealignerTargCreator.py | asalomatov/nextgen-pipeline | 4ac358050075dc40d32a1c09160e86a41f093f98 | [
"MIT"
] | 4 | 2017-08-11T21:02:35.000Z | 2020-10-29T19:49:41.000Z | ppln/gatkRealignerTargCreator.py | asalomatov/nextgen-pipeline | 4ac358050075dc40d32a1c09160e86a41f093f98 | [
"MIT"
] | null | null | null | ppln/gatkRealignerTargCreator.py | asalomatov/nextgen-pipeline | 4ac358050075dc40d32a1c09160e86a41f093f98 | [
"MIT"
] | 2 | 2017-08-18T19:40:10.000Z | 2017-08-19T03:43:07.000Z | '''
'''
import sys, subprocess
sys.path.insert(0, '/nethome/asalomatov/projects/ppln')
import logProc
ntFlag = '-nt 10'
#interval_padding = '--interval_padding 0' # bed files padded with 100bp
interval_padding = '--interval_padding 200'
read_filter = '--read_filter BadCigar'
print '\nsys.args :', sys.argv[1:]
inbam, outfile, refGenome, knownindels, tmpdir, gatk, gaps, outdir = sys.argv[1:]
cmd = 'java -Xms750m -Xmx10g -XX:+UseSerialGC -Djava.io.tmpdir=%(tmpdir)s -jar %(gatk)s -T RealignerTargetCreator -I %(inbam)s --known %(knownindels)s -o %(outfile)s -R %(refGenome)s %(ntFlag)s %(read_filter)s'
#cmd = 'java -Xms750m -Xmx2500m -XX:+UseSerialGC -Djava.io.tmpdir=%(tmpdir)s -jar %(gatk)s -T RealignerTargetCreator -I %(inbam)s -o %(outfile)s -R %(refGenome)s %(ntFlag)s %(read_filter)s -L %(inbed)s -XL %(gaps)s'
cmd = cmd % locals()
print cmd
logProc.logProc(outfile, outdir, cmd, 'started')
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
if p.returncode == 0:
logProc.logProc(outfile, outdir, cmd, 'finished')
else:
logProc.logProc(outfile, outdir, cmd, 'failed', stderr)
sys.exit(1)
| 42.285714 | 215 | 0.705236 |
ae74ea38559f52ac217bf0d17616d5da35736211 | 14,773 | py | Python | functions_baseline_opencv.py | Shiro-LK/Super-Resolution-ProbaV | e6b9d9d62caa50b84cd5bdca906af53aa1a5de8b | [
"MIT"
] | null | null | null | functions_baseline_opencv.py | Shiro-LK/Super-Resolution-ProbaV | e6b9d9d62caa50b84cd5bdca906af53aa1a5de8b | [
"MIT"
] | null | null | null | functions_baseline_opencv.py | Shiro-LK/Super-Resolution-ProbaV | e6b9d9d62caa50b84cd5bdca906af53aa1a5de8b | [
"MIT"
] | 1 | 2020-04-15T10:36:31.000Z | 2020-04-15T10:36:31.000Z | # -*- coding: utf-8 -*-
import cv2
import numpy as np
import os
import pandas as pd
import math
from skimage import io
from skimage.transform import rescale
import skimage
import numba
from numba import prange
import time
from pathlib import Path
# MAX 35 IMG
## Create TXT FILE for loading
## Load all data
## load one scene data
## METRIC FUNCTION FOR ONE SCENE
def baseline_predict_test(data, dirs = "results_baseline", interpolation=cv2.INTER_CUBIC):
num = len(data)
for i in range( num ):
LR, QM, norm = data[i]
p = Path(LR[0])
img_predict = baseline_predict_scene(LR, QM, interpolation=interpolation)
#print(img_predict.shape)
# save img
#predicted[i] = img_predict
#names[i] = p.parts[-2]
save_prediction(img_predict, p.parts[-2], directory=dirs)
#norm = import_norm_data()
#print(norm)
#
#create_data(path="data\\", normalize_data=norm)
#data_test = load_data(os.path.join("data","test.txt"), istrain=False)
#datas = load_data(os.path.join("data","train.txt"), istrain=True)
#begin = time.time()
#predict = baseline_predict(datas, istrain=True, evaluate=True, version=1)
#print(time.time()-begin)
#begin = time.time()
#baseline_predict_test(data_test)
#print(time.time()-begin)
| 34.678404 | 175 | 0.56434 |
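The baseline above upsamples each low-resolution scene via `cv2.resize(..., interpolation=cv2.INTER_CUBIC)`. Where OpenCV is unavailable, the shape of the operation can be sketched with a nearest-neighbour upscale in NumPy (factor 3 is assumed to match the PROBA-V setting; this is a stand-in for illustration, not the bicubic baseline itself):

```python
import numpy as np

def upscale_nearest(img, factor=3):
    # Stand-in for cv2.resize(img, None, fx=factor, fy=factor,
    # interpolation=cv2.INTER_CUBIC): repeat each pixel `factor`
    # times along both axes.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

lr = np.array([[1, 2],
               [3, 4]])
hr = upscale_nearest(lr, factor=3)
# hr.shape == (6, 6); hr[0] == [1, 1, 1, 2, 2, 2]
```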
ae75ff7994410f7e88a0e941f01acf2c32ca349b | 4,676 | py | Python | csvtoqbo.py | Airbitz/airbitz-ofx | 8dc9a851fc585e373611d6d8e27ae0e8540ea35b | [
"MIT"
] | 2 | 2016-01-08T20:14:21.000Z | 2018-06-15T17:58:09.000Z | csvtoqbo.py | EdgeApp/airbitz-ofx | 8dc9a851fc585e373611d6d8e27ae0e8540ea35b | [
"MIT"
] | null | null | null | csvtoqbo.py | EdgeApp/airbitz-ofx | 8dc9a851fc585e373611d6d8e27ae0e8540ea35b | [
"MIT"
] | 2 | 2016-01-08T20:14:22.000Z | 2016-03-30T19:59:48.000Z | #####################################################################
# #
# File: csvtoqbo.py #
# Developer: Paul Puey #
# Original Code by: Justin Leto #
# Forked from https://github.com/jleto/csvtoqbo #
# #
#   Main utility Python script to convert CSV files                 #
# of transactions exported from various platforms to QBO for #
# import into Quickbooks Online. #
# #
# Usage: python csvtoqbo.py <options> <csvfiles> #
# #
#####################################################################
import sys, traceback
import os
import logging
import csv
import qbo
import airbitzwallets
# If only utility script is called
if len(sys.argv) <= 1:
sys.exit("Usage: python %s <options> <csvfiles>\n"
"Where possible options include:\n"
" -btc Output bitcoin in full BTC denomination\n"
" -mbtc Output bitcoin in mBTC denomination\n"
" -bits Output bitcoin in bits (uBTC) denomination" % sys.argv[0]
)
# If help is requested
elif (sys.argv[1] == '--help'):
sys.exit("Help for %s not yet implemented." % sys.argv[0])
# Test for valid options, instantiate appropriate provider object
if sys.argv[1] == '-mbtc':
denom = 1000
elif sys.argv[1] == '-btc':
denom = 1
elif sys.argv[1] == '-bits':
denom = 1000000
myProvider = airbitzwallets.airbitzwallets()
# For each CSV file listed for conversion
for arg in sys.argv:
if sys.argv.index(arg) > 1:
try:
with open(arg[:len(arg)-3] + 'log'):
os.remove(arg[:len(arg)-3] + 'log')
except IOError:
pass
logging.basicConfig(filename=arg[:len(arg)-3] + 'log', level=logging.INFO)
logging.info("Opening '%s' CSV File" % myProvider.getName())
try:
with open(arg, 'r') as csvfile:
# Open CSV for reading
reader = csv.DictReader(csvfile, delimiter=',', quotechar='"')
#instantiate the qbo object
myQbo = None
myQbo = qbo.qbo()
txnCount = 0
for row in reader:
txnCount = txnCount+1
sdata = str(row)
#read in values from row of csv file
date_posted = myProvider.getDatePosted(myProvider,row)
txn_memo = myProvider.getTxnMemo(myProvider,row)
txn_amount = myProvider.getTxnAmount(myProvider,row)
txn_curamt = myProvider.getTxnCurAmt(myProvider,row)
txn_category = myProvider.getTxnCategory(myProvider,row)
txn_id = myProvider.getTxnId(myProvider,row)
name = myProvider.getTxnName(myProvider,row)
try:
#Add transaction to the qbo document
if myQbo.addTransaction(denom, date_posted, txn_memo, txn_id, txn_amount, txn_curamt, txn_category, name):
print('Transaction [' + str(txnCount) + '] added successfully!')
logging.info('Transaction [' + str(txnCount) + '] added successfully!')
except:
#Error adding transaction
exc_type, exc_value, exc_traceback = sys.exc_info()
lines = traceback.format_exception(exc_type, exc_value, exc_traceback)
print(''.join('!! ' + line for line in lines))
logging.info("Transaction [" + str(txnCount) + "] excluded!")
logging.info('>> Data: ' + str(sdata))
pass
except:
exc_type, exc_value, exc_traceback = sys.exc_info()
lines = traceback.format_exception(exc_type, exc_value, exc_traceback)
print(''.join('!! ' + line for line in lines))
logging.info("Trouble reading CSV file!")
# After transactions have been read, write full QBO document to file
try:
filename = arg[:len(arg)-3] + 'qbo'
if myQbo.Write('./'+ filename):
print("QBO file written successfully!")
#log successful write
logging.info("QBO file %s written successfully!" % filename)
except:
#IO Error
exc_type, exc_value, exc_traceback = sys.exc_info()
lines = traceback.format_exception(exc_type, exc_value, exc_traceback)
print(''.join('!! ' + line for line in lines))
logging.info(''.join('!! ' + line for line in lines))
| 39.965812 | 155 | 0.534217 |
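The conversion loop above reads rows with `csv.DictReader` and logs, rather than aborts on, any transaction that fails to convert. That per-row try/except pattern in isolation (the column names here are illustrative, not the Airbitz export schema):

```python
import csv
import io

raw = "txid,amount\n1,5.00\n2,oops\n3,7.25\n"

good, excluded = [], []
for row in csv.DictReader(io.StringIO(raw)):
    try:
        good.append((row["txid"], float(row["amount"])))
    except ValueError:
        # mirror csvtoqbo: record the failing row and keep going
        excluded.append(row["txid"])

# good == [('1', 5.0), ('3', 7.25)], excluded == ['2']
```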
ae76f9115d5aeaf3a667d8ad57a43721c93b13f6 | 14,613 | py | Python | api/app/models/test_geofile.py | enermaps/enermaps | bac472e60e44724be605068103d01da0320483e6 | [
"Apache-2.0"
] | 5 | 2021-09-09T13:17:09.000Z | 2022-03-31T11:57:06.000Z | api/app/models/test_geofile.py | enermaps/enermaps | bac472e60e44724be605068103d01da0320483e6 | [
"Apache-2.0"
] | 154 | 2020-10-16T09:14:00.000Z | 2022-03-31T13:31:17.000Z | api/app/models/test_geofile.py | enermaps/enermaps | bac472e60e44724be605068103d01da0320483e6 | [
"Apache-2.0"
] | 9 | 2021-05-20T15:12:54.000Z | 2022-03-15T15:39:42.000Z | import copy
import json
import os
from app.common import path
from app.common.projection import epsg_string_to_proj4
from app.common.test import BaseApiTest
from . import geofile, storage
| 34.222482 | 88 | 0.534729 |
ae776a59c5b3c2f0543df97b416e9a2ebb997432 | 488 | py | Python | mypylib/utils.py | WillJBrown/displayotron | 8fa143acc7a70c47d4a288bd81afb7241dd10b9f | [
"MIT"
] | null | null | null | mypylib/utils.py | WillJBrown/displayotron | 8fa143acc7a70c47d4a288bd81afb7241dd10b9f | [
"MIT"
] | null | null | null | mypylib/utils.py | WillJBrown/displayotron | 8fa143acc7a70c47d4a288bd81afb7241dd10b9f | [
"MIT"
] | null | null | null | import os
import sys
| 32.533333 | 75 | 0.75 |
ae777c31a89605e1d2a2a1e2bfa85ac8840eebee | 6,676 | py | Python | src/modules/base/Configuration.py | andreaswatch/piTomation | 140bff77ad0b84ad17898106c7be7dc48a2d0783 | [
"MIT"
] | null | null | null | src/modules/base/Configuration.py | andreaswatch/piTomation | 140bff77ad0b84ad17898106c7be7dc48a2d0783 | [
"MIT"
] | null | null | null | src/modules/base/Configuration.py | andreaswatch/piTomation | 140bff77ad0b84ad17898106c7be7dc48a2d0783 | [
"MIT"
] | null | null | null | '''
Basic piTomation configuration options.
'''
from pydantic import BaseModel
from typing import Any, Optional, Union
from pydantic.class_validators import validator
__pdoc__ = {
"WithPlugins": None,
"configuration": None
}
__registry: dict[type, list[type]] = {}
'''Contains all @configuration class types, key is the base type'''
def configuration(cls):
'''All configurations in the configuration file must be tagged with #@configuration, so that the __registry is aware about the classes.'''
__register(cls)
return cls
def WithPlugins(t: type):
'''Used internally to add all derivered types to a list'''
if t in __registry.keys():
classes = list(__registry[t])
return Union[tuple(classes)] # type: ignore
raise Exception("AppConfiguration must get imported after all plugins")
#@configuration
#@configuration
#@configuration
#@configuration
#@configuration
| 29.539823 | 142 | 0.687088 |
ae78cbed15f9ad137cc2fb68f470a749694330cd | 72 | py | Python | danceschool/payments/paypal/__init__.py | django-danceschool/django-danceschool | 65ae09ffdcb0821e82df0e1f634fe13c0384a525 | [
"BSD-3-Clause"
] | 32 | 2017-09-12T04:25:25.000Z | 2022-03-21T10:48:07.000Z | danceschool/payments/paypal/__init__.py | django-danceschool/django-danceschool | 65ae09ffdcb0821e82df0e1f634fe13c0384a525 | [
"BSD-3-Clause"
] | 97 | 2017-09-01T02:43:08.000Z | 2022-01-03T18:20:34.000Z | danceschool/payments/paypal/__init__.py | django-danceschool/django-danceschool | 65ae09ffdcb0821e82df0e1f634fe13c0384a525 | [
"BSD-3-Clause"
] | 19 | 2017-09-26T13:34:46.000Z | 2022-03-21T10:48:10.000Z | default_app_config = 'danceschool.payments.paypal.apps.PaypalAppConfig'
| 36 | 71 | 0.861111 |
ae7a6bf6cf0a8187540066ce63f57293b91d1b01 | 25 | py | Python | datamaps/__init__.py | fossabot/datamaps-1 | c66c3f20e43bd41ec0874f40f39bd0eff89fd476 | [
"MIT"
] | null | null | null | datamaps/__init__.py | fossabot/datamaps-1 | c66c3f20e43bd41ec0874f40f39bd0eff89fd476 | [
"MIT"
] | null | null | null | datamaps/__init__.py | fossabot/datamaps-1 | c66c3f20e43bd41ec0874f40f39bd0eff89fd476 | [
"MIT"
] | null | null | null | __version__ = "1.0.0b13"
| 12.5 | 24 | 0.68 |
ae7c6c2bff216ece676b2bf277a3dea69a26d9e6 | 2,414 | py | Python | python/opentrons_ot3_firmware/scripts/generate_header.py | Opentrons/ot3-firmware | 3047fbf54ed2bf9350a9fe02c0c0fb246ac0285a | [
"Apache-2.0"
] | 3 | 2021-09-21T13:20:27.000Z | 2021-12-02T13:12:32.000Z | python/opentrons_ot3_firmware/scripts/generate_header.py | Opentrons/ot3-firmware | 3047fbf54ed2bf9350a9fe02c0c0fb246ac0285a | [
"Apache-2.0"
] | 36 | 2021-08-10T15:18:09.000Z | 2022-03-30T19:08:13.000Z | python/opentrons_ot3_firmware/scripts/generate_header.py | Opentrons/ot3-firmware | 3047fbf54ed2bf9350a9fe02c0c0fb246ac0285a | [
"Apache-2.0"
] | null | null | null | """Script to generate c++ header file of canbus constants."""
from __future__ import annotations
import argparse
import io
from enum import Enum
from typing import Type, Any
import sys
from opentrons_ot3_firmware.constants import (
MessageId,
FunctionCode,
NodeId,
)
def generate(output: io.StringIO) -> None:
"""Generate source code into output."""
output.write("/********************************************\n")
output.write("* This is a generated file. Do not modify. *\n")
output.write("********************************************/\n")
output.write("#pragma once\n\n")
with block(
output=output,
start="namespace can_ids {\n\n",
terminate="} // namespace can_ids\n\n",
):
write_enum(FunctionCode, output)
write_enum(MessageId, output)
write_enum(NodeId, output)
def write_enum(e: Type[Enum], output: io.StringIO) -> None:
"""Generate enum class from enumeration."""
output.write(f"/** {e.__doc__} */\n")
with block(
output=output, start=f"enum class {e.__name__} {{\n", terminate="};\n\n"
):
for i in e:
output.write(f" {i.name} = 0x{i.value:x},\n")
def main() -> None:
"""Entry point."""
parser = argparse.ArgumentParser(
description="Generate a C++ header file defining CANBUS constants."
)
parser.add_argument(
"target",
metavar="TARGET",
type=argparse.FileType("w"),
default=sys.stdout,
nargs="?",
help="path of header file to generate; use - or do not specify for stdout",
)
args = parser.parse_args()
generate(args.target)
if __name__ == "__main__":
main()
| 27.431818 | 83 | 0.584921 |
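Both `generate` and `write_enum` above call a `block(...)` context manager that is missing from this fragment. Its behaviour is implied by the call sites — write `start`, run the body, write `terminate` — so a plausible reconstruction (an assumption, not the file's actual code) is:

```python
import io
from contextlib import contextmanager

@contextmanager
def block(output, start, terminate):
    # Assumed shape of the truncated helper: emit the opening text,
    # yield so the caller writes the body, then emit the close.
    output.write(start)
    yield
    output.write(terminate)

buf = io.StringIO()
with block(buf, start="enum class Demo {\n", terminate="};\n"):
    buf.write("  first = 0x0,\n")

# buf.getvalue() ==
# "enum class Demo {\n  first = 0x0,\n};\n"
```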
ae81bb11bf5eb162ed9c0bef3b103ae5f25903e5 | 772 | py | Python | icekit/response_pages/migrations/0001_initial.py | ic-labs/django-icekit | c507ea5b1864303732c53ad7c5800571fca5fa94 | [
"MIT"
] | 52 | 2016-09-13T03:50:58.000Z | 2022-02-23T16:25:08.000Z | icekit/response_pages/migrations/0001_initial.py | ic-labs/django-icekit | c507ea5b1864303732c53ad7c5800571fca5fa94 | [
"MIT"
] | 304 | 2016-08-11T14:17:30.000Z | 2020-07-22T13:35:18.000Z | icekit/response_pages/migrations/0001_initial.py | ic-labs/django-icekit | c507ea5b1864303732c53ad7c5800571fca5fa94 | [
"MIT"
] | 12 | 2016-09-21T18:46:35.000Z | 2021-02-15T19:37:50.000Z | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
| 29.692308 | 143 | 0.563472 |
ae81fe56f7386088702aa7258803c69277db7d71 | 5,495 | py | Python | tests/cancer.py | old-rob/cptac | 9b33893dd11c9320628a751c8840783a6ce81957 | [
"Apache-2.0"
] | null | null | null | tests/cancer.py | old-rob/cptac | 9b33893dd11c9320628a751c8840783a6ce81957 | [
"Apache-2.0"
] | null | null | null | tests/cancer.py | old-rob/cptac | 9b33893dd11c9320628a751c8840783a6ce81957 | [
"Apache-2.0"
] | null | null | null | # Copyright 2018 Samuel Payne sam_payne@byu.edu
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The purpose of this class is to organize a cancer object's datasets by
# type. dataset.py in the cptac package defines a lot of methods and members
# but there is no built-in way to call them in batches by type for testing.
import pytest
| 33.10241 | 89 | 0.593267 |
ae836b13e8e1a6b59a6fe8580af78a2b08d03bc1 | 1,958 | py | Python | mobula/solvers/LRUpdater.py | wkcn/mobula | 4eec938d6477776f5f2d68bcf41de83fb8da5195 | [
"MIT"
] | 47 | 2017-07-15T02:13:18.000Z | 2022-01-01T09:37:59.000Z | mobula/solvers/LRUpdater.py | wkcn/mobula | 4eec938d6477776f5f2d68bcf41de83fb8da5195 | [
"MIT"
] | 3 | 2018-06-22T13:55:12.000Z | 2020-01-29T01:41:13.000Z | mobula/solvers/LRUpdater.py | wkcn/mobula | 4eec938d6477776f5f2d68bcf41de83fb8da5195 | [
"MIT"
] | 8 | 2017-09-03T12:42:54.000Z | 2020-09-27T03:38:59.000Z | #coding=utf-8
import numpy as np
# TODO: MULTISTEP
LR_POLICY_NUM = 7
LRUpdater.METHODS = [None] * LR_POLICY_NUM
LRUpdater.METHODS[LR_POLICY.FIXED] = LRUpdater.fixed
LRUpdater.METHODS[LR_POLICY.STEP] = LRUpdater.step
LRUpdater.METHODS[LR_POLICY.EXP] = LRUpdater.exp
LRUpdater.METHODS[LR_POLICY.INV] = LRUpdater.inv
LRUpdater.METHODS[LR_POLICY.POLY] = LRUpdater.poly
LRUpdater.METHODS[LR_POLICY.SIGMOID] = LRUpdater.sigmoid
| 37.653846 | 94 | 0.650664 |
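The `LRUpdater.METHODS` table above dispatches on `LR_POLICY` constants whose method bodies are stripped from this fragment. The policy names match Caffe's classic learning-rate schedules, whose usual formulas (assumed here, not recovered from mobula's code) look like:

```python
import math

def lr_at(policy, base_lr, it, gamma=0.1, power=1.0,
          stepsize=100, max_iter=1000):
    # Classic Caffe-style schedules for the fixed/step/exp/inv/
    # poly/sigmoid policy names (formulas are assumptions).
    if policy == "fixed":
        return base_lr
    if policy == "step":
        return base_lr * gamma ** math.floor(it / stepsize)
    if policy == "exp":
        return base_lr * gamma ** it
    if policy == "inv":
        return base_lr * (1 + gamma * it) ** -power
    if policy == "poly":
        return base_lr * (1 - it / max_iter) ** power
    if policy == "sigmoid":
        return base_lr / (1 + math.exp(-gamma * (it - stepsize)))
    raise ValueError(policy)
```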
ae83889dc0e1e2a10d944afb86e01b0c15293029 | 5,098 | py | Python | code/mlflow.py | michaelhball/ml_tidbits | 55b77fded5f31cd280f043c8aa792a07ca572170 | [
"MIT"
] | 1 | 2021-04-15T19:42:51.000Z | 2021-04-15T19:42:51.000Z | code/mlflow.py | michaelhball/ml_toolshed | 55b77fded5f31cd280f043c8aa792a07ca572170 | [
"MIT"
] | null | null | null | code/mlflow.py | michaelhball/ml_toolshed | 55b77fded5f31cd280f043c8aa792a07ca572170 | [
"MIT"
] | null | null | null | import git
from mlflow.tracking import MlflowClient
from .utils import scp_files
| 41.112903 | 113 | 0.632797 |
ae84bc9755e4432da8e4dc0549c028ec150a10c7 | 4,215 | py | Python | infinite_nature/autocruise.py | DionysisChristopoulos/google-research | 7f59ef421beef32ca16c2a7215be74f7eba01a0f | [
"Apache-2.0"
] | 23,901 | 2018-10-04T19:48:53.000Z | 2022-03-31T21:27:42.000Z | infinite_nature/autocruise.py | davidfitzek/google-research | eb2b142f26e39aac1dcbb768417465ae9d4e5af6 | [
"Apache-2.0"
] | 891 | 2018-11-10T06:16:13.000Z | 2022-03-31T10:42:34.000Z | infinite_nature/autocruise.py | davidfitzek/google-research | eb2b142f26e39aac1dcbb768417465ae9d4e5af6 | [
"Apache-2.0"
] | 6,047 | 2018-10-12T06:31:02.000Z | 2022-03-31T13:59:28.000Z | # coding=utf-8
# Copyright 2021 The Google Research Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Uses a heuristic to automatically navigate generated scenes.
fly_camera.fly_dynamic will generate poses using disparity maps that avoid
crashing into nearby terrain.
"""
import pickle
import time
import config
import fly_camera
import imageio
import infinite_nature_lib
import numpy as np
import tensorflow as tf
tf.compat.v1.flags.DEFINE_string(
"output_folder", "autocruise_output",
"Folder to save autocruise results")
tf.compat.v1.flags.DEFINE_integer(
"num_steps", 500,
"Number of steps to fly.")
FLAGS = tf.compat.v1.flags.FLAGS
def generate_autocruise(np_input_rgbd, checkpoint,
save_directory, num_steps, np_input_intrinsics=None):
"""Saves num_steps frames of infinite nature using an autocruise algorithm.
Args:
np_input_rgbd: [H, W, 4] numpy image and disparity to start
Infinite Nature with values ranging in [0, 1]
checkpoint: (str) path to the pre-trained checkpoint
save_directory: (str) the directory to save RGB images to
num_steps: (int) the number of steps to generate
np_input_intrinsics: [4] estimated intrinsics. If not provided,
makes assumptions on the FOV.
"""
render_refine, style_encoding = infinite_nature_lib.load_model(checkpoint)
if np_input_intrinsics is None:
# 0.8 focal_x corresponds to a FOV of ~64 degrees. This can be
  # manually changed if more assumptions about the input image are given.
h, w, unused_channel = np_input_rgbd.shape
ratio = w / float(h)
np_input_intrinsics = np.array([0.8, 0.8 * ratio, .5, .5], dtype=np.float32)
np_input_rgbd = tf.image.resize(np_input_rgbd, [160, 256])
style_noise = style_encoding(np_input_rgbd)
meander_x_period = 100
meander_y_period = 100
meander_x_magnitude = 0.0
meander_y_magnitude = 0.0
fly_speed = 0.2
horizon = 0.3
near_fraction = 0.2
starting_pose = np.array(
[[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]],
dtype=np.float32)
  # autocruise heuristic function
fly_next_pose_function = fly_camera.fly_dynamic(
np_input_intrinsics, starting_pose,
speed=fly_speed,
meander_x_period=meander_x_period,
meander_x_magnitude=meander_x_magnitude,
meander_y_period=meander_y_period,
meander_y_magnitude=meander_y_magnitude,
horizon=horizon,
near_fraction=near_fraction)
if not tf.io.gfile.exists(save_directory):
tf.io.gfile.makedirs(save_directory)
curr_pose = starting_pose
curr_rgbd = np_input_rgbd
t0 = time.time()
for i in range(num_steps - 1):
next_pose = fly_next_pose_function(curr_rgbd)
curr_rgbd = render_refine(
curr_rgbd, style_noise, curr_pose, np_input_intrinsics,
next_pose, np_input_intrinsics)
# Update pose information for view.
curr_pose = next_pose
imageio.imsave("%s/%04d.png" % (save_directory, i),
(255 * curr_rgbd[:, :, :3]).astype(np.uint8))
if i % 100 == 0:
print("%d / %d frames generated" % (i, num_steps))
print("time / step: %04f" % ((time.time() - t0) / (i + 1)))
print()
if __name__ == "__main__":
tf.compat.v1.enable_eager_execution()
tf.compat.v1.app.run(main)
| 33.452381 | 80 | 0.704152 |
ae84e004def4ae5d171603fde9ae436d07658e06 | 1,675 | py | Python | tilequeue/queue/file.py | ducdk90/tilequeue | c664b5c89a9f0e6743405ab266aa9ca80b57806e | [
"MIT"
] | 29 | 2016-11-03T18:39:21.000Z | 2022-02-27T17:42:37.000Z | tilequeue/queue/file.py | ducdk90/tilequeue | c664b5c89a9f0e6743405ab266aa9ca80b57806e | [
"MIT"
] | 146 | 2016-07-07T16:41:07.000Z | 2021-12-11T00:27:20.000Z | tilequeue/queue/file.py | ducdk90/tilequeue | c664b5c89a9f0e6743405ab266aa9ca80b57806e | [
"MIT"
] | 28 | 2016-08-19T16:08:52.000Z | 2021-07-26T10:16:29.000Z | from tilequeue.queue import MessageHandle
import threading
| 28.389831 | 79 | 0.590448 |
ae858354ab4f1914f4dfc11dd1d64a5507769f1b | 563 | py | Python | app/gui/repeater.py | TomVollerthun1337/logsmith | f2ecab4dea295d5493a9a3e77a2837b13fa139e5 | [
"Apache-2.0"
] | 19 | 2020-01-18T00:25:43.000Z | 2022-03-14T07:39:08.000Z | app/gui/repeater.py | TomVollerthun1337/logsmith | f2ecab4dea295d5493a9a3e77a2837b13fa139e5 | [
"Apache-2.0"
] | 85 | 2020-01-21T12:13:56.000Z | 2022-03-31T04:01:03.000Z | app/gui/repeater.py | TomVollerthun1337/logsmith | f2ecab4dea295d5493a9a3e77a2837b13fa139e5 | [
"Apache-2.0"
] | 2 | 2020-06-25T06:15:19.000Z | 2021-02-15T18:17:38.000Z | import logging
from PyQt5.QtCore import QTimer
logger = logging.getLogger('logsmith')
| 21.653846 | 44 | 0.635879 |
ae867f0e402cb89db3cccc626cd6f645b33f32f2 | 40 | py | Python | condensate/core/__init__.py | Zwierlein/condensate | 34908b7e99785e9a4a9c5c743fe1a8e6f4cbf4ad | [
"MIT"
] | 4 | 2021-07-24T10:57:06.000Z | 2021-12-11T01:24:54.000Z | condensate/core/__init__.py | Zwierlein/condensate | 34908b7e99785e9a4a9c5c743fe1a8e6f4cbf4ad | [
"MIT"
] | 9 | 2021-07-15T04:13:23.000Z | 2021-08-02T21:57:00.000Z | condensate/core/__init__.py | Zwierlein/condensate | 34908b7e99785e9a4a9c5c743fe1a8e6f4cbf4ad | [
"MIT"
] | 2 | 2021-07-21T10:39:30.000Z | 2021-08-01T03:05:14.000Z | from condensate.core.build import gpcore | 40 | 40 | 0.875 |
ae874a5f5cca2dcc55151c5b0e06fba1846032d7 | 250 | py | Python | urdubiometer/scanner/__init__.py | urdubiometer/urdubiometer | 034c1efc0403352caa9c5c944cf9450b8833bb24 | [
"BSD-3-Clause"
] | null | null | null | urdubiometer/scanner/__init__.py | urdubiometer/urdubiometer | 034c1efc0403352caa9c5c944cf9450b8833bb24 | [
"BSD-3-Clause"
] | 220 | 2019-07-30T19:20:59.000Z | 2022-03-28T10:33:19.000Z | urdubiometer/scanner/__init__.py | urdubiometer/urdubiometer | 034c1efc0403352caa9c5c944cf9450b8833bb24 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""Sub-level package for Scanner, a metrical scanner in Urdu BioMeter."""
__author__ = """A. Sean Pue"""
__email__ = "a@seanpue.com"
from .scanner import * # noqa
from .ghazal import * # noqa
from .types import * # noqa
| 22.727273 | 73 | 0.652 |
ae87c58fe103e3173ad0eb9f9da060726e492203 | 346 | py | Python | search01.py | kekelele/canlib-for-kvaser | 5f7f55319a33956de0bb9a1376ee7fbf897b8c4b | [
"Apache-2.0"
] | null | null | null | search01.py | kekelele/canlib-for-kvaser | 5f7f55319a33956de0bb9a1376ee7fbf897b8c4b | [
"Apache-2.0"
] | null | null | null | search01.py | kekelele/canlib-for-kvaser | 5f7f55319a33956de0bb9a1376ee7fbf897b8c4b | [
"Apache-2.0"
] | 1 | 2019-08-16T04:25:44.000Z | 2019-08-16T04:25:44.000Z | from canlib import canlib
num_channels = canlib.getNumberOfChannels()
print("Found %d channels" % num_channels)
for ch in range(0, num_channels):
chdata = canlib.ChannelData(ch)
print("%d. %s (%s / %s)" % (ch, chdata.device_name,
chdata.card_upc_no,
chdata.card_serial_no)) | 38.444444 | 55 | 0.592486 |
ae882f531080b30ee1443e5c07ad6e5e57ec1e85 | 2,208 | py | Python | src/assisted_test_infra/test_infra/helper_classes/config/base_config.py | nirarg/assisted-test-infra | e07c43501c1d9bfaa1aee3aea49f1ef359faee07 | [
"Apache-2.0"
] | null | null | null | src/assisted_test_infra/test_infra/helper_classes/config/base_config.py | nirarg/assisted-test-infra | e07c43501c1d9bfaa1aee3aea49f1ef359faee07 | [
"Apache-2.0"
] | null | null | null | src/assisted_test_infra/test_infra/helper_classes/config/base_config.py | nirarg/assisted-test-infra | e07c43501c1d9bfaa1aee3aea49f1ef359faee07 | [
"Apache-2.0"
] | null | null | null | from abc import ABC, abstractmethod
from dataclasses import asdict, dataclass
from typing import Any
from triggers.env_trigger import DataPool, Triggerable
| 30.666667 | 89 | 0.613225 |
ae8a201243a94cc44dd5cdf663f89bc62c36cf5a | 917 | py | Python | licensecheck/types.py | matthewdeanmartin/LicenseCheck | 54063d10d2033adc77fe12ddac6c0ced1a5e6502 | [
"MIT"
] | null | null | null | licensecheck/types.py | matthewdeanmartin/LicenseCheck | 54063d10d2033adc77fe12ddac6c0ced1a5e6502 | [
"MIT"
] | null | null | null | licensecheck/types.py | matthewdeanmartin/LicenseCheck | 54063d10d2033adc77fe12ddac6c0ced1a5e6502 | [
"MIT"
] | null | null | null | """PackageCompat type.
"""
from __future__ import annotations
import typing
from enum import Enum
| 14.107692 | 56 | 0.687023 |
ae8a6e4bdddcbc9fac409eabb59750fe2825a857 | 3,785 | py | Python | exp/ground/infonce_acc_plot/plot.py | ChopinSharp/info-ground | 12fba3c478b806f2fe068faac81237fd0f458b80 | [
"Apache-2.0"
] | 56 | 2020-09-21T07:41:08.000Z | 2022-01-10T13:28:36.000Z | exp/ground/infonce_acc_plot/plot.py | ChopinSharp/info-ground | 12fba3c478b806f2fe068faac81237fd0f458b80 | [
"Apache-2.0"
] | 5 | 2020-08-26T15:50:29.000Z | 2022-01-04T07:53:07.000Z | exp/ground/infonce_acc_plot/plot.py | ChopinSharp/info-ground | 12fba3c478b806f2fe068faac81237fd0f458b80 | [
"Apache-2.0"
] | 15 | 2020-08-24T16:36:20.000Z | 2022-01-17T12:51:45.000Z | import os
import numpy as np
import matplotlib.pyplot as plt
import utils.io as io
from global_constants import misc_paths
if __name__=='__main__':
main() | 33.495575 | 115 | 0.622985 |
ae8ae6c3af80fec8fe0b9de9dbde6389fdebdfe5 | 889 | py | Python | sciwx/demo/canvas4_tool.py | Pad0y/imagepy | 23f41b64ade02f94b566b0d23a4b6459c1a1578d | [
"BSD-4-Clause"
] | null | null | null | sciwx/demo/canvas4_tool.py | Pad0y/imagepy | 23f41b64ade02f94b566b0d23a4b6459c1a1578d | [
"BSD-4-Clause"
] | null | null | null | sciwx/demo/canvas4_tool.py | Pad0y/imagepy | 23f41b64ade02f94b566b0d23a4b6459c1a1578d | [
"BSD-4-Clause"
] | null | null | null | import sys
sys.path.append("../../")
from skimage.data import astronaut, camera
from sciwx.canvas import ICanvas
from sciapp.action import Tool
import wx
if __name__ == "__main__":
app = wx.App()
frame = wx.Frame(None)
canvas = ICanvas(frame, autofit=True)
canvas.set_img(camera())
canvas.set_tool(TestTool())
frame.Show()
app.MainLoop()
| 22.794872 | 64 | 0.574803 |
ae8b36f89eab35825f3909abeb288a05a078f59a | 5,416 | py | Python | fairness/app.py | Tomcli/ffdl-knative | b68edaaa1717ac34c946e25d24198590012b0e20 | [
"Apache-2.0"
] | 2 | 2019-01-18T16:10:50.000Z | 2019-10-24T11:42:31.000Z | fairness/app.py | Tomcli/ffdl-knative | b68edaaa1717ac34c946e25d24198590012b0e20 | [
"Apache-2.0"
] | null | null | null | fairness/app.py | Tomcli/ffdl-knative | b68edaaa1717ac34c946e25d24198590012b0e20 | [
"Apache-2.0"
] | null | null | null | import os
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import ClassificationMetric
import numpy as np
import argparse
import pandas as pd
import boto3
import botocore
import json
from flask import Flask, request, abort
from flask_cors import CORS
app = Flask(__name__)
CORS(app)
def dataset_wrapper(outcome, protected, unprivileged_groups, privileged_groups, favorable_label, unfavorable_label):
""" A wrapper function to create aif360 dataset from outcome and protected in numpy array format.
"""
df = pd.DataFrame(data=outcome,
columns=['outcome'])
df['race'] = protected
dataset = BinaryLabelDataset(favorable_label=favorable_label,
unfavorable_label=unfavorable_label,
df=df,
label_names=['outcome'],
protected_attribute_names=['race'],
unprivileged_protected_attributes=unprivileged_groups)
return dataset
if __name__ == "__main__":
app.run(debug=True,host='0.0.0.0',port=int(os.environ.get('PORT', 8080)))
| 41.984496 | 116 | 0.636263 |
ae8cd09d05a4a07cf96b19a5f1e50745c23583f0 | 1,352 | py | Python | socialserver/conftest.py | niallasher/socialserver-neo | 7e7d25d939133d149b56ccd54fbfa62d75cabb73 | [
"MIT"
] | null | null | null | socialserver/conftest.py | niallasher/socialserver-neo | 7e7d25d939133d149b56ccd54fbfa62d75cabb73 | [
"MIT"
] | 11 | 2022-03-10T04:55:09.000Z | 2022-03-30T14:24:19.000Z | socialserver/conftest.py | niallasher/socialserver-neo | 7e7d25d939133d149b56ccd54fbfa62d75cabb73 | [
"MIT"
] | null | null | null | # Copyright (c) Niall Asher 2022
from os import remove, path, mkdir
from socialserver.util.output import console
from socialserver.util.config import config
from socialserver import application
from werkzeug.serving import make_server
from threading import Thread
application_thread = TestingServer(application)
| 28.765957 | 87 | 0.713018 |
ae8d4fb7a13c8900171e04c43ef88541458c98e4 | 484 | py | Python | wildlifecompliance/migrations/0258_auto_20190717_1102.py | preranaandure/wildlifecompliance | bc19575f7bccf7e19adadbbaf5d3eda1d1aee4b5 | [
"Apache-2.0"
] | 1 | 2020-12-07T17:12:40.000Z | 2020-12-07T17:12:40.000Z | wildlifecompliance/migrations/0258_auto_20190717_1102.py | preranaandure/wildlifecompliance | bc19575f7bccf7e19adadbbaf5d3eda1d1aee4b5 | [
"Apache-2.0"
] | 14 | 2020-01-08T08:08:26.000Z | 2021-03-19T22:59:46.000Z | wildlifecompliance/migrations/0258_auto_20190717_1102.py | preranaandure/wildlifecompliance | bc19575f7bccf7e19adadbbaf5d3eda1d1aee4b5 | [
"Apache-2.0"
] | 15 | 2020-01-08T08:02:28.000Z | 2021-11-03T06:48:32.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2019-07-17 03:02
from __future__ import unicode_literals
from django.db import migrations, models
| 23.047619 | 58 | 0.636364 |
ae8f4daece742a4c95381dd42af1f242bb79321d | 1,739 | py | Python | trseeker/models/chromosome_model.py | ad3002/Lyrebird | 8c0a186e32d61189f073401152c52a89bfed46ed | [
"MIT"
] | null | null | null | trseeker/models/chromosome_model.py | ad3002/Lyrebird | 8c0a186e32d61189f073401152c52a89bfed46ed | [
"MIT"
] | null | null | null | trseeker/models/chromosome_model.py | ad3002/Lyrebird | 8c0a186e32d61189f073401152c52a89bfed46ed | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
#@created: 08.09.2011
#@author: Aleksey Komissarov
#@contact: ad3002@gmail.com
from PyExp import AbstractModel
| 28.048387 | 87 | 0.496262 |
ae9051d6cbac2dfdb6778875e7c06650eadb0b18 | 448 | py | Python | config.py | richardpanda/todo-api | 40ee3cc3fa96fc58fa6721e92a057c01ac938273 | [
"MIT"
] | null | null | null | config.py | richardpanda/todo-api | 40ee3cc3fa96fc58fa6721e92a057c01ac938273 | [
"MIT"
] | null | null | null | config.py | richardpanda/todo-api | 40ee3cc3fa96fc58fa6721e92a057c01ac938273 | [
"MIT"
] | null | null | null | import os
from dotenv import load_dotenv
load_dotenv()
| 20.363636 | 70 | 0.683036 |
ae913a111fa67bd457b4fc79bdf05c3e30106229 | 724 | py | Python | src/DevicesAPP/migrations/0003_auto_20180418_1607.py | mizamae/HomeAutomation | 8c462ee4c31c1fea6792cb19af66a4d2cf7bb2ca | [
"MIT"
] | null | null | null | src/DevicesAPP/migrations/0003_auto_20180418_1607.py | mizamae/HomeAutomation | 8c462ee4c31c1fea6792cb19af66a4d2cf7bb2ca | [
"MIT"
] | 9 | 2017-11-21T15:45:18.000Z | 2022-02-11T03:37:54.000Z | src/DevicesAPP/migrations/0003_auto_20180418_1607.py | mizamae/HomeAutomation | 8c462ee4c31c1fea6792cb19af66a4d2cf7bb2ca | [
"MIT"
] | 1 | 2020-07-22T02:24:17.000Z | 2020-07-22T02:24:17.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.11.4 on 2018-04-18 14:07
from __future__ import unicode_literals
from django.db import migrations
| 36.2 | 314 | 0.683702 |
ae91c071e030f6f6416a98aa6827070d9d7478ef | 7,395 | py | Python | backtoshops/notifs/views.py | RaphaelPrevost/Back2Shops | 5f2d369e82fe2a7b9b3a6c55782319b23d142dfd | [
"CECILL-B"
] | null | null | null | backtoshops/notifs/views.py | RaphaelPrevost/Back2Shops | 5f2d369e82fe2a7b9b3a6c55782319b23d142dfd | [
"CECILL-B"
] | 6 | 2021-03-31T19:21:50.000Z | 2022-01-13T01:46:09.000Z | backtoshops/notifs/views.py | RaphaelPrevost/Back2Shops | 5f2d369e82fe2a7b9b3a6c55782319b23d142dfd | [
"CECILL-B"
] | null | null | null | # -*- coding: utf-8 -*-
#############################################################################
#
# Copyright Dragon Dollar Limited
# contact: contact@dragondollar.com
#
# This software is a collection of webservices designed to provide a secure
# and scalable framework to build e-commerce websites.
#
# This software is governed by the CeCILL-B license under French law and
# abiding by the rules of distribution of free software. You can use,
# modify and/ or redistribute the software under the terms of the CeCILL-B
# license as circulated by CEA, CNRS and INRIA at the following URL
# " http://www.cecill.info".
#
# As a counterpart to the access to the source code and rights to copy,
# modify and redistribute granted by the license, users are provided only
# with a limited warranty and the software's author, the holder of the
# economic rights, and the successive licensors have only limited
# liability.
#
# In this respect, the user's attention is drawn to the risks associated
# with loading, using, modifying and/or developing or reproducing the
# software by the user in light of its specific status of free software,
# that may mean that it is complicated to manipulate, and that also
# therefore means that it is reserved for developers and experienced
# professionals having in-depth computer knowledge. Users are therefore
# encouraged to load and test the software's suitability as regards their
# requirements in conditions enabling the security of their systems and/or
# data to be ensured and, more generally, to use and operate it in the
# same conditions as regards security.
#
# The fact that you are presently reading this means that you have had
# knowledge of the CeCILL-B license and that you accept its terms.
#
#############################################################################
import settings
import json
from django.core.paginator import Paginator, EmptyPage, InvalidPage
from django.core.urlresolvers import reverse
from django.http import HttpResponse, HttpResponseBadRequest
from django.views.generic import View, ListView
from django.views.generic.edit import CreateView, DeleteView, UpdateView
from django.views.generic.base import TemplateResponseMixin
from sorl.thumbnail import get_thumbnail
from fouillis.views import AdminLoginRequiredMixin
from notifs.forms import NotifForm
from notifs.models import Notif, NotifTemplateImage
| 36.975 | 85 | 0.652738 |