section | filename | text |
|---|---|---|
natsort | ns_enum | # -*- coding: utf-8 -*-
"""This module defines the "ns" enum for natsort."""
from __future__ import absolute_import, division, print_function, unicode_literals
class ns(object):
"""
Enum to control the `natsort` algorithm.
This class acts like an enum to control the `natsort` algorithm. The
user may select several options simultaneously by or'ing the options
together. For example, to choose ``ns.INT``, ``ns.PATH``, and
``ns.LOCALE``, you could do ``ns.INT | ns.LOCALE | ns.PATH``. Each
function in the :mod:`natsort` package has an `alg` option that accepts
this enum to allow fine control over how your input is sorted.
Each option has a shortened 1- or 2-letter form.
.. note:: Please read :ref:`locale_issues` before using ``ns.LOCALE``.
Attributes
----------
INT, I (default)
The default - parse numbers as integers.
FLOAT, F
Tell `natsort` to parse numbers as floats.
UNSIGNED, U (default)
Tell `natsort` to ignore any sign (i.e. "-" or "+") to the immediate
left of a number. This is the default.
SIGNED, S
Tell `natsort` to take into account any sign (i.e. "-" or "+")
to the immediate left of a number.
REAL, R
This is a shortcut for ``ns.FLOAT | ns.SIGNED``, which is useful
when attempting to sort real numbers.
NOEXP, N
Tell `natsort` to not search for exponents as part of a float number.
For example, with `NOEXP` the number "5.6E5" would be interpreted
as `5.6`, `"E"`, and `5` instead of `560000`.
NUMAFTER, NA
Tell `natsort` to sort numbers after non-numbers. By default
numbers will be ordered before non-numbers.
PATH, P
Tell `natsort` to interpret strings as filesystem paths, so they
will be split according to the filesystem separator
(i.e. '/' on UNIX, '\\' on Windows), as well as splitting on the
file extension, if any. Without this, lists of file paths like
``['Folder/', 'Folder (1)/', 'Folder (10)/']`` will not be
sorted properly; 'Folder/' will be placed at the end, not at the
front. It is the same as setting the old `as_path` option to
`True`.
COMPATIBILITYNORMALIZE, CN
Use the "NFKD" unicode normalization form on input rather than the
default "NFD". This will transform characters such as '⑦' into
'7'. Please see https://stackoverflow.com/a/7934397/1399279,
https://stackoverflow.com/a/7931547/1399279,
and http://unicode.org/reports/tr15/ for full details on Unicode
normalization.
LOCALE, L
Tell `natsort` to be locale-aware when sorting. This includes both
proper sorting of alphabetical characters as well as proper
handling of locale-dependent decimal separators and thousands
separators. This is a shortcut for
``ns.LOCALEALPHA | ns.LOCALENUM``.
Your sorting results will vary depending on your current locale.
LOCALEALPHA, LA
Tell `natsort` to be locale-aware when sorting, but only for
alphabetical characters.
LOCALENUM, LN
Tell `natsort` to be locale-aware when sorting, but only for
decimal separators and thousands separators.
IGNORECASE, IC
Tell `natsort` to ignore case when sorting. For example,
``['Banana', 'apple', 'banana', 'Apple']`` would be sorted as
``['apple', 'Apple', 'Banana', 'banana']``.
LOWERCASEFIRST, LF
Tell `natsort` to put lowercase letters before uppercase letters
when sorting. For example,
``['Banana', 'apple', 'banana', 'Apple']`` would be sorted as
``['apple', 'banana', 'Apple', 'Banana']`` (the default order
would be ``['Apple', 'Banana', 'apple', 'banana']`` which is
the order from a purely ordinal sort).
Useless when used with `IGNORECASE`. Please note that if used
with ``LOCALE``, this actually has the reverse effect and will
put uppercase first (this is because ``LOCALE`` already puts
lowercase first); you may use this to your advantage if you
need to modify the order returned with ``LOCALE``.
GROUPLETTERS, G
Tell `natsort` to group lowercase and uppercase letters together
when sorting. For example,
``['Banana', 'apple', 'banana', 'Apple']`` would be sorted as
``['Apple', 'apple', 'Banana', 'banana']``.
Useless when used with `IGNORECASE`; use with `LOWERCASEFIRST`
to reverse the order of upper and lower case. Generally not
needed with `LOCALE`.
CAPITALFIRST, C
Only used when `LOCALE` is enabled. Tell `natsort` to put all
capitalized words before non-capitalized words. This is essentially
the inverse of `GROUPLETTERS`, and is the default Python sorting
behavior without `LOCALE`.
UNGROUPLETTERS, UG
An alias for `CAPITALFIRST`.
NANLAST, NL
If a NaN shows up in the input, this instructs `natsort` to
treat it as +Infinity and place it after all the other numbers.
By default, a NaN is treated as -Infinity and placed first.
TYPESAFE, T
Deprecated as of `natsort` version 5.0.0; this option is now
a no-op because it is always true.
VERSION, V
Deprecated as of `natsort` version 5.0.0; this option is now
a no-op because it is the default.
DIGIT, D
Same as `VERSION` above.
Notes
-----
If you prefer to use `import natsort as ns` as opposed to
`from natsort import natsorted, ns`, the `ns` options are
available as top-level imports.
>>> import natsort as ns
>>> a = ['num5.10', 'num-3', 'num5.3', 'num2']
>>> ns.natsorted(a, alg=ns.REAL) == ns.natsorted(a, alg=ns.ns.REAL)
True
"""
# The following were previously options but are now the defaults.
TYPESAFE = T = 0
INT = I = 0
VERSION = V = 0
DIGIT = D = 0
UNSIGNED = U = 0
# The below are options. The values are stored as powers of two
# so bitmasks can be used to extract the user's requested options.
FLOAT = F = 1 << 0
SIGNED = S = 1 << 1
REAL = R = FLOAT | SIGNED
NOEXP = N = 1 << 2
PATH = P = 1 << 3
LOCALEALPHA = LA = 1 << 4
LOCALENUM = LN = 1 << 5
LOCALE = L = LOCALEALPHA | LOCALENUM
IGNORECASE = IC = 1 << 6
LOWERCASEFIRST = LF = 1 << 7
GROUPLETTERS = G = 1 << 8
UNGROUPLETTERS = UG = 1 << 9
CAPITALFIRST = C = UNGROUPLETTERS
NANLAST = NL = 1 << 10
COMPATIBILITYNORMALIZE = CN = 1 << 11
NUMAFTER = NA = 1 << 12
# The below are private options for internal use only.
_NUMERIC_ONLY = REAL | NOEXP
_DUMB = 1 << 31
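Because the option values above are powers of two, a combined `alg` value can be built with bitwise or and decomposed with bitwise and. A minimal, self-contained sketch of that pattern (the names mirror the enum, but this is an illustration, not the natsort module itself):

```python
# Power-of-two flags, mirroring the ns enum above.
FLOAT = 1 << 0
SIGNED = 1 << 1
PATH = 1 << 3
IGNORECASE = 1 << 6
REAL = FLOAT | SIGNED  # shortcut flags are just pre-or'ed combinations

# A user selects several options at once by or'ing them together.
alg = REAL | PATH

# The library can then test each option with a bitmask.
assert alg & FLOAT       # FLOAT was requested
assert alg & SIGNED      # SIGNED too, implied by REAL
assert not alg & IGNORECASE  # IGNORECASE was not
```

This is why options like `TYPESAFE` can be kept as no-op zeros: or'ing zero into `alg` changes nothing, so old user code keeps working.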
|
TechDrawTools | CommandMoveView | # ***************************************************************************
# * Copyright (c) 2022 Wanderer Fan <wandererfan@gmail.com> *
# * *
# * This program is free software; you can redistribute it and/or modify *
# * it under the terms of the GNU Lesser General Public License (LGPL) *
# * as published by the Free Software Foundation; either version 2 of *
# * the License, or (at your option) any later version. *
# * for detail see the LICENCE text file. *
# * *
# * This program is distributed in the hope that it will be useful, *
# * but WITHOUT ANY WARRANTY; without even the implied warranty of *
# * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
# * GNU Library General Public License for more details. *
# * *
# * You should have received a copy of the GNU Library General Public *
# * License along with this program; if not, write to the Free Software *
# * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 *
# * USA *
# * *
# ***************************************************************************
"""Provides the TechDraw MoveView GuiCommand."""
__title__ = "TechDrawTools.CommandMoveView"
__author__ = "WandererFan"
__url__ = "https://www.freecad.org"
__version__ = "00.01"
__date__ = "2022/01/11"
import FreeCAD as App
import FreeCADGui as Gui
import TechDrawTools
from PySide.QtCore import QT_TRANSLATE_NOOP
class CommandMoveView:
"""Moves a View from current Page to a different Page."""
def __init__(self):
"""Initialize variables for the command that must exist at all times."""
pass
def GetResources(self):
"""Return a dictionary with data that will be used by the button or menu item."""
return {
"Pixmap": "actions/TechDraw_MoveView.svg",
"Accel": "",
"MenuText": QT_TRANSLATE_NOOP("TechDraw_MoveView", "Move View"),
"ToolTip": QT_TRANSLATE_NOOP(
"TechDraw_MoveView", "Move a View to a new Page"
),
}
def Activated(self):
"""Run the following code when the command is activated (button press)."""
sel = Gui.Selection.getSelection()
viewName = ""
views = list()
for o in sel:
if o.isDerivedFrom("TechDraw::DrawView"):
views.append(o)
if views:
viewName = views[0].Name
toPageName = ""
fromPageName = ""
pages = list()
for o in sel:
if o.isDerivedFrom("TechDraw::DrawPage"):
pages.append(o)
if pages:
fromPageName = pages[0].Name
if len(pages) > 1:
toPageName = pages[1].Name
self.ui = TechDrawTools.TaskMoveView()
self.ui.setValues(viewName, fromPageName, toPageName)
Gui.Control.showDialog(self.ui)
def IsActive(self):
"""Return True when the command should be active or False when it should be disabled (greyed)."""
if App.ActiveDocument:
return (
TechDrawTools.TDToolsUtil.havePage()
and TechDrawTools.TDToolsUtil.haveView()
)
else:
return False
#
# The command must be "registered" with a unique name by calling its class.
Gui.addCommand("TechDraw_MoveView", CommandMoveView())
|
pwidgets | patternctrls | # -*- coding: utf-8 -*-
#
# Copyright (C) 2015 by Ihor E. Novikov
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
import math
from copy import deepcopy
import wal
from colorbtn import PDColorButton
from colorctrls import SwatchCanvas
from patterns import PATTERN_PRESETS
from sk1 import _
from sk1.resources import get_bmp, icons
from uc2 import libgeom, libimg, sk2const
from uc2.utils import fsutils
from unitctrls import StaticUnitLabel, UnitSpin
class PatternPaletteSwatch(wal.VPanel, SwatchCanvas):
callback = None
pattern = None
def __init__(self, parent, cms, pattern, size=(40, 20), onclick=None):
self.color = None
self.cms = cms
wal.VPanel.__init__(self, parent)
SwatchCanvas.__init__(self)
self.pack(size)
if onclick:
self.callback = onclick
self.set_pattern(pattern)
def set_pattern(self, pattern):
self.pattern = pattern
self.fill = [0, sk2const.FILL_PATTERN, [sk2const.PATTERN_IMG, pattern]]
self.refresh()
def mouse_left_up(self, point):
if self.callback:
self.callback(deepcopy(self.pattern))
class PatternMiniPalette(wal.VPanel):
callback = None
cells = []
def __init__(self, parent, cms, onclick=None):
wal.VPanel.__init__(self, parent)
self.set_bg(wal.BLACK)
grid = wal.GridPanel(parent, 2, 6, 1, 1)
grid.set_bg(wal.BLACK)
self.cells = []
for item in range(12):
self.cells.append(
PatternPaletteSwatch(
grid, cms, PATTERN_PRESETS[item], onclick=self.on_click
)
)
grid.pack(self.cells[-1])
self.pack(grid, padding_all=1)
if onclick:
self.callback = onclick
def on_click(self, pattern):
if self.callback:
self.callback(pattern)
class PatternSwatch(wal.VPanel, SwatchCanvas):
callback = None
pattern = None
def __init__(self, parent, cms, pattern_def, size=(130, 130)):
self.color = None
self.cms = cms
wal.VPanel.__init__(self, parent)
SwatchCanvas.__init__(self, border="news")
self.pack(size)
self.set_pattern_def(pattern_def)
def set_pattern_def(self, pattern_def):
self.fill = [0, sk2const.FILL_PATTERN, pattern_def]
self.refresh()
class PatternColorEditor(wal.HPanel):
image_style = []
callback = None
def __init__(self, parent, dlg, cms, image_style, onchange=None):
self.image_style = deepcopy(image_style)
self.callback = onchange
wal.HPanel.__init__(self, parent)
self.pack(wal.Label(self, _("Fg:")))
txt = _("Change pattern foreground color")
self.fg_btn = PDColorButton(
self, dlg, cms, self.image_style[0], txt, onchange=self.fg_changed
)
self.pack(self.fg_btn, padding=5)
self.pack((10, 1))
self.pack(wal.Label(self, _("Bg:")))
txt = _("Change pattern background color")
self.bg_btn = PDColorButton(
self, dlg, cms, self.image_style[1], txt, onchange=self.bg_changed
)
self.pack(self.bg_btn, padding=5)
def fg_changed(self, color):
self.image_style[0] = deepcopy(color)
if self.callback:
self.callback(self.get_image_style())
def bg_changed(self, color):
self.image_style[1] = deepcopy(color)
if self.callback:
self.callback(self.get_image_style())
def set_image_style(self, image_style):
self.image_style = deepcopy(image_style)
self.fg_btn.set_color(self.image_style[0])
self.bg_btn.set_color(self.image_style[1])
def get_image_style(self):
return deepcopy(self.image_style)
class TransformEditor(wal.VPanel):
app = None
callback = None
transforms = []
def __init__(
self,
parent,
app,
trafo=[] + sk2const.NORMAL_TRAFO,
transforms=[] + sk2const.PATTERN_TRANSFORMS,
onchange=None,
):
self.app = app
self.callback = onchange
wal.VPanel.__init__(self, parent)
grid = wal.GridPanel(self, 3, 7, 3, 2)
# ---Origin X
txt = _("Horizontal origin shift")
grid.pack(get_bmp(grid, icons.PD_PATTERN_ORIGIN_X, txt))
self.origin_x = UnitSpin(
app, grid, can_be_negative=True, onchange=self.changes, onenter=self.changes
)
grid.pack(self.origin_x)
grid.pack(StaticUnitLabel(app, grid))
grid.pack((10, 5))
# ---Origin Y
txt = _("Vertical origin shift")
grid.pack(get_bmp(grid, icons.PD_PATTERN_ORIGIN_Y, txt))
self.origin_y = UnitSpin(
app, grid, can_be_negative=True, onchange=self.changes, onenter=self.changes
)
grid.pack(self.origin_y)
grid.pack(StaticUnitLabel(app, grid))
# ---Scale X
txt = _("Scale horizontally")
grid.pack(get_bmp(grid, icons.PD_PATTERN_SCALE_X, txt))
self.scale_x = wal.FloatSpin(
grid,
range_val=(-1000000.0, 1000000.0),
step=1.0,
onchange=self.changes,
onenter=self.changes,
)
grid.pack(self.scale_x)
grid.pack(wal.Label(grid, "%"))
grid.pack((10, 5))
# ---Scale Y
txt = _("Scale vertically")
grid.pack(get_bmp(grid, icons.PD_PATTERN_SCALE_Y, txt))
self.scale_y = wal.FloatSpin(
grid,
range_val=(-1000000.0, 1000000.0),
step=1.0,
onchange=self.changes,
onenter=self.changes,
)
grid.pack(self.scale_y)
grid.pack(wal.Label(grid, "%"))
# ---Shear X
txt = _("Shear horizontally")
grid.pack(get_bmp(grid, icons.PD_PATTERN_SHEAR_X, txt))
self.shear_x = wal.FloatSpin(
grid,
range_val=(0.0, 85.0),
step=1.0,
onchange=self.changes,
onenter=self.changes,
)
grid.pack(self.shear_x)
grid.pack(wal.Label(grid, "°"))
grid.pack((10, 5))
# ---Shear Y
txt = _("Shear vertically")
grid.pack(get_bmp(grid, icons.PD_PATTERN_SHEAR_Y, txt))
self.shear_y = wal.FloatSpin(
grid,
range_val=(0.0, 85.0),
step=1.0,
onchange=self.changes,
onenter=self.changes,
)
grid.pack(self.shear_y)
grid.pack(wal.Label(grid, "°"))
self.pack(grid)
# ---Rotate
rot_panel = wal.HPanel(self)
txt = _("Rotate pattern")
rot_panel.pack(get_bmp(rot_panel, icons.PD_PATTERN_ROTATE, txt))
self.rotate = wal.FloatSpin(
rot_panel,
range_val=(-360.0, 360.0),
step=1.0,
onchange=self.changes,
onenter=self.changes,
)
rot_panel.pack(self.rotate, padding=3)
rot_panel.pack(wal.Label(rot_panel, "°"))
self.pack(rot_panel, padding=5)
self.set_trafo(trafo, transforms)
def set_trafo(self, trafo, transforms):
x0, y0 = trafo[-2:]
sx, sy, shx, shy, rotate = transforms
self.origin_x.set_point_value(x0)
self.origin_y.set_point_value(y0)
self.scale_x.set_value(sx * 100.0)
self.scale_y.set_value(sy * 100.0)
self.shear_x.set_value(shx * 180.0 / math.pi)
self.shear_y.set_value(shy * 180.0 / math.pi)
self.rotate.set_value(rotate * 180.0 / math.pi)
self.transforms = transforms
def get_trafo(self):
x0 = self.origin_x.get_point_value()
y0 = self.origin_y.get_point_value()
sx = self.scale_x.get_value() / 100.0
sy = self.scale_y.get_value() / 100.0
shx = self.shear_x.get_value()
shy = self.shear_y.get_value()
if shx + shy > 85:
if shx == self.transforms[3]:
shy = 85 - shx
else:
shx = 85 - shy
shx = math.pi * shx / 180.0
shy = math.pi * shy / 180.0
angle = math.pi * self.rotate.get_value() / 180.0
trafo = [sx, 0.0, 0.0, sy, x0, y0]
if angle:
trafo2 = [
math.cos(angle),
math.sin(angle),
-math.sin(angle),
math.cos(angle),
0.0,
0.0,
]
trafo = libgeom.multiply_trafo(trafo, trafo2)
if shx or shy:
trafo2 = [1.0, math.tan(shy), math.tan(shx), 1.0, 0.0, 0.0]
trafo = libgeom.multiply_trafo(trafo, trafo2)
self.transforms = [sx, sy, shx, shy, angle]
return trafo, [sx, sy, shx, shy, angle]
def changes(self):
if self.callback:
self.callback(*self.get_trafo())
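`get_trafo` composes the scale, rotation, and shear matrices with `libgeom.multiply_trafo`, all in the flat `[m11, m12, m21, m22, dx, dy]` form. A self-contained sketch of that composition, where `multiply_trafo` is a stand-in for `uc2.libgeom.multiply_trafo` (assuming cairo-style composition: applying the result equals applying `t1`, then `t2`):

```python
def multiply_trafo(t1, t2):
    """Compose two affine matrices [m11, m12, m21, m22, dx, dy]."""
    a11, a12, a21, a22, adx, ady = t1
    b11, b12, b21, b22, bdx, bdy = t2
    return [
        a11 * b11 + a12 * b21,
        a11 * b12 + a12 * b22,
        a21 * b11 + a22 * b21,
        a21 * b12 + a22 * b22,
        adx * b11 + ady * b21 + bdx,
        adx * b12 + ady * b22 + bdy,
    ]

def apply_trafo(t, x, y):
    """Map a point through a flat affine matrix."""
    m11, m12, m21, m22, dx, dy = t
    return m11 * x + m21 * y + dx, m12 * x + m22 * y + dy

# Scale by 2 with an origin shift, then rotate 90 degrees.
scale = [2.0, 0.0, 0.0, 2.0, 1.0, 0.0]
rot90 = [0.0, 1.0, -1.0, 0.0, 0.0, 0.0]
combined = multiply_trafo(scale, rot90)

# Composing first is equivalent to applying the steps one by one.
step_by_step = apply_trafo(rot90, *apply_trafo(scale, 1.0, 0.0))
assert apply_trafo(combined, 1.0, 0.0) == step_by_step == (0.0, 3.0)
```

This mirrors the order in `get_trafo`: the scale/origin matrix is built first, then the rotation and shear matrices are multiplied onto it.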
class TrafoEditor(wal.VPanel):
app = None
callback = None
def __init__(self, parent, app, onchange=None):
self.app = app
self.callback = onchange
wal.VPanel.__init__(self, parent)
self.pack(wal.Label(self, _("Transformation matrix:")), padding=5)
grid = wal.GridPanel(self, 3, 5, 3, 1)
# ---M11
grid.pack(wal.Label(grid, "m11:"))
self.m11 = wal.FloatSpin(
grid,
range_val=(-1000000.0, 1000000.0),
step=0.01,
onchange=self.changes,
onenter=self.changes,
)
grid.pack(self.m11)
grid.pack((5, 5))
# ---M12
grid.pack(wal.Label(grid, "m12:"))
self.m12 = wal.FloatSpin(
grid,
range_val=(-1000000.0, 1000000.0),
step=0.01,
onchange=self.changes,
onenter=self.changes,
)
grid.pack(self.m12)
# ---M21
grid.pack(wal.Label(grid, "m21:"))
self.m21 = wal.FloatSpin(
grid,
range_val=(-1000000.0, 1000000.0),
step=0.01,
onchange=self.changes,
onenter=self.changes,
)
grid.pack(self.m21)
grid.pack((5, 5))
# ---M22
grid.pack(wal.Label(grid, "m22:"))
self.m22 = wal.FloatSpin(
grid,
range_val=(-1000000.0, 1000000.0),
step=0.01,
onchange=self.changes,
onenter=self.changes,
)
grid.pack(self.m22)
# ---dx
grid.pack(wal.Label(grid, "dx:"))
self.dx = wal.FloatSpin(
grid,
range_val=(-1000000.0, 1000000.0),
step=0.01,
onchange=self.changes,
onenter=self.changes,
)
grid.pack(self.dx)
grid.pack((5, 5))
# ---dy
grid.pack(wal.Label(grid, "dy:"))
self.dy = wal.FloatSpin(
grid,
range_val=(-1000000.0, 1000000.0),
step=0.01,
onchange=self.changes,
onenter=self.changes,
)
grid.pack(self.dy)
self.pack(grid)
def set_trafo(self, *args):
trafo = args[0]
self.m11.set_value(trafo[0])
self.m12.set_value(trafo[1])
self.m21.set_value(trafo[2])
self.m22.set_value(trafo[3])
self.dx.set_value(trafo[4])
self.dy.set_value(trafo[5])
def get_trafo(self):
m11 = self.m11.get_value()
m12 = self.m12.get_value()
m21 = self.m21.get_value()
m22 = self.m22.get_value()
dx = self.dx.get_value()
dy = self.dy.get_value()
return [m11, m12, m21, m22, dx, dy], []
def changes(self):
if self.callback:
self.callback(*self.get_trafo())
class PatternTrafoEditor(wal.VPanel):
app = None
callback = None
transforms = []
active_panel = None
def __init__(self, parent, app, onchange=None):
wal.VPanel.__init__(self, parent)
self.transform_editor = TransformEditor(self, app, onchange=onchange)
self.trafo_editor = TrafoEditor(self, app, onchange=onchange)
self.transform_editor.set_visible(False)
self.pack(self.trafo_editor)
self.active_panel = self.trafo_editor
def set_trafo(self, trafo, transforms):
if not transforms:
m11, m12, m21, m22 = trafo[:4]
if not m12 and not m21:
transforms = [m11, m22, 0.0, 0.0, 0.0]
if transforms:
if not self.active_panel == self.transform_editor:
self.remove(self.active_panel)
self.active_panel.set_visible(False)
self.active_panel = self.transform_editor
self.pack(self.transform_editor)
self.active_panel.set_visible(True)
else:
if not self.active_panel == self.trafo_editor:
self.remove(self.active_panel)
self.active_panel.set_visible(False)
self.active_panel = self.trafo_editor
self.pack(self.active_panel)
self.active_panel.set_visible(True)
self.active_panel.set_trafo(trafo, transforms)
class PatternEditor(wal.HPanel):
pattern_def = []
cms = None
dlg = None
callback = None
def __init__(self, parent, dlg, cms, pattern_def, onchange=None):
self.dlg = dlg
self.app = dlg.app
self.cms = cms
self.pattern_def = deepcopy(pattern_def)
self.callback = onchange
wal.HPanel.__init__(self, parent)
left_panel = wal.VPanel(self)
self.pattern_swatch = PatternSwatch(left_panel, self.cms, pattern_def)
left_panel.pack(self.pattern_swatch)
button_panel = wal.HPanel(left_panel)
txt = _("Load pattern from file")
button_panel.pack(
wal.ImageButton(
self,
icons.PD_OPEN,
wal.SIZE_16,
tooltip=txt,
flat=False,
onclick=self.load_pattern,
),
padding=1,
)
txt = _("Save pattern into file")
button_panel.pack(
wal.ImageButton(
self,
icons.PD_FILE_SAVE,
wal.SIZE_16,
tooltip=txt,
flat=False,
onclick=self.save_pattern,
),
padding=1,
)
left_panel.pack(button_panel, padding=2)
self.pack(left_panel, fill=True)
right_panel = wal.VPanel(self)
pce = PatternColorEditor(
right_panel, dlg, cms, pattern_def[2], onchange=self.color_changed
)
self.pattern_color_editor = pce
right_panel.pack(self.pattern_color_editor, padding=5)
self.trafo_editor = PatternTrafoEditor(
right_panel, dlg.app, onchange=self.trafo_changed
)
right_panel.pack(self.trafo_editor, padding=5)
self.pack(right_panel, fill=True, expand=True)
def color_changed(self, image_style):
self.pattern_def[2] = image_style
if self.callback:
self.callback(self.get_pattern_def())
def trafo_changed(self, trafo, transforms=None):
transforms = transforms or []
self.pattern_def[3] = trafo
self.pattern_def[4] = transforms
if self.callback:
self.callback(self.get_pattern_def())
def set_pattern_def(self, pattern_def):
self.pattern_def = deepcopy(pattern_def)
self.update()
def get_pattern_def(self):
return deepcopy(self.pattern_def)
def update(self):
self.pattern_swatch.set_pattern_def(self.pattern_def)
self.pattern_color_editor.set_image_style(self.pattern_def[2])
self.trafo_editor.set_trafo(self.pattern_def[3], self.pattern_def[4])
self.pattern_color_editor.set_visible(
self.pattern_def[0] == sk2const.PATTERN_IMG
)
def load_pattern(self):
img_file = self.app.import_pattern(self.dlg)
if img_file:
fobj = fsutils.get_fileptr(img_file)
pattern, flag = libimg.read_pattern(fobj.read())
pattern_type = sk2const.PATTERN_TRUECOLOR
if flag:
pattern_type = sk2const.PATTERN_IMG
if flag == "EPS":
pattern_type = sk2const.PATTERN_EPS
self.pattern_def[0] = pattern_type
self.pattern_def[1] = pattern
if self.callback:
self.callback(self.get_pattern_def())
def save_pattern(self):
self.app.extract_pattern(
self.dlg, self.pattern_def[1], self.pattern_def[0] == sk2const.PATTERN_EPS
)
|
migrations | 0028_alter_settings_language_alter_settings_timezone | # Generated by Django 4.2.2 on 2023-06-16 19:28
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("babybuddy", "0027_remove_standard_group"),
]
operations = [
migrations.AlterField(
model_name="settings",
name="language",
field=models.CharField(
choices=[
("ca", "Catalan"),
("cs", "Czech"),
("zh-hans", "Chinese (simplified)"),
("da", "Danish"),
("nl", "Dutch"),
("en-US", "English (US)"),
("en-GB", "English (UK)"),
("fr", "French"),
("fi", "Finnish"),
("de", "German"),
("hu", "Hungarian"),
("it", "Italian"),
("nb", "Norwegian Bokmål"),
("pl", "Polish"),
("pt", "Portuguese"),
("ru", "Russian"),
("es", "Spanish"),
("sv", "Swedish"),
("tr", "Turkish"),
],
default="en-US",
max_length=255,
verbose_name="Language",
),
),
migrations.AlterField(
model_name="settings",
name="timezone",
field=models.CharField(
choices=[
("Africa/Abidjan", "Africa/Abidjan"),
("Africa/Accra", "Africa/Accra"),
("Africa/Addis_Ababa", "Africa/Addis_Ababa"),
("Africa/Algiers", "Africa/Algiers"),
("Africa/Asmara", "Africa/Asmara"),
("Africa/Bamako", "Africa/Bamako"),
("Africa/Bangui", "Africa/Bangui"),
("Africa/Banjul", "Africa/Banjul"),
("Africa/Bissau", "Africa/Bissau"),
("Africa/Blantyre", "Africa/Blantyre"),
("Africa/Brazzaville", "Africa/Brazzaville"),
("Africa/Bujumbura", "Africa/Bujumbura"),
("Africa/Cairo", "Africa/Cairo"),
("Africa/Casablanca", "Africa/Casablanca"),
("Africa/Ceuta", "Africa/Ceuta"),
("Africa/Conakry", "Africa/Conakry"),
("Africa/Dakar", "Africa/Dakar"),
("Africa/Dar_es_Salaam", "Africa/Dar_es_Salaam"),
("Africa/Djibouti", "Africa/Djibouti"),
("Africa/Douala", "Africa/Douala"),
("Africa/El_Aaiun", "Africa/El_Aaiun"),
("Africa/Freetown", "Africa/Freetown"),
("Africa/Gaborone", "Africa/Gaborone"),
("Africa/Harare", "Africa/Harare"),
("Africa/Johannesburg", "Africa/Johannesburg"),
("Africa/Juba", "Africa/Juba"),
("Africa/Kampala", "Africa/Kampala"),
("Africa/Khartoum", "Africa/Khartoum"),
("Africa/Kigali", "Africa/Kigali"),
("Africa/Kinshasa", "Africa/Kinshasa"),
("Africa/Lagos", "Africa/Lagos"),
("Africa/Libreville", "Africa/Libreville"),
("Africa/Lome", "Africa/Lome"),
("Africa/Luanda", "Africa/Luanda"),
("Africa/Lubumbashi", "Africa/Lubumbashi"),
("Africa/Lusaka", "Africa/Lusaka"),
("Africa/Malabo", "Africa/Malabo"),
("Africa/Maputo", "Africa/Maputo"),
("Africa/Maseru", "Africa/Maseru"),
("Africa/Mbabane", "Africa/Mbabane"),
("Africa/Mogadishu", "Africa/Mogadishu"),
("Africa/Monrovia", "Africa/Monrovia"),
("Africa/Nairobi", "Africa/Nairobi"),
("Africa/Ndjamena", "Africa/Ndjamena"),
("Africa/Niamey", "Africa/Niamey"),
("Africa/Nouakchott", "Africa/Nouakchott"),
("Africa/Ouagadougou", "Africa/Ouagadougou"),
("Africa/Porto-Novo", "Africa/Porto-Novo"),
("Africa/Sao_Tome", "Africa/Sao_Tome"),
("Africa/Tripoli", "Africa/Tripoli"),
("Africa/Tunis", "Africa/Tunis"),
("Africa/Windhoek", "Africa/Windhoek"),
("America/Adak", "America/Adak"),
("America/Anchorage", "America/Anchorage"),
("America/Anguilla", "America/Anguilla"),
("America/Antigua", "America/Antigua"),
("America/Araguaina", "America/Araguaina"),
(
"America/Argentina/Buenos_Aires",
"America/Argentina/Buenos_Aires",
),
("America/Argentina/Catamarca", "America/Argentina/Catamarca"),
("America/Argentina/Cordoba", "America/Argentina/Cordoba"),
("America/Argentina/Jujuy", "America/Argentina/Jujuy"),
("America/Argentina/La_Rioja", "America/Argentina/La_Rioja"),
("America/Argentina/Mendoza", "America/Argentina/Mendoza"),
(
"America/Argentina/Rio_Gallegos",
"America/Argentina/Rio_Gallegos",
),
("America/Argentina/Salta", "America/Argentina/Salta"),
("America/Argentina/San_Juan", "America/Argentina/San_Juan"),
("America/Argentina/San_Luis", "America/Argentina/San_Luis"),
("America/Argentina/Tucuman", "America/Argentina/Tucuman"),
("America/Argentina/Ushuaia", "America/Argentina/Ushuaia"),
("America/Aruba", "America/Aruba"),
("America/Asuncion", "America/Asuncion"),
("America/Atikokan", "America/Atikokan"),
("America/Bahia", "America/Bahia"),
("America/Bahia_Banderas", "America/Bahia_Banderas"),
("America/Barbados", "America/Barbados"),
("America/Belem", "America/Belem"),
("America/Belize", "America/Belize"),
("America/Blanc-Sablon", "America/Blanc-Sablon"),
("America/Boa_Vista", "America/Boa_Vista"),
("America/Bogota", "America/Bogota"),
("America/Boise", "America/Boise"),
("America/Cambridge_Bay", "America/Cambridge_Bay"),
("America/Campo_Grande", "America/Campo_Grande"),
("America/Cancun", "America/Cancun"),
("America/Caracas", "America/Caracas"),
("America/Cayenne", "America/Cayenne"),
("America/Cayman", "America/Cayman"),
("America/Chicago", "America/Chicago"),
("America/Chihuahua", "America/Chihuahua"),
("America/Ciudad_Juarez", "America/Ciudad_Juarez"),
("America/Costa_Rica", "America/Costa_Rica"),
("America/Creston", "America/Creston"),
("America/Cuiaba", "America/Cuiaba"),
("America/Curacao", "America/Curacao"),
("America/Danmarkshavn", "America/Danmarkshavn"),
("America/Dawson", "America/Dawson"),
("America/Dawson_Creek", "America/Dawson_Creek"),
("America/Denver", "America/Denver"),
("America/Detroit", "America/Detroit"),
("America/Dominica", "America/Dominica"),
("America/Edmonton", "America/Edmonton"),
("America/Eirunepe", "America/Eirunepe"),
("America/El_Salvador", "America/El_Salvador"),
("America/Fort_Nelson", "America/Fort_Nelson"),
("America/Fortaleza", "America/Fortaleza"),
("America/Glace_Bay", "America/Glace_Bay"),
("America/Goose_Bay", "America/Goose_Bay"),
("America/Grand_Turk", "America/Grand_Turk"),
("America/Grenada", "America/Grenada"),
("America/Guadeloupe", "America/Guadeloupe"),
("America/Guatemala", "America/Guatemala"),
("America/Guayaquil", "America/Guayaquil"),
("America/Guyana", "America/Guyana"),
("America/Halifax", "America/Halifax"),
("America/Havana", "America/Havana"),
("America/Hermosillo", "America/Hermosillo"),
("America/Indiana/Indianapolis", "America/Indiana/Indianapolis"),
("America/Indiana/Knox", "America/Indiana/Knox"),
("America/Indiana/Marengo", "America/Indiana/Marengo"),
("America/Indiana/Petersburg", "America/Indiana/Petersburg"),
("America/Indiana/Tell_City", "America/Indiana/Tell_City"),
("America/Indiana/Vevay", "America/Indiana/Vevay"),
("America/Indiana/Vincennes", "America/Indiana/Vincennes"),
("America/Indiana/Winamac", "America/Indiana/Winamac"),
("America/Inuvik", "America/Inuvik"),
("America/Iqaluit", "America/Iqaluit"),
("America/Jamaica", "America/Jamaica"),
("America/Juneau", "America/Juneau"),
("America/Kentucky/Louisville", "America/Kentucky/Louisville"),
("America/Kentucky/Monticello", "America/Kentucky/Monticello"),
("America/Kralendijk", "America/Kralendijk"),
("America/La_Paz", "America/La_Paz"),
("America/Lima", "America/Lima"),
("America/Los_Angeles", "America/Los_Angeles"),
("America/Lower_Princes", "America/Lower_Princes"),
("America/Maceio", "America/Maceio"),
("America/Managua", "America/Managua"),
("America/Manaus", "America/Manaus"),
("America/Marigot", "America/Marigot"),
("America/Martinique", "America/Martinique"),
("America/Matamoros", "America/Matamoros"),
("America/Mazatlan", "America/Mazatlan"),
("America/Menominee", "America/Menominee"),
("America/Merida", "America/Merida"),
("America/Metlakatla", "America/Metlakatla"),
("America/Mexico_City", "America/Mexico_City"),
("America/Miquelon", "America/Miquelon"),
("America/Moncton", "America/Moncton"),
("America/Monterrey", "America/Monterrey"),
("America/Montevideo", "America/Montevideo"),
("America/Montserrat", "America/Montserrat"),
("America/Nassau", "America/Nassau"),
("America/New_York", "America/New_York"),
("America/Nome", "America/Nome"),
("America/Noronha", "America/Noronha"),
("America/North_Dakota/Beulah", "America/North_Dakota/Beulah"),
("America/North_Dakota/Center", "America/North_Dakota/Center"),
(
"America/North_Dakota/New_Salem",
"America/North_Dakota/New_Salem",
),
("America/Nuuk", "America/Nuuk"),
("America/Ojinaga", "America/Ojinaga"),
("America/Panama", "America/Panama"),
("America/Paramaribo", "America/Paramaribo"),
("America/Phoenix", "America/Phoenix"),
("America/Port-au-Prince", "America/Port-au-Prince"),
("America/Port_of_Spain", "America/Port_of_Spain"),
("America/Porto_Velho", "America/Porto_Velho"),
("America/Puerto_Rico", "America/Puerto_Rico"),
("America/Punta_Arenas", "America/Punta_Arenas"),
("America/Rankin_Inlet", "America/Rankin_Inlet"),
("America/Recife", "America/Recife"),
("America/Regina", "America/Regina"),
("America/Resolute", "America/Resolute"),
("America/Rio_Branco", "America/Rio_Branco"),
("America/Santarem", "America/Santarem"),
("America/Santiago", "America/Santiago"),
("America/Santo_Domingo", "America/Santo_Domingo"),
("America/Sao_Paulo", "America/Sao_Paulo"),
("America/Scoresbysund", "America/Scoresbysund"),
("America/Sitka", "America/Sitka"),
("America/St_Barthelemy", "America/St_Barthelemy"),
("America/St_Johns", "America/St_Johns"),
("America/St_Kitts", "America/St_Kitts"),
("America/St_Lucia", "America/St_Lucia"),
("America/St_Thomas", "America/St_Thomas"),
("America/St_Vincent", "America/St_Vincent"),
("America/Swift_Current", "America/Swift_Current"),
("America/Tegucigalpa", "America/Tegucigalpa"),
("America/Thule", "America/Thule"),
("America/Tijuana", "America/Tijuana"),
("America/Toronto", "America/Toronto"),
("America/Tortola", "America/Tortola"),
("America/Vancouver", "America/Vancouver"),
("America/Whitehorse", "America/Whitehorse"),
("America/Winnipeg", "America/Winnipeg"),
("America/Yakutat", "America/Yakutat"),
("Antarctica/Casey", "Antarctica/Casey"),
("Antarctica/Davis", "Antarctica/Davis"),
("Antarctica/DumontDUrville", "Antarctica/DumontDUrville"),
("Antarctica/Macquarie", "Antarctica/Macquarie"),
("Antarctica/Mawson", "Antarctica/Mawson"),
("Antarctica/McMurdo", "Antarctica/McMurdo"),
("Antarctica/Palmer", "Antarctica/Palmer"),
("Antarctica/Rothera", "Antarctica/Rothera"),
("Antarctica/Syowa", "Antarctica/Syowa"),
("Antarctica/Troll", "Antarctica/Troll"),
("Antarctica/Vostok", "Antarctica/Vostok"),
("Arctic/Longyearbyen", "Arctic/Longyearbyen"),
("Asia/Aden", "Asia/Aden"),
("Asia/Almaty", "Asia/Almaty"),
("Asia/Amman", "Asia/Amman"),
("Asia/Anadyr", "Asia/Anadyr"),
("Asia/Aqtau", "Asia/Aqtau"),
("Asia/Aqtobe", "Asia/Aqtobe"),
("Asia/Ashgabat", "Asia/Ashgabat"),
("Asia/Atyrau", "Asia/Atyrau"),
("Asia/Baghdad", "Asia/Baghdad"),
("Asia/Bahrain", "Asia/Bahrain"),
("Asia/Baku", "Asia/Baku"),
("Asia/Bangkok", "Asia/Bangkok"),
("Asia/Barnaul", "Asia/Barnaul"),
("Asia/Beirut", "Asia/Beirut"),
("Asia/Bishkek", "Asia/Bishkek"),
("Asia/Brunei", "Asia/Brunei"),
("Asia/Chita", "Asia/Chita"),
("Asia/Choibalsan", "Asia/Choibalsan"),
("Asia/Colombo", "Asia/Colombo"),
("Asia/Damascus", "Asia/Damascus"),
("Asia/Dhaka", "Asia/Dhaka"),
("Asia/Dili", "Asia/Dili"),
("Asia/Dubai", "Asia/Dubai"),
("Asia/Dushanbe", "Asia/Dushanbe"),
("Asia/Famagusta", "Asia/Famagusta"),
("Asia/Gaza", "Asia/Gaza"),
("Asia/Hebron", "Asia/Hebron"),
("Asia/Ho_Chi_Minh", "Asia/Ho_Chi_Minh"),
("Asia/Hong_Kong", "Asia/Hong_Kong"),
("Asia/Hovd", "Asia/Hovd"),
("Asia/Irkutsk", "Asia/Irkutsk"),
("Asia/Jakarta", "Asia/Jakarta"),
("Asia/Jayapura", "Asia/Jayapura"),
("Asia/Jerusalem", "Asia/Jerusalem"),
("Asia/Kabul", "Asia/Kabul"),
("Asia/Kamchatka", "Asia/Kamchatka"),
("Asia/Karachi", "Asia/Karachi"),
("Asia/Kathmandu", "Asia/Kathmandu"),
("Asia/Khandyga", "Asia/Khandyga"),
("Asia/Kolkata", "Asia/Kolkata"),
("Asia/Krasnoyarsk", "Asia/Krasnoyarsk"),
("Asia/Kuala_Lumpur", "Asia/Kuala_Lumpur"),
("Asia/Kuching", "Asia/Kuching"),
("Asia/Kuwait", "Asia/Kuwait"),
("Asia/Macau", "Asia/Macau"),
("Asia/Magadan", "Asia/Magadan"),
("Asia/Makassar", "Asia/Makassar"),
("Asia/Manila", "Asia/Manila"),
("Asia/Muscat", "Asia/Muscat"),
("Asia/Nicosia", "Asia/Nicosia"),
("Asia/Novokuznetsk", "Asia/Novokuznetsk"),
("Asia/Novosibirsk", "Asia/Novosibirsk"),
("Asia/Omsk", "Asia/Omsk"),
("Asia/Oral", "Asia/Oral"),
("Asia/Phnom_Penh", "Asia/Phnom_Penh"),
("Asia/Pontianak", "Asia/Pontianak"),
("Asia/Pyongyang", "Asia/Pyongyang"),
("Asia/Qatar", "Asia/Qatar"),
("Asia/Qostanay", "Asia/Qostanay"),
("Asia/Qyzylorda", "Asia/Qyzylorda"),
("Asia/Riyadh", "Asia/Riyadh"),
("Asia/Sakhalin", "Asia/Sakhalin"),
("Asia/Samarkand", "Asia/Samarkand"),
("Asia/Seoul", "Asia/Seoul"),
("Asia/Shanghai", "Asia/Shanghai"),
("Asia/Singapore", "Asia/Singapore"),
("Asia/Srednekolymsk", "Asia/Srednekolymsk"),
("Asia/Taipei", "Asia/Taipei"),
("Asia/Tashkent", "Asia/Tashkent"),
("Asia/Tbilisi", "Asia/Tbilisi"),
("Asia/Tehran", "Asia/Tehran"),
("Asia/Thimphu", "Asia/Thimphu"),
("Asia/Tokyo", "Asia/Tokyo"),
("Asia/Tomsk", "Asia/Tomsk"),
("Asia/Ulaanbaatar", "Asia/Ulaanbaatar"),
("Asia/Urumqi", "Asia/Urumqi"),
("Asia/Ust-Nera", "Asia/Ust-Nera"),
("Asia/Vientiane", "Asia/Vientiane"),
("Asia/Vladivostok", "Asia/Vladivostok"),
("Asia/Yakutsk", "Asia/Yakutsk"),
("Asia/Yangon", "Asia/Yangon"),
("Asia/Yekaterinburg", "Asia/Yekaterinburg"),
("Asia/Yerevan", "Asia/Yerevan"),
("Atlantic/Azores", "Atlantic/Azores"),
("Atlantic/Bermuda", "Atlantic/Bermuda"),
("Atlantic/Canary", "Atlantic/Canary"),
("Atlantic/Cape_Verde", "Atlantic/Cape_Verde"),
("Atlantic/Faroe", "Atlantic/Faroe"),
("Atlantic/Madeira", "Atlantic/Madeira"),
("Atlantic/Reykjavik", "Atlantic/Reykjavik"),
("Atlantic/South_Georgia", "Atlantic/South_Georgia"),
("Atlantic/St_Helena", "Atlantic/St_Helena"),
("Atlantic/Stanley", "Atlantic/Stanley"),
("Australia/Adelaide", "Australia/Adelaide"),
("Australia/Brisbane", "Australia/Brisbane"),
("Australia/Broken_Hill", "Australia/Broken_Hill"),
("Australia/Darwin", "Australia/Darwin"),
("Australia/Eucla", "Australia/Eucla"),
("Australia/Hobart", "Australia/Hobart"),
("Australia/Lindeman", "Australia/Lindeman"),
("Australia/Lord_Howe", "Australia/Lord_Howe"),
("Australia/Melbourne", "Australia/Melbourne"),
("Australia/Perth", "Australia/Perth"),
("Australia/Sydney", "Australia/Sydney"),
("Canada/Atlantic", "Canada/Atlantic"),
("Canada/Central", "Canada/Central"),
("Canada/Eastern", "Canada/Eastern"),
("Canada/Mountain", "Canada/Mountain"),
("Canada/Newfoundland", "Canada/Newfoundland"),
("Canada/Pacific", "Canada/Pacific"),
("Europe/Amsterdam", "Europe/Amsterdam"),
("Europe/Andorra", "Europe/Andorra"),
("Europe/Astrakhan", "Europe/Astrakhan"),
("Europe/Athens", "Europe/Athens"),
("Europe/Belgrade", "Europe/Belgrade"),
("Europe/Berlin", "Europe/Berlin"),
("Europe/Bratislava", "Europe/Bratislava"),
("Europe/Brussels", "Europe/Brussels"),
("Europe/Bucharest", "Europe/Bucharest"),
("Europe/Budapest", "Europe/Budapest"),
("Europe/Busingen", "Europe/Busingen"),
("Europe/Chisinau", "Europe/Chisinau"),
("Europe/Copenhagen", "Europe/Copenhagen"),
("Europe/Dublin", "Europe/Dublin"),
("Europe/Gibraltar", "Europe/Gibraltar"),
("Europe/Guernsey", "Europe/Guernsey"),
("Europe/Helsinki", "Europe/Helsinki"),
("Europe/Isle_of_Man", "Europe/Isle_of_Man"),
("Europe/Istanbul", "Europe/Istanbul"),
("Europe/Jersey", "Europe/Jersey"),
("Europe/Kaliningrad", "Europe/Kaliningrad"),
("Europe/Kirov", "Europe/Kirov"),
("Europe/Kyiv", "Europe/Kyiv"),
("Europe/Lisbon", "Europe/Lisbon"),
("Europe/Ljubljana", "Europe/Ljubljana"),
("Europe/London", "Europe/London"),
("Europe/Luxembourg", "Europe/Luxembourg"),
("Europe/Madrid", "Europe/Madrid"),
("Europe/Malta", "Europe/Malta"),
("Europe/Mariehamn", "Europe/Mariehamn"),
("Europe/Minsk", "Europe/Minsk"),
("Europe/Monaco", "Europe/Monaco"),
("Europe/Moscow", "Europe/Moscow"),
("Europe/Oslo", "Europe/Oslo"),
("Europe/Paris", "Europe/Paris"),
("Europe/Podgorica", "Europe/Podgorica"),
("Europe/Prague", "Europe/Prague"),
("Europe/Riga", "Europe/Riga"),
("Europe/Rome", "Europe/Rome"),
("Europe/Samara", "Europe/Samara"),
("Europe/San_Marino", "Europe/San_Marino"),
("Europe/Sarajevo", "Europe/Sarajevo"),
("Europe/Saratov", "Europe/Saratov"),
("Europe/Simferopol", "Europe/Simferopol"),
("Europe/Skopje", "Europe/Skopje"),
("Europe/Sofia", "Europe/Sofia"),
("Europe/Stockholm", "Europe/Stockholm"),
("Europe/Tallinn", "Europe/Tallinn"),
("Europe/Tirane", "Europe/Tirane"),
("Europe/Ulyanovsk", "Europe/Ulyanovsk"),
("Europe/Vaduz", "Europe/Vaduz"),
("Europe/Vatican", "Europe/Vatican"),
("Europe/Vienna", "Europe/Vienna"),
("Europe/Vilnius", "Europe/Vilnius"),
("Europe/Volgograd", "Europe/Volgograd"),
("Europe/Warsaw", "Europe/Warsaw"),
("Europe/Zagreb", "Europe/Zagreb"),
("Europe/Zurich", "Europe/Zurich"),
("GMT", "GMT"),
("Indian/Antananarivo", "Indian/Antananarivo"),
("Indian/Chagos", "Indian/Chagos"),
("Indian/Christmas", "Indian/Christmas"),
("Indian/Cocos", "Indian/Cocos"),
("Indian/Comoro", "Indian/Comoro"),
("Indian/Kerguelen", "Indian/Kerguelen"),
("Indian/Mahe", "Indian/Mahe"),
("Indian/Maldives", "Indian/Maldives"),
("Indian/Mauritius", "Indian/Mauritius"),
("Indian/Mayotte", "Indian/Mayotte"),
("Indian/Reunion", "Indian/Reunion"),
("Pacific/Apia", "Pacific/Apia"),
("Pacific/Auckland", "Pacific/Auckland"),
("Pacific/Bougainville", "Pacific/Bougainville"),
("Pacific/Chatham", "Pacific/Chatham"),
("Pacific/Chuuk", "Pacific/Chuuk"),
("Pacific/Easter", "Pacific/Easter"),
("Pacific/Efate", "Pacific/Efate"),
("Pacific/Fakaofo", "Pacific/Fakaofo"),
("Pacific/Fiji", "Pacific/Fiji"),
("Pacific/Funafuti", "Pacific/Funafuti"),
("Pacific/Galapagos", "Pacific/Galapagos"),
("Pacific/Gambier", "Pacific/Gambier"),
("Pacific/Guadalcanal", "Pacific/Guadalcanal"),
("Pacific/Guam", "Pacific/Guam"),
("Pacific/Honolulu", "Pacific/Honolulu"),
("Pacific/Kanton", "Pacific/Kanton"),
("Pacific/Kiritimati", "Pacific/Kiritimati"),
("Pacific/Kosrae", "Pacific/Kosrae"),
("Pacific/Kwajalein", "Pacific/Kwajalein"),
("Pacific/Majuro", "Pacific/Majuro"),
("Pacific/Marquesas", "Pacific/Marquesas"),
("Pacific/Midway", "Pacific/Midway"),
("Pacific/Nauru", "Pacific/Nauru"),
("Pacific/Niue", "Pacific/Niue"),
("Pacific/Norfolk", "Pacific/Norfolk"),
("Pacific/Noumea", "Pacific/Noumea"),
("Pacific/Pago_Pago", "Pacific/Pago_Pago"),
("Pacific/Palau", "Pacific/Palau"),
("Pacific/Pitcairn", "Pacific/Pitcairn"),
("Pacific/Pohnpei", "Pacific/Pohnpei"),
("Pacific/Port_Moresby", "Pacific/Port_Moresby"),
("Pacific/Rarotonga", "Pacific/Rarotonga"),
("Pacific/Saipan", "Pacific/Saipan"),
("Pacific/Tahiti", "Pacific/Tahiti"),
("Pacific/Tarawa", "Pacific/Tarawa"),
("Pacific/Tongatapu", "Pacific/Tongatapu"),
("Pacific/Wake", "Pacific/Wake"),
("Pacific/Wallis", "Pacific/Wallis"),
("US/Alaska", "US/Alaska"),
("US/Arizona", "US/Arizona"),
("US/Central", "US/Central"),
("US/Eastern", "US/Eastern"),
("US/Hawaii", "US/Hawaii"),
("US/Mountain", "US/Mountain"),
("US/Pacific", "US/Pacific"),
("UTC", "UTC"),
],
default="UTC",
max_length=100,
verbose_name="Timezone",
),
),
]
|
friture | generator | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (C) 2009 Timothée Lecomte
# This file is part of Friture.
#
# Friture is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3 as published by
# the Free Software Foundation.
#
# Friture is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Friture. If not, see <http://www.gnu.org/licenses/>.
import logging
import numpy as np
import sounddevice
from friture.audiobackend import SAMPLING_RATE, AudioBackend
from friture.generators.burst import BurstGenerator
from friture.generators.pink import PinkGenerator
from friture.generators.sine import SineGenerator
from friture.generators.sweep import SweepGenerator
from friture.generators.white import WhiteGenerator
from PyQt5 import QtCore, QtGui, QtWidgets
SMOOTH_DISPLAY_TIMER_PERIOD_MS = 25
FRAMES_PER_BUFFER = 2 * 1024
DEFAULT_GENERATOR_KIND_INDEX = 0
RAMP_LENGTH = 3e-3 # 3 ms
(STOPPED, STARTING, PLAYING, STOPPING) = list(range(0, 4))
class Generator_Widget(QtWidgets.QWidget):
stream_stop_ramp_finished = QtCore.pyqtSignal()
def __init__(self, parent):
super().__init__(parent)
self.logger = logging.getLogger(__name__)
self.audiobuffer = None
self.setObjectName("Generator_Widget")
self.grid_layout = QtWidgets.QGridLayout(self)
self.grid_layout.setObjectName("grid_layout")
self.generators = []
self.generators.append(SineGenerator(self))
self.generators.append(WhiteGenerator(self))
self.generators.append(PinkGenerator(self))
self.generators.append(SweepGenerator(self))
self.generators.append(BurstGenerator(self))
self.combobox_generator_kind = QtWidgets.QComboBox(self)
self.combobox_generator_kind.setObjectName("combobox_generator_kind")
self.stacked_settings_layout = QtWidgets.QStackedLayout()
for generator in self.generators:
self.combobox_generator_kind.addItem(generator.name)
self.stacked_settings_layout.addWidget(generator.settingsWidget())
self.combobox_generator_kind.setCurrentIndex(DEFAULT_GENERATOR_KIND_INDEX)
self.t = 0.0
self.t_start = 0.0
self.t_stop = RAMP_LENGTH
self.state = STOPPED
self.stream_stop_ramp_finished.connect(self.stop_stream_after_ramp)
self.device = None
self.stream = None
# we will try to open all the output devices until one
# works, starting with the default output device
for device in AudioBackend().output_devices:
self.logger.info("Opening the stream for device '%s'", device["name"])
try:
self.stream = AudioBackend().open_output_stream(
device, self.audio_callback
)
self.stream.start()
self.stream.stop()
self.device = device
self.logger.info("Stream opened successfully")
break
except Exception:
self.logger.exception("Failed to open stream")
self.start_stop_button = QtWidgets.QPushButton(self)
startStopIcon = QtGui.QIcon()
startStopIcon.addPixmap(
QtGui.QPixmap(":/images-src/start.svg"), QtGui.QIcon.Normal, QtGui.QIcon.Off
)
startStopIcon.addPixmap(
QtGui.QPixmap(":/images-src/stop.svg"), QtGui.QIcon.Normal, QtGui.QIcon.On
)
startStopIcon.addPixmap(
QtGui.QPixmap(":/images-src/stop.svg"), QtGui.QIcon.Active, QtGui.QIcon.On
)
startStopIcon.addPixmap(
QtGui.QPixmap(":/images-src/stop.svg"), QtGui.QIcon.Selected, QtGui.QIcon.On
)
startStopIcon.addPixmap(
QtGui.QPixmap(":/images-src/stop.svg"), QtGui.QIcon.Disabled, QtGui.QIcon.On
)
self.start_stop_button.setIcon(startStopIcon)
self.start_stop_button.setObjectName("generatorStartStop")
self.start_stop_button.setText("Start")
self.start_stop_button.setToolTip("Start/Stop generator")
self.start_stop_button.setCheckable(True)
self.start_stop_button.setChecked(False)
self.grid_layout.addWidget(self.start_stop_button, 0, 0, 1, 1)
self.grid_layout.addWidget(self.combobox_generator_kind, 1, 0, 1, 1)
self.grid_layout.addLayout(self.stacked_settings_layout, 2, 0, 1, 1)
self.combobox_generator_kind.activated.connect(
self.stacked_settings_layout.setCurrentIndex
)
self.start_stop_button.toggled.connect(self.start_stop_button_toggle)
# initialize the settings dialog
devices = AudioBackend().get_readable_output_devices_list()
if self.device is not None:
device_index = AudioBackend().output_devices.index(self.device)
else:
device_index = None
self.settings_dialog = Generator_Settings_Dialog(self, devices, device_index)
self.settings_dialog.combobox_output_device.currentIndexChanged.connect(
self.device_changed
)
# channels = AudioBackend().get_readable_current_output_channels()
# for channel in channels:
# self.settings_dialog.comboBox_firstChannel.addItem(channel)
# self.settings_dialog.comboBox_secondChannel.addItem(channel)
# current_device = AudioBackend().get_readable_current_output_device()
# self.settings_dialog.combobox_output_device.setCurrentIndex(current_device)
# first_channel = AudioBackend().get_current_first_channel()
# self.settings_dialog.comboBox_firstChannel.setCurrentIndex(first_channel)
# second_channel = AudioBackend().get_current_second_channel()
# self.settings_dialog.comboBox_secondChannel.setCurrentIndex(second_channel)
def device_changed(self, index):
device = AudioBackend().output_devices[index]
# save current stream in case we need to restore it
previous_stream = self.stream
previous_device = self.device
self.logger.info("Trying to write to output device '%s'", device["name"])
# first see if the format is supported by PortAudio
try:
AudioBackend().is_output_format_supported(device, np.int16)
except sounddevice.PortAudioError as err:
self.on_device_change_error(
previous_stream,
previous_device,
"Format is not supported: {0}".format(err),
)
return
try:
self.stream = AudioBackend().open_output_stream(device, self.audio_callback)
self.device = device
self.stream.start()
if self.state not in [STARTING, PLAYING]:
self.stream.stop()
except (sounddevice.PortAudioError, OSError) as err:
self.on_device_change_error(
previous_stream,
previous_device,
"Failed to open output device: {0}".format(err),
)
return
self.logger.info("Success")
previous_stream.stop()
self.settings_dialog.combobox_output_device.setCurrentIndex(
AudioBackend().output_devices.index(self.device)
)
def on_device_change_error(self, previous_stream, previous_device, message):
self.logger.exception(message)
if self.stream is not None:
self.stream.stop()
# restore previous stream
self.stream = previous_stream
self.device = previous_device
# Note: the error message is a child of the settings dialog, so that
# that dialog remains on top when the error message is closed
error_message = QtWidgets.QErrorMessage(self.settings_dialog)
error_message.setWindowTitle("Output device error")
error_message.showMessage(
"Cannot use the selected output device; reverting to the previous one. Reason: "
+ message
)
self.settings_dialog.combobox_output_device.setCurrentIndex(
AudioBackend().output_devices.index(self.device)
)
def settings_called(self, checked):
self.settings_dialog.show()
# method
def set_buffer(self, buffer):
self.audiobuffer = buffer
# slot
def start_stop_button_toggle(self, checked):
if checked:
self.start_stop_button.setText("Stop")
if self.state == STOPPED or self.state == STOPPING:
self.state = STARTING
self.t_start = 0.0
self.stream.start()
else:
self.start_stop_button.setText("Start")
if self.state == PLAYING or self.state == STARTING:
self.state = STOPPING
self.t_stop = RAMP_LENGTH
# will stop at the end of the ramp
def stop_stream_after_ramp(self):
self.stream.stop()
def handle_new_data(self, floatdata):
# the generator does not use the input data at all
return
def audio_callback(self, out_data, frame_count, time_info, status):
if status:
self.logger.info(status)
N = frame_count
if self.state == STOPPED:
out_data.fill(0)
return
# if we cannot write any sample, return now
if N == 0:
return
t = self.t + np.arange(0, N / float(SAMPLING_RATE), 1.0 / float(SAMPLING_RATE))
name = self.combobox_generator_kind.currentText()
generators = [
generator for generator in self.generators if generator.name == name
]
if len(generators) == 0:
self.logger.error("generator error: no generator found with the selected name")
out_data.fill(0)
return
if len(generators) > 1:
self.logger.error(
"generator error: two or more generators share the same name"
)
out_data.fill(0)
return
generator = generators[0]
floatdata = generator.signal(t)
# add smooth ramps at start/stop to avoid undesirable bursts
if self.state == STARTING:
# add a ramp at the start
t_ramp = self.t_start + np.arange(
0, N / float(SAMPLING_RATE), 1.0 / float(SAMPLING_RATE)
)
t_ramp = np.clip(t_ramp, 0.0, RAMP_LENGTH)
floatdata *= t_ramp / RAMP_LENGTH
self.t_start += N / float(SAMPLING_RATE)
if self.t_start > RAMP_LENGTH:
self.state = PLAYING
if self.state == STOPPING:
self.logger.info("stopping %f %d", self.t_stop, N)
# add a ramp at the end
t_ramp = self.t_stop - np.arange(
0, N / float(SAMPLING_RATE), 1.0 / float(SAMPLING_RATE)
)
t_ramp = np.clip(t_ramp, 0.0, RAMP_LENGTH)
floatdata *= t_ramp / RAMP_LENGTH
self.t_stop -= N / float(SAMPLING_RATE)
if self.t_stop < 0.0:
self.state = STOPPED
self.stream_stop_ramp_finished.emit()
# output channels are interleaved
# we output to all channels simultaneously with the same data
maxOutputChannels = AudioBackend().get_device_outputchannels_count(self.device)
floatdata = np.tile(floatdata, (maxOutputChannels, 1)).transpose()
int16info = np.iinfo(np.int16)
norm_coeff = min(abs(int16info.min), int16info.max)
intdata = (
np.clip(floatdata, -1.0, 1.0) * norm_coeff
).astype(np.int16)
# update the time counter
self.t += N / float(SAMPLING_RATE)
# data copy
out_data[:] = intdata
def canvasUpdate(self):
return
def saveState(self, settings):
settings.setValue("generator kind", self.combobox_generator_kind.currentIndex())
for generator in self.generators:
generator.settingsWidget().saveState(settings)
self.settings_dialog.saveState(settings)
def restoreState(self, settings):
generator_kind = settings.value(
"generator kind", DEFAULT_GENERATOR_KIND_INDEX, type=int
)
self.combobox_generator_kind.setCurrentIndex(generator_kind)
self.stacked_settings_layout.setCurrentIndex(generator_kind)
for generator in self.generators:
generator.settingsWidget().restoreState(settings)
self.settings_dialog.restoreState(settings)
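The float-to-int16 conversion in audio_callback above can be sketched in pure Python (no numpy): the key points are clipping to [-1, 1] *before* scaling, and using the largest magnitude that fits symmetrically in int16. The helper name below is hypothetical, not part of Friture.

```python
# Hypothetical helper sketching audio_callback's normalization:
# clip samples to [-1, 1] first, then scale by the largest
# magnitude that fits symmetrically in int16 (32767).
INT16_MIN, INT16_MAX = -32768, 32767
NORM_COEFF = min(abs(INT16_MIN), INT16_MAX)  # 32767

def float_to_int16(samples):
    return [int(max(-1.0, min(1.0, s)) * NORM_COEFF) for s in samples]
```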
class Generator_Settings_Dialog(QtWidgets.QDialog):
def __init__(self, parent, devices, device_index):
super().__init__(parent)
self.setWindowTitle("Generator settings")
self.form_layout = QtWidgets.QFormLayout(self)
self.combobox_output_device = QtWidgets.QComboBox(self)
self.combobox_output_device.setObjectName("comboBox_outputDevice")
self.form_layout.addRow(
"Select the output device:", self.combobox_output_device
)
self.setLayout(self.form_layout)
for device in devices:
self.combobox_output_device.addItem(device)
if device_index is not None:
self.combobox_output_device.setCurrentIndex(device_index)
def saveState(self, settings):
# for the output device, we search by name instead of index, since
# we do not know if the device order stays the same between sessions
settings.setValue("deviceName", self.combobox_output_device.currentText())
def restoreState(self, settings):
device_name = settings.value("deviceName", "")
device_index = self.combobox_output_device.findText(device_name)
# change the device only if it exists in the device list
if device_index >= 0:
self.combobox_output_device.setCurrentIndex(device_index)
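The restore-by-name logic in restoreState above can be sketched standalone; `find_device_index` is a hypothetical helper, not Friture API.

```python
# Hypothetical sketch of restoreState's device lookup: search the
# saved device name in the current device list, and keep the
# existing selection when the saved name is no longer present.
def find_device_index(devices, saved_name, current_index=0):
    if saved_name in devices:
        return devices.index(saved_name)
    return current_index  # device list changed; keep current selection
```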
|
blocks | qa_vector_sink_source | #!/usr/bin/env python
#
# Copyright 2008,2010,2013 Free Software Foundation, Inc.
#
# This file is part of GNU Radio
#
# SPDX-License-Identifier: GPL-3.0-or-later
#
#
import math
import pmt
from gnuradio import blocks, gr, gr_unittest
def make_tag(key, value, offset, srcid=None):
tag = gr.tag_t()
tag.key = pmt.string_to_symbol(key)
tag.value = pmt.to_pmt(value)
tag.offset = offset
if srcid is not None:
tag.srcid = pmt.to_pmt(srcid)
return tag
def compare_tags(a, b):
return (
a.offset == b.offset
and pmt.equal(a.key, b.key)
and pmt.equal(a.value, b.value)
and pmt.equal(a.srcid, b.srcid)
)
class test_vector_sink_source(gr_unittest.TestCase):
def setUp(self):
self.tb = gr.top_block()
def tearDown(self):
self.tb = None
def test_001(self):
# Test that sink has data set in source for the simplest case
src_data = [float(x) for x in range(16)]
expected_result = src_data
src = blocks.vector_source_f(src_data)
dst = blocks.vector_sink_f()
self.tb.connect(src, dst)
self.tb.run()
result_data = dst.data()
self.assertEqual(expected_result, result_data)
def test_002(self):
# Test vectors (the gnuradio vector I/O type)
src_data = [float(x) for x in range(16)]
expected_result = src_data
src = blocks.vector_source_f(src_data, False, 2)
dst = blocks.vector_sink_f(2)
self.tb.connect(src, dst)
self.tb.run()
result_data = dst.data()
self.assertEqual(expected_result, result_data)
def test_003(self):
# Test that we can only make vectors (the I/O type) if the input
# data length is a multiple of the vector size
src_data = [float(x) for x in range(16)]
self.assertRaises(
ValueError, lambda: blocks.vector_source_f(src_data, False, 3)
)
def test_004(self):
# Test sending and receiving tagged streams
src_data = [float(x) for x in range(16)]
expected_result = src_data
src_tags = [make_tag("key", "val", 0, "src")]
expected_tags = src_tags[:]
src = blocks.vector_source_f(src_data, repeat=False, tags=src_tags)
dst = blocks.vector_sink_f()
self.tb.connect(src, dst)
self.tb.run()
result_data = dst.data()
result_tags = dst.tags()
self.assertEqual(expected_result, result_data)
self.assertEqual(len(result_tags), 1)
self.assertTrue(compare_tags(expected_tags[0], result_tags[0]))
def test_005(self):
# Test that repeat works (with tagged streams)
length = 16
src_data = [float(x) for x in range(length)]
expected_result = src_data + src_data
src_tags = [make_tag("key", "val", 0, "src")]
expected_tags = [
make_tag("key", "val", 0, "src"),
make_tag("key", "val", length, "src"),
]
src = blocks.vector_source_f(src_data, repeat=True, tags=src_tags)
head = blocks.head(gr.sizeof_float, 2 * length)
dst = blocks.vector_sink_f()
self.tb.connect(src, head, dst)
self.tb.run()
result_data = dst.data()
result_tags = dst.tags()
self.assertEqual(expected_result, result_data)
self.assertEqual(len(result_tags), 2)
self.assertTrue(compare_tags(expected_tags[0], result_tags[0]))
self.assertTrue(compare_tags(expected_tags[1], result_tags[1]))
def test_006(self):
# Test set_data
src_data = [float(x) for x in range(16)]
expected_result = src_data
src = blocks.vector_source_f((3, 1, 4))
dst = blocks.vector_sink_f()
src.set_data(src_data)
self.tb.connect(src, dst)
self.tb.run()
result_data = dst.data()
self.assertEqual(expected_result, result_data)
def test_007(self):
# Test set_repeat
src_data = [float(x) for x in range(16)]
expected_result = src_data
src = blocks.vector_source_f(src_data, True)
dst = blocks.vector_sink_f()
src.set_repeat(False)
self.tb.connect(src, dst)
# will timeout if set_repeat does not work
self.tb.run()
result_data = dst.data()
self.assertEqual(expected_result, result_data)
if __name__ == "__main__":
gr_unittest.run(test_vector_sink_source)
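The compare_tags helper above depends on GNU Radio's pmt types; a pure-Python analog using plain dicts (an assumption for illustration, not GNU Radio API) makes the field-by-field equality explicit and runs without GNU Radio installed.

```python
# Pure-Python analog of compare_tags, using plain dicts whose
# field names mirror gr.tag_t (offset, key, value, srcid).
def compare_tag_dicts(a, b):
    return all(a[f] == b[f] for f in ("offset", "key", "value", "srcid"))
```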
|
BasicDrawing | MyView | import AppDrawing
import FrameworkTextDrawing
import objc
import UIHandling
from Cocoa import *
# XXX: Why are these global?
_drawingCommand = UIHandling.kHICommandSimpleRect
_pdfDocument = None
class MyView(NSView):
currentMenuItem = objc.IBOutlet()
def initWithFrame_(self, frameRect):
self = super(MyView, self).initWithFrame_(frameRect)
if self is None:
return None
global _pdfDocument
_pdfDocument = None
return self
if False:
def isFlipped(self):
return True
def drawRect_(self, rect):
context = NSGraphicsContext.currentContext().graphicsPort()
if _pdfDocument is None:
if _drawingCommand in (
UIHandling.kHICommandDrawNSString,
UIHandling.kHICommandDrawNSLayoutMgr,
UIHandling.kHICommandDrawCustomNSLayoutMgr,
):
if _drawingCommand == UIHandling.kHICommandDrawNSString:
FrameworkTextDrawing.drawNSStringWithAttributes()
elif _drawingCommand == UIHandling.kHICommandDrawNSLayoutMgr:
FrameworkTextDrawing.drawWithNSLayout()
else:
FrameworkTextDrawing.drawWithCustomNSLayout()
else:
AppDrawing.DispatchDrawing(context, _drawingCommand)
else:
mediaRect = CGPDFDocumentGetMediaBox(_pdfDocument, 1)
mediaRect.origin.x = mediaRect.origin.y = 0
CGContextDrawPDFDocument(context, mediaRect, _pdfDocument, 1)
@objc.IBAction
def setDrawCommand_(self, sender):
global _drawingCommand, _pdfDocument
newCommand = sender.tag()
if _drawingCommand != newCommand:
_drawingCommand = newCommand
# The view needs to be redisplayed since there is a new drawing command.
self.setNeedsDisplay_(True)
# Disable previous menu item.
if self.currentMenuItem is not None:
self.currentMenuItem.setState_(NSOffState)
# Update the current item.
self.currentMenuItem = sender
# Enable new menu item.
self.currentMenuItem.setState_(NSOnState)
# If we were showing a pasted document, let's get rid of it.
if _pdfDocument:
_pdfDocument = None
def currentPrintableCommand(self):
# When the current command caches its drawing in a bitmap
# context or a layer, the best representation for printing
# or exporting is to skip the caching entirely.
if _drawingCommand in (
UIHandling.kHICommandDrawOffScreenImage,
UIHandling.kHICommandDrawWithLayer,
):
return UIHandling.kHICommandDrawNoOffScreenImage
return _drawingCommand
def print_(self, sender):
global _drawingCommand
savedDrawingCommand = _drawingCommand
# Set the drawing command to be one that is printable.
_drawingCommand = self.currentPrintableCommand()
# Do the printing operation on the view.
NSPrintOperation.printOperationWithView_(self).runOperation()
# Restore the drawing command that was active before the printing operation.
_drawingCommand = savedDrawingCommand
def acceptsFirstResponder(self):
return True
@objc.IBAction
def copy_(self, sender):
addPDFDataToPasteBoard(_drawingCommand)
@objc.IBAction
def paste_(self, sender):
global _pdfDocument
newPDFDocument = createNewPDFRefFromPasteBoard()
if newPDFDocument is not None:
_pdfDocument = newPDFDocument
# The view needs to be redisplayed since there is
# a new PDF document.
self.setNeedsDisplay_(True)
# Return the number of pages available for printing. For this
# application it is always 1.
def knowsPageRange_(self):
return True, NSRange(1, 1)
# Return the drawing rectangle for a particular page number.
# For this application it is always the page width and height.
def rectForPage_(self, page):
pi = NSPrintOperation.currentOperation().printInfo()
# Calculate the page height in points.
paperSize = pi.paperSize()
return NSMakeRect(0, 0, paperSize.width, paperSize.height)
def validateMenuItem_(self, menuItem):
if menuItem.tag() == _drawingCommand:
self.currentMenuItem = menuItem
menuItem.setState_(True)
else:
menuItem.setState_(False)
return True
|
Arch | import3DS | # ***************************************************************************
# * Copyright (c) 2016 Yorik van Havre <yorik@uncreated.net> *
# * *
# * This program is free software; you can redistribute it and/or modify *
# * it under the terms of the GNU Lesser General Public License (LGPL) *
# * as published by the Free Software Foundation; either version 2 of *
# * the License, or (at your option) any later version. *
# * for detail see the LICENCE text file. *
# * *
# * This program is distributed in the hope that it will be useful, *
# * but WITHOUT ANY WARRANTY; without even the implied warranty of *
# * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
# * GNU Library General Public License for more details. *
# * *
# * You should have received a copy of the GNU Library General Public *
# * License along with this program; if not, write to the Free Software *
# * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 *
# * USA *
# * *
# ***************************************************************************
import os
import FreeCAD
import Mesh
__title__ = "FreeCAD 3DS importer"
__author__ = "Yorik van Havre"
__url__ = "https://www.freecad.org"
DEBUG = True
## @package import3DS
# \ingroup ARCH
# \brief 3DS file format importer
#
# This module provides tools to import 3DS files.
def check3DS():
"checks if Dice3DS is available"
global dom3ds
dom3ds = None
try:
from Dice3DS import dom3ds
except ImportError:
FreeCAD.Console.PrintError("Dice3DS not found, 3DS support is disabled.\n")
return False
else:
return True
def open(filename):
"called when freecad wants to open a file"
if not check3DS():
return
docname = os.path.splitext(os.path.basename(filename))[0]
doc = FreeCAD.newDocument(docname)
doc.Label = docname
FreeCAD.ActiveDocument = doc
read(filename)
return doc
def insert(filename, docname):
"called when freecad wants to import a file"
if not check3DS():
return
try:
doc = FreeCAD.getDocument(docname)
except NameError:
doc = FreeCAD.newDocument(docname)
FreeCAD.ActiveDocument = doc
read(filename)
return doc
def read(filename):
dom = dom3ds.read_3ds_file(filename, tight=False)
for j, d_nobj in enumerate(dom.mdata.objects):
if type(d_nobj.obj) != dom3ds.N_TRI_OBJECT:
continue
verts = []
if d_nobj.obj.points:
for d_point in d_nobj.obj.points.array:
verts.append([d_point[0], d_point[1], d_point[2]])
meshdata = []
for d_face in d_nobj.obj.faces.array:
meshdata.append([verts[int(d_face[i])] for i in range(3)])
m = [tuple(r) for r in d_nobj.obj.matrix.array]
m = m[0] + m[1] + m[2] + m[3]
placement = FreeCAD.Placement(FreeCAD.Matrix(*m))
mesh = Mesh.Mesh(meshdata)
obj = FreeCAD.ActiveDocument.addObject("Mesh::Feature", "Mesh")
obj.Mesh = mesh
obj.Placement = placement
else:
print("Skipping object without vertices array: ", d_nobj.obj)
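The matrix handling in read() above flattens the four 4-tuple rows of a 3DS transform into the 16 positional values that FreeCAD.Matrix expects. A standalone sketch (no FreeCAD import; the identity matrix is sample data):

```python
# Sketch of the row-flattening in read(): four 4-tuples of matrix
# rows become one 16-tuple, usable as FreeCAD.Matrix(*flat).
rows = [(1.0, 0.0, 0.0, 0.0),
        (0.0, 1.0, 0.0, 0.0),
        (0.0, 0.0, 1.0, 0.0),
        (0.0, 0.0, 0.0, 1.0)]
flat = rows[0] + rows[1] + rows[2] + rows[3]
```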
|
app | test_installer | # SPDX-License-Identifier: LGPL-2.1-or-later
# ***************************************************************************
# * *
# * Copyright (c) 2022 FreeCAD Project Association *
# * *
# * This file is part of FreeCAD. *
# * *
# * FreeCAD is free software: you can redistribute it and/or modify it *
# * under the terms of the GNU Lesser General Public License as *
# * published by the Free Software Foundation, either version 2.1 of the *
# * License, or (at your option) any later version. *
# * *
# * FreeCAD is distributed in the hope that it will be useful, but *
# * WITHOUT ANY WARRANTY; without even the implied warranty of *
# * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU *
# * Lesser General Public License for more details. *
# * *
# * You should have received a copy of the GNU Lesser General Public *
# * License along with FreeCAD. If not, see *
# * <https://www.gnu.org/licenses/>. *
# * *
# ***************************************************************************
"""Contains the unit test class for addonmanager_installer.py non-GUI functionality."""
import os
import shutil
import sys
import tempfile
import unittest
from unittest.mock import Mock
from zipfile import ZipFile
sys.path.append("../../") # So the IDE can find the imports below
import FreeCAD
from Addon import Addon
from addonmanager_git import GitManager, initialize_git
from addonmanager_installer import AddonInstaller, InstallationMethod, MacroInstaller
from addonmanager_metadata import MetadataReader
from AddonManagerTest.app.mocks import MockAddon, MockMacro
class TestAddonInstaller(unittest.TestCase):
"""Test class for addonmanager_installer.py non-GUI functionality"""
MODULE = "test_installer" # file name without extension
def setUp(self):
"""Initialize data needed for all tests"""
# self.start_time = time.perf_counter()
self.test_data_dir = os.path.join(
FreeCAD.getHomePath(), "Mod", "AddonManager", "AddonManagerTest", "data"
)
self.real_addon = Addon(
"TestAddon",
"https://github.com/FreeCAD/FreeCAD-addons",
Addon.Status.NOT_INSTALLED,
"master",
)
self.mock_addon = MockAddon()
def tearDown(self):
"""Finalize the test."""
# end_time = time.perf_counter()
# print(f"Test '{self.id()}' ran in {end_time-self.start_time:.4f} seconds")
def test_validate_object(self):
"""An object is valid if it has a name, url, and branch attribute."""
AddonInstaller._validate_object(self.real_addon) # Won't raise
AddonInstaller._validate_object(self.mock_addon) # Won't raise
class NoName:
def __init__(self):
self.url = "https://github.com/FreeCAD/FreeCAD-addons"
self.branch = "master"
no_name = NoName()
with self.assertRaises(RuntimeError):
AddonInstaller._validate_object(no_name)
class NoUrl:
def __init__(self):
self.name = "TestAddon"
self.branch = "master"
no_url = NoUrl()
with self.assertRaises(RuntimeError):
AddonInstaller._validate_object(no_url)
class NoBranch:
def __init__(self):
self.name = "TestAddon"
self.url = "https://github.com/FreeCAD/FreeCAD-addons"
no_branch = NoBranch()
with self.assertRaises(RuntimeError):
AddonInstaller._validate_object(no_branch)
def test_update_metadata(self):
"""If a metadata file exists in the installation location, it should be
loaded."""
addon = Mock()
addon.name = "MockAddon"
installer = AddonInstaller(addon, [])
installer._update_metadata() # Does nothing, but should not crash
installer = AddonInstaller(self.real_addon, [])
with tempfile.TemporaryDirectory() as temp_dir:
installer.installation_path = temp_dir
installer._update_metadata()
addon_dir = os.path.join(temp_dir, self.real_addon.name)
os.mkdir(addon_dir)
shutil.copy(
os.path.join(self.test_data_dir, "good_package.xml"),
os.path.join(addon_dir, "package.xml"),
)
good_metadata = MetadataReader.from_file(
os.path.join(addon_dir, "package.xml")
)
installer._update_metadata()
self.assertEqual(self.real_addon.installed_version, good_metadata.version)
def test_finalize_zip_installation_non_github(self):
"""Ensure that zip files are correctly extracted."""
with tempfile.TemporaryDirectory() as temp_dir:
test_simple_repo = os.path.join(self.test_data_dir, "test_simple_repo.zip")
non_gh_mock = MockAddon()
non_gh_mock.url = test_simple_repo
non_gh_mock.name = "NonGitHubMock"
installer = AddonInstaller(non_gh_mock, [])
installer.installation_path = temp_dir
installer._finalize_zip_installation(test_simple_repo)
expected_location = os.path.join(temp_dir, non_gh_mock.name, "README")
self.assertTrue(
os.path.isfile(expected_location), "Non-GitHub zip extraction failed"
)
def test_finalize_zip_installation_github(self):
        """Ensure that GitHub-style zips (code inside a branch subdirectory) extract correctly."""
with tempfile.TemporaryDirectory() as temp_dir:
test_github_style_repo = os.path.join(
self.test_data_dir, "test_github_style_repo.zip"
)
self.mock_addon.url = test_github_style_repo
self.mock_addon.branch = "master"
installer = AddonInstaller(self.mock_addon, [])
installer.installation_path = temp_dir
installer._finalize_zip_installation(test_github_style_repo)
expected_location = os.path.join(temp_dir, self.mock_addon.name, "README")
self.assertTrue(
os.path.isfile(expected_location), "GitHub zip extraction failed"
)
def test_code_in_branch_subdirectory_true(self):
"""When there is a subdirectory with the branch name in it, find it"""
installer = AddonInstaller(self.mock_addon, [])
with tempfile.TemporaryDirectory() as temp_dir:
os.mkdir(
os.path.join(
temp_dir, f"{self.mock_addon.name}-{self.mock_addon.branch}"
)
)
result = installer._code_in_branch_subdirectory(temp_dir)
self.assertTrue(result, "Failed to find ZIP subdirectory")
def test_code_in_branch_subdirectory_false(self):
"""When there is not a subdirectory with the branch name in it, don't find
one"""
installer = AddonInstaller(self.mock_addon, [])
with tempfile.TemporaryDirectory() as temp_dir:
result = installer._code_in_branch_subdirectory(temp_dir)
self.assertFalse(result, "Found ZIP subdirectory when there was none")
def test_code_in_branch_subdirectory_more_than_one(self):
"""When there are multiple subdirectories, never find a branch subdirectory"""
installer = AddonInstaller(self.mock_addon, [])
with tempfile.TemporaryDirectory() as temp_dir:
os.mkdir(
os.path.join(
temp_dir, f"{self.mock_addon.name}-{self.mock_addon.branch}"
)
)
os.mkdir(os.path.join(temp_dir, "AnotherSubdir"))
result = installer._code_in_branch_subdirectory(temp_dir)
self.assertFalse(
result, "Found ZIP subdirectory when there were multiple subdirs"
)
def test_move_code_out_of_subdirectory(self):
"""All files are moved out and the subdirectory is deleted"""
installer = AddonInstaller(self.mock_addon, [])
with tempfile.TemporaryDirectory() as temp_dir:
subdir = os.path.join(
temp_dir, f"{self.mock_addon.name}-{self.mock_addon.branch}"
)
os.mkdir(subdir)
with open(os.path.join(subdir, "README.txt"), "w", encoding="utf-8") as f:
f.write("# Test file for unit testing")
with open(
os.path.join(subdir, "AnotherFile.txt"), "w", encoding="utf-8"
) as f:
f.write("# Test file for unit testing")
installer._move_code_out_of_subdirectory(temp_dir)
self.assertTrue(os.path.isfile(os.path.join(temp_dir, "README.txt")))
self.assertTrue(os.path.isfile(os.path.join(temp_dir, "AnotherFile.txt")))
self.assertFalse(os.path.isdir(subdir))
def test_install_by_git(self):
"""Test using git to install. Depends on there being a local git
installation: the test is skipped if there is no local git."""
git_manager = initialize_git()
if not git_manager:
self.skipTest("git not found, skipping git installer tests")
return
# Our test git repo has to be in a zipfile, otherwise it cannot itself be
# stored in git, since it has a .git subdirectory.
with tempfile.TemporaryDirectory() as temp_dir:
git_repo_zip = os.path.join(self.test_data_dir, "test_repo.zip")
with ZipFile(git_repo_zip, "r") as zip_repo:
zip_repo.extractall(temp_dir)
mock_addon = MockAddon()
mock_addon.url = os.path.join(temp_dir, "test_repo")
mock_addon.branch = "main"
installer = AddonInstaller(mock_addon, [])
installer.installation_path = os.path.join(temp_dir, "installed_addon")
installer._install_by_git()
self.assertTrue(os.path.exists(installer.installation_path))
addon_name_dir = os.path.join(installer.installation_path, mock_addon.name)
self.assertTrue(os.path.exists(addon_name_dir))
readme = os.path.join(addon_name_dir, "README.md")
self.assertTrue(os.path.exists(readme))
def test_install_by_copy(self):
"""Test using a simple filesystem copy to install an addon."""
with tempfile.TemporaryDirectory() as temp_dir:
git_repo_zip = os.path.join(self.test_data_dir, "test_repo.zip")
with ZipFile(git_repo_zip, "r") as zip_repo:
zip_repo.extractall(temp_dir)
mock_addon = MockAddon()
mock_addon.url = os.path.join(temp_dir, "test_repo")
mock_addon.branch = "main"
installer = AddonInstaller(mock_addon, [])
installer.addon_to_install = mock_addon
installer.installation_path = os.path.join(temp_dir, "installed_addon")
installer._install_by_copy()
self.assertTrue(os.path.exists(installer.installation_path))
addon_name_dir = os.path.join(installer.installation_path, mock_addon.name)
self.assertTrue(os.path.exists(addon_name_dir))
readme = os.path.join(addon_name_dir, "README.md")
self.assertTrue(os.path.exists(readme))
def test_determine_install_method_local_path(self):
"""Test which install methods are accepted for a local path"""
with tempfile.TemporaryDirectory() as temp_dir:
installer = AddonInstaller(self.mock_addon, [])
method = installer._determine_install_method(
temp_dir, InstallationMethod.COPY
)
self.assertEqual(method, InstallationMethod.COPY)
git_manager = initialize_git()
if git_manager:
method = installer._determine_install_method(
temp_dir, InstallationMethod.GIT
)
self.assertEqual(method, InstallationMethod.GIT)
method = installer._determine_install_method(
temp_dir, InstallationMethod.ZIP
)
self.assertIsNone(method)
method = installer._determine_install_method(
temp_dir, InstallationMethod.ANY
)
self.assertEqual(method, InstallationMethod.COPY)
def test_determine_install_method_file_url(self):
"""Test which install methods are accepted for a file:// url"""
with tempfile.TemporaryDirectory() as temp_dir:
installer = AddonInstaller(self.mock_addon, [])
temp_dir = "file://" + temp_dir.replace(os.path.sep, "/")
method = installer._determine_install_method(
temp_dir, InstallationMethod.COPY
)
self.assertEqual(method, InstallationMethod.COPY)
git_manager = initialize_git()
if git_manager:
method = installer._determine_install_method(
temp_dir, InstallationMethod.GIT
)
self.assertEqual(method, InstallationMethod.GIT)
method = installer._determine_install_method(
temp_dir, InstallationMethod.ZIP
)
self.assertIsNone(method)
method = installer._determine_install_method(
temp_dir, InstallationMethod.ANY
)
self.assertEqual(method, InstallationMethod.COPY)
def test_determine_install_method_local_zip(self):
"""Test which install methods are accepted for a local path to a zipfile"""
with tempfile.TemporaryDirectory() as temp_dir:
installer = AddonInstaller(self.mock_addon, [])
temp_file = os.path.join(temp_dir, "dummy.zip")
method = installer._determine_install_method(
temp_file, InstallationMethod.COPY
)
self.assertEqual(method, InstallationMethod.ZIP)
method = installer._determine_install_method(
temp_file, InstallationMethod.GIT
)
self.assertIsNone(method)
method = installer._determine_install_method(
temp_file, InstallationMethod.ZIP
)
self.assertEqual(method, InstallationMethod.ZIP)
method = installer._determine_install_method(
temp_file, InstallationMethod.ANY
)
self.assertEqual(method, InstallationMethod.ZIP)
def test_determine_install_method_remote_zip(self):
"""Test which install methods are accepted for a remote path to a zipfile"""
installer = AddonInstaller(self.mock_addon, [])
temp_file = "https://freecad.org/dummy.zip" # Doesn't have to actually exist!
method = installer._determine_install_method(temp_file, InstallationMethod.COPY)
self.assertIsNone(method)
method = installer._determine_install_method(temp_file, InstallationMethod.GIT)
self.assertIsNone(method)
method = installer._determine_install_method(temp_file, InstallationMethod.ZIP)
self.assertEqual(method, InstallationMethod.ZIP)
method = installer._determine_install_method(temp_file, InstallationMethod.ANY)
self.assertEqual(method, InstallationMethod.ZIP)
def test_determine_install_method_https_known_sites_copy(self):
"""Test which install methods are accepted for an https GitHub URL"""
installer = AddonInstaller(self.mock_addon, [])
installer.git_manager = True
for site in ["github.org", "gitlab.org", "framagit.org", "salsa.debian.org"]:
with self.subTest(site=site):
temp_file = (
f"https://{site}/dummy/dummy" # Doesn't have to actually exist!
)
method = installer._determine_install_method(
temp_file, InstallationMethod.COPY
)
self.assertIsNone(method, f"Allowed copying from {site} URL")
def test_determine_install_method_https_known_sites_git(self):
"""Test which install methods are accepted for an https GitHub URL"""
installer = AddonInstaller(self.mock_addon, [])
installer.git_manager = True
for site in ["github.org", "gitlab.org", "framagit.org", "salsa.debian.org"]:
with self.subTest(site=site):
temp_file = (
f"https://{site}/dummy/dummy" # Doesn't have to actually exist!
)
method = installer._determine_install_method(
temp_file, InstallationMethod.GIT
)
self.assertEqual(
method,
InstallationMethod.GIT,
f"Failed to allow git access to {site} URL",
)
def test_determine_install_method_https_known_sites_zip(self):
"""Test which install methods are accepted for an https GitHub URL"""
installer = AddonInstaller(self.mock_addon, [])
installer.git_manager = True
for site in ["github.org", "gitlab.org", "framagit.org", "salsa.debian.org"]:
with self.subTest(site=site):
temp_file = (
f"https://{site}/dummy/dummy" # Doesn't have to actually exist!
)
method = installer._determine_install_method(
temp_file, InstallationMethod.ZIP
)
self.assertEqual(
method,
InstallationMethod.ZIP,
f"Failed to allow zip access to {site} URL",
)
def test_determine_install_method_https_known_sites_any_gm(self):
"""Test which install methods are accepted for an https GitHub URL"""
installer = AddonInstaller(self.mock_addon, [])
installer.git_manager = True
for site in ["github.org", "gitlab.org", "framagit.org", "salsa.debian.org"]:
with self.subTest(site=site):
temp_file = (
f"https://{site}/dummy/dummy" # Doesn't have to actually exist!
)
method = installer._determine_install_method(
temp_file, InstallationMethod.ANY
)
self.assertEqual(
method,
InstallationMethod.GIT,
f"Failed to allow git access to {site} URL",
)
def test_determine_install_method_https_known_sites_any_no_gm(self):
"""Test which install methods are accepted for an https GitHub URL"""
installer = AddonInstaller(self.mock_addon, [])
installer.git_manager = None
for site in ["github.org", "gitlab.org", "framagit.org", "salsa.debian.org"]:
with self.subTest(site=site):
temp_file = (
f"https://{site}/dummy/dummy" # Doesn't have to actually exist!
)
method = installer._determine_install_method(
temp_file, InstallationMethod.ANY
)
self.assertEqual(
method,
InstallationMethod.ZIP,
f"Failed to allow zip access to {site} URL",
)
def test_fcmacro_copying(self):
        """An addon's FCMacro files should be copied to the macro installation path."""
with tempfile.TemporaryDirectory() as temp_dir:
mock_addon = MockAddon()
mock_addon.url = os.path.join(
self.test_data_dir, "test_addon_with_fcmacro.zip"
)
installer = AddonInstaller(mock_addon, [])
installer.installation_path = temp_dir
installer.macro_installation_path = os.path.join(temp_dir, "Macros")
installer.run()
self.assertTrue(
os.path.exists(os.path.join(temp_dir, "Macros", "TestMacro.FCMacro")),
"FCMacro file was not copied to macro installation location",
)
class TestMacroInstaller(unittest.TestCase):
MODULE = "test_installer" # file name without extension
def setUp(self):
"""Set up the mock objects"""
self.mock = MockAddon()
self.mock.macro = MockMacro()
def test_installation(self):
"""Test the wrapper around the macro installer"""
# Note that this doesn't test the underlying Macro object's install function,
# it only tests whether that function is called appropriately by the
# MacroInstaller wrapper.
with tempfile.TemporaryDirectory() as temp_dir:
installer = MacroInstaller(self.mock)
installer.installation_path = temp_dir
installation_succeeded = installer.run()
self.assertTrue(installation_succeeded)
self.assertTrue(
os.path.exists(os.path.join(temp_dir, self.mock.macro.filename))
)
|
traffic | traffic | # The contents of this file are subject to the Common Public Attribution
# License Version 1.0. (the "License"); you may not use this file except in
# compliance with the License. You may obtain a copy of the License at
# http://code.reddit.com/LICENSE. The License is based on the Mozilla Public
# License Version 1.1, but Sections 14 and 15 have been added to cover use of
# software over a computer network and provide for limited attribution for the
# Original Developer. In addition, Exhibit A has been modified to be consistent
# with Exhibit B.
#
# Software distributed under the License is distributed on an "AS IS" basis,
# WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for
# the specific language governing rights and limitations under the License.
#
# The Original Code is reddit.
#
# The Original Developer is the Initial Developer. The Initial Developer of
# the Original Code is reddit Inc.
#
# All portions of the code written by reddit are Copyright (c) 2006-2015 reddit
# Inc. All Rights Reserved.
###############################################################################
import calendar
import datetime
import os
import urllib
from time import sleep
from boto.emr.connection import EmrConnection
from boto.exception import S3ResponseError
from boto.s3.connection import S3Connection
from pylons import app_globals as g
from r2.lib.emr_helpers import (EmrException, modify_slave_count,
terminate_jobflow)
from r2.lib.s3_helpers import copy_to_s3, get_text_from_s3, s3_key_exists
from r2.lib.traffic.emr_traffic import (aggregate_interval, coalesce_interval,
extract_hour)
from r2.lib.utils import tup
from r2.models.traffic import (AdImpressionsByCodename,
ClickthroughsByCodename, PageviewsByLanguage,
PageviewsBySubreddit,
PageviewsBySubredditAndPath, SitewidePageviews,
TargetedClickthroughsByCodename,
TargetedImpressionsByCodename)
from sqlalchemy.exc import DataError
RAW_LOG_DIR = g.RAW_LOG_DIR
PROCESSED_DIR = g.PROCESSED_DIR
AGGREGATE_DIR = g.AGGREGATE_DIR
AWS_LOG_DIR = g.AWS_LOG_DIR
# the "or None" business is so that a blank string becomes None to cause boto
# to look for credentials in other places.
s3_connection = S3Connection(g.TRAFFIC_ACCESS_KEY or None,
g.TRAFFIC_SECRET_KEY or None)
emr_connection = EmrConnection(g.TRAFFIC_ACCESS_KEY or None,
g.TRAFFIC_SECRET_KEY or None)
traffic_categories = (SitewidePageviews, PageviewsBySubreddit,
PageviewsBySubredditAndPath, PageviewsByLanguage,
ClickthroughsByCodename, TargetedClickthroughsByCodename,
AdImpressionsByCodename, TargetedImpressionsByCodename)
traffic_subdirectories = {
SitewidePageviews: 'sitewide',
PageviewsBySubreddit: 'subreddit',
PageviewsBySubredditAndPath: 'srpath',
PageviewsByLanguage: 'lang',
ClickthroughsByCodename: 'clicks',
TargetedClickthroughsByCodename: 'clicks_targeted',
AdImpressionsByCodename: 'thing',
TargetedImpressionsByCodename: 'thingtarget',
}
def _get_processed_path(basedir, interval, category_cls, filename):
return os.path.join(basedir, interval,
traffic_subdirectories[category_cls], filename)
def get_aggregate(interval, category_cls):
"""Return the aggregate output file from S3."""
part = 0
data = {}
while True:
path = _get_processed_path(AGGREGATE_DIR, interval, category_cls,
'part-r-%05d' % part)
if not s3_key_exists(s3_connection, path):
break
        # Sometimes S3 doesn't let us read immediately after a key is written
for i in xrange(5):
try:
txt = get_text_from_s3(s3_connection, path)
except S3ResponseError as e:
print 'S3ResponseError on %s, retrying' % path
sleep(300)
else:
break
else:
print 'Could not retrieve %s' % path
raise e
for line in txt.splitlines():
tuples = line.rstrip('\n').split('\t')
group, uniques, pageviews = tuples[:-2], tuples[-2], tuples[-1]
if len(group) > 1:
group = tuple(group)
else:
group = group[0]
data[group] = (int(uniques), int(pageviews))
part += 1
if not data:
raise ValueError("No data for %s/%s" % (interval,
category_cls.__name__))
return data
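# The tab-separated rows consumed by get_aggregate() can be sketched in
# isolation. This is a hypothetical standalone helper (not part of this
# module) mirroring the split above: the last two fields are uniques and
# pageviews, and any preceding fields form the group key.

```python
def parse_aggregate_line(line):
    """Split one aggregate output row into (group_key, (uniques, pageviews))."""
    fields = line.rstrip('\n').split('\t')
    group, uniques, pageviews = fields[:-2], fields[-2], fields[-1]
    # Multi-field groups (e.g. link + subreddit) become tuples; single
    # fields stay plain strings, matching get_aggregate above.
    key = tuple(group) if len(group) > 1 else group[0]
    return key, (int(uniques), int(pageviews))
```

# e.g. parse_aggregate_line('pics\t10\t100') returns ('pics', (10, 100)).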
def report_interval(interval, background=True):
if background:
from multiprocessing import Process
p = Process(target=_report_interval, args=(interval,))
p.start()
else:
_report_interval(interval)
def _name_to_kw(category_cls, name):
"""Get the keywords needed to build an instance of traffic data."""
def target_split(name):
"""Split a name that contains multiple words.
Name is (link,campaign-subreddit) where link and campaign are
thing fullnames. campaign and subreddit are each optional, so
the string could look like any of these:
(t3_bh,t8_ab-pics), (t3_bh,t8_ab), (t3_bh,-pics), (t3_bh,)
Also check for the old format (t3_by, pics)
"""
link_codename, target_info = name
campaign_codename = None
if not target_info:
subreddit = ''
elif target_info.find('-') != -1:
campaign_codename, subreddit = target_info.split('-', 1)
elif target_info.find('_') != -1:
campaign_codename = target_info
subreddit = ''
else:
subreddit = target_info
return {'codename': campaign_codename or link_codename,
'subreddit': subreddit}
d = {SitewidePageviews: lambda n: {},
PageviewsBySubreddit: lambda n: {'subreddit': n},
PageviewsBySubredditAndPath: lambda n: {'srpath': n},
PageviewsByLanguage: lambda n: {'lang': n},
         ClickthroughsByCodename: lambda n: {'codename': n},
         AdImpressionsByCodename: lambda n: {'codename': n},
TargetedClickthroughsByCodename: target_split,
TargetedImpressionsByCodename: target_split}
return d[category_cls](name)
def _report_interval(interval):
"""Read aggregated traffic from S3 and write to postgres."""
from r2.models.traffic import engine
from sqlalchemy.orm import scoped_session, sessionmaker
Session = scoped_session(sessionmaker(bind=engine))
# determine interval_type from YYYY-MM[-DD][-HH]
pieces = interval.split('-')
pieces = [int(i) for i in pieces]
if len(pieces) == 4:
interval_type = 'hour'
elif len(pieces) == 3:
interval_type = 'day'
pieces.append(0)
elif len(pieces) == 2:
interval_type = 'month'
pieces.append(1)
pieces.append(0)
else:
        raise ValueError('Unrecognized interval %r' % interval)
pg_interval = "%04d-%02d-%02d %02d:00:00" % tuple(pieces)
print 'reporting interval %s (%s)' % (pg_interval, interval_type)
# Read aggregates and write to traffic db
for category_cls in traffic_categories:
now = datetime.datetime.now()
print '*** %s - %s - %s' % (category_cls.__name__, interval, now)
data = get_aggregate(interval, category_cls)
len_data = len(data)
step = max(len_data / 5, 100)
for i, (name, (uniques, pageviews)) in enumerate(data.iteritems()):
try:
for n in tup(name):
unicode(n)
except UnicodeDecodeError:
print '%s - %s - %s - %s' % (category_cls.__name__, name,
uniques, pageviews)
continue
if i % step == 0:
now = datetime.datetime.now()
print '%s - %s - %s/%s - %s' % (interval, category_cls.__name__,
i, len_data, now)
kw = {'date': pg_interval, 'interval': interval_type,
'unique_count': uniques, 'pageview_count': pageviews}
kw.update(_name_to_kw(category_cls, name))
r = category_cls(**kw)
try:
Session.merge(r)
Session.commit()
except DataError:
Session.rollback()
continue
Session.remove()
now = datetime.datetime.now()
print 'finished reporting %s (%s) - %s' % (pg_interval, interval_type, now)
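# The interval-string dispatch at the top of _report_interval can be sketched
# as a standalone helper (hypothetical name, not part of this module): the
# number of dash-separated pieces determines the interval type.

```python
def interval_type_for(interval):
    """Map 'YYYY-MM' -> 'month', 'YYYY-MM-DD' -> 'day', 'YYYY-MM-DD-HH' -> 'hour'."""
    n_pieces = len(interval.split('-'))
    return {2: 'month', 3: 'day', 4: 'hour'}.get(n_pieces)
```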
def process_pixel_log(log_path, fast=False):
"""Process an hourly pixel log file.
    Extract data from the raw hourly log, aggregate it, and report it.
    Depending on the specific date and options, also aggregate and report
    the day and month. Setting fast=True is appropriate for backfilling,
    as it eliminates redundant steps.
"""
if log_path.endswith('/*'):
log_dir = log_path[:-len('/*')]
date_fields = os.path.basename(log_dir).split('.', 1)[0].split('-')
else:
date_fields = os.path.basename(log_path).split('.', 1)[0].split('-')
year, month, day, hour = (int(i) for i in date_fields)
hour_date = '%s-%02d-%02d-%02d' % (year, month, day, hour)
day_date = '%s-%02d-%02d' % (year, month, day)
month_date = '%s-%02d' % (year, month)
# All logs from this day use the same jobflow
jobflow_name = 'Traffic Processing %s' % day_date
output_path = os.path.join(PROCESSED_DIR, 'hour', hour_date)
extract_hour(emr_connection, jobflow_name, log_path, output_path,
log_uri=AWS_LOG_DIR)
input_path = os.path.join(PROCESSED_DIR, 'hour', hour_date)
output_path = os.path.join(AGGREGATE_DIR, hour_date)
aggregate_interval(emr_connection, jobflow_name, input_path, output_path,
log_uri=AWS_LOG_DIR)
if not fast:
report_interval(hour_date)
if hour == 23 or (not fast and (hour == 0 or hour % 4 == 3)):
# Don't aggregate and report day on every hour
input_path = os.path.join(PROCESSED_DIR, 'hour', '%s-*' % day_date)
output_path = os.path.join(AGGREGATE_DIR, day_date)
aggregate_interval(emr_connection, jobflow_name, input_path,
output_path, log_uri=AWS_LOG_DIR)
if not fast:
report_interval(day_date)
if hour == 23:
# Special tasks for final hour of the day
input_path = os.path.join(PROCESSED_DIR, 'hour', '%s-*' % day_date)
output_path = os.path.join(PROCESSED_DIR, 'day', day_date)
coalesce_interval(emr_connection, jobflow_name, input_path,
output_path, log_uri=AWS_LOG_DIR)
terminate_jobflow(emr_connection, jobflow_name)
if not fast:
aggregate_month(month_date)
report_interval(month_date)
def aggregate_month(month_date):
jobflow_name = 'Traffic Processing %s' % month_date
input_path = os.path.join(PROCESSED_DIR, 'day', '%s-*' % month_date)
output_path = os.path.join(AGGREGATE_DIR, month_date)
aggregate_interval(emr_connection, jobflow_name, input_path, output_path,
log_uri=AWS_LOG_DIR, slave_instance_type='m2.2xlarge')
terminate_jobflow(emr_connection, jobflow_name)
def process_month_hours(month_date, start_hour=0, days=None):
"""Process hourly logs from entire month.
Complete monthly backfill requires running [verify_month_inputs,]
process_month_hours, aggregate_month, [verify_month_outputs,] and
report_entire_month.
"""
year, month = month_date.split('-')
year, month = int(year), int(month)
days = days or xrange(1, calendar.monthrange(year, month)[1] + 1)
hours = xrange(start_hour, 24)
for day in days:
for hour in hours:
hour_date = '%04d-%02d-%02d-%02d' % (year, month, day, hour)
log_path = os.path.join(RAW_LOG_DIR, '%s.log.gz' % hour_date)
if not s3_key_exists(s3_connection, log_path):
log_path = os.path.join(RAW_LOG_DIR, '%s.log.bz2' % hour_date)
if not s3_key_exists(s3_connection, log_path):
print 'Missing log for %s' % hour_date
continue
print 'Processing %s' % log_path
process_pixel_log(log_path, fast=True)
hours = xrange(24)
def report_entire_month(month_date, start_hour=0, start_day=1):
"""Report all hours and days from month."""
year, month = month_date.split('-')
year, month = int(year), int(month)
hours = xrange(start_hour, 24)
for day in xrange(start_day, calendar.monthrange(year, month)[1] + 1):
for hour in hours:
hour_date = '%04d-%02d-%02d-%02d' % (year, month, day, hour)
try:
report_interval(hour_date, background=False)
except ValueError:
print 'Failed for %s' % hour_date
continue
hours = xrange(24)
day_date = '%04d-%02d-%02d' % (year, month, day)
try:
report_interval(day_date, background=False)
except ValueError:
print 'Failed for %s' % day_date
continue
report_interval(month_date, background=False)
def verify_month_outputs(month_date):
"""Check existance of all hour, day, month aggregates for month_date."""
year, month = month_date.split('-')
year, month = int(year), int(month)
missing = []
for day in xrange(1, calendar.monthrange(year, month)[1] + 1):
for hour in xrange(24):
hour_date = '%04d-%02d-%02d-%02d' % (year, month, day, hour)
for category_cls in traffic_categories:
for d in [AGGREGATE_DIR, os.path.join(PROCESSED_DIR, 'hour')]:
path = _get_processed_path(d, hour_date, category_cls,
'part-r-00000')
if not s3_key_exists(s3_connection, path):
missing.append(hour_date)
day_date = '%04d-%02d-%02d' % (year, month, day)
for category_cls in traffic_categories:
for d in [AGGREGATE_DIR, os.path.join(PROCESSED_DIR, 'day')]:
path = _get_processed_path(d, day_date, category_cls,
'part-r-00000')
if not s3_key_exists(s3_connection, path):
missing.append(day_date)
month_date = '%04d-%02d' % (year, month)
    for category_cls in traffic_categories:
        path = _get_processed_path(AGGREGATE_DIR, month_date, category_cls,
                                   'part-r-00000')
if not s3_key_exists(s3_connection, path):
missing.append(month_date)
for d in sorted(list(set(missing))):
print d
def verify_month_inputs(month_date):
"""Check existance of all hourly traffic logs for month_date."""
year, month = month_date.split('-')
year, month = int(year), int(month)
missing = []
for day in xrange(1, calendar.monthrange(year, month)[1] + 1):
for hour in xrange(24):
hour_date = '%04d-%02d-%02d-%02d' % (year, month, day, hour)
log_path = os.path.join(RAW_LOG_DIR, '%s.log.gz' % hour_date)
if not s3_key_exists(s3_connection, log_path):
log_path = os.path.join(RAW_LOG_DIR, '%s.log.bz2' % hour_date)
if not s3_key_exists(s3_connection, log_path):
missing.append(hour_date)
for d in missing:
print d
def process_hour(hour_date):
"""Process hour_date's traffic.
Can't fire at the very start of an hour because it takes time to bzip and
upload the file to S3. Check the bucket for the file and sleep if it
doesn't exist.
"""
SLEEPTIME = 180
log_dir = os.path.join(RAW_LOG_DIR, hour_date)
files_missing = [os.path.join(log_dir, '%s.log.bz2' % h)
for h in g.TRAFFIC_LOG_HOSTS]
files_missing = [f for f in files_missing
if not s3_key_exists(s3_connection, f)]
while files_missing:
print 'Missing log(s) %s, sleeping' % files_missing
sleep(SLEEPTIME)
files_missing = [f for f in files_missing
if not s3_key_exists(s3_connection, f)]
process_pixel_log(os.path.join(log_dir, '*'))
|
posthog | filters | from typing import List, Optional, Tuple, TypeVar, Union
from django.db import models
from django.db.models import Q
from django.db.models.query import QuerySet, RawQuerySet
from rest_framework import filters, settings
from rest_framework.request import Request
from rest_framework.views import APIView
_MT = TypeVar("_MT", bound=models.Model)
class TermSearchFilterBackend(filters.BaseFilterBackend):
"""
Allows fuzzy searching based on the pg_trgm extension.
Remember to add relevant indices if the table is expected to have large amounts of data.
"""
# The URL query parameter used for the search.
search_param = settings.api_settings.SEARCH_PARAM
def get_search_fields(self, view: APIView) -> Optional[List[str]]:
"""
Search fields are obtained from the view.
"""
return getattr(view, "search_fields", None)
def get_search_terms(self, request: Request):
"""
Search terms are set by a ?search=... query parameter
"""
terms = request.query_params.get(self.search_param, "")
terms = terms.replace("\x00", "") # strip null characters
return list(filter(None, terms.split(" ")))
def filter_queryset(
self,
request: Request,
queryset: Union[QuerySet[_MT], RawQuerySet],
view: APIView,
):
if isinstance(queryset, RawQuerySet):
return queryset
search_fields = self.get_search_fields(view)
search_terms = self.get_search_terms(request)
if not search_fields or not search_terms:
return queryset
term_filter = Q()
        for search_term in search_terms:
            search_filter_query = Q()
            for search_field in search_fields:
                search_filter_query = search_filter_query | Q(
                    **{f"{search_field}__icontains": search_term}
                )
            term_filter = term_filter & search_filter_query
return queryset.filter(term_filter)
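# The term parsing used by get_search_terms above (strip null bytes, drop
# empty fragments from repeated spaces) can be exercised on its own; a
# minimal standalone sketch with a hypothetical helper name:

```python
def parse_search_terms(raw):
    """Strip null bytes and discard empty fragments from a ?search= value."""
    return list(filter(None, raw.replace("\x00", "").split(" ")))
```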
def term_search_filter_sql(
search_fields: List[str],
search_terms: Optional[str] = "",
search_extra: Optional[str] = "",
) -> Tuple[str, dict]:
if not search_fields or not search_terms:
return "", {}
terms = list(filter(None, search_terms.replace("\x00", "").split(" ")))
kwargs = {}
term_filter = []
for term_idx, search_term in enumerate(terms):
search_filter_query = []
for idx, search_field in enumerate(search_fields):
index = term_idx * len(search_fields) + idx
search_filter_query.append(f"{search_field} ilike %(search_{index})s")
kwargs[f"search_{index}"] = f"%{search_term}%"
term_filter.append(f"({' OR '.join(search_filter_query)})")
if term_filter:
return f"AND (({' AND '.join(term_filter)}) {search_extra})", kwargs
else:
return "", {}
|
generate-fake-data | main | import argparse
import asyncio
import math
import random
import typing
import uuid
from datetime import datetime
import aiohttp
from faker import Faker
from tqdm.asyncio import tqdm
fake = Faker()
TEAMS_USERS_COMMAND = "generate_teams_and_users"
SCHEDULES_ONCALL_SHIFTS_COMMAND = "generate_schedules_and_oncall_shifts"
GRAFANA_API_URL = None
ONCALL_API_URL = None
ONCALL_API_TOKEN = None
class OnCallApiUser(typing.TypedDict):
id: str
class OnCallApiOnCallShift(typing.TypedDict):
id: str
class OnCallApiListUsersResponse(typing.TypedDict):
results: typing.List[OnCallApiUser]
class GrafanaAPIUser(typing.TypedDict):
id: int
def _generate_unique_email() -> str:
user = fake.profile()
return f'{uuid.uuid4()}-{user["mail"]}'
async def _grafana_api_request(
    http_session: aiohttp.ClientSession, method: str, url: str, **request_kwargs
) -> typing.Dict:
resp = await http_session.request(
method, f"{GRAFANA_API_URL}{url}", **request_kwargs
)
return await resp.json()
async def _oncall_api_request(
    http_session: aiohttp.ClientSession, method: str, url: str, **request_kwargs
) -> typing.Dict:
resp = await http_session.request(
method,
f"{ONCALL_API_URL}{url}",
headers={"Authorization": ONCALL_API_TOKEN},
**request_kwargs,
)
return await resp.json()
def generate_team(
http_session: aiohttp.ClientSession, org_id: int
) -> typing.Callable[[], typing.Awaitable[typing.Dict]]:
"""
https://grafana.com/docs/grafana/latest/developers/http_api/team/#add-team
"""
def _generate_team() -> typing.Awaitable[typing.Dict]:
return _grafana_api_request(
http_session,
"POST",
"/api/teams",
json={
"name": str(uuid.uuid4()),
"email": _generate_unique_email(),
"orgId": org_id,
},
)
return _generate_team
def generate_user(
http_session: aiohttp.ClientSession, org_id: int
) -> typing.Callable[[], typing.Awaitable[typing.Dict]]:
"""
https://grafana.com/docs/grafana/latest/developers/http_api/admin/#global-users
"""
    async def _generate_user() -> typing.Dict:
user = fake.profile()
# create the user in grafana
grafana_user: GrafanaAPIUser = await _grafana_api_request(
http_session,
"POST",
"/api/admin/users",
json={
"name": user["name"],
"email": _generate_unique_email(),
"login": str(uuid.uuid4()),
"password": fake.password(length=20),
"OrgId": org_id,
},
)
# update the user's basic role in grafana to Admin
# https://grafana.com/docs/grafana/latest/developers/http_api/org/#updates-the-given-user
await _grafana_api_request(
http_session,
"PATCH",
f'/api/org/users/{grafana_user["id"]}',
json={"role": "Admin"},
)
return grafana_user
return _generate_user
def generate_schedule(
http_session: aiohttp.ClientSession, oncall_shift_ids: typing.List[str]
) -> typing.Callable[[], typing.Awaitable[typing.Dict]]:
def _generate_schedule() -> typing.Awaitable[typing.Dict]:
# Create a schedule
# https://grafana.com/docs/oncall/latest/oncall-api-reference/schedules/#create-a-schedule
return _oncall_api_request(
http_session,
"POST",
"/api/v1/schedules",
json={
"name": f"Schedule {uuid.uuid4()}",
"type": "calendar",
"time_zone": "UTC",
"shifts": oncall_shift_ids,
},
)
return _generate_schedule
def _bulk_generate_data(
iterations: int,
data_generator_func: typing.Callable[[], typing.Awaitable[typing.Dict]],
) -> typing.Awaitable[typing.List[typing.Dict]]:
return tqdm.gather(
*[asyncio.ensure_future(data_generator_func()) for _ in range(iterations)]
)
async def _generate_grafana_teams_and_users(
args: argparse.Namespace, http_session: aiohttp.ClientSession
) -> None:
global GRAFANA_API_URL
GRAFANA_API_URL = args.grafana_api_url
org_id = args.grafana_org_id
print("Generating team(s)")
await _bulk_generate_data(args.teams, generate_team(http_session, org_id))
print("Generating user(s)")
await _bulk_generate_data(args.users, generate_user(http_session, org_id))
print(
f"""
Grafana teams and users generated
Now manually visit the OnCall plugin in Grafana to trigger a sync. This will sync Grafana
teams/users to OnCall. Once the sync completes, you can run the {SCHEDULES_ONCALL_SHIFTS_COMMAND} command.
"""
)
async def _generate_oncall_schedules_and_oncall_shifts(
args: argparse.Namespace, http_session: aiohttp.ClientSession
) -> None:
global ONCALL_API_URL, ONCALL_API_TOKEN
ONCALL_API_URL = args.oncall_api_url
ONCALL_API_TOKEN = args.oncall_api_token
today = datetime.now()
print("Fetching users from OnCall API")
# Fetch users from the OnCall API
users: OnCallApiListUsersResponse = await _oncall_api_request(
http_session, "GET", "/api/v1/users"
)
user_ids: typing.List[str] = [u["id"] for u in users["results"]]
num_users = len(user_ids)
print(f"Fetched {num_users} user(s) from the OnCall API")
    async def _create_oncall_shift(shift_start_time: str) -> str:
"""
Creates an eight hour shift.
`shift_start_time` - ex. 09:00:00, 15:00:00
https://grafana.com/docs/oncall/latest/oncall-api-reference/on_call_shifts/#create-an-oncall-shift
"""
new_shift: OnCallApiOnCallShift = await _oncall_api_request(
http_session,
"POST",
"/api/v1/on_call_shifts",
json={
"name": f"On call shift {uuid.uuid4()}",
"type": "rolling_users",
"start": today.strftime(f"%Y-%m-%dT{shift_start_time}"),
"time_zone": "UTC",
"duration": 60 * 60 * 8, # 8 hours
"frequency": "daily",
"week_start": "MO",
"rolling_users": [
[u] for u in random.choices(user_ids, k=math.floor(num_users / 2))
],
"start_rotation_from_user_index": 0,
"team_id": None,
},
)
oncall_shift_id = new_shift["id"]
print(f"Generated OnCall shift w/ ID {oncall_shift_id}")
return oncall_shift_id
print("Creating three 8h on-call shifts")
morning_shift_id = await _create_oncall_shift("00:00:00")
afternoon_shift_id = await _create_oncall_shift("08:00:00")
evening_shift_id = await _create_oncall_shift("16:00:00")
print("Generating schedule(s)")
await _bulk_generate_data(
args.schedules,
generate_schedule(
http_session, [morning_shift_id, afternoon_shift_id, evening_shift_id]
),
)
async def main() -> None:
parser = argparse.ArgumentParser(
description="Set of commands to help generate fake data in a Grafana OnCall setup."
)
subparsers = parser.add_subparsers(help="sub-command help", dest="command", required=True)
grafana_command_parser = subparsers.add_parser(
TEAMS_USERS_COMMAND,
description="Command to generate teams and users in Grafana",
)
grafana_command_parser.set_defaults(func=_generate_grafana_teams_and_users)
grafana_command_parser.add_argument(
"--grafana-api-url",
help="Grafana API URL. This should include the basic authentication username/password in the URL. ex. http://oncall:oncall@localhost:3000",
default="http://oncall:oncall@localhost:3000",
)
grafana_command_parser.add_argument(
"--grafana-org-id",
help="Org ID, in Grafana, of the org that you would like to generate data for",
type=int,
default=1,
)
grafana_command_parser.add_argument(
"-t", "--teams", help="Number of teams to generate", default=10, type=int
)
grafana_command_parser.add_argument(
"-u", "--users", help="Number of users to generate", default=1_000, type=int
)
oncall_command_parser = subparsers.add_parser(
SCHEDULES_ONCALL_SHIFTS_COMMAND,
description="Command to generate schedules and on-call shifts in OnCall",
)
oncall_command_parser.set_defaults(
func=_generate_oncall_schedules_and_oncall_shifts
)
oncall_command_parser.add_argument(
"--oncall-api-url",
help="OnCall API URL",
default="http://localhost:8080",
)
oncall_command_parser.add_argument(
"--oncall-api-token", help="OnCall API token", required=True
)
oncall_command_parser.add_argument(
"-s",
"--schedules",
help="Number of schedules to generate",
default=100,
type=int,
)
args = parser.parse_args()
async with aiohttp.ClientSession(
connector=aiohttp.TCPConnector(limit=5)
) as session:
await args.func(args, session)
if __name__ == "__main__":
asyncio.run(main())
|
scripts | check_gcode_buffer | #!/usr/bin/env python3
# Copyright (c) 2020 Ultimaker B.V.
# Cura is released under the terms of the LGPLv3 or higher.
import copy
import math
import os
import sys
from typing import Dict, List, Optional, Tuple
# ====================================
# Constants and Default Values
# ====================================
DEFAULT_BUFFER_FILLING_RATE_IN_C_PER_S = 50.0 # The buffer filling rate in #commands/s
DEFAULT_BUFFER_SIZE = 15 # The buffer size in #commands
MINIMUM_PLANNER_SPEED = 0.05
# Setting values for Ultimaker S5.
MACHINE_MAX_FEEDRATE_X = 300
MACHINE_MAX_FEEDRATE_Y = 300
MACHINE_MAX_FEEDRATE_Z = 40
MACHINE_MAX_FEEDRATE_E = 45
MACHINE_MAX_ACCELERATION_X = 9000
MACHINE_MAX_ACCELERATION_Y = 9000
MACHINE_MAX_ACCELERATION_Z = 100
MACHINE_MAX_ACCELERATION_E = 10000
MACHINE_MAX_JERK_XY = 20
MACHINE_MAX_JERK_Z = 0.4
MACHINE_MAX_JERK_E = 5
MACHINE_MINIMUM_FEEDRATE = 0.001
MACHINE_ACCELERATION = 3000
def get_code_and_num(gcode_line: str) -> Tuple[str, str]:
"""Gets the code and number from the given g-code line."""
gcode_line = gcode_line.strip()
cmd_code = gcode_line[0].upper()
cmd_num = str(gcode_line[1:])
return cmd_code, cmd_num
def get_value_dict(parts: List[str]) -> Dict[str, str]:
"""Fetches arguments such as X1 Y2 Z3 from the given part list and returns a dict"""
value_dict = {}
for p in parts:
p = p.strip()
if not p:
continue
code, num = get_code_and_num(p)
value_dict[code] = num
return value_dict
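For example, the argument parts of `G1 X1 Y2.5 E-3` parse to per-letter strings. A minimal standalone restatement of the two helpers above (the name `parse_arg_parts` is hypothetical, used only for this sketch):

```python
def parse_arg_parts(parts):
    # Map each token's leading letter (upper-cased) to the remainder,
    # skipping empty tokens, as get_code_and_num/get_value_dict do above.
    out = {}
    for p in parts:
        p = p.strip()
        if p:
            out[p[0].upper()] = p[1:]
    return out

parse_arg_parts(["X1", "Y2.5", "E-3"])  # -> {"X": "1", "Y": "2.5", "E": "-3"}
```

Note that values stay strings at this stage; callers such as `_handle_g` convert them with `float()` as needed.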
# ============================
# Math Functions - Begin
# ============================
def calc_distance(pos1, pos2):
delta = {k: pos1[k] - pos2[k] for k in pos1}
distance = 0
for value in delta.values():
distance += value**2
distance = math.sqrt(distance)
return distance
def calc_acceleration_distance(
init_speed: float, target_speed: float, acceleration: float
) -> float:
"""Given the initial speed, the target speed, and the acceleration
calculate the distance that's needed for the acceleration to finish.
"""
if acceleration == 0:
return 0.0
return (target_speed**2 - init_speed**2) / (2 * acceleration)
def calc_acceleration_time_from_distance(
initial_feedrate: float, distance: float, acceleration: float
) -> float:
"""Gives the time it needs to accelerate from an initial speed to reach a final distance."""
discriminant = initial_feedrate**2 - 2 * acceleration * -distance
# If the discriminant is negative, we're moving in the wrong direction.
# Making the discriminant 0 then gives the extremum of the parabola instead of the intersection.
discriminant = max(0, discriminant)
return (-initial_feedrate + math.sqrt(discriminant)) / acceleration
def calc_intersection_distance(
initial_feedrate: float, final_feedrate: float, acceleration: float, distance: float
) -> float:
"""Calculates the point at which you must start braking.
This gives the distance from the start of a line at which you must start
decelerating (at a rate of `-acceleration`) if you started at speed
`initial_feedrate` and accelerated until this point and want to end at the
`final_feedrate` after a total travel of `distance`. This can be used to
compute the intersection point between acceleration and deceleration in the
cases where the trapezoid has no plateau (i.e. never reaches maximum speed).
"""
if acceleration == 0:
return 0
return (
2 * acceleration * distance
- initial_feedrate * initial_feedrate
+ final_feedrate * final_feedrate
) / (4 * acceleration)
def calc_max_allowable_speed(
acceleration: float, target_velocity: float, distance: float
) -> float:
"""Calculates the maximum speed that is allowed at this point when you must be
able to reach target_velocity using the acceleration within the allotted
distance.
"""
return math.sqrt(target_velocity * target_velocity - 2 * acceleration * distance)
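A quick numeric check of the intersection-distance formula above (illustrative values, not machine settings): with acceleration `a`, entry speed `v0`, exit speed `v1`, and total distance `d`, accelerating for the intersection distance `x` and decelerating over the remaining `d - x` should meet at the same peak speed, which is exactly the no-plateau trapezoid case.

```python
import math

a, v0, v1, d = 3000.0, 10.0, 20.0, 0.5          # acceleration, entry, exit, distance
x = (2 * a * d - v0 * v0 + v1 * v1) / (4 * a)   # calc_intersection_distance
peak_up = math.sqrt(v0 * v0 + 2 * a * x)        # speed reached after accelerating for x
peak_down = math.sqrt(v1 * v1 + 2 * a * (d - x))  # speed from which braking ends at v1
```

Both ramps meet at the same speed, so `calculate_trapezoid` can clamp `accelerate_distance` to `x` and set the plateau to zero.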
class Command:
def __init__(self, cmd_str: str) -> None:
self._cmd_str = cmd_str # type: str
self.estimated_exec_time = 0.0 # type: float
self._cmd_process_function_map = {
"G": self._handle_g,
"M": self._handle_m,
"T": self._handle_t,
}
self._is_comment = False # type: bool
self._is_empty = False # type: bool
# Fields taken from CuraEngine's implementation.
self._recalculate = False
self._accelerate_until = 0
self._decelerate_after = 0
self._initial_feedrate = 0
self._final_feedrate = 0
self._entry_speed = 0
self._max_entry_speed = 0
self._nominal_length = False
self._nominal_feedrate = 0
self._max_travel = 0
self._distance = 0
self._acceleration = 0
self._delta = [0, 0, 0]
self._abs_delta = [0, 0, 0]
def calculate_trapezoid(self, entry_factor, exit_factor):
"""Calculate the velocity-time trapezoid function for this move.
Each move has a three-part function mapping time to velocity.
"""
initial_feedrate = self._nominal_feedrate * entry_factor
final_feedrate = self._nominal_feedrate * exit_factor
# How far are we accelerating and how far are we decelerating?
accelerate_distance = calc_acceleration_distance(
initial_feedrate, self._nominal_feedrate, self._acceleration
)
decelerate_distance = calc_acceleration_distance(
self._nominal_feedrate, final_feedrate, -self._acceleration
)
plateau_distance = (
self._distance - accelerate_distance - decelerate_distance
) # And how far in between at max speed?
# Is the plateau negative size? That means no cruising, and we'll have to
# use intersection_distance to calculate when to abort acceleration and
# start braking in order to reach the final_rate exactly at the end of
# this command.
if plateau_distance < 0:
accelerate_distance = calc_intersection_distance(
initial_feedrate, final_feedrate, self._acceleration, self._distance
)
accelerate_distance = max(accelerate_distance, 0) # Due to rounding errors.
accelerate_distance = min(accelerate_distance, self._distance)
plateau_distance = 0
self._accelerate_until = accelerate_distance
self._decelerate_after = accelerate_distance + plateau_distance
self._initial_feedrate = initial_feedrate
self._final_feedrate = final_feedrate
@property
def is_command(self) -> bool:
return not self._is_comment and not self._is_empty
def __str__(self) -> str:
if self._is_comment or self._is_empty:
return self._cmd_str
info = "t=%s" % (self.estimated_exec_time)
return self._cmd_str.strip() + " ; --- " + info + os.linesep
def parse(self) -> None:
"""Estimates the execution time of this command and calculates the state after this command is executed."""
line = self._cmd_str.strip()
if not line:
self._is_empty = True
return
if line.startswith(";"):
self._is_comment = True
return
# Remove comment
line = line.split(";", 1)[0].strip()
parts = line.split(" ")
cmd_code, cmd_num = get_code_and_num(parts[0])
cmd_num = int(cmd_num)
func = self._cmd_process_function_map.get(cmd_code)
if func is None:
print("!!! no handle function for command type [%s]" % cmd_code)
return
func(cmd_num, parts)
def _handle_g(self, cmd_num: int, parts: List[str]) -> None:
self.estimated_exec_time = 0.0
# G10: Retract. Make this behave as if it's a retraction of 25mm.
if cmd_num == 10:
# TODO: If already retracted, this shouldn't add anything to the time.
cmd_num = 1
parts = ["G1", "E" + str(buf.current_position[3] - 25)]
# G11: Unretract. Make this behave as if it's an unretraction of 25mm.
elif cmd_num == 11:
# TODO: If already unretracted, this shouldn't add anything to the time.
cmd_num = 1
parts = ["G1", "E" + str(buf.current_position[3] + 25)]
# G0 and G1: Move
if cmd_num in (0, 1):
# Move
if len(parts) > 0:
value_dict = get_value_dict(parts[1:])
new_position = copy.deepcopy(buf.current_position)
new_position[0] = float(value_dict.get("X", new_position[0]))
new_position[1] = float(value_dict.get("Y", new_position[1]))
new_position[2] = float(value_dict.get("Z", new_position[2]))
new_position[3] = float(value_dict.get("E", new_position[3]))
buf.current_feedrate = (
float(value_dict.get("F", buf.current_feedrate * 60.0)) / 60.0
)
if buf.current_feedrate < MACHINE_MINIMUM_FEEDRATE:
buf.current_feedrate = MACHINE_MINIMUM_FEEDRATE
self._delta = [
new_position[0] - buf.current_position[0],
new_position[1] - buf.current_position[1],
new_position[2] - buf.current_position[2],
new_position[3] - buf.current_position[3],
]
self._abs_delta = [abs(x) for x in self._delta]
self._max_travel = max(self._abs_delta)
if self._max_travel > 0:
self._nominal_feedrate = buf.current_feedrate
self._distance = math.sqrt(
self._abs_delta[0] ** 2
+ self._abs_delta[1] ** 2
+ self._abs_delta[2] ** 2
)
if self._distance == 0:
self._distance = self._abs_delta[3]
current_feedrate = [
d * self._nominal_feedrate / self._distance for d in self._delta
]
current_abs_feedrate = [abs(f) for f in current_feedrate]
# Cap the feedrate so that no individual axis exceeds its maximum feedrate.
feedrate_factor = 1.0
max_feedrates = [MACHINE_MAX_FEEDRATE_X, MACHINE_MAX_FEEDRATE_Y, buf.max_z_feedrate, MACHINE_MAX_FEEDRATE_E]
for axis_feedrate, axis_max in zip(current_abs_feedrate, max_feedrates):
    if axis_feedrate > axis_max:
        feedrate_factor = min(feedrate_factor, axis_max / axis_feedrate)
# TODO: XY_FREQUENCY_LIMIT
current_feedrate = [f * feedrate_factor for f in current_feedrate]
current_abs_feedrate = [
f * feedrate_factor for f in current_abs_feedrate
]
self._nominal_feedrate *= feedrate_factor
self._acceleration = MACHINE_ACCELERATION
max_accelerations = [
MACHINE_MAX_ACCELERATION_X,
MACHINE_MAX_ACCELERATION_Y,
MACHINE_MAX_ACCELERATION_Z,
MACHINE_MAX_ACCELERATION_E,
]
for n in range(len(max_accelerations)):
if (
self._acceleration * self._abs_delta[n] / self._distance
> max_accelerations[n]
):
self._acceleration = max_accelerations[n]
vmax_junction = MACHINE_MAX_JERK_XY / 2
vmax_junction_factor = 1.0
if current_abs_feedrate[2] > buf.max_z_jerk / 2:
vmax_junction = min(vmax_junction, buf.max_z_jerk)
if current_abs_feedrate[3] > buf.max_e_jerk / 2:
vmax_junction = min(vmax_junction, buf.max_e_jerk)
vmax_junction = min(vmax_junction, self._nominal_feedrate)
safe_speed = vmax_junction
if buf.previous_nominal_feedrate > 0.0001:
xy_jerk = math.sqrt(
(current_feedrate[0] - buf.previous_feedrate[0]) ** 2
+ (current_feedrate[1] - buf.previous_feedrate[1]) ** 2
)
vmax_junction = self._nominal_feedrate
if xy_jerk > MACHINE_MAX_JERK_XY:
vmax_junction_factor = MACHINE_MAX_JERK_XY / xy_jerk
if (
abs(current_feedrate[2] - buf.previous_feedrate[2])
> MACHINE_MAX_JERK_Z
):
vmax_junction_factor = min(
vmax_junction_factor,
(
MACHINE_MAX_JERK_Z
/ abs(
current_feedrate[2] - buf.previous_feedrate[2]
)
),
)
if (
abs(current_feedrate[3] - buf.previous_feedrate[3])
> MACHINE_MAX_JERK_E
):
vmax_junction_factor = min(
vmax_junction_factor,
(
MACHINE_MAX_JERK_E
/ abs(
current_feedrate[3] - buf.previous_feedrate[3]
)
),
)
vmax_junction = min(
buf.previous_nominal_feedrate,
vmax_junction * vmax_junction_factor,
) # Limit speed to max previous speed.
self._max_entry_speed = vmax_junction
v_allowable = calc_max_allowable_speed(
-self._acceleration, MINIMUM_PLANNER_SPEED, self._distance
)
self._entry_speed = min(vmax_junction, v_allowable)
self._nominal_length = self._nominal_feedrate <= v_allowable
self._recalculate = True
buf.previous_feedrate = current_feedrate
buf.previous_nominal_feedrate = self._nominal_feedrate
buf.current_position = new_position
self.calculate_trapezoid(
self._entry_speed / self._nominal_feedrate,
safe_speed / self._nominal_feedrate,
)
self.estimated_exec_time = (
-1 # Signal that we need to include this in our second pass.
)
# G4: Dwell, pause the machine for a period of time.
elif cmd_num == 4:
# Pnnn is time to wait in milliseconds (P0 wait until all previous moves are finished)
cmd, num = get_code_and_num(parts[1])
num = float(num)
if cmd == "P":
if num > 0:
self.estimated_exec_time = num
def _handle_m(self, cmd_num: int, parts: List[str]) -> None:
self.estimated_exec_time = 0.0
# M203: Set maximum feedrate. Only Z is supported. Assume 0 execution time.
if cmd_num == 203:
value_dict = get_value_dict(parts[1:])
buf.max_z_feedrate = float(value_dict.get("Z", buf.max_z_feedrate))
# M204: Set default acceleration. Assume 0 execution time.
if cmd_num == 204:
value_dict = get_value_dict(parts[1:])
buf.acceleration = float(value_dict.get("S", buf.acceleration))
# M205: Advanced settings, we only set jerks for Griffin. Assume 0 execution time.
if cmd_num == 205:
value_dict = get_value_dict(parts[1:])
buf.max_xy_jerk = float(value_dict.get("XY", buf.max_xy_jerk))
buf.max_z_jerk = float(value_dict.get("Z", buf.max_z_jerk))
buf.max_e_jerk = float(value_dict.get("E", buf.max_e_jerk))
def _handle_t(self, cmd_num: int, parts: List[str]) -> None:
# Tn: Switching extruder. Assume 0 seconds. Actually more like 2.
self.estimated_exec_time = 0.0
class CommandBuffer:
def __init__(
self,
all_lines: List[str],
buffer_filling_rate: float = DEFAULT_BUFFER_FILLING_RATE_IN_C_PER_S,
buffer_size: int = DEFAULT_BUFFER_SIZE,
) -> None:
self._all_lines = all_lines
self._all_commands = list()
self._buffer_filling_rate = buffer_filling_rate # type: float
self._buffer_size = buffer_size # type: int
self.acceleration = 3000
self.current_position = [0, 0, 0, 0]
self.current_feedrate = 0
self.max_xy_jerk = MACHINE_MAX_JERK_XY
self.max_z_jerk = MACHINE_MAX_JERK_Z
self.max_e_jerk = MACHINE_MAX_JERK_E
self.max_z_feedrate = MACHINE_MAX_FEEDRATE_Z
# If the buffer takes longer than this amount of time to deplete, it can be refilled in time.
lower_bound_buffer_depletion_time = (
self._buffer_size / self._buffer_filling_rate
) # type: float
self._detection_time_frame = lower_bound_buffer_depletion_time
self._code_count_limit = self._buffer_size
self.total_time = 0.0
self.previous_feedrate = [0, 0, 0, 0]
self.previous_nominal_feedrate = 0
print("Command speed: %s" % buffer_filling_rate)
print("Code Limit: %s" % self._code_count_limit)
self._bad_frame_ranges = []
def process(self) -> None:
self.total_time = 0.0
cmd0_idx = 0
total_frame_time = 0.0
cmd_count = 0
for idx, line in enumerate(self._all_lines):
cmd = Command(line)
cmd.parse()
if not cmd.is_command:
continue
self._all_commands.append(cmd)
# Second pass: Reverse kernel.
kernel_commands = [None, None, None]
for cmd in reversed(self._all_commands):
if cmd.estimated_exec_time >= 0:
continue # Not a movement command.
kernel_commands[2] = kernel_commands[1]
kernel_commands[1] = kernel_commands[0]
kernel_commands[0] = cmd
self.reverse_pass_kernel(
kernel_commands[0], kernel_commands[1], kernel_commands[2]
)
# Third pass: Forward kernel.
kernel_commands = [None, None, None]
for cmd in self._all_commands:
if cmd.estimated_exec_time >= 0:
continue # Not a movement command.
kernel_commands[0] = kernel_commands[1]
kernel_commands[1] = kernel_commands[2]
kernel_commands[2] = cmd
self.forward_pass_kernel(
kernel_commands[0], kernel_commands[1], kernel_commands[2]
)
self.forward_pass_kernel(kernel_commands[1], kernel_commands[2], None)
# Fourth pass: Recalculate the commands that have _recalculate set.
previous = None
current = None
for current in self._all_commands:
if current.estimated_exec_time >= 0:
current = None
continue # Not a movement command.
if previous:
# Recalculate if current command entry or exit junction speed has changed.
if previous._recalculate or current._recalculate:
# Note: Entry and exit factors are always > 0 by all previous logic operations.
previous.calculate_trapezoid(
previous._entry_speed / previous._nominal_feedrate,
current._entry_speed / previous._nominal_feedrate,
)
previous._recalculate = False
previous = current
if current is not None and current.estimated_exec_time < 0:
current.calculate_trapezoid(
current._entry_speed / current._nominal_feedrate,
MINIMUM_PLANNER_SPEED / current._nominal_feedrate,
)
current._recalculate = False
# Fifth pass: Compute time for movement commands.
for cmd in self._all_commands:
if cmd.estimated_exec_time >= 0:
continue # Not a movement command.
plateau_distance = cmd._decelerate_after - cmd._accelerate_until
cmd.estimated_exec_time = calc_acceleration_time_from_distance(
cmd._initial_feedrate, cmd._accelerate_until, cmd._acceleration
)
cmd.estimated_exec_time += plateau_distance / cmd._nominal_feedrate
cmd.estimated_exec_time += calc_acceleration_time_from_distance(
cmd._final_feedrate,
(cmd._distance - cmd._decelerate_after),
cmd._acceleration,
)
for idx, cmd in enumerate(self._all_commands):
cmd_count += 1
if idx > cmd0_idx or idx == 0:
self.total_time += cmd.estimated_exec_time
total_frame_time += cmd.estimated_exec_time
if total_frame_time > 1:
# Find the next starting command which makes the total execution time of the frame to be less than
# 1 second.
cmd0_idx += 1
total_frame_time -= self._all_commands[cmd0_idx].estimated_exec_time
cmd_count -= 1
while total_frame_time > 1:
cmd0_idx += 1
total_frame_time -= self._all_commands[
cmd0_idx
].estimated_exec_time
cmd_count -= 1
# If within the current time frame the code count exceeds the limit, record that.
if (
total_frame_time <= self._detection_time_frame
and cmd_count > self._code_count_limit
):
need_to_append = True
if self._bad_frame_ranges:
last_item = self._bad_frame_ranges[-1]
if last_item["start_line"] == cmd0_idx:
last_item["end_line"] = idx
last_item["cmd_count"] = cmd_count
last_item["time"] = total_frame_time
need_to_append = False
if need_to_append:
self._bad_frame_ranges.append(
{
"start_line": cmd0_idx,
"end_line": idx,
"cmd_count": cmd_count,
"time": total_frame_time,
}
)
def reverse_pass_kernel(
self,
previous: Optional[Command],
current: Optional[Command],
next: Optional[Command],
) -> None:
if not current or not next:
return
# If entry speed is already at the maximum entry speed, no need to
# recheck. The command is cruising. If not, the command is in state of
# acceleration or deceleration. Reset entry speed to maximum and check
# for maximum allowable speed reductions to ensure maximum possible
# planned speed.
if current._entry_speed != current._max_entry_speed:
# If nominal length is true, max junction speed is guaranteed to be
# reached. Only compute for max allowable speed if block is
# decelerating and nominal length is false.
if (
not current._nominal_length
and current._max_entry_speed > next._max_entry_speed
):
current._entry_speed = min(
current._max_entry_speed,
calc_max_allowable_speed(
-current._acceleration, next._entry_speed, current._distance
),
)
else:
current._entry_speed = current._max_entry_speed
current._recalculate = True
def forward_pass_kernel(
self,
previous: Optional[Command],
current: Optional[Command],
next: Optional[Command],
) -> None:
if not previous:
return
# If the previous command is an acceleration command, but it is not long
# enough to complete the full speed change within the command, we need to
# adjust the entry speed accordingly. Entry speeds have already been
# reset, maximised and reverse planned by the reverse planner. If nominal
# length is set, max junction speed is guaranteed to be reached. No need
# to recheck.
if not previous._nominal_length:
if previous._entry_speed < current._entry_speed:
entry_speed = min(
current._entry_speed,
calc_max_allowable_speed(
-previous._acceleration,
previous._entry_speed,
previous._distance,
),
)
if current._entry_speed != entry_speed:
current._entry_speed = entry_speed
current._recalculate = True
def to_file(self, file_name: str) -> None:
all_lines = [str(c) for c in self._all_commands]
with open(file_name, "w", encoding="utf-8") as f:
f.writelines(all_lines)
f.write(";---TOTAL ESTIMATED TIME:" + str(self.total_time))
def report(self) -> None:
for item in self._bad_frame_ranges:
print(
"Potential buffer underrun from line {start_line} to {end_line}, code count = {code_count}, in {time}s ({speed} cmd/s)".format(
start_line=item["start_line"],
end_line=item["end_line"],
code_count=item["cmd_count"],
time=round(item["time"], 4),
speed=round(item["cmd_count"] / item["time"], 2),
)
)
print(
"Total predicted number of buffer underruns:", len(self._bad_frame_ranges)
)
if __name__ == "__main__":
if len(sys.argv) < 2 or 3 < len(sys.argv):
print("Usage: <input g-code> [output g-code]")
sys.exit(1)
in_filename = sys.argv[1]
out_filename = None
if len(sys.argv) == 3:
out_filename = sys.argv[2]
with open(in_filename, "r", encoding="utf-8") as f:
all_lines = f.readlines()
buf = CommandBuffer(all_lines)
buf.process()
# Output annotated gcode is optional
if out_filename is not None:
buf.to_file(out_filename)
buf.report()
|
website | backends | import logging
from django.contrib.auth.backends import RemoteUserBackend
from django.contrib.auth.models import User
from papers.models import Researcher
from papers.utils import validate_orcid
from website.models import ShibbolethAccount
from website.utils import merge_users
logger = logging.getLogger("dissemin." + __name__)
class ShibbolethRemoteUserBackend(RemoteUserBackend):
"""
We want to overwrite the user creation process to integrate ORCID.
To this end we create a concordance between eduPersonPrincipalName and Django User objects and use this to get or create a User.
This class is heavily inspired by shibboleth.backends.ShibbolethRemoteUserBackend
"""
def authenticate(self, request, remote_user, shib_meta):
"""
The remote_user is considered trusted.
Sets up a user based on the Shibboleth data.
We look the user up via website.models.ShibbolethAccount; if no such account exists, we create a user.
If an ORCID is passed in shib_meta, we try to find a matching researcher; otherwise we create a researcher.
"""
# If no remote_user is given, we abort
if not remote_user:
logger.info("remote_user invalid")
return
logger.debug("Received remote_user: {}".format(remote_user))
# This is the real process of authentication
user = None
shib_account = None
try:
shib_account = ShibbolethAccount.objects.get(
shib_username=shib_meta.get("username")
)
except ShibbolethAccount.DoesNotExist:
logger.debug("username {} not found".format(shib_meta.get("username")))
orcid = validate_orcid(shib_meta.get("orcid"))
if shib_account:
logger.debug("Found ShibbolethAccount: {}".format(shib_account))
# If we have a ShibbolethAccount object, we have a Researcher object
researcher = Researcher.objects.get(user=shib_account.user)
# If we have an ORCID, we can try to match or merge researchers
if orcid:
if researcher.orcid:
# If both objects have ORCIDs, we can assume that they are identical
user = shib_account.user
# The researcher has no ORCID yet. Try to find a researcher with that ORCID and merge; otherwise just set the ORCID on the current researcher
else:
try:
alt_researcher = Researcher.objects.get(orcid=orcid)
except Researcher.DoesNotExist:
logger.debug(
"Found no researcher with ORCID {}; saving it on the related researcher".format(
orcid
)
)
researcher.orcid = orcid
researcher.save()
else:
# We have an alternative researcher. If it has a user, merge the users; otherwise proceed directly to merging the researchers
if alt_researcher.user:
merge_users(shib_account.user, alt_researcher.user)
researcher.merge(alt_researcher, delete_user=True)
user = shib_account.user
else:
user = shib_account.user
# We have no ShibbolethAccount object
# If we have an ORCID, we can try to find a Researcher
elif orcid:
try:
researcher = Researcher.objects.get(orcid=orcid)
except Researcher.DoesNotExist:
pass
else:
# We have found a Researcher object
if researcher.user:
# The found researcher has a user object. We use it
ShibbolethAccount.objects.create(
user=researcher.user, shib_username=shib_meta.get("username")
)
user = researcher.user
else:
# The found researcher has no user object. We create a user and connect it
user = User.objects.create_user(
remote_user,
first_name=shib_meta.get("first_name"),
last_name=shib_meta.get("last_name"),
)
ShibbolethAccount.objects.create(
user=user, shib_username=shib_meta.get("username")
)
researcher.user = user
researcher.save()
# If we still have no user (no ORCID, or no researcher matched it), create a user, ShibbolethAccount and Researcher
if not user:
user = self.create_new_user_and_researcher(remote_user, orcid, shib_meta)
return user
def create_new_user_and_researcher(self, remote_user, orcid, shib_meta):
"""
Creates a new user and researcher and returns the user
"""
user = User.objects.create_user(
remote_user,
first_name=shib_meta.get("first_name"),
last_name=shib_meta.get("last_name"),
)
ShibbolethAccount.objects.create(
user=user, shib_username=shib_meta.get("username")
)
Researcher.create_by_name(
shib_meta.get("first_name"),
shib_meta.get("last_name"),
orcid=orcid,
user=user,
)
return user
|
extractor | mychannels | # coding: utf-8
from __future__ import unicode_literals
import re
from .common import InfoExtractor
class MyChannelsIE(InfoExtractor):
_VALID_URL = r"https?://(?:www\.)?mychannels\.com/.*(?P<id_type>video|production)_id=(?P<id>[0-9]+)"
_TEST = {
"url": "https://mychannels.com/missholland/miss-holland?production_id=3416",
"md5": "b8993daad4262dd68d89d651c0c52c45",
"info_dict": {
"id": "wUUDZZep6vQD",
"ext": "mp4",
"title": "Miss Holland joins VOTE LEAVE",
"description": "Miss Holland | #13 Not a potato",
"uploader": "Miss Holland",
},
}
def _real_extract(self, url):
id_type, url_id = re.match(self._VALID_URL, url).groups()
webpage = self._download_webpage(url, url_id)
video_data = self._html_search_regex(
r'<div([^>]+data-%s-id="%s"[^>]+)>' % (id_type, url_id),
webpage,
"video data",
)
def extract_data_val(attr, fatal=False):
return self._html_search_regex(
r'data-%s\s*=\s*"([^"]+)"' % attr, video_data, attr, fatal=fatal
)
minoto_id = extract_data_val("minoto-id") or self._search_regex(
r"/id/([a-zA-Z0-9]+)", extract_data_val("video-src", True), "minoto id"
)
return {
"_type": "url_transparent",
"url": "minoto:%s" % minoto_id,
"id": url_id,
"title": extract_data_val("title", True),
"description": extract_data_val("description"),
"thumbnail": extract_data_val("image"),
"uploader": extract_data_val("channel"),
}
|
examples | list_classes | #!/usr/bin/python
"""This script lists classes and optionally attributes from UML model created
with Gaphor."""
import optparse
import sys
from gaphor import UML
from gaphor.application import Session
# Setup command line options.
usage = "usage: %prog [options] file.gaphor"
def main():
parser = optparse.OptionParser(usage=usage)
parser.add_option(
"-a",
"--attributes",
dest="attrs",
action="store_true",
help="Print class attributes",
)
(options, args) = parser.parse_args()
if len(args) != 1:
parser.print_help()
sys.exit(1)
# The model file to load.
model = args[0]
# Create the Gaphor application object.
session = Session()
# Get services we need.
element_factory = session.get_service("element_factory")
file_manager = session.get_service("file_manager")
# Load model from file.
file_manager.load(model)
# Find all classes using factory select.
for cls in element_factory.select(UML.Class):
print(f"Found class {cls.name}")
if options.attrs:
for attr in cls.ownedAttribute:
print(f" Attribute: {attr.name}")
if __name__ == "__main__":
main()
|
metasync | itunes | # This file is part of beets.
# Copyright 2016, Tom Jaspers.
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
"""Synchronize information from iTunes's library
"""
import os
import plistlib
import shutil
import tempfile
from contextlib import contextmanager
from time import mktime
from urllib.parse import unquote, urlparse
from beets import util
from beets.dbcore import types
from beets.library import DateType
from beets.util import bytestring_path, syspath
from beetsplug.metasync import MetaSource
from confuse import ConfigValueError
@contextmanager
def create_temporary_copy(path):
temp_dir = bytestring_path(tempfile.mkdtemp())
temp_path = os.path.join(temp_dir, b"temp_itunes_lib")
shutil.copyfile(syspath(path), syspath(temp_path))
try:
yield temp_path
finally:
shutil.rmtree(syspath(temp_dir))
def _norm_itunes_path(path):
# iTunes prefixes the location with 'file://' on POSIX systems,
# and with 'file://localhost/' on Windows systems.
# The actual path to the file is always saved in POSIX form,
# e.g. 'file://Users/Music/bar' or 'file://localhost/G:/Music/bar'.
# The result is lowercased so lookups are case-insensitive.
# Note that this means the path will always have a leading separator,
# which is unwanted in the case of Windows systems:
# e.g. '\\G:\\Music\\bar' needs to be stripped to 'G:\\Music\\bar'.
return util.bytestring_path(
os.path.normpath(unquote(urlparse(path).path)).lstrip("\\")
).lower()
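An illustrative sketch (not part of beets) of what the normalization above does to an iTunes `Location` value, using only the stdlib; beets' `bytestring_path` wrapper is omitted, and the sample URL is made up:

```python
# Sketch of the normalization performed by _norm_itunes_path above,
# stdlib-only. Note os.path.normpath is platform-dependent.
import os
from urllib.parse import unquote, urlparse

def norm_itunes_path_demo(location):
    # Strip the 'file://...' prefix, percent-decode, and lowercase so
    # lookups against local library paths are case-insensitive.
    return os.path.normpath(unquote(urlparse(location).path)).lstrip("\\").lower()

# On a POSIX system this yields '/g:/music/my song.mp3'; on Windows,
# normpath swaps in backslashes and the leading one is stripped.
print(norm_itunes_path_demo("file://localhost/G:/Music/My%20Song.mp3"))
```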
class Itunes(MetaSource):
item_types = {
"itunes_rating": types.INTEGER, # 0..100 scale
"itunes_playcount": types.INTEGER,
"itunes_skipcount": types.INTEGER,
"itunes_lastplayed": DateType(),
"itunes_lastskipped": DateType(),
"itunes_dateadded": DateType(),
}
def __init__(self, config, log):
super().__init__(config, log)
config.add({"itunes": {"library": "~/Music/iTunes/iTunes Library.xml"}})
# Load the iTunes library, which has to be the .xml one (not the .itl)
library_path = config["itunes"]["library"].as_filename()
try:
self._log.debug(f"loading iTunes library from {library_path}")
with create_temporary_copy(library_path) as library_copy:
with open(library_copy, "rb") as library_copy_f:
raw_library = plistlib.load(library_copy_f)
except OSError as e:
raise ConfigValueError("invalid iTunes library: " + e.strerror)
except Exception:
# It's likely the user configured their '.itl' library (<> xml)
if os.path.splitext(library_path)[1].lower() != ".xml":
hint = (
": please ensure that the configured path"
" points to the .XML library"
)
else:
hint = ""
raise ConfigValueError("invalid iTunes library" + hint)
# Make the iTunes library queryable using the path
self.collection = {
_norm_itunes_path(track["Location"]): track
for track in raw_library["Tracks"].values()
if "Location" in track
}
def sync_from_source(self, item):
result = self.collection.get(util.bytestring_path(item.path).lower())
if not result:
self._log.warning(f"no iTunes match found for {item}")
return
item.itunes_rating = result.get("Rating")
item.itunes_playcount = result.get("Play Count")
item.itunes_skipcount = result.get("Skip Count")
if result.get("Play Date UTC"):
item.itunes_lastplayed = mktime(result.get("Play Date UTC").timetuple())
if result.get("Skip Date"):
item.itunes_lastskipped = mktime(result.get("Skip Date").timetuple())
if result.get("Date Added"):
item.itunes_dateadded = mktime(result.get("Date Added").timetuple())
|
bup | drecurse | from __future__ import absolute_import
import os
import stat
import bup.xstat as xstat
from bup.helpers import (
add_error,
debug1,
finalized,
resolve_parent,
should_rx_exclude_path,
)
from bup.io import path_msg
# the use of fchdir() and lstat() is for two reasons:
# - help out the kernel by not making it repeatedly look up the absolute path
# - avoid race conditions caused by doing listdir() on a changing symlink
try:
O_LARGEFILE = os.O_LARGEFILE
except AttributeError:
O_LARGEFILE = 0
try:
O_NOFOLLOW = os.O_NOFOLLOW
except AttributeError:
O_NOFOLLOW = 0
def finalized_fd(path):
fd = os.open(path, os.O_RDONLY | O_LARGEFILE | O_NOFOLLOW | os.O_NDELAY)
return finalized(fd, lambda x: os.close(x))
def _dirlist():
l = []
for n in os.listdir(b"."):
try:
st = xstat.lstat(n)
except OSError as e:
add_error(Exception("%s: %s" % (resolve_parent(n), str(e))))
continue
if stat.S_ISDIR(st.st_mode):
n += b"/"
l.append((n, st))
l.sort(reverse=True)
return l
def _recursive_dirlist(
prepend,
xdev,
bup_dir=None,
excluded_paths=None,
exclude_rxs=None,
xdev_exceptions=frozenset(),
):
for name, pst in _dirlist():
path = prepend + name
if excluded_paths:
if os.path.normpath(path) in excluded_paths:
debug1("Skipping %r: excluded.\n" % path_msg(path))
continue
if exclude_rxs and should_rx_exclude_path(path, exclude_rxs):
continue
if name.endswith(b"/"):
if bup_dir is not None:
if os.path.normpath(path) == bup_dir:
debug1("Skipping BUP_DIR.\n")
continue
if xdev is not None and pst.st_dev != xdev and path not in xdev_exceptions:
debug1(
"Skipping contents of %r: different filesystem.\n" % path_msg(path)
)
else:
try:
with finalized_fd(name) as fd:
os.fchdir(fd)
except OSError as e:
add_error("%s: %s" % (prepend, e))
else:
for i in _recursive_dirlist(
prepend=prepend + name,
xdev=xdev,
bup_dir=bup_dir,
excluded_paths=excluded_paths,
exclude_rxs=exclude_rxs,
xdev_exceptions=xdev_exceptions,
):
yield i
os.chdir(b"..")
yield (path, pst)
def recursive_dirlist(
paths,
xdev,
bup_dir=None,
excluded_paths=None,
exclude_rxs=None,
xdev_exceptions=frozenset(),
):
with finalized_fd(b".") as startdir:
try:
assert not isinstance(paths, str)
for path in paths:
try:
pst = xstat.lstat(path)
if stat.S_ISLNK(pst.st_mode):
yield (path, pst)
continue
except OSError as e:
add_error("recursive_dirlist: %s" % e)
continue
try:
opened_pfile = finalized_fd(path)
except OSError as e:
add_error(e)
continue
with opened_pfile as pfile:
pst = xstat.fstat(pfile)
if xdev:
xdev = pst.st_dev
else:
xdev = None
if stat.S_ISDIR(pst.st_mode):
os.fchdir(pfile)
prepend = os.path.join(path, b"")
for i in _recursive_dirlist(
prepend=prepend,
xdev=xdev,
bup_dir=bup_dir,
excluded_paths=excluded_paths,
exclude_rxs=exclude_rxs,
xdev_exceptions=xdev_exceptions,
):
yield i
os.fchdir(startdir)
else:
prepend = path
yield (prepend, pst)
except:
try:
os.fchdir(startdir)
except:
pass
raise
|
femsolver | solverbase | # ***************************************************************************
# * Copyright (c) 2017 Markus Hovorka <m.hovorka@live.de> *
# * Copyright (c) 2017 Bernd Hahnebach <bernd@bimstatik.org> *
# * *
# * This file is part of the FreeCAD CAx development system. *
# * *
# * This program is free software; you can redistribute it and/or modify *
# * it under the terms of the GNU Lesser General Public License (LGPL) *
# * as published by the Free Software Foundation; either version 2 of *
# * the License, or (at your option) any later version. *
# * for detail see the LICENCE text file. *
# * *
# * This program is distributed in the hope that it will be useful, *
# * but WITHOUT ANY WARRANTY; without even the implied warranty of *
# * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
# * GNU Library General Public License for more details. *
# * *
# * You should have received a copy of the GNU Library General Public *
# * License along with this program; if not, write to the Free Software *
# * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 *
# * USA *
# * *
# ***************************************************************************
__title__ = "FreeCAD FEM solver base object"
__author__ = "Markus Hovorka"
__url__ = "https://www.freecad.org"
## \addtogroup FEM
# @{
import FreeCAD as App
from femtools.errors import DirectoryDoesNotExistError, MustSaveError
from . import run
if App.GuiUp:
import FreeCADGui as Gui
from PySide import QtGui
from . import solver_taskpanel
class Proxy(object):
BaseType = "Fem::FemSolverObjectPython"
def __init__(self, obj):
obj.Proxy = self
obj.addExtension("App::GroupExtensionPython")
def createMachine(self, obj, directory, testmode):
raise NotImplementedError()
def createEquation(self, obj, eqId):
raise NotImplementedError()
def isSupported(self, equation):
raise NotImplementedError()
def addEquation(self, obj, eqId):
obj.addObject(self.createEquation(obj.Document, eqId))
def editSupported(self):
return False
def edit(self, directory):
raise NotImplementedError()
def execute(self, obj):
return True
class ViewProxy(object):
"""Proxy for FemSolverElmers View Provider."""
def __init__(self, vobj):
vobj.Proxy = self
vobj.addExtension("Gui::ViewProviderGroupExtensionPython")
def setEdit(self, vobj, mode=0):
try:
machine = run.getMachine(vobj.Object)
except MustSaveError:
error_message = (
"Please save the file before opening the task panel. "
"This must be done because the location of the working "
'directory is set to "Beside *.FCStd File".'
)
App.Console.PrintError(error_message + "\n")
QtGui.QMessageBox.critical(
Gui.getMainWindow(), "Can't open Task Panel", error_message
)
return False
except DirectoryDoesNotExistError:
error_message = "Selected working directory doesn't exist."
App.Console.PrintError(error_message + "\n")
QtGui.QMessageBox.critical(
Gui.getMainWindow(), "Can't open Task Panel", error_message
)
return False
task = solver_taskpanel.ControlTaskPanel(machine)
Gui.Control.showDialog(task)
return True
def unsetEdit(self, vobj, mode=0):
Gui.Control.closeDialog()
def doubleClicked(self, vobj):
if Gui.Control.activeDialog():
Gui.Control.closeDialog()
vobj.Document.setEdit(vobj.Object.Name)
return True
def attach(self, vobj):
pass
## @}
|
chardet | sjisprober | ######################## BEGIN LICENSE BLOCK ########################
# The Original Code is mozilla.org code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 1998
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
# Mark Pilgrim - port to Python
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301 USA
######################### END LICENSE BLOCK #########################
from .chardistribution import SJISDistributionAnalysis
from .codingstatemachine import CodingStateMachine
from .enums import MachineState, ProbingState
from .jpcntx import SJISContextAnalysis
from .mbcharsetprober import MultiByteCharSetProber
from .mbcssm import SJIS_SM_MODEL
class SJISProber(MultiByteCharSetProber):
def __init__(self):
super(SJISProber, self).__init__()
self.coding_sm = CodingStateMachine(SJIS_SM_MODEL)
self.distribution_analyzer = SJISDistributionAnalysis()
self.context_analyzer = SJISContextAnalysis()
self.reset()
def reset(self):
super(SJISProber, self).reset()
self.context_analyzer.reset()
@property
def charset_name(self):
return self.context_analyzer.charset_name
@property
def language(self):
return "Japanese"
def feed(self, byte_str):
for i in range(len(byte_str)):
coding_state = self.coding_sm.next_state(byte_str[i])
if coding_state == MachineState.ERROR:
self.logger.debug(
"%s %s prober hit error at byte %s",
self.charset_name,
self.language,
i,
)
self._state = ProbingState.NOT_ME
break
elif coding_state == MachineState.ITS_ME:
self._state = ProbingState.FOUND_IT
break
elif coding_state == MachineState.START:
char_len = self.coding_sm.get_current_charlen()
if i == 0:
self._last_char[1] = byte_str[0]
self.context_analyzer.feed(
self._last_char[2 - char_len :], char_len
)
self.distribution_analyzer.feed(self._last_char, char_len)
else:
self.context_analyzer.feed(
byte_str[i + 1 - char_len : i + 3 - char_len], char_len
)
self.distribution_analyzer.feed(byte_str[i - 1 : i + 1], char_len)
self._last_char[0] = byte_str[-1]
if self.state == ProbingState.DETECTING:
if self.context_analyzer.got_enough_data() and (
self.get_confidence() > self.SHORTCUT_THRESHOLD
):
self._state = ProbingState.FOUND_IT
return self.state
def get_confidence(self):
context_conf = self.context_analyzer.get_confidence()
distrib_conf = self.distribution_analyzer.get_confidence()
return max(context_conf, distrib_conf)
|
disabled-ZeronameLocal | SiteManagerPlugin | import json
import logging
import os
import re
import socket
import sys
import time
from base64 import b64encode
from http.client import HTTPConnection, HTTPException, HTTPSConnection
from Config import config
from Debug import Debug
from Plugin import PluginManager
allow_reload = False # No reload supported
@PluginManager.registerTo("SiteManager")
class SiteManagerPlugin(object):
def load(self, *args, **kwargs):
super(SiteManagerPlugin, self).load(*args, **kwargs)
self.log = logging.getLogger("ZeronetLocal Plugin")
self.error_message = None
if (
not config.namecoin_host
or not config.namecoin_rpcport
or not config.namecoin_rpcuser
or not config.namecoin_rpcpassword
):
self.error_message = "Missing parameters"
self.log.error(
"Missing parameters to connect to the namecoin node. Please check the required arguments with '--help'. Zeronet will continue working without it."
)
return
url = "%(host)s:%(port)s" % {
"host": config.namecoin_host,
"port": config.namecoin_rpcport,
}
self.c = HTTPConnection(url, timeout=3)
user_pass = "%(user)s:%(password)s" % {
"user": config.namecoin_rpcuser,
"password": config.namecoin_rpcpassword,
}
userAndPass = b64encode(bytes(user_pass, "utf-8")).decode("ascii")
self.headers = {
"Authorization": "Basic %s" % userAndPass,
"Content-Type": "application/json",
}
payload = json.dumps(
{"jsonrpc": "2.0", "id": "zeronet", "method": "ping", "params": []}
)
try:
self.c.request("POST", "/", payload, headers=self.headers)
response = self.c.getresponse()
data = response.read()
self.c.close()
if response.status == 200:
result = json.loads(data.decode())["result"]
else:
raise Exception(response.reason)
except Exception as err:
self.log.error(
"The Namecoin node is unreachable. Please check that the configuration values are correct. Zeronet will continue working without it."
)
self.error_message = err
self.cache = dict()
# Checks if it's a valid address
def isAddress(self, address):
return self.isBitDomain(address) or super(SiteManagerPlugin, self).isAddress(
address
)
# Return: True if the address is domain
def isDomain(self, address):
return self.isBitDomain(address) or super(SiteManagerPlugin, self).isDomain(
address
)
# Return: True if the address is .bit domain
def isBitDomain(self, address):
return re.match(r"(.*?)([A-Za-z0-9_-]+\.bit)$", address)
# Return: Site object or None if not found
def get(self, address):
if self.isBitDomain(address): # It looks like a domain
address_resolved = self.resolveDomain(address)
if address_resolved: # Domain found
site = self.sites.get(address_resolved)
if site:
site_domain = site.settings.get("domain")
if site_domain != address:
site.settings["domain"] = address
else: # Domain not found
site = self.sites.get(address)
else: # Access by site address
site = super(SiteManagerPlugin, self).get(address)
return site
# Return or create site and start download site files
# Return: Site or None if dns resolve failed
def need(self, address, *args, **kwargs):
if self.isBitDomain(address): # It looks like a domain
address_resolved = self.resolveDomain(address)
if address_resolved:
address = address_resolved
else:
return None
return super(SiteManagerPlugin, self).need(address, *args, **kwargs)
# Resolve domain
# Return: The address or None
def resolveDomain(self, domain):
domain = domain.lower()
# remove .bit on end
if domain[-4:] == ".bit":
domain = domain[0:-4]
domain_array = domain.split(".")
if self.error_message:
self.log.error(
"Unable to connect to the Namecoin node: {!s}".format(self.error_message)
)
return None
if len(domain_array) > 2:
self.log.error(
"Too many subdomains! Can only handle one level (e.g. staging.mixtape.bit)"
)
return None
subdomain = ""
if len(domain_array) == 1:
domain = domain_array[0]
else:
subdomain = domain_array[0]
domain = domain_array[1]
if domain in self.cache:
delta = time.time() - self.cache[domain]["time"]
if delta < 3600:
# Cached less than an hour ago; still valid
return self.cache[domain]["addresses_resolved"][subdomain]
payload = json.dumps(
{
"jsonrpc": "2.0",
"id": "zeronet",
"method": "name_show",
"params": ["d/" + domain],
}
)
try:
self.c.request("POST", "/", payload, headers=self.headers)
response = self.c.getresponse()
data = response.read()
self.c.close()
domain_object = json.loads(data.decode())["result"]
except Exception as err:
# domain doesn't exist
return None
if "zeronet" in domain_object["value"]:
zeronet_domains = json.loads(domain_object["value"])["zeronet"]
if isinstance(zeronet_domains, str):
# {
# "zeronet":"19rXKeKptSdQ9qt7omwN82smehzTuuq6S9"
# } is valid
zeronet_domains = {"": zeronet_domains}
self.cache[domain] = {
"addresses_resolved": zeronet_domains,
"time": time.time(),
}
elif "map" in domain_object["value"]:
# Namecoin standard use {"map": { "blog": {"zeronet": "1D..."} }}
data_map = json.loads(domain_object["value"])["map"]
zeronet_domains = dict()
for subdomain in data_map:
if "zeronet" in data_map[subdomain]:
zeronet_domains[subdomain] = data_map[subdomain]["zeronet"]
if "zeronet" in data_map and isinstance(data_map["zeronet"], str):
# {"map":{
# "zeronet":"19rXKeKptSdQ9qt7omwN82smehzTuuq6S9",
# }}
zeronet_domains[""] = data_map["zeronet"]
self.cache[domain] = {
"addresses_resolved": zeronet_domains,
"time": time.time(),
}
else:
# No Zeronet address registered
return None
return self.cache[domain]["addresses_resolved"][subdomain]
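An illustrative sketch (not part of the plugin) of the two `name_show` value layouts that `resolveDomain` above accepts; the parsing logic mirrors the code, but the function name and sample addresses are made up for the example:

```python
# Sketch of the two Namecoin "value" formats handled above:
# {"zeronet": "<addr>"} and {"map": {"<sub>": {"zeronet": "<addr>"}}}.
import json

def parse_namecoin_value(value):
    data = json.loads(value)
    if "zeronet" in data:
        zeronet = data["zeronet"]
        # A bare string means "this is the root domain's address"
        return {"": zeronet} if isinstance(zeronet, str) else zeronet
    if "map" in data:
        resolved = {}
        for sub, entry in data["map"].items():
            if isinstance(entry, dict) and "zeronet" in entry:
                resolved[sub] = entry["zeronet"]
        # The map form may also carry a root address directly
        if isinstance(data["map"].get("zeronet"), str):
            resolved[""] = data["map"]["zeronet"]
        return resolved
    return None  # No ZeroNet address registered

print(parse_namecoin_value('{"zeronet": "1abc"}'))                     # {'': '1abc'}
print(parse_namecoin_value('{"map": {"blog": {"zeronet": "1def"}}}'))  # {'blog': '1def'}
```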
@PluginManager.registerTo("ConfigPlugin")
class ConfigPlugin(object):
def createArguments(self):
group = self.parser.add_argument_group("Zeroname Local plugin")
group.add_argument(
"--namecoin_host", help="Host to namecoin node (eg. 127.0.0.1)"
)
group.add_argument("--namecoin_rpcport", help="Port to connect (eg. 8336)")
group.add_argument(
"--namecoin_rpcuser",
help="RPC user to connect to the namecoin node (eg. nofish)",
)
group.add_argument(
"--namecoin_rpcpassword", help="RPC password to connect to namecoin node"
)
return super(ConfigPlugin, self).createArguments()
|
storages | file_storage | #!/usr/bin/python
# -*- coding: utf-8 -*-
# thumbor imaging service
# https://github.com/thumbor/thumbor/wiki
# Licensed under the MIT license:
# http://www.opensource.org/licenses/mit-license
# Copyright (c) 2011 globo.com thumbor@googlegroups.com
import hashlib
import os
from datetime import datetime
from json import dumps, loads
from os.path import dirname, exists, getmtime, splitext
from shutil import move
from uuid import uuid4
from thumbor import storages
from thumbor.utils import logger
class Storage(storages.BaseStorage):
async def put(self, path, file_bytes):
file_abspath = self.path_on_filesystem(path)
temp_abspath = f"{file_abspath}.{str(uuid4()).replace('-', '')}"
file_dir_abspath = dirname(file_abspath)
logger.debug("creating tempfile for %s in %s...", path, temp_abspath)
self.ensure_dir(file_dir_abspath)
with open(temp_abspath, "wb") as _file:
_file.write(file_bytes)
logger.debug("moving tempfile %s to %s...", temp_abspath, file_abspath)
move(temp_abspath, file_abspath)
return path
async def put_crypto(self, path):
if not self.context.config.STORES_CRYPTO_KEY_FOR_EACH_IMAGE:
return
file_abspath = self.path_on_filesystem(path)
file_dir_abspath = dirname(file_abspath)
self.ensure_dir(file_dir_abspath)
if not self.context.server.security_key:
raise RuntimeError(
"STORES_CRYPTO_KEY_FOR_EACH_IMAGE can't be "
"True if no SECURITY_KEY specified"
)
crypto_path = f"{splitext(file_abspath)[0]}.txt"
temp_abspath = f"{crypto_path}.{str(uuid4()).replace('-', '')}"
with open(temp_abspath, "wb") as _file:
try:
security_key = self.context.server.security_key.encode()
except (UnicodeDecodeError, AttributeError):
security_key = self.context.server.security_key
_file.write(security_key)
move(temp_abspath, crypto_path)
logger.debug(
"Stored crypto at %s (security key: %s)",
crypto_path,
self.context.server.security_key,
)
return file_abspath
async def put_detector_data(self, path, data):
file_abspath = self.path_on_filesystem(path)
path = f"{splitext(file_abspath)[0]}.detectors.txt"
temp_abspath = f"{path}.{str(uuid4()).replace('-', '')}"
file_dir_abspath = dirname(file_abspath)
self.ensure_dir(file_dir_abspath)
with open(temp_abspath, "w", encoding="utf-8") as _file:
_file.write(dumps(data))
move(temp_abspath, path)
return file_abspath
async def get(self, path):
abs_path = self.path_on_filesystem(path)
resource_available = await self.exists(path, path_on_filesystem=abs_path)
if not resource_available:
return None
with open(self.path_on_filesystem(path), "rb") as source_file:
return source_file.read()
async def get_crypto(self, path):
file_abspath = self.path_on_filesystem(path)
crypto_file = f"{splitext(file_abspath)[0]}.txt"
if not exists(crypto_file):
return None
with open(crypto_file, "r", encoding="utf-8") as crypto_f:
return crypto_f.read()
async def get_detector_data(self, path):
file_abspath = self.path_on_filesystem(path)
path = f"{splitext(file_abspath)[0]}.detectors.txt"
resource_available = await self.exists(path, path_on_filesystem=path)
if not resource_available:
return None
with open(path, "r", encoding="utf-8") as detector_file:
return loads(detector_file.read())
def path_on_filesystem(self, path):
digest = hashlib.sha1(path.encode("utf-8")).hexdigest()
root_path = self.context.config.FILE_STORAGE_ROOT_PATH.rstrip("/")
return f"{root_path}/{digest[:2]}/{digest[2:]}"
async def exists(self, path, path_on_filesystem=None): # pylint: disable=arguments-differ
if path_on_filesystem is None:
path_on_filesystem = self.path_on_filesystem(path)
return os.path.exists(path_on_filesystem) and not self.__is_expired(
path_on_filesystem
)
async def remove(self, path):
n_path = self.path_on_filesystem(path)
return os.remove(n_path)
def __is_expired(self, path):
if self.context.config.STORAGE_EXPIRATION_SECONDS is None:
return False
timediff = datetime.now() - datetime.fromtimestamp(getmtime(path))
return timediff.total_seconds() > self.context.config.STORAGE_EXPIRATION_SECONDS
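An illustrative sketch (not part of thumbor) of the SHA-1 sharding scheme used by `path_on_filesystem` above; the root path is a made-up example:

```python
# Sketch of the digest-based sharding in path_on_filesystem above:
# the first two hex chars become a bucket directory, the rest the filename.
import hashlib

def path_on_filesystem_demo(root_path, path):
    digest = hashlib.sha1(path.encode("utf-8")).hexdigest()
    return f"{root_path.rstrip('/')}/{digest[:2]}/{digest[2:]}"

print(path_on_filesystem_demo("/tmp/thumbor", "foo"))
# → /tmp/thumbor/0b/eec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33
```

Sharding into 256 two-character buckets keeps any single directory from accumulating an unbounded number of files.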
|
decrypters | CzshareComFolder | # -*- coding: utf-8 -*-
import re
from ..base.decrypter import BaseDecrypter
class CzshareComFolder(BaseDecrypter):
__name__ = "CzshareComFolder"
__type__ = "decrypter"
__version__ = "0.27"
__status__ = "testing"
__pattern__ = r"https?://(?:www\.)?(czshare|sdilej)\.(com|cz)/folders/.+"
__config__ = [
("activated", "bool", "Activated", True),
("use_premium", "bool", "Use premium account if available", True),
(
"folder_per_package",
"Default;Yes;No",
"Create folder for each package",
"Default",
),
]
__description__ = """Czshare.com (now Sdilej.cz) folder decrypter plugin"""
__license__ = "GPLv3"
__authors__ = [("zoidberg", "zoidberg@mujmail.cz")]
FOLDER_PATTERN = r'<tr class="subdirectory">\s*<td>\s*<table>(.*?)</table>'
LINK_PATTERN = r'<td class="col2"><a href="(.+?)">info</a></td>'
def decrypt(self, pyfile):
html = self.load(pyfile.url)
m = re.search(self.FOLDER_PATTERN, html, re.S)
if m is None:
self.error(self._("FOLDER_PATTERN not found"))
self.links.extend(re.findall(self.LINK_PATTERN, m.group(1)))
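An illustrative sketch (not part of the plugin) of how `LINK_PATTERN` above extracts file links from a folder page; the HTML snippet is made up:

```python
# Sketch: applying the decrypter's LINK_PATTERN to a sample table row.
import re

LINK_PATTERN = r'<td class="col2"><a href="(.+?)">info</a></td>'
sample = '<td class="col2"><a href="https://sdilej.cz/123/file.zip">info</a></td>'
print(re.findall(LINK_PATTERN, sample))
# → ['https://sdilej.cz/123/file.zip']
```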
|
extractor | ustudio | from __future__ import unicode_literals
import re
from ..utils import int_or_none, unescapeHTML, unified_strdate
from .common import InfoExtractor
class UstudioIE(InfoExtractor):
IE_NAME = "ustudio"
_VALID_URL = r"https?://(?:(?:www|v1)\.)?ustudio\.com/video/(?P<id>[^/]+)/(?P<display_id>[^/?#&]+)"
_TEST = {
"url": "http://ustudio.com/video/Uxu2my9bgSph/san_francisco_golden_gate_bridge",
"md5": "58bbfca62125378742df01fc2abbdef6",
"info_dict": {
"id": "Uxu2my9bgSph",
"display_id": "san_francisco_golden_gate_bridge",
"ext": "mp4",
"title": "San Francisco: Golden Gate Bridge",
"description": "md5:23925500697f2c6d4830e387ba51a9be",
"thumbnail": r"re:^https?://.*\.jpg$",
"upload_date": "20111107",
"uploader": "Tony Farley",
},
}
def _real_extract(self, url):
video_id, display_id = re.match(self._VALID_URL, url).groups()
config = self._download_xml(
"http://v1.ustudio.com/embed/%s/ustudio/config.xml" % video_id, display_id
)
def extract(kind):
return [
{
"url": unescapeHTML(item.attrib["url"]),
"width": int_or_none(item.get("width")),
"height": int_or_none(item.get("height")),
}
for item in config.findall("./qualities/quality/%s" % kind)
if item.get("url")
]
formats = extract("video")
self._sort_formats(formats)
webpage = self._download_webpage(url, display_id)
title = self._og_search_title(webpage)
upload_date = unified_strdate(
self._search_regex(
r"(?s)Uploaded by\s*.+?\s*on\s*<span>([^<]+)</span>",
webpage,
"upload date",
fatal=False,
)
)
uploader = self._search_regex(
r"Uploaded by\s*<a[^>]*>([^<]+)<", webpage, "uploader", fatal=False
)
return {
"id": video_id,
"display_id": display_id,
"title": title,
"description": self._og_search_description(webpage),
"thumbnails": extract("image"),
"upload_date": upload_date,
"uploader": uploader,
"formats": formats,
}
class UstudioEmbedIE(InfoExtractor):
IE_NAME = "ustudio:embed"
_VALID_URL = (
r"https?://(?:(?:app|embed)\.)?ustudio\.com/embed/(?P<uid>[^/]+)/(?P<id>[^/]+)"
)
_TEST = {
"url": "http://app.ustudio.com/embed/DeN7VdYRDKhP/Uw7G1kMCe65T",
"md5": "47c0be52a09b23a7f40de9469cec58f4",
"info_dict": {
"id": "Uw7G1kMCe65T",
"ext": "mp4",
"title": "5 Things IT Should Know About Video",
"description": "md5:93d32650884b500115e158c5677d25ad",
"uploader_id": "DeN7VdYRDKhP",
},
}
def _real_extract(self, url):
uploader_id, video_id = re.match(self._VALID_URL, url).groups()
video_data = self._download_json(
"http://app.ustudio.com/embed/%s/%s/config.json" % (uploader_id, video_id),
video_id,
)["videos"][0]
title = video_data["name"]
formats = []
for ext, qualities in video_data.get("transcodes", {}).items():
for quality in qualities:
quality_url = quality.get("url")
if not quality_url:
continue
height = int_or_none(quality.get("height"))
formats.append(
{
"format_id": "%s-%dp" % (ext, height) if height else ext,
"url": quality_url,
"width": int_or_none(quality.get("width")),
"height": height,
}
)
self._sort_formats(formats)
thumbnails = []
for image in video_data.get("images", []):
image_url = image.get("url")
if not image_url:
continue
thumbnails.append(
{
"url": image_url,
}
)
return {
"id": video_id,
"title": title,
"description": video_data.get("description"),
"duration": int_or_none(video_data.get("duration")),
"uploader_id": uploader_id,
"tags": video_data.get("keywords"),
"thumbnails": thumbnails,
"formats": formats,
}
|
util | Msgpack | import io
import os
import struct
import msgpack
import msgpack.fallback
def msgpackHeader(size):
if size <= 2**8 - 1:
return b"\xc4" + struct.pack("B", size)
elif size <= 2**16 - 1:
return b"\xc5" + struct.pack(">H", size)
elif size <= 2**32 - 1:
return b"\xc6" + struct.pack(">I", size)
else:
raise ValueError("binary string too large to fit in a msgpack bin header")
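An illustrative sketch (not part of the module) showing that `msgpackHeader` above emits the msgpack bin8/bin16/bin32 type tags (0xc4/0xc5/0xc6) followed by the length at the matching width; the demo function mirrors the logic above:

```python
# Sketch of the msgpack bin-family headers produced by msgpackHeader above.
import struct

def msgpack_header_demo(size):
    if size <= 2**8 - 1:
        return b"\xc4" + struct.pack("B", size)   # bin8: 1-byte length
    elif size <= 2**16 - 1:
        return b"\xc5" + struct.pack(">H", size)  # bin16: 2-byte big-endian length
    elif size <= 2**32 - 1:
        return b"\xc6" + struct.pack(">I", size)  # bin32: 4-byte big-endian length
    raise ValueError("binary string too large")

print(msgpack_header_demo(100))   # b'\xc4d'
print(msgpack_header_demo(1000))  # b'\xc5\x03\xe8'
```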
def stream(data, writer):
packer = msgpack.Packer(use_bin_type=True)
writer(packer.pack_map_header(len(data)))
size = 0 # Stays 0 if no file object was streamed; avoids UnboundLocalError
for key, val in data.items():
writer(packer.pack(key))
if isinstance(val, io.IOBase): # File obj
max_size = os.fstat(val.fileno()).st_size - val.tell()
size = min(max_size, val.read_bytes)
bytes_left = size
writer(msgpackHeader(size))
buff = 1024 * 64
while 1:
writer(val.read(min(bytes_left, buff)))
bytes_left = bytes_left - buff
if bytes_left <= 0:
break
else: # Simple
writer(packer.pack(val))
return size
class FilePart(object):
__slots__ = ("file", "read_bytes", "__class__")
def __init__(self, *args, **kwargs):
self.file = open(*args, **kwargs)
def __getattr__(self, attr):
return getattr(self.file, attr)
def __enter__(self, *args, **kwargs):
return self.file.__enter__(*args, **kwargs)
def __exit__(self, *args, **kwargs):
return self.file.__exit__(*args, **kwargs)
# Don't try to decode the value of these fields as utf8
bin_value_keys = (
"hashfield_raw",
"peers",
"peers_ipv6",
"peers_onion",
"body",
"sites",
"bin",
)
def objectDecoderHook(obj):
global bin_value_keys
back = {}
for key, val in obj:
if type(key) is bytes:
key = key.decode("utf8")
if key in bin_value_keys or type(val) is not bytes or len(key) >= 64:
back[key] = val
else:
back[key] = val.decode("utf8")
return back
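An illustrative sketch (not part of the module) of the decode policy in `objectDecoderHook` above: keys always become `str`, and values are decoded to `str` unless the key names a known binary field (the key list here is trimmed to `"body"` for brevity) or the key is suspiciously long:

```python
# Sketch of the bytes-vs-str decode policy in objectDecoderHook above.
BIN_VALUE_KEYS = ("body",)  # trimmed stand-in for the real bin_value_keys

def decode_pairs_demo(pairs):
    out = {}
    for key, val in pairs:
        if isinstance(key, bytes):
            key = key.decode("utf8")
        if key in BIN_VALUE_KEYS or not isinstance(val, bytes) or len(key) >= 64:
            out[key] = val  # Leave known-binary values (and non-bytes) untouched
        else:
            out[key] = val.decode("utf8")
    return out

print(decode_pairs_demo([(b"title", b"hello"), (b"body", b"\x00raw")]))
# → {'title': 'hello', 'body': b'\x00raw'}
```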
def getUnpacker(fallback=False, decode=True):
if fallback: # Pure Python
unpacker = msgpack.fallback.Unpacker
else:
unpacker = msgpack.Unpacker
extra_kwargs = {"max_buffer_size": 5 * 1024 * 1024}
if msgpack.version[0] >= 1:
extra_kwargs["strict_map_key"] = False
if decode: # Workaround for backward compatibility: Try to decode bin to str
unpacker = unpacker(
raw=True, object_pairs_hook=objectDecoderHook, **extra_kwargs
)
else:
unpacker = unpacker(raw=False, **extra_kwargs)
return unpacker
def pack(data, use_bin_type=True):
return msgpack.packb(data, use_bin_type=use_bin_type)
def unpack(data, decode=True):
unpacker = getUnpacker(decode=decode)
unpacker.feed(data)
return next(unpacker)
|
QT | WBG_Training | from Code.QT import Colocacion, Controles, Iconos, QTUtil2, QTVarios
from PyQt4 import QtGui
class TConf(QtGui.QWidget):
def __init__(self, dicValoracion, dicVentaja):
QtGui.QWidget.__init__(self)
lyVal = Colocacion.V()
self.chbVal = Controles.CHB(self, _("All/None"), True).capturaCambiado(
self, self.capturaVal
)
lyVal.control(self.chbVal)
lyVal.espacio(15)
self.liVal = []
for k, (txt, ico) in dicValoracion.iteritems():
lb = Controles.LB(self).ponImagen(ico.pixmap(16, 16))
chb = Controles.CHB(self, txt, True)
chb.id = k
lyW = Colocacion.H().control(lb).control(chb).relleno()
lyVal.otro(lyW)
self.liVal.append(chb)
lyVal.relleno()
gbVal = Controles.GB(self, _("Rating"), lyVal)
lyVen = Colocacion.V()
self.chbVen = Controles.CHB(self, _("All/None"), True).capturaCambiado(
self, self.capturaVen
)
lyVen.control(self.chbVen)
lyVen.espacio(15)
self.liVen = []
for k, (txt, ico) in dicVentaja.iteritems():
lb = Controles.LB(self).ponImagen(ico.pixmap(16, 16))
chb = Controles.CHB(self, txt, True)
chb.id = k
lyW = Colocacion.H().control(lb).control(chb).relleno()
lyVen.otro(lyW)
self.liVen.append(chb)
lyVen.relleno()
gbVen = Controles.GB(self, _("Advantage"), lyVen)
layout = Colocacion.H().relleno().control(gbVal).control(gbVen).relleno()
self.setLayout(layout)
def capturaVal(self):
ok = self.chbVal.valor()
for chb in self.liVal:
chb.ponValor(ok)
def capturaVen(self):
ok = self.chbVen.valor()
for chb in self.liVen:
chb.ponValor(ok)
def resultado(self):
reVal = []
for chb in self.liVal:
if not chb.valor():
reVal.append(chb.id)
reVen = []
for chb in self.liVen:
if not chb.valor():
reVen.append(chb.id)
return reVal, reVen
class WTraining(QTVarios.WDialogo):
def __init__(self, wParent, dicValoracion, dicVentaja):
icono = Iconos.TutorialesCrear()
extparam = "trainingMyOwnBook"
titulo = _("Create a training")
QTVarios.WDialogo.__init__(self, wParent, titulo, icono, extparam)
tb = QTUtil2.tbAcceptCancel(self)
lbName = Controles.LB(self, _("Name") + ": ")
self.name = Controles.ED(self).anchoFijo(200)
lyName = Colocacion.H().relleno().control(lbName).control(self.name).relleno()
self.chbAddSTD = Controles.CHB(
self, _("Add a label with the standard opening"), True
)
lyNameSTD = Colocacion.V().otro(lyName).control(self.chbAddSTD).relleno()
self.sbDepth, lbDepth = QTUtil2.spinBoxLB(
self, 30, 2, 999, etiqueta=_("Depth"), maxTam=48
)
lyDepth = (
Colocacion.H().relleno().control(lbDepth).control(self.sbDepth).relleno()
)
self.rbBlancas = Controles.RB(self, _("White")).activa()
self.rbNegras = Controles.RB(self, _("Black"))
hbox = Colocacion.H().control(self.rbBlancas).control(self.rbNegras)
gbColor = Controles.GB(self, _("Play with"), hbox)
self.tcWhite = TConf(dicValoracion, dicVentaja)
ly = Colocacion.H().control(self.tcWhite)
gbWhite = Controles.GB(self, "", ly)
gbWhite.setStyleSheet("QWidget { background-color: %s }" % "Ivory")
self.tcBlack = TConf(dicValoracion, dicVentaja)
ly = Colocacion.H().control(self.tcBlack)
gbBlack = Controles.GB(self, "", ly)
gbBlack.setStyleSheet("QWidget { background-color: %s }" % "Lavender")
tab = Controles.Tab()
tab.nuevaTab(gbWhite, _("Conditions to White"))
tab.nuevaTab(gbBlack, _("Conditions to Black"))
lyHead = (
Colocacion.H()
.otro(lyNameSTD)
.espacio(15)
.otro(lyDepth)
.espacio(15)
.control(gbColor)
)
layout = Colocacion.V().control(tb).otro(lyHead).control(tab).margen(3)
self.setLayout(layout)
self.recuperarVideo()
self.siAceptado = False
def resultado(self):
dic = {}
dic["WHITE"] = self.tcWhite.resultado()
dic["BLACK"] = self.tcBlack.resultado()
dic["SIWHITE"] = self.rbBlancas.isChecked()
dic["NAME"] = self.name.texto().strip()
dic["DEPTH"] = self.sbDepth.valor()
dic["ADDSTD"] = self.chbAddSTD.valor()
return dic
def aceptar(self):
if not self.name.texto().strip():
QTUtil2.mensError(self, _("Name is missing"))
return
self.siAceptado = True
self.accept()
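`TConf.resultado()` returns, for each group, the ids of the boxes that are *not* checked. A minimal non-Qt sketch of that filter (all names hypothetical):

```python
# Hypothetical sketch of the TConf.resultado() filter above: collect the
# id of every unchecked box, independent of the Qt checkbox widgets.
def unchecked_ids(boxes):
    return [box_id for box_id, checked in boxes if not checked]

print(unchecked_ids([("good", True), ("bad", False), ("dubious", False)]))
# ['bad', 'dubious']
```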
|
web | form |
"""
HTML forms
(part of web.py)
"""
import copy
import re
import net
import utils
import webapi as web
def attrget(obj, attr, value=None):
try:
if hasattr(obj, 'has_key') and obj.has_key(attr):
return obj[attr]
except TypeError:
# Handle the case where has_key takes different number of arguments.
# This is the case with Model objects on appengine. See #134
pass
if hasattr(obj, attr):
return getattr(obj, attr)
return value
class Form(object):
r"""
HTML form.
>>> f = Form(Textbox("x"))
>>> f.render()
u'<table>\n <tr><th><label for="x">x</label></th><td><input type="text" id="x" name="x"/></td></tr>\n</table>'
"""
def __init__(self, *inputs, **kw):
self.inputs = inputs
self.valid = True
self.note = None
self.validators = kw.pop('validators', [])
def __call__(self, x=None):
o = copy.deepcopy(self)
if x: o.validates(x)
return o
def render(self):
out = ''
out += self.rendernote(self.note)
out += '<table>\n'
for i in self.inputs:
html = utils.safeunicode(i.pre) + i.render() + self.rendernote(i.note) + utils.safeunicode(i.post)
if i.is_hidden():
out += ' <tr style="display: none;"><th></th><td>%s</td></tr>\n' % (html)
else:
out += ' <tr><th><label for="%s">%s</label></th><td>%s</td></tr>\n' % (i.id, net.websafe(i.description), html)
out += "</table>"
return out
def render_css(self):
out = []
out.append(self.rendernote(self.note))
for i in self.inputs:
if not i.is_hidden():
out.append('<label for="%s">%s</label>' % (i.id, net.websafe(i.description)))
out.append(i.pre)
out.append(i.render())
out.append(self.rendernote(i.note))
out.append(i.post)
out.append('\n')
return ''.join(out)
def rendernote(self, note):
if note: return '<strong class="wrong">%s</strong>' % net.websafe(note)
else: return ""
def validates(self, source=None, _validate=True, **kw):
source = source or kw or web.input()
out = True
for i in self.inputs:
v = attrget(source, i.name)
if _validate:
out = i.validate(v) and out
else:
i.set_value(v)
if _validate:
out = out and self._validate(source)
self.valid = out
return out
def _validate(self, value):
self.value = value
for v in self.validators:
if not v.valid(value):
self.note = v.msg
return False
return True
def fill(self, source=None, **kw):
return self.validates(source, _validate=False, **kw)
def __getitem__(self, i):
for x in self.inputs:
if x.name == i: return x
raise KeyError, i
def __getattr__(self, name):
# don't interfere with deepcopy
inputs = self.__dict__.get('inputs') or []
for x in inputs:
if x.name == name: return x
raise AttributeError, name
def get(self, i, default=None):
try:
return self[i]
except KeyError:
return default
def _get_d(self): #@@ should really be form.attr, no?
return utils.storage([(i.name, i.get_value()) for i in self.inputs])
d = property(_get_d)
class Input(object):
def __init__(self, name, *validators, **attrs):
self.name = name
self.validators = validators
self.attrs = attrs = AttributeList(attrs)
self.description = attrs.pop('description', name)
self.value = attrs.pop('value', None)
self.pre = attrs.pop('pre', "")
self.post = attrs.pop('post', "")
self.note = None
self.id = attrs.setdefault('id', self.get_default_id())
if 'class_' in attrs:
attrs['class'] = attrs['class_']
del attrs['class_']
def is_hidden(self):
return False
def get_type(self):
raise NotImplementedError
def get_default_id(self):
return self.name
def validate(self, value):
self.set_value(value)
for v in self.validators:
if not v.valid(value):
self.note = v.msg
return False
return True
def set_value(self, value):
self.value = value
def get_value(self):
return self.value
def render(self):
attrs = self.attrs.copy()
attrs['type'] = self.get_type()
if self.value is not None:
attrs['value'] = self.value
attrs['name'] = self.name
return '<input %s/>' % attrs
def rendernote(self, note):
if note: return '<strong class="wrong">%s</strong>' % net.websafe(note)
else: return ""
def addatts(self):
# add leading space for backward-compatibility
return " " + str(self.attrs)
class AttributeList(dict):
"""List of atributes of input.
>>> a = AttributeList(type='text', name='x', value=20)
>>> a
<attrs: 'type="text" name="x" value="20"'>
"""
def copy(self):
return AttributeList(self)
def __str__(self):
return " ".join(['%s="%s"' % (k, net.websafe(v)) for k, v in self.items()])
def __repr__(self):
return '<attrs: %s>' % repr(str(self))
class Textbox(Input):
"""Textbox input.
>>> Textbox(name='foo', value='bar').render()
u'<input type="text" id="foo" value="bar" name="foo"/>'
>>> Textbox(name='foo', value=0).render()
u'<input type="text" id="foo" value="0" name="foo"/>'
"""
def get_type(self):
return 'text'
class Password(Input):
"""Password input.
>>> Password(name='password', value='secret').render()
u'<input type="password" id="password" value="secret" name="password"/>'
"""
def get_type(self):
return 'password'
class Textarea(Input):
"""Textarea input.
>>> Textarea(name='foo', value='bar').render()
u'<textarea id="foo" name="foo">bar</textarea>'
"""
def render(self):
attrs = self.attrs.copy()
attrs['name'] = self.name
value = net.websafe(self.value or '')
return '<textarea %s>%s</textarea>' % (attrs, value)
class Dropdown(Input):
r"""Dropdown/select input.
>>> Dropdown(name='foo', args=['a', 'b', 'c'], value='b').render()
u'<select id="foo" name="foo">\n <option value="a">a</option>\n <option selected="selected" value="b">b</option>\n <option value="c">c</option>\n</select>\n'
>>> Dropdown(name='foo', args=[('a', 'aa'), ('b', 'bb'), ('c', 'cc')], value='b').render()
u'<select id="foo" name="foo">\n <option value="a">aa</option>\n <option selected="selected" value="b">bb</option>\n <option value="c">cc</option>\n</select>\n'
"""
def __init__(self, name, args, *validators, **attrs):
self.args = args
super(Dropdown, self).__init__(name, *validators, **attrs)
def render(self):
attrs = self.attrs.copy()
attrs['name'] = self.name
x = '<select %s>\n' % attrs
for arg in self.args:
x += self._render_option(arg)
x += '</select>\n'
return x
def _render_option(self, arg, indent=' '):
if isinstance(arg, (tuple, list)):
value, desc= arg
else:
value, desc = arg, arg
if self.value == value or (isinstance(self.value, list) and value in self.value):
select_p = ' selected="selected"'
else:
select_p = ''
return indent + '<option%s value="%s">%s</option>\n' % (select_p, net.websafe(value), net.websafe(desc))
class GroupedDropdown(Dropdown):
r"""Grouped Dropdown/select input.
>>> GroupedDropdown(name='car_type', args=(('Swedish Cars', ('Volvo', 'Saab')), ('German Cars', ('Mercedes', 'Audi'))), value='Audi').render()
u'<select id="car_type" name="car_type">\n <optgroup label="Swedish Cars">\n <option value="Volvo">Volvo</option>\n <option value="Saab">Saab</option>\n </optgroup>\n <optgroup label="German Cars">\n <option value="Mercedes">Mercedes</option>\n <option selected="selected" value="Audi">Audi</option>\n </optgroup>\n</select>\n'
>>> GroupedDropdown(name='car_type', args=(('Swedish Cars', (('v', 'Volvo'), ('s', 'Saab'))), ('German Cars', (('m', 'Mercedes'), ('a', 'Audi')))), value='a').render()
u'<select id="car_type" name="car_type">\n <optgroup label="Swedish Cars">\n <option value="v">Volvo</option>\n <option value="s">Saab</option>\n </optgroup>\n <optgroup label="German Cars">\n <option value="m">Mercedes</option>\n <option selected="selected" value="a">Audi</option>\n </optgroup>\n</select>\n'
"""
def __init__(self, name, args, *validators, **attrs):
self.args = args
super(Dropdown, self).__init__(name, *validators, **attrs)
def render(self):
attrs = self.attrs.copy()
attrs['name'] = self.name
x = '<select %s>\n' % attrs
for label, options in self.args:
x += ' <optgroup label="%s">\n' % net.websafe(label)
for arg in options:
x += self._render_option(arg, indent = ' ')
x += ' </optgroup>\n'
x += '</select>\n'
return x
class Radio(Input):
def __init__(self, name, args, *validators, **attrs):
self.args = args
super(Radio, self).__init__(name, *validators, **attrs)
def render(self):
x = '<span>'
for arg in self.args:
if isinstance(arg, (tuple, list)):
value, desc= arg
else:
value, desc = arg, arg
attrs = self.attrs.copy()
attrs['name'] = self.name
attrs['type'] = 'radio'
attrs['value'] = value
if self.value == value:
attrs['checked'] = 'checked'
x += '<input %s/> %s' % (attrs, net.websafe(desc))
x += '</span>'
return x
class Checkbox(Input):
"""Checkbox input.
>>> Checkbox('foo', value='bar', checked=True).render()
u'<input checked="checked" type="checkbox" id="foo_bar" value="bar" name="foo"/>'
>>> Checkbox('foo', value='bar').render()
u'<input type="checkbox" id="foo_bar" value="bar" name="foo"/>'
>>> c = Checkbox('foo', value='bar')
>>> c.validate('on')
True
>>> c.render()
u'<input checked="checked" type="checkbox" id="foo_bar" value="bar" name="foo"/>'
"""
def __init__(self, name, *validators, **attrs):
self.checked = attrs.pop('checked', False)
Input.__init__(self, name, *validators, **attrs)
def get_default_id(self):
value = utils.safestr(self.value or "")
return self.name + '_' + value.replace(' ', '_')
def render(self):
attrs = self.attrs.copy()
attrs['type'] = 'checkbox'
attrs['name'] = self.name
attrs['value'] = self.value
if self.checked:
attrs['checked'] = 'checked'
return '<input %s/>' % attrs
def set_value(self, value):
self.checked = bool(value)
def get_value(self):
return self.checked
class Button(Input):
"""HTML Button.
>>> Button("save").render()
u'<button id="save" name="save">save</button>'
>>> Button("action", value="save", html="<b>Save Changes</b>").render()
u'<button id="action" value="save" name="action"><b>Save Changes</b></button>'
"""
def __init__(self, name, *validators, **attrs):
super(Button, self).__init__(name, *validators, **attrs)
self.description = ""
def render(self):
attrs = self.attrs.copy()
attrs['name'] = self.name
if self.value is not None:
attrs['value'] = self.value
html = attrs.pop('html', None) or net.websafe(self.name)
return '<button %s>%s</button>' % (attrs, html)
class Hidden(Input):
"""Hidden Input.
>>> Hidden(name='foo', value='bar').render()
u'<input type="hidden" id="foo" value="bar" name="foo"/>'
"""
def is_hidden(self):
return True
def get_type(self):
return 'hidden'
class File(Input):
"""File input.
>>> File(name='f').render()
u'<input type="file" id="f" name="f"/>'
"""
def get_type(self):
return 'file'
class Validator:
def __deepcopy__(self, memo): return copy.copy(self)
def __init__(self, msg, test, jstest=None): utils.autoassign(self, locals())
def valid(self, value):
try: return self.test(value)
except: return False
notnull = Validator("Required", bool)
class regexp(Validator):
def __init__(self, rexp, msg):
self.rexp = re.compile(rexp)
self.msg = msg
def valid(self, value):
return bool(self.rexp.match(value))
if __name__ == "__main__":
import doctest
doctest.testmod()
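The `Validator`/`regexp` pair above defines the whole validation contract: `valid()` returns a bool and never raises. A standalone sketch under the same contract (re-implemented here, not imported from web.py; `SketchValidator` is a hypothetical name):

```python
import re

# Standalone sketch of the Validator contract above: valid() coerces the
# test result to bool and swallows exceptions instead of raising.
class SketchValidator:
    def __init__(self, msg, test):
        self.msg = msg
        self.test = test

    def valid(self, value):
        try:
            return bool(self.test(value))
        except Exception:
            return False

notnull = SketchValidator("Required", bool)
zip_code = SketchValidator("Must be 5 digits", re.compile(r"^\d{5}$").match)

print(notnull.valid("x"), notnull.valid(""))           # True False
print(zip_code.valid("02139"), zip_code.valid("abc"))  # True False
print(zip_code.valid(None))  # re raises on None; swallowed -> False
```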
|
femtaskpanels | task_constraint_initialflowvelocity |
# ***************************************************************************
# * Copyright (c) 2017 Markus Hovorka <m.hovorka@live.de> *
# * Copyright (c) 2020 Bernd Hahnebach <bernd@bimstatik.org> *
# * *
# * This file is part of the FreeCAD CAx development system. *
# * *
# * This program is free software; you can redistribute it and/or modify *
# * it under the terms of the GNU Lesser General Public License (LGPL) *
# * as published by the Free Software Foundation; either version 2 of *
# * the License, or (at your option) any later version. *
# * for detail see the LICENCE text file. *
# * *
# * This program is distributed in the hope that it will be useful, *
# * but WITHOUT ANY WARRANTY; without even the implied warranty of *
# * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
# * GNU Library General Public License for more details. *
# * *
# * You should have received a copy of the GNU Library General Public *
# * License along with this program; if not, write to the Free Software *
# * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 *
# * USA *
# * *
# ***************************************************************************
__title__ = (
"FreeCAD FEM constraint initial flow velocity task panel for the document object"
)
__author__ = "Markus Hovorka, Bernd Hahnebach"
__url__ = "https://www.freecad.org"
## @package task_constraint_initialflowvelocity
# \ingroup FEM
# \brief task panel for constraint initial flow velocity object
import FreeCAD
import FreeCADGui
from femguiutils import selection_widgets
from femtools import femutils, membertools
from PySide import QtCore
class _TaskPanel(object):
def __init__(self, obj):
self._obj = obj
self._paramWidget = FreeCADGui.PySideUic.loadUi(
FreeCAD.getHomePath() + "Mod/Fem/Resources/ui/InitialFlowVelocity.ui"
)
# geometry selection widget
# start with Solid in list!
self._selectionWidget = selection_widgets.GeometryElementsSelection(
obj.References, ["Solid", "Face"], True, False
)
# form made from param and selection widget
self.form = [self._paramWidget, self._selectionWidget]
analysis = obj.getParentGroup()
self._mesh = None
self._part = None
if analysis is not None:
self._mesh = membertools.get_single_member(analysis, "Fem::FemMeshObject")
if self._mesh is not None:
self._part = femutils.get_part_to_mesh(self._mesh)
self._partVisible = None
self._meshVisible = None
# connect unspecified option
QtCore.QObject.connect(
self._paramWidget.velocityXBox,
QtCore.SIGNAL("toggled(bool)"),
self._velocityXEnable,
)
QtCore.QObject.connect(
self._paramWidget.velocityYBox,
QtCore.SIGNAL("toggled(bool)"),
self._velocityYEnable,
)
QtCore.QObject.connect(
self._paramWidget.velocityZBox,
QtCore.SIGNAL("toggled(bool)"),
self._velocityZEnable,
)
# connect formula option
QtCore.QObject.connect(
self._paramWidget.formulaXCB,
QtCore.SIGNAL("toggled(bool)"),
self._formulaXEnable,
)
QtCore.QObject.connect(
self._paramWidget.formulaYCB,
QtCore.SIGNAL("toggled(bool)"),
self._formulaYEnable,
)
QtCore.QObject.connect(
self._paramWidget.formulaZCB,
QtCore.SIGNAL("toggled(bool)"),
self._formulaZEnable,
)
self._initParamWidget()
def _velocityXEnable(self, toggled):
if toggled:
self._paramWidget.formulaX.setDisabled(toggled)
self._paramWidget.velocityX.setDisabled(toggled)
else:
if self._paramWidget.formulaXCB.isChecked():
self._paramWidget.formulaX.setDisabled(toggled)
else:
self._paramWidget.velocityX.setDisabled(toggled)
def _velocityYEnable(self, toggled):
if toggled:
self._paramWidget.formulaY.setDisabled(toggled)
self._paramWidget.velocityY.setDisabled(toggled)
else:
if self._paramWidget.formulaYCB.isChecked():
self._paramWidget.formulaY.setDisabled(toggled)
else:
self._paramWidget.velocityY.setDisabled(toggled)
def _velocityZEnable(self, toggled):
if toggled:
self._paramWidget.formulaZ.setDisabled(toggled)
self._paramWidget.velocityZ.setDisabled(toggled)
else:
if self._paramWidget.formulaZCB.isChecked():
self._paramWidget.formulaZ.setDisabled(toggled)
else:
self._paramWidget.velocityZ.setDisabled(toggled)
    def _formulaXEnable(self, toggled):
        if self._paramWidget.velocityXBox.isChecked():
            return
        else:
            self._paramWidget.formulaX.setEnabled(toggled)
            self._paramWidget.velocityX.setDisabled(toggled)
def _formulaYEnable(self, toggled):
if self._paramWidget.velocityYBox.isChecked():
return
else:
self._paramWidget.formulaY.setEnabled(toggled)
self._paramWidget.velocityY.setDisabled(toggled)
def _formulaZEnable(self, toggled):
        if self._paramWidget.velocityZBox.isChecked():
return
else:
self._paramWidget.formulaZ.setEnabled(toggled)
self._paramWidget.velocityZ.setDisabled(toggled)
def open(self):
if self._mesh is not None and self._part is not None:
self._meshVisible = self._mesh.ViewObject.isVisible()
self._partVisible = self._part.ViewObject.isVisible()
self._mesh.ViewObject.hide()
self._part.ViewObject.show()
def reject(self):
FreeCADGui.ActiveDocument.resetEdit()
self._restoreVisibility()
return True
def accept(self):
if self._obj.References != self._selectionWidget.references:
self._obj.References = self._selectionWidget.references
self._applyWidgetChanges()
self._obj.Document.recompute()
FreeCADGui.ActiveDocument.resetEdit()
self._restoreVisibility()
return True
def _restoreVisibility(self):
if self._mesh is not None and self._part is not None:
if self._meshVisible:
self._mesh.ViewObject.show()
else:
self._mesh.ViewObject.hide()
if self._partVisible:
self._part.ViewObject.show()
else:
self._part.ViewObject.hide()
def _initParamWidget(self):
unit = "m/s"
self._paramWidget.velocityX.setProperty("unit", unit)
self._paramWidget.velocityY.setProperty("unit", unit)
self._paramWidget.velocityZ.setProperty("unit", unit)
self._paramWidget.velocityX.setProperty("value", self._obj.VelocityX)
FreeCADGui.ExpressionBinding(self._paramWidget.velocityX).bind(
self._obj, "VelocityX"
)
self._paramWidget.velocityXBox.setChecked(self._obj.VelocityXUnspecified)
self._paramWidget.formulaX.setText(self._obj.VelocityXFormula)
self._paramWidget.formulaXCB.setChecked(self._obj.VelocityXHasFormula)
self._paramWidget.velocityY.setProperty("value", self._obj.VelocityY)
FreeCADGui.ExpressionBinding(self._paramWidget.velocityY).bind(
self._obj, "VelocityY"
)
self._paramWidget.velocityYBox.setChecked(self._obj.VelocityYUnspecified)
self._paramWidget.formulaY.setText(self._obj.VelocityYFormula)
self._paramWidget.formulaYCB.setChecked(self._obj.VelocityYHasFormula)
self._paramWidget.velocityZ.setProperty("value", self._obj.VelocityZ)
FreeCADGui.ExpressionBinding(self._paramWidget.velocityZ).bind(
self._obj, "VelocityZ"
)
self._paramWidget.velocityZBox.setChecked(self._obj.VelocityZUnspecified)
self._paramWidget.formulaZ.setText(self._obj.VelocityZFormula)
self._paramWidget.formulaZCB.setChecked(self._obj.VelocityZHasFormula)
def _applyVelocityChanges(self, enabledBox, velocityQSB):
enabled = enabledBox.isChecked()
velocity = None
try:
velocity = velocityQSB.property("value")
except ValueError:
FreeCAD.Console.PrintMessage(
                "Unrecognised input '{}'. "
                "Velocity has not been set.\n".format(velocityQSB.text())
)
velocity = "0.0 m/s"
return enabled, velocity
def _applyWidgetChanges(self):
# apply the velocities and their enabled state
(
self._obj.VelocityXUnspecified,
self._obj.VelocityX,
) = self._applyVelocityChanges(
self._paramWidget.velocityXBox, self._paramWidget.velocityX
)
self._obj.VelocityXHasFormula = self._paramWidget.formulaXCB.isChecked()
self._obj.VelocityXFormula = self._paramWidget.formulaX.text()
(
self._obj.VelocityYUnspecified,
self._obj.VelocityY,
) = self._applyVelocityChanges(
self._paramWidget.velocityYBox, self._paramWidget.velocityY
)
self._obj.VelocityYHasFormula = self._paramWidget.formulaYCB.isChecked()
self._obj.VelocityYFormula = self._paramWidget.formulaY.text()
(
self._obj.VelocityZUnspecified,
self._obj.VelocityZ,
) = self._applyVelocityChanges(
self._paramWidget.velocityZBox, self._paramWidget.velocityZ
)
self._obj.VelocityZHasFormula = self._paramWidget.formulaZCB.isChecked()
self._obj.VelocityZFormula = self._paramWidget.formulaZ.text()
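The six toggle handlers above implement one rule per axis: while "unspecified" is checked, both input widgets are disabled; otherwise exactly one of the formula/velocity fields is enabled, depending on the formula checkbox. A hypothetical pure-Python sketch of that rule, independent of Qt:

```python
# Hypothetical sketch of the per-axis enable logic implemented by the
# _velocity*Enable/_formula*Enable handlers above.  Returns a tuple
# (velocity_field_enabled, formula_field_enabled).
def axis_widget_states(unspecified, has_formula):
    if unspecified:
        return (False, False)
    if has_formula:
        return (False, True)
    return (True, False)

print(axis_widget_states(True, True))    # (False, False)
print(axis_widget_states(False, True))   # (False, True)
print(axis_widget_states(False, False))  # (True, False)
```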
|
utils | translations |
# -*- coding: utf-8 -*-
"""
flaskbb.utils.translations
~~~~~~~~~~~~~~~~~~~~~~~~~~
This module contains the translation Domain used by FlaskBB.
:copyright: (c) 2016 by the FlaskBB Team.
:license: BSD, see LICENSE for more details.
"""
import logging
import os
import subprocess
import babel
from flask import current_app
from flask_babelplus import Domain, get_locale
from flask_babelplus.utils import get_state
logger = logging.getLogger(__name__)
class FlaskBBDomain(Domain):
def __init__(self, app):
self.app = app
super(FlaskBBDomain, self).__init__()
# Plugin translations
with self.app.app_context():
self.plugin_translations = self.app.pluggy.hook.flaskbb_load_translations()
def get_translations(self):
"""Returns the correct gettext translations that should be used for
this request. This will never fail and return a dummy translation
object if used outside of the request or if a translation cannot be
found.
"""
state = get_state(silent=True)
if state is None:
return babel.support.NullTranslations()
locale = get_locale()
cache = self.get_translations_cache()
translations = cache.get(str(locale))
# load them into the cache
if translations is None:
dirname = self.get_translations_path(state.app)
translations = babel.support.Translations.load(
dirname, locale, domain=self.domain
)
# now load and add the plugin translations
for plugin in self.plugin_translations:
                logger.debug("Loading plugin translation from: {}".format(plugin))
plugin_translation = babel.support.Translations.load(
dirname=plugin, locales=locale, domain="messages"
)
if type(plugin_translation) is not babel.support.NullTranslations:
translations.add(plugin_translation)
self.cache[str(locale)] = translations
return translations
def update_translations(include_plugins=False):
"""Updates all translations.
:param include_plugins: If set to `True` it will also update the
translations for all plugins.
"""
translations_folder = os.path.join(current_app.root_path, "translations")
source_file = os.path.join(translations_folder, "messages.pot")
subprocess.call(
[
"pybabel",
"extract",
"-F",
"babel.cfg",
"-k",
"lazy_gettext",
"-o",
source_file,
".",
]
)
subprocess.call(["pybabel", "update", "-i", source_file, "-d", translations_folder])
if include_plugins:
for plugin in current_app.pluggy.list_name():
update_plugin_translations(plugin)
def add_translations(translation):
"""Adds a new language to the translations.
:param translation: The short name of the translation
like ``en`` or ``de_AT``.
"""
translations_folder = os.path.join(current_app.root_path, "translations")
source_file = os.path.join(translations_folder, "messages.pot")
subprocess.call(
[
"pybabel",
"extract",
"-F",
"babel.cfg",
"-k",
"lazy_gettext",
"-o",
source_file,
".",
]
)
subprocess.call(
[
"pybabel",
"init",
"-i",
source_file,
"-d",
translations_folder,
"-l",
translation,
]
)
def compile_translations(include_plugins=False):
"""Compiles all translations.
:param include_plugins: If set to `True` it will also compile the
translations for all plugins.
"""
translations_folder = os.path.join(current_app.root_path, "translations")
subprocess.call(["pybabel", "compile", "-d", translations_folder])
if include_plugins:
for plugin in current_app.pluggy.list_name():
compile_plugin_translations(plugin)
def add_plugin_translations(plugin, translation):
"""Adds a new language to the plugin translations.
:param plugin: The plugins identifier.
:param translation: The short name of the translation
like ``en`` or ``de_AT``.
"""
plugin_folder = current_app.pluggy.get_plugin(plugin).__path__[0]
translations_folder = os.path.join(plugin_folder, "translations")
source_file = os.path.join(translations_folder, "messages.pot")
subprocess.call(
[
"pybabel",
"extract",
"-F",
"babel.cfg",
"-k",
"lazy_gettext",
"-o",
source_file,
plugin_folder,
]
)
subprocess.call(
[
"pybabel",
"init",
"-i",
source_file,
"-d",
translations_folder,
"-l",
translation,
]
)
def update_plugin_translations(plugin):
"""Updates the plugin translations.
Returns ``False`` if no translations for this plugin exists.
:param plugin: The plugins identifier
"""
plugin_folder = current_app.pluggy.get_plugin(plugin).__path__[0]
translations_folder = os.path.join(plugin_folder, "translations")
source_file = os.path.join(translations_folder, "messages.pot")
if not os.path.exists(source_file):
return False
subprocess.call(
[
"pybabel",
"extract",
"-F",
"babel.cfg",
"-k",
"lazy_gettext",
"-o",
source_file,
plugin_folder,
]
)
subprocess.call(["pybabel", "update", "-i", source_file, "-d", translations_folder])
def compile_plugin_translations(plugin):
"""Compile the plugin translations.
Returns ``False`` if no translations for this plugin exists.
:param plugin: The plugins identifier
"""
plugin_folder = current_app.pluggy.get_plugin(plugin).__path__[0]
translations_folder = os.path.join(plugin_folder, "translations")
if not os.path.exists(translations_folder):
return False
subprocess.call(["pybabel", "compile", "-d", translations_folder])
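`FlaskBBDomain.get_translations()` above follows a load-once-per-locale cache pattern: on a miss it loads the base catalog, merges in each plugin catalog, and stores the result. A minimal sketch with plain dicts standing in for Babel `Translations` objects (all names hypothetical):

```python
# Hypothetical sketch of the per-locale cache pattern in
# FlaskBBDomain.get_translations(): load the base catalog once, merge the
# plugin catalogs into it, then serve later requests from the cache.
def get_cached_translations(cache, locale, load_base, load_plugins):
    translations = cache.get(locale)
    if translations is None:
        translations = load_base(locale)
        for plugin_catalog in load_plugins(locale):
            if plugin_catalog is not None:  # skip NullTranslations-like results
                translations.update(plugin_catalog)  # stands in for Translations.add()
        cache[locale] = translations
    return translations

cache = {}
load_base = lambda loc: {"Hello": "Hallo"}
load_plugins = lambda loc: [{"Forum": "Forum"}, None]
print(get_cached_translations(cache, "de", load_base, load_plugins))
# {'Hello': 'Hallo', 'Forum': 'Forum'}
```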
|
senf | _winapi |
# -*- coding: utf-8 -*-
# Copyright 2016 Christoph Reiter
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import ctypes
import sys
if sys.platform == "win32":
from ctypes import CDLL, WinDLL, wintypes
shell32 = WinDLL("shell32")
kernel32 = WinDLL("kernel32")
shlwapi = WinDLL("shlwapi")
msvcrt = CDLL("msvcrt")
GetCommandLineW = kernel32.GetCommandLineW
GetCommandLineW.argtypes = []
GetCommandLineW.restype = wintypes.LPCWSTR
CommandLineToArgvW = shell32.CommandLineToArgvW
CommandLineToArgvW.argtypes = [wintypes.LPCWSTR, ctypes.POINTER(ctypes.c_int)]
CommandLineToArgvW.restype = ctypes.POINTER(wintypes.LPWSTR)
LocalFree = kernel32.LocalFree
LocalFree.argtypes = [wintypes.HLOCAL]
LocalFree.restype = wintypes.HLOCAL
# https://msdn.microsoft.com/en-us/library/windows/desktop/aa383751.aspx
LPCTSTR = ctypes.c_wchar_p
LPWSTR = wintypes.LPWSTR
LPCWSTR = ctypes.c_wchar_p
LPTSTR = LPWSTR
PCWSTR = ctypes.c_wchar_p
PCTSTR = PCWSTR
PWSTR = ctypes.c_wchar_p
PTSTR = PWSTR
LPVOID = wintypes.LPVOID
WCHAR = wintypes.WCHAR
LPSTR = ctypes.c_char_p
BOOL = wintypes.BOOL
LPBOOL = ctypes.POINTER(BOOL)
UINT = wintypes.UINT
WORD = wintypes.WORD
DWORD = wintypes.DWORD
SHORT = wintypes.SHORT
HANDLE = wintypes.HANDLE
ULONG = wintypes.ULONG
LPCSTR = wintypes.LPCSTR
STD_INPUT_HANDLE = DWORD(-10)
STD_OUTPUT_HANDLE = DWORD(-11)
STD_ERROR_HANDLE = DWORD(-12)
INVALID_HANDLE_VALUE = wintypes.HANDLE(-1).value
INTERNET_MAX_SCHEME_LENGTH = 32
INTERNET_MAX_PATH_LENGTH = 2048
INTERNET_MAX_URL_LENGTH = (
INTERNET_MAX_SCHEME_LENGTH + len("://") + INTERNET_MAX_PATH_LENGTH
)
FOREGROUND_BLUE = 0x0001
FOREGROUND_GREEN = 0x0002
FOREGROUND_RED = 0x0004
FOREGROUND_INTENSITY = 0x0008
BACKGROUND_BLUE = 0x0010
BACKGROUND_GREEN = 0x0020
BACKGROUND_RED = 0x0040
BACKGROUND_INTENSITY = 0x0080
COMMON_LVB_REVERSE_VIDEO = 0x4000
COMMON_LVB_UNDERSCORE = 0x8000
UrlCreateFromPathW = shlwapi.UrlCreateFromPathW
UrlCreateFromPathW.argtypes = [PCTSTR, PTSTR, ctypes.POINTER(DWORD), DWORD]
UrlCreateFromPathW.restype = ctypes.HRESULT
SetEnvironmentVariableW = kernel32.SetEnvironmentVariableW
SetEnvironmentVariableW.argtypes = [LPCTSTR, LPCTSTR]
SetEnvironmentVariableW.restype = wintypes.BOOL
GetEnvironmentVariableW = kernel32.GetEnvironmentVariableW
GetEnvironmentVariableW.argtypes = [LPCTSTR, LPTSTR, DWORD]
GetEnvironmentVariableW.restype = DWORD
GetEnvironmentStringsW = kernel32.GetEnvironmentStringsW
GetEnvironmentStringsW.argtypes = []
GetEnvironmentStringsW.restype = ctypes.c_void_p
FreeEnvironmentStringsW = kernel32.FreeEnvironmentStringsW
FreeEnvironmentStringsW.argtypes = [ctypes.c_void_p]
FreeEnvironmentStringsW.restype = ctypes.c_bool
GetStdHandle = kernel32.GetStdHandle
GetStdHandle.argtypes = [DWORD]
GetStdHandle.restype = HANDLE
class COORD(ctypes.Structure):
_fields_ = [
("X", SHORT),
("Y", SHORT),
]
class SMALL_RECT(ctypes.Structure):
_fields_ = [
("Left", SHORT),
("Top", SHORT),
("Right", SHORT),
("Bottom", SHORT),
]
class CONSOLE_SCREEN_BUFFER_INFO(ctypes.Structure):
_fields_ = [
("dwSize", COORD),
("dwCursorPosition", COORD),
("wAttributes", WORD),
("srWindow", SMALL_RECT),
("dwMaximumWindowSize", COORD),
]
GetConsoleScreenBufferInfo = kernel32.GetConsoleScreenBufferInfo
GetConsoleScreenBufferInfo.argtypes = [
HANDLE,
ctypes.POINTER(CONSOLE_SCREEN_BUFFER_INFO),
]
GetConsoleScreenBufferInfo.restype = BOOL
GetConsoleOutputCP = kernel32.GetConsoleOutputCP
GetConsoleOutputCP.argtypes = []
GetConsoleOutputCP.restype = UINT
SetConsoleOutputCP = kernel32.SetConsoleOutputCP
SetConsoleOutputCP.argtypes = [UINT]
SetConsoleOutputCP.restype = BOOL
GetConsoleCP = kernel32.GetConsoleCP
GetConsoleCP.argtypes = []
GetConsoleCP.restype = UINT
SetConsoleCP = kernel32.SetConsoleCP
SetConsoleCP.argtypes = [UINT]
SetConsoleCP.restype = BOOL
SetConsoleTextAttribute = kernel32.SetConsoleTextAttribute
SetConsoleTextAttribute.argtypes = [HANDLE, WORD]
SetConsoleTextAttribute.restype = BOOL
SetConsoleCursorPosition = kernel32.SetConsoleCursorPosition
SetConsoleCursorPosition.argtypes = [HANDLE, COORD]
SetConsoleCursorPosition.restype = BOOL
ReadConsoleW = kernel32.ReadConsoleW
ReadConsoleW.argtypes = [HANDLE, LPVOID, DWORD, ctypes.POINTER(DWORD), LPVOID]
ReadConsoleW.restype = BOOL
MultiByteToWideChar = kernel32.MultiByteToWideChar
MultiByteToWideChar.argtypes = [
UINT,
DWORD,
LPCSTR,
ctypes.c_int,
LPWSTR,
ctypes.c_int,
]
MultiByteToWideChar.restype = ctypes.c_int
WideCharToMultiByte = kernel32.WideCharToMultiByte
WideCharToMultiByte.argtypes = [
UINT,
DWORD,
LPCWSTR,
ctypes.c_int,
LPSTR,
ctypes.c_int,
LPCSTR,
LPBOOL,
]
WideCharToMultiByte.restype = ctypes.c_int
MoveFileW = kernel32.MoveFileW
MoveFileW.argtypes = [LPCTSTR, LPCTSTR]
MoveFileW.restype = BOOL
GetFileInformationByHandleEx = None
if hasattr(kernel32, "GetFileInformationByHandleEx"):
GetFileInformationByHandleEx = kernel32.GetFileInformationByHandleEx
GetFileInformationByHandleEx.argtypes = [
HANDLE,
ctypes.c_int,
ctypes.c_void_p,
DWORD,
]
GetFileInformationByHandleEx.restype = BOOL
else:
# GetFileInformationByHandleEx is unavailable on Windows XP; leave it as None.
pass
MAX_PATH = 260
FileNameInfo = 2
class FILE_NAME_INFO(ctypes.Structure):
_fields_ = [
("FileNameLength", DWORD),
("FileName", WCHAR),
]
_get_osfhandle = msvcrt._get_osfhandle
_get_osfhandle.argtypes = [ctypes.c_int]
_get_osfhandle.restype = HANDLE
GetFileType = kernel32.GetFileType
GetFileType.argtypes = [HANDLE]
GetFileType.restype = DWORD
FILE_TYPE_PIPE = 0x0003
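The `argtypes`/`restype` declarations above tell ctypes how to marshal each call. The `Structure` machinery can be exercised portably; this is only a layout sketch mirroring `COORD`, not a Windows API call:

```python
import ctypes

class COORD(ctypes.Structure):
    # Mirror of the COORD struct above: two 16-bit SHORT fields.
    _fields_ = [("X", ctypes.c_short), ("Y", ctypes.c_short)]

pos = COORD(3, 4)
# sizeof is 4 bytes: two c_short fields, no padding required
size = ctypes.sizeof(COORD)
```

Declaring `argtypes` up front this way lets ctypes raise early on wrong argument types instead of silently corrupting the call.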
|
util | environment | # Copyright 2015 Christoph Reiter
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
"""Various functions for figuring out which platform we are running on
and under which environment.
"""
import configparser
import ctypes
import fnmatch
import os
import sys
from gi.repository import Gio, GLib
def _dbus_name_owned(name):
"""Returns True if the dbus name has an owner"""
if not is_linux():
return False
return dbus_name_owned(name)
def is_flatpak():
"""If we are running in a flatpak"""
return is_linux() and os.path.exists("/.flatpak-info")
def matches_flatpak_runtime(pattern: str) -> bool:
"""Pass a fnmatch pattern for matching the flatpak runtime ID"""
if not is_linux():
return False
config = configparser.ConfigParser()
try:
with open("/.flatpak-info", "r", encoding="utf-8") as f:
config.read_file(f)
runtime = config.get("Application", "runtime")
except (OSError, configparser.Error):
return False
return fnmatch.fnmatchcase(runtime, pattern)
def is_plasma():
"""If we are running under plasma"""
return _dbus_name_owned("org.kde.plasmashell")
def is_unity():
"""If we are running under Ubuntu/Unity"""
return _dbus_name_owned("com.canonical.Unity.Launcher")
def is_enlightenment():
"""If we are running under Enlightenment"""
return _dbus_name_owned("org.enlightenment.wm.service")
def is_linux():
"""If we are on Linux (or similar)"""
return not is_windows() and not is_osx()
def is_windows():
"""If we are running under Windows or Wine"""
return os.name == "nt"
def is_wine():
"""If we are running under Wine"""
if not is_windows():
return False
try:
ctypes.cdll.ntdll.wine_get_version
except AttributeError:
return False
else:
return True
def is_osx():
"""If we are running under OS X"""
return sys.platform == "darwin"
def dbus_name_owned(name):
"""Returns True if the dbus name has an owner"""
BUS_DAEMON_NAME = "org.freedesktop.DBus"
BUS_DAEMON_PATH = "/org/freedesktop/DBus"
BUS_DAEMON_IFACE = "org.freedesktop.DBus"
try:
bus = Gio.DBusProxy.new_for_bus_sync(
Gio.BusType.SESSION,
Gio.DBusProxyFlags.NONE,
None,
BUS_DAEMON_NAME,
BUS_DAEMON_PATH,
BUS_DAEMON_IFACE,
None,
)
return bus.NameHasOwner("(s)", name)
except GLib.Error:
return False
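A minimal, stdlib-only sketch of the glob matching `matches_flatpak_runtime` performs once the runtime ID has been read (`runtime_matches` is an illustrative helper, not part of this module):

```python
import fnmatch

def runtime_matches(runtime_id, pattern):
    # Shell-style, case-sensitive glob match, as used against the
    # "runtime" value read from /.flatpak-info.
    return fnmatch.fnmatchcase(runtime_id, pattern)
```

Note that `fnmatch`'s `*` also matches `/`, so a pattern like `org.gnome.Platform/*` covers the full `name/arch/branch` triple.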
|
utils | blockutils | # SPDX-FileCopyrightText: Florian Bruhin (The Compiler) <mail@qutebrowser.org>
#
# SPDX-License-Identifier: GPL-3.0-or-later
"""Code that is shared between the host blocker and Brave ad blocker."""
import functools
import os
from typing import IO, List, Optional
from qutebrowser.api import config, downloads, message
from qutebrowser.qt.core import QObject, QUrl, pyqtSignal
class FakeDownload(downloads.TempDownload):
"""A download stub to use on_download_finished with local files."""
def __init__(self, fileobj: IO[bytes]) -> None:
# pylint: disable=super-init-not-called
self.fileobj = fileobj
self.successful = True
class BlocklistDownloads(QObject):
"""Download blocklists from the given URLs.
Attributes:
single_download_finished:
A signal that is emitted when a single download has finished. The
listening slot is provided with the download object.
all_downloads_finished:
A signal that is emitted when all downloads have finished. The
first argument is the number of items downloaded.
_urls: The URLs to download.
_in_progress: The DownloadItems which are currently downloading.
_done_count: How many files have been read successfully.
_finished_registering_downloads:
Used to make sure that if all the downloads finish really quickly,
before all of the block-lists have been added to the download
queue, we don't emit `all_downloads_finished` prematurely.
_started: Has the `initiate` method been called?
_finished: Has `all_downloads_finished` been emitted?
"""
single_download_finished = pyqtSignal(object) # arg: the file object
all_downloads_finished = pyqtSignal(int) # arg: download count
def __init__(self, urls: List[QUrl], parent: Optional[QObject] = None) -> None:
super().__init__(parent)
self._urls = urls
self._in_progress: List[downloads.TempDownload] = []
self._done_count = 0
self._finished_registering_downloads = False
self._started = False
self._finished = False
def initiate(self) -> None:
"""Initiate downloads of each url in `self._urls`."""
if self._started:
raise ValueError("This download has already been initiated")
self._started = True
if not self._urls:
self._finished = True
self.all_downloads_finished.emit(self._done_count)
return
for url in self._urls:
self._download_blocklist_url(url)
self._finished_registering_downloads = True
if not self._in_progress and not self._finished:
# The in-progress list is empty but we still haven't called the
# completion callback yet. This happens when all downloads finish
# before we've set `_finished_registering_downloads` to True.
self._finished = True
self.all_downloads_finished.emit(self._done_count)
def _download_blocklist_url(self, url: QUrl) -> None:
"""Take a blocklist url and queue it for download.
Args:
url: url to download
"""
if url.scheme() == "file":
# The URL describes a local file on disk if the url scheme is
# "file://". We handle those as a special case.
filename = url.toLocalFile()
if os.path.isdir(filename):
for entry in os.scandir(filename):
if entry.is_file():
self._import_local(entry.path)
else:
self._import_local(filename)
else:
download = downloads.download_temp(url)
self._in_progress.append(download)
download.finished.connect(
functools.partial(self._on_download_finished, download)
)
def _import_local(self, filename: str) -> None:
"""Pretend that a local file was downloaded from the internet.
Args:
filename: path to a local file to import.
"""
try:
fileobj = open(filename, "rb") # pylint: disable=consider-using-with
except OSError as e:
message.error(
"blockutils: Error while reading {}: {}".format(filename, e.strerror)
)
return
download = FakeDownload(fileobj)
self._in_progress.append(download)
self._on_download_finished(download)
def _on_download_finished(self, download: downloads.TempDownload) -> None:
"""Check if all downloads are finished and if so, trigger callback.
Arguments:
download: The finished download.
"""
self._in_progress.remove(download)
if download.successful:
self._done_count += 1
assert not isinstance(download.fileobj, downloads.UnsupportedAttribute)
assert download.fileobj is not None
try:
# Call the user-provided callback
self.single_download_finished.emit(download.fileobj)
finally:
download.fileobj.close()
if not self._in_progress and self._finished_registering_downloads:
self._finished = True
self.all_downloads_finished.emit(self._done_count)
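The registration/completion race that `_finished_registering_downloads` guards against can be sketched without Qt; the names here (`CompletionTracker`, `register`, ...) are illustrative, not qutebrowser API:

```python
class CompletionTracker:
    """Hold back the "all finished" callback until every item has been
    registered, so items that finish synchronously cannot fire it early."""

    def __init__(self, on_all_finished):
        self._on_all_finished = on_all_finished
        self._in_progress = []
        self._done_count = 0
        self._registering_done = False
        self._finished = False

    def register(self, item):
        self._in_progress.append(item)

    def finish_registering(self):
        # Called once all items are queued; may complete immediately.
        self._registering_done = True
        self._maybe_complete()

    def item_finished(self, item, successful=True):
        self._in_progress.remove(item)
        if successful:
            self._done_count += 1
        self._maybe_complete()

    def _maybe_complete(self):
        if not self._in_progress and self._registering_done and not self._finished:
            self._finished = True
            self._on_all_finished(self._done_count)
```

Without the `_registering_done` gate, a local-file "download" that finishes inside the registration loop would empty `_in_progress` and trigger completion before the remaining URLs are queued.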
def is_whitelisted_url(url: QUrl) -> bool:
"""Check if the given URL is on the adblock whitelist."""
whitelist = config.val.content.blocking.whitelist
return any(pattern.matches(url) for pattern in whitelist)
|
gvim-viewer | plugin | from __future__ import absolute_import
import subprocess
from gi.repository import Gtk
from sunflower.plugin_base.viewer_extension import ViewerExtension
def register_plugin(application):
"""Register plugin class with application"""
application.register_viewer_extension(("text/plain",), GVimViewer)
class GVimViewer(ViewerExtension):
"""Viewer extension that embeds a GVim window into the notebook and lets you
view files using your own Vim configuration.
"""
def __init__(self, parent):
ViewerExtension.__init__(self, parent)
self._process = None
# create container
self._container = Gtk.Viewport()
self._container.set_shadow_type(Gtk.ShadowType.IN)
# create socket for embedding the GVim window
self._socket = Gtk.Socket()
self._socket.connect("realize", self.__socket_realized)
# pack interface
self._container.add(self._socket)
def __socket_realized(self, widget, data=None):
"""Connect process when socket is realized"""
socket_id = self._socket.get_id()
# generate command string
command = ("gvim", "--socketid", str(socket_id), "-R", self._parent.path)
# create new process
self._process = subprocess.Popen(command)
def get_title(self):
"""Return page title"""
return _("GVim")
def get_container(self):
"""Return container widget to be embedded to notebook"""
return self._container
def focus_object(self):
"""Focus main object in extension"""
self._socket.child_focus(Gtk.DIR_TAB_FORWARD)
|
frescobaldi-app | definition | # This file is part of the Frescobaldi project, http://www.frescobaldi.org/
#
# Copyright (c) 2014 - 2014 by Wilbert Berendsen
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
# See http://www.gnu.org/licenses/ for more information.
"""
Find the definition of variables.
"""
import app
import browseriface
import documentinfo
import ly.music.items
from PyQt5.QtCore import QUrl
from PyQt5.QtGui import QTextCursor
def refnode(cursor):
"""Return the music item at the cursor if that probably is a reference to a definition elsewhere."""
node = documentinfo.music(cursor.document()).node(cursor.position())
if (
node
and node.end_position() >= cursor.selectionEnd()
and isinstance(
node,
(
ly.music.items.UserCommand,
ly.music.items.MarkupUserCommand,
),
)
):
return node
def target(node):
"""Return the target node (where the node is defined)."""
value = node.value()
if value:
target = value.parent()
if isinstance(target, ly.music.items.Document):
target = value # happens with #(define-markup-command ...)
return target
def goto_definition(mainwindow, cursor=None):
"""Go to the definition of the item the mainwindow's cursor is at.
Return True if there was a definition.
"""
if cursor is None:
cursor = mainwindow.textCursor()
node = refnode(cursor)
if node:
t = target(node)
if t:
goto_target(mainwindow, t)
return True
def goto_target(mainwindow, target):
"""Switch to the document and location where the node target is."""
lydoc = target.document
try:
# this succeeds if this is a document that is currently open
doc = lydoc.document
except AttributeError:
# it is an included file, just load it
filename = target.document.filename
doc = app.openUrl(QUrl.fromLocalFile(filename))
cursor = QTextCursor(doc)
cursor.setPosition(target.position)
browseriface.get(mainwindow).setTextCursor(cursor)
mainwindow.currentView().centerCursor()
|
utils | http | # -*- coding: utf-8 -*-
"""
flaskbb.utils.http
~~~~~~~~~~~~~~~~~~
Provides a utility function that attempts to validate a URL against
a set of valid hosts.
See https://www.owasp.org/index.php/Unvalidated_Redirects_and_Forwards_Cheat_Sheet
for more information about this topic.
Note: Most of this code has been taken from Django 3.2.0.alpha0.
"""
import unicodedata
from urllib.parse import (
ParseResult,
SplitResult,
_coerce_args,
_splitnetloc,
_splitparams,
scheme_chars,
uses_params,
)
# Copied from urllib.parse.urlparse() but uses fixed urlsplit() function.
def _urlparse(url, scheme="", allow_fragments=True):
"""Parse a URL into 6 components:
<scheme>://<netloc>/<path>;<params>?<query>#<fragment>
Return a 6-tuple: (scheme, netloc, path, params, query, fragment).
Note that we don't break the components up in smaller bits
(e.g. netloc is a single string) and we don't expand % escapes."""
url, scheme, _coerce_result = _coerce_args(url, scheme)
splitresult = _urlsplit(url, scheme, allow_fragments)
scheme, netloc, url, query, fragment = splitresult
if scheme in uses_params and ";" in url:
url, params = _splitparams(url)
else:
params = ""
result = ParseResult(scheme, netloc, url, params, query, fragment)
return _coerce_result(result)
# Copied from urllib.parse.urlsplit() with
# https://github.com/python/cpython/pull/661 applied.
# This fix has been backported to Python 3.8.
# TODO: Remove this once we drop support for Python < 3.8
def _urlsplit(url, scheme="", allow_fragments=True):
"""Parse a URL into 5 components:
<scheme>://<netloc>/<path>?<query>#<fragment>
Return a 5-tuple: (scheme, netloc, path, query, fragment).
Note that we don't break the components up in smaller bits
(e.g. netloc is a single string) and we don't expand % escapes."""
url, scheme, _coerce_result = _coerce_args(url, scheme)
netloc = query = fragment = ""
i = url.find(":")
if i > 0:
for c in url[:i]:
if c not in scheme_chars:
break
else:
scheme, url = url[:i].lower(), url[i + 1 :]
if url[:2] == "//":
netloc, url = _splitnetloc(url, 2)
if ("[" in netloc and "]" not in netloc) or (
"]" in netloc and "[" not in netloc
):
raise ValueError("Invalid IPv6 URL")
if allow_fragments and "#" in url:
url, fragment = url.split("#", 1)
if "?" in url:
url, query = url.split("?", 1)
v = SplitResult(scheme, netloc, url, query, fragment)
return _coerce_result(v)
def _url_has_allowed_host_and_scheme(url, allowed_hosts, require_https=False):
# Chrome considers any URL with more than two slashes to be absolute, but
# urlparse is not so flexible. Treat any url with three slashes as unsafe.
if url.startswith("///"):
return False
try:
url_info = _urlparse(url)
except ValueError: # e.g. invalid IPv6 addresses
return False
# Forbid URLs like http:///example.com - with a scheme, but without a hostname.
# In that URL, example.com is not the hostname but a path component. However,
# Chrome will still consider example.com to be the hostname, so we must not
# allow this syntax.
if not url_info.netloc and url_info.scheme:
return False
# Forbid URLs that start with control characters. Some browsers (like
# Chrome) ignore quite a few control characters at the start of a
# URL and might consider the URL as scheme relative.
if unicodedata.category(url[0])[0] == "C":
return False
scheme = url_info.scheme
# Consider URLs without a scheme (e.g. //example.com/p) to be http.
if not url_info.scheme and url_info.netloc:
scheme = "http"
valid_schemes = ["https"] if require_https else ["http", "https"]
return (not url_info.netloc or url_info.netloc in allowed_hosts) and (
not scheme or scheme in valid_schemes
)
def is_safe_url(url, allowed_hosts, require_https=False):
"""
Return ``True`` if the url uses an allowed host and a safe scheme.
Always return ``False`` on an empty url.
If ``require_https`` is ``True``, only 'https' will be considered a valid
scheme, as opposed to 'http' and 'https' with the default, ``False``.
Note: "True" doesn't entail that a URL is "safe". It may still be e.g.
quoted incorrectly. Ensure to also use django.utils.encoding.iri_to_uri()
on the path component of untrusted URLs.
"""
if url is not None:
url = url.strip()
if not url:
return False
if allowed_hosts is None:
allowed_hosts = set()
elif isinstance(allowed_hosts, str):
allowed_hosts = {allowed_hosts}
# Chrome treats \ completely as / in paths but it could be part of some
# basic auth credentials so we need to check both URLs.
return _url_has_allowed_host_and_scheme(
url, allowed_hosts, require_https=require_https
) and _url_has_allowed_host_and_scheme(
url.replace("\\", "/"), allowed_hosts, require_https=require_https
)
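As an illustration only (the real function above also rejects `///` prefixes and leading control characters), the host/scheme check can be restated with the stdlib `urlsplit`; `sketch_is_safe` is an illustrative name, not flaskbb API:

```python
from urllib.parse import urlsplit

def sketch_is_safe(url, allowed_hosts, require_https=False):
    # Simplified restatement of is_safe_url(): empty URLs are unsafe,
    # backslashes are treated as slashes, and only relative URLs or
    # URLs pointing at an allowed host with an allowed scheme pass.
    url = (url or "").strip()
    if not url:
        return False
    parts = urlsplit(url.replace("\\", "/"))
    if parts.scheme and not parts.netloc:
        return False  # e.g. http:///example.com or javascript:...
    scheme = parts.scheme or ("http" if parts.netloc else "")
    valid = ["https"] if require_https else ["http", "https"]
    return (not parts.netloc or parts.netloc in allowed_hosts) and (
        not scheme or scheme in valid
    )
```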
|
trends | formula | import math
import re
from itertools import accumulate
from string import ascii_uppercase
from typing import Any, Dict, List
from posthog.clickhouse.kafka_engine import trim_quotes_expr
from posthog.constants import NON_TIME_SERIES_DISPLAY_TYPES, TRENDS_CUMULATIVE
from posthog.models.filters.filter import Filter
from posthog.models.team import Team
from posthog.queries.breakdown_props import get_breakdown_cohort_name
from posthog.queries.insight import insight_sync_execute
from posthog.queries.trends.util import parse_response
from sentry_sdk import push_scope
# Regex for adding the formula variable index to all params, except HogQL params
PARAM_DISAMBIGUATION_REGEX = re.compile(r"%\((?!hogql_)")
class TrendsFormula:
def _run_formula_query(self, filter: Filter, team: Team):
letters = [ascii_uppercase[i] for i in range(0, len(filter.entities))]
queries = []
params: Dict[str, Any] = {}
for idx, entity in enumerate(filter.entities):
_, sql, entity_params, _ = self._get_sql_for_entity(filter, team, entity) # type: ignore
sql = PARAM_DISAMBIGUATION_REGEX.sub(f"%({idx}_", sql)
entity_params = {
f"{idx}_{key}": value for key, value in entity_params.items()
}
queries.append(sql)
params.update(entity_params)
params.update(filter.hogql_context.values)
breakdown_value = ""
if filter.breakdown_type == "cohort":
breakdown_columns = ", ".join(
f"sub_{letter}.breakdown_value" for letter in letters
)
breakdown_value = ", arrayFilter(x -> x != 0, [{}])[1]".format(
breakdown_columns
)
else:
breakdown_columns = ", ".join(
trim_quotes_expr(f"sub_{letter}.breakdown_value") for letter in letters
)
breakdown_value = ", arrayFilter(x -> notEmpty(x), [{}])[1]".format(
breakdown_columns
)
is_aggregate = filter.display in NON_TIME_SERIES_DISPLAY_TYPES
sql = """SELECT
{date_select}
arrayMap(({letters_select}) -> {formula}, {selects})
{breakdown_value}
{max_length}
FROM ({first_query}) as sub_A
{queries}
""".format(
date_select="'' as date," if is_aggregate else "sub_A.date,",
letters_select=", ".join(letters),
formula=filter.formula, # formula is properly escaped in the filter
# Need to wrap aggregates in arrays so we can still use arrayMap
selects=", ".join(
[
(
f"[ifNull(sub_{letter}.total, 0)]"
if is_aggregate
else f"arrayResize(sub_{letter}.total, max_length, 0)"
)
for letter in letters
]
),
breakdown_value=breakdown_value if filter.breakdown else "",
max_length=""
if is_aggregate
else ", arrayMax([{}]) as max_length".format(
", ".join(f"length(sub_{letter}.total)" for letter in letters)
),
first_query=queries[0],
queries="".join(
[
"FULL OUTER JOIN ({query}) as sub_{letter} ON sub_A.breakdown_value = sub_{letter}.breakdown_value ".format(
query=query, letter=letters[i + 1]
)
for i, query in enumerate(queries[1:])
]
)
if filter.breakdown
else "".join(
[
" CROSS JOIN ({}) as sub_{}".format(query, letters[i + 1])
for i, query in enumerate(queries[1:])
]
),
)
with push_scope() as scope:
scope.set_context("filter", filter.to_dict())
scope.set_tag("team", team)
scope.set_context("query", {"sql": sql, "params": params})
result = insight_sync_execute(
sql,
params,
query_type="trends_formula",
filter=filter,
team_id=team.pk,
)
response = []
for item in result:
additional_values: Dict[str, Any] = {"label": self._label(filter, item)}
if filter.breakdown:
additional_values["breakdown_value"] = additional_values["label"]
if is_aggregate:
additional_values["data"] = []
additional_values["aggregated_value"] = item[1][0]
else:
additional_values["data"] = [
round(number, 2)
if not math.isnan(number) and not math.isinf(number)
else 0.0
for number in item[1]
]
if filter.display == TRENDS_CUMULATIVE:
additional_values["data"] = list(
accumulate(additional_values["data"])
)
additional_values["count"] = float(sum(additional_values["data"]))
response.append(
parse_response(item, filter, additional_values=additional_values)
)
return response
def _label(self, filter: Filter, item: List) -> str:
if filter.breakdown:
if filter.breakdown_type == "cohort":
return get_breakdown_cohort_name(item[2])
return item[2]
return "Formula ({})".format(filter.formula)
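The `arrayResize`/`arrayMap` combination the generated SQL relies on can be sketched in plain Python; `combine_series` is an illustrative name, not PostHog API:

```python
def combine_series(formula, *series):
    # Pad every sub-series with zeros to the longest length
    # (mirroring arrayResize(..., max_length, 0)) and then apply
    # the formula element-wise (mirroring arrayMap((A, B) -> ..., ...)).
    max_length = max(len(s) for s in series)
    padded = [s + [0] * (max_length - len(s)) for s in series]
    return [formula(*values) for values in zip(*padded)]

padded_sum = combine_series(lambda A, B: A + B, [1, 2, 3], [10, 20])
# padded_sum == [11, 22, 3]: B is padded with a trailing 0 before summing
```

The zero-padding matters because entities can return series of different lengths; without it, the element-wise formula would silently drop trailing buckets.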
|
Code | Kibitzers | import os.path
from Code import EngineThread, MotoresExternos, Util, VarGen, XRun
from Code.QT import Iconos
from PyQt4 import QtGui
class Tipos:
def __init__(self):
self.liTipos = (
("M", _("Candidates"), Iconos.pmPuntoRojo()),
("I", _("Indexes") + " - RodentII", Iconos.pmPuntoNegro()),
("S", _("Best move"), Iconos.pmPuntoVerde()),
("L", _("Best move in one line"), Iconos.pmPuntoMagenta()),
("J", _("Select move"), Iconos.pmPuntoNaranja()),
("C", _("Threats"), Iconos.pmPuntoAzul()),
("E", _("Stockfish evaluation"), Iconos.pmPuntoAmarillo()),
("B", _("Polyglot book"), Iconos.pmPuntoEstrella()),
)
def combo(self):
return [(label, key) for key, label, pm in self.liTipos]
def comboSinIndices(self):
return [(label, key) for key, label, pm in self.liTipos if key != "I"]
def texto(self, tipo):
for tp, nom, pm in self.liTipos:
if tp == tipo:
return nom
def dicDelegado(self):
return {tp: pm for tp, txt, pm in self.liTipos}
def dicIconos(self):
return {tp: QtGui.QIcon(pm) for tp, txt, pm in self.liTipos}
class Kibitzer(MotoresExternos.MotorExterno):
def __init__(self):
MotoresExternos.MotorExterno.__init__(self)
self.tipo = None
self.huella = None
self.prioridad = EngineThread.priorities.normal
self.posicionBase = False
self.visible = True
self.nombre = None
def ponHuella(self, liEngines):
liHuellas = [en.huella for en in liEngines if en != self]
while True:
self.huella = Util.huella()
if self.huella not in liHuellas:
return
def clonar(self, liEngines):
otro = Kibitzer()
otro.leerTXT(self.save2txt())
otro.tipo = self.tipo
otro.prioridad = self.prioridad
otro.nombre = self.nombre
otro.posicionBase = self.posicionBase
otro.visible = True
otro.ponHuella(liEngines)
lista = [en.nombre for en in liEngines]
d = 0
while otro.nombre in lista:
d += 1
otro.nombre = "%s-%d" % (self.nombre, d)
return otro
def get_huella(self):
# The fingerprint lives in the instance attribute self.huella; a method
# named huella() would be shadowed by that attribute, so use an accessor.
return self.huella
def leerTXT(self, txt):
dic = Util.txt2dic(txt)
self.restore(dic)
self.huella = dic["HUELLA"]
self.tipo = dic["TIPO"]
self.prioridad = dic["PRIORITY"]
self.nombre = dic["NOMBRE"]
self.posicionBase = dic.get("POSICIONBASE", False)
self.visible = dic.get("VISIBLE", True)
return os.path.isfile(self.exe)
def save2txt(self):
dic = MotoresExternos.MotorExterno.save(self)
dic["HUELLA"] = self.huella
dic["PRIORITY"] = self.prioridad
dic["TIPO"] = self.tipo
dic["NOMBRE"] = self.nombre
dic["POSICIONBASE"] = self.posicionBase
dic["VISIBLE"] = self.visible
return Util.dic2txt(dic)
def configMotor(self):
return MotoresExternos.ConfigMotor(self)
def leerConfigEngine(self, resp):
if resp.startswith("*"):
me = MotoresExternos.buscaMotor(resp)
if me:
self.exe = Util.dirRelativo(me.exe)
self.args = me.args
self.alias = me.alias
self.idName = me.idName
self.clave = me.clave
self.idAuthor = me.idAuthor
self.idInfo = me.idInfo
self.liOpciones = me.liOpciones
self.maxMultiPV = me.maxMultiPV
self.multiPV = me.multiPV
self.elo = me.elo
else:
return False
else:
cm = VarGen.configuracion.buscaRival(resp)
self.alias = cm.clave
self.args = []
self.idName = cm.nombre
self.idAuthor = cm.autor
self.idInfo = ""
self.multiPV = cm.multiPV
self.maxMultiPV = cm.maxMultiPV
self.exe = cm.ejecutable()
me = MotoresExternos.MotorExterno()
me.leerUCI(self.exe, self.args)
self.liOpciones = me.liOpciones
for op in self.liOpciones:
for comando, valor in cm.liUCI:
if op.nombre == comando:
if op.tipo == "check":
op.valor = valor.lower() == "true"
elif op.tipo == "spin":
op.valor = int(valor)
else:
op.valor = valor
break
return True
def readAntiguo(self, dic):
nom_motor = dic["MOTOR"]
if not self.leerConfigEngine(nom_motor):
return False
self.nombre = dic["NOMBRE"]
self.tipo = dic["TIPO"]
self.huella = Util.huella()
self.visible = True
self.posicionBase = False
fvideo = dic.get("FVIDEO")
if fvideo:
try:
fdest = os.path.join(
os.path.dirname(fvideo), "KIB%s.video" % self.huella
)
os.rename(fvideo, fdest)
except OSError:
pass
return os.path.isfile(self.exe)
def ctipo(self):
return Tipos().texto(self.tipo)
def cpriority(self):
return EngineThread.priorities.texto(self.prioridad)
class Kibitzers:
def __init__(self):
self.fichero = VarGen.configuracion.ficheroKibitzers
self.lista, self.lastfolder = self.read()
def readAntiguo(self, listaAnt):
lastfolder = ""
lista = []
for dic in listaAnt:
kib = Kibitzer()
if kib.readAntiguo(dic):
lista.append(kib)
self.lista, self.lastfolder = lista, lastfolder
self.save()
return lista, lastfolder
def read(self):
lista = []
lastfolder = ""
dic = Util.recuperaVar(self.fichero)
if type(dic) == list:
return self.readAntiguo(dic)
if dic:
lastfolder = dic.get("LASTFOLDER", "")
lista_txt = dic.get("LISTA", [])
if lista_txt:
for txt in lista_txt:
kib = Kibitzer()
if kib.leerTXT(txt):
lista.append(kib)
return lista, lastfolder
def save(self):
dic = {
"LISTA": [en.save2txt() for en in self.lista],
"LASTFOLDER": self.lastfolder,
}
Util.guardaVar(self.fichero, dic)
def nuevo(self, nombre, motor, tipo, prioridad):
kib = Kibitzer()
kib.ponHuella(self.lista)
kib.leerConfigEngine(motor)
kib.alias = kib.nombre = nombre
kib.tipo = tipo
kib.prioridad = prioridad
self.lista.append(kib)
self.save()
return len(self.lista) - 1
def nuevoPolyglot(self, book):
kib = Kibitzer()
kib.ponHuella(self.lista)
kib.alias = kib.nombre = book.nombre
kib.tipo = "B"
kib.exe = book.path
kib.clave = book.nombre
self.lista.append(kib)
self.save()
return len(self.lista) - 1
def __len__(self):
return len(self.lista)
def kibitzer(self, num):
return self.lista[num]
def remove(self, num):
del self.lista[num]
self.save()
def up(self, num):
if num > 0:
self.lista[num], self.lista[num - 1] = self.lista[num - 1], self.lista[num]
self.save()
return num - 1
return None
def down(self, num):
if num < (len(self.lista) - 1):
self.lista[num], self.lista[num + 1] = self.lista[num + 1], self.lista[num]
self.save()
return num + 1
return None
def clonar(self, num):
kib = self.lista[num].clonar(self.lista)
self.lista.append(kib)
self.save()
return len(self.lista) - 1
def lista_menu(self):
dIco = Tipos().dicIconos()
return [(kib.nombre, dIco[kib.tipo]) for kib in self.lista if kib.visible]
class Orden:
def __init__(self):
self.clave = ""
self.dv = {}
def ponVar(self, nombre, valor):
self.dv[nombre] = valor
def bloqueEnvio(self):
self.dv["__CLAVE__"] = self.clave
return self.dv
class IPCKibitzer:
CONFIGURACION = "C"
FEN = "F"
TERMINAR = "T"
def __init__(self, gestor, numkibitzer):
configuracion = gestor.configuracion
fdb = configuracion.ficheroTemporal("db")
self.ipc = Util.IPC(fdb, True)
orden = Orden()
orden.clave = self.CONFIGURACION
orden.dv["USER"] = configuracion.user
orden.dv["NUMKIBITZER"] = numkibitzer
self.escribe(orden)
self.popen = XRun.run_lucas("-kibitzer", fdb)
def escribe(self, orden):
self.ipc.push(orden.bloqueEnvio())
def siActiva(self):
if self.popen is None:
return False
return self.popen.poll() is None
def ponFen(self, fen, fenBase):
orden = Orden()
orden.clave = self.FEN
orden.dv["FEN"] = "%s|%s" % (fen, fenBase)
self.escribe(orden)
def terminar(self):
try:
orden = Orden()
orden.clave = self.TERMINAR
self.escribe(orden)
self.ipc.close()
self.close()
except:
pass
def close(self):
if self.popen:
try:
self.popen.terminate()
self.popen = None
except:
pass
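`ponHuella`'s draw-until-unique loop can be sketched with the stdlib; `uuid4().hex` stands in here for `Util.huella()`, whose exact format is not shown in this module:

```python
import uuid

def unique_fingerprint(existing):
    # Keep drawing random fingerprints until one is not already taken,
    # mirroring Kibitzer.ponHuella() over the list of engine fingerprints.
    while True:
        fingerprint = uuid.uuid4().hex
        if fingerprint not in existing:
            return fingerprint
```

With a 128-bit identifier the loop effectively terminates on the first draw; it only exists to make uniqueness a hard guarantee.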
|
website | middleware | from django.contrib import auth
from django.core.exceptions import ImproperlyConfigured
from shibboleth.app_settings import GROUP_ATTRIBUTES
from shibboleth.middleware import (
ShibbolethRemoteUserMiddleware,
ShibbolethValidationError,
)
from website.backends import ShibbolethRemoteUserBackend
class ShibbolethRemoteUserMiddleware(ShibbolethRemoteUserMiddleware):
"""
The subclassed middleware looks up the user by eppn; since we do not store the eppn on the user model itself but in a linked class, we have to fetch the user from there.
"""
def process_request(self, request):
"""
This method is almost identical to that of the superclass, except for how the user is retrieved.
"""
# AuthenticationMiddleware is required so that request.user exists.
if not hasattr(request, "user"):
raise ImproperlyConfigured(
"The Django remote user auth middleware requires the"
" authentication middleware to be installed. Edit your"
" MIDDLEWARE_CLASSES setting to insert"
" 'django.contrib.auth.middleware.AuthenticationMiddleware'"
" before the RemoteUserMiddleware class."
)
# Locate the remote user header.
try:
username = request.META[self.header]
except KeyError:
# If specified header doesn't exist then return (leaving
# request.user set to AnonymousUser by the
# AuthenticationMiddleware).
# Otherwise we log the user out if they were authenticated with this backend.
if request.user.is_authenticated:
self._remove_invalid_user(request)
return
# If we got an empty value for request.META[self.header], treat it like
# self.header wasn't in self.META at all - it's still an anonymous user.
if not username:
return
# If the user is already authenticated and that user is the user we are
# getting passed in the headers, then the correct user is already
# persisted in the session and we don't need to continue.
is_authenticated = request.user.is_authenticated
# Here we do not look for the username of the authenticated user, but its shibbolethuser.shib_username
if is_authenticated and hasattr(request.user, "shibboleth_account"):
if request.user.shibboleth_account.shib_username == self.clean_username(
username, request
):
return
# Make sure we have all required Shibboleth elements before proceeding.
shib_meta, error = self.parse_attributes(request)
# Add parsed attributes to the session.
request.session["shib"] = shib_meta
if error:
raise ShibbolethValidationError(
"All required Shibboleth elements not found. %s" % shib_meta
)
# We are seeing this user for the first time in this session, attempt
# to authenticate the user.
user = auth.authenticate(request, remote_user=username, shib_meta=shib_meta)
if user:
# User is valid. Set request.user and persist user in the session
# by logging the user in.
request.user = user
auth.login(request, user)
# Upgrade user groups if configured in the settings.py
# If activated, the user will be associated with those groups.
if GROUP_ATTRIBUTES:
self.update_user_groups(request, user)
# call make profile.
self.make_profile(user, shib_meta)
# setup session.
self.setup_session(request)
def _remove_invalid_user(self, request):
"""
Remove the current authenticated user in the request which is invalid
but only if the user is authenticated via the ShibbolethRemoteUserBackend.
"""
try:
stored_backend = auth.load_backend(
request.session.get(auth.BACKEND_SESSION_KEY, "")
)
except ImportError:
# backend failed to load
auth.logout(request)
else:
if isinstance(stored_backend, ShibbolethRemoteUserBackend):
auth.logout(request)
|
eva | container | from __future__ import annotations
from typing import TYPE_CHECKING, Dict, Optional, TypeVar
from ipv8.types import Peer
from tribler.core.components.ipv8.eva.transfer.base import Transfer
if TYPE_CHECKING:
from tribler.core.components.ipv8.eva.protocol import EVAProtocol
T = TypeVar("T", bound=Transfer)
class Container(Dict[Peer, T]):
"""This class is designed as storage for transfers.
The key feature of the Container class is its ability to call
`self.eva.scheduler.send_scheduled()` on each item deletion.
"""
def __init__(self, eva: EVAProtocol):
super().__init__()
self.eva = eva
def pop(self, key: Peer, default: Optional[T] = None) -> Optional[T]:
value = super().pop(key, default)
self.eva.scheduler.send_scheduled()
return value
def update(self, *args, **kwargs) -> None:
super().update(*args, **kwargs)
self.eva.scheduler.send_scheduled()
def __setitem__(self, key: Peer, value: T):
if key in self:
raise KeyError("Peer is already in container")
super().__setitem__(key, value)
def __delitem__(self, key: Peer):
super().__delitem__(key)
self.eva.scheduler.send_scheduled()
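The notify-on-removal behaviour of the container can be sketched standalone. EVAProtocol and Peer are stubbed out below, so the stub names are illustrative, not Tribler's real classes:

```python
# Minimal sketch of the Container pattern: a dict that notifies a
# scheduler on every removal (pop/del), mirroring the class above.
class StubScheduler:
    def __init__(self):
        self.calls = 0

    def send_scheduled(self):
        self.calls += 1


class StubEVA:
    def __init__(self):
        self.scheduler = StubScheduler()


class Container(dict):
    def __init__(self, eva):
        super().__init__()
        self.eva = eva

    def pop(self, key, default=None):
        value = super().pop(key, default)
        self.eva.scheduler.send_scheduled()
        return value

    def __setitem__(self, key, value):
        if key in self:
            raise KeyError("Peer is already in container")
        super().__setitem__(key, value)

    def __delitem__(self, key):
        super().__delitem__(key)
        self.eva.scheduler.send_scheduled()


eva = StubEVA()
container = Container(eva)
container["peer1"] = "transfer"
del container["peer1"]       # triggers one send_scheduled() call
container.pop("missing")     # pop notifies even on a miss
print(eva.scheduler.calls)   # -> 2
```

Hooking `pop` and `__delitem__` rather than polling means the scheduler runs exactly when a transfer slot frees up.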
|
database | _table_utils | #!/usr/bin/env python
#
# Copyright (c) 2012 Simone Basso <bassosimone@gmail.com>,
# NEXA Center for Internet & Society at Politecnico di Torino
#
# This file is part of Neubot <http://www.neubot.org/>.
#
# Neubot is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Neubot is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Neubot. If not, see <http://www.gnu.org/licenses/>.
#
""" Regression tests for neubot/database/_table_utils.py """
import sqlite3
import sys
import unittest
if __name__ == "__main__":
sys.path.insert(0, ".")
from neubot.database import _table_utils
class TestRenameColumn(unittest.TestCase):
"""Regression test for rename_column() feature"""
template = {
"namex": "Simone",
"sur_name": "Basso",
"age": 29,
}
mapping = {"namex": "name", "sur_name": "surname"}
def test_success(self):
"""Test for the successful case"""
connection = sqlite3.connect(":memory:")
connection.execute(_table_utils.make_create_table("Person", self.template))
_table_utils.rename_column(connection, "Person", self.template, self.mapping)
cursor = connection.cursor()
cursor.execute('SELECT sql FROM sqlite_master WHERE name="Person";')
query = cursor.fetchone()[0]
self.assertEqual(
query,
"CREATE TABLE Person (id INTEGER PRIMARY "
"KEY, age INTEGER, surname TEXT, name TEXT)",
)
if __name__ == "__main__":
unittest.main()
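The `rename_column()` helper under test predates SQLite 3.25, which introduced `ALTER TABLE ... RENAME COLUMN`; the classic workaround is to rebuild the table. A hypothetical sketch of that rebuild technique (not Neubot's actual implementation):

```python
import sqlite3

# Rebuild-based column rename: create a new table with the renamed
# columns, copy the rows across, drop the old table, rename the new one.
def rename_columns(connection, table, columns, mapping):
    """columns: {name: sample_value} template; mapping: {old: new}."""
    def sql_type(value):
        return "INTEGER" if isinstance(value, int) else "TEXT"

    new_cols = {mapping.get(name, name): sql_type(value)
                for name, value in columns.items()}
    col_defs = ", ".join("%s %s" % (n, t) for n, t in new_cols.items())
    select_old = ", ".join(columns)
    insert_new = ", ".join(mapping.get(name, name) for name in columns)

    connection.execute("CREATE TABLE %s_new (id INTEGER PRIMARY KEY, %s)"
                       % (table, col_defs))
    connection.execute("INSERT INTO %s_new (%s) SELECT %s FROM %s"
                       % (table, insert_new, select_old, table))
    connection.execute("DROP TABLE %s" % table)
    connection.execute("ALTER TABLE %s_new RENAME TO %s" % (table, table))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Person (id INTEGER PRIMARY KEY, namex TEXT, age INTEGER)")
conn.execute("INSERT INTO Person (namex, age) VALUES ('Simone', 29)")
rename_columns(conn, "Person", {"namex": "Simone", "age": 29}, {"namex": "name"})
row = conn.execute("SELECT name, age FROM Person").fetchone()
print(row)  # -> ('Simone', 29)
```

String-built SQL is acceptable here only because table and column names come from trusted templates, never user input.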
|
src | guiHtml | # HTML GUI
# def GuiObject
def main():
import BaseHTTPServer
class Handler(BaseHTTPServer.BaseHTTPRequestHandler):
def log_message(self, format, *args):
pass
def do_GET(self):
print("GET: %s" % self.path)
if self.path == "/":
return self.returnMainPage()
self.send_response(404)
self.end_headers()
def returnMainPage(self):
self.send_response(200)
self.send_header("Content-type", "text/html")
self.end_headers()
self.wfile.write(
"""
<html>
<body>Hey there!</body>
</html>
"""
)
def startServer(port=0):
import BaseHTTPServer
return BaseHTTPServer.HTTPServer(("", port), Handler)
def tryOrFail(fn):
try:
return fn()
except Exception:
return
# Try with some default ports first.
httpd = (
tryOrFail(lambda: startServer(port=9123))
or tryOrFail(lambda: startServer(port=9321))
or startServer()
)
_, port = httpd.server_address
import webbrowser
webbrowser.open("http://localhost:%i" % port)
import main
main.handleApplicationInit()
try:
while True:
httpd.handle_request()
except KeyboardInterrupt:
raise SystemExit
def guiMain():
pass
def dummyMainLoop():
from State import state
for ev, args, kwargs in state.updates.read():
pass
|
plugins | trigger_fx | # -*- coding: utf-8 -*-
"""
This file is part of the Stargate project, Copyright Stargate Team
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation version 3 of the License.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
"""
from sglib.lib import util
from sglib.lib.translate import _
from sgui.widgets import *
from .util import get_screws
TRIGGERFX_INPUT0 = 0
TRIGGERFX_INPUT1 = 1
TRIGGERFX_OUTPUT0 = 2
TRIGGERFX_OUTPUT1 = 3
TRIGGERFX_FIRST_CONTROL_PORT = 4
TRIGGERFX_GATE_NOTE = 4
TRIGGERFX_GATE_MODE = 5
TRIGGERFX_GATE_WET = 6
TRIGGERFX_GATE_PITCH = 7
TRIGGERFX_GLITCH_ON = 8
TRIGGERFX_GLITCH_NOTE = 9
TRIGGERFX_GLITCH_TIME = 10
TRIGGERFX_GLITCH_PB = 11
TRIGGERFX_PORT_MAP = {
"Gate Wet": TRIGGERFX_GATE_WET,
"Glitch Time": TRIGGERFX_GLITCH_TIME,
}
STYLESHEET = """
QWidget#plugin_window {
background: qlineargradient(
x1: 0, y1: 0, x2: 0, y2: 1,
stop: 0 #df76a0, stop: 1 #b34d6c
);
}
QLineEdit,
QSpinBox,
QDoubleSpinBox,
QComboBox {
background: qlineargradient(
x1: 0, y1: 0, x2: 0, y2: 1,
stop: 0 #6a6a6a, stop: 0.5 #828282, stop: 1 #6a6a6a
);
border: 1px solid #222222;
border-radius: 6px;
color: #222222;
}
QLabel#plugin_name_label,
QLabel#plugin_value_label {
background: none;
color: #222222;
}
QComboBox::drop-down
{
border-bottom-right-radius: 3px;
border-left-color: #222222;
border-left-style: solid; /* just a single line */
border-left-width: 0px;
border-top-right-radius: 3px; /* same radius as the QComboBox */
color: #cccccc;
subcontrol-origin: padding;
subcontrol-position: top right;
width: 15px;
}
QComboBox::down-arrow
{
image: url({{ PLUGIN_ASSETS_DIR }}/drop-down.svg);
}
QCheckBox,
QRadioButton
{
background: none;
color: #cccccc;
margin: 3px;
padding: 0px;
}
QCheckBox::indicator,
QRadioButton::indicator
{
background-color: #222222;
border-radius: 6px;
border: 1px solid #cccccc;
color: #cccccc;
height: 18px;
margin-left: 6px;
width: 18px;
}
QCheckBox::indicator:checked,
QRadioButton::indicator:checked
{
background-color: qradialgradient(
cx: 0.5, cy: 0.5,
fx: 0.5, fy: 0.5,
radius: 1.0,
stop: 0.25 #cccccc,
stop: 0.3 #222222
);
}
QPushButton:hover
{
border: 2px solid #cccccc;
}
QRadioButton::indicator:hover,
QCheckBox::indicator:hover
{
border: 1px solid #ffffff;
}
QWidget#note_selector {
background: none;
}
QLabel#logo {
background: none;
}
"""
class triggerfx_plugin_ui(AbstractPluginUI):
def __init__(self, *args, **kwargs):
AbstractPluginUI.__init__(
self,
*args,
stylesheet=STYLESHEET,
**kwargs,
)
self.widget.setFixedHeight(100)
self._plugin_name = "TRIGGERFX"
self.is_instrument = False
self.preset_manager = None
self.main_hlayout = QHBoxLayout()
self.layout.addLayout(self.main_hlayout)
left_screws = get_screws()
self.main_hlayout.addLayout(left_screws)
f_knob_size = DEFAULT_KNOB_SIZE
knob_kwargs = {
"arc_width_pct": 0.0,
"fg_svg": os.path.join(
util.PLUGIN_ASSETS_DIR,
"knob-plastic-3.svg",
),
}
self.gate_gridlayout = QGridLayout()
self.main_hlayout.addLayout(self.gate_gridlayout)
self.gate_on_checkbox = checkbox_control(
"Gate",
TRIGGERFX_GATE_MODE,
self.plugin_rel_callback,
self.plugin_val_callback,
self.port_dict,
a_preset_mgr=self.preset_manager,
tooltip=(
"Enable the MIDI triggered gate. Audio will be muted except "
"when the trigger note is being played."
),
)
self.gate_on_checkbox.add_to_grid_layout(self.gate_gridlayout, 3)
self.gate_note_selector = NoteSelectorWidget(
TRIGGERFX_GATE_NOTE,
self.plugin_rel_callback,
self.plugin_val_callback,
self.port_dict,
120,
self.preset_manager,
name_label="Trigger Note",
)
self.gate_note_selector.add_to_grid_layout(self.gate_gridlayout, 6)
self.gate_wet_knob = knob_control(
f_knob_size,
_("Wet"),
TRIGGERFX_GATE_WET,
self.plugin_rel_callback,
self.plugin_val_callback,
0,
100,
0,
KC_DECIMAL,
self.port_dict,
self.preset_manager,
knob_kwargs=knob_kwargs,
tooltip=(
"Dry/wet control, 1.0 for full wet sound, 0.0 for full dry sound"
),
)
self.gate_wet_knob.add_to_grid_layout(self.gate_gridlayout, 9)
self.gate_pitch_knob = knob_control(
f_knob_size,
_("Pitch"),
TRIGGERFX_GATE_PITCH,
self.plugin_rel_callback,
self.plugin_val_callback,
20,
120,
60,
KC_PITCH,
self.port_dict,
self.preset_manager,
knob_kwargs=knob_kwargs,
tooltip=(
"High values cause the gate to open and close very quickly, "
"low values cause it to open and close more slowly"
),
)
self.gate_pitch_knob.add_to_grid_layout(self.gate_gridlayout, 12)
self.main_hlayout.addItem(
QSpacerItem(1, 1, QSizePolicy.Policy.Expanding),
)
pixmap = QPixmap(
os.path.join(
util.PLUGIN_ASSETS_DIR,
"triggerfx",
"logo.svg",
)
)
self.logo_label = QLabel("")
self.logo_label.setObjectName("logo")
self.logo_label.setPixmap(pixmap)
self.main_hlayout.addWidget(self.logo_label)
self.main_hlayout.addItem(
QSpacerItem(1, 1, QSizePolicy.Policy.Expanding),
)
self.glitch_gridlayout = QGridLayout()
self.main_hlayout.addLayout(self.glitch_gridlayout)
self.glitch_on_checkbox = checkbox_control(
"Glitch",
TRIGGERFX_GLITCH_ON,
self.plugin_rel_callback,
self.plugin_val_callback,
self.port_dict,
a_preset_mgr=self.preset_manager,
tooltip=(
"Enable/disable glitch. Plays the audio input on short "
"repeat when the trigger note is played"
),
)
self.glitch_on_checkbox.add_to_grid_layout(self.glitch_gridlayout, 3)
self.glitch_note_selector = NoteSelectorWidget(
TRIGGERFX_GLITCH_NOTE,
self.plugin_rel_callback,
self.plugin_val_callback,
self.port_dict,
119,
self.preset_manager,
name_label="Trigger Note",
)
self.glitch_note_selector.add_to_grid_layout(self.glitch_gridlayout, 6)
self.glitch_time_knob = knob_control(
f_knob_size,
_("Time"),
TRIGGERFX_GLITCH_TIME,
self.plugin_rel_callback,
self.plugin_val_callback,
1,
25,
10,
KC_TIME_DECIMAL,
self.port_dict,
self.preset_manager,
knob_kwargs=knob_kwargs,
tooltip="The length of the repeat in seconds",
)
self.glitch_time_knob.add_to_grid_layout(self.glitch_gridlayout, 9)
self.glitch_pb_knob = knob_control(
f_knob_size,
_("Pitchbend"),
TRIGGERFX_GLITCH_PB,
self.plugin_rel_callback,
self.plugin_val_callback,
0,
36,
0,
KC_INTEGER,
self.port_dict,
self.preset_manager,
knob_kwargs=knob_kwargs,
tooltip=(
"How much pitchbend affects pitch, in semitones, while the "
"trigger note is pressed"
),
)
self.glitch_pb_knob.add_to_grid_layout(self.glitch_gridlayout, 12)
right_screws = get_screws()
self.main_hlayout.addLayout(right_screws)
self.open_plugin_file()
self.set_midi_learn(TRIGGERFX_PORT_MAP)
|
lib | metrics | """
Exposes various metrics via Prometheus.
"""
import configparser
import datetime
import functools
import os
from common_metrics import (
PACKAGE_FILE_COUNT_BUCKETS,
PACKAGE_SIZE_BUCKETS,
PROCESSING_TIME_BUCKETS,
TASK_DURATION_BUCKETS,
)
from django.conf import settings
from django.db.models import Sum
from django.utils import timezone
from fpr.models import FormatVersion
from main.models import File, FileFormatVersion, Transfer
from prometheus_client import Counter, Gauge, Histogram, Info, start_http_server
from version import get_full_version
job_counter = Counter(
"mcpclient_job_total",
"Number of jobs processed, labeled by script",
["script_name"],
)
job_processed_timestamp = Gauge(
"mcpclient_job_success_timestamp",
"Timestamp of most recent job processed, labeled by script",
["script_name"],
)
job_error_counter = Counter(
"mcpclient_job_error_total",
"Number of failures processing jobs, labeled by script",
["script_name"],
)
job_error_timestamp = Gauge(
"mcpclient_job_error_timestamp",
"Timestamp of most recent job failure, labeled by script",
["script_name"],
)
task_execution_time_histogram = Histogram(
"mcpclient_task_execution_time_seconds",
"Histogram of worker task execution times in seconds, labeled by script",
["script_name"],
buckets=TASK_DURATION_BUCKETS,
)
waiting_for_gearman_time_counter = Counter(
"mcpclient_gearman_sleep_time_seconds",
"Total worker sleep after gearman error times in seconds",
)
transfer_started_counter = Counter(
"mcpclient_transfer_started_total",
"Number of Transfers started, by transfer type",
["transfer_type"],
)
transfer_started_timestamp = Gauge(
"mcpclient_transfer_started_timestamp",
"Timestamp of most recent transfer started, by transfer type",
["transfer_type"],
)
transfer_completed_counter = Counter(
"mcpclient_transfer_completed_total",
"Number of Transfers completed, by transfer type",
["transfer_type"],
)
transfer_completed_timestamp = Gauge(
"mcpclient_transfer_completed_timestamp",
"Timestamp of most recent transfer completed, by transfer type",
["transfer_type"],
)
transfer_error_counter = Counter(
"mcpclient_transfer_error_total",
"Number of transfer failures, by transfer type, error type",
["transfer_type", "failure_type"],
)
transfer_error_timestamp = Gauge(
"mcpclient_transfer_error_timestamp",
"Timestamp of most recent transfer failure, by transfer type, error type",
["transfer_type", "failure_type"],
)
transfer_files_histogram = Histogram(
"mcpclient_transfer_files",
"Histogram of number of files included in transfers, by transfer type",
["transfer_type"],
buckets=PACKAGE_FILE_COUNT_BUCKETS,
)
transfer_size_histogram = Histogram(
"mcpclient_transfer_size_bytes",
"Histogram of number bytes in transfers, by transfer type",
["transfer_type"],
buckets=PACKAGE_SIZE_BUCKETS,
)
sip_started_counter = Counter("mcpclient_sip_started_total", "Number of SIPs started")
sip_started_timestamp = Gauge(
"mcpclient_sip_started_timestamp", "Timestamp of most recent SIP started"
)
sip_error_counter = Counter(
"mcpclient_sip_error_total",
"Number of SIP failures, by error type",
["failure_type"],
)
sip_error_timestamp = Gauge(
"mcpclient_sip_error_timestamp",
"Timestamp of most recent SIP failure, by error type",
["failure_type"],
)
aips_stored_counter = Counter("mcpclient_aips_stored_total", "Number of AIPs stored")
dips_stored_counter = Counter("mcpclient_dips_stored_total", "Number of DIPs stored")
aips_stored_timestamp = Gauge(
"mcpclient_aips_stored_timestamp", "Timestamp of most recent AIP stored"
)
dips_stored_timestamp = Gauge(
"mcpclient_dips_stored_timestamp", "Timestamp of most recent DIP stored"
)
aip_processing_time_histogram = Histogram(
"mcpclient_aip_processing_seconds",
"Histogram of AIP processing time, from first file recorded in DB to storage in SS",
buckets=PROCESSING_TIME_BUCKETS,
)
dip_processing_time_histogram = Histogram(
"mcpclient_dip_processing_seconds",
"Histogram of DIP processing time, from first file recorded in DB to storage in SS",
buckets=PROCESSING_TIME_BUCKETS,
)
aip_files_stored_histogram = Histogram(
"mcpclient_aip_files_stored",
"Histogram of number of files stored in AIPs. Note, this includes metadata, derivatives, etc.",
buckets=PACKAGE_FILE_COUNT_BUCKETS,
)
dip_files_stored_histogram = Histogram(
"mcpclient_dip_files_stored",
"Histogram of number of files stored in DIPs.",
buckets=PACKAGE_FILE_COUNT_BUCKETS,
)
aip_size_histogram = Histogram(
"mcpclient_aip_size_bytes",
"Histogram of number of bytes stored in AIPs. Note, this includes metadata, derivatives, etc.",
buckets=PACKAGE_SIZE_BUCKETS,
)
dip_size_histogram = Histogram(
"mcpclient_dip_size_bytes",
"Histogram of number of bytes stored in DIPs. Note, this includes metadata, derivatives, etc.",
buckets=PACKAGE_SIZE_BUCKETS,
)
# As we track over 1000 formats, the cardinality here is around 3000,
# well over the recommended maximum of about 100 label values for a
# Prometheus metric. This will break down if we start tracking many nodes.
aip_files_stored_by_file_group_and_format_counter = Counter(
"mcpclient_aip_files_stored_by_file_group_and_format_total",
"Number of original files stored in AIPs labeled by file group, format name.",
["file_group", "format_name"],
)
aip_original_file_timestamps_histogram = Histogram(
"mcpclient_aip_original_file_timestamps",
"Histogram of modification times for files stored in AIPs, bucketed by year",
buckets=[1970, 1980, 1990, 2005, 2010]
+ list(range(2015, datetime.date.today().year + 2))
+ [float("inf")],
)
archivematica_info = Info("archivematica_version", "Archivematica version info")
environment_info = Info("environment_variables", "Environment Variables")
# There's no central place to pull these constants from currently
FILE_GROUPS = ("original", "derivative", "metadata")
PACKAGE_FAILURE_TYPES = ("fail", "reject")
TRANSFER_TYPES = ("Standard", "Dataverse", "Dspace", "TRIM", "Maildir", "Unknown")
def skip_if_prometheus_disabled(func):
@functools.wraps(func)
def wrapper(*args, **kwds):
if settings.PROMETHEUS_ENABLED:
return func(*args, **kwds)
return None
return wrapper
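The guard decorator above turns every metrics function into a no-op when Prometheus is off. The same pattern can be sketched standalone; the settings object below is a stub standing in for Django's, so its name is illustrative:

```python
# Guard-decorator pattern: wrap a function so it only runs when a
# runtime flag is enabled, returning None otherwise.
import functools

class _Settings:
    PROMETHEUS_ENABLED = False

settings = _Settings()

def skip_if_prometheus_disabled(func):
    @functools.wraps(func)
    def wrapper(*args, **kwds):
        if settings.PROMETHEUS_ENABLED:
            return func(*args, **kwds)
        return None
    return wrapper

@skip_if_prometheus_disabled
def record_metric():
    return "recorded"

disabled_result = record_metric()    # None while the flag is off
settings.PROMETHEUS_ENABLED = True
enabled_result = record_metric()     # "recorded" once enabled
```

Checking the flag inside the wrapper (rather than at decoration time) means the setting is re-read on every call, so it can be toggled after import.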
def init_counter_labels():
# Zero our counters to start by initializing all labels. Non-zero starting
# points cause problems when measuring rates.
modules_config = configparser.RawConfigParser()
modules_config.read(settings.CLIENT_MODULES_FILE)
for script_name, _ in modules_config.items("supportedBatchCommands"):
job_counter.labels(script_name=script_name)
job_processed_timestamp.labels(script_name=script_name)
job_error_counter.labels(script_name=script_name)
job_error_timestamp.labels(script_name=script_name)
task_execution_time_histogram.labels(script_name=script_name)
for transfer_type in TRANSFER_TYPES:
transfer_started_counter.labels(transfer_type=transfer_type)
transfer_started_timestamp.labels(transfer_type=transfer_type)
transfer_completed_counter.labels(transfer_type=transfer_type)
transfer_completed_timestamp.labels(transfer_type=transfer_type)
transfer_files_histogram.labels(transfer_type=transfer_type)
transfer_size_histogram.labels(transfer_type=transfer_type)
for failure_type in PACKAGE_FAILURE_TYPES:
transfer_error_counter.labels(
transfer_type=transfer_type, failure_type=failure_type
)
transfer_error_timestamp.labels(
transfer_type=transfer_type, failure_type=failure_type
)
for failure_type in PACKAGE_FAILURE_TYPES:
sip_error_counter.labels(failure_type=failure_type)
sip_error_timestamp.labels(failure_type=failure_type)
for format_name in FormatVersion.objects.values_list("description", flat=True):
for file_group in FILE_GROUPS:
aip_files_stored_by_file_group_and_format_counter.labels(
file_group=file_group, format_name=format_name
)
@skip_if_prometheus_disabled
def start_prometheus_server():
init_counter_labels()
archivematica_info.info({"version": get_full_version()})
environment_info.info(os.environ)
return start_http_server(
settings.PROMETHEUS_BIND_PORT, addr=settings.PROMETHEUS_BIND_ADDRESS
)
@skip_if_prometheus_disabled
def job_completed(script_name):
job_counter.labels(script_name=script_name).inc()
job_processed_timestamp.labels(script_name=script_name).set_to_current_time()
@skip_if_prometheus_disabled
def job_failed(script_name):
job_counter.labels(script_name=script_name).inc()
job_error_counter.labels(script_name=script_name).inc()
job_error_timestamp.labels(script_name=script_name).set_to_current_time()
def _get_file_group(raw_file_group_use):
"""Convert one of the file group use values we know about into
the smaller subset that we track:
original -> original
metadata -> metadata
submissionDocumentation -> metadata
access -> derivative
thumbnail -> derivative
preservation -> derivative
aip -> derivative
"""
raw_file_group_use = raw_file_group_use.lower()
if raw_file_group_use == "original":
return "original"
elif raw_file_group_use in ("metadata", "submissiondocumentation"):
return "metadata"
else:
return "derivative"
@skip_if_prometheus_disabled
def aip_stored(sip_uuid, size):
aips_stored_counter.inc()
aips_stored_timestamp.set_to_current_time()
aip_size_histogram.observe(size)
try:
earliest_file = File.objects.filter(sip_id=sip_uuid).earliest("enteredsystem")
except File.DoesNotExist:
pass
else:
duration = (timezone.now() - earliest_file.enteredsystem).total_seconds()
aip_processing_time_histogram.observe(duration)
# We do two queries here, as we may not have format information for everything
total_file_count = File.objects.filter(sip_id=sip_uuid).count()
aip_files_stored_histogram.observe(total_file_count)
# TODO: This could probably benefit from batching with prefetches. Using just
# prefetches will likely break down with very large numbers of files.
for file_obj in (
File.objects.filter(sip_id=sip_uuid).exclude(filegrpuse="aip").iterator()
):
if file_obj.filegrpuse.lower() == "original" and file_obj.modificationtime:
aip_original_file_timestamps_histogram.observe(
file_obj.modificationtime.year
)
file_group = _get_file_group(file_obj.filegrpuse)
format_name = "Unknown"
format_version_m2m = (
FileFormatVersion.objects.select_related(
"format_version", "format_version__format"
)
.filter(file_uuid=file_obj.uuid)
.first()
)
if (
format_version_m2m
and format_version_m2m.format_version
and format_version_m2m.format_version.format
):
format_name = format_version_m2m.format_version.format.description
aip_files_stored_by_file_group_and_format_counter.labels(
file_group=file_group, format_name=format_name
).inc()
@skip_if_prometheus_disabled
def dip_stored(sip_uuid, size):
dips_stored_counter.inc()
dips_stored_timestamp.set_to_current_time()
dip_size_histogram.observe(size)
try:
earliest_file = File.objects.filter(sip_id=sip_uuid).earliest("enteredsystem")
except File.DoesNotExist:
pass
else:
duration = (timezone.now() - earliest_file.enteredsystem).total_seconds()
dip_processing_time_histogram.observe(duration)
file_count = File.objects.filter(sip_id=sip_uuid).count()
dip_files_stored_histogram.observe(file_count)
@skip_if_prometheus_disabled
def transfer_started(transfer_type):
if not transfer_type:
transfer_type = "Unknown"
transfer_started_counter.labels(transfer_type=transfer_type).inc()
transfer_started_timestamp.labels(transfer_type=transfer_type).set_to_current_time()
@skip_if_prometheus_disabled
def transfer_completed(transfer_uuid):
try:
transfer = Transfer.objects.get(uuid=transfer_uuid)
except Transfer.DoesNotExist:
return
transfer_type = transfer.type or "Unknown"
transfer_completed_counter.labels(transfer_type=transfer_type).inc()
transfer_completed_timestamp.labels(
transfer_type=transfer_type
).set_to_current_time()
file_queryset = File.objects.filter(transfer=transfer)
file_count = file_queryset.count()
transfer_files_histogram.labels(transfer_type=transfer_type).observe(file_count)
transfer_size = file_queryset.aggregate(total_size=Sum("size"))
transfer_size_histogram.labels(transfer_type=transfer_type).observe(
transfer_size["total_size"] or 0
)
@skip_if_prometheus_disabled
def transfer_failed(transfer_type, failure_type):
if not transfer_type:
transfer_type = "Unknown"
transfer_error_counter.labels(
transfer_type=transfer_type, failure_type=failure_type
).inc()
transfer_error_timestamp.labels(
transfer_type=transfer_type, failure_type=failure_type
).set_to_current_time()
@skip_if_prometheus_disabled
def sip_started():
sip_started_counter.inc()
sip_started_timestamp.set_to_current_time()
@skip_if_prometheus_disabled
def sip_failed(failure_type):
sip_error_counter.labels(failure_type=failure_type).inc()
sip_error_timestamp.labels(failure_type=failure_type).set_to_current_time()
|
gui | ctrlPanel | # =============================================================================
# Copyright (C) 2010 Diego Duclos
#
# This file is part of pyfa.
#
# pyfa is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# pyfa is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with pyfa. If not, see <http://www.gnu.org/licenses/>.
# =============================================================================
from collections import namedtuple
# noinspection PyPackageRequirements
import wx
from gui.bitmap_loader import BitmapLoader
from gui.contextMenu import ContextMenu
from gui.utils.inputs import FloatBox, FloatRangeBox
from service.const import GraphCacheCleanupReason
from service.fit import Fit
from .lists import SourceWrapperList, TargetWrapperList
from .vector import VectorPicker
InputData = namedtuple("InputData", ("handle", "unit", "value"))
InputBox = namedtuple("InputBox", ("handle", "unit", "textBox", "icon", "label"))
CheckBox = namedtuple("CheckBox", ("handle", "checkBox"))
_t = wx.GetTranslation
class GraphControlPanel(wx.Panel):
def __init__(self, graphFrame, parent):
super().__init__(parent)
self.graphFrame = graphFrame
self._mainInputBox = None
self._miscInputBoxes = []
self._inputCheckboxes = []
self._storedRanges = {}
self._storedConsts = {}
mainSizer = wx.BoxSizer(wx.VERTICAL)
optsSizer = wx.BoxSizer(wx.HORIZONTAL)
commonOptsSizer = wx.BoxSizer(wx.VERTICAL)
ySubSelectionSizer = wx.BoxSizer(wx.HORIZONTAL)
yText = wx.StaticText(self, wx.ID_ANY, _t("Axis Y:"))
ySubSelectionSizer.Add(yText, 0, wx.ALIGN_CENTER_VERTICAL | wx.RIGHT, 5)
self.ySubSelection = wx.Choice(self, wx.ID_ANY)
self.ySubSelection.Bind(wx.EVT_CHOICE, self.OnYTypeUpdate)
ySubSelectionSizer.Add(self.ySubSelection, 1, wx.EXPAND | wx.ALL, 0)
commonOptsSizer.Add(ySubSelectionSizer, 0, wx.EXPAND | wx.ALL, 0)
xSubSelectionSizer = wx.BoxSizer(wx.HORIZONTAL)
xText = wx.StaticText(self, wx.ID_ANY, _t("Axis X:"))
xSubSelectionSizer.Add(xText, 0, wx.ALIGN_CENTER_VERTICAL | wx.RIGHT, 5)
self.xSubSelection = wx.Choice(self, wx.ID_ANY)
self.xSubSelection.Bind(wx.EVT_CHOICE, self.OnXTypeUpdate)
xSubSelectionSizer.Add(self.xSubSelection, 1, wx.EXPAND | wx.ALL, 0)
commonOptsSizer.Add(xSubSelectionSizer, 0, wx.EXPAND | wx.TOP, 5)
self.showLegendCb = wx.CheckBox(
self, wx.ID_ANY, _t("Show legend"), wx.DefaultPosition, wx.DefaultSize, 0
)
self.showLegendCb.SetValue(True)
self.showLegendCb.Bind(wx.EVT_CHECKBOX, self.OnShowLegendChange)
commonOptsSizer.Add(self.showLegendCb, 0, wx.EXPAND | wx.TOP, 5)
self.showY0Cb = wx.CheckBox(
self,
wx.ID_ANY,
_t("Always show Y = 0"),
wx.DefaultPosition,
wx.DefaultSize,
0,
)
self.showY0Cb.SetValue(True)
self.showY0Cb.Bind(wx.EVT_CHECKBOX, self.OnShowY0Change)
commonOptsSizer.Add(self.showY0Cb, 0, wx.EXPAND | wx.TOP, 5)
optsSizer.Add(commonOptsSizer, 0, wx.EXPAND | wx.RIGHT, 10)
graphOptsSizer = wx.BoxSizer(wx.HORIZONTAL)
self.inputsSizer = wx.BoxSizer(wx.VERTICAL)
graphOptsSizer.Add(self.inputsSizer, 1, wx.EXPAND | wx.ALL, 0)
vectorSize = 90 if "wxGTK" in wx.PlatformInfo else 75
self.srcVectorSizer = wx.BoxSizer(wx.VERTICAL)
self.srcVectorLabel = wx.StaticText(self, wx.ID_ANY, "")
self.srcVectorSizer.Add(
self.srcVectorLabel, 0, wx.ALIGN_CENTER_HORIZONTAL | wx.BOTTOM, 5
)
self.srcVector = VectorPicker(
self, style=wx.NO_BORDER, size=vectorSize, offset=0
)
self.srcVector.Bind(VectorPicker.EVT_VECTOR_CHANGED, self.OnNonMainInputChanged)
self.srcVectorSizer.Add(
self.srcVector,
0,
wx.SHAPED | wx.ALIGN_CENTER_HORIZONTAL | wx.ALIGN_CENTER_VERTICAL | wx.ALL,
0,
)
graphOptsSizer.Add(self.srcVectorSizer, 0, wx.EXPAND | wx.LEFT, 15)
self.tgtVectorSizer = wx.BoxSizer(wx.VERTICAL)
self.tgtVectorLabel = wx.StaticText(self, wx.ID_ANY, "")
self.tgtVectorSizer.Add(
self.tgtVectorLabel, 0, wx.ALIGN_CENTER_HORIZONTAL | wx.BOTTOM, 5
)
self.tgtVector = VectorPicker(
self, style=wx.NO_BORDER, size=vectorSize, offset=0
)
self.tgtVector.Bind(VectorPicker.EVT_VECTOR_CHANGED, self.OnNonMainInputChanged)
self.tgtVectorSizer.Add(
self.tgtVector,
0,
wx.SHAPED | wx.ALIGN_CENTER_HORIZONTAL | wx.ALIGN_CENTER_VERTICAL | wx.ALL,
0,
)
graphOptsSizer.Add(self.tgtVectorSizer, 0, wx.EXPAND | wx.LEFT, 10)
optsSizer.Add(graphOptsSizer, 1, wx.EXPAND | wx.ALL, 0)
contextSizer = wx.BoxSizer(wx.VERTICAL)
savedFont = self.GetFont()
contextIconFont = wx.SystemSettings.GetFont(wx.SYS_DEFAULT_GUI_FONT)
contextIconFont.SetPointSize(8)
self.SetFont(contextIconFont)
self.contextIcon = wx.StaticText(
self, wx.ID_ANY, "\u2630", size=wx.Size((10, -1))
)
self.contextIcon.Bind(wx.EVT_CONTEXT_MENU, self.contextMenuHandler)
self.contextIcon.Bind(wx.EVT_LEFT_UP, self.contextMenuHandler)
self.SetFont(savedFont)
contextSizer.Add(self.contextIcon, 0, wx.EXPAND | wx.ALL, 0)
optsSizer.Add(contextSizer, 0, wx.EXPAND | wx.ALL, 0)
mainSizer.Add(optsSizer, 0, wx.EXPAND | wx.ALL, 10)
self.srcTgtSizer = wx.BoxSizer(wx.HORIZONTAL)
self.sourceList = SourceWrapperList(graphFrame, self)
self.sourceList.SetMinSize((270, -1))
self.srcTgtSizer.Add(self.sourceList, 1, wx.EXPAND | wx.ALL, 0)
self.targetList = TargetWrapperList(graphFrame, self)
self.targetList.SetMinSize((270, -1))
self.srcTgtSizer.Add(self.targetList, 1, wx.EXPAND | wx.LEFT, 10)
mainSizer.Add(
self.srcTgtSizer, 1, wx.EXPAND | wx.LEFT | wx.BOTTOM | wx.RIGHT, 10
)
self.SetSizer(mainSizer)
self.inputTimer = wx.Timer(self)
self.Bind(wx.EVT_TIMER, self.OnInputTimer, self.inputTimer)
self._setVectorDefaults()
def updateControls(self, layout=True):
if layout:
self.Freeze()
self._clearStoredValues()
view = self.graphFrame.getView()
self.refreshAxeLabels()
# Vectors
self._setVectorDefaults()
if view.srcVectorDef is not None:
self.srcVector.Show(True)
self.srcVectorLabel.Show(True)
self.srcVectorLabel.SetLabel(view.srcVectorDef.label)
else:
self.srcVector.Show(False)
self.srcVectorLabel.Show(False)
if view.tgtVectorDef is not None:
self.tgtVector.Show(True)
self.tgtVectorLabel.Show(True)
self.tgtVectorLabel.SetLabel(view.tgtVectorDef.label)
else:
self.tgtVector.Show(False)
self.tgtVectorLabel.Show(False)
# Source and target list
self.refreshColumns(layout=False)
self.targetList.Show(view.hasTargets)
# Inputs
self._updateInputs(storeInputs=False)
# Context icon
self.contextIcon.Show(
ContextMenu.hasMenu(self, None, None, (view.internalName,))
)
if layout:
self.graphFrame.Layout()
self.graphFrame.UpdateWindowSize()
self.Thaw()
def _updateInputs(self, storeInputs=True):
if storeInputs:
self._storeCurrentValues()
# Clean up old inputs
for inputBox in (self._mainInputBox, *self._miscInputBoxes):
if inputBox is None:
continue
for child in (inputBox.textBox, inputBox.icon, inputBox.label):
if child is not None:
child.Destroy()
for checkbox in self._inputCheckboxes:
checkbox.checkBox.Destroy()
self.inputsSizer.Clear()
self._mainInputBox = None
self._miscInputBoxes.clear()
self._inputCheckboxes.clear()
# Update vectors
view = self.graphFrame.getView()
handledHandles = set()
if view.srcVectorDef is not None:
self.__handleVector(
view.srcVectorDef,
self.srcVector,
handledHandles,
self.xType.mainInput[0],
)
if view.tgtVectorDef is not None:
self.__handleVector(
view.tgtVectorDef,
self.tgtVector,
handledHandles,
self.xType.mainInput[0],
)
# Update inputs
self.__addInputField(
view.inputMap[self.xType.mainInput], handledHandles, mainInput=True
)
for inputDef in view.inputs:
if inputDef.handle in handledHandles:
continue
self.__addInputField(inputDef, handledHandles)
# Add checkboxes
for checkboxDef in view.checkboxes:
if checkboxDef.handle in handledHandles:
continue
self.__addInputCheckbox(checkboxDef, handledHandles)
def __handleVector(self, vectorDef, vector, handledHandles, mainInputHandle):
handledHandles.add(vectorDef.lengthHandle)
handledHandles.add(vectorDef.angleHandle)
try:
storedLength = self._storedConsts[
(vectorDef.lengthHandle, vectorDef.lengthUnit)
]
except KeyError:
pass
else:
vector.SetLength(storedLength / 100)
try:
storedAngle = self._storedConsts[
(vectorDef.angleHandle, vectorDef.angleUnit)
]
except KeyError:
pass
else:
vector.SetAngle(storedAngle)
vector.SetDirectionOnly(vectorDef.lengthHandle == mainInputHandle)
def __addInputField(self, inputDef, handledHandles, mainInput=False):
if not self.__checkInputConditions(inputDef):
return
handledHandles.add(inputDef.handle)
fieldSizer = wx.BoxSizer(wx.HORIZONTAL)
tooltipText = (
inputDef.mainTooltip if mainInput else inputDef.secondaryTooltip
) or ""
if mainInput:
fieldTextBox = FloatRangeBox(
self,
self._storedRanges.get(
(inputDef.handle, inputDef.unit), inputDef.defaultRange
),
)
fieldTextBox.Bind(wx.EVT_TEXT, self.OnMainInputChanged)
else:
fieldTextBox = FloatBox(
self,
self._storedConsts.get(
(inputDef.handle, inputDef.unit), inputDef.defaultValue
),
)
fieldTextBox.Bind(wx.EVT_TEXT, self.OnNonMainInputChanged)
fieldTextBox.SetToolTip(wx.ToolTip(tooltipText))
fieldSizer.Add(fieldTextBox, 0, wx.EXPAND | wx.RIGHT, 5)
fieldIcon = None
if inputDef.iconID is not None:
icon = BitmapLoader.getBitmap(inputDef.iconID, "icons")
if icon is not None:
fieldIcon = wx.StaticBitmap(self)
fieldIcon.SetBitmap(icon)
fieldIcon.SetToolTip(wx.ToolTip(tooltipText))
fieldSizer.Add(fieldIcon, 0, wx.ALIGN_CENTER_VERTICAL | wx.RIGHT, 3)
fieldLabel = wx.StaticText(self, wx.ID_ANY, self.formatLabel(inputDef))
fieldLabel.SetToolTip(wx.ToolTip(tooltipText))
fieldSizer.Add(fieldLabel, 0, wx.ALIGN_CENTER_VERTICAL | wx.ALL, 0)
self.inputsSizer.Add(fieldSizer, 0, wx.EXPAND | wx.BOTTOM, 5)
# Store info about added input box
inputBox = InputBox(
handle=inputDef.handle,
unit=inputDef.unit,
textBox=fieldTextBox,
icon=fieldIcon,
label=fieldLabel,
)
if mainInput:
self._mainInputBox = inputBox
else:
self._miscInputBoxes.append(inputBox)
def __addInputCheckbox(self, checkboxDef, handledHandles):
if not self.__checkInputConditions(checkboxDef):
return
handledHandles.add(checkboxDef.handle)
fieldCheckbox = wx.CheckBox(
self, wx.ID_ANY, checkboxDef.label, wx.DefaultPosition, wx.DefaultSize, 0
)
fieldCheckbox.SetValue(
self._storedConsts.get((checkboxDef.handle, None), checkboxDef.defaultValue)
)
fieldCheckbox.Bind(wx.EVT_CHECKBOX, self.OnNonMainInputChanged)
self.inputsSizer.Add(fieldCheckbox, 0, wx.BOTTOM, 5)
# Store info about added checkbox
checkbox = CheckBox(handle=checkboxDef.handle, checkBox=fieldCheckbox)
self._inputCheckboxes.append(checkbox)
def __checkInputConditions(self, inputDef):
if not inputDef.conditions:
return True
selectedX = self.xType
selectedY = self.yType
for xCond, yCond in inputDef.conditions:
xMatch = True
yMatch = True
if xCond is not None:
xCondHandle, xCondUnit = xCond
xMatch = selectedX.handle == xCondHandle and selectedX.unit == xCondUnit
if yCond is not None:
yCondHandle, yCondUnit = yCond
yMatch = selectedY.handle == yCondHandle and selectedY.unit == yCondUnit
if xMatch and yMatch:
return True
return False
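The condition-matching logic above can be sketched as a standalone helper (the function name and tuple shapes are assumptions drawn from this snippet, not the project's actual API):

```python
def check_conditions(conditions, selected_x, selected_y):
    """Return True if any (x_cond, y_cond) pair matches the selected axes.

    conditions: iterable of (x_cond, y_cond) pairs, where each condition is
    either None (matches anything) or a (handle, unit) tuple.
    selected_x / selected_y: (handle, unit) tuples for the chosen axes.
    """
    if not conditions:
        # No conditions means the input is always shown.
        return True
    for x_cond, y_cond in conditions:
        x_match = x_cond is None or x_cond == selected_x
        y_match = y_cond is None or y_cond == selected_y
        if x_match and y_match:
            return True
    return False
```

For example, an input that should only appear when the X axis is ("distance", "km") would carry `conditions=[(("distance", "km"), None)]`.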
def refreshAxeLabels(self, restoreSelection=False):
view = self.graphFrame.getView()
if restoreSelection:
selectedY = self.ySubSelection.GetSelection()
selectedX = self.xSubSelection.GetSelection()
else:
selectedY = selectedX = 0
self.ySubSelection.Clear()
for yDef in view.yDefs:
if yDef.hidden and not self.graphFrame.includeHidden:
continue
self.ySubSelection.Append(self.formatLabel(yDef, selector=True), yDef)
self.ySubSelection.Enable(len(view.yDefs) > 1)
self.ySubSelection.SetSelection(selectedY)
self.xSubSelection.Clear()
for xDef in view.xDefs:
if xDef.hidden and not self.graphFrame.includeHidden:
continue
self.xSubSelection.Append(self.formatLabel(xDef, selector=True), xDef)
self.xSubSelection.Enable(len(view.xDefs) > 1)
self.xSubSelection.SetSelection(selectedX)
def refreshColumns(self, layout=True):
view = self.graphFrame.getView()
self.sourceList.refreshExtraColumns(view.srcExtraCols)
self.targetList.refreshExtraColumns(view.tgtExtraCols)
self.srcTgtSizer.Detach(self.sourceList)
self.srcTgtSizer.Detach(self.targetList)
self.srcTgtSizer.Add(
self.sourceList, self.sourceList.getWidthProportion(), wx.EXPAND | wx.ALL, 0
)
self.srcTgtSizer.Add(
self.targetList,
self.targetList.getWidthProportion(),
wx.EXPAND | wx.LEFT,
10,
)
self.Layout()
def OnShowLegendChange(self, event):
event.Skip()
self.graphFrame.draw()
def OnShowY0Change(self, event):
event.Skip()
self.graphFrame.draw()
def OnYTypeUpdate(self, event):
event.Skip()
self._updateInputs()
self.graphFrame.resetXMark()
self.graphFrame.Layout()
self.graphFrame.UpdateWindowSize()
self.graphFrame.draw()
def OnXTypeUpdate(self, event):
event.Skip()
self._updateInputs()
self.graphFrame.resetXMark()
self.graphFrame.Layout()
self.graphFrame.UpdateWindowSize()
self.graphFrame.draw()
def OnMainInputChanged(self, event):
event.Skip()
self.graphFrame.resetXMark()
self.inputTimer.Stop()
self.inputTimer.Start(
Fit.getInstance().serviceFittingOptions["marketSearchDelay"], True
)
def OnNonMainInputChanged(self, event):
event.Skip()
self.inputTimer.Stop()
self.inputTimer.Start(
Fit.getInstance().serviceFittingOptions["marketSearchDelay"], True
)
def OnInputTimer(self, event):
event.Skip()
self.graphFrame.clearCache(reason=GraphCacheCleanupReason.inputChanged)
self.graphFrame.draw()
def getValues(self):
view = self.graphFrame.getView()
misc = []
processedHandles = set()
def addMiscData(handle, unit, value):
if handle in processedHandles:
return
inputData = InputData(handle=handle, unit=unit, value=value)
misc.append(inputData)
# Main input box
main = InputData(
handle=self._mainInputBox.handle,
unit=self._mainInputBox.unit,
value=self._mainInputBox.textBox.GetValueRange(),
)
processedHandles.add(self._mainInputBox.handle)
# Vectors
srcVectorDef = view.srcVectorDef
if srcVectorDef is not None:
if not self.srcVector.IsDirectionOnly:
addMiscData(
handle=srcVectorDef.lengthHandle,
unit=srcVectorDef.lengthUnit,
value=self.srcVector.GetLength() * 100,
)
addMiscData(
handle=srcVectorDef.angleHandle,
unit=srcVectorDef.angleUnit,
value=self.srcVector.GetAngle(),
)
tgtVectorDef = view.tgtVectorDef
if tgtVectorDef is not None:
if not self.tgtVector.IsDirectionOnly:
addMiscData(
handle=tgtVectorDef.lengthHandle,
unit=tgtVectorDef.lengthUnit,
value=self.tgtVector.GetLength() * 100,
)
addMiscData(
handle=tgtVectorDef.angleHandle,
unit=tgtVectorDef.angleUnit,
value=self.tgtVector.GetAngle(),
)
# Other input boxes
for inputBox in self._miscInputBoxes:
addMiscData(
handle=inputBox.handle,
unit=inputBox.unit,
value=inputBox.textBox.GetValueFloat(),
)
# Checkboxes
for checkbox in self._inputCheckboxes:
addMiscData(
handle=checkbox.handle, unit=None, value=checkbox.checkBox.GetValue()
)
return main, misc
@property
def showLegend(self):
return self.showLegendCb.GetValue()
@property
def showY0(self):
return self.showY0Cb.GetValue()
@property
def yType(self):
return self.ySubSelection.GetClientData(self.ySubSelection.GetSelection())
@property
def xType(self):
return self.xSubSelection.GetClientData(self.xSubSelection.GetSelection())
@property
def sources(self):
return self.sourceList.wrappers
@property
def targets(self):
return self.targetList.wrappers
# Fit events
def OnFitRenamed(self, event):
self.sourceList.OnFitRenamed(event)
self.targetList.OnFitRenamed(event)
def OnFitChanged(self, event):
self.sourceList.OnFitChanged(event)
self.targetList.OnFitChanged(event)
def OnFitRemoved(self, event):
self.sourceList.OnFitRemoved(event)
self.targetList.OnFitRemoved(event)
# Target profile events
def OnProfileRenamed(self, event):
self.sourceList.OnProfileRenamed(event)
self.targetList.OnProfileRenamed(event)
def OnProfileChanged(self, event):
self.sourceList.OnProfileChanged(event)
self.targetList.OnProfileChanged(event)
def OnProfileRemoved(self, event):
self.sourceList.OnProfileRemoved(event)
self.targetList.OnProfileRemoved(event)
def OnResistModeChanged(self, event):
self.targetList.OnResistModeChanged(event)
def formatLabel(self, axisDef, selector=False):
label = axisDef.selectorLabel if selector else axisDef.label
if axisDef.unit is None:
return label
return "{}, {}".format(label, axisDef.unit)
def _storeCurrentValues(self):
main, misc = self.getValues()
if main is not None:
self._storedRanges[(main.handle, main.unit)] = main.value
for input in misc:
self._storedConsts[(input.handle, input.unit)] = input.value
def _clearStoredValues(self):
self._storedRanges.clear()
self._storedConsts.clear()
def _setVectorDefaults(self):
self.srcVector.SetValue(length=0, angle=90)
self.tgtVector.SetValue(length=1, angle=90)
def contextMenuHandler(self, event):
viewName = self.graphFrame.getView().internalName
menu = ContextMenu.getMenu(self, None, None, (viewName,))
if menu is not None:
self.PopupMenu(menu)
event.Skip()
|
packages | requests_file | """
Copyright 2015 Red Hat, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import errno
import io
import locale
import os
import os.path
import stat
import sys
from io import BytesIO
from urllib.parse import unquote, urljoin, urlparse
from requests import Response, codes
from requests.adapters import BaseAdapter
from streamlink.compat import is_win32
class FileAdapter(BaseAdapter):
def send(self, request, **kwargs):
""" Wraps a file, described in request, in a Response object.
:param request: The :class:`PreparedRequest` being "sent".
:returns: a Response object containing the file
"""
# Check that the method makes sense. Only support GET and HEAD
if request.method not in ("GET", "HEAD"):
raise ValueError(f"Invalid request method {request.method}")
# Parse the URL
url_parts = urlparse(request.url)
# Make the Windows URLs slightly nicer
if is_win32 and url_parts.netloc.endswith(":"):
url_parts = url_parts._replace(path=f"/{url_parts.netloc}{url_parts.path}", netloc="")
# Reject URLs with a hostname component
if url_parts.netloc and url_parts.netloc not in ("localhost", ".", "..", "-"):
raise ValueError("file: URLs with hostname components are not permitted")
# If the path is relative update it to be absolute
if url_parts.netloc in (".", ".."):
pwd = os.path.abspath(url_parts.netloc).replace(os.sep, "/") + "/"
if is_win32:
# prefix the path with a / in Windows
pwd = f"/{pwd}"
url_parts = url_parts._replace(path=urljoin(pwd, url_parts.path.lstrip("/")))
resp = Response()
resp.url = request.url
# Open the file, translate certain errors into HTTP responses
# Use urllib's unquote to translate percent escapes into whatever
# they actually need to be
try:
# If the netloc is - then read from stdin
if url_parts.netloc == "-":
resp.raw = sys.stdin.buffer
# make a fake response URL, the current directory
resp.url = "file://" + os.path.abspath(".").replace(os.sep, "/") + "/"
else:
# Split the path on / (the URL directory separator) and decode any
# % escapes in the parts
path_parts = [unquote(p) for p in url_parts.path.split('/')]
# Strip out the leading empty parts created from the leading /'s
while path_parts and not path_parts[0]:
path_parts.pop(0)
# If os.sep is in any of the parts, someone fed us some shenanigans.
# Treat it like a missing file.
if any(os.sep in p for p in path_parts):
raise IOError(errno.ENOENT, os.strerror(errno.ENOENT))
# Look for a drive component. If one is present, store it separately
# so that a directory separator can correctly be added to the real
# path, and remove any empty path parts between the drive and the path.
# Assume that a part ending with : or | (legacy) is a drive.
if path_parts and (path_parts[0].endswith('|') or path_parts[0].endswith(':')):
path_drive = path_parts.pop(0)
if path_drive.endswith('|'):
path_drive = f"{path_drive[:-1]}:"
while path_parts and not path_parts[0]:
path_parts.pop(0)
else:
path_drive = ''
# Try to put the path back together
# Join the drive back in, and stick os.sep in front of the path to
# make it absolute.
path = path_drive + os.sep + os.path.join(*path_parts)
# Check if the drive assumptions above were correct. If path_drive
# is set, and os.path.splitdrive does not return a drive, it wasn't
# really a drive. Put the path together again treating path_drive
# as a normal path component.
if path_drive and not os.path.splitdrive(path)[0]:
path = os.sep + os.path.join(path_drive, *path_parts)
# Use io.open so that a release_conn method can be attached to the
# returned file object below.
resp.raw = io.open(path, "rb")
resp.raw.release_conn = resp.raw.close
except IOError as e:
if e.errno == errno.EACCES:
resp.status_code = codes.forbidden
elif e.errno == errno.ENOENT:
resp.status_code = codes.not_found
else:
resp.status_code = codes.bad_request
# Wrap the error message in a file-like object
# The error message will be localized, try to convert the string
# representation of the exception into a byte stream
resp_str = str(e).encode(locale.getpreferredencoding(False))
resp.raw = BytesIO(resp_str)
resp.headers['Content-Length'] = len(resp_str)
# Add release_conn to the BytesIO object
resp.raw.release_conn = resp.raw.close
else:
resp.status_code = codes.ok
# If it's a regular file, set the Content-Length
resp_stat = os.fstat(resp.raw.fileno())
if stat.S_ISREG(resp_stat.st_mode):
resp.headers['Content-Length'] = resp_stat.st_size
return resp
def close(self):
pass
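The heart of the adapter is turning a percent-encoded `file://` URL into a local path. Below is a minimal, POSIX-only sketch of that decoding step (the function name is illustrative; it skips the Windows drive-letter and netloc handling performed above):

```python
from urllib.parse import unquote, urlparse

def file_url_to_posix_path(url):
    """Decode a simple file:// URL into an absolute POSIX path."""
    # Split on the URL directory separator and undo %-escapes per part,
    # mirroring the per-component unquoting in FileAdapter.send().
    parts = [unquote(p) for p in urlparse(url).path.split("/")]
    # Drop leading empty parts created by the leading "/".
    while parts and not parts[0]:
        parts.pop(0)
    return "/" + "/".join(parts)

print(file_url_to_posix_path("file:///tmp/hello%20world.txt"))  # /tmp/hello world.txt
```

Decoding each component separately (rather than unquoting the whole path at once) is what lets the adapter detect smuggled separators like an encoded `%2F`-free `os.sep` inside a single part.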
|
clientScripts | index_aip | #!/usr/bin/env python
import os
import sys
import traceback
from glob import glob
import django
import elasticSearchFunctions
import identifier_functions
import storageService as storage_service
from custom_handlers import get_script_logger
from main.models import UnitVariable
# dashboard
# archivematicaCommon
django.setup()
from django.conf import settings as mcpclient_settings
logger = get_script_logger("archivematica.mcp.client.indexAIP")
def get_identifiers(job, sip_path):
"""Get additional identifiers to index."""
identifiers = []
# MODS
mods_paths = glob(f"{sip_path}/submissionDocumentation/**/mods/*.xml")
for mods in mods_paths:
identifiers.extend(identifier_functions.extract_identifiers_from_mods(mods))
# Islandora identifier
islandora_path = glob(f"{sip_path}/submissionDocumentation/**/*-METS.xml")
for mets in islandora_path:
identifiers.extend(identifier_functions.extract_identifier_from_islandora(mets))
job.pyprint("Indexing additional identifiers", identifiers)
return identifiers
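Note that `glob` only expands `**` across nested directories when called with `recursive=True`; without it (as above), `**` behaves like `*` and matches exactly one path component, which suffices when the `mods/` directory sits one level under `submissionDocumentation/`. A quick stdlib demonstration:

```python
import os
import tempfile
from glob import glob

# Build a small tree: <base>/a/b/mods/x.xml (two levels deep)
base = tempfile.mkdtemp()
mods_dir = os.path.join(base, "a", "b", "mods")
os.makedirs(mods_dir)
open(os.path.join(mods_dir, "x.xml"), "w").close()

# Without recursive=True, "**" matches a single component, so the
# two-levels-deep file is missed; with it, "**" spans any depth.
shallow = glob(f"{base}/**/mods/*.xml")
deep = glob(f"{base}/**/mods/*.xml", recursive=True)
```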
def index_aip(job):
"""Write AIP information to ElasticSearch."""
sip_uuid = job.args[1] # %SIPUUID%
sip_name = job.args[2] # %SIPName%
sip_staging_path = job.args[3] # %SIPDirectory%
sip_type = job.args[4] # %SIPType%
aip_location = job.args[5] # %AIPsStore%
if "aips" not in mcpclient_settings.SEARCH_ENABLED:
logger.info("Skipping indexing: AIPs indexing is currently disabled.")
return 0
location_description = storage_service.retrieve_storage_location_description(
aip_location, logger
)
elasticSearchFunctions.setup_reading_from_conf(mcpclient_settings)
client = elasticSearchFunctions.get_client()
aip_info = storage_service.get_file_info(uuid=sip_uuid)
job.pyprint("AIP info:", aip_info)
aip_info = aip_info[0]
mets_staging_path = os.path.join(sip_staging_path, f"METS.{sip_uuid}.xml")
identifiers = get_identifiers(job, sip_staging_path)
# If this is an AIC, find the number of AIPs stored in it and index that
aips_in_aic = None
if sip_type == "AIC":
try:
uv = UnitVariable.objects.get(
unittype="SIP", unituuid=sip_uuid, variable="AIPsinAIC"
)
aips_in_aic = uv.variablevalue
except UnitVariable.DoesNotExist:
pass
# Delete ES index before creating new one if reingesting
if "REIN" in sip_type:
job.pyprint(
"Deleting outdated entry for AIP and AIP files with UUID",
sip_uuid,
"from archival storage",
)
elasticSearchFunctions.delete_aip(client, sip_uuid)
elasticSearchFunctions.delete_aip_files(client, sip_uuid)
job.pyprint("Indexing AIP and AIP files")
# Even though we treat MODS identifiers as SIP-level, we need to index them
# here because the archival storage tab actually searches on the
# aips/aipfile index.
ret = elasticSearchFunctions.index_aip_and_files(
client=client,
uuid=sip_uuid,
aip_stored_path=aip_info["current_full_path"],
mets_staging_path=mets_staging_path,
name=sip_name,
aip_size=aip_info["size"],
aips_in_aic=aips_in_aic,
identifiers=identifiers,
encrypted=aip_info["encrypted"],
location=location_description,
printfn=job.pyprint,
)
if ret == 1:
job.pyprint("Error indexing AIP and AIP files", file=sys.stderr)
return ret
def filter_status_code(status_code):
"""Force a successful status code.
When ``INDEX_AIP_CONTINUE_ON_ERROR`` is enabled, the user wants processing
of the package to continue at all costs. To achieve that, we return exit
code 179 - this ensures that the job is marked as failing while the
processing is not interrupted.
"""
if mcpclient_settings.INDEX_AIP_CONTINUE_ON_ERROR and status_code > 0:
status_code = 179
return status_code
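The exit-code-179 convention described in the docstring can be exercised in isolation; here is a sketch with the settings object stubbed out (the stub class and injected parameter are illustrative only):

```python
class _StubSettings:
    """Stand-in for mcpclient_settings when testing the logic alone."""
    INDEX_AIP_CONTINUE_ON_ERROR = True

def filter_status_code(status_code, settings=_StubSettings):
    # Same logic as above, with the settings dependency injected.
    if settings.INDEX_AIP_CONTINUE_ON_ERROR and status_code > 0:
        return 179
    return status_code
```

With the flag enabled, any non-zero status collapses to 179 (job marked as failing, processing continues); with it disabled, codes pass through unchanged.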
def call(jobs):
for job in jobs:
with job.JobContext(logger=logger):
try:
status_code = index_aip(job)
except Exception as err:
# We want to capture any exception so ``filter_status_code``
# makes the last call on what is the status returned.
status_code = 1
job.print_error(repr(err))
job.print_error(traceback.format_exc())
job.set_status(filter_status_code(status_code))
|
plugins | limiter | # -*- coding: utf-8 -*-
"""
This file is part of the Stargate project, Copyright Stargate Team
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation version 3 of the License.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
"""
from sglib.lib.translate import _
from sgui.widgets import *
from .util import get_screws
SG_LIM_THRESHOLD = 0
SG_LIM_CEILING = 1
SG_LIM_RELEASE = 2
SG_LIM_UI_MSG_ENABLED = 3
SG_LIM_PORT_MAP = {}
STYLESHEET = """\
QWidget#plugin_window{
background: qlineargradient(
x1: 0, y1: 0, x2: 1, y2: 1,
stop: 0 #383637, stop: 1 #2b2b2b
);
background-image: url({{ PLUGIN_ASSETS_DIR }}/limiter/logo.svg);
background-position: left;
background-repeat: no-repeat;
border: none;
}
QComboBox{
background: qlineargradient(
x1: 0, y1: 0, x2: 0, y2: 1,
stop: 0 #6a6a6a, stop: 0.5 #828282, stop: 1 #6a6a6a
);
border: 1px solid #222222;
border-radius: 6px;
color: #cccccc;
}
QLabel#plugin_name_label,
QLabel#plugin_value_label{
background: none;
color: #cccccc;
}
"""
class LimiterPluginUI(AbstractPluginUI):
def __init__(self, *args, **kwargs):
AbstractPluginUI.__init__(
self,
*args,
stylesheet=STYLESHEET,
**kwargs,
)
self.widget.setFixedHeight(100)
self._plugin_name = "SG Limiter"
self.is_instrument = False
self.preset_manager = None
self.main_hlayout = QHBoxLayout()
left_screws = get_screws()
self.main_hlayout.addLayout(left_screws)
self.main_hlayout.addItem(
QSpacerItem(1, 1, QSizePolicy.Policy.Expanding),
)
self.layout.addLayout(self.main_hlayout)
f_knob_size = DEFAULT_KNOB_SIZE
self.groupbox_gridlayout = QGridLayout()
self.main_hlayout.addLayout(self.groupbox_gridlayout)
knob_kwargs = {
"arc_brush": QColor("#02cad5"),
"arc_bg_brush": QColor("#021425"),
"arc_width_pct": 12.0,
"fg_svg": None,
"bg_svg": None,
}
self.thresh_knob = knob_control(
f_knob_size,
_("Thresh"),
SG_LIM_THRESHOLD,
self.plugin_rel_callback,
self.plugin_val_callback,
-360,
0,
0,
KC_TENTH,
self.port_dict,
self.preset_manager,
knob_kwargs=knob_kwargs,
tooltip="The threshold to begin limiting the sound",
)
self.thresh_knob.add_to_grid_layout(self.groupbox_gridlayout, 3)
self.ceiling_knob = knob_control(
f_knob_size,
_("Ceiling"),
SG_LIM_CEILING,
self.plugin_rel_callback,
self.plugin_val_callback,
-180,
0,
0,
KC_TENTH,
self.port_dict,
self.preset_manager,
knob_kwargs=knob_kwargs,
tooltip=(
"This knob will adjust auto-gain to keep the sound at \n"
"approximately this level"
),
)
self.ceiling_knob.add_to_grid_layout(self.groupbox_gridlayout, 7)
self.release_knob = knob_control(
f_knob_size,
_("Release"),
SG_LIM_RELEASE,
self.plugin_rel_callback,
self.plugin_val_callback,
50,
1500,
500,
KC_INTEGER,
self.port_dict,
self.preset_manager,
knob_kwargs=knob_kwargs,
tooltip=(
"Release in milliseconds. Higher values result in\n"
"smoother sounds, lower values are perceived as louder\n"
"but may introduce unwanted artifacts to the sound"
),
)
self.release_knob.add_to_grid_layout(self.groupbox_gridlayout, 22)
peak_gradient = QLinearGradient(0.0, 0.0, 0.0, 100.0)
peak_gradient.setColorAt(0.0, QColor("#cc2222"))
peak_gradient.setColorAt(0.3, QColor("#cc2222"))
peak_gradient.setColorAt(0.6, QColor("#8877bb"))
peak_gradient.setColorAt(1.0, QColor("#7777cc"))
self.peak_meter = peak_meter(
16,
False,
invert=True,
brush=peak_gradient,
)
self.main_hlayout.addItem(
QSpacerItem(1, 1, QSizePolicy.Policy.Expanding),
)
self.main_hlayout.addWidget(self.peak_meter.widget)
right_screws = get_screws()
self.main_hlayout.addLayout(right_screws)
self.ui_msg_enabled = null_control(
SG_LIM_UI_MSG_ENABLED,
self.plugin_rel_callback,
self.plugin_val_callback,
0,
self.port_dict,
)
self.open_plugin_file()
self.set_midi_learn(SG_LIM_PORT_MAP)
self.enable_ui_msg(True)
def widget_close(self):
self.enable_ui_msg(False)
AbstractPluginUI.widget_close(self)
def widget_show(self):
self.enable_ui_msg(True)
def enable_ui_msg(self, a_enabled):
if a_enabled:
self.plugin_val_callback(SG_LIM_UI_MSG_ENABLED, 1.0)
else:
self.plugin_val_callback(SG_LIM_UI_MSG_ENABLED, 0.0)
def ui_message(self, a_name, a_value):
if a_name == "gain":
self.peak_meter.set_value([a_value] * 2)
else:
AbstractPluginUI.ui_message(self, a_name, a_value)
def save_plugin_file(self):
# Don't allow the peak meter to run at startup
self.ui_msg_enabled.set_value(0)
AbstractPluginUI.save_plugin_file(self)
|
classes | version | """
@file
@brief This file gets the current version of openshot from the openshot.org website
@author Jonathan Thomas <jonathan@openshot.org>
@section LICENSE
Copyright (c) 2008-2018 OpenShot Studios, LLC
(http://www.openshotstudios.com). This file is part of
OpenShot Video Editor (http://www.openshot.org), an open-source project
dedicated to delivering high quality video editing and animation solutions
to the world.
OpenShot Video Editor is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
OpenShot Video Editor is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with OpenShot Library. If not, see <http://www.gnu.org/licenses/>.
"""
import threading
import requests
from classes import info
from classes.app import get_app
from classes.logger import log
def get_current_Version():
"""Fetch the current version in a background thread"""
t = threading.Thread(target=get_version_from_http, daemon=True)
t.start()
def get_version_from_http():
"""Get the current version number from openshot.org"""
url = "http://www.openshot.org/version/json/"
# Send metric HTTP data
try:
r = requests.get(
url, headers={"user-agent": "openshot-qt-%s" % info.VERSION}, verify=False
)
data = r.json()
log.info("Found current version: %s" % data)
# Parse version
openshot_version = data.get("openshot_version")
info.ERROR_REPORT_STABLE_VERSION = openshot_version
info.ERROR_REPORT_RATE_STABLE = data.get("error_rate_stable")
info.ERROR_REPORT_RATE_UNSTABLE = data.get("error_rate_unstable")
info.TRANS_REPORT_RATE_STABLE = data.get("trans_rate_stable")
info.TRANS_REPORT_RATE_UNSTABLE = data.get("trans_rate_unstable")
# Emit signal for the UI
get_app().window.FoundVersionSignal.emit(openshot_version)
except Exception as ex:
log.error("Failed to get version from: %s (%s)" % (url, ex))
|
plugins | spec | # -*- coding: utf-8 -*-
"""
flaskbb.plugins.spec
~~~~~~~~~~~~~~~~~~~~~~~
This module provides the core FlaskBB plugin hook definitions
:copyright: (c) 2017 by the FlaskBB Team.
:license: BSD, see LICENSE for more details.
"""
from pluggy import HookspecMarker
spec = HookspecMarker("flaskbb")
# Setup Hooks
@spec
def flaskbb_extensions(app):
"""Hook for initializing any plugin loaded extensions."""
@spec
def flaskbb_load_translations():
"""Hook for registering translation folders."""
@spec
def flaskbb_load_migrations():
"""Hook for registering additional migrations."""
@spec
def flaskbb_load_blueprints(app):
"""Hook for registering blueprints.
:param app: The application object.
"""
@spec
def flaskbb_request_processors(app):
"""Hook for registering pre/post request processors.
:param app: The application object.
"""
@spec
def flaskbb_errorhandlers(app):
"""Hook for registering error handlers.
:param app: The application object.
"""
@spec
def flaskbb_jinja_directives(app):
"""Hook for registering jinja filters, context processors, etc.
:param app: The application object.
"""
@spec
def flaskbb_additional_setup(app, pluggy):
"""Hook for any additional setup a plugin wants to do after all other
application setup has finished.
For example, you could apply a WSGI middleware::
@impl
def flaskbb_additional_setup(app):
app.wsgi_app = ProxyFix(app.wsgi_app)
:param app: The application object.
:param pluggy: The pluggy object.
"""
@spec
def flaskbb_load_post_markdown_class(app):
"""
Hook for loading a mistune renderer child class in order to render
markdown on posts and user signatures. All classes returned by this hook
will be composed into a single class to render markdown for posts.
Since all classes will be composed together, child classes should call
super as appropriate and not add any new arguments to `__init__` since the
class will be instantiated with predetermined arguments.
Example::
class YellingRenderer(mistune.Renderer):
def paragraph(self, text):
return super(YellingRenderer, self).paragraph(text.upper())
@impl
def flaskbb_load_post_markdown_class():
return YellingRenderer
:param app: The application object associated with the class if needed
:type app: Flask
"""
@spec
def flaskbb_load_nonpost_markdown_class(app):
"""
Hook for loading a mistune renderer child class in order to render
markdown in locations other than posts, for example in category or
forum descriptions. All classes returned by this hook will be composed into
a single class to render markdown for nonpost content (e.g. forum and
category descriptions).
Since all classes will be composed together, child classes should call
super as appropriate and not add any new arguments to `__init__` since the
class will be instantiated with predetermined arguments.
Example::
class YellingRenderer(mistune.Renderer):
def paragraph(self, text):
return super(YellingRenderer, self).paragraph(text.upper())
@impl
def flaskbb_load_nonpost_markdown_class():
return YellingRenderer
:param app: The application object associated with the class if needed
:type app: Flask
"""
@spec
def flaskbb_load_post_markdown_plugins(plugins, app):
"""
Hook for loading mistune renderer plugins used when rendering markdown on
posts and user signatures. Implementations should modify the `plugins`
list directly.
Example of adding plugins::
from mistune.plugins import plugin_abbr, plugin_table
@impl
def flaskbb_load_post_markdown_plugins(plugins):
# add the built-in mistune table and abbr plugins
plugins.extend([plugin_abbr, plugin_table])
Example of removing plugins::
from flaskbb.markup import plugin_userify
@impl
def flaskbb_load_post_markdown_plugins(plugins):
try:
# remove the FlaskBB user mention link plugin
plugins.remove(plugin_userify)
except ValueError:
# other FlaskBB plugins might beat you to removing a plugin,
# which is not an error. You should not raise an exception in
# this case.
pass
:param plugins: List of mistune plugins to load.
:type plugins: list
:param app: The application object.
:type app: Flask
.. seealso::
https://mistune.readthedocs.io/en/v2.0.2/advanced.html#create-plugins
Mistune plugin documentation.
:data:`~flaskbb.markup.plugin_userify`
FlaskBB-provided plugin that links user mentions to their profiles.
:data:`~flaskbb.markup.DEFAULT_PLUGINS`
List of plugins loaded by default.
:func:`flaskbb_load_nonpost_markdown_plugins`
Hook to modify the list of plugins for markdown rendering in
non-post areas.
"""
@spec
def flaskbb_load_nonpost_markdown_plugins(plugins, app):
"""
Hook for loading mistune renderer plugins used when rendering markdown in
locations other than posts, for example in category or forum
descriptions. Implementations should modify the `plugins` list directly.
See :func:`flaskbb_load_post_markdown_plugins` for more details.
:param plugins: List of mistune plugins to load.
:type plugins: list
:param app: The application object.
:type app: Flask
"""
@spec
def flaskbb_cli(cli, app):
"""Hook for registering CLI commands.
For example::
@impl
def flaskbb_cli(cli):
@cli.command()
def testplugin():
click.echo("Hello Testplugin")
return testplugin
:param app: The application object.
:param cli: The FlaskBBGroup CLI object.
"""
@spec
def flaskbb_shell_context():
"""Hook for registering shell context handlers
Expected to return a single callable function that returns a dictionary or
iterable of key value pairs.
"""
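A minimal implementation of this hook might look like the following (the returned `answer` binding is purely illustrative; a real plugin would also decorate the function with its `@impl` marker):

```python
def flaskbb_shell_context():
    """Expose extra names in the interactive shell session."""
    def shell_context():
        # Hypothetical objects to make available in the shell;
        # a real plugin might return {"db": db, "User": User}.
        return {"answer": 42}
    return shell_context
```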
# Event hooks
@spec
def flaskbb_event_post_save_before(post):
"""Hook for handling a post before it has been saved.
:param flaskbb.forum.models.Post post: The post which triggered the event.
"""
@spec
def flaskbb_event_post_save_after(post, is_new):
"""Hook for handling a post after it has been saved.
:param flaskbb.forum.models.Post post: The post which triggered the event.
:param bool is_new: True if the post is new, False if it is an edit.
"""
@spec
def flaskbb_event_topic_save_before(topic):
"""Hook for handling a topic before it has been saved.
:param flaskbb.forum.models.Topic topic: The topic which triggered the
event.
"""
@spec
def flaskbb_event_topic_save_after(topic, is_new):
"""Hook for handling a topic after it has been saved.
:param flaskbb.forum.models.Topic topic: The topic which triggered the
event.
:param bool is_new: True if the topic is new, False if it is an edit.
"""
# TODO(anr): When pluggy 1.0 is released, mark this spec deprecated
@spec
def flaskbb_event_user_registered(username):
"""Hook for handling events after a user is registered
.. warning::
This hook is deprecated in favor of
:func:`~flaskbb.plugins.spec.flaskbb_registration_post_processor`
:param username: The username of the newly registered user.
"""
@spec
def flaskbb_gather_registration_validators():
"""
Hook for gathering user registration validators, implementers must return
a callable that accepts a
:class:`~flaskbb.core.auth.registration.UserRegistrationInfo` and raises
a :class:`~flaskbb.core.exceptions.ValidationError` if the registration
is invalid or :class:`~flaskbb.core.exceptions.StopValidation` if
validation of the registration should end immediately.
Example::
def cannot_be_named_fred(user_info):
if user_info.username.lower() == 'fred':
raise ValidationError(('username', 'Cannot name user fred'))
@impl
def flaskbb_gather_registration_validators():
return [cannot_be_named_fred]
.. note::
This is implemented as a hook that returns callables since the
callables are designed to raise exceptions that are aggregated to
form the failure message for the registration response.
See Also: :class:`~flaskbb.core.auth.registration.UserValidator`
"""
@spec
def flaskbb_registration_failure_handler(user_info, failures):
"""
Hook for dealing with user registration failures, receives the info
that user attempted to register with as well as the errors that failed
the registration.
Example::
from .utils import fuzz_username
def has_already_registered(failures):
return any(
attr == "username" and "already registered" in msg
for (attr, msg) in failures
)
def suggest_alternate_usernames(user_info, failures):
if has_already_registered(failures):
suggestions = fuzz_username(user_info.username)
failures.append(("username", "Try: {}".format(suggestions)))
@impl
def flaskbb_registration_failure_handler(user_info, failures):
suggest_alternate_usernames(user_info, failures)
See Also: :class:`~flaskbb.core.auth.registration.RegistrationFailureHandler`
""" # noqa
@spec
def flaskbb_registration_post_processor(user):
"""
Hook for handling actions after a user has successfully registered. This
spec receives the user object after it has been successfully persisted
to the database.
Example::
def greet_user(user):
flash(_("Thanks for registering {}".format(user.username)))
@impl
def flaskbb_registration_post_processor(user):
greet_user(user)
See Also: :class:`~flaskbb.core.auth.registration.RegistrationPostProcessor`
""" # noqa
@spec(firstresult=True)
def flaskbb_authenticate(identifier, secret):
"""Hook for authenticating users in FlaskBB.
This hook should return either an instance of
:class:`flaskbb.user.models.User` or None.
If a hook decides that all attempts for authentication
should end, it may raise a
:class:`flaskbb.core.exceptions.StopAuthentication`
and include a reason why authentication was stopped.
Only the first User result will be used, and the default FlaskBB
authentication is tried last so that other implementations get a
chance to authenticate the user first.
See also:
:class:`AuthenticationProvider<flaskbb.core.auth.AuthenticationProvider>`
Example of alternative auth::
def ldap_auth(identifier, secret):
"basic ldap example with imaginary ldap library"
user_dn = "uid={},ou=flaskbb,dc=flaskbb,dc=org"
try:
ldap.bind(user_dn, secret)
return User.query.join(
UserLDAP
).filter(
UserLDAP.dn==user_dn
).with_entities(User).one()
except Exception:
return None
@impl
def flaskbb_authenticate(identifier, secret):
return ldap_auth(identifier, secret)
Example of ending authentication::
def prevent_login_with_too_many_failed_attempts(identifier):
user = User.query.filter(
db.or_(
User.username == identifier,
User.email == identifier
)
).first()
if user is not None:
if has_too_many_failed_logins(user):
raise StopAuthentication(_(
"Your account is temporarily locked due to too many"
" login attempts"
))
@impl(tryfirst=True)
def flaskbb_authenticate(identifier, secret):
prevent_login_with_too_many_failed_attempts(identifier)
"""
@spec
def flaskbb_post_authenticate(user):
"""Hook for handling actions that occur after a user is
authenticated but before setting them as the current user.
This could be used to handle MFA. Note, however, that these calls
are blocking, which should be taken into account.
Responses from this hook are not considered at all. If a hook
should need to prevent the user from logging in, it should
register itself as tryfirst and raise a
:class:`flaskbb.core.exceptions.StopAuthentication`
and include why the login was prevented.
See also:
:class:`PostAuthenticationHandler<flaskbb.core.auth.PostAuthenticationHandler>`
Example::
def post_auth(user):
today = utcnow()
if is_anniversary(today, user.date_joined):
flash(_("Happy registerversary!"))
@impl
def flaskbb_post_authenticate(user):
post_auth(user)
"""
@spec
def flaskbb_authentication_failed(identifier):
"""Hook for handling authentication failure events.
This hook will only be called when no authentication
providers successfully return a user or a
:class:`flaskbb.core.exceptions.StopAuthentication`
is raised during the login process.
See also:
:class:`AuthenticationFailureHandler<flaskbb.core.auth.AuthenticationFailureHandler>`
Example::
def mark_failed_logins(identifier):
user = User.query.filter(
db.or_(
User.username == identifier,
User.email == identifier
)
).first()
if user is not None:
if user.login_attempts is None:
user.login_attempts = 1
else:
user.login_attempts += 1
user.last_failed_login = utcnow()
"""
@spec(firstresult=True)
def flaskbb_reauth_attempt(user, secret):
"""Hook for handling reauth in FlaskBB
These hooks receive the currently authenticated user
and the entered secret. Only the first response from
this hook is considered -- similar to the authenticate
hooks. A successful attempt should return True, otherwise
None for an unsuccessful or untried reauth from an
implementation. Reauth will be considered a failure if
no implementation return True.
If a hook decides that a reauthenticate attempt should
cease, it may raise StopAuthentication.
See also:
:class:`ReauthenticateProvider<flaskbb.core.auth.ReauthenticateProvider>`
Example of checking secret or passing to the next implementer::
@impl
def flaskbb_reauth_attempt(user, secret):
if check_password(user.password, secret):
return True
Example of forcefully ending reauth::
@impl
def flaskbb_reauth_attempt(user, secret):
if user.login_attempts > 5:
raise StopAuthentication(
_("Too many failed authentication attempts")
)
"""
@spec
def flaskbb_post_reauth(user):
"""Hook called after successfully reauthenticating.
These hooks are called after a user has passed the flaskbb_reauth_attempt
hooks but before their reauth is confirmed, so a post-reauth implementer
may still force a reauth to fail by raising StopAuthentication.
Results from these hooks are not considered.
See also:
:class:`PostReauthenticateHandler<flaskbb.core.auth.PostReauthenticateHandler>`
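Example (``record_security_event`` is a hypothetical audit helper,
not part of FlaskBB)::
@impl
def flaskbb_post_reauth(user):
record_security_event("reauth", user_id=user.id)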
"""
@spec
def flaskbb_reauth_failed(user):
"""Hook called if a reauth fails.
These hooks will only be called if no implementation
for flaskbb_reauth_attempt returns a True result or if
an implementation raises StopAuthentication.
If an implementation raises ForceLogout it should register
itself as trylast to give other reauth failure handlers an
opportunity to run first.
See also:
:class:`ReauthenticateFailureHandler<flaskbb.core.auth.ReauthenticateFailureHandler>`
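Example (a sketch that mirrors the failed-login counter used for
authentication failures)::
@impl(trylast=True)
def flaskbb_reauth_failed(user):
user.login_attempts = (user.login_attempts or 0) + 1
user.last_failed_login = utcnow()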
"""
# Form hooks
@spec
def flaskbb_form_post(form):
"""Hook for modifying the :class:`~flaskbb.forum.forms.ReplyForm`.
For example::
@impl
def flaskbb_form_post(form):
form.example = TextField("Example Field", validators=[
DataRequired(message="This field is required"),
Length(min=3, max=50)])
:param form: The :class:`~flaskbb.forum.forms.ReplyForm` class.
"""
@spec
def flaskbb_form_post_save(form, post):
"""Hook for modifying the :class:`~flaskbb.forum.forms.ReplyForm`.
This hook is called while populating the post object with
the data from the form. The post object will be saved after the hook
call.
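For example, to copy a custom field added in ``flaskbb_form_post``
onto the post before it is saved (``example`` is a hypothetical
field)::
@impl
def flaskbb_form_post_save(form, post):
post.example = form.example.data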
:param form: The form object.
:param post: The post object.
"""
@spec
def flaskbb_form_topic(form):
"""Hook for modifying the :class:`~flaskbb.forum.forms.NewTopicForm`
For example::
@impl
def flaskbb_form_topic(form):
form.example = TextField("Example Field", validators=[
DataRequired(message="This field is required"),
Length(min=3, max=50)])
:param form: The :class:`~flaskbb.forum.forms.NewTopicForm` class.
"""
@spec
def flaskbb_form_topic_save(form, topic):
"""Hook for modifying the :class:`~flaskbb.forum.forms.NewTopicForm`.
This hook is called while populating the topic object with
the data from the form. The topic object will be saved after the hook
call.
:param form: The form object.
:param topic: The topic object.
"""
@spec
def flaskbb_form_registration(form):
"""
Hook for modifying the :class:`~flaskbb.auth.forms.RegisterForm`.
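For example (``invite_code`` is a hypothetical field)::
@impl
def flaskbb_form_registration(form):
form.invite_code = TextField("Invite Code", validators=[
DataRequired(message="This field is required")])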
:param form: The form class
"""
@spec
def flaskbb_gather_password_validators(app):
"""
Hook for gathering :class:`~flaskbb.core.changesets.ChangeSetValidator`
instances specialized for handling :class:`~flaskbb.core.user.update.PasswordUpdate`
This hook should return an iterable::
class NotLongEnough(ChangeSetValidator):
def __init__(self, min_length):
self._min_length = min_length
def validate(self, model, changeset):
if len(changeset.new_password) < self._min_length:
raise ValidationError(
"new_password",
"Password must be at least {} characters ".format(
self._min_length
)
)
@impl
def flaskbb_gather_password_validators(app):
return [NotLongEnough(app.config['MIN_PASSWORD_LENGTH'])]
:param app: The current application
"""
@spec
def flaskbb_gather_email_validators(app):
"""
Hook for gathering :class:`~flaskbb.core.changesets.ChangeSetValidator`
instances specialized for :class:`~flaskbb.core.user.update.EmailUpdate`.
This hook should return an iterable::
class BlackListedEmailProviders(ChangeSetValidator):
def __init__(self, black_list):
self._black_list = black_list
def validate(self, model, changeset):
provider = changeset.new_email.split('@')[1]
if provider in self._black_list:
raise ValidationError(
"new_email",
"{} is a black listed email provider".format(provider)
)
@impl
def flaskbb_gather_email_validators(app):
return [BlackListedEmailProviders(app.config["EMAIL_PROVIDER_BLACK_LIST"])]
:param app: The current application
"""
@spec
def flaskbb_gather_details_update_validators(app):
"""
Hook for gathering :class:`~flaskbb.core.changesets.ChangeSetValidator`
instances specialized for :class:`~flaskbb.core.user.update.UserDetailsChange`.
This hook should return an iterable::
class DontAllowImageSignatures(ChangeSetValidator):
def __init__(self, renderer):
self._renderer = renderer
def validate(self, model, changeset):
rendered = self._renderer.render(changeset.signature)
if '<img' in rendered:
raise ValidationError("signature", "No images allowed in signature")
@impl
def flaskbb_gather_details_update_validators(app):
renderer = app.pluggy.hook.flaskbb_load_nonpost_markdown_class()
return [DontAllowImageSignatures(renderer())]
:param app: The current application
"""
@spec
def flaskbb_details_updated(user, details_update):
"""
Hook for responding to a user updating their details. This hook is called
after the details update has been persisted.
See also :class:`~flaskbb.core.changesets.ChangeSetPostProcessor`
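Example (a sketch assuming the change set exposes a ``location``
attribute)::
@impl
def flaskbb_details_updated(user, details_update):
if details_update.location:
flash(_("Enjoy {}!".format(details_update.location)))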
:param user: The user whose details have been updated.
:param details_update: The details change set applied to the user.
"""
@spec
def flaskbb_password_updated(user):
"""
Hook for responding to a user updating their password. This hook is called
after the password change has been persisted::
@impl
def flaskbb_password_updated(user):
send_email(
"Password changed",
[user.email],
text_body=...,
html_body=...
)
See also :class:`~flaskbb.core.changesets.ChangeSetPostProcessor`
:param user: The user that updated their password.
"""
@spec
def flaskbb_email_updated(user, email_update):
"""
Hook for responding to a user updating their email. This hook is called after
the email change has been persisted::
@impl
def flaskbb_email_updated(user, email_update):
send_email(
"Email changed",
[email_update.old_email],
text_body=...,
html_body=...
)
See also :class:`~flaskbb.core.changesets.ChangeSetPostProcessor`.
:param user: The user whose email was updated.
:param email_update: The change set applied to the user.
"""
@spec
def flaskbb_settings_updated(user, settings_update):
"""
Hook for responding to a user updating their settings. This hook is called after
the settings change has been persisted.
See also :class:`~flaskbb.core.changesets.ChangeSetPostProcessor`
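Example (``logger`` is a hypothetical module-level logger)::
@impl
def flaskbb_settings_updated(user, settings_update):
logger.info("%s updated their settings", user.username)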
:param user: The user whose settings have been updated.
:param settings_update: The settings change set applied to the user.
"""
# Template Hooks
@spec
def flaskbb_tpl_navigation_before():
"""Hook for registering additional navigation items.
in :file:`templates/layout.html`.
"""
@spec
def flaskbb_tpl_navigation_after():
"""Hook for registering additional navigation items.
in :file:`templates/layout.html`.
"""
@spec
def flaskbb_tpl_user_nav_loggedin_before():
"""Hook for registering additional user navigational items
which are only shown when a user is logged in.
in :file:`templates/layout.html`.
"""
@spec
def flaskbb_tpl_user_nav_loggedin_after():
"""Hook for registering additional user navigational items
which are only shown when a user is logged in.
in :file:`templates/layout.html`.
"""
@spec
def flaskbb_tpl_form_registration_before(form):
"""This hook is emitted in the Registration form **before** the first
input field but after the hidden CSRF token field.
in :file:`templates/auth/register.html`.
:param form: The form object.
"""
@spec
def flaskbb_tpl_form_registration_after(form):
"""This hook is emitted in the Registration form **after** the last
input field but before the submit field.
in :file:`templates/auth/register.html`.
:param form: The form object.
"""
@spec
def flaskbb_tpl_form_user_details_before(form):
"""This hook is emitted in the Change User Details form **before** an
input field is rendered.
in :file:`templates/user/change_user_details.html`.
:param form: The form object.
"""
@spec
def flaskbb_tpl_form_user_details_after(form):
"""This hook is emitted in the Change User Details form **after** the last
input field has been rendered but before the submit field.
in :file:`templates/user/change_user_details.html`.
:param form: The form object.
"""
@spec
def flaskbb_tpl_profile_settings_menu(user):
"""This hook is emitted on the user settings page in order to populate the
side bar menu. Implementations of this hook should return a list of tuples
that are view name and display text. The display text will be provided to
the translation service so it is unnecessary to supply translated text.
A plugin can declare a new block by setting the view to None. If this is
done, consider marking the hook implementation with `trylast=True` to
avoid capturing plugins that do not create new blocks.
For example::
@impl(trylast=True)
def flaskbb_tpl_profile_settings_menu(user):
return [
(None, 'Account Settings'),
('user.settings', 'General Settings'),
('user.change_user_details', 'Change User Details'),
('user.change_email', 'Change E-Mail Address'),
('user.change_password', 'Change Password')
]
Hookwrappers for this spec should not be registered as FlaskBB
supplies its own hookwrapper to flatten all the lists into a single list.
in :file:`templates/user/settings_layout.html`
.. versionchanged:: 2.1.0
Added the user parameter. This is typically the current user,
but that is not guaranteed.
:param user: The user the settings menu is being rendered for.
"""
@spec
def flaskbb_tpl_profile_sidebar_links(user):
"""
This hook is emitted on the user profile page in order to populate the
sidebar menu. Implementations of this hook should return an iterable of
:class:`~flaskbb.display.navigation.NavigationItem` instances::
@impl
def flaskbb_tpl_profile_sidebar_links(user):
return [
NavigationLink(
endpoint="user.profile",
name=_("Overview"),
icon="fa fa-home",
urlforkwargs={"username": user.username},
),
NavigationLink(
endpoint="user.view_all_topics",
name=_("Topics"),
icon="fa fa-comments",
urlforkwargs={"username": user.username},
),
NavigationLink(
endpoint="user.view_all_posts",
name=_("Posts"),
icon="fa fa-comment",
urlforkwargs={"username": user.username},
),
]
.. warning::
Hookwrappers for this spec should not be registered as FlaskBB registers
its own hook wrapper to flatten all the results into a single list.
.. versionadded:: 2.1
:param user: The user the profile page belongs to.
"""
@spec
def flaskbb_tpl_admin_settings_menu(user):
"""This hook is emitted in the admin panel and used to add additional
navigation links to the admin menu.
Implementations of this hook should return a list of tuples
that are view name, display text and optionally an icon.
The display text will be provided to the translation service so it
is unnecessary to supply translated text.
For example::
@impl(trylast=True)
def flaskbb_tpl_admin_settings_menu(user):
# only add this item if the user is an admin
if Permission(IsAdmin, identity=current_user):
return [
("myplugin.foobar", "Foobar", "fa fa-foobar")
]
Hookwrappers for this spec should not be registered as FlaskBB
supplies its own hookwrapper to flatten all the lists into a single list.
in :file:`templates/management/management_layout.html`
:param user: The current user object.
"""
@spec
def flaskbb_tpl_admin_settings_sidebar(user):
"""This hook is emitted in the admin panels setting tab and used
to add additional navigation links to the sidebar settings menu.
Implementations of this hook should return a list of tuples
that are view name and display text.
The display text will be provided to the translation service so it
is unnecessary to supply translated text.
For example::
@impl(trylast=True)
def flaskbb_tpl_admin_settings_sidebar(user):
return [
("myplugin.foobar", "Foobar")
]
Only admins can view the Settings tab.
Hookwrappers for this spec should not be registered as FlaskBB
supplies its own hookwrapper to flatten all the lists into a single list.
in :file:`templates/management/settings.html`
:param user: The current user object.
"""
@spec
def flaskbb_tpl_profile_sidebar_stats(user):
"""This hook is emitted on the users profile page below the standard
information. For example, it can be used to add additional items
such as a link to the profile.
in :file:`templates/user/profile_layout.html`
:param user: The user object for whom the profile is currently visited.
"""
@spec
def flaskbb_tpl_post_author_info_before(user, post):
"""This hook is emitted before the information about the
author of a post is displayed (but after the username).
in :file:`templates/forum/topic.html`
:param user: The user object of the post's author.
:param post: The post object.
"""
@spec
def flaskbb_tpl_post_author_info_after(user, post):
"""This hook is emitted after the information about the
author of a post is displayed (but after the username).
in :file:`templates/forum/topic.html`
:param user: The user object of the post's author.
:param post: The post object.
"""
@spec
def flaskbb_tpl_post_content_before(post):
"""Hook to do some stuff before the post content is rendered.
in :file:`templates/forum/topic.html`
:param post: The current post object.
"""
@spec
def flaskbb_tpl_post_content_after(post):
"""Hook to do some stuff after the post content is rendered.
in :file:`templates/forum/topic.html`
:param post: The current post object.
"""
@spec
def flaskbb_tpl_post_menu_before(post):
"""Hook for inserting a new item at the beginning of the post menu.
in :file:`templates/forum/topic.html`
:param post: The current post object.
"""
@spec
def flaskbb_tpl_post_menu_after(post):
"""Hook for inserting a new item at the end of the post menu.
in :file:`templates/forum/topic.html`
:param post: The current post object.
"""
@spec
def flaskbb_tpl_topic_controls(topic):
"""Hook for inserting additional topic moderation controls.
in :file:`templates/forum/topic_controls.html`
:param topic: The current topic object.
"""
@spec
def flaskbb_tpl_form_new_post_before(form):
"""Hook for inserting a new form field before the first field is
rendered.
For example::
@impl
def flaskbb_tpl_form_new_post_before(form):
return render_template_string(
\"""
<div class="form-group">
<div class="col-md-12 col-sm-12 col-xs-12">
<label>{{ form.example.label.text }}</label>
{{ form.example(class="form-control",
placeholder=form.example.label.text) }}
{%- for error in form.example.errors -%}
<span class="help-block">{{error}}</span>
{%- endfor -%}
</div>
</div>
\"""
in :file:`templates/forum/new_post.html`
:param form: The form object.
"""
@spec
def flaskbb_tpl_form_new_post_after(form):
"""Hook for inserting a new form field after the last field is
rendered (but before the submit field).
in :file:`templates/forum/new_post.html`
:param form: The form object.
"""
@spec
def flaskbb_tpl_form_new_topic_before(form):
"""Hook for inserting a new form field before the first field is
rendered (but before the CSRF token).
in :file:`templates/forum/new_topic.html`
:param form: The form object.
"""
@spec
def flaskbb_tpl_form_new_topic_after(form):
"""Hook for inserting a new form field after the last field is
rendered (but before the submit button).
in :file:`templates/forum/new_topic.html`
:param form: The form object.
"""
|
CoreVideo | _metadata | # This file is generated by objective.metadata
#
# Last update: Fri Oct 19 09:50:50 2012
import sys
import objc
if sys.maxsize > 2**32:
def sel32or64(a, b):
return b
else:
def sel32or64(a, b):
return a
if sys.byteorder == "little":
def littleOrBig(a, b):
return a
else:
def littleOrBig(a, b):
return b
misc = {}
misc.update(
{
"CVTimeStamp": objc.createStructType(
"CVTimeStamp",
sel32or64(
b"{_CVTimeStamp=IiqQdq{CVSMPTETime=ssLLLssss}QQ}",
b"{_CVTimeStamp=IiqQdq{CVSMPTETime=ssIIIssss}QQ}",
),
[
"version",
"videoTimeScale",
"videoTime",
"hostTime",
"rateScalar",
"videoRefreshPeriod",
"smpteTime",
"flags",
"reserved",
],
),
"CVPlanarPixelBufferInfo_YCbCrBiPlanar": objc.createStructType(
"CVPlanarPixelBufferInfo_YCbCrBiPlanar",
b"{CVPlanarPixelBufferInfo_YCbCrBiPlanar={CVPlanarComponentInfo=iI}{CVPlanarComponentInfo=iI}}",
[],
),
"CVPlanarPixelBufferInfo_YCbCrPlanar": objc.createStructType(
"CVPlanarPixelBufferInfo_YCbCrPlanar",
b"{CVPlanarPixelBufferInfo_YCbCrPlanar={CVPlanarComponentInfo=iI}{CVPlanarComponentInfo=iI}{CVPlanarComponentInfo=iI}}",
["componentInfoY", "componentInfoCb", "componentInfoCr"],
),
"CVPlanarComponentInfo": objc.createStructType(
"CVPlanarComponentInfo",
b"{CVPlanarComponentInfo=iI}",
["offset", "rowBytes"],
),
"CVTime": objc.createStructType(
"CVTime", b"{_CVTime=qii}", ["timeValue", "timeScale", "flags"]
),
"CVSMPTETime": objc.createStructType(
"CVSMPTETime",
sel32or64(b"{CVSMPTETime=ssLLLssss}", b"{CVSMPTETime=ssIIIssss}"),
[
"subframes",
"subframeDivisor",
"counter",
"type",
"flags",
"hours",
"minutes",
"seconds",
"frames",
],
),
"CVPlanarPixelBufferInfo": objc.createStructType(
"CVPlanarPixelBufferInfo",
b"{CVPlanarPixelBufferInfo=[1{CVPlanarComponentInfo=iI}]}",
["componentInfo"],
),
}
)
constants = """$kCVBufferMovieTimeKey@^{__CFString=}$kCVBufferNonPropagatedAttachmentsKey@^{__CFString=}$kCVBufferPropagatedAttachmentsKey@^{__CFString=}$kCVBufferTimeScaleKey@^{__CFString=}$kCVBufferTimeValueKey@^{__CFString=}$kCVImageBufferCGColorSpaceKey@^{__CFString=}$kCVImageBufferChromaLocationBottomFieldKey@^{__CFString=}$kCVImageBufferChromaLocationTopFieldKey@^{__CFString=}$kCVImageBufferChromaLocation_Bottom@^{__CFString=}$kCVImageBufferChromaLocation_BottomLeft@^{__CFString=}$kCVImageBufferChromaLocation_Center@^{__CFString=}$kCVImageBufferChromaLocation_DV420@^{__CFString=}$kCVImageBufferChromaLocation_Left@^{__CFString=}$kCVImageBufferChromaLocation_Top@^{__CFString=}$kCVImageBufferChromaLocation_TopLeft@^{__CFString=}$kCVImageBufferChromaSubsamplingKey@^{__CFString=}$kCVImageBufferChromaSubsampling_411@^{__CFString=}$kCVImageBufferChromaSubsampling_420@^{__CFString=}$kCVImageBufferChromaSubsampling_422@^{__CFString=}$kCVImageBufferCleanApertureHeightKey@^{__CFString=}$kCVImageBufferCleanApertureHorizontalOffsetKey@^{__CFString=}$kCVImageBufferCleanApertureKey@^{__CFString=}$kCVImageBufferCleanApertureVerticalOffsetKey@^{__CFString=}$kCVImageBufferCleanApertureWidthKey@^{__CFString=}$kCVImageBufferColorPrimariesKey@^{__CFString=}$kCVImageBufferColorPrimaries_EBU_3213@^{__CFString=}$kCVImageBufferColorPrimaries_ITU_R_709_2@^{__CFString=}$kCVImageBufferColorPrimaries_P22@^{__CFString=}$kCVImageBufferColorPrimaries_SMPTE_C@^{__CFString=}$kCVImageBufferDisplayDimensionsKey@^{__CFString=}$kCVImageBufferDisplayHeightKey@^{__CFString=}$kCVImageBufferDisplayWidthKey@^{__CFString=}$kCVImageBufferFieldCountKey@^{__CFString=}$kCVImageBufferFieldDetailKey@^{__CFString=}$kCVImageBufferFieldDetailSpatialFirstLineEarly@^{__CFString=}$kCVImageBufferFieldDetailSpatialFirstLineLate@^{__CFString=}$kCVImageBufferFieldDetailTemporalBottomFirst@^{__CFString=}$kCVImageBufferFieldDetailTemporalTopFirst@^{__CFString=}$kCVImageBufferGammaLevelKey@^{__CFString=}$kCVImageBufferICC
ProfileKey@^{__CFString=}$kCVImageBufferPixelAspectRatioHorizontalSpacingKey@^{__CFString=}$kCVImageBufferPixelAspectRatioKey@^{__CFString=}$kCVImageBufferPixelAspectRatioVerticalSpacingKey@^{__CFString=}$kCVImageBufferPreferredCleanApertureKey@^{__CFString=}$kCVImageBufferTransferFunctionKey@^{__CFString=}$kCVImageBufferTransferFunction_EBU_3213@^{__CFString=}$kCVImageBufferTransferFunction_ITU_R_709_2@^{__CFString=}$kCVImageBufferTransferFunction_SMPTE_240M_1995@^{__CFString=}$kCVImageBufferTransferFunction_SMPTE_C@^{__CFString=}$kCVImageBufferTransferFunction_UseGamma@^{__CFString=}$kCVImageBufferYCbCrMatrixKey@^{__CFString=}$kCVImageBufferYCbCrMatrix_ITU_R_601_4@^{__CFString=}$kCVImageBufferYCbCrMatrix_ITU_R_709_2@^{__CFString=}$kCVImageBufferYCbCrMatrix_SMPTE_240M_1995@^{__CFString=}$kCVIndefiniteTime@{_CVTime=qii}$kCVOpenGLBufferHeight@^{__CFString=}$kCVOpenGLBufferInternalFormat@^{__CFString=}$kCVOpenGLBufferMaximumMipmapLevel@^{__CFString=}$kCVOpenGLBufferPoolMaximumBufferAgeKey@^{__CFString=}$kCVOpenGLBufferPoolMinimumBufferCountKey@^{__CFString=}$kCVOpenGLBufferTarget@^{__CFString=}$kCVOpenGLBufferWidth@^{__CFString=}$kCVOpenGLTextureCacheChromaSamplingModeAutomatic@^{__CFString=}$kCVOpenGLTextureCacheChromaSamplingModeBestPerformance@^{__CFString=}$kCVOpenGLTextureCacheChromaSamplingModeHighestQuality@^{__CFString=}$kCVOpenGLTextureCacheChromaSamplingModeKey@^{__CFString=}$kCVPixelBufferBytesPerRowAlignmentKey@^{__CFString=}$kCVPixelBufferCGBitmapContextCompatibilityKey@^{__CFString=}$kCVPixelBufferCGImageCompatibilityKey@^{__CFString=}$kCVPixelBufferExtendedPixelsBottomKey@^{__CFString=}$kCVPixelBufferExtendedPixelsLeftKey@^{__CFString=}$kCVPixelBufferExtendedPixelsRightKey@^{__CFString=}$kCVPixelBufferExtendedPixelsTopKey@^{__CFString=}$kCVPixelBufferHeightKey@^{__CFString=}$kCVPixelBufferIOSurfaceCoreAnimationCompatibilityKey@^{__CFString=}$kCVPixelBufferIOSurfaceOpenGLESFBOCompatibilityKey@^{__CFString=}$kCVPixelBufferIOSurfaceOpenGLESTextureCompatibi
lityKey@^{__CFString=}$kCVPixelBufferIOSurfaceOpenGLFBOCompatibilityKey@^{__CFString=}$kCVPixelBufferIOSurfaceOpenGLTextureCompatibilityKey@^{__CFString=}$kCVPixelBufferIOSurfacePropertiesKey@^{__CFString=}$kCVPixelBufferMemoryAllocatorKey@^{__CFString=}$kCVPixelBufferOpenGLCompatibilityKey@^{__CFString=}$kCVPixelBufferOpenGLESCompatibilityKey@^{__CFString=}$kCVPixelBufferPixelFormatTypeKey@^{__CFString=}$kCVPixelBufferPlaneAlignmentKey@^{__CFString=}$kCVPixelBufferPoolAllocationThresholdKey@^{__CFString=}$kCVPixelBufferPoolFreeBufferNotification@^{__CFString=}$kCVPixelBufferPoolMaximumBufferAgeKey@^{__CFString=}$kCVPixelBufferPoolMinimumBufferCountKey@^{__CFString=}$kCVPixelBufferWidthKey@^{__CFString=}$kCVPixelFormatBitsPerBlock@^{__CFString=}$kCVPixelFormatBlackBlock@^{__CFString=}$kCVPixelFormatBlockHeight@^{__CFString=}$kCVPixelFormatBlockHorizontalAlignment@^{__CFString=}$kCVPixelFormatBlockVerticalAlignment@^{__CFString=}$kCVPixelFormatBlockWidth@^{__CFString=}$kCVPixelFormatCGBitmapContextCompatibility@^{__CFString=}$kCVPixelFormatCGBitmapInfo@^{__CFString=}$kCVPixelFormatCGImageCompatibility@^{__CFString=}$kCVPixelFormatCodecType@^{__CFString=}$kCVPixelFormatConstant@^{__CFString=}$kCVPixelFormatContainsAlpha@^{__CFString=}$kCVPixelFormatFillExtendedPixelsCallback@^{__CFString=}$kCVPixelFormatFourCC@^{__CFString=}$kCVPixelFormatHorizontalSubsampling@^{__CFString=}$kCVPixelFormatName@^{__CFString=}$kCVPixelFormatOpenGLCompatibility@^{__CFString=}$kCVPixelFormatOpenGLESCompatibility@^{__CFString=}$kCVPixelFormatOpenGLFormat@^{__CFString=}$kCVPixelFormatOpenGLInternalFormat@^{__CFString=}$kCVPixelFormatOpenGLType@^{__CFString=}$kCVPixelFormatPlanes@^{__CFString=}$kCVPixelFormatQDCompatibility@^{__CFString=}$kCVPixelFormatVerticalSubsampling@^{__CFString=}$kCVZeroTime@{_CVTime=qii}$"""
enums = """$kCVAttachmentMode_ShouldNotPropagate@0$kCVAttachmentMode_ShouldPropagate@1$kCVPixelBufferLock_ReadOnly@1$kCVPixelFormatType_16BE555@16$kCVPixelFormatType_16BE565@1110783541$kCVPixelFormatType_16Gray@1647392359$kCVPixelFormatType_16LE555@1278555445$kCVPixelFormatType_16LE5551@892679473$kCVPixelFormatType_16LE565@1278555701$kCVPixelFormatType_1IndexedGray_WhiteIsZero@33$kCVPixelFormatType_1Monochrome@1$kCVPixelFormatType_24BGR@842285639$kCVPixelFormatType_24RGB@24$kCVPixelFormatType_2Indexed@2$kCVPixelFormatType_2IndexedGray_WhiteIsZero@34$kCVPixelFormatType_30RGB@1378955371$kCVPixelFormatType_32ABGR@1094862674$kCVPixelFormatType_32ARGB@32$kCVPixelFormatType_32AlphaGray@1647522401$kCVPixelFormatType_32BGRA@1111970369$kCVPixelFormatType_32RGBA@1380401729$kCVPixelFormatType_420YpCbCr8BiPlanarFullRange@875704422$kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange@875704438$kCVPixelFormatType_420YpCbCr8Planar@2033463856$kCVPixelFormatType_420YpCbCr8PlanarFullRange@1714696752$kCVPixelFormatType_422YpCbCr10@1983000880$kCVPixelFormatType_422YpCbCr16@1983000886$kCVPixelFormatType_422YpCbCr8@846624121$kCVPixelFormatType_422YpCbCr8FullRange@2037741158$kCVPixelFormatType_422YpCbCr8_yuvs@2037741171$kCVPixelFormatType_422YpCbCr_4A_8BiPlanar@1630697081$kCVPixelFormatType_4444AYpCbCr16@2033463606$kCVPixelFormatType_4444AYpCbCr8@2033463352$kCVPixelFormatType_4444YpCbCrA8@1983131704$kCVPixelFormatType_4444YpCbCrA8R@1916022840$kCVPixelFormatType_444YpCbCr10@1983131952$kCVPixelFormatType_444YpCbCr8@1983066168$kCVPixelFormatType_48RGB@1647589490$kCVPixelFormatType_4Indexed@4$kCVPixelFormatType_4IndexedGray_WhiteIsZero@36$kCVPixelFormatType_64ARGB@1647719521$kCVPixelFormatType_8Indexed@8$kCVPixelFormatType_8IndexedGray_WhiteIsZero@40$kCVPixelFormatType_OneComponent8@1278226488$kCVPixelFormatType_TwoComponent8@843264056$kCVReturnAllocationFailed@-6662$kCVReturnDisplayLinkAlreadyRunning@-6671$kCVReturnDisplayLinkCallbacksNotSet@-6673$kCVReturnDisplayLinkNotRunning@-6672$kCVReturnErr
or@-6660$kCVReturnFirst@-6660$kCVReturnInvalidArgument@-6661$kCVReturnInvalidDisplay@-6670$kCVReturnInvalidPixelBufferAttributes@-6682$kCVReturnInvalidPixelFormat@-6680$kCVReturnInvalidPoolAttributes@-6691$kCVReturnInvalidSize@-6681$kCVReturnLast@-6699$kCVReturnPixelBufferNotOpenGLCompatible@-6683$kCVReturnPoolAllocationFailed@-6690$kCVReturnSuccess@0$kCVReturnWouldExceedAllocationThreshold@-6689$kCVSMPTETimeRunning@2$kCVSMPTETimeType24@0$kCVSMPTETimeType25@1$kCVSMPTETimeType2997@4$kCVSMPTETimeType2997Drop@5$kCVSMPTETimeType30@3$kCVSMPTETimeType30Drop@2$kCVSMPTETimeType5994@7$kCVSMPTETimeType60@6$kCVSMPTETimeValid@1$kCVTimeIsIndefinite@1$kCVTimeStampBottomField@131072$kCVTimeStampHostTimeValid@2$kCVTimeStampIsInterlaced@196608$kCVTimeStampRateScalarValid@16$kCVTimeStampSMPTETimeValid@4$kCVTimeStampTopField@65536$kCVTimeStampVideoHostTimeValid@3$kCVTimeStampVideoRefreshPeriodValid@8$kCVTimeStampVideoTimeValid@1$"""
misc.update({})
functions = {
"CVImageBufferGetEncodedSize": (
sel32or64(b"{CGSize=ff}^{__CVBuffer=}", b"{CGSize=dd}^{__CVBuffer=}"),
),
"CVOpenGLTextureRelease": (b"v^{__CVBuffer=}",),
"CVPixelBufferPoolRelease": (b"v^{__CVPixelBufferPool=}",),
"CVPixelBufferPoolGetTypeID": (sel32or64(b"L", b"Q"),),
"CVPixelBufferCreate": (
sel32or64(
b"i^{__CFAllocator=}LLL^{__CFDictionary=}^^{__CVBuffer=}",
b"i^{__CFAllocator=}QQI^{__CFDictionary=}^^{__CVBuffer=}",
),
"",
{"retval": {"already_cfretained": True}},
),
"CVOpenGLBufferPoolGetTypeID": (sel32or64(b"L", b"Q"),),
"CVPixelBufferFillExtendedPixels": (b"i^{__CVBuffer=}",),
"CVOpenGLTextureCacheRetain": (
b"^{__CVOpenGLTextureCache=}^{__CVOpenGLTextureCache=}",
),
"CVOpenGLBufferPoolCreateOpenGLBuffer": (
b"i^{__CFAllocator=}^{__CVOpenGLBufferPool=}^^{__CVBuffer=}",
"",
{
"retval": {"already_cfretained": True},
"arguments": {2: {"already_cfretained": True, "type_modifier": "o"}},
},
),
"CVDisplayLinkSetCurrentCGDisplay": (b"i^{__CVDisplayLink=}I",),
"CVBufferSetAttachment": (b"v^{__CVBuffer=}^{__CFString=}@I",),
"CVGetCurrentHostTime": (b"Q",),
"CVPixelBufferPoolCreate": (
b"i^{__CFAllocator=}^{__CFDictionary=}^{__CFDictionary=}^^{__CVPixelBufferPool=}",
"",
{
"retval": {"already_cfretained": True},
"arguments": {3: {"already_cfretained": True, "type_modifier": "o"}},
},
),
"CVPixelBufferGetHeightOfPlane": (
sel32or64(b"L^{__CVBuffer=}L", b"Q^{__CVBuffer=}Q"),
),
"CVBufferRetain": (b"^{__CVBuffer=}^{__CVBuffer=}",),
"CVDisplayLinkTranslateTime": (
sel32or64(
b"i^{__CVDisplayLink=}^{_CVTimeStamp=IiqQdq{CVSMPTETime=ssLLLssss}QQ}^{_CVTimeStamp=IiqQdq{CVSMPTETime=ssLLLssss}QQ}",
b"i^{__CVDisplayLink=}^{_CVTimeStamp=IiqQdq{CVSMPTETime=ssIIIssss}QQ}^{_CVTimeStamp=IiqQdq{CVSMPTETime=ssIIIssss}QQ}",
),
"",
{"arguments": {1: {"type_modifier": "n"}, 2: {"type_modifier": "o"}}},
),
"CVPixelBufferRetain": (b"^{__CVBuffer=}^{__CVBuffer=}",),
"CVPixelBufferGetPlaneCount": (sel32or64(b"L^{__CVBuffer=}", b"Q^{__CVBuffer=}"),),
"CVOpenGLTextureCacheRelease": (b"v^{__CVOpenGLTextureCache=}",),
"CVPixelBufferGetBaseAddress": (
b"^v^{__CVBuffer=}",
"",
{"retval": {"c_array_of_variable_length": True}},
),
"CVOpenGLBufferPoolRelease": (b"v^{__CVOpenGLBufferPool=}",),
"CVOpenGLTextureCacheGetTypeID": (sel32or64(b"L", b"Q"),),
"CVPixelBufferLockBaseAddress": (b"i^{__CVBuffer=}Q",),
"CVPixelBufferUnlockBaseAddress": (b"i^{__CVBuffer=}Q",),
"CVPixelFormatDescriptionRegisterDescriptionWithPixelFormatType": (
sel32or64(b"v^{__CFDictionary=}L", b"v^{__CFDictionary=}I"),
),
"CVOpenGLTextureIsFlipped": (b"Z^{__CVBuffer=}",),
"CVPixelBufferGetTypeID": (sel32or64(b"L", b"Q"),),
"CVDisplayLinkGetActualOutputVideoRefreshPeriod": (b"d^{__CVDisplayLink=}",),
"CVPixelBufferGetWidth": (sel32or64(b"L^{__CVBuffer=}", b"Q^{__CVBuffer=}"),),
"CVDisplayLinkCreateWithCGDisplay": (
b"iI^^{__CVDisplayLink=}",
"",
{
"retval": {"already_cfretained": True},
"arguments": {1: {"already_cfretained": True, "type_modifier": "o"}},
},
),
"CVBufferRelease": (b"v^{__CVBuffer=}",),
"CVDisplayLinkStart": (b"i^{__CVDisplayLink=}",),
"CVDisplayLinkGetCurrentTime": (
sel32or64(
b"i^{__CVDisplayLink=}^{_CVTimeStamp=IiqQdq{CVSMPTETime=ssLLLssss}QQ}",
b"i^{__CVDisplayLink=}^{_CVTimeStamp=IiqQdq{CVSMPTETime=ssIIIssss}QQ}",
),
"",
{"arguments": {1: {"type_modifier": "o"}}},
),
"CVPixelFormatDescriptionArrayCreateWithAllPixelFormatTypes": (
b"^{__CFArray=}^{__CFAllocator=}",
"",
{"retval": {"already_cfretained": True}},
),
"CVPixelBufferPoolGetAttributes": (b"^{__CFDictionary=}^{__CVPixelBufferPool=}",),
"CVBufferGetAttachments": (b"^{__CFDictionary=}^{__CVBuffer=}I",),
"CVOpenGLBufferPoolCreate": (
b"i^{__CFAllocator=}^{__CFDictionary=}^{__CFDictionary=}^^{__CVOpenGLBufferPool=}",
"",
{
"retval": {"already_cfretained": True},
"arguments": {3: {"already_cfretained": True, "type_modifier": "o"}},
},
),
"CVDisplayLinkRetain": (b"^{__CVDisplayLink=}^{__CVDisplayLink=}",),
"CVPixelBufferCreateWithIOSurface": (
b"i^{__CFAllocator=}^{__IOSurface=}^{__CFDictionary=}^^{__CVBuffer=}",
"",
{
"retval": {"already_cfretained": True},
"arguments": {3: {"already_cfretained": True, "type_modifier": "o"}},
},
),
"CVDisplayLinkCreateWithOpenGLDisplayMask": (
b"iI^^{__CVDisplayLink=}",
"",
{
"retval": {"already_cfretained": True},
"arguments": {1: {"already_cfretained": True, "type_modifier": "o"}},
},
),
"CVOpenGLBufferCreate": (
sel32or64(
b"i^{__CFAllocator=}LL^{__CFDictionary=}^^{__CVBuffer=}",
b"i^{__CFAllocator=}QQ^{__CFDictionary=}^^{__CVBuffer=}",
),
"",
{
"retval": {"already_cfretained": True},
"arguments": {4: {"already_cfretained": True, "type_modifier": "o"}},
},
),
"CVPixelBufferPoolCreatePixelBufferWithAuxAttributes": (
b"i^{__CFAllocator=}^{__CVPixelBufferPool=}^{__CFDictionary=}^^{__CVBuffer=}",
"",
{
"retval": {"already_cfretained": True},
"arguments": {3: {"already_cfretained": True, "type_modifier": "o"}},
},
),
"CVOpenGLTextureCacheFlush": (b"v^{__CVOpenGLTextureCache=}Q",),
"CVDisplayLinkCreateWithActiveCGDisplays": (
b"i^^{__CVDisplayLink=}",
"",
{
"retval": {"already_cfretained": True},
"arguments": {0: {"already_cfretained": True, "type_modifier": "o"}},
},
),
"CVDisplayLinkGetNominalOutputVideoRefreshPeriod": (
b"{_CVTime=qii}^{__CVDisplayLink=}",
),
"CVPixelBufferCreateResolvedAttributesDictionary": (
b"i^{__CFAllocator=}^{__CFArray=}^^{__CFDictionary=}",
"",
{
"retval": {"already_cfretained": True},
"arguments": {2: {"already_cfretained": True, "type_modifier": "o"}},
},
),
"CVDisplayLinkSetOutputCallback": (
b"i^{__CVDisplayLink=}^?^v",
"",
{
"arguments": {
1: {
"callable": {
"retval": {"type": b"i"},
"arguments": {
0: {"type": b"^{__CVDisplayLink=}"},
1: {
"type": b"^{_CVTimeStamp=IiqQdq{CVSMPTETime=ssIIIssss}QQ}",
"type_modifier": "n",
},
2: {
"type": b"^{_CVTimeStamp=IiqQdq{CVSMPTETime=ssIIIssss}QQ}",
"type_modifier": "N",
},
3: {"type": b"Q"},
4: {"type": b"^Q", "type_modifier": "o"},
5: {"type": b"^v"},
},
}
}
}
},
),
"CVOpenGLTextureGetName": (b"I^{__CVBuffer=}",),
"CVOpenGLBufferRelease": (b"v^{__CVBuffer=}",),
"CVOpenGLTextureRetain": (b"^{__CVBuffer=}^{__CVBuffer=}",),
"CVPixelBufferIsPlanar": (b"Z^{__CVBuffer=}",),
"CVPixelBufferGetWidthOfPlane": (
sel32or64(b"L^{__CVBuffer=}L", b"Q^{__CVBuffer=}Q"),
),
"CVBufferPropagateAttachments": (b"v^{__CVBuffer=}^{__CVBuffer=}",),
"CVPixelBufferPoolRetain": (b"^{__CVPixelBufferPool=}^{__CVPixelBufferPool=}",),
"CVPixelBufferGetHeight": (sel32or64(b"L^{__CVBuffer=}", b"Q^{__CVBuffer=}"),),
"CVPixelBufferGetExtendedPixels": (
sel32or64(b"v^{__CVBuffer=}^L^L^L^L", b"v^{__CVBuffer=}^Q^Q^Q^Q"),
"",
{
"arguments": {
1: {"type_modifier": "o"},
2: {"type_modifier": "o"},
3: {"type_modifier": "o"},
4: {"type_modifier": "o"},
}
},
),
"CVOpenGLBufferGetTypeID": (sel32or64(b"L", b"Q"),),
"CVDisplayLinkRelease": (b"v^{__CVDisplayLink=}",),
"CVBufferGetAttachment": (
b"@^{__CVBuffer=}^{__CFString=}^I",
"",
{"arguments": {2: {"type_modifier": "o"}}},
),
"CVDisplayLinkStop": (b"i^{__CVDisplayLink=}",),
"CVPixelFormatDescriptionCreateWithPixelFormatType": (
sel32or64(
b"^{__CFDictionary=}^{__CFAllocator=}L",
b"^{__CFDictionary=}^{__CFAllocator=}I",
),
"",
{"retval": {"already_cfretained": True}},
),
"CVPixelBufferGetIOSurface": (b"^{__IOSurface=}^{__CVBuffer=}",),
"CVOpenGLTextureCacheCreateTextureFromImage": (
b"i^{__CFAllocator=}^{__CVOpenGLTextureCache=}^{__CVBuffer=}^{__CFDictionary=}^^{__CVBuffer=}",
"",
{"retval": {"already_cfretained": True}},
),
"CVDisplayLinkCreateWithCGDisplays": (
sel32or64(b"i^Il^^{__CVDisplayLink=}", b"i^Iq^^{__CVDisplayLink=}"),
"",
{
"retval": {"already_cfretained": True},
"arguments": {
0: {"c_array_length_in_arg": 1, "type_modifier": "n"},
2: {"already_cfretained": True, "type_modifier": "o"},
},
},
),
"CVPixelBufferPoolGetPixelBufferAttributes": (
b"^{__CFDictionary=}^{__CVPixelBufferPool=}",
),
"CVOpenGLTextureGetTypeID": (sel32or64(b"L", b"Q"),),
"CVImageBufferIsFlipped": (b"Z^{__CVBuffer=}",),
"CVOpenGLBufferPoolGetAttributes": (b"^{__CFDictionary=}^{__CVOpenGLBufferPool=}",),
"CVBufferRemoveAllAttachments": (b"v^{__CVBuffer=}",),
"CVPixelBufferCreateWithBytes": (
sel32or64(
b"i^{__CFAllocator=}LLL^vL^?^v^{__CFDictionary=}^^{__CVBuffer=}",
b"i^{__CFAllocator=}QQI^vQ^?^v^{__CFDictionary=}^^{__CVBuffer=}",
),
"",
{
"retval": {"already_cfretained": True},
"arguments": {
6: {
"callable": {
"retval": {"type": b"v"},
"arguments": {0: {"type": b"^v"}, 1: {"type": b"^v"}},
}
}
},
},
),
"CVOpenGLBufferPoolRetain": (b"^{__CVOpenGLBufferPool=}^{__CVOpenGLBufferPool=}",),
"CVPixelBufferCreateWithPlanarBytes": (
sel32or64(
b"i^{__CFAllocator=}LLL^vLL^^v^L^L^L^?^v^{__CFDictionary=}^^{__CVBuffer=}",
b"i^{__CFAllocator=}QQI^vQQ^^v^Q^Q^Q^?^v^{__CFDictionary=}^^{__CVBuffer=}",
),
"",
{
"retval": {"already_cfretained": True},
"arguments": {
11: {
"callable": {
"retval": {"type": b"v"},
"arguments": {
0: {"type": b"^v"},
1: {"type": b"^v"},
2: {"type": b"Q"},
3: {"type": b"Q"},
4: {"type": b"^^v"},
},
}
}
},
},
),
"CVImageBufferGetCleanRect": (
sel32or64(
b"{CGRect={CGPoint=ff}{CGSize=ff}}^{__CVBuffer=}",
b"{CGRect={CGPoint=dd}{CGSize=dd}}^{__CVBuffer=}",
),
),
"CVImageBufferCreateColorSpaceFromAttachments": (
b"^{CGColorSpace=}^{__CFDictionary=}",
"",
{"retval": {"already_cfretained": True}},
),
"CVPixelBufferGetBytesPerRowOfPlane": (
sel32or64(b"L^{__CVBuffer=}L", b"Q^{__CVBuffer=}Q"),
),
"CVDisplayLinkGetTypeID": (sel32or64(b"L", b"Q"),),
"CVImageBufferGetDisplaySize": (
sel32or64(b"{CGSize=ff}^{__CVBuffer=}", b"{CGSize=dd}^{__CVBuffer=}"),
),
"CVPixelBufferGetDataSize": (sel32or64(b"L^{__CVBuffer=}", b"Q^{__CVBuffer=}"),),
"CVOpenGLBufferPoolGetOpenGLBufferAttributes": (
b"^{__CFDictionary=}^{__CVOpenGLBufferPool=}",
),
"CVOpenGLBufferAttach": (b"i^{__CVBuffer=}^{_CGLContextObject=}Iii",),
"CVPixelBufferGetBaseAddressOfPlane": (
sel32or64(b"^v^{__CVBuffer=}L", b"^v^{__CVBuffer=}Q"),
"",
{"retval": {"c_array_of_variable_length": True}},
),
"CVDisplayLinkIsRunning": (b"Z^{__CVDisplayLink=}",),
"CVPixelBufferGetPixelFormatType": (
sel32or64(b"L^{__CVBuffer=}", b"I^{__CVBuffer=}"),
),
"CVBufferRemoveAttachment": (b"v^{__CVBuffer=}^{__CFString=}",),
"CVOpenGLBufferGetAttributes": (b"^{__CFDictionary=}^{__CVBuffer=}",),
"CVDisplayLinkGetOutputVideoLatency": (b"{_CVTime=qii}^{__CVDisplayLink=}",),
"CVPixelBufferGetBytesPerRow": (sel32or64(b"L^{__CVBuffer=}", b"Q^{__CVBuffer=}"),),
"CVPixelBufferPoolCreatePixelBuffer": (
b"i^{__CFAllocator=}^{__CVPixelBufferPool=}^^{__CVBuffer=}",
"",
{
"retval": {"already_cfretained": True},
"arguments": {2: {"already_cfretained": True, "type_modifier": "o"}},
},
),
"CVImageBufferGetColorSpace": (b"^{CGColorSpace=}^{__CVBuffer=}",),
"CVDisplayLinkGetCurrentCGDisplay": (b"I^{__CVDisplayLink=}",),
"CVDisplayLinkSetCurrentCGDisplayFromOpenGLContext": (
b"i^{__CVDisplayLink=}^{_CGLContextObject=}^{_CGLPixelFormatObject=}",
),
"CVPixelBufferRelease": (b"v^{__CVBuffer=}",),
"CVBufferSetAttachments": (b"v^{__CVBuffer=}^{__CFDictionary=}I",),
"CVOpenGLTextureGetTarget": (b"I^{__CVBuffer=}",),
"CVGetHostClockFrequency": (b"d",),
"CVGetHostClockMinimumTimeDelta": (b"I",),
"CVOpenGLBufferRetain": (b"^{__CVBuffer=}^{__CVBuffer=}",),
"CVOpenGLTextureCacheCreate": (
b"i^{__CFAllocator=}^{__CFDictionary=}^{_CGLContextObject=}^{_CGLPixelFormatObject=}^{__CFDictionary=}^^{__CVOpenGLTextureCache=}",
"",
{"retval": {"already_cfretained": True}},
),
"CVOpenGLTextureGetCleanTexCoords": (b"v^{__CVBuffer=}[2f][2f][2f][2f]",),
}
aliases = {"CV_INLINE": "CF_INLINE"}
cftypes = [
("CVBufferRef", b"^{__CVBuffer=}", "CVBufferGetTypeID", None),
("CVDisplayLinkRef", b"^{__CVDisplayLink=}", "CVDisplayLinkGetTypeID", None),
(
"CVOpenGLBufferPoolRef",
b"^{__CVOpenGLBufferPool=}",
"CVOpenGLBufferPoolGetTypeID",
None,
),
(
"CVOpenGLTextureCacheRef",
b"^{__CVOpenGLTextureCache=}",
"CVOpenGLTextureCacheGetTypeID",
None,
),
(
"CVPixelBufferPoolRef",
b"^{__CVPixelBufferPool=}",
"CVPixelBufferPoolGetTypeID",
None,
),
("CVOpenGLBufferRef", b"^{__CVOpenGLBuffer=}", "CVOpenGLBufferGetTypeID", None),
("CVPixelBufferRef", b"^{__CVPixelBuffer=}", "CVPixelBufferGetTypeID", None),
("CVOpenGLTextureRef", b"^{__CVOpenGLTexture=}", "CVOpenGLTextureGetTypeID", None),
]
expressions = {}
# END OF FILE
|
user | update | # -*- coding: utf-8 -*-
"""
flaskbb.core.user.update
~~~~~~~~~~~~~~~~~~~~~~~~
This module provides services used in updating user details
across FlaskBB.
:copyright: (c) 2014-2018 by the FlaskBB Team.
:license: BSD, see LICENSE for more details.
"""
from abc import ABC, abstractmethod
import attr
from ..changesets import empty, is_empty
def _should_assign(current, new):
return not is_empty(new) and current != new
@attr.s(hash=True, eq=True, order=True, repr=True, frozen=True)
class UserDetailsChange(object):
"""
Object representing a change to a user's details.
"""
birthday = attr.ib(default=empty)
gender = attr.ib(default=empty)
location = attr.ib(default=empty)
website = attr.ib(default=empty)
avatar = attr.ib(default=empty)
signature = attr.ib(default=empty)
notes = attr.ib(default=empty)
def assign_to_user(self, user):
for name, value in attr.asdict(self).items():
if _should_assign(getattr(user, name), value):
setattr(user, name, value)
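The `empty` sentinel and the `_should_assign` guard mean only fields the caller actually supplied (and that differ from the current value) are written back to the user. A minimal standalone model of that pattern, where `EMPTY` stands in for flaskbb's `empty`/`is_empty`:

```python
EMPTY = object()  # stand-in for flaskbb's `empty` sentinel


def should_assign(current, new):
    # Assign only when the caller supplied a value and it actually differs.
    return new is not EMPTY and current != new


class User:
    location = "old town"
    website = None


user = User()
change = {"location": "new town", "website": EMPTY}
for name, value in change.items():
    if should_assign(getattr(user, name), value):
        setattr(user, name, value)

print(user.location, user.website)  # only location was touched
```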
@attr.s(hash=True, eq=True, order=True, repr=False, frozen=True)
class PasswordUpdate(object):
"""
Object representing an update to a user's password.
"""
old_password = attr.ib()
new_password = attr.ib()
@attr.s(hash=True, eq=True, order=True, repr=True, frozen=True)
class EmailUpdate(object):
"""
Object representing a change to a user's email address.
"""
# TODO(anr): Change to str.lower once Python2 is dropped
old_email = attr.ib(converter=lambda x: x.lower())
new_email = attr.ib(converter=lambda x: x.lower())
@attr.s(hash=True, eq=True, order=True, repr=True, frozen=True)
class SettingsUpdate(object):
"""
Object representing an update to a user's settings.
"""
language = attr.ib()
theme = attr.ib()
def assign_to_user(self, user):
for name, value in attr.asdict(self).items():
if _should_assign(getattr(user, name), value):
setattr(user, name, value)
class UserSettingsUpdatePostProcessor(ABC):
"""
Used to react to a user updating their settings. This post processor
receives the user that updated their settings and the change set that was
applied to the user. It is called after the update has been persisted, so
further changes must be persisted separately.
"""
@abstractmethod
def post_process_settings_update(self, user, settings_update):
"""
React to `settings_update` applied to `user` after it has been persisted.
"""
pass
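A concrete post processor might, for example, record each applied change set. The `AuditLogPostProcessor` below is a hypothetical sketch, not part of FlaskBB:

```python
from abc import ABC, abstractmethod


class UserSettingsUpdatePostProcessor(ABC):
    @abstractmethod
    def post_process_settings_update(self, user, settings_update):
        """React to a settings update that has already been persisted."""


class AuditLogPostProcessor(UserSettingsUpdatePostProcessor):
    """Hypothetical implementation that records each settings change."""

    def __init__(self):
        self.audit_log = []

    def post_process_settings_update(self, user, settings_update):
        # The update is already persisted; we only observe it here.
        self.audit_log.append((user, settings_update))


processor = AuditLogPostProcessor()
processor.post_process_settings_update("alice", {"language": "de"})
print(processor.audit_log)  # [('alice', {'language': 'de'})]
```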
|
utils | file_utils_test | """Tests for file utilities.
"""
__copyright__ = "Copyright (C) 2014, 2016 Martin Blais"
__license__ = "GNU GPLv2"
import os
import re
import tempfile
import unittest
from os import path
from beancount.utils import file_utils, test_utils
def clean(prefix, iterable):
return [re.sub("^{}/".format(re.escape(prefix)), "", f) for f in iterable]
class TestFileUtilsFind(test_utils.TestTempdirMixin, test_utils.TestCase):
def setUp(self):
super().setUp()
for filename in [
"alice/",
"alice/bottle.txt",
"rabbit/",
"rabbit/suit/",
"rabbit/suit/glasses.txt",
"caterpillar/",
"caterpillar/who-are-you.txt",
]:
abs_filename = path.join(self.tempdir, filename)
if filename.endswith("/"):
os.makedirs(abs_filename)
else:
open(abs_filename, "w").close()
def test_find_files(self):
def walk(fords):
return sorted(clean(self.tempdir, file_utils.find_files(fords)))
self.assertEqual(
sorted(
[
"alice/bottle.txt",
"rabbit/suit/glasses.txt",
"caterpillar/who-are-you.txt",
]
),
walk([self.tempdir]),
)
self.assertEqual(
["alice/bottle.txt"], walk([path.join(self.tempdir, "alice/bottle.txt")])
)
self.assertEqual([], walk([path.join(self.tempdir, "alice/blabla.txt")]))
# Test a string directly.
self.assertEqual(
["alice/bottle.txt"], walk(path.join(self.tempdir, "alice/bottle.txt"))
)
class TestMiscFileUtils(unittest.TestCase):
def test_guess_file_format(self):
self.assertEqual("csv", file_utils.guess_file_format("/user/output.csv"))
self.assertEqual("text", file_utils.guess_file_format("/user/output.text"))
self.assertEqual("text", file_utils.guess_file_format("/user/output.txt"))
self.assertEqual("html", file_utils.guess_file_format("/user/output.html"))
self.assertEqual("html", file_utils.guess_file_format("/user/output.xhtml"))
self.assertEqual(None, file_utils.guess_file_format("/user/output"))
def test_path_greedy_split(self):
self.assertEqual(
("/tmp/tmp.ju3h4h/blabla", None),
file_utils.path_greedy_split("/tmp/tmp.ju3h4h/blabla"),
)
self.assertEqual(
("/tmp/tmp.ju3h4h/bla", ".tgz"),
file_utils.path_greedy_split("/tmp/tmp.ju3h4h/bla.tgz"),
)
self.assertEqual(
("/tmp/tmp.ju3h4h/bla", ".tar.gz"),
file_utils.path_greedy_split("/tmp/tmp.ju3h4h/bla.tar.gz"),
)
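The assertions above pin down `path_greedy_split`'s contract: split at the first dot of the basename so the longest extension (e.g. `.tar.gz`) stays whole, and return `None` when there is no extension. A sketch consistent with these tests (not beancount's actual implementation):

```python
from os import path


def path_greedy_split(filename):
    """Split at the first dot of the basename, keeping the longest extension."""
    base = path.basename(filename)
    dot_index = base.find(".")
    if dot_index == -1:
        return filename, None
    ext = base[dot_index:]
    return filename[: -len(ext)], ext


print(path_greedy_split("/tmp/tmp.ju3h4h/bla.tar.gz"))  # ('/tmp/tmp.ju3h4h/bla', '.tar.gz')
```

Note that dots in directory names (like `tmp.ju3h4h`) are ignored because only the basename is searched.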
def test_chdir_contextmanager(self):
with file_utils.chdir(tempfile.gettempdir()) as tmpdir:
self.assertIsInstance(tmpdir, str)
self.assertEqual(path.realpath(tempfile.gettempdir()), os.getcwd())
if __name__ == "__main__":
unittest.main()
|
extractors | singlefile | __package__ = "archivebox.extractors"
import json
from pathlib import Path
from typing import Optional
from ..config import (
CHROME_BINARY,
DEPENDENCIES,
SAVE_SINGLEFILE,
SINGLEFILE_ARGS,
SINGLEFILE_VERSION,
TIMEOUT,
)
from ..index.schema import ArchiveError, ArchiveResult, Link
from ..logging_util import TimedProgress
from ..system import chmod_file, run
from ..util import chrome_args, enforce_types, is_static_file
@enforce_types
def should_save_singlefile(
link: Link, out_dir: Optional[Path] = None, overwrite: Optional[bool] = False
) -> bool:
if is_static_file(link.url):
return False
out_dir = out_dir or Path(link.link_dir)
if not overwrite and (out_dir / "singlefile.html").exists():
return False
return SAVE_SINGLEFILE
@enforce_types
def save_singlefile(
link: Link, out_dir: Optional[Path] = None, timeout: int = TIMEOUT
) -> ArchiveResult:
"""download full site using single-file"""
out_dir = out_dir or Path(link.link_dir)
output = "singlefile.html"
browser_args = chrome_args(CHROME_TIMEOUT=0)
# SingleFile CLI Docs: https://github.com/gildas-lormeau/SingleFile/tree/master/cli
browser_args = "--browser-args={}".format(json.dumps(browser_args[1:]))
options = [
*SINGLEFILE_ARGS,
"--browser-executable-path={}".format(CHROME_BINARY),
browser_args,
]
# Deduplicate options (single-file doesn't like it when you pass the same option twice)
#
# NOTE: Option names that come first clobber conflicting names that come later.
# Rationale: SINGLEFILE_ARGS affects the single-file command with the most
# specificity, so the user sets it with the most intent and it should take
# precedence, much like lexical scope in programming languages.
seen_option_names = []
def test_seen(argument):
option_name = argument.split("=")[0]
if option_name in seen_option_names:
return False
else:
seen_option_names.append(option_name)
return True
deduped_options = list(filter(test_seen, options))
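The first-wins precedence described in the note above can be sketched in isolation (the option strings here are illustrative, not ArchiveBox defaults):

```python
def dedupe_options(options):
    """Keep only the first occurrence of each --name=value option."""
    seen = set()
    deduped = []
    for argument in options:
        name = argument.split("=")[0]  # compare on the part before '='
        if name not in seen:
            seen.add(name)
            deduped.append(argument)
    return deduped


# The user-supplied width (listed first) wins over the conflicting later value.
opts = ["--browser-width=1440", "--compress-css=true", "--browser-width=1920"]
print(dedupe_options(opts))  # ['--browser-width=1440', '--compress-css=true']
```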
cmd = [
DEPENDENCIES["SINGLEFILE_BINARY"]["path"],
*deduped_options,
link.url,
output,
]
status = "succeeded"
timer = TimedProgress(timeout, prefix=" ")
try:
result = run(cmd, cwd=str(out_dir), timeout=timeout)
# parse out number of files downloaded from last line of stderr:
# "Downloaded: 76 files, 4.0M in 1.6s (2.52 MB/s)"
output_tail = [
line.strip()
for line in (result.stdout + result.stderr).decode().rsplit("\n", 3)[-3:]
if line.strip()
]
hints = (
"Got single-file response code: {}.".format(result.returncode),
*output_tail,
)
# Check for common failure cases
if (result.returncode > 0) or not (out_dir / output).is_file():
raise ArchiveError("SingleFile was not able to archive the page", hints)
chmod_file(output, cwd=str(out_dir))
except Exception as err:
status = "failed"
# TODO: Make this prettier. This is necessary to run the command (escape JSON internal quotes).
cmd[2] = browser_args.replace('"', '\\"')
output = err
finally:
timer.end()
return ArchiveResult(
cmd=cmd,
pwd=str(out_dir),
cmd_version=SINGLEFILE_VERSION,
output=output,
status=status,
**timer.stats,
)
|
rest | settings_endpoint | from aiohttp import web
from aiohttp_apispec import docs, json_schema
from ipv8.REST.schema import schema
from marshmallow.fields import Boolean
from tribler.core.components.libtorrent.download_manager.download_manager import (
DownloadManager,
)
from tribler.core.components.restapi.rest.rest_endpoint import (
RESTEndpoint,
RESTResponse,
)
from tribler.core.config.tribler_config import TriblerConfig
from tribler.core.utilities.network_utils import default_network_utils
from tribler.core.utilities.utilities import froze_it
@froze_it
class SettingsEndpoint(RESTEndpoint):
"""
This endpoint is responsible for handling all requests regarding settings and configuration.
"""
path = "/settings"
def __init__(
self, tribler_config: TriblerConfig, download_manager: DownloadManager = None
):
super().__init__()
self.tribler_config = tribler_config
self.download_manager = download_manager
def setup_routes(self):
self.app.add_routes(
[web.get("", self.get_settings), web.post("", self.update_settings)]
)
@docs(
tags=["General"],
summary="Return all the session settings that can be found in Tribler.",
responses={200: {"schema": schema(GetTriblerSettingsResponse={})}},
description="This endpoint returns all the session settings that can be found in Tribler.\n\n It also returns "
"the runtime-determined ports",
)
async def get_settings(self, request):
self._logger.info(f"Get settings. Request: {request}")
return RESTResponse(
{
"settings": self.tribler_config.dict(),
"ports": list(default_network_utils.ports_in_use),
}
)
@docs(
tags=["General"],
summary="Update Tribler settings.",
responses={
200: {"schema": schema(UpdateTriblerSettingsResponse={"modified": Boolean})}
},
)
@json_schema(schema(UpdateTriblerSettingsRequest={}))
async def update_settings(self, request):
settings_dict = await request.json()
await self.parse_settings_dict(settings_dict)
self.tribler_config.write()
return RESTResponse({"modified": True})
async def parse_setting(self, section, option, value):
"""
Set a specific Tribler setting. (The ValueError for unavailable settings is currently commented out below.)
"""
# if section in self.config.config and option in self.config.config[section]:
self.tribler_config.__getattribute__(section).__setattr__(option, value)
# else:
# raise ValueError(f"Section {section} with option {option} does not exist")
# Perform some actions when specific keys are set
if section == "libtorrent" and (
option == "max_download_rate" or option == "max_upload_rate"
):
if self.download_manager:
self.download_manager.update_max_rates_from_config()
async def parse_settings_dict(self, settings_dict, depth=1, root_key=None):
"""
Parse the settings dictionary.
"""
for key, value in settings_dict.items():
if isinstance(value, dict):
await self.parse_settings_dict(value, depth=depth + 1, root_key=key)
else:
await self.parse_setting(root_key, key, value)
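The recursive walk above reduces a nested settings payload to (section, option, value) triples before each one is applied. A standalone model of that traversal (the `flatten` name is ours, not Tribler's):

```python
def flatten(settings_dict, root_key=None):
    """Yield (section, option, value) triples from a nested settings dict."""
    for key, value in settings_dict.items():
        if isinstance(value, dict):
            # One level down: the current key becomes the section name.
            yield from flatten(value, root_key=key)
        else:
            yield (root_key, key, value)


payload = {"libtorrent": {"max_download_rate": 100, "max_upload_rate": 50}}
print(list(flatten(payload)))
```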
|
extractor | lcp | # coding: utf-8
from __future__ import unicode_literals
from .arkena import ArkenaIE
from .common import InfoExtractor
class LcpPlayIE(ArkenaIE):
_VALID_URL = (
r"https?://play\.lcp\.fr/embed/(?P<id>[^/]+)/(?P<account_id>[^/]+)/[^/]+/[^/]+"
)
_TESTS = [
{
"url": "http://play.lcp.fr/embed/327336/131064/darkmatter/0",
"md5": "b8bd9298542929c06c1c15788b1f277a",
"info_dict": {
"id": "327336",
"ext": "mp4",
"title": "327336",
"timestamp": 1456391602,
"upload_date": "20160225",
},
"params": {
"skip_download": True,
},
}
]
class LcpIE(InfoExtractor):
_VALID_URL = r"https?://(?:www\.)?lcp\.fr/(?:[^/]+/)*(?P<id>[^/]+)"
_TESTS = [
{
# arkena embed
"url": "http://www.lcp.fr/la-politique-en-video/schwartzenberg-prg-preconise-francois-hollande-de-participer-une-primaire",
"md5": "b8bd9298542929c06c1c15788b1f277a",
"info_dict": {
"id": "d56d03e9",
"ext": "mp4",
"title": "Schwartzenberg (PRG) préconise à François Hollande de participer à une primaire à gauche",
"description": "md5:96ad55009548da9dea19f4120c6c16a8",
"timestamp": 1456488895,
"upload_date": "20160226",
},
"params": {
"skip_download": True,
},
},
{
# dailymotion live stream
"url": "http://www.lcp.fr/le-direct",
"info_dict": {
"id": "xji3qy",
"ext": "mp4",
"title": "La Chaine Parlementaire (LCP), Live TNT",
"description": "md5:5c69593f2de0f38bd9a949f2c95e870b",
"uploader": "LCP",
"uploader_id": "xbz33d",
"timestamp": 1308923058,
"upload_date": "20110624",
},
"params": {
# m3u8 live stream
"skip_download": True,
},
},
{
"url": "http://www.lcp.fr/emissions/277792-les-volontaires",
"only_matching": True,
},
]
def _real_extract(self, url):
display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id)
play_url = self._search_regex(
r'<iframe[^>]+src=(["\'])(?P<url>%s?(?:(?!\1).)*)\1' % LcpPlayIE._VALID_URL,
webpage,
"play iframe",
default=None,
group="url",
)
if not play_url:
return self.url_result(url, "Generic")
title = self._og_search_title(webpage, default=None) or self._html_search_meta(
"twitter:title", webpage, fatal=True
)
description = self._html_search_meta(
("description", "twitter:description"), webpage
)
return {
"_type": "url_transparent",
"ie_key": LcpPlayIE.ie_key(),
"url": play_url,
"display_id": display_id,
"title": title,
"description": description,
}
|
usage-manual | export_usage_manual | # pip install selenium
# I also had to do pip install --upgrade requests
# Download geckodriver from here https://github.com/mozilla/geckodriver/releases and put it in Downloads
# sudo chmod +x geckodriver
# export PATH=$PATH:/home/marc/Downloads (or wherever you put it)
import os
import time
from html.parser import HTMLParser
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
# Settings
pages_to_save = [
"Handling Flowgraphs",
"Types of Blocks",
"Metadata Information",
"Stream Tags",
"Block Thread Affinity and Priority",
"Configuration Files",
"VOLK Guide",
"Polymorphic Types (PMTs)",
"Message Passing",
"QT GUI",
"Logging",
"Performance Counters",
"Tagged Stream Blocks",
"Polyphase Filterbanks",
]
# set up web driver
# path to the geckodriver binary (or omit the argument and rely on PATH, see above)
driver = webdriver.Firefox(executable_path="/home/marc/Downloads/geckodriver")
print("STARTING")
for page_name in pages_to_save:
print("Processing", page_name)
driver.get("https://wiki.gnuradio.org/index.php/Special:Export")
# fill in text box
text_area = driver.find_element_by_xpath("//*[@name='pages']")
text_area.send_keys(page_name)
# uncheck "save as file" box
check_box = driver.find_element_by_xpath("//*[@name='wpDownload']")
check_box.click()
# hit Export
submit_button = driver.find_element_by_xpath("//*[@value='Export']")
submit_button.click()
# get HTML of new page
raw_html = driver.page_source
start_index = raw_html.find("<page>")
cropped_html = raw_html[start_index:]
# save text to file
# unescape entities so stuff like &gt; shows up as a greater than sign;
# HTMLParser().unescape() was removed in Python 3.9, so use html.unescape
import html

cropped_html_text = html.unescape(cropped_html)
text_file = open("(exported from wiki) " + page_name + ".txt", "w")
text_file.write(cropped_html_text)
text_file.close()
driver.close()
print("DONE")
|
settings | base | import os
from distutils.util import strtobool
from django.utils.translation import gettext_lazy as _
from dotenv import find_dotenv, load_dotenv
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
# Environment variables
# Check for and load environment variables from a .env file.
load_dotenv(find_dotenv())
# Required settings
ALLOWED_HOSTS = os.environ.get("ALLOWED_HOSTS", "*").split(",")
SECRET_KEY = os.environ.get("SECRET_KEY") or None
DEBUG = bool(strtobool(os.environ.get("DEBUG") or "False"))
# Applications
# https://docs.djangoproject.com/en/4.0/ref/applications/
INSTALLED_APPS = [
"api",
"babybuddy.apps.BabyBuddyConfig",
"core.apps.CoreConfig",
"dashboard",
"reports",
"axes",
"django_filters",
"rest_framework",
"rest_framework.authtoken",
"widget_tweaks",
"imagekit",
"storages",
"import_export",
"qr_code",
"dbsettings",
"django.contrib.admin",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
"django.contrib.humanize",
]
# Middleware
# https://docs.djangoproject.com/en/4.0/ref/middleware/
MIDDLEWARE = [
"django.middleware.security.SecurityMiddleware",
"whitenoise.middleware.WhiteNoiseMiddleware",
"django.contrib.sessions.middleware.SessionMiddleware",
"babybuddy.middleware.RollingSessionMiddleware",
"django.contrib.auth.middleware.AuthenticationMiddleware",
"babybuddy.middleware.UserTimezoneMiddleware",
"django.middleware.locale.LocaleMiddleware",
"babybuddy.middleware.UserLanguageMiddleware",
"django.middleware.common.CommonMiddleware",
"django.middleware.csrf.CsrfViewMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
"django.middleware.clickjacking.XFrameOptionsMiddleware",
"axes.middleware.AxesMiddleware",
"babybuddy.middleware.HomeAssistant",
]
# URL dispatcher
# https://docs.djangoproject.com/en/4.0/topics/http/urls/
ROOT_URLCONF = "babybuddy.urls"
# Templates
# https://docs.djangoproject.com/en/4.0/ref/settings/#std:setting-TEMPLATES
TEMPLATES = [
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"DIRS": ["babybuddy/templates", "babybuddy/templates/error"],
"APP_DIRS": True,
"OPTIONS": {
"context_processors": [
"django.template.context_processors.debug",
"django.template.context_processors.request",
"django.contrib.auth.context_processors.auth",
"django.contrib.messages.context_processors.messages",
],
},
},
]
# Database
# https://docs.djangoproject.com/en/4.0/ref/settings/#databases
config = {
"ENGINE": os.getenv("DB_ENGINE") or "django.db.backends.sqlite3",
"NAME": os.getenv("DB_NAME") or os.path.join(BASE_DIR, "data/db.sqlite3"),
}
if os.getenv("DB_USER"):
config["USER"] = os.getenv("DB_USER")
if os.environ.get("DB_PASSWORD") or os.environ.get("POSTGRES_PASSWORD"):
config["PASSWORD"] = os.environ.get("DB_PASSWORD") or os.environ.get(
"POSTGRES_PASSWORD"
)
if os.getenv("DB_HOST"):
config["HOST"] = os.getenv("DB_HOST")
if os.getenv("DB_PORT"):
config["PORT"] = os.getenv("DB_PORT")
DATABASES = {"default": config}
# Cache
# https://docs.djangoproject.com/en/4.0/topics/cache/
CACHES = {
"default": {
"BACKEND": "django.core.cache.backends.db.DatabaseCache",
"LOCATION": "cache_default",
}
}
# WSGI
# https://docs.djangoproject.com/en/4.0/howto/deployment/wsgi/
WSGI_APPLICATION = "babybuddy.wsgi.application"
# Authentication
# https://docs.djangoproject.com/en/4.0/topics/auth/default/
AUTHENTICATION_BACKENDS = [
"axes.backends.AxesBackend",
"django.contrib.auth.backends.ModelBackend",
]
LOGIN_REDIRECT_URL = "babybuddy:root-router"
LOGIN_URL = "babybuddy:login"
LOGOUT_REDIRECT_URL = "babybuddy:login"
REVERSE_PROXY_AUTH = bool(strtobool(os.environ.get("REVERSE_PROXY_AUTH") or "False"))
# Use remote user middleware when reverse proxy auth is enabled.
if REVERSE_PROXY_AUTH:
# Must appear AFTER AuthenticationMiddleware.
MIDDLEWARE.append("babybuddy.middleware.CustomRemoteUser")
AUTHENTICATION_BACKENDS.append("django.contrib.auth.backends.RemoteUserBackend")
# Timezone
# https://docs.djangoproject.com/en/4.0/topics/i18n/timezones/
USE_TZ = True
TIME_ZONE = "UTC"
# Internationalization
# https://docs.djangoproject.com/en/4.0/topics/i18n/
USE_I18N = True
LANGUAGE_CODE = "en-US"
LOCALE_PATHS = [
os.path.join(BASE_DIR, "locale"),
]
LANGUAGES = [
("ca", _("Catalan")),
("cs", _("Czech")),
("zh-hans", _("Chinese (simplified)")),
("da", _("Danish")),
("nl", _("Dutch")),
("en-US", _("English (US)")),
("en-GB", _("English (UK)")),
("fr", _("French")),
("fi", _("Finnish")),
("de", _("German")),
("hu", _("Hungarian")),
("it", _("Italian")),
("nb", _("Norwegian Bokmål")),
("pl", _("Polish")),
("pt", _("Portuguese")),
("ru", _("Russian")),
("es", _("Spanish")),
("sv", _("Swedish")),
("tr", _("Turkish")),
]
# Format localization
# https://docs.djangoproject.com/en/4.0/topics/i18n/formatting/
USE_L10N = True
FORMAT_MODULE_PATH = ["babybuddy.formats"]
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/4.0/howto/static-files/
# http://whitenoise.evans.io/en/stable/django.html
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
STATICFILES_FINDERS = [
"django.contrib.staticfiles.finders.FileSystemFinder",
"django.contrib.staticfiles.finders.AppDirectoriesFinder",
]
STATIC_URL = os.path.join(os.environ.get("SUB_PATH") or "", "static/")
STATIC_ROOT = os.path.join(BASE_DIR, "static")
WHITENOISE_ROOT = os.path.join(BASE_DIR, "static", "babybuddy", "root")
# Media files (User uploaded content)
# https://docs.djangoproject.com/en/4.0/topics/files/
MEDIA_ROOT = os.path.join(BASE_DIR, "media")
MEDIA_URL = "media/"
AWS_STORAGE_BUCKET_NAME = os.environ.get("AWS_STORAGE_BUCKET_NAME") or None
AWS_ACCESS_KEY_ID = os.environ.get("AWS_ACCESS_KEY_ID") or None
AWS_SECRET_ACCESS_KEY = os.environ.get("AWS_SECRET_ACCESS_KEY") or None
AWS_S3_ENDPOINT_URL = os.environ.get("AWS_S3_ENDPOINT_URL") or None
if AWS_STORAGE_BUCKET_NAME:
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
# Email
# https://docs.djangoproject.com/en/4.0/topics/email/
EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
EMAIL_SUBJECT_PREFIX = "[Baby Buddy] "
EMAIL_TIMEOUT = 30
if os.environ.get("EMAIL_HOST"):
EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
EMAIL_HOST = os.environ.get("EMAIL_HOST")
EMAIL_HOST_USER = os.environ.get("EMAIL_HOST_USER") or ""
EMAIL_HOST_PASSWORD = os.environ.get("EMAIL_HOST_PASSWORD") or ""
EMAIL_PORT = os.environ.get("EMAIL_PORT") or 25
EMAIL_USE_TLS = bool(strtobool(os.environ.get("EMAIL_USE_TLS") or "False"))
EMAIL_USE_SSL = bool(strtobool(os.environ.get("EMAIL_USE_SSL") or "False"))
EMAIL_SSL_KEYFILE = os.environ.get("EMAIL_SSL_KEYFILE") or None
EMAIL_SSL_CERTFILE = os.environ.get("EMAIL_SSL_CERTFILE") or None
# Security
# https://docs.djangoproject.com/en/4.0/ref/settings/#secure-proxy-ssl-header
if os.environ.get("SECURE_PROXY_SSL_HEADER"):
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
# https://docs.djangoproject.com/en/4.0/topics/http/sessions/#settings
SESSION_COOKIE_HTTPONLY = True
SESSION_COOKIE_SECURE = bool(
strtobool(os.environ.get("SESSION_COOKIE_SECURE") or "False")
)
# https://docs.djangoproject.com/en/4.0/ref/csrf/#settings
CSRF_COOKIE_HTTPONLY = True
CSRF_COOKIE_SECURE = bool(strtobool(os.environ.get("CSRF_COOKIE_SECURE") or "False"))
CSRF_FAILURE_VIEW = "babybuddy.views.csrf_failure"
CSRF_TRUSTED_ORIGINS = list(
filter(None, os.environ.get("CSRF_TRUSTED_ORIGINS", "").split(","))
)
# https://docs.djangoproject.com/en/4.0/topics/auth/passwords/
AUTH_PASSWORD_VALIDATORS = [
{
"NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
},
{
"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
"OPTIONS": {
"min_length": 8,
},
},
{
"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
},
{
"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
},
]
# Django Rest Framework
# https://www.django-rest-framework.org/
REST_FRAMEWORK = {
"DEFAULT_AUTHENTICATION_CLASSES": [
"rest_framework.authentication.SessionAuthentication",
"rest_framework.authentication.TokenAuthentication",
],
"DEFAULT_FILTER_BACKENDS": [
"django_filters.rest_framework.DjangoFilterBackend",
"rest_framework.filters.OrderingFilter",
],
"DEFAULT_METADATA_CLASS": "api.metadata.APIMetadata",
"DEFAULT_PAGINATION_CLASS": "rest_framework.pagination.LimitOffsetPagination",
"DEFAULT_PERMISSION_CLASSES": ["api.permissions.BabyBuddyDjangoModelPermissions"],
"DEFAULT_RENDERER_CLASSES": [
"rest_framework.renderers.JSONRenderer",
],
"PAGE_SIZE": 100,
}
# Import/Export configuration
# See https://django-import-export.readthedocs.io/
IMPORT_EXPORT_IMPORT_PERMISSION_CODE = "add"
IMPORT_EXPORT_EXPORT_PERMISSION_CODE = "change"
IMPORT_EXPORT_USE_TRANSACTIONS = True
# Axes configuration
# See https://django-axes.readthedocs.io/en/latest/4_configuration.html
AXES_COOLOFF_TIME = 1
AXES_FAILURE_LIMIT = 5
AXES_LOCKOUT_TEMPLATE = "error/lockout.html"
AXES_LOCKOUT_URL = "/login/lock"
# Session configuration
# Used by RollingSessionMiddleware to determine how often to reset the session.
# See https://docs.djangoproject.com/en/4.0/topics/http/sessions/
ROLLING_SESSION_REFRESH = 86400
# Set default auto field for models.
# See https://docs.djangoproject.com/en/4.0/releases/3.2/#customizing-type-of-auto-created-primary-keys
DEFAULT_AUTO_FIELD = "django.db.models.AutoField"
# Baby Buddy configuration
# See https://docs.baby-buddy.net/ for details about these settings.
BABY_BUDDY = {
"ALLOW_UPLOADS": bool(strtobool(os.environ.get("ALLOW_UPLOADS") or "True")),
"READ_ONLY_GROUP_NAME": "read_only",
}
# Home assistant specific configuration
ENABLE_HOME_ASSISTANT_SUPPORT = bool(
strtobool(os.environ.get("ENABLE_HOME_ASSISTANT_SUPPORT") or "False")
)
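The settings above repeatedly coerce environment strings to booleans via `strtobool(os.environ.get(...) or "False")`. A minimal self-contained sketch of that pattern (the `env_bool` helper is hypothetical; the real code uses `distutils.util.strtobool`, which is deprecated since Python 3.10):

```python
import os

def env_bool(name: str, default: str = "False") -> bool:
    # Mirror the strtobool(os.environ.get(...) or "False") pattern above:
    # an unset or empty variable falls back to the default string.
    value = (os.environ.get(name) or default).strip().lower()
    if value in ("y", "yes", "t", "true", "on", "1"):
        return True
    if value in ("n", "no", "f", "false", "off", "0"):
        return False
    raise ValueError(f"invalid truth value {value!r} for {name}")

os.environ["EMAIL_USE_TLS"] = "yes"
print(env_bool("EMAIL_USE_TLS"))              # True
print(env_bool("EMAIL_USE_SSL"))              # False (unset, uses default)
print(env_bool("ALLOW_UPLOADS", default="True"))  # True, as in BABY_BUDDY above
```

This keeps the settings module import-safe: a malformed value fails loudly at startup rather than silently evaluating truthy.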
|
protos | losses_pb2 | # Generated by the protocol buffer compiler. DO NOT EDIT!
# source: object_detection/protos/losses.proto
import sys
_b = sys.version_info[0] < 3 and (lambda x: x) or (lambda x: x.encode("latin1"))
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pb2
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name="object_detection/protos/losses.proto",
package="object_detection.protos",
syntax="proto2",
serialized_pb=_b(
'\n$object_detection/protos/losses.proto\x12\x17object_detection.protos"\x9f\x02\n\x04Loss\x12\x44\n\x11localization_loss\x18\x01 \x01(\x0b\x32).object_detection.protos.LocalizationLoss\x12H\n\x13\x63lassification_loss\x18\x02 \x01(\x0b\x32+.object_detection.protos.ClassificationLoss\x12\x45\n\x12hard_example_miner\x18\x03 \x01(\x0b\x32).object_detection.protos.HardExampleMiner\x12 \n\x15\x63lassification_weight\x18\x04 \x01(\x02:\x01\x31\x12\x1e\n\x13localization_weight\x18\x05 \x01(\x02:\x01\x31"\x9a\x02\n\x10LocalizationLoss\x12J\n\x0bweighted_l2\x18\x01 \x01(\x0b\x32\x33.object_detection.protos.WeightedL2LocalizationLossH\x00\x12W\n\x12weighted_smooth_l1\x18\x02 \x01(\x0b\x32\x39.object_detection.protos.WeightedSmoothL1LocalizationLossH\x00\x12L\n\x0cweighted_iou\x18\x03 \x01(\x0b\x32\x34.object_detection.protos.WeightedIOULocalizationLossH\x00\x42\x13\n\x11localization_loss">\n\x1aWeightedL2LocalizationLoss\x12 \n\x11\x61nchorwise_output\x18\x01 \x01(\x08:\x05\x66\x61lse"D\n WeightedSmoothL1LocalizationLoss\x12 \n\x11\x61nchorwise_output\x18\x01 \x01(\x08:\x05\x66\x61lse"\x1d\n\x1bWeightedIOULocalizationLoss"\x96\x03\n\x12\x43lassificationLoss\x12V\n\x10weighted_sigmoid\x18\x01 \x01(\x0b\x32:.object_detection.protos.WeightedSigmoidClassificationLossH\x00\x12V\n\x10weighted_softmax\x18\x02 \x01(\x0b\x32:.object_detection.protos.WeightedSoftmaxClassificationLossH\x00\x12^\n\x14\x62ootstrapped_sigmoid\x18\x03 \x01(\x0b\x32>.object_detection.protos.BootstrappedSigmoidClassificationLossH\x00\x12Y\n\x16weighted_sigmoid_focal\x18\x04 \x01(\x0b\x32\x37.object_detection.protos.SigmoidFocalClassificationLossH\x00\x42\x15\n\x13\x63lassification_loss"E\n!WeightedSigmoidClassificationLoss\x12 \n\x11\x61nchorwise_output\x18\x01 \x01(\x08:\x05\x66\x61lse"c\n\x1eSigmoidFocalClassificationLoss\x12 \n\x11\x61nchorwise_output\x18\x01 \x01(\x08:\x05\x66\x61lse\x12\x10\n\x05gamma\x18\x02 \x01(\x02:\x01\x32\x12\r\n\x05\x61lpha\x18\x03 
\x01(\x02"]\n!WeightedSoftmaxClassificationLoss\x12 \n\x11\x61nchorwise_output\x18\x01 \x01(\x08:\x05\x66\x61lse\x12\x16\n\x0blogit_scale\x18\x02 \x01(\x02:\x01\x31"w\n%BootstrappedSigmoidClassificationLoss\x12\r\n\x05\x61lpha\x18\x01 \x01(\x02\x12\x1d\n\x0ehard_bootstrap\x18\x02 \x01(\x08:\x05\x66\x61lse\x12 \n\x11\x61nchorwise_output\x18\x03 \x01(\x08:\x05\x66\x61lse"\xa1\x02\n\x10HardExampleMiner\x12\x1d\n\x11num_hard_examples\x18\x01 \x01(\x05:\x02\x36\x34\x12\x1a\n\riou_threshold\x18\x02 \x01(\x02:\x03\x30.7\x12K\n\tloss_type\x18\x03 \x01(\x0e\x32\x32.object_detection.protos.HardExampleMiner.LossType:\x04\x42OTH\x12%\n\x1amax_negatives_per_positive\x18\x04 \x01(\x05:\x01\x30\x12"\n\x17min_negatives_per_image\x18\x05 \x01(\x05:\x01\x30":\n\x08LossType\x12\x08\n\x04\x42OTH\x10\x00\x12\x12\n\x0e\x43LASSIFICATION\x10\x01\x12\x10\n\x0cLOCALIZATION\x10\x02'
),
)
_HARDEXAMPLEMINER_LOSSTYPE = _descriptor.EnumDescriptor(
name="LossType",
full_name="object_detection.protos.HardExampleMiner.LossType",
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name="BOTH", index=0, number=0, options=None, type=None
),
_descriptor.EnumValueDescriptor(
name="CLASSIFICATION", index=1, number=1, options=None, type=None
),
_descriptor.EnumValueDescriptor(
name="LOCALIZATION", index=2, number=2, options=None, type=None
),
],
containing_type=None,
options=None,
serialized_start=1834,
serialized_end=1892,
)
_sym_db.RegisterEnumDescriptor(_HARDEXAMPLEMINER_LOSSTYPE)
_LOSS = _descriptor.Descriptor(
name="Loss",
full_name="object_detection.protos.Loss",
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name="localization_loss",
full_name="object_detection.protos.Loss.localization_loss",
index=0,
number=1,
type=11,
cpp_type=10,
label=1,
has_default_value=False,
default_value=None,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
_descriptor.FieldDescriptor(
name="classification_loss",
full_name="object_detection.protos.Loss.classification_loss",
index=1,
number=2,
type=11,
cpp_type=10,
label=1,
has_default_value=False,
default_value=None,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
_descriptor.FieldDescriptor(
name="hard_example_miner",
full_name="object_detection.protos.Loss.hard_example_miner",
index=2,
number=3,
type=11,
cpp_type=10,
label=1,
has_default_value=False,
default_value=None,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
_descriptor.FieldDescriptor(
name="classification_weight",
full_name="object_detection.protos.Loss.classification_weight",
index=3,
number=4,
type=2,
cpp_type=6,
label=1,
has_default_value=True,
default_value=float(1),
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
_descriptor.FieldDescriptor(
name="localization_weight",
full_name="object_detection.protos.Loss.localization_weight",
index=4,
number=5,
type=2,
cpp_type=6,
label=1,
has_default_value=True,
default_value=float(1),
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
],
extensions=[],
nested_types=[],
enum_types=[],
options=None,
is_extendable=False,
syntax="proto2",
extension_ranges=[],
oneofs=[],
serialized_start=66,
serialized_end=353,
)
_LOCALIZATIONLOSS = _descriptor.Descriptor(
name="LocalizationLoss",
full_name="object_detection.protos.LocalizationLoss",
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name="weighted_l2",
full_name="object_detection.protos.LocalizationLoss.weighted_l2",
index=0,
number=1,
type=11,
cpp_type=10,
label=1,
has_default_value=False,
default_value=None,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
_descriptor.FieldDescriptor(
name="weighted_smooth_l1",
full_name="object_detection.protos.LocalizationLoss.weighted_smooth_l1",
index=1,
number=2,
type=11,
cpp_type=10,
label=1,
has_default_value=False,
default_value=None,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
_descriptor.FieldDescriptor(
name="weighted_iou",
full_name="object_detection.protos.LocalizationLoss.weighted_iou",
index=2,
number=3,
type=11,
cpp_type=10,
label=1,
has_default_value=False,
default_value=None,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
],
extensions=[],
nested_types=[],
enum_types=[],
options=None,
is_extendable=False,
syntax="proto2",
extension_ranges=[],
oneofs=[
_descriptor.OneofDescriptor(
name="localization_loss",
full_name="object_detection.protos.LocalizationLoss.localization_loss",
index=0,
containing_type=None,
fields=[],
),
],
serialized_start=356,
serialized_end=638,
)
_WEIGHTEDL2LOCALIZATIONLOSS = _descriptor.Descriptor(
name="WeightedL2LocalizationLoss",
full_name="object_detection.protos.WeightedL2LocalizationLoss",
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name="anchorwise_output",
full_name="object_detection.protos.WeightedL2LocalizationLoss.anchorwise_output",
index=0,
number=1,
type=8,
cpp_type=7,
label=1,
has_default_value=True,
default_value=False,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
],
extensions=[],
nested_types=[],
enum_types=[],
options=None,
is_extendable=False,
syntax="proto2",
extension_ranges=[],
oneofs=[],
serialized_start=640,
serialized_end=702,
)
_WEIGHTEDSMOOTHL1LOCALIZATIONLOSS = _descriptor.Descriptor(
name="WeightedSmoothL1LocalizationLoss",
full_name="object_detection.protos.WeightedSmoothL1LocalizationLoss",
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name="anchorwise_output",
full_name="object_detection.protos.WeightedSmoothL1LocalizationLoss.anchorwise_output",
index=0,
number=1,
type=8,
cpp_type=7,
label=1,
has_default_value=True,
default_value=False,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
],
extensions=[],
nested_types=[],
enum_types=[],
options=None,
is_extendable=False,
syntax="proto2",
extension_ranges=[],
oneofs=[],
serialized_start=704,
serialized_end=772,
)
_WEIGHTEDIOULOCALIZATIONLOSS = _descriptor.Descriptor(
name="WeightedIOULocalizationLoss",
full_name="object_detection.protos.WeightedIOULocalizationLoss",
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[],
extensions=[],
nested_types=[],
enum_types=[],
options=None,
is_extendable=False,
syntax="proto2",
extension_ranges=[],
oneofs=[],
serialized_start=774,
serialized_end=803,
)
_CLASSIFICATIONLOSS = _descriptor.Descriptor(
name="ClassificationLoss",
full_name="object_detection.protos.ClassificationLoss",
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name="weighted_sigmoid",
full_name="object_detection.protos.ClassificationLoss.weighted_sigmoid",
index=0,
number=1,
type=11,
cpp_type=10,
label=1,
has_default_value=False,
default_value=None,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
_descriptor.FieldDescriptor(
name="weighted_softmax",
full_name="object_detection.protos.ClassificationLoss.weighted_softmax",
index=1,
number=2,
type=11,
cpp_type=10,
label=1,
has_default_value=False,
default_value=None,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
_descriptor.FieldDescriptor(
name="bootstrapped_sigmoid",
full_name="object_detection.protos.ClassificationLoss.bootstrapped_sigmoid",
index=2,
number=3,
type=11,
cpp_type=10,
label=1,
has_default_value=False,
default_value=None,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
_descriptor.FieldDescriptor(
name="weighted_sigmoid_focal",
full_name="object_detection.protos.ClassificationLoss.weighted_sigmoid_focal",
index=3,
number=4,
type=11,
cpp_type=10,
label=1,
has_default_value=False,
default_value=None,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
],
extensions=[],
nested_types=[],
enum_types=[],
options=None,
is_extendable=False,
syntax="proto2",
extension_ranges=[],
oneofs=[
_descriptor.OneofDescriptor(
name="classification_loss",
full_name="object_detection.protos.ClassificationLoss.classification_loss",
index=0,
containing_type=None,
fields=[],
),
],
serialized_start=806,
serialized_end=1212,
)
_WEIGHTEDSIGMOIDCLASSIFICATIONLOSS = _descriptor.Descriptor(
name="WeightedSigmoidClassificationLoss",
full_name="object_detection.protos.WeightedSigmoidClassificationLoss",
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name="anchorwise_output",
full_name="object_detection.protos.WeightedSigmoidClassificationLoss.anchorwise_output",
index=0,
number=1,
type=8,
cpp_type=7,
label=1,
has_default_value=True,
default_value=False,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
],
extensions=[],
nested_types=[],
enum_types=[],
options=None,
is_extendable=False,
syntax="proto2",
extension_ranges=[],
oneofs=[],
serialized_start=1214,
serialized_end=1283,
)
_SIGMOIDFOCALCLASSIFICATIONLOSS = _descriptor.Descriptor(
name="SigmoidFocalClassificationLoss",
full_name="object_detection.protos.SigmoidFocalClassificationLoss",
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name="anchorwise_output",
full_name="object_detection.protos.SigmoidFocalClassificationLoss.anchorwise_output",
index=0,
number=1,
type=8,
cpp_type=7,
label=1,
has_default_value=True,
default_value=False,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
_descriptor.FieldDescriptor(
name="gamma",
full_name="object_detection.protos.SigmoidFocalClassificationLoss.gamma",
index=1,
number=2,
type=2,
cpp_type=6,
label=1,
has_default_value=True,
default_value=float(2),
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
_descriptor.FieldDescriptor(
name="alpha",
full_name="object_detection.protos.SigmoidFocalClassificationLoss.alpha",
index=2,
number=3,
type=2,
cpp_type=6,
label=1,
has_default_value=False,
default_value=float(0),
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
],
extensions=[],
nested_types=[],
enum_types=[],
options=None,
is_extendable=False,
syntax="proto2",
extension_ranges=[],
oneofs=[],
serialized_start=1285,
serialized_end=1384,
)
_WEIGHTEDSOFTMAXCLASSIFICATIONLOSS = _descriptor.Descriptor(
name="WeightedSoftmaxClassificationLoss",
full_name="object_detection.protos.WeightedSoftmaxClassificationLoss",
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name="anchorwise_output",
full_name="object_detection.protos.WeightedSoftmaxClassificationLoss.anchorwise_output",
index=0,
number=1,
type=8,
cpp_type=7,
label=1,
has_default_value=True,
default_value=False,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
_descriptor.FieldDescriptor(
name="logit_scale",
full_name="object_detection.protos.WeightedSoftmaxClassificationLoss.logit_scale",
index=1,
number=2,
type=2,
cpp_type=6,
label=1,
has_default_value=True,
default_value=float(1),
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
],
extensions=[],
nested_types=[],
enum_types=[],
options=None,
is_extendable=False,
syntax="proto2",
extension_ranges=[],
oneofs=[],
serialized_start=1386,
serialized_end=1479,
)
_BOOTSTRAPPEDSIGMOIDCLASSIFICATIONLOSS = _descriptor.Descriptor(
name="BootstrappedSigmoidClassificationLoss",
full_name="object_detection.protos.BootstrappedSigmoidClassificationLoss",
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name="alpha",
full_name="object_detection.protos.BootstrappedSigmoidClassificationLoss.alpha",
index=0,
number=1,
type=2,
cpp_type=6,
label=1,
has_default_value=False,
default_value=float(0),
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
_descriptor.FieldDescriptor(
name="hard_bootstrap",
full_name="object_detection.protos.BootstrappedSigmoidClassificationLoss.hard_bootstrap",
index=1,
number=2,
type=8,
cpp_type=7,
label=1,
has_default_value=True,
default_value=False,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
_descriptor.FieldDescriptor(
name="anchorwise_output",
full_name="object_detection.protos.BootstrappedSigmoidClassificationLoss.anchorwise_output",
index=2,
number=3,
type=8,
cpp_type=7,
label=1,
has_default_value=True,
default_value=False,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
],
extensions=[],
nested_types=[],
enum_types=[],
options=None,
is_extendable=False,
syntax="proto2",
extension_ranges=[],
oneofs=[],
serialized_start=1481,
serialized_end=1600,
)
_HARDEXAMPLEMINER = _descriptor.Descriptor(
name="HardExampleMiner",
full_name="object_detection.protos.HardExampleMiner",
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name="num_hard_examples",
full_name="object_detection.protos.HardExampleMiner.num_hard_examples",
index=0,
number=1,
type=5,
cpp_type=1,
label=1,
has_default_value=True,
default_value=64,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
_descriptor.FieldDescriptor(
name="iou_threshold",
full_name="object_detection.protos.HardExampleMiner.iou_threshold",
index=1,
number=2,
type=2,
cpp_type=6,
label=1,
has_default_value=True,
default_value=float(0.7),
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
_descriptor.FieldDescriptor(
name="loss_type",
full_name="object_detection.protos.HardExampleMiner.loss_type",
index=2,
number=3,
type=14,
cpp_type=8,
label=1,
has_default_value=True,
default_value=0,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
_descriptor.FieldDescriptor(
name="max_negatives_per_positive",
full_name="object_detection.protos.HardExampleMiner.max_negatives_per_positive",
index=3,
number=4,
type=5,
cpp_type=1,
label=1,
has_default_value=True,
default_value=0,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
_descriptor.FieldDescriptor(
name="min_negatives_per_image",
full_name="object_detection.protos.HardExampleMiner.min_negatives_per_image",
index=4,
number=5,
type=5,
cpp_type=1,
label=1,
has_default_value=True,
default_value=0,
message_type=None,
enum_type=None,
containing_type=None,
is_extension=False,
extension_scope=None,
options=None,
file=DESCRIPTOR,
),
],
extensions=[],
nested_types=[],
enum_types=[
_HARDEXAMPLEMINER_LOSSTYPE,
],
options=None,
is_extendable=False,
syntax="proto2",
extension_ranges=[],
oneofs=[],
serialized_start=1603,
serialized_end=1892,
)
_LOSS.fields_by_name["localization_loss"].message_type = _LOCALIZATIONLOSS
_LOSS.fields_by_name["classification_loss"].message_type = _CLASSIFICATIONLOSS
_LOSS.fields_by_name["hard_example_miner"].message_type = _HARDEXAMPLEMINER
_LOCALIZATIONLOSS.fields_by_name[
"weighted_l2"
].message_type = _WEIGHTEDL2LOCALIZATIONLOSS
_LOCALIZATIONLOSS.fields_by_name[
"weighted_smooth_l1"
].message_type = _WEIGHTEDSMOOTHL1LOCALIZATIONLOSS
_LOCALIZATIONLOSS.fields_by_name[
"weighted_iou"
].message_type = _WEIGHTEDIOULOCALIZATIONLOSS
_LOCALIZATIONLOSS.oneofs_by_name["localization_loss"].fields.append(
_LOCALIZATIONLOSS.fields_by_name["weighted_l2"]
)
_LOCALIZATIONLOSS.fields_by_name[
"weighted_l2"
].containing_oneof = _LOCALIZATIONLOSS.oneofs_by_name["localization_loss"]
_LOCALIZATIONLOSS.oneofs_by_name["localization_loss"].fields.append(
_LOCALIZATIONLOSS.fields_by_name["weighted_smooth_l1"]
)
_LOCALIZATIONLOSS.fields_by_name[
"weighted_smooth_l1"
].containing_oneof = _LOCALIZATIONLOSS.oneofs_by_name["localization_loss"]
_LOCALIZATIONLOSS.oneofs_by_name["localization_loss"].fields.append(
_LOCALIZATIONLOSS.fields_by_name["weighted_iou"]
)
_LOCALIZATIONLOSS.fields_by_name[
"weighted_iou"
].containing_oneof = _LOCALIZATIONLOSS.oneofs_by_name["localization_loss"]
_CLASSIFICATIONLOSS.fields_by_name[
"weighted_sigmoid"
].message_type = _WEIGHTEDSIGMOIDCLASSIFICATIONLOSS
_CLASSIFICATIONLOSS.fields_by_name[
"weighted_softmax"
].message_type = _WEIGHTEDSOFTMAXCLASSIFICATIONLOSS
_CLASSIFICATIONLOSS.fields_by_name[
"bootstrapped_sigmoid"
].message_type = _BOOTSTRAPPEDSIGMOIDCLASSIFICATIONLOSS
_CLASSIFICATIONLOSS.fields_by_name[
"weighted_sigmoid_focal"
].message_type = _SIGMOIDFOCALCLASSIFICATIONLOSS
_CLASSIFICATIONLOSS.oneofs_by_name["classification_loss"].fields.append(
_CLASSIFICATIONLOSS.fields_by_name["weighted_sigmoid"]
)
_CLASSIFICATIONLOSS.fields_by_name[
"weighted_sigmoid"
].containing_oneof = _CLASSIFICATIONLOSS.oneofs_by_name["classification_loss"]
_CLASSIFICATIONLOSS.oneofs_by_name["classification_loss"].fields.append(
_CLASSIFICATIONLOSS.fields_by_name["weighted_softmax"]
)
_CLASSIFICATIONLOSS.fields_by_name[
"weighted_softmax"
].containing_oneof = _CLASSIFICATIONLOSS.oneofs_by_name["classification_loss"]
_CLASSIFICATIONLOSS.oneofs_by_name["classification_loss"].fields.append(
_CLASSIFICATIONLOSS.fields_by_name["bootstrapped_sigmoid"]
)
_CLASSIFICATIONLOSS.fields_by_name[
"bootstrapped_sigmoid"
].containing_oneof = _CLASSIFICATIONLOSS.oneofs_by_name["classification_loss"]
_CLASSIFICATIONLOSS.oneofs_by_name["classification_loss"].fields.append(
_CLASSIFICATIONLOSS.fields_by_name["weighted_sigmoid_focal"]
)
_CLASSIFICATIONLOSS.fields_by_name[
"weighted_sigmoid_focal"
].containing_oneof = _CLASSIFICATIONLOSS.oneofs_by_name["classification_loss"]
_HARDEXAMPLEMINER.fields_by_name["loss_type"].enum_type = _HARDEXAMPLEMINER_LOSSTYPE
_HARDEXAMPLEMINER_LOSSTYPE.containing_type = _HARDEXAMPLEMINER
DESCRIPTOR.message_types_by_name["Loss"] = _LOSS
DESCRIPTOR.message_types_by_name["LocalizationLoss"] = _LOCALIZATIONLOSS
DESCRIPTOR.message_types_by_name[
"WeightedL2LocalizationLoss"
] = _WEIGHTEDL2LOCALIZATIONLOSS
DESCRIPTOR.message_types_by_name[
"WeightedSmoothL1LocalizationLoss"
] = _WEIGHTEDSMOOTHL1LOCALIZATIONLOSS
DESCRIPTOR.message_types_by_name[
"WeightedIOULocalizationLoss"
] = _WEIGHTEDIOULOCALIZATIONLOSS
DESCRIPTOR.message_types_by_name["ClassificationLoss"] = _CLASSIFICATIONLOSS
DESCRIPTOR.message_types_by_name[
"WeightedSigmoidClassificationLoss"
] = _WEIGHTEDSIGMOIDCLASSIFICATIONLOSS
DESCRIPTOR.message_types_by_name[
"SigmoidFocalClassificationLoss"
] = _SIGMOIDFOCALCLASSIFICATIONLOSS
DESCRIPTOR.message_types_by_name[
"WeightedSoftmaxClassificationLoss"
] = _WEIGHTEDSOFTMAXCLASSIFICATIONLOSS
DESCRIPTOR.message_types_by_name[
"BootstrappedSigmoidClassificationLoss"
] = _BOOTSTRAPPEDSIGMOIDCLASSIFICATIONLOSS
DESCRIPTOR.message_types_by_name["HardExampleMiner"] = _HARDEXAMPLEMINER
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
Loss = _reflection.GeneratedProtocolMessageType(
"Loss",
(_message.Message,),
dict(
DESCRIPTOR=_LOSS,
__module__="object_detection.protos.losses_pb2",
# @@protoc_insertion_point(class_scope:object_detection.protos.Loss)
),
)
_sym_db.RegisterMessage(Loss)
LocalizationLoss = _reflection.GeneratedProtocolMessageType(
"LocalizationLoss",
(_message.Message,),
dict(
DESCRIPTOR=_LOCALIZATIONLOSS,
__module__="object_detection.protos.losses_pb2",
# @@protoc_insertion_point(class_scope:object_detection.protos.LocalizationLoss)
),
)
_sym_db.RegisterMessage(LocalizationLoss)
WeightedL2LocalizationLoss = _reflection.GeneratedProtocolMessageType(
"WeightedL2LocalizationLoss",
(_message.Message,),
dict(
DESCRIPTOR=_WEIGHTEDL2LOCALIZATIONLOSS,
__module__="object_detection.protos.losses_pb2",
# @@protoc_insertion_point(class_scope:object_detection.protos.WeightedL2LocalizationLoss)
),
)
_sym_db.RegisterMessage(WeightedL2LocalizationLoss)
WeightedSmoothL1LocalizationLoss = _reflection.GeneratedProtocolMessageType(
"WeightedSmoothL1LocalizationLoss",
(_message.Message,),
dict(
DESCRIPTOR=_WEIGHTEDSMOOTHL1LOCALIZATIONLOSS,
__module__="object_detection.protos.losses_pb2",
# @@protoc_insertion_point(class_scope:object_detection.protos.WeightedSmoothL1LocalizationLoss)
),
)
_sym_db.RegisterMessage(WeightedSmoothL1LocalizationLoss)
WeightedIOULocalizationLoss = _reflection.GeneratedProtocolMessageType(
"WeightedIOULocalizationLoss",
(_message.Message,),
dict(
DESCRIPTOR=_WEIGHTEDIOULOCALIZATIONLOSS,
__module__="object_detection.protos.losses_pb2",
# @@protoc_insertion_point(class_scope:object_detection.protos.WeightedIOULocalizationLoss)
),
)
_sym_db.RegisterMessage(WeightedIOULocalizationLoss)
ClassificationLoss = _reflection.GeneratedProtocolMessageType(
"ClassificationLoss",
(_message.Message,),
dict(
DESCRIPTOR=_CLASSIFICATIONLOSS,
__module__="object_detection.protos.losses_pb2",
# @@protoc_insertion_point(class_scope:object_detection.protos.ClassificationLoss)
),
)
_sym_db.RegisterMessage(ClassificationLoss)
WeightedSigmoidClassificationLoss = _reflection.GeneratedProtocolMessageType(
"WeightedSigmoidClassificationLoss",
(_message.Message,),
dict(
DESCRIPTOR=_WEIGHTEDSIGMOIDCLASSIFICATIONLOSS,
__module__="object_detection.protos.losses_pb2",
# @@protoc_insertion_point(class_scope:object_detection.protos.WeightedSigmoidClassificationLoss)
),
)
_sym_db.RegisterMessage(WeightedSigmoidClassificationLoss)
SigmoidFocalClassificationLoss = _reflection.GeneratedProtocolMessageType(
"SigmoidFocalClassificationLoss",
(_message.Message,),
dict(
DESCRIPTOR=_SIGMOIDFOCALCLASSIFICATIONLOSS,
__module__="object_detection.protos.losses_pb2",
# @@protoc_insertion_point(class_scope:object_detection.protos.SigmoidFocalClassificationLoss)
),
)
_sym_db.RegisterMessage(SigmoidFocalClassificationLoss)
WeightedSoftmaxClassificationLoss = _reflection.GeneratedProtocolMessageType(
"WeightedSoftmaxClassificationLoss",
(_message.Message,),
dict(
DESCRIPTOR=_WEIGHTEDSOFTMAXCLASSIFICATIONLOSS,
__module__="object_detection.protos.losses_pb2",
# @@protoc_insertion_point(class_scope:object_detection.protos.WeightedSoftmaxClassificationLoss)
),
)
_sym_db.RegisterMessage(WeightedSoftmaxClassificationLoss)
BootstrappedSigmoidClassificationLoss = _reflection.GeneratedProtocolMessageType(
"BootstrappedSigmoidClassificationLoss",
(_message.Message,),
dict(
DESCRIPTOR=_BOOTSTRAPPEDSIGMOIDCLASSIFICATIONLOSS,
__module__="object_detection.protos.losses_pb2",
# @@protoc_insertion_point(class_scope:object_detection.protos.BootstrappedSigmoidClassificationLoss)
),
)
_sym_db.RegisterMessage(BootstrappedSigmoidClassificationLoss)
HardExampleMiner = _reflection.GeneratedProtocolMessageType(
"HardExampleMiner",
(_message.Message,),
dict(
DESCRIPTOR=_HARDEXAMPLEMINER,
__module__="object_detection.protos.losses_pb2",
# @@protoc_insertion_point(class_scope:object_detection.protos.HardExampleMiner)
),
)
_sym_db.RegisterMessage(HardExampleMiner)
# @@protoc_insertion_point(module_scope)
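The descriptors above define the `Loss` message consumed by the TensorFlow Object Detection API's pipeline configs. A sketch of a matching text-format fragment, using only field names and defaults present in the descriptors (the specific values are illustrative, not recommendations):

```proto
loss {
  localization_loss {
    weighted_smooth_l1 {
    }
  }
  classification_loss {
    weighted_sigmoid_focal {
      gamma: 2.0
      alpha: 0.25
    }
  }
  hard_example_miner {
    num_hard_examples: 64
    iou_threshold: 0.7
    loss_type: BOTH
  }
  classification_weight: 1.0
  localization_weight: 1.0
}
```

The `localization_loss` and `classification_loss` blocks are `oneof` groups, so exactly one variant may be set in each.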
|
transmissionrpc | httphandler | # -*- coding: utf-8 -*-
# Copyright (c) 2011-2013 Erik Svensson <erik.public@gmail.com>
# Licensed under the MIT license.
import sys
from six import PY3
from transmissionrpc.error import HTTPHandlerError
if PY3:
from http.client import BadStatusLine
from urllib.error import HTTPError, URLError
from urllib.request import (
HTTPBasicAuthHandler,
HTTPDigestAuthHandler,
HTTPPasswordMgrWithDefaultRealm,
Request,
build_opener,
)
else:
from httplib import BadStatusLine
from urllib2 import (
HTTPBasicAuthHandler,
HTTPDigestAuthHandler,
HTTPError,
HTTPPasswordMgrWithDefaultRealm,
Request,
URLError,
build_opener,
)
class HTTPHandler(object):
"""
Prototype for HTTP handling.
"""
def set_authentication(self, uri, login, password):
"""
Transmission uses basic authentication in earlier versions and digest
authentication in later versions.
* uri, the authentication realm URI.
* login, the authentication login.
* password, the authentication password.
"""
raise NotImplementedError(
"Bad HTTPHandler, failed to implement set_authentication."
)
def request(self, url, query, headers, timeout):
"""
Implement an HTTP POST request here.
* url, The URL to request.
* query, The query data to send. This is a JSON data string.
* headers, a dictionary of headers to send.
* timeout, the request timeout in seconds.
"""
raise NotImplementedError("Bad HTTPHandler, failed to implement request.")
class DefaultHTTPHandler(HTTPHandler):
"""
The default HTTP handler provided with transmissionrpc.
"""
def __init__(self):
HTTPHandler.__init__(self)
self.http_opener = build_opener()
def set_authentication(self, uri, login, password):
password_manager = HTTPPasswordMgrWithDefaultRealm()
password_manager.add_password(realm=None, uri=uri, user=login, passwd=password)
self.http_opener = build_opener(
HTTPBasicAuthHandler(password_manager),
HTTPDigestAuthHandler(password_manager),
)
def request(self, url, query, headers, timeout):
request = Request(url, query.encode("utf-8"), headers)
try:
if (
sys.version_info[0] == 2 and sys.version_info[1] > 5
) or sys.version_info[0] > 2:
response = self.http_opener.open(request, timeout=timeout)
else:
response = self.http_opener.open(request)
except HTTPError as error:
if error.fp is None:
raise HTTPHandlerError(
error.filename, error.code, error.msg, dict(error.hdrs)
)
else:
raise HTTPHandlerError(
error.filename,
error.code,
error.msg,
dict(error.hdrs),
error.read(),
)
except URLError as error:
# urllib2.URLError documentation is horrendous!
# Try to get the tuple arguments of URLError
if (
hasattr(error.reason, "args")
and isinstance(error.reason.args, tuple)
and len(error.reason.args) == 2
):
raise HTTPHandlerError(
httpcode=error.reason.args[0], httpmsg=error.reason.args[1]
)
else:
raise HTTPHandlerError(httpmsg="urllib2.URLError: %s" % (error.reason))
except BadStatusLine as error:
raise HTTPHandlerError(httpmsg="httplib.BadStatusLine: %s" % (error.line))
return response.read().decode("utf-8")
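The `HTTPHandler` prototype above defines the interface a custom handler must satisfy: `set_authentication` and `request`. A minimal in-memory sketch of a conforming handler (the `EchoHTTPHandler` class and its canned response are hypothetical, and the base class is redefined here so the example is self-contained rather than importing transmissionrpc):

```python
class HTTPHandler:
    """Self-contained stand-in for transmissionrpc's HTTPHandler prototype."""

    def set_authentication(self, uri, login, password):
        raise NotImplementedError

    def request(self, url, query, headers, timeout):
        raise NotImplementedError


class EchoHTTPHandler(HTTPHandler):
    """Hypothetical handler that records credentials and returns a canned body."""

    def set_authentication(self, uri, login, password):
        self.credentials = (uri, login, password)

    def request(self, url, query, headers, timeout):
        # A real handler would POST `query` (a JSON string) to `url` and
        # return the decoded response body; here we just return a stub.
        return '{"result": "success"}'


handler = EchoHTTPHandler()
handler.set_authentication("http://localhost:9091/", "admin", "secret")
body = handler.request(
    "http://localhost:9091/transmission/rpc",
    '{"method": "session-get"}', {}, timeout=30,
)
print(body)  # {"result": "success"}
```

Passing such an object as the `http_handler` argument lets callers swap in alternative transports (e.g. one built on `requests`) without touching the RPC layer.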
|
utils | path | import re
from pathlib import Path
from typing import Callable, Optional, Union
from streamlink.compat import is_win32
REPLACEMENT = "_"
SPECIAL_PATH_PARTS = (".", "..")
_UNPRINTABLE = "".join(chr(c) for c in range(32))
_UNSUPPORTED_POSIX = "/"
_UNSUPPORTED_WIN32 = '\x7f"*/:<>?\\|'
RE_CHARS_POSIX = re.compile(f"[{re.escape(_UNPRINTABLE + _UNSUPPORTED_POSIX)}]+")
RE_CHARS_WIN32 = re.compile(f"[{re.escape(_UNPRINTABLE + _UNSUPPORTED_WIN32)}]+")
if is_win32:
RE_CHARS = RE_CHARS_WIN32
else:
RE_CHARS = RE_CHARS_POSIX
def replace_chars(path: str, charmap: Optional[str] = None, replacement: str = REPLACEMENT) -> str:
if charmap is None:
pattern = RE_CHARS
else:
charmap = charmap.lower()
if charmap in ("posix", "unix"):
pattern = RE_CHARS_POSIX
elif charmap in ("windows", "win32"):
pattern = RE_CHARS_WIN32
else:
raise ValueError("Invalid charmap")
return pattern.sub(replacement, path)
def replace_path(pathlike: Union[str, Path], mapper: Callable[[str], str]) -> Path:
def get_part(part):
newpart = mapper(part)
return REPLACEMENT if part != newpart and newpart in SPECIAL_PATH_PARTS else newpart
return Path(*(get_part(part) for part in Path(pathlike).expanduser().parts))
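A quick illustration of what the character classes above match. This is a minimal standalone re-creation of the Windows pattern rather than an import of streamlink's module, so it can run anywhere:

```python
import re

REPLACEMENT = "_"
_UNPRINTABLE = "".join(chr(c) for c in range(32))
_UNSUPPORTED_WIN32 = '\x7f"*/:<>?\\|'

# Same construction as RE_CHARS_WIN32 above: a run of one or more
# forbidden characters collapses into a single replacement character
RE_CHARS_WIN32 = re.compile("[" + re.escape(_UNPRINTABLE + _UNSUPPORTED_WIN32) + "]+")


def replace_chars(path, pattern=RE_CHARS_WIN32, replacement=REPLACEMENT):
    return pattern.sub(replacement, path)


print(replace_chars("my:video*title?.ts"))  # my_video_title_.ts
print(replace_chars("a<<>>b"))              # a_b
```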
|
DragItemAround | DragItemAround | # Updated by mvl on 2009-02-22 for PyObjC 2
import objc
from AppKit import *
from Foundation import *
from PyObjCTools import AppHelper
class DraggableItemView(NSView):
"""."""
_locationDefault = NSMakePoint(0.0, 0.0)
_itemColorDefault = NSColor.redColor()
_backgroundColorDefault = NSColor.whiteColor()
def awakeFromNib(self):
self.dragging = None
def initWithFrame_(self, frame):
"""."""
result = super(DraggableItemView, self).initWithFrame_(frame)
if result is not None:
result._location = self._locationDefault
result._itemColor = self._itemColorDefault
result._backgroundColor = self._backgroundColorDefault
return result
def drawRect_(self, rect):
"""."""
# Use the configurable background color so setBackgroundColor_ takes effect
self.backgroundColor().set()
NSBezierPath.fillRect_(rect)
self.itemColor().set()
NSBezierPath.fillRect_(self.calculatedItemBounds())
def isOpaque(self):
"""."""
return self.backgroundColor().alphaComponent() >= 1.0
def offsetLocationByX_andY_(self, x, y):
"""."""
self.setNeedsDisplayInRect_(self.calculatedItemBounds())
if self.isFlipped():
invertDeltaY = -1
else:
invertDeltaY = 1
self.location().x = self.location().x + x
self.location().y = self.location().y + y * invertDeltaY
self.setNeedsDisplayInRect_(self.calculatedItemBounds())
def mouseDown_(self, event):
"""."""
clickLocation = self.convertPoint_fromView_(event.locationInWindow(), None)
itemHit = self.isPointInItem_(clickLocation)
if itemHit:
self.dragging = True
self.lastDragLocation = clickLocation
NSCursor.closedHandCursor().push()
def mouseDragged_(self, event):
"""."""
if self.dragging:
newDragLocation = self.convertPoint_fromView_(
event.locationInWindow(), None
)
self.offsetLocationByX_andY_(
newDragLocation.x - self.lastDragLocation.x,
newDragLocation.y - self.lastDragLocation.y,
)
self.lastDragLocation = newDragLocation
self.autoscroll_(event)
def mouseUp_(self, event):
"""."""
self.dragging = False
# NSCursor has both an instance and a class method w/ the name 'pop'
NSCursor.pyobjc_classMethods.pop()
self.window().invalidateCursorRectsForView_(self)
def acceptsFirstResponder(self):
"""."""
return True
def keyDown_(self, event):
"""."""
handled = False
characters = event.charactersIgnoringModifiers()
if characters.isEqual_("r"):
handled = True
self.setItemPropertiesToDefault_(self)
if handled is False:
super(DraggableItemView, self).keyDown_(event)
@objc.IBAction
def changeColor_(self, sender):
"""."""
self.setItemColor_(sender.color())
def resetCursorRects(self):
"""."""
self.discardCursorRects()
self.addCursorRect_cursor_(
self.calculatedItemBounds(), NSCursor.openHandCursor()
)
@objc.IBAction
def moveUp_(self, sender):
"""."""
self.offsetLocationByX_andY_(0.0, 10.0)
self.window().invalidateCursorRectsForView_(self)
@objc.IBAction
def moveDown_(self, sender):
"""."""
self.offsetLocationByX_andY_(0.0, -10.0)
self.window().invalidateCursorRectsForView_(self)
@objc.IBAction
def moveLeft_(self, sender):
"""."""
self.offsetLocationByX_andY_(-10.0, 0.0)
self.window().invalidateCursorRectsForView_(self)
@objc.IBAction
def moveRight_(self, sender):
"""."""
self.offsetLocationByX_andY_(10.0, 0.0)
self.window().invalidateCursorRectsForView_(self)
@objc.IBAction
def setItemPropertiesToDefault_(self, sender):
"""."""
self.setLocation_(self._locationDefault)
self.setItemColor_(self._itemColorDefault)
self.setBackgroundColor_(self._backgroundColorDefault)
def setLocation_(self, point):
"""."""
if not NSEqualPoints(point, self.location()):
self.setNeedsDisplayInRect_(self.calculatedItemBounds())
self._location = point
self.setNeedsDisplayInRect_(self.calculatedItemBounds())
self.window().invalidateCursorRectsForView_(self)
def location(self):
"""."""
return self._location
def setBackgroundColor_(self, aColor):
"""."""
if not self.backgroundColor().isEqual_(aColor):
self._backgroundColor = aColor
self.setNeedsDisplayInRect_(self.calculatedItemBounds())
def backgroundColor(self):
"""."""
return self._backgroundColor
def setItemColor_(self, aColor):
"""."""
if not self.itemColor().isEqual_(aColor):
self._itemColor = aColor
self.setNeedsDisplayInRect_(self.calculatedItemBounds())
def itemColor(self):
"""."""
return self._itemColor
def calculatedItemBounds(self):
"""."""
return NSMakeRect(self.location().x, self.location().y, 60.0, 20.0)
def isPointInItem_(self, testPoint):
"""."""
itemHit = NSPointInRect(testPoint, self.calculatedItemBounds())
return itemHit
if __name__ == "__main__":
AppHelper.runEventLoop()
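The flipped-coordinate handling in `offsetLocationByX_andY_` is easy to get wrong. A plain-Python sketch of the same delta logic, no AppKit required:

```python
def offset_location(location, dx, dy, flipped):
    """Return a new (x, y) after applying a movement delta.

    In a flipped view (origin top-left, y grows downward) the y delta is
    inverted, so callers like moveUp_ can always treat positive y as "up".
    """
    invert = -1 if flipped else 1
    x, y = location
    return (x + dx, y + dy * invert)


print(offset_location((10.0, 10.0), 5.0, 5.0, flipped=True))   # (15.0, 5.0)
print(offset_location((10.0, 10.0), 5.0, 5.0, flipped=False))  # (15.0, 15.0)
```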
|
PyObjCTest | test_cflocale | from CoreFoundation import *
from PyObjCTools.TestSupport import *
try:
unicode
except NameError:
unicode = str
try:
long
except NameError:
long = int
class TestLocale(TestCase):
def testTypes(self):
self.assertIsCFType(CFLocaleRef)
def testGetTypeID(self):
self.assertIsInstance(CFLocaleGetTypeID(), (int, long))
def testInspection(self):
locale = CFLocaleGetSystem()
self.assertIsInstance(locale, CFLocaleRef)
locale = CFLocaleCopyCurrent()
self.assertIsInstance(locale, CFLocaleRef)
idents = CFLocaleCopyAvailableLocaleIdentifiers()
self.assertIsInstance(idents, CFArrayRef)
codes = CFLocaleCopyISOLanguageCodes()
self.assertIsInstance(codes, CFArrayRef)
codes = CFLocaleCopyISOCountryCodes()
self.assertIsInstance(codes, CFArrayRef)
codes = CFLocaleCopyISOCurrencyCodes()
self.assertIsInstance(codes, CFArrayRef)
val = CFLocaleCreateCanonicalLanguageIdentifierFromString(None, "de_DE")
self.assertIsInstance(val, unicode)
self.assertEqual(val, "de-DE")
val = CFLocaleCreateCanonicalLocaleIdentifierFromString(None, "de_DE")
self.assertIsInstance(val, unicode)
self.assertEqual(val, "de_DE")
val = CFLocaleCreateCanonicalLocaleIdentifierFromScriptManagerCodes(
None, 55, 75
)
self.assertIsInstance(val, unicode)
dct = CFLocaleCreateComponentsFromLocaleIdentifier(None, "nl_NL")
try:
# 10.6
self.assertEqual(dct[kCFLocaleCountryCodeKey], "NL")
self.assertEqual(dct[kCFLocaleLanguageCodeKey], "nl")
except NameError:
# 10.5 and earlier
self.assertEqual(dct["locale:country code"], "NL")
self.assertEqual(dct["locale:language code"], "nl")
val = CFLocaleCreateLocaleIdentifierFromComponents(None, dct)
self.assertIsInstance(val, unicode)
self.assertEqual(val, "nl_NL")
locale = CFLocaleCreate(None, "nl_NL")
self.assertIsInstance(locale, CFLocaleRef)
locale = CFLocaleCreateCopy(None, locale)
self.assertIsInstance(locale, CFLocaleRef)
ident = CFLocaleGetIdentifier(locale)
self.assertEqual(ident, "nl_NL")
v = CFLocaleGetValue(locale, kCFLocaleDecimalSeparator)
self.assertEqual(v, ",")
v = CFLocaleCopyDisplayNameForPropertyValue(
locale, kCFLocaleIdentifier, "nl_NL"
)
if v is not None:
self.assertIsInstance(v, unicode)
self.assertEqual(v, b"Nederlands (Nederland)".decode("ascii"))
def testConstants(self):
self.assertIsInstance(kCFLocaleIdentifier, unicode)
self.assertIsInstance(kCFLocaleLanguageCode, unicode)
self.assertIsInstance(kCFLocaleCountryCode, unicode)
self.assertIsInstance(kCFLocaleScriptCode, unicode)
self.assertIsInstance(kCFLocaleVariantCode, unicode)
self.assertIsInstance(kCFLocaleExemplarCharacterSet, unicode)
self.assertIsInstance(kCFLocaleCalendarIdentifier, unicode)
self.assertIsInstance(kCFLocaleCalendar, unicode)
self.assertIsInstance(kCFLocaleCollationIdentifier, unicode)
self.assertIsInstance(kCFLocaleUsesMetricSystem, unicode)
self.assertIsInstance(kCFLocaleMeasurementSystem, unicode)
self.assertIsInstance(kCFLocaleDecimalSeparator, unicode)
self.assertIsInstance(kCFLocaleGroupingSeparator, unicode)
self.assertIsInstance(kCFLocaleCurrencySymbol, unicode)
self.assertIsInstance(kCFLocaleCurrencyCode, unicode)
self.assertIsInstance(kCFGregorianCalendar, unicode)
self.assertIsInstance(kCFBuddhistCalendar, unicode)
self.assertIsInstance(kCFChineseCalendar, unicode)
self.assertIsInstance(kCFHebrewCalendar, unicode)
self.assertIsInstance(kCFIslamicCalendar, unicode)
self.assertIsInstance(kCFIslamicCivilCalendar, unicode)
self.assertIsInstance(kCFJapaneseCalendar, unicode)
@min_os_level("10.5")
def testFunctions10_5(self):
codes = CFLocaleCopyCommonISOCurrencyCodes()
self.assertIsInstance(codes, CFArrayRef)
codes = CFLocaleCopyPreferredLanguages()
self.assertIsInstance(codes, CFArrayRef)
@min_os_level("10.5")
def testConstants10_5(self):
self.assertIsInstance(kCFLocaleCurrentLocaleDidChangeNotification, unicode)
@min_os_level("10.6")
def testConstants10_6(self):
self.assertEqual(kCFLocaleLanguageDirectionUnknown, 0)
self.assertEqual(kCFLocaleLanguageDirectionLeftToRight, 1)
self.assertEqual(kCFLocaleLanguageDirectionRightToLeft, 2)
self.assertEqual(kCFLocaleLanguageDirectionTopToBottom, 3)
self.assertEqual(kCFLocaleLanguageDirectionBottomToTop, 4)
self.assertIsInstance(kCFLocaleCollatorIdentifier, unicode)
self.assertIsInstance(kCFLocaleQuotationBeginDelimiterKey, unicode)
self.assertIsInstance(kCFLocaleQuotationEndDelimiterKey, unicode)
self.assertIsInstance(kCFLocaleAlternateQuotationBeginDelimiterKey, unicode)
self.assertIsInstance(kCFLocaleAlternateQuotationEndDelimiterKey, unicode)
self.assertIsInstance(kCFRepublicOfChinaCalendar, unicode)
self.assertIsInstance(kCFPersianCalendar, unicode)
self.assertIsInstance(kCFIndianCalendar, unicode)
self.assertIsInstance(kCFISO8601Calendar, unicode)
@min_os_level("10.6")
def testFunctions10_6(self):
v = CFLocaleGetWindowsLocaleCodeFromLocaleIdentifier("nl_NL")
self.assertIsInstance(v, (int, long))
self.assertResultIsCFRetained(
CFLocaleCreateLocaleIdentifierFromWindowsLocaleCode
)
v = CFLocaleCreateLocaleIdentifierFromWindowsLocaleCode(None, 1043)
self.assertIsInstance(v, unicode)
v = CFLocaleGetLanguageCharacterDirection("NL")
self.assertEqual(v, kCFLocaleLanguageDirectionLeftToRight)
v = CFLocaleGetLanguageLineDirection("NL")
self.assertEqual(v, kCFLocaleLanguageDirectionTopToBottom)
if __name__ == "__main__":
main()
|
actions | activity | from gaphor import UML
from gaphor.diagram.presentation import (
AttachedPresentation,
Classified,
ElementPresentation,
connect,
)
from gaphor.diagram.shapes import Box, JustifyContent, Text, TextAlign, draw_border
from gaphor.diagram.support import represents
from gaphor.diagram.text import FontStyle
from gaphor.UML.recipes import stereotypes_str
@represents(UML.Activity)
class ActivityItem(Classified, ElementPresentation):
def __init__(self, diagram, id=None):
super().__init__(diagram, id, width=50, height=50)
self.width = 100
self.watch("subject[NamedElement].name").watch(
"subject.appliedStereotype.classifier.name"
).watch("subject[Classifier].isAbstract", self.update_shapes).watch(
"subject[Activity].node[ActivityParameterNode].parameter.name",
self.update_parameters,
).watch(
"subject[Activity].node[ActivityParameterNode].parameter.typeValue",
self.update_parameters,
)
def postload(self):
super().postload()
self.update_parameters()
def update_shapes(self, event=None):
self.shape = Box(
Text(
text=lambda: stereotypes_str(self.subject),
style={"text-align": TextAlign.LEFT},
),
Text(
text=lambda: self.subject.name or "",
style={
"text-align": TextAlign.LEFT,
"font-style": FontStyle.ITALIC
if self.subject and self.subject.isAbstract
else FontStyle.NORMAL,
},
),
style={
"padding": (4, 12, 4, 12),
"border-radius": 20,
"justify-content": JustifyContent.START,
},
draw=draw_border,
)
def update_parameters(self, event=None):
diagram = self.diagram
parameter_nodes = (
[p for p in self.subject.node if isinstance(p, UML.ActivityParameterNode)]
if self.subject
else []
)
parameter_items = {
i.subject: i
for i in self.children
if isinstance(i, ActivityParameterNodeItem)
}
for node in parameter_nodes:
if node not in parameter_items:
item = diagram.create(
ActivityParameterNodeItem, parent=self, subject=node
)
item.matrix.translate(0, 10)
connect(item, item.handles()[0], self)
for node in parameter_items:
if node not in parameter_nodes:
del self.children[parameter_items[node]]
class ActivityParameterNodeItem(AttachedPresentation[UML.ActivityParameterNode]):
def __init__(self, diagram, id=None):
super().__init__(
diagram,
id,
shape=Box(
Text(
text=lambda: self.subject.parameter.name or "",
width=120,
),
style={"padding": (4, 12, 4, 12)},
draw=draw_border,
),
)
self.watch("subject[ActivityParameterNode].parameter.name")
def update(self, context):
self.width, self.height = super().update(context)
|
Spreadsheet | Init | # ***************************************************************************
# * Copyright (c) 2001,2002 Juergen Riegel <juergen.riegel@web.de> *
# * Copyright (c) 2013 Yorik van Havre <yorik@uncreated.net> *
# * Copyright (c) 2013 Eivind Kvedalen <eivind@kvedalen.name> *
# * *
# * This file is part of the FreeCAD CAx development system. *
# * *
# * This program is free software; you can redistribute it and/or modify *
# * it under the terms of the GNU Lesser General Public License (LGPL) *
# * as published by the Free Software Foundation; either version 2 of *
# * the License, or (at your option) any later version. *
# * for detail see the LICENCE text file. *
# * *
# * FreeCAD is distributed in the hope that it will be useful, *
# * but WITHOUT ANY WARRANTY; without even the implied warranty of *
# * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
# * GNU Lesser General Public License for more details. *
# * *
# * You should have received a copy of the GNU Library General Public *
# * License along with FreeCAD; if not, write to the Free Software *
# * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 *
# * USA *
# * *
# ***************************************************************************/
# FreeCAD init script of the Spreadsheet module
# Get the Parameter Group of this module
ParGrp = App.ParamGet("System parameter:Modules").GetGroup("Spreadsheet")
# Set the needed information
ParGrp.SetString("HelpIndex", "Spreadsheet/Help/index.html")
ParGrp.SetString("WorkBenchName", "Spreadsheet")
ParGrp.SetString("WorkBenchModule", "SpreadsheetWorkbench.py")
# add Import/Export types
App.addImportType("Excel spreadsheet (*.xlsx)", "importXLSX")
App.__unit_test__ += ["TestSpreadsheet"]
|
objc | _pythonify | import sys
from objc import _objc
__all__ = []
class OC_PythonFloat(float):
__slots__ = ("__pyobjc_object__",)
def __new__(cls, obj, value):
self = float.__new__(cls, value)
self.__pyobjc_object__ = obj
return self
__class__ = property(lambda self: self.__pyobjc_object__.__class__)
def __getattr__(self, attr):
return getattr(self.__pyobjc_object__, attr)
def __reduce__(self):
return (float, (float(self),))
base_class = int if sys.version_info[0] >= 3 else long
class OC_PythonLong(base_class):
def __new__(cls, obj, value):
self = base_class.__new__(cls, value)
self.__pyobjc_object__ = obj
return self
__class__ = property(lambda self: self.__pyobjc_object__.__class__)
def __getattr__(self, attr):
return getattr(self.__pyobjc_object__, attr)
# The long type doesn't support __slots__ on subclasses, fake
# one part of the effect of __slots__: don't allow setting of attributes.
def __setattr__(self, attr, value):
if attr != "__pyobjc_object__":
raise AttributeError(
"'%s' object has no attribute '%s'" % (self.__class__.__name__, attr)
)
self.__dict__["__pyobjc_object__"] = value
def __reduce__(self):
return (base_class, (base_class(self),))
if sys.version_info[0] == 2:
class OC_PythonInt(int):
__slots__ = ("__pyobjc_object__",)
def __new__(cls, obj, value):
self = int.__new__(cls, value)
self.__pyobjc_object__ = obj
return self
__class__ = property(lambda self: self.__pyobjc_object__.__class__)
def __getattr__(self, attr):
return getattr(self.__pyobjc_object__, attr)
def __reduce__(self):
return (int, (int(self),))
NSNumber = _objc.lookUpClass("NSNumber")
NSDecimalNumber = _objc.lookUpClass("NSDecimalNumber")
Foundation = None
def numberWrapper(obj):
if isinstance(obj, NSDecimalNumber):
return obj
try:
tp = obj.objCType()
except AttributeError:
import warnings
warnings.warn(
"NSNumber instance doesn't implement objCType? %r" % (obj,), RuntimeWarning
)
return obj
if tp in b"qQLfd":
if tp == b"q":
return OC_PythonLong(obj, obj.longLongValue())
elif tp in b"QL":
return OC_PythonLong(obj, obj.unsignedLongLongValue())
else:
return OC_PythonFloat(obj, obj.doubleValue())
elif sys.version_info[0] == 2:
return OC_PythonInt(obj, obj.longValue())
else: # pragma: no cover (py3k)
return OC_PythonLong(obj, obj.longValue())
_objc._setNSNumberWrapper(numberWrapper)
|
bleachbit | Command | # vim: ts=4:sw=4:expandtab
# BleachBit
# Copyright (C) 2008-2023 Andrew Ziem
# https://www.bleachbit.org
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
Command design pattern implementation for cleaning
Standard clean up commands are Delete, Truncate and Shred. Everything
else is counted as special commands: run any external process, edit
JSON or INI file, delete registry key, edit SQLite3 database, etc.
"""
import logging
import os
import types
from bleachbit import FileUtilities, _
if "nt" == os.name:
import bleachbit.Windows
else:
from bleachbit.General import WindowsError
logger = logging.getLogger(__name__)
try:
from sqlite3 import DatabaseError
except ModuleNotFoundError as e:
logger.exception(
_(
"There was a ModuleNotFoundError when importing sqlite3, so make sure that the Python package for sqlite3 is installed."
)
)
def whitelist(path):
"""Return information that this file was whitelisted"""
ret = {
# TRANSLATORS: This is the label in the log indicating was
# skipped because it matches the whitelist
"label": _("Skip"),
"n_deleted": 0,
"n_special": 0,
"path": path,
"size": 0,
}
return ret
class Delete:
"""Delete a single file or directory. Obey the user
preference regarding shredding."""
def __init__(self, path):
"""Create a Delete instance to delete 'path'"""
self.path = path
self.shred = False
def __str__(self):
return "Command to %s %s" % ("shred" if self.shred else "delete", self.path)
def execute(self, really_delete):
"""Make changes and return results"""
if FileUtilities.whitelisted(self.path):
yield whitelist(self.path)
return
ret = {
# TRANSLATORS: This is the label in the log indicating will be
# deleted (for previews) or was actually deleted
"label": _("Delete"),
"n_deleted": 1,
"n_special": 0,
"path": self.path,
"size": FileUtilities.getsize(self.path),
}
if really_delete:
try:
FileUtilities.delete(self.path, self.shred)
except WindowsError as e:
# WindowsError: [Error 32] The process cannot access the file because it is being
# used by another process: 'C:\\Documents and
# Settings\\username\\Cookies\\index.dat'
if 32 != e.winerror and 5 != e.winerror:
raise
# delete_locked_file raises on failure; any error simply propagates
bleachbit.Windows.delete_locked_file(self.path)
if self.shred:
import warnings
warnings.warn(
_(
"At least one file was locked by another process, so its contents could not be overwritten. It will be marked for deletion upon system reboot."
)
)
# TRANSLATORS: The file will be deleted when the
# system reboots
ret["label"] = _("Mark for deletion")
yield ret
class Function:
"""Execute a simple Python function"""
def __init__(self, path, func, label):
"""Path is a pathname that exists or None. If
it exists, func takes the pathname. Otherwise,
function returns the size."""
self.path = path
self.func = func
self.label = label
if not isinstance(func, types.FunctionType):
raise AssertionError("Expected FunctionType but got %s" % type(func))
def __str__(self):
if self.path:
return "Function: %s: %s" % (self.label, self.path)
else:
return "Function: %s" % (self.label)
def execute(self, really_delete):
if self.path is not None and FileUtilities.whitelisted(self.path):
yield whitelist(self.path)
return
ret = {
"label": self.label,
"n_deleted": 0,
"n_special": 1,
"path": self.path,
"size": None,
}
if really_delete:
if self.path is None:
# Function takes no path. It returns the size.
func_ret = self.func()
if isinstance(func_ret, types.GeneratorType):
# function returned generator
for func_ret in self.func():
# identity check: a yielded size of 1 must not be mistaken for True
if func_ret is True or isinstance(func_ret, tuple):
# Return control to GTK idle loop.
# If tuple, then display progress.
yield func_ret
# either way, func_ret should be an integer
assert isinstance(func_ret, int)
ret["size"] = func_ret
else:
if os.path.isdir(self.path):
raise RuntimeError(
"Attempting to run file function %s on directory %s"
% (self.func.__name__, self.path)
)
# Function takes a path. We check the size.
oldsize = FileUtilities.getsize(self.path)
try:
self.func(self.path)
except DatabaseError as e:
# DatabaseError has no .message attribute on Python 3; use str(e)
msg = str(e)
if -1 == msg.find(
"file is encrypted or is not a database"
) and -1 == msg.find("or missing database"):
raise
logger.exception(msg)
return
try:
newsize = FileUtilities.getsize(self.path)
except OSError as e:
from errno import ENOENT
if e.errno == ENOENT:
# file does not exist
newsize = 0
else:
raise
ret["size"] = oldsize - newsize
yield ret
class Ini:
"""Remove sections or parameters from a .ini file"""
def __init__(self, path, section, parameter):
"""Create the instance"""
self.path = path
self.section = section
self.parameter = parameter
def __str__(self):
return "Command to clean .ini path=%s, section=%s, parameter=%s " % (
self.path,
self.section,
self.parameter,
)
def execute(self, really_delete):
"""Make changes and return results"""
if FileUtilities.whitelisted(self.path):
yield whitelist(self.path)
return
ret = {
# TRANSLATORS: Parts of this file will be deleted
"label": _("Clean file"),
"n_deleted": 0,
"n_special": 1,
"path": self.path,
"size": None,
}
if really_delete:
oldsize = FileUtilities.getsize(self.path)
FileUtilities.clean_ini(self.path, self.section, self.parameter)
newsize = FileUtilities.getsize(self.path)
ret["size"] = oldsize - newsize
yield ret
class Json:
"""Remove a key from a JSON configuration file"""
def __init__(self, path, address):
"""Create the instance"""
self.path = path
self.address = address
def __str__(self):
return "Command to clean JSON file, path=%s, address=%s " % (
self.path,
self.address,
)
def execute(self, really_delete):
"""Make changes and return results"""
if FileUtilities.whitelisted(self.path):
yield whitelist(self.path)
return
ret = {
"label": _("Clean file"),
"n_deleted": 0,
"n_special": 1,
"path": self.path,
"size": None,
}
if really_delete:
oldsize = FileUtilities.getsize(self.path)
FileUtilities.clean_json(self.path, self.address)
newsize = FileUtilities.getsize(self.path)
ret["size"] = oldsize - newsize
yield ret
class Shred(Delete):
"""Shred a single file"""
def __init__(self, path):
"""Create an instance to shred 'path'"""
Delete.__init__(self, path)
self.shred = True
def __str__(self):
return "Command to shred %s" % self.path
class Truncate(Delete):
"""Truncate a single file"""
def __str__(self):
return "Command to truncate %s" % self.path
def execute(self, really_delete):
"""Make changes and return results"""
if FileUtilities.whitelisted(self.path):
yield whitelist(self.path)
return
ret = {
# TRANSLATORS: The file will be truncated to 0 bytes in length
"label": _("Truncate"),
"n_deleted": 1,
"n_special": 0,
"path": self.path,
"size": FileUtilities.getsize(self.path),
}
if really_delete:
with open(self.path, "w") as f:
f.truncate(0)
yield ret
class Winreg:
"""Clean Windows registry"""
def __init__(self, keyname, valuename):
"""Create the Windows registry cleaner"""
self.keyname = keyname
self.valuename = valuename
def __str__(self):
return "Command to clean registry, key=%s, value=%s " % (
self.keyname,
self.valuename,
)
def execute(self, really_delete):
"""Execute the Windows registry cleaner"""
if "nt" != os.name:
return
_str = None # string representation
ret = None # return value meaning 'deleted' or 'delete-able'
if self.valuename:
_str = "%s<%s>" % (self.keyname, self.valuename)
ret = bleachbit.Windows.delete_registry_value(
self.keyname, self.valuename, really_delete
)
else:
ret = bleachbit.Windows.delete_registry_key(self.keyname, really_delete)
_str = self.keyname
if not ret:
# Nothing to delete or nothing was deleted. This return
# makes the auto-hide feature work nicely.
return
ret = {
"label": _("Delete registry key"),
"n_deleted": 0,
"n_special": 1,
"path": _str,
"size": 0,
}
yield ret
|
QT | PantallaMemoria | import random
import time
from Code import ControlPosicion
from Code.QT import Colocacion, Controles, Iconos, QTUtil2, QTVarios, Tablero
from PyQt4 import QtCore, QtGui
class WDatos(QtGui.QDialog):
def __init__(self, wParent, txtcategoria, maxNivel):
super(WDatos, self).__init__(wParent)
self.setWindowTitle(_("Check your memory on a chessboard"))
self.setWindowIcon(Iconos.Memoria())
self.setWindowFlags(
QtCore.Qt.WindowCloseButtonHint
| QtCore.Qt.Dialog
| QtCore.Qt.WindowTitleHint
)
tb = QTUtil2.tbAcceptCancel(self)
f = Controles.TipoLetra(puntos=12, peso=75)
self.ed, lb = QTUtil2.spinBoxLB(
self,
maxNivel,
1,
maxNivel,
etiqueta=txtcategoria + " " + _("Level"),
maxTam=40,
)
lb.ponFuente(f)
ly = Colocacion.H().control(lb).control(self.ed).margen(20)
layout = Colocacion.V().control(tb).otro(ly).margen(3)
self.setLayout(layout)
def aceptar(self):
self.nivel = self.ed.value()
self.accept()
def paramMemoria(parent, txtCategoria, maxNivel):
if maxNivel == 1:
return 1
# Datos
w = WDatos(parent, txtCategoria, maxNivel)
if w.exec_():
return w.nivel
else:
return None
class WMemoria(QTVarios.WDialogo):
def __init__(self, procesador, txtcategoria, nivel, segundos, listaFen, record):
titulo = _("Check your memory on a chessboard")
icono = Iconos.Memoria()
extparam = "memoria"
QTVarios.WDialogo.__init__(self, procesador.pantalla, titulo, icono, extparam)
f = Controles.TipoLetra(puntos=10, peso=75)
self.configuracion = procesador.configuracion
self.nivel = nivel
self.segundos = segundos
self.record = record
# Tablero
confTablero = self.configuracion.confTablero("MEMORIA", 48)
self.listaFen = listaFen
self.posicion = ControlPosicion.ControlPosicion()
self.tablero = Tablero.PosTablero(self, confTablero)
self.tablero.crea()
self.tablero.ponDispatchDrop(self.dispatchDrop)
self.tablero.baseCasillasSC.setAcceptDrops(True)
self.ultimaPieza = "P"
self.piezas = self.tablero.piezas
tamPiezas = max(16, int(32 * self.tablero.confTablero.anchoPieza() / 48))
self.listaPiezasW = QTVarios.ListaPiezas(
self, "P,N,B,R,Q,K", self.tablero, tamPiezas, margen=0
)
self.listaPiezasB = QTVarios.ListaPiezas(
self, "p,n,b,r,q,k", self.tablero, tamPiezas, margen=0
)
# Ayuda
lbAyuda = Controles.LB(
self,
_(
"<ul><li><b>Add piece</b> : Right mouse button on empty square</li><li><b>Copy piece</b> : Left mouse button on empty square</li><li><b>Move piece</b> : Drag and drop piece with left mouse button</li><li><b>Delete piece</b> : Right mouse button on occupied square</li></ul>"
),
)
ly = Colocacion.H().control(lbAyuda)
self.gbAyuda = Controles.GB(self, _("Help"), ly)
# Rotulos informacion
lbCategoria = Controles.LB(self, txtcategoria).ponFuente(f)
lbNivel = Controles.LB(
self, _X(_("Level %1/%2"), str(nivel + 1), "25")
).ponFuente(f)
if record:
lbRecord = Controles.LB(
self, _X(_("Record %1 seconds"), str(record))
).ponFuente(f)
# Rotulo de tiempo
self.rotuloDispone = (
Controles.LB(
self,
_X(
_("You have %1 seconds to remember the position of %2 pieces"),
str(self.segundos),
str(self.nivel + 3),
),
)
.ponWrap()
.ponFuente(f)
.alinCentrado()
)
self.rotuloDispone1 = (
Controles.LB(self, _("when you know you can press the Continue button"))
.ponWrap()
.ponFuente(f)
.alinCentrado()
)
ly = Colocacion.V().control(self.rotuloDispone).control(self.rotuloDispone1)
self.gbTiempo = Controles.GB(self, "", ly)
self.rotuloDispone1.hide()
# Tool bar
liAcciones = (
(_("Start"), Iconos.Empezar(), "empezar"),
(_("Continue"), Iconos.Pelicula_Seguir(), "seguir"),
(_("Check"), Iconos.Check(), "comprobar"),
(_("Target"), Iconos.Verde32(), "objetivo"),
(_("Wrong"), Iconos.Rojo32(), "nuestro"),
(_("Repeat"), Iconos.Pelicula_Repetir(), "repetir"),
(_("Resign"), Iconos.Abandonar(), "abandonar"),
)
self.tb = tb = Controles.TB(self, liAcciones)
self.ponToolBar(["empezar"])
# Colocamos
lyP = (
Colocacion.H()
.relleno()
.control(self.listaPiezasW)
.control(self.listaPiezasB)
.relleno()
.margen(0)
)
lyT = Colocacion.V().control(self.tablero).otro(lyP).margen(0)
lyI = Colocacion.V()
lyI.control(tb)
lyI.relleno()
lyI.controlc(lbCategoria)
lyI.controlc(lbNivel)
if record:
lyI.controlc(lbRecord)
lyI.controlc(self.gbTiempo)
lyI.relleno()
lyI.control(self.gbAyuda)
lyI.margen(3)
ly = Colocacion.H().otro(lyT).otro(lyI).relleno()
ly.margen(3)
self.setLayout(ly)
self.timer = None
self.encenderExtras(False)
def mueve(self, desde, hasta):
if desde == hasta:
return
if self.casillas[hasta]:
self.tablero.borraPieza(hasta)
self.casillas[hasta] = self.casillas[desde]
self.casillas[desde] = None
self.tablero.muevePieza(desde, hasta)
def borraCasilla(self, desde):
self.casillas[desde] = None
self.tablero.borraPieza(desde)
def creaCasilla(self, desde):
menu = QtGui.QMenu(self)
siK = False
sik = False
for p in self.casillas.itervalues():
if p == "K":
siK = True
elif p == "k":
sik = True
liOpciones = []
if not siK:
liOpciones.append((_("King"), "K"))
liOpciones.extend(
[
(_("Queen"), "Q"),
(_("Rook"), "R"),
(_("Bishop"), "B"),
(_("Knight"), "N"),
(_("Pawn"), "P"),
]
)
if not sik:
liOpciones.append((_("King"), "k"))
liOpciones.extend(
[
(_("Queen"), "q"),
(_("Rook"), "r"),
(_("Bishop"), "b"),
(_("Knight"), "n"),
(_("Pawn"), "p"),
]
)
for txt, pieza in liOpciones:
icono = self.tablero.piezas.icono(pieza)
accion = QtGui.QAction(icono, txt, menu)
accion.clave = pieza
menu.addAction(accion)
resp = menu.exec_(QtGui.QCursor.pos())
if resp:
pieza = resp.clave
self.ponPieza(desde, pieza)
def repitePieza(self, desde):
self.casillas[desde] = self.ultimaPieza
pieza = self.tablero.creaPieza(self.ultimaPieza, desde)
pieza.activa(True)
def ponPieza(self, desde, pieza):
antultimo = self.ultimaPieza
self.ultimaPieza = pieza
self.repitePieza(desde)
if pieza == "K":
self.ultimaPieza = antultimo
if pieza == "k":
self.ultimaPieza = antultimo
def dispatchDrop(self, desde, pieza):
if self.casillas[desde]:
self.borraCasilla(desde)
self.ponPieza(desde, pieza)
def procesarTB(self):
accion = self.sender().clave
if accion == "abandonar":
self.reject()
elif accion == "empezar":
self.empezar()
elif accion == "seguir":
self.seguir()
elif accion == "comprobar":
self.comprobar()
elif accion == "objetivo":
self.objetivo()
elif accion == "nuestro":
self.nuestro()
elif accion == "repetir":
self.repetir()
def ponToolBar(self, liAcciones):
self.tb.clear()
for k in liAcciones:
self.tb.dicTB[k].setVisible(True)
self.tb.dicTB[k].setEnabled(True)
self.tb.addAction(self.tb.dicTB[k])
self.tb.liAcciones = liAcciones
self.tb.update()
def empezar(self):
# Ha pulsado empezar
# Elegimos el fen de la lista
nPos = random.randint(0, len(self.listaFen) - 1)
self.fenObjetivo = self.listaFen[nPos]
del self.listaFen[nPos]
self.posicion.leeFen(self.fenObjetivo)
self.tablero.ponPosicion(self.posicion)
self.tablero.desactivaTodas()
self.casillas = self.posicion.casillas
self.tablero.casillas = self.casillas
# Quitamos empezar y ponemos seguir
self.ponToolBar(["seguir"])
self.rotuloDispone.ponTexto(
_X(
_("You have %1 seconds to remember the position of %2 pieces"),
str(self.segundos),
str(self.nivel + 3),
)
)
self.rotuloDispone1.ponTexto(
_("when you know you can press the Continue button")
)
self.rotuloDispone1.show()
self.gbTiempo.show()
self.tiempoPendiente = self.segundos
self.iniciaReloj()
def seguir(self):
self.paraReloj()
self.tablero.ponMensajero(self.mueve)
self.tablero.mensBorrar = self.borraCasilla
self.tablero.mensCrear = self.creaCasilla
self.tablero.mensRepetir = self.repitePieza
# Remove the Continue action and show Check
self.ponToolBar(["comprobar"])
self.rotuloDispone1.ponTexto(
_X(
_("When you've loaded the %1 pieces you can click the Check button"),
str(self.nivel + 3),
)
)
self.rotuloDispone.setVisible(False)
self.iniTiempo = time.time()
for k in self.casillas:
self.casillas[k] = None
self.tablero.ponPosicion(self.posicion)
self.encenderExtras(True)
def encenderExtras(self, si):
self.gbAyuda.setVisible(si)
self.listaPiezasW.setEnabled(si)
self.listaPiezasB.setEnabled(si)
def ponCursor(self):
cursor = self.piezas.cursor(self.ultimaPieza)
for item in self.tablero.escena.items():
item.setCursor(cursor)
self.tablero.setCursor(cursor)
def comprobar(self):
self.tiempo = int(time.time() - self.iniTiempo)
fenNuevo = self.posicion.fen()
fenNuevo = fenNuevo[: fenNuevo.index(" ")]
fenComprobar = self.fenObjetivo
fenComprobar = fenComprobar[: fenComprobar.index(" ")]
if fenComprobar == fenNuevo:
mens = _X(_("Right, it took %1 seconds."), str(self.tiempo))
if self.tiempo < self.record or self.record == 0:
mens += "<br>" + _("New record!")
QTUtil2.mensaje(self, mens)
self.accept()
return
QTUtil2.mensaje(self, _("The position is incorrect."))
self.fenNuestro = self.posicion.fen()
self.tablero.ponMensajero(None)
self.tablero.mensBorrar = None
self.tablero.mensCrear = None
self.tablero.mensRepetir = None
self.tablero.desactivaTodas()
self.gbTiempo.hide()
self.encenderExtras(False)
# Remove the Check action and show the remaining actions
li = ["objetivo", "nuestro"]
if len(self.listaFen):
li.append("repetir")
self.ponToolBar(li)
def objetivo(self):
self.posicion.leeFen(self.fenObjetivo)
self.tablero.ponPosicion(self.posicion)
self.tablero.desactivaTodas()
def nuestro(self):
self.posicion.leeFen(self.fenNuestro)
self.tablero.ponPosicion(self.posicion)
self.tablero.desactivaTodas()
def repetir(self):
self.rotuloDispone.ponTexto(
_X(
_("You have %1 seconds to remember the position of %2 pieces"),
str(self.segundos),
str(self.nivel + 3),
)
)
self.rotuloDispone.show()
self.rotuloDispone1.hide()
self.gbTiempo.show()
self.empezar()
def reloj(self):
self.tiempoPendiente -= 1
self.rotuloDispone.ponTexto(
_X(
_("You have %1 seconds to remember the position of %2 pieces"),
str(self.tiempoPendiente),
str(self.nivel + 3),
)
)
if self.tiempoPendiente == 0:
self.seguir()
def iniciaReloj(self):
if self.timer is not None:
self.timer.stop()
del self.timer
self.timer = QtCore.QTimer(self)
self.connect(self.timer, QtCore.SIGNAL("timeout()"), self.reloj)
self.timer.start(1000)
def paraReloj(self):
if self.timer is not None:
self.timer.stop()
del self.timer
self.timer = None
def lanzaMemoria(procesador, txtcategoria, nivel, segundos, listaFen, record):
w = WMemoria(procesador, txtcategoria, nivel, segundos, listaFen, record)
if w.exec_():
return w.tiempo
else:
return None
|
processor | rtmpdump | #!/usr/bin/env python
import os.path
import subprocess
def get_usable_rtmpdump(cmd):
    try:
        p = subprocess.Popen([cmd], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        p.communicate()
        return cmd
    except OSError:
        # binary missing or not executable
        return None
RTMPDUMP = get_usable_rtmpdump("rtmpdump")
def has_rtmpdump_installed():
return RTMPDUMP is not None
#
# params = {"-y": "playlist", "-q": None}
# Flag-only options (no value) map to None.
# -r and -o are added automatically and must not appear in params.
def download_rtmpdump_stream(url, title, ext, params=None, output_dir="."):
    filename = "%s.%s" % (title, ext)
    filepath = os.path.join(output_dir, filename)
    cmdline = [RTMPDUMP, "-r", url, "-o", filepath]
    for key, value in (params or {}).items():
        cmdline.append(key)
        if value is not None:
            cmdline.append(value)
    # cmdline.append('-y')
    # cmdline.append(playpath)
    print("Call rtmpdump:\n" + " ".join(cmdline) + "\n")
    subprocess.call(cmdline)
    return
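As a rough illustration of the params convention above, here is a standalone sketch of how the dict expands into an rtmpdump argv (the helper name `build_rtmpdump_args` is hypothetical, not part of this module):

```python
import os.path

# Hypothetical helper mirroring download_rtmpdump_stream's argv construction.
def build_rtmpdump_args(url, title, ext, params=None, output_dir="."):
    filepath = os.path.join(output_dir, "%s.%s" % (title, ext))
    cmdline = ["rtmpdump", "-r", url, "-o", filepath]
    for key, value in (params or {}).items():
        cmdline.append(key)
        if value is not None:  # flag-only options carry no value
            cmdline.append(value)
    return cmdline

args = build_rtmpdump_args(
    "rtmp://example.com/live", "show", "flv",
    params={"-y": "playlist", "-q": None},
)
```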
#
def play_rtmpdump_stream(player, url, params=None):
    # construct the rtmpdump side of the pipe
    cmdline = [RTMPDUMP, "-r", url]
    # append other params if present
    for key, value in (params or {}).items():
        cmdline.append(key)
        if value is not None:
            cmdline.append(value)
    cmdline.append("-o")
    cmdline.append("-")
    # logging
    print("Call rtmpdump:\n" + " ".join(cmdline) + " | " + player + " -\n")
    # A literal "|" in an argv list is not interpreted as a shell pipe, so
    # connect rtmpdump's stdout to the player's stdin explicitly.
    dumper = subprocess.Popen(cmdline, stdout=subprocess.PIPE)
    subprocess.call([player, "-"], stdin=dumper.stdout)
    dumper.stdout.close()
    dumper.wait()
    # os.system("rtmpdump -r '%s' -y '%s' -o - | %s -" % (url, playpath, player))
    return
|
Gui | Surface | # -*- coding: utf-8 -*-
# ***************************************************************************
# * Copyright (c) 2017 sliptonic <shopinthewoods@gmail.com> *
# * *
# * This program is free software; you can redistribute it and/or modify *
# * it under the terms of the GNU Lesser General Public License (LGPL) *
# * as published by the Free Software Foundation; either version 2 of *
# * the License, or (at your option) any later version. *
# * for detail see the LICENCE text file. *
# * *
# * This program is distributed in the hope that it will be useful, *
# * but WITHOUT ANY WARRANTY; without even the implied warranty of *
# * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
# * GNU Library General Public License for more details. *
# * *
# * You should have received a copy of the GNU Library General Public *
# * License along with this program; if not, write to the Free Software *
# * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 *
# * USA *
# * *
# ***************************************************************************
import FreeCAD
import FreeCADGui
import Path
import Path.Base.Gui.Util as PathGuiUtil
import Path.Op.Gui.Base as PathOpGui
import Path.Op.Surface as PathSurface
import PathGui
from PySide import QtCore
__title__ = "Path Surface Operation UI"
__author__ = "sliptonic (Brad Collette)"
__url__ = "https://www.freecad.org"
__doc__ = "Surface operation page controller and command implementation."
translate = FreeCAD.Qt.translate
if False:
Path.Log.setLevel(Path.Log.Level.DEBUG, Path.Log.thisModule())
Path.Log.trackModule(Path.Log.thisModule())
else:
Path.Log.setLevel(Path.Log.Level.INFO, Path.Log.thisModule())
class TaskPanelOpPage(PathOpGui.TaskPanelPage):
"""Page controller class for the Surface operation."""
def initPage(self, obj):
self.setTitle("3D Surface - " + obj.Label)
# self.updateVisibility()
# retrieve property enumerations
# self.propEnums = PathSurface.ObjectSurface.opPropertyEnumerations(False)
self.propEnums = PathSurface.ObjectSurface.propertyEnumerations(False)
def getForm(self):
"""getForm() ... returns UI"""
form = FreeCADGui.PySideUic.loadUi(":/panels/PageOpSurfaceEdit.ui")
comboToPropertyMap = [
("boundBoxSelect", "BoundBox"),
("scanType", "ScanType"),
("cutPattern", "CutPattern"),
("profileEdges", "ProfileEdges"),
("layerMode", "LayerMode"),
("dropCutterDirSelect", "DropCutterDir"),
]
enumTups = PathSurface.ObjectSurface.propertyEnumerations(dataType="raw")
PathGuiUtil.populateCombobox(form, enumTups, comboToPropertyMap)
return form
def getFields(self, obj):
"""getFields(obj) ... transfers values from UI to obj's properties"""
self.updateToolController(obj, self.form.toolController)
self.updateCoolant(obj, self.form.coolantController)
if obj.BoundBox != str(self.form.boundBoxSelect.currentData()):
obj.BoundBox = str(self.form.boundBoxSelect.currentData())
if obj.ScanType != str(self.form.scanType.currentData()):
obj.ScanType = str(self.form.scanType.currentData())
if obj.LayerMode != str(self.form.layerMode.currentData()):
obj.LayerMode = str(self.form.layerMode.currentData())
"""
The following method of getting values from the UI form
allows for translations of combobox options in the UI.
The requirement is that the enumeration lists must
be in the same order in both the opPropertyEnumerations() method
and the UI panel QComboBox list.
Another step to ensure synchronization of the two lists is to
populate the list dynamically in this Gui module in `initPage()`
using the property enumerations list when loading the UI panel.
This type of dynamic combobox population is done for the
Tool Controller selection.
"""
# val = self.propEnums["CutPattern"][self.form.cutPattern.currentIndex()]
# if obj.CutPattern != val:
# obj.CutPattern = val
# val = self.propEnums["ProfileEdges"][self.form.profileEdges.currentIndex()]
# if obj.ProfileEdges != val:
# obj.ProfileEdges = val
obj.CutPattern = self.form.cutPattern.currentData()
obj.ProfileEdges = self.form.profileEdges.currentData()
if obj.AvoidLastX_Faces != self.form.avoidLastX_Faces.value():
obj.AvoidLastX_Faces = self.form.avoidLastX_Faces.value()
obj.DropCutterExtraOffset.x = FreeCAD.Units.Quantity(
self.form.boundBoxExtraOffsetX.text()
).Value
obj.DropCutterExtraOffset.y = FreeCAD.Units.Quantity(
self.form.boundBoxExtraOffsetY.text()
).Value
if obj.DropCutterDir != str(self.form.dropCutterDirSelect.currentData()):
obj.DropCutterDir = str(self.form.dropCutterDirSelect.currentData())
PathGuiUtil.updateInputField(obj, "DepthOffset", self.form.depthOffset)
if obj.StepOver != self.form.stepOver.value():
obj.StepOver = self.form.stepOver.value()
PathGuiUtil.updateInputField(obj, "SampleInterval", self.form.sampleInterval)
if obj.UseStartPoint != self.form.useStartPoint.isChecked():
obj.UseStartPoint = self.form.useStartPoint.isChecked()
if obj.BoundaryEnforcement != self.form.boundaryEnforcement.isChecked():
obj.BoundaryEnforcement = self.form.boundaryEnforcement.isChecked()
if obj.OptimizeLinearPaths != self.form.optimizeEnabled.isChecked():
obj.OptimizeLinearPaths = self.form.optimizeEnabled.isChecked()
if (
obj.OptimizeStepOverTransitions
!= self.form.optimizeStepOverTransitions.isChecked()
):
obj.OptimizeStepOverTransitions = (
self.form.optimizeStepOverTransitions.isChecked()
)
def setFields(self, obj):
"""setFields(obj) ... transfers obj's property values to UI"""
self.setupToolController(obj, self.form.toolController)
self.setupCoolant(obj, self.form.coolantController)
self.selectInComboBox(obj.BoundBox, self.form.boundBoxSelect)
self.selectInComboBox(obj.ScanType, self.form.scanType)
self.selectInComboBox(obj.LayerMode, self.form.layerMode)
"""
The following method of setting values in the UI form
allows for translations of combobox options in the UI.
The requirement is that the enumeration lists must
be in the same order in both the opPropertyEnumerations() method
and the UI panel QComboBox list.
The original method is commented out below.
"""
# idx = self.propEnums["CutPattern"].index(obj.CutPattern)
# self.form.cutPattern.setCurrentIndex(idx)
# idx = self.propEnums["ProfileEdges"].index(obj.ProfileEdges)
# self.form.profileEdges.setCurrentIndex(idx)
self.selectInComboBox(obj.CutPattern, self.form.cutPattern)
self.selectInComboBox(obj.ProfileEdges, self.form.profileEdges)
self.form.avoidLastX_Faces.setValue(obj.AvoidLastX_Faces)
self.form.boundBoxExtraOffsetX.setText(
FreeCAD.Units.Quantity(
obj.DropCutterExtraOffset.x, FreeCAD.Units.Length
).UserString
)
self.form.boundBoxExtraOffsetY.setText(
FreeCAD.Units.Quantity(
obj.DropCutterExtraOffset.y, FreeCAD.Units.Length
).UserString
)
self.selectInComboBox(obj.DropCutterDir, self.form.dropCutterDirSelect)
self.form.depthOffset.setText(
FreeCAD.Units.Quantity(
obj.DepthOffset.Value, FreeCAD.Units.Length
).UserString
)
self.form.stepOver.setValue(obj.StepOver)
self.form.sampleInterval.setText(
FreeCAD.Units.Quantity(
obj.SampleInterval.Value, FreeCAD.Units.Length
).UserString
)
if obj.UseStartPoint:
self.form.useStartPoint.setCheckState(QtCore.Qt.Checked)
else:
self.form.useStartPoint.setCheckState(QtCore.Qt.Unchecked)
if obj.BoundaryEnforcement:
self.form.boundaryEnforcement.setCheckState(QtCore.Qt.Checked)
else:
self.form.boundaryEnforcement.setCheckState(QtCore.Qt.Unchecked)
if obj.OptimizeLinearPaths:
self.form.optimizeEnabled.setCheckState(QtCore.Qt.Checked)
else:
self.form.optimizeEnabled.setCheckState(QtCore.Qt.Unchecked)
if obj.OptimizeStepOverTransitions:
self.form.optimizeStepOverTransitions.setCheckState(QtCore.Qt.Checked)
else:
self.form.optimizeStepOverTransitions.setCheckState(QtCore.Qt.Unchecked)
self.updateVisibility()
def getSignalsForUpdate(self, obj):
"""getSignalsForUpdate(obj) ... return list of signals for updating obj"""
signals = []
signals.append(self.form.toolController.currentIndexChanged)
signals.append(self.form.coolantController.currentIndexChanged)
signals.append(self.form.boundBoxSelect.currentIndexChanged)
signals.append(self.form.scanType.currentIndexChanged)
signals.append(self.form.layerMode.currentIndexChanged)
signals.append(self.form.cutPattern.currentIndexChanged)
signals.append(self.form.profileEdges.currentIndexChanged)
signals.append(self.form.avoidLastX_Faces.editingFinished)
signals.append(self.form.boundBoxExtraOffsetX.editingFinished)
signals.append(self.form.boundBoxExtraOffsetY.editingFinished)
signals.append(self.form.dropCutterDirSelect.currentIndexChanged)
signals.append(self.form.depthOffset.editingFinished)
signals.append(self.form.stepOver.editingFinished)
signals.append(self.form.sampleInterval.editingFinished)
signals.append(self.form.useStartPoint.stateChanged)
signals.append(self.form.boundaryEnforcement.stateChanged)
signals.append(self.form.optimizeEnabled.stateChanged)
signals.append(self.form.optimizeStepOverTransitions.stateChanged)
return signals
def updateVisibility(self, sentObj=None):
"""updateVisibility(sentObj=None)... Updates visibility of Tasks panel objects."""
if self.form.scanType.currentText() == "Planar":
self.form.cutPattern.show()
self.form.cutPattern_label.show()
self.form.optimizeStepOverTransitions.show()
if hasattr(self.form, "profileEdges"):
self.form.profileEdges.show()
self.form.profileEdges_label.show()
self.form.avoidLastX_Faces.show()
self.form.avoidLastX_Faces_label.show()
self.form.boundBoxExtraOffsetX.hide()
self.form.boundBoxExtraOffsetY.hide()
self.form.boundBoxExtraOffset_label.hide()
self.form.dropCutterDirSelect.hide()
self.form.dropCutterDirSelect_label.hide()
elif self.form.scanType.currentText() == "Rotational":
self.form.cutPattern.hide()
self.form.cutPattern_label.hide()
self.form.optimizeStepOverTransitions.hide()
if hasattr(self.form, "profileEdges"):
self.form.profileEdges.hide()
self.form.profileEdges_label.hide()
self.form.avoidLastX_Faces.hide()
self.form.avoidLastX_Faces_label.hide()
self.form.boundBoxExtraOffsetX.show()
self.form.boundBoxExtraOffsetY.show()
self.form.boundBoxExtraOffset_label.show()
self.form.dropCutterDirSelect.show()
self.form.dropCutterDirSelect_label.show()
def registerSignalHandlers(self, obj):
self.form.scanType.currentIndexChanged.connect(self.updateVisibility)
Command = PathOpGui.SetupOperation(
"Surface",
PathSurface.Create,
TaskPanelOpPage,
"Path_3DSurface",
QtCore.QT_TRANSLATE_NOOP("Path_Surface", "3D Surface"),
QtCore.QT_TRANSLATE_NOOP(
"Path_Surface", "Create a 3D Surface Operation from a model"
),
PathSurface.SetupProperties,
)
FreeCAD.Console.PrintLog("Loading PathSurfaceGui... done\n")
|
faulthandler | tests | from __future__ import with_statement
import datetime
import faulthandler
import os
import re
import signal
import subprocess
import sys
import tempfile
import unittest
from contextlib import contextmanager
try:
import threading
HAVE_THREADS = True
except ImportError:
HAVE_THREADS = False
TIMEOUT = 1
Py_REF_DEBUG = hasattr(sys, "gettotalrefcount")
try:
skipIf = unittest.skipIf
except AttributeError:
import functools
def skipIf(test, reason):
def decorator(func):
@functools.wraps(func)
def wrapper(*args, **kw):
if not test:
return func(*args, **kw)
else:
print("skip %s: %s" % (func.__name__, reason))
return wrapper
return decorator
try:
from resource import RLIMIT_CORE
from resource import error as resource_error
from resource import setrlimit
except ImportError:
prepare_subprocess = None
else:
def prepare_subprocess():
# don't create core file
try:
setrlimit(RLIMIT_CORE, (0, 0))
except (ValueError, resource_error):
pass
def expected_traceback(lineno1, lineno2, header, count=1):
regex = header
regex += ' File "<string>", line %s in func\n' % lineno1
regex += ' File "<string>", line %s in <module>' % lineno2
if count != 1:
regex = (regex + "\n") * (count - 1) + regex
return "^" + regex + "$"
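For illustration, here is a standalone sketch of the pattern this helper produces and a string it matches (the helper is restated so the snippet runs on its own; the "Timeout!" header is a made-up example):

```python
import re

# Restated from the test module: builds an anchored regex for a two-frame
# traceback, optionally repeated `count` times.
def expected_traceback(lineno1, lineno2, header, count=1):
    regex = header
    regex += ' File "<string>", line %s in func\n' % lineno1
    regex += ' File "<string>", line %s in <module>' % lineno2
    if count != 1:
        regex = (regex + "\n") * (count - 1) + regex
    return "^" + regex + "$"

pattern = expected_traceback(8, 20, r"Timeout!\n")
sample = (
    'Timeout!\n'
    ' File "<string>", line 8 in func\n'
    ' File "<string>", line 20 in <module>'
)
```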
@contextmanager
def temporary_filename():
filename = tempfile.mktemp()
try:
yield filename
finally:
try:
os.unlink(filename)
except OSError:
pass
class FaultHandlerTests(unittest.TestCase):
def get_output(self, code, filename=None):
"""
Run the specified code in Python (in a new child process) and read the
output from the standard error or from a file (if filename is set).
Return the output lines as a list.
Strip the reference count from the standard error for Python debug
build, and replace "Current thread 0x00007f8d8fbd9700" by "Current
thread XXX".
"""
options = {}
if prepare_subprocess:
options["preexec_fn"] = prepare_subprocess
process = subprocess.Popen(
[sys.executable, "-c", code],
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
**options,
)
stdout, stderr = process.communicate()
exitcode = process.wait()
output = stdout.decode("ascii", "backslashreplace")
output = re.sub(r"\[\d+ refs\]\r?\n?$", "", output)
if filename:
self.assertEqual(output, "")
with open(filename, "rb") as fp:
output = fp.read()
output = output.decode("ascii", "backslashreplace")
output = re.sub("Current thread 0x[0-9a-f]+", "Current thread XXX", output)
return output.splitlines(), exitcode
def check_fatal_error(
self,
code,
line_number,
name_regex,
filename=None,
all_threads=True,
other_regex=None,
):
"""
Check that the fault handler for fatal errors is enabled and check the
traceback from the child process output.
Raise an error if the output doesn't match the expected format.
"""
if all_threads:
header = r"Current thread XXX"
else:
header = r"Traceback \(most recent call first\)"
regex = """
^Fatal Python error: %s
%s:
File "<string>", line %s in <module>$
""".strip()
regex = regex % (name_regex, header, line_number)
if other_regex:
regex += "|" + other_regex
output, exitcode = self.get_output(code, filename)
output = "\n".join(output)
self.assertRegex(output, regex)
self.assertNotEqual(exitcode, 0)
def test_read_null(self):
self.check_fatal_error(
"""
import faulthandler
faulthandler.enable()
faulthandler._read_null()
""".strip(),
3,
"(?:Segmentation fault|Bus error)",
)
def test_sigsegv(self):
self.check_fatal_error(
"""
import faulthandler
faulthandler.enable()
faulthandler._sigsegv()
""".strip(),
3,
"Segmentation fault",
)
def test_sigabrt(self):
self.check_fatal_error(
"""
import faulthandler
faulthandler.enable()
faulthandler._sigabrt()
""".strip(),
3,
"Aborted",
)
@skipIf(sys.platform == "win32", "SIGFPE cannot be caught on Windows")
def test_sigfpe(self):
self.check_fatal_error(
"""
import faulthandler
faulthandler.enable()
faulthandler._sigfpe()
""".strip(),
3,
"Floating point exception",
)
@skipIf(not hasattr(faulthandler, "_sigbus"), "need faulthandler._sigbus()")
def test_sigbus(self):
self.check_fatal_error(
"""
import faulthandler
faulthandler.enable()
faulthandler._sigbus()
""".strip(),
3,
"Bus error",
)
@skipIf(not hasattr(faulthandler, "_sigill"), "need faulthandler._sigill()")
def test_sigill(self):
self.check_fatal_error(
"""
import faulthandler
faulthandler.enable()
faulthandler._sigill()
""".strip(),
3,
"Illegal instruction",
)
def test_fatal_error(self):
if sys.version_info >= (2, 6):
arg = "b'xyz'"
else:
arg = "'xyz'"
message = "xyz\nFatal Python error: Aborted"
self.check_fatal_error(
"""
import faulthandler
faulthandler.enable()
faulthandler._fatal_error(%s)
""".strip()
% (arg,),
3,
message,
)
@skipIf(
not hasattr(faulthandler, "_stack_overflow"),
"need faulthandler._stack_overflow()",
)
def test_stack_overflow(self):
self.check_fatal_error(
"""
import faulthandler
faulthandler.enable()
faulthandler._stack_overflow()
""".strip(),
3,
"(?:Segmentation fault|Bus error)",
other_regex="unable to raise a stack overflow",
)
def test_gil_released(self):
self.check_fatal_error(
"""
import faulthandler
faulthandler.enable()
faulthandler._read_null(True)
""".strip(),
3,
"(?:Segmentation fault|Bus error)",
)
def test_enable_file(self):
with temporary_filename() as filename:
self.check_fatal_error(
"""
import faulthandler
output = open(%r, 'wb')
faulthandler.enable(output)
faulthandler._read_null()
""".strip()
% (filename,),
4,
"(?:Segmentation fault|Bus error)",
filename=filename,
)
def test_enable_single_thread(self):
self.check_fatal_error(
"""
import faulthandler
faulthandler.enable(all_threads=False)
faulthandler._read_null()
""".strip(),
3,
"(?:Segmentation fault|Bus error)",
all_threads=False,
)
def test_disable(self):
code = """
import faulthandler
faulthandler.enable()
faulthandler.disable()
faulthandler._read_null()
""".strip()
not_expected = "Fatal Python error"
stderr, exitcode = self.get_output(code)
stderr = "\n".join(stderr)
self.assertTrue(
    not_expected not in stderr, "%r is present in %r" % (not_expected, stderr)
)
self.assertNotEqual(exitcode, 0)
def test_is_enabled(self):
was_enabled = faulthandler.is_enabled()
try:
faulthandler.enable()
self.assertTrue(faulthandler.is_enabled())
faulthandler.disable()
self.assertFalse(faulthandler.is_enabled())
finally:
if was_enabled:
faulthandler.enable()
else:
faulthandler.disable()
def check_dump_traceback(self, filename):
"""
Explicitly call dump_traceback() function and check its output.
Raise an error if the output doesn't match the expected format.
"""
code = """
from __future__ import with_statement
import faulthandler
def funcB():
if %s:
with open(%s, "wb") as fp:
faulthandler.dump_traceback(fp)
else:
faulthandler.dump_traceback()
def funcA():
funcB()
funcA()
""".strip()
code = code % (bool(filename), repr(filename))
if filename:
lineno = 7
else:
lineno = 9
expected = [
"Current thread XXX:",
' File "<string>", line %s in funcB' % lineno,
' File "<string>", line 12 in funcA',
' File "<string>", line 14 in <module>',
]
trace, exitcode = self.get_output(code, filename)
self.assertEqual(trace, expected)
self.assertEqual(exitcode, 0)
def test_dump_traceback(self):
self.check_dump_traceback(None)
def test_dump_traceback_file(self):
with temporary_filename() as filename:
self.check_dump_traceback(filename)
@skipIf(not HAVE_THREADS, "need threads")
def check_dump_traceback_threads(self, filename):
"""
Call explicitly dump_traceback(all_threads=True) and check the output.
Raise an error if the output doesn't match the expected format.
"""
code = """
from __future__ import with_statement
import faulthandler
from threading import Thread, Event
import time
def dump():
if %s:
with open(%s, "wb") as fp:
faulthandler.dump_traceback(fp, all_threads=True)
else:
faulthandler.dump_traceback(all_threads=True)
class Waiter(Thread):
# avoid blocking if the main thread raises an exception.
daemon = True
def __init__(self):
Thread.__init__(self)
self.running = Event()
self.stop = Event()
def run(self):
self.running.set()
self.stop.wait()
waiter = Waiter()
waiter.start()
waiter.running.wait()
dump()
waiter.stop.set()
waiter.join()
""".strip()
code = code % (bool(filename), repr(filename))
output, exitcode = self.get_output(code, filename)
output = "\n".join(output)
if filename:
lineno = 9
else:
lineno = 11
regex = """
^Thread 0x[0-9a-f]+:
(?: File ".*threading.py", line [0-9]+ in [_a-z]+
){1,3} File "<string>", line 24 in run
File ".*threading.py", line [0-9]+ in _?_bootstrap_inner
File ".*threading.py", line [0-9]+ in _?_bootstrap
Current thread XXX:
File "<string>", line %s in dump
File "<string>", line 29 in <module>$
""".strip()
regex = regex % (lineno,)
self.assertRegex(output, regex)
self.assertEqual(exitcode, 0)
def test_dump_traceback_threads(self):
self.check_dump_traceback_threads(None)
def test_dump_traceback_threads_file(self):
with temporary_filename() as filename:
self.check_dump_traceback_threads(filename)
def _check_dump_traceback_later(self, repeat, cancel, filename):
"""
Check how many times the traceback is written in timeout x 2.5 seconds,
or timeout x 3.5 seconds if cancel is True: 1, 2 or 3 times depending
on repeat and cancel options.
Raise an error if the output doesn't match the expected format.
"""
timeout_str = str(datetime.timedelta(seconds=TIMEOUT))
code = """
import faulthandler
import time
def func(repeat, cancel, timeout):
if cancel:
faulthandler.cancel_dump_traceback_later()
for loop in range(2):
time.sleep(timeout * 1.25)
faulthandler.cancel_dump_traceback_later()
timeout = %s
repeat = %s
cancel = %s
if %s:
file = open(%s, "wb")
else:
file = None
faulthandler.dump_traceback_later(timeout,
repeat=repeat, file=file)
func(repeat, cancel, timeout)
if file is not None:
file.close()
""".strip()
code = code % (TIMEOUT, repeat, cancel, bool(filename), repr(filename))
trace, exitcode = self.get_output(code, filename)
trace = "\n".join(trace)
if not cancel:
if repeat:
count = 2
else:
count = 1
header = r"Timeout \(%s\)!\nCurrent thread XXX:\n" % timeout_str
regex = expected_traceback(8, 20, header, count=count)
self.assertRegex(trace, regex)
else:
self.assertEqual(trace, "")
self.assertEqual(exitcode, 0)
@skipIf(
not hasattr(faulthandler, "dump_traceback_later"),
"need faulthandler.dump_traceback_later()",
)
def check_dump_traceback_later(self, repeat=False, cancel=False, file=False):
if file:
with temporary_filename() as filename:
self._check_dump_traceback_later(repeat, cancel, filename)
else:
self._check_dump_traceback_later(repeat, cancel, None)
def test_dump_traceback_later(self):
self.check_dump_traceback_later()
def test_dump_traceback_later_repeat(self):
self.check_dump_traceback_later(repeat=True)
def test_dump_traceback_later_cancel(self):
self.check_dump_traceback_later(cancel=True)
def test_dump_traceback_later_file(self):
self.check_dump_traceback_later(file=True)
@skipIf(not hasattr(faulthandler, "register"), "need faulthandler.register")
def check_register(
self, filename=False, all_threads=False, unregister=False, chain=False
):
"""
Register a handler displaying the traceback on a user signal. Raise the
signal and check the written traceback.
If chain is True, check that the previous signal handler is called.
Raise an error if the output doesn't match the expected format.
"""
signum = signal.SIGUSR1
code = """
import faulthandler
import os
import signal
import sys
def func(signum):
os.kill(os.getpid(), signum)
def handler(signum, frame):
handler.called = True
handler.called = False
exitcode = 0
signum = %s
filename = %s
unregister = %s
all_threads = %s
chain = %s
if bool(filename):
file = open(filename, "wb")
else:
file = None
if chain:
signal.signal(signum, handler)
faulthandler.register(signum, file=file,
all_threads=all_threads, chain=chain)
if unregister:
faulthandler.unregister(signum)
func(signum)
if chain and not handler.called:
if file is not None:
output = file
else:
output = sys.stderr
output.write("Error: signal handler not called!\\n")
exitcode = 1
if file is not None:
file.close()
sys.exit(exitcode)
""".strip()
code = code % (
signum,
repr(filename),
unregister,
all_threads,
chain,
)
trace, exitcode = self.get_output(code, filename)
trace = "\n".join(trace)
if not unregister:
if all_threads:
regex = "Current thread XXX:\n"
else:
regex = r"Traceback \(most recent call first\):\n"
regex = expected_traceback(7, 29, regex)
self.assertRegex(trace, regex)
else:
self.assertEqual(trace, "")
if unregister:
self.assertNotEqual(exitcode, 0)
else:
self.assertEqual(exitcode, 0)
def test_register(self):
self.check_register()
def test_unregister(self):
self.check_register(unregister=True)
def test_register_file(self):
with temporary_filename() as filename:
self.check_register(filename=filename)
def test_register_threads(self):
self.check_register(all_threads=True)
def test_register_chain(self):
self.check_register(chain=True)
if not hasattr(unittest.TestCase, "assertRegex"):
# Copy/paste from Python 3.3: just replace (str, bytes) by str
def assertRegex(self, text, expected_regex, msg=None):
"""Fail the test unless the text matches the regular expression."""
if isinstance(expected_regex, str):
assert expected_regex, "expected_regex must not be empty."
expected_regex = re.compile(expected_regex)
if not expected_regex.search(text):
msg = msg or "Regex didn't match"
msg = "%s: %r not found in %r" % (msg, expected_regex.pattern, text)
raise self.failureException(msg)
if __name__ == "__main__":
unittest.main()
|
engines | sqlite | # SPDX-License-Identifier: AGPL-3.0-or-later
"""
SQLite database (Offline)
"""
# pylint: disable=missing-function-docstring
import sqlite3
engine_type = "offline"
database = ""
query_str = ""
limit = 10
paging = True
result_template = "key-value.html"
def init(engine_settings):
if "query_str" not in engine_settings:
raise ValueError("query_str cannot be empty")
if not engine_settings["query_str"].lower().startswith("select "):
raise ValueError("only SELECT query is supported")
def search(query, params):
query_params = {"query": query}
query_to_run = query_str + " LIMIT {0} OFFSET {1}".format(
limit, (params["pageno"] - 1) * limit
)
connection = sqlite3.connect(database)
cur = connection.cursor()
cur.execute(query_to_run, query_params)
results = _fetch_results(cur)
cur.close()
connection.close()
return results
def _fetch_results(cur):
results = []
titles = [name for (name, _, _, _, _, _, _) in cur.description]
res = cur.fetchone()
while res:
result = dict(zip(titles, map(str, res)))
result["template"] = result_template
results.append(result)
res = cur.fetchone()
return results
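The paging arithmetic above (page N selects `LIMIT limit OFFSET (N - 1) * limit`) can be sketched against an in-memory database; the table and column names here are made up for the example:

```python
import sqlite3

limit = 2
connection = sqlite3.connect(":memory:")
cur = connection.cursor()
cur.execute("CREATE TABLE notes (id INTEGER, body TEXT)")
cur.executemany(
    "INSERT INTO notes VALUES (?, ?)",
    [(i, "note %d" % i) for i in range(1, 6)],
)

def page(pageno):
    # same LIMIT/OFFSET computation as search() above
    query = "SELECT id, body FROM notes ORDER BY id LIMIT {0} OFFSET {1}".format(
        limit, (pageno - 1) * limit
    )
    return cur.execute(query).fetchall()

first_page = page(1)   # rows 1-2
second_page = page(2)  # rows 3-4
connection.close()
```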
|
metadata | tags | from xl.nls import gettext
def N_(x):
return x
class _TD:
__slots__ = [
"name", # descriptive name
"translated_name", # translated name
"tag_name", # raw tag name
"type",
"editable",
"min",
"max",
"use_disk", # set true if should retrieve tag from disk -- which means
# the tag cannot be stored in the database
]
def __init__(self, name, type, **kwargs):
self.name = name
self.translated_name = gettext(name)
self.type = type
# these are overridable by keyword arg
self.editable = True
self.use_disk = False
for k, v in kwargs.items():
setattr(self, k, v)
#: List of metadata tags currently supported by exaile, which are
#: translated into the corresponding tag for each audio format if
#: it supports it. Prefer to extend this list's features instead
#: of creating your own list of tag metadata
#:
#: @note We use N_ (fake gettext) because for some uses these strings are
#: translated later, so we store both the translated and untranslated
#: version
tag_data = {
# fmt: off
"album": _TD(N_("Album"), "text"),
"arranger": _TD(N_("Arranger"), "text"),
"artist": _TD(N_("Artist"), "text"),
"albumartist": _TD(N_("Album artist"), "text"),
"author": _TD(N_("Author"), "text"),
"bpm": _TD(N_("BPM"), "int", min=0, max=500),
"copyright": _TD(N_("Copyright"), "text"),
"comment": _TD(N_("Comment"), "multiline"),
"composer": _TD(N_("Composer"), "text"),
"conductor": _TD(N_("Conductor"), "text"),
"cover": _TD(N_("Cover"), "image", use_disk=True),
"date": _TD(N_("Date"), "datetime"),
"discnumber": _TD(N_("Disc"), "dblnum", min=0, max=50),
"encodedby": _TD(N_("Encoded by"), "text"),
"genre": _TD(N_("Genre"), "text"),
"grouping": _TD(N_("Grouping"), "text"),
"isrc": _TD(N_("ISRC"), "text"),
"language": _TD(N_("Language"), "text"),
"lyrics": _TD(N_("Lyrics"), "multiline", use_disk=True),
"lyricist": _TD(N_("Lyricist"), "text"),
"organization": _TD(N_("Organization"), "text"),
"originalalbum": _TD(N_("Original album"), "text"),
"originalartist": _TD(N_("Original artist"), "text"),
"originaldate": _TD(N_("Original date"), "text"),
"part": None,
"performer": _TD(N_("Performer"), "text"),
"title": _TD(N_("Title"), "text"),
"tracknumber": _TD(N_("Track"), "dblnum", min=0, max=500),
"version": _TD(N_("Version"), "text"),
"website": _TD(N_("Website"), "text"),
# various internal tags
"__bitrate": _TD(N_("Bitrate"), "bitrate", editable=False),
"__basedir": None,
"__date_added": _TD(N_("Date added"), "timestamp", editable=False),
"__last_played": _TD(N_("Last played"), "timestamp", editable=False),
"__length": _TD(N_("Length"), "time", editable=False),
"__loc": _TD(N_("Location"), "location", editable=False),
"__modified": _TD(N_("Modified"), "timestamp", editable=False),
"__playtime": _TD(N_("Play time"), "time", editable=False),
"__playcount": _TD(N_("Times played"), "int", editable=False),
"__rating": None, # currently special.
"__startoffset": _TD(
N_("Start offset"), "time", min=0, max=3600
), # TODO: calculate these parameters
"__stopoffset": _TD(N_("Stop offset"), "time", min=0, max=3600),
# fmt: on
}
disk_tags = set()
for k, v in tag_data.items():
if v:
v.tag_name = k
if v.use_disk:
disk_tags.add(k)
def get_default_tagdata(tag):
"""If the tagname is not in tag_data, you can use this function
to get a _TD object for it"""
return _TD(tag, "text", editable=(not tag.startswith("__")), tag_name=tag)
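The fallback behaviour above can be exercised standalone. This is a minimal sketch, not exaile's real implementation: `_TD` here is a stand-in that drops the gettext translation step, keeping only the keyword-override logic and the `__`-prefix rule for internal (non-editable) tags.

```python
# Stand-in for exaile's _TD: unknown tags become editable "text" tags,
# while internal tags (prefixed "__") are marked non-editable.
class _TD:
    def __init__(self, name, type, **kwargs):
        self.name = name
        self.type = type
        # defaults, overridable by keyword arg
        self.editable = True
        self.use_disk = False
        for k, v in kwargs.items():
            setattr(self, k, v)

def get_default_tagdata(tag):
    return _TD(tag, "text", editable=(not tag.startswith("__")), tag_name=tag)

custom = get_default_tagdata("mood")        # user-facing tag: editable
internal = get_default_tagdata("__filesize")  # internal tag: read-only
```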
|
mr-tools | mr_tools | # The contents of this file are subject to the Common Public Attribution
# License Version 1.0. (the "License"); you may not use this file except in
# compliance with the License. You may obtain a copy of the License at
# http://code.reddit.com/LICENSE. The License is based on the Mozilla Public
# License Version 1.1, but Sections 14 and 15 have been added to cover use of
# software over a computer network and provide for limited attribution for the
# Original Developer. In addition, Exhibit A has been modified to be consistent
# with Exhibit B.
#
# Software distributed under the License is distributed on an "AS IS" basis,
# WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for
# the specific language governing rights and limitations under the License.
#
# The Original Code is reddit.
#
# The Original Developer is the Initial Developer. The Initial Developer of
# the Original Code is reddit Inc.
#
# All portions of the code written by reddit are Copyright (c) 2006-2015 reddit
# Inc. All Rights Reserved.
###############################################################################
import multiprocessing
import sys
from r2.lib.mr_tools._mr_tools import (emit, format_dataspec, mr_map,
mr_reduce, stdin)
def join_things(fields, deleted=False, spam=True):
"""A reducer that joins thing table dumps and data table dumps"""
# Because of how Python handles scope, if we want to modify these outside
# the closure function below, they need to be inside a mutable object.
# http://stackoverflow.com/a/23558809/120999
counters = {
'processed': 0,
'skipped': 0,
}
def process(thing_id, vals):
data = {}
thing = None
for val in vals:
if val[0] == 'thing':
thing = format_dataspec(val,
['data_type', # e.g. 'thing'
'thing_type', # e.g. 'link'
'ups',
'downs',
'deleted',
'spam',
'timestamp'])
elif val[0] == 'data':
val = format_dataspec(val,
['data_type', # e.g. 'data'
'thing_type', # e.g. 'link'
'key', # e.g. 'sr_id'
'value'])
if val.key in fields:
data[val.key] = val.value
if (
# silently ignore if we didn't see the 'thing' row
thing is not None
# remove spam and deleted as appropriate
and (deleted or thing.deleted == 'f')
and (spam or thing.spam == 'f')
# and silently ignore items that don't have all of the
# data that we need
and all(field in data for field in fields)):
counters['processed'] += 1
yield ((thing_id, thing.thing_type, thing.ups, thing.downs,
thing.deleted, thing.spam, thing.timestamp)
+ tuple(data[field] for field in fields))
else:
counters['skipped'] += 1
mr_reduce(process)
# Print to stderr to avoid getting this caught up in the pipe of
# compute_time_listings.
print >> sys.stderr, '%s items processed, %s skipped' % (
counters['processed'], counters['skipped'])
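The comment in `join_things` references the closure-scope workaround: Python 2 has no `nonlocal`, so a nested function cannot rebind an outer name, but it can mutate an outer mutable object such as the `counters` dict. A minimal illustration of the same pattern:

```python
# A closure cannot rebind an outer local (that would need `nonlocal`,
# absent in Python 2), but mutating an outer dict works in both 2 and 3.
def make_counter():
    counters = {'processed': 0}
    def bump():
        counters['processed'] += 1  # mutation is fine; `counters = ...` would not be
    return bump, counters

bump, counters = make_counter()
bump()
bump()
```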
class Mapper(object):
def __init__(self):
pass
def process(self, values):
raise NotImplementedError
def __call__(self, line):
line = line.strip('\n')
vals = line.split('\t')
return list(self.process(vals)) # a list of tuples
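The `Mapper` protocol above turns one tab-separated input line into a list of processed tuples. A self-contained sketch (mirroring, not importing, the class above) with a hypothetical subclass:

```python
# Mirror of the Mapper protocol: __call__ strips the trailing newline,
# splits on tabs, and collects whatever process() yields.
class Mapper(object):
    def process(self, values):
        raise NotImplementedError

    def __call__(self, line):
        vals = line.strip('\n').split('\t')
        return list(self.process(vals))  # a list of results

class UpperMapper(Mapper):
    def process(self, values):
        yield [v.upper() for v in values]

result = UpperMapper()('foo\tbar\n')
```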
def mr_map_parallel(processor, fd = stdin,
workers = multiprocessing.cpu_count(),
chunk_size = 1000):
# `processor` must be an instance of Mapper and promise that it is
# safe to execute in a fork()d process. Also note that we scramble
# the result ordering, but relying on result ordering breaks
# the mapreduce contract anyway. Note also that like many of the
# mr_tools functions, we break on newlines in the emitted output
if workers == 1:
return mr_map(processor, fd=fd)
pool = multiprocessing.Pool(workers)
for res in pool.imap_unordered(processor, fd, chunk_size):
for subres in res:
emit(subres)
def test():
from r2.lib.mr_tools._mr_tools import keyiter
for key, vals in keyiter():
print key, vals
for val in vals:
print '\t', val
class UpperMapper(Mapper):
def process(self, values):
yield map(str.upper, values)
def test_parallel():
return mr_map_parallel(UpperMapper())
|
models | rules | # The contents of this file are subject to the Common Public Attribution
# License Version 1.0. (the "License"); you may not use this file except in
# compliance with the License. You may obtain a copy of the License at
# http://code.reddit.com/LICENSE. The License is based on the Mozilla Public
# License Version 1.1, but Sections 14 and 15 have been added to cover use of
# software over a computer network and provide for limited attribution for the
# Original Developer. In addition, Exhibit A has been modified to be consistent
# with Exhibit B.
#
# Software distributed under the License is distributed on an "AS IS" basis,
# WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for
# the specific language governing rights and limitations under the License.
#
# The Original Code is reddit.
#
# The Original Developer is the Initial Developer. The Initial Developer of
# the Original Code is reddit Inc.
#
# All portions of the code written by reddit are Copyright (c) 2006-2015 reddit
# Inc. All Rights Reserved.
###############################################################################
import json
import time
from datetime import datetime
import pytz
from pycassa.system_manager import UTF8_TYPE
from pylons.i18n import _
from r2.lib.db import tdb_cassandra
OLD_SITEWIDE_RULES = [
_("spam"),
_("vote manipulation"),
_("personal information"),
_("sexualizing minors"),
_("breaking reddit"),
]
SITEWIDE_RULES = [
_("Spam"),
_("Personal and confidential information"),
_("Threatening, harassing, or inciting violence"),
]
MAX_RULES_PER_SUBREDDIT = 10
class SubredditRules(tdb_cassandra.View):
_use_db = True
_extra_schema_creation_args = {
"key_validation_class": UTF8_TYPE,
"column_name_class": UTF8_TYPE,
"default_validation_class": UTF8_TYPE,
}
_compare_with = UTF8_TYPE
_read_consistency_level = tdb_cassandra.CL.ONE
_write_consistency_level = tdb_cassandra.CL.ONE
_connection_pool = "main"
@classmethod
def get_rule_blob(self, short_name, description, priority, kind, created_utc=None):
if not created_utc:
created_utc = time.mktime(datetime.now(pytz.UTC).timetuple())
rule_params = {
"description": description,
"priority": priority,
"created_utc": created_utc,
}
if kind and kind != "all":
rule_params["kind"] = kind
jsonpacked = json.dumps(rule_params)
blob = {short_name: jsonpacked}
return blob
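The storage format built by `get_rule_blob` is a one-key dict mapping the rule's `short_name` to a JSON-encoded payload, with `"kind"` omitted when it is empty or `"all"`. A standalone sketch of that shape (`make_rule_blob` is an illustrative name, not the class method):

```python
import json
import time

# Blob shape: {short_name: json_payload}; "kind" is only stored when it
# actually restricts the rule (i.e. is neither falsy nor "all").
def make_rule_blob(short_name, description, priority, kind, created_utc=None):
    if not created_utc:
        created_utc = time.time()
    params = {
        "description": description,
        "priority": priority,
        "created_utc": created_utc,
    }
    if kind and kind != "all":
        params["kind"] = kind
    return {short_name: json.dumps(params)}

blob = make_rule_blob("No spam", "Please no spam", 0, "all", created_utc=1.0)
payload = json.loads(blob["No spam"])
```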
@classmethod
def create(self, subreddit, short_name, description, kind=None, created_utc=None):
"""Create a rule and append to the end of the priority list."""
try:
priority = len(list(self._cf.get(subreddit._id36)))
except tdb_cassandra.NotFoundException:
priority = 0
if priority >= MAX_RULES_PER_SUBREDDIT:
return
blob = self.get_rule_blob(short_name, description, priority, kind, created_utc)
self._set_values(subreddit._id36, blob)
@classmethod
def remove_rule(self, subreddit, short_name):
"""Remove a rule and update priorities of remaining rules."""
self._remove(subreddit._id36, [short_name])
rules = self.get_rules(subreddit)
blobs = {}
for index, rule in enumerate(rules):
if rule["priority"] != index:
blobs.update(
self.get_rule_blob(
short_name=rule["short_name"],
description=rule["description"],
priority=index,
kind=rule.get("kind"),
created_utc=rule["created_utc"],
)
)
self._set_values(subreddit._id36, blobs)
@classmethod
def update(self, subreddit, old_short_name, short_name, description, kind=None):
"""Update the short_name or description of a rule."""
rules = self._cf.get(subreddit._id36)
if old_short_name != short_name:
old_rule = rules.get(old_short_name, None)
self._remove(subreddit._id36, [old_short_name])
else:
old_rule = rules.get(short_name, None)
if not old_rule:
return False
old_rule = json.loads(old_rule)
if not old_rule.get("created_utc"):
old_rule["created_utc"] = time.mktime(
datetime.strptime(
old_rule.pop("when")[:-6], "%Y-%m-%d %H:%M:%S.%f"
).timetuple()
)
blob = self.get_rule_blob(
short_name=short_name,
description=description,
priority=old_rule["priority"],
kind=kind,
created_utc=old_rule["created_utc"],
)
self._set_values(subreddit._id36, blob)
@classmethod
def reorder(self, subreddit, short_name, priority):
"""Update the priority spot of a rule
Move an existing rule to the desired spot in the rules
list and then update the priority of the rules.
"""
rule_to_reorder = self.get_rule(subreddit, short_name)
if not rule_to_reorder:
return False
self._remove(subreddit._id36, [short_name])
rules = self.get_rules(subreddit)
priority = min(priority, len(rules))
current_priority_index = 0
blobs = {}
blobs.update(
self.get_rule_blob(
short_name=rule_to_reorder["short_name"],
description=rule_to_reorder["description"],
priority=priority,
kind=rule_to_reorder.get("kind"),
created_utc=rule_to_reorder["created_utc"],
)
)
for rule in rules:
# Placeholder for rule_to_reorder's new priority
if priority == current_priority_index:
current_priority_index += 1
if rule["priority"] != current_priority_index:
blobs.update(
self.get_rule_blob(
short_name=rule["short_name"],
description=rule["description"],
priority=current_priority_index,
kind=rule.get("kind"),
created_utc=rule["created_utc"],
)
)
current_priority_index += 1
self._set_values(subreddit._id36, blobs)
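The reindexing loop in `reorder` can be modelled on plain lists, which makes the "placeholder slot" step easier to see: the moved rule takes the clamped target priority, and the walk over the remaining rules skips that index. A pure-Python sketch under those assumptions:

```python
# Model of the reorder logic: pull the rule out, clamp the target priority,
# then renumber the rest, reserving one slot for the moved rule.
def reorder(rules, short_name, priority):
    moved = next(r for r in rules if r["short_name"] == short_name)
    rest = [r for r in rules if r["short_name"] != short_name]
    priority = min(priority, len(rest))
    out = [dict(moved, priority=priority)]
    idx = 0
    for rule in rest:
        if idx == priority:
            idx += 1  # slot reserved for the moved rule
        out.append(dict(rule, priority=idx))
        idx += 1
    return sorted(out, key=lambda r: r["priority"])

rules = [{"short_name": n, "priority": i} for i, n in enumerate("abc")]
result = reorder(rules, "c", 0)
```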
@classmethod
def get_rule(self, subreddit, short_name):
"""Return rule associated with short_name or None."""
try:
rules = self._cf.get(subreddit._id36)
except tdb_cassandra.NotFoundException:
return None
rule = rules.get(short_name, None)
if not rule:
return None
rule = json.loads(rule)
rule["short_name"] = short_name
return rule
@classmethod
def get_rules(self, subreddit, kind=None):
"""Return list of rules sorted by priority.
If kind is empty, then all the rules apply.
"""
try:
query = self._cf.get(subreddit._id36)
except tdb_cassandra.NotFoundException:
return []
result = []
for uuid, json_blob in query.iteritems():
payload = json.loads(json_blob)
if not payload.get("created_utc"):
payload["created_utc"] = time.mktime(
datetime.strptime(
payload.pop("when")[:-6], "%Y-%m-%d %H:%M:%S.%f"
).timetuple()
)
payload["short_name"] = uuid
if not kind:
result.append(payload)
elif kind in payload.get("kind", kind):
result.append(payload)
return sorted(result, key=lambda t: t["priority"])
|
views | sidebars | """Provides content for sidebars, accessible via the 'g' object.
TODO: make it smarter (direct access from 'g' using lazy objects?) and
cacheable.
"""
from __future__ import annotations
from datetime import datetime, timedelta
from abilian.core.models.subjects import User
from abilian.sbe.apps.communities.models import Community, Membership
from flask import g
from flask_login import current_user
from .social import social
class Sidebars:
@property
def latest_visitors(self) -> list[User]:
return (
User.query.filter(User.last_active != None)
.order_by(User.last_active.desc())
.limit(15)
.all()
)
@property
def active_visitor_count(self):
one_minute_ago = datetime.utcnow() - timedelta(0, 60)
return User.query.filter(User.last_active > one_minute_ago).count()
@property
def my_communities(self) -> list[Community]:
query = Community.query
query = query.order_by(Community.last_active_at.desc())
if not current_user.has_role("admin"):
# Filter with permissions
query = query.join(Membership).filter(Membership.user == current_user)
return query.limit(10).all()
@property
def all_communities(self):
# TODO: limit
return []
# return Community.query.all()
@social.before_request
def inject_sidebars():
g.sidebars = Sidebars()
|
deprecated | update_na_news_impprt | import glob
import logging
import os
from typing import Any, Dict, List, Text
import pandas as pd
from dag_factory.components.old_news_import import OldNewsImportSpec
from dag_factory.components.utils import (
date_str_to_unixtime,
get_all_csv_paths,
tag_dict_to_dict,
)
from newscrawler.crawler import extract_article_information_from_html
from newscrawler.extract_rss import get_page
from pymongo import MongoClient
from tfx import types
from tfx.components.base import base_component, base_executor, executor_spec
from tfx.types import standard_artifacts
from tfx.types.artifact_utils import get_single_uri
from tfx.types.component_spec import ChannelParameter, ExecutionParameter
class Executor(base_executor.BaseExecutor):
def Do(
self,
input_dict: Dict[Text, List[types.Artifact]],
output_dict: Dict[Text, List[types.Artifact]],
exec_properties: Dict[Text, Any],
) -> None:
client = MongoClient(
host=exec_properties["ip"],
port=int(exec_properties["port"]),
username=exec_properties["username"],
password=exec_properties["password"],
)
db = client[exec_properties["dbname"]]
for path in glob.glob(os.path.join(exec_properties["backup_dir"], "*")):
csv_paths = get_all_csv_paths(path)
print("Storing {} files to MongoDB".format(len(csv_paths)))
for csv_path in csv_paths:
if "NewsCrawler" in csv_path:
csv_rel_path = os.path.relpath(
csv_path, os.path.join(path, "pipelines")
)
csv_rel_path_norm = os.path.normpath(csv_rel_path)
csv_source = csv_rel_path_norm.split(os.sep)[0]
csv_source = csv_source.replace(".py", "")
col = db[csv_source]
try:
df = pd.read_csv(csv_path)
df = df.fillna(0)
except Exception:
continue
logging.info(
"--Storing {} files to {} from {}".format(
len(df), csv_source, csv_path
)
)
for _, row in df.iterrows():
data = dict(row)
try:
article_information = extract_article_information_from_html(
get_page(data["link"])
)
for key in data.keys():
if not data.get(key, None):
data[key] = article_information[key]
except Exception:
pass
data_op = {"$set": data}
query = {"link": data["link"]}
col.update_one(query, data_op, upsert=True)
logging.info("\tWorking on: {}".format(data["link"]))
class UpdateNANewsImport(base_component.BaseComponent):
SPEC_CLASS = OldNewsImportSpec
EXECUTOR_SPEC = executor_spec.ExecutorClassSpec(Executor)
def __init__(
self,
backup_dir: Text,
ip: Text = None,
port: Text = None,
username: Text = None,
password: Text = None,
dbname: Text = None,
):
if not ip:
ip = "mongo"
if not port:
port = "27017"
if not username:
username = os.environ["MONGO_ROOT_USER"]
if not password:
password = os.environ["MONGO_ROOT_PASSWORD"]
if not dbname:
dbname = os.environ["MONGO_DATABASE_NAME"]
spec = OldNewsImportSpec(
ip=ip,
port=port,
username=username,
password=password,
dbname=dbname,
backup_dir=backup_dir,
)
super(UpdateNANewsImport, self).__init__(spec=spec)
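The enrichment loop inside `Do` backfills empty CSV fields from the scraped article before upserting. It can be modelled without MongoDB or the crawler; `backfill` below is an illustrative helper name, and the falsy check mirrors the `if not data.get(key, None)` test above:

```python
# Falsy or missing row fields (empty strings, 0 from fillna, None) are
# backfilled from the scraped article information; truthy fields are kept.
def backfill(row, article_information):
    for key in row:
        if not row.get(key):
            row[key] = article_information.get(key)
    return row

row = backfill(
    {"link": "http://example.com/a", "title": "", "author": 0},
    {"title": "Hello", "author": "Jane", "link": "ignored"},
)
```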
|
serializers | shift_swap | import datetime
import typing
from apps.schedules.models import OnCallSchedule, ShiftSwapRequest
from common.api_helpers.custom_fields import (
OrganizationFilteredPrimaryKeyRelatedField,
TimeZoneAwareDatetimeField,
)
from common.api_helpers.mixins import EagerLoadingMixin
from django.utils import timezone
from rest_framework import serializers
if typing.TYPE_CHECKING:
from apps.user_management.models import User
class BaseShiftSwapRequestListSerializer(
EagerLoadingMixin, serializers.ModelSerializer
):
id = serializers.CharField(read_only=True, source="public_primary_key")
schedule = OrganizationFilteredPrimaryKeyRelatedField(
queryset=OnCallSchedule.objects
)
created_at = TimeZoneAwareDatetimeField(read_only=True)
updated_at = TimeZoneAwareDatetimeField(read_only=True)
swap_start = TimeZoneAwareDatetimeField()
swap_end = TimeZoneAwareDatetimeField()
beneficiary = serializers.CharField(
read_only=True, source="beneficiary.public_primary_key"
)
benefactor = serializers.SerializerMethodField(read_only=True)
SELECT_RELATED = [
"schedule",
"beneficiary",
"benefactor",
]
class Meta:
model = ShiftSwapRequest
fields = [
"id",
"created_at",
"updated_at",
"status",
"schedule",
"swap_start",
"swap_end",
"description",
"beneficiary",
"benefactor",
]
read_only_fields = [
"status",
]
class ShiftSwapRequestListSerializer(BaseShiftSwapRequestListSerializer):
def get_benefactor(self, obj: ShiftSwapRequest) -> str | None:
return obj.benefactor.public_primary_key if obj.benefactor else None
class ShiftSwapRequestSerializer(ShiftSwapRequestListSerializer):
class Meta(ShiftSwapRequestListSerializer.Meta):
fields = ShiftSwapRequestListSerializer.Meta.fields + [
"shifts",
]
read_only_fields = ShiftSwapRequestListSerializer.Meta.read_only_fields + [
"shifts",
]
@staticmethod
def validate_start_and_end_times(
swap_start: datetime.datetime, swap_end: datetime.datetime
) -> None:
if timezone.now() > swap_start:
raise serializers.ValidationError(
"swap_start must be a datetime in the future"
)
if swap_start > swap_end:
raise serializers.ValidationError("swap_end must occur after swap_start")
def validate(self, data):
swap_start = data.get("swap_start", None)
swap_end = data.get("swap_end", None)
if self.partial: # self.partial is true when it's a "partial update" aka PATCH
# if any time related field is specified then we will enforce that they must all be specified
time_fields = [swap_start, swap_end]
any_time_fields_specified = any(time_fields)
all_time_fields_specified = all(time_fields)
if any_time_fields_specified and not all_time_fields_specified:
raise serializers.ValidationError(
"when doing a partial update on time related fields, both start and end times must be specified"
)
elif all_time_fields_specified:
self.validate_start_and_end_times(swap_start, swap_end)
else:
self.validate_start_and_end_times(swap_start, swap_end)
# TODO: we should validate that the beneficiary actually has shifts for the specified schedule
# between swap_start and swap_end
return data
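The PATCH branch of `validate` enforces an all-or-nothing rule on the paired time fields via `any`/`all`. The invariant reduces to a one-liner; `paired_fields_ok` is an illustrative name for the check, not part of the serializer API:

```python
# Either all paired fields are specified or none are: reject the mixed case.
def paired_fields_ok(*fields):
    return not any(fields) or all(fields)

both = paired_fields_ok("2030-01-01T00:00Z", "2030-01-02T00:00Z")  # valid
neither = paired_fields_ok(None, None)                             # valid
only_one = paired_fields_ok("2030-01-01T00:00Z", None)             # invalid
```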
class ShiftSwapRequestExpandedUsersListSerializer(BaseShiftSwapRequestListSerializer):
beneficiary = serializers.SerializerMethodField(read_only=True)
benefactor = serializers.SerializerMethodField(read_only=True)
def _serialize_user(self, user: "User") -> dict | None:
user_data = None
if user:
user_data = {
"display_name": user.username,
"email": user.email,
"pk": user.public_primary_key,
"avatar_full": user.avatar_full_url,
}
return user_data
def get_benefactor(self, obj: ShiftSwapRequest) -> dict | None:
return self._serialize_user(obj.benefactor)
def get_beneficiary(self, obj: ShiftSwapRequest) -> dict | None:
return self._serialize_user(obj.beneficiary)
class ShiftSwapRequestExpandedUsersSerializer(
ShiftSwapRequestExpandedUsersListSerializer,
ShiftSwapRequestSerializer,
):
pass
|
femexamples | ccx_cantilever_faceload | # ***************************************************************************
# * Copyright (c) 2019 Bernd Hahnebach <bernd@bimstatik.org> *
# * Copyright (c) 2020 Sudhanshu Dubey <sudhanshu.thethunder@gmail.com> *
# * *
# * This file is part of the FreeCAD CAx development system. *
# * *
# * This program is free software; you can redistribute it and/or modify *
# * it under the terms of the GNU Lesser General Public License (LGPL) *
# * as published by the Free Software Foundation; either version 2 of *
# * the License, or (at your option) any later version. *
# * for detail see the LICENCE text file. *
# * *
# * This program is distributed in the hope that it will be useful, *
# * but WITHOUT ANY WARRANTY; without even the implied warranty of *
# * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
# * GNU Library General Public License for more details. *
# * *
# * You should have received a copy of the GNU Library General Public *
# * License along with this program; if not, write to the Free Software *
# * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 *
# * USA *
# * *
# ***************************************************************************
import ObjectsFem
from . import manager
from .ccx_cantilever_base_solid import setup_cantilever_base_solid
from .manager import init_doc
def get_information():
return {
"name": "CCX cantilever face load",
"meshtype": "solid",
"meshelement": "Tet10",
"constraints": ["fixed", "force"],
"solvers": ["calculix", "ccxtools", "elmer", "mystran", "z88"],
"material": "solid",
"equations": ["mechanical"],
}
def get_explanation(header=""):
return (
header
+ """
To run the example from Python console use:
from femexamples.ccx_cantilever_faceload import setup
setup()
See forum topic post:
...
"""
)
def setup(doc=None, solvertype="ccxtools"):
# init FreeCAD document
if doc is None:
doc = init_doc()
# explanation object
# just keep the following line and change text string in get_explanation method
manager.add_explanation_obj(
doc, get_explanation(manager.get_header(get_information()))
)
# setup CalculiX cantilever
doc = setup_cantilever_base_solid(doc, solvertype)
analysis = doc.Analysis
geom_obj = doc.Box
# constraint force
con_force = ObjectsFem.makeConstraintForce(doc, "ConstraintForce")
con_force.References = [(geom_obj, "Face2")]
con_force.Force = 9000000.0
con_force.Direction = (geom_obj, ["Edge5"])
con_force.Reversed = True
analysis.addObject(con_force)
doc.recompute()
return doc
|
views | insights | from typing import Any, Dict
from ee.clickhouse.queries.funnels.funnel_correlation import FunnelCorrelation
from ee.clickhouse.queries.paths import ClickhousePaths
from ee.clickhouse.queries.retention import ClickhouseRetention
from ee.clickhouse.queries.stickiness import ClickhouseStickiness
from posthog.api.insight import InsightViewSet
from posthog.decorators import cached_by_filters
from posthog.models import Insight
from posthog.models.dashboard import Dashboard
from posthog.models.filters import Filter
from rest_framework.decorators import action
from rest_framework.permissions import SAFE_METHODS, BasePermission
from rest_framework.request import Request
from rest_framework.response import Response
class CanEditInsight(BasePermission):
message = "This insight is on a dashboard that can only be edited by its owner, team members invited to editing the dashboard, and project admins."
def has_object_permission(self, request: Request, view, insight: Insight) -> bool:
if request.method in SAFE_METHODS:
return True
return (
view.user_permissions.insight(insight).effective_privilege_level
== Dashboard.PrivilegeLevel.CAN_EDIT
)
class ClickhouseInsightsViewSet(InsightViewSet):
permission_classes = [*InsightViewSet.permission_classes, CanEditInsight]
retention_query_class = ClickhouseRetention
stickiness_query_class = ClickhouseStickiness
paths_query_class = ClickhousePaths
# ******************************************
# /projects/:id/insights/funnel/correlation
#
# params:
# - params are the same as for funnel
#
# Returns significant events, i.e. those that are correlated with a person
# making it through a funnel
# ******************************************
@action(methods=["GET", "POST"], url_path="funnel/correlation", detail=False)
def funnel_correlation(
self, request: Request, *args: Any, **kwargs: Any
) -> Response:
result = self.calculate_funnel_correlation(request)
return Response(result)
@cached_by_filters
def calculate_funnel_correlation(self, request: Request) -> Dict[str, Any]:
team = self.team
filter = Filter(request=request, team=team)
base_uri = request.build_absolute_uri("/")
result = FunnelCorrelation(filter=filter, team=team, base_uri=base_uri).run()
return {"result": result}
|
extractor | wat | # coding: utf-8
from __future__ import unicode_literals
from ..compat import compat_str
from ..utils import ExtractorError, int_or_none, try_get, unified_strdate
from .common import InfoExtractor
class WatIE(InfoExtractor):
_VALID_URL = r"(?:wat:|https?://(?:www\.)?wat\.tv/video/.*-)(?P<id>[0-9a-z]+)"
IE_NAME = "wat.tv"
_TESTS = [
{
"url": "http://www.wat.tv/video/soupe-figues-l-orange-aux-epices-6z1uz_2hvf7_.html",
"info_dict": {
"id": "11713067",
"ext": "mp4",
"title": "Soupe de figues à l'orange et aux épices",
"description": 'Retrouvez l\'émission "Petits plats en équilibre", diffusée le 18 août 2014.',
"upload_date": "20140819",
"duration": 120,
},
"params": {
# m3u8 download
"skip_download": True,
},
"expected_warnings": ["HTTP Error 404"],
"skip": "This content is no longer available",
},
{
"url": "http://www.wat.tv/video/gregory-lemarchal-voix-ange-6z1v7_6ygkj_.html",
"md5": "b16574df2c3cd1a36ca0098f2a791925",
"info_dict": {
"id": "11713075",
"ext": "mp4",
"title": "Grégory Lemarchal, une voix d'ange depuis 10 ans (1/3)",
"upload_date": "20140816",
},
"expected_warnings": ["Ce contenu n'est pas disponible pour l'instant."],
"skip": "This content is no longer available",
},
]
_GEO_BYPASS = False
def _real_extract(self, url):
video_id = self._match_id(url)
video_id = (
video_id
if video_id.isdigit() and len(video_id) > 6
else compat_str(int(video_id, 36))
)
# 'contentv4' is used on the website, but it also returns the related
# videos, which we don't need
# video_data = self._download_json(
# 'http://www.wat.tv/interface/contentv4s/' + video_id, video_id)
video_data = self._download_json(
"https://mediainfo.tf1.fr/mediainfocombo/" + video_id,
video_id,
query={"context": "MYTF1", "pver": "4001000"},
)
video_info = video_data["media"]
error_desc = video_info.get("error_desc")
if error_desc:
if video_info.get("error_code") == "GEOBLOCKED":
self.raise_geo_restricted(error_desc, video_info.get("geoList"))
raise ExtractorError(error_desc, expected=True)
title = video_info["title"]
formats = []
def extract_formats(manifest_urls):
for f, f_url in manifest_urls.items():
if not f_url:
continue
if f in ("dash", "mpd"):
formats.extend(
self._extract_mpd_formats(
f_url.replace("://das-q1.tf1.fr/", "://das-q1-ssl.tf1.fr/"),
video_id,
mpd_id="dash",
fatal=False,
)
)
elif f == "hls":
formats.extend(
self._extract_m3u8_formats(
f_url,
video_id,
"mp4",
"m3u8_native",
m3u8_id="hls",
fatal=False,
)
)
delivery = video_data.get("delivery") or {}
extract_formats({delivery.get("format"): delivery.get("url")})
if not formats:
if delivery.get("drm"):
raise ExtractorError("This video is DRM protected.", expected=True)
manifest_urls = self._download_json(
"http://www.wat.tv/get/webhtml/" + video_id, video_id, fatal=False
)
if manifest_urls:
extract_formats(manifest_urls)
self._sort_formats(formats)
return {
"id": video_id,
"title": title,
"thumbnail": video_info.get("preview"),
"upload_date": unified_strdate(
try_get(
video_data, lambda x: x["mediametrie"]["chapters"][0]["estatS4"]
)
),
"duration": int_or_none(video_info.get("duration")),
"formats": formats,
}
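The id normalisation at the top of `_real_extract` treats anything that is not a long purely-numeric string as a base-36 id and decodes it with `int(x, 36)`. A standalone sketch of that branch (function name illustrative):

```python
# Long all-digit ids pass through; short or alphanumeric ids are decoded
# from base 36 into their decimal form.
def normalize_video_id(video_id):
    if video_id.isdigit() and len(video_id) > 6:
        return video_id
    return str(int(video_id, 36))

short_alpha = normalize_video_id("zz")       # base-36 decode: 35*36 + 35
long_digits = normalize_video_id("1234567")  # already a numeric id
```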
|
Models | PrinterConfigurationModel | # Copyright (c) 2018 Ultimaker B.V.
# Cura is released under the terms of the LGPLv3 or higher.
from typing import List
from PyQt6.QtCore import QObject, pyqtProperty, pyqtSignal
MYPY = False
if MYPY:
from cura.PrinterOutput.Models.ExtruderConfigurationModel import (
ExtruderConfigurationModel,
)
class PrinterConfigurationModel(QObject):
configurationChanged = pyqtSignal()
def __init__(self) -> None:
super().__init__()
self._printer_type = ""
self._extruder_configurations = [] # type: List[ExtruderConfigurationModel]
self._buildplate_configuration = ""
def setPrinterType(self, printer_type: str) -> None:
self._printer_type = printer_type
@pyqtProperty(str, fset=setPrinterType, notify=configurationChanged)
def printerType(self) -> str:
return self._printer_type
def setExtruderConfigurations(
self, extruder_configurations: List["ExtruderConfigurationModel"]
) -> None:
if self._extruder_configurations != extruder_configurations:
self._extruder_configurations = extruder_configurations
for extruder_configuration in self._extruder_configurations:
extruder_configuration.extruderConfigurationChanged.connect(
self.configurationChanged
)
self.configurationChanged.emit()
@pyqtProperty(
"QVariantList", fset=setExtruderConfigurations, notify=configurationChanged
)
def extruderConfigurations(self):
return self._extruder_configurations
def setBuildplateConfiguration(self, buildplate_configuration: str) -> None:
if self._buildplate_configuration != buildplate_configuration:
self._buildplate_configuration = buildplate_configuration
self.configurationChanged.emit()
@pyqtProperty(str, fset=setBuildplateConfiguration, notify=configurationChanged)
def buildplateConfiguration(self) -> str:
return self._buildplate_configuration
def isValid(self) -> bool:
"""This method is intended to indicate whether the configuration is valid or not.
The method checks if the mandatory fields are or not set
"""
if not self._extruder_configurations:
return False
for configuration in self._extruder_configurations:
if configuration is None:
return False
return self._printer_type != ""
def hasAnyMaterialLoaded(self) -> bool:
if not self.isValid():
return False
for configuration in self._extruder_configurations:
if (
configuration.activeMaterial
and configuration.activeMaterial.type != "empty"
):
return True
return False
def __str__(self):
message_chunks = []
message_chunks.append("Printer type: " + self._printer_type)
message_chunks.append("Extruders: [")
for configuration in self._extruder_configurations:
message_chunks.append(" " + str(configuration))
message_chunks.append("]")
if self._buildplate_configuration is not None:
message_chunks.append("Buildplate: " + self._buildplate_configuration)
return "\n".join(message_chunks)
def __eq__(self, other):
if not isinstance(other, PrinterConfigurationModel):
return False
if self.printerType != other.printerType:
return False
if self.buildplateConfiguration != other.buildplateConfiguration:
return False
if len(self.extruderConfigurations) != len(other.extruderConfigurations):
return False
for self_extruder, other_extruder in zip(
sorted(self._extruder_configurations, key=lambda x: x.position),
sorted(other.extruderConfigurations, key=lambda x: x.position),
):
if self_extruder != other_extruder:
return False
return True
def __hash__(self):
"""The hash function is used to compare and create unique sets. The configuration is unique if the configuration
of the extruders is unique (the order of the extruders matters), and the type and buildplate is the same.
"""
extruder_hash = hash(0)
first_extruder = None
for configuration in self._extruder_configurations:
extruder_hash ^= hash(configuration)
if configuration.position == 0:
first_extruder = configuration
# XOR alone is order-insensitive, so AND in the first extruder's hash to make the extruder order matter
if first_extruder:
extruder_hash &= hash(first_extruder)
return (
hash(self._printer_type)
^ extruder_hash
^ hash(self._buildplate_configuration)
)
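The extra mixing step in `__hash__` exists because XOR-combining hashes is order-insensitive: two configurations differing only in extruder order would collide without it. A small demonstration with fixed stand-in values instead of real extruder hashes:

```python
# XOR of two values is the same regardless of order; folding the
# first element back in with AND breaks that symmetry.
a, b = 0b1010, 0b0110       # stand-ins for two extruder hashes
xor_ab = a ^ b              # order lost: equals b ^ a
xor_ba = b ^ a
mixed_ab = (a ^ b) & a      # AND in whichever element came first
mixed_ba = (b ^ a) & b
```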
|
neubot | runner_core | # neubot/runner_core.py
#
# Copyright (c) 2012 Simone Basso <bassosimone@gmail.com>,
# NEXA Center for Internet & Society at Politecnico di Torino
#
# This file is part of Neubot <http://www.neubot.org/>.
#
# Neubot is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Neubot is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Neubot. If not, see <http://www.gnu.org/licenses/>.
#
""" Component that runs the selected test """
#
# This is the component that allows for running tests
# on demand both from command line and from the web
# user interface of Neubot.
#
import collections
import getopt
import logging
import sys
if __name__ == "__main__":
sys.path.insert(0, ".")
from neubot import bittorrent, privacy, runner_rendezvous, system, utils_modules
from neubot.config import CONFIG
from neubot.database import DATABASE
from neubot.defer import Deferred
from neubot.log import STREAMING_LOG
from neubot.net.poller import POLLER
from neubot.notify import NOTIFIER
from neubot.raw_negotiate import RawNegotiate
from neubot.runner_dload import RunnerDload
from neubot.runner_hosts import RUNNER_HOSTS
from neubot.runner_mlabns import RunnerMlabns
from neubot.runner_tests import RUNNER_TESTS
from neubot.speedtest.client import ClientSpeedtest
class RunnerCore(object):
"""Implements component that runs the selected test"""
def __init__(self):
"""Initialize component that runs the selected test"""
self.dynamic_tests = {}
self.queue = collections.deque()
self.running = False
def test_is_running(self):
"""Reports whether a test is running"""
return self.running
def run(self, test, deferred, auto_discover=True, ctx=None):
"""Run test and deferred when done"""
if (
test != "rendezvous"
and test != "speedtest"
and test != "bittorrent"
and test != "dload"
and test != "raw"
and test != "mlab-ns"
and test not in self.dynamic_tests
):
utils_modules.modprobe("mod_" + test, "register_test", self.dynamic_tests)
if auto_discover:
logging.info("runner_core: Need to auto-discover first...")
deferred2 = Deferred()
deferred2.add_callback(lambda param: None)
if test == "raw":
self.queue.append(("mlab-ns", deferred2, {"policy": "random"}))
elif test == "bittorrent" or test == "speedtest":
self.queue.append(("rendezvous", deferred2, None))
else:
try:
test_rec = self.dynamic_tests[test]
self.queue.append(
(
test_rec["discover_method"],
deferred2,
{"policy": test_rec["discover_policy"]},
)
)
except (KeyboardInterrupt, SystemExit):
raise
except:
logging.warning("runner: internal error", exc_info=1)
self.queue.append((test, deferred, ctx))
self.run_queue()
def run_queue(self):
"""If possible run the first test in queue"""
# Adapted from neubot/rendezvous/client.py
if not self.queue:
return
if self.running:
return
#
# Subscribe BEFORE starting the test, otherwise we
# may miss the 'testdone' event if the connection
# to the negotiator service fails, and we will stay
# stuck forever.
#
NOTIFIER.subscribe("testdone", self.test_done)
# Prevent concurrent tests
self.running = True
# Safely run first element in queue
deferred = Deferred()
deferred.add_callback(self._do_run_queue)
deferred.add_errback(self._run_queue_error)
deferred.callback(self.queue[0])
@staticmethod
def _run_queue_error(error):
"""Invoked when _do_run_queue() fails"""
logging.error("runner_core: catched exception: %s", error)
NOTIFIER.publish("testdone")
def _do_run_queue(self, first_elem):
"""Actually run first element in queue"""
# Make a copy of current settings
conf = CONFIG.copy()
# Make sure we abide to M-Lab policy
if privacy.count_valid(conf, "privacy.") != 3:
privacy.complain()
raise RuntimeError("runner_core: bad privacy settings")
elif first_elem[0] == "rendezvous":
runner_rendezvous.run(conf["agent.master"], "9773")
elif first_elem[0] == "speedtest":
uri = RUNNER_TESTS.test_to_negotiate_uri("speedtest")
conf["speedtest.client.uri"] = uri
client = ClientSpeedtest(POLLER)
client.configure(conf)
client.connect_uri()
elif first_elem[0] == "bittorrent":
uri = RUNNER_TESTS.test_to_negotiate_uri("bittorrent")
conf["bittorrent._uri"] = uri
bittorrent.run(POLLER, conf)
elif first_elem[0] == "dload":
RunnerDload(first_elem[2])
elif first_elem[0] == "raw":
address = RUNNER_HOSTS.get_random_host()
handler = RawNegotiate()
handler.connect((address, 8080), CONFIG["prefer_ipv6"], 0, {})
elif first_elem[0] == "mlab-ns":
handler = RunnerMlabns()
if not first_elem[2]:
extra = {"policy": ""} # get closest server by default
else:
extra = first_elem[2]
handler.connect(
("mlab-ns.appspot.com", 80), CONFIG["prefer_ipv6"], 0, extra
)
elif first_elem[0] in self.dynamic_tests:
address = RUNNER_HOSTS.get_random_host()
port = 80 # XXX
self.dynamic_tests[first_elem[0]]["test_func"](
{
"address": address,
"conf": CONFIG.copy(),
"poller": POLLER,
"port": port,
}
)
else:
raise RuntimeError("runner_core: asked to run an unknown test")
def test_done(self, *baton):
"""Invoked when the test is done"""
#
# Stop streaming test events to interested parties
# via the log streaming API.
        # Do not stop processing immediately; give the HTTP
        # code time to stream logs to the client in case the
        # connection fails immediately.
# This must not be done when we're processing the
# somewhat internal 'rendezvous' or 'mlab-ns' tests.
#
if self.queue[0][0] != "rendezvous" and self.queue[0][0] != "mlab-ns":
POLLER.sched(2, STREAMING_LOG.stop_streaming)
# Paranoid
if baton[0] != "testdone":
raise RuntimeError("runner_core: invoked for the wrong event")
# Notify the caller that the test is done
deferred, ctx = self.queue.popleft()[1:]
deferred.callback(ctx)
#
# Allow for more tests
# If callback() adds one more test, that would
# be run by the run_queue() invocation below.
#
self.running = False
# Eventually run next queued test
self.run_queue()
RUNNER_CORE = RunnerCore()
USAGE = "usage: neubot runner_core [-nv] [-f dabatase] test [uri]"
def main(args):
"""Main function"""
try:
options, arguments = getopt.getopt(args[1:], "f:nv")
except getopt.error:
sys.exit(USAGE)
database_path = system.get_default_database_path()
auto_discover = True
for name, value in options:
if name == "-f":
database_path = value
elif name == "-n":
auto_discover = False
elif name == "-v":
CONFIG["verbose"] = 1
if len(arguments) != 1 and len(arguments) != 2:
sys.exit(USAGE)
DATABASE.set_path(database_path)
CONFIG.merge_database(DATABASE.connection())
if len(arguments) == 2:
RUNNER_TESTS.update({arguments[0]: [arguments[1]]})
ctx = {"uri": arguments[1]}
else:
ctx = None
deferred = Deferred()
deferred.add_callback(lambda param: None)
RUNNER_CORE.run(arguments[0], deferred, auto_discover, ctx)
POLLER.loop()
if __name__ == "__main__":
main(sys.argv)
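The core of `RunnerCore` is its queue discipline: tests accumulate in a deque, at most one runs at a time, and completion (the "testdone" event) re-arms the queue. A stripped-down sketch of that pattern, with hypothetical names (`MiniRunner`, a `log` list standing in for actually starting tests):

```python
import collections

class MiniRunner:
    """Serialized test queue: only one test runs at a time; finishing a
    test automatically starts the next queued one."""

    def __init__(self):
        self.queue = collections.deque()
        self.running = False
        self.log = []  # records start/done events in order

    def run(self, test):
        self.queue.append(test)
        self.run_queue()

    def run_queue(self):
        # Same two guards as RunnerCore.run_queue(): nothing queued,
        # or a test is already in flight
        if not self.queue or self.running:
            return
        self.running = True
        self.log.append("start:" + self.queue[0])

    def test_done(self):
        # Mirrors RunnerCore.test_done(): pop, clear the flag,
        # then eventually run the next queued test
        test = self.queue.popleft()
        self.log.append("done:" + test)
        self.running = False
        self.run_queue()
```

The real class adds Deferred callbacks, auto-discovery prepended to the queue, and NOTIFIER-based completion, but the serialization logic is the same.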
|
gui | lists | # =============================================================================
# Copyright (C) 2010 Diego Duclos
#
# This file is part of pyfa.
#
# pyfa is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# pyfa is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with pyfa. If not, see <http://www.gnu.org/licenses/>.
# =============================================================================
import gui.display
# noinspection PyPackageRequirements
import wx
from eos.saveddata.targetProfile import TargetProfile
from graphs.style import BASE_COLORS, LIGHTNESSES, STYLES
from graphs.wrapper import SourceWrapper, TargetWrapper
from gui.builtinViewColumns.graphColor import GraphColor
from gui.builtinViewColumns.graphLightness import GraphLightness
from gui.builtinViewColumns.graphLineStyle import GraphLineStyle
from gui.contextMenu import ContextMenu
from service.const import GraphCacheCleanupReason
from service.fit import Fit
from .stylePickers import ColorPickerPopup, LightnessPickerPopup, LineStylePickerPopup
_t = wx.GetTranslation
class BaseWrapperList(gui.display.Display):
def __init__(self, graphFrame, parent):
super().__init__(parent)
self.graphFrame = graphFrame
self._wrappers = []
self.hoveredRow = None
self.hoveredColumn = None
self.Bind(wx.EVT_CHAR_HOOK, self.kbEvent)
self.Bind(wx.EVT_LEFT_DOWN, self.OnLeftDown)
self.Bind(wx.EVT_LEFT_DCLICK, self.OnLeftDClick)
self.Bind(wx.EVT_MOTION, self.OnMouseMove)
self.Bind(wx.EVT_LEAVE_WINDOW, self.OnLeaveWindow)
@property
def wrappers(self):
# Sort fits first, then target profiles
return sorted(self._wrappers, key=lambda w: not w.isFit)
# UI-related stuff
@property
def defaultTTText(self):
raise NotImplementedError
def refreshExtraColumns(self, extraColSpecs):
baseColNames = set()
for baseColName in self.DEFAULT_COLS:
if ":" in baseColName:
baseColName = baseColName.split(":", 1)[0]
baseColNames.add(baseColName)
columnsToRemove = set()
for col in self.activeColumns:
if col.name not in baseColNames:
columnsToRemove.add(col)
for col in columnsToRemove:
self.removeColumn(col)
for colSpec in extraColSpecs:
self.appendColumnBySpec(colSpec)
self.refreshView()
def refreshView(self):
self.refresh(self.wrappers)
def updateView(self):
self.update(self.wrappers)
# UI event handling
def OnMouseMove(self, event):
row, _, col = self.HitTestSubItem(event.Position)
if row != self.hoveredRow or col != self.hoveredColumn:
if self.ToolTip is not None:
self.SetToolTip(None)
else:
self.hoveredRow = row
self.hoveredColumn = col
if row != -1 and col != -1 and col < self.ColumnCount:
item = self.getWrapper(row)
if item is None:
return
tooltip = self.activeColumns[col].getToolTip(item)
if tooltip:
self.SetToolTip(tooltip)
else:
self.SetToolTip(None)
else:
self.SetToolTip(self.defaultTTText)
event.Skip()
def OnLeaveWindow(self, event):
self.SetToolTip(None)
self.hoveredRow = None
self.hoveredColumn = None
event.Skip()
def handleDrag(self, type, fitID):
if type == "fit" and not self.containsFitID(fitID):
sFit = Fit.getInstance()
fit = sFit.getFit(fitID)
self.appendItem(fit)
self.updateView()
self.graphFrame.draw()
def OnLeftDown(self, event):
row, _ = self.HitTest(event.Position)
if row != -1:
pickers = {
self.getColIndex(GraphColor): ColorPickerPopup,
self.getColIndex(GraphLightness): LightnessPickerPopup,
self.getColIndex(GraphLineStyle): LineStylePickerPopup,
}
# In case we had no index for some column, remove None
pickers.pop(None, None)
col = self.getColumn(event.Position)
if col in pickers:
picker = pickers[col]
wrapper = self.getWrapper(row)
if wrapper is not None:
win = picker(parent=self, wrapper=wrapper)
pos = wx.GetMousePosition()
win.Position(pos, (0, 0))
win.Popup()
return
event.Skip()
def OnLineStyleChange(self):
self.updateView()
self.graphFrame.draw()
def OnLeftDClick(self, event):
row, _ = self.HitTest(event.Position)
wrapper = self.getWrapper(row)
if wrapper is None:
return
self.removeWrappers([wrapper])
def kbEvent(self, event):
keycode = event.GetKeyCode()
modifiers = event.GetModifiers()
if keycode == 65 and modifiers == wx.MOD_CONTROL:
self.selectAll()
elif (
keycode in (wx.WXK_DELETE, wx.WXK_NUMPAD_DELETE)
and modifiers == wx.MOD_NONE
):
self.removeWrappers(self.getSelectedWrappers())
event.Skip()
# Wrapper-related methods
def getWrapper(self, row):
if row == -1:
return None
try:
return self.wrappers[row]
except IndexError:
return None
def removeWrappers(self, wrappers):
wrappers = set(wrappers).intersection(self._wrappers)
if not wrappers:
return
for wrapper in wrappers:
self._wrappers.remove(wrapper)
self.updateView()
for wrapper in wrappers:
if wrapper.isFit:
self.graphFrame.clearCache(
reason=GraphCacheCleanupReason.fitRemoved, extraData=wrapper.item.ID
)
elif wrapper.isProfile:
self.graphFrame.clearCache(
reason=GraphCacheCleanupReason.profileRemoved,
extraData=wrapper.item.ID,
)
self.graphFrame.draw()
def getSelectedWrappers(self):
wrappers = []
for row in self.getSelectedRows():
wrapper = self.getWrapper(row)
if wrapper is None:
continue
wrappers.append(wrapper)
return wrappers
def appendItem(self, item):
        raise NotImplementedError
def containsFitID(self, fitID):
for wrapper in self._wrappers:
if wrapper.isFit and wrapper.item.ID == fitID:
return True
return False
def containsProfileID(self, profileID):
for wrapper in self._wrappers:
if wrapper.isProfile and wrapper.item.ID == profileID:
return True
return False
# Wrapper-related events
def OnFitRenamed(self, event):
if self.containsFitID(event.fitID):
self.updateView()
def OnFitChanged(self, event):
if set(event.fitIDs).intersection(w.item.ID for w in self._wrappers if w.isFit):
self.updateView()
def OnFitRemoved(self, event):
wrapper = next(
(w for w in self._wrappers if w.isFit and w.item.ID == event.fitID), None
)
if wrapper is not None:
self._wrappers.remove(wrapper)
self.updateView()
def OnProfileRenamed(self, event):
if self.containsProfileID(event.profileID):
self.updateView()
def OnProfileChanged(self, event):
if self.containsProfileID(event.profileID):
self.updateView()
def OnProfileRemoved(self, event):
wrapper = next(
(w for w in self._wrappers if w.isProfile and w.item.ID == event.profileID),
None,
)
if wrapper is not None:
self._wrappers.remove(wrapper)
self.updateView()
# Context menu handlers
def addFit(self, fit):
if fit is None:
return
if self.containsFitID(fit.ID):
return
self.appendItem(fit)
self.updateView()
self.graphFrame.draw()
def getExistingFitIDs(self):
return [w.item.ID for w in self._wrappers if w.isFit]
def addFitsByIDs(self, fitIDs):
sFit = Fit.getInstance()
for fitID in fitIDs:
if self.containsFitID(fitID):
continue
fit = sFit.getFit(fitID)
if fit is not None:
self.appendItem(fit)
self.updateView()
self.graphFrame.draw()
class SourceWrapperList(BaseWrapperList):
DEFAULT_COLS = ("Graph Color", "Base Icon", "Base Name")
def __init__(self, graphFrame, parent):
super().__init__(graphFrame, parent)
self.Bind(wx.EVT_CONTEXT_MENU, self.spawnMenu)
fit = Fit.getInstance().getFit(self.graphFrame.mainFrame.getActiveFit())
if fit is not None:
self.appendItem(fit)
self.updateView()
def appendItem(self, item):
        # Find the least-used color
colorUseMap = {c: 0 for c in BASE_COLORS}
for wrapper in self._wrappers:
if wrapper.colorID not in colorUseMap:
continue
colorUseMap[wrapper.colorID] += 1
def getDefaultParams():
leastUses = min(colorUseMap.values(), default=0)
for colorID in BASE_COLORS:
if leastUses == colorUseMap.get(colorID, 0):
return colorID
return None
colorID = getDefaultParams()
self._wrappers.append(SourceWrapper(item=item, colorID=colorID))
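The color-assignment logic above can be factored into a small standalone function: tally how often each palette entry is in use, then return the first entry tied for the minimum, so palette order breaks ties deterministically. This `pick_least_used` helper is a hypothetical extraction, not pyfa's API:

```python
def pick_least_used(palette, used):
    """Return the first palette entry with the fewest uses, or None
    if the palette is empty."""
    counts = {c: 0 for c in palette}
    for u in used:
        if u in counts:  # ignore stale IDs not in the current palette
            counts[u] += 1
    least = min(counts.values(), default=0)
    for c in palette:
        if counts[c] == least:
            return c
    return None
```

`TargetWrapperList.appendItem` below applies the same policy to (lightness, line style) pairs instead of single colors.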
def spawnMenu(self, event):
clickedPos = self.getRowByAbs(event.Position)
self.ensureSelection(clickedPos)
selection = self.getSelectedWrappers()
mainItem = self.getWrapper(clickedPos)
itemContext = None if mainItem is None else _t("Fit")
menu = ContextMenu.getMenu(
self,
mainItem,
selection,
("graphFitList", itemContext),
("graphFitListMisc", itemContext),
)
if menu:
self.PopupMenu(menu)
@property
def defaultTTText(self):
return _t("Drag a fit into this list to graph it")
class TargetWrapperList(BaseWrapperList):
DEFAULT_COLS = ("Graph Lightness", "Graph Line Style", "Base Icon", "Base Name")
def __init__(self, graphFrame, parent):
super().__init__(graphFrame, parent)
self.Bind(wx.EVT_CONTEXT_MENU, self.spawnMenu)
self.appendItem(TargetProfile.getIdeal())
self.updateView()
def appendItem(self, item):
        # Find the least-used lightness/line-style combination
lightnessUseMap = {(l, s): 0 for l in LIGHTNESSES for s in STYLES}
for wrapper in self._wrappers:
key = (wrapper.lightnessID, wrapper.lineStyleID)
if key not in lightnessUseMap:
continue
lightnessUseMap[key] += 1
def getDefaultParams():
leastUses = min(lightnessUseMap.values(), default=0)
for lineStyleID in STYLES:
for lightnessID in LIGHTNESSES:
if leastUses == lightnessUseMap.get((lightnessID, lineStyleID), 0):
return lightnessID, lineStyleID
return None, None
lightnessID, lineStyleID = getDefaultParams()
self._wrappers.append(
TargetWrapper(item=item, lightnessID=lightnessID, lineStyleID=lineStyleID)
)
def spawnMenu(self, event):
clickedPos = self.getRowByAbs(event.Position)
self.ensureSelection(clickedPos)
selection = self.getSelectedWrappers()
mainItem = self.getWrapper(clickedPos)
itemContext = None if mainItem is None else _t("Target")
menu = ContextMenu.getMenu(
self,
mainItem,
selection,
("graphTgtList", itemContext),
("graphTgtListMisc", itemContext),
)
if menu:
self.PopupMenu(menu)
def OnResistModeChanged(self, event):
if set(event.fitIDs).intersection(w.item.ID for w in self._wrappers if w.isFit):
self.updateView()
@property
def defaultTTText(self):
return _t("Drag a fit into this list to have your fits graphed against it")
# Context menu handlers
def addProfile(self, profile):
if profile is None:
return
if self.containsProfileID(profile.ID):
return
self.appendItem(profile)
self.updateView()
self.graphFrame.draw()
|
blocks | qa_interleave | #!/usr/bin/env python
#
# Copyright 2004,2007,2010,2012-2014 Free Software Foundation, Inc.
#
# This file is part of GNU Radio
#
# SPDX-License-Identifier: GPL-3.0-or-later
#
#
from gnuradio import blocks, gr, gr_unittest
class test_interleave(gr_unittest.TestCase):
def setUp(self):
self.tb = gr.top_block()
def tearDown(self):
self.tb = None
def test_int_001(self):
lenx = 64
src0 = blocks.vector_source_f(list(range(0, lenx, 4)))
src1 = blocks.vector_source_f(list(range(1, lenx, 4)))
src2 = blocks.vector_source_f(list(range(2, lenx, 4)))
src3 = blocks.vector_source_f(list(range(3, lenx, 4)))
op = blocks.interleave(gr.sizeof_float)
dst = blocks.vector_sink_f()
self.tb.connect(src0, (op, 0))
self.tb.connect(src1, (op, 1))
self.tb.connect(src2, (op, 2))
self.tb.connect(src3, (op, 3))
self.tb.connect(op, dst)
self.tb.run()
expected_result = tuple(range(lenx))
result_data = dst.data()
self.assertFloatTuplesAlmostEqual(expected_result, result_data)
def test_int_002(self):
blksize = 4
lenx = 64
def plusup_big(a):
return a + (blksize * 4)
def plusup_little(a):
return a + blksize
a_vec = list(range(0, blksize))
for i in range(0, (lenx // (4 * blksize)) - 1):
a_vec += list(map(plusup_big, a_vec[len(a_vec) - blksize :]))
b_vec = list(map(plusup_little, a_vec))
c_vec = list(map(plusup_little, b_vec))
d_vec = list(map(plusup_little, c_vec))
src0 = blocks.vector_source_f(a_vec)
src1 = blocks.vector_source_f(b_vec)
src2 = blocks.vector_source_f(c_vec)
src3 = blocks.vector_source_f(d_vec)
op = blocks.interleave(gr.sizeof_float, blksize)
dst = blocks.vector_sink_f()
self.tb.connect(src0, (op, 0))
self.tb.connect(src1, (op, 1))
self.tb.connect(src2, (op, 2))
self.tb.connect(src3, (op, 3))
self.tb.connect(op, dst)
self.tb.run()
expected_result = tuple(range(lenx))
result_data = dst.data()
self.assertFloatTuplesAlmostEqual(expected_result, result_data)
def test_deint_001(self):
lenx = 64
src = blocks.vector_source_f(list(range(lenx)))
op = blocks.deinterleave(gr.sizeof_float)
dst0 = blocks.vector_sink_f()
dst1 = blocks.vector_sink_f()
dst2 = blocks.vector_sink_f()
dst3 = blocks.vector_sink_f()
self.tb.connect(src, op)
self.tb.connect((op, 0), dst0)
self.tb.connect((op, 1), dst1)
self.tb.connect((op, 2), dst2)
self.tb.connect((op, 3), dst3)
self.tb.run()
expected_result0 = tuple(range(0, lenx, 4))
expected_result1 = tuple(range(1, lenx, 4))
expected_result2 = tuple(range(2, lenx, 4))
expected_result3 = tuple(range(3, lenx, 4))
self.assertFloatTuplesAlmostEqual(expected_result0, dst0.data())
self.assertFloatTuplesAlmostEqual(expected_result1, dst1.data())
self.assertFloatTuplesAlmostEqual(expected_result2, dst2.data())
self.assertFloatTuplesAlmostEqual(expected_result3, dst3.data())
def test_deint_002(self):
blksize = 4
lenx = 64
src = blocks.vector_source_f(list(range(lenx)))
op = blocks.deinterleave(gr.sizeof_float, blksize)
dst0 = blocks.vector_sink_f()
dst1 = blocks.vector_sink_f()
dst2 = blocks.vector_sink_f()
dst3 = blocks.vector_sink_f()
self.tb.connect(src, op)
self.tb.connect((op, 0), dst0)
self.tb.connect((op, 1), dst1)
self.tb.connect((op, 2), dst2)
self.tb.connect((op, 3), dst3)
self.tb.run()
def plusup_big(a):
return a + (blksize * 4)
def plusup_little(a):
return a + blksize
a_vec = list(range(0, blksize))
for i in range(0, (lenx // (4 * blksize)) - 1):
a_vec += list(map(plusup_big, a_vec[len(a_vec) - blksize :]))
b_vec = list(map(plusup_little, a_vec))
c_vec = list(map(plusup_little, b_vec))
d_vec = list(map(plusup_little, c_vec))
expected_result0 = tuple(a_vec)
expected_result1 = tuple(b_vec)
expected_result2 = tuple(c_vec)
expected_result3 = tuple(d_vec)
self.assertFloatTuplesAlmostEqual(expected_result0, dst0.data())
self.assertFloatTuplesAlmostEqual(expected_result1, dst1.data())
self.assertFloatTuplesAlmostEqual(expected_result2, dst2.data())
self.assertFloatTuplesAlmostEqual(expected_result3, dst3.data())
if __name__ == "__main__":
gr_unittest.run(test_interleave)
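The expected results in these tests follow directly from what the blocks do: `interleave` takes one block of `blksize` items from each input in turn, and `deinterleave` deals blocks back out round-robin. A pure-Python model of both (a sketch for checking the test vectors, not GNU Radio's implementation):

```python
def interleave(streams, blksize=1):
    """Take blksize items from each stream in turn."""
    out = []
    for i in range(0, len(streams[0]), blksize):
        for s in streams:
            out.extend(s[i:i + blksize])
    return out

def deinterleave(data, nstreams, blksize=1):
    """Deal consecutive blksize-sized chunks to the outputs round-robin."""
    outs = [[] for _ in range(nstreams)]
    for j, start in enumerate(range(0, len(data), blksize)):
        outs[j % nstreams].extend(data[start:start + blksize])
    return outs
```

With `blksize=1` and inputs `range(k, 64, 4)` this reproduces `test_int_001`'s expectation of `range(64)`, and with `blksize=4` the deinterleaved stream 0 matches the `a_vec` construction in `test_deint_002`.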
|
models | ssd_inception_v2_feature_extractor | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""SSDFeatureExtractor for InceptionV2 features."""
import tensorflow as tf
from app.object_detection.meta_architectures import ssd_meta_arch
from app.object_detection.models import feature_map_generators
from app.object_detection.utils import ops
from nets import inception_v2
slim = tf.contrib.slim
class SSDInceptionV2FeatureExtractor(ssd_meta_arch.SSDFeatureExtractor):
"""SSD Feature Extractor using InceptionV2 features."""
def __init__(
self,
is_training,
depth_multiplier,
min_depth,
pad_to_multiple,
conv_hyperparams,
batch_norm_trainable=True,
reuse_weights=None,
):
"""InceptionV2 Feature Extractor for SSD Models.
Args:
is_training: whether the network is in training mode.
depth_multiplier: float depth multiplier for feature extractor.
min_depth: minimum feature extractor depth.
pad_to_multiple: the nearest multiple to zero pad the input height and
width dimensions to.
conv_hyperparams: tf slim arg_scope for conv2d and separable_conv2d ops.
batch_norm_trainable: Whether to update batch norm parameters during
training or not. When training with a small batch size
(e.g. 1), it is desirable to disable batch norm update and use
pretrained batch norm params.
reuse_weights: Whether to reuse variables. Default is None.
"""
super(SSDInceptionV2FeatureExtractor, self).__init__(
is_training,
depth_multiplier,
min_depth,
pad_to_multiple,
conv_hyperparams,
batch_norm_trainable,
reuse_weights,
)
def preprocess(self, resized_inputs):
"""SSD preprocessing.
Maps pixel values to the range [-1, 1].
Args:
resized_inputs: a [batch, height, width, channels] float tensor
representing a batch of images.
Returns:
preprocessed_inputs: a [batch, height, width, channels] float tensor
representing a batch of images.
"""
return (2.0 / 255.0) * resized_inputs - 1.0
def extract_features(self, preprocessed_inputs):
"""Extract features from preprocessed inputs.
Args:
preprocessed_inputs: a [batch, height, width, channels] float tensor
representing a batch of images.
Returns:
feature_maps: a list of tensors where the ith tensor has shape
[batch, height_i, width_i, depth_i]
"""
preprocessed_inputs.get_shape().assert_has_rank(4)
shape_assert = tf.Assert(
tf.logical_and(
tf.greater_equal(tf.shape(preprocessed_inputs)[1], 33),
tf.greater_equal(tf.shape(preprocessed_inputs)[2], 33),
),
["image size must at least be 33 in both height and width."],
)
feature_map_layout = {
"from_layer": ["Mixed_4c", "Mixed_5c", "", "", "", ""],
"layer_depth": [-1, -1, 512, 256, 256, 128],
}
with tf.control_dependencies([shape_assert]):
with slim.arg_scope(self._conv_hyperparams):
with tf.variable_scope(
"InceptionV2", reuse=self._reuse_weights
) as scope:
_, image_features = inception_v2.inception_v2_base(
ops.pad_to_multiple(preprocessed_inputs, self._pad_to_multiple),
final_endpoint="Mixed_5c",
min_depth=self._min_depth,
depth_multiplier=self._depth_multiplier,
scope=scope,
)
feature_maps = feature_map_generators.multi_resolution_feature_maps(
feature_map_layout=feature_map_layout,
depth_multiplier=self._depth_multiplier,
min_depth=self._min_depth,
insert_1x1_conv=True,
image_features=image_features,
)
return feature_maps.values()
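The `preprocess` method's normalization is a plain affine map from pixel range [0, 255] to [-1, 1]. A TensorFlow-free stand-in that makes the endpoints easy to check:

```python
def preprocess(pixels):
    """Map pixel values from [0, 255] to [-1, 1], mirroring the
    extractor's preprocess() (pure-Python sketch, elementwise)."""
    return [(2.0 / 255.0) * p - 1.0 for p in pixels]
```

0 maps to -1, 255 maps to 1, and mid-gray (127.5) maps to approximately 0, up to floating-point rounding.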
|
digital | qa_lfsr | #!/usr/bin/env python
#
# Copyright 2012 Free Software Foundation, Inc.
#
# This file is part of GNU Radio
#
# SPDX-License-Identifier: GPL-3.0-or-later
#
#
import math
import numpy as np
from gnuradio import digital, gr, gr_unittest
from gnuradio.digital.utils import lfsr_args
class test_lfsr(gr_unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
def test_lfsr_001(self):
reglen = 8
l = digital.lfsr(1, 1, reglen)
result_data = []
for i in range(4 * (reglen + 1)):
result_data.append(l.next_bit())
        expected_result = 4 * ([1] + reglen * [0])
self.assertFloatTuplesAlmostEqual(expected_result, result_data, 5)
def test_lfsr_002(self):
l = digital.lfsr(*lfsr_args(0b1, 5, 3, 0))
result_data = [l.next_bit() for _ in range(2 * (2**5 - 1))]
        expected_result = [
            1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0,
            0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0,
        ] * 2
self.assertEqual(expected_result, result_data)
seq1 = [l.next_bit() for _ in range(2**5 - 1)]
seq2 = [l.next_bit() for _ in range(2**5 - 1)]
self.assertEqual(seq1, seq2)
res = np.convolve(seq1, [1, 0, 1, 0, 0, 1]) % 2
self.assertEqual(sum(res[5:-5]), 0, msg="LRS not generated properly")
if __name__ == "__main__":
gr_unittest.run(test_lfsr)
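The periodicity checks above (`seq1 == seq2` over 2**5 - 1 bits) rely on the defining property of a maximal-length LFSR: with a primitive feedback polynomial of degree n, any nonzero seed produces a sequence of period 2**n - 1. A small Fibonacci LFSR sketch illustrating this; note its tap convention is its own and differs from `digital.lfsr`/`lfsr_args`:

```python
def lfsr_stream(n, seed=1):
    """Emit n bits from a 5-bit Fibonacci LFSR with feedback polynomial
    x^5 + x^2 + 1 (primitive), giving a maximal period of 2**5 - 1 = 31."""
    state = seed
    bits = []
    for _ in range(n):
        bits.append(state & 1)                # output the low bit
        newbit = (state ^ (state >> 2)) & 1   # XOR of taps at bits 0 and 2
        state = (state >> 1) | (newbit << 4)  # shift right, feed back on top
    return bits
```

A maximal-length sequence also has the balance property the convolution check exploits: each 31-bit period contains exactly 16 ones and 15 zeros.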
|
Bigfile | BigfilePlugin | import base64
import binascii
import collections
import json
import math
import os
import shutil
import subprocess
import time
import warnings
import gevent
import gevent.lock
from Crypt import CryptHash
from Debug import Debug
from Plugin import PluginManager
with warnings.catch_warnings():
warnings.filterwarnings("ignore") # Ignore missing sha3 warning
import merkletools
import util
from util import Msgpack, helper
from util.Flag import flag
from .BigfilePiecefield import BigfilePiecefield, BigfilePiecefieldPacked
# We can only import plugin host classes after the plugins are loaded
@PluginManager.afterLoad
def importPluginnedClasses():
global VerifyError, config
from Config import config
from Content.ContentManager import VerifyError
if "upload_nonces" not in locals():
upload_nonces = {}
@PluginManager.registerTo("UiRequest")
class UiRequestPlugin(object):
def isCorsAllowed(self, path):
if path == "/ZeroNet-Internal/BigfileUpload":
return True
else:
return super(UiRequestPlugin, self).isCorsAllowed(path)
@helper.encodeResponse
def actionBigfileUpload(self):
nonce = self.get.get("upload_nonce")
if nonce not in upload_nonces:
return self.error403("Upload nonce error.")
upload_info = upload_nonces[nonce]
del upload_nonces[nonce]
self.sendHeader(
200,
"text/html",
noscript=True,
extra_headers={
"Access-Control-Allow-Origin": "null",
"Access-Control-Allow-Credentials": "true",
},
)
self.readMultipartHeaders(self.env["wsgi.input"]) # Skip http headers
result = self.handleBigfileUpload(upload_info, self.env["wsgi.input"].read)
return json.dumps(result)
def actionBigfileUploadWebsocket(self):
ws = self.env.get("wsgi.websocket")
if not ws:
self.start_response("400 Bad Request", [])
return [b"Not a websocket request!"]
nonce = self.get.get("upload_nonce")
if nonce not in upload_nonces:
return self.error403("Upload nonce error.")
upload_info = upload_nonces[nonce]
del upload_nonces[nonce]
ws.send("poll")
buffer = b""
def read(size):
nonlocal buffer
while len(buffer) < size:
buffer += ws.receive()
ws.send("poll")
part, buffer = buffer[:size], buffer[size:]
return part
result = self.handleBigfileUpload(upload_info, read)
ws.send(json.dumps(result))
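The `read` closure above implements a simple buffer-and-slice reader: keep appending incoming websocket frames to a buffer until at least `size` bytes are available, then hand back exactly `size` bytes and keep the remainder. A self-contained sketch of the same pattern, with an iterator of chunks standing in for `ws.receive()`:

```python
def make_reader(chunks):
    """Return a read(size) function that serves exact-size slices from
    variable-size incoming chunks (a stand-in for websocket frames)."""
    it = iter(chunks)
    buffer = b""

    def read(size):
        nonlocal buffer
        while len(buffer) < size:
            buffer += next(it)  # block until another chunk arrives
        part, buffer = buffer[:size], buffer[size:]
        return part

    return read
```

This decouples the producer's framing (arbitrary chunk sizes) from the consumer's fixed-size reads, which is what lets `handleBigfileUpload` hash the upload piece by piece.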
def handleBigfileUpload(self, upload_info, read):
site = upload_info["site"]
inner_path = upload_info["inner_path"]
with site.storage.open(inner_path, "wb", create_dirs=True) as out_file:
merkle_root, piece_size, piecemap_info = site.content_manager.hashBigfile(
read, upload_info["size"], upload_info["piece_size"], out_file
)
if len(piecemap_info["sha512_pieces"]) == 1: # Small file, don't split
hash = binascii.hexlify(piecemap_info["sha512_pieces"][0])
hash_id = site.content_manager.hashfield.getHashId(hash)
site.content_manager.optionalDownloaded(
inner_path, hash_id, upload_info["size"], own=True
)
else: # Big file
file_name = helper.getFilename(inner_path)
site.storage.open(upload_info["piecemap"], "wb").write(
Msgpack.pack({file_name: piecemap_info})
)
# Find piecemap and file relative path to content.json
file_info = site.content_manager.getFileInfo(inner_path, new_file=True)
content_inner_path_dir = helper.getDirname(file_info["content_inner_path"])
piecemap_relative_path = upload_info["piecemap"][
len(content_inner_path_dir) :
]
file_relative_path = inner_path[len(content_inner_path_dir) :]
# Add file to content.json
if site.storage.isFile(file_info["content_inner_path"]):
content = site.storage.loadJson(file_info["content_inner_path"])
else:
content = {}
if "files_optional" not in content:
content["files_optional"] = {}
content["files_optional"][file_relative_path] = {
"sha512": merkle_root,
"size": upload_info["size"],
"piecemap": piecemap_relative_path,
"piece_size": piece_size,
}
merkle_root_hash_id = site.content_manager.hashfield.getHashId(merkle_root)
site.content_manager.optionalDownloaded(
inner_path, merkle_root_hash_id, upload_info["size"], own=True
)
site.storage.writeJson(file_info["content_inner_path"], content)
site.content_manager.contents.loadItem(
file_info["content_inner_path"]
) # reload cache
return {
"merkle_root": merkle_root,
"piece_num": len(piecemap_info["sha512_pieces"]),
"piece_size": piece_size,
"inner_path": inner_path,
}
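`hashBigfile` returns a Merkle root computed over per-piece SHA-512 digests (via the `merkletools` package). The hashlib-only sketch below shows the general pairwise-reduction idea behind a Merkle root; it is not ZeroNet's exact scheme, whose leaf handling and hash details differ:

```python
import hashlib

def merkle_root(piece_hashes):
    """Reduce a list of piece digests to a single root by repeatedly
    hashing adjacent pairs; an odd leaf is paired with itself."""
    level = list(piece_hashes)
    if not level:
        return b""
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last leaf on odd levels
        level = [
            hashlib.sha512(a + b).digest()
            for a, b in zip(level[::2], level[1::2])
        ]
    return level[0]
```

The key property is that the root commits to every piece: changing any piece digest changes the root, yet individual pieces can still be verified and downloaded independently.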
def readMultipartHeaders(self, wsgi_input):
found = False
for i in range(100):
line = wsgi_input.readline()
if line == b"\r\n":
found = True
break
if not found:
raise Exception("No multipart header found")
return i
def actionFile(self, file_path, *args, **kwargs):
if kwargs.get("file_size", 0) > 1024 * 1024 and kwargs.get(
"path_parts"
): # Only check files larger than 1MB
path_parts = kwargs["path_parts"]
site = self.server.site_manager.get(path_parts["address"])
big_file = site.storage.openBigfile(
path_parts["inner_path"], prebuffer=2 * 1024 * 1024
)
if big_file:
kwargs["file_obj"] = big_file
kwargs["file_size"] = big_file.size
return super(UiRequestPlugin, self).actionFile(file_path, *args, **kwargs)
@PluginManager.registerTo("UiWebsocket")
class UiWebsocketPlugin(object):
def actionBigfileUploadInit(self, to, inner_path, size, protocol="xhr"):
valid_signers = self.site.content_manager.getValidSigners(inner_path)
auth_address = self.user.getAuthAddress(self.site.address)
if not self.site.settings["own"] and auth_address not in valid_signers:
self.log.error(
"FileWrite forbidden %s not in valid_signers %s"
% (auth_address, valid_signers)
)
return self.response(
to, {"error": "Forbidden, you can only modify your own files"}
)
nonce = CryptHash.random()
piece_size = 1024 * 1024
inner_path = self.site.content_manager.sanitizePath(inner_path)
file_info = self.site.content_manager.getFileInfo(inner_path, new_file=True)
content_inner_path_dir = helper.getDirname(file_info["content_inner_path"])
file_relative_path = inner_path[len(content_inner_path_dir) :]
upload_nonces[nonce] = {
"added": time.time(),
"site": self.site,
"inner_path": inner_path,
"websocket_client": self,
"size": size,
"piece_size": piece_size,
"piecemap": inner_path + ".piecemap.msgpack",
}
if protocol == "xhr":
return {
"url": "/ZeroNet-Internal/BigfileUpload?upload_nonce=" + nonce,
"piece_size": piece_size,
"inner_path": inner_path,
"file_relative_path": file_relative_path,
}
elif protocol == "websocket":
server_url = self.request.getWsServerUrl()
if server_url:
proto, host = server_url.split("://")
origin = proto.replace("http", "ws") + "://" + host
else:
origin = "{origin}"
return {
"url": origin
+ "/ZeroNet-Internal/BigfileUploadWebsocket?upload_nonce="
+ nonce,
"piece_size": piece_size,
"inner_path": inner_path,
"file_relative_path": file_relative_path,
}
else:
return {"error": "Unknown protocol"}
@flag.no_multiuser
def actionSiteSetAutodownloadBigfileLimit(self, to, limit):
permissions = self.getPermissions(to)
if "ADMIN" not in permissions:
return self.response(to, "You don't have permission to run this command")
self.site.settings["autodownload_bigfile_size_limit"] = int(limit)
self.response(to, "ok")
def actionFileDelete(self, to, inner_path):
piecemap_inner_path = inner_path + ".piecemap.msgpack"
if self.hasFilePermission(inner_path) and self.site.storage.isFile(
piecemap_inner_path
):
# Also delete .piecemap.msgpack file if exists
self.log.debug("Deleting piecemap: %s" % piecemap_inner_path)
file_info = self.site.content_manager.getFileInfo(piecemap_inner_path)
if file_info:
content_json = self.site.storage.loadJson(
file_info["content_inner_path"]
)
relative_path = file_info["relative_path"]
if relative_path in content_json.get("files_optional", {}):
del content_json["files_optional"][relative_path]
self.site.storage.writeJson(
file_info["content_inner_path"], content_json
)
self.site.content_manager.loadContent(
file_info["content_inner_path"], add_bad_files=False, force=True
)
try:
self.site.storage.delete(piecemap_inner_path)
except Exception as err:
self.log.error(
"File %s delete error: %s" % (piecemap_inner_path, err)
)
return super(UiWebsocketPlugin, self).actionFileDelete(to, inner_path)
@PluginManager.registerTo("ContentManager")
class ContentManagerPlugin(object):
def getFileInfo(self, inner_path, *args, **kwargs):
if "|" not in inner_path:
return super(ContentManagerPlugin, self).getFileInfo(
inner_path, *args, **kwargs
)
inner_path, file_range = inner_path.split("|")
pos_from, pos_to = map(int, file_range.split("-"))
file_info = super(ContentManagerPlugin, self).getFileInfo(
inner_path, *args, **kwargs
)
return file_info
def readFile(self, read_func, size, buff_size=1024 * 64):
part_num = 0
recv_left = size
while 1:
part_num += 1
read_size = min(buff_size, recv_left)
part = read_func(read_size)
if not part:
break
yield part
if part_num % 100 == 0: # Avoid blocking ZeroNet execution during upload
time.sleep(0.001)
recv_left -= read_size
if recv_left <= 0:
break
def hashBigfile(self, read_func, size, piece_size=1024 * 1024, file_out=None):
self.site.settings["has_bigfile"] = True
recv = 0
try:
piece_hash = CryptHash.sha512t()
piece_hashes = []
piece_recv = 0
mt = merkletools.MerkleTools()
mt.hash_function = CryptHash.sha512t
part = ""
for part in self.readFile(read_func, size):
if file_out:
file_out.write(part)
recv += len(part)
piece_recv += len(part)
piece_hash.update(part)
if piece_recv >= piece_size:
piece_digest = piece_hash.digest()
piece_hashes.append(piece_digest)
mt.leaves.append(piece_digest)
piece_hash = CryptHash.sha512t()
piece_recv = 0
if len(piece_hashes) % 100 == 0 or recv == size:
self.log.info(
"- [HASHING:%.0f%%] Pieces: %s, %.1fMB/%.1fMB"
% (
float(recv) / size * 100,
len(piece_hashes),
recv / 1024 / 1024,
size / 1024 / 1024,
)
)
part = ""
if len(part) > 0:
piece_digest = piece_hash.digest()
piece_hashes.append(piece_digest)
mt.leaves.append(piece_digest)
except Exception:
raise  # Re-raise unchanged; file_out is still closed in the finally block
finally:
if file_out:
file_out.close()
mt.make_tree()
merkle_root = mt.get_merkle_root()
if type(merkle_root) is bytes: # Python <3.5
merkle_root = merkle_root.decode()
return merkle_root, piece_size, {"sha512_pieces": piece_hashes}
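`hashBigfile` splits the stream into fixed-size pieces, hashes each with truncated SHA-512, and reduces the piece digests to a merkle root. A minimal sketch of the same idea using only `hashlib` — here `sha512t` is assumed to be SHA-512 truncated to 256 bits (matching ZeroNet's `CryptHash`), and the pairing rule (promote an odd leftover leaf) is a simplification that may differ from what `merkletools` actually does:

```python
import hashlib
import io

def sha512t(data=b""):
    # Assumption: SHA-512 truncated to 32 bytes, like CryptHash.sha512t
    return hashlib.sha512(data).digest()[:32]

def hash_pieces(stream_read, size, piece_size=1024 * 1024):
    """Hash a stream in fixed-size pieces; return (piece_digests, merkle_root_hex)."""
    pieces = []
    left = size
    while left > 0:
        chunk = stream_read(min(piece_size, left))
        if not chunk:
            break
        pieces.append(sha512t(chunk))
        left -= len(chunk)
    # Simplified merkle reduction: hash adjacent pairs, promote an odd leftover
    level = pieces[:]
    while len(level) > 1:
        nxt = [sha512t(level[i] + level[i + 1]) for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    root = level[0] if level else sha512t()
    return pieces, root.hex()

data = b"x" * (3 * 1024 * 1024 + 5)  # 3 full 1MB pieces + a 5-byte tail
pieces, root = hash_pieces(io.BytesIO(data).read, len(data))
```

The per-piece digests become the `sha512_pieces` list in the piecemap, while the root stands in for the whole file's hash in `content.json`.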
def hashFile(self, dir_inner_path, file_relative_path, optional=False):
inner_path = dir_inner_path + file_relative_path
file_size = self.site.storage.getSize(inner_path)
# Only care about optional files >1MB
if not optional or file_size < 1 * 1024 * 1024:
return super(ContentManagerPlugin, self).hashFile(
dir_inner_path, file_relative_path, optional
)
back = {}
content = self.contents.get(dir_inner_path + "content.json")
hash = None
piecemap_relative_path = None
piece_size = None
# Don't re-hash if it's already in content.json
if content and file_relative_path in content.get("files_optional", {}):
file_node = content["files_optional"][file_relative_path]
if file_node["size"] == file_size:
self.log.info("- [SAME SIZE] %s" % file_relative_path)
hash = file_node.get("sha512")
piecemap_relative_path = file_node.get("piecemap")
piece_size = file_node.get("piece_size")
if not hash or not piecemap_relative_path: # Not in content.json yet
if (
file_size < 5 * 1024 * 1024
): # Don't create piecemap automatically for files smaller than 5MB
return super(ContentManagerPlugin, self).hashFile(
dir_inner_path, file_relative_path, optional
)
self.log.info("- [HASHING] %s" % file_relative_path)
merkle_root, piece_size, piecemap_info = self.hashBigfile(
self.site.storage.open(inner_path, "rb").read, file_size
)
if not hash:
hash = merkle_root
if not piecemap_relative_path:
file_name = helper.getFilename(file_relative_path)
piecemap_relative_path = file_relative_path + ".piecemap.msgpack"
piecemap_inner_path = inner_path + ".piecemap.msgpack"
self.site.storage.open(piecemap_inner_path, "wb").write(
Msgpack.pack({file_name: piecemap_info})
)
back.update(
super(ContentManagerPlugin, self).hashFile(
dir_inner_path, piecemap_relative_path, optional=True
)
)
piece_num = int(math.ceil(float(file_size) / piece_size))
# Add the merkle root to hashfield
hash_id = self.site.content_manager.hashfield.getHashId(hash)
self.optionalDownloaded(inner_path, hash_id, file_size, own=True)
self.site.storage.piecefields[hash].frombytes(b"\x01" * piece_num)
back[file_relative_path] = {
"sha512": hash,
"size": file_size,
"piecemap": piecemap_relative_path,
"piece_size": piece_size,
}
return back
def getPiecemap(self, inner_path):
file_info = self.site.content_manager.getFileInfo(inner_path)
piecemap_inner_path = (
helper.getDirname(file_info["content_inner_path"]) + file_info["piecemap"]
)
self.site.needFile(piecemap_inner_path, priority=20)
piecemap = Msgpack.unpack(
self.site.storage.open(piecemap_inner_path, "rb").read()
)[helper.getFilename(inner_path)]
piecemap["piece_size"] = file_info["piece_size"]
return piecemap
def verifyPiece(self, inner_path, pos, piece):
try:
piecemap = self.getPiecemap(inner_path)
except Exception as err:
raise VerifyError(
"Unable to download piecemap: %s" % Debug.formatException(err)
)
piece_i = int(pos / piecemap["piece_size"])
if (
CryptHash.sha512sum(piece, format="digest")
!= piecemap["sha512_pieces"][piece_i]
):
raise VerifyError("Invalid hash")
return True
def verifyFile(self, inner_path, file, ignore_same=True):
if "|" not in inner_path:
return super(ContentManagerPlugin, self).verifyFile(
inner_path, file, ignore_same
)
inner_path, file_range = inner_path.split("|")
pos_from, pos_to = map(int, file_range.split("-"))
return self.verifyPiece(inner_path, pos_from, file)
def optionalDownloaded(self, inner_path, hash_id, size=None, own=False):
if "|" in inner_path:
inner_path, file_range = inner_path.split("|")
pos_from, pos_to = map(int, file_range.split("-"))
file_info = self.getFileInfo(inner_path)
# Mark piece downloaded
piece_i = int(pos_from / file_info["piece_size"])
self.site.storage.piecefields[file_info["sha512"]][piece_i] = b"\x01"
# Only add to site size on first request
if hash_id in self.hashfield:
size = 0
elif size > 1024 * 1024:
file_info = self.getFileInfo(inner_path)
if (
file_info and "sha512" in file_info
): # We already have the file, but not in piecefield
sha512 = file_info["sha512"]
if sha512 not in self.site.storage.piecefields:
self.site.storage.checkBigfile(inner_path)
return super(ContentManagerPlugin, self).optionalDownloaded(
inner_path, hash_id, size, own
)
def optionalRemoved(self, inner_path, hash_id, size=None):
if size and size > 1024 * 1024:
file_info = self.getFileInfo(inner_path)
sha512 = file_info["sha512"]
if sha512 in self.site.storage.piecefields:
del self.site.storage.piecefields[sha512]
# Also remove other pieces of the file from download queue
for key in list(self.site.bad_files.keys()):
if key.startswith(inner_path + "|"):
del self.site.bad_files[key]
self.site.worker_manager.removeSolvedFileTasks()
return super(ContentManagerPlugin, self).optionalRemoved(
inner_path, hash_id, size
)
@PluginManager.registerTo("SiteStorage")
class SiteStoragePlugin(object):
def __init__(self, *args, **kwargs):
super(SiteStoragePlugin, self).__init__(*args, **kwargs)
self.piecefields = collections.defaultdict(BigfilePiecefield)
if "piecefields" in self.site.settings.get("cache", {}):
for sha512, piecefield_packed in (
self.site.settings["cache"].get("piecefields").items()
):
if piecefield_packed:
self.piecefields[sha512].unpack(base64.b64decode(piecefield_packed))
self.site.settings["cache"]["piecefields"] = {}
def createSparseFile(self, inner_path, size, sha512=None):
file_path = self.getPath(inner_path)
self.ensureDir(os.path.dirname(inner_path))
f = open(file_path, "wb")
f.truncate(min(1024 * 1024 * 5, size)) # Only pre-allocate up to 5MB
f.close()
if os.name == "nt":
startupinfo = subprocess.STARTUPINFO()
startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
subprocess.call(
["fsutil", "sparse", "setflag", file_path],
close_fds=True,
startupinfo=startupinfo,
)
if sha512 and sha512 in self.piecefields:
self.log.debug(
"%s: File not exists, but has piecefield. Deleting piecefield."
% inner_path
)
del self.piecefields[sha512]
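`createSparseFile` reserves a file's logical length without writing its content: `truncate()` past the current end produces a hole that most filesystems store without allocating blocks (on Windows the plugin additionally sets the NTFS sparse attribute via `fsutil`, which is skipped here). A small sketch mirroring the 5MB pre-allocation cap:

```python
import os
import tempfile

def create_placeholder(path, size, cap=5 * 1024 * 1024):
    """Create an (at most 5MB-long) placeholder for a big file.

    truncate() extends the file without writing data; later piece writes
    (open "rb+", seek, write) grow the file toward its full logical size.
    """
    with open(path, "wb") as f:
        f.truncate(min(cap, size))
    return os.path.getsize(path)

tmp = tempfile.mkdtemp()
big = create_placeholder(os.path.join(tmp, "big.dat"), 10 * 1024 * 1024)
small = create_placeholder(os.path.join(tmp, "small.dat"), 1234)
```

Capping the initial length keeps empty downloads cheap while still letting piece writes land at arbitrary offsets.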
def write(self, inner_path, content):
if "|" not in inner_path:
return super(SiteStoragePlugin, self).write(inner_path, content)
# Write to specific position by passing |{pos} after the filename
inner_path, file_range = inner_path.split("|")
pos_from, pos_to = map(int, file_range.split("-"))
file_path = self.getPath(inner_path)
# Create dir if not exist
self.ensureDir(os.path.dirname(inner_path))
if not os.path.isfile(file_path):
file_info = self.site.content_manager.getFileInfo(inner_path)
self.createSparseFile(inner_path, file_info["size"])
# Write file
with open(file_path, "rb+") as file:
file.seek(pos_from)
if hasattr(content, "read"): # File-like object
shutil.copyfileobj(content, file) # Write buff to disk
else: # Simple string
file.write(content)
del content
self.onUpdated(inner_path)
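The `"file.dat|from-to"` addressing used throughout the plugin encodes a byte range after the filename; `write` above parses it and overwrites only that span. A self-contained sketch of both halves:

```python
import os
import tempfile

def split_range(inner_path):
    """Parse the plugin's 'file.dat|from-to' piece addressing."""
    if "|" not in inner_path:
        return inner_path, None, None
    path, file_range = inner_path.split("|")
    pos_from, pos_to = map(int, file_range.split("-"))
    return path, pos_from, pos_to

def write_piece(file_path, pos_from, piece):
    # "rb+" preserves the rest of the file; only the piece's span is replaced
    with open(file_path, "rb+") as f:
        f.seek(pos_from)
        f.write(piece)

path = os.path.join(tempfile.mkdtemp(), "piece.dat")
with open(path, "wb") as f:
    f.write(b"\x00" * 16)
write_piece(path, 4, b"abcd")
with open(path, "rb") as f:
    result = f.read()
```

Opening with `"rb+"` rather than `"wb"` is the key detail: `"wb"` would truncate the sparse file and discard every piece already downloaded.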
def checkBigfile(self, inner_path):
file_info = self.site.content_manager.getFileInfo(inner_path)
if not file_info or (
file_info and "piecemap" not in file_info
): # It's not a big file
return False
self.site.settings["has_bigfile"] = True
file_path = self.getPath(inner_path)
sha512 = file_info["sha512"]
piece_num = int(math.ceil(float(file_info["size"]) / file_info["piece_size"]))
if os.path.isfile(file_path):
if sha512 not in self.piecefields:
if open(file_path, "rb").read(128) == b"\0" * 128:
piece_data = b"\x00"
else:
piece_data = b"\x01"
self.log.debug(
"%s: File exists, but not in piecefield. Filling piecefiled with %s * %s."
% (inner_path, piece_num, piece_data)
)
self.piecefields[sha512].frombytes(piece_data * piece_num)
else:
self.log.debug("Creating bigfile: %s" % inner_path)
self.createSparseFile(inner_path, file_info["size"], sha512)
self.piecefields[sha512].frombytes(b"\x00" * piece_num)
self.log.debug("Created bigfile: %s" % inner_path)
return True
def openBigfile(self, inner_path, prebuffer=0):
if not self.checkBigfile(inner_path):
return False
self.site.needFile(inner_path, blocking=False) # Download piecemap
return BigFile(self.site, inner_path, prebuffer=prebuffer)
class BigFile(object):
def __init__(self, site, inner_path, prebuffer=0):
self.site = site
self.inner_path = inner_path
file_path = site.storage.getPath(inner_path)
file_info = self.site.content_manager.getFileInfo(inner_path)
self.piece_size = file_info["piece_size"]
self.sha512 = file_info["sha512"]
self.size = file_info["size"]
self.prebuffer = prebuffer
self.read_bytes = 0
self.piecefield = self.site.storage.piecefields[self.sha512]
self.f = open(file_path, "rb+")
self.read_lock = gevent.lock.Semaphore()
def read(self, buff=64 * 1024):
with self.read_lock:
pos = self.f.tell()
read_until = min(self.size, pos + buff)
requests = []
# Request all required blocks
while 1:
piece_i = int(pos / self.piece_size)
if piece_i * self.piece_size >= read_until:
break
pos_from = piece_i * self.piece_size
pos_to = pos_from + self.piece_size
if not self.piecefield[piece_i]:
requests.append(
self.site.needFile(
"%s|%s-%s" % (self.inner_path, pos_from, pos_to),
blocking=False,
update=True,
priority=10,
)
)
pos += self.piece_size
if not all(requests):
return None
# Request prebuffer
if self.prebuffer:
prebuffer_until = min(self.size, read_until + self.prebuffer)
priority = 3
while 1:
piece_i = int(pos / self.piece_size)
if piece_i * self.piece_size >= prebuffer_until:
break
pos_from = piece_i * self.piece_size
pos_to = pos_from + self.piece_size
if not self.piecefield[piece_i]:
self.site.needFile(
"%s|%s-%s" % (self.inner_path, pos_from, pos_to),
blocking=False,
update=True,
priority=max(0, priority),
)
priority -= 1
pos += self.piece_size
gevent.joinall(requests)
self.read_bytes += buff
# Increase buffer for long reads
if self.read_bytes > 7 * 1024 * 1024 and self.prebuffer < 5 * 1024 * 1024:
self.site.log.debug(
"%s: Increasing bigfile buffer size to 5MB..." % self.inner_path
)
self.prebuffer = 5 * 1024 * 1024
return self.f.read(buff)
def seek(self, pos, whence=0):
with self.read_lock:
if whence == 2: # Relative from file end
pos = self.size + pos # Use the real size instead of size on the disk
whence = 0
return self.f.seek(pos, whence)
def seekable(self):
return self.f.seekable()
def tell(self):
return self.f.tell()
def close(self):
self.f.close()
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.close()
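`BigFile.seek` rebases end-relative seeks (`whence=2`) on the file's logical size because a sparse big file may hold fewer bytes on disk than its real length. A minimal wrapper showing just that remapping:

```python
import io

class LogicalSizeFile:
    """File wrapper that honors a known logical size (sketch of BigFile.seek).

    whence=2 seeks are rebased on the logical size instead of whatever
    length the underlying (possibly sparse, possibly short) file reports.
    """
    def __init__(self, f, size):
        self.f = f
        self.size = size

    def seek(self, pos, whence=0):
        if whence == 2:  # Relative to the (logical) end of file
            pos = self.size + pos
            whence = 0
        return self.f.seek(pos, whence)

    def tell(self):
        return self.f.tell()

f = LogicalSizeFile(io.BytesIO(b"12345"), size=100)
f.seek(-10, 2)  # 10 bytes before the *logical* end, not the 5-byte buffer's
```

Without the remap, range requests issued near the end of a partially downloaded file would compute piece offsets from the truncated on-disk length.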
@PluginManager.registerTo("WorkerManager")
class WorkerManagerPlugin(object):
def addTask(self, inner_path, *args, **kwargs):
file_info = kwargs.get("file_info")
if file_info and "piecemap" in file_info: # Bigfile
self.site.settings["has_bigfile"] = True
piecemap_inner_path = (
helper.getDirname(file_info["content_inner_path"])
+ file_info["piecemap"]
)
piecemap_task = None
if not self.site.storage.isFile(piecemap_inner_path):
# Start download piecemap
piecemap_task = super(WorkerManagerPlugin, self).addTask(
piecemap_inner_path, priority=30
)
autodownload_bigfile_size_limit = self.site.settings.get(
"autodownload_bigfile_size_limit",
config.autodownload_bigfile_size_limit,
)
if (
"|" not in inner_path
and self.site.isDownloadable(inner_path)
and file_info["size"] / 1024 / 1024
<= autodownload_bigfile_size_limit
):
gevent.spawn_later(
0.1, self.site.needFile, inner_path + "|all"
) # Download all pieces
if "|" in inner_path:
# Start download piece
task = super(WorkerManagerPlugin, self).addTask(
inner_path, *args, **kwargs
)
inner_path, file_range = inner_path.split("|")
pos_from, pos_to = map(int, file_range.split("-"))
task["piece_i"] = int(pos_from / file_info["piece_size"])
task["sha512"] = file_info["sha512"]
else:
if inner_path in self.site.bad_files:
del self.site.bad_files[inner_path]
if piecemap_task:
task = piecemap_task
else:
fake_evt = (
gevent.event.AsyncResult()
) # Don't download anything if no range specified
fake_evt.set(True)
task = {"evt": fake_evt}
if not self.site.storage.isFile(inner_path):
self.site.storage.createSparseFile(
inner_path, file_info["size"], file_info["sha512"]
)
piece_num = int(
math.ceil(float(file_info["size"]) / file_info["piece_size"])
)
self.site.storage.piecefields[file_info["sha512"]].frombytes(
b"\x00" * piece_num
)
else:
task = super(WorkerManagerPlugin, self).addTask(inner_path, *args, **kwargs)
return task
def taskAddPeer(self, task, peer):
if "piece_i" in task:
if not peer.piecefields[task["sha512"]][task["piece_i"]]:
if task["sha512"] not in peer.piecefields:
gevent.spawn(peer.updatePiecefields, force=True)
elif not task["peers"]:
gevent.spawn(peer.updatePiecefields)
return False # Deny to add peers to task if file not in piecefield
return super(WorkerManagerPlugin, self).taskAddPeer(task, peer)
@PluginManager.registerTo("FileRequest")
class FileRequestPlugin(object):
def isReadable(self, site, inner_path, file, pos):
# Peek into file
if file.read(10) == b"\0" * 10:
# Looks empty, but make sure we don't actually have that piece
file_info = site.content_manager.getFileInfo(inner_path)
if "piece_size" in file_info:
piece_i = int(pos / file_info["piece_size"])
if not site.storage.piecefields[file_info["sha512"]][piece_i]:
return False
# Seek back to position we want to read
file.seek(pos)
return super(FileRequestPlugin, self).isReadable(site, inner_path, file, pos)
def actionGetPiecefields(self, params):
site = self.sites.get(params["site"])
if not site or not site.isServing(): # Site unknown or not serving
self.response({"error": "Unknown site"})
return False
# Add peer to site if not added before
peer = site.addPeer(self.connection.ip, self.connection.port, return_peer=True)
if not peer.connection: # Just added
peer.connect(self.connection) # Assign current connection to peer
piecefields_packed = {
sha512: piecefield.pack()
for sha512, piecefield in site.storage.piecefields.items()
}
self.response({"piecefields_packed": piecefields_packed})
def actionSetPiecefields(self, params):
site = self.sites.get(params["site"])
if not site or not site.isServing(): # Site unknown or not serving
self.response({"error": "Unknown site"})
self.connection.badAction(5)
return False
# Add or get peer
peer = site.addPeer(
self.connection.ip,
self.connection.port,
return_peer=True,
connection=self.connection,
)
if not peer.connection:
peer.connect(self.connection)
peer.piecefields = collections.defaultdict(BigfilePiecefieldPacked)
for sha512, piecefield_packed in params["piecefields_packed"].items():
peer.piecefields[sha512].unpack(piecefield_packed)
site.settings["has_bigfile"] = True
self.response({"ok": "Updated"})
@PluginManager.registerTo("Peer")
class PeerPlugin(object):
def __getattr__(self, key):
if key == "piecefields":
self.piecefields = collections.defaultdict(BigfilePiecefieldPacked)
return self.piecefields
elif key == "time_piecefields_updated":
self.time_piecefields_updated = None
return self.time_piecefields_updated
else:
return super(PeerPlugin, self).__getattr__(key)
@util.Noparallel(ignore_args=True)
def updatePiecefields(self, force=False):
if self.connection and self.connection.handshake.get("rev", 0) < 2190:
return False # Not supported
# Don't update piecefield again in 1 min
if (
self.time_piecefields_updated
and time.time() - self.time_piecefields_updated < 60
and not force
):
return False
self.time_piecefields_updated = time.time()
res = self.request("getPiecefields", {"site": self.site.address})
if not res or "error" in res:
return False
self.piecefields = collections.defaultdict(BigfilePiecefieldPacked)
try:
for sha512, piecefield_packed in res["piecefields_packed"].items():
self.piecefields[sha512].unpack(piecefield_packed)
except Exception as err:
self.log(
"Invalid updatePiecefields response: %s" % Debug.formatException(err)
)
return self.piecefields
def sendMyHashfield(self, *args, **kwargs):
return super(PeerPlugin, self).sendMyHashfield(*args, **kwargs)
def updateHashfield(self, *args, **kwargs):
if self.site.settings.get("has_bigfile"):
thread = gevent.spawn(self.updatePiecefields, *args, **kwargs)
back = super(PeerPlugin, self).updateHashfield(*args, **kwargs)
thread.join()
return back
else:
return super(PeerPlugin, self).updateHashfield(*args, **kwargs)
def getFile(self, site, inner_path, *args, **kwargs):
if "|" in inner_path:
inner_path, file_range = inner_path.split("|")
pos_from, pos_to = map(int, file_range.split("-"))
kwargs["pos_from"] = pos_from
kwargs["pos_to"] = pos_to
return super(PeerPlugin, self).getFile(site, inner_path, *args, **kwargs)
@PluginManager.registerTo("Site")
class SitePlugin(object):
def isFileDownloadAllowed(self, inner_path, file_info):
if "piecemap" in file_info:
file_size_mb = file_info["size"] / 1024 / 1024
if config.bigfile_size_limit and file_size_mb > config.bigfile_size_limit:
self.log.debug(
"Bigfile size %s too large: %sMB > %sMB, skipping..."
% (inner_path, file_size_mb, config.bigfile_size_limit)
)
return False
file_info = file_info.copy()
file_info["size"] = file_info["piece_size"]
return super(SitePlugin, self).isFileDownloadAllowed(inner_path, file_info)
def getSettingsCache(self):
back = super(SitePlugin, self).getSettingsCache()
if self.storage.piecefields:
back["piecefields"] = {
sha512: base64.b64encode(piecefield.pack()).decode("utf8")
for sha512, piecefield in self.storage.piecefields.items()
}
return back
def needFile(self, inner_path, *args, **kwargs):
if inner_path.endswith("|all"):
@util.Pooled(20)
def pooledNeedBigfile(inner_path, *args, **kwargs):
if inner_path not in self.bad_files:
self.log.debug("Cancelled piece, skipping %s" % inner_path)
return False
return self.needFile(inner_path, *args, **kwargs)
inner_path = inner_path.replace("|all", "")
file_info = self.needFileInfo(inner_path)
# Use default function to download non-optional file
if "piece_size" not in file_info:
return super(SitePlugin, self).needFile(inner_path, *args, **kwargs)
file_size = file_info["size"]
piece_size = file_info["piece_size"]
piece_num = int(math.ceil(float(file_size) / piece_size))
file_threads = []
piecefield = self.storage.piecefields.get(file_info["sha512"])
for piece_i in range(piece_num):
piece_from = piece_i * piece_size
piece_to = min(file_size, piece_from + piece_size)
if not piecefield or not piecefield[piece_i]:
inner_path_piece = "%s|%s-%s" % (inner_path, piece_from, piece_to)
self.bad_files[inner_path_piece] = self.bad_files.get(
inner_path_piece, 1
)
res = pooledNeedBigfile(inner_path_piece, blocking=False)
if res is not True and res is not False:
file_threads.append(res)
gevent.joinall(file_threads)
else:
return super(SitePlugin, self).needFile(inner_path, *args, **kwargs)
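`needFile("…|all")` above expands a big file into per-piece range requests. The boundary arithmetic it uses can be isolated into a small generator:

```python
import math

def piece_ranges(file_size, piece_size):
    """Enumerate (piece_i, pos_from, pos_to) the way needFile('…|all') does:
    boundaries at multiples of piece_size, last piece clamped to file size."""
    piece_num = int(math.ceil(float(file_size) / piece_size))
    for piece_i in range(piece_num):
        pos_from = piece_i * piece_size
        pos_to = min(file_size, pos_from + piece_size)
        yield piece_i, pos_from, pos_to

ranges = list(piece_ranges(2500000, 1024 * 1024))
```

Each `(pos_from, pos_to)` pair becomes a `"%s|%s-%s"` inner path, and only pieces missing from the local piecefield are added to `bad_files` and queued.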
@PluginManager.registerTo("ConfigPlugin")
class ConfigPlugin(object):
def createArguments(self):
group = self.parser.add_argument_group("Bigfile plugin")
group.add_argument(
"--autodownload_bigfile_size_limit",
help="Also download bigfiles smaller than this limit if help distribute option is checked",
default=10,
metavar="MB",
type=int,
)
group.add_argument(
"--bigfile_size_limit",
help="Maximum size of downloaded big files",
default=False,
metavar="MB",
type=int,
)
return super(ConfigPlugin, self).createArguments()
|
utils | parse | # -*- encoding: utf-8 -*-
from itertools import chain
from urllib.parse import unquote
import html5lib
def thread(data, default="Untitled.", id=None):
"""
Extract <h1> title from web page. The title is *probably* the text node,
which is the nearest H1 node in context to an element with the `isso-thread` id.
"""
html = html5lib.parse(data, treebuilder="dom")
assert html.lastChild.nodeName == "html"
html = html.lastChild
# aka getElementById, but limited to div and section tags
el = list(
filter(
lambda i: i.attributes["id"].value == "isso-thread",
filter(
lambda i: "id" in i.attributes,
chain(*map(html.getElementsByTagName, ("div", "section"))),
),
)
)
if not el:
return id, default
el = el[0]
visited = []
def recurse(node):
for child in node.childNodes:
if child.nodeType != child.ELEMENT_NODE:
continue
if child.nodeName.upper() == "H1":
return child
if child not in visited:
return recurse(child)
def gettext(rv):
for child in rv.childNodes:
if child.nodeType == child.TEXT_NODE:
yield child.nodeValue
if child.nodeType == child.ELEMENT_NODE:
for item in gettext(child):
yield item
try:
id = unquote(el.attributes["data-isso-id"].value)
except (KeyError, AttributeError):
pass
try:
return id, unquote(el.attributes["data-title"].value)
except (KeyError, AttributeError):
pass
while el is not None: # el.parentNode is None in the very end
visited.append(el)
rv = recurse(el)
if rv:
return id, "".join(gettext(rv)).strip()
el = el.parentNode
return id, default
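The traversal above can be sketched on well-formed XHTML with only the stdlib. This uses `xml.dom.minidom` instead of `html5lib` (so it skips real-world HTML error recovery) and, unlike the original's recursion, keeps searching sibling subtrees after a fruitless branch:

```python
from xml.dom.minidom import parseString

def nearest_h1_title(xhtml, default="Untitled."):
    """Find #isso-thread, then walk up the tree, probing each ancestor's
    subtree for the first <h1>. Approximation of thread() on XHTML input."""
    dom = parseString(xhtml)
    target = None
    for el in dom.getElementsByTagName("div") + dom.getElementsByTagName("section"):
        if el.getAttribute("id") == "isso-thread":
            target = el
            break
    if target is None:
        return default
    visited = []

    def find_h1(node):
        for child in node.childNodes:
            if child.nodeType != child.ELEMENT_NODE:
                continue
            if child.nodeName.upper() == "H1":
                return child
            if child not in visited:
                found = find_h1(child)
                if found is not None:
                    return found

    def text_of(node):
        return "".join(
            c.nodeValue if c.nodeType == c.TEXT_NODE else text_of(c)
            for c in node.childNodes
        )

    el = target
    while el is not None and el.nodeType == el.ELEMENT_NODE:
        visited.append(el)  # Don't re-descend into already-searched subtrees
        h1 = find_h1(el)
        if h1 is not None:
            return text_of(h1).strip()
        el = el.parentNode
    return default
```

Marking ancestors as visited is what makes the walk pick the *nearest* h1: each step up only searches the parts of the tree not already covered.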
|
QT | PantallaPlayPGN | from Code import Partida, TrListas, Util
from Code.QT import (
Colocacion,
Columnas,
Controles,
Grid,
Iconos,
PantallaPGN,
QTUtil2,
QTVarios,
)
class PlayPGNs(Util.DicSQL):
def __init__(self, fichero):
Util.DicSQL.__init__(self, fichero)
self.regKeys = self.keys(True, True)
def leeRegistro(self, num):
return self.__getitem__(self.regKeys[num])
def append(self, valor):
k = str(Util.hoy())
self.__setitem__(k, valor)
self.regKeys = self.keys(True, True)
def appendHash(self, xhash, valor):
"""Usado desde databases-partidas, el hash = hash del xpv"""
k = str(Util.hoy()) + "|" + str(xhash)
self.__setitem__(k, valor)
self.regKeys = self.keys(True, True)
def recnoHash(self, xhash):
"""Usado desde databases-partidas"""
for recno, key in enumerate(self.regKeys):
if "|" in key:
h = int(key.split("|")[1])
if xhash == h:
return recno
return None
def cambiaRegistro(self, num, valor):
self.__setitem__(self.regKeys[num], valor)
def borraRegistro(self, num):
self.__delitem__(self.regKeys[num])
self.regKeys = self.keys(True, True)
def borraLista(self, li):
li.sort()
li.reverse()
for x in li:
self.__delitem__(self.regKeys[x])
self.pack()
self.regKeys = self.keys(True, True)
def rotulo(self, num):
r = self.leeRegistro(num)
def x(k):
return r.get(k, "")
date = x("DATE").replace(".?", "").replace("?", "")
return "%s-%s : %s %s %s" % (
x("WHITE"),
x("BLACK"),
date,
x("EVENT"),
x("SITE"),
)
class WPlayBase(QTVarios.WDialogo):
def __init__(self, procesador):
titulo = _("Play against a game")
QTVarios.WDialogo.__init__(
self, procesador.pantalla, titulo, Iconos.Law(), "playgame"
)
self.procesador = procesador
self.configuracion = procesador.configuracion
self.recno = None
self.db = PlayPGNs(self.configuracion.ficheroPlayPGN)
# History
oColumnas = Columnas.ListaColumnas()
def creaCol(clave, rotulo, siCentrado=True):
oColumnas.nueva(clave, rotulo, 80, siCentrado=siCentrado)
# Keys in standard PGN order
liBasic = (
"EVENT",
"SITE",
"DATE",
"ROUND",
"WHITE",
"BLACK",
"RESULT",
"ECO",
"FEN",
"WHITEELO",
"BLACKELO",
)
for clave in liBasic:
rotulo = TrListas.pgnLabel(clave)
creaCol(clave, rotulo, clave != "EVENT")
self.grid = Grid.Grid(
self, oColumnas, siSelecFilas=True, siSeleccionMultiple=True
)
self.grid.setMinimumWidth(self.grid.anchoColumnas() + 20)
# Tool bar
liAcciones = (
(_("Close"), Iconos.MainMenu(), self.terminar),
None,
(_("Play"), Iconos.Empezar(), self.empezar),
(_("New"), Iconos.Nuevo(), self.nuevo),
None,
(_("Remove"), Iconos.Borrar(), self.borrar),
None,
)
self.tb = QTVarios.LCTB(self, liAcciones)
# Layout
lyTB = Colocacion.H().control(self.tb).margen(0)
ly = Colocacion.V().otro(lyTB).control(self.grid).margen(3)
self.setLayout(ly)
self.registrarGrid(self.grid)
self.recuperarVideo(siTam=False)
self.grid.gotop()
def gridDobleClick(self, grid, fila, columna):
self.empezar()
def gridNumDatos(self, grid):
return len(self.db)
def gridDato(self, grid, fila, oColumna):
col = oColumna.clave
reg = self.db.leeRegistro(fila)
return reg.get(col, "")
def terminar(self):
self.guardarVideo()
self.db.close()
self.accept()
def closeEvent(self, QCloseEvent):
self.guardarVideo()
self.db.close()
def nuevo(self):
unpgn = PantallaPGN.eligePartida(self)
if unpgn and unpgn.partida.numJugadas():
reg = unpgn.dic
unpgn.partida.siTerminada()
reg["PARTIDA"] = unpgn.partida.guardaEnTexto()
self.db.append(reg)
self.grid.refresh()
self.grid.gotop()
def borrar(self):
li = self.grid.recnosSeleccionados()
if len(li) > 0:
if QTUtil2.pregunta(self, _("Do you want to delete all selected records?")):
self.db.borraLista(li)
self.grid.gotop()
self.grid.refresh()
def empezar(self):
li = self.grid.recnosSeleccionados()
if len(li) > 0:
recno = li[0]
w = WPlay1(self, self.configuracion, self.db, recno)
if w.exec_():
self.recno = recno
self.siBlancas = w.siBlancas
self.accept()
class WPlay1(QTVarios.WDialogo):
def __init__(self, owner, configuracion, db, recno):
QTVarios.WDialogo.__init__(
self, owner, _("Play against a game"), Iconos.PlayGame(), "play1game"
)
self.owner = owner
self.db = db
self.configuracion = configuracion
self.recno = recno
self.registro = self.db.leeRegistro(recno)
self.partida = Partida.Partida()
self.partida.recuperaDeTexto(self.registro["PARTIDA"])
self.lbRotulo = (
Controles.LB(self, self.db.rotulo(recno))
.ponTipoLetra(puntos=12)
.ponColorFondoN("#076C9F", "#EFEFEF")
)
self.liIntentos = self.registro.get("LIINTENTOS", [])
oColumnas = Columnas.ListaColumnas()
oColumnas.nueva("DATE", _("Date"), 80, siCentrado=True)
oColumnas.nueva("COLOR", _("Play with"), 80, siCentrado=True)
oColumnas.nueva("POINTS", _("Points"), 80, siCentrado=True)
oColumnas.nueva("TIME", _("Time"), 80, siCentrado=True)
self.grid = Grid.Grid(
self, oColumnas, siSelecFilas=True, siSeleccionMultiple=True
)
self.grid.setMinimumWidth(self.grid.anchoColumnas() + 20)
# Tool bar
liAcciones = (
(_("Close"), Iconos.MainMenu(), self.terminar),
None,
(_("Train"), Iconos.Entrenar(), self.empezar),
None,
(_("Remove"), Iconos.Borrar(), self.borrar),
None,
)
self.tb = QTVarios.LCTB(self, liAcciones)
# Layout
lyTB = Colocacion.H().control(self.tb).margen(0)
ly = (
Colocacion.V()
.otro(lyTB)
.control(self.grid)
.control(self.lbRotulo)
.margen(3)
)
self.setLayout(ly)
self.registrarGrid(self.grid)
self.recuperarVideo(siTam=False)
self.grid.gotop()
def gridNumDatos(self, grid):
return len(self.liIntentos)
def gridDato(self, grid, fila, oColumna):
col = oColumna.clave
reg = self.liIntentos[fila]
if col == "DATE":
f = reg["DATE"]
return "%02d/%02d/%d-%02d:%02d" % (f.day, f.month, f.year, f.hour, f.minute)
if col == "COLOR":
c = reg["COLOR"]
if c == "b":
return _("Black")
elif c == "w":
return _("White")
if col == "POINTS":
return "%d (%d)" % (reg["POINTS"], reg["POINTSMAX"])
if col == "TIME":
s = int(reg["TIME"])
m = int(s / 60)
s -= m * 60
return "%d' %d\"" % (m, s)
def guardar(self, dic):
self.liIntentos.insert(0, dic)
self.grid.refresh()
self.grid.gotop()
self.registro["LIINTENTOS"] = self.liIntentos
self.db.cambiaRegistro(self.recno, self.registro)
def terminar(self, siAccept=False):
self.guardarVideo()
if siAccept:
self.accept()
else:
self.reject()
def borrar(self):
li = self.grid.recnosSeleccionados()
if len(li) > 0:
if QTUtil2.pregunta(self, _("Do you want to delete all selected records?")):
li.sort()
li.reverse()
for x in li:
del self.liIntentos[x]
self.grid.gotop()
self.grid.refresh()
def empezar(self):
self.siBlancas = QTVarios.blancasNegras(self)
self.terminar(True)
|
reader | bitmap_reader | # --------------------------------------------------------------------------
# Software: InVesalius - Software de Reconstrucao 3D de Imagens Medicas
# Copyright: (C) 2001 Centro de Pesquisas Renato Archer
# Homepage: http://www.softwarepublico.gov.br
# Contact: invesalius@cti.gov.br
# License: GNU - GPL 2 (LICENSE.txt/LICENCA.txt)
# --------------------------------------------------------------------------
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; in accordance with
# version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# --------------------------------------------------------------------------
import imghdr
import os
import re
import sys
import tempfile
import threading
from multiprocessing import cpu_count
import invesalius.constants as const
import invesalius.data.converters as converters
import invesalius.utils as utils
import numpy
import wx
from imageio import imread
from invesalius import inv_paths
from invesalius.pubsub import pub as Publisher
from vtkmodules.util import numpy_support
from vtkmodules.vtkCommonCore import vtkFileOutputWindow, vtkOutputWindow
from vtkmodules.vtkCommonDataModel import vtkImageData
from vtkmodules.vtkImagingColor import vtkImageLuminance
from vtkmodules.vtkImagingCore import vtkImageCast, vtkImageResample
from vtkmodules.vtkIOImage import (
vtkBMPReader,
vtkJPEGReader,
vtkPNGReader,
vtkPNGWriter,
vtkTIFFReader,
)
# Flags to control VTK errors when reading files
no_error = True
vtk_error = False
if sys.platform == "win32":
try:
import win32api
_has_win32api = True
except ImportError:
_has_win32api = False
else:
_has_win32api = False
class Singleton:
def __init__(self, klass):
self.klass = klass
self.instance = None
def __call__(self, *args, **kwds):
if self.instance is None:
self.instance = self.klass(*args, **kwds)
return self.instance
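The `Singleton` wrapper above caches the first constructed instance and hands it back on every later call. A minimal self-contained sketch (the `Registry` class is illustrative, not part of this module):

```python
class Singleton:
    def __init__(self, klass):
        self.klass = klass
        self.instance = None

    def __call__(self, *args, **kwds):
        # Construct the wrapped class once; reuse the cached instance after.
        if self.instance is None:
            self.instance = self.klass(*args, **kwds)
        return self.instance


@Singleton
class Registry:
    def __init__(self):
        self.items = []


a = Registry()
b = Registry()
assert a is b  # both names point at the one cached instance
```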
@Singleton
class BitmapData:
def __init__(self):
self.data = None
def GetData(self):
return self.data
def SetData(self, data):
self.data = data
def GetOnlyBitmapPath(self):
paths = [item[0] for item in self.data]
return paths
def GetFirstBitmapSize(self):
return (self.data[0][3], self.data[0][4])
def IsAllBitmapSameSize(self):
sizes = [item[5] for item in self.data]
return len(set(sizes)) <= 1
def GetFirstPixelSize(self):
path = self.data[0][0]
size = ReadBitmap(path).dtype.itemsize * 8
return size
def RemoveFileByPath(self, path):
# Rebuild the list instead of removing while iterating over it,
# which would silently skip elements.
self.data = [d for d in self.data if path not in d]
def GetIndexByPath(self, path):
for i, v in enumerate(self.data):
if path in v:
return i
class BitmapFiles:
def __init__(self):
self.bitmapfiles = []
def Add(self, bmp):
self.bitmapfiles.append(bmp)
def Sort(self, x):
c_re = re.compile(r"\d+")
if len(c_re.findall(x[6])) > 0:
return [int(i) for i in c_re.findall(x[6])]
else:
return [str(x[6])]
def GetValues(self):
bmpfile = self.bitmapfiles
bmpfile.sort(key=self.Sort)
bmp_data = BitmapData()
bmp_data.data = bmpfile
return bmpfile
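`BitmapFiles.Sort` implements a natural sort key: the digit runs in a filename are compared numerically, so `slice10` sorts after `slice2`. A standalone sketch of the same key function (names here are illustrative):

```python
import re


def natural_key(name):
    # Compare by the numeric runs in the name; fall back to the raw
    # string when the name contains no digits, as Sort() does above.
    nums = re.findall(r"\d+", name)
    return [int(n) for n in nums] if nums else [name]


files = ["slice10.png", "slice2.png", "slice1.png"]
print(sorted(files, key=natural_key))
# → ['slice1.png', 'slice2.png', 'slice10.png']
```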
class LoadBitmap:
def __init__(self, bmp_file, filepath):
self.bmp_file = bmp_file
# self.filepath = utils.decode(filepath, const.FS_ENCODE)
self.filepath = filepath
self.run()
def run(self):
global vtk_error
# ----- verify extension ------------------
extension = VerifyDataType(self.filepath)
file_name = self.filepath.decode(const.FS_ENCODE).split(os.path.sep)[-1]
n_array = ReadBitmap(self.filepath)
if not (isinstance(n_array, numpy.ndarray)):
return False
image = converters.to_vtk(
n_array, spacing=(1, 1, 1), slice_number=1, orientation="AXIAL"
)
dim = image.GetDimensions()
x = dim[0]
y = dim[1]
img = vtkImageResample()
img.SetInputData(image)
img.SetAxisMagnificationFactor(0, 0.25)
img.SetAxisMagnificationFactor(1, 0.25)
img.SetAxisMagnificationFactor(2, 1)
img.Update()
tp = img.GetOutput().GetScalarTypeAsString()
image_copy = vtkImageData()
image_copy.DeepCopy(img.GetOutput())
# mkstemp instead of the deprecated, race-prone mktemp
fd, thumbnail_path = tempfile.mkstemp(suffix=".png")
os.close(fd)
write_png = vtkPNGWriter()
write_png.SetInputConnection(img.GetOutputPort())
write_png.AddObserver("WarningEvent", VtkErrorPNGWriter)
write_png.SetFileName(thumbnail_path)
write_png.Write()
if vtk_error:
img = vtkImageCast()
img.SetInputData(image_copy)
img.SetOutputScalarTypeToUnsignedShort()
# img.SetClampOverflow(1)
img.Update()
write_png = vtkPNGWriter()
write_png.SetInputConnection(img.GetOutputPort())
write_png.SetFileName(thumbnail_path)
write_png.Write()
vtk_error = False
bmp_id = wx.NewId()
bmp_item = [
self.filepath,
thumbnail_path,
extension,
x,
y,
str(x) + " x " + str(y),
file_name,
bmp_id,
]
self.bmp_file.Add(bmp_item)
def yGetBitmaps(directory, recursive=True, gui=True):
"""
Yield (counter, nfiles) progress tuples while scanning `directory` for bitmap files (TIFF, BMP, JPEG, PNG), then yield the collected file list.
"""
nfiles = 0
# Find total number of files
if recursive:
for dirpath, dirnames, filenames in os.walk(directory):
nfiles += len(filenames)
else:
# os.walk returns an iterator; take only the top-level entry
dirpath, dirnames, filenames = next(os.walk(directory))
nfiles = len(filenames)
counter = 0
bmp_file = BitmapFiles()
# Retrieve only TIFF, BMP, JPEG and PNG files
if recursive:
for dirpath, dirnames, filenames in os.walk(directory):
for name in filenames:
filepath = os.path.join(dirpath, name).encode(const.FS_ENCODE)
counter += 1
if gui:
yield (counter, nfiles)
LoadBitmap(bmp_file, filepath)
else:
dirpath, dirnames, filenames = next(os.walk(directory))
for name in filenames:
filepath = os.path.join(dirpath, name).encode(const.FS_ENCODE)
counter += 1
if gui:
yield (counter, nfiles)
LoadBitmap(bmp_file, filepath)
yield bmp_file.GetValues()
class ProgressBitmapReader:
def __init__(self):
Publisher.subscribe(self.CancelLoad, "Cancel bitmap load")
def CancelLoad(self):
self.running = False
self.stoped = True
def SetWindowEvent(self, frame):
self.frame = frame
def SetDirectoryPath(self, path, recursive=True):
self.running = True
self.stoped = False
self.GetBitmaps(path, recursive)
def UpdateLoadFileProgress(self, cont_progress):
Publisher.sendMessage("Update bitmap load", data=cont_progress)
def EndLoadFile(self, bitmap_list):
Publisher.sendMessage("End bitmap load", data=bitmap_list)
def GetBitmaps(self, path, recursive):
y = yGetBitmaps(path, recursive)
for value_progress in y:
if not self.running:
break
if isinstance(value_progress, tuple):
self.UpdateLoadFileProgress(value_progress)
else:
self.EndLoadFile(value_progress)
self.UpdateLoadFileProgress(None)
self.stoped = False
def VtkErrorPNGWriter(obj, f):
global vtk_error
vtk_error = True
def ScipyRead(filepath):
try:
r = imread(filepath, flatten=True)
dt = r.dtype
if dt == "float" or dt == "float16" or dt == "float32" or dt == "float64":
shift = -r.max() / 2
simage = numpy.zeros_like(r, dtype="int16")
simage[:] = r.astype("int32") + shift
return simage
else:
return r
except IOError:
return False
def VtkRead(filepath, t):
if not const.VTK_WARNING:
log_path = os.path.join(inv_paths.USER_LOG_DIR, "vtkoutput.txt")
fow = vtkFileOutputWindow()
fow.SetFileName(log_path.encode(const.FS_ENCODE))
ow = vtkOutputWindow()
ow.SetInstance(fow)
global no_error
if t == "bmp":
reader = vtkBMPReader()
elif t == "tiff" or t == "tif":
reader = vtkTIFFReader()
elif t == "png":
reader = vtkPNGReader()
elif t == "jpeg" or t == "jpg":
reader = vtkJPEGReader()
else:
return False
reader.AddObserver("ErrorEvent", VtkErrorToPy)
reader.SetFileName(filepath)
reader.Update()
if no_error:
image = reader.GetOutput()
dim = image.GetDimensions()
if reader.GetNumberOfScalarComponents() > 1:
luminanceFilter = vtkImageLuminance()
luminanceFilter.SetInputData(image)
luminanceFilter.Update()
image = vtkImageData()
image.DeepCopy(luminanceFilter.GetOutput())
img_array = numpy_support.vtk_to_numpy(image.GetPointData().GetScalars())
img_array.shape = (dim[1], dim[0])
return img_array
else:
no_error = True
return False
def ReadBitmap(filepath):
global no_error
t = VerifyDataType(filepath)
if _has_win32api:
filepath = win32api.GetShortPathName(filepath)
if t is False:
try:
measures_info = GetPixelSpacingFromInfoFile(filepath)
except UnicodeDecodeError:
measures_info = False
if measures_info:
Publisher.sendMessage("Set bitmap spacing", spacing=measures_info)
return False
img_array = VtkRead(filepath, t)
if not (isinstance(img_array, numpy.ndarray)):
no_error = True
img_array = ScipyRead(filepath)
if not (isinstance(img_array, numpy.ndarray)):
return False
return img_array
def GetPixelSpacingFromInfoFile(filepath):
filepath = utils.decode(filepath, const.FS_ENCODE)
if filepath.endswith(".DS_Store"):
return False
try:
with open(filepath, "r") as fi:
lines = fi.readlines()
except UnicodeDecodeError:
# retry with Latin-1 for info files written with a legacy encoding
try:
with open(filepath, "r", encoding="iso8859-1") as fi:
lines = fi.readlines()
except UnicodeDecodeError:
return False
measure_scale = "mm"
values = []
if len(lines) > 0:
# info text from avizo
if "# Avizo Stacked Slices" in lines[0]:
value = lines[2].split(" ")
spx = float(value[1])
spy = float(value[2])
value = lines[5].split(" ")
spz = float(value[1])
return [spx * 0.001, spy * 0.001, spz * 0.001]
else:
# info text from skyscan
for line in lines:
if "Pixel Size" in line:
if "um" in line:
measure_scale = "um"
value = line.split("=")[-1]
values.append(value)
if len(values) > 0:
value = values[-1]
value = value.replace("\n", "")
value = value.replace("\r", "")
# convert um to mm (InVesalius default)
if measure_scale == "um":
value = float(value) * 0.001
measure_scale = "mm"
elif measure_scale == "nm":
value = float(value) * 0.000001
return [value, value, value]
else:
return False
else:
return False
def VtkErrorToPy(obj, evt):
global no_error
no_error = False
def VerifyDataType(filepath):
try:
filepath = utils.decode(filepath, const.FS_ENCODE)
t = imghdr.what(filepath)
if t:
return t
else:
return False
except IOError:
return False
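`VerifyDataType` relies on `imghdr`, which is deprecated since Python 3.11 and removed in 3.13. A hedged stand-in that checks the same magic bytes for the four formats this module actually reads (function name is an assumption, not part of this codebase):

```python
def sniff_image_type(header):
    # Identify the image format from its leading magic bytes.
    if header.startswith(b"\x89PNG\r\n\x1a\n"):
        return "png"
    if header.startswith(b"\xff\xd8\xff"):
        return "jpeg"
    if header.startswith(b"BM"):
        return "bmp"
    if header[:4] in (b"II*\x00", b"MM\x00*"):  # little/big-endian TIFF
        return "tiff"
    return None


print(sniff_image_type(b"\x89PNG\r\n\x1a\n" + b"\x00" * 8))
# → png
```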
|
octoprint | _version | # This file helps to compute a version number in source trees obtained from
# git-archive tarball (such as those provided by githubs download-from-tag
# feature). Distribution tarballs (built by setup.py sdist) and build
# directories (produced by setup.py build) will contain a much shorter file
# that just contains the computed version number.
# This file is released into the public domain. Generated by
# versioneer-0.15+dev (https://github.com/warner/python-versioneer)
"""Git implementation of _version.py."""
import errno
import logging
import os
import re
import subprocess
import sys
def get_keywords():
"""Get the keywords needed to look up the version information."""
# these strings will be replaced by git during git-archive.
# setup.py/versioneer.py will grep for the variable names, so they must
# each be defined on a line of their own. _version.py will just call
# get_keywords().
git_refnames = "$Format:%d$"
git_full = "$Format:%H$"
keywords = {"refnames": git_refnames, "full": git_full}
return keywords
class VersioneerConfig:
"""Container for Versioneer configuration parameters."""
def get_config():
"""Create, populate and return the VersioneerConfig() object."""
# these strings are filled in when 'setup.py versioneer' creates
# _version.py
cfg = VersioneerConfig()
cfg.VCS = "git"
cfg.style = "pep440-tag"
cfg.tag_prefix = ""
cfg.parentdir_prefix = ""
cfg.versionfile_source = "src/octoprint/_version.py"
cfg.lookupfile = ".versioneer-lookup"
cfg.verbose = False
return cfg
class NotThisMethod(Exception):
"""Exception raised if a method is not valid for the current scenario."""
LONG_VERSION_PY = {}
HANDLERS = {}
def register_vcs_handler(vcs, method): # decorator
"""Decorator to mark a method as the handler for a particular VCS."""
def decorate(f):
"""Store f in HANDLERS[vcs][method]."""
if vcs not in HANDLERS:
HANDLERS[vcs] = {}
HANDLERS[vcs][method] = f
return f
return decorate
def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False):
"""Call the given command(s)."""
assert isinstance(commands, list)
p = None
for c in commands:
try:
dispcmd = str([c] + args)
# remember shell=False, so use git.cmd on windows, not just git
p = subprocess.Popen(
[c] + args,
cwd=cwd,
stdout=subprocess.PIPE,
stderr=(subprocess.PIPE if hide_stderr else None),
)
break
except OSError:
e = sys.exc_info()[1]
if e.errno == errno.ENOENT:
continue
if verbose:
print("unable to run %s" % dispcmd)
print(e)
return None
else:
if verbose:
print(f"unable to find command, tried {commands}")
return None
stdout = p.communicate()[0].strip()
if sys.version_info[0] >= 3:
stdout = stdout.decode()
if p.returncode != 0:
if verbose:
print("unable to run %s (error)" % dispcmd)
return None
return stdout
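`run_command` probes a list of candidate executables (hence `git.cmd` / `git.exe` on Windows, where `shell=False` means the `.cmd` shim must be named explicitly) and uses the first one that can be spawned. The fallback loop can be sketched as:

```python
import subprocess
import sys


def first_runnable(commands, args):
    # Return the output of the first command in `commands` that spawns;
    # OSError (e.g. ENOENT) means "try the next candidate name".
    for c in commands:
        try:
            p = subprocess.Popen([c] + args, stdout=subprocess.PIPE)
        except OSError:
            continue
        return p.communicate()[0].decode()
    return None


# `sys.executable` is used here only as a command guaranteed to exist.
out = first_runnable(["no-such-binary-xyz", sys.executable], ["--version"])
print(out is not None)  # → True
```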
def versions_from_parentdir(parentdir_prefix, root, verbose):
"""Try to determine the version from the parent directory name.
Source tarballs conventionally unpack into a directory that includes
both the project name and a version string.
"""
dirname = os.path.basename(root)
if not dirname.startswith(parentdir_prefix):
if verbose:
print(
"guessing rootdir is '%s', but '%s' doesn't start with "
"prefix '%s'" % (root, dirname, parentdir_prefix)
)
raise NotThisMethod("rootdir doesn't start with parentdir_prefix")
return {
"version": dirname[len(parentdir_prefix) :],
"full-revisionid": None,
"dirty": False,
"error": None,
}
@register_vcs_handler("git", "get_keywords")
def git_get_keywords(versionfile_abs):
"""Extract version information from the given file."""
# the code embedded in _version.py can just fetch the value of these
# keywords. When used from setup.py, we don't want to import _version.py,
# so we do it with a regexp instead. This function is not used from
# _version.py.
keywords = {}
try:
f = open(versionfile_abs, encoding="utf-8")
for line in f.readlines():
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["full"] = mo.group(1)
f.close()
except OSError:
pass
return keywords
@register_vcs_handler("git", "keywords")
def git_versions_from_keywords(keywords, tag_prefix, verbose):
"""Get version information from git keywords."""
if not keywords:
raise NotThisMethod("no keywords at all, weird")
refnames = keywords["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("keywords are unexpanded, not using")
raise NotThisMethod("unexpanded keywords, not a git-archive tarball")
refs = {r.strip() for r in refnames.strip("()").split(",")}
# starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of
# just "foo-1.0". If we see a "tag: " prefix, prefer those.
TAG = "tag: "
tags = {r[len(TAG) :] for r in refs if r.startswith(TAG)}
if not tags:
# Either we're using git < 1.8.3, or there really are no tags. We use
# a heuristic: assume all version tags have a digit. The old git %d
# expansion behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
tags = {r for r in refs if re.search(r"\d", r)}
if verbose:
print("discarding '%s', no digits" % ",".join(refs - tags))
branches = [
r
for r in refs
if not r.startswith(TAG) and r != "HEAD" and not r.startswith("refs/")
]
if verbose:
print("likely branches: %s" % ",".join(sorted(branches)))
branch = None
if branches:
branch = branches[0]
if verbose:
print("likely tags: %s" % ",".join(sorted(tags)))
for ref in sorted(tags):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix) :]
if verbose:
print("picking %s" % r)
result = {
"version": r,
"full-revisionid": keywords["full"].strip(),
"dirty": False,
"error": None,
}
if branch is not None:
result["branch"] = branch
return result
# no suitable tags, so version is "0+unknown", but full hex is still there
if verbose:
print("no suitable tags, using unknown + full revision id")
return {
"version": "0+unknown",
"full-revisionid": keywords["full"].strip(),
"dirty": False,
"error": "no suitable tags",
}
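The keyword path parses the expanded `$Format:%d$` string, which looks like `(HEAD -> master, tag: 1.4.0, tag: 1.4.0rc1, origin/master)`. Extracting the `tag: `-prefixed refs the same way the function above does:

```python
refnames = " (HEAD -> master, tag: 1.4.0, tag: 1.4.0rc1, origin/master)".strip()
refs = {r.strip() for r in refnames.strip("()").split(",")}
TAG = "tag: "
tags = {r[len(TAG):] for r in refs if r.startswith(TAG)}
print(sorted(tags))
# → ['1.4.0', '1.4.0rc1']
```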
@register_vcs_handler("git", "pieces_from_vcs")
def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command):
"""Get version from 'git describe' in the root of the source tree.
This only gets called if the git-archive 'subst' keywords were *not*
expanded, and _version.py hasn't already been rewritten with a short
version string, meaning we're inside a checked out source tree.
"""
if not os.path.exists(os.path.join(root, ".git")):
if verbose:
print("no .git in %s" % root)
raise NotThisMethod("no .git directory")
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
# if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty]
# if there isn't one, this yields HEX[-dirty] (no NUM)
describe_out = run_command(
GITS,
[
"describe",
"--tags",
"--dirty",
"--always",
"--long",
"--match",
"%s*" % tag_prefix,
],
cwd=root,
)
# --long was added in git-1.5.5
if describe_out is None:
raise NotThisMethod("'git describe' failed")
describe_out = describe_out.strip()
full_out = run_command(GITS, ["rev-parse", "HEAD"], cwd=root)
if full_out is None:
raise NotThisMethod("'git rev-parse' failed")
full_out = full_out.strip()
pieces = {}
pieces["long"] = full_out
pieces["short"] = full_out[:7] # maybe improved later
pieces["error"] = None
# parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty]
# TAG might have hyphens.
git_describe = describe_out
# look for -dirty suffix
dirty = git_describe.endswith("-dirty")
pieces["dirty"] = dirty
if dirty:
git_describe = git_describe[: git_describe.rindex("-dirty")]
# figure out our branch
abbrev_ref_out = run_command(GITS, ["rev-parse", "--abbrev-ref", "HEAD"], cwd=root)
if abbrev_ref_out is not None and abbrev_ref_out != "HEAD":
pieces["branch"] = abbrev_ref_out.strip()
# now we have TAG-NUM-gHEX or HEX
if "-" in git_describe:
# TAG-NUM-gHEX
mo = re.search(r"^(.+)-(\d+)-g([0-9a-f]+)$", git_describe)
if not mo:
# unparsable. Maybe git-describe is misbehaving?
pieces["error"] = "unable to parse git-describe output: '%s'" % describe_out
return pieces
# tag
full_tag = mo.group(1)
if not full_tag.startswith(tag_prefix):
if verbose:
fmt = "tag '%s' doesn't start with prefix '%s'"
print(fmt % (full_tag, tag_prefix))
pieces["error"] = "tag '{}' doesn't start with prefix '{}'".format(
full_tag,
tag_prefix,
)
return pieces
pieces["closest-tag"] = full_tag[len(tag_prefix) :]
# distance: number of commits since tag
pieces["distance"] = int(mo.group(2))
# commit: short hex revision ID
pieces["short"] = mo.group(3)
else:
# HEX: no tags
pieces["closest-tag"] = None
count_out = run_command(GITS, ["rev-list", "HEAD", "--count"], cwd=root)
pieces["distance"] = int(count_out) # total number of commits
return pieces
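The `git describe` output parsed above has the shape `TAG-NUM-gHEX[-dirty]`. A worked example of the same stripping and regex, on a made-up describe string:

```python
import re

describe_out = "1.8.7-14-g1f2a3b4-dirty"

# Peel off the -dirty suffix first, exactly as git_pieces_from_vcs does.
dirty = describe_out.endswith("-dirty")
if dirty:
    describe_out = describe_out[: describe_out.rindex("-dirty")]

# TAG may itself contain hyphens, so anchor NUM and gHEX at the end.
mo = re.search(r"^(.+)-(\d+)-g([0-9a-f]+)$", describe_out)
print(mo.group(1), int(mo.group(2)), mo.group(3), dirty)
# → 1.8.7 14 1f2a3b4 True
```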
@register_vcs_handler("git", "parse_lookup_file")
def git_parse_lookup_file(path):
"""Parse a versioneer lookup file.
This file allows definition of branch specific data like virtual tags or
custom styles to use for version rendering.
"""
if not os.path.exists(path):
return []
import re
lookup = []
with open(path, encoding="utf-8") as f:
for line in f:
if "#" in line:
line = line[: line.index("#")]
line = line.strip()
if not line:
continue
try:
split_line = list(map(lambda x: x.strip(), line.split()))
if not len(split_line):
continue
matcher = re.compile(split_line[0])
if len(split_line) == 1:
entry = [matcher, None, None, None]
elif len(split_line) == 2:
render = split_line[1]
entry = [matcher, render, None, None]
elif len(split_line) == 3:
tag, ref_commit = split_line[1:]
entry = [matcher, None, tag, ref_commit]
elif len(split_line) == 4:
tag, ref_commit, render = split_line[1:]
entry = [matcher, render, tag, ref_commit]
else:
continue
lookup.append(entry)
except Exception:
logging.getLogger(__name__).exception("Versioneer problem")
break
return lookup
@register_vcs_handler("git", "pieces_from_lookup")
def git_pieces_from_lookup(lookup, root, verbose, run_command=run_command):
"""Extract version information based on provided lookup data."""
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
stdout = run_command(GITS, ["rev-parse", "--abbrev-ref", "HEAD"], cwd=root)
if stdout is None:
raise NotThisMethod("git rev-parse --abbrev-ref HEAD failed")
current_branch = stdout.strip()
if current_branch == "HEAD":
raise NotThisMethod("not on a branch")
for matcher, render, tag, ref_commit in lookup:
if matcher.match(current_branch):
if tag is None or ref_commit is None:
raise NotThisMethod("tag or ref_commit is unset for " "this branch")
stdout = run_command(
GITS, ["rev-list", "%s..HEAD" % ref_commit, "--count"], cwd=root
)
if stdout is None:
raise NotThisMethod(
"git rev-list %s..HEAD " "--count failed" % ref_commit
)
try:
num_commits = int(stdout.strip())
except ValueError:
raise NotThisMethod(
"git rev-list %s..HEAD --count didn't "
"return a valid number" % ref_commit
)
stdout = run_command(GITS, ["rev-parse", "--short", "HEAD"], cwd=root)
if stdout is None:
raise NotThisMethod("git rev-parse --short HEAD failed")
short_hash = stdout.strip()
stdout = run_command(
GITS, ["describe", "--tags", "--dirty", "--always"], cwd=root
)
if stdout is None:
raise NotThisMethod("git describe --tags --dirty " "--always failed")
dirty = stdout.strip().endswith("-dirty")
stdout = run_command(GITS, ["rev-parse", "HEAD"], cwd=root)
if stdout is None:
raise NotThisMethod("git rev-parse HEAD failed")
full = stdout.strip()
return {
"long": full,
"short": short_hash,
"dirty": dirty,
"branch": current_branch,
"closest-tag": tag,
"distance": num_commits,
"error": None,
"render": render,
}
raise NotThisMethod("no matching lookup definition found")
def plus_or_dot(pieces):
"""Return a + if we don't already have one, else return a ."""
if "+" in pieces.get("closest-tag", ""):
return "."
return "+"
def render_pep440(pieces):
"""Build up version string, with post-release "local version identifier".
Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you
get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty
Exceptions:
1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += plus_or_dot(pieces)
rendered += "%d.g%s" % (pieces["distance"], pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
else:
# exception #1
rendered = "0+untagged.%d.g%s" % (pieces["distance"], pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
return rendered
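Given the pieces produced upstream, `render_pep440` builds `TAG[+DISTANCE.gHEX[.dirty]]`. A self-contained restatement with sample inputs, for illustration only:

```python
def render_pep440(pieces):
    # TAG[+DISTANCE.gHEX[.dirty]]; no tag falls back to 0+untagged...
    if pieces["closest-tag"]:
        rendered = pieces["closest-tag"]
        if pieces["distance"] or pieces["dirty"]:
            # use "." if the tag already carries a "+" local segment
            rendered += "." if "+" in pieces["closest-tag"] else "+"
            rendered += "%d.g%s" % (pieces["distance"], pieces["short"])
            if pieces["dirty"]:
                rendered += ".dirty"
    else:
        rendered = "0+untagged.%d.g%s" % (pieces["distance"], pieces["short"])
        if pieces["dirty"]:
            rendered += ".dirty"
    return rendered


print(render_pep440({"closest-tag": "1.4.0", "distance": 3, "short": "abc1234", "dirty": True}))
# → 1.4.0+3.gabc1234.dirty
```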
def render_pep440_tag(pieces):
"""TAG[[.postDISTANCE].dev0+gHEX] -- Just the tag if not dirty, else more info
Useful for projects that want commit based tracking on some branches
but have the master branch only report tags, to allow for commits that
do not modify actual code (e.g. to .github/* or docs).
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]+gHEX
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["dirty"]:
rendered += ".post%d" % pieces["distance"]
rendered += ".dev0"
rendered += "+g%s" % pieces["short"]
else:
# exception #1
rendered = "0.post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += "+g%s" % pieces["short"]
return rendered
def render_pep440_pre(pieces):
"""TAG[.post.devDISTANCE] -- No -dirty.
Exceptions:
1: no tags. 0.post.devDISTANCE
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += ".post.dev%d" % pieces["distance"]
else:
# exception #1
rendered = "0.post.dev%d" % pieces["distance"]
return rendered
def render_pep440_post(pieces):
"""TAG[.postDISTANCE[.dev0]+gHEX] .
The ".dev0" means dirty. Note that .dev0 sorts backwards
(a dirty tree will appear "older" than the corresponding clean one),
but you shouldn't be releasing software with -dirty anyways.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += plus_or_dot(pieces)
rendered += "g%s" % pieces["short"]
else:
# exception #1
rendered = "0.post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += "+g%s" % pieces["short"]
return rendered
def render_pep440_dev(pieces):
"""TAG[.devDISTANCE]+gHEX[.dirty] .
Exceptions:
1: no tags. 0.devDISTANCE+gHEX[.dirty]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += ".dev%d" % pieces["distance"]
rendered += plus_or_dot(pieces)
else:
# exception #1
rendered = "0.dev%d" % pieces["distance"]
rendered += "+"
rendered += "g%s" % pieces["short"]
if pieces["dirty"]:
rendered += ".dirty"
return rendered
def render_pep440_old(pieces):
"""TAG[.postDISTANCE[.dev0]] .
The ".dev0" means dirty.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
else:
# exception #1
rendered = "0.post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
return rendered
def render_git_describe(pieces):
"""TAG[-DISTANCE-gHEX][-dirty].
Like 'git describe --tags --dirty --always'.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render_git_describe_long(pieces):
"""TAG-DISTANCE-gHEX[-dirty].
Like 'git describe --tags --dirty --always --long'.
The distance/hash is unconditional.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render(pieces, style):
"""Render the given version pieces into the requested style."""
if pieces["error"]:
return {
"version": "unknown",
"full-revisionid": pieces.get("long"),
"dirty": None,
"error": pieces["error"],
}
if "render" in pieces and pieces["render"] is not None:
style = pieces["render"]
if not style or style == "default":
style = "pep440" # the default
if style == "pep440":
rendered = render_pep440(pieces)
elif style == "pep440-pre":
rendered = render_pep440_pre(pieces)
elif style == "pep440-post":
rendered = render_pep440_post(pieces)
elif style == "pep440-old":
rendered = render_pep440_old(pieces)
elif style == "pep440-dev":
rendered = render_pep440_dev(pieces)
elif style == "pep440-tag":
rendered = render_pep440_tag(pieces)
elif style == "git-describe":
rendered = render_git_describe(pieces)
elif style == "git-describe-long":
rendered = render_git_describe_long(pieces)
else:
raise ValueError("unknown style '%s'" % style)
result = {
"version": rendered,
"full-revisionid": pieces["long"],
"dirty": pieces["dirty"],
"error": None,
}
if "branch" in pieces and pieces["branch"] is not None:
result["branch"] = pieces["branch"]
return result
def get_versions():
"""Get version information or return default if unable to do so."""
# I am in _version.py, which lives at ROOT/VERSIONFILE_SOURCE. If we have
# __file__, we can work backwards from there to the root. Some
# py2exe/bbfreeze/non-CPython implementations don't do __file__, in which
# case we can only use expanded keywords.
cfg = get_config()
verbose = cfg.verbose
try:
return git_versions_from_keywords(get_keywords(), cfg.tag_prefix, verbose)
except NotThisMethod:
pass
try:
root = os.path.realpath(__file__)
# versionfile_source is the relative path from the top of the source
# tree (where the .git directory might live) to this file. Invert
# this to find the root from __file__.
for _ in cfg.versionfile_source.split("/"):
root = os.path.dirname(root)
except NameError:
return {
"version": "0+unknown",
"full-revisionid": None,
"dirty": None,
"error": "unable to find root of source tree",
}
lookupfile = cfg.lookupfile if cfg.lookupfile is not None else ".versioneer-lookup"
lookuppath = os.path.join(root, lookupfile)
if os.path.exists(lookuppath):
try:
lookup_data = git_parse_lookup_file(lookuppath)
pieces = git_pieces_from_lookup(lookup_data, root, verbose)
return render(pieces, cfg.style)
except NotThisMethod:
pass
try:
pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose)
return render(pieces, cfg.style)
except NotThisMethod:
pass
try:
if cfg.parentdir_prefix:
return versions_from_parentdir(cfg.parentdir_prefix, root, verbose)
except NotThisMethod:
pass
return {
"version": "0+unknown",
"full-revisionid": None,
"dirty": None,
"error": "unable to compute version",
}
|
instruments | Instrument | #####################################################################
# -*- coding: utf-8 -*- #
# #
# Frets on Fire #
# Copyright (C) 2009 Blazingamer #
# #
# This program is free software; you can redistribute it and/or #
# modify it under the terms of the GNU General Public License #
# as published by the Free Software Foundation; either version 2 #
# of the License, or (at your option) any later version. #
# #
# This program is distributed in the hope that it will be useful, #
# but WITHOUT ANY WARRANTY; without even the implied warranty of #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
# GNU General Public License for more details. #
# #
# You should have received a copy of the GNU General Public License #
# along with this program; if not, write to the Free Software #
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, #
# MA 02110-1301, USA. #
#####################################################################
import logging
import math
import os
import numpy as np
import OpenGL.GL as gl
from fofix.core import cmgl
from fofix.core.Image import draw3Dtex
from fofix.core.Mesh import Mesh
from fofix.core.Shader import shaders
from fofix.game.song import FREESTYLE_MARKING_NOTE, MarkerNote, Note, Tempo
log = logging.getLogger(__name__)
class Instrument(object):
def __init__(self, engine, playerObj, scene, player=0):
self.engine = engine
self.scene = scene
self.song = self.scene.song
self.starPowerDecreaseDivisor = 200.0 / self.engine.audioSpeedFactor
self.bigRockEndingMarkerSeen = False
self.isStarPhrase = False
self.finalStarSeen = False
self.time = 0.0
self.pickStartPos = 0
self.leftyMode = False
self.drumFlip = False
self.freestyleActive = False
self.drumFillsActive = False
self.incomingNeckMode = self.engine.config.get("game", "incoming_neck_mode")
self.guitarSoloNeckMode = self.engine.config.get("game", "guitar_solo_neck")
self.bigRockEndings = self.engine.config.get("game", "big_rock_endings")
# For Animated notes
self.noteSpinFrames = 16
self.Animspeed = 30 # Lower value = Faster animations
self.indexCount = 0
self.noteSpinFrameIndex = 0
# BRE scoring variables
self.freestyleEnabled = False
self.freestyleStart = 0
self.freestyleFirstHit = 0
self.freestyleLength = 0
self.freestyleLastHit = 0
self.freestyleBonusFret = -2
self.freestyleLastFretHitTime = [0] * 5  # mutable list; per-fret slots are assigned into later
self.freestyleBaseScore = 750
self.freestylePeriod = 1000
self.freestylePercent = 50
self.freestyleReady = False
self.freestyleOffset = 5
self.freestyleSP = False
# empty variables for class compatibility
self.totalPhrases = 0
self.accThresholdWorstLate = 0
self.accThresholdVeryLate = 0
self.accThresholdLate = 0
self.accThresholdSlightlyLate = 0
self.accThresholdExcellentLate = 0
self.accThresholdPerfect = 0
self.accThresholdExcellentEarly = 0
self.accThresholdSlightlyEarly = 0
self.accThresholdEarly = 0
self.accThresholdVeryEarly = 0
self.tempoBpm = 120 # default is NEEDED here...
self.beatsPerBoard = 5.0
self.boardWidth = self.engine.theme.neckWidth
self.boardLength = self.engine.theme.neckLength
self.beatsPerUnit = self.beatsPerBoard / self.boardLength
self.fretColors = self.engine.theme.noteColors
self.spColor = self.engine.theme.spNoteColor
self.useFretColors = self.engine.theme.use_fret_colors
self.powerActiveColorToggle = self.engine.theme.powerActiveColorToggle
self.powerGainColorToggle = self.engine.theme.powerGainColorToggle
if self.engine.theme.killNoteColor != "frets":
kC = self.engine.theme.killNoteColor
self.killColor = [kC, kC, kC, kC, kC]
else:
self.killColor = self.fretColors
self.playedNotes = []
self.missedNotes = []
self.useMidiSoloMarkers = False
self.canGuitarSolo = False
self.guitarSolo = False
self.sameNoteHopoString = False
self.hopoProblemNoteNum = -1
self.currentGuitarSoloHitNotes = 0
self.cappedScoreMult = 0
self.battleTarget = 0
self.currentBpm = 120.0 # need a default 120BPM to be set in case a custom song has no tempo events.
self.currentPeriod = 60000.0 / self.currentBpm
self.targetBpm = self.currentBpm
self.targetPeriod = 60000.0 / self.targetBpm
self.lastBpmChange = -1.0
self.baseBeat = 0.0
self.camAngle = 0.0 # set from guitarScene
self.indexFps = self.engine.config.get("video", "fps")
self.Animspeed = 30 # Lower value = Faster animations
# For Animated Starnotes
self.indexCount = 0
# to keep track of pause status here as well
self.paused = False
self.spEnabled = True
self.starPower = 0
self.starPowerGained = False
self.spNote = False
self.starpowerMode = self.engine.config.get("game", "starpower_mode")
self.killPoints = False
# get difficulty
self.difficulty = playerObj.getDifficultyInt()
self.controlType = playerObj.controlType
self.scoreMultiplier = 1
# I do not understand fully how the handicap scorecard works at the moment, nor do I have the time to figure it out.
# so for now, I'm just writing some extra code here for the early hitwindow size handicap.
self.earlyHitWindowSizeFactor = 0.5
self.hitw = self.engine.config.get(
"game", "note_hit_window"
) # this should be global, not retrieved every BPM change.
if self.hitw == 0:
self.hitw = 2.3
elif self.hitw == 1:
self.hitw = 1.9
elif self.hitw == 2:
self.hitw = 1.2
elif self.hitw == 3:
self.hitw = 1.0
elif self.hitw == 4:
self.hitw = 0.70
else:
self.hitw = 1.2
# need a separate variable to track whether or not hopos are actually active
self.wasLastNoteHopod = False
self.hopoLast = -1
self.hopoColor = (0, 0.5, 0.5)
self.player = player
self.hit = [False, False, False, False, False]
self.freestyleHit = [False, False, False, False, False]
# this should be retrieved once at init, not repeatedly in-game whenever tails are rendered.
self.notedisappear = self.engine.config.get("game", "notedisappear")
self.fretsUnderNotes = self.engine.config.get("game", "frets_under_notes")
self.staticStrings = self.engine.config.get("performance", "static_strings")
self.muteSustainReleases = self.engine.config.get("game", "sustain_muting")
self.twoChord = 0
self.twoChordApply = False
self.hopoActive = 0
self.LastStrumWasChord = False
self.vbpmLogicType = self.engine.config.get("debug", "use_new_vbpm_beta")
# Get theme
self.theme = self.engine.data.theme
self.spRefillMode = self.engine.config.get("game", "sp_notes_while_active")
self.hitglow_color = self.engine.config.get(
"video", "hitglow_color"
) # this should be global, not retrieved every fret render.
# check if BRE enabled
if self.bigRockEndings in (1, 2):
self.freestyleEnabled = True
self.nstype = self.engine.config.get("game", "nstype") # neck style
self.twoDnote = self.engine.theme.twoDnote # note style (2D or 3D)
self.twoDkeys = self.engine.theme.twoDkeys # key style
self.threeDspin = (
self.engine.theme.threeDspin
) # 3d notes spin when they are star power notes
self.noterotate = self.engine.config.get(
"coffee", "noterotate"
) # adjust notes for if they were designed for FoF 1.1 or 1.2
self.billboardNote = (
self.engine.theme.billboardNote
) # 3D notes follow the angle of the camera
# fixing neck speed
if self.nstype < 3:
# not constant mode
self.speed = self.engine.config.get("coffee", "neckSpeed") * 0.01
else:
# constant mode
self.speed = 410 - self.engine.config.get(
"coffee", "neckSpeed"
) # invert this value
self.boardScaleX = self.boardWidth / 3.0
self.boardScaleY = self.boardLength / 9.0
self.fretPress = self.engine.theme.fret_press
self.coOpFailed = False
self.coOpRestart = False
self.coOpRescueTime = 0.0
self.setBPM(self.currentBpm)
if self.starpowerMode == 1:
self.starNotesSet = False
else:
self.starNotesSet = True
self.maxStars = []
self.starNotes = []
self.totalNotes = 0
self.keys = []
self.actions = []
self.soloKey = []
self.disableVBPM = self.engine.config.get("game", "disable_vbpm")
self.disableFretSFX = self.engine.config.get("video", "disable_fretsfx")
self.disableFlameSFX = self.engine.config.get("video", "disable_flamesfx")
self.meshColor = self.engine.theme.meshColor
self.hopoColor = self.engine.theme.hopoColor
self.spotColor = self.engine.theme.spotColor
self.keyColor = self.engine.theme.keyColor
self.key2Color = self.engine.theme.key2Color
self.hitFlameYPos = self.engine.theme.hitFlamePos[0]
self.hitFlameZPos = self.engine.theme.hitFlamePos[1]
self.holdFlameYPos = self.engine.theme.holdFlamePos[0]
self.holdFlameZPos = self.engine.theme.holdFlamePos[1]
self.hitFlameSize = self.engine.theme.hitFlameSize
self.holdFlameSize = self.engine.theme.holdFlameSize
self.hitFlameBlackRemove = self.engine.theme.hitFlameBlackRemove
self.hitGlowsBlackRemove = self.engine.theme.hitGlowsBlackRemove
self.hitGlowOffset = self.engine.theme.hitGlowOffset
self.hitFlameOffset = self.engine.theme.hitFlameOffset
self.drumHitFlameOffset = self.engine.theme.drumHitFlameOffset
self.hitGlowsRotation = self.engine.theme.hitFlameRotation
self.hitFlameRotation = self.engine.theme.hitFlameRotation
# all flames/glows use their corresponding theme color; otherwise they fall back to the fret colors
if self.engine.theme.flamesColor != "frets":
fC = self.engine.theme.flamesColor
self.flameColors = [fC, fC, fC, fC, fC]
else:
self.flameColors = self.fretColors
if self.engine.theme.hitGlowColor != "frets":
hGC = self.engine.theme.hitGlowColor
self.hitGlowColors = [hGC, hGC, hGC, hGC, hGC]
else:
self.hitGlowColors = self.fretColors
if self.engine.theme.glowColor != "frets":
gC = self.engine.theme.glowColor
self.glowColor = [gC, gC, gC, gC, gC]
else:
self.glowColor = self.fretColors
self.twoChordMax = False
self.canGuitarSolo = False
self.guitarSolo = False
self.fretboardHop = 0.00
self.scoreMultiplier = 1
self.coOpFailed = False
self.coOpRestart = False
self.starPowerActive = False
# The tail's base arrays are modified over time
self.tail_tex = np.array(
[[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], dtype=np.float32
)
self.tail_col = np.array(
[[0, 0, 0, 1], [0, 0, 0, 1], [0, 0, 0, 1], [0, 0, 0, 1]], dtype=np.float32
)
self.tail_vtx = np.array(
[[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=np.float32
)
def checkPath(self, subdirectory, s_file, lastResort=False):
"""
Check for a "drum" or "bass" folder inside the given theme
subdirectory so instrument-specific images can replace the defaults.
:param subdirectory: the folder in the theme to search;
extended with "drum" or "bass" for those instruments
:param s_file: the file to search for
:param lastResort: if the file is not found in the default path either,
fall back to the file of the same name in the data folder
:return: the path of the file that was found.
"""
# Get theme
themename = self.engine.data.themeLabel
defaultpath = os.path.join("themes", themename, subdirectory)
themepath = os.path.join("themes", themename, subdirectory)
if self.isDrum:
themepath = os.path.join(themepath, "drum")
elif self.isBassGuitar:
themepath = os.path.join(themepath, "bass")
if self.engine.fileExists(os.path.join(themepath, s_file)):
return os.path.join(themepath, s_file)
else:
if lastResort and not self.engine.fileExists(
os.path.join(defaultpath, s_file)
):
return s_file
log.warning("Image not found: " + os.path.join(themepath, s_file))
return os.path.join(defaultpath, s_file)
def loadFlames(self):
engine = self.engine
themename = self.engine.data.themeLabel
get = lambda s_file: self.checkPath("flames", s_file)
self.HCount = 0
self.HFrameLimit = self.engine.theme.HoldFlameFrameLimit
self.HFrameLimit2 = self.engine.theme.HitFlameFrameLimit
self.HCountAni = False
if self.disableFretSFX:
self.glowDrawing = None
else:
engine.loadImgDrawing(self, "glowDrawing", get("glow.png"))
if not self.glowDrawing:
engine.loadImgDrawing(self, "glowDrawing", "glow.png")
if self.disableFlameSFX:
self.hitglow2Drawing = None
self.hitglowDrawing = None
self.hitglowAnim = None
self.hitflamesAnim = None
self.hitflames2Drawing = None
self.hitflames1Drawing = None
else:
engine.loadImgDrawing(self, "hitflames1Drawing", get("hitflames1.png"))
engine.loadImgDrawing(self, "hitflames2Drawing", get("hitflames2.png"))
engine.loadImgDrawing(self, "hitflamesAnim", get("hitflamesanimation.png"))
engine.loadImgDrawing(
self, "powerHitflamesAnim", get("powerhitflamesanimation.png")
)
engine.loadImgDrawing(self, "hitglowAnim", get("hitglowanimation.png"))
engine.loadImgDrawing(self, "hitglowDrawing", get("hitglow.png"))
engine.loadImgDrawing(self, "hitglow2Drawing", get("hitglow2.png"))
engine.loadImgDrawing(
self,
"hitlightning",
os.path.join("themes", themename, "lightning.png"),
textureSize=(128, 128),
)
def loadNotes(self):
engine = self.engine
get = lambda s_file: self.checkPath("notes", s_file)
self.noteSpin = self.engine.config.get("performance", "animated_notes")
self.spActTex = None
self.noteTex = None
self.noteButtons = None
if self.twoDnote:
if self.noteSpin:
self.starSpinFrames = 16
engine.loadImgDrawing(
self, "noteAnimatedNormal", get("animated_normal.png")
)
engine.loadImgDrawing(
self, "noteAnimatedHOPO", get("animated_hopo.png")
)
engine.loadImgDrawing(
self, "noteAnimatedPower", get("animated_power.png")
)
engine.loadImgDrawing(
self, "noteAnimatedPowerHOPO", get("animated_power_hopo.png")
)
engine.loadImgDrawing(
self, "noteAnimatedPowerActive", get("animated_power_active.png")
)
engine.loadImgDrawing(
self,
"noteAnimatedPowerActiveHOPO",
get("animated_power_active_hopo.png"),
)
engine.loadImgDrawing(self, "noteButtons", get("notes.png"))
size = (
self.boardWidth / self.strings / 2,
self.boardWidth / self.strings / 2,
)
self.noteVtx = np.array(
[
[-size[0], 0.0, size[1]],
[size[0], 0.0, size[1]],
[-size[0], 0.0, -size[1]],
[size[0], 0.0, -size[1]],
],
dtype=np.float32,
)
self.noteTexCoord = [
[
np.array(
[
[i / float(self.strings), s / 6.0],
[(i + 1) / float(self.strings), s / 6.0],
[i / float(self.strings), (s + 1) / 6.0],
[(i + 1) / float(self.strings), (s + 1) / 6.0],
],
dtype=np.float32,
)
for i in range(self.strings)
]
for s in range(6)
]
self.animatedNoteTexCoord = [
[
np.array(
[
[i / float(self.strings), s / float(self.noteSpinFrames)],
[
(i + 1) / float(self.strings),
s / float(self.noteSpinFrames),
],
[
i / float(self.strings),
(s + 1) / float(self.noteSpinFrames),
],
[
(i + 1) / float(self.strings),
(s + 1) / float(self.noteSpinFrames),
],
],
dtype=np.float32,
)
for i in range(self.strings)
]
for s in range(self.noteSpinFrames)
]
else:
defaultNote = False
# can't use IOError for fallback logic for a Mesh() call...
if self.engine.fileExists(get("note.dae")):
# look in the notes folder for files
self.engine.resource.load(
self,
"noteMesh",
lambda: Mesh(engine.resource.fileName(get("note.dae"))),
)
else:
# default to files in data folder
self.engine.resource.load(
self, "noteMesh", lambda: Mesh(engine.resource.fileName("note.dae"))
)
defaultNote = True
if self.engine.fileExists(get("star.dae")):
# look in the notes folder for files
self.engine.resource.load(
self,
"starMesh",
lambda: Mesh(self.engine.resource.fileName(get("star.dae"))),
)
else:
# No mesh for star notes
self.starMesh = None
if defaultNote:
self.notetex = False
else:
self.notetex = True
self.startex = True
self.staratex = True
for i in range(5):
if not engine.loadImgDrawing(
self,
"notetex" + chr(97 + i),
get("notetex_" + chr(97 + i) + ".png"),
):
self.notetex = False
break
for i in range(5):
if not self.engine.loadImgDrawing(
self,
"startex" + chr(97 + i),
get("startex_" + chr(97 + i) + ".png"),
):
self.startex = False
break
for i in range(5):
if not self.engine.loadImgDrawing(
self,
"staratex" + chr(97 + i),
get("staratex_" + chr(97 + i) + ".png"),
):
self.staratex = False
break
def loadFrets(self):
engine = self.engine
get = lambda s_file: self.checkPath("frets", s_file)
if self.twoDkeys:
engine.loadImgDrawing(self, "fretButtons", get("fretbuttons.png"))
else:
defaultKey = False
# can't use IOError for fallback logic for a Mesh() call...
if self.engine.fileExists(get("key.dae")):
# look in the frets folder for files
engine.resource.load(
self,
"keyMesh",
lambda: Mesh(engine.resource.fileName(get("key.dae"))),
)
else:
# default to files in data folder
engine.resource.load(
self, "keyMesh", lambda: Mesh(engine.resource.fileName("key.dae"))
)
defaultKey = True
if defaultKey:
self.keytex = False
else:
self.keytex = True
for i in range(5):
if not engine.loadImgDrawing(
self,
"keytex" + chr(97 + i),
get("keytex_" + chr(97 + i) + ".png"),
):
self.keytex = False
break
def loadTails(self):
engine = self.engine
get = lambda s_file: self.checkPath("tails", s_file)
getD = lambda s_file: self.checkPath(
"tails", s_file, True
) # falls back to the data folder
# MFH - freestyle tails (for drum fills & BREs)
engine.loadImgDrawing(
self, "freestyle1", getD("freestyletail1.png"), textureSize=(128, 128)
)
engine.loadImgDrawing(
self, "freestyle2", getD("freestyletail2.png"), textureSize=(128, 128)
)
if self.tailsEnabled:
self.simpleTails = False
for i in range(0, 7):
if not engine.loadImgDrawing(
self,
"tail" + str(i),
get("tail" + str(i) + ".png"),
textureSize=(128, 128),
):
self.simpleTails = True
break
if not engine.loadImgDrawing(
self,
"taile" + str(i),
get("taile" + str(i) + ".png"),
textureSize=(128, 128),
):
self.simpleTails = True
break
if not engine.loadImgDrawing(
self,
"btail" + str(i),
get("btail" + str(i) + ".png"),
textureSize=(128, 128),
):
self.simpleTails = True
break
if not engine.loadImgDrawing(
self,
"btaile" + str(i),
get("btaile" + str(i) + ".png"),
textureSize=(128, 128),
):
self.simpleTails = True
break
if self.simpleTails:
log.debug("Simple tails used; complex tail loading error...")
engine.loadImgDrawing(
self, "tail1", getD("tail1.png"), textureSize=(128, 128)
)
engine.loadImgDrawing(
self, "tail2", getD("tail2.png"), textureSize=(128, 128)
)
engine.loadImgDrawing(
self, "bigTail1", getD("bigtail1.png"), textureSize=(128, 128)
)
engine.loadImgDrawing(
self, "bigTail2", getD("bigtail2.png"), textureSize=(128, 128)
)
engine.loadImgDrawing(
self, "kill1", getD("kill1.png"), textureSize=(128, 128)
)
engine.loadImgDrawing(
self, "kill2", getD("kill2.png"), textureSize=(128, 128)
)
else:
self.tail1 = None
self.tail2 = None
self.bigTail1 = None
self.bigTail2 = None
self.kill1 = None
self.kill2 = None
def loadImages(self):
self.loadFrets()
self.loadNotes()
self.loadTails()
self.loadFlames()
def setMultiplier(self, multiplier):
self.scoreMultiplier = multiplier
self.neck.scoreMultiplier = multiplier
def endPick(self, pos):
if not self.isDrum:
for time, note in self.playedNotes:
if time + note.length > pos + self.noteReleaseMargin:
self.playedNotes = []
return False
self.playedNotes = []
return True
def setBPM(self, bpm):
if bpm > 200:
bpm = 200
# Filter out unnecessary BPM settings (when currentBPM is already set!)
self.currentBpm = bpm # update current BPM as well
# Neck speed determination:
if self.nstype == 0:
# BPM mode
self.neckSpeed = (340 - bpm) / self.speed
elif self.nstype == 1:
# Difficulty mode
if self.difficulty == 0:
# expert
self.neckSpeed = 220 / self.speed
elif self.difficulty == 1:
self.neckSpeed = 250 / self.speed
elif self.difficulty == 2:
self.neckSpeed = 280 / self.speed
else:
# easy
self.neckSpeed = 300 / self.speed
elif self.nstype == 2:
# BPM & Diff mode
if self.difficulty == 0:
# expert
self.neckSpeed = (226 - (bpm / 10)) / self.speed
elif self.difficulty == 1:
self.neckSpeed = (256 - (bpm / 10)) / self.speed
elif self.difficulty == 2:
self.neckSpeed = (286 - (bpm / 10)) / self.speed
else:
# easy
self.neckSpeed = (306 - (bpm / 10)) / self.speed
else:
# Percentage mode - pre-calculated
self.neckSpeed = self.speed
self.earlyMargin = 250 - bpm / 5 - 70 * self.hitw
self.lateMargin = 250 - bpm / 5 - 70 * self.hitw
if self.muteSustainReleases == 4:
# tight
self.noteReleaseMargin = 200 - bpm / 5 - 70 * 1.2
elif self.muteSustainReleases == 3:
# standard
self.noteReleaseMargin = 200 - bpm / 5 - 70 * 1.0
elif self.muteSustainReleases == 2:
# wide
self.noteReleaseMargin = 200 - bpm / 5 - 70 * 0.7
else:
# ultra-wide
self.noteReleaseMargin = 200 - bpm / 5 - 70 * 0.5
# TODO - only calculate the below values if the realtime hit accuracy feedback display is enabled - otherwise this is a waste!
self.accThresholdWorstLate = 0 - self.lateMargin
self.accThresholdVeryLate = 0 - (3 * self.lateMargin / 4)
self.accThresholdLate = 0 - (2 * self.lateMargin / 4)
self.accThresholdSlightlyLate = 0 - (1 * self.lateMargin / 4)
self.accThresholdExcellentLate = -1.0
self.accThresholdPerfect = 1.0
self.accThresholdExcellentEarly = 1 * self.lateMargin / 4
self.accThresholdSlightlyEarly = 2 * self.lateMargin / 4
self.accThresholdEarly = 3 * self.lateMargin / 4
self.accThresholdVeryEarly = 4 * self.lateMargin / 4
def getRequiredNotes(self, song, pos):
track = song.track[self.player]
notes = [
(time, event)
for time, event in track.getEvents(
pos - self.lateMargin, pos + self.earlyMargin
)
if isinstance(event, Note)
and not (event.hopod or event.played or event.skipped)
and (time >= (pos - self.lateMargin))
and (time <= (pos + self.earlyMargin))
]
return sorted(notes, key=lambda x: x[0])
def getMissedNotes(self, song, pos, catchup=False):
if not song or not song.readyToGo:
return
m1 = self.lateMargin
m2 = self.lateMargin * 2
track = song.track[self.player]
notes = [
(time, event)
for time, event in track.getEvents(pos - m2, pos - m1)
if isinstance(event, Note)
and time >= (pos - m2)
and time <= (pos - m1)
and not event.played
and not event.hopod
and not event.skipped
]
if catchup:
for time, event in notes:
event.skipped = True
return sorted(notes, key=lambda x: x[0])
def getRequiredNotesForRender(self, song, pos):
track = song.track[self.player]
notes = [
(time, event)
for time, event in track.getEvents(
pos - self.currentPeriod * 2,
pos + self.currentPeriod * self.beatsPerBoard,
)
]
return notes
def coOpRescue(self, pos):
self.coOpRestart = True # initializes Restart Timer
self.coOpRescueTime = pos
self.starPower = 0
log.debug("Rescued at " + str(pos))
def isKillswitchPossible(self):
if self.isDrum:
return False
return any(self.hit)
def renderHitTrails(self, controls):
"""Render the glow flames shown at the frets while notes are held."""
if self.hitGlowColors[0][0] == -1 or self.disableFlameSFX:
return
if self.HCountAni:
for n in range(self.strings):
f = self.fretWeight[n]
if f and (
controls.getState(self.actions[0])
or controls.getState(self.actions[1])
):
f += 0.25
w = self.boardWidth / self.strings
x = (self.strings / 2 - n) * w
y = f / 6
y -= self.holdFlameYPos
flameSize = self.holdFlameSize
alphaEnabled = self.hitGlowsBlackRemove
if self.fretActivity[n]:
ms = math.sin(self.time) * 0.25 + 1
ff = self.fretActivity[n] + 1.2
vtx = flameSize * ff
s = ff / 6
if self.hitFlameYPos != 0:
y = s - self.holdFlameYPos
else:
y = 0
if self.hitFlameZPos != 0:
z = s - self.holdFlameZPos
else:
z = 0
y -= self.hitGlowOffset[n]
color = self.hitGlowColors[n]
color = tuple(
[color[ifc] + 0.38 for ifc in range(3)]
) # to make sure the final color looks correct on any color set
# Animated hitflames
if self.hitglowAnim:
self.HCount += 1
if self.HCount > self.Animspeed - 1:
self.HCount = 0
HIndex = (
self.HCount * self.HFrameLimit
- (self.HCount * self.HFrameLimit) % self.Animspeed
) / self.Animspeed
if HIndex >= self.HFrameLimit - 1:
HIndex = 0
texX = (
HIndex * (1.0 / self.HFrameLimit),
HIndex * (1.0 / self.HFrameLimit)
+ (1.0 / self.HFrameLimit),
)
draw3Dtex(
self.hitglowAnim,
coord=(x, y, z),
rot=self.hitGlowsRotation,
scale=(2.4, 1, 3.3),
vertex=(-vtx, -vtx, vtx, vtx),
texcoord=(texX[0], 0.0, texX[1], 1.0),
multiples=True,
alpha=alphaEnabled,
color=(1, 1, 1),
)
if self.hitglowDrawing:
flameColorMod = (1.19, 1.97, 10.59)
flamecol = tuple(
[color[ifc] * flameColorMod[ifc] for ifc in range(3)]
)
if self.starPowerActive and self.powerActiveColorToggle:
flamecol = self.spColor
elif self.spNote and self.powerGainColorToggle:
flamecol = self.spColor
draw3Dtex(
self.hitglowDrawing,
coord=(x, y + 0.15, z),
rot=self.hitGlowsRotation,
scale=(
0.5 + 0.6 * ms * ff,
1.5 + 0.6 * ms * ff,
1 + 0.6 * ms * ff,
),
vertex=(-vtx, -vtx, vtx, vtx),
texcoord=(0.0, 0.0, 1.0, 1.0),
multiples=True,
alpha=alphaEnabled,
color=flamecol,
)
if self.hitglow2Drawing:
ff += 0.3
vtx = flameSize * ff
flameColorMod = (1.19, 1.78, 12.22)
flamecol = tuple(
[color[ifc] * flameColorMod[ifc] for ifc in range(3)]
)
if self.starPowerActive and self.powerActiveColorToggle:
flamecol = self.spColor
elif self.spNote and self.powerGainColorToggle:
flamecol = self.spColor
draw3Dtex(
self.hitglow2Drawing,
coord=(x, y, z),
rot=self.hitGlowsRotation,
scale=(
0.40 + 0.6 * ms * ff,
1.5 + 0.6 * ms * ff,
1 + 0.6 * ms * ff,
),
vertex=(-vtx, -vtx, vtx, vtx),
texcoord=(0.0, 0.0, 1.0, 1.0),
multiples=True,
alpha=alphaEnabled,
color=flamecol,
)
def renderAnimatedFlames(self, song, pos):
"""Render the animated flames that appear when a note is struck."""
if not song or self.flameColors[0][0] == -1:
return
flameSize = self.hitFlameSize
w = self.boardWidth / self.strings
renderedNotes = self.getRequiredNotesForRender(song, pos)
alphaEnabled = self.hitFlameBlackRemove
for time, event in renderedNotes:
if not isinstance(event, Note):
continue
if event.played or event.hopod:
if not self.disableFlameSFX:
if self.isDrum:
if event.number == 0:
# make the bass drum not render a flame
continue
x = (self.strings / 2 + 0.5 - event.number) * w
else:
x = (self.strings / 2 - event.number) * w
ff = 1 + 0.25
s = ff / 6
if self.hitFlameYPos != 0:
y = s - self.hitFlameYPos
else:
y = 0
if self.hitFlameZPos != 0:
z = s - self.hitFlameZPos
else:
z = 0
if self.isDrum:
y -= self.drumHitFlameOffset[event.number]
else:
y -= self.hitFlameOffset[event.number]
# y += .665 # XXX: see if it's useless
ff += 1.5 # ff first time is 2.75 after this
vtx = flameSize * ff
if self.hitflamesAnim:
event.HCount2 += 1
self.HCountAni = False
if event.HCount2 >= 5.0:
self.HCountAni = True
if event.HCount2 < self.HFrameLimit2:
HIndex = (
event.HCount2 * self.HFrameLimit2
- (event.HCount2 * self.HFrameLimit2)
% self.HFrameLimit2
) / self.HFrameLimit2
texX = (
HIndex * (1.0 / self.HFrameLimit2),
HIndex * (1.0 / self.HFrameLimit2)
+ (1.0 / self.HFrameLimit2),
)
if self.powerHitflamesAnim and self.starPowerActive:
texture = self.powerHitflamesAnim
else:
texture = self.hitflamesAnim
draw3Dtex(
texture,
coord=(x, y + 0.665, z),
rot=self.hitFlameRotation,
scale=(1.6, 1.6, 4.9),
vertex=(-vtx, -vtx, vtx, vtx),
texcoord=(texX[0], 0.0, texX[1], 1.0),
multiples=True,
alpha=alphaEnabled,
color=(1, 1, 1),
)
def renderFlames(self, song, pos):
"""Renders the flames that appear when a note is struck"""
if not song or self.flameColors[0][0] == -1:
return
w = self.boardWidth / self.strings
flameSize = self.hitFlameSize
flameLimit = 10.0
flameLimitHalf = round(flameLimit / 2.0)
renderedNotes = self.getRequiredNotesForRender(song, pos)
alphaEnabled = self.hitFlameBlackRemove
for time, event in renderedNotes:
if not isinstance(event, Note):
continue
if (event.played or event.hopod) and event.flameCount < flameLimit:
if not self.disableFlameSFX:
if self.isDrum:
if event.number == 0:
continue
flameColor = self.flameColors[event.number]
x = (self.strings / 2 + 0.5 - event.number) * w
else:
flameColor = self.flameColors[event.number]
x = (self.strings / 2 - event.number) * w
if self.starPowerActive and self.powerActiveColorToggle:
flamecol = self.spColor
elif event.star and self.powerGainColorToggle:
flamecol = self.spColor
ms = math.sin(self.time) * 0.25 + 1
xlightning = (self.strings / 2 - event.number) * 2.2 * w
ff = 1 + 0.25
s = ff / 6
if self.hitFlameYPos != 0:
y = s - self.hitFlameYPos
else:
y = 0
if self.hitFlameZPos != 0:
z = s - self.hitFlameZPos
else:
z = 0
if self.isDrum:
y -= self.drumHitFlameOffset[event.number]
else:
y -= self.hitFlameOffset[event.number]
# y += .665 # XXX: see if it's useless
ff += 1.5 # ff first time is 2.75 after this
vtx = flameSize * ff
if not self.hitflamesAnim:
self.HCountAni = True
if event.flameCount < flameLimitHalf and self.hitflames2Drawing:
draw3Dtex(
self.hitflames2Drawing,
coord=(x, y + 0.20, z),
rot=self.hitFlameRotation,
scale=(
0.25 + 0.6 * ms * ff,
event.flameCount / 6.0 + 0.6 * ms * ff,
event.flameCount / 6.0 + 0.6 * ms * ff,
),
vertex=(-vtx, -vtx, vtx, vtx),
texcoord=(0.0, 0.0, 1.0, 1.0),
multiples=True,
alpha=alphaEnabled,
color=flameColor,
)
for i in range(3):
draw3Dtex(
self.hitflames2Drawing,
coord=(x - 0.005, y + 0.255, z),
rot=self.hitFlameRotation,
scale=(
0.30 + i * 0.05 + 0.6 * ms * ff,
event.flameCount / (5.5 - i * 0.4) + 0.6 * ms * ff,
event.flameCount / (5.5 - i * 0.4) + 0.6 * ms * ff,
),
vertex=(-vtx, -vtx, vtx, vtx),
texcoord=(0.0, 0.0, 1.0, 1.0),
multiples=True,
alpha=alphaEnabled,
color=flameColor,
)
flameColor = tuple(
[flameColor[ifc] + 0.38 for ifc in range(3)]
) # to make sure the final color looks correct on any color set
flameColorMod = 0.1 * (flameLimit - event.flameCount)
flamecol = tuple([ifc * flameColorMod for ifc in flameColor])
scaleChange = (3.0, 2.5, 2.0, 1.7)
yOffset = (0.35, 0.405, 0.355, 0.355)
scaleMod = 0.6 * ms * ff
for step in range(4):
if step == 0:
yzscaleMod = event.flameCount / scaleChange[step]
else:
yzscaleMod = (event.flameCount + 1) / scaleChange[step]
if self.hitflames1Drawing:
draw3Dtex(
self.hitflames1Drawing,
coord=(x - 0.005, y + yOffset[step], z),
rot=self.hitFlameRotation,
scale=(
0.25 + step * 0.05 + scaleMod,
yzscaleMod + scaleMod,
yzscaleMod + scaleMod,
),
vertex=(-vtx, -vtx, vtx, vtx),
texcoord=(0.0, 0.0, 1.0, 1.0),
multiples=True,
alpha=alphaEnabled,
color=flamecol,
)
# draw lightning in GH themes on SP gain
if (
step == 0
and event.finalStar
and self.spEnabled
and self.hitlightning
):
draw3Dtex(
self.hitlightning,
coord=(xlightning, ff / 6, 3.3),
rot=(90, 1, 0, 0),
scale=(
0.15 + 0.5 * ms * ff,
event.flameCount / 3.0 + 0.6 * ms * ff,
2,
),
vertex=(0.4, -2, 0.4, 2),
texcoord=(0.0, 0.0, 1.0, 1.0),
multiples=True,
alpha=True,
color=(1, 1, 1),
)
event.flameCount += 1
def render3DNote(self, texture, model, color, isTappable):
"""Render a 3D note mesh, either textured or with fixed colors."""
if self.billboardNote:
gl.glRotatef(self.camAngle + 90, 1, 0, 0)
if texture:
gl.glColor3f(1, 1, 1)
gl.glEnable(gl.GL_TEXTURE_2D)
texture.texture.bind()
gl.glMatrixMode(gl.GL_TEXTURE)
gl.glScalef(1, -1, 1)
gl.glMatrixMode(gl.GL_MODELVIEW)
gl.glScalef(self.boardScaleX, self.boardScaleY, 1)
if isTappable:
mesh = "Mesh_001"
else:
mesh = "Mesh"
model.render(mesh)
if shaders.enable("notes"):
shaders.setVar("isTextured", True)
model.render(mesh)
shaders.disable()
gl.glMatrixMode(gl.GL_TEXTURE)
gl.glLoadIdentity()
gl.glMatrixMode(gl.GL_MODELVIEW)
gl.glDisable(gl.GL_TEXTURE_2D)
else:
# mesh = outer ring (black)
# mesh_001 = main note (key color)
# mesh_002 = top (spot or hopo if no mesh_003)
# mesh_003 = hopo bump (hopo color)
# fixed 3D note colours
gl.glColor4f(*color)
if shaders.enable("notes"):
shaders.setVar("isTextured", False)
model.render("Mesh_001")
shaders.disable()
gl.glColor3f(self.spotColor[0], self.spotColor[1], self.spotColor[2])
if isTappable:
if self.hopoColor[0] == -2:
gl.glColor4f(*color)
else:
gl.glColor3f(
self.hopoColor[0], self.hopoColor[1], self.hopoColor[2]
)
if model.find("Mesh_003"):
model.render("Mesh_003")
gl.glColor3f(
self.spotColor[0], self.spotColor[1], self.spotColor[2]
)
model.render("Mesh_002")
gl.glColor3f(self.meshColor[0], self.meshColor[1], self.meshColor[2])
model.render("Mesh")
def renderNote(
self,
length,
sustain,
color,
tailOnly=False,
isTappable=False,
fret=0,
spNote=False,
isOpen=False,
spAct=False,
):
if tailOnly:
return
# this should be retrieved once at init, not repeatedly in-game whenever tails are rendered.
if self.twoDnote:
noteImage = self.noteButtons
tailOnly = True
y = 0
if spNote:
y += 2
elif self.starPowerActive:
y += 4
if isTappable:
y += 1
if self.noteSpin:
texCoord = self.animatedNoteTexCoord[self.noteSpinFrameIndex][fret]
if isTappable:
if spNote:
noteImage = self.noteAnimatedPowerHOPO
elif self.starPowerActive:
noteImage = self.noteAnimatedPowerActiveHOPO
else:
noteImage = self.noteAnimatedHOPO
else:
if spNote:
noteImage = self.noteAnimatedPower
elif self.starPowerActive:
noteImage = self.noteAnimatedPowerActive
else:
noteImage = self.noteAnimatedNormal
if not noteImage:
noteImage = self.noteButtons
texCoord = self.noteTexCoord[y][fret]
else:
noteImage = self.noteButtons
texCoord = self.noteTexCoord[y][fret]
draw3Dtex(
noteImage,
vertex=self.noteVtx,
texcoord=texCoord,
scale=(1, 1, 1),
rot=(self.camAngle, 1, 0, 0),
multiples=False,
color=color,
)
else:
# 3d Notes
shaders.setVar("Material", color, "notes")
self.notepos = self.engine.theme.notepos
self.noterot = self.engine.theme.noterot
if spNote and self.starMesh is not None:
meshObj = self.starMesh
else:
meshObj = self.noteMesh
gl.glPushMatrix()
gl.glEnable(gl.GL_DEPTH_TEST)
gl.glDepthMask(1)
gl.glShadeModel(gl.GL_SMOOTH)
if spNote and self.threeDspin:
gl.glRotate(90 + self.time / 3, 0, 1, 0)
elif not spNote and self.noterotate:
gl.glRotatef(90, 0, 1, 0)
gl.glRotatef(-90, 1, 0, 0)
if fret >= 0 and fret <= 4:
gl.glRotate(self.noterot[fret], 0, 0, 1)
gl.glTranslatef(0, self.notepos[fret], 0)
if self.notetex:
if self.startex and spNote:
texture = getattr(self, "startex" + chr(97 + fret))
elif self.staratex and self.starPowerActive:
texture = getattr(self, "staratex" + chr(97 + fret))
else:
texture = getattr(self, "notetex" + chr(97 + fret))
else:
texture = None
self.render3DNote(texture, meshObj, color, isTappable)
gl.glDepthMask(0)
gl.glPopMatrix()
gl.glDisable(gl.GL_DEPTH_TEST)
def renderNotes(self, visibility, song, pos):
if not song:
return
if not song.readyToGo:
return
self.bigMax = 0
# Update dynamic period
self.currentPeriod = self.neckSpeed
self.targetPeriod = self.neckSpeed
self.killPoints = False
w = self.boardWidth / self.strings
self.starNotesInView = False
self.openStarNotesInView = False
renderedNotes = reversed(self.getRequiredNotesForRender(song, pos))
for time, event in renderedNotes:
if isinstance(event, Tempo):
self.tempoBpm = event.bpm
if self.lastBpmChange > 0 and self.disableVBPM:
continue
if (
pos - time > self.currentPeriod or self.lastBpmChange < 0
) and time > self.lastBpmChange:
self.baseBeat += (time - self.lastBpmChange) / self.currentPeriod
self.targetBpm = event.bpm
self.lastBpmChange = time
self.neck.lastBpmChange = time
self.neck.baseBeat = self.baseBeat
continue
if not isinstance(event, Note):
continue
if event.noteBpm == 0.0:
event.noteBpm = self.tempoBpm
if event.number == 0 and self.isDrum:
# skip all open notes
continue
if self.coOpFailed:
if self.coOpRestart:
if time - self.coOpRescueTime < (
self.currentPeriod * self.beatsPerBoard * 2
):
continue
elif (
self.coOpRescueTime
+ (self.currentPeriod * self.beatsPerBoard * 2)
< pos
):
self.coOpFailed = False
self.coOpRestart = False
log.debug("Turning off coOpFailed. Rescue successful.")
else:
# can't break out of the loop here; Tempo events may still follow.
continue
self.spNote = False
if self.isDrum:
x = (self.strings / 2 - 0.5 - (event.number - 1)) * w
isOpen = False
c = self.fretColors[event.number]
else:
x = (self.strings / 2 - (event.number)) * w
isOpen = False
c = self.fretColors[event.number]
if event.number == 4 and self.isDrum:
# swap note 0 and note 4 colors for drums
c = self.fretColors[0]
z = ((time - pos) / self.currentPeriod) / self.beatsPerUnit
z2 = ((time + event.length - pos) / self.currentPeriod) / self.beatsPerUnit
if z > self.boardLength * 0.8:
f = (self.boardLength - z) / (self.boardLength * 0.2)
elif z < 0:
f = min(1, max(0, 1 + z2))
else:
f = 1.0
# hide notes in BRE zone if BRE enabled
if self.freestyleEnabled:
if self.isDrum:
if self.drumFillsReady or self.freestyleReady:
if (
time > self.freestyleStart - self.freestyleOffset
and time
< self.freestyleStart
+ self.freestyleOffset
+ self.freestyleLength
):
z = -2.0
else:
if (
time > self.freestyleStart - self.freestyleOffset
and time
< self.freestyleStart
+ self.freestyleOffset
+ self.freestyleLength
):
z = -2.0
if self.twoDnote and not self.useFretColors:
color = (1, 1, 1, 1 * visibility * f)
else:
color = (
0.1 + 0.8 * c[0],
0.1 + 0.8 * c[1],
0.1 + 0.8 * c[2],
1 * visibility * f,
)
if self.isDrum:
length = 0
else:
if event.length > 120:
length = (
(event.length - 50) / self.currentPeriod / self.beatsPerUnit
)
else:
length = 0
tailOnly = False
# user setting for starpower refill / replenish notes
if self.starPowerActive:
if self.spRefillMode == 0:
# mode 0 = no starpower / overdrive refill notes
self.spEnabled = False
elif self.spRefillMode == 1 and self.theme != 2:
# mode 1 = overdrive refill notes in RB themes only
self.spEnabled = False
elif self.spRefillMode == 2 and song.midiStyle != 1:
# mode 2 = refill based on MIDI type
self.spEnabled = False
if event.star:
self.starNotesInView = True
if event.finalStar:
self.finalStarSeen = True
self.starNotesInView = True
if event.star and self.spEnabled:
self.spNote = True
if event.finalStar and self.spEnabled:
if event.played or event.hopod:
if event.flameCount < 1 and not self.starPowerGained:
if self.starPower < 50 and self.isDrum:
# not enough starpower to activate yet, kill existing drumfills
for dfEvent in self.drumFillEvents:
dfEvent.happened = True
log.debug("star power added")
if self.starPower < 100:
self.starPower += 25
if self.starPower > 100:
self.starPower = 100
self.overdriveFlashCount = (
0 # this triggers the oFlash strings & timer
)
self.starPowerGained = True
if event.tappable < 2 or self.isDrum:
isTappable = False
else:
isTappable = True
if event.played or event.hopod:
# if the note is hit
continue
elif z < 0:
# Notes past frets
                # if neither case below applies, notes keep going past the frets (self.notedisappear == 1)
if self.notedisappear == 0:
# Notes disappear
continue
elif self.notedisappear == 2:
# Notes turn red
color = (1, 0, 0, 1) # turn note red
if self.isDrum:
sustain = False
gl.glPushMatrix()
gl.glTranslatef(x, 0, z)
self.renderNote(
length,
sustain=sustain,
color=color,
tailOnly=tailOnly,
isTappable=isTappable,
fret=event.number,
spNote=self.spNote,
isOpen=isOpen,
)
gl.glPopMatrix()
else:
if z + length < -1.0:
continue
if event.length <= 120:
length = None
sustain = False
if event.length > (1.4 * (60000.0 / event.noteBpm) / 4):
sustain = True
gl.glPushMatrix()
gl.glTranslatef(x, 0, z)
if shaders.turnon:
shaders.setVar(
"note_position",
(x, (1.0 - visibility) ** (event.number + 1), z),
"notes",
)
self.renderNote(
length,
sustain=sustain,
color=color,
tailOnly=tailOnly,
isTappable=isTappable,
fret=event.number,
spNote=self.spNote,
)
gl.glPopMatrix()
# end FOR loop / note rendering loop
if (
(not self.openStarNotesInView)
and (not self.starNotesInView)
and self.finalStarSeen
):
self.spEnabled = True
self.isStarPhrase = False
self.finalStarSeen = False
def renderOpenNotes(self, visibility, song, pos):
if not song:
return
if not song.readyToGo:
return
self.bigMax = 0
self.currentPeriod = self.neckSpeed
self.targetPeriod = self.neckSpeed
self.killPoints = False
self.openStarNotesInView = False
renderedNotes = reversed(self.getRequiredNotesForRender(song, pos))
for time, event in renderedNotes:
if isinstance(event, Tempo):
self.tempoBpm = event.bpm
if self.lastBpmChange > 0 and self.disableVBPM:
continue
if (
pos - time > self.currentPeriod or self.lastBpmChange < 0
) and time > self.lastBpmChange:
self.baseBeat += (time - self.lastBpmChange) / self.currentPeriod
self.targetBpm = event.bpm
self.lastBpmChange = time
self.neck.lastBpmChange = time
self.neck.baseBeat = self.baseBeat
continue
if not isinstance(event, Note):
continue
if event.noteBpm == 0.0:
event.noteBpm = self.tempoBpm
if self.coOpFailed:
if self.coOpRestart:
if time - self.coOpRescueTime < (
self.currentPeriod * self.beatsPerBoard * 2
):
continue
elif (
self.coOpRescueTime
+ (self.currentPeriod * self.beatsPerBoard * 2)
< pos
):
self.coOpFailed = False
self.coOpRestart = False
log.debug("Turning off coOpFailed. Rescue successful.")
else:
# can't break. Tempo.
continue
if not event.number == 0:
# if Normal note exit
continue
isOpen = True
x = 0
c = self.openFretColor
z = ((time - pos) / self.currentPeriod) / self.beatsPerUnit
if z > self.boardLength * 0.8:
f = (self.boardLength - z) / (self.boardLength * 0.2)
else:
f = 1.0
# hide notes in BRE zone if BRE enabled
if self.freestyleEnabled:
if self.isDrum:
if self.drumFillsReady or self.freestyleReady:
if (
time > self.freestyleStart - self.freestyleOffset
and time
< self.freestyleStart
+ self.freestyleOffset
+ self.freestyleLength
):
z = -2.0
else:
if (
time > self.freestyleStart - self.freestyleOffset
and time
< self.freestyleStart
+ self.freestyleOffset
+ self.freestyleLength
):
z = -2.0
if self.twoDnote and not self.useFretColors:
color = (1, 1, 1, 1 * visibility * f)
else:
color = (
0.1 + 0.8 * c[0],
0.1 + 0.8 * c[1],
0.1 + 0.8 * c[2],
1 * visibility * f,
)
length = 0
tailOnly = False
self.spNote = False
# user setting for starpower refill / replenish notes
if self.starPowerActive:
if self.spRefillMode == 0:
# mode 0 = no starpower / overdrive refill notes
self.spEnabled = False
elif self.spRefillMode == 1 and self.theme != 2:
# mode 1 = overdrive refill notes in RB themes only
self.spEnabled = False
elif self.spRefillMode == 2 and song.midiStyle != 1:
# mode 2 = refill based on MIDI type
self.spEnabled = False
if event.star:
self.openStarNotesInView = True
if event.finalStar:
self.finalStarSeen = True
self.openStarNotesInView = True
if event.finalStar and self.spEnabled:
if event.played or event.hopod:
if event.flameCount < 1 and not self.starPowerGained:
if self.starPower < 50:
# not enough starpower to activate yet, kill existing drumfills
for dfEvent in self.drumFillEvents:
dfEvent.happened = True
if self.starPower < 100:
self.starPower += 25
if self.starPower > 100:
self.starPower = 100
self.overdriveFlashCount = (
0 # this triggers the oFlash strings & timer
)
self.starPowerGained = True
isTappable = False
if event.played or event.hopod:
# if the note is hit
continue
elif z < 0:
# Notes past frets
                # if neither case below applies, notes keep going past the frets (self.notedisappear == 1)
if self.notedisappear == 0:
# Notes disappear
continue
elif self.notedisappear == 2:
# Notes turn red
color = (1, 0, 0, 1) # turn note red
sustain = False
gl.glPushMatrix()
gl.glTranslatef(x, 0, z)
self.renderNote(
length,
sustain=sustain,
color=color,
tailOnly=tailOnly,
isTappable=isTappable,
fret=event.number,
spNote=self.spNote,
isOpen=isOpen,
)
gl.glPopMatrix()
# end FOR loop / note rendering loop
if (
(not self.openStarNotesInView)
and (not self.starNotesInView)
and self.finalStarSeen
):
self.spEnabled = True
self.finalStarSeen = False
self.isStarPhrase = False
def render3DKey(self, texture, model, x, y, color, fretNum, f):
"""Group rendering of 3D keys/frets into method"""
gl.glPushMatrix()
gl.glDepthMask(1)
gl.glEnable(gl.GL_LIGHTING)
gl.glShadeModel(gl.GL_SMOOTH)
gl.glEnable(gl.GL_LIGHT0)
gl.glRotatef(90, 0, 1, 0)
gl.glLightfv(
gl.GL_LIGHT0,
gl.GL_POSITION,
np.array([5.0, 10.0, -10.0, 0.0], dtype=np.float32),
)
gl.glLightfv(
gl.GL_LIGHT0,
gl.GL_AMBIENT,
np.array([0.2, 0.2, 0.2, 0.0], dtype=np.float32),
)
gl.glLightfv(
gl.GL_LIGHT0,
gl.GL_DIFFUSE,
np.array([1.0, 1.0, 1.0, 0.0], dtype=np.float32),
)
gl.glRotatef(-90, 1, 0, 0)
gl.glRotatef(-90, 0, 0, 1)
if fretNum == 0:
# green fret button
gl.glRotate(self.keyrot[0], 0, 1, 0)
gl.glTranslatef(0, 0, self.keypos[0])
elif fretNum == 1:
# red fret button
gl.glRotate(self.keyrot[1], 0, 1, 0)
gl.glTranslatef(0, 0, self.keypos[1])
elif fretNum == 2:
# yellow fret button
gl.glRotate(self.keyrot[2], 0, 1, 0)
gl.glTranslatef(0, 0, self.keypos[2])
elif fretNum == 3:
# blue fret button
gl.glRotate(self.keyrot[3], 0, 1, 0)
gl.glTranslatef(0, 0, self.keypos[3])
elif fretNum == 4:
# orange fret button
gl.glRotate(self.keyrot[4], 0, 1, 0)
gl.glTranslatef(0, 0, self.keypos[4])
gl.glTranslatef(x, y + color[3] * 6, 0)
if texture:
gl.glColor4f(1, 1, 1, color[3] + 1.0)
gl.glEnable(gl.GL_TEXTURE_2D)
texture.bind()
gl.glMatrixMode(gl.GL_TEXTURE)
gl.glScalef(1, -1, 1)
gl.glMatrixMode(gl.GL_MODELVIEW)
gl.glScalef(self.boardScaleX, self.boardScaleY, 1)
if not self.hit[fretNum] and f:
model.render("Mesh_001")
elif self.hit[fretNum]:
model.render("Mesh_002")
else:
model.render("Mesh")
gl.glMatrixMode(gl.GL_TEXTURE)
gl.glLoadIdentity()
gl.glMatrixMode(gl.GL_MODELVIEW)
gl.glDisable(gl.GL_TEXTURE_2D)
else:
gl.glColor4f(color[0], color[1], color[2], color[3] + 1.0)
# Mesh - Main fret
# Key_001 - Top of fret (key_color)
# Key_002 - Bottom of fret (key2_color)
# Glow_001 - Only rendered when a note is hit along with the glow.svg
if model.find("Glow_001"):
model.render("Mesh")
if model.find("Key_001"):
gl.glColor3f(self.keyColor[0], self.keyColor[1], self.keyColor[2])
model.render("Key_001")
if model.find("Key_002"):
gl.glColor3f(
self.key2Color[0], self.key2Color[1], self.key2Color[2]
)
model.render("Key_002")
else:
model.render()
gl.glDisable(gl.GL_LIGHTING)
gl.glDisable(gl.GL_LIGHT0)
gl.glDepthMask(0)
gl.glPopMatrix()
def renderFrets(self, visibility, song, controls):
pass
def renderHitGlow(self):
for n in range(self.strings2):
c = self.glowColor[n]
f = self.fretActivity[n]
w = self.boardWidth / self.strings
x = (self.strings / 2 - n) * w
if self.fretPress:
y = f / 6
else:
y = 0
size = 0.22
if f and not self.disableFretSFX:
if self.glowColor[0] == -1:
s = 1.0
else:
s = 0.0
while s < 1:
ms = s * (math.sin(self.time) * 0.25 + 1)
gl.glColor3f(c[0] * (1 - ms), c[1] * (1 - ms), c[2] * (1 - ms))
gl.glPushMatrix()
gl.glTranslatef(x, y, 0)
gl.glScalef(
0.1 + 0.02 * ms * f, 0.1 + 0.02 * ms * f, 0.1 + 0.02 * ms * f
)
gl.glRotatef(90, 0, 1, 0)
gl.glRotatef(-90, 1, 0, 0)
gl.glRotatef(-90, 0, 0, 1)
if not self.twoDkeys and not self.keytex:
if self.keyMesh.find("Glow_001"):
self.keyMesh.render("Glow_001")
else:
self.keyMesh.render()
gl.glPopMatrix()
s += 0.2
# Hitglow color
if self.hitglow_color == 0:
glowcol = (c[0], c[1], c[2]) # Same as fret
elif self.hitglow_color == 1:
glowcol = (1, 1, 1) # Actual color
f += 2
draw3Dtex(
self.glowDrawing,
coord=(x, 0, 0.01),
rot=(f * 90 + self.time, 0, 1, 0),
texcoord=(0.0, 0.0, 1.0, 1.0),
vertex=(-size * f, -size * f, size * f, size * f),
multiples=True,
alpha=True,
color=glowcol,
)
def renderTail(
self,
song,
length,
sustain,
kill,
color,
tailOnly=False,
isTappable=False,
big=False,
fret=0,
spNote=False,
freestyleTail=0,
pos=0,
):
"""
if freestyleTail == 0, act normally.
        if freestyleTail == 1, render a freestyle tail.
        if freestyleTail == 2, render a highlighted freestyle tail.
"""
def project(beat):
return 0.125 * beat / self.beatsPerUnit # was 0.12
offset = (pos - self.lastBpmChange) / self.currentPeriod + self.baseBeat
self.tailSpeed = self.engine.theme.noteTailSpeedMulti
if not self.simpleTails:
            # separate tail images are used as-is; don't tint them
tailcol = (1, 1, 1, 1)
elif self.starPowerActive and self.powerActiveColorToggle:
tailcol = self.spColor
elif spNote and self.powerGainColorToggle:
tailcol = self.spColor
elif not big and tailOnly:
# grey because the note was missed
tailcol = (0.6, 0.6, 0.6, color[3])
else:
# normal colors
tailcol = color
if length > self.boardLength:
s = self.boardLength
else:
s = length
size = (0.4, s)
if kill and big:
kEffect = (math.sin(pos / 50) + 1) / 2
size = ((0.02 + (kEffect * 0.182) * 2), s)
c = [
self.killColor[fret][0],
self.killColor[fret][1],
self.killColor[fret][2],
]
if c != [0, 0, 0]:
for i in range(0, 3):
c[i] = c[i] * kEffect + color[i] * (1 - kEffect)
tailcol = (0.1 + 0.8 * c[0], 0.1 + 0.8 * c[1], 0.1 + 0.8 * c[2], 1)
if sustain:
if length is not None:
# so any theme containing appropriate files can use new tails
if not self.simpleTails:
for n in range(5):
if big and tailOnly:
if kill and self.killfx == 0:
tex1 = self.kill1
tex2 = self.kill2
else:
if (
self.starPowerActive
and self.powerActiveColorToggle
and not color == (0, 0, 0, 1)
):
tex1 = self.btail6
tex2 = self.btaile6
elif (
spNote
and self.powerGainColorToggle
and not color == (0, 0, 0, 1)
):
tex1 = self.btail6
tex2 = self.btaile6
else:
if fret == n:
tex1 = getattr(self, "btail" + str(n + 1))
tex2 = getattr(self, "btaile" + str(n + 1))
else:
if tailOnly:
# Note let go
tex1 = self.tail0
tex2 = self.taile0
else:
if (
self.starPowerActive
and self.powerActiveColorToggle
and not color == (0, 0, 0, 1)
):
tex1 = self.tail6
tex2 = self.taile6
elif (
spNote
and self.powerGainColorToggle
and not color == (0, 0, 0, 1)
):
tex1 = self.tail6
tex2 = self.taile6
else:
if fret == n:
tex1 = getattr(self, "tail" + str(n + 1))
tex2 = getattr(self, "taile" + str(n + 1))
else:
if big and tailOnly:
if kill:
tex1 = self.kill1
tex2 = self.kill2
else:
tex1 = self.bigTail1
tex2 = self.bigTail2
else:
tex1 = self.tail1
tex2 = self.tail2
if big and tailOnly and shaders.enable("tail"):
color = (color[0] * 1.5, color[1] * 1.5, color[2] * 1.5, 1.0)
shaders.setVar("color", color)
if kill and self.killfx == 0:
h = shaders.getVar("height")
shaders.modVar("height", 0.5, 0.06 / h - 0.1)
shaders.setVar("offset", (5.0 - size[1], 0.0))
size = (size[0] * 15, size[1])
# Render the long part of the tail
gl.glEnable(gl.GL_TEXTURE_2D)
if length >= self.boardLength:
movement1 = (
project((offset * self.tailSpeed) * self.beatsPerUnit) * 3
) - (project(offset * self.beatsPerUnit) * 3)
movement2 = (
project(((offset * self.tailSpeed) + s) * self.beatsPerUnit) * 3
) - (project(offset * self.beatsPerUnit) * 3)
else:
movement1 = (
project((offset * self.tailSpeed) * self.beatsPerUnit) * 3
)
movement2 = (
project(((offset * self.tailSpeed) + s) * self.beatsPerUnit) * 3
)
self.tail_tex[0][1] = self.tail_tex[1][1] = movement1
self.tail_tex[2][1] = self.tail_tex[3][1] = movement2
                for i in range(4):
                    self.tail_col[i][0] = tailcol[0]
                    self.tail_col[i][1] = tailcol[1]
                    self.tail_col[i][2] = tailcol[2]
self.tail_vtx[0][0] = self.tail_vtx[2][0] = -size[0]
self.tail_vtx[1][0] = self.tail_vtx[3][0] = size[0]
self.tail_vtx[0][2] = self.tail_vtx[1][2] = size[1]
tex1.texture.bind()
cmgl.draw_arrays(
gl.GL_TRIANGLE_STRIP,
vertices=self.tail_vtx,
colors=self.tail_col,
texcoords=self.tail_tex,
)
gl.glDisable(gl.GL_TEXTURE_2D)
draw3Dtex(
tex2,
vertex=(-size[0], size[1], size[0], size[1] + 0.6),
texcoord=(0.0, 0.0, 1.0, 1.0),
color=tailcol,
) # render the end of a tail
shaders.disable()
if tailOnly:
return
def renderFreeStyleTail(self, length, color, fret, pos):
if length is not None:
if length > self.boardLength:
s = self.boardLength
else:
s = length
# render a freestyle tail (self.freestyle1 & self.freestyle2)
zsize = 0.25
if self.freestyleActive:
size = (0.30, s - zsize) # was .15
else:
size = (0.15, s - zsize)
if self.isDrum and self.drumFillsActive:
if self.drumFillsHits >= 4:
size = (0.30, s - zsize)
elif self.drumFillsHits >= 3:
size = (0.25, s - zsize)
elif self.drumFillsHits >= 2:
size = (0.21, s - zsize)
elif self.drumFillsHits >= 1:
size = (0.17, s - zsize)
c1, c2, c3, c4 = color
tailGlow = (
1 - (pos - self.freestyleLastFretHitTime[fret]) / self.freestylePeriod
)
if tailGlow < 0:
tailGlow = 0
color = (
c1 + c1 * 2.0 * tailGlow,
c2 + c2 * 2.0 * tailGlow,
c3 + c3 * 2.0 * tailGlow,
c4 * 0.6 + c4 * 0.4 * tailGlow,
) # this fades inactive tails' color darker
tailcol = color
tex1 = self.freestyle1
tex2 = self.freestyle2
draw3Dtex(
tex1,
vertex=(-size[0], 0, size[0], size[1]),
texcoord=(0.0, 0.0, 1.0, 1.0),
color=tailcol,
) # Middle of freestyle tail
draw3Dtex(
tex2,
vertex=(-size[0], size[1], size[0], size[1] + zsize),
texcoord=(0.0, 0.05, 1.0, 0.95),
color=tailcol,
) # end of freestyle tail
draw3Dtex(
tex2,
vertex=(-size[0], 0 - zsize, size[0], 0 + (0.05)),
texcoord=(0.0, 0.95, 1.0, 0.05),
color=tailcol,
) # beginning of freestyle tail
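The color fade in renderFreeStyleTail brightens a lane for one freestylePeriod after its fret was last hit, then settles inactive tails at 60% alpha. A standalone sketch of that fade (function name and sample values are illustrative, not part of the game code):

```python
def fade_color(color, pos, last_hit, period):
    # Glow decays linearly from 1.0 (fret just hit) to 0.0 (one period elapsed);
    # fully inactive tails settle at 60% of their original alpha.
    glow = max(0.0, 1.0 - (pos - last_hit) / period)
    c1, c2, c3, c4 = color
    return (c1 + c1 * 2.0 * glow,
            c2 + c2 * 2.0 * glow,
            c3 + c3 * 2.0 * glow,
            c4 * 0.6 + c4 * 0.4 * glow)

# A tail whose fret was hit long ago (two periods) is fully faded.
faded = fade_color((0.5, 0.5, 0.5, 1.0), pos=2000, last_hit=0, period=1000)
```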
def renderFreestyleLanes(self, visibility, song, pos, controls):
if not song:
return
if not song.readyToGo:
return
# check for [section big_rock_ending] to set a flag to determine how to treat the last drum fill marker note:
if song.breMarkerTime and pos > song.breMarkerTime:
self.bigRockEndingMarkerSeen = True
self.drumFillsReady = False
boardWindowMax = pos + self.currentPeriod * self.beatsPerBoard
track = song.midiEventTrack[self.player]
beatsPerUnit = self.beatsPerBoard / self.boardLength
breLaneOffset = (self.strings - 5) * 0.5
if self.freestyleEnabled:
freestyleActive = False
if self.isDrum:
self.drumFillsActive = False
drumFillEvents = []
for time, event in track.getEvents(
pos - self.freestyleOffset, boardWindowMax + self.freestyleOffset
):
if isinstance(event, MarkerNote):
if event.number == FREESTYLE_MARKING_NOTE and (
not event.happened or self.bigRockEndingMarkerSeen
):
# don't kill the BRE!
if self.isDrum:
drumFillEvents.append(event)
length = (event.length - 50) / self.currentPeriod / beatsPerUnit
w = self.boardWidth / self.strings
self.freestyleLength = event.length
self.freestyleStart = time
z = ((time - pos) / self.currentPeriod) / beatsPerUnit
z2 = (
(time + event.length - pos) / self.currentPeriod
) / beatsPerUnit
if z > self.boardLength * 0.8:
f = (self.boardLength - z) / (self.boardLength * 0.2)
elif z < 0:
f = min(1, max(0, 1 + z2))
else:
f = 1.0
time -= self.freestyleOffset
# allow tail to move under frets
if self.isDrum:
if time > pos:
self.drumFillsHits = -1
if self.starPower >= 50 and not self.starPowerActive:
self.drumFillsReady = True
else:
self.drumFillsReady = False
if self.bigRockEndingMarkerSeen:
self.freestyleReady = True
if self.isDrum:
self.drumFillsReady = False
else:
self.freestyleReady = False
if time < pos:
if self.bigRockEndingMarkerSeen:
freestyleActive = True
elif self.isDrum:
if self.drumFillsReady:
self.drumFillsActive = True
self.drumFillWasJustActive = True
if self.drumFillsHits < 0:
self.drumFillsCount += 1
self.drumFillsHits = 0
if z < -1.5:
length += z + 1.5
z = -1.5
if self.isDrum:
breRangeStart = 1
else:
breRangeStart = 0
# render freestyle tails
if self.freestyleReady or self.drumFillsReady:
for theFret in range(breRangeStart, 5):
x = (self.strings / 2 - breLaneOffset - theFret) * w
if self.isDrum and theFret == 4:
c = self.fretColors[0]
else:
c = self.fretColors[theFret]
color = (
0.1 + 0.8 * c[0],
0.1 + 0.8 * c[1],
0.1 + 0.8 * c[2],
1.0 * visibility * f,
)
gl.glPushMatrix()
gl.glTranslatef(
x, (1.0 - visibility) ** (theFret + 1), z
)
self.renderFreeStyleTail(length, color, theFret, pos)
gl.glPopMatrix()
if self.isDrum and (
self.drumFillsActive
and self.drumFillsHits >= 4
and z + length < self.boardLength
):
gl.glPushMatrix()
color = (
0.1 + 0.8 * c[0],
0.1 + 0.8 * c[1],
0.1 + 0.8 * c[2],
1.0 * visibility * f,
)
gl.glTranslatef(x, 0.0, z + length)
gl.glScalef(1, 1.7, 1.3)
self.renderNote(
length,
sustain=False,
color=color,
tailOnly=False,
isTappable=False,
fret=4,
spNote=False,
isOpen=False,
spAct=True,
)
gl.glPopMatrix()
self.freestyleActive = freestyleActive
if self.isDrum:
self.drumFillEvents = drumFillEvents
def renderTails(self, visibility, song, pos, killswitch):
if not song:
return
if not song.readyToGo:
return
# Update dynamic period
self.currentPeriod = self.neckSpeed
self.killPoints = False
w = self.boardWidth / self.strings
num = 0
renderedNotes = self.getRequiredNotesForRender(song, pos)
for time, event in renderedNotes:
if isinstance(event, Tempo):
self.tempoBpm = event.bpm
continue
if not isinstance(event, Note):
continue
if event.length <= 120:
continue
if event.noteBpm == 0.0:
event.noteBpm = self.tempoBpm
if self.coOpFailed:
if self.coOpRestart:
if time - self.coOpRescueTime < (
self.currentPeriod * self.beatsPerBoard * 2
):
continue
elif (
self.coOpRescueTime
+ (self.currentPeriod * self.beatsPerBoard * 2)
< pos
):
self.coOpFailed = False
self.coOpRestart = False
log.debug("Turning off coOpFailed. Rescue successful.")
else:
continue
c = self.fretColors[event.number]
x = (self.strings / 2 - event.number) * w
z = ((time - pos) / self.currentPeriod) / self.beatsPerUnit
z2 = ((time + event.length - pos) / self.currentPeriod) / self.beatsPerUnit
if z > self.boardLength * 0.8:
f = (self.boardLength - z) / (self.boardLength * 0.2)
elif z < 0:
f = min(1, max(0, 1 + z2))
else:
f = 1.0
color = (
0.1 + 0.8 * c[0],
0.1 + 0.8 * c[1],
0.1 + 0.8 * c[2],
1 * visibility * f,
)
if event.length > 120:
length = (event.length - 50) / self.currentPeriod / self.beatsPerUnit
else:
length = 0
tailOnly = False
# user setting for starpower refill / replenish notes
if self.spEnabled and (event.star or event.finalStar):
spNote = event.star
else:
spNote = False
if event.tappable < 2:
isTappable = False
else:
isTappable = True
if z < 0:
# Note past frets
if event.played or event.hopod:
# note hit
tailOnly = True
length += z
z = 0
if length <= 0:
continue
                else:
                    # note is missed; grey the tail out (or turn it red for mode 2)
                    if self.notedisappear == 0:
                        # mode 0 = notes disappear when missed
                        color = (0.6, 0.6, 0.6, 0.5 * visibility * f)
                    elif self.notedisappear == 1:
                        # mode 1 = notes keep going when missed
                        color = (0.6, 0.6, 0.6, 0.5 * visibility * f)
                    elif self.notedisappear == 2:
                        # mode 2 = notes turn red when missed
                        color = (1, 0, 0, 1)
big = False
self.bigMax = 0
for i in range(0, self.strings):
if self.hit[i]:
big = True
self.bigMax += 1
if self.spEnabled and killswitch:
if event.star or event.finalStar:
if big and tailOnly:
self.killPoints = True
color = (1, 1, 1, 1)
if z + length < -1.0:
continue
# crop to board edge
if z + length > self.boardLength:
length = self.boardLength - z
sustain = False
if event.length > (1.4 * (60000.0 / event.noteBpm) / 4):
sustain = True
gl.glPushMatrix()
gl.glTranslatef(x, (1.0 - visibility) ** (event.number + 1), z)
if big and num < self.bigMax:
num += 1
self.renderTail(
song,
length,
sustain=sustain,
kill=killswitch,
color=color,
tailOnly=tailOnly,
isTappable=isTappable,
big=True,
fret=event.number,
spNote=spNote,
pos=pos,
)
else:
self.renderTail(
song,
length,
sustain=sustain,
kill=killswitch,
color=color,
tailOnly=tailOnly,
isTappable=isTappable,
fret=event.number,
spNote=spNote,
pos=pos,
)
gl.glPopMatrix()
if killswitch and self.killfx == 1:
gl.glBlendFunc(gl.GL_SRC_ALPHA, gl.GL_ONE)
for time, event in self.playedNotes:
step = self.currentPeriod / 16
t = time + event.length
x = (self.strings / 2 - event.number) * w
c = self.fretColors[event.number]
s = t
proj = 1.0 / self.currentPeriod / self.beatsPerUnit
zStep = step * proj
def waveForm(t):
u = ((t - time) * -0.1 + pos - time) / 64.0 + 0.0001
return (
(
math.sin(event.number + self.time * -0.01 + t * 0.03)
+ math.cos(event.number + self.time * 0.01 + t * 0.02)
)
* 0.1
+ 0.1
+ math.sin(u) / (5 * u)
)
gl.glBegin(gl.GL_TRIANGLE_STRIP)
f1 = 0
while t > time:
if ((t - pos) * proj) < self.boardLength:
z = (t - pos) * proj
else:
z = self.boardLength
if z < 0:
break
f2 = min((s - t) / (6 * step), 1.0)
a1 = waveForm(t) * f1
a2 = waveForm(t - step) * f2
gl.glColor4f(c[0], c[1], c[2], 0.5)
gl.glVertex3f(x - a1, 0, z)
gl.glVertex3f(x - a2, 0, z - zStep)
gl.glColor4f(1, 1, 1, 0.75)
gl.glVertex3f(x, 0, z)
gl.glVertex3f(x, 0, z - zStep)
gl.glColor4f(c[0], c[1], c[2], 0.5)
gl.glVertex3f(x + a1, 0, z)
gl.glVertex3f(x + a2, 0, z - zStep)
gl.glVertex3f(x + a2, 0, z - zStep)
gl.glVertex3f(x - a2, 0, z - zStep)
t -= step
f1 = f2
gl.glEnd()
gl.glBlendFunc(gl.GL_SRC_ALPHA, gl.GL_ONE_MINUS_SRC_ALPHA)
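The killswitch effect at the end of renderTails shapes each tail with a per-note waveform before emitting the triangle strip. A standalone sketch of that shaping function (parameter names are illustrative; in the game, `game_time` is `self.time` and `fret` is `event.number`):

```python
import math

def waveform(t, note_time, pos, game_time, fret):
    # Two slow sin/cos oscillators give a small ripple around a 0.1 baseline,
    # and a damped sin(u)/u term adds a pulse that fades as the note recedes.
    u = ((t - note_time) * -0.1 + pos - note_time) / 64.0 + 0.0001
    return (
        (math.sin(fret + game_time * -0.01 + t * 0.03)
         + math.cos(fret + game_time * 0.01 + t * 0.02)) * 0.1
        + 0.1
        + math.sin(u) / (5 * u)
    )

amp = waveform(t=1000.0, note_time=900.0, pos=1200.0, game_time=500.0, fret=2)
```

The oscillators contribute at most ±0.2 and the damped term shrinks with `u`, so the displacement stays a small fraction of the string spacing.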
|
extractor | rai | # coding: utf-8
from __future__ import unicode_literals
import re
from ..compat import compat_str, compat_urlparse
from ..utils import (
ExtractorError,
GeoRestrictedError,
HEADRequest,
determine_ext,
find_xpath_attr,
fix_xml_ampersands,
int_or_none,
parse_duration,
remove_start,
strip_or_none,
try_get,
unified_strdate,
unified_timestamp,
update_url_query,
urljoin,
xpath_text,
)
from .common import InfoExtractor
class RaiBaseIE(InfoExtractor):
_UUID_RE = r"[\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12}"
_GEO_COUNTRIES = ["IT"]
_GEO_BYPASS = False
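_UUID_RE anchors the _VALID_URL of every Rai extractor: content IDs are lowercase hex UUIDs. A quick self-check of the pattern (the matching sample ID is taken from the RaiPlayIE test cases; the uppercase one is made up to show the pattern is case-sensitive):

```python
import re

# Same pattern as RaiBaseIE._UUID_RE
_UUID_RE = r"[\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12}"

ok = re.match(_UUID_RE, "cb27157f-9dd0-4aee-b788-b1f67643a391")
bad = re.match(_UUID_RE, "CB27157F-9DD0-4AEE-B788-B1F67643A391")  # uppercase: no match
```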
def _extract_relinker_info(self, relinker_url, video_id):
if not re.match(r"https?://", relinker_url):
return {"formats": [{"url": relinker_url}]}
formats = []
geoprotection = None
is_live = None
duration = None
for platform in ("mon", "flash", "native"):
relinker = self._download_xml(
relinker_url,
video_id,
note="Downloading XML metadata for platform %s" % platform,
transform_source=fix_xml_ampersands,
query={"output": 45, "pl": platform},
headers=self.geo_verification_headers(),
)
if not geoprotection:
geoprotection = (
xpath_text(relinker, "./geoprotection", default=None) == "Y"
)
if not is_live:
is_live = xpath_text(relinker, "./is_live", default=None) == "Y"
if not duration:
duration = parse_duration(
xpath_text(relinker, "./duration", default=None)
)
url_elem = find_xpath_attr(relinker, "./url", "type", "content")
if url_elem is None:
continue
media_url = url_elem.text
# This does not imply geo restriction (e.g.
# http://www.raisport.rai.it/dl/raiSport/media/rassegna-stampa-04a9f4bd-b563-40cf-82a6-aad3529cb4a9.html)
if "/video_no_available.mp4" in media_url:
continue
ext = determine_ext(media_url)
if (ext == "m3u8" and platform != "mon") or (
ext == "f4m" and platform != "flash"
):
continue
if ext == "m3u8" or "format=m3u8" in media_url or platform == "mon":
formats.extend(
self._extract_m3u8_formats(
media_url,
video_id,
"mp4",
"m3u8_native",
m3u8_id="hls",
fatal=False,
)
)
elif ext == "f4m" or platform == "flash":
manifest_url = update_url_query(
media_url.replace("manifest#live_hds.f4m", "manifest.f4m"),
{"hdcore": "3.7.0", "plugin": "aasp-3.7.0.39.44"},
)
formats.extend(
self._extract_f4m_formats(
manifest_url, video_id, f4m_id="hds", fatal=False
)
)
else:
                bitrate = int_or_none(xpath_text(relinker, "bitrate"))
                formats.append(
                    {
                        "url": media_url,
                        # bitrate may be absent (None); guard before comparing
                        "tbr": bitrate if bitrate and bitrate > 0 else None,
                        "format_id": "http-%d" % bitrate
                        if bitrate and bitrate > 0
                        else "http",
                    }
                )
if not formats and geoprotection is True:
self.raise_geo_restricted(countries=self._GEO_COUNTRIES)
formats.extend(self._create_http_urls(relinker_url, formats))
return dict(
(k, v)
for k, v in {
"is_live": is_live,
"duration": duration,
"formats": formats,
}.items()
if v is not None
)
def _create_http_urls(self, relinker_url, fmts):
_RELINKER_REG = r"https?://(?P<host>[^/]+?)/(?:i/)?(?P<extra>[^/]+?)/(?P<path>.+?)/(?P<id>\w+)(?:_(?P<quality>[\d\,]+))?(?:\.mp4|/playlist\.m3u8).+?"
_MP4_TMPL = "%s&overrideUserAgentRule=mp4-%s"
_QUALITY = {
# tbr: w, h
"250": [352, 198],
"400": [512, 288],
"700": [512, 288],
"800": [700, 394],
"1200": [736, 414],
"1800": [1024, 576],
"2400": [1280, 720],
"3200": [1440, 810],
"3600": [1440, 810],
"5000": [1920, 1080],
"10000": [1920, 1080],
}
def test_url(url):
resp = self._request_webpage(
HEADRequest(url),
None,
headers={"User-Agent": "Rai"},
fatal=False,
errnote=False,
note=False,
)
if resp is False:
return False
if resp.code == 200:
return False if resp.url == url else resp.url
return None
def get_format_info(tbr):
import math
            br = int_or_none(tbr)
            if len(fmts) == 1 and not br:
                br = fmts[0].get("tbr")
            if not br:
                # bitrate unknown: fall back to the lowest known quality
                br = 250
            if br > 300:
                tbr = compat_str(math.floor(br / 100) * 100)
            else:
                tbr = "250"
# try extracting info from available m3u8 formats
format_copy = None
for f in fmts:
if f.get("tbr"):
br_limit = math.floor(br / 100)
if br_limit - 1 <= math.floor(f["tbr"] / 100) <= br_limit + 1:
format_copy = f.copy()
return (
{
"width": format_copy.get("width"),
"height": format_copy.get("height"),
"tbr": format_copy.get("tbr"),
"vcodec": format_copy.get("vcodec"),
"acodec": format_copy.get("acodec"),
"fps": format_copy.get("fps"),
"format_id": "https-%s" % tbr,
}
if format_copy
else {
"width": _QUALITY[tbr][0],
"height": _QUALITY[tbr][1],
"format_id": "https-%s" % tbr,
"tbr": int(tbr),
}
)
loc = test_url(_MP4_TMPL % (relinker_url, "*"))
if not isinstance(loc, compat_str):
return []
mobj = re.match(_RELINKER_REG, test_url(relinker_url) or "")
if not mobj:
return []
available_qualities = (
mobj.group("quality").split(",") if mobj.group("quality") else ["*"]
)
available_qualities = [i for i in available_qualities if i]
formats = []
for q in available_qualities:
fmt = {
"url": _MP4_TMPL % (relinker_url, q),
"protocol": "https",
"ext": "mp4",
}
fmt.update(get_format_info(q))
formats.append(fmt)
return formats
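get_format_info buckets an arbitrary bitrate into the _QUALITY table by rounding down to the nearest hundred, falling back to "250" when the bitrate is small or unknown. A minimal sketch of that bucketing (using integer division in place of math.floor; the function name is illustrative):

```python
def bucket_tbr(br):
    # Round down to the nearest hundred kbit/s so the result can index the
    # _QUALITY table; small or missing bitrates fall back to "250".
    if br and br > 300:
        return str(int(br) // 100 * 100)
    return "250"
```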
@staticmethod
def _extract_subtitles(url, video_data):
STL_EXT = "stl"
SRT_EXT = "srt"
subtitles = {}
subtitles_array = video_data.get("subtitlesArray") or []
for k in ("subtitles", "subtitlesUrl"):
subtitles_array.append({"url": video_data.get(k)})
for subtitle in subtitles_array:
sub_url = subtitle.get("url")
if sub_url and isinstance(sub_url, compat_str):
sub_lang = subtitle.get("language") or "it"
sub_url = urljoin(url, sub_url)
sub_ext = determine_ext(sub_url, SRT_EXT)
subtitles.setdefault(sub_lang, []).append(
{
"ext": sub_ext,
"url": sub_url,
}
)
if STL_EXT == sub_ext:
subtitles[sub_lang].append(
{
"ext": SRT_EXT,
"url": sub_url[: -len(STL_EXT)] + SRT_EXT,
}
)
return subtitles
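For `.stl` subtitles the method above also registers an `.srt` companion by slicing the extension off the URL and appending the new one; the dot is preserved because only `len("stl")` characters are removed. In isolation (the URL here is made up for illustration, not a real Rai endpoint):

```python
STL_EXT = "stl"
SRT_EXT = "srt"

sub_url = "https://www.raiplay.it/subs/report.stl"
srt_url = sub_url[: -len(STL_EXT)] + SRT_EXT
```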
class RaiPlayIE(RaiBaseIE):
_VALID_URL = (
r"(?P<base>https?://(?:www\.)?raiplay\.it/.+?-(?P<id>%s))\.(?:html|json)"
% RaiBaseIE._UUID_RE
)
_TESTS = [
{
"url": "http://www.raiplay.it/video/2014/04/Report-del-07042014-cb27157f-9dd0-4aee-b788-b1f67643a391.html",
"md5": "8970abf8caf8aef4696e7b1f2adfc696",
"info_dict": {
"id": "cb27157f-9dd0-4aee-b788-b1f67643a391",
"ext": "mp4",
"title": "Report del 07/04/2014",
"alt_title": "St 2013/14 - Espresso nel caffè - 07/04/2014",
"description": "md5:d730c168a58f4bb35600fc2f881ec04e",
"thumbnail": r"re:^https?://.*\.jpg$",
"uploader": "Rai Gulp",
"duration": 6160,
"series": "Report",
"season": "2013/14",
"subtitles": {
"it": "count:2",
},
},
"params": {
"skip_download": True,
},
},
{
# 1080p direct mp4 url
"url": "https://www.raiplay.it/video/2021/03/Leonardo-S1E1-b5703b02-82ee-475a-85b6-c9e4a8adf642.html",
"md5": "2e501e8651d72f05ffe8f5d286ad560b",
"info_dict": {
"id": "b5703b02-82ee-475a-85b6-c9e4a8adf642",
"ext": "mp4",
"title": "Leonardo - S1E1",
"alt_title": "St 1 Ep 1 - Episodio 1",
"description": "md5:f5360cd267d2de146e4e3879a5a47d31",
"thumbnail": r"re:^https?://.*\.jpg$",
"uploader": "Rai 1",
"duration": 3229,
"series": "Leonardo",
"season": "Season 1",
},
},
{
"url": "http://www.raiplay.it/video/2016/11/gazebotraindesi-efebe701-969c-4593-92f3-285f0d1ce750.html?",
"only_matching": True,
},
{
# subtitles at 'subtitlesArray' key (see #27698)
"url": "https://www.raiplay.it/video/2020/12/Report---04-01-2021-2e90f1de-8eee-4de4-ac0e-78d21db5b600.html",
"only_matching": True,
},
{
# DRM protected
"url": "https://www.raiplay.it/video/2020/09/Lo-straordinario-mondo-di-Zoey-S1E1-Lo-straordinario-potere-di-Zoey-ed493918-1d32-44b7-8454-862e473d00ff.html",
"only_matching": True,
},
]
def _real_extract(self, url):
base, video_id = re.match(self._VALID_URL, url).groups()
media = self._download_json(base + ".json", video_id, "Downloading video JSON")
if try_get(
media,
(
lambda x: x["rights_management"]["rights"]["drm"],
lambda x: x["program_info"]["rights_management"]["rights"]["drm"],
),
dict,
):
raise ExtractorError("This video is DRM protected.", expected=True)
title = media["name"]
video = media["video"]
relinker_info = self._extract_relinker_info(video["content_url"], video_id)
self._sort_formats(relinker_info["formats"])
thumbnails = []
for _, value in media.get("images", {}).items():
if value:
thumbnails.append(
{
"url": urljoin(url, value),
}
)
date_published = media.get("date_published")
time_published = media.get("time_published")
if date_published and time_published:
date_published += " " + time_published
subtitles = self._extract_subtitles(url, video)
program_info = media.get("program_info") or {}
season = media.get("season")
info = {
"id": remove_start(media.get("id"), "ContentItem-") or video_id,
"display_id": video_id,
"title": self._live_title(title) if relinker_info.get("is_live") else title,
"alt_title": strip_or_none(media.get("subtitle")),
"description": media.get("description"),
"uploader": strip_or_none(media.get("channel")),
"creator": strip_or_none(media.get("editor") or None),
"duration": parse_duration(video.get("duration")),
"timestamp": unified_timestamp(date_published),
"thumbnails": thumbnails,
"series": program_info.get("name"),
"season_number": int_or_none(season),
"season": season if (season and not season.isdigit()) else None,
"episode": media.get("episode_title"),
"episode_number": int_or_none(media.get("episode")),
"subtitles": subtitles,
}
info.update(relinker_info)
return info
class RaiPlayLiveIE(RaiPlayIE):
_VALID_URL = r"(?P<base>https?://(?:www\.)?raiplay\.it/dirette/(?P<id>[^/?#&]+))"
_TESTS = [
{
"url": "http://www.raiplay.it/dirette/rainews24",
"info_dict": {
"id": "d784ad40-e0ae-4a69-aa76-37519d238a9c",
"display_id": "rainews24",
"ext": "mp4",
"title": "re:^Diretta di Rai News 24 [0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}$",
"description": "md5:4d00bcf6dc98b27c6ec480de329d1497",
"uploader": "Rai News 24",
"creator": "Rai News 24",
"is_live": True,
},
"params": {
"skip_download": True,
},
}
]
class RaiPlayPlaylistIE(InfoExtractor):
_VALID_URL = r"(?P<base>https?://(?:www\.)?raiplay\.it/programmi/(?P<id>[^/?#&]+))"
_TESTS = [
{
"url": "http://www.raiplay.it/programmi/nondirloalmiocapo/",
"info_dict": {
"id": "nondirloalmiocapo",
"title": "Non dirlo al mio capo",
"description": "md5:98ab6b98f7f44c2843fd7d6f045f153b",
},
"playlist_mincount": 12,
}
]
def _real_extract(self, url):
base, playlist_id = re.match(self._VALID_URL, url).groups()
program = self._download_json(
base + ".json", playlist_id, "Downloading program JSON"
)
entries = []
for b in program.get("blocks") or []:
for s in b.get("sets") or []:
s_id = s.get("id")
if not s_id:
continue
medias = self._download_json(
"%s/%s.json" % (base, s_id),
s_id,
"Downloading content set JSON",
fatal=False,
)
if not medias:
continue
for m in medias.get("items") or []:
path_id = m.get("path_id")
if not path_id:
continue
video_url = urljoin(url, path_id)
entries.append(
self.url_result(
video_url,
ie=RaiPlayIE.ie_key(),
video_id=RaiPlayIE._match_id(video_url),
)
)
return self.playlist_result(
entries,
playlist_id,
program.get("name"),
try_get(program, lambda x: x["program_info"]["description"]),
)
class RaiIE(RaiBaseIE):
_VALID_URL = (
r"https?://[^/]+\.(?:rai\.(?:it|tv)|rainews\.it)/.+?-(?P<id>%s)(?:-.+?)?\.html"
% RaiBaseIE._UUID_RE
)
_TESTS = [
{
# var uniquename = "ContentItem-..."
# data-id="ContentItem-..."
"url": "http://www.raisport.rai.it/dl/raiSport/media/rassegna-stampa-04a9f4bd-b563-40cf-82a6-aad3529cb4a9.html",
"info_dict": {
"id": "04a9f4bd-b563-40cf-82a6-aad3529cb4a9",
"ext": "mp4",
"title": "TG PRIMO TEMPO",
"thumbnail": r"re:^https?://.*\.jpg$",
"duration": 1758,
"upload_date": "20140612",
},
"skip": "This content is available only in Italy",
},
{
# with ContentItem in many metas
"url": "http://www.rainews.it/dl/rainews/media/Weekend-al-cinema-da-Hollywood-arriva-il-thriller-di-Tate-Taylor-La-ragazza-del-treno-1632c009-c843-4836-bb65-80c33084a64b.html",
"info_dict": {
"id": "1632c009-c843-4836-bb65-80c33084a64b",
"ext": "mp4",
"title": 'Weekend al cinema, da Hollywood arriva il thriller di Tate Taylor "La ragazza del treno"',
"description": "I film in uscita questa settimana.",
"thumbnail": r"re:^https?://.*\.png$",
"duration": 833,
"upload_date": "20161103",
},
},
{
# with ContentItem in og:url
"url": "http://www.rai.it/dl/RaiTV/programmi/media/ContentItem-efb17665-691c-45d5-a60c-5301333cbb0c.html",
"md5": "06345bd97c932f19ffb129973d07a020",
"info_dict": {
"id": "efb17665-691c-45d5-a60c-5301333cbb0c",
"ext": "mp4",
"title": "TG1 ore 20:00 del 03/11/2016",
"description": "TG1 edizione integrale ore 20:00 del giorno 03/11/2016",
"thumbnail": r"re:^https?://.*\.jpg$",
"duration": 2214,
"upload_date": "20161103",
},
},
{
# initEdizione('ContentItem-...'
"url": "http://www.tg1.rai.it/dl/tg1/2010/edizioni/ContentSet-9b6e0cba-4bef-4aef-8cf0-9f7f665b7dfb-tg1.html?item=undefined",
"info_dict": {
"id": "c2187016-8484-4e3a-8ac8-35e475b07303",
"ext": "mp4",
"title": r"re:TG1 ore \d{2}:\d{2} del \d{2}/\d{2}/\d{4}",
"duration": 2274,
"upload_date": "20170401",
},
"skip": "Changes daily",
},
{
# HLS live stream with ContentItem in og:url
"url": "http://www.rainews.it/dl/rainews/live/ContentItem-3156f2f2-dc70-4953-8e2f-70d7489d4ce9.html",
"info_dict": {
"id": "3156f2f2-dc70-4953-8e2f-70d7489d4ce9",
"ext": "mp4",
"title": "La diretta di Rainews24",
},
"params": {
"skip_download": True,
},
},
{
# ContentItem in iframe (see #12652) and subtitle at 'subtitlesUrl' key
"url": "http://www.presadiretta.rai.it/dl/portali/site/puntata/ContentItem-3ed19d13-26c2-46ff-a551-b10828262f1b.html",
"info_dict": {
"id": "1ad6dc64-444a-42a4-9bea-e5419ad2f5fd",
"ext": "mp4",
"title": "Partiti acchiappavoti - Presa diretta del 13/09/2015",
"description": "md5:d291b03407ec505f95f27970c0b025f4",
"upload_date": "20150913",
"subtitles": {
"it": "count:2",
},
},
"params": {
"skip_download": True,
},
},
{
# Direct MMS URL
"url": "http://www.rai.it/dl/RaiTV/programmi/media/ContentItem-b63a4089-ac28-48cf-bca5-9f5b5bc46df5.html",
"only_matching": True,
},
{
"url": "https://www.rainews.it/tgr/marche/notiziari/video/2019/02/ContentItem-6ba945a2-889c-4a80-bdeb-8489c70a8db9.html",
"only_matching": True,
},
]
def _extract_from_content_id(self, content_id, url):
media = self._download_json(
"http://www.rai.tv/dl/RaiTV/programmi/media/ContentItem-%s.html?json"
% content_id,
content_id,
"Downloading video JSON",
)
title = media["name"].strip()
media_type = media["type"]
if "Audio" in media_type:
relinker_info = {
"formats": [
{
"format_id": media.get("formatoAudio"),
"url": media["audioUrl"],
"ext": media.get("formatoAudio"),
}
]
}
elif "Video" in media_type:
relinker_info = self._extract_relinker_info(media["mediaUri"], content_id)
else:
raise ExtractorError("not a media file")
self._sort_formats(relinker_info["formats"])
thumbnails = []
for image_type in ("image", "image_medium", "image_300"):
thumbnail_url = media.get(image_type)
if thumbnail_url:
thumbnails.append(
{
"url": compat_urlparse.urljoin(url, thumbnail_url),
}
)
subtitles = self._extract_subtitles(url, media)
info = {
"id": content_id,
"title": title,
"description": strip_or_none(media.get("desc")),
"thumbnails": thumbnails,
"uploader": media.get("author"),
"upload_date": unified_strdate(media.get("date")),
"duration": parse_duration(media.get("length")),
"subtitles": subtitles,
}
info.update(relinker_info)
return info
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
content_item_id = None
content_item_url = self._html_search_meta(
(
"og:url",
"og:video",
"og:video:secure_url",
"twitter:url",
"twitter:player",
"jsonlink",
),
webpage,
default=None,
)
if content_item_url:
content_item_id = self._search_regex(
r"ContentItem-(%s)" % self._UUID_RE,
content_item_url,
"content item id",
default=None,
)
if not content_item_id:
content_item_id = self._search_regex(
r"""(?x)
(?:
(?:initEdizione|drawMediaRaiTV)\(|
<(?:[^>]+\bdata-id|var\s+uniquename)=|
<iframe[^>]+\bsrc=
)
(["\'])
(?:(?!\1).)*\bContentItem-(?P<id>%s)
"""
% self._UUID_RE,
webpage,
"content item id",
default=None,
group="id",
)
content_item_ids = set()
if content_item_id:
content_item_ids.add(content_item_id)
if video_id not in content_item_ids:
content_item_ids.add(video_id)
for content_item_id in content_item_ids:
try:
return self._extract_from_content_id(content_item_id, url)
except GeoRestrictedError:
raise
except ExtractorError:
pass
relinker_url = self._proto_relative_url(
self._search_regex(
r"""(?x)
(?:
var\s+videoURL|
mediaInfo\.mediaUri
)\s*=\s*
([\'"])
(?P<url>
(?:https?:)?
//mediapolis(?:vod)?\.rai\.it/relinker/relinkerServlet\.htm\?
(?:(?!\1).)*\bcont=(?:(?!\1).)+)\1
""",
webpage,
"relinker URL",
group="url",
)
)
relinker_info = self._extract_relinker_info(
urljoin(url, relinker_url), video_id
)
self._sort_formats(relinker_info["formats"])
title = self._search_regex(
r'var\s+videoTitolo\s*=\s*([\'"])(?P<title>[^\'"]+)\1',
webpage,
"title",
group="title",
default=None,
) or self._og_search_title(webpage)
info = {
"id": video_id,
"title": title,
}
info.update(relinker_info)
return info
|
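The fallback regex in `RaiIE._real_extract` above accepts ContentItem ids in several contexts (JS initialisation calls, `data-id` attributes, iframe `src` values). A standalone sketch of the same pattern, run against a hypothetical page snippet (the UUID regex here is an assumption standing in for `RaiBaseIE._UUID_RE`):

```python
import re

# Assumed UUID shape (hex groups 8-4-4-4-12); the real constant is
# defined elsewhere in the extractor module.
UUID_RE = r"[\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12}"

# Mirrors the extractor's fallback regex: ContentItem ids inside
# initEdizione()/drawMediaRaiTV() calls, data-id attributes, and iframe src.
CONTENT_ITEM_RE = re.compile(
    r"""(?x)
    (?:
        (?:initEdizione|drawMediaRaiTV)\(|
        <(?:[^>]+\bdata-id|var\s+uniquename)=|
        <iframe[^>]+\bsrc=
    )
    (["\'])
    (?:(?!\1).)*\bContentItem-(?P<id>%s)
    """
    % UUID_RE
)

def find_content_item_id(webpage):
    # Return the first ContentItem UUID found in the page, or None.
    m = CONTENT_ITEM_RE.search(webpage)
    return m.group("id") if m else None

# Hypothetical page snippet, for illustration only.
page = '<div data-id="ContentItem-04a9f4bd-b563-40cf-82a6-aad3529cb4a9">'
print(find_content_item_id(page))  # → 04a9f4bd-b563-40cf-82a6-aad3529cb4a9
```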
downloaders | MegaRapidoNet | # -*- coding: utf-8 -*-
import random
from ..base.multi_downloader import MultiDownloader
def random_with_n_digits(n):
rand = "0."
not_zero = 0
for i in range(1, n + 1):
r = random.randint(0, 9)
if r > 0:
not_zero += 1
rand += str(r)
if not_zero > 0:
return rand
else:
return random_with_n_digits(n)
class MegaRapidoNet(MultiDownloader):
__name__ = "MegaRapidoNet"
__type__ = "downloader"
__version__ = "0.12"
__status__ = "testing"
__pattern__ = r"http://(?:www\.)?\w+\.megarapido\.net/\?file=\w+"
__config__ = [
("enabled", "bool", "Activated", True),
("use_premium", "bool", "Use premium account if available", True),
("fallback", "bool", "Fallback to free download if premium fails", False),
("chk_filesize", "bool", "Check file size", True),
("max_wait", "int", "Reconnect if waiting time is greater than minutes", 10),
("revert_failed", "bool", "Revert to standard download if fails", True),
]
__description__ = """MegaRapido.net multi-downloader plugin"""
__license__ = "GPLv3"
__authors__ = [("Kagenoshin", "kagenoshin@gmx.ch")]
LINK_PREMIUM_PATTERN = (
r'<\s*?a[^>]*?title\s*?=\s*?["\'].*?download["\'][^>]*?href=["\']([^"\']+)'
)
ERROR_PATTERN = r'<\s*?div[^>]*?class\s*?=\s*?["\']?alert-message error.*?>([^<]*)'
def handle_premium(self, pyfile):
self.data = self.load(
"http://megarapido.net/gerar.php",
post={
"rand": random_with_n_digits(16),
"urllist": pyfile.url,
"links": pyfile.url,
"exibir": "normal",
"usar": "premium",
"user": self.account.get_data("sid"),
"autoreset": "",
},
)
if "desloga e loga novamente para gerar seus links" in self.data.lower():
self.error(self._("You have logged in at another place"))
return super().handle_premium(pyfile)
|
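The `random_with_n_digits` helper above concatenates n random digits after `"0."` and recurses whenever every digit came out zero. A standalone, iterative sketch of the same idea (not the plugin's code):

```python
import random

def random_with_n_digits(n):
    # Build "0." followed by n random digits; retry (recursively in the
    # plugin, iteratively here) until at least one digit is nonzero.
    while True:
        digits = [str(random.randint(0, 9)) for _ in range(n)]
        if any(d != "0" for d in digits):
            return "0." + "".join(digits)

value = random_with_n_digits(16)
print(value)  # e.g. 0.3820917465018253
```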
downloaders | FilefactoryCom | # -*- coding: utf-8 -*-
import re
from datetime import timedelta
from pyload.core.network.request_factory import get_url
from ..base.hoster import parse_file_info
from ..base.simple_downloader import SimpleDownloader
def get_info(urls):
for url in urls:
h = get_url(url, just_header=True)
m = re.search(r"Location: (.+)\r\n", h)
if m and not re.match(
    FilefactoryCom.__pattern__, m.group(1)
):  #: It's a direct link! Skipping
yield (url, 0, 7, url)
else:
#: It's a standard html page
yield parse_file_info(FilefactoryCom, url, get_url(url))
class FilefactoryCom(SimpleDownloader):
__name__ = "FilefactoryCom"
__type__ = "downloader"
__version__ = "0.64"
__status__ = "testing"
__pattern__ = r"https?://(?:www\.)?filefactory\.com/(file|trafficshare/\w+)/\w+"
__config__ = [
("enabled", "bool", "Activated", True),
("use_premium", "bool", "Use premium account if available", True),
("fallback", "bool", "Fallback to free download if premium fails", True),
("chk_filesize", "bool", "Check file size", True),
("max_wait", "int", "Reconnect if waiting time is greater than minutes", 10),
]
__description__ = """Filefactory.com downloader plugin"""
__license__ = "GPLv3"
__authors__ = [
("stickell", "l.stickell@yahoo.it"),
("Walter Purcaro", "vuolter@gmail.com"),
]
INFO_PATTERN = r'<div id="file_name"[^>]*>\s*<h2>(?P<N>[^<]+)</h2>\s*<div id="file_info">\s*(?P<S>[\d.,]+) (?P<U>[\w^_]+) uploaded'
OFFLINE_PATTERN = (
r"<h2>File Removed</h2>|This file is no longer available|Invalid Download Link"
)
LINK_FREE_PATTERN = LINK_PREMIUM_PATTERN = r'"([^"]+filefactory\.com/get.+?)"'
WAIT_PATTERN = r'<div id="countdown_clock" data-delay="(\d+)">'
PREMIUM_ONLY_PATTERN = r">Premium Account Required"
COOKIES = [("filefactory.com", "locale", "en_US.utf-8")]
def handle_free(self, pyfile):
if "Currently only Premium Members can download files larger than" in self.data:
self.fail(self._("File too large for free download"))
elif "All free download slots on this server are currently in use" in self.data:
self.retry(
50,
timedelta(minutes=15).total_seconds(),
self._("All free slots are busy"),
)
m = re.search(self.LINK_FREE_PATTERN, self.data)
if m is None:
return
self.link = m.group(1)
m = re.search(self.WAIT_PATTERN, self.data)
if m is not None:
self.wait(m.group(1))
def check_download(self):
check = self.scan_download(
{
"multiple": b"You are currently downloading too many files at once.",
"error": b'<div id="errorMessage">',
}
)
if check == "multiple":
self.log_debug("Parallel downloads detected; waiting 15 minutes")
self.retry(
wait=timedelta(minutes=15).total_seconds(),
msg=self._("Parallel downloads"),
)
elif check == "error":
self.error(self._("Unknown error"))
return SimpleDownloader.check_download(self)
|
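`get_info` above distinguishes direct-download redirects from hoster pages by testing the `Location` header against `__pattern__`. Note that `re.match` takes the pattern first and the subject string second, an argument order that is easy to get backwards in helpers like this; a minimal illustration (the redirect URL is made up):

```python
import re

# The downloader's URL pattern (copied from __pattern__ above).
PATTERN = r"https?://(?:www\.)?filefactory\.com/(file|trafficshare/\w+)/\w+"

# re.match(pattern, string): the regex comes first, the subject second.
direct = "https://cdn.example.com/dl/abc123"  # hypothetical redirect target
hosted = "https://www.filefactory.com/file/abc123"

print(re.match(PATTERN, direct) is None)     # → True  (direct link, skip parsing)
print(re.match(PATTERN, hosted) is not None) # → True  (hoster page)
```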
disc | dbpoweramplog | # -*- coding: utf-8 -*-
#
# Picard, the next-generation MusicBrainz tagger
#
# Copyright (C) 2022 Philipp Wolfer
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import re
from picard.disc.utils import NotSupportedTOCError, TocEntry, calculate_mb_toc_numbers
from picard.util import detect_unicode_encoding
RE_TOC_ENTRY = re.compile(
r"^Track (?P<num>\d+):\s+Ripped LBA (?P<start_sector>\d+) to (?P<end_sector>\d+)"
)
def filter_toc_entries(lines):
"""
Take iterator of lines, return iterator of toc entries
"""
last_track_num = 0
for line in lines:
m = RE_TOC_ENTRY.match(line)
if m:
track_num = int(m["num"])
if last_track_num + 1 != track_num:
raise NotSupportedTOCError(
f"Non consecutive track numbers ({last_track_num} => {track_num}) in dBPoweramp log. Likely a partial rip, disc ID cannot be calculated"
)
last_track_num = track_num
yield TocEntry(track_num, int(m["start_sector"]), int(m["end_sector"]) - 1)
def toc_from_file(path):
"""Reads dBpoweramp log files, generates MusicBrainz disc TOC listing for use as discid."""
encoding = detect_unicode_encoding(path)
with open(path, "r", encoding=encoding) as f:
return calculate_mb_toc_numbers(filter_toc_entries(f))
|
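`filter_toc_entries` above turns dBpoweramp log lines into TOC entries, enforcing consecutive track numbers and converting the log's exclusive end LBA into a last-sector value. A self-contained sketch with fabricated log lines (`TocEntry` and the MusicBrainz disc ID math are omitted):

```python
import re

# Same line shape the Picard parser matches in a dBpoweramp rip log.
RE_TOC_ENTRY = re.compile(
    r"^Track (?P<num>\d+):\s+Ripped LBA (?P<start_sector>\d+) to (?P<end_sector>\d+)"
)

def parse_toc(lines):
    # Yield (track, start, last_sector) tuples; the log's end LBA is
    # exclusive, so subtract 1, as filter_toc_entries does.
    last = 0
    entries = []
    for line in lines:
        m = RE_TOC_ENTRY.match(line)
        if not m:
            continue
        num = int(m["num"])
        if num != last + 1:
            raise ValueError("non-consecutive track numbers: likely a partial rip")
        last = num
        entries.append((num, int(m["start_sector"]), int(m["end_sector"]) - 1))
    return entries

log = [
    "Track 1:  Ripped LBA 0 to 16447 in 12 seconds.",
    "Track 2:  Ripped LBA 16447 to 35357 in 10 seconds.",
]
print(parse_toc(log))  # → [(1, 0, 16446), (2, 16447, 35356)]
```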
extractor | dfb | from __future__ import unicode_literals
import re
from ..utils import unified_strdate
from .common import InfoExtractor
class DFBIE(InfoExtractor):
IE_NAME = "tv.dfb.de"
_VALID_URL = r"https?://tv\.dfb\.de/video/(?P<display_id>[^/]+)/(?P<id>\d+)"
_TEST = {
"url": "http://tv.dfb.de/video/u-19-em-stimmen-zum-spiel-gegen-russland/11633/",
"md5": "ac0f98a52a330f700b4b3034ad240649",
"info_dict": {
"id": "11633",
"display_id": "u-19-em-stimmen-zum-spiel-gegen-russland",
"ext": "mp4",
"title": "U 19-EM: Stimmen zum Spiel gegen Russland",
"upload_date": "20150714",
},
}
def _real_extract(self, url):
display_id, video_id = re.match(self._VALID_URL, url).groups()
player_info = self._download_xml(
"http://tv.dfb.de/server/hd_video.php?play=%s" % video_id, display_id
)
video_info = player_info.find("video")
stream_access_url = self._proto_relative_url(
video_info.find("url").text.strip()
)
formats = []
# see http://tv.dfb.de/player/js/ajax.js for the method to extract m3u8 formats
for sa_url in (stream_access_url, stream_access_url + "&area=&format=iphone"):
stream_access_info = self._download_xml(sa_url, display_id)
token_el = stream_access_info.find("token")
manifest_url = (
token_el.attrib["url"] + "?" + "hdnea=" + token_el.attrib["auth"]
)
if ".f4m" in manifest_url:
formats.extend(
self._extract_f4m_formats(
manifest_url + "&hdcore=3.2.0",
display_id,
f4m_id="hds",
fatal=False,
)
)
else:
formats.extend(
self._extract_m3u8_formats(
manifest_url,
display_id,
"mp4",
"m3u8_native",
m3u8_id="hls",
fatal=False,
)
)
self._sort_formats(formats)
return {
"id": video_id,
"display_id": display_id,
"title": video_info.find("title").text,
"thumbnail": "http://tv.dfb.de/images/%s_640x360.jpg" % video_id,
"upload_date": unified_strdate(video_info.find("time_date").text),
"formats": formats,
}
|
migrations | 0096_plugins | import django.contrib.postgres.fields.jsonb
import django.db.models.deletion
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("posthog", "0095_session_recording_event_table"),
]
operations = [
migrations.CreateModel(
name="Plugin",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("name", models.CharField(blank=True, max_length=200, null=True)),
("description", models.TextField(blank=True, null=True)),
(
"url",
models.CharField(blank=True, max_length=800, null=True),
),
(
"config_schema",
django.contrib.postgres.fields.jsonb.JSONField(default=dict),
),
("tag", models.CharField(blank=True, max_length=200, null=True)),
("archive", models.BinaryField(blank=True, null=True)),
("from_json", models.BooleanField(default=False)),
("from_web", models.BooleanField(default=False)),
(
"error",
django.contrib.postgres.fields.jsonb.JSONField(
default=None, null=True
),
),
],
),
migrations.CreateModel(
name="PluginConfig",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("enabled", models.BooleanField(default=False)),
("order", models.IntegerField(blank=True, null=True)),
(
"config",
django.contrib.postgres.fields.jsonb.JSONField(default=dict),
),
(
"error",
django.contrib.postgres.fields.jsonb.JSONField(
default=None, null=True
),
),
(
"plugin",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="posthog.Plugin"
),
),
(
"team",
models.ForeignKey(
null=True,
on_delete=django.db.models.deletion.CASCADE,
to="posthog.Team",
),
),
],
),
migrations.AddField(
model_name="team",
name="plugins_opt_in",
field=models.BooleanField(default=False),
),
]
|
funnels | funnel_unordered | import uuid
from typing import Any, Dict, List, Optional, cast
from posthog.models.entity.entity import Entity
from posthog.queries.funnels.base import ClickhouseFunnelBase
from posthog.queries.util import correct_result_for_sampling
from rest_framework.exceptions import ValidationError
class ClickhouseFunnelUnordered(ClickhouseFunnelBase):
"""
Unordered Funnel is a funnel where the order of steps doesn't matter.
## Query Intuition
Imagine a funnel with three events: A, B, and C.
This query splits the problem into two parts:
1. Given the first event is A, find the furthest everyone went starting from A.
This finds any B's and C's that happen after A (without ordering them)
2. Repeat the above, assuming first event to be B, and then C.
Then, the outer query unions the result of (2) and takes the maximum of these.
## Results
The result format is the same as the basic funnel, i.e. [step, count].
Here, `step_i` (0 indexed) signifies the number of people that did at least `i+1` steps.
## Exclusion Semantics
For unordered funnels, exclusion is a bit weird. It means, given all ordering of the steps,
how far can you go without seeing an exclusion event.
If you see an exclusion event => you're discarded.
See test_advanced_funnel_multiple_exclusions_between_steps for details.
"""
QUERY_TYPE = "funnel_unordered"
def _serialize_step(
self,
step: Entity,
count: int,
people: Optional[List[uuid.UUID]] = None,
sampling_factor: Optional[float] = None,
) -> Dict[str, Any]:
return {
"action_id": None,
"name": f"Completed {step.index+1} step{'s' if step.index != 0 else ''}",
"custom_name": None,
"order": step.index,
"people": people if people else [],
"count": correct_result_for_sampling(count, sampling_factor),
"type": step.type,
}
def get_query(self):
max_steps = len(self._filter.entities)
for exclusion in self._filter.exclusions:
if (
exclusion.funnel_from_step != 0
or exclusion.funnel_to_step != max_steps - 1
):
raise ValidationError(
"Partial Exclusions not allowed in unordered funnels"
)
breakdown_clause = self._get_breakdown_prop()
return f"""
SELECT {self._get_count_columns(max_steps)} {self._get_step_time_avgs(max_steps)} {self._get_step_time_median(max_steps)} {breakdown_clause} FROM (
{self.get_step_counts_query()}
) {'GROUP BY prop' if breakdown_clause != '' else ''}
"""
def get_step_counts_query(self):
max_steps = len(self._filter.entities)
union_query = self.get_step_counts_without_aggregation_query()
breakdown_clause = self._get_breakdown_prop()
inner_timestamps, outer_timestamps = self._get_timestamp_selects()
return f"""
SELECT aggregation_target, steps {self._get_step_time_avgs(max_steps, inner_query=True)} {self._get_step_time_median(max_steps, inner_query=True)} {breakdown_clause} {outer_timestamps} {self._get_person_and_group_properties(aggregate=True)} FROM (
SELECT aggregation_target, steps, max(steps) over (PARTITION BY aggregation_target {breakdown_clause}) as max_steps {self._get_step_time_names(max_steps)} {breakdown_clause} {inner_timestamps} {self._get_person_and_group_properties()} FROM (
{union_query}
)
) GROUP BY aggregation_target, steps {breakdown_clause}
HAVING steps = max_steps
"""
def get_step_counts_without_aggregation_query(self):
max_steps = len(self._filter.entities)
union_queries = []
entities_to_use = list(self._filter.entities)
partition_select = self._get_partition_cols(1, max_steps)
sorting_condition = self.get_sorting_condition(max_steps)
breakdown_clause = self._get_breakdown_prop(group_remaining=True)
exclusion_clause = self._get_exclusion_condition()
for i in range(max_steps):
inner_query = f"""
SELECT
aggregation_target,
timestamp,
{partition_select}
{breakdown_clause}
{self._get_person_and_group_properties()}
FROM ({self._get_inner_event_query(entities_to_use, f"events_{i}")})
"""
formatted_query = f"""
SELECT *, {sorting_condition} AS steps {exclusion_clause} {self._get_step_times(max_steps)} {self._get_person_and_group_properties()} FROM (
{inner_query}
) WHERE step_0 = 1
{'AND exclusion = 0' if exclusion_clause else ''}
"""
# rotate entities by 1 to get new first event
entities_to_use.append(entities_to_use.pop(0))
union_queries.append(formatted_query)
return " UNION ALL ".join(union_queries)
def _get_step_times(self, max_steps: int):
conditions: List[str] = []
conversion_times_elements = []
for i in range(max_steps):
conversion_times_elements.append(f"latest_{i}")
conditions.append(
f"arraySort([{','.join(conversion_times_elements)}]) as conversion_times"
)
for i in range(1, max_steps):
conditions.append(
f"if(isNotNull(conversion_times[{i+1}]) AND conversion_times[{i+1}] <= conversion_times[{i}] + INTERVAL {self._filter.funnel_window_interval} {self._filter.funnel_window_interval_unit_ch()}, "
f"dateDiff('second', conversion_times[{i}], conversion_times[{i+1}]), NULL) step_{i}_conversion_time"
)
# array indices in ClickHouse are 1-based :shrug:
formatted = ", ".join(conditions)
return f", {formatted}" if formatted else ""
def get_sorting_condition(self, max_steps: int):
conditions = []
event_times_elements = []
for i in range(max_steps):
event_times_elements.append(f"latest_{i}")
conditions.append(
f"arraySort([{','.join(event_times_elements)}]) as event_times"
)
# replacement of latest_i for whatever query part requires it, just like conversion_times
basic_conditions: List[str] = []
for i in range(1, max_steps):
basic_conditions.append(
f"if(latest_0 < latest_{i} AND latest_{i} <= latest_0 + INTERVAL {self._filter.funnel_window_interval} {self._filter.funnel_window_interval_unit_ch()}, 1, 0)"
)
conditions.append(f"arraySum([{','.join(basic_conditions)}, 1])")
if basic_conditions:
return ",".join(conditions)
else:
return "1"
def _get_exclusion_condition(self):
if not self._filter.exclusions:
return ""
conditions = []
for exclusion_id, exclusion in enumerate(self._filter.exclusions):
from_time = f"latest_{exclusion.funnel_from_step}"
to_time = f"event_times[{cast(int, exclusion.funnel_to_step) + 1}]"
exclusion_time = (
f"exclusion_{exclusion_id}_latest_{exclusion.funnel_from_step}"
)
condition = (
f"if( {exclusion_time} > {from_time} AND {exclusion_time} < "
f"if(isNull({to_time}), {from_time} + INTERVAL {self._filter.funnel_window_interval} {self._filter.funnel_window_interval_unit_ch()}, {to_time}), 1, 0)"
)
conditions.append(condition)
if conditions:
return f", arraySum([{','.join(conditions)}]) as exclusion"
else:
return ""
|
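The docstring of `ClickhouseFunnelUnordered` explains the rotate-and-union trick: each step takes a turn as the funnel's first event, and the best depth across rotations wins. A toy pure-Python model of that intuition (it ignores exclusions, breakdowns, and conversion times, and is not the actual ClickHouse semantics):

```python
from datetime import datetime, timedelta

def unordered_funnel_depth(events, steps, window=timedelta(days=14)):
    # Every step gets a turn as the "first" event; from each of its
    # occurrences, count how many other steps happen strictly afterwards
    # within the conversion window, in any order. Keep the best depth.
    times = {}
    for name, ts in events:
        times.setdefault(name, []).append(ts)
    best = 0
    for first in steps:
        for start in times.get(first, []):
            done = 1
            for other in steps:
                if other == first:
                    continue
                if any(start < t <= start + window for t in times.get(other, [])):
                    done += 1
            best = max(best, done)
    return best

t0 = datetime(2023, 1, 1)
# B and C happen after A in either order; the unordered funnel counts all 3.
events = [("A", t0), ("C", t0 + timedelta(days=1)), ("B", t0 + timedelta(days=2))]
print(unordered_funnel_depth(events, ["A", "B", "C"]))  # → 3
```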
sabnzbd | getipaddress | #!/usr/bin/python3 -OO
# Copyright 2007-2023 The SABnzbd-Team (sabnzbd.org)
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
"""
sabnzbd.getipaddress
"""
import functools
import logging
import multiprocessing.pool
import socket
import time
import urllib.error
import urllib.request
from typing import Callable
import sabnzbd
import sabnzbd.cfg
import socks
from sabnzbd.encoding import ubtou
def timeout(max_timeout: float):
"""Timeout decorator, parameter in seconds."""
def timeout_decorator(item: Callable) -> Callable:
"""Wrap the original function."""
@functools.wraps(item)
def func_wrapper(*args, **kwargs):
"""Closure for function."""
# Raises a TimeoutError if execution exceeds max_timeout
# Raises a RuntimeError is SABnzbd is already shutting down when called
try:
return sabnzbd.THREAD_POOL.submit(item, *args, **kwargs).result(
max_timeout
)
except (TimeoutError, RuntimeError):
return None
return func_wrapper
return timeout_decorator
@timeout(3.0)
def addresslookup(myhost):
return socket.getaddrinfo(myhost, 80)
@timeout(3.0)
def addresslookup4(myhost):
return socket.getaddrinfo(myhost, 80, socket.AF_INET)
@timeout(3.0)
def addresslookup6(myhost):
return socket.getaddrinfo(myhost, 80, socket.AF_INET6)
def active_socks5_proxy():
"""Return the active proxy"""
if socket.socket == socks.socksocket:
return "%s:%s" % socks.socksocket.default_proxy[1:3]
return None
def dnslookup():
"""Perform a basic DNS lookup"""
start = time.time()
try:
addresslookup(sabnzbd.cfg.selftest_host())
result = True
except Exception:
result = False
logging.debug("DNS Lookup = %s (in %.2f seconds)", result, time.time() - start)
return result
def localipv4():
try:
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s_ipv4:
# Option: use 100.64.1.1 (IANA-Reserved IPv4 Prefix for Shared Address Space)
s_ipv4.connect(("10.255.255.255", 80))
ipv4 = s_ipv4.getsockname()[0]
except socket.error:
ipv4 = None
logging.debug("Local IPv4 address = %s", ipv4)
return ipv4
def publicipv4():
"""Because of dual IPv4/IPv6 clients, finding the
public ipv4 needs special attention, meaning forcing
IPv4 connections, and not allowing IPv6 connections
Function uses sabnzbd.cfg.selftest_host(), which must report our public IPv4 address over which we access it
"""
start = time.time()
try:
# look up IPv4 addresses of selftest_host
lookup_result_iv4 = addresslookup4(sabnzbd.cfg.selftest_host())
# Make sure there is a result, abort otherwise
if not lookup_result_iv4:
raise Exception
except Exception:
# something very bad: no name resolving of selftest_host
logging.debug(
"Failed to detect public IPv4 address: looking up %s failed",
sabnzbd.cfg.selftest_host(),
)
return None
public_ipv4 = None
# we got one or more IPv4 address(es) for selftest_host, so let's connect and ask for our own public IPv4
for item in lookup_result_iv4:
# get next IPv4 address of sabnzbd.cfg.selftest_host()
selftest_ipv4 = item[4][0]
try:
# put the selftest_host's IPv4 address into the URL
req = urllib.request.Request("http://" + selftest_ipv4 + "/")
# specify the User-Agent, because certain sites refuse connections with "python urllib2" as User-Agent:
req.add_header("User-Agent", "SABnzbd/%s" % sabnzbd.__version__)
# specify the Host, because we only provide the IPv4 address in the URL:
req.add_header("Host", sabnzbd.cfg.selftest_host())
# get the response, timeout 2 seconds, in case the website is not accessible
public_ipv4 = ubtou(urllib.request.urlopen(req, timeout=2).read())
# ... check the response is indeed an IPv4 address:
# if we got anything else than a plain IPv4 address, this will raise an exception
socket.inet_aton(public_ipv4)
# if we get here without exception, we found our public IPv4, and we're done:
break
except (socket.error, urllib.error.URLError):
# the connect OR the inet_aton raised an exception, so:
public_ipv4 = None # reset
# continue the for loop to try next server IPv4 address
pass
if not public_ipv4:
logging.debug(
"Failed to get public IPv4 address from %s", sabnzbd.cfg.selftest_host()
)
return None
logging.debug(
"Public IPv4 address = %s (in %.2f seconds)", public_ipv4, time.time() - start
)
return public_ipv4
def ipv6():
try:
with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) as s_ipv6:
# IPv6 prefix for documentation purpose
s_ipv6.connect(("2001:db8::8080", 80))
ipv6_address = s_ipv6.getsockname()[0]
except Exception:
ipv6_address = None
logging.debug("IPv6 address = %s", ipv6_address)
return ipv6_address
|
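The `timeout` decorator above submits the wrapped call to `sabnzbd.THREAD_POOL` and returns `None` if the result is not ready within `max_timeout`. A self-contained sketch using a local `ThreadPoolExecutor` instead (note the worker thread is not cancelled; only the caller stops waiting):

```python
import functools
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeoutError

_POOL = ThreadPoolExecutor(max_workers=4)

def timeout(max_timeout):
    """Timeout decorator, parameter in seconds; returns None on timeout."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return _POOL.submit(func, *args, **kwargs).result(max_timeout)
            except (FutureTimeoutError, RuntimeError):
                # Timed out, or the pool is already shutting down.
                return None
        return wrapper
    return decorator

@timeout(0.1)
def slow():
    time.sleep(1)
    return "done"

@timeout(1.0)
def fast():
    return "done"

print(slow(), fast())  # → None done
```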