content | id
---|---|
def create_authinfo(computer, store=False):
"""Allow the current user to use the given computer."""
from aiida.orm import AuthInfo
authinfo = AuthInfo(computer=computer, user=get_current_user())
if store:
authinfo.store()
return authinfo | 0 |
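# A hedged usage sketch (assumptions: an AiiDA profile is loaded, a computer
# labelled 'localhost' is already configured, and get_current_user() is
# provided by the surrounding module).
from aiida.orm import load_computer

computer = load_computer('localhost')          # look up an existing Computer
authinfo = create_authinfo(computer, store=True)
print(authinfo)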
def retinanet_target_assign(bbox_pred,
cls_logits,
anchor_box,
anchor_var,
gt_boxes,
gt_labels,
is_crowd,
im_info,
num_classes=1,
positive_overlap=0.5,
negative_overlap=0.4):
"""
**Target Assign Layer for the detector RetinaNet.**
This OP finds out positive and negative samples from all anchors
for training the detector `RetinaNet <https://arxiv.org/abs/1708.02002>`_ ,
and assigns target labels for classification along with target locations for
regression to each sample, then takes out the part belonging to positive and
negative samples from category prediction( :attr:`cls_logits`) and location
prediction( :attr:`bbox_pred`) which belong to all anchors.
    The rules for selecting positive and negative samples are as follows:
    1. An anchor is assigned to a ground-truth box if it has the highest IoU
    overlap with that ground-truth box.
    2. An anchor is assigned to a ground-truth box if it has an IoU overlap
    higher than :attr:`positive_overlap` with any ground-truth box.
    3. An anchor is assigned to the background if its IoU overlap is lower than
    :attr:`negative_overlap` for all ground-truth boxes.
    4. Anchors which do not meet the above conditions do not participate in
    the training process.
    RetinaNet predicts a :math:`C`-vector for classification and a 4-vector for box
    regression for each anchor, hence the target label for each positive (or negative)
    sample is a :math:`C`-vector and the target location for each positive sample
    is a 4-vector. For a positive sample, if the category of its assigned
    ground-truth box is class :math:`i`, the corresponding entry in its length-
    :math:`C` label vector is set to 1 and all other entries are set to 0; its box
    regression targets are computed as the offset between itself and its assigned
    ground-truth box. For a negative sample, all entries in its length-:math:`C`
    label vector are set to 0, and box regression targets are omitted because
    negative samples do not participate in the training process of location
    regression.
After the assignment, the part belonging to positive and negative samples is
taken out from category prediction( :attr:`cls_logits` ), and the part
belonging to positive samples is taken out from location
prediction( :attr:`bbox_pred` ).
Args:
bbox_pred(Variable): A 3-D Tensor with shape :math:`[N, M, 4]` represents
the predicted locations of all anchors. :math:`N` is the batch size( the
number of images in a mini-batch), :math:`M` is the number of all anchors
of one image, and each anchor has 4 coordinate values. The data type of
:attr:`bbox_pred` is float32 or float64.
cls_logits(Variable): A 3-D Tensor with shape :math:`[N, M, C]` represents
the predicted categories of all anchors. :math:`N` is the batch size,
:math:`M` is the number of all anchors of one image, and :math:`C` is
the number of categories (**Notice: excluding background**). The data type
of :attr:`cls_logits` is float32 or float64.
anchor_box(Variable): A 2-D Tensor with shape :math:`[M, 4]` represents
the locations of all anchors. :math:`M` is the number of all anchors of
one image, each anchor is represented as :math:`[xmin, ymin, xmax, ymax]`,
:math:`[xmin, ymin]` is the left top coordinate of the anchor box,
:math:`[xmax, ymax]` is the right bottom coordinate of the anchor box.
The data type of :attr:`anchor_box` is float32 or float64. Please refer
to the OP :ref:`api_fluid_layers_anchor_generator`
for the generation of :attr:`anchor_box`.
anchor_var(Variable): A 2-D Tensor with shape :math:`[M,4]` represents the expanded
factors of anchor locations used in loss function. :math:`M` is number of
all anchors of one image, each anchor possesses a 4-vector expanded factor.
The data type of :attr:`anchor_var` is float32 or float64. Please refer
to the OP :ref:`api_fluid_layers_anchor_generator`
for the generation of :attr:`anchor_var`.
gt_boxes(Variable): A 1-level 2-D LoDTensor with shape :math:`[G, 4]` represents
locations of all ground-truth boxes. :math:`G` is the total number of
all ground-truth boxes in a mini-batch, and each ground-truth box has 4
coordinate values. The data type of :attr:`gt_boxes` is float32 or
float64.
        gt_labels(Variable): A 1-level 2-D LoDTensor with shape :math:`[G, 1]` represents
categories of all ground-truth boxes, and the values are in the range of
:math:`[1, C]`. :math:`G` is the total number of all ground-truth boxes
in a mini-batch, and each ground-truth box has one category. The data type
of :attr:`gt_labels` is int32.
is_crowd(Variable): A 1-level 1-D LoDTensor with shape :math:`[G]` which
indicates whether a ground-truth box is a crowd. If the value is 1, the
corresponding box is a crowd, it is ignored during training. :math:`G` is
the total number of all ground-truth boxes in a mini-batch. The data type
of :attr:`is_crowd` is int32.
im_info(Variable): A 2-D Tensor with shape [N, 3] represents the size
        information of input images. :math:`N` is the batch size; the size
        information of each image is a 3-vector consisting of the height and width
        of the network input, along with the factor by which the original image is
        scaled to the network input. The data type of :attr:`im_info` is float32.
num_classes(int32): The number of categories for classification, the default
value is 1.
positive_overlap(float32): Minimum overlap required between an anchor
and ground-truth box for the anchor to be a positive sample, the default
value is 0.5.
negative_overlap(float32): Maximum overlap allowed between an anchor
and ground-truth box for the anchor to be a negative sample, the default
        value is 0.4. :attr:`negative_overlap` should be less than or equal to
        :attr:`positive_overlap`; if it is not, :attr:`negative_overlap` is used
        as the effective value of :attr:`positive_overlap`.
Returns:
A tuple with 6 Variables:
**predict_scores** (Variable): A 2-D Tensor with shape :math:`[F+B, C]` represents
category prediction belonging to positive and negative samples. :math:`F`
is the number of positive samples in a mini-batch, :math:`B` is the number
of negative samples, and :math:`C` is the number of categories
(**Notice: excluding background**). The data type of :attr:`predict_scores`
is float32 or float64.
        **predict_location** (Variable): A 2-D Tensor with shape :math:`[F, 4]` represents
            location prediction belonging to positive samples. :math:`F` is the number
            of positive samples, and each sample has 4 coordinate values. The data type
            of :attr:`predict_location` is float32 or float64.
**target_label** (Variable): A 2-D Tensor with shape :math:`[F+B, 1]` represents
target labels for classification belonging to positive and negative
samples. :math:`F` is the number of positive samples, :math:`B` is the
number of negative, and each sample has one target category. The data type
of :attr:`target_label` is int32.
**target_bbox** (Variable): A 2-D Tensor with shape :math:`[F, 4]` represents
target locations for box regression belonging to positive samples.
:math:`F` is the number of positive samples, and each sample has 4
coordinate values. The data type of :attr:`target_bbox` is float32 or
float64.
        **bbox_inside_weight** (Variable): A 2-D Tensor with shape :math:`[F, 4]`
            represents whether a positive sample is a fake positive: if a positive
            sample is a fake positive, the corresponding entries in
            :attr:`bbox_inside_weight` are set to 0, otherwise to 1. :math:`F` is the number
of total positive samples in a mini-batch, and each sample has 4
coordinate values. The data type of :attr:`bbox_inside_weight` is float32
or float64.
**fg_num** (Variable): A 2-D Tensor with shape :math:`[N, 1]` represents the number
of positive samples. :math:`N` is the batch size. **Notice: The number
of positive samples is used as the denominator of later loss function,
to avoid the condition that the denominator is zero, this OP has added 1
to the actual number of positive samples of each image.** The data type of
:attr:`fg_num` is int32.
Examples:
.. code-block:: python
import paddle.fluid as fluid
bbox_pred = fluid.data(name='bbox_pred', shape=[1, 100, 4],
dtype='float32')
cls_logits = fluid.data(name='cls_logits', shape=[1, 100, 10],
dtype='float32')
anchor_box = fluid.data(name='anchor_box', shape=[100, 4],
dtype='float32')
anchor_var = fluid.data(name='anchor_var', shape=[100, 4],
dtype='float32')
gt_boxes = fluid.data(name='gt_boxes', shape=[10, 4],
dtype='float32')
gt_labels = fluid.data(name='gt_labels', shape=[10, 1],
dtype='int32')
is_crowd = fluid.data(name='is_crowd', shape=[1],
dtype='int32')
im_info = fluid.data(name='im_info', shape=[1, 3],
dtype='float32')
score_pred, loc_pred, score_target, loc_target, bbox_inside_weight, fg_num = \\
fluid.layers.retinanet_target_assign(bbox_pred, cls_logits, anchor_box,
anchor_var, gt_boxes, gt_labels, is_crowd, im_info, 10)
"""
check_variable_and_dtype(bbox_pred, 'bbox_pred', ['float32', 'float64'],
'retinanet_target_assign')
check_variable_and_dtype(cls_logits, 'cls_logits', ['float32', 'float64'],
'retinanet_target_assign')
check_variable_and_dtype(anchor_box, 'anchor_box', ['float32', 'float64'],
'retinanet_target_assign')
check_variable_and_dtype(anchor_var, 'anchor_var', ['float32', 'float64'],
'retinanet_target_assign')
check_variable_and_dtype(gt_boxes, 'gt_boxes', ['float32', 'float64'],
'retinanet_target_assign')
check_variable_and_dtype(gt_labels, 'gt_labels', ['int32'],
'retinanet_target_assign')
check_variable_and_dtype(is_crowd, 'is_crowd', ['int32'],
'retinanet_target_assign')
check_variable_and_dtype(im_info, 'im_info', ['float32', 'float64'],
'retinanet_target_assign')
helper = LayerHelper('retinanet_target_assign', **locals())
# Assign target label to anchors
loc_index = helper.create_variable_for_type_inference(dtype='int32')
score_index = helper.create_variable_for_type_inference(dtype='int32')
target_label = helper.create_variable_for_type_inference(dtype='int32')
target_bbox = helper.create_variable_for_type_inference(
dtype=anchor_box.dtype)
bbox_inside_weight = helper.create_variable_for_type_inference(
dtype=anchor_box.dtype)
fg_num = helper.create_variable_for_type_inference(dtype='int32')
helper.append_op(
type="retinanet_target_assign",
inputs={
'Anchor': anchor_box,
'GtBoxes': gt_boxes,
'GtLabels': gt_labels,
'IsCrowd': is_crowd,
'ImInfo': im_info
},
outputs={
'LocationIndex': loc_index,
'ScoreIndex': score_index,
'TargetLabel': target_label,
'TargetBBox': target_bbox,
'BBoxInsideWeight': bbox_inside_weight,
'ForegroundNumber': fg_num
},
attrs={
'positive_overlap': positive_overlap,
'negative_overlap': negative_overlap
})
loc_index.stop_gradient = True
score_index.stop_gradient = True
target_label.stop_gradient = True
target_bbox.stop_gradient = True
bbox_inside_weight.stop_gradient = True
fg_num.stop_gradient = True
cls_logits = nn.reshape(x=cls_logits, shape=(-1, num_classes))
bbox_pred = nn.reshape(x=bbox_pred, shape=(-1, 4))
predicted_cls_logits = nn.gather(cls_logits, score_index)
predicted_bbox_pred = nn.gather(bbox_pred, loc_index)
return predicted_cls_logits, predicted_bbox_pred, target_label, target_bbox, bbox_inside_weight, fg_num | 1 |
def test_01():
""" Runs OK """
run(SAMPLE1, 3) | 2 |
def example_1():
"""
THIS IS A LONG COMMENT AND should be wrapped to fit within a 72
character limit
"""
long_1 = """LONG CODE LINES should be wrapped within 79 character to
prevent page cutoff stuff"""
long_2 = """This IS a long string that looks gross and goes beyond
what it should"""
some_tuple =(1, 2, 3, 'a')
some_variable={"long": long_1,
'other':[math.pi, 100,200, 300, 9999292929292, long_2],
"more": {"inner": "THIS whole logical line should be wrapped"},
"data": [444,5555,222,3,3,4,4,5,5,5,5,5,5,5]}
return (some_tuple, some_variable) | 3 |
def simple_cnn(input_shape=(32, 32, 2)):
"""Creates a 2-D convolutional encoder-decoder network.
Parameters
----------
input_shape : sequence of ints, optional
Input data shape of the form (H, W, C). Default is (32, 32, 2).
Returns
-------
model
An instance of Keras' Model class.
Notes
-----
Given a concatenated pair of static and moving images as input, the
CNN computes a dense displacement field that is used to warp the
moving image to match with the static image.
The number of channels in the output (displacement field) is equal
to the dimensionality of the input data. For 3-D volumes, it is 3,
and for 2-D images, it is 2. The first channel comprises
displacement in the x-direction and the second comprises
displacement in the y-direction.
"""
out_channels = 2
inputs = layers.Input(shape=input_shape)
# encoder
x = layers.Conv2D(32, kernel_size=3, strides=2, padding='same',
activation='relu')(inputs) # 32 --> 16
x = layers.BatchNormalization()(x) # 16
x = layers.Conv2D(32, kernel_size=3, strides=1, padding='same',
activation='relu')(x) # 16
x = layers.BatchNormalization()(x) # 16
x = layers.MaxPool2D(pool_size=2)(x) # 16 --> 8
x = layers.Conv2D(64, kernel_size=3, strides=1, padding='same',
activation='relu')(x) # 8
x = layers.BatchNormalization()(x) # 8
x = layers.Conv2D(64, kernel_size=3, strides=1, padding='same',
activation='relu')(x) # 8
x = layers.BatchNormalization()(x) # 8
x = layers.MaxPool2D(pool_size=2)(x) # 8 --> 4
x = layers.Conv2D(128, kernel_size=3, strides=1, padding='same',
activation='relu')(x) # 4
x = layers.BatchNormalization()(x) # 4
# decoder
x = layers.Conv2DTranspose(64, kernel_size=2, strides=2,
padding='same')(x) # 4 --> 8
x = layers.Conv2D(64, kernel_size=3, strides=1, padding='same',
activation='relu')(x) # 8
x = layers.BatchNormalization()(x) # 8
x = layers.Conv2DTranspose(32, kernel_size=2, strides=2,
padding='same')(x) # 8 --> 16
x = layers.Conv2D(32, kernel_size=3, strides=1, padding='same',
activation='relu')(x) # 16
x = layers.BatchNormalization()(x) # 16
x = layers.Conv2DTranspose(16, kernel_size=2, strides=2,
padding='same')(x) # 16 --> 32
x = layers.Conv2D(16, kernel_size=3, strides=1, padding='same',
activation='relu')(x) # 32
x = layers.BatchNormalization()(x) # 32
x = layers.Conv2D(out_channels, kernel_size=1, strides=1,
padding='same')(x) # 32
# Create the model.
model = tf.keras.Model(inputs, x, name='simple_cnn')
return model
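# A minimal build-and-inspect sketch (assumption: TensorFlow 2.x is available and
# `layers`/`tf` above refer to tensorflow.keras.layers / tensorflow).
import tensorflow as tf
from tensorflow.keras import layers

model = simple_cnn(input_shape=(32, 32, 2))
model.summary()   # the final 1x1 conv outputs a (32, 32, 2) displacement field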
"""
Differentiable image sampling
References:
1. https://github.com/tensorflow/models/blob/master/research/transformer/spatial_transformer.py
2. Jaderberg, Max, Karen Simonyan, and Andrew Zisserman. "Spatial
transformer networks." Advances in neural information processing
systems. 2015. https://arxiv.org/pdf/1506.02025.pdf
3. *Spatial* Transformer Networks by Kushagra Bhatnagar https://link.medium.com/0b2OrmqVO5
""" | 4 |
def utc_from_timestamp(timestamp: float) -> dt.datetime:
"""Return a UTC time from a timestamp."""
return UTC.localize(dt.datetime.utcfromtimestamp(timestamp)) | 5 |
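# A quick check (assumption: the module-level `UTC` is pytz.utc, as the
# localize() call suggests).
import datetime as dt
import pytz

UTC = pytz.utc
print(utc_from_timestamp(0))           # 1970-01-01 00:00:00+00:00
print(utc_from_timestamp(1609459200))  # 2021-01-01 00:00:00+00:00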
async def camera_privacy_fixture(
hass: HomeAssistant, mock_entry: MockEntityFixture, mock_camera: Camera
):
"""Fixture for a single camera for testing the switch platform."""
# disable pydantic validation so mocking can happen
Camera.__config__.validate_assignment = False
camera_obj = mock_camera.copy(deep=True)
camera_obj._api = mock_entry.api
camera_obj.channels[0]._api = mock_entry.api
camera_obj.channels[1]._api = mock_entry.api
camera_obj.channels[2]._api = mock_entry.api
camera_obj.name = "Test Camera"
camera_obj.recording_settings.mode = RecordingMode.NEVER
camera_obj.feature_flags.has_led_status = False
camera_obj.feature_flags.has_hdr = False
camera_obj.feature_flags.video_modes = [VideoMode.DEFAULT]
camera_obj.feature_flags.has_privacy_mask = True
camera_obj.feature_flags.has_speaker = False
camera_obj.feature_flags.has_smart_detect = False
camera_obj.add_privacy_zone()
camera_obj.is_ssh_enabled = False
camera_obj.osd_settings.is_name_enabled = False
camera_obj.osd_settings.is_date_enabled = False
camera_obj.osd_settings.is_logo_enabled = False
camera_obj.osd_settings.is_debug_enabled = False
mock_entry.api.bootstrap.reset_objects()
mock_entry.api.bootstrap.cameras = {
camera_obj.id: camera_obj,
}
await hass.config_entries.async_setup(mock_entry.entry.entry_id)
await hass.async_block_till_done()
assert_entity_counts(hass, Platform.SWITCH, 6, 5)
yield camera_obj
Camera.__config__.validate_assignment = True | 6 |
def rotate(
component: ComponentOrFactory,
angle: float = 90,
) -> Component:
"""Return rotated component inside a new component.
Most times you just need to place a reference and rotate it.
This rotate function just encapsulates the rotated reference into a new component.
    Args:
        component: Component or component factory to rotate.
        angle: rotation angle in degrees.
"""
component = component() if callable(component) else component
component_new = Component()
component_new.component = component
ref = component_new.add_ref(component)
ref.rotate(angle)
component_new.add_ports(ref.ports)
component_new.copy_child_info(component)
return component_new | 7 |
def move_media(origin_server, file_id, src_paths, dest_paths):
"""Move the given file, and any thumbnails, to the dest repo
Args:
origin_server (str):
file_id (str):
src_paths (MediaFilePaths):
dest_paths (MediaFilePaths):
"""
logger.info("%s/%s", origin_server, file_id)
# check that the original exists
original_file = src_paths.remote_media_filepath(origin_server, file_id)
if not os.path.exists(original_file):
        logger.warning(
"Original for %s/%s (%s) does not exist",
origin_server, file_id, original_file,
)
else:
mkdir_and_move(
original_file,
dest_paths.remote_media_filepath(origin_server, file_id),
)
# now look for thumbnails
original_thumb_dir = src_paths.remote_media_thumbnail_dir(
origin_server, file_id,
)
if not os.path.exists(original_thumb_dir):
return
mkdir_and_move(
original_thumb_dir,
dest_paths.remote_media_thumbnail_dir(origin_server, file_id)
) | 8 |
def is_db_user_superuser(conn):
"""Function to test whether the current DB user is a PostgreSQL superuser."""
logger = logging.getLogger('dirbs.db')
with conn.cursor() as cur:
cur.execute("""SELECT rolsuper
FROM pg_roles
WHERE rolname = CURRENT_USER""")
res = cur.fetchone()
if res is None:
            logger.warning('Failed to find CURRENT_USER in pg_roles table')
return False
return res[0] | 9 |
def handle_floor(expr):
"""
Apply floor() then return the floored expression.
expr: Expr - sympy expression as an argument to floor()
"""
return sympy.functions.floor(expr, evaluate=False) | 10 |
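# A small illustration of the unevaluated floor expression returned above.
import sympy

x = sympy.Symbol('x')
expr = handle_floor(x + sympy.Rational(1, 2))
print(expr)             # floor(x + 1/2)
print(expr.subs(x, 2))  # 2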
def run_client(server_address, server_port):
"""Ping a UDP pinger server running at the given address
"""
# Fill in the client side code here.
raise NotImplementedError
return 0 | 11 |
def gamma_trace(t):
"""
trace of a single line of gamma matrices
Examples
========
>>> from sympy.physics.hep.gamma_matrices import GammaMatrix as G, \
gamma_trace, LorentzIndex
>>> from sympy.tensor.tensor import tensor_indices, tensor_heads
>>> p, q = tensor_heads('p, q', [LorentzIndex])
>>> i0,i1,i2,i3,i4,i5 = tensor_indices('i0:6', LorentzIndex)
>>> ps = p(i0)*G(-i0)
>>> qs = q(i0)*G(-i0)
>>> gamma_trace(G(i0)*G(i1))
4*metric(i0, i1)
>>> gamma_trace(ps*ps) - 4*p(i0)*p(-i0)
0
>>> gamma_trace(ps*qs + ps*ps) - 4*p(i0)*p(-i0) - 4*p(i0)*q(-i0)
0
"""
if isinstance(t, TensAdd):
res = TensAdd(*[_trace_single_line(x) for x in t.args])
return res
t = _simplify_single_line(t)
res = _trace_single_line(t)
return res | 12 |
def _GetReportingClient():
"""Returns a client that uses an API key for Cloud SDK crash reports.
Returns:
An error reporting client that uses an API key for Cloud SDK crash reports.
"""
client_class = core_apis.GetClientClass(util.API_NAME, util.API_VERSION)
client_instance = client_class(get_credentials=False, http=http.Http())
client_instance.AddGlobalParam('key', CRASH_API_KEY)
return client_instance | 13 |
def page_not_found(e):
"""Return a custom 404 error."""
return 'Sorry, nothing at this URL.', 404 | 14 |
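# A hedged registration sketch (assumption: a Flask `app` object exists in this
# module; the (body, status) tuple is Flask's standard error-handler return form).
from flask import Flask

app = Flask(__name__)
app.register_error_handler(404, page_not_found)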
def get_nm_node_params(nm_host):
"""
Return a dict of all node params in NM,
with their id as the dict key.
:param nm_host: NodeMeister hostname/IP
:type nm_host: string
:rtype: dict
    :returns: NM node params, dict of the form:
      {id<int>: {'paramkey': <string>, 'paramvalue': <string or None>, 'node': <int>, 'id': <int>}}
"""
r = {}
j = get_json("http://%s/enc/parameters/nodes/" % nm_host)
for o in j:
r[o['id']] = o
return r | 15 |
def _nodef_to_private_pond(converter, x):
"""Map a NodeDef x to a PrivatePondTensor."""
dtype = x.attr["dtype"].type
    warn_msg = "Unexpected dtype %s found at node %s"
err_msg = "Unsupported dtype {} found at node {}"
x_shape = [i.size for i in x.attr["value"].tensor.tensor_shape.dim]
if not x_shape:
if dtype == tf.float32:
nums = x.attr["value"].tensor.float_val
elif dtype == tf.float64:
nums = x.attr["value"].tensor.float_val
elif dtype == tf.int32:
logging.warning(warn_msg, dtype, x.name)
nums = x.attr["value"].tensor.int_val
else:
raise TypeError(err_msg.format(dtype, x.name))
def inputter_fn():
return tf.constant(np.array(nums).reshape(1, 1))
else:
if dtype == tf.float32:
nums = array.array('f', x.attr["value"].tensor.tensor_content)
elif dtype == tf.float64:
nums = array.array('d', x.attr["value"].tensor.tensor_content)
elif dtype == tf.int32:
logging.warning(warn_msg, dtype, x.name)
nums = array.array('i', x.attr["value"].tensor.tensor_content)
else:
raise TypeError(err_msg.format(dtype, x.name))
def inputter_fn():
return tf.constant(np.array(nums).reshape(x_shape))
x_private = converter.protocol.define_private_input(
converter.model_provider, inputter_fn)
return x_private | 16 |
def get_keys(install=False, trust=False, force=False):
"""
Get pgp public keys available on mirror
with suffix .key or .pub
"""
if not spack.mirror.MirrorCollection():
tty.die("Please add a spack mirror to allow " +
"download of build caches.")
keys = set()
for mirror in spack.mirror.MirrorCollection().values():
fetch_url_build_cache = url_util.join(
mirror.fetch_url, _build_cache_relative_path)
mirror_dir = url_util.local_file_path(fetch_url_build_cache)
if mirror_dir:
tty.msg("Finding public keys in %s" % mirror_dir)
files = os.listdir(str(mirror_dir))
for file in files:
if re.search(r'\.key', file) or re.search(r'\.pub', file):
link = url_util.join(fetch_url_build_cache, file)
keys.add(link)
else:
tty.msg("Finding public keys at %s" %
url_util.format(fetch_url_build_cache))
# For s3 mirror need to request index.html directly
p, links = web_util.spider(
url_util.join(fetch_url_build_cache, 'index.html'), depth=1)
for link in links:
if re.search(r'\.key', link) or re.search(r'\.pub', link):
keys.add(link)
for link in keys:
with Stage(link, name="build_cache", keep=True) as stage:
if os.path.exists(stage.save_filename) and force:
os.remove(stage.save_filename)
if not os.path.exists(stage.save_filename):
try:
stage.fetch()
except fs.FetchError:
continue
tty.msg('Found key %s' % link)
if install:
if trust:
Gpg.trust(stage.save_filename)
tty.msg('Added this key to trusted keys.')
else:
                    tty.msg('Will not add this key to trusted keys. '
                            'Use -t to install all downloaded keys') | 17 |
def plot_plane(ax,
distances:list,
z_coords:list,
label:str=None,
decorate:bool=True,
show_half:bool=False,
**kwargs):
"""
Plot plane.
Args:
ax: matplotlib ax.
distances (list): List of plane intervals.
z_coords (list): List of z coordinate of each plane.
label (str): Plot label.
decorate (bool): If True, ax is decorated.
show_half: If True, atom planes which are periodically equivalent are
            not shown.
"""
if decorate:
xlabel = 'Distance'
        ylabel = 'Height'
else:
xlabel = ylabel = None
_distances = deepcopy(distances)
_z_coords = deepcopy(z_coords)
_distances.insert(0, distances[-1])
_distances.append(distances[0])
_z_coords.insert(0, -distances[-1])
_z_coords.append(z_coords[-1]+distances[0])
c = np.sum(distances)
fixed_z_coords = _z_coords + distances[0] / 2 - c / 2
num = len(fixed_z_coords)
bulk_distance = _distances[int(num/4)]
if show_half:
n = int((num + 2) / 4)
_distances = _distances[n:3*n]
fixed_z_coords = fixed_z_coords[n:3*n]
line_chart(ax=ax,
xdata=_distances,
ydata=fixed_z_coords,
xlabel=xlabel,
ylabel=ylabel,
label=label,
sort_by='y',
**kwargs)
if decorate:
xmin = bulk_distance - 0.025
xmax = bulk_distance + 0.025
if show_half:
ax.hlines(0,
xmin=xmin-0.01,
xmax=xmax+0.01,
linestyle='--',
color='k',
linewidth=1.)
else:
tb_idx = [1, int(num/2), num-1]
for idx in tb_idx:
ax.hlines(fixed_z_coords[idx]-distances[0]/2,
xmin=xmin-0.01,
xmax=xmax+0.01,
linestyle='--',
color='k',
linewidth=1.) | 18 |
def local(jarfile, klass, *args):
"""Syntax: [storm local topology-jar-path class ...]
Runs the main method of class with the specified arguments but pointing to a local cluster
The storm jars and configs in ~/.storm are put on the classpath.
The process is configured so that StormSubmitter
(http://storm.apache.org/releases/current/javadocs/org/apache/storm/StormSubmitter.html)
and others will interact with a local cluster instead of the one configured by default.
Most options should work just like with the storm jar command.
local also adds in the option --local-ttl which sets the number of seconds the
local cluster will run for before it shuts down.
--java-debug lets you turn on java debugging and set the parameters passed to -agentlib:jdwp on the JDK
--java-debug transport=dt_socket,address=localhost:8000
will open up a debugging server on port 8000.
"""
[ttl, debug_args, args] = parse_local_opts(args)
extrajvmopts = ["-Dstorm.local.sleeptime=" + ttl]
    if debug_args is not None:
extrajvmopts = extrajvmopts + ["-agentlib:jdwp=" + debug_args]
run_client_jar(jarfile, "org.apache.storm.LocalCluster", [klass] + list(args), client=False, daemon=False, extrajvmopts=extrajvmopts) | 19 |
def get_dict_from_list(list_of_dicts, key_value, key='id'):
"""
Returns dictionary with key: @prm{key} equal to @prm{key_value} from a
list of dictionaries: @prm{list_of_dicts}.
"""
for dictionary in list_of_dicts:
        if dictionary.get(key) is None:
raise Exception("No key: " + key + " in dictionary.")
if dictionary.get(key) == key_value:
return dictionary
return None | 20 |
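# Behaviour sketch: match on the default 'id' key, or on a custom key.
records = [{'id': 1, 'name': 'a'}, {'id': 2, 'name': 'b'}]
print(get_dict_from_list(records, 2))                # {'id': 2, 'name': 'b'}
print(get_dict_from_list(records, 'a', key='name'))  # {'id': 1, 'name': 'a'}
print(get_dict_from_list(records, 3))                # None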
def oauth2_clients(client_id: str) -> Response:
"""
    Return an OAuth2 client application by its client ID
    :return:
        GET /ajax/oauth2/clients: the matching OAuth2 client
"""
client = next((c for c in oauth_clients.values() if c.client_id == client_id), None)
if not client:
raise UnknownClientError()
return json_response(dict(id=client.client_id, name=client.name, description=client.description, icon=client.icon)) | 21 |
def calc_estimate(data):
    """ Return a first estimate of the parameters from the data """
xc0, yc0 = data.x.mean(axis=1)
r0 = sqrt((data.x[0]-xc0)**2 +(data.x[1] -yc0)**2).mean()
return xc0, yc0, r0 | 22 |
def test_ap_beacon_rate_ht2(dev, apdev):
"""Open AP with Beacon frame TX rate HT-MCS 1 in VHT BSS"""
hapd = hostapd.add_ap(apdev[0], {'ssid': 'beacon-rate'})
res = hapd.get_driver_status_field('capa.flags')
if (int(res, 0) & 0x0000100000000000) == 0:
raise HwsimSkip("Setting Beacon frame TX rate not supported")
hapd.disable()
hapd.set('beacon_rate', 'ht:1')
hapd.set("country_code", "DE")
hapd.set("hw_mode", "a")
hapd.set("channel", "36")
hapd.set("ieee80211n", "1")
hapd.set("ieee80211ac", "1")
hapd.set("ht_capab", "[HT40+]")
hapd.set("vht_capab", "")
hapd.set("vht_oper_chwidth", "0")
hapd.set("vht_oper_centr_freq_seg0_idx", "0")
try:
hapd.enable()
dev[0].scan_for_bss(hapd.own_addr(), freq="5180")
dev[0].connect('beacon-rate', key_mgmt="NONE", scan_freq="5180")
time.sleep(0.5)
finally:
dev[0].request("DISCONNECT")
hapd.request("DISABLE")
subprocess.call(['iw', 'reg', 'set', '00'])
dev[0].flush_scan_cache() | 23 |
def test_displacy_parse_ents_with_kb_id_options(en_vocab):
"""Test that named entities with kb_id on a Doc are converted into displaCy's format."""
doc = Doc(en_vocab, words=["But", "Google", "is", "starting", "from", "behind"])
doc.ents = [Span(doc, 1, 2, label=doc.vocab.strings["ORG"], kb_id="Q95")]
ents = displacy.parse_ents(
doc, {"kb_url_template": "https://www.wikidata.org/wiki/{}"}
)
assert isinstance(ents, dict)
assert ents["text"] == "But Google is starting from behind "
assert ents["ents"] == [
{
"start": 4,
"end": 10,
"label": "ORG",
"kb_id": "Q95",
"kb_url": "https://www.wikidata.org/wiki/Q95",
}
] | 24 |
def cit(ispace0, ispace1):
"""
The Common IterationIntervals of two IterationSpaces.
"""
found = []
for it0, it1 in zip(ispace0.itintervals, ispace1.itintervals):
if it0 == it1:
found.append(it0)
else:
break
return tuple(found) | 25 |
def drawdown(return_series: pd.Series):
"""Takes a time series of asset returns.
returns a DataFrame with columns for
the wealth index,
the previous peaks, and
the percentage drawdown
"""
wealth_index = 1000*(1+return_series).cumprod()
previous_peaks = wealth_index.cummax()
drawdowns = (wealth_index - previous_peaks)/previous_peaks
return pd.DataFrame({"Wealth": wealth_index,
"Previous Peak": previous_peaks,
"Drawdown": drawdowns}) | 26 |
def _should_run_cmake(commands, cmake_with_sdist):
"""Return True if at least one command requiring ``cmake`` to run
is found in ``commands``."""
for expected_command in [
"build",
"build_ext",
"develop",
"install",
"install_lib",
"bdist",
"bdist_dumb",
"bdist_egg",
"bdist_rpm",
"bdist_wininst",
"bdist_wheel",
"test",
]:
if expected_command in commands:
return True
if "sdist" in commands and cmake_with_sdist:
return True
return False | 27 |
def count_keys(n):
"""Generate outcome bitstrings for n-qubits.
Args:
n (int): the number of qubits.
Returns:
list: A list of bitstrings ordered as follows:
Example: n=2 returns ['00', '01', '10', '11'].
"""
return [bin(j)[2:].zfill(n) for j in range(2**n)] | 28 |
def generate_CNN_model(x_shape, class_number, filters, fc_hidden_nodes,
learning_rate=0.01, regularization_rate=0.01,
metrics=['accuracy']):
"""
Generate a convolutional neural network (CNN) model.
The compiled Keras model is returned.
Parameters
----------
x_shape : tuple
Shape of the input dataset: (num_samples, num_timesteps, num_channels)
class_number : int
Number of classes for classification task
filters : list of ints
number of filters for each convolutional layer
fc_hidden_nodes : int
number of hidden nodes for the hidden dense layer
learning_rate : float
learning rate
regularization_rate : float
regularization rate
metrics : list
Metrics to calculate on the validation set.
See https://keras.io/metrics/ for possible values.
Returns
-------
model : Keras model
The compiled Keras model
"""
dim_length = x_shape[1] # number of samples in a time series
dim_channels = x_shape[2] # number of channels
outputdim = class_number # number of classes
weightinit = 'lecun_uniform' # weight initialization
model = Sequential()
model.add(
BatchNormalization(
input_shape=(
dim_length,
dim_channels)))
for filter_number in filters:
model.add(Convolution1D(filter_number, kernel_size=3, padding='same',
kernel_regularizer=l2(regularization_rate),
kernel_initializer=weightinit))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(units=fc_hidden_nodes,
kernel_regularizer=l2(regularization_rate),
kernel_initializer=weightinit)) # Fully connected layer
model.add(Activation('relu')) # Relu activation
model.add(Dense(units=outputdim, kernel_initializer=weightinit))
model.add(BatchNormalization())
model.add(Activation("softmax")) # Final classification layer
# if class_number == 2:
# loss = 'binary_crossentropy'
# else:
# loss = 'categorical_crossentropy'
loss = 'categorical_crossentropy'
model.compile(loss=loss,
optimizer=Adam(lr=learning_rate),
metrics=metrics)
return model | 29 |
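# A hedged construction sketch (assumption: the Keras names used above --
# Sequential, Convolution1D, Dense, BatchNormalization, Activation, Flatten,
# l2 and Adam -- are imported at module level, as in the original package).
model = generate_CNN_model(x_shape=(None, 100, 9),  # 100 timesteps, 9 channels
                           class_number=6,
                           filters=[32, 64],
                           fc_hidden_nodes=128)
model.summary()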
def get_relative_errors(test_data_id):
"""
Compute and save the relative errors of every point found on every network in a testing set.
Relative error is defined in (Katz and Reggia 2017).
test_data_id should be as in fxpt_experiments.generate_test_data (without file extension).
"""
network_sizes, num_samples, _ = fe.load_test_data('%s.npz'%test_data_id)
for alg in ['traverse','baseline']:
for (N, S) in zip(network_sizes, num_samples):
for samp in range(S):
print('%s, alg %s, N %d,samp %d'%(test_data_id,alg,N,samp))
npz = np.load('results/%s_%s_N_%d_s_%d.npz'%(alg,test_data_id,N,samp))
W = npz['W']
fxV = npz['fxV']
fxV, converged = rfx.refine_fxpts_capped(W, fxV)
margin = rfx.estimate_forward_error(W, fxV)
f = np.tanh(W.dot(fxV))-fxV
re = np.fabs(f/margin)
re_fx, re_un = re[:,converged].max(axis=0), re[:,~converged].max(axis=0)
re_fx = re_fx[re_fx > 0]
f_fx, f_un = np.fabs(f[:,converged]).max(axis=0), np.fabs(f[:,~converged]).max(axis=0)
f_fx = f_fx[f_fx > 0]
re_npz = {}
re_npz['f_fx'] = f_fx
re_npz['f_un'] = f_un
re_npz['re_fx'] = re_fx
re_npz['re_un'] = re_un
fe.save_npz_file('results/%s_re_%s_N_%d_s_%d.npz'%(alg,test_data_id,N,samp), **re_npz) | 30 |
def expect_warnings(*messages, **kw):
"""Context manager which expects one or more warnings.
With no arguments, squelches all SAWarning and RemovedIn20Warning emitted via
sqlalchemy.util.warn and sqlalchemy.util.warn_limited. Otherwise
pass string expressions that will match selected warnings via regex;
all non-matching warnings are sent through.
The expect version **asserts** that the warnings were in fact seen.
Note that the test suite sets SAWarning warnings to raise exceptions.
""" # noqa
return _expect_warnings(
(sa_exc.RemovedIn20Warning, sa_exc.SAWarning), messages, **kw
) | 31 |
def test_dataset_url_import_job(url, svc_client_with_repo):
"""Test dataset import via url."""
svc_client, headers, project_id, url_components = svc_client_with_repo
user = {'user_id': headers['Renku-User-Id']}
payload = {
'project_id': project_id,
'dataset_uri': url,
}
response = svc_client.post(
'/datasets.import',
data=json.dumps(payload),
headers=headers,
)
assert response
assert_rpc_response(response)
assert {'job_id', 'created_at'} == set(response.json['result'].keys())
dest = make_project_path(
user, {
'owner': url_components.owner,
'name': url_components.name
}
)
old_commit = Repo(dest).head.commit
job_id = response.json['result']['job_id']
dataset_import(
user,
job_id,
project_id,
url,
)
new_commit = Repo(dest).head.commit
assert old_commit.hexsha != new_commit.hexsha
assert f'service: dataset import {url}' == new_commit.message
response = svc_client.get(
f'/jobs/{job_id}',
headers=headers,
)
assert response
assert_rpc_response(response)
assert 'COMPLETED' == response.json['result']['state'] | 32 |
def main(noreboot = 'false', **kwargs):
"""
Master script that calls content scripts to be deployed when provisioning systems
"""
# NOTE: Using __file__ may freeze if trying to build an executable, e.g. via py2exe.
# NOTE: Using __file__ does not work if running from IDLE/interpreter.
# NOTE: __file__ may return relative path as opposed to an absolute path, so include os.path.abspath.
scriptname = ''
if '__file__' in dir():
scriptname = os.path.abspath(__file__)
else:
scriptname = os.path.abspath(sys.argv[0])
# Check special parameter types
noreboot = 'true' == noreboot.lower()
sourceiss3bucket = 'true' == kwargs.get('sourceiss3bucket', 'false').lower()
print('+' * 80)
print('Entering script -- {0}'.format(scriptname))
print('Printing parameters --')
print(' noreboot = {0}'.format(noreboot))
for key, value in kwargs.items():
print(' {0} = {1}'.format(key, value))
system = platform.system()
systemparams = get_system_params(system)
scriptstoexecute = get_scripts_to_execute(system, systemparams['workingdir'], **kwargs)
#Loop through each 'script' in scriptstoexecute
for script in scriptstoexecute:
url = script['ScriptSource']
filename = url.split('/')[-1]
fullfilepath = systemparams['workingdir'] + systemparams['pathseparator'] + filename
#Download each script, script['ScriptSource']
download_file(url, fullfilepath, sourceiss3bucket)
#Execute each script, passing it the parameters in script['Parameters']
#TODO: figure out if there's a better way to call and execute the script
print('Running script -- ' + script['ScriptSource'])
print('Sending parameters --')
for key, value in script['Parameters'].items():
print(' {0} = {1}'.format(key, value))
        paramstring = ' '.join("%s='%s'" % (key, val) for (key, val) in script['Parameters'].items())
fullcommand = 'python {0} {1}'.format(fullfilepath, paramstring)
result = os.system(fullcommand)
        if result != 0:
message = 'Encountered an unrecoverable error executing a ' \
'content script. Exiting with failure.\n' \
'Command executed: {0}' \
.format(fullcommand)
raise SystemError(message)
cleanup(systemparams['workingdir'])
if noreboot:
print('Detected `noreboot` switch. System will not be rebooted.')
else:
print('Reboot scheduled. System will reboot after the script exits.')
os.system(systemparams['restart'])
print('{0} complete!'.format(scriptname))
print('-' * 80) | 33 |
def test_events():
"""Tests that expected events are created by MOTAccumulator.update()."""
acc = mm.MOTAccumulator()
# All FP
acc.update([], [1, 2], [], frameid=0)
# All miss
acc.update([1, 2], [], [], frameid=1)
# Match
acc.update([1, 2], [1, 2], [[1, 0.5], [0.3, 1]], frameid=2)
# Switch
acc.update([1, 2], [1, 2], [[0.2, np.nan], [np.nan, 0.1]], frameid=3)
# Match. Better new match is available but should prefer history
acc.update([1, 2], [1, 2], [[5, 1], [1, 5]], frameid=4)
# No data
acc.update([], [], [], frameid=5)
expect = mm.MOTAccumulator.new_event_dataframe()
expect.loc[(0, 0), :] = ['RAW', np.nan, np.nan, np.nan]
expect.loc[(0, 1), :] = ['RAW', np.nan, 1, np.nan]
expect.loc[(0, 2), :] = ['RAW', np.nan, 2, np.nan]
expect.loc[(0, 3), :] = ['FP', np.nan, 1, np.nan]
expect.loc[(0, 4), :] = ['FP', np.nan, 2, np.nan]
expect.loc[(1, 0), :] = ['RAW', np.nan, np.nan, np.nan]
expect.loc[(1, 1), :] = ['RAW', 1, np.nan, np.nan]
expect.loc[(1, 2), :] = ['RAW', 2, np.nan, np.nan]
expect.loc[(1, 3), :] = ['MISS', 1, np.nan, np.nan]
expect.loc[(1, 4), :] = ['MISS', 2, np.nan, np.nan]
expect.loc[(2, 0), :] = ['RAW', np.nan, np.nan, np.nan]
expect.loc[(2, 1), :] = ['RAW', 1, 1, 1.0]
expect.loc[(2, 2), :] = ['RAW', 1, 2, 0.5]
expect.loc[(2, 3), :] = ['RAW', 2, 1, 0.3]
expect.loc[(2, 4), :] = ['RAW', 2, 2, 1.0]
expect.loc[(2, 5), :] = ['MATCH', 1, 2, 0.5]
expect.loc[(2, 6), :] = ['MATCH', 2, 1, 0.3]
expect.loc[(3, 0), :] = ['RAW', np.nan, np.nan, np.nan]
expect.loc[(3, 1), :] = ['RAW', 1, 1, 0.2]
expect.loc[(3, 2), :] = ['RAW', 2, 2, 0.1]
expect.loc[(3, 3), :] = ['TRANSFER', 1, 1, 0.2]
expect.loc[(3, 4), :] = ['SWITCH', 1, 1, 0.2]
expect.loc[(3, 5), :] = ['TRANSFER', 2, 2, 0.1]
expect.loc[(3, 6), :] = ['SWITCH', 2, 2, 0.1]
expect.loc[(4, 0), :] = ['RAW', np.nan, np.nan, np.nan]
expect.loc[(4, 1), :] = ['RAW', 1, 1, 5.]
expect.loc[(4, 2), :] = ['RAW', 1, 2, 1.]
expect.loc[(4, 3), :] = ['RAW', 2, 1, 1.]
expect.loc[(4, 4), :] = ['RAW', 2, 2, 5.]
expect.loc[(4, 5), :] = ['MATCH', 1, 1, 5.]
expect.loc[(4, 6), :] = ['MATCH', 2, 2, 5.]
expect.loc[(5, 0), :] = ['RAW', np.nan, np.nan, np.nan]
pd.util.testing.assert_frame_equal(acc.events, expect) | 34 |
def get_contigous_borders(indices):
"""
    Helper function to derive contiguous borders from a list of indices.
    Parameters
    ----------
    indices : list of int
        All indices at which a certain thing occurs.
    Returns
    -------
    list of [start, end] pairs marking where each contiguous group of indices
    starts and ends (note: the end is the real last element of the group, _not_ n+1).
"""
r =[ [indices[0]] ]
prev = r[0][0]
for ix,i in enumerate(indices):
# distance bw last occurence and current > 1
# then there is obviously a space
if (i - prev) > 1:
# add end
r[-1].append(indices[ix-1])
# add new start
r.append([ indices[ix] ])
prev = i
r[-1].append( indices[-1] )
return r | 35 |
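# Behaviour sketch: indices 3..5 and 9..10 form two contiguous groups.
print(get_contigous_borders([3, 4, 5, 9, 10]))  # [[3, 5], [9, 10]]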
def check_files(test_dir, expected):
"""
Walk test_dir.
Check that all dirs are readable.
Check that all files are:
* non-special,
* readable,
* have a posix path that ends with one of the expected tuple paths.
"""
result = []
locs = []
if filetype.is_file(test_dir):
test_dir = fileutils.parent_directory(test_dir)
test_dir_path = fileutils.as_posixpath(test_dir)
for top, _, files in os.walk(test_dir):
for f in files:
location = os.path.join(top, f)
locs.append(location)
path = fileutils.as_posixpath(location)
path = path.replace(test_dir_path, '').strip('/')
result.append(path)
assert sorted(expected) == sorted(result)
for location in locs:
assert filetype.is_file(location)
assert not filetype.is_special(location)
assert filetype.is_readable(location) | 36 |
def ExtractSubObjsTargetedAtAll(
inputs,
num_parts,
description_parts,
description_all,
description_all_from_libs):
"""For (lib, obj) tuples in the all_from_libs section, extract the obj out of
  the lib and add it to inputs. Returns a list of lists indicating which part each
  extracted obj belongs in (which is whichever part the .lib isn't in)."""
by_parts = [[] for _ in range(num_parts)]
for lib_spec, obj_spec in description_all_from_libs:
for input_file in inputs:
if re.search(lib_spec, input_file):
objs = GetLibObjList(input_file)
match_count = 0
for obj in objs:
if re.search(obj_spec, obj, re.I):
extracted_obj = ExtractObjFromLib(input_file, obj)
#Log('extracted %s (%s %s)' % (extracted_obj, input_file, obj))
i = PartFor(input_file, description_parts, description_all)
if i == -1:
raise SystemExit(
'%s is already in all parts, but matched '
'%s in all_from_libs' % (input_file, obj))
# See note in main().
assert num_parts == 2, "Can't handle > 2 dlls currently"
by_parts[1 - i].append(obj)
match_count += 1
if match_count == 0:
raise SystemExit(
'%s, %s matched a lib, but no objs' % (lib_spec, obj_spec))
return by_parts | 37 |
def get_data() -> None:
"""
    Infinite loop that queries the Vilnius vaccination center every 10 minutes.
    Collects the count of each vaccine and adds it to a PostgreSQL database.
    Sends an email if the Pfizer vaccine is available.
"""
while True:
sql_connection = psycopg2.connect(
database=DATABASE, user=USER, password=PASSWORD, host=HOST
)
# Connect to DB
cur = sql_connection.cursor()
headers = {
"Connection": "keep-alive",
"Cache-Control": "max-age=0",
"sec-ch-ua": "^\\^",
"sec-ch-ua-mobile": "?0",
"Upgrade-Insecure-Requests": "1",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
"Sec-Fetch-Site": "cross-site",
"Sec-Fetch-Mode": "navigate",
"Sec-Fetch-User": "?1",
"Sec-Fetch-Dest": "document",
"Accept-Language": "en-US,en;q=0.9",
}
page = requests.get(
"https://vilnius-vac.myhybridlab.com/selfregister/vaccine", headers=headers
)
soup = BeautifulSoup(page.content, "html.parser")
vaccines = soup.find("vaccine-rooms", class_=None)[":vaccine-rooms"]
json_object = json.loads(vaccines)
# Time
time_raw = soup.find("small", class_="text-muted").get_text().split()
time_str = time_raw[2] + " " + time_raw[3]
dt = datetime.fromisoformat(time_str)
now = datetime.now().replace(microsecond=0)
eet_dt = now + timedelta(hours=3)
diff_secs = (eet_dt - dt).seconds
total_sleep = 602 - diff_secs
moderna = json_object[0]["free_total"]
pfizer = json_object[1]["free_total"]
astra = json_object[2]["free_total"]
janssen = json_object[3]["free_total"]
        cur.execute(
            "INSERT INTO vilnius_vakcinos (time, moderna, pfizer, astra_zeneca, janssen) "
            "VALUES (%s, %s, %s, %s, %s);",
            (time_str, moderna, pfizer, astra, janssen),
        )
sql_connection.commit()
sql_connection.close()
if pfizer > 0:
            send_email(
                f"Pfizer count: {pfizer}, link to register: https://vilnius-vac.myhybridlab.com/selfregister/vaccine"
            )
time.sleep(total_sleep) | 38 |
def setup_logging(path):
"""Initialize logging to screen and path."""
# See https://docs.python.org/2/library/logging.html#logrecord-attributes
# [IWEF]mmdd HH:MM:SS.mmm] msg
fmt = '%(levelname).1s%(asctime)s.%(msecs)03d] %(message)s' # pylint: disable=line-too-long
datefmt = '%m%d %H:%M:%S'
logging.basicConfig(
level=logging.INFO,
format=fmt,
datefmt=datefmt,
)
build_log = logging.FileHandler(filename=path, mode='w')
build_log.setLevel(logging.INFO)
formatter = logging.Formatter(fmt, datefmt=datefmt)
build_log.setFormatter(formatter)
logging.getLogger('').addHandler(build_log)
return build_log | 39 |
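# A short usage sketch (the log path is hypothetical): after this call, messages
# go both to the console and to the file, in the "I0131 14:02:03.123] msg" style.
handler = setup_logging('/tmp/build.log')
logging.info('build started')
logging.getLogger('').removeHandler(handler)  # detach the file handler when done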
def add_network(host):
"""Adds a network to the stacki db. For historical reasons the first test network this creates is pxe=False."""
def _inner(name, address, pxe = False):
result = host.run(
f'stack add network {name} address={address} mask=255.255.255.0 pxe={pxe}'
)
if result.rc != 0:
pytest.fail(f'unable to add dummy network "{name}"')
# First use of the fixture adds network "test"
_inner('test', '192.168.0.0')
# Then return the inner function, so we can call it inside the test
# to get more networks added
return _inner | 40 |
def pytest_cloud(session, coverage):
"""
pytest cloud tests session
"""
# Install requirements
if _upgrade_pip_setuptools_and_wheel(session):
_install_requirements(session, "zeromq")
requirements_file = os.path.join(
"requirements", "static", "ci", _get_pydir(session), "cloud.txt"
)
install_command = ["--progress-bar=off", "-r", requirements_file]
session.install(*install_command, silent=PIP_INSTALL_SILENT)
cmd_args = [
"--rootdir",
REPO_ROOT,
"--log-file={}".format(RUNTESTS_LOGFILE),
"--log-file-level=debug",
"--show-capture=no",
"-ra",
"-s",
"--run-expensive",
"-k",
"cloud",
] + session.posargs
_pytest(session, coverage, cmd_args) | 41 |
def lower_schedule(cluster, schedule, sregistry, options):
"""
Turn a Schedule into a sequence of Clusters.
"""
ftemps = options['cire-ftemps']
if ftemps:
make = TempFunction
else:
# Typical case -- the user does *not* "see" the CIRE-created temporaries
make = Array
clusters = []
subs = {}
for alias, writeto, ispace, aliaseds, indicess in schedule:
# Basic info to create the temporary that will hold the alias
name = sregistry.make_name()
dtype = cluster.dtype
if writeto:
# The Dimensions defining the shape of Array
# Note: with SubDimensions, we may have the following situation:
#
# for zi = z_m + zi_ltkn; zi <= z_M - zi_rtkn; ...
# r[zi] = ...
#
# Instead of `r[zi - z_m - zi_ltkn]` we have just `r[zi]`, so we'll need
# as much room as in `zi`'s parent to avoid going OOB
# Aside from ugly generated code, the reason we do not rather shift the
# indices is that it prevents future passes to transform the loop bounds
# (e.g., MPI's comp/comm overlap does that)
dimensions = [d.parent if d.is_Sub else d for d in writeto.itdimensions]
# The halo must be set according to the size of writeto space
halo = [(abs(i.lower), abs(i.upper)) for i in writeto]
# The indices used to write into the Array
indices = []
for i in writeto:
try:
# E.g., `xs`
sub_iterators = writeto.sub_iterators[i.dim]
assert len(sub_iterators) == 1
indices.append(sub_iterators[0])
except KeyError:
# E.g., `z` -- a non-shifted Dimension
indices.append(i.dim - i.lower)
obj = make(name=name, dimensions=dimensions, halo=halo, dtype=dtype)
expression = Eq(obj[indices], alias)
callback = lambda idx: obj[idx]
else:
# Degenerate case: scalar expression
assert writeto.size == 0
obj = Symbol(name=name, dtype=dtype)
expression = Eq(obj, alias)
callback = lambda idx: obj
# Create the substitution rules for the aliasing expressions
subs.update({aliased: callback(indices)
for aliased, indices in zip(aliaseds, indicess)})
# Construct the `alias` DataSpace
accesses = detect_accesses(expression)
parts = {k: IntervalGroup(build_intervals(v)).add(ispace.intervals).relaxed
for k, v in accesses.items() if k}
dspace = DataSpace(cluster.dspace.intervals, parts)
# Drop or weaken parallelism if necessary
properties = dict(cluster.properties)
for d, v in cluster.properties.items():
if any(i.is_Modulo for i in ispace.sub_iterators[d]):
properties[d] = normalize_properties(v, {SEQUENTIAL})
elif d not in writeto.dimensions:
properties[d] = normalize_properties(v, {PARALLEL_IF_PVT})
# Finally, build the `alias` Cluster
clusters.append(cluster.rebuild(exprs=expression, ispace=ispace,
dspace=dspace, properties=properties))
return clusters, subs | 42 |
def changelog():
"""Get the most recent version's changelog as Markdown.
"""
print(changelog_as_markdown()) | 43 |
def sine(value):
"""Filter to get sine of the value."""
try:
return math.sin(float(value))
except (ValueError, TypeError):
return value | 44 |
def tamper(payload, **kwargs):
"""
Unicode-escapes non-encoded characters in a given payload (not processing already encoded) (e.g. SELECT -> \u0053\u0045\u004C\u0045\u0043\u0054)
Notes:
        * Useful to bypass weak filtering and/or WAFs in JSON contexts
>>> tamper('SELECT FIELD FROM TABLE')
'\\\\u0053\\\\u0045\\\\u004C\\\\u0045\\\\u0043\\\\u0054\\\\u0020\\\\u0046\\\\u0049\\\\u0045\\\\u004C\\\\u0044\\\\u0020\\\\u0046\\\\u0052\\\\u004F\\\\u004D\\\\u0020\\\\u0054\\\\u0041\\\\u0042\\\\u004C\\\\u0045'
"""
retVal = payload
if payload:
retVal = ""
i = 0
while i < len(payload):
if payload[i] == '%' and (i < len(payload) - 2) and payload[i + 1:i + 2] in string.hexdigits and payload[i + 2:i + 3] in string.hexdigits:
retVal += "\\u00%s" % payload[i + 1:i + 3]
i += 3
else:
retVal += '\\u%.4X' % ord(payload[i])
i += 1
return retVal | 45 |
def _download_study_clin(pdc_study_id):
"""Download PDC clinical data for a particular study."""
clinical_query = '''
query {
clinicalPerStudy(pdc_study_id: "''' + pdc_study_id + '''", acceptDUA: true) {
age_at_diagnosis, ajcc_clinical_m, ajcc_clinical_n, ajcc_clinical_stage, ajcc_clinical_t, ajcc_pathologic_m,
ajcc_pathologic_n, ajcc_pathologic_stage, ajcc_pathologic_t, ann_arbor_b_symptoms, ann_arbor_clinical_stage,
ann_arbor_extranodal_involvement, ann_arbor_pathologic_stage, best_overall_response, burkitt_lymphoma_clinical_variant,
case_id, case_submitter_id, cause_of_death, circumferential_resection_margin, classification_of_tumor, colon_polyps_history,
days_to_best_overall_response, days_to_birth, days_to_death, days_to_diagnosis, days_to_hiv_diagnosis, days_to_last_follow_up,
days_to_last_known_disease_status, days_to_new_event, days_to_recurrence, demographic_id, demographic_submitter_id,
diagnosis_id, diagnosis_submitter_id, disease_type, ethnicity, figo_stage, gender, hiv_positive, hpv_positive_type, hpv_status,
icd_10_code, iss_stage, last_known_disease_status, laterality, ldh_level_at_diagnosis, ldh_normal_range_upper,
lymphatic_invasion_present, lymph_nodes_positive, method_of_diagnosis, morphology, new_event_anatomic_site, new_event_type,
overall_survival, perineural_invasion_present, primary_diagnosis, primary_site, prior_malignancy, prior_treatment,
progression_free_survival, progression_free_survival_event, progression_or_recurrence, race, residual_disease,
site_of_resection_or_biopsy, status, synchronous_malignancy, tissue_or_organ_of_origin, tumor_cell_content, tumor_grade,
tumor_stage, vascular_invasion_present, vital_status, year_of_birth, year_of_death, year_of_diagnosis
}
}
'''
result_json = _query_pdc(clinical_query)
result_df = pd.\
DataFrame(result_json["data"]["clinicalPerStudy"])
return result_df | 46 |
def compound(r):
"""
returns the result of compounding the set of returns in r
"""
return np.expm1(np.log1p(r).sum()) | 47 |
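# A quick numeric check (assumption: numpy is imported as np at module level).
import numpy as np

r = np.array([0.10, -0.05, 0.02])
print(compound(r))               # about 0.0659
print(1.10 * 0.95 * 1.02 - 1)    # same value, computed directly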
def dump_entries(dirname, response):
"""Given a getevents response, dump all the entries into files
named by itemid.
Return the set of itemids received."""
all_itemids = set()
props = {}
for i in range(1, int(response.get('prop_count', 0)) + 1):
itemid = response['prop_%d_itemid' % i]
name = response['prop_%d_name' % i]
value = response['prop_%d_value' % i]
if itemid not in props:
props[itemid] = {}
props[itemid][name] = value
for i in range(1, int(response.get('events_count', 0)) + 1):
itemid = response['events_%d_itemid' % i]
all_itemids.add(itemid)
with open('%s/%s' % (dirname, itemid), 'w') as outfile:
fields = ('itemid',
'anum',
'eventtime',
'security',
'allowmask',
'poster',
'url',
'subject',)
for field in fields:
key = 'events_%d_%s' % (i, field)
if key in response:
print >>outfile, field + ':', response[key]
if itemid in props:
for key, val in props[itemid].items():
print >>outfile, key + ':', val
print >>outfile
key = 'events_%d_event' % i
print >>outfile, urllib.unquote(response[key])
return all_itemids | 48 |
def runtests(*test_args):
    """Set up and run django-lockdown's test suite."""
os.environ['DJANGO_SETTINGS_MODULE'] = 'lockdown.tests.test_settings'
django.setup()
if not test_args:
test_args = ['lockdown.tests']
test_runner = get_runner(settings)()
failures = test_runner.run_tests(test_args)
sys.exit(bool(failures)) | 49 |
def onehot_encode_seq(sequence, m=0, padding=False):
"""Converts a given IUPAC DNA sequence to a one-hot
encoded DNA sequence.
"""
    import sys
    import numpy as np
valid_keys = ['a','c','g','t','u','n','r','y','s','w','k','m']
nucs = {'a':0,'c':1,'g':2,'t':3,'u':3}
if padding:
assert m != 0, "If using padding, m should be bigger than 0"
padding_mat = np.tile(0.25,(m-1,4))
onehot = np.tile(.0,(len(sequence),4))
for i,char in enumerate(sequence.lower()):
if char not in valid_keys:
sys.exit("invalid char in sequence (choose from acgt and nryswkm)")
elif char == 'n':
onehot[i,:] = 0.25
elif char == 'r':
onehot[i,(0,2)] = 0.5
elif char == 'y':
onehot[i,(1,3)] = 0.5
elif char == 's':
onehot[i,(1,2)] = 0.5
elif char == 'w':
onehot[i,(0,3)] = 0.5
elif char == 'k':
onehot[i,(2,3)] = 0.5
elif char == 'm':
onehot[i,(0,1)] = 0.5
else:
onehot[i,nucs[char]] = 1
if padding:
onehot = np.concatenate((padding_mat, onehot, padding_mat))
return onehot | 50 |
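# Encoding sketch: ambiguous bases get fractional weights; padding adds 0.25 rows.
enc = onehot_encode_seq("ACGTN")
print(enc.shape)  # (5, 4)
print(enc[4])     # [0.25 0.25 0.25 0.25]  <- 'N' is spread over all four bases
padded = onehot_encode_seq("ACGT", m=3, padding=True)
print(padded.shape)  # (4 + 2*(3-1), 4) == (8, 4)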
def hessian ( box, r_cut, r, f ):
"""Calculates Hessian function (for 1/N correction to config temp)."""
import numpy as np
# This routine is only needed in a constant-energy ensemble
# It is assumed that positions are in units where box = 1
# but the result is given in units where sigma = 1 and epsilon = 1
# It is assumed that forces have already been calculated in array f
n, d = r.shape
assert d==3, 'Dimension error in hessian'
assert np.all ( r.shape==f.shape ), 'Dimension mismatch in hessian'
r_cut_box = r_cut / box
r_cut_box_sq = r_cut_box ** 2
box_sq = box ** 2
hes = 0.0
if fast:
for i in range(n-1):
rij = r[i,:] - r[i+1:,:] # Separation vectors
rij = rij - np.rint ( rij ) # Periodic boundary conditions in box=1 units
rij_sq = np.sum(rij**2,axis=1) # Squared separations for j>1
in_range = rij_sq < r_cut_box_sq # Set flags for within cutoff
rij_sq = rij_sq * box_sq # Now in sigma=1 units
rij = rij * box # Now in sigma=1 units
fij = f[i,:] - f[i+1:,:] # Differences in forces
ff = np.sum(fij*fij,axis=1)
rf = np.sum(rij*fij,axis=1)
sr2 = np.where ( in_range, 1.0 / rij_sq, 0.0 ) # Only where in range
sr6 = sr2 ** 3
sr8 = sr6 * sr2
sr10 = sr8 * sr2
v1 = 24.0 * ( 1.0 - 2.0 * sr6 ) * sr8
v2 = 96.0 * ( 7.0 * sr6 - 2.0 ) * sr10
hes = hes + np.sum(v1 * ff) + np.sum(v2 * rf**2)
else:
for i in range(n-1):
for j in range(i+1,n):
rij = r[i,:] - r[j,:] # Separation vector
rij = rij - np.rint ( rij ) # Periodic boundary conditions in box=1 units
rij_sq = np.sum ( rij**2 ) # Squared separation
if rij_sq < r_cut_box_sq:
rij_sq = rij_sq * box_sq # Now in sigma=1 units
rij = rij * box # Now in sigma=1 units
fij = f[i,:] - f[j,:] # Difference in forces
ff = np.dot(fij,fij)
rf = np.dot(rij,fij)
sr2 = 1.0 / rij_sq
sr6 = sr2 ** 3
sr8 = sr6 * sr2
sr10 = sr8 * sr2
v1 = 24.0 * ( 1.0 - 2.0 * sr6 ) * sr8
v2 = 96.0 * ( 7.0 * sr6 - 2.0 ) * sr10
hes = hes + v1 * ff + v2 * rf**2
return hes | 51 |
def polar2dial(ax):
"""
Turns a matplotlib axes polar plot into a dial plot
"""
#Rotate the plot so that noon is at the top and midnight
#is at the bottom, and fix the labels so radial direction
#is latitude and azimuthal direction is local time in hours
ax.set_theta_zero_location('S')
theta_label_values = np.array([0.,3.,6.,9.,12.,15.,18.,21.])*180./12
theta_labels = ['%d:00' % (int(th/180.*12)) for th in theta_label_values.flatten().tolist()]
ax.set_thetagrids(theta_label_values,labels=theta_labels)
r_label_values = 90.-np.array([80.,70.,60.,50.,40.])
r_labels = [r'$%d^{o}$' % (int(90.-rv)) for rv in r_label_values.flatten().tolist()]
ax.set_rgrids(r_label_values,labels=r_labels)
ax.set_rlim([0.,40.]) | 52 |
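# A minimal usage sketch (assumptions: matplotlib/numpy imported as plt/np).
import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111, projection='polar')
polar2dial(ax)  # noon at the top, colatitude rings from 10 to 40 degrees
ax.plot(np.linspace(0, 2 * np.pi, 50), np.full(50, 25.0))  # ring at 65 deg latitude
plt.show()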
def train_cam_model(X_train, Y_train, X_test, Y_test,
batch_size, nb_epoch):
"""Train CAM model based on your pretrained model
# Arguments
model: your pretrained model, CAM model is trained based on this model.
"""
    # Use your already trained model
    pretrained_model_path = ''
    pretrained_weights_path = ''
    # Your pretrained model name
    pretrained_model_name = 'VGG16'
    # Number of label classes
    num_classes = 10
    # CAM input spatial size
gap_spacial_size = 14
    # The layer before the CAM (GAP) layers.
    # The CAM paper suggests using the last conv layer (VGG) or merge layer (Inception, and other architectures).
# Change this name based on your model.
if pretrained_model_name == 'VGG16':
in_layer_name = 'block5_conv3'
elif pretrained_model_name == 'InceptionV3':
in_layer_name = 'batchnormalization_921'
elif pretrained_model_name == 'ResNet50':
in_layer_name = 'merge_13'
else:
in_layer_name = ''
    # Load your already trained model, transfer it to CAM model
pretrained_model = read_model(pretrained_model_path,
pretrained_weights_path)
# Create CAM model based on trained model
model = create_cam_model(pretrained_model,
gap_spacial_size,
num_classes,
in_layer_name,
CAM_CONV_LAYER)
# Train your CAM model
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy'])
model.fit(X_train, Y_train,
batch_size=batch_size,
nb_epoch=nb_epoch,
shuffle=True, verbose=1,
validation_data=(X_test, Y_test))
# Save model
model.save_weights('')
return model | 53 |
def CheckVintfFromTargetFiles(inp, info_dict=None):
"""
Checks VINTF metadata of a target files zip.
Args:
inp: path to the target files archive.
info_dict: The build-time info dict. If None, it will be loaded from inp.
Returns:
True if VINTF check is skipped or compatible, False if incompatible. Raise
a RuntimeError if any error occurs.
"""
input_tmp = common.UnzipTemp(inp, GetVintfFileList() + UNZIP_PATTERN)
return CheckVintfFromExtractedTargetFiles(input_tmp, info_dict) | 54 |
def get_target_proxy(properties, res_name, project_id, bs_resources):
""" Creates a target proxy resource. """
protocol = get_protocol(properties)
depends = []
if 'HTTP' in protocol:
urlMap = copy.deepcopy(properties['urlMap'])
if 'name' not in urlMap and 'name' in properties:
urlMap['name'] = '{}-url-map'.format(properties['name'])
target, resources, outputs = get_url_map(
urlMap,
'{}-url-map'.format(res_name),
project_id
)
depends.append(resources[0]['name'])
else:
depends.append(bs_resources[0]['name'])
target = get_ref(bs_resources[0]['name'])
resources = []
outputs = []
name = '{}-target'.format(res_name)
proxy = {
'name': name,
'type': 'target_proxy.py',
'properties': {
'name': '{}-target'.format(properties.get('name', res_name)),
'project': project_id,
'protocol': protocol,
'target': target,
},
'metadata': {
            'dependsOn': depends,  # depends already holds a list of resource names
},
}
for prop in ['proxyHeader', 'quicOverride']:
set_optional_property(proxy['properties'], properties, prop)
outputs.extend(
[
{
'name': 'targetProxyName',
'value': '$(ref.{}.name)'.format(name)
},
{
'name': 'targetProxySelfLink',
'value': '$(ref.{}.selfLink)'.format(name)
},
{
'name': 'targetProxyKind',
'value': '$(ref.{}.kind)'.format(name)
}
]
)
if 'ssl' in properties:
ssl_spec = properties['ssl']
proxy['properties']['ssl'] = ssl_spec
        creates_new_certificate = 'url' not in ssl_spec['certificate']
if creates_new_certificate:
outputs.extend(
[
{
'name': 'certificateName',
'value': '$(ref.{}.certificateName)'.format(name)
},
{
'name': 'certificateSelfLink',
'value': '$(ref.{}.certificateSelfLink)'.format(name)
}
]
)
return [proxy] + resources, outputs | 55 |
def read_raw_kit(input_fname, mrk=None, elp=None, hsp=None, stim='>',
slope='-', stimthresh=1, preload=False, stim_code='binary',
allow_unknown_format=False, standardize_names=None,
verbose=None):
"""Reader function for Ricoh/KIT conversion to FIF.
Parameters
----------
input_fname : str
Path to the sqd file.
mrk : None | str | array_like, shape (5, 3) | list of str or array_like
Marker points representing the location of the marker coils with
respect to the MEG Sensors, or path to a marker file.
If list, all of the markers will be averaged together.
elp : None | str | array_like, shape (8, 3)
Digitizer points representing the location of the fiducials and the
marker coils with respect to the digitized head shape, or path to a
file containing these points.
hsp : None | str | array, shape (n_points, 3)
Digitizer head shape points, or path to head shape file. If more than
10,000 points are in the head shape, they are automatically decimated.
stim : list of int | '<' | '>'
Channel-value correspondence when converting KIT trigger channels to a
        Neuromag-style stim channel. For '<', the largest values are assigned
        to the first channel. For '>', the largest values are assigned to the
        last channel (default). Can also be specified as a list of
trigger channel indexes.
slope : '+' | '-'
How to interpret values on KIT trigger channels when synthesizing a
Neuromag-style stim channel. With '+', a positive slope (low-to-high)
is interpreted as an event. With '-', a negative slope (high-to-low)
is interpreted as an event.
stimthresh : float
The threshold level for accepting voltage changes in KIT trigger
channels as a trigger event.
%(preload)s
stim_code : 'binary' | 'channel'
        How to decode trigger values from stim channels. 'binary' reads stim
        channel events as binary code; 'channel' encodes the channel number.
allow_unknown_format : bool
Force reading old data that is not officially supported. Alternatively,
read and re-save the data with the KIT MEG Laboratory application.
%(standardize_names)s
%(verbose)s
Returns
-------
raw : instance of RawKIT
A Raw object containing KIT data.
See Also
--------
mne.io.Raw : Documentation of attribute and methods.
Notes
-----
If mrk, hsp or elp are array_like inputs, then the numbers in xyz
coordinates should be in units of meters.
"""
return RawKIT(input_fname=input_fname, mrk=mrk, elp=elp, hsp=hsp,
stim=stim, slope=slope, stimthresh=stimthresh,
preload=preload, stim_code=stim_code,
allow_unknown_format=allow_unknown_format,
standardize_names=standardize_names, verbose=verbose) | 56 |
def post(url, body, accept=None, headers=None):
"""
Make a basic HTTP call to CMR using the POST action
Parameters:
        url (string): resource to send the POST request to
        body (dictionary): parameters to send, or string if raw text to be sent
        accept (string): encoding of the returned data, some form of json is expected
        headers (dictionary): HTTP headers to apply
"""
if isinstance(body, str):
        # JSON string or other such text passed in
data = body
else:
# Do not use the standard url encoder `urllib.parse.urlencode(body)` for
# the body/data because it can not handle repeating values as required
# by CMR. For example: `{'entry_title': ['2', '3']}` must become
# `entry_title=2&entry_title=3` not `entry_title=[2, 3]`
data = expand_query_to_parameters(body)
data = data.encode('utf-8')
logger.debug(" Headers->CMR= %s", headers)
logger.debug(" POST Data= %s", data)
req = urllib.request.Request(url, data)
if accept is not None:
apply_headers_to_request(req, {'Accept': accept})
apply_headers_to_request(req, headers)
try:
#pylint: disable=R1732 # the mock code does not support this in tests
resp = urllib.request.urlopen(req)
response = resp.read()
raw_response = response.decode('utf-8')
if resp.status == 200:
obj_json = json.loads(raw_response)
head_list = {}
for head in resp.getheaders():
head_list[head[0]] = head[1]
if logger.getEffectiveLevel() == logging.DEBUG:
stringified = str(common.mask_dictionary(head_list, ["cmr-token", "authorization"]))
logger.debug(" CMR->Headers = %s", stringified)
obj_json['http-headers'] = head_list
elif resp.status == 204:
obj_json = {}
head_list = {}
for head in resp.getheaders():
head_list[head[0]] = head[1]
obj_json['http-headers'] = head_list
else:
if raw_response.startswith("{") and raw_response.endswith("}"):
return json.loads(raw_response)
return raw_response
return obj_json
except urllib.error.HTTPError as exception:
raw_response = exception.read()
try:
obj_json = json.loads(raw_response)
obj_json['code'] = exception.code
obj_json['reason'] = exception.reason
return obj_json
except json.decoder.JSONDecodeError as err:
return err
return raw_response | 57 |
def _check_currency(currency: str):
"""Check that currency is in supported set."""
if currency not in currency_set:
raise ValueError(
f"currency {currency} not in supported currency set, "
f"{currency_set}"
) | 58 |
def convert_luminance_to_color_value(luminance, transfer_function):
"""
    Convert a luminance value [cd/m2] into RGB code values.
    The unit of luminance is [cd/m2]. The result is achromatic (gray).
Examples
--------
>>> convert_luminance_to_color_value(100, tf.GAMMA24)
>>> [ 1.0 1.0 1.0 ]
>>> convert_luminance_to_color_value(100, tf.ST2084)
>>> [ 0.50807842 0.50807842 0.50807842 ]
"""
code_value = convert_luminance_to_code_value(
luminance, transfer_function)
return np.array([code_value, code_value, code_value]) | 59 |
def write_zfile(file_handle, data, compress=1):
"""Write the data in the given file as a Z-file.
Z-files are raw data compressed with zlib used internally by joblib
    for persistence. Backward compatibility is not guaranteed. Do not
use for external purposes.
"""
file_handle.write(_ZFILE_PREFIX)
length = hex_str(len(data))
# Store the length of the data
file_handle.write(asbytes(length.ljust(_MAX_LEN)))
file_handle.write(zlib.compress(asbytes(data), compress)) | 60 |
def index(imageOrFilter):
    """Return the index of an image, or of the output image of a filter.
    This method takes care of updating the needed information.
"""
# we don't need the entire output, only its size
imageOrFilter.UpdateOutputInformation()
img = output(imageOrFilter)
return img.GetLargestPossibleRegion().GetIndex() | 61 |
def get_house(name, world = None):
"""Returns a dictionary containing a house's info, a list of possible matches or None.
If world is specified, it will also find the current status of the house in that world."""
c = tibiaDatabase.cursor()
try:
# Search query
c.execute("SELECT * FROM Houses WHERE name LIKE ? ORDER BY LENGTH(name) ASC LIMIT 15", ("%" + name + "%",))
result = c.fetchall()
if len(result) == 0:
return None
elif result[0]["name"].lower() == name.lower() or len(result) == 1:
house = result[0]
else:
return [x['name'] for x in result]
if world is None or world not in tibia_worlds:
house["fetch"] = False
return house
house["world"] = world
house["url"] = url_house.format(id=house["id"], world=world)
tries = 5
while True:
try:
page = yield from aiohttp.get(house["url"])
content = yield from page.text(encoding='ISO-8859-1')
except Exception:
if tries == 0:
log.error("get_house: Couldn't fetch {0} (id {1}) in {2}, network error.".format(house["name"],
house["id"],
world))
house["fetch"] = False
break
else:
tries -= 1
yield from asyncio.sleep(network_retry_delay)
continue
# Trimming content to reduce load
try:
start_index = content.index("\"BoxContent\"")
end_index = content.index("</TD></TR></TABLE>")
content = content[start_index:end_index]
except ValueError:
if tries == 0:
log.error("get_house: Couldn't fetch {0} (id {1}) in {2}, network error.".format(house["name"],
house["id"],
world))
house["fetch"] = False
break
else:
tries -= 1
yield from asyncio.sleep(network_retry_delay)
continue
house["fetch"] = True
m = re.search(r'monthly rent is <B>(\d+)', content)
if m:
house['rent'] = int(m.group(1))
if "rented" in content:
house["status"] = "rented"
m = re.search(r'rented by <A?.+name=([^\"]+).+e has paid the rent until <B>([^<]+)</B>', content)
if m:
house["owner"] = urllib.parse.unquote_plus(m.group(1))
house["until"] = m.group(2).replace(" ", " ")
if "move out" in content:
house["status"] = "transferred"
m = re.search(r'will move out on <B>([^<]+)</B> \(time of daily server save\) and will pass the '
r'house to <A.+name=([^\"]+).+ for <B>(\d+) gold', content)
if m:
house["transfer_date"] =house["until"] = m.group(1).replace(" ", " ")
house["transferee"] = urllib.parse.unquote_plus(m.group(2))
house["transfer_price"] = int(m.group(3))
elif "auctioned" in content:
house["status"] = "auctioned"
if ". No bid has" in content:
house["status"] = "empty"
break
m = re.search(r'The auction will end at <B>([^\<]+)</B>\. '
r'The highest bid so far is <B>(\d+).+ by .+name=([^\"]+)\"', content)
if m:
house["auction_end"] = m.group(1).replace(" ", " ")
house["top_bid"] = int(m.group(2))
house["top_bidder"] = urllib.parse.unquote_plus(m.group(3))
break
return house
finally:
c.close() | 62 |
def test_check_script(rpconn, piece_hashes, spool_regtest, transactions):
"""
Test :staticmethod:`check_script`.
Args;
alice (str): bitcoin address of alice, the sender
bob (str): bitcoin address of bob, the receiver
rpconn (AuthServiceProxy): JSON-RPC connection
(:class:`AuthServiceProxy` instance) to bitcoin regtest
transactions (Transactions): :class:`Transactions` instance to
communicate to the bitcoin regtest node
"""
from spool import Spool
from spool.spoolex import BlockchainSpider
sender_password = uuid1().hex.encode('utf-8')
sender_wallet = BIP32Node.from_master_secret(sender_password,
netcode='XTN')
sender_address = sender_wallet.bitcoin_address()
rpconn.importaddress(sender_address)
rpconn.sendtoaddress(sender_address, Spool.FEE/100000000)
rpconn.sendtoaddress(sender_address, Spool.TOKEN/100000000)
rpconn.sendtoaddress(sender_address, Spool.TOKEN/100000000)
rpconn.sendtoaddress(sender_address, Spool.TOKEN/100000000)
rpconn.generate(1)
receiver_address = rpconn.getnewaddress()
# TODO do not rely on Spool
txid = spool_regtest.transfer(
('', sender_address),
receiver_address,
piece_hashes,
sender_password,
5,
min_confirmations=1,
)
verb = BlockchainSpider.check_script(transactions.get(txid)['vouts'])
assert verb == b'ASCRIBESPOOL01TRANSFER5' | 63 |
def glance_detail(request):
"""
OpenStack specific action to get image details from Glance
:param request: HTTPRequest
:return: rendered HTML
"""
required_fields = set(['imageId'])
if not required_fields.issubset(request.POST):
return render(request, 'ajax/ajaxError.html', {'error': "Invalid Parameters in POST"})
image_id = request.POST["imageId"]
image = get_object_or_404(Image, pk=image_id)
if openstackUtils.connect_to_openstack():
glance_id = openstackUtils.get_image_id_for_name(image.name)
glance_json = dict()
if glance_id is not None:
glance_json = openstackUtils.get_glance_image_detail(glance_id)
logger.debug("glance json of %s is" % glance_id)
logger.debug(glance_json)
logger.debug("---")
return render(request, 'images/glance_detail.html', {'image': glance_json,
"image_id": image_id,
"glance_id": glance_id,
"openstack_host": configuration.openstack_host
})
else:
return render(request, 'error.html', {'error': "Could not connect to OpenStack"}) | 64 |
def write_nonterminal_arcs(start_state, loop_state, next_state,
nonterminals, left_context_phones):
"""This function relates to the grammar-decoding setup, see
kaldi-asr.org/doc/grammar.html. It is called from write_fst_no_silence
and write_fst_silence, and writes to the stdout some extra arcs
in the lexicon FST that relate to nonterminal symbols.
    See the section "Special symbols in L.fst",
    kaldi-asr.org/doc/grammar.html#grammar_special_l.
start_state: the start-state of L.fst.
loop_state: the state of high out-degree in L.fst where words leave
and enter.
next_state: the number from which this function can start allocating its
own states. the updated value of next_state will be returned.
nonterminals: the user-defined nonterminal symbols as a list of
strings, e.g. ['#nonterm:contact_list', ... ].
left_context_phones: a list of phones that may appear as left-context,
e.g. ['a', 'ah', ... '#nonterm_bos'].
"""
shared_state = next_state
next_state += 1
final_state = next_state
next_state += 1
print("{src}\t{dest}\t{phone}\t{word}\t{cost}".format(
src=start_state, dest=shared_state,
phone='#nonterm_begin', word='#nonterm_begin',
cost=0.0))
for nonterminal in nonterminals:
print("{src}\t{dest}\t{phone}\t{word}\t{cost}".format(
src=loop_state, dest=shared_state,
phone=nonterminal, word=nonterminal,
cost=0.0))
# this_cost equals log(len(left_context_phones)) but the expression below
# better captures the meaning. Applying this cost to arcs keeps the FST
    # stochastic (sum-to-one, like an HMM), so that if we do weight pushing
# things won't get weird. In the grammar-FST code when we splice things
# together we will cancel out this cost, see the function CombineArcs().
this_cost = -math.log(1.0 / len(left_context_phones))
for left_context_phone in left_context_phones:
print("{src}\t{dest}\t{phone}\t{word}\t{cost}".format(
src=shared_state, dest=loop_state,
phone=left_context_phone, word='<eps>', cost=this_cost))
# arc from loop-state to a final-state with #nonterm_end as ilabel and olabel
print("{src}\t{dest}\t{phone}\t{word}\t{cost}".format(
src=loop_state, dest=final_state,
phone='#nonterm_end', word='#nonterm_end', cost=0.0))
print("{state}\t{final_cost}".format(
state=final_state, final_cost=0.0))
return next_state | 65 |
def _construct_expression(coeffs, opt):
"""The last resort case, i.e. use the expression domain. """
domain, result = EX, []
for coeff in coeffs:
result.append(domain.from_sympy(coeff))
return domain, result | 66 |
def get_share_range(level: int):
"""Returns the share range for a specific level
The returned value is a list with the lower limit and the upper limit in that order."""
return int(round(level * 2 / 3, 0)), int(round(level * 3 / 2, 0)) | 67 |
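For example, level 100 yields the inclusive range (67, 150), since round(100 * 2 / 3) = 67 and round(100 * 3 / 2) = 150:

low, high = get_share_range(100)
print(low, high)  # 67 150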
def class_from_module_path(
module_path: Text, lookup_path: Optional[Text] = None
) -> Type:
"""Given the module name and path of a class, tries to retrieve the class.
The loaded class can be used to instantiate new objects.
Args:
module_path: either an absolute path to a Python class,
or the name of the class in the local / global scope.
lookup_path: a path where to load the class from, if it cannot
be found in the local / global scope.
Returns:
a Python class
Raises:
ImportError, in case the Python class cannot be found.
RasaException, in case the imported result is something other than a class
"""
klass = None
if "." in module_path:
module_name, _, class_name = module_path.rpartition(".")
m = importlib.import_module(module_name)
klass = getattr(m, class_name, None)
elif lookup_path:
# try to import the class from the lookup path
m = importlib.import_module(lookup_path)
klass = getattr(m, module_path, None)
if klass is None:
raise ImportError(f"Cannot retrieve class from path {module_path}.")
if not inspect.isclass(klass):
raise RasaException(
f"`class_from_module_path()` is expected to return a class, "
f"but for {module_path} we got a {type(klass)}."
)
return klass | 68 |
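A quick illustration with a standard-library class; the returned object is the class itself and can be instantiated directly:

klass = class_from_module_path("collections.OrderedDict")
print(klass)           # <class 'collections.OrderedDict'>
instance = klass(a=1)  # OrderedDict([('a', 1)])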
def _get_vars(fis):
"""Get an encoded version of the parameters of the fuzzy sets in a FIS"""
for variable in fis.variables:
for value_name, value in variable.values.items():
for par, default in value._get_description().items():
if par != "type":
yield "var_" + variable.name + "_" + value_name + "_" + par, default
# Same for target
for variable in [fis.target]: # For symmetry
for value_name, value in variable.values.items():
for par, default in value._get_description().items():
if par != "type":
yield "target_" + variable.name + "_" + value_name + "_" + par, default | 69 |
def detection_map(detect_res,
label,
class_num,
background_label=0,
overlap_threshold=0.3,
evaluate_difficult=True,
has_state=None,
input_states=None,
out_states=None,
ap_version='integral'):
"""
${comment}
Args:
detect_res: ${detect_res_comment}
label: ${label_comment}
class_num: ${class_num_comment}
background_label: ${background_label_comment}
overlap_threshold: ${overlap_threshold_comment}
evaluate_difficult: ${evaluate_difficult_comment}
has_state: ${has_state_comment}
input_states: (tuple|None) If not None, It contains 3 elements:
(1) pos_count ${pos_count_comment}.
(2) true_pos ${true_pos_comment}.
(3) false_pos ${false_pos_comment}.
out_states: (tuple|None) If not None, it contains 3 elements.
(1) accum_pos_count ${accum_pos_count_comment}.
(2) accum_true_pos ${accum_true_pos_comment}.
(3) accum_false_pos ${accum_false_pos_comment}.
ap_version: ${ap_type_comment}
Returns:
${map_comment}
Examples:
.. code-block:: python
import paddle.fluid as fluid
            from paddle.fluid.layers import detection
detect_res = fluid.data(
name='detect_res',
shape=[10, 6],
dtype='float32')
label = fluid.data(
name='label',
shape=[10, 6],
dtype='float32')
map_out = detection.detection_map(detect_res, label, 21)
"""
helper = LayerHelper("detection_map", **locals())
def __create_var(type):
return helper.create_variable_for_type_inference(dtype=type)
map_out = __create_var('float32')
accum_pos_count_out = out_states[
0] if out_states is not None else __create_var('int32')
accum_true_pos_out = out_states[
1] if out_states is not None else __create_var('float32')
accum_false_pos_out = out_states[
2] if out_states is not None else __create_var('float32')
pos_count = input_states[0] if input_states is not None else None
true_pos = input_states[1] if input_states is not None else None
false_pos = input_states[2] if input_states is not None else None
helper.append_op(
type="detection_map",
inputs={
'Label': label,
'DetectRes': detect_res,
'HasState': has_state,
'PosCount': pos_count,
'TruePos': true_pos,
'FalsePos': false_pos
},
outputs={
'MAP': map_out,
'AccumPosCount': accum_pos_count_out,
'AccumTruePos': accum_true_pos_out,
'AccumFalsePos': accum_false_pos_out
},
attrs={
'overlap_threshold': overlap_threshold,
'evaluate_difficult': evaluate_difficult,
'ap_type': ap_version,
'class_num': class_num,
})
return map_out | 70 |
def unshorten_amount(amount) -> Decimal:
""" Given a shortened amount, convert it into a decimal
"""
# BOLT #11:
# The following `multiplier` letters are defined:
#
#* `m` (milli): multiply by 0.001
#* `u` (micro): multiply by 0.000001
#* `n` (nano): multiply by 0.000000001
#* `p` (pico): multiply by 0.000000000001
units = {
'p': 10**12,
'n': 10**9,
'u': 10**6,
'm': 10**3,
}
unit = str(amount)[-1]
# BOLT #11:
# A reader SHOULD fail if `amount` contains a non-digit, or is followed by
# anything except a `multiplier` in the table above.
if not re.fullmatch("\\d+[pnum]?", str(amount)):
raise LnDecodeException("Invalid amount '{}'".format(amount))
if unit in units.keys():
return Decimal(amount[:-1]) / units[unit]
else:
return Decimal(amount) | 71 |
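Two worked examples of the multiplier handling, following directly from the table above:

from decimal import Decimal

assert unshorten_amount('2500u') == Decimal(2500) / 10**6  # 0.0025
assert unshorten_amount('10n') == Decimal(10) / 10**9      # 1E-8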
def belong(in_list1: list, in_list2: list) -> bool:
"""
    Check whether or not all the elements of in_list1 belong to in_list2
:param in_list1: the source list
:param in_list2: the target list where to find the element in in_list1
    :return: True if every element of in_list1 is in in_list2, otherwise False
"""
return all(element in in_list2 for element in in_list1) | 72 |
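Two one-line sanity checks for the membership test above:

assert belong([1, 2], [1, 2, 3])
assert not belong([1, 4], [1, 2, 3])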
def _one_hot( # pylint: disable=unused-argument
indices,
depth,
on_value=None,
off_value=None,
axis=None,
dtype=None,
name=None):
"""One hot."""
if on_value is None:
on_value = 1
if off_value is None:
off_value = 0
if dtype is None:
dtype = utils.common_dtype([on_value, off_value], np.float32)
indices = np.array(indices)
depth = np.array(depth)
pred = abs(np.arange(depth, dtype=indices.dtype) -
indices[..., np.newaxis]) > 0
y_out = np.where(pred, np.array(off_value, dtype), np.array(on_value, dtype))
if axis is not None:
y_out = np.moveaxis(y_out, -1, axis)
return y_out | 73 |
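With the default on/off values, the helper produces a standard one-hot matrix; a small sketch of the expected output:

out = _one_hot([0, 2], 3)
print(out)
# [[1. 0. 0.]
#  [0. 0. 1.]]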
def allcombinations(orgset,k):
"""
    returns all combinations of orgset with up to k items
:param orgset: the list to be iterated
:param k: the maxcardinality of the subsets
:return: an iterator of the subsets
example:
>>> c = allcombinations([1,2,3,4],2)
>>> for s in c:
... print s
(1,)
(2,)
(3,)
(4,)
(1, 2)
(1, 3)
(1, 4)
(2, 3)
(2, 4)
(3, 4)
"""
return itertools.chain(*[combination(orgset,i) for i in range(1,k+1)]) | 74 |
def guess_project_dir() -> str:
"""Return detected project dir or user home directory."""
try:
result = subprocess.run(
["git", "rev-parse", "--show-toplevel"],
stderr=subprocess.PIPE,
stdout=subprocess.PIPE,
universal_newlines=True,
check=False,
)
except FileNotFoundError:
# if git is absent we use home directory
return str(Path.home())
if result.returncode != 0:
return str(Path.home())
return result.stdout.splitlines()[0] | 75 |
def get_conf_entry(qname):
"""
Return the entire JSON expression for a given qname.
"""
return conf.get_conf_entry(qname) | 76 |
def scale_down(src_size, size):
"""Scales down crop size if it's larger than image size.
If width/height of the crop is larger than the width/height of the image,
sets the width/height to the width/height of the image.
Parameters
----------
src_size : tuple of int
Size of the image in (width, height) format.
size : tuple of int
Size of the crop in (width, height) format.
Returns
-------
tuple of int
A tuple containing the scaled crop size in (width, height) format.
Example
--------
>>> src_size = (640,480)
>>> size = (720,120)
>>> new_size = mx.img.scale_down(src_size, size)
>>> new_size
(640,106)
"""
w, h = size
sw, sh = src_size
if sh < h:
w, h = float(w * sh) / h, sh
if sw < w:
w, h = sw, float(h * sw) / w
return int(w), int(h) | 77 |
def run_read_bin_and_llc_conversion_test(llc_grid_dir, llc_lons_fname='XC.data',
llc_hfacc_fname='hFacC.data', llc=90,
llc_grid_filetype = '>f',
make_plots=False):
"""
Runs test on the read_bin_llc and llc_conversion routines
Parameters
----------
llc_grid_dir : string
A string with the directory of the binary file to open
llc_lons_fname : string
A string with the name of the XC grid file [XC.data]
llc_hfacc_fname : string
A string with the name of the hfacC grid file [hFacC.data]
llc : int
the size of the llc grid. For ECCO v4, we use the llc90 domain
so `llc` would be `90`.
Default: 90
llc_grid_filetype: string
the file type, default is big endian (>) 32 bit float (f)
alternatively, ('<d') would be little endian (<) 64 bit float (d)
        Default: '>f'
    make_plots : boolean
        A boolean specifying whether or not to make plots
        Default: False
Returns
-------
1 : all tests passed
0 : at least one test failed
"""
# SET TEST RESULT = 1 TO START
TEST_RESULT = 1
    # %% ----------- TEST 1: 2D field XC FROM GRID FILE
#%% 1a LOAD COMPACT
tmpXC_c = read_llc_to_compact(llc_grid_dir, llc_lons_fname, llc=llc,
filetype=llc_grid_filetype)
tmpXC_f = read_llc_to_faces(llc_grid_dir, llc_lons_fname, llc=llc,
filetype=llc_grid_filetype)
tmpXC_t = read_llc_to_tiles(llc_grid_dir, llc_lons_fname, llc=llc,
filetype=llc_grid_filetype)
if make_plots:
#plt.close('all')
for f in range(1,6):
plt.figure()
plt.imshow(tmpXC_f[f]);plt.colorbar()
plot_tiles(tmpXC_t)
plt.draw()
raw_input("Press Enter to continue...")
#%% 1b CONVERT COMPACT TO FACES, TILES
tmpXC_cf = llc_compact_to_faces(tmpXC_c)
tmpXC_ct = llc_compact_to_tiles(tmpXC_c)
for f in range(1,6):
tmp = np.unique(tmpXC_f[f] - tmpXC_cf[f])
print ('unique diffs CF ', f, tmp)
if len(tmp) != 1 or tmp[0] != 0:
TEST_RESULT = 0
print ('failed on 1b-1')
return TEST_RESULT
tmp = np.unique(tmpXC_ct - tmpXC_t)
print ('unique diffs for CT ', tmp)
if len(tmp) != 1 or tmp[0] != 0:
TEST_RESULT = 0
print ('failed on 1b-2')
return TEST_RESULT
#%% 1c CONVERT FACES TO TILES, COMPACT
tmpXC_ft = llc_faces_to_tiles(tmpXC_f)
tmpXC_fc = llc_faces_to_compact(tmpXC_f)
# unique diff tests
tmp = np.unique(tmpXC_t - tmpXC_ft)
print ('unique diffs for FT ', tmp)
if len(tmp) != 1 or tmp[0] != 0:
TEST_RESULT = 0
print ('failed on 1c-1')
return TEST_RESULT
tmp = np.unique(tmpXC_fc - tmpXC_c)
print ('unique diffs FC', tmp )
if len(tmp) != 1 or tmp[0] != 0:
TEST_RESULT = 0
print ('failed on 1c-2')
return TEST_RESULT
#%% 1d CONVERT TILES to FACES, COMPACT
tmpXC_tf = llc_tiles_to_faces(tmpXC_t)
tmpXC_tc = llc_tiles_to_compact(tmpXC_t)
# unique diff tests
for f in range(1,6):
tmp = np.unique(tmpXC_f[f] - tmpXC_tf[f])
print ('unique diffs for TF ', f, tmp)
if len(tmp) != 1 or tmp[0] != 0:
TEST_RESULT = 0
print ('failed on 1d-1')
return TEST_RESULT
tmp = np.unique(tmpXC_tc - tmpXC_c)
print ('unique diffs TC', tmp)
if len(tmp) != 1 or tmp[0] != 0:
TEST_RESULT = 0
print ('failed on 1d-2')
return TEST_RESULT
#%% 1e CONVERT COMPACT TO FACES TO TILES TO FACES TO COMPACT
tmpXC_cftfc = llc_faces_to_compact(llc_tiles_to_faces(llc_faces_to_tiles(llc_compact_to_faces(tmpXC_c))))
tmp = np.unique(tmpXC_cftfc - tmpXC_c)
print ('unique diffs CFTFC', tmp)
if len(tmp) != 1 or tmp[0] != 0:
TEST_RESULT = 0
print ('failed on 1e')
return TEST_RESULT
    # %% ----------- TEST 2: 3D fields HFACC FROM GRID FILE
#%% 2a LOAD COMPACT
tmpHF_c = read_llc_to_compact(llc_grid_dir, llc_hfacc_fname, llc=llc,nk=50,
filetype=llc_grid_filetype)
tmpHF_f = read_llc_to_faces(llc_grid_dir, llc_hfacc_fname, llc=llc, nk=50,
filetype=llc_grid_filetype)
tmpHF_t = read_llc_to_tiles(llc_grid_dir, llc_hfacc_fname, llc=llc, nk=50,
filetype=llc_grid_filetype)
tmpHF_c.shape
if make_plots:
#plt.close('all')
plt.imshow(tmpHF_c[0,:]);plt.colorbar()
plot_tiles(tmpHF_t[:,0,:])
plot_tiles(tmpHF_t[:,20,:])
plt.draw()
raw_input("Press Enter to continue...")
#%% 2b CONVERT COMPACT TO FACES, TILES
tmpHF_cf = llc_compact_to_faces(tmpHF_c)
tmpHF_ct = llc_compact_to_tiles(tmpHF_c)
# unique diff tests
for f in range(1,6):
tmp = np.unique(tmpHF_f[f] - tmpHF_cf[f])
print ('unique diffs CF ', f, tmp)
if len(tmp) != 1 or tmp[0] != 0:
TEST_RESULT = 0
print ('failed on 2b-1')
return TEST_RESULT
tmp = np.unique(tmpHF_ct - tmpHF_t)
print ('unique diffs CT ', tmp)
if len(tmp) != 1 or tmp[0] != 0:
TEST_RESULT = 0
print ('failed on 2b-2')
return TEST_RESULT
if make_plots:
for k in [0, 20]:
for f in range(1,6):
plt.figure()
plt.imshow(tmpHF_cf[f][k,:], origin='lower');plt.colorbar()
plt.draw()
raw_input("Press Enter to continue...")
#%% 2c CONVERT FACES TO TILES, COMPACT
tmpHF_ft = llc_faces_to_tiles(tmpHF_f)
tmpHF_fc = llc_faces_to_compact(tmpHF_f)
if make_plots:
#plt.close('all')
plot_tiles(tmpHF_ft[:,0,:])
plot_tiles(tmpHF_ft[:,20,:])
plt.draw()
raw_input("Press Enter to continue...")
# unique diff tests
tmp = np.unique(tmpHF_t - tmpHF_ft)
print ('unique diffs FT ', tmp)
if len(tmp) != 1 or tmp[0] != 0:
TEST_RESULT = 0
print ('failed on 2c-1')
return TEST_RESULT
tmp = np.unique(tmpHF_fc - tmpHF_c)
print ('unique diffs FC', tmp)
if len(tmp) != 1 or tmp[0] != 0:
TEST_RESULT = 0
print ('failed on 2c-2')
return TEST_RESULT
#%% 2d CONVERT TILES to FACES, COMPACT
tmpHF_tf = llc_tiles_to_faces(tmpHF_t)
tmpHF_tc = llc_tiles_to_compact(tmpHF_t)
if make_plots:
#plt.close('all')
for k in [0, 20]:
for f in range(1,6):
plt.figure()
plt.imshow(tmpHF_tf[f][k,:], origin='lower');plt.colorbar()
plt.draw()
raw_input("Press Enter to continue...")
# unique diff tests
for f in range(1,6):
tmp = np.unique(tmpHF_f[f] - tmpHF_tf[f])
print ('unique diffs TF ', f, tmp)
if len(tmp) != 1 or tmp[0] != 0:
TEST_RESULT = 0
print ('failed on 2d-1')
return TEST_RESULT
tmp = np.unique(tmpHF_tc - tmpHF_c)
print ('unique diffs TC ', tmp)
if len(tmp) != 1 or tmp[0] != 0:
TEST_RESULT = 0
print ('failed on 2d-1')
return TEST_RESULT
#%% 2e CONVERT COMPACT TO FACES TO TILES TO FACES TO COMPACT
tmpHF_cftfc = llc_faces_to_compact(llc_tiles_to_faces(
llc_faces_to_tiles(llc_compact_to_faces(tmpHF_c))))
tmp = np.unique(tmpHF_cftfc - tmpHF_c)
print ('unique diffs CFTFC ', tmp)
if len(tmp) != 1 or tmp[0] != 0:
TEST_RESULT = 0
print ('failed on 2e')
return TEST_RESULT
print ('YOU MADE IT THIS FAR, TESTS PASSED!')
return TEST_RESULT | 78 |
def test_lmp_predict(all_lmp, all_gp, all_mgp, bodies, multihyps):
"""
test the lammps implementation
"""
# pytest.skip()
prefix = f"{bodies}{multihyps}"
mgp_model = all_mgp[prefix]
gp_model = all_gp[prefix]
lmp_calculator = all_lmp[prefix]
ase_calculator = FLARE_Calculator(gp_model, mgp_model, par=False, use_mapping=True)
# create test structure
np.random.seed(1)
cell = np.diag(np.array([1, 1, 1])) * 4
nenv = 10
unique_species = gp_model.training_statistics["species"]
cutoffs = gp_model.cutoffs
struc_test, f = get_random_structure(cell, unique_species, nenv)
# build ase atom from struc
ase_atoms_flare = struc_test.to_ase_atoms()
ase_atoms_flare = FLARE_Atoms.from_ase_atoms(ase_atoms_flare)
ase_atoms_flare.set_calculator(ase_calculator)
ase_atoms_lmp = deepcopy(struc_test).to_ase_atoms()
ase_atoms_lmp.set_calculator(lmp_calculator)
try:
lmp_en = ase_atoms_lmp.get_potential_energy()
flare_en = ase_atoms_flare.get_potential_energy()
lmp_stress = ase_atoms_lmp.get_stress()
flare_stress = ase_atoms_flare.get_stress()
lmp_forces = ase_atoms_lmp.get_forces()
flare_forces = ase_atoms_flare.get_forces()
except Exception as e:
os.chdir(curr_path)
print(e)
raise e
os.chdir(curr_path)
# check that lammps agrees with mgp to within 1 meV/A
print("energy", lmp_en - flare_en, flare_en)
assert np.isclose(lmp_en, flare_en, atol=1e-3)
print("force", lmp_forces - flare_forces, flare_forces)
assert np.isclose(lmp_forces, flare_forces, atol=1e-3).all()
print("stress", lmp_stress - flare_stress, flare_stress)
assert np.isclose(lmp_stress, flare_stress, atol=1e-3).all()
# check the lmp var
# mgp_std = np.sqrt(mgp_pred[1])
# print("isclose? diff:", lammps_stds[atom_num]-mgp_std, "mgp value", mgp_std)
# assert np.isclose(lammps_stds[atom_num], mgp_std, rtol=1e-2)
clean(prefix=prefix) | 79 |
def _single_replace(self, to_replace, method, inplace, limit):
"""
Replaces values in a Series using the fill method specified when no
replacement value is given in the replace method
"""
if self.ndim != 1:
raise TypeError('cannot replace {0} with method {1} on a {2}'
.format(to_replace, method, type(self).__name__))
orig_dtype = self.dtype
result = self if inplace else self.copy()
fill_f = missing.get_fill_func(method)
mask = missing.mask_missing(result.values, to_replace)
values = fill_f(result.values, limit=limit, mask=mask)
if values.dtype == orig_dtype and inplace:
return
result = pd.Series(values, index=self.index,
dtype=self.dtype).__finalize__(self)
if inplace:
self._update_inplace(result._data)
return
return result | 80 |
def grabArtifactFromJenkins(**context):
"""
    Grab an artifact from the previous job.
    The python-jenkins library doesn't expose a method for that,
    but it is straightforward to build the request manually.
"""
hook = JenkinsHook("jenkins_nqa")
jenkins_server = hook.get_jenkins_server()
url = context['task_instance'].xcom_pull(task_ids='trigger_job')
#The JenkinsJobTriggerOperator store the job url in the xcom variable corresponding to the task
#You can then use it to access things or to get the job number
#This url looks like : http://jenkins_url/job/job_name/job_number/
url = url + "artifact/myartifact.xml" #Or any other artifact name
self.log.info("url : %s", url)
request = Request(url)
response = jenkins_server.jenkins_open(request)
self.log.info("response: %s", response)
return response #We store the artifact content in a xcom variable for later use | 81 |
def _add_commandline_features(output_df: pd.DataFrame, force: bool):
"""
Add commandline default features.
Parameters
----------
output_df : pd.DataFrame
The dataframe to add features to
force : bool
If True overwrite existing feature columns
"""
if "commandlineLen" not in output_df or force:
output_df["commandlineLen"] = output_df.apply(
lambda x: len(x.CommandLine), axis=1
)
if "commandlineLogLen" not in output_df or force:
output_df["commandlineLogLen"] = output_df.apply(
lambda x: log10(x.commandlineLen) if x.commandlineLen else 0, axis=1
)
if "commandlineTokensFull" not in output_df or force:
output_df["commandlineTokensFull"] = output_df[["CommandLine"]].apply(
lambda x: delim_count(x.CommandLine), axis=1
)
if "commandlineScore" not in output_df or force:
output_df["commandlineScore"] = output_df.apply(
lambda x: char_ord_score(x.CommandLine), axis=1
)
if "commandlineTokensHash" not in output_df or force:
output_df["commandlineTokensHash"] = output_df.apply(
lambda x: delim_hash(x.CommandLine), axis=1
) | 82 |
def create_RPS_xml_report(suite_name, suite_data_list):
"""STUB - suite_name is a string = Basic, KokkosMechanics, etc.;
suite_data_list will be the values for a key, Basic or KokkosMechanics
"""
aggregate_results_dict = dict()
#print(suite_data_list)
for list_item in suite_data_list:
for index, timing in enumerate(list_item[1:]):
if "Not run" in timing:
continue
variant_name = col_meanings_dict[index + 1]
if variant_name not in aggregate_results_dict:
aggregate_results_dict[variant_name] = 0.0
# sums values of all the basic kernels
aggregate_results_dict[variant_name] += float(timing)
#print(aggregate_results_dict)
suite_root = ET.SubElement(perf_root, "timing")
associate_timings_with_xml(suite_root, aggregate_results_dict, suite_name)
for list_item in suite_data_list:
test_timings_dict = dict()
for index, timing in enumerate(list_item[1:]):
if "Not run" in timing:
continue
variant_name = col_meanings_dict[index + 1]
test_timings_dict[variant_name] = float(timing)
xml_element_for_a_kernel_test = ET.SubElement(suite_root, "timing")
associate_timings_with_xml(xml_element_for_a_kernel_test,
test_timings_dict, list_item[0]) | 83 |
def ensure_dict_from_str(s, **kwargs):
"""Given a multiline string with key=value items convert it to a dictionary
Parameters
----------
    s: str or dict
    Returns
    -------
    dict or None
        None if input s is empty
"""
if not s:
return None
if isinstance(s, dict):
return s
out = {}
for value_str in ensure_list_from_str(s, **kwargs):
if '=' not in value_str:
raise ValueError("{} is not in key=value format".format(repr(value_str)))
k, v = value_str.split('=', 1)
if k in out:
err = "key {} was already defined in {}, but new value {} was provided".format(k, out, v)
raise ValueError(err)
out[k] = v
return out | 84 |
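An illustrative call, assuming ensure_list_from_str splits the multiline string into one key=value item per line:

print(ensure_dict_from_str("user=alice\nrole=admin"))
# {'user': 'alice', 'role': 'admin'}
print(ensure_dict_from_str(""))  # None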
def LessOptionsStart(builder):
"""This method is deprecated. Please switch to Start."""
return Start(builder) | 85 |
def load_net(cfg_filepath, weights_filepath, clear):
# type: (str, str, bool) -> object
"""
:param cfg_filepath: cfg file name
:param weights_filepath: weights file name
:param clear: True if you want to clear the weights otherwise False
:return: darknet network object
"""
return pyyolo.darknet.load_net(cfg_filepath, weights_filepath, clear) | 86 |
def join_bytes_or_unicode(prefix, suffix):
"""
Join two path components of either ``bytes`` or ``unicode``.
The return type is the same as the type of ``prefix``.
"""
# If the types are the same, nothing special is necessary.
if type(prefix) == type(suffix):
return join(prefix, suffix)
# Otherwise, coerce suffix to the type of prefix.
if isinstance(prefix, text_type):
return join(prefix, suffix.decode(getfilesystemencoding()))
else:
return join(prefix, suffix.encode(getfilesystemencoding())) | 87 |
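A behaviour sketch: the return type follows the prefix (assuming join is os.path.join and a UTF-8 filesystem encoding):

print(join_bytes_or_unicode(u"/tmp/data", b"report.txt"))  # '/tmp/data/report.txt' (text)
print(join_bytes_or_unicode(b"/tmp/data", u"report.txt"))  # b'/tmp/data/report.txt' (bytes)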
def cfg():
"""Model configuration."""
name = ''
parameters = {
} | 88 |
def translate_bbox(image, bboxes, pixels, replace, shift_horizontal):
"""Equivalent of PIL Translate in X/Y dimension that shifts image and bbox.
Args:
image: 3D uint8 Tensor.
bboxes: 2D Tensor that is a list of the bboxes in the image. Each bbox
has 4 elements (min_y, min_x, max_y, max_x) of type float with values
between [0, 1].
pixels: An int. How many pixels to shift the image and bboxes
replace: A one or three value 1D tensor to fill empty pixels.
shift_horizontal: Boolean. If true then shift in X dimension else shift in
Y dimension.
Returns:
A tuple containing a 3D uint8 Tensor that will be the result of translating
image by pixels. The second element of the tuple is bboxes, where now
the coordinates will be shifted to reflect the shifted image.
"""
if shift_horizontal:
image = translate_x(image, pixels, replace)
else:
image = translate_y(image, pixels, replace)
# Convert bbox coordinates to pixel values.
image_height = tf.shape(image)[0]
image_width = tf.shape(image)[1]
# pylint:disable=g-long-lambda
wrapped_shift_bbox = lambda bbox: _shift_bbox(
bbox, image_height, image_width, pixels, shift_horizontal)
# pylint:enable=g-long-lambda
bboxes = tf.map_fn(wrapped_shift_bbox, bboxes)
return image, bboxes | 89 |
def bootstrap(args):
"""Clone repo at pull/branch into root and run job script."""
# pylint: disable=too-many-locals,too-many-branches,too-many-statements
job = args.job
repos = parse_repos(args)
upload = args.upload
build_log_path = os.path.abspath('build-log.txt')
build_log = setup_logging(build_log_path)
started = time.time()
if args.timeout:
end = started + args.timeout * 60
else:
end = 0
call = lambda *a, **kw: _call(end, *a, **kw)
gsutil = GSUtil(call)
logging.info('Bootstrap %s...', job)
build = build_name(started)
if upload:
if repos and repos[repos.main][1]: # merging commits, a pr
paths = pr_paths(upload, repos, job, build)
else:
paths = ci_paths(upload, job, build)
logging.info('Gubernator results at %s', gubernator_uri(paths))
# TODO(fejta): Replace env var below with a flag eventually.
os.environ[GCS_ARTIFACTS_ENV] = paths.artifacts
version = 'unknown'
exc_type = None
setup_creds = False
try:
setup_root(call, args.root, repos, args.ssh, args.git_cache, args.clean)
logging.info('Configure environment...')
if repos:
version = find_version(call)
else:
version = ''
setup_magic_environment(job)
setup_credentials(call, args.service_account, upload)
setup_creds = True
logging.info('Start %s at %s...', build, version)
if upload:
start(gsutil, paths, started, node(), version, repos)
success = False
try:
call(job_script(job))
logging.info('PASS: %s', job)
success = True
except subprocess.CalledProcessError:
logging.error('FAIL: %s', job)
except Exception: # pylint: disable=broad-except
exc_type, exc_value, exc_traceback = sys.exc_info()
logging.exception('unexpected error')
success = False
if not setup_creds:
setup_credentials(call, args.service_account, upload)
if upload:
logging.info('Upload result and artifacts...')
logging.info('Gubernator results at %s', gubernator_uri(paths))
try:
finish(gsutil, paths, success, '_artifacts', build, version, repos, call)
except subprocess.CalledProcessError: # Still try to upload build log
success = False
logging.getLogger('').removeHandler(build_log)
build_log.close()
if upload:
gsutil.copy_file(paths.build_log, build_log_path)
if exc_type:
raise exc_type, exc_value, exc_traceback # pylint: disable=raising-bad-type
if not success:
# TODO(fejta/spxtr): we should distinguish infra and non-infra problems
# by exit code and automatically retrigger after an infra-problem.
sys.exit(1) | 90 |
def validate_positive_int(ctx, param, value):
"""Callback to validate param passed is a positive integer."""
if isinstance(value, int) and value > 0:
return value
raise click.BadParameter("Must be a positive integer") | 91 |
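Typical wiring of the callback into a click option (the option name is illustrative):

import click

@click.command()
@click.option("--retries", type=int, default=1, callback=validate_positive_int)
def run(retries):
    click.echo("retries={}".format(retries))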
def sum_crc16(crc, file_bit):
"""
    Compute a CRC16 checksum.
    @param crc: initial checksum value
    @param file_bit: binary stream of the file
    @return: checksum
"""
for bit in file_bit:
crc = 0xffff & crc
# temp = crc // 256
temp = crc >> 8
crc = 0xffff & crc
crc <<= 8
crc = 0xffff & crc
crc ^= crc_list[0xff & (temp ^ bit)]
return crc | 92 |
def get_dicts_from_list(list_of_dicts, list_of_key_values, key='id'):
"""
Returns list of dictionaries with keys: @prm{key} equal to one from list
@prm{list_of_key_values} from a list of dictionaries: @prm{list_of_dicts}.
"""
ret = []
for dictionary in list_of_dicts:
if dictionary.get(key) == None:
raise Exception("No key: " + key + " in dictionary.")
if dictionary.get(key) in list_of_key_values:
ret.append(dictionary)
return ret | 93 |
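A small example filtering on the default 'id' key:

machines = [{'id': 1, 'name': 'vm-a'}, {'id': 2, 'name': 'vm-b'}, {'id': 3, 'name': 'vm-c'}]
print(get_dicts_from_list(machines, [1, 3]))
# [{'id': 1, 'name': 'vm-a'}, {'id': 3, 'name': 'vm-c'}]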
def pack_range(key, packing, grad_vars, rng):
"""Form the concatenation of a specified range of gradient tensors.
Args:
key: Value under which to store meta-data in packing that will be used
later to restore the grad_var list structure.
packing: Dict holding data describing packed ranges of small tensors.
grad_vars: List of (grad, var) pairs for one replica.
rng: A pair of integers giving the first, last indices of a consecutive
range of tensors to be packed.
Returns:
A tensor that is the concatenation of all the specified small tensors.
"""
to_pack = grad_vars[rng[0]:rng[1] + 1]
members = []
variables = []
restore_shapes = []
with ops.name_scope('pack'):
for g, v in to_pack:
variables.append(v)
restore_shapes.append(g.shape)
with ops.device(g.device):
members.append(array_ops.reshape(g, [-1]))
packing[key] = GradPackTuple(
indices=range(rng[0], rng[1] + 1),
vars=variables,
shapes=restore_shapes)
with ops.device(members[0].device):
return array_ops.concat(members, 0) | 94 |
def _compute_delta(log_moments, eps):
"""Compute delta for given log_moments and eps.
Args:
log_moments: the log moments of privacy loss, in the form of pairs
of (moment_order, log_moment)
eps: the target epsilon.
Returns:
delta
"""
min_delta = 1.0
for moment_order, log_moment in log_moments:
if moment_order == 0:
continue
if math.isinf(log_moment) or math.isnan(log_moment):
sys.stderr.write("The %d-th order is inf or Nan\n" % moment_order)
continue
if log_moment < moment_order * eps:
min_delta = min(min_delta,
math.exp(log_moment - moment_order * eps))
return min_delta | 95 |
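A worked example with two illustrative (moment_order, log_moment) pairs and eps = 1.0; the function keeps the smaller of the two candidate bounds:

import math

log_moments = [(2, 0.5), (4, 1.2)]
delta = _compute_delta(log_moments, eps=1.0)
# candidates: exp(0.5 - 2*1.0) ~ 0.223 and exp(1.2 - 4*1.0) ~ 0.061
print(delta)  # ~0.0608
assert math.isclose(delta, math.exp(1.2 - 4.0))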
def concat_docs():
"""Concatinates files yielded by the generator `find_docs`."""
file_path = os.path.dirname(os.path.realpath(__file__))
head, tail = os.path.split(file_path)
outfile = head + "/README.rst"
if not os.path.isfile(outfile):
print("../README.rst not found, exiting!")
exit(1)
with open(outfile, 'w') as readme_handle:
readme_handle.write(repository_tags)
for doc in find_docs():
with open(doc, 'r') as doc_handle:
for line in doc_handle:
readme_handle.write(line)
readme_handle.write("\n") | 96 |
def checkFormatReturnTraceOnError(file_path):
"""Run checkFormat and return the traceback of any exception."""
try:
return checkFormat(file_path)
except:
return traceback.format_exc().split("\n") | 97 |
def print_build_memory_usage(report):
""" Generate result table with memory usage values for build results
Aggregates (puts together) reports obtained from self.get_memory_summary()
Positional arguments:
report - Report generated during build procedure.
"""
from prettytable import PrettyTable
columns_text = ['name', 'target', 'toolchain']
columns_int = ['static_ram', 'total_flash']
table = PrettyTable(columns_text + columns_int)
for col in columns_text:
table.align[col] = 'l'
for col in columns_int:
table.align[col] = 'r'
for target in report:
for toolchain in report[target]:
for name in report[target][toolchain]:
for dlist in report[target][toolchain][name]:
for dlistelem in dlist:
# Get 'memory_usage' record and build table with
# statistics
record = dlist[dlistelem]
if 'memory_usage' in record and record['memory_usage']:
# Note that summary should be in the last record of
# 'memory_usage' section. This is why we are
# grabbing last "[-1]" record.
row = [
record['description'],
record['target_name'],
record['toolchain_name'],
record['memory_usage'][-1]['summary'][
'static_ram'],
record['memory_usage'][-1]['summary'][
'total_flash'],
]
table.add_row(row)
result = "Memory map breakdown for built projects (values in Bytes):\n"
result += table.get_string(sortby='name')
return result | 98 |
def argmin(a, axis=None, out=None):
"""
Returns the indices of the minimum values along an axis.
Parameters
----------
a : array_like
Input array.
axis : int, optional
By default, the index is into the flattened array, otherwise
along the specified axis.
out : array, optional
If provided, the result will be inserted into this array. It should
be of the appropriate shape and dtype.
Returns
-------
index_array : ndarray of ints
Array of indices into the array. It has the same shape as `a.shape`
with the dimension along `axis` removed.
See Also
--------
ndarray.argmin, argmax
amin : The minimum value along a given axis.
unravel_index : Convert a flat index into an index tuple.
take_along_axis : Apply ``np.expand_dims(index_array, axis)``
from argmin to an array as if by calling min.
Notes
-----
In case of multiple occurrences of the minimum values, the indices
corresponding to the first occurrence are returned.
Examples
--------
>>> a = np.arange(6).reshape(2,3) + 10
>>> a
array([[10, 11, 12],
[13, 14, 15]])
>>> np.argmin(a)
0
>>> np.argmin(a, axis=0)
array([0, 0, 0])
>>> np.argmin(a, axis=1)
array([0, 0])
Indices of the minimum elements of a N-dimensional array:
>>> ind = np.unravel_index(np.argmin(a, axis=None), a.shape)
>>> ind
(0, 0)
>>> a[ind]
10
>>> b = np.arange(6) + 10
>>> b[4] = 10
>>> b
array([10, 11, 12, 13, 10, 15])
>>> np.argmin(b) # Only the first occurrence is returned.
0
>>> x = np.array([[4,2,3], [1,0,3]])
>>> index_array = np.argmin(x, axis=-1)
>>> # Same as np.min(x, axis=-1, keepdims=True)
>>> np.take_along_axis(x, np.expand_dims(index_array, axis=-1), axis=-1)
array([[2],
[0]])
    >>> # Same as np.min(x, axis=-1)
>>> np.take_along_axis(x, np.expand_dims(index_array, axis=-1), axis=-1).squeeze(axis=-1)
array([2, 0])
"""
return _wrapfunc(a, 'argmin', axis=axis, out=out) | 99 |