def get_structure_from_mp(formula): m = MPRester() entries = m.get_entries(formula, inc_structure="final") if len(entries) == 0: raise ValueError("No structure with formula %s in Materials Project!" % formula) elif len(entries) > 1: warnings.warn("%d structu...
Convenience method to get a crystal from the Materials Project database via the API. Requires PMG_MAPI_KEY to be set. Args: formula (str): A formula Returns: (Structure) The lowest energy structure in Materials Project with that formula.
def make_ica_funs(observed_dimension, latent_dimension): def sample(weights, n_samples, noise_std, rs): latents = rs.randn(latent_dimension, n_samples) latents = np.array(sorted(latents.T, key=lambda a_entry: a_entry[0])).T noise = rs.randn(n_samples, observed_dimension) * noise_std ...
These functions implement independent component analysis. The model is: latents are drawn i.i.d. for each data point from a product of Student-t distributions; weights are shared across all data points; each data point = latents * weights + noise.
def patterned(self): pattFill = self._xPr.get_or_change_to_pattFill() self._fill = _PattFill(pattFill)
Selects the pattern fill type. Note that calling this method does not by itself set a foreground or background color of the pattern. Rather it enables subsequent assignments to properties like fore_color to set the pattern and colors.
def run(self): def compound_name(id): if id not in self._model.compounds: return id return self._model.compounds[id].properties.get(, id) exclude = set(self._args.exclude) count = 0 unbalanced = 0 unchecked = 0 ...
Run the charge balance command.
def variantsGenerator(self, request): compoundId = datamodel.VariantSetCompoundId \ .parse(request.variant_set_id) dataset = self.getDataRepository().getDataset(compoundId.dataset_id) variantSet = dataset.getVariantSet(compoundId.variant_set_id) intervalIterator = pa...
Returns a generator over the (variant, nextPageToken) pairs defined by the specified request.
def flaskrun(app, default_host="127.0.0.1", default_port="8000"): parser = optparse.OptionParser() parser.add_option( "-H", "--host", help="Hostname of the Flask app " + "[default %s]" % default_host, default=default_host, ) parser.add_option( "-P", ...
Takes a flask.Flask instance and runs it. Parses command-line flags to configure the app.
def set_connected(self, connected): with self.__connect_wait_condition: self.connected = connected if connected: self.__connect_wait_condition.notify()
:param bool connected: the new connection state; threads waiting on the connect condition are notified when it becomes True.
def ls(obj=None): if obj is None: import builtins all = builtins.__dict__.copy() all.update(globals()) objlst = sorted(conf.layers, key=lambda x:x.__name__) for o in objlst: print("%-10s : %s" %(o.__name__,o.name)) else: if isinstance(obj...
List available layers, or infos on a given layer
def _build_models_query(self, query): registered_models_ct = self.build_models_list() if registered_models_ct: restrictions = [xapian.Query( % (TERM_PREFIXES[DJANGO_CT], model_ct)) for model_ct in registered_models_ct] limit_query = xapian.Que...
Builds a query from `query` that filters to documents only from registered models.
def _initialize_with_array(self, data, rowBased=True): if rowBased: self.matrix = [] if len(data) != self._rows: raise ValueError("Size of Matrix does not match") for col in xrange(self._columns): self.matrix.append([]) ...
Set the matrix values from a two dimensional list.
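A minimal sketch of the row-based versus column-based initialization described above, using plain lists instead of the class's internals (the function name and layout are illustrative assumptions, not the library's API):

```python
def init_matrix(data, rows, cols, row_based=True):
    """Build a column-major matrix (list of columns) from a 2-D list.

    When row_based is True, `data` is a list of rows and is transposed
    into columns; otherwise it is already a list of columns.
    """
    if row_based:
        if len(data) != rows:
            raise ValueError("Size of Matrix does not match")
        # transpose: matrix[col][row] = data[row][col]
        return [[data[r][c] for r in range(rows)] for c in range(cols)]
    if len(data) != cols:
        raise ValueError("Size of Matrix does not match")
    return [list(col) for col in data]
```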
def save_assessment_part(self, assessment_part_form, *args, **kwargs): if assessment_part_form.is_for_update(): return self.update_assessment_part(assessment_part_form, *args, **kwargs) else: return self.create_assessment_part(assessment_part_form, *arg...
Pass through to provider AssessmentPartAdminSession.update_assessment_part
def run(self): try: answer = urlopen(self.url + "&mode=queue").read().decode() except (HTTPError, URLError) as error: self.output = { "full_text": str(error.reason), "color": " } return answer = json.loads(...
Connect to SABnzbd and get the data.
def _op_method(self, data): res = np.empty(len(self.operators), dtype=np.ndarray) for i in range(len(self.operators)): res[i] = self.operators[i].op(data) return res
Operator This method returns the input data operated on by all of the operators Parameters ---------- data : np.ndarray Input data array Returns ------- np.ndarray linear operation results
def _row_selector(self, other): if type(other) is SArray: if self.__has_size__() and other.__has_size__() and len(other) != len(self): raise IndexError("Cannot perform logical indexing on arrays of different length.") with cython_context(): return...
Where other is an SArray of identical length as the current Frame, this returns a selection of a subset of rows in the current SFrame where the corresponding row in the selector is non-zero.
def submit_response(self, assessment_section_id, item_id, answer_form): if not isinstance(answer_form, ABCAnswerForm): raise errors.InvalidArgument() if answer_form.is_for_update(): raise errors.InvalidArgument() try: if self._form...
Submits an answer to an item. arg: assessment_section_id (osid.id.Id): ``Id`` of the ``AssessmentSection`` arg: item_id (osid.id.Id): ``Id`` of the ``Item`` arg: answer_form (osid.assessment.AnswerForm): the response raise: IllegalState - ``has_assessment_secti...
def _none_rejecter(validation_callable ): def reject_none(x): if x is not None: return validation_callable(x) else: raise ValueIsNone(wrong_value=x) reject_none.__name__ = .format(get_callable_name(validation_callable))...
Wraps the given validation callable to reject None values. When a None value is received by the wrapper, it is not passed to the validation_callable and instead this function will raise a WrappingFailure. When any other value is received the validation_callable is called as usual. :param validation_callabl...
def traverse_bfs(self): if not isinstance(include_self, bool): raise TypeError("include_self must be a bool") ...
Perform a Breadth-First Search (BFS) starting at this ``Node`` object. Yields (``Node``, distance) tuples Args: ``include_self`` (``bool``): ``True`` to include self in the traversal, otherwise ``False``
def status(name, sig=None): * cmd = .format(_service_path(name)) out = __salt__[](cmd, python_shell=False) try: pid = re.search(r, out).group(1) except AttributeError: pid = return pid
Return the status for a service via daemontools, return pid if running CLI Example: .. code-block:: bash salt '*' daemontools.status <service name>
def p_startswith(self, st, ignorecase=False): "Return True if the input starts with `st` at current position" length = len(st) matcher = result = self.input[self.pos:self.pos + length] if ignorecase: matcher = result.lower() st = st.lower() if matcher == s...
Return True if the input starts with `st` at current position
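The positional, optionally case-insensitive prefix match above can be sketched as a standalone function (the name and parameters are illustrative, not the parser's real API):

```python
def startswith_at(text, pos, st, ignorecase=False):
    """Return True if `text` starts with `st` at position `pos`."""
    candidate = text[pos:pos + len(st)]
    if ignorecase:
        # lowercase both sides so the comparison ignores case
        candidate = candidate.lower()
        st = st.lower()
    return candidate == st
```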
def _non_framed_body_length(header, plaintext_length): body_length = header.algorithm.iv_len body_length += 8 body_length += plaintext_length body_length += header.algorithm.auth_len return body_length
Calculates the length of a non-framed message body, given a complete header. :param header: Complete message header object :type header: aws_encryption_sdk.structures.MessageHeader :param int plaintext_length: Length of plaintext in bytes :rtype: int
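The body-length arithmetic shown in the code reduces to summing four fixed parts. A sketch with the algorithm's IV and auth-tag lengths passed in directly (parameter names are assumptions for illustration):

```python
def non_framed_body_length(iv_len, auth_len, plaintext_length):
    """Length of a non-framed body: IV + 8-byte content-length field
    + ciphertext (same size as the plaintext) + authentication tag."""
    return iv_len + 8 + plaintext_length + auth_len
```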
def predict(data, training_dir=None, model_name=None, model_version=None, cloud=False): if cloud: if not model_version or not model_name: raise ValueError() if training_dir: raise ValueError() with warnings.catch_warnings(): warnings.simplefilter("ignore") return cloud_predict(m...
Runs prediction locally or on the cloud. Args: data: List of csv strings or a Pandas DataFrame that match the model schema. training_dir: local path to the trained output folder. model_name: deployed model name model_version: deployed model version cloud: bool. If False, does local prediction and ...
def load_plugin(plugin_name): plugin_cls = plugin_map.get(plugin_name, None) if not plugin_cls: try: plugin_module_name, plugin_cls_name = plugin_name.split(":") plugin_module = import_module(plugin_module_name) plugin_cls = getattr(plugin_module, plugin_cls_name...
Given a plugin name, load plugin cls from plugin directory. Will throw an exception if no plugin can be found.
def get_start_array(self, *start_words, **kwargs): if not self.start_arrays: raise MarkovTextExcept("Не с чего начинать генерацию.") if not start_words: return choice(self.start_arrays) _variants = [] _weights = [] for tokens in self.start_arrays...
Generates the beginning of a sentence. :start_words: Try to begin the sentence with these words.
def sample(self, iter, length=None, verbose=0): self._cur_trace_index = 0 self.max_trace_length = iter self._iter = iter self.verbose = verbose or 0 self.seed() self.db.connect_model(self) if length is None: length = iter ...
Draws iter samples from the posterior.
def _filenames_from_arg(filename): if isinstance(filename, string_types): filenames = [filename] elif isinstance(filename, (list, tuple)): filenames = filename else: raise Exception() for fn in filenames: if not os.path.exists(fn): raise ValueError( % fn)...
Utility function to deal with polymorphic filenames argument.
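The polymorphic-argument normalization above can be sketched as follows (a minimal version that skips the existence check; the name is illustrative):

```python
def filenames_from_arg(filename):
    """Normalize a str-or-sequence `filename` argument to a list of paths."""
    if isinstance(filename, str):
        return [filename]
    if isinstance(filename, (list, tuple)):
        return list(filename)
    raise TypeError("filename must be a string, list, or tuple")
```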
def insertPrimaryDataset(self): try : body = request.body.read() indata = cjson.decode(body) indata = validateJSONInputNoCopy("primds", indata) indata.update({"creation_date": dbsUtils().getTime(), "create_by": dbsUtils().getCreateBy() }) self...
API to insert a primary dataset in DBS :param primaryDSObj: primary dataset object :type primaryDSObj: dict :key primary_ds_type: TYPE (out of valid types in DBS, MC, DATA) (Required) :key primary_ds_name: Name of the primary dataset (Required)
def list_view_on_selected(self, widget, selected_item_key): self.lbl.set_text( + self.listView.children[selected_item_key].get_text())
The selection event of the listView; receives the key of the clicked item. The item can then be retrieved quickly via that key.
def ready(self): if self._last_failed: delta = time.time() - self._last_failed return delta >= self.backoff() return True
Whether or not enough time has passed since the last failure
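The failure-timestamp/backoff-window pattern above can be sketched as a small class (names and the fixed backoff parameter are illustrative assumptions):

```python
import time

class Backoff:
    """Track the last failure time and report readiness after a backoff window."""

    def __init__(self, backoff_seconds=1.0):
        self._backoff = backoff_seconds
        self._last_failed = None

    def fail(self):
        # record when the last failure happened
        self._last_failed = time.time()

    def ready(self):
        # ready if we never failed, or enough time has elapsed since failing
        if self._last_failed:
            return time.time() - self._last_failed >= self._backoff
        return True
```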
def create_subscription(self, client_id, client_secret, callback_url, object_type=model.Subscription.OBJECT_TYPE_ACTIVITY, aspect_type=model.Subscription.ASPECT_TYPE_CREATE, verify_token=model.Subscription.VERIFY_TOKEN_DEFAULT): ...
Creates a webhook event subscription. http://strava.github.io/api/partner/v3/events/#create-a-subscription :param client_id: application's ID, obtained during registration :type client_id: int :param client_secret: application's secret, obtained during registration :type clien...
def get_transactions(self, include_investment=False): assert_pd() s = StringIO(self.get_transactions_csv( include_investment=include_investment)) s.seek(0) df = pd.read_csv(s, parse_dates=[]) df.columns = [c.lower().replace(, ) for c in df.columns] df...
Returns the transaction data as a Pandas DataFrame.
def clean_extra(self): extra_paths = self.get_extra_paths() for path in extra_paths: if not os.path.exists(path): continue if os.path.isdir(path): self._clean_directory(path) else: self._clean_file(path)
Clean extra files/directories specified by get_extra_paths()
def coverage_pileup(self, space, start, end): return ((col.pos, self._normalize(col.n, self._total)) for col in self._bam.pileup(space, start, end))
Retrieve pileup coverage across a specified region.
def is_dir(dirname): if not os.path.isdir(dirname): msg = "{0} is not a directory".format(dirname) raise argparse.ArgumentTypeError(msg) else: return dirname
Checks if a path is an actual directory that exists
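This validator is typically wired into argparse as a `type=` callable, so invalid paths are rejected during parsing. A self-contained sketch of that usage:

```python
import argparse
import os
import tempfile

def is_dir(dirname):
    """argparse type-checker: accept only paths that are existing directories."""
    if not os.path.isdir(dirname):
        raise argparse.ArgumentTypeError("{0} is not a directory".format(dirname))
    return dirname

parser = argparse.ArgumentParser()
parser.add_argument("--outdir", type=is_dir)
# the system temp dir always exists, so parsing succeeds
args = parser.parse_args(["--outdir", tempfile.gettempdir()])
```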
def get_amount_arrears_transactions(self, billing_cycle): previous_billing_cycle = billing_cycle.get_previous() if not previous_billing_cycle: return Decimal(0) return self.to_account.balance( transaction__date__lt=previous_billing_cycle.date_range.upper, ...
Get the sum of all transaction legs in to_account during given billing cycle
def to_internal_value(self, data): user = getattr(self.context.get(), ) queryset = self.get_queryset() permission = get_full_perm(, queryset.model) try: return get_objects_for_user( user, permission, queryset.filter(**{...
Convert to internal value.
def exec_command(command, cwd=None): r rc = None stdout = stderr = None if ssh_conn is None: ld_library_path = {: % os.environ.get(, )} p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, env=ld_library_path, cwd=cwd) stdout, stderr = p.com...
Helper to exec locally (subprocess) or remotely (paramiko)
def publish_to_target(self, target_arn, message): conn = self.get_conn() messages = { : message } return conn.publish( TargetArn=target_arn, Message=json.dumps(messages), MessageStructure= )
Publish a message to a topic or an endpoint. :param target_arn: either a TopicArn or an EndpointArn :type target_arn: str :param message: the default message you want to send :type message: str
def update(path,value,timestamp=None): value = float(value) fh = None try: fh = open(path,) return file_update(fh, value, timestamp) finally: if fh: fh.close()
update(path,value,timestamp=None) path is a string value is a float timestamp is either an int or float
def detectAndroid(self): if UAgentInfo.deviceAndroid in self.__userAgent \ or self.detectGoogleTV(): return True return False
Return detection of an Android device Detects *any* Android OS-based device: phone, tablet, and multi-media player. Also detects Google TV.
def add_field(self, field, **kwargs): getattr(self, self._private_fields_name).append(field) self._expire_cache(reverse=True) self._expire_cache(reverse=False)
Add each field as a private field.
def launch(self, args=None): self.options = self.parse_args(args) if self.options.saveinputmeta: self.save_input_meta() if self.options.inputmeta: self.options = self.get_options_from_file(self.options.inputmeta) self.run(self.op...
This method triggers the parsing of arguments.
def _create_ids(self, home_teams, away_teams): categories = pd.Categorical(np.append(home_teams,away_teams)) home_id, away_id = categories.codes[0:int(len(categories)/2)], categories.codes[int(len(categories)/2):len(categories)+1] return home_id, away_id
Creates IDs for both players/teams
def get_all_floating_ips(self): data = self.get_data("floating_ips") floating_ips = list() for jsoned in data[]: floating_ip = FloatingIP(**jsoned) floating_ip.token = self.token floating_ips.append(floating_ip) return floating_ips
This function returns a list of FloatingIP objects.
def create_session(self): session = None if self.key_file is not None: credfile = os.path.expandvars(os.path.expanduser(self.key_file)) try: with open(credfile, ) as f: creds = json.load(f) except json.JSONDecodeError as...
Create a session. First we look in self.key_file for a path to a json file with the credentials. The key file should have 'AWSAccessKeyId' and 'AWSSecretKey'. Next we look at self.profile for a profile name and try to use the Session call to automatically pick up the keys for the profi...
def get_port_profile_status_output_port_profile_mac_association_applied_interface_interface_type(self, **kwargs): config = ET.Element("config") get_port_profile_status = ET.Element("get_port_profile_status") config = get_port_profile_status output = ET.SubElement(get_port_profil...
Auto Generated Code
def connect(self, task_spec): self.thread_starter.outputs.append(task_spec) task_spec._connect_notify(self.thread_starter)
Connect the *following* task to this one. In other words, the given task is added as an output task. task -- the task to connect to.
def time_estimate(self, duration, **kwargs): path = % (self.manager.path, self.get_id()) data = {: duration} return self.manager.gitlab.http_post(path, post_data=data, **kwargs)
Set an estimated time of work for the object. Args: duration (str): Duration in human format (e.g. 3h30) **kwargs: Extra options to send to the server (e.g. sudo) Raises: GitlabAuthenticationError: If authentication is not correct GitlabTimeTrackingError...
def create_model(self, model_server_workers=None, role=None, vpc_config_override=VPC_CONFIG_DEFAULT): role = role or self.role return ChainerModel(self.model_data, role, self.entry_point, source_dir=self._model_source_dir(), enable_cloudwatch_metrics=self.enable_clou...
Create a SageMaker ``ChainerModel`` object that can be deployed to an ``Endpoint``. Args: role (str): The ``ExecutionRoleArn`` IAM Role ARN for the ``Model``, which is also used during transform jobs. If not specified, the role from the Estimator will be used. model_serv...
async def raw(self, command, *args, _conn=None, **kwargs): start = time.monotonic() ret = await self._raw( command, *args, encoding=self.serializer.encoding, _conn=_conn, **kwargs ) logger.debug("%s (%.4f)s", command, time.monotonic() - start) return ret
Send the raw command to the underlying client. Note that by using this CMD you will lose compatibility with other backends. Due to limitations with aiomcache client, args have to be provided as bytes. For rest of backends, str. :param command: str with the command. :param timeo...
def get_conn(self, aws_access_key=None, aws_secret_key=None): return boto.connect_dynamodb( aws_access_key_id=aws_access_key, aws_secret_access_key=aws_secret_key, )
Hook point for overriding how the CounterPool gets its connection to AWS.
def build_skeleton(nodes, independencies): nodes = list(nodes) if isinstance(independencies, Independencies): def is_independent(X, Y, Zs): return IndependenceAssertion(X, Y, Zs) in independencies elif callable(independencies): is_independent = ...
Estimates a graph skeleton (UndirectedGraph) from a set of independencies using (the first part of) the PC algorithm. The independencies can either be provided as an instance of the `Independencies`-class or by passing a decision function that decides any conditional independency assertion. ...
def _import_module(self, module_path): LOGGER.debug(, module_path) try: return __import__(module_path) except ImportError as error: LOGGER.critical(, module_path, error) return None
Dynamically import a module returning a handle to it. :param str module_path: The module path :rtype: module
def pl_resolution(KB, alpha): "Propositional-logic resolution: say if alpha follows from KB. [Fig. 7.12]" clauses = KB.clauses + conjuncts(to_cnf(~alpha)) new = set() while True: n = len(clauses) pairs = [(clauses[i], clauses[j]) for i in range(n) for j in range(i+1, n)]...
Propositional-logic resolution: say if alpha follows from KB. [Fig. 7.12]
def _get_model_fitting(self, con_est_id): for (mpe_id, pe_ids), contrasts in self.contrasts.items(): for contrast in contrasts: if contrast.estimation.id == con_est_id: model_fitting_id = mpe_id pe_map_ids = pe_ids ...
Retrieve the model fitting that corresponds to the contrast with identifier 'con_est_id' from the list of model fitting objects stored in self.model_fittings
def new(self, sources_by_grp): source_models = [] for sm in self.source_models: src_groups = [] for src_group in sm.src_groups: sg = copy.copy(src_group) sg.sources = sorted(sources_by_grp.get(sg.id, []), ...
Generate a new CompositeSourceModel from the given dictionary. :param sources_by_grp: a dictionary grp_id -> sources :returns: a new CompositeSourceModel instance
def parse_args(): parser = argparse.ArgumentParser( description=) parser.add_argument(, required=True, help=) parser.add_argument(, required=True, help=) parser.add_argument(, required=True, help=) parser.add_argument(, required=True, ...
Parses command line arguments.
def iter_setup_packages(srcdir, packages): for packagename in packages: package_parts = packagename.split() package_path = os.path.join(srcdir, *package_parts) setup_package = os.path.relpath( os.path.join(package_path, )) if os.path.isfile(setup_package): ...
A generator that finds and imports all of the ``setup_package.py`` modules in the source packages. Returns ------- modgen : generator A generator that yields (modname, mod), where `mod` is the module and `modname` is the module name for the ``setup_package.py`` modules.
def gpg_download_key( key_id, key_server, config_dir=None ): config_dir = get_config_dir( config_dir ) tmpdir = make_gpg_tmphome( prefix="download", config_dir=config_dir ) gpg = gnupg.GPG( homedir=tmpdir ) recvdat = gpg.recv_keys( key_server, key_id ) fingerprint = None try: asse...
Download a GPG key from a key server. Do not import it into any keyrings. Return the ASCII-armored key
def add_image(self, figure, dpi=72): name = os.path.join(self._dir, % self.fig_counter) self.fig_counter += 1 figure.savefig(name, dpi=dpi) plt.close(figure) self.body += % name
Adds an image to the last chapter/section. The image will be stored in the `{self.title}_files` directory. :param matplotlib.figure figure: A matplotlib figure to be saved into the report
def dump(self, out=sys.stdout, row_fn=repr, limit=-1, indent=0): NL = if indent: out.write(" "*indent + self.pivot_key_str()) else: out.write("Pivot: %s" % .join(self._pivot_attrs)) out.write(NL) if self.has_subtables(): do_all(sub.d...
Dump out the contents of this table in a nested listing. @param out: output stream to write to @param row_fn: function to call to display individual rows @param limit: number of records to show at deepest level of pivot (-1=show all) @param indent: current nesting level
def remove(name=None, pkgs=None, recursive=True, **kwargs): ***["foo", "bar"] try: pkg_params, pkg_type = __salt__[]( name, pkgs ) except MinionError as exc: raise CommandExecutionError(exc) if not pkg_params: return {} old = list_pkgs() targe...
name The name of the package to be deleted. recursive Also remove dependent packages (not required elsewhere). Default mode: enabled. Multiple Package Options: pkgs A list of packages to delete. Must be passed as a python list. The ``name`` parameter will be ignore...
def split_fusion_transcript(annotation_path, transcripts): annotation = collections.defaultdict(dict) forward = reverse = trans = string.maketrans(forward, reverse) five_pr_splits = collections.defaultdict(dict) three_pr_splits = collections.defaultdict(dict) regex = re.compil...
Finds the breakpoint in the fusion transcript and splits the 5' donor from the 3' acceptor :param str annotation_path: Path to transcript annotation file :param dict transcripts: Dictionary of fusion transcripts :return: 5' donor sequences and 3' acceptor sequences :rtype: tuple
def register(lifter, arch_name): if issubclass(lifter, Lifter): l.debug("Registering lifter %s for architecture %s.", lifter.__name__, arch_name) lifters[arch_name].append(lifter) if issubclass(lifter, Postprocessor): l.debug("Registering postprocessor %s for architecture %s.", lift...
Registers a Lifter or Postprocessor to be used by pyvex. Lifters are given priority based on the order in which they are registered. Postprocessors will be run in registration order. :param lifter: The Lifter or Postprocessor to register :vartype lifter: :class:`Lifter` or :class:`Postprocess...
def logger_init(): log = logging.getLogger("pyinotify") console_handler = logging.StreamHandler() console_handler.setFormatter( logging.Formatter("[%(asctime)s %(name)s %(levelname)s] %(message)s")) log.addHandler(console_handler) log.setLevel(20) return log
Initialize logger instance.
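The same console-logger setup, written against the stdlib `logging` module with the numeric level 20 spelled as `logging.INFO` (the parameterized name is an illustrative addition):

```python
import logging

def logger_init(name="pyinotify"):
    """Create a logger with a formatted console handler at INFO level (20)."""
    log = logging.getLogger(name)
    console_handler = logging.StreamHandler()
    console_handler.setFormatter(
        logging.Formatter("[%(asctime)s %(name)s %(levelname)s] %(message)s"))
    log.addHandler(console_handler)
    log.setLevel(logging.INFO)  # same as the literal 20 in the original
    return log
```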
def get_file_stream_api(self): if not self._file_stream_api: if self._current_run_id is None: raise UsageError( ) self._file_stream_api = FileStreamApi(self, self._current_run_id) return self._file_stream_api
This creates a new file pusher thread. Call start to initiate the thread that talks to W&B
def push_design_documents(self, design_path): for db_name in os.listdir(design_path): if db_name.startswith("__") or db_name.startswith("."): continue db_path = os.path.join(design_path, db_name) doc = self._folder_to_dict(db_path) doc_id ...
Push the design documents stored in `design_path` to the server
def edges_to_dict_of_dataframes(grid, edges): omega = 2 * pi * 50 srid = int(cfg_ding0.get(, )) lines = {: [], : [], : [], : [], : [], : [], : [], : [], : [], : []} for edge in edges: line_name = .join([, str(grid.id_db), ...
Export edges to DataFrame Parameters ---------- grid: ding0.Network edges: list Edges of Ding0.Network graph Returns ------- edges_dict: dict
def _secondary_max(self): return ( self.secondary_range[1] if (self.secondary_range and self.secondary_range[1] is not None) else (max(self._secondary_values) if self._secondary_values else None) )
Getter for the maximum secondary series value
def _connect(self): while self.protocol: _LOGGER.info(, self.port) try: ser = serial.serial_for_url( self.port, self.baud, timeout=self.timeout) except serial.SerialException: _LOGGER.error(, self.port) ...
Connect to the serial port. This should be run in a new thread.
def add(self, *args, **kwargs): for cookie in args: self.all_cookies.append(cookie) if cookie.name in self: continue self[cookie.name] = cookie for key, value in kwargs.items(): cookie = self.cookie_class(key, val...
Add Cookie objects by their names, or create new ones under specified names. Any unnamed arguments are interpreted as existing cookies, and are added under the value in their .name attribute. With keyword arguments, the key is interpreted as the cookie name and the value as the ...
def get_model_name(model): if not is_estimator(model): raise YellowbrickTypeError( "Cannot detect the model name for non estimator: ".format( type(model) ) ) else: if isinstance(model, Pipeline): return get_model_name(model.steps[...
Detects the model name for a Scikit-Learn model or pipeline. Parameters ---------- model: class or instance The object to determine the name for. If the model is an estimator it returns the class name; if it is a Pipeline it returns the class name of the final transformer or estimat...
def predict(self, control=None, control_matrix=None, process_matrix=None, process_covariance=None): if process_matrix is None: process_matrix = self._defaults[] if process_covariance is None: process_covariance = self._defaults[] if con...
Predict the next *a priori* state mean and covariance given the last posterior. As a special case the first call to this method will initialise the posterior and prior estimates from the *initial_state_estimate* and *initial_covariance* arguments passed when this object was created. In t...
def hosted_number_orders(self): if self._hosted_number_orders is None: self._hosted_number_orders = HostedNumberOrderList(self) return self._hosted_number_orders
:rtype: twilio.rest.preview.hosted_numbers.hosted_number_order.HostedNumberOrderList
def condition_from_code(condcode): if condcode in __BRCONDITIONS: cond_data = __BRCONDITIONS[condcode] return {CONDCODE: condcode, CONDITION: cond_data[0], DETAILED: cond_data[1], EXACT: cond_data[2], EXACTNL: cond_data[3], ...
Get the condition name from the condition code.
def is_polynomial(self): return all(isinstance(k, INT_TYPES) and k >= 0 for k in self._data)
Tells whether it is a linear combination of natural powers of ``x``.
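The exponent check behind this predicate can be sketched over a plain iterable of powers (using `int` where the original accepts its own `INT_TYPES`):

```python
def is_polynomial(powers):
    """True if every power of x is a non-negative integer, i.e. the
    expression is a linear combination of natural powers of x."""
    return all(isinstance(k, int) and k >= 0 for k in powers)
```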
def main(): data = {:{: , : [, ]}} lazyxml.dump(data, ) with open(, ) as fp: lazyxml.dump(data, fp) from cStringIO import StringIO buffer = StringIO() lazyxml.dump(data, buffer) print buffer.getvalue() buffer.close() print lazyxml.dumps(data) ...
<root a1="1" a2="2"> <test1 a="1" b="2" c="3"> <normal index="5" required="false"> <bar><![CDATA[1]]></bar> <bar><![CDATA[2]]></bar> <foo><![CDATA[<foo-1>]]></foo> </normal> <repeat1 index="1" required="false"> <...
def cells_rt_meta(workbook, sheet, row, col): logger_excel.info("enter cells_rt_meta") col_loop = 0 cell_data = [] temp_sheet = workbook.sheet_by_name(sheet) while col_loop < temp_sheet.ncols: col += 1 col_loop += 1 try: if temp_sheet.cell_value(row, col) != ...
Traverse all cells in a row. If you find new data in a cell, add it to the list. :param obj workbook: :param str sheet: :param int row: :param int col: :return list: Cell data for a specific row
def get_list(self, terms, limit=0, sort=False, ranks=None): ranks = ranks or self.ranks got_cards = [] try: indices = self.find_list(terms, limit=limit) got_cards = [self.cards[i] for i in indices if self.cards[i] not in got_cards] se...
Get the specified cards from the stack. :arg terms: The search terms. Can be a card full name, value, suit, abbreviation, or stack index. :arg int limit: The number of items to retrieve for each term. :arg bool sort: Whether or not to sort the resu...
def _get_pattern(self, pattern_id): for key in (, , ): if key in self.tagged_blocks: data = self.tagged_blocks.get_data(key) for pattern in data: if pattern.pattern_id == pattern_id: return pattern return No...
Get pattern item by id.
def uuid(cls): if hasattr(cls, ) and cls.__uuid_primary_key__: return synonym() else: return immutable(Column(UUIDType(binary=False), default=uuid_.uuid4, unique=True, nullable=False))
UUID column, or synonym to existing :attr:`id` column if that is a UUID
def continuousGenerator(self, request): compoundId = None if request.continuous_set_id != "": compoundId = datamodel.ContinuousSetCompoundId.parse( request.continuous_set_id) if compoundId is None: raise exceptions.ContinuousSetNotSpecifiedExcepti...
Returns a generator over the (continuous, nextPageToken) pairs defined by the (JSON string) request.
def render_addPersonForm(self, ctx, data): addPersonForm = liveform.LiveForm( self.addPerson, self._baseParameters, description=) addPersonForm.compact() addPersonForm.jsClass = u addPersonForm.setFragmentParent(self) return addPersonForm
Create and return a L{liveform.LiveForm} for creating a new L{Person}.
def add_rpt(self, sequence, mod, pt): modstr = self.value(mod) if modstr == : self._stream.restore_context() self.diagnostic.notify( error.Severity.ERROR, "Cannot repeat a lookahead rule", error.LocationInfo.from_stream(self._stream, is_...
Add a repeater to the previous sequence
def _ss(self, class_def): yc = self.y[class_def] css = yc - yc.mean() css *= css return sum(css)
Calculates the sum of squares for a class.
def backwards(self, orm): "Write your backwards methods here." from django.contrib.auth.models import Group projects = orm[].objects.all() names = [PROJECT_GROUP_TEMPLATE.format(p.name) for p in projects] Group.objects.filter(name__in=names).delete()
Write your backwards methods here.
def git_remote(git_repo): github_token = os.getenv(GITHUB_TOKEN_KEY) if github_token: return .format( github_token, git_repo) return .format(git_repo)
Return the URL for remote git repository. Depending on the system setup it returns ssh or https remote.
def toListInt(value): if TypeConverters._can_convert_to_list(value): value = TypeConverters.toList(value) if all(map(lambda v: TypeConverters._is_integer(v), value)): return [int(v) for v in value] raise TypeError("Could not convert %s to list of ints" % ...
Convert a value to list of ints, if possible.
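A stdlib-only sketch of the same conversion, without Spark's `TypeConverters` helpers (the function name and the exact acceptance rules are assumptions; the original delegates to library predicates):

```python
def to_list_int(value):
    """Convert a scalar or sequence to a list of ints, if every element
    is integer-valued; raise TypeError otherwise."""
    if not isinstance(value, (list, tuple)):
        value = [value]
    result = []
    for v in value:
        # reject bools and non-numeric values, and floats with a fraction
        if isinstance(v, bool) or not isinstance(v, (int, float)):
            raise TypeError("Could not convert %r to list of ints" % (value,))
        if float(v) != int(v):
            raise TypeError("Could not convert %r to list of ints" % (value,))
        result.append(int(v))
    return result
```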
def _set_typeahead(cls, el, value): PlaceholderHandler.reset_placeholder_dropdown(el) if not value and not el.value: DropdownHandler.set_dropdown_glyph(el.id, "glyphicon-alert") return if source: dropdown_el.h...
Convert given `el` to typeahead input and set it to `value`. This method also sets the dropdown icons and descriptors. Args: el (obj): Element reference to the input you want to convert to typeahead. value (list): List of dicts with two keys: ``source`` and ``va...
def velocity_dispersion(self, kwargs_lens, kwargs_lens_light, lens_light_model_bool_list=None, aniso_param=1, r_eff=None, R_slit=0.81, dR_slit=0.1, psf_fwhm=0.7, num_evaluate=1000): gamma = kwargs_lens[0][] if in kwargs_lens_light[0]: center_x, center_y ...
computes the LOS velocity dispersion of the lens within a slit of size R_slit x dR_slit and seeing psf_fwhm. The assumptions are a Hernquist light profile and the spherical power-law lens model at the first position. Further information can be found in the AnalyticKinematics() class. :param kw...
def reset(self, seed): logger.debug(f) self.seed_generator.reset(seed) for c in self.clones: c.reset(seed)
Reset this generator's seed generator and any clones.
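The reset-cascades-to-clones contract can be illustrated with a toy generator; the class name, `register_clone`, and `next_value` are illustrative, not the original API:

```python
import random

class SeededGenerator:
    """Toy generator mirroring reset(): reseeding the parent reseeds its
    own seed generator and, recursively, every registered clone."""

    def __init__(self):
        self.seed_generator = random.Random()
        self.clones = []

    def register_clone(self):
        clone = SeededGenerator()
        self.clones.append(clone)
        return clone

    def reset(self, seed):
        self.seed_generator.seed(seed)
        for c in self.clones:
            c.reset(seed)

    def next_value(self):
        return self.seed_generator.random()
```

After `reset(seed)`, the parent and all clones produce identical deterministic streams, which is the point of cascading the seed.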
def _query_compressed(options, collection_name, num_to_skip, num_to_return, query, field_selector, opts, check_keys=False, ctx=None): op_query, max_bson_size = _query( options, collection_name, num_to_skip, num_to_return, query...
Internal compressed query message helper.
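The compression step behind such a helper can be sketched generically: compress the payload and keep the original opcode and uncompressed length so the receiver can decode. The header layout below is illustrative only, not the exact MongoDB `OP_COMPRESSED` wire format:

```python
import struct
import zlib

def compress_message(op_code, payload, level=-1):
    """Wrap a payload for wire compression (illustrative header layout):
    little-endian (original opcode, uncompressed length) + zlib body."""
    compressed = zlib.compress(payload, level)
    header = struct.pack("<ii", op_code, len(payload))
    return header + compressed

def decompress_message(data):
    """Inverse of compress_message; checks the recorded length."""
    op_code, uncompressed_len = struct.unpack_from("<ii", data)
    payload = zlib.decompress(data[8:])
    assert len(payload) == uncompressed_len
    return op_code, payload
```

Recording the uncompressed length up front lets the receiver pre-allocate and sanity-check the inflated body.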
def _compute_dependencies(self): from _markerlib import compile as compile_marker dm = self.__dep_map = {None: []} reqs = [] for req in self._parsed_pkg_info.get_all() or []: distvers, mark = self._preparse_requirement(req) parsed = parse_requir...
Recompute this distribution's dependencies.
def write_lammpsdata(structure, filename, atom_style=): if atom_style not in [, , , ]: raise ValueError(.format(atom_style)) xyz = np.array([[atom.xx,atom.xy,atom.xz] for atom in structure.atoms]) forcefield = True if structure[0].type == : forcefield = False box = Box(...
Output a LAMMPS data file. Outputs a LAMMPS data file in the 'full' atom style format. Assumes use of 'real' units. See http://lammps.sandia.gov/doc/atom_style.html for more information on atom styles. Parameters ---------- structure : parmed.Structure ParmEd structure object f...
def get_profile_histogram(x, y, n_bins=100): if len(x) != len(y): raise ValueError() y = y.astype(np.float32) n, bin_edges = np.histogram(x, bins=n_bins) sy = np.histogram(x, bins=n_bins, weights=y)[0] sy2 = np.histogram(x, bins=n_bins, weights=y * y)[0] bin_centers = (bin_edg...
Takes 2D point data (x,y) and creates a profile histogram similar to the TProfile in ROOT. It calculates the y mean for every bin at the bin center and gives the y mean error as error bars. Parameters ---------- x : array like data x positions y : array like data y positions n_b...
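The binned-mean-with-error computation described above can be written compactly with `numpy.histogram` and weights, completing the truncated snippet under the assumption that the per-bin error is the standard error of the mean (as in ROOT's TProfile):

```python
import numpy as np

def profile_histogram(x, y, n_bins=10):
    """Per-bin y mean at the bin center, with standard error of the mean."""
    if len(x) != len(y):
        raise ValueError("x and y must have the same length")
    y = np.asarray(y, dtype=np.float64)
    n, bin_edges = np.histogram(x, bins=n_bins)          # counts per bin
    sy = np.histogram(x, bins=n_bins, weights=y)[0]      # sum of y per bin
    sy2 = np.histogram(x, bins=n_bins, weights=y * y)[0] # sum of y^2 per bin
    bin_centers = (bin_edges[1:] + bin_edges[:-1]) / 2
    with np.errstate(divide="ignore", invalid="ignore"):
        mean = sy / n
        std = np.sqrt(sy2 / n - mean ** 2)   # per-bin standard deviation
        mean_err = std / np.sqrt(n)          # standard error of the mean
    return bin_centers, mean, mean_err
```

Empty bins yield NaN rather than raising, which is usually what you want when plotting with error bars.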
def unpack(self, buff, offset=0): try: unpacked_data = struct.unpack(, buff[offset:offset+4]) self._value = .join([str(x) for x in unpacked_data]) except struct.error as exception: raise exceptions.UnpackException( % (exception, ...
Unpack a binary message into this object's attributes. Unpack the binary value *buff* and update this object attributes based on the results. Args: buff (bytes): Binary data package to be unpacked. offset (int): Where to begin unpacking. Raises: Exc...
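The snippet's format string and join separator were lost in extraction. Given that it reads exactly four bytes and joins the unpacked values, a natural reconstruction (an assumption, not the verified original) is an IPv4 dotted-quad decode:

```python
import struct

def unpack_ipv4(buff, offset=0):
    """Unpack four bytes at `offset` into a dotted-quad string.

    '!4B' (four network-order unsigned bytes) joined with '.' is an
    assumed reconstruction of the stripped literals.
    """
    try:
        unpacked = struct.unpack('!4B', buff[offset:offset + 4])
    except struct.error as exc:
        raise ValueError("Could not unpack IPv4 address: %s" % exc)
    return '.'.join(str(b) for b in unpacked)
```

A too-short buffer surfaces as a `ValueError` wrapping the underlying `struct.error`, matching the unpack-and-re-raise pattern in the snippet.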
def _deinterlace(self, raw): vpr = self.width * self.planes vpi = vpr * self.height if self.bitdepth > 8: a = array(, [0] * vpi) else: a = bytearray([0] * vpi) source_offset = 0 for lines in ...
Read raw pixel data, undo filters, deinterlace, and flatten. Return a single array of values.
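One concrete piece of the snippet above is the flat-buffer allocation: one slot per sample value, 16-bit array for bit depths above 8, bytearray otherwise (the array typecode `'H'` is an assumption, since the original literal was stripped):

```python
from array import array

def flat_pixel_buffer(width, height, planes, bitdepth):
    """Allocate the flat output buffer used when deinterlacing:
    width * planes values per row, height rows."""
    values_per_row = width * planes
    total = values_per_row * height
    if bitdepth > 8:
        return array('H', [0] * total)  # assumed 16-bit unsigned typecode
    return bytearray(total)
```

Deinterlacing then writes each decoded sub-image sample directly at its final index in this buffer, so no per-row reassembly pass is needed afterwards.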
def _set_collector_profile(self, v, load=False): if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=YANGListType("collector_profiletype collector_profilename",collector_profile.collector_profile, yang_name="collector-profile", rest_name="profile", parent=self, is_container=, ...
Setter method for collector_profile, mapped from YANG variable /telemetry/collector/collector_profile (list) If this variable is read-only (config: false) in the source YANG file, then _set_collector_profile is considered as a private method. Backends looking to populate this variable should do so via c...
def autocorrelated_relaxed_clock(self, root_rate, autocorrel, distribution=): optioncheck(distribution, [, ])  # stripped literals; the docstring implies 'lognormal' and 'exponential' if autocorrel == 0: for node in self._tree.preorder_node_iter(): node.rate = root_rate return for ...
Attaches rates to each node according to the autocorrelated lognormal model from Kishino et al. (2001), or an autocorrelated exponential model.
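One step of the autocorrelated lognormal clock can be sketched with the stdlib alone. This is a simplified reading of Kishino et al. (2001): each child's rate is lognormally distributed with mean equal to its parent's rate; branch lengths are ignored and the function name is illustrative:

```python
import math
import random

def assign_autocorrelated_rates(parent_rate, children, autocorrel, rng):
    """Draw one rate per child under an autocorrelated lognormal model.

    `autocorrel` stands in for nu * branch_length; with autocorrel == 0
    each child inherits the parent rate exactly.
    """
    rates = []
    for _ in children:
        var = autocorrel
        # E[lognormal(mu, s^2)] = exp(mu + s^2/2); shift mu so the
        # expected child rate equals the parent rate.
        mu = math.log(parent_rate) - var / 2.0
        rates.append(rng.lognormvariate(mu, math.sqrt(var)))
    return rates
```

Applied recursively down a tree (each node's rate becoming `parent_rate` for its children), rates drift multiplicatively while remaining unbiased in expectation.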
def hil_gps_send(self, time_usec, fix_type, lat, lon, alt, eph, epv, vel, vn, ve, vd, cog, satellites_visible, force_mavlink1=False): return self.send(self.hil_gps_encode(time_usec, fix_type, lat, lon, alt, eph, epv, vel, vn, ve, vd, cog, satellites_visible), force_mavlink1=force_mavlin...
The global position, as returned by the Global Positioning System (GPS). This is NOT the global position estimate of the system, but rather a RAW sensor value. See message GLOBAL_POSITION for the global position estimate. Coordinate frame i...
def tsne(adata, **kwargs) -> Union[Axes, List[Axes], None]: return plot_scatter(adata, 'tsne', **kwargs)
\ Scatter plot in tSNE basis. Parameters ---------- {adata_color_etc} {edges_arrows} {scatter_bulk} {show_save_ax} Returns ------- If `show==False` a :class:`~matplotlib.axes.Axes` or a list of it.