Dataset columns: code (string, lengths 70–11.9k) · docstring (string, lengths 4–7.08k) · text (string, lengths 128–15k)
def _cim_qualifier(key, value):
    if key is None:
        raise ValueError("Qualifier name must not be None")
    if isinstance(value, CIMQualifier):
        if value.name.lower() != key.lower():
            raise ValueError(
                _format("CIMQualifier.name must be dictionary key {0!A}, but "
                        "is {1!A}", key, value.name))
        qual = value
    else:
        qual = CIMQualifier(key, value)
    return qual
Return a CIMQualifier object, from dict item input (key+value), after performing some checks. If the input value is a CIMQualifier object, it is returned. Otherwise, a new CIMQualifier object is created from the input value, and returned.
def flush(self):
    if self.buffer is None:
        return
    data = self.buffer
    self.buffer = []
    for x in self.dests:
        yield from x.enqueue_task(self, *data)
Flush the buffer of buffered tiers to our destination tiers.
def PolygonPatch(polygon, **kwargs):
    # NOTE: the quoted string literals below were lost in extraction and have
    # been restored from context (Shapely geom_type / GeoJSON conventions).
    def coding(ob):
        # The codes are all LINETO commands, except for a MOVETO at the
        # beginning of each subpath
        n = len(getattr(ob, 'coords', None) or ob)
        vals = ones(n, dtype=Path.code_type) * Path.LINETO
        vals[0] = Path.MOVETO
        return vals

    if hasattr(polygon, 'geom_type'):  # Shapely object
        ptype = polygon.geom_type
        if ptype == 'Polygon':
            polygon = [Polygon(polygon)]
        elif ptype == 'MultiPolygon':
            polygon = [Polygon(p) for p in polygon]
        else:
            raise ValueError(
                "A polygon or multi-polygon representation is required")
    else:  # GeoJSON-like object
        polygon = getattr(polygon, '__geo_interface__', polygon)
        ptype = polygon["type"]
        if ptype == 'Polygon':
            polygon = [Polygon(polygon)]
        elif ptype == 'MultiPolygon':
            polygon = [Polygon(p) for p in polygon['coordinates']]
        else:
            raise ValueError(
                "A polygon or multi-polygon representation is required")

    vertices = concatenate([
        concatenate([asarray(t.exterior)[:, :2]] +
                    [asarray(r)[:, :2] for r in t.interiors])
        for t in polygon])
    codes = concatenate([
        concatenate([coding(t.exterior)] +
                    [coding(r) for r in t.interiors])
        for t in polygon])

    return PathPatch(Path(vertices, codes), **kwargs)
Constructs a matplotlib patch from a geometric object. The `polygon` may be a Shapely or GeoJSON-like object, possibly with holes. The `kwargs` are those supported by the matplotlib.patches.Polygon class constructor. Returns an instance of matplotlib.patches.PathPatch. Example (using a Shapely Point and a matplotlib axes): >>> b = Point(0, 0).buffer(1.0) >>> patch = PolygonPatch(b, fc='blue', ec='blue', alpha=0.5) >>> axis.add_patch(patch)
def clientUpdated(self, *args, **kwargs):
    # NOTE: the dict keys and values were lost in extraction; restored from
    # the taskcluster-client exchange-reference pattern (best effort).
    ref = {
        'exchange': 'client-updated',
        'name': 'clientUpdated',
        'routingKey': [
            {
                'multipleWords': True,
                'name': 'reserved',
            },
        ],
        'schema': 'v1/client-message.json#',
    }
    return self._makeTopicExchange(ref, *args, **kwargs)
Client Updated Messages. Message that a client has been updated. This exchange outputs: ``v1/client-message.json#``. This exchange takes the following keys: * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as automatically done by our tooling if not specified.
def generate_amr_lines(f1, f2):
    while True:
        cur_amr1 = amr.AMR.get_amr_line(f1)
        cur_amr2 = amr.AMR.get_amr_line(f2)
        if not cur_amr1 and not cur_amr2:
            pass
        elif not cur_amr1:
            print("Error: File 1 has less AMRs than file 2", file=ERROR_LOG)
            print("Ignoring remaining AMRs", file=ERROR_LOG)
        elif not cur_amr2:
            print("Error: File 2 has less AMRs than file 1", file=ERROR_LOG)
            print("Ignoring remaining AMRs", file=ERROR_LOG)
        else:
            yield cur_amr1, cur_amr2
            continue
        break
Read one AMR line at a time from each file handle :param f1: file handle (or any iterable of strings) to read AMR 1 lines from :param f2: file handle (or any iterable of strings) to read AMR 2 lines from :return: generator of cur_amr1, cur_amr2 pairs: one-line AMR strings
def bed(args):
    from collections import defaultdict
    from jcvi.compara.synteny import AnchorFile, check_beds
    from jcvi.formats.bed import Bed
    from jcvi.formats.base import get_number

    p = OptionParser(bed.__doc__)
    p.add_option("--switch", default=False, action="store_true",
                 help="Switch reference and aligned map elements")
    p.add_option("--scale", type="float",
                 help="Scale the aligned map distance by factor")
    p.set_beds()
    p.set_outfile()
    opts, args = p.parse_args(args)

    if len(args) != 1:
        sys.exit(not p.print_help())

    anchorsfile, = args
    switch = opts.switch
    scale = opts.scale
    ac = AnchorFile(anchorsfile)
    pairs = defaultdict(list)
    for a, b, block_id in ac.iter_pairs():
        pairs[a].append(b)

    qbed, sbed, qorder, sorder, is_self = check_beds(anchorsfile, p, opts)
    bd = Bed()
    for q in qbed:
        qseqid, qstart, qend, qaccn = q.seqid, q.start, q.end, q.accn
        if qaccn not in pairs:
            continue
        for s in pairs[qaccn]:
            si, s = sorder[s]
            sseqid, sstart, send, saccn = s.seqid, s.start, s.end, s.accn
            if switch:
                qseqid, sseqid = sseqid, qseqid
                qstart, sstart = sstart, qstart
                qend, send = send, qend
                qaccn, saccn = saccn, qaccn
            if scale:
                sstart /= scale
            try:
                newsseqid = get_number(sseqid)
            except ValueError:
                raise ValueError("`{0}` is on `{1}` with no number to extract".
                                 format(saccn, sseqid))
            bedline = "\t".join(str(x) for x in
                                (qseqid, qstart - 1, qend,
                                 "{0}:{1}".format(newsseqid, sstart)))
            bd.add(bedline)

    bd.print_to_file(filename=opts.outfile, sorted=True)
%prog bed anchorsfile Convert ANCHORS file to BED format.
def go_to_preset(self, action=None, channel=0, preset_point_number=1):
    # NOTE: the CGI query string was lost in extraction; restored from the
    # Amcrest HTTP API conventions (best effort).
    ret = self.command(
        'ptz.cgi?action={0}&channel={1}&code=GotoPreset&arg1=0'
        '&arg2={2}&arg3=0'.format(action, channel, preset_point_number)
    )
    return ret.content.decode()
Params: action - start or stop channel - channel number preset_point_number - preset point number
def run(self):
    new_mins = list(salt.utils.minions.CkMinions(self.opts).connected_ids())
    cc = cache_cli(self.opts)
    cc.get_cached()
    cc.put_cache([new_mins])
    # NOTE: the log message was elided in the source; restored best effort
    log.debug('ConCache CacheWorker update finished')
Gather currently connected minions and update the cache
def kwarg(string, separator='='):
    # NOTE: the separator default and the message placeholders were lost in
    # extraction; reconstructed to match the %-format argument tuples.
    if separator not in string:
        raise ValueError(
            "Separator '%s' not in value '%s'" % (separator, string))
    if string.strip().startswith(separator):
        raise ValueError(
            "Value '%s' starts with separator '%s'" % (string, separator))
    if string.strip().endswith(separator):
        raise ValueError(
            "Value '%s' ends with separator '%s'" % (string, separator))
    if string.count(separator) != 1:
        raise ValueError(
            "Value '%s' should only have one separator '%s'" % (string, separator))
    key, value = string.split(separator)
    return {key: value}
Return a dict from a delimited string.
def get_correctness_for_response(self, response):
    for answer in self.my_osid_object.get_answers():
        if self._is_match(response, answer):
            try:
                return answer.get_score()
            except AttributeError:
                return 100
    for answer in self.my_osid_object.get_wrong_answers():
        if self._is_match(response, answer):
            try:
                return answer.get_score()
            except AttributeError:
                return 0
    return 0
get measure of correctness available for a particular response
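The correctness lookup above is a "score with graded fallback" pattern: a matching answer may or may not define an explicit score. A minimal standalone sketch of the same logic (the `Answer` class and `correctness` function here are hypothetical stand-ins for the OSID objects):

class Answer:
    def __init__(self, match, score=None):
        self.match = match
        if score is not None:
            # only some answers define a score, mirroring the AttributeError fallback
            self.get_score = lambda: score

def correctness(response, right, wrong):
    """Return an answer's score, defaulting to 100 (right) or 0 (wrong)."""
    for answer in right:
        if answer.match == response:
            try:
                return answer.get_score()
            except AttributeError:
                return 100
    for answer in wrong:
        if answer.match == response:
            try:
                return answer.get_score()
            except AttributeError:
                return 0
    return 0

assert correctness('a', [Answer('a')], []) == 100               # right, no explicit score
assert correctness('b', [Answer('a')], [Answer('b', 25)]) == 25  # wrong, partial credit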
def _fillVolumesAndPaths(self, paths):
    # NOTE: quoted literals ('/', 'info', 'from', 'to', 'fullpath') were lost
    # in extraction and are best-effort reconstructions from context.
    self.diffs = collections.defaultdict(lambda: [])
    self.extraKeys = {}
    for key in self.bucket.list():
        if key.name.startswith(theTrashPrefix):
            continue
        keyInfo = self._parseKeyName(key.name)
        if keyInfo is None:
            if key.name[-1:] != '/':
                logger.warning("Ignoring '%s' in S3", key.name)
            continue
        if keyInfo['type'] == 'info':
            stream = io.BytesIO()
            key.get_contents_to_file(stream)
            Store.Volume.readInfo(stream)
            continue
        if keyInfo['from'] == 'None':
            keyInfo['from'] = None
        path = self._relativePath("/" + keyInfo['fullpath'])
        if path is None:
            continue
        diff = Store.Diff(self, keyInfo['to'], keyInfo['from'], key.size)
        logger.debug("Adding %s in %s", diff, path)
        self.diffs[diff.fromVol].append(diff)
        paths[diff.toVol].append(path)
        self.extraKeys[diff] = path
Fill in paths. :arg paths: = { Store.Volume: ["linux path",]}
def on_bar_min1(self, tiny_quote):
    data = tiny_quote
    symbol = data.symbol
    price = data.open
    now = datetime.datetime.now()
    work_time = now.replace(hour=15, minute=55, second=0)
    if now >= work_time:
        ma_20 = self.get_sma(20, symbol)
        ma_60 = self.get_sma(60, symbol)
        if ma_20 >= ma_60 and self.flag == 0:
            self.do_trade(symbol, price, "buy")
            self.flag = 1
        elif ma_20 < ma_60 and self.flag == 1:
            self.do_trade(symbol, price, "sell")
            self.flag = 0
Callback triggered once per minute.
def format_value(self, value, padding):
    if padding:
        return "{:0{pad}d}".format(value, pad=padding)
    else:
        return str(value)
Get padding adjusting for negative values.
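The docstring mentions adjusting padding for negative values; Python's `"{:0{pad}d}"` spec already counts the sign toward the total width, so `-5` padded to width 3 gives `-05`. A standalone sketch (the free function here is an assumption; the original is a method):

def format_value(value, padding=0):
    """Zero-pad an integer to `padding` characters; the sign counts toward the width."""
    if padding:
        return "{:0{pad}d}".format(value, pad=padding)
    return str(value)

assert format_value(7, 3) == "007"
assert format_value(-5, 3) == "-05"   # sign + zero-fill within width 3
assert format_value(42) == "42"       # no padding requested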
def snyder_opt(self, structure):
    nsites = structure.num_sites
    volume = structure.volume
    num_density = 1e30 * nsites / volume
    return 1.66914e-23 * \
        (self.long_v(structure) + 2.*self.trans_v(structure))/3. \
        / num_density ** (-2./3.) * (1 - nsites ** (-1./3.))
Calculates Snyder's optical sound velocity (in SI units) Args: structure: pymatgen structure object Returns: Snyder's optical sound velocity (in SI units)
def get_user_by_id(session, user_id, user_details=None):
    # NOTE: quoted literals were lost in extraction; restored best-effort
    # from the Freelancer API conventions.
    if user_details:
        user_details['compact'] = True
    response = make_get_request(
        session, 'users/{}'.format(user_id), params_data=user_details)
    json_data = response.json()
    if response.status_code == 200:
        return json_data['result']
    else:
        raise UserNotFoundException(
            message=json_data['message'],
            error_code=json_data['error_code'],
            request_id=json_data['request_id']
        )
Get details about specific user
def ram2disk(self):
    values = self.series
    self.deactivate_ram()
    self.diskflag = True
    self._save_int(values)
    self.update_fastaccess()
Move internal data from RAM to disk.
def set_all_tiers(key, value, django_cache_timeout=DEFAULT_TIMEOUT):
    DEFAULT_REQUEST_CACHE.set(key, value)
    django_cache.set(key, value, django_cache_timeout)
Caches the value for the provided key in both the request cache and the django cache. Args: key (string) value (object) django_cache_timeout (int): (Optional) Timeout used to determine if and for how long to cache in the django cache. A timeout of 0 will skip the django cache. If timeout is provided, use that timeout for the key; otherwise use the default cache timeout.
def stop_task(self, task_name):
    for greenlet in self.active[task_name]:
        try:
            gevent.kill(greenlet)
            self.active[task_name] = []
        except BaseException:
            pass
Stops a running or dead task
def minimum_address(self):
    minimum_address = self._segments.minimum_address
    if minimum_address is not None:
        minimum_address //= self.word_size_bytes
    return minimum_address
The minimum address of the data, or ``None`` if the file is empty.
def slug(hans, style=Style.NORMAL, heteronym=False, separator='-',
         errors='default', strict=True):
    return separator.join(
        chain(*pinyin(hans, style=style, heteronym=heteronym,
                      errors=errors, strict=strict)))
Generate a slug string. :param hans: Chinese characters (hanzi) :type hans: unicode or list :param style: the pinyin style; defaults to :py:attr:`~pypinyin.Style.NORMAL`. See :class:`~pypinyin.Style` for all available styles. :param heteronym: whether to enable heteronym (multiple-reading) support :param separator: the separator/connector placed between pinyin syllables :param errors: how to handle characters that have no pinyin; see :py:func:`~pypinyin.pinyin` for details :param strict: whether to strictly follow the Scheme of the Chinese Phonetic Alphabet (《汉语拼音方案》) when handling initials and finals; see :ref:`strict` :return: slug string. :raise AssertionError: raised when the input string is not unicode :: >>> import pypinyin >>> from pypinyin import Style >>> pypinyin.slug('中国人') 'zhong-guo-ren' >>> pypinyin.slug('中国人', separator=' ') 'zhong guo ren' >>> pypinyin.slug('中国人', style=Style.FIRST_LETTER) 'z-g-r' >>> pypinyin.slug('中国人', style=Style.CYRILLIC) 'чжун1-го2-жэнь2'
def whatIfOrder(self, contract: Contract, order: Order) -> OrderState:
    return self._run(self.whatIfOrderAsync(contract, order))
Retrieve commission and margin impact without actually placing the order. The given order will not be modified in any way. This method is blocking. Args: contract: Contract to test. order: Order to test.
def print_stats(genomes): header = [, , , , , , \ , , \ , , \ , , \ ] print(.join(header)) for genome, contigs in list(genomes.items()): for contig, samples in list(contigs.items()): for sample, stats in list(samples.items()): for locus, rates in list(stats[].items()): length = rates[][] position, strand = rates[][] position = % position out = [genome, contig, locus, position, strand, length, \ sample, % (rates[]), \ rates[], rates[], \ rates[], rates[], \ rates[][][0]] print(.join([str(i) for i in out]))
print substitution rate data to table genomes[genome][contig][sample] = \ {'bp_stats':{}, 'sub_rates'[locus] = {ref PnPs, consensus PnPs}}
def deleteUnused(self):
    if self.dryrun:
        self._client.listUnused()
    else:
        self._client.deleteUnused()
Delete any old snapshots in path, if not kept.
def retry(default=None):
    def decorator(func):
        @functools.wraps(func)
        def _wrapper(*args, **kw):
            for pos in range(1, MAX_RETRIES):
                try:
                    return func(*args, **kw)
                except (RuntimeError, requests.ConnectionError) as error:
                    LOGGER.warning("Failed: %s, %s", type(error), error)
                    for _ in range(pos):
                        _rest()
            LOGGER.warning("Request Aborted")
            return default
        return _wrapper
    return decorator
Retry functions after failures
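The decorator above retries on selected exception types with a growing back-off and falls back to `default` when all attempts fail. A self-contained sketch of the same pattern, dropping the `requests` dependency and the sleeping (`MAX_RETRIES` and the `flaky` example are assumptions for illustration):

import functools

MAX_RETRIES = 5

def retry(default=None):
    """Retry the wrapped callable on RuntimeError; return `default` on exhaustion."""
    def decorator(func):
        @functools.wraps(func)
        def _wrapper(*args, **kw):
            for _ in range(1, MAX_RETRIES):
                try:
                    return func(*args, **kw)
                except RuntimeError:
                    pass  # the original logs and sleeps with back-off here
            return default
        return _wrapper
    return decorator

calls = {'n': 0}

@retry(default=-1)
def flaky():
    calls['n'] += 1
    if calls['n'] < 3:
        raise RuntimeError('transient')
    return 42

assert flaky() == 42 and calls['n'] == 3  # succeeded on the third attempt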
def tangent_surface_single(obj, uv, normalize):
    skl = obj.derivatives(uv[0], uv[1], 1)
    point = skl[0][0]
    vector_u = linalg.vector_normalize(skl[1][0]) if normalize else skl[1][0]
    vector_v = linalg.vector_normalize(skl[0][1]) if normalize else skl[0][1]
    return tuple(point), tuple(vector_u), tuple(vector_v)
Evaluates the surface tangent vector at the given (u,v) parameter pair. The output returns a list containing the starting point (i.e., origin) of the vector and the vectors themselves. :param obj: input surface :type obj: abstract.Surface :param uv: (u,v) parameter pair :type uv: list or tuple :param normalize: if True, the returned tangent vector is converted to a unit vector :type normalize: bool :return: A list in the order of "surface point", "derivative w.r.t. u" and "derivative w.r.t. v" :rtype: list
def fil_double_to_angle(angle):
    negative = (angle < 0.0)
    angle = np.abs(angle)
    dd = np.floor(angle / 10000)
    angle -= 10000 * dd
    mm = np.floor(angle / 100)
    ss = angle - 100 * mm
    dd += mm/60.0 + ss/3600.0
    if negative:
        dd *= -1
    return dd
Reads a little-endian double in ddmmss.s (or hhmmss.s) format and then converts to Float degrees (or hours). This is primarily used to read src_raj and src_dej header values.
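The conversion peels off degrees, minutes, and seconds from a packed ddmmss.s float by integer division. A pure-Python sketch of the same arithmetic (the function name is an assumption; the original works on numpy scalars):

import math

def ddmmss_to_degrees(angle):
    """Convert packed ddmmss.s (e.g. 123000.0 = 12° 30' 00") to decimal degrees."""
    negative = angle < 0.0
    angle = abs(angle)
    dd = math.floor(angle / 10000)      # whole degrees (or hours)
    angle -= 10000 * dd
    mm = math.floor(angle / 100)        # whole minutes
    ss = angle - 100 * mm               # remaining seconds
    dd += mm / 60.0 + ss / 3600.0
    return -dd if negative else dd

assert abs(ddmmss_to_degrees(123000.0) - 12.5) < 1e-9   # 12° 30' 00"
assert abs(ddmmss_to_degrees(-13000.0) + 1.5) < 1e-9    # -1° 30' 00"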
def getFirstElementCustomFilter(self, filterFunc):
    for child in self.children:
        if filterFunc(child) is True:
            return child
        childSearchResult = child.getFirstElementCustomFilter(filterFunc)
        if childSearchResult is not None:
            return childSearchResult
    return None
getFirstElementCustomFilter - Gets the first element which matches a given filter func. Scans first child, to the bottom, then next child to the bottom, etc. Does not include "self" node. @param filterFunc <function> - A function or lambda expression that should return "True" if the passed node matches criteria. @return <AdvancedTag/None> - First match, or None @see getElementsCustomFilter
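The search above is a depth-first traversal: each child is tested, then fully recursed into, before the next sibling is considered. A self-contained sketch with a hypothetical `Node` class standing in for AdvancedTag:

class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def get_first_custom_filter(self, filter_func):
        # Depth-first: test each child, then recurse into it, before moving on
        for child in self.children:
            if filter_func(child) is True:
                return child
            found = child.get_first_custom_filter(filter_func)
            if found is not None:
                return found
        return None

tree = Node('root', [Node('a', [Node('target')]), Node('target')])
hit = tree.get_first_custom_filter(lambda n: n.name == 'target')
assert hit is tree.children[0].children[0]  # the nested match wins over the later sibling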
def mend(self, length):
    if length == 0:
        raise Exception("Can't mend the root !")
    if length == 1:
        return
    self.children = OrderedDict(
        (node.name, node) for node in self.get_level(length))
    for child in self.children.values():
        child.parent = self
Cut all branches from this node to its children and adopt all nodes at certain level.
def _to_timezone(self, dt):
    tz = self._get_tz()
    utc_dt = pytz.utc.localize(dt)
    return utc_dt.astimezone(tz)
Takes a naive datetime holding a UTC value and returns it converted to the local timezone.
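The key step is attaching UTC to the naive datetime before converting, so `astimezone` has a known starting offset. A stdlib-only sketch (fixed-offset zone used as a stand-in for the pytz zone returned by `_get_tz`):

from datetime import datetime, timezone, timedelta

def to_timezone(naive_utc_dt, tz):
    """Attach UTC to a naive datetime, then convert to the given zone."""
    return naive_utc_dt.replace(tzinfo=timezone.utc).astimezone(tz)

tokyo = timezone(timedelta(hours=9))  # fixed-offset stand-in for a named zone
local = to_timezone(datetime(2021, 1, 15, 12, 0), tokyo)
assert (local.hour, local.day) == (21, 15)  # 12:00 UTC is 21:00 the same day at UTC+9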
def handle(self, handler, req, resp, **kwargs):
    params = self.require_params(req)

    # NOTE: the attribute name was lost in extraction; 'with_context' is a
    # best-effort reconstruction from the surrounding context handling.
    if getattr(self, 'with_context', False):
        handler = partial(handler, context=req.context)

    meta, content = self.require_meta_and_content(
        handler, params, **kwargs
    )
    self.make_body(resp, params, meta, content)
    return content
Handle given resource manipulation flow in consistent manner. This mixin is intended to be used only as a base class in new flow mixin classes. It ensures that regardless of resource manipulation semantics (retrieve, get, delete etc.) the flow is always the same: 1. Decode and validate all request parameters from the query string using ``self.require_params()`` method. 2. Use ``self.require_meta_and_content()`` method to construct ``meta`` and ``content`` dictionaries that will be later used to create serialized response body. 3. Construct serialized response body using ``self.make_body()`` method. Args: handler (method): resource manipulation method handler. req (falcon.Request): request object instance. resp (falcon.Response): response object instance to be modified. **kwargs: additional keyword arguments retrieved from url template. Returns: Content dictionary (preferably resource representation).
def _get_multi_param(self, param_prefix):
    if param_prefix.endswith("."):
        prefix = param_prefix
    else:
        prefix = param_prefix + "."
    values = []
    index = 1
    while True:
        value_dict = self._get_multi_param_helper(prefix + str(index))
        if not value_dict:
            break
        values.append(value_dict)
        index += 1
    return values
Given a querystring of ?LaunchConfigurationNames.member.1=my-test-1&LaunchConfigurationNames.member.2=my-test-2 this will return ['my-test-1', 'my-test-2']
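The indexed-parameter pattern above can be sketched as a standalone function; the `_get_multi_param_helper` call is replaced here by a plain dict lookup on a parsed querystring (an assumption for illustration):

```python
def get_multi_param(querystring, param_prefix):
    """Collect values for prefix.1, prefix.2, ... until the first gap."""
    prefix = param_prefix if param_prefix.endswith('.') else param_prefix + '.'
    values = []
    index = 1
    while True:
        value = querystring.get(prefix + str(index))
        if value is None:
            break
        values.append(value)
        index += 1
    return values

qs = {
    'LaunchConfigurationNames.member.1': 'my-test-1',
    'LaunchConfigurationNames.member.2': 'my-test-2',
}
names = get_multi_param(qs, 'LaunchConfigurationNames.member')
```

Numbering starts at 1 and stops at the first missing index, mirroring the method above.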
def json(self):
    if six.PY3:
        return json.loads(self.body.decode(self.charset))
    else:
        return json.loads(self.body)
Return response body deserialized into JSON object.
def check_archive_ext(archive):
    if not archive.lower().endswith(".dms"):
        rest = archive[-4:]
        msg = "xdms(1) archive file must end with `.dms' (found %r)" % rest
        raise util.PatoolError(msg)
xdms(1) cannot handle files with extensions other than '.dms'.
def read_record_member(self, orcid_id, request_type, token, put_code=None,
                       accept_type='application/orcid+json'):
    return self._get_info(orcid_id, self._get_member_info, request_type,
                          token, put_code, accept_type)
Get the member info about the researcher. Parameters ---------- :param orcid_id: string Id of the queried author. :param request_type: string For example: 'record'. See https://members.orcid.org/api/tutorial/read-orcid-records for possible values. :param response_format: string One of json, xml. :param token: string Token received from OAuth 2 3-legged authorization. :param put_code: string | list of strings The id of the queried work. In case of 'works' request_type might be a list of strings :param accept_type: expected MIME type of received data Returns ------- :returns: dict | lxml.etree._Element Record(s) in JSON-compatible dictionary representation or in XML E-tree, depending on accept_type specified.
def residual_resample(weights):
    N = len(weights)
    indexes = np.zeros(N, 'i')
    num_copies = (np.floor(N * np.asarray(weights))).astype(int)
    k = 0
    for i in range(N):
        for _ in range(num_copies[i]):
            indexes[k] = i
            k += 1
    residual = weights - num_copies
    residual /= sum(residual)
    cumulative_sum = np.cumsum(residual)
    cumulative_sum[-1] = 1.
    indexes[k:N] = np.searchsorted(cumulative_sum, random(N - k))
    return indexes
Performs the residual resampling algorithm used by particle filters. Based on observation that we don't need to use random numbers to select most of the weights. Take int(N*w^i) samples of each particle i, and then resample any remaining using a standard resampling algorithm [1] Parameters ---------- weights : list-like of float list of weights as floats Returns ------- indexes : ndarray of ints array of indexes into the weights defining the resample. i.e. the index of the zeroth resample is indexes[0], etc. References ---------- .. [1] J. S. Liu and R. Chen. Sequential Monte Carlo methods for dynamic systems. Journal of the American Statistical Association, 93(443):1032–1044, 1998.
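A quick self-contained sketch of the resampler above (the `'i'` dtype restoration and the seed/weights are illustrative). Only the deterministic floor-copies are asserted on, since the tail entries depend on the random draw:

```python
import numpy as np
from numpy.random import random, seed

def residual_resample(weights):
    N = len(weights)
    indexes = np.zeros(N, 'i')
    # each particle i is guaranteed floor(N * w_i) copies up front
    num_copies = (np.floor(N * np.asarray(weights))).astype(int)
    k = 0
    for i in range(N):
        for _ in range(num_copies[i]):
            indexes[k] = i
            k += 1
    # fill the remaining slots by sampling the residual weights
    residual = weights - num_copies
    residual /= sum(residual)
    cumulative_sum = np.cumsum(residual)
    cumulative_sum[-1] = 1.
    indexes[k:N] = np.searchsorted(cumulative_sum, random(N - k))
    return indexes

seed(0)
weights = np.array([0.5, 0.25, 0.125, 0.125])
idx = residual_resample(weights)
# first 3 slots are deterministic: floor(4*0.5)=2 copies of 0, 1 copy of 1
```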
def move(self, x, y):
    SetWindowPos(self._hwnd, None, x, y, 0, 0, SWP_NOSIZE)
Move window top-left corner to position
def est_meanfn(self, fn):
    return np.einsum('i...,i...',
                     self.particle_weights,
                     fn(self.particle_locations))
Returns the expectation value of a given function :math:`f` over the current particle distribution. Here, :math:`f` is represented by a function ``fn`` that is vectorized over particles, such that ``f(modelparams)`` has shape ``(n_particles, k)``, where ``n_particles = modelparams.shape[0]``, and where ``k`` is a positive integer. :param callable fn: Function implementing :math:`f` in a vectorized manner. (See above.) :rtype: :class:`numpy.ndarray`, shape ``(k, )``. :returns: An array containing an estimate of the mean of :math:`f`.
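The weighted expectation in `est_meanfn` reduces to a sum over the particle index; a minimal numerical sketch (the einsum subscripts `'i...,i...'` are assumed restored from the surrounding code, and the weights/locations are illustrative):

```python
import numpy as np

weights = np.array([0.25, 0.25, 0.5])        # particle weights, sum to 1
locations = np.array([[0.0], [1.0], [2.0]])  # shape (n_particles, 1)

def fn(modelparams):
    # vectorized f over particles, output shape (n_particles, k)
    return modelparams ** 2

# contract over the particle index i, broadcasting over trailing dims
est = np.einsum('i...,i...', weights, fn(locations))
# E[f] = 0.25*0 + 0.25*1 + 0.5*4 = 2.25
```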
def _sanitize_data(self, data):
    if "_atom_site_attached_hydrogens" in data.data.keys():
        attached_hydrogens = [
            str2float(x) for x in data.data["_atom_site_attached_hydrogens"]
            if str2float(x) != 0
        ]
        if len(attached_hydrogens) > 0:
            self.errors.append("Structure has implicit hydrogens defined, "
                               "parsed structure unlikely to be suitable for use "
                               "in calculations unless hydrogens added.")
    if "_atom_site_type_symbol" in data.data.keys():
        idxs_to_remove = []
        new_atom_site_label = []
        new_atom_site_type_symbol = []
        new_atom_site_occupancy = []
        new_fract_x = []
        new_fract_y = []
        new_fract_z = []
        for idx, el_row in enumerate(data["_atom_site_label"]):
            if len(data["_atom_site_type_symbol"][idx].split(' + ')) > \
                    len(data["_atom_site_label"][idx].split(' + ')):
                els_occu = {}
                symbol_str = data["_atom_site_type_symbol"][idx]
                symbol_str_lst = symbol_str.split(' + ')
                for elocc_idx in range(len(symbol_str_lst)):
                    # strip bracketed uncertainty values
                    symbol_str_lst[elocc_idx] = re.sub(
                        r'\([0-9]*\)', '', symbol_str_lst[elocc_idx].strip())
                    els_occu[str(re.findall(r'\D+', symbol_str_lst[
                        elocc_idx].strip())[1]).replace('<sup>', '')] = \
                        float('0' + re.findall(r'\.?\d+', symbol_str_lst[
                            elocc_idx].strip())[1])
                x = str2float(data["_atom_site_fract_x"][idx])
                y = str2float(data["_atom_site_fract_y"][idx])
                z = str2float(data["_atom_site_fract_z"][idx])
                for et, occu in els_occu.items():
                    new_atom_site_label.append(
                        et + '_fix' + str(len(new_atom_site_label)))
                    new_atom_site_type_symbol.append(et)
                    new_atom_site_occupancy.append(str(occu))
                    new_fract_x.append(str(x))
                    new_fract_y.append(str(y))
                    new_fract_z.append(str(z))
                idxs_to_remove.append(idx)
                continue
        # NOTE: the block that applies idxs_to_remove and the new_* lists
        # back onto data.data is missing from the source and is not
        # reconstructed here.
    # round fractional co-ordinates such as 1/3 that lose precision
    # (the outer loops below were missing in the source and are reconstructed
    # from the surviving inner fragment)
    important_fracs = (1 / 3., 2 / 3.)
    fracs_to_change = {}
    for label in ("_atom_site_fract_x", "_atom_site_fract_y",
                  "_atom_site_fract_z"):
        if label in data.data.keys():
            for idx, frac in enumerate(data.data[label]):
                try:
                    frac = str2float(frac)
                except Exception:
                    continue
                for comparison_frac in important_fracs:
                    if abs(1 - frac / comparison_frac) < 1e-4:
                        fracs_to_change[(label, idx)] = str(comparison_frac)
    if fracs_to_change:
        self.errors.append("Some fractional co-ordinates rounded to ideal "
                           "values to avoid finite precision errors.")
        for (label, idx), val in fracs_to_change.items():
            data.data[label][idx] = val
    return data
Some CIF files do not conform to spec. This function corrects known issues, particular in regards to Springer materials/ Pauling files. This function is here so that CifParser can assume its input conforms to spec, simplifying its implementation. :param data: CifBlock :return: data CifBlock
def _render_templates(files, filetable, written_files, force, open_mode='w'):
    for tpl_path, content in filetable:
        target_path = files[tpl_path]
        needdir = os.path.dirname(target_path)
        assert needdir, "Target should have valid parent dir"
        try:
            os.makedirs(needdir)
        except OSError as err:
            if err.errno != errno.EEXIST:
                raise
        if os.path.isfile(target_path):
            if force:
                LOG.warning("Forcing overwrite of existing file %s.", target_path)
            elif target_path in written_files:
                LOG.warning("Previous stencil has already written file %s.", target_path)
            else:
                print("Skipping existing file %s" % target_path)
                LOG.info("Skipping existing file %s", target_path)
                continue
        with open(target_path, open_mode) as newfile:
            print("Writing rendered file %s" % target_path)
            LOG.info("Writing rendered file %s", target_path)
            newfile.write(content)
        written_files.append(target_path)
Write template contents from filetable into files. Using filetable for the rendered templates, and the list of files, render all the templates into actual files on disk, forcing to overwrite the file as appropriate, and using the given open mode for the file.
def findProbableCopyright(self):
    retCopyrights = set()
    for R in self:
        # the record key name was lost in the source; 'abstract' is a guess
        begin, abS = findCopyright(R.get('abstract', ''))
        if abS != '':
            retCopyrights.add(abS)
    return list(retCopyrights)
Finds the (likely) copyright string from all abstracts in the `RecordCollection` # Returns `list[str]` > A deduplicated list of all the copyright strings
def uninstall_wic(self, wic_slot_number):
    # WIC modules are always installed on the adapter in slot 0
    slot_number = 0
    adapter = self._slots[slot_number]
    if wic_slot_number > len(adapter.wics) - 1:
        raise DynamipsError("WIC slot {wic_slot_number} doesn't exist".format(
            wic_slot_number=wic_slot_number))
    yield from self._hypervisor.send(
        'vm slot_remove_binding "{name}" {slot_number} {wic_slot_number}'.format(
            name=self._name,
            slot_number=slot_number,
            wic_slot_number=wic_slot_number))
    log.info('Router "{name}" [{id}]: {wic} removed from WIC slot {wic_slot_number}'.format(
        name=self._name,
        id=self._id,
        wic=adapter.wics[wic_slot_number],
        wic_slot_number=wic_slot_number))
    adapter.uninstall_wic(wic_slot_number)
Uninstalls a WIC adapter from this router. :param wic_slot_number: WIC slot number
def field_items(self, path=str(), **options):
    parent = path if path else str()
    items = list()
    for name, item in self.items():
        item_path = '{0}.{1}'.format(parent, name) if parent else name
        if is_container(item):
            for field in item.field_items(item_path, **options):
                items.append(field)
        elif is_pointer(item) and get_nested(options):
            for field in item.field_items(item_path, **options):
                items.append(field)
        elif is_field(item):
            items.append((item_path, item))
        else:
            raise MemberTypeError(self, item, item_path)
    return items
Returns a **flatten** list of ``('field path', field item)`` tuples for each :class:`Field` *nested* in the `Structure`. :param str path: field path of the `Structure`. :keyword bool nested: if ``True`` all :class:`Pointer` fields in the :attr:`~Pointer.data` objects of all :class:`Pointer` fields in the `Structure` list their referenced :attr:`~Pointer.data` object field items as well (chained method call).
def transfers_complete(self):
    for transfer in self.transfers:
        if not transfer.is_complete:
            error = {
                'errorcode': 4003,
                # the message text was lost in the source; this wording is a guess
                'errormessage': 'All transfers must be complete'
            }
            hellraiser(error)
Check if all transfers are completed.
def ensure_table_strings(table):
    for row in range(len(table)):
        for column in range(len(table[row])):
            table[row][column] = str(table[row][column])
    return table
Force each cell in the table to be a string Parameters ---------- table : list of lists Returns ------- table : list of lists of str
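Usage of the helper above is straightforward — conversion happens in place and the same list object is returned:

```python
def ensure_table_strings(table):
    # force every cell to str, mutating the table in place
    for row in range(len(table)):
        for column in range(len(table[row])):
            table[row][column] = str(table[row][column])
    return table

table = [[1, 2.5], [None, True]]
result = ensure_table_strings(table)
# result == [['1', '2.5'], ['None', 'True']], and result is table
```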
def index(self, req, drivers):
    result = []
    for driver in drivers:
        result.append(driver.list_network(req.params))
    # NOTE: the dict keys and the environ key were lost in the source;
    # the names below are reconstructions
    data = {
        'action': "index",
        'controller': "network",
        'cloud': req.environ['cloud'],
        'result': result
    }
    return data
List all network List all of netowrks on some special cloud with: :Param req :Type object Request
def _bootstrap_pacman(
        root,
        pkg_confs='/etc/pacman*',
        img_format='dir',
        pkgs=None,
        exclude_pkgs=None,
):
    # NOTE: many string literals were lost in the source; the values below
    # are reconstructions
    _make_nodes(root)
    if pkgs is None:
        pkgs = []
    elif isinstance(pkgs, six.string_types):
        pkgs = pkgs.split(',')
    default_pkgs = ('pacman', 'linux', 'systemd-sysvcompat', 'grub')
    for pkg in default_pkgs:
        if pkg not in pkgs:
            pkgs.append(pkg)
    if exclude_pkgs is None:
        exclude_pkgs = []
    elif isinstance(exclude_pkgs, six.string_types):
        exclude_pkgs = exclude_pkgs.split(',')
    for pkg in exclude_pkgs:
        pkgs.remove(pkg)
    if img_format != 'dir':
        __salt__['mount.mount']('{0}/proc'.format(root), 'proc',
                                fstype='proc', opts='defaults')
        __salt__['mount.mount']('{0}/dev'.format(root), 'udev',
                                fstype='devtmpfs', opts='defaults')
    __salt__['file.mkdir'](
        '{0}/var/lib/pacman/local'.format(root), 'root', 'root', '755')
    pac_files = [rf for rf in os.listdir('/etc') if rf.startswith('pacman.')]
    for pac_file in pac_files:
        __salt__['cmd.run']('cp -r /etc/{0} {1}/etc'.format(
            pac_file, _cmd_quote(root)))
    __salt__['file.copy']('/var/lib/pacman/sync',
                          '{0}/var/lib/pacman/sync'.format(root), recurse=True)
    pacman_args = ['pacman', '--noconfirm', '-r', _cmd_quote(root), '-S'] + pkgs
    __salt__['cmd.run'](pacman_args, python_shell=False)
    if img_format != 'dir':
        __salt__['mount.umount']('{0}/proc'.format(root))
        __salt__['mount.umount']('{0}/dev'.format(root))
Bootstrap an image using the pacman tools root The root of the image to install to. Will be created as a directory if it does not exist. (e.x.: /root/arch) pkg_confs The location of the conf files to copy into the image, to point pacman to the right repos and configuration. img_format The image format to be used. The ``dir`` type needs no special treatment, but others need special treatment. pkgs A list of packages to be installed on this image. For Arch Linux, this will include ``pacman``, ``linux``, ``grub``, and ``systemd-sysvcompat`` by default. exclude_pkgs A list of packages to be excluded. If you do not want to install the defaults, you need to include them in this list.
def regex_opt_inner(strings, open_paren):
    close_paren = open_paren and ')' or ''
    if not strings:
        return ''
    first = strings[0]
    if len(strings) == 1:
        return open_paren + escape(first) + close_paren
    if not first:
        return open_paren + regex_opt_inner(strings[1:], '(') \
            + ')?' + close_paren
    if len(first) == 1:
        oneletter = []
        rest = []
        for s in strings:
            if len(s) == 1:
                oneletter.append(s)
            else:
                rest.append(s)
        if len(oneletter) > 1:
            if rest:
                return open_paren + regex_opt_inner(rest, '') + '|' \
                    + make_charset(oneletter) + close_paren
            return open_paren + make_charset(oneletter) + close_paren
    prefix = commonprefix(strings)
    if prefix:
        plen = len(prefix)
        return open_paren + escape(prefix) \
            + regex_opt_inner([s[plen:] for s in strings], '(') \
            + close_paren
    strings_rev = [s[::-1] for s in strings]
    suffix = commonprefix(strings_rev)
    if suffix:
        slen = len(suffix)
        return open_paren \
            + regex_opt_inner(sorted(s[:-slen] for s in strings), '(') \
            + escape(suffix[::-1]) + close_paren
    return open_paren + \
        '|'.join(regex_opt_inner(list(group[1]), '')
                 for group in groupby(strings, lambda s: s[0] == first[0])) \
        + close_paren
Return a regex that matches any string in the sorted list of strings.
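The core trick in the optimizer above — factoring a common prefix out of an alternation — can be sketched without the full recursion (the word list is illustrative):

```python
import re
from os.path import commonprefix

words = sorted(['submit', 'suburb', 'subway'])
prefix = commonprefix(words)  # 'sub'
# factor the shared prefix, then alternate over the distinct tails
pattern = re.escape(prefix) + '(' + '|'.join(
    re.escape(w[len(prefix):]) for w in words) + ')'
# pattern == 'sub(mit|urb|way)'
```

The factored pattern matches exactly the original words while letting the regex engine commit to the shared prefix once instead of retrying it per alternative.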
def reassign_proficiency_to_objective_bank(self, objective_id, from_objective_bank_id, to_objective_bank_id):
    self.assign_objective_to_objective_bank(objective_id, to_objective_bank_id)
    try:
        self.unassign_objective_from_objective_bank(objective_id, from_objective_bank_id)
    except:
        self.unassign_objective_from_objective_bank(objective_id, to_objective_bank_id)
        raise
Moves an ``Objective`` from one ``ObjectiveBank`` to another. Mappings to other ``ObjectiveBanks`` are unaffected. arg: objective_id (osid.id.Id): the ``Id`` of the ``Objective`` arg: from_objective_bank_id (osid.id.Id): the ``Id`` of the current ``ObjectiveBank`` arg: to_objective_bank_id (osid.id.Id): the ``Id`` of the destination ``ObjectiveBank`` raise: NotFound - ``objective_id, from_objective_bank_id,`` or ``to_objective_bank_id`` not found or ``objective_id`` not mapped to ``from_objective_bank_id`` raise: NullArgument - ``objective_id, from_objective_bank_id,`` or ``to_objective_bank_id`` is ``null`` raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure *compliance: mandatory -- This method must be implemented.*
def save(self, filething=None, v2_version=4, v23_sep='/', padding=None):
    fileobj = filething.fileobj
    fileobj.seek(0)
    dsd_header = DSDChunk(fileobj)
    if dsd_header.offset_metdata_chunk == 0:
        fileobj.seek(0, 2)
        dsd_header.offset_metdata_chunk = fileobj.tell()
        dsd_header.write()
    try:
        data = self._prepare_data(
            fileobj, dsd_header.offset_metdata_chunk, self.size,
            v2_version, v23_sep, padding)
    except ID3Error as e:
        reraise(error, e, sys.exc_info()[2])
    fileobj.seek(dsd_header.offset_metdata_chunk)
    fileobj.write(data)
    fileobj.truncate()
    dsd_header.total_size = fileobj.tell()
    dsd_header.write()
Save ID3v2 data to the DSF file
def fetch(self):
    if not self.local_path:
        self.make_local_path()
    fetcher = BookFetcher(self)
    fetcher.fetch()
just pull files from PG
def _parse_settings_bond_2(opts, iface, bond_def):
    bond = {'mode': '2'}
    valid = ['list of ips (up to 16)']
    if 'arp_ip_target' in opts:
        if isinstance(opts['arp_ip_target'], list):
            if 1 <= len(opts['arp_ip_target']) <= 16:
                bond.update({'arp_ip_target': ''})
                for ip in opts['arp_ip_target']:
                    if bond['arp_ip_target']:
                        bond['arp_ip_target'] = bond['arp_ip_target'] + ',' + ip
                    else:
                        bond['arp_ip_target'] = ip
            else:
                _raise_error_iface(iface, 'arp_ip_target', valid)
        else:
            _raise_error_iface(iface, 'arp_ip_target', valid)
    else:
        _raise_error_iface(iface, 'arp_ip_target', valid)
    if 'arp_interval' in opts:
        try:
            int(opts['arp_interval'])
            bond.update({'arp_interval': opts['arp_interval']})
        except ValueError:
            _raise_error_iface(iface, 'arp_interval', ['integer'])
    else:
        _log_default_iface(iface, 'arp_interval', bond_def['arp_interval'])
        bond.update({'arp_interval': bond_def['arp_interval']})
    if 'hashing-algorithm' in opts:
        valid = ['layer2', 'layer2+3', 'layer3+4']
        if opts['hashing-algorithm'] in valid:
            bond.update({'xmit_hash_policy': opts['hashing-algorithm']})
        else:
            _raise_error_iface(iface, 'hashing-algorithm', valid)
    return bond
Filters given options and outputs valid settings for bond2. If an option has a value that is not expected, this function will log what the Interface, Setting and what it was expecting.
def add_node_to_network(self, node, network):
    network.add_node(node)
    node.receive()
    environment = network.nodes(type=Environment)[0]
    environment.connect(whom=node)
    gene = node.infos(type=LearningGene)[0].contents
    if (gene == "social"):
        prev_agents = RogersAgent.query\
            .filter(and_(RogersAgent.failed == False,
                         RogersAgent.network_id == network.id,
                         RogersAgent.generation == node.generation - 1))\
            .all()
        parent = random.choice(prev_agents)
        parent.connect(whom=node)
        parent.transmit(what=Meme, to_whom=node)
    elif (gene == "asocial"):
        environment.transmit(to_whom=node)
    else:
        raise ValueError("{} has invalid learning gene value of {}"
                         .format(node, gene))
    node.receive()
Add participant's node to a network.
def get_all_integration(self, **kwargs):
    kwargs['_return_http_data_only'] = True
    if kwargs.get('async_req'):
        return self.get_all_integration_with_http_info(**kwargs)
    else:
        (data) = self.get_all_integration_with_http_info(**kwargs)
        return data
Gets a flat list of all Wavefront integrations available, along with their status # noqa: E501 # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.get_all_integration(async_req=True) >>> result = thread.get() :param async_req bool :param int offset: :param int limit: :return: ResponseContainerPagedIntegration If the method is called asynchronously, returns the request thread.
### Input: Gets a flat list of all Wavefront integrations available, along with their status # noqa: E501 # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.get_all_integration(async_req=True) >>> result = thread.get() :param async_req bool :param int offset: :param int limit: :return: ResponseContainerPagedIntegration If the method is called asynchronously, returns the request thread. ### Response:
def get_all_integration(self, **kwargs):
    kwargs['_return_http_data_only'] = True
    if kwargs.get('async_req'):
        return self.get_all_integration_with_http_info(**kwargs)
    else:
        (data) = self.get_all_integration_with_http_info(**kwargs)
        return data
def body(self, body):
    self._request.body = body
    self.add_matcher(matcher('BodyMatcher', body))
Defines the body data to match. ``body`` argument can be a ``str``, ``binary`` or a regular expression. Arguments: body (str|binary|regex): body data to match. Returns: self: current Mock instance.
### Input: Defines the body data to match. ``body`` argument can be a ``str``, ``binary`` or a regular expression. Arguments: body (str|binary|regex): body data to match. Returns: self: current Mock instance. ### Response:
def body(self, body):
    self._request.body = body
    self.add_matcher(matcher('BodyMatcher', body))
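A body matcher accepting a string, bytes, or compiled regex, as the docstring above describes, can be sketched without the mocking library. `make_body_matcher` is a hypothetical helper, not the library's API:

```python
import re

def make_body_matcher(expected):
    """Return a predicate matching a request body against str, bytes or regex."""
    if hasattr(expected, 'search'):          # compiled regex pattern
        return lambda body: expected.search(body) is not None
    return lambda body: body == expected     # exact str/bytes comparison
```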
def set_window_size(self, width, height):
    self._imgwin_wd = int(width)
    self._imgwin_ht = int(height)
    self._ctr_x = width // 2
    self._ctr_y = height // 2
    self.logger.debug("widget resized to %dx%d" % (width, height))
    self.make_callback('configure', width, height)
    self.redraw(whence=0)
Report the size of the window to display the image. **Callbacks** Will call any callbacks registered for the ``'configure'`` event. Callbacks should have a method signature of:: (viewer, width, height, ...) .. note:: This is called by the subclass with ``width`` and ``height`` as soon as the actual dimensions of the allocated window are known. Parameters ---------- width : int The width of the window in pixels. height : int The height of the window in pixels.
### Input: Report the size of the window to display the image. **Callbacks** Will call any callbacks registered for the ``'configure'`` event. Callbacks should have a method signature of:: (viewer, width, height, ...) .. note:: This is called by the subclass with ``width`` and ``height`` as soon as the actual dimensions of the allocated window are known. Parameters ---------- width : int The width of the window in pixels. height : int The height of the window in pixels. ### Response:
def set_window_size(self, width, height):
    self._imgwin_wd = int(width)
    self._imgwin_ht = int(height)
    self._ctr_x = width // 2
    self._ctr_y = height // 2
    self.logger.debug("widget resized to %dx%d" % (width, height))
    self.make_callback('configure', width, height)
    self.redraw(whence=0)
def getent(refresh=False):
    if 'group.getent' in __context__ and not refresh:
        return __context__['group.getent']
    ret = []
    for grinfo in grp.getgrall():
        if not grinfo.gr_name.startswith('_'):
            ret.append(_format_info(grinfo))
    __context__['group.getent'] = ret
    return ret
Return info on all groups CLI Example: .. code-block:: bash salt '*' group.getent
### Input: Return info on all groups CLI Example: .. code-block:: bash salt '*' group.getent ### Response:
def getent(refresh=False):
    if 'group.getent' in __context__ and not refresh:
        return __context__['group.getent']
    ret = []
    for grinfo in grp.getgrall():
        if not grinfo.gr_name.startswith('_'):
            ret.append(_format_info(grinfo))
    __context__['group.getent'] = ret
    return ret
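The entry above caches its result in a context dict keyed by module function, honoring a `refresh` flag. A minimal sketch of that pattern, with a plain dict standing in for Salt's `__context__` and an injectable source so it runs stand-alone (both are illustrative assumptions):

```python
_context = {}

def get_groups(refresh=False, _source=lambda: ['wheel', 'staff']):
    """Return group info, caching in _context unless refresh is requested."""
    key = 'group.getent'
    if key in _context and not refresh:
        return _context[key]          # serve cached result
    ret = list(_source())             # recompute from the (stubbed) source
    _context[key] = ret
    return ret
```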
def reload(script, input, output): script = Path(script).expand().abspath() output = Path(output).expand().abspath() input = input if isinstance(input, (list, tuple)) else [input] output.makedirs_p() _script_reload(script, input, output)
reloads the generator script when the script files or the input files changes
### Input: reloads the generator script when the script files or the input files changes ### Response: def reload(script, input, output): script = Path(script).expand().abspath() output = Path(output).expand().abspath() input = input if isinstance(input, (list, tuple)) else [input] output.makedirs_p() _script_reload(script, input, output)
def git_path_valid(git_path=None):
    if git_path is None and GIT_PATH is None:
        return False
    if git_path is None:
        git_path = GIT_PATH
    try:
        call([git_path, '--version'])
        return True
    except OSError:
        return False
Check whether the git executable is found.
### Input: Check whether the git executable is found. ### Response:
def git_path_valid(git_path=None):
    if git_path is None and GIT_PATH is None:
        return False
    if git_path is None:
        git_path = GIT_PATH
    try:
        call([git_path, '--version'])
        return True
    except OSError:
        return False
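The technique above — probe an executable by invoking it and treating `OSError` as "not found" — works for any binary. A self-contained sketch (the `--version` flag is an assumption about the probed tool):

```python
import subprocess
import sys

def executable_valid(path):
    """Return True if `path` can be launched; OSError means it is missing."""
    try:
        # Discard output; we only care whether the process can start at all.
        subprocess.call([path, '--version'],
                        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return True
    except OSError:
        return False
```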
def unique_prefixes(context): res = {} for m in context.modules.values(): if m.keyword == "submodule": continue prf = new = m.i_prefix suff = 0 while new in res.values(): suff += 1 new = "%s%x" % (prf, suff) res[m] = new return res
Return a dictionary with unique prefixes for modules in `context`. Keys are 'module' statements and values are prefixes, disambiguated where necessary.
### Input: Return a dictionary with unique prefixes for modules in `context`. Keys are 'module' statements and values are prefixes, disambiguated where necessary. ### Response: def unique_prefixes(context): res = {} for m in context.modules.values(): if m.keyword == "submodule": continue prf = new = m.i_prefix suff = 0 while new in res.values(): suff += 1 new = "%s%x" % (prf, suff) res[m] = new return res
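The disambiguation loop in `unique_prefixes` (append an incrementing hex suffix until the name is free) generalizes to any name list. A stand-alone sketch of that loop, with a hypothetical helper name:

```python
def unique_names(names):
    """Disambiguate duplicates by appending an incrementing hex suffix."""
    taken = set()
    out = []
    for name in names:
        new, suff = name, 0
        while new in taken:           # same collision loop as the entry above
            suff += 1
            new = '%s%x' % (name, suff)
        taken.add(new)
        out.append(new)
    return out
```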
def _gzip(self, response):
    bytesio = six.BytesIO()
    with gzip.GzipFile(fileobj=bytesio, mode='wb') as gz:
        gz.write(response)
    return bytesio.getvalue()
Apply gzip compression to a response.
### Input: Apply gzip compression to a response. ### Response:
def _gzip(self, response):
    bytesio = six.BytesIO()
    with gzip.GzipFile(fileobj=bytesio, mode='wb') as gz:
        gz.write(response)
    return bytesio.getvalue()
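The same compress-into-a-buffer pattern without `six` (stdlib `io.BytesIO` replaces `six.BytesIO`; the function name is illustrative):

```python
import gzip
import io

def gzip_bytes(payload):
    """Compress a bytes payload via an in-memory GzipFile, as above."""
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode='wb') as gz:
        gz.write(payload)
    return buf.getvalue()
```

Round-tripping through `gzip.decompress` confirms the output is a valid gzip stream.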
def _initParams(self): params = SP.zeros(self.getNumberParams()) self.setParams(params)
initialize paramters to vector of zeros
### Input: initialize paramters to vector of zeros ### Response: def _initParams(self): params = SP.zeros(self.getNumberParams()) self.setParams(params)
def list_distros(package_format=None): client = get_distros_api() with catch_raise_api_exception(): distros, _, headers = client.distros_list_with_http_info() ratelimits.maybe_rate_limit(client, headers) return [ distro.to_dict() for distro in distros if not package_format or distro.format == package_format ]
List available distributions.
### Input: List available distributions. ### Response: def list_distros(package_format=None): client = get_distros_api() with catch_raise_api_exception(): distros, _, headers = client.distros_list_with_http_info() ratelimits.maybe_rate_limit(client, headers) return [ distro.to_dict() for distro in distros if not package_format or distro.format == package_format ]
def display(self, image): assert(image.mode == self.mode) assert(image.size == self.size) self._last_image = image.copy() sz = image.width * image.height * 4 buf = bytearray(sz * 3) m = self._mapping for idx, (r, g, b, a) in enumerate(image.getdata()): offset = sz + m[idx] * 4 brightness = (a >> 4) if a != 0xFF else self._brightness buf[offset] = (0xE0 | brightness) buf[offset + 1] = b buf[offset + 2] = g buf[offset + 3] = r self._serial_interface.data(list(buf))
Takes a 32-bit RGBA :py:mod:`PIL.Image` and dumps it to the daisy-chained APA102 neopixels. If a pixel is not fully opaque, the alpha channel value is used to set the brightness of the respective RGB LED.
### Input: Takes a 32-bit RGBA :py:mod:`PIL.Image` and dumps it to the daisy-chained APA102 neopixels. If a pixel is not fully opaque, the alpha channel value is used to set the brightness of the respective RGB LED. ### Response: def display(self, image): assert(image.mode == self.mode) assert(image.size == self.size) self._last_image = image.copy() sz = image.width * image.height * 4 buf = bytearray(sz * 3) m = self._mapping for idx, (r, g, b, a) in enumerate(image.getdata()): offset = sz + m[idx] * 4 brightness = (a >> 4) if a != 0xFF else self._brightness buf[offset] = (0xE0 | brightness) buf[offset + 1] = b buf[offset + 2] = g buf[offset + 3] = r self._serial_interface.data(list(buf))
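The per-pixel header byte computed above (`0xE0 | brightness`, with brightness taken from the alpha channel unless it is fully opaque) is the APA102 frame format: three set marker bits plus a 5-bit global-brightness field. A sketch of just that arithmetic, with a hypothetical helper name:

```python
def apa102_header(alpha, default_brightness):
    """APA102 per-pixel header byte: 0b111 marker + 5-bit brightness field."""
    # Fully opaque pixels use the device default; otherwise scale alpha to 4 bits.
    brightness = (alpha >> 4) if alpha != 0xFF else default_brightness
    return 0xE0 | brightness
```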
def search(self, index_name, query): try: results = self.els_search.search(index=index_name, body=query) return results except Exception, error: error_str = % str(error) error_str += print error_str raise RuntimeError(error_str)
Search the given index_name with the given ELS query. Args: index_name: Name of the Index query: The string to be searched. Returns: List of results. Raises: RuntimeError: When the search query fails.
### Input: Search the given index_name with the given ELS query. Args: index_name: Name of the Index query: The string to be searched. Returns: List of results. Raises: RuntimeError: When the search query fails. ### Response: def search(self, index_name, query): try: results = self.els_search.search(index=index_name, body=query) return results except Exception, error: error_str = % str(error) error_str += print error_str raise RuntimeError(error_str)
def _TerminateProcessByPid(self, pid): self._RaiseIfNotRegistered(pid) process = self._processes_per_pid[pid] self._TerminateProcess(process) self._StopMonitoringProcess(process)
Terminate a process that's monitored by the engine. Args: pid (int): process identifier (PID). Raises: KeyError: if the process is not registered with and monitored by the engine.
### Input: Terminate a process that's monitored by the engine. Args: pid (int): process identifier (PID). Raises: KeyError: if the process is not registered with and monitored by the engine. ### Response: def _TerminateProcessByPid(self, pid): self._RaiseIfNotRegistered(pid) process = self._processes_per_pid[pid] self._TerminateProcess(process) self._StopMonitoringProcess(process)
def _connect_pipeline(self, pipeline, required_outputs, workflow, subject_inds, visit_inds, filter_array, force=False): if self.reprocess == : force = True pipeline.cap() final_nodes = [] prqs_to_process_array = np.zeros((len(subject_inds), len(visit_inds)), dtype=bool) prqs_to_skip_array = np.zeros((len(subject_inds), len(visit_inds)), dtype=bool) for getter_name in pipeline.prerequisites: prereq = pipeline.study.pipeline(getter_name) if prereq.to_process_array.any(): final_nodes.append(prereq.node()) prqs_to_process_array |= prereq.to_process_array prqs_to_skip_array |= prereq.to_skip_array "frequency, when the pipeline only iterates over " .format("".join(o.name for o in outputs), freq, "".join(pipeline.iterators()))) outputnode = pipeline.outputnode(freq) to_connect = {o.suffixed_name: (outputnode, o.name) for o in outputs if o.is_spec} to_connect.update( {i: (iter_nodes[i], i) for i in pipeline.iterators()}) for input_freq in pipeline.input_frequencies: checksums_to_connect = [ i.checksum_suffixed_name for i in pipeline.frequency_inputs(input_freq)] if not checksums_to_connect: continue source = sources[input_freq] for iterator in (pipeline.iterators(input_freq) - pipeline.iterators(freq)): join = pipeline.add( .format( input_freq, freq, iterator), IdentityInterface( checksums_to_connect), inputs={ tc: (source, tc) for tc in checksums_to_connect}, joinsource=iterator, joinfield=checksums_to_connect) source = join to_connect.update( {tc: (source, tc) for tc in checksums_to_connect}) sink = pipeline.add( .format(freq), RepositorySink( (o.collection for o in outputs), pipeline), inputs=to_connect) deiter_nodes[freq] = sink for iterator in sorted(pipeline.iterators(freq), key=deiter_node_sort_key): deiter_nodes[freq] = pipeline.add( .format(freq, iterator), IdentityInterface( []), inputs={ : (deiter_nodes[freq], )}, joinsource=iterator, joinfield=) pipeline.add( , Merge( len(deiter_nodes)), inputs={ .format(i): (di, ) for i, di in enumerate(deiter_nodes.values(), 
start=1)})
Connects a pipeline to a overarching workflow that sets up iterators over subjects|visits present in the repository (if required) and repository source and sink nodes Parameters ---------- pipeline : Pipeline The pipeline to connect required_outputs : set[str] | None The outputs required to be produced by this pipeline. If None all are deemed to be required workflow : nipype.pipeline.engine.Workflow The overarching workflow to connect the pipeline to subject_inds : dct[str, int] A mapping of subject ID to row index in the filter array visit_inds : dct[str, int] A mapping of visit ID to column index in the filter array filter_array : 2-D numpy.array[bool] A two-dimensional boolean array, where rows correspond to subjects and columns correspond to visits in the repository. True values represent a combination of subject & visit ID to include in the current round of processing. Note that if the 'force' flag is not set, sessions won't be reprocessed unless the save provenance doesn't match that of the given pipeline. force : bool | 'all' A flag to force the processing of all sessions in the filter array, regardless of whether the parameters|pipeline used to generate existing data matches the given pipeline
### Input: Connects a pipeline to a overarching workflow that sets up iterators over subjects|visits present in the repository (if required) and repository source and sink nodes Parameters ---------- pipeline : Pipeline The pipeline to connect required_outputs : set[str] | None The outputs required to be produced by this pipeline. If None all are deemed to be required workflow : nipype.pipeline.engine.Workflow The overarching workflow to connect the pipeline to subject_inds : dct[str, int] A mapping of subject ID to row index in the filter array visit_inds : dct[str, int] A mapping of visit ID to column index in the filter array filter_array : 2-D numpy.array[bool] A two-dimensional boolean array, where rows correspond to subjects and columns correspond to visits in the repository. True values represent a combination of subject & visit ID to include in the current round of processing. Note that if the 'force' flag is not set, sessions won't be reprocessed unless the save provenance doesn't match that of the given pipeline. 
force : bool | 'all' A flag to force the processing of all sessions in the filter array, regardless of whether the parameters|pipeline used to generate existing data matches the given pipeline ### Response: def _connect_pipeline(self, pipeline, required_outputs, workflow, subject_inds, visit_inds, filter_array, force=False): if self.reprocess == : force = True pipeline.cap() final_nodes = [] prqs_to_process_array = np.zeros((len(subject_inds), len(visit_inds)), dtype=bool) prqs_to_skip_array = np.zeros((len(subject_inds), len(visit_inds)), dtype=bool) for getter_name in pipeline.prerequisites: prereq = pipeline.study.pipeline(getter_name) if prereq.to_process_array.any(): final_nodes.append(prereq.node()) prqs_to_process_array |= prereq.to_process_array prqs_to_skip_array |= prereq.to_skip_array "frequency, when the pipeline only iterates over " .format("".join(o.name for o in outputs), freq, "".join(pipeline.iterators()))) outputnode = pipeline.outputnode(freq) to_connect = {o.suffixed_name: (outputnode, o.name) for o in outputs if o.is_spec} to_connect.update( {i: (iter_nodes[i], i) for i in pipeline.iterators()}) for input_freq in pipeline.input_frequencies: checksums_to_connect = [ i.checksum_suffixed_name for i in pipeline.frequency_inputs(input_freq)] if not checksums_to_connect: continue source = sources[input_freq] for iterator in (pipeline.iterators(input_freq) - pipeline.iterators(freq)): join = pipeline.add( .format( input_freq, freq, iterator), IdentityInterface( checksums_to_connect), inputs={ tc: (source, tc) for tc in checksums_to_connect}, joinsource=iterator, joinfield=checksums_to_connect) source = join to_connect.update( {tc: (source, tc) for tc in checksums_to_connect}) sink = pipeline.add( .format(freq), RepositorySink( (o.collection for o in outputs), pipeline), inputs=to_connect) deiter_nodes[freq] = sink for iterator in sorted(pipeline.iterators(freq), key=deiter_node_sort_key): deiter_nodes[freq] = pipeline.add( .format(freq, iterator), 
IdentityInterface( []), inputs={ : (deiter_nodes[freq], )}, joinsource=iterator, joinfield=) pipeline.add( , Merge( len(deiter_nodes)), inputs={ .format(i): (di, ) for i, di in enumerate(deiter_nodes.values(), start=1)})
def create_shared_folder(self, name, host_path, writable, automount, auto_mount_point): if not isinstance(name, basestring): raise TypeError("name can only be an instance of type basestring") if not isinstance(host_path, basestring): raise TypeError("host_path can only be an instance of type basestring") if not isinstance(writable, bool): raise TypeError("writable can only be an instance of type bool") if not isinstance(automount, bool): raise TypeError("automount can only be an instance of type bool") if not isinstance(auto_mount_point, basestring): raise TypeError("auto_mount_point can only be an instance of type basestring") self._call("createSharedFolder", in_p=[name, host_path, writable, automount, auto_mount_point])
Creates a new global shared folder by associating the given logical name with the given host path, adds it to the collection of shared folders and starts sharing it. Refer to the description of :py:class:`ISharedFolder` to read more about logical names. In the current implementation, this operation is not implemented. in name of type str Unique logical name of the shared folder. in host_path of type str Full path to the shared folder in the host file system. in writable of type bool Whether the share is writable or readonly in automount of type bool Whether the share gets automatically mounted by the guest or not. in auto_mount_point of type str Where the guest should automatically mount the folder, if possible. For Windows and OS/2 guests this should be a drive letter, while other guests it should be a absolute directory.
### Input: Creates a new global shared folder by associating the given logical name with the given host path, adds it to the collection of shared folders and starts sharing it. Refer to the description of :py:class:`ISharedFolder` to read more about logical names. In the current implementation, this operation is not implemented. in name of type str Unique logical name of the shared folder. in host_path of type str Full path to the shared folder in the host file system. in writable of type bool Whether the share is writable or readonly in automount of type bool Whether the share gets automatically mounted by the guest or not. in auto_mount_point of type str Where the guest should automatically mount the folder, if possible. For Windows and OS/2 guests this should be a drive letter, while other guests it should be a absolute directory. ### Response: def create_shared_folder(self, name, host_path, writable, automount, auto_mount_point): if not isinstance(name, basestring): raise TypeError("name can only be an instance of type basestring") if not isinstance(host_path, basestring): raise TypeError("host_path can only be an instance of type basestring") if not isinstance(writable, bool): raise TypeError("writable can only be an instance of type bool") if not isinstance(automount, bool): raise TypeError("automount can only be an instance of type bool") if not isinstance(auto_mount_point, basestring): raise TypeError("auto_mount_point can only be an instance of type basestring") self._call("createSharedFolder", in_p=[name, host_path, writable, automount, auto_mount_point])
def logout(self):
    if self._token:
        header_data = {
            'ABODE-API-KEY': self._token
        }
        self._session = requests.session()
        self._token = None
        self._panel = None
        self._user = None
        self._devices = None
        self._automations = None
        try:
            response = self._session.post(
                CONST.LOGOUT_URL, headers=header_data)
            response_object = json.loads(response.text)
        except OSError as exc:
            _LOGGER.warning("Caught exception during logout: %s", str(exc))
            return False
        if response.status_code != 200:
            raise AbodeAuthenticationException(
                (response.status_code, response_object['message']))
        _LOGGER.debug("Logout Response: %s", response.text)
    _LOGGER.info("Logout successful")
    return True
Explicit Abode logout.
### Input: Explicit Abode logout. ### Response:
def logout(self):
    if self._token:
        header_data = {
            'ABODE-API-KEY': self._token
        }
        self._session = requests.session()
        self._token = None
        self._panel = None
        self._user = None
        self._devices = None
        self._automations = None
        try:
            response = self._session.post(
                CONST.LOGOUT_URL, headers=header_data)
            response_object = json.loads(response.text)
        except OSError as exc:
            _LOGGER.warning("Caught exception during logout: %s", str(exc))
            return False
        if response.status_code != 200:
            raise AbodeAuthenticationException(
                (response.status_code, response_object['message']))
        _LOGGER.debug("Logout Response: %s", response.text)
    _LOGGER.info("Logout successful")
    return True
def findReference(self, name, cls=QtGui.QWidget): ref_widget = self._referenceWidget if not ref_widget: return None if ref_widget.objectName() == name: return ref_widget return ref_widget.findChild(cls, name)
Looks up a reference from the widget based on its object name. :param name | <str> cls | <subclass of QtGui.QObject> :return <QtGui.QObject> || None
### Input: Looks up a reference from the widget based on its object name. :param name | <str> cls | <subclass of QtGui.QObject> :return <QtGui.QObject> || None ### Response: def findReference(self, name, cls=QtGui.QWidget): ref_widget = self._referenceWidget if not ref_widget: return None if ref_widget.objectName() == name: return ref_widget return ref_widget.findChild(cls, name)
def write_input(self, output_dir, make_dir_if_not_present=True, write_cif=False,
                write_path_cif=False, write_endpoint_inputs=False):
    output_dir = Path(output_dir)
    if make_dir_if_not_present and not output_dir.exists():
        output_dir.mkdir(parents=True)
    self.incar.write_file(str(output_dir / 'INCAR'))
    self.kpoints.write_file(str(output_dir / 'KPOINTS'))
    self.potcar.write_file(str(output_dir / 'POTCAR'))
    for i, p in enumerate(self.poscars):
        d = output_dir / str(i).zfill(2)
        if not d.exists():
            d.mkdir(parents=True)
        p.write_file(str(d / 'POSCAR'))
        if write_cif:
            p.structure.to(filename=str(d / '{}.cif'.format(i)))
    if write_endpoint_inputs:
        end_point_param = MITRelaxSet(
            self.structures[0],
            user_incar_settings=self.user_incar_settings)
        for image in ['00', str(len(self.structures) - 1).zfill(2)]:
            end_point_param.incar.write_file(
                str(output_dir / image / 'INCAR'))
            end_point_param.kpoints.write_file(
                str(output_dir / image / 'KPOINTS'))
            end_point_param.potcar.write_file(
                str(output_dir / image / 'POTCAR'))
    if write_path_cif:
        sites = set()
        l = self.structures[0].lattice
        for site in chain(*(s.sites for s in self.structures)):
            sites.add(
                PeriodicSite(site.species, site.frac_coords, l))
        nebpath = Structure.from_sites(sorted(sites))
        nebpath.to(filename=str(output_dir / 'path.cif'))
NEB inputs has a special directory structure where inputs are in 00, 01, 02, .... Args: output_dir (str): Directory to output the VASP input files make_dir_if_not_present (bool): Set to True if you want the directory (and the whole path) to be created if it is not present. write_cif (bool): If true, writes a cif along with each POSCAR. write_path_cif (bool): If true, writes a cif for each image. write_endpoint_inputs (bool): If true, writes input files for running endpoint calculations.
### Input: NEB inputs has a special directory structure where inputs are in 00, 01, 02, .... Args: output_dir (str): Directory to output the VASP input files make_dir_if_not_present (bool): Set to True if you want the directory (and the whole path) to be created if it is not present. write_cif (bool): If true, writes a cif along with each POSCAR. write_path_cif (bool): If true, writes a cif for each image. write_endpoint_inputs (bool): If true, writes input files for running endpoint calculations. ### Response:
def write_input(self, output_dir, make_dir_if_not_present=True, write_cif=False,
                write_path_cif=False, write_endpoint_inputs=False):
    output_dir = Path(output_dir)
    if make_dir_if_not_present and not output_dir.exists():
        output_dir.mkdir(parents=True)
    self.incar.write_file(str(output_dir / 'INCAR'))
    self.kpoints.write_file(str(output_dir / 'KPOINTS'))
    self.potcar.write_file(str(output_dir / 'POTCAR'))
    for i, p in enumerate(self.poscars):
        d = output_dir / str(i).zfill(2)
        if not d.exists():
            d.mkdir(parents=True)
        p.write_file(str(d / 'POSCAR'))
        if write_cif:
            p.structure.to(filename=str(d / '{}.cif'.format(i)))
    if write_endpoint_inputs:
        end_point_param = MITRelaxSet(
            self.structures[0],
            user_incar_settings=self.user_incar_settings)
        for image in ['00', str(len(self.structures) - 1).zfill(2)]:
            end_point_param.incar.write_file(
                str(output_dir / image / 'INCAR'))
            end_point_param.kpoints.write_file(
                str(output_dir / image / 'KPOINTS'))
            end_point_param.potcar.write_file(
                str(output_dir / image / 'POTCAR'))
    if write_path_cif:
        sites = set()
        l = self.structures[0].lattice
        for site in chain(*(s.sites for s in self.structures)):
            sites.add(
                PeriodicSite(site.species, site.frac_coords, l))
        nebpath = Structure.from_sites(sorted(sites))
        nebpath.to(filename=str(output_dir / 'path.cif'))
def join(self, path): return self.parse_uri(urlparse(os.path.join(str(self), path)), storage_args=self.storage_args)
Similar to :func:`os.path.join` but returns a storage object instead. :param str path: path to join on to this object's URI :returns: a storage object :rtype: BaseURI
### Input: Similar to :func:`os.path.join` but returns a storage object instead. :param str path: path to join on to this object's URI :returns: a storage object :rtype: BaseURI ### Response: def join(self, path): return self.parse_uri(urlparse(os.path.join(str(self), path)), storage_args=self.storage_args)
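The `join` above rides on `os.path.join` plus `urlparse` rather than URL resolution, which is why it appends path segments the way a filesystem join would. A stand-alone sketch of that combination (POSIX path separators assumed; the function name is illustrative):

```python
import os
from urllib.parse import urlparse

def join_uri(base, path):
    """Join a path segment onto a URI string, filesystem-join style."""
    parts = urlparse(os.path.join(base, path))  # parse after joining
    return parts.geturl()
```

Note this differs from `urllib.parse.urljoin`, which would treat `dir` as a file and replace it rather than descend into it.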
def nodeprep(string, allow_unassigned=False): chars = list(string) _nodeprep_do_mapping(chars) do_normalization(chars) check_prohibited_output( chars, ( stringprep.in_table_c11, stringprep.in_table_c12, stringprep.in_table_c21, stringprep.in_table_c22, stringprep.in_table_c3, stringprep.in_table_c4, stringprep.in_table_c5, stringprep.in_table_c6, stringprep.in_table_c7, stringprep.in_table_c8, stringprep.in_table_c9, lambda x: x in _nodeprep_prohibited )) check_bidi(chars) if not allow_unassigned: check_unassigned( chars, ( stringprep.in_table_a1, ) ) return "".join(chars)
Process the given `string` using the Nodeprep (`RFC 6122`_) profile. In the error cases defined in `RFC 3454`_ (stringprep), a :class:`ValueError` is raised.
### Input: Process the given `string` using the Nodeprep (`RFC 6122`_) profile. In the error cases defined in `RFC 3454`_ (stringprep), a :class:`ValueError` is raised. ### Response: def nodeprep(string, allow_unassigned=False): chars = list(string) _nodeprep_do_mapping(chars) do_normalization(chars) check_prohibited_output( chars, ( stringprep.in_table_c11, stringprep.in_table_c12, stringprep.in_table_c21, stringprep.in_table_c22, stringprep.in_table_c3, stringprep.in_table_c4, stringprep.in_table_c5, stringprep.in_table_c6, stringprep.in_table_c7, stringprep.in_table_c8, stringprep.in_table_c9, lambda x: x in _nodeprep_prohibited )) check_bidi(chars) if not allow_unassigned: check_unassigned( chars, ( stringprep.in_table_a1, ) ) return "".join(chars)
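The prohibited-output check in `nodeprep` walks each character through a tuple of `stringprep` table predicates and raises `ValueError` on any hit, per RFC 3454. A minimal stand-alone version of that check (the helper name is illustrative):

```python
import stringprep

def check_prohibited(chars, tables):
    """Raise ValueError if any character falls in a prohibited table."""
    for i, c in enumerate(chars):
        if any(in_table(c) for in_table in tables):
            raise ValueError('prohibited character at index %d' % i)
```

For example, table C.1.1 is "ASCII space characters", so any string containing U+0020 fails against it.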
def update(self, validate=False):
    unfiltered_rs = self.connection.get_all_volumes([self.id])
    rs = [x for x in unfiltered_rs if x.id == self.id]
    if len(rs) > 0:
        self._update(rs[0])
    elif validate:
        raise ValueError('%s is not a valid Volume ID' % self.id)
    return self.status
Update the data associated with this volume by querying EC2. :type validate: bool :param validate: By default, if EC2 returns no data about the volume the update method returns quietly. If the validate param is True, however, it will raise a ValueError exception if no data is returned from EC2.
### Input: Update the data associated with this volume by querying EC2. :type validate: bool :param validate: By default, if EC2 returns no data about the volume the update method returns quietly. If the validate param is True, however, it will raise a ValueError exception if no data is returned from EC2. ### Response:
def update(self, validate=False):
    unfiltered_rs = self.connection.get_all_volumes([self.id])
    rs = [x for x in unfiltered_rs if x.id == self.id]
    if len(rs) > 0:
        self._update(rs[0])
    elif validate:
        raise ValueError('%s is not a valid Volume ID' % self.id)
    return self.status
def parse(filename): for event, elt in et.iterparse(filename, events= (, , , ), huge_tree=True): if event == : obj = _elt2obj(elt) obj[] = ENTER yield obj if elt.text: yield {: TEXT, : elt.text} elif event == : yield {: EXIT} if elt.tail: yield {: TEXT, : elt.tail} elt.clear() elif event == : yield {: COMMENT, : elt.text} elif event == : yield {: PI, : elt.text} else: assert False, (event, elt)
Parses file content into events stream
### Input: Parses file content into events stream ### Response: def parse(filename): for event, elt in et.iterparse(filename, events= (, , , ), huge_tree=True): if event == : obj = _elt2obj(elt) obj[] = ENTER yield obj if elt.text: yield {: TEXT, : elt.text} elif event == : yield {: EXIT} if elt.tail: yield {: TEXT, : elt.tail} elt.clear() elif event == : yield {: COMMENT, : elt.text} elif event == : yield {: PI, : elt.text} else: assert False, (event, elt)
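The entry above streams parse events with `iterparse` instead of building a full tree, yielding enter/text/exit records. A simplified stdlib sketch of the same idea using only `'end'` events (stdlib `xml.etree` rather than the entry's `lxml`-style `huge_tree` parser; the function name is illustrative):

```python
import io
import xml.etree.ElementTree as ET

def end_events(xml_text):
    """Stream (tag, text, tail) tuples as each element finishes parsing."""
    src = io.StringIO(xml_text)
    for _event, elt in ET.iterparse(src, events=('end',)):
        yield (elt.tag, elt.text, elt.tail)
```

With `'end'` events the inner element is reported before its parent, and `text`/`tail` are guaranteed to be populated, which is why the original clears elements only after handling the end event.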
def _db_install(self, db_name): self._logger.info("Installing NIPAP database schemas into db") self._execute(db_schema.ip_net % (db_name)) self._execute(db_schema.functions) self._execute(db_schema.triggers)
Install nipap database schema
### Input: Install nipap database schema ### Response: def _db_install(self, db_name): self._logger.info("Installing NIPAP database schemas into db") self._execute(db_schema.ip_net % (db_name)) self._execute(db_schema.functions) self._execute(db_schema.triggers)
def receive(self, input): if IRichInput.providedBy(input): richInput = unicode(input) symbolInput = unicode(input.symbol()) else: richInput = None symbolInput = unicode(input) action = LOG_FSM_TRANSITION( self.logger, fsm_identifier=self.identifier, fsm_state=unicode(self.state), fsm_rich_input=richInput, fsm_input=symbolInput) with action as theAction: output = super(FiniteStateLogger, self).receive(input) theAction.addSuccessFields( fsm_next_state=unicode(self.state), fsm_output=[unicode(o) for o in output]) if self._action is not None and self._isTerminal(self.state): self._action.addSuccessFields( fsm_terminal_state=unicode(self.state)) self._action.finish() self._action = None return output
Add logging of state transitions to the wrapped state machine. @see: L{IFiniteStateMachine.receive}
### Input: Add logging of state transitions to the wrapped state machine. @see: L{IFiniteStateMachine.receive} ### Response: def receive(self, input): if IRichInput.providedBy(input): richInput = unicode(input) symbolInput = unicode(input.symbol()) else: richInput = None symbolInput = unicode(input) action = LOG_FSM_TRANSITION( self.logger, fsm_identifier=self.identifier, fsm_state=unicode(self.state), fsm_rich_input=richInput, fsm_input=symbolInput) with action as theAction: output = super(FiniteStateLogger, self).receive(input) theAction.addSuccessFields( fsm_next_state=unicode(self.state), fsm_output=[unicode(o) for o in output]) if self._action is not None and self._isTerminal(self.state): self._action.addSuccessFields( fsm_terminal_state=unicode(self.state)) self._action.finish() self._action = None return output
def get_packing_plan(self, topologyName, callback=None): isWatching = False ret = { "result": None } if callback: isWatching = True else: def callback(data): ret["result"] = data self._get_packing_plan_with_watch(topologyName, callback, isWatching) return ret["result"]
get packing plan
### Input: get packing plan ### Response: def get_packing_plan(self, topologyName, callback=None): isWatching = False ret = { "result": None } if callback: isWatching = True else: def callback(data): ret["result"] = data self._get_packing_plan_with_watch(topologyName, callback, isWatching) return ret["result"]
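`get_packing_plan` above wraps a callback-driven API so it can also be called synchronously: if no callback is supplied, a closure captures the result into a dict and the function returns it. That pattern in isolation, with a hypothetical `compute` standing in for the watch-based fetch:

```python
def get_value(compute, callback=None):
    """Run callback-style; when no callback is given, capture the result."""
    ret = {'result': None}
    if callback is None:
        def callback(data):          # closure writes into ret
            ret['result'] = data
    compute(callback)                # invokes callback with the produced value
    return ret['result']
```

When a caller does pass a callback, the capture dict stays untouched and the function returns `None`, matching the original's watching mode.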
def _connect(self): oauth_client = BackendApplicationClient(client_id=self.client_id) oauth_session = OAuth2Session(client=oauth_client) token = oauth_session.fetch_token(token_url=self.url + "oauth/token", client_id=self.client_id, client_secret=self.client_secret) return OAuth2Session(client_id=self.client_id, token=token)
Retrieve token from USMA API and create an authenticated session :returns OAuth2Session: authenticated client session
### Input: Retrieve token from USMA API and create an authenticated session :returns OAuth2Session: authenticated client session ### Response: def _connect(self): oauth_client = BackendApplicationClient(client_id=self.client_id) oauth_session = OAuth2Session(client=oauth_client) token = oauth_session.fetch_token(token_url=self.url + "oauth/token", client_id=self.client_id, client_secret=self.client_secret) return OAuth2Session(client_id=self.client_id, token=token)
def _on_nodes(self): all_graphs = self.all_graphs all_nodes = [n for g in all_graphs for n in g.nodes] for graph in all_graphs: for edge in graph.edges: edge._nodes = all_nodes
Maintains each branch's list of available nodes in order that they may move themselves (InstanceEditor values).
### Input: Maintains each branch's list of available nodes in order that they may move themselves (InstanceEditor values). ### Response: def _on_nodes(self): all_graphs = self.all_graphs all_nodes = [n for g in all_graphs for n in g.nodes] for graph in all_graphs: for edge in graph.edges: edge._nodes = all_nodes
def export_pipeline(exported_pipeline, operators, pset, impute=False, pipeline_score=None, random_state=None, data_file_path=): pipeline_tree = expr_to_tree(exported_pipeline, pset) pipeline_text = generate_import_code(exported_pipeline, operators, impute) pipeline_code = pipeline_code_wrapper(generate_export_pipeline_code(pipeline_tree, operators)) if pipeline_code.count("FunctionTransformer(copy)"): pipeline_text += if not data_file_path: data_file_path = pipeline_text += .format(data_file_path, random_state) if impute: pipeline_text += if pipeline_score is not None: pipeline_text += .format(pipeline_score) pipeline_text += pipeline_text += pipeline_code return pipeline_text
Generate source code for a TPOT Pipeline. Parameters ---------- exported_pipeline: deap.creator.Individual The pipeline that is being exported operators: List of operator classes from operator library pipeline_score: Optional pipeline score to be saved to the exported file impute: bool (False): If impute = True, then adda a imputation step. random_state: integer Random seed in train_test_split function. data_file_path: string (default: '') By default, the path of input dataset is 'PATH/TO/DATA/FILE' by default. If data_file_path is another string, the path will be replaced. Returns ------- pipeline_text: str The source code representing the pipeline
### Input: Generate source code for a TPOT Pipeline. Parameters ---------- exported_pipeline: deap.creator.Individual The pipeline that is being exported operators: List of operator classes from operator library pipeline_score: Optional pipeline score to be saved to the exported file impute: bool (False): If impute = True, then adda a imputation step. random_state: integer Random seed in train_test_split function. data_file_path: string (default: '') By default, the path of input dataset is 'PATH/TO/DATA/FILE' by default. If data_file_path is another string, the path will be replaced. Returns ------- pipeline_text: str The source code representing the pipeline ### Response: def export_pipeline(exported_pipeline, operators, pset, impute=False, pipeline_score=None, random_state=None, data_file_path=): pipeline_tree = expr_to_tree(exported_pipeline, pset) pipeline_text = generate_import_code(exported_pipeline, operators, impute) pipeline_code = pipeline_code_wrapper(generate_export_pipeline_code(pipeline_tree, operators)) if pipeline_code.count("FunctionTransformer(copy)"): pipeline_text += if not data_file_path: data_file_path = pipeline_text += .format(data_file_path, random_state) if impute: pipeline_text += if pipeline_score is not None: pipeline_text += .format(pipeline_score) pipeline_text += pipeline_text += pipeline_code return pipeline_text
def fq2fa(fq):
    c = cycle([1, 2, 3, 4])
    for line in fq:
        n = next(c)
        if n == 1:
            seq = ['>%s' % (line.strip().split('@', 1)[1])]
        if n == 2:
            seq.append(line.strip())
            yield seq
convert fq to fa
### Input: convert fq to fa ### Response: def fq2fa(fq):
    c = cycle([1, 2, 3, 4])
    for line in fq:
        n = next(c)
        if n == 1:
            seq = ['>%s' % (line.strip().split('@', 1)[1])]
        if n == 2:
            seq.append(line.strip())
            yield seq
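The format strings in the row above were stripped by extraction; a minimal self-contained sketch of the same FASTQ-to-FASTA walk, where the '@'-to-'>' header rewrite and the placement of `yield` inside the sequence-line branch are assumptions:

```python
from itertools import cycle

def fq2fa(fq):
    # A FASTQ record is 4 lines: @header, sequence, '+', quality.
    c = cycle([1, 2, 3, 4])
    for line in fq:
        n = next(c)
        if n == 1:
            # rewrite '@header' as '>header' (assumed reconstruction)
            seq = ['>%s' % line.strip().split('@', 1)[1]]
        if n == 2:
            seq.append(line.strip())
            yield seq

records = list(fq2fa(["@read1\n", "ACGT\n", "+\n", "IIII\n"]))
print(records)  # [['>read1', 'ACGT']]
```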
def printGenericTree(element, level=0, showids=True, labels=False, showtype=True, TYPE_MARGIN=18): ID_MARGIN = 5 SHORT_TYPES = { "rdf:Property": "rdf:Property", "owl:AnnotationProperty": "owl:Annot.Pr.", "owl:DatatypeProperty": "owl:DatatypePr.", "owl:ObjectProperty": "owl:ObjectPr.", } if showids: _id_ = Fore.BLUE + \ "[%d]%s" % (element.id, " " * (ID_MARGIN - len(str(element.id)))) + \ Fore.RESET elif showtype: _prop = uri2niceString(element.rdftype) try: prop = SHORT_TYPES[_prop] except: prop = _prop _id_ = Fore.BLUE + \ "[%s]%s" % (prop, " " * (TYPE_MARGIN - len(prop))) + Fore.RESET else: _id_ = "" if labels: bestLabel = element.bestLabel(qname_allowed=False) if bestLabel: bestLabel = Fore.MAGENTA + " (\"%s\")" % bestLabel + Fore.RESET else: bestLabel = "" printDebug("%s%s%s%s" % (_id_, "-" * 4 * level, element.qname, bestLabel)) for sub in element.children(): printGenericTree(sub, (level + 1), showids, labels, showtype, TYPE_MARGIN)
Print nicely into stdout the taxonomical tree of an ontology. Works irrespectively of whether it's a class or property. Note: indentation is made so that ids up to 3 digits fit in, plus a space. [123]1-- [1]123-- [12]12-- <TYPE_MARGIN> is parametrized so that classes and properties can have different default spacing (eg owl:class vs owl:AnnotationProperty)
### Input: Print nicely into stdout the taxonomical tree of an ontology. Works irrespectively of whether it's a class or property. Note: indentation is made so that ids up to 3 digits fit in, plus a space. [123]1-- [1]123-- [12]12-- <TYPE_MARGIN> is parametrized so that classes and properties can have different default spacing (eg owl:class vs owl:AnnotationProperty) ### Response: def printGenericTree(element, level=0, showids=True, labels=False, showtype=True, TYPE_MARGIN=18): ID_MARGIN = 5 SHORT_TYPES = { "rdf:Property": "rdf:Property", "owl:AnnotationProperty": "owl:Annot.Pr.", "owl:DatatypeProperty": "owl:DatatypePr.", "owl:ObjectProperty": "owl:ObjectPr.", } if showids: _id_ = Fore.BLUE + \ "[%d]%s" % (element.id, " " * (ID_MARGIN - len(str(element.id)))) + \ Fore.RESET elif showtype: _prop = uri2niceString(element.rdftype) try: prop = SHORT_TYPES[_prop] except: prop = _prop _id_ = Fore.BLUE + \ "[%s]%s" % (prop, " " * (TYPE_MARGIN - len(prop))) + Fore.RESET else: _id_ = "" if labels: bestLabel = element.bestLabel(qname_allowed=False) if bestLabel: bestLabel = Fore.MAGENTA + " (\"%s\")" % bestLabel + Fore.RESET else: bestLabel = "" printDebug("%s%s%s%s" % (_id_, "-" * 4 * level, element.qname, bestLabel)) for sub in element.children(): printGenericTree(sub, (level + 1), showids, labels, showtype, TYPE_MARGIN)
def random_markov_chain(n, k=None, sparse=False, random_state=None):
    P = random_stochastic_matrix(n, k, sparse, format='csr',
                                 random_state=random_state)
    mc = MarkovChain(P)
    return mc
Return a randomly sampled MarkovChain instance with n states, where each state has k states with positive transition probability. Parameters ---------- n : scalar(int) Number of states. k : scalar(int), optional(default=None) Number of states that may be reached from each state with positive probability. Set to n if not specified. sparse : bool, optional(default=False) Whether to store the transition probability matrix in sparse matrix form. random_state : int or np.random.RandomState, optional Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used. Returns ------- mc : MarkovChain Examples -------- >>> mc = qe.markov.random_markov_chain(3, random_state=1234) >>> mc.P array([[ 0.19151945, 0.43058932, 0.37789123], [ 0.43772774, 0.34763084, 0.21464142], [ 0.27259261, 0.5073832 , 0.22002419]]) >>> mc = qe.markov.random_markov_chain(3, k=2, random_state=1234) >>> mc.P array([[ 0.19151945, 0.80848055, 0. ], [ 0. , 0.62210877, 0.37789123], [ 0.56227226, 0. , 0.43772774]])
### Input: Return a randomly sampled MarkovChain instance with n states, where each state has k states with positive transition probability. Parameters ---------- n : scalar(int) Number of states. k : scalar(int), optional(default=None) Number of states that may be reached from each state with positive probability. Set to n if not specified. sparse : bool, optional(default=False) Whether to store the transition probability matrix in sparse matrix form. random_state : int or np.random.RandomState, optional Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used. Returns ------- mc : MarkovChain Examples -------- >>> mc = qe.markov.random_markov_chain(3, random_state=1234) >>> mc.P array([[ 0.19151945, 0.43058932, 0.37789123], [ 0.43772774, 0.34763084, 0.21464142], [ 0.27259261, 0.5073832 , 0.22002419]]) >>> mc = qe.markov.random_markov_chain(3, k=2, random_state=1234) >>> mc.P array([[ 0.19151945, 0.80848055, 0. ], [ 0. , 0.62210877, 0.37789123], [ 0.56227226, 0. , 0.43772774]]) ### Response: def random_markov_chain(n, k=None, sparse=False, random_state=None):
    P = random_stochastic_matrix(n, k, sparse, format='csr',
                                 random_state=random_state)
    mc = MarkovChain(P)
    return mc
def report_non_responding_hosting_devices(self, context, host, hosting_device_ids): self.update_hosting_device_status(context, host, {const.HD_DEAD: hosting_device_ids})
Report that a hosting device is determined to be dead. :param context: contains user information :param host: originator of callback :param hosting_device_ids: list of non-responding hosting devices
### Input: Report that a hosting device is determined to be dead. :param context: contains user information :param host: originator of callback :param hosting_device_ids: list of non-responding hosting devices ### Response: def report_non_responding_hosting_devices(self, context, host, hosting_device_ids): self.update_hosting_device_status(context, host, {const.HD_DEAD: hosting_device_ids})
def install(path, restart=False):
    cmd = ['wusa.exe', path, '/quiet']
    if restart:
        cmd.append('/forcerestart')
    else:
        cmd.append('/norestart')
    ret_code = __salt__['cmd.retcode'](cmd, ignore_retcode=True)
    file_name = os.path.basename(path)
    errors = {2359302: '{0} is already installed'.format(file_name),
              87: 'Unknown error'}
    if ret_code in errors:
        raise CommandExecutionError(errors[ret_code])
    elif ret_code:
        raise CommandExecutionError('Unknown error: {0}'.format(ret_code))
    return True
Install a KB from a .msu file. Args: path (str): The full path to the msu file to install restart (bool): ``True`` to force a restart if required by the installation. Adds the ``/forcerestart`` switch to the ``wusa.exe`` command. ``False`` will add the ``/norestart`` switch instead. Default is ``False`` Returns: bool: ``True`` if successful, otherwise ``False`` Raise: CommandExecutionError: If the package is already installed or an error is encountered CLI Example: .. code-block:: bash salt '*' wusa.install C:/temp/KB123456.msu
### Input: Install a KB from a .msu file. Args: path (str): The full path to the msu file to install restart (bool): ``True`` to force a restart if required by the installation. Adds the ``/forcerestart`` switch to the ``wusa.exe`` command. ``False`` will add the ``/norestart`` switch instead. Default is ``False`` Returns: bool: ``True`` if successful, otherwise ``False`` Raise: CommandExecutionError: If the package is already installed or an error is encountered CLI Example: .. code-block:: bash salt '*' wusa.install C:/temp/KB123456.msu ### Response: def install(path, restart=False):
    cmd = ['wusa.exe', path, '/quiet']
    if restart:
        cmd.append('/forcerestart')
    else:
        cmd.append('/norestart')
    ret_code = __salt__['cmd.retcode'](cmd, ignore_retcode=True)
    file_name = os.path.basename(path)
    errors = {2359302: '{0} is already installed'.format(file_name),
              87: 'Unknown error'}
    if ret_code in errors:
        raise CommandExecutionError(errors[ret_code])
    elif ret_code:
        raise CommandExecutionError('Unknown error: {0}'.format(ret_code))
    return True
def node_selection(cmd_name, node_qty):
    cmd_disp = cmd_name.upper()
    cmd_title = ("\r{1}{0} NODE{2} - Enter {3}"
                 " ({4}0 = Exit Command{2}): ".
                 format(cmd_disp, C_TI, C_NORM, C_WARN, C_HEAD2))
    ui_cmd_title(cmd_title)
    selection_valid = False
    input_flush()
    with term.cbreak():
        while not selection_valid:
            node_num = input_by_key()
            try:
                node_num = int(node_num)
            except ValueError:
                node_num = 99999
            if node_num <= node_qty:
                selection_valid = True
            else:
                ui_print_suffix("Invalid Entry", C_ERR)
                sleep(0.5)
                ui_cmd_title(cmd_title)
    return node_num
Determine Node via alternate input method.
### Input: Determine Node via alternate input method. ### Response: def node_selection(cmd_name, node_qty):
    cmd_disp = cmd_name.upper()
    cmd_title = ("\r{1}{0} NODE{2} - Enter {3}"
                 " ({4}0 = Exit Command{2}): ".
                 format(cmd_disp, C_TI, C_NORM, C_WARN, C_HEAD2))
    ui_cmd_title(cmd_title)
    selection_valid = False
    input_flush()
    with term.cbreak():
        while not selection_valid:
            node_num = input_by_key()
            try:
                node_num = int(node_num)
            except ValueError:
                node_num = 99999
            if node_num <= node_qty:
                selection_valid = True
            else:
                ui_print_suffix("Invalid Entry", C_ERR)
                sleep(0.5)
                ui_cmd_title(cmd_title)
    return node_num
def collect_fragment(event, agora_host): agora = Agora(agora_host) graph_pattern = "" for tp in __triple_patterns: graph_pattern += .format(tp) fragment, _, graph = agora.get_fragment_generator( % graph_pattern, stop_event=event, workers=4) __extract_pattern_nodes(graph) log.info( % graph_pattern) for (t, s, p, o) in fragment: collectors = __triple_patterns[str(__plan_patterns[t])] for c, args in collectors: log.debug(.format(s.n3(graph.namespace_manager), graph.qname(p), o.n3(graph.namespace_manager), c)) c((s, p, o)) if event.isSet(): raise Exception() yield (c.func_name, (t, s, p, o))
Execute a search plan for the declared graph pattern and sends all obtained triples to the corresponding collector functions (config
### Input: Execute a search plan for the declared graph pattern and sends all obtained triples to the corresponding collector functions (config ### Response: def collect_fragment(event, agora_host): agora = Agora(agora_host) graph_pattern = "" for tp in __triple_patterns: graph_pattern += .format(tp) fragment, _, graph = agora.get_fragment_generator( % graph_pattern, stop_event=event, workers=4) __extract_pattern_nodes(graph) log.info( % graph_pattern) for (t, s, p, o) in fragment: collectors = __triple_patterns[str(__plan_patterns[t])] for c, args in collectors: log.debug(.format(s.n3(graph.namespace_manager), graph.qname(p), o.n3(graph.namespace_manager), c)) c((s, p, o)) if event.isSet(): raise Exception() yield (c.func_name, (t, s, p, o))
def calculate_size(name, items): data_size = 0 data_size += calculate_size_str(name) data_size += INT_SIZE_IN_BYTES for items_item in items: data_size += calculate_size_data(items_item) return data_size
Calculates the request payload size
### Input: Calculates the request payload size ### Response: def calculate_size(name, items): data_size = 0 data_size += calculate_size_str(name) data_size += INT_SIZE_IN_BYTES for items_item in items: data_size += calculate_size_data(items_item) return data_size
def get_access_token(self, code): try: self._token = super().fetch_token( MINUT_TOKEN_URL, client_id=self._client_id, client_secret=self._client_secret, code=code, ) except MissingTokenError as error: _LOGGER.debug("Token issues: %s", error) return self._token
Get new access token.
### Input: Get new access token. ### Response: def get_access_token(self, code): try: self._token = super().fetch_token( MINUT_TOKEN_URL, client_id=self._client_id, client_secret=self._client_secret, code=code, ) except MissingTokenError as error: _LOGGER.debug("Token issues: %s", error) return self._token
def search(cls, session, queries, out_type): cls._check_implements() domain = cls.get_search_domain(queries) return cls( % cls.__endpoint__, data={: str(domain)}, session=session, out_type=out_type, )
Search for a record given a domain. Args: session (requests.sessions.Session): Authenticated session. queries (helpscout.models.Domain or iter): The queries for the domain. If a ``Domain`` object is provided, it will simply be returned. Otherwise, a ``Domain`` object will be generated from the complex queries. In this case, the queries should conform to the interface in :func:`helpscout.domain.Domain.from_tuple`. out_type (helpscout.BaseModel): The type of record to output. This should be provided by child classes, by calling super. Returns: RequestPaginator(output_type=helpscout.BaseModel): Results iterator of the ``out_type`` that is defined.
### Input: Search for a record given a domain. Args: session (requests.sessions.Session): Authenticated session. queries (helpscout.models.Domain or iter): The queries for the domain. If a ``Domain`` object is provided, it will simply be returned. Otherwise, a ``Domain`` object will be generated from the complex queries. In this case, the queries should conform to the interface in :func:`helpscout.domain.Domain.from_tuple`. out_type (helpscout.BaseModel): The type of record to output. This should be provided by child classes, by calling super. Returns: RequestPaginator(output_type=helpscout.BaseModel): Results iterator of the ``out_type`` that is defined. ### Response: def search(cls, session, queries, out_type): cls._check_implements() domain = cls.get_search_domain(queries) return cls( % cls.__endpoint__, data={: str(domain)}, session=session, out_type=out_type, )
def FindFileByName(self, file_name):
    try:
        return self._file_descriptors[file_name]
    except KeyError:
        pass
    try:
        file_proto = self._internal_db.FindFileByName(file_name)
    except KeyError as error:
        if self._descriptor_db:
            file_proto = self._descriptor_db.FindFileByName(file_name)
        else:
            raise error
    if not file_proto:
        raise KeyError('Cannot find a file named %s' % file_name)
    return self._ConvertFileProtoToFileDescriptor(file_proto)
Gets a FileDescriptor by file name. Args: file_name: The path to the file to get a descriptor for. Returns: A FileDescriptor for the named file. Raises: KeyError: if the file cannot be found in the pool.
### Input: Gets a FileDescriptor by file name. Args: file_name: The path to the file to get a descriptor for. Returns: A FileDescriptor for the named file. Raises: KeyError: if the file cannot be found in the pool. ### Response: def FindFileByName(self, file_name):
    try:
        return self._file_descriptors[file_name]
    except KeyError:
        pass
    try:
        file_proto = self._internal_db.FindFileByName(file_name)
    except KeyError as error:
        if self._descriptor_db:
            file_proto = self._descriptor_db.FindFileByName(file_name)
        else:
            raise error
    if not file_proto:
        raise KeyError('Cannot find a file named %s' % file_name)
    return self._ConvertFileProtoToFileDescriptor(file_proto)
def _is_number_matching_desc(national_number, number_desc): if number_desc is None: return False actual_length = len(national_number) possible_lengths = number_desc.possible_length if len(possible_lengths) > 0 and actual_length not in possible_lengths: return False return _match_national_number(national_number, number_desc, False)
Determine if the number matches the given PhoneNumberDesc
### Input: Determine if the number matches the given PhoneNumberDesc ### Response: def _is_number_matching_desc(national_number, number_desc): if number_desc is None: return False actual_length = len(national_number) possible_lengths = number_desc.possible_length if len(possible_lengths) > 0 and actual_length not in possible_lengths: return False return _match_national_number(national_number, number_desc, False)
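A hedged, standalone sketch of the possible-length pre-check performed above, using a tiny stand-in for phonenumbers' `PhoneNumberDesc` (the class and helper names here are illustrative, not the library's):

```python
class Desc:
    """Minimal stand-in: only the possible_length attribute matters here."""
    def __init__(self, possible_length):
        self.possible_length = possible_length

def length_ok(national_number, desc):
    # An empty possible_length list means "no length restriction";
    # otherwise the number's length must be one of the listed values.
    lengths = desc.possible_length
    return not lengths or len(national_number) in lengths

print(length_ok("5551234", Desc([7, 10])))  # True
print(length_ok("555123", Desc([7, 10])))   # False
```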
def validate_unwrap(self, value): if not isinstance(value, list): self._fail_validation_type(value, list) for value_dict in value: if not isinstance(value_dict, dict): cause = BadValueException(, value_dict, ) self._fail_validation(value, , cause=cause) k = value_dict.get() v = value_dict.get() if k is None: self._fail_validation(value, ) try: self.key_type.validate_unwrap(k) except BadValueException as bve: self._fail_validation(value, % k, cause=bve) try: self.value_type.validate_unwrap(v) except BadValueException as bve: self._fail_validation(value, % k, cause=bve) return True
Expects a list of dictionaries with ``k`` and ``v`` set to the keys and values that will be unwrapped into the output python dictionary should have
### Input: Expects a list of dictionaries with ``k`` and ``v`` set to the keys and values that will be unwrapped into the output python dictionary should have ### Response: def validate_unwrap(self, value): if not isinstance(value, list): self._fail_validation_type(value, list) for value_dict in value: if not isinstance(value_dict, dict): cause = BadValueException(, value_dict, ) self._fail_validation(value, , cause=cause) k = value_dict.get() v = value_dict.get() if k is None: self._fail_validation(value, ) try: self.key_type.validate_unwrap(k) except BadValueException as bve: self._fail_validation(value, % k, cause=bve) try: self.value_type.validate_unwrap(v) except BadValueException as bve: self._fail_validation(value, % k, cause=bve) return True
def energy(self):
    s, b, W, N = self.state, self.b, self.W, self.N
    self.E = - sum(s * b) - sum([s[i] * s[j] * W[i, j]
                                 for (i, j) in product(range(N), range(N))
                                 if i < j])
    self.low_energies[-1] = self.E
    self.low_energies.sort()
    self.high_energies[-1] = self.E
    self.high_energies.sort()
    self.high_energies = self.high_energies[::-1]
    return self.E
Compute the global energy for the current joint state of all nodes:

    E = -\sum_i s_i b_i - \sum_{i<j} s_i s_j w_{ij}

i.e. - sum(s[i] * b[i]) - sum([s[i] * s[j] * W[i, j] for (i, j) in product(range(N), range(N)) if i < j])
### Input: Compute the global energy for the current joint state of all nodes: E = -\sum_i s_i b_i - \sum_{i<j} s_i s_j w_{ij}, i.e. - sum(s[i] * b[i]) - sum([s[i] * s[j] * W[i, j] for (i, j) in product(range(N), range(N)) if i < j]) ### Response: def energy(self):
    s, b, W, N = self.state, self.b, self.W, self.N
    self.E = - sum(s * b) - sum([s[i] * s[j] * W[i, j]
                                 for (i, j) in product(range(N), range(N))
                                 if i < j])
    self.low_energies[-1] = self.E
    self.low_energies.sort()
    self.high_energies[-1] = self.E
    self.high_energies.sort()
    self.high_energies = self.high_energies[::-1]
    return self.E
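A tiny standalone check of the pairwise energy sum above, using plain lists instead of the class's state (the numeric values are made up):

```python
from itertools import product

def energy(state, b, W):
    # E = -sum_i s_i*b_i - sum_{i<j} s_i*s_j*w_ij
    N = len(state)
    field = -sum(s * bi for s, bi in zip(state, b))
    pairs = -sum(state[i] * state[j] * W[i][j]
                 for (i, j) in product(range(N), range(N)) if i < j)
    return field + pairs

s = [1, -1, 1]
b = [0.5, 0.0, 0.5]
W = [[0, 1, 0],
     [1, 0, -1],
     [0, -1, 0]]
print(energy(s, b, W))  # -1.0
```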
def shadow_calc(data): up_shadow = abs(data.high - (max(data.open, data.close))) down_shadow = abs(data.low - (min(data.open, data.close))) entity = abs(data.open - data.close) towards = True if data.open < data.close else False print( * 15) print(.format(up_shadow)) print(.format(down_shadow)) print(.format(entity)) print(.format(towards)) return up_shadow, down_shadow, entity, data.date, data.code
计算上下影线 Arguments: data {DataStruct.slice} -- 输入的是一个行情切片 Returns: up_shadow {float} -- 上影线 down_shdow {float} -- 下影线 entity {float} -- 实体部分 date {str} -- 时间 code {str} -- 代码
### Input: 计算上下影线 Arguments: data {DataStruct.slice} -- 输入的是一个行情切片 Returns: up_shadow {float} -- 上影线 down_shdow {float} -- 下影线 entity {float} -- 实体部分 date {str} -- 时间 code {str} -- 代码 ### Response: def shadow_calc(data): up_shadow = abs(data.high - (max(data.open, data.close))) down_shadow = abs(data.low - (min(data.open, data.close))) entity = abs(data.open - data.close) towards = True if data.open < data.close else False print( * 15) print(.format(up_shadow)) print(.format(down_shadow)) print(.format(entity)) print(.format(towards)) return up_shadow, down_shadow, entity, data.date, data.code
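The shadow arithmetic in the row above (whose print strings were stripped by extraction) can be sketched standalone; the `Bar` stand-in and the sample prices are made up:

```python
from collections import namedtuple

# Stand-in for a single OHLC bar slice.
Bar = namedtuple("Bar", "open close high low")

def shadows(bar):
    up_shadow = abs(bar.high - max(bar.open, bar.close))   # upper shadow
    down_shadow = abs(bar.low - min(bar.open, bar.close))  # lower shadow
    entity = abs(bar.open - bar.close)                     # candle body
    towards = bar.open < bar.close                         # bullish bar?
    return up_shadow, down_shadow, entity, towards

bar = Bar(open=10.0, close=10.5, high=11.5, low=9.75)
print(shadows(bar))  # (1.0, 0.25, 0.5, True)
```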
def get_or_create_stream(self, stream_id, try_create=True): stream_id = get_stream_id(stream_id) if stream_id in self.streams: logging.debug("found {}".format(stream_id)) return self.streams[stream_id] elif try_create: logging.debug("creating {}".format(stream_id)) return self.create_stream(stream_id=stream_id)
Helper function to get a stream or create one if it's not already defined :param stream_id: The stream id :param try_create: Whether to try to create the stream if not found :return: The stream object
### Input: Helper function to get a stream or create one if it's not already defined :param stream_id: The stream id :param try_create: Whether to try to create the stream if not found :return: The stream object ### Response: def get_or_create_stream(self, stream_id, try_create=True): stream_id = get_stream_id(stream_id) if stream_id in self.streams: logging.debug("found {}".format(stream_id)) return self.streams[stream_id] elif try_create: logging.debug("creating {}".format(stream_id)) return self.create_stream(stream_id=stream_id)
def check_type(o, acceptable_types, may_be_none=True): if not isinstance(acceptable_types, tuple): acceptable_types = (acceptable_types,) if may_be_none and o is None: pass elif isinstance(o, acceptable_types): pass else: error_message = ( "We were expecting to receive an instance of one of the following " "types: {types}{none}; but instead we received {o} which is a " "{o_type}.".format( types=", ".join([repr(t.__name__) for t in acceptable_types]), none="or " if may_be_none else "", o=o, o_type=repr(type(o).__name__) ) ) raise TypeError(error_message)
Object is an instance of one of the acceptable types or None. Args: o: The object to be inspected. acceptable_types: A type or tuple of acceptable types. may_be_none(bool): Whether or not the object may be None. Raises: TypeError: If the object is None and may_be_none=False, or if the object is not an instance of one of the acceptable types.
### Input: Object is an instance of one of the acceptable types or None. Args: o: The object to be inspected. acceptable_types: A type or tuple of acceptable types. may_be_none(bool): Whether or not the object may be None. Raises: TypeError: If the object is None and may_be_none=False, or if the object is not an instance of one of the acceptable types. ### Response: def check_type(o, acceptable_types, may_be_none=True): if not isinstance(acceptable_types, tuple): acceptable_types = (acceptable_types,) if may_be_none and o is None: pass elif isinstance(o, acceptable_types): pass else: error_message = ( "We were expecting to receive an instance of one of the following " "types: {types}{none}; but instead we received {o} which is a " "{o_type}.".format( types=", ".join([repr(t.__name__) for t in acceptable_types]), none="or " if may_be_none else "", o=o, o_type=repr(type(o).__name__) ) ) raise TypeError(error_message)
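`check_type` above is self-contained, so a quick usage run is easy; the function is restated here (with a simplified error message) so the snippet executes on its own:

```python
def check_type(o, acceptable_types, may_be_none=True):
    # Simplified restatement of the row above: pass silently or raise.
    if not isinstance(acceptable_types, tuple):
        acceptable_types = (acceptable_types,)
    if may_be_none and o is None:
        return
    if not isinstance(o, acceptable_types):
        raise TypeError(
            "Expected one of %s, got %r of type %s"
            % ([t.__name__ for t in acceptable_types], o, type(o).__name__))

check_type(3, int)              # ok
check_type(None, (str, bytes))  # ok: None allowed by default
try:
    check_type("x", int)
except TypeError as e:
    print("rejected:", e)
```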
def get_assembly_mapping_data(self, source_assembly, target_assembly): return self._load_assembly_mapping_data( self._get_path_assembly_mapping_data(source_assembly, target_assembly) )
Get assembly mapping data. Parameters ---------- source_assembly : {'NCBI36', 'GRCh37', 'GRCh38'} assembly to remap from target_assembly : {'NCBI36', 'GRCh37', 'GRCh38'} assembly to remap to Returns ------- dict dict of json assembly mapping data if loading was successful, else None
### Input: Get assembly mapping data. Parameters ---------- source_assembly : {'NCBI36', 'GRCh37', 'GRCh38'} assembly to remap from target_assembly : {'NCBI36', 'GRCh37', 'GRCh38'} assembly to remap to Returns ------- dict dict of json assembly mapping data if loading was successful, else None ### Response: def get_assembly_mapping_data(self, source_assembly, target_assembly): return self._load_assembly_mapping_data( self._get_path_assembly_mapping_data(source_assembly, target_assembly) )
def _language_exclusions(stem: LanguageStemRange, exclusions: List[ShExDocParser.LanguageExclusionContext]) -> None: for excl in exclusions: excl_langtag = LANGTAG(excl.LANGTAG().getText()[1:]) stem.exclusions.append(LanguageStem(excl_langtag) if excl.STEM_MARK() else excl_langtag)
languageExclusion = '-' LANGTAG STEM_MARK?
### Input: languageExclusion = '-' LANGTAG STEM_MARK? ### Response: def _language_exclusions(stem: LanguageStemRange, exclusions: List[ShExDocParser.LanguageExclusionContext]) -> None: for excl in exclusions: excl_langtag = LANGTAG(excl.LANGTAG().getText()[1:]) stem.exclusions.append(LanguageStem(excl_langtag) if excl.STEM_MARK() else excl_langtag)
def _create_regexp_filter(regex):
    compiled_regex = re.compile(regex)
    def filter_fn(value):
        assert isinstance(value, str), (
            'Expected a string, got %s: %s' % (type(value), value))
        return re.search(compiled_regex, value) is not None
    return filter_fn
Returns a boolean function that filters strings based on a regular exp. Args: regex: A string describing the regexp to use. Returns: A function taking a string and returns True if any of its substrings matches regex.
### Input: Returns a boolean function that filters strings based on a regular exp. Args: regex: A string describing the regexp to use. Returns: A function taking a string and returns True if any of its substrings matches regex. ### Response: def _create_regexp_filter(regex):
    compiled_regex = re.compile(regex)
    def filter_fn(value):
        assert isinstance(value, str), (
            'Expected a string, got %s: %s' % (type(value), value))
        return re.search(compiled_regex, value) is not None
    return filter_fn