def rbac_policy_update(request, policy_id, **kwargs): body = {'rbac_policy': kwargs} rbac_policy = neutronclient(request).update_rbac_policy( policy_id, body=body).get('rbac_policy') return RBACPolicy(rbac_policy)
Update an RBAC policy. :param request: request context :param policy_id: target policy id :param target_tenant: target tenant of the policy :return: RBACPolicy object
def query_hek(time, time_window=1): hek_client = hek.HEKClient() start_time = time - timedelta(hours=time_window) end_time = time + timedelta(hours=time_window) responses = hek_client.query(hek.attrs.Time(start_time, end_time)) return responses
Requests HEK responses for a given time. :param time: datetime object :param time_window: how far in hours on either side of the input time to look for results :return: HEK response list
def style_from_dict(style_dict, include_defaults=True): assert isinstance(style_dict, Mapping) if include_defaults: s2 = {} s2.update(DEFAULT_STYLE_EXTENSIONS) s2.update(style_dict) style_dict = s2 token_to_attrs = {} for ttype, styledef in sorted(s...
Create a ``Style`` instance from a dictionary or other mapping. The dictionary is equivalent to the ``Style.styles`` dictionary from pygments, with a few additions: it supports 'reverse' and 'blink'. Usage:: style_from_dict({ Token: '#ff0000 bold underline', Token.Title: '...
def merge(self, target, source, target_comment=None, source_comment=None): return TicketMergeRequest(self).post(target, source, target_comment=target_comment, source_comment=source_comment)
Merge the ticket(s) or ticket ID(s) in source into the target ticket. :param target: ticket id or object to merge tickets into :param source: ticket id, object or list of tickets or ids to merge into target :param source_comment: optional comment for the source ticket(s) :param target_c...
def _recursive_round(self, value, precision): if hasattr(value, '__iter__'): return tuple(self._recursive_round(v, precision) for v in value) return round(value, precision)
Round all numbers within an array or nested arrays. value: number or nested array of numbers precision: integer value of the number of decimals to keep
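A standalone, runnable sketch of this recursive-rounding pattern (the module-level form and function name are illustrative, not the original method):

```python
def recursive_round(value, precision):
    """Round a number, or every number inside (possibly nested) iterables."""
    # Like the original, anything iterable is treated as a sequence, so this
    # is intended for numbers and nested tuples/lists of numbers.
    if hasattr(value, "__iter__"):
        return tuple(recursive_round(v, precision) for v in value)
    return round(value, precision)
```

Nested structures come back as tuples, mirroring the original's behavior.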
def Lomb_Scargle(data, precision, min_period, max_period, period_jobs=1): time, mags, *err = data.T scaled_mags = (mags-mags.mean())/mags.std() minf, maxf = 2*np.pi/max_period, 2*np.pi/min_period freqs = np.arange(minf, maxf, precision) pgram = lombscargle(time, scaled_mags, freqs) return ...
Returns the period of *data* according to the `Lomb-Scargle periodogram <https://en.wikipedia.org/wiki/Least-squares_spectral_analysis#The_Lomb.E2.80.93Scargle_periodogram>`_. **Parameters** data : array-like, shape = [n_samples, 2] or [n_samples, 3] Array containing columns *time*, *mag*, and (op...
def to_json(self): result = super(FieldsResource, self).to_json() result['fields'] = self.fields_with_locales() return result
Returns the JSON Representation of the resource.
def _approxaA(self,R,vR,vT,z,vz,phi,interp=True,cindx=None): if isinstance(R,(int,float,numpy.float32,numpy.float64)): R= numpy.array([R]) vR= numpy.array([vR]) vT= numpy.array([vT]) z= numpy.array([z]) vz= numpy.array([vz]) phi= ...
NAME: _approxaA PURPOSE: return action-angle coordinates for a point based on the linear approximation around the stream track INPUT: R,vR,vT,z,vz,phi - phase-space coordinates of the given point interp= (True), if True, use the interpolated track ...
def validate_regex(ctx, param, value): if not value: return None try: re.compile(value) except re.error: raise click.BadParameter('{} is not a valid regular expression'.format(value)) return value
Validate that a provided regex compiles.
def runs(self, path="", filters=None, order="-created_at", per_page=None): filters = filters or {} username, project, run = self._parse_path(path) key = path + str(filters) + str(order) if not self._runs.get(key): self._runs[key] = Runs(self.client, username, project, ...
Return a set of runs from a project that match the filters provided. You can filter by config.*, summary.*, state, username, createdAt, etc. The filters use the same query language as MongoDB: https://docs.mongodb.com/manual/reference/operator/query Order can be created_at, heartbeat_...
def mergebam(args): p = OptionParser(mergebam.__doc__) p.set_cpus() opts, args = p.parse_args(args) if len(args) not in (2, 3): sys.exit(not p.print_help()) if len(args) == 2: idir1, outdir = args dir1 = [idir1] if idir1.endswith(".bam") else iglob(idir1, "*.bam") ...
%prog mergebam dir1 homo_outdir or %prog mergebam dir1 dir2/20.bam het_outdir Merge sets of BAMs to make diploid. Two modes: - Homozygous mode: pair-up the bams in the two folders and merge - Heterozygous mode: pair the bams in first folder with a particular bam
def detect(self, stream, threshold, threshold_type, trig_int, plotvar, daylong=False, parallel_process=True, xcorr_func=None, concurrency=None, cores=None, ignore_length=False, group_size=None, overlap="calculate", debug=0, full_peaks=False, save_progress=Fals...
Detect using a Tribe of templates within a continuous stream. :type stream: `obspy.core.stream.Stream` :param stream: Continuous data to detect within using the Template. :type threshold: float :param threshold: Threshold level, if using `threshold_type='MAD'` then this will...
def find_executable(executable, path=None): if sys.platform != 'win32': return distutils.spawn.find_executable(executable, path) if path is None: path = os.environ['PATH'] paths = path.split(os.pathsep) extensions = os.environ.get('PATHEXT', '.exe').split(os.pathsep) base, ext = os.path.splitext(executab...
As distutils.spawn.find_executable, but on Windows, look up every extension declared in PATHEXT instead of just `.exe`
def print(root): def print_before(previous=0, defined=None, is_last=False): defined = defined or {} ret = '' if previous != 0: for i in range(previous - 1): if i in defined:
Transform the parsed tree to the string. Expects tree like structure. You can see example output below. (R)SplitRules26 |--(N)Iterate | `--(R)SplitRules30 | `--(N)Symb | `--(R)SplitRules4 | `--(T)e `--(N)Concat `--(R)Split...
def fullname(self): prefix = "" if self.parent: if self.parent.fullname: prefix = self.parent.fullname + ":" else: return "" return prefix + self.name
includes the full path with parent names
def to_bytes(s, encoding=None, errors=None): if not isinstance(s, bytes): return ('%s' % s).encode(encoding or 'utf-8', errors or 'strict') elif not encoding or encoding == 'utf-8': return s else: d = s.decode('utf-8') return d.encode(encoding, errors or 'strict')
Convert *s* into bytes
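A self-contained sketch of such a `to_bytes` helper, assuming utf-8 and 'strict' as the fallback defaults (the stripped literals above make the original defaults unrecoverable):

```python
def to_bytes(s, encoding=None, errors=None):
    """Convert *s* into bytes, re-encoding existing bytes when needed."""
    encoding = encoding or "utf-8"   # assumed default
    errors = errors or "strict"      # assumed default
    if not isinstance(s, bytes):
        return ("%s" % s).encode(encoding, errors)
    if encoding == "utf-8":
        return s
    # Re-encode bytes (assumed to hold utf-8 text) into the requested encoding.
    return s.decode("utf-8").encode(encoding, errors)
```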
def igrf12syn(isv, date, itype, alt, lat, elong): p, q, cl, sl = [0.] * 105, [0.] * 105, [0.] * 13, [0.] * 13 x, y, z = 0., 0., 0. if date < 1900.0 or date > 2025.0: f = 1.0 print( + str(date)) print() print() return x, y, z, f elif date >= 2015.0: ...
This is a synthesis routine for the 12th generation IGRF as agreed in December 2014 by IAGA Working Group V-MOD. It is valid 1900.0 to 2020.0 inclusive. Values for dates from 1945.0 to 2010.0 inclusive are definitive, otherwise they are non-definitive. INPUT isv = 0 if main-field values are req...
def format(self): subtag = self.data['subtag'] if self.data['type'] == 'region': return subtag.upper() if self.data['type'] == 'script': return subtag.capitalize() return subtag
Get the subtag code conventional format according to RFC 5646 section 2.1.1. :return: string -- subtag code conventional format.
def pull_byte(self, stack_pointer): addr = stack_pointer.value byte = self.memory.read_byte(addr) stack_pointer.increment(1) return byte
Pull a byte from the stack.
def wallet_frontiers(self, wallet): wallet = self._process_value(wallet, 'wallet') payload = {"wallet": wallet} resp = self.call('wallet_frontiers', payload) return resp.get('frontiers') or {}
Returns a list of pairs of account and block hash representing the head block starting for accounts from **wallet** :param wallet: Wallet to return frontiers for :type wallet: str :raises: :py:exc:`nano.rpc.RPCException` >>> rpc.wallet_frontiers( ... wallet="000D1B...
def places_within_radius( self, place=None, latitude=None, longitude=None, radius=0, **kwargs ): kwargs[] = True kwargs[] = True kwargs[] = False kwargs.setdefault(, ) unit = kwargs.setdefault(, ) if place is not None: response =...
Return descriptions of the places stored in the collection that are within the circle specified by the given location and radius. A list of dicts will be returned. The center of the circle can be specified by the identifier of another place in the collection with the *place* keyword arg...
def make_content_range(self, length): rng = self.range_for_length(length) if rng is not None: return ContentRange(self.units, rng[0], rng[1], length)
Creates a :class:`~werkzeug.datastructures.ContentRange` object from the current range and given content length.
def _increment_stage(self): try: if self._cur_stage < self._stage_count: self._cur_stage += 1 else: self._completed_flag.set() except Exception as ex: raise EnTKError(text=ex)
Purpose: Increment stage pointer. Also check if Pipeline has completed.
def lock_pidfile_or_die(pidfile): pid = os.getpid() try: remove_if_stale_pidfile(pidfile) pid_write_file = pidfile + '.' + str(pid) fpid = open(pid_write_file, 'w') try: fpid.write("%s\n" % pid) finally: fpid.close() if not take_file_lock(pi...
@pidfile: must be a writable path Exceptions are logged. Returns the PID.
def debug_print_strip_msg(self, i, line): if self.debug_level == 2: print(" Stripping Line %d: %s" % (i + 1, line.rstrip())) elif self.debug_level > 2: print(" Stripping Line %d:" % (i + 1)) hexdump(line)
Debug print indicating that an empty line is being skipped :param i: The line number of the line that is being currently parsed :param line: the parsed line :return: None
def get_previous_price_list(self, currency, start_date, end_date): start = start_date.strftime() end = end_date.strftime() url = ( .format( start, end, currency ) ) response = requests.get(url) if respons...
Get the list of prices between two dates.
def get_devices(self, condition=None, page_size=1000): condition = validate_type(condition, type(None), Expression, *six.string_types) page_size = validate_type(page_size, *six.integer_types) params = {"embed": "true"} if condition is not None: params["condition"] ...
Iterates over each :class:`Device` for this device cloud account Examples:: # get a list of all devices all_devices = list(dc.devicecore.get_devices()) # build a mapping of devices by their vendor id using a # dict comprehension devices = dc.devicec...
def download_sample_and_align(job, sample, inputs, ids): uuid, urls = sample r1_url, r2_url = urls if len(urls) == 2 else (urls[0], None) job.fileStore.logToMaster(.format(uuid, r1_url, r2_url)) ids[] = job.addChildJobFn(download_url_job, r1_url, s3_key_path=inputs.ssec, disk=inputs.file_size)...
Downloads the sample and runs BWA-kit :param JobFunctionWrappingJob job: Passed by Toil automatically :param tuple(str, list) sample: UUID and URLS for sample :param Namespace inputs: Contains input arguments :param dict ids: FileStore IDs for shared inputs
def features_properties_null_remove(obj): features = obj['features'] for i in tqdm(range(len(features))): if 'properties' in features[i]: properties = features[i]['properties'] features[i]['properties'] = {p: properties[p] for p in properties if properties[p] is not None} return obj
Remove any properties of features in the collection that have entries mapping to a null (i.e., None) value
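The same null-stripping idea as a minimal, dependency-free function (no tqdm; names are illustrative):

```python
def drop_null_properties(collection):
    """Remove properties that map to None from each feature, in place."""
    for feature in collection.get("features", []):
        if "properties" in feature:
            feature["properties"] = {
                k: v for k, v in feature["properties"].items() if v is not None
            }
    return collection

# Hypothetical GeoJSON-like input for demonstration.
fc = {"features": [{"type": "Feature", "properties": {"name": "a", "height": None}}]}
drop_null_properties(fc)
```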
def merge(self, keypath, value, op=): negated = False keypath = keypath[:] if keypath[0] == : negated = self.get_environment_variable(, pop=False, default=False) if negated: keypath[0] = "distractor" if keypath not in self: ...
First gets the cell at BeliefState's keypath, or creates a new cell from the first target that has that keypath (This could mess up if the member its copying from has a different Cell or domain for that keypath.) Second, this merges that cell with the value
def excel_to_sql(excel_file_path, engine, read_excel_kwargs=None, to_generic_type_kwargs=None, to_sql_kwargs=None): if read_excel_kwargs is None: read_excel_kwargs = dict() if to_sql_kwargs is None: to_sql_kwargs = dict() if to_generi...
Create a database from excel. :param read_excel_kwargs: dict, arguments for ``pandas.read_excel`` method. example: ``{"employee": {"skiprows": 10}, "department": {}}`` :param to_sql_kwargs: dict, arguments for ``pandas.DataFrame.to_sql`` method. limitation: 1. If a integer column has Non...
def compute_alignments(self, prev_state, precomputed_values, mask=None): WaSp = T.dot(prev_state, self.Wa) UaH = precomputed_values if UaH.ndim == 2: preact = WaSp[:, None, :] + UaH[None, :, :] else: preact = WaSp[:, None, :] + UaH act =...
Compute the alignment weights based on the previous state.
def solve(self, lam): s = weighted_graphtf(self.nnodes, self.y, self.weights, lam, self.Dk.shape[0], self.Dk.shape[1], self.Dk.nnz, self.Dk.row.astype(), self.Dk.col.astype(), self.Dk.data.astype(), self.maxsteps, se...
Solves the GFL for a fixed value of lambda.
def period(self): return timedelta(seconds=2 * np.pi * np.sqrt(self.kep.a ** 3 / self.mu))
Period of the orbit as a timedelta
def quandl_bundle(environ, asset_db_writer, minute_bar_writer, daily_bar_writer, adjustment_writer, calendar, start_session, end_session, cache, show_progress...
quandl_bundle builds a daily dataset using Quandl's WIKI Prices dataset. For more information on Quandl's API and how to obtain an API key, please visit https://docs.quandl.com/docs#section-authentication
def state(self): return Emitter(weakref.proxy(self.lib), self.lib.jit_new_state())
Returns a new JIT state. You have to clean up by calling .destroy() afterwards.
def get_share_url_with_dirname(uk, shareid, dirname): return .join([ const.PAN_URL, , , shareid, , uk, , encoder.encode_uri_component(dirname), , ])
Get the share URL for a shared directory.
def getEAnnotation(self, source): for annotation in self.eAnnotations: if annotation.source == source: return annotation return None
Return the annotation with a matching source attribute.
def _read_response(self, response): self.name = response[] self.description = response[] self.layoutName = response[] self.archiveBrowsingEnabled = response[]
JSON Documentation: https://www.jfrog.com/confluence/display/RTF/Repository+Configuration+JSON
def _analyze_file(self, f): f.seek(0) if self.CHECK_BOM: encoding = self.has_bom(f) f.seek(0) else: util.warn_deprecated( "'CHECK_BOM' attribute is deprecated. " "Please override 'has_bom` function to control or avoid BO...
Analyze the file.
def origin_east_asia(origin): return origin_china(origin) or origin_japan(origin) \ or origin_mongolia(origin) or origin_south_korea(origin) \ or origin_taiwan(origin)
Returns whether the origin is located in East Asia. Holds true for the following countries: * China * Japan * Mongolia * South Korea * Taiwan `origin` The origin to check.
def normalize(self) -> 'State': tensor = self.tensor / bk.ccast(bk.sqrt(self.norm())) return State(tensor, self.qubits, self._memory)
Normalize the state
def _load_poses(self): pose_file = os.path.join(self.pose_path, self.sequence + '.txt') poses = [] try: with open(pose_file, 'r') as f: lines = f.readlines() if self.frames is not None: lines = [lines[i] for i in self.fram...
Load ground truth poses (T_w_cam0) from file.
def get_creation_date( self, bucket: str, key: str, ) -> datetime: return self.get_last_modified_date(bucket, key)
Retrieves the creation date for a given key in a given bucket. :param bucket: the bucket the object resides in. :param key: the key of the object for which the creation date is being retrieved. :return: the creation date
def _pop_comment_block(self, statements, header_re): res = [] comments = [] match = None st_iter = iter(statements) for st in st_iter: if isinstance(st, ast.Comment): match = header_re.match(st.text) if match: ...
Look for a series of comments that start with one that matches the regex. If the first comment is found, all subsequent comments are popped from statements, concatenated and dedented and returned.
def covlen(args): import numpy as np import pandas as pd import seaborn as sns from jcvi.formats.base import DictFile p = OptionParser(covlen.__doc__) p.add_option("--maxsize", default=1000000, type="int", help="Max contig size") p.add_option("--maxcov", default=100, type="int", help="...
%prog covlen covfile fastafile Plot coverage vs length. `covfile` is two-column listing contig id and depth of coverage.
def get_contacts(self): all_contacts = self.wapi_functions.getAllContacts() return [Contact(contact, self) for contact in all_contacts]
Fetches list of all contacts This will return chats with people from the address book only Use get_all_chats for all chats :return: List of contacts :rtype: list[Contact]
def interp(self, new_timestamps, interpolation_mode=0): if not len(self.samples) or not len(new_timestamps): return Signal( self.samples.copy(), self.timestamps.copy(), self.unit, self.name, comment=self.comment...
returns a new *Signal* interpolated using the *new_timestamps* Parameters ---------- new_timestamps : np.array timestamps used for interpolation interpolation_mode : int interpolation mode for integer signals; default 0 * 0 - repeat previous samp...
def set_sequence_from_str(self, sequence): self._qsequences = [QKeySequence(s) for s in sequence.split()] self.update_warning()
This is a convenience method to set the new QKeySequence of the shortcut editor from a string.
def restore(self): sys = set(self._sys_modules.keys()) for mod_name in sys.difference(self._saved_modules): del self._sys_modules[mod_name]
Unloads all modules that weren't loaded when save_modules was called.
def unload_extension(self, module_str): if module_str in sys.modules: mod = sys.modules[module_str] self._call_unload_ipython_extension(mod)
Unload an IPython extension by its module name. This function looks up the extension's name in ``sys.modules`` and simply calls ``mod.unload_ipython_extension(self)``.
def list_data_links(self, instance): response = self.get_proto(path='/links/' + instance) message = rest_pb2.ListLinkInfoResponse() message.ParseFromString(response.content) links = getattr(message, 'link') return iter([Link(link) for link in links])
Lists the data links visible to this client. Data links are returned in random order. :param str instance: A Yamcs instance name. :rtype: ~collections.Iterable[.Link]
def set_type_by_schema(self, schema_obj, schema_type): schema_id = self._get_object_schema_id(schema_obj, schema_type) if not self.storage.contains(schema_id): schema = self.storage.create_schema( schema_obj, self.name, schema_type, root=self.root) asser...
Set property type by schema object. The schema will be created if it doesn't exist in the collection. :param dict schema_obj: raw schema object :param str schema_type:
def with_metaclass(meta, *bases): class metaclass(meta): __call__ = type.__call__ __init__ = type.__init__ def __new__(cls, name, this_bases, d): if this_bases is None: return type.__new__(cls, name, (), d) return meta(name, bases, d) return m...
Create a base class with a metaclass. For example, if you have the metaclass >>> class Meta(type): ... pass Use this as the metaclass by doing >>> from symengine.compatibility import with_metaclass >>> class MyClass(with_metaclass(Meta, object)): ... pass This is equivalent ...
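with_metaclass can be exercised end-to-end; this runnable sketch reproduces the six-style temporary-metaclass trick described above:

```python
def with_metaclass(meta, *bases):
    """Create a base class whose subclasses are built by metaclass `meta`."""
    class metaclass(meta):
        __call__ = type.__call__
        __init__ = type.__init__
        def __new__(cls, name, this_bases, d):
            if this_bases is None:
                # Building the temporary base itself: make a plain class.
                return type.__new__(cls, name, (), d)
            # Building the real subclass: hand off to the target metaclass.
            return meta(name, bases, d)
    return metaclass("temporary_class", None, {})

class Meta(type):
    pass

class MyClass(with_metaclass(Meta, object)):
    pass
```

The temporary class is discarded: MyClass ends up constructed directly by Meta with the intended bases.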
def _get_text(self): boxes = self.boxes txt = [] for line in boxes: txt_line = u"" for box in line.word_boxes: txt_line += u" " + box.content txt.append(txt_line) return txt
Get the text corresponding to this page
def sense_ttb(self, target): return super(Device, self).sense_ttb(target, did=b'\x01')
Activate the RF field and probe for a Type B Target. The RC-S956 can discover Type B Targets (Type 4B Tag) at 106 kbps. For a Type 4B Tag the firmware automatically sends an ATTRIB command that configures the use of DID and 64 byte maximum frame size. The driver reverts this configurati...
def _process_messages(self, messages): if self._shuttingdown: return if not messages: proc_block_size = sys.maxsize if self.auto_commit_every_n: proc_block_size = self.auto_commit_every_n ...
Send messages to the `processor` callback to be processed In the case we have a commit policy, we send messages to the processor in blocks no bigger than auto_commit_every_n (if set). Otherwise, we send the entire message block to be processed.
def json(self, dict=False, **kwargs): try: graph = self.graph except AttributeError: raise NotImplementedError() return _netjson_networkgraph(self.protocol, self.version, self.revision, ...
Outputs NetJSON format
def derive(self, modifier): def forward(value): changed_value = modifier(value) derived.fire(changed_value) derived = Event() self.add_callback(forward) return derived
Returns a new :class:`Event` instance that will fire when this event fires. The value passed to the callbacks to the new event is the return value of the given `modifier` function which is passed the original value.
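A minimal Event class (illustrative, not the library's full API) shows how `derive` chains modified values:

```python
class Event:
    """Tiny observer: callbacks receive the fired value."""
    def __init__(self):
        self._callbacks = []
    def add_callback(self, callback):
        self._callbacks.append(callback)
    def fire(self, value):
        for callback in self._callbacks:
            callback(value)
    def derive(self, modifier):
        # The derived event re-fires with the modifier applied.
        derived = Event()
        self.add_callback(lambda value: derived.fire(modifier(value)))
        return derived

seen = []
source = Event()
source.derive(lambda v: v * 2).add_callback(seen.append)
source.fire(3)  # the derived event sees 6
```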
def verify_client_id(self): from .models import Client from .exceptions.invalid_client import ClientDoesNotExist from .exceptions.invalid_request import ClientNotProvided if self.client_id: try: self.client = Client.objects.for_id(self.client_id) ...
Verify a provided client id against the database and set the `Client` object that is associated with it to `self.client`. TODO: Document all of the thrown exceptions.
def contains(self, key, counter_id): with self._lock: return counter_id in self._metadata[key]
Return whether a counter_id is present for a given instance key. If the key is not in the cache, raises a KeyError.
def get_label(self, callb=None): if self.label is None: mypartial = partial(self.resp_set_label) if callb: mycallb = lambda x, y: (mypartial(y), callb(x, y)) else: mycallb = lambda x, y: mypartial(y) response = self.req_with_resp(GetLab...
Convenience method to request the label from the device This method will check whether the value has already been retrieved from the device, if so, it will simply return it. If no, it will request the information from the device and request that callb be executed when a response is received. Th...
def pkcs7_unpad(data): if isinstance(data, str): return data[0:-ord(data[-1])] else: return data[0:-data[-1]]
Remove the padding bytes that were added at point of encryption. Implementation copied from pyaspora: https://github.com/mjnovice/pyaspora/blob/master/pyaspora/diaspora/protocol.py#L209
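Pairing the unpad with a matching pad makes the round trip testable; this sketch assumes Python 3 bytes (where indexing yields ints) and a 16-byte block:

```python
def pkcs7_pad(data, block_size=16):
    """Append PKCS#7 padding so len(data) is a multiple of block_size."""
    pad = block_size - len(data) % block_size
    return data + bytes([pad]) * pad

def pkcs7_unpad(data):
    """Strip PKCS#7 padding; the last byte encodes the pad length."""
    return data[: -data[-1]]
```

Note that a full block of padding is added when the input is already block-aligned, so unpadding is always unambiguous.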
def export(name, target=None, rev=None, user=None, username=None, password=None, force=False, overwrite=False, externals=True, trust=False, trust_failures=None): ...
Export a file or directory from an SVN repository name Address and path to the file or directory to be exported. target Name of the target directory where the checkout will put the working directory rev : None The revision number to checkout. Enable "force" if the dir...
def escape(url): if salt.utils.platform.is_windows(): return url scheme = urlparse(url).scheme if not scheme: if url.startswith('|'): return url else: return '|{0}'.format(url) elif scheme == 'salt': path, saltenv = parse(url) if path.startswith('|'):
add escape character `|` to `url`
def get_memory_map_xml(self): root = ElementTree.Element('memory-map') for r in self._context.core.memory_map: mem = ElementTree.SubElement(root, 'memory', start=hex(r.start), length=hex(r.length)) if r.is_flash: prop = ElementTree.SubElement(mem, 'property', name='blocksize') prop.text = hex(r.blocksize).rstrip("L") return MAP_XML_HEADER + ElementTree.tostring(root)
! @brief Generate GDB memory map XML.
def add_aggregated_lv_components(network, components): generators = {} loads = {} for lv_grid in network.mv_grid.lv_grids: generators.setdefault(lv_grid, {}) for gen in lv_grid.generators: generators[lv_grid].setdefault(gen.type, {}) generators[lv_gri...
Aggregates LV load and generation at LV stations. Use this function if you aim for MV calculation only. The corresponding DataFrames of `components` are extended by loads and generators representing these aggregates, respecting the technology type. Parameters ---------- network : Network The ...
def assign_taxonomy( data, min_confidence=0.80, output_fp=None, training_data_fp=None, fixrank=True, max_memory=None, tmp_dir=tempfile.gettempdir()): data = list(data) for line in app_result['StdErr']: excep = parse_rdp_exception(line) if excep is not None: ...
Assign taxonomy to each sequence in data with the RDP classifier data: open fasta file object or list of fasta lines confidence: minimum support threshold to assign taxonomy to a sequence output_fp: path to write output; if not provided, result will be returned in a dict of {seq_id:(ta...
def setup(self): self.log.info() if not os.path.exists(self.pathToWorkspace): os.makedirs(self.pathToWorkspace) if not os.path.exists(self.pathToWorkspace + "/qubits_output"): os.makedirs(self.pathToWorkspace + "/qubits_output") spectr...
*setup the workspace in the requested location* **Return:** - ``None``
def delete(self, id): lt = meta.Session.query(LayerTemplate).get(id) if lt is None: abort(404) meta.Session.delete(lt) meta.Session.commit()
DELETE /layertemplates/id: Delete an existing item.
def update(did): required_attributes = [, , , , , , ] required_metadata_base_attributes = [, , , , , , , ] required_metadata_curation_attributes = [, ] assert isinstance(request.json, dict), data = request.json if not da...
Update DDO of an existing asset --- tags: - ddo consumes: - application/json parameters: - in: body name: body required: true description: DDO of the asset. schema: type: object required: - "@context" - created...
def flatten_list(multiply_list): if isinstance(multiply_list, list): return [rv for l in multiply_list for rv in flatten_list(l)] else: return [multiply_list]
Flatten a nested list:: >>> a = [1, 2, [3, 4], [[5, 6], [7, 8]]] >>> flatten_list(a) [1, 2, 3, 4, 5, 6, 7, 8] :param multiply_list: the (possibly nested) list to flatten :return: a flat, single-level list
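A runnable version of this flattening recursion (same behavior, illustrative parameter name):

```python
def flatten_list(nested):
    """Flatten arbitrarily nested lists into a single flat list."""
    if isinstance(nested, list):
        return [item for sub in nested for item in flatten_list(sub)]
    # A non-list leaf becomes a one-element list so callers can concatenate.
    return [nested]
```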
def get_go2sectiontxt(self): go2txt = {} _get_secs = self.hdrobj.get_sections hdrgo2sectxt = {h:" ".join(_get_secs(h)) for h in self.get_hdrgos()} usrgo2hdrgo = self.get_usrgo2hdrgo() for goid, ntgo in self.go2nt.items(): hdrgo = ntgo.GO if ntgo.is_hdrgo else...
Return a dict with actual header and user GO IDs as keys and their sections as values.
def get_fields(self, field_verbose=True, value_verbose=True, fields=[], extra_fields=[], remove_fields = []): field_list = [] for field in self.__class__._meta.fields: if field.name in remove_fields: continue if fields and field.name not...
Return a list of field names and their corresponding values. field_verbose: if True, return each field's verbose_name as defined in the model; if False, return its name. value_verbose: if True, return the display value (converted to the choice's label); if False, return the raw value. fields: the fields to include. extra_fields: non-field attributes (e.g. methods) that need special handling. remove_fields: the fields to exclude.
def rowCount(self, index=QModelIndex()): if self.total_rows <= self.rows_loaded: return self.total_rows else: return self.rows_loaded
Array row number
def _archive_entry_year(self, category): year = getattr(settings, 'ARCHIVE_ENTRY_YEAR', None) if not year: n = now() try: year = Listing.objects.filter( ca...
Return ARCHIVE_ENTRY_YEAR from settings (if exists) or year of the newest object in category
def list_bookmarks(self, start_date=None, end_date=None, limit=None): query = Search( using=self.client, index=self.aggregation_alias, doc_type=self.bookmark_doc_type ).sort({'date': {'order': 'desc'}}) range_args = {} if start_date: range_args['gte'] = start_date
List the aggregation's bookmarks.
def UpdateHuntObject(self, hunt_id, start_time=None, **kwargs): hunt_obj = self.ReadHuntObject(hunt_id) delta_suffix = "_delta" for k, v in kwargs.items(): if v is None: continue if k.endswith(delta_suffix): key = k[:-len(delta_suffix)] current_value = getattr(hun...
Updates the hunt object by applying the update function.
def action_delete(self, courseid, taskid, path): path = path.strip() if not path.startswith("/"): path = "/" + path wanted_path = self.verify_path(courseid, taskid, path) if wanted_path is None: return self.show_tab_file(courseid, taskid, _("Int...
Delete a file or a directory
def _try_to_get_extension(obj): if is_path(obj): path = obj elif is_path_obj(obj): return obj.suffix[1:] elif is_file_stream(obj): try: path = get_path_from_stream(obj) except ValueError: return None elif is_ioinfo(obj): path = obj....
Try to get file extension from given path or file object. :param obj: a file, file-like object or something :return: File extension or None >>> _try_to_get_extension("a.py") 'py'
def vlm_add_input(self, psz_name, psz_input): return libvlc_vlm_add_input(self, str_to_bytes(psz_name), str_to_bytes(psz_input))
Add a media's input MRL. This will add the specified one. @param psz_name: the media to work on. @param psz_input: the input MRL. @return: 0 on success, -1 on error.
def IsPropertyInMetaIgnoreCase(classId, key): if classId in _ManagedObjectMeta: for prop in _ManagedObjectMeta[classId]: if (prop.lower() == key.lower()): return _ManagedObjectMeta[classId][prop] if classId in _MethodFactoryMeta: for prop in _MethodFactoryMeta[classId]: if (prop.lower() == key...
Returns the property meta of the provided key for the given classId. The given key is case-insensitive.
def get_hmac(self, key): h = HMAC.new(key, None, SHA256) h.update(self.iv) h.update(str(self.chunks).encode()) h.update(self.f_key) h.update(self.alpha_key) h.update(str(self.encrypted).encode()) return h.digest()
Returns the keyed HMAC for authentication of this state data. :param key: the key for the keyed hash function
def paragraph(node): text = '' if node.string_content is not None: text = node.string_content o = nodes.paragraph('', ''.join(text)) o.line = node.sourcepos[0][0] for n in MarkDown(node): o.append(n) return o
Process a paragraph, which includes all content under it
def AddMethod(obj, function, name=None): if name is None: name = function.__name__ else: function = RenameFunction(function, name) if hasattr(obj, '__class__') and obj.__class__ is not type: if sys.version_info[:2] > (3, 2): method = MethodType(function, obj...
Adds either a bound method to an instance or the function itself (or an unbound method in Python 2) to a class. If name is omitted the name of the specified function is used by default. Example:: a = A() def f(self, x, y): self.z = x + y AddMethod(f, A, "add") a.add...
def NHot(n, *xs, simplify=True): if not isinstance(n, int): raise TypeError("expected n to be an int") if not 0 <= n <= len(xs): fstr = "expected 0 <= n <= {}, got {}" raise ValueError(fstr.format(len(xs), n)) xs = [Expression.box(x).node for x in xs] num = len(xs) term...
Return an expression that means "exactly N input functions are true". If *simplify* is ``True``, return a simplified expression.
def run(self, key, value, num_alts): field_info = self.header.get_info_field_info(key) if not isinstance(value, list): return TABLE = { ".": len(value), "A": num_alts, "R": num_alts + 1, "G": binomial(num_alts + 1, 2), ...
Check value in INFO[key] of record. Currently, only checks for consistent counts are implemented. :param str key: key of INFO entry to check :param value: value to check :param int num_alts: number of alternative alleles, used for length checks
def resolve_upload_path(self, filename=None): if filename is None: return constants.UPLOAD_VOLUME return os.path.join(constants.UPLOAD_VOLUME, filename)
Resolve upload path for use with the executor. :param filename: Filename to resolve :return: Resolved filename, which can be used to access the given uploaded file in programs executed using this executor
def _compute_e2_factor(self, imt, vs30): e2 = np.zeros_like(vs30) if imt.name == "PGV": period = 1 elif imt.name == "PGA": period = 0 else: period = imt.period if period < 0.35: return e2 else: idx = v...
Compute and return e2 factor, equation 19, page 80.
def OnStartup(self): last_request = self.transaction_log.Get() if last_request: status = rdf_flows.GrrStatus( status=rdf_flows.GrrStatus.ReturnedStatus.CLIENT_KILLED, error_message="Client killed during transaction") if self.nanny_controller: nanny_st...
A handler that is called on client startup.
def get_all_triggers(bump, file_triggers): triggers = set() if file_triggers: triggers = triggers.union(detect_file_triggers(config.trigger_patterns)) if bump: _LOG.debug("trigger: %s bump requested", bump) triggers.add(bump) return triggers
Return the aggregated set of triggers that warrant a version bump.
def apply_correlation(self, sites, imt, residuals, stddev_intra=0): try: corma = self.cache[imt] except KeyError: corma = self.get_lower_triangle_correlation_matrix( sites.complete, imt) self.cache[...
Apply correlation to randomly sampled residuals. :param sites: :class:`~openquake.hazardlib.site.SiteCollection` residuals were sampled for. :param imt: Intensity measure type object, see :mod:`openquake.hazardlib.imt`. :param residuals: 2d numpy ...
def formatTime(self, record, datefmt=None): if datefmt: s = datetime.datetime.now().strftime(datefmt) else: t = datetime.datetime.now().strftime(self.default_time_format) s = self.default_msec_format % (t, record.msecs) return s
Overrides formatTime method to use datetime module instead of time module to display time in microseconds. Time module by default does not resolve time to microseconds.
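The same override works on a plain logging.Formatter; this sketch uses the record's own timestamp and an assumed default format string (the original's default_time_format is not shown above):

```python
import datetime
import logging

class MicrosecondFormatter(logging.Formatter):
    """formatTime via datetime so %f (microseconds) is available."""
    def formatTime(self, record, datefmt=None):
        dt = datetime.datetime.fromtimestamp(record.created)
        return dt.strftime(datefmt or "%Y-%m-%d %H:%M:%S.%f")  # assumed default

record = logging.LogRecord("demo", logging.INFO, "demo.py", 1, "msg", None, None)
stamp = MicrosecondFormatter().formatTime(record)
```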
def _enrich_link(self, glossary): try: Model = apps.get_model(*glossary['link']['model'].split('.')) obj = Model.objects.get(pk=glossary['link']['pk']) glossary['link'].update(identifier=str(obj)) except (KeyError, ObjectDoesNotExist): pass
Enrich the dict glossary['link'] with an identifier onto the model
def datapoint_indices_for_tensor(self, tensor_index): if tensor_index >= self._num_tensors: raise ValueError('Tensor index %d is greater than the number of tensors %d' % (tensor_index, self._num_tensors)) return self._file_num_to_indices[tensor_index]
Returns the indices for all datapoints in the given tensor.
def _python_type(self, key, value): try: field_type = self._sp_cols[key]['type'] if field_type in ['Number', 'Currency']: return float(value) elif field_type == 'DateTime': value = self.date_format.search(value).group(0) ...
Returns proper type from the schema
def readGif(filename, asNumpy=True): if PIL is None: raise RuntimeError("Need PIL to read animated gif files.") if np is None: raise RuntimeError("Need Numpy to read animated gif files.") if not os.path.isfile(filename): raise IOError('File not found: ' + str(filename)) ...
readGif(filename, asNumpy=True) Read images from an animated GIF file. Returns a list of numpy arrays, or, if asNumpy is false, a list of PIL images.
def lookup_thread_id(self): query_string = % ( self.topic, self.owner, self.realm) cache_key = (self.owner, self.realm, self.topic) result = self.lookup_cache_key(cache_key) if result is not None: my_req = self.raw_pull(result) if my_req.sta...
Lookup thread id as required by CommentThread.lookup_thread_id. This implementation will query GitHub with the required parameters to try and find the topic for the owner, realm, topic, etc., specified in init.
def _concrete_instance(self, instance_doc): if not isinstance(instance_doc, dict): return None try: service = instance_doc['service'] cls = self._service_class_map[service] return cls(instance_document=instance_doc, instances=self) ...
Concretize an instance document. :param dict instance_doc: A document describing an instance. Should come from the API. :returns: A subclass of :py:class:`bases.BaseInstance`, or None. :rtype: :py:class:`bases.BaseInstance`
def __write(self, s): self.buf += s while len(self.buf) > self.bufsize: self.fileobj.write(self.buf[:self.bufsize]) self.buf = self.buf[self.bufsize:]
Write string s to the stream if a whole new block is ready to be written.
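The buffering logic above can be exercised against io.BytesIO; this standalone sketch keeps the same flush-whole-blocks-only behavior (class and parameter names are illustrative):

```python
import io

class BlockWriter:
    """Accumulate writes; flush to fileobj only in whole bufsize blocks."""
    def __init__(self, fileobj, bufsize=4):
        self.fileobj = fileobj
        self.bufsize = bufsize
        self.buf = b""
    def write(self, s):
        self.buf += s
        # Strictly greater-than, as in the original: a buffer of exactly
        # bufsize bytes is held back until more data arrives.
        while len(self.buf) > self.bufsize:
            self.fileobj.write(self.buf[: self.bufsize])
            self.buf = self.buf[self.bufsize :]

out = io.BytesIO()
writer = BlockWriter(out, bufsize=4)
writer.write(b"abcdefghij")  # flushes two full 4-byte blocks, keeps the 2-byte tail
```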