code (string, lengths 26–79.6k) | docstring (string, lengths 1–46.9k)
def filter_rows(self, filters, rows): ret = [] for row in rows: if not self.row_is_filtered(row, filters): ret.append(row) return ret
returns rows as filtered by filters
def gateway_snapshot(self, indices=None): path = self.conn._make_path(indices, (), , ) return self.conn._send_request(, path)
Gateway snapshot one or more indices (See :ref:`es-guide-reference-api-admin-indices-gateway-snapshot`) :keyword indices: a list of indices or None for default configured.
def setDatastreamVersionable(self, pid, dsID, versionable): http_args = {: versionable} url = % {: pid, : dsID} response = self.put(url, params=http_args) return response.status_code == requests.codes.ok
Update datastream versionable setting. :param pid: object pid :param dsID: datastream id :param versionable: boolean :returns: boolean success
def get_application_choices(): result = [] keys = set() for ct in ContentType.objects.order_by(, ): try: if issubclass(ct.model_class(), TranslatableModel) and ct.app_label not in keys: result.append((.format(ct.app_label), .format(ct.app_label.capitalize()))) ...
Get the select options for the application selector :return:
def handle_Sample(self, instance): ars = instance.getAnalysisRequests() if len(ars) == 1: return self.handle_AnalysisRequest(ars[0]) else: return instance.absolute_url()
If this sample has a single AR, go there. If the sample has 0 or >1 ARs, go to the sample's view URL.
def excel_synthese(fct, df, excel_file): def sheet_name(name): name = unicodedata.normalize(, name).encode(, ) name = name.replace("'", "").replace(":", "").replace(" ", "_") name = "%i-%s" % (i, name) name = name[:31] return name res_count = dict() poll...
Saves to an Excel file a summary of the regulatory calculations, providing the values computed according to the regulations defined in each calculation function and a table of exceedance counts. The results are saved. Parameters: fct: function returning the elements c...
def stats(self, topic=None, channel=None, text=False): if text: fields = {: } else: fields = {: } if topic: nsq.assert_valid_topic_name(topic) fields[] = topic if channel: nsq.assert_valid_channel_name(channel) ...
Return internal instrumented statistics. :param topic: (optional) filter to topic :param channel: (optional) filter to channel :param text: return the stats as a string (default: ``False``)
def create_autosummary_file(modules, opts): lines = [ , , , , , .format(opts.destdir), , ] modules.sort() for module in modules: lines.append(.format(module)) lines.append() fname = path.join(opts.srcdir, .format(opts.do...
Create the module's index.
def mounted(name, device, fstype, mkmnt=False, opts=, dump=0, pass_num=0, config=, persist=True, mount=True, user=None, match_on=, device_name_regex=None, extra_mou...
Verify that a device is mounted name The path to the location where the device is to be mounted device The device name, typically the device node, such as ``/dev/sdb1`` or ``UUID=066e0200-2867-4ebe-b9e6-f30026ca2314`` or ``LABEL=DATA`` fstype The filesystem type, this will...
async def write_close_frame(self, data: bytes = b"") -> None: if self.state is State.OPEN: self.state = State.CLOSING logger.debug("%s - state = CLOSING", self.side) await self.write_frame(True, OP_CLOSE, data, _expected_s...
Write a close frame if and only if the connection state is OPEN. This dedicated coroutine must be used for writing close frames to ensure that at most one close frame is sent on a given connection.
def tokenize_punctuation_command(text): if text.peek() == : for point in PUNCTUATION_COMMANDS: if text.peek((1, len(point) + 1)) == point: return text.forward(len(point) + 1)
Process command that augments or modifies punctuation. This is important to the tokenization of a string, as opening or closing punctuation is not supposed to match. :param Buffer text: iterator over text, with current position
def OnTextColorDialog(self, event): dlg = wx.ColourDialog(self.main_window) dlg.GetColourData().SetChooseFull(True) if dlg.ShowModal() == wx.ID_OK: data = dlg.GetColourData() color = data.GetColour().GetRGB() post_c...
Event handler for launching text color dialog
def addDataModels(self, mods): for _, mdef in mods: for univname, _, _ in mdef.get(, ()): self.addUnivName(univname) for _, mdef in mods: for formname, formopts, propdefs in mdef.get(, ()): self.formnames.add(formname) ...
Adds a model definition (same format as input to Model.addDataModels and output of Model.getModelDef).
def countinputs(inputlist): files = irafglob(inputlist, atfile=None) numInputs = len(files) numASNfiles = 0 for file in files: if checkASN(file): numASNfiles += 1 return numInputs, numASNfiles
Determine the number of input files provided by the user and the number of those files that are association tables Parameters ---------- inputlist : string the user input Returns ------- numInputs: int number of inputs provided by the user numASNfiles: int numb...
def version(self): response = self.get(version="", base="/version") response.raise_for_status() data = response.json() return (data["major"], data["minor"])
Get Kubernetes API version
def map_providers(self, query=, cached=False): if cached is True and query in self.__cached_provider_queries: return self.__cached_provider_queries[query] pmap = {} for alias, drivers in six.iteritems(self.opts[]): for driver, details in six.iteritems(drivers): ...
Return a mapping of what named VMs are running on what VM providers based on what providers are defined in the configuration and VMs
def get_randomized_guid_sample(self, item_count): dataset = self.get_whitelist() random.shuffle(dataset) return dataset[:item_count]
Fetch a subset of randomized GUIDs from the whitelist
def classical(group, src_filter, gsims, param, monitor=Monitor()): if not hasattr(src_filter, ): src_filter = SourceFilter(src_filter, {}) src_mutex = getattr(group, , None) == rup_mutex = getattr(group, , None) == cluster = getattr(group, , None) grp_ids = set() for ...
Compute the hazard curves for a set of sources belonging to the same tectonic region type for all the GSIMs associated to that TRT. The arguments are the same as in :func:`calc_hazard_curves`, except for ``gsims``, which is a list of GSIM instances. :returns: a dictionary {grp_id: pmap} with at...
def getEventTypeNameFromEnum(self, eType): fn = self.function_table.getEventTypeNameFromEnum result = fn(eType) return result
returns the name of an EVREvent enum value
def isa_from_graph(graph: nx.Graph, oneq_type=, twoq_type=) -> ISA: all_qubits = list(range(max(graph.nodes) + 1)) qubits = [Qubit(i, type=oneq_type, dead=i not in graph.nodes) for i in all_qubits] edges = [Edge(sorted((a, b)), type=twoq_type, dead=False) for a, b in graph.edges] return ISA(qubits,...
Generate an ISA object from a NetworkX graph. :param graph: The graph :param oneq_type: The type of 1-qubit gate. Currently 'Xhalves' :param twoq_type: The type of 2-qubit gate. One of 'CZ' or 'CPHASE'.
def make_roi(cls, sources=None): if sources is None: sources = {} src_fact = cls() src_fact.add_sources(sources) ret_model = roi_model.ROIModel( {}, skydir=SkyCoord(0.0, 0.0, unit=)) for source in src_fact.sources.values(): ret_model.l...
Build and return a `fermipy.roi_model.ROIModel` object from a dict with information about the sources
def robust_isinstance(inst, typ) -> bool: if typ is Any: return True if is_typevar(typ): if hasattr(typ, ) and typ.__constraints__ is not None: typs = get_args(typ, evaluate=True) return any(robust_isinstance(inst, t) for t in typs) elif hasattr(typ, ) and ty...
Similar to isinstance, but if 'typ' is a parametrized generic Type, it is first transformed into its base generic class so that the instance check works. It is also robust to Union and Any. :param inst: :param typ: :return:
def _get_id(self, file_path): title = % self.__class__.__name__ list_kwargs = { : self.drive_space, : } path_segments = file_path.split(os.sep) parent_id = empty_string = ...
a helper method for retrieving id of file or folder
def insert_entry(self, entry): self.feature += entry self.entries.append(entry)
! @brief Insert new clustering feature to the leaf node. @param[in] entry (cfentry): Clustering feature.
def set_mime_type(self, mime_type): try: self.set_lexer_from_mime_type(mime_type) except ClassNotFound: _logger().exception() self._lexer = TextLexer() return False except ImportError: _logger().warnin...
Update the highlighter lexer based on a mime type. :param mime_type: mime type of the new lexer to setup.
def track_file_ident_desc(self, file_ident): if not self._initialized: raise pycdlibexception.PyCdlibInternalError() self.fi_descs.append(file_ident)
A method to start tracking a UDF File Identifier descriptor in this UDF File Entry. Both 'tracking' and 'addition' add the identifier to the list of file identifiers, but tracking does not expand or otherwise modify the UDF File Entry. Parameters: file_ident - The UDF File Id...
def get_run_on_node_mask(): bitmask = libnuma.numa_get_run_node_mask() nodemask = nodemask_t() libnuma.copy_bitmask_to_nodemask(bitmask, byref(nodemask)) libnuma.numa_bitmask_free(bitmask) return numa_nodemask_to_set(nodemask)
Returns the mask of nodes that the current thread is allowed to run on. @return: node mask @rtype: C{set}
def main(): buff = for line in fileinput.input(): buff += line parser = jbossparser.JbossParser() result = parser.parse(buff) print(json.dumps(result))
Reads stdin jboss output, writes json on output :return:
def aws_syncr_spec(self): formatted_string = formatted(string_spec(), MergedOptionStringFormatter, expected_type=string_types) return create_spec(AwsSyncr , extra = defaulted(formatted_string, "") , stage = defaulted(formatted_string, "") , debug = defaulted(...
Spec for aws_syncr options
def gaps(args): p = OptionParser(gaps.__doc__) opts, args = p.parse_args(args) if len(args) != 1: sys.exit(not p.print_help()) blastfile, = args blast = BlastSlow(blastfile) logging.debug("A total of {} records imported".format(len(blast))) query_gaps = list(collect_gaps(blas...
%prog gaps A_vs_B.blast Find distribution of gap sizes between adjacent HSPs.
def set_volume(self, percent, update_group=True): if percent not in range(0, 101): raise ValueError() new_volume = self._client[][] new_volume[] = percent self._client[][][] = percent yield from self._server.client_volume(self.identifier, new_volume) ...
Set client volume percent.
def _add_variant_gene_relationship(self, patient_var_map, gene_coordinate_map): dipper_util = DipperUtil() model = Model(self.graph) for patient in patient_var_map: for variant_id, variant in patient_var_map[patient].items(): variant_bnode =...
Right now it is unclear the best approach on how to connect variants to genes. In most cases has_affected_locus/GENO:0000418 is accurate; however, there are cases where a variant is in the intron on one gene and is purported to causally affect another gene down or upstream. In these ca...
def xreload(mod): r = Reload(mod) r.apply() found_change = r.found_change r = None pydevd_dont_trace.clear_trace_filter_cache() return found_change
Reload a module in place, updating classes, methods and functions. mod: a module object Returns a boolean indicating whether a change was done.
def get_peers_in_established(self): est_peers = [] for peer in self._peers.values(): if peer.in_established: est_peers.append(peer) return est_peers
Returns list of peers in established state.
def print_extended_help(): w = textwrap.TextWrapper() w.expand_tabs = False w.width=110 w.initial_indent = w.subsequent_indent = print() print(textwrap.fill("<split> Complete parameter list:", initial_indent=)) print() cmd = "--input : (required) csv file to split into ...
Prints an extended help message.
def affected_files(self): added, modified, deleted = self._changes_cache return list(added.union(modified).union(deleted))
Gets fast-accessible file changes for the given changeset
def getSuccessors(jobGraph, alreadySeenSuccessors, jobStore): successors = set() def successorRecursion(jobGraph): for successorList in jobGraph.stack: for successorJobNode in successorList: if ...
Gets successors of the given job by walking the job graph recursively. Any successor in alreadySeenSuccessors is ignored and not traversed. Returns the set of found successors. This set is added to alreadySeenSuccessors.
def to_netcdf4(self, fname=None, base_instrument=None, epoch_name=, zlib=False, complevel=4, shuffle=True): import netCDF4 import pysat file_format = base_instrument = Instrument() if base_instrument is None else base_inst...
Stores loaded data into a netCDF4 file. Parameters ---------- fname : string full path to save instrument object to base_instrument : pysat.Instrument used as a comparison, only attributes that are present with self and not on base_instrument ...
def _compute_ndim(row_loc, col_loc): row_scaler = is_scalar(row_loc) col_scaler = is_scalar(col_loc) if row_scaler and col_scaler: ndim = 0 elif row_scaler ^ col_scaler: ndim = 1 else: ndim = 2 return ndim
Compute the ndim of result from locators
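The scalar/scalar → 0, one-scalar → 1, otherwise → 2 dispatch above can be sketched as a self-contained function; a simplified stand-in for `is_scalar` (presumably a pandas-style helper in the original) is assumed:

```python
def is_scalar(obj):
    # Minimal stand-in: treat strings and non-sized objects as scalars.
    return isinstance(obj, str) or not hasattr(obj, "__len__")

def compute_ndim(row_loc, col_loc):
    # 0-D if both locators are scalars, 1-D if exactly one is, else 2-D.
    row_scalar = is_scalar(row_loc)
    col_scalar = is_scalar(col_loc)
    if row_scalar and col_scalar:
        return 0
    if row_scalar ^ col_scalar:
        return 1
    return 2
```

So `df.loc[0, "a"]` style lookups yield a scalar (ndim 0), `df.loc[0, ["a", "b"]]` a 1-D result, and list/list lookups a 2-D frame.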
def _check_delete_fw(self, tenant_id, drvr_name): fw_dict = self.fwid_attr[tenant_id].get_fw_dict() ret = False try: with self.fwid_attr[tenant_id].mutex_lock: self.update_fw_db_final_result(fw_dict.get(), ( fw_constants.RESULT_FW_DELETE_I...
Deletes the Firewall, if all conditions are met. This function, after modifying the DB with the delete operation status, calls the routine to remove the fabric cfg from the DB and unconfigure the device.
def get_occupied_slots(instance): return [slot for slot in get_all_slots(type(instance)) if hasattr(instance,slot)]
Return a list of slots for which values have been set. (While a slot might be defined, if a value for that slot hasn't been set, then it's an AttributeError to request the slot's value.)
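A minimal sketch of why `hasattr` distinguishes "defined" from "set" here, assuming a hypothetical `get_all_slots` helper that walks the MRO collecting `__slots__`:

```python
def get_all_slots(cls):
    # Hypothetical helper: gather __slots__ declared anywhere in the MRO.
    slots = []
    for klass in cls.__mro__:
        slots.extend(getattr(klass, "__slots__", ()))
    return slots

def get_occupied_slots(instance):
    # A slot raises AttributeError until assigned, so hasattr is the test
    # for "a value has been set", not merely "the slot is declared".
    return [slot for slot in get_all_slots(type(instance))
            if hasattr(instance, slot)]

class Point:
    __slots__ = ("x", "y")

p = Point()
p.x = 1
# get_occupied_slots(p) → ["x"]  ("y" is declared but unset)
```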
def build_debian(config, os_versions, os_type=): def build_pkg(config, os_type, os_version): result = _build_package(config, os_type, os_version) if not result.succeeded: print(result.cli) raise DebianError(result, os_type, os_version, frame=gfi(cf())) error = 0 ...
build_debian Builds for a specific debian operating system with os version specified. By default, it will use os_type='ubuntu'
def remove_container(self, container, v=False, link=False, force=False): params = {: v, : link, : force} res = self._delete( self._url("/containers/{0}", container), params=params ) self._raise_for_status(res)
Remove a container. Similar to the ``docker rm`` command. Args: container (str): The container to remove v (bool): Remove the volumes associated with the container link (bool): Remove the specified link and not the underlying container force (bool...
def add_if_unique(self, name): with self.lock: if name not in self.names: self.names.append(name) return True return False
Returns ``True`` on success. Returns ``False`` if the name already exists in the namespace.
def add_prefix(arg, opts, shell_opts): if not in opts and not in opts and not in opts: print("ERROR: , or must be specified.", file=sys.stderr) sys.exit(1) if len([opt for opt in opts if opt in [, , ]]) > 1: print("ERROR: Use either assignment , or manual mode (using )"...
Add prefix to NIPAP
def opt(self, x_init, f_fp=None, f=None, fp=None): rcstrings = [, , ] assert f_fp is not None, "BFGS requires f_fp" opt_dict = {} if self.xtol is not None: print("WARNING: l-bfgs-b doesn't have an xtol arg, so I'm going to ignore it") if self.ftol is not None: print("WARNING...
Run the optimizer
def fit(self, X, y, sample_weight=None, eval_set=None, eval_metric=None, early_stopping_rounds=None, verbose=True, xgb_model=None, sample_weight_eval_set=None, callbacks=None): if sample_weight is not None: trainDmatrix = DMatrix(X, label=y, weight=sample_we...
Fit the gradient boosting model Parameters ---------- X : array_like Feature matrix y : array_like Labels sample_weight : array_like instance weights eval_set : list, optional A list of (X, y) tuple pairs to use as a valida...
def _is_idempotent(self, output): output = re.sub(r, , output) changed = re.search(r, output) if changed: return False return True
Parses the output of the provisioning for changed and returns a bool. :param output: A string containing the output of the ansible run. :return: bool
def getCatalog(instance, field=): uid = instance.UID() if in instance.REQUEST and \ [x for x in instance.REQUEST[] if x.find(uid) > -1]: return None else: catalog = getToolByName(plone, catalog_name) return catalog
Returns the catalog that stores objects of the type of the instance passed in. If an object is indexed by more than one catalog, the first match will be returned. :param instance: A single object :type instance: ATContentType :returns: The first catalog that stores the type of object passed in
def get_tasks(self, thread_name): if thread_name not in self.tasks_by_thread: with self._tasks_lock: self.tasks_by_thread[thread_name] = OrderedDict() return self.tasks_by_thread[thread_name]
Args: thread_name (str): name of the thread to get the tasks for Returns: OrderedDict of str, Task: list of task names and log records for each for the given thread
def isplaybook(obj): return isinstance(obj, Iterable) and (not isinstance(obj, string_types) and not isinstance(obj, Mapping))
Inspects the object and returns if it is a playbook Args: obj (object): The object to be inspected by this function Returns: boolean: True if the object is a list and False if it is not
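A hedged sketch of the same check (the original uses six's `string_types`; plain `str` is assumed here): a playbook is list-like, so strings and mappings are excluded even though both are iterable.

```python
from collections.abc import Iterable, Mapping

def isplaybook(obj):
    # List-like: iterable, but neither a string nor a mapping.
    return isinstance(obj, Iterable) and not isinstance(obj, (str, Mapping))

isplaybook([{"hosts": "all"}])  # True: a list of plays
isplaybook("site.yml")          # False: strings are iterable but not playbooks
isplaybook({"hosts": "all"})    # False: a mapping is a single play
```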
def prepend_zeros_to_lists(ls): longest = max([len(l) for l in ls]) for i in range(len(ls)): while len(ls[i]) < longest: ls[i].insert(0, "0")
Takes a list of lists and prepends "0"s to the beginning of each sub-list until they are all the same length. Used for sign-extending binary numbers.
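The padding loop can be exercised like this (names match the snippet above; the in-place mutation of each sub-list is the point):

```python
def prepend_zeros_to_lists(ls):
    # Pad every sub-list on the left with "0" until all lists share the
    # length of the longest one (sign-extension for binary digit lists).
    longest = max(len(sub) for sub in ls)
    for sub in ls:
        while len(sub) < longest:
            sub.insert(0, "0")

bits = [["1", "0"], ["1", "1", "0", "1"]]
prepend_zeros_to_lists(bits)
# bits is now [["0", "0", "1", "0"], ["1", "1", "0", "1"]]
```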
def on_delete(self, forced): if not forced and self.handler is not None and not self.is_closed: self.promote() else: self.close()
Session expiration callback `forced` If session item explicitly deleted, forced will be set to True. If item expired, will be set to False.
def search_information(adjacency, transform=None, has_memory=False): N = len(adjacency) if np.allclose(adjacency, adjacency.T): flag_triu = True else: flag_triu = False T = np.linalg.solve(np.diag(np.sum(adjacency, axis=1)), adjacency) _, hops, Pmat = distance_wei_floyd(adjac...
Calculates search information of `adjacency` Computes the amount of information (measured in bits) that a random walker needs to follow the shortest path between a given pair of nodes. Parameters ---------- adjacency : (N x N) array_like Weighted/unweighted, direct/undirected connection we...
def ParseOptions(cls, options, output_module): elastic_output_modules = ( elastic.ElasticsearchOutputModule, elastic.ElasticsearchOutputModule) if not isinstance(output_module, elastic_output_modules): raise errors.BadConfigObject( ) index_name = cls._ParseStringOption( ...
Parses and validates options. Args: options (argparse.Namespace): parser options. output_module (OutputModule): output module to configure. Raises: BadConfigObject: when the output module object is of the wrong type. BadConfigOption: when a configuration parameter fails validation.
def connect(self): if ( in os.environ) and (sys.platform != ): conn = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) try: retry_on_signal(lambda: conn.connect(os.environ[])) except: return elif sys.platform ...
Method automatically called by the run() method of the AgentProxyThread
def dropna(self, drop_nan=True, drop_masked=True, column_names=None): copy = self.copy() copy.select_non_missing(drop_nan=drop_nan, drop_masked=drop_masked, column_names=column_names, name=FILTER_SELECTION_NAME, mode=) return copy
Create a shallow copy of a DataFrame, with filtering set using select_non_missing. :param drop_nan: drop rows when there is a NaN in any of the columns (will only affect float values) :param drop_masked: drop rows when there is a masked value in any of the columns :param column_names: The colum...
def password(self, value): if isinstance(value, str): self._password = value self._handler = None
gets/sets the current password
def import_args_from_dict(value, args, config): if isinstance(value, six.string_types): for match in TOKEN_REGEX.finditer(str(value)): token = match.group(1) if token in args: actual_param = args[token] if isinstance(actual_param, six.string_types): value = value.replace("...
Replaces some arguments by those specified by a key-value dictionary. This function will be recursively called on a dictionary looking for any value containing a "$" variable. If found, the value will be replaced by the attribute in "args" of the same name. It is used to load arguments from the CLI and any ex...
def Join(self, Id): reply = self._Owner._DoCommand( % (self.Id, Id), % self.Id) return Conference(self._Owner, reply.split()[-1])
Joins with another call to form a conference. :Parameters: Id : int Call Id of the other call to join to the conference. :return: Conference object. :rtype: `Conference`
def apply_update(self, value, index): _log.debug(.format(type(value).__name__, index, value.value)) builder = asiodnp3.UpdateBuilder() builder.Update(value, index) update = builder.Build() OutstationApplication.get_outstation().Apply(update)
Record an opendnp3 data value (Analog, Binary, etc.) in the outstation's database. The data value gets sent to the Master as a side-effect. :param value: An instance of Analog, Binary, or another opendnp3 data value. :param index: (integer) Index of the data definition in the opendnp3 data...
def update_index(model_items, model_name, action=, bulk_size=100, num_docs=-1, start_date=None, end_date=None, refresh=True): ...
Updates the index for the provided model_items. :param model_items: a list of model_items (django Model instances, or proxy instances) which are to be indexed/updated or deleted. If action is 'index', the model_items must be serializable objects. If action is 'delete', the model_items must be primary keys c... :param action: the action to perform on this group of data. Must be in ('index', 'delete') and defaults to 'index'. :param bulk_size: bulk size for indexing. Defaults to 100. :param num_docs: maximum number of mod...
def read_union(fo, writer_schema, reader_schema=None): index = read_long(fo) if reader_schema: if not isinstance(reader_schema, list): if match_types(writer_schema[index], reader_schema): return read_data(fo, writer_schema[index], reader_schema) els...
A union is encoded by first writing a long value indicating the zero-based position within the union of the schema of its value. The value is then encoded per the indicated schema within the union.
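The index-then-payload layout can be illustrated with a toy decoder. This is only a sketch: the real `read_long`/`read_data` do zig-zag varint decoding and full schema dispatch, and plain list pops stand in for them here.

```python
def read_long(buf):
    # Toy stand-in for Avro's zig-zag varint long reader.
    return buf.pop(0)

def read_union(buf, writer_schema):
    # A union value starts with a long: the zero-based index of the branch
    # schema within the union; the payload that follows is decoded per
    # that branch (here, just popped as-is).
    index = read_long(buf)
    branch = writer_schema[index]
    return branch, buf.pop(0)

data = [1, 42]  # index 1 selects the "int" branch, then the value itself
branch, value = read_union(data, ["null", "int"])
# branch == "int", value == 42
```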
def facility(self, column=None, value=None, **kwargs): return self._resolve_call(, column, value, **kwargs)
Check information related to Radiation facilities. >>> RADInfo().facility('state_code', 'CA')
def get_instance(self, payload): return RecordingInstance( self._version, payload, account_sid=self._solution[], call_sid=self._solution[], )
Build an instance of RecordingInstance :param dict payload: Payload response from the API :returns: twilio.rest.api.v2010.account.call.recording.RecordingInstance :rtype: twilio.rest.api.v2010.account.call.recording.RecordingInstance
def _array_handler(self, _cursor_type): _type = _cursor_type.get_canonical() size = _type.get_array_size() if size == -1 and _type.kind == TypeKind.INCOMPLETEARRAY: size = 0 _array_type = _type.get_array_elemen...
Handles all array types. Resolves it's element type and makes a Array typedesc.
def get_automation(self, automation_id, refresh=False): if self._automations is None: self.get_automations() refresh = False automation = self._automations.get(str(automation_id)) if automation and refresh: automation.refresh() return autom...
Get a single automation.
def inicializar_y_capturar_excepciones_simple(func): "Decorador para inicializar y capturar errores (versión básica indep.)" @functools.wraps(func) def capturar_errores_wrapper(self, *args, **kwargs): self.inicializar() try: return func(self, *args, **kwargs) except: ...
Decorator to initialize and capture errors (basic standalone version).
def dmag_magic(in_file="measurements.txt", dir_path=".", input_dir_path="", spec_file="specimens.txt", samp_file="samples.txt", site_file="sites.txt", loc_file="locations.txt", plot_by="loc", LT="AF", norm=True, XLP="", save_plots=True, fmt="svg"): dir_path = os.path.realpa...
plots intensity decay curves for demagnetization experiments Parameters ---------- in_file : str, default "measurements.txt" dir_path : str output directory, default "." input_dir_path : str input file directory (if different from dir_path), default "" spec_file : str in...
def icnr(x, scale=2, init=nn.init.kaiming_normal_): "ICNR init of `x`, with `scale` and `init` function." ni,nf,h,w = x.shape ni2 = int(ni/(scale**2)) k = init(torch.zeros([ni2,nf,h,w])).transpose(0, 1) k = k.contiguous().view(ni2, nf, -1) k = k.repeat(1, 1, scale**2) k = k.contiguous().view...
ICNR init of `x`, with `scale` and `init` function.
def vra(self,*args,**kwargs): from .OrbitTop import _check_roSet, _check_voSet _check_roSet(self,kwargs,) _check_voSet(self,kwargs,) dist= self._orb.dist(*args,**kwargs) if _APY_UNITS and isinstance(dist,units.Quantity): out= units.Quantity(dist.to(units.kpc)...
NAME: vra PURPOSE: return velocity in right ascension (km/s) INPUT: t - (optional) time at which to get vra (can be Quantity) obs=[X,Y,Z,vx,vy,vz] - (optional) position and velocity of observer in the Galactocentric frame ...
def p_tag_ref(self, p): p[0] = AstTagRef(self.path, p.lineno(1), p.lexpos(1), p[1])
tag_ref : ID
def is_ipython_notebook(file_name): if (not re.match(r"^.*checkpoint\.ipynb$", file_name)) and re.match(r"^.*\.ipynb$", file_name): return True return False
Return True if file_name matches a regexp for an ipython notebook. False otherwise. :param file_name: file to test
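The two-regex check can be exercised directly (raw strings assumed for the patterns): `*.ipynb` matches, but Jupyter's autosaved `*checkpoint.ipynb` files are excluded first.

```python
import re

def is_ipython_notebook(file_name):
    # Matches *.ipynb while excluding autosaved *checkpoint.ipynb files.
    return bool(re.match(r"^.*\.ipynb$", file_name)
                and not re.match(r"^.*checkpoint\.ipynb$", file_name))

is_ipython_notebook("analysis.ipynb")             # True
is_ipython_notebook("analysis-checkpoint.ipynb")  # False
is_ipython_notebook("analysis.py")                # False
```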
def write(self, file_or_filename): self.book = Workbook() self._write_data(None) self.book.save(file_or_filename)
Writes case data to file in Excel format.
def get_s3_client(): session_kwargs = {} if hasattr(settings, ): session_kwargs[] = settings.AWS_ACCESS_KEY_ID if hasattr(settings, ): session_kwargs[] = settings.AWS_SECRET_ACCESS_KEY boto3.setup_default_session(**session_kwargs) s3_kwargs = {} if hasattr(settings, ): ...
A DRY place to make sure AWS credentials in settings override environment based credentials. Boto3 will fall back to: http://boto3.readthedocs.io/en/latest/guide/configuration.html
def token_expired(self): if self._token_timer is None: return True return timeutil.is_newer_than(self._token_timer, timeutil.ONE_HOUR)
Provide access to flag indicating if token has expired.
def economic_qs(K, epsilon=sqrt(finfo(float).eps)): r (S, Q) = eigh(K) nok = abs(max(Q[0].min(), Q[0].max(), key=abs)) < epsilon nok = nok and abs(max(K.min(), K.max(), key=abs)) >= epsilon if nok: from scipy.linalg import eigh as sp_eigh (S, Q) = sp_eigh(K) ok = S >= epsilon...
r"""Economic eigen decomposition for symmetric matrices. A symmetric matrix ``K`` can be decomposed in :math:`\mathrm Q_0 \mathrm S_0 \mathrm Q_0^\intercal + \mathrm Q_1\ \mathrm S_1 \mathrm Q_1^ \intercal`, where :math:`\mathrm S_1` is a zero matrix with size determined by ``K``'s rank deficiency. ...
def sngl_ifo_job_setup(workflow, ifo, out_files, curr_exe_job, science_segs, datafind_outs, parents=None, link_job_instance=None, allow_overlap=True, compatibility_mode=True): if compatibility_mode and not link_job_instance: errMsg = ...
This function sets up a set of single ifo jobs. A basic overview of how this works is as follows: * (1) Identify the length of data that each job needs to read in, and what part of that data the job is valid for. * START LOOPING OVER SCIENCE SEGMENTS * (2) Identify how many jobs are needed (if an...
def obfn_reg(self): rl1 = np.linalg.norm((self.wl1 * self.obfn_gvar()).ravel(), 1) return (self.lmbda*rl1, rl1)
Compute regularisation term and contribution to objective function.
def get_post(self): pk = self.kwargs.get(self.post_pk_url_kwarg, None) if not pk: return if not hasattr(self, ): self._forum_post = get_object_or_404(Post, pk=pk) return self._forum_post
Returns the considered post if applicable.
def read_csv(csvfile, options): name, ext = os.path.splitext(csvfile) try: if ext == : f = gzip.open(csvfile, ) else: f = open(csvfile, ) except IOError: print(" \n could not be opened\n".format(f=os.path.basename(csvfile))) sys.exit(1) ...
Read csv and return molList, a list of mol objects
def location_2_json(self): LOGGER.debug("Location.location_2_json") json_obj = { : self.id, : self.name, : self.description, : self.address, : self.zip_code, : self.town, : self.type, : self.country,...
transform ariane_clip3 location object to Ariane server JSON obj :return: Ariane JSON obj
def calculate_size(name, sequence): data_size = 0 data_size += calculate_size_str(name) data_size += LONG_SIZE_IN_BYTES return data_size
Calculates the request payload size
def saccadic_momentum_effect(durations, forward_angle, summary_stat=nanmean): durations_per_da = np.nan * np.ones((len(e_angle) - 1,)) for i, (bo, b1) in enumerate(zip(e_angle[:-1], e_angle[1:])): idx = ( bo <= forward_angle) & ( forward_angle < ...
Computes the mean fixation duration at forward angles.
def get_json_body(self, required=None, validators=None): content_type = self.request.headers.get(, ) if not in content_type.split(): raise HTTPError(415, ) if not self.request.body: error = logging.war...
Get JSON from the request body :param required: optionally provide a list of keys that should be in the JSON body (raises a 400 HTTPError if any are missing) :param validator: optionally provide a dictionary of items that should be in the body with a method that validates the item. ...
def detection_multiplot(stream, template, times, streamcolour=, templatecolour=, size=(10.5, 7.5), **kwargs): import matplotlib.pyplot as plt template_stachans = [(tr.stats.station, tr.stats.channel) for tr in template] stream_stachans = [(tr.s...
Plot a stream of data with a template on top of it at detection times. :type stream: obspy.core.stream.Stream :param stream: Stream of data to be plotted as the background. :type template: obspy.core.stream.Stream :param template: Template to be plotted on top of the base stream. :type times: list ...
def add_items(self, items): _items = [self._listitemify(item) for item in items] tuples = [item.as_tuple() for item in _items] xbmcplugin.addDirectoryItems(self.handle, tuples, len(tuples)) self.added_items.extend(_items) return _items
Adds ListItems to the XBMC interface. Each item in the provided list should either be instances of xbmcswift2.ListItem, or regular dictionaries that will be passed to xbmcswift2.ListItem.from_dict. Returns the list of ListItems. :param items: An iterable of items where each item is eith...
def get_template_names(self): year = self.get_archive_part_value() week = self.get_archive_part_value() month = self.get_archive_part_value() day = self.get_archive_part_value() templates = [] path = template_names = self.get_default_base_template_names...
Return a list of template names to be used for the view.
def node_set_to_surface(self, tag): nodes = self.nodes.copy() dummy = nodes.iloc[0].copy() dummy["coords"] *= np.nan dummy["sets"] = True nodes.loc[0] = dummy element_surfaces= self.split("surfaces").unstack() surf = pd.DataFrame( nodes.sets[tag].loc[element_...
Converts a node set to surface.
def copy(self, empty=False): g = graph(self.layout.n, self.distance, self.layout.type) g.layout = self.layout.copy(g) g.styles = self.styles.copy(g) g.events = self.events.copy(g) if not empty: for n in self.nodes: g.add_nod...
Create a copy of the graph (by default with nodes and edges).
def get_X_gradients(self, X): return X.mean.gradient, X.variance.gradient, X.binary_prob.gradient
Get the gradients of the posterior distribution of X in its specific form.
def add_inspection(name): def inspection_inner(func): @functools.wraps(func) def encapsulated(*args, **kwargs): try: return func(*args, **kwargs) except (TypeError, AttributeError, ValueError, OSError): r...
Add a Jishaku object inspection
def gen_cannon_grad_spec(choose, coeffs, pivots): base_labels = [4800, 2.5, 0.03, 0.10, -0.17, -0.17, 0, -0.16, -0.13, -0.15, 0.13, 0.08, 0.17, -0.062] label_names = np.array( [, , , , , , , , , , , , , ]) label_atnum = np.array( [0, 1, -1, 13, 20, 6, 26,...
Generate Cannon gradient spectra Parameters ---------- labels: default values for [teff, logg, feh, cfe, nfe, afe, ak] choose: val of cfe or nfe, whatever you're varying low: lowest val of cfe or nfe, whatever you're varying high: highest val of cfe or nfe, whatever you're varying
def inferURILocalSymbol(aUri): stringa = aUri try: ns = stringa.split("#")[0] name = stringa.split("#")[1] except: if "/" in stringa: ns = stringa.rsplit("/", 1)[0] name = stringa.rsplit("/", 1)[1] else: ns = "" name = stringa ...
From a URI returns a tuple (namespace, uri-last-bit) Eg from <'http://www.w3.org/2008/05/skos#something'> ==> ('something', 'http://www.w3.org/2008/05/skos') from <'http://www.w3.org/2003/01/geo/wgs84_pos'> we extract ==> ('wgs84_pos', 'http://www.w3.org/2003/01/geo/')
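A sketch of the split the docstring describes, mirroring its two examples ('#' preferred, last '/' as fallback; note the docstring's examples list the local name first, despite the "(namespace, uri-last-bit)" wording):

```python
def infer_uri_local_symbol(uri):
    # Split on '#' when present, otherwise on the last '/'; return
    # (local-name, namespace), matching the docstring's examples.
    if "#" in uri:
        ns, name = uri.rsplit("#", 1)
    elif "/" in uri:
        ns, name = uri.rsplit("/", 1)
        ns += "/"
    else:
        ns, name = "", uri
    return name, ns

infer_uri_local_symbol("http://www.w3.org/2008/05/skos#something")
# → ("something", "http://www.w3.org/2008/05/skos")
```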
def get_dict(self, name, default=None): if name not in self: if default is not None: return default raise EnvironmentError.not_found(self._prefix, name) return dict(**self.get(name))
Retrieves an environment variable value as a dictionary. Args: name (str): The case-insensitive, unprefixed variable name. default: If provided, a default value will be returned instead of throwing ``EnvironmentError``. Returns: dict: The environment...
def export(self): return { : self.id, : self.name, : self._artist_name, : self._artist_id, : self._cover_url}
Returns a dictionary with all album information. Use the :meth:`from_export` method to recreate the :class:`Album` object.
def make_optimize_tensor(self, model, session=None, var_list=None, **kwargs): session = model.enquire_session(session) with session.as_default(): var_list = self._gen_var_list(model, var_list) optimizer_kwargs = self._optimizer_kwargs.copy() options = optimiz...
Make SciPy optimization tensor. The `make_optimize_tensor` method builds optimization tensor and initializes all necessary variables created by optimizer. :param model: GPflow model. :param session: Tensorflow session. :param var_list: List of variables for training....
def get_private_endpoint(id: str, guid: str) -> str: _username, domain = id.split("@") return "https://%s/receive/users/%s" % (domain, guid)
Get remote endpoint for delivering private payloads.
def load_plume_package(package, plume_dir, accept_defaults): from canari.commands.load_plume_package import load_plume_package load_plume_package(package, plume_dir, accept_defaults)
Loads a canari package into Plume.
def _parse_time_to_freeze(time_to_freeze_str): if time_to_freeze_str is None: time_to_freeze_str = datetime.datetime.utcnow() if isinstance(time_to_freeze_str, datetime.datetime): time_to_freeze = time_to_freeze_str elif isinstance(time_to_freeze_str, datetime.date): time_to_fr...
Parses all the possible inputs for freeze_time :returns: a naive ``datetime.datetime`` object