Columns: code (string, lengths 26 to 79.6k); docstring (string, lengths 1 to 46.9k)
def build_subresource_uri(self, resource_id_or_uri=None, subresource_id_or_uri=None, subresource_path=): if subresource_id_or_uri and "/" in subresource_id_or_uri: return subresource_id_or_uri else: if not resource_id_or_uri: raise exceptions.HPOneViewVal...
Helps to build a URI with resource path and its sub resource path. Args: resource_id_or_uri: ID/URI of the main resource. subresource_id_or_uri: ID/URI of the sub resource. subresource_path: Sub resource path to be added with the URI. Returns: Returns UR...
def _pick_colours(self, palette_name, selected=False): return self._frame.palette[self._pick_palette_key(palette_name, selected)]
Pick the rendering colour for a widget based on the current state. :param palette_name: The stem name for the widget - e.g. "button". :param selected: Whether this item is selected or not. :returns: A colour tuple (fg, attr, bg) to be used.
def html(text, extensions=0, render_flags=0): extensions = args_to_int(extension_map, extensions) render_flags = args_to_int(html_flag_map, render_flags) ib = lib.hoedown_buffer_new(IUNIT) ob = lib.hoedown_buffer_new(OUNIT) renderer = lib.hoedown_html_renderer_new(render_flags, 0) document...
Convert markdown text to HTML. ``extensions`` can be a list or tuple of extensions (e.g. ``('fenced-code', 'footnotes', 'strikethrough')``) or an integer (e.g. ``EXT_FENCED_CODE | EXT_FOOTNOTES | EXT_STRIKETHROUGH``). ``render_flags`` can be a list or tuple of flags (e.g. ``('skip-html', 'hard-wra...
def add_occurrences(events, count): for day in count: for item in count[day]: for event in events: if event.pk == item[1]: try: event.occurrence.append(day) except AttributeError: event.o...
Adds an occurrence key to the event object with a list of occurrences and adds a popover (for use with Twitter Bootstrap). The occurrence is added so that each event can be aware of which day(s) in the month it occurs.
def make_ifar_plot(workflow, trigger_file, out_dir, tags=None, hierarchical_level=None): if hierarchical_level is not None and tags: tags = [("HIERARCHICAL_LEVEL_{:02d}".format( hierarchical_level))] + tags elif hierarchical_level is not None and not tags: ...
Creates a node in the workflow for plotting cumulative histogram of IFAR values.
def deref(o, timeout_s=None, timeout_val=None): if isinstance(o, IDeref): return o.deref() elif isinstance(o, IBlockingDeref): return o.deref(timeout_s, timeout_val) raise TypeError(f"Object of type {type(o)} cannot be dereferenced")
Dereference a Deref object and return its contents. If o is an object implementing IBlockingDeref and timeout_s and timeout_val are supplied, deref will wait at most timeout_s seconds, returning timeout_val if timeout_s seconds elapse and o has not returned.
def container_setting(name, container, settings=None): identityType_map2string = {0: , 1: , 2: , 3: , 4: } ret = {: name, : {}, : str(), : None} if not settings: ret[] = ret[] = True return ret ret_settings = { : {}, : {},...
Set the value of the setting for an IIS container. :param str name: The name of the IIS container. :param str container: The type of IIS container. The container types are: AppPools, Sites, SslBindings :param str settings: A dictionary of the setting names and their values. Example of usage...
def generate_encodeable_characters(characters: Iterable[str], encodings: Iterable[str]) -> Iterable[str]: for c in characters: for encoding in encodings: try: c.encode(encoding) yield c except UnicodeEncodeError:...
Generates the subset of 'characters' that can be encoded by 'encodings'. Args: characters: The characters to check for encodeability e.g. 'abcd'. encodings: The encodings to check against e.g. ['cp1252', 'iso-8859-5']. Returns: The subset of 'characters' that can be encoded using one o...
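Based on the docstring above, a minimal self-contained version of this filter might look like the following (the `break` is an assumption to avoid yielding a character once per matching encoding; the truncated original may differ):

```python
def encodeable_subset(characters, encodings):
    """Yield each character that at least one of the given codecs can represent."""
    for c in characters:
        for encoding in encodings:
            try:
                c.encode(encoding)
            except UnicodeEncodeError:
                continue
            yield c
            break  # one successful encoding is enough

# '€' exists in cp1252 but not latin-1; Cyrillic 'ѐ' is in neither.
print(list(encodeable_subset('aé€ѐ', ['latin-1', 'cp1252'])))  # → ['a', 'é', '€']
```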
def load(name, path=None, ext="dat", silent=False): filename = __get_filename(path, name, ext) if not os.path.exists(filename): if not silent: raise ValueError("Specified input filename doesn't exist.") return None with open(filename, "rb") as f: return pickle....
Loads an object from file with given name and extension. Optionally the path can be specified as well.
def unconsume(self, seq): for kmer in iter_kmers(seq, self.k, canonical=self.canonical): self._decr(kmer)
Subtracts all k-mers in sequence.
def StartCli(args, adb_commands, extra=None, **device_kwargs): try: dev = adb_commands() dev.ConnectDevice(port_path=args.port_path, serial=args.serial, default_timeout_ms=args.timeout_ms, **device_kwargs) except usb_exceptions.DeviceNotFoundError as e: pri...
Starts a common CLI interface for this usb path and protocol.
def setColor(self, color): ...  # body garbled in extraction; maps color names ('blue', 'dblue', 'red', 'dred', 'yellow', 'dyellow', 'green', 'dgreen', 'wild', 'dwild') to escape codes
Sets Card's color and escape code.
def create_base_storage(self, logical_size, variant): if not isinstance(logical_size, baseinteger): raise TypeError("logical_size can only be an instance of type baseinteger") if not isinstance(variant, list): raise TypeError("variant can only be an instance of type list...
Starts creating a hard disk storage unit (fixed/dynamic, according to the variant flags) in the background. The previous storage unit created for this object, if any, must first be deleted using :py:func:`delete_storage` , otherwise the operation will fail. Before the operation ...
def has_scope(context=None): if not booted(context): return False _sd_version = version(context) if _sd_version is None: return False return _sd_version >= 205
Scopes were introduced in systemd 205. This function returns True when the minion is systemd-booted and running systemd >= 205.
def docker_start(develop=True): curr_dir = os.path.dirname(os.path.realpath(__file__)) local(.format(curr_dir)) if develop: docker_exec() print()
Start docker container
def choices(klass): _choices = [] for attr in user_attributes(klass.Meta): val = getattr(klass.Meta, attr) setattr(klass, attr, val[0]) _choices.append((val[0], val[1])) setattr(klass, , tuple(_choices)) return klass
Decorator to set `CHOICES` and other attributes
def oridam_generate_patterns(word_in,cm,ed=1,level=0,pos=0,candidates=None): alternates = cm.get(word_in[pos],[]) if not candidates: candidates = [] assert ed <= len(word_in), if (pos >len(word_in)) or ed == 0: return candidates pfx = sfx = curr_candidates = [] fo...
ed = 1 by default; pos is an internal variable used by the algorithm.
def receive_loop_with_callback(self, queue_name, callback): self.connect() channel = self.create_channel(queue_name) channel.basic_qos(prefetch_count=1) channel.basic_consume(callback, queue=queue_name) channel.start_consuming()
Process incoming messages with callback until close is called. :param queue_name: str: name of the queue to poll :param callback: func(ch, method, properties, body) called with data when data arrives :return:
def _get_all_resource_attributes(network_id, template_id=None): base_qry = db.DBSession.query( ResourceAttr.id.label(), ResourceAttr.ref_key.label(), ResourceAttr.cr_date.label(), Resourc...
Get all the attributes for the nodes, links and groups of a network. Return these attributes as a dictionary, keyed on type (NODE, LINK, GROUP) then by ID of the node or link.
def rerun(client, revision, roots, siblings, inputs, paths): graph = Graph(client) outputs = graph.build(paths=paths, revision=revision) outputs = siblings(graph, outputs) output_paths = {node.path for node in outputs} roots = {graph.normalize_path(root) for root in roots} asser...
Recreate files generated by a sequence of ``run`` commands.
def get_track_by_mbid(self, mbid): params = {"mbid": mbid} doc = _Request(self, "track.getInfo", params).execute(True) return Track(_extract(doc, "name", 1), _extract(doc, "name"), self)
Looks up a track by its MusicBrainz ID
def ds2json(ds, u_var, v_var, lat_dim=, lon_dim=, units=None): import numpy as np ds = ds.copy() for var_name in (u_var, v_var): var_dims = ds[var_name].dims if set(var_dims) != set([lat_dim, lon_dim]): raise ValueError( "Invalid dimensions for variable in ...
Assumes that the velocity components are given on a regular grid (fixed spacing in latitude and longitude). Parameters ---------- u_var : str Name of the U-component (zonal) variable. v_var : str Name of the V-component (meridional) variable. lat_dim : str, optional Name...
def is_executable(path): return (stat.S_IXUSR & os.stat(path)[stat.ST_MODE] or stat.S_IXGRP & os.stat(path)[stat.ST_MODE] or stat.S_IXOTH & os.stat(path)[stat.ST_MODE])
Is the given path executable?
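A self-contained copy of the check above, demonstrated on a temporary file (POSIX permission bits assumed; the original returns the raw masked mode, while this sketch collapses it to a bool):

```python
import os
import stat
import tempfile

def is_executable(path):
    """True if any execute bit (user, group, or other) is set on path."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH))

# Demonstrate on a temporary file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    name = f.name
os.chmod(name, 0o644)
print(is_executable(name))  # False: no execute bits set
os.chmod(name, 0o755)
print(is_executable(name))  # True: execute bits set
os.unlink(name)
```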
def run(self): if self.debug: print("Starting " + self.name) if isinstance(self.function, str): globals()[self.function](*self.args, **self.kwargs) else: self.function(*self.args, **self.kwargs) if self.debug: print("Exiting " + self.name)
Thread execution logic.
def smooth(x, rho, penalty, axis=0, newshape=None): orig_shape = x.shape if newshape is not None: x = x.reshape(newshape) n = x.shape[axis] lap_op = spdiags([(2 + rho / penalty) * np.ones(n), -1 * np.ones(n), -1 * np.ones(n)], [0, -1, ...
Applies a smoothing operator along one dimension currently only accepts a matrix as input Parameters ---------- penalty : float axis : int, optional Axis along which to apply the smoothing (Default: 0) newshape : tuple, optional Desired shape of the parameters to apply the nu...
def _output(self, file_like_object, path=None): if not path: self._output_to_display(file_like_object) else: self._output_to_file(file_like_object, path)
Display or save file like object.
def device(dev, stats=False, config=False, internals=False, superblock=False): ** result = {} if not _sysfs_attr(_bcpath(dev), None, , .format(dev)): return False elif _bcsys(dev, ): back_uuid = uuid(dev) if back_uuid is not None: result[] = back_uuid ...
Check the state of a single bcache device CLI example: .. code-block:: bash salt '*' bcache.device bcache0 salt '*' bcache.device /dev/sdc stats=True :param stats: include statistics :param config: include all settings :param internals: include all internals :param superblo...
def _is_not_pickle_safe_gl_model_class(obj_class): if issubclass(obj_class, _toolkits._model.CustomModel): return not obj_class._is_gl_pickle_safe() return False
Check if a Turi create model is pickle safe. The function does it by checking that _CustomModel is the base class. Parameters ---------- obj_class : Class to be checked. Returns ---------- True if the GLC class is a model and is pickle safe.
def build_clnsig(clnsig_info): clnsig_obj = dict( value = clnsig_info[], accession = clnsig_info.get(), revstat = clnsig_info.get() ) return clnsig_obj
Build a clnsig dict (value, accession, revstat) from parsed ClinVar significance info.
def get_method_info(self, obj): info = self.get_base_info(obj) info.update({}) return info
Returns the info for a Method
def lines(n_traces=5,n=100,columns=None,dateIndex=True,mode=None): index=pd.date_range(,periods=n) if dateIndex else list(range(n)) df=pd.DataFrame(np.random.randn(n,n_traces),index=index, columns=getName(n_traces,columns=columns,mode=mode)) return df.cumsum()
Returns a DataFrame with the required format for a scatter (lines) plot Parameters: ----------- n_traces : int Number of traces n : int Number of points for each trace columns : [str] List of column names dateIndex : bool If True it will return a datetime index if False it will return a enu...
def server(description=None, **kwargs): description = description or return wsgi.WSGIServer(hello, description=description, **kwargs)
Create the :class:`.WSGIServer` running :func:`hello`.
def execute_async(self, output_options=None, sampling=None, context=None, query_params=None): if output_options is None: output_options = QueryOutput.table() batch = output_options.priority == append = output_options.table_mode == overwrite = output_options.table_mode == ta...
Initiate the query and return a QueryJob. Args: output_options: a QueryOutput object describing how to execute the query sampling: sampling function to use. No sampling is done if None. See bigquery.Sampling context: an optional Context object providing project_id and credentials. If a specific ...
def som_get_capture_objects(som_pointer): ccore = ccore_library.get() ccore.som_get_capture_objects.restype = POINTER(pyclustering_package) package = ccore.som_get_capture_objects(som_pointer) result = package_extractor(package).extract() return result
! @brief Returns list of indexes of captured objects by each neuron. @param[in] som_pointer (c_pointer): pointer to object of self-organized map.
def get_current_stats(self, names=None): if names is None: names = self.default_stats return self._current_stats.getstats(names)
Return one or more of the current stats as a tuple. This function does no computation. It only returns what has already been calculated. If a stat hasn't been calculated, it will be returned as ``numpy.nan``. Parameters ---------- names : list of str, optional ...
def detrended_price_oscillator(data, period): catch_errors.check_for_period_error(data, period) period = int(period) dop = [data[idx] - np.mean(data[idx+1-(int(period/2)+1):idx+1]) for idx in range(period-1, len(data))] dop = fill_for_noncomputable_vals(data, dop) return dop
Detrended Price Oscillator. Formula: DPO[i] = DATA[i] - Avg(DATA[i - period/2 ... i])
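Stripped of the library helpers (`catch_errors`, `fill_for_noncomputable_vals`), the core computation above reduces to the following sketch, which omits the padding of non-computable leading values:

```python
import numpy as np

def detrended_price_oscillator(data, period):
    """Price minus the mean of the trailing period//2 + 1 values,
    mirroring the list comprehension in the snippet above."""
    w = int(period) // 2 + 1
    return [data[i] - np.mean(data[i + 1 - w:i + 1])
            for i in range(int(period) - 1, len(data))]

prices = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
print(detrended_price_oscillator(prices, 4))  # → [1.0, 1.0, 1.0]
```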
def ReadCronJobs(self, cronjob_ids=None): if cronjob_ids is None: res = [job.Copy() for job in itervalues(self.cronjobs)] else: res = [] for job_id in cronjob_ids: try: res.append(self.cronjobs[job_id].Copy()) except KeyError: raise db.UnknownCronJobEr...
Reads cronjobs from the database.
def evaluate(self, data): expression_engine = data.process.requirements.get(, None) if expression_engine is not None: expression_engine = self.get_expression_engine(expression_engine) steps = data.process.run.get(, None) if steps is None: return...
Evaluate the code needed to compute a given Data object.
def document(self, document): self._doc_types.append(document) if document._index._name is None: document._index = self return document
Associate a :class:`~elasticsearch_dsl.Document` subclass with an index. This means that, when this index is created, it will contain the mappings for the ``Document``. If the ``Document`` class doesn't have a default index yet (by defining ``class Index``), this instance will be used. C...
def get_cso_dataframe(self): assert self.jco is not None assert self.pst is not None weights = self.pst.observation_data.loc[self.jco.to_dataframe().index,"weight"].copy().values cso = np.diag(np.sqrt((self.qhalfx.x.dot(self.qhalfx.x.T))))/(float(self.pst.npar-1)) cso_df...
get a dataframe of composite observation sensitivity, as returned by PEST in the seo file. Note that this formulation deviates slightly from the PEST documentation in that the values are divided by (npar-1) rather than by (npar). The equation is cso_j = ((Q^1/2*J*J^T*Q^1/2)^1/2)_jj/(NP...
def watch(value, spectator_type=Spectator): if isinstance(value, Watchable): wtype = type(value) else: raise TypeError("Expected a Watchable, not %r." % value) spectator = getattr(value, "_instance_spectator", None) if not isinstance(spectator, Spectator): spectator = specta...
Register a :class:`Spectator` to a :class:`Watchable` and return it. In order to register callbacks to an eventful object, you need to create a Spectator that will watch it for you. A :class:`Spectator` is a relatively simple object that has methods for adding, deleting, and triggering callbacks. To ...
def create(host, port): wrapper = WrapperEchoServer({ : None }) d = { : port, : wrapper } if host: d[] = host ses = EchoServer(d) wrapper.server = ses return [wrapper], cmd_line
Prepare server to execute :return: Modules to execute, cmd line function :rtype: list[WrapperServer], callable | None
def dallinger(): from logging.config import fileConfig fileConfig( os.path.join(os.path.dirname(__file__), "logging.ini"), disable_existing_loggers=False, )
Dallinger command-line utility.
def disable(name, lbn, target, profile=, tgt_type=): t get new ones. Example: .. code-block:: yaml disable-before-deploy: modjk_worker.disable: - name: {{ grains[] }} - lbn: application - target: - tgt_type: grain worker_disable',...
.. versionchanged:: 2017.7.0 The ``expr_form`` argument has been renamed to ``tgt_type``, earlier releases must use ``expr_form``. Disable the named worker from the lbn load balancers at the targeted minions. The worker will get traffic only for current sessions and won't get new ones. ...
def add_group(self, group_attribs=None, parent=None): if parent is None: parent = self.tree.getroot() elif not self.contains_group(parent): warnings.warn( .format(parent)) if group_attribs is None: group_attribs = {} ...
Add an empty group element to the SVG.
def list_observatories(self): response = requests.get(self.base_url + ).text return safe_load(response)
Get the IDs of all observatories which have stored observations on this server. :return: a sequence of strings containing observatory IDs
def do_with_ruby(ruby, cmdline, runas=None): s shims using a specific ruby version CLI Example: .. code-block:: bash salt rbenv.do_with_ruby 2.0.0-p0 salt rbenv.do_with_ruby 2.0.0-p0 runas=deploy Command must be specifiedRBENV_VERSION'] = ruby cmd = cmdline else: ...
Execute a ruby command with rbenv's shims using a specific ruby version CLI Example: .. code-block:: bash salt '*' rbenv.do_with_ruby 2.0.0-p0 'gem list bundler' salt '*' rbenv.do_with_ruby 2.0.0-p0 'gem list bundler' runas=deploy
def make_tarball(base_name, base_dir, compress=, verbose=False, dry_run=False): compress_ext = { : ".gz", : , : ".Z" } tarfile_compress_flag = {:, :} compress_flags = {: ["-f"]} if compress is not None and c...
Create a tar file from all the files under 'base_dir'. This file may be compressed. :param compress: Compression algorithms. Supported algorithms are: 'gzip': (the default) 'compress' 'bzip2' None For 'gzip' and 'bzip2' the internal tarfile module will be used. For 'comp...
def read(self, size = -1): if size < -1: raise Exception() if size == -1: t = self.current_segment.remaining_len(self.current_position) if not t: return None old_new_pos = self.current_position self.current_position = self.current_segment.end_address return self.current_segment.data[old...
Return up to `size` data bytes from the current segment. If size is -1, return all remaining bytes of the memory segment.
def _count_vocab(self, analyzed_docs): vocabulary = self.vocabulary_ j_indices = _make_int_array() indptr = _make_int_array() indptr.append(0) for doc in analyzed_docs: for feature in doc: try: j_indices.append(vocabulary[f...
Create sparse feature matrix, and vocabulary where fixed_vocab=False
def _first_step_to_match(match_step): parts = [] if match_step.root_block is not None: if not isinstance(match_step.root_block, QueryRoot): raise AssertionError(u u.format(match_step.root_block, match_step)) match_step.root_block.validate() ...
Transform the very first MATCH step into a MATCH query string.
def _create_translation_file(feature_folder, dataset_name, translation, formula_id2index): translationfilename = "%s/translation-%s.csv" % (feature_folder, dataset_name) ...
Write a look-up file that contains the direct (record-wise) lookup information. Parameters ---------- feature_folder : Path to the feature files. dataset_name : 'traindata', 'validdata' or 'testdata'. translation : list of triples (raw data id, formula in latex, formula ...
def tabulate(d, transpose=False, thousands=True, key_fun=None, sep=, align=True): pairs = d.keys() rows, cols = zip(*pairs) if transpose: rows, cols = cols, rows rows = sorted(set(rows)) cols = sorted(set(cols)) header = ["o"] + list(cols) table = [] for r in rows: ...
d is a dictionary, keyed by tuple(A, B). Goal is to put A in rows, B in columns, report data in table form. >>> d = {(1,'a'):3, (1,'b'):4, (2,'a'):5, (2,'b'):0} >>> print tabulate(d) =========== o a b ----------- 1 3 4 2 5 0 ----------- >>> print tabulate(d, tr...
def CLJP(S, color=False): if not isspmatrix_csr(S): raise TypeError() S = remove_diagonal(S) colorid = 0 if color: colorid = 1 T = S.T.tocsr() splitting = np.empty(S.shape[0], dtype=) amg_core.cljp_naive_splitting(S.shape[0], S.indp...
Compute a C/F splitting using the parallel CLJP algorithm. Parameters ---------- S : csr_matrix Strength of connection matrix indicating the strength between nodes i and j (S_ij) color : bool use the CLJP coloring approach Returns ------- splitting : array A...
def get_git_home(path=): ctx = click.get_current_context(silent=True) if ctx and GIT_KEY in ctx.meta: return ctx.meta[GIT_KEY] from git import Repo return Repo(path, search_parent_directories=True).working_dir
Get Git path from the current context.
def crc8(data): crc = 0 for byte in data: crc ^= byte for _ in range(8): if crc & 0x01: crc = (crc >> 1) ^ 0x8C else: crc >>= 1 crc &= 0xFF return crc
Perform the 1-Wire CRC check on the provided data. :param bytearray data: 8 byte array representing 64 bit ROM code
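The CRC routine above is complete; as a sanity check, here is a self-contained copy exercised against the standard CRC-8/MAXIM check vector (a verification sketch, not part of the original library):

```python
def crc8(data):
    """Dallas/Maxim 1-Wire CRC-8 (reflected polynomial 0x8C), as in the snippet above."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x01:
                crc = (crc >> 1) ^ 0x8C
            else:
                crc >>= 1
        crc &= 0xFF
    return crc

# Standard CRC-8/MAXIM check value for the ASCII string "123456789".
print(hex(crc8(b"123456789")))  # → 0xa1
```

A useful property for 1-Wire ROM codes: appending the CRC of the first 7 bytes as the 8th byte makes the CRC of all 8 bytes zero, which is how device ROMs are validated.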
def _set_fc_port(self, v, load=False): if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=YANGListType("name",fc_port.fc_port, yang_name="fc-port", rest_name="FibreChannel", parent=self, is_container=, user_ordered=True, path_helper=self._path_helper, yang_keys=, extensions={...
Setter method for fc_port, mapped from YANG variable /interface/fc_port (list) If this variable is read-only (config: false) in the source YANG file, then _set_fc_port is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_fc_port() directly. ...
def stat(self, follow_symlinks=True): if follow_symlinks: if self._statresult_symlink is None: file_object = self._filesystem.resolve(self.path) if self._filesystem.is_windows_fs: file_object.st_nlink = 0 self._statresult_s...
Return a stat_result object for this entry. Args: follow_symlinks: If False and the entry is a symlink, return the result for the symlink, otherwise for the object it points to.
def close_filenos(preserve): maxfd = resource.getrlimit(resource.RLIMIT_NOFILE)[1] if maxfd == resource.RLIM_INFINITY: maxfd = 4096 for fileno in range(maxfd): if fileno not in preserve: try: os.close(fileno) except OSError as err: ...
Close unprotected file descriptors Close all open file descriptors that are not in preserve. If `ulimit -n` is "unlimited", descriptors up to 4096 are closed; otherwise all descriptors up to the limit reported by resource.getrlimit() are closed. :param preserve: set of protected file descriptors :type preserve: set :return: None
def update_ssh_public_key( self, name, ssh_public_key, update_mask=None, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): if "update_ssh_public_key" not in self._inner_a...
Updates an SSH public key and returns the profile information. This method supports patch semantics. Example: >>> from google.cloud import oslogin_v1 >>> >>> client = oslogin_v1.OsLoginServiceClient() >>> >>> name = client.fingerprint_path('[U...
def bear(a1, b1, a2, b2): tol = 1e-15 v1 = CartesianVector.from_spherical(1.0, a1, b1) v2 = CartesianVector.from_spherical(1.0, a2, b2) v0 = CartesianVector.from_spherical(r=1.0, alpha=0.0, delta=d2r(90.0)) if abs(v1.cross(v0).mod) < tol...
Find bearing/position angle between two points on a unit sphere. Parameters ---------- a1, b1 : float Longitude-like and latitude-like angles defining the first point. Both are in radians. a2, b2 : float Longitude-like and latitude-like angles defining the second point....
def get_template_loader(self, subdir=): s template loaderdjango_mako_plus') return dmp.engine.get_template_loader(self.app, subdir)
App-specific function to get the current app's template loader
def getLoggingLocation(self): if sys.platform == "win32": modulePath = os.path.realpath(__file__) modulePath = modulePath[:modulePath.rfind("/")] return modulePath else: return "/tmp" return ""
Return the path for the calcpkg.log file - at the moment, only use a Linux path since I don't know where Windows thinks logs should go.
def c_if(self, classical, val): for gate in self.instructions: gate.c_if(classical, val) return self
Add classical control register to all instructions.
def terminate(self): self.toggle_scan(False) self.keep_going = False self.join()
Signal runner to stop and join thread.
def create_hosting_device_resources(self, context, complementary_id, tenant_id, mgmt_context, max_hosted): mgmt_port = None if mgmt_context and mgmt_context.get() and tenant_id: p_spec = {: { : tenant_id, ...
Create resources for a hosting device in a plugin specific way.
def updateFile(self, logical_file_name=[], is_file_valid=1, lost=0, dataset=): if lost in [1, True, , , , , ]: lost = 1 if is_file_valid in [1, True, , , , , ]: dbsExceptionHandler("dbsException-invalid-input2", dbsExceptionCode["dbsException-invalid-input2"], se...
API to update file status :param logical_file_name: logical_file_name to update (optional), but must have either an lfn or a dataset :type logical_file_name: str :param is_file_valid: valid=1, invalid=0 (Required) :type is_file_valid: bool :param lost: default lost=0 (op...
def commit(self, changeset_id: uuid.UUID) -> None: self._validate_changeset(changeset_id) journal_data = self.journal.commit_changeset(changeset_id) if self.journal.is_empty(): self.reset() for key, value in journal_data.items(): ...
Commits a given changeset. This merges the given changeset and all subsequent changesets into the previous changeset giving precedence to later changesets in case of any conflicting keys. If this is the base changeset then all changes will be written to the underlying database and the J...
def load_data(self, mode="train", format="csv"): if mode in self.data.keys(): run_dates = pd.DatetimeIndex(start=self.start_dates[mode], end=self.end_dates[mode],freq="1D") run_date_str = [d.strftime("%Y%m%d-%H%M") for d in run_dates.date]...
Load data from flat data files containing total track information and information about each timestep. The two sets are combined using merge operations on the Track IDs. Additional member information is gathered from the appropriate member file. Args: mode: "train" or "forecast" ...
def post_event(self, event): url = "{0}/{1}/projects/{2}/events/{3}".format(self.base_url, self.api_version, self.project_id, event.event_collection) headers = utilities.headers(self.w...
Posts a single event to the Keen IO API. The write key must be set first. :param event: an Event to upload
def num_to_var_int(x): x = int(x) if x < 253: return from_int_to_byte(x) elif x < 65536: return from_int_to_byte(253) + encode(x, 256, 2)[::-1] elif x < 4294967296: return from_int_to_byte(254) + encode(x, 256, 4)[::-1] else: return from_int_to_byte(255) + enc...
Bitcoin-specific: convert an integer into a variable-length integer.
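The docstring names Bitcoin's variable-length integer (CompactSize) format; since `from_int_to_byte` and `encode` are library helpers not shown here, a self-contained sketch using only `struct` might look like:

```python
import struct

def num_to_var_int(x):
    """Bitcoin CompactSize encoding sketch: a single byte for values below 253,
    otherwise a 0xFD/0xFE/0xFF marker followed by 2/4/8 little-endian bytes."""
    x = int(x)
    if x < 253:
        return struct.pack('<B', x)
    elif x < 0x10000:
        return b'\xfd' + struct.pack('<H', x)
    elif x < 0x100000000:
        return b'\xfe' + struct.pack('<I', x)
    else:
        return b'\xff' + struct.pack('<Q', x)

print(num_to_var_int(252).hex())    # → fc
print(num_to_var_int(253).hex())    # → fdfd00
print(num_to_var_int(65536).hex())  # → fe00000100
```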
def Scatter(y, win=13, remove_outliers=False): if remove_outliers: if len(y) >= 50: ys = y - Smooth(y, 50) else: ys = y M = np.nanmedian(ys) MAD = 1.4826 * np.nanmedian(np.abs(ys - M)) out = [] for i, _ in enumerate(y): ...
Return the scatter in ppm based on the median running standard deviation for a window size of :py:obj:`win` = 13 cadences (for K2, this is ~6.5 hours, as in VJ14). :param ndarray y: The array whose CDPP is to be computed :param int win: The window size in cadences. Default `13` :param bool remove_o...
def _check_samples_line(klass, arr): if len(arr) <= len(REQUIRE_NO_SAMPLE_HEADER): if tuple(arr) != REQUIRE_NO_SAMPLE_HEADER: raise exceptions.IncorrectVCFFormat( "Sample header line indicates no sample but does not " "equal required p...
Perform additional check on the samples line
def ListClientsForKeywords(self, keywords, start_time=None): res = {kw: [] for kw in keywords} for kw in keywords: for client_id, timestamp in iteritems(self.keywords.get(kw, {})): if start_time is not None and timestamp < start_time: continue res[kw].append(client_id) r...
Lists the clients associated with keywords.
def getFrontmostApp(cls): apps = cls._getRunningApps() for app in apps: pid = app.processIdentifier() ref = cls.getAppRefByPid(pid) try: if ref.AXFrontmost: return ref except (_a11y.ErrorUnsupported, ...
Get the current frontmost application. Raise a ValueError exception if no GUI applications are found.
def validate(self, columns=None): schema = self.schema() if not columns: ignore_flags = orb.Column.Flags.Virtual | orb.Column.Flags.ReadOnly columns = schema.columns(flags=~ignore_flags).values() use_indexes = True else: use_indexes = Fals...
Validates the current record object to make sure it is ok to commit to the database. If the optional override dictionary is passed in, then it will use the given values vs. the one stored with this record object which can be useful to check to see if the record will be valid before it is commit...
def _parse_cpe_name(cpe): part = { : , : , : , } ret = {} cpe = (cpe or ).split() if len(cpe) > 4 and cpe[0] == : if cpe[1].startswith(): ret[], ret[], ret[] = cpe[2:5] ret[] = cpe[5] if len(cpe) > 5 else None ret[] = part.ge...
Parse CPE_NAME data from the os-release Info: https://csrc.nist.gov/projects/security-content-automation-protocol/scap-specifications/cpe :param cpe: :return:
def _filter_defs_at_call_sites(self, defs): filtered_defs = LiveDefinitions() for variable, locs in defs.items(): if isinstance(variable, SimRegisterVariable): if self.project.arch.name == : if variable.reg in (self.project.arch...
If we are not tracing into the functions that are called in a real execution, we should properly filter the defs to account for the behavior of the skipped function at this call site. This function is a WIP. See TODOs inside. :param defs: :return:
def build_attrs(self, *args, **kwargs): "Helper function for building an attribute dictionary." self.attrs = self.widget.build_attrs(*args, **kwargs) return self.attrs
Helper function for building an attribute dictionary.
def amari_alpha(logu, alpha=1., self_normalized=False, name=None): with tf.compat.v1.name_scope(name, "amari_alpha", [logu]): if alpha is None or tf.is_tensor(alpha): raise TypeError("`alpha` cannot be `None` or `Tensor` type.") if (self_normalized is None or tf.is_tensor(self_normalized)): rai...
The Amari-alpha Csiszar-function in log-space. A Csiszar-function is a member of, ```none F = { f:R_+ to R : f convex }. ``` When `self_normalized = True`, the Amari-alpha Csiszar-function is: ```none f(u) = { -log(u) + (u - 1), alpha = 0 { u log(u) - (u - 1), alpha = 1 { [(u*...
def get_job(self, job_resource_name: str) -> Dict: return self.service.projects().programs().jobs().get( name=job_resource_name).execute()
Returns metadata about a previously created job. See get_job_result if you want the results of the job and not just metadata about the job. Params: job_resource_name: A string of the form `projects/project_id/programs/program_id/jobs/job_id`. Returns: ...
def register_plugin(self): self.breakpoints.edit_goto.connect(self.main.editor.load) self.breakpoints.clear_all_breakpoints.connect( self.main.editor.clear_all_breakpoints) self.breakpoints.clear_breakpoint.connect( self...
Register plugin in Spyder's main window
def _squeeze(x, axis): x = tf.convert_to_tensor(value=x, name=) if axis is None: return tf.squeeze(x, axis=None) axis = tf.convert_to_tensor(value=axis, name=, dtype=tf.int32) axis += tf.zeros([1], dtype=axis.dtype) keep_axis, _ = tf.compat.v1.setdiff1d(tf.range(0, tf.rank(x)), axis) return tf.resh...
A version of squeeze that works with dynamic axis.
def imports(modules, forgive=False): def wrap(f): if modules: setattr(f, , modules) for alternatives in modules: alternatives = alternatives.split() mod_name = alternatives[0].split()[-1] ...
Should be used as a decorator to *attach* import statements to function definitions. These imports are added to the global (i.e. module-level of the decorated function) namespace. Two forms of import statements are supported (in the following examples ``foo``, ``bar``, ``oof``, and ``rab`` are mo...
def generate_set_partitions(set_): set_ = scipy.asarray(set_) strings = generate_set_partition_strings(len(set_)) partitions = [] for string in strings: blocks = [] for block_num in scipy.unique(string): blocks.append(set_[string == block_num]) partitions.append(...
Generate all of the partitions of a set. This is a helper function that utilizes the restricted growth strings from :py:func:`generate_set_partition_strings`. The partitions are returned in lexicographic order. Parameters ---------- set_ : :py:class:`Array` or other Array-like, (`m`,) ...
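The restricted-growth-string technique the docstring mentions can be demonstrated with a stdlib-only sketch (an illustration for non-empty sets, not the `scipy`-based original): a string `s` encodes a partition by assigning element `i` to block `s[i]`, with `s[0] == 0` and each new value at most one above the running maximum.

```python
def restricted_growth_strings(n):
    # Enumerate all restricted growth strings of length n >= 1.
    def extend(prefix, top):
        if len(prefix) == n:
            yield tuple(prefix)
            return
        for v in range(top + 2):  # any seen block, or one new block
            yield from extend(prefix + [v], max(top, v))
    yield from extend([0], 0)

def set_partitions(items):
    # Decode each growth string into a list of blocks.
    items = list(items)
    for s in restricted_growth_strings(len(items)):
        blocks = {}
        for item, block in zip(items, s):
            blocks.setdefault(block, []).append(item)
        yield [blocks[k] for k in sorted(blocks)]
```

For a 3-element set this yields the 5 partitions counted by the Bell number B(3).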
def pair(seeders, delegator_factory, *args, **kwargs): return (chain(*seeders) if len(seeders) > 1 else seeders[0], delegator_factory(*args, **kwargs))
The basic pair producer. :return: a (seeder, delegator_factory(\*args, \*\*kwargs)) tuple. :param seeders: If it is a seeder function or a list of one seeder function, it is returned as the final seeder. If it is a list of more than one seeder function, they are chained togethe...
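A minimal usage sketch of the pair producer, with `dict` standing in as a hypothetical delegator factory: two seeder iterables are chained into a single seeder, and the factory's arguments are forwarded unchanged.

```python
from itertools import chain

def pair(seeders, delegator_factory, *args, **kwargs):
    # Chain multiple seeders into one; pass a single seeder through as-is.
    return (chain(*seeders) if len(seeders) > 1 else seeders[0],
            delegator_factory(*args, **kwargs))

seeder, delegator = pair([range(2), range(2, 4)], dict, a=1)
```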
def iter(context, resource, **kwargs): data = utils.sanitize_kwargs(**kwargs) id = data.pop('id', None) subresource = data.pop('subresource', None) data['limit'] = data.get('limit', 20) if subresource: uri = '%s/%s/%s/%s' % (context.dci_cs_api, resource, id, subresource) resource = subresource else: uri = ...
List all resources
def init_connection_file(self): if self.existing: try: cf = find_connection_file(self.existing) except Exception: self.log.critical("Could not find existing kernel connection file %s", self.existing) self.exit(1) self.l...
find the connection file, and load the info if found. The current working directory and the current profile's security directory will be searched for the file if it is not given by absolute path. When attempting to connect to an existing kernel and the `--existing` ...
def _validate_forbidden(self, forbidden_values, field, value): if isinstance(value, _str_type): if value in forbidden_values: self._error(field, errors.FORBIDDEN_VALUE, value) elif isinstance(value, Sequence): forbidden = set(value) & set(forbidden_values...
{'type': 'list'}
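The forbidden-values rule above can be sketched as a standalone check; this hypothetical helper returns the offending values instead of reporting errors through a validator, but follows the same branch structure: a string is treated as a scalar, any other sequence element-wise.

```python
def forbidden_errors(value, forbidden_values):
    # Return the items of 'value' that appear in 'forbidden_values'.
    if isinstance(value, str):
        return [value] if value in forbidden_values else []
    return sorted(set(value) & set(forbidden_values))
```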
def _calc_footprint(self): corners = [self.corner(corner) for corner in self.corner_types()] coords = [] for corner in corners: shape = corner.get_shape(corner.crs) coords.append([shape.x, shape.y]) shp = Polygon(coords) self._footprint ...
Return rectangle in world coordinates, as GeoVector.
def t0_ref_supconj(b, orbit, solve_for=None, **kwargs): orbit_ps = _get_system_ps(b, orbit) metawargs = orbit_ps.meta metawargs.pop('qualifier')
Create a constraint for t0_ref in an orbit - allowing translating between t0_ref and t0_supconj. :parameter b: the :class:`phoebe.frontend.bundle.Bundle` :parameter str orbit: the label of the orbit in which this constraint should be built :parameter str solve_for: if 't0_ref' should not be th...
def pushback(self) -> None: if abs(self._idx - 1) > self._pushback_depth: raise IndexError("Exceeded pushback depth") self._idx -= 1
Push one character back onto the stream, allowing it to be read again.
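A minimal stream with bounded pushback illustrates the contract above; this sketch is a simplification (it refuses to move before the start of the buffer rather than tracking a separate pushback depth), but it raises `IndexError` on over-pushback just as `pushback()` does.

```python
class PushbackStream:
    # Character stream over a string with single-step pushback.
    def __init__(self, text):
        self._text = text
        self._idx = 0

    def read(self):
        ch = self._text[self._idx]
        self._idx += 1
        return ch

    def pushback(self):
        # Rewind one character so it can be read again.
        if self._idx - 1 < 0:
            raise IndexError("Exceeded pushback depth")
        self._idx -= 1
```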
def _str_to_int(self, string): string = string.lower() if string.endswith("l"): string = string[:-1] if string.lower().startswith("0x"): match = re.match(r"0x([0-9a-f]+)", string) return int(match.group(1), 0x10) else: return int(str...
Convert a string to an integer, stripping a trailing 'l'/'L' long suffix and parsing a '0x' prefix as hexadecimal.
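The same conversion can be written as a standalone function (a sketch of the logic above, without the regex detour, since `int(s, 16)` already accepts the `0x` prefix):

```python
def str_to_int(string):
    # Lowercase, drop a Python-2 long suffix 'l', then parse as hex or decimal.
    string = string.lower()
    if string.endswith("l"):
        string = string[:-1]
    if string.startswith("0x"):
        return int(string, 16)
    return int(string)
```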
def containerIsRunning(container_name): client = docker.from_env(version='auto') try: this_container = client.containers.get(container_name) if this_container.status == 'running': return True else: return False except NotFound: return None excep...
Checks whether the container is running or not. :param container_name: Name of the container being checked. :returns: True if status is 'running', False if status is anything else, and None if the container does not exist.
def get_jids(): serv = _get_serv(ret=None) jids = _get_list(serv, 'jids') loads = serv.get_multi(jids) ret = {} for jid, load in six.iteritems(loads): ret[jid] = salt.utils.jid.format_jid_instance(jid, salt.utils.json.loads(load)) return ret
Return a list of all job ids
def source_absent(name): ret = {'name': name, 'changes': {}, 'result': None, 'comment': ''} if name not in __salt__['imgadm.sources'](): ret['result'] = True ret['comment'] = 'image source {0} is absent'.format(name) else: if __opts__['test']: res = {} ret['result'] = True else: res = __salt...
Ensure an image source is absent on the computenode name : string source url
def lazy_result(f): @wraps(f) def decorated(ctx, param, value): return LocalProxy(lambda: f(ctx, param, value)) return decorated
Decorate function to return a lazily-evaluated LocalProxy.
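The deferred-evaluation pattern can be shown without werkzeug's `LocalProxy`; this sketch substitutes a minimal hypothetical `LazyResult` class so the wrapped function runs only when the value is actually requested, which is the behaviour `lazy_result` relies on.

```python
from functools import wraps

class LazyResult:
    # Stand-in for LocalProxy: defer calling 'func' until .get() is used.
    def __init__(self, func):
        self._func = func
        self._evaluated = False
        self._value = None

    def get(self):
        if not self._evaluated:
            self._value = self._func()
            self._evaluated = True
        return self._value

def lazy_result(f):
    @wraps(f)
    def decorated(ctx, param, value):
        return LazyResult(lambda: f(ctx, param, value))
    return decorated
```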
def filter(args): p = OptionParser(filter.__doc__) p.add_option("--score", dest="score", default=0, type="int", help="Score cutoff") p.set_align(pctid=95, hitlen=100, evalue=.01) p.add_option("--noself", default=False, action="store_true", help="Remove self-self hi...
%prog filter test.blast Produce a new blast file and filter based on: - score: >= cutoff - pctid: >= cutoff - hitlen: >= cutoff - evalue: <= cutoff - ids: valid ids Use --inverse to obtain the complementary records for the criteria above. - noself: remove self-self hits
def setup(config, minconn=5, maxconn=10, adapter='mysql', key='default', slave=False): global __db if '.' in key: raise TypeError('The Key Can\'t Contain dot, The Key: "%s" was set' % key) master_key = key slave_key = key + '.slave' if not slave: __db[master_key] = database if slave_key not in __db: __db[slave_key] = [database] else: ...
Setup database :param config dict: is the db adapter config :param key string: the key to identify the database :param adapter string: the database adapter, currently supports mysql only :param minconn int: the minimum number of connections in the connection pool :param maxconn int: the maximum number of connections in the connection pool ...
def matched_filter(template, data, psd=None, low_frequency_cutoff=None, high_frequency_cutoff=None, sigmasq=None): snr, _, norm = matched_filter_core(template, data, psd=psd, low_frequency_cutoff=low_frequency_cutoff, high_frequency_cutoff=high_frequency_cutoff, h_norm...
Return the complex snr. Return the complex snr, along with its associated normalization of the template, matched filtered against the data. Parameters ---------- template : TimeSeries or FrequencySeries The template waveform data : TimeSeries or FrequencySeries The strain data ...
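The matched-filter idea can be illustrated with a toy time-domain version (stdlib only; the real `matched_filter` works in the frequency domain with PSD weighting, which this omits): correlate a unit-normalised template against the data at every lag and look for the peak.

```python
import math

def matched_filter_snr(template, data):
    # Normalise the template to unit energy, then slide it across the data.
    norm = math.sqrt(sum(t * t for t in template))
    template = [t / norm for t in template]
    n, m = len(data), len(template)
    return [sum(template[j] * data[i + j] for j in range(m))
            for i in range(n - m + 1)]
```

The lag of the maximum output marks where the data best matches the template.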