id (stringlengths 24-28) | content (stringlengths 121-2.08k)
---|---
codereview_python_data_2312 | '''
node_part = _CAPI_DGLMetisPartition(g._graph, k)
node_part = utils.toindex(node_part)
- print(node_part.tousertensor())
return partition_graph_with_halo(g, node_part, 1)
def compact_graphs(graphs, always_preserve=None):
Is it better to set the partition id to node frame, and use another function to split into dict? Since I would assume the partition result could be used somewhere else.
'''
node_part = _CAPI_DGLMetisPartition(g._graph, k)
node_part = utils.toindex(node_part)
return partition_graph_with_halo(g, node_part, 1)
def compact_graphs(graphs, always_preserve=None): |
codereview_python_data_2325 | value = os.path.expanduser(value)
value = os.path.expandvars(value)
if not os.path.isabs(value):
- if standarddir.config():
abspath = os.path.join(standarddir.config(), value)
if os.path.isfile(abspath):
return abspath
Please explicitly check for `... is not None` here - in this case it wouldn't matter as it never should be anything else falsey (i.e. it'd never return an empty string), but it still makes it more clear this is expected to be a string or `None`.
value = os.path.expanduser(value)
value = os.path.expandvars(value)
if not os.path.isabs(value):
+ if standarddir.config() is not None:
abspath = os.path.join(standarddir.config(), value)
if os.path.isfile(abspath):
return abspath |
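A minimal sketch (with a hypothetical `config_dir` helper) of why the review above prefers an explicit `is not None` check: an empty string is falsy but still a value, and a plain truthiness test would silently skip it.

```python
def config_dir():
    return ""  # falsy, but not None

if config_dir():  # skips the branch for "" -- usually not the intent
    print("truthy check ran")

if config_dir() is not None:  # runs for "" too; only skips for None
    print("explicit check ran")
```

Only the second branch fires here, which is exactly the distinction the review asks the code to make explicit.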
codereview_python_data_2330 | "<!-- Created by KGML_Pathway.py %s -->" % time.asctime(),
]
)
- rough_xml = header + as_string(ET.tostring(self.element, "utf-8"))
reparsed = minidom.parseString(rough_xml)
return reparsed.toprettyxml(indent=" ")
Do we need the as_string here? It looks like elementtree is already returning a string.
"<!-- Created by KGML_Pathway.py %s -->" % time.asctime(),
]
)
+ rough_xml = header + ET.tostring(self.element, "utf-8")
reparsed = minidom.parseString(rough_xml)
return reparsed.toprettyxml(indent=" ") |
codereview_python_data_2331 | # but for now it is.
if not flow:
raise exceptions.CommandError("No flow selected.")
require_dummy_response = (
part in ("response-headers", "response-body", "set-cookies") and
flow.response is None
)
- flow.backup()
if require_dummy_response:
flow.response = http.HTTPResponse.make()
if part == "cookies":
Nit: Don't move it between `require_dummy_response` definition and usage, this can live above or below :)
# but for now it is.
if not flow:
raise exceptions.CommandError("No flow selected.")
+ flow.backup()
+
require_dummy_response = (
part in ("response-headers", "response-body", "set-cookies") and
flow.response is None
)
if require_dummy_response:
flow.response = http.HTTPResponse.make()
if part == "cookies": |
codereview_python_data_2333 | Args:
session (object): Database session.
- model (Model): Model name to create.
dao (object): Data Access Object from dao.py
service_config (ServiceConfig): Service configuration.
inventory_id (str): Inventory id to import from
Do you still need the `name` in the arg description, if this is not `str` type anymore?
Args:
session (object): Database session.
+ model (Model): Model object.
dao (object): Data Access Object from dao.py
service_config (ServiceConfig): Service configuration.
inventory_id (str): Inventory id to import from |
codereview_python_data_2340 | ('Dataset and results have different sizes: '
f'{self.cumulative_sizes[-1]} v.s. {len(results)}')
if self.separate_eval:
dataset_idx = -1
total_eval_results = dict()
Now the result is a dict of dict. Chances are that the TextLogger.average may fail if TextLogger.average is called. Please double check that.
('Dataset and results have different sizes: '
f'{self.cumulative_sizes[-1]} v.s. {len(results)}')
+ # Check whether all the datasets support evaluation
+ for dataset in self.datasets:
+ assert hasattr(dataset, 'evaluate'), \
+ f'{type(dataset)} does not implement evaluate function'
+
if self.separate_eval:
dataset_idx = -1
total_eval_results = dict() |
codereview_python_data_2345 | upgrade_message = "{0} Agent upgrade discovered, updating to {1} -- exiting"
if is_hotfix_upgrade and next_hotfix_time <= now:
- raise ExitException(upgrade_message.format(AgentUpgradeType.Hotfix, available_agent.name))
elif (not is_hotfix_upgrade) and next_normal_time <= now:
- raise ExitException(upgrade_message.format(AgentUpgradeType.Normal, available_agent.name))
# Not upgrading the agent as the times don't match for their relevant upgrade, logging it appropriately
if is_hotfix_upgrade:
Guess it is beneficial to include current agent version here in the logs
upgrade_message = "{0} Agent upgrade discovered, updating to {1} -- exiting"
if is_hotfix_upgrade and next_hotfix_time <= now:
+ raise AgentUpgradeExitException(upgrade_message.format(AgentUpgradeType.Hotfix, available_agent.name))
elif (not is_hotfix_upgrade) and next_normal_time <= now:
+ raise AgentUpgradeExitException(upgrade_message.format(AgentUpgradeType.Normal, available_agent.name))
# Not upgrading the agent as the times don't match for their relevant upgrade, logging it appropriately
if is_hotfix_upgrade: |
codereview_python_data_2353 | we infer in GraphML that both are floats.
named_key_ids : bool (optional)
If True use attr.name as value for key elements' id attribute.
- edge_id_from_attribute : keyword argument, hashtable identifier (optional),
- Select edge_attribute for edge_id
Examples
--------
I think this parameter description needs to be made more clear, e.g. ```suggestion edge_id_from_attribute : dict key (optional), If provided, the graphml edge id is set by looking up the corresponding edge data attribute keyed by this parameter. If `None`, the edge id is set by the edge key if `G` is a MultiGraph, else the edge id is left unset. ``` It's difficult to capture the behavior in a concise description, but I think something along these lines would be a bit more clear. It'd be great to include an example in the docstring, but it's not necessary for this PR.
we infer in GraphML that both are floats.
named_key_ids : bool (optional)
If True use attr.name as value for key elements' id attribute.
+ edge_id_from_attribute : dict key (optional)
+ If provided, the graphml edge id is set by looking up the corresponding
+ edge data attribute keyed by this parameter. If `None` or the key does not exist in edge data,
+ the edge id is set by the edge key if `G` is a MultiGraph, else the edge id is left unset.
Examples
-------- |
codereview_python_data_2357 | <h1>Error 503 Backend is unhealthy</h1>
<p>Backend is unhealthy</p>
<h3>Guru Mediation:</h3>
- <p>Details: cache-sea4458-SEA 1645542657 1832937762</p>
<hr>
<p>Varnish cache server</p>
</body>
would it be worth importing these defines from `influx_listenstore`?
<h1>Error 503 Backend is unhealthy</h1>
<p>Backend is unhealthy</p>
<h3>Guru Mediation:</h3>
+ <p>Details: cache-sea4445-SEA 1645542657 290303048</p>
<hr>
<p>Varnish cache server</p>
</body> |
codereview_python_data_2359 | 'scatter_kws']
})
def __call__(self, axis=None, cyclic_index=0, lbrt=None):
dfview = self._stack.last
Try something like this: `style_opts = [el for key in dframe_options for el in dframe_options[key]]` This would let you remove the class property stuff.
'scatter_kws']
})
+ style_opts = list({opt for opts in dframe_options.values() for opt in opts})
+
def __call__(self, axis=None, cyclic_index=0, lbrt=None):
dfview = self._stack.last |
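For reference, a standalone sketch of the flattening pattern adopted above, with made-up option names; the set comprehension de-duplicates options shared across plot types before the final `list()`.

```python
dframe_options = {
    'plot': ['alpha', 'color', 'linestyle'],
    'hist': ['alpha', 'color', 'bins'],
}

# Unique option names across all plot types
style_opts = list({opt for opts in dframe_options.values() for opt in opts})
print(sorted(style_opts))  # ['alpha', 'bins', 'color', 'linestyle']
```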
codereview_python_data_2364 | norm_cfg=norm_cfg,
**kwargs))
inplanes = planes * block.expansion
- for i in range(1, num_blocks):
layers.append(
block(
inplanes=inplanes,
Although the results are the same, the following is less misleading: ```python for i in range(num_blocks - 1) ```
norm_cfg=norm_cfg,
**kwargs))
inplanes = planes * block.expansion
+ for _ in range(num_blocks - 1):
layers.append(
block(
inplanes=inplanes, |
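Both loop headers run the body the same number of times; a quick check (hypothetical `num_blocks` value) shows why the second form is less misleading -- the iteration count `num_blocks - 1` is spelled out instead of hidden in the start index.

```python
num_blocks = 4
a = list(range(1, num_blocks))   # [1, 2, 3]
b = list(range(num_blocks - 1))  # [0, 1, 2]
assert len(a) == len(b) == num_blocks - 1
```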
codereview_python_data_2366 | def slot_code(self, scope):
if not self._needs_own(scope):
# if the type does not have object attributes, it can
- # delegate GC methods to its parent - if the parent
# functions are defined in the same module
slot_code = self._parent_slot_function(scope)
return slot_code or '0'
I don't think this is a typo. In science, and probably other contexts, "iff " commonly refers to "if, and only if, " ```suggestion # delegate GC methods to its parent - iff the parent ```
def slot_code(self, scope):
if not self._needs_own(scope):
# if the type does not have object attributes, it can
+ # delegate GC methods to its parent - iff the parent
# functions are defined in the same module
slot_code = self._parent_slot_function(scope)
return slot_code or '0' |
codereview_python_data_2370 | self.base_class_path = [self.selenium_server_jar_path, self.junit_path, self.junit_listener_path,
self.hamcrest_path, self.json_jar_path]
self.base_class_path.extend(self.scenario.get("additional-classpath", []))
- self.base_class_path=[os.path.abspath(x) for x in self.base_class_path]
def prepare(self):
"""
Code style. Btw, it's weird Codacy didn't catch that.
self.base_class_path = [self.selenium_server_jar_path, self.junit_path, self.junit_listener_path,
self.hamcrest_path, self.json_jar_path]
self.base_class_path.extend(self.scenario.get("additional-classpath", []))
+ self.base_class_path = [os.path.abspath(executor.engine.find_file(x)) for x in self.base_class_path]
def prepare(self):
""" |
codereview_python_data_2374 | parsed = []
headers = {'User-Agent': self._user_agent}
- # Some videos may be also available on (especially on CNews)
if videos['ID_DM'] != '':
- for stream in self.session.streams('https://www..com/video/' + videos['ID_DM']).items():
yield stream
for quality, video_url in list(videos['MEDIA']['VIDEOS'].items()):
This looks like you've accidentally replaced the string "DailyMotion" with "" in the entire project.
parsed = []
headers = {'User-Agent': self._user_agent}
+ # Some videos may be also available on Dailymotion (especially on CNews)
if videos['ID_DM'] != '':
+ for stream in self.session.streams('https://www.dailymotion.com/video/' + videos['ID_DM']).items():
yield stream
for quality, video_url in list(videos['MEDIA']['VIDEOS'].items()): |
codereview_python_data_2375 | """ return wallet synchronization status """
return self.wallet.is_up_to_date()
- @command('')
def getfee(self):
"""Return current optimal fee per kilobyte, according to
config settings (static/dynamic)"""
Shouldn't this use `'n'` instead?
""" return wallet synchronization status """
return self.wallet.is_up_to_date()
+ @command('n')
def getfee(self):
"""Return current optimal fee per kilobyte, according to
config settings (static/dynamic)""" |
codereview_python_data_2378 | }
candies = inventory.candies().get(pokemon.pokemon_id).quantity
- threshold = pokemon_config.get('candy_threshold', 400)
- if( candies > threshold ):
self.emit_event(
'ignore_candy_above_thresold',
level='info',
candy_threshold should default to false, since this code will be triggered when the config has not been changed for candy_threshold.
}
candies = inventory.candies().get(pokemon.pokemon_id).quantity
+ threshold = pokemon_config.get('candy_threshold', False)
+ if( threshold > 0 and candies > threshold ):
self.emit_event(
'ignore_candy_above_thresold',
level='info', |
codereview_python_data_2379 | def __repr__(self):
repr_str = self.__class__.__name__
repr_str += f'(min_ious={self.min_ious}, '
- repr_str += f'min_crop_size={self.min_crop_size})'
repr_str += f'bbox_clip_border={self.bbox_clip_border})'
return repr_str
```python f'min_crop_size={self.min_crop_size})' -> f'min_crop_size={self.min_crop_size}, ' ```
def __repr__(self):
repr_str = self.__class__.__name__
repr_str += f'(min_ious={self.min_ious}, '
+ repr_str += f'min_crop_size={self.min_crop_size}, '
repr_str += f'bbox_clip_border={self.bbox_clip_border})'
return repr_str |
codereview_python_data_2390 | """
def __init__(self):
- """Initialize."""
LOGGER.debug('Initializing SecurityCenterClient')
self.repository = SecurityCenterRepositoryClient()
Maybe 'Unable to create CSCC finding:' is easier to understand in this case?
"""
def __init__(self):
+ """Initialize.
+ TODO: Add api quota configs here.
+ max_calls, quota_period = api_helpers.get_ratelimiter_config(
+ inventory_configs.api_quota_configs, 'securitycenter')
+ """
LOGGER.debug('Initializing SecurityCenterClient')
self.repository = SecurityCenterRepositoryClient() |
codereview_python_data_2391 | ##### Distributed sampler infrastructure #####
-def CreateSender(ip, port):
""" Create a sender communicator via C socket
Parameter:
maybe we should have the code in another file?
##### Distributed sampler infrastructure #####
+def _create_sender(ip, port):
""" Create a sender communicator via C socket
Parameter: |
codereview_python_data_2393 | data[recipient].add(project_locale)
def get_suggestions(self):
- start = timezone.now() - timedelta(days=1)
return Translation.objects.filter(
approved=False, rejected=False, fuzzy=False
suggestion: What do you think about a separate config option for the number of days between notifications?
data[recipient].add(project_locale)
def get_suggestions(self):
+ start = timezone.now() - timedelta(days=7)
return Translation.objects.filter(
approved=False, rejected=False, fuzzy=False |
codereview_python_data_2395 | def test_process_templates():
- template_dir = os.path.join(os.path.dirname(__file__), '../resources/templates')
temp_dir = tempfile.gettempdir()
repo_name = str(uuid.uuid4())
We do have a temp_dir fixture you could use, which would give you a temporary dir to do this work.
def test_process_templates():
+ template_dir = os.path.join(
+ os.path.dirname(__file__), '../resources/templates')
temp_dir = tempfile.gettempdir()
repo_name = str(uuid.uuid4()) |
codereview_python_data_2397 | ('any_package', None, '', None),
('any_package', None, 'Version: 1.2.3\nVersion: 1.2.4', '1.2.4'),
('any_package', None, 'Version: 1.2.4\nVersion: 1.2.3', '1.2.4'),
- ('any_package', '1.2.5', 'Version: 1.2.3\nVersion: 1.2.4', None),
# self package (APP_NAME)
(APP_NAME, None, 'Version: 1.2.3\nVersion: 1.2.4', '1.2.4'),
(APP_NAME, None, 'Version: 1.2.4\nVersion: 1.2.3', '1.2.4'),
I would keep the multi-line output as well
('any_package', None, '', None),
('any_package', None, 'Version: 1.2.3\nVersion: 1.2.4', '1.2.4'),
('any_package', None, 'Version: 1.2.4\nVersion: 1.2.3', '1.2.4'),
# self package (APP_NAME)
(APP_NAME, None, 'Version: 1.2.3\nVersion: 1.2.4', '1.2.4'),
(APP_NAME, None, 'Version: 1.2.4\nVersion: 1.2.3', '1.2.4'), |
codereview_python_data_2398 | if print_stats:
for key in player_stats:
- print("[#] -- %s: %s" % (key, player_stats[key]))
return json.dumps(player_stats, indent=4)
I would change this to `print("[#] -- {}: {}".format(key, player_stats[key]))` - to keep consistency
if print_stats:
for key in player_stats:
+ print('[#] -- {}: {}'.format(key, player_stats[key]))
return json.dumps(player_stats, indent=4) |
codereview_python_data_2402 | Parameters
----------
obj : object
- An MDAnalysis object
mime : str
The MIME type to add, e.g. "image/svg+xml"
func : callable
That's a really broad input type! There's no restriction here that it should be related to i.e., `Universe` or `atomgroup` as far as visualization-related things goes?
Parameters
----------
obj : object
+ An MDAnalysis :class:`~MDAnalysis.core.universe.Universe` or
+ :class:`~MDAnalysis.core.groups.AtomGroup`
mime : str
The MIME type to add, e.g. "image/svg+xml"
func : callable |
codereview_python_data_2404 | ::
>>> from_key_val_list([('key', 'val')])
- collections.OrderedDict([('key', 'val')])
>>> from_key_val_list('string')
ValueError: need more than 1 value to unpack
>>> from_key_val_list({'key': 'val'})
- collections.OrderedDict([('key', 'val')])
:rtype: OrderedDict
"""
It looks like this is a cli example. We don't want the "collection" lead here. The interpreter is just showing the returned objects repr. Same with the line below this.
::
>>> from_key_val_list([('key', 'val')])
+ OrderedDict([('key', 'val')])
>>> from_key_val_list('string')
ValueError: need more than 1 value to unpack
>>> from_key_val_list({'key': 'val'})
+ OrderedDict([('key', 'val')])
:rtype: OrderedDict
""" |
codereview_python_data_2406 | (re.compile(r'^download-remove --all$'), r'download-clear'),
(re.compile(r'^hint links fill "([^"]*)"$'), r'hint links fill \1'),
-
- (re.compile(r'^set-cmd-text :open -([tb]) {url:pretty}$'),
- r'set-cmd-text :open -\1 -i {url:pretty}'),
- (re.compile(r'^hint links fill :open -t {hint-url}$'),
- r'hint links fill :open -t -i {hint-url}'),
]
Hmm, I'm not really comfortable about this. Those fixes are mainly for things which are clearly not valid anymore, to automatically fix them in user's configs. Having this here would mean an user can never remove `-i` from those keybindings.... So I'd rather not have this rebinding done automatically. (I know this sucks... I want to rip out the current config system as soon as the QtWebEngine-stuff is over)
(re.compile(r'^download-remove --all$'), r'download-clear'),
(re.compile(r'^hint links fill "([^"]*)"$'), r'hint links fill \1'),
] |
codereview_python_data_2409 | class InvalidToken(PlexError):
def __init__(self, token_number, message):
- msg = ("Token number {number}: {message}"
- .format(number=token_number, message=message))
- PlexError.__init__(self, msg)
class InvalidScanner(PlexError):
Aaah! Please don't switch to `.format()` formatting! We use '%' formatting throughout the code base - for various (sometimes historical) reasons, but above all for better looking format strings of C's "{ }" blocks.
class InvalidToken(PlexError):
def __init__(self, token_number, message):
+ PlexError.__init__(self, "Token number %d: %s" % (token_number, message))
class InvalidScanner(PlexError): |
codereview_python_data_2413 | Returns
-------
A list of cycles, where each cycle is represented by a list of nodes
- along the cycle.
Example:
Extra initial spaces here?
Returns
-------
A list of cycles, where each cycle is represented by a list of nodes
+ along the cycle.
Example: |
codereview_python_data_2417 | def _activate_persistor(self):
self._repo_persistor = dnf.persistor.RepoPersistor(self.conf.cachedir)
- def init_plugins(self, disabled_glob=(), enable_plugin=(), cli=None):
# :api
"""Load plugins and run their __init__()."""
if self.conf.plugins:
- self._plugins._load(self.conf, disabled_glob, enable_plugin)
self._plugins._run_init(self, cli)
def configure_plugins(self):
`enable_plugins` probably? Since it's list of plugins.
def _activate_persistor(self):
self._repo_persistor = dnf.persistor.RepoPersistor(self.conf.cachedir)
+ def init_plugins(self, disabled_glob=(), enable_plugins=(), cli=None):
# :api
"""Load plugins and run their __init__()."""
if self.conf.plugins:
+ self._plugins._load(self.conf, disabled_glob, enable_plugins)
self._plugins._run_init(self, cli)
def configure_plugins(self): |
codereview_python_data_2419 | ''.format(attrname))
vals = cur.fetchall()
except sqlite3.DatabaseError:
- raise IOError(
- "Failed reading the atoms from DMS Database")
else:
attrs[attrname] = np.array(vals, dtype=dt)
What was the reason for removing `raise from None`? See PR #2357 for rationale.
''.format(attrname))
vals = cur.fetchall()
except sqlite3.DatabaseError:
+ errmsg = "Failed reading the atoms from DMS Database"
+ raise IOError(errmsg) from None
else:
attrs[attrname] = np.array(vals, dtype=dt) |
codereview_python_data_2420 | def get_result(self, request: Request):
self._validate_request_type(request)
- if not getConfig().enableRichSchemas:
- raise InvalidClientRequest(request.identifier, request.reqId, "RicheSchemas feature is disabled")
id = request.operation[RS_ID]
I think there is a small typo here `RichESchemas` We can make the message more detailed. Something like "RichSchema transaction is disabled" and "GetRichSchema query is disabled". But the current variant is ok.
def get_result(self, request: Request):
self._validate_request_type(request)
+ if not getConfig().ENABLE_RICH_SCHEMAS:
+ raise InvalidClientRequest(request.identifier, request.reqId, "RichSchema queries are disabled")
id = request.operation[RS_ID] |
codereview_python_data_2425 | class PDBParser(object):
"""Parse a PDB file and return a Structure object."""
- def __init__(self, PERMISSIVE=True, structure_builder=None, QUIET=False):
"""Create a PDBParser object.
The PDB parser call a number of standard methods in an aggregated
That looks like an API change (removing the ``get_header`` option) which is not backwards compatible.
class PDBParser(object):
"""Parse a PDB file and return a Structure object."""
+ def __init__(self, PERMISSIVE=True, get_header=False,
+ structure_builder=None, QUIET=False):
"""Create a PDBParser object.
The PDB parser call a number of standard methods in an aggregated |
codereview_python_data_2428 | Parameters
----------
- groupby_ngroups:
shape: tuple
Return
Some description for params is needed?
Parameters
----------
+ groupby_ngroups: str or int
+ number of groups that will be used in `groupby` operation
shape: tuple
Return |
codereview_python_data_2430 | def _translate_str(sequence, table, stop_symbol="*", to_stop=False,
cds=False, pos_stop="X", gap=None):
- """Translate a nucleotide to string (PRIVATE).
Arguments:
- sequence - a string
"a nucleotide" implies a single base letter. How about ``Translate nucleotide string into a protein string (PRIVATE).``
def _translate_str(sequence, table, stop_symbol="*", to_stop=False,
cds=False, pos_stop="X", gap=None):
+ """Translate nucleotide string into a protein string (PRIVATE).
Arguments:
- sequence - a string |
codereview_python_data_2432 | if grant_roles_cmds:
print(constants.MESSAGE_CREATE_ROLE_SCRIPT)
-
- with open('grant_forseti_roles.sh', 'a+') as roles_script:
- for cmd in grant_roles_cmds:
- roles_script.write('%s\n' % ' '.join(cmd))
return True
return False
Why is this being removed?
if grant_roles_cmds:
print(constants.MESSAGE_CREATE_ROLE_SCRIPT)
+ failed_commands = ['%s\n' % ' '.join(cmd) for cmd in grant_roles_cmds]
+ file_name = 'grant_forseti_roles.sh'
+ _generate_script_file(file_name, failed_commands, 'a+')
return True
return False |
codereview_python_data_2435 | centers=centers
)
- X_1, y_1, w_1, dX_1, dy_1, dw_1 = _create_data(
- objective='classification',
- output='array'
- )
-
params = {
"n_estimators": 10,
"num_leaves": 10
Why was this necessary? You should just use the `dask_classifier` defined below this. With this change, you'd only be doing the local predict on arrays each time, but we want to test on all of DataFrame, Array, and sparse matrix.
centers=centers
)
params = {
"n_estimators": 10,
"num_leaves": 10 |
codereview_python_data_2451 | if not self._stats:
return
- with open(file_name, 'w', encoding="utf-8", errors="ignore") as f:
writer = csv.writer(f)
longest = []
Redundant `list`s in this file.
if not self._stats:
return
+ with open(file_name, 'w') as f:
writer = csv.writer(f)
longest = [] |
codereview_python_data_2453 | class BlastTableEntry(object):
- """Store the record details."""
def __init__(self, in_rec):
"""Initialize the class."""
Maybe something about Blast Table Entry, since "record" will have more than one interpretation in the context?
class BlastTableEntry(object):
+ """Store the Blast Table Entry, the field values from the table."""
def __init__(self, in_rec):
"""Initialize the class.""" |
codereview_python_data_2454 | except (pika.exceptions.ConnectionClosed, AttributeError):
pass
- ls = InfluxListenStore({ 'REDIS_HOST' : config.REDIS_HOST,
- 'REDIS_PORT' : config.REDIS_PORT,
- 'INFLUX_HOST': config.INFLUX_HOST,
- 'INFLUX_PORT': config.INFLUX_PORT,
- 'INFLUX_DB_NAME': config.INFLUX_DB_NAME})
- listen_count = ls.get_total_listen_count()
-
try:
user_count = _get_user_count()
except DatabaseException as e:
Wouldn't using `influx_connection._influx` be better here?
except (pika.exceptions.ConnectionClosed, AttributeError):
pass
+ listen_count = _influx.get_total_listen_count()
try:
user_count = _get_user_count()
except DatabaseException as e: |
codereview_python_data_2464 | return formatted_msg
return msg
-
- def sendTeleMessage(self, chat_id=None, parse_mode='Markdown', text=None):
- try:
- self._tbot.sendMessage(chat_id=chat_id, parse_mode=parse_mode, text=text)
- except telegram.error.NetworkError:
- time.sleep(1)
- except telegram.error.TelegramError:
- time.sleep(10)
- except telegram.error.Unauthorized:
- self.update_id += 1
What is this supposed to do here? chat_handler is an abstract handler, not intended to have any telegram-specific logic. The communication to the frontend (telegram and discord currently existing) is done by telegram_handler and discord_handler.
return formatted_msg
return msg |
codereview_python_data_2465 | @batch_transform
-def price_multiple(data, multiplier, keyarg=1):
- return data.price * multiplier * keyarg
class BatchTransformAlgorithm(TradingAlgorithm):
I'm being very picky here, but you might rename `keyarg` here to something like `defaultarg` or `optarg`, since this construction doesn't actually imply a keyword argument, just a default value for an argument (making it optional).
@batch_transform
+def price_multiple(data, multiplier, extra_arg=1):
+ return data.price * multiplier * extra_arg
class BatchTransformAlgorithm(TradingAlgorithm): |
codereview_python_data_2469 | from google.cloud.forseti.services.inventory.base import crawler
from google.cloud.forseti.services.inventory.base import gcp
from google.cloud.forseti.services.inventory.base import resources
-from google.cloud.forseti.common.util import log_util
-
-LOGGER = log_util.get_logger(__name__)
class CrawlerConfig(crawler.CrawlerConfig):
If the logger isn't used, it probably doesn't need to be added.
from google.cloud.forseti.services.inventory.base import crawler
from google.cloud.forseti.services.inventory.base import gcp
from google.cloud.forseti.services.inventory.base import resources
class CrawlerConfig(crawler.CrawlerConfig): |
codereview_python_data_2471 | create_inventory = False
if create_instances and not idempotent:
- Create(self.args, self.molecule).execute()
if create_inventory:
self.molecule._create_inventory_file()
Not a blocker, just my 2 cents: You used this pattern in a lot of places, but I prefer to see that split in 2 lines, one for creating the object, one for calling the function.
create_inventory = False
if create_instances and not idempotent:
+ c = Create(self.args, self.molecule)
+ c.execute()
if create_inventory:
self.molecule._create_inventory_file() |
codereview_python_data_2482 | class _iLocIndexer(_LocationIndexerBase):
- """A indexer for modin_df.iloc[] functionality"""
def __getitem__(self, key):
row_loc, col_loc, ndim, self.row_scaler, self.col_scaler = _parse_tuple(key)
```suggestion """An indexer for modin_df.iloc[] functionality""" ```
class _iLocIndexer(_LocationIndexerBase):
+ """An indexer for modin_df.iloc[] functionality"""
def __getitem__(self, key):
row_loc, col_loc, ndim, self.row_scaler, self.col_scaler = _parse_tuple(key) |
codereview_python_data_2483 | Number of input node features.
hidden_feats : list of int
``hidden_feats[i]`` gives the size of node representations after the i-th GCN layer.
- ``len(hidden_feats)`` equals the number of GCN layers.
activation : list of activation functions or None
If None, no activation will be applied. If not None, ``activation[i]`` gives the
activation function to be used for the i-th GCN layer. ``len(activation)`` equals
Same. Minimal required parameters are preferred for most users.
Number of input node features.
hidden_feats : list of int
``hidden_feats[i]`` gives the size of node representations after the i-th GCN layer.
+ ``len(hidden_feats)`` equals the number of GCN layers. By default, we use
+ ``[64, 64]``.
activation : list of activation functions or None
If None, no activation will be applied. If not None, ``activation[i]`` gives the
activation function to be used for the i-th GCN layer. ``len(activation)`` equals |
codereview_python_data_2485 | deque wrapper implementing the Queue interface.
"""
- def put(self, *args, **kwargs):
- return self.append(*args)
- def get(self, **kwargs):
return self.pop()
Omit **kwargs, or propagate all the way? Better than including them to be silently dropped if passed in.
deque wrapper implementing the Queue interface.
"""
+ def put(self, obj, block=None, timeout=None):
+ del block, timeout
+ return self.append(obj)
+ def get(self, block=None, timeout=None):
+ del block, timeout
return self.pop() |
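A self-contained sketch of the pattern settled on above -- a deque wrapper exposing a Queue-like `put`/`get` interface where `block`/`timeout` are accepted for compatibility and deliberately dropped (the class name here is hypothetical):

```python
from collections import deque

class DequeQueue(deque):
    def put(self, obj, block=None, timeout=None):
        del block, timeout  # deques never block; args kept for the Queue interface
        return self.append(obj)

    def get(self, block=None, timeout=None):
        del block, timeout
        return self.pop()

q = DequeQueue()
q.put(1)
q.put(2)
print(q.get())  # 2 -- append/pop both work on the right, so this behaves as LIFO
```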
codereview_python_data_2487 | gsutil cp -r gs://{scanner_bucket}/rules {forseti_home}/
# Download the Newest Config Validator constraints from GCS
-rm -rf /home/ubuntu/config_validator_constraints
-gsutil cp -r gs://{scanner_bucket}/config_validator_constraints /home/ubuntu/
# Start Forseti service depends on vars defined above.
bash ./install/gcp/scripts/initialize_forseti_services.sh
Will this always be started up as default? Is there any impact to the VM in terms of load and memory usage?
gsutil cp -r gs://{scanner_bucket}/rules {forseti_home}/
# Download the Newest Config Validator constraints from GCS
+rm -rf /home/ubuntu/policy-library
+gsutil cp -r gs://{scanner_bucket}/policy-library /home/ubuntu/
# Start Forseti service depends on vars defined above.
bash ./install/gcp/scripts/initialize_forseti_services.sh |
codereview_python_data_2489 | # Volume in each radial shell
vols = np.power(self.results.edges, 3)
- vol = 4/3 * np.pi * (vols[1:] - vols[:-1])
# Average number density
box_vol = self.volume / self.n_frames
`np.diff(vols)` is also an option
# Volume in each radial shell
vols = np.power(self.results.edges, 3)
+ vol = 4/3 * np.pi * np.diff(vols)
# Average number density
box_vol = self.volume / self.n_frames |
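`np.diff` on a 1-D array is exactly the `vols[1:] - vols[:-1]` it replaces; a quick check with made-up shell edges:

```python
import numpy as np

edges = np.array([0.0, 1.0, 2.0, 3.0])
vols = np.power(edges, 3)
assert np.allclose(vols[1:] - vols[:-1], np.diff(vols))
shell_vol = 4 / 3 * np.pi * np.diff(vols)  # volume between consecutive radii
```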
codereview_python_data_2490 | Run scanner:
$ forseti_scanner \\
--rules <rules path> \\
- --engine_name <rule engine name> \\
- --output_path <output path (optional)> \\
"""
Do we need to list the config arg here?
Run scanner:
$ forseti_scanner \\
--rules <rules path> \\
+ --engine_name <rule engine name>
""" |
codereview_python_data_2492 | def start_workers(actions, context, analyzer_config_map,
- jobs, output_path, skip_handler, metadata,
- ctu_collect, ctu_analyze, ctu_dir, ctu_func_map_cmd):
"""
Start the workers in the process pool.
For every build action there is worker which makes the analysis.
Adding this many new parameters to the function doesn't feel the best. Perhaps we could somehow use the extended `config_handler` classes in `libcodechecker.analyze` instead?
def start_workers(actions, context, analyzer_config_map,
+ jobs, output_path, skip_handler, metadata):
"""
Start the workers in the process pool.
For every build action there is worker which makes the analysis. |
codereview_python_data_2498 | return self._get_filter_completion_model(model)
def _get_filter_completion_model(self, model):
- """Wraps the argument model with a CompletionFilterModel.
Args:
model: the source model.
Why `self.parent()` instead of just `self` here for the parent?
return self._get_filter_completion_model(model)
def _get_filter_completion_model(self, model):
+ """Wrap the argument model with a CompletionFilterModel.
Args:
model: the source model. |
codereview_python_data_2501 | if not os.path.isfile(config_file):
continue
- match = re.fullmatch(regex, config_file)
if not match:
continue
NIT: If you always report success, there might be a scenario where the status being reported is success but the process failed with some error. Might be worth making this a bit more strict to report success only if return_code == 0 else "failed" or something.
if not os.path.isfile(config_file):
continue
+ match = re.match(regex, config_file)
if not match:
continue |
codereview_python_data_2503 | Parameters
-----------
- obj : AtomGroup or Universe or :class:`Timestep`
"""
try:
from rdkit import Chem
Don't worry about Timestep, assume either AG or Universe
Parameters
-----------
+ obj : AtomGroup or Universe
"""
try:
from rdkit import Chem |
codereview_python_data_2505 | .format(type(train_set).__name__))
train_set.construct()
# copy the parameters from train_set
- params.update(train_set.params)
params_str = param_dict_to_str(params)
# set network if necessary
for alias in _ConfigAliases.get("machines"):
Shouldn't network initialization go before dataset construction?
.format(type(train_set).__name__))
train_set.construct()
# copy the parameters from train_set
+ params.update(train_set.get_params())
params_str = param_dict_to_str(params)
# set network if necessary
for alias in _ConfigAliases.get("machines"): |
codereview_python_data_2510 | def config_py() -> str:
"""Get the location for config.py.
- hard-coding config.py in the config dir is not reliable, as config.py may
- be overridden.
"""
return _locations[_Location.config_py]
This reads kinda strange to me - what about "Usually, config.py is in standarddir.config(), but this can be overridden with the --config-py argument." or so?
def config_py() -> str:
"""Get the location for config.py.
+ Usually, config.py is in standarddir.config(), but this can be overridden
+ with the --config-py argument.
"""
return _locations[_Location.config_py] |
codereview_python_data_2512 | locator_type = list(locator.keys())[0]
locator_value = locator[locator_type]
if not first_locator:
- first_locator = (locator_type, locator_value)
elements = self.driver.find_elements(self.BYS[locator_type.lower()], locator_value)
else:
# disable implicit wait to get the result instantly for the other locators
if the previous line (except.. pass) is executed, the elements var isn't initialized
locator_type = list(locator.keys())[0]
locator_value = locator[locator_type]
if not first_locator:
+ first_locator = (self.BYS[locator_type.lower()], locator_value)
elements = self.driver.find_elements(self.BYS[locator_type.lower()], locator_value)
else:
# disable implicit wait to get the result instantly for the other locators |
codereview_python_data_2513 | for char in self._opt.text:
if char in pattern:
text += '<span class="highlight">%s</span>' % char
else:
text += char
else:
Note to self: This should probably be HTML-escaped correctly - but that's not an issue with your PR, as it was already wrong before.
for char in self._opt.text:
if char in pattern:
text += '<span class="highlight">%s</span>' % char
+ pattern = pattern.replace(char, '')
else:
text += char
else: |
codereview_python_data_2518 | return owned
- def get_transactions_filtered(self, asset_id=None, operation=None):
"""
Get a list of transactions filtered on some criteria
"""
- if not asset_id:
- raise ValueError("Need asset_id")
txids = backend.query.get_txids_filtered(self.connection, asset_id,
operation)
for txid in txids:
Instead of checking for the `asset_id`, why not making it mandatory? It can still be a keyword argument: ```python def get_transactions_filtered(self, *, asset_id, operation=None): # ... ```
return owned
+ def get_transactions_filtered(self, asset_id, operation=None):
"""
Get a list of transactions filtered on some criteria
"""
txids = backend.query.get_txids_filtered(self.connection, asset_id,
operation)
for txid in txids: |
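A minimal sketch of the keyword-only signature the reviewer suggested: everything after the bare `*` must be passed by name, and `asset_id` has no default, so it stays mandatory while remaining a keyword argument.

```python
def get_transactions_filtered(*, asset_id, operation=None):
    return (asset_id, operation)

print(get_transactions_filtered(asset_id='abc'))  # ('abc', None)
# get_transactions_filtered('abc')  # TypeError: takes 0 positional arguments
```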
codereview_python_data_2530 | len(fact['relationships']) > 0]
# list of used facts
uf = link.get('used', [])
- requirement = await self._load_requirements(requirements_info)
if not requirement.enforce(combo[0], uf, operation['facts']):
return False
return True
think you could extract this function to base_service? then this + the parsing (from parsing_svc) + the planner (from operation_svc) could use it.
len(fact['relationships']) > 0]
# list of used facts
uf = link.get('used', [])
+ requirement = await self.load_module('Requirement', requirements_info)
if not requirement.enforce(combo[0], uf, operation['facts']):
return False
return True |
codereview_python_data_2536 | def test_reduce():
- reductions = [(fn.sum, np.sum), (fn.min, np.min), (fn.max, np.max)]
batch_gens = [Batch1D, Batch2D, Batch3D]
types = [
How long does this test take? Maybe we should split it into smaller and bigger flavors?
def test_reduce():
+ reductions = [(fn.reductions.sum, np.sum), (fn.reductions.min, np.min), (fn.reductions.max, np.max)]
batch_gens = [Batch1D, Batch2D, Batch3D]
types = [ |
codereview_python_data_2540 | else:
result_slice = self.df.columns.slice_locs(col_loc.start, col_loc.stop)
return self.df.iloc[:, slice(*result_slice)]
- if self.df.empty:
- return self.df._default_to_pandas(lambda df: df.loc[key])
row_lookup, col_lookup = self._compute_lookup(row_loc, col_loc)
if any(i == -1 for i in row_lookup) or any(i == -1 for i in col_lookup):
maybe we should put this at the top of the function, so we make sure that the execution won't go on that probably-failing fast path above?
else:
result_slice = self.df.columns.slice_locs(col_loc.start, col_loc.stop)
return self.df.iloc[:, slice(*result_slice)]
row_lookup, col_lookup = self._compute_lookup(row_loc, col_loc)
if any(i == -1 for i in row_lookup) or any(i == -1 for i in col_lookup): |
codereview_python_data_2543 | for m in model.references():
m._document = None
- if self.theme:
- doc.theme = self.theme
- elif self.theme is None:
- doc.theme = None
doc.add_root(model)
comm_id = plot.comm.id if plot.comm else None
Same as above applies here.
for m in model.references():
m._document = None
+ doc.theme = self.theme
doc.add_root(model)
comm_id = plot.comm.id if plot.comm else None |
codereview_python_data_2548 | for tab in self.widgets():
self._remove_tab(tab)
- def close_tab(self, tab, add_undo=True):
"""Close a tab.
Args:
Please make this a keyword-only argument by adding a `*` argument before `add_undo`.
for tab in self.widgets():
self._remove_tab(tab)
+ def close_tab(self, tab, *, add_undo=True):
"""Close a tab.
Args: |
codereview_python_data_2560 | # create the block
block = b.create_block([tx_transfer_signed])
b.write_block(block, durability='hard')
- # vote the block valid
vote = b.vote(block.id, b.get_last_voted_block().id, False)
b.write_vote(vote)
`# vote the block invalid`
# create the block
block = b.create_block([tx_transfer_signed])
b.write_block(block, durability='hard')
+ # vote the block invalid
vote = b.vote(block.id, b.get_last_voted_block().id, False)
b.write_vote(vote) |
codereview_python_data_2563 | return engine.FuzzResult(fuzz_result.output, fuzz_result.command, crashes,
stats, fuzz_result.time_executed)
- # FIXME: Add support for additional arguments.
def reproduce(self, target_path, input_path, arguments, max_time): # pylint: disable=unused-argument
"""Reproduce a crash given an input.
same here and in all other cases.
return engine.FuzzResult(fuzz_result.output, fuzz_result.command, crashes,
stats, fuzz_result.time_executed)
def reproduce(self, target_path, input_path, arguments, max_time): # pylint: disable=unused-argument
"""Reproduce a crash given an input. |
codereview_python_data_2564 | matcher='path'):
"""
Subclasses should pass specific operations, arguments, and acceptors to
- their super class.
:param name: The name of the waiter. This can be any descriptive string.
:param operation: The operation to wait for. This must match the casing of
... their **superclass**
matcher='path'):
"""
Subclasses should pass specific operations, arguments, and acceptors to
+ their superclass.
:param name: The name of the waiter. This can be any descriptive string.
:param operation: The operation to wait for. This must match the casing of |
codereview_python_data_2572 | import pytest
pytestmark = pytest.mark.tendermint
VALIDATORS_ENDPOINT = '/api/v1/validators/'
Do we have any tests for an actual MongoDB instance running and querying for `validators` collection?
import pytest
+from requests.exceptions import RequestException
+
pytestmark = pytest.mark.tendermint
VALIDATORS_ENDPOINT = '/api/v1/validators/' |
codereview_python_data_2573 | def cimported_files(self, filename):
if filename[-4:] == '.pyx' and path_exists(filename[:-4] + '.pxd'):
pxd_list = [filename[:-4] + '.pxd']
- elif filename[-3:] == '.py'and path_exists(filename[:-3] + '.pxd'):
pxd_list = [filename[:-3] + '.pxd']
else:
pxd_list = []
```suggestion elif filename[-3:] == '.py' and path_exists(filename[:-3] + '.pxd'): ``` ... just tidied up a space while we're changing it.
def cimported_files(self, filename):
if filename[-4:] == '.pyx' and path_exists(filename[:-4] + '.pxd'):
pxd_list = [filename[:-4] + '.pxd']
+ elif filename[-3:] == '.py' and path_exists(filename[:-3] + '.pxd'):
pxd_list = [filename[:-3] + '.pxd']
else:
pxd_list = [] |
codereview_python_data_2575 | "after {1}s".format(ovf_file_path,
max_retry * sleep_time))
- def wait_for_ssh_host_key(self, max_retry=360, sleep_time=1):
"""
Wait for cloud-init to generate ssh host key
"""
also do `max_retry` * 5 so that we poll the same amount of time...
"after {1}s".format(ovf_file_path,
max_retry * sleep_time))
+ def wait_for_ssh_host_key(self, max_retry=1800, sleep_time=1):
"""
Wait for cloud-init to generate ssh host key
""" |
codereview_python_data_2579 | Notes
-----
This API is always used together with ``set_batch_num_edges`` to specify batching
- information of a graph.
Examples
--------
If there are edges linking nodes from two specified subgraphs, what will the `unbatch` do?
Notes
-----
This API is always used together with ``set_batch_num_edges`` to specify batching
+ information of a graph. It also does not check the correspondence between the graph
+ structure and the batching information, so the user must guarantee there will be no
+ cross-graph edges in the batch.
Examples
-------- |
codereview_python_data_2584 | networks:
- name: foo
- name: bar
- network_mode: host
docker_host: tcp://localhost:12376
env:
FOO: bar
looks like the indenting is off here. Should be spaces. ``` - name: bar network_mode: host ```
networks:
- name: foo
- name: bar
+ network_mode: host
docker_host: tcp://localhost:12376
env:
FOO: bar |
codereview_python_data_2593 | [4.]])
"""
# Graph with one relation type
- if self._graph.number_of_etypes() == 1:
etid = self.get_etype_id(etype)
etype = self.canonical_etypes[etid]
_, dtid = self._graph.metagraph.find_edge(etid)
`core.message_passing` will call apply too. This will call it an extra time.
[4.]])
"""
# Graph with one relation type
+ if self._graph.number_of_etypes() == 1 or etype is not None:
etid = self.get_etype_id(etype)
etype = self.canonical_etypes[etid]
_, dtid = self._graph.metagraph.find_edge(etid) |
codereview_python_data_2596 | Returns:
An encoded list of integers representing code points.
"""
- result = None
- try:
- result = list(map(ord, string_data))
- except:
- # Python3 fallback.
- result = list(string_data)
- return result
def decode_to_text(encoded_list):
which one is the python2 case, and which one is python 3? maybe we should be explicit and use ```python if sys.version_info.major == 3: etc ```
Returns:
An encoded list of integers representing code points.
"""
+ if sys.version_info.major == 3:
+ return list(string_data)
+
+ result = list(map(ord, string_data))
+ return result
def decode_to_text(encoded_list): |
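A quick check of the two branches (assuming `string_data` arrives as `bytes` on Python 3, where iteration already yields integers):

```python
print(list(map(ord, "abc")))  # [97, 98, 99] -- the Python 2 path
print(list(b"abc"))           # [97, 98, 99] -- iterating bytes on Python 3
```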
codereview_python_data_2599 | ("tempfactors", "bfactors")]
for a, b in alias_pairs:
if topologyattr.attrname == a and hasattr(self._topology, b):
- warnings.warn(f"You are adding {a} to a Universe that "
- f"has {b}. From MDAnalysis version 2.0, {a} "
- f"and {b} will no longer be separate "
- "TopologyAttrs. Instead, they will be aliases "
- "of the same attribute.", DeprecationWarning)
self._topology.add_TopologyAttr(topologyattr)
self._process_attr(topologyattr)
Can we add this as note in the `add_TopologyAttr` docstring? That way folks understand what is going on with regards to this.
("tempfactors", "bfactors")]
for a, b in alias_pairs:
if topologyattr.attrname == a and hasattr(self._topology, b):
+ err = ("You are adding {a} to a Universe that "
+ "has {b}. From MDAnalysis version 2.0, {a} "
+ "and {b} will no longer be separate "
+ "TopologyAttrs. Instead, they will be aliases "
+ "of the same attribute.").format(a=a, b=b)
+ warnings.warn(err, DeprecationWarning)
self._topology.add_TopologyAttr(topologyattr)
self._process_attr(topologyattr) |
codereview_python_data_2602 | cancelable=False, composed=False):
self._check_vanished()
log.webelem.debug("Firing event on {!r} via javascript.".format(self))
- event = javascript.string_escape(event)
self._elem.evaluateJavaScript(
- "this.dispatchEvent(new Event('{}', "
"{{'bubbles': {}, 'cancelable': {}, 'composed': {}}}))"
- .format(event, str(bubbles).lower(), str(cancelable).lower(),
- str(composed).lower()))
def caret_position(self):
"""Get the text caret position for the current element."""
You should probably use `javascript.convert_js_arg()` for all args (including `event`) here instead (feel free to make it public)
cancelable=False, composed=False):
self._check_vanished()
log.webelem.debug("Firing event on {!r} via javascript.".format(self))
self._elem.evaluateJavaScript(
+ "this.dispatchEvent(new Event({}, "
"{{'bubbles': {}, 'cancelable': {}, 'composed': {}}}))"
+ .format(javascript.convert_js_arg(event),
+ javascript.convert_js_arg(bubbles),
+ javascript.convert_js_arg(cancelable),
+ javascript.convert_js_arg(composed)))
def caret_position(self):
"""Get the text caret position for the current element.""" |
codereview_python_data_2605 | elif 'streams' not in kwargs:
kwargs['streams'] = self.p.streams
-
- if isinstance(kwargs['streams'], dict):
- kwargs['streams'] = streams.streams_list_from_dict(kwargs['streams'])
kwargs['per_element'] = self._per_element
kwargs['link_dataset'] = self._propagate_dataset
kwargs['link_inputs'] = self.p.link_inputs
I think we want something else here tbh, I'll take a look at this once you're done.
elif 'streams' not in kwargs:
kwargs['streams'] = self.p.streams
kwargs['per_element'] = self._per_element
kwargs['link_dataset'] = self._propagate_dataset
kwargs['link_inputs'] = self.p.link_inputs |
codereview_python_data_2608 | import os
PRIO_DEFAULT = 10
-PRIO_DROPINCONF = 15
PRIO_MAINCONFIG = 20
PRIO_AUTOMATICCONFIG = 30
PRIO_REPOCONFIG = 40
PRIO_PLUGINDEFAULT = 50
PRIO_PLUGINCONFIG = 60
PRIO_COMMANDLINE = 70
PRIO_RUNTIME = 80
Shouldn't this have more priority than main config?
import os
PRIO_DEFAULT = 10
PRIO_MAINCONFIG = 20
PRIO_AUTOMATICCONFIG = 30
PRIO_REPOCONFIG = 40
PRIO_PLUGINDEFAULT = 50
PRIO_PLUGINCONFIG = 60
+PRIO_DROPINCONF = 65
PRIO_COMMANDLINE = 70
PRIO_RUNTIME = 80 |
codereview_python_data_2613 | class MoveToFort(BaseTask):
- def __init__(self, bot, config=None):
- self.bot = bot
-
def should_run(self):
return (self.bot.has_space_for_loot()) or self.bot.softban
We don't need this. We get this behavior already from BaseTask
class MoveToFort(BaseTask):
def should_run(self):
return (self.bot.has_space_for_loot()) or self.bot.softban |
codereview_python_data_2616 | @aiohttp_apispec.docs(tags=['operations'],
summary='Get Links from Operation',
- description='Retrieves all links for a given operation_id. Uses fields from BaseGetAllQuerySchema',
- ' for parameters. Returns links in format provided by LinkSchema.')
@aiohttp_apispec.querystring_schema(BaseGetAllQuerySchema)
@aiohttp_apispec.response_schema(LinkSchema(many=True, partial=True))
async def get_operation_links(self, request: web.Request):
use return schema an parameters to describe "Uses fields from BaseGetAllQuerySchema', ' for parameters. Returns links in format provided by LinkSchema"
@aiohttp_apispec.docs(tags=['operations'],
summary='Get Links from Operation',
+ description='Retrieves all links for a given operation_id.',
+ parameters=[{
+ 'in': 'path',
+ 'name': 'id',
+ 'operation_id': 'Unique ID for operation',
+ 'schema': {'type': 'string'},
+ 'required': 'true'
+ }])
@aiohttp_apispec.querystring_schema(BaseGetAllQuerySchema)
@aiohttp_apispec.response_schema(LinkSchema(many=True, partial=True))
async def get_operation_links(self, request: web.Request): |
codereview_python_data_2617 | # degree bucketing
degrees, v_buckets = scheduler.degree_bucketing(self.msg_graph, v)
- null_v_buckets = []
non_null_v_buckets = []
reduced_msgs = []
for deg, v_bkt in zip(degrees, v_buckets):
Right now only one bucket will be `null_v_buckets` (due to degree bucketing). So you could simplify this code.
# degree bucketing
degrees, v_buckets = scheduler.degree_bucketing(self.msg_graph, v)
+ null_v_bucket = None
non_null_v_buckets = []
reduced_msgs = []
for deg, v_bkt in zip(degrees, v_buckets): |
codereview_python_data_2630 | src : str
The source feature field.
edge : str
- The destination feature field.
out : str
The output message field.
```suggestion The edge feature field. ```
src : str
The source feature field.
edge : str
+ The edge feature field.
out : str
The output message field. |
codereview_python_data_2632 | return dict(
loss_rpn_cls=losses['loss_cls'],
loss_rpn_reg=losses['loss_reg'],
- loss_rpn_shape=losses['loss_shape'],
- loss_rpn_loc=losses['loss_loc'])
def get_bboxes_single(self,
cls_scores,
We may rename the loss to `loss_anchor_shape` and `loss_anchor_loc`.
return dict(
loss_rpn_cls=losses['loss_cls'],
loss_rpn_reg=losses['loss_reg'],
+ loss_anchor_shape=losses['loss_shape'],
+ loss_anchor_loc=losses['loss_loc'])
def get_bboxes_single(self,
cls_scores, |
codereview_python_data_2634 | maximum_matching = hopcroft_karp_matching
-def minimum_weight_full_matching(G, top_nodes=None, weight='weight'):
r"""Returns a minimum weight full matching of the bipartite graph `G`.
Let :math:`G = ((U, V), E)` be a weighted bipartite graph with real weights
Why the change here? I'm not familiar with the matching algorithms so forgive me if this is an obvious question. Are there sometimes multiple minimum-weight full-matchings for bipartite graphs?
maximum_matching = hopcroft_karp_matching
+def minimum_weight_full_matching(G, top_nodes=None, weight="weight"):
r"""Returns a minimum weight full matching of the bipartite graph `G`.
Let :math:`G = ((U, V), E)` be a weighted bipartite graph with real weights |
codereview_python_data_2646 | def hausdorff(P, Q):
- r"""Calculate the undirected Hausdorff distance between two paths.
*P* (*Q*) is a :class:`numpy.ndarray` of :math:`N_p` (:math:`N_q`) time
steps, :math:`N` atoms, and :math:`3N` coordinates (e.g.,
~~replace "undirected" by "symmetric": this is what it is called in~~ D. P. Huttenlocher, G. A. Klanderman, and W. J. Rucklidge. Comparing images using the hausdorff distance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(9):850-863, 1993. ~~(and also in the PSA paper).~~
def hausdorff(P, Q):
+ r"""Calculate the Hausdorff distance between two paths.
*P* (*Q*) is a :class:`numpy.ndarray` of :math:`N_p` (:math:`N_q`) time
steps, :math:`N` atoms, and :math:`3N` coordinates (e.g., |
codereview_python_data_2649 | frame.replace_nas_in_column(icol, replacement_value)
def sort_column(self, frame):
- if frame.nrows == 0 or frame.ncols == 0:
return
-
icol = random.randint(0, frame.ncols - 1)
print("[10] Sorting column %d ASC" % icol)
if python_output:
Sorting a 0-row frame is a valid operation, so why prohibit this case? Especially since it has a higher chance of uncovering an error...
frame.replace_nas_in_column(icol, replacement_value)
def sort_column(self, frame):
+ if frame.ncols == 0:
return
icol = random.randint(0, frame.ncols - 1)
print("[10] Sorting column %d ASC" % icol)
if python_output: |
codereview_python_data_2654 | # If there is no forwarding rules defined in the rule file then no
# forwarding rule is violated.
if not resource_rules:
return None
I am just wondering if it would be more efficient to have this check upstream, where the rule book is built, and raise an exception so that we can save some cycles like this. Okay as is, but please put a TODO for the future.
# If there is no forwarding rules defined in the rule file then no
# forwarding rule is violated.
+ # TODO: Maybe we can move this up a level so we don't have to go
+ # through the iteration process.
if not resource_rules:
return None |
codereview_python_data_2656 | # add extra samples to make it evenly divisible
# in case that indices is shorter than half of total_size
indices = (indices *
- int(self.total_size / len(indices) + 1))[:self.total_size]
assert len(indices) == self.total_size
# subsample
It may be easier to understand with `math.ceil`?
# add extra samples to make it evenly divisible
# in case that indices is shorter than half of total_size
indices = (indices *
+ math.ceil(self.total_size / len(indices)))[:self.total_size]
assert len(indices) == self.total_size
# subsample |
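The padding step in isolation, with made-up sizes: `math.ceil` gives the repeat count needed to cover `total_size` before truncating.

```python
import math

indices = [0, 1, 2]
total_size = 8
padded = (indices * math.ceil(total_size / len(indices)))[:total_size]
assert padded == [0, 1, 2, 0, 1, 2, 0, 1] and len(padded) == total_size
```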
codereview_python_data_2657 | :param pdir: put the file in this directory (default: create a PDB-style directory tree)
:type pdir: string
- :param compress: if set to True, existing structure files will be gzip stored. Default: False
:type compress: bool
:return: filename
The meaning of this isn't immediately clear to me, maybe something like "downloaded files will be gzip stored"?
:param pdir: put the file in this directory (default: create a PDB-style directory tree)
:type pdir: string
+ :param compress: if set to True, downloaded files will be gzip stored. Default: False
:type compress: bool
:return: filename |
codereview_python_data_2661 | @classmethod
def _section(cls, opts):
"""Get logging settings from config file section "logging"."""
try:
logging_config = cls.config['logging']
- except (TypeError, KeyError, NoSectionError, AttributeError):
return False
logging.config.dictConfig(logging_config)
return True
Do we want the attribute error to allow `logging_config = cls.config.options('logging')` like @riga suggested? Or is the best course of action to just return False?
@classmethod
def _section(cls, opts):
"""Get logging settings from config file section "logging"."""
+ if isinstance(cls.config, LuigiConfigParser):
+ return False
try:
logging_config = cls.config['logging']
+ except (TypeError, KeyError, NoSectionError):
return False
logging.config.dictConfig(logging_config)
return True |
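For context, a minimal example of the payload shape `logging.config.dictConfig` expects from the `logging` config section (the handler and formatter names here are made up):

```python
import logging.config

logging_config = {
    'version': 1,
    'formatters': {'plain': {'format': '%(levelname)s %(message)s'}},
    'handlers': {'console': {'class': 'logging.StreamHandler',
                             'formatter': 'plain'}},
    'root': {'handlers': ['console'], 'level': 'INFO'},
}
logging.config.dictConfig(logging_config)
logging.getLogger(__name__).info("configured via dictConfig")
```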
codereview_python_data_2671 | parser = create_parser()
- assert parser.parse_args(['configure', 'localmongodb']).command
assert parser.parse_args(['configure', 'localmongodb']).command
assert parser.parse_args(['show-config']).command
assert parser.parse_args(['init']).command
I guess we could remove one?
parser = create_parser()
assert parser.parse_args(['configure', 'localmongodb']).command
assert parser.parse_args(['show-config']).command
assert parser.parse_args(['init']).command |
codereview_python_data_2672 | bitmap_masks = self.to_ndarray()
return BitmapMasks(bitmap_masks, self.height, self.width)
- def area(self):
- """ Compute area of masks using the shoelace formula
- https://stackoverflow.com/questions/24467972/calculate-area-of-polygon-given-x-y-coordinates
This func is modified from
https://github.com/facebookresearch/detectron2/blob/ffff8acc35ea88ad1cb1806ab0f00b4c1c5dbfd9/detectron2/structures/masks.py#L387
Return:
ndarray: areas of each instance
Add a blank line between the summary and description.
bitmap_masks = self.to_ndarray()
return BitmapMasks(bitmap_masks, self.height, self.width)
+ @property
+ def areas(self):
+ """Compute areas of masks.
+
This func is modified from
https://github.com/facebookresearch/detectron2/blob/ffff8acc35ea88ad1cb1806ab0f00b4c1c5dbfd9/detectron2/structures/masks.py#L387
+ Only works with Polygons, using the shoelace formula
Return:
ndarray: areas of each instance |
codereview_python_data_2678 | try:
return task._orig_run(*args, **kwargs)
except Ignore:
- raise
except autoretry_for as exc:
if retry_backoff:
retry_kwargs['countdown'] = \
@thedrow I think re-raising `Ignore` causes a maximum recursion condition. Perhaps simply `pass` here? Also, if you can share the unit test, I might be able to take a look.
try:
return task._orig_run(*args, **kwargs)
except Ignore:
+ pass
except autoretry_for as exc:
if retry_backoff:
retry_kwargs['countdown'] = \ |
codereview_python_data_2680 | else:
# If we *do* have enough space, tabs should occupy the whole
- # window # width. If there are pinned tabs their size will be
# subtracted from the total window width.
# During shutdown the self.count goes down,
# but the self.pinned_count not - this generates some odd
(actually, nitpick: can you remove the `#` before `width` here?)
else:
# If we *do* have enough space, tabs should occupy the whole
+ # window width. If there are pinned tabs their size will be
# subtracted from the total window width.
# During shutdown the self.count goes down,
# but the self.pinned_count not - this generates some odd |
codereview_python_data_2682 | from azurelinuxagent.ga.exthandlers import ExtHandlerInstance
-class ExtensionCommandNames:
INSTALL = "install"
UNINSTALL = "uninstall"
UPDATE = "update"
NIT: Dont you need to inherit it from Object (like `ExtensionCommandNames(object)`). Not sure if this is needed for py2 vs py3, but we follow this convention for all our classes, it would be cleaner if you could just follow it here too just to be consistent with the other code. Thanks!
from azurelinuxagent.ga.exthandlers import ExtHandlerInstance
+class ExtensionCommandNames(object):
INSTALL = "install"
UNINSTALL = "uninstall"
UPDATE = "update" |
codereview_python_data_2684 | dx = np.diff(data, 1, axis=1)[0:r-1, 0:c-1]
dy = np.diff(data, 1, axis=0)[0:r-1, 0:c-1]
cyclic_range = None if not matrix_dim.cyclic else np.diff(matrix_dim.range)
if cyclic_range is not None: # Wrap into the specified range
# shift values such that wrapping works ok
dx += matrix_dim.range[0]
dy += matrix_dim.range[0]
Using `np.diff(matrix_dim.range)` if fine assuming that both values are numeric. This assumption may be violated as the range tuple is allowed to use a `None` value to indicate lower/upper bounds that are not set. For this reason, I would have an extra line with something like: ``` python if matrix_dim.cyclic and (None in matrix_dim.range): raise Exception('Cyclic range must be specified to compute the gradient of cyclic quantities') ```
dx = np.diff(data, 1, axis=1)[0:r-1, 0:c-1]
dy = np.diff(data, 1, axis=0)[0:r-1, 0:c-1]
+ if matrix_dim.cyclic and (None in matrix_dim.range):
+ raise Exception("Cyclic range must be specified to compute "
+ "the gradient of cyclic quantities")
cyclic_range = None if not matrix_dim.cyclic else np.diff(matrix_dim.range)
if cyclic_range is not None: # Wrap into the specified range
+ raise NotImplementedError("Cyclic ranges are not supported currently")
# shift values such that wrapping works ok
dx += matrix_dim.range[0]
dy += matrix_dim.range[0] |
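An illustration (not the implementation above) of what wrapping finite differences of a cyclic quantity means: for a cyclic range of 360 degrees, raw differences across the wrap point should land back in [-180, 180).

```python
import numpy as np

d = np.array([350.0, -350.0, 10.0])   # raw differences across a wrap
wrapped = (d + 180.0) % 360.0 - 180.0
print(wrapped)                        # [-10.  10.  10.]
```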
codereview_python_data_2692 | started = pyqtSignal()
def __init__(self, what, *, verbose=False, additional_env=None,
- parent=None, output_to_tab=False):
super().__init__(parent)
self._what = what
self.verbose = verbose
- self.output_to_tab = output_to_tab
self._started = False
self.cmd = None
self.args = None
Since this isn't used outside of `GUIProcess`, it should be "private", i.e. prefixed with a `_`.
started = pyqtSignal()
def __init__(self, what, *, verbose=False, additional_env=None,
+ parent=None, output=False):
super().__init__(parent)
self._what = what
self.verbose = verbose
+ self._output = output
self._started = False
self.cmd = None
self.args = None |
codereview_python_data_2693 | self.connection = pika.BlockingConnection(pika.ConnectionParameters(host=config.RABBITMQ_HOST, port=config.RABBITMQ_PORT))
break
except Exception as e:
- self.log.error("Cannot connect to rabbitmq: %s, sleeping 2 seconds")
sleep(2)
Seems like you forgot to add the error message here.
self.connection = pika.BlockingConnection(pika.ConnectionParameters(host=config.RABBITMQ_HOST, port=config.RABBITMQ_PORT))
break
except Exception as e:
+ self.log.error("Cannot connect to rabbitmq: %s, sleeping 2 seconds" % str(e))
sleep(2) |
codereview_python_data_2697 | product_values = {}
for line in text_file_object.readlines():
- if line[0] == '#':
continue
key, value = line.split('=')
key = key.strip().upper()
I opt the following to ignore leading whitespace as well ``` line = line.strip if line.startswith('#'): continue ```
product_values = {}
for line in text_file_object.readlines():
+ line = line.strip()
+ if line.startswith('#'):
continue
key, value = line.split('=')
key = key.strip().upper() |
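The cleaned-up loop in isolation, with made-up input: stripping before the comment check also skips indented comments, and a blank-line guard (an addition here, not in the record above) avoids a `ValueError` on empty lines.

```python
lines = ["# product info", "  # indented comment", "name = Foo ", "version=1.2"]

product_values = {}
for line in lines:
    line = line.strip()
    if not line or line.startswith('#'):
        continue
    key, value = line.split('=')
    product_values[key.strip().upper()] = value.strip()

print(product_values)  # {'NAME': 'Foo', 'VERSION': '1.2'}
```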
codereview_python_data_2701 | Returns
-------
G : DiGraph
- A tournament on n nodes, with exactly one directed edge joining
each pair of distinct nodes.
Notes
```suggestion A tournament on `n` nodes, with exactly one directed edge joining ``` Per the numpydoc standard
Returns
-------
G : DiGraph
+ A tournament on `n` nodes, with exactly one directed edge joining
each pair of distinct nodes.
Notes |
codereview_python_data_2711 | 2 * self.img_scale[0])
mosaic_labels = np.concatenate(mosaic_labels, 0)
- mosaic_filter = np.prod(mosaic_bboxes[:, 2:4] - \
- mosaic_bboxes[:, 0:2] > 2, axis=1) == 1
- mosaic_bboxes = mosaic_bboxes[mosaic_filter]
- mosaic_labels = mosaic_labels[mosaic_filter]
results['img'] = mosaic_img
results['img_shape'] = mosaic_img.shape
If you want to add filtering rules, it is recommended to create a new internal method, and then put the hyperparameters in the initialization method. ```python def __init__(self, img_scale=(640, 640), center_ratio_range=(0.5, 1.5), min_bbox_size, pad_val=114): self.min_bbox_size=min_bbox_size def _filter_box_candidates(self, bbox): .... return ((w > self.min_bbox_size) & (h > self.min_bbox_size) ```
2 * self.img_scale[0])
mosaic_labels = np.concatenate(mosaic_labels, 0)
+ mosaic_bboxes, mosaic_labels = \
+ self._filter_box_candidates(mosaic_bboxes, mosaic_labels)
results['img'] = mosaic_img
results['img_shape'] = mosaic_img.shape |