| id | content |
|---|---|
codereview_new_python_data_4399
|
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-"""
-
-This tests reading a LAMMPS data file with a
-"PairIJ Coeffs" section, issue #3336.
-
-"""
-
-import MDAnalysis as mda
-
-
-PAIRIJ_COEFFS_DATA = "PR3959_test.data"
-
-u = mda.Universe(PAIRIJ_COEFFS_DATA)
-
-print(str(u))
This can become part of the other LAMMPS Topology tests in `test_lammpsdata.py`; it doesn't need its own file.
|
codereview_new_python_data_4400
|
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-"""
-
-This tests reading a LAMMPS data file with a
-"PairIJ Coeffs" section, issue #3336.
-
-"""
-
-import MDAnalysis as mda
-
-
-PAIRIJ_COEFFS_DATA = "PR3959_test.data"
-
-u = mda.Universe(PAIRIJ_COEFFS_DATA)
-
-print(str(u))
This needs to be added as a `datafile`; see datafiles.py for how to do that.
|
codereview_new_python_data_4401
|
def test_wc_dist(u):
wmsg = ("Accessing results via selection indices is "
"deprecated and will be removed in MDAnalysis 2.5.0")
- with pytest.warns(match=wmsg):
for i in range(len(strand1)):
assert_allclose(WC.results.pair_distances[:, i], WC.results[i][0])
Maybe use https://docs.pytest.org/en/6.2.x/reference.html#pytest.deprecated_call
```python
with pytest.deprecated_call(match=wmsg):
```
def test_wc_dist(u):
wmsg = ("Accessing results via selection indices is "
"deprecated and will be removed in MDAnalysis 2.5.0")
+ with pytest.deprecated_call(match=wmsg):
for i in range(len(strand1)):
assert_allclose(WC.results.pair_distances[:, i], WC.results[i][0])
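For context, `pytest.deprecated_call(match=wmsg)` is essentially `pytest.warns` specialized to `DeprecationWarning`/`PendingDeprecationWarning`. The mechanism it relies on can be sketched with only the stdlib `warnings` module; `legacy_access` below is a hypothetical stand-in for indexing `WC.results` directly:

```python
import re
import warnings

wmsg = ("Accessing results via selection indices is "
        "deprecated and will be removed in MDAnalysis 2.5.0")

def legacy_access():
    # hypothetical stand-in for the deprecated results-indexing path
    warnings.warn(wmsg, DeprecationWarning)
    return 42

# record warnings the way pytest.deprecated_call does under the hood
with warnings.catch_warnings(record=True) as record:
    warnings.simplefilter("always")
    value = legacy_access()

# keep only DeprecationWarnings whose message matches the pattern,
# mirroring the `match=` argument
deprecations = [w for w in record
                if issubclass(w.category, DeprecationWarning)
                and re.search(wmsg, str(w.message))]
assert deprecations, "expected a matching DeprecationWarning"
print(value)  # 42
```

If no matching `DeprecationWarning` is recorded, the assertion fails, which is exactly how `deprecated_call` turns a missing deprecation into a test failure.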
|
codereview_new_python_data_4402
|
def _single_frame(self) -> None:
self._res_array[self._frame_index, :] = dist
def _conclude(self) -> None:
self.results['times'] = np.array(self.times)
self.results['pair_distances'] = self._res_array
To be removed in 2.5.0
def _single_frame(self) -> None:
self._res_array[self._frame_index, :] = dist
def _conclude(self) -> None:
+ # Remove 2.5.0
self.results['times'] = np.array(self.times)
self.results['pair_distances'] = self._res_array
|
codereview_new_python_data_4403
|
def calc_dihedrals(coords1: Union[np.ndarray, 'AtomGroup'],
Array containing the dihedral angles formed by each quadruplet of
coordinates. Values are returned in radians (rad). If four single
coordinates were supplied, the dihedral angle is returned as a single
- number instead of an array. Range of the dihedral angle is (:math:`-\pi`, :math:`\pi`)
.. versionadded:: 0.8
Please stay within a 79-character line width.
You can also make the whole range a math expression
```rest
:math:`(-\pi, \pi)`
```
def calc_dihedrals(coords1: Union[np.ndarray, 'AtomGroup'],
Array containing the dihedral angles formed by each quadruplet of
coordinates. Values are returned in radians (rad). If four single
coordinates were supplied, the dihedral angle is returned as a single
+ number instead of an array. The range of the dihedral angle is
+ :math:`(-\pi, \pi)`.
.. versionadded:: 0.8
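As background for why the range is :math:`(-\pi, \pi)`: a dihedral is conventionally computed as an `atan2` of the two plane normals, and `atan2` returns signed angles in that interval. A minimal pure-Python sketch (an illustration only, not MDAnalysis's actual implementation):

```python
import math

def dihedral(p0, p1, p2, p3):
    """Signed dihedral angle (radians) for four 3D points."""
    # bond vectors along the chain
    b1 = [a - b for a, b in zip(p1, p0)]
    b2 = [a - b for a, b in zip(p2, p1)]
    b3 = [a - b for a, b in zip(p3, p2)]

    def cross(u, v):
        return [u[1]*v[2] - u[2]*v[1],
                u[2]*v[0] - u[0]*v[2],
                u[0]*v[1] - u[1]*v[0]]

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    n1, n2 = cross(b1, b2), cross(b2, b3)        # plane normals
    b2u = [c / math.sqrt(dot(b2, b2)) for c in b2]
    m1 = cross(n1, b2u)
    # atan2 yields a signed angle, hence the (-pi, pi] range
    return math.atan2(dot(m1, n2), dot(n1, n2))

# cis arrangement of four points in a plane gives 0
print(dihedral((0, 1, 0), (0, 0, 0), (1, 0, 0), (1, 1, 0)))  # 0.0
```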
|
codereview_new_python_data_4404
|
def test_sanitize_fail_warning(self):
with warnings.catch_warnings():
warnings.simplefilter("error")
u.atoms.convert_to.rdkit()
- if record:
- assert all("Could not sanitize molecule" not in str(w.message)
- for w in record.list)
@requires_rdkit
This needs to go, too; there's no `record` in the new code.
def test_sanitize_fail_warning(self):
with warnings.catch_warnings():
warnings.simplefilter("error")
u.atoms.convert_to.rdkit()
@requires_rdkit
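What the surrounding context manager does: `simplefilter("error")` escalates every warning into an exception, so the bare call to the converter is itself the assertion. A standalone sketch, with `convert` as a hypothetical stand-in for `u.atoms.convert_to.rdkit()`:

```python
import warnings

def convert():
    # hypothetical stand-in for a converter that emits a warning
    warnings.warn("Could not sanitize molecule", UserWarning)

with warnings.catch_warnings():
    warnings.simplefilter("error")  # any warning now raises
    try:
        convert()
        escalated = False
    except UserWarning:
        escalated = True

print(escalated)  # True
```

Inside such a block, a clean call passes silently and any warning fails loudly, which is why the old `record`-based check became dead code.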
|
codereview_new_python_data_4405
|
def test_sanitize_fail_warning(self):
with warnings.catch_warnings():
warnings.simplefilter("error")
u.atoms.convert_to.rdkit()
- if record:
- assert all("Could not sanitize molecule" not in str(w.message)
- for w in record.list)
@requires_rdkit
Is the idea that this block should pass without a warning, but that the test fails if one is raised?
Well, the test fails, and I don't know why — this seems to be behavior that we masked previously.
def test_sanitize_fail_warning(self):
with warnings.catch_warnings():
warnings.simplefilter("error")
u.atoms.convert_to.rdkit()
@requires_rdkit
|
codereview_new_python_data_4406
|
If `distopia`_ is installed, the functions in this table will accept the key
'distopia' for the `backend` keyword argument. If the distopia backend is
selected the `distopia` library will be used to calculate the distances. Note
-that for functions listed in this table **distopia is the default backend if it
-is available**.
.. Note::
Distopia does not currently support triclinic simulation boxes. If you
Am I correct in understanding that this is no longer true?
If `distopia`_ is installed, the functions in this table will accept the key
'distopia' for the `backend` keyword argument. If the distopia backend is
selected the `distopia` library will be used to calculate the distances. Note
+that for functions listed in this table **distopia is not the default backend
+and must be selected explicitly.**
.. Note::
Distopia does not currently support triclinic simulation boxes. If you
|
codereview_new_python_data_4407
|
def timeseries(self, asel=None, start=0, stop=-1, step=1, order='afc'):
data is returned whenever `asel` is different from ``None``.
start : int (optional)
stop : int (optional)
- .. deprecated:: 3.0
Note that `stop` is currently *inclusive* but will be
- deprecated in favour of being *exclusive* in 3.0.
step : int (optional)
order : {"afc", "acf", "caf", "fac", "fca", "cfa"} (optional)
the order/shape of the return data array, corresponding
Might be missing something, but the diff indicates no replacement for this? You probably should have kept some kind of "this is what `stop` does" description.
def timeseries(self, asel=None, start=0, stop=-1, step=1, order='afc'):
data is returned whenever `asel` is different from ``None``.
start : int (optional)
stop : int (optional)
+ .. deprecated:: 2.4.0
Note that `stop` is currently *inclusive* but will be
+ changed in favour of being *exclusive* in version 3.0.
step : int (optional)
order : {"afc", "acf", "caf", "fac", "fca", "cfa"} (optional)
the order/shape of the return data array, corresponding
|
codereview_new_python_data_4408
|
def timeseries(self, asel=None, start=0, stop=-1, step=1, order='afc'):
data is returned whenever `asel` is different from ``None``.
start : int (optional)
stop : int (optional)
- .. deprecated:: 3.0
Note that `stop` is currently *inclusive* but will be
- deprecated in favour of being *exclusive* in 3.0.
step : int (optional)
order : {"afc", "acf", "caf", "fac", "fca", "cfa"} (optional)
the order/shape of the return data array, corresponding
```suggestion
.. deprecated:: 2.4.0
Note that `stop` is currently *inclusive* but will be
changed in favour of being *exclusive* in version 3.0.
```
essentially deprecation happens now, change happens in the future
def timeseries(self, asel=None, start=0, stop=-1, step=1, order='afc'):
data is returned whenever `asel` is different from ``None``.
start : int (optional)
stop : int (optional)
+ .. deprecated:: 2.4.0
Note that `stop` is currently *inclusive* but will be
+ changed in favour of being *exclusive* in version 3.0.
step : int (optional)
order : {"afc", "acf", "caf", "fac", "fca", "cfa"} (optional)
the order/shape of the return data array, corresponding
|
codereview_new_python_data_4409
|
def _read_frame(self, i):
timestep = self._read_next_timestep()
return timestep
- def _read_next_timestep(self, ts=None):
- # NOTE: TRR implements its own version
- """copy next frame into timestep"""
- if self._frame == self.n_frames - 1:
- raise IOError(errno.EIO, 'trying to go over trajectory limit')
- if ts is None:
- ts = self.ts
- if ts.has_positions:
- frame = self._xdr.read_direct(ts.positions)
- else:
- frame = self._xdr.read()
- self._frame += 1
- self._frame_to_ts(frame, ts)
- return ts
-
def Writer(self, filename, n_atoms=None, **kwargs):
"""Return writer for trajectory format"""
if n_atoms is None:
If TRR implements its own then why have a generic method here that only works for XTC?
Let's remove `_read_next_timestep` from the base class. Then we can also cleanly remove `read_direct` from TRR.
def _read_frame(self, i):
timestep = self._read_next_timestep()
return timestep
def Writer(self, filename, n_atoms=None, **kwargs):
"""Return writer for trajectory format"""
if n_atoms is None:
|
codereview_new_python_data_4410
|
def test_raises_StopIteration(self, reader):
with pytest.raises(StopIteration):
next(reader)
- @pytest.mark.parametrize('order', ['turnip', 'abc', ''])
- def test_timeseries_raises_incorrect_order(self, reader, order):
- with pytest.raises(ValueError, match="must be a permutation of `fac`"):
reader.timeseries(order=order)
class _Multi(_TestReader):
```suggestion
@pytest.mark.parametrize('order', ['turnip', 'abc'])
def test_timeseries_raises_unknown_order_key(self, reader, order):
with pytest.raises(ValueError, match="Unrecognized order key"):
reader.timeseries(order=order)
@pytest.mark.parametrize('order', ['faac', 'affc', 'afcc', ''])
def test_timeseries_raises_incorrect_order_key(self, reader, order):
with pytest.raises(ValueError, match="Repeated or missing keys"):
reader.timeseries(order=order)
```
def test_raises_StopIteration(self, reader):
with pytest.raises(StopIteration):
next(reader)
+ @pytest.mark.parametrize('order', ['turnip', 'abc'])
+ def test_timeseries_raises_unknown_order_key(self, reader, order):
+ with pytest.raises(ValueError, match="Unrecognized order key"):
+ reader.timeseries(order=order)
+
+ @pytest.mark.parametrize('order', ['faac', 'affc', 'afcc', ''])
+ def test_timeseries_raises_incorrect_order_key(self, reader, order):
+ with pytest.raises(ValueError, match="Repeated or missing keys"):
reader.timeseries(order=order)
class _Multi(_TestReader):
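The two error paths the suggested tests distinguish could come from a validator along these lines. `validate_order` is a hypothetical helper sketched here to show the split, not the actual reader code:

```python
def validate_order(order, valid="fac"):
    """Check an order string is a permutation of the allowed axis keys."""
    # path 1: a character that is not an axis key at all
    unknown = [key for key in order if key not in valid]
    if unknown:
        raise ValueError(f"Unrecognized order key in {order!r}")
    # path 2: only valid keys, but duplicated or missing
    if sorted(order) != sorted(valid):
        raise ValueError(f"Repeated or missing keys: {order!r}")

validate_order("afc")  # a valid permutation passes silently
for bad in ("turnip", "faac", ""):
    try:
        validate_order(bad)
    except ValueError as err:
        print(err)
```

Splitting the tests this way means each parametrized case exercises exactly one branch, so a regression in either message is caught independently.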
|
codereview_new_python_data_4411
|
def _read_next_timestep(self, ts=None):
if ts is None:
# use a copy to avoid that ts always points to the same reference
# removing this breaks lammps reader
- ts = self.ts.copy() # why is this copy required ??
frame = self._file.read()
self._frame += 1
- ts = self._frame_to_ts(frame, ts)
- self.ts = ts
return ts
def Writer(self, filename, n_atoms=None, **kwargs):
@orbeckst @MDAnalysis/coredevs does anyone know specifically why this copy is required? None of the other readers use this pattern and it is ~30% overhead but removing it breaks a bunch of tests. I will keep working on it but if anyone has a quick answer would be appreciated.
def _read_next_timestep(self, ts=None):
if ts is None:
# use a copy to avoid that ts always points to the same reference
# removing this breaks lammps reader
+ ts = self.ts # why is this copy required ??
frame = self._file.read()
self._frame += 1
+ self._frame_to_ts(frame, ts)
return ts
def Writer(self, filename, n_atoms=None, **kwargs):
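The aliasing problem the copy works around can be shown without MDAnalysis at all: if the reader hands back the same Timestep-like object every iteration and updates it in place, every stored reference sees only the last frame. `SharedTS` is a minimal stand-in, not the real Timestep class:

```python
class SharedTS:
    """Minimal stand-in for a Timestep object reused across frames."""
    def __init__(self):
        self.positions = [0.0]

reader_ts = SharedTS()
collected = []
for frame in range(3):
    reader_ts.positions[0] = float(frame)   # in-place update of one object
    collected.append(reader_ts.positions)   # stores a reference, not a copy

# every stored entry aliases the final frame's data
print(collected)  # [[2.0], [2.0], [2.0]]
```

This is why removing the `ts = self.ts.copy()` pattern breaks tests that collect positions across frames: downstream code either needs the reader to hand out fresh objects, or callers must copy the arrays themselves.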
|
codereview_new_python_data_4412
|
def _read_next_timestep(self, ts=None):
if self._frame == self.n_frames - 1:
raise IOError('trying to go over trajectory limit')
if ts is None:
- # use a copy to avoid that ts always points to the same reference
- # removing this breaks lammps reader
- ts = self.ts # why is this copy required ??
frame = self._file.read()
self._frame += 1
self._frame_to_ts(frame, ts)
I don’t understand the comment — remove?
def _read_next_timestep(self, ts=None):
if self._frame == self.n_frames - 1:
raise IOError('trying to go over trajectory limit')
if ts is None:
+ ts = self.ts
frame = self._file.read()
self._frame += 1
self._frame_to_ts(frame, ts)
|
codereview_new_python_data_4413
|
def test_pickle_reader(self, reader):
"Timestep is changed after pickling")
def test_frame_collect_all_same(self, reader):
- # check that the timestep resets so that the base pointer is the same
- # for all timesteps in a collection witht eh exception of memoryreader
if isinstance(reader, mda.coordinates.memory.MemoryReader):
- pytest.xfail()
collected_ts = []
for i, ts in enumerate(reader):
collected_ts.append(ts.positions)
give xfail a reason?
def test_pickle_reader(self, reader):
"Timestep is changed after pickling")
def test_frame_collect_all_same(self, reader):
+ # check that the timestep resets so that the base reference is the same
+ # for all timesteps in a collection with the exception of memoryreader
if isinstance(reader, mda.coordinates.memory.MemoryReader):
+ pytest.xfail("memoryreader allows independent coordinates")
collected_ts = []
for i, ts in enumerate(reader):
collected_ts.append(ts.positions)
|
codereview_new_python_data_4414
|
def __init__(self, filename, convert_units=True, dt=None, **kwargs):
.. versionchanged:: 0.17.0
Changed to use libdcd.pyx library and removed the correl function
- .. versionchanged:: 2.4.0
- Added deprecation warning for timestep copying
"""
super(DCDReader, self).__init__(
filename, convert_units=convert_units, **kwargs)
I don't think that adding a deprecation warning requires a versionchanged. The deprecated in the class docs is sufficient.
def __init__(self, filename, convert_units=True, dt=None, **kwargs):
.. versionchanged:: 0.17.0
Changed to use libdcd.pyx library and removed the correl function
"""
super(DCDReader, self).__init__(
filename, convert_units=convert_units, **kwargs)
|
codereview_new_python_data_4415
|
def test_isolayer(self, u, periodic):
ref_outer = set(np.where((d1 < rmax) | (d2 < rmax))[0])
ref_outer -= ref_inner
- assert ref_outer == set(result.indices)
@pytest.mark.parametrize('periodic', (True, False))
def test_spherical_zone(self, u, periodic):
just to check, this selection is capturing something right?
def test_isolayer(self, u, periodic):
ref_outer = set(np.where((d1 < rmax) | (d2 < rmax))[0])
ref_outer -= ref_inner
+ assert ref_outer == set(result.indices) and len(list(ref_outer)) > 0
@pytest.mark.parametrize('periodic', (True, False))
def test_spherical_zone(self, u, periodic):
|
codereview_new_python_data_4416
|
class DumpReader(base.ReaderBase):
to represent the unit cell. Lengths *A*, *B*, *C* are in the MDAnalysis
length unit (Å), and angles are in degrees.
- .. versionchanges:: 2.4.0
Now imports velocities and forces, translates the box to the origin,
and optionally unwraps trajectories with image flags upon loading.
.. versionchanged:: 2.2.0
```suggestion
.. versionchanged:: 2.4.0
```
class DumpReader(base.ReaderBase):
to represent the unit cell. Lengths *A*, *B*, *C* are in the MDAnalysis
length unit (Å), and angles are in degrees.
+ .. versionchanged:: 2.4.0
Now imports velocities and forces, translates the box to the origin,
and optionally unwraps trajectories with image flags upon loading.
.. versionchanged:: 2.2.0
|
codereview_new_python_data_4417
|
def u(self):
0.019,
0.019,
0.019,
- 0.019, # methane [:5]
-0.003,
0.001,
0.001,
```suggestion
0.019, # methane [:5]
```
One thing picked up by flake8
def u(self):
0.019,
0.019,
0.019,
+ 0.019, # methane [:5]
-0.003,
0.001,
0.001,
|
codereview_new_python_data_4418
|
def test_group_return_unsorted_sorted_unique(self, ugroup):
assert unsorted_unique is sorted_unique
assert unique._cache['unsorted_unique'] is sorted_unique
class TestEmptyAtomGroup(object):
""" Test empty atom groups
"""
```suggestion
class TestEmptyAtomGroup(object):
```
As above, slight PEP8 thing
def test_group_return_unsorted_sorted_unique(self, ugroup):
assert unsorted_unique is sorted_unique
assert unique._cache['unsorted_unique'] is sorted_unique
+
class TestEmptyAtomGroup(object):
""" Test empty atom groups
"""
|
codereview_new_python_data_4419
|
def wrapper(*args, **kwargs):
def check_atomgroup_not_empty(groupmethod):
"""Decorator triggering a ``ValueError`` if the underlying group is empty.
- Avoids obscure computational errors on group methods.
Raises
------
This is a bit vague
```suggestion
Avoids downstream errors in computing properties of empty atomgroups.
```
def wrapper(*args, **kwargs):
def check_atomgroup_not_empty(groupmethod):
"""Decorator triggering a ``ValueError`` if the underlying group is empty.
+ Avoids downstream errors in computing properties of empty atomgroups.
Raises
------
|
codereview_new_python_data_4420
|
def _load_offsets(self):
"{self.filename}. Using slow offset calculation.")
self._read_offsets(store=True)
return
- raise e
with fasteners.InterProcessLock(lock_name) as filelock:
if not isfile(fname):
```suggestion
raise
```
def _load_offsets(self):
"{self.filename}. Using slow offset calculation.")
self._read_offsets(store=True)
return
+ else:
+ raise
with fasteners.InterProcessLock(lock_name) as filelock:
if not isfile(fname):
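The difference the suggestion points at: a bare `raise` re-raises the exception currently being handled, preserving its traceback and removing the need to bind it with `except ... as e` at all. A sketch under hypothetical names (`load_offsets` stands in for the offset-loading logic):

```python
def load_offsets(have_file):
    """Hypothetical loader: recover when possible, otherwise re-raise."""
    try:
        if not have_file:
            raise OSError("offsets missing")
        return "loaded"
    except OSError:
        if have_file:
            return "recovered"
        # bare `raise` re-raises the active exception as-is; no need for
        # `except OSError as e` followed by `raise e`
        raise

print(load_offsets(True))  # loaded
try:
    load_offsets(False)
except OSError as err:
    print(err)  # offsets missing
```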
|
codereview_new_python_data_4421
|
class TXYZParser(TopologyReaderBase):
- Elements
.. versionadded:: 0.17.0
- .. versionchanged:: 2.4
- Adding Element attribute if all names is a valid element symbol
"""
format = ['TXYZ', 'ARC']
```suggestion
Adding the `Element` attribute if all names are valid element symbols.
```
class TXYZParser(TopologyReaderBase):
- Elements
.. versionadded:: 0.17.0
+ .. versionchanged:: 2.4.0
+ Adding the `Element` attribute if all names are valid element symbols.
"""
format = ['TXYZ', 'ARC']
|
codereview_new_python_data_4422
|
class TXYZParser(TopologyReaderBase):
- Elements
.. versionadded:: 0.17.0
- .. versionchanged:: 2.4
- Adding Element attribute if all names is a valid element symbol
"""
format = ['TXYZ', 'ARC']
```suggestion
.. versionchanged:: 2.4.0
```
class TXYZParser(TopologyReaderBase):
- Elements
.. versionadded:: 0.17.0
+ .. versionchanged:: 2.4.0
+ Adding the `Element` attribute if all names are valid element symbols.
"""
format = ['TXYZ', 'ARC']
|
codereview_new_python_data_4423
|
def test_TXYZ_elements():
element_list = np.array(['C', 'H', 'H', 'O', 'H', 'C', 'H', 'H', 'H'], dtype=object)
assert_equal(u.atoms.elements, element_list)
def test_missing_elements_noattribute():
"""Check that:
```suggestion
```
def test_TXYZ_elements():
element_list = np.array(['C', 'H', 'H', 'O', 'H', 'C', 'H', 'H', 'H'], dtype=object)
assert_equal(u.atoms.elements, element_list)
+
def test_missing_elements_noattribute():
"""Check that:
|
codereview_new_python_data_4424
|
class TXYZParser(TopologyReaderBase):
- Atomnames
- Atomtypes
- - Elements
.. versionadded:: 0.17.0
.. versionchanged:: 2.4.0
```suggestion
- Elements (if all atom names are element symbols)
```
Otherwise users will be surprised when they do not have an `elements` attribute.
class TXYZParser(TopologyReaderBase):
- Atomnames
- Atomtypes
+ - Elements (if all atom names are element symbols)
.. versionadded:: 0.17.0
.. versionchanged:: 2.4.0
|
codereview_new_python_data_4425
|
def test_between_simple_case_indices_only(self, group, ag, ag2, expected):
).indices
assert_equal(actual, expected)
- distance = 5.9
-
- def test_between_return_type_not_empty(self, group, ag, ag2):
'''Test MDAnalysis.analysis.distances.between() for
returned type when returned group is not empty.'''
actual = MDAnalysis.analysis.distances.between(
group,
ag,
ag2,
- self.distance
)
assert(isinstance(actual, MDAnalysis.core.groups.AtomGroup))
- distance = 1.0
-
- def test_between_return_type_empty(self, group, ag, ag2):
- '''Test MDAnalysis.analysis.distances.between() for
- returned type when returned group is empty.'''
- actual = MDAnalysis.analysis.distances.between(
- group,
- ag,
- ag2,
- self.distance
- )
- assert(isinstance(actual, MDAnalysis.core.groups.AtomGroup))
Setting this should be unnecessary here. Make this and the one below two separate variables, e.g. `self.distance_not_empty` and `self.distance_empty = 0.0`. I would also make it 0 to be super clear.
def test_between_simple_case_indices_only(self, group, ag, ag2, expected):
).indices
assert_equal(actual, expected)
+ @pytest.mark.parametrize('dists', [5.9, 0.0])
+ def test_between_return_type(self, dists, group, ag, ag2):
'''Test MDAnalysis.analysis.distances.between() for
returned type when returned group is not empty.'''
actual = MDAnalysis.analysis.distances.between(
group,
ag,
ag2,
+ dists
)
assert(isinstance(actual, MDAnalysis.core.groups.AtomGroup))
|
codereview_new_python_data_4426
|
def test_between_simple_case_indices_only(self, group, ag, ag2, expected):
).indices
assert_equal(actual, expected)
- distance = 5.9
-
- def test_between_return_type_not_empty(self, group, ag, ag2):
'''Test MDAnalysis.analysis.distances.between() for
returned type when returned group is not empty.'''
actual = MDAnalysis.analysis.distances.between(
group,
ag,
ag2,
- self.distance
)
assert(isinstance(actual, MDAnalysis.core.groups.AtomGroup))
- distance = 1.0
-
- def test_between_return_type_empty(self, group, ag, ag2):
- '''Test MDAnalysis.analysis.distances.between() for
- returned type when returned group is empty.'''
- actual = MDAnalysis.analysis.distances.between(
- group,
- ag,
- ag2,
- self.distance
- )
- assert(isinstance(actual, MDAnalysis.core.groups.AtomGroup))
Alternately you could combine these into one test using `@pytest.mark.parametrize`. I would probably go for this option. :)
def test_between_simple_case_indices_only(self, group, ag, ag2, expected):
).indices
assert_equal(actual, expected)
+ @pytest.mark.parametrize('dists', [5.9, 0.0])
+ def test_between_return_type(self, dists, group, ag, ag2):
'''Test MDAnalysis.analysis.distances.between() for
returned type when returned group is not empty.'''
actual = MDAnalysis.analysis.distances.between(
group,
ag,
ag2,
+ dists
)
assert(isinstance(actual, MDAnalysis.core.groups.AtomGroup))
|
codereview_new_python_data_4427
|
def test_between_return_type(self, dists, group, ag, ag2):
ag2,
dists
)
- assert(isinstance(actual, MDAnalysis.core.groups.AtomGroup))
`assert` is a statement, not a function
```suggestion
assert isinstance(actual, MDAnalysis.core.groups.AtomGroup)
```
def test_between_return_type(self, dists, group, ag, ag2):
ag2,
dists
)
+ assert isinstance(actual, MDAnalysis.core.groups.AtomGroup)
|
codereview_new_python_data_4428
|
def test_between_simple_case_indices_only(self, group, ag, ag2, expected):
@pytest.mark.parametrize('dists', [5.9, 0.0])
def test_between_return_type(self, dists, group, ag, ag2):
- '''Test MDAnalysis.analysis.distances.between() for
- returned type when returned group is not empty.'''
actual = MDAnalysis.analysis.distances.between(
group,
ag,
One last tiny nitpick just on this docstring, as this now tests both the empty and non-empty cases, right?
def test_between_simple_case_indices_only(self, group, ag, ag2, expected):
@pytest.mark.parametrize('dists', [5.9, 0.0])
def test_between_return_type(self, dists, group, ag, ag2):
+ '''Test that MDAnalysis.analysis.distances.between()
+ returns an AtomGroup even when the returned group is empty.'''
actual = MDAnalysis.analysis.distances.between(
group,
ag,
|
codereview_new_python_data_4429
|
import pytng
from MDAnalysisTests.datafiles import (TNG_traj, TNG_traj_gro)
-
@pytest.mark.skipif(not HAS_PYTNG, reason="pytng not installed")
-class TestTNGTraj(object):
_n_atoms = 1000
_n_frames = 101
Why are we not using the multiframereader base test class in the tests here? Given we have an issue open to implement everything using them, we should make sure we have a good reason to have new tests that don't use them (and document why).
import pytng
from MDAnalysisTests.datafiles import (TNG_traj, TNG_traj_gro)
+from MDAnalysisTests.coordinates.base import MultiframeReaderTest
@pytest.mark.skipif(not HAS_PYTNG, reason="pytng not installed")
+class TestTNGTraj(MultiframeReaderTest):
_n_atoms = 1000
_n_frames = 101
|
codereview_new_python_data_4430
|
def test_pytng_not_present_raises():
with pytest.raises(ImportError, match="please install pytng"):
u = mda.Universe(TNG_traj_gro, TNG_traj)
@pytest.mark.skipif(not HAS_PYTNG, reason="pytng not installed")
class TNGReference(BaseReference):
also test the static method (for getting number of atoms) under no-pytng conditions
def test_pytng_not_present_raises():
with pytest.raises(ImportError, match="please install pytng"):
u = mda.Universe(TNG_traj_gro, TNG_traj)
+@pytest.mark.skipif(HAS_PYTNG, reason="pytng present")
+def test_parse_n_atoms_no_pytng():
+ with pytest.raises(ImportError, match="please install pytng"):
+ mda.coordinates.TNG.TNGReader.parse_n_atoms(TNG_traj)
+
@pytest.mark.skipif(not HAS_PYTNG, reason="pytng not installed")
class TNGReference(BaseReference):
|
codereview_new_python_data_4431
|
def test_equivalent_atoms(self, ref, output):
for attr in self.almost_equal_atom_attrs:
ra = getattr(r, attr)
oa = getattr(o, attr)
- assert_allclose(
- ra,
- oa,
- rtol=0,
- atol=1e-2,
- err_msg=('atom {} not almost equal for atoms' +
- '{} and {}').format(attr, r, o))
@pytest.mark.parametrize('attr', ('bonds', 'angles', 'impropers',
'cmaps'))
I'm not super fond of the formatting given we end up using a whole line for a two character variable. I won't block on that but I would ask to use f-strings and implicit continuation here instead of `+`
def test_equivalent_atoms(self, ref, output):
for attr in self.almost_equal_atom_attrs:
ra = getattr(r, attr)
oa = getattr(o, attr)
+ assert_allclose(ra, oa, rtol=0, atol=1e-2,
+ err_msg=f'atom {attr} not almost equal for atoms '
+ f'{r} and {o}')
@pytest.mark.parametrize('attr', ('bonds', 'angles', 'impropers',
'cmaps'))
|
codereview_new_python_data_4432
|
def _format_PDB_charges(charges: np.ndarray) -> np.ndarray:
NumPy array of dtype object with strings representing the
formal charges of the atoms being written.
"""
- if charges.dtype != int:
raise ValueError("formal charges array should be of `int` type")
outcharges = charges.astype(object)
Do we allow `np.int32` here? I am not 100 % sure how the format strings work and I only ask because `np.int32` != int
def _format_PDB_charges(charges: np.ndarray) -> np.ndarray:
NumPy array of dtype object with strings representing the
formal charges of the atoms being written.
"""
+ if not np.issubdtype(charges, np.integer):
raise ValueError("formal charges array should be of `int` type")
outcharges = charges.astype(object)
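The dtype concern in standalone form: `arr.dtype == int` compares against the platform-default integer (`int64` on most 64-bit Linux builds), so an `int32` array can be wrongly rejected there even though it holds integers. `np.issubdtype` tests membership in the whole signed/unsigned integer family; passing the dtype explicitly keeps the call unambiguous:

```python
import numpy as np

i32 = np.array([1, -1], dtype=np.int32)
i64 = np.array([1, -1], dtype=np.int64)
f64 = np.array([1.0, -1.0])

# strict equality only matches the platform default int, so this pair of
# results is platform-dependent
print(i32.dtype == int, i64.dtype == int)

# issubdtype accepts every integer width and rejects floats everywhere
print(np.issubdtype(i32.dtype, np.integer),
      np.issubdtype(i64.dtype, np.integer),
      np.issubdtype(f64.dtype, np.integer))
```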
|
codereview_new_python_data_4433
|
def attach_auxiliary(self,
for reader in coord_parent._auxs.values():
aux_memory_usage += reader._memory_usage()
if aux_memory_usage > memory_limit:
- warnings.warn("AuxReader: memory usage warning!")
def _memory_usage(self):
raise NotImplementedError("BUG: Override _memory_usage() "
Its at times like this that Python needs `virtual` and `override` like in C++, but I like the design. :)
def attach_auxiliary(self,
for reader in coord_parent._auxs.values():
aux_memory_usage += reader._memory_usage()
if aux_memory_usage > memory_limit:
+ warnings.warn("AuxReader: memory usage warning! "
+ f"Auxiliary data takes up {aux_memory_usage/1000} "
+ f"KB of memory (Warning limit: {memory_limit/1000} "
+ "KB).")
def _memory_usage(self):
raise NotImplementedError("BUG: Override _memory_usage() "
|
codereview_new_python_data_4434
|
def attach_auxiliary(self,
for reader in coord_parent._auxs.values():
aux_memory_usage += reader._memory_usage()
if aux_memory_usage > memory_limit:
- warnings.warn("AuxReader: memory usage warning!")
def _memory_usage(self):
raise NotImplementedError("BUG: Override _memory_usage() "
Issue a warning including the amount of memory used and the limit.
def attach_auxiliary(self,
for reader in coord_parent._auxs.values():
aux_memory_usage += reader._memory_usage()
if aux_memory_usage > memory_limit:
+ warnings.warn("AuxReader: memory usage warning! "
+ f"Auxiliary data takes up {aux_memory_usage/1000} "
+ f"KB of memory (Warning limit: {memory_limit/1000} "
+ "KB).")
def _memory_usage(self):
raise NotImplementedError("BUG: Override _memory_usage() "
|
codereview_new_python_data_4435
|
def long_description(readme):
'packaging',
'fasteners',
'gsd>=1.9.3',
- 'pyedr'
]
setup(name='MDAnalysis',
add a comma to reduce diff noise when we add the next dependency
def long_description(readme):
'packaging',
'fasteners',
'gsd>=1.9.3',
+ 'pyedr',
]
setup(name='MDAnalysis',
|
codereview_new_python_data_4436
|
from typing import Union, Optional, Callable
from typing import TYPE_CHECKING
-if TYPE_CHECKING:
from ..core.groups import AtomGroup
from .util import check_coords, check_box
from .mdamath import triclinic_vectors
Add a `#pragma: no coverage` (or whatever excludes it from coverage) as this looks like some code that does not need testing/coverage — unless you tell me that it ought to have been executed!
from typing import Union, Optional, Callable
from typing import TYPE_CHECKING
+if TYPE_CHECKING: # pragma: no cover
from ..core.groups import AtomGroup
from .util import check_coords, check_box
from .mdamath import triclinic_vectors
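How the guarded block works: `typing.TYPE_CHECKING` is `False` at runtime, so the import under it never executes (hence the coverage pragma), while static type checkers treat it as `True` and still see the names. String annotations keep the module importable without the guarded import. `Decimal` below is just a stand-in for a heavy or circular import like `AtomGroup`:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:  # pragma: no cover -- only static type checkers run this
    from decimal import Decimal  # stand-in for an import avoided at runtime

def double(x: "Decimal") -> "Decimal":
    # the quoted annotation is stored as a plain string, so the guarded
    # import is never needed while the program runs
    return x + x

print(TYPE_CHECKING)                # False
print(double.__annotations__["x"])  # Decimal
```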
|
codereview_new_python_data_4437
|
def check_coords(*coord_names, **options):
array([1., 1., 1.], dtype=float32)
>>>
>>> # automatic handling of AtomGroups
- >>> u = mda.Universe(PSF,DCD)
>>> coordsum(u.atoms, u.select_atoms("index 1 to 10"))
...
>>>
add space after comma
def check_coords(*coord_names, **options):
array([1., 1., 1.], dtype=float32)
>>>
>>> # automatic handling of AtomGroups
+ >>> u = mda.Universe(PSF, DCD)
>>> coordsum(u.atoms, u.select_atoms("index 1 to 10"))
...
>>>
|
codereview_new_python_data_4438
|
# MDAnalysis: A Toolkit for the Analysis of Molecular Dynamics Simulations.
# J. Comput. Chem. 32 (2011), 2319--2327, doi:10.1002/jcc.21787
#
-from turtle import position
import pytest
import numpy as np
from numpy.testing import assert_equal, assert_almost_equal, assert_allclose
Really, we're using `turtle` somewhere?!
# MDAnalysis: A Toolkit for the Analysis of Molecular Dynamics Simulations.
# J. Comput. Chem. 32 (2011), 2319--2327, doi:10.1002/jcc.21787
#
import pytest
import numpy as np
from numpy.testing import assert_equal, assert_almost_equal, assert_allclose
|
codereview_new_python_data_4439
|
def test_PBC_mixed_combinations(self, backend, ref_system, pos0, pos1,
d = distances.distance_array(ref_val, points_val,
box=box,
backend=backend)
- assert_almost_equal(d, np.array([[0., 0., 0., self._dist(points[3],
- ref=[1, 1, 2])]]))
def test_PBC2(self, backend):
a = np.array([7.90146923, -13.72858524, 3.75326586], dtype=np.float32)
This would probably fit visual indentation and improve readability.
```suggestion
assert_almost_equal(
d, np.array([[0., 0., 0., self._dist(points[3], ref=[1, 1, 2])]]))
```
def test_PBC_mixed_combinations(self, backend, ref_system, pos0, pos1,
d = distances.distance_array(ref_val, points_val,
box=box,
backend=backend)
+ assert_almost_equal(
+ d, np.array([[0., 0., 0., self._dist(points[3], ref=[1, 1, 2])]]))
def test_PBC2(self, backend):
a = np.array([7.90146923, -13.72858524, 3.75326586], dtype=np.float32)
|
codereview_new_python_data_4440
|
def test_input_unchanged_apply_PBC_atomgroup(self, coords_atomgroups, box,
ref = crd.positions.copy()
res = distances.apply_PBC(crd, box, backend=backend)
assert_equal(crd.positions, ref)
-
class TestEmptyInputCoordinates(object):
"""Tests ensuring that the following functions in MDAnalysis.lib.distances
I sometimes get issues with suggestions deleting lines when I do this... hopefully it doesn't happen here.
```suggestion
```
def test_input_unchanged_apply_PBC_atomgroup(self, coords_atomgroups, box,
ref = crd.positions.copy()
res = distances.apply_PBC(crd, box, backend=backend)
assert_equal(crd.positions, ref)
class TestEmptyInputCoordinates(object):
"""Tests ensuring that the following functions in MDAnalysis.lib.distances
|
codereview_new_python_data_4441
|
def search(self, atoms: AtomGroup,
unique_idx = unique_int_1d(np.asarray(pairs[:, 1], dtype=np.intp))
return self._index2level(unique_idx, level)
- def _index2level(self, indices: List[int], level: str) -> None:
"""Convert list of atom_indices in a AtomGroup to either the
Atoms or segments/residues containing these atoms.
```suggestion
def _index2level(self, indices: List[int], level: str) -> Union[AtomGroup, ResidueGroup, SegmentGroup]:
```
def search(self, atoms: AtomGroup,
unique_idx = unique_int_1d(np.asarray(pairs[:, 1], dtype=np.intp))
return self._index2level(unique_idx, level)
+ def _index2level(self,
+ indices: List[int],
+ level: str
+ ) -> Union[AtomGroup, ResidueGroup, SegmentGroup]:
"""Convert list of atom_indices in a AtomGroup to either the
Atoms or segments/residues containing these atoms.
|
codereview_new_python_data_4442
|
def test_plot_mean_profile(self, hole, frames, profiles):
stds = np.array(list(map(np.std, binned)))
midpoints = 0.5 * bins[1:] + 0.5 * bins[:-1]
# assume a bin of length 1.5
- assert_allclose(midpoints, bins[1:], atol=0.75)
ylow = list(mean-(2*stds))
yhigh = list(mean+(2*stds))
This is a really creative use of `assert_allclose`! However, it is usually better to be much more direct with tests. For example, instead of allowing any value within 0.75 of a midpoint in either direction, you could directly calculate the difference between `midpoint` and `bins` and check that it equals half the bin width.
```suggestion
difference_right = bins[1:] - midpoints
assert_allclose(difference_right, 0.75)
```
A similar test could be done with the left-hand-side (`bins[:-1]`) if you wanted to check that the midpoint was actually midway through.
def test_plot_mean_profile(self, hole, frames, profiles):
stds = np.array(list(map(np.std, binned)))
midpoints = 0.5 * bins[1:] + 0.5 * bins[:-1]
# assume a bin of length 1.5
+ difference_right = bins[1:] - midpoints
+ assert_allclose(difference_right, 0.75)
ylow = list(mean-(2*stds))
yhigh = list(mean+(2*stds))
|
codereview_new_python_data_4588
|
def main():
{}
</ul>
</description>
- </release>\n'''[1:]
tmp = ''
regex = r"version ([\d.]+) \(([\w ]+)\)\n(.*?)[\n]{2}"
Here we take all symbols except the first one. Would it be better to just remove the leading **\n** from the beginning of the string instead of making a copy of the string?
def main():
{}
</ul>
</description>
+ </release>\n'''.lstrip("\r\n")
tmp = ''
regex = r"version ([\d.]+) \(([\w ]+)\)\n(.*?)[\n]{2}"
|
codereview_new_python_data_4655
|
def prepare(self):
# xyz if STABLE|LATEST
# STABLE otherwise
config_version = self.settings.get("version", JMeter.VERSION, force_set=True)
- if config_version == "auto":
- self.settings["version"] = JMeter.VERSION
- elif config_version == JMeter.VERSION or config_version == JMeter.VERSION_LATEST:
self.settings["version"] = config_version
else:
self.settings["version"] = JMeter.VERSION
```suggestion
if config_version in [JMeter.VERSION, JMeter.VERSION_LATEST]:
self.settings["version"] = config_version
else:
self.settings["version"] = JMeter.VERSION
```
def prepare(self):
# xyz if STABLE|LATEST
# STABLE otherwise
config_version = self.settings.get("version", JMeter.VERSION, force_set=True)
+ if config_version in [JMeter.VERSION, JMeter.VERSION_LATEST]:
self.settings["version"] = config_version
else:
self.settings["version"] = JMeter.VERSION
|
codereview_new_python_data_5038
|
def test_mask_parse(self):
[4.1, 2.2, 0.6, 5.5, 0.6],
[2.7, 2.5, 0.4, 5.7, 0.2],
],
- numpy.float64,
)
try:
treecluster(data, mask1)
As `numpy.float` was a synonym for `float`, I would prefer to replace it with just `float` here. Then effectively the test does not change. Also, it makes the test a bit more robust, as `float` can be different things on different platforms, but `numpy.float64` is always the same.
def test_mask_parse(self):
[4.1, 2.2, 0.6, 5.5, 0.6],
[2.7, 2.5, 0.4, 5.7, 0.2],
],
+ float,
)
try:
treecluster(data, mask1)
|
codereview_new_python_data_5039
|
Examples
--------
-This example downloads the Cellosaurus database and parses it:
>>> from urllib.request import urlopen
>>> from io import TextIOWrapper
>>> from Bio.ExPASy import cellosaurus
>>> url = "ftp://ftp.expasy.org/databases/cellosaurus/cellosaurus.txt"
>>> bytestream = urlopen(url)
>>> textstream = TextIOWrapper(bytestream, "UTF-8")
>>> records = cellosaurus.parse(textstream)
>>> for record in records:
... if 'Homo sapiens' in record['OX'][0]:
Ugly, but I can't immediately think of anything shorter.
Worth a tiny bit of explanation in the comment above?
Examples
--------
+This example downloads the Cellosaurus database and parses it. Note that
+urlopen returns a stream of bytes, while the parser expects a stream of plain
+string, so we use TextIOWrapper to convert bytes to string using the UTF-8
+encoding. This is not needed if you download the cellosaurus.txt file in
+advance and open it (see the comment below).
>>> from urllib.request import urlopen
>>> from io import TextIOWrapper
>>> from Bio.ExPASy import cellosaurus
>>> url = "ftp://ftp.expasy.org/databases/cellosaurus/cellosaurus.txt"
>>> bytestream = urlopen(url)
>>> textstream = TextIOWrapper(bytestream, "UTF-8")
+ >>> # alternatively, use
+ >>> # textstream = open("cellosaurus.txt")
+ >>> # if you downloaded the cellosaurus.txt file in advance.
>>> records = cellosaurus.parse(textstream)
>>> for record in records:
... if 'Homo sapiens' in record['OX'][0]:
|
codereview_new_python_data_5040
|
def __init__(self, probe_radius=1.40, n_points=100, radii_dict=None):
f"Probe radius must be a positive number: {probe_radius} <= 0"
)
- self.probe_radius = probe_radius
if n_points < 1:
raise ValueError(
Would it matter if the user passed an integer or some other non-float like a NumPy scalar?
def __init__(self, probe_radius=1.40, n_points=100, radii_dict=None):
f"Probe radius must be a positive number: {probe_radius} <= 0"
)
+ self.probe_radius = float(probe_radius)
if n_points < 1:
raise ValueError(
|
codereview_new_python_data_5041
|
def run_psea(fname, verbose=False):
if not p.stderr.strip() and os.path.exists(base + ".sea"):
return base + ".sea"
else:
- raise RuntimeError("stderr not empty")
def psea(pname):
Sorry, I missed this. We should return the message here e.g. `raise RuntimeError(f"Error running p-sea: {p.stderr}")`
def run_psea(fname, verbose=False):
if not p.stderr.strip() and os.path.exists(base + ".sea"):
return base + ".sea"
else:
+ raise RuntimeError(f"Error running p-sea: {p.stderr}")
def psea(pname):
|
codereview_new_python_data_5042
|
def _count_codons(self, fasta_file):
# iterate over sequence and count all the codons in the FastaFile.
for record in SeqIO.parse(handle, "fasta"):
- sequence = record.seq
for i in range(0, len(sequence), 3):
codon = sequence[i : i + 3]
try:
Does this need a ``.upper()`` to match the old behaviour?
(Noting ~~as on another recent case~~ **as per the PR description**, that the old code did not handle mixed case sequence properly)
def _count_codons(self, fasta_file):
# iterate over sequence and count all the codons in the FastaFile.
for record in SeqIO.parse(handle, "fasta"):
+ sequence = record.seq.upper()
for i in range(0, len(sequence), 3):
codon = sequence[i : i + 3]
try:
|
codereview_new_python_data_5043
|
-# Copyright 2000, 2004 by Brad Chapman.
-# Revisions copyright 2010-2013, 2015-2018 by Peter Cock.
# All rights reserved.
#
# This file is part of the Biopython distribution and governed by your
Although the classes here were in ``Bio/Align/__init__.py`` I don't think that copyright block applies.
Rather it looks like your name should be here - it was added in https://github.com/biopython/biopython/pull/1655
+# Copyright 2018-2022 by Michiel de Hoon.
# All rights reserved.
#
# This file is part of the Biopython distribution and governed by your
|
codereview_new_python_data_5044
|
from Bio import MissingExternalDependencyError
import os
if "command not found" or "'psea' is not recognized" in getoutput("psea -h"):
raise MissingExternalDependencyError(
"Download and install psea from ftp://ftp.lmcp.jussieu.fr/pub/sincris/software/protein/p-sea/. Make sure that psea is on path"
As in the examples referred to, please add this line above:
```
os.environ['LANG'] = 'C'
```
We want to ensure the error messages are in US English to match the if-statement.
from Bio import MissingExternalDependencyError
import os
+os.environ["LANG"] = "C"
+
if "command not found" or "'psea' is not recognized" in getoutput("psea -h"):
raise MissingExternalDependencyError(
"Download and install psea from ftp://ftp.lmcp.jussieu.fr/pub/sincris/software/protein/p-sea/. Make sure that psea is on path"
|
codereview_new_python_data_5045
|
class State(enum.Enum):
class AlignmentIterator(interfaces.AlignmentIterator):
- """FASTA output alignment iterator.
- For reading the (pairwise) alignments from the FASTA alignment programs
- using the '-m 8CB' or '-m 8CC' output formats.
"""
def __init__(self, source):
Mention BLAST output here too (already at top of file)?
class State(enum.Enum):
class AlignmentIterator(interfaces.AlignmentIterator):
+ """Alignment iterator for tabular output from BLAST or FASTA.
+ For reading (pairwise) alignments from tabular output generated by BLAST
+ run with the '-outfmt 7' argument, as well as tabular output generated by
+ William Pearson's FASTA alignment programs with the '-m 8CB' or '-m 8CC'
+ output formats.
"""
def __init__(self, source):
|
codereview_new_python_data_5046
|
def __init__(self, database_manager: DatabaseManager,
self.get_revocation_strategy = get_revocation_strategy
self.write_req_validator = write_req_validator
self.legacy_sort_config = getConfig().REV_STRATEGY_USE_COMPAT_ORDERING or False
- self.config_state = self.database_manager.get_database(CONFIG_LEDGER_ID).state
self.node = node
def use_legacy_sort(self, txn) -> bool:
`self.config_state` looks unused
def __init__(self, database_manager: DatabaseManager,
self.get_revocation_strategy = get_revocation_strategy
self.write_req_validator = write_req_validator
self.legacy_sort_config = getConfig().REV_STRATEGY_USE_COMPAT_ORDERING or False
self.node = node
def use_legacy_sort(self, txn) -> bool:
|
codereview_new_python_data_5047
|
def hard_reset():
execute_command(soft_reset_cmd, timeout=RECOVERY_CMD_TIMEOUT)
if environment.is_android_emulator():
- #For recovery state
logs.log('Platform ANDROID_EMULATOR detected.')
restart_adb()
state = get_device_state()
Can you please format this comment a bit better by adding a space after the "#" and making it a more complete sentence? e.g. `# Handle recovery states for emulator by doing a complete wipe`.
def hard_reset():
execute_command(soft_reset_cmd, timeout=RECOVERY_CMD_TIMEOUT)
if environment.is_android_emulator():
logs.log('Platform ANDROID_EMULATOR detected.')
restart_adb()
state = get_device_state()
|
codereview_new_python_data_5048
|
def do_GET(self): # pylint: disable=invalid-name
def run_server():
"""Start a HTTP server to respond to the health checker."""
- if utils.is_oss_fuzz() or environment.is_android():
- # OSS-Fuzz and Android multiple instances per host model aren't supported
# yet.
return
This may be a little too broad, and capture cases (e.g. Cuttlefish) where there is a single host per "device".
Would it be possible to have this be narrower here?
def do_GET(self): # pylint: disable=invalid-name
def run_server():
"""Start a HTTP server to respond to the health checker."""
+ if utils.is_oss_fuzz() or environment.is_android_real_device():
+ # OSS-Fuzz & Android multiple instances per host model isn't supported
# yet.
return
|
codereview_new_python_data_5049
|
def is_android_kernel(plt=None):
def is_android_real_device():
"""Return True if we are on a real android device."""
- return platorm() == 'ANDROID'
def is_lib():
Please fix the typo/tests, thanks!
def is_android_kernel(plt=None):
def is_android_real_device():
"""Return True if we are on a real android device."""
+ return platform() == 'ANDROID'
def is_lib():
|
codereview_new_python_data_5050
|
def split_stacktrace(stacktrace: str):
"""Split stacktrace by line, and handle special cases with regex."""
stacktrace = re.sub(CONCATENATED_SAN_DEADLYSIGNAL_REGEX,
SPLIT_CONCATENATED_SAN_DEADLYSIGNAL_REGEX, stacktrace)
- print(stacktrace)
return stacktrace.splitlines()
def parse(self, stacktrace: str) -> CrashInfo:
nit: remove print.
def split_stacktrace(stacktrace: str):
"""Split stacktrace by line, and handle special cases with regex."""
stacktrace = re.sub(CONCATENATED_SAN_DEADLYSIGNAL_REGEX,
SPLIT_CONCATENATED_SAN_DEADLYSIGNAL_REGEX, stacktrace)
return stacktrace.splitlines()
def parse(self, stacktrace: str) -> CrashInfo:
|
codereview_new_python_data_5051
|
def split_stacktrace(stacktrace: str):
"""Split stacktrace by line, and handle special cases with regex."""
stacktrace = re.sub(CONCATENATED_SAN_DEADLYSIGNAL_REGEX,
SPLIT_CONCATENATED_SAN_DEADLYSIGNAL_REGEX, stacktrace)
- print(stacktrace)
return stacktrace.splitlines()
def parse(self, stacktrace: str) -> CrashInfo:
Add a comment here explaining what this is doing. i.e. handling malformed data where there is a missing newline between the actual crash info and this line.
def split_stacktrace(stacktrace: str):
"""Split stacktrace by line, and handle special cases with regex."""
stacktrace = re.sub(CONCATENATED_SAN_DEADLYSIGNAL_REGEX,
SPLIT_CONCATENATED_SAN_DEADLYSIGNAL_REGEX, stacktrace)
return stacktrace.splitlines()
def parse(self, stacktrace: str) -> CrashInfo:
|
codereview_new_python_data_5052
|
SAN_DEADLYSIGNAL_REGEX = re.compile(
r'(Address|Leak|Memory|UndefinedBehavior|Thread)Sanitizer:DEADLYSIGNAL')
CONCATENATED_SAN_DEADLYSIGNAL_REGEX = re.compile(
- r'\n(.+)(' + SAN_DEADLYSIGNAL_REGEX.pattern + r')\n')
SPLIT_CONCATENATED_SAN_DEADLYSIGNAL_REGEX = r'\n\1\n\2\n'
SAN_FPE_REGEX = re.compile(r'.*[a-zA-Z]+Sanitizer: FPE ')
SAN_ILL_REGEX = re.compile(r'.*[a-zA-Z]+Sanitizer: ILL ')
To make this more robust, can we match on any non-space characters? i.e. `[^\s]` instead of `.`.
SAN_DEADLYSIGNAL_REGEX = re.compile(
r'(Address|Leak|Memory|UndefinedBehavior|Thread)Sanitizer:DEADLYSIGNAL')
CONCATENATED_SAN_DEADLYSIGNAL_REGEX = re.compile(
+ r'\n([^\n]*\S[^\n]*)(' + SAN_DEADLYSIGNAL_REGEX.pattern + r')\n')
SPLIT_CONCATENATED_SAN_DEADLYSIGNAL_REGEX = r'\n\1\n\2\n'
SAN_FPE_REGEX = re.compile(r'.*[a-zA-Z]+Sanitizer: FPE ')
SAN_ILL_REGEX = re.compile(r'.*[a-zA-Z]+Sanitizer: ILL ')
|
codereview_new_python_data_5053
|
def prepare(self, corpus_dir, target_path, build_dir):
arguments.extend(strategy_info.arguments)
# Update strategy info with environment variables from fuzzer's options.
- for env_var_name, value in extra_env:
- if env_var_name not in strategy_info.extra_env:
- strategy_info.extra_env[env_var_name] = value
# Check for seed corpus and add it into corpus directory.
engine_common.unpack_seed_corpus_if_needed(target_path, corpus_dir)
You need a None check here, per the test failure.
def prepare(self, corpus_dir, target_path, build_dir):
arguments.extend(strategy_info.arguments)
# Update strategy info with environment variables from fuzzer's options.
+ if extra_env is not None:
+ for env_var_name, value in extra_env:
+ if env_var_name not in strategy_info.extra_env:
+ strategy_info.extra_env[env_var_name] = value
# Check for seed corpus and add it into corpus directory.
engine_common.unpack_seed_corpus_if_needed(target_path, corpus_dir)
|
codereview_new_python_data_5054
|
def matches_top_crash(testcase, top_crashes_by_project_and_platform):
def _group_testcases_based_on_variants(testcase_map):
"""Group testcases that are associated based on variant analysis."""
# Skip this if the project is configured so (like Google3).
- config_decision = local_config.ProjectConfig().get('variant_grouping',
- 'enable')
- if config_decision == 'disable':
return
logs.log('Grouping based on variant analysis.')
nit: just use boolean.
def matches_top_crash(testcase, top_crashes_by_project_and_platform):
def _group_testcases_based_on_variants(testcase_map):
"""Group testcases that are associated based on variant analysis."""
# Skip this if the project is configured so (like Google3).
+ enable = local_config.ProjectConfig().get('deduplication.variant', False)
+ if not enable:
return
logs.log('Grouping based on variant analysis.')
|
codereview_new_python_data_5055
|
OPTIONS_FILE_EXTENSION = '.options'
# Whitelist for env variables .options files can set.
-ENV_VAR_WHITELIST = set([afl_constants.DONT_DEFER_ENV_VAR, "GODEBUG"])
class FuzzerOptionsException(Exception):
Please use single quotes.
OPTIONS_FILE_EXTENSION = '.options'
# Whitelist for env variables .options files can set.
+ENV_VAR_WHITELIST = {afl_constants.DONT_DEFER_ENV_VAR, 'GODEBUG'}
class FuzzerOptionsException(Exception):
|
codereview_new_python_data_5056
|
def check_miracleptr_status(testcase):
try:
return MIRACLEPTR_STATUS[status]
except:
- logs.log(f'Unknown MiraclePtr status: {line}')
break
return None
nit: log_error here.
def check_miracleptr_status(testcase):
try:
return MIRACLEPTR_STATUS[status]
except:
+ logs.log_error(f'Unknown MiraclePtr status: {line}')
break
return None
|
codereview_new_python_data_5057
|
GROUP_MAX_TESTCASE_LIMIT = 25
VARIANT_CRASHES_IGNORE = re.compile(
- r'^Out-of-memory|^Timeout|^Missing-library|^Data race')
VARIANT_THRESHOLD_PERCENTAGE = 0.2
VARIANT_MIN_THRESHOLD = 5
nit: do
```
r'^(Out-of-memory|...|...)'
```
to avoid many repetitions of `^`.
GROUP_MAX_TESTCASE_LIMIT = 25
VARIANT_CRASHES_IGNORE = re.compile(
+ r'^(Out-of-memory|Timeout|Missing-library|Data race)')
VARIANT_THRESHOLD_PERCENTAGE = 0.2
VARIANT_MIN_THRESHOLD = 5
|
codereview_new_python_data_5058
|
r'^std::sys_common::backtrace',
r'^__rust_start_panic',
r'^__scrt_common_main_seh',
- r'^libgcc_s.so.1',
# Functions names (contains).
r'.*ASAN_OnSIGSEGV',
Can you add a test? see https://github.com/google/clusterfuzz/commit/c8ddbe20fc4706cdfc8f46b79076fd7ad6b1cd9c for an example.
r'^std::sys_common::backtrace',
r'^__rust_start_panic',
r'^__scrt_common_main_seh',
+ r'^libgcc_s.so.*',
# Functions names (contains).
r'.*ASAN_OnSIGSEGV',
|
codereview_new_python_data_5059
|
def _get_runner():
def _get_reproducer_path(log, reproducers_dir):
"""Gets the reproducer path, if any."""
crash_match = _CRASH_REGEX.search(log)
- if not crash_match or not crash_match.group(1):
return None
tmp_crash_path = Path(crash_match.group(1))
prm_crash_path = Path(reproducers_dir) / tmp_crash_path.name
just call this variable `crash_path`. Not clear what `prm` means here either.
def _get_runner():
def _get_reproducer_path(log, reproducers_dir):
"""Gets the reproducer path, if any."""
crash_match = _CRASH_REGEX.search(log)
+ if not crash_match:
return None
tmp_crash_path = Path(crash_match.group(1))
prm_crash_path = Path(reproducers_dir) / tmp_crash_path.name
|
codereview_new_python_data_5060
|
r'libFuzzer: out-of-memory \(',
r'rss limit exhausted',
r'in rust_oom',
- r'Failure description: out-of-memory', # Centipede
]))
RUNTIME_ERROR_REGEX = re.compile(r'#\s*Runtime error in (.*)')
RUNTIME_ERROR_LINE_REGEX = re.compile(r'#\s*Runtime error in (.*), line [0-9]+')
nit: end with period.
r'libFuzzer: out-of-memory \(',
r'rss limit exhausted',
r'in rust_oom',
+ r'Failure description: out-of-memory', # Centipede.
]))
RUNTIME_ERROR_REGEX = re.compile(r'#\s*Runtime error in (.*)')
RUNTIME_ERROR_LINE_REGEX = re.compile(r'#\s*Runtime error in (.*), line [0-9]+')
|
codereview_new_python_data_5061
|
def save(self, issue):
def create(self):
"""Create an issue object locally."""
- raw_fields = {"id": "-1", "fields": {"components": [], "labels": []}}
# Create jira issue object
jira_issue = jira.resources.Issue({},
jira.resilientsession.ResilientSession(),
nit: please use single quotes for all strings.
def save(self, issue):
def create(self):
"""Create an issue object locally."""
+ raw_fields = {'id': '-1', 'fields': {'components': [], 'labels': []}}
# Create jira issue object
jira_issue = jira.resources.Issue({},
jira.resilientsession.ResilientSession(),
|
codereview_new_python_data_5062
|
def get_testcase_variant(testcase_id, job_type):
def get_all_testcase_variants(testcase_id):
"""Get all testcase variant entities based on testcase id."""
- variants = data_types.TestcaseVariant.query(
- data_types.TestcaseVariant.testcase_id == testcase_id).get()
- if not variants:
return []
- return variants
# ------------------------------------------------------------------------------
# Fuzz target related functions
Don't call get(). This only gets a single entity from the entire query.
Just return the query as is. We can iterate over this query to get all results.
def get_testcase_variant(testcase_id, job_type):
def get_all_testcase_variants(testcase_id):
"""Get all testcase variant entities based on testcase id."""
+ variants_query = data_types.TestcaseVariant.query(
+ data_types.TestcaseVariant.testcase_id == testcase_id)
+ if not variants_query:
return []
+ return [v for v in variants_query.iter()]
+
# ------------------------------------------------------------------------------
# Fuzz target related functions
|
codereview_new_python_data_5063
|
def get_all_testcase_variants(testcase_id):
data_types.TestcaseVariant.testcase_id == testcase_id)
if not variants_query:
return []
- return list(variants_query.iter())
# ------------------------------------------------------------------------------
I think this will never be the case. can omit.
def get_all_testcase_variants(testcase_id):
data_types.TestcaseVariant.testcase_id == testcase_id)
if not variants_query:
return []
+ return list(variants_query.iter())
# ------------------------------------------------------------------------------
|
codereview_new_python_data_5064
|
def get_all_testcase_variants(testcase_id):
data_types.TestcaseVariant.testcase_id == testcase_id)
if not variants_query:
return []
- return list(variants_query.iter())
# ------------------------------------------------------------------------------
No need to turn this into an eager list. We can just return the query as-is. Users can iterate over it to get all results.
def get_all_testcase_variants(testcase_id):
data_types.TestcaseVariant.testcase_id == testcase_id)
if not variants_query:
return []
+ return list(variants_query.iter())
# ------------------------------------------------------------------------------
|
codereview_new_python_data_5065
|
def test_reproduce(self):
"""Tests reproducing a crash."""
testcase_path, _ = setup_testcase_and_corpus('crash', 'empty_corpus')
engine_impl = engine.Engine()
- sanitized_target_path = f'{DATA_DIR}/__centipede_address/test_fuzzer'
result = engine_impl.reproduce(sanitized_target_path, testcase_path, [], 10)
self.assertListEqual([sanitized_target_path, testcase_path], result.command)
self.assertIn('ERROR: AddressSanitizer: heap-use-after-free', result.output)
minor thing: can we rename `__centipede_address` to something even more general, like `__extra_build` ?
I'm working on build setup support, and it's a lot easier to not have to make assumptions about the sanitizer and the engine.
def test_reproduce(self):
"""Tests reproducing a crash."""
testcase_path, _ = setup_testcase_and_corpus('crash', 'empty_corpus')
engine_impl = engine.Engine()
+ sanitized_target_path = f'{DATA_DIR}/__extra_build/test_fuzzer'
result = engine_impl.reproduce(sanitized_target_path, testcase_path, [], 10)
self.assertListEqual([sanitized_target_path, testcase_path], result.command)
self.assertIn('ERROR: AddressSanitizer: heap-use-after-free', result.output)
|
codereview_new_python_data_5066
|
def reset_current_memory_tool_options(redzone_size=0,
set_value('MEMORY_TOOL', tool_name)
bot_platform = platform()
# Default options for memory debuggin tool used.
- if tool_name in ['ASAN', 'HWASAN', 'NOSANITIZER']:
tool_options = get_asan_options(redzone_size, malloc_context_size,
quarantine_size_mb, bot_platform, leaks,
disable_ubsan)
Could we just separate this out into its own branch and set `tool_options` to be `{}` instead?
Or alternatively, we just set `tool_options = {}` before line 865 and then we don't need the explicit branch.
def reset_current_memory_tool_options(redzone_size=0,
set_value('MEMORY_TOOL', tool_name)
bot_platform = platform()
+ tool_options = {}
# Default options for memory debuggin tool used.
+ if tool_name in ['ASAN', 'HWASAN']:
tool_options = get_asan_options(redzone_size, malloc_context_size,
quarantine_size_mb, bot_platform, leaks,
disable_ubsan)
|
codereview_new_python_data_5067
|
def job_name(self, project_name, config_suffix):
GFT_UBSAN_JOB = JobInfo('googlefuzztest_ubsan_', 'googlefuzztest', 'undefined',
['googlefuzztest', 'engine_ubsan'])
-LIBFUZZER_NONE_JOB = JobInfo("libfuzzer_none_", "libfuzzer", "none", [])
LIBFUZZER_NONE_I386_JOB = JobInfo(
- "libfuzzer_none_i386_", "libfuzzer", "none", [], architecture='i386')
JOB_MAP = {
'libfuzzer': {
nit: let's `s/none/nosanitizer/` to make it clearer what `none` means to users.
def job_name(self, project_name, config_suffix):
GFT_UBSAN_JOB = JobInfo('googlefuzztest_ubsan_', 'googlefuzztest', 'undefined',
['googlefuzztest', 'engine_ubsan'])
+LIBFUZZER_NONE_JOB = JobInfo("libfuzzer_nosanitizer_", "libfuzzer", "none", [])
LIBFUZZER_NONE_I386_JOB = JobInfo(
+ "libfuzzer_nosanitizer_i386_", "libfuzzer", "none", [], architecture='i386')
JOB_MAP = {
'libfuzzer': {
|
codereview_new_python_data_5068
|
def test_unrecovered_exception(self):
testcase.get_metadata(triage.TRIAGE_MESSAGE_KEY))
def test_no_experimental_bugs(self):
- """Test no exception."""
self.mock.file_issue.return_value = 'ID', None
self.testcase.crash_type = 'Command injection'
self.assertFalse(triage._file_issue(self.testcase, self.issue_tracker))
nit: "Tests"
And I'm not sure this description is correct.
def test_unrecovered_exception(self):
testcase.get_metadata(triage.TRIAGE_MESSAGE_KEY))
def test_no_experimental_bugs(self):
+ """Tests not filing experimental bugs."""
self.mock.file_issue.return_value = 'ID', None
self.testcase.crash_type = 'Command injection'
self.assertFalse(triage._file_issue(self.testcase, self.issue_tracker))
|
codereview_new_python_data_5069
|
def prepare_runner(fuzzer_path,
engine_common.recreate_directory(fuzzer_utils.get_temp_dir())
if environment.get_value('USE_UNSHARE'):
runner_class = UnshareAflRunner
- elif not environment.is_android():
- runner_class = AflRunner
- else:
runner_class = AflAndroidRunner
runner = runner_class(
fuzzer_path,
nit: can we invert this condition?
i.e.
```python
elif environment.is_android():
runner_class = AflAndroidRunner
else:
runner_class = AflRunner
```
def prepare_runner(fuzzer_path,
engine_common.recreate_directory(fuzzer_utils.get_temp_dir())
if environment.get_value('USE_UNSHARE'):
runner_class = UnshareAflRunner
+ elif environment.is_android():
runner_class = AflAndroidRunner
+ else:
+ runner_class = AflRunner
runner = runner_class(
fuzzer_path,
|
codereview_new_python_data_5070
|
def get_command(self, additional_args=None):
if additional_args:
command.extend(additional_args)
- return self.tool_prefix('unshare') + self.tool_prefix('inj') + command
class ModifierProcessRunner(ModifierProcessRunnerMixin, ProcessRunner):
nit: rename inj to extra_sanitizers
def get_command(self, additional_args=None):
if additional_args:
command.extend(additional_args)
+ return self.tool_prefix('unshare') + self.tool_prefix('extra_sanitizers') + command
class ModifierProcessRunner(ModifierProcessRunnerMixin, ProcessRunner):
|
codereview_new_python_data_5071
|
def parse(self, stacktrace: str) -> CrashInfo:
# Shell bugs detected by extra sanitizers.
self.update_state_on_match(
- EXTRA_SANITIZERS_SHELL_BUG_REGEX, line, state, new_type='Shell bug')
# For KASan crashes, additional information about a bad access may come
# from a later line. Update the type and address if this happens.
Can we rename "Shell bug" to "Command injection"?
def parse(self, stacktrace: str) -> CrashInfo:
# Shell bugs detected by extra sanitizers.
self.update_state_on_match(
+ EXTRA_SANITIZERS_COMMAND_INJECTION_REGEX, line, state, new_type='Shell bug')
# For KASan crashes, additional information about a bad access may come
# from a later line. Update the type and address if this happens.
|
codereview_new_python_data_5072
|
def create_grid(self):
self.method_label = QLabel(_('Method') + ':')
self.method_combo = QComboBox()
self.method_combo.addItems([_('Preserve payment'), _('Decrease payment')])
- self.method_combo.currentIndexChanged.connect(self.update)
old_size_label = TxSizeLabel()
old_size_label.setAlignment(Qt.AlignCenter)
old_size_label.setAmount(self.old_tx_size)
This does not work.
It works with:
```suggestion
self.method_combo.currentIndexChanged.connect(self._trigger_update)
```
but I guess that then questions whether _trigger_update is indeed supposed to be private.
Maybe it is the `update` method that should be private instead.
def create_grid(self):
self.method_label = QLabel(_('Method') + ':')
self.method_combo = QComboBox()
self.method_combo.addItems([_('Preserve payment'), _('Decrease payment')])
+ self.method_combo.currentIndexChanged.connect(self.trigger_update)
old_size_label = TxSizeLabel()
old_size_label.setAlignment(Qt.AlignCenter)
old_size_label.setAmount(self.old_tx_size)
|
codereview_new_python_data_5073
|
def start(self, initial_data = {}):
return self._current
def last_if_single_password(self, view, wizard_data):
- return False # TODO: self._daemon.config.get('single_password')
def last_if_single_password_and_not_bip39(self, view, wizard_data):
return self.last_if_single_password(view, wizard_data) and not wizard_data['seed_type'] == 'bip39'
Please raise a `NotImplementedError` exception here, instead of returning `False`.
It took me a while to understand that this method must be overridden.
def start(self, initial_data = {}):
return self._current
def last_if_single_password(self, view, wizard_data):
+ raise NotImplementedError()
def last_if_single_password_and_not_bip39(self, view, wizard_data):
return self.last_if_single_password(view, wizard_data) and not wizard_data['seed_type'] == 'bip39'
|
codereview_new_python_data_5074
|
def export_request(self, x: Invoice) -> Dict[str, Any]:
# if request was paid onchain, add relevant fields
# note: addr is reused when getting paid on LN! so we check for that.
is_paid, conf, tx_hashes = self._is_onchain_invoice_paid(x)
- if is_paid and self.lnworker.get_invoice_status(x) != PR_PAID:
if conf is not None:
d['confirmations'] = conf
d['tx_hashes'] = tx_hashes
`self.lnworker` might be None, also need to test for that (see my updated comment)
def export_request(self, x: Invoice) -> Dict[str, Any]:
# if request was paid onchain, add relevant fields
# note: addr is reused when getting paid on LN! so we check for that.
is_paid, conf, tx_hashes = self._is_onchain_invoice_paid(x)
+ if is_paid and (not self.lnworker or self.lnworker.get_invoice_status(x) != PR_PAID):
if conf is not None:
d['confirmations'] = conf
d['tx_hashes'] = tx_hashes
|
codereview_new_python_data_5075
|
def merge_in(self, tree, scope, stage, merge_scope=False):
# CodeGenerator, and tell that CodeGenerator to generate code
# from multiple sources.
assert isinstance(self.body, Nodes.StatListNode)
if self.pxd_stats is None:
self.pxd_stats = Nodes.StatListNode(self.body.pos, stats=[])
Let's add a guard against typos.
```suggestion
assert isinstance(self.body, Nodes.StatListNode)
assert stage in ('pxd', 'utility')
```
def merge_in(self, tree, scope, stage, merge_scope=False):
# CodeGenerator, and tell that CodeGenerator to generate code
# from multiple sources.
assert isinstance(self.body, Nodes.StatListNode)
+ assert stage in ('pxd', 'utility')
if self.pxd_stats is None:
self.pxd_stats = Nodes.StatListNode(self.body.pos, stats=[])
|
codereview_new_python_data_5076
|
def __init__(self, cname, components):
self._convert_to_py_code = None
self._convert_from_py_code = None
# equivalent_type must be set now because it isn't available at import time
- from . import Builtin
- self.equivalent_type = Builtin.tuple_type
def __str__(self):
return "(%s)" % ", ".join(str(c) for c in self.components)
If `tuple_type` is really all we need, then let's just import that directly.
```suggestion
from .Builtin import tuple_type
self.equivalent_type = tuple_type
```
def __init__(self, cname, components):
self._convert_to_py_code = None
self._convert_from_py_code = None
# equivalent_type must be set now because it isn't available at import time
+ from .Builtin import tuple_type
+ self.equivalent_type = tuple_type
def __str__(self):
return "(%s)" % ", ".join(str(c) for c in self.components)
|
codereview_new_python_data_5077
|
def test_use_typing_attributes_as_non_annotations():
def test_optional_ctuple(x: typing.Optional[tuple[float]]):
"""
- Should not be a C-tuple (because these can't be optional
>>> test_optional_ctuple((1.0,))
tuple object
"""
```suggestion
Should not be a C-tuple (because these can't be optional)
```
def test_use_typing_attributes_as_non_annotations():
def test_optional_ctuple(x: typing.Optional[tuple[float]]):
"""
+ Should not be a C-tuple (because these can't be optional)
>>> test_optional_ctuple((1.0,))
tuple object
"""
|
codereview_new_python_data_5082
|
# distutils: language = c++
-# Note that the pure and classic syntax examples are not quite identical
-# since pure Python syntax does not support C++ "new", so we allocate the
-# scratch space slightly differently
from cython.parallel import parallel, prange
from cython.cimports.libc.stdlib import malloc, free
from cython.cimports.libcpp.algorithm import nth_element
Are we using the C++ language in this file? If I'm not wrong, we aren't, so we can remove this.
# distutils: language = c++
from cython.parallel import parallel, prange
from cython.cimports.libc.stdlib import malloc, free
from cython.cimports.libcpp.algorithm import nth_element
|
codereview_new_python_data_5083
|
def is_reverse_number_slot(name):
unaryfunc = Signature("T", "O") # typedef PyObject * (*unaryfunc)(PyObject *);
binaryfunc = Signature("OO", "O") # typedef PyObject * (*binaryfunc)(PyObject *, PyObject *);
ibinaryfunc = Signature("TO", "O") # typedef PyObject * (*binaryfunc)(PyObject *, PyObject *);
-powternaryfunc = Signature("OO?", "O") # typedef PyObject * (*ternaryfunc)(PyObject *, PyObject *, PyObject *);
-ipowternaryfunc = Signature("TO?", "O") # typedef PyObject * (*ternaryfunc)(PyObject *, PyObject *, PyObject *);
callfunc = Signature("T*", "O") # typedef PyObject * (*ternaryfunc)(PyObject *, PyObject *, PyObject *);
inquiry = Signature("T", "i") # typedef int (*inquiry)(PyObject *);
lenfunc = Signature("T", "z") # typedef Py_ssize_t (*lenfunc)(PyObject *);
```suggestion
powternaryfunc = Signature("OO?", "O") # typedef PyObject * (*ternaryfunc)(PyObject *, PyObject *, PyObject *);
ipowternaryfunc = Signature("TO?", "O") # typedef PyObject * (*ternaryfunc)(PyObject *, PyObject *, PyObject *);
```
def is_reverse_number_slot(name):
unaryfunc = Signature("T", "O") # typedef PyObject * (*unaryfunc)(PyObject *);
binaryfunc = Signature("OO", "O") # typedef PyObject * (*binaryfunc)(PyObject *, PyObject *);
ibinaryfunc = Signature("TO", "O") # typedef PyObject * (*binaryfunc)(PyObject *, PyObject *);
+powternaryfunc = Signature("OO?", "O") # typedef PyObject * (*ternaryfunc)(PyObject *, PyObject *, PyObject *);
+ipowternaryfunc = Signature("TO?", "O") # typedef PyObject * (*ternaryfunc)(PyObject *, PyObject *, PyObject *);
callfunc = Signature("T*", "O") # typedef PyObject * (*ternaryfunc)(PyObject *, PyObject *, PyObject *);
inquiry = Signature("T", "i") # typedef int (*inquiry)(PyObject *);
lenfunc = Signature("T", "z") # typedef Py_ssize_t (*lenfunc)(PyObject *);
|
codereview_new_python_data_5084
|
def runtests(options, cmd_args, coverage=None):
('limited_api_bugs.txt', options.limited_api),
('windows_bugs.txt', sys.platform == 'win32'),
('cygwin_bugs.txt', sys.platform == 'cygwin'),
- ('windows_bugs_39.txt', sys.platform == 'win32' and sys.version_info[:2] == (3, 9)),
]
exclude_selectors += [
While generally a good idea, this change is unrelated to this PR now.
```suggestion
('windows_bugs_39.txt', sys.platform == 'win32' and sys.version_info[:2] == (3, 9))
```
def runtests(options, cmd_args, coverage=None):
('limited_api_bugs.txt', options.limited_api),
('windows_bugs.txt', sys.platform == 'win32'),
('cygwin_bugs.txt', sys.platform == 'cygwin'),
+ ('windows_bugs_39.txt', sys.platform == 'win32' and sys.version_info[:2] == (3, 9))
]
exclude_selectors += [
|
codereview_new_python_data_5085
|
def find_referenced_modules(self, env, module_list, modules_seen):
module_list.append(env)
def sort_types_by_inheritance(self, type_dict, type_order, getkey):
- subclasses = {} # maps type key to list of subclass keys
for key in type_order:
new_entry = type_dict[key]
# collect all base classes to check for children
```suggestion
subclasses = {} # maps type key to list of subclass keys
```
def find_referenced_modules(self, env, module_list, modules_seen):
module_list.append(env)
def sort_types_by_inheritance(self, type_dict, type_order, getkey):
+ subclasses = {} # maps type key to list of subclass keys
for key in type_order:
new_entry = type_dict[key]
# collect all base classes to check for children
|
codereview_new_python_data_5086
|
def func(a: fused_type1, b: fused_type2):
# called from Cython space
cfunc[cython.double](5.0, 1.0)
cpfunc[cython.float, cython.double](1.0, 2.0)
-# Indexing def function in Cython code requires string names
func["float", "double"](1.0, 2.0)
```suggestion
# Indexing def functions in Cython code requires string names
```
def func(a: fused_type1, b: fused_type2):
# called from Cython space
cfunc[cython.double](5.0, 1.0)
cpfunc[cython.float, cython.double](1.0, 2.0)
+# Indexing def functions in Cython code requires string names
func["float", "double"](1.0, 2.0)
|
codereview_new_python_data_5087
|
class Context(object):
# include_directories [string]
# future_directives [object]
# language_level int currently 2 or 3 for Python 2/3
- # legacy_implicit_noexcept [bool]
cython_scope = None
language_level = None # warn when not set but default to Py2
I think the `[]` are denoting a list
```suggestion
# legacy_implicit_noexcept bool
```
class Context(object):
# include_directories [string]
# future_directives [object]
# language_level int currently 2 or 3 for Python 2/3
+ # legacy_implicit_noexcept bool
cython_scope = None
language_level = None # warn when not set but default to Py2
|
codereview_new_python_data_5088
|
def __init__(self, include_directories, compiler_directives, cpp=False,
if language_level is not None:
self.set_language_level(language_level)
- self.legacy_implicit_noexcept = getattr(options, 'legacy_implicit_noexcept', False)
self.gdb_debug_outputwriter = None
Is the `getattr()` only needed for the `None` case? If so, I'd change it to be more explicit:
```suggestion
self.legacy_implicit_noexcept = options.legacy_implicit_noexcept if options else False
```
The `getattr()` suggests to me that there are cases where the option might not be initialised, in which case I'd rather see it being initialised.
def __init__(self, include_directories, compiler_directives, cpp=False,
if language_level is not None:
self.set_language_level(language_level)
+ self.legacy_implicit_noexcept = options.legacy_implicit_noexcept if options else False
self.gdb_debug_outputwriter = None
|
codereview_new_python_data_5089
|
def annotated_function(a: cython.int, b: cython.int):
with cython.annotation_typing(False):
# Cython is ignoring annotations within this code block
c: list = []
- c.append(a)
- c.append(b)
- c.append(s)
return c
If I were writing this I'd only put the `c: list` in the `with` block. But either way will compile
def annotated_function(a: cython.int, b: cython.int):
with cython.annotation_typing(False):
# Cython is ignoring annotations within this code block
c: list = []
+ c.append(a)
+ c.append(b)
+ c.append(s)
return c
|
codereview_new_python_data_5090
|
def main():
foo4: cython.int = 1
foo5: stdint.bar = 5 # warning
foo6: object = 1
_WARNINGS = """
Here, I wanted also to test the following case:
```python
foo7: cython.bar = 1
```
But it yields following warnings:
```
14:16: Unknown type declaration 'cython.bar' in annotation, ignoring
25:10: 'cpdef_method' redeclared
36:10: 'cpdef_cname_method' redeclared
977:29: Ambiguous exception value, same as default return value: 0
1018:46: Ambiguous exception value, same as default return value: 0
1108:29: Ambiguous exception value, same as default return value: 0
```
Not sure why there are additional warnings. It is the reason why it is not included in the test...
def main():
foo4: cython.int = 1
foo5: stdint.bar = 5 # warning
foo6: object = 1
+ # FIXME: This is raising additional warnings not related to this test.
+ # foo7: cython.bar = 1
+ with cython.annotation_typing(False):
+ foo8: Bar = 1
+ foo9: stdint.bar = 5
+ foo10: cython.bar = 1
_WARNINGS = """
|
codereview_new_python_data_5091
|
def analyse_as_type(self, env):
# Try to give a helpful warning when users write plain C type names.
if not env.in_c_type_context and PyrexTypes.parse_basic_type(self.name):
- warning(self.pos, "Found C type '%s' in a Python annotation. Did you mean to use a 'cython.%s'?" % (self.name, self.name))
return None
Given that we say "Found C type …" and not "Found the C type …", I think we should be consistent and go without the article at the end as well.
```suggestion
warning(self.pos, "Found C type '%s' in a Python annotation. Did you mean to use 'cython.%s'?" % (self.name, self.name))
```
def analyse_as_type(self, env):
# Try to give a helpful warning when users write plain C type names.
if not env.in_c_type_context and PyrexTypes.parse_basic_type(self.name):
+ warning(self.pos, "Found C type '%s' in a Python annotation. Did you mean to use 'cython.%s'?" % (self.name, self.name))
return None
|
codereview_new_python_data_5092
|
def analyse_as_type(self, env):
return self.analyse_type_annotation(env)[1]
def _warn_on_unknown_annotation(self, env, annotation):
- """Method checks for cases when user should be warned that annotation contains unkonwn types."""
if annotation.is_name:
# Validate annotation in form `var: type`
if not env.lookup(annotation.name):
```suggestion
"""Method checks for cases when user should be warned that annotation contains unknown types."""
```
def analyse_as_type(self, env):
return self.analyse_type_annotation(env)[1]
def _warn_on_unknown_annotation(self, env, annotation):
+ """Method checks for cases when user should be warned that annotation contains unknown types."""
if annotation.is_name:
# Validate annotation in form `var: type`
if not env.lookup(annotation.name):
|
codereview_new_python_data_5094
|
def run_build():
"Topic :: Software Development :: Libraries :: Python Modules"
],
project_urls={
- 'Documentation': 'https://cython.readthedocs.io/',
- 'Funding': 'https://cython.readthedocs.io/en/latest/src/donating.html',
- 'Source': 'https://github.com/cython/cython',
- 'Tracker': 'https://github.com/cython/cython/issues',
},
scripts=scripts,
"Bug Tracker" sounds clearer
def run_build():
"Topic :: Software Development :: Libraries :: Python Modules"
],
project_urls={
+ "Documentation": "https://cython.readthedocs.io/",
+ "Donate": "https://cython.readthedocs.io/en/latest/src/donating.html",
+ "Source code": "https://github.com/cython/cython",
+ "Bug tracker": "https://github.com/cython/cython/issues",
+ "User group": "https://groups.google.com/g/cython-users",
},
scripts=scripts,
|
codereview_new_python_data_5095
|
def run_build():
project_urls={
"Documentation": "https://cython.readthedocs.io/",
"Donate": "https://cython.readthedocs.io/en/latest/src/donating.html",
- "Source code": "https://github.com/cython/cython",
- "Bug tracker": "https://github.com/cython/cython/issues",
- "User group": "https://groups.google.com/g/cython-users",
},
scripts=scripts,
mailing list would be a better name I think, although it's probably not the best, but a long established one
def run_build():
project_urls={
"Documentation": "https://cython.readthedocs.io/",
"Donate": "https://cython.readthedocs.io/en/latest/src/donating.html",
+ "Source Code": "https://github.com/cython/cython",
+ "Bug Tracker": "https://github.com/cython/cython/issues",
+ "User Group": "https://groups.google.com/g/cython-users",
},
scripts=scripts,
|
codereview_new_python_data_5096
|
def run_build():
project_urls={
"Documentation": "https://cython.readthedocs.io/",
"Donate": "https://cython.readthedocs.io/en/latest/src/donating.html",
- "Source code": "https://github.com/cython/cython",
- "Bug tracker": "https://github.com/cython/cython/issues",
- "User group": "https://groups.google.com/g/cython-users",
},
scripts=scripts,
```suggestion
"Donate": "https://cython.readthedocs.io/en/latest/src/donating.html",
"Source Code": "https://github.com/cython/cython",
"Bug Tracker": "https://github.com/cython/cython/issues",
"User Group": "https://groups.google.com/g/cython-users",
```
I've applied the capitalization. Personally I think "User Group" is fine. We can always edit it later now that it's there
def run_build():
project_urls={
"Documentation": "https://cython.readthedocs.io/",
"Donate": "https://cython.readthedocs.io/en/latest/src/donating.html",
+ "Source Code": "https://github.com/cython/cython",
+ "Bug Tracker": "https://github.com/cython/cython/issues",
+ "User Group": "https://groups.google.com/g/cython-users",
},
scripts=scripts,
|