feature         dtype           min      max
id              int64           20       338k
vocab_size      int64           2        671
ast_levels      int64           4        32
nloc            int64           1        451
n_ast_nodes     int64           12       5.6k
n_identifiers   int64           1        186
n_ast_errors    int64           0        10
n_words         int64           2        2.17k
n_whitespaces   int64           2        13.8k
fun_name        stringlengths   2        73
commit_message  stringlengths   51       15.3k
url             stringlengths   31       59
code            stringlengths   51       31k
ast_errors      stringlengths   0        1.46k
token_counts    int64           6        3.32k
file_name       stringlengths   5        56
language        stringclasses   1 value
path            stringlengths   7        134
commit_id       stringlengths   40       40
repo            stringlengths   3        28
complexity      int64           1        153
258,709
35
6
2
23
3
0
43
85
_n_features_out
ENH Adds feature_names_out to preprocessing module (#21079) Co-authored-by: Olivier Grisel <olivier.grisel@ensta.org> Co-authored-by: 赵丰 (Zhao Feng) <616545598@qq.com> Co-authored-by: Niket Jain <51831161+nikJ13@users.noreply.github.com> Co-authored-by: Loïc Estève <loic.esteve@ymail.com>
https://github.com/scikit-learn/scikit-learn.git
def _n_features_out(self):
    # Used by _ClassNamePrefixFeaturesOutMixin. This model preserves the
    # number of input features but this is not a one-to-one mapping in the
    # usual sense. Hence the choice not to use _OneToOneFeatureMixin to
    # implement get_feature_names_out for this class.
    return self.n_features_in_
10
_data.py
Python
sklearn/preprocessing/_data.py
d7feac0ccfe1a7b8a55f2e16f249f77508a91fe1
scikit-learn
1
261,541
28
11
9
100
11
0
33
76
sqeuclidean_row_norms
MAINT Introduce `MiddleTermComputer`, an abstraction generalizing `GEMMTermComputer` (#24807) Co-authored-by: Julien Jerphanion <git@jjerphan.xyz> Co-authored-by: Olivier Grisel <olivier.grisel@ensta.org>
https://github.com/scikit-learn/scikit-learn.git
def sqeuclidean_row_norms(X, num_threads):
    if X.dtype == np.float64:
        return np.asarray(_sqeuclidean_row_norms64(X, num_threads))
    if X.dtype == np.float32:
        return np.asarray(_sqeuclidean_row_norms32(X, num_threads))

    raise ValueError(
        "Only float64 or float32 datasets are supported at this time, "
        f"got: X.dtype={X.dtype}."
    )
57
_dispatcher.py
Python
sklearn/metrics/_pairwise_distances_reduction/_dispatcher.py
239e16319116ab7445c0557bb08783ab2d60673d
scikit-learn
3
60,429
4
7
2
24
4
0
4
6
_SetVerboseLevel
Balanced joint maximum mean discrepancy for deep transfer learning
https://github.com/jindongwang/transferlearning.git
def _SetVerboseLevel(level): return _cpplint_state.SetVerboseLevel(level)
13
cpp_lint.py
Python
code/deep/BJMMD/caffe/scripts/cpp_lint.py
cc4d0564756ca067516f71718a3d135996525909
transferlearning
1
291,163
6
6
6
23
5
0
6
13
_async_zha_physical_discovery
Migrate ZHA when enabling multi-PAN support on HA Yellow (#82213) * Migrate ZHA when enabling multi-PAN support on HA Yellow * Refactor BaseZhaFlow.async_step_maybe_confirm_ezsp_restore * Change data passed to ZHA to initiate migration * Catch errors during ZHA migration * Fix ZhaMigrationHelper.async_prepare_yellow_migration return value * Improve test coverage * Improve test coverage * Fix spelling * Rename some none HA yellow specifics * Rename again * Increase number of migration retries + refactor * Suppress OperationNotAllowed when reloading * Adjust tests
https://github.com/home-assistant/core.git
async def _async_zha_physical_discovery(self) -> dict[str, Any]:
13
silabs_multiprotocol_addon.py
Python
homeassistant/components/homeassistant_hardware/silabs_multiprotocol_addon.py
be7e76f302f670da61f24b96332a061a5032f479
core
1
145,827
2
6
55
13
2
0
2
9
test_timeslices_fully_overlapping_experiences
[RLlib] Issue 22625: `MultiAgentBatch.timeslices()` does not behave as expected. (#22657)
https://github.com/ray-project/ray.git
def test_timeslices_fully_overlapping_experiences(self):
245
test_multi_agent_batch.py
Python
rllib/policy/tests/test_multi_agent_batch.py
c0ade5f0b7cfc9aeba46cde7af3b36068a6420df
ray
3
160,150
33
12
9
155
23
1
37
96
test_gen_pyf_no_overwrite
TST: Initialize f2py2e tests of the F2PY CLI (#20668) Increases F2PY coverage by around 15 percent. For the CLI itself it covers the major features (around 70 percent), with the exception of mostly numpy.distutils stuff. More importantly, sets the groundwork for #20056, in that passing the same testsuite should indicate feature parity.
https://github.com/numpy/numpy.git
def test_gen_pyf_no_overwrite(capfd, hello_world_f90, monkeypatch):
    ipath = Path(hello_world_f90)
    monkeypatch.setattr(sys, "argv", f'f2py -h faker.pyf {ipath}'.split())

    with util.switchdir(ipath.parent):
        Path("faker.pyf").write_text("Fake news", encoding="ascii")
        with pytest.raises(SystemExit):
            f2pycli()  # Refuse to overwrite
        _, err = capfd.readouterr()
        assert "Use --overwrite-signature to overwrite" in err


@pytest.mark.xfail
@pytest.mark.xfail
78
test_f2py2e.py
Python
numpy/f2py/tests/test_f2py2e.py
729ad4f92420231e2a7009b3223c6c7620b8b808
numpy
1
281,598
6
12
39
44
7
0
6
49
print_help
Terminal Wide Rich (#1161) * My idea for how we handle Rich moving forward * remove independent consoles * FIxed pylint issues * add a few vars * Switched print to console * More transitions * Changed more prints * Replaced all prints * Fixing tabulate * Finished replace tabulate * Finished removing rich from Tabulate * add Panel around menu * add GST watermark under feature flag * Fixed 46 tests * Delete test_screener[False].yaml * Delete test_screener[True].yaml * Fixed the rest of the tests * add help and source color vars and use rgb * rich on stocks/options * update rich on disc, dps, sia * rich in gov, ins and scr menus * ba and ca menus with rich * Fixed import issue * Fixed some tests * removed termcolor * Removed prettytable * add rich to remaining stocks menus * FIxed linting issue * Added James' changes * Updated dependencies * Add rich to cryptocurrency menu * refactor economy and forex * refactor etf with rich * refactor mfunds * refactor rich rest * not specify style so default color works well on any background * Fixing mypy issues * Updated tests * More test fixes * James' test fixes * Updating tests : stocks/screener - fix cassettes using BR * Updating tests : crypto * Updating tests : disable DEBUG_MODE * Updating tests : stocks/fa/yfinance * minor fixes that escape * Improve the rich table function (that replaces tabulate :D ) * Fixed bad code * delete rogue file + dcf fix + NoConsole * sia mypy * fuck you linter * fuck you linter pt 2 * skip hehe * i hate the black linter * ubuntu mypy attempt * Update : rich_config + gtff * Updating tests : conftest * Updating tests : stocks * Update : rich_config * Updating : rich_config * make panel configurable for Theodore :b * colors update * Merged * Updating : rich_config + feature_flags * Updating : rich_config * Updating tests : stocks * Updating : feature_flags Co-authored-by: DidierRLopes <dro.lopes@campus.fct.unl.pt> Co-authored-by: Chavithra PARANA <chavithra@gmail.com> Co-authored-by: james <jmaslek11@gmail.com> Co-authored-by: jose-donato <zmcdonato@gmail.com>
https://github.com/OpenBB-finance/OpenBBTerminal.git
def print_help(self):
    console.print(
        text=f,
        menu="Home",
    )
20
terminal.py
Python
terminal.py
82747072c511beb1b2672846ae2ee4aec53eb562
OpenBBTerminal
1
100,844
6
6
3
22
4
0
6
20
use_mixed_precision
Refactoring and TravisCI to Github Actions (#1239) * refactor training * travis to actions
https://github.com/deepfakes/faceswap.git
def use_mixed_precision(self) -> bool: return self._use_mixed_precision
12
settings.py
Python
plugins/train/model/_base/settings.py
ff6b0209dd5ad57b81b0aca570df7f39a7119bfb
faceswap
1
151,750
11
10
3
69
9
0
11
32
publish
log warning if channel too far behind, add docstrings to message stream
https://github.com/freqtrade/freqtrade.git
def publish(self, message):
    waiter, self._waiter = self._waiter, self._loop.create_future()
    waiter.set_result((message, time.time(), self._waiter))
43
message_stream.py
Python
freqtrade/rpc/api_server/ws/message_stream.py
afc00bc30a94abd64fee000535e66287fd91595f
freqtrade
1
133,951
11
12
7
75
13
0
12
28
test_error_serialization
[Test][Client] Only start ray once in client tests (#28835) It looks like we're frequently starting and shutting down Ray in this test because `ray_start_client_server` isn't connecting to the Ray created by `ray_start_regular_shared`, and is instead starting a new Ray head process every time it launches. Ray client tests are failing frequently with: ``` [2022-10-06 07:31:46,253 E 13235 13751] core_worker_process.cc:277: The core worker has already been shutdown. This happens when the language frontend accesses the Ray's worker after it is shutdown. The process will exit ``` Which is probably caused by having multiple ray clusters running simultaneous, with some shutting down asynchronously. This refactor forces all of the tests in the module to use the same Ray cluster. Also fixes two other sources of potential flakiness: * Joins the thread in test_client_thread_safe (seems like this has a bad interaction when the client server is cleaned up) * Calls ray.get in `test_stdout_log_stream`, to make sure that the remote function is done running before we try searching for its output Should also have the happy side effect of speeding up test_client. Ran the `Small & Client` tests (regular and external redis) twice each, no flakes, and windows version of test_client.
https://github.com/ray-project/ray.git
def test_error_serialization(call_ray_start_shared):
    fake_path = os.path.join(os.path.dirname(__file__), "not_a_real_file")
    with pytest.raises(FileNotFoundError):
        with ray_start_client_server_for_address(call_ray_start_shared) as ray:
57
test_client.py
Python
python/ray/tests/test_client.py
297341e107daee1ea3aff991ae8ea8c90993c683
ray
1
85,091
16
9
6
52
6
0
18
56
test_bitbucket2_on_push_commits_multiple_committers_with_others
webhooks: Pick a more reasonable length for short sha. 7 characters are not enough for large projects, so we change it to reasonably longer. As an example, The Linux kernel needs at least 11 characters of sha in its shortened form to identify a revision. We pick 11 so it should work for most of the projects. Signed-off-by: Zixuan James Li <p359101898@gmail.com>
https://github.com/zulip/zulip.git
def test_bitbucket2_on_push_commits_multiple_committers_with_others(self) -> None:
    commit_info = "* first commit ([84b96adc644](https://bitbucket.org/kolaszek/repository-name/commits/84b96adc644a30fd6465b3d196369d880762afed))\n"
    expected_message = f
    self.check_webhook(
        "push_multiple_committers_with_others", TOPIC_BRANCH_EVENTS, expected_message
    )
24
tests.py
Python
zerver/webhooks/bitbucket2/tests.py
4e4689949438735622bdf669f05d218c671e7e01
zulip
1
208,178
140
21
42
720
59
0
229
778
test_flag_allow_error_cb_on_chord_header_on_upgraded_chord
Added test for task_allow_error_cb_on_chord_header flag with an upgraded chord input (#7744)
https://github.com/celery/celery.git
def test_flag_allow_error_cb_on_chord_header_on_upgraded_chord(self, manager, subtests):
    try:
        manager.app.backend.ensure_chords_allowed()
    except NotImplementedError as e:
        raise pytest.skip(e.args[0])

    if not manager.app.conf.result_backend.startswith('redis'):
        raise pytest.skip('Requires redis result backend.')
    redis_connection = get_redis_connection()

    manager.app.conf.task_allow_error_cb_on_chord_header = True

    errback_msg = 'errback called'
    errback_key = 'echo_errback'
    errback_sig = redis_echo.si(errback_msg, redis_key=errback_key)

    body_msg = 'chord body called'
    body_key = 'echo_body'
    body_sig = redis_echo.si(body_msg, redis_key=body_key)

    headers = (
        # (fail.si(),),  <-- this is not supported because it's not a valid chord header (only one task)
        (fail.si(), fail.si(), fail.si()),
        (fail.si(), identity.si(42)),
        (fail.si(), identity.si(42), identity.si(42)),
        (fail.si(), identity.si(42), fail.si()),
        (fail.si(), identity.si(42), fail.si(), identity.si(42)),
        (fail.si(), identity.si(42), fail.si(), identity.si(42), fail.si()),
    )

    # for some reason using parametrize breaks the test so we do it manually unfortunately
    for header in headers:
        implicit_chord_sig = chain(group(list(header)), body_sig)
        implicit_chord_sig.link_error(errback_sig)
        redis_connection.delete(errback_key, body_key)

        with subtests.test(msg='Error propagates from failure in header'):
            res = implicit_chord_sig.delay()
            with pytest.raises(ExpectedException):
                res.get(timeout=TIMEOUT)

        with subtests.test(msg='Confirm the body was not executed'):
            with pytest.raises(TimeoutError):
                # confirm the chord body was not called
                await_redis_echo((body_msg,), redis_key=body_key, timeout=10)
            # Double check
            assert not redis_connection.exists(body_key), 'Chord body was called when it should have not'

        with subtests.test(msg='Confirm the errback was called for each failed header task + body'):
            # confirm the errback was called for each task in the chord header
            failed_header_tasks_count = len(list(filter(lambda f_sig: f_sig.name == fail.si().name, header)))
            expected_errbacks_count = failed_header_tasks_count + 1  # +1 for the body
            expected_errbacks = tuple(errback_msg for _ in range(expected_errbacks_count))
            await_redis_echo(expected_errbacks, redis_key=errback_key)
            # confirm there are not leftovers
            assert not redis_connection.exists(errback_key)

        # Cleanup
        redis_connection.delete(errback_key)
440
test_canvas.py
Python
t/integration/test_canvas.py
afe0c2354bf61745d70df7b7005667e4f9ae64f6
celery
5
20,071
19
7
2
42
6
0
20
33
uname_attr
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def uname_attr(attribute):
    # type: (str) -> str
    return _distro.uname_attr(attribute)


try:
    from functools import cached_property
except ImportError:
    # Python < 3.8
13
distro.py
Python
pipenv/patched/notpip/_vendor/distro.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
1
246,349
29
10
8
121
16
0
36
99
test_servers_in_room
Faster joins: parse msc3706 fields in send_join response (#12011) Part of my work on #11249: add code to handle the new fields added in MSC3706.
https://github.com/matrix-org/synapse.git
def test_servers_in_room(self) -> None:
    parser = SendJoinParser(RoomVersions.V1, False)
    response = {"org.matrix.msc3706.servers_in_room": ["hs1", "hs2"]}
    serialised_response = json.dumps(response).encode()

    # Send data to the parser
    parser.write(serialised_response)

    # Retrieve and check the parsed SendJoinResponse
    parsed_response = parser.finish()
    self.assertEqual(parsed_response.servers_in_room, ["hs1", "hs2"])
68
test_client.py
Python
tests/federation/transport/test_client.py
da0e9f8efdac1571eab35ad2cc842073ca54769c
synapse
1
104,763
3
7
2
22
3
0
3
17
reset_format
Add code examples to API docs (#4168) * add code examples for functions related to the base dataset class * ✨ make style * 🖍 make each code example fully reproducible where applicable * 🖍 show parameter usage for some functions * 🖍 add examples for DatasetInfo functions
https://github.com/huggingface/datasets.git
def reset_format(self): self.set_format()
11
arrow_dataset.py
Python
src/datasets/arrow_dataset.py
445107bae3fcd6ac9eeae503232960fa4ba8ccfd
datasets
1
56,282
18
9
10
47
4
0
19
90
test_preview_error_messaging_with_deployments
Add a CLI command to preview how a FlowRun will appear in any FlowRunner's execution environment (PrefectHQ/orion#1971) Co-authored-by: Terrence Dorsey <terrence@prefect.io> Co-authored-by: Michael Adkins <madkinszane@gmail.com>
https://github.com/PrefectHQ/prefect.git
def test_preview_error_messaging_with_deployments():
    invoke_and_assert(
        [
            "deployment",
            "preview",
            "./tests/deployment_test_files/single_flow.py",  # not a deployment file
        ],
        expected_code=1,
        expected_output_contains="No deployment specifications found!",
    )
25
test_deployment_preview.py
Python
tests/cli/test_deployment_preview.py
5afded9fe6724d9e336f59792ee1d60656a2d94d
prefect
1
259,338
15
11
6
94
19
0
17
35
test_max_features_callable_data
ENH Allow `SelectFromModel`'s `max_features` to accept callables (#22356) * Initial implementation * Improved error handling and stability * Added unit tests * Updated test to use `max_features_` instead of `max_features` * Added documentation for new private attribute `max_features_` * Improved error handling for callables * Updated whats_new * Removed incorrect term reference to `max_features` * Removed float case and improved testing * Updated test names to more clearly reflect intention * Added a sample callable in `max_features` description * Improved documentation and streamlined error handling * Updated example to include demonstrate using a callable for max_features * Separated out callable demo into separate example * Removed demo from `max_features` docs (now in example) * Updated changelog * Apply suggestions from code review Co-authored-by: Thomas J. Fan <thomasjpfan@gmail.com> * Trimmed unneeded comments * Updated tests to reflect new error handling * Removed new line at end of docstring * Updated docstring * Fixed example syntax error * Fixed example syntax * Apply suggestions from code review Co-authored-by: Thomas J. Fan <thomasjpfan@gmail.com> * Reverted irrelevant changes * Update sklearn/feature_selection/_from_model.py Co-authored-by: Thomas J. Fan <thomasjpfan@gmail.com> * Fixed error message * Improved test coverage * Minor doc improvement -- added a list for `max_features` type * Update sklearn/feature_selection/_from_model.py Co-authored-by: Adrin Jalali <adrin.jalali@gmail.com> * Improved input validation and added test for array-like * Updated doc to use no longer use lambda function * Fixed docstring list * Added missing whitespace for list format in docstring Co-authored-by: Thomas J. Fan <thomasjpfan@gmail.com> Co-authored-by: Adrin Jalali <adrin.jalali@gmail.com>
https://github.com/scikit-learn/scikit-learn.git
def test_max_features_callable_data(max_features):
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    m = Mock(side_effect=max_features)
    transformer = SelectFromModel(estimator=clf, max_features=m, threshold=-np.inf)
    transformer.fit_transform(data, y)
    m.assert_called_with(data)
59
test_from_model.py
Python
sklearn/feature_selection/tests/test_from_model.py
db24a30bd3b90a9d55e82e450631de96305744f7
scikit-learn
1
308,369
19
10
14
90
13
0
21
43
setup_platform
Make ATTR_ENTITY_ID required in local_file service call (#63017) Co-authored-by: epenet <epenet@users.noreply.github.com>
https://github.com/home-assistant/core.git
def setup_platform(hass, config, add_entities, discovery_info=None):
    if DATA_LOCAL_FILE not in hass.data:
        hass.data[DATA_LOCAL_FILE] = []

    file_path = config[CONF_FILE_PATH]
    camera = LocalFile(config[CONF_NAME], file_path)
    hass.data[DATA_LOCAL_FILE].append(camera)
84
camera.py
Python
homeassistant/components/local_file/camera.py
d8dabd305cffe7e65a20f201a03361caf76cdeb8
core
2
294,877
11
8
7
57
11
0
11
76
set_timer
Add overlay options to Tado (#65886) Co-authored-by: Paulus Schoutsen <balloob@gmail.com>
https://github.com/home-assistant/core.git
def set_timer(self, temperature=None, time_period=None, requested_overlay=None):
    self._control_hvac(
        hvac_mode=CONST_MODE_HEAT,
        target_temp=temperature,
        duration=time_period,
        overlay_mode=requested_overlay,
    )
39
climate.py
Python
homeassistant/components/tado/climate.py
e76170fbfd691432e51a4e37235a5300cf741749
core
1
8,414
18
13
7
96
12
1
19
83
disable
Config Object (#2426) * Fixed loss instances across features * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed binary OneOfImplementation * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Flake 8 * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix custom loss components * Fix gbm category * Remove config object code, out of scope * Fixed more tests * Fixed incorrect text preproc default, added clip to category feature level * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixes additional tests * Cache jsonschema validator to reduce memory pressure * Fix imports * Skip neuropod test * Added upgrade audio to default preproc back compat and cleaned up * Small nits * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Change backfill constant for audio * Add docstring to compute feature hash * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Unused import * Another backfill constant change * Unused import * remove default population functions * Added config object test * rewired build_inputs * rewired combiner in ecd, added logic to config object * Refactored ecd.py * Fixing up merge_with_defaults, need metadata changes in master * Refactored defaults section and mega upgraded config obj * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed some formatting * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed feature col, proc col, and render config from defaults.py * Fix duplicate import * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Added config initializer to merge defaults flow * Refactored update_config_with_metadata * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Added dict conversion method to config object and refactored merge config function in config_utils * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Refactored until preproc entrypoint * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed update_config_with_metadata * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Removed load config base feature method - no longer necessary * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Formatting * Fixed input size assignment * Temp fix * Fixed pretrained encoder path referencing temp until preproc refactor * Solved the WORST BUG EVER * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Switch reduce_input to None for sequence tagger * Fixed another one * Fixed typo * Various test fixes * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Flake 8 * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed excess defaults params issue * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Minor fixes * [pre-commit.ci] 
auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed some defaults tests * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed more tests * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed more tests * Formatting * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * More test fixes * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed defaults tests * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix more tests * Flake 8 * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix more tests * Fixed more tests * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed more tests * Fixed more tests * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * fixing ghost tests attempt * Deep copy to smash the ghost failures * Copied top level modules now too * Started fixing hyperopt * Fixed Hyperopt Issues * Flake 8 * Remove commented out code * Address Piero feedback * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Flake 8 * Removed merge with defaults * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed various issues with preprocessing and splitting positioning * Fixed hyperopt issues * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Refactored api pipeline to use all config obj references * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed more tests * Flake 8 * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix more tests * Fixed auto tune learning rate and batch size * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed sequence feature tests * Fixed image feature test * Fixed last test * flake 8 * Marshmallowify Config object, remove manual to dict method, add Factory method constructors * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Validate config within config object * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * All Travis feedback addressed * Using all new constructors now * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * removed from class attributes * Added deep copies back and piped repr inheritance * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Format * Small error fix, moved back compat into Config Object * Flake8 * Docstring for hyperopt defaults method * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Address Joppe feedback * Revert "Address Joppe feedback" This reverts commit 42f1665ef917d062a010550bb960594c355285ff. 
* Fix tests * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Flake8 * fix test * Small improvement * Changed repr for input features, added feature enabling/disabling * Added feature enabling/disabling, and better reprs for SDK dev * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Flake 8 * Added rich to requirements.txt * Add some more CO tests and comment more on CO code * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix explain issue * Julian feedback * Added TODOs for future refactor PRs * Fix explain test failure, test shared state improvement and bug fix, remove unncessary code from convert_submodules * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * implement Daniel's feedback * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix residual errors * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Error fix * Using mixins now so no loose attributes on defaults, fixed height width schema restrictions * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Removed unnecessary filtering from defaults schema logic * Piero's simplification and cleanup * Flake 8 * Fix test and update docstrings from Pieros change * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Address most of Justin's feedback * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix tests and more feedback implementation * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Address feedback * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Renamed files to correspond to ModelConfig class name * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Missing constant import * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed incorrect merge conflict resolution * Flake8 * Fix remaining tests (except old models training from trainer type removal) * Fixed old models not validating trainer type * Add output_feature=False to test_hyperopt_ray.py * Implement Kabir's feedback * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Travis Addair <tgaddair@gmail.com> Co-authored-by: w4nderlust <w4nderlust@gmail.com>
https://github.com/ludwig-ai/ludwig.git
def disable(self):
    if not self.active:
        _error_console.print("This feature is already disabled!")
    else:
        self.active = False
        _info_console.print(f"{self.name} feature disabled!\n")
        logger.info(self.__repr__())


@dataclass(repr=False)
@dataclass(repr=False)
42
base.py
Python
ludwig/schema/features/base.py
4d2d81f9fdefc52eea6a9bf0826a6f2ffc8d681b
ludwig
2
260,070
210
11
5
303
30
0
363
331
token_freqs
DOC Rework plot_hashing_vs_dict_vectorizer.py example (#23266) Co-authored-by: Olivier Grisel <olivier.grisel@ensta.org> Co-authored-by: Julien Jerphanion <git@jjerphan.xyz>
https://github.com/scikit-learn/scikit-learn.git
def token_freqs(doc):
    freq = defaultdict(int)
    for tok in tokenize(doc):
        freq[tok] += 1
    return freq


token_freqs("That is one example, but this is another one")

# %%
# Observe in particular that the repeated token `"is"` is counted twice for
# instance.
#
# Breaking a text document into word tokens, potentially losing the order
# information between the words in a sentence is often called a `Bag of Words
# representation <https://en.wikipedia.org/wiki/Bag-of-words_model>`_.

# %%
# DictVectorizer
# --------------
#
# First we benchmark the :func:`~sklearn.feature_extraction.DictVectorizer`,
# then we compare it to :func:`~sklearn.feature_extraction.FeatureHasher` as
# both of them receive dictionaries as input.

from time import time
from sklearn.feature_extraction import DictVectorizer

dict_count_vectorizers = defaultdict(list)

t0 = time()
vectorizer = DictVectorizer()
vectorizer.fit_transform(token_freqs(d) for d in raw_data)
duration = time() - t0
dict_count_vectorizers["vectorizer"].append(
    vectorizer.__class__.__name__ + "\non freq dicts"
)
dict_count_vectorizers["speed"].append(data_size_mb / duration)
print(f"done in {duration:.3f} s at {data_size_mb / duration:.1f} MB/s")
print(f"Found {len(vectorizer.get_feature_names_out())} unique terms")

# %%
# The actual mapping from text token to column index is explicitly stored in
# the `.vocabulary_` attribute which is a potentially very large Python
# dictionary:

type(vectorizer.vocabulary_)

# %%
len(vectorizer.vocabulary_)

# %%
vectorizer.vocabulary_["example"]

# %%
# FeatureHasher
# -------------
#
# Dictionaries take up a large amount of storage space and grow in size as the
# training set grows. Instead of growing the vectors along with a dictionary,
# feature hashing builds a vector of pre-defined length by applying a hash
# function `h` to the features (e.g., tokens), then using the hash values
# directly as feature indices and updating the resulting vector at those
# indices. When the feature space is not large enough, hashing functions tend to
# map distinct values to the same hash code (hash collisions). As a result, it
# is impossible to determine what object generated any particular hash code.
#
# Because of the above it is impossible to recover the original tokens from the
# feature matrix and the best approach to estimate the number of unique terms in
# the original dictionary is to count the number of active columns in the
# encoded feature matrix. For such a purpose we define the following function:

import numpy as np
28
plot_hashing_vs_dict_vectorizer.py
Python
examples/text/plot_hashing_vs_dict_vectorizer.py
6ff214c46bacaf0385125bb47b4d8cb4a305fa3a
scikit-learn
2
275,248
27
13
18
148
13
0
35
237
build
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def build(self, var_list):
    super().build(var_list)
    if hasattr(self, "_built") and self._built:
        return
    self._built = True
    self._m = []
    self._u = []
    for var in var_list:
        self._m.append(
            self.add_variable_from_reference(
                model_variable=var, variable_name="m"
            )
        )
        self._u.append(
            self.add_variable_from_reference(
                model_variable=var, variable_name="u"
            )
        )
89
adamax.py
Python
keras/optimizers/optimizer_experimental/adamax.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
4
259,108
20
11
7
128
17
0
24
45
test_im_kw_adjust_vmin_vmax
ENH Adds im_kw to ConfusionMatrixDisplay (#20753) Co-authored-by: Julien Jerphanion <git@jjerphan.xyz>
https://github.com/scikit-learn/scikit-learn.git
def test_im_kw_adjust_vmin_vmax(pyplot):
    confusion_matrix = np.array([[0.48, 0.04], [0.08, 0.4]])
    disp = ConfusionMatrixDisplay(confusion_matrix)
    disp.plot(im_kw=dict(vmin=0.0, vmax=0.8))

    clim = disp.im_.get_clim()
    assert clim[0] == pytest.approx(0.0)
    assert clim[1] == pytest.approx(0.8)
98
test_confusion_matrix_display.py
Python
sklearn/metrics/_plot/tests/test_confusion_matrix_display.py
13174b16596d3222b721fed8868c814628b1becb
scikit-learn
1
60,101
8
12
5
43
7
0
9
52
logger
Create abstract Job interface block (#7832) Co-authored-by: Alexander Streed <desertaxle@users.noreply.github.com>
https://github.com/PrefectHQ/prefect.git
def logger(self):
    try:
        return get_run_logger()
    except MissingContextError:
        return get_logger(self.__class__.__name__)
24
abstract.py
Python
src/prefect/blocks/abstract.py
04ee589138a2fd4c9aa0c7bf2ac96c935bcbbd7c
prefect
2
122,167
22
13
4
98
14
0
26
39
_promote_like_jnp
Split parts of lax_numpy_test.py into separate test files. Why? The main test file is getting too big and this hinders iteration on individual tests PiperOrigin-RevId: 478130215
https://github.com/google/jax.git
def _promote_like_jnp(fun, inexact=False):
    _promote = _promote_dtypes_inexact if inexact else _promote_dtypes

    def wrapper(*args, **kw):
        flat_args, tree = tree_util.tree_flatten(args)
        args = tree_util.tree_unflatten(tree, _promote(*flat_args))
        return fun(*args, **kw)

    return wrapper
21
lax_numpy_operators_test.py
Python
tests/lax_numpy_operators_test.py
439217644a180f9a69d86971aeb409ba620f875d
jax
2
198,676
49
11
12
180
13
0
74
186
add_member
default values for supports and loads removed along with other changes
https://github.com/sympy/sympy.git
def add_member(self, label, start, end):
    if start not in self._node_labels or end not in self._node_labels or start==end:
        raise ValueError("The start and end points of the member must be unique nodes")
    elif label in list(self._members):
        raise ValueError("A member with the same label already exists for the truss")
    elif self._nodes_occupied.get(tuple([start, end])):
        raise ValueError("A member already exists between the two nodes")
    else:
        self._members[label] = [start, end]
        self._nodes_occupied[start, end] = True
        self._nodes_occupied[end, start] = True
        self._internal_forces[label] = 0
115
truss.py
Python
sympy/physics/continuum_mechanics/truss.py
73b2975a89b45ef437f11b697d39796f755a856b
sympy
6
267,999
7
6
6
26
5
1
7
20
is_managed
ansible-test - Use more native type hints. (#78435) * ansible-test - Use more native type hints. Simple search and replace to switch from comments to native type hints for return types of functions with no arguments. * ansible-test - Use more native type hints. Conversion of simple single-line function annotation type comments to native type hints. * ansible-test - Use more native type hints. Conversion of single-line function annotation type comments with default values to native type hints. * ansible-test - Use more native type hints. Manual conversion of type annotation comments for functions which have pylint directives.
https://github.com/ansible/ansible.git
def is_managed(self) -> bool:
    return False


@dataclasses.dataclass
@dataclasses.dataclass
10
host_configs.py
Python
test/lib/ansible_test/_internal/host_configs.py
3eb0485dd92c88cc92152d3656d94492db44b183
ansible
1
176,981
13
12
4
69
9
1
13
28
all_triads
Add docstring examples for triads functions (#5522) Adds docstring examples to the functions in the triads module as well as some additional explanatory text + links to other examples. Co-authored-by: Ross Barnowski <rossbar@berkeley.edu> Co-authored-by: Mridul Seth <mail@mriduls.com>
https://github.com/networkx/networkx.git
def all_triads(G):
    triplets = combinations(G.nodes(), 3)
    for triplet in triplets:
        yield G.subgraph(triplet).copy()


@not_implemented_for("undirected")
@not_implemented_for("undirected")
34
triads.py
Python
networkx/algorithms/triads.py
db35812af218482b0ddf9ca47e4792e47e4d4666
networkx
2
322,189
59
18
28
289
19
0
91
495
_concat_short_text_reuslts
Update neural search readme and Add Paddle Serving Support (#1558) * add recall inference similarity * update examples * updatea readme * update dir name * update neural search readme * update milvus readme * update domain adaptive pretraining readme * fix the mistakes * update readme * add recall Paddle Serving Support * update readme * update readme and format the code * reformat the files * move the files * reformat the code * remove redundant code Co-authored-by: Zeyu Chen <chenzeyu01@baidu.com> Co-authored-by: tianxin <tianxin04@baidu.com>
https://github.com/PaddlePaddle/PaddleNLP.git
def _concat_short_text_reuslts(self, input_texts, results):
    long_text_lens = [len(text) for text in input_texts]
    concat_results = []
    single_results = {}
    count = 0
    for text in input_texts:
        text_len = len(text)
        while True:
            if len(single_results) == 0 or len(single_results["text"]) < text_len:
                if len(single_results) == 0:
                    single_results = copy.deepcopy(results[count])
                else:
                    single_results["text"] += results[count]["text"]
                    single_results["items"].extend(results[count]["items"])
                count += 1
            elif len(single_results["text"]) == text_len:
                concat_results.append(single_results)
                single_results = {}
                break
            else:
                raise Exception(
                    "The length of input text and raw text is not equal.")
    for result in concat_results:
        pred_words = result['items']
        pred_words = self._reset_offset(pred_words)
        result['items'] = pred_words
    return concat_results
172
knowledge_mining.py
Python
paddlenlp/taskflow/knowledge_mining.py
621357338437ee420eabbbf5ab19065bc85e73a5
PaddleNLP
9
104,393
7
10
2
46
6
0
7
21
from_pydict
Update docs to new frontend/UI (#3690) * WIP: update docs to new UI * make style * Rm unused * inject_arrow_table_documentation __annotations__ * hasattr(arrow_table_method, "__annotations__") * Update task_template.rst * Codeblock PT-TF-SPLIT * Convert loading scripts * Convert docs to mdx * Fix mdx * Add <Tip> * Convert mdx tables * Fix codeblock * Rm unneded hashlinks * Update index.mdx * Redo dev change * Rm circle ci `build_doc` & `deploy_doc` * Rm unneeded files * Update docs reamde * Standardize to `Example::` * mdx logging levels doc * Table properties inject_arrow_table_documentation * ``` to ```py mdx * Add Tips mdx * important,None -> <Tip warning={true}> * More misc * Center imgs * Update instllation page * `setup.py` docs section * Rm imgs since they are in hf.co * Update docs/source/access.mdx Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update index mdx * Update docs/source/access.mdx Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * just `Dataset` obj * Addedversion just italics * Update ReadInstruction doc example syntax * Change docstring for `prepare_for_task` * Chore * Remove `code` syntax from headings * Rm `code` syntax from headings * Hashlink backward compatability * S3FileSystem doc * S3FileSystem doc updates * index.mdx updates * Add darkmode gifs * Index logo img css classes * Index mdx dataset logo img size * Docs for DownloadMode class * Doc DownloadMode table * format docstrings * style * Add doc builder scripts (#3790) * add doc builder scripts * fix docker image * Docs new UI actions no self hosted (#3793) * No self hosted * replace doc injection by actual docstrings * Docstring formatted Co-authored-by: Quentin Lhoest <lhoest.q@gmail.com> Co-authored-by: Mishig Davaadorj <dmishig@gmail.com> Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr> Co-authored-by: Mishig Davaadorj <dmishig@gmail.com> * Rm notebooks from docs actions since they dont exi * Update tsting branch * More docstring * Chore * bump up node version * bump up node * ``` -> ```py for audio_process.mdx * Update .github/workflows/build_documentation.yml Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com> * Uodate dev doc build * remove run on PR * fix action * Fix gh doc workflow * forgot this change when merging master * Update build doc Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> Co-authored-by: Quentin Lhoest <lhoest.q@gmail.com> Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com> Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
https://github.com/huggingface/datasets.git
def from_pydict(cls, *args, **kwargs): return cls(pa.Table.from_pydict(*args, **kwargs))
28
table.py
Python
src/datasets/table.py
e35be138148333078284b942ccc9ed7b1d826f97
datasets
1
304,549
19
11
7
62
6
0
19
73
start
Improve type hint in eddystone sensor entity (#77135)
https://github.com/home-assistant/core.git
def start(self) -> None:
    if not self.scanning:
        self.scanner.start()
        self.scanning = True
    else:
        _LOGGER.debug("start() called, but scanner is already running")
34
sensor.py
Python
homeassistant/components/eddystone_temperature/sensor.py
5cb91d7cefe75523a568a265ca76be36347fc9d1
core
2
199,471
24
11
6
90
14
0
26
83
parallel_axis
Add optional frame argument to parallel axis method
https://github.com/sympy/sympy.git
def parallel_axis(self, point, frame=None):
    # circular import issue
    from sympy.physics.mechanics.functions import inertia_of_point_mass
    if frame is None:
        frame = self.frame
    return self.central_inertia.express(frame) + inertia_of_point_mass(
        self.mass, self.masscenter.pos_from(point), frame)
59
rigidbody.py
Python
sympy/physics/mechanics/rigidbody.py
801e149d69d5f88919a735f8b55b6024f97c6950
sympy
2
172,806
19
17
6
206
19
1
25
47
adv_search_text
Better epub cover parsing with multiple cover-image items Code cosmetics renamed variables refactored xml page generation refactored prepare author
https://github.com/janeczku/calibre-web.git
def adv_search_text(q, include_inputs, exclude_inputs, data_value):
    for inp in include_inputs:
        q = q.filter(db.Books.data.any(data_value == inp))
    for excl in exclude_inputs:
        q = q.filter(not_(db.Books.data.any(data_value == excl)))
    return q
'''
64
web.py
Python
cps/web.py
4545f4a20d9ff90b99bbd4e3e34b6de4441d6367
calibre-web
3
181,657
14
10
4
60
7
0
15
27
test_auto_detect_categorical
Revert "Deployed 7ccda9a with MkDocs version: 1.3.0" This reverts commit bd9629c40e01241766197119b581a99409b07068.
https://github.com/EpistasisLab/tpot.git
def test_auto_detect_categorical():
    selected = auto_select_categorical_features(iris_data[0:16, :], threshold=10)
    expected = [False, False, True, True]
    assert_equal(selected, expected)
39
one_hot_encoder_tests.py
Python
tests/one_hot_encoder_tests.py
388616b6247ca4ea8de4e2f340d6206aee523541
tpot
1
287,891
35
11
13
182
12
0
65
112
test_probability_updates
Fix Bayesian sensor to use negative observations (#67631) Co-authored-by: Diogo Gomes <diogogomes@gmail.com>
https://github.com/home-assistant/core.git
async def test_probability_updates(hass):
    prob_given_true = [0.3, 0.6, 0.8]
    prob_given_false = [0.7, 0.4, 0.2]
    prior = 0.5

    for p_t, p_f in zip(prob_given_true, prob_given_false):
        prior = bayesian.update_probability(prior, p_t, p_f)

    assert round(abs(0.720000 - prior), 7) == 0

    prob_given_true = [0.8, 0.3, 0.9]
    prob_given_false = [0.6, 0.4, 0.2]
    prior = 0.7

    for p_t, p_f in zip(prob_given_true, prob_given_false):
        prior = bayesian.update_probability(prior, p_t, p_f)

    assert round(abs(0.9130434782608695 - prior), 7) == 0
156
test_binary_sensor.py
Python
tests/components/bayesian/test_binary_sensor.py
49eeeae51da329284070eb7b91ed6cc8078d2f19
core
3
337,136
17
14
11
107
17
0
19
120
embed_text
Add Stable Diffusion Interpolation Example (#862) * :sparkles: Add Stable Diffusion Interpolation Example * :lipstick: style * Update examples/community/interpolate_stable_diffusion.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
https://github.com/huggingface/diffusers.git
def embed_text(self, text):
    text_input = self.tokenizer(
        text,
        padding="max_length",
        max_length=self.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    )
    with torch.no_grad():
        embed = self.text_encoder(text_input.input_ids.to(self.device))[0]
    return embed
66
interpolate_stable_diffusion.py
Python
examples/community/interpolate_stable_diffusion.py
ee9875ee9beff8e3acbb0944191fafef9b15607f
diffusers
1
210,653
57
11
17
191
20
0
72
171
check_version
Clean fluid (#6075) * clean fluid * mv static to legacy * remove yolo box * revert legacy dir * revert static link * update in_dynamic_mode * clean iou_similarity, collect_fpn_proposals, bipartite_match
https://github.com/PaddlePaddle/PaddleDetection.git
def check_version(version='2.0'):
    err = "PaddlePaddle version {} or higher is required, " \
          "or a suitable develop version is satisfied as well. \n" \
          "Please make sure the version is good with your code.".format(version)

    version_installed = [
        paddle_version.major, paddle_version.minor, paddle_version.patch,
        paddle_version.rc
    ]

    if version_installed == ['0', '0', '0', '0']:
        return

    version_split = version.split('.')
    length = min(len(version_installed), len(version_split))
    for i in six.moves.range(length):
        if version_installed[i] > version_split[i]:
            return
        if version_installed[i] < version_split[i]:
            raise Exception(err)
115
check.py
Python
ppdet/utils/check.py
c103d0250719ec7914cbd9978ee40f60026de0c8
PaddleDetection
5
80,143
28
14
16
277
16
0
28
168
test_combine_multiple_types_to_streamblock
Add tests for streamfield migration helpers Currently failing due to wagtail-factories being broken on Wagtail 4.1: https://github.com/wagtail/wagtail-factories/issues/65
https://github.com/wagtail/wagtail.git
def test_combine_multiple_types_to_streamblock(self):
    altered_raw_data = apply_changes_to_raw_data(
        raw_data=self.raw_data,
        block_path_str="",
        operation=StreamChildrenToStreamBlockOperation(
            block_names=["char1", "char2"], stream_block_name="stream1"
        ),
        streamfield=models.SampleModel.content,
    )

    self.assertEqual(len(altered_raw_data), 1)
    self.assertEqual(altered_raw_data[0]["type"], "stream1")
    self.assertEqual(len(altered_raw_data[0]["value"]), 4)
    self.assertEqual(altered_raw_data[0]["value"][0], self.raw_data[0])
    self.assertEqual(altered_raw_data[0]["value"][1], self.raw_data[1])
    self.assertEqual(altered_raw_data[0]["value"][2], self.raw_data[2])
    self.assertEqual(altered_raw_data[0]["value"][3], self.raw_data[3])
176
test_simple_structures.py
Python
wagtail/tests/streamfield_migrations/test_simple_structures.py
ad65741b94f36fbe793cf15f0ab002482070cdb6
wagtail
1
322,613
54
17
27
297
26
0
77
565
_postprocess
Update Taskflow word_segmentation and ner tasks (#1666) * Add AutoSplitter & AutoJoiner * codestyle fix * unify auto joiner * add comments * add sentence split mode * update params * add paddle version check * add wordtag for word_segmentation * add wordtag for word_segmentation * add ner-lac and word_segmentation-jieba * add return entities only for ner * fix ci * fix ci * fix ci * fix ci * fix ci * Update README.md * Update README.md * Update README.md * Update README.md * Update README.md * Update README.md * Update README.md * Update README.md * Update README.md * Update README.md * Update README.md * Update README.md * fix bugs of dataloader * remove guard * use fast mode for rnn example * Update README.md * Update README.md
https://github.com/PaddlePaddle/PaddleNLP.git
def _postprocess(self, inputs):
    results = []
    for examples, texts, temp_results in zip(inputs['batch_examples'],
                                             inputs['batch_texts'],
                                             inputs['batch_results']):
        for i in range(len(examples)):
            result = {}
            det_pred, char_preds, length = temp_results[i]
            pred_result = self._parse_decode(texts[i], char_preds, det_pred,
                                             length)
            result['source'] = texts[i]
            result['target'] = ''.join(pred_result)
            results.append(result)
    results = self._auto_joiner(results, self.input_mapping, is_dict=True)
    for result in results:
        errors_result = []
        for i, (source_token, target_token
                ) in enumerate(zip(result['source'], result['target'])):
            if source_token != target_token:
                errors_result.append({
                    'position': i,
                    'correction': {
                        source_token: target_token
                    }
                })
        result['errors'] = errors_result
    return results
186
text_correction.py
Python
paddlenlp/taskflow/text_correction.py
1e2ee01dade0d4076ba98aa613c3eb150c615abb
PaddleNLP
6
274,458
31
14
12
107
15
0
37
117
_has_kwargs
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def _has_kwargs(fn):
    if isinstance(fn, functools.partial):
        fn = fn.func
    elif _is_callable_object(fn):
        fn = fn.__call__
    elif not callable(fn):
        raise TypeError(
            "fn should be a function-like object, but is of type {}.".format(
                type(fn)
            )
        )
    return tf_inspect.getfullargspec(fn).varkw is not None
64
variable_scope_shim.py
Python
keras/legacy_tf_layers/variable_scope_shim.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
4
89,082
38
12
18
349
32
0
63
209
test_get_replays_require_duration
Revert "feat(replays): remove 5 second duration condition on replay snuba query (#41799)" This reverts commit d298d5bd35c3bdad55c70b05253a6c3ce9dd8e15. Co-authored-by: JoshFerge <1976777+JoshFerge@users.noreply.github.com>
https://github.com/getsentry/sentry.git
def test_get_replays_require_duration(self):
    project = self.create_project(teams=[self.team])

    replay1_id = uuid.uuid4().hex
    replay2_id = uuid.uuid4().hex
    replay1_timestamp0 = datetime.datetime.now() - datetime.timedelta(seconds=15)
    replay1_timestamp1 = datetime.datetime.now() - datetime.timedelta(seconds=10)
    replay2_timestamp0 = datetime.datetime.now() - datetime.timedelta(seconds=10)
    replay2_timestamp1 = datetime.datetime.now() - datetime.timedelta(seconds=6)

    self.store_replays(mock_replay(replay1_timestamp0, project.id, replay1_id))
    self.store_replays(mock_replay(replay1_timestamp1, project.id, replay1_id))
    self.store_replays(mock_replay(replay2_timestamp0, project.id, replay2_id))
    self.store_replays(mock_replay(replay2_timestamp1, project.id, replay2_id))

    with self.feature(REPLAYS_FEATURES):
        response = self.client.get(self.url)
        assert response.status_code == 200

        response_data = response.json()
        assert "data" in response_data
        assert len(response_data["data"]) == 1
217
test_organization_replay_index.py
Python
tests/sentry/replays/test_organization_replay_index.py
6cbcf1a86c318f808ee2a881b728cc49bcf2143a
sentry
1
154,056
20
13
4
81
12
0
21
54
_applymap
FEAT-#4147: Add partial compatibility with Python 3.6 and pandas 1.1 (#4301) Signed-off-by: Devin Petersohn <devin.petersohn@gmail.com> Signed-off-by: Vasily Litvinov <fam1ly.n4me@yandex.ru> Co-authored-by: Alexey Prutskov <lehaprutskov@gmail.com> Co-authored-by: Rehan Durrani <rehan@ponder.io> Co-authored-by: Igoshev, Yaroslav <Poolliver868@mail.ru> Co-authored-by: Myachev, Anatoly <anatoly.myachev@intel.com>
https://github.com/modin-project/modin.git
def _applymap(self, func, **kwargs):  # noqa: PR01, RT01, D200
    if not callable(func):
        raise ValueError("'{0}' object is not callable".format(type(func)))
    return DataFrame(query_compiler=self._query_compiler.applymap(func, **kwargs))
48
dataframe.py
Python
modin/pandas/dataframe.py
6ce9cf4daec7f9996038205289bce2186be87611
modin
2
20,183
5
8
2
30
3
0
5
19
_supported_features
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def _supported_features(self): return self._call_hook('_supported_features', {})
16
wrappers.py
Python
pipenv/patched/notpip/_vendor/pep517/wrappers.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
1
274,619
7
8
2
37
4
0
7
21
get_config
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def get_config(self): return {"name": self.name, "dtype": self.dtype}
20
base_metric.py
Python
keras/metrics/base_metric.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
1
155,200
12
10
6
55
8
0
14
60
force_materialization
FEAT-#5053: Add pandas on unidist execution with MPI backend (#5059) Signed-off-by: Igoshev, Iaroslav <iaroslav.igoshev@intel.com>
https://github.com/modin-project/modin.git
def force_materialization(self, get_ip=False):
    materialized = super(
        PandasOnUnidistDataframeVirtualPartition, self
    ).force_materialization(get_ip=get_ip)
    self._list_of_block_partitions = materialized.list_of_block_partitions
    return materialized
34
virtual_partition.py
Python
modin/core/execution/unidist/implementations/pandas_on_unidist/partitioning/virtual_partition.py
193505fdf0c984743397ba3df56262f30aee13a8
modin
1
300,171
7
9
2
32
6
0
7
21
set_source
Add ws66i core integration (#56094) * Add ws66i core integration * Remove all ws66i translations * Update ws66i unit tests to meet minimum code coverage * Update ws66i based on @bdraco review * General improvements after 2nd PR review * Disable entities if amp shutoff, set default source names, set 30sec polling * Add _attr_ and change async_on_unload * Improve entity generation * Implement coordinator * Made options fields required, retry connection on failed attempts, use ZoneStatus for attributes * Refactor WS66i entity properties, raise HomeAssistantError on restore service if no snapshot * Update to pyws66i v1.1 * Add quality scale of silver to manifest * Update config_flow test
https://github.com/home-assistant/core.git
def set_source(self, zone_id, source_idx): self.zones[zone_id].source = source_idx
20
test_media_player.py
Python
tests/components/ws66i/test_media_player.py
5e737bfe4fbc5a724f5fdf04ea9319c2224cb114
core
1
289,302
23
14
15
303
6
0
45
186
extra_state_attributes
Move attribution to standalone attribute [m-q] (#80518)
https://github.com/home-assistant/core.git
def extra_state_attributes(self):
    attr = {}
    if self.data is None:
        return attr
    if self.data["hi_lo"][1] == "H":
        attr["high_tide_time"] = self.data.index[1].strftime("%Y-%m-%dT%H:%M")
        attr["high_tide_height"] = self.data["predicted_wl"][1]
        attr["low_tide_time"] = self.data.index[2].strftime("%Y-%m-%dT%H:%M")
        attr["low_tide_height"] = self.data["predicted_wl"][2]
    elif self.data["hi_lo"][1] == "L":
        attr["low_tide_time"] = self.data.index[1].strftime("%Y-%m-%dT%H:%M")
        attr["low_tide_height"] = self.data["predicted_wl"][1]
        attr["high_tide_time"] = self.data.index[2].strftime("%Y-%m-%dT%H:%M")
        attr["high_tide_height"] = self.data["predicted_wl"][2]
    return attr
175
sensor.py
Python
homeassistant/components/noaa_tides/sensor.py
6b256bab227346bdd6d0ae871855b70ebefbba01
core
4
252,120
82
10
88
302
9
0
134
758
load
Support specifying the local address for outgoing connections. (#5366) * allow sockname to specify local-addr * set local_addr via command line * minor fix for reconfig * minor rewording Co-authored-by: Maximilian Hils <github@maximilianhils.com>
https://github.com/mitmproxy/mitmproxy.git
def load(self, loader):
    loader.add_option(
        "connection_strategy",
        str,
        "eager",
        "Determine when server connections should be established. When set to lazy, mitmproxy "
        "tries to defer establishing an upstream connection as long as possible. This makes it possible to "
        "use server replay while being offline. When set to eager, mitmproxy can detect protocols with "
        "server-side greetings, as well as accurately mirror TLS ALPN negotiation.",
        choices=("eager", "lazy"),
    )
    loader.add_option(
        "stream_large_bodies",
        Optional[str],
        None,
        ,
    )
    loader.add_option(
        "body_size_limit",
        Optional[str],
        None,
        ,
    )
    loader.add_option(
        "keep_host_header",
        bool,
        False,
        ,
    )
    loader.add_option(
        "proxy_debug",
        bool,
        False,
        "Enable debug logs in the proxy core.",
    )
    loader.add_option(
        "normalize_outbound_headers",
        bool,
        True,
        ,
    )
    loader.add_option(
        "validate_inbound_headers",
        bool,
        True,
        ,
    )
    loader.add_option(
        "connect_addr",
        Optional[str],
        None,
        ,
    )
    loader.add_option(
        "dns_server",
        bool,
        False,
    )
    loader.add_option(
        "dns_listen_host",
        str,
        "",
    )
    loader.add_option("dns_listen_port", int, 53, )
    loader.add_option(
        "dns_mode",
        str,
        "regular",
        ,
    )
180
proxyserver.py
Python
mitmproxy/addons/proxyserver.py
8fce7c7fa3be59f5760653e9e6daccee7f13cee9
mitmproxy
1
153,659
64
12
23
133
16
0
85
244
_is_zero_copy_arrow_op
FEAT-#4244: Implement dataframe exchange protocol for OmniSci (#4269) Co-authored-by: Yaroslav Igoshev <Poolliver868@mail.ru> Co-authored-by: Vasily Litvinov <vasilij.n.litvinov@intel.com> Signed-off-by: Dmitry Chigarev <dmitry.chigarev@intel.com>
https://github.com/modin-project/modin.git
def _is_zero_copy_arrow_op(cls, op) -> bool:
    is_zero_copy_op = False
    if isinstance(op, (FrameNode, TransformNode, UnionNode)):
        # - FrameNode: already materialized PyArrow table
        # - TransformNode: select certain columns of the table, implemented zero-copy (``df._arrow_select``)
        # - UnionNode: concatenate PyArrow tables, implemented zero-copy (``df._arrow_concat``)
        is_zero_copy_op = True
    elif isinstance(op, MaskNode) and (
        isinstance(op.row_positions, slice) or is_range_like(op.row_positions)
    ):
        # Can select rows zero-copy if indexer is a slice-like (``df._arrow_row_slice``)
        is_zero_copy_op = True
    return is_zero_copy_op and all(
        # Walk the computation tree
        cls._is_zero_copy_arrow_op(_op)
        for _op in getattr(op, "inputs", [])
    )
83
dataframe.py
Python
modin/experimental/core/execution/native/implementations/omnisci_on_native/exchange/dataframe_protocol/dataframe.py
0c1a2129df64cf45bf1ff49c8ed92c510fdb1c82
modin
7
189,596
73
15
18
222
19
0
110
349
project_points
[pre-commit.ci] pre-commit autoupdate (#2520) * [pre-commit.ci] pre-commit autoupdate updates: - [github.com/psf/black: 21.12b0 → 22.1.0](https://github.com/psf/black/compare/21.12b0...22.1.0) - [github.com/asottile/blacken-docs: v1.12.0 → v1.12.1](https://github.com/asottile/blacken-docs/compare/v1.12.0...v1.12.1) * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Darylgolden <darylgolden@gmail.com>
https://github.com/ManimCommunity/manim.git
def project_points(self, points):
    frame_center = self.frame_center
    focal_distance = self.get_focal_distance()
    zoom = self.get_zoom()
    rot_matrix = self.get_rotation_matrix()

    points = points - frame_center
    points = np.dot(points, rot_matrix.T)
    zs = points[:, 2]
    for i in 0, 1:
        if self.exponential_projection:
            # Proper projection would involve multiplying
            # x and y by d / (d-z). But for points with high
            # z value that causes weird artifacts, and applying
            # the exponential helps smooth it out.
            factor = np.exp(zs / focal_distance)
            lt0 = zs < 0
            factor[lt0] = focal_distance / (focal_distance - zs[lt0])
        else:
            factor = focal_distance / (focal_distance - zs)
            factor[(focal_distance - zs) < 0] = 10**6
        points[:, i] *= factor * zoom
    return points
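A minimal standalone sketch of the perspective scaling this record implements: x and y are multiplied by focal_distance / (focal_distance - z). The array values and the focal distance below are illustrative, not taken from the record.

import numpy as np

focal_distance = 10.0                     # illustrative value
points = np.array([[1.0, 2.0, 0.0],
                   [1.0, 2.0, 5.0]])      # (x, y, z) rows
zs = points[:, 2]
factor = focal_distance / (focal_distance - zs)
projected = points.copy()
projected[:, 0] *= factor                 # scale x
projected[:, 1] *= factor                 # scale y
# The z=5 point is scaled by 10 / (10 - 5) = 2, the z=0 point is unchanged.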
139
three_d_camera.py
Python
manim/camera/three_d_camera.py
b26137e9b62666e3f0bba0d18a72399077d3dbb6
manim
3
20,286
18
13
4
64
8
0
19
62
format
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def format(self, tokensource, outfile):
    if self.encoding:
        # wrap the outfile in a StreamWriter
        outfile = codecs.lookup(self.encoding)[3](outfile)
    return self.format_unencoded(tokensource, outfile)
40
formatter.py
Python
pipenv/patched/notpip/_vendor/pygments/formatter.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
2
156,735
15
8
3
72
9
0
16
37
any
Don't include docs in ``Array`` methods, just refer to module docs (#9244) Co-authored-by: James Bourbeau <jrbourbeau@users.noreply.github.com>
https://github.com/dask/dask.git
def any(self, axis=None, keepdims=False, split_every=None, out=None):
    from dask.array.reductions import any

    return any(self, axis=axis, keepdims=keepdims, split_every=split_every, out=out)
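A hedged usage sketch of the reduction this record forwards to; it assumes a recent dask[array] install and uses made-up data.

import dask.array as da
import numpy as np

x = da.from_array(np.array([[0, 0], [0, 3]]), chunks=1)
print(x.any().compute())          # True: at least one non-zero element
print(x.any(axis=0).compute())    # array([False,  True])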
51
core.py
Python
dask/array/core.py
2820bae493a49cb1d0a6e376985c5473b8f04fa8
dask
1
247,538
33
11
16
115
11
0
34
164
test_blacklisted_ip_specific_direct
Add type hints to `tests/rest`. (#12208) Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
https://github.com/matrix-org/synapse.git
def test_blacklisted_ip_specific_direct(self) -> None:
    channel = self.make_request(
        "GET", "preview_url?url=http://192.168.1.1", shorthand=False
    )

    # No requests made.
    self.assertEqual(len(self.reactor.tcpClients), 0)
    self.assertEqual(
        channel.json_body,
        {
            "errcode": "M_UNKNOWN",
            "error": "IP address blocked by IP blacklist entry",
        },
    )
    self.assertEqual(channel.code, 403)
67
test_url_preview.py
Python
tests/rest/media/v1/test_url_preview.py
32c828d0f760492711a98b11376e229d795fd1b3
synapse
1
37,319
16
11
6
86
8
0
30
76
build_inputs_with_special_tokens
add DebertaV2 fast tokenizer (#15529) Co-authored-by: alcinos <carion.nicolas@gmail.com> Co-authored-by: SaulLu <55560583+SaulLu@users.noreply.github.com> Co-authored-by: Nicolas Carion <carion.nicolas@gmail.com> Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
https://github.com/huggingface/transformers.git
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
    if token_ids_1 is None:
        return [self.cls_token_id] + token_ids_0 + [self.sep_token_id]
    cls = [self.cls_token_id]
    sep = [self.sep_token_id]
    return cls + token_ids_0 + sep + token_ids_1 + sep
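A standalone sketch of the sequence-pair layout built above, using made-up token ids (101 for [CLS], 102 for [SEP]) rather than the tokenizer's real vocabulary.

cls_id, sep_id = 101, 102      # illustrative ids
tokens_a = [7, 8, 9]
tokens_b = [20, 21]

single = [cls_id] + tokens_a + [sep_id]
pair = [cls_id] + tokens_a + [sep_id] + tokens_b + [sep_id]
print(single)  # [101, 7, 8, 9, 102]
print(pair)    # [101, 7, 8, 9, 102, 20, 21, 102]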
55
tokenization_deberta_v2_fast.py
Python
src/transformers/models/deberta_v2/tokenization_deberta_v2_fast.py
ff06b177917384137af2d9585697d2d76c40cdfc
transformers
2
32,946
29
11
10
145
11
0
45
91
speed_metrics
Fix docstrings with last version of hf-doc-builder styler (#18581) * Fix docstrings with last version of hf-doc-builder styler * Remove empty Parameter block
https://github.com/huggingface/transformers.git
def speed_metrics(split, start_time, num_samples=None, num_steps=None):
    runtime = time.time() - start_time
    result = {f"{split}_runtime": round(runtime, 4)}
    if num_samples is not None:
        samples_per_second = num_samples / runtime
        result[f"{split}_samples_per_second"] = round(samples_per_second, 3)
    if num_steps is not None:
        steps_per_second = num_steps / runtime
        result[f"{split}_steps_per_second"] = round(steps_per_second, 3)
    return result
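A hypothetical call of the helper above, assuming it is in scope; the timings are invented and only show the shape of the returned dict.

import time

start = time.time()
time.sleep(0.5)   # stand-in for evaluation work
metrics = speed_metrics("eval", start, num_samples=100, num_steps=25)
# e.g. {'eval_runtime': 0.5003,
#       'eval_samples_per_second': 199.876,
#       'eval_steps_per_second': 49.969}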
86
trainer_utils.py
Python
src/transformers/trainer_utils.py
c23cbdff4c097d3f3039999827a675cf8f06a32e
transformers
3
88,978
14
11
8
89
4
1
17
67
latest_release_only
feat(metrics): Add static time to adoption to latest release bias (#41635)
https://github.com/getsentry/sentry.git
def latest_release_only(default_project):
    default_project.update_option(
        "sentry:dynamic_sampling_biases",
        [
            {"id": "boostEnvironments", "active": False},
            {"id": "ignoreHealthChecks", "active": False},
        ],
    )


@patch("sentry.dynamic_sampling.rules_generator.sentry_sdk")
@patch("sentry.dynamic_sampling.rules_generator.quotas.get_blended_sample_rate")
@patch("sentry.dynamic_sampling.rules_generator.sentry_sdk") @patch("sentry.dynamic_sampling.rules_generator.quotas.get_blended_sample_rate")
36
test_generate_rules.py
Python
tests/sentry/dynamic_sampling/test_generate_rules.py
f8254cc58d5a8eaf0c7613e5f90c4242e4d5e5f1
sentry
1
101,320
14
11
10
56
10
0
15
43
completed
bugfix - timelapse image loader multithreading.py - typing + docs
https://github.com/deepfakes/faceswap.git
def completed(self) -> bool:
    retval = all(not thread.is_alive() for thread in self._threads)
    logger.debug(retval)
    return retval
33
multithreading.py
Python
lib/multithreading.py
326110f09d45dbdce2e490fa1ae4b1208e5efe2c
faceswap
2
3,825
24
11
12
152
12
0
37
133
test_state
🎉 🎉 Source FB Marketing: performance and reliability fixes (#9805) * Facebook Marketing performance improvement * add comments and little refactoring * fix integration tests with the new config * improve job status handling, limit concurrency to 10 * fix campaign jobs, refactor manager * big refactoring of async jobs, support random order of slices * update source _read_incremental to hook new state logic * fix issues with timeout * remove debugging and clean up, improve retry logic * merge changes from #8234 * fix call super _read_increment * generalize batch execution, add use_batch flag * improve coverage, do some refactoring of spec * update test, remove overrides of source * add split by AdSet * add smaller insights * fix end_date < start_date case * add account_id to PK * add notes * fix new streams * fix reversed incremental stream * update spec.json for SAT * upgrade CDK and bump version Co-authored-by: Dmytro Rezchykov <dmitry.rezchykov@zazmic.com> Co-authored-by: Eugene Kulak <kulak.eugene@gmail.com>
https://github.com/airbytehq/airbyte.git
def test_state(self, api, state):
    stream = AdsInsights(
        api=api,
        start_date=datetime(2010, 1, 1),
        end_date=datetime(2011, 1, 1),
    )

    assert stream.state == {}

    stream.state = state
    actual_state = stream.state
    actual_state["slices"] = sorted(actual_state.get("slices", []))
    state["slices"] = sorted(state.get("slices", []))

    assert actual_state == state
96
test_base_insight_streams.py
Python
airbyte-integrations/connectors/source-facebook-marketing/unit_tests/test_base_insight_streams.py
a3aae8017a0a40ff2006e2567f71dccb04c997a5
airbyte
1
195,069
34
13
13
184
31
0
40
171
build_model
Extend BERT-based classification with customized layers (#4553) * Extend BERT-based classification with customized layers * fix bugs and add tests * increase lr to improve training stability * upgrading torch version * adjusting loss value
https://github.com/facebookresearch/ParlAI.git
def build_model(self):
    num_classes = len(self.class_list)
    bert_model = BertModel.from_pretrained(self.pretrained_path)
    if self.classifier_layers is not None:
        prev_dimension = bert_model.embeddings.word_embeddings.weight.size(1)
        layers, dims = self._get_layer_parameters(
            prev_dimension=prev_dimension, output_dimension=num_classes
        )
        decoders = torch.nn.Sequential()
        for l, d in zip(layers, dims):
            decoders.append(self._map_layer(l, d))
        return BertWrapper(bert_model=bert_model, classifier_layer=decoders)
    return BertWrapper(bert_model=bert_model, output_dim=num_classes)
118
bert_classifier.py
Python
parlai/agents/bert_classifier/bert_classifier.py
1628d8c87e6f13e3f73d2e59acc7bd40770b9943
ParlAI
3
107,706
13
10
6
62
7
0
19
69
set_family
Simplify FontProperties init. The various setters already set the None attributes to the rcParams values; no need to do this twice in `__init__`. Also inline the now-single-use _normalize_font_family, move the aliases all to a single block, and prefer get_size to get_size_in_points.
https://github.com/matplotlib/matplotlib.git
def set_family(self, family):
    if family is None:
        family = rcParams['font.family']
    if isinstance(family, str):
        family = [family]
    self._family = family
37
font_manager.py
Python
lib/matplotlib/font_manager.py
40fdce6fa259aa9ca7e11bc1b74e6431dd418a52
matplotlib
3
126,860
43
15
25
180
29
0
57
352
get_cluster_status
[serve] Make serve agent not blocking when GCS is down. (#27526) This PR fixed several issue which block serve agent when GCS is down. We need to make sure serve agent is always alive and can make sure the external requests can be sent to the agent and check the status. - internal kv used in dashboard/agent blocks the agent. We use the async one instead - serve controller use ray.nodes which is a blocking call and blocking forever. change to use gcs client with timeout - agent use serve controller client which is a blocking call with max retries = -1. This blocks until controller is back. To enable Serve HA, we also need to setup: - RAY_gcs_server_request_timeout_seconds=5 - RAY_SERVE_KV_TIMEOUT_S=5 which we should set in KubeRay.
https://github.com/ray-project/ray.git
async def get_cluster_status(self, req):
    (legacy_status, formatted_status_string, error) = await asyncio.gather(
        *[
            self._gcs_aio_client.internal_kv_get(
                key.encode(), namespace=None, timeout=GCS_RPC_TIMEOUT_SECONDS
            )
            for key in [
                DEBUG_AUTOSCALING_STATUS_LEGACY,
                DEBUG_AUTOSCALING_STATUS,
                DEBUG_AUTOSCALING_ERROR,
            ]
        ]
    )

    formatted_status = (
        json.loads(formatted_status_string.decode())
        if formatted_status_string
        else {}
    )
    return dashboard_optional_utils.rest_response(
        success=True,
        message="Got cluster status.",
        autoscaling_status=legacy_status.decode() if legacy_status else None,
        autoscaling_error=error.decode() if error else None,
        cluster_status=formatted_status if formatted_status else None,
    )
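A self-contained sketch of the concurrency pattern used above: several lookups are issued with asyncio.gather and unpacked positionally. The fake key-value getter stands in for the GCS client, which is not available outside a Ray cluster.

import asyncio

async def fake_kv_get(key: str):
    await asyncio.sleep(0)   # pretend network round trip
    return f"value-for-{key}".encode()

async def main():
    status, error = await asyncio.gather(
        *[fake_kv_get(k) for k in ["autoscaler_status", "autoscaler_error"]]
    )
    print(status.decode(), error.decode())

asyncio.run(main())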
121
reporter_head.py
Python
dashboard/modules/reporter/reporter_head.py
dac7bf17d9214dd3b79238caf0c8ec76f40328c6
ray
6
263,957
31
8
2
52
7
0
40
68
_cleanup_bytecode_string
depend: simplify _cleanup_bytecode_string Use a regex to filter CACHE instructions from the bytecode.
https://github.com/pyinstaller/pyinstaller.git
def _cleanup_bytecode_string(bytecode):
    return _cache_instruction_filter.sub(rb"\2", bytecode)


# Python 3.11 removed CALL_FUNCTION and CALL_METHOD, and replaced them with PRECALL + CALL instructions.
# The CALL_FUNCTION_EX is still present.

# language=PythonVerboseRegExp
_call_function_bytecode = bytecode_regex(
    rb
)

# language=PythonVerboseRegExp
_extended_arg_bytecode = bytecode_regex(
    rb
)
15
bytecode.py
Python
PyInstaller/depend/bytecode.py
2a1397ad830b36801652242970eb9bae739c8f77
pyinstaller
1
57,780
7
9
7
36
6
0
7
13
Crashed
Add `Crashed` as canonical state type (PrefectHQ/orion#2353) Co-authored-by: Jeremiah Lowin <153965+jlowin@users.noreply.github.com>
https://github.com/PrefectHQ/prefect.git
def Crashed(**kwargs) -> State: return State(type=StateType.CRASHED, **kwargs)
21
states.py
Python
src/prefect/orion/schemas/states.py
907fc7c0a895b9f46484690be44a2a5660b320ee
prefect
1
141,995
9
9
5
59
9
1
10
24
short_path_dir
[runtime env] Skip content hash for unopenable files (#25413)
https://github.com/ray-project/ray.git
def short_path_dir():
    dir = Path("short_path")
    dir.mkdir()
    yield dir
    shutil.rmtree(str(dir))


@pytest.fixture
@pytest.fixture
27
test_runtime_env_packaging.py
Python
python/ray/tests/test_runtime_env_packaging.py
0d8cbb1cae4ad0da9e620fcf02b7650c8c9628cf
ray
1
300,983
11
9
13
46
9
0
13
25
test_fail_outdated_pgsql
Fail recorder setup with unsupported dialect or version (#70888)
https://github.com/home-assistant/core.git
def test_fail_outdated_pgsql(caplog, pgsql_version, message):
    instance_mock = MagicMock(_db_supports_row_number=True)
    execute_args = []
    close_mock = MagicMock()
67
test_util.py
Python
tests/components/recorder/test_util.py
037f6947d88f0754b15d156180cdffb053a25b1a
core
1
247,858
6
7
12
27
3
0
6
20
test_process_pulled_event_with_missing_state
Optimise `_get_state_after_missing_prev_event`: use `/state` (#12040) If we're missing most of the events in the room state, then we may as well call the /state endpoint, instead of individually requesting each and every event.
https://github.com/matrix-org/synapse.git
def test_process_pulled_event_with_missing_state(self) -> None: return self._test_process_pulled_event_with_missing_state(False)
15
test_federation_event.py
Python
tests/handlers/test_federation_event.py
9b43df1f7b2977431563b3cda8fed1ed879651ba
synapse
1
289,378
76
14
37
319
20
0
112
405
test_statistics_during_period
Ensure recorder test fixture is setup before hass fixture (#80528) * Ensure recorder test fixture is setup before hass fixture * Adjust more tests
https://github.com/home-assistant/core.git
async def test_statistics_during_period(recorder_mock, hass, hass_ws_client, caplog):
    now = dt_util.utcnow()

    await async_setup_component(hass, "history", {})
    client = await hass_ws_client()

    # Test the WS API works and issues a warning
    await client.send_json(
        {
            "id": 1,
            "type": "history/statistics_during_period",
            "start_time": now.isoformat(),
            "end_time": now.isoformat(),
            "statistic_ids": ["sensor.test"],
            "period": "hour",
        }
    )
    response = await client.receive_json()
    assert response["success"]
    assert response["result"] == {}

    assert (
        "WS API 'history/statistics_during_period' is deprecated and will be removed in "
        "Home Assistant Core 2022.12. Use 'recorder/statistics_during_period' instead"
    ) in caplog.text

    # Test the WS API forwards to recorder
    with patch(
        "homeassistant.components.history.recorder_ws.ws_handle_get_statistics_during_period",
        wraps=ws_handle_get_statistics_during_period,
    ) as ws_mock:
        await client.send_json(
            {
                "id": 2,
                "type": "history/statistics_during_period",
                "start_time": now.isoformat(),
                "end_time": now.isoformat(),
                "statistic_ids": ["sensor.test"],
                "period": "hour",
            }
        )
        await client.receive_json()
        ws_mock.assert_awaited_once()
173
test_init.py
Python
tests/components/history/test_init.py
31a787558fd312331b55e5c2c4b33341fc3601fc
core
1
192,312
29
13
7
127
24
0
32
93
test_audio_present_pts
Improve test_video_reader (#5498) * Improve test_video_reader * Fix linter error
https://github.com/pytorch/vision.git
def test_audio_present_pts(self, test_video, backend, start_offset, end_offset):
    full_path = os.path.join(VIDEO_DIR, test_video)
    container = av.open(full_path)
    if container.streams.audio:
        set_video_backend(backend)
        _, audio, _ = io.read_video(full_path, start_offset, end_offset, pts_unit="pts")
        assert all([dimension > 0 for dimension in audio.shape[:2]])
84
test_video_reader.py
Python
test/test_video_reader.py
c50d48845f7b1ca86d6a3b7f37a59be0ae11e36b
vision
3
128,304
10
9
5
44
4
0
11
43
resolved_filesystem
[Datasets] Add `partitioning` parameter to `read_` functions (#28413)
https://github.com/ray-project/ray.git
def resolved_filesystem(self) -> "pyarrow.fs.FileSystem":
    if self._resolved_filesystem is None:
        self._normalize_base_dir()
    return self._resolved_filesystem
24
partitioning.py
Python
python/ray/data/datasource/partitioning.py
c3ff77f5a13395631a2af580ea4429ceb5dfea13
ray
2
108,531
3
8
2
21
2
0
3
9
nipy_spectral
Cleanup documentation generation for pyplot - remove the awkward `pyplot.plotting()` function, which only served as a namespace to take up the docs for pyplot and output them via `.. autofunction` - Instead generate the same information using `.. autosummary::`. We have to list the desired methods here explicitly. I've added a test that these are the same as previously auto-generated in the `plotting()` docstring. If we change anything in pyplot, we'll be notified through the test failure that we have to adapt the autosummary list. - Removed the docstring generation logic `_setup_pyplot_info_docstrings()`. Apart from generating the `plotting()` docstring, this added docstrings to the pyplot colormap setters. Instead, we now add these docstrings directly via boilerplate.py Co-authored-by: Elliott Sales de Andrade <quantum.analyst@gmail.com>
https://github.com/matplotlib/matplotlib.git
def nipy_spectral(): set_cmap('nipy_spectral')
9
pyplot.py
Python
lib/matplotlib/pyplot.py
032316bc6c7798fca6c82de24167c975f237687f
matplotlib
1
95,662
14
11
7
69
10
0
14
75
test_orderby_2
feat(metrics): Support multi-field orderby for performance [INGEST-805] (#31162) * feat(metrics): Support metrics multi-field orderby queries Adds support for the performance table to the metrics organization data endpoint
https://github.com/getsentry/sentry.git
def test_orderby_2(self):
    response = self.get_response(
        self.project.organization.slug,
        field=["sum(sentry.sessions.session)", "count_unique(sentry.sessions.user)"],
        orderBy=["sum(sentry.sessions.session)"],
    )
    assert response.status_code == 200
41
test_organization_metrics.py
Python
tests/sentry/api/endpoints/test_organization_metrics.py
9af098891a8243d08ee5ab6e51925a082135e3f2
sentry
1
93,732
81
17
36
352
37
0
103
525
sync_status_outbound
ref(Jira): Split Jira Cloud and Jira Server (#37034) * Split Jira Cloud and Jira Server
https://github.com/getsentry/sentry.git
def sync_status_outbound(self, external_issue, is_resolved, project_id, **kwargs):
    client = self.get_client()
    jira_issue = client.get_issue(external_issue.key)
    jira_project = jira_issue["fields"]["project"]
    try:
        external_project = IntegrationExternalProject.objects.get(
            external_id=jira_project["id"],
            organization_integration_id__in=OrganizationIntegration.objects.filter(
                organization_id=external_issue.organization_id,
                integration_id=external_issue.integration_id,
            ),
        )
    except IntegrationExternalProject.DoesNotExist:
        return

    jira_status = (
        external_project.resolved_status
        if is_resolved
        else external_project.unresolved_status
    )

    # don't bother updating if it's already the status we'd change it to
    if jira_issue["fields"]["status"]["id"] == jira_status:
        return
    try:
        transitions = client.get_transitions(external_issue.key)
    except ApiHostError:
        raise IntegrationError("Could not reach host to get transitions.")

    try:
        transition = [t for t in transitions if t.get("to", {}).get("id") == jira_status][0]
    except IndexError:
        # TODO(jess): Email for failure
        logger.warning(
            "jira.status-sync-fail",
            extra={
                "organization_id": external_issue.organization_id,
                "integration_id": external_issue.integration_id,
                "issue_key": external_issue.key,
            },
        )
        return

    client.transition_issue(external_issue.key, transition["id"])
213
integration.py
Python
src/sentry/integrations/jira_server/integration.py
2fbf550ec05c8501cbc9eca62e73526e717dcbdf
sentry
8
282,077
35
11
21
136
23
0
39
242
call_cgglobal
Crypto features: Replace coingecko scrapping (#1156) * replaced cgcategories with api * added coingecko categories * refactoring commands to use api, added coins to cryptocontroller and merged find and coins * autocompletion for coins * removed unused vars * added dappradar features * refactoring commands position * refactoring commands position * adding visual commands and fixed report * skipped tests for now * lint notebook * correct report * black formatter keeps crying because notebook * removed unused imports * Fixed black * Keep kernel metadata 'cause it's required by papermill * Change jupyter cleanup hook to one based on nbconvert * Try fix the hook I just broke * Fix trailing commas in the crypto notebook * Change the jupyter hook to a one that's featured on pre-commit's page * Format report notebook and test new notebook hook * Black the notebook * Remove deleted functions from the crypto discovery API * Remove deleted functions from the crypto overview API * replaced print for console print and removed print from table * replaced print for console print and removed print from table * auto completion + sort for all discovery commands * replacing help messages * fix linting * added docs and removed unused commands * added todos and fixed help messages * lint * pr issues fixed * updated tests * tests merge * replaced with new rich table function Co-authored-by: Colin Delahunty <colin99delahunty@gmail.com> Co-authored-by: Theodore Aptekarev <aptekarev@gmail.com>
https://github.com/OpenBB-finance/OpenBBTerminal.git
def call_cgglobal(self, other_args):
    parser = argparse.ArgumentParser(
        prog="cgglobal",
        add_help=False,
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
        description=,
    )
    parser.add_argument(
        "--pie",
        action="store_true",
        help="Flag to show pie chart with market cap distribution",
        dest="pie",
        default=False,
    )
    ns_parser = parse_known_args_and_warn(
        parser, other_args, EXPORT_ONLY_RAW_DATA_ALLOWED
    )
    if ns_parser:
        pycoingecko_view.display_global_market_info(
            export=ns_parser.export, pie=ns_parser.pie
        )
85
overview_controller.py
Python
gamestonk_terminal/cryptocurrency/overview/overview_controller.py
4501dfd442d371150b8785d379c5354095b6954b
OpenBBTerminal
2
288,145
6
6
3
22
4
0
6
20
characteristic_uuid
Add ESPHome BleakClient (#78911) Co-authored-by: Paulus Schoutsen <balloob@gmail.com>
https://github.com/home-assistant/core.git
def characteristic_uuid(self) -> str: return self.__characteristic_uuid
12
descriptor.py
Python
homeassistant/components/esphome/bluetooth/descriptor.py
7042d6d35be54865b1252c0b28a50cce1a92eabc
core
1
301,332
43
14
13
160
24
1
45
175
async_press
ZHA Add entities for Lidl water valve quirk (#72307) * init * added timer number entity * added write attribute button entity * fixed missed errors * minor changes & fixed failing test * removed icon * unit and icons
https://github.com/home-assistant/core.git
async def async_press(self) -> None:
    try:
        result = await self._channel.cluster.write_attributes(
            {self._attribute_name: self._attribute_value}
        )
    except zigpy.exceptions.ZigbeeException as ex:
        self.error("Could not set value: %s", ex)
        return
    if not isinstance(result, Exception) and all(
        record.status == Status.SUCCESS for record in result[0]
    ):
        self.async_write_ha_state()


@CONFIG_DIAGNOSTIC_MATCH(
    channel_names="tuya_manufacturer",
    manufacturers={
        "_TZE200_htnnfasr",
    },
)
@CONFIG_DIAGNOSTIC_MATCH( channel_names="tuya_manufacturer", manufacturers={ "_TZE200_htnnfasr", }, )
81
button.py
Python
homeassistant/components/zha/button.py
db815a7504cae47cee7dc9906eae66cc0f0a9fd5
core
5
270,867
33
10
13
123
12
0
52
120
is_in_tf_function
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def is_in_tf_function():
    # Check if running in V1 graph mode.
    if not tf.compat.v1.executing_eagerly_outside_functions():
        return False
    if not tf.inside_function():
        return False
    # Check if inside Keras FuncGraph.
    if is_in_keras_graph():
        return False
    # Check for a v1 `wrap_function` FuncGraph.
    graph = tf.compat.v1.get_default_graph()
    if getattr(graph, "name", False) and graph.name.startswith(
        "wrapped_function"
    ):
        return False
    return True
70
base_layer_utils.py
Python
keras/engine/base_layer_utils.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
6
176,769
42
14
7
125
15
0
52
96
move_witnesses
equitable_coloring: Get lazily first item instead of creating whole list (#5668)
https://github.com/networkx/networkx.git
def move_witnesses(src_color, dst_color, N, H, F, C, T_cal, L):
    X = src_color
    while X != dst_color:
        Y = T_cal[X]
        # Move _any_ witness from X to Y = T_cal[X]
        w = next(x for x in C[X] if N[(x, Y)] == 0)
        change_color(w, X, Y, N=N, H=H, F=F, C=C, L=L)
        X = Y
89
equitable_coloring.py
Python
networkx/algorithms/coloring/equitable_coloring.py
51055a0623a73bd1da4232df782cefd223f5e19e
networkx
4
250,017
77
11
45
281
8
0
150
682
test_get_entities_changed
Add missing types to tests.util. (#14597) Removes files under tests.util from the ignored by list, then fully types all tests/util/*.py files.
https://github.com/matrix-org/synapse.git
def test_get_entities_changed(self) -> None:
    cache = StreamChangeCache("#test", 1)

    cache.entity_has_changed("user@foo.com", 2)
    cache.entity_has_changed("bar@baz.net", 3)
    cache.entity_has_changed("user@elsewhere.org", 4)

    # Query all the entries, but mid-way through the stream. We should only
    # get the ones after that point.
    self.assertEqual(
        cache.get_entities_changed(
            ["user@foo.com", "bar@baz.net", "user@elsewhere.org"], stream_pos=2
        ),
        {"bar@baz.net", "user@elsewhere.org"},
    )

    # Query all the entries mid-way through the stream, but include one
    # that doesn't exist in it. We shouldn't get back the one that doesn't
    # exist.
    self.assertEqual(
        cache.get_entities_changed(
            [
                "user@foo.com",
                "bar@baz.net",
                "user@elsewhere.org",
                "not@here.website",
            ],
            stream_pos=2,
        ),
        {"bar@baz.net", "user@elsewhere.org"},
    )

    # Query all the entries, but before the first known point. We will get
    # all the entries we queried for, including ones that don't exist.
    self.assertEqual(
        cache.get_entities_changed(
            [
                "user@foo.com",
                "bar@baz.net",
                "user@elsewhere.org",
                "not@here.website",
            ],
            stream_pos=0,
        ),
        {"user@foo.com", "bar@baz.net", "user@elsewhere.org", "not@here.website"},
    )

    # Query a subset of the entries mid-way through the stream. We should
    # only get back the subset.
    self.assertEqual(
        cache.get_entities_changed(["bar@baz.net"], stream_pos=2),
        {"bar@baz.net"},
    )
158
test_stream_change_cache.py
Python
tests/util/test_stream_change_cache.py
acea4d7a2ff61b5beda420b54a8451088060a8cd
synapse
1
20,560
87
20
57
469
20
0
171
708
__mul__
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def __mul__(self, other):
    if other is Ellipsis:
        other = (0, None)
    elif isinstance(other, tuple) and other[:1] == (Ellipsis,):
        other = ((0,) + other[1:] + (None,))[:2]

    if isinstance(other, int):
        minElements, optElements = other, 0
    elif isinstance(other, tuple):
        other = tuple(o if o is not Ellipsis else None for o in other)
        other = (other + (None, None))[:2]
        if other[0] is None:
            other = (0, other[1])
        if isinstance(other[0], int) and other[1] is None:
            if other[0] == 0:
                return ZeroOrMore(self)
            if other[0] == 1:
                return OneOrMore(self)
            else:
                return self * other[0] + ZeroOrMore(self)
        elif isinstance(other[0], int) and isinstance(other[1], int):
            minElements, optElements = other
            optElements -= minElements
        else:
            raise TypeError(
                "cannot multiply ParserElement and ({}) objects".format(
                    ",".join(type(item).__name__ for item in other)
                )
            )
    else:
        raise TypeError(
            "cannot multiply ParserElement and {} objects".format(
                type(other).__name__
            )
        )

    if minElements < 0:
        raise ValueError("cannot multiply ParserElement by negative value")
    if optElements < 0:
        raise ValueError(
            "second tuple value must be greater or equal to first tuple value"
        )
    if minElements == optElements == 0:
        return And([])

    if optElements:
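A hedged usage sketch of the repetition semantics this operator gives pyparsing elements; it assumes the pyparsing 3.x API (older releases spell the method parseString).

from pyparsing import Word, nums

two_to_three = Word(nums) * (2, 3)          # between two and three integer tokens
print(two_to_three.parse_string("1 2 3"))   # ['1', '2', '3']
print(two_to_three.parse_string("4 5"))     # ['4', '5']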
368
core.py
Python
pipenv/patched/notpip/_vendor/pyparsing/core.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
23
125,000
44
20
19
213
23
0
49
325
test_relative_json
[RLlib] improved unittests for dataset_reader and fixed bugs (#26458)
https://github.com/ray-project/ray.git
def test_relative_json(self):
    # this should work regardless of where th current working directory is.
    with tempfile.TemporaryDirectory() as tmp_dir:
        cwdir = os.getcwd()
        os.chdir(tmp_dir)
        unzipped_paths = _unzip_if_needed(
            [str(Path(self.relative_path) / "large.json")], "json"
        )
        self.assertEqual(
            os.path.realpath(str(Path(unzipped_paths[0]).absolute())),
            os.path.realpath(
                str(
                    Path(__file__).parent.parent.parent
                    / self.relative_path
                    / "large.json"
                )
            ),
        )

        assert all([Path(fpath).exists() for fpath in unzipped_paths])
        os.chdir(cwdir)
126
test_dataset_reader.py
Python
rllib/offline/tests/test_dataset_reader.py
569fe0109629048d08e1d9e023f7769f10bd2244
ray
2
138,047
30
12
13
169
12
0
42
105
test_numerical_error
[air/tune] Internal resource management 1 - Ray AIR resource manager implementation (#30777) Prerequisite to #30016 This PR adds a new Ray AIR resource manager to replace the PlacementGroupManager of Ray Tune. Details can be found in #30016. Specifically, this PR - Adds the main resource manager abstractions - Renames (and moves) PlacementGroupFactory to ResourceRequest - Adds implementations and tests for a placement group based manager and a budget based manager Signed-off-by: Kai Fricke <kai@anyscale.com> Signed-off-by: Kai Fricke <krfricke@users.noreply.github.com> Co-authored-by: matthewdeng <matthew.j.deng@gmail.com>
https://github.com/ray-project/ray.git
def test_numerical_error(ray_start_4_cpus):
    manager = FixedResourceManager(
        total_resources={"CPU": 0.99, "GPU": 0.99, "a": 0.99}
    )
    resource_request = ResourceRequest([{"CPU": 0.33, "GPU": 0.33, "a": 0.33}])

    for i in range(3):
        manager.request_resources(resource_request)
        assert manager.acquire_resources(
            resource_request=resource_request
        ), manager._available_resources

    assert manager._available_resources["CPU"] == 0
    assert manager._available_resources["GPU"] == 0
    assert manager._available_resources["a"] == 0
112
test_resource_manager_fixed.py
Python
python/ray/air/tests/test_resource_manager_fixed.py
edb17fd2069844f12237c85ba6607afae536401d
ray
2
133,402
22
13
16
73
9
0
25
123
_inject_tracing_into_function
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
https://github.com/ray-project/ray.git
def _inject_tracing_into_function(function):
    # Add _ray_trace_ctx to function signature
    if not is_tracing_enabled():
        return function

    setattr(
        function,
        "__signature__",
        add_param_to_signature(
            function,
            inspect.Parameter(
                "_ray_trace_ctx", inspect.Parameter.KEYWORD_ONLY, default=None
            ),
        ),
    )
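A standalone sketch of the signature-extension trick shown above, using only the standard library; the parameter name _trace_ctx is illustrative, not the project's.

import inspect

def greet(name):
    return f"hello {name}"

sig = inspect.signature(greet)
extra = inspect.Parameter("_trace_ctx", inspect.Parameter.KEYWORD_ONLY, default=None)
greet.__signature__ = sig.replace(parameters=[*sig.parameters.values(), extra])

print(inspect.signature(greet))   # (name, *, _trace_ctx=None)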
53
tracing_helper.py
Python
python/ray/util/tracing/tracing_helper.py
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
ray
2
70,332
36
9
5
78
10
0
44
100
test_publish_view
Ensure confirmation page respects custom get_admin_display_title methods
https://github.com/wagtail/wagtail.git
def test_publish_view(self):
    # Request confirm publish page
    response = self.client.get(self.url)

    # # Check that the user received an publish confirm page
    self.assertEqual(response.status_code, 200)
    self.assertTemplateUsed(response, 'wagtailadmin/pages/bulk_actions/confirm_bulk_publish.html')

    # Page titles shown on the confirmation page should use SimplePage's custom get_admin_display_title method
    self.assertContains(response, "Hello world!-1 (simple page)")
44
test_bulk_publish.py
Python
wagtail/admin/tests/pages/test_bulk_actions/test_bulk_publish.py
cd01bf60874e8a80bb5bd7fb0cd44c192234375e
wagtail
1
269,759
37
13
14
113
12
0
39
138
_collective_communication
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def _collective_communication(all_reduce_alg):
    collective_communication_options = {
        None: tf.distribute.experimental.CollectiveCommunication.AUTO,
        "ring": tf.distribute.experimental.CollectiveCommunication.RING,
        "nccl": tf.distribute.experimental.CollectiveCommunication.NCCL,
    }
    if all_reduce_alg not in collective_communication_options:
        raise ValueError(
            "When used with `multi_worker_mirrored`, valid values for "
            "all_reduce_alg are [`ring`, `nccl`]. Supplied value: {}".format(
                all_reduce_alg
            )
        )
    return collective_communication_options[all_reduce_alg]
68
distribution_util.py
Python
keras/benchmarks/distribution_util.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
2
9,894
5
9
2
38
5
0
5
19
update_worker_pea_args
feat: star routing (#3900) * feat(proto): adjust proto for star routing (#3844) * feat(proto): adjust proto for star routing * feat(proto): generate proto files * feat(grpc): refactor grpclet interface (#3846) * feat: refactor connection pool for star routing (#3872) * feat(k8s): add more labels to k8s deployments * feat(network): refactor connection pool * feat(network): refactor k8s pool * feat: star routing graph gateway (#3877) * feat: star routing - refactor grpc data runtime (#3887) * feat(runtimes): refactor grpc dataruntime * fix(tests): adapt worker runtime tests * fix(import): fix import * feat(proto): enable sending multiple lists (#3891) * feat: star routing gateway (#3893) * feat: star routing gateway all protocols (#3897) * test: add streaming and prefetch tests (#3901) * feat(head): new head runtime for star routing (#3899) * feat(head): new head runtime * feat(head): new head runtime * style: fix overload and cli autocomplete * feat(network): improve proto comments Co-authored-by: Jina Dev Bot <dev-bot@jina.ai> * feat(worker): merge docs in worker runtime (#3905) * feat(worker): merge docs in worker runtime * feat(tests): assert after clean up * feat(tests): star routing runtime integration tests (#3908) * fix(tests): fix integration tests * test: test runtimes fast slow request (#3910) * feat(zmq): purge zmq, zed, routing_table (#3915) * feat(zmq): purge zmq, zed, routing_table * style: fix overload and cli autocomplete * feat(zmq): adapt comment in dependency list * style: fix overload and cli autocomplete * fix(tests): fix type tests Co-authored-by: Jina Dev Bot <dev-bot@jina.ai> * test: add test gateway to worker connection (#3921) * feat(pea): adapt peas for star routing (#3918) * feat(pea): adapt peas for star routing * style: fix overload and cli autocomplete * feat(pea): add tests * feat(tests): add failing head pea test Co-authored-by: Jina Dev Bot <dev-bot@jina.ai> * feat(tests): integration tests for peas (#3923) * feat(tests): integration tests for peas * feat(pea): remove _inner_pea function * feat: star routing container pea (#3922) * test: rescue tests (#3942) * fix: fix streaming tests (#3945) * refactor: move docker run to run (#3948) * feat: star routing pods (#3940) * feat(pod): adapt pods for star routing * feat(pods): adapt basepod to star routing * feat(pod): merge pod and compound pod * feat(tests): fix tests * style: fix overload and cli autocomplete * feat(test): add container pea int test * feat(ci): remove more unnecessary tests * fix(tests): remove jinad runtime * feat(ci): remove latency tracking * fix(ci): fix ci def * fix(runtime): enable runtime to be exited * fix(tests): wrap runtime test in process * fix(runtimes): remove unused runtimes * feat(runtimes): improve cancel wait * fix(ci): build test pip again in ci * fix(tests): fix a test * fix(test): run async in its own process * feat(pod): include shard in activate msg * fix(pea): dont join * feat(pod): more debug out * feat(grpc): manage channels properly * feat(pods): remove exitfifo * feat(network): add simple send retry mechanism * fix(network): await pool close * fix(test): always close grpc server in worker * fix(tests): remove container pea from tests * fix(tests): reorder tests * fix(ci): split tests * fix(ci): allow alias setting * fix(test): skip a test * feat(pods): address comments Co-authored-by: Jina Dev Bot <dev-bot@jina.ai> * test: unblock skipped test (#3957) * feat: jinad pea (#3949) * feat: jinad pea * feat: jinad pea * test: remote peas * test: toplogy tests with jinad 
* ci: parallel jobs * feat(tests): add pod integration tests (#3958) * feat(tests): add pod integration tests * fix(tests): make tests less flaky * fix(test): fix test * test(pea): remote pea topologies (#3961) * test(pea): remote pea simple topology * test: remote pea topologies * refactor: refactor streamer result handling (#3960) * feat(k8s): adapt K8s Pod for StarRouting (#3964) * test: optimize k8s test * test: increase timeout and use different namespace * test: optimize k8s test * test: build and load image when needed * test: refactor k8s test * test: fix image name error * test: fix k8s image load * test: fix typoe port expose * test: update tests in connection pool and handling * test: remove unused fixture * test: parameterize docker images * test: parameterize docker images * test: parameterize docker images * feat(k8s): adapt k8s pod for star routing * fix(k8s): dont overwrite add/remove function in pool * fix(k8s): some fixes * fix(k8s): some more fixes * fix(k8s): linting * fix(tests): fix tests * fix(tests): fix k8s unit tests * feat(k8s): complete k8s integration test * feat(k8s): finish k8s tests * feat(k8s): fix test * fix(tests): fix test with no name * feat(k8s): unify create/replace interface * feat(k8s): extract k8s port constants * fix(tests): fix tests * fix(tests): wait for runtime being ready in tests * feat(k8s): address comments Co-authored-by: bwanglzu <bo.wang@jina.ai> * feat(flow): adapt Flow for StarRouting (#3986) * feat(flow): add routes * feat(flow): adapt flow to star routing * style: fix overload and cli autocomplete * feat(flow): handle empty topologies * feat(k8s): allow k8s pool disabling * style: fix overload and cli autocomplete * fix(test): fix test with mock * fix(tests): fix more tests * feat(flow): clean up tests * style: fix overload and cli autocomplete * fix(tests): fix more tests * feat: add plot function (#3994) * fix(tests): avoid hanging tests * feat(flow): add type hinting * fix(test): fix duplicate exec name in test * fix(tests): fix more tests * fix(tests): enable jinad test again * fix(tests): random port fixture * fix(style): replace quotes Co-authored-by: Jina Dev Bot <dev-bot@jina.ai> Co-authored-by: Joan Fontanals <joan.martinez@jina.ai> * feat(ci): bring back ci (#3997) * feat(ci): enable ci again * style: fix overload and cli autocomplete * feat(ci): add latency tracking * feat(ci): bring back some tests * fix(tests): remove invalid port test * feat(ci): disable daemon and distributed tests * fix(tests): fix entrypoint in hub test * fix(tests): wait for gateway to be ready * fix(test): fix more tests * feat(flow): do rolling update and scale sequentially * fix(tests): fix more tests * style: fix overload and cli autocomplete * feat: star routing hanging pods (#4011) * fix: try to handle hanging pods better * test: hanging pods test work * fix: fix topology graph problem * test: add unit test to graph * fix(tests): fix k8s tests * fix(test): fix k8s test * fix(test): fix k8s pool test * fix(test): fix k8s test * fix(test): fix k8s connection pool setting * fix(tests): make runtime test more reliable * fix(test): fix routes test * fix(tests): make rolling update test less flaky * feat(network): gurantee unique ports * feat(network): do round robin for shards * fix(ci): increase pytest timeout to 10 min Co-authored-by: Jina Dev Bot <dev-bot@jina.ai> Co-authored-by: Joan Fontanals <joan.martinez@jina.ai> * fix(ci): fix ci file * feat(daemon): jinad pod for star routing * Revert "feat(daemon): jinad pod for star routing" This 
reverts commit ed9b37ac862af2e2e8d52df1ee51c0c331d76f92. * feat(daemon): remote jinad pod support (#4042) * feat(daemon): add pod tests for star routing * feat(daemon): add remote pod test * test(daemon): add remote pod arguments test * test(daemon): add async scale test * test(daemon): add rolling update test * test(daemon): fix host * feat(proto): remove message proto (#4051) * feat(proto): remove message proto * fix(tests): fix tests * fix(tests): fix some more tests * fix(tests): fix more tests * fix(tests): fix more tests * fix(tests): fix more tests * fix(tests): fix more tests * feat(proto): put docs back in data * fix(proto): clean up * feat(proto): clean up * fix(tests): skip latency tracking * fix(test): fix hub test * fix(tests): fix k8s test * fix(test): some test clean up * fix(style): clean up style issues * feat(proto): adjust for rebase * fix(tests): bring back latency tracking * fix(tests): fix merge accident * feat(proto): skip request serialization (#4074) * feat: add reduce to star routing (#4070) * feat: add reduce on shards to head runtime * test: add reduce integration tests with fixed order * feat: add reduce on needs * chore: get_docs_matrix_from_request becomes public * style: fix overload and cli autocomplete * docs: remove undeterministic results warning * fix: fix uses_after * test: assert correct num docs after reducing in test_external_pod * test: correct asserts after reduce in test_rolling_update * fix: no reduce if uses_after_address is set * fix: get_docs_from_request only if needed * fix: fix tests after merge * refactor: move reduce from data_request_handler to head * style: fix overload and cli autocomplete * chore: apply suggestions * fix: fix asserts * chore: minor test fix * chore: apply suggestions * test: remove flow tests with external executor (pea) * fix: fix test_expected_messages_routing * fix: fix test_func_joiner * test: adapt k8s test Co-authored-by: Jina Dev Bot <dev-bot@jina.ai> * fix(k8s): fix static pool config * fix: use custom protoc doc generator image (#4088) * fix: use custom protoc doc generator image * fix(docs): minor doc improvement * fix(docs): use custom image * fix(docs): copy docarray * fix: doc building local only * fix: timeout doc building * fix: use updated args when building ContainerPea * test: add container PeaFactory test * fix: force pea close on windows (#4098) * fix: dont reduce if uses exist (#4099) * fix: dont use reduce if uses exist * fix: adjust reduce tests * fix: adjust more reduce tests * fix: fix more tests * fix: adjust more tests * fix: ignore non jina resources (#4101) * feat(executor): enable async executors (#4102) * feat(daemon): daemon flow on star routing (#4096) * test(daemon): add remote flow test * feat(daemon): call scale in daemon * feat(daemon): remove tail args and identity * test(daemon): rename scalable executor * test(daemon): add a small delay in async test * feat(daemon): scale partial flow only * feat(daemon): call scale directly in partial flow store * test(daemon): use asyncio sleep * feat(daemon): enable flow level distributed tests * test(daemon): fix jinad env workspace config * test(daemon): fix pod test use new port rolling update * feat(daemon): enable distribuetd tests * test(daemon): remove duplicate tests and zed runtime test * test(daemon): fix stores unit test * feat(daemon): enable part of distributed tests * feat(daemon): enable part of distributed tests * test: correct test paths * test(daemon): add client test for remote flows * test(daemon): send a request with 
jina client * test(daemon): assert async generator * test(daemon): small interval between tests * test(daemon): add flow test for container runtime * test(daemon): add flow test for container runtime * test(daemon): fix executor name * test(daemon): fix executor name * test(daemon): use async client fetch result * test(daemon): finish container flow test * test(daemon): enable distributed in ci * test(daemon): enable distributed in ci * test(daemon): decare flows and pods * test(daemon): debug ci if else * test(daemon): debug ci if else * test(daemon): decare flows and pods * test(daemon): correct test paths * test(daemon): add small delay for async tests * fix: star routing fixes (#4100) * docs: update docs * fix: fix Request.__repr__ * docs: update flow remarks * docs: fix typo * test: add non_empty_fields test * chore: remove non_empty_fields test * feat: polling per endpoint (#4111) * feat(polling): polling per endpoint configurable * fix: adjust tests * feat(polling): extend documentation * style: fix overload and cli autocomplete * fix: clean up * fix: adjust more tests * fix: remove repeat from flaky test * fix: k8s test * feat(polling): address pr feedback * feat: improve docs Co-authored-by: Jina Dev Bot <dev-bot@jina.ai> * feat(grpc): support connect grpc server via ssl tunnel (#4092) * feat(grpc): support ssl grpc connect if port is 443 * fix(grpc): use https option instead of detect port automatically * chore: fix typo * fix: update jina/peapods/networking.py Co-authored-by: Joan Fontanals <joan.martinez@jina.ai> * fix: update jina/peapods/networking.py Co-authored-by: Joan Fontanals <joan.martinez@jina.ai> * fix: update jina/peapods/networking.py Co-authored-by: Joan Fontanals <joan.martinez@jina.ai> * test(networking): add test for peapods networking * fix: address comments Co-authored-by: Joan Fontanals <joan.martinez@jina.ai> * feat(polling): unify polling args (#4113) * fix: several issues for jinad pods (#4119) * fix: activate for jinad pods * fix: dont expose worker pod in partial daemon * fix: workspace setting * fix: containerized flows * fix: hub test * feat(daemon): remote peas on star routing (#4112) * test(daemon): fix request in peas * test(daemon): fix request in peas * test(daemon): fix sync async client test * test(daemon): enable remote peas test * test(daemon): replace send message to send request * test(daemon): declare pea tests in ci * test(daemon): use pea args fixture * test(daemon): head pea use default host * test(daemon): fix peas topologies * test(daemon): fix pseudo naming * test(daemon): use default host as host * test(daemon): fix executor path * test(daemon): add remote worker back * test(daemon): skip local remote remote topology * fix: jinad pea test setup * fix: jinad pea tests * fix: remove invalid assertion Co-authored-by: jacobowitz <tobias.jacobowitz@posteo.de> * feat: enable daemon tests again (#4132) * feat: enable daemon tests again * fix: remove bogy empty script file * fix: more jinad test fixes * style: fix overload and cli autocomplete * fix: scale and ru in jinad * fix: fix more jinad tests Co-authored-by: Jina Dev Bot <dev-bot@jina.ai> * fix: fix flow test * fix: improve pea tests reliability (#4136) Co-authored-by: Joan Fontanals <joan.martinez@jina.ai> Co-authored-by: Jina Dev Bot <dev-bot@jina.ai> Co-authored-by: Deepankar Mahapatro <deepankar.mahapatro@jina.ai> Co-authored-by: bwanglzu <bo.wang@jina.ai> Co-authored-by: AlaeddineAbdessalem <alaeddine-13@live.fr> Co-authored-by: Zhaofeng Miao <522856232@qq.com>
https://github.com/jina-ai/jina.git
def update_worker_pea_args(self): self.peas_args['peas'] = self._set_peas_args(self.args)
21
__init__.py
Python
jina/peapods/pods/__init__.py
933415bfa1f9eb89f935037014dfed816eb9815d
jina
1
213,027
47
16
10
140
13
0
69
232
add_tags
fix: Py27hash fix (#2182) * Add third party py27hash code * Add Py27UniStr and unit tests * Add py27hash_fix utils and tests * Add to_py27_compatible_template and tests * Apply py27hash fix to wherever it is needed * Apply py27hash fix, all tests pass except api_with_any_method_in_swagger * apply py27hash fix in openapi + run black * remove py27 testing * remove other py27 references * black fixes * fixes/typos * remove py27 from tox.ini * refactoring * third party notice * black * Fix py27hash fix to deal with null events * Fix Py27UniStr repr for unicode literals * black reformat * Update _template_has_api_resource to check data type more defensively * Apply py27Dict in _get_authorizers * Apply Py27Dict to authorizers and gateway responses which will go into swagger * Update to_py27_compatible_template to handle parameter_values; Add Py27LongInt class * Rename _convert_to_py27_dict to _convert_to_py27_type * Apply Py27UniStr to path param name * Handle HttpApi resource under to_py27_compatible_template * Fix InvalidDocumentException to not sort different exceptions * black reformat * Remove unnecessary test files Co-authored-by: Wing Fung Lau <4760060+hawflau@users.noreply.github.com>
https://github.com/aws/serverless-application-model.git
def add_tags(self, tags):
    for name, value in tags.items():
        # find an existing tag with this name if it exists
        existing_tag = next(
            (existing_tag for existing_tag in self.tags if existing_tag.get("name") == name), None
        )
        if existing_tag:
            # overwrite tag value for an existing tag
            existing_tag[self._X_APIGW_TAG_VALUE] = value
        else:
            # create as Py27Dict and insert key one by one to preserve input order
            tag = Py27Dict()
            tag["name"] = name
            tag[self._X_APIGW_TAG_VALUE] = value
            self.tags.append(tag)
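A plain-dict sketch of the "update the existing entry or append a new one" pattern used above; Py27Dict and the API Gateway tag key are replaced by ordinary dicts and a made-up key.

tags = [{"name": "stage", "x-value": "dev"}]

def add_tag(tags, name, value):
    existing = next((t for t in tags if t.get("name") == name), None)
    if existing:
        existing["x-value"] = value        # overwrite in place
    else:
        tags.append({"name": name, "x-value": value})

add_tag(tags, "stage", "prod")   # updates the existing entry
add_tag(tags, "team", "infra")   # appends a new one
print(tags)
# [{'name': 'stage', 'x-value': 'prod'}, {'name': 'team', 'x-value': 'infra'}]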
84
open_api.py
Python
samtranslator/open_api/open_api.py
a5db070f446b7cfebdaa6ad2e3dcf78f6105a272
serverless-application-model
5
104,789
14
9
13
58
9
0
14
35
data
Add code examples for DatasetDict (#4245) * 📝 add code examples for DatasetDict * 🖍 apply quentin review
https://github.com/huggingface/datasets.git
def data(self) -> Dict[str, Table]:
    self._check_values_type()
    return {k: dataset.data for k, dataset in self.items()}
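A hedged usage sketch with the datasets library (a recent release is assumed); the two-row split below is a toy stand-in for a real dataset.

from datasets import Dataset, DatasetDict

ds = DatasetDict({"train": Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})})
tables = ds.data                 # {'train': <pyarrow-backed table>}
print(list(tables))              # ['train']
print(tables["train"].num_rows)  # 2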
36
dataset_dict.py
Python
src/datasets/dataset_dict.py
1904d0c0a3a96330d9b870cdca3e9a3a137f2977
datasets
2
212,809
42
10
8
72
5
0
89
203
clipboard_get
Replaced all temp Tk windows with creating the hidden-master-root
https://github.com/PySimpleGUI/PySimpleGUI.git
def clipboard_get():
    root = _get_hidden_master_root()
    try:
        value = root.clipboard_get()
    except:
        value = ''
    root.update()

    return value


# MM"`YM
# MM  mmmmm  M
# M'        .M .d8888b. 88d888b. dP    dP 88d888b. .d8888b.
# MM  MMMMMMMM 88'  `88 88'  `88 88    88 88'  `88 Y8ooooo.
# MM  MMMMMMMM 88.  .88 88.  .88 88.  .88 88.  .88       88
# MM  MMMMMMMM `88888P' 88Y888P' `88888P' 88Y888P' `88888P'
# MMMMMMMMMMMM          88                88
#                       dP                dP
# ------------------------------------------------------------------------------------------------------------------ #
# ===================================== Upper PySimpleGUI ======================================================== #
# ------------------------------------------------------------------------------------------------------------------ #
# ----------------------------------- The mighty Popup! ------------------------------------------------------------ #
31
PySimpleGUI.py
Python
PySimpleGUI.py
cfc43679ecad6f3c84592fd1a96072a6de5bcc8e
PySimpleGUI
2
322,536
88
18
35
449
39
0
141
426
word_repetition
Enhance SimCSE by Word Repetition strategy (#1747) * Add WR strategy * Update readme * Update bash file * Update Readme * fix the format error * Update readme * rm english dataset results * Add colon
https://github.com/PaddlePaddle/PaddleNLP.git
def word_repetition(input_ids, token_type_ids, dup_rate=0.32):
    input_ids = input_ids.numpy().tolist()
    token_type_ids = token_type_ids.numpy().tolist()

    batch_size, seq_len = len(input_ids), len(input_ids[0])
    repetitied_input_ids = []
    repetitied_token_type_ids = []
    rep_seq_len = seq_len
    for batch_id in range(batch_size):
        cur_input_id = input_ids[batch_id]
        actual_len = np.count_nonzero(cur_input_id)
        dup_word_index = []
        # If sequence length is less than 5, skip it
        if (actual_len > 5):
            dup_len = random.randint(a=0, b=max(2, int(dup_rate * actual_len)))
            # Skip cls and sep position
            dup_word_index = random.sample(
                list(range(1, actual_len - 1)), k=dup_len)

        r_input_id = []
        r_token_type_id = []
        for idx, word_id in enumerate(cur_input_id):
            # Insert duplicate word
            if idx in dup_word_index:
                r_input_id.append(word_id)
                r_token_type_id.append(token_type_ids[batch_id][idx])
            r_input_id.append(word_id)
            r_token_type_id.append(token_type_ids[batch_id][idx])

        after_dup_len = len(r_input_id)
        repetitied_input_ids.append(r_input_id)
        repetitied_token_type_ids.append(r_token_type_id)

        if after_dup_len > rep_seq_len:
            rep_seq_len = after_dup_len

    # Padding the data to the same length
    for batch_id in range(batch_size):
        after_dup_len = len(repetitied_input_ids[batch_id])
        pad_len = rep_seq_len - after_dup_len
        repetitied_input_ids[batch_id] += [0] * pad_len
        repetitied_token_type_ids[batch_id] += [0] * pad_len

    return paddle.to_tensor(repetitied_input_ids), paddle.to_tensor(
        repetitied_token_type_ids)
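A pure-Python toy of the word-repetition augmentation above, without paddle or batching; the token ids and the chosen duplication positions are invented.

import random

tokens = [101, 5, 6, 7, 8, 102]   # [CLS] w1 w2 w3 w4 [SEP], illustrative ids
dup_rate = 0.32
dup_len = random.randint(0, max(2, int(dup_rate * len(tokens))))
dup_index = set(random.sample(range(1, len(tokens) - 1), k=dup_len))  # never CLS/SEP

augmented = []
for idx, tok in enumerate(tokens):
    if idx in dup_index:
        augmented.append(tok)     # repeat the chosen word once
    augmented.append(tok)
print(augmented)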
283
data.py
Python
examples/text_matching/simcse/data.py
653ca9635b103582961f8eafc47051b31b10f181
PaddleNLP
7
138,178
26
13
9
133
21
1
28
66
test_run_config_port2
[serve] Correct `serve run config` port override sequence (#31107)
https://github.com/ray-project/ray.git
def test_run_config_port2(ray_start_stop):
    config_file_name = os.path.join(
        os.path.dirname(__file__), "test_config_files", "basic_graph_http.yaml"
    )
    subprocess.Popen(["serve", "run", config_file_name])
    wait_for_condition(
        lambda: requests.post("http://localhost:8005/").text == "wonderful world",
        timeout=15,
    )


@pytest.mark.skipif(sys.platform == "win32", reason="File path incorrect on Windows.")
@pytest.mark.skipif(sys.platform == "win32", reason="File path incorrect on Windows.")
59
test_cli.py
Python
python/ray/serve/tests/test_cli.py
fefda8bce104b92a97d5fb3ea7fa14ac233b452a
ray
1
268,721
9
9
3
46
8
0
9
23
volumes
ansible-test - Improve container management. (#78550) See changelogs/fragments/ansible-test-container-management.yml for details.
https://github.com/ansible/ansible.git
def volumes(self) -> dict[str, t.Any]: return self.config.get('Volumes') or {}
27
docker_util.py
Python
test/lib/ansible_test/_internal/docker_util.py
cda16cc5e9aa8703fb4e1ac0a0be6b631d9076cc
ansible
2
191,392
11
10
5
54
7
0
11
27
test_huggingface_call_error
Harrison/add huggingface hub (#23) Add support for huggingface hub I could not find a good way to enforce stop tokens over the huggingface hub api - that needs to hopefully be cleaned up in the future
https://github.com/hwchase17/langchain.git
def test_huggingface_call_error() -> None:
    llm = HuggingFaceHub(max_new_tokens=-1)
    with pytest.raises(ValueError):
        llm("Say foo:")
28
test_huggingface_hub.py
Python
tests/integration_tests/llms/test_huggingface_hub.py
020c42dcae2dba497d5ef5efb4dd533189bfb79f
langchain
1
129,846
35
13
8
116
13
0
43
71
_system_usage
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
https://github.com/ray-project/ray.git
def _system_usage():  # noqa
    cpu_summary_str = open(PROC_STAT_PATH).read().split("\n")[0]
    parts = cpu_summary_str.split()
    assert parts[0] == "cpu"
    usage_data = parts[1:8]
    total_clock_ticks = sum(int(entry) for entry in usage_data)
    # 100 clock ticks per second, 10^9 ns per second
    usage_ns = total_clock_ticks * 10 ** 7
    return usage_ns
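A standalone sketch of the /proc/stat arithmetic above, run on a hard-coded sample line so it also works off-Linux; the tick values are invented.

sample = "cpu  4705 150 1120 16250856 1290 0 105 0 0 0"   # illustrative first line
parts = sample.split()
assert parts[0] == "cpu"
total_ticks = sum(int(v) for v in parts[1:8])
# With the kernel's usual 100 ticks/second, one tick is 10 ms = 10**7 ns.
usage_ns = total_ticks * 10**7
print(usage_ns)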
67
k8s_utils.py
Python
dashboard/k8s_utils.py
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
ray
2
77,200
60
14
24
252
30
1
77
339
test_translation_count_in_context
Sync tree: cascade unpublish, move and delete (#7984) * Add construct_synced_page_tree_list hook and use in page unpublish view * Implement construct_synced_page_tree_list in simple_translation but only when sync page tree is enabled * Add hook documentation * Add construct_synced_page_tree_list hook tests (#8058) * Move translated and alias pages when WAGTAIL_I18N_ENABLED and WAGTAILSIMPLETRANSLATION_SYNC_PAGE_TREE are enabled Co-Authored-By: Kalob Taulien <4743971+KalobTaulien@users.noreply.github.com> * Delete corresponding translations when WAGTAIL_I18N_ENABLED and WAGTAILSIMPLETRANSLATION_SYNC_PAGE_TREE are true Co-Authored-By: Kalob Taulien <4743971+KalobTaulien@users.noreply.github.com> * Rename the hook to be more specific * Update singular string version in confirm_move.html * Update test test_translation_count_in_context Co-authored-by: Kalob Taulien <4743971+KalobTaulien@users.noreply.github.com> Co-authored-by: Karl Hobley <karl@kaed.uk>
https://github.com/wagtail/wagtail.git
def test_translation_count_in_context(self):
    self.login()

    # BlogIndex needs translated pages before child pages can be translated
    self.fr_blog_index = self.en_blog_index.copy_for_translation(self.fr_locale)
    self.de_blog_index = self.en_blog_index.copy_for_translation(self.de_locale)

    # create translation in FR tree
    self.fr_blog_post = self.en_blog_post.copy_for_translation(self.fr_locale)
    # create alias in DE tree
    self.de_blog_post = self.en_blog_post.copy_for_translation(
        self.de_locale, alias=True
    )

    response = self.client.get(
        reverse(
            "wagtailadmin_pages:move_confirm",
            args=(
                self.en_blog_post.id,
                self.en_homepage.id,
            ),
        ),
        follow=True,
    )
    self.assertEqual(response.status_code, 200)
    self.assertEqual(response.context["translations_to_move_count"], 1)
    self.assertIn(
        "This will also move one translation of this page and its child pages",
        response.content.decode("utf-8"),
    )


@override_settings(
    WAGTAILSIMPLETRANSLATION_SYNC_PAGE_TREE=True, WAGTAIL_I18N_ENABLED=True
)
@override_settings( WAGTAILSIMPLETRANSLATION_SYNC_PAGE_TREE=True, WAGTAIL_I18N_ENABLED=True )
146
test_wagtail_hooks.py
Python
wagtail/contrib/simple_translation/tests/test_wagtail_hooks.py
4cc10322a1c86c1137f5042a13d94d8017498bf7
wagtail
1
289,208
6
8
6
34
4
0
6
20
target_temperature_high
Bump plugwise to v0.25.2 and adapt climate (#80347) Co-authored-by: Franck Nijhof <frenck@frenck.nl>
https://github.com/home-assistant/core.git
def target_temperature_high(self) -> float:
    return self.device["thermostat"]["setpoint_high"]
18
climate.py
Python
homeassistant/components/plugwise/climate.py
f5666641ce712775a18a06a42450e0ed0c0d75e0
core
1
268,511
7
8
2
33
5
0
7
21
fetch_file
Add `use_rsa_sha2_algorithms` option for paramiko (#78789) Fixes #76737 Fixes #77673 Co-authored-by: Matt Clay <matt@mystile.com>
https://github.com/ansible/ansible.git
def fetch_file(self, in_path, out_path):
    return self._local.fetch_file(in_path, out_path)
21
connection_base.py
Python
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/plugin_utils/connection_base.py
76b746655a36807fa9198064ca9fe7c6cc00083a
ansible
1
321,852
11
9
8
48
5
0
12
44
text
sql: Add *all* primary sqlite result codes For three reasons: - There are only 31 of them, and we don't really expect any more to turn up (last happened in 2013, and we have a test for it happening) - It makes for nicer debug output - It always felt strange to only have a small subset in the enum
https://github.com/qutebrowser/qutebrowser.git
def text(self) -> str:
    if self.error is None:
        return str(self)
    return self.error.databaseText()
28
sql.py
Python
qutebrowser/misc/sql.py
ee4d6e0396a6b570f4d5592a9c4c1a9fee1027b6
qutebrowser
2
47,657
35
14
20
198
29
1
45
200
dag_bag_head_tail
Replace usage of `DummyOperator` with `EmptyOperator` (#22974) * Replace usage of `DummyOperator` with `EmptyOperator`
https://github.com/apache/airflow.git
def dag_bag_head_tail():
    dag_bag = DagBag(dag_folder=DEV_NULL, include_examples=False)
    with DAG("head_tail", start_date=DEFAULT_DATE, schedule_interval="@daily") as dag:
        head = ExternalTaskSensor(
            task_id='head',
            external_dag_id=dag.dag_id,
            external_task_id="tail",
            execution_delta=timedelta(days=1),
            mode="reschedule",
        )
        body = EmptyOperator(task_id="body")
        tail = ExternalTaskMarker(
            task_id="tail",
            external_dag_id=dag.dag_id,
            external_task_id=head.task_id,
            execution_date="{{ macros.ds_add(ds, 1) }}",
        )
        head >> body >> tail

    dag_bag.bag_dag(dag=dag, root_dag=dag)

    return dag_bag


@provide_session
@provide_session
119
test_external_task_sensor.py
Python
tests/sensors/test_external_task_sensor.py
49e336ae0302b386a2f47269a6d13988382d975f
airflow
1
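The dag_bag_head_tail cell above builds a DagBag whose single DAG chains ExternalTaskSensor, EmptyOperator and ExternalTaskMarker. A hedged sketch of how a test could consume it follows; it assumes the function is registered as a pytest fixture (the decorator is not part of the cell above), and the test name and assertion are illustrative rather than taken from the Airflow test suite.

def test_head_tail_structure(dag_bag_head_tail):
    # Look up the DAG defined by the fixture and check its task wiring.
    dag = dag_bag_head_tail.get_dag("head_tail")
    assert {task.task_id for task in dag.tasks} == {"head", "body", "tail"}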
178,460
56
12
28
192
19
1
79
202
_getPythonForSconsExePath
Scons: Refactor Python scan for major cleanup * This is in preparation of making it reusable for onefile compression which also has a simular need.
https://github.com/Nuitka/Nuitka.git
def _getPythonForSconsExePath():
    python_exe = Options.getPythonPathForScons()

    if python_exe is not None:
        return python_exe

    scons_supported_pythons = ("3.5", "3.6", "3.7", "3.8", "3.9", "3.10")
    if not Utils.isWin32Windows():
        scons_supported_pythons += ("2.7", "2.6")

    # Our inline copy needs no other module, just the right version of Python is needed.
    python_for_scons = findInstalledPython(
        python_versions=scons_supported_pythons, module_name=None, module_version=None
    )

    if python_for_scons is None:
        if Utils.isWin32Windows():
            scons_python_requirement = "Python 3.5 or higher"
        else:
            scons_python_requirement = "Python 2.6, 2.7 or Python >= 3.5"

        # The error message string literal is elided in this cell.
        Tracing.scons_logger.sysexit(
            % scons_python_requirement
        )

    return python_for_scons.getPythonExe()


@contextlib.contextmanager
@contextlib.contextmanager
102
SconsInterface.py
Python
nuitka/build/SconsInterface.py
c4ce69f97f7fefbcf637e9e59b6df056ad03eb16
Nuitka
5
292,724
8
9
3
37
8
0
8
22
extra_state_attributes
Create greeneye_monitor entities when monitor connects (#66710)
https://github.com/home-assistant/core.git
def extra_state_attributes(self) -> dict[str, Any]:
    return {DATA_PULSES: self._sensor.pulses}
23
sensor.py
Python
homeassistant/components/greeneye_monitor/sensor.py
a08165a8d70f484ef28ef01a1ac7679f21f3264a
core
1