Dataset schema (22 columns; for int64 columns Min/Max are value ranges, for string columns they are string lengths):

Column          Type      Min    Max
n_words         int64     3      1.95k
n_ast_errors    int64     0      2
complexity      int64     1      151
nloc            int64     2      546
path            string    8      125
id              int64     280    339k
commit_message  string    3      18.1k
repo            string    3      28
ast_levels      int64     4      28
language        string    1 class (Python)
vocab_size      int64     3      677
file_name       string    5      67
code            string    101    24k
commit_id       string    40     40
ast_errors      string    0      2.76k
token_counts    int64     7      3.77k
url             string    31     61
n_whitespaces   int64     4      13.9k
random_cut      string    21     13.9k
n_identifiers   int64     1      157
n_ast_nodes     int64     10     3.6k
fun_name        string    3      72
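Each record below lists its field values one per line in this same column order; empty ast_errors cells are simply omitted, and newlines inside the code and random_cut fields are collapsed in this preview. As a minimal sketch of how a dataset with this schema could be loaded and filtered with the Hugging Face datasets library — the repo id and split name below are placeholders, not taken from this page:

```python
from datasets import load_dataset

# "your-org/python-commit-functions" is a placeholder id; substitute the real dataset path.
ds = load_dataset("your-org/python-commit-functions", split="train")

print(ds.features)                       # column names and dtypes, as tabulated above
row = ds[0]
print(row["repo"], row["fun_name"], row["nloc"], row["complexity"])

# Example: keep only short, low-complexity functions.
small = ds.filter(lambda r: r["nloc"] <= 20 and r["complexity"] <= 5)
print(len(small), "functions after filtering")
```

Filtering on the precomputed columns (nloc, complexity, token_counts) avoids re-parsing the source at query time.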
19
0
3
5
.venv/lib/python3.8/site-packages/pip/_internal/pyproject.py
60,962
upd; format
transferlearning
11
Python
19
pyproject.py
def _is_list_of_str(obj): # type: (Any) -> bool return ( isinstance(obj, list) and all(isinstance(item, str) for item in obj) )
f638f5d0e6c8ebed0e69a6584bc7f003ec646580
28
https://github.com/jindongwang/transferlearning.git
41
def _is_list_of_str(obj): # type: (Any) -> bool return ( isinstance(obj, list) and all(isinstance(item, str) for item in obj) )
7
43
_is_list_of_str
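The record above is pip's _is_list_of_str helper, checked in under .venv/ in the transferlearning repo. How the structural columns (nloc, ast_levels, n_ast_nodes, n_identifiers, ...) were originally computed is not documented in this preview; the sketch below shows one way to recompute roughly comparable statistics from the code field using only Python's standard ast module. The metric definitions here are assumptions, so the numbers will not necessarily match the dataset's own values.

```python
import ast


def ast_depth(node: ast.AST) -> int:
    """Maximum nesting depth of the AST rooted at `node`."""
    children = list(ast.iter_child_nodes(node))
    if not children:
        return 1
    return 1 + max(ast_depth(child) for child in children)


def rough_stats(source: str) -> dict:
    """Approximate a few of the per-function columns from raw source text."""
    tree = ast.parse(source)
    identifiers = {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}
    nloc = sum(
        1
        for line in source.splitlines()
        if line.strip() and not line.strip().startswith("#")
    )
    return {
        "nloc": nloc,
        "ast_levels": ast_depth(tree),
        "n_ast_nodes": sum(1 for _ in ast.walk(tree)),
        "n_identifiers": len(identifiers),
    }


# The first record's `code` field, re-indented (the preview collapses newlines).
sample = """\
def _is_list_of_str(obj):
    # type: (Any) -> bool
    return (
        isinstance(obj, list) and
        all(isinstance(item, str) for item in obj)
    )
"""
print(rough_stats(sample))
```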
9
0
1
2
demo/blocks_flashcards/run.py
180,788
Refactoring Layout: Adding column widths, forms, and more. (#2097) * changes * changes * fix * change * change * changes * changes * changes * changes * change * remove test config outputs * fix wflow * attempt root user * attempt root user * attempt root user * attempt root user * changes * changes * changes * changes * changes * change * changes * change * Update gradio/layouts.py Co-authored-by: Abubakar Abid <abubakar@huggingface.co> * changes Co-authored-by: Abubakar Abid <abubakar@huggingface.co>
gradio
9
Python
9
run.py
def flip_card(card): return card[1], gr.Column.update(visible=True) flip_btn.click(flip_card, [selected_card], [back, answer_col])
99833d506ef88f9452e516ef78db98edae8798f6
21
https://github.com/gradio-app/gradio.git
18
def flip_card(card): return card[1], gr.Column.update(visible=True) flip_btn.click(fl
11
55
flip_card
23
0
1
15
flair/models/relation_extractor_model.py
214,492
Fix relation extractor
flair
12
Python
22
relation_extractor_model.py
def _get_state_dict(self): model_state = { **super()._get_state_dict(), "embeddings": self.embeddings, "label_dictionary": self.label_dictionary, "label_type": self.label_type, "entity_label_type": self.entity_label_type, "weight_dict": self.weight_dict, "pooling_operation": self.pooling_operation, "entity_pair_filters": self.entity_pair_filters, } return model_state
78a4f3aae54d8134a89ea868662eec933bd2dea6
80
https://github.com/flairNLP/flair.git
131
def _get_state_dict(self): model_state = { **super()._get_state_dict(), "embeddings": self.embeddings, "label_dictionary": self.label_dictionary, "label_type": self.label_type, "entity_label_type": self.entity_label_type, "weight_dict": self.weight_dict, "pooling_operation": self.pooling_operation, "entity_pair_filters": self.entity_pair_filters,
11
104
_get_state_dict
26
0
2
6
code/default/launcher/tests/ingegrate_testing.py
219,219
try auto testing.
XX-Net
11
Python
25
ingegrate_testing.py
def kill_python(self): xlog.info("start kill python") if sys.platform == "win32": # This will kill this script as well. os.system("taskkill /im /F python.exe") else: os.system("pkill -9 -f 'start.py'")
39c9303aacc44c84767e40856be0c952ede47281
32
https://github.com/XX-net/XX-Net.git
79
def kill_python(self): xlog.info("start kill python")
8
63
kill_python
16
0
1
7
tests/cache/tests.py
202,074
Refs #33476 -- Reformatted code with Black.
django
10
Python
14
tests.py
def test_incr_version(self): "Dummy cache versions can't be incremented" cache.set("answer", 42) with self.assertRaises(ValueError): cache.incr_version("answer") with self.assertRaises(ValueError): cache.incr_version("does_not_exist")
9c19aff7c7561e3a82978a272ecdaad40dda5c00
42
https://github.com/django/django.git
65
def test_incr_version(self): "Dummy cache versions can't be incremented"
7
81
test_incr_version
11
0
1
5
tests/contenttypes_tests/test_fields.py
202,312
Refs #33476 -- Reformatted code with Black.
django
10
Python
11
test_fields.py
def test_get_content_type_no_arguments(self): with self.assertRaisesMessage( Exception, "Impossible arguments to GFK.get_content_type!" ): Answer.question.get_content_type()
9c19aff7c7561e3a82978a272ecdaad40dda5c00
22
https://github.com/django/django.git
46
def test_get_content_type_no_arguments(self): with self.assertRaisesMessage( Exception, "Imposs
7
40
test_get_content_type_no_arguments
13
0
2
6
kivy/core/window/__init__.py
194,429
Feature: EventManagerBase (#7658) * Added EventManagerBase class and event_managers attribute to WindowBase class. * Added on_motion event to Widget class. * Updated post_dispatch_input in EventLoopBase to skip non-touch events. * Using type ids in MouseMotionEventProvider. * Added on_motion method to Widget subclasses. * Updated Widget.on_motion method to dispatch to filtered widgets if 'pos' is not in me.profile. * Changed motion_filter property in Widget to store key to list values. * Updated Widget.on_motion to not dispatch event to children if widget is disabled. * Widget: Using flags to control dispatching in on_motion method. * Widget: Don't dispatch on_motion to children if only self is registered. * Widget: Removed collision on disabled check from on_motion method. * Widget: Added docstrings for motion_filter and related methods. * EventManager: Moved motion event flags to eventmanager/__init__.py module. * ScreenManager: Overrode the on_motion method. * WindowBase: Using attributes event_managers and event_managers_dict. * WindowBase: Added doc for register_event_manager and unregister_event_manager methods. * Widget: Improved default dispatch to stop after the last registered widgets. * EventManagerBase: Added initial docs class and module. * Widget: Added experimental warnings to motion_filter property and to on_motion and (un)register_for_motion_event methods. * WindowBase: Added docs for event_managers and event_managers_dict attributes. * MotionEvent: Added type_id and flags to push_attrs list. * EventManagerBase: Added versionadded tag on all flags. * EventManagerBase: Use dispatch modes instead of flags.
kivy
11
Python
13
__init__.py
def unregister_event_manager(self, manager): self.event_managers.remove(manager) for type_id in manager.type_ids: self.event_managers_dict[type_id].remove(manager) manager.stop() manager.window = None
1830123ba3edf7290b7c6cb1c6f406ccf1d0e5d4
44
https://github.com/kivy/kivy.git
59
def unregister_event_manager(self, manager): self.event_managers.remove(manager) for type_id in manager.type_ids: self.event_managers_dict[type_id].remove(manager) manager.st
10
72
unregister_event_manager
6
1
1
2
tests/components/flunearyou/conftest.py
310,412
Clean up Flu Near You tests (#64575) * Clean up Flu Near You tests * Docstring * More fixtures * Revert "More fixtures" This reverts commit 30f079b6266ef6cb14417ca895da1ae937c87abe.
core
10
Python
6
conftest.py
def data_cdc_fixture(): return json.loads(load_fixture("cdc_data.json", "flunearyou")) @pytest.fixture(name="setup_flunearyou")
b2811cff515a87685673aea2319037c415b067a7
@pytest.fixture(name="setup_flunearyou")
17
https://github.com/home-assistant/core.git
11
def data_cdc_fixture(): return jso
7
51
data_cdc_fixture
8
0
1
4
homeassistant/components/hdmi_cec/media_player.py
306,934
Use new media player enums [e-h] (#78049)
core
7
Python
8
media_player.py
def media_pause(self) -> None: self.send_keypress(KEY_PAUSE) self._state = MediaPlayerState.PAUSED
56c4e0391dd4696ee52b20cf2660da8c9cac480b
21
https://github.com/home-assistant/core.git
29
def media_pause(self) -> None: self.send_keyp
7
37
media_pause
23
0
1
8
python/ray/air/tests/test_data_batch_conversion.py
125,475
[Datasets] Automatically cast tensor columns when building Pandas blocks. (#26684) This PR tries to automatically cast tensor columns to our TensorArray extension type when building Pandas blocks, logging a warning and falling back to the opaque object-typed column if the cast fails. This should allow users to remain mostly tensor extension agnostic. TensorArray now eagerly validates the underlying tensor data, raising an error if e.g. the underlying ndarrays have heterogeneous shapes; previously, TensorArray wouldn't validate this on construction and would instead let failures happen downstream. This means that our internal TensorArray use needs to follow a try-except pattern, falling back to a plain NumPy object column.
ray
11
Python
20
test_data_batch_conversion.py
def test_numpy_object_pandas(): input_data = np.array([[1, 2, 3], [1]], dtype=object) expected_output = pd.DataFrame({TENSOR_COLUMN_NAME: input_data}) actual_output = convert_batch_type_to_pandas(input_data) assert expected_output.equals(actual_output) np.testing.assert_array_equal( convert_pandas_to_batch_type(actual_output, type=DataType.NUMPY), input_data )
0c139914bbb3e3557f13738b5f3f9fe8d2d428b4
72
https://github.com/ray-project/ray.git
47
def test_numpy_object_pandas(): input_data = np.array([[1, 2, 3], [1]], dtype=object) expected_output = pd.DataFrame({TENSOR_COLUMN_NAME: input_data}) actual_output = convert_batch_type_to_pandas(input_data) assert expected_output.equals(actual_output) np.testing.assert_array_equal(
19
109
test_numpy_object_pandas
10
0
1
3
synapse/storage/engines/sqlite.py
248,306
Tidy up and type-hint the database engine modules (#12734) Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
synapse
7
Python
10
sqlite.py
def supports_returning(self) -> bool: return sqlite3.sqlite_version_info >= (3, 35, 0)
1fe202a1a3343fad77da270ffe0923a46f1944dd
20
https://github.com/matrix-org/synapse.git
24
def supports_returning(self) -> bool: return sqlite3.sqlite_version_info >= (3, 35, 0)
5
32
supports_returning
216
0
3
29
sympy/core/tests/test_power.py
198,238
fix(core): fix evaluation of sqrt((-1+I)**2)
sympy
16
Python
119
test_power.py
def test_issue_7638(): f = pi/log(sqrt(2)) assert ((1 + I)**(I*f/2))**0.3 == (1 + I)**(0.15*I*f) # if 1/3 -> 1.0/3 this should fail since it cannot be shown that the # sign will be +/-1; for the previous "small arg" case, it didn't matter # that this could not be proved assert (1 + I)**(4*I*f) == ((1 + I)**(12*I*f))**Rational(1, 3) assert (((1 + I)**(I*(1 + 7*f)))**Rational(1, 3)).exp == Rational(1, 3) r = symbols('r', real=True) assert sqrt(r**2) == abs(r) assert cbrt(r**3) != r assert sqrt(Pow(2*I, 5*S.Half)) != (2*I)**Rational(5, 4) p = symbols('p', positive=True) assert cbrt(p**2) == p**Rational(2, 3) assert NS(((0.2 + 0.7*I)**(0.7 + 1.0*I))**(0.5 - 0.1*I), 1) == '0.4 + 0.2*I' assert sqrt(1/(1 + I)) == sqrt(1 - I)/sqrt(2) # or 1/sqrt(1 + I) e = 1/(1 - sqrt(2)) assert sqrt(e) == I/sqrt(-1 + sqrt(2)) assert e**Rational(-1, 2) == -I*sqrt(-1 + sqrt(2)) assert sqrt((cos(1)**2 + sin(1)**2 - 1)**(3 + I)).exp in [S.Half, Rational(3, 2) + I/2] assert sqrt(r**Rational(4, 3)) != r**Rational(2, 3) assert sqrt((p + I)**Rational(4, 3)) == (p + I)**Rational(2, 3) for q in 1+I, 1-I: assert sqrt(q**2) == q for q in -1+I, -1-I: assert sqrt(q**2) == -q assert sqrt((p + r*I)**2) != p + r*I e = (1 + I/5) assert sqrt(e**5) == e**(5*S.Half) assert sqrt(e**6) == e**3 assert sqrt((1 + I*r)**6) != (1 + I*r)**3
1ceeaf7635d2a633fe1a4295bed4fbebebcb8402
552
https://github.com/sympy/sympy.git
375
def test_issue_7638(): f = pi/log(sqrt(2)) assert ((1 + I)**(I*f/2))**0.3 == (1 + I)**(0.15*I*f) # if 1/
23
826
test_issue_7638
59
0
1
13
tests/components/stream/test_worker.py
290,548
Refactor camera stream settings (#81663)
core
12
Python
49
test_worker.py
async def test_get_image(hass, h264_video, filename): await async_setup_component(hass, "stream", {"stream": {}}) # Since libjpeg-turbo is not installed on the CI runner, we use a mock with patch( "homeassistant.components.camera.img_util.TurboJPEGSingleton" ) as mock_turbo_jpeg_singleton: mock_turbo_jpeg_singleton.instance.return_value = mock_turbo_jpeg() stream = create_stream(hass, h264_video, {}, dynamic_stream_settings()) with patch.object(hass.config, "is_allowed_path", return_value=True): make_recording = hass.async_create_task(stream.async_record(filename)) await make_recording assert stream._keyframe_converter._image is None assert await stream.async_get_image() == EMPTY_8_6_JPEG await stream.stop()
ee910bd0e41391e00ccd521fe7d605e494d33046
110
https://github.com/home-assistant/core.git
121
async def test_get_image(hass, h264_video, filename): await async_setup_component(hass, "stream", {"stream": {}})
23
189
test_get_image
195
0
5
21
src/diffusers/pipelines/pipeline_ddim.py
334,842
finish refactor
diffusers
17
Python
123
pipeline_ddim.py
def __call__(self, batch_size=1, generator=None, torch_device=None, eta=0.0, num_inference_steps=50): # eta corresponds to η in paper and should be between [0, 1] if torch_device is None: torch_device = "cuda" if torch.cuda.is_available() else "cpu" num_trained_timesteps = self.noise_scheduler.timesteps inference_step_times = range(0, num_trained_timesteps, num_trained_timesteps // num_inference_steps) self.unet.to(torch_device) # Sample gaussian noise to begin loop image = torch.randn( (batch_size, self.unet.in_channels, self.unet.resolution, self.unet.resolution), generator=generator, ) image = image.to(torch_device) # See formulas (12) and (16) of DDIM paper https://arxiv.org/pdf/2010.02502.pdf # Ideally, read DDIM paper in-detail understanding # Notation (<variable name> -> <name in paper> # - pred_noise_t -> e_theta(x_t, t) # - pred_original_image -> f_theta(x_t, t) or x_0 # - std_dev_t -> sigma_t # - eta -> η # - pred_image_direction -> "direction pointingc to x_t" # - pred_prev_image -> "x_t-1" for t in tqdm.tqdm(reversed(range(num_inference_steps)), total=num_inference_steps): # 1. predict noise residual with torch.no_grad(): residual = self.unet(image, inference_step_times[t]) # 2. predict previous mean of image x_t-1 pred_prev_image = self.noise_scheduler.step(residual, image, t, num_inference_steps, eta) # 3. optionally sample variance variance = 0 if eta > 0: noise = torch.randn(image.shape, generator=generator).to(image.device) variance = self.noise_scheduler.get_variance(t, num_inference_steps).sqrt() * eta * noise # 4. set current image to prev_image: x_t -> x_t-1 image = pred_prev_image + variance return image
12b10cbe0986409e2b87e891248d299b071d0383
225
https://github.com/huggingface/diffusers.git
511
def __call__(self, batch_size=1, generator=None, torch_device=None, eta=0.0, num_inference_steps=50): # eta corresponds to η in paper and should be between [0, 1] if torch_device is None: torch_device = "cuda" if torch.cuda.is_available() else "cpu" num_trained_timesteps = self.noise_scheduler.timesteps inference_step_times = range(0, num_trained_timesteps, num_trained_timesteps // num_inference_steps) self.unet.to(torch_device) # Sample gaussian noise to begin loop image = torch.randn( (batch_size, self.unet.in_channels, self.unet.resolution, self.unet.resolution), generator=generator, ) image = image.to(torch_device) # See formulas (12) and (16) of DDIM paper https://arxiv.org/pdf/2010.02502.pdf # Ideally, read DDIM paper in-detail understanding # Notation (<variable name> -> <name in paper> # - pred_noise_t -> e_theta(x_t, t) # - pred_original_image -> f_theta(x_t, t) or x_0 # - std_dev_t -> sigma_t # - eta -> η # - pred_image_
35
355
__call__
83
0
1
30
rllib/evaluation/tests/test_rollout_worker.py
137,298
[RLlib] `AlgorithmConfig.overrides()` to replace `multiagent->policies->config` and `evaluation_config` dicts. (#30879)
ray
24
Python
71
test_rollout_worker.py
def test_action_normalization(self): from ray.rllib.examples.env.random_env import RandomEnv action_space = gym.spaces.Box(0.0001, 0.0002, (5,)) # Normalize: True (unsquash between Policy's action_space.low/high). ev = RolloutWorker( env_creator=lambda _: RandomEnv( config=dict( action_space=action_space, max_episode_len=10, p_done=0.0, check_action_bounds=True, ) ), config=AlgorithmConfig() .multi_agent( policies={ "default_policy": PolicySpec( policy_class=RandomPolicy, config={"ignore_action_bounds": True}, ) } ) .rollouts(num_rollout_workers=0, batch_mode="complete_episodes") .environment( action_space=action_space, normalize_actions=True, clip_actions=False ), ) sample = convert_ma_batch_to_sample_batch(ev.sample()) # Check, whether the action bounds have been breached (expected). # We still arrived here b/c we unsquashed according to the Env's action # space. self.assertGreater(np.max(sample["actions"]), action_space.high[0]) self.assertLess(np.min(sample["actions"]), action_space.low[0]) ev.stop()
794cfd9725b4dc113aa50e60428367b15e921514
189
https://github.com/ray-project/ray.git
489
def test_action_normalization(self): from ray.rllib.examples.env.random_env import RandomEnv action_space = gym.spaces.Box(0.0001, 0.0002, (5,)) # Normalize: True (unsquash between Policy's action_space.low/high). ev = RolloutWorker( env_creator=lambda _: RandomEnv( config=dict( action_space=action_space, max_episode_len=10, p_done=0.0, check_action_bounds=True, ) ), config=AlgorithmConfig() .multi_agent( policies={ "default_policy": PolicySpec( policy_class=RandomPolicy, config={"ignore_action_bounds": True}, ) } ) .rollouts(num_rollout_workers=0, batch_mode="complete_episodes") .environment( action_space=action_space, normalize_actions=True, clip_actions=False ), ) sample = convert_ma_batch_to_sample_batch(ev.sample()) # Check, whether the action
43
284
test_action_normalization
33
0
1
25
tests/providers/google/cloud/operators/test_vertex_ai.py
44,312
Create CustomJob and Datasets operators for Vertex AI service (#20077)
airflow
12
Python
23
test_vertex_ai.py
def test_execute(self, mock_hook, to_dict_mock): op = CreateDatasetOperator( task_id=TASK_ID, gcp_conn_id=GCP_CONN_ID, delegate_to=DELEGATE_TO, impersonation_chain=IMPERSONATION_CHAIN, region=GCP_LOCATION, project_id=GCP_PROJECT, dataset=TEST_DATASET, retry=RETRY, timeout=TIMEOUT, metadata=METADATA, ) op.execute(context={'ti': mock.MagicMock()}) mock_hook.assert_called_once_with( gcp_conn_id=GCP_CONN_ID, delegate_to=DELEGATE_TO, impersonation_chain=IMPERSONATION_CHAIN ) mock_hook.return_value.create_dataset.assert_called_once_with( region=GCP_LOCATION, project_id=GCP_PROJECT, dataset=TEST_DATASET, retry=RETRY, timeout=TIMEOUT, metadata=METADATA, )
640c0b67631c5f2c8ee866b0726fa7a8a452cd3c
119
https://github.com/apache/airflow.git
268
def test_execute(self, mock_hook, to_dict_mock): op = CreateDatasetOperator( task_id=TASK_ID, gcp_conn_id=GCP_CONN_ID, delegate_to=DELEGATE_TO, impersonation_chain=IMPERSONATION_CHAIN, region=GCP_LOCATION, project_id=GCP_PROJECT, dataset=TEST_DATASET,
33
168
test_execute
82
0
1
38
tests/sentry/api/endpoints/test_organization_metric_data.py
91,679
feat(metrics): make indexer more configurable (#35604) This makes the sentry_metrics indexer more configurable in the following ways, to enable indexing on the ingest-performance-metrics topic: - configurable input Kafka topic - configurable output Kafka topic - configurable model from which to pull index results - tags for internal metrics to distinguish between the two modes operationally
sentry
16
Python
53
test_organization_metric_data.py
def test_abnormal_user_sessions(self): user_ts = time.time() self._send_buckets( [ { "org_id": self.organization.id, "project_id": self.project.id, "metric_id": self.session_user_metric, "timestamp": user_ts, "tags": { self.session_status_tag: _indexer_record(self.organization.id, "abnormal") }, "type": "s", "value": [1, 2, 4], "retention_days": 90, }, { "org_id": self.organization.id, "project_id": self.project.id, "metric_id": self.session_user_metric, "timestamp": user_ts, "tags": {}, "type": "s", "value": [1, 2, 4, 7, 9], "retention_days": 90, }, ], entity="metrics_sets", ) response = self.get_success_response( self.organization.slug, field=["session.abnormal_user"], statsPeriod="6m", interval="1m", ) group = response.data["groups"][0] assert group["totals"] == {"session.abnormal_user": 3} assert group["series"] == {"session.abnormal_user": [0, 0, 0, 0, 0, 3]}
7f60db924ea37f34e0cfe6856777239e2a2ffe13
218
https://github.com/getsentry/sentry.git
620
def test_abnormal_user_sessions(self): user_ts = time.time() self._send_buckets( [ { "org_id": self.organization.id, "project_id": self.project.id, "metric_id": self.session_user_metric, "timestamp": user_ts, "tags": { self.session_status_tag: _indexer_record(self.organization.id, "abnormal") }, "type": "s", "value": [1,
20
354
test_abnormal_user_sessions
19
0
1
11
rllib/offline/estimators/tests/test_ope.py
126,525
[RLlib] Add OPE Learning Tests (#27154)
ray
9
Python
18
test_ope.py
def test_dm_mixed_policy_random_data(self): print("Test DirectMethod on mixed policy on random dataset") check_estimate( estimator_cls=DirectMethod, gamma=self.gamma, q_model_config=self.q_model_config, policy=self.mixed_policy, batch=self.random_batch, mean_ret=self.mixed_reward, std_ret=self.mixed_std, )
5b6a58ed2850d52b8e279d9553a910b7b1de1b42
52
https://github.com/ray-project/ray.git
116
def test_dm_mixed_policy_random_data(self): print("Test DirectMethod on mixed policy on random dataset") check_estimate( estimator_cls=DirectMethod, gamma=self.gamma, q_model_config=self.q_model_config, policy=self.mixed_policy, batch=self.random_batch,
16
77
test_dm_mixed_policy_random_data
29
0
1
6
python/ray/tests/spark/test_utils.py
137,649
Ray on spark implementation (#28771) REP: ray-project/enhancements#14
ray
12
Python
19
test_utils.py
def test_get_spark_task_assigned_physical_gpus(): with patch.dict(os.environ, {}, clear=True): assert get_spark_task_assigned_physical_gpus([2, 5]) == [2, 5] with patch.dict(os.environ, {"CUDA_VISIBLE_DEVICES": "2,3,6"}, clear=True): assert get_spark_task_assigned_physical_gpus([0, 1]) == [2, 3] assert get_spark_task_assigned_physical_gpus([0, 2]) == [2, 6]
e76ccee69aaa7583be1a9d81cf7b2aa72cf25647
86
https://github.com/ray-project/ray.git
55
def test_get_spark_task_assigned_physical_gpus(): with patch.dict(os.environ, {}, clear=True): assert get_spark_tas
7
133
test_get_spark_task_assigned_physical_gpus
78
0
5
27
keras/callbacks_test.py
269,982
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
keras
16
Python
61
callbacks_test.py
def fitModelAndAssertKerasModelWritten(self, model): x, y = np.ones((10, 10, 10, 1)), np.ones((10, 1)) tb_cbk = keras.callbacks.TensorBoard( self.logdir, write_graph=True, profile_batch=0 ) model.fit( x, y, batch_size=2, epochs=3, validation_data=(x, y), callbacks=[tb_cbk], ) summary_file = list_summaries(self.logdir) self.assertEqual( summary_file.tensors, { _ObservedSummary(logdir=self.train_dir, tag="keras"), }, ) if not model.run_eagerly: # There should be one train graph self.assertLen(summary_file.graph_defs, 1) for graph_def in summary_file.graph_defs: graph_def_str = str(graph_def) # All the model layers should appear in the graphs for layer in model.layers: if "input" not in layer.name: self.assertIn(layer.name, graph_def_str)
84afc5193d38057e2e2badf9c889ea87d80d8fbf
174
https://github.com/keras-team/keras.git
385
def fitModelAndAssertKerasModelWritten(self, model): x, y = np.ones((10, 10, 10, 1)), np.ones((10, 1)) tb_cbk = keras.callbacks.TensorBoard( self.logdir, write_graph=True, profile_batch=0 ) model.fit( x, y, batch_size=2, epochs=3, validation_data=(x, y), callbacks=[tb_cbk], ) summary_file = list_summaries(self.logdir) self.assertEqual( summary_file.tensors, { _ObservedSummary(logdir=self.train_dir, tag="keras"), }, ) if not model.run_eagerly: # Th
35
259
fitModelAndAssertKerasModelWritten
21
0
1
12
torchvision/prototype/datasets/_builtin/oxford_iiit_pet.py
192,654
Refactor and simplify prototype datasets (#5778) * refactor prototype datasets to inherit from IterDataPipe (#5448) * refactor prototype datasets to inherit from IterDataPipe * depend on new architecture * fix missing file detection * remove unrelated file * reinstante decorator for mock registering * options -> config * remove passing of info to mock data functions * refactor categories file generation * fix imagenet * fix prototype datasets data loading tests (#5711) * reenable serialization test * cleanup * fix dill test * trigger CI * patch DILL_AVAILABLE for pickle serialization * revert CI changes * remove dill test and traversable test * add data loader test * parametrize over only_datapipe * draw one sample rather than exhaust data loader * cleanup * trigger CI * migrate VOC prototype dataset (#5743) * migrate VOC prototype dataset * cleanup * revert unrelated mock data changes * remove categories annotations * move properties to constructor * readd homepage * migrate CIFAR prototype datasets (#5751) * migrate country211 prototype dataset (#5753) * migrate CLEVR prototype datsaet (#5752) * migrate coco prototype (#5473) * migrate coco prototype * revert unrelated change * add kwargs to super constructor call * remove unneeded changes * fix docstring position * make kwargs explicit * add dependencies to docstring * fix missing dependency message * Migrate PCAM prototype dataset (#5745) * Port PCAM * skip_integrity_check * Update torchvision/prototype/datasets/_builtin/pcam.py Co-authored-by: Philip Meier <github.pmeier@posteo.de> * Address comments Co-authored-by: Philip Meier <github.pmeier@posteo.de> * Migrate DTD prototype dataset (#5757) * Migrate DTD prototype dataset * Docstring * Apply suggestions from code review Co-authored-by: Philip Meier <github.pmeier@posteo.de> Co-authored-by: Philip Meier <github.pmeier@posteo.de> * Migrate GTSRB prototype dataset (#5746) * Migrate GTSRB prototype dataset * ufmt * Address comments * Apparently mypy doesn't know that __len__ returns ints. How cute. * why is the CI not triggered?? 
* Update torchvision/prototype/datasets/_builtin/gtsrb.py Co-authored-by: Philip Meier <github.pmeier@posteo.de> Co-authored-by: Philip Meier <github.pmeier@posteo.de> * migrate CelebA prototype dataset (#5750) * migrate CelebA prototype dataset * inline split_id * Migrate Food101 prototype dataset (#5758) * Migrate Food101 dataset * Added length * Update torchvision/prototype/datasets/_builtin/food101.py Co-authored-by: Philip Meier <github.pmeier@posteo.de> Co-authored-by: Philip Meier <github.pmeier@posteo.de> * Migrate Fer2013 prototype dataset (#5759) * Migrate Fer2013 prototype dataset * Update torchvision/prototype/datasets/_builtin/fer2013.py Co-authored-by: Philip Meier <github.pmeier@posteo.de> Co-authored-by: Philip Meier <github.pmeier@posteo.de> * Migrate EuroSAT prototype dataset (#5760) * Migrate Semeion prototype dataset (#5761) * migrate caltech prototype datasets (#5749) * migrate caltech prototype datasets * resolve third party dependencies * Migrate Oxford Pets prototype dataset (#5764) * Migrate Oxford Pets prototype dataset * Update torchvision/prototype/datasets/_builtin/oxford_iiit_pet.py Co-authored-by: Philip Meier <github.pmeier@posteo.de> Co-authored-by: Philip Meier <github.pmeier@posteo.de> * migrate mnist prototype datasets (#5480) * migrate MNIST prototype datasets * Update torchvision/prototype/datasets/_builtin/mnist.py Co-authored-by: Nicolas Hug <contact@nicolas-hug.com> Co-authored-by: Nicolas Hug <contact@nicolas-hug.com> * Migrate Stanford Cars prototype dataset (#5767) * Migrate Stanford Cars prototype dataset * Address comments * fix category file generation (#5770) * fix category file generation * revert unrelated change * revert unrelated change * migrate cub200 prototype dataset (#5765) * migrate cub200 prototype dataset * address comments * fix category-file-generation * Migrate USPS prototype dataset (#5771) * migrate SBD prototype dataset (#5772) * migrate SBD prototype dataset * reuse categories * Migrate SVHN prototype dataset (#5769) * add test to enforce __len__ is working on prototype datasets (#5742) * reactivate special dataset tests * add missing annotation * Cleanup prototype dataset implementation (#5774) * Remove Dataset2 class * Move read_categories_file out of DatasetInfo * Remove FrozenBunch and FrozenMapping * Remove test_prototype_datasets_api.py and move missing dep test somewhere else * ufmt * Let read_categories_file accept names instead of paths * Mypy * flake8 * fix category file reading Co-authored-by: Philip Meier <github.pmeier@posteo.de> * update prototype dataset README (#5777) * update prototype dataset README * fix header level * Apply suggestions from code review Co-authored-by: Nicolas Hug <contact@nicolas-hug.com> Co-authored-by: Nicolas Hug <contact@nicolas-hug.com> Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>
vision
10
Python
17
oxford_iiit_pet.py
def _resources(self) -> List[OnlineResource]: images = HttpResource( "https://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz", sha256="67195c5e1c01f1ab5f9b6a5d22b8c27a580d896ece458917e61d459337fa318d", preprocess="decompress", ) anns = HttpResource( "https://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz", sha256="52425fb6de5c424942b7626b428656fcbd798db970a937df61750c0f1d358e91", preprocess="decompress", ) return [images, anns]
1ac6e8b91b980b052324f77828a5ef4a6715dd66
46
https://github.com/pytorch/vision.git
121
def _resources(self) -> List[OnlineResource]: images = HttpResource( "https://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz", sha256="67
9
78
_resources
19
0
2
7
erpnext/patches/v12_0/set_cwip_and_delete_asset_settings.py
66,660
style: format code with black
erpnext
11
Python
18
set_cwip_and_delete_asset_settings.py
def execute(): if frappe.db.exists("DocType", "Asset Settings"): frappe.reload_doctype("Asset Category") cwip_value = frappe.db.get_single_value("Asset Settings", "disable_cwip_accounting") frappe.db.sql(, cint(cwip_value)) frappe.db.sql() frappe.delete_doc_if_exists("DocType", "Asset Settings")
494bd9ef78313436f0424b918f200dab8fc7c20b
64
https://github.com/frappe/erpnext.git
12
def execute(): if frappe.db.exists("DocType", "Asset Settings"): frappe.reload_doctype("Asset Category") cwip_value = frappe.db.get_single_value("Asset Settings", "disable_cwip_accounting") frappe.db.sql(, cint(cwip_value)) frappe.db.sql() fr
10
121
execute
17
0
1
9
TTS/tts/utils/text/symbols.py
261,954
Implement BaseCharacters, IPAPhonemes, Graphemes
TTS
8
Python
17
symbols.py
def parse_symbols(): return { "pad": _pad, "eos": _eos, "bos": _bos, "characters": _characters, "punctuations": _punctuations, "phonemes": _phonemes, }
2fb1f705031d4a9602e5853232d28b53cde89a5f
31
https://github.com/coqui-ai/TTS.git
64
def parse_symbols(): return { "pad": _pad, "eos": _eos, "bos": _bos, "characters": _char
7
55
parse_symbols
34
0
1
17
IPython/core/tests/test_magic.py
208,580
Format code
ipython
13
Python
28
test_magic.py
def test_file_double_quote(): ip = get_ipython() with TemporaryDirectory() as td: fname = os.path.join(td, '"file1"') ip.run_cell_magic( "writefile", fname, "\n".join( [ "line1", "line2", ] ), ) s = Path(fname).read_text(encoding="utf-8") assert "line1\n" in s assert "line2" in s
1a9d9554bcee466394990535e190d55008904df8
71
https://github.com/ipython/ipython.git
197
def test_file_double_quote(): ip = get_ipython() with TemporaryDirectory() as td: fname = os.path.join(td, '"file1"') ip.run_cell_magic( "writefile", fname, "\n".join( [ "line1", "line2", ] ), ) s = Path(fname).read_text(encoding="utf-8") assert "line1\n" in s assert "line2" in s
14
134
test_file_double_quote
179
0
2
35
pandas/tests/scalar/timestamp/test_timestamp.py
171,193
API: make Timestamp/Timedelta _as_unit public as_unit (#48819) * API: make Timestamp/Timedelta _as_unit public as_unit * update test * update test * update tests * fix pyi typo * fixup * fixup
pandas
13
Python
67
test_timestamp.py
def test_sub_timedeltalike_mismatched_reso(self, ts_tz): # case with non-lossy rounding ts = ts_tz # choose a unit for `other` that doesn't match ts_tz's; # this construction ensures we get cases with other._creso < ts._creso # and cases with other._creso > ts._creso unit = { NpyDatetimeUnit.NPY_FR_us.value: "ms", NpyDatetimeUnit.NPY_FR_ms.value: "s", NpyDatetimeUnit.NPY_FR_s.value: "us", }[ts._creso] other = Timedelta(0).as_unit(unit) assert other._creso != ts._creso result = ts + other assert isinstance(result, Timestamp) assert result == ts assert result._creso == max(ts._creso, other._creso) result = other + ts assert isinstance(result, Timestamp) assert result == ts assert result._creso == max(ts._creso, other._creso) if ts._creso < other._creso: # Case where rounding is lossy other2 = other + Timedelta._from_value_and_reso(1, other._creso) exp = ts.as_unit(other.unit) + other2 res = ts + other2 assert res == exp assert res._creso == max(ts._creso, other._creso) res = other2 + ts assert res == exp assert res._creso == max(ts._creso, other._creso) else: ts2 = ts + Timedelta._from_value_and_reso(1, ts._creso) exp = ts2 + other.as_unit(ts2.unit) res = ts2 + other assert res == exp assert res._creso == max(ts._creso, other._creso) res = other + ts2 assert res == exp assert res._creso == max(ts._creso, other._creso)
f3c46cd0899d5e11e0602798d9390c90e51e9ba7
283
https://github.com/pandas-dev/pandas.git
533
def test_sub_timedeltalike_mismatched_reso(self, ts_tz): # case with non-lossy rounding ts = ts_tz # choose a unit for `other` that doesn't match ts_tz's; # this construction ensures we get cases with other._creso < ts._creso # and cases with other._creso > ts._creso unit = { NpyDatetimeUnit.NPY_FR_us.value: "ms", NpyDatetimeUnit.NPY_FR_ms.value: "s", NpyDatetimeUnit.NPY_FR_s.value: "us", }[ts._creso] other = Timedelta(0).as_unit(unit) assert other._creso != ts._creso result = ts + other assert isinstance(result, Timestamp) assert result == ts assert result._creso == max(ts._creso, other._creso) result = other + ts assert isinstance(result, Timestamp) assert result == ts assert result._creso == max(ts._creso, other._creso) if ts._creso < other._creso: # Case where rounding is lossy other2 = other + Timedelta._from_value_and_reso(1, other._creso) exp = ts.as_unit(other.unit) + other2 res = ts + other2 assert res == exp assert res._creso == max(ts._creso, other._creso) res = other2 + ts assert res == exp assert res._creso == max(ts._creso, other._creso) else: ts2 = ts + Timedelta._from_value_and_reso(1, ts._creso) exp = ts2 + other.as_unit(ts2.unit) res = ts2 + other assert res == exp assert res._creso == max(ts._creso, other._creso) res = ot
23
438
test_sub_timedeltalike_mismatched_reso
41
1
4
14
gradio/routes.py
179,572
added backend support, see demo xray_blocks
gradio
15
Python
37
routes.py
def file(path): if ( app.launchable.encrypt and isinstance(app.launchable.examples, str) and path.startswith(app.launchable.examples) ): with open(safe_join(app.cwd, path), "rb") as encrypted_file: encrypted_data = encrypted_file.read() file_data = encryptor.decrypt(app.launchable.encryption_key, encrypted_data) return FileResponse( io.BytesIO(file_data), attachment_filename=os.path.basename(path) ) else: return FileResponse(safe_join(app.cwd, path)) @app.get("/api", response_class=HTMLResponse) # Needed for Spaces @app.get("/api/", response_class=HTMLResponse)
188757c1abd64a69a27ee926e30a890fcd87bc7b
@app.get("/api", response_class=HTMLResponse) # Needed for Spaces @app.get("/api/", response_class=HTMLResponse)
109
https://github.com/gradio-app/gradio.git
126
def file(path): if ( app.launchable.encrypt and isinstance(app.launchable.examples, str) and path
28
211
file
33
0
4
6
ludwig/models/predictor.py
7,066
Adds mechanism for calibrating probabilities for category and binary features (#1949) * Started adding files for calibration implementation. * Adds option to return logits and labels in predictor. * Pre-commit fixes * First pass temperature scaling working. * Fixes calibration for categorical feature. * Separate calibrated logits from logits. * Adds option to revert temperature scaling. * Refactoring, move binary prediction logic into calibration class. * Reverted accidental commit to simple_model_training.py * Adds checks and comments. * Fixes matrix scaling, convert pandas series to numpy arrays. * Fixes number of classes for categorical features. * Adds structured calibration result, unit tests. * Make create_calibration_module not abstract, default implementation returns None. * Relax precision requirement for calibration test. * Save weights after calibration, so calibration results are included in save file. * Implemented dirichlet scaling with l2 off-diagonal regularization. * Adds masked_select off_diagonal method. * Change back to matrix scaling. * Updates test expectations to reflect learning rate settings. * Tuned default regularization weight. * Comments. * Set random seed, testing to see if that makes a difference. * Remove checks for exact NLL, ECE values post calibration. * Restored LOGITS to EXCLUDE_PRED_SET, added another option to return logits in batch_predict. * Factor calibration method out of Trainer into Calibrator * Removed horovod argument from calibrator. * Return batch_size if eval_batch_size not specified. * Fix calibration_module docstring. * Updates comment, adds fallback method of calibrating on training set if no validation set available. * Adds calibration registry, replaces if statements for instantiating calibration. * Raise ValueError if unsupported calibration method specified. * Remove calibrate method from Trainer * f string * Use backend to create predictor for calibration. * Moves saving out of calibrator * Fix comment. * Adds ray test of calibration. * Implements collect_logits in ray predictor. * First pass implementation of collect_labels. * Implements collect_logits and collect_labels in ray backend. * Merge predictions and labels in ray backend * Reverts collect_labels, get labels from dataset in calibrate. * Allow overriding EXCLUDE_PRED_SET when getting preds. * Changes 'calibration' config option to binary. * Test both binary and category output features in ray test. * Comments/ * Adds type hints. Co-authored-by: Daniel Treiman <daniel@predibase.com>
ludwig
14
Python
29
predictor.py
def _accumulate_preds(self, preds, predictions, exclude_pred_set=EXCLUDE_PRED_SET): # accumulate predictions from batch for each output feature for of_name, of_preds in preds.items(): for pred_name, pred_values in of_preds.items(): if pred_name not in exclude_pred_set: key = f"{of_name}_{pred_name}" predictions[key].append(pred_values)
e65f74e87e8e29922f4e9f9d839978ffb2c5b029
54
https://github.com/ludwig-ai/ludwig.git
110
def _accumulate_preds(self, preds, predictions, exclude_pred_set=EXCLUDE_PRED_SET): # accumulate predictions from batch for each output feature for of_name, of_preds in preds.items(): for pred_name, pred_v
13
91
_accumulate_preds
104
0
4
85
erpnext/accounts/doctype/cheque_print_template/cheque_print_template.py
64,842
style: format code with black
erpnext
13
Python
88
cheque_print_template.py
def create_or_update_cheque_print_format(template_name): if not frappe.db.exists("Print Format", template_name): cheque_print = frappe.new_doc("Print Format") cheque_print.update( { "doc_type": "Payment Entry", "standard": "No", "custom_format": 1, "print_format_type": "Jinja", "name": template_name, } ) else: cheque_print = frappe.get_doc("Print Format", template_name) doc = frappe.get_doc("Cheque Print Template", template_name) cheque_print.html = % { "starting_position_from_top_edge": doc.starting_position_from_top_edge if doc.cheque_size == "A4" else 0.0, "cheque_width": doc.cheque_width, "cheque_height": doc.cheque_height, "acc_pay_dist_from_top_edge": doc.acc_pay_dist_from_top_edge, "acc_pay_dist_from_left_edge": doc.acc_pay_dist_from_left_edge, "message_to_show": doc.message_to_show if doc.message_to_show else _("Account Pay Only"), "date_dist_from_top_edge": doc.date_dist_from_top_edge, "date_dist_from_left_edge": doc.date_dist_from_left_edge, "acc_no_dist_from_top_edge": doc.acc_no_dist_from_top_edge, "acc_no_dist_from_left_edge": doc.acc_no_dist_from_left_edge, "payer_name_from_top_edge": doc.payer_name_from_top_edge, "payer_name_from_left_edge": doc.payer_name_from_left_edge, "amt_in_words_from_top_edge": doc.amt_in_words_from_top_edge, "amt_in_words_from_left_edge": doc.amt_in_words_from_left_edge, "amt_in_word_width": doc.amt_in_word_width, "amt_in_words_line_spacing": doc.amt_in_words_line_spacing, "amt_in_figures_from_top_edge": doc.amt_in_figures_from_top_edge, "amt_in_figures_from_left_edge": doc.amt_in_figures_from_left_edge, "signatory_from_top_edge": doc.signatory_from_top_edge, "signatory_from_left_edge": doc.signatory_from_left_edge, } cheque_print.save(ignore_permissions=True) frappe.db.set_value("Cheque Print Template", template_name, "has_print_format", 1) return cheque_print
494bd9ef78313436f0424b918f200dab8fc7c20b
246
https://github.com/frappe/erpnext.git
63
def create_or_update_cheque_print_format(template_name): if not frappe.db.exists("Print Format", template_name): cheque_print = frappe.new_doc("Print Format") cheque_print.update( { "doc_type": "Payment Entry", "standard": "No", "custom_format": 1, "print_format_type": "Jinja", "name": template_name, } ) else: cheque_print = frappe.get_doc("Print Format", template_name) doc = frappe.get_doc("Cheque Print Template", template_name) cheque_print.html = % { "starting_position_from_top_edge": doc.starting_position_from_top_edge if doc.cheque_size == "A4" else 0.0, "cheque_width": doc.cheque_width, "cheque_height": doc.cheque_height, "acc_pay_dist_from_top_edge": doc.acc_pay_dist_from_top_edge, "acc_pay_dist_from_left_edge": doc.acc_pay_dist_from_left_edge, "message_to_show": doc.message_to_show if doc.message_to_show else _("Account Pay Only"), "date_dist_from_top_edge": doc.date_dist_from_top_edge, "date_dist_from_left_edge": doc.date_dist_from_left_edge, "acc_no_dist_from_top_edge": doc.acc_no_dist_from_top_edge, "acc_no_dist_from_left_edge": doc.acc_no_dist_from_left_edge, "payer_name_from_top_edge": doc.payer_name_from_top_edge, "payer_name_from_left_edge": doc.payer_name_from_left_edge, "amt_in_words_from_top_edge": doc.amt_in_words_from_top_edge,
36
419
create_or_update_cheque_print_format
40
0
3
9
sympy/physics/quantum/shor.py
197,337
Remove abbreviations in documentation
sympy
13
Python
27
shor.py
def shor(N): a = random.randrange(N - 2) + 2 if igcd(N, a) != 1: return igcd(N, a) r = period_find(a, N) if r % 2 == 1: shor(N) answer = (igcd(a**(r/2) - 1, N), igcd(a**(r/2) + 1, N)) return answer
65be461082dda54c8748922f9c29a19af1279fe1
89
https://github.com/sympy/sympy.git
75
def shor(N): a = random.randrange(N - 2) + 2 if igc
9
138
shor
28
0
3
9
keras/utils/data_utils.py
276,738
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
keras
12
Python
26
data_utils.py
def _hash_file(fpath, algorithm="sha256", chunk_size=65535): if isinstance(algorithm, str): hasher = _resolve_hasher(algorithm) else: hasher = algorithm with open(fpath, "rb") as fpath_file: for chunk in iter(lambda: fpath_file.read(chunk_size), b""): hasher.update(chunk) return hasher.hexdigest()
84afc5193d38057e2e2badf9c889ea87d80d8fbf
73
https://github.com/keras-team/keras.git
75
def _hash_file(fpath, algorithm="sha256", chunk_size=65535): if isinstance(algorithm, str): hasher = _resolve_hasher(algorithm) else: hasher = algorithm with open(fpath, "rb") as fpath_file: for chunk in iter(lambda: fpath_file.read(chunk_size), b""): hasher.u
15
123
_hash_file
20
0
2
28
tests/jobs/test_backfill_job.py
46,930
Fixed backfill interference with scheduler (#22701) Co-authored-by: Dmirty Suvorov <dmitry.suvorov@scribd.com>
airflow
10
Python
15
test_backfill_job.py
def test_backfill_max_limit_check(self, dag_maker): dag_id = 'test_backfill_max_limit_check' run_id = 'test_dag_run' start_date = DEFAULT_DATE - datetime.timedelta(hours=1) end_date = DEFAULT_DATE dag_run_created_cond = threading.Condition()
9769a65c20f6028d640061efacbc5bfeb5ebaf3d
176
https://github.com/apache/airflow.git
54
def test_backfill_max_limit_check(self, dag_maker): dag_id = 'test_backfill_max_limit_check' run_id = 'test_dag_run' start_date = DEFAULT_DATE - datetime.timedelta(hours=1) end_date = DEFAULT_DATE dag_run_created_cond = threading.Condition()
14
61
test_backfill_max_limit_check
16
0
2
5
airflow/www/utils.py
45,438
Make Grid and and Graph view work with task mapping (#21740) * Expand mapped tasks in the Scheduler Technically this is done inside DagRun.task_instance_scheduling_decisions, but the only place that is currently called is the Scheduler The way we are getting `upstream_ti` to pass to expand_mapped_task is all sorts of wrong and will need fixing, I think the interface for that method is wrong and the mapped task should be responsible for finding the right upstream TI itself. * make UI and tree work with mapped tasks * add graph tooltip and map count * simplify node label redraw logic * add utils.js and map_index to /taskInstances * use TaskInstanceState instead of strings * move map_index on /taskinstance to separate PR * check to use Task or Tasks * remove `no_status` and use TaskInstanceState Co-authored-by: Ash Berlin-Taylor <ash@apache.org>
airflow
9
Python
15
utils.py
def get_instance_with_map(task_instance, session): if task_instance.map_index == -1: return alchemy_to_dict(task_instance) mapped_instances = get_mapped_instances(task_instance, session) return get_mapped_summary(task_instance, mapped_instances)
bb26f96665567325a7fbb810249820e7dac0322a
35
https://github.com/apache/airflow.git
31
def get_instance_with_map(task_instance, session): if task_instance.map_index == -1: return alchemy_to_dict
8
54
get_instance_with_map
9
0
1
3
saleor/graphql/discount/mutations/voucher_create.py
27,823
Reorganise discount mutations (#10037) * reorganise discount mutations * remove commented imports * fixes after review * drop NodeIatalogueInfo
saleor
9
Python
9
voucher_create.py
def success_response(cls, instance): instance = ChannelContext(node=instance, channel_slug=None) return super().success_response(instance)
2a5e6795271fcec84228f86267f3127c9925a888
28
https://github.com/saleor/saleor.git
22
def success_response(cls, instance): instance = ChannelContext(node=insta
7
44
success_response
26
0
2
6
test/test_prototype_datasets_utils.py
192,861
simplify OnlineResource.load (#5990) * simplify OnlineResource.load * [PoC] merge mock data preparation and loading * Revert "cache mock data based on config" This reverts commit 5ed6eedef74865e0baa746a375d5ec1f0ab1bde7. * Revert "[PoC] merge mock data preparation and loading" This reverts commit d62747962f9ed6a7b0b80849e7c971efabb5d3da. * remove preprocess returning a new path in favor of querying twice * address test comments * clarify comment * mypy * use builtin decompress utility
vision
12
Python
22
test_prototype_datasets_utils.py
def test_load_folder(self, tmp_path): folder, files = self._make_folder(tmp_path) resource = self.DummyResource(file_name=folder.name) dp = resource.load(tmp_path) assert isinstance(dp, FileOpener) assert {path: buffer.read().decode() for path, buffer in dp} == files
b430ba684fb0d689427eaa44ba0b2c363e64f285
66
https://github.com/pytorch/vision.git
60
def test_load_folder(self, tmp_path): folder, files = self._make_folder(tmp_path) resource = self.DummyResource(file_name=folder.name)
18
103
test_load_folder
49
0
7
11
erpnext/accounts/report/gross_and_net_profit_report/gross_and_net_profit_report.py
65,261
style: format code with black
erpnext
13
Python
35
gross_and_net_profit_report.py
def adjust_account(data, period_list, consolidated=False): leaf_nodes = [item for item in data if item["is_group"] == 0] totals = {} for node in leaf_nodes: set_total(node, node["total"], data, totals) for d in data: for period in period_list: key = period if consolidated else period.key d[key] = totals[d["account"]] d["total"] = totals[d["account"]] return data
494bd9ef78313436f0424b918f200dab8fc7c20b
94
https://github.com/frappe/erpnext.git
38
def adjust_account(data, period_list, consolidated=False): leaf_nodes = [item for item in data
12
144
adjust_account
122
0
1
58
tests/sentry/search/events/test_builder.py
92,947
fix(snuba): Add appropriate `UseCaseKey` for indexer [TET-146] (#36308) * fix(snuba): Add appropriate `UseCaseKey` for indexer Update indexer invocation call to have the appropriate `UseCaseKey` depending on use case. In `src/sentry/sentry_metrics/indexer/base.py::StringIndexer` when using `resolve` and `reverse_resolve` callers should not rely on the default use_case_id. Important changes: - Add required parameter `use_case_id: UseCaseKey` to `get_series` from `src/sentry/snuba/metrics/datasource.py#L612`; - Add required parameter to `get_metrics` in `src/sentry/snuba/metrics/datasource.py` - Add required parameter to `get_tags` in `src/sentry/snuba/metrics/datasource.py` - Add required parameter to `get_tag_values` in `src/sentry/snuba/metrics/datasource.py`
sentry
12
Python
80
test_builder.py
def test_aggregate_query_with_multiple_entities_without_orderby(self): self.store_metric( 200, tags={"transaction": "baz_transaction"}, timestamp=self.start + datetime.timedelta(minutes=5), ) self.store_metric( 1, metric="user", tags={"transaction": "bar_transaction"}, timestamp=self.start + datetime.timedelta(minutes=5), ) self.store_metric( 1, metric="user", tags={"transaction": "baz_transaction"}, timestamp=self.start + datetime.timedelta(minutes=5), ) self.store_metric( 2, metric="user", tags={"transaction": "baz_transaction"}, timestamp=self.start + datetime.timedelta(minutes=5), ) # This will query both sets & distribution cause of selected columns query = MetricsQueryBuilder( self.params, # Filter by count_unique since the default primary is distributions without an orderby "count_unique(user):>1", dataset=Dataset.PerformanceMetrics, selected_columns=[ "transaction", "project", "p95(transaction.duration)", "count_unique(user)", ], allow_metric_aggregates=True, use_aggregate_conditions=True, ) result = query.run_query("test_query") assert len(result["data"]) == 1 assert result["data"][0] == { "transaction": indexer.resolve( self.organization.id, "baz_transaction", UseCaseKey.PERFORMANCE, ), "project": self.project.slug, "p95_transaction_duration": 200, "count_unique_user": 2, } self.assertCountEqual( result["meta"], [ {"name": "transaction", "type": "UInt64"}, {"name": "project", "type": "String"}, {"name": "p95_transaction_duration", "type": "Float64"}, {"name": "count_unique_user", "type": "UInt64"}, ], )
cd803d173c72b64d06c0687170bf9a945d0b503c
293
https://github.com/getsentry/sentry.git
746
def test_aggregate_query_with_multiple_entities_without_orderby(self): self.store_metric( 200, tags={"transaction": "baz_transaction"}, timestamp=self.start + datetime.timedelta(minutes=5), ) self.store_metric( 1, metric="user", tags={"transaction": "bar_transaction"}, timestamp=self.start + datetime.timedelta(minutes=5), ) self.store_metric( 1, metric="user", tags={"transaction": "baz_transaction"}, timestamp=self.start + datetime.timedelta(minutes=5), ) self.store_metric( 2, metric="user", tags={"transaction": "baz_transaction"}, timestamp=self.start + datetime.timedelta(minutes=5), ) # This will query both sets & distribution cause of selected columns query = MetricsQueryBuilder( self.params, # Filter by count_unique since the default primary is distributions without an orderby "count_unique(user):>1", dataset=Dataset.PerformanceMetrics, selected_columns=[ "transaction", "project", "p95(transaction.duration)", "count_unique(user)", ], allow_metric_aggregates=True, use_aggregate_conditions=True, ) res
31
496
test_aggregate_query_with_multiple_entities_without_orderby
96
0
3
30
tests/sentry/sentry_metrics/test_postgres_indexer.py
91,721
feat(metrics): make indexer more configurable (#35604) This makes the sentry_metrics indexer more configurable in the following ways, to enable indexing on the ingest-performance-metrics topic: - configurable input Kafka topic - configurable output Kafka topic - configurable model from which to pull index results - tags for internal metrics to distinguish between the two modes operationally
sentry
13
Python
57
test_postgres_indexer.py
def test_already_created_plus_written_results(self) -> None: org_id = 1234 v0 = StringIndexer.objects.create(organization_id=org_id, string="v1.2.0") v1 = StringIndexer.objects.create(organization_id=org_id, string="v1.2.1") v2 = StringIndexer.objects.create(organization_id=org_id, string="v1.2.2") expected_mapping = {"v1.2.0": v0.id, "v1.2.1": v1.id, "v1.2.2": v2.id} results = self.indexer.bulk_record( use_case_id=self.use_case_id, org_strings={org_id: {"v1.2.0", "v1.2.1", "v1.2.2"}} ) assert len(results[org_id]) == len(expected_mapping) == 3 for string, id in results[org_id].items(): assert expected_mapping[string] == id results = self.indexer.bulk_record( use_case_id=self.use_case_id, org_strings={org_id: {"v1.2.0", "v1.2.1", "v1.2.2", "v1.2.3"}}, ) v3 = StringIndexer.objects.get(organization_id=org_id, string="v1.2.3") expected_mapping["v1.2.3"] = v3.id assert len(results[org_id]) == len(expected_mapping) == 4 for string, id in results[org_id].items(): assert expected_mapping[string] == id fetch_meta = results.get_fetch_metadata() assert_fetch_type_for_tag_string_set( fetch_meta, FetchType.CACHE_HIT, {"v1.2.0", "v1.2.1", "v1.2.2"} ) assert_fetch_type_for_tag_string_set(fetch_meta, FetchType.FIRST_SEEN, {"v1.2.3"})
7f60db924ea37f34e0cfe6856777239e2a2ffe13
270
https://github.com/getsentry/sentry.git
302
def test_already_created_plus_written_results(self) -> None: org_id = 1234 v0 = StringIndexer.objects.create(organization_id=org_id, string="v1.2.0") v1 = StringIndexer.objects.create(organization_id=org_id, string="v1.2.1") v2 = StringIndexer.objects.create(organization_id=org_id, string="v1.2.2") expected_mapping = {"v1.2.0": v0.id, "v1.2.1": v1.id, "v1.2.2": v2.id} results = self.indexer.bulk_record( use_case_id=self.use_case_id, org_strings={org_id: {"v1.2.0", "v1.2.1", "v1.2.2"}} ) assert len(results[org_id]) == len(expected_mapping) == 3 for string, id in results[org_id].items(): assert expected_mapping[str
28
436
test_already_created_plus_written_results
30
0
1
7
pandas/tests/arrays/categorical/test_indexing.py
168,180
DEPR: inplace keyword for Categorical.set_ordered, setting .categories directly (#47834) * DEPR: inplcae keyword for Categorical.set_ordered, setting .categories directly * update docs * typo fixup * suppress warning Co-authored-by: Jeff Reback <jeff@reback.net>
pandas
11
Python
26
test_indexing.py
def test_categories_assignments(self): cat = Categorical(["a", "b", "c", "a"]) exp = np.array([1, 2, 3, 1], dtype=np.int64) with tm.assert_produces_warning(FutureWarning, match="Use rename_categories"): cat.categories = [1, 2, 3] tm.assert_numpy_array_equal(cat.__array__(), exp) tm.assert_index_equal(cat.categories, Index([1, 2, 3]))
46c615d43bd197fb4defdf6231929b58c0e50288
95
https://github.com/pandas-dev/pandas.git
75
def test_categories_assignments(self): cat = Categorical(["a", "b", "c", "a"]) exp = np.array([1, 2, 3, 1], dtype=np.int64) with tm.assert_produces_warning(FutureW
18
149
test_categories_assignments
46
1
1
16
tests/components/alexa/test_smart_home.py
298,363
Correct time stamp format in Alexa responses (#70267)
core
10
Python
37
test_smart_home.py
async def test_input_boolean(hass): device = ("input_boolean.test", "off", {"friendly_name": "Test input boolean"}) appliance = await discovery_test(device, hass) assert appliance["endpointId"] == "input_boolean#test" assert appliance["displayCategories"][0] == "OTHER" assert appliance["friendlyName"] == "Test input boolean" assert_endpoint_capabilities( appliance, "Alexa.PowerController", "Alexa.EndpointHealth", "Alexa" ) await assert_power_controller_works( "input_boolean#test", "input_boolean.turn_on", "input_boolean.turn_off", hass, "2022-04-19T07:53:05Z", ) @freeze_time("2022-04-19 07:53:05")
e45d4d53dd98ac23f138eed57d39bd46be8048fd
@freeze_time("2022-04-19 07:53:05")
76
https://github.com/home-assistant/core.git
117
async def test_input_boolean(hass): device = ("input_boolean.test", "off", {"friendly_name": "Test input boolean"})
8
156
test_input_boolean
82
0
1
12
tests/test_models/test_task_modules/test_prior_generators/test_anchor_generator.py
245,306
Refactor package
mmdetection
11
Python
41
test_anchor_generator.py
def test_strides(): from mmdet.models.task_modules.prior_generators import AnchorGenerator # Square strides self = AnchorGenerator([10], [1.], [1.], [10]) anchors = self.grid_anchors([(2, 2)], device='cpu') expected_anchors = torch.tensor([[-5., -5., 5., 5.], [5., -5., 15., 5.], [-5., 5., 5., 15.], [5., 5., 15., 15.]]) assert torch.equal(anchors[0], expected_anchors) # Different strides in x and y direction self = AnchorGenerator([(10, 20)], [1.], [1.], [10]) anchors = self.grid_anchors([(2, 2)], device='cpu') expected_anchors = torch.tensor([[-5., -5., 5., 5.], [5., -5., 15., 5.], [-5., 15., 5., 25.], [5., 15., 15., 25.]]) assert torch.equal(anchors[0], expected_anchors)
fa77be290460e84ce7da975831cb7e687a419177
258
https://github.com/open-mmlab/mmdetection.git
186
def test_strides(): from mmdet.models.task_modules.prior_generators import AnchorGenerator # Square strides self = AnchorGenerator([10], [1.], [1.], [10]) anchors = self.grid_anchors([(2, 2)], device='cpu') expected_anchors = torch.tensor([[-5., -5., 5., 5.], [5., -5., 15., 5.], [-5., 5., 5., 15.], [5., 5., 15., 15.]]) assert torch.equal(anchors[0], expected_anc
14
306
test_strides
19
1
2
4
tests/integration/external_deployment/test_external_deployment.py
13,223
feat: distributed replicas across different hosts (#5217)
jina
10
Python
18
test_external_deployment.py
def foo(self, docs, *args, **kwargs): for doc in docs: doc.tags['name'] = self.runtime_args.name doc.tags['uuid'] = self._id @pytest.mark.parametrize('num_shards', [1, 2], indirect=True)
82960f105149c478e4fc88e8b4fef8bbe2454429
@pytest.mark.parametrize('num_shards', [1, 2], indirect=True)
40
https://github.com/jina-ai/jina.git
46
def foo(self, docs, *args, **kwargs): for doc in docs: doc.tags['name'] = self.runtime_ar
14
92
foo
20
0
1
5
tests/test_text.py
161,708
Fix numerous typos in tests
rich
10
Python
17
test_text.py
def test_wrap_overflow_long(): text = Text("bigword" * 10) lines = text.wrap(Console(), 4, overflow="ellipsis") assert len(lines) == 1 assert lines[0] == Text("big…")
7975c563e19f04be4c39fd7f36bc3939e5ed9d84
45
https://github.com/Textualize/rich.git
31
def test_wrap_overflow_long(): text
8
77
test_wrap_overflow_long
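The wrap test above checks ellipsis overflow: text wider than the available width is cut short and terminated with "…". A hedged sketch of that truncation rule in plain Python (the helper name ellipsize is invented here and is not Rich's implementation):

def ellipsize(text: str, width: int) -> str:
    # Keep text that fits; otherwise cut it one character short and append an ellipsis.
    return text if len(text) <= width else text[: width - 1] + "…"

print(ellipsize("bigword" * 10, 4))  # big…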
104
0
1
17
parsing/dml_csr/networks/modules/ddgcn.py
9,084
Create ddgcn.py
insightface
11
Python
51
ddgcn.py
def forward(self, x): # b, c, h, w = x.size() node_k = self.node_k(x) node_v = self.node_v(x) node_q = self.node_q(x) b,c,h,w = node_k.size() node_k = node_k.view(b, c, -1).permute(0, 2, 1) node_q = node_q.view(b, c, -1) node_v = node_v.view(b, c, -1).permute(0, 2, 1) # A = k * q # AV = k * q * v # AVW = k *(q *v) * w AV = torch.bmm(node_q,node_v) AV = self.softmax(AV) AV = torch.bmm(node_k, AV) AV = AV.transpose(1, 2).contiguous() AVW = self.conv_wg(AV) AVW = self.bn_wg(AVW) AVW = AVW.view(b, c, h, -1) # out = F.relu_(self.out(AVW) + x) out = self.gamma * self.out(AVW) + x return out
4992c3d75bdbc3bfde8b49fa2b0f6694bfad9987
190
https://github.com/deepinsight/insightface.git
250
def forward(self, x): # b, c, h, w = x.size() node_k = self.node_k(x) node_v = self.node_v(x) node_q = self.node_q(x) b,c,h,w = node_k.size() node_k = node_k.view(b, c, -1).permute(0, 2, 1) node_q = node_q.view(b, c, -1) node_v = node_v.view(b, c, -1).permute(0, 2, 1) # A = k * q # AV = k * q * v # AVW = k *(q *v) * w AV = torch.bmm(node_q,node_v) AV = self.softmax(AV) AV = torch.bmm(node_k, AV) AV = AV.transpose(1, 2).contiguous() AVW = self.conv_wg(AV) AVW = self.bn_wg(AVW)
24
292
forward
85
1
3
31
erpnext/controllers/queries.py
69,412
test: added test case to validate searchfields for customer, supplier
erpnext
16
Python
69
queries.py
def customer_query(doctype, txt, searchfield, start, page_len, filters, as_dict=False): doctype = "Customer" conditions = [] cust_master_name = frappe.defaults.get_user_default("cust_master_name") fields = ["name"] if cust_master_name != "Customer Name": fields = ["customer_name"] fields = get_fields(doctype, fields) searchfields = frappe.get_meta(doctype).get_search_fields() searchfields = " or ".join(field + " like %(txt)s" for field in searchfields) return frappe.db.sql( .format( **{ "fields": ", ".join(fields), "scond": searchfields, "mcond": get_match_cond(doctype), "fcond": get_filters_cond(doctype, filters, conditions).replace("%", "%%"), } ), {"txt": "%%%s%%" % txt, "_txt": txt.replace("%", ""), "start": start, "page_len": page_len}, as_dict=as_dict, ) # searches for supplier @frappe.whitelist() @frappe.validate_and_sanitize_search_inputs
5f84993bae5df78e257cc2bfc41c123a1122a0b6
@frappe.whitelist() @frappe.validate_and_sanitize_search_inputs
171
https://github.com/frappe/erpnext.git
60
def customer_query(doctype, txt, searchfield, start, page_len, filters, as_dict=False): doctype = "Customer" conditions = [] cust_maste
28
311
customer_query
12
0
1
4
erpnext/maintenance/doctype/maintenance_visit/test_maintenance_visit.py
66,367
style: format code with black
erpnext
11
Python
11
test_maintenance_visit.py
def make_sales_person(name): sales_person = frappe.get_doc({"doctype": "Sales Person", "sales_person_name": name}) sales_person.insert(ignore_if_duplicate=True) return sales_person
494bd9ef78313436f0424b918f200dab8fc7c20b
31
https://github.com/frappe/erpnext.git
8
def make_sales_person(name): sales_person = frappe.get_doc({"doctype": "Sales Person", "sales_person_name": name}) sales_person.insert(ignore_if_d
7
55
make_sales_person
7
0
1
3
python3.10.4/Lib/importlib/_bootstrap_external.py
218,121
add python 3.10.4 for windows
XX-Net
6
Python
6
_bootstrap_external.py
def _set_bootstrap_module(_bootstrap_module): global _bootstrap _bootstrap = _bootstrap_module
8198943edd73a363c266633e1aa5b2a9e9c9f526
10
https://github.com/XX-net/XX-Net.git
12
def _set_bootstrap_module(_bootstrap_module): global _bootstrap
3
17
_set_bootstrap_module
251
0
13
47
erpnext/patches/v13_0/create_website_items.py
64,262
fix: (Linter) Write queries using QB/ORM and other minor lines for semgrep to skip
erpnext
15
Python
158
create_website_items.py
def execute(): frappe.reload_doc("e_commerce", "doctype", "website_item") frappe.reload_doc("e_commerce", "doctype", "website_item_tabbed_section") frappe.reload_doc("e_commerce", "doctype", "website_offer") frappe.reload_doc("e_commerce", "doctype", "recommended_items") frappe.reload_doc("e_commerce", "doctype", "e_commerce_settings") frappe.reload_doc("stock", "doctype", "item") item_fields = ["item_code", "item_name", "item_group", "stock_uom", "brand", "image", "has_variants", "variant_of", "description", "weightage"] web_fields_to_map = ["route", "slideshow", "website_image_alt", "website_warehouse", "web_long_description", "website_content", "thumbnail"] # get all valid columns (fields) from Item master DB schema item_table_fields = frappe.db.sql("desc `tabItem`", as_dict=1) # nosemgrep item_table_fields = [d.get('Field') for d in item_table_fields] # prepare fields to query from Item, check if the web field exists in Item master web_query_fields = [] for web_field in web_fields_to_map: if web_field in item_table_fields: web_query_fields.append(web_field) item_fields.append(web_field) # check if the filter fields exist in Item master or_filters = {} for field in ["show_in_website", "show_variant_in_website"]: if field in item_table_fields: or_filters[field] = 1 if not web_query_fields or not or_filters: # web fields to map are not present in Item master schema # most likely a fresh installation that doesnt need this patch return items = frappe.db.get_all( "Item", fields=item_fields, or_filters=or_filters ) total_count = len(items) for count, item in enumerate(items, start=1): if frappe.db.exists("Website Item", {"item_code": item.item_code}): continue # make new website item from item (publish item) website_item = make_website_item(item, save=False) website_item.ranking = item.get("weightage") for field in web_fields_to_map: website_item.update({field: item.get(field)}) website_item.save() # move Website Item Group & Website Specification table to Website Item for doctype in ("Website Item Group", "Item Website Specification"): frappe.db.set_value( doctype, {"parenttype": "Item", "parent": item.item_code}, # filters {"parenttype": "Website Item", "parent": website_item.name} # value dict ) if count % 20 == 0: # commit after every 20 items frappe.db.commit() frappe.utils.update_progress_bar('Creating Website Items', count, total_count)
4b62d2d7fe08ab9b36b533419ecb38d0aa5a3ab1
359
https://github.com/frappe/erpnext.git
197
def execute(): frappe.reload_doc("e_commerce", "doctype", "website_item") frappe.reload_doc("e_commerce", "doctype", "website_item_tabbed_section") frappe.reload_doc("e_commerce", "doctype", "website_offer") frappe.reload_doc("e_commerce", "doctype", "recommended_items") frappe.reload_doc("e_commerce", "doctype", "e_commerce_settings") frappe.reload_doc("stock", "doctype", "item") item_fields = ["item_code", "item_name", "item_group", "stock_uom", "brand", "image", "has_variants", "variant_of", "description", "weightage"] web_fields_to_map = ["route", "slideshow", "website_image_alt", "website_warehouse", "web_long_description", "website_content", "thumbnail"] # get all valid columns (fields) from Item master DB schema item_table_fields = frappe.db.sql("desc `tabItem`", as_dict=1) # nosemgrep item_table_fields = [d.get('Field') for d in item_table_fields] # prepare fields to query from Item, check if the web field exists in Item master web_query_fields = [] for web_field in web_fields_to_map: if web_field in item_table_fields: web_query_fields.append(web_field) item_fields.append(web_field) # check if the filter fields exist in Item master or_filters = {} for field in ["show_in_website", "show_variant_in_website"]: if field in item_table_fields:
38
640
execute
97
0
6
25
ludwig/features/text_feature.py
7,606
Encoder refactor V2 (#2370) * Added base files and some initial code * More files created, fleshing out binary feature and corresponding encoders * Added more schema infra * Registered all feature encoders * Separated feature utils infra * Added all preprocessing classes * Filled out rest of schema configs * Fixed preproc dataclass * Fixed small errors blocking import * Tests should be passing * Deleted unnecesssary files and removed commented out code * fixed flake8 * Fixed most tests * fixed pattern validation * Fixed missing val strategies and solved custom encoder update issue * Removed preprocessing from features due to schema SSOT * fix flake 8 * Started encoder schema work * Parallel CNN Encoder * StackedCNN Encoder * Added image encoders * Finished sequence encoders * Partway through text encoders * Added text encoders * Bag Encoders * Binary and Date Encoders * category, date, h3, and set encoders * Wired up encoder schemas * Switched input feature encoder schema definitions * Fixed handful of issues * Fix schema issues * Refactored a bunch of test configs * Small changes * Removed default param from register_encoder * Schema working now, working on refactoring * Finished decoder schemas * Removed default param from register_decoder * Added some default params to output features and more decoder work * Refactored all input feature encoder/decoder referencing * Refactored pretty much all the tests * Added back constants * Solved gbm issue * Fixed save_load test * various fixes * Fixed import issue * Flake 8 and various fixes * Solved more failed tests * Refactored missed tests * Removed commented lines * Added init file for decoders schema * Fixed failing tests * Fixed hyperopt shared params test * Added backwards compatability logic and test * Flake 8 * removed comment * Added base files and some initial code * More files created, fleshing out binary feature and corresponding encoders * Added more schema infra * Registered all feature encoders * Separated feature utils infra * Added all preprocessing classes * Filled out rest of schema configs * Fixed preproc dataclass * Fixed small errors blocking import * Tests should be passing * Deleted unnecesssary files and removed commented out code * fixed flake8 * Fixed most tests * fixed pattern validation * Fixed missing val strategies and solved custom encoder update issue * Removed preprocessing from features due to schema SSOT * fix flake 8 * Started encoder schema work * Parallel CNN Encoder * StackedCNN Encoder * Added image encoders * Finished sequence encoders * Partway through text encoders * Added text encoders * Bag Encoders * Binary and Date Encoders * category, date, h3, and set encoders * Wired up encoder schemas * Switched input feature encoder schema definitions * Fixed handful of issues * Fix schema issues * Refactored a bunch of test configs * Small changes * Removed default param from register_encoder * Schema working now, working on refactoring * Finished decoder schemas * Removed default param from register_decoder * Added some default params to output features and more decoder work * Refactored all input feature encoder/decoder referencing * Refactored pretty much all the tests * Added back constants * Solved gbm issue * Fixed save_load test * various fixes * Fixed import issue * Flake 8 and various fixes * Solved more failed tests * Refactored missed tests * Removed commented lines * Added init file for decoders schema * Fixed failing tests * Fixed hyperopt shared params test * Added backwards compatability logic 
and test * Flake 8 * removed comment * Skipping CTRL Encoder test since it's blasting memory * Fixed audio_feature test * Addressed failing tests * Fixed backwards compatability * Fixed more failing tests * Flake 8 * Fixed more tests * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Refactored default logic for all features * Fixed H3 weighted_sum encoder wrong type * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix import issue * Mark slow HF tests * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed defaults tests * Pin Ray nightly version * fix link * pin torch to 07/26 * cleanup * upgrade ray pinned version to enable parquet partition filtering * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * downgrade Ray to ensure TensorDtypes are not inferred during Ray Dataset <=> Dask conversions * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Removed custom encoder decoder helper method * unpin torch * Flake 8 * Daniel feedback * Small fixes * Fixed default weights init * Added test with encoder dependencies for global defaults * Fixed Arnav's test * Addressed Arnav's feedback * Address nit * Addressed feedback * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Address nit * Fix test * Initial feedback refactor * More refactoring * Added vocab field to all text_encoder configs * More refactoring * Fixed more tests * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix audio feature test, also s/logging/logger. * param names should start with lowercase s/N/n * Re-added schema utils used in encoder refactor. * Removes unused overwrite_defaults() * Oops, name is passed to feature as a kwarg not a member of the feature config. Why? Probably should change that. * Change lowercase default back to True. Fixes test_strings_utils * Set feature validation error with output size 1. * MLP mixer encoder needs num_channels. * Use schema.dump instead of .__dict__ to convert marshmallow dataclass to dict * (x,) in python is a tuple with a single element x. Watch out for this when defining schemas. * Construct features by using build_single_input/output to share code for deserializing feature configs. Also changes ECD to BaseModel, IMO its confusing to import ECD to use a class method from BaseModel. * Fix test_trainer_utils, adds convenience method BaseFeature.load_from_dictionary * Use feature load_from_dictionary instead of BaseModel in feature tests. * Populate encoder and decoder types in shared test fixtures, fixes error expectations in test_validate_config_combiner.py * Fixes test_validate_config_misc.py by ensuring only one option of OneOf allows None, because OneOf fails validation if more than one condition match. * Updates test_defaults.py * Adds type, column, proc_column to feature schemas. Revert feature tests by passing in config dict again. * decorate feature base classes with @dataclass, fixes failure building input features in trainer. * Implement _serialize for PreprocessingDataclassField. * use type(feature) to get schema class. * Fix test_trainer_utils.py * audio_feature requires embedding_size, but passthrough encoder does not have this property. 
Technically, passthrough encoder is not supported for audio features. * Wow, apparently the order of elements in the oneOf affects which error message we get from jsonschema. * Get default encoders from feature schema. * Get encoder defaults from schema in config_utils.py * Make number feature allow decoders without clip property * s/list/List * Adds reduce_output to h3 encoder. * Moves decoder params into nested decoder. * Update processing parameters with computed_fill_value. * Removes test code. * Adds input_size to decoder base because some features assume decoders have an input_size * dense encoder not supported for bag features, changed to embed. * Adds input_size param to dense encoder schema, since its a required parameter of dense encoder. * Fixes vector feature input_size in encoder metadata. * Fixes test reducers, set sequence reduce mode in output feature base. * Don't nest encoder parameters in decoder * Fixes test_torchscript, get num_classes from encoder config. * Audio feature padding is float, not int. * Adds temp check for threshold to fix GBM tests. * Adds missing value strategy drop_row for vector feature in test. * Drop row should work even if computed_fill_value is an empty string * Removes duplicated TOP_K constant. * Consolidated set_default_values * Removes commented-out defaults. * Remove load_config from OutputFeature, it isn't doing anything here. * Removes comment. * Fix type annotations for input/output feature constructors. * Fixes output feature dependencies being ignored. * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Adds test for construction of output features with dependencies. * Encoder/Decoder config now lives on encoder/decoder object * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixes decoder params to match their respective classes. Moves fc_stack params and threshold back to output feature. * Make clip property of number output feature again. * Adds threshold property to set feature schema, use this property instead of storing it in the decoder. * input_size in output_feature instead of decoder. * Made vector_size property of vector_feature. * Fixed gbm tests * Fixed flake 8 * Re-adds num_classes as member of category output feature. * Makes vocab_size match vocab used in preprocessing. * num_classes in CategoryOutputFeature. * Moves num_classes from decoder to category output feature. * Fixes test_model_training_options. Copies fc_layer keys into decoder if they are present on output features. * Adds field descriptors for fc_layers params in BaseOutputFeatureConfig. Co-authored-by: connor-mccorm <connor@predibase.com> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: connor-mccorm <97468934+connor-mccorm@users.noreply.github.com> Co-authored-by: Geoffrey Angus <geoffrey@predibase.com> Co-authored-by: Arnav Garg <arnav@predibase.com> Co-authored-by: Daniel Treiman <daniel@predibase.com>
ludwig
17
Python
73
text_feature.py
def update_config_with_metadata(output_feature, feature_metadata, *args, **kwargs): output_feature[DECODER]["vocab_size"] = feature_metadata["vocab_size"] output_feature[DECODER]["max_sequence_length"] = feature_metadata["max_sequence_length"] if isinstance(output_feature[LOSS]["class_weights"], (list, tuple)): # [0, 0] for UNK and PAD output_feature[LOSS]["class_weights"] = [0, 0] + output_feature[LOSS]["class_weights"] if len(output_feature[LOSS]["class_weights"]) != output_feature[DECODER]["vocab_size"]: raise ValueError( "The length of class_weights ({}) is not compatible with " "the number of classes ({})".format( len(output_feature[LOSS]["class_weights"]), output_feature[DECODER]["vocab_size"] ) ) if output_feature[LOSS]["class_similarities_temperature"] > 0: if "class_similarities" in output_feature: distances = output_feature["class_similarities"] temperature = output_feature[LOSS]["class_similarities_temperature"] for i in range(len(distances)): distances[i, :] = softmax(distances[i, :], temperature=temperature) output_feature[LOSS]["class_similarities"] = distances else: raise ValueError( "class_similarities_temperature > 0," "but no class similarities are provided " "for feature {}".format(output_feature[COLUMN]) )
03b4ab273abd7e22a56bb550b56f3d667200abf9
212
https://github.com/ludwig-ai/ludwig.git
455
def update_config_with_metadata(output_feature, feature_metadata, *args, **kwargs): output_feature[DECODER]["vocab_size"] = feature_metadata["vocab_size"] output_feature[DECODER]["max_sequence_length"] = feature_metadata["max_sequence_length"] if isinstance(output_feature[LOSS]["class_weights"], (list, tuple)): # [0, 0] for UNK and PAD output_feature[LOSS]["class_weights"] = [0, 0] + output_feature[LOSS]["class_weights"] if len(output_feature[LOSS]["class_weights"]) != output_feature[DECODER]["vocab_size"]: raise ValueError( "The length of class_weights ({}) is not compatible with " "the number of classes ({})".format( len(output_feature[LOSS]["class_weights"]), output_feature[DECODER]["vocab_size"] ) ) if output_feature[LOSS]["class_similarities_temperature"] > 0: if "class_similarities" in output_feature: distances = output_feature["class_similarities"] temperature = output_feature[LOSS]["class_similarities_temperature"]
19
352
update_config_with_metadata
56
1
1
10
tests/integration/test_lock.py
19,434
missed these tests because they run only on earlier python versions.
pipenv
12
Python
41
test_lock.py
def test_outdated_setuptools_with_pep517_legacy_build_meta_is_updated(PipenvInstance): with PipenvInstance(chdir=True) as p: c = p.pipenv('run pip install "setuptools<=40.2"') assert c.returncode == 0 c = p.pipenv("run python -c 'import setuptools; print(setuptools.__version__)'") assert c.returncode == 0 assert c.stdout.splitlines()[1] == "40.2.0" c = p.pipenv("install legacy-backend-package") assert c.returncode == 0 assert "vistir" in p.lockfile["default"] @pytest.mark.lock @pytest.mark.install @pytest.mark.skip_windows @pytest.mark.skipif(sys.version_info >= (3, 9), reason="old setuptools doesn't work") @pytest.mark.needs_internet
5a151615aa47901f7c44e5b543fe2e2b0f6e9d24
@pytest.mark.lock @pytest.mark.install @pytest.mark.skip_windows @pytest.mark.skipif(sys.version_info >= (3, 9), reason="old setuptools doesn't work") @pytest.mark.needs_internet
80
https://github.com/pypa/pipenv.git
113
def test_outdated_setuptools_with_pep517_legacy_build_meta_is_updated(PipenvInstance): with PipenvInstance(chdir=True) as p: c = p.pipenv('run pip install "setuptools<=40.2"') assert c.returncode == 0 c = p.pipenv("run python -c 'import setuptools; print(setuptools.__version__)'") assert c.returncode == 0 assert c.stdout.splitlines()[1] == "40.2.0"
20
212
test_outdated_setuptools_with_pep517_legacy_build_meta_is_updated
85
0
5
25
mitmproxy/connection.py
252,213
add multi proxy mode This commit makes it possible for mitmproxy to spawn multiple TCP/UDP proxy servers at the same time, see https://github.com/mitmproxy/mitmproxy/discussions/5288
mitmproxy
12
Python
60
connection.py
def set_state(self, state): self.peername = tuple(state["address"]) if state["address"] else None self.alpn = state["alpn"] self.cipher = state["cipher_name"] self.id = state["id"] self.sni = state["sni"] self.timestamp_end = state["timestamp_end"] self.timestamp_start = state["timestamp_start"] self.timestamp_tls_setup = state["timestamp_tls_setup"] self.tls_version = state["tls_version"] # only used in sans-io self.state = ConnectionState(state["state"]) self.sockname = tuple(state["sockname"]) if state["sockname"] else None self.error = state["error"] self.tls = state["tls"] self.certificate_list = [ certs.Cert.from_state(x) for x in state["certificate_list"] ] self.mitmcert = ( certs.Cert.from_state(state["mitmcert"]) if state["mitmcert"] is not None else None ) self.alpn_offers = state["alpn_offers"] self.cipher_list = state["cipher_list"] self.proxy_mode = mode_specs.ProxyMode.from_state(state["proxy_mode"])
83e543c3e66654b952f1979c0adaa62df91b2832
213
https://github.com/mitmproxy/mitmproxy.git
275
def set_state(self, state): self.peername = tuple(state["address"]) if state["address"] else None self.alpn = state["alpn"] self.cipher = state["cipher_name"] self.id = state["id"] self.sni = state["sni"] self.timestamp_end = state["timestamp_end"] self.timestamp_start = state["timestamp_start"] self.timestamp_tls_setup = state["timestamp_tls_setup"] self.tls_version = state["tls_version"] # only used in sans-io self.state = ConnectionState(state["state"]) self.soc
28
360
set_state
89
1
3
24
dask/dataframe/io/tests/test_parquet.py
156,356
Change `to_parquet` default to `write_metadata_file=None` (#8988) * Refactor to_parquet A bit of refactoring before changing the default of `write_metadata_file` to `None` in `to_parquet`. - Simplify implementation - Don't include file metadata in `write_partition` calls if it's not needed - Everything needed to support implementing `write_metadata_file=None` as default *except* changing the value (to ensure tests pass). * Fixup failing parquet tests Most of the failures are due to divisions not being known by default anymore, since they're only known by default if a `_metadata` file is present. * Respond to feedback
dask
16
Python
68
test_parquet.py
def test_local(tmpdir, write_engine, read_engine, has_metadata): tmp = str(tmpdir) data = pd.DataFrame( { "i32": np.arange(1000, dtype=np.int32), "i64": np.arange(1000, dtype=np.int64), "f": np.arange(1000, dtype=np.float64), "bhello": np.random.choice(["hello", "yo", "people"], size=1000).astype( "O" ), } ) df = dd.from_pandas(data, chunksize=500) kwargs = {"write_metadata_file": True} if has_metadata else {} df.to_parquet(tmp, write_index=False, engine=write_engine, **kwargs) files = os.listdir(tmp) assert ("_common_metadata" in files) == has_metadata assert ("_metadata" in files) == has_metadata assert "part.0.parquet" in files df2 = dd.read_parquet(tmp, index=False, engine=read_engine) assert len(df2.divisions) > 1 out = df2.compute(scheduler="sync").reset_index() for column in df.columns: assert (data[column] == out[column]).all() @pytest.mark.parametrize("index", [False, True]) @write_read_engines_xfail
00572071d15e7e8cfc20d8342b00aabadf0d2102
@pytest.mark.parametrize("index", [False, True]) @write_read_engines_xfail
228
https://github.com/dask/dask.git
219
def test_local(tmpdir, write_engine, read_engine, has_metadata): tmp = str(tmpdir) data = pd.DataFrame( { "i32": np.arange(1000, dtype=np.int32), "i64": np.arange(1000, dtype=np.int64), "f": np.arange(1000, dtype=np.float64), "bhello": np.random.choice(["hello", "yo", "people"], size=1000).astype( "O" ), } ) df = dd.from_pandas(data, chunksize=500) kwargs = {"write_metadata_file": True} if has_metada
47
389
test_local
28
0
1
2
jax/_src/lax/lax.py
119,827
Revert previous change PiperOrigin-RevId: 435397906
jax
9
Python
25
lax.py
def _top_k_translation_rule(ctx, avals_in, avals_out, x, *, k): return xla.xla_destructure(ctx.builder, xops.TopK(x, k)) top_k_p = Primitive('top_k') top_k_p.multiple_results = True top_k_p.def_impl(partial(xla.apply_primitive, top_k_p)) top_k_p.def_abstract_eval(_top_k_abstract_eval) xla.register_translation(top_k_p, _top_k_translation_rule) ad.primitive_jvps[top_k_p] = _top_k_jvp batching.primitive_batchers[top_k_p] = _top_k_batch_rule
c3a4a6e63da11246611247feac7ff4c00750ae21
33
https://github.com/google/jax.git
21
def _top_k_translation_rule(ctx, avals_in, avals_out, x, *, k): return xla.xla_destructure(ctx.builder, xops.TopK(x, k)) top_k_p = Primitive('top_k') top_k_p.multiple_results = True top_k_p.def_impl(partial(xla.apply_primi
26
132
_top_k_translation_rule
32
0
3
9
pandas/tests/io/json/test_pandas.py
170,125
DEP: Enforce numpy keyword deprecation in read_json (#49083)
pandas
11
Python
25
test_pandas.py
def test_series_roundtrip_simple(self, orient, string_series): data = string_series.to_json(orient=orient) result = read_json(data, typ="series", orient=orient) expected = string_series if orient in ("values", "records"): expected = expected.reset_index(drop=True) if orient != "split": expected.name = None tm.assert_series_equal(result, expected)
2410fca2c62898fb29659d5b93273a65515d695b
73
https://github.com/pandas-dev/pandas.git
95
def test_series_roundtrip_simple(self, orient, string_series): data = string_series.to_json(orient=orient) result = read_json(data, typ="series", orient=orient) expected = string_series if orient in ("values", "records"):
15
119
test_series_roundtrip_simple
23
1
1
14
tests/openbb_terminal/stocks/options/test_yfinance_view.py
283,635
Updating some names (#1575) * quick econ fix * black * keys and feature flags * terminal name :eyes: * some more replacements * some more replacements * edit pyproject * gst -> openbb * add example portfolios back to git * Update api from gst * sorry. skipping some tests * another round of names * another round of test edits * Missed some .gst refs and update timezone * water mark stuff * Fixing Names in terminal.spec and name of GTFF_DEFAULTS to OBBFF_DEFAULTS * fix more GST to OpenBB Terminal * Logging : merge conflicts with main * Revert wrong files Co-authored-by: Andrew <andrew.kenreich@gmail.com> Co-authored-by: DidierRLopes <dro.lopes@campus.fct.unl.pt> Co-authored-by: Chavithra PARANA <chavithra@gmail.com>
OpenBBTerminal
9
Python
20
test_yfinance_view.py
def test_show_parity(mocker): # MOCK CHARTS mocker.patch( target="openbb_terminal.stocks.options.yfinance_view.theme.visualize_output" ) # MOCK EXPORT_DATA mocker.patch(target="openbb_terminal.stocks.options.yfinance_view.export_data") yfinance_view.show_parity( ticker="PM", exp="2022-01-07", put=True, ask=True, mini=0.0, maxi=100.0, export="csv", ) @pytest.mark.default_cassette("test_risk_neutral_vals") @pytest.mark.vcr
b71abcfbf4d7e8ac1855522aff0378e13c8b5362
@pytest.mark.default_cassette("test_risk_neutral_vals") @pytest.mark.vcr
58
https://github.com/OpenBB-finance/OpenBBTerminal.git
97
def test_show_parity(mocker): # MOCK CHARTS mocker.patch( target="openbb_terminal.stocks.options.yfinance_view.theme.visualize_output" ) # MOCK EXPORT_DATA mocker.patch(target="openbb_terminal.stocks.options.yfinance_view.export_data") yfinance_view.show_
17
117
test_show_parity
79
0
2
19
src/transformers/models/layoutlmv3/configuration_layoutlmv3.py
33,411
Add image height and width to ONNX dynamic axes (#18915)
transformers
15
Python
44
configuration_layoutlmv3.py
def inputs(self) -> Mapping[str, Mapping[int, str]]: # The order of inputs is different for question answering and sequence classification if self.task in ["question-answering", "sequence-classification"]: return OrderedDict( [ ("input_ids", {0: "batch", 1: "sequence"}), ("attention_mask", {0: "batch", 1: "sequence"}), ("bbox", {0: "batch", 1: "sequence"}), ("pixel_values", {0: "batch", 1: "num_channels", 2: "height", 3: "width"}), ] ) else: return OrderedDict( [ ("input_ids", {0: "batch", 1: "sequence"}), ("bbox", {0: "batch", 1: "sequence"}), ("attention_mask", {0: "batch", 1: "sequence"}), ("pixel_values", {0: "batch", 1: "num_channels"}), ] )
6519150c315bdcd415bbd115cec11e839f3eb866
162
https://github.com/huggingface/transformers.git
355
def inputs(self) -> Mapping[str, Mapping[int, str]]: # The order of inputs is different for question answering and sequence classification if self.task in ["question-answering", "sequence-classification"]: return OrderedDict( [ ("input_ids", {0: "batch", 1: "sequence"}), ("attention_mask", {0: "batch", 1: "sequence"}), ("bbox", {0: "batch", 1: "
7
275
inputs
13
0
1
49
erpnext/accounts/report/asset_depreciations_and_balances/asset_depreciations_and_balances.py
65,150
style: format code with black
erpnext
10
Python
13
asset_depreciations_and_balances.py
def get_assets(filters): return frappe.db.sql( , {"to_date": filters.to_date, "from_date": filters.from_date, "company": filters.company}, as_dict=1, )
494bd9ef78313436f0424b918f200dab8fc7c20b
39
https://github.com/frappe/erpnext.git
7
def get_assets(filters): return frappe.db.sql( , {"to_date": filters.to_date, "from_date": filters.from_date, "company": filters.company}, as_dict=
9
64
get_assets
26
0
5
7
src/datasets/table.py
106,162
Sharded save_to_disk + multiprocessing (#5268) * add num_shards, num_proc, storage_options to save_to_disk * minor * add tests * remove old s3fs integreation tests * style * style * Update DatasetDict.save_to_disk * test dataset dict * update dataset dict load_from_disk * minor * update test * update docs * backport to_reader to pyarrow < 8 * typo * support both max_shard_size and num_shards * style * docstrings * test _estimate_nbytes * add test for num_shards * style * mario's comment * add config.PBAR_REFRESH_TIME_INTERVAL * fix docstrings * use kwargs_iterable in iflatmap_unordered * fix tests
datasets
16
Python
22
table.py
def __iter__(self): for batch in self.table._batches: if self.max_chunksize is None or len(batch) <= self.max_chunksize: yield batch else: for offset in range(0, len(batch), self.max_chunksize): yield batch.slice(offset, self.max_chunksize)
232a43943e87dfedcc328a9a3d3b4d89ea5c6627
62
https://github.com/huggingface/datasets.git
103
def __iter__(self): for batch in self.table._batches: if self.max_chunksize is None or len(batch) <= self.max_chunksize: yield batch else: for offset in range(0,
10
96
__iter__
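The __iter__ above walks record batches and slices any batch longer than max_chunksize into fixed-size pieces by stepping an offset. The same offset-stepping pattern on a plain list, as a sketch (the helper name iter_chunks is made up for illustration):

def iter_chunks(seq, max_chunksize):
    # Yield successive slices of seq, each at most max_chunksize elements long.
    for offset in range(0, len(seq), max_chunksize):
        yield seq[offset:offset + max_chunksize]

print(list(iter_chunks(list(range(7)), 3)))  # [[0, 1, 2], [3, 4, 5], [6]]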
28
1
1
9
saleor/graphql/checkout/tests/test_checkout_promo_codes.py
27,658
Unify checkout mutations/resolvers to use id field. (#9862) * Unify checkout mutations/resolvers to use id field. * Update changelog * Remove uneeded " " in mutation's field description
saleor
10
Python
24
test_checkout_promo_codes.py
def test_checkout_add_voucher_code_by_token(api_client, checkout_with_item, voucher): variables = { "id": to_global_id_or_none(checkout_with_item), "promoCode": voucher.code, } data = _mutate_checkout_add_promo_code(api_client, variables) assert not data["errors"] assert data["checkout"]["token"] == str(checkout_with_item.token) assert data["checkout"]["voucherCode"] == voucher.code @mock.patch("saleor.plugins.webhook.tasks.send_webhook_request_sync")
3673e7e11f22e5a695c708b7a594c11857a93898
@mock.patch("saleor.plugins.webhook.tasks.send_webhook_request_sync")
67
https://github.com/saleor/saleor.git
58
def test_checkout_add_voucher_code_by_token(api_client, checkout_with_item, voucher): variables = { "id": to_global_id_or_none(checkout_with_item), "promoCode": voucher.code, } data = _mutate_checkout_add_promo_code(api_client, variables) assert not data["errors"] assert data["checkout"]["token"] ==
13
126
test_checkout_add_voucher_code_by_token
6
0
1
2
tests/models/nllb/test_tokenization_nllb.py
32,211
NLLB tokenizer (#18126) * NLLB tokenizer * Apply suggestions from code review - Thanks Stefan! Co-authored-by: Stefan Schweter <stefan@schweter.it> * Final touches * Style :) * Update docs/source/en/model_doc/nllb.mdx Co-authored-by: Stefan Schweter <stefan@schweter.it> * Apply suggestions from code review Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * PR reviews * Auto models Co-authored-by: Stefan Schweter <stefan@schweter.it> Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
transformers
11
Python
6
test_tokenization_nllb.py
def test_mask_token(self): self.assertListEqual(self.tokenizer.convert_tokens_to_ids(["<mask>", "ar_AR"]), [256203, 3])
c1c79b06550b587b2a975016ef9d18b53258025b
28
https://github.com/huggingface/transformers.git
12
def test_mask_token(self): self.assertListEqual(self.tokeni
5
46
test_mask_token
6
1
1
2
keras/backend.py
269,532
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
keras
7
Python
6
backend.py
def shape(x): return tf.shape(x) @keras_export("keras.backend.int_shape") @doc_controls.do_not_generate_docs
84afc5193d38057e2e2badf9c889ea87d80d8fbf
@keras_export("keras.backend.int_shape") @doc_controls.do_not_generate_docs
13
https://github.com/keras-team/keras.git
10
def shape(x): return tf.shape(x) @keras_export("keras.backend.int_shape"
6
41
shape
16
0
1
3
tests/components/shelly/test_utils.py
291,322
Fix Shelly gen2 channel name (#82655) * Fix Shelly gen2 channel name * Review comment
core
9
Python
13
test_utils.py
async def test_get_rpc_channel_name(mock_rpc_device): assert get_rpc_channel_name(mock_rpc_device, "input:0") == "test switch_0" assert get_rpc_channel_name(mock_rpc_device, "input:3") == "Test name switch_3"
815dfe9134db71b9e182fa7ac974393aaf6910d5
24
https://github.com/home-assistant/core.git
25
async def test_get_rpc_channel_name(mock_rpc_device): assert get_rpc_channel_name(mock_rpc_device, "input:0") == "test switch_0" assert get_rpc_channel_name(mock_rp
3
48
test_get_rpc_channel_name
81
0
1
24
wagtail/snippets/tests/test_snippets.py
78,404
Add tests for snippets with DraftStateMixin enabled
wagtail
14
Python
55
test_snippets.py
def test_publish(self): timestamp = now() with freeze_time(timestamp): response = self.post( post_data={ "text": "Draft-enabled Foo, Published", "action-publish": "action-publish", } ) snippet = DraftStateModel.objects.get(text="Draft-enabled Foo, Published") self.assertRedirects( response, reverse("wagtailsnippets_tests_draftstatemodel:list") ) # The instance should be created self.assertEqual(snippet.text, "Draft-enabled Foo, Published") # The instance should be live self.assertTrue(snippet.live) self.assertFalse(snippet.has_unpublished_changes) self.assertEqual(snippet.first_published_at, timestamp) self.assertEqual(snippet.last_published_at, timestamp) # A revision should be created and set as both latest_revision and live_revision self.assertIsNotNone(snippet.live_revision) self.assertEqual(snippet.live_revision, snippet.latest_revision) # The revision content should contain the new data self.assertEqual( snippet.live_revision.content["text"], "Draft-enabled Foo, Published", )
ab5a3390e363907369b572dce2b6defaea1a2370
140
https://github.com/wagtail/wagtail.git
329
def test_publish(self): timestamp = now()
26
241
test_publish
7
0
1
3
python3.10.4/Lib/asyncio/trsock.py
220,896
add python 3.10.4 for windows
XX-Net
8
Python
7
trsock.py
def __enter__(self): self._na('context manager protocol') return self._sock.__enter__()
8198943edd73a363c266633e1aa5b2a9e9c9f526
19
https://github.com/XX-net/XX-Net.git
20
def __enter__(self): self._na('context manager protocol')
4
34
__enter__
124
0
3
41
keras/metrics/metrics.py
274,753
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
keras
17
Python
74
metrics.py
def interpolate_pr_auc(self): dtp = ( self.true_positives[: self.num_thresholds - 1] - self.true_positives[1:] ) p = tf.math.add(self.true_positives, self.false_positives) dp = p[: self.num_thresholds - 1] - p[1:] prec_slope = tf.math.divide_no_nan( dtp, tf.maximum(dp, 0), name="prec_slope" ) intercept = self.true_positives[1:] - tf.multiply(prec_slope, p[1:]) safe_p_ratio = tf.where( tf.logical_and(p[: self.num_thresholds - 1] > 0, p[1:] > 0), tf.math.divide_no_nan( p[: self.num_thresholds - 1], tf.maximum(p[1:], 0), name="recall_relative_ratio", ), tf.ones_like(p[1:]), ) pr_auc_increment = tf.math.divide_no_nan( prec_slope * (dtp + intercept * tf.math.log(safe_p_ratio)), tf.maximum(self.true_positives[1:] + self.false_negatives[1:], 0), name="pr_auc_increment", ) if self.multi_label: by_label_auc = tf.reduce_sum( pr_auc_increment, name=self.name + "_by_label", axis=0 ) if self.label_weights is None: # Evenly weighted average of the label AUCs. return tf.reduce_mean(by_label_auc, name=self.name) else: # Weighted average of the label AUCs. return tf.math.divide_no_nan( tf.reduce_sum( tf.multiply(by_label_auc, self.label_weights) ), tf.reduce_sum(self.label_weights), name=self.name, ) else: return tf.reduce_sum(pr_auc_increment, name="interpolate_pr_auc")
84afc5193d38057e2e2badf9c889ea87d80d8fbf
337
https://github.com/keras-team/keras.git
621
def interpolate_pr_auc(self): dtp = ( self.true_positives[: self.num_thresholds - 1] - self.true_positives[1:] ) p = tf.math.add(self.true_positives, self.false_positives) dp = p[: self.num_thresholds - 1] - p[1:] prec_slope = tf.math.divide_no_nan( dtp, tf.maximum(dp, 0), name="prec_slope" ) intercept = self.true_positives[1:] - tf.multiply(prec_slope, p[1:]) safe_p_ratio = tf.where( tf.logical_and(p[: self.num_thresholds - 1] > 0, p[1:] > 0), tf.math.divide_no_nan( p[: self.num_thresholds - 1], tf.maximum(p[1:], 0), name="recall_relative_ratio", ), tf.ones_like(p[1:]), ) pr_auc_increment = tf.math.divide_no_nan( prec_slope * (dtp + intercept * tf.math.log(safe_p_ratio)), tf.maximum(self.true_positives[1:] + self.false_negatives[1:], 0), name="pr_auc_increment", ) if self.multi_label: by_label_auc = tf.reduce_sum( pr_auc_increment, name=self.name + "_by_label", axis=0 ) if self.label_weights is None: # Evenly weighted average of the label AUCs. return tf.reduce_mean(b
30
515
interpolate_pr_auc
25
0
3
6
erpnext/loan_management/doctype/loan_balance_adjustment/loan_balance_adjustment.py
69,030
feat: add adjustment amount to loan - fix: bugs in loan balance adjustment
erpnext
11
Python
17
loan_balance_adjustment.py
def get_values_on_cancel(self, loan_details): if self.adjustment_type == "Credit Adjustment": adjustment_amount = loan_details.adjustment_amount - self.amount elif self.adjustment_type == "Debit Adjustment": adjustment_amount = loan_details.adjustment_amount + self.amount return adjustment_amount
5c0a25012c602ed0d47136468e3b0bee11ddf5dd
41
https://github.com/frappe/erpnext.git
67
def get_values_on_cancel(self, loan_details): if self.adjustment_
6
68
get_values_on_cancel
20
0
3
8
ludwig/data/dataset_synthesizer.py
7,812
Adds registry to organize backward compatibility updates around versions and config sections (#2335) * First pass implementation of VersionTransformation * Remove test main. * Refactors backward_compatibility.py to use version registration system * Changed sort order to process outer first. * Moves test_deprecated_field_aliases from test_defaults.py to test_backward_compatibility.py * s/prefix/prefixes in test_version_transformation.py * Removes comment, print statements. * Adds docstrings. * typo fix. * Removes unused import. * Small cleanup to backward_compatibility.py, removed redundant keys. * Assume version 0.4 if no version present in the config. * Updates dataset synthesis to work with nested encoder/decoders. * Fixes test_server.py * nesting image feature params in test_ray * _get_feature_encoder_or_decoder in generate_category. * oops, forgot random.choice. Co-authored-by: Daniel Treiman <daniel@predibase.com>
ludwig
10
Python
14
dataset_synthesizer.py
def _get_feature_encoder_or_decoder(feature): if DECODER in feature: return feature[DECODER] elif ENCODER in feature: return feature[ENCODER] else: feature[ENCODER] = {} return feature[ENCODER]
60197fe851aadfa51d18c16dd42b49f728ed7eaa
40
https://github.com/ludwig-ai/ludwig.git
60
def _get_feature_encoder_or_decoder(feature): if DECODER in feature: return feature[DECODER] elif ENCODER in feature: return feature[ENCODER] else:
4
65
_get_feature_encoder_or_decoder
58
0
1
16
onnx/backend/test/case/node/gatherelements.py
254,766
Use Python type annotations rather than comments (#3962) * These have been supported since Python 3.5. ONNX doesn't support Python < 3.6, so we can use the annotations. Diffs generated by https://pypi.org/project/com2ann/. Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * Remove MYPY conditional logic in gen_proto.py It breaks the type annotations and shouldn't be needed. Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * Get rid of MYPY bool from more scripts Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * move Descriptors class above where its referenced in type annotation Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * fixes Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * remove extra blank line Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * fix type annotations Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * fix type annotation in gen_docs Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * fix Operators.md Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * fix TestCoverage.md Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * fix protoc-gen-mypy.py Signed-off-by: Gary Miguel <garymiguel@microsoft.com>
onnx
12
Python
46
gatherelements.py
def export_gather_elements_1() -> None: axis = 0 node = onnx.helper.make_node( 'GatherElements', inputs=['data', 'indices'], outputs=['y'], axis=axis, ) data = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.float32) indices = np.array([[1, 2, 0], [2, 0, 0]], dtype=np.int32) y = gather_elements(data, indices, axis) # print(y) produces # [[4, 8, 3], # [7, 2, 3]] expect(node, inputs=[data, indices.astype(np.int64)], outputs=[y], name='test_gather_elements_1')
83fa57c74edfd13ddac9548b8a12f9e3e2ed05bd
145
https://github.com/onnx/onnx.git
261
def export_gather_elements_1() -> None: axis = 0 node = onnx.helper.make_node( 'GatherElements', inputs=['data', 'indices'], outputs=['y'], axis=axis, ) data = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.float32) indices = np.array([[1, 2, 0], [2, 0, 0]], dtype=np.int32) y = gather_elements(data, indices, axis) # print(y) produces # [[4, 8, 3], # [7, 2, 3]] expect(node, inputs=[data, indices.astype(np.int64)], outputs=[y], name='test
21
213
export_gather_elements_1
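The GatherElements case above selects data[indices[i, j], j] when axis=0. NumPy's take_along_axis has the same semantics, so it can reproduce the expected output shown in the record; this is a verification sketch, not part of the ONNX test itself:

import numpy as np

data = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.float32)
indices = np.array([[1, 2, 0], [2, 0, 0]], dtype=np.int64)

# For axis=0, each output element is data[indices[i, j], j].
print(np.take_along_axis(data, indices, axis=0))
# [[4. 8. 3.]
#  [7. 2. 3.]]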
12
0
2
6
src/transformers/models/videomae/modeling_videomae.py
32,768
Add VideoMAE (#17821) * First draft * Add VideoMAEForVideoClassification * Improve conversion script * Add VideoMAEForPreTraining * Add VideoMAEFeatureExtractor * Improve VideoMAEFeatureExtractor * Improve docs * Add first draft of model tests * Improve VideoMAEForPreTraining * Fix base_model_prefix * Make model take pixel_values of shape (B, T, C, H, W) * Add loss computation of VideoMAEForPreTraining * Improve tests * Improve model testsé * Make all tests pass * Add VideoMAE to main README * Add tests for VideoMAEFeatureExtractor * Add integration test * Improve conversion script * Rename patch embedding class * Remove VideoMAELayer from init * Update design of patch embeddings * Improve comments * Improve conversion script * Improve conversion script * Add conversion of pretrained model * Add loss verification of pretrained model * Add loss verification of unnormalized targets * Add integration test for pretraining model * Apply suggestions from code review * Fix bug to make feature extractor resize only shorter edge * Address more comments * Improve normalization of videos * Add doc examples * Move constants to dedicated script * Remove scripts * Transfer checkpoints, fix docs * Update script * Update image mean and std * Fix doc tests * Set return_tensors to NumPy by default * Revert the previous change Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
transformers
6
Python
12
modeling_videomae.py
def get_sinusoid_encoding_table(n_position, d_hid): # TODO: make it with torch instead of numpy
f9a0008d2d3082a665f711b24f5314e4a8205fab
86
https://github.com/huggingface/transformers.git
18
def get_sinusoid_encoding_table(n_position, d_hid): # TODO: make it with torch instead of numpy
3
16
get_sinusoid_encoding_table
16
0
3
3
test/lib/ansible_test/_internal/commands/sanity/pylint.py
267,930
ansible-test - Use more native type hints. (#78435) * ansible-test - Use more native type hints. Simple search and replace to switch from comments to native type hints for return types of functions with no arguments. * ansible-test - Use more native type hints. Conversion of simple single-line function annotation type comments to native type hints. * ansible-test - Use more native type hints. Conversion of single-line function annotation type comments with default values to native type hints. * ansible-test - Use more native type hints. Manual conversion of type annotation comments for functions which have pylint directives.
ansible
11
Python
16
pylint.py
def supported_python_versions(self) -> t.Optional[t.Tuple[str, ...]]: return tuple(version for version in CONTROLLER_PYTHON_VERSIONS if str_to_version(version) < (3, 11))
3eb0485dd92c88cc92152d3656d94492db44b183
40
https://github.com/ansible/ansible.git
30
def supported_python_versions(self) -> t.Optional[t.Tuple[str, ...]]: return tuple(
10
61
supported_python_versions
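The ansible-test commit above converts comment-style annotations to native Python type hints, as seen in the supported_python_versions property. A before/after sketch on a throwaway function (names invented for illustration; both forms behave identically at runtime):

# Before: return type carried in a type comment, visible only to type checkers.
def version_comment_style():  # type: () -> str
    return "1.0"

# After: the same signature with a native return annotation.
def version_native_style() -> str:
    return "1.0"

print(version_comment_style(), version_native_style())  # 1.0 1.0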
113
0
4
29
seaborn/_marks/basic.py
41,030
Update Line and Bar marks with some of the new patterns
seaborn
14
Python
86
basic.py
def _plot_split(self, keys, data, ax, kws): # TODO Not backcompat with allowed (but nonfunctional) univariate plots # (That should be solved upstream by defaulting to "" for unset x/y?) # (Be mindful of xmin/xmax, etc!) kws = kws.copy() markers = self._resolve(data, "marker") fill = self._resolve(data, "fill") fill & np.array([m.is_filled() for m in markers]) edgecolors = self._resolve_color(data) facecolors = self._resolve_color(data, "fill") facecolors[~fill, 3] = 0 linewidths = self._resolve(data, "linewidth") pointsize = self._resolve(data, "pointsize") paths = [] path_cache = {} for m in markers: if m not in path_cache: path_cache[m] = m.get_path().transformed(m.get_transform()) paths.append(path_cache[m]) sizes = pointsize ** 2 offsets = data[["x", "y"]].to_numpy() points = mpl.collections.PathCollection( paths=paths, sizes=sizes, offsets=offsets, facecolors=facecolors, edgecolors=edgecolors, linewidths=linewidths, transOffset=ax.transData, transform=mpl.transforms.IdentityTransform(), ) ax.add_collection(points)
fa680c4226e618710014fd18a756c4f98daef956
226
https://github.com/mwaskom/seaborn.git
377
def _plot_split(self, keys, data, ax, kws): # TODO Not backcompat with allowed (but nonfunctional) univariate plots # (That should be solved upstream by defaulting to "" for unset x/y?) # (Be mindful of xmin/xmax, etc!) kws = kws.copy() markers = self._resolve(data, "marker") fill = self._resolve(data, "fill") fill & np.array([m.is_filled() for m in markers]) edgecolors = self._resolve_color(data) facecolors = self._resolve_color(data, "fill") facecolors[~fill, 3] = 0 linewidths = self._resolve(data, "linewidth") pointsize = self._resolve(data, "pointsize") paths = [] path_cache = {} for m in markers: if m not in path_cache: path_cache[m] = m.get_path().transformed(m.get_transform()) paths.append(path_cache[m]) sizes = pointsize ** 2 offsets = data[["x", "y"]].to_numpy() points = mpl.collections.PathCollection( paths=paths, sizes=sizes, offsets=offsets, facecolors=facecolors, edgecolors=edgecolors, linewidths=linewidths, transOffset=ax.transData, transform=mpl.transforms.Ident
38
357
_plot_split
20
0
1
4
seaborn/tests/_core/test_scales.py
40,907
Thoroughly update scaling logic and internal API
seaborn
12
Python
19
test_scales.py
def test_convert_categories(self, scale): x = pd.Series(pd.Categorical(["a", "b", "c"], ["b", "a", "c"])) s = CategoricalScale(scale, None, format).setup(x) assert_series_equal(s.convert(x), pd.Series([1., 0., 2.]))
6f3077f12b7837106ba0a79740fbfd547628291b
74
https://github.com/mwaskom/seaborn.git
40
def test_convert_categories(self, scale): x = pd.Series(pd.Categorical(["a", "b", "c"], ["b", "a", "c"])) s = CategoricalScale(scale, None, format).setup(x) assert_series_equal(s.convert(x), pd.Series([1., 0., 2.]))
13
114
test_convert_categories
29
0
1
13
test/mitmproxy/proxy/layers/test_websocket.py
251,944
make it black!
mitmproxy
12
Python
20
test_websocket.py
def test_drop_message(ws_testdata): tctx, playbook, flow = ws_testdata assert ( playbook << websocket.WebsocketStartHook(flow) >> reply() >> DataReceived(tctx.server, b"\x81\x03foo") << websocket.WebsocketMessageHook(flow) ) flow.websocket.messages[-1].drop() playbook >> reply() playbook << None assert playbook
b3587b52b25077f68116b9852b041d33e7fc6601
73
https://github.com/mitmproxy/mitmproxy.git
84
def test_drop_message(ws_testdata): tctx, playbook, flow = ws_testdata assert ( playbook << websocket.WebsocketStartHook(flow) >> reply() >> DataReceived(tctx.server, b"\x81\x03foo") << websocket.WebsocketMessageHook(flow) ) flow.websocket.messages[-1].drop() playbook >> reply() playbook << None assert playbook
13
109
test_drop_message
10
0
2
10
cms/tests/test_page_admin.py
82,413
ci: Added codespell (#7355) Co-authored-by: Christian Clauss <cclauss@me.com> * ci: codespell config taken from #7292
django-cms
9
Python
8
test_page_admin.py
def _parse_page_tree(self, response, parser_class): content = response.content content = content.decode(response.charset)
c1290c9ff89cb00caa5469129fd527e9d82cd820
66
https://github.com/django-cms/django-cms.git
23
def _parse_page_tree(self, response, parser_class): content = response.content content
7
37
_parse_page_tree
49
0
6
13
erpnext/accounts/doctype/pricing_rule/utils.py
64,045
chore: undo unnecessary changes
erpnext
20
Python
40
utils.py
def sorted_by_priority(pricing_rules, args, doc=None): # If more than one pricing rules, then sort by priority pricing_rules_list = [] pricing_rule_dict = {} for pricing_rule in pricing_rules: pricing_rule = filter_pricing_rules(args, pricing_rule, doc) if pricing_rule: if not pricing_rule.get('priority'): pricing_rule['priority'] = 1 if pricing_rule.get('apply_multiple_pricing_rules'): pricing_rule_dict.setdefault(cint(pricing_rule.get("priority")), []).append(pricing_rule) for key in sorted(pricing_rule_dict): pricing_rules_list.extend(pricing_rule_dict.get(key)) return pricing_rules_list
3da2cac772b0557e15ddf4ee9673381b0d98bca1
103
https://github.com/frappe/erpnext.git
35
def sorted_by_priority(pricing_rules, args, doc=None): # If more than one pricing rules, then sort by
15
170
sorted_by_priority
15
0
1
9
tests/test_css_parse.py
181,989
Separate parsing of scalar, number, duration
textual
9
Python
13
test_css_parse.py
def test_parse_text_foreground(): css = stylesheet = Stylesheet() stylesheet.parse(css) styles = stylesheet.rules[0].styles assert styles.text_color == Color.parse("green")
fd47ef491b7700a4414d85bf573f1e719cfae555
39
https://github.com/Textualize/textual.git
30
def test_parse_text_foreground(): css = stylesheet = Stylesheet() stylesheet.parse(css) styles = stylesheet.rules[0].styles assert styles.text_color == Color.parse("green")
9
68
test_parse_text_foreground
48
0
3
11
seaborn/tests/_core/test_data.py
40,466
Add tests for many Plot behaviors
seaborn
14
Python
38
test_data.py
def test_concat_all_operations(self, long_df): v1 = {"x": "x", "y": "y", "hue": "a"} v2 = {"y": "s", "size": "s", "hue": None} p1 = PlotData(long_df, v1) p2 = p1.concat(None, v2) for var, key in v2.items(): if key is None: assert var not in p2 else: assert p2.names[var] == key assert_vector_equal(p2.frame[var], long_df[key])
399f9b6aeef04623e03613c584bcb0c615d3cb01
101
https://github.com/mwaskom/seaborn.git
149
def test_concat_all_operations(self, long_df): v1 = {"x": "x", "y": "y", "hue": "a"} v2 = {"y": "s", "size": "s", "hue": None} p1 = PlotData(long_df, v1) p2 =
15
171
test_concat_all_operations
51
1
1
4
tests/utilities/test_importtools.py
58,233
Add tests for `import_object`
prefect
11
Python
39
test_importtools.py
def test_import_object_from_script_with_relative_imports(script_path): # Remove shared_libs if it exists from a prior test or the module can be cached sys.modules.pop("shared_libs", None) foobar = import_object(f"{script_path}:foobar") assert foobar() == "foobar" @pytest.mark.parametrize( "script_path", [ TEST_PROJECTS_DIR / "nested-project" / "explicit_relative.py", TEST_PROJECTS_DIR / "tree-project" / "imports" / "explicit_relative.py", TEST_PROJECTS_DIR / "tree-project" / "imports" / "implicit_relative.py", ], )
1cf2a8463d93bed3e445ebebf089ac4872fbd34c
@pytest.mark.parametrize( "script_path", [ TEST_PROJECTS_DIR / "nested-project" / "explicit_relative.py", TEST_PROJECTS_DIR / "tree-project" / "imports" / "explicit_relative.py", TEST_PROJECTS_DIR / "tree-project" / "imports" / "implicit_relative.py", ], )
28
https://github.com/PrefectHQ/prefect.git
90
def test_import_object_from_script_with_relative_imports(script_path): # Remove shared_libs if it exists from a prior test or the module can be cached sys.modules.pop("shared_libs", None) foobar = import_object(f"{script_path}:foobar") assert foobar() == "foobar" @pytest.mark.parametrize( "script_path", [ TEST_PROJECTS_DIR / "nested-
11
123
test_import_object_from_script_with_relative_imports
14
0
1
2
onnx/test/automatic_upgrade_test.py
255,270
Use Python type annotations rather than comments (#3962) * These have been supported since Python 3.5. ONNX doesn't support Python < 3.6, so we can use the annotations. Diffs generated by https://pypi.org/project/com2ann/. Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * Remove MYPY conditional logic in gen_proto.py It breaks the type annotations and shouldn't be needed. Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * Get rid of MYPY bool from more scripts Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * move Descriptors class above where its referenced in type annotation Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * fixes Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * remove extra blank line Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * fix type annotations Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * fix type annotation in gen_docs Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * fix Operators.md Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * fix TestCoverage.md Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * fix protoc-gen-mypy.py Signed-off-by: Gary Miguel <garymiguel@microsoft.com>
onnx
11
Python
13
automatic_upgrade_test.py
def test_Div(self) -> None:
    self._test_op_upgrade('Div', 1, [[3, 4, 5], [3, 1, 5]], attrs={'consumed_inputs': [0]})
83fa57c74edfd13ddac9548b8a12f9e3e2ed05bd
43
https://github.com/onnx/onnx.git
20
def test_Div(self) -> None: self._test_op_upgrade('Div', 1, [[3, 4, 5], [3, 1, 5]], attrs={'consumed_inputs': [0]})
4
63
test_Div
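The commit message above describes migrating PEP 484 type comments to inline annotations with com2ann. A minimal sketch of that rewrite, using a hypothetical scale helper rather than actual ONNX code:

# Before: a PEP 484 type comment, the style being removed.
def scale_with_comment(values, factor):
    # type: (list, float) -> list
    return [v * factor for v in values]

# After: inline annotations, the style com2ann produces.
def scale(values: list, factor: float) -> list:
    return [v * factor for v in values]

assert scale([1, 2, 3], 2.0) == scale_with_comment([1, 2, 3], 2.0) == [2.0, 4.0, 6.0]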
16
0
1
9
tests/components/laundrify/conftest.py
301,146
Add laundrify integration (#65090) * First version of laundrify integration * Code cleanup * Code cleanup after review #2 * Move coordinator to its own file * Save devices as dict and implement available prop as fn * Validate token on init, abort if already configured * Some more cleanup after review * Add strict type hints * Minor changes after code review * Remove OptionsFlow (use default poll interval instead) * Fix CODEOWNERS to pass hassfest job * Fix formatting to pass prettier job * Fix mypy typing error * Update internal device property after fetching data * Call parental update handler and remove obsolete code * Add coordinator tests and fix some config flow tests * Refactor tests * Refactor fixtures * Device unavailable if polling fails
core
16
Python
15
conftest.py
def laundrify_api_fixture(laundrify_exchange_code, laundrify_validate_token):
    with patch(
        "laundrify_aio.LaundrifyAPI.get_account_id",
        return_value=VALID_ACCOUNT_ID,
    ), patch(
        "laundrify_aio.LaundrifyAPI.get_machines",
        return_value=json.loads(load_fixture("laundrify/machines.json")),
    ) as get_machines_mock:
        yield get_machines_mock
abf9aab18f9a6953b49c4f8aee1ca7e560911e36
41
https://github.com/home-assistant/core.git
63
def laundrify_api_fixture(laundrify_exchange_code, laundrify_validate_token): with patch( "laundrify_aio.LaundrifyAPI.get_account_id", return_value=VALID_ACCOUNT_ID, ), patch( "laundrify_aio.LaundrifyAPI.get_machine
10
74
laundrify_api_fixture
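The fixture above keeps two laundrify_aio calls patched for the duration of each test. A self-contained sketch of the same patch-as-fixture pattern, using a hypothetical target (time.time) instead of the laundrify API:

from unittest.mock import patch

import pytest


@pytest.fixture
def fake_clock():
    # The patch stays active for the whole test, then is undone automatically.
    with patch("time.time", return_value=1_000_000.0) as mock_time:
        yield mock_time


def test_uses_fake_clock(fake_clock):
    import time

    assert time.time() == 1_000_000.0
    assert fake_clock.called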
105
0
9
38
mindsdb/api/mysql/mysql_proxy/classes/sql_query.py
114,034
test fixes
mindsdb
21
Python
75
sql_query.py
def _process_query(self, sql):
    # self.query = parse_sql(sql, dialect='mindsdb')

    integrations_names = self.datahub.get_datasources_names()
    integrations_names.append('information_schema')
    integrations_names.append('files')
    integrations_names.append('views')

    all_tables = get_all_tables(self.query)

    predictor_metadata = {}
    predictors = db.session.query(db.Predictor).filter_by(company_id=self.session.company_id)
    for model_name in set(all_tables):
        for p in predictors:
            if p.name == model_name:
                if isinstance(p.data, dict) and 'error' not in p.data:
                    ts_settings = p.learn_args.get('timeseries_settings', {})
                    if ts_settings.get('is_timeseries') is True:
                        window = ts_settings.get('window')
                        order_by = ts_settings.get('order_by')[0]
                        group_by = ts_settings.get('group_by')
                        if isinstance(group_by, list) is False and group_by is not None:
                            group_by = [group_by]
                        predictor_metadata[model_name] = {
                            'timeseries': True,
                            'window': window,
                            'horizon': ts_settings.get('horizon'),
                            'order_by_column': order_by,
                            'group_by_columns': group_by
                        }
                    else:
                        predictor_metadata[model_name] = {
                            'timeseries': False
                        }
                    self.model_types.update(p.data.get('dtypes', {}))

    self.planner = query_planner.QueryPlanner(
        self.query,
        integrations=integrations_names,
        predictor_namespace=self.mindsdb_database_name,
        predictor_metadata=predictor_metadata,
        default_namespace=self.database
    )
dc7949207fbf3c63b2ba30b68a84b2ee7f2b5e80
269
https://github.com/mindsdb/mindsdb.git
806
def _process_query(self, sql): # self.query = parse_sql(sql, dialect='mindsdb') integrations_names = self.datahub.get_datasources_names() integrations_names.append('information_schema') integrations_names.append('files') integrations_names.append('views') all_tables = get_all_tables(self.query) predictor_metadata = {} predictors = db.session.query(db.Predictor).filter_by(company_id=self.session.company_id) for model_name in set(all_tables): for p in predictors: if p.name == model_name: if isinstance(p.data, dict) and 'error' not in p.data: ts_settings = p.learn_args.get('timeseries_settings', {}) if ts_settings.get('is_timeseries') is True: window = ts_settings.get('window') order_by = ts_settings.get('order_by')[0] group_by = ts_settings.get('group_by') if isinstance(group_by, list) is False and group_by is not None: group_by = [group_by] predictor_metadata[model_name] = { 'timeseries': True, 'window': window, 'horizon': ts_settings.get('horizon'), 'order_by_column': order_by, 'group_by_columns': group_by } else: predictor_metadata[model_name] = { 'timeseries': False
41
446
_process_query
8
0
1
3
homeassistant/components/zwave_js/addon.py
289,758
Refactor zwave_js add-on manager (#80883) * Make addon slug an instance attribute * Extract addon name and addon config * Update docstrings
core
9
Python
8
addon.py
async def async_restart_addon(self) -> None:
    await async_restart_addon(self._hass, self.addon_slug)
838691f22f27852a05313809cdf9c51094ad3798
19
https://github.com/home-assistant/core.git
22
async def async_restart_addon(self) -> None:
4
34
async_restart_addon
131
0
2
12
src/diffusers/schedulers/scheduling_ddpm.py
335,088
rename image to sample in schedulers
diffusers
12
Python
68
scheduling_ddpm.py
def step(self, residual, sample, t):
    # 1. compute alphas, betas
    alpha_prod_t = self.get_alpha_prod(t)
    alpha_prod_t_prev = self.get_alpha_prod(t - 1)
    beta_prod_t = 1 - alpha_prod_t
    beta_prod_t_prev = 1 - alpha_prod_t_prev

    # 2. compute predicted original sample from predicted noise also called
    # "predicted x_0" of formula (15) from https://arxiv.org/pdf/2006.11239.pdf
    pred_original_sample = (sample - beta_prod_t ** (0.5) * residual) / alpha_prod_t ** (0.5)

    # 3. Clip "predicted x_0"
    if self.clip_predicted_sample:
        pred_original_sample = self.clip(pred_original_sample, -1, 1)

    # 4. Compute coefficients for pred_original_sample x_0 and current sample x_t
    # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
    pred_original_sample_coeff = (alpha_prod_t_prev ** (0.5) * self.get_beta(t)) / beta_prod_t
    current_sample_coeff = self.get_alpha(t) ** (0.5) * beta_prod_t_prev / beta_prod_t

    # 5. Compute predicted previous sample µ_t
    # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
    pred_prev_sample = pred_original_sample_coeff * pred_original_sample + current_sample_coeff * sample

    return pred_prev_sample
dcb23b2d7299708442aee5b4dbf23ee6df363f2c
129
https://github.com/huggingface/diffusers.git
267
def step(self, residual, sample, t): # 1. compute alphas, betas alpha_prod_t = self.get_alpha_prod(t) alpha_prod_t_prev = self.get_alpha_prod(t - 1) beta_prod_t = 1 - alpha_prod_t beta_prod_t_prev = 1 - alpha_prod_t_prev # 2. compute predicted original sample from predicted noise also called # "predicted x_0" of formula (15) from https://arxiv.org/pdf/2006.11239.pdf pred_original_sample = (sample - beta_prod_t ** (0.5) * residual) / alpha_prod_t ** (0.5) # 3. Clip "predicted x_0" if self.clip_predicted_sample: pred_original_sample = self.clip(pred_original_sample, -1, 1) # 4. Compute coefficients for pred_original_sample x_0 and current sample x_t
18
194
step
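A scalar sketch of the coefficient arithmetic in step, using a made-up linear beta schedule purely for illustration; the real scheduler builds its own schedule and operates on tensors:

import numpy as np

betas = np.linspace(1e-4, 0.02, 1000)   # assumed toy schedule, not the scheduler's own
alphas = 1.0 - betas
alphas_cumprod = np.cumprod(alphas)

t = 500
alpha_prod_t = alphas_cumprod[t]
alpha_prod_t_prev = alphas_cumprod[t - 1]
beta_prod_t = 1 - alpha_prod_t
beta_prod_t_prev = 1 - alpha_prod_t_prev

# Weights applied to "predicted x_0" and the current sample x_t,
# mirroring formula (7) of the DDPM paper as used in the record above.
pred_original_sample_coeff = (alpha_prod_t_prev ** 0.5 * betas[t]) / beta_prod_t
current_sample_coeff = alphas[t] ** 0.5 * beta_prod_t_prev / beta_prod_t

print(pred_original_sample_coeff, current_sample_coeff)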
61
0
1
38
tests/sentry/event_manager/test_event_manager.py
94,331
test(event_manager): Fix incorrect invocations of manager.save (#36615)
sentry
22
Python
44
test_event_manager.py
def test_culprit_after_stacktrace_processing(self):
    from sentry.grouping.enhancer import Enhancements

    enhancement = Enhancements.from_config_string(
        ,
    )

    manager = EventManager(
        make_event(
            platform="native",
            exception={
                "values": [
                    {
                        "type": "Hello",
                        "stacktrace": {
                            "frames": [
                                {
                                    "function": "not_in_app_function",
                                },
                                {
                                    "function": "in_app_function",
                                },
                            ]
                        },
                    }
                ]
            },
        )
    )
    manager.normalize()
    manager.get_data()["grouping_config"] = {
        "enhancements": enhancement.dumps(),
        "id": "legacy:2019-03-12",
    }
    event1 = manager.save(self.project.id)
    assert event1.transaction is None
    assert event1.culprit == "in_app_function"
39cfdcb446e74732c67ce07d7dd8d8d5ace471b1
124
https://github.com/getsentry/sentry.git
682
def test_culprit_after_stacktrace_processing(self): from sentry.grouping.enhancer import Enhancements enhancement = Enhancements.from_config_string(
22
218
test_culprit_after_stacktrace_processing
113
0
2
23
tests/sentry/utils/performance_issues/test_consecutive_db_detector.py
89,897
(refactor) consecutive db detector tests to assert on detector (#42444) Part of PERF-1847 Refactor consecutive db tests to assert on detector output vs tags Move tests to its own file
sentry
12
Python
51
test_consecutive_db_detector.py
def test_does_not_detect_consecutive_db_spans_with_parameterized_query(self):
    span_duration = 750
    spans = [
        create_span(
            "db",
            span_duration,
            "SELECT m.* FROM authors a INNER JOIN books b ON a.book_id = b.id AND b.another_id = 'another_id_123' ORDER BY b.created_at DESC LIMIT 3",
        ),
        create_span(
            "db",
            span_duration,
            "SELECT m.* FROM authors a INNER JOIN books b ON a.book_id = b.id AND b.another_id = 'another_id_456' ORDER BY b.created_at DESC LIMIT 3",
        ),
        create_span(
            "db",
            span_duration,
            "SELECT m.* FROM authors a INNER JOIN books b ON a.book_id = b.id AND b.another_id = 'another_id_789' ORDER BY b.created_at DESC LIMIT 3",
        ),
    ]
    spans = [modify_span_start(span, span_duration * spans.index(span)) for span in spans]
    event = create_event(spans, "a" * 16)

    problems = self.find_consecutive_db_problems(event)

    assert problems == []
9fbdf75bdb6a55803da351f2ef881d86d7fdc9a0
86
https://github.com/getsentry/sentry.git
362
def test_does_not_detect_consecutive_db_spans_with_parameterized_query(self): span_duration = 750 spans = [ create_span( "db", span_duration, "SELECT m.* FROM authors a INNER JOIN books b ON a.book_id = b.id AND b.another_id = 'another_id_123' ORDER BY b.created_at DESC LIMIT 3", ), create_span( "db", span_duration, "SELECT m.* FROM authors a INNER JOIN books b ON a.
12
138
test_does_not_detect_consecutive_db_spans_with_parameterized_query
85
0
9
31
homeassistant/components/command_line/sensor.py
313,244
Improve code quality command_line (#65333)
core
17
Python
59
sensor.py
def update(self) -> None:
    self.data.update()
    value = self.data.value

    if self._json_attributes:
        self._attr_extra_state_attributes = {}
        if value:
            try:
                json_dict = json.loads(value)
                if isinstance(json_dict, Mapping):
                    self._attr_extra_state_attributes = {
                        k: json_dict[k]
                        for k in self._json_attributes
                        if k in json_dict
                    }
                else:
                    _LOGGER.warning("JSON result was not a dictionary")
            except ValueError:
                _LOGGER.warning("Unable to parse output as JSON: %s", value)
        else:
            _LOGGER.warning("Empty reply found when expecting JSON data")

    if value is None:
        value = STATE_UNKNOWN
    elif self._value_template is not None:
        self._attr_native_value = (
            self._value_template.render_with_possible_json_value(
                value, STATE_UNKNOWN
            )
        )
    else:
        self._attr_native_value = value
3771c154fa0ea8e0b49d41ece55a7a18c444ee6a
142
https://github.com/home-assistant/core.git
531
def update(self) -> None: self.data.update() value = self.data.value if self._json_attributes: self._attr_extra_state_attributes = {} if value: try: json_dict = json.loads(value) if isinstance(json_dict, Mapping): self._attr_extra_state_attributes = { k: json_dict[k] for k in self._json_attributes if k in json_dict } else: _LOGGER.warning("JSON result was not a dictionary") except ValueError: _LOGGER.warning("Unable to parse ou
19
235
update
191
0
15
73
nuitka/plugins/standard/DllFiles.py
178,782
Standalone: Added support for including DLL of 'vosk' package.
Nuitka
17
Python
126
DllFiles.py
def getExtraDlls(self, module):
    full_name = module.getFullName()

    # Checking for config, but also allowing fall through for cases that have to
    # have some code still here.
    config = self.config.get(full_name)
    if config:
        for dll_entry_point in self._handleDllConfigs(
            config=config, full_name=full_name
        ):
            yield dll_entry_point

    # TODO: This is legacy code, ideally moved to yaml config over time.
    if full_name == "uuid" and isLinux():
        uuid_dll_path = self.locateDLL("uuid")

        if uuid_dll_path is not None:
            yield self.makeDllEntryPoint(
                uuid_dll_path, os.path.basename(uuid_dll_path), None
            )
    elif full_name == "iptc" and isLinux():
        import iptc.util  # pylint: disable=I0021,import-error

        xtwrapper_dll = iptc.util.find_library("xtwrapper")[0]
        xtwrapper_dll_path = xtwrapper_dll._name  # pylint: disable=protected-access

        yield self.makeDllEntryPoint(
            xtwrapper_dll_path, os.path.basename(xtwrapper_dll_path), None
        )
    elif full_name == "coincurve._libsecp256k1" and isWin32Windows():
        yield self.makeDllEntryPoint(
            os.path.join(module.getCompileTimeDirectory(), "libsecp256k1.dll"),
            os.path.join(full_name.getPackageName(), "libsecp256k1.dll"),
            full_name.getPackageName(),
        )
    # TODO: This should be its own plugin.
    elif (
        full_name
        in (
            "pythoncom",
            "win32api",
            "win32clipboard",
            "win32console",
            "win32cred",
            "win32crypt",
            "win32event",
            "win32evtlog",
            "win32file",
            "win32gui",
            "win32help",
            "win32inet",
            "win32job",
            "win32lz",
            "win32net",
            "win32pdh",
            "win32pipe",
            "win32print",
            "win32process",
            "win32profile",
            "win32ras",
            "win32security",
            "win32service",
            "win32trace",
            "win32transaction",
            "win32ts",
            "win32wnet",
        )
        and isWin32Windows()
    ):
        pywin_dir = getPyWin32Dir()

        if pywin_dir is not None:
            for dll_name in "pythoncom", "pywintypes":
                pythoncom_filename = "%s%d%d.dll" % (
                    dll_name,
                    sys.version_info[0],
                    sys.version_info[1],
                )
                pythoncom_dll_path = os.path.join(pywin_dir, pythoncom_filename)

                if os.path.exists(pythoncom_dll_path):
                    yield self.makeDllEntryPoint(
                        pythoncom_dll_path, pythoncom_filename, None
                    )
87f7d22b39a19d15d762c1da63b918c2bf04c6ec
325
https://github.com/Nuitka/Nuitka.git
1,240
def getExtraDlls(self, module): full_name = module.getFullName() # Checking for config, but also allowing fall through for cases that have to # have some code still here. config = self.config.get(full_name) if config: for dll_entry_point in self._handleDllConfigs( config=config, full_name=full_name ): yield dll_entry_point # TODO: This is legacy code, ideally moved to yaml config over time.
34
552
getExtraDlls
60
0
5
18
django/core/serializers/xml_serializer.py
204,760
Refs #33476 -- Reformatted code with Black.
django
15
Python
50
xml_serializer.py
def handle_fk_field(self, obj, field):
    self._start_relational_field(field)
    related_att = getattr(obj, field.get_attname())
    if related_att is not None:
        if self.use_natural_foreign_keys and hasattr(
            field.remote_field.model, "natural_key"
        ):
            related = getattr(obj, field.name)
            # If related object has a natural key, use it
            related = related.natural_key()
            # Iterable natural keys are rolled out as subelements
            for key_value in related:
                self.xml.startElement("natural", {})
                self.xml.characters(str(key_value))
                self.xml.endElement("natural")
        else:
            self.xml.characters(str(related_att))
    else:
        self.xml.addQuickElement("None")
    self.xml.endElement("field")
9c19aff7c7561e3a82978a272ecdaad40dda5c00
133
https://github.com/django/django.git
308
def handle_fk_field(self, obj, field): self._start_relational_field(field) related_att = getattr(obj, field.get_attname()) if related_att is not None: if self.use_natural_foreign_keys and hasattr( field.remote_field.model, "natural_key" ):
22
225
handle_fk_field
24
0
4
9
qutebrowser/browser/webengine/webenginetab.py
321,879
mypy: defer to machinery for conditional: QWebEngineScripts
qutebrowser
13
Python
20
webenginetab.py
def _remove_js(self, name):
    scripts = self._widget.page().scripts()
    if machinery.IS_QT6:
        for script in scripts.find(f'_qute_{name}'):
            scripts.remove(script)
    else:  # Qt 5
        script = scripts.findScript(f'_qute_{name}')
        if not script.isNull():
            scripts.remove(script)
046244b54ddb1e95b63da78789137b7efe7b489e
68
https://github.com/qutebrowser/qutebrowser.git
126
def _remove_js(self, name): scripts = self._widget.page().scripts() if machinery.IS_QT6: for script in scripts.find(f'_qute_{name}'): scripts.remove(script) else:
13
124
_remove_js
29
0
5
8
misc/tools/postprocess-vf.py
162,912
UPM 2048 and opsz axis (#462) - UPM is adjusted to 2048 - Additional opsz VF axis (multi master) added which will eventually replace the separate Display family - New tooling that uses fontmake instead of Inter's own fontbuild toolchain. (The old toolchain is still supported, i.e. `make -f Makefile_v1.make ...`)
inter
11
Python
23
postprocess-vf.py
def clear_subfamily_name(font):
    nameTable = font["name"]
    rmrecs = []
    for rec in nameTable.names:
        if rec.nameID == SUBFAMILY_NAME or rec.nameID == TYPO_SUBFAMILY_NAME:
            rmrecs.append(rec)
    for rec in rmrecs:
        nameTable.removeNames(rec.nameID, rec.platformID, rec.platEncID, rec.langID)
07960766590650e516a75ce6ceba91b68a5fa551
66
https://github.com/rsms/inter.git
43
def clear_subfamily_name(font): nameTable = font["name"] rmrecs = [] for rec in nameTable.names: if rec.nameID == SUBFAMILY_NAME or rec.nameID ==
14
102
clear_subfamily_name
36
0
1
8
test/document_stores/test_opensearch.py
258,421
feat: make `score_script` first class citizen via `knn_engine` param (#3284) * OpenSearchDocumentStore: make score_script accessible via knn_engine * blacken * fix tests * fix format * fix naming of 'score_script' consistently * fix tests * fix test * fix ef_search tests * always validate index * improve clone_embedding_field * fix pylint * reformat * remove port * update tests * set no_implicit_optional = false * fix myp * fix test * refactorings * reformat * fix and refactor tests * better tests * create search_field mappings * remove no_implicit_optional = false * skip validation for custom mapping * format * Apply suggestions from docs code review Co-authored-by: Agnieszka Marzec <97166305+agnieszka-m@users.noreply.github.com> * Apply tougher suggestions from code review * fix messages * fix typos * update tests * Update haystack/document_stores/opensearch.py Co-authored-by: Agnieszka Marzec <97166305+agnieszka-m@users.noreply.github.com> * fix tests * fix ef_search validation * add test for ef_search nmslib * fix assert_not_called Co-authored-by: Agnieszka Marzec <97166305+agnieszka-m@users.noreply.github.com>
haystack
13
Python
33
test_opensearch.py
def test__validate_and_adjust_document_index_wrong_mapping_raises(self, mocked_document_store, existing_index):
    existing_index["mappings"]["properties"]["age"] = {"type": "integer"}
    mocked_document_store.search_fields = ["age"]
    with pytest.raises(
        DocumentStoreError,
        match=f"The index '{self.index_name}' needs the 'text' type for the search_field 'age' to run full text search, but got type 'integer'.",
    ):
        mocked_document_store._validate_and_adjust_document_index(self.index_name)
6c067b2b4f62f11850415a30d75b719aa286adc1
55
https://github.com/deepset-ai/haystack.git
104
def test__validate_and_adjust_document_index_wrong_mapping_raises(self, mocked_document_store, existing_index): existing_index["mappings"]["properties"]["a
11
106
test__validate_and_adjust_document_index_wrong_mapping_raises
25
0
3
7
src/sentry/models/activity.py
90,845
ref(models): `ActivityType` (#34978) ## Objective: We want to separate enum logic from Model logic. This breaks a lot of circular dependencies.
sentry
14
Python
24
activity.py
def save(self, *args, **kwargs):
    created = bool(not self.id)

    super().save(*args, **kwargs)

    if not created:
        return

    # HACK: support Group.num_comments
    if self.type == ActivityType.NOTE.value:
        self.group.update(num_comments=F("num_comments") + 1)
b9f5a910dc841b85f58d46266ec049ae5a7fd305
63
https://github.com/getsentry/sentry.git
81
def save(self, *args, **kwargs): created = bool(not self.id) super().save(*args, **kwargs) if not created: return # HACK: support Group.num_comments if self.type == ActivityType.NOTE.value: self.group.u
16
105
save
99
0
11
32
homeassistant/components/vera/sensor.py
297,714
Use UnitOfTemperature in integrations (t-z) (#84309)
core
13
Python
46
sensor.py
def update(self) -> None:
    super().update()
    if self.vera_device.category == veraApi.CATEGORY_TEMPERATURE_SENSOR:
        self.current_value = self.vera_device.temperature

        vera_temp_units = self.vera_device.vera_controller.temperature_units

        if vera_temp_units == "F":
            self._temperature_units = UnitOfTemperature.FAHRENHEIT
        else:
            self._temperature_units = UnitOfTemperature.CELSIUS

    elif self.vera_device.category == veraApi.CATEGORY_LIGHT_SENSOR:
        self.current_value = self.vera_device.light
    elif self.vera_device.category == veraApi.CATEGORY_UV_SENSOR:
        self.current_value = self.vera_device.light
    elif self.vera_device.category == veraApi.CATEGORY_HUMIDITY_SENSOR:
        self.current_value = self.vera_device.humidity
    elif self.vera_device.category == veraApi.CATEGORY_SCENE_CONTROLLER:
        controller = cast(veraApi.VeraSceneController, self.vera_device)
        value = controller.get_last_scene_id(True)
        time = controller.get_last_scene_time(True)
        if time == self.last_changed_time:
            self.current_value = None
        else:
            self.current_value = value
        self.last_changed_time = time
    elif self.vera_device.category == veraApi.CATEGORY_POWER_METER:
        self.current_value = self.vera_device.power
    elif self.vera_device.is_trippable:
        tripped = self.vera_device.is_tripped
        self.current_value = "Tripped" if tripped else "Not Tripped"
    else:
        self.current_value = "Unknown"
79d3d4ceaed75cae908064f012c2839d336f4aba
238
https://github.com/home-assistant/core.git
416
def update(self) -> None: super().update() if self.vera_device.category == veraApi.CATEGORY_TEMPERATURE_SENSOR: self.current_value = self.vera_device.temperature vera_temp_units = self.vera_device.vera_controller.temperature_units if vera_temp_units == "F": self._temperature_units = UnitOfTemperature.FAHRENHEIT else: self._temperature_units = UnitOfTemperature.CELSIUS elif self.vera_device.category == veraApi.CATEGORY_LIGHT_SENSOR: self.current_value = self.vera_device.light elif self.vera_device.category == veraApi.CATEGORY_UV_SENSOR: self.current_value = self.vera_device.light elif self.vera_device.category == veraApi.CATEGORY_HUMIDITY_SENSOR: self.current_value = self.vera_device.humidity elif self.vera_device.category == veraApi.CATEGORY_SCENE_CONTROLLER: controller = cast(veraApi.VeraSceneController, self.vera_device) value = controller.get_last_scene_id(True) time = controller.get_last_scene_time(True) if time == self.last_changed_time: self.current_value = None else: self.current_value = value self.last_changed_time = time elif self.vera_device.category == veraApi.CATEGORY_POWER_METER: self.current_value = self.vera_device.power elif self.vera_device.is_trippable: tripped = self.vera_device.is_tripped self.current_value = "Tripped" if
35
387
update
94
0
1
34
erpnext/manufacturing/report/bom_operations_time/bom_operations_time.py
66,458
style: format code with black
erpnext
11
Python
52
bom_operations_time.py
def get_columns(filters):
    return [
        {"label": _("BOM ID"), "options": "BOM", "fieldname": "name", "fieldtype": "Link", "width": 220},
        {
            "label": _("Item Code"),
            "options": "Item",
            "fieldname": "item",
            "fieldtype": "Link",
            "width": 150,
        },
        {"label": _("Item Name"), "fieldname": "item_name", "fieldtype": "Data", "width": 110},
        {"label": _("UOM"), "options": "UOM", "fieldname": "uom", "fieldtype": "Link", "width": 100},
        {
            "label": _("Operation"),
            "options": "Operation",
            "fieldname": "operation",
            "fieldtype": "Link",
            "width": 140,
        },
        {
            "label": _("Workstation"),
            "options": "Workstation",
            "fieldname": "workstation",
            "fieldtype": "Link",
            "width": 110,
        },
        {"label": _("Time (In Mins)"), "fieldname": "time_in_mins", "fieldtype": "Float", "width": 120},
        {
            "label": _("Sub-assembly BOM Count"),
            "fieldname": "used_as_subassembly_items",
            "fieldtype": "Int",
            "width": 200,
        },
    ]
494bd9ef78313436f0424b918f200dab8fc7c20b
200
https://github.com/frappe/erpnext.git
60
def get_columns(filters): return [ {"label": _("BOM ID"), "options": "BOM", "fieldname": "name", "fieldtype": "Link", "width": 220}, { "label": _("Item Code"), "options": "Item", "fieldname": "item", "fieldtype": "Link", "width": 150, }, {"label": _("Item Name"), "fieldname": "item_name", "fieldtype": "Data", "width": 110}, {"la
3
399
get_columns
54
0
7
15
homeassistant/components/mqtt/cover.py
291,107
Enforce CoverEntityFeature (#82457) * Enforce CoverEntityFeature * Adjust pylint
core
11
Python
24
cover.py
def supported_features(self) -> CoverEntityFeature:
    supported_features = CoverEntityFeature(0)
    if self._config.get(CONF_COMMAND_TOPIC) is not None:
        if self._config.get(CONF_PAYLOAD_OPEN) is not None:
            supported_features |= CoverEntityFeature.OPEN
        if self._config.get(CONF_PAYLOAD_CLOSE) is not None:
            supported_features |= CoverEntityFeature.CLOSE
        if self._config.get(CONF_PAYLOAD_STOP) is not None:
            supported_features |= CoverEntityFeature.STOP

    if self._config.get(CONF_SET_POSITION_TOPIC) is not None:
        supported_features |= CoverEntityFeature.SET_POSITION

    if self._config.get(CONF_TILT_COMMAND_TOPIC) is not None:
        supported_features |= TILT_FEATURES

    return supported_features
34607d4410a78fc6e41337e2b51600d1fdf39580
117
https://github.com/home-assistant/core.git
196
def supported_features(self) -> CoverEntityFeature: supported_features = CoverEntityFeature(0) if self._config.get(CONF_COMMAND_TOPIC) is not None: if self._config.get(CONF_PAYLOAD_OPEN) is not None: supported_features |= CoverEntityFeature.OPEN if self._config.get(CONF_PAYLOAD_CLOSE) is not None: supported_features |= CoverEntityFeature.CLOSE if self._config.get(CONF_PAYLOAD_STOP) is not None: supported_features |= CoverEntityFeature.STOP if self._config.get(CON
16
186
supported_features
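The property above builds a bitmask by OR-ing feature flags based on which config keys are present. A self-contained sketch of the same pattern with Python's enum.IntFlag; the flag names and config keys here are illustrative stand-ins, not Home Assistant's real constants:

from enum import IntFlag


class DemoFeature(IntFlag):
    OPEN = 1
    CLOSE = 2
    STOP = 4
    SET_POSITION = 8


config = {"payload_open": "OPEN", "payload_close": None, "set_position_topic": "cover/pos"}

features = DemoFeature(0)
if config.get("payload_open") is not None:
    features |= DemoFeature.OPEN
if config.get("payload_close") is not None:
    features |= DemoFeature.CLOSE
if config.get("set_position_topic") is not None:
    features |= DemoFeature.SET_POSITION

assert DemoFeature.OPEN in features
assert DemoFeature.CLOSE not in features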
107
0
1
21
test/mitmproxy/test_http.py
250,541
fix a crash when refreshing headers with a negative unix timestamp, fix #5054 (#5078)
mitmproxy
11
Python
76
test_http.py
def test_refresh(self):
    r = tresp()
    n = time.time()
    r.headers["date"] = email.utils.formatdate(n, usegmt=True)
    pre = r.headers["date"]
    r.refresh(946681202)
    assert pre == r.headers["date"]
    r.refresh(946681262)

    d = email.utils.parsedate_tz(r.headers["date"])
    d = email.utils.mktime_tz(d)

    # Weird that this is not exact...
    assert abs(60 - (d - n)) <= 1

    cookie = "MOO=BAR; Expires=Tue, 08-Mar-2011 00:20:38 GMT; Path=foo.com; Secure"
    r.headers["set-cookie"] = cookie
    r.refresh()
    # Cookie refreshing is tested in test_cookies, we just make sure that it's triggered here.
    assert cookie != r.headers["set-cookie"]

    with mock.patch('mitmproxy.net.http.cookies.refresh_set_cookie_header') as m:
        m.side_effect = ValueError
        r.refresh(n)

    # Test negative unixtime, which raises on at least Windows.
    r.headers["date"] = pre = "Mon, 01 Jan 1601 00:00:00 GMT"
    r.refresh(946681202)
    assert r.headers["date"] == pre
53f60c88b1d7e4d817194f186d9730b32953d1a7
174
https://github.com/mitmproxy/mitmproxy.git
275
def test_refresh(self): r = tresp()
23
301
test_refresh
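A standalone sketch of the Date-header round trip the test above relies on; only the standard library is needed:

import email.utils
import time

now = time.time()
header = email.utils.formatdate(now, usegmt=True)   # RFC 2822 date string in GMT
parsed = email.utils.mktime_tz(email.utils.parsedate_tz(header))

# formatdate keeps whole-second precision, so the round trip is within one second.
assert abs(parsed - now) < 1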
55
0
6
18
utils/angle.py
19,322
Enhance dubins path docs (#664) * Engance dubins path docs * Update dubins_path.rst * fix doc artifact link in CI * wip * wip * wip * Update dubins_path.rst * wip * wip * wip * wip * wip
PythonRobotics
14
Python
30
angle.py
def angle_mod(x, zero_2_2pi=False, degree=False):
    if isinstance(x, float):
        is_float = True
    else:
        is_float = False

    x = np.asarray(x).flatten()
    if degree:
        x = np.deg2rad(x)

    if zero_2_2pi:
        mod_angle = x % (2 * np.pi)
    else:
        mod_angle = (x + np.pi) % (2 * np.pi) - np.pi

    if degree:
        mod_angle = np.rad2deg(mod_angle)

    if is_float:
        return mod_angle.item()
    else:
        return mod_angle
32b545fe7c35b57f280cd9d570f62839886f2e4b
114
https://github.com/AtsushiSakai/PythonRobotics.git
141
def angle_mod(x, zero_2_2pi=False, degree=False): if isinstance(x, float): is_float = True else: is_float = False x = np.asarray(x).flatten() if degree: x = np.deg2rad(x) if zero_2_2pi: mod_angle = x % (2 * np.pi) else: mod_angle = (x + np.pi) % (2 * np.pi) - np.pi if degree: mod_angle = np.rad2deg(mod_angle) if is_float: return mod_angle.item() else
15
185
angle_mod
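A quick numerical check of the wrap-around arithmetic at the heart of angle_mod (shift by pi, take the remainder, shift back), using a few hand-picked angles:

import numpy as np

angles = np.deg2rad(np.array([-270.0, -90.0, 150.0, 350.0]))

wrapped = (angles + np.pi) % (2 * np.pi) - np.pi   # [-pi, pi)
wrapped_0_2pi = angles % (2 * np.pi)               # [0, 2*pi)

print(np.rad2deg(wrapped))        # roughly [  90.  -90.  150.  -10.]
print(np.rad2deg(wrapped_0_2pi))  # roughly [  90.  270.  150.  350.]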
7
0
1
2
tests/custom_object_test.py
122,862
(NFC) Prepare for migration from producing MHLO to producing StableHLO This CL renames occurrences of "mhlo" in: 1) names, 2) tests, 3) prose in order to prepare for the upcoming migration. Unchanged occurrences: 1) Public API that contains "mhlo", e.g. XlaLowering.mhlo and the "mhlo" argument value in Lowering.as_text and Lowering.compiler_ir. 2) Documentation (changelog, JEPs, IR examples, etc). 3) One rare situation where prose says "StableHLO" and "MHLO" in one sentence, so both are necessary to disambiguate. PiperOrigin-RevId: 495771153
jax
7
Python
7
custom_object_test.py
def _sp_data_hlo_lowering(ctx, data_and_indices):
    return [data_and_indices[0]]

mlir.register_lowering(sp_data_p, _sp_data_hlo_lowering)
b8ae8e3fa10f9abe998459fac1513915acee776d
14
https://github.com/google/jax.git
6
def _sp_data_hlo_lowering(ctx, data_and_indices): return [data_and_indices[0]] mlir.register_lowering(sp_data_p, _sp_data_hlo_lowering)
6
33
_sp_data_hlo_lowering
9
0
2
3
airflow/models/mappedoperator.py
45,973
Rename task-mapping trigger to 'expand' (#22106)
airflow
11
Python
9
mappedoperator.py
def __del__(self):
    if not self._expand_called:
        warnings.warn(f"{self!r} was never mapped!")
b1fdcdfe6778574c53bdf6bcbd59090c59605287
18
https://github.com/apache/airflow.git
26
def __del__(self): if not self._expand_called: warnings.warn(f"{self!r} wa
5
36
__del__
94
0
1
39
sklearn/linear_model/tests/test_ransac.py
260,892
MAINT Clean deprecation for 1.2: Ransac losses (#24408)
scikit-learn
12
Python
54
test_ransac.py
def test_ransac_min_n_samples():
    estimator = LinearRegression()
    ransac_estimator1 = RANSACRegressor(
        estimator, min_samples=2, residual_threshold=5, random_state=0
    )
    ransac_estimator2 = RANSACRegressor(
        estimator,
        min_samples=2.0 / X.shape[0],
        residual_threshold=5,
        random_state=0,
    )
    ransac_estimator5 = RANSACRegressor(
        estimator, min_samples=2, residual_threshold=5, random_state=0
    )
    ransac_estimator6 = RANSACRegressor(estimator, residual_threshold=5, random_state=0)
    ransac_estimator7 = RANSACRegressor(
        estimator, min_samples=X.shape[0] + 1, residual_threshold=5, random_state=0
    )
    # GH #19390
    ransac_estimator8 = RANSACRegressor(
        Ridge(), min_samples=None, residual_threshold=5, random_state=0
    )

    ransac_estimator1.fit(X, y)
    ransac_estimator2.fit(X, y)
    ransac_estimator5.fit(X, y)
    ransac_estimator6.fit(X, y)

    assert_array_almost_equal(
        ransac_estimator1.predict(X), ransac_estimator2.predict(X)
    )
    assert_array_almost_equal(
        ransac_estimator1.predict(X), ransac_estimator5.predict(X)
    )
    assert_array_almost_equal(
        ransac_estimator1.predict(X), ransac_estimator6.predict(X)
    )

    with pytest.raises(ValueError):
        ransac_estimator7.fit(X, y)

    err_msg = "`min_samples` needs to be explicitly set"
    with pytest.raises(ValueError, match=err_msg):
        ransac_estimator8.fit(X, y)
f8991210f022270d640a302820ed4b9ec58b42c1
251
https://github.com/scikit-learn/scikit-learn.git
262
def test_ransac_min_n_samples(): estimator = LinearRegression() ransac_estimator1 = RANSACRegressor( estimator, min_samples=2, residual_threshold=5, random_state=0 ) ransac_estimator2 = RANSACRegressor( estimator, min_samples=2.0 / X.shape[0], residual_threshold=5, random_state=0, ) ransac_estimator5 = RANSACRegressor( estimator, min_samples=2, residual_threshold=5, random_state=0 ) ransac_estimator6 = RANSACRegressor(estimator, residual_threshold=5, random_state=0) ransac_estimator7 = RANSACRegressor( estimator, min_samples=X.shape[0] +
25
377
test_ransac_min_n_samples
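A minimal end-to-end RANSACRegressor run on synthetic data, sketching how the min_samples and residual_threshold arguments exercised by the test above behave in practice; the data and thresholds here are arbitrary:

import numpy as np
from sklearn.linear_model import LinearRegression, RANSACRegressor

rng = np.random.RandomState(0)
X_demo = rng.uniform(size=(100, 1))
y_demo = 3.0 * X_demo.ravel() + 1.0
y_demo[:10] += 50.0  # a handful of gross outliers

ransac = RANSACRegressor(
    LinearRegression(), min_samples=2, residual_threshold=5, random_state=0
)
ransac.fit(X_demo, y_demo)

print(ransac.inlier_mask_.sum())        # expect the 90 clean points to be kept as inliers
print(ransac.predict([[0.0], [1.0]]))   # close to the underlying line y = 3x + 1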
32
0
2
8
test/test_prototype_transforms.py
193,029
[proto] Ported all transforms to the new API (#6305) * [proto] Added few transforms tests, part 1 (#6262) * Added supported/unsupported data checks in the tests for cutmix/mixup * Added RandomRotation, RandomAffine transforms tests * Added tests for RandomZoomOut, Pad * Update test_prototype_transforms.py * Added RandomCrop transform and tests (#6271) * [proto] Added GaussianBlur transform and tests (#6273) * Added GaussianBlur transform and tests * Fixing code format * Copied correctness test * [proto] Added random color transforms and tests (#6275) * Added random color transforms and tests * Disable smoke test for RandomSolarize, RandomAdjustSharpness * Added RandomPerspective and tests (#6284) - replaced real image creation by mocks for other tests * Added more functional tests (#6285) * [proto] Added elastic transform and tests (#6295) * WIP [proto] Added functional elastic transform with tests * Added more functional tests * WIP on elastic op * Added elastic transform and tests * Added tests * Added tests for ElasticTransform * Try to format code as in https://github.com/pytorch/vision/pull/5106 * Fixed bug in affine get_params test * Implemented RandomErase on PIL input as fallback to tensors (#6309) Added tests * Added image_size computation for BoundingBox.rotate if expand (#6319) * Added image_size computation for BoundingBox.rotate if expand * Added tests * Added erase_image_pil and eager/jit erase_image_tensor test (#6320) * Updates according to the review Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
vision
12
Python
21
test_prototype_transforms.py
def test__get_params(self, sigma):
    transform = transforms.GaussianBlur(3, sigma=sigma)
    params = transform._get_params(None)

    if isinstance(sigma, float):
        assert params["sigma"][0] == params["sigma"][1] == 10
    else:
        assert sigma[0] <= params["sigma"][0] <= sigma[1]
        assert sigma[0] <= params["sigma"][1] <= sigma[1]
77c8c91cad88a1e48da856ecb7957f4691244e21
91
https://github.com/pytorch/vision.git
92
def test__get_params(self, sigma): transform
10
138
test__get_params